Up and down, again and again

Post Syndicated from Нева Мичева original https://toest.bg/nagore-nadolu-i-pak-i-pak/

Lately, a lot of round-number birthdays have been coming up around me, and I've noticed an interesting tendency: inevitably, some well-meaning friend or relative turns up to commiserate with the birthday person over their entry into a new life crisis. It seems puberty and the midlife crisis aren't the only moments suited to "going wild." To them we now add the crises of one's 20s, 30s, and 40s, the quarter-life crisis, the seventh-grader's crisis, and the "terrible threes" (even the little ones don't get off easy); in short, life becomes one crisis after another. I wonder why we inflict these extra crises on ourselves, and I'm curious to hear your thoughts on the question.

Warm regards,
Elitsa

Oh, I immediately think of the identity crisis too, which visits us in every decade (not to say every year, month, even day): personal, national, generational. Of the crisis of trust, of conscience, of self-confidence, each more or less codified, but all long familiar as a genre. I think of the economic and energy crises of recent decades, of a Europe periodically reeling from crises of democracy, and of the current government crisis in Bulgaria. Of the global health crisis we are still in. And of the recent moment when The Guardian decided to retire the phrase "climate change" as unfit for the hardened circumstances and to put "climate crisis" into use.

If we open the press right now, an avalanche of world crises will pour out: from acute emergencies, like ubiquitous plastic pollution and the scattered but unceasing humanitarian disasters, to more abstract trends, like the crisis of attention or of masculinity. Needless to say, dear Eli, I too am in a minor crisis of the soul over the enormous delay with which I'm answering you, much as I liked your question. "Crisis after crisis" doesn't strike me as an exaggeration at all for the constant churn within and around a person. And that is exactly why I understand why marking certain ages as necessarily critical seems excessive to you: because it's avoidable. The less needless stress, the better.

We humans have a natural inclination toward drama in our descriptions: since we grasp and remember everything best in stories, we like to thicken their outlines to make them more distinct, more memorable. Calling our understandable discomfort, or the turbulence of certain stages of life, by a name meant for far more terrible upheavals probably helps us. To each, whatever they need: to feel part of something bigger, or at least less lonely; to give a name to the unnameable thing pressing on them; to excuse themselves with force majeure; to catalog their troubles faster.

Hardly anyone would dispute that puberty is an ordeal; that at a certain phase of personal development, accumulated fatigue can trigger a breakdown; that twenty-year-olds and fifty-year-olds have different worries and needs. But hardly anyone would be all that surprised, either, if their kid takes the hardships of puberty in stride; if the breakdowns never come, or look nothing like anyone else's; if the twenty-year-olds around turn out more responsible and more capable than the fifty-year-olds (ask someone who's been there), and the fifty-year-olds livelier and more playful than the twenty-year-olds. That's why I really dislike the preemptive commiseration (and I sense you're not fond of it either). It's pointless and somehow gloating. Not to mention that it primes you for defeat.

A turning point. The first meaning of "crisis" is medical: "a crucial moment or period in the course of a disease, on which the outcome of the disease depends." That is, a fork where one path leads to recovery and the other to oblivion. Such extreme alternatives are no longer common in our latitudes, so there's no reason to use the word for every little thing. Besides, life and its characteristic steep stretches don't deserve to be treated as an illness. On the other hand, emotion has its rights too. It would seem that in a world where some people literally torture others, to say, for instance, that doubt is "cruelly tormenting you" is indecently overblown. And yet there is a dimension, the emotional one, in which it may be the only true thing to say.

If we look closely, many radical changes (moving in with someone, changing your job, your home, or your country, giving up the familiar for the more desired, creating something) bear all the hallmarks, and many of the consequences, of a crisis. Yet we call them something else. "Crisis" seems to absorb all our resistance and antipathy toward change made without our consent (and what change is more imposed than that of advancing age?).

Change. Years ago, word spread that the Chinese word for "crisis" is written with two characters, one meaning "danger" and the other "opportunity" (in the sense of "a lucky break," "a chance"). Being supremely comforting, the claim was repeated ad nauseam: every cloud has a silver lining, now with an exotic halo. Then it turned out not to be quite so: the combination of the two characters simply means "looming risk," and for Chinese speakers it is not encouraging at all. But you know what? The falsity of the popular claim ("the word is composed of this and that") does not cancel the truth of the conclusion ("with the new come both dangers and chances"). Sometimes the bad really does turn out for the good. Or isn't bad at all.

A person has to cope with so many variables over a lifetime that, willingly or not, we strive for certainty, for equilibrium, for fewer forks, knots, and even novelties, anything for stability. Change portends chaos, which is, I think, the same thing you call "going wild": a period of transformation into something else, of uncertainty and impossible control, regardless of whether the wildness comes from outside or from within. It's obvious why we don't enter crises gladly. But nothing says we won't come out of them with gains.

A reckoning. It's no accident that "critical circumstances" sounds akin to "critical age" or "critical thinking." Crisis and criticism both derive from Greek (what doesn't), from a word encompassing meanings like "separation," "judgment," and "decision." I mention it only because I like to think of criticism as putting something into crisis in order to test its quality. Otherwise, etymological excavations lead to all sorts of kinships that mean nothing outside linguistics. (Crisis and criticism, among other things, are offspring of a common Indo-European root from which "criminal," "discrimination," "excrement," "secret," and "concert" also spring…)

I have fallen into a crisis over my age exactly once, when I was 27. I felt like nobody's (the 90s were not a decent time to find one's social footing), a failure at the main things expected of a young woman in the early post-socialist years, and, on the whole, finished. A whole new millennium was beginning, and I was nowhere, with no idea where to turn or what to do so that something more meaningful might take shape around me. For lack of anything better, I kept feeling my way forward, and a little later I began to sense that perhaps it wasn't for nothing.

A need. In 1980, John Lennon wrote the lullaby "Beautiful Boy" for his son Sean. "I can hardly wait," it goes, "to see you come of age. But I guess we'll both just have to be patient, 'cause it's a long way to go, a hard row to hoe. But until then, before you cross the street, take my hand. Life is what happens to you while you're busy making other plans." That last phrase, quoted a million times with Lennon's name under it, is not his. And he did not live to see his son grow up, despite his infectious optimism ("every day, in every way, it's getting better and better," he sings in the same song).

And yet, as with the misconstrued Chinese crisis, things here are true and untrue at once. There is recovery, and there is death; there is hope, and there is suffering; there are bad outcomes of invented crises and good opportunities alongside real hardships; every day things get both better and worse… A turning point, a change, a reckoning, a need: these words recur in the dictionary entries around "crisis." Whatever it is, however wide the range of experiences it may apply to, it too falls under "other plans."

Cover photo: Carmen Maura in Women on the Verge of a Nervous Breakdown, dir. Pedro Almodóvar, 1988

"Talk to Neva" is a column for letters from readers. I have always dreamed of keeping one, of having an address where strangers could write to tell me something important about themselves for us to discuss, the way a conversation gets going on a train. An incident to mull over together, a puzzlement to pick apart a little further, an observation to which I might add another. I'm sure that just as I have always wanted to answer letters, there are people who have always wanted to write them. You are welcome.

Source

On Second Reading: "Maybe Esther" by Katja Petrowskaja

Post Syndicated from Севда Семер original https://toest.bg/na-vtoro-chetene-mozhe-bi-ester/

None of us reads only the newest books. So why is it only these that get written about? "On Second Reading" is a column in which we open the lists of books published at least a year ago, read them, and recommend our favorites among them. The column is part of the Toest Readers' Club partner program. The choice of titles, however, belongs solely to the authors, Stefan Ivanov and Sevda Semer, who would recommend these books to you even if it were possible to stroll through the bookstore with them once every two weeks.

"Maybe Esther" by Katja Petrowskaja

translated from German by Milen Milev, Paradox Publishing, 2016

Why do you need this person, why are you digging up his ashes? And what is it like to be connected to him?

These questions are addressed to Katja Petrowskaja in a letter she includes in the book. Born in Kyiv to a family of Jewish descent, the author writes the story of her family in German, and she seems to be searching precisely for the answers to these two questions.

Several times after I had started "Maybe Esther," people asked me what I was reading. And I discovered how hard it is to answer. When I said what the book was about, people first assumed it was an autobiography. I would explain that it doesn't quite feel that way, because the stories belong not only to the author and her family but also to people outside it, that is, to the larger context, which includes the world wars. So it's historical, then? That's not quite right either; I would clarify that the book is divided into separate stories, a bit like a collection of short fiction. There are pieces from all kinds of sources, like the letter quoted above, and there are also stream-of-consciousness passages and even a dream. Essays, then? Maybe something like that, I would say, while still searching for a better word. Then I checked how "Maybe Esther" is described in Bulgarian: most often simply as "a book." It is hard to pin down the genre of something that is more painful than history, more removed than a memoir, more personal than an essay.

In an interview with Kapka Kassabova, Marin Bodakov compares her book "To the Lake" with "Maybe Esther" and with Maria Stepanova's "In Memory of Memory," all of them books about biographical and autobiographical memory. Kapka Kassabova speaks of these new quests:

This reworking is a kind of alchemy for which we are collectively ready. When the small story and the big one, the personal and the collective, the raw stuff of life and the artistic flow into one another, the result, in the best case, is a golden alloy.

Reading Katja Petrowskaja really does feel like witnessing an alchemical process, complete with the turning of dust into gold. What do we see when we look back? Ourselves? The absences? The strength of roots? All of it, provided we are willing to see it. In "The Drama of the Gifted Child," the therapist Alice Miller writes about the necessity of knowing the history of our own childhood, of understanding the truth of what we lived through. She writes about the need to look past the commonly accepted idea that childhood is a happy chapter of life, and to see where it was actually hard for us, where we were left misunderstood, alone, or hurt. This is necessary in order to truly grow up, and to manage to be our true selves.

Katja Petrowskaja goes through a similar process, but her task is even larger and more painful: not only her own person but her entire family goes under the magnifying glass. Just as we accept that childhood is a happy period, so certain family traumas are simply known to be heavy, and that is that. But what does it mean to see your family not as a number of those killed or those who survived the Holocaust, and a number of fates altered by war and revolution, but to dig into the stories? To build at least an emotional connection with those who, for one reason or another, are absent from the family landscape; to see their faces; to learn their names.

In the search, some pieces remain missing. The title comes from the name of the author's great-grandmother, who may have been called Esther, but who was certainly the only one left behind in Kyiv when the family fled the Nazis. She becomes their victim: she answers the summons for everyone of Jewish descent to report to a designated place. The author's father cannot recall her name with certainty, not because he feels her to be too distant or because she doesn't matter, but, on the contrary, out of closeness: he himself called her "Grandma" and never heard her called anything other than "Grandma" and "Mama."

The author tries to pull away and draw near at the same time. The book's title holds exactly that: the attempt to see people as history, and the acceptance that some parts will remain blurred, because they are too close to be brought into focus.

In our latitudes, many of us know firsthand the custom of not speaking about the family outside the family. Which is exactly why these stories need to be told. At one point, the author mentions that her mother explicitly asks her not to put something in the book. Both the specific fact and the request to conceal it are shown. The whole book is constructed this way: the concentrated effort to unearth forgotten or hard-to-reach memories is paired with a description of the difficulties. The author also tells how she forgot everything from the moment she entered Auschwitz. An absence of memories can speak too.

Language, and its absence, is part of the story not only symbolically but literally. Generation after generation, the family taught deaf-mute children. Letters from grateful parents are included, along with the stories of these teachers in the family. But this also becomes a metaphor for the author's own identity: "Our Jewishness remained deaf-mute for me, and deaf-muteness Jewish." And of her move to Germany, the author says:

I threw myself into German as if the struggle against muteness were continuing, for German, nemetskiy, is in Russian the language of the mute, nemoy nemets, the German who cannot speak at all. This German was like a divining rod or a dowsing twig in my search for my kin, who for centuries had taught deaf-mute children to speak, as if I had to learn the mute German in order to be able to speak, and that desire was beyond my understanding.

It is, of course, especially hard to read stories of war and death in these lands precisely today. But it is also especially important. The book holds many questions without answers: not because no answer was given, but because an answer may not exist. The author asks:

Does a place remain the same place if, at that place, one kills, then buries, blows up, digs up, burns, grinds, scatters, keeps silent, plants, lies, dumps garbage, floods, pours concrete, keeps silent again, cordons it off, arrests mourners, later erects ten monuments, commemorates one's own victims once a year, or else considers oneself to have nothing to do with any of it?

Considering ourselves to have nothing to do with it still strikes me as one of the most terrible things. "Maybe Esther" gave me back a sense of the ties (familial, historical, personal, emotional) that hold us all together. For me it was a reminder of what we have in common, and of our responsibility toward it.

Cover image: Collage based on the cover of "Maybe Esther" (Paradox Publishing; artists Hristo Raychev and Rumen Barosov) and a photo by Sincerely Media / Unsplash
Active donors to Toest receive a permanent 20% discount off the cover price of all titles in Paradox's catalog, as well as those of several other Bulgarian publishers, as part of the Toest Readers' Club partner program. For more information, see toest.bg/club.

Source

Metasploit Weekly Wrap-Up

Post Syndicated from Grant Willcox original https://blog.rapid7.com/2022/07/01/metasploit-weekly-wrap-up-164/

SAMR Auxiliary Module

Metasploit Weekly Wrap-Up

A new SAMR auxiliary module has been added that allows users to add, look up, and delete computer accounts in an AD domain. This should be useful for pentesters on engagements who need to create an AD computer account to gain an initial foothold in the domain for lateral movement attacks, or who need this functionality as an attack primitive.

Note that a standard domain user can only add a limited number of computer accounts (the machine account quota), so be aware that you may get STATUS_DS_MACHINE_ACCOUNT_QUOTA_EXCEEDED error messages if you run this repeatedly. It should also be noted that while a standard user can create a computer account, deleting that account requires additional privileges.

A Pesky Table Bug Gets Squashed

A well-known bug in Rex-Tables that occurred when rendering tables containing unsupported characters has been fixed in Rex-Text 0.2.38, which has now been pulled into the framework. This should resolve a number of issues reported over the last year, such as https://github.com/rapid7/metasploit-framework/issues/15833, https://github.com/rapid7/metasploit-framework/issues/14955, and https://github.com/rapid7/metasploit-framework/issues/15044. It should also improve the experience with some of the new LDAP work we have been doing lately, giving users a smoother experience once that releases.

PHP Mailer Argument Injection Module Improvements

As a final point of note, community contributor erikbomb has improved the PHP Mailer Argument Injection exploit targeting CVE-2016-10033 and CVE-2016-10045: it now supports changing the names of the fields for the name, email, and message objects. This allows the exploit to work in additional scenarios where these settings need to be altered for it to run successfully. Many thanks to erikbomb for these enhancements!

New module content (1)

  • SAMR Computer Management by JaGoTu and Spencer McIntyre – This adds an auxiliary module that can be used to add, look up, and delete computer accounts in an Active Directory domain. The computer account can offer a foothold in the domain for lateral movement, or serve as a common attack primitive.

Enhancements and features (1)

  • #16721 from erikbomb – This updates the PHP Mailer Argument Injection exploit to allow setting the names of certain fields via advanced options. These configuration options then allow the exploit to work in additional scenarios.

Bugs fixed (2)

  • #16722 from bcoles – Fixes module metadata for stability and reliability.
  • #16729 from gwillcox-r7 – Fixes a crash in Metasploit’s console when trying to render tables which contain unsupported characters.

Get it

As always, you can update to the latest Metasploit Framework with msfupdate, and you can get more details on the changes since the last blog post from GitHub:

If you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest.
To install fresh without using git, you can use the open-source-only Nightly Installers or the
binary installers (which also include the commercial edition).

Use AWS CloudWatch as a destination for Amazon Redshift Audit logs

Post Syndicated from Nita Shah original https://aws.amazon.com/blogs/big-data/using-aws-cloudwatch-as-destination-for-amazon-redshift-audit-logs/

Amazon Redshift is a fast, scalable, secure, and fully managed cloud data warehouse that makes it simple and cost-effective to analyze all of your data using standard SQL. Amazon Redshift has comprehensive security capabilities to satisfy the most demanding requirements. To help you monitor the database for security and troubleshooting purposes, Amazon Redshift logs information about connections and user activities in your database. This process is called database auditing.

Amazon Redshift audit logging is useful for troubleshooting, monitoring, and security purposes, making it possible to identify suspicious queries by checking the connection and user logs to see who is connecting to the database. It provides information such as the IP address of the user's computer, the type of authentication used, and the timestamp of the request. Audit logs make it easy to identify who modified the data. Amazon Redshift logs all of the SQL operations, including connection attempts, queries, and changes to your data warehouse. These logs can be accessed via SQL queries against system tables, saved to a secure Amazon Simple Storage Service (Amazon S3) location, or exported to Amazon CloudWatch. You can view your Amazon Redshift cluster's operational metrics on the Amazon Redshift console, use CloudWatch, and query Amazon Redshift system tables directly from your cluster.

This post will walk you through the process of configuring CloudWatch as an audit log destination. It will also show that with enhanced Amazon Redshift audit logging, the latency of log delivery to either Amazon S3 or CloudWatch is reduced to a few minutes. You can enable audit logging to Amazon CloudWatch via the AWS Management Console, the AWS CLI, or the Amazon Redshift API.

Solution overview

Amazon Redshift logs information to two locations: system tables and log files.

  1. System tables: Amazon Redshift logs data to system tables automatically, and history data is available for two to five days based on log usage and available disk space. To extend the log data retention period in system tables, use the Amazon Redshift system object persistence utility from AWS Labs on GitHub. Analyzing logs through system tables requires Amazon Redshift database access and compute resources.
  2. Log files: Audit logging to CloudWatch or to Amazon S3 is an optional process. When you turn on logging on your cluster, you can choose to export audit logs to Amazon CloudWatch or Amazon S3. Once enabled, logging captures data from that point forward, and each logging update is a continuation of the previous one. Access to audit log files doesn't require access to the Amazon Redshift database, and reviewing logs stored in Amazon S3 doesn't consume database compute resources. By default, audit log files are stored indefinitely in CloudWatch Logs or Amazon S3.
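If you prefer to script this rather than use the console, the EnableLogging API call takes the destination type and the log types to export. Below is a minimal sketch, assuming boto3-style parameter names (ClusterIdentifier, LogDestinationType, LogExports) and a hypothetical cluster name; the request is only built here, not sent to AWS.

```python
def cloudwatch_logging_params(cluster_id, log_exports=None):
    """Build an EnableLogging request body for a CloudWatch log destination.

    The parameter names follow the Amazon Redshift EnableLogging API as
    exposed by boto3; the cluster identifier passed in is hypothetical.
    """
    if log_exports is None:
        # The three audit log export types: connection, user, and user activity.
        log_exports = ["connectionlog", "userlog", "useractivitylog"]
    return {
        "ClusterIdentifier": cluster_id,
        "LogDestinationType": "cloudwatch",
        "LogExports": log_exports,
    }

params = cloudwatch_logging_params("my-redshift-cluster")
print(params["LogDestinationType"])  # -> cloudwatch

# With boto3 available and credentials configured, this would be sent as:
#   boto3.client("redshift").enable_logging(**params)
```

Setting `LogDestinationType` to `"s3"` (with a `BucketName`) instead would route the same exports to Amazon S3.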

Amazon Redshift logs information in the following log files:

  • Connection log – Provides information for monitoring users connecting to the database, along with related connection details, such as the user's IP address.
  • User log – Logs information about changes to database user definitions.
  • User activity log – Tracks the types of queries that both users and the system run in the database. It's useful primarily for troubleshooting purposes.
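To give a feel for what these files contain, here is a minimal sketch of splitting one pipe-delimited connection-log entry into named fields. The sample line and the field order are illustrative assumptions, not the exact Redshift log layout; consult the audit logging documentation for the real schema.

```python
# Assumed (simplified) field order for a connection-log entry; the actual
# Redshift connection log contains more fields than shown here.
FIELDS = ["event", "recordtime", "remotehost", "remoteport",
          "pid", "dbname", "username"]

def parse_connection_log_line(line):
    """Split one pipe-delimited log line into a field-name -> value dict."""
    values = [part.strip() for part in line.split("|")]
    return dict(zip(FIELDS, values))

# Hypothetical sample entry, for illustration only.
sample = ("authenticated |Wed, 29 Jun 2022 10:15:00 +0000 "
          "|10.0.0.12 |41234 |12345 |dev |adminuser")
entry = parse_connection_log_line(sample)
print(entry["username"])    # -> adminuser
print(entry["remotehost"])  # -> 10.0.0.12
```

In practice you would rarely parse these by hand; CloudWatch Logs Insights can query the exported log groups directly, which is one of the benefits discussed below.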

Benefits of enhanced audit logging

For a better customer experience, the existing architecture of the audit logging solution has been improved to make audit logging more consistent across AWS services. This enhancement reduces log export latency from hours to minutes and provides fine-grained access control. Enhanced audit logging improves the robustness of the existing delivery mechanism, thus reducing the risk of data loss, and lets you export logs either to Amazon S3 or to CloudWatch.

The following section will show you how to configure audit logging using CloudWatch and its benefits.

Setting up CloudWatch as a log destination

Using CloudWatch to view logs is a recommended alternative to storing log files in Amazon S3. It's simple to configure and may suit your monitoring requirements, especially if you already use it to monitor other services and applications.

To set up CloudWatch as your log destination, complete the following steps:

  1. On the Amazon Redshift console, choose Clusters in the navigation pane.
    This page lists the clusters in your account in the current Region, along with a subset of each cluster's properties.
  2. Choose the cluster for which you want to configure CloudWatch logs.
  3. Select Properties to edit audit logging.
  4. Choose Turn on under Configure audit logging, and CloudWatch under Log export type.
  5. Select Save changes.

Analyzing audit logs in near real time

To run SQL commands, we use Amazon Redshift Query Editor v2, a web-based tool that you can use to explore, analyze, share, and collaborate on data stored in Amazon Redshift. However, you can use any client tool of your choice to run SQL queries.

Now we'll run some simple SQL statements and analyze the logs in CloudWatch in near real time.

  1. Run test SQL statements to create and drop a user.
  2. On the AWS Management Console, choose CloudWatch under Services, and then select Log groups from the right panel.
  3. Select the userlog log group to see the user log entries created in near real time for the test user that we just created and dropped.

Benefits of using CloudWatch as a log destination

  • It’s easy to configure, as it doesn’t require you to modify bucket policies.
  • It’s easy to view logs and search through logs for specific errors, patterns, fields, etc.
  • You can have a centralized log solution across all AWS services.
  • No need to build a custom solution such as AWS Lambda or Amazon Athena to analyze the logs.
  • Logs will appear in near real-time.
  • It has improved log latency from hours to just minutes.
  • By default, log groups are encrypted in CloudWatch and you also have the option to use your own custom key.
  • Fine-grained configuration of which log types to export, based on your specific auditing requirements.
  • It lets you export log groups’ logs to Amazon S3 if needed.

Setting up Amazon S3 as a log destination

Although using CloudWatch as a log destination is the recommended approach, you also have the option to use Amazon S3. When the log destination is set to an Amazon S3 location, enhanced audit logging checks for logs every 15 minutes and exports them to Amazon S3. You can configure audit logging to Amazon S3 from the console or through the AWS CLI.

Once you save the changes, the bucket policy is set as follows, using the Amazon Redshift service principal.

For additional details please refer to Amazon Redshift audit logging.

For enabling logging through AWS CLI – db-auditing-cli-api.

Cost

Exporting logs to Amazon S3 can be more cost-efficient. However, considering all the benefits CloudWatch provides for search, real-time access to data, building dashboards from search results, and so on, it can be a better fit for those who perform log analysis.


Best practices

Amazon Redshift uses the AWS security frameworks to implement industry-leading security in the areas of authentication, access control, auditing, logging, compliance, data protection, and network security. For more information, refer to Security in Amazon Redshift.

Audit logging to CloudWatch or to Amazon S3 is an optional process, but to have the complete picture of your Amazon Redshift usage, we always recommend enabling audit logging, particularly in cases where there are compliance requirements.

Log data is stored indefinitely in CloudWatch Logs or Amazon S3 by default. This may incur high, unexpected costs. We recommend that you configure how long to store log data in a log group or Amazon S3 to balance costs with compliance retention requirements. Apply the right compression to reduce the log file size.

Conclusion

This post demonstrated how to get near real-time Amazon Redshift logs using CloudWatch as a log destination using enhanced audit logging. This new functionality helps make Amazon Redshift Audit logging easier than ever, without the need to implement a custom solution to analyze logs. We also demonstrated how the new enhanced audit logging reduces log latency significantly on Amazon S3 with fine-grained access control compared to the previous version of audit logging.

Unauthorized access is a serious problem for most systems. As an administrator, you can start exporting logs to detect and investigate issues such as system failures, outages, corruption of information, and other security risks before they recur.


About the Authors

Nita Shah is an Analytics Specialist Solutions Architect at AWS based out of New York. She has been building data warehouse solutions for over 20 years and specializes in Amazon Redshift. She is focused on helping customers design and build enterprise-scale well-architected analytics and decision support platforms.

Evgenii Rublev is a Software Development Engineer on the Amazon Redshift team. He has worked on building end-to-end applications for over 10 years. He is passionate about innovations in building high-availability and high-performance applications to drive a better customer experience. Outside of work, Evgenii enjoys spending time with his family, traveling, and reading books.

Yanzhu Ji is a Product manager on the Amazon Redshift team. She worked on Amazon Redshift team as a Software Engineer before becoming a Product Manager, she has rich experience of how the customer facing Amazon Redshift features are built from planning to launching, and always treat customers’ requirements as first priority. In personal life, Yanzhu likes painting, photography and playing tennis.

Ryan Liddle is a Software Development Engineer on the Amazon Redshift team. His current focus is on delivering new features and behind the scenes improvements to best service Amazon Redshift customers. On the weekend he enjoys reading, exploring new running trails and discovering local restaurants.

Understanding the lifecycle of Amazon EC2 Dedicated Hosts

Post Syndicated from Sheila Busser original https://aws.amazon.com/blogs/compute/understanding-the-lifecycle-of-amazon-ec2-dedicated-hosts/

This post is written by Benjamin Meyer, Sr. Solutions Architect, and Pascal Vogel, Associate Solutions Architect.

Amazon Elastic Compute Cloud (Amazon EC2) Dedicated Hosts enable you to run software on dedicated physical servers. This lets you comply with corporate compliance requirements or per-socket, per-core, or per-VM licensing agreements by vendors, such as Microsoft, Oracle, and Red Hat. Dedicated Hosts are also required to run Amazon EC2 Mac Instances.

The lifecycles and states of Amazon EC2 Dedicated Hosts and Amazon EC2 instances are closely connected and dependent on each other. To operate Dedicated Hosts correctly and consistently, it is critical to understand the interplay between Dedicated Hosts and EC2 Instances. In this post, you’ll learn how EC2 instances are reliant on their (dedicated) hosts. We’ll also dive deep into their respective lifecycles, the connection points of these lifecycles, and the resulting considerations.

What is an EC2 instance?

An EC2 instance is a virtual server running on top of a physical Amazon EC2 host. EC2 instances are launched using a preconfigured template called Amazon Machine Image (AMI), which packages the information required to launch an instance. EC2 instances come in various CPU, memory, storage and GPU configurations, known as instance types, to enable you to choose the right instance for your workload. The process of finding the right instance size is known as right sizing. Amazon EC2 builds on the AWS Nitro System, which is a combination of dedicated hardware and the lightweight Nitro hypervisor. The EC2 instances that you launch in your AWS Management Console via Launch Instances are launched on AWS-controlled physical hosts.

What is an Amazon EC2 Bare Metal instance?

Bare Metal instances are instances that aren’t using the Nitro hypervisor. Bare Metal instances provide direct access to physical server hardware. Therefore, they let you run legacy workloads that don’t support a virtual environment, license-restricted business-critical applications, or even your own hypervisor. Workloads on Bare Metal instances continue to utilize AWS Cloud features, such as Amazon Elastic Block Store (Amazon EBS), Elastic Load Balancing (ELB), and Amazon Virtual Private Cloud (Amazon VPC).

What is an Amazon EC2 Dedicated Host?

An Amazon EC2 Dedicated Host is a physical server fully dedicated to a single customer. With visibility of sockets and physical cores of the Dedicated Host, you can address corporate compliance requirements, such as per-socket, per-core, or per-VM software licensing agreements.

You can launch EC2 instances onto a Dedicated Host. Instance families such as M5, C5, R5, M5n, C5n, and R5n allow different instance sizes, such as 4xlarge and 8xlarge, to be launched on the same host. Other instance families only support a homogeneous set of a single instance size per host. For more details, see Dedicated Host instance capacity.

As an example, let’s look at an M6i Dedicated Host. M6i Dedicated Hosts have 2 sockets and 64 physical cores. If you allocate a M6i Dedicated Host, then you can specify what instance type you’d like to support for allocation. In this case, possible instance sizes are:

  • large
  • xlarge
  • 2xlarge
  • 4xlarge
  • 8xlarge
  • 12xlarge
  • 16xlarge
  • 24xlarge
  • 32xlarge
  • metal

The number of instances that you can launch on a single M6i Dedicated Host depends on the selected instance size. For example:

  • In the case of xlarge (4 vCPUs), a maximum of 32 m6i.xlarge instances can be scheduled on this Dedicated Host.
  • In the case of 8xlarge (32 vCPUs), a maximum of 4 m6i.8xlarge instances can be scheduled on this Dedicated Host.
  • In the case of metal (128 vCPUs), a maximum of 1 m6i.metal instance can be scheduled on this Dedicated Host.
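The capacity arithmetic above can be sketched in a few lines of Python. The vCPU counts for xlarge, 8xlarge, and metal are taken from the examples in this section; the mapping for the remaining sizes follows the usual 4-vCPUs-per-xlarge scaling and should be treated as illustrative.

```python
# Illustrative capacity math for an M6i Dedicated Host (128 vCPUs total,
# per the examples above). Instance sizes map to vCPU counts.
M6I_HOST_VCPUS = 128

SIZE_VCPUS = {
    "large": 2,
    "xlarge": 4,
    "2xlarge": 8,
    "4xlarge": 16,
    "8xlarge": 32,
    "12xlarge": 48,
    "16xlarge": 64,
    "24xlarge": 96,
    "32xlarge": 128,
    "metal": 128,
}

def max_instances(size: str) -> int:
    """Maximum number of same-size instances that fit on the host."""
    return M6I_HOST_VCPUS // SIZE_VCPUS[size]

print(max_instances("xlarge"))   # 32
print(max_instances("8xlarge"))  # 4
print(max_instances("metal"))    # 1
```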

When launching an EC2 instance on a Dedicated Host, you’re billed for the Dedicated Host but not for the instance. The cost for Amazon EBS volumes is the same as in the case of regular EC2 instances.

Exemplary M6i Dedicated Host instance selections: 32 m6i.xlarge, four m6i.8xlarge, or one m6i.metal

Understanding the EC2 instance lifecycle

Amazon EC2 instance lifecycle states and transitions

Throughout its lifecycle, an EC2 instance transitions through different states, starting with its launch and ending with its termination. Upon Launch, an EC2 instance enters the pending state. Note that you can only launch EC2 instances on Dedicated Hosts that are in the available state. You aren't billed for the time that the EC2 instance is in any state other than running; when the instance runs on a Dedicated Host, you're billed for the Dedicated Host rather than for the instance. Depending on the user action, the instance can transition from the running state into three different states:

  1. Via Reboot from the running state, the instance enters the rebooting state. Once the reboot is complete, it reenters the running state.
  2. In the case of an Amazon EBS-backed instance, a Stop or Stop-Hibernate transitions the running instance into the stopping state. After reaching the stopped state, it remains there until further action is taken. Via Start, the instance will reenter the pending and subsequently the running state. Via Terminate from the stopped state, the instance will enter the terminated state. As part of a Stop or Stop-Hibernate and subsequent Start, the EC2 instance may move to a different AWS-managed host. On Reboot, it remains on the same AWS-managed host.
  3. Via Terminate from the running state, the instance will enter the shutting-down state, and finally the terminated state. An instance can’t be started from the terminated state.
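The transitions in the list above can be captured as a small state machine. This is a toy model for illustration only, not an AWS API; state and action names follow the description above.

```python
# A toy state machine mirroring the EC2 instance lifecycle described above.
# Keys are (current_state, action); values are the resulting state.
TRANSITIONS = {
    ("pending", "provisioned"): "running",
    ("running", "reboot"): "rebooting",
    ("rebooting", "reboot-complete"): "running",
    ("running", "stop"): "stopping",
    ("stopping", "stopped"): "stopped",
    ("stopped", "start"): "pending",
    ("stopped", "terminate"): "terminated",
    ("running", "terminate"): "shutting-down",
    ("shutting-down", "shutdown-complete"): "terminated",
}

def step(state: str, action: str) -> str:
    """Apply one action; invalid transitions raise (e.g. starting a
    terminated instance, which the post notes is impossible)."""
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"invalid action {action!r} in state {state!r}")

# Stop-then-start cycle: running -> stopping -> stopped -> pending
s = "running"
for action in ("stop", "stopped", "start"):
    s = step(s, action)
print(s)  # pending
```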

Understanding the Amazon EC2 Dedicated Host lifecycle

A diagram of the Amazon EC2 Dedicated Host lifecycle states and the transitions between them.

Amazon EC2 Dedicated Host lifecycle states and transitions

An Amazon EC2 Dedicated Host enters the available state as soon as you allocate it in your AWS account. You can launch EC2 instances on a Dedicated Host only while it is in the available state. You aren't billed for the time that your Dedicated Host is in any state other than available. From the available state, the following states and state transitions can be reached:

  1. You can Release the Dedicated Host, transitioning it into the released state. Dedicated Hosts for Amazon EC2 Mac instances have a minimum allocation period of 24 hours and can't be released before that period has elapsed. You can't release a Dedicated Host that contains instances in one of the following states: pending, running, rebooting, stopping, or shutting down. Consequently, you must Stop or Terminate any EC2 instances on the Dedicated Host and wait until it's in the available state before you can release it. Once an instance is in the stopped state, you can move it to a different Dedicated Host by modifying its Instance placement configuration.
  2. The Dedicated Host may enter the pending state for a number of reasons. In the case of an EC2 Mac instance, stopping or terminating the instance initiates a scrubbing workflow on the underlying Dedicated Host, during which it enters the pending state. This scrubbing workflow includes tasks such as erasing the internal SSD and resetting NVRAM, and it can take up to 50 minutes to complete. Additionally, adding a Dedicated Host to or removing it from a Resource Group can cause the Dedicated Host to go into the pending state. From the pending state, the Dedicated Host reenters the available state.
  3. The Dedicated Host may enter the under-assessment state if AWS is investigating a possible issue with the underlying infrastructure, such as a hardware defect or a network connectivity event. While the host is under assessment, all of the EC2 instances running on it have the impaired status. Depending on the nature of the underlying issue, and if configured, the Dedicated Host initiates host auto recovery.
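The release precondition from point 1 boils down to a simple check over the states of the instances on the host. A minimal sketch for illustration (not an AWS API):

```python
# States in which instances block the release of their Dedicated Host,
# per the list above. Stopped or terminated instances do not block.
BLOCKING_STATES = {"pending", "running", "rebooting", "stopping", "shutting-down"}

def can_release_host(instance_states) -> bool:
    """A Dedicated Host can be released only if no instance on it
    is in a blocking state."""
    return not any(state in BLOCKING_STATES for state in instance_states)

print(can_release_host(["running", "stopped"]))      # False
print(can_release_host(["stopped", "terminated"]))   # True
print(can_release_host([]))                          # True
```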

If Dedicated Host Auto Recovery is enabled for your host, then AWS attempts to restart the instances currently running on a defective Dedicated Host on an automatically allocated replacement Dedicated Host, without requiring your manual intervention. When host recovery is initiated, the AWS account owner is notified by email and by an AWS Health Dashboard event. A second notification is sent after the host recovery has completed successfully. Initially, the replacement Dedicated Host is in the pending state. EC2 instances running on the defective Dedicated Host remain in the impaired status throughout this process. For more information, see the Host Recovery documentation.

Once all of the EC2 instances have been successfully relaunched on the replacement Dedicated Host, the replacement host enters the available state and the recovered instances reenter the running state. The original Dedicated Host enters the released-permanent-failure state. However, if the EC2 instances running on the Dedicated Host don't support host recovery, then the original Dedicated Host enters the permanent-failure state instead.

Conclusion

In this post, we’ve explored the lifecycles of Amazon EC2 instances and Amazon EC2 Dedicated Hosts. We took a close look at the individual lifecycle states and how both lifecycles must be considered in unison to operate EC2 Instances on EC2 Dedicated Hosts correctly and consistently. To learn more about operating Amazon EC2 Dedicated Hosts, visit the EC2 Dedicated Hosts User Guide.

Monitor your Amazon QuickSight deployments using the new Amazon CloudWatch integration

Post Syndicated from Mayank Agarwal original https://aws.amazon.com/blogs/big-data/monitor-your-amazon-quicksight-deployments-using-the-new-amazon-cloudwatch-integration/

Amazon QuickSight is a fully managed, cloud-native business intelligence (BI) service that makes it easy to connect to your data, create interactive dashboards, and share these with tens of thousands of users, either within the QuickSight interface or embedded in software as a service (SaaS) applications or web portals. With QuickSight providing insights to power your daily decisions, it becomes more important than ever for administrators and developers to ensure their QuickSight dashboards and data refreshes are operating smoothly, as expected.

We recently announced the availability of QuickSight metrics within Amazon CloudWatch, which enables developers and administrators to monitor the availability and performance of their QuickSight deployments in real time. With the availability of metrics related to dashboard views, visual load times, and data ingestion details into SPICE (the QuickSight in-memory data store), developers and administrators can ensure that end-users of QuickSight deployments have an uninterrupted experience with relevant data. CloudWatch integration is now available in QuickSight Enterprise Edition in all supported Regions. These metrics can be accessed via CloudWatch, and allow QuickSight deployments to be monitored similarly to other application deployments on AWS, with the ability to generate alarms on failures and to slice and dice historical events to view trends and identify optimization opportunities. Metrics are kept for a period of 15 months, allowing them to be used for historical comparison and trend analysis.

Feature overview

QuickSight emits the following metrics to track the performance and availability of dataset ingestions, dashboards, and visuals. In addition to individual asset metrics, QuickSight also emits aggregated metrics to track performance and availability of all dashboards and SPICE ingestions for an account in a Region.

  1. IngestionErrorCount (Unit: Count): The number of failed ingestions.
  2. IngestionInvocationCount (Unit: Count): The number of ingestions initiated. This includes scheduled and manual ingestions triggered through either the QuickSight console or the APIs.
  3. IngestionLatency (Unit: Second): The time from ingestion initiation to completion.
  4. IngestionRowCount (Unit: Count): The number of successful row ingestions.
  5. DashboardViewCount (Unit: Count): The number of times that a dashboard has been loaded or viewed. This includes all access patterns, such as web, mobile, and embedded.
  6. DashboardViewLoadTime (Unit: Millisecond): The time that it takes a dashboard to load, measured from the navigation to the dashboard until all visuals within the viewport are rendered.
  7. VisualLoadTime (Unit: Millisecond): The time it takes for a QuickSight visual to load, including the round-trip query time from the client to QuickSight and back to the client.
  8. VisualLoadErrorCount (Unit: Count): The number of times a QuickSight visual fails to complete a data load.

Access QuickSight metrics in CloudWatch

Use the following procedure to access QuickSight metrics in CloudWatch:

  1. Sign in to the AWS account associated with your QuickSight account.
  2. In the upper-left corner of the AWS Console Home, choose Services, and then choose CloudWatch.
  3. On the CloudWatch console, under Metrics in the navigation pane, choose All metrics, and choose QuickSight.
  4. To access individual metrics, choose Dashboard metrics, Visual metrics, and Ingestion metrics.
  5. To access aggregate metrics, choose Aggregate metrics.

Visualize metrics on the CloudWatch console

You can use the CloudWatch console to visualize metric data generated from your QuickSight deployment. For more information, see Graphing metrics.

Create an alarm using CloudWatch console

You can also create a CloudWatch alarm that monitors CloudWatch metrics for your QuickSight assets. CloudWatch automatically sends you a notification when the metric reaches a threshold you specify. For examples, see Using Amazon CloudWatch alarms.

Use case overview

Let’s consider a fictional company, OkTank, which is an independent software vendor (ISV) in the healthcare space. They have an application that is used by hospitals across different regions of the country to manage their revenue. OkTank has hundreds of hospitals, with thousands of healthcare employees accessing their application, and has embedded multiple QuickSight dashboards covering their business operations in that application. In addition, they provide an embedded authoring experience so that each hospital's in-house data analysts can build their own dashboards for their BI needs.

All the dashboards are powered by a database cluster, and they have multiple ingestion schedules. Because their QuickSight usage is growing and hospitals’ in-house data analysts are contributing by bringing in more data and their own dashboards, OkTank wants to monitor and make sure they’re providing their readers with a consistent, performant, and uninterrupted experience on QuickSight.

OkTank has some key monitoring needs that they deem critical:

  • Monitoring console – They want a general monitoring console where they can monitor reader engagement in their account, most popular dashboards, and overall visual load performance. They would like to monitor overall ingestion performance in their account.
  • Dashboard adoption and performance – They want to monitor traffic growth with respect to performance to make sure they’re meeting scaling needs.
  • Visual performance and availability – They have some visuals with complex queries and would like to make sure these queries are running fast enough without failures so that their readers have a performant and uninterrupted experience.
  • Ingestion failures – They want to be alerted if any scheduled ingestion fails, so that they can act right away and make sure their readers don’t experience any interruptions.

In the following sections, we discuss how OkTank meets each monitoring need in more detail.

Monitoring console

OkTank wants to have a general monitoring console to look at key KPIs, monitor reader engagement, and make sure their readers are getting a consistent and uninterrupted experience with QuickSight.

To create a monitoring console and add a KPI metric to it, OkTank takes the following steps:

  1. On the CloudWatch console, under Metrics in the navigation pane, choose Dashboards.
  2. Choose Create dashboard.
  3. Enter the dashboard name and choose Create dashboard.
  4. On the blank dashboard landing page, choose either Add a first widget or the plus sign to add a widget.
  5. In the Add widget section, choose Number.

  6. On the Browse tab, choose QuickSight.
  7. Choose Aggregate metrics.
  8. Select DashboardViewCount.
  9. Choose Create widget.
  10. On the options menu of the newly created widget, choose Edit.
  11. Enter the desired widget name.
  12. For Statistic, choose Sum.
  13. For Period, choose 1 day.
  14. Choose Update widget.

Using the same widget options, OkTank added further KPIs to the console, such as the average dashboard load time across the Region during the day and the 10 most popular dashboards by view count, completing their monitoring console.
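The same console steps can also be automated with the CloudWatch PutDashboard API. The sketch below only assembles a dashboard body for a single Number (singleValue) widget; the widget schema follows the CloudWatch dashboard body format, and names such as the title are illustrative:

```python
import json

def kpi_widget(title: str, metric: str, stat: str = "Sum", period: int = 86400) -> dict:
    """Build one 'Number' widget for a QuickSight aggregate metric,
    mirroring the console steps above (Sum statistic, 1-day period)."""
    return {
        "type": "metric",
        "properties": {
            "metrics": [["AWS/QuickSight", metric]],
            "view": "singleValue",   # the console's "Number" widget
            "stat": stat,
            "period": period,
            "title": title,
        },
    }

body = {"widgets": [kpi_widget("Daily dashboard views", "DashboardViewCount")]}

# With a real boto3 client this body would be passed as:
#   cloudwatch.put_dashboard(DashboardName="monitoring",
#                            DashboardBody=json.dumps(body))
print(json.dumps(body)[:40])
```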

Dashboard adoption and performance

OkTank has some critical dashboards, and they want to monitor the adoption of those dashboards and track their loading performance to make sure they can meet scaling needs.

They take the following steps to create a widget:

  1. On the monitoring console, choose the plus sign.
  2. In the Add widget section, choose Line.
  3. In the Add to this dashboard section, choose Metrics.
  4. On the Browse tab, choose QuickSight.
  5. Choose Dashboard metrics.
  6. Choose the DashboardViewCount and DashboardViewLoadTime metrics of the critical dashboard.
  7. Choose Create widget.

The newly created widget shows the critical dashboard's views and load times across multiple dimensions.

Visual performance and availability

OkTank has some visuals that run complex queries while loading. They want to provide their readers with a consistent and uninterrupted experience. In addition, they would like to be alerted in case a query fails when running or takes longer than the desired runtime.

They take the following steps to monitor and set up an alarm:

  1. On the monitoring console, choose the plus sign.
  2. In the Add widget section, choose Line.
  3. In the Add to this dashboard section, choose Metrics.
  4. On the Browse tab, choose QuickSight.
  5. Choose Visual metrics.
  6. Choose the VisualLoadTime metric of the critical visual and configure the time period on the menu above the chart.
  7. To get alerted in case the critical visual fails to load due to query failure, choose the VisualLoadErrorCount metric.

    The newly created widget shows visuals load performance over the selected time frame.
  8. On the Graphed metrics tab, select the VisualLoadErrorCount metric.
  9. On the Actions menu, choose Create alarm.
  10. For Metric name, enter a name.
  11. Confirm that the value for DashboardId matches the dashboard that has the visual.

    In the Conditions section, OkTank wants to be notified when the error count is greater than or equal to 1.
  12. For Threshold type, select Static.
  13. Select Greater/Equal.
  14. Enter 1.
  15. Choose Next.
  16. In the Notification section, choose Select an existing SNS topic or Create a new topic.
  17. If you’re creating a new topic, provide a name for the topic and email addresses of recipients.
  18. Choose Create topic.
  19. Enter an alarm name and optional description.
  20. Choose Next.
  21. Verify the details and choose Create alarm.
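The console walkthrough above corresponds to a single PutMetricAlarm API call. The sketch below only builds the request parameters (names per the CloudWatch PutMetricAlarm API); the AWS/QuickSight namespace and the DashboardId dimension name are assumptions based on this post, and the alarm name and period are illustrative:

```python
def visual_error_alarm_params(dashboard_id: str, sns_topic_arn: str) -> dict:
    """Request parameters for a 'VisualLoadErrorCount >= 1' alarm,
    mirroring the console steps above."""
    return {
        "AlarmName": "quicksight-visual-load-errors",   # illustrative name
        "Namespace": "AWS/QuickSight",                  # assumed namespace
        "MetricName": "VisualLoadErrorCount",
        "Dimensions": [{"Name": "DashboardId", "Value": dashboard_id}],
        "Statistic": "Sum",
        "Period": 300,               # seconds; an illustrative choice
        "EvaluationPeriods": 1,
        "Threshold": 1,              # notify when error count >= 1
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        "AlarmActions": [sns_topic_arn],
    }

# With a real boto3 client:
#   boto3.client("cloudwatch").put_metric_alarm(**visual_error_alarm_params(...))
params = visual_error_alarm_params("my-dashboard-id", "arn:aws:sns:...:alerts")
print(params["ComparisonOperator"])  # GreaterThanOrEqualToThreshold
```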

The alarm is now available on the CloudWatch console. If the visual fails to load, the VisualLoadErrorCount value becomes 1 or more (depending on the number of times the dashboard is invoked) and the alarm state is set to In alarm.

Choose the alarm to get more details.

You can scroll down for more information about the alarm.

OkTank also receives an email to the email endpoint defined in the Amazon Simple Notification Service (Amazon SNS) topic.

Ingestion failures

OkTank wants to be alerted if any scheduled SPICE data ingestion fails, so that they can act right away and make sure their readers don’t experience any interruptions. This allows the administrator to find out the root cause of the SPICE ingestion failure (for example, an overloaded database instance) and fix it to ensure the latest data is available in the dependent dashboards.

They take the following steps to monitor and set up an alarm:

  1. On the monitoring console, choose the plus sign.
  2. In the Add widget section, choose Line.
  3. In the Add to this dashboard section, choose Metrics.
  4. On the Browse tab, choose QuickSight.
  5. Choose Ingestion metrics.
  6. Choose the IngestionErrorCount metric of the dataset and configure the time period on the menu above the chart.
  7. Follow the same steps as in the previous section to set up an alarm.

When ingestion fails for the dataset, the alarm changes to the In alarm state and you receive an email notification.

The following screenshot shows an example of the email.

Conclusion

With QuickSight metrics in CloudWatch, QuickSight developers and administrators can observe and respond to the availability and performance of their QuickSight ecosystem in near-real time. They can monitor dataset ingestions, dashboards, and visuals to provide end-users of QuickSight and applications that embed QuickSight dashboards with a consistent, performant, and uninterrupted experience.

Try out QuickSight metrics in Amazon CloudWatch to monitor your Amazon QuickSight deployments, and share your feedback and questions in the comments.


About the Authors

Mayank Agarwal is a product manager for Amazon QuickSight, AWS’ cloud-native, fully managed BI service. He focuses on account administration, governance and developer experience. He started his career as an embedded software engineer developing handheld devices. Prior to QuickSight he was leading engineering teams at Credence ID, developing custom mobile embedded device and web solutions using AWS services that make biometric enrollment and identification fast, intuitive, and cost-effective for Government sector, healthcare and transaction security applications.

Raj Jayaraman is a Senior Specialist Solutions Architect for Amazon QuickSight. Raj focuses on helping customers develop sample dashboards, embed analytics and adopt BI design patterns and best practices.

Rapid7 Belfast Recognized for “Company Connection” During COVID-19 Pandemic

Post Syndicated from Rapid7 original https://blog.rapid7.com/2022/07/01/rapid7-belfast-recognized-for-company-connection-during-covid-19-pandemic/


Irish News has recognized Rapid7 in its Workplace and Employment Awards, where we’ve taken home the trophy for Best Company Connection. Reflecting on the past two years, this award recognizes the organization that best demonstrates how it has adapted its workplace well-being strategy to the challenges of remote working influenced by the COVID-19 pandemic. Specifically, this includes how the company has remained committed to providing excellent support to its staff throughout, and maintaining contact and connection with workers during periods of uncertainty and isolation.


Rapid7 has been part of Belfast’s booming technology scene since 2014 and is home to a growing team of engineers, developers, and customer advisors. From 2020 to 2022, the office population nearly doubled in size to support the increasing demand from customers around the world for streamlined and accessible cybersecurity solutions. Maintaining Rapid7’s commitment to the core values of “Be an Advocate,” “Never Done,” “Impact Together,” “Challenge Convention,” and “Bring You” was a critical focal point for our local leadership as they scaled their teams in the midst of an unprecedented global pandemic.

The judges were very impressed by Rapid7’s holistic response to this new way of working, and how the company recognised the importance of maintaining contact, culture, and connection during such unprecedented times. Programs that stood out included leadership engagement through weekly Town Halls, engagement with mental well-being experts, and several grassroots community initiatives, including an Academy group designed to support parents in homeschooling their children.


In addition to taking home the winning title, Rapid7 was also recognised as a finalist in two other categories this year: Best People Development Programme and Best Place to Work. Rapid7’s global commitment to its employees has been recognized in other recent designations, including the #1 spot on the Boston Business Journal Best Places to Work list in June and landing at #2 on Comparably’s list of Best Workplaces in Boston in March. Expanding our winning track record into the United Kingdom speaks to how we support employees in creating the career experience of a lifetime while positively impacting our customers and the greater cybersecurity community.


[$] Removing the scheduler’s energy-margin heuristic

Post Syndicated from original https://lwn.net/Articles/899303/

The CPU scheduler’s job has never been easy; it must find a way to allocate CPU time to all tasks in the system that is fair, allows all tasks to progress, and maximizes the throughput of the system as a whole. More recently, it has been called upon to satisfy another constraint: minimizing the system’s energy consumption. There is currently a patch set in circulation, posted by Vincent Donnefort with work from Dietmar Eggemann as well, that changes how this constraint is met. The actual change is small, but it illustrates how hard it can be to get the needed heuristics right.

Image background removal using Amazon SageMaker semantic segmentation

Post Syndicated from Patrick Gryczka original https://aws.amazon.com/blogs/architecture/image-background-removal-using-amazon-sagemaker-semantic-segmentation/

Many individuals are creating their own ecommerce and online stores to sell their products and services. This simplifies and speeds up the process of getting products out to your selected markets, which is a key indicator of the success of your business.

Artificial Intelligence/Machine Learning (AI/ML) and automation can offer you an improved and seamless process for image manipulation. You can take a picture identifying your products and then remove the background in order to publish high-quality, clean product images. These images can be added to your online stores for consumers to view and purchase. This automated process drastically decreases the manual effort required, though some manual quality review will still be necessary. It improves your time-to-market (TTM) and quickly gets your products out to customers.

This blog post explains how you can automate the removal of image backgrounds using semantic segmentation inference with Amazon SageMaker JumpStart, and how you can automate the image processing using AWS Lambda. We will walk you through setting up an Amazon SageMaker JumpStart semantic segmentation inference endpoint using curated training data.

Amazon SageMaker JumpStart solution overview


Figure 1. Architecture for automatically processing new images and outputting isolated labels identified through semantic segmentation.

The example architecture in Figure 1 shows a serverless architecture that uses SageMaker to perform semantic segmentation on images. Image processing takes place within a Lambda function, which extracts the identified (product) content from the background content in the image.

In this event driven architecture, Amazon Simple Storage Service (Amazon S3) invokes a Lambda function each time a new product image lands in the Uploaded Image Bucket. That Lambda function calls out to a semantic segmentation endpoint in Amazon SageMaker. The function then receives a segmentation mask that identifies the pixels that are part of the segment we are identifying. Then, the Lambda function processes the image to isolate the identified segment from the rest of the image, outputting the result to our Processed Image Bucket.
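As a rough illustration of that flow, the Lambda handler below reads the uploaded object, calls a SageMaker endpoint for a segmentation mask, and writes a result object. This is a sketch under assumptions: the bucket and endpoint names are placeholders, the content type is assumed to be JPEG, and the mask-to-image compositing step is omitted.

```python
import os
from urllib.parse import unquote_plus

# Placeholder names: configure these for your own deployment.
PROCESSED_BUCKET = os.environ.get("PROCESSED_BUCKET", "processed-images")
ENDPOINT_NAME = os.environ.get("SM_ENDPOINT", "semantic-segmentation")

def output_key(input_key: str) -> str:
    """Mirror the input key into the processed bucket as a new object name."""
    base, _, _ = input_key.rpartition(".")
    return (base or input_key) + "-processed.png"

def handler(event, context):
    """Sketch of the Lambda in Figure 1: S3 event -> SageMaker endpoint -> S3."""
    import boto3  # imported lazily so the module loads without the SDK
    s3 = boto3.client("s3")
    sm = boto3.client("sagemaker-runtime")
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])
        image = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        # Ask the endpoint for a segmentation mask for this image.
        resp = sm.invoke_endpoint(
            EndpointName=ENDPOINT_NAME,
            ContentType="image/jpeg",  # assumed input format
            Body=image,
        )
        mask = resp["Body"].read()
        # Real code would composite the mask with the original image here;
        # this sketch stores the raw mask as the result.
        s3.put_object(Bucket=PROCESSED_BUCKET, Key=output_key(key), Body=mask)
```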

Semantic segmentation model

The semantic segmentation algorithm provides a fine-grained, pixel-level approach to developing computer vision applications. It tags every pixel in an image with a class label from a predefined set of classes. Because the semantic segmentation algorithm classifies every pixel in an image, it also provides information about the shapes of the objects contained in the image. The segmentation output is represented as a grayscale image, called a segmentation mask. A segmentation mask is a grayscale image with the same shape as the input image.

You can use the segmentation mask to replace the pixels that correspond to the identified class with the pixels from the original image. You can use the Python library PIL to do this pixel manipulation. The following images show how the image in Figure 2, when passed through semantic segmentation, produces the mask shown in Figure 3. When you apply the Figure 3 mask to the pixels from Figure 2, the end result is the image in Figure 4. Due to minor quality issues in the final image, you will need to do some manual cleanup after automation.


Figure 2. Car image with background


Figure 3. Car mask image


Figure 4. Final image, background removed
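The mask-based pixel replacement described above amounts to a per-pixel selection. A PIL-based version would typically use Image.composite with the mask; the dependency-free sketch below shows the same logic on raw pixel tuples (illustrative only):

```python
# Per-pixel background removal: where the grayscale mask marks the
# target class, keep the original pixel; elsewhere emit transparency.
def remove_background(pixels, mask, threshold=128):
    """pixels: list of (R, G, B) tuples; mask: list of grayscale ints.
    Returns RGBA pixels with background pixels fully transparent."""
    out = []
    for (r, g, b), m in zip(pixels, mask):
        alpha = 255 if m >= threshold else 0
        out.append((r, g, b, alpha))
    return out

image = [(200, 10, 10), (0, 120, 0), (5, 5, 250)]
mask = [255, 0, 255]  # the middle pixel is background
print(remove_background(image, mask))
# [(200, 10, 10, 255), (0, 120, 0, 0), (5, 5, 250, 255)]
```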

SageMaker JumpStart streamlines the deployment of the prebuilt model on SageMaker, which supports the semantic segmentation algorithm. You can test this using the sample Jupyter notebook available at Extract Image using Semantic Segmentation, which demonstrates how to extract an individual form from the surrounding background.

Learn more about SageMaker JumpStart

SageMaker JumpStart is a quick way to learn about SageMaker features and capabilities through curated one-step solutions, example notebooks, and deployable pre-trained models. You can also fine-tune the models and then deploy them. You can access JumpStart using Amazon SageMaker Studio or programmatically through the SageMaker APIs.

SageMaker JumpStart provides many different semantic segmentation models, each pre-trained on a set of object classes it can identify. These models are fine-tuned on a sample dataset. You can fine-tune a model with your own dataset to get an effective mask for the class of object you want to extract from the image. When you fine-tune a model, you can use the default dataset or choose your own data located in an Amazon S3 bucket. You can also customize the hyperparameters of the training job that is used to fine-tune the model.

When the fine-tuning process is complete, JumpStart provides information about the model: parent model, training job name, training job Amazon Resource Name (ARN), training time, and output path. We retrieve the deploy_image_uri, deploy_source_uri, and base_model_uri for the pre-trained model. You can host the pre-trained base-model by creating an instance of sagemaker.model.Model and deploying it.
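A hosting sketch along the lines of that paragraph might look as follows. The argument names mirror the URIs retrieved from JumpStart; the entry point script name and instance type are assumptions, and the exact Model constructor arguments may differ between SDK versions:

```python
def deploy_pretrained_model(deploy_image_uri, deploy_source_uri, base_model_uri,
                            role, instance_type="ml.g4dn.xlarge"):
    """Sketch of hosting the pre-trained base model via the SageMaker SDK.
    The entry point name and instance type are illustrative assumptions."""
    from sagemaker.model import Model  # lazy import: requires the sagemaker SDK

    model = Model(
        image_uri=deploy_image_uri,      # inference container image
        source_dir=deploy_source_uri,    # inference scripts
        model_data=base_model_uri,       # pre-trained model artifacts
        entry_point="inference.py",      # assumed script name
        role=role,
    )
    # Returns a predictor backed by a real-time endpoint.
    return model.deploy(initial_instance_count=1, instance_type=instance_type)
```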

Conclusion

In this blog, we reviewed the steps to use Amazon SageMaker JumpStart and AWS Lambda to automate the processing of images with pre-trained machine learning models and inference. The solution ingests the product images, identifies your products, and then removes the image background. After some review and QA, you can then publish your products to your ecommerce store or other medium.

Further resources:

Security updates for Friday

Post Syndicated from original https://lwn.net/Articles/899701/

Security updates have been issued by Debian (firefox-esr, isync, kernel, and systemd), Fedora (chromium, curl, firefox, golang-github-vultr-govultr-2, and xen), Mageia (openssl, python-bottle, and python-pyjwt), Red Hat (compat-openssl10, curl, expat, firefox, go-toolset-1.17 and go-toolset-1.17-golang, go-toolset:rhel8, kernel, kpatch-patch, libarchive, libgcrypt, libinput, libxml2, pcre2, php:7.4, php:8.0, qemu-kvm, ruby:2.6, thunderbird, and vim), and Ubuntu (curl, libjpeg6b, and vim).

Optimizing TCP for high WAN throughput while preserving low latency

Post Syndicated from Mike Freemon original https://blog.cloudflare.com/optimizing-tcp-for-high-throughput-and-low-latency/


Here at Cloudflare we’re constantly working on improving our service. Our engineers are looking at hundreds of parameters of our traffic, making sure that we get better all the time.

One of the core numbers we keep a close eye on is HTTP request latency, which is important for many of our products. We regard latency spikes as bugs to be fixed. One example is the 2017 story of “Why does one NGINX worker take all the load?”, where we optimized our TCP Accept queues to improve overall latency of TCP sockets waiting for accept().

Performance tuning is a holistic endeavor, and we monitor and continuously improve a range of other performance metrics as well, including throughput. Sometimes, tradeoffs have to be made. Such a case occurred in 2015, when a latency spike was discovered in our processing of HTTP requests. The solution at the time was to set tcp_rmem to 4 MiB, which minimizes the amount of time the kernel spends on TCP collapse processing. It was this collapse processing that was causing the latency spikes. Later in this post we discuss TCP collapse processing in more detail.

The tradeoff is that using a low value for tcp_rmem limits TCP throughput over high latency links. The following graph shows the maximum throughput as a function of network latency for a window size of 2 MiB. Note that the 2 MiB corresponds to a tcp_rmem value of 4 MiB due to the tcp_adv_win_scale setting in effect at the time.
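The relationship behind that graph is simple arithmetic: a sender can have at most one receive window of unacknowledged data in flight per round trip, so the throughput ceiling is the window size divided by the RTT. A quick sketch:

```python
def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on TCP throughput: one full window per round trip."""
    bytes_per_sec = window_bytes / (rtt_ms / 1000.0)
    return bytes_per_sec * 8 / 1_000_000  # convert to megabits per second

WINDOW = 2 * 1024 * 1024  # 2 MiB, as in the graph above

for rtt in (10, 50, 100, 200):
    print(f"RTT {rtt:3d} ms -> {max_throughput_mbps(WINDOW, rtt):8.1f} Mbit/s")
```

At 100 ms of latency, a 2 MiB window caps throughput at roughly 168 Mbit/s, which is why high-latency links suffer most under a small tcp_rmem.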

[Graph: maximum throughput as a function of network latency for a 2 MiB window]

For the Cloudflare products then in existence, this was not a major problem, as connections terminate and content is served from nearby servers due to our BGP anycast routing.

Since then, we have added new products, such as Magic WAN, WARP, Spectrum, Gateway, and others. These represent new types of use cases and traffic flows.

For example, imagine you’re a typical Magic WAN customer. You have connected all of your worldwide offices together using the Cloudflare global network. While Time to First Byte still matters, Magic WAN office-to-office traffic also needs good throughput. For example, a lot of traffic over these corporate connections will be file sharing using protocols such as SMB. These are elephant flows over long fat networks. Throughput is the metric every eyeball watches as they are downloading files.

We need to continue to provide world-class low latency while simultaneously providing high throughput over high-latency connections.

Before we begin, let’s introduce the players in our game.

TCP receive window is the maximum number of unacknowledged user payload bytes the sender should transmit (bytes-in-flight) at any point in time. The size of the receive window can and does go up and down during the course of a TCP session. It is a mechanism whereby the receiver can tell the sender to stop sending if the sent packets cannot be successfully received because the receive buffers are full. It is this receive window that often limits throughput over high-latency networks.

net.ipv4.tcp_adv_win_scale is a (non-intuitive) number used to account for the overhead needed by Linux to process packets. The receive window is specified in terms of user payload bytes. Linux needs additional memory beyond that to track other data associated with packets it is processing.

The value of the receive window changes during the lifetime of a TCP session, depending on a number of factors. The maximum value that the receive window can be is limited by the amount of free memory available in the receive buffer, according to this table:

tcp_adv_win_scale    TCP window size
 4                   15/16 * available memory in receive buffer
 3                     7/8 * available memory in receive buffer
 2                     3/4 * available memory in receive buffer
 1                     1/2 * available memory in receive buffer
 0                           available memory in receive buffer
-1                     1/2 * available memory in receive buffer
-2                     1/4 * available memory in receive buffer
-3                     1/8 * available memory in receive buffer
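The table can be condensed into a small helper. This is a sketch following the documented semantics of tcp_adv_win_scale, not the kernel's actual code: a positive scale reserves 1/2**scale of the buffer for overhead, and a negative scale gives the window only 1/2**(-scale) of it.

```python
def max_window(buf_bytes, adv_win_scale):
    # Positive scale: window = buffer minus the 1/2**scale overhead share.
    # Zero or negative scale: window = buffer >> (-scale).
    if adv_win_scale > 0:
        return buf_bytes - (buf_bytes >> adv_win_scale)
    return buf_bytes >> -adv_win_scale

print(max_window(4 * 2**20, 1) // 2**20)    # 2 MiB window from a 4 MiB buffer at scale 1
print(max_window(4 * 2**20, -2) // 2**20)   # 1 MiB if the same buffer used scale -2
```

The first line reproduces the 2015 numbers quoted earlier: a 4 MiB tcp_rmem with the then-current scale corresponds to a 2 MiB window.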

We can intuitively (and correctly) understand that the amount of available memory in the receive buffer is the difference between the used memory and the maximum limit. But what is the maximum size a receive buffer can be? The answer is sk_rcvbuf.

sk_rcvbuf is a per-socket field that specifies the maximum amount of memory that a receive buffer can allocate. This can be set programmatically with the socket option SO_RCVBUF. This can sometimes be useful to do, for localhost TCP sessions, for example, but in general the use of SO_RCVBUF is not recommended.
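For completeness, here is what pinning the buffer looks like from an application. This is a minimal sketch; note that setting SO_RCVBUF by hand locks the buffer size and disables receive-buffer autotuning for that socket, which is part of why the general advice is to avoid it.

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Pin the receive buffer to 64 KiB. On Linux the kernel doubles the
# requested value to leave room for bookkeeping overhead, so
# getsockopt() reports roughly twice what was asked for.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 65536)
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
s.close()
```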

So how is sk_rcvbuf set? The most appropriate value for that depends on the latency of the TCP session and other factors. This makes it difficult for L7 applications to know how to set these values correctly, as they will be different for every TCP session. The solution to this problem is Linux autotuning.

Linux autotuning

Linux autotuning is logic in the Linux kernel that adjusts the buffer size limits and the receive window based on actual packet processing. It takes into consideration a number of things including TCP session RTT, L7 read rates, and the amount of available host memory.

Autotuning can sometimes seem mysterious, but it is actually fairly straightforward.

The central idea is that Linux can track the rate at which the local application is reading data off of the receive queue. It also knows the session RTT. Because Linux knows these things, it can automatically increase the buffers and receive window until it reaches the point at which the application layer or network bottleneck links are the constraint on throughput (and not host buffer settings). At the same time, autotuning prevents slow local readers from having excessively large receive queues. The way autotuning does that is by limiting the receive window and its corresponding receive buffer to an appropriate size for each socket.

The values set by autotuning can be seen via the Linux “ss” command from the iproute package (e.g. “ss -tmi”).  The relevant output fields from that command are:

Recv-Q is the number of user payload bytes not yet read by the local application.

rcv_ssthresh is the window clamp, a.k.a. the maximum receive window size. This value is not known to the sender. The sender receives only the current window size, via the TCP header field. A closely related field in the kernel, tp->window_clamp, is the maximum window size allowable based on the amount of available memory. rcv_ssthresh is the receiver-side slow-start threshold value.

skmem_r is the actual amount of memory that is allocated, which includes not only user payload (Recv-Q) but also additional memory needed by Linux to process the packet (packet metadata). This is known within the kernel as sk_rmem_alloc.

Note that there are other buffers associated with a socket, so skmem_r does not represent the total memory that a socket might have allocated. Those other buffers are not involved in the issues presented in this post.

skmem_rb is the maximum amount of memory that could be allocated by the socket for the receive buffer. This is higher than rcv_ssthresh to account for memory needed for packet processing that is not packet data. Autotuning can increase this value (up to tcp_rmem max) based on how fast the L7 application is able to read data from the socket and the RTT of the session. This is known within the kernel as sk_rcvbuf.

rcv_space is the high water mark of the rate of the local application reading from the receive buffer during any RTT. This is used internally within the kernel to adjust sk_rcvbuf.

Earlier we mentioned a setting called tcp_rmem. net.ipv4.tcp_rmem consists of three values, but in this document we are always referring to the third value (except where noted). It is a global setting that specifies the maximum amount of memory that any TCP receive buffer can allocate, i.e. the maximum permissible value that autotuning can use for sk_rcvbuf. This is essentially just a failsafe for autotuning, and under normal circumstances should play only a minor role in TCP memory management.

It’s worth mentioning that receive buffer memory is not preallocated. Memory is allocated based on actual packets arriving and sitting in the receive queue. It’s also important to realize that filling up a receive queue is not one of the criteria that autotuning uses to increase sk_rcvbuf. Indeed, preventing this type of excessive buffering (bufferbloat) is one of the benefits of autotuning.

What’s the problem?

The problem is that we must have a large TCP receive window for high BDP sessions. This is directly at odds with the latency spike problem mentioned above.

Something has to give. The laws of physics (speed of light in glass, etc.) dictate that we must use large window sizes. There is no way to get around that. So we are forced to solve the latency spikes differently.

A brief recap of the latency spike problem

Sometimes a TCP session will fill up its receive buffers. When that happens, the Linux kernel will attempt to reduce the amount of memory the receive queue is using by performing what amounts to a “defragmentation” of memory. This is called collapsing the queue. Collapsing the queue takes time, which is what drives up HTTP request latency.

We do not want to spend time collapsing TCP queues.

Why do receive queues fill up to the point where they hit the maximum memory limit? The usual situation is when the local application starts out reading data from the receive queue at one rate (triggering autotuning to raise the max receive window), followed by the local application slowing down its reading from the receive queue. This is valid behavior, and we need to handle it correctly.

Selecting sysctl values

Before exploring solutions, let’s first decide what we need as the maximum TCP window size.

As we have seen above in the discussion about BDP, the window size is determined based upon the RTT and desired throughput of the connection.

Because Linux autotuning will adjust correctly for sessions with lower RTTs and bottleneck links with lower throughput, all we need to be concerned about are the maximums.

For latency, we have chosen 300 ms as the maximum expected latency, as that is the measured latency between our Zurich and Sydney facilities. It seems reasonable enough as a worst-case latency under normal circumstances.

For throughput, although we have very fast and modern hardware on the Cloudflare global network, we don’t expect a single TCP session to saturate the hardware. We have arbitrarily chosen 3,500 Mbps as the highest supported throughput for our highest-latency TCP sessions.

The calculation for those numbers results in a BDP of 131 MB, which we round to the more aesthetic value of 128 MiB.
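The arithmetic behind those numbers is easy to verify:

```python
# Bandwidth-delay product for the chosen worst-case targets.
throughput_bps = 3500 * 10**6    # 3,500 Mbps target throughput
rtt = 0.300                      # 300 ms worst-case RTT (Zurich-Sydney)

bdp = throughput_bps / 8 * rtt   # bytes that must be in flight to fill the pipe
print(round(bdp / 1e6))          # 131 (MB), which the post rounds to 128 MiB
```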

Recall that allocation of TCP memory includes metadata overhead in addition to packet data. The ratio of actual amount of memory allocated to user payload size varies, depending on NIC driver settings, packet size, and other factors. For full-sized packets on some of our hardware, we have measured average allocations up to 3 times the packet data size. In order to reduce the frequency of TCP collapse on our servers, we set tcp_adv_win_scale to -2. From the table above, we know that the max window size will be ¼ of the max buffer space.

We end up with the following sysctl values:

net.ipv4.tcp_rmem = 8192 262144 536870912
net.ipv4.tcp_wmem = 4096 16384 536870912
net.ipv4.tcp_adv_win_scale = -2

A tcp_rmem of 512 MiB and a tcp_adv_win_scale of -2 result in a maximum window size that autotuning can set of 128 MiB, our desired value.

Disabling TCP collapse

Patient: Doctor, it hurts when we collapse the TCP receive queue.

Doctor: Then don’t do that!

Generally speaking, when a packet arrives at a buffer that is already full, the packet gets dropped. In the case of these receive buffers, Linux instead tries to “save the packet” by collapsing the receive queue. Frequently this is successful, but it is not guaranteed to be, and it takes time.

No problems are created by simply dropping the packet instead of trying to save it. The receive queue is full anyway, so the local receiver application still has data to read. The sender’s congestion control will notice the drop and/or ZeroWindow and will respond appropriately. Everything will continue working as designed.

At present, there is no setting provided by Linux to disable the TCP collapse. We developed an in-house patch to the kernel to disable the TCP collapse logic.

Kernel patch – Attempt #1

The kernel patch for our first attempt was straightforward. At the top of tcp_try_rmem_schedule(), if the memory allocation fails, we simply return (after pred_flag = 0 and tcp_sack_reset()), thus completely skipping the tcp_collapse and related logic.

It didn’t work.

Although we eliminated the latency spikes while using large buffer limits, we did not observe the throughput we expected.

One of the realizations we made as we investigated the situation was that standard network benchmarking tools such as iperf3 and similar do not expose the problem we are trying to solve. iperf3 does not fill the receive queue. Linux autotuning does not open the TCP window large enough. Autotuning is working perfectly for our well-behaved benchmarking program.

We need application-layer software that is slightly less well-behaved, one that exercises the autotuning logic under test. So we wrote one.

A new benchmarking tool

Anomalies were seen during our “Attempt #1” that negatively impacted throughput. The anomalies were seen only under certain specific conditions, and we realized we needed a better benchmarking tool to detect and measure the performance impact of those anomalies.

This tool has turned into an invaluable resource during the development of this patch and raised confidence in our solution.

It consists of two Python programs: a daemon and a reader. The reader opens a TCP session to the daemon, at which point the daemon starts sending user payload as fast as it can, and never stops sending.

The reader, on the other hand, starts and stops reading in a way to open up the TCP receive window wide open and then repeatedly causes the buffers to fill up completely. More specifically, the reader implemented this logic:

  1. reads as fast as it can, for five seconds
    • this is called fast mode
    • opens up the window
  2. calculates 5% of the high watermark of the bytes read during any previous one second
  3. for each second of the next 15 seconds:
    • this is called slow mode
    • reads that 5% number of bytes, then stops reading
    • sleeps for the remainder of that particular second
    • most of the second consists of no reading at all
  4. steps 1-3 are repeated in a loop three times, so the entire run is 60 seconds
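The loop above might be sketched as follows. This is illustrative only; the parameter names and structure are ours, not the actual tool's code.

```python
import socket
import time

def oscillating_reader(host, port, cycles=3, fast_secs=5, slow_reads=15):
    """Fast/slow read pattern: open the window wide, then repeatedly
    let the receive queue fill up by barely reading at all."""
    sock = socket.create_connection((host, port))
    total = 0
    for _ in range(cycles):
        # Fast mode: read flat out, tracking the per-second high-water
        # mark; sustained fast reads let autotuning open the window.
        high_water = 0
        deadline = time.monotonic() + fast_secs
        while time.monotonic() < deadline:
            second_end = min(time.monotonic() + 1.0, deadline)
            n = 0
            while time.monotonic() < second_end:
                chunk = sock.recv(1 << 16)
                if not chunk:
                    break
                n += len(chunk)
            high_water = max(high_water, n)
            total += n
        # Slow mode: each second, read only 5% of the high-water mark,
        # then stop reading so the receive queue fills up completely.
        budget = max(1, high_water // 20)
        for _ in range(slow_reads):
            second_end = time.monotonic() + 1.0
            got = 0
            while got < budget:
                chunk = sock.recv(min(1 << 16, budget - got))
                if not chunk:
                    break
                got += len(chunk)
            total += got
            time.sleep(max(0.0, second_end - time.monotonic()))
    sock.close()
    return total
```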

This has the effect of highlighting any issues in the handling of packets when the buffers repeatedly hit the limit.

Revisiting default Linux behavior

Taking a step back, let’s look at the default Linux behavior. The following is kernel v5.15.16.

[Chart: default behavior under the oscillating-reader benchmark, kernel v5.15.16]

The Linux kernel is effective at freeing up space in order to make room for incoming packets when the receive buffer memory limit is hit. As documented previously, the cost for saving these packets (i.e. not dropping them) is latency.

However, the latency spikes, in milliseconds, for tcp_try_rmem_schedule(), are:

tcp_rmem 170 MiB, tcp_adv_win_scale +2 (170p2):

@ms:
[0]       27093 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
[1]           0 |
[2, 4)        0 |
[4, 8)        0 |
[8, 16)       0 |
[16, 32)      0 |
[32, 64)     16 |

tcp_rmem 146 MiB, tcp_adv_win_scale +3 (146p3):

@ms:
(..., 16)  25984 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
[16, 20)       0 |
[20, 24)       0 |
[24, 28)       0 |
[28, 32)       0 |
[32, 36)       0 |
[36, 40)       0 |
[40, 44)       1 |
[44, 48)       6 |
[48, 52)       6 |
[52, 56)       3 |

tcp_rmem 137 MiB, tcp_adv_win_scale +4 (137p4):

@ms:
(..., 16)  37222 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
[16, 20)       0 |
[20, 24)       0 |
[24, 28)       0 |
[28, 32)       0 |
[32, 36)       0 |
[36, 40)       1 |
[40, 44)       8 |
[44, 48)       2 |

These are the latency spikes we cannot have on the Cloudflare global network.

Kernel patch – Attempt #2

So the “something” that was not working in Attempt #1 was that the receive queue memory limit was hit early on, while the flow was still ramping up (when the values for sk_rmem_alloc and sk_rcvbuf were small, ~800 KB). This occurred at about the two-second mark for the 137p4 test (about 2.25 seconds for 170p2).

In hindsight, we should have noticed that tcp_prune_queue() actually raises sk_rcvbuf when it can. So we modified the patch in response to that and added a guard that allows the collapse to execute when sk_rmem_alloc is less than a threshold value:

net.ipv4.tcp_collapse_max_bytes = 6291456

The next section discusses how we arrived at this value for tcp_collapse_max_bytes.

The patch is available here.

The results with the new patch are as follows:

oscil – 300ms tests

oscil – 20ms tests

oscil – 0ms tests

iperf3 – 300 ms tests

iperf3 – 20 ms tests

iperf3 – 0ms tests

All tests are successful.

Setting tcp_collapse_max_bytes

In order to determine this setting, we need to understand what the biggest queue we can collapse without incurring unacceptable latency.

[Charts: measured TCP collapse latency as a function of collapsed queue size]

Using 6 MiB should result in a maximum latency of no more than 2 ms.

Cloudflare production network results

Current production settings (“Old”)

net.ipv4.tcp_rmem = 8192 2097152 16777216
net.ipv4.tcp_wmem = 4096 16384 33554432
net.ipv4.tcp_adv_win_scale = -2
net.ipv4.tcp_collapse_max_bytes = 0
net.ipv4.tcp_notsent_lowat = 4294967295

tcp_collapse_max_bytes of 0 means that the custom feature is disabled and that the vanilla kernel logic is used for TCP collapse processing.

New settings under test (“New”)

net.ipv4.tcp_rmem = 8192 262144 536870912
net.ipv4.tcp_wmem = 4096 16384 536870912
net.ipv4.tcp_adv_win_scale = -2
net.ipv4.tcp_collapse_max_bytes = 6291456
net.ipv4.tcp_notsent_lowat = 131072

The tcp_notsent_lowat setting is discussed in the last section of this post.

The middle value of tcp_rmem was changed as a result of separate work that found that Linux autotuning was setting receive buffers too high for localhost sessions. This updated setting reduces TCP memory usage for those sessions, but does not change anything about the type of TCP sessions that is the focus of this post.

For the following benchmarks, we used non-Cloudflare host machines in Iowa, US, and Melbourne, Australia performing data transfers to the Cloudflare data center in Marseille, France. In Marseille, we have some hosts configured with the existing production settings, and others with the system settings described in this post. Software used is iperf3 version 3.9, kernel 5.15.32.

Throughput results

Route                    RTT (ms)   Current settings (Mbps)   New settings (Mbps)   Increase factor
Iowa to Marseille        121        276                       6600                  24x
Melbourne to Marseille   282        120                       3800                  32x

Iowa-Marseille throughput

Iowa-Marseille receive window and bytes-in-flight

Melbourne-Marseille throughput

Melbourne-Marseille receive window and bytes-in-flight

Even with the new settings in place, the Melbourne to Marseille performance is limited by the receive window on the Cloudflare host. This means that further adjustments to these settings could yield even higher throughput.

Latency results

The Y-axis on these charts is the 99th percentile time for TCP collapse, in seconds.

Cloudflare hosts in Marseille running the current production settings

Cloudflare hosts in Marseille running the new settings

The takeaway in looking at these graphs is that maximum TCP collapse time for the new settings is no worse than with the current production settings. This is the desired result.

Send Buffers

What we have shown so far is that the receiver side seems to be working well, but what about the sender side?

As part of this work, we are setting tcp_wmem max to 512 MiB. For oscillating reader flows, this can cause the send buffer to become quite large. This represents bufferbloat and wasted kernel memory, both things that nobody likes or wants.

Fortunately, there is already a solution: tcp_notsent_lowat. This setting limits the size of unsent bytes in the write queue. More details can be found at https://lwn.net/Articles/560082.

The results are significant:

The RTT for these tests was 466 ms. Throughput is not negatively affected: it is at full wire speed in all cases (1 Gbps). Memory usage is as reported by /proc/net/sockstat, TCP mem.

Our web servers already set tcp_notsent_lowat to 131072 for its sockets. All other senders are using 4 GiB, the default value. We are changing the sysctl so that 131072 is in effect for all senders running on the server.
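For reference, a per-socket version of that setting looks like the sketch below. TCP_NOTSENT_LOWAT is Linux-specific; the numeric fallback 25 is the Linux constant, used here in case an older Python build does not expose the symbol.

```python
import socket

# Per-socket equivalent of the net.ipv4.tcp_notsent_lowat sysctl:
# limit the unsent bytes queued in this socket's send buffer.
TCP_NOTSENT_LOWAT = getattr(socket, "TCP_NOTSENT_LOWAT", 25)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, TCP_NOTSENT_LOWAT, 131072)
print(s.getsockopt(socket.IPPROTO_TCP, TCP_NOTSENT_LOWAT))
s.close()
```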

Conclusion

The goal of this work is to open the throughput floodgates for high BDP connections while simultaneously ensuring very low HTTP request latency.

We have accomplished that goal.

What’s Up, Home? – Razor-sharp Thinking

Post Syndicated from Janne Pikkarainen original https://blog.zabbix.com/whats-up-home-razor-sharp-thinking/21507/

Can you monitor a Philips OneBlade shaver with Zabbix? Of course you can! But why do that, and how do you monitor a dumb device with zero IoT capabilities?

Welcome to my weekly blog: I get my bread and butter by being a monitoring tech lead in a global cyber security company, but I monitor my home for fun with Zabbix & Grafana and do some weird experiments.

Staying Alive

We all know how battery-operated shavers, toothbrushes, and similar devices sound very energetic and trustworthy immediately after you have charged their battery to full. Over time (over not so long a time) they start to sound tired, but technically you can still use them. Or you think you can still use them, but instead they will betray you and die in the middle of the operation. Zabbix to the rescue!

Sing to me, bad boy

To get an idea of the battery runtime left, I needed to somehow capture the sound frequency and analyze it. The recording part was easy: after charging my razor to full, I left it running and recorded the sound with my iPhone Voice Memos.

But how to get the sound frequency? This is the part where the audio engineers of the world can laugh at me in unison.

At first I tried Audacity, as traditionally it has done every trick I could possibly need to do with audio. Unfortunately, I could not find a way to accomplish my dream with it, and even if I had, I fear I would have had to do something manually, instead of in the automated fashion I'm wishing for.

I could see all kinds of frequencies with Audacity, but was not able to isolate the humming sound of Philips OneBlade, at least not to a format I could use with Zabbix. Yes, Audacity has macros and some functionality remotely from the command line, but I interrupted my attempts with it. If you can do stuff like this with Audacity, drop me a note, I’m definitely interested!

Here come the numbers

Then, after a bit of searching, I found aubiopitch. It analyzes the sample and returns a proper heckton of numbers back to you.

Those are not GPS coordinates or lottery numbers. That's a timestamp in seconds and the sound frequency in Hz. And just by peeking at the file manually, I found that values around 100, plus or minus something, were constantly present in the file. Yes, my brain has developed a very good pattern-matching algorithm when it comes to log files, as that's what I have been staring at for the last 20+ years.

As my 30+ minute sample contained over 300,000 lines of these numbers, I did not want to bother my poor little home Zabbix with that data volume for my initial analysis. I hate spreadsheet programs, especially with data that spans hundreds of thousands of rows or more, so how to analyze my data? I could have used Grafana's CSV plugin, but to make things more interesting (for me, anyway), I called on my old friend gnuplot instead. Well, a friend in the sense that I know it exists and that I occasionally used it two decades ago for simple plotting.

There it is, my big long needle in a haystack! Among some other environmental sounds, aubiopitch did recognize the Philips soundtrack as well! What if I filter out those higher frequencies? Or at least attempt to, my gnuplot-fu is not strong.
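In code terms, the filtering idea is simple. The sample values below are made up for illustration; real aubiopitch output is one "<seconds> <Hz>" pair per line.

```python
# Hypothetical excerpt of aubiopitch output: "<seconds> <Hz>" pairs.
sample = """\
0.00 0.0
0.51 102.3
1.02 99.8
1.53 412.7
2.04 97.1"""

readings = [tuple(map(float, line.split())) for line in sample.splitlines()]
# Keep only values near the razor's ~100 Hz hum, discarding the other
# environmental sounds the microphone picked up.
hum = [(t, hz) for t, hz in readings if 80.0 <= hz <= 130.0]
print(hum)  # the three ~100 Hz readings survive the band-pass filter
```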

Yes, there it is, the upper line steadily coming down. Based on my first recording, it looks like with a full battery the captured frequency starts at about 115 Hz, and everything goes well until about 93 Hz; but if I started to shave around that point, I had better be quick, as I would only have two to three minutes left before the frequency quickly spirals down.

Production show-stoppers

This thing is not in “production” yet, because

  • I need to do more recordings to see if I get similar frequencies each time
  • I need to fiddle with iPhone Shortcuts to make this as automated as possible.

Anyway, I did start building a preliminary Zabbix template with some macros already filled in…

… and I have a connection established between my dear Siri and Zabbix, too; this will be a topic for another blog entry in the future.

I am hoping that I can get Siri to upload the Voice Memo automatically to my Zabbix Raspberry Pi, which would then immediately analyze the data with aubiopitch, maybe via a simple incron hook, and Zabbix would parse the values. That part is yet to be implemented, but I am getting there. It's just numbers, and in the end I will simply point Zabbix at a text file to gather them, or make zabbix_sender send in the values. Been there, done that.

I have been working at Forcepoint since 2014 and for this post to happen I needed to use some razor-sharp thinking. — Janne Pikkarainen

The post What’s Up, Home? – Razor-sharp Thinking appeared first on Zabbix Blog.

ICYMI: Serverless Q2 2022

Post Syndicated from dboyne original https://aws.amazon.com/blogs/compute/icymi-serverless-q2-2022/

Welcome to the 18th edition of the AWS Serverless ICYMI (in case you missed it) quarterly recap. Every quarter, we share all the most recent product launches, feature enhancements, blog posts, webinars, Twitch live streams, and other interesting things that you might have missed!

In case you missed our last ICYMI, check out what happened last quarter here.

AWS Lambda

For Node.js developers, AWS Lambda now supports the Node.js 16.x runtime version. This offers new features, including the Stable timers promises API and RegExp match indices. There is also new documentation for TypeScript with Lambda.

Customers are rapidly adopting the new runtime version by updating to Node.js 16.x. To help keep Lambda functions secure, AWS continually updates Node.js 16 with all minor updates released by the Node.js community when using the zip archive format. Read the release blog post to learn more about building Lambda functions with Node.js 16.x.

A new Lambda custom runtime is now available for PowerShell. It makes it even easier to run Lambda functions written in PowerShell. Although Lambda has supported PowerShell since 2018, this new version simplifies the process and reduces the additional steps required during the development process.

To get started, see the GitHub repository which contains the code, examples and installation instructions.

PowerShell code in Lambda console

AWS Lambda Powertools is an open-source library to help customers discover and incorporate serverless best practices more easily. Powertools for Python went GA in July 2020, followed by Java in 2021, TypeScript in 2022, and .NET is coming soon. AWS Lambda Powertools crossed the 10M download milestone and TypeScript support has now moved from beta to a release candidate.

When building with Lambda, it’s important to develop solutions to handle retries and failures when the same event may be received more than once. Lambda Powertools provide a utility to handle idempotency within your functions.
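The idea can be sketched in a few lines of plain Python. This is only an illustration of the concept; the real Powertools utility persists its cache in DynamoDB and handles expiry, in-progress records, and more.

```python
import functools
import hashlib
import json

# Cache of results keyed by a hash of the event, so a redelivered
# event returns the stored result instead of re-running side effects.
# (A dict stands in for the persistent store here.)
_results = {}

def idempotent(fn):
    @functools.wraps(fn)
    def wrapper(event, context=None):
        key = hashlib.sha256(
            json.dumps(event, sort_keys=True).encode()
        ).hexdigest()
        if key not in _results:
            _results[key] = fn(event, context)
        return _results[key]
    return wrapper

calls = []

@idempotent
def handler(event, context=None):
    calls.append(event)  # the side effect happens only once per event
    return {"charged": event["order_id"]}

# Delivering the same event twice returns the same result, and the
# side effect runs a single time.
print(handler({"order_id": 7}), handler({"order_id": 7}), len(calls))
```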

To learn more:

AWS Step Functions

AWS Step Functions launched a new opt-in console experience to help builders analyze, debug, and optimize Step Functions Standard Workflows. This allows you to debug workflow executions and analyze the payload as it passes through each state. To opt in to the new console experience and get started, follow these detailed instructions.

Events Tab in Step Functions Workflow

Amazon EventBridge

Amazon EventBridge released support for global endpoints in April 2022. Global endpoints provide a reliable way for you to improve availability and reliability of event-driven applications. Using global endpoints, you can fail over event ingestion automatically to another Region during service disruptions.

The new IngestionToInvocationStartLatency metric exposes the time to process events from the point at which they are ingested by EventBridge to the point of the first invocation. Amazon Route 53 uses this information to failover event ingestion automatically to a secondary Region if the metric exceeds a configured threshold of 30 seconds, consecutively for 5 minutes.

To learn more:

Amazon EventBridge Architecture for Global Endpoints

Serverless Blog Posts

April

Apr 6 – Getting Started with Event-Driven Architecture

Apr 7 – Introducing global endpoints for Amazon EventBridge

Apr 11 – Building an event-driven application with Amazon EventBridge

Apr 12 – Orchestrating high performance computing with AWS Step Functions and AWS Batch

Apr 14 – Working with events and the Amazon EventBridge schema registry

Apr 20 – Handling Lambda functions idempotency with AWS Lambda Powertools

Apr 26 – Build a custom Java runtime for AWS Lambda

May

May 05 – Amazon EC2 DL1 instances Deep Dive

May 05 – Orchestrating Amazon S3 Glacier Deep Archive object retrieval using AWS Step Functions

May 09 – Benefits of migrating to event-driven architecture

May 09 – Debugging AWS Step Functions executions with the new console experience

May 12 – Node.js 16.x runtime now available in AWS Lambda

May 25 – Introducing the PowerShell custom runtime for AWS Lambda

June

Jun 01 – Testing Amazon EventBridge events using AWS Step Functions

Jun 02 – Optimizing your AWS Lambda costs – Part 1

Jun 02 – Optimizing your AWS Lambda costs – Part 2

Jun 02 – Extending PowerShell on AWS Lambda with other services

Jun 02 – Running AWS Lambda functions on AWS Outposts using AWS IoT Greengrass

Jun 14 – Combining Amazon AppFlow with AWS Step Functions to maximize application integration benefits

Jun 14 – Capturing GPU Telemetry on the Amazon EC2 Accelerated Computing Instances

Serverlesspresso goes global

Serverlesspresso in five countries

Serverlesspresso is a serverless event-driven application that allows you to order coffee from your phone.

Since building Serverlesspresso for re:Invent 2021, the Developer Advocate team has put in approximately 100 additional development hours to improve the application and make it a multi-tenant event-driven serverless app.

This allowed us to run Serverlesspresso concurrently at five separate events across Europe on a single day in June, serving over 5,000 coffees. Each order is orchestrated by a single Step Functions workflow. To read more about how this application is built:

AWS Heroes EMEA Summit in Milan, Italy

The AWS Heroes program recognizes talented experts whose enthusiasm for knowledge-sharing has a real impact within the community. The EMEA-based Heroes gathered for a Summit on June 28 to share their thoughts, providing valuable feedback on topics such as containers, serverless and machine learning.

Serverless workflow collection added to Serverless Land

Serverless Land is a website that is maintained by the Serverless Developer Advocate team to help you learn with workshops, patterns, blogs and videos.

The Developer Advocate team have extended Serverless Land and introduced the new AWS Step Functions workflows collection.

Using the new collection you can explore common patterns built with Step Functions and use the 1-click deploy button to deploy straight into your AWS account.

Serverless Workflows Collection on Serverless Land

Videos

Serverless Office Hours – Tues 10AM PT

ServerlessLand YouTube Channel

Weekly live virtual office hours. In each session, we discuss a specific serverless topic or technology, then open the floor to help you with your real serverless challenges and issues. Ask us anything about serverless technologies and applications.

YouTube: youtube.com/serverlessland
Twitch: twitch.tv/aws

April

May

June

FooBar Serverless YouTube channel

Marcia Villalba frequently publishes new videos on her popular serverless YouTube channel. You can view all of Marcia’s videos at https://www.youtube.com/c/FooBar_codes.

April

May

June

Still looking for more?

The Serverless landing page has more information. The Lambda resources page contains case studies, webinars, whitepapers, customer stories, reference architectures, and even more Getting Started tutorials.

You can also follow the Serverless Developer Advocacy team on Twitter to see the latest news, follow conversations, and interact with the team.

Who, if not the president…

Post Syndicated from Emilia Milcheva original https://toest.bg/koy-ako-ne-prezidentut/

Who, if not President Rumen Radev, is on hand to put together a caretaker cabinet and pull Bulgaria out of the quagmire? To shoulder the locomotive and haul the whole train forward like a barge hauler on the Volga. Because the politicians… well, we saw what they can do – six months passed, the coalition fell apart, and it's elections again, the fourth in a year and a bit.

If the parties in parliament fail to agree on a government, faith in the presidential institution will grow like the faith in the relics of St. John the Baptist from Sozopol. Nothing like this has been seen before in Bulgarian politics. During the Videnov winter, amid hyperinflation and riots, President Petar Stoyanov (1997–2002) embodied enlightened reason and institutionalism, and the caretaker cabinet he appointed began stabilizing the state by preparing the currency board. But Rumen Radev's caretaker governments already number three in his first term alone, and by all indications there will be several more in his second. Through them, however, Radev accumulates power and builds his rating at the expense of the National Assembly in the parliamentary republic of Bulgaria.

His position is easy – constitutionally unburdened by responsibility, yet with his own strategic appointments in the regulators and the judiciary, and with appointments of ambassadors and heads of the special services. What's more, a Radev caretaker government last year replaced the heads of DANS, DAR and military intelligence in coordination with the president, instead of installing acting chiefs and leaving a regular cabinet to choose the incumbents. So at the moment national security rests in the president's hands (along with the NSO, whose chief he never replaced, even though Prime Minister Kiril Petkov demanded the resignation of Brigadier General Emil Tonev).

The capitulation of a commander-in-chief

Given these facts, it is strange to hear the president explain that he was not informed about the expulsion of the 70 Russian diplomats and staff. The head of state called on the outgoing government to reconsider its actions in view of "the consequences of the escalation of the crisis in bilateral relations", and on Prime Minister Kiril Petkov to convene a cabinet meeting. After his statement, Russia threatened to close its embassy with a verbal note handed over by the country's ambassador to Bulgaria, Mitrofanova. She issued an ultimatum: the notes declaring the staff of Russia's missions personae non gratae were to be withdrawn by noon on Friday – otherwise she would speak with President Putin.

Friday opened with GERB leader Boyko Borisov publicly backing the decision to expel the Russian diplomats, and with the insistence of GERB, DPS and Vazrazhdane that the prime minister be summoned to parliament for a hearing on the matter. "There is a war on now. We are on the side of Ukraine, of the Euro-Atlantic world. If we get into the details and into their stupidity, it will look as if we are not on this side of the line – but we are. And until then, however amateurish, dilettantish, utterly unprofessional these actions are, they must be supported so they are carried through to the end," Borisov declared. For the first time, the former prime minister – who through three terms deferred to Russian dependencies and made no serious attempt to end them – is acting, in opposition, as a 100-percent Euro-Atlanticist. And here there is no overlap with the head of state.

Since entering his second term, won on the strength of his opposition to GERB and to Chief Prosecutor Ivan Geshev, the president has begun speaking with one voice with GERB and has developed amnesia about Geshev, whose resignation he demanded alongside others only two years ago. Though he criticizes the government over the judicial reform that never happened, he has conveniently forgotten his own draft constitutional amendments. Never mind that in November last year he announced that his "time was coming".

In one respect, however, Rumen Radev has always been consistent: neither in the days when he was an active fighter against "Borisovism", and even less so now, has he uttered a word against the Kremlin's influence, let alone defined it as a threat to national security for generating corruption, fear and distrust of the EU and NATO. During the protests of 2020, and again in 2021, when he nominated himself for a second presidential term, the democratic community pretended not to notice this… omission.

Radev, for his part, has always understood Bulgaria's nuclear future as a future for the Belene NPP; the TurkStream pipeline as a misuse of funds, but not as an even heavier dependence on Gazprom; military aid for Ukraine as dragging Bulgaria into a war; and for Gazprom's halted deliveries of Russian gas he blamed the government. As a NATO general and former air force chief, he even demanded that Bulgaria guard its own airspace single-handedly, though he knows full well how untenable that is given the state of its combat aviation. His view of a "Russian" Crimea has been constant since his first term.

That is why the president will not stand with Prime Minister Petkov and the outgoing government and back them against the Kremlin's blackmail. Public sentiment in Bulgaria is wavering. According to a nationally representative opinion poll conducted in the field between June 6 and 16, 2022 by Alpha Research, commissioned by the Open Society Institute – Sofia, over 39% of respondents say that Bulgaria should align itself with the NATO and EU countries in the event of a new division of Europe akin to the Cold War. But although they hold the largest share in every age group, the supporters have a majority in none.

An alliance with countries such as Russia and Belarus is favored by 23% of respondents. Nearly 7% would prefer some other course for the country. But the share of those who cannot decide or do not answer the question is very large – nearly 31%. At the next elections, BSP, Vazrazhdane, Stefan Yanev's Bulgarian Rise, and why not Maya Manolova's Stand Up, Bulgaria as well, will compete to win them over.

The president will help however he can. Since he positioned himself as a critic of the government (which he does almost daily), the tabloid-media cannonade against Radev has stopped. And his representative on the Council for Electronic Media, Gabriela Naplatanova, helped the current director of BNT, Emil Koshlukov, known for his closeness to both GERB and DPS, keep his post. The convenient excuse was principle, in the name of which she abstained from voting, even while praising the professionalism of one of the candidates, her former boss at bTV, Venelin Petkov.

Borisov's twilight

While Rumen Radev's star climbs ever higher, Borisov's is setting. For more than a year the former prime minister and still leader of GERB has been touring Bulgaria, speaking at party gatherings and unwinding in Bankya with his grandchildren and dogs. From there he "lifted" the veto on North Macedonia with his call on Prime Minister Kiril Petkov to submit the so-called French proposal to parliament as early as June 22, after which praise promptly poured in from EPP president Manfred Weber, Enlargement Commissioner Olivér Várhelyi and Albanian Prime Minister Edi Rama. Even though it was Borisov's own government that blocked Skopje's European path in 2019.

Yet despite some situational victories, including the successful no-confidence vote, Borisov knows his time is up and that it would be best to step aside – except that he has prepared no line of retreat. He is too burdened by his ten years in power: political cannibalism, corruption schemes, cover from the chief prosecutor, the entrenchment of Russian influence. The president, by contrast, cannot be implicated in corruption, since he does not govern directly.

Despite their former rivalry, however, today the two seem to be playing on the same team – that of the opponents of Kiril Petkov's government, notwithstanding the majority in the 47th National Assembly that backed the French proposal on the North Macedonia veto. GERB and DPS, voting together with We Continue the Change and Democratic Bulgaria, carried out the will of the European political families to which they belong. The president, meanwhile, never managed to play the "Macedonian card", despite his persistence in erecting barriers to resolving the issue.

If last year the caretaker governments were a good instrument for Radev's rating, and an accelerator for the political careers of Kiril Petkov and Asen Vassilev, the situation is different now. There are too many crises and the tension is too high – rising prices, expensive fuel, fresh fears over new COVID-19 strains, war.

This time Rumen Radev will be assembling his fourth caretaker cabinet with reluctance. What can you do – who, if not he…

Cover photo: Rumen Radev and Kiril Petkov during the March 3 celebrations this year © Council of Ministers of the Republic of Bulgaria

Source

An international OCCRP investigation with the participation of Bivol: Three companies in Bulgaria imported phosphates from Bashar al-Assad. Among the buyers – Andrey Melnichenko, a Putin-linked oligarch under sanctions

Post Syndicated from Nikolay Marchenko original https://bivol.bg/%D1%82%D1%80%D0%B8-%D0%BA%D0%BE%D0%BC%D0%BF%D0%B0%D0%BD%D0%B8%D0%B8-%D1%83-%D0%BD%D0%B0%D1%81-%D1%81-%D0%B2%D0%BD%D0%BE%D1%81-%D0%BD%D0%B0-%D1%84%D0%BE%D1%81%D1%84%D0%B0%D1%82%D0%B8-%D0%BE%D1%82.html

Friday, July 1, 2022

Three Bulgarian companies have been importing phosphates from the Syrian Arab Republic, whose president Bashar al-Assad, its government and its Ministry of Petroleum have been under US sanctions since 2011, as well as…
