How Munich Re Automation Solutions Ltd built a digital insurance platform on AWS

Post Syndicated from Sid Singh original https://aws.amazon.com/blogs/architecture/how-munich-re-automation-solutions-ltd-built-a-digital-insurance-platform-on-aws/

Underwriting for life insurance can be quite manual and often time-intensive, with lots of re-keying by advisers before underwriting decisions can be made and policies finally issued. In the digital age, people purchasing life insurance want self-service interactions with their prospective insurer. People want speed of transaction, with time to cover reduced from days to minutes. While this has been achieved in the general insurance space with online car and home insurance journeys, this is not always the case in the life insurance space. This is where Munich Re Automation Solutions Ltd (MRAS) offers its customers a competitive edge to shrink the quote-to-fulfilment process using their ALLFINANZ solution.

ALLFINANZ is a cloud-based life insurance and analytics solution to underwrite new life insurance business. It is designed to transform the end consumer’s journey, delivering everything they need to become a policyholder. The core digital services offered to all ALLFINANZ customers include Rulebook Hub, Risk Assessment Interview delivery, Decision Engine, deep analytics (including predictive modeling capabilities), and technical integration services—for example, API integration and SSO integration.

Current state architecture

The ALLFINANZ application began as a traditional three-tier architecture deployed within a datacenter. As MRAS migrated their workload to the AWS Cloud, they looked at their regulatory requirements and technology stack, and decided on the silo model of multi-tenant SaaS. Each tenant is provided a dedicated Amazon Virtual Private Cloud (Amazon VPC) that holds network and application components, fully isolated from other primary insurers.

As an entry point into the ALLFINANZ environment, MRAS uses Amazon Route 53 to route incoming traffic to the appropriate Amazon VPC. The routing relies on a model where each tenant is assigned its own subdomain; for example, allfinanz.tenant1.munichre.cloud is the subdomain for tenant 1. The diagram below shows the ALLFINANZ architecture. Note: not all links between components are shown here for simplicity.

Current high-level solution architecture for the ALLFINANZ solution

Figure 1. Current high-level solution architecture for the ALLFINANZ solution

  1. The solution uses Route 53 as the DNS service, which provides two entry points to the SaaS solution for MRAS customers:
    • The URL allfinanz.<tenant-id>.munichre.cloud allows user access to the ALLFINANZ Interview Screen (AIS). The AIS can exist as a standalone application or can be integrated with a customer’s wider digital point-of-sale process.
    • The URL api.allfinanz.<tenant-id>.munichre.cloud is used for accessing the application’s Web services and REST APIs.
  2. Traffic from both entry points flows through the load balancers. While HTTP/S traffic from the application user access entry point flows through an Application Load Balancer (ALB), TCP traffic from the REST API clients flows through a Network Load Balancer (NLB). Transport Layer Security (TLS) termination for user traffic happens at the ALB using certificates provided by AWS Certificate Manager (ACM). Secure communication over the public network is enforced through TLS validation of the server’s identity.
  3. Unlike application user access traffic, REST API clients use mutual TLS authentication to authenticate a customer’s server. Since NLB doesn’t support mutual TLS, MRAS opted for a solution that passes this traffic to a backend NGINX server for TLS termination. Mutual TLS is enforced using client and server certificates issued by a private certificate authority that both the client and the server trust.
  4. Authenticated traffic from the ALB and NGINX servers is routed to Amazon EC2 instances hosting the application logic. These EC2 instances are hosted in an Amazon EC2 Auto Scaling group spanning two Availability Zones (AZs) to provide high availability and elasticity, allowing the application to scale to meet fluctuating demand.
  5. Application transactions are persisted in backend Amazon Relational Database Service (Amazon RDS) for MySQL instances. This database layer is configured across multiple AZs, providing high availability and automatic failover.
  6. The application requires the capability to integrate evidence from data sources external to the ALLFINANZ service. This message sharing is enabled through Amazon MQ, a managed message broker service for Apache ActiveMQ.
  7. Amazon CloudWatch is used for end-to-end platform monitoring through log collection, application and infrastructure metrics, and alerts to support ongoing visibility of the health of the application.
  8. Software deployment and associated infrastructure provisioning is automated through infrastructure as code using a combination of Git, AWS CodeCommit, Ansible, and Terraform.
  9. Amazon GuardDuty continuously monitors the application for malicious activity and delivers detailed security findings for visibility and remediation. GuardDuty also allows MRAS to provide evidence of the application’s strong security posture to meet audit and regulatory requirements.

High availability, resiliency, and security

MRAS deploys their solution across multiple AWS AZs to meet high-availability requirements and ensure operational resiliency. If one AZ has an ongoing event, the solution will remain operational, as there are instances receiving production traffic in another AZ. As described above, this is achieved using ALBs and NLBs to distribute requests to the application subnets across AZs.

The ALLFINANZ solution uses private subnets to segregate core application components and the database storage platform. Security groups provide networking security measures at the elastic network interface level. MRAS restricts incoming connections to approved ranges of IP addresses by attaching security groups to the ALBs. Amazon Inspector monitors workloads for software vulnerabilities and unintended network exposure. AWS WAF is integrated with the ALB to protect against SQL injection and cross-site scripting attacks on the application.

Optimizing the existing workload

One of the key benefits of this architecture is that now MRAS can standardize the infrastructure configuration and ensure consistent versioning of the workload across tenants. This makes onboarding new tenants as simple as provisioning another VPC with the same infrastructure footprint.
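To make the onboarding flow concrete, the sketch below shows how the per-tenant subdomain described earlier could be pointed at a newly provisioned tenant ALB with boto3. This is a minimal illustration only: the hosted zone ID, tenant ID, and ALB values are placeholders, and MRAS’s actual provisioning runs through their Terraform and Ansible pipeline rather than ad hoc API calls.

import boto3

route53 = boto3.client("route53")

# Placeholder values for illustration only; the real zone ID and
# per-tenant ALB endpoint would come from the tenant's provisioned VPC.
HOSTED_ZONE_ID = "Z0123456789EXAMPLE"      # assumed zone for munichre.cloud
TENANT_ID = "tenant1"
ALB_DNS_NAME = "tenant1-alb-123456.eu-west-1.elb.amazonaws.com"
ALB_HOSTED_ZONE_ID = "Z0000000000000"      # the ALB's own zone ID (region-specific)

# Create or update the tenant entry point as an alias record to the tenant ALB.
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Comment": f"Entry point for {TENANT_ID}",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": f"allfinanz.{TENANT_ID}.munichre.cloud",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": ALB_HOSTED_ZONE_ID,
                    "DNSName": ALB_DNS_NAME,
                    "EvaluateTargetHealth": True,
                },
            },
        }],
    },
)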

MRAS are continuing to optimize their architecture iteratively, examining components to modernize to cloud-native services and evolving towards the pool model of multi-tenant SaaS architecture wherever possible. For example, MRAS replaced their per-tenant NAT gateway deployments with a centralized outbound internet routing design using AWS Transit Gateway, saving approximately 30% on their overall NAT gateway spend.

Conclusion

The AWS global infrastructure has allowed MRAS to serve more than 40 customers in five AWS Regions around the world. This solution improves customers’ experience and workload maintainability by standardizing and automating the infrastructure and workload configuration within a SaaS model, compared with maintaining multiple versions for on-premises deployments. SaaS customers are also freed from the undifferentiated heavy lifting of infrastructure operations, allowing them to focus on their business of underwriting for life insurance.

MRAS used the AWS Well-Architected Framework to assess their architecture and identify key recommendations. AWS also offers the Well-Architected SaaS Lens and the AWS SaaS Factory Program, a collection of resources to empower and enable insurers at any stage of their SaaS on AWS journey.

What’s Up, Home? – Time to Get Sirious

Post Syndicated from Janne Pikkarainen original https://blog.zabbix.com/whats-up-home-time-to-get-sirious/22400/

Can you integrate Zabbix with Siri? Of course, you can! By day, I am a monitoring tech lead in a global cyber security company. By night, I monitor my home with Zabbix & Grafana and do some weird experiments with them. Welcome to my weekly blog about the home project.

I have lost count of exactly when, but a couple of major iOS/macOS versions ago Apple’s Siri gained the Shortcuts application. It allows you to automate all kinds of stuff and do some drag-and-drop ‘programming’.

What do I use it for? You guessed it right — I integrate Shortcuts with Zabbix API.

Setting up the Zabbix side

For my home Zabbix environment, I do not have any complex access rights set. So, setting up the API token for Shortcuts to consume was almost a one-click operation. In Zabbix, I went to User settings → API tokens → Create API token and let it do its stuff.

Creating a new shortcut

Now that I have the API token in place, next we need to create the shortcut. That’s not too much work though — run the Shortcuts application and create a new shortcut. What the shortcut below does is:

  • calls Zabbix API and requests our fridge temperature
  • parses the value and appends “degrees Celsius” to it
  • returns the value

Yes, that’s all of it. Drag ‘n drop a couple of elements and assign some values. Done.
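For readers who prefer to see the API side, here is a rough Python equivalent of what the shortcut’s “Get contents of URL” step sends to the Zabbix API. The server URL, host name, and item key are made up for illustration; the real shortcut assembles the same JSON-RPC request with drag-and-drop blocks rather than code.

import requests

ZABBIX_URL = "https://zabbix.example.com/api_jsonrpc.php"  # hypothetical server
API_TOKEN = "...your API token..."                          # from User settings -> API tokens

# Ask Zabbix for the latest value of a (hypothetical) fridge temperature item.
payload = {
    "jsonrpc": "2.0",
    "method": "item.get",
    "params": {
        "output": ["lastvalue"],
        "host": "Fridge",                         # assumed host name
        "search": {"key_": "fridge.temperature"}  # assumed item key
    },
    "auth": API_TOKEN,
    "id": 1,
}

result = requests.post(ZABBIX_URL, json=payload, timeout=10).json()["result"]
print(f"{result[0]['lastvalue']} degrees Celsius" if result else "No data")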

Time to get Sirious

Ok, so we have our shortcut in place. What happens if I now ask Siri to check beer temperature? This happens.

The result is actually our refrigerator temperature, the beer thing was just to make this more interesting. But, as you can see, integrating Zabbix with Siri or vice versa is not too hard.

Any real-world use cases for this, other than the geek factor? I don’t know. It might be handy to request the latest alerts or similar from Siri if I’m driving my car and I get to hear that something’s wrong at work.

I have been working at Forcepoint since 2014 and I confess I actually use Siri for some basic work stuff, too. — Janne Pikkarainen

This post was originally published on the author’s LinkedIn account.

The post What’s Up, Home? – Time to Get Sirious appeared first on Zabbix Blog.

The Memento of the Change

Post Syndicated from Emilia Milcheva original https://toest.bg/mementoto-na-promyanata/

The ban on laughing gas – the campaign that BSP leader and Deputy Prime Minister Kornelia Ninova chose for the end of the mandate and of the hopes for a new government – is symbolic. With laughing gas a wisdom tooth can be pulled painlessly: it acts quickly, dulls the pain, and produces a mild euphoria. Which is also why it is a favorite drug at parties.

The enthusiasm for the four-party coalition’s government likewise faded quickly. The summary of the six months (because in the seventh, practically no work was done) is clear –

almost nothing agreed in the coalition agreement was accomplished.

Yet one important promise was fulfilled – a solution was found for lifting the veto on North Macedonia’s EU accession negotiations. At the same time, however, the energy for change was exhausted, swept away by the failures of the Change and by fears about the present. Whatever the governing parties report for these six months, two absences are indisputable – of cohesion among the four political forces, Continuing the Change (PP), BSP, There Is Such a People (ITN), and Democratic Bulgaria (DB), and of a legislative program with deadlines.

Stagnation reigned in a coalition in which draft laws were steered by hand from the Council of Ministers. The mechanism for cooperation was missing from the coalition agreement itself, the brief annex of legislative intentions was not serious, reforms were replaced by analyses, and the expert councils meant to resolve contentious issues did not work in practice. The coalition prepared a program and proposals for new laws tied to deadlines only in July, after Kiril Petkov’s cabinet had been brought down by a no-confidence vote.

Political discourse

The coalition mechanically glued together four political forces, two of which are close in values and share (almost) the same electorate – DB and PP – while the other two, BSP and ITN, were brought in solely on the basis of anti-GERB sentiment.

When did BSP become an acceptable partner in government?

Already by the end of the first of GERB and Boyko Borissov’s three terms, everyone had forgotten the thieving triple coalition, the frozen EU funds, and the “Belene” money pit. The reasons are the same as now – Borissov and the prosecution never went beyond political talk, and the steam escaped through the whistle, just as it does now.

Although it was precisely BSP and its chairman Sergei Stanishev who were behind the nomination of Delyan Peevski as head of the State Agency for National Security (DANS), and although the double and triple coalitions magnified his importance as a media and political factor, today the Socialists’ party is being considered as a partner in the same coalition format – PP, DB, and BSP – after the upcoming October elections as well. (After its “contribution” to the collapse of the government, ITN is excluded by default.)

The same BSP that looks like the grandmother of “Vazrazhdane” –

despite attempts to present it as a “European party,” it rejected the Istanbul Convention, keeps insisting on negotiations with Gazprom and on the return of Russian diplomats, came out against the expulsion of Russian ambassador Mitrofanova (regardless of her humiliating attitude toward Bulgarians and Bulgaria), and backed PP on the derogation for Russian oil, that is, for Lukoil.

And although the Socialists helped elect Ivan Geshev as chief prosecutor, took part in the secret deal over the Supreme Judicial Council (SJC), and were not active in the protests of the summer of 2020, they ended up in the right camp. Thus, despite weak election results, they obtained resource-rich departments – the state forestry enterprises, the Ministry of Agriculture, and the Ministry of Economy, where party leader Kornelia Ninova had very direct control over arms deals. In addition, the party secured

more than 500 appointments of Socialist functionaries in the fiefdoms of Positano 20, according to Club Z.

“It is unthinkable to me that GERB could return to power; they are like radioactive waste,” MP Aleksandar Simov (BSP) said on Wednesday on bTV’s show “Litse v litse” (“Face to Face”). But inflation of almost 17% that keeps climbing, expensive fuel and equally expensive heating in winter, and a recession that is also approaching Bulgaria – all of this is scraping away the accusations of mafia and backroom dealing. The revelations about the private company Eurolab 2011 at the Kapitan Andreevo border checkpoint, installed by GERB at the gate between the East and the EU and linked to drug barons, are not enough to keep the fire burning.

What the last parliament will be remembered for

The 47th National Assembly approved the closure of the specialized judiciary, and on its last working day, July 27, the Specialized Prosecutor’s Office exonerated two people on the Magnitsky list – MRF MP Delyan Peevski and Ilko Zhelyazkov, former deputy chairman of the Bureau for Control of Special Surveillance Means, who is also linked to MRF. The inquiries against them were terminated for lack of evidence of a crime, Valentina Madzharova, head of the Specialized Prosecutor’s Office, now reassigned to the Sofia City Prosecutor’s Office, said in an interview for Bulgarian National Radio. The oligarchy and its lieutenants received their indulgence – and, as the newspaper Sega concluded, “those sanctioned by the US have no problem whatsoever with the Bulgarian authorities.”

Besides closing the specialized judiciary, however, the governing coalition also managed to adopt, at last, the amendments to the Judiciary Act that put the European delegated prosecutors on an equal footing with national prosecutors. From the rostrum of the National Assembly, outgoing Justice Minister Nadezhda Yordanova said:

The Bulgarian people expect justice. The Bulgarian people want abuses of EU funds to be investigated and the guilty to be brought to court. With this bill, members of parliament made that possible and withstood all attempts to strengthen or establish the chief prosecutor’s power over the European delegated prosecutors as well.

But the governing coalition did not make even the slightest attempt to negotiate constitutional changes, and the committee chaired by DB co-chair Hristo Ivanov never got going. No new members of the Inspectorate to the SJC were elected, even though the mandate of the current ones expired more than two years ago. The Constitutional Court has been operating short-handed for nine months, because parliament did not elect two judges, and it has no way to elect its 11-member quota in the SJC either – the deadline is October, when the elections are also due, but no nominations have been made so far.

The fight against corruption – one of the two most important campaign promises, alongside judicial reform, that made PP the leading force in the November 2021 elections – likewise has few successes to report. In the coming election campaign it will be harder for them to argue that this battle must continue, given that they failed even to reform the anti-corruption commission KPKONPI. The bill tabled by PP at the beginning of the coalition’s end earned a heap of criticism from politicians and experts for its poor quality. Given how events unfolded, with the return of the mandate and the dissolution of the 47th National Assembly, Boyko Rashkov has no way to head the Anti-Corruption Commission either – not in this parliament.

Instead of launching a substantial reform of the fight against money laundering – an area in which Bulgaria is invariably criticized – the governing parties proposed the essential first step, removing the anti-money-laundering unit from DANS, only when the second carousel of mandates began after the successful no-confidence vote against the cabinet.

Reforms of the security services, taken over by President Radev back in the days of the caretaker cabinets, were not merely sidestepped but ignored by PP. So DB’s calls to purge them of Russian influence were left hanging in the air. The expulsion of the 70 Russian diplomats and staff and Prime Minister Petkov’s media appearances had a rather short-lived effect. In his interview with The Times, he explains how he made powerful enemies in Russia after ending Bulgaria’s dependence on Russian gas supplies and is determined to fight Putin’s influence-peddling in Eastern Europe – but fears that Moscow intends to remove him from power for good.

Nothing substantial happened in the sectors that affect all Bulgarians – apart from the increase in pensions, everything in healthcare and education is as it was before (not counting the extra days off for schoolchildren in the coming 2022–2023 school year). Education Minister Nikolay Denkov’s formula for higher education – attestation, accreditation, consolidation – stopped at the attestation of lecturers. The concept for consolidating universities was supposed to be ready by June 30, but was presented almost a month later, days before the government departed – on July 27.

Apart from the replacement of a few directors of state hospitals, no effective changes were felt in healthcare. The government’s clumsy attempt – unfunded, at that – to push through a pay rise for medical staff via higher prices for clinical pathways was struck down by the Supreme Administrative Court. The ruling will be appealed, but there is no chance that nurses will receive the pay increase promised to them by every government. It was now supposed to reach BGN 1,500, with the starting base salary of a doctor averaging BGN 2,000 and that of an orderly BGN 910.

Health Minister Asena Serbezova will end her term with the scandal between two chains of private hospitals: one linked to Tihomir Kamenov, Commercial League, and Tchaikapharma, and the other to Dr. Mihail Tikov and Bulpharma. The former filed a complaint with the prosecution accusing Serbezova of corruption for granting requests for new activities at the Sofiamed, Pulmed, Burgasmed, Sveta Sofia, and Sveta Anna hospitals – the same medical activities “for which Heart and Brain has been waiting for more than a year.” (The first three hospitals belong to Tikov’s group.)

But this scandal is a consequence of a problem left unresolved since the days of the triple coalition – the National Health Map, which is supposed to determine where, and what kinds of, hospital beds are needed. Such a map is now entirely superfluous, since the uncontrolled proliferation of hospital beds is a fact, along with lax oversight by the National Health Insurance Fund; these two factors, together with enormous out-of-pocket payments by patients, are making healthcare even sicker.

The government failed to prepare and submit to parliament on time the bills envisaged in the Recovery and Resilience Plan, under which Bulgaria expects to receive the first EUR 1.2 billion in the autumn. At least some way out was found of the chaos surrounding the election of energy regulator (KEVR) chairman Stanislav Todorov, which was declared unconstitutional – the regulator’s former head, Ivan Ivanov, is to be returned to the post. The decision was supported by some PP MPs, in addition to GERB, MRF, ITN, and Vazrazhdane, after the president gave parliament until the end of this week to resolve the problem. It meant there was no way to set in motion the procedure for electing a new KEVR chairman, since it would have taken nearly three weeks.

The memorandum with Gemcorp, terminated after the scandal erupted over the shadow of Russian oligarchs in the company’s capital and over its guaranteed access to sensitive information about Bulgaria’s energy sector, left a stain on the Change. The government’s insistence on preserving Lukoil’s dominance of the Bulgarian fuel market, with the deferral until 2024 during which Bulgaria will continue to buy Russian oil, does its reputation no favors either – including the tacit refusal to replace the state’s representative in Lukoil, who holds the “golden share” and could use it to influence the company’s pricing policy.

What lies ahead

Kiril Petkov’s government leaves it to the caretaker cabinet to deal with securing gas supplies for the winter. Russia is using the blue fuel as an instrument for “punishing” the EU over the sanctions imposed for the war in Ukraine, and periodically announces supply cuts on various pretexts. As a result, gas prices have exceeded EUR 200/MWh. The European Commission has warned member states to prepare crisis plans and reductions in consumption.

Yesterday Kiril Petkov announced the possible securing of seven tankers of liquefied natural gas for Bulgaria, but the condition is that Rumen Radev’s caretaker cabinet arrange for the blue fuel to be delivered to Bulgaria. That means the caretaker administration must negotiate unloading capacity at the Greek terminal Revithoussa and at one of the liquefied natural gas terminals in Turkey, where the previous quantities were also unloaded.

This contract, which will be proposed today, is an important step by the team, together with the European Commission, towards securing gas for the next six months. In this way we make it easier for the next government to finish what has been started.

So said the outgoing prime minister, without giving details about the price or about where the commodity would be sourced. The previous tankers were secured from the American company Cheniere.

With elections in October, a subsequent roulette of mandates, and difficult negotiations for a new government, the rule of the president-appointed ministers could stretch into November or December. They will therefore have the task of securing the gas, filling the gas storage facility at Chiren, and finding a solution both for the enormous BGN 328 million debt owed by Toplofikatsia Sofia EAD to the state-owned Bulgargaz and for the company’s liquidity problems. Compensation for businesses over expensive electricity was extended until September 30 by a decision of the outgoing government.

What else will the caretaker ministers have to deal with? The Ukrainian refugees. What will happen to them, how will their support continue, will they stay in the state-owned facilities? Under the current program, after August 31 about 27,000 people will have to leave the hotels and the ministries’ holiday facilities. If they have not yet found work, places for them to get through the winter will have to be found – or some other solution. Bulgaria has nearly 82,000 Ukrainians with temporary protection status, of whom 44,979 are children.

The caretaker government is also left to finish the work on Bulgaria’s Strategic Plan for agriculture, which the European Commission returned with more than 200 remarks; the agriculture minister reported that “about 40% of them have already been closed.” The plan earmarks EUR 8 billion in support of the agricultural sector over the next five years.

The president’s fourth cabinet will also have to find solutions for winter road maintenance. The mutual accusations of corruption between PP and ITN deepened a problem inherited from GERB’s rule, which favored the sector. But now the threat is even greater because of cancelled public procurement procedures for routine repair and maintenance of the national road network: cancelled by the Road Infrastructure Agency, according to Prime Minister Petkov, or by the Commission for Protection of Competition, according to outgoing Regional Development Minister Grozdan Karadjov. And running a public procurement procedure takes several months.

And so, while the caretaker government looks for solutions to these problems, the parties’ election campaign will be under way. With laughing gas in abundance.

Cover photo (archive): Still frame from a Dnevnik video broadcast of the press conference of Kiril Petkov and Asen Vassilev

Source

UEFI rootkits and UEFI secure boot

Post Syndicated from original https://mjg59.dreamwidth.org/60654.html

Kaspersky describes a UEFI implant used to attack Windows systems. Based on it appearing to require patching of the system firmware image, they hypothesise that it’s propagated by manually dumping the contents of the system flash, modifying it, and then reflashing it back to the board. This probably requires physical access to the board, so it’s not especially terrifying – if you’re in a situation where someone’s sufficiently enthusiastic about targeting you that they’re reflashing your computer by hand, it’s likely that you’re going to have a bad time regardless.

But let’s think about why this is in the firmware at all. Sophos previously discussed an implant that’s sufficiently similar in some technical details that Kaspersky suggest they may be related to some degree. One notable difference is that the MyKings implant described by Sophos installs itself into the boot block of legacy MBR partitioned disks. This code will only be executed on old-style BIOS systems (or UEFI systems booting in BIOS compatibility mode), and they have no support for code signatures, so there’s no need to be especially clever. Run malicious code in the boot block, patch the next stage loader, follow that chain all the way up to the kernel. Simple.

One notable distinction here is that the MBR boot block approach won’t be persistent – if you reinstall the OS, the MBR will be rewritten[1] and the infection is gone. UEFI doesn’t really change much here – if you reinstall Windows a new copy of the bootloader will be written out and the UEFI boot variables (that tell the firmware which bootloader to execute) will be updated to point at that. The implant may still be on disk somewhere, but it won’t be run.

But there’s a way to avoid this. UEFI supports loading firmware-level drivers from disk. If, rather than providing a backdoored bootloader, the implant takes the form of a UEFI driver, the attacker can set a different set of variables that tell the firmware to load that driver at boot time, before running the bootloader. OS reinstalls won’t modify these variables, which means the implant will survive and can reinfect the new OS install. The only way to get rid of the implant is to either reformat the drive entirely (which most OS installers won’t do by default) or replace the drive before installation.
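As a rough illustration of what such a persistent implant relies on, the sketch below (assuming a Linux system with efivarfs mounted) lists any Driver#### and DriverOrder variables, which are the variables that tell the firmware to load a driver from disk before the bootloader. Finding unexpected entries here isn’t proof of compromise, but it’s the kind of thing you’d want to look at.

import re
from pathlib import Path

# EFI global variable GUID; Driver#### and DriverOrder live under it.
EFI_GLOBAL = "8be4df61-93ca-11d2-aa0d-00e098032b8c"
EFIVARS = Path("/sys/firmware/efi/efivars")

driver_vars = sorted(
    p.name for p in EFIVARS.glob(f"Driver*-{EFI_GLOBAL}")
    if re.match(r"Driver([0-9A-Fa-f]{4}|Order)-", p.name)
)

if driver_vars:
    print("Firmware is configured to load drivers via these variables:")
    for name in driver_vars:
        print("  ", name)
else:
    print("No Driver#### / DriverOrder variables set.")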

This is much easier than patching the system firmware, and achieves similar outcomes – the number of infected users who are going to wipe their drives to reinstall is fairly low, and the kernel could be patched to hide the presence of the implant on the filesystem[2]. It’s possible that the goal was to make identification as hard as possible, but there’s a simpler argument here – if the firmware has UEFI Secure Boot enabled, the firmware will refuse to load such a driver, and the implant won’t work. You could certainly just patch the firmware to disable secure boot and lie about it, but if you’re at the point of patching the firmware anyway you may as well just do the extra work of installing your implant there.

I think there’s a reasonable argument that the existence of firmware-level rootkits suggests that UEFI Secure Boot is doing its job and is pushing attackers into lower levels of the stack in order to obtain the same outcomes. Technologies like Intel’s Boot Guard may (in their current form) tend to block user choice, but in theory should be effective in blocking attacks of this form and making things even harder for attackers. It should already be impossible to perform attacks like the one Kaspersky describes on more modern hardware (the system should identify that the firmware has been tampered with and fail to boot), which pushes things even further – attackers will have to take advantage of vulnerabilities in the specific firmware they’re targeting. This obviously means there’s an incentive to find more firmware vulnerabilities, which means the ability to apply security updates for system firmware as easily as security updates for OS components is vital (hint hint if your system firmware updates aren’t available via LVFS you’re probably doing it wrong).

We’ve known that UEFI rootkits have existed for a while (Hacking Team had one in 2015), but it’s interesting to see a fairly widespread one out in the wild. Protecting against this kind of attack involves securing the entire boot chain, including the firmware itself. The industry has clearly been making progress in this respect, and it’ll be interesting to see whether such attacks become more common (because Secure Boot works but firmware security is bad) or not.

[1] As we all remember from Windows installs overwriting Linux bootloaders
[2] Although this does run the risk of an infected user booting another OS instead, and being able to see the implant


[The Lost Bots] Season 2, Episode 2: The Worst and Best Hollywood Cybersecurity Depictions

Post Syndicated from Rapid7 original https://blog.rapid7.com/2022/07/28/the-lost-bots-season-2-episode-2-the-worst-and-best-hollywood-cybersecurity-depictions/

Welcome back to The Lost Bots! In this episode, our hosts Jeffrey Gardner, Detection and Response (D&R) Practice Advisor, and Steven Davis, Lead D&R Sales Technical Advisor, walk us through the most hilariously bad and surprisingly accurate depictions of cybersecurity in popular film and television. They chat about back-end inaccuracies, made-up levels of encryption, and pulled power plugs that somehow end cyberattacks. Then they give a shout-out to some of the cinematic treatments that get it right — including a surprising nod to the original 1993 “Jurassic Park.”

For Season 2, we’re publishing new episodes of The Lost Bots on the last Thursday of every month. Check back with us on Thursday, August 31, for Episode 3!


Schedule email reports and configure threshold based-email alerts using Amazon QuickSight

Post Syndicated from Niyati Upadhyay original https://aws.amazon.com/blogs/big-data/schedule-email-reports-and-configure-threshold-based-email-alerts-using-amazon-quicksight/

Amazon QuickSight is a cloud-scale business intelligence (BI) service that you can use to deliver easy-to-understand insights to the people you work with, wherever they are.

You can build dashboards using combinations of data in the cloud, on premises, in other software as a service (SaaS) apps, and in flat files. Although users can always view and interact with dashboards on-demand in their browser or our native mobile apps, sometimes users prefer to receive notifications and report snapshots on a scheduled basis or when a certain value surpasses a user-defined threshold.

In this post, we walk you through the features and process of scheduling email reports and configuring threshold-based alerts.

Overview of solution

In QuickSight Enterprise edition, you can email a report from each dashboard on a scheduled basis or based on a threshold set for KPI and gauge visuals. Scheduled reports include settings for when to send them, the contents to include, and who receives the email.

Scheduled email reports work with row-level security so that each user receives reports containing only data that is relevant to them. Alert reports include threshold value, alert condition, and the receiver’s email. To set up or change the report sent from a dashboard, make sure that you’re an owner or co-owner of the dashboard.

To receive email reports, the users or group members must be part of your QuickSight account. They must have completed the sign-up process to activate their subscription as QuickSight readers, authors, or admins.

In this post, we configure the email settings for a QuickSight dashboard for users and construct a custom email for each user or group based on their data permissions.

The solution includes the following high-level steps:

  1. Set up scheduled email alerts for your existing reports.
  2. Set up threshold-based email alerts for the existing reports.
  3. View alert history.
  4. Set up email alerts if the dataset refresh fails.

Prerequisites

For this walkthrough, you should have the following prerequisites:

Set up scheduled email alerts

To configure scheduled emails for your reports, complete the following steps:

  1. On the QuickSight console, choose Dashboards in the navigation pane.
  2. Open a dashboard.
  3. On the Share menu, choose Email report.

  4. For Schedule, choose the frequency for the report. For this post, we choose Repeat once a week.

  5. For Send first report on, choose a date and time.
  6. For Time zone, choose the time zone.
  7. For Report title, enter a custom title for the report.
  8. For (Optional) E-mail subject line, leave it blank to use the report title or enter a custom subject line.
  9. For (Optional) E-mail body text, leave it blank or enter a custom message to display at the beginning of the email.

  10. Select Include PDF attachment to attach a PDF snapshot of the items visible on the first sheet of the dashboard.
  11. For Optimize report for, choose a default layout option for new users.
  12. Under Recipients, select specific recipients from the list (recommended), or select Send email report to all users with access to dashboard.
  13. To send a sample of the report before you save changes, choose Send test report.

This option is displayed next to the user name of the dashboard owner.

  14. To view a list of the datasets used by this report, choose View dataset list.

  15. Choose Save report or Update report.

A “Report scheduled” message briefly appears to confirm your entries.

  16. To immediately send a report, choose Update & send a report now.

The report is sent immediately, even if your schedule’s start date is in the future.

The following screenshot shows the PDF report visible to the user AlejandroRosalez. They have access to data where the state is California or Texas, and the city is Los Angeles or Fort Worth.

The following screenshot shows the report visible to the user SaanviSarkar. They can see data for any city, but only if the state is Texas.

The following screenshot shows the report visible to the user MarthaRivera. Martha can see the data for any city or state.

The following screenshot shows that no data is visible to the workshop user, who isn’t present in the permissions.csv file.

Set up threshold-based email alerts

To create an alert based on a threshold, complete the following steps:

  1. On the QuickSight console, choose Dashboards, and navigate to the dashboard that you want.

For more information about viewing dashboards as a dashboard subscriber in QuickSight, see Exploring Dashboards.

  2. In the dashboard, select the KPI or gauge visual that you want to create an alert for.
  3. On the options menu at upper-right on the visual, choose the Create alert icon.

  4. For Alert name, enter a name for the alert.
  5. For Alert value, choose a value that you want to set the threshold for.

The values that are available for this option are based on the values the dashboard author sets in the visual. For example, let’s say you have a KPI visual that shows a percent difference between two dates. Given that, you see two alert value options: percent difference and actual.

If the visual has only one value, you can’t change this option. It’s the current value and is displayed here so that you can use it as a reference while you choose a threshold. For example, if you’re setting an alert on actual, this value shows you what the current actual cost is (for example, $5). With this reference value, you can make more informed decisions while setting your threshold.

  6. For Condition, choose a condition for the threshold.
    • Is above – The alert triggers if the alert value goes above the threshold.
    • Is below – The alert triggers if the alert value goes below the threshold.
    • Is equal to – The alert triggers if the alert value is equal to the threshold.
  7. For Threshold, enter a value to prompt the alert.
  8. Choose Save.

A message appears indicating that the alert has been saved. If your data crosses the threshold you set, you get a notification by email at the address that’s associated with your QuickSight account.

View alert history

To view the history of when an alert was triggered, complete the following steps:

  1. On the QuickSight console, choose Dashboards, and navigate to the dashboard that you want to view alert history for.
  2. Choose Alerts.
  3. In the Manage dashboard alerts section, find the alert that you want to view the history for, and expand History under the alert name.

Set up email alerts if the dataset refresh fails

To configure email alerts if your dataset refresh fails, complete the following steps (a programmatic way to check the latest refresh status is sketched after the list):

  1. On the QuickSight console, choose Datasets, and choose the dataset that you want to set an alert for.
  2. Select Email owners when a refresh fails.
  3. Close the window.
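The console toggle above is all that’s needed for owner notifications. If you also want to inspect the outcome of the most recent refresh programmatically, the following sketch uses the QuickSight list_ingestions API via boto3; the account ID and dataset ID are placeholders.

import boto3

quicksight = boto3.client("quicksight")

ACCOUNT_ID = "111122223333"      # your AWS account ID (placeholder)
DATASET_ID = "my-dataset-id"     # hypothetical dataset ID

# Fetch the most recent SPICE ingestion (refresh); this assumes the newest
# ingestion is returned first.
ingestions = quicksight.list_ingestions(
    AwsAccountId=ACCOUNT_ID,
    DataSetId=DATASET_ID,
    MaxResults=1,
)["Ingestions"]

if ingestions and ingestions[0]["IngestionStatus"] == "FAILED":
    error = ingestions[0].get("ErrorInfo", {})
    print(f"Latest refresh failed: {error.get('Type')} - {error.get('Message')}")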

Clean up

To avoid incurring future charges, delete the QuickSight users and Enterprise account.

Conclusion

This post showed how to set up email scheduling of QuickSight dashboards for users and groups, as well as how end-users (readers) can configure alerts to be sent to them when a value surpasses or drops below a given threshold.

You can send dashboard snapshots as emails to groups of readers, and each reader receives custom reports as per the security configurations set on the dataset. For more information, see sending reports by email and threshold alerts.

You can try this solution for your own use cases. If you have comments or feedback, please leave them in the comments.


About the Author

Niyati Upadhyay is a Solutions Architect at AWS. She joined AWS in 2019 and specializes in building and supporting big data solutions that help customers analyze and get value out of their data.

Accelerate your data warehouse migration to Amazon Redshift – Part 6

Post Syndicated from Michael Soo original https://aws.amazon.com/blogs/big-data/part-6-accelerate-your-data-warehouse-migration-to-amazon-redshift/

This is the sixth in a series of posts. We’re excited to share dozens of new features to automate your schema conversion; preserve your investment in existing scripts, reports, and applications; accelerate query performance; and potentially simplify your migrations from legacy data warehouses to Amazon Redshift.

Check out all the previous posts in this series:

Amazon Redshift is the cloud data warehouse of choice for tens of thousands of customers who use it to analyze exabytes of data to gain business insights. With Amazon Redshift, you can query data across your data warehouse, operational data stores, and data lake using standard SQL. You can also integrate other AWS services such as Amazon EMR, Amazon Athena, Amazon SageMaker, AWS Glue, AWS Lake Formation, and Amazon Kinesis to use all the analytic capabilities in the AWS Cloud.

Migrating a data warehouse can be a complex undertaking. Your legacy workload might rely on proprietary features that aren’t directly supported by a modern data warehouse like Amazon Redshift. For example, some data warehouses enforce primary key constraints, making a tradeoff with DML performance. Amazon Redshift lets you define a primary key but uses the constraint for query optimization purposes only. If you use Amazon Redshift, or are migrating to Amazon Redshift, you may need a mechanism to check that primary key constraints are not being violated by extract, transform, and load (ETL) processes.

In this post, we describe two design patterns that you can use to accomplish this efficiently. We also show you how to use the AWS Schema Conversion Tool (AWS SCT) to automatically apply the design patterns to your SQL code.

We start by defining the semantics to address. Then we describe the design patterns and analyze their performance. We conclude by showing you how AWS SCT can automatically convert your code to enforce primary keys.

Primary keys

A primary key (PK) is a set of attributes such that no two rows can have the same value in the PK. For example, the following Teradata table has a two-attribute primary key (emp_id, div_id). Presumably, employee IDs are unique only within divisions.

CREATE TABLE testschema.emp ( 
  emp_id INTEGER NOT NULL
, name VARCHAR(12) NOT NULL
, div_id INTEGER NOT NULL
, job_title VARCHAR(12)
, salary DECIMAL(8,2)
, birthdate DATE NOT NULL
, CONSTRAINT pk_emp_id PRIMARY KEY (emp_id, div_id)
);

Most databases require that a primary key satisfy two criteria:

  • Uniqueness – The PK values are unique over all rows in the table
  • Not NULL – The PK attributes don’t accept NULL values

In this post, we focus on how to support the preceding primary key semantics. We describe two design patterns that you can use to develop SQL applications that respect primary keys in Amazon Redshift. Our focus is on INSERT-SELECT statements. Customers have told us that INSERT-SELECT operations comprise over 50% of the DML workload against tables with unique constraints. We briefly provide some guidance for other DML statements later in the post.

INSERT-SELECT

In the rest of this post, we dive deep into design patterns for INSERT-SELECT statements. We’re concerned with statements of the following form:

INSERT INTO <target table> SELECT * FROM <staging table>

The schema of the staging table is identical to the target table on a column-by-column basis.

A duplicate PK value can be introduced by two scenarios:

  • The staging table contains duplicates, meaning there are two or more rows in the staging data with the same PK value
  • There is a row x in the staging table and a row y in the target table that share the same PK value

Note that these situations are independent. It can be the case that the staging table contains duplicates, the staging table and target table share a duplicate, or both.

It’s imperative that the staging table doesn’t contain duplicate PK values. To ensure this, you can apply deduplication logic, as described in this post, to the staging table when it’s loaded. Alternatively, if your upstream source can guarantee that duplicates have been eliminated before delivery, you can eliminate this step.
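As an aside, one common way to deduplicate a staging table (not necessarily the approach in the linked post) is a ROW_NUMBER() rewrite. The following sketch runs it from Python with the redshift_connector driver; the connection details are placeholders, and the ORDER BY clause should reflect whichever row you actually want to keep.

import redshift_connector  # Amazon Redshift Python driver

# Placeholder connection details.
conn = redshift_connector.connect(
    host="example-cluster.abc123xyz.eu-west-1.redshift.amazonaws.com",
    database="dev",
    user="awsuser",
    password="...",
)

# Keep exactly one row per primary key value from the staging table.
dedup_sql = """
CREATE TEMP TABLE stg_dedup AS
SELECT pk_col, payload
FROM (
  SELECT s.*, ROW_NUMBER() OVER (PARTITION BY pk_col ORDER BY pk_col) AS rn
  FROM stg s
)
WHERE rn = 1;
"""

cursor = conn.cursor()
cursor.execute(dedup_sql)
conn.commit()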

Join

The first design pattern simply joins the staging and target tables. If any rows are returned, then the staging and target tables share a primary key value.

Suppose we have staging and target tables defined as the following:

CREATE TABLE stg ( 
  pk_col INTEGER 
, payload VARCHAR(100) 
, PRIMARY KEY (pk_col)
); 

CREATE TABLE tgt ( 
  pk_col INTEGER 
, payload VARCHAR(100) 
, PRIMARY KEY (pk_col)
);

We can use the following query to detect any duplicate primary key values:

SELECT count(1) 
FROM stg, tgt 
WHERE tgt.pk_col = stg.pk_col;

If the primary key has multiple columns, then the WHERE condition can be extended:

SELECT count(1)
FROM stg, tgt
WHERE
    tgt.pk_col1 = stg.pk_col1
AND tgt.pk_col2 = stg.pk_col2
AND …
;

There is one complication with this design pattern. If you allow NULL values in the primary key column, then you need to add special code to handle the NULL to NULL matching:

SELECT count(1)
FROM stg, tgt
WHERE
   (tgt.pk_col = stg.pk_col) 
OR (tgt.pk_col IS NULL AND stg.pk_col IS NULL)
;

This is the primary disadvantage of this design pattern—the code can be ugly and unintuitive. Furthermore, if you have a multicolumn primary key, then the code becomes even more complicated.

INTERSECT

The second design pattern that we describe uses the Amazon Redshift INTERSECT operation. INTERSECT is a set-based operation that determines if two queries have any rows in common. You can check out UNION, INTERSECT, and EXCEPT in the Amazon Redshift documentation for more information.

We can determine if the staging and target table have duplicate PK values using the following query:

SELECT COUNT(1)
FROM (
  SELECT pk_col FROM stg
  INTERSECT
  SELECT pk_col FROM tgt
) a
;

If the primary key is composed of more than one column, you can simply modify the subqueries to include the additional columns:

SELECT COUNT(1)
FROM (
  SELECT pk_col1, pk_col2, …, pk_coln FROM stg
  INTERSECT
  SELECT pk_col1, pk_col2, …, pk_coln FROM tgt
) a
;

This pattern’s main advantage is its simplicity. The code is easier to understand and validate than the join design pattern. INTERSECT handles the NULL to NULL matching implicitly so you don’t have to write any special code for NULL values.
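To show how the check fits into an ETL stream, here is a hedged Python sketch that runs the INTERSECT query against the stg and tgt tables from the earlier example and only performs the INSERT-SELECT when no shared primary key values are found. Connection details are placeholders; a real job would add its own error handling and logging.

import redshift_connector

# Placeholder connection details.
conn = redshift_connector.connect(
    host="example-cluster.abc123xyz.eu-west-1.redshift.amazonaws.com",
    database="dev",
    user="awsuser",
    password="...",
)
cursor = conn.cursor()

# Count PK values present in both the staging and target tables.
cursor.execute("""
    SELECT COUNT(1) FROM (
        SELECT pk_col FROM stg
        INTERSECT
        SELECT pk_col FROM tgt
    ) a;
""")
(duplicates,) = cursor.fetchone()

if duplicates:
    conn.rollback()
    raise ValueError(f"{duplicates} primary key value(s) already exist in tgt; aborting load")

cursor.execute("INSERT INTO tgt SELECT * FROM stg;")
conn.commit()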

Performance

We tested both design patterns using an Amazon Redshift cluster consisting of 12 ra3.4xlarge nodes. Each node has 12 vCPUs and 96 GiB of memory.

We created the staging and target tables with the same distribution and sort keys to minimize data redistribution at query time.

We generated the test data artificially using a custom program. The target dataset contained 1 billion rows of data. We ran 10 trials of both algorithms using staging datasets that ranged from 20–200 million rows, in 20-million-row increments.

In the following graph, the join design pattern is shown as a blue line. The intersect design pattern is shown as an orange line.

You can observe that the performance of both algorithms is excellent. Each is able to detect duplicates in less than 1 second for all trials. The join algorithm outperforms the intersect algorithm, but both have excellent performance.

So, which algorithm should you choose? If you’re developing a new application on Amazon Redshift, the intersect algorithm is probably the best choice. The inherent NULL matching logic and simple, intuitive code make this the best choice for new applications.

Conversely, if you need to squeeze every bit of performance from your application, then the join algorithm is your best option. In this case, you’ll have to trade complexity and perhaps extra effort in code review to gain the extra performance.

Automation

If you’re migrating an existing application to Amazon Redshift, you can use AWS SCT to automatically convert your SQL code.

Let’s see how this works. Suppose you have the following Teradata table. We use it as the target table in an INSERT-SELECT operation.

CREATE MULTISET TABLE testschema.test_pk_tgt (
  pk_col INTEGER NOT NULL
, payload VARCHAR(100) NOT NULL
, PRIMARY KEY (pk_col)
);

The staging table is identical to the target table, with the same columns and data types.

Next, we create a procedure to load the target table from the staging table. The procedure contains a single INSERT-SELECT statement:

REPLACE PROCEDURE testschema.insert_select()
BEGIN
INSERT INTO testschema.test_pk_tgt (pk_col, payload)
SELECT pk_col, payload FROM testschema.test_pk_stg;
END;

Now we use AWS SCT to convert the Teradata stored procedure to Amazon Redshift. First, open Settings, Conversion settings, and ensure that you’ve selected the option Automate Primary key / Unique constraint. If you don’t select this option, AWS SCT won’t add the PK check to the converted code.

Next, choose the stored procedure in the source database tree, right-click, and choose Convert schema.

AWS SCT converts the stored procedure (and the embedded INSERT-SELECT) using the join rewrite pattern. Because AWS SCT performs the conversion for you, it applies the join pattern to take advantage of its performance edge.

And that’s it, it’s that simple. If you’re migrating from Oracle or Teradata, you can use AWS SCT to convert your INSERT-SELECT statements now. We’ll be adding support for additional data warehouse engines soon.

In this post, we focused on INSERT-SELECT statements, but we’re also happy to report that AWS SCT can enforce primary key constraints for INSERT-VALUE and UPDATE statements. AWS SCT injects the appropriate SELECT statement into your code to determine if the INSERT-VALUE or UPDATE will create duplicate primary key values. Download the latest version of AWS SCT and give it a try!

Conclusion

In this post, we showed you how to enforce primary keys in Amazon Redshift. If you’re implementing a new application in Amazon Redshift, you can use the design patterns in this post to enforce the constraints as part of your ETL stream.

Also, if you’re migrating from an Oracle or Teradata database, you can use AWS SCT to automatically convert your SQL to Amazon Redshift. AWS SCT will inject additional code into your SQL stream to enforce your unique key constraints, and thereby insulate your application code from any related changes.

We’re happy to share these updates to help you in your data warehouse migration projects. In the meantime, you can learn more about Amazon Redshift and AWS SCT. Happy migrating!


About the authors

Michael Soo is a Principal Database Engineer with the AWS Database Migration Service team. He builds products and services that help customers migrate their database workloads to the AWS cloud.

Illia Kravtsov is a Database Developer with the AWS Project Delta Migration team. He has 10+ years of experience in data warehouse development with Teradata and other MPP databases.

Get a Clear Picture of Your Data Spread With Backblaze and DataIntell

Post Syndicated from Jennifer Newman original https://www.backblaze.com/blog/get-a-clear-picture-of-your-data-spread-with-backblaze-and-dataintell/

Do you know where your data is? It’s a question more and more businesses have to ask themselves, and if you don’t have a definitive answer, you’re not alone. The average company manages over 100TB of data. By 2025, it’s estimated that 463 exabytes of data will be created each day globally. That’s a massive amount of data to keep tabs on.

But understanding where your data lives is just one part of the equation. Your next question is probably, “How much is it costing me?” A new partnership between Backblaze and DataIntell can help you get answers to both questions.

What Is DataIntell?

DataIntell is an application designed to help you better understand your data and storage utilization. This analytic tool helps identify old and unused files and gives better insights into data changes, file duplication, and used space over time. It is designed to help you manage large amounts of data growth. It provides detailed, user-friendly, and accurate analytics of your data use, storage, and cost, allowing you to optimize your storage and monitor its usage no matter where it lives—on-premises or in the cloud.

How Does Backblaze Integrate With DataIntell?

Together, DataIntell and Backblaze provide you with the best of both worlds. DataIntell allows you to identify and understand the costs and security of your data today, while Backblaze provides you with a simple, scalable, and reliable cloud storage option for the future.

“DataIntell offers a unique storage analysis and data management software which facilitates decision making while reducing costs and increasing efficiency, either for on-prem, cloud, or archives. With Backblaze and DataIntell, organizations can now manage their data growth and optimize their storage cost with these two simple and easy-to-use solutions.”
—Olivier Rivard, President/CTO, DataIntell

How Does This Partnership Benefit Joint Customers?

This partnership delivers value to joint customers in three key areas:

  • It allows you to make the most of your data wherever it lives, at speed, and with a 99.9% uptime SLA—no cold delays or speed premiums.
  • You can easily migrate on-premises data and data stored on tape to scalable, affordable cloud storage.
  • You can stretch your budget (further) with S3-compatible storage predictably priced at a fraction of the cost of other cloud providers.

“Unlike legacy providers, Backblaze offers always-hot storage in one tier, so there’s no juggling between tiers to stay within budget. By partnering with DataIntell, we can offer a cost-effective solution to joint customers looking to simplify their storage spend and data management efforts.”
—Nilay Patel, Vice President of Sales and Partnerships, Backblaze

Getting Started With Backblaze B2 and DataIntell

Are you looking for more insight into your data landscape? Contact our Sales team today to get started.

The post Get a Clear Picture of Your Data Spread With Backblaze and DataIntell appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Using AWS Lambda to run external transactions on Db2 for IBM i

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/using-aws-lambda-to-run-external-transactions-on-db2-for-ibm-i/

This post is written by Basil Lin, Cloud Application Architect, and Jud Neer, Delivery Practice Manager.

Db2 for IBM i (Db2) is a relational database management system that can pose connectivity challenges with cloud environments because of a lack of native support. However, by using Docker on Amazon ECR and AWS Lambda container images, you can transfer data between the two environments with a serverless architecture.

While mainframe modernization solutions are helping customers migrate from on-premises technologies to agile cloud solutions, a complete migration is not always immediately possible. AWS offers broad modernization support from rehosting to refactoring, and platform augmentation is a common scenario for customers getting started on their cloud journey.

Db2 is a common database in on-premises workloads. One common use case of platform augmentation is maintaining Db2 as the existing system-of-record while rehosting applications in AWS. To ensure Db2 data consistency, a change data capture (CDC) process must be able to capture any database changes as SQL transactions. A mechanism then runs these transactions on the existing Db2 database.

While AWS provides CDC tools for multiple services, converting and running these changes for Db2 requires proprietary IBM drivers. Conventionally, you can implement this by hosting a stream-processing application on a server. However, this approach relies on traditional server architecture. This may be less efficient, incur higher overhead, and may not meet availability requirements.

To avoid these issues, you can build this transaction mechanism using a serverless architecture. This blog post’s approach uses ECR and Lambda to externalize and run serverless, on-demand transactions on Db2 for IBM i databases.

Overview

The solution you deploy relies on a Lambda container image to run SQL queries on Db2. While you provide your own Lambda invocation methods and queries, this solution includes the drivers and connection code required to interface with Db2. The following architecture diagram shows this generic solution with no application-specific triggers:

Architecture diagram

This solution builds a Docker image containerized with Db2 interfacing code. The code consists of a Lambda handler to run the specified database transactions, a base class that helps create database Python functions via Open Database Connectivity (ODBC), and finally a forwarder class to establish encrypted connections with the target database.

Deployment scripts create the Docker image, deploy the image to ECR, and create a Lambda function from the image. Lambda then runs your queries on your target Db2 database. This solution does not include the Lambda invocation trigger, the Amazon VPC, and the AWS Direct Connect connection as part of the deployment, but these components may be necessary depending on your use case. The README in the sample repository shows the complete deployment prerequisites.

To interface with Db2, the Lambda function establishes an ODBC session using a proprietary IBM driver. This enables the use of high-level ODBC functions to manipulate the Db2 database management system.

Even with the proprietary driver, ODBC does not properly support TLS encryption with Db2. During testing, enabling the TLS encryption option can cause issues with database connectivity. To work around this limitation, a forwarding package captures all ODBC traffic and forwards packets using TLS encryption to the database. The forwarder opens a local socket listener on port 8471 for unencrypted loopback connections. Once the Lambda function initializes an unencrypted ODBC connection locally, the forwarding package then captures, encrypts, and forwards all ODBC calls to the target Db2 database. This method allows Lambda to form encrypted connections with your target database while still using ODBC to control transactions.
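To make the forwarding idea easier to picture, here is a deliberately minimal, single-connection sketch of that pattern using only the Python standard library. It is not the forwarder class from the repository: the target host is a placeholder, 9471 is assumed as the TLS-enabled database port on the IBM i side, and real code would need to handle multiple connections, certificates, and clean shutdown.

import socket
import ssl
import threading

DB2_HOST = "db2.example.internal"   # placeholder target Db2 for IBM i host
DB2_TLS_PORT = 9471                 # assumed TLS-enabled database host server port

def pump(src, dst):
    # Copy bytes from one socket to the other until the stream closes.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

# Accept one unencrypted loopback connection from the local ODBC driver.
listener = socket.create_server(("127.0.0.1", 8471))
client, _ = listener.accept()

# Open a TLS-encrypted connection to the real Db2 server.
context = ssl.create_default_context()
upstream = context.wrap_socket(
    socket.create_connection((DB2_HOST, DB2_TLS_PORT)), server_hostname=DB2_HOST
)

# Shuttle traffic in both directions.
threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
pump(upstream, client)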

With secure connectivity in place, you can invoke the Lambda function. The function starts the forwarder and retrieves Db2 access credentials from AWS Secrets Manager, as shown in the following diagram. The function then attempts an ODBC loopback connection to send transactions to the forwarder.

Flow process

If the connection is successful, the Lambda function runs the queries, and the forwarder sends them to the target Db2 database. If the connection fails, the function makes a second attempt, restarting both the forwarder module and the loopback connection. If the second attempt also fails, the function errors out.

After the transactions complete, a cleanup process runs and the function exits with a success status. If an exception occurs at any point during the invocation, the function exits with a failure status instead. This is an important consideration when building retry mechanisms: review Lambda exit statuses to prevent the default AWS retry behavior from causing unintended invocations.
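Putting these pieces together, the invocation flow described above might look like the following sketch. It is not the repository's implementation: the secret name is a placeholder, start_forwarder and restart_forwarder stand in for whatever starts the forwarding package, and Db2Connection is the hypothetical wrapper sketched earlier. The sketch assumes boto3, which is bundled with the Lambda Python runtime.

    # Hypothetical sketch of the invoke/retry flow; names are placeholders and
    # the forwarder/connection helpers stand in for the repository's own code.
    import json
    import boto3

    def get_db2_credentials(secret_id):
        """Fetch the Db2 connection details stored in Secrets Manager."""
        client = boto3.client("secretsmanager")
        response = client.get_secret_value(SecretId=secret_id)
        return json.loads(response["SecretString"])

    def handler(event, context):
        creds = get_db2_credentials("db2-config")   # placeholder secret name
        start_forwarder(creds["host"])              # placeholder for the forwarding package

        try:
            db = Db2Connection(creds["user"], creds["password"])   # wrapper sketched earlier
        except Exception:
            # Second attempt: restart the forwarder, then retry the loopback connection.
            restart_forwarder(creds["host"])
            db = Db2Connection(creds["user"], creds["password"])

        # Any exception past this point propagates, so Lambda reports a failed
        # invocation; keep that in mind when configuring retries on the trigger.
        db.run_transactions(event.get("statements", []))
        return {"status": "success"}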

To simplify deployment, the solution contains scripts you can use. Once you provide AWS credentials, the deployment script deploys a base set of infrastructure into AWS, including the ECR repository for the Docker images and the Secrets Manager secret for the Db2 configuration details.

The deployment script also asks for Db2 configuration details. After you finish entering these, the script sends the information to AWS to configure the previously deployed secret.

Once the secret configuration is complete, the script builds and pushes a base Docker image to the deployed ECR repository. This base image contains the basic Python prerequisite libraries needed by the final code, as well as the RPM-packaged IBM driver for interfacing with Db2 via ODBC.

Finally, the script builds the solution infrastructure and deploys it into the AWS Cloud. Using the base image in ECR, it creates a Lambda function from a new Docker container image containing the SQL queries and the ODBC transaction code. After deployment, the solution is ready for testing and customization for your use case.
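For illustration only, and not as the repository's actual deployment script, the final step of creating a Lambda function from a container image in ECR can be expressed as a single boto3 call; every name, ARN, and ID below is a placeholder you would substitute with your own values:

    # Hypothetical example of the kind of call the deployment scripts make;
    # all names, ARNs, and IDs are placeholders.
    import boto3

    lambda_client = boto3.client("lambda")

    lambda_client.create_function(
        FunctionName="db2-transaction-runner",  # placeholder
        PackageType="Image",
        Code={"ImageUri": "<account>.dkr.ecr.<region>.amazonaws.com/db2-lambda:latest"},
        Role="arn:aws:iam::<account>:role/db2-lambda-role",  # placeholder
        Timeout=60,
        MemorySize=512,
        # VPC settings so the function can reach Db2 through your network path.
        VpcConfig={
            "SubnetIds": ["subnet-xxxxxxxx"],
            "SecurityGroupIds": ["sg-xxxxxxxx"],
        },
    )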

Prerequisites

Before deployment, you must have the following:

  1. The cloned code repository locally.
  2. A local environment configured for deployment.
  3. Amazon VPC and networking configured with Db2 access.

You can find detailed prerequisites and associated instructions in the README file.

Deploying the solution

The deployment creates an ECR repository, a Secrets Manager secret, a Lambda function built from a base container image uploaded to the ECR repo, and associated elastic network interfaces (ENIs) for VPC access.

Because of the complexity of the deployment, a combination of Bash and Python scripts automates the process: deploying infrastructure templates, building and pushing container images, and prompting for input where required. Refer to the README included in the repository for detailed instructions.

To deploy:

  1. Ensure you have met the prerequisites.
  2. Open the README file in the repository and follow the deployment instructions:
    1. Configure your local AWS CLI environment.
    2. Configure the project environment variables file.
    3. Run the deployment scripts.
  3. Test connectivity by invoking the deployed Lambda function.
  4. Adjust the infrastructure and code for your specific queries and use cases.

Cleanup

To avoid incurring additional charges, ensure that you delete unused resources. The README contains detailed instructions. You may either delete the provisioned resources manually through the AWS Management Console, or use the automated cleanup script in the repository. The deletion of resources may take up to 45 minutes to complete because of the ENIs created for Lambda in your VPC.

Conclusion

In this blog post, you learned how to run external transactions securely on Db2 for IBM i databases using a combination of Amazon ECR and AWS Lambda. By using Docker to package the driver, forwarder, and custom queries, you can execute transactions from Lambda, allowing modern architectures to interface directly with Db2 workloads. Get started by cloning the GitHub repository and following the deployment instructions.

For more serverless learning resources, visit Serverless Land.

[$] Security requirements for new kernel features

Post Syndicated from original https://lwn.net/Articles/902466/

The relatively new io_uring subsystem has changed the way asynchronous I/O is done on Linux systems and improved performance significantly. It has also, however, begun to run up a record of disagreements with the kernel’s security community. A recent discussion about security hooks for the new uring_cmd mechanism shows how easily requirements can be overlooked in a complex system with no overall supervision.

What’s New in InsightVM and Nexpose: Q2 2022 in Review

Post Syndicated from Randi Whitcomb original https://blog.rapid7.com/2022/07/28/whats-new-in-insightvm-and-nexpose-q2-2022-in-review/


The Vulnerability Management team kicked off Q2 by remediating the instances of Spring4Shell (CVE-2022-22965) and Spring Cloud (CVE-2022-22963) vulnerabilities that impacted cybersecurity teams worldwide. We also made several investments to both InsightVM and Nexpose throughout the second quarter that will help improve and better automate vulnerability management for your organization. Let’s dive in!

New dashboard cards based on CVSS v3 Severity (InsightVM)

CVSS (Common Vulnerability Scoring System) is an open standard for scoring the severity of vulnerabilities; it’s a key metric that organizations use to prioritize risk in their environments. To empower organizations with tools to do this more effectively, we recently duplicated seven CVSS dashboard cards in InsightVM to include a version that sorts the vulnerabilities based on CVSS v3 scores. The v3 CVSS system made some changes to both quantitative and qualitative scores. For example, Log4Shell had a score of 9.3 (high) in v2 and a 10 (critical) in v3.

Having both v2 and v3 dashboard versions available allows you to prioritize and sort vulnerabilities according to your chosen methodology. Security is not one-size-fits-all, and CVSS v2 scoring might provide more accurate vulnerability prioritization for some customers. InsightVM allows customers to choose whether v2 or v3 scoring is the better option for their organization’s unique needs.

The seven cards now available for CVSS v3 are:

  • Exploitable Vulnerabilities by CVSS Score
  • Exploitable Vulnerability Discovery Date by CVSS Score
  • Exploitable Vulnerability Publish Age by CVSS Score
  • Vulnerability Count By CVSS Score Over Time
  • Vulnerabilities by CVSS Score
  • Vulnerability Discovery Date by CVSS Score
  • Vulnerability Publish Age by CVSS Score

Asset correlation for Citrix VDI instances (InsightVM)

You asked, and we listened. By popular demand, InsightVM can now identify agent-based assets that are Citrix VDI instances and correlate them to the user, enabling more accurate asset/instance tagging.

Previously, when a user started a non-persistent VDI, it created a new AgentID, which then created a new asset in the console and consumed a user license. The InsightVM team is excited to bring customers a solution to this widespread, persistent problem.

Through the Improved Agent experience for Citrix VDI instances, when User X logs into their daily virtual desktop, the asset will automatically correlate to that user, maintain its history, and consume only one license. The result is a smoother, more streamlined experience for organizations that deploy and scan Citrix VDI.

Scan Assistant made even easier to manage (Nexpose and InsightVM)

In December 2021, we launched Scan Assistant, a lightweight service deployed on an asset that uses digital certificates for the handshake instead of account-based credentials; this alleviates the credential management headaches VM teams often encounter. Scan Assistant is also designed to improve vulnerability scanning performance in both InsightVM and Nexpose, with faster completion times for both vulnerability and policy scans.

We recently released Scan Assistant 1.1.0, which automates Scan Assistant software updates and digital certificate rotation for customers seeking to deploy and maintain a fleet of Scan Assistants. This new automation improves security – digital certificates are more difficult to compromise than credentials – and simplifies administration for organizations by enabling them to centrally manage features from the Security Console.

Currently, these enhancements are only available on Windows OS. To opt into automated Scan Assistant software updates and/or digital certificate rotation, please visit the Scan Assistant tab in the Scan Template.


Recurring coverage (Nexpose and InsightVM)

Rapid7 is committed to providing ongoing monitoring and coverage for a number of software products and services. The Vulnerability Management team continuously evaluates items to add to our recurring coverage list, basing selections on threat and security advisories, overall industry adoption, and customer requests.

We recently added several notable software products/services to our list of recurring coverage, including:

  • AlmaLinux and Rocky Linux. These free Linux operating systems have grown in popularity among Rapid7 Vulnerability Management customers seeking a replacement for CentOS. Adding recurring coverage for both AlmaLinux and Rocky Linux enables customers to more safely make the switch and maintain visibility into their vulnerability risk profile.
  • Oracle E-Business Suite. ERP systems contain organizations’ “crown jewels” – like customer data, financial information, strategic plans, and other proprietary data – so it’s no surprise that attacks on these systems have increased in recent years. Our new recurring coverage for the Oracle E-Business Suite is one of the most complex pieces of recurring coverage added to our list, providing coverage for several different components to ensure ongoing protection for Oracle E-Business Suite customers’ most valuable information.
  • VMware Horizon. The VMware Horizon platform enables the delivery of virtual desktops and applications across a number of operating systems. VDI is a prime target for bad actors trying to access customer environments, due in part to its multiple entry points; once a hacker gains entry, it’s fairly easy for them to jump into a company’s servers and critical files. By providing recurring coverage for both the VMware server and client, Rapid7 gives customers broad coverage of this particular risk profile.

Remediation Projects (InsightVM)​​

Remediation Projects help security teams collaborate and track progress of remediation work (often assigned to their IT ops counterparts). We’re excited to announce a few updates to this feature:

Better way to track progress for projects

The InsightVM team has updated the metric that calculates progress for Remediation Projects. The new metric will advance for each individual asset remediated within a “solution” group. Yes, this means customers no longer have to wait for all the affected assets to be remediated to see progress. Security teams can thus have meaningful discussions about progress with assigned remediators or upper management. Learn more.

Remediator Export

We added a new and much-requested solution-based CSV export option to Remediation Projects. Remediator Export contains detailed information about the assets, vulnerabilities, proof data, and more for a given solution. This update makes it quick and easy for security teams to share relevant data with the remediation team, and it gives remediators all of the information they need. We call this a win-win for both teams! Learn more.

Project search bar for Projects

Our team has added a search bar on the Remediation Projects page. This highly requested feature empowers customers to easily locate a project instead of having to scroll down the entire list.



Security updates for Thursday

Post Syndicated from original https://lwn.net/Articles/902795/

Security updates have been issued by Debian (firefox-esr), Fedora (chromium, gnupg1, java-17-openjdk, osmo, and podman), Oracle (grafana and java-17-openjdk), Red Hat (389-ds:1.4, container-tools:rhel8, grafana, java-1.8.0-openjdk, java-11-openjdk, java-17-openjdk, kernel, kernel-rt, kpatch-patch, pandoc, squid, and squid:4), Slackware (samba), and SUSE (crash, mariadb, pcre2, python-M2Crypto, virtualbox, and xen).

New UEFI Rootkit

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/07/new-ufei-rootkit.html

Kaspersky is reporting on a new UEFI rootkit that survives reinstalling the operating system and replacing the hard drive. From an article:

The firmware compromises the UEFI, the low-level and highly opaque chain of firmware required to boot up nearly every modern computer. As the software that bridges a PC’s device firmware with its operating system, the UEFI—short for Unified Extensible Firmware Interface—is an OS in its own right. It’s located in an SPI-connected flash storage chip soldered onto the computer motherboard, making it difficult to inspect or patch the code. Because it’s the first thing to run when a computer is turned on, it influences the OS, security apps, and all other software that follows.

Both links have lots of technical details; the second contains a list of previously discovered UEFI rootkits. Also relevant are the NSA’s capabilities, now a decade old, in this area.

Handy Tips #34: Creating context-sensitive problem thresholds with Zabbix user macros

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/handy-tips-34-creating-context-sensitive-problem-thresholds-with-zabbix-user-macros/22281/

Provide context and define custom problem thresholds by using Zabbix user macros.

Problem thresholds can vary for the same metric on different monitoring endpoints. We can have a server where having 10% of free space is perfectly fine, and a server where anything below 20% is a cause for concern.
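As a quick sketch of that scenario (the macro name, host name, and filesystem are made up for illustration; the item key and expression syntax follow standard Zabbix conventions), the two thresholds can be expressed with a single macro that has a default value and a context-specific value:

    Macro                        Value
    {$DISK.PFREE.MIN}            10    (default threshold, % of free space)
    {$DISK.PFREE.MIN:"/data"}    20    (used when the context matches the /data filesystem)

    Trigger prototype expression:
    last(/Linux server/vfs.fs.size["{#FSNAME}",pfree])<{$DISK.PFREE.MIN:"{#FSNAME}"}

When the trigger fires for a discovered filesystem, Zabbix first looks for a macro whose context matches the {#FSNAME} value and falls back to the default macro value if no matching context is found.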

Define Zabbix user macros with context:

  • Override the default macro value with a context-specific value
  • Add flexibility by using context macros as problem thresholds
  • Define a default value that will be used if a matching context is not found
  • Any low-level discovery macro value can be used as the context

Check out the video to learn how to define and use user macros with context:

How to define macros with context:

  1. Navigate to Configuration → Hosts
  2. Click on the Discovery button next to your host
  3. Press the Create discovery rule button
  4. We will use the net.if.discovery key to discover network interfaces
  5. Add the discovery rule
  6. Press the Item prototypes button
  7. Press the Create item prototype button
  8. We will use the net.if.in["{#IFNAME}"] item key
  9. Add the Change per second and Custom multiplier:8 preprocessing steps
  10. Add the item prototype
  11. Press the trigger prototypes button
  12. Press the Create trigger prototype button
  13. Create a trigger prototype: avg(/Linux server/net.if.in["{#IFNAME}"],1m)>{$IF.BAND.MAX:"{#IFNAME}"}
  14. Add the trigger prototype
  15. Click on the host and navigate to the Macros section
  16. Create macros with context
  17. Provide context for interface names: {$IF.BAND.MAX:"enp0s3"}
  18. Press the Update button
  19. Simulate a problem and check if context is taken into account

Tips and best practices
  • Macro context can be matched with static text or a regular expression
  • Only low-level discovery macros are supported in the context
  • Simple context macros are matched before matching context macros that contain regular expressions
  • Macro context must be quoted with ” if the context contains a } character or starts with a ” character

Learn how to get the most out of your low-level discovery rules to create smart and flexible items, triggers, and hosts by registering for the Zabbix Certified Professional course. During the course, you will learn how to enhance your low-level discovery workflows by using overrides, filters, macro context, and receive hands-on practical experience in creating fully custom low-level discovery rules from scratch.

The post Handy Tips #34: Creating context-sensitive problem thresholds with Zabbix user macros appeared first on Zabbix Blog.

What we learnt from the CSTA 2022 Annual Conference

Post Syndicated from James Robinson original https://www.raspberrypi.org/blog/what-we-learnt-from-the-csta-2022-annual-conference/

From experience, being connected to a community of fellow computing educators is really important, especially given that some members of the community may be the only computing educator in their school, district, or country. These professional connections enable educators to share and learn from each other, develop their practice, and importantly reduce any feelings of isolation.

It was great to see the return of the Computer Science Teachers Association (CSTA) Annual Conference to an in-person event this year, and I was really excited to be able to attend.

A teacher attending Picademy laughs as she works through an activity

Our small Raspberry Pi Foundation team headed to Chicago for four and a half days of meetups, professional development, and conversations with educators from all across the US and around the world. Over the week our team ran workshops, delivered a keynote talk, gave away copies of Hello World magazine, and signed up many new subscribers. You too can subscribe to Hello World magazine for free at helloworld.cc/subscribe.

We spoke to so many educators about all parts of the Raspberry Pi Foundation’s work, with a particular focus on the Hello World magazine and podcast, and of course The Big Book of Computing Pedagogy. In collaboration with CSTA, we were really proud to be able to provide all attendees with their own physical copy of this very special edition. 

It was genuinely exciting to see how pleased attendees were to receive their copy of The Big Book of Computing Pedagogy. So many came to talk to us about how they’d used the digital copy already and their plans for using the book for training and development initiatives in their schools and districts. We gave away every last spare copy we had to teachers who wanted to share the book with their colleagues who couldn’t attend.

Don’t worry if you couldn’t make it to the conference: The Big Book of Computing Pedagogy is available as a free PDF, which, thanks to its Creative Commons licence, you are welcome to print for yourself.

Another goal for us at CSTA was to support and encourage new authors to the magazine in order to ensure that Hello World continues to be the magazine for computing educators, by computing educators. Anyone can propose an article idea for Hello World by completing this form. We’re confident that every computing educator out there has at least one story to tell, lessons or learnings to share, or perhaps a cautionary tale of something that failed.

We’ll review any and all ideas and will support you to craft your idea into a finished article. This is exactly what we began to do at the conference with our workshop for writers led by Gemma Coleman, our fantastic Hello World Editor. We’re really excited to see these ideas flourish into full-blown articles over the coming weeks and months.

Our week culminated in a keynote talk delivered by Sue, Jane, and James, exploring how we developed our 12 pedagogy principles that underpin The Big Book of Computing Pedagogy, as well as much of the content we create at the Raspberry Pi Foundation. These principles are designed to describe a set of approaches that educators can add to their toolkit, giving them a shared language and the agency to select when and how they employ each approach. This was something we explored with teachers in our final breakout session where teachers applied these principles to describe a lesson or activity of their own.

We found the experience extremely valuable and relished the opportunity to talk about teaching and learning with educators and share our work. We are incredibly grateful to the entire CSTA team for organising a fantastic conference and inviting us to participate.

Discover more with Hello World — for free

Cover of issue 19 of Hello World magazine.

Subscribe now to get each new Hello World straight to your digital inbox, for free! And if you’re based in the UK and do paid or unpaid work in education, you can subscribe for free print issues.

The post What we learnt from the CSTA 2022 Annual Conference appeared first on Raspberry Pi.

TPP Brikel: A story that should have ended in 2010

Post Syndicated from Тоест original https://toest.bg/brikel-part1/

Rusted metal structures, missing window panes, and a thick layer of black dust over every surface of the TPP Brikel site look like a stage set from the industrial history of Galabovo. The small town lies in the Maritsa coal basin in Stara Zagora province, often called the "beating heart" of Bulgarian energy.

Clouds of smoke in various shades of grey drift lethargically but persistently from the 150-metre chimney. They also slip out unhindered through the holes left by missing windows and through every crack in the building. In institutional language this is called "fugitive emissions": waste gases from burning coal that contain a cocktail of pollutants harmful to human health and the environment.

The inspection team, which spent two hours at the site, felt unwell after just those two hours. I cannot believe that Bulgarian citizens work in such conditions, and I expect all institutions to give me a report as soon as possible so we can decide what the next steps should be regarding this incredible pollution that happens at this site every day.

This is what caretaker Prime Minister Kiril Petkov said at a Council of Ministers meeting after his unannounced visit to the power plant on 19 July this year. He also announced that he had asked the State Agency for National Security (DANS) to establish whether Hristo Kovachki owns TPP Brikel or controls the company. As of this article's editorial deadline, the results of the checks requested by Petkov had not been made public.

The on-site inspection recorded sulphur dioxide pollution of 5,600 µg/m3 and fine particulate matter of 1,357 µg/m3. "They quoted data from the station's instantaneous measurements, which are incorrect," Brikel's executive director Yanilin Pavlov told bTV three days after the inspection.

The human-health limit values for these indicators are averaged per hour, per day, and per year. In the European Union, and correspondingly in Bulgarian legislation, the hourly average limit for sulphur dioxide is 350 µg/m3 and the daily average limit is 125 µg/m3. The limits for particulate matter up to 10 microns in size (PM10) are a daily average of 50 µg/m3 and an annual average of 40 µg/m3. This means that during the surprise inspection of the Galabovo plant on 19 July, which also involved the regional health inspectorate (RZI), the regional environment and water inspectorate (RIOSV), the fire safety service, and the Labour Inspectorate,

the pollution was at least 15 times above the limit for sulphur dioxide and at least 27 times above the limit for PM10.

It should be noted that the values measured during the inspection were most likely instantaneous rather than hourly or daily averages, so comparing them directly with the legal limits is not entirely accurate. Even so, the recorded exceedances are, to put it mildly, alarming.

Air quality standards in the European Union are largely the product of political consensus. The World Health Organization's guidelines, updated in 2021 and focused on human health without taking into account the economic considerations of industry and governments, are even stricter. The limits recommended by the WHO are two to three times lower than the EU's.

Irritation of the eyes and mucous membranes is among the most recognizable symptoms of air pollution. Chronic exposure to pollutants such as fine particulate matter and sulphur dioxide can lead to cardiovascular problems (cardiac arrhythmia and heart attack) and lung diseases (asthma attacks, bronchitis, lung cancer). According to the WHO, there is no safe level of particulate matter pollution.

"Everything is within the norms and within the limits set for us,"

Pavlov repeated in another interview, this time for Nova TV. Yet when asked how he explains the numerous complaints from environmental organizations and the protests of Galabovo residents over the dirty air in the area, he conceded that "what is obvious is indeed so."

What became obvious after the extraordinary inspection and sharpened public attention has become normal daily life for the eight thousand residents of Galabovo and is documented in hundreds of photographs. Even after the plant's scheduled maintenance ended in June this year, TPP Brikel spewed clouds of black smoke for days on end.

A collage of photos from citizen complaints sent to RIOSV Stara Zagora in the first half of July 2022. © Greenpeace Bulgaria

If Brikel closes, 1,300 workers at the plant will be left without jobs, as will another 1,000 people at the state-owned Maritsa East Mines, Yanilin Pavlov also warned. History repeats itself: this is not the first time workers have been used to rescue businesses linked to Hristo Kovachki.

A time machine

Built in 1964, TPP Brikel is the oldest coal-fired plant in the Maritsa basin and one of the oldest in Bulgaria. It was supposed to close more than a decade ago because it could not meet the new, stricter pollution-control requirements that followed Bulgaria's accession to the European Union in 2007. Other old plants were in a similar position: TPP Bobov Dol, TPP Republika in Pernik, TPP Maritsa 3 in Dimitrovgrad, and TPP Sliven, which the environmental organization Greenpeace identifies as part of "Kovachki's coal empire."

Brikel had been granted permission to operate for 20,000 hours with its existing installation and without abatement equipment between 2008 and 2011. That limit was exhausted without the company starting any preparations to shut the plant down. Instead of insisting that the requirements be met, the first Borisov cabinet twice asked the European Commission for an extension for the polluting power plants, citing the risk that "2,000 workers, and several thousand more miners because of the closure of other operations," would be left without work.

The trade unions actively assisted in the process of extending the life of the polluters.

In 2010, Konstantin Trenchev, then president of the Podkrepa labour confederation, criticized what he saw as the short deadlines for building the abatement equipment, even though the plants themselves had declared that the problem was not the deadlines but the size of the required investment. In October of the same year, Brikel's workers went out to protest, and the other union, CITUB, insisted on an 18-month extension during which an installation for removing sulphur dioxide from the flue gases could be built, despite reports that the owner had refused to take that step.

Days later, Hristo Kovachki himself pressed for such an extension, promising environment minister Nona Karadzhova and energy minister Traicho Traikov an investment in a desulphurization installation. During the winter season the plant operated with a desulphurization unit borrowed from TPP Sliven, described by Karadzhova as a "makeshift contraption." At the same time, within a remarkably short period, the plant acquired its own desulphurization unit, built "with a great deal of effort and entirely with the company's own funds," with designers from the company, outside specialists, and a bank guarantee from First Investment Bank.

When in 2003 their owner had the choice between operating for 20,000 hours and pocketing the profit, or making investments, including environmental ones, he chose the big money instead of securing the future of his own operation.

Those are the words of Boyko Borisov during his first term as prime minister. Under his next two cabinets, TPP Brikel continued to operate with its ageing installation and, increasingly often, with a non-functioning desulphurization unit. Meanwhile, on 12 May this year,

the Court of Justice of the EU ruled against Bulgaria over the systematic sulphur dioxide air pollution in the Galabovo area between 2007 and 2018.

If Brikel stops operating, Galabovo, Sliven, Bobov Dol, and Ruse will be left without heating for the winter, because the plant processes the coal that the power plants in those towns use in their production, director Yanilin Pavlov warns in 2022. At the end of 2010, the government allowed TPP Brikel to operate at "minimal capacity" for the heating season because of "the impossibility, in the short term, of finding a solution for heating the town of Galabovo." More than a decade later, the "short-term" problem still has no solution, and the social argument still makes the black smoke look less black in the eyes of those in power and of a large part of society.

The government's temporary permission envisaged the final decommissioning of TPP Brikel by the end of April 2011 at the latest. A month before the deadline, however, the company filed an application with the Executive Environment Agency for an integrated permit so that it could resume operations. The environment ministry committed itself "to assist in the swift completion of the environmental procedures" for issuing the document to a company that would go on to prove itself an untouchable chronic polluter.

In the foreground: TPP Brikel's waste depot, better known as "the black lake," which often covers the region in dust in strong winds. In the background: the AES (left) and Brikel (right) plants in January 2021. © Greenpeace Bulgaria

"It doesn't matter who owns Brikel,"

says Yanilin Pavlov today. He admits that he "discusses many topics" with the businessman Hristo Kovachki, who over the years has presented himself as a consultant. In the period around 2010, however, the businessman took an active part in the negotiations to extend TPP Brikel's operation, and the media openly called him the owner of this and other plants.

Even the prime minister at the time, Boyko Borisov, expressed his surprise at this fact before the Council of Ministers. "By the way, I was quite astonished the other day to see Mr Kovachki at the Brikel negotiations with minister Traikov and Nona Karadzhova, because on paper he is nowhere listed at Brikel; listed there is some 25- or 26-year-old girl who, by my theory of age, must have been considerably younger five or six years ago," Borisov commented, according to the transcript of the meeting of 27 October 2010.

He was referring to Iva Tsvetkova, owner of the company Kalista 2004, which privatized Brikel EAD in 2004, when she was 21, for the sum of BGN 29.2 million. The firm is registered at 34 "3020" Street in the Orion industrial zone in Sofia, like many other companies linked to Hristo Kovachki.

Soon after the closure saga turned into an extension of Brikel's life, in 2011 the company, along with several other energy-sector firms,

passed into the hands of offshore companies in the Seychelles

with the approval of the Commission for Protection of Competition. The large-scale reshuffle came against the backdrop of a freeze on BGN 143 million worth of Hristo Kovachki's assets over suspicions that they had been acquired through criminal activity. The freeze was imposed as part of a tax fraud case in which the businessman was later acquitted.

The offshore story does not end there, though: the companies left the tax haven when the law on economic and financial relations with companies registered in preferential tax regime jurisdictions and their beneficial owners came into force. The reason is that offshore-owned companies cannot take part in public procurement or gain access to state resources. Nor can they obtain concession permits or energy licences.

Within a few weeks, more than ten Seychelles companies with assets in the Bulgarian energy sector were transferred to newly registered firms in Cyprus and the United Kingdom, headquartered at a handful of shared addresses. The Commission for Protection of Competition again approved the deals without concerns about concentration.

Today those connections are supposed to be analysed by DANS at the behest of caretaker Prime Minister Kiril Petkov, while Hristo Kovachki's name falls ever less readily from the lips of politicians, regulators, and journalists.

To be continued…

Cover photo: A still from a Greenpeace Bulgaria video shot with a drone above TPP Brikel

Source
