Top 5: Featured Architecture Content for July

Post Syndicated from Elyse Lopez original https://aws.amazon.com/blogs/architecture/best-top-5-featured-architecture-content-for-july/

The AWS Architecture Center provides new and notable reference architecture diagrams, vetted architecture solutions, AWS Well-Architected best practices, whitepapers, and more. This blog post features some of our best picks from the new and newly updated content we released this month.

1. Tag Tamer Solution

Consistency is key when you’re using tags to keep your AWS resources organized. This brand-new AWS Solutions Implementation helps you apply and manage tags for new and existing AWS resources via a pre-built web user interface that enforces tagging rules and helps you spot inconsistencies.


2. AWS DevOps Monitoring Dashboard

Do you need better insight into how your DevOps initiatives are performing? This new AWS Solutions Implementation automates the process of ingesting, analyzing, and visualizing continuous integration/continuous delivery (CI/CD) metrics for near-real-time analytics. The solution includes pre-built Amazon QuickSight dashboards, but you can also customize it for use with your existing business intelligence tools.

3. An Overview of AWS Cloud Data Migration Services

Migrating data to the cloud is a well-understood business imperative, but it’s still ongoing work for many, many AWS customers. Effective planning of data migrations—particularly when working with live, mission-critical data—should use best practices built from broad experience. This whitepaper was recently updated with the latest guidance.

4. Building an AWS Perimeter

In traditional on-premises environments, you establish a high-level perimeter to help keep untrusted entities from getting in and your data from getting out. This new whitepaper offers guidance on how to draw the same sort of circle around your AWS resources in the cloud so you can clearly separate “my AWS” from other customers.

5. AWS Well-Architected Tool

This update to the AWS Well-Architected Tool gives you the option to mark certain best practices as “not applicable” when you’re running a workload review, and to record why the best practice doesn’t apply. The new functionality offers better flexibility when certain best practices might not be applicable to your business needs or organizational maturity. You can mark best practices as not applicable using the Well-Architected API, too.
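For teams that automate their reviews, the same capability is exposed through the API. Below is a rough boto3 sketch of marking an individual best practice (a "choice" on a question) as not applicable and recording a reason. The workload, question, and choice IDs are placeholders, and the exact parameter shape (ChoiceUpdates with a Status and Reason) is an assumption for illustration, not taken from the post itself.

import boto3

wellarchitected = boto3.client("wellarchitected")

# Mark one best practice (a "choice" on a question) as not applicable and record why.
# All identifiers below are placeholders.
wellarchitected.update_answer(
    WorkloadId="11112222333344445555666677778888",
    LensAlias="wellarchitected",
    QuestionId="example-question-id",
    ChoiceUpdates={
        "example_choice_id": {
            "Status": "NOT_APPLICABLE",
            "Reason": "BUSINESS_PRIORITIES",
            "Notes": "Not applicable at our current organizational maturity.",
        }
    },
)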

The Thin Ice of the Negotiations Has Cracked

Post Syndicated from Емилия Милчева original https://toest.bg/tunkiyat-led-na-pregovorite-se-napuka/

The summer heat is pushing 40 degrees. From the field of politics we can hear the thin ice of the government negotiations cracking. In this spectacle, the front-row seats have already been bought up by Boyko Borisov and GERB, who are waiting for the negotiating-and-talking parties to fall through the cracks – and then to applaud loudly. Should that happen, all three “parties of change” will tumble into the icy waters – the winner of the…

Source

Enter Plamen Nikolov

Post Syndicated from Емилия Милчева original https://toest.bg/i-vleze-plamen-nikolov/

For the first time in Bulgaria, the political force that won an election is proposing a prime minister who is not its party leader. On behalf of the party „Има такъв народ“ (There Is Such a People), Plamen Nikolov, an entrepreneur MP with a doctorate in the philosophy of law, politics, and economics, yesterday received the mandate to form a government from President Rumen Radev. Within the party, Dr. Nikolov is an expert on…

Source

The Risk After the Illness: Post-COVID Symptoms and Their Consequences

Post Syndicated from Йоанна Елми original https://toest.bg/postkovidnite-simptomi/

“I fell ill in October 2020. By February of this year I still had a slight fever, and the symptoms were hard to settle. I stopped getting out of breath sometime in March. There were many problems – changing doctors, therapy with several kinds of antibiotics that had no effect.” This is the account of S., a 31-year-old woman from Sofia. After our conversation she goes out for a short run – last summer she was running marathons; now, because of the cough and the fatigue…

Source

Citizenship as a Cause: A Conversation with Venelin Stoychev

Post Syndicated from Светла Енчева original https://toest.bg/venelin-stoychev-interview-uchebnik/

On the occasion of the new civic education textbook for the 12th grade from the publisher „Булвест 2000“/Klett, we talk with one of its authors – Venelin Stoychev, who holds a master’s degree in political science and a doctorate in sociology. The textbook’s team of authors also includes Prof. Georgi Dimitrov, founder of the Sociology program at South-West University and the European Studies program at Sofia University, and Assoc. Prof. Velina Todorova, a lawyer and university lecturer…

Source

On Second Reading: „Сивият мъх сияе“ (The Grey Moss Glows)

Post Syndicated from Севда Семер original https://toest.bg/na-vtoro-chetene-siviyat-mah-siyae/

None of us reads only the newest books. So why are they the only ones written about? “On Second Reading” is a column in which we open the lists of books published at least a year ago, read them, and recommend our favourites among them. The column is part of the Toest Readers’ Club partner programme. The choice of titles, however, rests solely with the authors – Stefan Ivanov and Sevda Semer, who would recommend to you…

Source

The Week in Toest (26–30 July)

Post Syndicated from Тоест original https://toest.bg/editorial-26-30-july-2021/

Shortly before the denouement of the plot around the composition and direction of the new government, the clouds over the negotiating parties thickened and thunder rumbled. Emilia Milcheva traces the story and adds context in her piece “The Thin Ice of the Negotiations Has Cracked”. Late on Friday, President Radev handed the mandate to form a government to „Има такъв народ“ – the leading political force in the National…

Source

How to Log Amazon SES details using Amazon CloudWatch

Post Syndicated from Rajat Kashyap original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-log-amazon-ses-details-using-amazon-cloudwatch/

A common use case for Amazon Simple Email Service (Amazon SES) users is centralizing SES notifications from different domains or email addresses to understand how many emails were delivered, bounced, or received complaints. These details are valuable for making business decisions that strengthen your mail delivery process. SES currently provides aggregate metrics such as the number of sends and rejects, but there is no direct way to log details such as the From identity, recipient identity, subject, timestamp, source IP address, and message ID when sending emails.

In this blog post, you will learn how to capture detailed notifications about your bounces, complaints, and deliveries and log them in Amazon CloudWatch. With a centralized logging solution, you can track the domains or email addresses that generate complaints, identify email issues, stay informed, and even build custom dashboards. Logging the notifications in CloudWatch lets you store them for the long term, back up the data, and set lifecycle policies for data retention.

Let’s first look at how Amazon SES categorizes whether an email was delivered, bounced, or received a complaint.

Types of notifications

A bounce occurs when a message cannot be delivered to the intended recipient. There are two types of bounces: hard bounces and soft bounces.

  • Hard bounces occur when an email cannot be delivered because of a persistent issue, such as a recipient’s email address or domain that does not exist. Amazon SES will not attempt to redeliver the message.
  • Soft bounces occur when a temporary issue prevents the email from being delivered, such as when the recipient’s mailbox is full, the connection to the receiving email server times out, or there are too many simultaneous connections to the receiving mail server. For soft bounces, Amazon SES will attempt to redeliver the message.

A complaint occurs when a recipient:

  • Reports that they don’t want to receive an email.
  • Clicks the “Report spam” button in their email client, flagging the email as spam to their email provider.

A delivery occurs when:

  • An email is successfully delivered to the recipient’s mail server.

Prerequisites

For this post, you should be familiar with the following:

Architecture Overview

The AWS CloudFormation template provided in this post automatically sets up the architecture components that capture detailed notifications about your bounces, complaints, and deliveries and log them in Amazon CloudWatch. You still have to perform some manual tasks to configure and validate components. Follow the steps below in sequence.
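The core of the solution is an AWS Lambda function subscribed to the SNS topic; it parses each SES notification and writes it to a CloudWatch Logs log group. The following is a minimal sketch of such a handler, assuming the log group name is passed in an environment variable named CLOUDWATCH_GROUP_NAME and that one log stream is created per invocation; the function deployed by the CloudFormation template may be implemented differently.

import json
import os
import time
import boto3

logs = boto3.client("logs")
LOG_GROUP = os.environ.get("CLOUDWATCH_GROUP_NAME", "/aws/ses/bounce_logs")  # assumed env var

def handler(event, context):
    """Write each SES notification delivered via SNS to a CloudWatch Logs group."""
    stream_name = context.aws_request_id  # one stream per invocation (assumption)
    logs.create_log_stream(logGroupName=LOG_GROUP, logStreamName=stream_name)

    log_events = []
    for record in event["Records"]:
        notification = json.loads(record["Sns"]["Message"])  # SES notification JSON
        log_events.append({
            "timestamp": int(time.time() * 1000),
            "message": json.dumps({
                "notificationType": notification.get("notificationType"),
                "source": notification.get("mail", {}).get("source"),
                "destination": notification.get("mail", {}).get("destination"),
                "messageId": notification.get("mail", {}).get("messageId"),
                "timestamp": notification.get("mail", {}).get("timestamp"),
            }),
        })

    logs.put_log_events(
        logGroupName=LOG_GROUP,
        logStreamName=stream_name,
        logEvents=log_events,
    )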


Getting Started with Solution Deployment

Prerequisite tasks to be completed before deploying the logging solution:

  1. Verify your domain and email address in Amazon SES.
  2. Create an Amazon Simple Notification Service (Amazon SNS) topic to capture detailed notifications for bounces, complaints, or deliveries (refer to Create SNS Topic).
  3. Set up notifications at the domain or From-address level for the notification types you want to log in CloudWatch:
    1. Choose the verified identity (domain name) for which you want to set up bounce notifications.
    2. On the Notifications tab, in the Feedback notifications section, choose Edit.
    3. From the drop-down list, set the Bounce notifications topic to the ARN of the SNS topic you created in the prior step, and choose Save changes.

Note: This blog post walks through logging bounce notifications. To capture complaint or delivery notifications as well, configure an SNS topic for each and redeploy the CloudFormation template, choosing the Complaint or Delivery event type.

Once the prerequisite tasks are completed, the logging solution is ready to be deployed.

First, download the ses_bounce_logging_blog.yml CloudFormation template from the link below. Once you have saved it on your local machine, follow the next steps to install the solution.

https://github.com/aws-samples/digital-user-engagement-reference-architectures/blob/master/cloudformation/ses_bounce_logging_blog.yml

Steps to run the CloudFormation template:

  1. Go to the CloudFormation console and choose Create stack.
  2. Select the Upload a template file option and choose Choose file to upload the ses_bounce_logging_blog.yml file you downloaded earlier.
  3. Choose Next on the Create stack screen.
  4. Specify a stack name, for example ses-bounce-logging.
  5. Change the default value of CloudWatchGroupName if needed.
  6. Select the Event Type (“Bounce”, “Complaint”, or “Delivery”) you wish to track.
  7. Enter the Amazon Resource Name (ARN) of the Amazon SNS topic created in the prior step in the SNSTopicName parameter field, and choose Next.
  8. Choose Next on the Configure stack options screen.
  9. Select “I acknowledge that AWS CloudFormation might create IAM resources” and choose Create stack.

Wait for the CloudFormation stack to complete, and then verify that the resources in the stack have been created. Check each resource individually:

  • The IAM role was created.
  • The Lambda function that logs bounce notifications was created.
  • The Lambda function’s subscription to the SNS topic was created and confirmed.

  • You can also verify the SNS and Lambda integration in the Lambda console.

How to test the solution?

You can test the bounce scenario using the Amazon SES mailbox simulator. When you send an email to the selected mailbox simulator address, a simulated detailed notification is delivered to the Amazon SNS topic you configured in the prerequisite section. You can send the test email using the AWS CLI, an AWS SDK, or the Amazon SES console from the domain you configured to receive notifications.

[email protected] (In scope of this blog post and applies to test bounce notifications)

Use the following addresses if you want to test the other scenarios:

[email protected]

[email protected]
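If you prefer the AWS SDK, here is a minimal boto3 sketch that sends a test message to the bounce simulator. The From address is a placeholder you must replace with an address on your verified domain, and bounce@simulator.amazonses.com is the documented SES mailbox simulator address for bounces.

import boto3

ses = boto3.client("ses", region_name="us-east-1")

# Send a test email to the SES bounce simulator. The From address below is a
# placeholder; replace it with an address on your verified domain.
ses.send_email(
    Source="sender@your-verified-domain.example",
    Destination={"ToAddresses": ["bounce@simulator.amazonses.com"]},
    Message={
        "Subject": {"Data": "Bounce notification test"},
        "Body": {"Text": {"Data": "Testing the SES bounce logging solution."}},
    },
)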

Screenshots showing how to send a bounce test email using the AWS Console

  1. Go to Amazon SES and select the check box for your verified domain identity.
  2. Choose Send a Test Email.
  3. Fill in the required information as shown in the screenshot below, such as the From address and the test scenario (Bounce), and choose Send Test Email.
  4. As soon as the bounce notification is received by the SNS topic, it is sent to the Lambda function and logged in the /aws/ses/bounce_logs CloudWatch log group.

Clean up

When you’re done with this exercise, complete the following steps to delete your resources and stop incurring costs:

  1. Delete the SNS topic that you created.
  2. On the CloudFormation console, select your stack and choose Delete.

This cleans up all the resources created by the stack.

Conclusion

In this blog post, we showed you how to build a solution that captures bounce notifications. We explained how to combine Amazon Simple Notification Service, AWS Lambda, and Amazon CloudWatch to create the logging solution. To enhance visualization, you can create metric filters in Amazon CloudWatch, which let you graph and search the logged data. Because the notifications are stored in Amazon CloudWatch, you can also export the logs to Amazon S3 for long-term retention. You can modify the CloudFormation template in this blog and deploy it multiple times to capture complaint or delivery notifications for your business use cases.
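As an illustration of the metric filter idea, the following boto3 sketch counts bounce notifications in the solution’s log group. The filter pattern, metric name, and namespace are illustrative assumptions, not part of the original solution.

import boto3

logs = boto3.client("logs")

# Count log events whose JSON payload has notificationType == "Bounce".
# The log group name matches the solution's default; the metric namespace
# and metric name are illustrative choices.
logs.put_metric_filter(
    logGroupName="/aws/ses/bounce_logs",
    filterName="ses-bounce-count",
    filterPattern='{ $.notificationType = "Bounce" }',
    metricTransformations=[{
        "metricName": "BounceNotifications",
        "metricNamespace": "SES/Logging",
        "metricValue": "1",
    }],
)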

 


About the Authors

Rajat Kashyap is a Solutions Architect at AWS. He is a containers, DevOps, big data, analytics, and AI/ML enthusiast and loves helping customers design secure, reliable, scalable, and cost-effective solutions on AWS. As a trusted customer advisor, he helps organizations understand best practices for advanced AWS cloud-based solutions and helps them migrate and modernize their workloads using Well-Architected design principles and best practices.

 

 

Ashish Mehra is a Solutions Architecture Manager at AWS. He is a Middleware, Serverless, IoT and Containers enthusiast and loves helping customers design secure, reliable and cost-effective solutions on AWS.

Metasploit Wrap-Up

Post Syndicated from Christophe De La Fuente original https://blog.rapid7.com/2021/07/30/metasploit-wrap-up-123/

New Olympic Discipline: Hive Hunting


This week, community contributor Hakyac added a new Olympic discipline to Metasploit’s exploit sport category, based on the work of community security researchers @jonasLyk and Kevin Beaumont. The rules are simple: you need to abuse a configuration flaw in Windows 10 and 11 to get past the defense and access the Security Account Manager (SAM) files. Any local unprivileged player can read this sensitive security information, such as hashes of user and admin passwords. The best strategy for winning a gold medal is to abuse the Windows Volume Shadow Copy Service (VSS) to access these files and copy them locally. Finally, you just need to dump the NTLM hashes, use them in a pass-the-hash attack, and score with remote code execution.

Note that Microsoft issued an out-of-band advisory and tracked this vulnerability as CVE-2021-36934. You can find more information about the rules in this blog post. Happy Hive hunting!

Gold Medal for NetGear R7000 in Swimming 100m Heap Overflow

Our own Grant Willcox added a new exploit module that won the Swimming 100m Heap Overflow discipline. It takes advantage of a flaw in the genie.cgi?backup.cgi page of Netgear R7000 routers to enable a telnet server and easily gain code execution as the root user. Note that, although firmware versions 1.0.11.116 and prior are vulnerable, the module can only be used against version 1.0.11.116 at the moment. The check method can still be used to detect whether older devices are vulnerable. This module is based on research done by @colorlight2019. A new gold medal for the Metasploit team, great job!

New module content (5)

  • Netgear R7000 backup.cgi Heap Overflow RCE by Grant Willcox, SSD Disclosure, and colorlight2019, which exploits CVE-2021-31802 – This adds a module that leverages CVE-2021-31802, an unauthenticated RCE in Netgear R7000 routers. The vulnerability is used to execute a shellcode stub that enables telnet, which can then be accessed for root privileges on the affected device.
  • Pi-Hole Remove Commands Linux Priv Esc by Emanuele Barbeno and h00die, which exploits CVE-2021-29449 – This adds a local privilege escalation module that targets Pi-Hole versions >= 3.0 and <= 5.2.4. In vulnerable versions of the software, a user with sudo privileges can escalate to root by passing shell commands to either the removecustomcname, removecustomdns, or removestaticdhcp function. The functions have minimal sanitization, and they pass the input to the sed command. By default, the www-data user is permitted to run sudo without supplying a password as configured in the sudoers.d/pihole file.
  • WordPress Plugin Modern Events Calendar – Authenticated Remote Code Execution by Nguyen Van Khanh, Ron Jost, and Yann Castel, which exploits CVE-2021-24145 – This adds a module that exploits an authenticated file upload vulnerability in the WordPress plugin known as Modern Events Calendar. For versions before 5.16.5, an administrative user can upload a php payload via the calendar import feature by setting the content type of the file to text/csv. Code execution with the privileges of the user running the server is achieved by sending a request for the uploaded file.
  • WordPress Plugin SP Project and Document – Authenticated Remote Code Execution by Ron Jost and Yann Castel, which exploits CVE-2021-24347 – This adds a module that exploits an authenticated file upload vulnerability in the WordPress plugin, SP Project and Document Manager. For versions below 4.22, an authenticated user can upload arbitrary PHP code because the security check only blocks the upload of files with a .php extension, meaning that uploading a file with a .pHp extension is allowed. Once uploaded, requesting the file will result in code execution as the www-data user.
  • Windows SAM secrets leak – HiveNightmare by Kevin Beaumont, Yann Castel, and romarroca, which exploits CVE-2021-36934 – This adds a new exploit module that exploits a configuration issue in Windows 10 (from version 1809) and 11, identified as CVE-2021-36934. Due to permission issues, any local user is able to read SAM and SYSTEM hives. This module abuses Windows Volume Shadow Copy Service (VSS) to access these files and save them locally.

Enhancements and features

  • #15444 from pingport80 – This adds additional support for Powershell sessions to some methods in the File mixin leveraged by post modules.
  • #15465 from sjanusz-r7 – Updates the local exploit suggester to gracefully handle modules raising unintended exceptions and nil target information

Bugs fixed

  • #15359 from stephenbradshaw – Fixes a bug in the ssh_login_pubkey module, which would crash when not connected to the database.
  • #15460 from pingport80 – This fixes a localization-related issue in the File library’s copy_file method, which was caused by searching for a word in the output to determine success.

Get it

As always, you can update to the latest Metasploit Framework with msfupdate and you can get more details on the changes since the last blog post from GitHub:

If you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest. To install fresh without using git, you can use the open-source-only Nightly Installers or the binary installers (which also include the commercial edition).

Understanding Amazon Machine Images for Red Hat Enterprise Linux with Microsoft SQL Server

Post Syndicated from Emma White original https://aws.amazon.com/blogs/compute/understanding-amazon-machine-images-for-red-hat-enterprise-linux-with-microsoft-sql-server/

This post is written by Kumar Abhinav, Sr. Product Manager EC2, and David Duncan, Principal Solution Architect. 

Customers now have access to AWS license-included Amazon Machine Images (AMI) for hosting their SQL Server workloads with Red Hat Enterprise Linux (RHEL). With these AMIs, customers can easily build highly available, reliable, and performant Microsoft SQL Server 2017 and 2019 clusters with RHEL 7 and 8, with a simple pay-as-you-go pricing model that doesn’t require long-term commitments or upfront costs. By switching from Windows Server to RHEL to run SQL Server workloads, customers can reduce their operating system licensing costs and further lower total cost of ownership. This blog post provides a deep dive into how to deploy SQL Server on RHEL using these new AMIs, how to tune instances for performance, and how to reduce licensing costs with RHEL.

Overview

The RHEL AMIs with SQL Server are customized for the following editions of SQL Server 2019 or SQL Server 2017:

  • Express/Web Edition is an entry-level database for small web and mobile apps.
  • Standard Edition is a full-featured database designed for medium-sized applications and data marts.
  • Enterprise Edition is a mission-critical database built for high-performing, intelligent apps.

To maintain high availability (HA), you can bring the same HA configuration from an on-premises Enterprise Edition to AWS by combining SQL Server Availability Groups with the Red Hat Enterprise Linux HA add-on. The HA add-on provides a lightweight cluster management portfolio that runs across multiple AWS Availability Zones to eliminate single points of failure and deliver timely recovery when needed.

These AMIs include all the software packages required to run SQL Server on RHEL along with the most recent updates and security patches. By removing some of the heavy lifting around deployment, you can deploy SQL Server instances faster to accommodate growth and service events.

The following architecture diagram shows the necessary building blocks for SQL Server Enterprise HA configuration on RHEL.


To build the RHEL 7 and RHEL 8 AMIs, we focused on requirements from the Red Hat community of practice for SQL Server, which includes code, documentation, playbooks, and other artifacts relating to the deployment of SQL Server on RHEL. We used Amazon EC2 Image Builder for installing and updating Microsoft SQL Server. For custom configuration or for installing additional software, you can use the base machine image and extend it using EC2 Image Builder. If you have different requirements, you can build your own images using Red Hat Image Builder.

Tuning for virtual performance and compatibility with storage options

It is a common practice to apply pre-built performance profiles for SQL Server deployments when running on-premises. However, you do not need to apply the operating system performance tuning profiles for SQL Server to optimize the performance on Amazon EC2. The AMIs include the virtual-guest tuning from Red Hat along with additional optimizations for the EC2 environment. For example, the images include the timeout for NVMe IO operations set to the maximum possible value for an experience that is more consistent with the way EBS volumes are managed. Database administrators can further configure workload specific tuning parameters such as paging, swapping, and memory pressure using the Microsoft SQL Server performance best practices guidelines.

SQL Server availability groups help achieve HA and improve the read performance of your database cluster. However, this approach only improves availability at the database layer. RHEL with HA further improves the availability of a SQL Server cluster by providing service failover capabilities at the operating system layer. You can easily build a highly available database cluster as shown in the following figure by using the RHEL with SQL Server and HA add-on AMI on an instance of their choice in multiple Availability Zones.

 

High availability SQL Cluster built on top of RHEL HA

When it comes to storage, AWS offers many choices. Amazon Elastic Block Store (Amazon EBS) offers Provisioned IOPS volumes for cases where you know the specific level of performance your database operations require. Provisioned IOPS volumes are an excellent option when a general-purpose volume doesn’t deliver the I/O operations your production database needs. EBS volumes give you the flexibility to increase your volume size and performance through API calls. With Amazon EBS, you can also attach additional data volumes directly to your instance or leverage multiple instance store volumes for performance targets. Volume IOPS are optimized to be sustainable even if they climb into the thousands, and your maximum IOPS does not decrease.
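For example, here is a hedged boto3 sketch of growing a data volume and raising its provisioned IOPS through the EC2 ModifyVolume API; the volume ID, size, and IOPS values are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Increase the size and provisioned IOPS of an existing EBS data volume.
# The volume ID and target values below are placeholders for illustration.
ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",
    Size=500,            # new size in GiB
    VolumeType="io2",    # Provisioned IOPS SSD
    Iops=16000,          # target provisioned IOPS
)

# Track the modification until it reaches the "optimizing" or "completed" state.
resp = ec2.describe_volumes_modifications(VolumeIds=["vol-0123456789abcdef0"])
print(resp["VolumesModifications"][0]["ModificationState"])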

Storing data on secondary volumes improves performance of your database

Provisioning only one large root EBS volume for database storage and sharing it with the operating system and any logging, management tools, or monitoring processes is not a well-architected solution. That shared activity reduces the bandwidth and operational performance of the database workloads. On the other hand, practices like separating the database workloads onto dedicated EBS volumes or leveraging instance store volumes work well for use cases like storing a large number of temp tables. By separating the volumes by specialized activity, the performance of each component is independently manageable. Profile your utilization to choose the right combination of EBS and instance storage options for your workload.

Lower Total Cost of Ownership

Another benefit of using RHEL AMIs with SQL Server is cost savings. When you move from Windows Server to RHEL to run SQL Server, you can reduce the operating system licensing costs. Windows Server virtual machines are priced per core and hence you pay more for virtual machines with large numbers of cores. In other words, the more virtual machine cores you have, the more you pay in software license fees. On the other hand, RHEL has just two pricing tiers. One is for virtual machines with fewer than four cores and the other is for virtual machines with four cores or more. You pay the same operating system software subscription fees whether you choose a virtual machine with two or three virtual cores. Similarly, for larger workloads, your operating system software subscription costs are the same no matter whether you choose a virtual machine with eight or 16 virtual cores.

In addition, with the elasticity of EC2, you can save costs by sizing your starting workloads accordingly and later resizing your instances to prevent over provisioning compute resources when workloads experience uneven usage patterns, such as month end business reporting and batch programs. You can choose to use On-Demand Instances or use Savings Plans to build flexible pricing for long-term compute costs effectively. With AWS, you have the flexibility to right-size your instances and save costs without compromising business agility.

Conclusion

These new RHEL with SQL Server AMIs on Amazon EC2 are pre-configured and optimized to reduce undifferentiated heavy lifting. Customers can easily build highly available, reliable, and performant database clusters using RHEL with SQL Server along with the HA add-on and Provisioned IOPS EBS volumes. To get started, search for RHEL with SQL Server in the Amazon EC2 Console or find it in the AWS Marketplace. To learn more about Red Hat Enterprise Linux on EC2, check out the frequently asked questions page.

 

I Am Parting With My Crypto Library

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/07/i-am-parting-with-my-crypto-library.html

The time has come for me to find a new home for my (paper) cryptography library. It’s about 150 linear feet of books, conference proceedings, journals, and monographs — mostly from the 1980s, 1990s, and 2000s.

My preference is that it goes to an educational institution, but I will consider a corporate or personal home if that’s the only option available. If you think you can break it up and sell it, I’ll consider that as a last resort. The new owner pays all packaging and shipping costs, and possibly a purchase price depending on who you are and what you want to do with the library.

If you are interested, please email me. I can send photos.

EDITED TO ADD (8/1): I am talking with two universities and the Internet Archive. It will find a good home. Thank you all for your suggestions.

Benefits of Modernizing On-premise Analytics with an AWS Lake House

Post Syndicated from Vikas Nambiar original https://aws.amazon.com/blogs/architecture/benefits-of-modernizing-on-premise-analytics-with-an-aws-lake-house/

Organizational analytics systems have shifted from running in the background of IT systems to being critical to an organization’s health.

Analytics systems help businesses make better decisions, but they tend to be complex and are often not agile enough to scale quickly. To help with this, customers upgrade their traditional on-premises online analytic processing (OLAP) databases to hyperconverged infrastructure (HCI) solutions. However, these systems incur operational overhead, are limited by proprietary formats, have limited elasticity, and tie customers into costly and inhibiting licensing agreements. These all bind an organization’s growth to the growth of the appliance provider.

In this post, we provide a reference architecture and show how an AWS lake house helps you overcome these limitations. Our solution gives you the ability to scale, integrate with multiple sources, improve business agility, and future-proof your analytics investment.

High-level architecture for implementing an AWS lake house

Lake house architecture uses a ring of purpose-built data consumers and services centered around a data lake. This approach acknowledges that a one-size-fits-all approach to analytics eventually leads to compromises. These compromises can include reduced agility due to change management and the impact of different business domains’ reporting requirements on data served from a central platform. As such, simply integrating a data lake with a data warehouse is not sufficient.

Each step in Figure 1 needs to be decoupled to build a lake house.


Figure 1. Data flow in a lake house

 

High-level design for an AWS lake house implementation

Figure 2. High-level design for an AWS lake house implementation

Building a lake house on AWS

These steps summarize building a lake house on AWS:

  1. Identify source system extraction capabilities to define an ingestion layer that loads data into a data lake.
  2. Build the data ingestion layer using services that support the source systems’ extraction capabilities.
  3. Build a governance and transformation layer to manipulate data.
  4. Provide the capability to consume and visualize information via a purpose-built consumption/value layer.

This lake house architecture provides you a decoupled architecture. Services can be added, removed, and updated independently as new data sources are identified, such as data sources that enrich data via AWS Data Exchange. This can happen while services in the purpose-built consumption layer address individual business unit requirements.

Building the data ingestion layer

Services in this layer work directly with the source systems based on their supported data extraction patterns. Data is then placed into a data lake.

Figure 3 shows the following services to be included in this layer:

  • AWS Transfer Family for SFTP integrates with source systems to extract data using secure shell (SSH), SFTP, and FTPS/FTP. This service is for systems that support batch transfer modes and have no real-time requirements, such as external data entities.
  • AWS Glue connects to real-time data streams to extract, load, transform, clean, and enrich data.
  • AWS Database Migration Service (AWS DMS) connects and migrates data from relational databases, data warehouses, and NoSQL databases.

Figure 3. Ingestion layer against source systems

Services in this layer are managed services that provide operational excellence by removing patching and upgrade overheads. Being managed services, they will also detect extraction spikes and scale automatically or on-demand based on your specifications.

Building the data lake layer

A data lake built on Amazon Simple Storage Service (Amazon S3) provides the ideal target layer to store, process, and cycle data over time. As the central aspect of the architecture, Amazon S3 allows the data lake to hold multiple data formats and datasets. It can also be integrated with most if not all AWS services and third-party applications.

Figure 4 shows the following services to be included in this layer:

  • Amazon S3 acts as the data lake to hold multiple data formats.
  • Amazon S3 Glacier provides the data archiving and long-term backup storage layer for processed data. It also reduces the amount of data indexed by transformation layer services.

Figure 4. Data lake integrated to ingestion layer

The data lake layer provides 99.999999999% data durability and supports various data formats, allowing you to future-proof the data lake. Data lakes on Amazon S3 also integrate with other AWS ecosystem services (for example, Amazon Athena for interactive querying or third-party tools running off Amazon Elastic Compute Cloud (Amazon EC2) instances).
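To illustrate how processed data can be cycled into Amazon S3 Glacier over time, the following is a hedged boto3 sketch of a lifecycle rule; the bucket name, prefix, and transition periods are placeholder choices, not values from this post.

import boto3

s3 = boto3.client("s3")

# Transition processed objects to Glacier after 90 days and expire them after
# roughly 7 years. The bucket name and prefix are placeholders for illustration.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-data-lake-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-processed-data",
            "Status": "Enabled",
            "Filter": {"Prefix": "processed/"},
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 2555},
        }],
    },
)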

Defining the governance and transformation layer

Services in this layer transform raw data in the data lake to a business consumable format, along with providing operational monitoring and governance capabilities.

Figure 5 shows the following services to be included in this layer:

  1. AWS Glue discovers and transforms data, making it available for search and querying.
  2. Amazon Redshift (Transient) functions as an extract, transform, and load (ETL) node using RA3 nodes. RA3 nodes can be paused outside ETL windows. Once paused, Amazon Redshift’s data sharing capability allows for live data sharing for read purposes, which reduces costs to customers. It also allows for creation of separate, smaller read-intensive business intelligence (BI) instances from the larger write-intensive ETL instances required during ETL runs.
  3. Amazon CloudWatch monitors and observes your enabled services. It integrates with existing IT service management and change management systems such as ServiceNow for alerting and monitoring.
  4. AWS Security Hub implements a single security pane by aggregating, organizing, and prioritizing security alerts from services used, such as Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS Identity and Access Management (IAM) Access Analyzer, AWS Systems Manager, and AWS Firewall Manager.
  5. Amazon Managed Workflows for Apache Airflow (MWAA) sequences your workflow events to ingest, transform, and load data.
  6. AWS Lake Formation standardizes data lake provisioning.
  7. AWS Lambda runs custom transformation jobs if required, including jobs developed over time that hold custom business logic IP.

Figure 5. Governance and transformation layer prepares data in the lake

This layer provides operational isolation wherein least-privilege access control can be implemented to keep operational staff separate from the core services. It also lets you implement custom transformation tasks using Lambda. This allows you to consistently build lakes across all environments and maintain a single view of security via AWS Security Hub.

Building the value layer

This layer generates value for your business by provisioning decoupled, purpose-built visualization services, which shields each business unit from the change management impacts of other units.

Figure 6 shows the following services to be included in this value layer:

  1. Amazon Redshift (BI cluster) acts as the final store for data processed by the governance and transformation layer.
  2. Amazon Elasticsearch Service (Amazon ES) conducts log analytics and provides real-time application and clickstream analysis, including for data from previous layers.
  3. Amazon SageMaker prepares, builds, trains, and deploys machine learning models that provide businesses insights on possible scenarios such as predictive maintenance, churn predictions, demand forecasting, etc.
  4. Amazon QuickSight acts as the visualization layer, allowing business and support users to create reports and dashboards that are accessible across devices and can be embedded into other business applications, portals, and websites.

Figure 6. Value layer with services for purpose-built consumption

Conclusion

By using AWS managed services as a starting point, you can build a data lake house on AWS. This open-standards-based, pay-as-you-go data lake will help future-proof your analytics platform. With the AWS lake house architecture provided in this post, you can expand your architecture and avoid the excessive license and ongoing support costs associated with proprietary software and infrastructure. These capabilities are typically unavailable in on-premises OLAP/HCI-based data analytics platforms.

Related information

[$] Strict memcpy() bounds checking for the kernel

Post Syndicated from original https://lwn.net/Articles/864521/rss

The C programming language is famously prone to memory-safety problems that lead to buffer overflows and a seemingly endless stream of security vulnerabilities. But, even in C, it is possible to improve the situation in many cases. One of those is the memcpy() family of functions, which are used to efficiently copy or overwrite blocks of memory; with a bit of help from the compiler, those functions can be prevented from writing past the end of the destination object they are passed. Enforcing that condition in the kernel is harder than one might expect, though, as this massive patch set from Kees Cook shows.

EC2 Image Builder and Hands-free Hardening of Windows Images for AWS Elastic Beanstalk

Post Syndicated from Carlos Santos original https://aws.amazon.com/blogs/devops/ec2-image-builder-for-windows-on-aws-elastic-beanstalk/

AWS Elastic Beanstalk takes care of undifferentiated heavy lifting for customers by regularly providing new platform versions to update all Linux-based and Windows Server-based platforms. In addition to the updates to existing software components and support for new features and configuration options incorporated into the Elastic Beanstalk managed Amazon Machine Images (AMI), you may need to install third-party packages, or apply additional controls in order to meet industry or internal security criteria; for example, the Defense Information Systems Agency’s (DISA) Security Technical Implementation Guides (STIG).

In this blog post you will learn how to automate the process of customizing Elastic Beanstalk managed AMIs using EC2 Image Builder and apply the medium and low severity STIG settings to Windows instances whenever new platform versions are released.

You can extend the solution in this blog post to go beyond system hardening. EC2 Image Builder allows you to execute scripts that define the custom configuration for an image, known as Components. There are over 20 Amazon managed Components that you can use. You can also create your own, and even share them with others.
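As a rough illustration of what a custom Component looks like, here is a hedged boto3 sketch that registers a simple Windows component. The component name, version, and the PowerShell command it runs are placeholders for illustration, not part of the solution in this post.

import boto3

imagebuilder = boto3.client("imagebuilder")

# An Image Builder component document is YAML with phases and steps.
# This example simply writes a marker file; replace it with your own configuration.
component_document = """
name: ExampleWindowsComponent
description: Placeholder component for illustration
schemaVersion: 1.0
phases:
  - name: build
    steps:
      - name: WriteMarkerFile
        action: ExecutePowerShell
        inputs:
          commands:
            - New-Item -Path C:\\custom-image-marker.txt -ItemType File -Force
"""

imagebuilder.create_component(
    name="example-windows-component",   # placeholder name
    semanticVersion="1.0.0",
    platform="Windows",
    data=component_document,
)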

These services are discussed in this blog post:

  • EC2 Image Builder simplifies the building, testing, and deployment of virtual machine and container images.
  • Amazon EventBridge is a serverless event bus that simplifies the process of building event-driven architectures.
  • AWS Lambda lets you run code without provisioning or managing servers.
  • AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data, and secrets.
  • AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services.

Prerequisites

This solution has the following prerequisites:

All of the code necessary to deploy the solution is available on the project’s GitHub repository (aws-samples/elastic-beanstalk-image-pipeline-trigger). The repository details the solution’s codebase, and the “Deploying the Solution” section walks through the deployment process. Let’s start with a walkthrough of the solution’s design.

Overview of solution

The solution automates the following three steps.


Figure 1 – Steps being automated

The Image Builder pipeline launches an EC2 instance using the Elastic Beanstalk managed AMI, hardens the image using EC2 Image Builder’s STIG Medium component, and outputs a new AMI that application teams can use to create their Elastic Beanstalk environments.


Figure 2 – EC2 Image Builder Pipeline Steps

To automate Step 1, an Amazon EventBridge rule triggers an AWS Lambda function that retrieves the latest AMI ID for the Elastic Beanstalk platform in use and ensures that the Parameter Store parameter is kept up to date.


Figure 3 – AMI Version Change Monitoring Flow
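A minimal sketch of such a monitoring function is shown below, assuming the platform ARN and parameter name arrive as environment variables and that the Elastic Beanstalk DescribePlatformVersion API exposes the managed AMI in its CustomAmiList; the actual function in the sample repository may be structured differently.

import os
import boto3

eb = boto3.client("elasticbeanstalk")
ssm = boto3.client("ssm")

PLATFORM_ARN = os.environ["PLATFORM_ARN"]      # assumed env var
PARAMETER_NAME = os.environ["PARAMETER_NAME"]  # assumed env var

def handler(event, context):
    """Look up the latest Elastic Beanstalk managed AMI and store it in Parameter Store."""
    platform = eb.describe_platform_version(PlatformArn=PLATFORM_ARN)["PlatformDescription"]

    # Pick the HVM image from the platform's custom AMI list (assumption about the shape).
    ami_id = next(
        ami["ImageId"]
        for ami in platform.get("CustomAmiList", [])
        if ami.get("VirtualizationType") == "hvm"
    )

    ssm.put_parameter(
        Name=PARAMETER_NAME,
        Value=ami_id,
        Type="String",
        Overwrite=True,
    )
    return {"amiId": ami_id}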

Steps 2 and 3 are triggered when the Parameter Store parameter changes. An EventBridge rule triggers a Lambda function, which creates a new EC2 Image Builder recipe, updates the EC2 Image Builder pipeline to use this new recipe, and starts a new execution of the pipeline.


Figure 4 – EC2 Image Builder Pipeline Execution Flow
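In outline, that second function can be sketched with boto3 roughly as follows; the recipe name, component ARN, pipeline ARN, version handling, and the event fields are simplified placeholders rather than the exact logic in the sample project.

import boto3

imagebuilder = boto3.client("imagebuilder")

# Placeholder identifiers; the real values come from the solution's configuration.
STIG_COMPONENT_ARN = "arn:aws:imagebuilder:us-east-1:aws:component/stig-build-windows-medium/x.x.x"
PIPELINE_ARN = "arn:aws:imagebuilder:us-east-1:111122223333:image-pipeline/windowsbeanstalkimagepipeline"

def handler(event, context):
    """Create a recipe from the new parent AMI, point the pipeline at it, and run it."""
    # Simplification: assume the new AMI ID and infrastructure configuration ARN
    # are available on the incoming event.
    parent_ami = event["parent_ami_id"]

    recipe = imagebuilder.create_image_recipe(
        name="demo-beanstalk-image",
        semanticVersion="1.0.2",   # a real implementation would compute the next version
        parentImage=parent_ami,
        components=[{"componentArn": STIG_COMPONENT_ARN}],
    )

    imagebuilder.update_image_pipeline(
        imagePipelineArn=PIPELINE_ARN,
        imageRecipeArn=recipe["imageRecipeArn"],
        infrastructureConfigurationArn=event["infrastructure_configuration_arn"],
    )

    imagebuilder.start_image_pipeline_execution(imagePipelineArn=PIPELINE_ARN)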

If you would also like to store the ID of the newly created AMI, see the Tracking the latest server images in Amazon EC2 Image Builder pipelines blog post on how to use Parameter Store for this purpose. This will enable you to notify teams that a new AMI is available for consumption.

Let’s dive a bit deeper into each of these pieces and how to deploy the solution.

Walkthrough

The following are the high-level steps we will be walking through in the rest of this post.

  • Deploy the SAM template that will provision all pieces of the solution. Check out the Using container image support for AWS Lambda with AWS SAM blog post for more details.
  • Invoke the AMI version monitoring AWS Lambda function. The EventBridge rule is configured for a daily trigger, and for the purposes of this blog post we do not want to wait that long to see the pipeline in action.
  • View the details of the resultant image after the Image Builder pipeline completes.

Deploying the Solution

The first step in deploying the solution is to create the Amazon Elastic Container Registry (Amazon ECR) repository that will hold the image artifacts. You can do so using either of the following AWS CLI or AWS Tools for PowerShell commands:

aws ecr create-repository --repository-name elastic-beanstalk-image-pipeline-trigger --image-tag-mutability IMMUTABLE --image-scanning-configuration scanOnPush=true --region us-east-1
New-ECRRepository -RepositoryName elastic-beanstalk-image-pipeline-trigger -ImageTagMutability IMMUTABLE -ImageScanningConfiguration_ScanOnPush $True -Region us-east-1

This will return output similar to the following. Take note of the repositoryUri as you will be using that in an upcoming step.


Figure 5 – ECR repository creation output

With the repository configured, you are ready to get the solution. Either download or clone the project’s aws-samples/elastic-beanstalk-image-pipeline-trigger GitHub repository to a local directory. Once you have the project downloaded, you can compile it using the following command from the project’s src/BeanstalkImageBuilderPipeline directory.

dotnet publish -c Release -o ./bin/Release/net5.0/linux-x64/publish

The output should look like:


Figure 6 – .NET project compilation output

Now that the project is compiled, you are ready to create the container image by executing the following SAM CLI command.

sam build --template-file ./serverless.template

Figure 7 – SAM build command output

Next up deploy the SAM template with the following command, replacing REPOSITORY_URL with the URL of the ECR repository created earlier:

sam deploy --stack-name elastic-beanstalk-image-pipeline --image-repository <REPOSITORY_URL> --capabilities CAPABILITY_IAM --region us-east-1

The SAM CLI will both push the container image and create the CloudFormation Stack, deploying all resources needed for this solution. The deployment output will look similar to:


Figure 8 – SAM deploy command output

With the CloudFormation Stack completed, you are ready to move onto starting the pipeline to create a custom Windows AMI with the medium DISA STIG applied.

Invoke AMI ID Monitoring Lambda

Let’s start by invoking the Lambda function, depicted in Figure 3, responsible for ensuring that the latest Elastic Beanstalk managed AMI ID is stored in Parameter Store.

aws lambda invoke --function-name BeanstalkManagedAmiMonitor response.json --region us-east-1
Invoke-LMFunction -FunctionName BeanstalkManagedAmiMonitor -Region us-east-1

Figure 9 – Lambda invocation output

The Lambda’s CloudWatch log group contains the BeanstalkManagedAmiMonitor function’s output. For example, below you can see that the SSM parameter is being updated with the new AMI ID.


Figure 10 – BeanstalkManagedAmiMonitor Lambda’s log

After this Lambda function updates the Parameter Store parameter with the latest AMI ID, the EC2 Image Builder recipe will be updated to use this AMI ID as the parent image, and the Image Builder pipeline will be started. You can see evidence of this by going to the ImageBuilderTrigger Lambda function’s CloudWatch log group. Below you can see a log entry with the message “Starting image pipeline execution…”.


Figure 11 – ImageBuilderTrigger Lambda’s log

To keep track of the status of the image creation, navigate to the EC2 Image Builder console, and select the 1.0.1 version of the demo-beanstalk-image.


Figure 12 – EC2 Image Builder images list

This will display the details for that build. Keep an eye on the status. While the image is being created, you will see the status as “Building”. Applying the latest Windows updates and the DISA STIG can take about an hour.


Figure 13 – EC2 Image Builder image build version details

Once the AMI has been created, the status will change to “Available”. Click on the version column’s link to see the details of that version.


Figure 14 – EC2 Image Builder image build version details

You can use the AMI ID listed when creating an Elastic Beanstalk application. When using the create new environment wizard, you can modify the capacity settings to specify this custom AMI ID. The automation is configured to run on a daily basis; we only invoked the Lambda function directly for the purposes of this post.

Cleaning up

To avoid incurring future charges, delete the resources using the following commands, replacing the AWS_ACCOUNT_NUMBER placeholder with the appropriate value.

aws imagebuilder delete-image --image-build-version-arn arn:aws:imagebuilder:us-east-1:<AWS_ACCOUNT_NUMBER>:image/demo-beanstalk-image/1.0.1/1 --region us-east-1

aws imagebuilder delete-image-pipeline --image-pipeline-arn arn:aws:imagebuilder:us-east-1:<AWS_ACCOUNT_NUMBER>:image-pipeline/windowsbeanstalkimagepipeline --region us-east-1

aws imagebuilder delete-image-recipe --image-recipe-arn arn:aws:imagebuilder:us-east-1:<AWS_ACCOUNT_NUMBER>:image-recipe/demo-beanstalk-image/1.0.1 --region us-east-1

aws cloudformation delete-stack --stack-name elastic-beanstalk-image-pipeline --region us-east-1

aws cloudformation wait stack-delete-complete --stack-name elastic-beanstalk-image-pipeline --region us-east-1

aws ecr delete-repository --repository-name elastic-beanstalk-image-pipeline-trigger --force --region us-east-1
Remove-EC2IBImage -ImageBuildVersionArn arn:aws:imagebuilder:us-east-1:<AWS_ACCOUNT_NUMBER>:image/demo-beanstalk-image/1.0.1/1 -Region us-east-1

Remove-EC2IBImagePipeline -ImagePipelineArn arn:aws:imagebuilder:us-east-1:<AWS_ACCOUNT_NUMBER>:image-pipeline/windowsbeanstalkimagepipeline -Region us-east-1

Remove-EC2IBImageRecipe -ImageRecipeArn arn:aws:imagebuilder:us-east-1:<AWS_ACCOUNT_NUMBER>:image-recipe/demo-beanstalk-image/1.0.1 -Region us-east-1

Remove-CFNStack -StackName elastic-beanstalk-image-pipeline -Region us-east-1

Wait-CFNStack -StackName elastic-beanstalk-image-pipeline -Region us-east-1

Remove-ECRRepository -RepositoryName elastic-beanstalk-image-pipeline-trigger -IgnoreExistingImages $True -Region us-east-1

Conclusion

In this post, you learned how to leverage EC2 Image Builder, Lambda, and EventBridge to automate the creation of a Windows AMI with the medium DISA STIGs applied that can be used for Elastic Beanstalk environments. Don’t stop there, though; you can apply these same techniques whenever you need to base recipes on AMIs whose image origin is not available in EC2 Image Builder.

Further Reading

EC2 Image Builder has a number of image origins supported out of the box, see the Automate OS Image Build Pipelines with EC2 Image Builder blog post for more details. EC2 Image Builder is also not limited to just creating AMIs. The Build and Deploy Docker Images to AWS using EC2 Image Builder blog post shows you how to build Docker images that can be utilized throughout your organization.

These resources can provide additional information on the topics touched on in this article:


Carlos Santos

Carlos Santos is a Microsoft Specialist Solutions Architect with Amazon Web Services (AWS). In his role, Carlos helps customers through their cloud journey, leveraging his experience with application architecture, and distributed system design.
