We passed important amendments to the Electronic Governance Act

Post Syndicated from Bozho original https://blog.bozho.net/blog/4132

Yesterday, at second reading, we passed the amendments to the Electronic Governance Act, which we prepared at the Ministry of Electronic Governance a year and a bit ago. They include many important changes, among them:

  • ex officio (internal) data checks are made equivalent to certificates, so that we do not have to amend every law and regulation in order to abolish the certificates.
  • individuals will be able to request electronic services without signing the applications with a qualified electronic signature (QES), which remains a barrier to the use of e-services.
  • electronic services will carry a lower fee than the same services at a counter
  • we expand electronic service of documents, making it possible for official acts and electronic tickets (fines). In the secure electronic delivery system, a person will be able to indicate that, for example, the Ministry of Interior should deliver electronic tickets to them there.
  • we regulate the activity of intermediaries, who can provide over-the-counter services closer to citizens who do not use the internet (e.g. so they do not have to travel to the district capital)
  • secondary legislation will no longer be able to introduce requirements to present documents proving circumstances that are already available in a register. In other words, all regulations, including municipal ones, that demand certificates, notes, stickers and the like not provided for by law become unlawful.
  • all registers will have to move from paper notebooks to electronic form in 2024, when the obligation comes into force for registers to be kept either in their own information system or in the newly established centralized register-keeping system.
  • we introduce an obligation to send a reminder when a document (an "individual administrative act") is about to expire
  • fax and telex are removed from the Administrative Procedure Code
  • we introduce a number of important definitions, such as "register" and "central data administrator", and align a number of definitions across several laws that had drifted apart over the years.

All of this will start to be applied in the coming months. The law also covers goals set in the Recovery and Resilience Plan and is an important step towards more convenience for citizens and a more efficient administration. Passing the law does not transform the administration automatically, but it sets important frameworks and obligations with which the Ministry of Electronic Governance can make rapid progress.

The article "We passed important amendments to the Electronic Governance Act" was first published on БЛОГодаря.

Using AWS CloudFormation and AWS Cloud Development Kit to provision multicloud resources

Post Syndicated from Aaron Sempf original https://aws.amazon.com/blogs/devops/using-aws-cloudformation-and-aws-cloud-development-kit-to-provision-multicloud-resources/

Customers often need to architect solutions to support components across multiple cloud service providers, a need which may arise if they have acquired a company running on another cloud, or for functional purposes where specific services provide a differentiated capability. In this post, we will show you how to use the AWS Cloud Development Kit (AWS CDK) to create a single pane of glass for managing your multicloud resources.

AWS CDK is an open source framework that builds on the underlying functionality provided by AWS CloudFormation. It allows developers to define cloud resources using common programming languages and an abstraction model based on reusable components called constructs. There is a misconception that CloudFormation and CDK can only be used to provision resources on AWS, but this is not the case. The CloudFormation registry, with its support for third-party resource types, together with custom resource providers, allows any resource that can be configured via an API to be created and managed, regardless of where it is located.

Multicloud solution design paradigm

Multicloud solutions are often designed with services grouped and separated by cloud provider, creating a segregation of resources and functions within the design. This approach duplicates layers of the solution, most commonly the resources and the deployment processes for each environment. That duplication increases cost and management complexity, multiplying the potential break points within the solution or practice.

As customer needs have grown more complex, so has the need for IaC solutions that can both simplify resource deployments and provision resources across hybrid or multicloud environments. In meeting this need, the proliferation of supported tools, frameworks, languages, and practices has created “choice overload”. At worst, this scares the non-cloud-savvy away from adopting an IaC solution that would benefit their cloud journey; at best, it muddies the very reason for adopting an IaC practice.

A single pane of glass

Systems Thinking is a holistic approach that focuses on the way a system’s constituent parts interrelate and how systems work as a whole, especially over time and within the context of larger systems. Systems thinking is commonly accepted as the backbone of a successful systems engineering approach. Designing solutions from a full systems view, based on each component’s function and interrelation within the system across environments, aligns naturally with deploying each cloud-specific resource from a single control plane.

While AWS provides a list of services that help design, manage, and operate hybrid and multicloud solutions, with AWS as the primary cloud you can go beyond just using those services. CloudFormation registry resource types model and provision resources using custom logic, as components of CloudFormation stacks. Public extensions are provided not only by AWS: third-party publishers also make extensions available for general use, meaning customers can create their own extensions and publish them for anyone to use.

The AWS CDK, which has a 1:1 mapping of all AWS CloudFormation resources, as well as a library of abstracted constructs, supports the ability to import custom AWS CloudFormation extensions, enabling customers and partners to create custom AWS CDK constructs for their extensions. The chosen programming language can be used to inherit and abstract the custom resource into reusable AWS CDK constructs, allowing developers to create solutions that contain native AWS extensions along with secondary hybrid or alternate cloud resources.

Providing the ability to integrate mixed resources in the same stack more closely aligns with the functional design and often diagrammatic depiction of the solution. In essence, we are creating a single IaC pane of glass over the entire solution, deployed through a single control plane. This lowers the complexity and the cost of maintaining separate modules and deployment pipelines across multiple cloud providers.

A common use case for a multicloud: disaster recovery

One of the most common reasons for using components across different cloud providers is the need to maintain data sovereignty while designing disaster recovery (DR) into a solution.

Data sovereignty is the idea that data is subject to the laws of the place where it is physically located. In some countries this extends to regulations requiring that, if data is collected from citizens of a geographical area, the data must reside on servers located in jurisdictions of that geographical area or in countries with a similar scope and rigor in their protection laws.

This requires organizations to remain in compliance with the data sovereignty regulations of their host country, and in cases such as state government agencies, with the stricter scope of state boundaries. Unfortunately, not all countries, and especially not all states, have multiple AWS Regions to select from when deciding where their primary and recovery data backups will reside. Therefore, the DR solution needs to take advantage of multiple cloud providers in the same geography, and such a solution must be designed to back up or replicate data across providers.

The multicloud solution

A multicloud solution to the proposed use case would be the backup of data from an AWS resource such as an Amazon S3 bucket to another cloud provider within the same geography, such as an Azure Blob Storage container, using AWS event-driven behavior to trigger the copying of data from the primary AWS resource to the secondary Azure backup resource.

Following the IaC single pane of glass approach, the Azure Blob Storage container is created as a resource type in the CloudFormation registry and imported into the AWS CDK to be used as a construct in the solution. However, before the extension resource type can be used effectively in the CDK as a reusable construct and added to your private library, you first need to go through the process of importing it into the CDK and creating constructs for it.

There are three different levels of constructs, beginning with low-level constructs, which are called CFN Resources (or L1, short for “layer 1”). These constructs directly represent all resources available in AWS CloudFormation. They are named CfnXyz, where Xyz is the name of the resource.

Layer 1 Construct

In this example, an L1 construct named CfnAzureBlobStorage represents the Azure::BlobStorage AWS CloudFormation extension. It also explicitly exposes the ref property so that higher-level constructs can access the output value, which will be the URL of the provisioned Azure blob container.

import { CfnResource } from "aws-cdk-lib";
import { Secret, ISecret } from "aws-cdk-lib/aws-secretsmanager";
import { Construct } from "constructs";

export interface CfnAzureBlobStorageProps {
  subscriptionId: string;
  clientId: string;
  tenantId: string;
  clientSecretName: string;
}

// L1 Construct
export class CfnAzureBlobStorage extends Construct {
  // Allows accessing the ref property
  public readonly ref: string;

  constructor(scope: Construct, id: string, props: CfnAzureBlobStorageProps) {
    super(scope, id);

    const secret = this.getSecret("AzureClientSecret", props.clientSecretName);
    
    const azureBlobStorage = new CfnResource(
      this,
      "ExtensionAzureBlobStorage",
      {
        type: "Azure::BlobStorage",
        properties: {
          AzureSubscriptionId: props.subscriptionId,
          AzureClientId: props.clientId,
          AzureTenantId: props.tenantId,
          AzureClientSecret: secret.secretValue.unsafeUnwrap()
        },
      }
    );

    this.ref = azureBlobStorage.ref;
  }

  private getSecret(id: string, secretName: string) : ISecret {  
    return Secret.fromSecretNameV2(this, secretName.concat("Value"), secretName);
  }
}

As with every CDK construct, the constructor arguments are scope, id, and props. scope and id are passed to the Construct base class. The props argument is of type CfnAzureBlobStorageProps, which includes four properties, all of type string. This is how the Azure credentials are propagated down from upstream constructs.

Layer 2 Construct

The next level of constructs, L2, also represent AWS resources, but with a higher-level, intent-based API. They provide similar functionality, but incorporate the defaults, boilerplate, and glue logic you’d be writing yourself with a CFN Resource construct. They also provide convenience methods that make it simpler to work with the resource.

In this example, an L2 construct is created to abstract the CfnAzureBlobStorage L1 construct and provides additional properties and methods.

import { Construct } from "constructs";
import { CfnAzureBlobStorage } from "./cfn-azure-blob-storage";

// L2 Construct
export class AzureBlobStorage extends Construct {
  public readonly blobContainerUrl: string;

  constructor(
    scope: Construct,
    id: string,
    subscriptionId: string,
    clientId: string,
    tenantId: string,
    clientSecretName: string
  ) {
    super(scope, id);

    const azureBlobStorage = new CfnAzureBlobStorage(
      this,
      "CfnAzureBlobStorage",
      {
        subscriptionId: subscriptionId,
        clientId: clientId,
        tenantId: tenantId,
        clientSecretName: clientSecretName,
      }
    );

    this.blobContainerUrl = azureBlobStorage.ref;
  }
}

The custom L2 construct class is declared as AzureBlobStorage, this time without the Cfn prefix, to represent an L2 construct. Its constructor arguments include the Azure credentials and the client secret name, and the ref from the L1 construct is exposed through the public property blobContainerUrl.

As an L2 construct, AzureBlobStorage can be used in CDK apps alongside AWS resource constructs in the same stack, all provisioned through AWS CloudFormation, creating the IaC single pane of glass for a multicloud solution.
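
As a brief illustration (a hypothetical sketch, not taken from the original post), the L2 construct could sit next to a native AWS resource in an ordinary CDK stack; the stack name, bucket, and Azure identifier values below are placeholders:

import * as cdk from "aws-cdk-lib";
import { Bucket } from "aws-cdk-lib/aws-s3";
import { Construct } from "constructs";
import { AzureBlobStorage } from "./azure-blob-storage";

// Hypothetical stack mixing a native AWS resource with the Azure L2 construct.
export class MixedResourcesStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Native AWS resource provisioned by CloudFormation as usual.
    new Bucket(this, "ExampleBucket");

    // Secondary cloud resource provisioned through the same CloudFormation stack.
    new AzureBlobStorage(
      this,
      "ExampleAzureBlobStorage",
      "<azure-subscription-id>",
      "<azure-client-id>",
      "<azure-tenant-id>",
      "azureclientsecretname" // name of an existing Secrets Manager secret
    );
  }
}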

Layer 3 Construct

The true value of the CDK construct programming model is in the ability to extend L2 constructs, which represent a single resource, into a composition of multiple constructs that provide a solution for a common task. These are Layer 3 (L3) constructs, also known as patterns.

In this example, the L3 construct represents the solution architecture to back up objects uploaded to an Amazon S3 bucket into an Azure Blob Storage container in real time, using AWS Lambda to process event notifications from Amazon S3.

import { RemovalPolicy, Duration, CfnOutput } from "aws-cdk-lib";
import { Bucket, BlockPublicAccess, EventType } from "aws-cdk-lib/aws-s3";
import { DockerImageFunction, DockerImageCode } from "aws-cdk-lib/aws-lambda";
import { PolicyStatement, Effect } from "aws-cdk-lib/aws-iam";
import { LambdaDestination } from "aws-cdk-lib/aws-s3-notifications";
import { IStringParameter, StringParameter } from "aws-cdk-lib/aws-ssm";
import { Secret, ISecret } from "aws-cdk-lib/aws-secretsmanager";
import { Construct } from "constructs";
import { AzureBlobStorage } from "./azure-blob-storage";

// L3 Construct
export class S3ToAzureBackupService extends Construct {
  constructor(
    scope: Construct,
    id: string,
    azureSubscriptionIdParamName: string,
    azureClientIdParamName: string,
    azureTenantIdParamName: string,
    azureClientSecretName: string
  ) {
    super(scope, id);

    // Retrieve existing SSM Parameters
    const azureSubscriptionIdParameter = this.getSSMParameter("AzureSubscriptionIdParam", azureSubscriptionIdParamName);
    const azureClientIdParameter = this.getSSMParameter("AzureClientIdParam", azureClientIdParamName);
    const azureTenantIdParameter = this.getSSMParameter("AzureTenantIdParam", azureTenantIdParamName);    
    
    // Retrieve existing Azure Client Secret
    const azureClientSecret = this.getSecret("AzureClientSecret", azureClientSecretName);

    // Create an S3 bucket
    const sourceBucket = new Bucket(this, "SourceBucketForAzureBlob", {
      removalPolicy: RemovalPolicy.RETAIN,
      blockPublicAccess: BlockPublicAccess.BLOCK_ALL,
    });

    // Create a corresponding Azure Blob Storage account and a Blob Container
    const azureBlobStorage = new AzureBlobStorage(
      this,
      "MyCustomAzureBlobStorage",
      azureSubscriptionIdParameter.stringValue,
      azureClientIdParameter.stringValue,
      azureTenantIdParameter.stringValue,
      azureClientSecretName
    );

    // Create a lambda function that will receive notifications from S3 bucket
    // and copy the new uploaded object to Azure Blob Storage
    const copyObjectToAzureLambda = new DockerImageFunction(
      this,
      "CopyObjectsToAzureLambda",
      {
        timeout: Duration.seconds(60),
        code: DockerImageCode.fromImageAsset("copy_s3_fn_code", {
          buildArgs: {
            "--platform": "linux/amd64"
          }
        }),
      },
    );

    // Add an IAM policy statement to allow the Lambda function to access the
    // S3 bucket
    sourceBucket.grantRead(copyObjectToAzureLambda);

    // Add an IAM policy statement to allow the Lambda function to get the contents
    // of an S3 object
    copyObjectToAzureLambda.addToRolePolicy(
      new PolicyStatement({
        effect: Effect.ALLOW,
        actions: ["s3:GetObject"],
        resources: [`arn:aws:s3:::${sourceBucket.bucketName}/*`],
      })
    );

    // Set up an S3 bucket notification to trigger the Lambda function
    // when an object is uploaded
    sourceBucket.addEventNotification(
      EventType.OBJECT_CREATED,
      new LambdaDestination(copyObjectToAzureLambda)
    );

    // Grant the Lambda function read access to existing SSM Parameters
    azureSubscriptionIdParameter.grantRead(copyObjectToAzureLambda);
    azureClientIdParameter.grantRead(copyObjectToAzureLambda);
    azureTenantIdParameter.grantRead(copyObjectToAzureLambda);

    // Put the Azure Blob Container Url into SSM Parameter Store
    this.createStringSSMParameter(
      "AzureBlobContainerUrl",
      "Azure blob container URL",
      "/s3toazurebackupservice/azureblobcontainerurl",
      azureBlobStorage.blobContainerUrl,
      copyObjectToAzureLambda
    );      

    // Grant the Lambda function read access to the secret
    azureClientSecret.grantRead(copyObjectToAzureLambda);

    // Output S3 bucket arn
    new CfnOutput(this, "sourceBucketArn", {
      value: sourceBucket.bucketArn,
      exportName: "sourceBucketArn",
    });

    // Output the Blob Container URL
    new CfnOutput(this, "azureBlobContainerUrl", {
      value: azureBlobStorage.blobContainerUrl,
      exportName: "azureBlobContainerUrl",
    });
  }

  // Helper methods used above; minimal implementations are assumed here.

  private getSSMParameter(id: string, parameterName: string): IStringParameter {
    return StringParameter.fromStringParameterName(this, id, parameterName);
  }

  private getSecret(id: string, secretName: string): ISecret {
    return Secret.fromSecretNameV2(this, id, secretName);
  }

  private createStringSSMParameter(
    id: string,
    description: string,
    parameterName: string,
    stringValue: string,
    reader: DockerImageFunction
  ): StringParameter {
    const parameter = new StringParameter(this, id, {
      description: description,
      parameterName: parameterName,
      stringValue: stringValue,
    });
    parameter.grantRead(reader);
    return parameter;
  }
}

The custom L3 construct can be used in larger IaC solutions by instantiating the S3ToAzureBackupService class and providing the names of the SSM parameters that hold the Azure credentials, along with the client secret name, as constructor arguments.

import * as cdk from "aws-cdk-lib";
import { Construct } from "constructs";
import { S3ToAzureBackupService } from "./s3-to-azure-backup-service";

export class MultiCloudBackupCdkStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const s3ToAzureBackupService = new S3ToAzureBackupService(
      this,
      "MyMultiCloudBackupService",
      "/s3toazurebackupservice/azuresubscriptionid",
      "/s3toazurebackupservice/azureclientid",
      "/s3toazurebackupservice/azuretenantid",
      "s3toazurebackupservice/azureclientsecret"
    );
  }
}

Solution Diagram

Diagram 1: IaC Single Control Plane demonstrates the concept of the Azure Blob Storage extension being imported from the AWS CloudFormation registry into the AWS CDK as an L1 CfnResource, wrapped into an L2 construct, and used in an L3 pattern alongside AWS resources to perform the specific task of backing up from an Amazon S3 bucket into an Azure Blob Storage container.

Multicloud IaC with CDK

Diagram 1: IaC Single Control Plane

The CDK application is then synthesized into one or more AWS CloudFormation templates, and the CloudFormation service deploys the AWS resource configurations to AWS and the Azure resource configurations to Azure.
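
The post does not show the application entry point; a minimal sketch, assuming the stack class above lives in lib/multi-cloud-backup-cdk-stack.ts, would look like this:

#!/usr/bin/env node
import * as cdk from "aws-cdk-lib";
import { MultiCloudBackupCdkStack } from "../lib/multi-cloud-backup-cdk-stack";

// Hypothetical bin/app.ts entry point; the stack id is illustrative.
const app = new cdk.App();
new MultiCloudBackupCdkStack(app, "MultiCloudBackupCdkStack");

Running cdk synth on this app produces the CloudFormation template, and cdk deploy hands that template to the CloudFormation service for provisioning.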

This solution demonstrates not only how to consolidate the management of secondary cloud resources into a unified infrastructure stack in AWS, but also how productivity improves by eliminating the complexity and cost of operating separate deployment mechanisms for multiple public cloud environments.

The following video demonstrates an example in real-time of the end-state solution:

Next Steps

While this was a straightforward example, the same approach lets you come up with far more complex scenarios where the AWS CDK can be used as a single pane of glass for IaC to manage multicloud and hybrid solutions.

To get started with the solution discussed in this post, this workshop provides the instructions you need to create the S3ToAzureBackupService.

Once you have learned how to create AWS CloudFormation extensions and develop them into AWS CDK Constructs, you will learn how, with just a few lines of code, you can develop reusable multicloud unified IaC solutions that deploy through a single AWS control plane.

Conclusion

By adopting AWS CloudFormation extensions and AWS CDK, deployed through a single AWS control plane, the cost and complexity of maintaining deployment pipelines across multiple cloud providers are reduced to a single, holistic, solution-focused pipeline. The techniques demonstrated in this post and the related workshop provide a capability to simplify the design of complex systems, improve the management of integration, and more closely align the IaC and deployment management practices with the design.

About the authors:

Aaron Sempf

Aaron Sempf is a Global Principal Partner Solutions Architect in the Global Systems Integrators team. With over twenty years in software engineering and distributed systems, he focuses on solving large-scale integration and event-driven systems problems. When not working with AWS GSI partners, he can be found coding prototypes for autonomous robots, IoT devices, and distributed solutions.

 
Puneet Talwar

Puneet Talwar is a Senior Solutions Architect at Amazon Web Services (AWS) on the Australian Public Sector team. With a background of over twenty years in software engineering, he particularly enjoys helping customers build modern, API-driven software architectures at scale. In his spare time, he can be found building prototypes for micro frontends and event-driven architectures.

Metasploit Weekly Wrap-Up

Post Syndicated from Christopher Granleese original https://blog.rapid7.com/2023/09/08/metasploit-weekly-wrap-up-26/

New module content (4)

Roundcube TimeZone Authenticated File Disclosure

Authors: joel, stonepresto, and thomascube
Type: Auxiliary
Pull request: #18286 contributed by cudalac
Path: auxiliary/gather/roundcube_auth_file_read
AttackerKB reference: CVE-2017-16651

Description: This PR adds a module that retrieves arbitrary files from hosts running Roundcube versions 1.1.0 through 1.3.2.

Elasticsearch Memory Disclosure

Authors: Eric Howard, R0NY, and h00die
Type: Auxiliary
Pull request: #18322 contributed by h00die
Path: auxiliary/scanner/http/elasticsearch_memory_disclosure
AttackerKB reference: CVE-2021-22145

Description: Adds an aux scanner module which exploits a memory disclosure vulnerability within Elasticsearch 7.10.0 to 7.13.3 (inclusive) by submitting a malformed query that generates an error message containing previously used portions of a data buffer. The disclosed memory could contain sensitive information such as Elasticsearch documents or authentication details.
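
A typical invocation would look something like the following (the target address is a placeholder; confirm the module's available options with show options):

msf6 > use auxiliary/scanner/http/elasticsearch_memory_disclosure
msf6 auxiliary(scanner/http/elasticsearch_memory_disclosure) > set RHOSTS 192.0.2.10
msf6 auxiliary(scanner/http/elasticsearch_memory_disclosure) > run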

QueueJumper – MSMQ RCE Check

Authors: Bastian Kanbach, Haifei Li, and Wayne Low
Type: Auxiliary
Pull request: #18281 contributed by bka-dev
Path: auxiliary/scanner/msmq/cve_2023_21554_queuejumper
AttackerKB reference: CVE-2023-21554

Description: This PR adds a module that detects Windows hosts that are vulnerable to Microsoft Message Queuing Remote Code Execution aka QueueJumper.

SolarView Compact unauthenticated remote command execution vulnerability

Author: h00die-gr3y
Type: Exploit
Pull request: #18313 contributed by h00die-gr3y
Path: exploits/linux/http/solarview_unauth_rce_cve_2023_23333
AttackerKB reference: CVE-2023-23333

Description: This PR adds a module which exploits a vulnerability that allows remote code execution on a vulnerable SolarView Compact device by bypassing internal restrictions through the vulnerable endpoint downloader.php using the file parameter. Firmware versions up to v6.33 are vulnerable.

Enhancements and features (2)

  • #18179 from jvoisin – This improves the windows checkvm post module by adding new techniques to identify the hypervisor in which the session is running.
  • #18190 from jvoisin – This improves the linux checkvm post module by adding new techniques to identify the hypervisor in which the session is running.

Documentation

You can find the latest Metasploit documentation on our docsite at docs.metasploit.com.

Get it

As always, you can update to the latest Metasploit Framework with msfupdate and you can get more details on the changes since the last blog post from GitHub:

If you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest. To install fresh without using git, you can use the open-source-only Nightly Installers or the binary installers (which also include the commercial edition).

Managing Amazon EBS volume throughput limits in Amazon OpenSearch Service domains

Post Syndicated from Pranit Kumar original https://aws.amazon.com/blogs/big-data/managing-amazon-ebs-volume-throughput-limits-in-amazon-opensearch-service-domains/

In this blog post, we discuss the impact of Amazon Elastic Block Store (Amazon EBS) volume IOPS and throughput limits on an Amazon OpenSearch Service domain and how to prevent or mitigate throughput throttling.

Amazon OpenSearch Service is a managed service that makes it easy for you to perform website searches, interactive log analytics, real-time application monitoring, and more. Based on the open source OpenSearch suite, Amazon OpenSearch Service allows you to search, visualize, and analyze up to petabytes of text and unstructured data.

An OpenSearch Service domain primarily contains nodes with the following set of roles.

  • Cluster manager (dedicated master): Responsible for managing the cluster and checking the health of the data nodes in the cluster.
  • Data: Responsible for serving search and indexing requests and storing the indexed data.
  • UltraWarm: Nodes that use Amazon S3 as a backing store to provide lower-cost storage.

When creating an OpenSearch Service domain, you choose the storage for the data nodes: either local Non-Volatile Memory Express (NVMe) storage or Amazon EBS volumes.

If the OpenSearch Service data node storage is backed by Amazon EBS volumes, then depending on your workload, EBS throughput can heavily influence the performance of the OpenSearch Service domain. EBS volume performance is characterized by the following two key parameters.

  • IOPS defines the number of I/O operations performed per second.
  • Throughput is a measure of how much data can be transferred in a given amount of time. It is usually measured in bytes per second.

Whenever the IOPS or throughput of a data node breaches the maximum allowed limit of its EBS volume or EC2 instance, the OpenSearch Service domain experiences IOPS or throughput throttling. This can result in high search and indexing latency and, in the worst case, node crashes.

Maximum allowed IOPS and throughput for the data node

The maximum allowed value for IOPS or throughput for a data node in an OpenSearch Service domain is the minimum of the following two values.

  • The maximum IOPS or throughput supported by the EBS volume attached to the data node.
  • The maximum EBS IOPS or throughput supported by the EC2 instance type of the data node.

Throughput throttling and its impact on an Amazon OpenSearch Service domain

Throughput throttling happens when the total EBS throughput on a data node exceeds the maximum allowed throughput value of that data node in the OpenSearch Service domain.

The ThroughputThrottle metric for the domain or node can be seen in the Amazon CloudWatch console at the following location.

  • Domain: “ES/OpenSearchService > Per-Domain, Per-Client Metrics”
  • Node: “ES/OpenSearchService > ClientId, DomainName, NodeId”

The value of 1 in the ThroughputThrottle metric signifies a throttling event for the domain or node.
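
As an illustrative sketch (not from the original post), an alarm on this metric can be defined with the AWS CDK; the namespace, dimension names, statistic, and placeholder values below are assumptions to verify against the metrics you actually see for your domain:

import { Duration, Stack } from "aws-cdk-lib";
import { Alarm, ComparisonOperator, Metric } from "aws-cdk-lib/aws-cloudwatch";
import { Construct } from "constructs";

// Assumption: OpenSearch Service domain metrics are published in the "AWS/ES"
// namespace with DomainName and ClientId dimensions; verify this in CloudWatch.
export class ThroughputThrottleAlarmStack extends Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    const throughputThrottle = new Metric({
      namespace: "AWS/ES",
      metricName: "ThroughputThrottle",
      dimensionsMap: {
        DomainName: "my-domain",  // placeholder domain name
        ClientId: "111122223333", // placeholder account ID
      },
      statistic: "Maximum",
      period: Duration.minutes(1),
    });

    // Alarm whenever a throttling event (metric value 1) is recorded.
    new Alarm(this, "ThroughputThrottleAlarm", {
      metric: throughputThrottle,
      threshold: 1,
      comparisonOperator: ComparisonOperator.GREATER_THAN_OR_EQUAL_TO_THRESHOLD,
      evaluationPeriods: 1,
    });
  }
}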

If a data node in the domain experiences throughput throttling for a sustained period, it can result in the following performance degradation for the data node.

  • Slower EBS volume performance.
  • High read/write latency.

This can affect the checks performed by the cluster manager or data node. It can result in:

  • FS (file system) health check failures on the data node.
  • Follower check failures performed by the cluster manager due to high request latency.

This results in the cluster manager marking such data nodes as unhealthy and removing them from the cluster, which can lead to a yellow or red cluster status.

Throughput value calculation

Total throughput for the data node is the total bytes read from and written to the EBS volume per second. The following metrics provide the read and write throughput for the data node in the Amazon OpenSearch Service domain.

Total throughput for the data node in the OpenSearch Service domain is calculated as follows.

Throughput = ReadThroughputMicroBursting + WriteThroughputMicroBursting

To get total throughput for the data node, follow these steps.

  1. Go to Amazon CloudWatch metrics.
  2. Go to ES/OpenSearchService > ClientId, DomainName, NodeId.
  3. Select the ReadThroughputMicroBursting and WriteThroughputMicroBursting metrics.
  4. Go to Graphed metrics.
  5. Use Add math and create a formula to sum the ReadThroughputMicroBursting and WriteThroughputMicroBursting values (a programmatic sketch of the same metric math follows this list).
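
The same sum can also be expressed with CloudWatch metric math; the sketch below uses the AWS CDK, and the namespace, dimension names, and placeholder identifiers are assumptions to check against your own domain:

import { Duration } from "aws-cdk-lib";
import { MathExpression, Metric } from "aws-cdk-lib/aws-cloudwatch";

// Placeholders: substitute your own domain name, account ID, and node ID.
const dimensionsMap = {
  DomainName: "my-domain",
  ClientId: "111122223333",
  NodeId: "node-id",
};

// Total throughput = read micro-bursting throughput + write micro-bursting throughput.
const totalThroughput = new MathExpression({
  expression: "readTp + writeTp",
  usingMetrics: {
    readTp: new Metric({
      namespace: "AWS/ES",
      metricName: "ReadThroughputMicroBursting",
      dimensionsMap,
      statistic: "Average",
      period: Duration.minutes(1),
    }),
    writeTp: new Metric({
      namespace: "AWS/ES",
      metricName: "WriteThroughputMicroBursting",
      dimensionsMap,
      statistic: "Average",
      period: Duration.minutes(1),
    }),
  },
});

totalThroughput can then be graphed or alarmed on just like any other metric.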

Handling throughput throttling

When the maximum allowed throughput limit is breached on a data node in an OpenSearch Service domain, a disk throughput throttle notification is sent to the AWS console. Throughput throttling on the data node can happen for various reasons, such as the following.

  • A sudden increase in the index rate or search rate to the data node of the OpenSearch Service domain.
  • A blue/green event happening on the OpenSearch Service domain during peak hours.
  • The OpenSearch Service domain is under-scaled.

We suggest the following measures to prevent throughput throttling for the OpenSearch Service domain.

  • Monitor the traffic to the OpenSearch Service domain and create alarms on the search and index traffic sent to the OpenSearch Service domain.
  • Set up off-peak hours for the OpenSearch Service domain so that updates that lead to blue/green deployments are executed when there is less demand.
  • Monitor the ThroughputThrottle cluster metrics for the OpenSearch Service domain.
  • Monitor shard skewness for the OpenSearch Service domain. Shard skewness can lead to uneven load distribution of traffic to data nodes and can create hot nodes in the cluster, which experience high index and search traffic that results in throttling.
  • If you are hitting the EBS volume or EC2 instance throughput limits for the data node, you will need to scale up the OpenSearch Service domain to avoid throughput throttling. Check the limits provided by EBS volumes and Amazon EBS-optimized instances used by the data node and scale up the OpenSearch cluster accordingly.

Every scenario requires specific investigation and the appropriate measures to resolve it. Still, we suggest the following guidelines as part of a broader approach to handling throughput throttling.

  • If high throughput is seen on a specific set of data nodes most of the time, shard skewness may be causing hot nodes. In such cases, resolving the shard skewness will help.
  • If the OpenSearch Service domain is experiencing uneven traffic patterns, check for sudden bursts resulting in throttling. In such scenarios, streamlining the traffic pattern can be helpful.
  • If throughput throttling is seen on most of the nodes in the cluster with consistent traffic patterns, consider scaling up the OpenSearch Service domain.

Conclusion

In this post, we covered Amazon EBS throughput throttling in an OpenSearch Service domain, its impact, and ways to monitor and handle it. We also provided suggestions for handling such throttling situations.

About the Authors

Pranit Kumar is a Sr. Software Dev Engineer working on OpenSearch at Amazon Web Services. He is interested in distributed systems and solving complex problems.

Dhrubajyoti Das is an Engineering Manager working on OpenSearch at Amazon Web Services. He is deeply interested in highly scalable systems and infrastructure-related challenges.

Identify regional feature parity using the AWS CloudFormation registry

Post Syndicated from Matt Howard original https://aws.amazon.com/blogs/devops/identify-regional-feature-parity-using-the-aws-cloudformation-registry/

The AWS Cloud spans more than 30 geographic regions around the world and is continuously adding new locations. When a new region launches, a core set of services is included, with additional services launching within 12 months of the region launch. As your business grows, so does your need to expand to new regions and new markets, and it’s imperative that you understand which services and features are available in a region prior to launching your workload.

In this post, I’ll demonstrate how you can query the AWS CloudFormation registry to identify which services and features are supported within a region, so you can make informed decisions on which regions are currently compatible with your application’s requirements.

CloudFormation registry

The CloudFormation registry contains information about the AWS and third-party extensions, such as resources, modules, and hooks, that are available for use in your AWS account. You can use the CloudFormation API to list all the available AWS public extensions within a region. As resource availability may vary by region, you can refer to that region’s CloudFormation registry to get an accurate list of its service and feature offerings.

To view the AWS public extensions available in the region, you can use the following AWS Command Line Interface (AWS CLI) command which calls the list-types CloudFormation API. This API call returns summary information about extensions that have been registered with the CloudFormation registry. To learn more about the AWS CLI, please check out our Get started with the AWS CLI documentation page.

aws cloudformation list-types --visibility PUBLIC --filters Category=AWS_TYPES --region us-east-2

The output of this command is the list of CloudFormation extensions available in the us-east-2 region. The call is filtered to restrict visibility to PUBLIC, which limits the returned list to extensions that are publicly visible and available to be activated within any AWS account. It is also filtered to the AWS_TYPES category so that only extensions provided by Amazon are listed. The region filter determines which region to query and therefore which region’s CloudFormation registry types to list. A snippet of the output of this command is below:

{
  "TypeSummaries": [
    {
      "Type": "RESOURCE",
      "TypeName": "AWS::ACMPCA::Certificate",
      "TypeArn": "arn:aws:cloudformation:us-east-2::type/resource/AWS-ACMPCA-Certificate",
      "LastUpdated": "2023-07-20T13:58:56.947000+00:00",
      "Description": "A certificate issued via a private certificate authority"
    },
    {
      "Type": "RESOURCE",
      "TypeName": "AWS::ACMPCA::CertificateAuthority",
      "TypeArn": "arn:aws:cloudformation:us-east-2::type/resource/AWS-ACMPCA-CertificateAuthority",
      "LastUpdated": "2023-07-19T14:06:07.618000+00:00",
      "Description": "Private certificate authority."
    },
    {
      "Type": "RESOURCE",
      "TypeName": "AWS::ACMPCA::CertificateAuthorityActivation",
      "TypeArn": "arn:aws:cloudformation:us-east-2::type/resource/AWS-ACMPCA-CertificateAuthorityActivation",
      "LastUpdated": "2023-07-20T13:45:58.300000+00:00",
      "Description": "Used to install the certificate authority certificate and update the certificate authority status."
    }
  ]
}

The full output lists all of the Amazon-provided CloudFormation resource types available within the us-east-2 region; the snippet above shows three AWS Private Certificate Authority resource types. You can see that these match the AWS Private Certificate Authority resource type reference documentation.

Filtering the API response

You can also perform client-side filtering and set the output format on the AWS CLI’s response to make the list of resource types easy to parse. In the command below, the output parameter is set to text and is used with the query parameter to return only the TypeName field for each resource type.

aws cloudformation list-types --visibility PUBLIC --filters Category=AWS_TYPES --region us-east-2 --output text --query 'TypeSummaries[*].[TypeName]'

This removes extraneous information such as the description and last-updated fields. A snippet of the resulting output looks like this:

AWS::ACMPCA::Certificate
AWS::ACMPCA::CertificateAuthority
AWS::ACMPCA::CertificateAuthorityActivation

Now you have a method of generating a consolidated list of all the resource types CloudFormation supports within the us-east-2 region.

Comparing two regions

Now that you know how to generate a list of CloudFormation resource types in a region, you can compare it with a region you plan to expand your workload to, such as the Israel (Tel Aviv) region, which launched in August 2023. This region launched with core services available, and AWS service teams are hard at work bringing additional services and features to the region.

Adjust your command above by changing the region parameter from us-east-2 to il-central-1, which allows you to list all the CloudFormation resource types in the Israel (Tel Aviv) region.

aws cloudformation list-types --visibility PUBLIC --filters Category=AWS_TYPES --region il-central-1 --output text --query 'TypeSummaries[*].[TypeName]'

Now compare the differences between the two regions to understand which services and features may not have launched in the Israel (Tel Aviv) region yet. You can use the diff command to compare the output of the two CloudFormation registry queries:

diff -y <(aws cloudformation list-types --visibility PUBLIC --filters Category=AWS_TYPES --region us-east-2 --output text --query 'TypeSummaries[*].[TypeName]') <(aws cloudformation list-types --visibility PUBLIC --filters Category=AWS_TYPES --region il-central-1 --output text --query 'TypeSummaries[*].[TypeName]')

Here’s an example snippet of the command’s output:

AWS::S3::AccessPoint                   AWS::S3::AccessPoint
AWS::S3::Bucket                        AWS::S3::Bucket
AWS::S3::BucketPolicy                  AWS::S3::BucketPolicy
AWS::S3::MultiRegionAccessPoint         <
AWS::S3::MultiRegionAccessPointPolicy   <
AWS::S3::StorageLens                    <
AWS::S3ObjectLambda::AccessPoint       AWS::S3ObjectLambda::AccessPoint

Here you see the regional parity of services supported by CloudFormation, down to the feature level. Amazon Simple Storage Service (Amazon S3) is a core service that was available at the Israel (Tel Aviv) region’s launch. However, certain Amazon S3 features, such as Storage Lens and Multi-Region Access Points, have not yet launched in the region.

With this level of detail, you are able to accurately determine if the region you’re considering for expansion currently has the service and feature offerings necessary to support your workload.

Evaluating CloudFormation stacks

Now that you know how to compare the CloudFormation resource types supported between two regions, you can make this more applicable by evaluating an existing CloudFormation stack and determining if the resource types specified in the stack are available in a region.

As an example, you can deploy the sample LAMP stack scalable and durable template which can be found, among others, in our Sample templates documentation page. Instructions on how to deploy the stack in your own account can be found in our CloudFormation Get started documentation.

You can use the list-stack-resources API to query the stack and return the list of resource types used within it. You again use client-side filtering and set the output format on the AWS CLI’s response to make the list of resource types easy to parse.

aws cloudformation list-stack-resources --stack-name PHPHelloWorldSample --region us-east-2 --output text --query 'StackResourceSummaries[*].[ResourceType]'

This provides the following list:

AWS::ElasticLoadBalancingV2::Listener
AWS::ElasticLoadBalancingV2::TargetGroup
AWS::ElasticLoadBalancingV2::LoadBalancer
AWS::EC2::SecurityGroup
AWS::RDS::DBInstance
AWS::EC2::SecurityGroup

Next, use the command below, which uses grep with the -v flag to compare the Israel (Tel Aviv) region’s available CloudFormation registry resource types with the resource types used in the CloudFormation stack.

grep -v -f <(aws cloudformation list-types --visibility PUBLIC --filters Category=AWS_TYPES --region il-central-1 --output text --query 'TypeSummaries[*].[TypeName]') <(aws cloudformation list-stack-resources --stack-name PHPHelloWorldSample --region us-east-2 --output text --query 'StackResourceSummaries[*].[ResourceType]') 

The output is blank, which indicates all of the CloudFormation resource types specified in the stack are available in the Israel (Tel Aviv) region.

Now try an example where a service or feature may not yet be launched in the region, AWS Cloud9 for example. Update the stack template to include the AWS::Cloud9::EnvironmentEC2 resource type. To do this, include the following lines within the CloudFormation template JSON file’s Resources section, as shown below, and update the stack. Instructions on how to modify a CloudFormation template and update the stack can be found in the AWS CloudFormation stack updates documentation.

{
  "Cloud9": {
    "Type": "AWS::Cloud9::EnvironmentEC2",
    "Properties": {
      "InstanceType": "t3.micro"
    }
  }
}

Now, rerun the grep command you used previously.

grep -v -f <(aws cloudformation list-types --visibility PUBLIC --filters Category=AWS_TYPES --region il-central-1 --output text --query 'TypeSummaries[*].[TypeName]') <(aws cloudformation list-stack-resources --stack-name PHPHelloWorldSample --region us-east-2 --output text --query 'StackResourceSummaries[*].[ResourceType]') 

The output returns the line below, indicating that the AWS::Cloud9::EnvironmentEC2 resource type is not yet present in the CloudFormation registry for the Israel (Tel Aviv) region. You would not be able to deploy this resource type in that region.

AWS::Cloud9::EnvironmentEC2

To clean up, delete the stack you deployed by following our documentation on Deleting a stack.

This solution can be expanded to evaluate all of your CloudFormation stacks within a region. To do this, you would use the list-stacks API to list all of your stack names and then loop through each one by calling the list-stack-resources API to generate a list of all the resource types used in your CloudFormation stacks within the region. Finally, you’d use the grep example above to compare the list of resource types contained in all of your stacks with the CloudFormation registry for the region.
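
A rough sketch of that loop, using the same CLI conventions as above, is shown below; the stack-status filter and the file name are illustrative choices rather than something prescribed by the post:

# Collect the resource types used by all completed stacks in us-east-2.
for stack in $(aws cloudformation list-stacks --region us-east-2 \
    --stack-status-filter CREATE_COMPLETE UPDATE_COMPLETE \
    --output text --query 'StackSummaries[*].[StackName]'); do
  aws cloudformation list-stack-resources --stack-name "$stack" --region us-east-2 \
    --output text --query 'StackResourceSummaries[*].[ResourceType]'
done | sort -u > stack-resource-types.txt

# Report any resource type not yet available in the Israel (Tel Aviv) region.
grep -v -f <(aws cloudformation list-types --visibility PUBLIC --filters Category=AWS_TYPES \
    --region il-central-1 --output text --query 'TypeSummaries[*].[TypeName]') stack-resource-types.txt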

A note on opt-in regions

If you intend to compare a newly launched region, you first need to enable the region, which will then allow you to perform the AWS CLI queries provided above. This is because only regions introduced prior to March 20, 2019 are enabled by default. For example, to query the Israel (Tel Aviv) region you must first enable it. You can learn more about how to enable new AWS Regions on our documentation page, Specifying which AWS Regions your account can use.
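
For example, enabling the Israel (Tel Aviv) region and checking its opt-in status can be done from the AWS CLI (a sketch; it assumes your credentials are permitted to manage account regions):

aws account enable-region --region-name il-central-1
aws account get-region-opt-status --region-name il-central-1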

Conclusion

In this blog post, I demonstrated how you can query the CloudFormation registry to compare resource availability between two regions. I also showed how you can evaluate existing CloudFormation stacks to determine if they are compatible in another region. With this solution, you can make informed decisions regarding your regional expansion based on the current service and feature offerings within a region. While this is an effective solution to compare regional availability, please consider these key points:

  1. This is a point-in-time snapshot of a region’s service offerings, and service teams regularly add services and features following a new region launch. I recommend that you share your interest in local region delivery and/or request service roadmap information by contacting your AWS sales representative.
  2. A feature may not yet have CloudFormation support within the region, which means it won’t appear in the registry, even though the feature may be available via the console or API within the region.
  3. This solution will not provide details on the properties available within a resource type.

 

Matt Howard

Matt is a Principal Technical Account Manager (TAM) for AWS Enterprise Support. As a TAM, Matt provides advocacy and technical guidance to help customers plan and build solutions using AWS best practices. Outside of AWS, Matt enjoys spending time with family, sports, and video games.

ASRock Rack GENOAD8X-2T/BCM Review An Uncomfortably Good Motherboard

Post Syndicated from Patrick Kennedy original https://www.servethehome.com/asrock-rack-genoad8x-2t-bcm-review-an-uncomfortably-good-motherboard-amd-epyc/

We review the ASRock Rack GENOAD8X-2T/BCM, a massive AMD EPYC motherboard sporting seven PCIe Gen5 x16 slots and one x8 slot.

The post ASRock Rack GENOAD8X-2T/BCM Review An Uncomfortably Good Motherboard appeared first on ServeTheHome.

Benjamin: Towards a new SymPy

Post Syndicated from jake original https://lwn.net/Articles/943995/

In a series of posts on his blog, Oscar Benjamin looks at SymPy, which is a Python-based symbolic-mathematics library. In the first article, he outlines the “big changes for SymPy with particular focus on speed“. The second covers polynomial handling; subsequent articles will examine other pieces of the puzzle.

I will be writing this in a series of blog posts. This first post will outline the structure of the foundations of a computer algebra system (CAS) like SymPy, describe some problems SymPy currently has and what can be done to address them. Then subsequent posts will focus in more detail on particular components and the work that has been done and what should be done in the future.

[$] Prerequisites for large anonymous folios

Post Syndicated from corbet original https://lwn.net/Articles/943758/

The work to add support for large anonymous folios to the kernel has been underway for some time, but this feature has not yet landed in the mainline. The author of this work, Ryan Roberts, has been trying to get a handle on what the remaining obstacles are so he can address them. On September 6, an online meeting of memory-management developers discussed that topic and made some progress; there is still some work to do, though, before large anonymous folios can go upstream.
