Sending events to Amazon EventBridge from AWS Organizations accounts

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/sending-events-to-amazon-eventbridge-from-aws-organizations-accounts/

This post is written by Elinisa Canameti, Associate Cloud Architect, and Iris Kraja, Associate Cloud Architect.

AWS Organizations provides a hierarchical grouping of multiple accounts. This helps larger projects establish a central management system that governs the infrastructure, which can help meet budgetary and security requirements.

Amazon EventBridge is a serverless event-driven service that delivers events from custom applications, AWS services, or software as a service (SaaS) applications to targets.

This post shows how to send events from multiple accounts in AWS Organizations to the management account using AWS CDK. This uses AWS CloudFormation StackSets to deploy the infrastructure in the member accounts.

Solution overview

This blog post describes a centralized event management strategy in a multi-account setup. The AWS accounts are organized using the AWS Organizations service into one management account and many member accounts. You can explore and deploy the reference solution from this GitHub repository.

In the example, there are events from three different AWS services: Amazon CloudWatch, AWS Config, and Amazon GuardDuty. These are sent from the member accounts to the management account.

The management account publishes to an Amazon SNS topic, which sends emails when these events occur. AWS CloudFormation StackSets are created in the management account to deploy the infrastructure in the member accounts.

Overview

To fully automate the process of adding accounts to and removing accounts from the organization unit, an EventBridge rule is triggered:

  • When an account is moved into or out of the organization unit (MoveAccount event).
  • When an account is removed from the organization (RemoveAccountFromOrganization event).

These events invoke an AWS Lambda function, which updates the management EventBridge rule with the additional source accounts.

Prerequisites

  1. At least one AWS account to act as a member account.
  2. An AWS account to act as the management account.
  3. The AWS Command Line Interface (AWS CLI) installed.
  4. The AWS CDK installed.

Set up the environment with AWS Organizations

1. Log in to the management account used to manage your organization.

2. Select the AWS Organizations service. Choose Create Organization. Once you create the organization, you receive a verification email.

3. After verification is completed, you can add other customer accounts into the organization.

4. AWS sends an invitation to each account added under the root account.

5. To see the invitation, log in to each of the accounts and search for the AWS Organizations service. You can find the invitation option listed in the navigation pane.

6. Accept the invitation from the root account.

7. Create an organization unit (OU) and place in it all the member accounts that should propagate events to the management account. The OU identifier is used later to deploy the StackSet.

8. Finally, enable trusted access for AWS CloudFormation StackSets in the organization, so that you can create StackSets and deploy resources from the management account to the member accounts.

Management account

After the initial deployment of the solution in the management account, a StackSet is created. It's configured to deploy the member account infrastructure in all the accounts of the organization unit.

When you add a new account to the organization unit, this StackSet automatically deploys the specified resources. Read the Member account section for more information about the resources in this stack.
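
The exact StackSet definition lives in the repository; the following CDK sketch only illustrates the idea of a service-managed StackSet with automatic deployment. The construct ID, StackSet name, memberTemplateBody, and organizationalUnitId are illustrative assumptions, not the repository's actual values.

import { Stack } from 'aws-cdk-lib';
import { CfnStackSet } from 'aws-cdk-lib/aws-cloudformation';

// Service-managed StackSet that deploys the member stack to every account
// in the target organization unit, automatically following OU membership.
// `memberTemplateBody` and `organizationalUnitId` are assumed to be defined elsewhere.
new CfnStackSet(this, 'MemberAccountsStackSet', {
  stackSetName: 'EventBridgeMemberAccountsStackSet',
  permissionModel: 'SERVICE_MANAGED',
  capabilities: ['CAPABILITY_IAM'],
  autoDeployment: {
    enabled: true,                       // deploy when an account joins the OU
    retainStacksOnAccountRemoval: false  // clean up when an account leaves the OU
  },
  stackInstancesGroup: [{
    deploymentTargets: { organizationalUnitIds: [organizationalUnitId] },
    regions: [Stack.of(this).region]
  }],
  templateBody: memberTemplateBody
});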

All events coming from the member accounts pass through the custom event bus in the management account. To allow other accounts to put events onto this event bus, its resource policy grants permission to every account in the organization:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowAllAccountsInOrganizationToPutEvents",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "events:PutEvents",
    "Resource": "arn:aws:events:us-east-1:xxxxxxxxxxxx:event-bus/CentralEventBus ",
    "Condition": {
      "StringEquals": {
        "aws:PrincipalOrgID": "o-xxxxxxxx"
      }
    }
  }]
}

You must configure EventBridge in the management account to handle the events coming from the member accounts. In this example, you send an email with the event information using SNS. Create a new rule with the following event pattern:

Event pattern
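
The original post shows this pattern as an image. In this solution the rule is actually created and kept up to date by the custom resource Lambda function described later, but as a rough CDK sketch of the equivalent rule on the central bus (centralEventBus, memberAccountIds, and notificationTopic are assumed to be defined elsewhere):

import { Rule } from 'aws-cdk-lib/aws-events';
import { SnsTopic } from 'aws-cdk-lib/aws-events-targets';

// Match events that member accounts forward for the three services used in
// this example, and send them to the SNS notification topic.
new Rule(this, 'CentralNotificationRule', {
  ruleName: 'CentralNotificationRule',
  eventBus: centralEventBus,                 // the custom CentralEventBus
  eventPattern: {
    account: memberAccountIds,               // e.g. ['111111111111', '222222222222']
    source: ['aws.cloudwatch', 'aws.config', 'aws.guardduty']
  },
  targets: [new SnsTopic(notificationTopic)]
});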

When you add or remove accounts in the organization unit, an EventBridge rule invokes a Lambda function to update the rule in the management account. This rule reacts to two events from the aws.organizations source: MoveAccount and RemoveAccountFromOrganization.

Event pattern

// EventBridge to trigger updateRuleFunction Lambda whenever a new account is added or removed from the organization.
new Rule(this, 'AWSOrganizationAccountMemberChangesRule', {
   ruleName: 'AWSOrganizationAccountMemberChangesRule',
   eventPattern: {
      source: ['aws.organizations'],
      detailType: ['AWS API Call via CloudTrail'],
      detail: {
        eventSource: ['organizations.amazonaws.com'],
        eventName: [
          'RemoveAccountFromOrganization',
          'MoveAccount'
        ]
      }
    },
    targets: [
      new LambdaFunction(updateRuleFunction)
    ]
});

Custom resource Lambda function

The custom resource Lambda function runs custom logic whenever the CloudFormation stack is created, updated, or deleted.

// CloudFormation custom resource event
switch (event.RequestType) {
  case "Create":
    await putRule(accountIds);
    break;
  case "Update":
    await putRule(accountIds);
    break;
  case "Delete":
    await deleteRule();
    break;
}

async function putRule(accounts) {
  await eventBridgeClient.putRule({
    Name: rule.name,
    Description: rule.description,
    EventBusName: eventBusName,
    EventPattern: JSON.stringify({
      account: accounts,
      source: rule.sources
    })
  }).promise();
  await eventBridgeClient.putTargets({
    Rule: rule.name,
    Targets: [
      {
        Arn: snsTopicArn,
        Id: `snsTarget-${rule.name}`
      }
    ]
  }).promise();
}
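
The deleteRule helper referenced in the Delete branch isn't shown in the post. A sketch of what it can look like, assuming the same AWS SDK v2 client and the same module-scope rule and eventBusName values used by putRule:

async function deleteRule() {
  // Targets must be removed before the rule itself can be deleted.
  await eventBridgeClient.removeTargets({
    Rule: rule.name,
    EventBusName: eventBusName,
    Ids: [`snsTarget-${rule.name}`]
  }).promise();
  await eventBridgeClient.deleteRule({
    Name: rule.name,
    EventBusName: eventBusName
  }).promise();
}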

Amazon EventBridge triggered Lambda function code

// A new AWS account was moved into the organization unit or out of it.
if (eventName === 'MoveAccount' || eventName === 'RemoveAccountFromOrganization') {
   await putRule(accountIds);
}

All events generated from the member accounts are then sent to an SNS topic, which has an email address as an endpoint. More sophisticated targets can be configured depending on the application's needs, such as a Step Functions state machine, a Kinesis stream, or an SQS queue.

Member account

In the member account, we use an Amazon EventBridge rule to route all the events coming from Amazon CloudWatch, AWS Config, and Amazon GuardDuty to the event bus created in the management account.

    const rule = {
      name: 'MemberEventBridgeRule',
      sources: ['aws.cloudwatch', 'aws.config', 'aws.guardduty'],
      description: 'The Rule propagates all Amazon CloudWatch Events, AWS Config Events, AWS Guardduty Events to the management account'
    }

    const cdkRule = new Rule(this, rule.name, {
      description: rule.description,
      ruleName: rule.name,
      eventPattern: {
        source: rule.sources,
      }
    });
    cdkRule.addTarget({
      bind(_rule: IRule, generatedTargetId: string): RuleTargetConfig {
        return {
          arn: `arn:aws:events:${process.env.REGION}:${process.env.CDK_MANAGEMENT_ACCOUNT}:event-bus/${eventBusName.valueAsString}`,
          id: generatedTargetId,
          role: publishingRole
        };
      }
    });

Deploying the solution

Bootstrap the management account:

npx cdk bootstrap  \ 
    --profile <MANAGEMENT ACCOUNT AWS PROFILE>  \ 
    --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess  \
    aws://<MANAGEMENT ACCOUNT ID>/<REGION>

To deploy the stack, use the cdk-deploy-to.sh script and pass as arguments the management account ID, Region, AWS Organization ID, member organization unit ID, and an accessible email address.

sh ./cdk-deploy-to.sh <MANAGEMENT ACCOUNT ID> <REGION> <AWS ORGANIZATION ID> <MEMBER ORGANIZATION UNIT ID> <EMAIL ADDRESS> AwsOrganizationsEventBridgeSetupManagementStack 

After the management stack is deployed, you receive a subscription email; make sure to confirm the subscription to the SNS topic.

After the deployment is completed, the StackSet starts deploying the infrastructure to the member accounts. You can view this process in the AWS Management Console, under the AWS CloudFormation service in the management account as shown in the following image, or by logging in to a member account under CloudFormation stacks.

Stack output

Testing the environment

This project deploys an Amazon CloudWatch billing alarm to the member accounts to test that email notifications are delivered when this alarm fires. Using a member account's credentials, run the following AWS CLI command to change the alarm status to “In Alarm”:

aws cloudwatch set-alarm-state --alarm-name 'BillingAlarm' --state-value ALARM --state-reason "testing only" 

You receive emails at the email address configured as an endpoint on the Amazon SNS topic.

Conclusion

This blog post shows how to use AWS Organizations to organize your application’s accounts by using organization units and how to centralize event management using Amazon EventBridge across accounts in the organization. A fully automated solution is provided to ensure that adding new accounts to the organization unit is efficient.

For more serverless learning resources, visit https://serverlessland.com.

Migrating a self-managed message broker to Amazon SQS

Post Syndicated from Vikas Panghal original https://aws.amazon.com/blogs/architecture/migrating-a-self-managed-message-broker-to-amazon-sqs/

Amazon Payment Services is a payment service provider that operates across the Middle East and North Africa (MENA) geographic regions. Our mission is to provide online businesses with an affordable and trusted payment experience. We provide a secure online payment gateway that is straightforward and safe to use.

Amazon Payment Services has regional experts in payment processing technology in eight countries throughout the Gulf Cooperation Council (GCC) and Levant regional areas. We offer solutions tailored to businesses in their international and local currency. We are continuously improving and scaling our systems to deliver with near-real-time processing capabilities. Everything we do is aimed at creating safe, reliable, and rewarding payment networks that connect the Middle East to the rest of the world.

Our use case of message queues

Our business built a high-throughput and resilient queueing system to send messages to our customers. Our implementation relied on a self-managed RabbitMQ cluster and consumers. A consumer is software that subscribes to a topic name in the queue. Once subscribed, the consumer receives for processing any message published to the queue and tagged with that topic name. The cluster and consumers were both deployed on Amazon Elastic Compute Cloud (Amazon EC2) instances. As our business scaled, we faced challenges with our existing architecture.

Challenges with our message queues architecture

Managing a RabbitMQ cluster with its nodes deployed inside Amazon EC2 instances came with some operational burdens. Dealing with payments at scale, managing queues, performance, and availability of our RabbitMQ cluster introduced significant challenges:

  • Managing durability with RabbitMQ queues. When messages are placed in the queue, they persist and survive server restarts. But during a maintenance window, they could be lost because we were using a self-managed setup.
  • Back-pressure mechanism. Our setup lacked a back-pressure mechanism, which resulted in flooding our customers with a huge number of messages at peak times. All messages published into the queue were getting sent at the same time.
  • Customer business requirements. Many customers have business requirements to delay message delivery for a defined time to serve their flow. Our architecture did not support this delay.
  • Retries. We needed to implement a back-off strategy to space out multiple retries for temporarily failed messages.
Figure 1. Amazon Payment Services’ previous messaging architecture

The previous architecture shown in Figure 1 was able to process a large load of messages within a reasonable delivery time. However, when the message queue built up due to network failures on the customer side, the latency of the overall flow was affected. This required manually scaling the queues, which added significant human effort, time, and overhead. As our business continued to grow, we needed to maintain a strict delivery time service level agreement (SLA).

Using Amazon SQS as the messaging backbone

The Amazon Payment Services core team designed a solution to use Amazon Simple Queue Service (SQS) with AWS Fargate (see Figure 2). Amazon SQS is a fully managed message queuing service that enables customers to decouple and scale microservices, distributed systems, and serverless applications. It is a highly scalable, reliable, and durable message queueing service that decreases the complexity and overhead associated with managing and operating message-oriented middleware.

Amazon SQS offers two types of message queues. SQS standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery. SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order in which they are sent. For our application, we used SQS FIFO queues.

In SQS FIFO queues, messages are stored in partitions (a partition is an allocation of storage replicated across multiple Availability Zones within an AWS Region). With message distribution through message group IDs, we were able to achieve better optimization and partition utilization for the Amazon SQS queues. We could offer higher availability, scalability, and throughput to process messages through consumers.

Figure 2. Amazon Payment Services’ new architecture using Amazon SQS, Amazon ECS, and Amazon SNS

This serverless architecture provided better scaling options for our payment processing services. It helps us manage peak events for customers across the MENA region without the need for capacity provisioning. The serverless architecture also helps us reduce our operational costs, as we only pay when using the services. Our goals in developing this initial architecture were to achieve consistency, scalability, affordability, security, and high performance.

How Amazon SQS addressed our needs

Migrating to Amazon SQS helped us address many of our requirements and led to a more robust service. Some of our main issues included:

Losing messages during maintenance windows

While doing manual upgrades on RabbitMQ and the hosting operating system, we sometimes faced downtime. By using Amazon SQS, our messaging infrastructure became fully managed, reducing the need for maintenance operations.

Handling concurrency

Different customers handle messages differently. We needed a way to customize the concurrency by customer. With SQS message group ID in FIFO queues, we were able to use a tag that groups messages together. Messages that belong to the same message group are always processed one by one, in a strict order relative to the message group. Using this feature and a consistent hashing algorithm, we were able to limit the number of simultaneous messages being sent to the customer.
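
As an illustration only (not Amazon Payment Services' actual implementation), the following TypeScript sketch shows how a producer can hash each message into one of a small, fixed number of message groups per customer, which bounds how many of that customer's messages are delivered concurrently. The queue URL, customer ID, message key, and group count are assumed values.

import { createHash } from "crypto";
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({});
const GROUPS_PER_CUSTOMER = 4; // illustrative per-customer concurrency limit

// Consistent hash: the same message key always lands in the same group,
// so ordering is preserved within a group while concurrency stays bounded.
function messageGroupId(customerId: string, messageKey: string): string {
  const bucket = createHash("sha256").update(messageKey).digest().readUInt32BE(0) % GROUPS_PER_CUSTOMER;
  return `${customerId}-${bucket}`;
}

export async function publish(queueUrl: string, customerId: string, messageKey: string, body: string) {
  await sqs.send(new SendMessageCommand({
    QueueUrl: queueUrl,
    MessageBody: body,
    MessageGroupId: messageGroupId(customerId, messageKey),
    MessageDeduplicationId: `${customerId}-${messageKey}` // FIFO queues require deduplication
  }));
}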

Message delay and handling retries

When messages are sent to the queue, they are immediately pulled and received by customers. However, many customers ask to delay their messages for preprocessing work, so we introduced a message delay timer. Some messages encounter errors and can be resubmitted, but the window between retries must be delayed until we receive delivery confirmation from our customer, or until the retry limit is exceeded. With SQS, we were able to use the ChangeMessageVisibility operation to adjust delay times.
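
As a sketch of the back-off idea (assumed names, not the production code), a consumer that fails to deliver a message can extend the message's visibility timeout so that SQS redelivers it only after an exponentially growing delay:

import { SQSClient, ChangeMessageVisibilityCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({});

// Hide the message for an exponentially growing interval (capped at 15 minutes);
// it becomes visible for redelivery only after the timeout expires.
export async function scheduleRetry(queueUrl: string, receiptHandle: string, attempt: number) {
  await sqs.send(new ChangeMessageVisibilityCommand({
    QueueUrl: queueUrl,
    ReceiptHandle: receiptHandle,
    VisibilityTimeout: Math.min(30 * 2 ** attempt, 900)
  }));
}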

Scalability and affordability

To save costs, Amazon SQS FIFO queues and Amazon ECS Fargate tasks run only when needed. These services process data in smaller units and run them in parallel. They can scale up efficiently to handle peak traffic loads. This will satisfy most architectures that handle non-uniform traffic without needing additional application logic.

Secure delivery

Our service delivers messages to the customers via host-to-host secure channels. To secure this data outside our private network, we use Amazon Simple Notification Service (SNS) as our delivery mechanism. Amazon SNS provides HTTPS endpoint delivery of messages coming to topics and subscriptions. AWS enables at-rest and/or in-transit encryption for all architectural components. Amazon SQS also provides AWS Key Management Service (KMS) based encryption or service-managed encryption to encrypt the data at rest.

Performance

To quantify our product’s performance, we monitor the message delivery delay. This metric measures the time between sending the message and when the customer receives it from Amazon Payment Services. Our goal is to have the message sent to the customer in near-real time once the transaction is processed. The new Amazon SQS/ECS architecture enabled us to achieve a p99 latency of 200 ms.

Summary

In this blog post, we have shown how using Amazon SQS helped transform and scale our service. We were able to offer a secure, reliable, and highly available solution for businesses. We use AWS services and technologies to run Amazon Payment Services payment gateway, and infrastructure automation to deliver excellent customer service. By using Amazon SQS and Amazon ECS Fargate, Amazon Payment Services can offer secure message delivery at scale to our customers.

Activists are targeting Russians with open-source “protestware” (Technology Review)

Post Syndicated from original https://lwn.net/Articles/888860/

MIT Technology Review has taken a brief look at open-source projects that have added changes protesting the war in Ukraine and drawn some questionable conclusions:

No tech firm has gone that far, but around two dozen open-source software projects have been spotted adding code protesting the war, according to observers tracking the protestware movement. Open-source software is software that anyone can modify and inspect, making it more transparent—and, in this case at least, more open to sabotage.

Security updates for Tuesday

Post Syndicated from original https://lwn.net/Articles/888859/

Security updates have been issued by Debian (apache2 and thunderbird), Fedora (abcm2ps, containerd, dotnet6.0, expat, ghc-cmark-gfm, moodle, openssl, and zabbix), Mageia (389-ds-base, apache, bind, chromium-browser-stable, nodejs-tar, python-django/python-asgiref, and stunnel), openSUSE (icingaweb2, lapack, SUSE:SLE-15-SP4:Update (security), and thunderbird), Oracle (openssl), Slackware (bind), SUSE (apache2, bind, glibc, kernel-firmware, lapack, net-snmp, and thunderbird), and Ubuntu (binutils, linux, linux-aws, linux-aws-5.13, linux-gcp, linux-hwe-5.13, linux-kvm, linux-raspi, linux, linux-aws, linux-aws-5.4, linux-azure, linux-azure-5.4, linux-gke, linux-gkeop, linux-hwe-5.4, linux-ibm, linux-ibm-5.4, linux-kvm, linux-oracle, linux-oracle-5.4, and linux, linux-aws, linux-aws-hwe, linux-azure, linux-azure-4.15, linux-dell300x, linux-hwe, linux-gcp-4.15, linux-kvm, linux-oracle, linux-snapdragon).

Zero Trust for SaaS: Deploying mTLS on custom hostnames

Post Syndicated from Dina Kozlov original https://blog.cloudflare.com/zero-trust-for-saas-deploying-mtls-on-custom-hostnames/

Cloudflare has a large base of Software-as-a-Service (SaaS) customers who manage thousands or millions of their customers’ domains that use their SaaS service. We have helped those SaaS providers grow by extending our infrastructure and services to their customers’ domains through a product called Cloudflare for SaaS. Today, we’re excited to give our SaaS providers a new tool that will help their customers add an extra layer of security: they can now enable mutual TLS authentication on their customers’ domains through our Access product.

Primer on Mutual TLS

When you connect to a website, you should see a lock icon in the address bar — that’s your browser telling you that you’re connecting to a website over a secure connection and that the website has a valid public TLS certificate. TLS certificates keep Internet traffic encrypted using a public/private key pair to encrypt and decrypt traffic. They also provide authentication, proving to clients that they are connecting to the correct server.

To make a secure connection, a TLS handshake needs to take place. During the handshake, the client and the server exchange cryptographic keys, the client authenticates the identity of the server, and both the client and the server generate session keys that are later used to encrypt traffic.

A TLS handshake looks like this:

Figure: The TLS handshake

In a TLS handshake, the client always validates the certificate that is served by the server to make sure that it’s sending requests to the right destination. In the same way that the client needs to authenticate the identity of the server, sometimes the server needs to authenticate the client — to ensure that only authorized clients are sending requests to the server.

Let’s say that you’re managing a few services: service A writes information to a database. This database is absolutely crucial and should only have entries submitted by service A. Now, what if you have a bug in your system and service B accidentally makes a write call to the database?

You need something that checks whether a service is authorized to make calls to your database — like a bouncer. A bouncer has a VIP list — they can check people’s IDs against the list to see whether they’re allowed to enter a venue. Servers can use a similar model, one that uses TLS certificates as a form of ID.

In the same way that a bouncer has a VIP list, a server can have a Certificate Authority (CA) Root from which it issues certificates. Certificates issued from the CA Root are then provisioned onto clients. These client certificates can then be used to identify and authorize the client. As long as a client presents a valid certificate that the server can validate against the Root CA, it’s allowed to make requests. If a client doesn’t present a client certificate (isn’t on the VIP list) or presents an unauthorized client certificate, then the server can choose to reject the request. This process of validating client and server certificates is called mutual TLS authentication (mTLS) and is done during the TLS handshake.

When mTLS isn’t used, only the server is responsible for presenting a certificate, which the client verifies. With mTLS, both the client and the server present and validate one another’s certificates, pictured below.

Figure: The mutual TLS handshake

mTLS + Access = Zero Trust

A few years ago, we added mTLS support to our Access product, allowing customers to enable a Zero Trust policy on their applications. Access customers can deploy a policy that dictates that all clients must present a valid certificate when making a request. That means that requests made without a valid certificate — usually from unauthorized clients — will be blocked, adding an extra layer of protection. Cloudflare has allowed customers to configure mTLS on their Cloudflare domains by setting up Access policies. The only caveat was that to use this feature, you had to be the owner of the domain. Now, what if you’re not the owner of a domain, but you do manage that domain’s origin? This is the case for a large base of our customers, the SaaS providers that extend their services to their customers’ domains that they do not own.

Extending Cloudflare benefits through SaaS providers

Cloudflare for SaaS enables SaaS providers to extend the benefits of the Cloudflare network to their customers’ domains. These domains are not owned by the SaaS provider, but they do use the SaaS provider’s service, routing traffic back to the SaaS provider’s origin.

By doing this, SaaS providers take on the responsibility of providing their customers with the highest uptime, lightning fast performance, and unparalleled security — something they can easily extend to their customers through Cloudflare.

Cloudflare for SaaS actually started out as SSL for SaaS. We built SSL for SaaS to give SaaS providers the ability to issue TLS certificates for their customers, keeping the SaaS provider’s customers safe and secure.

Since then, our SaaS customers have come to us with a new request: extend the mTLS support that we built for our direct customers to their own customers.

Why would SaaS providers want to use mTLS?

As a SaaS provider, there’s a wide range of services that you can provide. Some of these services require higher security controls than others.

Let’s say that the SaaS solution you’re building is a payment processor. Each customer gets its own API endpoint that their users send requests to, for example, pay.<business_name>.com. As a payment processor, you don’t want just any client or device to make requests to your service; instead, you only want authorized devices to do so — mTLS does exactly that.

As the SaaS provider, you can configure a Root CA for each of your customers’ API endpoints. Then, have each Root CA issue client certificates that will be installed on authorized devices. Once the client certificates have been installed, all that is left is enforcing a check for valid certificates.

To recap, by doing this, as a SaaS provider, your customers can now ensure that requests bound for their payment processing API endpoint only come from valid devices. In addition, by deploying individual Root CAs for each customer, you also prevent clients that are authorized to make requests to one customer’s API endpoint from making requests to another customer’s API endpoint when they are not authorized to do so.

How can you set this up with Cloudflare?

As a SaaS provider, configure Cloudflare for SaaS and add your customer’s domains as Custom Hostnames. Then, in the Cloudflare for Teams dashboard, add mTLS authentication with a few clicks.

This feature is currently in Beta and is available for Enterprise customers to use. If you have any feedback, please let your Account Team know.

Free Software Awards winners announced: SecuRepairs, Protesilaos Stavrou, Paul Eggert

Post Syndicated from original https://lwn.net/Articles/888745/

The just-completed, online LibrePlanet conference was the venue for awarding this year’s Free Software Awards:

SecuRepairs was this year’s winner of the Award for Projects of Social Benefit, which is presented to a project or team responsible for applying free software, or the ideas of the free software movement, to intentionally and significantly benefit society. This award stresses the use of free software in service to humanity. SecuRepairs is an association of professionals working in the information security industry, who have the common cause of supporting the “right to repair” devices and software.

[…] The 2021 Award for Outstanding New Free Software Contributor went to Protesilaos “Prot” Stavrou, who in a few short years has become a mainstay of the GNU Emacs community through his blog posts, livestreams, conference talks, and code contributions.

[…] Paul Eggert was this year’s honoree for the Award for the Advancement of Free Software, an award given to an individual who has made a great contribution to the progress and development of free software through activities that accord with the spirit of free software. Eggert has been a contributor to the GNU operating system for over thirty years, including contributions to components like GNU Compiler Collection (GCC), and is the current maintainer of the Time Zone Database (tz), which provides accurate information on the world’s time zones.

AWS Week in Review – March 21, 2022

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/aws-week-in-review-march-21-2022/

This post is part of our Week in Review series. Check back each week for a quick round up of interesting news and announcements from AWS!

Another week, another round up of the most significant AWS launches from the previous seven days! Among the news, we have new AWS Heroes and a cost reduction. Also, improvements for customers using AWS Lambda and Amazon Elastic Kubernetes Service (EKS), and a new database-to-database connectivity option for Amazon Relational Database Service (RDS).

Last Week’s Launches
Here are some launches that caught my attention last week:

AWS Billing Conductor – This new tool provides customizable pricing and cost visibility for your end customers or business units and helps when you have specific showback and chargeback needs. To get started, see Getting Started with AWS Billing Conductor. And yes, you can call it “ABC.”

Cost Reduction for Amazon Route 53 Resolver DNS Firewall – Starting from the beginning of March, we are introducing a new tiered pricing structure that reduces query processing fees as your query volume increases. We are also implementing internal optimizations to reduce the number of DNS queries for which you are charged without affecting the number of DNS queries that are inspected or introducing any other changes to your security posture. For more info, see the What’s New.

Share Test Events in the Lambda Console With Other Developers – You can now share the test events you create in the Lambda console with other team members and have a consistent set of test events across your team. This new capability is based on Amazon EventBridge schemas and is available in the AWS Regions where both Lambda and EventBridge are available. Have a look at the What’s New for more details.

Use containerd with Windows Worker Nodes Managed by Amazon EKS – containerd is a container runtime that manages the complete container lifecycle on its host system with an emphasis on simplicity, robustness, and portability. In this way, you get performance, security, and stability benefits on Windows similar to those available for Linux worker nodes. Here’s the What’s New with more info.

Amazon RDS for PostgreSQL databases can now connect and retrieve data from MySQL databases – You can connect your RDS PostgreSQL databases to Amazon Aurora MySQL-compatible, MySQL, and MariaDB databases. This capability works by adding support to mysql_fdw, an extension that implements a Foreign Data Wrapper (FDW) for MySQL. Foreign Data Wrappers are libraries that PostgreSQL databases can use to communicate with an external data source. Find more info in the What’s New.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
New AWS Heroes – It’s great to see both new and familiar faces joining the AWS Heroes program, a worldwide initiative that acknowledges individuals who have truly gone above and beyond to share knowledge in technical communities. Get to know them in the blog post!

More Than 400 Points of Presence for Amazon CloudFront – Impressive growth here, doubling the Points of Presence we had in October 2019. This number includes edge locations and mid-tier caches in AWS Regions. Do you know that edge locations are connected to the AWS Regions through the AWS network backbone? It’s a fully redundant, multiple 100GbE parallel fiber that circles the globe and links with tens of thousands of networks for improved origin fetches and dynamic content acceleration.

AWS Open Source News and Updates – A newsletter curated by my colleague Ricardo where he brings you the latest open-source projects, posts, events, and much more. This week he is also sharing a short list of some of the open-source roles currently open across Amazon and AWS, covering a broad range of open-source technologies. Read edition #105 here.

Upcoming AWS Events
Check your calendars and sign up for these AWS events:

The AWS Summits Are Back – Don’t forget to register for the AWS Summits in Brussels (on March 31) and Paris (on April 12). More summits are coming in the next few weeks, and we’ll let you know in these weekly posts.

That’s all from me for this week. Come back next Monday for another Week in Review!

Danilo

Running cross-account workflows with AWS Step Functions and Amazon API Gateway

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/running-cross-account-workflows-with-aws-step-functions-and-amazon-api-gateway/

This post is written by Hardik Vasa, Senior Solutions Architect, and Pratik Jain, Cloud Infrastructure Architect.

AWS Step Functions allows you to build scalable and distributed applications using state machines. With the launch of Step Functions nested workflows, you can start a Step Functions workflow from another workflow. However, this requires both workflows to be in the same account. There are many use cases that require you to orchestrate workflows across different AWS accounts from one central AWS account.

This blog post covers a solution to invoke Step Functions workflows cross account using Amazon API Gateway. With this, you can perform cross-account orchestration for scheduling, ETL automation, resource deployments, security audits, and log aggregations all from a central account.

Overview

The following architecture shows a Step Functions workflow in account A invoking an API Gateway endpoint in account B, and passing the payload in the API request. The API then invokes another Step Functions workflow in account B asynchronously.  The resource policy on the API allows you to restrict access to a specific Step Functions workflow to prevent anonymous access.

Cross-account workflows

You can extend this architecture to run workflows across multiple Regions or accounts. This blog post shows running cross-account workflows with two AWS accounts.

To invoke an API Gateway endpoint, you can use Step Functions AWS SDK service integrations. This approach allows users to build solutions and integrate services within a workflow without writing code.

The example demonstrates how to use the cross-account capability using two AWS example accounts:

  • Step Functions state machine A: Account ID #111111111111
  • API Gateway API and Step Functions state machine B: Account ID #222222222222

Setting up

Start by creating state machine A in account #111111111111. Next, create the state machine in target account #222222222222, followed by the API Gateway REST API integrated with the state machine in the target account.

Account A: #111111111111

In this account, create a state machine, which includes a state that invokes an API hosted in a different account.

Create an IAM role for Step Functions

  1. Sign in to the IAM console in account #111111111111, and then choose Roles from the left navigation pane.
  2. Choose Create role.
  3. For the Select trusted entity, under AWS service, select Step Functions from the list, and then choose Next.
  4. On the Add permissions page, choose Next.
  5. On the Review page, enter StepFunctionsAPIGatewayRole for Role name, and then choose Create role.
  6. Create inline policies to allow Step Functions to access the API actions of the services you need to control. Navigate to the role that you created and select Add Permissions and then Create inline policy.
  7. Use the Visual editor or the JSON tab to create policies for your role (a JSON sketch of this policy is shown after this list). Enter the following:
    Service: Execute-API
    Action: Invoke
    Resource: All Resources
  8. Choose Review policy.
  9. Enter APIExecutePolicy for the policy name, and then choose Create policy.
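
If you use the JSON tab, a minimal sketch of the inline policy described in step 7 looks like the following (you can scope Resource down to a specific API instead of all resources):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "execute-api:Invoke",
      "Resource": "*"
    }
  ]
}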

Creating a state machine in source account

  1. Navigate to the Step Functions console in account #111111111111 and choose Create state machine
  2. Select Design your workflow visually, then select Standard, and choose Next.
  3. On the design page, search for APIGateway:Invoke state, then drag and drop the block on the page:
    Step Functions designer console
  4. In the API Gateway Invoke section on the right panel, update the API Parameters with the following JSON:
     {
      "ApiEndpoint.$": "$.ApiUrl",
      "Method": "POST",
      "Stage": "dev",
      "Path": "/execution",
      "Headers": {},
      "RequestBody": {
       "input.$": "$.body",
       "stateMachineArn.$": "$.stateMachineArn"
      },
      "AuthType": "RESOURCE_POLICY"
    }

    These parameters indicate that the ApiEndpoint, payload (body) and stateMachineArn are dynamically assigned values based on input provided during workflow execution. You can also choose to assign these values statically, based on your use case.

  5. [Optional] You can also configure the API Gateway Invoke state to retry upon task failure by configuring the retries setting.
    Configuring State Machine
  6. Choose Next and then choose Next again. On the Specify state machine settings page:
    1. Enter a name for your state machine.
    2. Select Choose an existing role under Permissions and choose StepFunctionsAPIGatewayRole.
    3. Select Log Level ERROR.
  7. Choose Create State Machine.

After creating this state machine, copy the state machine ARN for later use.

Account B: #222222222222

In this account, create an API Gateway REST API that integrates with the target state machine and enables access to this state machine by means of a resource policy.

Creating a state machine in the target account

  1. Navigate to the Step Functions Console in account #222222222222 and choose Create State Machine.
  2. Under Choose authoring method, select Design your workflow visually and choose Standard as the type.
  3. Choose Next.
  4. On the design page, search for Pass state. Drag and drop the state.
    State machine
  5. Choose Next.
  6. In the Review generated code page, choose Next and:
    1. Enter a name for the state machine.
    2. Select Create new role under the Permissions section.
    3. Select Log Level ERROR.
  7. Choose Create State Machine.

Once the state machine is created, copy the state machine ARN for later use.

Next, set up the API Gateway REST API, which acts as a gateway to accept requests from the state machine in account A. This integrates with the state machine you just created.

Create an IAM Role for API Gateway

Before creating the API Gateway API endpoint, you must give API Gateway permission to call Step Functions API actions:

  1. Sign in to the IAM console in account #222222222222 and choose Roles. Choose Create role.
  2. On the Select trusted entity page, under AWS service, select API Gateway from the list, and then choose Next.
  3. On the Select trusted entity page, choose Next
  4. On the Name, review, and create page, enter APIGatewayToStepFunctions for Role name, and then choose Create role
  5. Choose the name of your role and note the Role ARN:
    arn:aws:iam::222222222222:role/APIGatewayToStepFunctions
  6. Select the IAM role (APIGatewayToStepFunctions) you created.
  7. On the Permissions tab, choose Add permission and choose Attach Policies.
  8. Search for AWSStepFunctionsFullAccess, choose the policy, and then click Attach policy.

Creating the API Gateway API endpoint

After creating the IAM role, create a custom API Gateway API:

  1. Open the Amazon API Gateway console in account #222222222222.
  2. Click Create API. Under REST API choose Build.
  3. Enter StartExecutionAPI for the API name, and then choose Create API.
  4. On the Resources page of StartExecutionAPI, choose Actions, Create Resource.
  5. Enter execution for Resource Name, and then choose Create Resource.
  6. On the /execution Methods page, choose Actions, Create Method.
  7. From the list, choose POST, and then select the check mark.

Configure the integration for your API method

  1. On the /execution – POST – Setup page, for Integration Type, choose AWS Service. For AWS Region, choose a Region from the list. For Regions that currently support Step Functions, see Supported Regions.
  2. For AWS Service, choose Step Functions from the list.
  3. For HTTP Method, choose POST from the list. All Step Functions API actions use the HTTP POST method.
  4. For Action Type, choose Use action name.
  5. For Action, enter StartExecution.
  6. For Execution Role, enter the role ARN of the IAM role that you created earlier, as shown in the following example. The Integration Request configuration can be seen in the image below.
    arn:aws:iam::222222222222:role/APIGatewayToStepFunctions
    API Gateway integration request configuration
  7. Choose Save. The visual mapping between API Gateway and Step Functions is displayed on the /execution – POST – Method Execution page.
    API Gateway method configuration

After you configure your API, you can configure the resource policy to allow the invoke action from the cross-account Step Functions State Machine. For the resource policy to function in cross-account scenarios, you must also enable AWS IAM authorization on the API method.

Configure IAM authorization for your method

  1. On the /execution – POST method, navigate to the Method Request, and under the Authorization option, select AWS_IAM and save.
  2. In the left navigation pane, choose Resource Policy.
  3. Use this policy template to define and enter the resource policy for your API.
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service": "states.amazonaws.com"
                },
                "Action": "execute-api:Invoke",
                "Resource": "execute-api:/*/*/*",
                "Condition": {
                    "StringEquals": {
                        "aws:SourceArn": [
                            "<SourceAccountStateMachineARN>"
                         ]
                    }
                }
            }
        ]
    }

    Note: You must replace <SourceAccountStateMachineARN> with the state machine ARN from account #111111111111 (account A).

  4. Choose Save.

Once the resource policy is configured, deploy the API to a stage.

Deploy the API

  1. In the left navigation pane, click Resources and choose Actions.
  2. From the Actions drop-down menu, choose Deploy API.
  3. In the Deploy API dialog box, choose [New Stage], enter dev in Stage name.
  4. Choose Deploy to deploy the API.

After deployment, capture the API ID, API Region, and the stage name. These are used as inputs during the execution phase.

Starting the workflow

To run the Step Functions workflow in account A, provide the following input:

{
   "ApiUrl": "<api_id>.execute-api.<region>.amazonaws.com",
   "stateMachineArn": "<stateMachineArn>",
   "body": "{\"someKey\":\"someValue\"}"
}

Start execution

Paste in the values of ApiUrl and stateMachineArn from account B in the preceding input. Make sure the ApiUrl is in the format shown.

AWS Serverless Application Model deployment

You can deploy the preceding solution architecture with the AWS Serverless Application Model (AWS SAM), which is an open-source framework for building serverless applications. During deployment, AWS SAM transforms and expands the syntax into AWS CloudFormation syntax, enabling you to build serverless applications faster.

Logging and monitoring

Logging and monitoring are vital for observability, performance measurement, and auditing. Step Functions allows logging using CloudWatch Logs. Step Functions also automatically sends execution metrics to CloudWatch. You can learn more about monitoring Step Functions using CloudWatch.

Cleaning up

To avoid incurring any charges, delete all the resources that you have created in both the accounts. This would include deleting the Step Functions state machines and API Gateway API.

Conclusion

This blog post provides a step-by-step guide on securely invoking a cross-account Step Functions workflow from a central account using API Gateway as front end. This pattern can be extended to scale workflow executions across different Regions and accounts.

Using a centralized account to orchestrate workflows across AWS accounts helps prevent duplicating work in each account.

To learn more about serverless and AWS Step Functions, visit the Step Functions Developer Guide.

For more serverless learning resources, visit Serverless Land.

Accelerate your data warehouse migration to Amazon Redshift – Part 5

Post Syndicated from Michael Soo original https://aws.amazon.com/blogs/big-data/part-5-accelerate-your-data-warehouse-migration-to-amazon-redshift/

This is the fifth in a series of posts. We’re excited to share dozens of new features to automate your schema conversion; preserve your investment in existing scripts, reports, and applications; accelerate query performance; and potentially simplify your migrations from legacy data warehouses to Amazon Redshift.

Check out all the posts in this series.

Amazon Redshift is the leading cloud data warehouse. No other data warehouse makes it as easy to gain new insights from your data. With Amazon Redshift, you can query exabytes of data across your data warehouse, operational data stores, and data lake using standard SQL. You can also integrate other AWS services such as Amazon EMR, Amazon Athena, Amazon SageMaker, AWS Glue, AWS Lake Formation, and Amazon Kinesis to use all the analytic capabilities in the AWS Cloud.

Until now, migrating a data warehouse to AWS has been a complex undertaking, involving a significant amount of manual effort. You need to manually remediate syntax differences, inject code to replace proprietary features, and manually tune the performance of queries and reports on the new platform.

Legacy workloads may rely on non-ANSI, proprietary features that aren’t directly supported by modern databases like Amazon Redshift. For example, many Teradata applications use SET tables, which enforce full row uniqueness—there can’t be two rows in a table that are identical in all of their attribute values.

If you’re an Amazon Redshift user, you may want to implement SET semantics but can’t rely on a native database feature. You can use the design patterns in this post to emulate SET semantics in your SQL code. Alternatively, if you’re migrating a workload to Amazon Redshift, you can use the AWS Schema Conversion Tool (AWS SCT) to automatically apply the design patterns as part of your code conversion.

In this post, we describe the SQL design patterns and analyze their performance, and show how AWS SCT can automate this as part of your data warehouse migration. Let’s start by understanding how SET tables behave in Teradata.

Teradata SET tables

At first glance, a SET table may seem similar to a table that has a primary key defined across all of its columns. However, there are some important semantic differences from traditional primary keys. Consider the following table definition in Teradata:

CREATE SET TABLE testschema.sales_by_month (
  sales_dt DATE
, amount DECIMAL(8,2)
);

We populate the table with four rows of data, as follows:

select * from testschema.sales_by_month order by sales_dt;

*** Query completed. 4 rows found. 2 columns returned. 
*** Total elapsed time was 1 second.

sales_dt amount
-------- ----------
22/01/01 100.00
22/01/02 200.00
22/01/03 300.00
22/01/04 400.00

Notice that we didn’t define a UNIQUE PRIMARY INDEX (similar to a primary key) on the table. Now, when we try to insert a new row into the table that is a duplicate of an existing row, the insert fails:

INSERT INTO testschema.sales_by_month values (20220101, 100);

 *** Failure 2802 Duplicate row error in testschema.sales_by_month.
 Statement# 1, Info =0 
 *** Total elapsed time was 1 second.

Similarly, if we try to update an existing row so that it becomes a duplicate of another row, the update fails:

UPDATE testschema.sales_by_month 
SET sales_dt = 20220101, amount = 100
WHERE sales_dt = 20220104 and amount = 400;

 *** Failure 2802 Duplicate row error in testschema.sales_by_month.
 Statement# 1, Info =0 
 *** Total elapsed time was 1 second.

In other words, simple INSERT-VALUE and UPDATE statements fail if they introduce duplicate rows into a Teradata SET table.

There is a notable exception to this rule. Consider the following staging table, which has the same attributes as the target table:

CREATE MULTISET TABLE testschema.sales_by_month_stg (
  sales_dt DATE
, amount DECIMAL(8,2)
);

The staging table is a MULTISET table and accepts duplicate rows. We populate three rows into the staging table. The first row is a duplicate of a row in the target table. The second and third rows are duplicates of each other, but don’t duplicate any of the target rows.

select * from testschema.sales_by_month_stg;

 *** Query completed. 3 rows found. 2 columns returned. 
 *** Total elapsed time was 1 second.

sales_dt amount
-------- ----------
22/01/01 100.00
22/01/05 500.00
22/01/05 500.00

Now we successfully insert the staging data into the target table (which is a SET table):

INSERT INTO testschema.sales_by_month (sales_dt, amount)
SELECT sales_dt, amount FROM testschema.sales_by_month_stg;

 *** Insert completed. One row added. 
 *** Total elapsed time was 1 second.

If we examine the target table, we can see that a single row for (2022-01-05, 500) has been inserted, and the duplicate row for (2022-01-01, 100) has been discarded. Essentially, Teradata silently discards any duplicate rows when it performs an INSERT-SELECT statement. This includes duplicates that are in the staging table and duplicates that are shared between the staging and target tables.

select * from testschema.sales_by_month order by sales_dt;

 *** Query completed. 5 rows found. 2 columns returned. 
 *** Total elapsed time was 1 second.

sales_dt amount
-------- ----------
22/01/01 100.00
22/01/02 200.00
22/01/03 300.00
22/01/04 400.00
22/01/05 500.00

Essentially, SET tables behave differently depending on the type of operation being run. An INSERT-VALUE or UPDATE operation fails if it introduces a duplicate row into the target. An INSERT-SELECT operation doesn’t fail if the staging table contains a duplicate row, or if a duplicate row is shared between the staging and target tables.

In this post, we don’t go into detail on how to convert INSERT-VALUE or UPDATE statements. These statements typically involve one or a few rows and are less impactful in terms of performance than INSERT-SELECT statements. For INSERT-VALUE or UPDATE statements, you can materialize the row (or rows) being created, and join that set to the target table to check for duplicates.

INSERT-SELECT

In the rest of this post, we analyze INSERT-SELECT statements carefully. Customers have told us that INSERT-SELECT operations can comprise up to 78% of the INSERT workload against SET tables. We are concerned with statements with the following form:

INSERT into <target table> SELECT * FROM <staging table>

The schema of the staging table is identical to the target table on a column-by-column basis. As we mentioned earlier, a duplicate row can appear in two different circumstances:

  • The staging table is not set-unique, meaning that there are two or more full row duplicates in the staging data
  • There is a row x in the staging table and an identical row x in the target table

Because Amazon Redshift supports multiset table semantics, it’s possible that the staging table contains duplicates (the first circumstance we listed). Therefore, any automation must address both cases, because either can introduce a duplicate into an Amazon Redshift table.

Based on this analysis, we implemented the following algorithms:

  • MINUS – This implements the full set logic deduplication using SQL MINUS. MINUS works in all cases, including when the staging table isn’t set-unique and when the intersection of the staging table and target table is non-empty. MINUS also has the advantage that NULL values don’t require special comparison logic to overcome NULL to NULL comparisons. MINUS has the following syntax:
    INSERT INTO <target table> (<column list>)
    SELECT <column list> FROM <staging table> 
    MINUS
    SELECT <column list> FROM <target table>;

  • MINUS-MIN-MAX – This is an optimization on MINUS that incorporates a filter to limit the target table scan based on the values in the staging table. The min/max filters allow the query engine to skip large numbers of blocks during table scans. See Working with sort keys for more details.
    INSERT INTO <target table>(<column list>)
    SELECT <column list> FROM <staging table> 
    MINUS
    SELECT <column list> FROM <target table>
    WHERE <target table>.<sort key> >= (SELECT MIN(<sort key>) FROM <staging table>)
      AND <target table>.<sort key> <= (SELECT MAX(<sort key>) FROM <staging table>);

We also considered other algorithms, but we don’t recommend that you use them. For example, you can perform a GROUP BY to eliminate duplicates in the staging table, but this step is unnecessary if you use the MINUS operator. You can also perform a left (or right) outer join to find shared duplicates between the staging and target tables, but then additional logic is needed to account for NULL = NULL conditions.

Performance

We tested the MINUS and MINUS-MIN-MAX algorithms on Amazon Redshift. We ran the algorithms on two Amazon Redshift clusters. The first configuration consisted of 6 x ra3.4xlarge nodes. The second consisted of 12 x ra3.4xlarge nodes. Each node contained 12 vCPUs and 96 GB of memory.

We created the stage and target tables with identical sort and distribution keys to minimize data movement. We loaded the same target dataset into both clusters. The target dataset consisted of 1.1 billion rows of data. We then created staging datasets that ranged from 20 million to 200 million rows, in 20 million row increments.

The following graph shows our results.

The test data was artificially generated and some skew was present in the distribution key values. This is manifested in the small deviations from linearity in the performance.

However, you can observe the performance increase that is afforded the MINUS-MIN-MAX algorithm over the basic MINUS algorithm (comparing orange lines or blue lines to themselves). If you’re implementing SET tables in Amazon Redshift, we recommend using MINUS-MIN-MAX because this algorithm provides a happy convergence of simple, readable code and good performance.

Automation

All Amazon Redshift tables allow duplicate rows, i.e., they are MULTISET tables by default. If you are converting a Teradata workload to run on Amazon Redshift, you’ll need to enforce SET semantics outside of the database.

We’re happy to share that AWS SCT will automatically convert your SQL code that operates against SET tables. AWS SCT rewrites INSERT-SELECT statements that load SET tables to incorporate the rewrite patterns we described above.

Let’s see how this works. Suppose you have the following target table definition in Teradata:

CREATE SET TABLE testschema.fact (
  id bigint NOT NULL
, se_sporting_event_id INTEGER NOT NULL
, se_sport_type_name VARCHAR(15) NOT NULL
, se_home_team_id INTEGER NOT NULL
, se_away_team_id INTEGER NOT NULL
, se_location_id INTEGER NOT NULL
, se_start_date_time DATE NOT NULL
, se_sold_out INTEGER DEFAULT 0 NOT NULL
, stype_sport_type_name varchar(15) NOT NULL
, stype_short_name varchar(10) NOT NULL
, stype_long_name varchar(60) NOT NULL
, stype_description varchar(120)
, sd_sport_type_name varchar(15) NOT NULL
, sd_sport_league_short_name varchar(10) NOT NULL
, sd_short_name varchar(10) NOT NULL
, sd_long_name varchar(60)
, sd_description varchar(120)
, sht_id INTEGER NOT NULL
, sht_name varchar(30) NOT NULL
, sht_abbreviated_name varchar(10)
, sht_home_field_id INTEGER 
, sht_sport_type_name varchar(15) NOT NULL
, sht_sport_league_short_name varchar(10) NOT NULL
, sht_sport_division_short_name varchar(10)
, sat_id INTEGER NOT NULL
, sat_name varchar(30) NOT NULL
, sat_abbreviated_name varchar(10)
, sat_home_field_id INTEGER 
, sat_sport_type_name varchar(15) NOT NULL
, sat_sport_league_short_name varchar(10) NOT NULL
, sat_sport_division_short_name varchar(10)
, sl_id INTEGER NOT NULL
, sl_name varchar(60) NOT NULL
, sl_city varchar(60) NOT NULL
, sl_seating_capacity INTEGER
, sl_levels INTEGER
, sl_sections INTEGER
, seat_sport_location_id INTEGER
, seat_seat_level INTEGER
, seat_seat_section VARCHAR(15)
, seat_seat_row VARCHAR(10)
, seat_seat VARCHAR(10)
, seat_seat_type VARCHAR(15)
, pb_id INTEGER NOT NULL
, pb_full_name varchar(60) NOT NULL
, pb_last_name varchar(30)
, pb_first_name varchar(30)
, ps_id INTEGER NOT NULL
, ps_full_name varchar(60) NOT NULL
, ps_last_name varchar(30)
, ps_first_name varchar(30)
)
PRIMARY INDEX(id)
;

The stage table is identical to the target table, except that it’s created as a MULTISET table in Teradata.

Next, we create a procedure to load the fact table from the stage table. The procedure contains a single INSERT-SELECT statement:

REPLACE PROCEDURE testschema.insert_select()  
BEGIN
  INSERT INTO testschema.test_fact 
  SELECT * FROM testschema.test_stg;
END;

Now we use AWS SCT to convert the Teradata stored procedure to Amazon Redshift. First, select the stored procedure in the source database tree, then right-click and choose Convert schema.

AWS SCT converts the stored procedure (and embedded INSERT-SELECT) using the MINUS-MIN-MAX rewrite pattern.

And that’s it! Presently, AWS SCT performs this rewrite only for INSERT-SELECT statements, because those statements are heavily used by ETL workloads and have the most impact on performance. Although the example we used was embedded in a stored procedure, you can also use AWS SCT to convert the same statements if they appear in BTEQ scripts, macros, or application programs. Download the latest version of AWS SCT and give it a try!

Conclusion

In this post, we showed how to implement SET table semantics in Amazon Redshift. You can use the described design patterns to develop new applications that require SET semantics. Or, if you’re converting an existing Teradata workload, you can use AWS SCT to automatically convert your INSERT-SELECT statements so that they preserve the SET table semantics.

We’ll be back soon with the next installment in this series. Check back for more information on automating your migrations from Teradata to Amazon Redshift. In the meantime, you can learn more about Amazon Redshift and AWS SCT. Happy migrating!


About the Authors

Michael Soo is a Principal Database Engineer with the AWS Database Migration Service team. He builds products and services that help customers migrate their database workloads to the AWS cloud.

Po Hong, PhD, is a Principal Data Architect of the Modern Data Architecture Global Specialty Practice (GSP), AWS Professional Services.  He is passionate about helping customers to adopt innovative solutions and migrate from large scale MPP data warehouses to the AWS modern data architecture.

Incident notification mechanism using Amazon Pinpoint two-way SMS

Post Syndicated from Pavlos Ioannou Katidis original https://aws.amazon.com/blogs/messaging-and-targeting/incident-notification-mechanism-using-amazon-pinpoint-two-way-sms/

Unexpected situations that require immediate attention can occur in any industry. Part of resolving these incidents is delivering notifications. For example, utility companies that have installed gas sensors need to immediately notify the available engineer if a leak occurs.

The goal of an incident management process is to restore a normal service operation as quickly as possible and to minimize the impact on business operations, thus ensuring that the best possible levels of service quality and availability are maintained. A key element of incident management is sending timely notifications to the assigned or available resource(s) who can rectify the issue.

An incident can take place at any time, and the resource(s) assigned to it might not have internet access; even if they receive the message, they might not be equipped to work on it. This creates five key requirements for an incident notification mechanism:

  1. Notify the resources via a communication channel that ensures message delivery even without internet access
  2. Enable assigned resources to respond to a request via a communication channel that doesn’t require internet access
  3. Send reminder(s) in case there is no response from the assigned resource(s)
  4. Escalate to another resource in case the first one doesn’t reply or declines the incident
  5. Store the incident details & status for reporting and data analysis

In this blog post, I share a solution on how you can automate the delivery of incident notifications. This solution utilizes Amazon Pinpoint SMS channel to contact the designated resources who might not have access to the internet. Furthermore, the recipient of the SMS is able to reply with an acknowledgement. AWS Step Functions orchestrates the user journey using AWS Lambda functions to evaluate the recipients’ response and trigger the next best action. You will use AWS CloudFormation to deploy this solution.

Use Cases

An incident notification mechanism can vary depending on the organization’s requirements and third-party system integrations. In this blog, the solution covers all five points listed above, but it might require further modifications depending on your use case.

With minor modifications this solution can also be used in the following use cases:

  1. Medicine intake notification: It notifies the patient via SMS that it is time to take their medicine. If the patient doesn’t acknowledge the SMS by replying, the notification can be escalated to their assigned doctor
  2. Assignment submission: It notifies the student that their assignment is due. If the student doesn’t acknowledge the SMS by replying, the notification can be escalated to their teacher

High-level Architecture

The solution requires the country of your SMS recipients to support two-way SMS. To check which countries support two-way SMS, visit this page. If two-way SMS is supported, then you will need to request a dedicated originating identity. You can also use a Toll-Free Number or 10DLC if your recipients are in the US.

Note: Sender ID doesn’t support two-way SMS.

A new incident is represented as an item in an Amazon DynamoDB table containing information such as the description, URL, and incident_id, as well as the contact numbers for two resources. A resource is someone who has been assigned to work on this incident. The second resource is for escalation purposes, in case the first one doesn’t acknowledge or declines the incident notification.

The Amazon DynamoDB table covers three functions for this solution:

  1. A way to add new incidents using either the AWS console or programmatically
  2. As a storage for variables that indicate the incident’s status and can be used from the solution to determine the next action(s)
  3. As a historical data storage for all incidents that have been created for data analysis purposes

The solution utilizes Amazon DynamoDB Streams to invoke an AWS Lambda function every time a new incident is created. The AWS Lambda function triggers an AWS Step Function State machine, which orchestrates three AWS Lambda functions:

  1. Send_First_SMS: Sends the first SMS (a minimal sketch of this function follows this list)
  2. Reminder_SMS: Sends a reminder SMS if the resource does not acknowledge the first SMS
  3. Incident_State_Review: Assesses the status of the incident and either goes back to the first AWS Lambda function or finishes the AWS Step Function State machine execution
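
As a rough illustration of the first function, the following sketch sends the initial SMS with the Amazon Pinpoint send_messages API and records the returned message ID against the incident, so the reply can later be matched back to it. The event shape, environment variable names, and the lookup table name are assumptions for this sketch and may differ from the code in the repository.

import os
import time
import boto3

pinpoint = boto3.client("pinpoint")
dynamodb = boto3.resource("dynamodb")

# Assumed environment variables and table name -- the deployed stack may use different names.
PINPOINT_APP_ID = os.environ.get("PINPOINT_APP_ID", "your-pinpoint-project-id")
ORIGINATION_NUMBER = os.environ.get("ORIGINATION_NUMBER", "+10000000000")
LOOKUP_TABLE = os.environ.get("LOOKUP_TABLE", "MessageIdIncidentLookup")

def lambda_handler(event, context):
    """Send the first SMS for an incident and record message_id -> incident_id."""
    incident_id = event["incident_id"]
    contact = event["first_contact"]
    body = (
        f"Incident {incident_id}: {event['description']} "
        "Reply yes to acknowledge or no to decline."
    )

    response = pinpoint.send_messages(
        ApplicationId=PINPOINT_APP_ID,
        MessageRequest={
            "Addresses": {contact: {"ChannelType": "SMS"}},
            "MessageConfiguration": {
                "SMSMessage": {
                    "Body": body,
                    "MessageType": "TRANSACTIONAL",
                    "OriginationNumber": ORIGINATION_NUMBER,
                }
            },
        },
    )

    # Store the outbound message ID so the reply can be mapped back to the
    # incident through previousPublishedMessageId.
    message_id = response["MessageResponse"]["Result"][contact]["MessageId"]
    dynamodb.Table(LOOKUP_TABLE).put_item(
        Item={"message_id": message_id, "incident_id": incident_id, "sent_at": int(time.time())}
    )
    return {"incident_id": incident_id, "message_id": message_id}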

The AWS Step Functions State machine uses the Choice state, which evaluates the response of the previous AWS Lambda function and decides on the next state. This is a very useful feature that can reduce custom code and potentially AWS Lambda invocations, resulting in cost savings.

Additionally, the waiting between steps is managed by the AWS Step Functions State machine using the Wait state, which can be configured to wait for seconds, days, or until a specific point in time.

To be able to receive SMS, this solution uses Amazon Pinpoint’s two-way SMS feature. When it receives an SMS, Amazon Pinpoint publishes a payload to an Amazon SNS topic, which needs to be created separately. An AWS Lambda function that is subscribed to the Amazon SNS topic processes the SMS content and performs one or both of the following actions:

  1. Update the incident status in the DynamoDB table
  2. Create a new Step Function State machine execution

In this solution SMS recipients can reply by typing either yes or no. The SMS response is not case sensitive.

An inbound SMS payload contains the originationNumber, destinationNumber, messageKeyword, messageBody, inboundMessageId and previousPublishedMessageId. Notably, there isn’t a direct way to associate an inbound SMS with an incident. To overcome this challenge, this solution uses a second DynamoDB table, which stores the message_id and incident_id every time an SMS is sent to either of the two resources. This allows the solution to use the previousPublishedMessageId from the inbound SMS payload to fetch the respective incident_id from the second DynamoDB table.

The code in this solution uses AWS SDK for Python (Boto3).
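
As an illustration of that flow, here is a minimal sketch of the Lambda function that processes inbound SMS replies. The table names and key schema are assumptions, and the escalation logic (starting a new State machine execution when the reply is no) is omitted for brevity.

import json
import boto3

dynamodb = boto3.resource("dynamodb")

# Assumed table names -- the stack generates its own resource names.
lookup_table = dynamodb.Table("MessageIdIncidentLookup")
incident_table = dynamodb.Table("IncidentInfoDynamoDB")

def lambda_handler(event, context):
    """Map an inbound SMS reply back to its incident and update the status."""
    for record in event["Records"]:
        sms = json.loads(record["Sns"]["Message"])
        reply = sms.get("messageBody", "").strip().lower()
        previous_id = sms.get("previousPublishedMessageId")

        # previousPublishedMessageId identifies the outbound SMS being replied to.
        item = lookup_table.get_item(Key={"message_id": previous_id}).get("Item")
        if not item:
            continue

        new_status = {"yes": "acknowledged", "no": "declined"}.get(reply)
        if new_status:
            incident_table.update_item(
                Key={"incident_id": item["incident_id"]},
                UpdateExpression="SET incident_stat = :s",
                ExpressionAttributeValues={":s": new_status},
            )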

Prerequisites

  1. An Amazon Pinpoint project with the SMS channel enabled – Guide on how to enable Amazon Pinpoint SMS channel
  2. Check if the country you want to send SMS to, supports two-way SMS – List with countries that support two-way SMS
  3. An originating identity that supports two-way SMS – Guide on how to request a phone number
  4. Increase your monthly SMS spending quota for Amazon Pinpoint – Guide on how to increase the monthly SMS spending quota

Deploy the solution

Step 1: Create an S3 bucket

  1. Navigate to the Amazon S3 console
  2. Select Create bucket
  3. Enter a unique name for Bucket name
  4. Select the AWS Region to be the same as the one of your Amazon Pinpoint project
  5. Scroll to the bottom of the page and select Create bucket
  6. Follow this link to download the GitHub repository. Once the repository is downloaded, unzip it and navigate to  \amazon-pinpoint-incident-notifications-mechanism-main\src
  7. Access the S3 bucket created above and upload the five .zip files

Step 2: Create a stack

  1. The application is deployed using an AWS CloudFormation template.
  2. Navigate to the AWS CloudFormation console select Create stack > With new resources (standard)
  3. Select Template is ready as Prerequisite – Prepare template and choose Upload a template file as Template source
  4. Select Choose file and from the GitHub repository downloaded in step 1.6 navigate to amazon-pinpoint-incident-notifications-mechanism-main\cfn upload CloudFormation_template.yaml and select Next
  5. Type Pinpoint-Incident-Notifications-Mechanism as Stack name, paste the S3 bucket name created in step 1.5 as the LambdaCodeS3BucketName, type the Amazon Pinpoint Originating Number in E.164 format as OriginatingIdenity, paste the Amazon Pinpoint project ID as PinpointProjectId and type 40 for WaitingBetweenSteps
  6. Select Next, till you reach to Step 4 Review where you will need to check the box I acknowledge that AWS CloudFormation might create IAM resources and then select Create Stack
  7. The stack creation process takes approximately 2 minutes. Click on the refresh button to get the latest event regarding the deployment status. Once the stack has been deployed successfully you should see the last Event with Logical ID Pinpoint-Incident-Notifications-Mechanism and with Status CREATE_COMPLETE

Step 3: Configure two-way SMS SNS topic

  1. Navigate to the Amazon Pinpoint console > SMS and voice > Phone numbers. Select the originating identity that supports two-way SMS. Scroll to the bottom of the page, expand the Two-way SMS section, and check the box to enable it.

    For SNS topic select Choose an existing SNS topic then using the drop down choose the one that contains the name of the AWS CloudFormation stack from Step 2.4 as well as the name TwoWaySMSSNSTopic and click Save.

Step 4: Create a new incident

To create a new incident, navigate to Amazon DynamoDB console > Tables and select the table containing the name of the AWS CloudFormation stack from Step 2.4 as well as the name IncidentInfoDynamoDB. Select View items and then Create item.

On the Create item page choose JSON, copy and paste the JSON below into the text box and replace the values for the first_contact and second_contact with a valid mobile number that you have access to.

Note: If you don’t have two different mobile numbers, enter the same for both first_contact and second_contact fields. The mobile numbers must follow E.164 format +<country code><number>.

{
   "incident_id":{
      "S":"123"
   },
   "incident_stat":{
      "S":"not_acknowledged"
   },
   "double_escalation":{
      "S":"no"
   },
   "description":{
      "S":"Error 111, unit 1 malfunctioned. Urgent assistance is required."
   },
   "url":{
      "S":"https://example.com/incident/111/overview"
   },
   "first_contact":{
      "S":"+4479083---"
   },
   "second_contact":{
      "S":"+4479083---"
   }
}

Incident fields description:

  • incident_id: Needs to be unique
  • incident_stat: This is used by the application to store the incident status. When creating the incident, this value should always be not_acknowledged
  • double_escalation: This is used by the application as a flag for recipients who try to escalate an incident that is already escalated. When creating the incident, this value should always be no
  • description: You can type a description that best describes the incident. Be aware that depending on the number of characters, the number of SMS parts will increase. For more information on SMS character limits, visit this page
  • url: You can add a URL that resources can access to resolve the issue. If this field is not pertinent to your use case then type no url
  • first_contact: This should contain the mobile number in E.164 format for the first resource
  • second_contact: This should contain the mobile number in E.164 format for the second resource. The second resource will be contacted only if the first one does not acknowledge the SMS or declines the incident

Once the above is ready, you can select Create item. This executes the AWS Step Functions State machine, and you should receive an SMS. You can reply with yes to acknowledge the incident or with no to decline it. Depending on your response, the incident status in the DynamoDB table is updated, and if you reply no, the incident is escalated by sending an SMS to the second_contact.

Note: The SMS response is not case sensitive.

Clean-up

To remove the solution:

  1. Delete the AWS CloudFormation stack by following the steps listed in this guide
  2. Delete the dedicated originating identity that you used to send the SMS by following the steps listed in this guide
  3. Delete the Amazon Pinpoint project by navigating the Amazon Pinpoint console, select your Amazon Pinpoint Project, choose Settings > General settings > Delete Project

Next Steps

This solution currently works only if your SMS recipients are in one country. If your use case requires sending SMS to multiple countries, you will need to:

  • Check this page to ensure that these countries support two-way SMS
  • Follow the instructions in this page to obtain a number that supports two-way SMS for each country
  • Expand the solution to identify the country of the SMS recipient and to choose the correct number accordingly. To identify the country of the SMS recipient, you can use Amazon Pinpoint’s phone number validation service via the Amazon Pinpoint API or SDKs. The phone number validation service returns a list of data points per mobile number, one of them being the country

Incidents that are not acknowledged by any of the assigned resources have their status updated to unacknowledged, but they don’t escalate further. Depending on your requirements, you can expand the solution to send an email using Amazon Pinpoint APIs or perform an outbound call using Amazon Connect APIs.

Conclusion

In this blog post, I have demonstrated how your organization can use Amazon Pinpoint two-way SMS and Step Functions to automate incident notifications. Furthermore, the solution highlights the synergy of AWS services and how you can build a custom solution with little effort that meets your requirements.

About the Author

Pavlos Ioannou Katidis

Pavlos Ioannou Katidis is an Amazon Pinpoint and Amazon Simple Email Service Specialist Solutions Architect at AWS. He loves to dive deep into his customer’s technical issues and help them design communication solutions. In his spare time, he enjoys playing tennis, watching crime TV series, playing FPS PC games, and coding personal projects.

Optimize AI/ML workloads for sustainability: Part 2, model development

Post Syndicated from Benoit de Chateauvieux original https://aws.amazon.com/blogs/architecture/optimize-ai-ml-workloads-for-sustainability-part-2-model-development/

More complexity often means using more energy, and machine learning (ML) models are becoming bigger and more complex. And though ML hardware is getting more efficient, the energy required to train these ML models is increasing sharply.

In this series, we’re following the phases of the Well-Architected machine learning lifecycle (Figure 1) to optimize your artificial intelligence (AI)/ML workloads. In Part 2, we examine the model development phase and show you how to train, tune, and evaluate your ML model to help you reduce your carbon footprint.


If you missed the first part of this series, we showed you how to examine your workload to help you 1) evaluate the impact of your workload, 2) identify alternatives to training your own model, and 3) optimize data processing.


Figure 1. ML lifecycle

Model building

Define acceptable performance criteria

When you build an ML model, you’ll likely need to make trade-offs between your model’s accuracy and its carbon footprint. When we focus only on the model’s accuracy, we “ignore the economic, environmental, or social cost of reaching the reported accuracy.” Because the relationship between model accuracy and complexity is at best logarithmic, training a model longer or looking for better hyperparameters only leads to a small increase in performance.

Establish performance criteria that support your sustainability goals while meeting your business requirements, not exceeding them.

Select energy-efficient algorithms

Begin with a simple algorithm to establish a baseline. Then, test different algorithms with increasing complexity to observe whether performance has improved. If so, compare the performance gain against the difference in resources required.

Try to find simplified versions of algorithms. This will help you use less resources to achieve a similar outcome. For example, DistilBERT, a distilled version of BERT, has 40% fewer parameters, runs 60% faster, and preserves 97% of BERT’s performance.

Use pre-trained or partially pre-trained models

Consider techniques to avoid training a model from scratch:

  • Transfer Learning: Use a pre-trained source model and reuse it as the starting point for a second task. For example, a model trained on ImageNet (14 million images) can be adapted to other datasets, as sketched after this list.
  • Incremental Training: Use artifacts from an existing model on an expanded dataset to train a new model.
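
As a quick illustration of transfer learning, the following PyTorch sketch (one possible framework choice, not prescribed by this post) reuses an ImageNet pre-trained ResNet-18, freezes the feature extractor, and trains only a new classification head, which typically needs far less compute than training from scratch.

import torch
import torch.nn as nn
from torchvision import models

# Start from a network pre-trained on ImageNet instead of training from scratch.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pre-trained feature extractor so its weights are not updated.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer for the new task (10 classes is an arbitrary example).
num_classes = 10
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the new head's parameters are optimized, which keeps training cheap.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)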

Optimize your deep learning models to accelerate training

Compile your DL models from their high-level language representation to hardware-optimized instructions to reduce training time. You can achieve this with open-source compilers or Amazon SageMaker Training Compiler, which can speed up training of DL models by up to 50% by more efficiently using SageMaker GPU instances.

Start with small experiments, datasets, and compute resources

Experiment with smaller datasets in your development notebook. This allows you to iterate quickly with limited carbon emission.

Automate the ML environment

When building your model, use Lifecycle Configuration Scripts to automatically stop idle SageMaker Notebook instances. If you are using SageMaker Studio, install the auto-shutdown Jupyter extension to detect and stop idle resources.
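
The lifecycle configuration scripts and the auto-shutdown extension handle true idle detection. As a simpler, complementary guardrail, a scheduled Lambda function along the lines of the following sketch could stop any notebook instances still running at the end of the working day. Note that this stops instances regardless of activity, and the schedule itself (for example, an EventBridge rule) is assumed to exist.

import boto3

sagemaker = boto3.client("sagemaker")

def lambda_handler(event, context):
    """Stop every notebook instance that is still running, e.g. on an evening schedule."""
    stopped = []
    paginator = sagemaker.get_paginator("list_notebook_instances")
    for page in paginator.paginate(StatusEquals="InService"):
        for notebook in page["NotebookInstances"]:
            name = notebook["NotebookInstanceName"]
            sagemaker.stop_notebook_instance(NotebookInstanceName=name)
            stopped.append(name)
    return {"stopped": stopped}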

Use the fully managed training process provided by SageMaker to automatically launch training instances and shut them down as soon as the training job is complete. This minimizes idle compute resources and thus limits the environmental impact of your training job.

Adopt a serverless architecture for your MLOps pipelines. For example, orchestration tools like AWS Step Functions or SageMaker Pipelines only provision resources when work needs to be done. This way, you’re not maintaining compute infrastructure 24/7.

Model training

Select sustainable AWS Regions

As mentioned in Part 1, select an AWS Region with sustainable energy sources. When regulations and legal aspects allow, choose Regions near Amazon renewable energy projects and Regions where the grid has low published carbon intensity to train your model.

Use a debugger

A debugger like SageMaker Debugger can identify training problems like system bottlenecks, overfitting, saturated activation functions, and under-utilization of system resources. It also provides built-in rules like LowGPUUtilization or Overfit. These rules monitor your workload and will automatically stop a training job as soon as it detects a bug (Figure 2), which helps you avoid unnecessary carbon emissions.

Figure 2. Automatically stop buggy training jobs with SageMaker Debugger
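
If you use the SageMaker Python SDK, attaching built-in Debugger rules to a training job looks roughly like the following sketch. The container image, role, and instance settings are placeholders, and configuring the job to stop automatically when a rule fires is an additional option not shown here.

from sagemaker.debugger import Rule, rule_configs
from sagemaker.estimator import Estimator

# Built-in rules watch the training job while it runs.
rules = [
    Rule.sagemaker(rule_configs.overfit()),
    Rule.sagemaker(rule_configs.low_gpu_utilization()),
]

estimator = Estimator(
    image_uri="<your-training-image>",        # placeholder
    role="<your-sagemaker-execution-role>",   # placeholder
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    rules=rules,
)

# estimator.fit({"training": "s3://<your-bucket>/train/"})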

Optimize the resources of your training environment

Reference the recommended instance types for the algorithm you’ve selected in the SageMaker documentation. For example, for DeepAR, you should start with a single CPU instance and only switch to GPU and multiple instances when necessary.

Right size your training jobs with Amazon CloudWatch metrics that monitor the utilization of resources like CPU, GPU, memory, and disk utilization.

Consider Managed Spot Training, which takes advantage of unused Amazon Elastic Compute Cloud (Amazon EC2) capacity and can save you up to 90% in cost compared to On-Demand instances. By shaping your demand for the existing supply of EC2 instance capacity, you will improve your overall resource efficiency and reduce idle capacity of the overall AWS Cloud.
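
With the SageMaker Python SDK, enabling Managed Spot Training is a matter of a few estimator parameters, as in this sketch (the image, role, bucket, and time limits are placeholders):

from sagemaker.estimator import Estimator

# Managed Spot Training: use spare EC2 capacity with a cap on total wait time.
estimator = Estimator(
    image_uri="<your-training-image>",        # placeholder
    role="<your-sagemaker-execution-role>",   # placeholder
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    use_spot_instances=True,
    max_run=3600,   # maximum training time, in seconds
    max_wait=7200,  # maximum total time, including waiting for Spot capacity
    checkpoint_s3_uri="s3://<your-bucket>/checkpoints/",  # resume after interruptions
)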

Use efficient silicon

Use AWS Trainium, which is optimized for DL training workloads. It is expected to be our most energy efficient processor for this purpose.

Archive or delete unnecessary training artifacts

Organize your ML experiments with SageMaker Experiments to clean up training resources you no longer need.

Reduce the volume of logs you keep. By default, CloudWatch retains logs indefinitely. By setting limited retention time for your notebooks and training logs, you’ll avoid the carbon footprint of unnecessary log storage.
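
For example, a short Boto3 call can cap the retention of the default SageMaker training log group; the log group name and the 30-day retention shown here are examples, so pick values that match your own needs.

import boto3

logs = boto3.client("logs")

# Keep SageMaker training job logs for 30 days instead of indefinitely.
logs.put_retention_policy(
    logGroupName="/aws/sagemaker/TrainingJobs",
    retentionInDays=30,
)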

Model tuning and evaluation

Use efficient cross-validation techniques for hyperparameter optimization

Prefer Bayesian search over random search (and avoid grid search). Bayesian search makes intelligent guesses about the next set of parameters to pick based on the prior set of trials. It typically requires 10 times fewer jobs than random search, and thus 10 times less compute resources, to find the best hyperparameters.

Limit the maximum number of concurrent training jobs. Running hyperparameter tuning jobs concurrently gets more work done quickly. However, a tuning job improves only through successive rounds of experiments. Typically, running one training job at a time achieves the best results with the least amount of compute resources.

Carefully choose the number of hyperparameters and their ranges. You get better results and use less compute resources by limiting your search to a few parameters and small ranges of values. If you know that a hyperparameter is log-scaled, convert it to a logarithmic scale to further improve the optimization.
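
Putting these recommendations together, a tuning job in the SageMaker Python SDK might look like the following sketch: a Bayesian strategy, one training job at a time, only two hyperparameters, and a log-scaled learning rate. The hyperparameter names, objective metric, and ranges are assumptions that depend on your algorithm and training script (custom containers also need metric_definitions).

from sagemaker.tuner import ContinuousParameter, HyperparameterTuner, IntegerParameter

# Few parameters, small ranges, a log-scaled learning rate, Bayesian search,
# and a single training job at a time.
hyperparameter_ranges = {
    "learning_rate": ContinuousParameter(1e-5, 1e-2, scaling_type="Logarithmic"),
    "batch_size": IntegerParameter(32, 256),
}

tuner = HyperparameterTuner(
    estimator=estimator,  # an Estimator such as the ones sketched earlier
    objective_metric_name="validation:accuracy",
    objective_type="Maximize",
    hyperparameter_ranges=hyperparameter_ranges,
    strategy="Bayesian",
    max_jobs=20,
    max_parallel_jobs=1,
)

# tuner.fit({"training": "s3://<your-bucket>/train/"})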

Use warm-start hyperparameter tuning

Use warm-start to leverage the learning gathered in previous tuning jobs to inform which combinations of hyperparameters to search over in the new tuning job. This technique avoids restarting hyperparameter optimization jobs from scratch and thus reduces the compute resources needed.

Measure results and improve

To monitor and quantify improvements of your training jobs, track the following metrics:

For storage:

Conclusion

In this blog post, we discussed techniques and best practices to reduce the energy required to build, train, and evaluate your ML models.

We also provided recommendations for the tuning process as it makes up a large part of the carbon impact of building an ML model. During hyperparameter and neural design search, hundreds of versions of a given model are created, trained, and evaluated before identifying an optimal design.

In the next post, we’ll continue our sustainability journey through the ML lifecycle and discuss the best practices you can follow when deploying and monitoring your model in production.

Want to learn more? Check out the Sustainability Pillar of the AWS Well-Architected Framework, the Architecting for sustainability session at re:Invent 2021, and other blog posts on architecting for sustainability.

Looking for more architecture content? AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!

Other posts in this series

How to use AWS Security Hub and Amazon OpenSearch Service for SIEM

Post Syndicated from Ely Kahn original https://aws.amazon.com/blogs/security/how-to-use-aws-security-hub-and-amazon-opensearch-service-for-siem/

AWS Security Hub provides you with a consolidated view of your security posture in Amazon Web Services (AWS) and helps you check your environment against security standards and current AWS security recommendations. Although Security Hub has some similarities to security information and event management (SIEM) tools, it is not designed as a standalone SIEM replacement. For example, Security Hub only ingests AWS-related security findings and does not directly ingest higher volume event logs, such as AWS CloudTrail logs. If you have use cases to consolidate AWS findings with other types of findings from on-premises or other non-AWS workloads, or if you need to ingest higher volume event logs, we recommend that you use Security Hub in conjunction with a SIEM tool.

There are also other benefits to using Security Hub and a SIEM tool together. These include being able to store findings for longer periods of time than Security Hub, aggregating findings across multiple administrator accounts, and further correlating Security Hub findings with each other and other log sources. In this blog post, we will show you how you can use Amazon OpenSearch Service (successor to Amazon Elasticsearch Service) as a SIEM and integrate Security Hub with it to accomplish these three use cases. Amazon OpenSearch Service is a fully managed service that makes it easier to deploy, manage, and scale Elasticsearch and Kibana. OpenSearch is a distributed, RESTful search and analytics engine that is capable of addressing a growing number of use cases. You can expand OpenSearch Service with AWS services like Kinesis or Kinesis Data Firehose, by integrating with other AWS services, or by using traditional agents like Beats and Logstash for log ingestion, and Kibana for data visualization. Although OpenSearch Service is also not an out-of-the-box SIEM tool, with some customization you can use it for SIEM use cases.

Security Hub plus SIEM use cases

By enabling Security Hub within your AWS Organizations account structure, you immediately start receiving the benefits of viewing all of your security findings from across various AWS and partner services on a single screen. Some organizations want to go a step further and use Security Hub in conjunction with a SIEM tool for the following reasons:

  • Correlate Security Hub findings with each other and other log sources – This is the most popular reason customers choose to implement this solution. If you have various log sources outside of Security Hub findings (such as application logs, database logs, partner logs, and security tooling logs), then it makes sense to consolidate these log sources into a single SIEM solution. Then you can view both your Security Hub findings and miscellaneous logs in the same place and create alerts based on interesting correlations.
  • Store findings for longer than 90 days after the last update date – Some organizations want or need to store Security Hub findings for longer than 90 days after the last update date. They may want to do this for historical investigation, or for audit and compliance needs. Either way, this solution offers you the ability to store Security Hub findings in a private Amazon Simple Storage Service (Amazon S3) bucket, which is then consumed by Amazon OpenSearch Service.
  • Aggregate findings across multiple administrator accounts – Security Hub has a feature customers can use to designate an administrator account if they have enabled Security Hub in multiple accounts. A Security Hub administrator account can view data from and manage configuration for its member accounts. This allows customers to view and manage all their findings from multiple member accounts in one place. Sometimes customers have multiple Security Hub administrator accounts, because they have multiple organizations in AWS Organizations. In this situation, you can use this solution to consolidate all of the Security Hub administrator accounts into a single OpenSearch Service with Kibana SIEM implementation to have a single view across your environments. This related blog post walks through this use case in more detail, and shows how to centralize Security Hub findings across multiple AWS Regions and administrators. However, this blog post takes this approach further by introducing OpenSearch Service with Kibana to the use case, for a full SIEM experience.

Solution architecture

Figure 1: SIEM implementation on Amazon OpenSearch Service

The solution represented in Figure 1 shows the flexibility of integrations that are possible when you create a SIEM by using Amazon OpenSearch Service. The solution allows you to aggregate findings across multiple accounts, store findings in an S3 bucket indefinitely, and correlate multiple AWS and non-AWS services in one place for visualization. This post focuses on Security Hub’s integration with the solution, but the following AWS services are also able to integrate:

Each of these services has its own dedicated dashboard within the OpenSearch SIEM solution. This makes it possible for customers to view findings and data that are relevant to each service that the SIEM tool is ingesting. OpenSearch Service also allows the customer to create aggregated dashboards, consolidating multiple services within a single dashboard, if needed.

Prerequisites

We recommend that you enable Security Hub and AWS Config across all of your accounts and Regions. For more information about how to do this, see the documentation for Security Hub and AWS Config. We also recommend that you use Security Hub and AWS Config integration with AWS Organizations to simplify the setup and automatically enable these services in all current and future accounts in your organization.

Launch the solution

You can launch this solution within your environment either by using an AWS CloudFormation template, or by following the steps presented later in this post to customize the deployment to support integrations with non-AWS services or multi-Organization deployments, or to launch within your existing OpenSearch Service environment.

To launch the solution, follow the instructions for SIEM on Amazon OpenSearch Service on GitHub.

Use the solution

Before you start using the solution, we’ll show you how this solution appears in the Security Hub dashboard, as shown in Figure 2. Navigate here by following Step 3 from the GitHub README.

Figure 2: Pre-built dashboards within solution

The Security Hub dashboard highlights all major components of the service within an OpenSearch Service dashboard environment. This includes supporting all of the service integrations that are available within Security Hub (such as GuardDuty, AWS Identity and Access Management (IAM) Access Analyzer, Amazon Inspector, Amazon Macie, and AWS Systems Manager Patch Manager). The dashboard displays both findings and security standards, and you can filter by AWS account, finding type, security standard, or service integration. Figure 3 shows an overview of the visual dashboard experience when you deploy the solution.

Figure 3: Dashboard preview

Use case 1: Correlate Security Hub findings with each other and other log sources and create alerts

This solution uses OpenSearch Service and Kibana to allow you to search through both Security Hub findings and logs from any other AWS and non-AWS systems. You can then create alerts within Kibana based on interesting correlations between Security Hub and any other logged events. Although Security Hub supports ingesting a vast number of integrations and findings, it cannot create correlation rules like a SIEM tool can. However, you can create such rules using SIEM on OpenSearch Service. It’s important to take a closer look when multiple AWS security services generate findings for a single resource, because this potentially indicates elevated risk or multiple risk vectors. Depending on your environment, the initial number of findings in Security Hub may be high, so you may need to prioritize which findings require immediate action. Security Hub natively gives you the ability to filter findings by resource, account, severity, and many other details.

Alerts generated by SIEM on OpenSearch Service can be delivered in several ways: through Amazon Simple Notification Service (Amazon SNS), by consuming the messages in an appropriate tool or configuring recipient email addresses; through Amazon Chime or Slack (using AWS Chatbot); or through a custom webhook to your organization’s ticketing system. You can then respond to these new security incident-oriented findings through ticketing, chat, or incident management systems.

Solution overview for use case 1

Figure 4: Solution overview diagram

Figure 4 gives an overview of the solution for use case 1. This solution requires that you have Security Hub and GuardDuty enabled in your AWS account. Logs from AWS services, including Security Hub, are ingested into an S3 bucket, then automatically extracted, transformed, and loaded (ETL) into the SIEM system running on OpenSearch Service by AWS Lambda. After capturing the logs, you will be able to visualize them on the dashboard and analyze correlations across multiple logs. Within the SIEM on OpenSearch Service solution, you will create a rule to detect failures, such as CloudTrail authentication failures, in logs. Then, you will configure the solution to publish alerts to Amazon SNS and send emails when logs match rules.

Implement the solution for use case 1

You will now set up this workflow to alert you by email when logs in OpenSearch match certain rules that you create.

Step 1: Create and visualize findings in OpenSearch Dashboards

Security Hub and other AWS services export findings to Amazon S3 in a centralized log bucket. You can ingest logs from CloudTrail, VPC Flow Logs, and GuardDuty, which are often used in AWS security analytics. In this step, you import simulated security incident data in OpenSearch Dashboards, and use the dashboard to visualize the data in the logs.

To navigate OpenSearch Dashboards

  1. Generate pseudo-security incidents. You can simulate the results by generating sample findings in GuardDuty.
  2. In OpenSearch Dashboards, go to the Discover screen. The Discover screen is divided into three major sections: Search bar, index/display field list, and time-series display, as shown in Figure 5.
    Figure 5: OpenSearch Dashboards

  3. In OpenSearch Dashboards, select log-aws-securityhub-*, log-aws-vpcflowlogs-*, log-aws-cloudtrail-*, or any other index pattern and add event.module to the display fields. event.module indicates which service the log originates from. For logs collected through Security Hub, @log-type is Security Hub and event.module indicates the integrated service that produced the finding (for example, Amazon Inspector or Amazon Macie). After you have added event.module, filter for the desired Security Hub integrated service (for example, Amazon Inspector). When testing the environment covered in this blog post outside a production context, you can use Kinesis Data Generator to generate sample user traffic; other tools are also available.
  4. Select the following on the dashboard to see the visualized information:
    • CloudTrail Summary
    • VpcFlowLogs Summary
    • GuardDuty Summary
    • All – Threat Hunting

Step 2: Configure alerts to match log criteria

Next, you will configure alerts to match log criteria. First you need to set the destination for alerts, and then set what to monitor.

To configure alerts

  1. In OpenSearch Dashboards, in the left menu, choose Alerting.
  2. To add the details of SNS, on the Destinations tab, choose Add destinations, and enter the following parameters:
    • Name: aes-siem-alert-destination
    • Type: Amazon SNS
    • SNS Alert: arn:aws:sns:<AWS-REGION>:<111111111111>:aes-siem-alert
      • Replace <111111111111> with your AWS account ID
      • Replace <AWS-REGION> with the Region you are using, for example, eu-west-1
    • IAM Role ARN: arn:aws:iam::<111111111111>:role/aes-siem-sns-role
      • Replace <111111111111> with your AWS account ID
  3. Choose Create to complete setting the alert destination.
    Figure 6: Edit alert destination

  4. In OpenSearch Dashboards, in the left menu, select Alerting. You will now set what to monitor; here you monitor a CloudTrail authentication failure. There are two normalized log times: @timestamp (the log occurrence time) and event.ingested (the SIEM reception time). Use event.ingested for logs with a large time lag between occurrence and reception. You can specify flexible conditions by selecting Define using extraction query for the filter definition.
  5. On the Monitors tab, choose Create monitor.
  6. Enter the following parameters. If there is no description, use the default value.
    • Name: Authentication failed
    • Method of definition: Define using extraction query
    • Indices: log-aws-cloudtrail-* (manual input, not pull-down)
    • Define extraction query: Enter the following query.
      {
        "query": {
          "bool": {
            "filter": [
              {"term": {"eventSource": "signin.amazonaws.com"}},
              {"term": {"event.outcome": "failure"}},
              {"range": {
                "event.ingested": {
                  "from": "{{period_end}}||-20m",
                  "to": "{{period_end}}"
                }
              }}
            ]
          }
        }
      }
      

  7. Enter the following remaining parameters of the monitor:
    • Frequency: By interval
    • Monitor schedule: Every 3 minutes
  8. Choose Create to create the monitor.

Step 3: Set up trigger to send email via Amazon SNS

Now you will set the alert firing condition, known as the trigger. This is the setting for alerting when the monitored conditions (Monitors) are met. By default, the alert is triggered if the number of hits is greater than 0. In this step, you won’t change that; you’ll only give the trigger a name.

To set up the trigger

  1. Select Create trigger and for Trigger name, enter Authentication failed trigger.
  2. Scroll down to Configure actions.
    Figure 7: Create trigger

  3. Set what the trigger should do (action). In this case, you want to publish to Amazon SNS. Set the following parameters for the body of the email:
    • Action name: Authentication failed action
    • Destination: Choose aes-siem-alert-destination – (Amazon SNS)
    • Message subject: (SIEM) Auth failure alert
    • Action throttling: Select Enable action throttling, and set throttle action to only trigger every 10 minutes.
    • Message: Copy and paste the following message into the text box. After pasting, choose Send test message at the bottom right of the screen to confirm that you can receive the test email.

      Monitor {{ctx.monitor.name}} just entered alert status. Please investigate the issue.

      Trigger: {{ctx.trigger.name}}

      Severity: {{ctx.trigger.severity}}

      @timestamp: {{ctx.results.0.hits.hits.0._source.@timestamp}}

      event.action: {{ctx.results.0.hits.hits.0._source.event.action}}

      error.message: {{ctx.results.0.hits.hits.0._source.error.message}}

      count: {{ctx.results.0.hits.total.value}}

      source.ip: {{ctx.results.0.hits.hits.0._source.source.ip}}

      source.geo.country_name: {{ctx.results.0.hits.hits.0._source.source.geo.country_name}}

    Figure 8: Configure actions

  4. You will receive an alert email in a few minutes. You can check the occurrence status, including the history, by the following method:
    1. In OpenSearch Dashboards, on the left menu, choose Alerting.
    2. On the Monitors tab, choose Authentication failed.
    3. You can check the status of the alert in the History pane.
    Figure 9: Email alert

Use case 1 shows you how to correlate various Security Hub findings through this OpenSearch Service SIEM solution. However, you can take the solution a step further and build more complex correlation checks by following the procedure in the blog post Correlate security findings with AWS Security Hub and Amazon EventBridge. This information can then be ingested into this OpenSearch Service SIEM solution for viewing on a single screen.

Use case 2: Store findings for longer than 90 days after last update date

Security Hub stores findings for a maximum of 90 days after their last update, but your organization might require data storage beyond that period, with the flexibility to specify a custom retention period to meet your needs. The SIEM on Amazon OpenSearch Service solution creates a centralized S3 bucket where findings from Security Hub and various other services are collected and stored, and this bucket can be configured to store data as long as you require. The S3 bucket can persist data indefinitely, or you can create an S3 object lifecycle policy to set a custom retention timeframe. Lifecycle policies allow you to either transition objects between S3 storage classes or delete objects after a specified period. Alternatively, you can use S3 Intelligent-Tiering to allow the Amazon S3 service to move data between tiers, based on user access patterns.

Either lifecycle policies or S3 Intelligent-Tiering will allow you to optimize costs for data that is stored in S3, to keep data for archive or backup purposes when it is no longer available in Security Hub or OpenSearch Service. Within the solution, this centralized bucket is called aes-siem-xxxxxxxx-log and is configured to store data for OpenSearch Service to consume indefinitely. The Amazon S3 User Guide has instructions for configuring an S3 lifecycle policy that is explicitly defined by the user on the centralized bucket. Or you can follow the instructions for configuring intelligent tiering to allow the S3 service to manage which tier data is stored in automatically. After data is archived, you can use Amazon Athena to query the S3 bucket for historical information that has been removed from OpenSearch Service, because this S3 bucket acts as a centralized security event repository.
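
As one example of an explicitly defined policy, the following Boto3 sketch transitions objects in the centralized bucket to Amazon S3 Glacier after a year and expires them after roughly seven years. The bucket name suffix and the day counts are placeholders you should adapt to your retention requirements.

import boto3

s3 = boto3.client("s3")

# Transition findings to S3 Glacier after one year and expire them after seven years.
s3.put_bucket_lifecycle_configuration(
    Bucket="aes-siem-xxxxxxxx-log",  # the solution's centralized log bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire-findings",
                "Filter": {"Prefix": ""},
                "Status": "Enabled",
                "Transitions": [{"Days": 365, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 2555},
            }
        ]
    },
)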

Use case 3: Aggregate findings across multiple administrator accounts

There are cases where you might have multiple Security Hub administrator accounts within one or multiple organizations. For these use cases, you can consolidate findings across these multiple Security Hub administrator accounts into a single S3 bucket for centralized storage, archive, backup, and querying. This gives you the ability to create a single SIEM on OpenSearch Service to minimize the number of monitoring tools you need. In order to do this, you can use S3 replication to automatically copy findings to a centralized S3 bucket. You can follow this detailed walkthrough on how to set up the correct bucket permissions in order to allow replication between the accounts. You can also follow this related blog post to configure cross-Region Security Hub findings that are centralized in a single S3 bucket, if cross-Region replication is appropriate for your security needs. With cross-account S3 replication set up for Security Hub archived event data, you can import data from the centralized S3 bucket into OpenSearch Service by using the Lambda function within the solution in this blog post. This Lambda function automatically normalizes and enriches the log data and imports it into OpenSearch Service, so that users only need to configure data storage in the S3 bucket, and the Lambda function will automatically import the data.
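
A replication rule on a source bucket in one administrator account might be configured along these lines. The bucket names, account IDs, and role ARN are placeholders; versioning must already be enabled on both buckets, and the IAM role and destination bucket policy must grant the replication permissions described in the linked walkthrough.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="security-hub-findings-account-a",  # source bucket in one admin account (placeholder)
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111111111111:role/s3-replication-role",  # placeholder
        "Rules": [
            {
                "ID": "replicate-findings-to-central-bucket",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::aes-siem-xxxxxxxx-log",  # centralized bucket (placeholder)
                    "Account": "222222222222",
                    "AccessControlTranslation": {"Owner": "Destination"},
                },
            }
        ],
    },
)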

Conclusion

In this blog post, we showed how you can use Security Hub with a SIEM to store findings for longer than 90 days, aggregate findings across multiple administrator accounts, and correlate Security Hub findings with each other and other log sources. We used the solution to walk through building the SIEM and explained how Security Hub could be used within that solution to add greater flexibility. This post describes one solution to create your own SIEM using OpenSearch Service; however, we also recommend that you read the blog post Visualize AWS Security Hub Findings using Analytics and Business Intelligence Tools, in order to see a different method of consolidating and visualizing insights from Security Hub.

To learn more, you can also try out this solution through the new SIEM on AWS OpenSearch Service workshop.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, please start a new thread on the Security Hub forum or contact AWS Support.

 

Want more AWS Security news? Follow us on Twitter.

Ely Kahn

Ely Kahn is the Principal Product Manager for AWS Security Hub. Before his time at AWS, Ely was a co-founder for Sqrrl, a security analytics startup that AWS acquired and is now Amazon Detective. Earlier, Ely served in a variety of positions in the federal government, including Director of Cybersecurity at the National Security Council in the White House.

Anthony Pasquariello

Anthony Pasquariello is a Senior Solutions Architect at AWS based in New York City. He specializes in modernization and security for our advanced enterprise customers. Anthony enjoys writing and speaking about all things cloud. He’s pursuing an MBA, and received his MS and BS in Electrical & Computer Engineering.

Aashmeet Kalra

Aashmeet Kalra is a Principal Solutions Architect working in the Global and Strategic team at AWS in San Francisco. Aashmeet has over 17 years of experience designing and developing innovative solutions for customers globally. She specializes in advanced analytics, machine learning and builder/developer experience.

Grant Joslyn

Grant Joslyn is a solutions architect for the US state and local government public sector team at Amazon Web Services (AWS). He specializes in end user compute and cloud automation. He provides technical and architectural guidance to customers building secure solutions on AWS. He is a subject matter expert and thought leader for strategic initiatives that help customers embrace DevOps practices.

Akihiro Nakajima

Akihiro Nakajima is a Senior Solutions Architect, Security Specialist at Amazon Web Services Japan. He has more than 20 years of experience in security, specifically focused on incident analysis and response, threat hunting, and digital forensics. He leads development of open-source software, “SIEM on Amazon OpenSearch Service”.

Network performance update: Security Week

Post Syndicated from David Tuber original https://blog.cloudflare.com/network-performance-update-security-week/


Almost a year ago, we shared extensive benchmarking results of last mile networks all around the world. The results showed that on a range of tests (TCP connection time, time to first byte, time to last byte), and on different measures (p95, mean), Cloudflare was the fastest provider in 49% of networks around the world. Since then, we’ve worked to continuously improve performance towards the ultimate goal of being the fastest everywhere. We set a goal to grow the number of networks where we’re the fastest by 10% every Innovation Week. We met that goal last year, and we’re carrying the work over to 2022.

Today, we’re proud to report we are the fastest provider in 71% of the top 1,000 most reported networks around the world. Of course, we’re not done yet, but we wanted to share the latest results and explain how we did it.

Measuring what matters

To quantify network performance, we have to get enough data from around the world, across all manner of different networks, comparing ourselves with other providers. We used Real User Measurements (RUM) to fetch a 100kb file from several different providers. Users around the world report the performance of different providers. The more users who report the data, the higher fidelity the signal is. The goal is to provide an accurate picture of where different providers are faster, and more importantly, where Cloudflare can improve. You can read more about the methodology in the original Speed Week blog post here.

In the process of quantifying network performance, it became clear where we were not the fastest everywhere. After Full Stack Week, we found 596 country/network pairs where we were more than 100ms behind the leading provider (where a country/network pair is defined as the performance of a network within a particular country).

We are constantly going through the process of figuring out why we were slow — and then improving. The challenges we faced were unique to each network and highlighted a variety of different issues that are prevalent on the Internet. We’re going to deep dive into a couple of networks, and show how we diagnosed and then improved performance.

But before we do, here are the results of our efforts since Full Stack Week.

[Charts: p95 TCP connection time comparison across providers]

*Performance is defined by p95 TCP connection time across top 1,000 networks in the world by number of IPv4 addresses advertised

Curing congestion in Canada

In the spirit of Security Week, we want to highlight how a Magic Transit (Cloudflare’s network layer DDoS security) customer’s network problems provided line of sight into their internal congestion issues, and how our network was able to mitigate the problem in the short term.

One Magic Transit customer saw congestion in Canada due to insufficient peering with the Internet at key interconnection points. Congestion for customers means bad performance for users: for games, it can lead to lag and jittery gameplay, for video streaming, it can lead to buffering and poor resolution, and for video/VoIP applications, it can lead to calls dropping, garbled video/voice, and sections of calls missing entirely. Fixing congestion in this case means improving the way this customer connects to the rest of the Internet to make the user experience better for both the customer and users.

When customers connect to the Internet, they can do so in several ways: through an ISP that connects to other networks, through an Internet Exchange which houses many different providers interconnecting at a singular point, or point-to-point connections with other providers.


In the case of this customer, they had direct connections to other providers and to Internet exchanges. They ran out of bandwidth with their point-to-point connections, meaning that they had too much traffic for the size of the links they had bought, which meant that the excess traffic had to go over Internet Exchanges that were out of the way, creating suboptimal network paths which increased latency.

We were able to use our network to help solve this problem. In the short term, we spread the traffic away from the congestion points. This removed hairpins to immediately improve the user experience. This restored service for this customer and all of their users.

Then, we went into action by accelerating previously planned upgrades of all of our Internet Exchange (IX) ports across Canada to ensure that we had enough capacity to handle them, even though the congestion wasn’t happening on our network. Finally, we reached out to the customer’s provider and quickly set up direct peering with them in Canada to provide direct interconnection close to the customer, so that we could provide them with a much better Internet experience. These actions made us the fastest provider on networks in Canada as well.

Keeping traffic in Australia

Next, we turn to a network that had poor performance in Australia. Users for that network were going all the way out of the country before going to Cloudflare. This created what is called a network hairpin. A network hairpin is caused by suboptimal connectivity in certain locations, which can cause users to traverse a network path that takes longer than it should. This hairpin effect made Cloudflare one of the slower providers for this network in Australia.


To fix this, Cloudflare set up peering with this network in Sydney, and this allowed traffic from this network to go to Cloudflare within the country the network was based in. This reduced our connection time from 65ms to 45ms, catapulting us to be the #1 provider for this network in the region.

Update on Full Stack Week

All of this work and more has helped us optimize our network even further. At Full Stack Week, we announced that we were faster in more of the most reported networks than our competitors. Out of the top 1,000 networks in the world (by number of IPv4 addresses advertised), here's a breakdown of how many networks each provider is fastest in for p95 TCP connection time, which represents the time it takes for a user to connect to the provider. This data is from Full Stack Week (November 2021):

[Chart: number of networks in which each provider has the fastest p95 TCP connection time, Full Stack Week (November 2021)]

As of Security Week, we improved our position to be faster in 19 new networks:

[Chart: number of networks in which each provider has the fastest p95 TCP connection time, Security Week (March 2022)]

Cloudflare is also committed to being the fastest provider in every country. This is a world map using that data to show the countries with the fastest network provider during Full Stack Week (November 2021):

[Map: fastest network provider by country, Full Stack Week (November 2021)]

Here’s how the map of the world looks during Security Week (March 2022):

[Map: fastest network provider by country, Security Week (March 2022)]

We moved to number 1 in all of South America, more countries in Africa, the UK, Sweden, Iceland, and also more countries in the Asia Pacific region.

A fast network means fast security products

Cloudflare’s commitment to building the fastest network allows us to deliver unparalleled performance for all applications, including our security applications. There’s an adage in the industry that you have to sacrifice performance for security, but Cloudflare believes that you should be able to have your security and performance without having to sacrifice either. We’ve unveiled a ton of awesome new products and features for Security Week and all of them are built on top of our lightning-fast network. That means that all of these products will continue to get faster as we relentlessly pursue our goal of being the fastest network everywhere.

Queueing Amazon Pinpoint API calls to distribute SMS spikes

Post Syndicated from satyaso original https://aws.amazon.com/blogs/messaging-and-targeting/queueing-amazon-pinpoint-api-calls-to-distribute-sms-spikes/

Organizations across industries and verticals have user bases spread around the globe. Amazon Pinpoint enables them to send SMS messages to a global audience through a single API endpoint, and the messages are routed to destination countries by the service. Amazon Pinpoint uses downstream SMS providers to deliver the messages, and these providers enforce country-specific limits on how quickly SMS can be sent (referred to as transactions per second, or TPS). These thresholds are imposed by telecom regulators in each country to prevent spam. If customer applications send more messages than the threshold for a country, downstream SMS providers may reject the delivery.

Such scenarios can be avoided by ensuring that upstream systems do not send more than the permitted number of messages per second. This can be achieved using one of the following mechanisms:

  • Implement rate-limiting on upstream systems which call Amazon Pinpoint APIs.
  • Implement queueing and consume jobs at a pre-configured rate.

While rate limiting and exponential backoff are regarded as best practices for many use cases, they can cause significant delays in message delivery when message throughput is very high. Furthermore, relying solely on rate limiting eliminates the opportunity to maximize throughput per country and prioritize communications accordingly. In this blog post, we evaluate a solution based on Amazon SQS queues and how they can be leveraged to ensure that messages are sent with predictable delays.

Solution Overview

The solution consists of an Amazon SNS topic that filters and fans out incoming messages to a set of Amazon SQS queues based on a country parameter in the incoming JSON payload. The messages from the queues are then processed by AWS Lambda functions that in turn invoke the Amazon Pinpoint APIs across one or more Amazon Pinpoint projects or accounts. The following diagram illustrates the architecture:

Step 1: Ingesting message requests

Upstream applications post messages to a pre-configured SNS topic instead of calling the Amazon Pinpoint APIs directly. This allows applications to post messages at a rate that is higher than Amazon Pinpoint’s TPS limits per country. For applications that are hosted externally, an Amazon API Gateway can also be configured to receive the requests and publish them to the SNS topic – allowing features such as routing and authentication.
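
As a rough illustration of this step (not code from the original post), an upstream producer might attach the destination country as an SNS message attribute so that it can be used for filtering later. The topic ARN, attribute name, and payload fields below are assumptions:

```python
import json

import boto3

sns = boto3.client("sns")

# Hypothetical topic ARN and payload shape; the post does not define exact names.
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:sms-requests"


def publish_sms_request(phone_number: str, body: str, country: str) -> None:
    """Publish an SMS request to the ingestion topic with a filterable country attribute."""
    sns.publish(
        TopicArn=TOPIC_ARN,
        Message=json.dumps({"phoneNumber": phone_number, "body": body, "country": country}),
        MessageAttributes={"country": {"DataType": "String", "StringValue": country}},
    )


publish_sms_request("+14155550100", "Your verification code is 123456", "US")
```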

Step 2: Queueing and prioritization

The SNS topic implements message filtering based on the country parameter and sends incoming JSON messages to separate SQS queues. This allows configuring downstream consumers based on the priority of individual queues and processing these messages at different rates.

The algorithm and attribute used for implementing message filtering can vary based on requirements. Similarly, filtering can also be enabled based on business use cases such as “REMINDERS”, “VERIFICATION”, “OFFERS”, or “EVENT NOTIFICATIONS”. In this example, the messages are filtered based on a country attribute shown below:
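
A minimal sketch of what such a country-based filter policy could look like when attached to a queue’s subscription, assuming boto3 and a hypothetical subscription ARN and country code:

```python
import json

import boto3

sns = boto3.client("sns")

# Hypothetical subscription ARN for the queue that should receive traffic for India.
SUBSCRIPTION_ARN = (
    "arn:aws:sns:us-east-1:123456789012:sms-requests:11112222-3333-4444-5555-666677778888"
)

# Deliver only messages whose "country" attribute matches this queue's country.
sns.set_subscription_attributes(
    SubscriptionArn=SUBSCRIPTION_ARN,
    AttributeName="FilterPolicy",
    AttributeValue=json.dumps({"country": ["IN"]}),
)
```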

Based on the filtering logic implemented, the messages are delivered to the corresponding SQS queues for further processing. Delivery failures are handled through a Dead Letter Queue (DLQ), enabling messages to be retried and pushed back into the queues.

Step 3: Consuming queue messages at fixed-rate

The messages from SQS queues are consumed by AWS Lambda functions that are configured per queue. These are lightweight functions that read messages in pre-configured batch sizes and call the Amazon Pinpoint Send Messages API. API call failures are handled through 1/ exponential backoff within the AWS SDK calls and 2/ DLQs set up as destination configurations on the Lambda functions. The Amazon Pinpoint Send Messages API is a batch API that allows sending messages to up to 100 recipients at a time. As such, it is possible for a request to succeed only partially; messages within a single API call that fail or are throttled should also be sent to the DLQ and retried.

The Lambda functions are configured to run with a fixed reserved concurrency value. This ensures that a fixed rate of messages is fetched from the queue and processed at any point in time. For example, a Lambda function receives messages from an SQS queue and calls the Amazon Pinpoint APIs. It has a reserved concurrency of 10 with a batch size of 10 items. The SQS queue rapidly receives 1,000 messages. The Lambda function scales up to 10 concurrent instances, each processing 10 messages from the queue. While it takes longer to process the entire queue, this results in a consistent rate of API invocations for Amazon Pinpoint.
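
A minimal sketch of such a consumer, assuming boto3, SQS partial-batch responses (ReportBatchItemFailures), and a hypothetical Pinpoint project ID; the function names, payload fields, and error handling are illustrative rather than taken from the original post:

```python
import json
import os

import boto3

pinpoint = boto3.client("pinpoint")

# Hypothetical project ID; in practice this would come from configuration.
APPLICATION_ID = os.environ.get("PINPOINT_APPLICATION_ID", "example-project-id")


def handler(event, context):
    """Consume a batch of SQS records and send each one as an SMS through Amazon Pinpoint."""
    failures = []
    for record in event["Records"]:
        payload = json.loads(record["body"])
        # SNS-to-SQS delivery wraps the original message unless raw message delivery is enabled.
        message = json.loads(payload["Message"]) if "Message" in payload else payload
        response = pinpoint.send_messages(
            ApplicationId=APPLICATION_ID,
            MessageRequest={
                "Addresses": {message["phoneNumber"]: {"ChannelType": "SMS"}},
                "MessageConfiguration": {
                    "SMSMessage": {"Body": message["body"], "MessageType": "TRANSACTIONAL"}
                },
            },
        )
        result = response["MessageResponse"]["Result"][message["phoneNumber"]]
        if result["DeliveryStatus"] != "SUCCESSFUL":
            # Report this record as failed so SQS retries it and the DLQ eventually catches it.
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```

For simplicity, this sketch sends one recipient per record, although the API accepts up to 100 addresses per call. Reserved concurrency can then be set on each function (for example, with the Lambda put_function_concurrency API or in the console) so that each queue is drained at a steady, predictable rate.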

Step 4: Monitoring and observability

Monitoring tools record performance statistics over time so that usage patterns can be identified. Timely detection of a problem (ideally before it affects end users) is the first step in observability. Detection should be proactive and multi-faceted, including alarms when performance thresholds are breached. For the architecture proposed in this blog, observability is enabled by using Amazon CloudWatch and AWS X-Ray.

Some of the key metrics that are monitored using Amazon CloudWatch are as follows (a sample alarm sketch follows the list):

  • Amazon Pinpoint
    • DirectSendMessagePermanentFailure
    • DirectSendMessageTemporaryFailure
    • DirectSendMessageThrottled
  • AWS Lambda
    • Invocations
    • Errors
    • Throttles
    • Duration
    • ConcurrentExecutions
  • Amazon SQS
    • ApproximateAgeOfOldestMessage
    • NumberOfMessagesSent
    • NumberOfMessagesReceived
  • Amazon SNS
    • NumberOfMessagesPublished
    • NumberOfNotificationsDelivered
    • NumberOfNotificationsFailed
    • NumberOfNotificationsRedrivenToDlq
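
As one example of turning these metrics into alerts, a backlog alarm on a single country queue (using the SQS ApproximateAgeOfOldestMessage metric above) might look like the following; the queue name, threshold, and notification topic are assumptions:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Hypothetical queue name and notification topic.
cloudwatch.put_metric_alarm(
    AlarmName="sms-queue-backlog-IN",
    Namespace="AWS/SQS",
    MetricName="ApproximateAgeOfOldestMessage",
    Dimensions=[{"Name": "QueueName", "Value": "sms-requests-IN"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=5,
    Threshold=300,  # alarm if the oldest message has waited more than 5 minutes
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```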

AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how the application and its underlying services are performing, to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components.

Notes:

  1. If you are using Amazon Pinpoint’s Campaign or Journey feature to deliver SMS to recipients in various countries, you do not need to implement this solution. Amazon Pinpoint will drain messages depending on the MessagesPerSecond configuration pre-defined in the campaign/journey settings.
  2. If you need to send transactional SMS to only a small number of countries (one or two), you should instead work with AWS Support to define your SMS sending throughput for those countries to accommodate spiky SMS message traffic.

Conclusion

This post shows how customers can leverage Amazon Pinpoint along with Amazon SQS and AWS Lambda to build, regulate and prioritize SMS deliveries across multiple countries or business use-cases. This leads to predictable delays in message deliveries and provides customers with the ability to control the rate and priority of messages sent using Amazon Pinpoint.


About the Authors

Satyasovan Tripathy works as a Senior Specialist Solution Architect at AWS. He is situated in Bengaluru, India, and focuses on the AWS Digital User Engagement product portfolio. He enjoys reading and travelling outside of work.

Rajdeep Tarat is a Senior Solutions Architect at AWS. He lives in Bengaluru, India and helps customers architect and optimize applications on AWS. In his spare time, he enjoys music, programming, and reading.

[$] A look at some 5.17 development statistics

Post Syndicated from original https://lwn.net/Articles/887559/

At the conclusion of the 5.17 development cycle, 13,038 non-merge changesets had found their way into the mainline repository. That is a lower level of activity than was seen for 5.16 (14,190 changesets) but well above 5.15 (12,337). In other words, this was a fairly typical kernel release. That is true in terms of where the work that made up the release came from as well.

Developer Sabotages Open-Source Software Package

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/03/developer-sabotages-open-source-software-package.html

This is a big deal:

A developer has been caught adding malicious code to a popular open-source package that wiped files on computers located in Russia and Belarus as part of a protest that has enraged many users and raised concerns about the safety of free and open source software.

The application, node-ipc, adds remote interprocess communication and neural networking capabilities to other open source code libraries. As a dependency, node-ipc is automatically downloaded and incorporated into other libraries, including ones like Vue.js CLI, which has more than 1 million weekly downloads.

[…]

The node-ipc update is just one example of what some researchers are calling protestware. Experts have begun tracking other open source projects that are also releasing updates calling out the brutality of Russia’s war. This spreadsheet lists 21 separate packages that are affected.

One such package is es5-ext, which provides code for the ECMAScript 6 scripting language specification. A new dependency named postinstall.js, which the developer added on March 7, checks to see if the user’s computer has a Russian IP address, in which case the code broadcasts a “call for peace.”

It constantly surprises non-computer people how much critical software is dependent on the whims of random programmers who inconsistently maintain software libraries. Between log4j and this new protestware, it’s becoming a serious vulnerability. The White House tried to start addressing this problem last year, requiring a “software bill of materials” for government software:

…the term “Software Bill of Materials” or “SBOM” means a formal record containing the details and supply chain relationships of various components used in building software. Software developers and vendors often create products by assembling existing open source and commercial software components. The SBOM enumerates these components in a product. It is analogous to a list of ingredients on food packaging. An SBOM is useful to those who develop or manufacture software, those who select or purchase software, and those who operate software. Developers often use available open source and third-party software components to create a product; an SBOM allows the builder to make sure those components are up to date and to respond quickly to new vulnerabilities. Buyers can use an SBOM to perform vulnerability or license analysis, both of which can be used to evaluate risk in a product. Those who operate software can use SBOMs to quickly and easily determine whether they are at potential risk of a newly discovered vulnerability. A widely used, machine-readable SBOM format allows for greater benefits through automation and tool integration. The SBOMs gain greater value when collectively stored in a repository that can be easily queried by other applications and systems. Understanding the supply chain of software, obtaining an SBOM, and using it to analyze known vulnerabilities are crucial in managing risk.

It’s not a solution, but it’s a start.

EDITED TO ADD (3/22): Brian Krebs on protestware.

Cloud Pentesting, Pt. 1: Breaking Down the Basics

Post Syndicated from Eric Mortaro original https://blog.rapid7.com/2022/03/21/cloud-pentesting-pt-1-breaking-down-the-basics/


The concept of cloud computing has been around for a while, but it seems like as of late — at least in the penetration testing field — more and more customers are looking to get a pentest done in their cloud deployment. What does that mean? How does that look? What can be tested, and what’s out of scope? Why would I want a pentest in the cloud? Let’s start with the basics here, to hopefully shed some light on what this is all about, and then we’ll get into the thick of it.

Cloud computing is the idea of using software and services that run on the internet as a way for an organization to deploy their once on-premise systems. This isn’t a new concept — in fact, the major vendors, such as Amazon’s AWS, Microsoft’s Azure, and Google’s Cloud Platform, have all been around for about 15 years. Still, cloud sometimes seems like it’s being talked about as if it were invented just yesterday, but we’ll get into that a bit more later.

So, cloud computing means using someone else’s computer, in a figurative or quite literal sense. Simple enough, right?  

Wrong! There are various ways that companies have started to utilize cloud providers, and these all impact how pentests are carried out in cloud environments. Let’s take a closer look at the three primary cloud configurations.

Traditional cloud usage

Some companies have simply lifted infrastructure and services straight from their own on-premise data centers and moved them into the cloud. This looks a whole lot like setting up one virtual private cloud (VPC), with numerous virtual machines, a flat network, and that’s it! While this might not seem like a company is using their cloud vendor to its fullest potential, they’re still reaping the benefits of never having to manage uptime of physical hardware, calling their ISP late at night because of an outage, or worrying about power outages or cooling.

But one inherent problem remains: The company still requires significant staff to maintain the virtual machines and perform operating system updates, software versioning, cipher suite usage, code base fixes, and more. This starts to look a lot like the typical vulnerability management (VM) program, where IT and security continue to own and maintain infrastructure. They work to patch and harden endpoints in the cloud and are still in line for changes to be committed to the cloud infrastructure.

Cloud-native usage

The other side of cloud adoption is a more mature approach, where a company has devoted time and effort toward transitioning their once on-premise infrastructure to a fully utilized cloud deployment. While this could very well include the use of the typical VPC, network stack, virtual machines, and more, the more mature organization will utilize cloud-native deployments. These could include storage services such as S3, function services, or even cloud-native Kubernetes.

Cloud-native users shift the priorities and responsibilities of IT and security teams so that they no longer act as gatekeepers to prevent the scaling up or out of infrastructure utilized by product teams. In most of these environments, the product teams own the ability to make commitments in the cloud without IT and security input. Meanwhile, IT and security focus on proper controls and configurations to prevent security incidents. Patching is exchanged for rebuilds, and network alerting and physical server isolation are handled through automated responses, such as an alert with AWS Config that automatically changes the security group for a resource in the cloud and isolates it for further investigation.
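
As a rough sketch of that kind of automated response (the quarantine security group, instance ID, and trigger wiring are hypothetical; in practice the function would typically be invoked by an AWS Config or EventBridge rule):

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical quarantine security group with no inbound or outbound rules.
QUARANTINE_SG_ID = "sg-0123456789abcdef0"


def isolate_instance(instance_id: str) -> None:
    """Swap an instance's security groups for a quarantine group pending investigation."""
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        Groups=[QUARANTINE_SG_ID],
    )


# Example: called by an automated rule when a noncompliant or suspicious resource is detected.
isolate_instance("i-0123456789abcdef0")
```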

These types of deployments start to more fully utilize the capabilities of the cloud, such as automated deployment through infrastructure-as-code solutions like AWS CloudFormation. Gone are the days when an organization would deploy Kubernetes on top of a virtual machine to deploy containers. Now, cloud-native vendors provide this service with AWS’s Elastic Kubernetes Service, Microsoft’s Azure Kubernetes Service, and for obvious reasons, Google’s Kubernetes Engine. These and other types of cloud-native deployments really help to ease the burden on the organization.

Hybrid cloud

Then there’s hybrid cloud. This is where a customer can set up their on-premise environment to also tie into their cloud environment, or vice versa. One common theme we see is with Microsoft Azure, where the Azure AD Connect sync is used to synchronize on-premise Active Directory to Azure AD. This can be very beneficial when the company is using other Software-as-a-Service (SaaS) components, such as Microsoft Office 365.

There are various benefits to utilizing hybrid cloud deployments. Maybe there are specific components that a customer wants to keep in house and support on their own infrastructure. Or perhaps the customer doesn’t yet have experience with how to maintain Kubernetes but is utilizing Google Cloud Platform. The ability to deploy your own services is the key to flexibility, and the cloud helps provide that.

In part two, we’ll take a closer look at how these different cloud deployments impact pentesting in the cloud.

