Tag Archives: AWS

AWS Graviton Processor Support on Insight Agent

Post Syndicated from Rapid7 original https://blog.rapid7.com/2022/12/09/aws-graviton-processor-support-on-insight-agent/


By Marco Botros

Marco is a Technical Product Manager for Platform at Rapid7.

We are pleased to announce that the Insight Agent now supports the AWS Graviton processor. The Insight Agent supports various operating systems using the AWS Graviton processor, including Amazon Linux, Red Hat, and Ubuntu. The full list of supported operating systems can be found in our documentation.

AWS first introduced its ARM-based server processor — Graviton — in 2018. It has since released Graviton2 in 2020 and Graviton3 in May of 2022. The Graviton3 processor delivers up to 25% better compute performance and uses up to 60% less energy than the Graviton2 processor. Besides supporting Linux on the Graviton processor, we will continue to support it on both 32-bit and 64-bit Intel processors. However, you will only be able to see the new Graviton installer if your organization’s agents are not pinned or, if they have been pinned, are on an agent version higher than 3.2.0. The new Linux installer is called `agent_installer-arm64.sh`, and the Intel-based Linux installer has been renamed to `agent_installer-x86_64.sh` (from just `agent_installer.sh`).

You can find more information on how to download and install the new Linux ARM installer in the download section of Agent Management in the platform.


You can also use the Agent Test Set feature to roll out the new agent on a select set of machines before deploying it widely.

Rapid7 Integration For AWS Verified Access

Post Syndicated from Aaron Sawitsky original https://blog.rapid7.com/2022/11/30/rapid7-integration-for-aws-verified-access/


Today at re:Invent, Amazon Web Services (AWS) unveiled its new AWS Verified Access service, and we are thrilled to announce that InsightIDR — Rapid7’s next-gen SIEM and XDR — will support log ingestion from this new service when it is made generally available.

What Is AWS Verified Access?

AWS Verified Access is a new service that allows AWS customers to simplify secure access to private applications running on AWS, without requiring the use of a VPN. Verified Access also lets customers easily implement Zero Trust policies for each application reached via the service. The data needed for these policies is provided by integrations between Verified Access and third-party solutions like IdPs and device management tools. For example:

  • Access to a low-risk application might be granted to any employee who is logged into the organization’s IdP solution
  • Access to a highly sensitive application might only be granted to employees who are logged into the organization’s IdP solution, are part of a specific team at the company, are accessing from a company-managed computer that is fully updated, and have an IP address coming from a country on an allowlist

For customers who already have IdP and device management solutions, Verified Access can integrate with many of these vendors, allowing the customer to use their existing provider to define policies while still getting the convenience of VPN-less access to their private applications through Verified Access.

Unlock a Complete Picture of Your Cloud Security with InsightIDR

Verified Access generates detailed logs for every authorization attempt. InsightIDR will be able to ingest these logs from AWS’s just-announced Amazon Security Lake. InsightIDR customers will be able to see ingress activity from Verified Access alongside ingress events from sources like AWS Identity and Access Management (IAM), VPNs, productivity apps, and more — not to mention telemetry from their broader cloud and on-premises environments. Like all ingress activity logs sent to InsightIDR, logs from Verified Access can be used to detect suspicious activity and can be brought into investigations to help establish the complete timeline and blast radius of an incident. In addition, customers will be able to create custom alerts on Verified Access logs to further scrutinize and monitor access to sensitive applications.

InsightIDR’s support for Verified Access is just the latest capability to come out of our never-ending dedication to support our customers as they adopt the newest cloud technologies. To learn more about how InsightIDR helps organizations using AWS, click here.

InsightIDR Launches Integration With New AWS Security Data Lake Service

Post Syndicated from Aaron Sawitsky original https://blog.rapid7.com/2022/11/29/insightidr-launches-integration-with-new-aws-security-data-lake-service/


It has been an action-packed day at AWS re:Invent. For security professionals, one of the most exciting announcements has to be the launch of Amazon Security Lake. We see a lot of potential for this new service, which is why Rapid7 is proud to announce the immediate availability of an integration between InsightIDR and Security Lake. Read on to learn more!

What Is Amazon Security Lake?

Amazon Security Lake gives AWS customers a security data lake that centralizes AWS and third-party security logs. What’s more, all data sent to Security Lake is formatted using the recently-launched OCSF standard. That means even if logs come from different services or different vendors, all logs for a given activity (e.g. all cloud activity logs, all network activity logs, etc.) will have the same format in Security Lake. This will make it easy for customers and their third-party vendors to make use of the data in Security Lake without first having to normalize data.

Another big feature in Security Lake is the granular control it offers. Customers can choose which users and third-party integrations can access which data sources and determine the duration of data that is available to each. For example, a customer might give their developers the ability to view CloudTrail data from the past five days so they can troubleshoot issues, but give InsightIDR the ability to view CloudTrail data from the past year.

InsightIDR’s Integration With Amazon Security Lake

InsightIDR’s new integration allows it to ingest log data from Security Lake. At the moment, InsightIDR will only ingest logs from AWS CloudTrail. Over time, we plan to add support for additional OCSF log types, which will allow customers to send data from multiple AWS and third-party services to InsightIDR through one Amazon Security Lake integration. This will give us the potential ability to immediately ingest and parse logs from any new third-party solution that gets introduced, as long as that solution can export its logs to Security Lake. Another customer benefit is that by consolidating the ingestion of multiple logs through Amazon Security Lake, onboarding and ongoing maintenance will be greatly reduced.

If you are an existing InsightIDR customer and want to take advantage of the new integration with Amazon Security Lake, instructions for setup are here.

Aligning to AWS Foundational Security Best Practices With InsightCloudSec

Post Syndicated from Ryan Blanchard original https://blog.rapid7.com/2022/11/22/aligning-to-aws-foundational-security-best-practices-with-insightcloudsec/


Written by Ryan Blanchard and James Alaniz

When an organization is moving its IT infrastructure to the cloud or expanding with net-new investment, one of the hardest tasks for the security team is to identify and establish the proper security policies and controls to keep its cloud environments secure and the applications and sensitive data they host safe.

This can be a challenge, particularly when teams lack the relevant experience and expertise to define such controls themselves, often looking to peers and the cloud service providers themselves for guidance. The good news for folks in this position is that the cloud providers have answered the call by providing curated sets of security controls, including recommended resource configurations and access policies to provide some clarity. In the case of AWS, this takes the form of the AWS Foundational Security Best Practices.

What are AWS Foundational Security Best Practices?

The AWS Foundational Security Best Practices standard is a set of controls intended as a framework for security teams to establish effective cloud security standards for their organization. This standard provides actionable and prescriptive guidance on how to improve and maintain your organization’s security posture, with controls spanning a wide variety of AWS services.

If you’re an organization that is just getting going in the cloud and has landed on AWS as your platform of choice, this standard is undoubtedly a really good place to start.

Enforcing AWS Foundational Security Best Practices can be a challenge

So, you’ve now been armed with a foundational guide to establishing a strong security posture for your cloud. Simple, right? Well, it’s important to be aware before you get going that actually implementing and operationalizing these best practices can be easier said than done. This is especially true if you’re working with a large, enterprise-scale environment.

One of the things that make it challenging to manage compliance with these best practices (or any compliance framework, for that matter) is the fact that the cloud is increasingly distributed, both from a physical perspective and in terms of adoption, access, and usage. This makes it hard to track and manage access permissions across your various business units, and also makes it difficult to understand how individual teams and users are doing in complying with organizational policies and standards.

Further complicating the matter is the reality that not all of these best practices are necessarily right for your business. There could be any number of reasons that your entire cloud environment, or even specific resources, workloads, or accounts, should be exempt from certain policies — or even subject to additional controls that aren’t captured in the AWS Foundational Security Best Practices, often for regulatory purposes.

This means you’ll want a security solution that has the ability to not just slice, dice, and report on compliance at the organization and account levels, but also lets you customize the policy sets based on what makes sense for you and your business needs. If not, you’re going to be at risk of constantly dealing with false positives and spending time working through which compliance issues need your teams’ attention.

Highlights from the AWS Foundational Security Best Practices Compliance Pack

There are hundreds of controls in the AWS Foundational Security Best Practices, and each of them has been included for good reason. In the interest of time, this post won’t detail all of them, but will instead present a few highlights of controls that address issues that unfortunately pop up far too often.

KMS.3 — AWS KMS Keys should not be unintentionally deleted

AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create and control the cryptographic keys used to encrypt and protect your data. It’s possible for keys to be inadvertently deleted. This can be problematic, because once keys are deleted they can never be recovered, and the data encrypted under that key is also permanently unrecoverable. When a KMS key is scheduled for deletion, a mandatory waiting period is enforced to allow time to correct an error or reverse the decision to delete. To help avoid unintentional deletion of KMS keys, the scheduled deletion can be canceled at any point during the waiting period and the KMS key will not be deleted.

Related InsightCloudSec Check: “Encryption Key with Pending Deletion”
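For illustration, here is a minimal boto3 sketch of how the waiting period can be used to recover a key that was scheduled for deletion by mistake. The key ID is hypothetical; the describe, cancel, and enable calls are standard AWS KMS APIs.

import boto3

kms = boto3.client("kms")

# Hypothetical key ID used for illustration only.
KEY_ID = "1234abcd-12ab-34cd-56ef-1234567890ab"

# If the key is still in its mandatory waiting period, cancel the scheduled
# deletion and re-enable it (cancellation leaves the key in the Disabled state).
key_state = kms.describe_key(KeyId=KEY_ID)["KeyMetadata"]["KeyState"]
if key_state == "PendingDeletion":
    kms.cancel_key_deletion(KeyId=KEY_ID)
    kms.enable_key(KeyId=KEY_ID)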

S3.1 — S3 Block Public Access setting should be enabled

As you’d expect, this check focuses on identifying S3 buckets that are available to the public internet. One of the first things you’ll want to be sure of is that you’re not leaving your sensitive data open to anyone with internet access. You might be surprised how often this happens.

Related InsightCloudSec Check: “Storage Container Exposed to the Public”
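As a rough sketch of the underlying fix, the bucket-level Block Public Access settings can be enabled with boto3 as shown below. The bucket name is hypothetical, and the same settings can also be applied account-wide through the S3 Control API.

import boto3

s3 = boto3.client("s3")

BUCKET_NAME = "example-bucket"  # hypothetical bucket name

# Enable all four Block Public Access settings so public ACLs and public
# bucket policies are both blocked and ignored.
s3.put_public_access_block(
    Bucket=BUCKET_NAME,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)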

CloudFront.1 — CloudFront distributions should have a default root object configured

While you typically access content from CloudFront by requesting the specific object — or objects — you’re looking for, it is possible for someone to request the root URL instead. To avoid this, AWS allows you to configure CloudFront to return a “default root object” when a request for the root URL is made. This is critical, because failing to define a default root object passes requests to your origin server. If you are using an S3 bucket as your origin, the user would gain access to a complete list of the contents of your bucket.

Related InsightCloudSec Check: “Content Delivery Network Without Default Root Object”
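A quick way to audit this yourself is to check whether a distribution defines a default root object. The sketch below assumes a hypothetical distribution ID and uses the standard CloudFront API.

import boto3

cloudfront = boto3.client("cloudfront")

DISTRIBUTION_ID = "E1EXAMPLE123ABC"  # hypothetical distribution ID

# Distributions without a default root object pass root URL requests to the
# origin, which can expose an S3 origin's object listing.
config = cloudfront.get_distribution_config(Id=DISTRIBUTION_ID)
if not config["DistributionConfig"].get("DefaultRootObject"):
    print(f"{DISTRIBUTION_ID} has no default root object configured")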

Lambda.1 — Lambda function policies should prohibit public access

As with the control highlighted earlier about publicly accessible S3 buckets, it’s also possible for Lambda functions to be configured in a way that allows public users to access or invoke them. You’ll want to keep an eye out and make sure you’re not inadvertently giving people outside of your organization access to and control of your functions.

Related InsightCloudSec Check: “Serverless Function Exposed to the Public”
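One hedged way to spot this in your own account is to inspect each function’s resource-based policy for statements that grant access to any principal. The function name below is hypothetical.

import json

import boto3

lambda_client = boto3.client("lambda")

FUNCTION_NAME = "my-function"  # hypothetical function name

# Fetch the function's resource-based policy, if one exists, and flag any
# statement whose principal is the wildcard "*".
try:
    policy_document = lambda_client.get_policy(FunctionName=FUNCTION_NAME)["Policy"]
    statements = json.loads(policy_document).get("Statement", [])
except lambda_client.exceptions.ResourceNotFoundException:
    statements = []  # no resource policy attached to the function

for statement in statements:
    principal = statement.get("Principal", {})
    is_public = principal == "*" or (
        isinstance(principal, dict) and principal.get("AWS") == "*"
    )
    if is_public:
        print(f"{FUNCTION_NAME} allows public access via statement {statement.get('Sid')}")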

CodeBuild.5 — CodeBuild project environments should not have privileged mode enabled

By default, Docker containers prohibit access to host devices. Privileged mode grants a build project’s Docker container access to all devices and the ability to manage objects such as images, containers, networks, and volumes. Unless the build project is used to build Docker images, privileged mode should never be enabled, to avoid unintended access to or deletion of critical resources.

Related InsightCloudSec Check: “Build Project With Privileged Mode Enabled”
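To find offending projects in your own account, a small boto3 sweep along the lines of the sketch below can help. It simply lists every CodeBuild project and reports those whose environment has privileged mode turned on.

import boto3

codebuild = boto3.client("codebuild")

# Walk every CodeBuild project and report those running in privileged mode.
paginator = codebuild.get_paginator("list_projects")
for page in paginator.paginate():
    project_names = page["projects"]
    if not project_names:
        continue
    for project in codebuild.batch_get_projects(names=project_names)["projects"]:
        if project["environment"].get("privilegedMode"):
            print(f"{project['name']} has privileged mode enabled")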

Continuously enforce AWS Foundational Security Best Practices with InsightCloudSec

InsightCloudSec allows security teams to establish and continuously measure compliance against organizational policies, whether they’re based on service provider best practices like those provided by AWS or tailored to specific business needs. This is accomplished through the use of compliance packs. A compliance pack within InsightCloudSec is a set of checks that can be used to continuously assess your cloud environments for compliance with a given regulatory framework or industry or provider best practices. The platform comes out of the box with 30+ compliance packs, including a dedicated pack for the AWS Foundational Security Best Practices.

InsightCloudSec continuously assesses your entire AWS environment for compliance with AWS’s recommendations, and detects non-compliant resources within minutes after they are created or an unapproved change is made. If you so choose, you can make use of the platform’s native, no-code automation to remediate the issue — either via deletion or by adjusting the configuration or permissions — without any human intervention.

If you’re interested in learning more about how InsightCloudSec helps continuously and automatically enforce cloud security standards, be sure to check out our bi-weekly demo series that goes live every other Wednesday at 1pm EST!

Shift management using Amazon Pinpoint’s 2-way-SMS

Post Syndicated from Pavlos Ioannou Katidis original https://aws.amazon.com/blogs/messaging-and-targeting/shift-management-using-amazon-pinpoints-2-way-sms/

Businesses with physical presence such as retail, restaurants and airlines need reliable solutions for shift management when demand fluctuates, employees call in sick, or other unforeseen circumstances arise. Their linear dependence on staff forces them to create overtime shifts as a way to cope with demand.

The overtime shift communication between the business and employees needs to be immediate and scalable. Employees in such industries might not have access to the internet, which rules out communication channels like email and push notifications. Furthermore, once the employees receive the available shifts, they need an easy way to book them and, if required, request further support.

In these situations, businesses need communication methods that are accessible by any mobile device, available without internet connection, and allow for replies. SMS fills all of these requirements and more. This blog showcases how with Amazon Pinpoint SMS channel and other AWS services you can develop an application that communicates available shifts to employees who are interested while allowing them to book by replying.

The solution presented in this blog is for shift management, but with small changes it can also communicate available appointments to customers or patients. An example of such a use case can be found in health care, where there is a waiting list to see a doctor and patients might cancel at the very last moment. The solution can communicate these available slots to all patients on the waiting list and allow them to book via SMS.

Architecture

The solution uses Amazon Pinpoint two way SMS, Amazon DynamoDB, AWS Lambda, Amazon Simple Notification Service (SNS) and Amazon Connect (optional). The next section dives deeper into the architecture diagram and logic flow.

Shift management architecture diagram

  1. The operator adds an item to the Shift’s campaigns Amazon DynamoDB table. The item consists of a unique key that is used as the Amazon Pinpoint Campaign Id and an attribute Campaign Message, which is used as the SMS message text that the campaign recipients will receive. The Campaign Message needs to include all available shifts that the operator wants to notify the employees about.
    1. Note: This can be done either from the AWS console or programmatically via API.
  2. Amazon DynamoDB streams invokes an AWS Lambda function upon creation of a new Amazon DynamoDB item. The AWS Lambda function uses the Amazon DynamoDB data to create and execute an Amazon Pinpoint SMS Campaign based on an existing customer segment of employees who are interested in overtime shifts. The customer segment is a prerequisite and it includes SMS endpoints of the employees who are interested in receiving shift updates.
  3. Employees who belong in that segment receive an SMS with the available shifts and they can reply to book the ones they are interested in.
  4. Any inbound SMS is published on an Amazon SNS topic.
  5. The 2 way SMS AWS Lambda function subscribes to the Amazon SNS topic and processes each inbound SMS based on its message body.
  6. The Shift’s status Amazon DynamoDB table stores the status of the shifts, which gets updated depending on the inbound SMS.
  7. If the employee requires further assistance, they can trigger an Amazon Connect outbound call via SMS.

The diagram below illustrates the four possible messages an employee can send to the application. To safeguard the application from outsiders and bad actors, the 2 way SMS AWS Lambda function checks whether the sender’s mobile number is on an allow list. In this solution, the allow list is hardcoded as an AWS Lambda environment variable, but it could also be stored in a database such as Amazon DynamoDB. A simplified sketch of this routing logic follows the diagram.

Inbound SMS business logic diagram
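The sketch below is a simplified illustration, not the deployed solution code, of how the 2 way SMS Lambda might route inbound messages published by the SNS topic. The APPROVED_NUMBERS environment variable name and the handler bodies are assumptions.

import json
import os

# Allow list of sender numbers, hardcoded as an environment variable
# (the variable name here is an assumption for illustration).
APPROVED_NUMBERS = os.getenv("APPROVED_NUMBERS", "").split(",")


def lambda_handler(event, context):
    # Amazon Pinpoint publishes each inbound SMS to the SNS topic.
    for record in event["Records"]:
        sms = json.loads(record["Sns"]["Message"])
        sender = sms["originationNumber"]
        body = sms["messageBody"].strip().upper()

        # Ignore messages from numbers that are not on the allow list.
        if sender not in APPROVED_NUMBERS:
            print(f"Ignoring message from unknown number {sender}")
            continue

        if body == "REQUEST":
            pass  # reply with all shifts whose shift_status is "available"
        elif body == "AGENT":
            pass  # trigger an Amazon Connect outbound call to the sender
        else:
            pass  # treat the body as a shift_id and try to book that shift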

Solution implementation

Prerequisites

To deploy this solution, you must have the following:

  1. An originating identity that supports 2 way SMS in the country you are planning to send SMS to – Supported countries and regions (SMS channel).
  2. A mobile phone to send and receive SMS.
  3. An AWS account.
  4. An Amazon Pinpoint project – How to create an Amazon Pinpoint project.
  5. An SMS customer segment – Download the example CSV, which contains one SMS endpoint. Replace the phone number (column C) with yours and import it to Amazon Pinpoint – How to import an Amazon Pinpoint segment.
  6. Add your mobile number in the Amazon Pinpoint SMS sandbox – Amazon Pinpoint SMS sandbox.
  7. An Amazon Connect instance, number & contact flow if you want your employees to be able to request an agent call back. Download the example Connect contact flow that you can import to your Amazon Connect instance.

Note: UK numbers with a +447 prefix are not allowed by default. Before you can dial these UK mobile numbers, you must submit a service quota increase request. For more information, see Amazon Connect Service Quotas in the Amazon Connect Administrator Guide.

Deploy the solution

  1. Download the CloudFormation template and navigate to the AWS CloudFormation console in the AWS region you want to deploy the solution.
  2. Select Create stack and With new resources. Choose Template is ready as Prerequisite – Prepare template and Upload a template file as Specify template. Upload the template downloaded in step 1.
  3. Fill the AWS CloudFormation parameters as shown below:
    1. ApprovedNumbers: The mobile numbers that are allowed to use this service. The format should be E164 and if there is more than one number separate them by comma e.g. +4457434243,+432434324.
    2. OriginationNumber: The mobile number that you have in your Amazon Pinpoint account in E164 format e.g. +44384238975.
    3. PinpointProjectId: The existing Amazon Pinpoint project Id.
    4. SegmentId: The Amazon Pinpoint existing segment Id that you want to send the SMS notifications to.
    5. ConnectEnable: Select YES if you already have an Amazon Connect instance with a Contact Flow and Queue. If you select NO, ignore all the fields below; the solution will still be deployed, but employees won’t be able to request a call back.
    6. InstanceId: The Amazon Connect InstanceId. Follow this link to learn how to find your Amazon Connect InstanceId.
    7. ContactFlowID: The Amazon Connect Contact Flow Id that you want this solution to use. Follow this link to learn how to find your Amazon Connect ContactFlow id.
    8. QueueID: The Amazon Connect Queue Id. To obtain the Amazon Connect Queue Id navigate to your Amazon Connect instance > Routing > Queues and it should appear on the browser URL, see example: https://your-instance.awsapps.com/connect/queues/edit?id=0c7fed63-815b-4040-8dbc-255800fca6d7.
    9. SourcePhoneNumber: The Amazon Connect number in E164 format that is connected to the Contact Flow provided in step 7.
  4. Once the solution has been successfully deployed, navigate to the Amazon DynamoDB console and access the ShiftsStatus DynamoDB table. Each item created represents a shift and should have a unique shift_id that employees use to book the shifts, a column shift_status with value = available and a column shift_info where you can put additional information about the shift – see example below:
    {
       "shift_id":{
          "S":"XYZ1234"
       },
       "shift_info":{
          "S":"15/08 5h nightshift"
       },
       "shift_status":{
          "S":"available"
       }
    }


  5. Navigate to Amazon Pinpoint console > SMS and voice > Phone numbers, select the phone number that you used as OriginationNumber for this solution and enable Two-way SMS. Under the Incoming messages destination section, select Choose an existing SNS topic and select the one containing the name TwoWaySMSSNSTopic.
  6. Navigate to the Amazon DynamoDB console and access the ShiftsCampaignDynamoDB table. Each item you create represents an Amazon Pinpoint SMS campaign. Create an Amazon DynamoDB item and provide a unique campaign_id, which will be used as the Amazon Pinpoint Campaign name. Create a new attribute (string) with the name campaign_message and type all available shifts that you want to communicate via this campaign. It is important to include the shift id(s) for each of the shifts you want your employees to be able to request – see the example below and, if you prefer to create the item programmatically, the boto3 sketch after this list.
    • Note: By completing this step, you will trigger an Amazon Pinpoint SMS Campaign. You can access the campaign information and analytics from the Amazon Pinpoint console.
{
  "campaign_id": {
    "S": "campaign_id1"
  },
  "campaign_message": {
    "S": "15/08 5h nightshift XYZ123, 18/08 3h dayshift XYZ124"
  }
}

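If you prefer to create the campaign item programmatically rather than through the console, a minimal boto3 sketch is shown below. The table name assumes the default used in this walkthrough, so adjust it to match your deployed stack.

import boto3

dynamodb = boto3.client("dynamodb")

TABLE_NAME = "ShiftsCampaignDynamoDB"  # adjust to the table created by your stack

# Writing this item triggers the DynamoDB stream, which in turn creates and
# executes the Amazon Pinpoint SMS campaign with the message below.
dynamodb.put_item(
    TableName=TABLE_NAME,
    Item={
        "campaign_id": {"S": "campaign_id1"},
        "campaign_message": {
            "S": "15/08 5h nightshift XYZ123, 18/08 3h dayshift XYZ124"
        },
    },
)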

Testing the solution

  • Make sure you have created the shifts in the ShiftsStatusDynamoDB Amazon DynamoDB table.
  • To test the SMS Campaign, replicate step 6 under Deploy the solution section.
  • Reply to the SMS received with the options below:
    1. Send a shift_id that doesn’t exist to receive an automatic response “This is not a valid shift code, please reply by typing REQUEST to view the available shifts”.
    2. Send a valid & available shift_id to book the shift and then check the ShiftsStatusDynamoDB Amazon DynamoDB table, where the shift_status should change to taken and there should be a new column employee with the mobile number of the employee who has requested it.
    3. Send REQUEST to receive all shifts with shift_status = available.
    4. If you have deployed the solution along with Amazon Connect, send AGENT and wait for the call.

Next steps

This solution can be extended to support SMS sending to multiple countries by acquiring the respective originating identities. Using the Amazon Pinpoint phone number validate service API, the application can identify the country of each recipient and choose the correct originating identity accordingly.
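As a rough sketch, the phone number validate service can be called with boto3 as shown below. The phone number is hypothetical, and the returned country code could then be used to select the matching originating identity.

import boto3

pinpoint = boto3.client("pinpoint")

PHONE_NUMBER = "+447912345678"  # hypothetical recipient number

# The validate service returns details such as the ISO country code and
# phone type, which can drive the choice of originating identity.
result = pinpoint.phone_number_validate(
    NumberValidateRequest={"PhoneNumber": PHONE_NUMBER}
)["NumberValidateResponse"]

print(result["CountryCodeIso2"], result["PhoneType"])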

Depending on your business requirements, you might want to change the agent call back option to chat via SMS. You can extend this solution to connect the agent via SMS chat by following the steps in this blog.

Clean-up

To delete the solution, navigate to the AWS CloudFormation console and delete the stack deployed.

About the Authors

Pavlos Ioannou Katidis

Pavlos Ioannou Katidis

Pavlos Ioannou Katidis is an Amazon Pinpoint and Amazon Simple Email Service Senior Specialist Solutions Architect at AWS. He loves diving deep into his customers’ technical issues and helping them design communication solutions. In his spare time, he enjoys playing tennis, watching crime TV series, playing FPS PC games, and coding personal projects.

Optimize your sending reputation and deliverability with SES dedicated IPs

Post Syndicated from Lauren Cudney original https://aws.amazon.com/blogs/messaging-and-targeting/optimize-your-sending-reputation-and-deliverability-with-ses-dedicated-ips/


Email remains the best medium for communicating with customers, with an ROI of 4,200%, higher than social media or blogs. Organizations that fail to adequately manage their email sending and reputation risk having their emails marked as spam, never reaching their customers’ inboxes, reducing trust with their customers and, ultimately, losing revenue. Studies have shown that 16% of all marketing emails either go completely missing or are caught by popular spam filters. In this blog post we will explain the benefits of sending email over a dedicated IP, and how dedicated IPs (managed) makes it easy to do so.

Improve your sender reputation and deliverability with dedicated IPs
When customers sign up to SES, their email is automatically sent from shared IPs. Shared IPs offer a cost-effective and safe method of sending email. A limitation of sending over a shared IP is that customers do not control their own sending reputation. The reputation of the IP that you send from is determined by the quality of content and engagement levels of all the emails sent from that IP. This means that good senders, who send highly engaged content or important transactional emails, cannot improve their sending reputation on shared IPs. By improving their sending reputation, senders can improve their deliverability rates and make sure that more of their emails get to the recipient’s inbox rather than their junk folder. Today, customers avoid this by sending via a dedicated IP. Dedicated IPs are exclusive to a single sender, so bad actors cannot affect their sending reputation and good senders can improve it.

A common method organizations use to increase delivery rates is to lease dedicated IPs, where they are the sole sender and do not share their IP with other senders. This helps grow and maintain sending reputation and build high levels of trust with ISPs and mailbox vendors, ensuring high delivery rates. Today, however, there are a number of issues with sending email via dedicated IPs. Customers experience difficulties in estimating how many dedicated IPs they need to handle their sending volume. This means that customers often lease too many IPs and pay for bandwidth that they don’t need. Dedicated IPs must also be “warmed up” by sending a gradually increasing amount of email each day via the IP, so that the recipient ISPs and mailboxes do not see a sudden large burst of emails coming from it, which is a signal of spam and can result in blocking. Customers must manually configure how much mail to increase by, and on average it takes 45 days to reach the required volume, hampering their time-to-market agility. This burden of provisioning, configuring, and managing dedicated IPs inhibits many email senders from adopting them, meaning that their sending reputation is not optimized.

Dedicated IPs (managed)
SES customers can now send their email via dedicated IPs (managed) and will have the entire process of provisioning, leasing, warming up and managing the IP fully automated. Dedicated IPs (managed) is a feature of SES that simplifies how SES customers setup and maintain email sending through a dedicated IP space. It builds on learnings and feedback gathered from customers using the current standard Dedicated IP offering.

Dedicated IPs (managed) provides the following key benefits:

  • Easy Onboarding – Customers can create a managed dedicated pool through the API/CLI/Console and start with dedicated sending, without having to open AWS support cases to lease/release individual IPs.
  • Auto-Scaling per ISP – No more manual monitoring or scaling of dedicated pools. The pool scales out and in automatically based on usage. This auto-scaling also takes into consideration ISPs specific policies. For example, if SES detects that an ISP supports a low daily send quota, the pool will scale-out to better distribute traffic to that ISP across more IPs. In the current offering, customers are responsible for right sizing the number of IPs in their sending pools.
  • Warmup per ISP – SES will track the warmup level for each IP in the pool toward each ISP individually. SES will also track domain reputation at the individual ISP level. If a customer has been sending all their traffic to Gmail, the IP is considered warmed up only for Gmail and cold for other ISPs. If the traffic pattern changes and customer ramps up their traffic to Hotmail, SES will ramp up traffic slowly for Hotmail as the IPs are not warmed up yet. In the current manual dedicated offering, warmup % is tracked at the aggregate IP level and therefore can’t track the individual ISP level of granularity.
  • Adaptive warmup – The warmup % calculation is adaptive and takes into account actual sending patterns at an ISP level. When the sending to an ISP drops, the warmup % also drops for the given ISP. Today, when warming up an IP, customers must specify a warm-up schedule and choose their volumes. Rather than having to specify a schedule and guess the optimal volume to send, dedicated IPs (managed) will adapt the sending volume based on each individual ISP’s capacity, optimizing the warm-up schedule.
  • Spill-over into shared & defer – Messages will be accepted through the API/SMTP, and the system will defer and retry excess sending from a customer when it exceeds what the pool can safely support for a particular ISP. In the early phases of the pool, the system will still leverage spill-over into shared IPs, as is done in the current offering.

Onboarding

If you use several dedicated IP addresses with Amazon SES, you can create groups of those addresses. These groups are called dedicated IP pools. A common scenario is to create one pool of dedicated IP addresses for sending marketing communications, and another for sending transactional emails. Your sender reputation for transactional emails is then isolated from that of your marketing emails. In this scenario, if a marketing campaign generates a large number of complaints, the delivery of your transactional emails is not impacted.

Configuration sets are groups of rules that you can apply to your verified identities. A verified identity is a domain, subdomain, or email address you use to send email through Amazon SES. When you apply a configuration set to an email, all of the rules in that configuration set are applied to the email. For example, you can associate these dedicated IP pools with configuration sets and use one for sending marketing communications, and another for sending transactional emails.

Onboarding to a managed dedicated pool for the most part is very similar to onboarding to regular dedicated IP sending. It involves creating a dedicated on demand pool, associating the pool with a configuration set, and specifying the configuration set to use when sending email. The configuration set can be also applied implicitly to a sending identity by using the default configuration set feature.

Below are the instructions for how to get set up on dedicated IPs (managed).

Instructions

Console

Customer accounts allow-listed for the feature preview will be able to configure and view the relevant SES resources through the SES Console UI as well.

1. Go to the SES AWS console and click on Dedicated IPs
2. On the Dedicated IPs Screen, select the Dedicated IPs (managed) tab
3. Click on the “Create dedicated on demand IP Pool” button
4. Enter the details of your new dedicated on demand pool. Specify Scaling Mode to be “OnDemand”. Do not associate it with a Configuration Set at this point. Click create.
5. Going back to the dedicated IP on demand pool view, you should see your newly created dedicated IPs (managed) pool in the “IP pools” table. If you have any existing standard dedicated pools, you can view them and their individual IPs under the “Standard dedicated IPs” tab.
6. View your current configuration sets.
7. Click on the “Create set” button.
8. Enter the details of your new configuration set. For Sending IP pool select your newly created dedicated on demand IP pool and click create.
9. View your verified sending identities and click on the identity you wish to onboard to dedicated sending.
10. Select the configuration tab. Under default configuration set, click on the Edit button.
11. Click on the “Assign a default configuration set” checkbox and select your newly created configuration set. Click save.
12. At this point all sending from that verified identity will automatically be sent using the dedicated on demand pool.

CLI

The dedicated on demand pool feature is currently in preview and not yet available through the public CLI. If you wish to configure or view your SES resources through the CLI, you can download and add an internal preview sesv2 model that contains the relevant API changes. A dedicated pool can be specified to be managed by setting the –scaling-mode MANAGED parameter when creating the dedicated pool.

wget https://tiny.amazon.com/qjdb5ewf/seses3amazemaidedijson -O "email-2019-09-27.dedicated-pool-managed.json"

aws configure add-model --service-model file://email-2019-09-27.dedicated-pool-managed.json --service-name sesv2-dedicated-managed

export AWS_DEFAULT_REGION=eu-central-1

# Create a managed dedicated pool
aws sesv2-dedicated-managed create-dedicated-ip-pool --pool-name dedicated1 --scaling-mode MANAGED

# Create a configuration set that will send through the dedicated pool
aws sesv2-dedicated-managed create-configuration-set --configuration-set-name cs_dedicated1 --delivery-options SendingPoolName=dedicated1

# Configure the configuration set as the default for your sending identity
aws sesv2 put-email-identity-configuration-set-attributes --email-identity {{YOUR-SENDING-IDENTITY-HERE}} --configuration-set-name cs_dedicated1

# Send SES email through the API or SMTP without requiring any code changes.
# Emails will be sent out through the dedicated pool.
aws ses send-email --destination ToAddresses={{DESTINATION-GOES-HERE}} --from {{YOUR-SENDING-IDENTITY-HERE}} --message "Subject={Data='Sending via managed ',Charset=UTF-8},Body={Text={Data=thebody,Charset=UTF-8}}"

Monitoring

We recommend customers onboard to event destinations [6] and delivery delay events [7] to more accurately track the sending performance of their dedicated sending.
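For reference, one hedged sketch of wiring up an event destination that includes delivery delay events is shown below. The destination name and the message-tag dimension are assumptions, and the configuration set name matches the one created in the CLI steps above.

import boto3

sesv2 = boto3.client("sesv2")

CONFIGURATION_SET = "cs_dedicated1"  # the configuration set created above
EVENT_DESTINATION_NAME = "dedicated-pool-monitoring"  # hypothetical name

# Publish send, delivery, delivery-delay, bounce, and complaint events to
# CloudWatch so dedicated-pool performance can be tracked over time.
sesv2.create_configuration_set_event_destination(
    ConfigurationSetName=CONFIGURATION_SET,
    EventDestinationName=EVENT_DESTINATION_NAME,
    EventDestination={
        "Enabled": True,
        "MatchingEventTypes": ["SEND", "DELIVERY", "DELIVERY_DELAY", "BOUNCE", "COMPLAINT"],
        "CloudWatchDestination": {
            "DimensionConfigurations": [
                {
                    "DimensionName": "campaign",  # hypothetical message tag
                    "DimensionValueSource": "MESSAGE_TAG",
                    "DefaultDimensionValue": "none",
                }
            ]
        },
    },
)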

Conclusion

In this blog post we explained the benefits of sending via a dedicated IP and the ease at which you can now do this using the new dedicated IPs (managed) feature.

For more information, please visit the below links:

https://docs.aws.amazon.com/ses/latest/dg/dedicated-ip.html 
https://docs.aws.amazon.com/ses/latest/dg/dedicated-ip-pools.html 
https://docs.aws.amazon.com/ses/latest/dg/managing-ip-pools.html
https://docs.aws.amazon.com/ses/latest/dg/using-configuration-sets-in-email.html
https://docs.aws.amazon.com/ses/latest/dg/managing-configuration-sets.html#default-config-sets
https://docs.aws.amazon.com/ses/latest/dg/monitor-using-event-publishing.html
https://aws.amazon.com/about-aws/whats-new/2020/06/amazon-ses-can-now-send-notifications-when-the-delivery-of-an-email-is-delayed/
https://docs.aws.amazon.com/ses/latest/dg/dedicated-ip-warming.html
https://docs.aws.amazon.com/ses/latest/dg/dedicated-ip-warming.html#dedicated-ip-auto-warm-up
https://docs.aws.amazon.com/ses/latest/dg/using-configuration-sets.html

https://dma.org.uk/uploads/misc/marketers-email-tracker-2019.pdf
https://www.emailtooltester.com/en/email-deliverability-test/

Amazon Simple Email Service (SES) helps improve inbox deliverability with new features

Post Syndicated from Lauren Cudney original https://aws.amazon.com/blogs/messaging-and-targeting/amazon-simple-email-service-ses-helps-improve-inbox-deliverability-with-new-features/

Email remains the core of any company’s communication stack, but emails cannot deliver maximum ROI unless they land in recipients’ inboxes. Email deliverability, or ensuring emails land in the inbox instead of spam, challenges senders because deliverability can be impacted by a number of variables, like sending identity, campaign setup, or email configuration. Today, email senders rely on dashboard metrics or deliverability consultants to optimize the inbox placement of their messages. However, the deliverability dashboards available today do not provide specific recommendations to improve deliverability. Consultants provide bespoke solutions, but can cost hundreds of dollars per hour and are not scalable.

 

That’s why, today, we’re proud to announce Virtual Deliverability Manager, a suite of new deliverability features in Amazon Simple Email Service (SES) that improve deliverability insights and recommendations. A new deliverability dashboard gives email senders deeper insight through detailed metrics on email deliverability. The Advisor feature provides configuration recommendations to help improve inbox placement. The Guardian feature can implement changes to email sending to help increase the percentage of messages that land in the inbox. Deliverability metrics and recommendations are generated in real time through Amazon SES, without the need for a human consultant or agency.

AWS Simple Email Service deliverability insights dashboard

The new SES feature reports email insights at a glance with metrics like open and click rate, but it can also give deeper insights into individual ISP and configuration performance. SES will deliver specific configuration recommendations to improve deliverability based on a specific campaign’s setup, like DMARC (Domain-based Message Authentication, Reporting and Conformance) or DKIM (DomainKeys Identified Mail) for a specific domain. Senders can turn on automatic implementation, which allows SES to intelligently adjust email sending configuration to maximize inbox placement.

AWS Simple Email Service deliverability recommendations to improve email sending

Applying new deliverability features is simple in the AWS console. Senders can log into their SES account dashboard to enable the Virtual Deliverability Manager features. The features are billed as a monthly subscription and can be turned on or off at any time. Once the features are activated, SES provides deliverability insights and recommendations in real time, allowing senders to improve the performance of future sending batches or campaigns. Senders using Simple Email Service (SES) get reliable, scalable email at the lowest industry prices. SES is backed by AWS data security, and email through SES supports compliance with HIPAA-eligible, FedRAMP-, GDPR-, and ISO-certified options.

To learn more about virtual deliverability manager, visit https://aws.amazon.com/ses

Your guide to Amazon Communication Developer Services (CDS) at re:Invent 2022

Post Syndicated from Lauren Cudney original https://aws.amazon.com/blogs/messaging-and-targeting/your-guide-to-amazon-communication-developer-services-cds-at-reinvent-2022/


AWS Communication Developer Services has an exciting lineup at re:Invent this year, with 12 sessions spanning communication solutions through Amazon Pinpoint, Amazon Simple Email Service, and the Amazon Chime SDK. With breakouts, workshops, and an in-person customer reception, there will be something for everyone. Here is a detailed guide to help you build your agenda and get the most out of your re:Invent 2022 experience.


In addition to AWS executive keynotes, like Adam Selipsky’s on Tuesday morning, there are several sessions you don’t want to miss:

Topic – CPaaS and AWS Communication Services

Improving multichannel communications (BIZ201) – Breakout session
Wednesday November 30, 11:30 AM – 12:30 PM PDT
Your communication strategy directly impacts customer satisfaction, revenue, growth, and retention. This session covers how the customer communication landscape is changing and how you can shift your product strategies to impact business outcomes. Learn how AWS Communication Developer Services (CDS) improve customer communication through CPaaS channels like email, SMS, push notifications, telephony, audio, and video. Explore how building a multichannel architecture can shift user habits and increase satisfaction. This session is tailored for chief marketing officers, chief product officers, chief innovation officers, marketing technologists, and communication builders.
Speaker: Siddhartha Rao, GM, Amazon Chime SDK, Amazon Web Services

Topic – Email (Amazon Simple Email Service)

Improve email inbox deliverability with innovation from Amazon SES (BIZ306) – Chalk Talk
Wednesday, November 30, 4:00 – 5:00 PM PDT
This chalk talk demonstrates how to improve email deliverability, leading to increased email ROI. Learn how to build new features in Amazon SES that increase deliverability through system and configuration suggestions, and then automatically implement those changes for more seamless sending. This chalk talk is designed for email data engineers, marketing operations professionals, and marketing engineers.

Topic – Messaging and Orchestration through SMS, Push, In App (Amazon Pinpoint)

Successfully engage customers using Amazon Pinpoint’s SMS channel (BIZ301) – Chalk Talk
Monday, November 28, 1:00 – 2:00 PM PDT
Developing or upgrading an SMS program can be challenging for any enterprise undergoing digital transformation. In this chalk talk, Amazon Pinpoint experts demonstrate how to build a two-way SMS solution and how to monitor global message deliverability. This talk is designed for communication builders, marketing technologists, and IT decision-makers.

Integrate AI and ML into customer engagement (BIZ309) – Chalk Talk
Thursday, December 1, 3:30 – 4:30 PM PDT
In this chalk talk, learn how to use Amazon Lex AI and Amazon Personalize ML to create two-way engagements with customers in Amazon Pinpoint. Amazon Pinpoint is an omni-channel messaging platform on AWS, supporting SMS, push notifications, email, voice, and custom channels. Follow along as Amazon Pinpoint specialists walk through sample architecture and code that uses Amazon Lex AI chatbots to check order status, share updates via the customer’s preferred channel, and personalize responses to the customer, taking customer engagement to the next level.

Topic – Voice and Video (Amazon Chime SDK)

Use AI to live transcribe and translate multiparty video calls – Workshop
Thursday, December 1, 2:00 – 4:00 PM PDT
Harness the power of real-time transcription in web-based voice or video calls through the Amazon Chime SDK. Join this interactive workshop to learn how to build transcription and translation into multiparty video calls requiring additional compliance and language accessibility, such as telehealth appointments, government services, parent-teacher interactions, or financial services. Experts guide attendees in a hands-on demo that combines the power of Amazon Chime SDK, Amazon Transcribe, and Amazon Translate into one simple interface. This workshop is designed for communication developers. You must bring your laptop to participate.

We look forward to seeing you at re:Invent!

How to Automatically Prevent Email Throttling when Reaching Concurrency Limit

Post Syndicated from Mark Richman original https://aws.amazon.com/blogs/messaging-and-targeting/prevent-email-throttling-concurrency-limit/

Introduction

Many users of Amazon Simple Email Service (Amazon SES) send large email campaigns that target tens of thousands of recipients. Regulating the flow of Amazon SES requests can prevent throttling due to exceeding the AWS service limit on the account.

Amazon SES service quotas include a soft limit on the number of emails sent per second (also known as the “sending rate”). This quota is intended to protect users from accidentally sending unintended volumes of email, or from spending more money than intended. Most Amazon SES customers have this quota increased, but very large campaigns may still exceed that limit. As a result, Amazon SES will throttle email requests. When this happens, your messages will fail to reach their destination.

This blog provides users of Amazon SES with a mechanism for regulating the flow of messages that are sent to Amazon SES. Cloud architects, engineers, and DevOps engineers designing a new Amazon SES solution, or improving an existing one, will benefit from reading this post.

Overview

A common solution for regulating the flow of API requests to Amazon SES is achieved using Amazon Simple Queue Service (Amazon SQS). Amazon SQS can send, store, and receive messages at virtually any volume and can serve as part of a solution to buffer and throttle the rate of API calls. It achieves this without the need for other services to be available to process the messages. In this solution, Amazon SQS prevents messages from being lost while waiting for them to be sent as emails.

Fig 1 — High level architecture diagram

But this common solution introduces a new challenge. The mechanism used by the Amazon SQS event source mapping for AWS Lambda invokes a function as soon as messages are visible. Our challenge is to regulate the flow of messages, rather than invoke Amazon SES as messages arrive in the queue.

Fig 2 — Leaky bucket

Developers typically limit the flow of messages in a distributed system by implementing the “leaky bucket” algorithm. This algorithm is an analogy to a bucket which has a hole in the bottom from which water leaks out at a constant rate. Water can be added to the bucket intermittently. If too much water is added at once or at too high a rate, the bucket overflows.

In this solution, we prevent this overflow by using throttling. Throttling can be handled in two ways: either before messages reach Amazon SQS, or after messages are removed from the queue (“dequeued”). Both of these methods pose challenges in handling the throttled messages and reprocessing them. These challenges introduce complexity and lead to the excessive use of resources that may cause a snowball effect and make the throttling worse.

Developers often use the following techniques to help improve the successful processing of feeds and submissions:

  • Submit requests at times other than on the hour or on the half hour. For example, submit requests at 11 minutes after the hour or at 41 minutes after the hour. This can have the effect of limiting competition for system resources with other periodic services.
  • Take advantage of times during the day when traffic is likely to be low, such as early evening or early morning hours.

However, these techniques assume that you have control over the rate of requests, which is usually not the case.

Amazon SQS, acting as a highly scalable buffer, allows you to disregard the incoming message rate and store messages at virtually any volume. Therefore, there is no need to throttle messages before adding them to the queue. As long as you eventually process messages faster than you receive new ones, any excess inflow will simply be processed later.

Regulating flow of messages from Amazon SQS

The proposed solution in this post regulates the dequeue of messages from one or more SQS queues. This approach can help prevent you from exceeding the per-second quota of Amazon SES, thereby preventing Amazon SES from throttling your API calls.

Available configuration controls

When it comes to regulating outflow from Amazon SQS you have a few options. MaxNumberOfMessages controls the maximum number of messages you can dequeue in a single read request. WaitTimeSeconds defines whether Amazon SQS uses short polling (0 seconds wait) or long polling (more than 0 seconds) while waiting to read messages from a queue. Though these capabilities are helpful in many use cases, they don’t provide full control over outflow rates.

Amazon SQS Event source mapping for Lambda is a built-in mechanism that uses a poller within the Lambda service. The poller polls for visible messages in the queue. Once messages are read, they immediately invoke the configured Lambda function. In order to prevent downstream throttling, this solution implements a custom poller to regulate the rate of messages polled instead of the Amazon SQS Event source mechanism.

Custom poller Lambda

Let’s look at the process of implementing a custom poller Lambda function. Your function should actively regulate the outflow rate without throttling or losing any messages.

First, you have to consider how to invoke the poller Lambda function once every second. Amazon EventBridge rules can schedule Lambda invocations at most once per minute, so the scheduler function itself has to provide the per-second cadence. You also have to consider how to complete processing of Amazon SES invocations as soon as possible. And finally, you have to consider how to send requests to Amazon SES at a rate as close as possible to your per-second quota, without exceeding it.

You can use long polling to meet all of these requirements. Using long polling (by setting the WaitTimeSeconds value to a number greater than zero) means the request queries all of the Amazon SQS servers and waits for messages, returning as soon as up to the maximum number of messages you can handle per second (the MaxNumberOfMessages value) are read. By setting MaxNumberOfMessages equal to your Amazon SES per-second quota, you prevent your requests from exceeding that limit.

By splitting the looping logic from the poll logic (by using two Lambda functions) the code loops every second (60 times per minute) and asynchronously runs the polling logic.

Fig 3 — Custom poller diagram

You can use the following Python code to create the scheduler loop function:

import os
from time import sleep, time_ns

import boto3

SENDER_FUNCTION_NAME = os.getenv("SENDER_FUNCTION_NAME")
lambda_client = boto3.client("lambda")


def lambda_handler(event, context):
    print(event)

    # This function is triggered once per minute by an EventBridge rule and
    # loops 60 times so the poller function is invoked roughly once per second.
    for _ in range(60):
        prev_ns = time_ns()

        # Invoke the poller function asynchronously so the loop is not blocked
        # while messages are read from SQS and sent to SES.
        response = lambda_client.invoke_async(
            FunctionName=SENDER_FUNCTION_NAME, InvokeArgs="{}"
        )
        print(response)

        # Sleep for the remainder of the current second before invoking again.
        delta_ns = time_ns() - prev_ns
        if delta_ns < 1_000_000_000:
            secs = (1_000_000_000.0 - delta_ns) / 1_000_000_000
            sleep(secs)

This Python code creates a poller function:

import json
import os

import boto3

UNREGULATED_QUEUE_URL = os.getenv("UNREGULATED_QUEUE_URL")
# Set MAX_NUMBER_OF_MESSAGES to your Amazon SES per-second sending quota so a
# single poll never returns more emails than you are allowed to send per second.
MAX_NUMBER_OF_MESSAGES = 3
WAIT_TIME_SECONDS = 1
CHARSET = "UTF-8"

ses_client = boto3.client("ses")
sqs_client = boto3.client("sqs")


def lambda_handler(event, context):
    # Long poll the unregulated queue for up to MAX_NUMBER_OF_MESSAGES messages.
    response = sqs_client.receive_message(
        QueueUrl=UNREGULATED_QUEUE_URL,
        MaxNumberOfMessages=MAX_NUMBER_OF_MESSAGES,
        WaitTimeSeconds=WAIT_TIME_SECONDS,
    )

    try:
        messages = response["Messages"]
    except KeyError:
        print("No messages in queue")
        return

    for message in messages:
        # Each queue message carries the details of a single email to send.
        message_body = json.loads(message["Body"])
        to_address = message_body["to_address"]
        from_address = message_body["from_address"]
        subject = message_body["subject"]
        body = message_body["body"]

        print(f"Sending email to {to_address}")

        ses_client.send_email(
            Destination={
                "ToAddresses": [
                    to_address,
                ],
            },
            Message={
                "Body": {
                    "Text": {
                        "Charset": CHARSET,
                        "Data": body,
                    }
                },
                "Subject": {
                    "Charset": CHARSET,
                    "Data": subject,
                },
            },
            Source=from_address,
        )

        # Remove the message from the queue only after the email has been sent.
        sqs_client.delete_message(
            QueueUrl=UNREGULATED_QUEUE_URL, ReceiptHandle=message["ReceiptHandle"]
        )

Regulating flow of prioritized messages from Amazon SQS

In the use case above, you may be serving a very large marketing campaign (“campaign1”) that takes hours to process. At the same time, you may want to process another, much smaller campaign (“campaign2”), which won’t be able to run until campaign1 is complete.

An obvious solution is to prioritize the campaigns by processing both in parallel. For example, allocate 90% of the Amazon SES per-second capacity limit to the larger campaign1, while allowing the smaller campaign2 to take the remaining 10% of the available capacity under the limit. Amazon SQS does not provide message priority functionality out of the box. Instead, create two separate queues and poll each queue according to your desired frequency.

Fig 4 — Prioritize campaigns by queue diagram

This solution works fine if you have a consistent flow of incoming messages to both queues. Unfortunately, once you finish processing campaign2, you will keep processing campaign1 using only 90% of the capacity limit per second.

Handling unbalanced flow

To handle an unbalanced flow of messages, merge both of your poller Lambdas. Implement one Lambda that polls both queues for MaxNumberOfMessages (equal to 100% of the combined limit). In this implementation, the poller Lambda sends 90% of its requests from campaign1’s queue and 10% from campaign2’s queue. When campaign2 no longer has messages to process, keep processing 100% of the capacity from campaign1’s queue.

Do not delete unsent messages from the queues. These messages will become visible after their queue’s visibility timeout is reached.

To further improve on the previous implementations, introduce a third FIFO queue to aggregate all messages from both queues and regulate dequeuing from that third FIFO queue. This allows you to use all available capacity under your SES limit while interweaving messages from both campaigns at the desired ratio (roughly 9:1 in this example). A simplified sketch of such a merge function follows the diagram.

Fig 5 — Adding FIFO merge queue diagram
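The sketch below illustrates one possible shape for that merge function, assuming hypothetical queue URL environment variables and an assumed per-second capacity. The regulated poller described earlier would then dequeue from the FIFO queue at the SES limit.

import os

import boto3

# Hypothetical environment variable names for illustration.
CAMPAIGN1_QUEUE_URL = os.getenv("CAMPAIGN1_QUEUE_URL")
CAMPAIGN2_QUEUE_URL = os.getenv("CAMPAIGN2_QUEUE_URL")
MERGE_FIFO_QUEUE_URL = os.getenv("MERGE_FIFO_QUEUE_URL")

sqs_client = boto3.client("sqs")


def forward(queue_url, max_messages):
    """Move up to max_messages from a campaign queue into the FIFO merge queue."""
    if max_messages <= 0:
        return 0
    response = sqs_client.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=min(max_messages, 10),  # SQS reads at most 10 per call
        WaitTimeSeconds=1,
    )
    messages = response.get("Messages", [])
    for message in messages:
        sqs_client.send_message(
            QueueUrl=MERGE_FIFO_QUEUE_URL,
            MessageBody=message["Body"],
            MessageGroupId="email-campaigns",
            MessageDeduplicationId=message["MessageId"],
        )
        sqs_client.delete_message(
            QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"]
        )
    return len(messages)


def lambda_handler(event, context):
    # Reserve roughly 10% of the assumed per-second capacity for campaign2 and
    # let campaign1 use whatever campaign2 did not consume.
    capacity = 10  # assumed SES per-second quota for this sketch
    used = forward(CAMPAIGN2_QUEUE_URL, max(1, capacity // 10))
    forward(CAMPAIGN1_QUEUE_URL, capacity - used)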

Processing the large campaign1 with up to 100% of the available capacity limit while reserving 10% of that capacity for the small campaign2 ensures that campaign2’s messages do not wait until all of campaign1’s messages have been processed. Once campaign2’s messages are all processed, campaign1’s messages continue to be processed using 100% of the capacity limit.

You can find instructions for configuring Amazon SQS queues here.

Conclusion

In this blog post, we have shown you how to regulate the dequeuing of Amazon SQS queue messages. This prevents you from exceeding your Amazon SES per-second limit and removes the need to deal with throttled requests. We explained how to combine Amazon SQS, AWS Lambda, and Amazon EventBridge to create a custom serverless regulating queue poller. Finally, we described how to regulate the flow of Amazon SES requests when using multiple priority queues. These techniques can reduce the implementation time for reprocessing throttled requests, optimize utilization of your SES request limit, and reduce costs.

About the Authors

This blog post was written by Guy Loewy and Mark Richman, AWS Senior Solutions Architects for SMB.

Deploying containers to AWS using ECS and CodePipeline 

Post Syndicated from Jessica Veljanovska original https://www.anchor.com.au/blog/2022/10/deploying-containers-to-aws/

Deploying containers to AWS using ECS and CodePipeline 

In this post, we will look at deploying containers to AWS using ECS and CodePipeline. Note that this is only an overview of the process and is not a complete tutorial on how to set this up. 

Containers 101 

Before we look at how we can deploy Containers onto AWS, it is worth looking at why we want to use containers in the first place. So, what are containers?

Containers provide environments for applications to run in isolation. Unlike virtualisation, containers do not require a full guest operating system for each container instance; instead, they share the kernel and other lower-level components of the host operating system as provided by the underlying containerization platform (the most popular of which is Docker, which we will be using in examples from here onwards). This makes containers more efficient at using the underlying hardware resources to run applications. Since Containers are isolated, they can include all their dependencies separate from other running Containers. Suppose you have two applications that each require specific, conflicting versions of Python to run; they will not run correctly if this requirement is not met. With containers, you can run both applications with their own environments and dependencies on the same host operating system without conflict. 

Containers are also portable, as a developer you can create, run, and test an image on your local workstation and then deploy the same image into your production environment. Of course, deploying into production is never that simple, but that example highlights the flexibility that containers can afford to developers in creating their applications. 

This is achieved by using images, which are read-only templates that provide the containerization platform with the instructions required to run the image as a Container. Each image is built from a Dockerfile that specifies how to build the image, and images can also be built on top of other images. This can be as simple or as complicated as it needs to be to successfully run the application.

FROM ubuntu:22.04 

COPY . /app 

RUN make /app 

CMD python /app/app.py

However, it is important to know that each image is made up of different layers, which are created based on each line of instruction in the Dockerfile used to build the image. Each layer is cached by Docker, which provides performance benefits for well-optimised Dockerfiles and the resulting images they create. When Docker runs the image, it creates a Container that stacks the read-only layers from the image and adds a runtime Read-Write layer on top, which is used for logging and any other data the application needs to write and cannot store in the image. You can use the same image to run as many Containers (running instances of the image) as you desire. Finally, when a Container is removed, the image is retained and only the Read-Write layer is lost. 


Elastic Container Service 

Elastic Container Service or ECS is AWS’ native Container management system. As an orchestration system it makes it easy to deploy and manage containerized applications at scale, with built-in scheduling that can allow you to spin up/down Containers at specific times or even configure auto-scaling. ECS has three primary modes of operation, known as launch types. The first launch type is the AWS EC2 launch type, where you run the ECS agent on EC2 instances that you maintain and manage. The available resource capacity is dependent on the number and type of EC2 instances that you are using. As you are already billed for the AWS resources consumed (EC2, EBS, etc.), there are no additional charges for using ECS with this launch type. If you are using AWS Outposts, you can also utilise that capacity instead of Cloud-based EC2 instances. 

The second launch type is the AWS Fargate launch type. This is similar to the EC2 launch type; however, the major difference is that AWS manages the underlying infrastructure. Because of this there are no inherent capacity constraints, and the billing model is based on the amount of vCPU and memory your Containers consume. 

The last launch type is the External launch type, which is known as ECS Anywhere. This allows you to use the features of ECS with your own on-premises hardware/infrastructure. This is billed at an hourly rate per managed instance. 

ECS operates using a hierarchy that connects together the different aspects of the service and allows flexibility in exactly how the application is run. This hierarchy consists of ECS Clusters, ECS Services, ECS Tasks, and finally the running Containers. We will briefly look at each of these services.

ECS Clusters 

ECS Clusters are a logical grouping of ECS Services and their associated ECS Task. A Cluster is where scheduled tasks can be defined, either using fixed intervals or cron expression. If you are using the AWS EC2 Launch Type, it is also where you can configure Auto-Scaling for the EC2 instances in the form of Capacity Providers. 

ECS Services 

ECS Services are responsible for running ECS Tasks. They will ensure that the desired number of Tasks are running and will start new Tasks if an existing Task stops or fails for any reason in order to maintain the desired number of running Tasks. Auto-Scaling can be configured here to automatically update the desired number of Tasks based on CPU and Memory. You can also associate the Service with an Elastic Load Balancer Target Group and if you do this you can also use the number of requests as a metric for Auto-Scaling. 

ECS Tasks 

ECS Tasks are running instances of an ECS Task Definition. The Task Definition describes one or more Containers that make up the Task. This is where the main configuration for the Containers is defined, including the Image/s that are used, the Ports exposed, Environment variables, and if any storage volumes (for example EFS) are required. The Task as a whole can also have Resource sizing configured, which is a hard requirement for Fargate due to its billing model. Task definitions are versioned, so each time you make a change it will create a new revision, allowing you to easily roll back to an older configuration if required.
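As a rough illustration of what registering a Task Definition revision involves, the boto3 sketch below registers a minimal definition. The family name, image URI, port, and memory values are placeholders for illustration, not part of the pipeline described later in this post.

import boto3

ecs_client = boto3.client("ecs")

# Placeholder values for illustration only
response = ecs_client.register_task_definition(
    family="my-app",                       # each registration creates a new revision of this family
    networkMode="bridge",
    requiresCompatibilities=["EC2"],
    containerDefinitions=[
        {
            "name": "my-app",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
            "essential": True,
            "memoryReservation": 256,      # soft memory limit in MiB
            "portMappings": [
                {"containerPort": 8080, "hostPort": 0, "protocol": "tcp"}
            ],
            "environment": [{"name": "PORT", "value": "8080"}],
        }
    ],
)

# The returned revision number can be used to roll forward or back
print(response["taskDefinition"]["revision"])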


Elastic Container Registry 

Elastic Container Registry, or ECR, is not formally part of ECS but supports it. ECR can be used to publicly or privately store Container images and, like all AWS services, has granular permissions provided using IAM. The main features of ECR, besides its ability to integrate with ECS, are the built-in vulnerability scanning for images and the lack of throttling limitations that public container registries might impose. It is not a free service though, so you will need to pay for any usage that falls outside of the Free-Tier inclusions.

ECR is not strictly required for using ECS; you can continue to use whatever image registry you want. However, if you are using CodePipeline to deploy your Containers, we recommend using ECR purely to avoid throttling issues preventing the CodePipeline from successfully running. 
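If you do choose ECR, creating a repository with scan-on-push enabled only takes one call. The sketch below uses a placeholder repository name, and the immutable-tag setting is an optional choice rather than a requirement of the pipeline.

import boto3

ecr_client = boto3.client("ecr")

# Placeholder repository name for illustration only
response = ecr_client.create_repository(
    repositoryName="my-app",
    imageScanningConfiguration={"scanOnPush": True},   # built-in vulnerability scanning
    imageTagMutability="IMMUTABLE",                     # optional: prevent tags from being overwritten
)

# The repository URI is what you reference from task definitions and CodeBuild
print(response["repository"]["repositoryUri"])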


CodePipeline 

In software development, a Pipeline is a collection of multiple tools and services working together in order to achieve a common goal, most commonly building and deploying application code. AWS CodePipeline is AWS’ native form of a Pipeline that supports both AWS services as well as external services in order to help automate deployments/releases.

CodePipeline is one part of the complete set of developer tools that AWS provides, which commonly have names starting with “Code”. This is important as by itself CodePipeline can only orchestrate other tooling. For a Pipeline that will deploy a Container to ECS, we will need AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy. In addition to running the components that are configured, CodePipeline can also provide and retrieve artifacts stored in S3 for each step. For example, it will store the application code from CodeCommit into S3 and provide this to CodeBuild; CodeBuild will then take this and create its own artifact files that are provided to CodeDeploy. 

AWS CodeCommit 

AWS CodeCommit is a fully managed source control service that hosts private Git repositories. While this is not strictly required for the CodePipeline, we recommend using it to ensure that AWS has a cached copy of the code at all times. External git repositories can use actions or their own pipelines to push code to CodeCommit when it has been committed. CodeCommit is used in the Source stage of the CodePipeline to both provide the code that is used and to trigger the Pipeline to run when a commit is made. Alternatively, you can use CodeStar Connections to directly use GitHub or BitBucket as the Source stage instead.

AWS CodeBuild 

AWS CodeBuild is a fully managed continuous integration service that can be used to run commands as defined in the BuildSpec it draws its configuration from, either in CodeBuild itself or from the Source repository. This flexibility allows it to compile source code, run tests, make API calls, or, in our case, build Docker images using the Dockerfile that is part of a code repository. CodeBuild is used in the Build stage of the CodePipeline to build the Docker image, push it to ECR, and update any Environment Variables used later in the deployment.

The following is an example of what a typical BuildSpec might look like for our purposes. 

version: 0.2

phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $ACCOUNTID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t "$REPOSITORY_URI:$IMAGE_TAG-$CODEBUILD_START_TIME" .
      - docker tag "$REPOSITORY_URI:$IMAGE_TAG-$CODEBUILD_START_TIME" "$REPOSITORY_URI:$IMAGE_TAG-$CODEBUILD_START_TIME"
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push "$REPOSITORY_URI:$IMAGE_TAG-$CODEBUILD_START_TIME"
      - echo Writing image definitions file...
      - printf '{"ImageURI":"%s"}' "$REPOSITORY_URI:$IMAGE_TAG-$CODEBUILD_START_TIME" > imageDetail.json
      - echo $REPOSITORY_URI:$IMAGE_TAG-$CODEBUILD_START_TIME
      - sed -i 's|<APP_NAME>|'$IMAGE_REPO_NAME'|g' appspec.yaml taskdef.json
      - sed -i 's|<SERVICE_PORT>|'$SERVICE_PORT'|g' appspec.yaml taskdef.json
      - sed -i 's|<AWS_ACCOUNT_ID>|'$AWS_ACCOUNT_ID'|g' taskdef.json
      - sed -i 's|<MEMORY_RESV>|'$MEMORY_RESV'|g' taskdef.json
      - sed -i 's|<IMAGE_NAME>|'$REPOSITORY_URI'\:'$IMAGE_TAG-$CODEBUILD_START_TIME'|g' taskdef.json
      - sed -i 's|<IAM_EXEC_ROLE>|'$IAM_EXEC_ROLE'|g' taskdef.json
      - sed -i 's|<REGION>|'$REGION'|g' taskdef.json

artifacts:
  files:
    - appspec.yaml
    - taskdef.json
    - imageDetail.json

Failures in this step are most likely caused by an incorrect, non-functioning DockerFile or hitting throttling from external Docker repositories. 

AWS CodeDeploy 

AWS CodeDeploy is a fully managed deployment service that integrates with several AWS services including ECS. It can also be used to deploy software to on-premises servers. The configuration of the deployment is defined using an appspec file. CodeDeploy offers several different deployment types and configurations. We tend to use the `Blue/Green` deployment type and the `CodeDeployDefault.ECSAllAtOnce` deployment configuration by default. 

The Blue/Green deployment model allows for the deployment to rollback to the previously deployed Tasks if the deployment is not successful. This makes use of Load Balancer Target Groups to determine if the created Tasks in the ECS Service are healthy. If the health checks fail, then the deployment will fail and trigger a rollback.  

In our ECS CodePipeline, CodeDeploy will run based on the following example appspec.yaml file. Note that the placeholder variables in <> are updated with real configuration by the CodeBuild stage.

version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: <TASK_DEFINITION>
        LoadBalancerInfo:
          ContainerName: "<APP_NAME>"
          ContainerPort: <SERVICE_PORT>

You may have noticed a lack of configuration regarding the actual Container we will be running up to this point. As we noted earlier, ECS Tasks use a Task Definition to configure the Task and Containers, which is exactly what we are doing here. One of the files the CodeBuild configuration above expects to be present is taskdef.json. Below is an example taskdef.json file; as with the appspec.yaml file, there are placeholder variables as indicated by <>.


{ 
  "executionRoleArn": "<IAM_EXEC_ROLE>", 
  "containerDefinitions": [ 
    { 
      "name": "<APP_NAME>", 
      "image": "<IMAGE_NAME>", 
      "essential": true, 
      "memoryReservation": <MEMORY_RESV>, 
      "portMappings": [ 
        { 
          "hostPort": 0, 
          "protocol": "tcp", 
          "containerPort": <SERVICE_PORT> 
        } 
      ], 
      "environment": [ 
        { 
          "name": "PORT", 
          "value": "<SERVICE_PORT>" 
        }, 
        { 
          "name": "APP_NAME", 
          "value": "<APP_NAME>" 
        }, 
        { 
          "name": "IMAGE_BUILD_DATE", 
          "value": "<IMAGE_BUILD_DATE>" 
        } 
      ], 
      "mountPoints": [] 
    } 
  ], 
  "volumes": [], 
  "requiresCompatibilities": [ 
    "EC2" 
  ], 
  "networkMode": "bridge", 
  "family": "<APP_NAME>" 
}

Failures in this stage of the CodePipeline generally mean that there is something wrong with the Container that is preventing it from starting successfully and passing its health check. Causes of this are varied and rely heavily on having good logging in place. Failures can range from the Dockerfile being configured to execute a command that does not exist, to the application itself erroring out on startup for whatever reason, such as pulling incomplete data from RDS or failing to connect to an external service. 

Summary 

In summary, we have seen what Containers are, why they are useful, and what options AWS provides with its Elastic Container Service (ECS) for running them. Additionally, we looked at what parts of AWS CodePipeline we would need to use in order to deploy our Containers to ECS using CodePipeline.

For further information on the services used, I would highly recommend looking at the documentation that AWS provides. 

For a more hands-on guided walkthrough of setting up ECS and CodePipeline, AWS provides guided workshops and tutorials, and there is also plenty of third-party material you can find online. 

The post Deploying containers to AWS using ECS and CodePipeline  appeared first on Anchor | Cloud Engineering Services.

Using CodePipeline to Host a Static Website

Post Syndicated from Jessica Veljanovska original https://www.anchor.com.au/blog/2022/10/using-codepipeline-to-host-a-static-website/

There are a variety of ways to host a website online. This blog post explores how to simply publish and automate a statically built website. Hugo is one such example of a system that can create static websites and is popularly used for blogs.

The final website itself will consist of HTML, CSS, images, and JavaScript. To achieve this, Anchor’s cloud experts have listed the AWS services required. These include:

  • Cloudfront
  • CodeBuild
  • CodePipeline
  • IAM
  • Amazon S3
  • CodeCommit

This solution should cost well below $5 per month, depending on traffic, as most of the items are within the Always Free Tier. The exception is Amazon S3 storage, which only has a valid free tier for the first 12 months.

In order to keep this simple, the below steps are done via the console, although I have also published a similar simplified project using Terraform, which can be found on GitHub. 

Setup the AWS Components
Setup S3 

To begin with, you will need to setup some storage for the website’s content, as well as the pipeline’s artifacts. 

Create the bucket for your Pipeline Files. Ensure that Block all public access is checked. It’s recommended you also enable Server-side encryption by default with SSE-S3. 

Create a bucket for your Website Files. Ensure that block all public access is NOT checked. This bucket will be open to the world as it will need to be configured to host a website. 

 

When the Website Files bucket is created, go to the Properties tab and Edit Static website hosting. Set it to Enable, and select the type as Host a static website. Save Changes. Note the URL under Bucket website endpoint. Go to the Permissions tab and Edit the Bucket policy. Paste in a policy such as the sample below. Update ${website-bucket-name} accordingly to match the name of the bucket. 

<{ "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "AWS": "*" }, "Action": "s3:GetObject", "Resource": "arn:aws:s3:::${website-bucket-name}/*" } ] } >

 

Setup IAM Users

You will need to first create an IAM Role that your CodePipeline and CodeBuild will be able to assume. Some of the below permissions can be drilled down further, as they are fairly generic. In this case we are merging the CodePipeline/CodeBuild permissions into one role. Create a Role in IAM with a unique name based on your project. You will need to set up the below trust policy.  


<{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "codebuild.amazonaws.com", "codepipeline.amazonaws.com" ] }, "Action": "sts:AssumeRole" } ] } >

Create a Policy. Paste in a policy such as the below sample. Update ${website-bucket-name} and ${pipeline-bucket-name} accordingly. Attach the policy to the role you created in the prior step. 


<{ "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Action": "s3:*", "Resource": [ "arn:aws:s3:::${pipeline-bucket-name}/*", "arn:aws:s3:::${pipeline-bucket-name}", "arn:aws:s3:::${website-bucket-name}/*", "arn:aws:s3:::${website-bucket-name}" ] }>,

<{ "Sid": "", "Effect": "Allow", "Action": [ "codebuild:*", "codecommit:*", "cloudfront:CreateInvalidation" ], "Resource": "*" }>,

< { "Sid": "", "Effect": "Allow", "Action": [ "logs:Put*", "logs:FilterLogEvents", "logs:Create*" ], "Resource": "*" } ] } >

Setup CloudFront

  • Access Cloudfront and select Create distribution.
  • Under Origin domain, select the S3 Bucket you created earlier.
  • Under Viewer protocol policy, set your desired actions. If you have a proper SSL you can set it up later and use Redirect HTTP to HTTPS.
  • Under Allowed HTTP methods, select GET, HEAD.
  • You can setup Alternate domain name here, but make sure you have an ACM Certificate to cover it, and setup Customer SSL certificate if you wish to use HTTPS.
  • Set the Default root object to index.html. This will ensure it loads the website if someone visits the root of the domain.
  • Leave everything else as default for the moment and click Create distribution.
  • Note down the Distribution ID, as you will need it for the Pipeline.

Setup the Pipeline

Now that all the components have been created, let’s setup the Pipeline to Tie them all Together.
Setup CodeCommit 
Navigate to CodeCommit, and select Create Repository.
You can use git-remote-codecommit in order to check out the repository locally. You will need to check it out to make additional changes.  

You will need to make a sample commit, so create a directory called public and a file called index.html within it with some sample content, and push it up. 

$ cat public/index.html 
This is an S3-hosted website test 

After this is done, you should have a branch called “master” or “main” depending on your local configuration. This will need to be referenced during pipeline creation.
Setup buildspec in Repository 

Add a buildspec.yml file to the CodeCommit repo in order to automatically upload your files to Amazon S3 and invalidate the CloudFront cache. Note that ${bucketname} and ${cloudfront_id} in the examples below need to be replaced with the real values.  


version: 0.2

phases:
  install:
    commands:
      - aws s3 sync ./public/ s3://${bucketname}/
      - aws cloudfront create-invalidation --distribution-id ${cloudfront_id} --paths "/*"
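The same invalidation can also be triggered outside the pipeline, for example from a small script or Lambda. A minimal boto3 sketch, assuming a placeholder distribution ID, is shown below.

import time
import boto3

cloudfront_client = boto3.client("cloudfront")

# Placeholder distribution ID for illustration only
response = cloudfront_client.create_invalidation(
    DistributionId="E1ABCDEFGHIJKL",
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        # CallerReference must be unique per invalidation request
        "CallerReference": str(time.time()),
    },
)

print(response["Invalidation"]["Id"])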

Setup CodePipeline 

In our example, we’re going to use a very simple and straightforward pipeline. It will have a Source and Deployment phase. 

Navigate in to CodePipeline, and select Create Pipeline. Enter your Pipeline name, and select the role you created earlier under Service role. 

Under Advanced Settings, set the Artifact Store to Custom Location and update the field to reference the pipeline bucket you created earlier on. 

Click next, and move to Adding a Source Provider. Select the Source provider, Repository name and Branch name setup previously, and click next leaving everything else as default. 

On the Build section, select AWS CodeBuild as the Build provider, and click Create Project under Project name. 

This will open a new window. CodeBuild needs to be created through this interface; otherwise, the artifacts and source configuration will not be set up correctly. 

Under Environment, select the latest Ubuntu runtime for CodeBuild, and under Service role select the IAM role you created earlier. Once that’s all done, click Continue to CodePipeline; it will close out the window and your project will now be created. 

Click Next, and then Skip deploy stage (we’re doing it during the build stage!). Finally, click Create pipeline and it will start running based on the work you’ve done so far.  

The website so far should now be available in the browser! Any further changes to the CodeCommit repository will result in the website being updated on S3 within minutes! 

The post Using CodePipeline to Host a Static Website appeared first on Anchor | Cloud Engineering Services.

AWS Communication Developer Services (CDS) recognized in the 2022 Gartner Market Guide for CPaaS

Post Syndicated from laurcudn original https://aws.amazon.com/blogs/messaging-and-targeting/aws-communication-developer-services-cds-recognized-in-the-2022-gartner-market-guide-for-cpaas/

2022 Gartner Market Guide for CPaaS recognizes AWS Communication Developer Services (CDS)

Every business needs to communicate with customers through channels like email, SMS, notifications, voice, and even video. CPaaS, or Communications Platform as a Service, providers combine those capabilities into a single offering. Communication developers exploring new CPaaS solutions can read the 2022 Gartner Market Guide for CPaaS to learn about AWS Communication Developer Services’ (CDS) capabilities in these channels.

What is CPaaS?

Gartner defines CPaaS, or Communication Platform as a Service, as “cPaaS offer[ing] application leaders a cloud-based, multilayered middleware on which they can develop, run and distribute communications software. The platform offers APIs and integrated development environments that simplify the integration of communications capabilities (for example, voice, messaging and video) into applications, services or business processes.” CPaaS offerings allow companies to integrate communication capabilities into their apps, websites, and technologies in a flexible and customized manner. Companies can choose to add channels and features, like SMS, email, voice, notifications, and more through APIs and SDKs. CPaaS also describes the layers of target audience segmentation, message orchestration and vertical solutions in communication technologies.

2022 Gartner CPaaS Market Guide

The 2022 Gartner CPaaS Market Guide provides key considerations to keep in mind when selecting a communications platform partner to increase customer engagement across channels. Gartner defines the market in three levels: Foundational, Emerging, and Potential Differentiation.

Foundational: Foundational requirements are common communication APIs, and are estimated by Gartner to represent about 85% of current market spend. These fall into channel-based capabilities like SIP trunking, phone numbers, long and short codes, network connections, SMS, voice, notifications, orchestration, inbound and outbound email, and basic security like two-factor authentication (2FA) via SMS, email, voice, or other channels. AWS Communication Developer Services (CDS) are built on flexible APIs that support all of the channels listed above. CDS allows builders to use APIs as simple configuration out of the box, or customize APIs through more advanced coding.

Emerging: Emerging offerings include ML capabilities like messaging and voice bots or sentiment analysis, advanced message options, reputation and delivery managers, and campaign managers. AWS Communication Developer Services (CDS) is built using AWS’ leading ML capabilities, and makes it easy for developers to implement advanced machine learning and artificial intelligence features like voice and messaging bots, speech to text, and sentiment analysis. AWS Communication Developer Services (CDS) has orchestrated email and message capabilities. Campaign management is simple through designated campaign and journey features.

Potential differentiation: Gartner identifies potential long-term opportunities for differentiation in areas like vertical solutions and integration with contact centers. CDS has helped many healthcare, financial services, and entertainment customers modernize their customer communications. Our customers work closely with leading partners like Accenture, Local Measure, and many more for seamless integration into current technology and communication stacks. Through CDS’ close ties with the Amazon Connect contact center solution, it’s easy to integrate contact center capabilities with messaging and multichannel communications.

Gartner CPaaS Architecture Diagram

Market Direction and Trends:

Gartner predicts continued growth of the CPaaS category. Gartner reports that most new CPaaS customers enter the category with a specific channel need, such as push notifications, and then expand their usage to include additional channels and functionalities as new needs arise.
Trends include conversational everything and the rise of messaging and voice bots, predictive intelligence for personalization, and local global expansion through Channel Partners.

About AWS Communication Developer Services

AWS Communication Developer Services (CDS) are a set of SDKs and APIs that allow developers to embed customer communications directly into their applications. Developers use AWS CDS to improve communication with their customers across different use cases, including application-to-person (A2P) communications, asynchronous communications, and real-time communications. AWS CDS supports channels like email, SMS, push notifications, chat, audio, video, and voice over the PSTN. Send messages to your customers with the scale of AWS and pay only for what you use.

Use Cases

Alerts + Notifications

Send customers alerts or notifications through email, SMS, push, in-app notifications, or via voice API

Promotional Messages

Send promotions and marketing messages with top email inbox deliverability and segmented messaging

In-App Video

Build your application or website with scalable video technology that can support telehealth, field support, education, and more

Interactive Voice Response (IVR)

Build an IVR system that can support touch-tone, conversation voice bots, and text-to-speech

Want to read the full Market Guide?
Download the 2022 Gartner CPaaS Market Guide for free here.

Retry delivering failed SMS using Amazon Pinpoint

Post Syndicated from satyaso original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-utilise-amazon-pinpoint-to-retry-unsuccessful-sms-delivery/

Organizations in many sectors and verticals have user bases to whom they send transactional SMS messages such as OTPs (one-time passwords), notices, or transaction/purchase confirmations, among other things. Amazon Pinpoint enables customers to send transactional SMS messages to a global audience through a single API endpoint, and the messages are routed to recipients by the service. Amazon Pinpoint relies on downstream SMS providers and telecom operators to deliver the messages to the end user’s device. While SMS messages get delivered to recipients most of the time, sometimes these messages cannot be delivered due to carrier/telecom-related issues that are transient in nature. This impacts the customer’s brand. As a result, customers need to implement a solution that allows them to retry the transmission of SMS messages that fail due to transitory problems caused by downstream SMS providers or telecom operators.

In this blog post, you will discover how to retry sending unsuccessfully delivered SMS messages caused by transitory problems at the downstream SMS provider or telecom operator side.

Prerequisites

For this post, you should be familiar with the following:

  • Managing an AWS account
  • Amazon Pinpoint
  • Amazon Pinpoint SMS events
  • AWS Lambda
  • AWS CloudFormation
  • Amazon Kinesis Firehose
  • Amazon Kinesis Data Streams
  • Amazon DynamoDB WCU and RCU provisioning

Architecture Overview

The architecture depicted below is a potential architecture for resending unsuccessful SMS messages in real time. The application sends the SMS message to Amazon Pinpoint for delivery using the SendMessage API. Pinpoint receives the message and returns a receipt notification with the Message ID; the application records the message content and ID to a datastore, in this case DynamoDB. Amazon Pinpoint delivers messages to users and then receives SMS engagement events. The same SMS engagement events are provided to Amazon Kinesis Data Streams, which acts as an event source for a Lambda function that validates the event type. If the event type indicates that the SMS message could not be sent and it makes sense to retry, the Lambda function logic retrieves the respective message ID from the SMS event and then retrieves the message body from the database. It then sends the SMS message to Amazon Pinpoint for redelivery; you can choose the same or an alternative origination number as the origination identity while resending the SMS to end users. We recommend configuring the number of retries and adding a retry message tag within Pinpoint to analyse retries and also to avoid infinite loops. All events are also sent to Amazon Kinesis Firehose, which then saves them to your S3 data lake for later audit and analytics purposes.
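A rough sketch of that producer step is shown below, assuming a hypothetical DynamoDB table named sms-message-state and placeholder project ID, phone number, and message body; the actual producer Lambda shipped with the solution may differ.

import boto3

pinpoint_client = boto3.client("pinpoint")
table = boto3.resource("dynamodb").Table("sms-message-state")  # hypothetical table name

APPLICATION_ID = "your-pinpoint-project-id"   # placeholder Pinpoint project ID
DESTINATION = "+15555550100"                  # placeholder recipient number
MESSAGE_BODY = "Your one-time password is 123456"

response = pinpoint_client.send_messages(
    ApplicationId=APPLICATION_ID,
    MessageRequest={
        "Addresses": {DESTINATION: {"ChannelType": "SMS"}},
        "MessageConfiguration": {
            "SMSMessage": {"Body": MESSAGE_BODY, "MessageType": "TRANSACTIONAL"}
        },
    },
)

# Persist the message body keyed by the Pinpoint message ID so the retry
# Lambda can look it up when a failure event arrives.
message_id = response["MessageResponse"]["Result"][DESTINATION]["MessageId"]
table.put_item(Item={"message_id": message_id, "body": MESSAGE_BODY,
                     "destination": DESTINATION, "retry_count": 0})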

Note: The Lambda concurrency and DynamoDB WCU/RCUs need to be provisioned accordingly. The AWS CloudFormation template provided in this post automatically sets up the different architecture components required to retry unsuccessful SMS messages.

Retry delivering failed SMS using Amazon Pinpoint

At the same time, if you use an Amazon Kinesis Firehose delivery stream instead of a Kinesis data stream to stream data to a storage location, you might consider utilising a transformation Lambda as part of the Kinesis Firehose delivery stream to retry unsuccessful messages. The architecture is as follows: the application sends the SMS payload to Amazon Pinpoint using the SendMessage API/SDK while also writing the message body to a persistent data store, in this case a DynamoDB database. The SMS-related events are then sent to Amazon Kinesis Firehose, where a transformation Lambda is set up. In essence, if the SMS event type returns no errors, the event is returned to Firehose as-is. However, if an event type indicates a failure and it makes sense to retry, the Lambda logic sends another SendMessage until the retry count (set to 5 within the code) is reached. If a retry attempt is made, the event is not loaded into S3 storage (thus the result=Dropped). Since Pinpoint events do not contain the actual SMS content, a call to DynamoDB is made to get the message body for a new SendMessage.

Retry SMS diagram

Amazon Pinpoint provides an event response for each transactional SMS communication. For retrying unsuccessful SMS deliveries, there are primarily two factors to consider in this architecture: 1/ the type of event (event_type) and 2/ the record status (record_status). Whenever the event_type is “_SMS.FAILURE” and the record_status is any of “UNREACHABLE”, “UNKNOWN”, “CARRIER_UNREACHABLE”, or “TTL_EXPIRED”, the customer application needs to retry the SMS message delivery. The following pseudo code snippet explains the conditional flow for the failed SMS resending logic within the Lambda function.

Code Sample:
RETRYABLE_STATUSES = {'UNREACHABLE', 'UNKNOWN', 'CARRIER_UNREACHABLE', 'TTL_EXPIRED'}

if event['event_type'] == '_SMS.FAILURE' and event['record_status'] in RETRYABLE_STATUSES:
    # Resend the SMS message, then drop the record from the Firehose output
    send_message(message_content, destination)
    output_record = {'recordId': record['recordId'], 'result': 'Dropped',
                     'data': base64.b64encode(payload.encode('utf-8'))}
else:
    output_record = {'recordId': record['recordId'], 'result': 'Ok',
                     'data': base64.b64encode(payload.encode('utf-8'))}
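For illustration, the resend path could look like the following resend_sms helper, which fetches the stored message body by the Pinpoint message ID carried in the failure event and enforces a retry ceiling. The table name, project ID, and origination number are placeholders; the solution’s GitHub repository contains the actual implementation.

import boto3

pinpoint_client = boto3.client("pinpoint")
table = boto3.resource("dynamodb").Table("sms-message-state")  # hypothetical table name

APPLICATION_ID = "your-pinpoint-project-id"   # placeholder Pinpoint project ID
ORIGINATION_NUMBER = "+18445550123"           # placeholder alternative origination identity
MAX_RETRIES = 3


def resend_sms(message_id, destination):
    # Pinpoint events do not carry the SMS body, so fetch it from DynamoDB
    item = table.get_item(Key={"message_id": message_id}).get("Item")
    if not item or item["retry_count"] >= MAX_RETRIES:
        return

    pinpoint_client.send_messages(
        ApplicationId=APPLICATION_ID,
        MessageRequest={
            "Addresses": {destination: {"ChannelType": "SMS"}},
            "MessageConfiguration": {
                "SMSMessage": {
                    "Body": item["body"],
                    "MessageType": "TRANSACTIONAL",
                    "OriginationNumber": ORIGINATION_NUMBER,
                }
            },
        },
    )
    # Track how many times this message has been retried
    table.update_item(
        Key={"message_id": message_id},
        UpdateExpression="SET retry_count = retry_count + :one",
        ExpressionAttributeValues={":one": 1},
    )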

Getting started with solution deployment

Prerequisite tasks to be completed before deploying the logging solution

  1. Go to CloudFormation Console and Click Create Stack.
  2. Select the Amazon S3 URL radio button and provide the CloudFormation template link.
  3. Click Next on Create Stack screen.
  4. Specify Stack Name, for example “SMS-retry-stack”
  5. Specify the event stream configuration option; this will trigger the respective child CloudFormation stack. There are three event stream configurations you can choose from.
    • No existing event stream setup – Select this option if you don’t have any event stream setup for Amazon Pinpoint.
    • Event stream setup with Amazon Kinesis Stream – Select this option if your Amazon Pinpoint project already has Amazon Kinesis as the event stream destination.
    • Event stream setup with Amazon Kinesis Firehose – Select this option if you have configured a Kinesis Firehose delivery stream as the event stream destination.
  6. Specify the Amazon Pinpoint project app ID (Pinpoint project ID), and click Next.
  7. Click Next on Configure stack options screen.
  8. Select “I acknowledge that AWS CloudFormation might create IAM resources” and click Create Stack.
  9. Wait for the CloudFormation template to complete and then verify that the resources in the CloudFormation stack have been created. Click on individual resources and verify.
    • Parent stack-SMS retry parent stack
    • Child Stack –SMS retry child stack
  10. As described in the architecture overview section, the maxRetries configuration inside the “RetryLambdaFunction” ensures that resending unsuccessful SMS messages is attempted repeatedly. This number is set to 3 by default. If you want to adjust the maxRetries count, go to the “RetryLambdaFunction” settings and change it to the desired number.

Notes: The CloudFormation link in the blog specifically points to the parent CloudFormation template, which has links to the child CloudFormation stacks; these child stacks will be deployed automatically as you go through the parent stack.

Testing the solution

You can test the solution using the “PinpointDDBProducerLambdaFunction” and SMS simulator numbers. PinpointDDBProducerLambdaFunction has sample code that triggers the SMS using Amazon Pinpoint.

testing SMS retry solution

Now follow the steps below to test the solution.

  1. Go to the environment variables for PinpointDDBProducerLambdaFunction.
  2. Update “destinationNumber” and “pinpointApplicationID”, where the destination number is the recipient number to which you wish to send the SMS as a failed attempt, and the Amazon Pinpoint application ID is the Pinpoint project ID for which the Pinpoint SMS channel has already been configured.
  3. Deploy and test the Lambda function.
  4. Check the “Pinpoint Message state” DynamoDB table and open the latest table item.
  5. If you observe the table items, you will see retry_count=2 (the SMS send retry has been attempted 2 times) and all_retries_failed=true (for both attempts the SMS could not be delivered).
Notes:
  • If the existing Kinesis stream has a pre-defined destination Lambda, the current stack will not replace it but will exit gracefully.
  • If the existing Kinesis Firehose has a pre-existing transformation Lambda, the current stack will not replace it.

Remarks

This SMS retry solution is based on best effort. This means that the solution is dependent on event response data from SMS aggregators. If the SMS aggregator data is incorrect, this solution may not produce the desired effect.

Cost

Considering that the retry mechanism is applied to 1,000,000 unsuccessful SMS messages per month, this solution will cost approximately $20 per month. Here is the AWS calculator link for reference.

Clean up

When you’re done with this exercise, complete the following steps to delete your resources and stop incurring costs:

  • On the CloudFormation console, select your stack and choose Delete.
  • This cleans up all the resources created by the stack.

Conclusion

In this blog post, we have demonstrated how customers can retry sending undelivered/failed SMS messages via Amazon Pinpoint. We explained how to leverage Amazon Kinesis Data Streams and AWS Lambda functions to assess the status of unsuccessful SMS messages and retry delivering them automatically.

Extending the solution

This blog provides a useful framework to implement a solution to retry sending failed SMS messages. You can download the AWS CloudFormation templates, code, and scripts for this solution from our GitHub repository and modify them to fit your needs.


About the Authors
Satyasovan Tripathy works as a Senior Specialist Solution Architect at AWS. He is situated in Bengaluru, India, and focuses on the AWS Digital User Engagement product portfolio. He enjoys reading and travelling outside of work.

Nikhil Khokhar is a Solutions Architect at AWS. He specializes in building and supporting data streaming solutions that help customers analyze and get value out of their data. In his free time, he makes use of his 3D printing skills to solve everyday problems.

Target your customers with ML based on their interest in a product or product attribute.

Post Syndicated from Pavlos Ioannou Katidis original https://aws.amazon.com/blogs/messaging-and-targeting/use-machine-learning-to-target-your-customers-based-on-their-interest-in-a-product-or-product-attribute/

Customer segmentation allows marketers to better tailor their efforts to specific subgroups of their audience. Businesses who employ customer segmentation can create and communicate targeted marketing messages that resonate with specific customer groups. Segmentation increases the likelihood that customers will engage with the brand, and reduces the potential for communications fatigue—that is, the disengagement of customers who feel like they’re receiving too many messages that don’t apply to them. For example, if your business wants to launch an email campaign about business suits, the target audience should only include people who wear suits.

This blog presents a solution that uses Amazon Personalize to generate highly personalized Amazon Pinpoint customer segments. Using Amazon Pinpoint, you can send messages to those customer segments via campaigns and journeys.

Personalizing Pinpoint segments

Marketers first need to understand their customers by collecting customer data such as key characteristics, transactional data, and behavioral data. This data helps to form buyer personas, understand how they spend their money, and what type of information they’re interested in receiving.

You can create two types of customer segments in Amazon Pinpoint: imported and dynamic. With both types of segments, you need to perform customer data analysis and identify behavioral patterns. After you identify the segment characteristics, you can build a dynamic segment that includes the appropriate criteria. You can learn more about dynamic and imported segments in the Amazon Pinpoint User Guide.

Businesses selling their products and services online could benefit from segments based on known customer preferences, such as product category, color, or delivery options. Marketers who want to promote a new product or inform customers about a sale on a product category can use these segments to launch Amazon Pinpoint campaigns and journeys, increasing the probability that customers will complete a purchase.

Building targeted segments requires you to obtain historical customer transactional data, and then invest time and resources to analyze it. This is where the use of machine learning can save time and improve the accuracy.

Amazon Personalize is a fully managed machine learning service, which requires no prior ML knowledge to operate. It offers ready to use models for segment creation as well as product recommendations, called recipes. Using Amazon Personalize USER_SEGMENTATION recipes, you can generate segments based on a product ID or a product attribute.

About this solution

The solution is based on the following reference architectures:

Both of these architectures are deployed as nested stacks along the main application to showcase how contextual segmentation can be implemented by integrating Amazon Personalize with Amazon Pinpoint.

High level architecture

Architecture Diagram

Once training data and training configuration are uploaded to the Personalize data bucket (1), an AWS Step Functions state machine is executed (2). This state machine implements a training workflow to provision all required resources within Amazon Personalize. It trains a recommendation model (3a) based on the Item-Attribute-Affinity recipe. Once the solution is created, the workflow creates a batch segment job to get user segments (3b). The job configuration focuses on providing segments of users that are interested in action genre movies:

{ "itemAttributes": "ITEMS.genres = \"Action\"" }

When the batch segment job finishes, the result is uploaded to Amazon S3 (3c). The training workflow state machine publishes Amazon Personalize state changes on a custom event bus (4). An Amazon EventBridge rule listens for events indicating that a batch segment job has finished (5). Once this event is put on the event bus, a batch segment post-processing workflow is executed as an AWS Step Functions state machine (6). This workflow reads and transforms the segment job output from Amazon Personalize (7) into a CSV file that can be imported as a static segment into Amazon Pinpoint (8). The CSV file contains only the Amazon Pinpoint endpoint IDs that refer to the corresponding users from the Amazon Personalize recommendation segment, in the following format:

Id
hcbmnng30rbzf7wiqn48zhzzcu4
tujqyruqu2pdfqgmcgkv4ux7qlu
keul5pov8xggc0nh9sxorldmlxc
lcuxhxpqh/ytkitku2zynrqb2ce

The mechanism for resolving an Amazon Pinpoint endpoint ID relies on the user ID that is set in Amazon Personalize also being referenced in each endpoint within Amazon Pinpoint using the user ID attribute.

State machine for getting Amazon Pinpoint endpoints

The workflow ensures that the segment file has a unique filename so that the segments within Amazon Pinpoint can be identified independently. Once the segment CSV file is uploaded to S3 (7), the segment import workflow creates a new imported segment within Amazon Pinpoint (8).
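The import itself boils down to a Pinpoint import job pointed at the CSV on S3. A hedged boto3 sketch, with placeholder project ID, role ARN, and S3 URL, looks like this; the solution's workflow issues an equivalent call for you.

import boto3

pinpoint_client = boto3.client("pinpoint")

# Placeholder values for illustration only
pinpoint_client.create_import_job(
    ApplicationId="your-pinpoint-project-id",
    ImportJobRequest={
        "Format": "CSV",
        "S3Url": "s3://segment-import-bucket/pinpoint/ITEMSgenresAction_2022-08-01.csv",
        "RoleArn": "arn:aws:iam::123456789012:role/PinpointSegmentImportRole",
        "DefineSegment": True,                       # create a segment from the imported endpoint IDs
        "SegmentName": "ITEMSgenresAction_2022-08-01",
    },
)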

Datasets

The solution uses an artificially generated movies’ dataset called Bingewatch for demonstration purposes. The data is pre-processed to make it usable in the context of Amazon Personalize and Amazon Pinpoint. The pre-processed data consists of the following:

  • Interactions’ metadata created out of the Bingewatch ratings.csv
  • Items’ metadata created out of the Bingewatch movies.csv
  • Users’ metadata created out of the Bingewatch ratings.csv, enriched with invented data about email address and age
  • Amazon Pinpoint endpoint data

Interactions’ dataset

The interaction dataset describes movie ratings from Bingewatch users. Each row describes a single rating by a user identified by a user id.

The EVENT_VALUE describes the actual rating from 1.0 to 5.0 and the EVENT_TYPE specifies that the rating resulted because a user watched this movie at the given TIMESTAMP, as shown in the following example:

USER_ID,ITEM_ID,EVENT_VALUE,EVENT_TYPE,TIMESTAMP
1,1,4.0,Watch,964982703 
2,3,4.0,Watch,964981247
3,6,4.0,Watch,964982224
...

Items’ dataset

The item dataset describes each available movie using a TITLE, RELEASE_YEAR, CREATION_TIMESTAMP and a pipe concatenated list of GENRES, as shown in the following example:

ITEM_ID,TITLE,RELEASE_YEAR,CREATION_TIMESTAMP,GENRES
1,Toy Story,1995,788918400,Adventure|Animation|Children|Comedy|Fantasy
2,Jumanji,1995,788918400,Adventure|Children|Fantasy
3,Grumpier Old Men,1995,788918400,Comedy|Romance
...

Users’ dataset

The users dataset contains all known users identified by a USER_ID. This dataset contains artificially generated metadata that describe the users’ GENDER and AGE, as shown in the following example:

USER_ID,GENDER,E_MAIL,AGE
1,Female,[email protected],21
2,Female,[email protected],35
3,Male,[email protected],37
4,Female,[email protected],47
5,Agender,[email protected],50
...

Amazon Pinpoint endpoints

To map Amazon Pinpoint endpoints to users in Amazon Personalize, it is important to have a consistent user identifier. The mechanism for resolving an Amazon Pinpoint endpoint ID relies on the user ID in Amazon Personalize also being referenced in each endpoint within Amazon Pinpoint using the userId attribute, as shown in the following example:

User.UserId,ChannelType,User.UserAttributes.Gender,Address,User.UserAttributes.Age
1,EMAIL,Female,[email protected],21
2,EMAIL,Female,[email protected],35
3,EMAIL,Male,[email protected],37
4,EMAIL,Female,[email protected],47
5,EMAIL,Agender,[email protected],50
...

Solution implementation

Prerequisites

To deploy this solution, you must have the following:

Note: This solution creates an Amazon Pinpoint project with the name personalize. If you want to deploy this solution on an existing Amazon Pinpoint project, you will need to perform changes in the YAML template.

Deploy the solution

Step 1: Deploy the SAM solution

Clone the GitHub repository to your local machine (how to clone a GitHub repository). Navigate to the GitHub repository location in your local machine using SAM CLI and execute the command below:

sam deploy --stack-name contextual-targeting --guided

Fill the fields below as displayed. Change the AWS Region to the AWS Region of your preference, where Amazon Pinpoint and Amazon Personalize are available. The Parameter Email is used from Amazon Simple Notification Service (SNS) to send you an email notification when the Amazon Personalize job is completed.

Configuring SAM deploy
======================
        Looking for config file [samconfig.toml] :  Not found
        Setting default arguments for 'sam deploy'     =========================================
        Stack Name [sam-app]: contextual-targeting
        AWS Region [us-east-1]: eu-west-1
        Parameter Email []: [email protected]
        Parameter PEVersion [v1.2.0]:
        Parameter SegmentImportPrefix [pinpoint/]:
        #Shows you resources changes to be deployed and require a 'Y' to initiate deploy
        Confirm changes before deploy [y/N]:
        #SAM needs permission to be able to create roles to connect to the resources in your template
        Allow SAM CLI IAM role creation [Y/n]:
        #Preserves the state of previously provisioned resources when an operation fails
        Disable rollback [y/N]:
        Save arguments to configuration file [Y/n]:
        SAM configuration file [samconfig.toml]:
        SAM configuration environment [default]:
        Looking for resources needed for deployment:
        Creating the required resources...
        [...]
        Successfully created/updated stack - contextual-targeting in eu-west-1
======================

Step 2: Import the initial segment to Amazon Pinpoint

We will import some initial and artificially generated endpoints into Amazon Pinpoint.

Execute the command below to your AWS CLI in your local machine.

The command below is compatible with Linux:

SEGMENT_IMPORT_BUCKET=$(aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`SegmentImportBucket`].OutputValue' --output text)
aws s3 sync ./data/pinpoint s3://$SEGMENT_IMPORT_BUCKET/pinpoint

For Windows PowerShell use the command below:

$SEGMENT_IMPORT_BUCKET = (aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`SegmentImportBucket`].OutputValue' --output text)
aws s3 sync ./data/pinpoint s3://$SEGMENT_IMPORT_BUCKET/pinpoint

Step 3: Upload training data and configuration for Amazon Personalize

Now we are ready to train our initial recommendation model. This solution provides you with dummy training data as well as a training and inference configuration, which needs to be uploaded into the Amazon Personalize S3 bucket. Training the model can take between 45 and 60 minutes.

Execute the command below to your AWS CLI in your local machine.

The command below is compatible with Linux:

PERSONALIZE_BUCKET=$(aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`PersonalizeBucketName`].OutputValue' --output text)
aws s3 sync ./data/personalize s3://$PERSONALIZE_BUCKET

For Windows PowerShell use the command below:

$PERSONALIZE_BUCKET = (aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`PersonalizeBucketName`].OutputValue' --output text)
aws s3 sync ./data/personalize s3://$PERSONALIZE_BUCKET

Step 4: Review the inferred segments from Amazon Personalize

Once the training workflow is completed, you should receive an email on the email address you provided when deploying the stack. The email should look like the one in the screenshot below:

SNS notification for Amazon Personalize job

Navigate to the Amazon Pinpoint Console > Your Project > Segments and you should see two imported segments: one named endpoints.csv that contains all imported endpoints from Step 2, and a segment named ITEMSgenresAction_<date>-<time>.csv that contains the IDs of endpoints that are interested in action movies, as inferred by Amazon Personalize.

Amazon Pinpoint segments created by the solution

You can engage with Amazon Pinpoint customer segments via Campaigns and Journeys. For more information on how to create and execute Amazon Pinpoint Campaigns and Journeys visit the workshop Building Customer Experiences with Amazon Pinpoint.

Next steps

Contextual targeting is not bound to a single channel, like in this solution email. You can extend the batch-segmentation-postprocessing workflow to fit your engagement and targeting requirements.

For example, you could implement several branches based on the referenced endpoint channel types and create Amazon Pinpoint customer segments that can be engaged via Push Notifications, SMS, Voice Outbound and In-App.

Clean-up

To delete the solution, run the following command in the AWS CLI.

The command below is compatible with Linux:

SEGMENT_IMPORT_BUCKET=$(aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`SegmentImportBucket`].OutputValue' --output text)
PERSONALIZE_BUCKET=$(aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`PersonalizeBucketName`].OutputValue' --output text)
aws s3 rm s3://$SEGMENT_IMPORT_BUCKET/ --recursive
aws s3 rm s3://$PERSONALIZE_BUCKET/ --recursive
sam delete

For Windows PowerShell use the command below:

$SEGMENT_IMPORT_BUCKET=$(aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`SegmentImportBucket`].OutputValue' --output text)
$PERSONALIZE_BUCKET=$(aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`PersonalizeBucketName`].OutputValue' --output text)
aws s3 rm s3://$SEGMENT_IMPORT_BUCKET/ --recursive
aws s3 rm s3://$PERSONALIZE_BUCKET/ --recursive
sam delete

Amazon Personalize resources like dataset groups, datasets, etc. are not created via AWS CloudFormation; thus, you have to delete them manually. Please follow the instructions in the official AWS documentation on how to clean up the created resources.

About the Authors

Pavlos Ioannou Katidis

Pavlos Ioannou Katidis

Pavlos Ioannou Katidis is an Amazon Pinpoint and Amazon Simple Email Service Specialist Solutions Architect at AWS. He loves to dive deep into his customer’s technical issues and help them design communication solutions. In his spare time, he enjoys playing tennis, watching crime TV series, playing FPS PC games, and coding personal projects.

Christian Bonzelet

Christian Bonzelet

Christian Bonzelet is an AWS Solutions Architect at DFL Digital Sports. He loves those challenges to provide high scalable systems for millions of users. And to collaborate with lots of people to design systems in front of a whiteboard. He uses AWS since 2013 where he built a voting system for a big live TV show in Germany. Since then, he became a big fan on cloud, AWS and domain driven design.

Updated requirements for US toll-free phone numbers

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/updated-requirements-for-us-toll-free-phone-numbers/

Many Amazon Pinpoint customers use toll-free phone numbers to send messages to their customers in the United States. A toll-free number is a 10-digit number that begins with one of the following three-digit codes: 800, 888, 877, 866, 855, 844, or 833. You can use toll-free numbers to send both SMS and voice messages to recipients in the US.

What’s changing

Historically, US toll-free numbers have been available to purchase with no registration required. To prevent spam and other types of abuse, the United States mobile carriers now require new toll-free numbers to be registered as well. The carriers also require all existing toll-free numbers to be registered by September 30, 2022. The carriers will block SMS messages sent from unregistered toll-free numbers after this date.

If you currently use toll-free numbers to send SMS messages, you must complete this registration process for both new and existing toll-free numbers. We’re committed to helping you comply with these changing carrier requirements.

Information you provide as part of this registration process will be provided to the US carriers through our downstream partners. It can take up to 15 business days for your registration to be processed. To help prevent disruptions of service with your toll-free number, you should submit your registration no later than September 12th, 2022.

Requesting new toll-free numbers

Starting today, when you request a United States toll-free number in the Amazon Pinpoint console, you’ll see a new page that you can use to register your use case. Your toll-free number registration must be completed and verified before you can use it to send SMS messages. For more information about completing this registration process, see US toll-free number registration requirements and process in the Amazon Pinpoint User Guide.

Registering existing toll-free numbers

You can also use the Amazon Pinpoint console to register toll-free numbers that you already have in your account. For more information about completing the registration process for existing toll-free numbers, see US toll-free number registration requirements and process in the Amazon Pinpoint User Guide.
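
If you manage many phone numbers and want to see which toll-free numbers already exist in your account before registering them, one option is the Amazon Pinpoint SMS and Voice v2 CLI. This is a minimal sketch, assuming the v2 API is available to you; double-check the field names used below (`NumberType`, `TOLL_FREE`) against the current CLI reference:

# List toll-free numbers in the account, with their IDs and status
aws pinpoint-sms-voice-v2 describe-phone-numbers \
  --query 'PhoneNumbers[?NumberType==`TOLL_FREE`].[PhoneNumber,PhoneNumberId,Status]' \
  --output table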

In closing

Change is a constant factor in the SMS and voice messaging industry. Carriers often introduce new processes in order to protect their customers. The new registration requirements for toll-free numbers are a good example of these kinds of changes. We’ll work with you to help make sure that these changes have minimal impact on your business. If you have any concerns about these changing requirements, open a ticket in the AWS Support Center.

Collaboration Drives Secure Cloud Innovation: Insights From AWS re:Inforce

Post Syndicated from Jesse Mack original https://blog.rapid7.com/2022/08/02/collaboration-drives-secure-cloud-innovation-insights-from-aws-re-inforce/

Collaboration Drives Secure Cloud Innovation: Insights From AWS re:Inforce

This year’s AWS re:Inforce conference brought together a wide range of organizations that are shaping the future of the cloud. Last week in Boston, cloud service providers (CSPs), security vendors, and other leading organizations gathered to discuss how we can go about building cloud environments that are both safe and scalable, driving innovation without sacrificing security.

This array of attendees looks a lot like the cloud landscape itself. Multicloud architectures are now the norm, and organizations have begun to search for ways to bring their lengthening lists of vendors together, so they can gain a more cohesive picture of what’s going on in their environment. It’s a challenge, to be sure — but also an opportunity.

These themes came to the forefront in one of Rapid7’s on-demand booth presentations at AWS re:Inforce, “Speeding Up Your Adoption of CSP Innovation.” In this talk, Chris DeRamus, VP of Technology – Cloud Security at Rapid7, sat down with Merritt Baer — Principal, Office of the CISO at AWS — and Nick Bialek — Lead Cloud Security Engineer at Northwestern Mutual — to discuss how organizations can create processes and partnerships that help them quickly and securely utilize new services that CSPs roll out. Here’s a closer look at what they had to say.

Building a framework

The first step in any security program is drawing a line for what is and isn’t acceptable — and for many organizations, compliance frameworks are a key part of setting that baseline. This holds true for cloud environments, especially in highly regulated industries like finance and healthcare. But as Merritt pointed out, what that framework looks like varies based on the organization.

“It depends on the shop in terms of what they embrace and how that works for them,” she said. Benchmarks like CIS and NIST can be a helpful starting point in moving toward “continuous compliance,” she noted, as you make decisions about your cloud architecture, but the journey doesn’t end there.

For example, Nick said he and his team at Northwestern Mutual use popular compliance benchmarks as a foundation, leveraging curated packs within InsightCloudSec to give them fast access to the most common compliance controls. But from there, they use multiple frameworks to craft their own rigorous internal standards, giving them the best of all worlds.

The key is to be able to leverage detective controls that can find noncompliant resources across your environment so you can take automated actions to remediate — and to be able to do all this from a single vantage point. For Nick’s team, that is InsightCloudSec, which provides them a “single engine to determine compliance with a single set of security controls, which is very powerful,” he said.
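
To make the idea of a detective control concrete outside of any particular product (this is purely illustrative, not how InsightCloudSec implements its checks), a single control might look like a script that flags S3 buckets with no default encryption configured; an automated remediation would then apply a baseline configuration, for example with put-bucket-encryption:

# Purely illustrative detective control: flag S3 buckets without default encryption
for bucket in $(aws s3api list-buckets --query 'Buckets[].Name' --output text); do
  if ! aws s3api get-bucket-encryption --bucket "$bucket" > /dev/null 2>&1; then
    echo "NONCOMPLIANT: $bucket has no default encryption configuration"
  fi
done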

Evaluating new services

Consolidating your view of the cloud environment is critical — but when you want to bring on a new service and quickly evaluate it for risk, Merritt and Nick agreed on the importance of embracing collaboration and multiplicity. When it’s working well, a multicloud approach can allow this evaluation process to happen much more quickly and efficiently than a single organization working on their own.

“We see success when customers are embracing this deliberate multi-account architecture,” Merritt said of her experience working with AWS users.

At Northwestern Mutual, Nick and his team use a group evaluation approach when onboarding a new cloud service. They start the process with the provider, such as AWS, then ask Rapid7 to evaluate the service for risks. Finally, the Northwestern Mutual team does its own assessment, paying close attention to the factors that matter most to them, like disaster recovery and identity and access management.

This model helps Nick and his team realize the benefits of the cloud. They want to consume new services quickly so they can innovate at scale, but their team alone can’t keep up with the work needed to fully vet each new resource for risks. They need a partner that can help them keep pace with the speed and elasticity of the cloud.

“You need someone who can move fast with you,” Nick said.

Automating at scale

Another key component of operating quickly and at scale is automation. “Reducing toil and manual work,” as Nick put it, is essential in the context of fast-moving and complex cloud environments.

“The only way to do anything at scale is to leverage automation,” Merritt insisted. Shifting security left means weaving it into all decisions about IT architecture and application development — and that means innovation and security are no longer separate ideas, but simultaneous parts of the same process. When security needs to keep pace with development, being able to detect configuration drift and remediate it with automated actions can be the difference between success and stalling out.

Plus, who actually likes repetitive, manual tasks anyway?

“You can really put a lot of emphasis on narrowing that gray area of human decision-making down to decisions that are truly novel or high-stakes,” Merritt said.

This leveling-up of decision-making is the real opportunity for security in the age of cloud, Merritt believes. Security teams get to be freed from their former role as “the shop of no” and instead work as innovators to creatively solve next-generation problems. Instead of putting up barriers, security in the age of cloud means laying down new roads — and it’s collaboration across internal teams and with external vendors that makes this new model possible.


[VIDEO] An Inside Look at AWS re:Inforce 2022 From the Rapid7 Team

Post Syndicated from Jesse Mack original https://blog.rapid7.com/2022/07/29/video-an-inside-look-at-aws-re-inforce-2022-from-the-rapid7-team/

[VIDEO] An Inside Look at AWS re:Inforce 2022 From the Rapid7 Team

The summer of conferences rolls on for the cybersecurity and tech community — and for us, the excitement of being able to gather in person after two-plus years still hasn’t worn off. RSA was the perfect kick-off to a renewed season of security together, and we couldn’t have been happier that our second big stop on the journey, AWS re:Inforce, took place right in our own backyard in Boston, Massachusetts — home not only to Rapid7’s headquarters but also to a strong and vibrant community of cloud, security, and other technology pros.

We asked three of our team members who attended the event — Peter Scott, VP Strategic Enablement – Cloud Security; Ryan Blanchard, Product Marketing Manager – InsightCloudSec; and Megan Connolly, Senior Security Solutions Engineer — to answer a few questions and give us their experience from AWS re:Inforce 2022. Here’s what they had to say.

What was your most memorable moment from AWS re:Inforce this year?



[VIDEO] An Inside Look at AWS re:Inforce 2022 From the Rapid7 Team

What was your biggest takeaway from the conference? How will it shape the way you think about cloud and cloud security practices in the months to come?



[VIDEO] An Inside Look at AWS re:Inforce 2022 From the Rapid7 Team

Thanks to everyone who came to say hello and talk cloud with us at AWS re:Inforce. We hope to see the rest of you in just under two weeks at Black Hat 2022 in Las Vegas!


Rapid7 at AWS re:Inforce: 2 Big Announcements

Post Syndicated from Aaron Sawitsky original https://blog.rapid7.com/2022/07/26/rapid7-at-aws-re-inforce-2-big-announcements/

Rapid7 at AWS re:Inforce: 2 Big Announcements

This year’s AWS re:Inforce conference in Boston has been jam-packed with thrilling speakers, deep insights on all things cloud, and some much-needed in-person collaboration from all walks of the technology community. It also coincides with some exciting announcements from AWS — and we’re honored to be a part of two of them. Here’s a look at how Rapid7 is building on our existing partnership with Amazon Web Services to help organizations securely advance in today’s cloud-native business landscape.

InsightIDR awarded AWS Security Competency

For seven years, AWS has issued security competencies to partners with a proven track record of helping customers secure their AWS environments. Today at re:Inforce, AWS relaunched its Security Competency program so that it better aligns with customers’ constantly evolving security challenges. Rapid7 is proud to be included in this relaunch, having obtained a security competency under the new criteria for its InsightIDR solution in the Threat Detection and Response category. This is Rapid7’s second AWS security competency and fourth AWS competency overall.

This designation recognizes that InsightIDR has successfully met AWS’s technical and quality requirements, demonstrating deep software expertise in security information and event management (SIEM) and helping customers achieve their cloud security goals.

InsightIDR integrates with a number of AWS services, including CloudTrail, GuardDuty, S3, VPC Traffic Mirroring, and SQS. InsightIDR’s UEBA feature includes dedicated AWS detections. The Insight Agent can be installed on EC2 instances for continuous monitoring. InsightIDR also features an out-of-the-box honeypot purpose-built for AWS environments. Taken together, these integrations and features give AWS customers the threat detection and response capabilities they need, all in a SaaS solution that can be deployed in a matter of weeks.

Adding another competency to Rapid7’s repertoire reaffirms our commitment to giving organizations the tools they need to innovate securely in a cloud-first world.

Rapid7 named a launch partner for Amazon GuardDuty Malware Protection

Malware Protection is a new malware detection capability that AWS has added to its Amazon GuardDuty service — and we’re honored to join AWS as a launch partner, with two products that support this new GuardDuty functionality.

GuardDuty is AWS’s threat detection service; it monitors AWS environments for suspicious behavior. Malware Protection introduces a new type of detection capability to GuardDuty. When GuardDuty fires an alert related to an Amazon Elastic Compute Cloud (EC2) instance or a container running on EC2, Malware Protection automatically scans the instance in question and detects malware using machine learning and threat intelligence. When trojans, worms, rootkits, crypto miners, or other forms of malware are detected, they appear as new findings in GuardDuty, so security teams can take the right remediation actions.
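
For teams that want to see these findings in their rawest form, they can also be pulled straight from the GuardDuty API. The sketch below is an illustration rather than Rapid7 or AWS guidance; in particular, `Execution:EC2/MaliciousFile` is the EC2 malware finding type documented at the time of writing, so confirm the exact type names before relying on them:

# Look up the GuardDuty detector for the current account and Region
DETECTOR_ID=$(aws guardduty list-detectors --query 'DetectorIds[0]' --output text)
# List findings whose type matches the EC2 malware finding type
aws guardduty list-findings \
  --detector-id "$DETECTOR_ID" \
  --finding-criteria '{"Criterion":{"type":{"Eq":["Execution:EC2/MaliciousFile"]}}}'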

Rapid7 customers can ingest GuardDuty findings (including the new malware detections) into InsightIDR and InsightCloudSec. In InsightIDR, each type of GuardDuty finding can be treated as a notable behavior or as an alert, which automatically triggers a new investigation. This allows security teams to know the instant suspicious activity is detected in their AWS environment and react accordingly. Should an investigation be triggered, teams can use InsightIDR’s native automation capabilities to enrich the data from GuardDuty, quarantine a user, and more. In the case where GuardDuty detects malware, teams can pull additional data from the Insight Agent and even terminate malicious processes. In addition, customers can use InsightIDR’s Dashboards capability to keep an eye on GuardDuty and spot trends in the findings.

InsightCloudSec customers can likewise build automated bots that automatically react to GuardDuty findings. When GuardDuty has detected malware, a customer might configure a bot that terminates the infected instance. Alternatively, a customer might choose to reconfigure the instance’s security group to effectively isolate it while the team investigates. The options are practically endless.
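
As a concrete illustration of the isolation option — a generic sketch of what such a bot could do under the hood, not a description of InsightCloudSec’s internals — swapping a flagged instance onto a pre-created, rule-less quarantine security group looks roughly like this; the instance ID and group ID below are placeholders:

# Hypothetical quarantine action: replace the instance's security groups with an empty "quarantine" group
aws ec2 modify-instance-attribute \
  --instance-id <instance-id> \
  --groups <quarantine-sg-id>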

Rapid7 and AWS continue to deepen partnership to protect your cloud workloads

AWS re:Inforce 2022 provides a welcome opportunity for the community to come together and share insights about managing and securing cloud environments, and we can’t think of better timing to announce these two areas of partnership with AWS. Click here to learn more about what we’re up to at this year’s AWS re:Inforce conference in Boston.
