Tag Archives: Uncategorized

AWS Step Functions support in Visual Studio Code

Post Syndicated from Rob Sutter original https://aws.amazon.com/blogs/compute/aws-step-functions-support-in-visual-studio-code/

The AWS Toolkit for Visual Studio Code has been installed over 115,000 times since launching in July 2019. We are excited to announce toolkit support for AWS Step Functions, enabling you to define, visualize, and create your Step Functions workflows without leaving VS Code.

Version 1.8 of the toolkit provides two new commands in the Command Palette to help you define and visualize your workflows. The toolkit also provides code snippets for seven different Amazon States Language (ASL) state types and additional service integrations to speed up workflow development. Automatic linting detects errors in your state machine as you type, and provides tooltips to help you correct the errors. Finally, the toolkit allows you to create or update Step Functions workflows in your AWS account without leaving VS Code.

Defining a new state machine

To define a new Step Functions state machine, first open the VS Code Command Palette by choosing Command Palette from the View menu. Enter Step Functions to filter the available options and choose AWS: Create a new Step Functions state machine.

Screen capture of the Command Palette in Visual Studio Code with the text ">AWS Step Functions" entered

Creating a new Step Functions state machine in VS Code

A dialog box appears with several options to help you get started quickly. Select Hello world to create a basic example using a series of Pass states.

A screen capture of the Visual Studio Code Command Palette "Select a starter template" dialog with "Hello world" selected

Selecting the “Hello world” starter template

VS Code creates a new Amazon States Language file containing a workflow with examples of the Pass, Choice, Fail, Wait, and Parallel states.

A screen capture of a Visual Studio Code window with a "Hello World" example state machine

The “Hello World” example state machine

Pass states allow you to define your workflow before building the implementation of your logic with Task states. This lets you work with business process owners to ensure you have the workflow right before you start writing code. For more information on the other state types, see State Types in the ASL documentation.
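If you prefer to script this scaffolding instead of using the starter template, the following sketch writes a minimal Pass-state workflow to a .asl.json file. The state names and file name are placeholders, not part of the toolkit's template.

import json

# A workflow skeleton built entirely from Pass states; replace the placeholder
# state names as the design is refined with the business process owners.
definition = {
    "Comment": "Workflow skeleton defined with Pass states",
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {"Type": "Pass", "Next": "ChargeCustomer"},
        "ChargeCustomer": {"Type": "Pass", "Next": "ShipOrder"},
        "ShipOrder": {"Type": "Pass", "End": True},
    },
}

# Saving with the .asl.json extension lets the toolkit apply linting and preview.
with open("order-workflow.asl.json", "w") as f:
    json.dump(definition, f, indent=2)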

Save your new workflow by choosing Save from the File menu. VS Code automatically applies the .asl.json extension.

Visualizing state machines

In addition to helping define workflows, the toolkit also enables you to visualize your workflows without leaving VS Code.

To visualize your new workflow, open the Command Palette and enter Preview state machine to filter the available options. Choose AWS: Preview state machine graph.

A screen capture of the Visual Studio Code Command Palette with the text ">Preview state machine" entered and the option "AWS: Preview state machine graph" highlighted

Previewing the state machine graph in VS Code

The toolkit renders a visualization of your workflow in a new tab to the right of your workflow definition. The visualization updates automatically as the workflow definition changes.

A screen capture of a Visual Studio Code window with two side-by-side tabs, one with a state machine definition and one with a preview graph for the same state machine

A state machine preview graph

Modifying your state machine definition

The toolkit provides code snippets for 12 different ASL states and service integrations. To insert a code snippet, place your cursor within the States object in your workflow and press Ctrl+Space to show the list of available states.

A screen capture of a Visual Studio Code window with a code snippet insertion dialog showing twelve Amazon States Language states

Code snippets are available for twelve ASL states

In this example, insert a newline after the definition of the Pass state, press Ctrl+Space, and choose Map State to insert a code snippet with the required structure for an ASL Map State.
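As a rough sketch of what the inserted snippet provides, a Map state needs a Type, an Iterator containing its own StartAt and States, and either Next or End. The structure below is illustrative only; the field values are placeholders rather than the toolkit's exact snippet.

# Illustrative shape of an ASL Map state, expressed as a Python dict.
map_state = {
    "MapState": {
        "Type": "Map",
        "ItemsPath": "$.items",      # optional: where the input array lives
        "MaxConcurrency": 2,         # optional: cap on parallel iterations
        "Iterator": {                # required: a sub-workflow run once per item
            "StartAt": "ProcessItem",
            "States": {
                "ProcessItem": {"Type": "Pass", "End": True}
            },
        },
        "End": True,
    }
}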

Debugging state machines

The toolkit also includes features to help you debug your Step Functions state machines. Visualization is one feature, as it allows the builder and the product owner to confirm that they have a shared understanding of the relevant process.

Automatic linting is another feature that helps you debug your workflows. For example, when you insert the Map state into your workflow, a number of errors are detected, underlined in red in the editor window, and highlighted in red in the Minimap. The visualization tab also displays an error to inform you that the workflow definition has errors.

A screen capture of a Visual Studio Code window with a tooltip dialog indicating an "Unreachable state" error

A tooltip indicating an “Unreachable state” error

Hovering over an error opens a tooltip with information about the error. In this case, the toolkit is informing you that MapState is unreachable. Correct this error by changing the value of Next in the Pass state above from Hello World Example to MapState. The red underline automatically disappears, indicating the error has been resolved.

To finish reconciling the errors in your workflow, cut all of the following states from Hello World Example? through Hello World and paste into MapState, replacing the existing values of MapState.Iterator.States. The workflow preview updates automatically, indicating that the errors have been resolved. The MapState is indicated by the three dashed lines surrounding most of the workflow.

A Visual Studio Code window displaying two tabs, an updated state machine definition and the automatically-updated preview of the same state machine

Automatically updating the state machine preview after changes

Creating and updating state machines in your AWS account

The toolkit enables you to publish your state machine directly to your AWS account without leaving VS Code. Before publishing a state machine to your account, ensure that you establish credentials for your AWS account for the toolkit.

Creating a state machine in your AWS account

To publish a new state machine to your AWS account, bring up the VS Code Command Palette as before. Enter Publish to filter the available options and choose AWS: Publish state machine to Step Functions.

Screen capture of the Visual Studio Command Palette with the command "AWS: Publish state machine to Step Functions" highlighted

Publishing a state machine to AWS Step Functions

Choose Quick Create from the dialog box to create a new state machine in your AWS account.

Screen Capture from a Visual Studio Code flow to publish a state machine to AWS Step Functions with "Quick Create" highlighted

Publishing a state machine to AWS Step Functions

Select an existing execution role for your state machine to assume. This role must already exist in your AWS account.

For more information on creating execution roles for state machines, please visit Creating IAM Roles for AWS Step Functions.

Screen capture from Visual Studio Code showing a selection execution role dialog with "HelloWorld_IAM_Role" selected

Selecting an IAM execution role for a state machine

Provide a name for the new state machine in your AWS account, for example, Hello-World. The name must be from one to 80 characters, and can use alphanumeric characters, dashes, or underscores.

Screen capture from a Visual Studio Code flow entering "Hello-World" as a state machine name

Naming your state machine

Press the Enter or Return key to confirm the name of your state machine. The Output console opens, and the toolkit displays the result of creating your state machine. The toolkit provides the full Amazon Resource Name (ARN) of your new state machine on completion.
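Behind the scenes, this maps to the Step Functions CreateStateMachine API. As a hedged illustration (not the toolkit's implementation), the equivalent call with the AWS SDK for Python looks roughly like this; the role ARN and file name are placeholders.

import boto3

sfn = boto3.client("stepfunctions")

with open("order-workflow.asl.json") as f:
    definition = f.read()

# The execution role ARN is a placeholder; use a role that already exists in your account.
response = sfn.create_state_machine(
    name="Hello-World",
    definition=definition,
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",
)
print(response["stateMachineArn"])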

Screen capture from Visual Studio Code showing the successful creation of a new state machine in the Output window

Output of creating a new state machine

You can check creation for yourself by visiting the Step Functions page in the AWS Management Console. Choose the newly-created state machine and the Definition tab. The console displays the definition of your state machine along with a preview graph.

Screen capture of the AWS Management Console showing the newly-created state machine

Viewing the new state machine in the AWS Management Console

Updating a state machine in your AWS account

It is common to change workflow definitions as you refine your application. To update your state machine in your AWS account, choose Quick Update instead of Quick Create. Select your existing workflow.

A screen capture of a Visual Studio Code dialog box with a single state machine displayed and highlighted

Selecting an existing state machine to update

The toolkit displays “Successfully updated state machine” and the ARN of your state machine in the Output window on completion.

Summary

In this post, you learn how to use the AWS Toolkit for VS Code to create and update Step Functions state machines in your local development environment. You discover how sample templates, code snippets, and automatic linting can accelerate your development workflows. Finally, you see how to create and update Step Functions workflows in your AWS account without leaving VS Code.

Install the latest release of the toolkit and start building your workflows in VS Code today.

 

Building a Raspberry Pi telepresence robot using serverless: Part 1

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/building-a-raspberry-pi-telepresence-robot-using-serverless-part-1/

A Pimoroni STS-Pi Robot Kit connected to AWS for remote control and viewing.

A telepresence robot allows you to explore remote environments from the comfort of your home through live stream video and remote control. These types of robots can improve the lives of the disabled, elderly, or those that simply cannot be with their coworkers or loved ones in person. Some are used to explore off-world terrain and others for search and rescue.

This guide walks through building a simple telepresence robot using a Pimoroni STS-Pi Raspberry Pi robot kit. A Raspberry Pi is a small, low-cost device that runs Linux. Add-on modules for the Raspberry Pi are called “hats”. You can substitute this kit with any mobile platform that uses two motors wired to an Adafruit Motor Hat or a Pimoroni Explorer Hat.

The sample serverless application uses AWS Lambda and Amazon API Gateway to create a REST API for driving the robot. A Python application running on the robot uses AWS IoT Core to receive drive commands and authenticate with Amazon Kinesis Video Streams with WebRTC using an IoT Credentials Provider. In the next blog I walk through deploying a web frontend to both view the livestream and control the robot via the API.

Prerequisites

You need the following to complete the project:

A Pimoroni STS-Pi robot kit, Explorer Hat, Raspberry Pi, camera, and battery.

Estimated Cost: $120

There are three major parts to this project. First deploy the serverless backend using the AWS Serverless Application Repository. Then assemble the robot and run an installer on the Raspberry Pi. Finally, configure and run the Python application on the robot to confirm it can be driven through the API and is streaming video.

Deploy the serverless application

In this section, use the Serverless Application Repository to deploy the backend resources for the robot. The resources to deploy are defined using the AWS Serverless Application Model (SAM), an open-source framework for building serverless applications using AWS CloudFormation. To understand in more depth how this application is built, look at the SAM template in the GitHub repository.

An architecture diagram of the AWS IoT and Amazon Kinesis Video Stream resources of the deployed application.

The Python application that runs on the robot requires permissions to connect as an IoT Thing and subscribe to messages sent to a specific topic on the AWS IoT Core message broker. The following policy is created in the SAM template:

RobotIoTPolicy:
      Type: "AWS::IoT::Policy"
      Properties:
        PolicyName: !Sub "${RobotName}Policy"
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action:
                - iot:Connect
                - iot:Subscribe
                - iot:Publish
                - iot:Receive
              Resource:
                - !Sub "arn:aws:iot:*:*:topicfilter/${RobotName}/action"
                - !Sub "arn:aws:iot:*:*:topic/${RobotName}/action"
                - !Sub "arn:aws:iot:*:*:topic/${RobotName}/telemetry"
                - !Sub "arn:aws:iot:*:*:client/${RobotName}"

To transmit video, the Python application runs the amazon-kinesis-video-streams-webrtc-sdk-c sample in a subprocess. Instead of using separate credentials to authenticate with Kinesis Video Streams, a Role Alias policy is created so that IoT credentials can be used.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "iot:Connect",
        "iot:AssumeRoleWithCertificate"
      ],
      "Resource": "arn:aws:iot:Region:AccountID:rolealias/robot-camera-streaming-role-alias",
      "Effect": "Allow"
    }
  ]
}

When the above policy is attached to a certificate associated with an IoT Thing, it can assume the following role:

 KVSCertificateBasedIAMRole:
      Type: 'AWS::IAM::Role'
      Properties:
        AssumeRolePolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: 'Allow'
            Principal:
              Service: 'credentials.iot.amazonaws.com'
            Action: 'sts:AssumeRole'
        Policies:
        - PolicyName: !Sub "KVSIAMPolicy-${AWS::StackName}"
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
            - Effect: Allow
              Action:
                - kinesisvideo:ConnectAsMaster
                - kinesisvideo:GetSignalingChannelEndpoint
                - kinesisvideo:CreateSignalingChannel
                - kinesisvideo:GetIceServerConfig
                - kinesisvideo:DescribeSignalingChannel
              Resource: "arn:aws:kinesisvideo:*:*:channel/${credentials-iot:ThingName}/*"

This role grants access to connect and transmit video over WebRTC using the Kinesis Video Streams signaling channel deployed by the serverless application.

An architecture diagram of the API endpoint in the deployed application.

A deployed API Gateway endpoint, when called with valid JSON, invokes a Lambda function that publishes to an IoT message topic, RobotName/action. The Python application on the robot subscribes to this topic and drives the motors based on any received message that maps to a command.
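The sample application ships its own Lambda code; as a hedged sketch of the pattern rather than the sample's exact implementation, a handler that relays the request body to the RobotName/action topic might look like this. The ROBOT_NAME environment variable is an assumption made for this example.

import json
import os

import boto3

iot = boto3.client("iot-data")
TOPIC = f"{os.environ['ROBOT_NAME']}/action"  # for example, "my-robot/action"

def handler(event, context):
    # API Gateway delivers the JSON request body as a string.
    body = json.loads(event.get("body") or "{}")
    iot.publish(topic=TOPIC, qos=0, payload=json.dumps(body))
    return {"statusCode": 200, "body": json.dumps({"published": body})}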

  1. Navigate to the aws-serverless-telepresence-robot application in the Serverless Application Repository.
  2. Choose Deploy.
  3. On the next page, under Application Settings, fill out the parameter, RobotName.
  4. Choose Deploy.
  5. Once complete, choose View CloudFormation Stack.
  6. Select the Outputs tab. Copy the ApiURL and the EndpointURL for use when configuring the robot.

Create and download the AWS IoT device certificate

The robot requires an AWS IoT root CA (fetched by the install script), certificate, and private key to authenticate with AWS IoT Core. The certificate and private key are not created by the serverless application since they can only be downloaded on creation. Create a new certificate and attach the IoT policy and Role Alias policy deployed by the serverless application.
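The console steps below walk through this. If you prefer the AWS SDK, a hedged sketch of the same flow is shown here; the thing and policy names are placeholders that follow the naming pattern described above.

import boto3

iot = boto3.client("iot")

# Create the certificate and key pair; this response is the only chance to save the private key.
cert = iot.create_keys_and_certificate(setAsActive=True)
with open("certificate.pem", "w") as f:
    f.write(cert["certificatePem"])
with open("private.pem.key", "w") as f:
    f.write(cert["keyPair"]["PrivateKey"])

# Attach the two policies created by the serverless application (names are placeholders).
for policy_name in ["my-robotPolicy", "AliasPolicy-aws-serverless-telepresence-robot"]:
    iot.attach_policy(policyName=policy_name, target=cert["certificateArn"])

# Associate the certificate with the robot's IoT Thing.
iot.attach_thing_principal(thingName="my-robot", principal=cert["certificateArn"])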

  1. Navigate to the AWS IoT Core console.
  2. Choose Manage, Things.
  3. Choose the Thing that corresponds with the name of the robot.
  4. Under Security, choose Create certificate.
  5. Choose Activate.
  6. Download the Private Key and Thing Certificate. Save these securely, as this is the only time you can download this certificate.
  7. Choose Attach Policy.
  8. Two policies are created and must be attached. From the list, select
    <RobotName>Policy
    AliasPolicy-<AppName>
  9. Choose Done.

Flash an operating system to an SD card

The Raspberry Pi single-board Linux computer uses an SD card as the main file system storage. Raspbian Buster Lite is an officially supported Debian Linux operating system that must be flashed to an SD card. Balena.io has created an application called balenaEtcher for the sole purpose of accomplishing this safely.

  1. Download the latest version of Raspbian Buster Lite.
  2. Download and install balenaEtcher.
  3. Insert the SD card into your computer and run balenaEtcher.
  4. Choose the Raspbian image. Choose Flash to burn the image to the SD card.
  5. When flashing is complete, balenaEtcher dismounts the SD card.

Configure Wi-Fi and SSH headless

Typically, a keyboard and monitor are used to configure Wi-Fi or to access the command line on a Raspberry Pi. Since it is on a mobile platform, configure the Raspberry Pi to connect to a Wi-Fi network and enable remote access headless by adding configuration files to the SD card.

  1. Re-insert the SD card to your computer so that it shows as volume boot.
  2. Create a file in the boot volume of the SD card named wpa_supplicant.conf.
  3. Paste in the following contents, substituting your Wi-Fi credentials.
    ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
            update_config=1
            country=<Insert country code here>
    
            network={
             ssid="<Name of your WiFi>"
             psk="<Password for your WiFi>"
            }

  4. Create an empty file without a file extension in the boot volume named ssh. At boot, the Raspbian operating system looks for this file and enables remote access if it exists. This can be done from a command line:
    cd path/to/volume/boot
    touch ssh

  5. Safely eject the SD card from your computer.

Assemble the robot

For this section, you can use the Pimoroni STS-Pi robot kit with a Pimoroni Explorer Hat, along with a Raspberry Pi Model 3 B+ or newer, and a camera module. Alternatively, you can use any two motor robot platform that uses the Explorer Hat or Adafruit Motor Hat.

  1. Follow the instructions in this video to assemble the Pimoroni STS-Pi robot kit.
  2. Place the SD card in the Raspberry Pi.
  3. Since the installation may take some time, power the Raspberry Pi using a USB 5V power supply connected to a wall plug rather than a battery.

Connect remotely using SSH

Use your computer to gain remote command line access to the Raspberry Pi using SSH. Both devices must be on the same network.

  1. Open a terminal application with SSH installed. SSH is built into Linux and macOS; to enable SSH on Windows, follow these instructions.
  2. Enter the following to begin a secure shell session as user pi on the default hostname raspberrypi, which mDNS resolves to the IP address of the device:
    ssh pi@raspberrypi.local
  3. If prompted to add an SSH key to the list of known hosts, type yes.
  4. When prompted for a password, type raspberry. This is the default password and can be changed using the raspi-config utility.
  5. Upon successful login, you now have shell access to your Raspberry Pi device.

Enable the camera using raspi-config

A built-in utility, raspi-config, provides an easy-to-use interface for configuring Raspbian. You must enable the camera module, along with I2C, a serial bus used for communicating with the motor driver.

  1. In an open SSH session, type the following to open the raspi-config utility:
    sudo raspi-config

  2. Using the arrows, choose Interfacing Options.
  3. Choose Camera. When prompted, choose Yes to enable the camera module.
  4. Repeat the process to enable the I2C interface.
  5. Select Finish and reboot.

Run the install script

An installer script is provided for building and installing the Kinesis Video Streams WebRTC producer, the AWSIoTPythonSDK, and the Pimoroni Explorer Hat Python libraries. Upon completion, it creates a directory with the following structure:

├── /home/pi/Projects/robot
│  └── main.py // The main Python application
│  └── config.json // Parameters used by main.py
│  └── kvsWebrtcClientMasterGstSample //Kinesis Video Stream producer
│  └── /certs
│     └── cacert.pem // Amazon SFSRootCAG2 Certificate Authority
│     └── certificate.pem // AWS IoT certificate placeholder
│     └── private.pem.key // AWS IoT private key placeholder
  1. Open an SSH session on the Raspberry Pi.
  2. (Optional) If using the Adafruit Motor Hat, run this command; otherwise, the script defaults to the Pimoroni Explorer Hat.
    export MOTOR_DRIVER=adafruit  

  3. Run the following command to fetch and execute the installer script.
    wget -O - https://raw.githubusercontent.com/aws-samples/aws-serverless-telepresence-robot/master/scripts/install.sh | bash

  4. While the script installs, proceed to the next section.

Configure the code

The Python application on the robot subscribes to AWS IoT Core to receive messages. It requires the certificate and private key created for the IoT thing to authenticate. These files must be copied to the directory where the Python application is stored on the Raspberry Pi.

It also requires that the IoT credentials endpoint be added to the config.json file so the application can assume the permissions necessary to transmit video to Amazon Kinesis Video Streams.
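As a hedged illustration of how the IoT Credentials Provider is used (the sample application handles this for you), the sketch below exchanges the device certificate for temporary AWS credentials over mutual TLS. The file paths are placeholders, the requests library is assumed to be available on the device, and IOT_GET_CREDENTIAL_ENDPOINT is assumed to hold the credentials provider hostname.

import json

import requests  # assumed to be installed on the device

with open("/home/pi/Projects/robot/config.json") as f:
    cfg = json.load(f)

url = (
    f"https://{cfg['IOT_GET_CREDENTIAL_ENDPOINT']}"
    f"/role-aliases/{cfg['ROLE_ALIAS']}/credentials"
)
resp = requests.get(
    url,
    headers={"x-amzn-iot-thingname": cfg["IOT_THINGNAME"]},
    cert=("certs/certificate.pem", "certs/private.pem.key"),  # mutual TLS with the device certificate
    verify="certs/cacert.pem",
)
# The response contains accessKeyId, secretAccessKey, sessionToken, and expiration.
creds = resp.json()["credentials"]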

  1. Open an SSH session on the Raspberry Pi.
  2. Open the certificate.pem file with the nano text editor and paste in the contents of the certificate downloaded earlier.
    cd /home/pi/Projects/robot/certs
    nano certificate.pem

  3. Press CTRL+X and then Y to save the file.
  4. Repeat the process with the private.pem.key file.
    nano private.pem.key

  5. Open the config.json file.
    cd /home/pi/Projects/robot
    nano config.json

  6. Provide the following information:
    IOT_THINGNAME: The name of your robot, as set in the serverless application.
    IOT_CORE_ENDPOINT: This is found under the Settings page in the AWS IoT Core console.
    IOT_GET_CREDENTIAL_ENDPOINT: Provided by the serverless application.
    ROLE_ALIAS: This is already set to match the Role Alias deployed by the serverless application.
    AWS_DEFAULT_REGION: Corresponds to the Region the application is deployed in.
  7. Save the file using CTRL+X and Y.
  8. To start the robot, run the command:
    python3 main.py

  9. To stop the script, press CTRL+C.

View the Kinesis video stream

The following steps create a WebRTC connection with the robot to view the live stream.

  1. Navigate to the Amazon Kinesis Video Streams console.
  2. Choose Signaling channels from the left menu.
  3. Choose the channel that corresponds with the name of your robot.
  4. Open the Media Playback card.
  5. After a moment, a WebRTC peer to peer connection is negotiated and live video is displayed.
    An animated gif demonstrating a live video stream from the robot.

Sending drive commands

The serverless backend includes an Amazon API Gateway REST endpoint that publishes JSON messages to the Python script on the robot.

The robot expects a message:

{ "action": "<direction>" }

Where direction can be "forward", "backwards", "left", or "right".
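As a hedged sketch of the robot side (not the sample's exact main.py), a subscriber that maps these actions to Explorer Hat motor calls might look like the following. The endpoint, client ID, topic, and file paths are placeholders, and the left/right motor mapping depends on how the motors are wired.

import json
import time

import explorerhat
from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

def on_action(client, userdata, message):
    action = json.loads(message.payload).get("action")
    if action == "forward":
        explorerhat.motor.one.forward(100)
        explorerhat.motor.two.forward(100)
    elif action == "backwards":
        explorerhat.motor.one.backward(100)
        explorerhat.motor.two.backward(100)
    elif action == "left":
        explorerhat.motor.one.forward(100)
        explorerhat.motor.two.backward(100)
    elif action == "right":
        explorerhat.motor.one.backward(100)
        explorerhat.motor.two.forward(100)
    else:
        explorerhat.motor.one.stop()
        explorerhat.motor.two.stop()

mqtt = AWSIoTMQTTClient("my-robot")  # client ID must be allowed by the IoT policy
mqtt.configureEndpoint("xxxxxxxxxx-ats.iot.us-east-1.amazonaws.com", 8883)  # placeholder endpoint
mqtt.configureCredentials("certs/cacert.pem", "certs/private.pem.key", "certs/certificate.pem")
mqtt.connect()
mqtt.subscribe("my-robot/action", 1, on_action)

while True:
    time.sleep(1)  # keep the process alive so messages continue to arrive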

  1. While the Python script is running on the robot, open another terminal window.
  2. Run this command to tell the robot to drive forward. Replace <API-URL> using the endpoint listed under Outputs in the CloudFormation stack for the serverless application.
    curl -d '{"action":"forward"}' -H "Content-Type: application/json" -X POST https://<API-URL>/publish

    An animated gif demonstrating the robot being driven from a REST request.

Conclusion

In this post, I show how to build and program a telepresence robot with remote control and a live video feed in the cloud. I did this by installing a Python application on a Raspberry Pi robot and deploying a serverless application.

The Python application uses AWS IoT credentials to receive remote commands from the cloud and transmit live video using Kinesis Video Streams with WebRTC. The serverless application deploys a REST endpoint using API Gateway and a Lambda function. Any application that can connect to the endpoint can drive the robot.

In part two, I build on this project by deploying a web interface for the robot using AWS Amplify.

A preview of the web frontend built in the next blog.

 

 

Retrying Undelivered Voice Messages with Amazon Pinpoint

Post Syndicated from Heidi Gloudemans original https://aws.amazon.com/blogs/messaging-and-targeting/retrying-undelivered-voice-messages-with-amazon-pinpoint/

Note: This post was written by Murat Balkan, an AWS Senior Solutions Architect.


Many of our customers use voice notifications to deliver mission-critical and time-sensitive messages to their users. Customers often configure their systems to retry delivery when these voice messages aren’t delivered the first time around. Other customers set up their systems to fall back to another channel in this situation.

This blog post shows you how to retry the delivery of a voice message if the initial attempt wasn’t successful.

Architecture

By completing the steps in this post, you can create a system that uses the architecture illustrated in the following image:

 

First, a Lambda function calls the SendMessage operation in the Amazon Pinpoint API. The SendMessage operation then initiates a phone call to the recipient and generates a unique message ID, which is returned to the Lambda function. The Lambda function then adds this message ID to a DynamoDB table.
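The post refers to this as the SendMessage operation; the hedged sketch below uses the Pinpoint SMS and Voice API's SendVoiceMessage operation through boto3 to show the same flow. The table name, attribute names, and environment variables are placeholders, not the sample's exact code.

import os

import boto3

pinpoint_voice = boto3.client("pinpoint-sms-voice")
table = boto3.resource("dynamodb").Table("call_attempts")

def handler(event, context):
    # Place the call and capture the unique message ID that Amazon Pinpoint returns.
    response = pinpoint_voice.send_voice_message(
        OriginationPhoneNumber=os.environ["LONG_CODE"],
        DestinationPhoneNumber=event["PhoneNumber"],
        ConfigurationSetName=os.environ["CONFIG_SET_NAME"],
        Content={
            "SSMLMessage": {
                "LanguageCode": "en-US",
                "Text": event["Message"],
                "VoiceId": "Joanna",
            }
        },
    )

    # Track the attempt so the retry function can look it up later.
    table.put_item(
        Item={
            "message_id": response["MessageId"],
            "retry_count": event.get("RetryCount", 0),
            "phone_number": event["PhoneNumber"],
            "message": event["Message"],
        }
    )
    return response["MessageId"]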

While Amazon Pinpoint attempts to deliver a message to a recipient, it emits several event records. These records indicate when the call is initiated, when the phone is ringing, when the call is answered, and so forth. Amazon Pinpoint publishes these events to an Amazon SNS topic. In this example, we’re only interested in the BUSY, FAILED, and NO_ANSWER event types, so we add some filtering criteria.

An Amazon SQS queue then subscribes to the Amazon SNS topic and monitors the incoming events. The queue's Delivery Delay attribute provides a back-off retry mechanism for failed voice messages.

When the Delivery Delay timer is reached, another Lambda function polls the queue and extracts the MessageId attribute from the polled message. It uses this attribute to locate the DynamoDB record for the original call. This record also tells us how many times Amazon Pinpoint has attempted to deliver the message.

The Lambda function compares the number of retries to a MAX_RETRY environment variable to determine whether it should attempt to send the message again.
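A hedged sketch of that retry logic follows. It assumes the SQS record body wraps the SNS notification and that the Pinpoint event exposes the original message ID in a field named message_id; the table, attribute, and function names are placeholders rather than the sample's exact code.

import json
import os

import boto3

table = boto3.resource("dynamodb").Table("call_attempts")
lambda_client = boto3.client("lambda")
MAX_RETRY = int(os.environ.get("MAX_RETRY", "2"))

def handler(event, context):
    for record in event["Records"]:
        sns_envelope = json.loads(record["body"])             # SQS body carries the SNS notification
        pinpoint_event = json.loads(sns_envelope["Message"])  # which carries the Pinpoint event
        message_id = pinpoint_event["message_id"]             # assumed field name

        item = table.get_item(Key={"message_id": message_id}).get("Item")
        if item is None or int(item["retry_count"]) >= MAX_RETRY:
            continue  # stop once the retry budget is spent

        # Re-invoke the CallGenerator function with an incremented retry count.
        lambda_client.invoke(
            FunctionName=os.environ["CALL_GENERATOR_FUNCTION"],
            InvocationType="Event",
            Payload=json.dumps({
                "Message": item["message"],
                "PhoneNumber": item["phone_number"],
                "RetryCount": int(item["retry_count"]) + 1,
            }),
        )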

Prerequisites

To start sending transactional voice messages, create an Amazon Pinpoint project and then request a phone number. Next, clone this GitHub repository to your local machine.

After you add a long code to your account, use AWS SAM to deploy the remaining parts of this serverless architecture. You provide the long code as an input parameter to this template.

The AWS SAM template creates the following resources:

  • A Lambda function (CallGenerator) that initiates a voice call using the Amazon Pinpoint API.
  • An Amazon SNS topic that collects state change events from Amazon Pinpoint.
  • An Amazon SQS queue that queues the messages.
  • A Lambda function (RetryCallGenerator) that polls the Amazon SQS queue and re-initiates the previously failed call attempt by calling the CallGenerator function.
  • A DynamoDB table that contains information about previous attempts to deliver the call.

The template also defines a custom Lambda resource, CustomResource, which creates a configuration set in Amazon Pinpoint. This configuration set specifies the events to send to Amazon SNS. A Lambda environment variable, CONFIG_SET_NAME, contains the name of the configuration set.

This architecture consists of two Lambda functions, which are represented as two different apps in the AWS SAM template. These functions are named CallGenerator and RetryCallGenerator. The CallGenerator function initiates the voice message with Amazon Pinpoint. The SendMessage API in Amazon Pinpoint returns a MessageId. The architecture uses this ID as a key to connect messages to the various events that they generate. The CallGenerator function also retains this ID in a DynamoDB table called call_attempts. The RetryCallGenerator function looks up the MessageId in the call_attempts table. If necessary, the function tries to send the message again by invoking the CallGenerator function.

 

Deploying and Testing

Start by downloading the template from the GitHub repository. AWS SAM requires you to specify an Amazon Simple Storage Service (Amazon S3) bucket to hold the deployment artifacts. If you haven’t already created a bucket for this purpose, create one now. The bucket should be accessible to an AWS Identity and Access Management (IAM) user.

At the command line, enter the following command to package the application:

sam package --template template.yaml --output-template-file output_template.yaml --s3-bucket BUCKET_NAME_HERE

In the preceding command, replace BUCKET_NAME_HERE with the name of the Amazon S3 bucket that should hold the deployment artifacts.

AWS SAM packages the application and copies it into the Amazon S3 bucket. This AWS SAM template requires you to specify three parameters: longCode, the phone number that’s used to make the outbound calls; maxRetry, which is used to set the MAX_RETRY environment variable for the RetryCallGenerator application; and retryDelaySeconds, which sets the delivery delay time for the Amazon SQS queue.

When the AWS SAM package command finishes running, enter the following command to deploy the package:

sam deploy --template-file output_template.yaml --stack-name blogstack --capabilities CAPABILITY_IAM --parameter-overrides maxRetry=2 longCode=LONG_CODE retryDelaySeconds=60

In the preceding command, replace LONG_CODE with the dedicated phone number that you acquired earlier.

When you run this command, AWS SAM shows the progress of the deployment. When the deployment finishes, you can test it by sending a sample event to the CallGenerator Lambda function. Use the following sample event to test the Lambda function:

{
"Message": "<speak>Thank you for visiting the AWS <emphasis>Messaging and Targeting Blog</emphasis>.</speak>",
"PhoneNumber": "DESTINATION_PHONE_NUMBER",
"RetryCount": 0
}

In the preceding event, replace DESTINATION_PHONE_NUMBER with the phone number to which you want to send a test message.

Important: Telecommunication providers take several steps to limit unsolicited voice messages. For example, providers in the United States only deliver a certain number of automated voice messages to each recipient per day. For this reason, you can only use Amazon Pinpoint to send 10 calls per day to each recipient. Keep this limit in mind during the testing process.

Conclusion

This architecture shows how Amazon Pinpoint can deliver state change events to Amazon SNS and how a serverless application can consume those events. You can adapt this architecture to other use cases, such as call auditing and advanced call analytics.

Court of Justice of the EU: Tax on advertising

Post Syndicated from nellyo original https://nellyo.wordpress.com/2020/03/17/google-12/

On 3 March 2020, the Court of Justice of the EU delivered its judgment in Case C‑482/18, a request for a preliminary ruling under Article 267 TFEU in proceedings between Google Ireland Limited and the Nemzeti Adó- és Vámhivatal Kiemelt Adó- és Vámigazgatósága (tax authority, Hungary).

In short, the case concerns the obligations Google owes to a national tax authority when that Member State levies a tax on advertising.

The request was made in proceedings between Google Ireland Limited, a company established in Ireland, and the Nemzeti Adó- és Vámhivatal Kiemelt Adó- és Vámigazgatósága (tax authority, Hungary) concerning decisions by which that authority imposed a series of fines on the company for infringing the registration obligation that Hungarian law places on persons pursuing an activity subject to the tax on advertising.

The referring court asks whether Articles 7/B and 7/D of the Law on the tax on advertising are compatible with Article 56 TFEU and the principle of non-discrimination:

1)      Must Articles 18 and 56 TFEU and the prohibition of discrimination be interpreted as precluding tax legislation of a Member State whose system of penalties provides, for failure to comply with the obligation to register for the purposes of the tax on advertising, for a fine which, for companies established outside Hungary, may total up to 2,000 times the amount of the fine imposed on companies established in Hungary?

2)      Can the penalty described in the previous question, which is manifestly high and punitive in nature, be regarded as liable to deter service providers not established in Hungary from providing services in that country?

3)      Must Article 56 TFEU and the prohibition of discrimination be interpreted as precluding legislation under which, for undertakings established in Hungary, the registration obligation is satisfied automatically, without an express declaration, simply through the assignment of a Hungarian tax identification number upon entry in the commercial register, regardless of whether the undertaking publishes advertisements, whereas for undertakings not established in Hungary which publish advertisements in that country the obligation is not satisfied automatically but requires an express declaration, with a specific penalty available in the event of non-compliance?

4)      If the answer to the first question is in the affirmative, must Article 56 TFEU and the prohibition of discrimination be interpreted as precluding a penalty such as that at issue in the main proceedings, imposed for failure to comply with the obligation to register for the purposes of the tax on advertising, in so far as the legislation described is contrary to that provision?

5)      Must Article 56 TFEU and the prohibition of discrimination be interpreted as precluding a provision under which, for undertakings established abroad, the decision imposing a fine is final and enforceable from the moment it is notified and may be challenged only in judicial proceedings in which the court may not hold a hearing and only documentary evidence is admissible, whereas fines imposed on undertakings established in Hungary may be challenged through administrative proceedings and, moreover, the judicial proceedings are not restricted in any way?

The Court ruled:

1)      Article 56 TFEU must be interpreted as not precluding legislation of a Member State which imposes an obligation to register for the purposes of the tax on advertising on providers of advertising services established in another Member State, while providers of such services established in the Member State of taxation are exempt from that obligation on the ground that they are already subject to declaration or registration obligations in respect of any other tax applicable in the territory of that Member State.

2)      Article 56 TFEU must be interpreted as precluding legislation of a Member State under which providers of services established in another Member State that have not complied with an obligation to register for the purposes of the tax on advertising are subjected, within a few days, to a series of fines whose amount, starting from the second fine, is tripled relative to the previous fine each time the failure to comply is established again, reaching a cumulative total of several million euros, without the competent authority, before adopting the decision definitively fixing the cumulative amount of those fines, giving those providers the time necessary to comply with their obligations, giving them the opportunity to submit observations, or itself examining the seriousness of the infringement, whereas the fine imposed on a provider established in the Member State of taxation that has failed to comply with a similar registration or declaration obligation, in breach of the general provisions of national tax legislation, is significantly lower and, in the event of continued non-compliance, is not increased either in the same proportions or necessarily within such short periods.

 

AWS Online Tech Talks for March 2020

Post Syndicated from Jimmy Cooper original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-for-march-2020/

Register for March 2020 webinars

Join us for live, online presentations led by AWS solutions architects and engineers. AWS Online Tech Talks cover a range of topics and expertise levels, and feature technical deep dives, demonstrations, customer examples, and live Q&A with AWS experts.

Note – All sessions are free and in Pacific Time. Can’t join us live? Access webinar recordings and slides on our On-Demand Portal.

Tech talks this month, by category, are:

March 23, 2020 | 9:00 AM – 10:00 AM PT – Build and Deploy Full-Stack Containerized Applications with the New Amazon ECS CLI – ​Learn to build, deploy and operate your containerized apps, all from the comfort of your terminal, with the new ECS CLI V2.​​​

March 23, 2020 | 11:00 AM – 12:00 PM PT – Understanding High Availability and Disaster Recovery Features for Amazon RDS for Oracle – ​Learn when and how to use the High Availability (HA) and Disaster Recovery (DR) features to meet recovery point objective (RPO) and recovery time objective (RTO) requirements for mission critical Oracle databases running on Amazon RDS.​

March 23, 2020 | 1:00 PM – 2:00 PM PT – Intro to AWS IAM Access Analyzer – ​Learn how you can use IAM Access Analyzer to identify resource policies that don’t comply with your organization’s security requirements.​

March 24, 2020 | 9:00 AM – 10:00 AM PT – Introducing HTTP APIs: A Better, Cheaper, Faster Way to Build APIs – ​Building APIs? Save up to 70% and reduce latency by 60% using HTTP APIs, now generally available from Amazon API Gateway.​

March 24, 2020 | 11:00 AM – 12:00 PM PT – Move to Managed Windows File Storage – ​Move to Managed: Learn about simplifying your Windows file shares with Amazon FSx for Windows File Server​.

March 24, 2020 | 1:00 PM – 2:00 PM PT – Advanced Architectures with AWS Transit Gateway – ​Learn how to use AWS Transit Gateway with centralized security appliances to improve network security and manageability​.

March 25, 2020 | 9:00 AM – 10:00 AM PT – How a U.S. Government Agency Accelerated Its Migration to GovCloud – ​Learn how a government agency mitigated risk and accelerated its migration of 200 VMs to GovCloud using CloudEndure Migration.​​

March 25, 2020 | 11:00 AM – 12:00 PM PT – Machine Learning-Powered Contact Center Analytics with Contact Lens for Amazon Connect – Every engagement your customer has with your contact center can provide your team with powerful insights. Register for this Tech Talk to learn how Contact Lens for Amazon Connect uses ML-powered analytics to help you deeply understand your customer conversations.

March 25, 2020 | 1:00 PM – 2:00 PM PT – Monitoring Serverless Applications Using CloudWatch ServiceLens – ​In this tech talk, we will provide an introduction to CloudWatch ServiceLens and how it enables you to visualize and analyze the health, performance, and availability of your microservice based applications in a single place.

March 26, 2020 | 9:00 AM – 10:00 AM PT – End User Computing on AWS: Secure Solutions for the Agile Enterprise – ​Learn how tens of thousands of companies are accelerating their cloud migration and increasing agility with Amazon End-User Computing services.

March 26, 2020 | 11:00 AM – 12:00 PM PT – Bring the Power of Machine Learning to Your Fight Against Online Fraud – ​Learn how machine learning can help you automatically start detecting online fraud faster with Amazon Fraud Detector.​

March 26, 2020 | 1:00 PM – 2:00 PM PT – Bring-Your-Own-License (BYOL) on AWS Made Easier – ​Learn how to bring your own licenses (BYOL) to AWS for optimizing Total Cost of Ownership (TCO) with a new, simplified BYOL experience.​​

March 27, 2020 | 9:00 AM – 10:00 AM PT – Intro to Databases for Games: Hands-On with Amazon DynamoDB and Amazon Aurora – Making a game and need to integrate a database? We’ll show you how!​

March 27, 2020 | 11:00 AM – 12:00 PM PT – Accelerate Your Analytics by Migrating to a Cloud Data Warehouse – ​Learn how to accelerate your analytics by migrating to a cloud data warehouse​​​.​

March 30, 2020 | 9:00 AM – 10:00 AM PT – Optimizing Lambda Performance for Your Serverless Applications – ​Learn how to optimize the performance of your Lambda functions through new features like improved VPC networking and Provisioned Concurrency.​​​

March 30, 2020 | 11:00 AM – 12:00 PM PT – Protecting Your Web Application Using AWS Managed Rules for AWS WAF – ​Learn about the new AWS WAF experience and how you can leverage AWS Managed Rules to protect your web application.​

March 30, 2020 | 1:00 PM – 2:00 PM PT – Migrating ASP.NET applications to AWS with Windows Web Application Migration Assistant – ​Join this tech talk to learn how to migrate ASP.NET web applications into a fully managed AWS Elastic Beanstalk environment.​

March 31, 2020 | 9:00 AM – 10:00 AM PT – Forrester Explains the Total Economic Impact™ of Working with AWS Managed Services – ​Learn about the 243% ROI your organization could expect from an investment in AWS Managed Services from the Forrester Total Economic Impact™ study.​

March 31, 2020 | 11:00 AM – 12:00 PM PT – Getting Started with AWS IoT Greengrass Solution Accelerators for Edge Computing – ​Learn how you can use AWS IoT Greengrass Solution Accelerators to quickly build solutions for industrial, consumer, and commercial use cases.

March 31, 2020 | 1:00 PM – 2:00 PM PT – A Customer’s Perspective on Building an Event-Triggered System-of-Record Application with Amazon QLDB – ​UK’s Dept. of Vehicle and Licensing shares its perspective on building an Event-triggered System of Record Application with Amazon QLDB​.

April 1, 2020 | 9:00 AM – 10:00 AM PT – Containers for Game Development – ​Learn the basic architectures for utilizing containers and how to operate your environment, helping you seamlessly scale and save cost while building your game backend on AWS.​

April 1, 2020 | 11:00 AM – 12:00 PM PT – Friends Don’t Let Friends Manage Data Streaming Infrastructure – ​Eliminate the headaches of managing your streaming data infrastructure with fully managed AWS services for Apache Kafka and Apache Flink.​

April 2, 2020 | 9:00 AM – 10:00 AM PT – How to Train and Tune Your Models with Amazon SageMaker – ​Learn how Amazon SageMaker provides everything you need to tune and debug models and execute training experiments​.

April 2, 2020 | 11:00 AM – 12:00 PM PT – Enterprise Transformation: Migration and Modernization Demystified – ​Learn how to lead your business transformation while executing cloud migrations and modernizations​.

April 2, 2020 | 1:00 PM – 2:00 PM PT – How to Build Scalable Web Based Applications for Less with Amazon EC2 Spot Instances – ​Learn how you can scale and optimize web based applications running on Amazon EC2 for cost and performance, all while handling peak demand.​

April 3, 2020 | 9:00 AM – 10:00 AM PT – Migrating File Data to AWS: Demo & Technical Guidance – ​In this tech talk, we’ll touch upon AWS storage destination services in brief, and demo how you can use AWS DataSync and AWS Storage Gateway to easily and securely move your file data into AWS for file services, processing, analytics, machine learning, and archiving, as well as providing on-premises access where needed.​

April 3, 2020 | 11:00 AM – 12:00 PM PT – Building Real-Time Audio and Video Calling in Your Applications with the Amazon Chime SDK – Learn how to quickly build real-time communication capabilities in your own applications for engaging customer experiences.

COVID-19: a blog with scientific information in Bulgarian

Post Syndicated from nellyo original https://nellyo.wordpress.com/2020/03/16/covid-19-info/

Amid disinformation and a shortage of data: http://covidresource-bg.org/

The covidresource project began as a spontaneous idea to popularize the most important scientific findings related to the treatment of COVID-19 infection and the complications that arise from it. The scientific community is joining forces with clinicians in the affected regions to reach the most effective solutions as quickly as possible.

All of the major scientific platforms are keeping resources related to the spread of the coronavirus, its containment, effective prevention measures, and the treatment of patients in open access.

We recognize that the full capacity of the Bulgarian health system is already committed to dealing with the consequences of the spread of the disease in the country. To help clinicians with the decisions they have to make every day, both when treating patients infected with COVID-19 and when working with uninfected patients from high-risk groups, we will translate the most important of the daily updated publications.

The project uses only scientific sources of proven reliability.

Building faster, lower cost, better APIs – HTTP APIs now generally available

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/building-better-apis-http-apis-now-generally-available/

In July 2015, AWS announced Amazon API Gateway. This enabled developers to build secure, scalable APIs quickly in front of a variety of different types of architectures. Since then, the API Gateway team continues to build new features and services for customers.

Figure 1: API Gateway feature highlights timeline

In early 2019, the team evaluated the current services and made plans for the next chapter of API Gateway. They prototyped new languages and technologies, applied lessons learned from building the REST and WebSocket APIs, and looked closely at customer feedback. The result is HTTP APIs for Amazon API Gateway, a service built from the ground up to be faster, lower cost, and simpler to use. In short, HTTP APIs offers a better solution for building APIs. If you are building an API and HTTP APIs fit your requirements, this is the place to start.

Faster

For the majority of use cases, HTTP APIs offers up to 60% reduction in latency. Developers strive to build applications with minimal latency and maximum functionality. They understand that each service involved in the application process can introduce latency.

All services add latency

Figure 2: All services add latency

With that in mind, HTTP APIs is built to reduce the latency overhead of the API Gateway service. Measured across both the request and the response, 99% of all requests (p99) incur less than 10 ms of additional latency from HTTP APIs.

Lower cost

At Amazon, one of our core leadership principles is frugality. We believe in doing things in a cost-effective manner and passing those savings to our customers. With the availability of new technology, and the expertise of running API Gateway for almost five years, we built HTTP APIs to run more efficiently.

REST/HTTP APIs price comparison

Figure 3: REST/HTTP APIs price comparison

Using the pricing for us-east-1, figure 3 shows a cost comparison for 100 million, 500 million, and 1 billion requests a month. Overall, HTTP APIs is at least 71% lower cost compared to API Gateway REST APIs.

Simpler

On the user interface for HTTP API, the API Gateway team has made the entire experience more intuitive and easier to use.

CORS configuration

Figure 4: CORS configuration

One example is the configuration of cross-origin resource sharing (CORS). CORS provides security by controlling cross-domain access to servers and can be difficult to understand and configure. HTTP APIs enables a developer to configure CORS settings quickly using a simple, easy-to-understand UI. This consistency throughout the UI creates a powerful, yet easy, approach to building APIs.
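The same settings can be applied programmatically. As a hedged sketch using the API Gateway v2 API (quick create), with a placeholder Lambda target and allowed origin:

import boto3

apigw = boto3.client("apigatewayv2")

# Quick-create an HTTP API in front of a Lambda function, with CORS configured up front.
api = apigw.create_api(
    Name="my-http-api",
    ProtocolType="HTTP",
    Target="arn:aws:lambda:us-east-1:123456789012:function:my-backend",  # placeholder
    CorsConfiguration={
        "AllowOrigins": ["https://www.example.com"],
        "AllowMethods": ["GET", "POST", "OPTIONS"],
        "AllowHeaders": ["content-type"],
        "MaxAge": 300,
    },
)
print(api["ApiEndpoint"])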

New features

HTTP APIs beta was announced at AWS re:Invent 2019 with powerful features like JWT authorizers, auto-deploying stages, and simplified route integrations. Today, HTTP APIs is generally available (GA) with more features to help developers build APIs faster, lower cost and better.

Private integrations

HTTP APIs now offers developers the ability to integrate with resources secured in an Amazon VPC. When developing with technologies like containers via Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service (EKS), the underlying Amazon EC2 clusters must reside inside a VPC. While it is possible to make these services available through Elastic Load Balancing, developers can also take advantage of HTTP APIs to front their applications.

VPC link configuration

Figure 5: VPC link configuration

To create a private integration, you need a VPC link. VPC links take advantage of AWS Hyperplane, the Network Function Virtualization platform used for Network Load Balancer and NAT Gateway. With this technology, multiple HTTP APIs can use a single VPC link to a VPC. Likewise, multiple REST APIs can share a single REST API VPC link.

Private integration configuration

Figure 6: Private integration configuration

Once a VPC link exists, you can configure an HTTP APIs private integration using a Network Load Balancer (NLB), an Application Load Balancer (ALB), or the resource discovery service, AWS Cloud Map.
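A hedged sketch of wiring this up with the SDK, assuming an existing Network Load Balancer listener; the API ID, ARNs, subnet IDs, and security group IDs are placeholders.

import boto3

apigw = boto3.client("apigatewayv2")

# Create a VPC link that HTTP APIs in this account can share.
vpc_link = apigw.create_vpc_link(
    Name="my-vpc-link",
    SubnetIds=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)

# Point a private integration at an NLB listener through that VPC link.
apigw.create_integration(
    ApiId="a1b2c3d4",  # placeholder HTTP API ID
    IntegrationType="HTTP_PROXY",
    IntegrationMethod="ANY",
    IntegrationUri="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/net/my-nlb/abc/def",
    ConnectionType="VPC_LINK",
    ConnectionId=vpc_link["VpcLinkId"],
    PayloadFormatVersion="1.0",
)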

Custom domain cross compatibility

Amazon API Gateway now offers the ability to share custom domains across REST APIs and HTTP API. This flexibility allows developers to mix and match between REST APIs and HTTP APIs while building applications.

Custom domain cross compatibility

Figure 7: Custom domain cross compatibility

Previously, when building applications that need a consistent domain name, developers could only use a single type of API. Because applications often require features available only in REST APIs, REST APIs were a common choice. From today, you can distribute these routes between HTTP APIs and REST APIs based on feature requirements.

Request throttling

HTTP APIs now offers the ability to do granular throttling at the stage and route level. API throttling is an often-overlooked API feature that is critical to the health of APIs and their infrastructure. By default, API Gateway limits the steady-state request rate to 10,000 requests per second (rps) with a 5,000 request burst limit. These are soft limits and can be raised via AWS Service Quotas.

HTTP APIs has the concept of stages that can be used for different purposes. Applications can have dev, beta, and prod stages on the same API. Additionally, for backwards compatibility, you can configure multiple production stages of the same API. Each stage has optional burst and rate limit settings that override the account level setting of 10,000 rps.

Stage level throttling

Figure 8: Stage level throttling
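Stage-level overrides can also be set programmatically. A hedged sketch using the SDK, with a placeholder API ID:

import boto3

apigw = boto3.client("apigatewayv2")

# Override the account-level limits for the $default stage of one HTTP API.
apigw.update_stage(
    ApiId="a1b2c3d4",  # placeholder
    StageName="$default",
    DefaultRouteSettings={
        "ThrottlingBurstLimit": 1000,
        "ThrottlingRateLimit": 2000,
    },
)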

Throttling can also be set at the route level. A route is a combination of the path and method. For example, the GET method on the root path (/) makes up a route. At the time of this writing, route-level throttling must be configured with the AWS Command Line Interface (CLI) or an AWS SDK. To set the throttling on the / [ANY] route of the $default stage, use the following CLI command:

aws apigatewayv2 update-stage --api-id <your API ID> --stage-name $default --route-settings '{"ANY /": {"ThrottlingBurstLimit":1000, "ThrottlingRateLimit":2000}}'

Stage variables

HTTP APIs now supports the use of stage variables to pass dynamic data to the backend integration or even to define the integration itself. When a stage is defined on an HTTP API, it creates a new path to the backend integration. The following table shows a domain with several stages:

Stage      Path
$default   www.mydomain.com
dev        www.mydomain.com/dev
beta       www.mydomain.com/beta

When you access the link for the dev stage, the dev stage variables are passed to the backend integration in the event object. The backend uses this information when processing the request. While stage variables are not a good fit for passing secrets, they are useful for designating non-secret data, like environment-specific endpoints or feature switches.
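A hedged sketch of a Lambda handler that reads a stage variable from the version 2.0 event, assuming a variable named backendEndpoint is defined on the stage:

import json

def handler(event, context):
    # Stage variables arrive alongside the request in the event object.
    stage_vars = event.get("stageVariables") or {}
    endpoint = stage_vars.get("backendEndpoint", "https://prod.example.com")  # placeholder default

    return {
        "statusCode": 200,
        "body": json.dumps({
            "stage": event["requestContext"]["stage"],
            "endpoint": endpoint,
        }),
    }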

Stage variables are also used to dynamically define the backend integration. For example, if the integration uses one AWS Lambda function for production and another for testing, you can use stage variables to dynamically route to the appropriate Lambda function as shown below:

Dynamically choosing an integration point

Figure 9: Dynamically choosing an integration point

When building dynamic integrations, it is also important to update permissions accordingly. HTTP APIs automatically adds invocation rights when an integration points to a single Lambda function. However, when using multiple functions, you must create and manage the role manually. Do this by turning off the Grant API Gateway permission to invoke your Lambda function option and entering a custom role with the appropriate permissions.

Integration custom role

Figure 10: Integration custom role

Lambda payload version 2.0

HTTP APIs now supports an updated event payload and response format for the Lambda function integration. Version 2.0 payload simplifies the format of the event object sent to the Lambda function. Here is a comparison of the event object that is sent to a Lambda function in version 1.0 and 2.0:

Version 1.0

{
    "version": "1.0",
    "resource": "/Echo",
    "path": "/Echo",
    "httpMethod": "GET",
    "headers": {
        "Content-Length": "0",
        "Host": "0000000000.execute-api.us-east-1.amazonaws.com",
        "User-Agent": "TestClient",
        "X-Amzn-Trace-Id": "Root=1-5e6ab926-933e1530e55773a0709dfaa6",
        "X-Forwarded-For": "1.1.1.1",
        "X-Forwarded-Port": "443",
        "X-Forwarded-Proto": "https",
        "accept": "*/*",
        "accept-encoding": "gzip, deflate, br",
        "cache-control": "no-cache",
        "clientInformation": "private",
        "cookie": "Cookie_2=value; Cookie_3=value; Cookie_4=value"
    },
    "multiValueHeaders": {
        "Content-Length": [
            "0"
        ],
        "Host": [
            "0000000000.execute-api.us-east-1.amazonaws.com"
        ],
        "X-Amzn-Trace-Id": [
            "Root=1-5e6ab926-933e1530e55773a0709dfaa6"
        ],
        "X-Forwarded-For": [
            "1.1.1.1"
        ],
        "X-Forwarded-Port": [
            "443"
        ],
        "X-Forwarded-Proto": [
            "https"
        ],
        "accept": [
            "*/*"
        ],
        "accept-encoding": [
            "gzip, deflate, br"
        ],
        "cache-control": [
            "no-cache"
        ],
        "clientInformation": [
            "public",
            "private"
        ],
        "cookie": [
            "Cookie_2=value; Cookie_3=value; Cookie_4=value"
        ]
    },
    "queryStringParameters": {
        "getValueFor": "newClient"
    },
    "multiValueQueryStringParameters": {
        "getValueFor": [
            "newClient"
        ]
    },
    "requestContext": {
        "accountId": "0000000000",
        "apiId": "0000000000",
        "domainName": "0000000000.execute-api.us-east-1.amazonaws.com",
        "domainPrefix": "0000000000",
        "extendedRequestId": "JTHd9j2EoAMEPEA=",
        "httpMethod": "GET",
        "identity": {
            "accessKey": null,
            "accountId": null,
            "caller": null,
            "cognitoAuthenticationProvider": null,
            "cognitoAuthenticationType": null,
            "cognitoIdentityId": null,
            "cognitoIdentityPoolId": null,
            "principalOrgId": null,
            "sourceIp": "1.1.1.1",
            "user": null,
            "userAgent": "TestClient",
            "userArn": null
        },
        "path": "/Echo",
        "protocol": "HTTP/1.1",
        "requestId": "JTHd9j2EoAMEPEA=",
        "requestTime": "12/Mar/2020:22:35:18 +0000",
        "requestTimeEpoch": 1584052518094,
        "resourceId": null,
        "resourcePath": "/Echo",
        "stage": "$default"
    },
    "pathParameters": null,
    "stageVariables": null,
    "body": null,
    "isBase64Encoded": true
}

Version 2.0

{
    "version": "2.0",
    "routeKey": "ANY /Echo",
    "rawPath": "/Echo",
    "rawQueryString": "getValueFor=newClient",
    "cookies": [
        "Cookie_2=value",
        "Cookie_3=value",
        "Cookie_4=value"
    ],
    "headers": {
        "accept": "*/*",
        "accept-encoding": "gzip, deflate, br",
        "cache-control": "no-cache",
        "clientinformation": "public,private",
        "content-length": "0",
        "host": "0000000000.execute-api.us-east-1.amazonaws.com",
        "user-agent": "TestClient",
        "x-amzn-trace-id": "Root=1-5e6ab967-cfe253ce6f8b90986a678c40",
        "x-forwarded-for": "1.1.1.1",
        "x-forwarded-port": "443",
        "x-forwarded-proto": "https"
    },
    "queryStringParameters": {
        "getValueFor": "newClient"
    },
    "requestContext": {
        "accountId": "0000000000",
        "apiId": "0000000000",
        "domainName": "0000000000.execute-api.us-east-1.amazonaws.com",
        "domainPrefix": "0000000000",
        "http": {
            "method": "GET",
            "path": "/Echo",
            "protocol": "HTTP/1.1",
            "sourceIp": "1.1.1.1",
            "userAgent": "TestClient"
        },
        "requestId": "JTHoQgr2oAMEPMg=",
        "routeId": "47matwk",
        "routeKey": "ANY /Echo",
        "stage": "$default",
        "time": "12/Mar/2020:22:36:23 +0000",
        "timeEpoch": 1584052583903
    },
    "isBase64Encoded": true
}
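
As an illustration of how the two formats differ in practice, the following is a sketch of a Python Lambda handler (hypothetical, not part of this walkthrough) that reads the HTTP method and the getValueFor query parameter from either payload version.

def lambda_handler(event, context):
    # Version 2.0 nests HTTP details under requestContext.http; version 1.0 keeps them at the top level
    if event.get("version") == "2.0":
        method = event["requestContext"]["http"]["method"]
    else:
        method = event["httpMethod"]

    # queryStringParameters appears in both versions (it may be absent when there is no query string)
    params = event.get("queryStringParameters") or {}
    value_for = params.get("getValueFor")

    return {"method": method, "getValueFor": value_for}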

Additionally, version 2.0 allows more flexibility in the format of the response object from the Lambda function. Previously, this was the required format of the response:

{
  "statusCode": 200,
  "body":
  {
    "Name": "Eric Johnson",
    "TwitterHandle": "@edjgeek"
  },
  "headers": {
    "Access-Control-Allow-Origin": "https://amazon.com"
  }
}

When using version 2.0, the response is simpler:

{
  "Name": "Eric Johnson",
  "TwitterHandle": "@edjgeek"
}

When HTTP APIs receives the response, it uses data like CORS settings and integration response codes to populate the missing data.
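
To make the difference concrete, here is a sketch of the same hypothetical Python handler written for each payload version; with version 2.0 a plain object is enough, while version 1.0 requires the full response envelope.

import json

def handler_v2(event, context):
    # Payload version 2.0: return the data; HTTP APIs assumes a 200 status code,
    # serializes the body as JSON, and applies the API's CORS configuration
    return {"Name": "Eric Johnson", "TwitterHandle": "@edjgeek"}

def handler_v1(event, context):
    # Payload version 1.0: the full response object is required
    return {
        "statusCode": 200,
        "headers": {"Access-Control-Allow-Origin": "https://amazon.com"},
        "body": json.dumps({"Name": "Eric Johnson", "TwitterHandle": "@edjgeek"})
    }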

By default, new Lambda function integrations use version 2.0. You can change this under the Advanced settings toggle for the Lambda function integration. The version applies to both the event and response payloads. If you choose version 1.0, the old event format is sent to the Lambda function and the full response object must be returned.

Lambda integration advanced settings

Figure 11: Lambda integration advanced settings

OpenAPI/Swagger support

HTTP APIs now supports importing Swagger or OpenAPI configuration files. This makes it simple to migrate APIs from other API Gateway offerings to HTTP APIs. When importing a configuration file, HTTP APIs implements all supported features and reports any features that are not currently supported.
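
As a hedged sketch, an import can also be performed programmatically with the boto3 apigatewayv2 client; the file name below is a hypothetical placeholder, and the same import is available through the console and the AWS CLI.

import boto3

client = boto3.client("apigatewayv2")

# Read an OpenAPI 3.0 definition exported from an existing API (hypothetical file name)
with open("openapi.json") as f:
    definition = f.read()

# Import the definition as a new HTTP API; unsupported features are reported as warnings
response = client.import_api(Body=definition, FailOnWarnings=False)
print(response["ApiId"], response.get("Warnings", []))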

AWS Serverless Application Model (SAM) support

At the time of this writing, the AWS Serverless Application Model (AWS SAM) supports most features released in beta at re:Invent 2019. AWS SAM support for many GA features is scheduled for release by March 20, 2020.

Conclusion

For almost five years, Amazon API Gateway has enabled developers to build highly scalable and durable application programming interfaces. It has allowed tasks like authorization, throttling, and data validation to be moved out of application code and into a managed service. With the introduction of HTTP APIs for Amazon API Gateway, developers can now use this powerful service to build APIs faster, at lower cost, and with a simpler developer experience.

Is the idea of merging the public service media on the agenda again?

Post Syndicated from nellyo original https://nellyo.wordpress.com/2020/03/07/%D0%BF%D0%B0%D0%BA-%D0%BB%D0%B8-%D0%B5-%D0%BD%D0%B0-%D0%B4%D0%BD%D0%B5%D0%B2%D0%B5%D0%BD-%D1%80%D0%B5%D0%B4-%D0%B8%D0%B4%D0%B5%D1%8F%D1%82%D0%B0-%D0%B7%D0%B0-%D1%81%D0%BB%D0%B8%D0%B2%D0%B0%D0%BD%D0%B5/

 

SD Card Speed Test

Post Syndicated from Simon Long original https://www.raspberrypi.org/blog/sd-card-speed-test/

Since we first launched Raspberry Pi, an SD card (or microSD card) has always been a vital component. Without an SD card to store the operating system, Raspberry Pi is pretty useless*! Over the ensuing eight years, SD cards have become the default removable storage technology, used in cameras, smartphones, games consoles and all sorts of other devices. Prices have plummeted to the point where smaller size cards are practically given away for free, and at the same time storage capacity has increased to the point where you can store a terabyte on your thumbnail.

SD card speed ratings, and why they matter

However, the fact that SD cards are now so commonplace sometimes conceals the fact that not all SD cards are created equal. SD cards have a speed rating – how fast you can read or write data to the card – and as card sizes have increased, so have speed ratings. If you want to store 4K video from your digital camera, it is important not just that the card is big enough to hold it, but also that you can write it to the card fast enough to keep up with the huge amount of data coming out of the camera.

The speed of an SD card will also directly affect how fast your Raspberry Pi runs, in just the same way as the speed of a hard drive affects how fast a conventional desktop computer runs. The faster you can read data from the card, the faster your Raspberry Pi will boot, and the faster programs will load. Equally, write speed will also affect how well any programs which save large quantities of data run – so it’s important to use a good-quality card.

What speed can I expect from my SD card?

The speed rating of an SD card should be printed either on the card itself or on the packaging.

The 32GB card shown below is Class 4, denoted by the 4 inside the letter C – this indicates that it can write at 4MB/s.

The 64GB card shown below is Class 10, and so can write at 10MB/s. It also shows the logo of UHS (“ultra high speed”) Class 1, the 1 inside the letter U, which corresponds to the same speed.

More recently, speeds have started to be quoted in terms of the intended use of the card, with Class V10 denoting a card intended for video at 10MB/s, for example. But the most recent speed categorisation – and the one most relevant to use in a Raspberry Pi – is the new A (for “application”) speed class. We recommend the use of Class A1 cards (such as the one above – see the A1 logo to the right of the Class 10 symbol) in Raspberry Pi – in addition to a write speed of 10MB/s, these support at least 1500 read operations and 500 write operations per second. All the official Raspberry Pi microSD cards we sell meet this specification.

A new tool for testing your SD card speed

We’ve all heard the stories of people who have bought a large capacity SD card at a too-good-to-be-true price from a dodgy eBay seller, and found that their card labelled as 64GB can only actually hold 2GB of data. But that is at least fairly easy to spot – it’s much harder to work out whether your supposedly fast SD card is actually meeting its specified speed, and unscrupulous manufacturers and sellers often mislabel low quality cards as having unachievable speeds.

Today, as the first part of a new suite of tests which will enable you to perform various diagnostics on your Raspberry Pi hardware, we are releasing a tool which allows you to test your SD card to check that it performs as it should.

To install the new tool, from a terminal do

sudo apt update
sudo apt install agnostics

(“agnostics”? In this case it’s nothing to do with religion! I’ll leave you to work out the pun…)

Once installed, you will find the new application “Raspberry Pi Diagnostics” in the main menu under “Accessories”, and if you launch it, you’ll see a screen like this:

In future, this screen will show a list of the diagnostic tests, and you will be able to select which you want to run using the checkboxes in the right-hand column. But for now, the only test available is SD Card Speed Test; just press “Run Tests” to start it.

Understanding your speed test results

One thing to note is that the write performance of SD cards declines over time. A new card is blank and data can be written to what is effectively “empty” memory, which is fast; but as a card fills up, memory needs to be erased before it can be overwritten, and so writes will become slower the more a card is used. The pass / fail criteria in this test assume a new (or at least freshly formatted) card; don’t be alarmed if the write speed test fails when run on the SD card you’ve been using for six months! If you do notice your Raspberry Pi slowing down over time, it may be worth backing up your SD card using the SD Card Copier tool and reformatting it.

The test takes a minute or so to run on a Raspberry Pi 4 (it’ll take longer on older models), and at the end you’ll see a results screen with either (hopefully) PASS or (if you are less fortunate) FAIL. To see the detailed results of the speed test, press “Show Log”, which will open the test log file in a text editor. (The log file is also written to your home directory as rpdiags.txt.)

We are testing against the A1 specification, which requires a sequential write speed of 10MB/s, 500 random write operations per second, and 1500 random read operations per second; we run the test up to three times. (Tests of this nature are liable to errors due to other background operations accessing the SD card while the test is running, which can affect the result – by running the test multiple times we try to reduce the likelihood of a single bad run resulting in a fail.)

If the test result was a pass, great! Your SD card is good enough to provide optimum performance in your Raspberry Pi. If it failed, have a look in the log file – you’ll see something like:

Raspberry Pi Diagnostics - version 0.1
Mon Feb 24 09:44:16 2020

Test : SD Card Speed Test
Run 1
prepare-file;0;0;12161;23
seq-write;0;0;4151;8
rand-4k-write;0;0;3046;761
rand-4k-read;9242;2310;0;0
Sequential write speed 4151 kb/s (target 10000) - FAIL
Note that sequential write speed declines over time as a card is used - your card may require reformatting
Random write speed 761 IOPS (target 500) - PASS
Random read speed 2310 IOPS (target 1500) - PASS
Run 2
prepare-file;0;0;8526;16
...

You can see just how your card compares to the stated targets; if it is pretty close to them, then your card is only just below specification and is probably fine to use. But if you are seeing significantly lower scores than the targets, you might want to consider getting another card.
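
If you want to compare your own results against the targets programmatically, here is a sketch that parses the semicolon-separated lines of rpdiags.txt shown above. The field positions are assumed from the sample output, so treat it as illustrative rather than a definitive parser.

# Assumed field layout, based on the sample log: name;read_kb_s;read_iops;write_kb_s;write_iops
TARGETS = {
    "seq-write": ("write kb/s", 3, 10000),
    "rand-4k-write": ("write IOPS", 4, 500),
    "rand-4k-read": ("read IOPS", 2, 1500),
}

with open("rpdiags.txt") as log:
    for line in log:
        name, *fields = line.strip().split(";")
        if name in TARGETS and fields:
            label, position, target = TARGETS[name]
            value = int(fields[position - 1])  # fields[] starts after the test name
            verdict = "PASS" if value >= target else "FAIL"
            print(f"{name}: {value} {label} (target {target}) - {verdict}")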



[*] unless you’re using PXE network or USB mass storage boot modes of course.

The post SD Card Speed Test appeared first on Raspberry Pi.

Testing and creating CI/CD pipelines for AWS Step Functions

Post Syndicated from Matt Noyce original https://aws.amazon.com/blogs/devops/testing-and-creating-ci-cd-pipelines-for-aws-step-functions-using-aws-codepipeline-and-aws-codebuild/

AWS Step Functions allows you to easily create workflows that are highly available, serverless, and intuitive. Step Functions natively integrates with a variety of AWS services including, but not limited to, AWS Lambda, AWS Batch, AWS Fargate, and Amazon SageMaker. It offers the ability to natively add error handling, retry logic, and complex branching, all through an easy-to-use JSON-based language known as the Amazon States Language.

AWS CodePipeline is a fully managed continuous delivery service that provides easy and highly configurable methods for automating release pipelines. CodePipeline lets you build, test, and deploy your most critical applications and infrastructure in a reliable and repeatable manner.

AWS CodeCommit is a fully managed and secure source control repository service. It eliminates the need to provision and scale infrastructure to support highly available and critical code repository systems.

This blog post demonstrates how to create a CI/CD pipeline to comprehensively test an AWS Step Function state machine from start to finish using CodeCommit, AWS CodeBuild, CodePipeline, and Python.

CI/CD pipeline steps

The pipeline contains the following steps, shown in the diagram below.

CI/CD pipeline steps

  1. Pull the source code from source control.
  2. Lint any configuration files.
  3. Run unit tests against the AWS Lambda functions in codebase.
  4. Deploy the test pipeline.
  5. Run end-to-end tests against the test pipeline.
  6. Clean up test state machine and test infrastructure.
  7. Send approval to approvers.
  8. Deploy to Production.

Prerequisites

In order to get started building this CI/CD pipeline there are a few prerequisites that must be met:

  1. Create or use an existing AWS account (instructions on creating an account can be found here).
  2. Define or use the example AWS Step Function states language definition (found below).
  3. Write the appropriate unit tests for your Lambda functions.
  4. Determine end-to-end tests to be run against AWS Step Function state machine.

The CodePipeline project

The following screenshot depicts what the CodePipeline project looks like, including the set of stages run in order to securely, reliably, and confidently deploy the AWS Step Function state machine to Production.

CodePipeline project

Creating a CodeCommit repository

To begin, navigate to the AWS console to create a new CodeCommit repository for your state machine.

CodeCommit repository

In this example, the repository is named CalculationStateMachine, as it contains the contents of the state machine definition, Python tests, and CodeBuild configurations.

CodeCommit structure

Breakdown of repository structure

In the CodeCommit repository above we have the following folder structure:

  1. config – this is where all of the Buildspec files will live for our AWS CodeBuild jobs.
  2. lambdas – this is where we will store all of our AWS Lambda functions.
  3. tests – this is the top-level folder for unit and end-to-end tests. It contains two sub-folders (unit and e2e).
  4. cloudformation – this is where we will add any extra CloudFormation templates.

Defining the state machine

Inside of the CodeCommit repository, create a State Machine Definition file called sm_def.json that defines the state machine in Amazon States Language.

This example creates a state machine that invokes a collection of Lambda functions to perform calculations on the given input values. Take note that it also performs a check against a specific value and, through the use of a Choice state, either continues the pipeline or exits it.

sm_def.json file:

{
  "Comment": "CalulationStateMachine",
  "StartAt": "CleanInput",
  "States": {
    "CleanInput": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Parameters": {
        "FunctionName": "CleanInput",
        "Payload": {
          "input.$": "$"
        }
      },
      "Next": "Multiply"
    },
    "Multiply": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Parameters": {
        "FunctionName": "Multiply",
        "Payload": {
          "input.$": "$.Payload"
        }
      },
      "Next": "Choice"
    },
    "Choice": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.Payload.result",
          "NumericGreaterThanEquals": 20,
          "Next": "Subtract"
        }
      ],
      "Default": "Notify"
    },
    "Subtract": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Parameters": {
        "FunctionName": "Subtract",
        "Payload": {
          "input.$": "$.Payload"
        }
      },
      "Next": "Add"
    },
    "Notify": {
      "Type": "Task",
      "Resource": "arn:aws:states:::sns:publish",
      "Parameters": {
        "TopicArn": "arn:aws:sns:us-east-1:657860672583:CalculateNotify",
        "Message.$": "$$",
        "Subject": "Failed Test"
      },
      "End": true
    },
    "Add": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Parameters": {
        "FunctionName": "Add",
        "Payload": {
          "input.$": "$.Payload"
        }
      },
      "Next": "Divide"
    },
    "Divide": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Parameters": {
        "FunctionName": "Divide",
        "Payload": {
          "input.$": "$.Payload"
        }
      },
      "End": true
    }
  }
}

This will yield the following AWS Step Function state machine after the pipeline completes:

State machine

CodeBuild Spec files

The CI/CD pipeline uses a collection of CodeBuild BuildSpec files chained together through CodePipeline. The following sections demonstrate what these BuildSpec files look like and how they can be used to chain together and build a full CI/CD pipeline.

AWS States Language linter

In order to determine whether or not the State Machine Definition is valid, include a stage in your CodePipeline configuration to evaluate it. Through the use of a Ruby Gem called statelint, you can verify the validity of your state machine definition as follows:

lint_buildspec.yaml file:

version: 0.2
env:
  git-credential-helper: yes
phases:
  install:
    runtime-versions:
      ruby: 2.6
    commands:
      - yum -y install rubygems
      - gem install statelint

  build:
    commands:
      - statelint sm_def.json

If your configuration is valid, you do not see any output messages. If the configuration is invalid, you receive a message telling you that the definition is invalid and the pipeline terminates.

Lambda unit testing

In order to test your Lambda function code, you need to evaluate whether or not it passes a set of tests. You can test each individual Lambda function deployed and used inside of the state machine. You can feed various inputs into your Lambda functions and assert that the output is what you expect it to be. In this case, you use Python pytest to kick off tests and validate results.
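
For example, a unit test for the Multiply function might look like the following sketch. The module path, handler signature, and event shape are assumptions made for illustration; adjust them to match the code in your lambdas folder.

# tests/unit/test_multiply.py (hypothetical path and module name)
import pytest

from lambdas.multiply import handler  # assumption: the Multiply function exposes handler(event, context)

@pytest.mark.parametrize("value, expected", [(2, 4), (5, 10), (0, 0)])
def test_multiply_doubles_the_input(value, expected):
    # Assumed event shape: the state machine passes the previous state's payload under "input"
    event = {"input": {"result": value}}
    response = handler(event, None)
    assert response["result"] == expected

def test_multiply_rejects_non_numeric_input():
    event = {"input": {"result": "not-a-number"}}
    with pytest.raises((TypeError, ValueError)):
        handler(event, None)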

unit_test_buildspec.yaml file:

version: 0.2
env:
  git-credential-helper: yes
phases:
  install:
    runtime-versions:
      python: 3.8
    commands:
      - pip3 install -r tests/requirements.txt

  build:
    commands:
      - pytest -s -vvv tests/unit/ --junitxml=reports/unit.xml

reports:
  StateMachineUnitTestReports:
    files:
      - "**/*"
    base-directory: "reports"

Take note that the CodeCommit repository includes a directory called tests/unit, which contains a collection of unit tests that are run against your Lambda function code. Another very important part of this BuildSpec file is the reports section, which generates reports and metrics about the results, trends, and overall success of your tests.

CodeBuild test reports

After running the unit tests, you are able to see reports about the results of the run. Take note of the reports section of the BuildSpec file, along with the --junitxml=reports/unit.xml flag passed to the pytest command. This generates a set of reports that can be visualized in CodeBuild.

Navigate to the specific CodeBuild project you want to examine and click on the specific execution of interest. There is a tab called Reports, as seen in the following screenshot:

Test reports

Select the specific report of interest to see a breakdown of the tests that have run, as shown in the following screenshot:

Test visualization

With Report Groups, you can also view an aggregated list of tests that have run over time. This report includes various features such as the number of average test cases that have run, average duration, and the overall pass rate, as shown in the following screenshot:

Report groups

The AWS CloudFormation template step

The following BuildSpec file is used to generate an AWS CloudFormation template that injects the State Machine Definition into AWS CloudFormation.

template_sm_buildspec.yaml file:

version: 0.2
env:
  git-credential-helper: yes
phases:
  install:
    runtime-versions:
      python: 3.8

  build:
    commands:
      - python template_statemachine_cf.py

The Python script that templates AWS CloudFormation to deploy the State Machine Definition given the sm_def.json file in your repository follows:

template_statemachine_cf.py file:

import sys
import json

def read_sm_def (
    sm_def_file: str
) -> str:
    """
    Reads the state machine definition from a file and returns it as a JSON string.

    Parameters:
        sm_def_file (str) = the name of the state machine definition file.

    Returns:
        sm_def (str) = the state machine definition as a JSON string.
    """

    try:
        with open(f"{sm_def_file}", "r") as f:
            return f.read()
    except IOError as e:
        print("Path does not exist!")
        print(e)
        sys.exit(1)

def template_state_machine(
    sm_def: str
) -> dict:
    """
    Templates out the CloudFormation for creating a state machine.

    Parameters:
        sm_def (str) = the Amazon States Language definition of the state machine, as a JSON string.

    Returns:
        templated_cf (dict) = a dictionary representation of the CloudFormation template.
    """
    
    templated_cf = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "Creates the Step Function State Machine and associated IAM roles and policies",
        "Parameters": {
            "StateMachineName": {
                "Description": "The name of the State Machine",
                "Type": "String"
            }
        },
        "Resources": {
            "StateMachineLambdaRole": {
                "Type": "AWS::IAM::Role",
                "Properties": {
                    "AssumeRolePolicyDocument": {
                        "Version": "2012-10-17",
                        "Statement": [
                            {
                                "Effect": "Allow",
                                "Principal": {
                                    "Service": "states.amazonaws.com"
                                },
                                "Action": "sts:AssumeRole"
                            }
                        ]
                    },
                    "Policies": [
                        {
                            "PolicyName": {
                                "Fn::Sub": "States-Lambda-Execution-${AWS::StackName}-Policy"
                            },
                            "PolicyDocument": {
                                "Version": "2012-10-17",
                                "Statement": [
                                    {
                                        "Effect": "Allow",
                                        "Action": [
                                            "logs:CreateLogStream",
                                            "logs:CreateLogGroup",
                                            "logs:PutLogEvents",
                                            "sns:*"             
                                        ],
                                        "Resource": "*"
                                    },
                                    {
                                        "Effect": "Allow",
                                        "Action": [
                                            "lambda:InvokeFunction"
                                        ],
                                        "Resource": "*"
                                    }
                                ]
                            }
                        }
                    ]
                }
            },
            "StateMachine": {
                "Type": "AWS::StepFunctions::StateMachine",
                "Properties": {
                    "DefinitionString": sm_def,
                    "RoleArn": {
                        "Fn::GetAtt": [
                            "StateMachineLambdaRole",
                            "Arn"
                        ]
                    },
                    "StateMachineName": {
                        "Ref": "StateMachineName"
                    }
                }
            }
        }
    }

    return templated_cf


sm_def_dict = read_sm_def(
    sm_def_file='sm_def.json'
)

print(sm_def_dict)

cfm_sm_def = template_state_machine(
    sm_def=sm_def_dict
)

with open("sm_cfm.json", "w") as f:
    f.write(json.dumps(cfm_sm_def))

Deploying the test pipeline

In order to verify the full functionality of the entire state machine, you should stand it up so that it can be tested appropriately. The test stack is an exact replica of what you will deploy to Production, but it is a completely separate stack; the production stack is deployed only after the appropriate end-to-end tests and approvals pass. You can take advantage of the AWS CloudFormation target supported by CodePipeline. Take note of the configuration in the following screenshot, which shows how to configure this step in the AWS console:

Deploy test pipeline
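
Under the hood, this stage takes the sm_cfm.json template produced by the previous step and creates (or updates) the test stack from it. A hedged boto3 equivalent of what the deploy action does, with a hypothetical stack name, looks roughly like this:

import boto3

cloudformation = boto3.client("cloudformation")

with open("sm_cfm.json") as f:
    template_body = f.read()

# Create the test stack; CAPABILITY_IAM is needed because the template creates an IAM role
cloudformation.create_stack(
    StackName="CalculationStateMachine-Test",  # hypothetical stack name
    TemplateBody=template_body,
    Parameters=[{
        "ParameterKey": "StateMachineName",
        "ParameterValue": "CalculationStateMachine-Test"
    }],
    Capabilities=["CAPABILITY_IAM"]
)

# Wait until the stack (and the test state machine) is ready for end-to-end testing
cloudformation.get_waiter("stack_create_complete").wait(StackName="CalculationStateMachine-Test")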

End-to-end testing

In order to validate that the entire state machine works and executes without issues given any specific changes, feed it some sample inputs and make assertions on specific output values. If the specific assertions pass and you get the output that you expect to receive, you can proceed to the manual approval phase.
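
For example, an end-to-end test might start an execution of the test state machine and assert on its final output, along the lines of the following sketch. The state machine ARN is assumed to arrive through an environment variable set by the pipeline, and the input and output shapes are illustrative.

# tests/e2e/test_state_machine.py (hypothetical path)
import json
import os
import time

import boto3

sfn = boto3.client("stepfunctions")

def test_calculation_state_machine_end_to_end():
    # Assumption: the pipeline exports the test state machine ARN as an environment variable
    state_machine_arn = os.environ["TEST_STATE_MACHINE_ARN"]

    execution = sfn.start_execution(
        stateMachineArn=state_machine_arn,
        input=json.dumps({"value": 5})  # illustrative input
    )

    # Poll until the execution finishes; simple polling keeps the sketch self-contained
    while True:
        result = sfn.describe_execution(executionArn=execution["executionArn"])
        if result["status"] != "RUNNING":
            break
        time.sleep(2)

    assert result["status"] == "SUCCEEDED"
    output = json.loads(result["output"])
    assert "Payload" in output  # illustrative assertion on the final state's output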

e2e_tests_buildspec.yaml file:

version: 0.2
env:
  git-credential-helper: yes
phases:
  install:
    runtime-versions:
      python: 3.8
    commands:
      - pip3 install -r tests/requirements.txt

  build:
    commands:
      - pytest -s -vvv tests/e2e/ --junitxml=reports/e2e.xml

reports:
  StateMachineReports:
    files:
      - "**/*"
    base-directory: "reports"

Manual approval (SNS topic notification)

In order to proceed forward in the CI/CD pipeline, there should be a formal approval phase before moving forward with a deployment to Production. Using the Manual Approval stage in AWS CodePipeline, you can configure the pipeline to halt and send a message to an Amazon SNS topic before moving on further. The SNS topic can have a variety of subscribers, but in this case, subscribe an approver email address to the topic so that they can be notified whenever an approval is requested. Once the approver approves the pipeline to move to Production, the pipeline will proceed with deploying the production version of the Step Function state machine.

This Manual Approval stage can be configured in the AWS console using a configuration similar to the following:

Manual approval

Deploying to Production

After the linting, unit testing, end-to-end testing, and the Manual Approval phases have passed, you can move on to deploying the Step Function state machine to Production. This phase is similar to the Deploy Test Stage phase, except the name of your AWS CloudFormation stack is different. In this case, you again take advantage of the AWS CloudFormation deploy action in CodePipeline:

Deploy to production

After this stage completes successfully, your pipeline execution is complete.

Cleanup

After validating that the test state machine and Lambda functions work, include a CloudFormation step that tears down the existing test infrastructure (as it is no longer needed). This can be configured as a new CodePipeline step similar to the following configuration:

CloudFormation Template for cleaning up resources

Conclusion

You have linted and validated your AWS States Language definition, unit tested your Lambda function code, deployed a test AWS state machine, run end-to-end tests, received Manual Approval to deploy to Production, and deployed to Production. This gives you and your team confidence that any changes made to your state machine and surrounding Lambda function code perform correctly in Production.

 

About the Author

matt noyce profile photo

 

Matt Noyce is a Cloud Application Architect in Professional Services at Amazon Web Services.
He works with customers to architect, design, automate, and build solutions on AWS
for their business needs.

AWS has launched the Activate Founders package for Startups 🚀

Post Syndicated from Alejandra Quetzalli original https://aws.amazon.com/blogs/aws/aws-has-launched-the-activate-founders-package-for-startups-%F0%9F%9A%80/

Are you in a Startup?

As of today, AWS has launched the Activate Founders package for Startups! 🚀🚀🚀 This package unlocks a new set of benefits. If your startup isn’t affiliated with a venture capital firm, accelerator, or incubator, it can now apply to receive $1,000 in AWS Activate Credits (valid for 2 years) and $350 in AWS Developer Support Credits for AWS technical support (valid for 1 year).

👉🏽Visit aws.amazon.com/activate to learn more about the Activate Founders package and apply today.

What kind of Startups will the Activate Founders package help?

The Activate Founders package will help a lot of startups that have not yet raised institutional funding or have no plans to do so.

What kind of benefits will the Activate Founders package bring to customers?

  • Let’s start with Activate Founders credits. 💰They’re a cost-saving opportunity for experimenting, building, testing, and deploying startup architecture on AWS.
  • The AWS Developer Support Credits give startups unlimited support case access to AWS technical support through email 📧.
  • Access to seven core AWS Trusted Advisor best practice checks.
  • Free digital training paths for various business and technical roles, skill levels, and cloud topics.

What kind of Startups qualify for the Activate Founders package?

Startups interested in applying for the Activate Founders package benefits cannot have institutional funding, must have an AWS account, and must complete the Activate Founders application form. (Startups affiliated with a venture capital firm, accelerator, or incubator, or that have previously received Activate benefits, will not qualify.)

The six main criteria to qualify for the Activate Founders package are as follows…

  1. Complete the Activate Founders application form and provide a description of their startup.
  2. No institutional funding and not affiliated with a venture capital firm, accelerator, or incubator.
  3. Must have an AWS Account ID.
  4. Have not previously received Activate benefits.
  5. A company website or company web profile (e.g., CrunchBase, AngelList, Product Hunt, Entrepedia).
  6. Provide their LinkedIn profile.

How does a Startup apply for the Activate Founders package?

👉🏽Startups interested in applying for the Activate Founders package benefits can access the application directly from the AWS Activate console: https://console.aws.amazon.com/activate.

That said, more information about the AWS Activate program itself or details about all Activate benefits packages can be found here: https://aws.amazon.com/activate.

Will you apply?

We hope so! We look forward to helping even more customers. 🥳

The History of Pong | Code the Classics

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/the-history-of-pong-code-the-classics/

One topic explored in Code the Classics from Raspberry Pi Press is the origin story and success of Pong, one of the most prominent games in early video game history.

‘The success of Pong led to the creation of Pong home consoles (and numerous unofficial clones) that could be connected to a television. Versions have also appeared on many home computers.’

Ask anyone to describe a game of table tennis and they’ll invariably tell you the same thing: the sport involves a table split into quarters, a net dividing the two halves, a couple of paddles, and a nice round ping-pong ball to bat back and forth between two players. Take a look at the 1972 video game Pong, however, and you’ll notice some differences. The table, for instance, is simply split in half and it’s viewed side-on, the paddles look like simple lines, and the ball is square. Yet no one – not even now – would have much trouble equating the two.

Back in the early 1970s, this was literally as good as it got. The smattering of low-powered arcade machines of the time were incapable of realistic-looking graphics, so developers had to be creative, hoping imaginative gamers would fill the gaps and buy into whatever they were trying to achieve. It helped enormously that there was a huge appetite for the new, emerging video game industry at that time. Nolan Bushnell was certainly hungry for more – and had he turned his nose up at Spacewar!, a space combat game created by Steve Russell in 1962, then Pong would never even have come about.

“The most important game I played was Spacewar! on a PDP-1 when I was in college,” he says, of the two-player space shooter that was popular among computer scientists and required a $120,000 machine to run. Although the visuals were nothing to write home about, the game was one of the first graphical video games ever made. It pitted two spaceships against each other and its popularity spread, in part, because the makers decided the code could be distributed freely to anyone who wanted it. “It was a great game, fun, challenging, but only playable on a very expensive computer late at night and the wee hours of the morning,” Nolan says. “In my opinion, it was a very important step.”

Nolan was so taken by Spacewar! that he made a version of the game with a colleague, Ted Dabney. Released in 1971, Computer Space allowed gamers to control a rocket in a battle against flying saucers, with the aim being to get more hits than the enemy in a set period of time. To make it attractive to players, it was placed in a series of colourful, space-age, moulded arcade cabinets. Nolan and Ted sold 1500 of them; even though they made just $500 from the venture, it was enough to spur them into continuing. They came up with the idea for Pong and created a company called Atari.

One of their best moves was employing engineer Al Alcorn, who had worked with Nolan at the American electronics company Ampex. Al was asked to create a table tennis game based on a similar title that had been released on the Magnavox Odyssey console, on the pretence that the game would be released by General Electric. In truth, Nolan simply wanted to work out Al’s potential, but he was blown away by what his employee came up with. Addictive and instantly recognisable, Atari realised Pong could be a major hit. The game’s familiarity with players meant it could be picked up and played by just about anyone.

Even so, Nolan had a hard time convincing others. Manufacturers turned the company down, so he visited the manager of a bar called Andy Capp’s in Sunnyvale, California and asked them to take Pong for a week. The manager soon had to call Nolan to tell him the machine had broken: it had become stuffed full of quarters from gamers who loved the game. By 1973, production of the cabinet was in overdrive and 8000 were sold. It led to the creation of a Pong home console which sold more than 150,000 machines. People queued to get their hands on one and Atari was on its way to become a legendary games company.

For Nolan, it was justification for his perseverance and belief. Suddenly, the man who had become interested in electronics at school, where he would spend time creating devices and connecting bulbs and batteries, was being talked of as a key player in the fledgling video game industry. But what did Nolan, Ted, Al, and the rest of the Atari team do to make the game so special? “We made it a good, solid, fun game to play,” says Nolan. “And we made it simple, easy, and quickly understood. Keeping things simple is more difficult to do than building something complex. You can’t dress up bad gameplay with good graphics.”

Making Pong

On the face of it, Pong didn’t look like much. Each side had a paddle that could be moved directly up and down using the controller, and the ball would be hit from one side to the other. The score was kept at the top of the screen and the idea was to force the opposing player to miss. It meant the game program needed to determine how the ball was hit and where the ball would go from that point. And that’s the crux of Pong’s success: the game encouraged people to keep playing and learning in the hope of attaining the skills to become a master.

When creating Pong, then, the designers had a few things in mind. One of the most important parts of the game was the movement of the paddles. This involved a simple, vertical rectangle that went up and down. One of the benefits Atari had when it created Pong was that it controlled not just the software but the hardware too. By building the cabinet, it was able to determine how those paddles should be moved. “The most important thing if you want to get the gameplay right is to use a knob to move the paddle,” advises Nolan. “No one has done a good Pong using touchscreens or a joystick.”

Look at a Pong cabinet close up – there are plenty of YouTube videos which show the game in action on the original machine – and you will see what Nolan means. You’ll notice that players turned a knob anticlockwise to move the paddle down, and clockwise to move it up. Far from being confusing, it felt intuitive.

Movement of the ball

With the paddles moving, Atari’s developers were able to look at the movement of the ball. At its most basic, if the ball continued to make contact with the paddles, it would constantly move back and forth. If it did not make contact, then it would continue moving in the direction it had embarked upon and leave the screen. At this stage, a new ball was introduced in the centre of the screen and the advantage was given to the player who had just chalked up a point. If you watch footage of the original Pong, you will see that the new ball was aimed at the player who had just let the ball go past. There was a chance he or she would miss again.

To avoid defeat, players had to be quite nifty on the controls and stay alert. Watching the ball go back and forth at great speed could be quite mesmerising as it left a blurred trail across the cathode ray tube display. There was no need to waste computing power by animating the ball because the main attention was focused on what would happen when it collided with the paddle. It had to behave as you’d expect. “The game did not exist without collisions of the ball to the paddle,” says Nolan.

Al realised that the ball needed to behave differently depending on where it hit the paddle. When playing a real game of tennis, if the ball hits the centre of the racket, it will behave differently from a ball that hits the edge. Certainly, the ball is not going to be travelling in a simple, straight path back and forth as you hit it; it is always likely to go off at an angle. This, though, is the trickiest part of making Pong. “The ball should bounce up from an upper collision with more obtuse angles as the edge of the paddle is approached,” Nolan says. “This balances the risk of missing with the fact that an obtuse angle is harder to return.” This is what Pong is all about: making sure you hit the ball with the paddle, but in a manner that makes it difficult for the opposing player to return it. “A player wants the ball to be just out of reach for the opponent or be hard for him or her to predict.”
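
To see how that risk-and-reward trade-off might translate into code, here is a small illustrative Python sketch (not taken from Code the Classics) that maps where the ball strikes the paddle to a rebound angle, with hits nearer the edge producing a steeper, harder-to-return angle.

import math

# Illustrative constants, not the book's values
PADDLE_HEIGHT = 120
MAX_BOUNCE_ANGLE = math.radians(60)  # steepest rebound, at the very edge of the paddle
BALL_SPEED = 5

def bounce(ball_y, paddle_y, moving_right):
    """Return the ball's new (dx, dy) velocity after it hits a paddle.

    ball_y and paddle_y are vertical centres: a hit in the middle sends the ball
    straight back, while a hit near an edge sends it off at up to MAX_BOUNCE_ANGLE.
    """
    # Offset in the range -1.0 (top edge) to 1.0 (bottom edge)
    offset = (ball_y - paddle_y) / (PADDLE_HEIGHT / 2)
    offset = max(-1.0, min(1.0, offset))

    angle = offset * MAX_BOUNCE_ANGLE
    direction = 1 if moving_right else -1
    dx = direction * BALL_SPEED * math.cos(angle)
    dy = BALL_SPEED * math.sin(angle)
    return dx, dy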

Read on…

This post is part of a much longer deep dive into the history of Pong in Code the Classics, our 224-page hardback book that not only tells the stories of some of the seminal video games of the 1970s and 1980s, but also shows you how to use Python and Pygame Zero to create your own games inspired by them, following examples programmed by Raspberry Pi founder Eben Upton.

In conjunction with today’s blog post, we’re offering £1 off Code the Classics when you order your copy between now and midnight Wednesday 26 Feb 2020 from the Raspberry Pi Press online store. Simply follow this link or enter the discount code PONG at checkout to get your copy for only £11, with free shipping in the UK.

Code the Classics is also available as a free download, although the physical book is rather delightful, so we really do recommend purchasing it.

The post The History of Pong | Code the Classics appeared first on Raspberry Pi.

20 years without Prof. Toncho Zhechev

Post Syndicated from nellyo original https://nellyo.wordpress.com/2020/02/22/20-toncho/

 


Let us remember Professor Toncho Zhechev.

Twenty years have passed since he left us. From 1998 until his death, he was a member of the media regulator, appointed from the presidential quota. A lawyer by education, a storyteller of real class, and a very good man.

 

 

Colour us bewildered

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/colour-us-bewildered/

A Russian-speaking friend over at Farnell pointed us at this video. Apparently it’s been made by Amperkot.ru, a Russian Raspberry Pi Approved Reseller, who are running a t-shirt giveaway. We got our hands on a subtitled video, and…words fail me. Please turn the sound up before you start watching.

Contest from Amperkot.ru: a giveaway of a branded Raspberry Pi t-shirt (ENG & RUS subtitles)

Follow the news 1) on the Amperkot.ru website 2) in the VKontakte group (vk.com/amperkot) 3) on the Telegram channel (t.me/amperkot_ru)

We hope you enjoy this as much as we did. I have known Eben for more than twenty years now, and I’ve never seen him try to cram his whole fist into his mouth with mirth before.

Many thanks to Rapanui, who we wrote about here back in November. We suspect this will be as much of a surprise to them as it was to us. (The words coming out of Mart’s mouth are decidedly not his own.)

The post Colour us bewildered appeared first on Raspberry Pi.

Deploy and publish to an Amazon MQ broker using AWS serverless

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/deploy-and-publish-to-an-amazon-mq-broker-using-aws-serverless/

If you’re managing a broker on premises or in the cloud, and existing infrastructure depends on it, Amazon MQ can provide easily deployed, managed ActiveMQ brokers. These brokers support a variety of messaging protocols and offload operational overhead. That can be useful when deploying a serverless application that communicates with one or more external applications that also communicate with each other.

This post walks through deploying a serverless backend and an Amazon MQ broker in one step using the AWS Serverless Application Model (AWS SAM). It shows you how to publish to a topic using AWS Lambda and then how to create a client application to consume messages from the topic, using a supported protocol. As a result, the AWS services and features supported by AWS Lambda can now be delivered to an external application connected to an Amazon MQ broker using STOMP, AMQP, MQTT, OpenWire, or WSS.

Although many protocols are supported by Amazon MQ, this walkthrough focuses on one. MQTT is a lightweight publish–subscribe messaging protocol. It is built to work in a small code footprint and is one of the most well-supported messaging protocols across programming languages. The protocol also introduced quality of service (QoS) to ensure message delivery when a device goes offline. Using QoS features, you can limit failure states in an interdependent network of applications.

To simplify this configuration, I’ve provided an AWS Serverless Application Repository application that deploys AWS resources using AWS CloudFormation. Two resources are deployed, a single instance Amazon MQ broker and a Lambda function. The Lambda function uses Node.js and an MQTT library to act as a producer and publish to a message topic on the Amazon MQ broker. A provided sample Node.js client app can act as an MQTT client and subscribe to the topic to receive messages.
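
Because the broker speaks standard MQTT, the same publish flow works from any language with an MQTT client library, not just the Node.js pieces provided here. As a hedged illustration only, a Python producer using the paho-mqtt package might look like the following; the endpoint, credentials, and topic are placeholders to replace with your own broker's values, and the port is the broker's MQTT-over-TLS endpoint shown in the Amazon MQ console.

import ssl

import paho.mqtt.client as mqtt

# paho-mqtt 1.x constructor shown; paho-mqtt 2.x additionally requires a CallbackAPIVersion argument
client = mqtt.Client(client_id="example-producer")
client.username_pw_set("adminUsername", "adminPassword")  # placeholder broker credentials
client.tls_set(cert_reqs=ssl.CERT_REQUIRED)  # Amazon MQ endpoints require TLS

# Host and port come from the broker's mqtt+ssl endpoint on the broker page (placeholders here)
client.connect("b-xxxxxxxx.mq.us-east-1.amazonaws.com", 8883)
client.loop_start()

# QoS 1 asks the broker to acknowledge delivery of the message
client.publish("some/topic", payload="Hello from Python", qos=1)

client.loop_stop()
client.disconnect()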

Prerequisites

The following resources are required to complete the walkthrough:

Required steps

To complete the walkthrough, follow these steps:

  • Clone the Aws-sar-lambda-publish-amazonmq GitHub repository.
  • Deploy the AWS Serverless Application Repository application.
  • Run a Node.js MQTT client application.
  • Send a test message from an AWS Lambda function.
  • Use composite destinations.

Clone the GitHub repository

Before beginning, clone or download the project repository from GitHub. It contains the sample Node.js client application used later in this walkthrough.

Deploy the AWS Serverless Application Repository application

  1. Navigate to the page for the lambda-publish-amazonmq AWS Serverless Application Repository application.
  2. In Application settings, fill the following fields:

    – AdminUsername
    – AdminPassword
    – ClientUsername
    – ClientPassword

    These are the credentials for the Amazon MQ broker. The admin credentials are assigned to environment variables used by the Lambda function to publish messages to the Amazon MQ broker. The client credentials are used in the Node.js client application.

  3. Choose Deploy.

Creation can take up to 10 minutes. When completed, proceed to the next section.

Run a Node.js MQTT client application

The Amazon MQ broker supports OpenWire, AMQP, STOMP, MQTT, and WSS connections. This allows any supported programming language to publish and consume messages from an Amazon MQ queue or topic.

To demonstrate this, you can deploy the sample Node.js MQTT client application included in the GitHub project for the AWS Serverless Application Repository app. The client credentials created in the previous section are used here.

  1. Open a terminal application and change to the client-app directory in the GitHub project folder by running the following command:
    cd ~/some-project-path/aws-sar-lambda-publish-amazonmq/client-app
  2. Install the Node.js dependencies for the client application:
    npm install
  3. The app requires a WSS endpoint to create an Amazon MQ broker MQTT WebSocket connection. This can be found on the broker page in the Amazon MQ console, under Connections.
  4. The node app takes four arguments separated by spaces. Provide the user name and password of the client created on deployment, followed by the WSS endpoint and a topic, some/topic.
    node app.js "username" "password" "wss://endpoint:port" "some/topic"
  5. After connected prints in the terminal, leave this app running, and proceed to the next section.

There are three important components run by this code to subscribe and receive messages:

  • Connecting to the MQTT broker.
  • Subscribing to the topic on a successful connection.
  • Creating a handler for any message events.

The following code example shows connecting to the MQTT broker.

// Dependencies from the client app's package.json: the 'mqtt' and 'uuid' npm packages
const mqtt = require('mqtt')
const { v1: uuidv1 } = require('uuid')

// Read the broker credentials, endpoint, and topic from the command-line arguments
const args = process.argv.slice(2)

let options = {
  username: args[0],
  password: args[1],
  clientId: 'mqttLambda_' + uuidv1()
}

let mqEndpoint = args[2]
let topic = args[3]

// Connect to the Amazon MQ broker over its MQTT WebSocket (wss) endpoint
let client = mqtt.connect( mqEndpoint, options)

The following code example shows subscribing to the topic on a successful connection.

// When connected, subscribe to the topic

client.on('connect', function() {
  console.log("connected")

  client.subscribe(topic, function (err) {
    if(err) console.log(err)
  })
})

The following code example shows creating a handler for any message events.

// Log messages

client.on('message', function (topic, message) {
  console.log(`message received on ${topic}: ${message.toString()}`)
})

Send a test message from an AWS Lambda function

Now that the Amazon MQ broker, PublishMessage Lambda function, and the Node.js client application are running, you can test consuming messages from a serverless application.

  1. In the Lambda console, select the newly created PublishMessage Lambda function. Its name begins with the name given to the AWS Serverless Application Repository application on deployment.
  2. Choose Test.
  3. Give the new test event a name, and optionally modify the message. Choose Create.
  4. Choose Test to invoke the Lambda function with the test event.
  5. If the execution is successful, the message appears in the terminal where the Node.js client-app is running.

Using composite destinations

The Amazon MQ broker uses an XML configuration to enable and configure ActiveMQ features. One of these features, composite destinations, makes one-to-many relationships on a single destination possible. This means that a queue or topic can be configured to forward to another queue, topic, or combination.

This is useful when fanning out to a number of clients, some of whom are consuming queues while others are consuming topics. The following steps demonstrate how you can easily modify the broker configuration and define multiple destinations for a topic.

  1. On the Amazon MQ Configurations page, select the matching configuration from the list. It has the same stack name prefix as your broker.
  2. Choose Edit configuration.
  3. After the broker tag, add the following code example. It creates a new virtual composite destination where messages published to "some/topic" are forwarded to a queue "A.Queue" and a topic "foo."
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <broker schedulePeriodForDestinationPurge="10000" xmlns="http://activemq.apache.org/schema/core">
      
      <destinationInterceptors>
        <virtualDestinationInterceptor>
          <virtualDestinations>
            <compositeTopic name="some.topic">
              <forwardTo>
                <queue physicalName="A.Queue"/>
                <topic physicalName="foo" />
              </forwardTo>
            </compositeTopic>
          </virtualDestinations>
        </virtualDestinationInterceptor>
      </destinationInterceptors>
      <!-- the existing <destinationPolicy> element and the rest of the broker configuration continue unchanged -->
      <destinationPolicy>
  4. Choose Save, add a description for this revision, and then choose Save.
  5. In the left navigation pane, choose Brokers, and select the broker with the stack name prefix.
  6. Under Details, choose Edit.
  7. Under Configuration, select the latest configuration revision that you just created.
  8. Choose Schedule modifications, Immediately, Apply.

After the reboot is complete, run another test of the Lambda function. Then, open and log in to the ActiveMQ broker web console, which can be found under Connections on the broker page. To log in, use the admin credentials created on deployment.

On the Queues page, a new queue “A.Queue” was generated because you published to some/topic, which has a composite destination configured.

Conclusion

It can be difficult to tackle architecting a solution with multiple client destinations and networked applications. Although there are many ways to go about solving this problem, this post showed you how to deploy a robust solution using ActiveMQ with a serverless workflow. The workflow publishes messages to a client application using MQTT, a well-supported and lightweight messaging protocol.

To accomplish this, you deployed a serverless application and an Amazon MQ broker in one step using the AWS Serverless Application Repository. You also ran a Node.js MQTT client application authenticated as a registered user in the Amazon MQ broker. You then used Lambda to test publishing a message to a topic on the Amazon MQ broker. Finally, you extended functionality by modifying the broker configuration to support a virtual composite destination, allowing delivery to multiple topic and queue destinations.

With the completion of this project, you can take things further by integrating other AWS services and third-party or custom client applications. Amazon MQ provides multiple protocol endpoints that are widely used across the software and platform landscape. Using serverless as an in-between, you can deliver features from services like Amazon EventBridge to your external applications, wherever they might be. You can also explore how to invoke a Lambda function from Amazon MQ.

 

Creating a Seamless Handoff Between Amazon Pinpoint and Amazon Connect

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/creating-a-seamless-handoff-between-amazon-pinpoint-and-amazon-connect/

Note: This post was written by Ilya Pupko, Senior Consultant for the AWS Digital User Engagement team.


Time to read5 minutes
Learning levelIntermediate (200)
Services usedAmazon Pinpoint, Amazon SNS, AWS Lambda, Amazon Lex, Amazon Connect

Your customers deserve to have helpful communications with your brand, regardless of the channel that you use to interact with them. There are many situations in which you might have to move customers from one channel to another—for example, when a customer is interacting with a chatbot over SMS, but their needs suddenly change to require voice assistance. To create a great customer experience, your communications with your customers should be seamless across all communication channels.

Welcome aboard Customer Obsessed Airlines

In this post, we look at a scenario that involves our fictitious airline, Customer Obsessed Airlines. Severe storms in one area of the country have caused Customer Obsessed Airlines to cancel a large number of flights. Customer Obsessed Airlines has to notify all of the affected customers of the cancellations right away. But most importantly, to keep customers as happy as possible in this unfortunate and unavoidable situation, Customer Obsessed Airlines has to make it easy for customers to rebook their flights.

Fortunately, Customer Obsessed Airlines has implemented the solution that’s outlined later in this post. This solution uses Amazon Pinpoint to send messages to a targeted segment of customers—in this case, the specific customers who were booked on the affected flights. Some of these customers might have straightforward travel itineraries that can simply be rebooked through interactions with a chatbot. Other customers who have more complex itineraries, or those who simply prefer to interact with a human over the phone, can be handed off to an agent in your call center.

About the solution

The solution that we’ll build to handle this scenario can be deployed in under an hour. The following diagram illustrates the interactions in this solution.

At a high level, this solution uses the following workflow:

  1. An event occurs. Automated impact analysis systems trigger the creation of custom segments—in this case, all passengers whose flights were cancelled.
  2. Amazon Pinpoint sends a message to the affected passengers through their preferred channels. Amazon Pinpoint supports the email, SMS, push, and voice channels, but in this example, we focus exclusively on SMS.
  3. Passengers who receive the message can respond. When they do, they interact with a chatbot that helps them book a different flight.
  4. If a passenger requests a live agent, or if their situation can’t be handled by a chatbot, then Amazon Pinpoint passes information about the customer’s situation and communication history to Amazon Connect. The passenger is entered into a queue. When the passenger reaches the front of the queue, they receive a phone call from an agent.
  5. After being re-booked, the passenger receives a written confirmation of the changes to their itinerary through their preferred channel. Passengers are also given the option of providing feedback on their interaction when the process is complete.

To build this solution, we use Amazon Pinpoint to segment our customers based on their attributes (such as which flight they’ve booked), and to deliver messages to those segments.

We also use Amazon Connect to manage the voice calling part of the solution, and Amazon Lex to power the chatbot. Finally, we connect these services using logic that’s defined in AWS Lambda functions.

Setting up the solution

Step 1: Set up Amazon Pinpoint and link it with Amazon Lex

The first step in setting up this solution is to create a new Amazon Pinpoint project and configure the SMS channel. When that’s done, you can create an Amazon Lex chatbot and link it to the Amazon Pinpoint project.

We described this process in detail in an earlier blog post. Complete the procedures in Create an SMS Chatbot with Amazon Pinpoint and Amazon Lex, and then proceed to step 2.

Step 2: Set up Amazon Connect and link it with your Amazon Lex chatbot

By completing step 1, we’ve created a system that can send messages to our passengers and receive messages from them. The next step is to create a way for passengers to communicate with our call center.

The Amazon Connect Administrator Guide provides instructions for linking an Amazon Lex bot to an Amazon Connect instance. For complete procedures, see Add an Amazon Lex Bot.

When you complete these procedures, link your Amazon Connect instance to the same Amazon Lex bot that you created in step 1. This step is intended to provide customers with a consistent, cohesive experience across channels.

Step 3: Set up an Amazon Connect callback queue and use Amazon Pinpoint keyword logic to trigger it

Now that we’ve configured Amazon Pinpoint and Amazon Connect, we can connect them.

Linking the two services makes it possible for passengers to request additional assistance. Traditionally, passengers in this situation would have to call a call center themselves and then wait on hold for an agent to become available. However, in this solution, our call center calls the passenger directly as soon as an agent is available. When the agent calls the passenger, the agent has all of the information about the passenger’s issue, as well as a transcript of the passenger’s interactions with your chatbot.

To implement an automatic callback mechanism, use the Amazon Pinpoint Connect Callback Requestor, which is available on the AWS GitHub page.

Next steps

By completing the preceding three steps, you can send messages to a subset of your users based on the criteria you choose and the type of message you want to send. Your customers can interact with your message by replying with questions. When they do, a chatbot responds intelligently and appropriately.

You can add to this solution by expanding it to cover other communication channels, such as push notifications. You can also automate the initial communication by integrating the solution with your systems of record.

We’re excited to see what you build using the solution that we outlined in this post. Let us know of your ideas and your successes in the comments.

Migrating ASP.NET applications to Elastic Beanstalk with Windows Web Application Migration Assistant

Post Syndicated from Sepehr Samiei original https://aws.amazon.com/blogs/devops/migrating-asp-net-applications-to-elastic-beanstalk-with-windows-web-application-migration-assistant/

This blog post discusses the benefits of using AWS Elastic Beanstalk as a business application modernization tool, and walks you through how to use the new Windows Web Application Migration Assistant. Businesses and organizations in all types of industries are migrating their workloads to the Cloud in ever-increasing numbers. Among migrated workloads, websites hosted on Internet Information Services (IIS) on Windows Server are a common pattern. Developers and IT teams that manage these workloads want their web applications to scale seamlessly based on continuously changing load, without having to guess or forecast future demand. They want their infrastructure, including the IIS server and Windows Server operating system, to automatically receive the latest patches and updates and remain compliant with their baseline policies, without spending lots of manual effort or heavily investing in expensive commercial products. Additionally, many businesses want to tap into the global infrastructure, security, and resiliency offered by the AWS Cloud.

AWS Elastic Beanstalk is designed to address these requirements. Elastic Beanstalk enables you to focus on your application code as it handles provisioning, maintenance, health-check, autoscaling, and other common tasks necessary to keep your application running. Now, the Windows Web Application Migration Assistant makes it easier than ever to migrate ASP.NET and ASP.NET Core web applications from IIS on Windows servers running on-premises or in another cloud to Elastic Beanstalk. This blog post discusses how you can use this open-source tool to automate your migration efforts and expedite your first step into the AWS Cloud.

 

What is Windows Web Application Migration Assistant?

The Windows Web Application Migration Assistant is an interactive PowerShell script that migrates entire websites and their configurations to Elastic Beanstalk. The migration assistant is available as an open-source project on GitHub, and can migrate entire ASP.NET applications running on the .NET Framework on Windows, as well as ASP.NET Core applications. Elastic Beanstalk runs ASP.NET Core applications on the Windows platform as well. For ASP.NET applications, you can use this tool to migrate both classic Web Forms applications and ASP.NET MVC apps.

Windows Web Application Migration Assistant workflow

The migration assistant interactively walks you through the steps shown in the following diagram during the migration process.

Workflow of using Windows Web Application Migration Assistant. Showing steps as discover, select, DB connection string, generate, and deploy.

1. Discover: The script automatically discovers and lists any websites hosted on the local IIS server.

2. Select: You choose the websites that you wish to migrate.

3. Database connection string: The script checks the selected websites' web.config files and iterates through their connection string settings. You are then prompted to update the connection strings so that they point to the migrated database instance.

4. Generate: The script generates an Elastic Beanstalk deployment bundle. This bundle includes the web application binaries and instructions that tell Elastic Beanstalk how to host the application.

5. Deploy: Finally, the script uploads the deployment bundle to the Elastic Beanstalk application for hosting.

Elastic Beanstalk application and other concepts

The highest-level concept in Elastic Beanstalk is the application. An Elastic Beanstalk application is a logical collection of lower-level components, such as environments and versions. For example, you might have an Elastic Beanstalk application called MarketingApp that contains two Elastic Beanstalk environments: Test and Production. Each of these environments can host one of the different versions of your application, as shown in the following diagram.

Elastic Beanstalk application, environments, and versions.

As you can see in the diagram, although it’s possible to have multiple versions of the same application uploaded to Elastic Beanstalk, only one version is deployed to each environment at any point in time. Elastic Beanstalk enables rapid rollback and roll-forward between different versions, as well as swapping environments for Blue-Green deployments.

Migrating databases

The Windows Web Application Migration Assistant is specifically designed to migrate web applications only. However, it's quite likely that your web applications have dependencies on one or more backend databases. You can use other AWS tools and services to migrate those databases; depending on your requirements, multiple options are available.

The Windows Web Application Migration Assistant allows you to migrate your database(s) during or after the migration of your web applications. In either case, your migrated web applications should be modified so that their connection strings point to the new database endpoints. The migration assistant script also helps you to modify connection strings.

Prerequisites

Ensure your server meets the following software requirements.

  1. IIS 8 or above running on Windows Server 2012 or above.
  2. PowerShell 3.0 or above: To verify the version of PowerShell on your server, open a PowerShell window and enter this command: $PSVersionTable.PSVersion
  3. Microsoft Web Deploy version 3.6 or above: You can verify the installed version from the list of programs in Add or Remove Programs in Windows Control Panel.
  4. .NET Framework (supported versions: 4.x, 2.0, 1.x) or .NET Core (supported versions: 3.0.0, 2.2.8, 2.1.14) installed on your web server.
  5. AWSPowerShell module for MS PowerShell: You can use the following cmdlet to install it: Install-Module AWSPowerShell. Make sure the latest version of the AWSPowerShell module is installed. If you have an older version already installed, you can first uninstall it using the following cmdlet: Uninstall-Module AWSPowerShell -AllVersions -Force
  6. WebAdministration module for MS PowerShell: You can check for this dependency by invoking the PowerShell command Import-Module WebAdministration.
  7. Since the script automatically inspects IIS and web application settings, it needs to run as a local administrator on your web server. Make sure you have appropriate credentials to run it. If you are logged in with a user other than a local Administrator, open a PowerShell session window as an administrator.
  8. Finally, make sure your server has unrestricted internet connectivity to the AWS IP range.

Configure the destination AWS account

You need to have an AWS Identity and Access Management (IAM) user in your destination AWS account. Follow the instructions to create a new IAM user. Your user needs credentials for both the AWS Console and programmatic access, so remember to check both boxes, as shown in the following screenshot.

AWS Console screen for adding a new IAM user.

On the permissions page, attach existing policies directly to this user, as shown in the following screenshot. Select the following two AWS-managed policies:

  • IAMReadOnlyAccess
  • AWSElasticBeanstalkFullAccess

Screen to add existing policies to a user.

Before finishing the user creation, get the user’s AccessKey and SecretKey from the console, as shown in the following screenshot.

Success message for creating a new user, showing secret keys and credentials.
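If you prefer to script the user creation instead of clicking through the console, a rough boto3 sketch follows. The user name is a placeholder, and the two policy ARNs correspond to the managed policies listed above.

import boto3

iam = boto3.client("iam")
USER_NAME = "eb-migration-user"  # placeholder name

iam.create_user(UserName=USER_NAME)

# Attach the two AWS-managed policies required by the migration assistant.
for policy_arn in (
    "arn:aws:iam::aws:policy/IAMReadOnlyAccess",
    "arn:aws:iam::aws:policy/AWSElasticBeanstalkFullAccess",
):
    iam.attach_user_policy(UserName=USER_NAME, PolicyArn=policy_arn)

# Programmatic access keys, used later with Set-AWSCredential.
keys = iam.create_access_key(UserName=USER_NAME)["AccessKey"]
print(keys["AccessKeyId"], keys["SecretAccessKey"])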

Open a PowerShell terminal on your Windows Server and invoke the following two commands.

PS C:\> Import-Module AWSPowerShell
PS C:\> Set-AWSCredential -AccessKey {access_key_of_the_user} -SecretKey {secret_key_of_the_user} -StoreAs {profile_name} -ProfileLocation {optional - path_to_the_new_profile_file}

 

The parameter {profile_name} is an arbitrary name to label these credentials. The optional parameter path_to_the_new_profile_file can be used to store the encrypted credentials profile in a customized path.

Setting up

Once you’ve verified the checklist for the web server and configured the destination AWS account, setup is as simple as:

  1. Download the script from GitHub, using clone or the download option of the repository, and place the extracted content on the local file system of your web server.
  2. Optionally, edit the utils/settings.txt JSON file, and set the following two variables (if not set, the script will use the default values of the AWS PowerShell profile name and path):
    • defaultAwsProfileFileLocation : {path_to_the_new_profile_file}
    • defaultAwsProfileName : {profile_name}

You’re now ready to run the migration assistant script to migrate your web applications.

Running the migration assistant

The migration assistant is a PowerShell script. Open a PowerShell window as administrator on your web server and run the MigrateIISWebsiteToElasticBeanstalk.ps1 script. Assuming you have copied the contents of the GitHub package to your C: drive, launching the script looks as follows:

PS C:\> .\MigrateIISWebsiteToElasticBeanstalk.ps1

The migration assistant prompts you to enter the location of your AWS credentials file. If you used the optional parameter to store your IAM user’s credentials in a file, you can enter its path here. Otherwise, choose Enter to skip this step. The assistant then prompts you to enter the profile name of your credentials. Enter the same {profile_name} that you used for Set-AWSCredential.

Having acquired access credentials, the assistant script then prompts you to enter an AWS Region. Enter the identifier of the AWS Region in which you want your migrated web application to run. If you are not sure which values can be used, check the list of available Regions using the following cmdlet (in another PowerShell window):

Get-AWSRegion

For example, I am using ap-southeast-2 to deploy into the Sydney Region.

After specifying the Region, the migration assistant script starts scanning your web server for deployed web applications. You can then choose one of the discovered web applications by entering its application number as shown in the list.

Migration assistant lists web sites hosted in IIS and prompts to select one.

So far, you’ve kickstarted the migration assistant and selected a web application to migrate. At this point, the migration assistant begins to assess and analyze your web application. The next step is to update the connection strings for any backend databases.

 

Modifying connection strings

Once you enter the number of the web application that you want to migrate, the assistant takes a snapshot of your environment and lists any connection strings used by your application. To update a connection string, enter its number. Optionally, you can choose Enter to skip this step, but remember to manually update any connection strings before bringing the web application online.

Enter the number of the connection string you would like to update, or press ENTER:

Next, the assistant pauses and allows you to migrate your database. Choose Enter to continue.

Please separately migrate your database, if needed.

The assistant then prompts you to update any connection strings selected above. If you choose M, you can update the string manually by editing it in the file path provided by the migration assistant. Otherwise, paste the contents of the new connection string and choose Enter.

Enter "M" to manually edit the file containing the connection string, or paste the replacement string and press ENTER (default M) :

The script then continues with configuration parameters for the target Elastic Beanstalk environment.

Configuring the target Elastic Beanstalk environment using the script

The migration assistant script prompts you to enter a name for the Elastic Beanstalk application.

Please enter the name of your new EB application.

The name has to be unique:

By default, the migration assistant automatically creates an environment with a name prefixed with MigrationRun. It also creates a package containing your web application files, called an application source bundle. The migration assistant uploads this source bundle as a version inside the Elastic Beanstalk application.

Elastic Beanstalk automatically provisions an Amazon EC2 instance to host your web application. The default Amazon EC2 instance type is t3.medium. When the migration assistant prompts you to enter the instance type, you can choose Enter to accept the default size, or you can enter any other Amazon EC2 Instance Type.

Enter the instance type (default t3.medium) :

Elastic Beanstalk provides preconfigured solution stacks. The migration assistant script prompts you to select one of these solution stacks. These stacks come preconfigured with an optimized operating system, application server, and web server to host your application, and Elastic Beanstalk can also handle their ongoing maintenance by offering enhanced health checks, managed updates, and immutable deployments. These extra benefits are available with version 2 solution stacks (that is, v2.x.x, as in "64bit Windows Server 2016 v2.3.0 running IIS 10.0").

The migration assistant prompts you to enter the name of the Windows Server Elastic Beanstalk solution stack. For a list of all supported solution stacks, see .NET on Windows Server with IIS in the AWS Elastic Beanstalk Platforms guide.

Solution stack name (default 64bit Windows Server 2016 v2.3.0 running IIS 10.0):

That is the final step. Now, the migration assistant starts to do its magic and after a few minutes you should be able to see your application running in Elastic Beanstalk.
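For reference, the following boto3 sketch approximates what the migration assistant does at this stage once the deployment bundle has been uploaded to Amazon S3. The application name, bucket, and key are placeholders, and the script's actual implementation may differ in its details.

import boto3

eb = boto3.client("elasticbeanstalk", region_name="ap-southeast-2")

APP_NAME = "MyMigratedApp"               # placeholder
VERSION_LABEL = "MigrationRun-v1"        # placeholder
BUNDLE_BUCKET = "my-deployment-bucket"   # placeholder
BUNDLE_KEY = "MigrationRun-v1.zip"       # placeholder

eb.create_application(ApplicationName=APP_NAME)

# Register the uploaded deployment bundle as an application version.
eb.create_application_version(
    ApplicationName=APP_NAME,
    VersionLabel=VERSION_LABEL,
    SourceBundle={"S3Bucket": BUNDLE_BUCKET, "S3Key": BUNDLE_KEY},
)

# Create an environment on the chosen Windows/IIS solution stack and deploy the version.
eb.create_environment(
    ApplicationName=APP_NAME,
    EnvironmentName="MigrationRun-env",
    VersionLabel=VERSION_LABEL,
    SolutionStackName="64bit Windows Server 2016 v2.3.0 running IIS 10.0",
    OptionSettings=[
        {
            "Namespace": "aws:autoscaling:launchconfiguration",
            "OptionName": "InstanceType",
            "Value": "t3.medium",
        }
    ],
)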

Advanced deployment options

The Windows Web Application Migration Assistant uses all of the default values to quickly create a new Elastic Beanstalk application. However, you might want to change some of these defaults to address more specific requirements. For example, you may need to deploy your web application inside an Amazon Virtual Private Cloud (VPC), set up advanced health monitoring, or enable the managed updates feature.

To do this, you can stop running the migration assistant script after receiving this message:

An application bundle was successfully generated.

Now follow these steps:

  1. Sign in to your AWS console and go to the Elastic Beanstalk app creation page.
  2. From the top right corner of the page, make sure the correct AWS Region is selected.
  3. Create the application by following the instructions on the page.
  4. Choose Upload your code in the Application code section and upload the deployment bundle generated by the migration assistant. Go to your on-premises web server and open the folder to which you copied the migration assistant script. The deployment bundle is a .zip file placed in the MigrationRun-xxxxxxx/output folder. Use this .zip file as application code and upload it to Elastic Beanstalk.
  5. Choose the Configure more options button to set advanced settings as per your requirements.
  6. Choose the Create application button to instruct Elastic Beanstalk to start provisioning resources and deploy your application.

If your web application has to use existing SSL certificates, manually export those certificates from Windows IIS and then import them into AWS Certificate Manager (ACM) in your AWS account. Elastic Beanstalk creates an Elastic Load Balancer (ELB) for your application. You can configure this ELB to use your imported certificates. This way HTTPS connections will be terminated on the ELB, and you don’t need further configuration overhead on the web server. For more details, see Configuring HTTPS for your Elastic Beanstalk Environment.
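Here is a minimal boto3 sketch of the certificate import, assuming you have already exported the certificate, private key, and chain from IIS and converted them to PEM files. The file names and Region are placeholders.

import boto3

acm = boto3.client("acm", region_name="ap-southeast-2")

# Placeholder file names; export from IIS and convert to PEM before importing.
with open("certificate.pem", "rb") as cert, \
     open("private-key.pem", "rb") as key, \
     open("chain.pem", "rb") as chain:
    response = acm.import_certificate(
        Certificate=cert.read(),
        PrivateKey=key.read(),
        CertificateChain=chain.read(),
    )

# Attach this ARN to the HTTPS listener of the environment's load balancer.
print(response["CertificateArn"])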

Also keep in mind that Elastic Beanstalk supports a maximum of one HTTP and one HTTPS endpoint. No matter what port numbers are used for these endpoints within the on-premises IIS, Elastic Beanstalk binds them to ports 80 and 443 respectively.

Any software dependencies outside of the website directory, such as the Global Assembly Cache (GAC), have to be configured manually on the target environment.

Although Amazon EC2 supports Active Directory (AD)—both the managed AD service provided by AWS Directory Service and customer-managed AD—Elastic Beanstalk currently does not support web servers being joined to an AD domain. If your web application needs the web server to be joined to an Active Directory domain, either alter the application to eliminate the dependency, or use other approaches to migrate your application onto Amazon EC2 instances without using Elastic Beanstalk.

Cleanup

If you use the steps described in this post to test the migration assistant, it creates resources in your AWS account. You continue to be charged for these resources if they exceed the Free Tier limits. To avoid that, delete the resources so that you are no longer billed for them. This is as easy as going to the Elastic Beanstalk console and deleting any applications that you no longer want.

Conclusion

This blog post discussed how you can use the Windows Web Application Migration Assistant to simplify the migration of your existing web applications running on Windows into Elastic Beanstalk. The migration assistant provides an automated script that’s based on best practices and automates commonly used migration steps. You don’t need in-depth AWS skills to use this tool. Furthermore, by following the steps that the Windows Web Application Migration Assistant and Elastic Beanstalk perform, you can build deeper, practical knowledge of AWS services and capabilities as your workloads move into the Cloud. These skills are essential to the ongoing modernization and evolution of your workloads in the Cloud.

Building a serverless URL shortener app without AWS Lambda – part 3

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/building-a-serverless-url-shortener-app-without-lambda-part-3/

This is the final installment of a three-part series on building a serverless URL shortener without using AWS Lambda. This series highlights the power of Amazon API Gateway and its ability to directly integrate with services like Amazon DynamoDB. The result is a low latency, highly available application that is built with managed services and requires minimal code.

In part one of this series, I demonstrate building a serverless URL shortener application without using AWS Lambda. In part two, I walk through implementing application security using Amazon API Gateway settings and Amazon Cognito. In this part of this series, I cover application observability and performance.

Application observability

Before I can gauge the performance of the application, I must first be able to observe it. I configure two AWS services to help with observability: AWS X-Ray and Amazon CloudWatch.

X-Ray

X-Ray is a tracing service that enables developers to observe and debug distributed applications. With X-Ray enabled, every call to the API Gateway endpoint is tagged and monitored throughout the application services. Now that I have the application up and running, I want to test for errors and latency. I use Nordstrom’s open-source load testing library, serverless-artillery, to generate activity to the API endpoint. During the load test, serverless-artillery generates 8,000 requests per second (RPS) for a period of five minutes. The results are as follows:

X-Ray Tracing

This indicates that, from the point the request reaches API Gateway to when a response is generated, the average time for each request is 8 milliseconds (ms) with a 4 ms integration time to DynamoDB. It also indicates that there were no errors, faults, or throttling.

I change the parameters to increase the load and observe how the application performs. This time serverless-artillery generates 11,000rps for a period of 30 seconds. The results are as follows:

X-Ray tracing with throttling

X-Ray now indicates request throttling. This is due to the default throttling limits of API Gateway. Each account has a soft limit of 10,000rps with a burst limit of 5k requests. Since I am load testing the API with 11,000rps, API Gateway is throttling requests over 10k per second. When throttling occurs, API Gateway responds to the client with a status code of 429. Using X-Ray, I can drill down into the response data to get a closer look at requests by status code.

X-Ray analytics

CloudWatch

The next tool I use for application observability is Amazon CloudWatch. CloudWatch captures data for individual services and supports metric-based alarms. I create the following alarms to gain insight into my application:

  • APIGateway4xxAlarm: One percent of the API calls result in a 4xx error over a one-minute period.
  • APIGateway5xxAlarm: One percent of the API calls result in a 5xx error over a one-minute period.
  • APIGatewayLatencyAlarm: The p99 latency is over 75 ms over a five-minute period.
  • DDB4xxAlarm: One percent of the DynamoDB requests result in a 4xx error over a one-minute period.
  • DDB5xxAlarm: One percent of the DynamoDB requests result in a 5xx error over a one-minute period.
  • CloudFrontTotalErrorRateAlarm: Five requests to CloudFront result in a 4xx or 5xx error over a one-minute period.
  • CloudFrontTotalCacheHitRateAlarm: 80% or less of the requests to CloudFront result in a cache hit over a five-minute period. While this is not an error or a problem, it indicates the need for a more aggressive caching strategy.

Each of these alarms is configured to publish to a notification topic using Amazon Simple Notification Service (SNS). In this example I have configured my email address as a subscriber to the SNS topic. I could also subscribe a Lambda function or a mobile number for SMS message notification. I can also get a quick view of the status of my alarms on the CloudWatch console.
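As an example, here is a minimal boto3 sketch of how the APIGatewayLatencyAlarm could be created. The API name, SNS topic ARN, and Region are placeholders; the other alarms follow the same pattern with different metrics and thresholds.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="APIGatewayLatencyAlarm",
    Namespace="AWS/ApiGateway",
    MetricName="Latency",
    Dimensions=[{"Name": "ApiName", "Value": "url-shortener"}],  # placeholder API name
    ExtendedStatistic="p99",          # alarm on the p99 latency
    Period=300,                       # five-minute period
    EvaluationPeriods=1,
    Threshold=75,                     # milliseconds
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:shortener-alarms"],  # placeholder topic
)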

CloudWatch and X-Ray provide additional alerts when there are problems. They also provide observability to help remediate discovered issues.

Performance

With observability tools in place, I am now able to evaluate the performance of the application. In part one, I discuss using API Gateway and DynamoDB as the primary services for this application and the performance advantage they provide. However, these performance advantages are limited to the backend only. To improve performance between the client and the API, I configure throttling and a content delivery network with Amazon CloudFront.

Throttling

Request throttling is handled with API Gateway and can be configured at the stage level or at the resource and method level. Because this application is a URL shortener, the most important action is the 301 redirect that happens at /{linkId} – GET. I want to ensure that these calls take priority, so I set a throttling limit on all other actions.

The best way to do this is to set a global throttling limit of 2,000rps with a burst of 1,000. I then configure an override on the /{linkId} – GET method to 10,000rps with a burst of 5,000. If the API experiences an extraordinarily high volume of calls, calls to the other methods are throttled first, preserving capacity for the redirects.
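These settings can be applied in the console, in the SAM template, or programmatically. Here is a rough boto3 sketch of the stage update with placeholder API and stage identifiers; note that the method-level path escapes the resource path's slashes as ~1.

import boto3

apigateway = boto3.client("apigateway", region_name="us-east-1")

apigateway.update_stage(
    restApiId="abc123defg",   # placeholder API ID
    stageName="Prod",         # placeholder stage name
    patchOperations=[
        # Stage-wide defaults: 2,000 rps with a burst of 1,000.
        {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": "2000"},
        {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": "1000"},
        # Override for the 301 redirect method: 10,000 rps with a burst of 5,000.
        {"op": "replace", "path": "/~1{linkId}/GET/throttling/rateLimit", "value": "10000"},
        {"op": "replace", "path": "/~1{linkId}/GET/throttling/burstLimit", "value": "5000"},
    ],
)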

Content delivery network

The distance between a user and the API endpoint can severely affect the performance of an application. Simply put, the further the data has to travel, the slower the application. By configuring a CloudFront distribution to use the Amazon CloudFront Global Edge Network, I bring the data closer to the user and increase performance.

I configure the cache for /{linkId} – GET to "max-age=300", which tells CloudFront to store the response of that call for five minutes. The first call queries the API and database for a response, while all subsequent calls in the next five minutes receive the locally cached response. I then set the cache for all other endpoints to "no-cache, no-store", which tells CloudFront to never store the value from these calls. This ensures that as users create or edit their short-links, they get the latest data.

By bringing the data closer to the user, I now ensure that regardless of where the user is, they receive improved performance. To evaluate this, I return to serverless-artillery and test the CloudFront endpoint. The results are as follows:

  • Min: 8.12 ms
  • Max: 739 ms
  • Average: 21.7 ms
  • p10: 10.1 ms
  • p50: 12.1 ms
  • p90: 20 ms
  • p95: 34 ms
  • p99: 375 ms

To be clear, these are the 301 redirect response times. I configured serverless-artillery not to follow the redirects, as I have no control over the speed of the resulting site. The maximum response time was 739 ms; this would be the initial, uncached call. The p50 metric shows that half of the traffic sees a response time of 12 ms or better, while the p95 metric indicates that 95% of the traffic experiences a response time of 34 ms or better.

Conclusion

In this series, I talk through building a serverless URL shortener without the use of any Lambda functions. The resulting architecture looks like this:

This application is built by integrating multiple managed services together and applying business logic via mapping templates on API Gateway. Because these are all serverless managed services, they provide inherent availability and scale to meet the client load as needed.

While the "Lambda-less" pattern is not a match for every application, it is a great answer for building highly performant applications with minimal logic. Another advantage of this pattern is its extensibility. With the data saved to DynamoDB, I can use DynamoDB streams to connect additional processing as needed. I can also use CloudFront access logs to evaluate internal application metrics. Clone this repo to start serving your own shortened URLs, and submit a pull request if you have an improvement.

Did you miss any of this series?

  1. Part 1: Building the application.
  2. Part 2: Securing the application.

Happy coding!

Building a serverless URL shortener app without AWS Lambda – part 2

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/building-a-serverless-url-shortener-app-without-lambda-part-2/

This post is the second installment of a three-part series on building a serverless URL shortener without using AWS Lambda. The purpose of the series is to highlight the power of Amazon API Gateway and its ability to integrate directly with backend services like Amazon DynamoDB. The result is a low latency, highly available application that is built with managed services and requires minimal code.

In part one, I cover using Apache Velocity Templating Language (VTL) and Amazon API Gateway to manage business logic usually processed by an AWS Lambda function. Now I discuss several methods for securing the API Gateway and any resources behind it. Whether building a functionless application as described in part one, or proxying a compute layer with API Gateway, this post offers some best practices for configuring API Gateway security.

To refer to the full application, visit https://github.com/aws-samples/amazon-api-gateway-url-shortener. The template.yaml file is the AWS SAM configuration for the application, and the api.yaml is the OpenAPI configuration for the API. I include instructions on how to deploy the full application, together with a simple web client, in the README.md file.

There are several steps to secure the API. First, I use AWS Identity and Access Management (IAM) to ensure I practice least privilege access to the application services. Additionally, I enable authentication and authorization, enforce request validation, and configure Cross-Origin Resource Sharing (CORS).

Secure functionless architecture

IAM least privileges

When configuring API Gateway to limit access to services, I create two specific roles for interaction between API Gateway and Amazon DynamoDB.

The first, DDBReadRole, limits actions to GetItem, Scan, and Query. This role is applied to all GET methods on the API. For the POST, PUT, and DELETE methods, there is a separate role called DDBCrudRole that allows only the DeleteItem and UpdateItem actions. Additionally, the SAM template dynamically scopes these roles to a specific table, so each role can only perform its actions on that table.
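To make the scoping concrete, here is a rough boto3 sketch of the read role. The account ID and table ARN are placeholders, and in the sample application this is defined declaratively in the SAM template rather than with API calls.

import json
import boto3

iam = boto3.client("iam")
TABLE_ARN = "arn:aws:dynamodb:us-east-1:123456789012:table/LinkTable"  # placeholder

# Allow API Gateway to assume the role.
assume_role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "apigateway.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="DDBReadRole",
    AssumeRolePolicyDocument=json.dumps(assume_role_policy),
)

# Limit the role to read-only actions on the one application table.
iam.put_role_policy(
    RoleName="DDBReadRole",
    PolicyName="DDBReadOnly",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Scan", "dynamodb:Query"],
            "Resource": TABLE_ARN,
        }],
    }),
)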

Authentication and authorization

For authentication, I configure user management with Amazon Cognito. I then configure an API Gateway Cognito authorizer to manage request authorization. Finally, I configure the client for secure requests.

Configuring Cognito for authentication

For authentication, Cognito provides user directories called user pools that allow user creation and authentication. For the user interface, developers have the option of building their own with AWS Amplify or having Cognito host the authentication pages. For simplicity, I opt for Cognito hosted pages. The workflow looks like this:

Cognito authentication flow

To set up the Cognito service, I follow these steps, with a scripted sketch after the list:

  1. Create a Cognito user pool. The user pool defines the user data and registration flows. This application is configured to use an email address as the primary user name. It also requires email validation at the time of registration.
  2. Create a Cognito user pool client. The client application is connected to the user pool and has permission to call unauthenticated APIs to register and log in users. The client application configures the callback URLs as well as the identity providers, authentication flows, and OAuth scopes.
  3. Create a Cognito domain for the registration and login pages. I configure the domain to use the standard Cognito domains with a subdomain of shortener. I could also configure this to match a custom domain.
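The boto3 sketch below mirrors these three steps. The pool name, client name, and callback URLs are placeholders, and the OAuth settings shown are assumptions that should be matched to your own client application.

import boto3

cognito = boto3.client("cognito-idp", region_name="us-east-1")

# 1. User pool: email as the user name, verified at registration.
pool = cognito.create_user_pool(
    PoolName="url-shortener-users",           # placeholder
    UsernameAttributes=["email"],
    AutoVerifiedAttributes=["email"],
)
pool_id = pool["UserPool"]["Id"]

# 2. App client with hosted-UI OAuth settings.
cognito.create_user_pool_client(
    UserPoolId=pool_id,
    ClientName="url-shortener-web",           # placeholder
    GenerateSecret=False,
    SupportedIdentityProviders=["COGNITO"],
    CallbackURLs=["https://example.com/callback"],   # placeholder
    LogoutURLs=["https://example.com/signout"],      # placeholder
    AllowedOAuthFlowsUserPoolClient=True,
    AllowedOAuthFlows=["code"],
    AllowedOAuthScopes=["openid", "email"],
)

# 3. Hosted registration and login pages on a Cognito-owned subdomain.
cognito.create_user_pool_domain(Domain="shortener", UserPoolId=pool_id)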

Configuring the Cognito authorizer

Next, I integrate the user pool with API Gateway by creating a Cognito authorizer. The authorizer allows API Gateway to verify an incoming request with the user pool to allow or deny access. To configure the authorizer, I follow these steps:

  1. Create the authorizer on the API Gateway. I create a new authorizer and connect it to the proper Cognito user pool. I also set the header name to Authorization.
  2. Next, I attach the authorizer to each resource and method that needs authorization by this particular authorizer.

Configure the client for secure requests

The last step for authorized requests is to configure the client. As explained above, the client interacts with Amazon Cognito to authenticate and obtain temporary credentials. The truncated temporary credentials follow the format:

{
  "id_token": "eyJraWQiOiJnZ0pJZzBEV3F4SVUwZngreklE…",
  "access_token": "eyJraWQiOiJydVVHemFuYjJ0VlZicnV1…",
  "refresh_token": "eyJjdHkiOiJKV1QiLCJlbmMiOiJBMjU…",
  "expires_in": 3600,
  "token_type": "Bearer"
}

For the client to access any API Gateway resources that require authentication, it must include the Authorization header with the value set to the id_token. API Gateway treats it as a standard JSON Web Token (JWT) and decodes it for authorization.
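Here is a small Python sketch of an authenticated call from the client side, assuming a placeholder API endpoint and resource path and the id_token obtained above.

import requests

API_BASE = "https://abc123defg.execute-api.us-east-1.amazonaws.com/Prod"  # placeholder endpoint
id_token = "eyJraWQiOiJnZ0pJZzBEV3F4SVUwZngreklE..."  # from the Cognito login response

# API Gateway validates the token against the Cognito authorizer before
# the request ever reaches the DynamoDB integration.
response = requests.post(
    f"{API_BASE}/app",
    headers={"Authorization": id_token, "Content-Type": "application/json"},
    json={"id": "docs", "url": "https://example.com/documentation"},
)
print(response.status_code, response.json())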

Request validation

The next step in securing the application is to validate the request payload to ensure it contains the expected data. When creating a new short link, the POST method request body must match the following:

{
  "id": "short link",
  "url": "target url"
}

To configure request validation, I first create a schema defining the expected POST method body payload. The schema looks like this:

{
  "required" : [ "id", "url" ],
  "type" : "object",
  "properties" : {
    "id" : { "type" : "string"},
    "url" : {
      "pattern" : "^https?://[[email protected]:%._\\+~#=]{2,256}\\.[a-z]{2,6}\\b([[email protected]:%_\\+.~#?&//=]*)",
      "type" : "string”
    }
  }
}

The schema requires both id and url, and requires that both are strings. It also uses a regex pattern to ensure that the url value is in a valid URL format.

Next, I create request validator definitions. Using the OpenAPI extensibility markup, I create three validation options: all, params-only, and body-only. Here is the markup:

 

x-amazon-apigateway-request-validators:
  all:
    validateRequestBody: true
    validateRequestParameters: true
  body:
    validateRequestBody: true
    validateRequestParameters: false
  params:
    validateRequestBody: false
    validateRequestParameters: true

These definitions appear in the OpenAPI template and are mapped to the choices on the console.

Attaching validation to methods

With the validation definitions in place, and the schema defined, I then attach the schema to the POST method and require validation of the request body against the schema. If the conditions of the schema are not met, API Gateway rejects the request with a status code of 400 and an error message stating, “Invalid request body”.
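The sample application wires this up in the OpenAPI template (api.yaml). Purely for illustration, here is a rough boto3 sketch of the equivalent API calls, with placeholder API and resource IDs and a simplified version of the schema.

import json
import boto3

apigateway = boto3.client("apigateway", region_name="us-east-1")
REST_API_ID = "abc123defg"   # placeholder
RESOURCE_ID = "a1b2c3"       # placeholder ID of the /app resource

# Body-only validator, matching the "body" definition above.
validator = apigateway.create_request_validator(
    restApiId=REST_API_ID,
    name="body",
    validateRequestBody=True,
    validateRequestParameters=False,
)

# Simplified model; the real schema also carries the url regex pattern.
apigateway.create_model(
    restApiId=REST_API_ID,
    name="ShortLink",
    contentType="application/json",
    schema=json.dumps({
        "required": ["id", "url"],
        "type": "object",
        "properties": {"id": {"type": "string"}, "url": {"type": "string"}},
    }),
)

# Require the POST body to validate against the model.
apigateway.update_method(
    restApiId=REST_API_ID,
    resourceId=RESOURCE_ID,
    httpMethod="POST",
    patchOperations=[
        {"op": "replace", "path": "/requestValidatorId", "value": validator["id"]},
        {"op": "add", "path": "/requestModels/application~1json", "value": "ShortLink"},
    ],
)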

CORS

Cross-Origin Resource Sharing (CORS) is a mechanism for allowing applications from different domains to communicate. The limitations are based on headers exchanged between the client and the server. For example, the server passes the Access-Control-Allow-Origin header, which indicates which client domain is allowed to interact with the server. The client passes the Origin header, which indicates which domain the request is coming from. If the two headers do not match exactly, then the request is rejected.

It is also possible to use a wildcard value for many of the allowed values. For Origin, this means that any client domain can connect to the backend domain. While wildcards are possible, using them misses an opportunity to add another layer of security to the application. In light of this, I configure CORS to restrict API access to the client application. To help understand the different CORS settings required, here is a layout of the API endpoints:

API resource and methods structure

When an endpoint requires authorization, or a method other than GET is used, browsers perform a pre-flight OPTIONS check. This means they make a request to the server to find out what the server allows.

To accommodate this, I configure an OPTIONS response using an API Gateway mock endpoint. This is the header configuration for the /app OPTIONS call:

  • Access-Control-Allow-Methods: 'POST, GET, OPTIONS'
  • Access-Control-Allow-Headers: 'authorization, content-type'
  • Access-Control-Allow-Origin: '<client-domain>'

The configuration for the /app/{linkId} OPTIONS call is similar:

  • Access-Control-Allow-Methods: 'PUT, DELETE, OPTIONS'
  • Access-Control-Allow-Headers: 'authorization, content-type'
  • Access-Control-Allow-Origin: '<client-domain>'

In addition to the OPTIONS call, I also add the browser-required Access-Control-Allow-Origin header to the response headers of the PUT, POST, and DELETE methods.

Adding a header to the response is a two-step process. Because the response to the client is modeled at the Method Response, I first set the expected header here:

Response headers

The Integration Response is responsible for mapping the data from the integrated backend service to the proper values, so I map the value of the header here:

Response header values

With the proper IAM roles in place, authentication and authorization configured, and data validation enabled, I now have a secure backend to my serverless URL Shortener. Additionally, by making proper use of CORS I have given my test client access to the API to provide a full-stack application.

Conclusion

In this post, I demonstrate configuring built-in features of API Gateway to secure applications fronted with API Gateway. While this is not an exhaustive list of API Gateway features, it is a good starting point for API security and what can be done at the API Gateway level. In part three, I discuss how to observe and improve the performance of the application, as well as reporting on internal application metrics.

Continue to part three.

Happy coding!