Tag Archives: python

Develop a serverless application in Python using Amazon CodeWhisperer

Post Syndicated from Rafael Ramos original https://aws.amazon.com/blogs/devops/develop-a-serverless-application-in-python-using-amazon-codewhisperer/

While writing code to develop applications, developers must keep up with multiple programming languages, frameworks, software libraries, and popular cloud services from providers such as AWS. Developers can find code snippets on developer communities to learn from or repurpose, but manually searching for a snippet that matches their exact (or even a similar) use case is a distracting and time-consuming process. They have to do all of this while making sure that they’re following the correct programming syntax and best coding practices.

Amazon CodeWhisperer, a machine learning (ML)-powered coding assistant for developers, lets you overcome those challenges. Developers can simply write a comment that outlines a specific task in plain English, such as “upload a file to S3.” Based on this, CodeWhisperer automatically determines which cloud services and public libraries are best suited for the specified task, generates the code on the fly, and recommends the generated snippets directly in the IDE. And this isn’t copy-pasting code from the web: the code is generated based on the context of your file, such as which libraries and versions you have, as well as your existing code. Moreover, CodeWhisperer seamlessly integrates with your Visual Studio Code and JetBrains IDEs so that you can stay focused and never leave the development environment. At the time of this writing, CodeWhisperer supports Java, Python, JavaScript, C#, and TypeScript.

In this post, we’ll build a full-fledged, event-driven, serverless application for image recognition. With the aid of CodeWhisperer, you’ll write your own code that runs on top of AWS Lambda to interact with Amazon Rekognition, Amazon DynamoDB, Amazon Simple Notification Service (Amazon SNS), Amazon Simple Queue Service (Amazon SQS), Amazon Simple Storage Service (Amazon S3), and third-party HTTP APIs to perform image recognition. The users of the application can interact with it by either sending the URL of an image for processing, or by listing the images and the objects present on each image.

Solution overview

To make our application easier to digest, we’ll split it into three segments:

  1. Image download – The user provides an image URL to the first API. A Lambda function downloads the image from the URL and stores it on an S3 bucket. Amazon S3 automatically sends a notification to an Amazon SNS topic informing that a new image is ready for processing. Amazon SNS then delivers the message to an Amazon SQS queue.
  2. Image recognition – A second Lambda function handles the orchestration and processing of the image. It receives the message from the Amazon SQS queue, sends the image to Amazon Rekognition for processing, stores the recognition results in a DynamoDB table, and sends a message with those results as JSON to a second Amazon SNS topic used in segment three. A user can list the images and the objects present on each image by calling a second API which queries the DynamoDB table.
  3. Third-party integration – The last Lambda function reads the message from a second Amazon SQS queue, which subscribes to the second Amazon SNS topic. At this point, the Lambda function must deliver that message to a fictitious external e-mail server HTTP API that supports only XML payloads. Because of that, the Lambda function converts the JSON message to XML. Lastly, the function sends the XML object via HTTP POST to the e-mail server.

The following diagram depicts the architecture of our application:


Figure 1. Architecture diagram depicting the application architecture. It contains the service icons for the components explained in the text above.

Prerequisites

Before getting started, you must have the following prerequisites:

Configure environment

We already created the scaffolding for the application that we’ll build, which you can find on this Git repository. This application is represented by a CDK app that describes the infrastructure according to the architecture diagram above. However, the actual business logic of the application isn’t provided. You’ll implement it using CodeWhisperer. This means that we’ve already declared, using AWS CDK components, resources such as the API Gateway endpoints, DynamoDB table, and topics and queues. If you’re new to AWS CDK, then we encourage you to go through the CDK workshop later on.

Deploying AWS CDK apps into an AWS environment (a combination of an AWS account and region) requires that you provision resources that the AWS CDK needs to perform the deployment. These resources include an Amazon S3 bucket for storing files and IAM roles that grant permissions needed to perform deployments. The process of provisioning these initial resources is called bootstrapping. The required resources are defined in an AWS CloudFormation stack, called the bootstrap stack, which is usually named CDKToolkit. Like any CloudFormation stack, it appears in the CloudFormation console once it has been deployed.

After cloning the repository, let’s deploy the application (still without the business logic, which we’ll implement later on using CodeWhisperer). For this post, we’ll implement the application in Python. Therefore, make sure that you’re under the python directory. Then, use the cdk bootstrap command to bootstrap an AWS environment for AWS CDK. Replace {AWS_ACCOUNT_ID} and {AWS_REGION} with corresponding values first:

cdk bootstrap aws://{AWS_ACCOUNT_ID}/{AWS_REGION}

For more information about bootstrapping, refer to the documentation.

The last step to prepare your environment is to enable CodeWhisperer on your IDE. See Setting up CodeWhisperer for VS Code or Setting up Amazon CodeWhisperer for JetBrains to learn how to do that, depending on which IDE you’re using.

Image download

Let’s get started by implementing the first Lambda function, which is responsible for downloading an image from the provided URL and storing that image in an S3 bucket. Open the get_save_image.py file from the python/api/runtime/ directory. This file contains an empty Lambda function handler and the input parameters needed to integrate this Lambda function:

  • url is the URL of the input image provided by the user,
  • name is the name of the image provided by the user, and
  • S3_BUCKET is the S3 bucket name defined by our application infrastructure.

Write a comment in natural language that describes the required functionality, for example:

# Function to get a file from url

To trigger CodeWhisperer, hit the Enter key after entering the comment and wait for a code suggestion. If you want to manually trigger CodeWhisperer, then you can hit Option + C on macOS or Alt + C on Windows. You can browse through multiple suggestions (if available) with the arrow keys. Accept a code suggestion by pressing Tab. Discard a suggestion by pressing Esc or typing a character.

For more information on how to work with CodeWhisperer, see Working with CodeWhisperer in VS Code or Working with Amazon CodeWhisperer from JetBrains.

You should get a suggested implementation of a function that downloads a file using a specified URL. The following image shows an example of the code snippet that CodeWhisperer suggests:


Figure 2. Screenshot of the code generated by CodeWhisperer on VS Code. It has a function called get_file_from_url with the implementation suggestion to download a file using the requests lib.

Be aware that CodeWhisperer uses artificial intelligence (AI) to provide code recommendations, and that this is non-deterministic. The result you get in your IDE may be different from the one on the image above. If needed, fine-tune the code, as CodeWhisperer generates the core logic, but you might want to customize the details depending on your requirements.
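As a reference point, here is a minimal sketch of what such a suggestion might look like, assuming the requests library is available in your Lambda environment. The function name and details are illustrative, not the exact CodeWhisperer output:

import requests

# Download a file from the given URL and return its content as bytes
def get_file_from_url(url):
    response = requests.get(url, timeout=10)
    response.raise_for_status()  # Surface HTTP errors instead of storing bad data
    return response.content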

Let’s try another action, this time to upload the image to an S3 bucket:

# Function to upload image to S3

As a result, CodeWhisperer generates a code snippet similar to the following one:


Figure 3. Screenshot of the code generated by CodeWhisperer on VS Code. It has a function called upload_image with the implementation suggestion to download a file using the requests lib and upload it to S3 using the S3 client.
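Again, the exact suggestion will vary. A representative sketch using boto3, assuming the bucket name is exposed to the function through the S3_BUCKET environment variable (an assumption; check how the scaffolding passes it in), could be:

import os
import boto3

s3_client = boto3.client("s3")

# Upload image bytes to the application's S3 bucket under the given object name
def upload_image(image_bytes, name):
    bucket = os.environ["S3_BUCKET"]  # Assumed environment variable
    s3_client.put_object(Bucket=bucket, Key=name, Body=image_bytes)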

Now that you have functions to download an image from the web and upload it to an S3 bucket, you can wire them up in the Lambda handler by calling each function with the correct inputs, as in the sketch below.
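Using the two sketches above, a wired-up handler might look like this, assuming url and name arrive as query string parameters through an API Gateway proxy integration (an assumption about the event shape):

def lambda_handler(event, context):
    # Assumes an API Gateway proxy event; adjust if your integration differs
    params = event.get("queryStringParameters") or {}
    image_bytes = get_file_from_url(params["url"])
    upload_image(image_bytes, params["name"])
    return {"statusCode": 200, "body": f"Stored image {params['name']}"}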

Image recognition

Now let’s implement the Lambda function responsible for sending the image to Amazon Rekognition for processing, storing the results in a DynamoDB table, and sending a message with those results as JSON to a second Amazon SNS topic. Open the image_recognition.py file from the python/recognition/runtime/ directory. This file contains an empty Lambda function handler and the input parameters needed to integrate this Lambda function:

  • queue_url is the URL of the Amazon SQS queue to which this Lambda function is subscribed,
  • table_name is the name of the DynamoDB table, and
  • topic_arn is the ARN of the Amazon SNS topic to which this Lambda function publishes.

Using CodeWhisperer, implement the business logic of the next Lambda function as you did in the previous section. For example, to detect the labels from an image using Amazon Rekognition, write the following comment:

# Detect labels from image with Rekognition

And as a result, CodeWhisperer should give you a code snippet similar to the one in the following image:


Figure 4. Screenshot of the code generated by CodeWhisperer on VS Code. It has a function called detect_labels with the implementation suggestion to use the Rekognition SDK to detect labels on the given image.
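As before, treat the following as an illustrative sketch rather than the exact suggestion; it uses the boto3 Rekognition client against an image already stored in S3:

import boto3

rekognition = boto3.client("rekognition")

# Detect up to 10 labels in an S3-hosted image with Amazon Rekognition
def detect_labels(bucket, key):
    response = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MaxLabels=10,
    )
    return [label["Name"] for label in response["Labels"]]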

You can continue generating the other functions that you need to fully implement the business logic of your Lambda function. Here are some example comments that you can use (illustrative sketches of the resulting functions follow the list):

  • # Save labels to DynamoDB
  • # Publish item to SNS
  • # Delete message from SQS
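The following sketches show one plausible shape for each of those functions. They assume, for illustration only, that the DynamoDB table uses the image name as its partition key; they are not the exact CodeWhisperer output:

import json
import boto3

dynamodb = boto3.resource("dynamodb")
sns = boto3.client("sns")
sqs = boto3.client("sqs")

# Save the detected labels to the DynamoDB table
def save_labels(table_name, image_name, labels):
    table = dynamodb.Table(table_name)
    table.put_item(Item={"image": image_name, "labels": labels})

# Publish the recognition results to the second SNS topic
def publish_item(topic_arn, item):
    sns.publish(TopicArn=topic_arn, Message=json.dumps(item))

# Delete the processed message from the SQS queue
def delete_message(queue_url, receipt_handle):
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=receipt_handle)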

Following the same approach, open the list_images.py file from the python/recognition/runtime/ directory to implement the logic to list all of the labels from the DynamoDB table. As you did previously, type a comment in plain English:

# Function to list all items from a DynamoDB table
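Your suggestion may differ, but a common shape for this function is a paginated scan, as in this sketch:

import boto3

dynamodb = boto3.resource("dynamodb")

# List all items from the DynamoDB table, following scan pagination
def list_items(table_name):
    table = dynamodb.Table(table_name)
    response = table.scan()
    items = response["Items"]
    while "LastEvaluatedKey" in response:  # Keep scanning until all pages are read
        response = table.scan(ExclusiveStartKey=response["LastEvaluatedKey"])
        items.extend(response["Items"])
    return items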

Other frequently used code

Interacting with AWS isn’t the only way that you can leverage CodeWhisperer. You can use it to implement repetitive tasks, such as creating unit tests and converting message formats, or to implement algorithms like sorting, string matching, and parsing. The last Lambda function that we’ll implement as part of this post converts a JSON payload received from Amazon SQS to XML. Then, we’ll POST this XML to an HTTP endpoint.

Open the send_email.py file from the python/integration/runtime/ directory. This file contains an empty Lambda function handler. An event is a JSON-formatted document that contains data for a Lambda function to process. Type a comment with your intent to get the code snippet:

# Transform json to xml

As CodeWhisperer uses the context of your files to generate code, the implementation you get depends on the imports in your file, such as the one in the following image:


Figure 5. Screenshot of the code generated by CodeWhisperer on VS Code. It has a function called json_to_xml with the implementation suggestion to transform JSON payload into XML payload.
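For instance, if your file imports only the standard library, the suggestion might resemble the following sketch, which flattens a JSON object into simple XML elements (nested values are stringified for brevity):

import json
import xml.etree.ElementTree as ET

# Convert a JSON string into a flat XML document with one element per key
def json_to_xml(json_string):
    data = json.loads(json_string)
    root = ET.Element("message")
    for key, value in data.items():
        child = ET.SubElement(root, key)
        child.text = str(value)
    return ET.tostring(root, encoding="unicode")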

Repeat the same process with a comment such as # Send XML string with HTTP POST to get the last function implementation. Note that the email server isn’t part of this implementation. You can mock it, or simply ignore this HTTP POST step. Lastly, wire up both functions in the Lambda handler function by calling each function with the correct inputs.
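For reference, a sketch of that POST step using the requests library might look like this; the endpoint URL is a placeholder, since the e-mail server is fictitious:

import requests

# Send the XML payload to the (fictitious) e-mail server endpoint
def send_xml(xml_string, endpoint_url="https://example.com/email"):
    response = requests.post(
        endpoint_url,
        data=xml_string,
        headers={"Content-Type": "application/xml"},
        timeout=10,
    )
    response.raise_for_status()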

Deploy and test the application

To deploy the application, run the command cdk deploy --all. You should get a confirmation message, and after a few minutes your application will be up and running on your AWS account. As outputs, the APIStack and RekognitionStack will print the API Gateway endpoint URLs, which will look similar to this example:

Outputs:
...
APIStack.RESTAPIEndpoint01234567 = https://examp1eid0.execute-api.{your-region}.amazonaws.com/prod/

  1. The first endpoint expects two string parameters: url (the image file URL to download) and name (the target file name that will be stored on the S3 bucket). Use any image URL you like, but remember that you must encode an image URL before passing it as a query string parameter to escape the special characters. Use an online URL encoder of your choice for that. Then, use the curl command to invoke the API Gateway endpoint:

curl -X GET 'https://examp1eid0.execute-api.eu-east-2.amazonaws.com/prod?url={encoded-image-URL}&name={file-name}'

Replace {encoded-image-URL} and {file-name} with the corresponding values. Also, make sure that you use the correct API endpoint that you’ve noted from the AWS CDK deploy command output as mentioned above.
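Alternatively, you can produce the encoded URL with a couple of lines of Python:

from urllib.parse import quote

image_url = "https://example.com/path/to/image.jpg"  # Illustrative URL
print(quote(image_url, safe=""))  # Percent-encodes every special character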

  2. It will take a few seconds for the processing to happen in the background. Once it’s ready, see what has been stored in the DynamoDB table by invoking the List Images API (make sure that you use the correct URL from the output of your deployed AWS CDK stack):
curl -X GET 'https://examp1eid7.execute-api.eu-east-2.amazonaws.com/prod'

After you’re done, to avoid unexpected charges to your account, make sure that you clean up your AWS CDK stacks. Use the cdk destroy command to delete the stacks.

Conclusion

In this post, we’ve seen how to get a significant productivity boost with the help of ML. With that, as a developer, you can stay focused on your IDE and reduce the time that you spend searching online for code snippets that are relevant for your use case. Writing comments in natural language, you get context-based snippets to implement full-fledged applications. In addition, CodeWhisperer comes with a mechanism called reference tracker, which detects whether a code recommendation might be similar to particular CodeWhisperer training data. The reference tracker lets you easily find and review that reference code and see how it’s used in the context of another project. Lastly, CodeWhisperer provides the ability to run scans on your code (generated by CodeWhisperer as well as written by you) to detect security vulnerabilities.

During the preview period, CodeWhisperer is available to all developers across the world for free. Get started with the free preview on JetBrains, VS Code or AWS Cloud9.

About the authors:

Rafael Ramos

Rafael is a Solutions Architect at AWS, where he helps ISVs on their journey to the cloud. He spent over 13 years working as a software developer, and is passionate about DevOps and serverless. Outside of work, he enjoys playing tabletop RPGs, cooking and running marathons.

Caroline Gluck

Caroline is an AWS Cloud application architect based in New York City, where she helps customers design and build cloud native data science applications. Caroline is a builder at heart, with a passion for serverless architecture and machine learning. In her spare time, she enjoys traveling, cooking, and spending time with family and friends.

Jason Varghese

Jason is a Senior Solutions Architect at AWS guiding enterprise customers on their cloud migration and modernization journeys. He has served in multiple engineering leadership roles and has over 20 years of experience architecting, designing and building scalable software solutions. Jason holds a bachelor’s degree in computer engineering from the University of Oklahoma and an MBA from the University of Central Oklahoma.

Dmitry Balabanov

Dmitry is a Solutions Architect with AWS where he focuses on building reusable assets for customers across multiple industries. With over 15 years of experience in designing, building, and maintaining applications, he still loves learning new things. When not at work, he enjoys paragliding and mountain trekking.

Take part in the Hour of Code

Post Syndicated from Liz Smart original https://www.raspberrypi.org/blog/hour-of-code-activities/

Launched in 2013, Hour of Code is an initiative to introduce young people to computer science using fun one-hour tutorials. To date, over 100 million young people have completed an hour of code with it. 

A girl doing a physical computing project.

Although the Hour of Code website is accessible all year round, every December for Computer Science Education Week people worldwide run their own Hour of Code events. Each year we love seeing many Code Clubs, CoderDojos, and young people at home across the community complete their Hour of Code. You can register your 2022 Hour of Code event now to run between 5 and 11 December. 

To support your event, we have pulled together a bumper set of our free coding projects, which can each be completed in just one hour. You will find these activities on the Hour of Code website.

Two young digital makers using Raspberry Pi

There’s something for all ages and levels of experience, so put an hour aside and help young people make something fabulous with code:

Ages 7–11

Beginner

For younger creators new to coding, a Scratch project is a great place to start. 


With our Space talk project, they can create a space scene with characters that ‘emote’ to share their thoughts or feelings using sounds, colours, and actions. Creators program the character emotes using Scratch blocks to control graphic effects, costume animation, and sound effects. 

Alternatively, our Stress ball project lets them code an onscreen stress ball that reacts to user clicks. Creators use the Paint and Sound editors in Scratch to personalise a clickable stress ball, and they add Scratch blocks to control graphic effects, costume animation, and sound effects. 

We love this fun stress ball example sent to us recently by young creator April from the United States.

Another great option is to use Code Club World, which is a free tool to help children who are new to coding.  

Creators can develop a character avatar, design a T-shirt, make some music, and more.

Comfortable

For 7- to 11-year-olds who are more comfortable with block-based coding, our project Broadcasting spells is ideal to choose. With the project, they connect Scratch blocks to code a wand that casts spells turning sprites into toads, and growing and shrinking them. Creators use broadcast blocks to transform multiple sprites at once, and they create sound effects with the Sound editor in Scratch. 


Ages 11–14

Beginner

We have three exciting projects for trying text-based coding during Hour of Code in this category. The first, Anime expressions, is one of our brand-new ‘Introduction to web development’ projects. With this project, young people create a responsive webpage with text and images for an anime drawing tutorial. They write HTML to structure the webpage and CSS styles to apply layout, colour palettes, and fonts. 

For a great introduction to coding with Python, we have the project Hello world from our ‘Introduction to Python’ path. With this project, creators write Python text-based code to create an interactive program that shows text and emojis based on user input. They learn about variables as they use them to store text and numbers, and they learn about writing functions to organise code and do calculations, retrieve the current date and time, and make a customisable dice. 


LED firefly is a fantastic physical making project in which young people use a Raspberry Pi Pico microcontroller and basic electronic components to create a blinking LED firefly. They program the LED’s light patterns with MicroPython code and activate it via a switch they make themselves using jumper wires.

A blinking LED with paper wings.

Comfortable

For 11- to 14-year-olds who are already comfortable with HTML, the Flip treat webcards project is a fun option. With this, they create a webpage showing a set of cards that flip when a visitor’s mouse pointer hovers over them. Creators use CSS styling and animations to add interactivity, then they customise the cards with fancy fonts and colour gradients.

Young people who have already done some Python coding can try out our project Target practice. With this project they create a game, using the p5 graphics library to draw a colourful target, and writing code so that the player scores points by hitting the target’s rings with arrows. While they create the project, they learn about RGB colours, shape positioning with x and y coordinates, and decisions using if, elif, and else code statements. 

Ages 14+

Beginner

Our project Charting champions is a great introduction to data visualisation and analysis for coders aged 15 and older. With the project, they will discover the power of the Python programming language as they store Olympic medal data in lists and use the pygal library to create an interactive chart.


Comfortable

Teenage coders who feel comfortable with Python programming can use our project Solar system simulator to code an animated, interactive solar system model using the Python p5 graphics library. Their model will be interactive, as they’ll use dictionaries to store planet facts that display when a user clicks on an orbiting planet.

Coding for Hour of Code and beyond

Now is the time to register your Hour of Code event, then decide which project you’d like to support young people to create. You can download certificates for each of the creators from the Hour of Code certificates page.

And make sure to check out our project paths so you know which projects you can help the young people you support to code beyond this one hour of code. 

We don’t just create activities so that other people can experience coding and digital making — we also get involved ourselves!

Two members of the Code Club working at computers.

Recently, our teams who support the Code Club and CoderDojo networks got together to make LED fireflies. We are excited to get coding again as part of Hour of Code and Computer Science Education Week.


Enrich VPC Flow Logs with resource tags and deliver data to Amazon S3 using Amazon Kinesis Data Firehose

Post Syndicated from Chaitanya Shah original https://aws.amazon.com/blogs/big-data/enrich-vpc-flow-logs-with-resource-tags-and-deliver-data-to-amazon-s3-using-amazon-kinesis-data-firehose/

VPC Flow Logs is an AWS feature that captures information about the network traffic flows going to and from network interfaces in Amazon Virtual Private Cloud (Amazon VPC). Visibility into the network traffic flows of your application can help you troubleshoot connectivity issues, architect your application and network for improved performance, and improve security of your application.

Each VPC flow log record contains the source and destination IP address fields for the traffic flows. The records also contain the Amazon Elastic Compute Cloud (Amazon EC2) instance ID that generated the traffic flow, which makes it easier to identify the EC2 instance and its associated VPC, subnet, and Availability Zone from where the traffic originated. However, when you have a large number of EC2 instances running in your environment, it may not be obvious where the traffic is coming from or going to simply based on the EC2 instance IDs or IP addresses contained in the VPC flow log records.

By enriching flow log records with additional metadata such as resource tags associated with the source and destination resources, you can more easily understand and analyze traffic patterns in your environment. For example, customers often tag their resources with resource names and project names. By enriching flow log records with resource tags, you can easily query and view flow log records based on an EC2 instance name, or identify all traffic for a certain project.

In addition, you can add resource context and metadata about the destination resource such as the destination EC2 instance ID and its associated VPC, subnet, and Availability Zone based on the destination IP in the flow logs. This way, you can easily query your flow logs to identify traffic crossing Availability Zones or VPCs.

In this post, you will learn how to enrich VPC flow logs with the tags associated with their source and destination resources, in a completely serverless model using Amazon Kinesis Data Firehose and the recently launched Amazon VPC IP Address Manager (IPAM), and how to analyze and visualize the flow logs using Amazon Athena and Amazon QuickSight.

Solution overview

In this solution, you enable VPC flow logs and stream them to Kinesis Data Firehose. This solution enriches log records using an AWS Lambda function on Kinesis Data Firehose in a completely serverless manner. The Lambda function fetches resource tags for the instance ID. It also looks up the destination resource from the destination IP using the Amazon EC2 API and IPAM, and adds the associated VPC network context and metadata for the destination resource. It then stores the enriched log records in an Amazon Simple Storage Service (Amazon S3) bucket. After you have enriched your flow logs, you can query, view, and analyze them in a wide variety of services, such as AWS Glue, Athena, QuickSight, Amazon OpenSearch Service, as well as solutions from the AWS Partner Network such as Splunk and Datadog.

The following diagram illustrates the solution architecture.

Architecture

The workflow contains the following steps:

  1. Amazon VPC sends the VPC flow logs to the Kinesis Data Firehose delivery stream.
  2. The delivery stream uses a Lambda function to fetch resource tags for the instance ID in each flow log record and add them to the record. You can also fetch tags for the source and destination IP address and enrich the flow log record.
  3. When the Lambda function finishes processing all the records from the Kinesis Data Firehose buffer with enriched information like resource tags, Kinesis Data Firehose stores the result file in the destination S3 bucket. Any failed records that Kinesis Data Firehose couldn’t process are stored in the destination S3 bucket under the prefix you specify during delivery stream setup.
  4. All the logs for the delivery stream and Lambda function are stored in Amazon CloudWatch log groups.

Prerequisites

As a prerequisite, you need to create the target S3 bucket before creating the Kinesis Data Firehose delivery stream.

If using a Windows computer, you need PowerShell; if using a Mac, you need Terminal to run AWS Command Line Interface (AWS CLI) commands. To install the latest version of the AWS CLI, refer to Installing or updating the latest version of the AWS CLI.

Create a Lambda function

You can download the Lambda function code from the GitHub repo used in this solution. The example in this post assumes you are enabling all the available fields in the VPC flow logs. You can use it as is or customize per your needs. For example, if you intend to use the default fields when enabling the VPC flow logs, you need to modify the Lambda function with the respective fields. Creating this function creates an AWS Identity and Access Management (IAM) Lambda execution role.

To create your Lambda function, complete the following steps:

  1. On the Lambda console, choose Functions in the navigation pane.
  2. Choose Create function.
  3. Select Author from scratch.
  4. For Function name, enter a name.
  5. For Runtime, choose Python 3.8.
  6. For Architecture, select x86_64.
  7. For Execution role, select Create a new role with basic Lambda permissions.
  8. Choose Create function.

Create Lambda Function

You can then see the code source page, as shown in the following screenshot, with the default code in the lambda_function.py file.

  9. Delete the default code and enter the code from the GitHub Lambda function aws-vpc-flowlogs-enricher.py.
  10. Choose Deploy.

VPC Flow Logs Enricher function
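The full function is in the GitHub repo; as an orientation, a Kinesis Data Firehose transformation handler follows the shape of the simplified sketch below. It assumes each record carries one space-separated flow log line matching the --log-format used later in this post, and it enriches only the source instance tags (the real function also enriches destination metadata):

import base64
import json
import boto3

ec2 = boto3.client("ec2")

# Field names in the order of the --log-format used for the flow log subscription
FIELD_NAMES = [
    "account-id", "action", "az-id", "bytes", "dstaddr", "dstport", "end",
    "flow-direction", "instance-id", "interface-id", "log-status", "packets",
    "pkt-dst-aws-service", "pkt-dstaddr", "pkt-src-aws-service", "pkt-srcaddr",
    "protocol", "region", "srcaddr", "srcport", "start", "sublocation-id",
    "sublocation-type", "subnet-id", "tcp-flags", "traffic-path", "type",
    "version", "vpc-id",
]

# Firehose transformation handler: decode, enrich, and re-encode each record
def lambda_handler(event, context):
    output = []
    for record in event["records"]:
        line = base64.b64decode(record["data"]).decode("utf-8").strip()
        payload = dict(zip(FIELD_NAMES, line.split()))
        instance_id = payload.get("instance-id")
        if instance_id and instance_id != "-":
            # Fetch the tags attached to the source EC2 instance
            tags = ec2.describe_tags(
                Filters=[{"Name": "resource-id", "Values": [instance_id]}]
            )["Tags"]
            for tag in tags:
                payload[f"src-tag-{tag['Key']}"] = tag["Value"]
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(json.dumps(payload).encode("utf-8")).decode("utf-8"),
        })
    return {"records": output}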

To enrich the flow logs with additional tag information, you need to create an additional IAM policy to give Lambda permission to describe tags on resources from the VPC flow logs.

  1. On the IAM console, choose Policies in the navigation pane.
  2. Choose Create policy.
  3. On the JSON tab, enter the JSON code as shown in the following screenshot.

This policy gives the Lambda function permission to retrieve tags for the source and destination IP and retrieve the VPC ID, subnet ID, and other relevant metadata for the destination IP from your VPC flow log record.
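As a representative example (the exact policy is the one in the screenshot), a read-only policy granting such describe permissions might look like the following; consider scoping it down further for production use:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeTags",
                "ec2:DescribeInstances",
                "ec2:DescribeNetworkInterfaces",
                "ec2:DescribeSubnets",
                "ec2:DescribeVpcs"
            ],
            "Resource": "*"
        }
    ]
}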

  4. Choose Next: Tags.

Tags

  5. Add any tags and choose Next: Review.

  6. For Name, enter vpcfl-describe-tag-policy.
  7. For Description, enter a description.
  8. Choose Create policy.

Create IAM Policy

  9. Navigate to the previously created Lambda function and choose Permissions in the navigation pane.
  10. Choose the role that was created by the Lambda function.

A page opens in a new tab.

  11. On the Add permissions menu, choose Attach policies.

Add Permissions

  12. Search for the vpcfl-describe-tag-policy you just created.
  13. Select the vpcfl-describe-tag-policy and choose Attach policies.

Create the Kinesis Data Firehose delivery stream

To create your delivery stream, complete the following steps:

  1. On the Kinesis Data Firehose console, choose Create delivery stream.
  2. For Source, choose Direct PUT.
  3. For Destination, choose Amazon S3.

Kinesis Firehose Stream Source and Destination

After you choose Amazon S3 for Destination, the Transform and convert records section appears.

  4. For Data transformation, select Enable.
  5. Browse and choose the Lambda function you created earlier.
  6. You can customize the buffer size as needed.

This determines how many records the delivery stream buffers before flushing them to Amazon S3.

  7. You can also customize the buffer interval as needed.

This impacts how long (in seconds) the delivery stream will buffer the incoming records from the VPC.

  8. Optionally, you can enable Record format conversion.

If you want to query from Athena, it’s recommended to convert it to Apache Parquet or ORC and compress the files with available compression algorithms, such as gzip and snappy. For more performance tips, refer to Top 10 Performance Tuning Tips for Amazon Athena. In this post, record format conversion is disabled.

Transform and convert records

  9. For S3 bucket, choose Browse and choose the S3 bucket you created as a prerequisite to store the flow logs.
  10. Optionally, you can specify the S3 bucket prefix. The following expression creates a Hive-style partition for year, month, and day:

AWSLogs/year=!{timestamp:YYYY}/month=!{timestamp:MM}/day=!{timestamp:dd}/

  11. Optionally, you can enable dynamic partitioning.

Dynamic partitioning enables you to create targeted datasets by partitioning streaming S3 data based on partitioning keys. The right partitioning can help you to save costs related to the amount of data that is scanned by analytics services like Athena. For more information, see Kinesis Data Firehose now supports dynamic partitioning to Amazon S3.

Note that you can enable dynamic partitioning only when you create a new delivery stream. You can’t enable dynamic partitioning for an existing delivery stream.

Destination Settings

  12. Expand Buffer hints, compression and encryption.
  13. Set the buffer size to 128 MiB and the buffer interval to 900 seconds for best performance.
  14. For Compression for data records, select GZIP.

S3 Buffer settings

Create a VPC flow log subscription

Now you create a VPC flow log subscription for the Kinesis Data Firehose delivery stream you created.

Navigate to AWS CloudShell or Terminal/PowerShell for a Mac or Windows computer and run the following AWS CLI command to enable the subscription. Provide your VPC ID for the parameter --resource-ids and delivery stream ARN for the parameter --log-destination.

aws ec2 create-flow-logs \ 
--resource-type VPC \ 
--resource-ids vpc-0000012345f123400d \ 
--traffic-type ALL \ 
--log-destination-type kinesis-data-firehose \ 
--log-destination arn:aws:firehose:us-east-1:123456789101:deliverystream/PUT-Kinesis-Demo-Stream \ 
--max-aggregation-interval 60 \ 
--log-format '${account-id} ${action} ${az-id} ${bytes} ${dstaddr} ${dstport} ${end} ${flow-direction} ${instance-id} ${interface-id} ${log-status} ${packets} ${pkt-dst-aws-service} ${pkt-dstaddr} ${pkt-src-aws-service} ${pkt-srcaddr} ${protocol} ${region} ${srcaddr} ${srcport} ${start} ${sublocation-id} ${sublocation-type} ${subnet-id} ${tcp-flags} ${traffic-path} ${type} ${version} ${vpc-id}'

If you’re running CloudShell for the first time, it will take a few seconds to prepare the environment to run.

After you successfully enable the subscription for your VPC flow logs, it takes a few minutes, depending on the buffer intervals configured in the setup, for the log record files to appear in the destination S3 bucket.

To view those files, navigate to the Amazon S3 console and choose the bucket storing the flow logs. You should see the compressed interval logs, as shown in the following screenshot.

S3 destination bucket

You can download any file from the destination S3 bucket on your computer. Then extract the gzip file and view it in your favorite text editor.

The following is a sample enriched flow log record; the new fields at the end (those beginning with src-tag-, dst-tag-, and dst-) provide added context and metadata about the source and destination IP addresses:

{'account-id': '123456789101',
 'action': 'ACCEPT',
 'az-id': 'use1-az2',
 'bytes': '7251',
 'dstaddr': '10.10.10.10',
 'dstport': '52942',
 'end': '1661285182',
 'flow-direction': 'ingress',
 'instance-id': 'i-123456789',
 'interface-id': 'eni-0123a456b789d',
 'log-status': 'OK',
 'packets': '25',
 'pkt-dst-aws-service': '-',
 'pkt-dstaddr': '10.10.10.11',
 'pkt-src-aws-service': 'AMAZON',
 'pkt-srcaddr': '52.52.52.152',
 'protocol': '6',
 'region': 'us-east-1',
 'srcaddr': '52.52.52.152',
 'srcport': '443',
 'start': '1661285124',
 'sublocation-id': '-',
 'sublocation-type': '-',
 'subnet-id': 'subnet-01eb23eb4fe5c6bd7',
 'tcp-flags': '19',
 'traffic-path': '-',
 'type': 'IPv4',
 'version': '5',
 'vpc-id': 'vpc-0123a456b789d',
 'src-tag-Name': 'test-traffic-ec2-1',
 'src-tag-project': 'Log Analytics',
 'src-tag-team': 'Engineering',
 'dst-tag-Name': 'test-traffic-ec2-1',
 'dst-tag-project': 'Log Analytics',
 'dst-tag-team': 'Engineering',
 'dst-vpc-id': 'vpc-0bf974690f763100d',
 'dst-az-id': 'us-east-1a',
 'dst-subnet-id': 'subnet-01eb23eb4fe5c6bd7',
 'dst-interface-id': 'eni-01eb23eb4fe5c6bd7',
 'dst-instance-id': 'i-06be6f86af0353293'}

Create an Athena database and AWS Glue crawler

Now that you have enriched the VPC flow logs and stored them in Amazon S3, the next step is to create the Athena database and table to query the data. You first create an AWS Glue crawler to infer the schema from the log files in Amazon S3.

  1. On the AWS Glue console, choose Crawlers in the navigation pane.
  2. Choose Create crawler.

Glue Crawler

  3. For Name, enter a name for the crawler.
  4. For Description, enter an optional description.
  5. Choose Next.

Glue Crawler properties

  6. Choose Add a data source.
  7. For Data source, choose S3.
  8. For S3 path, provide the path of the flow logs bucket.
  9. Select Crawl all sub-folders.
  10. Choose Add an S3 data source.

Add Data source

  11. Choose Next.

Data source classifiers

  12. Choose Create new IAM role.
  13. Enter a role name.
  14. Choose Next.

Configure security settings

  15. Choose Add database.
  16. For Name, enter a database name.
  17. For Description, enter an optional description.
  18. Choose Create database.

Create Database

  19. On the previous tab for the AWS Glue crawler setup, for Target database, choose the newly created database.
  20. Choose Next.

Set output and scheduling

  21. Review the configuration and choose Create crawler.

Create crawler

  22. On the Crawlers page, select the crawler you created and choose Run.

Run crawler

You can rerun this crawler when new tags are added to your AWS resources, so that they’re available for you to query from the Athena database.

Run Athena queries

Now you’re ready to query the enriched VPC flow logs from Athena.

  1. On the Athena console, open the query editor.
  2. For Database, choose the database you created.
  3. Enter the query as shown in the following screenshot and choose Run.

Athena query

The following code shows some of the sample queries you can run:

SELECT * FROM awslogs WHERE "dst-az-id" = 'us-east-1a'
SELECT * FROM awslogs WHERE "src-tag-project" = 'Log Analytics' OR "dst-tag-team" = 'Engineering'
SELECT "srcaddr", "srcport", "dstaddr", "dstport", "region", "az-id", "dst-az-id", "flow-direction" FROM awslogs WHERE "az-id" = 'use1-az2' AND "dst-az-id" = 'us-east-1a'

The following screenshot shows an example query result of the source Availability Zone to the destination Availability Zone traffic.

Athena query result

You can also visualize various charts for the flow logs stored in the S3 bucket via QuickSight. For more information, refer to Analyzing VPC Flow Logs using Amazon Athena and Amazon QuickSight.

Pricing

For pricing details, refer to Amazon Kinesis Data Firehose pricing.

Clean up

To clean up your resources, complete the following steps:

  1. Delete the Kinesis Data Firehose delivery stream and associated IAM role and policies.
  2. Delete the target S3 bucket.
  3. Delete the VPC flow log subscription.
  4. Delete the Lambda function and associated IAM role and policy.

Conclusion

This post provided a completely serverless solution for enriching VPC flow log records with additional information, such as resource tags, using a Kinesis Data Firehose delivery stream and a Lambda function that processes the logs, enriches them with metadata, and stores them in a target S3 bucket. Because resource tags are attached to the log records, you can query, analyze, and visualize VPC flow logs with relevant application metadata, which makes it easy to associate log information with your application.

We encourage you to follow the steps provided in this post to create a delivery stream, integrate with your VPC flow logs, and create a Lambda function to enrich the flow log records with additional metadata to more easily understand and analyze traffic patterns in your environment.


About the Authors

Chaitanya Shah is a Sr. Technical Account Manager with AWS, based out of New York. He has over 22 years of experience working with enterprise customers. He loves to code and actively contributes to AWS solutions labs to help customers solve complex problems. He provides guidance to AWS customers on best practices for their AWS Cloud migrations. He is also specialized in AWS data transfer and in the data and analytics domain.

Vaibhav Katkade is a Senior Product Manager in the Amazon VPC team. He is interested in areas of network security and cloud networking operations. Outside of work, he enjoys cooking and the outdoors.

Learn to program in Python with our online courses

Post Syndicated from Rosa Brown original https://www.raspberrypi.org/blog/learn-to-program-in-python-online-courses-for-teachers/

If you’re new to teaching programming or looking to build or refresh your programming knowledge, we have a free resource that is perfect for you. Our ‘Learn to program in Python’ online course pathway is for educators who want to develop their understanding of the text-based language Python. Each course is packed with information and activities to help you apply what you learn in your classroom teaching.

A computing teacher and a learner do physical computing in the primary school classroom.

Why learn to program in Python?

Writing a program in Python is very similar to writing in English, which makes starting to program much easier. Python is also a general-purpose programming language, so once you’ve learned the basics, you can use Python for lots of different programming activities.
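For example, even a first Python program reads almost like an English sentence:

name = input("What is your name? ")
print("Hello,", name)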

That’s why Python is a perfect choice for learning to program, and why many of our educational resources involve Python. Our seven online Python courses cover aspects from taking your first steps into programming, to writing a program to control an electronic circuit, to learning about object-oriented programming.

With time and practice, you will be able to use Python programming to create unique solutions to problems, build helpful tools, and make things that are important to you.

How does the Python course pathway work? 

The courses in the pathway have been written by our educators and include advice and activities to help you teach programming in your classroom. You can reuse the course activities to explain programming concepts to your learners and get them to write programs themselves. Because you will have first-hand experience of the activities, you’ll be able to anticipate your learners’ difficulties and adapt your lessons to suit them.

In a computing classroom, a smiling girl raises her hand.

All the courses are designed to take three or four weeks to complete, based on you spending two hours a week on participating. You can have free time-limited access to each course for the length of time it’s designed to take to complete. For example, if it’s a four-week course, like ‘Programming 101’, you can sign up for free to get four weeks of access.

The seven courses in the Python path can be completed in any order you like, and you can choose the courses that match your interests and needs.

A room of educators at desktop computers.

Each course involves activities that help you create a programming project using the concepts that you’re learning about. These activities are designed to be a fun and interactive way to reinforce what you’ve learned and can also be used with your learners in the classroom.

Course spotlight: Programming 101

If programming is completely new to you, our ‘Programming 101’ course is the best place to start. In ‘Programming 101’, we start from the idea that programming is about you telling a computer what to do, using this definition:

“Programming is how you get computers to solve problems.” 

We see programming as a chance to think creatively about a problem and about all the different ways it could be solved. While you might be unfamiliar with terms like programming, algorithms, or selection, the ‘Programming 101’ course demonstrates how they touch on things that many of us know from other areas of our lives.

On the course, you will:

  • Learn about basic programming concepts such as sequencing and repetition
  • Start to write your own programs
  • Discover how to interpret error messages to find and fix mistakes in your programs

What will you make in the courses?

Through building an understanding of programming, you will see how you can write your own programs to make games, quizzes, physical computing projects, and more. Here’s a look at some of the things you could make in three of the seven courses:

  • Programming 101: Write your first program in Python to make a personal assistant bot. You’ll discover how to make the output of your program respond to the user’s input.  
You’ll write a program to create a personal assistant bot in the ‘Programming 101’ course for beginners.
  • Programming with GUIs: Build a game where players compare two sets of emoji to find the emoji that matches. To make this game, you’ll use what you learn in the course to design the layout of a graphical user interface (GUI) and make sure only one emoji appears twice. 
You’ll make an interactive graphic game in the ‘Programming with GUIs’ course.
  • Object-oriented Programming: Create a text-based adventure game with a character on a quest through different rooms! You’ll discover how to write a program that reacts to user input, and how to write your own code to create more challenges within the game based on your ideas.    

So check out our courses and start gaining Python programming skills today!

Python programming resources for young people

If you want to help your learners develop their understanding of programming in Python, you’ll be interested in these free resources we’ve created for young people: 

Introduction to Python: Our guided project path for learners who are new to text-based programming. We have created these projects with young people around the age of 9 to 13 in mind. Each project takes one hour to complete, and learners can make their own fun programs while learning about Python.

More Python: Our guided project path for learners who want to move beyond the ‘Intro to Python’ path to write programs that contain charts, artwork, and more. We’ve written these projects for young people around the age of 10 to 13.

Isaac Computer Science: This learning platform we’ve created for GCSE and A level students (age 14 to 18) uses Python and other text-based languages to teach the programming concepts within England’s computer science curriculum.   


How do I start my child coding?

Post Syndicated from Marc Scott original https://www.raspberrypi.org/blog/how-do-i-start-my-child-coding/

You may have heard a lot about coding and how important it is for children to start learning about coding as early as possible. Computers have become part of our lives, and we’re not just talking about the laptop or desktop computer you might have in your home or on your desk at work. Your phone, your microwave, and your car are all controlled by computers, and those computers need instructions to tell them what to do. Coding, or computer programming, involves writing those instructions.

A boy types code at a CoderDojo coding club.

If children discover a love for coding, they will have an avenue to make the things they want to make; to write programs and build projects that they find useful, fun, or interesting. So how do you give your child the opportunity to learn about coding? We’ve listed some free resources and suggested activities below.

Scratch Junior 

If you have a young child under about 7 years of age, then a great place to begin is with ScratchJr. This is an app, available on Android and iOS phones and tablets, that lets children learn the basics of programming without having to worry about making mistakes.

ScratchJr programming interface.

Code Club World

The Raspberry Pi Foundation has developed a series of activities for young learners on their journey to developing their computing skills. Code Club World provides a platform for children to play with code to design their own avatar, make it dance, and play music. Plus they can share their creations with other learners. 

“You could have a go too and discover Scratch together. The platform is designed for complete beginners and it is great fun to play with.”

Carol Thornhill, Engineering Science MA, Mathematics teacher

Scratch

For 7- to 11-year-old children, Scratch is a good way to begin their journey in coding, or to progress from ScratchJr. Like ScratchJr, Scratch is a block-based language, allowing children to assemble code to produce games, animations, stories, or even use some of the add-ons to interact with electronic devices and explore physical computing.

A girl with her Scratch project
A girl with a Scratch project she has coded.

The Raspberry Pi Foundation has hundreds of Scratch projects that your child can try out, but the best place to begin is with our Introduction to Scratch path, which will provide your child with the basic skills they need, and then encourage them to build projects that are relevant to them, culminating in their creation of their own interactive ebook.

Your child may never tire of Scratch, and that is absolutely fine — it is a fully functioning programming language that is surprisingly powerful, when you learn to understand everything it can do. Another advantage of Scratch is that it provides easy access to graphics, sounds, and interactivity that can be trickier to achieve in other programming languages.

Python 

If you’re looking for more traditional programming languages for your child to progress on to, especially when they reach 12 years of age or beyond, then we like to direct our young learners to the Python programming language and to the languages that the World Wide Web is built on, particularly HTML, CSS, and JavaScript.

Animation coded in Python of an archery target disk.
An animation coded using Python.

Our Python resources cover the basics of using the language, and then progress from there. Python is one of the most widely used languages when it comes to the fields of artificial intelligence and data science, and we have resources to support your child in learning about these fascinating aspects of technology. Our projects can even introduce your child to the world of electronics and physical computing with activities that use the inexpensive Raspberry Pi Pico, and a handful of electronic components, enabling your kids to create a wide variety of art installations and useful gadgets.

“Trying Python doesn’t mean you can’t go back to Scratch or switch between Scratch and Python for different purposes. I still use Scratch for some projects myself!”

Tracy Gardner, Computer Science PhD, former IBM Software Architect and currently a project writer at the Raspberry Pi Foundation

A young person codes at a Raspberry Pi computer.
Python is a great text-based programming language for young people to learn.

Coding projects

On our coding tutorials website we have many different projects to help your child learn coding and digital making. These range from beginner resources like the Introduction to Scratch path to more advanced activities such as the Introduction to Unity path, where children can learn how to make 3D worlds and games. 

“Our new project paths can be tackled by young creators on their own, without adult intervention. Paths are structured so that they build skills and confidence in the early stages, and then provide more open-ended tasks and inspirational ideas that creators can adapt or work from.”

Rik Cross, BSc (Hons), PGCE, former teacher and Director of Informal Learning at the Raspberry Pi Foundation

Web development 

The Web is integral to many of our lives, and we believe that it is important for children to have an understanding of the technology that drives it. That is why we have an Introduction to the Web path that allows children to develop their own web pages, focusing on the kinds of webpages that they want to build, be that sending a greeting card, telling a story, or creating a showcase of their projects.

A girl has fun learning to code at home on a tablet sitting on a sofa.
It’s empowering for children to learn to how the websites they visit are created with code.

Coding clubs 

Coding clubs are a great place for children to have fun and become more confident with coding, where they can learn through making and share their creations with each other. The Raspberry Pi Foundation operates the world’s largest network of coding clubs — CoderDojo and Code Club.

“I have a new group of creators at my Code Club every year and my favourite part is when they realise they really can let their imagination run wild. You want to make an animation where a talking pineapple chases a snowman — absolutely. You want to make a piece of scalable art out of 1000 pixelated cartoon musical instruments — go right ahead. If you can code it, you can make it.”

Liz Smart, Code Club and CoderDojo mentor, former Solutions Architect and project writer for the Raspberry Pi Foundation

Three teenage girls at a laptop.
At Code Club and CoderDojo, many young people enjoy teaming up to code projects together.

Coding challenges 

Once your child has learnt some of the basics, they may enjoy entering a coding challenge! The European Astro Pi Challenge programme allows young people to write code and actually have it run on the International Space Station, and Coolest Projects gives children a chance to showcase their projects from across the globe.

A Coolest Projects participant
A girl with her coded creation at an in-person Coolest Projects showcase.

Free resources 

No matter what technology your child wants to engage with, there is a wealth of free resources and materials available from organisations such as the Raspberry Pi Foundation and Scratch Foundation, that prepare young people for 21st century life. Whether they want to become professional software engineers, tinker with some electronics, or just have a play around … encourage them to explore some coding projects, and see what they can learn, make, and do!


Author: Marc Scott, BSc (Hons) is a former Science, Computer Science, and Engineering teacher and the Content Lead for Projects at the Raspberry Pi Foundation.


Simplify and optimize Python package management for AWS Glue PySpark jobs with AWS CodeArtifact

Post Syndicated from Ashok Padmanabhan original https://aws.amazon.com/blogs/big-data/simplify-and-optimize-python-package-management-for-aws-glue-pyspark-jobs-with-aws-codeartifact/

Data engineers use various Python packages to meet their data processing requirements while building data pipelines with AWS Glue PySpark jobs. Languages like Python and Scala are commonly used in data pipeline development. Developers can take advantage of open-source packages or even customize their own to make common use cases, such as data manipulation and analysis, easier and faster. However, managing standardized packages can be cumbersome with multiple teams using different versions of packages, installing non-approved packages, and causing duplicate development effort due to the lack of visibility of what is available at the enterprise level. This can be especially challenging in large enterprises with multiple data engineering teams.

ETL Developers have requirements to use additional packages for their AWS Glue ETL jobs. With security being job zero for customers, many will restrict egress traffic from their VPC to the public internet, and they need a way to manage the packages used by applications including their data processing pipelines.

Our proposed solution enables teams with network egress restrictions to manage packages centrally with AWS CodeArtifact and use their favorite libraries in their AWS Glue ETL PySpark code. In this post, we’ll describe how CodeArtifact can be used for managing packages and modules for AWS Glue ETL jobs, and we’ll demo a solution using Glue PySpark jobs that run within VPC subnets that have no internet access.

Solution overview

The solution uses CodeArtifact as a tool to make it easier for organizations of any size to securely store, publish, and share software packages used in their ETL with AWS Glue. VPC Endpoints will be enabled for CodeArtifact and Glue to enable private link connections. AWS Step Functions makes it easy to coordinate the orchestration of components used in the data processing pipeline. Native integrations with both CodeArtifact and AWS Glue enable the workflow to both authenticate the request to CodeArtifact and start the AWS Glue ETL job.

The following architecture shows an implementation of a solution using AWS Glue, CodeArtifact, and Step Functions to use additional Python modules without egress internet access. The solution is deployed using AWS Cloud Development Kit (AWS CDK), an open-source software development framework to define your cloud application resources using familiar programming languages.


Fig 1: Architecture Diagram for the Solution

To illustrate how to set up this architecture, we’ll walk you through the following steps:

  1. Deploying an AWS CDK stack to provision the following AWS resources:
    1. CodeArtifact
    2. An AWS Glue job
    3. Step Functions workflow
    4. Amazon Simple Storage Service (Amazon S3) bucket
    5. A VPC with a private Subnet and VPC Endpoints to Amazon S3 and CodeArtifact
  2. Validate the Deployment.
  3. Run a Sample Workflow – This workflow will run an AWS Glue PySpark job that uses a custom Python library, and an upgraded version of boto3.
  4. Cleaning up your resources.

Prerequisites

Make sure that you complete the following steps as prerequisites:

The solution

Launching your AWS CDK Stack

Step 1: Using your device’s command line, check out our Git repository to a local directory on your device:

git clone https://github.com/aws-samples/python-lib-management-without-internet-for-aws-glue-in-private-subnets.git

Step 2: Change directories to the new directory Amazon S3 script location:

cd python-lib-management-without-internet-for-aws-glue-in-private-subnets/scripts/s3

Step 3: Download the following CSV file, which contains New York City Taxi and Limousine Commission (TLC) weekly trip data. This will serve as the input source for the AWS Glue job:

aws s3 cp s3://nyc-tlc/misc/FOIL_weekly_trips_apps.csv .

Step 4: Change directories to the path where the app.py file is located (from the directory in the previous step, go up two levels):

cd ../..

Step 5: Create a virtual environment:

macOS/Linux:
python3 -m venv .env

Windows:
python -m venv .env

Step 6: Activate the virtual environment after the init process completes and the virtual environment is created:

macOS/Linux:
source .env/bin/activate

Windows:
.env\Scripts\activate.bat

Step 7: Install the required dependencies:

pip3 install -r requirements.txt

Step 8: Make sure that your AWS profile is set up, along with the region that you want to deploy to, as mentioned in the prerequisites. Then synthesize the templates. AWS CDK apps use code to define the infrastructure, and when run they produce, or “synthesize”, a CloudFormation template for each stack defined in the application:

cdk synthesize

Step 9: Bootstrap the CDK app using the following command:

cdk bootstrap aws://<AWS_ACCOUNTID>/<AWS_REGION>

Replace the placeholders AWS_ACCOUNTID and AWS_REGION with your AWS account ID and the region to be deployed to.

This step provisions the initial resources, including an Amazon S3 bucket for storing files and IAM roles that grant permissions needed to perform deployments.

Step 10: Deploy the solution. By default, some actions that could potentially make security changes require approval. In this deployment, you’re creating an IAM role. The following command overrides the approval prompts, but if you would like to manually accept the prompts, then omit the --require-approval never flag:

cdk deploy "*" --require-approval never

While the AWS CDK deploys the CloudFormation stacks, you can follow the deployment progress in your terminal:

AWS CDK Deployment progress in terminal

Fig 2: AWS CDK Deployment progress in terminal

Once the deployment is successful, you’ll see the successful status as follows:

AWS CDK Deployment completion success

Fig 3: AWS CDK Deployment completion success

Step 11: Log in to the AWS Console, go to CloudFormation, and see the output of the ApplicationStack stack:

AWS CloudFormation stack output

Fig 4: AWS CloudFormation stack output

Note the values of the DomainName and RepositoryName variables. We’ll use them in the next step to upload our artifacts.

Step 12: We will upload a custom library into the repo that we created. This will be used by our Glue ETL job.

  • Install twine using pip:

python3 -m pip install twine

  • Change to the folder of the cloned repo that contains the custom Python package glueutils-0.2.0.tar.gz:

cd scripts/custom_glue_library

  • Configure twine with the login command. Refer to step 11 for the DomainName and RepositoryName from the CloudFormation output:

aws codeartifact login --tool twine --domain <DomainName> --domain-owner <AWS_ACCOUNTID> --repository <RepositoryName>

  • Publish the Python package assets:

twine upload --repository codeartifact glueutils-0.2.0.tar.gz
Python package publishing using twine

Fig 5: Python package publishing using twine

Validate the Deployment

The AWS CDK stack will deploy the following AWS resources:

  1. Amazon Virtual Private Cloud (Amazon VPC)
    1. One Private Subnet
  2. AWS CodeArtifact
    1. CodeArtifact Repository
    2. CodeArtifact Domain
    3. CodeArtifact Upstream Repository
  3. AWS Glue
    1. AWS Glue Job
    2. AWS Glue Database
    3. AWS Glue Connection
  4. AWS Step Functions workflow
  5. Amazon S3 bucket for AWS CDK assets and for storing scripts and the CSV file
  6. IAM Roles and Policies
  7. Amazon Elastic Compute Cloud (Amazon EC2) Security Group

Step 1: Browse to the AWS account and region via the AWS Console to which the resources are deployed.

Step 2: Browse to the Subnets page (https://<region>.console.aws.amazon.com/vpc/home?region=<region>#subnets:), replacing <region> with the AWS Region to which your resources are deployed.

Step 3: Select the subnet named ApplicationStack/enterprise-repo-vpc/Enterprise-Repo-Private-Subnet1.

Step 4: Select the route table and validate that there are no routes to the internet via an internet gateway or NAT gateway, and that it’s similar to the following image:

Route table validation

Fig 6: Route table validation

Step 5: Navigate to the CodeArtifact console and review the repositories created. The enterprise-repo is your local repository, and pypi-store is the upstream repository connected to PyPI, providing artifacts from pypi.org.

AWS CodeArtifact repositories created

Fig 7: AWS CodeArtifact repositories created

Step 6: Navigate to enterprise-repo and search for glueutils. This is the custom Python package that we published.

AWS CodeArtifact custom Python package published

Fig 8: AWS CodeArtifact custom Python package published

Step 7: Navigate to Step Functions Console and review the enterprise-repo-step-function as follows:

AWS Step Functions workflow

Fig 9: AWS Step Functions workflow

The diagram shows how the Step Functions workflow will orchestrate the pattern.

  1. The first step CodeArtifactGetAuthorizationToken calls the getAuthorizationToken API to generate a temporary authorization token for accessing repositories in the domain (this token is valid for 15 minutes).
  2. The next step GenerateCodeArtifactURL takes the authorization token from the response and generates the CodeArtifact URL.
  3. Then, this will move into the GlueStartJobRun state, which makes a synchronous API call to run the AWS Glue job.
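For illustration, the following is a minimal Python sketch of what the first two states do, using the boto3 CodeArtifact client (the domain, account ID, region, and repository values here are placeholders, not values from the stack):

import boto3

codeartifact = boto3.client("codeartifact")

# Request a temporary authorization token for the domain (900 seconds = 15 minutes)
response = codeartifact.get_authorization_token(
    domain="enterprise-domain",      # placeholder domain name
    domainOwner="111122223333",      # placeholder AWS account ID
    durationSeconds=900,
)
token = response["authorizationToken"]

# Assemble the pip-compatible index URL for the CodeArtifact PyPI repository
index_url = (
    f"https://aws:{token}@enterprise-domain-111122223333.d.codeartifact."
    f"us-east-1.amazonaws.com/pypi/enterprise-repo/simple/"
)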

Step 8: Navigate to the AWS Glue Console and select the Jobs tab, then select enterprise-repo-glue-job.

The AWS Glue job is created with the following script and AWS Glue Connection enterprise-repo-glue-connection. The AWS Glue connection is a Data Catalog object that enables the job to connect to sources and APIs from within the VPC. The network type connection runs the job from within the private subnet to make requests to Amazon S3 and CodeArtifact over the VPC endpoint connection. This enables the job to run without any traffic through the internet.

Note the connections section in the AWS Glue PySpark Job, which makes the Glue job run on the private subnet in the VPC provisioned.

AWS Glue network connections

Fig 10: AWS Glue network connections

The job takes an Amazon S3 bucket, Glue Database, Python Job Installer Option, and Additional Python Modules as job parameters. The parameters --additional-python-modules and --python-modules-installer-option are passed to install the selected Python module from a PyPI repository hosted in AWS CodeArtifact.
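As an illustrative sketch of how those parameters are passed, a job run could also be started from Python with boto3 as follows (the job name and index URL are placeholders):

import boto3

glue = boto3.client("glue")

# Start the Glue job and install packages from the CodeArtifact PyPI repository
glue.start_job_run(
    JobName="enterprise-repo-glue-job",
    Arguments={
        "--additional-python-modules": "boto3,glueutils==0.2.0",
        "--python-modules-installer-option": "--index-url <CodeArtifact index URL>",
    },
)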

The script itself first reads the taxi data in CSV format from the Amazon S3 input path. A light transformation to sum the total trips by year, week, and app is performed. Then the output is written to an Amazon S3 path as Parquet. A partitioned table in the AWS Glue Data Catalog is either created, or updated if it already exists.
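As a rough sketch of that transformation (the column names here are assumptions for illustration, not the actual script):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Read the raw taxi trip CSV data from Amazon S3
df = spark.read.csv("s3://<bucket_name>/raw/", header=True, inferSchema=True)

# Sum the total trips by year, week, and app
summary = df.groupBy("year", "week", "app").agg(
    F.sum("trips").alias("total_trips")
)

# Write the output to Amazon S3 as partitioned Parquet
summary.write.mode("overwrite").partitionBy("year").parquet(
    "s3://<bucket_name>/transformed/"
)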

You can find the Glue PySpark script here.

Run a sample workflow

The following steps will demonstrate how to run a sample workflow:

Step 1: Navigate to the Step Functions Console and select the enterprise-repo-step-function.

Step 2: Select Start execution and input the following. We’re including the glueutils and latest boto3 libraries as part of the job run. It is always recommended to pin your Python dependencies to avoid breaking changes caused by future versions of a dependency. In the example below, the latest available version of boto3 and version 0.2.0 of glueutils will be installed. To pin boto3 to a specific release, you may instead add boto3==1.24.2 (the latest release at the time of publishing this post).

{"pythonmodules": "boto3,glueutils==0.2.0"}

Step 3: Select Start execution and wait until Execution Status is Succeeded. This may take a few minutes.

Step 4: Navigate to the CodeArtifact console to review the enterprise-repo repository. You’ll see the cached PyPI packages and all of their dependencies pulled down from PyPI.

Step 5: In the Glue console under the Runs section of the enterprise-repo-glue-job, you’ll see the parameters passed:

Fig 11: AWS Glue job execution history

Note the --index-url which was passed as a parameter to the Glue ETL job. The token it contains is valid only for 15 minutes.

Step 6: Navigate to the Amazon CloudWatch console and go to the /aws-glue/jobs log group to verify that the packages were installed from the local repo.

You will see that the two package names passed as parameters are installed with the corresponding versions.

Fig 12: Amazon CloudWatch logs details for the Glue job

Step 7: Navigate to the Amazon Athena console and select Query Editor.

Step 8: Run the following query to validate the output of the AWS Glue job:

SELECT year, app, SUM(total_trips) as sum_of_total_trips 
FROM 
"codeartifactblog_glue_db"."taxidataparquet" 
GROUP BY year, app;

Clean up

Make sure that you clean up all of the AWS resources that you created in the AWS CDK stack deployment. You can delete these resources using the AWS CDK destroy command as follows, or via the CloudFormation console.

To destroy the resources using AWS CDK, follow these steps:

  1. Follow Steps 1-6 from the ‘Launching your AWS CDK Stack’ section.
  2. Destroy the app by executing the following command:
    cdk destroy

Conclusion

In this post, we demonstrated how CodeArtifact can be used to manage Python packages and modules for AWS Glue jobs that run within VPC subnets that have no internet access. We also demonstrated how versions of existing packages (such as boto3) can be updated, and how a custom Python library (glueutils) developed locally can be managed through CodeArtifact.

This post enables you to use your favorite Python packages with AWS Glue ETL PySpark jobs by modifying the input to the AWS Step Functions workflow (Step 2 in the ‘Run a sample workflow’ section).


About the Authors

Bret Pontillo is a Data & ML Engineer with AWS Professional Services. He works closely with enterprise customers building data lakes and analytical applications on the AWS platform. In his free time, Bret enjoys traveling, watching sports, and trying new restaurants.

Gaurav Gundal is a DevOps consultant with AWS Professional Services, helping customers build solutions on the customer platform. When not building, designing, or developing solutions, Gaurav spends time with his family, plays guitar, and enjoys traveling to different places.

Ashok Padmanabhan is a Sr. IOT Data Architect with AWS Professional Services, helping customers build data and analytics platform and solutions. When not helping customers build and design data lakes, Ashok enjoys spending time at the beach near his home in Florida.

Python coding for kids: Moving beyond the basics

Post Syndicated from Rebecca Franks original https://www.raspberrypi.org/blog/python-coding-for-kids-beyond-the-basics/

We are excited to announce our second new Python learning path, ‘More Python’, which shows young coders how to add real data to their programs while creating projects from a chart of Olympic medals to an interactive world map. The six guided Python projects in this free learning path are designed to enable young people to independently create their own Python projects about the topics that matter to them.

A girl points excitedly at a project on the Raspberry Pi Foundation's projects site.
Two kids are at a laptop with one of our coding projects.

In this post, we’ll show you how kids use the projects in the ‘More Python’ path, what they can make by following the path, and how the path structure helps them become confident and independent digital makers.

Python coding for kids: Our learning paths

Our ‘Introduction to Python’ learning path is the perfect place to start learning how to use Python, a text-based programming language. When we launched the Intro path in February, we explained why Python is such a popular, useful, and accessible programming language for young people.

Because Python has so much to offer, we have created a second Python path for young people who have learned the basics in the first path. In this new set of six projects, learners will discover new concepts and see how to add different types of real data to their programs.

Illustration of different graph types
By following the ‘More Python’ path, young people learn the skills to independently create a data visualisation for a topic they are passionate about in the final project.

Key questions answered

Who is this path for?

We have written the projects in this path with young people around the age of 10 to 13 in mind. To code in a text-based language, a young person needs to be familiar with using a keyboard, due to the typing involved. Learners should have already completed the ‘Introduction to Python’ project path, as they will build on the learning from that path.

Three young tech creators show off their tech project at Coolest Projects.

How do young people learn with the projects? 

Young people need access to a web browser to complete our project paths. Each project contains step-by-step instructions for learners to follow, and tick boxes to mark when they complete each step. On top of that, the projects have steps for learners to:

  • Reflect on what they have covered in the project
  • Share their projects with others
  • See suggestions to upgrade their projects

Young people also have the option to sign up for an account with us so they can save their progress at any time and collect badges.

A young person codes at a Raspberry Pi computer.

While learners follow the project instructions in this project path, they write their code into Trinket, a free web-based coding platform accessible in a browser. Each project contains a link to a starter Trinket, which includes everything to get started writing Python code — no need to install any additional software.

Screenshot of Python code in the online IDE Trinket.
This is what Python code on Trinket looks like.

If they prefer, however, young people also have the option of instead writing their code in a desktop-based programming environment, such as Thonny, as they work through the projects.

What will young people learn?  

To use data in their Python programs, the project instructions show learners how to:

  • Create and use lists
  • Create and use dictionaries
  • Read data from a data file

The projects support learners as they explore new concepts of digital visual media and: 

  • Create charts using the Python library Pygal
  • Plot pins on a map
  • Create randomised artwork

In each project, learners reflect and answer questions about their work, which is important for connecting the project’s content to their pre-existing knowledge.

In a computing classroom, a girl laughs at what she sees on the screen.

As they work through the projects, learners see different ways to present data and then decide how they want to present their data in the final project in the path. You’ll find out what the projects are on the path page, or at the bottom of this blog post.

The project path helps learners become independent coders and digital makers, as each project contains slightly less support than the one before. You can read about how our project paths are designed to increase young people’s independence, and explore our other free learning paths for young coders.

How long will the path take to complete?

We’ve designed the path to be completed in around six one-hour sessions, with one hour per project, at home, in school, or at a coding club. The project instructions encourage learners to add code to upgrade their projects and go further if they wish. This means that young people might want to spend a little more time getting their projects exactly as they imagine them.

In a classroom, a teacher and a student look at a computer screen while the student types on the keyboard.

What can young people do next?

Use Unity to create a 3D world

Unity is a free development environment for creating 3D virtual environments, including games, visual novels, and animations, all with the text-based programming language C#. Our ‘Introduction to Unity’ project path for keen coders shows how to make 3D worlds and games with collectibles, timers, and non-player characters.

Take part in Coolest Projects Global

At the end of the ‘More Python’ path, learners are encouraged to register a project they’ve made using their new coding skills for Coolest Projects Global, our free and world-leading online technology showcase for young tech creators. The project they register will become part of the online gallery, where members of the Coolest Projects community can celebrate each other’s creations.

A young coder shows off her tech project for Coolest Projects to two other young tech creators.

We welcome projects from all young people, whether they are beginners or experienced coders and digital makers. Coolest Projects Global is a unique opportunity for young people to share their ingenuity with the world and with other young people who love coding and creating with digital technology.

Details about the projects in ‘More Python’

The ‘More Python’ path is structured according to our Digital Making Framework, with three Explore projects, two Design projects, and a final Invent project.

Explore project 1: Charting champions

Illustration of a fast-moving, smiling robot wearing a champion's rosette.

In this Explore project, learners discover the power of lists in Python by creating an interactive chart of Olympic medals. They learn how to read data from a text file and then present that data as a bar chart.
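A chart like that takes only a few lines with the Pygal library used in this path — here is a simplified sketch with made-up data, not the project’s actual dataset:

import pygal

# Build a bar chart of Olympic gold medals (example values only)
chart = pygal.Bar(title="Olympic gold medals")
chart.add("Team GB", [27, 22])
chart.add("USA", [46, 39])
chart.render()  # produces the chart as SVG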

Explore project 2: Solar system

Illustration of our solar system.

In this Explore project, learners create a simulation of the solar system. They revisit the drawing and animation skills that they learned in the ‘Introduction to Python’ project path to produce animated planets orbiting the sun. The animation is based on real data taken from a data file to simulate the speed that the planets move at as they orbit. The simulation is also interactive, using dictionaries to display data about the planets that have been selected.

Explore project 3: Codebreaker

Illustration of a person thinking about codebreaking.

The final Explore project gets learners to build on their knowledge of lists and dictionaries by creating a program that encodes and decodes a message using an Atbash cipher. The Atbash cipher was originally developed in the Hebrew language. It takes the alphabet and matches it to its reverse order to create a secret message. They also create a script that checks how many times certain letters have been used in an encoded message, so that they can discover patterns.
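The cipher itself can be sketched in a few lines of Python (a minimal illustration, not the project code):

# Atbash: map each letter to its partner in the reversed alphabet
alphabet = "abcdefghijklmnopqrstuvwxyz"
atbash = dict(zip(alphabet, reversed(alphabet)))

def encode(message):
    # Letters are swapped; spaces and punctuation pass through unchanged
    return "".join(atbash.get(char, char) for char in message.lower())

print(encode("hello"))  # prints "svool"
# Encoding is its own inverse: encode("svool") gives back "hello"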

Design project 1: Encoded art

Illustration of a robot painting a portrait of another robot.

The first Design project allows learners to create fun pieces of artwork by encoding the letters of their name into images, patterns, or drawings. Learners can choose the images that will be produced for each letter, and whether these appear at random or in a geometric pattern.

Learners are encouraged to share their encoded artwork in the community library, where there are lots of fun projects to discover already. In this project, learners apply all of the coding skills and knowledge covered in the Explore projects, including working with dictionaries and lists.

Design project 2: Mapping data

Illustration of a map and a hand of someone marking it with a large pin.

In the next Design project, learners access data from a data file and use it to create location pins on a world map. They have six datasets to choose from, so they can use one that interests them. They can also choose from a variety of maps and design their own pin to truly personalise their projects.

Invent project: Persuasive data presentation

Illustration of different graph types

This project is designed to use all of the skills and knowledge covered in this path, and most of the skills from the ‘Introduction to Python’ path. Learners can choose from eight datasets to create data visualisations. They are also given instructions on how to access and prepare other datasets if they want to visualise data about a different topic.

Once learners have chosen their dataset, they can decide how they want it to be displayed. This could be a chart, a map with pins, or a unique data visualisation. There are lots of example projects to provide inspiration for learners. One of our favourites is the ISS Expedition project, which places flags on the ISS depending on the expedition number you enter.

The post Python coding for kids: Moving beyond the basics appeared first on Raspberry Pi.

New for Amazon CodeGuru Reviewer – Detector Library and Security Detectors for Log-Injection Flaws

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-for-amazon-codeguru-reviewer-detector-library-and-security-detectors-for-log-injection-flaws/

Amazon CodeGuru Reviewer is a developer tool that detects security vulnerabilities in your code and provides intelligent recommendations to improve code quality. For example, CodeGuru Reviewer introduced Security Detectors for Java and Python code to identify security risks from the top ten Open Web Application Security Project (OWASP) categories and follow security best practices for AWS APIs and common crypto libraries. At re:Invent, CodeGuru Reviewer introduced a secrets detector to identify hardcoded secrets and suggest remediation steps to secure your secrets with AWS Secrets Manager. These capabilities help you find and remediate security issues before you deploy.

Today, I am happy to share two new features of CodeGuru Reviewer:

  • A new Detector Library describes in detail the detectors that CodeGuru Reviewer uses when looking for possible defects and includes code samples for both Java and Python.
  • New security detectors have been introduced for detecting log-injection flaws in Java and Python code, similar to the flaw behind the recent Apache Log4j vulnerability we described in this blog post.

Let’s see these new features in more detail.

Using the Detector Library
To help you understand more clearly which detectors CodeGuru Reviewer uses to review your code, we are now sharing a Detector Library where you can find detailed information and code samples.

These detectors help you build secure and efficient applications on AWS. In the Detector Library, you can find detailed information about CodeGuru Reviewer’s security and code quality detectors, including descriptions, their severity and potential impact on your application, and additional information that helps you mitigate risks.

Note that each detector looks for a wide range of code defects. We include one noncompliant and compliant code example for each detector. However, CodeGuru uses machine learning and automated reasoning to identify possible issues. For this reason, each detector can find a range of defects in addition to the explicit code example shown on the detector’s description page.

Let’s have a look at a few detectors. One detector looks for insecure cross-origin resource sharing (CORS) policies that are too permissive and may lead to loading content from untrusted or malicious sources.

Detector Library screenshot.

Another detector checks for improper input validation that can enable attacks and lead to unwanted behavior.

Detector Library screenshot.

Specific detectors help you use the AWS SDK for Java and the AWS SDK for Python (Boto3) in your applications. For example, there are detectors that can detect hardcoded credentials, such as passwords and access keys, or inefficient polling of AWS resources.

New Detectors for Log-Injection Flaws
Following the recent Apache Log4j vulnerability, we introduced in CodeGuru Reviewer new detectors that check if you’re logging anything that is not sanitized and possibly executable. These detectors cover the issue described in CWE-117: Improper Output Neutralization for Logs.

These detectors work with Java and Python code and, for Java, are not limited to the Log4j library. They don’t work by looking at the version of the libraries you use, but check what you are actually logging. In this way, they can protect you if similar bugs happen in the future.

Detector Library screenshot.

According to these detectors, user-provided inputs must be sanitized before they are logged. This prevents an attacker from using this input to break the integrity of your logs, forge log entries, or bypass log monitors.
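In Python, a typical remediation for CWE-117 looks something like the following sketch (the exact fix a detector recommends may differ):

import logging

logger = logging.getLogger(__name__)

def log_login_attempt(username):
    # Noncompliant: logging raw user input lets an attacker forge log entries
    # logger.info("Login attempt by %s", username)

    # Compliant: strip newline characters so the input cannot inject extra log lines
    sanitized = username.replace("\r", "").replace("\n", "")
    logger.info("Login attempt by %s", sanitized)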

Availability and Pricing
These new features are available today in all AWS Regions where Amazon CodeGuru is offered. For more information, see the AWS Regional Services List.

The Detector Library is free to browse as part of the documentation. For the new detectors looking for log-injection flaws, standard pricing applies. See the CodeGuru pricing page for more information.

Start using Amazon CodeGuru Reviewer today to improve the security of your code.

Danilo

Coding for kids: Art, games, and animations with our new beginners’ Python path

Post Syndicated from Rebecca Franks original https://www.raspberrypi.org/blog/coding-for-kids-art-games-animations-beginners-python-programming/

Python is a programming language that’s popular with learners and educators in clubs and schools. It is also widely used by professional programmers, particularly in the data science field. Many educators and young people like how similar the Python syntax is to the English language.

Two girls code together at a computer.

That’s why Python is often the first text-based language that young people learn to program in. The familiar syntax can lower the barrier to taking the first steps away from a block-based programming environment, such as Scratch.

In 2021, Python ranked in first place in an industry-standard popularity index of a major software quality assessment company, confirming its favoured position in software engineering. Python is, for example, championed by Google and used in many of its applications.

Coding for kids in Python

Python’s popularity means there are many excellent resources for learning this language. These resources often focus on creating programs that produce text outputs. We wanted to do something different.

Two young people code at laptops.

Our new ‘Introduction to Python’ project path focuses on creating digital visuals using the Python p5 library. This library is like a set of tools that allows you to get creative by using Python code to draw shapes, edit images, and create frame-by-frame animations. That makes it the perfect choice for young learners: they can develop their knowledge and skills in Python programming while creating cool visuals that they’ll be proud of. 
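For a flavour of what that looks like, here is a minimal sketch (exact function signatures can vary between versions of the p5 library):

from p5 import *

def setup():
    size(400, 400)            # create a 400 x 400 pixel canvas

def draw():
    background(255)           # white background, redrawn every frame
    fill(255, 0, 0)           # red fill colour (RGB)
    circle((200, 200), 100)   # draw a circle in the centre of the canvas

run()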

What is in the ‘Introduction to Python’ path?

The ‘Introduction to Python’ project path is designed according to our Digital Making Framework, encouraging learners to become independent coders and digital makers by gently removing scaffolding as they progress along the projects in a path. Paths begin with three Explore projects, in which learners are guided through tasks that introduce them to new coding skills. Next, learners complete two Design projects. Here, they are encouraged to practise their skills and bring in their own interests to personalise their coding creations. Finally, learners complete one Invent project. This is where they put everything that they have learned together and create something unique that matters to them.

""
Emoji, archery, rockets, art, and movement are all part of this Python path.

The structure of our Digital Making Framework means that learners experience the structured development process of a coding project and learn how to turn their ideas into reality. The Framework also supports learners in finding errors in their code (debugging), showing them that errors are a part of computer programming and just temporary setbacks that you can overcome.

What coding skills and knowledge will young people learn?

The Explore projects are where the initial learning takes place. The key programming concepts covered in this path are:

  • Variables
  • Performing calculations with variables
  • Using functions
  • Using selection (if, elif and else)
  • Using repetition (for loops)
  • Using randomisation
  • Importing from libraries

Learners also explore aspects of digital visual media concepts:

  • Coordinates
  • RGB colours
  • Screen size
  • Layers
  • Frames and animation

Learners then develop these skills and knowledge by putting them into practice in the Design and Invent projects, where they add in their own ideas and creativity. 

Explore project 1: Hello world emoji

In the first Explore project of this path, learners create an interactive program that uses emoji characters as the visual element.

""

This is the first step into Python and gets learners used to the syntax for printing text, using variables, and defining functions.

Explore project 2: Target practice

In this Explore project, learners create an archery game. They are introduced to the p5 library, which they use to draw an archery board and create the arrows.

""

The new programming concept covered in this project is selection, where learners use if, elif and else to allocate points for the game.

Explore project 3: Rocket launch

The final Explore project gets learners to animate a rocket launching into space. They create an interactive animation where the user is asked to enter an amount of fuel for the rocket launch. The animation then shows if the fuel is enough to get the rocket into orbit.

""

The new programming concept covered here is repetition. Learners use for loops to animate smoke coming from the exhaust of the rocket.

Design project 1: Make a face

The first Design project allows learners to unleash their creativity by drawing a face using the Python coding skills that they have built in the Explore projects. They have full control of the design for their face and can explore three examples for inspiration.

""

Learners are also encouraged to share their drawings in the community library, where there are lots of fun projects to discover already. In this project, learners apply all of the coding skills and knowledge covered in the Explore projects, including selection, repetition, and variables.

Design project 2: Don’t collide!

In the second Design project, learners code a scrolling game called ‘Don’t collide’, where a character or vehicle moves down the screen while having to avoid obstacles.

""

Learners can choose their own theme for the game, and decide what will move down the screen and what the obstacles will look like. In this project, they also get to practise everything they learned in the Explore projects.

Invent project: Powerful patterns

This project is the ultimate chance for learners to put all of their skills and knowledge into practice and get creative. They design their own unique patterns and create frame-by-frame animations.

""

The Invent project offers ingredients, which are short reminders of all the key skills that learners have gained while completing the previous projects in the path. The ingredients encourage them to be independent whilst also supporting them with code snippets to help them along.

Key questions answered

Who is the Introduction to Python path for?

We have written the projects in the path with young people around the age of 9 to 13 in mind. To code in a text-based language, a young person needs to be familiar with using a keyboard, due to the typing involved. A learner may have completed one of our Scratch paths prior to this one, but this isn’t essential, and we encourage beginner coders to take this path if that is their choice.

A young person codes at a Raspberry Pi computer.

What software do learners need to code these projects?

A web browser. In every project, starter code is provided in a free web-based development environment called Trinket, where learners add their own code. The starter Trinkets include everything that learners need to use Python and access the p5 library.

If preferred, the projects also include instructions for using a desktop-based programming environment, such as Thonny.

How long will the path take to complete?

We’ve designed the path to be completed in around six one-hour sessions, with one hour per project. However, the project instructions encourage learners to upgrade their projects and go further if they wish. This means that young people might want to spend a little more time getting their projects exactly as they imagine them. 

What can young people do next after completing this path?

Taking part in Coolest Projects Global

At the end of the path, learners are encouraged to register a project they’re making with their new coding skills for Coolest Projects Global, our world-leading online technology showcase for young people.

Three young tech creators show off their tech project at Coolest Projects.

Taking part is free, all online, and beginners as well as more experienced young tech creators are welcome and invited. This is their unique opportunity to share their ingenuity in an online gallery for the world and the Coolest Projects community to celebrate.

Coding more Python projects with us

Coming very soon is our ‘More Python’ path. In this path, learners will move beyond the basics they learned in Introduction to Python. They will learn how to use lists, dictionaries, and files to create charts, models, and artwork. Keep your eye on our blog and social media for the release of ‘More Python’.

The post Coding for kids: Art, games, and animations with our new beginners’ Python path appeared first on Raspberry Pi.

Create a serverless event-driven workflow to ingest and process Microsoft data with AWS Glue and Amazon EventBridge

Post Syndicated from Venkata Sistla original https://aws.amazon.com/blogs/big-data/create-a-serverless-event-driven-workflow-to-ingest-and-process-microsoft-data-with-aws-glue-and-amazon-eventbridge/

Microsoft SharePoint is a document management system for storing files, organizing documents, and sharing and editing documents in collaboration with others. Your organization may want to ingest SharePoint data into your data lake, combine the SharePoint data with other data that’s available in the data lake, and use it for reporting and analytics purposes. AWS Glue is a serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning, and application development. AWS Glue provides all the capabilities needed for data integration so that you can start analyzing your data and putting it to use in minutes instead of months.

Organizations often manage their data on SharePoint in the form of files and lists, and you can use this data for easier discovery, better auditing, and compliance. SharePoint as a data source is not a typical relational database and the data is mostly semi-structured, which is why it’s often difficult to join the SharePoint data with other relational data sources. This post shows how to ingest and process SharePoint lists and files with AWS Glue and Amazon EventBridge, which enables you to join other data that is available in your data lake. We use SharePoint REST APIs with a standard open data protocol (OData) syntax. OData advocates a standard way of implementing REST APIs that allows for SQL-like querying capabilities. OData helps you focus on your business logic while building RESTful APIs without having to worry about the various approaches to define request and response headers, query options, and so on.
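For example, querying a SharePoint list with OData options might look like this sketch (the site URL, list name, fields, and token are placeholders):

import requests

site_url = "https://contoso.sharepoint.com/sites/winery"   # placeholder site
endpoint = f"{site_url}/_api/web/lists/getbytitle('WineQuality')/items"

# OData query options: select two fields and filter on a quality threshold
params = {"$select": "Title,Quality", "$filter": "Quality ge 7"}
headers = {
    "Authorization": "Bearer <access_token>",   # placeholder OAuth2 token
    "Accept": "application/json;odata=verbose",
}

response = requests.get(endpoint, params=params, headers=headers)
items = response.json()["d"]["results"]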

AWS Glue event-driven workflows

Unlike a traditional relational database, SharePoint data may or may not change frequently, and it’s difficult to predict the frequency at which your SharePoint server generates new data, which makes it difficult to plan and schedule data processing pipelines efficiently. Running data processing frequently can be expensive, whereas scheduling pipelines to run infrequently can deliver cold data. Similarly, triggering pipelines from an external process can increase complexity, cost, and job startup time.

AWS Glue supports event-driven workflows, a capability that lets developers start AWS Glue workflows based on events delivered by EventBridge. The main reason to choose EventBridge in this architecture is because it allows you to process events, update the target tables, and make information available to consume in near-real time. Because frequency of data change in SharePoint is unpredictable, using EventBridge to capture events as they arrive enables you to run the data processing pipeline only when new data is available.

To get started, you simply create a new AWS Glue trigger of type EVENT and place it as the first trigger in your workflow. You can optionally specify a batching condition. Without event batching, the AWS Glue workflow is triggered every time an EventBridge rule matches, which may result in multiple concurrent workflows running. AWS Glue protects you by setting default limits that restrict the number of concurrent runs of a workflow. You can increase the required limits by opening a support case. Event batching allows you to configure the number of events to buffer or the maximum elapsed time before firing the particular trigger. When the batching condition is met, a workflow run is started. For example, you can trigger your workflow when 100 files are uploaded in Amazon Simple Storage Service (Amazon S3) or 5 minutes after the first upload. We recommend configuring event batching to avoid too many concurrent workflows, and optimize resource usage and cost.
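As a sketch, creating such a starting trigger with boto3 might look like the following (the trigger, workflow, job, and batching values are placeholders):

import boto3

glue = boto3.client("glue")

# An EVENT trigger starts the workflow when matching EventBridge events arrive
glue.create_trigger(
    Name="s3-upload-event-trigger",
    Type="EVENT",
    WorkflowName="sharepoint-ingest-workflow",
    # Buffer up to 100 events, or wait at most 300 seconds after the first one
    EventBatchingCondition={"BatchSize": 100, "BatchWindow": 300},
    Actions=[{"JobName": "sharepoint-transform-job"}],
)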

To illustrate this solution better, consider the following use case for a wine manufacturing and distribution company that operates across multiple countries. They currently host all their transactional system’s data on a data lake in Amazon S3. They also use SharePoint lists to capture feedback and comments on wine quality and composition from their suppliers and other stakeholders. The supply chain team wants to join their transactional data with the wine quality comments in SharePoint data to improve their wine quality and manage their production issues better. They want to capture those comments from the SharePoint server within an hour and publish that data to a wine quality dashboard in Amazon QuickSight. With an event-driven approach to ingest and process their SharePoint data, the supply chain team can consume the data in less than an hour.

Overview of solution

In this post, we walk through a solution to set up an AWS Glue job to ingest SharePoint lists and files into an S3 bucket and an AWS Glue workflow that listens to S3 PutObject data events captured by AWS CloudTrail. This workflow is configured with an event-based trigger to run when an AWS Glue ingest job adds new files into the S3 bucket. The following diagram illustrates the architecture.

To make it simple to deploy, we captured the entire solution in an AWS CloudFormation template that enables you to automatically ingest SharePoint data into Amazon S3. SharePoint uses ClientID and TenantID credentials for authentication and uses OAuth2 for authorization.

The template helps you perform the following steps:

  1. Create an AWS Glue Python shell job to make the REST API call to the SharePoint server and ingest files or lists into Amazon S3.
  2. Create an AWS Glue workflow with a starting trigger of EVENT type.
  3. Configure CloudTrail to log data events, such as PutObject API calls to CloudTrail.
  4. Create a rule in EventBridge to forward the PutObject API events to AWS Glue when they’re emitted by CloudTrail.
  5. Add an AWS Glue event-driven workflow as a target to the EventBridge rule. The workflow gets triggered when the SharePoint ingest AWS Glue job adds new files to the S3 bucket.

Prerequisites

For this walkthrough, you should have the following prerequisites:

Configure SharePoint server authentication details

Before launching the CloudFormation stack, you need to set up your SharePoint server authentication details, namely, TenantID, Tenant, ClientID, ClientSecret, and the SharePoint URL in AWS Systems Manager Parameter Store of the account you’re deploying in. This makes sure that no authentication details are stored in the code and they’re fetched in real time from Parameter Store when the solution is running.

To create your AWS Systems Manager parameters, complete the following steps:

  1. On the Systems Manager console, under Application Management in the navigation pane, choose Parameter Store.
    systems manager
  2. Choose Create Parameter.
  3. For Name, enter the parameter name /DataLake/GlueIngest/SharePoint/tenant.
  4. Leave the type as string.
  5. Enter your SharePoint tenant detail into the value field.
  6. Choose Create parameter.
  7. Repeat these steps to create the following parameters:
    1. /DataLake/GlueIngest/SharePoint/tenant
    2. /DataLake/GlueIngest/SharePoint/tenant_id
    3. /DataLake/GlueIngest/SharePoint/client_id/list
    4. /DataLake/GlueIngest/SharePoint/client_secret/list
    5. /DataLake/GlueIngest/SharePoint/client_id/file
    6. /DataLake/GlueIngest/SharePoint/client_secret/file
    7. /DataLake/GlueIngest/SharePoint/url/list
    8. /DataLake/GlueIngest/SharePoint/url/file

Deploy the solution with AWS CloudFormation

For a quick start of this solution, you can deploy the provided CloudFormation stack. This creates all the required resources in your account.

The CloudFormation template generates the following resources:

  • S3 bucket – Stores data, CloudTrail logs, job scripts, and any temporary files generated during the AWS Glue extract, transform, and load (ETL) job run.
  • CloudTrail trail with S3 data events enabled – Enables EventBridge to receive PutObject API call data in a specific bucket.
  • AWS Glue Job – A Python shell job that fetches the data from the SharePoint server.
  • AWS Glue workflow – A data processing pipeline that is composed of a crawler, jobs, and triggers. This workflow converts uploaded data files into Apache Parquet format.
  • AWS Glue database – The AWS Glue Data Catalog database that holds the tables created in this walkthrough.
  • AWS Glue table – The Data Catalog table representing the Parquet files being converted by the workflow.
  • AWS Lambda function – The AWS Lambda function is used as an AWS CloudFormation custom resource to copy job scripts from an AWS Glue-managed GitHub repository and an AWS Big Data blog S3 bucket to your S3 bucket.
  • IAM roles and policies – We use the following AWS Identity and Access Management (IAM) roles:
    • LambdaExecutionRole – Runs the Lambda function that has permission to upload the job scripts to the S3 bucket.
    • GlueServiceRole – Runs the AWS Glue job that has permission to download the script, read data from the source, and write data to the destination after conversion.
    • EventBridgeGlueExecutionRole – Has permissions to invoke the NotifyEvent API for an AWS Glue workflow.
    • IngestGlueRole – Runs the AWS Glue job that has permission to ingest data into the S3 bucket.

To launch the CloudFormation stack, complete the following steps:

  1. Sign in to the AWS CloudFormation console.
  2. Choose Launch Stack:
  3. Choose Next.
  4. For pS3BucketName, enter the unique name of your new S3 bucket.
  5. Leave pWorkflowName and pDatabaseName as the default.

cloud formation 1

  1. For pDatasetName, enter the SharePoint list name or file name you want to ingest.
  2. Choose Next.

cloud formation 2

  1. On the next page, choose Next.
  2. Review the details on the final page and select I acknowledge that AWS CloudFormation might create IAM resources.
  3. Choose Create.

It takes a few minutes for the stack creation to complete; you can follow the progress on the Events tab.

You can run the ingest AWS Glue job either on a schedule or on demand. As the job successfully finishes and ingests data into the raw prefix of the S3 bucket, the AWS Glue workflow runs and transforms the ingested raw CSV files into Parquet files and loads them into the transformed prefix.

Review the EventBridge rule

The CloudFormation template created an EventBridge rule to forward S3 PutObject API events to AWS Glue. Let’s review the configuration of the EventBridge rule:

  1. On the EventBridge console, under Events, choose Rules.
  2. Choose the rule s3_file_upload_trigger_rule-<CloudFormation-stack-name>.
  3. Review the information in the Event pattern section.

event bridge

The event pattern shows that this rule is triggered when an S3 object is uploaded to s3://<bucket_name>/data/SharePoint/tablename_raw/. CloudTrail captures the PutObject API calls made and relays them as events to EventBridge.
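A sketch of how such a rule could be defined with boto3 (the rule name and bucket are placeholders; the deployed rule is created by CloudFormation):

import json
import boto3

events = boto3.client("events")

# Match CloudTrail-delivered S3 PutObject calls for the ingest bucket
events.put_rule(
    Name="s3_file_upload_trigger_rule",
    EventPattern=json.dumps({
        "source": ["aws.s3"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["s3.amazonaws.com"],
            "eventName": ["PutObject"],
            "requestParameters": {"bucketName": ["<bucket_name>"]},
        },
    }),
)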

  1. In the Targets section, you can verify that this EventBridge rule is configured with an AWS Glue workflow as a target.

event bridge target section

Run the ingest AWS Glue job and verify the AWS Glue workflow is triggered successfully

To test the workflow, we run the ingest-glue-job-SharePoint-file job using the following steps:

  1. On the AWS Glue console, select the ingest-glue-job-SharePoint-file job.

glue job

  1. On the Action menu, choose Run job.

glue job action menu

  1. Choose the History tab and wait until the job succeeds.

glue job history tab

You can now see the CSV files in the raw prefix of your S3 bucket.

csv file s3 location

Now the workflow should be triggered.

  1. On the AWS Glue console, validate that your workflow is in the RUNNING state.

glue workflow running status

  1. Choose the workflow to view the run details.
  2. On the History tab of the workflow, choose the current or most recent workflow run.
  3. Choose View run details.

glue workflow visual

When the workflow run status changes to Completed, let’s check the converted files in your S3 bucket.

  1. Switch to the Amazon S3 console, and navigate to your bucket.

You can see the Parquet files under s3://<bucket_name>/data/SharePoint/tablename_transformed/.

parquet file s3 location

Congratulations! Your workflow ran successfully based on S3 events triggered by uploading files to your bucket. You can verify everything works as expected by running a query against the generated table using Amazon Athena.

Sample wine dataset

Let’s analyze a sample red wine dataset. The following screenshot shows a SharePoint list that contains various readings that relate to the characteristics of the wine and an associated wine category. This is populated by various wine tasters from multiple countries.

redwine dataset

The following screenshot shows a supplier dataset from the data lake with wine categories ordered per supplier.

supplier dataset

We process the red wine dataset using this solution and use Athena to query the red wine data and supplier data where wine quality is greater than or equal to 7.

athena query and results

We can visualize the processed dataset using QuickSight.

Clean up

To avoid incurring unnecessary charges, you can use the AWS CloudFormation console to delete the stack that you deployed. This removes all the resources you created when deploying the solution.

Conclusion

Event-driven architectures provide access to near-real-time information and help you make business decisions on fresh data. In this post, we demonstrated how to ingest and process SharePoint data using AWS serverless services like AWS Glue and EventBridge. We saw how to configure a rule in EventBridge to forward events to AWS Glue. You can use this pattern for your analytical use cases, such as joining SharePoint data with other data in your lake to generate insights, or auditing SharePoint data and compliance requirements.


About the Author

Venkata Sistla is a Big Data & Analytics Consultant on the AWS Professional Services team. He specializes in building data processing capabilities and helping customers remove constraints that prevent them from leveraging their data to develop business insights.

Introducing Code Club World: a new way for young people to learn to code at home

Post Syndicated from Laura Kirsop original https://www.raspberrypi.org/blog/code-club-world-free-online-platform-young-people-children-learn-to-code-at-home/

Today we are introducing you to Code Club World — a free online platform where young people aged 9 to 13 can learn to make stuff with code.

Images from Code Club World, a free online platform for children who want to learn to code

In Code Club World, young people can:

  • Start out by creating their personal robot avatar
  • Make music, design a t-shirt, and teach their robot avatar to dance!
  • Learn to code on islands with structured activities
  • Discover block-based and text-based coding in Scratch and Python
  • Earn badges for their progress 
  • Share their coding creations with family, friends, and the Code Club World community

Learning to code at home with Code Club World: meaningful, fun, flexible

When we spoke to parents and children about learning at home during the pandemic, it became clear to us that they were looking for educational tools that the children can enjoy and master independently, and that are as fun and social as the computer games and other apps the children love.

A girl has fun learning to code at home, sitting with a laptop on a sofa, with a dog sleeping next to her and her father writing code too.
Code Club World is educational, and as fun as the games and apps young people love.

What’s more, a free tool for learning to code at home is particularly important for young people who are unable to attend coding clubs in person. We believe every child should have access to a high-quality coding and digital making education. And with this in mind, we set out to create Code Club World, an online environment as rich and engaging as a face-to-face extracurricular learning experience, where all young people can learn to code.

The Code Club World activities are mapped to our research-informed Digital Making Framework — a coding and digital making curriculum for non-formal settings. That means when children are in the Code Club World environment, they are learning to code and use digital making to independently create their ideas and address challenges that matter to them.

Islands in the Code Club World online platform for children who want to learn to code for free.
Welcome to Code Club World — so many islands to explore!

By providing a structured pathway through the coding activities, a reward system of badges to engage and motivate learners, and a broad range of projects covering different topics, Code Club World supports learners at every stage, while making the activities meaningful, fun, and flexible.

A girl has fun learning to code at home on a tablet sitting on a sofa.
Code Club World’s home island works as well on mobile phones and tablets as on computers.

We’ve also designed Code Club World to be mobile-friendly, so if a young person uses a phone or tablet to visit the platform, they can still code cool things they will be proud of.

Created with the community

Since we started developing Code Club World, we have been working with a community of more than 1000 parents, educators, and children who are giving us valuable input to shape the direction of the platform. We’ve had some fantastic feedback from them:

“I’ve not coded before, but found this really fun! … I LOVED making the dance. It was so much fun and made me laugh!”

Learner, aged 11

“I love the concept of having islands to explore in making the journey through learning coding, it is fabulous and eye-catching.”

Parent

The platform is still in beta status — this means we’d love you to share it with young people in your family, school, or community so they can give their feedback and help make Code Club World even better.

Together, we will ensure every child has an equal opportunity to learn to code and make things that change their world.

The post Introducing Code Club World: a new way for young people to learn to code at home appeared first on Raspberry Pi.

Open-Sourcing a Monitoring GUI for Metaflow

Post Syndicated from Netflix Technology Blog original https://netflixtechblog.com/open-sourcing-a-monitoring-gui-for-metaflow-75ff465f0d60

Open-Sourcing a Monitoring GUI for Metaflow, Netflix’s ML Platform

tl;dr Today, we are open-sourcing a long-awaited GUI for Metaflow. The Metaflow GUI allows data scientists to monitor their workflows in real-time, track experiments, and see detailed logs and results for every executed task. The GUI can be extended with plugins, allowing the community to build integrations to other systems, custom visualizations, and embed upcoming features of Metaflow directly into its views.

Metaflow is a full-stack framework for data science that we started developing at Netflix over four years ago and which we open-sourced in 2019. It allows data scientists to define ML workflows, test them locally, scale out to the cloud, and deploy to production in idiomatic Python code. Since open-sourcing, the Metaflow community has been growing quickly: it is now the 7th most starred active project on Netflix’s GitHub account with nearly 4800 stars. Outside Netflix, Metaflow is used to power machine learning in production by hundreds of companies across industries from bioinformatics to real estate.

Since its inception, Metaflow has been a command-line-centric tool. It makes it easy for data scientists to express even complex machine learning applications in idiomatic Python, test them locally, or scale them out in the cloud — all using their favorite IDEs and terminals. Following our culture of freedom and responsibility, Metaflow grants data scientists the freedom to choose the right modeling approach, handle data and features flexibly, and construct workflows easily while ensuring that the resulting project executes responsibly and robustly on the production infrastructure.

As the number and criticality of projects running on Metaflow increased — some of which are very central to our business — our ML platform team started receiving an increasing number of support requests. Frequently, the questions were of the nature “can you help me understand why my flow takes so long to execute” or “how can I find the logs for a model that failed last night.” Technically, Metaflow provides a Python API that allows the user to inspect all details e.g., in a notebook, but writing code in a notebook to answer basic questions like this felt like overkill and unnecessarily tedious. After observing the situation for months, we started forming an understanding of the kind of new user interface that could address the growing needs of our users.
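For instance, answering “how can I find the logs for a model that failed last night” with the client API takes a few lines in a notebook (a sketch; the flow name is a placeholder):

from metaflow import Flow

# Inspect the most recent run of a flow and print output for failed tasks
run = Flow("TrainingFlow").latest_run
for step in run:
    for task in step:
        if not task.successful:
            print(task.pathspec)   # identifies the failed task
            print(task.stderr)     # the task's captured error log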

Requirements for a Metaflow GUI

Metaflow is a human-centered system by design. We consider our Python API and the CLI to be integral parts of the overall user interface and user experience, which singularly focuses on making it easier to build production-ready ML projects from scratch. In our approach, Python code provides a highly expressive and productive user interface for expressing complex business logic, such as ML models and workflows. At the same time, the CLI allows users to execute specific commands quickly and even automate common actions. When it comes to complex, real-life development work like this, it would be hard to achieve the same level of productivity on a graphical user interface.

However, textual UIs are quite lacking when it comes to discoverability and getting a holistic understanding of the system’s state. The questions we were hearing reflected this gap: we were lacking a user interface that would allow the users, quite simply, to figure out quickly what is happening in their Metaflow projects.

Netflix has a long history of developing innovative tools for observability, so when we began to specify requirements for the new GUI, we were able to leverage experiences from the previous GUIs built for other use cases, as well as real-life user stories from Metaflow users. We wanted to scope the GUI tightly, focusing on a specific gap in the Metaflow experience:

  1. The GUI should allow the users to see what flows and tasks are executing and what is happening inside them. Notably, we didn’t want to replace any of the functionality in the Metaflow APIs or CLI with the GUI — just to complement them. This meant that the GUI would be read-only: all actions like writing code and starting executions should happen on the users’ IDE and terminal as before. We also had no need to build a model-monitoring GUI yet, which is a wholly separate problem domain.
  2. The GUI would be targeted at professional data scientists. Instead of a fancy GUI for demos and presentations, we wanted a serious productivity tool with carefully thought-out user workflows that would fit seamlessly into our toolchain of data science. This requires attention to small details: for instance, users should be able to copy a link to any view in the GUI and share it e.g., on Slack, for easy collaboration and support (or to integrate with the Metaflow Slack bot). And, there should be natural affordances for navigating between the CLI, the GUI, and notebooks.
  3. The GUI should be scalable and snappy: it should handle our existing repository consisting of millions of runs, some of which contain tens of thousands of tasks without hiccups. Based on our experiences with other GUIs operating at Netflix-scale, this is not a trivial requirement: scalability needs to be baked into the design from the very beginning. Sluggish GUIs are hard to debug and fix afterwards, and they can have a significantly negative impact on productivity.
  4. The GUI should integrate well with other GUIs. A modern ML stack consists of many independent systems like data warehouses, compute layers, model serving systems, and, in particular, notebooks. It should be possible to find runs and tasks of interest in the Metaflow GUI and use a task-specific view to jump to other GUIs for further information. Our landscape of tools is constantly evolving, so we didn’t want to hardcode these links and views in the GUI itself. Instead, following the integration-friendly ethos of Metaflow, we want to embed relevant information in the GUI as plugins.
  5. Finally, we wanted to minimize the operational overhead of the GUI. In particular, under no circumstances should the GUI impact Metaflow executions. The GUI backend should be a simple service, optionally sitting alongside the existing Metaflow metadata service, providing a read-only, real-time view to the stored state. The frontend side should be easily extensible and maintainable, suggesting that we wanted a modern React app.

Monitoring GUI for Metaflow

As our ML Platform team had limited frontend resources, we reached out to Codemate to help with the implementation. As often happens in software engineering projects, the project took longer than expected to finish, mostly because tracking and visualizing thousands of concurrent objects in real-time in a highly distributed environment is a surprisingly non-trivial problem (duh!). After countless iterations, we are finally very happy with the outcome, which we have now used in production for a few months.

When you open the GUI, you see an overview of all flows and runs, both current and historical, which you can group and filter in various ways:

Runs Grouped by flows

We can use this view for experiment tracking: Metaflow records every execution automatically, so data scientists can track all their work using this view. Naturally, the view can be grouped by user. They can also tag their runs and filter the view by tags, allowing them to focus on particular subsets of experiments.

After you click a specific run, you see all its tasks on a timeline:

Timeline view for a run

The timeline view is extremely useful in understanding performance bottlenecks, distribution of task runtimes, and finding failed tasks. At the top, you can see global attributes of the run, such as its status, start time, parameters etc. You can click a specific task to see more details:

Task view

This task view shows logs produced by a task, its results, and optionally links to other systems that are relevant to the task. For instance, if the task had deployed a model to a model serving platform, the view could include a link to a UI used for monitoring microservices.

As specified in our requirements, the GUI should work well with the Metaflow CLI. To facilitate this, the top bar includes a navigation component where the user can copy-paste any pathspec, i.e., a path to any object in the Metaflow universe; pathspecs are prominently shown in the CLI output. This way, the user can easily move from the CLI to the GUI to observe runs and tasks in detail.
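
The same pathspecs work in the Python API, too; a small hedged example (the pathspec shown is hypothetical):

from metaflow import Task

# A pathspec copied from CLI output: flow/run_id/step/task_id
task = Task('HelloFlow/3/start/1')
print(task.stdout)   # the same logs the GUI's task view displays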

While the CLI is great, it is challenging to visualize flows. Each flow can be represented as a Directed Acyclic Graph (DAG), and so the GUI provides a much better way to visualize a flow. The DAG view presents all the steps of a flow and how they are related. Each step may carry developer comments, and steps are colored to indicate their current state. Split steps are grouped by shaded boxes, while steps that participated in a foreach are grouped by a double-shaded box. Clicking on a step takes you to the Task view.

DAG View

Users at different organizations will likely have special use cases that are not directly supported. The Metaflow GUI is extensible through its plugin API. For example, Netflix has its own container orchestration platform, called Titus. Users can configure tasks to utilize Titus to scale up or out. When failures happen, users need to access their Titus containers for more information, and within the task view, a simple plugin provides a link for further troubleshooting.

Example task-level plugin

Try it at home!

We know that our user stories and requirements for a Metaflow GUI are not unique to Netflix. A number of companies in the Metaflow community have requested a GUI for Metaflow in the past. To support the thriving community and invite third-party contributions to the GUI, we are open-sourcing our Monitoring GUI for Metaflow today!

You can find detailed instructions for how to deploy the GUI here. If you want to see the GUI in action before deploying it, Outerbounds, a new startup founded by our ex-colleagues, has deployed a public demo instance of the GUI. Outerbounds also hosts an active Slack community of Metaflow users where you can find support for GUI-related issues and share feedback and ideas for improvement.

With the new GUI, data scientists don’t have to fly blind anymore. Instead of reaching out to a platform team for support, they can easily see the state of their workflows on their own. We hope that Metaflow users outside Netflix will find the GUI equally beneficial, and companies will find creative ways to improve the GUI with new plugins.

For more context on the development process and motivation for the GUI, you can watch this recording of the GUI launch meetup.


Open-Sourcing a Monitoring GUI for Metaflow was originally published in Netflix TechBlog on Medium.

Code a Spectrum-style Crazy Golf game | Wireframe #54

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/code-a-spectrum-style-crazy-golf-game-wireframe-54/

Putt the ball around irrational obstacles in our retro take on golf. Mark Vanstone has the code

First released by Mr. Micro in 1983 – then under the banner of Sinclair Research – Krazy Golf was, confusingly, also called Crazy Golf. The loading screen featured the Krazy spelling, but on the cover, it was plain old Crazy Golf.

Designed for the ZX Spectrum, the game provided nine holes and a variety of obstacles to putt the ball around. Crazy Golf was released at a time when dozens of other games were hitting the Spectrum market, and although it was released under the Sinclair name and reviewed in magazines such as Crash, it didn’t make much impact. The game itself employed a fairly rudimentary control system, whereby the player selects the angle of the shot at the top left of the screen, sets the range via a bar along the top, and then presses the RETURN key to take the shot.

The game was called Crazy Golf on the cover, but weirdly, the loading screen spelled the name as Krazy Golf. The early games industry was strange.

If you’ve been following our Source Code articles each month, you will have seen the pinball game where a ball bounces off various surfaces. In that example, we used a few shortcuts to approximate the bounce angles. Here, we’re only going to have horizontal and vertical walls, so we can use some fairly straightforward maths to calculate more precisely the new angle as the ball bounces off a surface. In the original game, the ball was limited to only 16 angles, and the ball moved at the same speed regardless of the strength of the shot. We’re going to improve on this a bit so that there’s more flexibility around the shot angle; we’ll also get the ball to start moving fast and then reduce its speed until it stops.

Horizontal or vertical obstruction?

To make this work, we need a way of defining whether an obstruction is horizontal or vertical, as the calculation is different for each. We’ll have a background graphic showing the course and obstacles, but we’ll also need another map to check our collisions. This collision map has just the obstacles on it: a white background with all the horizontal surfaces marked red and all the vertical surfaces marked blue.

As we move the ball around the screen (in much the same way as our pinball game) we check to see if it has collided with a surface by sampling the colours of the pixels from the collision map. If the pixel’s blue, we know that the ball has hit a vertical wall; if it’s red, the wall’s horizontal. We then calculate the new angle for the ball. If we mark the hole as black, then we can also test for collision with that – if the ball’s in the hole, the game ends.
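
As a rough sketch of that sampling logic in Pygame (the filename and exact colour values are assumptions):

import pygame

collision_map = pygame.image.load('collision_map.png')

RED = (255, 0, 0)     # horizontal surfaces
BLUE = (0, 0, 255)    # vertical surfaces
BLACK = (0, 0, 0)     # the hole

def surface_at(x, y):
    c = collision_map.get_at((int(x), int(y)))
    if (c.r, c.g, c.b) == BLUE:
        return 'vertical'
    if (c.r, c.g, c.b) == RED:
        return 'horizontal'
    if (c.r, c.g, c.b) == BLACK:
        return 'hole'
    return None   # white background: no obstruction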

The pointer’s angle is rotated using degrees, but we’ll use radians for our ball direction as it will simplify our movement and bounce calculations.
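
Assuming the same movement convention as our pinball sample (x moves by the sin of the direction, y by the cos), the bounce boils down to two reflections; a minimal sketch:

import math

def new_direction(direction, surface):
    # direction is in radians; convert the pointer with math.radians(degrees)
    if surface == 'vertical':
        return -direction              # flip the horizontal component
    return math.pi - direction         # horizontal wall: flip the vertical component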

Get the code

We have our ball bouncing mechanism, so now we need our user interaction system. We’ll use the left and right arrow keys to rotate our pointer, which designates the direction of the next shot. We also need a range-setting gizmo, which will be shown as a bar at the top of the screen. We can make that grow and shrink with the up and down arrows.

Then when we press the RETURN key, we transfer the pointer angle and the range to the ball and watch it go. We ought to count each shot so that we can display a tally to the player once they’ve putted the ball into the hole. From this point, it’s a simple task to create another eight holes – and then you’ll have a full crazy golf game!

Here’s Mark’s code for a simple golf game. To get it running on your system, you’ll need to install Pygame Zero. And for the full code, head to our Github.

Get your copy of Wireframe issue 54

You can read more features like this one in Wireframe issue 54, available directly from Raspberry Pi Press — we deliver worldwide.

And if you’d like a handy digital version of the magazine, you can also download issue 54 for free in PDF format.

The post Code a Spectrum-style Crazy Golf game | Wireframe #54 appeared first on Raspberry Pi.

AWS Config RDK: Deploying Custom Rules Using Terraform

Post Syndicated from Madhu Sarma original https://aws.amazon.com/blogs/devops/aws-config-rdk-deploying-the-custom-rules-using-the-terraform/

To help customers who use Terraform for multi-cloud infrastructure deployment, we have introduced a new feature in the AWS Config Rule Development Kit (RDK) that allows you to export custom AWS Config rules to Terraform files, so that you can deploy the RDK rules with Terraform.

This blog post is a complement to the previous post – How to develop custom AWS Config rules using the Rule Development Kit. Here I will show you how to prototype, develop, and deploy custom AWS Config rules. The steps for prototyping and developing the rules remain identical, while the deployment step differs; I’ll walk you through it in detail. I encourage you to review the previous blog post so that you can follow along here.

In this post, you will learn how to export a custom AWS Config rule to Terraform files and deploy it to AWS using Terraform.

Background

Previously, the RDK didn’t support Terraform for rule deployment, which affected customers using Terraform (infrastructure as code) to provision AWS infrastructure. Therefore, we have added one more option: deploying the rules using Terraform.

Getting Started

The first step is to make sure that you have installed the latest RDK version. After you have defined an AWS Config rule and prototyped it using the AWS Config RDK as described in the previous blog post, follow the steps below to deploy the various AWS Config components across the compliance and satellite accounts.

Prerequisites

Validate that the RDK you downloaded supports the “export” feature by running the command “rdk export -h”; you should see the output below. If the installed RDK doesn’t support the export feature, update it by running “pip install --upgrade rdk”.

(venv) 8c85902e4110:7RDK test$ rdk export -h 
 
usage: rdk export [-h] [-s RULESETS] [--all] [--lambda-layers LAMBDA_LAYERS]  
                  [--lambda-subnets LAMBDA_SUBNETS]  
                  [--lambda-security-groups LAMBDA_SECURITY_GROUPS]  
                  [--lambda-role-arn LAMBDA_ROLE_ARN]  
                  [--rdklib-layer-arn RDKLIB_LAYER_ARN] -v {0.11,0.12} -f  
                  {terraform}  
                  [<rulename> [<rulename> ...]]  
  
Used to export the Config Rule to terraform file.  
  
positional arguments:  
  <rulename>            Rule name(s) to export to a file.  
  
optional arguments:  
  -h, --help            show this help message and exit  
  -s RULESETS, --rulesets RULESETS  
                        comma-delimited list of RuleSet names  
  --all, -a             All rules in the working directory will be deployed.  
  --lambda-layers LAMBDA_LAYERS  
                        [optional] Comma-separated list of Lambda Layer ARNs  
                        to deploy with your Lambda function(s).  
  --lambda-subnets LAMBDA_SUBNETS  
                        [optional] Comma-separated list of Subnets to deploy  
                        your Lambda function(s).  
  --lambda-security-groups LAMBDA_SECURITY_GROUPS  
                        [optional] Comma-separated list of Security Groups to  
                        deploy with your Lambda function(s).  
  --lambda-role-arn LAMBDA_ROLE_ARN  
                        [optional] Assign existing iam role to lambda  
                        functions. If omitted, new lambda role will be  
                        created.  
  --rdklib-layer-arn RDKLIB_LAYER_ARN  
                        [optional] Lambda Layer ARN that contains the desired  
                        rdklib. Note that Lambda Layers are region-specific.  
  -v {0.11,0.12}, --version {0.11,0.12}  
                        Terraform version  
  -f {terraform}, --format {terraform}  
                        Export Format  

Create your rule

Create your rule by using the command below, which creates a rule named MY_FIRST_RULE.

7RDK test$ rdk create MY_FIRST_RULE  --runtime python3.6 --resource-types AWS::EC2::SecurityGroup  
Running create!  
Local Rule files created.  

This creates the three files below. Edit “MY_FIRST_RULE.py” according to your business requirements, as described in the “Edit” section of the previous blog post.

7RDK test$ cd MY_FIRST_RULE/ 
(venv) 8c85902e4110:MY_FIRST_RULE test$ ls 
MY_FIRST_RULE.py        MY_FIRST_RULE_test.py   parameters.json

Export your rule to Terraform

Use the command below to export your rule to Terraform files. The export supports two versions of Terraform (0.11 and 0.12); use the “-v” argument to specify which one.

test$ cd ..  
7RDK test$ rdk export MY_FIRST_RULE -f terraform -v 0.12  
Running export  
Found Custom Rule.  
Zipping MY_FIRST_RULE  
Zipping complete.  
terraform version: 0.12  
Export completed.This will generate three .tf files.  
7RDK test$

This creates the following four files:

  • << rule-name >>_rule.tf: Uploads the rule to the Amazon S3 bucket, deploys the Lambda function, and creates the AWS Config rule and the required IAM roles/policies.
  • << rule-name >>_variables.tf:  Terraform variable definitions.
  • << rule-name >>.tfvars.json: Terraform variable values.
  • << rule-name >>.zip: Compiled rule code.
7RDK test$ cd MY_FIRST_RULE/  
(venv) 8c85902e4110:MY_FIRST_RULE test$ ls -1  
MY_FIRST_RULE.py  
MY_FIRST_RULE.zip  
MY_FIRST_RULE_test.py  
my_first_rule.tfvars.json  
my_first_rule_rule.tf  
my_first_rule_variables.tf  
parameters.json  

Deploy your rule using Terraform

Initialize Terraform by running “terraform init” to download the AWS provider plugin.

MY_FIRST_RULE test$ terraform init  
  
Initializing the backend...  
  
Initializing provider plugins...  
- Checking for available provider plugins...  
- Downloading plugin for provider "aws" (hashicorp/aws) 2.70.0...  
  
The following providers do not have any version constraints in configuration,  
so the latest version was installed.  
  
To prevent automatic upgrades to new major versions that may contain breaking  
changes, it is recommended to add version = "..." constraints to the  
corresponding provider blocks in configuration, with the constraint strings  
suggested below.  
  
* provider.aws: version = "~> 2.70"  
  
Terraform has been successfully initialized!  

To deploy the Config rules, your IAM role must have the required permissions, and you must specify the role ARN in my_first_rule.tfvars.json.

Applying the Terraform configuration requires two arguments:

  • var-file: The Terraform variable file name, created when exporting the rule using the RDK.
  • source_bucket: Your Amazon S3 bucket name, used to upload the Config rule Lambda code.

Make sure that the AWS provider is configured for your Terraform environment, as described in the docs.

MY_FIRST_RULE test$ terraform apply -var-file=my_first_rule.tfvars.json --var source_bucket=config-bucket-xxxxx  
  
aws_iam_policy.awsconfig_policy[0]: Creating...  
aws_iam_role.awsconfig[0]: Creating...  
aws_s3_bucket_object.rule_code: Creating...  
aws_iam_role.awsconfig[0]: Creation complete after 3s [id=my_first_rule-awsconfig-role]  
aws_iam_role_policy_attachment.readonly-role-policy-attach[0]: Creating...  
aws_iam_policy.awsconfig_policy[0]: Creation complete after 4s [id=arn:aws:iam::xxxxxxxxxxxx:policy/my_first_rule-awsconfig-policy]  
aws_iam_role_policy_attachment.awsconfig_policy_attach[0]: Creating...  
aws_s3_bucket_object.rule_code: Creation complete after 5s [id=MY_FIRST_RULE.zip]  
aws_lambda_function.rdk_rule: Creating...  
aws_iam_role_policy_attachment.readonly-role-policy-attach[0]: Creation complete after 2s [id=my_first_rule-awsconfig-role-20200726023315892200000001]  
aws_iam_role_policy_attachment.awsconfig_policy_attach[0]: Creation complete after 3s [id=my_first_rule-awsconfig-role-20200726023317242000000002]  
aws_lambda_function.rdk_rule: Still creating... [10s elapsed]  
aws_lambda_function.rdk_rule: Creation complete after 18s [id=RDK-Rule-Function-MY_FIRST_RULE]  
aws_lambda_permission.lambda_invoke: Creating...  
aws_config_config_rule.event_triggered[0]: Creating...  
aws_lambda_permission.lambda_invoke: Creation complete after 2s [id=AllowExecutionFromConfig]  
aws_config_config_rule.event_triggered[0]: Creation complete after 4s [id=MY_FIRST_RULE]  
  
Apply complete! Resources: 8 added, 0 changed, 0 destroyed.  

Log in to the AWS Management Console to validate the deployed Config rule.

Clean up

Enter the following command to remove all of the resources:

MY_FIRST_RULE test$ terraform destroy

Conclusion

With this new feature, you can export AWS Config rules developed with the RDK to Terraform files, and integrate these files into your Terraform CI/CD pipeline to provision the Config rules in AWS without using the RDK.

Code your own pinball game | Wireframe #53

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/code-your-own-pinball-game-wireframe-53/

Get flappers flapping and balls bouncing off bumpers. Mark Vanstone has the code in the new issue of Wireframe magazine, available now.

There are so many pinball video games that it’s become a genre in its own right. For the few of you who haven’t encountered pinball for some reason, it originated as an analogue arcade machine where a metal ball would be fired onto a sloping play area and bounce between obstacles. The player operates a pair of flippers by pressing buttons on each side of the machine, which will in turn ping the ball back up the play area to hit obstacles and earn points. The game ends when the ball falls through the exit at the bottom of the play area.

NES Pinball
One of the earliest pinball video games – it’s the imaginatively-named Pinball on the NES.

Recreating pinball machines for video games

Video game developers soon started trying to recreate pinball, first with fairly rudimentary graphics and physics, but with increasingly greater realism over time – if you look at Nintendo’s Pinball from 1984, then, say, Devil’s Crush on the Sega Mega Drive in 1990, and then 1992’s Pinball Dreams on PC, you can see how radically the genre evolved in just a few years. In this month’s Source Code, we’re going to put together a very simple rendition of pinball in Pygame Zero. We’re not going to use any complicated maths or physics systems, just a little algebra and trigonometry.

Let’s start with our background. We need an image which has barriers around the outside for the ball to bounce off, and a gap at the bottom for the ball to fall through. We also want some obstacles in the play area and an entrance at the side for the ball to enter when it’s first fired. In this case, we’re going to use our background as a collision map, too, so we need to design it so that all the areas that the ball can move in are black.

Pinball in Python
Here it is: your own pinball game in less than 100 lines of code.

Next, we need some flippers. These are defined as Actors with a pivot anchor position set near the larger end, and are positioned near the bottom of the play area. We detect left and right key presses and rotate the angle of the flippers by 20 degrees within a range of -30 to +30 degrees. If no key is pressed, then the flipper drops back down. With these elements in place, we have our play area and an ability for the player to defend the exit.
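
As a rough sketch of that flipper logic in Pygame Zero (the sprite name, positions, and per-frame rotation step are assumptions):

import pgzrun

WIDTH, HEIGHT = 600, 800

# Anchor near the larger end so the flipper pivots around it
left_flipper = Actor('flipper', center=(220, 700), anchor=('left', 'center'))

def update():
    if keyboard.left:
        left_flipper.angle = min(left_flipper.angle + 20, 30)   # flip up
    else:
        left_flipper.angle = max(left_flipper.angle - 20, -30)  # drop back down

def draw():
    screen.clear()
    left_flipper.draw()

pgzrun.go()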

All we need now is a ball to go bouncing around the obstacles we’ve made. Defining the ball as an Actor, we can add a direction and a speed parameter to it. With these values set, the ball can be moved using a bit of trigonometry. Our new x-coordinate will move by the sin of the ball direction multiplied by the speed, and the new y-coordinate will move by the cos of the ball direction multiplied by speed. We need to detect collisions with objects and obstacles, so we sample four pixels around the ball to see if it’s hit anything solid. If it has, we need to make the ball bounce.
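
A minimal sketch of that movement and sampling step, assuming a ball Actor with direction and speed attributes, the background loaded as a collision surface, and a bounce_ball() helper:

import math

def update_ball():
    # Move along the current direction: sin for x, cos for y
    ball.x += math.sin(ball.direction) * ball.speed
    ball.y += math.cos(ball.direction) * ball.speed

    # Sample four pixels around the ball on the background collision map
    for dx, dy in ((-12, 0), (12, 0), (0, -12), (0, 12)):
        c = background.get_at((int(ball.x + dx), int(ball.y + dy)))
        if (c.r, c.g, c.b) != (0, 0, 0):   # anything non-black is solid
            bounce_ball()
            break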

Get the code

Here’s Mark’s pinball code. To get it working on your system, you’ll need to install Pygame Zero. And to download the full code and assets, head here.

If you wanted more realistic physics, you’d calculate the reflection angle from the surface which has been hit, but in this case, we’re going to use a shortcut which will produce a rough approximation. We work out what direction the ball is travelling in and then rotate either left or right by a quarter of a turn until the ball no longer collides with a wall. We could finesse this calculation further to create a more accurate effect, but we’ll keep it simple for this sample. Finally, we need to add some gravity. As the play area is tilted downwards, we need to increase the ball speed as it travels down and decrease it as it travels up.
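
Here’s how that shortcut and the gravity tweak might look with the same hypothetical helpers (the quarter-turn step and the speed constant are assumptions):

import math

def bounce_ball():
    # Rough approximation: rotate a quarter of a turn at a time,
    # one way or the other depending on travel direction, until clear
    step = math.pi / 2 if math.sin(ball.direction) > 0 else -math.pi / 2
    while ball_touches_wall():
        ball.direction += step

def apply_gravity():
    # The table slopes downwards: speed up going down, slow down going up
    if math.cos(ball.direction) > 0:    # moving down the screen
        ball.speed += 0.05
    else:
        ball.speed -= 0.05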

All of this should give you the bare bones of a pinball game. There’s lots more you could add to increase the realism, but we’ll leave you to discover the joys of normal vectors and dot products…

Get your copy of Wireframe issue 53

You can read more features like this one in Wireframe issue 53, available directly from Raspberry Pi Press — we deliver worldwide.

And if you’d like a handy digital version of the magazine, you can also download issue 53 for free in PDF format.

The post Code your own pinball game | Wireframe #53 appeared first on Raspberry Pi.

Recreate Gradius’ rock-spewing volcanoes | Wireframe #52

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/recreate-gradius-volcanoes-wireframe-52/

Code an homage to Konami’s classic shoot-’em-up, Gradius. Mark Vanstone has the code in the new edition of Wireframe magazine, available now.

Released by Konami in 1985, Gradius – also known as Nemesis outside Japan – brought a new breed of power-up system to arcades. One of the keys to its success was the way the player could customise their Vic Viper fighter craft by gathering capsules, which could then be ‘spent’ on weapons, speed-ups, and shields from a bar at the bottom of the screen.

Gradius screenshot
The Gradius volcanoes spew rocks at the player just before the end-of-level boss ship arrives.

Flying rocks

A seminal side-scrolling shooter, Gradius was particularly striking thanks to the variety of its levels: a wide range of hazards were thrown at the player, including waves of aliens, natural phenomena, and boss ships with engine cores that had to be destroyed in order to progress. One of the first stage’s biggest obstacles was a pair of volcanoes that spewed deadly rocks into the air: the rocks could be shot for extra points or just avoided to get through to the next section. In this month’s Source Code, we’re going to have a look at how to recreate the volcano-style flying rock obstacle from the game.

Our sample uses Pygame Zero and the randint function from the random module to provide the variations of trajectory that we need our rocks to have. We’ll need an actor created for our spaceship and a list to hold our rock Actors. We can also make a bullet Actor so we can make the ship fire lasers and shoot the rocks. We build up the scene in layers in our draw() function with a star-speckled background, then our rocks, followed by the foreground of volcanoes, and finally the spaceship and bullets.

Dodge and shoot the rocks in our homage to the classic Gradius.

Get the ship moving

In the update() function, we need to handle moving the ship around with the cursor keys. We can use a limit() function to make sure it doesn’t go off the screen, and the SPACE bar to trigger the bullet to be fired. After that, we need to update our rocks. At the start of the game our list of rocks will be empty, so we’ll get a random number generated, and if the number is 1, we make a new rock and add it to the list. If we have more than 100 rocks in our list, some of them will have moved off the screen, so we may as well reuse them instead of making more new rocks. During each update cycle, we’ll need to run through our list of rocks and update their position. When we make a rock, we give it a speed and direction, then when it’s updated, we move the rock upwards by its speed and then reduce the speed by 0.2. This will make it fly into the air, slow down, and then fall to the ground. 
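
A hedged sketch of that spawn-and-update logic in Pygame Zero (the sprite name, spawn positions, and speed ranges are assumptions):

from random import randint

rocks = []

def make_rock():
    # Launch a rock from behind one of the two volcanoes
    rock = Actor('rock', pos=(randint(0, 1) * 460 + 160, 440))
    rock.vy = randint(4, 8)       # upward speed
    rock.vx = randint(-2, 2)      # sideways drift
    return rock

def update_rocks():
    if randint(1, 10) == 1:       # tune this to spawn more or fewer rocks
        if len(rocks) < 100:
            rocks.append(make_rock())   # past 100, recycle off-screen rocks instead
    for rock in rocks:
        rock.y -= rock.vy         # rises while vy is positive...
        rock.x += rock.vx
        rock.vy -= 0.2            # ...then gravity brings it back down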

Collision detection

From this code, we can make rocks appear just behind both of the volcanoes, and they’ll fly in a random direction upwards at a random speed. We can increase or decrease the number of rocks flying about by changing the random numbers that spawn them. We should be able to fly in and out of the rocks, but we could add some collision detection to check whether the rocks hit the ship – we may also want to destroy the ship if it’s hit by a rock. In our sample, we have an alternative, ‘shielded’ state to indicate that a collision has occurred. We can also check for collisions with the bullets: if a collision’s detected, we can make the rock and the bullet disappear by moving them off-screen, at which point they’re ready to be reused.
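
As a minimal sketch of those checks, assuming ship, bullet, and the rocks list are Pygame Zero Actors from the sketch above:

def check_collisions():
    for rock in rocks:
        if ship.colliderect(rock):
            ship.shielded = True          # flag the hit instead of ending the game
        if bullet.colliderect(rock):
            # Park both off-screen, ready for reuse
            rock.pos = (-100, -100)
            rock.vy = 0
            bullet.pos = (0, -100)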

That’s about it for this month’s sample, but there are many more elements from the original game that you could add yourself: extra weapons, more enemies, or even an area boss.

Here’s Mark’s volcanic code. To get it working on your system, you’ll need to install Pygame Zero. And to download the full code and assets, head here.

Get your copy of Wireframe issue 52

You can read more features like this one in Wireframe issue 52, available directly from Raspberry Pi Press — we deliver worldwide.

Wireframe issue 52's cover

And if you’d like a handy digital version of the magazine, you can also download issue 52 for free in PDF format.

The post Recreate Gradius’ rock-spewing volcanoes | Wireframe #52 appeared first on Raspberry Pi.

Hosting Hugging Face models on AWS Lambda for serverless inference

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/hosting-hugging-face-models-on-aws-lambda/

This post written by Eddie Pick, AWS Senior Solutions Architect – Startups and Scott Perry, AWS Senior Specialist Solutions Architect – AI/ML

Hugging Face Transformers is a popular open-source project that provides pre-trained natural language processing (NLP) models for a wide variety of use cases. Customers with minimal machine learning experience can use pre-trained models to enhance their applications quickly using NLP. This includes tasks such as text classification, language translation, summarization, and question answering – to name a few.

First introduced in 2017, the Transformer is a modern neural network architecture that has quickly become the most popular type of machine learning model applied to NLP tasks. It outperforms previous techniques based on convolutional neural networks (CNNs) or recurrent neural networks (RNNs). The Transformer also offers significant improvements in computational efficiency. Notably, Transformers are more conducive to parallel computation. This means that Transformer-based models can be trained more quickly, and on larger datasets than their predecessors.

The computational efficiency of Transformers provides the opportunity to experiment and improve on the original architecture. Over the past few years, the industry has seen the introduction of larger and more powerful Transformer models. For example, BERT was first published in 2018 and was able to achieve better benchmark scores on 11 natural language processing tasks using between 110M and 340M neural network parameters. In 2019, the T5 model, using 11B parameters, achieved better results on benchmarks such as summarization, question answering, and text classification. More recently, the GPT-3 model was introduced in 2020 with 175B parameters, and in 2021 Switch Transformers scaled to over 1T parameters.

One consequence of this trend toward larger and more powerful models is an increased barrier to entry. As the number of model parameters increases, so does the computational infrastructure that is necessary to train such a model. This is where the open-source Hugging Face Transformers project helps.

Hugging Face Transformers provides over 30 pretrained Transformer-based models via a straightforward Python package. Additionally, there are over 10,000 community-developed models available for download from Hugging Face. This allows users to incorporate modern Transformer models into their applications without training them from scratch.

The Hugging Face Transformers project directly addresses challenges associated with training modern Transformer-based models. Many customers want a zero administration ML inference solution that allows Hugging Face Transformers models to be hosted in AWS easily. This post introduces a low touch, cost effective, and scalable mechanism for hosting Hugging Face models for real-time inference using AWS Lambda.

Overview

Our solution consists of an AWS Cloud Development Kit (AWS CDK) script that automatically provisions container image-based Lambda functions that perform ML inference using pre-trained Hugging Face models. This solution also includes Amazon Elastic File System (EFS) storage that is attached to the Lambda functions to cache the pre-trained models and reduce inference latency.

Solution architecture

In this architectural diagram:

  1. Serverless inference is achieved by using Lambda functions that are based on container images
  2. The container image is stored in an Amazon Elastic Container Registry (ECR) repository within your account
  3. Pre-trained models are automatically downloaded from Hugging Face the first time the function is invoked
  4. Pre-trained models are cached within Amazon Elastic File System storage in order to improve inference latency

The solution includes Python scripts for two common NLP use cases:

  • Sentiment analysis: Identifying whether a sentence indicates positive or negative sentiment. It uses a model fine-tuned on sst2, which is a GLUE task.
  • Summarization: Summarizing a body of text into a shorter, representative text. It uses a Bart model that was fine-tuned on the CNN / Daily Mail dataset.

For simplicity, both of these use cases are implemented using Hugging Face pipelines.

Prerequisites

The following is required to run this example:

Deploying the example application

  1. Clone the project to your development environment:
    git clone https://github.com/aws-samples/zero-administration-inference-with-aws-lambda-for-hugging-face.git
  2. Install the required dependencies:
    pip install -r requirements.txt
  3. Bootstrap the CDK. This command provisions the initial resources needed by the CDK to perform deployments:
    cdk bootstrap
  4. This command deploys the CDK application to its environment. During the deployment, the toolkit outputs progress indications:
    $ cdk deploy

Testing the application

After deployment, navigate to the AWS Management Console to find and test the Lambda functions. There is one for sentiment analysis and one for summarization.

To test:

  1. Enter “Lambda” in the search bar of the AWS Management Console.
  2. Filter the functions by entering “ServerlessHuggingFace”.
  3. Select the ServerlessHuggingFaceStack-sentimentXXXXX function.
  4. In the Test event, enter the following snippet and then choose Test:
{
   "text": "I'm so happy I could cry!"
}

The first invocation takes approximately one minute to complete. The initial Lambda function environment must be allocated and the pre-trained model must be downloaded from Hugging Face. Subsequent invocations are faster, as the Lambda function is already prepared and the pre-trained model is cached in EFS.

The JSON response shows the result of the sentiment analysis:

{
  "statusCode": 200,
  "body": {
    "label": "POSITIVE",
    "score": 0.9997532367706299
  }
}

Understanding the code structure

The code is organized using the following structure:

├── inference
│ ├── Dockerfile
│ ├── sentiment.py
│ └── summarization.py
├── app.py
└── ...

The inference directory contains:

  • The Dockerfile used to build a custom image to be able to run PyTorch Hugging Face inference using Lambda functions
  • The Python scripts that perform the actual ML inference

The sentiment.py script shows how to use a Hugging Face Transformers model:

import json
from transformers import pipeline

nlp = pipeline("sentiment-analysis")

def handler(event, context):
    response = {
        "statusCode": 200,
        "body": nlp(event['text'])[0]
    }
    return response
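
The summarization script follows the same pattern. As a hedged sketch of what summarization.py might contain (the exact script in the repository may differ):

from transformers import pipeline

# Loads the default summarization model on cold start
summarizer = pipeline("summarization")

def handler(event, context):
    response = {
        "statusCode": 200,
        "body": summarizer(event['text'])[0]
    }
    return response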

For each Python script in the inference directory, the CDK generates a Lambda function backed by a container image and a Python inference script.

CDK script

The CDK script is named app.py in the solution’s repository. The beginning of the script creates a virtual private cloud (VPC).

vpc = ec2.Vpc(self, 'Vpc', max_azs=2)

Next, it creates the EFS file system and an access point in EFS for the cached models:

        fs = efs.FileSystem(self, 'FileSystem',
                            vpc=vpc,
                            removal_policy=cdk.RemovalPolicy.DESTROY)
        access_point = fs.add_access_point('MLAccessPoint',
                                           create_acl=efs.Acl(
                                               owner_gid='1001', owner_uid='1001', permissions='750'),
                                           path="/export/models",
                                           posix_user=efs.PosixUser(gid="1001", uid="1001"))

It iterates through the Python files in the inference directory:

docker_folder = os.path.dirname(os.path.realpath(__file__)) + "/inference"
pathlist = Path(docker_folder).rglob('*.py')
for path in pathlist:

And then creates the Lambda function that serves the inference requests:

            base = os.path.basename(path)
            filename = os.path.splitext(base)[0]
            # Lambda Function from docker image
            function = lambda_.DockerImageFunction(
                self, filename,
                code=lambda_.DockerImageCode.from_image_asset(docker_folder,
                                                              cmd=[
                                                                  filename+".handler"]
                                                              ),
                memory_size=8096,
                timeout=cdk.Duration.seconds(600),
                vpc=vpc,
                filesystem=lambda_.FileSystem.from_efs_access_point(
                    access_point, '/mnt/hf_models_cache'),
                environment={
                    "TRANSFORMERS_CACHE": "/mnt/hf_models_cache"},
            )

Adding a translator

Optionally, you can add more models by adding Python scripts in the inference directory. For example, add the following code in a file called translate-en2fr.py:

import json
from transformers import pipeline

en_fr_translator = pipeline('translation_en_to_fr')

def handler(event, context):
    response = {
        "statusCode": 200,
        "body": en_fr_translator(event['text'])[0]
    }
    return response

Then run:

$ cdk synth
$ cdk deploy

This creates a new endpoint to perform English to French translation.

Cleaning up

After you are finished experimenting with this project, run “cdk destroy” to remove all of the associated infrastructure.

Conclusion

This post shows how to perform ML inference for pre-trained Hugging Face models by using Lambda functions. To avoid repeatedly downloading the pre-trained models, this solution uses an EFS-based approach to model caching. This helps to achieve low-latency, near real-time inference. The solution is provided as infrastructure as code using Python and the AWS CDK.

We hope this blog post allows you to prototype quickly and include modern NLP techniques in your own products.

Introducing new self-paced courses to improve Java and Python code quality with Amazon CodeGuru

Post Syndicated from Rafael Ramos original https://aws.amazon.com/blogs/devops/new-self-paced-courses-to-improve-java-and-python-code-quality-with-amazon-codeguru/

Amazon CodeGuru icon

During the software development lifecycle, organizations have adopted peer code reviews as a common practice to keep improving code quality and prevent bugs from reaching applications in production. Developers traditionally perform those code reviews manually, which causes bottlenecks and blocks releases while waiting for the peer review. Besides impacting the teams’ agility, it’s a challenge to maintain a high bar for code reviews during the development workflow. This is especially challenging for less experienced developers, who have more difficulty identifying defects such as thread concurrency issues and resource leaks.

With Amazon CodeGuru Reviewer, developers have an automated code review tool that catches critical issues, security vulnerabilities, and hard-to-find bugs during application development. CodeGuru Reviewer is powered by pre-trained machine learning (ML) models and uses millions of code reviews on thousands of open-source and Amazon repositories. It also provides recommendations on how to fix issues to improve code quality and reduces the time it takes to fix bugs before they reach customer-facing applications. Java and Python developers can simply add Amazon CodeGuru to their existing development pipeline and save time and reduce the cost and burden of bad code.

If you’re new to writing code or an experienced developer looking to automate code reviews, we’re excited to announce two new courses on CodeGuru Reviewer. These courses, developed by the AWS Training and Certification team, consist of guided walkthroughs, gaming elements, knowledge checks, and a final course assessment.

About the course

During these courses, you learn how to use CodeGuru Reviewer to automatically scan your code base, identify hard-to-find bugs and vulnerabilities, and get recommendations for fixing the bugs and security issues. The course covers CodeGuru Reviewer’s main features, provides a peek into how CodeGuru finds code anomalies, describes how its ML models were built, and explains how to understand and apply its prescriptive guidance and recommendations. Besides helping to improve code quality, those recommendations are useful for new developers to learn coding best practices, such as refactoring duplicated code, correctly implementing concurrency constructs, and avoiding resource leaks.

The CodeGuru courses are designed to be completed within a 2-week time frame. The courses comprise 60 minutes of videos, which include 15 main lectures. Four of the lectures are specific to Java, and four focus on Python. The courses also include exercises and assessments at the end of each week, to provide you with in-depth, hands-on practice in a lab environment.

Week 1

During the first week, you learn the basics of CodeGuru Reviewer, including how you can benefit from ML and automated reasoning to perform static code analysis and identify critical defects and deviations from coding best practices. You also learn what kind of actionable recommendations CodeGuru Reviewer provides, such as refactoring, resource leak, potential race conditions, deadlocks, and security analysis. In addition, the course covers how to integrate this tool into your development workflow, such as your CI/CD pipeline.

Topics include:

  • What is Amazon CodeGuru?
  • How CodeGuru Reviewer is trained to provide intelligent recommendations
  • CodeGuru Reviewer recommendation categories
  • How to integrate CodeGuru Reviewer into your workflow

Week 2

Throughout the second week, you have the chance to explore CodeGuru Reviewer in more depth. With Java and Python code snippets, you have a more hands-on experience and dive into each recommendation category. You use these examples to learn how CodeGuru Reviewer looks for duplicated lines of code to suggest refactoring opportunities, how it detects code maintainability issues, and how it prevents resource leaks and concurrency bugs.

Topics include (for both Java and Python):

  • Common coding best practices
  • Resource leak prevention
  • Security analysis

Get started

Developed at the source, these new digital courses empower you to learn about CodeGuru from the experts at AWS whenever, wherever you want. Advance your skills and knowledge to build your future in the AWS Cloud. Enroll today:

Rafael Ramos

Rafael is a Solutions Architect at AWS, where he helps ISVs on their journey to the cloud. He spent over 13 years working as a software developer, and is passionate about DevOps and serverless. Outside of work, he enjoys playing tabletop RPG, cooking and running marathons.

Integrate GitHub monorepo with AWS CodePipeline to run project-specific CI/CD pipelines

Post Syndicated from Vivek Kumar original https://aws.amazon.com/blogs/devops/integrate-github-monorepo-with-aws-codepipeline-to-run-project-specific-ci-cd-pipelines/

AWS CodePipeline is a continuous delivery service that enables you to model, visualize, and automate the steps required to release your software. With CodePipeline, you model the full release process for building your code, deploying to pre-production environments, testing your application, and releasing it to production. CodePipeline then builds, tests, and deploys your application according to the defined workflow either in manual mode or automatically every time a code change occurs. A lot of organizations use GitHub as their source code repository. Some organizations choose to embed multiple applications or services in a single GitHub repository separated by folders. This method of organizing your source code in a repository is called a monorepo.

This post demonstrates how to customize GitHub events that invoke a monorepo service-specific pipeline by reading the GitHub event payload using AWS Lambda.

Solution overview

With the default setup in CodePipeline, a release pipeline is invoked whenever a change in the source code repository is detected. When using GitHub as the source for a pipeline, CodePipeline uses a webhook to detect changes in a remote branch and starts the pipeline. When using a monorepo-style project with GitHub, it doesn’t matter which folder of the repository you change code in: CodePipeline receives an event at the repository level. If you have a continuous integration and continuous deployment (CI/CD) pipeline for each of the applications and services in a repository, every pipeline detects the change every time, regardless of which folder changed. The following diagram illustrates this scenario.

GitHub monorepo folder structure

By reading the GitHub event payload in a Lambda function, you can evaluate each push and start only the pipelines that are relevant to the change. This solution has the following benefits:

  • Add customizations to start pipelines based on external factors – You can use custom code to evaluate whether a pipeline should be triggered. This allows for further customization beyond polling a source repository or relying on a push event. For example, you can create custom logic to automatically reschedule deployments on holidays to the next available workday.
  • Have multiple pipelines with a single source – You can trigger selected pipelines when multiple pipelines are listening to a single GitHub repository. This lets you group small and highly related but independently shipped artifacts such as small microservices without creating thousands of GitHub repos.
  • Avoid reacting to unimportant files – You can avoid triggering a pipeline when changing files that don’t affect the application functionality (such as documentation, readme, PDF, and .gitignore files).

In this post, we’re not debating the advantages or disadvantages of a monorepo versus a single repo, or when to create monorepos or single repos for each application or project.

Sample architecture

This post focuses on controlling running pipelines in CodePipeline. CodePipeline can have multiple stages like test, approval, and deploy. Our sample architecture considers a simple pipeline with two stages: source and build.

GitHub monorepo - CodePipeline sample architecture

This solution is made up of the following parts:

  • An Amazon API Gateway endpoint (3) is backed by a Lambda function (5) to receive and authenticate GitHub webhook push events (2)
  • The same function evaluates incoming GitHub push events and starts the pipeline on a match
  • An Amazon Simple Storage Service (Amazon S3) bucket (4) stores the CodePipeline-specific configuration files
  • The pipeline contains a build stage with AWS CodeBuild

Normally, after you create a CI/CD pipeline, it automatically releases the latest version of your source code, and from then on, every change to the source code triggers the pipeline. You can also manually run the last revision through a pipeline by choosing Release change on the CodePipeline console. This architecture uses that manual mode to run the pipeline: the Lambda function evaluates GitHub push events and branch changes so that commits which change only unimportant files don’t start the pipeline.

Creating an API Gateway endpoint

We need a single API Gateway endpoint backed by a Lambda function with the responsibility of authenticating and validating incoming requests from GitHub. You can authenticate requests using HMAC security or GitHub Apps. API Gateway only needs one POST method to consume GitHub push events, as shown in the following screenshot.

Creating an API Gateway endpoint
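
As an illustration of the HMAC option, here’s a minimal sketch of verifying GitHub’s signature header inside the Lambda function (the helper name is ours; the secret is the one configured on the webhook):

import hashlib
import hmac

def signature_is_valid(secret: bytes, body: bytes, signature_header: str) -> bool:
    # GitHub sends 'X-Hub-Signature-256: sha256=<HMAC-SHA256 hex digest of the raw body>'
    expected = 'sha256=' + hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)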

Creating the Lambda function

This Lambda function is responsible for authenticating and evaluating the GitHub events. As part of the evaluation process, the function can parse through the GitHub events payload, determine which files are changed, added, or deleted, and perform the appropriate action:

  • Start a single pipeline, depending on which folder is changed in GitHub
  • Start multiple pipelines
  • Ignore the changes if non-relevant files are changed

You can store the project configuration details in Amazon S3. Lambda can read this configuration to decide what needs to be done when a particular folder is matched from a GitHub event. The following code is an example configuration:

{
    "GitHubRepo": "SampleRepo",
    "GitHubBranch": "main",
    "ChangeMatchExpressions": "ProjectA/.*",
    "IgnoreFiles": "*.pdf;*.md",
    "CodePipelineName": "ProjectA - CodePipeline"
}

For more complex use cases, you can store the configuration file in Amazon DynamoDB.
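
Before the handler shown below, here’s a hedged sketch of how the function might load that JSON configuration from Amazon S3 and match a changed file path against it (the bucket, key, and matching semantics are assumptions):

import json
import re

import boto3

s3 = boto3.client('s3')

def load_config(bucket, key):
    # Read the JSON configuration shown above from S3
    obj = s3.get_object(Bucket=bucket, Key=key)
    return json.loads(obj['Body'].read())

def matching_pipeline(file_path, config):
    # Skip files matching an ignore pattern, e.g. '*.pdf;*.md'
    for pattern in config['IgnoreFiles'].split(';'):
        if file_path.endswith(pattern.lstrip('*')):
            return None
    # Return the configured pipeline when the folder pattern matches
    if re.match(config['ChangeMatchExpressions'], file_path):
        return config['CodePipelineName']
    return None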

The following is the sample Lambda function code in Python 3.7 using Boto3:

import json

import boto3

cpclient = None

def codepipeline_client():
    # Create the CodePipeline client once and reuse it across invocations
    global cpclient
    if not cpclient:
        cpclient = boto3.client('codepipeline')
    return cpclient

def start_code_pipeline(pipeline_name):
    client = codepipeline_client()
    client.start_pipeline_execution(name=pipeline_name)
    return True

def lambda_handler(event, context):
    # Extract the top-level folder name from the first modified file
    # in the first commit of the GitHub push event
    modified_files = event["commits"][0]["modified"]
    folder_name = ""
    for file_path in modified_files:
        folder_name = file_path[:file_path.find("/")]
        break

    # Start the pipeline. The pipeline name is <foldername>-job;
    # the mapping could also be read from the configuration in S3.
    if folder_name:
        start_code_pipeline(folder_name + '-job')

    return {
        'statusCode': 200,
        'body': json.dumps('Modified project in repo: ' + folder_name)
    }

Creating a GitHub webhook

GitHub provides webhooks to allow external services to be notified on certain events. For this use case, we create a webhook for a push event. This generates a POST request to the URL (API Gateway URL) specified for any files committed and pushed to the repository. The following screenshot shows our webhook configuration.

Creating a GitHub webhook

Conclusion

In our sample architecture, two pipelines monitor the same GitHub source code repository. A Lambda function decides which pipeline to run based on the GitHub events. The same function can have logic to ignore unimportant files, for example any readme or PDF files.

Using API Gateway, Lambda, and Amazon S3 in combination serves as a general example to introduce custom logic to invoke pipelines. You can expand this solution for increasingly complex processing logic.

About the Authors

Vivek Kumar

Vivek is a Solutions Architect at AWS based out of New York. He works with customers providing technical assistance and architectural guidance on various AWS services. He brings more than 23 years of experience in software engineering and architecture roles for various large-scale enterprises.

Gaurav Sharma

Gaurav is a Solutions Architect at AWS. He works with digital native business customers providing architectural guidance on AWS services.

Nitin Aggarwal

Nitin is a Solutions Architect at AWS. He works with digital native business customers providing architectural guidance on AWS services.