Amazon Q Developer is a generative artificial intelligence (AI) powered conversational assistant that can help you understand, build, extend, and operate AWS applications. You can ask questions about AWS architecture, your AWS resources, best practices, documentation, support, and more.
With Amazon Q Developer in your IDE, you can write a comment in natural language that outlines a specific task, such as “Upload a file with server-side encryption.” Based on this information, Amazon Q Developer recommends one or more code snippets directly in the IDE that can accomplish the task. You can quickly and easily accept the top suggestion (Tab key), view more suggestions (arrow keys), or continue writing your own code.
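For example, given a comment like the one above, the kind of suggestion you might see could look like the following sketch (the bucket and file names are placeholders, and the actual suggestion will vary):

import boto3

s3 = boto3.client("s3")

# Upload a file with server-side encryption
with open("report.csv", "rb") as f:
    s3.put_object(
        Bucket="amzn-example-bucket",  # placeholder bucket name
        Key="report.csv",
        Body=f,
        ServerSideEncryption="AES256",  # SSE-S3 managed encryption
    )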
However, Amazon Q Developer in the IDE is more than just a code completion plugin. Amazon Q Developer is a generative AI (GenAI) powered assistant for software development that can be used to have a conversation about your code, get code suggestions, or ask questions about building software. This provides the benefits of collaborative pair programming, powered by GenAI models that have been trained on billions of lines of code from the Amazon internal codebase and publicly available sources.
The challenge
At the 2024 AWS Summit in Sydney, an exhilarating code challenge took center stage, pitting a Blue Team against a Red Team, with approximately 10 to 15 challengers in each team, in a battle of coding prowess. The challenge consisted of 20 tasks, starting with basic math and string manipulation, and progressively escalating in difficulty to include complex algorithms and intricate ciphers.
The Blue Team had a distinct advantage, leveraging the powerful capabilities of Amazon Q Developer, the most capable generative AI-powered assistant for software development. With Q Developer’s guidance, the Blue Team navigated increasingly complex tasks with ease, tapping into Q Developer’s vast knowledge base and problem-solving abilities. In contrast, the Red Team competed without assistance, relying solely on their own coding expertise and problem-solving skills to tackle daunting challenges.
As the competition unfolded, the two teams battled it out, each striving to outperform the other. The Blue Team’s efficient use of Amazon Q Developer proved to be a game-changer, allowing them to tackle the most challenging tasks with remarkable speed and accuracy. However, the Red Team’s sheer determination and technical prowess kept them in the running, showcasing their ability to think outside the box and devise innovative solutions.
The culmination of the code challenge was a thrilling finale, with both teams pushing the boundaries of their skills and ultimately leaving the audience in a state of admiration for their remarkable achievements.
The graph of average completion times shows that Team Blue (“Q Developer”) completed more questions across the board, and in less time, than Team Red (“Solo Coder”). Within the 1-hour time limit, Team Blue got all the way to Question 19, whereas Team Red only got to Question 16.
A few assumptions and caveats apply. People who considered themselves very experienced programmers were encouraged to choose the Red Team and forgo AI, to test themselves against the AI-assisted Blue Team. The code challenges were designed to test the output of applying logic, and were deliberately solvable without Amazon Q Developer, so that the exercise measured how much Q Developer optimized the writing of logical code. As a result, the tasks worked well with Amazon Q Developer, given the nature of its underlying model training. Many attendees were not Python programmers (the challenge was constrained to Python only) and walked away impressed at how much of the challenge they could complete.
One of the more complex questions competitors were given to solve was:
Implement the rail fence cipher.
In the Rail Fence cipher, the message is written downwards on successive "rails" of an imaginary fence, moving back up when we get to the bottom (like a zig-zag). Finally, the message is read off in rows.
For example, using three "rails" and the message "WE ARE DISCOVERED FLEE AT ONCE", the cipherer writes out:
W . . . E . . . C . . . R . . . L . . . T . . . E
. E . R . D . S . O . E . E . F . E . A . O . C .
. . A . . . I . . . V . . . D . . . E . . . N . .
Then reads off: WECRLTEERDSOEEFEAOCAIVDEN
Given variable a. Use a three-rail fence cipher so that result is equal to the decoded message of variable a.
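For reference, one possible Python solution for the decoding task reconstructs the zig-zag pattern and reads the characters back off the rails (a sketch only; competitors’ actual solutions varied):

def decode_rail_fence(ciphertext, rails=3):
    # Rebuild the zig-zag pattern to find which rail each plaintext position uses
    pattern = []
    rail, step = 0, 1
    for _ in range(len(ciphertext)):
        pattern.append(rail)
        if rail == 0:
            step = 1
        elif rail == rails - 1:
            step = -1
        rail += step

    # The ciphertext was read off rail by rail, so slice it back apart
    chars = iter(ciphertext)
    rail_chars = [[next(chars) for _ in range(pattern.count(r))] for r in range(rails)]

    # Walk the zig-zag again, pulling the next character from each rail in turn
    positions = [0] * rails
    plaintext = []
    for r in pattern:
        plaintext.append(rail_chars[r][positions[r]])
        positions[r] += 1
    return "".join(plaintext)

a = "WECRLTEERDSOEEFEAOCAIVDEN"
result = decode_rail_fence(a)
print(result)  # WEAREDISCOVEREDFLEEATONCE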
The questions were both algorithmic and logical in nature, which made them well suited to comparing the two approaches: using conversational natural language with Amazon Q Developer to solve a question, or applying one’s own logic to write the code by hand.
Top-scoring individual per team:

Team | Total questions completed | Individual time (min)
With Q Developer (Blue Team) | 19 | 30.46
Solo Coder (Red Team) | 16 | 58.06
Comparing the top two competitors, and considering that the solo coder was a highly experienced programmer while the top Q Developer coder was a relatively new programmer unfamiliar with Python, you can see the efficiency gain of using Q Developer as an AI pair programmer. It took the solo coder nearly the entire 60 minutes to complete 16 questions, whereas the Q Developer coder reached the final question (Question 20, incomplete) in roughly half that time.
Summary
Integrating advanced IDE features and adopting pair programming have significantly improved coding efficiency and quality. However, the introduction of Amazon Q Developer has taken this evolution to new heights. By tapping into Q Developer’s vast knowledge base and problem-solving capabilities, the Blue Team was able to navigate complex coding challenges with remarkable speed and accuracy, outperforming the unassisted Red Team. This highlights the transformative impact of leveraging generative AI as a collaborative pair programmer in modern software development, delivering greater efficiency, better problem-solving, and, ultimately, higher-quality code. Get started with Amazon Q Developer in your IDE by installing the plugin and signing in with your AWS Builder ID today.
Customers often need to architect solutions to support components across multiple cloud service providers, a need which may arise if they have acquired a company running on another cloud, or for functional purposes where specific services provide a differentiated capability. In this post, we will show you how to use the AWS Cloud Development Kit (AWS CDK) to create a single pane of glass for managing your multicloud resources.
AWS CDK is an open source framework that builds on the underlying functionality provided by AWS CloudFormation. It allows developers to define cloud resources using common programming languages and an abstraction model based on reusable components called constructs. There is a misconception that CloudFormation and CDK can only be used to provision resources on AWS, but this is not the case. The CloudFormation registry, with support for third-party resource types and custom resource providers, allows any resource that can be configured via an API to be created and managed, regardless of where it is located.
Multicloud solution design paradigm
Multicloud solutions are often designed with services grouped and separated by cloud provider, creating a segregation of resources and functions within the design. This approach duplicates layers of the solution, most commonly the resources and the deployment processes for each environment. The duplication increases cost and management complexity, multiplying the potential break points within the solution or practice.
Along with the need to simplify resource deployments, the ever-increasing complexity of customer needs has driven demand for IaC solutions that can deploy resources across hybrid or multicloud environments. In meeting this need, a proliferation of supported tools, frameworks, languages, and practices has created “choice overload”. At worst, this scares the non-cloud-savvy away from adopting an IaC solution that would benefit their cloud journey, and at best it obscures the very reason for adopting an IaC practice.
A single pane of glass
Systems thinking is a holistic approach that focuses on the way a system’s constituent parts interrelate and how systems work as a whole, especially over time and within the context of larger systems. Systems thinking is commonly accepted as the backbone of a successful systems engineering approach. Designing solutions from a full-systems view, based on each component’s function and interrelations within the system across environments, aligns naturally with deploying every cloud-specific resource from a single control plane.
While AWS provides a list of services that can be used to help design, manage, and operate hybrid and multicloud solutions, with AWS as the primary cloud you can go beyond just using services to support multicloud. CloudFormation registry resource types model and provision resources using custom logic, as components of CloudFormation stacks. Public extensions are provided not only by AWS: third-party publishers also make extensions available for general use, and customers can create their own extensions and publish them for anyone to use.
The AWS CDK, which has a 1:1 mapping of all AWS CloudFormation resources, as well as a library of abstracted constructs, supports the ability to import custom AWS CloudFormation extensions, enabling customers and partners to create custom AWS CDK constructs for their extensions. The chosen programming language can be used to inherit and abstract the custom resource into reusable AWS CDK constructs, allowing developers to create solutions that contain native AWS extensions along with secondary hybrid or alternate cloud resources.
Providing the ability to integrate mixed resources in the same stack more closely aligns with the functional design and often diagrammatic depiction of the solution. In essence, we are creating a single IaC pane of glass over the entire solution, deployed through a single control plane. This lowers the complexity and the cost of maintaining separate modules and deployment pipelines across multiple cloud providers.
A common multicloud use case: disaster recovery
One of the most common reasons to use components across different cloud providers is the need to maintain data sovereignty while designing disaster recovery (DR) into a solution.
Data sovereignty is the idea that data is subject to the laws of the place where it is physically located. In some countries this extends to regulations requiring that data collected from citizens of a geographical area reside on servers located in the jurisdictions of that area, or in countries with a similar scope and rigor in their data protection laws.
This requires organizations to remain in compliance with the data sovereignty regulations of their host country, and in cases such as state government agencies, within the stricter scope of state boundaries. Unfortunately, not all countries, and especially not all states, have multiple AWS Regions to select from when designing where their primary and recovery data backups will reside. Therefore, the DR solution needs to take advantage of multiple cloud providers in the same geography, and such a solution must be designed to back up or replicate data across providers.
The multicloud solution
A multicloud solution to the proposed use case backs up data from an AWS resource, such as an Amazon S3 bucket, to another cloud provider within the same geography, such as an Azure Blob Storage container, using AWS event-driven behavior to trigger the copying of data from the primary AWS resource to the secondary Azure backup resource.
Following the IaC single-pane-of-glass approach, the Azure Blob Storage container is created as a resource type in the CloudFormation registry and imported into the AWS CDK to be used as a construct in the solution. However, before the extension resource type can be used effectively in the CDK as a reusable construct and added to your private library, you first need to go through the process of importing it into the CDK and creating constructs.
There are three different levels of constructs, beginning with low-level constructs, which are called CFN Resources (or L1, short for “layer 1”). These constructs directly represent all resources available in AWS CloudFormation. They are named CfnXyz, where Xyz is the name of the resource.
Layer 1 Construct
In this example, an L1 construct named CfnAzureBlobStorage represents an Azure::BlobStorage AWS CloudFormation extension. Here you also explicitly expose the ref property, so that higher-level constructs can access the output value, which is the URL of the Azure blob container being provisioned.
As with every CDK Construct, the constructor arguments are scope, id and props. scope and id are propagated to the cdk.Construct base class. The props argument is of type CfnAzureBlobStorageProps which includes four properties all of type string. This is how the Azure credentials are propagated down from upstream constructs.
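A sketch of what this L1 construct might look like follows; the registered type name and property names here are assumptions, as the real shape comes from the extension’s schema:

import { CfnResource } from "aws-cdk-lib";
import { Construct } from "constructs";

// Property names are assumptions; the extension's schema defines the real shape.
export interface CfnAzureBlobStorageProps {
  readonly azureSubscriptionId: string;
  readonly azureClientId: string;
  readonly azureTenantId: string;
  readonly azureClientSecretName: string;
}

// L1 construct wrapping the Azure::BlobStorage CloudFormation extension
export class CfnAzureBlobStorage extends CfnResource {
  constructor(scope: Construct, id: string, props: CfnAzureBlobStorageProps) {
    super(scope, id, {
      type: "Azure::BlobStorage", // assumed registered type name
      properties: {
        AzureSubscriptionId: props.azureSubscriptionId,
        AzureClientId: props.azureClientId,
        AzureTenantId: props.azureTenantId,
        AzureClientSecretName: props.azureClientSecretName,
      },
    });
  }
  // The inherited `ref` token resolves to the URL of the provisioned blob container,
  // which is what higher-level constructs consume.
}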
Layer 2 Construct
The next level of constructs, L2, also represent AWS resources, but with a higher-level, intent-based API. They provide similar functionality, but incorporate the defaults, boilerplate, and glue logic you’d be writing yourself with a CFN Resource construct. They also provide convenience methods that make it simpler to work with the resource.
In this example, an L2 construct is created to abstract the CfnAzureBlobStorage L1 construct and provides additional properties and methods.
The custom L2 construct class is declared as AzureBlobStorage, without the Cfn prefix used for L1 constructs. The constructor arguments include the Azure credentials and client secret, and the ref from the L1 construct is exposed through the public variable blobContainerUrl.
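A sketch of this L2 construct, written to match how the L3 construct later in this post consumes it (the import path of the L1 construct is an assumption), might look like this:

import { Construct } from "constructs";
import { CfnAzureBlobStorage } from "./cfn-azure-blob-storage"; // assumed file name

// L2 construct abstracting the CfnAzureBlobStorage L1 construct
export class AzureBlobStorage extends Construct {
  // The URL of the provisioned blob container, exposed for other constructs
  public readonly blobContainerUrl: string;

  constructor(
    scope: Construct,
    id: string,
    azureSubscriptionId: string,
    azureClientId: string,
    azureTenantId: string,
    azureClientSecretName: string
  ) {
    super(scope, id);

    const cfnAzureBlobStorage = new CfnAzureBlobStorage(this, "CfnAzureBlobStorage", {
      azureSubscriptionId,
      azureClientId,
      azureTenantId,
      azureClientSecretName,
    });

    // Surface the L1 ref (the blob container URL) as a typed public property
    this.blobContainerUrl = cfnAzureBlobStorage.ref;
  }
}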
As an L2 construct, AzureBlobStorage can be used in CDK apps along with AWS resource constructs in the same stack, provisioned through AWS CloudFormation, creating the IaC single pane of glass for a multicloud solution.
Layer 3 Construct
The true value of the CDK construct programming model is in the ability to extend L2 constructs, which represent a single resource, into a composition of multiple constructs that provide a solution for a common task. These are Layer 3 (L3) constructs, also known as patterns.
In this example, the L3 construct represents the solution architecture to backup objects uploaded to an Amazon S3 bucket into an Azure Blob Storage container in real-time, using AWS Lambda to process event notifications from Amazon S3.
import { RemovalPolicy, Duration, CfnOutput } from "aws-cdk-lib";
import { Bucket, BlockPublicAccess, EventType } from "aws-cdk-lib/aws-s3";
import { DockerImageFunction, DockerImageCode } from "aws-cdk-lib/aws-lambda";
import { PolicyStatement, Effect } from "aws-cdk-lib/aws-iam";
import { LambdaDestination } from "aws-cdk-lib/aws-s3-notifications";
import { IStringParameter, StringParameter } from "aws-cdk-lib/aws-ssm";
import { Secret, ISecret } from "aws-cdk-lib/aws-secretsmanager";
import { Construct } from "constructs";
import { AzureBlobStorage } from "./azure-blob-storage";

// L3 Construct
export class S3ToAzureBackupService extends Construct {
  constructor(
    scope: Construct,
    id: string,
    azureSubscriptionIdParamName: string,
    azureClientIdParamName: string,
    azureTenantIdParamName: string,
    azureClientSecretName: string
  ) {
    super(scope, id);

    // Retrieve existing SSM Parameters
    const azureSubscriptionIdParameter = this.getSSMParameter("AzureSubscriptionIdParam", azureSubscriptionIdParamName);
    const azureClientIdParameter = this.getSSMParameter("AzureClientIdParam", azureClientIdParamName);
    const azureTenantIdParameter = this.getSSMParameter("AzureTenantIdParam", azureTenantIdParamName);

    // Retrieve existing Azure Client Secret
    const azureClientSecret = this.getSecret("AzureClientSecret", azureClientSecretName);

    // Create an S3 bucket
    const sourceBucket = new Bucket(this, "SourceBucketForAzureBlob", {
      removalPolicy: RemovalPolicy.RETAIN,
      blockPublicAccess: BlockPublicAccess.BLOCK_ALL,
    });

    // Create a corresponding Azure Blob Storage account and a Blob Container
    const azureBlobStorage = new AzureBlobStorage(
      this,
      "MyCustomAzureBlobStorage",
      azureSubscriptionIdParameter.stringValue,
      azureClientIdParameter.stringValue,
      azureTenantIdParameter.stringValue,
      azureClientSecretName
    );

    // Create a Lambda function that will receive notifications from the S3 bucket
    // and copy each newly uploaded object to Azure Blob Storage
    const copyObjectToAzureLambda = new DockerImageFunction(
      this,
      "CopyObjectsToAzureLambda",
      {
        timeout: Duration.seconds(60),
        code: DockerImageCode.fromImageAsset("copy_s3_fn_code", {
          buildArgs: {
            "--platform": "linux/amd64"
          }
        }),
      },
    );

    // Add an IAM policy statement to allow the Lambda function to access the
    // S3 bucket
    sourceBucket.grantRead(copyObjectToAzureLambda);

    // Add an IAM policy statement to allow the Lambda function to get the contents
    // of an S3 object
    copyObjectToAzureLambda.addToRolePolicy(
      new PolicyStatement({
        effect: Effect.ALLOW,
        actions: ["s3:GetObject"],
        resources: [`arn:aws:s3:::${sourceBucket.bucketName}/*`],
      })
    );

    // Set up an S3 bucket notification to trigger the Lambda function
    // when an object is uploaded
    sourceBucket.addEventNotification(
      EventType.OBJECT_CREATED,
      new LambdaDestination(copyObjectToAzureLambda)
    );

    // Grant the Lambda function read access to existing SSM Parameters
    azureSubscriptionIdParameter.grantRead(copyObjectToAzureLambda);
    azureClientIdParameter.grantRead(copyObjectToAzureLambda);
    azureTenantIdParameter.grantRead(copyObjectToAzureLambda);

    // Put the Azure Blob Container URL into SSM Parameter Store
    this.createStringSSMParameter(
      "AzureBlobContainerUrl",
      "Azure blob container URL",
      "/s3toazurebackupservice/azureblobcontainerurl",
      azureBlobStorage.blobContainerUrl,
      copyObjectToAzureLambda
    );

    // Grant the Lambda function read access to the secret
    azureClientSecret.grantRead(copyObjectToAzureLambda);

    // Output the S3 bucket ARN
    new CfnOutput(this, "sourceBucketArn", {
      value: sourceBucket.bucketArn,
      exportName: "sourceBucketArn",
    });

    // Output the Blob Container URL
    new CfnOutput(this, "azureBlobContainerUrl", {
      value: azureBlobStorage.blobContainerUrl,
      exportName: "azureBlobContainerUrl",
    });
  }
}
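The listing above calls three helpers that are not shown: getSSMParameter, getSecret, and createStringSSMParameter. Minimal implementations, assumed here for completeness, could be declared as private methods on S3ToAzureBackupService (IGrantable comes from aws-cdk-lib/aws-iam; the other types are already imported in the file):

// Assumed helper implementations; exact behavior in the original may differ.
private getSSMParameter(id: string, parameterName: string): IStringParameter {
  // Look up an existing SSM parameter by name
  return StringParameter.fromStringParameterName(this, id, parameterName);
}

private getSecret(id: string, secretName: string): ISecret {
  // Look up an existing Secrets Manager secret by name
  return Secret.fromSecretNameV2(this, id, secretName);
}

private createStringSSMParameter(
  id: string,
  description: string,
  parameterName: string,
  stringValue: string,
  reader: IGrantable
): StringParameter {
  // Create a new SSM parameter and grant the given principal read access to it
  const parameter = new StringParameter(this, id, { description, parameterName, stringValue });
  parameter.grantRead(reader);
  return parameter;
}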
The custom L3 construct can be used in larger IaC solutions by instantiating the S3ToAzureBackupService class and providing the names of the SSM parameters that hold the Azure credentials, plus the name of the client secret, as constructor arguments.
import * as cdk from "aws-cdk-lib";
import { Construct } from "constructs";
import { S3ToAzureBackupService } from "./s3-to-azure-backup-service";

export class MultiCloudBackupCdkStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const s3ToAzureBackupService = new S3ToAzureBackupService(
      this,
      "MyMultiCloudBackupService",
      "/s3toazurebackupservice/azuresubscriptionid",
      "/s3toazurebackupservice/azureclientid",
      "/s3toazurebackupservice/azuretenantid",
      "s3toazurebackupservice/azureclientsecret"
    );
  }
}
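To synthesize and deploy the stack, it is instantiated from a CDK app entry point; a minimal example (the file paths are assumptions) might look like this:

// bin/app.ts (assumed file name)
import * as cdk from "aws-cdk-lib";
import { MultiCloudBackupCdkStack } from "../lib/multi-cloud-backup-cdk-stack";

const app = new cdk.App();
new MultiCloudBackupCdkStack(app, "MultiCloudBackupCdkStack");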
Solution Diagram
Diagram 1, IaC Single Control Plane, demonstrates the concept of the Azure Blob Storage extension being imported from the AWS CloudFormation registry into AWS CDK as an L1 CfnResource, wrapped into an L2 construct, and used in an L3 pattern alongside AWS resources to perform the specific task of backing up from an Amazon S3 bucket into an Azure Blob Storage container.
Diagram 1: IaC Single Control Plane
The CDK application is then synthesized into one or more AWS CloudFormation Templates, which result in the CloudFormation service deploying AWS resource configurations to AWS and Azure resource configurations to Azure.
This solution demonstrates not only how to consolidate the management of secondary cloud resources into a unified infrastructure stack in AWS, but also how productivity improves by eliminating the complexity and cost of operating multiple deployment mechanisms across multiple public cloud environments.
An accompanying video demonstrates the end-state solution in real time.
Next Steps
While this was a straightforward example, the same approach can be applied to far more complex scenarios where AWS CDK serves as a single pane of glass for IaC to manage multicloud and hybrid solutions.
To get started with the solution discussed in this post, this workshop provides step-by-step instructions for creating the S3ToAzureBackupService.
Once you have learned how to create AWS CloudFormation extensions and develop them into AWS CDK Constructs, you will learn how, with just a few lines of code, you can develop reusable multicloud unified IaC solutions that deploy through a single AWS control plane.
Conclusion
By adopting AWS CloudFormation extensions and AWS CDK, deployed through a single AWS control plane, the cost and complexity of maintaining deployment pipelines across multiple cloud providers is reduced to a single holistic solution-focused pipeline. The techniques demonstrated in this post and the related workshop provide a capability to simplify the design of complex systems, improve the management of integration, and more closely align the IaC and deployment management practices with the design.
Modern location-based applications require the processing and storage of real-world asset locations in real time. The recent release of Amazon Location Service and its Tracker feature makes it possible to quickly and easily build these applications on AWS. Tracking real-world assets is important, but at some point when working with Amazon Location Service you will need to demo or test location-based applications without real-world assets.
Applications that track real-world assets are difficult to test and demo in a realistic setting, and it can be hard to get your hands on large amounts of real-world location data. Furthermore, not every company or individual in the early stages of developing a tracking application has access to a large fleet of test vehicles from which to derive this data.
Location data can also be considered highly sensitive, because it can be easily de-anonymized to identify individuals and movement patterns. Therefore, only a few openly accessible datasets exist and are unlikely to exhibit the characteristics required for your particular use-case.
To overcome this problem, the location-based services community has developed multiple openly available location data simulators. This blog will demonstrate how to connect one of those simulators to Amazon Location Service Tracker to test and demo your location-based services on AWS.
Walk-through
Part 1: Create a tracker in Amazon Location Service
This walkthrough will demonstrate how to get started setting up simulated data into your tracker.
Step 1: Navigate to Amazon Location Service in the AWS Console and select “Trackers”.
Step 2: On the “Trackers” screen, click the orange “Create tracker” button.
Step 3: On the “Create tracker” screen, name your tracker and make sure to reply “Yes” to the question asking whether you will only use simulated or sample data. This allows you to use the free tier of the service.
Next, click “Create tracker” to create your tracker.
Done. You’ve created a tracker. Note the “Name” of your tracker.
Generate trips with the SharedStreets Trip-simulator
SharedStreets maintains trip-simulator, an open-source project on GitHub: a probabilistic, multi-agent GPS trajectory simulator. It even creates realistic noise, and thus can be used for testing algorithms that must work under real-world conditions. Of course, the generated data is fake, so privacy is not a concern.
The trip-simulator generates files with a single GPS measurement per line. To playback those files to the Amazon Location Service Tracker, you must use a tool to parse the file; extract the GPS measurements, time measurements, and device IDs of the simulated vehicles; and send them to the tracker at the right time.
Before you start working with the playback program, the trip-simulator requires a map to simulate realistic trips. Therefore, you must download a part of OpenStreetMap (OSM). Using Geofabrik you can download extracts the size of states or selected cities, based on the area within which you want to simulate your data.
This blog will demonstrate how to simulate a small fleet of cars in the greater Munich area. The example is written for macOS, but it generalizes to Linux operating systems. If you have a Windows operating system, I recommend using Windows Subsystem for Linux (WSL). Alternatively, you can run this from a Cloud9 IDE in your AWS account.
The probes.json file is the file containing the GPS probes we will playback to Amazon Location Service.
Part 2: Playback trips to Amazon Location Service
Now that you have simulated trips in the probes.json file, you can play them back in the tracker created earlier. For this, you must write only a few lines of Python code. The following steps have been neatly separated into a series of functions that yield an iterator.
Step 1: Load the probes.json file and yield each line
import json
import time
import datetime
import boto3


def iter_probes_file(probes_file_name="probes.json"):
    """Iterates a file line by line and yields each individual line."""
    with open(probes_file_name) as probes_file:
        while True:
            line = probes_file.readline()
            if not line:
                break
            yield line
Step 2: Parse the probe on each line

To process the probes, you parse the JSON on each line and extract the data relevant for the playback. Note that the coordinates order is longitude, latitude in the probes.json file. This is the same order that the Location Service expects.
def parse_probes_trip_simulator(probes_iter):
    """Parses a file which contains JSON documents, one per line.

    Each line contains exactly one GPS probe. Example:

    {"properties":{"id":"RQQ-7869","time":1563123002000,"status":"idling"},"geometry":{"type":"Point","coordinates":[-86.73903753135207,36.20418779626351]}}

    The function returns the tuple (id,time,status,coordinates=(lon,lat))
    """
    for line in probes_iter:
        probe = json.loads(line)
        props = probe["properties"]
        geometry = probe["geometry"]
        yield props["id"], props["time"], props["status"], geometry["coordinates"]
Step 3: Update probe record time
The probes represent historical data. Therefore, during playback you need to normalize the probes’ recorded times to the time you send the requests, in order to achieve the effect of vehicles moving in real time.
This example is a single-threaded playback. If the playback lags behind the probe data timing, the code detects the lag and prints a warning.
The SharedStreets trip-simulator generates one probe per second. This frequency is higher than most applications need; in real-world applications you will often see update intervals of 15 to 60 seconds or longer. You must decide whether you want to add another iterator for sub-sampling the data (a sketch of such an iterator follows the Step 3 code below).
def update_probe_record_time(probes_iter):
    """
    Modify all timestamps to be relative to the time this function was called.
    I.e. all timestamps will be equally spaced from each other but in the future.
    """
    new_simulation_start_time_utc_ms = datetime.datetime.now().timestamp() * 1000
    simulation_start_time_ms = None
    time_delta_recording_ms = None
    for i, (_id, time_ms, status, coordinates) in enumerate(probes_iter):
        if time_delta_recording_ms is None:
            time_delta_recording_ms = new_simulation_start_time_utc_ms - time_ms
            simulation_start_time_ms = time_ms
        # Lag is the wall-clock time elapsed since playback started minus the
        # recording time elapsed since the first probe.
        simulation_lag_sec = (
            (
                datetime.datetime.now().timestamp() * 1000
                - new_simulation_start_time_utc_ms
            )
            - (time_ms - simulation_start_time_ms)
        ) / 1000
        if simulation_lag_sec > 2.0 and i % 10 == 0:
            print(f"Playback lags behind by {simulation_lag_sec} seconds.")
        time_ms += time_delta_recording_ms
        yield _id, time_ms, status, coordinates
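If you do want to sub-sample, a hypothetical iterator along these lines could be inserted into the chain between steps 2 and 3 (this helper is not part of the original steps):

def subsample_probes(probes_iter, keep_every_nth=15):
    """Hypothetical helper: keep only every N-th probe per device to thin out the 1 Hz stream."""
    counts = {}
    for _id, time_ms, status, coordinates in probes_iter:
        counts[_id] = counts.get(_id, 0) + 1
        # Yield the 1st, (N+1)-th, (2N+1)-th, ... probe seen for each device
        if (counts[_id] - 1) % keep_every_nth == 0:
            yield _id, time_ms, status, coordinates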
Step 4: Playback probes

In this step, pack the probes into small batches and introduce the timing element into the simulation playback. The reason for placing them in batches is explained below in step 6.
def sleep(time_elapsed_in_batch_sec, last_sleep_time_sec):
    sleep_time = max(
        0.0,
        time_elapsed_in_batch_sec
        - (datetime.datetime.now().timestamp() - last_sleep_time_sec),
    )
    time.sleep(sleep_time)
    if sleep_time > 0.0:
        last_sleep_time_sec = datetime.datetime.now().timestamp()
    return last_sleep_time_sec


def playback_probes(
    probes_iter,
    batch_size=10,
    batch_window_size_sec=2.0,
):
    """
    Replays the probes in live mode.
    The function assumes that the probes returned by probes_iter are sorted
    in ascending order with respect to the probe timestamp.
    It will either yield batches of size batch_size, or smaller batches if the timeout is reached.
    """
    last_probe_record_time_sec = None
    time_elapsed_in_batch_sec = 0
    last_sleep_time_sec = datetime.datetime.now().timestamp()
    batch = []
    # Creates two-second windows and puts all the probes falling into
    # those windows into a batch. If the max. batch size is reached it will yield early.
    for _id, time_ms, status, coordinates in probes_iter:
        probe_record_time_sec = time_ms / 1000
        if last_probe_record_time_sec is None:
            last_probe_record_time_sec = probe_record_time_sec
        time_to_next_probe_sec = probe_record_time_sec - last_probe_record_time_sec
        if (time_elapsed_in_batch_sec + time_to_next_probe_sec) > batch_window_size_sec:
            last_sleep_time_sec = sleep(time_elapsed_in_batch_sec, last_sleep_time_sec)
            yield batch
            batch = []
            time_elapsed_in_batch_sec = 0
        time_elapsed_in_batch_sec += time_to_next_probe_sec
        batch.append((_id, time_ms, status, coordinates))
        if len(batch) == batch_size:
            last_sleep_time_sec = sleep(time_elapsed_in_batch_sec, last_sleep_time_sec)
            yield batch
            batch = []
            time_elapsed_in_batch_sec = 0
        last_probe_record_time_sec = probe_record_time_sec
    if len(batch) > 0:
        last_sleep_time_sec = sleep(time_elapsed_in_batch_sec, last_sleep_time_sec)
        yield batch
Step 5: Create the updates for the tracker
LOCAL_TIMEZONE = (
    datetime.datetime.now(datetime.timezone(datetime.timedelta(0))).astimezone().tzinfo
)


def convert_to_tracker_updates(probes_batch_iter):
    """
    Converts batches of probes in the format (id,time_ms,state,coordinates=(lon,lat))
    into batches ready for upload to the tracker.
    """
    for batch in probes_batch_iter:
        updates = []
        for _id, time_ms, _, coordinates in batch:
            # The boto3 location service client expects a datetime object for sample time
            dt = datetime.datetime.fromtimestamp(time_ms / 1000, LOCAL_TIMEZONE)
            updates.append({"DeviceId": _id, "Position": coordinates, "SampleTime": dt})
        yield updates
Step 6: Send the updates to the tracker

In the update_tracker function, you use the batch_update_device_position function of the Amazon Location Service Tracker API. This lets you send batches of up to 10 location updates to the tracker in one request. Batching updates is much more cost-effective than sending one-by-one. You pay for each call to batch_update_device_position. Therefore, batching can lead to a 10x cost reduction.
def update_tracker(batch_iter, location_client, tracker_name):
    """
    Reads tracker updates from an iterator and uploads them to the tracker.
    """
    for update in batch_iter:
        response = location_client.batch_update_device_position(
            TrackerName=tracker_name, Updates=update
        )
        if "Errors" in response and response["Errors"]:
            for error in response["Errors"]:
                print(error["Error"]["Message"])
Step 7: Putting it all together

The following code is the main section that glues every part together. When using this, make sure to replace the variables probes_file_name and tracker_name with the actual probes file location and the name of the tracker created earlier.
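A minimal sketch of that glue code, chaining the generators from steps 1 to 6 (the file name and tracker name shown are placeholders), might look like this:

# Sketch of the main section; replace the placeholder values with your own.
if __name__ == "__main__":
    probes_file_name = "probes.json"
    tracker_name = "my-tracker"
    location_client = boto3.client("location")

    probes = iter_probes_file(probes_file_name)
    parsed_probes = parse_probes_trip_simulator(probes)
    normalized_probes = update_probe_record_time(parsed_probes)
    batches = playback_probes(normalized_probes)
    tracker_updates = convert_to_tracker_updates(batches)
    update_tracker(tracker_updates, location_client, tracker_name)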
Paste all of the code listed in steps 1 to 7 into a file called trip_playback.py, then execute
python3 trip_playback.py
This will start the playback process.
Step 8: (Optional) Tracking a device’s position updates

Once the playback is running, verify that the updates are actually written to the tracker by repeatedly querying the tracker for the updates of a single device. Here, you will use the get_device_position function of the Amazon Location Service Tracker API to receive the last known device position.
import boto3
import time


def get_last_vehicle_position_from_tracker(
    device_id, tracker_name="your-tracker", client=boto3.client("location")
):
    response = client.get_device_position(DeviceId=device_id, TrackerName=tracker_name)
    if response["ResponseMetadata"]["HTTPStatusCode"] != 200:
        print(str(response))
    else:
        lon = response["Position"][0]
        lat = response["Position"][1]
        return lon, lat, response["SampleTime"]


if __name__ == "__main__":
    device_id = "my-device"
    tracker_name = "my-tracker"
    while True:
        lon, lat, sample_time = get_last_vehicle_position_from_tracker(
            device_id=device_id, tracker_name=tracker_name
        )
        print(f"{lon}, {lat}, {sample_time}")
        time.sleep(10)
In the example above, you must replace tracker_name with the name of the tracker created earlier and device_id with the ID of one of the simulation vehicles. You can find the vehicle IDs in the probes.json file created by the SharedStreets trip-simulator. If you run the above code, you should see the device’s longitude, latitude, and sample time printed every 10 seconds.
AWS IoT Device Simulator
As an alternative, if you are familiar with AWS IoT, AWS has its own vehicle simulator as part of the IoT Device Simulator solution. It lets you simulate a vehicle fleet moving on a road network. This has been described here. The simulator sends the location data to an AWS IoT endpoint, and the Amazon Location Service Developer Guide shows how to write and set up a Lambda function to connect the IoT topic to the tracker.
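As a rough sketch of that approach, such a Lambda function might look like the following; the event field names are assumptions that depend on how the IoT rule and simulator are configured:

import datetime
import os

import boto3

location = boto3.client("location")
TRACKER_NAME = os.environ.get("TRACKER_NAME", "my-tracker")  # placeholder tracker name


def lambda_handler(event, context):
    # The field names here are assumptions; adjust them to match the payload
    # your IoT rule forwards from the device simulator.
    update = {
        "DeviceId": event["device_id"],
        "Position": [event["longitude"], event["latitude"]],
        # In practice you would parse the timestamp carried in the message instead.
        "SampleTime": datetime.datetime.now(datetime.timezone.utc),
    }
    location.batch_update_device_position(TrackerName=TRACKER_NAME, Updates=[update])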
The AWS IoT Device Simulator has a GUI and is a good choice for simulating a small number of vehicles. The drawback is that only a few trips are pre-packaged with the simulator and changing them is somewhat complicated. The SharedStreets Trip-simulator has much more flexibility, allowing simulations of fleets made up of a larger number of vehicles, but it has no GUI for controlling the playback or simulation.
Cleanup
You’ve created a Location Service Tracker resource. It does not incur any charges if it isn’t used. If you want to delete it, you can do so on the Amazon Location Service Tracker console.
Conclusion
This blog showed you how to use an open-source project and open-source data to generate simulated trips, as well as how to play those trips back to the Amazon Location Service Tracker. Furthermore, you have access to the AWS IoT Device Simulator, which can also be used for simulating vehicles.
Give it a try and tell us how you test your location-based applications in the comments.