Tag Archives: AWS

Changes to our sending review process

Post Syndicated from Dustin Taylor original https://aws.amazon.com/blogs/messaging-and-targeting/changes-to-our-sending-review-process/

We’re changing some of the language we use in our sending review process to make our communications clearer and more helpful.

If you’re not familiar with the sending review process, it refers to the actions that we take when there are issues with the email sent from an Amazon SES or Amazon Pinpoint account. Usually, these issues are a result of senders making honest mistakes. However, when email providers receive problematic email from a sender, they can’t tell if the sender made a mistake, or if they’re doing something malicious. If an email provider detects a problem that’s severe enough, they might block all incoming email from the sender’s IP address. If that happens, email sent from other senders who use the same IP address is blocked as well.

For this reason, we look for certain patterns and behaviors that could cause deliverability problems, and then work with our customers to help resolve the issues with the email sent from their accounts. We used to call this our enforcement process, but we now refer to it as our sending review process. This name is a much better description of the process (not to mention a bit friendlier).

You might notice some other changes as well. When the reputation metrics for an account (such as the account’s bounce or complaint rate) exceed certain levels, or another issue occurs that could impact the reputation of that account, we’ll monitor the email sending behaviors of that account for a certain period of time. During this time, we make a note of whether the problem gets better or worse. Previously, this period was called a probation period; we now call it a review period.

If an account is under review, but the sender isn’t able to correct the issue before the end of the review period, we’ll temporarily disable the account’s ability to send any more email. We take this action to protect the reputation of the sender, and to ensure that other customers can send email without experiencing deliverability issues. We used to call this a suspension, but that name seemed very permanent and punitive. We now refer to these events as sending pauses, because in the majority of cases, they’re temporary and reversible.

Finally, if a sender disagrees with our decision to place a review period or sending pause on their account, they can contact us to explain why they believe we made this decision in error. This used to be known as an appeal, but we now call it a review.

If we ever change the status of your account, such as by implementing a review period or sending pause, we’ll contact you by email at the address associated with your AWS account. We recommend that you make sure that we have the right email address. For information about changing the email address associated with your AWS account, see Managing an AWS Account in the AWS Billing and Cost Management User Guide.

In addition to sending you a notification by email, we’ll also update the reputation dashboard in the Amazon SES console to show the current status of your account. To learn more about the reputation dashboard, see Using the Reputation Dashboard in the Amazon SES Developer Guide.

How to map out your migration of Oracle PeopleSoft to AWS

Post Syndicated from Ashok Shanmuga Sundaram original https://aws.amazon.com/blogs/architecture/how-to-map-out-your-migration-of-oracle-peoplesoft-to-aws/

Oracle PeopleSoft Enterprise is a widely used enterprise resource planning (ERP) application. Customers run production deployments of various PeopleSoft applications on AWS, including PeopleSoft Human Capital Management (HCM), Financials and Supply Chain Management (FSCM), Interactive Hub (IAH), and Customer Relationship Management (CRM).

We published a whitepaper on Best Practices for Running Oracle PeopleSoft on AWS in December 2017. It provides architectural guidance and outlines best practices for high availability, security, scalability, and disaster recovery for running Oracle PeopleSoft applications on AWS.

It also covers highly available, scalable, and cost-effective multi-region reference architectures for deploying PeopleSoft applications on AWS, like the one illustrated below.

While migrating your Oracle PeopleSoft applications to AWS, here are some things to keep in mind:

  • Multi-AZ deployments – Deploy your PeopleSoft servers and database across multiple Availability Zones (AZs) for high availability. AWS AZs allow you to operate production applications and databases that are more highly available, fault tolerant, and scalable than would be possible from a single data center.
  • Use Amazon Relational Database Service (Amazon RDS) to deploy your PeopleSoft database – Amazon RDS makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, allowing you to focus on your applications and business. Deploying an RDS for Oracle database in multiple AZs simplifies creating a highly available architecture, because you get built-in support for automated failover from your primary database to a synchronously replicated secondary database in an alternate AZ (a minimal provisioning sketch follows this list).
  • Migration of large databases – Migrating large databases to Amazon RDS within a small downtime window requires careful planning:
    • We recommend that you take a point-in-time export of your database, transfer it to AWS, import it into Amazon RDS, and then apply the delta changes from on-premises.
    • Use AWS Direct Connect or AWS Snowball to transfer the export dump to AWS.
    • Use AWS Database Migration Service to apply the delta changes and sync the on-premises database with the Amazon RDS instance.
  • AWS Infrastructure Event Management (IEM) – Take advantage of AWS IEM to mitigate risks and help ensure a smooth migration. IEM is a highly focused engagement where AWS experts provide you with architectural and operational guidance, assist you in reviewing and fine-tuning your migration plan, and provide real-time support for your migration.
  • Cost optimization – There are a number of ways you can optimize your costs on AWS, including:
    • Use reserved instances for environments that are running most of the time, like production environments. A Reserved Instance is an EC2 offering that provides you with a significant discount (up to 75%) on EC2 usage compared to On-Demand pricing when you commit to a one-year or three-year term.
    • Shut down resources that are not in use. For example, development and test environments are typically used for only eight hours a day during the work week. You can stop these resources when they are not in use for a potential cost savings of 75% (40 hours vs. 168 hours). Use the AWS Instance Scheduler to automatically start and stop your Amazon EC2 and Amazon RDS instances based on a schedule.
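
For illustration, here is a minimal boto3 sketch of provisioning a Multi-AZ RDS for Oracle instance for a PeopleSoft database. All identifiers, instance sizes, and credentials below are hypothetical placeholders, and a real deployment would also need DB subnet groups, parameter and option groups, and security groups that match your environment.

import boto3

rds = boto3.client('rds')

# Create a Multi-AZ RDS for Oracle instance (all values are illustrative).
response = rds.create_db_instance(
    DBInstanceIdentifier='peoplesoft-hcm-db',            # placeholder name
    DBInstanceClass='db.m4.2xlarge',                      # size for your workload
    Engine='oracle-ee',
    LicenseModel='bring-your-own-license',
    AllocatedStorage=500,                                 # GiB
    MasterUsername='psadmin',
    MasterUserPassword='REPLACE_WITH_A_STRONG_PASSWORD',
    MultiAZ=True,                                         # synchronous standby in a second AZ
    StorageType='gp2',
    BackupRetentionPeriod=7
)
print(response['DBInstance']['DBInstanceStatus'])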

The Configuring Amazon RDS as an Oracle PeopleSoft Database whitepaper has detailed instructions on configuring a backend Amazon RDS database for your Oracle PeopleSoft deployment on AWS. After you read the whitepaper, I recommend these other resources as your next step:

  • For a real-world case study on migrating a large Oracle database to AWS, check out this blog post about how AFG migrated their mission-critical Oracle Siebel CRM system running on Oracle Exadata on-premises to Amazon RDS for Oracle.
  • For more information on running Oracle Enterprise Solutions on AWS, check out this re:Invent 2017 video.
  • You can find more Oracle on AWS resources here and here.

About the author

Ashok Shanmuga Sundaram is a partner solutions architect with the Global System Integrator (GSI) team at Amazon Web Services. He works with the GSIs to provide guidance on enterprise cloud adoption, migration and strategy.

Compute Abstractions on AWS: A Visual Story

Post Syndicated from Massimo Re Ferre original https://aws.amazon.com/blogs/architecture/compute-abstractions-on-aws-a-visual-story/

When I joined AWS last year, I wanted to find a way to explain, in the easiest way possible, all the options it offers to users from a compute perspective. There are many ways to peel this onion, but I want to share a “visual story” that I have created.

I define the compute domain as “anything that has CPU and Memory capacity that allows you to run an arbitrary piece of code written in a specific programming language.” Your mileage may vary in how you define it, but this is broad enough that it should cover a lot of different interpretations.

A key part of my story is around the introduction of different levels of compute abstractions this industry has witnessed in the last 20 or so years.

Separation of duties

The start of my story is a line. In a cloud environment, this line defines the perimeter between the consumer role and the provider role. In the cloud, there are things that AWS will do and things that the consumer will do. The perimeter of these responsibilities varies depending on the services you opt to use. If you want to understand more about this concept, read the AWS Shared Responsibility Model documentation.

The different abstraction levels

The reason why the line above is oblique is because it needs to intercept different compute abstraction levels. If you think about what happened in the last 20 years of IT, we have seen a surge of different compute abstractions that changed the way people consume CPU and Memory resources. It all started with physical (x86) servers back in the 80s, and then we have seen the industry adding abstraction layers over the years (for example, hypervisors, containers, functions).

The higher you go in the abstraction levels, the more the cloud provider can add value and offload the consumer from non-strategic activities. A lot of these activities tend to be “undifferentiated heavy lifting.” We define this as something that AWS customers have to do but that doesn’t necessarily differentiate them from their competitors (because those activities are table stakes in that particular industry).

What we found is that supporting millions of customers on AWS requires a certain degree of flexibility in the services we offer because there are many different patterns, use cases, and requirements to satisfy. Giving our customers choices is something AWS always strives for.

A couple of final notes before we dig deeper. The way this story builds up through the blog post is aligned to the progression of the launch dates of the various services, with a few noted exceptions. Also, the services mentioned are all generally available and production-grade. For full transparency, the integration among some of them may still be work-in-progress, which I’ll call out explicitly as we go.

The instance (or virtual machine) abstraction

This is the very first abstraction we introduced on AWS back in 2006. Amazon Elastic Compute Cloud (Amazon EC2) is the service that allows AWS customers to launch instances in the cloud. When customers intercept us at this level, they retain responsibility for the guest operating system and everything above it (middleware, applications, etc.), including their lifecycle. AWS is responsible for managing the hardware and the hypervisor, including their lifecycle.

At the very same level of the stack there is also Amazon Lightsail, which “is the easiest way to get started with AWS for developers, small businesses, students, and other users who need a simple virtual private server (VPS) solution. Lightsail provides developers compute, storage, and networking capacity and capabilities to deploy and manage websites and web applications in the cloud.”

And this is how these two services appear in our story:

The container abstraction

With the rise of microservices, a new abstraction took the industry by storm in the last few years: containers. Containers are not a new technology, but the rise of Docker a few years ago democratized access. You can think of a container as a self-contained environment with soft boundaries that includes both your own application as well as the software dependencies to run it. Whereas an instance (or VM) virtualizes a piece of hardware so that you can run dedicated operating systems, a container technology virtualizes an operating system so that you can run separated applications with different (and often incompatible) software dependencies.

And now the tricky part. Modern containers-based solutions are usually implemented in two main logical pieces:

  • A containers control plane that is responsible for exposing the API and interfaces to define, deploy, and lifecycle containers. This is also sometimes referred to as the container orchestration layer.
  • A containers data plane that is responsible for providing capacity (as in CPU/Memory/Network/Storage) so that those containers can actually run and connect to a network. From a practical perspective this is typically a Linux host or less often a Windows host where the containers get started and wired to the network.

Arguably, in a compute abstraction discussion the data plane is key, but it is just as important to understand what’s happening in the control plane.

In 2014, Amazon launched a production-grade containers control plane called Amazon Elastic Container Service (ECS), which “is a highly scalable, high performance container management service that supports Docker … Amazon ECS eliminates the need for you to install, operate, and scale your own cluster management infrastructure.”

In 2017, Amazon also announced the intention to release a new service called Amazon Elastic Container Service for Kubernetes (EKS) based on Kubernetes, a successful open source containers control plane technology. Amazon EKS was made generally available in early June 2018.

Just like for ECS, the aim of this service is to free AWS customers from having to manage a containers control plane. In the past, AWS customers would spin up EC2 instances and deploy/manage their own Kubernetes masters (the Kubernetes hosts that run the control plane) on top of an EC2 abstraction. However, we believe many AWS customers will leave the burden of managing this layer to AWS by consuming either ECS or EKS, depending on their use cases. A comparison between ECS and EKS is beyond the scope of this blog post.

You may have noticed that what we have discussed so far is about the containers control plane. How about the containers data plane? This is typically a fleet of EC2 instances managed by the customer. In this particular setup, the containers control plane is managed by AWS while the containers data plane is managed by the customer. One could argue that, with ECS and EKS, we have raised the abstraction level for the control plane, but we have not yet really raised the abstraction level for the data plane, because the data plane still consists of regular EC2 instances that the customer is responsible for.

There is more on that later on but, for now, this is how the containers control plane and the containers data plane services appear:

The function abstraction

At re:Invent 2014, AWS introduced another abstraction layer: AWS Lambda. Lambda is an execution environment that allows an AWS customer to run a single function. So instead of having to manage and run a full-blown OS instance to run your code, or having to track all software dependencies in a user-built container to run your code, Lambda allows you to upload your code and let AWS figure out how to run it at scale.

What makes Lambda so special is its event-driven model. Not only can you invoke Lambda directly (for example, via Amazon API Gateway), but you can also trigger a Lambda function in response to an event in another AWS service (for example, an upload to Amazon S3 or a change in an Amazon DynamoDB table).
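
To make the event-driven model concrete, here is a minimal, hypothetical Python handler for an S3 upload notification; the processing logic is a placeholder, and the event shape shown is the standard S3 notification format.

import json
import urllib.parse

def lambda_handler(event, context):
    # S3 invokes this function with one record per object notification.
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])
        print("New object uploaded: s3://" + bucket + "/" + key)
        # Placeholder: your business logic (thumbnailing, indexing, etc.) goes here.

    return {'status': 'processed', 'records': len(event['Records'])}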

The key point about Lambda is that you don’t have to manage the infrastructure underneath the function you are running. No need to track the status of the physical hosts, no need to track the capacity of the fleet, no need to patch the OS where the function will be running. In a nutshell, no need to spend time and money on the undifferentiated heavy lifting.

And this is how the Lambda service appears:

The bare metal abstraction

Also known as the “no abstraction.”

As recently as re:Invent 2017, we announced (the preview of) the Amazon EC2 bare metal instances. We made this service generally available to the public in May 2018.

This announcement is part of Amazon’s strategy to provide choice to our customers. In this case, we are giving customers direct access to hardware. To quote from Jeff Barr’s post:

“…. (AWS customers) wanted access to the physical resources for applications that take advantage of low-level hardware features such as performance counters and Intel® VT that are not always available or fully supported in virtualized environments, and also for applications intended to run directly on the hardware or licensed and supported for use in non-virtualized environments.”

This is how the bare metal Amazon EC2 i3.metal instance appears:

As a side note, and also as alluded to by Jeff, i3.metal is the foundational EC2 instance type on top of which VMware created their own VMware Cloud on AWS service. We now offer any AWS user the ability to provision bare metal instances. This doesn’t necessarily mean you can load your hypervisor of choice out of the box, but you can certainly do things you wouldn’t be able to do with a traditional EC2 instance (note: this was just a Saturday afternoon hack).

More seriously, a question I get often asked is whether users could install ESXi on i3.metal on their own. Today this cannot be done, but I’d be interested in hearing your use case for this.

The full container abstraction (for lack of a better term)

Now that we’ve covered all the abstractions, it is time to go back and see if there are other optimizations we can provide for AWS customers. When we discussed the container abstraction, we called out that while there are two different fully managed containers control planes (ECS and EKS), there wasn’t a managed option for the data plane.

Some customers were (and still are) happy about being in full control of said instances. Others have been very vocal that they wanted to get out of the (undifferentiated heavy-lifting) business of managing the lifecycle of that piece of infrastructure.

Enter AWS Fargate, a production-grade service that provides compute capacity to AWS containers control planes. Practically speaking, Fargate moves the containers data plane into the “Provider space” of responsibility. This means the compute unit exposed to the user is the container abstraction, while AWS transparently manages the data plane abstractions underneath.

This is how the Fargate service appears:

Now ECS has two “launch types”: one called “EC2” (where your tasks get deployed on a customer-managed fleet of EC2 instances), and the other one called “Fargate” (where your tasks get deployed on an AWS-managed fleet of EC2 instances).
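
As a hedged illustration of the difference between the two launch types, the boto3 sketch below runs the same hypothetical task definition once with the EC2 launch type and once with Fargate; the cluster, task definition, subnet, and security group values are placeholders.

import boto3

ecs = boto3.client('ecs')

# Launch type "EC2": the task lands on a customer-managed fleet of container instances.
ecs.run_task(
    cluster='my-ecs-cluster',              # placeholder
    taskDefinition='my-web-app:3',         # placeholder
    launchType='EC2',
    count=1
)

# Launch type "FARGATE": no instances to manage, but awsvpc networking is required.
ecs.run_task(
    cluster='my-ecs-cluster',
    taskDefinition='my-web-app:3',
    launchType='FARGATE',
    count=1,
    networkConfiguration={
        'awsvpcConfiguration': {
            'subnets': ['subnet-0123456789abcdef0'],      # placeholder
            'securityGroups': ['sg-0123456789abcdef0'],   # placeholder
            'assignPublicIp': 'ENABLED'
        }
    }
)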

For EKS, the strategy will be very similar, but as of this writing it was not yet available. If you’re interested in some of the exploration being done to make this happen, this is a good read.

Conclusions

We covered the spectrum of abstraction levels available on AWS and how AWS customers can intercept them depending on their use cases and where they sit on their cloud maturity journey. Customers with a “lift & shift” approach may be more inclined to consume services on the left-hand side of the slide, whereas customers with a more mature cloud-native approach may be more interested in consuming services on the right-hand side of the slide.

In general, customers tend to use higher-level services to get out of the business of managing non-differentiating activities. For example, I recently talked to a customer interested in using Fargate. The trigger there was the fact that Fargate is ISO, PCI, SOC and HIPAA compliant, which was a huge time and money saver for them, because it’s easier to point to an AWS document during an audit than to architect and document a DIY containers data plane configuration for compliance.

As a recap, here’s our visual story with all the abstractions available:

I hope you found it useful. Any feedback is greatly appreciated.

About the author

Massimo is a Principal Solutions Architect at AWS. For about 25 years, he has specialized in the x86 ecosystem, starting with operating systems and virtualization technologies, and lately he has been heads-down learning about cloud and how application architectures are evolving in that space. Massimo has a blog at www.it20.info and his Twitter handle is @mreferre.

How to Efficiently Extract and Query Tagged Resources Using the AWS Resource Tagging API and S3 Select (SQL)

Post Syndicated from Marcilio Mendonca original https://aws.amazon.com/blogs/architecture/how-to-efficiently-extract-and-query-tagged-resources-using-the-aws-resource-tagging-api-and-s3-select-sql/

AWS customers can use tags to assign metadata to their AWS resources. Each tag is a simple label consisting of a customer-defined key and an optional value that can make it easier to manage, search for, and filter resources. Although there are no inherent types of tags, they enable customers to categorize resources by multiple criteria such as purpose, owner, and environment.

Once a tagging strategy is defined and enforced, customers can use the AWS Tag Editor to view and manage tags on their AWS resources, regardless of service or region. They can use the tag editor to search for resources by resource type, region, or tag, and then manage the tags applied to those resources.

However, customers have asked for guidance on how to build custom automation mechanisms to extract and query tagged resources so that they can extend the built-in functionalities of the Tag Editor. For instance, customers can build automation to generate custom CSV files for tagged resources and perhaps use SQL to query those resources. In addition, automation allows customers to add validation checks to their CI/CD deployment pipelines, for instance, to check whether resources have been properly tagged.

In this blog post, we introduce a simple yet efficient AWS architecture for extracting and querying tagged resources based on AWS cloud-native features such as the Resource Tagging API and S3 Select. We provide sample code for the architecture discussed that can help customers to customize and/or extend the architecture for their own purpose. By relying on AWS cloud-native features, customers can save time and reduce costs while still being able to do customizations.

For customers unfamiliar with the Resource Tagging API and the S3 Select features, below is a very brief introduction.

Resource Tagging API
AWS customers can use the Resource Tagging API, via the AWS SDKs or the AWS Command Line Interface (CLI), to programmatically access the same resource group operations that were previously accessible only from the AWS Management Console. By doing so, customers can build automation that fits their needs, e.g., code that extracts, exports, and queries tagged resources.

For further details, please read the Resource Groups Tagging API Reference.

S3 Select
S3 Select enables applications to retrieve only a subset of data from an object by using simple SQL expressions. By using S3 Select to retrieve only the data needed by the application, customers can achieve drastic performance increases – in many cases you can get as much as a 400% improvement.

For further details, please read the Amazon S3 Select documentation.

The Overall Solution Architecture

The figure above depicts the overall architecture discussed in this post. It is a simple yet efficient architecture for extracting and querying tagged resources based on AWS cloud-native features. The Resource Tagging API is used to extract tagged resources from one or more AWS accounts via the Python AWS SDK; a custom CSV file is then generated and pushed to S3. Once in S3, the tagged resources file can be efficiently queried via S3 Select, also using the Python AWS SDK. By leveraging S3 Select, we can use SQL to query tagged resources and save on S3 data transfer costs, since only the filtered results are returned directly from S3. Pretty neat, eh?

The Extract Process
The extract process was built using Python 3 and relies on the Resource Tagging API to fetch pages of tagged resources and export them to CSV using the csv Python library.

We start by importing the required libraries (boto3 is the AWS SDK for Python, argparse helps manage input parameters, and csv supports building valid CSV files):

import boto3
import argparse
import csv

Then, we define the header columns to use when generating the CSV files containing all tagged resources and the writeToCsv function:

field_names = ['ResourceArn', 'TagKey', 'TagValue']

def writeToCsv(writer, args, tag_list):
    for resource in tag_list:
        print("Extracting tags for resource: " +
              resource['ResourceARN'] + "...")
        for tag in resource['Tags']:
            row = dict(
                ResourceArn=resource['ResourceARN'], TagKey=tag['Key'], TagValue=tag['Value'])
            writer.writerow(row)

We take the CSV output file path as a required parameter so that users can specify the desired output file name, using the argparse library:

def input_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--output", required=True,
                        help="Output CSV file (eg, /tmp/tagged-resources.csv)")
    return parser.parse_args()

And then, we implement the main extract logic that uses the Resource Tagging API (see boto3.client(‘resourcegroupstaggingapi’) in the code below). Note that we fetch 50 resources at a time and write them to the CSV output file until no more resources are found.

def main():
    args = input_args()
    restag = boto3.client('resourcegroupstaggingapi')
    with open(args.output, 'w') as csvfile:
        writer = csv.DictWriter(csvfile, quoting=csv.QUOTE_ALL,
                                delimiter=',', dialect='excel', fieldnames=field_names)
        writer.writeheader()
        response = restag.get_resources(ResourcesPerPage=50)
        writeToCsv(writer, args, response['ResourceTagMappingList'])
        while 'PaginationToken' in response and response['PaginationToken']:
            token = response['PaginationToken']
            response = restag.get_resources(
                ResourcesPerPage=50, PaginationToken=token)
            writeToCsv(writer, args, response['ResourceTagMappingList'])

if __name__ == '__main__':
    main()

The extract procedure is pretty simple and illustrates well how to use the Resource Tagging API to customize the output. It will also use the default credentials in your account.

Here is how the extract process can be triggered for the QA account (assuming the python source file is named aws-tagged-resources-extractor.py and that there is a QA_AWS_ACCOUNT AWS profile defined in your ~/.aws/credentials file).

export AWS_PROFILE=QA_AWS_ACCOUNT
python aws-tagged-resources-extractor.py --output /tmp/qa-tagged-resources.csv

The extract procedure can be applied to other AWS accounts by updating the AWS_PROFILE environment variable accordingly.

The ‘Upload to S3’ Process
Once the file /tmp/qa-tagged-resources.csv is generated, it can be uploaded to an S3 bucket using the AWS CLI (or one could extend the extract sample code above to do so, as shown in the sketch after the command below):

aws s3 cp /tmp/qa-tagged-resources.csv s3://[REPLACE-WITH-YOUR-S3-BUCKET]
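
For example, a minimal boto3 sketch of that extension might look like the following; the bucket name is a placeholder, and the call uses whichever credentials the AWS_PROFILE environment variable points to.

import boto3

def upload_to_s3(local_path, bucket, key):
    # Upload the generated CSV so it can later be queried with S3 Select.
    s3 = boto3.client('s3')
    s3.upload_file(local_path, bucket, key)
    print("Uploaded " + local_path + " to s3://" + bucket + "/" + key)

upload_to_s3('/tmp/qa-tagged-resources.csv',
             'REPLACE-WITH-YOUR-S3-BUCKET',
             'qa-tagged-resources.csv')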

The Query Process
Once the CSV files containing tagged resources for different AWS accounts are uploaded to S3, we can now use S3 Select to perform familiar SQL queries against these files. Another advantage of using S3 Select is that it reduces the amount of data transferred from S3 which is especially relevant in our case when accounts have a very large number of tagged resources.

We again use the boto3 and argparse libraries (Python 3). Required input parameters include the S3 bucket (–bucket) and the S3 key (–key). The SQL query parameter (–query) is optional and will return all results if not provided.

import boto3
import argparse

def input_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--bucket", required=True, help="SQL query to filter tagged resources output")
    parser.add_argument("--key", required=True, help="SQL query to filter tagged resources output")
    parser.add_argument("--query", default="select * from s3object", help="SQL query to filter tagged resources output")
    return parser.parse_args()

The main query logic is shown below. It uses boto3.client(‘s3’) to initialize an S3 client that is later used to query the tagged resources CSV file in S3 via the select_object_content() function. This function takes the S3 bucket name, S3 key, and query as parameters. Check the Boto3 API reference (http://boto3.readthedocs.io/en/latest/reference/services/s3.html) for details on this function and its inputs and outputs.

def main():
    args = input_args()
    s3 = boto3.client('s3')
    response = s3.select_object_content(
        Bucket=args.bucket,
        Key=args.key,
        ExpressionType='SQL',
        Expression=args.query,
        InputSerialization = {'CSV': {"FileHeaderInfo": "Use"}},
        OutputSerialization = {'CSV': {}},
    )

    for event in response['Payload']:
        if 'Records' in event:
            records = event['Records']['Payload'].decode('utf-8')
            print(records)
            
if __name__ == '__main__':
    main()

Here are a few examples of how to trigger the query procedure against the CSV files stored in S3 (assuming the Python source file for the query procedure is called aws-tagged-resources-querier). We assume that the S3 bucket is located in a single account referenced by the profile CENTRAL_AWS_ACCOUNT.

Return the resource ARNs of all route tables containing a tag named ‘aws:cloudformation:stack-name’ in the QA AWS account

export AWS_PROFILE=CENTRAL_AWS_ACCOUNT
python aws-tagged-resources-querier \
     --bucket [REPLACE-WITH-YOUR-S3-BUCKET] \
     --key qa-tagged-resources.csv \
     --query "select ResourceArn from s3object s \
              where s.ResourceArn like 'arn:aws:ec2%route-table%' \
                and s.TagKey='aws:cloudformation:stack-name'"

We invite readers to build more sophisticated SQL queries.

Summary
In this blog post, we introduced a simple yet efficient AWS architecture for extracting and querying tagged resources based on AWS cloud-native features such as the Resource Tagging API and S3 Select. We provided sample code that can help customers to customize and/or extend the architecture for their own purpose. By relying on AWS cloud-native features, customers can save time and reduce costs while still being able to do customizations.

The “extract” process discussed above is available in the AWS Serverless Application Repository under an application called aws-tag-explorer. Check it out!

Happy Resource Tagging!

About the Author

Marcilio Mendonca is a Sr. Consultant in the Global DevOps Team at AWS Professional Services. In the past years, he has been helping AWS customers to design, build and deploy best-in-class cloud-native AWS applications using VMs, containers and serverless architectures. Prior to joining AWS, Marcilio was a Software Development Engineer with Amazon. Marcilio also holds a PhD in Computer Science.

Creating an AI-powered marketing solution for sentiment analysis and engagement

Post Syndicated from Zach Barbitta original https://aws.amazon.com/blogs/messaging-and-targeting/creating-an-ai-powered-marketing-solution-for-sentiment-analysis-and-engagement/

Note: Matt Dombrowski, one of our amazing Solutions Architects, wrote this article. He also developed the sample code that you can use to implement this solution.


Marketers know that it’s critical to understand the conversations that customers are having about their brands. The holy grail isn’t just to understand what’s happening on social media, but to distill those conversations into actionable insights. After that, you can scale, automate, and continuously improve your brand’s ability to engage.

In this blog post, we’ll demonstrate how your marketing department can use machine learning to understand social user sentiment and engage with users.

You’ll assume the role of a Marketing Manager at an up-and-coming retail company called Mountain Manhattan. Mountain Manhattan has seen strong growth in recent years, and is now looking for a better way to engage with its Twitter followers. Specifically, Mountain Manhattan wants to know who its advocates and detractors are, what the overall sentiment of the brand is, and who the key influencers are that need the white-glove treatment.

After that, we’ll show you how to quickly deploy a solution for real-time social media sentiment analysis and engagement. This process consists of three basic steps. First, you collect tweets that refer to your brand’s Twitter handle. Next, you use machine learning to assign a score to those tweets. And finally, you use Amazon Pinpoint to engage with your customers based on those scores.

Mountain Manhattan’s challenges

Like many companies, Mountain Manhattan has more data than they can act on. Mountain Manhattan receives over 1,000 tweets a day. That’s more than 365,000 tweets per year! Like most companies with a social media strategy, Mountain Manhattan thinks of each of these tweets as an ‘opportunity to engage.’ One of Mountain Manhattan’s challenges is that they need the tone, voice, response time, and candor of their responses to be clear and consistent—and they need to do so in several different languages.

They tried the brute force approach of reading tweets and manually responding to each one. However, this process quickly became unsustainable (not to mention very expensive) because of limited time and resources. Also, while Mountain Manhattan’s marketing team is rather tech-savvy, they don’t have the time or experience to worry about technical issues like ongoing development or security. Mountain Manhattan needs an engagement solution that’s affordable and effective, and that has industry-leading reliability, scale, and security.

The solution

Mountain Manhattan decided to use several AWS services to create an integrated social media monitoring and customer engagement solution. The marketing team spent about 30 minutes setting up the sophisticated solution described below, which enables testing and iteration on multiple use cases before going live.

This solution monitors a Twitter feed, and sends relevant tweets to an Amazon Kinesis data stream. Then it uses an AWS Lambda function to take the appropriate action. In this case, that action involves first calling Amazon Comprehend to provide a sentiment score, and then using Amazon Pinpoint to engage with the Twitter user. This solution has several benefits for Mountain Manhattan:

  • It’s scalable. Mountain Manhattan has flash and holiday sales, targeted campaigns, and various ad campaigns that can lead to spikes in customer tweets. This solution can handle nearly any workload in real time. Furthermore, ingesting every single tweet about their brand helps Mountain Manhattan get a holistic view of customer sentiment.
  • It’s easy to use. Mountain Manhattan needs to adapt to their customer needs. This means that they need a solution that’s customizable, user-friendly, and intuitive to use. By using Amazon Pinpoint, Mountain Manhattan’s marketing team was able to set up recurring campaigns based on certain customer characteristics. The daily, automated campaigns send notifications to an ever-updating dynamic segment. This ensures that customers never receive the same campaign message twice.
  • It’s cost-effective. Priorities for Mountain Manhattan can change quickly, and long-term contracts are no longer appealing to management. By using AWS services, there are no subscription fees, upfront costs, or long-term commitments. Mountain Manhattan pays only for what they use, and they can adjust their marketing spend at any time.
  • It lets you own your data. Data is the lifeblood of modern marketing organizations. Companies need to own their data for use across many applications and systems. This solution gives Mountain Manhattan that ownership and flexibility. If it ever becomes necessary, they can change the destination of their Kinesis data streams to nearly any destination, and can export their customer data from Amazon Pinpoint.

How the solution works

The following architecture diagram shows the various AWS services that enable this AI-powered social sentiment marketing solution.

An image that shows the relationship between the various components used in this solution.

Let’s take a closer look at each of these components. This solution uses the following services and solutions:

  • Mobile client: Mountain Manhattan’s mobile app uses the Twitter SDK to authenticate users. The app is implemented in React Native for cross-platform compatibility, and because Mountain Manhattan’s developers are more familiar with JavaScript. Integrating the Twitter SDK enables Mountain Manhattan to map customers’ Twitter handles to specific mobile devices. There are a variety of ways to authenticate users—including authentication services from Facebook, Google, and Amazon. In this example, we focus on Twitter.
  • Amazon Kinesis Data Streams: This AWS service transfers tweets from Twitter into AWS Lambda (for sentiment analysis) and Amazon S3 (for long-term archival). Kinesis Data Streams can capture and store terabytes of data per hour from hundreds of thousands of sources (a minimal ingestion sketch follows this list). In the future, Mountain Manhattan could expand this solution to analyze data from Facebook, point-of-sale terminals, and website click streams.
  • Amazon Elasticsearch Service: Kinesis Data Firehose streams the tweets into an Elasticsearch cluster. By using Elasticsearch, Mountain Manhattan can easily search the data, and can visualize it by using Kibana.
  • AWS Lambda and Amazon Comprehend: Mountain Manhattan uses AWS Lambda to execute code without having to worry about deploying and maintaining servers. The AWS Lambda function looks at the tweets as they come in and determines the appropriate action to take. In Mountain Manhattan’s case, if the customer who tweeted is known, it calls Amazon Comprehend to perform AI-based sentiment analysis. Based on the results of that sentiment analysis, the Lambda function calls Amazon Pinpoint to begin the customer engagement process.
  • Amazon Pinpoint: This solution uses Amazon Pinpoint to handle two essential functions. First, it captures information about endpoints (the unique devices that use the app). Second, it sends targeted campaigns to those endpoints. The AWS Mobile SDK, which is integrated into Mountain Manhattan’s app, automatically associates the customer’s Twitter handle with their endpoint ID in Amazon Pinpoint. Mountain Manhattan also collects some custom attributes for each endpoint. For example, they place each endpoint into one of the following categories: Influencers, Supporters, Detractors, Loyal Shoppers, and CS Support Needed. By categorizing customers in this way, Mountain Manhattan can create more personalized messaging.
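
As a rough, hypothetical sketch of the ingestion step referenced above, the snippet below shows how a producer (such as the process that watches the Twitter stream) could put a tweet record onto a Kinesis data stream; the stream name and the tweet structure are placeholders.

import json
import boto3

kinesis = boto3.client('kinesis')

def publish_tweet(tweet):
    # Partition by Twitter handle so tweets from the same user stay in order.
    kinesis.put_record(
        StreamName='social-sentiment-stream',          # placeholder stream name
        Data=json.dumps(tweet).encode('utf-8'),
        PartitionKey=tweet['user']['screen_name']
    )

publish_tweet({
    'text': 'Loving the new jackets from @MountainManhattan!',
    'user': {'screen_name': 'happy_customer', 'followers_count': 1200}
})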

Mountain Manhattan’s solution in action

As Mountain Manhattan starts to ingest tweets, they get to know a lot more about their customers than just what they said. Mountain Manhattan can see the number of followers a user has, which is a good way to identify influencers. Additionally, Mountain Manhattan can see the Twitter user’s description, logo, picture, and location (if the user has exposed it). All of this data is fed into the AWS Lambda function, where Mountain Manhattan can take the customized action.

The screenshots in the following sections show the kinds of push notifications that Mountain Manhattan could automatically send to customers based on the content of their tweets.

Identifying influencers and early adopters

Mountain Manhattan’s AWS Lambda function determines how many Twitter followers each user has. If the number of followers is above a certain threshold, the function attaches the Influencer custom attribute to the user’s endpoint, and sends them a push notification.

A push notification that says "We like you too! Tap here to join our Influencers Club and get free stuff!"

Tracking and engaging with consumers during events or promotions

During events, Mountain Manhattan can join the conversation with their customers by sending messages in real time based on customers’ tweets.

A push notification that says "We're glad you're enjoying the sale! Tap here to subscribe to our events calendar."

Proactively engaging with customers having support issues

When Mountain Manhattan determines that the sentiment of a tweet is negative, they can send custom push notifications in an attempt to resolve the issue.

A push notification that says "Sorry to hear you're having trouble! :( Tap here to talk to Jake, one of our best support team members."

Offering discounts or concessions to unhappy customers

Mountain Manhattan could monitor for certain words or phrases, such as “shipping delays”. When they detect these keywords, Amazon Pinpoint can automatically send a push notification that offers a discount on a future purchase.

A push notification that says "We agree--delays are annoying! Tap here to get 20% off your next order."

Deploying the solution

Now that we’ve seen how this solution works, it’s time to implement it. The coolest part? You can use an AWS CloudFormation template to deploy all of the AWS components of this solution in a few clicks and about 10 minutes of your time.

Note: The procedures for deploying this solution might change over time as we continue to make improvements to it. For the latest procedures, see the Github page for this solution at https://github.com/aws-samples/amazon-pinpoint-social-sentiment/.

Prerequisites

To complete these procedures, you have to have the following:

  • A mobile app that uses Twitter’s APIs or SDK for authentication and for ingesting tweets.
  • A macOS-based computer and a physical iOS device (the Simulator that’s included with Xcode isn’t sufficient for testing this solution).
  • Xcode, Node.js, npm, and CocoaPods installed on your macOS-based computer.
    • To download Xcode, go to https://developer.apple.com/download/.
    • To download Node.js and npm, go to https://nodejs.org/en/. Download the latest Long-Term Support (LTS) version for macOS.
    • To download and install CocoaPods, type the following command at the macOS command line: sudo gem install cocoapods
  • The AWS Command Line Interface (AWS CLI) installed and configured on your macOS-based computer. For information about installing the AWS CLI, see Installing the AWS Command Line Interface. For information about setting up the AWS CLI, see Configuring the AWS CLI.
  • An AWS account with sufficient permissions to create the resources shown in the architecture diagram in the earlier section. For more information about creating an AWS account, see How do I create and activate a new Amazon Web Services account.
  • An Amazon EC2 key pair. You need this to log in to the EC2 instance if you want to modify the Twitter handle that you’re monitoring. For more information, see Creating a Key Pair Using Amazon EC2.
  • An Apple Developer account. Note that the approach that we cover in this post focuses exclusively on iOS devices. You can also implement this solution on Android devices, but those steps aren’t covered here.

Part 1: Create a Twitter application

The first step in this process is to create a Twitter app, which gives you access to the Twitter API. This solution uses the Twitter API to collect tweets in real time.

To create a Twitter application:

  1. Log in to your Twitter account. If you don’t already have a Twitter account, create one at https://twitter.com/signup.
  2. Go to https://apps.twitter.com/app/new, and then choose Create a new application.
  3. Under Application Details, complete the following sections:
    • For Name, type the name of your app.
    • For Description, type a description of your app.
    • For the Website and Callback URL fields, type any fully qualified URL (such as https://www.example.com). You’ll change these values in a later step, so the values you enter at this point aren’t important.
  4. Choose Create your Twitter application.
  5. Under Your access token, choose Create your access token.
  6. Under Application type, choose Read Only.
  7. Under OAuth settings, note the values next to Consumer key and Consumer secret. Then, under Your access token, note the values next to Access token and Access token secret. You’ll need all of these values in later steps.

Part 2: Install the dependencies

This solution requires you to download and set up some files from a GitHub repository.

To configure the AWS Mobile SDK in your app:

  1. Open Terminal.app. On the command line, navigate to the directory where you want to create your project.
  2. On the command line, type the following command to clone the repository that contains the source code that you’re using to configure this solution: git clone https://github.com/aws-samples/amazon-pinpoint-social-sentiment/
  3. Type the following command to change to the directory that contains the installation files: cd amazon-pinpoint-social-sentiment/mobile
  4. Type the following command to download the dependencies for this solution: npm install
  5. Type the following command to link the dependencies in the project: react-native link
  6. Type the following command to change into the ios directory: cd ios
  7. Type the following command to install CocoaPods into your project: pod install

Part 3: Set up your app to use the AWS Mobile SDK

To configure your app:

  1. From the /mobile directory, type the following command to create a backend project for your app and pull the service configuration (aws-exports.js file) into your project: awsmobile init. Press Enter at each prompt to accept the default response, as shown in the following example.
    Please tell us about your project:
    ? Where is your project's source directory: /
    ? Where is your project's distribution directory that stores build artifacts: /
    ? What is your project's build command: npm run-script build
    ? What is your project's start command for local test run: npm run-script start
    
    ? What awsmobile project name would you like to use: mobile-2018-08-16-03-16-39

  2. Open the file aws-exports.js. This file contains information about the backend configuration of your AWS Mobile Hub project. Take note of the aws_mobile_analytics_app_id key—you’ll use this value in a later step.
  3. In a text editor, open the file amazon-pinpoint-social-sentiment/mobile/App.js. Under TwitterAuth.init, next to twitter_key, replace <your key here> with the consumer key that you received when you created your Twitter app in Part 1. Then, next to twitter_secret, replace <your secret here> with the consumer secret you received when you created your Twitter app. When you finish, save the file.
  4. In a text editor, open the file amazon-pinpoint-social-sentiment/mobile/ios/MobileCon/AppDelegate.m. Search for the following section:
    - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
    {
      NSURL *jsCodeLocation;
    
      [[Twitter sharedInstance] startWithConsumerKey:@"<your-consumer-key>" consumerSecret:@"<your-consumer-secret>"];

    In this section, replace <your-consumer-key> with your Twitter consumer key, and replace <your-consumer-secret> with your Twitter consumer secret.

  5. In a text editor, open the file amazon-pinpoint-social-sentiment/mobile/ios/MobileCon/Info.plist. Search for the following section:
    <key>CFBundleURLTypes</key>
        <array>
            <dict>
                <key>CFBundleURLSchemes</key>
                <array>
                    <string>twitterkit-<your-API-key></string>
                </array>
            </dict>
        </array>
        ...

    Replace <your-API-key> with your Twitter consumer key.

Part 4: Set up push notifications in your app

Now you’re ready to set up your app to send push notifications. A recent Medium post from Nader Dabit, one of our Developer Advocates, outlines this process nicely. Start at the Apple Developer Configuration section, and complete the remaining steps. After you complete these steps, your app is ready to send push notifications.

Part 5: Launch the AWS CloudFormation template

While your app is building, you can launch the AWS CloudFormation template that sets up the backend components that power this solution.

  1. Sign in to the AWS Management Console, and then open the AWS CloudFormation console at https://console.aws.amazon.com/cloudformation/home.
  2. Use the region selector to select the US East (N. Virginia) region.
  3. Choose Create new stack.
  4. Next to Choose a template, choose Specify an Amazon S3 template URL, and then paste the following URL: https://s3.amazonaws.com/mattd-customer-share/twitterdemo.template.yaml. Choose Next.
  5. Under Specify Details, for Stack Name, type a name for the CloudFormation stack.
  6. Under Parameters, do the following:
    1. For AccessToken, type your Twitter access token.
    2. For SecretAccessToken, type your Twitter access token secret.
    3. For AppId, type the app ID that you obtained in Part 3.
    4. For ConsumerKey, type your Twitter consumer key.
    5. For ConsumerSecret, type your Twitter consumer secret.
  7. Choose Next.
  8. On the next page, review your settings, and then choose Next again. On the final page, select the box to indicate that you understand that AWS CloudFormation will create IAM resources, and then choose Create.

When you choose Create, AWS CloudFormation creates all of the backend components for the application. These include an EC2 instance, networking infrastructure, a Kinesis data stream, a Kinesis Data Firehose delivery stream, an S3 bucket, an Elasticsearch cluster, and a Lambda function. This process takes about 10 minutes to complete.

Part 6: Send a test tweet

Now you’re ready to test the solution to make sure that all of the components work as expected.

Start by logging in to your Twitter account. Send a tweet to @awsformobile. Your tweet should contain language that has a positive sentiment.

Your EC2 instance, which monitors the Twitter streaming API, captures this tweet. When this happens, the EC2 instance uses the Kinesis data stream to send the tweet to an Amazon S3 bucket for long-term storage. It also sends the tweet to AWS Lambda, which uses Amazon Comprehend to assign a sentiment score to the tweet. If the message is positive, Amazon Pinpoint sends a push notification to the Twitter handle that sent the message.
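
The snippet below is a simplified, hypothetical sketch of that Lambda logic rather than the exact code in the repository: it scores the tweet with Amazon Comprehend, records the sentiment as a custom endpoint attribute (so the campaign in Part 7 can segment on it), and optionally sends an immediate push notification through Amazon Pinpoint. The application ID, endpoint ID, and message text are placeholders.

import boto3

comprehend = boto3.client('comprehend')
pinpoint = boto3.client('pinpoint')

APPLICATION_ID = 'REPLACE_WITH_PINPOINT_APP_ID'   # placeholder

def handle_tweet(tweet_text, endpoint_id):
    # Score the tweet: returns POSITIVE, NEGATIVE, NEUTRAL, or MIXED.
    sentiment = comprehend.detect_sentiment(
        Text=tweet_text, LanguageCode='en')['Sentiment']

    # Record the sentiment on the endpoint so campaigns can segment on it.
    pinpoint.update_endpoint(
        ApplicationId=APPLICATION_ID,
        EndpointId=endpoint_id,
        EndpointRequest={'Attributes': {'Sentiment': [sentiment.title()]}}
    )

    # Optionally engage right away when the sentiment is positive.
    if sentiment == 'POSITIVE':
        pinpoint.send_messages(
            ApplicationId=APPLICATION_ID,
            MessageRequest={
                'Endpoints': {endpoint_id: {}},
                'MessageConfiguration': {
                    'APNSMessage': {
                        'Title': 'Thanks for the shout-out!',   # placeholder copy
                        'Body': 'We appreciate you. Tap to say hi back.',
                        'Action': 'OPEN_APP'
                    }
                }
            }
        )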

You can monitor the execution of the Lambda function by using Amazon CloudWatch Logs. You can access the CloudWatch Logs console at https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#logs. The log should contain an entry that resembles the following example:

On the Amazon Elasticsearch Service (Amazon ES) console, you can watch as Amazon ES catalogs incoming tweets. You can access this console at https://console.aws.amazon.com/es/home?region=us-east-1. For the Amazon ES domain for the tweets, choose the Kibana URL. You can use Kibana to easily search your incoming tweets, as shown in the following image:

Finally, you can go to your Amazon S3 bucket to view an archive of the tweets that were addressed to you. This bucket is useful for simple archiving, additional analysis, visualization, or even machine learning. You can access the Amazon S3 console at https://s3.console.aws.amazon.com/s3/home?region=us-east-1#.

Part 7: Create an Amazon Pinpoint campaign

In the real world, you probably don’t want to send messages to users immediately after they send tweets to your Twitter handle. If you did, you might seem too aggressive, and your customers might hesitate to engage with your brand in the future.

Fortunately, you can use the campaign scheduling tools in Amazon Pinpoint to create a recurring campaign.

  1. Sign in to the AWS Management Console, and then open the Amazon Pinpoint console at https://console.aws.amazon.com/pinpoint/home/?region=us-east-1.
  2. On the Projects page, choose your app.
  3. In the navigation pane, choose Campaigns, and then choose New Campaign.
  4. For Campaign name, type a name for the campaign, and then choose Next step.
  5. On the Segment page, do the following:
    1. Choose Create a new segment.
    2. For Name your segment to reuse it later, type a name for the segment.
    3. For Filter by user attributes, choose the plus sign (+) icon. Filter the segment to include all endpoints where Sentiment is Positive, as shown in the following image.
       A screenshot that shows how to add the Sentiment = Positive attribute to a Pinpoint segment.
    4. Choose Next step.
  6. On the Message page, type the message that you want to send, and then choose Next step. To learn more about writing mobile push messages, see Writing a Mobile Push Message in the Amazon Pinpoint User Guide.
  7. On the Schedule page, choose the date and time when the message will be sent. You can also schedule the campaign to run on a recurring basis, such as every week. To learn more about scheduling campaigns, see Set the Campaign Schedule in the Amazon Pinpoint User Guide.

Final thoughts

In this blog post, we demonstrated how your marketing department can use machine learning to understand social user sentiment and engage with users.

In the interest of transparency, we calculated the total costs associated with running this solution. Our calculation includes a small Elasticsearch cluster, a small EC2 instance, storage costs, compute costs, and messaging costs. Assuming your app has 1 million monthly active users (MAUs), and assuming that 0.5% of those MAUs mention your brand every month on Twitter, running this solution would cost $28.66 per month, or just under four cents an hour.

We think this solution is one of the most affordable and capable social media sentiment analysis tools you’ll find on the market today. The best part about this solution is that it can be a complete solution—or the starting point for your own customized solution.

Need to send push messages on other platforms, such as Firebase Cloud Messaging (FCM) (for most Android devices)? No problem. Just set up your app to send endpoint data to Amazon Pinpoint and to send push notifications, and you’re ready to go! Want to send messages through different channels? If you have other endpoint data for your customers (such as email addresses or mobile phone numbers), you can add channels to your project in Amazon Pinpoint, and then use those channels to send messages.

We’re very excited about this solution, and we can’t wait to see what you build with it!

Building Real Time AI with AWS Fargate

Post Syndicated from AWS Admin original https://aws.amazon.com/blogs/architecture/building-real-time-ai-with-aws-fargate/

This post is a contribution from AWS customer, Veritone. It was originally published on the company’s website.

Here at Veritone, we deal with a lot of data. Our product uses the power of cognitive computing to analyze and interpret the contents of structured and unstructured data, particularly audio and video. We use cognitive computing to provide valuable insights to our customers.

Our platform is designed to ingest audio, video and other types of data via a series of batch processes (called “engines”) that process the media and attach some sort of output to it, such as transcripts or facial recognition data.

Our goal was to design a data pipeline that could process streaming audio, video, or other content from sources such as IP cameras, mobile devices, and structured data feeds in real time, through an open ecosystem of cognitive engines. This enables support for customer use cases like real-time transcription for live-broadcast TV and radio, face and object detection for public safety applications, and the real-time analysis of social media for harmful content.

Why AWS Fargate?
We leverage Docker containers as the deployment artifact for both our internal services and cognitive engines. This gives us the flexibility to deploy and execute services in a reliable and portable way. AWS Fargate turned out to be a perfect tool for orchestrating the dynamic nature of our deployments.

Fargate allows us to quickly scale Docker-based engines from zero to any desired number without having to worry about pre-provisioning capacity or bootstrapping and managing EC2 instances. We use Fargate both as a backend for quickly starting engine containers on demand and for the orchestration of services that need to always be running. It enables us to handle sudden bursts of real-time workloads with a consistent launch time. Fargate also allows our developers to get near-immediate feedback on deployments without having to manage any infrastructure or deal with downtime. The integration with Fargate makes this super simple.

Moving to Real Time
We designed a solution (shown below) in which media from a source is streamed through a series of containerized engines that process the data as it is ingested. The source might be a mobile app that “pushes” streams into our platform, or an IP camera feed that we “pull”. Some engines, which we refer to as Stream Engines, work on raw media streams from start to finish. For all others, streams are decomposed into a series of objects, such as video frames or small audio/video chunks, that can be processed in parallel by what we call Object Engines. An output stream of results from each engine in the pipeline is relayed back to our core platform or to customer-facing applications via Veritone’s APIs.

Message queues placed between the components facilitate the flow of stream data, objects, and events through the data pipeline. For that, we defined a number of message formats. We decided to use Apache Kafka, a streaming message platform, as the message bus between these components.

Kafka gives us the ability to:

  • Guarantee that a consumer receives an entire stream of messages, in sequence.
  • Buffer streams and have consumers process streams at their own pace.
  • Determine “lag” of engine queues.
  • Distribute workload across engine groups, by utilizing partitions.

The flow of stream data and the lifecycle of the engines is managed and coordinated by a number of microservices written in Go. These include the Scheduler, Coordinator, and Engine Orchestrators.
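Veritone's engine code isn't included in this post, but the Object Engine consumption pattern described above can be sketched roughly as follows with the kafka-python client; the topic name, consumer group, and processing step are hypothetical.

import json
from kafka import KafkaConsumer  # pip install kafka-python

# Hypothetical Object Engine: consume decomposed stream objects from a
# partitioned topic. Using one consumer group per engine type lets Kafka
# spread partitions across engine replicas, which is how workload is
# distributed across an engine group.
consumer = KafkaConsumer(
    "stream-objects",                       # hypothetical topic of video frames / A/V chunks
    bootstrap_servers=["kafka:9092"],
    group_id="object-engine-face-detect",   # hypothetical engine group
    enable_auto_commit=False,
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    obj = message.value                     # e.g. a reference to a frame or chunk
    result = {"object_id": obj.get("id"), "labels": []}  # run the engine's model here
    # ... publish `result` to an output topic for the core platform ...
    consumer.commit()                       # commit only after the object is fully processed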

Deployment and Orchestration
For processing real-time data, such as streaming video from a mobile device, we required the flexibility to deploy dynamic container configurations and often define new services (engines) on the fly. Stream Engines need to be launched on-demand to handle an incoming stream. Object Engines, on the other hand, are brought up and torn down in response to the amount of pending work in their respective queues.

EC2 instances typically require provisioning to be done in anticipation of incoming load and generally take too long to start in this case. We needed a way to quickly scale Docker containers on demand, and Fargate made this achievable with very little effort.
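The orchestration code itself isn't shown in the post; purely as an illustration of the on-demand launch pattern, here is a minimal boto3 sketch, with hypothetical cluster, task definition, container name, and network values.

import boto3

ecs = boto3.client("ecs")

# Launch one on-demand engine task on Fargate; every identifier below is a
# placeholder for illustration.
response = ecs.run_task(
    cluster="engine-cluster",
    launchType="FARGATE",
    taskDefinition="stream-engine-transcription:3",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }
    },
    overrides={
        "containerOverrides": [
            {"name": "engine", "environment": [{"name": "STREAM_ID", "value": "abc123"}]}
        ]
    },
)
print(response["tasks"][0]["taskArn"])

Because there is no instance to provision, a call like this is all that stands between "work arrived on a queue" and "a container is processing it."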

In Closing
Fargate helped us solve a lot of problems related to real-time processing, including the reduction of operational overhead, for this dynamic environment. We expect it to continue to grow and mature as a service. Some features we would like to see in the near future include GPU support for our GPU-based AI Engines and the ability to cache larger container images for quicker “warm” launch times.

About Veritone
Veritone created the world’s first operating system for Artificial Intelligence. Veritone’s aiWARE operating system unlocks the power of cognitive computing to transform and analyze audio, video and other data sources in an automated manner to generate actionable insights. The Veritone platform provides customers ease, speed and accuracy at low cost.

The Veritone authors are Christopher Stobie – [email protected] and Mezzi Sotoodeh – [email protected]

Create Dynamic Contact Forms for S3 Static Websites Using AWS Lambda, Amazon API Gateway, and Amazon SES

Post Syndicated from Saurabh Shrivastava original https://aws.amazon.com/blogs/architecture/create-dynamic-contact-forms-for-s3-static-websites-using-aws-lambda-amazon-api-gateway-and-amazon-ses/

In the era of the cloud, hosting a static website is cheaper, faster, and simpler than traditional on-premises hosting, where you always have to maintain a running server. That said, almost no static website is truly static: you will find at least a “contact us” page on most of them, and handling that form submission is, by its very nature, dynamic. All businesses need a “contact us” page to help customers connect with the business owners for services, inquiries, or feedback. In its simplest form, a “contact us” page collects a user’s basic information (name, email address, phone number, and a short message) and shares it with the business via email when submitted.

AWS provides a simplified way to host your static website in an Amazon S3 bucket using your own custom domain. You can either choose to register a new domain with AWS Route 53 or transfer your domain to Route 53 for hosting in five simple steps.

Obviously, you don’t want to spin up a server just to handle a simple “contact us” form, but it’s a critical element of your website. Luckily, AWS delivers a serverless option. You can use AWS Lambda with Amazon API Gateway to create a serverless backend, and use Amazon Simple Email Service (Amazon SES) to send an email to the business owner whenever a customer submits an inquiry or feedback. Let’s learn how to do it.

Architecture Flow

Here, we are assuming a common website-to-cloud migration scenario: you registered your domain name with a third-party domain registrar, migrated your website to Amazon S3, and then switched to Amazon Route 53 as your DNS provider. You contacted your DNS provider and updated the name server (NS) record to use the name servers in the delegation set that you created in Amazon Route 53 (find step-by-step details in the Amazon S3 developer guide). Your email server still belongs to your original provider, because you got it as part of the package when you registered your domain on a multi-year contract.

Following is the architecture flow with detailed guidance.

[Architecture diagram: “contact us” form hosted on Amazon S3 → Amazon API Gateway → AWS Lambda → Amazon SES]

In the above diagram, the customer is submitting their inquiry through a “contact us” form, which is hosted in an Amazon S3 bucket as a static website. Information will flow in three simple steps:

  • Your “contact us” form collects all the user information and posts it to an Amazon API Gateway RESTful endpoint.
  • Amazon API Gateway passes the collected user information to an AWS Lambda function.
  • The AWS Lambda function generates an email and forwards it to your mail server using Amazon SES.

Your “Contact Us” Form

Let’s start with a simple “contact us” form html code snippet:

<form id="contact-form" method="post">
      <h4>Name:</h4>
      <input type="text" style="height:35px;" id="name-input" placeholder="Enter name here…" class="form-control" style="width:100%;" /><br/>
      <h4>Phone:</h4>
      <input type="phone" style="height:35px;" id="phone-input" placeholder="Enter phone number" class="form-control" style="width:100%;"/><br/>
      <h4>Email:</h4>
      <input type="email" style="height:35px;" id="email-input" placeholder="Enter email here…" class="form-control" style="width:100%;"/><br/>
      <h4>How can we help you?</h4>
      <textarea id="description-input" rows="3" placeholder="Enter your message…" class="form-control" style="width:100%;"></textarea><br/>
      <div class="g-recaptcha" data-sitekey="6Lc7cVMUAAAAAM1yxf64wrmO8gvi8A1oQ_ead1ys" class="form-control" style="width:100%;"></div>
      <button type="button" onClick="submitToAPI(event)" class="btn btn-lg" style="margin-top:20px;">Submit</button>
</form>

The above form asks the user to enter their name, phone number, and email address, provides a free-form text box for the inquiry or feedback details, and includes a submit button.

Later in the post, I’ll share the jQuery code for field validation and the variables that collect the values.

Defining AWS Lambda Function

The next step is to create the Lambda function that will receive all of the user information through API Gateway.

The AWS Lambda function mailfwd is triggered by the API Gateway POST method, which we will create in the next section, and sends the information to Amazon SES for mail forwarding.

If you are new to AWS Lambda, follow these simple steps to Create a Simple Lambda Function and familiarize yourself with the service.

  1. Go to the AWS Lambda console, choose Create Function, select the hello-world Node.js 6.10 blueprint, and then choose Configure at the bottom of the page.
  2. To create your AWS Lambda function, select the “edit code inline” setting, which displays an editor box with code in it, and replace that code with the following (making sure to change [email protected] to your real email address and to update your actual domain in the response variable):

    var AWS = require('aws-sdk');
    var ses = new AWS.SES();
     
    var RECEIVER = '[email protected]';
    var SENDER = '[email protected]';
    
    var response = {
     "isBase64Encoded": false,
     "headers": { 'Content-Type': 'application/json', 'Access-Control-Allow-Origin': 'example.com'},
     "statusCode": 200,
     "body": "{\"result\": \"Success.\"}"
     };
    
    exports.handler = function (event, context) {
        console.log('Received event:', event);
        sendEmail(event, function (err, data) {
            context.done(err, response);
        });
    };
     
    function sendEmail (event, done) {
        var params = {
            Destination: {
                ToAddresses: [
                    RECEIVER
                ]
            },
            Message: {
                Body: {
                    Text: {
                        Data: 'name: ' + event.name + '\nphone: ' + event.phone + '\nemail: ' + event.email + '\ndesc: ' + event.desc,
                        Charset: 'UTF-8'
                    }
                },
                Subject: {
                    Data: 'Website Referral Form: ' + event.name,
                    Charset: 'UTF-8'
                }
            },
            Source: SENDER
        };
        ses.sendEmail(params, done);
    }
    

Now you can execute and test your AWS Lambda function as directed in the AWS developer guide. Make sure to update the Lambda execution role; follow the steps provided in the Lambda developer guide to create a basic execution role.

Add the following statement to the execution role’s policy to allow the Lambda function to send email through Amazon SES (a scripted way to attach it follows the policy):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "ses:SendEmail",
            "Resource": "*"
        }
    ]
}
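If you’d rather attach this statement from a script than through the console, here is a minimal boto3 sketch; the role and policy names are hypothetical placeholders.

import json
import boto3

iam = boto3.client("iam")

ses_send_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "VisualEditor0", "Effect": "Allow", "Action": "ses:SendEmail", "Resource": "*"}
    ],
}

# Attach the inline policy to the Lambda execution role (hypothetical name).
iam.put_role_policy(
    RoleName="contact-form-lambda-role",
    PolicyName="AllowSesSendEmail",
    PolicyDocument=json.dumps(ses_send_policy),
)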

Creating the API Gateway

Now, let’s create the API Gateway API that will provide a RESTful endpoint for the AWS Lambda function we just created. We will use this API endpoint to post the user information submitted in the “Contact Us” form, which will then be passed on to the AWS Lambda function.

If you are new to API Gateway, follow these simple steps to create and test an API from the example in the API Gateway Console to familiarize yourself.

  1. Log in to the AWS console and select API Gateway. Choose Create new API and enter a name for your API.
  2. Go to your API name, listed in the left-hand navigation, click the “actions” drop-down, and select “create resource.”
  3. Select your newly created resource and choose “create method.” Choose POST. Here, you will choose our AWS Lambda function; to do this, select “mailfwd” from the drop-down.
  4. After saving the form above, click the “action” menu and choose “deploy API.” You will see the final resources and methods, which look something like the screenshot below:
  5. Now get your RESTful API URL from the “stages” tab as shown in the screenshot below. We will use this URL on our “contact us” HTML page to send the request with all the user information (a quick command-line test of the deployed endpoint follows this list).
  6. Make sure to enable CORS in the API Gateway or you’ll get an error: “Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://abc1234.execute-api.us-east-1.amazonaws.com/02/mailme. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing).”
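Before wiring up the front end, you can sanity-check the deployed endpoint from the command line. This isn’t part of the original walkthrough, just a convenience; a minimal sketch using the Python requests library, with a placeholder invoke URL and test values:

import requests

# Placeholder invoke URL; replace with the URL shown on your API's "stages" tab.
url = "https://abc1234.execute-api.us-east-1.amazonaws.com/01/contact"

payload = {
    "name": "Test User",
    "phone": "1234567890",
    "email": "test@example.com",
    "desc": "Testing the contact form endpoint",
}

resp = requests.post(url, json=payload, timeout=10)
# Expect an HTTP 200 (and an email at the RECEIVER address) if everything is wired correctly.
print(resp.status_code, resp.text)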

Setup Amazon SES

Amazon SES requires that you verify your identities (the domains or email addresses that you send email from) to confirm that you own them, and to prevent unauthorized use. Follow the steps outlined in the Amazon SES user guide to verify your sender e-mail.
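Verification can also be started programmatically. A minimal boto3 sketch, with a placeholder address:

import boto3

ses = boto3.client("ses", region_name="us-east-1")

# Sends a verification email to the address; click the link in that email to
# complete verification. The address below is a placeholder.
ses.verify_email_identity(EmailAddress="owner@example.com")

# Check the verification status afterwards.
attrs = ses.get_identity_verification_attributes(Identities=["owner@example.com"])
print(attrs["VerificationAttributes"])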

Connecting it all Together

Since we created our AWS Lambda function and exposed it through an API Gateway endpoint, it’s time to connect all the pieces together and test them. Put the following jQuery code in the <head> section of your “contact us” HTML page. Replace the URL variable with your own API Gateway invoke URL. You can change the field validation to suit your needs.

function submitToAPI(e) {
       e.preventDefault();
       var URL = "https://abc1234.execute-api.us-east-1.amazonaws.com/01/contact";

            var Namere = /[A-Za-z]{1}[A-Za-z]/;
            if (!Namere.test($("#name-input").val())) {
                         alert ("Name can not less than 2 char");
                return;
            }
            var mobilere = /[0-9]{10}/;
            if (!mobilere.test($("#phone-input").val())) {
                alert ("Please enter valid mobile number");
                return;
            }
            if ($("#email-input").val()=="") {
                alert ("Please enter your email id");
                return;
            }

            var reeamil = /^([\w-\.]+@([\w-]+\.)+[\w-]{2,6})?$/;
            if (!reeamil.test($("#email-input").val())) {
                alert ("Please enter valid email address");
                return;
            }

       var name = $("#name-input").val();
       var phone = $("#phone-input").val();
       var email = $("#email-input").val();
       var desc = $("#description-input").val();
       var data = {
          name : name,
          phone : phone,
          email : email,
          desc : desc
        };

       $.ajax({
         type: "POST",
         url : "https://abc1234.execute-api.us-east-1.amazonaws.com/01/contact",
         dataType: "json",
         crossDomain: "true",
         contentType: "application/json; charset=utf-8",
         data: JSON.stringify(data),

         
         success: function () {
           // clear form and show a success message
           alert("Successfull");
           document.getElementById("contact-form").reset();
       location.reload();
         },
         error: function () {
           // show an error message
           alert("UnSuccessfull");
         }});
     }

Now you should be able to submit your contact form and start receiving email notifications when a form is completed and submitted.

Conclusion

Here we are addressing a common use case — a simple contact form — which is important for any small business hosting their website on Amazon S3. This post should help make your static website more dynamic without spinning up any server.

Have you had challenges adding a “contact us” form to your small business website?

About the author

Saurabh Shrivastava is a Solutions Architect working with global systems integrators. He works with our partners and customers to provide them architectural guidance for building scalable architecture in hybrid and AWS environment. In his spare time, he enjoys spending time with his family, hiking, and biking.

Announcing 7 New Exam Readiness Courses for AWS Certifications

Post Syndicated from Janna Pellegrino original https://aws.amazon.com/blogs/architecture/announcing-7-new-exam-readiness-courses-for-aws-certifications/

We’re excited to announce the launch of seven Exam Readiness courses to help you prepare for AWS Certification. Built by AWS, these courses are designed to help you prepare for the Solutions Architect, Developer, DevOps Engineer, Big Data, and Advanced Networking exams. Specifically, the Exam Readiness: AWS Certified Solutions Architect – Associate course has been updated to reflect the newest version of the exam released earlier this year.

What You’ll Learn
These new courses complement our technical training courses and focus on enabling you to validate your technical expertise with AWS Certification. We’ll teach you how to interpret exam questions, apply concepts being tested by the exam, and allocate your study time.

You’ll also have a chance to work through sample questions to understand the rationale behind correct and incorrect answer choices. Training is developed by AWS so our courses are current with the latest best practices. Plus, our classes are taught by AWS accredited instructors who have personal experience passing AWS Certification exams, and can help you identify areas for additional work and study.

Watch our video to hear more about these courses from our curriculum development manager.

Ready to Register?
Find and register for a class near you at aws.training. Looking to train several members of your team? You can also contact us for private, onsite training for your team.

Questions? Review our FAQ or contact us.

Streaming Events from Amazon Pinpoint to Redshift

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/streaming-events-from-amazon-pinpoint-to-redshift/

Note: This post was originally written by Ryan Idrigo-Lam, one of the founding members of the Amazon Pinpoint team.


You can use Amazon Pinpoint to segment, target, and engage with your customers directly from the console. The Pinpoint console also includes a variety of dashboards that you can use to keep track of how your customers use your applications, and measure how likely your customers are to engage with the messages you send them.

Some Pinpoint customers, however, have use cases that require a bit more than what these dashboards have to offer. For example, some customers want to join their Pinpoint data to external data sets, or to collect historical data beyond the six month window that Pinpoint retains. To help customers meet these needs, and many more, Amazon Pinpoint includes a feature called Event Streams.

This article provides information about using Event Streams to export your data from Amazon Pinpoint and into a high-performance Amazon Redshift database. Once your data is in Redshift, you can run queries against it, join it with other data sets, use it as a data source for analytics and data visualization tools, and much more.

Step 1: Create a Redshift Cluster

The first step in this process involves creating a new Redshift cluster to store your data. You can complete this step in a few clicks by using the Amazon Redshift console. For more information, see Managing Clusters Using the Console in the Amazon Redshift Cluster Management Guide.

When you create the new cluster, make a note of the values you specify for the Cluster Identifier, Database Name, Master User Name, and Master User Password. You’ll use all of these values when you set up Amazon Kinesis Firehose in the next section.
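If you prefer to script the cluster creation instead of using the console, here is a minimal boto3 sketch; the identifiers and password are placeholders, and a production cluster usually needs more deliberate sizing, VPC, and security settings.

import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Minimal single-node cluster for this walkthrough; all values are placeholders.
redshift.create_cluster(
    ClusterIdentifier="pinpoint-events",
    NodeType="dc2.large",
    ClusterType="single-node",
    DBName="pinpoint",
    MasterUsername="awsuser",
    MasterUserPassword="ChangeMe123!",
    PubliclyAccessible=False,
)

# Wait until the cluster is available before moving on to the Firehose setup.
waiter = redshift.get_waiter("cluster_available")
waiter.wait(ClusterIdentifier="pinpoint-events")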

Step 2: Create a Firehose Delivery Stream with a Redshift Destination

After you create your Redshift cluster, you can create the Amazon Kinesis Data Firehose delivery stream that will deliver your Pinpoint data to the Redshift cluster. If you prefer to script this step, a boto3 sketch follows the console procedure below.

To create the Kinesis Data Firehose delivery stream

  1. Open the Amazon Kinesis Data Firehose console at https://console.aws.amazon.com/firehose/home.
  2. Choose Create delivery stream.
  3. For Delivery stream name, type a name.
  4. Under Choose source, for Source, choose Direct PUT or other sources. Choose Next.
  5. On the Process records page, do the following:
    1. Under Transform source records with AWS Lambda, choose Enabled if you want to use a Lambda function to transform the data before Firehose loads it into Redshift. Otherwise, choose Disabled.
    2. Under Convert record format, choose Disabled, and then choose Next.
  6. On the Choose destination page, do the following:
    1. For Destination, choose Amazon Redshift.
    2. Under Amazon Redshift destination, specify the Cluster name, User name, Password, and Database for the Redshift database you created earlier. Also specify a name for the Table.
    3. Under Intermediate S3 destination, choose an S3 bucket to store data in. Alternatively, choose Create new to create a new bucket. Choose Next.
  7. On the Configure settings page, do the following:
    1. Under IAM role, choose an IAM role that Firehose can use to access your S3 bucket and KMS key. Alternatively, you can have the Firehose console create a new role. Choose Next.
    2. On the Review page, confirm the settings you specified on the previous pages. If the settings are correct, choose Create delivery stream.
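As an alternative to the console steps above, the delivery stream can also be created with boto3. A minimal sketch follows; the ARNs, JDBC URL, and credentials are placeholders, and it pre-fills the COPY options that Step 5 otherwise configures through the console.

import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

# All ARNs, the JDBC URL, and the credentials below are placeholders.
firehose.create_delivery_stream(
    DeliveryStreamName="pinpoint-to-redshift",
    DeliveryStreamType="DirectPut",
    RedshiftDestinationConfiguration={
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-delivery-role",
        "ClusterJDBCURL": "jdbc:redshift://pinpoint-events.abc123.us-east-1.redshift.amazonaws.com:5439/pinpoint",
        "Username": "awsuser",
        "Password": "ChangeMe123!",
        "CopyCommand": {
            "DataTableName": "awsma.event",
            # COPY options from Step 5; point JSON at your json-paths.json object.
            # Also set "DataTableColumns" to the comma-separated column list from Step 5.
            "CopyOptions": "JSON 's3://your-intermediate-bucket/json-paths.json' "
                           "TRUNCATECOLUMNS TIMEFORMAT 'epochmillisecs'",
        },
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::111122223333:role/firehose-delivery-role",
            "BucketARN": "arn:aws:s3:::your-intermediate-bucket",
            "CompressionFormat": "UNCOMPRESSED",
        },
    },
)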

Step 3: Create a JSONPaths file

The next step in this process is to create a JSONPaths file and upload it to an Amazon S3 bucket. You use the JSONPaths file to tell Amazon Redshift how to interpret the unstructured JSON that Amazon Pinpoint provides.

To create a JSONPaths file and upload it to Amazon S3

  1. In a text editor, create a new file.
  2. Paste the following code into the text file:
    {
      "jsonpaths": [
        "$['event_type']",
        "$['event_timestamp']",
        "$['arrival_timestamp']",
        "$['event_version']",
        "$['application']['app_id']",
        "$['application']['package_name']",
        "$['application']['version_name']",
        "$['application']['version_code']",
        "$['application']['title']",
        "$['application']['cognito_identity_pool_id']",
        "$['application']['sdk']['name']",
        "$['application']['sdk']['version']",
        "$['client']['client_id']",
        "$['client']['cognito_id']",
        "$['device']['model']",
        "$['device']['make']",
        "$['device']['platform']['name']",
        "$['device']['platform']['version']",
        "$['device']['locale']['code']",
        "$['device']['locale']['language']",
        "$['device']['locale']['country']",
        "$['session']['session_id']",
        "$['session']['start_timestamp']",
        "$['session']['stop_timestamp']",
        "$['monetization']['transaction']['transaction_id']",
        "$['monetization']['transaction']['store']",
        "$['monetization']['transaction']['item_id']",
        "$['monetization']['transaction']['quantity']",
        "$['monetization']['transaction']['price']['reported_price']",
        "$['monetization']['transaction']['price']['amount']",
        "$['monetization']['transaction']['price']['currency']['code']",
        "$['monetization']['transaction']['price']['currency']['symbol']",
        "$['attributes']['campaign_id']",
        "$['attributes']['campaign_activity_id']",
        "$['attributes']['my_custom_attribute']",
        "$['metrics']['my_custom_metric']"
      ]
    }

  3. Modify the preceding code example to include the fields that you want to import into Redshift.
    Note: You can specify custom attributes or metrics by replacing my_custom_attribute or my_custom_metric in the example above with your custom attributes or metrics, respectively.
  4. When you finish modifying the code example, remove all whitespace, including spaces and line breaks, from the file. Save the file as json-paths.json (a short script that minifies and uploads the file for you follows this list).
  5. Open the Amazon S3 console at https://s3.console.aws.amazon.com/s3/home.
  6. Choose the S3 bucket you created when you set up the Firehose stream. Upload json-paths.json into the bucket.
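A short script can handle both the whitespace removal and the upload. A minimal sketch, assuming the placeholder bucket name below is replaced with your Firehose intermediate S3 bucket:

import json
import boto3

# Load the formatted JSONPaths file, strip all whitespace, and upload it.
with open("json-paths.json") as f:
    jsonpaths = json.load(f)

compact = json.dumps(jsonpaths, separators=(",", ":"))  # no spaces or line breaks

s3 = boto3.client("s3")
s3.put_object(
    Bucket="your-intermediate-bucket",  # placeholder: the Firehose intermediate bucket
    Key="json-paths.json",
    Body=compact.encode("utf-8"),
    ContentType="application/json",
)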

Step 4: Configure the table in Redshift

At this point, it’s time to finish setting up your Redshift database. In this section, you’ll create a table in the Redshift cluster you created earlier. The columns in this table mirror the values you specified in the JSONPaths file in the previous section.

  1. Connect to your Redshift cluster by using a database tool such as SQL Workbench/J. For more information about connecting to a cluster, see Connect to the Cluster in the Amazon Redshift Getting Started Guide.
  2. Create a new table that contains a column for each field in the JSONPaths file you created in the preceding section. You can use the following example as a template.
    CREATE schema AWSMA;
    CREATE TABLE AWSMA.event(
      event_type VARCHAR(256) NOT NULL ENCODE LZO,
      event_timestamp TIMESTAMP NOT NULL ENCODE LZO,
      arrival_timestamp TIMESTAMP NULL ENCODE LZO,
      event_version CHAR(12) NULL ENCODE LZO,
      application_app_id VARCHAR(64) NOT NULL ENCODE LZO,
      application_package_name VARCHAR(256) NULL ENCODE LZO,
      application_version_name VARCHAR(256) NULL ENCODE LZO,
      application_version_code VARCHAR(256) NULL ENCODE LZO,
      application_title VARCHAR(256) NULL ENCODE LZO,
      application_cognito_identity_pool_id VARCHAR(64) NULL ENCODE LZO,
      application_sdk_name VARCHAR(256) NULL ENCODE LZO,
      application_sdk_version VARCHAR(256) NULL ENCODE LZO,
      client_id VARCHAR(64) NULL DISTKEY ENCODE LZO,
      client_cognito_id VARCHAR(64) NULL ENCODE LZO,
      device_model VARCHAR(256) NULL ENCODE LZO,
      device_make VARCHAR(256) NULL ENCODE LZO,
      device_platform_name VARCHAR(256) NULL ENCODE LZO,
      device_platform_version VARCHAR(256) NULL ENCODE LZO,
      device_locale_code VARCHAR(256) NULL ENCODE LZO,
      device_locale_language VARCHAR(64) NULL ENCODE LZO,
      device_locale_country VARCHAR(64) NULL ENCODE LZO,
      session_id VARCHAR(64) NULL ENCODE LZO,
      session_start_timestamp TIMESTAMP NULL ENCODE LZO,
      session_stop_timestamp TIMESTAMP NULL ENCODE LZO,
      monetization_transaction_id VARCHAR(64) NULL ENCODE LZO,
      monetization_transaction_store VARCHAR(64) NULL ENCODE LZO,
      monetization_transaction_item_id VARCHAR(64) NULL ENCODE LZO,
      monetization_transaction_quantity FLOAT8 NULL,
      monetization_transaction_price_reported VARCHAR(64) NULL ENCODE LZO,
      monetization_transaction_price_amount FLOAT8 NULL,
      monetization_transaction_price_currency_code VARCHAR(16) NULL ENCODE LZO,
      monetization_transaction_price_currency_symbol VARCHAR(32) NULL ENCODE LZO,
      -- Custom Attributes
      a_campaign_id VARCHAR(4000),
      a_campaign_activity_id VARCHAR(4000),
      a_my_custom_attribute VARCHAR(4000),
      -- Custom Metrics
      m_my_custom_metric float8
    )
    SORTKEY ( application_app_id, event_timestamp, event_type);

Step 5: Configure the Firehose Stream

You’re getting close! At this point, you’re ready to point the Kinesis Data Firehose stream to your JSONPaths file so that Redshift parses the incoming data properly. You also need to list the columns of the table that your data will be copied into.

To configure the Firehose Stream

  1. Open the Amazon Kinesis Data Firehose console at https://console.aws.amazon.com/firehose/home.
  2. In the list of delivery streams, choose the delivery stream you created earlier.
  3. On the Details tab, choose Edit.
  4. Under Amazon Redshift destination, for COPY options, paste the following:
    JSON 's3://s3-bucket/json-paths.json'
    TRUNCATECOLUMNS
    TIMEFORMAT 'epochmillisecs'

  5. Replace s3-bucket in the preceding code example with the path to the S3 bucket that contains json-paths.json.
  6. For Columns, list all of the columns that are present in the JSONPaths file you created earlier. Specify the column names in the same order as they’re listed in the json-paths.json file, using commas to separate the column names. When you finish, choose Save.

Step 6: Enable Event Streams in Amazon Pinpoint

The only thing left to do now is to tell Amazon Pinpoint to start sending data to Amazon Kinesis.

To enable Event Streaming in Amazon Pinpoint

  1. Open the Amazon Pinpoint console at https://console.aws.amazon.com/pinpoint/home.
  2. Choose the application or project that you want to enable event streams for.
  3. In the navigation pane, choose Settings.
  4. On the Event stream tab, choose Enable streaming of events to Amazon Kinesis.
  5. Under Stream to Amazon Kinesis, select Send events to an Amazon Kinesis Firehose delivery stream.
  6. For Amazon Kinesis Firehose delivery stream, choose the stream you created earlier.
  7. For IAM role, choose an existing role that allows the firehose:PutRecordBatch action, or choose Automatically create a role to have Amazon Pinpoint create a role with the appropriate permissions. If you choose to have Amazon Pinpoint create a role for you, type a name for the role. Choose Save.

That’s it! Once you complete this final step, Amazon Pinpoint starts exporting the data you specified into your Redshift cluster.
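If you prefer to automate this final step, the same configuration can be applied with a single boto3 call. A minimal sketch, with placeholder ARNs and project ID:

import boto3

pinpoint = boto3.client("pinpoint", region_name="us-east-1")

# Point the project's event stream at the Firehose delivery stream; both ARNs
# and the project ID below are placeholders.
pinpoint.put_event_stream(
    ApplicationId="YOUR_PINPOINT_PROJECT_ID",
    WriteEventStream={
        "DestinationStreamArn": "arn:aws:firehose:us-east-1:111122223333:deliverystream/pinpoint-to-redshift",
        "RoleArn": "arn:aws:iam::111122223333:role/pinpoint-event-stream-role",
    },
)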

I hope this walkthrough was helpful. If you have any questions, please let us know in the comments or in the Amazon Pinpoint forum.

AWS Online Tech Talks – June 2018

Post Syndicated from Devin Watson original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-june-2018/

AWS Online Tech Talks – June 2018

Join us this month to learn about AWS services and solutions. New this month, we have a fireside chat with the GM of Amazon WorkSpaces and our 2nd episode of the “How to re:Invent” series. We’ll also cover best practices, deep dives, use cases and more! Join us and register today!

Note – All sessions are free and in Pacific Time.

Tech talks featured this month:

 

Analytics & Big Data

June 18, 2018 | 11:00 AM – 11:45 AM PT – Get Started with Real-Time Streaming Data in Under 5 Minutes – Learn how to use Amazon Kinesis to capture, store, and analyze streaming data in real-time including IoT device data, VPC flow logs, and clickstream data.
June 20, 2018 | 11:00 AM – 11:45 AM PT – Insights For Everyone – Deploying Data across your Organization – Learn how to deploy data at scale using AWS Analytics and QuickSight’s new reader role and usage based pricing.

 

AWS re:Invent
June 13, 2018 | 05:00 PM – 05:30 PM PT – Episode 2: AWS re:Invent Breakout Content Secret Sauce – Hear from one of our own AWS content experts as we dive deep into the re:Invent content strategy and how we maintain a high bar.
Compute

June 25, 2018 | 01:00 PM – 01:45 PM PT – Accelerating Containerized Workloads with Amazon EC2 Spot Instances – Learn how to efficiently deploy containerized workloads and easily manage clusters at any scale at a fraction of the cost with Spot Instances.

June 26, 2018 | 01:00 PM – 01:45 PM PT – Ensuring Your Windows Server Workloads Are Well-Architected – Get the benefits, best practices and tools on running your Microsoft Workloads on AWS leveraging a well-architected approach.

 

Containers
June 25, 2018 | 09:00 AM – 09:45 AM PT – Running Kubernetes on AWS – Learn about the basics of running Kubernetes on AWS, including how to set up masters, configure networking and security, and add auto-scaling to your cluster.

 

Databases

June 18, 2018 | 01:00 PM – 01:45 PM PT – Oracle to Amazon Aurora Migration, Step by Step – Learn how to migrate your Oracle database to Amazon Aurora.
DevOps

June 20, 2018 | 09:00 AM – 09:45 AM PT – Set Up a CI/CD Pipeline for Deploying Containers Using the AWS Developer Tools – Learn how to set up a CI/CD pipeline for deploying containers using the AWS Developer Tools.

 

Enterprise & Hybrid
June 18, 2018 | 09:00 AM – 09:45 AM PT – De-risking Enterprise Migration with AWS Managed Services – Learn how enterprise customers are de-risking cloud adoption with AWS Managed Services.

June 19, 2018 | 11:00 AM – 11:45 AM PT – Launch AWS Faster using Automated Landing Zones – Learn how the AWS Landing Zone can automate the set up of best practice baselines when setting up new AWS environments.

June 21, 2018 | 11:00 AM – 11:45 AM PT – Leading Your Team Through a Cloud Transformation – Learn how you can help lead your organization through a cloud transformation.

June 21, 2018 | 01:00 PM – 01:45 PM PT – Enabling New Retail Customer Experiences with Big Data – Learn how AWS can help retailers realize actual value from their big data and deliver on differentiated retail customer experiences.

June 28, 2018 | 01:00 PM – 01:45 PM PT – Fireside Chat: End User Collaboration on AWS – Learn how End User Compute services can help you deliver access to desktops and applications anywhere, anytime, using any device.
IoT

June 27, 2018 | 11:00 AM – 11:45 AM PT – AWS IoT in the Connected Home – Learn how to use AWS IoT to build innovative Connected Home products.

 

Machine Learning

June 19, 2018 | 09:00 AM – 09:45 AM PT – Integrating Amazon SageMaker into your Enterprise – Learn how to integrate Amazon SageMaker and other AWS Services within an Enterprise environment.

June 21, 2018 | 09:00 AM – 09:45 AM PT – Building Text Analytics Applications on AWS using Amazon Comprehend – Learn how you can unlock the value of your unstructured data with NLP-based text analytics.

 

Management Tools

June 20, 2018 | 01:00 PM – 01:45 PM PT – Optimizing Application Performance and Costs with Auto Scaling – Learn how selecting the right scaling option can help optimize application performance and costs.

 

Mobile
June 25, 2018 | 11:00 AM – 11:45 AM PT – Drive User Engagement with Amazon Pinpoint – Learn how Amazon Pinpoint simplifies and streamlines effective user engagement.

 

Security, Identity & Compliance

June 26, 2018 | 09:00 AM – 09:45 AM PT – Understanding AWS Secrets Manager – Learn how AWS Secrets Manager helps you rotate and manage access to secrets centrally.
June 28, 2018 | 09:00 AM – 09:45 AM PT – Using Amazon Inspector to Discover Potential Security Issues – See how Amazon Inspector can be used to discover security issues of your instances.

 

Serverless

June 19, 2018 | 01:00 PM – 01:45 PM PT – Productionize Serverless Application Building and Deployments with AWS SAM – Learn expert tips and techniques for building and deploying serverless applications at scale with AWS SAM.

 

Storage

June 26, 2018 | 11:00 AM – 11:45 AM PT – Deep Dive: Hybrid Cloud Storage with AWS Storage Gateway – Learn how you can reduce your on-premises infrastructure by using the AWS Storage Gateway to connect your applications to the scalable and reliable AWS storage services.
June 27, 2018 | 01:00 PM – 01:45 PM PT – Changing the Game: Extending Compute Capabilities to the Edge – Discover how to change the game for IIoT and edge analytics applications with AWS Snowball Edge plus enhanced Compute instances.
June 28, 2018 | 11:00 AM – 11:45 AM PT – Big Data and Analytics Workloads on Amazon EFS – Get best practices and deployment advice for running big data and analytics workloads on Amazon EFS.

AWS Resources Addressing Argentina’s Personal Data Protection Law and Disposition No. 11/2006

Post Syndicated from Leandro Bennaton original https://aws.amazon.com/blogs/security/aws-and-resources-addressing-argentinas-personal-data-protection-law-and-disposition-no-112006/

We have two new resources to help customers address their data protection requirements in Argentina. These resources specifically address the needs outlined under the Personal Data Protection Law No. 25.326, as supplemented by Regulatory Decree No. 1558/2001 (“PDPL”), including Disposition No. 11/2006. For context, the PDPL is an Argentine federal law that applies to the protection of personal data, including during transfer and processing.

A new webpage focused on data privacy in Argentina features FAQs, helpful links, and whitepapers that provide an overview of PDPL considerations, as well as our security assurance frameworks and international certifications, including ISO 27001, ISO 27017, and ISO 27018. You’ll also find details about our Information Request Report and the high bar of security at AWS data centers.

Additionally, we’ve released a new workbook that offers a detailed mapping as to how customers can operate securely under the Shared Responsibility Model while also aligning with Disposition No. 11/2006. The AWS Disposition 11/2006 Workbook can be downloaded from the Argentina Data Privacy page or directly from this link. Both resources are also available in Spanish from the Privacidad de los datos en Argentina page.

Want more AWS Security news? Follow us on Twitter.

 

When Joe Public Becomes a Commercial Pirate, a Little Knowledge is Dangerous

Post Syndicated from Andy original https://torrentfreak.com/joe-public-becomes-commercial-pirate-little-knowledge-dangerous-180603/

Back in March and just a few hours before the Anthony Joshua v Joseph Parker fight, I got chatting with some fellow fans in the local pub. While some were intending to pay for the fight, others were going down the Kodi route.

Soon after the conversation switched to IPTV. One of the guys had a subscription and he said that his supplier would be along shortly if anyone wanted a package to watch the fight at home. Of course, I was curious to hear what he had to say since it’s not often this kind of thing is offered ‘offline’.

The guy revealed that he sold more or less exclusively on eBay and called up the page on his phone to show me. The listing made interesting reading.

In common with hundreds of similar IPTV subscription offers easily findable on eBay, the listing offered “All the sports and films you need plus VOD and main UK channels” for the sum of just under £60 per year, which is fairly cheap in the current market. With a non-committal “hmmm” I asked a bit more about the guy’s business and surprisingly he was happy to provide some details.

Like many people offering such packages, the guy was a reseller of someone else’s product. He also insisted that selling access to copyrighted content is OK because it sits in a “gray area”. It’s also easy to keep listings up on eBay, he assured me, as long as a few simple rules are adhered to. Right, this should be interesting.

First of all, sellers shouldn’t be “too obvious” he advised, noting that individual channels or channel lists shouldn’t be listed on the site. Fair enough, but then he said the most important thing of all is to have a disclaimer like his in any listing, written as follows:

“PLEASE NOTE EBAY: THIS IS NOT A DE SCRAMBLER SERVICE, I AM NOT SELLING ANY ILLEGAL CHANNELS OR CHANNEL LISTS NOR DO I REPRESENT ANY MEDIA COMPANY NOR HAVE ACCESS TO ANY OF THEIR CONTENTS. NO TRADEMARK HAS BEEN INFRINGED. DO NOT REMOVE LISTING AS IT IS IN ACCORDANCE WITH EBAY POLICIES.”

Apparently, this paragraph is crucial to keeping listings up on eBay and is the equivalent of kryptonite when it comes to deflecting copyright holders, police, and Trading Standards. Sure enough, a few seconds with Google reveals the same wording on dozens of eBay listings and those offering IPTV subscriptions on external platforms.

It is, of course, absolutely worthless but the IPTV seller insisted otherwise, noting he’d sold “thousands” of subscriptions through eBay without any problems. While a similar logic can be applied to garlic and vampires, a second disclaimer found on many other illicit IPTV subscription listings treads an even more bizarre path.

“THE PRODUCTS OFFERED CAN NOT BE USED TO DESCRAMBLE OR OTHERWISE ENABLE ACCESS TO CABLE OR SATELLITE TELEVISION PROGRAMS THAT BYPASSES PAYMENT TO THE SERVICE PROVIDER. RECEIVING SUBSCRIPTION/BASED TV AIRTIME IS ILLEGAL WITHOUT PAYING FOR IT.”

This disclaimer (which apparently no sellers displaying it have ever read) seems to have been culled from the Zgemma site, which advertises a receiving device which can technically receive pirate IPTV services but wasn’t designed for the purpose. In that context, the disclaimer makes sense but when applied to dedicated pirate IPTV subscriptions, it’s absolutely ridiculous.

It’s unclear why so many sellers on eBay, Gumtree, Craigslist and other platforms think that these disclaimers are useful. It leads one to the likely conclusion that these aren’t hardcore pirates at all but regular people simply out to make a bit of extra cash who have received bad advice.

What is clear, however, is that selling access to thousands of otherwise subscription channels without permission from copyright owners is definitely illegal in the EU. The European Court of Justice says so (1,2) and it’s been backed up by subsequent cases in the Netherlands.

While the odds of getting criminally prosecuted or sued for reselling such a service are relatively slim, it’s worrying that in 2018 people still believe that doing so is made legal by the inclusion of a paragraph of text. It’s even more worrying that these individuals apparently have no idea of the serious consequences should they become singled out for legal action.

Even more surprisingly, TorrentFreak spoke with a handful of IPTV suppliers higher up the chain who also told us that what they are doing is legal. A couple claimed to be protected by communication intermediary laws, others didn’t want to go into details. Most stopped responding to emails on the topic. Perhaps most tellingly, none wanted to go on the record.

The big take-home here is that following some important EU rulings, knowingly linking to copyrighted content for profit is nearly always illegal in Europe and leaves people open for targeting by copyright holders and the authorities. People really should be aware of that, especially the little guy making a little extra pocket money on eBay.

Of course, people are perfectly entitled to carry on regardless and test the limits of the law when things go wrong. At this point, however, it’s probably worth noting that IPTV provider Ace Hosting recently handed over £600,000 rather than fight the Premier League (1,2) when they clearly had the money to put up a defense.

Given their effectiveness, perhaps they should’ve put up a disclaimer instead?

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

ISP Questions Impartiality of Judges in Copyright Troll Cases

Post Syndicated from Andy original https://torrentfreak.com/isp-questions-impartiality-of-judges-in-copyright-troll-cases-180602/

Following in the footsteps of similar operations around the world, two years ago the copyright trolling movement landed on Swedish shores.

The pattern was a familiar one, with trolls harvesting IP addresses from BitTorrent swarms and tracing them back to Internet service providers. Then, after presenting evidence to a judge, the trolls obtained orders that compelled ISPs to hand over their customers’ details. From there, the trolls demanded cash payments to make supposed lawsuits disappear.

It’s a controversial business model that rarely receives outside praise. Many ISPs have tried to slow down the flood but most eventually grow tired of battling to protect their customers. The same cannot be said of Swedish ISP Bahnhof.

The ISP, which is also a strong defender of privacy, has become known for fighting back against copyright trolls. Indeed, to thwart them at the very first step, the company deletes IP address logs after just 24 hours, which prevents its customers from being targeted.

Bahnhof says that the copyright business appeared “dirty and corrupt” right from the get go, so it now operates Utpressningskollen.se, a web portal where the ISP publishes data on Swedish legal cases in which copyright owners demand customer data from ISPs through the Patent and Market Courts.

Over the past two years, Bahnhof says it has documented 76 cases of which six are still ongoing, 11 have been waived and a majority 59 have been decided in favor of mainly movie companies. Bahnhof says that when it discovered that 59 out of the 76 cases benefited one party, it felt a need to investigate.

In a detailed report compiled by Bahnhof Communicator Carolina Lindahl and sent to TF, the ISP reveals that it examined the individual decision-makers in the cases before the Courts and found five judges with “questionable impartiality.”

“One of the judges, we can call them Judge 1, has closed 12 of the cases, of which two have been waived and the other 10 have benefitted the copyright owner, mostly movie companies,” Lindahl notes.

“Judge 1 apparently has written several articles in the magazine NIR – Nordiskt Immateriellt Rättsskydd (Nordic Intellectual Property Protection) – which is mainly supported by Svenska Föreningen för Upphovsrätt, the Swedish Association for Copyright (SFU).

“SFU is a member-financed group centered around copyright that publishes articles, hands out scholarships, arranges symposiums, etc. On their website they have a public calendar where Judge 1 appears regularly.”

Bahnhof says that the financiers of the SFU are Sveriges Television AB (Sweden’s national public TV broadcaster), Filmproducenternas Rättsförening (a legally-oriented association for filmproducers), BMG Chrysalis Scandinavia (a media giant) and Fackförbundet för Film och Mediabranschen (a union for the movie and media industry).

“This means that Judge 1 is involved in a copyright association sponsored by the film and media industry, while also judging in copyright cases with the film industry as one of the parties,” the ISP says.

Bahnhof also has criticism for Judge 2, who participated as an event speaker for the Swedish Association for Copyright, and Judge 3, who has written for the SFU-supported magazine NIR. According to Lindahl, Judge 4 worked for a bureau that is partly owned by a board member of SFU, who also defended media companies in a “high-profile” Swedish piracy case.

That leaves Judge 5, who handled 10 of the copyright troll cases documented by Bahnhof, waiving one and deciding the remaining nine in favor of a movie company plaintiff.

“Judge 5 has been questioned before and even been accused of bias while judging a high-profile piracy case almost ten years ago. The accusations of bias were motivated by the judge’s membership of SFU and the Swedish Association for Intellectual Property Rights (SFIR), an association with several important individuals of the Swedish copyright community as members, who all defend, represent, or sympathize with the media industry,” Lindahl says.

Bahnhof hasn’t named any of the judges nor has it provided additional details on the “high-profile” case. However, anyone who remembers the infamous trial of ‘The Pirate Bay Four’ a decade ago might recall complaints from the defense (1,2,3) that several judges involved in the case were members of pro-copyright groups.

While there were plenty of calls to consider them biased, in May 2010 the Supreme Court ruled otherwise, a fact Bahnhof recognizes.

“Judge 5 was never sentenced for bias by the court, but regardless of the court’s decision this is still a judge who shares values and has personal connections with [the media industry], and as if that weren’t enough, the judge has induced an additional financial aspect by participating in events paid for by said party,” Lindahl writes.

“The judge has parties and interest holders in their personal network, a private engagement in the subject and a financial connection to one party – textbook characteristics of bias which would make anyone suspicious.”

The decision-makers of the Patent and Market Court and their relations.

The ISP notes that all five judges have connections to the media industry in the cases they judge, which isn’t a great starting point for returning “objective and impartial” results. In its summary, however, the ISP is scathing of the overall system, one in which court cases “almost looked rigged” and appear to be decided in favor of the movie company even before reaching court.

In general, however, Bahnhof says that the processes show a lack of individual attention, such as the court blindly accepting questionable IP address evidence supplied by infamous anti-piracy outfit MaverickEye.

“The court never bothers to control the media company’s only evidence (lists generated by MaverickMonitor, which has proven to be unreliable software), the court documents contain several typos of varying severity, and the same standard texts are reused in several different cases,” the ISP says.

“The court documents show a lack of care and control, something that can easily be taken advantage of by individuals with shady motives. The findings and discoveries of this investigation are strengthened by the pure numbers mentioned in the beginning which clearly show how one party almost always wins.

“If this is caused by bias, cheating, partiality, bribes, political agenda, conspiracy or pure coincidence we can’t say for sure, but the fact that this process has mainly generated money for the film industry, while citizens have been robbed of their personal integrity and legal certainty, indicates what forces lie behind this machinery,” Bahnhof’s Lindahl concludes.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Some quick thoughts on the public discussion regarding facial recognition and Amazon Rekognition this past week

Post Syndicated from Dr. Matt Wood original https://aws.amazon.com/blogs/aws/some-quick-thoughts-on-the-public-discussion-regarding-facial-recognition-and-amazon-rekognition-this-past-week/

We have seen a lot of discussion this past week about the role of Amazon Rekognition in facial recognition, surveillance, and civil liberties, and we wanted to share some thoughts.

Amazon Rekognition is a service we announced in 2016. It makes use of new technologies – such as deep learning – and puts them in the hands of developers in an easy-to-use, low-cost way. Since then, we have seen customers use the image and video analysis capabilities of Amazon Rekognition in ways that materially benefit both society (e.g. preventing human trafficking, inhibiting child exploitation, reuniting missing children with their families, and building educational apps for children), and organizations (enhancing security through multi-factor authentication, finding images more easily, or preventing package theft). Amazon Web Services (AWS) is not the only provider of services like these, and we remain excited about how image and video analysis can be a driver for good in the world, including in the public sector and law enforcement.

There have always been and will always be risks with new technology capabilities. Each organization choosing to employ technology must act responsibly or risk legal penalties and public condemnation. AWS takes its responsibilities seriously. But we believe it is the wrong approach to impose a ban on promising new technologies because they might be used by bad actors for nefarious purposes in the future. The world would be a very different place if we had restricted people from buying computers because it was possible to use that computer to do harm. The same can be said of thousands of technologies upon which we all rely each day. Through responsible use, the benefits have far outweighed the risks.

Customers are off to a great start with Amazon Rekognition; the evidence of the positive impact this new technology can provide is strong (and growing by the week), and we’re excited to continue to support our customers in its responsible use.

-Dr. Matt Wood, general manager of artificial intelligence at AWS

Amazon SageMaker Updates – Tokyo Region, CloudFormation, Chainer, and GreenGrass ML

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/sagemaker-tokyo-summit-2018/

Today, at the AWS Summit in Tokyo, we announced a number of updates and new features for Amazon SageMaker. Starting today, SageMaker is available in Asia Pacific (Tokyo)! SageMaker also now supports CloudFormation. A new machine learning framework, Chainer, is now available in the SageMaker Python SDK, in addition to MXNet and TensorFlow. Finally, support for running Chainer models on several devices was added to AWS Greengrass Machine Learning.

Amazon SageMaker Chainer Estimator


Chainer is a popular, flexible, and intuitive deep learning framework. Chainer networks work on a “Define-by-Run” scheme, where the network topology is defined dynamically via forward computation. This is in contrast to many other frameworks which work on a “Define-and-Run” scheme where the topology of the network is defined separately from the data. A lot of developers enjoy the Chainer scheme since it allows them to write their networks with native python constructs and tools.

Luckily, using Chainer with SageMaker is just as easy as using a TensorFlow or MXNet estimator. In fact, it might even be a bit easier, since it’s likely you can take your existing scripts and use them to train on SageMaker with very few modifications. With TensorFlow or MXNet, users have to implement a train function with a particular signature. With Chainer, your scripts can be a little more portable, as you can simply read from a few environment variables like SM_MODEL_DIR, SM_NUM_GPUS, and others. We can wrap our existing script in an if __name__ == '__main__': guard and invoke it locally or on SageMaker.


import argparse
import os

if __name__ =='__main__':

    parser = argparse.ArgumentParser()

    # hyperparameters sent by the client are passed as command-line arguments to the script.
    parser.add_argument('--epochs', type=int, default=10)
    parser.add_argument('--batch-size', type=int, default=64)
    parser.add_argument('--learning-rate', type=float, default=0.05)

    # Data, model, and output directories
    parser.add_argument('--output-data-dir', type=str, default=os.environ['SM_OUTPUT_DATA_DIR'])
    parser.add_argument('--model-dir', type=str, default=os.environ['SM_MODEL_DIR'])
    parser.add_argument('--train', type=str, default=os.environ['SM_CHANNEL_TRAIN'])
    parser.add_argument('--test', type=str, default=os.environ['SM_CHANNEL_TEST'])

    args, _ = parser.parse_known_args()

    # ... load from args.train and args.test, train a model, write model to args.model_dir.

Then, we can run that script locally or use the SageMaker Python SDK to launch it on some GPU instances in SageMaker. The hyperparameters get passed to the script as command-line arguments, and the environment variables above are auto-populated. When we call fit, the input channels we pass are populated in the SM_CHANNEL_* environment variables.


from sagemaker.chainer.estimator import Chainer
# Create my estimator
chainer_estimator = Chainer(
    entry_point='example.py',
    train_instance_count=1,
    train_instance_type='ml.p3.2xlarge',
    hyperparameters={'epochs': 10, 'batch-size': 64}
)
# Train my estimator
chainer_estimator.fit({'train': train_input, 'test': test_input})

# Deploy my estimator to a SageMaker Endpoint and get a Predictor
predictor = chainer_estimator.deploy(
    instance_type="ml.m4.xlarge",
    initial_instance_count=1
)

Now, instead of bringing your own docker container for training and hosting with Chainer, you can just maintain your script. You can see the full sagemaker-chainer-containers on github. One of my favorite features of the new container is built-in chainermn for easy multi-node distribution of your chainer training jobs.

There’s a lot more documentation and information available in both the README and the example notebooks.

AWS GreenGrass ML with Chainer

AWS GreenGrass ML now includes a pre-built Chainer package for all devices powered by Intel Atom, NVIDIA Jetson TX2, and Raspberry Pi. So, now GreenGrass ML provides pre-built packages for TensorFlow, Apache MXNet, and Chainer! You can train your models on SageMaker and then easily deploy them to any GreenGrass-enabled device using GreenGrass ML.

JAWS UG

I want to give a quick shout out to all of our wonderful and inspirational friends in the JAWS UG who attended the AWS Summit in Tokyo today. I’ve very much enjoyed seeing your pictures of the summit. Thanks for making Japan an amazing place for AWS developers! I can’t wait to visit again and meet with all of you.

Randall

New – Pay-per-Session Pricing for Amazon QuickSight, Another Region, and Lots More

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-pay-per-session-pricing-for-amazon-quicksight-another-region-and-lots-more/

Amazon QuickSight is a fully managed cloud business intelligence system that gives you Fast & Easy to Use Business Analytics for Big Data. QuickSight makes business analytics available to organizations of all shapes and sizes, with the ability to access data that is stored in your Amazon Redshift data warehouse, your Amazon Relational Database Service (RDS) relational databases, flat files in S3, and (via connectors) data stored in on-premises MySQL, PostgreSQL, and SQL Server databases. QuickSight scales to accommodate tens, hundreds, or thousands of users per organization.

Today we are launching a new, session-based pricing option for QuickSight, along with additional region support and other important new features. Let’s take a look at each one:

Pay-per-Session Pricing
Our customers are making great use of QuickSight and take full advantage of the power it gives them to connect to data sources, create reports, and explore visualizations.

However, not everyone in an organization needs or wants such powerful authoring capabilities. For many users, having access to curated data in dashboards, and being able to interact with that data by drilling down, filtering, or slicing-and-dicing, is more than adequate. Subscribing them to a monthly or annual plan can be seen as an unwarranted expense, so many of these casual users end up without access to interactive data or BI.

In order to allow customers to provide all of their users with interactive dashboards and reports, the Enterprise Edition of Amazon QuickSight now allows Reader access to dashboards on a Pay-per-Session basis. QuickSight users are now classified as Admins, Authors, or Readers, with distinct capabilities and prices:

Authors have access to the full power of QuickSight; they can establish database connections, upload new data, create ad hoc visualizations, and publish dashboards, all for $9 per month (Standard Edition) or $18 per month (Enterprise Edition).

Readers can view dashboards, slice and dice data using drill downs, filters and on-screen controls, and download data in CSV format, all within the secure QuickSight environment. Readers pay $0.30 for 30 minutes of access, with a monthly maximum of $5 per reader.

Admins have all authoring capabilities, and can manage users and purchase SPICE capacity in the account. The QuickSight admin now has the ability to set the desired option (Author or Reader) when they invite members of their organization to use QuickSight. They can extend Reader invites to their entire user base without incurring any up-front or monthly costs, paying only for the actual usage.
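As a rough worked example of the Reader pricing (assuming each 30-minute block of access is billed as one $0.30 session): a reader who uses 12 sessions in a month is billed 12 × $0.30 = $3.60, while a reader who uses 17 or more sessions simply hits the monthly cap, since 17 × $0.30 = $5.10 is capped at $5.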

To learn more, visit the QuickSight Pricing page.

A New Region
QuickSight is now available in the Asia Pacific (Tokyo) Region.

The UI is in English, with a localized version in the works.

Hourly Data Refresh
Enterprise Edition SPICE data sets can now be set to refresh as frequently as every hour. In the past, each data set could be refreshed up to 5 times a day. To learn more, read Refreshing Imported Data.

Access to Data in Private VPCs
This feature was launched in preview form late last year, and is now available in production form to users of the Enterprise Edition. As I noted at the time, you can use it to implement secure, private communication with data sources that do not have public connectivity, including on-premises data in Teradata or SQL Server, accessed over an AWS Direct Connect link. To learn more, read Working with AWS VPC.

Parameters with On-Screen Controls
QuickSight dashboards can now include parameters that are set using on-screen dropdown, text box, numeric slider, or date picker controls. The default value for each parameter can be set based on the user name (QuickSight calls this a dynamic default). You could, for example, set an appropriate default based on each user’s office location, department, or sales territory.

To learn more, read about Parameters in QuickSight.

URL Actions for Linked Dashboards
You can now connect your QuickSight dashboards to external applications by defining URL actions on visuals. The actions can include parameters, and they become available in the Details menu for the visual.

You can use this feature to link QuickSight dashboards to third party applications (e.g. Salesforce) or to your own internal applications. Read Custom URL Actions to learn how to use this feature.

Dashboard Sharing
You can now share QuickSight dashboards with all of the users in an account.

Larger SPICE Tables
The per-data set limit for SPICE tables has been raised from 10 GB to 25 GB.

Upgrade to Enterprise Edition
The QuickSight administrator can now upgrade an account from Standard Edition to Enterprise Edition with a click. This enables provisioning of Readers with pay-per-session pricing, private VPC access, row-level security for dashboards and data sets, and hourly refresh of data sets. Enterprise Edition pricing applies after the upgrade.

Available Now
Everything I listed above is available now and you can start using it today!

You can try QuickSight for 60 days at no charge, and you can also attend our June 20th Webinar.

Jeff;


Amazon Neptune Generally Available

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/amazon-neptune-generally-available/

Amazon Neptune is now generally available in US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Ireland). Amazon Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets. At the core of Neptune is a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with millisecond latencies. Neptune supports two popular graph models, Property Graph and RDF, through Apache TinkerPop Gremlin and SPARQL, allowing you to easily build queries that efficiently navigate highly connected datasets. Neptune can be used to power everything from recommendation engines and knowledge graphs to drug discovery and network security. Neptune is fully managed, with automatic minor version upgrades, backups, encryption, and failover. I wrote about Neptune in detail for AWS re:Invent last year, and customers have been using the preview and providing great feedback that the team has used to prepare the service for GA.
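Once a cluster is up, querying the Property Graph model is just a Gremlin traversal against the cluster endpoint. Here’s a minimal sketch using the open-source gremlinpython client (the client choice and the endpoint below are illustrative assumptions, not anything the service mandates):

from gremlin_python.structure.graph import Graph
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# Placeholder endpoint; substitute your cluster endpoint (port 8182 is the default).
endpoint = 'wss://your-neptune-cluster.cluster-xxxxxxxxxxxx.us-east-1.neptune.amazonaws.com:8182/gremlin'

conn = DriverRemoteConnection(endpoint, 'g')
g = Graph().traversal().withRemote(conn)

# Add a vertex, then read it back.
g.addV('person').property('name', 'alice').next()
print(g.V().hasLabel('person').values('name').toList())

conn.close()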

Now that Amazon Neptune is generally available, there are a few changes from the preview.

Launching an Amazon Neptune Cluster

Launching a Neptune cluster is as easy as navigating to the AWS Management Console and clicking create cluster. Of course, you can also launch with CloudFormation, the CLI, or the SDKs.
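If you’d rather script it, a rough sketch of the SDK route might look like this (using boto3’s neptune client; the identifiers and instance class below are placeholder choices, not requirements):

import boto3

neptune = boto3.client('neptune')

# Create the cluster, then add a primary instance to it.
# The identifiers and the db.r4.large instance class are placeholders.
neptune.create_db_cluster(
    DBClusterIdentifier='my-neptune-cluster',
    Engine='neptune'
)
neptune.create_db_instance(
    DBInstanceIdentifier='my-neptune-instance',
    DBInstanceClass='db.r4.large',
    Engine='neptune',
    DBClusterIdentifier='my-neptune-cluster'
)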

You can monitor your cluster health and the health of individual instances through Amazon CloudWatch and the console.

Additional Resources

We’ve created two repos with some additional tools and examples, listed below. You can expect continued development on both repos as we add more tools and examples over time.

  • Amazon Neptune Tools Repo
    This repo has a useful tool for converting GraphML files into Neptune-compatible CSVs for bulk loading from S3 (a sketch of the target CSV format follows this list).
  • Amazon Neptune Samples Repo
    This repo has a really cool example of building a collaborative filtering recommendation engine for video game preferences.
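To give a feel for what the converter targets, here’s a minimal sketch of the Gremlin bulk-load CSV layout, written out from Python (the column conventions follow the Neptune bulk-load format; the file names and data values are made up):

import csv

# Vertex files need ~id and ~label columns, plus typed property columns like name:String.
with open('vertices.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['~id', '~label', 'name:String'])
    writer.writerow(['v1', 'person', 'alice'])
    writer.writerow(['v2', 'game', 'spacelander'])

# Edge files need ~id, ~from, ~to, and ~label columns.
with open('edges.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['~id', '~from', '~to', '~label'])
    writer.writerow(['e1', 'v1', 'v2', 'likes'])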

Purpose Built Databases

There’s an industry trend toward purpose-built databases. Developers and businesses want to access their data in the format that makes the most sense for their applications. As cloud resources make transforming large datasets easier with tools like AWS Glue, we have a lot more options than we used to for accessing our data. With tools like Amazon Redshift, Amazon Athena, Amazon Aurora, Amazon DynamoDB, and more, we get to choose the best database for the job, or even enable entirely new use cases. Amazon Neptune is perfect for workloads where the data is highly connected across data-rich edges.

I’m really excited about graph databases, and I see a huge number of applications. Looking for ideas for cool things to build? I’d love to build a web crawler in AWS Lambda that uses Neptune as the backing store. You could further enrich it by running Amazon Comprehend or Amazon Rekognition on the text and images it finds, and creating a search engine on top of Neptune.

As always, feel free to reach out in the comments or on Twitter to provide any feedback!

Randall