Tag Archives: announcements

Announcing the AWS Security and Privacy Knowledge Hub for Australia and New Zealand

Post Syndicated from Phil Rodrigues original https://aws.amazon.com/blogs/security/announcing-the-aws-security-and-privacy-knowledge-hub-for-australia-and-new-zealand/

Cloud technology provides organizations across Australia and New Zealand with the flexibility to adapt quickly and scale their digital presences up or down in response to consumer demand. In 2021 and beyond, we expect to see cloud adoption continue to accelerate as organizations of all sizes realize the agility, operational, and financial benefits of moving to the cloud.

To fully harness the benefits of the digital economy, it’s important that you remain vigilant about the security of your technology resources in order to protect the confidentiality, integrity, and availability of your systems and data. Security is our top priority at AWS, and more than ever we believe it’s critical for everyone to understand the best practices for using cloud technology securely. Organizations of all sizes can benefit from implementing automated guardrails that allow them to innovate while maintaining the highest security standards. We want to help you move fast and innovate while staying secure.

This is why we are excited to announce the new AWS Security and Privacy Knowledge Hub for Australia and New Zealand.

The new website offers many resources specific to Australia and New Zealand, including:

  • The latest local security and privacy updates from AWS security experts in Australia and New Zealand.
  • How customers can use AWS to help meet the requirements of local privacy laws, government security standards, and banking security guidance.
  • Local customer stories about Australian and New Zealand companies and agencies that focus on security, privacy, and compliance.
  • Details about AWS infrastructure in Australia and New Zealand, including the upcoming AWS Region in Melbourne.
  • General FAQs on security and privacy in the cloud.

AWS maintains the highest security and privacy practices, which is one reason we are trusted by governments and organizations around the world to deliver services to millions of individuals. In Australia and New Zealand, we have hundreds of thousands of active customers using AWS each month, many of them building mission-critical applications for their business. For example, National Australia Bank (NAB) provides banking platforms like NAB Connect that offer services to businesses of all sizes, built on AWS. The Australian Taxation Office (ATO) offers the flexibility and speed for all Australians to lodge their tax returns electronically through the myTax application, built on AWS. The University of Auckland runs critical teaching and learning applications relied on by its 18,000 students around the world, built on AWS. AWS Partner Versent helps businesses like Transurban and government agencies like Service NSW operate securely in the cloud, with solutions built on AWS.

Security is a shared responsibility between AWS and our customers. You should review the security features that we provide with our services, and be familiar with how to implement your security requirements within your AWS environment. To help you with your responsibility, we offer security services and partner solutions that you can utilize to implement automated and effective security in the cloud. This allows you to focus on your business while keeping your content and applications secure.

We’re inspired by the rapid rate of innovation as customers of all sizes use the cloud to create new business models and work to improve our communities, now and into the future. We look forward to seeing what you will build next on AWS – with security as your top priority.

The AWS Security and Privacy Knowledge Hub for Australia and New Zealand launched today.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Phil Rodrigues

Phil is the Head of the Security Team, Australia & New Zealand for AWS, based in Sydney. He and his team work with AWS’s largest customers to improve their security, risk and compliance in the cloud. Phil is a frequent speaker at AWS and cloud security events across Australia. Prior to AWS he worked for over 20 years in Information Security in the US, Europe, and Asia-Pacific.

Amazon SageMaker Named as the Outright Leader in Enterprise MLOps Platforms

Post Syndicated from Julien Simon original https://aws.amazon.com/blogs/aws/amazon-sagemaker-named-as-the-outright-leader-in-enterprise-mlops-platforms/

Over the last few years, Machine Learning (ML) has proven its worth in helping organizations increase efficiency and foster innovation. As ML matures, the focus naturally shifts from experimentation to production. ML processes need to be streamlined, standardized, and automated to build, train, deploy, and manage models in a consistent and reliable way. Perennial IT concerns such as security, high availability, scaling, monitoring, and automation also become critical. Great ML models are not going to do much good if they can’t serve fast and accurate predictions to business applications, 24/7 and at any scale.

In November 2017, we launched Amazon SageMaker to help ML Engineers and Data Scientists not only build the best models, but also operate them efficiently. Striving to give our customers the most comprehensive service, we’ve since then added hundreds of features covering every step of the ML lifecycle, such as data labeling, data preparation, feature engineering, bias detection, AutoML, training, tuning, hosting, explainability, monitoring, and automation. We’ve also integrated these features in our web-based development environment, Amazon SageMaker Studio.

Thanks to the extensive ML capabilities available in SageMaker, tens of thousands of AWS customers across all industry segments have adopted ML to accelerate business processes, create innovative user experiences, improve revenue, and reduce costs. Examples include Engie (energy), Deliveroo (food delivery), SNCF (railways), Nerdwallet (financial services), Autodesk (computer-aided design), Formula 1 (auto racing), as well as our very own Amazon Fulfillment Technologies and Amazon Robotics.

Today, we’re happy to announce that in his latest report on Enterprise MLOps Platforms, Bradley Shimmin, Chief Analyst at Omdia, paid SageMaker this compliment: “AWS is the outright leader in the Omdia comparative review of enterprise MLOps platforms. Across almost every measure, the company significantly outscored its rivals, delivering consistent value across the entire ML lifecycle. AWS delivers highly differentiated functionality that targets highly impactful areas of concern for enterprise AI practitioners seeking to not just operationalize but also scale AI across the business.”

OMDIA

You can download the full report to learn more.

Getting Started
Curious about Amazon SageMaker? The developer guide will show you how to set it up and start running your notebooks in minutes.
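
If you’d rather start from code, here is a minimal sketch using the SageMaker Python SDK (v2) to train the built-in XGBoost algorithm. The execution role ARN and S3 paths are placeholders to replace with your own, and the hyperparameters are only an illustration.

# A minimal sketch: train the built-in XGBoost algorithm with the SageMaker
# Python SDK (v2). The role ARN and S3 paths are placeholders.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/MySageMakerRole"  # hypothetical execution role

# Resolve the managed XGBoost container image for the current Region
image_uri = sagemaker.image_uris.retrieve(
    framework="xgboost", region=session.boto_region_name, version="1.2-1"
)

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/sagemaker/output/",  # placeholder bucket
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="binary:logistic", num_round=100)

# Train on CSV data staged in S3 (for built-in XGBoost, the label is the first column)
estimator.fit({"train": TrainingInput("s3://my-bucket/sagemaker/train.csv", content_type="text/csv")})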

As always, we look forward to your feedback. You can send it through your usual AWS Support contacts or post it on the AWS Forum for Amazon SageMaker.

– Julien

Introducing the newest AWS Heroes – June, 2021

Post Syndicated from Ross Barich original https://aws.amazon.com/blogs/aws/introducing-the-newest-aws-heroes-june-2021/

We at AWS continue to be impressed by the passion AWS enthusiasts have for knowledge sharing and supporting peer-to-peer learning in tech communities. A select few of the most influential and active community leaders in the world, who truly go above and beyond to create content and help others build better & faster on AWS, are recognized as AWS Heroes.

Today we are thrilled to introduce the newest AWS Heroes, including the first Heroes based in Perú and Ukraine:

Anahit Pogosova – Tampere, Finland

Data Hero Anahit Pogosova is a Lead Cloud Software Engineer at Solita. She has been architecting and building software solutions with various customers for over a decade. Anahit started working with monolithic on-prem software, but has since moved all the way to the cloud, nowadays focusing mostly on AWS data and serverless services. She has been particularly interested in the Amazon Kinesis family and how it integrates with AWS Lambda. You can find Anahit speaking at various local and international events, such as AWS meetups, AWS Community Days, ServerlessDays, and Code Mesh. She also writes about AWS on the Solita developers’ blog and has been a frequent guest on various podcasts.

Anurag Kale – Gothenburg, Sweden

Data Hero Anurag Kale is a Cloud Consultant at Cybercom Group. He has been using AWS professionally since 2017 and holds the AWS Solutions Architect – Associate certification. He is a co-organizer of the AWS User Group Pune, helping host and organize AWS Community Day Pune 2020 and AWS Community Day India 2020 – Virtual Edition. Anurag’s areas of interest include Amazon DynamoDB, relational databases, serverless data pipelines, data analytics, Infrastructure as Code, and sustainable cloud solutions. He is an active advocate of DynamoDB and Amazon Aurora, and has spoken at various national and international events such as AWS Community Day Nordics 2020 and various AWS Meetups.

Arseny Zinchenko – Kiev, Ukraine

Container Hero Arseny Zinchenko has over 15 years in IT, and currently works as a DevOps Team Lead and Data Security Officer at BetterMe Inc., a leading health & fitness mobile publisher. Since 2011 Arseny has used his blog to share expertise about DevOps, system administration, containerization, and cloud computing. Currently he is focused primarily on Amazon Elastic Kubernetes Service (EKS) and security solutions provided by AWS. He is a member of the biggest Ukrainian DevOps community, UkrOps, where he helps others build their best with AWS and containers, and helps organizations implement DevOps methodology using AWS CloudFormation and managed services like Amazon RDS, Amazon Aurora, and EKS.

Azmi Mengü – Istanbul, Turkey

Community Hero Azmi Mengü is a Sr. Software Engineer on the Infrastructure Team at Armut / HomeRun. He has over 5 years of AWS cloud development experience and has expertise in serverless, containers, data, and storage services. Since 2019, Azmi has been on the organizing committee of the Cloud and Serverless Turkey community. He co-organized and acted as a speaker at over 50 physical and online events during this time. He actively writes blog posts about developing serverless, container, and IaC technologies on AWS. Azmi also co-organized the first-ever ServerlessDays Istanbul in Turkey and AWS Community Day Turkey events.

Carlos Cortez – Lima, Perú

Community Hero Carlos Cortez is the founder and leader of the AWS User Group Perú and Founder and CTO of CENNTI, which helps Peruvian companies in their difficult journey to the cloud and with the development of Machine Learning solutions. The two biggest AWS events in Perú, AWS Community Day Lima 2019 and the AWS UG Perú Conference in 2021, were organized by Carlos. He is the owner of the first AWS podcasts in Latin America, “Imperio Cloud” and “Al día con AWS”. He loves to create content for emerging technologies, which is why he created DeepFridays to educate people about Reinforcement Learning.

Chris Miller – Santa Cruz, USA

Machine Learning Hero Chris Miller is an entrepreneur, inventor, and CEO of Cloud Brigade. After winning the 2019 AWS DeepRacer Summit race in Santa Clara, he founded the Santa Cruz DeepRacer Meetup group. Chris has worked with AWS AI/ML product teams with DeepLens and DeepRacer on projects including The Poopinator, and What’s in my Fridge. He prides himself on being a technical founder with experience across a broad range of disciplines, which has led to a lot of crazy projects in competitions and hackathons, such as an automated beer brewery, animatronic ventriloquist dummy, and his team even won a Cardboard Boat Race!

Gert Leenders – Brussels, Belgium

DevTools Hero Gert Leenders started his career as a developer in 2001. Eight years ago, his focus shifted entirely towards AWS. Today, he’s an AWS Cloud Solution Architect helping teams build and deploy cloud-native applications and manage their cloud infrastructure. On his blog, Gert emphasizes hidden gems in AWS developer tools and day-to-day topics for cloud engineers like logging, debugging, error handling and Infrastructure as Code. He also often shares code on GitHub.

Lei Wu – Beijing, China

Machine Learning Hero Lei Wu is head of the machine learning team at FreeWheel. He enjoys sharing technology with others, and he publishes many Chinese-language tech blog posts on InfoQ, covering machine learning, big data, and distributed computing systems. Lei works hard to promote deep learning adoption with AWS services wherever he can, including talks at Spark Summit China, the World Artificial Intelligence Conference, AWS Innovate AI/ML edition, and AWS re:Invent, where he shared FreeWheel’s best practices on deep learning with Amazon SageMaker.

Hidetoshi Matsui – Hamamatsu, Japan

Serverless Hero Hidetoshi Matsui is a developer at Startup Technology Inc. and a member of the Japan AWS User Group (JAWS-UG). On “builders.flash,” a web magazine for developers run by AWS Japan, the articles he has contributed are among the most viewed pages on the site since 2020. His most impactful achievement is the construction of a distribution site for JAWS-UG’s largest event, JAWS DAYS 2021 re:Connect. He made full use of various AWS services to build a low-latency and scalable distribution system with a serverless architecture and smooth streaming video viewing for nearly 4000 participants.

Philipp Schmid – Nuremberg, Germany

Machine Learning Hero Philipp Schmid is a Machine Learning & Tech Lead at Hugging Face, working to democratize artificial intelligence through open source and open science. He has extensive experience in deep learning and in deploying NLP models into production using AWS Lambda, and is an avid advocate of using Amazon SageMaker to simplify machine learning, with guides such as “Distributed Training: Train BART/T5 for Summarization using Transformers and Amazon SageMaker.” He loves to share his knowledge on AI and NLP at various meetups such as Data Science on AWS, and on his technical blog.

Simone Merlini – Milan, Italy

Community Hero Simone Merlini is CEO and CTO at beSharp. In 2012 he co-founded the first AWS User Group in Italy, and he’s currently the organizer of the AWS User Group Milan. He’s also actively involved in the development of Leapp, an open-source project for managing and securing cloud access in multi-account environments. Simone is also the editor in chief and a writer for Proud2beCloud, a blog aimed at sharing highly specialized AWS knowledge to enable the adoption of cloud technologies.

Virginie Mathivet – Lyon, France

Machine Learning Hero Virginie Mathivet has been leading the DataSquad team at TeamWork since 2017, focused on Data and Artificial Intelligence. Their purpose is to make the most of their clients’ data, via Data Science or Data Engineering / Big Data, mainly on AWS. Virginie regularly participates in conferences and writes books and articles, both for the public (introduction to AI) and for an informed audience (technical subjects). She also campaigns for a better visibility of women in the digital industry and for diversity in the data professions. Her favorite cloud service? Amazon SageMaker of course!

Walid A. Shaari – Dhahran, Saudi Arabia

Container Hero Walid A. Shaari is the community lead for the Dammam Cloud-native AWS User Group, working closely with CNCF ambassadors, K8saraby, and AWS MENA community leaders to enable knowledge sharing, collaboration, and networking. He helped organize the first AWS Community Day – MENA 2020. Walid also maintains GitHub content for Certified Kubernetes Administrators (CKA) and Certified Kubernetes Security Specialists (CKS), and holds several active professional certifications: AWS Certified Solutions Architect – Associate, Certified Kubernetes Administrator, Certified Kubernetes Application Developer, Red Hat Certified Architect Level IV, and more.

If you’d like to learn more about the new Heroes, or connect with a Hero near you, please visit the AWS Hero website.

Ross;

Amazon Location Service Is Now Generally Available with New Routing and Satellite Imagery Capabilities

Post Syndicated from Marcia Villalba original https://aws.amazon.com/blogs/aws/amazon-location-service-is-now-generally-available-with-new-routing-and-satellite-imagery-capabilities/

In December of 2020, we made Amazon Location Service available in preview form for you to start building web and mobile applications with location-based features. Today I’m pleased to announce that we are making Amazon Location generally available along with two new features: routing and satellite imagery.

I have been a full-stack developer for over 15 years. On multiple occasions, I was tasked with creating location-based applications. The biggest challenges I faced when I worked with location providers were integrating the applications into the existing application backend and frontend and keeping the data shared with the location provider secure. When Amazon Location was made available in preview last year, I was so excited. This service makes it possible to build location-based applications with a native integration with AWS services. It uses trusted location providers like Esri and HERE and customers remain in control of their data.

Amazon Location includes the following features:

  • Maps to visualize location information.
  • Places to enable your application to offer point-of-interest search functionality, convert addresses into geographic coordinates in latitude and longitude (geocoding), and convert a coordinate into a street address (reverse geocoding).
  • Routes to use driving distance, directions, and estimated arrival time in your application.
  • Trackers to allow you to retrieve the current and historical location of the devices running your tracking-enabled application.
  • Geofences to give your application the ability to detect and act when a tracked device enters or exits a geographical boundary you define as a geofence. When a breach of the geofence is detected, Amazon Location will send an event to Amazon EventBridge, which can trigger a downstream set of actions, like invoking an AWS Lambda function or sending a notification using Amazon Simple Notification Service (SNS); a sketch of this integration follows the list. This level of integration with AWS services is one of the most powerful features of Amazon Location. It will help shorten your application’s time to production.
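
As an illustration of the EventBridge integration described in the Geofences bullet, here is a sketch that routes Amazon Location geofence events to an existing SNS topic using boto3. The topic ARN and rule name are placeholders, and the “aws.geo” source and “Location Geofence Event” detail-type are the values Amazon Location is expected to emit; check the service documentation before relying on them.

# A sketch: forward Amazon Location geofence events from EventBridge to an
# existing SNS topic. The topic ARN and rule name are placeholders, and the
# event source/detail-type values are assumptions to verify in the docs.
import json
import boto3

events = boto3.client("events")
topic_arn = "arn:aws:sns:us-east-1:123456789012:geofence-alerts"  # hypothetical topic

events.put_rule(
    Name="geofence-breach-rule",
    EventPattern=json.dumps({
        "source": ["aws.geo"],
        "detail-type": ["Location Geofence Event"],
    }),
    State="ENABLED",
)

events.put_targets(
    Rule="geofence-breach-rule",
    Targets=[{"Id": "notify-sns", "Arn": topic_arn}],
)
# Note: the SNS topic's access policy must allow EventBridge to publish to it.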

In the preview announcement blog post, Jeff introduced the service functionality in a lot of detail. In this blog post, I want to focus on the two new features: satellite imagery and routing.

Satellite Imagery

You can use satellite imagery to pack your maps with information and provide more context to the map users. It helps the map users answer questions like “Is there a swamp in that area?” or “What does that building look like?”

To get started with satellite imagery maps, go to the Amazon Location console. On Create a new map, choose Esri Imagery. 

Creating a new map with satellite imagery
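
If you prefer to create the map resource programmatically, the following sketch uses boto3. The map name is a placeholder, and the RasterEsriImagery style name and RequestBasedUsage pricing plan are assumptions corresponding to the Esri Imagery option shown in the console.

# A sketch: create an Amazon Location map resource backed by Esri satellite imagery.
import boto3

location = boto3.client("location")

location.create_map(
    MapName="my-satellite-map",                    # placeholder name
    Configuration={"Style": "RasterEsriImagery"},  # assumed style name for Esri Imagery
    PricingPlan="RequestBasedUsage",
)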

Routing
With Amazon Location Routes, your application can request the travel time, distance, and directions between two locations. This makes it possible for your application users to obtain accurate travel-time estimates based on live road and traffic information.

If you provide extra attributes when you use the route feature, you can get more tailored information, including:

  • Waypoints: You can provide a list of ordered intermediate positions to be reached on the route. You can have up to 25 stopover points including the departure and destination.
  • Departure time: When you specify the departure time for this route, you will receive a result optimized for the traffic conditions at that time.
  • Travel mode: The mode of travel you specify affects the speed and the road compatibility, because not all vehicles can travel on all roads. The available travel modes are car, truck, and walking. Depending on which travel mode you select, there are parameters that you can tune. For example, for car and truck, you can specify whether you want a route without ferries or tolls. The most interesting results come when you choose the truck travel mode: you can define the truck dimensions and weight and then get a route that is optimized for these parameters (see the sketch below). No more trucks stuck under bridges!
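
As an example of the truck options mentioned in the last bullet, here is a sketch of a route calculation made with the AWS SDK for Python (Boto3). The calculator name, coordinates, and truck attributes are placeholder values.

# A sketch: calculate a truck route with Amazon Location, taking vehicle
# dimensions and weight into account. All values below are placeholders.
import boto3

location = boto3.client("location")

response = location.calculate_route(
    CalculatorName="MyExampleCalculator",
    DeparturePosition=[-123.1376951951309, 49.234371474778385],    # [longitude, latitude]
    DestinationPosition=[-122.83301379875074, 49.235860182576886],
    TravelMode="Truck",
    TruckModeOptions={
        "AvoidFerries": True,
        "AvoidTolls": False,
        "Dimensions": {"Height": 4.0, "Length": 15.0, "Width": 2.5, "Unit": "Meters"},
        "Weight": {"Total": 20000.0, "Unit": "Kilograms"},
    },
)

print(response["Summary"]["Distance"], response["Summary"]["DistanceUnit"])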

Amazon Location Service and its features can be used for interesting use cases with low effort. For example, delivery companies using Amazon Location can optimize the order of their deliveries, monitor the position of their delivery vehicles, and inform customers when a vehicle is arriving. Amazon Location can also be used to optimize the routing of medical vehicles, patients, or medical supplies. Logistics companies can use the service to optimize their supply chain by monitoring all their delivery vehicles.

To use the route feature, start by creating a route calculator. In the Amazon Location console, choose Route calculators. For the provider of the route information, choose Esri or HERE.

Screenshot of create a new routing calculator

You can use the route calculator from the AWS SDKs, the AWS Command Line Interface (CLI), or the Amazon Location HTTP API.

For example, to calculate a simple route between departure and destination positions using the CLI, you can write something like this:

aws location \
    calculate-route \
        --calculator-name MyExampleCalculator \
        --departure-position -123.1376951951309 49.234371474778385 \
        --destination-position -122.83301379875074 49.235860182576886

The departure-position and destination-position are specified as longitude, latitude.

This calculation returns a lot of information. Because you didn’t define the travel mode, the service assumes that you are using a car. You can see the total distance of the route (in this case, 29 kilometers). You can change the distance unit when you do the calculation. The service also returns the duration of the trip (in this case, 29 minutes). Because you didn’t define when to depart, Amazon Location will assume that you want to travel when there is the least amount of traffic.

{
    "Legs": [{
        "Distance": 26.549,
        "DurationSeconds": 1711,
        "StartPosition":[-123.1377012, 49.2342994],
        "EndPosition": [-122.833014,49.23592],
        "Steps": [{
            "Distance":0.7,
            "DurationSeconds":52,
            "EndPosition":[-123.1281,49.23395],
            "GeometryOffset":0,
            "StartPosition":[-123.137701,49.234299]},
            ...
        ]
    }],
    "Summary": {
        "DataSource": "Esri",
        "Distance": 29.915115551209176,
        "DistanceUnit": "Kilometers",
        "DurationSeconds": 2275.5813682980006,
        "RouteBBox": [
            -123.13769762299995,
            49.23068000000006,
            -122.83301399999999,
            49.258440000000064
        ]
    }
}

The calculation also returns an array of steps, which form the directions to get from departure to destination. Each step is represented by a starting position and an end position. In this example, there are 11 steps and the travel mode is a car.

Screenshot of route drawn in map

The result changes depending on the travel mode you selected. For example, if you do the calculation for the same departure and destination positions but choose a travel mode of walking, you will get a series of steps that draw the map as shown below. The travel time and distance are different: 24.1 kilometers and 6 hours and 43 minutes.

Map of route when walking

Available Now
Amazon Location Service is now available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (Stockholm), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo) Regions.

Learn about the pricing models of Amazon Location Service. For more about the service, see Amazon Location Service.

Marcia

Amazon Redshift ML Is Now Generally Available – Use SQL to Create Machine Learning Models and Make Predictions from Your Data

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/amazon-redshift-ml-is-now-generally-available-use-sql-to-create-machine-learning-models-and-make-predictions-from-your-data/

With Amazon Redshift, you can use SQL to query and combine exabytes of structured and semi-structured data across your data warehouse, operational databases, and data lake. Now that AQUA (Advanced Query Accelerator) is generally available, you can improve the performance of your queries by up to 10 times with no additional costs and no code changes. In fact, Amazon Redshift provides up to three times better price/performance than other cloud data warehouses.

But what if you want to go a step further and process this data to train machine learning (ML) models and use these models to generate insights from data in your warehouse? For example, to implement use cases such as forecasting revenue, predicting customer churn, and detecting anomalies? In the past, you would need to export the training data from Amazon Redshift to an Amazon Simple Storage Service (Amazon S3) bucket, and then configure and start a machine learning training process (for example, using Amazon SageMaker). This process required many different skills and usually more than one person to complete. Can we make it easier?

Today, Amazon Redshift ML is generally available to help you create, train, and deploy machine learning models directly from your Amazon Redshift cluster. To create a machine learning model, you use a simple SQL query to specify the data you want to use to train your model, and the output value you want to predict. For example, to create a model that predicts the success rate for your marketing activities, you define your inputs by selecting the columns (in one or more tables) that include customer profiles and results from previous marketing campaigns, and the output column you want to predict. In this example, the output column could be one that shows whether a customer has shown interest in a campaign.

After you run the SQL command to create the model, Redshift ML securely exports the specified data from Amazon Redshift to your S3 bucket and calls Amazon SageMaker Autopilot to prepare the data (pre-processing and feature engineering), select the appropriate pre-built algorithm, and apply the algorithm for model training. You can optionally specify the algorithm to use, for example XGBoost.

Architectural diagram.

Redshift ML handles all of the interactions between Amazon Redshift, S3, and SageMaker, including all the steps involved in training and compilation. When the model has been trained, Redshift ML uses Amazon SageMaker Neo to optimize the model for deployment and makes it available as a SQL function. You can use the SQL function to apply the machine learning model to your data in queries, reports, and dashboards.

Redshift ML now includes many new features that were not available during the preview, including Amazon Virtual Private Cloud (VPC) support. For example:

Architectural diagram.

  • You can also create SQL functions that use existing SageMaker endpoints to make predictions (remote inference). In this case, Redshift ML batches calls to the endpoint to speed up processing; a sketch of this follows the bullet.
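
To give an idea of how that looks in practice, below is a sketch that registers an existing SageMaker endpoint as a SQL prediction function through the Redshift Data API. The cluster, database, user, endpoint, and return type are placeholders, and the CREATE MODEL statement shown is the bring-your-own-endpoint form; check the Redshift ML documentation for the exact options.

# A sketch: run the remote-inference CREATE MODEL statement with the Redshift
# Data API. All identifiers (cluster, database, user, endpoint) are placeholders.
import boto3

redshift_data = boto3.client("redshift-data")

create_model_sql = """
CREATE MODEL remote_churn_model
FUNCTION predict_churn (INT, FLOAT, FLOAT)
RETURNS DECIMAL(8,7)
SAGEMAKER 'my-existing-endpoint'
IAM_ROLE 'arn:aws:iam::123412341234:role/RedshiftML';
"""

redshift_data.execute_statement(
    ClusterIdentifier="my-redshift-cluster",
    Database="dev",
    DbUser="awsuser",
    Sql=create_model_sql,
)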

Before looking into how to use these new capabilities in practice, let’s see the difference between Redshift ML and similar features in AWS databases and analytics services.

ML Feature | Data | Training from SQL | Predictions using SQL Functions
Amazon Redshift ML | Data warehouse, federated relational databases, and S3 data lake (with Redshift Spectrum) | Yes, using Amazon SageMaker Autopilot | Yes, a model can be imported and executed inside the Amazon Redshift cluster, or invoked using a SageMaker endpoint.
Amazon Aurora ML | Relational database (compatible with MySQL or PostgreSQL) | No | Yes, using a SageMaker endpoint. A native integration with Amazon Comprehend for sentiment analysis is also available.
Amazon Athena ML | S3 data lake; other data sources can be used through Athena Federated Query | No | Yes, using a SageMaker endpoint.

Building a Machine Learning Model with Redshift ML
Let’s build a model that predicts if customers will accept or decline a marketing offer.

To manage the interactions with S3 and SageMaker, Redshift ML needs permissions to access those resources. I create an AWS Identity and Access Management (IAM) role as described in the documentation. I use RedshiftML for the role name. Note that the trust policy of the role allows both Amazon Redshift and SageMaker to assume the role to interact with other AWS services.

From the Amazon Redshift console, I create a cluster. In the cluster permissions, I associate the RedshiftML IAM role. When the cluster is available, I load the same dataset used in this super interesting blog post that my colleague Julien wrote when SageMaker Autopilot was announced.

The file I am using (bank-additional-full.csv) is in CSV format. Each line describes a direct marketing activity with a customer. The last column (y) describes the outcome of the activity (if the customer subscribed to a service that was marketed to them).

Here are the first few lines of the file. The first line contains the headers.

age,job,marital,education,default,housing,loan,contact,month,day_of_week,duration,campaign,pdays,previous,poutcome,emp.var.rate,cons.price.idx,cons.conf.idx,euribor3m,nr.employed,y
56,housemaid,married,basic.4y,no,no,no,telephone,may,mon,261,1,999,0,nonexistent,1.1,93.994,-36.4,4.857,5191.0,no
57,services,married,high.school,unknown,no,no,telephone,may,mon,149,1,999,0,nonexistent,1.1,93.994,-36.4,4.857,5191.0,no
37,services,married,high.school,no,yes,no,telephone,may,mon,226,1,999,0,nonexistent,1.1,93.994,-36.4,4.857,5191.0,no
40,admin.,married,basic.6y,no,no,no,telephone,may,mon,151,1,999,0,nonexistent,1.1,93.994,-36.4,4.857,5191.0,no

I store the file in one of my S3 buckets. The S3 bucket is used to unload data and store SageMaker training artifacts.

Then, using the Amazon Redshift query editor in the console, I create a table to load the data.

CREATE TABLE direct_marketing (
	age DECIMAL NOT NULL, 
	job VARCHAR NOT NULL, 
	marital VARCHAR NOT NULL, 
	education VARCHAR NOT NULL, 
	credit_default VARCHAR NOT NULL, 
	housing VARCHAR NOT NULL, 
	loan VARCHAR NOT NULL, 
	contact VARCHAR NOT NULL, 
	month VARCHAR NOT NULL, 
	day_of_week VARCHAR NOT NULL, 
	duration DECIMAL NOT NULL, 
	campaign DECIMAL NOT NULL, 
	pdays DECIMAL NOT NULL, 
	previous DECIMAL NOT NULL, 
	poutcome VARCHAR NOT NULL, 
	emp_var_rate DECIMAL NOT NULL, 
	cons_price_idx DECIMAL NOT NULL, 
	cons_conf_idx DECIMAL NOT NULL, 
	euribor3m DECIMAL NOT NULL, 
	nr_employed DECIMAL NOT NULL, 
	y BOOLEAN NOT NULL
);

I load the data into the table using the COPY command. I can use the same IAM role I created earlier (RedshiftML) because I am using the same S3 bucket to import and export the data.

COPY direct_marketing 
FROM 's3://my-bucket/direct_marketing/bank-additional-full.csv' 
DELIMITER ',' IGNOREHEADER 1
IAM_ROLE 'arn:aws:iam::123412341234:role/RedshiftML'
REGION 'us-east-1';

Now, I create the model straight from the SQL interface using the new CREATE MODEL statement:

CREATE MODEL direct_marketing
FROM direct_marketing
TARGET y
FUNCTION predict_direct_marketing
IAM_ROLE 'arn:aws:iam::123412341234:role/RedshiftML'
SETTINGS (
  S3_BUCKET 'my-bucket'
);

In this SQL command, I specify the parameters required to create the model:

  • FROM – I select all the rows in the direct_marketing table, but I can replace the name of the table with a nested query (see example below).
  • TARGET – This is the column that I want to predict (in this case, y).
  • FUNCTION – The name of the SQL function to make predictions.
  • IAM_ROLE – The IAM role assumed by Amazon Redshift and SageMaker to create, train, and deploy the model.
  • S3_BUCKET – The S3 bucket where the training data is temporarily stored, and where model artifacts are stored if you choose to retain a copy of them.

Here I am using a simple syntax for the CREATE MODEL statement. For more advanced users, other options are available, such as:

  • MODEL_TYPE – To use a specific model type for training, such as XGBoost or multilayer perceptron (MLP). If I don’t specify this parameter, SageMaker Autopilot selects the appropriate model class to use.
  • PROBLEM_TYPE – To define the type of problem to solve: regression, binary classification, or multiclass classification. If I don’t specify this parameter, the problem type is discovered during training, based on my data.
  • OBJECTIVE – The objective metric used to measure the quality of the model. This metric is optimized during training to provide the best estimate from data. If I don’t specify a metric, the default behavior is to use mean squared error (MSE) for regression, the F1 score for binary classification, and accuracy for multiclass classification. Other available options are F1Macro (to apply F1 scoring to multiclass classification) and area under the curve (AUC). More information on objective metrics is available in the SageMaker documentation.

Depending on the complexity of the model and the amount of data, it can take some time for the model to be available. I use the SHOW MODEL command to see when it is available:

SHOW MODEL direct_marketing

When I execute this command using the query editor in the console, I get the following output:

Console screenshot.

As expected, the model is currently in the TRAINING state.

When I created this model, I selected all the columns in the table as input parameters. I wonder what happens if I create a model that uses fewer input parameters? I am in the cloud and I am not slowed down by limited resources, so I create another model using a subset of the columns in the table:

CREATE MODEL simple_direct_marketing
FROM (
    SELECT age, job, marital, education, housing, contact, month, day_of_week, y
    FROM direct_marketing
)
TARGET y
FUNCTION predict_simple_direct_marketing
IAM_ROLE 'arn:aws:iam::123412341234:role/RedshiftML'
SETTINGS (
  S3_BUCKET 'my-bucket'
);

After some time, my first model is ready, and I get this output from SHOW MODEL. The actual output in the console spans multiple pages; I merged the results here to make it easier to follow:

Console screenshot.

From the output, I see that the model has been correctly recognized as BinaryClassification, and F1 has been selected as the objective. The F1 score is a metric that considers both precision and recall. It returns a value between 1 (perfect precision and recall) and 0 (lowest possible score). The final score for the model (validation:f1) is 0.79. In this table I also find the name of the SQL function (predict_direct_marketing) that has been created for the model, its parameters and their types, and an estimate of the training costs.

When the second model is ready, I compare the F1 scores. The F1 score of the second model is lower (0.66) than the first one. However, with fewer parameters the SQL function is easier to apply to new data. As is often the case with machine learning, I have to find the right balance between complexity and usability.

Using Redshift ML to Make Predictions
Now that the two models are ready, I can make predictions using SQL functions. Using the first model, I check how many false positives (wrong positive predictions) and false negatives (wrong negative predictions) I get when applying the model on the same data used for training:

SELECT predict_direct_marketing, y, COUNT(*)
  FROM (SELECT predict_direct_marketing(
                   age, job, marital, education, credit_default, housing,
                   loan, contact, month, day_of_week, duration, campaign,
                   pdays, previous, poutcome, emp_var_rate, cons_price_idx,
                   cons_conf_idx, euribor3m, nr_employed), y
          FROM direct_marketing)
 GROUP BY predict_direct_marketing, y;

The result of the query shows that the model is better at predicting negative rather than positive outcomes. In fact, even though the number of true negatives is much bigger than the number of true positives, there are many more false positives than false negatives. I added some comments in green and red to the following screenshot to clarify the meaning of the results.

Console screenshot.

Using the second model, I see how many customers might be interested in a marketing campaign. Ideally, I should run this query on new customer data, not the same data I used for training.

SELECT COUNT(*)
  FROM direct_marketing
 WHERE predict_simple_direct_marketing(
           age, job, marital, education, housing,
           contact, month, day_of_week) = true;

Wow, looking at the results, there are more than 7,000 prospects!

Console screenshot.

Availability and Pricing
Redshift ML is available today in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), US West (N. California), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (Paris), Europe (Stockholm), Asia Pacific (Hong Kong), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), and South America (São Paulo). For more information, see the AWS Regional Services list.

With Redshift ML, you pay only for what you use. When training a new model, you pay for the Amazon SageMaker Autopilot and S3 resources used by Redshift ML. When making predictions, there is no additional cost for models imported into your Amazon Redshift cluster, as in the example I used in this post.

Redshift ML also allows you to use existing Amazon SageMaker endpoints for inference. In that case, the usual SageMaker pricing for real-time inference applies. Here you can find a few tips on how to control your costs with Redshift ML.

To learn more, you can see this blog post from when Redshift ML was announced in preview and the documentation.

Start getting better insights from your data with Redshift ML.

Danilo

Introducing Amazon Kinesis Data Analytics Studio – Quickly Interact with Streaming Data Using SQL, Python, or Scala

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/introducing-amazon-kinesis-data-analytics-studio-quickly-interact-with-streaming-data-using-sql-python-or-scala/

The best way to get timely insights and react quickly to new information you receive from your business and your applications is to analyze streaming data. This is data that must usually be processed sequentially and incrementally on a record-by-record basis or over sliding time windows, and can be used for a variety of analytics including correlations, aggregations, filtering, and sampling.

To make it easier to analyze streaming data, today we are pleased to introduce Amazon Kinesis Data Analytics Studio.

Now, from the Amazon Kinesis console you can select a Kinesis data stream and with a single click start a Kinesis Data Analytics Studio notebook powered by Apache Zeppelin and Apache Flink to interactively analyze data in the stream. Similarly, you can select a cluster in the Amazon Managed Streaming for Apache Kafka console to start a notebook to analyze data in Apache Kafka streams. You can also start a notebook from the Kinesis Data Analytics Studio console and connect to custom sources.

Architectural diagram.

In the notebook, you can interact with streaming data and get results in seconds using SQL queries and Python or Scala programs. When you are satisfied with your results, with a few clicks you can promote your code to a production stream processing application that runs reliably at scale with no additional development effort.

For new projects, we recommend that you use the new Kinesis Data Analytics Studio over Kinesis Data Analytics for SQL Applications. Kinesis Data Analytics Studio combines ease of use with advanced analytical capabilities, which makes it possible to build sophisticated stream processing applications in minutes. Let’s see how that works in practice.

Using Kinesis Data Analytics Studio to Analyze Streaming Data
I want to get a better understanding of the data sent by some sensors to a Kinesis data stream.

To simulate the workload, I use this random_data_generator.py Python script. You don’t need to know Python to use Kinesis Data Analytics Studio. In fact, I am going to use SQL in the following steps. Also, you can avoid any coding and use the Amazon Kinesis Data Generator user interface (UI) to send test data to Kinesis Data Streams or Kinesis Data Firehose. I am using a Python script to have finer control over the data that is being sent.

import datetime
import json
import random
import boto3

STREAM_NAME = "my-input-stream"


def get_random_data():
    current_temperature = round(10 + random.random() * 170, 2)
    if current_temperature > 160:
        status = "ERROR"
    elif current_temperature > 140 or random.randrange(1, 100) > 80:
        status = random.choice(["WARNING","ERROR"])
    else:
        status = "OK"
    return {
        'sensor_id': random.randrange(1, 100),
        'current_temperature': current_temperature,
        'status': status,
        'event_time': datetime.datetime.now().isoformat()
    }


def send_data(stream_name, kinesis_client):
    while True:
        data = get_random_data()
        partition_key = str(data["sensor_id"])
        print(data)
        kinesis_client.put_record(
            StreamName=stream_name,
            Data=json.dumps(data),
            PartitionKey=partition_key)


if __name__ == '__main__':
    kinesis_client = boto3.client('kinesis')
    send_data(STREAM_NAME, kinesis_client)

This script sends random records to my Kinesis data stream using JSON syntax. For example:

{'sensor_id': 77, 'current_temperature': 93.11, 'status': 'OK', 'event_time': '2021-05-19T11:20:00.978328'}
{'sensor_id': 47, 'current_temperature': 168.32, 'status': 'ERROR', 'event_time': '2021-05-19T11:20:01.110236'}
{'sensor_id': 9, 'current_temperature': 140.93, 'status': 'WARNING', 'event_time': '2021-05-19T11:20:01.243881'}
{'sensor_id': 27, 'current_temperature': 130.41, 'status': 'OK', 'event_time': '2021-05-19T11:20:01.371191'}

From the Kinesis console, I select a Kinesis data stream (my-input-stream) and choose Process data in real time from the Process drop-down. In this way, the stream is configured as a source for the notebook.

Console screenshot.

Then, in the following dialog box, I create an Apache Flink – Studio notebook.

I enter a name (my-notebook) and a description for the notebook. The AWS Identity and Access Management (IAM) permissions to read from the Kinesis data stream I selected earlier (my-input-stream) are automatically attached to the IAM role assumed by the notebook.

Console screenshot.

I choose Create to open the AWS Glue console and create an empty database. Back in the Kinesis Data Analytics Studio console, I refresh the list and select the new database. It will define the metadata for my sources and destinations. From here, I can also review the default Studio notebook settings. Then, I choose Create Studio notebook.

Console screenshot.

Now that the notebook has been created, I choose Run.

Console screenshot.

When the notebook is running, I choose Open in Apache Zeppelin to get access to the notebook and write code in SQL, Python, or Scala to interact with my streaming data and get insights in real time.

In the notebook, I create a new note and call it Sensors. Then, I create a sensor_data table describing the format of the data in the stream:

%flink.ssql

CREATE TABLE sensor_data (
    sensor_id INTEGER,
    current_temperature DOUBLE,
    status VARCHAR(6),
    event_time TIMESTAMP(3),
    WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
)
PARTITIONED BY (sensor_id)
WITH (
    'connector' = 'kinesis',
    'stream' = 'my-input-stream',
    'aws.region' = 'us-east-1',
    'scan.stream.initpos' = 'LATEST',
    'format' = 'json',
    'json.timestamp-format.standard' = 'ISO-8601'
)

The first line in the previous command tells Apache Zeppelin to provide a stream SQL environment (%flink.ssql) for the Apache Flink interpreter. I can also interact with the streaming data using a batch SQL environment (%flink.bsql), or using Python (%flink.pyflink) or Scala (%flink) code.
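
For instance, a minimal %flink.pyflink paragraph could run a similar query from Python. This sketch assumes the Studio notebook exposes the st_env streaming table environment and the z.show() helper of the Zeppelin Flink interpreter; treat it as an illustration rather than a drop-in recipe.

%flink.pyflink

# A sketch: query the sensor_data table from Python. st_env (the streaming
# TableEnvironment) and z.show() are assumed to be provided by the interpreter.
status_counts = st_env.sql_query(
    "SELECT status, COUNT(*) AS num FROM sensor_data GROUP BY status"
)
z.show(status_counts, stream_type="update")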

The first part of the CREATE TABLE statement is familiar to anyone who has used SQL with a database. A table is created to store the sensor data in the stream. The WATERMARK option is used to measure progress in the event time, as described in the Event Time and Watermarks section of the Apache Flink documentation.

The second part of the CREATE TABLE statement describes the connector used to receive data in the table (for example, kinesis or kafka), the name of the stream, the AWS Region, the overall data format of the stream (such as json or csv), and the syntax used for timestamps (in this case, ISO 8601). I can also choose the starting position for processing the stream; I am using LATEST to read the most recent data first.

When the table is ready, I find it in the AWS Glue Data Catalog database I selected when I created the notebook:

Console screenshot.

Now I can run SQL queries on the sensor_data table and use sliding or tumbling windows to get a better understanding of what is happening with my sensors.

For an overview of the data in the stream, I start with a simple SELECT to get all the content of the sensor_data table:

%flink.ssql(type=update)

SELECT * FROM sensor_data;

This time the first line of the command has a parameter (type=update) so that the output of the SELECT, which is more than one row, is continuously updated when new data arrives.

On the terminal of my laptop, I start the random_data_generator.py script:

$ python3 random_data_generator.py

At first I see a table that contains the data as it comes. To get a better understanding, I select a bar graph view. Then, I group the results by status to see their average current_temperature, as shown here:

Notebook screenshot.

As expected, given the way I am generating this data, I see different average temperatures depending on the status (OK, WARNING, or ERROR). The higher the temperature, the greater the probability that something is not working correctly with my sensors.

I can run the aggregated query explicitly using SQL syntax. This time, I want the result computed over a sliding window of 1 minute with results updated every 10 seconds. To do so, I am using the HOP function in the GROUP BY section of the SELECT statement. To add the time to the output of the SELECT, I use the HOP_ROWTIME function. For more information, see how group window aggregations work in the Apache Flink documentation.

%flink.ssql(type=update)

SELECT sensor_data.status,
       COUNT(*) AS num,
       AVG(sensor_data.current_temperature) AS avg_current_temperature,
       HOP_ROWTIME(event_time, INTERVAL '10' second, INTERVAL '1' minute) as hop_time
  FROM sensor_data
 GROUP BY HOP(event_time, INTERVAL '10' second, INTERVAL '1' minute), sensor_data.status;

This time, I look at the results in table format:

Notebook screenshot.

To send the result of the query to a destination stream, I create a table and connect the table to the stream. First, I need to give permissions to the notebook to write into the stream.

In the Kinesis Data Analytics Studio console, I select my-notebook. Then, in the Studio notebooks details section, I choose Edit IAM permissions. Here, I can configure the sources and destinations used by the notebook and the IAM role permissions are updated automatically.

Console screenshot.

In the Included destinations in IAM policy section, I choose the destination and select my-output-stream. I save changes and wait for the notebook to be updated. I am now ready to use the destination stream.

In the notebook, I create a sensor_state table connected to my-output-stream.

%flink.ssql

CREATE TABLE sensor_state (
    status VARCHAR(6),
    num INTEGER,
    avg_current_temperature DOUBLE,
    hop_time TIMESTAMP(3)
)
WITH (
    'connector' = 'kinesis',
    'stream' = 'my-output-stream',
    'aws.region' = 'us-east-1',
    'scan.stream.initpos' = 'LATEST',
    'format' = 'json',
    'json.timestamp-format.standard' = 'ISO-8601'
);

I now use this INSERT INTO statement to continuously insert the result of the select into the sensor_state table.

%flink.ssql(type=update)

INSERT INTO sensor_state
SELECT sensor_data.status,
    COUNT(*) AS num,
    AVG(sensor_data.current_temperature) AS avg_current_temperature,
    HOP_ROWTIME(event_time, INTERVAL '10' second, INTERVAL '1' minute) as hop_time
FROM sensor_data
GROUP BY HOP(event_time, INTERVAL '10' second, INTERVAL '1' minute), sensor_data.status;

The data is also sent to the destination Kinesis data stream (my-output-stream) so that it can be used by other applications. For example, the data in the destination stream can be used to update a real-time dashboard, or to monitor the behavior of my sensors after a software update.

I am satisfied with the result. I want to deploy this query and its output as a Kinesis Analytics application. To do so, I need to provide an S3 location to store the application executable.

In the configuration section of the console, I edit the Deploy as application configuration settings. There, I choose a destination bucket in the same region and save changes.

Console screenshot.

I wait for the notebook to be ready after the update. Then, I create a SensorsApp note in my notebook and copy the statements that I want to execute as part of the application. The tables have already been created, so I just copy the INSERT INTO statement above.

From the menu at the top right of my notebook, I choose Build SensorsApp and export to Amazon S3 and confirm the application name.

Notebook screenshot.

When the export is ready, I choose Deploy SensorsApp as Kinesis Analytics application in the same menu. After that, I fine-tune the configuration of the application. I set parallelism to 1 because I have only one shard in my input Kinesis data stream and not a lot of traffic. Then, I run the application, without having to write any code.

From the Kinesis Data Analytics applications console, I choose Open Apache Flink dashboard to get more information about the execution of my application.

Apache Flink console screenshot.

Availability and Pricing
You can use Amazon Kinesis Data Analytics Studio today in all AWS Regions where Kinesis Data Analytics is generally available. For more information, see the AWS Regional Services List.

In Kinesis Data Analytics Studio, we run the open-source versions of Apache Zeppelin and Apache Flink, and we contribute changes upstream. For example, we have contributed bug fixes for Apache Zeppelin, and we have contributed to AWS connectors for Apache Flink, such as those for Kinesis Data Streams and Kinesis Data Firehose. Also, we are working with the Apache Flink community to contribute availability improvements, including automatic classification of errors at runtime to understand whether errors are in user code or in application infrastructure.

With Kinesis Data Analytics Studio, you pay based on the average number of Kinesis Processing Units (KPU) per hour, including those used by your running notebooks. One KPU comprises 1 vCPU of compute, 4 GB of memory, and associated networking. You also pay for running application storage and durable application storage. For more information, see the Kinesis Data Analytics pricing page.

Start using Kinesis Data Analytics Studio today to get better insights from your streaming data.

Danilo

In the Works – AWS Region in the United Arab Emirates (UAE)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/in-the-works-aws-region-in-the-united-arab-emirates-uae/

We are currently building AWS regions in Australia, Indonesia, Spain, India, and Switzerland.

UAE in the Works
I am happy to announce that the AWS Middle East (UAE) Region is in the works and will open in the first half of 2022. The new region is an extension of our existing investment, which already includes two AWS Direct Connect locations and two Amazon CloudFront edge locations, all of which have been in place since 2018. The new region will give AWS customers in the UAE the ability to run workloads and to store data that must remain in-country, in addition to the ability to serve local customers with even lower latency.

The new region will have three Availability Zones, and will be the second AWS Region in the Middle East, joining the existing AWS Region in Bahrain. There are 80 Availability Zones within 25 AWS Regions in operation today, with 15 more Availability Zones and five announced regions underway in the locations that I listed earlier.

As is always the case with an AWS Region, each of the Availability Zones will be a fully isolated part of the AWS infrastructure. The AZs in this region will be connected together via high-bandwidth, low-latency network connections to support applications that need synchronous replication between AZs for availability or redundancy.

AWS in the UAE
In addition to the upcoming AWS Region and the Direct Connect and CloudFront edge locations, we continue to build our team of account managers, partner managers, data center technicians, systems engineers, solutions architects, professional service providers, and more (check out our current positions).

We also plan to continue our ongoing investments in education initiatives, training, and start-up enablement to support the UAE’s plans for economic development and digital transformation.

Our customers in the UAE are already using AWS to drive innovation! For example:

Mohammed Bin Rashid Space Centre (MBRSC) – Founded in 2006, MBRSC is home to the UAE’s National Space Program. The Hope Probe was launched last year and reached Mars in February of this year. Data from the probe’s instruments is processed and analyzed on AWS, and made available to the global scientific community in less than 20 minutes.

Anghami is the leading music platform in the Middle East and North Africa, giving over 70 million users access to 57 million songs. They have been hosting their infrastructure on AWS since their days as a tiny startup, and have benefited from the ability to scale up by as much as 300% when new music is launched.

Sarwa is an investment bank and personal finance platform that was born on the AWS cloud in 2017. They grew by a factor of four in 2020 while processing hundreds of thousands of transactions. Recent AWS-powered innovations from Sarwa include the Sarwa App (which went from design to market in 3 months) and the upcoming Sarwa Trade platform.

Stay Tuned
We’ll be announcing the opening of the Middle East (UAE) Region in a forthcoming blog post, so be sure to stay tuned!

Jeff;

AWS Verified episode 5: A conversation with Eric Rosenbach of Harvard University’s Belfer Center

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/aws-verified-episode-5-a-conversation-with-eric-rosenbach-of-harvard-universitys-belfer-center/

I am pleased to share the latest episode of AWS Verified, where we bring you conversations with global cybersecurity leaders about important issues, such as how to create a culture of security, cyber resiliency, Zero Trust, and other emerging security trends.

Recently, I got the opportunity to experience distance learning when I took the AWS Verified series back to school. I got a taste of life as a Harvard grad student, meeting (virtually) with Eric Rosenbach, Co-Director of the Belfer Center for Science and International Affairs at Harvard University’s John F. Kennedy School of Government. I call it “Verified meets Veritas.” Harvard’s motto may never be the same again.

In this video, Eric shared with me the Belfer Center’s focus as the hub of the Harvard Kennedy School’s research, teaching, and training at the intersection of cutting edge and interdisciplinary topics, such as international security, environmental and resource issues, and science and technology policy. In recognition of the Belfer Center’s consistently stellar work and its six consecutive years ranked as the world’s #1 university-affiliated think tank, in 2021 it was named a center of excellence by the University of Pennsylvania’s Think Tanks and Civil Societies Program.

Eric’s deep connection to the students reflects the Belfer Center’s mission to prepare future generations of leaders to address critical areas in practical ways. Eric says, “I’m a graduate of the school, and now that I’ve been out in the real world as a policy practitioner, I love going into the classroom, teaching students about the way things work, both with cyber policy and with cybersecurity/cyber risk mitigation.”

In the interview, I talked with Eric about his varied professional background. Before joining the Belfer Center, he was Chief of Staff to US Secretary of Defense Ash Carter. Eric was also the Assistant Secretary of Defense for Homeland Defense and Global Security, where he was known around the US government as the Pentagon’s cyber czar. He has served as an officer in the US Army, written two books, been the Chief Security Officer for the European ISP Tiscali, and worked as a professional committee staff member in the US Senate.

I asked Eric to share his opinion on what the private sector and government can learn from each other. I’m excited to share Eric’s answer to this with you as well as his thoughts on other topics, because the work that Eric and his Belfer Center colleagues are doing is important for technology leaders.

Watch my interview with Eric Rosenbach, and visit the AWS Verified webpage for previous episodes, including interviews with security leaders from Netflix, Vodafone, Comcast, and Lockheed Martin. If you have an idea or a topic you’d like covered in this series, please leave a comment below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds 11 patents in the field of cloud security architecture. Follow Steve on Twitter.

CDK Corner – April 2021

Post Syndicated from Christian Weber original https://aws.amazon.com/blogs/devops/cdk-corner-april-2021/

Social – Community Engagement

We’re getting closer and closer to CDK Day, with the event receiving 75 CFP submissions. The CDK Day schedule is now available so you can plan out your conference day.

Updates to the CDK

Constructs promoted to General Availability

Promoting a module to stable/General Availability is always a cause for celebration. Great job to all the folks involved who helped move aws-acmpca from Experimental to Stable. PR#13778 gives a peek into the work involved. If you’re interested in helping promote a module to GA, or would like to learn more about the process, read the AWS Construct Library Module Lifecycle document. A big thanks to the CDK community and team for their work!

Dead Letter Queues

Dead letter queues (DLQs) are a service integration pattern for holding messages that a service cannot process. For example, if an email message can’t be delivered to a client, an email server could place the undeliverable message in a DLQ until the client can process it. DLQs are supported by many AWS services, and the community and CDK team have been working to support DLQs with CDK in various modules: aws-codebuild in PR#11228, aws-stepfunctions in PR#13450, and aws-lambda-targets in PR#11617.
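
The exact wiring differs per module, but the general shape is the same: create a queue and hand it to the construct that needs somewhere to park failed work. Here is a minimal CDK v1-style sketch (TypeScript) that attaches an SQS dead letter queue to a Lambda function; the stack setup and inline handler are placeholders for illustration, not code from the PRs above.

import * as cdk from '@aws-cdk/core';
import * as lambda from '@aws-cdk/aws-lambda';
import * as sqs from '@aws-cdk/aws-sqs';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'DlqExampleStack');

// Failed asynchronous invocations are redirected to this queue for inspection or replay.
const dlq = new sqs.Queue(stack, 'WorkerDlq');

new lambda.Function(stack, 'Worker', {
  runtime: lambda.Runtime.NODEJS_14_X,
  handler: 'index.handler',
  code: lambda.Code.fromInline('exports.handler = async () => {};'),
  deadLetterQueue: dlq,
});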

Amazon API Gateway

Amazon API Gateway is a fully managed service to deploy APIs at scale. Here are the modules that have received updates to their support for API Gateway:

  • stepfunctions-tasks now supports API Gateway with PR#13033.

  • You can now specify regions when integrating Amazon API Gateway with other AWS services in PR#13251.

  • Support for WebSocket APIs in PR#13031 is now available in aws-apigatewayv2 as a Level 2 construct. To differentiate configuration between HTTP and WebSocket APIs, several of the HTTP API properties were renamed. More information about these changes can be found in the conversation section of PR#13031.

  • You can now set default authorizers in PR#13172. This lets you use an API Gateway HTTP, REST, or WebSocket API with an authorizer and authorization scopes that cover all routes for a given API resource; a short sketch follows this list.
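
As a rough illustration of default authorizers on an HTTP API, the sketch below attaches a JWT authorizer and default scopes to every route. The construct and property names (HttpJwtAuthorizer, jwtIssuer, jwtAudience, defaultAuthorizer, defaultAuthorizationScopes) are assumptions that have shifted across CDK releases, and the stack, issuer, and audience values are placeholders; check the aws-apigatewayv2 API reference for your version.

import * as apigwv2 from '@aws-cdk/aws-apigatewayv2';
import * as authorizers from '@aws-cdk/aws-apigatewayv2-authorizers';

// Placeholder issuer and audience; property names may differ in your CDK version.
const jwtAuthorizer = new authorizers.HttpJwtAuthorizer({
  jwtIssuer: 'https://auth.example.com',
  jwtAudience: ['example-audience'],
});

new apigwv2.HttpApi(stack, 'ItemsApi', {
  defaultAuthorizer: jwtAuthorizer,            // applied to every route by default
  defaultAuthorizationScopes: ['items/read'],  // default scopes for all routes
});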

Notable new L2 constructs

AWS Global Accelerator is a networking service that routes your users’ traffic over the AWS global network, improving speed and performance for infrastructure hosted on AWS. Amazon Route 53 supports Global Accelerator and, thanks to PR#13407, you can now take advantage of this functionality in the aws-route53-targets module as an L2 construct.
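
A hedged sketch of what that alias target might look like, assuming an existing cdk.Stack named stack and an existing route53.IHostedZone named hostedZone; the target class name comes from PR#13407 and is worth confirming against the module docs.

import * as globalaccelerator from '@aws-cdk/aws-globalaccelerator';
import * as route53 from '@aws-cdk/aws-route53';
import * as route53targets from '@aws-cdk/aws-route53-targets';

const accelerator = new globalaccelerator.Accelerator(stack, 'Accelerator');

// Alias the zone apex (or a subdomain) to the accelerator's static endpoints.
new route53.ARecord(stack, 'AcceleratorAlias', {
  zone: hostedZone,
  target: route53.RecordTarget.fromAlias(
    new route53targets.GlobalAcceleratorTarget(accelerator),
  ),
});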

Amazon CloudWatch is an important part of monitoring AWS workloads. With PR#13281, the aws-cloudwatch-actions module now includes an Ec2Action construct, letting you wire CloudWatch alarms to EC2 instance actions directly from CDK.
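
For example, a status-check alarm can trigger an instance reboot. A minimal sketch, assuming an existing cloudwatch.Alarm named statusCheckAlarm and that the enum values shown match your CDK version:

import * as cw_actions from '@aws-cdk/aws-cloudwatch-actions';

// Reboot the instance whenever the status-check alarm fires.
statusCheckAlarm.addAlarmAction(
  new cw_actions.Ec2Action(cw_actions.Ec2InstanceAction.REBOOT),
);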

The aws-cognito module now supports Apple as a user pool identity provider in PR#13160, allowing developers to define workloads that use Apple IDs for identity management.
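
A sketch of what that might look like, assuming an existing cdk.Stack named stack; the credential values are placeholders from an Apple developer account, and the property names should be checked against the aws-cognito module reference.

import * as cognito from '@aws-cdk/aws-cognito';

const userPool = new cognito.UserPool(stack, 'AppUserPool');

new cognito.UserPoolIdentityProviderApple(stack, 'AppleIdP', {
  userPool,
  clientId: 'com.example.app',                     // Services ID from Apple (placeholder)
  teamId: 'EXAMPLETEAMID',                         // placeholder
  keyId: 'EXAMPLEKEYID',                           // placeholder
  privateKey: '-----BEGIN PRIVATE KEY-----\n...',  // placeholder key material
  scopes: ['name', 'email'],
});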

aws-iam received a new L2 construct with PR#13393, bringing SAML provider support to CDK. SAML has become a preferred framework for implementing single sign-on and has been supported in IAM for some time. Now you can set it up with even more efficiency using the SamlProvider construct.
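
A minimal sketch, assuming an existing cdk.Stack named stack and that you have exported your identity provider’s federation metadata to metadata.xml; the helper names come from PR#13393 and are worth double-checking in the aws-iam reference.

import * as iam from '@aws-cdk/aws-iam';

// Register the IdP's metadata document as a SAML provider in IAM.
new iam.SamlProvider(stack, 'IdPSamlProvider', {
  metadataDocument: iam.SamlMetadataDocument.fromFile('metadata.xml'),
});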

Amazon Neptune is a managed graph database service available as a construct in the aws-neptune module. PR#12763 adds L2 constructs to support Database Clusters and Database Instances.
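A hedged sketch of a small cluster, assuming an existing cdk.Stack named stack and that the instance-type enum is exposed as shown; consult the aws-neptune module docs for the exact names in your CDK version.

import * as ec2 from '@aws-cdk/aws-ec2';
import * as neptune from '@aws-cdk/aws-neptune';

const vpc = new ec2.Vpc(stack, 'GraphVpc');

// One writer instance of the given type is created by default.
new neptune.DatabaseCluster(stack, 'GraphCluster', {
  vpc,
  instanceType: neptune.InstanceType.R5_LARGE,
});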

Level ups to existing CDK constructs

Service discovery in AWS is provided by AWS Cloud Map. With PR#13192, users of aws-ecs can now register an ECS service with Cloud Map.
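
The snippet below shows the general service-discovery pattern in aws-ecs rather than the exact API added by PR#13192; stack, cluster, and taskDefinition are assumed to exist already.

import * as ecs from '@aws-cdk/aws-ecs';

// Create a private DNS namespace for the cluster, then register the service in it.
cluster.addDefaultCloudMapNamespace({ name: 'internal.example' });

new ecs.FargateService(stack, 'Backend', {
  cluster,
  taskDefinition,
  cloudMapOptions: {
    name: 'backend',  // resolves as backend.internal.example inside the VPC
  },
});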

aws-lambda has received two notable additions related to Docker: PR#13318 and PR#12258 add functionality to package Lambda function code with the output of a Docker build, or from a Docker build asset, respectively.
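
As one hedged example of the Docker-build flavor, the sketch below bundles function code by running docker build on a local directory; the directory layout and the expectation that the build copies its output into the image’s asset path are assumptions to verify against the aws-lambda docs, and stack is assumed to exist.

import * as path from 'path';
import * as lambda from '@aws-cdk/aws-lambda';

new lambda.Function(stack, 'BuiltFn', {
  runtime: lambda.Runtime.NODEJS_14_X,
  handler: 'index.handler',
  // Runs docker build on ./lambda-src and uses the build output as the code bundle.
  code: lambda.Code.fromDockerBuild(path.join(__dirname, 'lambda-src')),
});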

The aws-ecr module now supports tag immutability. Tags often denote a specific release of a piece of software. Setting the repository’s tag mutability to IMMUTABLE prevents a tag from being overwritten by a later image that reuses a tag already present in the container repository.
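
A minimal sketch of an immutable-tag repository, assuming an existing cdk.Stack named stack:

import * as ecr from '@aws-cdk/aws-ecr';

new ecr.Repository(stack, 'ServiceRepo', {
  // Once a tag such as "v1.2.3" is pushed, it cannot be overwritten by a different image.
  imageTagMutability: ecr.TagMutability.IMMUTABLE,
});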

Last year, AWS announced support for deployment circuit breakers in Amazon Elastic Container Service, enabling customers to perform auto-rollbacks on unhealthy service deployments without manual intervention. PR#12719 includes this functionality as part of the aws-ecs-patterns module, via the DeploymentCircuitBreaker interface. This interface is now available and can be used in constructs such as ApplicationLoadBalancedFargateService.
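A hedged sketch of enabling the circuit breaker on an ApplicationLoadBalancedFargateService; stack and cluster are assumed to exist, the sample container image is a placeholder, and the circuitBreaker property shape should be confirmed against your CDK version.

import * as ecs from '@aws-cdk/aws-ecs';
import * as ecsPatterns from '@aws-cdk/aws-ecs-patterns';

new ecsPatterns.ApplicationLoadBalancedFargateService(stack, 'WebService', {
  cluster,
  taskImageOptions: {
    image: ecs.ContainerImage.fromRegistry('amazon/amazon-ecs-sample'),
  },
  // Stop a failing deployment early and roll back to the last healthy one.
  circuitBreaker: { rollback: true },
});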

The aws-ec2 module received some nice quality-of-life upgrades: support for multi-part user data in PR#11843, Client VPN endpoints in PR#12234, and non-numeric security protocols for security groups in PR#13593 all help improve the experience of using EC2 with CDK.

Learning – Finds from across the internet

On the AWS DevOps Blog, Eric Beard and Rico Huijbers penned a post detailing Best Practices for Developing Cloud Applications with AWS CDK.

Users of AWS Elastic Beanstalk wanting to deploy with AWS CDK can read about deploying Elastic Beanstalk applications with the AWS CDK and the aws-elasticbeanstalk module.

Deploying infrastructure that is HIPAA and HiTrust compliant with AWS CDK can help customers move faster. This best practices guide for HIPAA and HiTrust environments goes into detail on deploying compliant architecture with the AWS CDK.

Community Acknowledgements

And finally, congratulations and rounds of applause for these folks who had their first Pull Request merged to the CDK Repository!*

*These users’ Pull Requests were merged between 2021-03-01 and 2021-03-31.

Thanks for reading this update of the CDK Corner. See you next time!

C5 Type 2 attestation report now available with one new Region and 123 services in scope

Post Syndicated from Mercy Kanengoni original https://aws.amazon.com/blogs/security/c5-type-2-attestation-report-available-one-new-region-123-services-in-scope/

Amazon Web Services (AWS) is pleased to announce the issuance of the 2020 Cloud Computing Compliance Controls Catalogue (C5) Type 2 attestation report. We added the Europe (Milan) Region and 21 additional services and service features to the scope of the 2020 report.

Germany’s national cybersecurity authority, Bundesamt für Sicherheit in der Informationstechnik (BSI), established C5 to define a reference standard for German cloud security requirements. Customers in Germany and other European countries can use AWS’s attestation report to help them meet local security requirements of the C5 framework.

The C5 Type 2 report covers the time period October 1, 2019, through September 30, 2020. It was issued by an independent third-party attestation organization and assesses the design and the operational effectiveness of AWS’s controls against C5’s basic and additional criteria. This attestation demonstrates our commitment to meet the security expectations for cloud service providers set by the BSI in Germany.

We continue to add new Regions and services to the C5 compliance scope so that you have more services to choose from that meet regulatory and compliance requirements. AWS has added the Europe (Milan) Region and the following 21 services and service features to this year’s C5 scope:

You can see a current list of the services in scope for C5 on the AWS Services in Scope by Compliance Program page. The C5 report and Continuing Operations Letter are available to AWS customers through AWS Artifact. For more information, see Cloud Computing Compliance Controls Catalogue (C5).

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mercy Kanengoni

Mercy is a Security Audit Program Manager at AWS. She leads security audits across Europe, and she has previously worked in security assurance and technology risk management.

AWS Asia Pacific (Osaka) Region Now Open to All, with Three AZs and More Services

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-asia-pacific-osaka-region-now-open-to-all-with-three-azs-more-services/

AWS has had a presence in Japan for a long time! We opened the Asia Pacific (Tokyo) Region in March 2011, added a third Availability Zone (AZ) in 2012, and a fourth in 2018. Since that launch, customers in Japan and around the world have used the region to host an incredibly wide variety of applications!

We opened the Osaka Local Region in 2018 to give our customers in Japan a disaster recovery option for their workloads. Located 400 km from Tokyo, the Osaka Local Region used an isolated, fault-tolerant design contained within a single data center.

From Local to Standard
I am happy to announce that the Osaka Local Region has been expanded and is now a standard AWS Region, complete with three Availability Zones. As is always the case with AWS, the AZs are designed to provide physical redundancy and are able to withstand power outages, internet downtime, floods, and other natural disasters.

The following services are available, with more in the works: Amazon Elastic Kubernetes Service (EKS), Amazon API Gateway, Auto Scaling, Application Auto Scaling, Amazon Aurora, AWS Config, AWS Personal Health Dashboard, AWS IQ, AWS Organizations, AWS Secrets Manager, AWS Shield Standard (regional), AWS Snowball Edge, AWS Step Functions, AWS Systems Manager, AWS Trusted Advisor, AWS Certificate Manager, CloudEndure Migration, CloudEndure Disaster Recovery, AWS CloudFormation, Amazon CloudFront, AWS CloudTrail, Amazon CloudWatch, CloudWatch Events, Amazon CloudWatch Logs, AWS CodeDeploy, AWS Database Migration Service, AWS Direct Connect, Amazon DynamoDB, Elastic Container Registry, Amazon Elastic Container Service (ECS), AWS Elastic Beanstalk, Amazon Elastic Block Store (EBS), Amazon Elastic Compute Cloud (EC2), EC2 Image Builder, Elastic Load Balancing, Amazon EMR, Amazon ElastiCache, Amazon EventBridge, AWS Fargate, Amazon Glacier, AWS Glue, AWS Identity and Access Management (IAM), AWS Snowball, AWS Key Management Service (KMS), Amazon Kinesis Data Firehose, Amazon Kinesis Data Streams, AWS Lambda, AWS Marketplace, AWS Mobile SDK, Network Load Balancer, Amazon Redshift, Amazon Relational Database Service (RDS), Amazon Route 53, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Storage Service (S3), Amazon Simple Workflow Service (SWF), AWS VPN, VM Import/Export, AWS X-Ray, AWS Artifact, AWS PrivateLink, and Amazon Virtual Private Cloud (VPC).

The Asia Pacific (Osaka) Region supports the C5, C5d, D2, I3, I3en, M5, M5d, R5d, and T3 instance types, in On-Demand, Spot, and Reserved Instance form. X1 and X1e instances are available in a single AZ.

In addition to the AWS regions in Tokyo and Osaka, customers in Japan also benefit from:

  • 16 CloudFront edge locations in Tokyo.
  • One CloudFront edge location in Osaka.
  • One CloudFront Regional Edge Cache in Tokyo.
  • Two AWS Direct Connect locations in Tokyo.
  • One Direct Connect location in Osaka.

Here are typical latency values from the Asia Pacific (Osaka) Region to other cities in the area:

City Latency
Nagoya 2-5 ms
Hiroshima 2-5 ms
Tokyo 5-8 ms
Fukuoka 11-13 ms
Sendai 12-15 ms
Sapporo 14-17 ms
Seoul 27 ms
Taipei 29 ms
Hong Kong 38 ms
Manila 49 ms

AWS Customers in Japan
As I mentioned earlier, our customers are using the AWS regions in Tokyo and Osaka to host an incredibly wide variety of applications. Here’s a sampling:

Mitsubishi UFJ Financial Group (MUFG) – This financial services company adopted a cloud-first strategy and did their first AWS deployment in 2017. They have built a data platform for their banking and group data that helps them to streamline administrative processes, and also migrated a market risk management system. MUFG has been using the Osaka Local Region and is planning to use the Asia Pacific (Osaka) Region to run more workloads and to support their ongoing digital transformation.

KDDI Corporation (KDDI) – This diversified (telecommunication, financial services, Internet, electricity distribution, consumer appliance, and more) company started using AWS in 2016 after AWS met KDDI’s stringent internal security standards. They currently build and run more than 60 services on AWS, including the backend of the au Denki application, used by consumers to monitor electricity usage and rates. They plan to use the Asia Pacific (Osaka) Region to initiate multi-region service to their customers in Japan.

OGIS-RI – Founded in 1983, this global IT consulting firm is a part of the Daigas Group of companies. OGIS-RI provides information strategy, systems integration, systems development, network construction, support, and security. They use AWS to deliver ekul, a data measurement service that measures and visualizes gas and electricity usage in real time for corporate customers across Japan.

Sony Bank – Founded in 2001 as an asset management bank for individuals, Sony Bank provides services that include foreign currency deposits, home loans, investment trusts, and debit cards. Their gradual migration of internal banking systems to AWS began in 2013 and was 80% complete at the end of 2019. This migration reduced their infrastructure costs by 60% and more than halved the time it once took to procure and build out new infrastructure.

AWS Resources in Japan
As a quick reminder, enterprises, government and research organizations, small and medium businesses, educators, and startups in Japan have access to a wide variety of AWS and community resources. Here’s a sampling:

Available Now
The new region is open to all AWS customers and you can start to use it today!

Jeff;

 

AWS DeepRacer League’s 2021 Season Launches With New Open and Pro Divisions

Post Syndicated from Marcia Villalba original https://aws.amazon.com/blogs/aws/aws-deepracer-leagues-2021-season-launches-with-new-open-and-pro-divisions/

As a developer, I have been hearing a lot of stories lately about how companies have solved their business problems using machine learning (ML), so one of my goals for 2021 is to learn more about it.

For the last few years I have been using artificial intelligence (AI) services such as Amazon Rekognition, Amazon Comprehend, and others extensively. AI services provide a simple API to solve common ML problems such as image recognition, text to speech, and sentiment analysis. When using these high-level APIs, you don’t need to understand how the underlying ML model works, nor do you have to train or maintain it in any way.

Even though those services are great and I can solve most of my business cases with them, I want to understand how ML algorithms work, and that is how I started tinkering with AWS DeepRacer.

AWS DeepRacer, a service that helps you learn reinforcement learning (RL), has been around since 2018. RL is an advanced ML technique that takes a very different approach to training models than other ML methods. Basically, it can learn very complex behavior without requiring any labeled training data, and it can make short-term decisions while optimizing for a long-term goal.

AWS DeepRacer is an autonomous 1/18th scale race car designed to test RL models by racing virtually in the AWS DeepRacer console or physically on a track at AWS and customer events. AWS DeepRacer is for developers of all skill levels, even if you don’t have any ML experience. When learning RL using AWS DeepRacer, you can take part in the AWS DeepRacer League where you get experience with machine learning in a fun and competitive environment.

Over the past year, the AWS DeepRacer League’s races have gone completely virtual and participants have competed for different kinds of prizes. However, the competition has become dominated by experts and newcomers haven’t had much of a chance to win.

The 2021 season introduces new skill-based Open and Pro racing divisions, where racers of all skill levels have five times more opportunities to win rewards than in previous seasons.

Image of the leagues in the console

How the New AWS DeepRacer Racing Divisions Work

The 2021 AWS DeepRacer league runs from March 1 through the end of October. When it kicks off, all participants will enter the Open division, a place to have fun and develop your RL knowledge with other community members.

At the end of every month, the top 10% of the Open division leaderboard will advance to the Pro division for the remainder of the season; they’ll also receive a Pro Welcome kit full of AWS DeepRacer swag. Pro division racers can win DeepRacer Evo cars and AWS DeepRacer merchandise such as hats and T-shirts.

At the end of every month, the top 16 racers in the Pro division will compete against each other in a live race in the console. That race will determine who advances that month to the 2021 Championship Cup at re:Invent 2021.

The monthly Pro division winner gets an expenses-paid trip to re:Invent 2021 and participates in the Championship Cup to get a chance to win a Machine Learning education sponsorship worth $20k.

In both divisions, you can collect digital rewards, including vehicle customizations and accessories which will be released to participants once the winners are announced each month. 

You can start racing in the Open division any time during the 2021 season. Get started here!

New Racer Profiles Increase the Fun

At the end of March, you will be able to create a new racer profile with an avatar and show the world which country you are representing.

I hope to see you in the new AWS DeepRacer season, where I’ll start in the Open division as MaVi.

Start racing today and train your first model for free! 

Marcia

TLS 1.2 will be required for all AWS FIPS endpoints beginning March 31, 2021

Post Syndicated from Janelle Hopper original https://aws.amazon.com/blogs/security/tls-1-2-required-for-aws-fips-endpoints/

To help you meet your compliance needs, we’re updating all AWS Federal Information Processing Standard (FIPS) endpoints to a minimum of Transport Layer Security (TLS) 1.2. We have already updated over 40 services to require TLS 1.2, removing support for TLS 1.0 and TLS 1.1. Beginning March 31, 2021, client applications that cannot support TLS 1.2 will fail to connect to AWS FIPS endpoints. To avoid an interruption in service, we encourage you to act now and ensure that you connect to AWS FIPS endpoints at TLS version 1.2. This change does not affect non-FIPS AWS endpoints.

Amazon Web Services (AWS) continues to notify impacted customers directly via their Personal Health Dashboard and email. However, if you’re connecting anonymously to AWS shared resources, such as through a public Amazon Simple Storage Service (Amazon S3) bucket, then you would not have received a notification, as we cannot identify anonymous connections.

Why are you removing TLS 1.0 and TLS 1.1 support from FIPS endpoints?

At AWS, we’re continually expanding the scope of our compliance programs to meet the needs of customers who want to use our services for sensitive and regulated workloads. Compliance programs, including FedRAMP, require a minimum level of TLS 1.2. To help you meet compliance requirements, we’re updating all AWS FIPS endpoints to a minimum of TLS version 1.2 across all AWS Regions. Following this update, you will not be able to use TLS 1.0 and TLS 1.1 for connections to FIPS endpoints.

How can I detect if I am using TLS 1.0 or TLS 1.1?

To detect the use of TLS 1.0 or 1.1, we recommend that you perform code, network, or log analysis. If you are using an AWS Software Developer Kit (AWS SDK) or Command Line Interface (CLI), we have provided hyperlinks to detailed guidance in our previous TLS blog post about how to examine your client application code and properly configure the TLS version used.
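
If you control the client code, you can also pin the minimum TLS version directly. The sketch below shows one way a Node.js application using the AWS SDK for JavaScript (v3) might refuse anything below TLS 1.2 when calling a FIPS endpoint; the endpoint URL, Region, and package names are assumptions to verify against the AWS FIPS endpoint list and your SDK version.

import { Agent } from 'https';
import { S3Client, ListBucketsCommand } from '@aws-sdk/client-s3';
import { NodeHttpHandler } from '@aws-sdk/node-http-handler';

const client = new S3Client({
  region: 'us-gov-west-1',
  // Example FIPS endpoint; confirm the correct one for your service and Region.
  endpoint: 'https://s3-fips.us-gov-west-1.amazonaws.com',
  requestHandler: new NodeHttpHandler({
    // The handshake fails if the client cannot negotiate TLS 1.2 or higher.
    httpsAgent: new Agent({ minVersion: 'TLSv1.2' }),
  }),
});

async function main() {
  const response = await client.send(new ListBucketsCommand({}));
  console.log(response.Buckets);
}

main().catch(console.error);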

When the application source code is unavailable, you can use a network tool, such as TCPDump (Linux) or Wireshark (Linux or Windows), to analyze your network traffic and find the TLS versions you’re using when connecting to AWS endpoints. For a detailed walkthrough of using these tools, see the example below.

If you’re using Amazon S3, you can also use your access logs to view the TLS connection information for these services and identify client connections that are not at TLS 1.2.

What is the most common use of TLS 1.0 or TLS 1.1?

The most common client applications that use TLS 1.0 or 1.1 are Microsoft .NET Framework versions earlier than 4.6.2. If you use the .NET Framework, please confirm you are using version 4.6.2 or later. For information on how to update and configure .NET Framework to support TLS 1.2, see How to enable TLS 1.2 on clients.

How do I know if I am using an AWS FIPS endpoint?

All AWS services offer TLS 1.2 encrypted endpoints that you can use for all API calls. Some AWS services also offer FIPS 140-2 endpoints for customers who need to use FIPS-validated cryptographic libraries to connect to AWS services. You can check our list of all AWS FIPS endpoints and compare the list to your application code, configuration repositories, DNS logs, or other network logs.

EXAMPLE: TLS version detection using a packet capture

To capture the packets, multiple online sources, such as this article, provide guidance for setting up TCPDump on a Linux operating system. On a Windows operating system, the Wireshark tool provides packet analysis capabilities and can be used to analyze packets captured with TCPDump or it can also directly capture packets.

In this example, we assume there is a client application with the local IP address 10.25.35.243 that is making API calls to the CloudWatch FIPS API endpoint in the AWS GovCloud (US-West) Region. To analyze the traffic, first we look up the endpoint URL in the AWS FIPS endpoint list. In our example, the endpoint URL is monitoring.us-gov-west-1.amazonaws.com. Then we use NSLookup to find the IP addresses used by this FIPS endpoint.

Figure 1: Use NSLookup to find the IP addresses used by this FIPS endpoint

Wireshark is then used to open the captured packets, and filter to just the packets with the relevant IP address. This can be done automatically by selecting one of the packets in the upper section, and then right-clicking to use the Conversation filter/IPv4 option.

After the results are filtered to only the relevant IP addresses, the next step is to find the packet whose description in the Info column is Client Hello. In the lower packet details area, expand the Transport Layer Security section to find the version, which in this example is set to TLS 1.0 (0x0301). This indicates that the client only supports TLS 1.0 and must be modified to support a TLS 1.2 connection.

Figure 2: After the conversation filter has been applied, select the Client Hello packet in the top pane. Expand the Transport Layer Security section in the lower pane to view the packet details and the TLS version.

Figure 3 shows what it looks like after the client has been updated to support TLS 1.2. This second packet capture confirms we are sending TLS 1.2 (0x0303) in the Client Hello packet.

Figure 3: The client TLS has been updated to support TLS 1.2

Is there more assistance available?

If you have any questions or issues, you can start a new thread on one of the AWS forums, or contact AWS Support or your technical account manager (TAM). The AWS support tiers cover development and production issues for AWS products and services, along with other key stack components. AWS Support doesn’t include code development for client applications.

Additionally, you can use AWS IQ to find, securely collaborate with, and pay AWS-certified third-party experts for on-demand assistance to update your TLS client components. Visit the AWS IQ page for information about how to submit a request, get responses from experts, and choose the expert with the right skills and experience. Log in to your console and select Get Started with AWS IQ to start a request.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Janelle Hopper

Janelle is a Senior Technical Program Manager in AWS Security with over 15 years of experience in the IT security field. She works with AWS services, infrastructure, and administrative teams to identify and drive innovative solutions that improve AWS’ security posture.

Author

Daniel Salzedo

Daniel is a Senior Specialist Technical Account Manager – Security. He has over 25 years of professional experience in IT in industries as diverse as video game development, manufacturing, banking and used car sales. He loves working with our wonderful AWS customers to help them solve their complex security challenges at scale.

Fall 2020 PCI DSS report now available with eight additional services in scope

Post Syndicated from Michael Oyeniya original https://aws.amazon.com/blogs/security/fall-2020-pci-dss-report-now-available-with-eight-additional-services-in-scope/

We continue to expand the scope of our assurance programs and are pleased to announce that eight additional services have been added to the scope of our Payment Card Industry Data Security Standard (PCI DSS) certification. This gives our customers more options to process and store their payment card data and architect their cardholder data environment (CDE) securely in Amazon Web Services (AWS).

You can see the full list on Services in Scope by Compliance Program. The eight additional services are:

  1. Amazon Augmented AI (Amazon A2I) (excluding public workforce and vendor workforce)
  2. Amazon Kendra
  3. Amazon Keyspaces (for Apache Cassandra)
  4. Amazon Timestream
  5. AWS App Mesh
  6. AWS Cloud Map
  7. AWS Glue DataBrew
  8. AWS Ground Station

Private AWS Local Zones and AWS Wavelength sites were newly assessed as additional infrastructure deployments as part of the fall 2020 PCI assessment.

We were evaluated by Coalfire, a third-party Qualified Security Assessor (QSA). The Attestation of Compliance (AOC) evidencing AWS PCI compliance status is available through AWS Artifact.

To learn more about our PCI program and other compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions. You can contact the compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Michael Oyeniya

Michael is a Compliance Program Manager at AWS. He has over 15 years of experience managing information technology risk and control for Fortune 500 companies covering security compliance, auditing, and control framework implementation. He has a bachelor’s degree in Finance, master’s degree in Business Administration, and industry certifications including CISA and ISSPCS. Outside of work, he loves singing and reading.

Updated whitepaper available: Encrypting File Data with Amazon Elastic File System

Post Syndicated from Joe Travaglini original https://aws.amazon.com/blogs/security/updated-whitepaper-available-encrypting-file-data-with-amazon-elastic-file-system/

We’re sharing an update to the Encrypting File Data with Amazon Elastic File System whitepaper to provide customers with guidance on enforcing encryption of data at rest and in transit in Amazon Elastic File System (Amazon EFS). Amazon EFS provides simple, scalable, highly available, and highly durable shared file systems in the cloud. The file systems you create by using Amazon EFS are elastic, which allows them to grow and shrink automatically as you add and remove data. They can grow to petabytes in size, distributing data across an unconstrained number of storage servers in multiple Availability Zones.

Read the updated whitepaper to learn about best practices for encrypting Amazon EFS. Learn how to enforce encryption at rest while you create an Amazon EFS file system in the AWS Management Console and in the AWS Command Line Interface (AWS CLI), and how to enforce encryption of data in transit at the client connection layer by using AWS Identity and Access Management (IAM).

Download and read the updated whitepaper.

If you have questions or want to learn more, contact your account executive or contact AWS Support. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Joseph Travaglini

For over four years, Joe has been a product manager on the Amazon Elastic File System team, responsible for the Amazon EFS security and compliance roadmap, and a product lead for the launch of EFS Infrequent Access. Prior to joining the Amazon EFS team, Joe was Director of Products at Sqrrl, a cybersecurity analytics startup acquired by AWS in 2018.

Author

Peter Buonora

Pete is a Principal Solutions Architect for AWS, with a focus on enterprise cloud strategy and information security. Pete has worked with the largest customers of AWS to accelerate their cloud adoption and improve their overall security posture.

Author

Siva Rajamani

Siva is a Boston-based Enterprise Solutions Architect for AWS. He enjoys working closely with customers and supporting their digital transformation and AWS adoption journey. His core areas of focus are security, serverless computing, and application integration.

AWS and EU data transfers: strengthened commitments to protect customer data

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/aws-and-eu-data-transfers-strengthened-commitments-to-protect-customer-data/

Last year we published a blog post describing how our customers can transfer personal data in compliance with both GDPR and the new “Schrems II” ruling. In that post, we set out some of the robust and comprehensive measures that AWS takes to protect customers’ personal data.

Today, we are announcing strengthened contractual commitments that go beyond what’s required by the Schrems II ruling and currently provided by other cloud providers to protect the personal data that customers entrust AWS to process (customer data). Significantly, these new commitments apply to all customer data subject to GDPR processed by AWS, whether it is transferred outside the European Economic Area (EEA) or not. These commitments are automatically available to all customers using AWS to process their customer data, with no additional action required, through a new supplementary addendum to the AWS GDPR Data Processing Addendum.

Our strengthened contractual commitments include:

  • Challenging law enforcement requests: We will challenge law enforcement requests for customer data from governmental bodies, whether inside or outside the EEA, where the request conflicts with EU law, is overbroad, or where we otherwise have any appropriate grounds to do so.
  • Disclosing the minimum amount necessary: We also commit that if, despite our challenges, we are ever compelled by a valid and binding legal request to disclose customer data, we will disclose only the minimum amount of customer data necessary to satisfy the request.

These strengthened commitments to our customers build on our long track record of challenging law enforcement requests. AWS rigorously limits – or rejects outright – law enforcement requests for data coming from any country, including the United States, where they are overly broad or we have any appropriate grounds to do so.

These commitments further demonstrate AWS’s dedication to securing our customers’ data: it is AWS’s highest priority. We implement rigorous contractual, technical, and organizational measures to protect the confidentiality, integrity, and availability of customer data regardless of which AWS Region a customer selects. Customers have complete control over their data through powerful AWS services and tools that allow them to determine where data will be stored, how it is secured, and who has access.

For example, customers using our latest generation of EC2 instances automatically gain the protection of the AWS Nitro System. Using purpose-built hardware, firmware, and software, AWS Nitro provides unique and industry-leading security and isolation by offloading the virtualization of storage, security, and networking resources to dedicated hardware and software. This enhances security by minimizing the attack surface and prohibiting administrative access while improving performance. Nitro was designed to operate in the most hostile network we could imagine, building in encryption, secure boot, a hardware-based root of trust, a decreased Trusted Computing Base (TCB) and restrictions on operator access. The newly announced AWS Nitro Enclaves feature enables customers to create isolated compute environments with cryptographic controls to assure the integrity of code that is processing highly sensitive data.

AWS encrypts all data in transit, including secure and private connectivity between EC2 instances of all types. Customers can rely on our industry-leading encryption features and take advantage of AWS Key Management Service (AWS KMS) to control and manage their own keys within FIPS 140-2 certified hardware security modules. Regardless of whether data is encrypted or unencrypted, we will always work vigilantly to protect data from any unauthorized access. Find out more about our approach to data privacy.

AWS is constantly working to ensure that our customers can enjoy the benefits of AWS everywhere they operate. We will continue to update our practices to meet the evolving needs and expectations of customers and regulators, and fully comply with all applicable laws in every country in which we operate. With these changes, AWS continues our customer obsession by offering tooling, capabilities, and contractual rights that nobody else does.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds 11 patents in the field of cloud security architecture. Follow Steve on Twitter.

Amplify Flutter is Now Generally Available: Build Beautiful Cross-Platform Apps

Post Syndicated from Martin Beeby original https://aws.amazon.com/blogs/aws/amplify-flutter-is-now-generally-available-build-beautiful-cross-platform-apps/

AWS Amplify is a set of tools and services for building secure, scalable mobile and web applications. Currently, Amplify supports iOS, Android, and JavaScript (web and React Native) and is the quickest and easiest way to build applications powered by Amazon Web Services (AWS).

Flutter is Google’s UI toolkit for building natively compiled mobile, web, and desktop applications from a single code base and is one of the fastest-growing mobile frameworks.

Amplify Flutter brings together AWS Amplify and Flutter, and we designed it for customers who have invested in the Flutter ecosystem and now want to take advantage of the power of AWS.

In August 2020, we launched the developer preview of Amplify Flutter and asked for feedback. We were delighted with the response. After months of refining the service, today we are happy to announce the general availability of Amplify Flutter.

New Amplify Flutter Features in GA
The GA release makes it easier to build powerful Flutter apps with the addition of three new capabilities:

First, we recently added a GraphQL API backed by AWS AppSync as well as REST APIs and handlers using Amazon API Gateway and AWS Lambda.

Second, Amplify DataStore provides a programming model for leveraging shared and distributed data without writing additional code for offline and online scenarios, which makes working with distributed, cross-user data just as simple as working with local-only data.

Finally, we have Hosted UI, which is a great way to implement authentication and works with Amazon Cognito and social identity providers such as Facebook, Google, and Amazon. Hosted UI is a customizable OAuth 2.0 flow that allows you to launch a login screen without embedding the SDK for Cognito or a social provider in your application.

Digging Deeper Into Amplify DataStore
I have been building an app over the past two weeks using Amplify Flutter, and my favorite feature is Amplify DataStore, primarily because it has saved me so much time.

Working with the REST and GraphQL APIs is great in Amplify. However, when I create a mobile app, I’m often thinking about what happens when the mobile device has intermittent connectivity and can’t connect to the API endpoints. Storing data locally and syncing back to the cloud can become quite complicated. Amplify DataStore solves that problem by providing a persistent on-device data store that handles the offline or online scenario.

When I started developing my app, I used DataStore as a stand-alone local database. However, its power became apparent to me when I connected it to a cloud backend. DataStore uses my AWS AppSync API to sync data when network connectivity is available. If the app is offline, DataStore stores the data locally, ready to sync when a connection becomes available.

Amplify DataStore automatically versions data and implements conflict detection and resolution in the cloud using AppSync. The toolchain also generates object definitions for Dart based on the GraphQL schema that I provide.

Writing to Amplify DataStore
Writing to the DataStore is straightforward. The documentation site shows an example that you can try yourself that uses a schema from a blog site.

Post newPost = Post(
    title: 'New Post being saved', rating: 15, status: PostStatus.DRAFT);
await Amplify.DataStore.save(newPost);

Reading from Amplify DataStore
To read from the DataStore, you can query for all records of a given model type.

try {
  List<Post> posts = await Amplify.DataStore.query(Post.classType);
} catch (e) {
  print('Query failed: $e');
}

Synchronization with Amplify DataStore
If you enable data synchronization, there can be different versions of an object across clients, and multiple clients may have updated their copies of an object. DataStore will converge different object versions by applying conflict detection and resolution strategies. The default resolution is called Auto Merge, but other strategies include optimistic concurrency control and custom Lambda functions.

Additional Amplify Flutter Features
Amplify Flutter allows you to work with AWS in three additional ways:

  • Authentication. Amplify Flutter provides an interface for authenticating a user and enables use cases like Sign-Up, Sign-In, and Multi-Factor Authentication. Behind the scenes, it provides the necessary authorization to the other Amplify categories. It comes with built-in support for Cognito user pools and identity pools.
  • Storage. Amplify Flutter provides an interface for managing user content for your app in public, protected, or private storage buckets. It enables use cases like upload, download, and deleting objects and provides built-in support for Amazon Simple Storage Service (S3) by default.
  • Analytics. Amplify Flutter enables you to collect tracking data for authenticated or unauthenticated users in Amazon Pinpoint. You can easily record events and extend the default functionality for custom metrics or attributes as needed.

Available Now
Amplify Flutter is now generally available in all Regions that support AWS Amplify. There is no additional cost for using Amplify Flutter; you pay only for the backend services your applications use above the free tier. Check out the pricing page for more details.

Visit the Amplify Flutter documentation to get started and learn more. Happy coding.

— Martin

Opt-in to the new Amazon SES console experience

Post Syndicated from Simon Poile original https://aws.amazon.com/blogs/messaging-and-targeting/amazon-ses-console-opt-in/

Amazon Web Services (AWS) is pleased to announce the launch of the newly redesigned Amazon Simple Email Service (SES) console. With its streamlined look and feel, the new console makes it even easier for customers to leverage the speed, reliability, and flexibility that Amazon SES has to offer. Customers can access the new console experience via an opt-in link on the classic console.

Amazon SES now offers a new, optimized console to provide customers with a simpler, more intuitive way to create and manage their resources, collect sending activity data, and monitor reputation health. It also has a more robust set of configuration options and new features and functionality not previously available in the classic console.

Here are a few of the improvements customers can find in the new Amazon SES console:

Verified identities

The verified identities section streamlines how customers manage their sender identities in Amazon SES. It replaces the classic console’s identity management section and gives customers a centralized place in which to view, create, and configure both domain and email address identities on one page. Other notable improvements include:

  • DKIM-based verification
    DKIM-based domain verification replaces the previous verification method which was based on TXT records. DomainKeys Identified Mail (DKIM) is an email authentication mechanism that receiving mail servers use to validate email. This new verification method offers customers the added benefit of enhancing their deliverability with DKIM-compliant email providers, and helping them achieve compliance with DMARC (Domain-based Message Authentication, Reporting and Conformance).
  • Amazon SES mailbox simulator
    The new mailbox simulator makes it significantly easier for customers to test how their applications handle different email sending scenarios. From a dropdown, customers select which scenario they’d like to simulate. Scenario options include bounces, complaints, and automatic out-of-office responses. The mailbox simulator provides customers with a safe environment in which to test their email sending capabilities.

Configuration sets

The new console makes it easier for customers to experience the benefits of using configuration sets. Configuration sets enable customers to capture and publish event data for specific segments of their email sending program. It also isolates IP reputation by segment by assigning dedicated IP pools. With a wider range of configuration options, such as reputation tracking and custom suppression options, customers get even more out of this powerful feature.

  • Default configuration set
    One important feature to highlight is the introduction of the default configuration set. By assigning a default configuration set to an identity, customers ensure that the assigned configuration set is always applied to messages sent from that identity at the time of sending. This enables customers to associate a dedicated IP pool or set up event publishing for an identity without having to modify their email headers.

Account dashboard

There is also an account dashboard for the new SES console. This feature provides customers with fast access to key information about their account, including sending limits and restrictions, and overall account health. A visual representation of the customer’s daily email usage helps them ensure that they aren’t approaching their sending limits. Additionally, customers who use the Amazon SES SMTP interface to send emails can visit the account dashboard to obtain or update their SMTP credentials.

Reputation metrics

The new reputation metrics page provides customers with high-level insight into historic bounce and complaint rates. This is viewed at both the account level and the configuration set level. Bounce and complaint rates are two important metrics that Amazon SES considers when assessing a customer’s sender reputation, as well as the overall health of their account.

The redesigned Amazon SES console, with its easy-to-use workflows, will not only enhance the customer onboarding experience, it will also improve day-to-day usage. The Amazon SES team remains committed to investing on behalf of our customers and empowering them to be productive anywhere, anytime. We invite you to opt in to the new Amazon SES console experience and let us know what you think.

Top 10 blog posts of 2020

Post Syndicated from Maddie Bacon original https://aws.amazon.com/blogs/security/top-10-posts-of-2020/

The AWS Security Blog endeavors to provide our readers with a reliable place to find the most up-to-date information on using AWS services to secure systems and tools, as well as thought leadership, and effective ways to solve security issues. In turn, our readers have shown us what’s most important for securing their businesses. To that end, we’re happy to showcase the top 10 most popular posts of 2020:

The top 10 posts of 2020

  1. Use AWS Lambda authorizers with a third-party identity provider to secure Amazon API Gateway REST APIs
  2. How to use trust policies with IAM roles
  3. How to use G Suite as an external identity provider for AWS SSO
  4. Top 10 security items to improve in your AWS account
  5. Automated response and remediation with AWS Security Hub
  6. How to add authentication to a single-page web application with Amazon Cognito OAuth2 implementation
  7. Get ready for upcoming changes in the AWS Single Sign-On user sign-in process
  8. TLS 1.2 to become the minimum for all AWS FIPS endpoints
  9. How to use KMS and IAM to enable independent security controls for encrypted data in S3
  10. Use AWS Firewall Manager VPC security groups to protect your applications hosted on EC2 instances

If you’re new to AWS, or just discovering the Security Blog, we’ve also compiled a list of older posts that customers continue to find useful:

The top five posts of all time

  1. Where’s My Secret Access Key?
  2. Writing IAM Policies: How to Grant Access to an Amazon S3 Bucket
  3. How to Restrict Amazon S3 Bucket Access to a Specific IAM Role
  4. IAM Policies and Bucket Policies and ACLs! Oh, My! (Controlling Access to S3 Resources)
  5. Securely Connect to Linux Instances Running in a Private Amazon VPC

Though these posts were well received, we’re always looking to improve. Let us know what you’d like to read about in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Maddie Bacon

Maddie (she/her) is a technical writer for AWS Security with a passion for creating meaningful content. She previously worked as a security reporter and editor at TechTarget and has a BA in Mathematics. In her spare time, she enjoys reading, traveling, and all things Harry Potter.

Author

Anna Brinkmann

Anna manages the Security Blog and enjoys poking her nose into all the details involved in the blog. If you have feedback about the blog, she’s always available on Slack to hear about it. Anna spends her days drinking lots of black tea, cutting extraneous words, and working to streamline processes.

New IRAP report is now available on AWS Artifact for Australian customers

Post Syndicated from Henry Xu original https://aws.amazon.com/blogs/security/new-irap-report-is-now-available-on-aws-artifact-for-australian-customers/

We are excited to announce that a new Information Security Registered Assessors Program (IRAP) report is now available on AWS Artifact. The new IRAP documentation pack brings new services in scope, and includes a Cloud Security Control Matrix (CSCM) for specific information to help customers assess each applicable control that is required by the Australian Government Information Security Manual (ISM).

The scope of the new IRAP report includes a reassessment of 92 services, and adds 5 additional services: Amazon Macie, AWS Backup, AWS CodePipeline, AWS Control Tower, and AWS X-Ray. With the additional 5 services in scope of this cycle, we now have a total of 97 services assessed at the PROTECTED level. This provides more capabilities for our Australian government customers to deploy workloads at the PROTECTED level across security, storage, developer tools, and governance. For the full list of services, see the AWS Services in Scope page and select the IRAP tab. All services in scope for IRAP are available in the Asia Pacific (Sydney) Region.

We developed the IRAP documentation pack in accordance with the Australian Cyber Security Centre (ACSC)’s cloud security guidance and their Anatomy of a Cloud Assessment and Authorisation framework, which addresses guidance within the Attorney-General’s Department’s Protective Security Policy Framework (PSPF) and the Digital Transformation Agency (DTA)’s Secure Cloud Strategy.

We created the IRAP documentation pack to help Australian government agencies and their partners plan, architect, and risk-assess their workloads based on AWS Cloud services. Please reach out to your AWS representatives to let us know which additional services you would like to see in scope for coming IRAP assessments. We strive to bring more services into the scope of the IRAP PROTECTED level, based on your requirements.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Artifact forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Henry Xu

Henry is an APAC Audit Program Manager in AWS Security Assurance, currently based in Canberra, Australia. He manages our regional compliance programs, including IRAP assessments. With experiences across leadership and technical roles in both public and private sectors, he is passionate about secure cloud adoption. Outside of AWS, Henry enjoys time with his family, and he loves dancing.