
New EC2 T4g Instances – Burstable Performance Powered by AWS Graviton2 – Try Them for Free

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-t4g-instances-burstable-performance-powered-by-aws-graviton2/

Two years ago Amazon Elastic Compute Cloud (EC2) T3 instances were first made available, offering a very cost effective way to run general purpose workloads. While current T3 instances offer sufficient compute performance for many use cases, many customers have told us that they have additional workloads that would benefit from increased peak performance and lower cost.

Today, we are launching T4g instances, a new generation of low-cost burstable instance types powered by AWS Graviton2, a processor custom built by AWS using 64-bit Arm Neoverse cores. Using T4g instances, you can enjoy a performance benefit of up to 40% at a 20% lower cost in comparison to T3 instances, providing the best price/performance for a broader spectrum of workloads.

T4g instances are designed for applications that don’t use CPU at full power most of the time, using the same credit model as T3 instances with unlimited mode enabled by default. Examples of production workloads that require high CPU performance only during times of heavy data processing are web/application servers, small/medium data stores, and many microservices. Compared to previous generations, the performance of T4g instances makes it possible to migrate additional workloads such as caching servers, search engine indexing, and e-commerce platforms.

T4g instances are available in 7 sizes providing up to 5 Gbps of network and up to 2.7 Gbps of Amazon Elastic Block Store (EBS) performance:

Name          vCPUs   Baseline Performance/vCPU   CPU Credits Earned/Hour   Memory
t4g.nano      2       5%                          6                         0.5 GiB
t4g.micro     2       10%                         12                        1 GiB
t4g.small     2       20%                         24                        2 GiB
t4g.medium    2       20%                         24                        4 GiB
t4g.large     2       30%                         36                        8 GiB
t4g.xlarge    4       40%                         96                        16 GiB
t4g.2xlarge   8       40%                         192                       32 GiB
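
Because T4g instances use the same credit model as T3, you can keep an eye on credit consumption with the usual CloudWatch metrics. Here is a minimal sketch (the instance ID and time window are placeholders) that retrieves the hourly average CPUCreditBalance for one instance:

# Hourly average CPU credit balance for a single instance (ID is a placeholder)
$ aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUCreditBalance \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --start-time 2020-09-14T00:00:00Z \
  --end-time 2020-09-15T00:00:00Z \
  --period 3600 \
  --statistics Average

A steadily shrinking balance is a hint that the workload may be better served by a larger size or a non-burstable instance family.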

Free Trial
To make it easier to develop, test, and run your applications on T4g instances, all AWS customers are automatically enrolled in a free trial of the t4g.micro size. From September 2020 until December 31, 2020, you can run a t4g.micro instance and automatically get 750 free hours per month deducted from your bill, including any CPU credits used during those 750 free hours. The 750 hours are calculated in aggregate across all regions. For details on the terms and conditions of the free trial, please refer to the EC2 FAQs.

During the free trial, have a look at this getting started guide on using the Arm-based AWS Graviton processors. There, you can find suggestions on how to build and optimize your applications, using different programming languages and operating systems, and on managing container-based workloads. Some of the tips are specific for the Graviton processor, but most of the content works generally for anyone using Arm to run their code.

Using T4g Instances
You can start an EC2 instance in different ways, for example using the EC2 console, the AWS Command Line Interface (CLI), AWS SDKs, or AWS CloudFormation. For my first T4g instance, I use the AWS CLI:

$ aws ec2 run-instances \
  --instance-type t4g.micro \
  --image-id ami-09a67037138f86e67 \
  --security-groups MySecurityGroup \
  --key-name my-key-pair

The Amazon Machine Image (AMI) I am using is based on Amazon Linux 2. Other platforms are available, such as Ubuntu 18.04 or newer, Red Hat Enterprise Linux 8.0 and newer, and SUSE Enterprise Server 15 and newer. You can find additional AMIs in the AWS Marketplace, for example Fedora, Debian, NetBSD, CentOS, and NGINX Plus. For containerized applications, Amazon ECS and Amazon Elastic Kubernetes Service optimized AMIs are available as well.
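
If you prefer to look up the latest Arm-based Amazon Linux 2 AMI programmatically instead of hard-coding an AMI ID, you can resolve the public SSM parameter for the arm64 image, for example:

# Resolve the current Amazon Linux 2 arm64 AMI ID in this region
$ aws ssm get-parameters \
  --names /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-arm64-gp2 \
  --query 'Parameters[0].Value' \
  --output text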

The security group I selected gives me SSH access to the instance. I connect to the instance and do a general update:

$ sudo yum update -y

Since the kernel has been updated, I reboot the instance.
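
A reboot from the shell is all it takes:

$ sudo reboot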

I’d like to set up this instance as a development environment. I can use it to build new applications, or to recompile my existing apps to the 64-bit Arm architecture. To install most development tools, such as Git, GCC, and Make, I use this group of packages:

$ sudo yum groupinstall -y "Development Tools"
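
To double-check that I am building for 64-bit Arm, a quick verification (the expected output shown is illustrative of a Graviton2 instance):

# Confirm the architecture and that the compiler is present
$ uname -m
aarch64
$ gcc --version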

AWS is working with several open source communities to drive improvements to the performance of software stacks running on AWS Graviton2. For example, you can see our contributions to PHP for Arm64 in this post.

Using the latest versions helps you obtain maximum performance from your Graviton2-based instances. The amazon-linux-extras command enables new versions for some of my favorite programming environments:

$ sudo amazon-linux-extras enable golang1.11 corretto8 php7.4 python3.8 ruby2.6

The output of the amazon-linux-extras command tells me which packages to install with yum:

$ yum clean metadata
$ sudo yum install -y golang java-1.8.0-amazon-corretto \
  php-cli php-pdo php-fpm php-json php-mysqlnd \
  python38 ruby ruby-irb rubygem-rake rubygem-json rubygems

Let’s check the versions of the tools that I just installed:

$ go version
go version go1.13.14 linux/arm64
$ java -version
openjdk version "1.8.0_265"
OpenJDK Runtime Environment Corretto-8.265.01.1 (build 1.8.0_265-b01)
OpenJDK 64-Bit Server VM Corretto-8.265.01.1 (build 25.265-b01, mixed mode)
$ php -v
PHP 7.4.9 (cli) (built: Aug 21 2020 21:45:13) ( NTS )
Copyright (c) The PHP Group
Zend Engine v3.4.0, Copyright (c) Zend Technologies
$ python3.8 -V
Python 3.8.5
$ ruby -v
ruby 2.6.3p62 (2019-04-16 revision 67580) [aarch64-linux]

It looks like I am ready to go! Many more packages are available with yum, such as MariaDB and PostgreSQL. If you’re interested in databases, you might also want to try the preview of Amazon RDS powered by AWS Graviton2 processors.

Available Now
T4g instances are available today in the US East (N. Virginia, Ohio), US West (Oregon), Asia Pacific (Tokyo, Mumbai), and Europe (Frankfurt, Ireland) Regions.

You now have a broad choice of Graviton2-based instances to better optimize your workloads for cost and performance: low cost burstable general-purpose (T4g), general purpose (M6g), compute optimized (C6g) and memory optimized (R6g) instances. Local NVMe-based SSD storage options are also available.

You can use the free trial to develop new applications, or migrate your existing workloads to the AWS Graviton2 processor. Let me know how that goes!

Danilo

AWS Named as a Cloud Leader for the 10th Consecutive Year in Gartner’s Infrastructure & Platform Services Magic Quadrant

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/aws-named-as-a-cloud-leader-for-the-10th-consecutive-year-in-gartners-infrastructure-platform-services-magic-quadrant/

At AWS, we strive to provide you a technology platform that allows for agile development, rapid deployment, and unlimited scale, so that you can free up your resources to focus on innovation for your customers. It’s greatly rewarding to see our efforts recognized not just by our customers, but also by leading analysts.

This year, Gartner announced a new Magic Quadrant for Cloud Infrastructure and Platform Services (CIPS). This is an evolution of their Magic Quadrant for Cloud Infrastructure as a Service (IaaS) for which AWS has been named as a Leader for nine consecutive years.

Customers are using the cloud in broad ways, beyond foundational compute, networking, and storage services. We believe that, for this reason, Gartner is expanding the scope to include additional platform as a service (PaaS) capabilities and is extending coverage to areas such as managed database services, serverless computing, and developer tools.

Today, I am happy to share that AWS has been named as a Leader in the Magic Quadrant for Cloud Infrastructure and Platform Services, and placed highest in Ability to Execute and furthest in Completeness of Vision.

More information on the features and factors that our customers examine when choosing a cloud provider is available in the full report.

Danilo

Gartner, Magic Quadrant for Cloud Infrastructure and Platform Services, Raj Bala, Bob Gill, Dennis Smith, David Wright, Kevin Ji, 1 September 2020 – Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Announcing a second Local Zone in Los Angeles

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/announcing-a-second-local-zone-in-los-angeles/

In December 2019, Jeff Barr published this post announcing the launch of a new Local Zone in Los Angeles, California. A Local Zone extends existing AWS regions closer to end-users, providing single-digit millisecond latency to a subset of AWS services in the zone. Local Zones are attached to parent regions – in this case US West (Oregon) – and access to services and resources is performed through the parent region’s endpoints. This makes Local Zones transparent to your applications and end-users. Applications running in a Local Zone have access to all AWS services, not just the subset in the zone, via Amazon’s redundant and very high bandwidth private network backbone to the parent region.

At the end of the post, Jeff wrote (and I quote) – “In the fullness of time (as Andy Jassy often says), there could very well be more than one Local Zone in any given geographic area. In 2020, we will open a second one in Los Angeles (us-west-2-lax-1b), and are giving consideration to other locations.”

Well – that time has come! I’m pleased to announce that following customer requests, AWS today launched a second Local Zone in Los Angeles to further help customers in the area (and Southern California generally) to achieve even greater high-availability and fault tolerance for their applications, in conjunction with very low latency. These customers have workloads that require very low latency, such as artist workstations, local rendering, gaming, financial transaction processing, and more.

The new Los Angeles Local Zone contains a subset of services, such as Amazon Elastic Compute Cloud (EC2), Amazon Elastic Block Store (EBS), Amazon FSx for Windows File Server and Amazon FSx for Lustre, Elastic Load Balancing, Amazon Relational Database Service (RDS), and Amazon Virtual Private Cloud. Remember, applications running in the zone can access all AWS services and other resources through the zone’s association with the parent region.

Enabling Local Zones
Once you opt in to use one or more Local Zones in your account, they appear as additional Availability Zones for you to use when deploying your applications and resources. The original zone, launched last December, was us-west-2-lax-1a. The additional zone, available to all customers now, is us-west-2-lax-1b. Local Zones can be enabled using the new Settings section of the EC2 Management Console, as shown below.

As noted in Jeff’s original post, you can also opt-in (or out) of access to Local Zones with the AWS Command Line Interface (CLI) (aws ec2 modify-availability-zone-group command), AWS Tools for PowerShell (Edit-EC2AvailabilityZoneGroup cmdlet), or by calling the ModifyAvailabilityZoneGroup API from one of the AWS SDKs.
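
For example, here is a sketch of opting in to the Los Angeles zone group with the CLI and then creating a subnet in the new zone (the VPC ID and CIDR block are placeholders):

# Opt in to the Los Angeles Local Zone group
$ aws ec2 modify-availability-zone-group \
  --region us-west-2 \
  --group-name us-west-2-lax-1 \
  --opt-in-status opted-in

# Create a subnet in the new zone
$ aws ec2 create-subnet \
  --region us-west-2 \
  --vpc-id vpc-0123456789abcdef0 \
  --cidr-block 10.0.8.0/24 \
  --availability-zone us-west-2-lax-1b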

Using Local Zones for Highly-Available, Low-Latency Applications
Local Zones can be used to further improve high availability for applications, as well as ultra-low latency. One scenario is for enterprise migrations using a hybrid architecture. These enterprises have workloads that currently run in existing on-premises data centers in the Los Angeles metro area and it can be daunting to migrate these application portfolios, many of them interdependent, to the cloud. By utilizing AWS Direct Connect in conjunction with a Local Zone, these customers can establish a hybrid environment that provides ultra-low latency communication between applications running in the Los Angeles Local Zone and the on-premises installations without needing a potentially expensive revamp of their architecture. As time progresses, the on-premises applications can be migrated to the cloud in an incremental fashion, simplifying the overall migration process. The diagram below illustrates this type of enterprise hybrid architecture.

Anecdotally, we have heard from customers using this type of hybrid architecture, combining Local Zones with AWS Direct Connect, that they are achieving sub-1.5ms latency communication between the applications hosted in the Local Zone and those in the on-premises data center in the LA metro area.

Virtual desktops for rendering and animation workloads are another scenario for Local Zones. For these workloads, latency is critical, and the addition of a second Local Zone for Los Angeles gives additional failover capability, without sacrificing latency, should the need arise.

As always, the teams are listening to your feedback – thank you! – and are working on adding Local Zones in other locations, along with the availability of additional services, including Amazon ECS, Amazon Elastic Kubernetes Service, Amazon ElastiCache, Amazon Elasticsearch Service, and Amazon Managed Streaming for Apache Kafka. If you’re an AWS customer based in Los Angeles or Southern California, with a requirement for even greater high availability and very low latency for your workloads, then we invite you to check out the new Local Zone. More details and pricing information can be found on the Local Zones page.

— Steve

New EBS Volume Type (io2) – 100x Higher Durability and 10x More IOPS/GiB

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-ebs-volume-type-io2-more-iops-gib-higher-durability/

We launched EBS Volumes with Provisioned IOPS way back in 2012. These volumes are a great fit for your most I/O-hungry and latency-sensitive applications because you can dial in the level of performance that you need, and then (with the launch of Elastic Volumes in 2017) change it later.

Over the years, we have increased the ratio of IOPS per gibibyte (GiB) of SSD-backed storage several times, most recently in August 2016. This ratio started out at 10 IOPS per GiB, and has grown steadily to 50 IOPS per GiB. In other words, the bigger the EBS volume, the more IOPS it can be provisioned to deliver, with a per-volume upper bound of 64,000 IOPS. This change in ratios has reduced storage costs by a factor of 5 for throughput-centric workloads.

Also, based on your requests and your insatiable desire for more performance, we have raised the maximum number of IOPS per EBS volume multiple times over the years.

The August 2014 change in the I/O request size made EBS 16x more cost-effective for throughput-centric workloads.

Bringing the various numbers together, you can think of Provisioned IOPS volumes as being defined by capacity, IOPS, and the ratio of IOPS per GiB. You should also think about durability, which is expressed in percentage terms. For example, io1 volumes are designed to deliver 99.9% durability, which is 20x more reliable than typical commodity disk drives.

Higher Durability & More IOPS
Today we are launching the io2 volume type, with two important benefits, at the same price as the existing io1 volumes:

Higher Durability – The io2 volumes are designed to deliver 99.999% durability, making them 2000x more reliable than a commodity disk drive, further reducing the possibility of a storage volume failure and helping to improve the availability of your application. By the way, in the past we expressed durability in terms of an Annual Failure Rate, or AFR. The new, percentage-based model is consistent with our other storage offerings, and also communicates expectations for success, rather than for failure.

More IOPS – We are increasing the IOPS per GiB ratio yet again, this time to 500 IOPS per GiB. You can get higher performance from your EBS volumes, and you can reduce or outright eliminate any over-provisioning that you might have done in the past to achieve the desired level of performance.

Taken together, these benefits make io2 volumes a perfect fit for your high-performance, business-critical databases and workloads. This includes SAP HANA, Microsoft SQL Server, and IBM DB2.

You can create new io2 volumes, and you can easily change the type of an existing volume to io2 in the console or with the AWS CLI:

$ aws ec2 modify-volume --volume-id vol-0b3c663aeca5aabb7 --volume-type io2
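
Creating a brand-new io2 volume looks just like creating an io1 volume, with only the volume type changing. At the 500 IOPS/GiB ratio, a 10 GiB volume can be provisioned with up to 5,000 IOPS (subject to the 64,000 per-volume maximum). A sketch, with illustrative size and IOPS values:

# Create a 10 GiB io2 volume provisioned with 5,000 IOPS
$ aws ec2 create-volume \
  --availability-zone us-east-1a \
  --volume-type io2 \
  --size 10 \
  --iops 5000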

io2 volumes support all features of io1 volumes with the exception of Multi-Attach, which is on the roadmap.

Available Now
You can make use of io2 volumes in the US East (Ohio), US East (N. Virginia), US West (N. California), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Middle East (Bahrain) Regions today.

Jeff;

Announcing the newest AWS Heroes – August 2020

Post Syndicated from Ross Barich original https://aws.amazon.com/blogs/aws/announcing-the-newest-aws-heroes-august-2020/

The AWS Heroes program recognizes a select few individuals who go above and beyond to share AWS knowledge and teach others about AWS, all while helping make building AWS skills accessible to many. These leaders have an incredible impact within technical communities worldwide and their efforts are greatly appreciated.

Today we are excited to introduce to you the latest AWS Heroes, including the first Heroes from Greece and Portugal:

Angela Timofte – Copenhagen, Denmark

Serverless Hero Angela Timofte is a Data Platform Manager at Trustpilot. Passionate about knowledge sharing, coaching, and public speaking, she is committed to leading by example and empowering people to seek and develop the skills they need to achieve their goals. She is driven to build scalable solutions with the latest technologies while migrating from monolithic solutions using serverless applications and event-driven architecture. She is a co-organizer of the Copenhagen AWS User Group and frequent speaker about serverless technologies at AWS Summits, AWS Community Days, ServerlessDays, and more.

Avi Keinan – Tel Aviv, Israel

Community Hero Avi Keinan is a Senior Cloud Engineer at DoIT International. He specializes in AWS Infrastructure, serverless, and security solutions, enjoys solving complex problems, and is passionate about helping others. Avi is a member of many online AWS forums and groups where he assists both novice and experienced cloud users with solving complex and simple solutions. He also frequently posts AWS-related videos on his YouTube channel.

Hirokazu Hatano – Tokyo, Japan

Community Hero Hirokazu Hatano is the founder & CEO of Operation Lab, LLC. He created the Japan AWS User Group (JAWS-UG) CLI chapter in 2014 and has hosted 165 AWS CLI events over a six-year period, with 4,133 attendees. In 2020, despite the challenging circumstances of COVID-19, he has successfully transformed the AWS CLI events from in-person to virtual. In the first quarter of 2020, he held 12 online hands-on events with 1,170 attendees. In addition, he has organized six other JAWS-UG chapters, making him one of the most frequent JAWS-UG organizers.

Ian McKay – Sydney, Australia

Community Hero Ian McKay is the Cloud Lead at Kablamo. Between helping clients, he loves the chance to build open-source projects with a focus on AWS automation and tooling. Ian built and maintains both the “Former2” and “Console Recorder for AWS” projects which help developers author Infrastructure as Code templates (such as CloudFormation) from existing resources or while in the AWS Management Console. Ian also enjoys speaking at meetups, co-hosting podcasts, engaging with the community on Slack, and posting his latest experiences with AWS on his blog.

Jacopo Nardiello – Milan, Italy

Container Hero Jacopo Nardiello is CEO and Founder at SIGHUP. He regularly speaks at conferences and events, and closely supports the local Italian community where he currently runs the Kubernetes and Cloud Native Milano meetup and regularly collaborates with AWS and the AWS Milan User Group on all things containers. He is passionate about developing and delivering battle-tested solutions based on EKS, Fargate, and OSS software – both from the CNCF and AWS – like AWS Firecracker (the base for Lambda), AWS Elasticsearch Open Distro, and other projects.

Jérémie Rodon – Paris, France

Community Hero Jérémie Rodon is a 9x certified AWS Cloud Architect working for Devoteam Revolve. He has 3 years of consulting experience, designing solutions in various environments from small businesses to CAC40 companies, while also empowering clients on AWS by delivering AWS courses as an AWS Authorized Instructor Champion. His interests lie mainly in security, and especially cryptography: he has done presentations explaining how AWS KMS works under the hood and has written a blog post explaining why the quantum crypto-calypse will not happen.

Kittaya Niennattrakul – Bangkok, Thailand

Community Hero Kittaya Niennattrakul (Tak) is a Product Manager and Assistant Managing Director at Dailitech. Tak has helped run the AWS User Group Thailand since 2014, and is also one of the AWS Asian Women’s Association leaders. In 2019, she and the Thailand User Group team, with support from AWS, organized two big events: AWS Meetup – Career Day (400+ attendees), and AWS Community Day in Bangkok (200+ attendees). Currently, Tak helps write a blog for sharing useful information with the AWS community. In 2020, AWS User Group Thailand has more than 8,000 members and continues growing.

Konstantinos Siaterlis – Athens, Greece

Machine Learning Hero Konstantinos Siaterlis is Head of Data Engineering at Orfium, a Music Rights Management company based in Malibu. During his day-to-day, he drives adoption/integration and training on several AWS services, including Amazon SageMaker. He is the co-organizer of AWS User Group Greece, and a blogger on TheLastDev, writing about Data Science and introductory tutorials on AWS Machine Learning Services.

Kyle Bai – Taichung City, Taiwan

Container Hero Kyle Bai, also known as Kai-Ren Bai, is a Site Reliability Engineer at MaiCoin. He is a creator and contributor of a few open-source projects on GitHub about AWS, Containers, and Kubernetes. Kyle is a co-organizer of the Cloud Native Taiwan User Group. He is also a technical speaker, with experience delivering presentations at meetups and conferences including AWS re:Invent re:Cap Taipei, AWS Summit Taipei, AWS Developer Year-end Meetup Taipei, AWS Taiwan User Group, and so on.

Linghui Gong – Shanghai, China

Community Hero Linghui Gong is VP of Engineering at Strikingly, Inc. One of the early AWS users in China, Linghui has been leading his team to drive all critical architecture evolutions on AWS, including building a cross-region website cluster that serves millions of sites worldwide. Linghui shared the story of his teams’ cross-region website cluster on AWS at re:Invent 2018. He has also presented AWS best practices twice at AWS Summit Beijing, as well as at many other AWS events.

Luca Bianchi – Milan, Italy

Serverless Hero Luca Bianchi is Chief Technology Officer at Neosperience. He writes on Medium about serverless and Machine Learning, where he focuses on technologies such as AWS CDK, AWS Lambda, and Amazon SageMaker. He is co-founder of Serverless Meetup Italy and co-organizer of ServerlessDays Milano and ServerlessDays Italy. As an active member of the serverless community, Luca is a regular speaker at user groups, helping developers and teams to adopt cloud technologies.

Matthieu Napoli – Lyon, France

Serverless Hero Matthieu Napoli is a software engineer passionate about helping developers to create. Fascinated by how serverless unlocks creativity, he works on making serverless accessible to everyone. Matthieu enjoys maintaining open-source projects including Bref, a framework for creating serverless PHP applications on AWS. Alongside Bref, he sends a monthly newsletter containing serverless news relevant to PHP developers. Matthieu recently created the Serverless Visually Explained course which is packed with use cases, visual explanations, and code samples.

Mike Chambers – Brisbane, Australia

Machine Learning Hero Mike Chambers is an independent trainer, specializing in AWS and machine learning. He was one of the first in the world to become AWS certified and has gone on to train well over a quarter of a million students in the art of AWS, machine learning, and cloud. An active advocate of AWS’s machine learning capabilities, Mike has traveled the world helping study groups, delivering talks at AWS Meetups, all while posting widely on social media. Mike’s passion is to help the community get as excited about technology as he is.

Noah Gift – Raleigh-Durham, USA

Machine Learning Hero Noah Gift is the founder of Pragmatic A.I. Labs. He lectures on cloud computing at top universities globally, including the Duke and Northwestern graduate data science programs. He designs graduate machine learning, MLOps, A.I., and Data Science courses, consults on Machine Learning and Cloud Architecture for AWS, and is a massive advocate of AWS Machine Learning and putting machine learning models into production. Noah has authored several books, including Pragmatic AI, Python for DevOps, and Cloud Computing for Data Analysis.

Olalekan Elesin – Berlin, Germany

Machine Learning Hero Olalekan Elesin is an engineer at heart, with a proven record of leading successful machine learning projects as a data science engineer and a product manager, including using Amazon SageMaker to deliver an AI enabled platform for Scout24, which reduced productionizing ML projects from at least 7 weeks to 3 weeks. He has given several talks across Germany on building machine learning products on AWS, including Serverless Product Recommendations with Amazon Rekognition. For his machine learning blog posts, he writes about automating machine learning workflows with Amazon SageMaker and Amazon Step Functions.

Peter Hanssens – Sydney, Australia

Serverless Hero Peter Hanssens is a community leader: he has led the Sydney Serverless community for the past 3 years, and also built out Data Engineering communities in Melbourne, Sydney, and Brisbane. His passion is helping others to learn and grow their careers through shared experiences. He ran the first ever ServerlessDays in Australia in 2019 and in 2020 he has organized AWS Serverless Community Day ANZ, ServerlessDays ANZ, and DataEngBytes, a community-built data engineering conference.

Ruofei Ma – Beijing, China

Container Hero Ruofei Ma works as a principal software engineer for FreeWheel Inc, where he focuses on developing cloud-native applications with AWS. He enjoys sharing technology with others and is a lecturer at time.geekbang.org, the biggest IT knowledge-sharing platform in China. He enjoys sharing his experience on various meetups, such as the CNCF webinar and GIAC. He is a committee member of the largest service mesh community in China, servicemesher.com, and often posts blogs to share his best practices with the community.

Sheen Brisals – London, United Kingdom

Serverless Hero Sheen Brisals is a Senior Engineering Manager at The LEGO Group and is actively involved in the global Serverless community. Sheen loves being part of Serverless conferences and enjoys sharing knowledge with community members. He talks about Serverless at many events around the world, and his insights into Serverless technology and adoption strategies can be found on his Medium channel. You can also find him tweeting about Serverless success stories and technical thoughts.

Sridevi Murugayen – Chennai, India

Data Hero Sridevi Murugayen has 16+ years of IT experience and is a passionate developer and problem solver. She is an active co-organizer of the AWS User Group in Chennai, and helps host and deliver AWS Community Days, Meetups, and technical sessions to the developer community. She is a regular speaker in community events focusing on analytics solutions using AWS Analytics services, including Amazon EMR, Amazon Redshift, Amazon Kinesis, Amazon S3, and AWS Glue. She strongly believes in diversity and inclusion for a successful society and loves encouraging and enabling women technologists.

Stéphane Maarek – Lisbon, Portugal

Data Hero Stéphane Maarek is an online instructor at Udemy with many courses designed to teach users how to build on AWS (including the AWS Certified Data & Analytics Specialty certification) and deepen their knowledge about AWS services. Stephane also has a keen interest in teaching about Apache Kafka and recently partnered with AWS to launch an in-depth course on Amazon MSK (Managed Streaming for Apache Kafka). Stephane is passionate about technology and how it can help improve lives and processes.

Tom McLaughlin – Boston, USA

Serverless Hero Tom McLaughlin is a cloud infrastructure and operations engineer who has worked in companies ranging from startups to the enterprise. What drew Tom to serverless early on was the prospect of having no hosts or container management platform to build and manage, which yielded the question: what would he do if the servers he was responsible for went away? He’s found enjoyment in a community of people that are both pushing the future of technology and trying to understand its effects on the future of people and businesses.

If you’d like to learn more about the new Heroes, or connect with a Hero near you, please visit the AWS Hero website.

Ross;

Amazon Braket – Go Hands-On with Quantum Computing

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-braket-go-hands-on-with-quantum-computing/

Last year I told you about Amazon Braket and explained the basics of quantum computing, starting from qubits and progressing to quantum circuits. During the preview, AWS customers such as Enel, Fidelity, and Volkswagen have been using Amazon Braket to explore and gain experience with quantum computing.

I am happy to announce that Amazon Braket is now generally available and that you can now make use of both the classically-powered circuit simulator and quantum computers from D-Wave, IonQ, and Rigetti. Today I am going to show you both elements, creating and simulating a simple circuit and then running it on real hardware (also known as a QPU, or Quantum Processing Unit).

Creating and Simulating a Simple Circuit
As I mentioned in my earlier post, you can access Amazon Braket through a notebook-style interface. I start by opening the Amazon Braket Console, choose the desired region (more on that later), and click Create notebook instance:

I give my notebook a name (amazon-braket-jeff-2), select an instance type, and choose an IAM role. I also opt out of root access and forego the use of an encryption key for this example. I can choose to run the notebook in a VPC, and I can (in the Additional settings) change the size of the notebook’s EBS volume. I make all of my choices and click Create notebook instance to proceed:

My notebook is ready in a few minutes and I click to access it:

The notebook model is based on Jupyter, and I start by browsing the examples:

I click on the Superdense Coding example to open it, and then read the introduction and explanation (the math and the logic behind this communication protocol are explained here if you are interested in learning more):

The notebook can run code on the simulator that is part of Braket, or on any of the available quantum computers:

# Select device arn for simulator
device = AwsDevice("arn:aws:braket:::device/quantum-simulator/amazon/sv1")

This code chooses the SV1 managed simulator, which shows its strength for larger circuits (25 or more qubits), and on those that require lots of compute power to simulate. For small circuits, I can also use the local simulator that is part of the Braket SDK, and which runs on the notebook instance:

device = LocalSimulator()

I step through the cells of the notebook, running each one in turn by clicking the Run arrow. The code in the notebook uses the Braket API to build a quantum circuit from scratch, and then displays it in ASCII form (q0 and q1 are qubits, and the T axis indicates time, expressed in moments):

The next cell creates a task that runs the circuit on the chosen device and displays the results:

The get_result function is defined within the notebook. It submits a task to the device, monitors the status of the task, and waits for it to complete. Then it captures the results (a set of probabilities), plots them on a bar graph, and returns the probabilities. As you could learn by looking at the code in the function, the circuit is run 1000 times; each run is known as a “shot.” You can see from the screen shot above that the counts returned by the task (504 and 496) add up to 1000. Amazon Braket allows you to specify between 10 and 100,000 shots per task (depending on the device); more shots leads to greater accuracy.

The remaining cells in the notebook run the same circuit with the other possible messages and verify that the results are as expected. You can run this (and many other examples) yourself to learn more!

Running on Real Hardware
Amazon Braket provides access to QPUs from three manufacturers. I click Devices in the Console to learn more:

Each QPU is associated with a particular AWS region, and also has a unique ARN. I can click a device card to learn more about the technology that powers the device (this reads like really good sci-fi, but I can assure you that it is real), and I can also see the ARN:

I create a new cell in the notebook and copy/paste some code to run the circuit on the Rigetti Aspen-8:

device = AwsDevice("arn:aws:braket:::device/qpu/rigetti/Aspen-8")
counts = get_result(device, circ, s3_folder)
print(counts)

This creates a task and queues it up for the QPU. I can switch to the console in the region associated with the QPU and see the tasks:

The D-Wave QPU processes Braket tasks 24/7. The other QPUs currently process Amazon Braket tasks during specific time windows, and tasks are queued if created while the window is closed. When my task has finished, its status changes to COMPLETED and a CloudWatch Event is generated:

The Amazon Braket API
I used the console to create my notebook and manage my quantum computing tasks, but API and CLI support is also available. Here are the most important API functions:

CreateQuantumTask – Create a task that runs on the simulator or on a QPU.

GetQuantumTask – Get information about a task.

SearchDevices – Use a property-based search to locate suitable QPUs.

GetDevice – Get detailed information about a particular QPU.

As you can see from the code in the notebooks, you can write code that uses the Amazon Braket SDK, including the Circuit, Gates, Moments, and AsciiCircuitDiagram modules.
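
These operations are also exposed through the AWS CLI under the braket command. As a rough sketch (the task ARN is a placeholder; check the CLI help for the exact argument names in your version):

# Describe the SV1 simulator
$ aws braket get-device \
  --device-arn arn:aws:braket:::device/quantum-simulator/amazon/sv1

# Check the status of a previously created task (ARN is a placeholder)
$ aws braket get-quantum-task \
  --quantum-task-arn arn:aws:braket:us-west-1:123456789012:quantum-task/example-task-id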

Things to Know
Here are a couple of important things to keep in mind when you evaluate Amazon Braket:

Emerging Technology – Quantum computing is an emerging field. Although some of you are already experts, it will take some time for the rest of us to understand the concepts and the technology, and to figure out how to put them to use.

Computing Paradigms – The QPUs that you can access through Amazon Braket support two different paradigms. The IonQ and Rigetti QPUs and the simulator support circuit-based quantum computing, and the D-Wave QPU supports quantum annealing. You cannot run a problem designed for one paradigm on a QPU that supports the other one, so you will need to choose the appropriate QPU early in your exploratory journey.

Pricing – Each task that you run will incur a per-task charge and an additional per-shot charge that is specific to the type of QPU that you use. Use of the simulator incurs an hourly charge, billed by the second, with a 15 second minimum. Notebooks pricing is the same as for SageMaker. For more information, check out the Amazon Braket Pricing page.

Give it a Shot!
As I noted earlier, this is an emerging and exciting field and I am looking forward to hearing back after you have had a chance to put Amazon Braket to use.

Jeff;

AWS Wavelength Zones Are Now Open in Boston & San Francisco

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-wavelength-zones-are-now-open-in-boston-san-francisco/

We announced AWS Wavelength at AWS re:Invent 2019. As a quick recap, we have partnered with multiple 5G telecommunication providers to embed AWS hardware and software in their datacenters. The goal is to allow developers to build and deliver applications that can benefit from single-digit millisecond latency.

In the time since the announcement we have been working with our partners and a wide range of pilot customers: enterprises, startups, and app developers. Partners and customers alike are excited by the possibilities that Wavelength presents, while also thrilled to find that most of what they know about AWS and EC2 still applies!

Wavelength Zones Now Open
I am happy to be able to announce that the first two Wavelength Zones are now open, one in Boston and the other in San Francisco. These zones are now accessible upon request for developers who want to build apps to service Verizon Wireless customers in those metropolitan areas.

Initially, we expect developers to focus on gaming, media processing, ecommerce, social media, medical image analysis, and machine learning inferencing apps. I suspect that some of the most compelling and relevant use cases are still to be discovered, and that the field is wide open for innovative thinking!

Using a Wavelength Zone
As I mentioned earlier, just about everything that you know about AWS and EC2 still applies. Once you have access to a Wavelength Zone, you can launch your first EC2 instance in minutes. You can initiate the onboarding process by filling out this short sign-up form, and we’ll do our best to get you set up.

Each Wavelength Zone is associated with a specific AWS region known as the parent region. This is US East (N. Virginia) for the Wavelength Zone in Boston, and US West (N. California) for the Wavelength Zone in San Francisco. I’m going to use the Wavelength Zone in Boston (us-east-1-wl1-bos-wlz-1), and will show you how to launch an EC2 instance using the AWS Command Line Interface (CLI) (console, API, and CloudFormation support is also available).

I can inspect the output of describe-availability-zones to confirm that I have access to the desired Wavelength Zone:

$ aws ec2 describe-availability-zones
...
||  ZoneName             |  us-east-1f             ||
|+-----------------------+-------------------------+|
||                AvailabilityZones                ||
|+---------------------+---------------------------+|
||  GroupName          |  us-east-1-wl1            ||
||  NetworkBorderGroup |  us-east-1-wl1-bos-wlz-1  ||
||  OptInStatus        |  opted-in                 ||
||  RegionName         |  us-east-1                ||
||  State              |  available                ||
||  ZoneId             |  use1-wl1-bos-wlz1        ||
||  ZoneName           |  us-east-1-wl1-bos-wlz-1  ||
|+---------------------+---------------------------+|

I can create a new Virtual Private Cloud (VPC) or use an existing one:

$ aws ec2 --region us-east-1 create-vpc \
  --cidr-block 10.0.0.0/16

I capture the VPC Id (vpc-01d94be2191cb2dfa) because I will need it again. I’ll also need the Id of the desired security group. For simplicity I’ll use the VPC’s default group:

$ aws ec2 --region us-east-1 describe-security-groups \
  --filters Name=vpc-id,Values=vpc-01d94be2191cb2dfa \
  | grep GroupId

Next, I create a subnet to represent the target Wavelength Zone:

$ aws ec2 --region us-east-1 create-subnet \
  --cidr-block 10.0.0.0/24  \
  --availability-zone us-east-1-wl1-bos-wlz-1 \
  --vpc-id vpc-01d94be2191cb2dfa

Moving right along, I create a route table and associate it with the subnet:

$ aws ec2 --region us-east-1 create-route-table \
  --vpc-id vpc-01d94be2191cb2dfa

$ aws ec2 --region us-east-1 associate-route-table \
  --route-table-id rtb-0c3dc2a16c70d40b5 \
  --subnet-id subnet-0bc3ad0d67e79469c

Next, I create a new type of VPC resource called a Carrier Gateway. This resource is used to communicate (in this case) with Verizon wireless devices in the Boston area. I also create a route from the gateway:

$ aws ec2 --region us-east-1 create-carrier-gateway \
  --vpc-id vpc-01d94be2191cb2dfa
$ 
$ aws ec2 --region us-east-1 create-route \
  --route-table-id rtb-01af227e9ea18c5ab --destination-cidr-block 0.0.0.0/0 \
  --carrier-gateway-id cagw-020c231b6e33ad1ef

The next step is to allocate a carrier IP address for the instance that I am about to launch, create an Elastic Network Interface (ENI), and associate the two (the network border group represents the set of IP addresses within the Wavelength Zone):

$ aws ec2 --region us-east-1 allocate-address \
  --domain vpc --network-border-group us-east-1-wl1-bos-wlz-1
$
$ aws ec2 --region us-east-1 create-network-interface \
  --subnet-id subnet-0bc3ad0d67e79469c
$
$ aws ec2 --region us-east-1 associate-address \
  --allocation-id eipalloc-00c2c378c065887f1 --network-interface-id eni-0af68d5ce897ed2b8

And now I can launch my EC2 instance:

 $ aws ec2 --region us-east-1 run-instances \
  --instance-type r5d.2xlarge \
  --network-interface '[{"DeviceIndex":0,"NetworkInterfaceId":"eni-0af68d5ce897ed2b8"}]' \
  --image-id ami-09d95fab7fff3776c \
  --key-name keys-jbarr-us-east

The instance is accessible from devices on the Verizon network in the Boston area, as defined by their coverage map; the Carrier IP addresses do not include Internet ingress. If I need to SSH to it for development or debugging, I can use a bastion host or assign a second IP address.

I can see my instance in the EC2 Console, and manage it as I would any other instance (I edited the name using the console):

I can create an EBS volume in the Wavelength Zone:

And attach it to the instance:

I can create snapshots of the volume, and they will be stored in the parent region.
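
For reference, here is roughly what the create and attach steps above look like from the CLI (the volume size, volume ID, and instance ID are placeholders):

# Create an EBS volume in the Wavelength Zone
$ aws ec2 --region us-east-1 create-volume \
  --size 100 \
  --availability-zone us-east-1-wl1-bos-wlz-1

# Attach it to the instance running in the zone
$ aws ec2 --region us-east-1 attach-volume \
  --volume-id vol-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 \
  --device /dev/sdf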

The next step is to build an application that runs in a Wavelength Zone. You can read Deploying Your First 5G-Enabled Application with AWS Wavelength to learn how to do this!

Things to Know
Here are some things to keep in mind as you think about how to put Wavelength to use:

Pricing – You will be billed for your EC2 instances on an On-Demand basis, and you can also purchase an Instance Savings Plan.

Instance Types – We are launching with support for t3 (medium and xlarge), r5 (2xlarge), and g4 (2xlarge) instances.

Other AWS Services – In addition to launching EC2 instances directly, you can create ECS clusters, EKS clusters (using Kubernetes 1.17), and you can make use of Auto Scaling. Many other services, including AWS Identity and Access Management (IAM), AWS CloudFormation, and Amazon CloudWatch will work as expected with no additional effort on your part.

More Wavelength Zones – We plan to launch more Wavelength Zones with Verizon in the US by the end of 2020. Our work with other carrier partners is proceeding at full speed, and I’ll let you know when those Wavelength Zones become available.

Jeff;

New – Using Amazon GuardDuty to Protect Your S3 Buckets

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-using-amazon-guardduty-to-protect-your-s3-buckets/

As we anticipated in this post, the anomaly and threat detection for Amazon Simple Storage Service (S3) activities that was previously available in Amazon Macie has now been enhanced and reduced in cost by over 80% as part of Amazon GuardDuty. This expands GuardDuty threat detection coverage beyond workloads and AWS accounts to also help you protect your data stored in S3.

This new capability enables GuardDuty to continuously monitor and profile S3 data access events (usually referred to as data plane operations) and S3 configurations (control plane APIs) to detect suspicious activities such as requests coming from an unusual geo-location, disabling of preventative controls such as S3 Block Public Access, or API call patterns consistent with an attempt to discover misconfigured bucket permissions. To detect possibly malicious behavior, GuardDuty uses a combination of anomaly detection, machine learning, and continuously updated threat intelligence. For your reference, here’s the full list of GuardDuty S3 threat detections.

When threats are detected, GuardDuty produces detailed security findings to the console and to Amazon EventBridge, making alerts actionable and easy to integrate into existing event management and workflow systems, or trigger automated remediation actions using AWS Lambda. You can optionally deliver findings to an S3 bucket to aggregate findings from multiple regions, and to integrate with third party security analysis tools.

If you are not using GuardDuty yet, S3 protection will be on by default when you enable the service. If you are using GuardDuty, you can simply enable this new capability with one-click in the GuardDuty console or through the API. For simplicity, and to optimize your costs, GuardDuty has now been integrated directly with S3. In this way, you don’t need to manually enable or configure S3 data event logging in AWS CloudTrail to take advantage of this new capability. GuardDuty also intelligently processes only the data events that can be used to generate threat detections, significantly reducing the number of events processed and lowering your costs.
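
If you prefer scripting the setup, S3 protection is part of the detector’s data source configuration. A hedged sketch with the CLI (the detector ID is a placeholder returned by list-detectors; verify the JSON shape against the current GuardDuty API reference):

# Find the detector ID for this region
$ aws guardduty list-detectors

# Turn on S3 protection for that detector
$ aws guardduty update-detector \
  --detector-id 12abc34d567e8fa901bc2d34e56789f0 \
  --data-sources '{"S3Logs":{"Enable":true}}'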

If you are part of a centralized security team that manages GuardDuty across your entire organization, you can manage all accounts from a single account using the integration with AWS Organizations.

Enabling S3 Protection for an AWS Account
I already have GuardDuty enabled for my AWS account in this region. Now, I want to add threat detection for my S3 buckets. In the GuardDuty console, I select S3 Protection and then Enable. That’s it. For broader protection, I repeat this process for all regions enabled in my account.

After a few minutes, I start seeing new findings related to my S3 buckets. I can select each finding to get more information on the possible threat, including details on the source actor and the target action.

After a few days, I select the Usage section of the console to monitor the estimated monthly costs of GuardDuty in my account, including the new S3 protection. I can also see which S3 buckets are contributing most to the costs. Well, it turns out I didn’t have much traffic on my buckets recently.

Enabling S3 Protection for an AWS Organization
To simplify management of multiple accounts, GuardDuty uses its integration with AWS Organizations to allow you to delegate an account to be the administrator for GuardDuty for the whole organization.

Now, the delegated administrator can enable GuardDuty for all accounts in the organization in a region with one click. You can also set Auto-enable to ON to automatically include new accounts in the organization. If you prefer, you can add accounts by invitation. You can then go to the S3 Protection page under Settings to enable S3 protection for the entire organization.

When selecting Auto-enable, the delegated administrator can also choose to enable S3 protection automatically for new member accounts.
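
The organization-level setup can be scripted as well. A sketch, assuming the account ID and detector ID are placeholders, with the first command run from the organization’s management account:

# Delegate a GuardDuty administrator account for the organization
$ aws guardduty enable-organization-admin-account \
  --admin-account-id 111122223333

# From the delegated administrator: auto-enable GuardDuty and S3 protection for new members
$ aws guardduty update-organization-configuration \
  --detector-id 12abc34d567e8fa901bc2d34e56789f0 \
  --auto-enable \
  --data-sources '{"S3Logs":{"AutoEnable":true}}'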

Available Now
As always, with Amazon GuardDuty, you only pay for the quantity of logs and events processed to detect threats. This includes API control plane events captured in CloudTrail, network flow captured in VPC Flow Logs, DNS request and response logs, and with S3 protection enabled, S3 data plane events. These sources are ingested by GuardDuty through internal integrations when you enable the service, so you don’t need to configure any of these sources directly. The service continually optimizes logs and events processed to reduce your cost, and displays your usage split by source in the console. If configured in multi-account, usage is also split by account.

There is a 30-day free trial for the new S3 threat detection capabilities. This applies as well to accounts that already have GuardDuty enabled and add the new S3 protection capability. During the trial, the estimated cost based on your S3 data event volume is calculated in the GuardDuty console Usage tab. In this way, while you evaluate these new capabilities at no cost, you can understand what your monthly spend would be.

GuardDuty for S3 protection is available in all regions where GuardDuty is offered. For regional availability, please see the AWS Region Table. To learn more, please see the documentation.

Danilo

Announcing the New AWS Community Builders Program!

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/announcing-the-new-aws-community-builders-program/

We continue to be amazed by the enthusiasm for AWS knowledge sharing in technical communities. Many experienced AWS advocates are passionate about helping others build on AWS by sharing their challenges, success stories, and code. Others who are newer to AWS are showing a similar enthusiasm for community building and are asking how they can get more involved in community activities. These builders are seeking better ways to connect with one another, share best practices, and receive resources & mentorship to help improve community knowledge sharing.

To help address these points, we are excited to announce the new AWS Community Builders Program which offers technical resources, mentorship, and networking opportunities to AWS enthusiasts and emerging thought leaders who are passionate about sharing knowledge and connecting with the technical community. As of today, this program is open for anyone to apply to join!

Members of the program will receive:

  • Access to AWS product teams and information about new services and features
  • Mentorship from AWS subject matter experts on a variety of topics, including content creation, community building, and securing speaking engagements
  • AWS Promotional Credits and other helpful resources to support content creation and community-based work

Any individual who is passionate about building on AWS can apply to join the AWS Community Builders program. The application process is open to AWS builders worldwide, and the program seeks applicants from all regions, demographics, and underrepresented communities.

While there is no single specific criterion for being accepted into the program, applications will generally be reviewed for evidence and accuracy of technical content, such as blog posts, open source contributions, presentations, and online knowledge sharing, as well as community organization efforts, such as hosting AWS Community Days, AWS User Groups, or other community-based events. Equally important, the program seeks individuals from diverse backgrounds who are enthusiastic about getting more involved in these types of activities! The program will accept a limited number of applicants per year.

Please apply to be an AWS Community Builder today. To learn more, you can get connected via a variety of community resources.

Channy and Jason;

Now Open – Fourth Availability Zone in the AWS Asia Pacific (Seoul) Region

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/now-open-fourth-availability-zone-in-the-aws-asia-pacific-seoul-region/

South Korea is considered the most wired country in the world with an internet penetration rate of 96%, according to Pew Research Center. To meet high customer demand, AWS launched our Asia Pacific (Seoul) Region in 2016 and expanded the region with a third Availability Zone (AZ) in May 2019. Now AWS has thousands of active customers from startups to enterprises in Korea.

Today, I am very happy to announce that we added a fourth AZ to the Asia Pacific (Seoul) Region to support the high demand of our growing Korean customer base. This fourth AZ provides customers with additional flexibility to architect scalable, fault-tolerant, and highly available applications in the Asia Pacific (Seoul) Region.

Seoul now becomes the fourth Region with four or more AZs, following US East (N. Virginia), US West (Oregon), and Asia Pacific (Tokyo). AZs located in AWS Regions consist of one or more discrete data centers, each with redundant power, networking, and connectivity, and each housed in separate facilities.

Now, you can select the fourth AZ in the Seoul Region via the AWS Management Console, the AWS Command Line Interface (CLI), and the AWS SDKs.
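
For example, you can list the zones (and their zone IDs) that your account sees in the Seoul Region with:

$ aws ec2 describe-availability-zones \
  --region ap-northeast-2 \
  --query 'AvailabilityZones[].[ZoneName,ZoneId,State]' \
  --output table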

We’re excited to bring a new AZ to serve our incredible customers in unprecedented times. Here are some examples from different industries, courtesy of my colleagues in Korea.

First, I would like to introduce how startups utilize the AWS Cloud for their fast-growing businesses:

Market Kurly is a fast-growing startup and one of Korea’s first grocery delivery platforms to satisfy a typically picky domestic customer base. It has seen exponential growth from $2.9 million in its first year of business in 2015 to $157 million in revenue in 2018. AWS helped to solve a variety of technical challenges managing that scale along with the continual cycle of orders and deliveries.

Sangseok Lim, CTO, told us: “COVID-19 has increased the number of customers visiting Market Kurly. Since we had already moved our infrastructure to the AWS Cloud, flexible and easy expansion enabled us to process massive customer orders without any problems using AWS Auto Scaling and Amazon RDS, so we were able to respond to massive customer demand. In the future, we plan to add features that can satisfy our customers by utilizing various AWS services.”

beNX, a subsidiary of Big Hit Entertainment, the company managing global superstars BTS, specializes in digital platforms and fan services. In June 2019, it launched Weverse, a global fan community platform, and Weverse Shop, a commerce platform for artists’ fans. Within 9 months, Weverse has grown to a platform supporting a total of 7 million subscribers, with an average of 1.5 million visitors per day. Weverse Shop is also rapidly growing as a large-scale commerce platform with 250,000 transactions per month.

Wooseok Seo, CEO of beNX, told us: “A large number of global fans visit simultaneously within a very short time when an artist’s content and official merchandise (MD) are released, and our platform stably handles traffic that is 100 times higher than usual. Using AWS, we have built a stable service infrastructure, and we are successfully implementing automation and improving resource management. beNX is providing innovative experiences to global fans based on solid technology and deep insight into fandom.”

Large enterprise customers in Korea are leveraging AWS’s Artificial Intelligence (AI) services to transform their online businesses:

Korean Air is South Korea’s largest airline and the country’s flag carrier. Even as Korean Air celebrates its 50-year anniversary, the company is building on AWS with an eye toward the next 50 years of excellence. For example, Korean Air is starting on its journey by launching innovative projects with Amazon SageMaker to help predict and preempt maintenance for its fleet of aircraft.

Kenny Chang, EVP and CMO of Korean Air told us “The reason why we thought Amazon SageMaker was good was not because it matched our own engine wear and tear result, but SageMaker was able to give us future prediction of what potential patterns of further wear and tear could be, based on machine learning. This helps with improving aircraft safety.”

SK Telecom is the leading mobile telecommunications company in Korea and one of the AWS Wavelength partners delivering ultra-low latency applications for 5G devices. SK Telecom developed a Korean-language natural language processor called KoGPT-2 using AWS services to process the massive amounts of data needed to develop the sophisticated, open-source artificial language model.

Eric Davis, Vice President of Global AI Development Group at SK Telecom told us “We expect that KoGPT-2 will contribute to enhancing the technological capabilities of small and medium-sized enterprises and startups looking to create innovative applications that interpret the Korean language, such as chatbots for the elderly, and search engines for screening out fake news about COVID-19.”

Finally, here are customer cases from the public sector to manage usage spikes from large scale events:

Korea Broadcasting System (KBS) is the national public broadcaster of South Korea. As one of the largest national television networks, it operates a wide range of radio, television, and online services. In 2017, KBS migrated its on-premises data center to AWS.

Youngjin Sun, Director of the Digital Media Bureau, told us “In the on-premises environment, it was hard to scale for online traffic in a short period of time for big events like the 2018 Asian Games. But in AWS, we could do all of this with a few clicks. Because we are charged only for the time resources are used, we have been able to save about 50% or more on large-scale events.”

Sookmyung Women’s University, founded in 1906, has tens of thousands of female students and is dedicated to advancing women’s education. The university uses AWS to scale on demand to meet spikes in usage during online training.

Hee-Jung Yoon, a professor and the director of the Center for Teaching and Learning, told us “We had operated on-premises servers for our Learning Management System (LMS), but by introducing AWS, we realized that the greatest advantage of the cloud is flexibility. Capacity can be expanded depending on the number of users, and it can flexibly respond to massive connections on the first day of online classes during the COVID-19 situation. This is one of the reasons we chose AWS.”

The launch of this new AZ brings AWS to 77 AZs within 24 geographic regions around the world with plans for nine more AZs and three more AWS Regions in Indonesia, Japan, and Spain. We are continuously looking at expanding our infrastructure footprint globally, driven largely by customer demand. For more information on our global infrastructure, please check out this interactive map.

Please contact AWS Customer Support or your account manager for questions on service availability in this new Availability Zone.

Channy;

This article was translated into Korean (한국어) on the AWS Korea Blog.

AWS IoT SiteWise – Now Generally Available

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/aws-iot-sitewise-now-generally-available/

At AWS re:Invent 2018, we announced a preview of AWS IoT SiteWise, a fully managed AWS IoT service that you can use to collect, organize, and analyze data from industrial equipment at scale.

Getting performance metrics from industrial equipment is challenging because data is often locked into proprietary on-premises data stores and typically requires specialized expertise to retrieve and place in a format that is useful for analysis. AWS IoT SiteWise simplifies this process by providing software running on a gateway that resides in your facilities and automates the process of collecting and organizing industrial equipment data.

With AWS IoT SiteWise, you can easily monitor equipment across your industrial facilities to identify waste, such as breakdown of equipment and processes, production inefficiencies, and defects in products.

Last year, at AWS re:Invent 2019, a number of new features were launched, including SiteWise Monitor. Today, I am excited to announce that AWS IoT SiteWise is now generally available in the US East (N. Virginia), US West (Oregon), Europe (Frankfurt), and Europe (Ireland) Regions. Let’s see how AWS IoT SiteWise works!

AWS IoT SiteWise – Getting Started
You can easily explore AWS IoT SiteWise by creating a demo wind farm with a single click in the AWS IoT SiteWise console. The demo deploys an AWS CloudFormation template to create assets and generate sample data for up to a week.

You can find the SiteWise demo in the upper-right corner of the AWS IoT SiteWise console; choose Create demo. The demo takes around 3 minutes to create demo models and assets representing a wind farm.

Once you see the created assets in the console, you can create virtual representations of your industrial operations with AWS IoT SiteWise assets. An asset can represent a device, a piece of equipment, or a process that uploads one or more data streams to the AWS Cloud. For example, a wind turbine might send air temperature, propeller rotation speed, and power output time-series measurements to asset properties in AWS IoT SiteWise.
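If you prefer to script this instead of using the demo, asset models and assets can also be created with the AWS CLI. The following is only a rough sketch; the model name, property definitions, and asset name are illustrative, not taken from the demo:

aws iotsitewise create-asset-model \
  --asset-model-name "Wind Turbine" \
  --asset-model-properties '[
    {"name": "Air Temperature", "dataType": "DOUBLE", "unit": "Celsius", "type": {"measurement": {}}},
    {"name": "Rotation Speed", "dataType": "DOUBLE", "unit": "RPM", "type": {"measurement": {}}}
  ]'

aws iotsitewise create-asset \
  --asset-name "Turbine 1" \
  --asset-model-id <asset-model-id-returned-by-the-previous-command>

The second command references the asset model ID returned by the first one.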

You can also securely collect data from the plant floor, from sensors, equipment, or a local on-premises gateway, and upload it to the AWS Cloud using gateway software called AWS IoT SiteWise Connector. It runs on common industrial gateway devices running AWS IoT Greengrass, and reads data directly from servers and historians over the OPC Unified Architecture (OPC UA) protocol. AWS IoT SiteWise also accepts MQTT data through AWS IoT Core, and supports direct ingestion using a REST API.
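For the direct ingestion path, the idea is to send timestamped values for an asset property. A minimal sketch with the AWS CLI, assuming a hypothetical property alias and an example timestamp, might look like this:

aws iotsitewise batch-put-asset-property-value \
  --entries '[
    {
      "entryId": "turbine1-temperature",
      "propertyAlias": "/windfarm/turbine1/temperature",
      "propertyValues": [
        {
          "value": {"doubleValue": 26.3},
          "timestamp": {"timeInSeconds": 1593000000},
          "quality": "GOOD"
        }
      ]
    }
  ]'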

You can learn how to collect data using AWS IoT SiteWise Connector in Part 1 of the AWS IoT Blog series and in the service documentation.

SiteWise Monitor – Creating Managed Web Applications
Once data is stored in AWS IoT SiteWise, you can stream live data in near real time and query historical data to build downstream IoT applications, but we also provide a no-code alternative with SiteWise Monitor. With SiteWise Monitor, you can explore your library of assets, and create and share operational dashboards with plant operators for real-time monitoring and visualization of equipment health and output.

In the SiteWise Monitor console, choose Create portal to create a web application that is accessible from a web browser on any web-enabled desktop, tablet, or phone, with sign-in through your corporate credentials using the AWS Single Sign-On (SSO) experience.
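If you want to automate portal creation instead of clicking through the console, a hedged sketch of the equivalent CLI call, with a placeholder contact email and service role ARN, could look like this:

aws iotsitewise create-portal \
  --portal-name WindFarmPortal \
  --portal-contact-email operations@example.com \
  --role-arn arn:aws:iam::123456789012:role/MySiteWiseMonitorServiceRole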

Administrators can create one or more web applications to easily share access to asset data with any team in your organization to accelerate insights.

If you click a given portal link and sign in with your AWS SSO credentials, you can visualize and monitor your device, process, and equipment data to quickly identify issues and improve operational efficiency with SiteWise Monitor.

You can create a dashboard in a new project for your team so they can visualize and understand your project data. Choose a visualization type that best displays your data, and rearrange and resize visualizations to create a layout that fits your team’s needs.

The dashboard shows asset data and computed metrics in near real time, or you can compare and analyze historical time-series data from multiple assets and different time periods. A new dashboard feature lets you specify thresholds on the charts and have the charts change color when those thresholds are exceeded.

You can also learn how to monitor key measurements and metrics of your assets in near real time using SiteWise Monitor in Part 2 of the AWS IoT Blog series.

Furthermore, you can subscribe to AWS IoT SiteWise modeled data via the AWS IoT Core rules engine, enable condition monitoring and send notifications or alerts using AWS IoT Events in near real time, and enable Business Intelligence (BI) reporting on historical data using Amazon QuickSight. For more detail, please refer to the hands-on guide in Part 3 of the AWS IoT Blog series.

Now Available!
With AWS IoT SiteWise, you only pay for what you use with no minimum fees or mandatory service usage. You are billed separately for usage of messaging, data storage, data processing, and SiteWise Monitor. This approach provides you with billing transparency because you only pay for the specific AWS IoT SiteWise resources you use. Please visit the pricing page to learn more and estimate your monthly bill using the AWS IoT SiteWise Calculator.

You can watch interesting talks about business cases and solutions in ‘Driving Overall Equipment Effectiveness (OEE) Across Your Industrial Facilities’ and ‘Building an End-to-End Industrial IoT (IIoT) Solution with AWS IoT’. To learn more, please visit the AWS IoT SiteWise website, or check out the tutorial and developer guide.

Explore AWS IoT SiteWise with Bill Vass and Cherie Wong!

Please send us feedback either in the forum for AWS IoT SiteWise or through your usual AWS support contacts.

Channy;

AWS Well-Architected Framework – Updated White Papers, Tools, and Best Practices

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-well-architected-framework-updated-white-papers-tools-and-best-practices/

We want to make sure that you are designing and building AWS-powered applications in the best possible way. Back in 2015 we launched AWS Well-Architected to make sure that you have all of the information that you need to do this right. The framework is built on five pillars:

Operational Excellence – The ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures.

Security – The ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.

Reliability – The ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.

Performance Efficiency – The ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.

Cost Optimization – The ability to run systems to deliver business value at the lowest price point.

Whether you are a startup, a unicorn, or an enterprise, the AWS Well-Architected Framework will point you in the right direction and then guide you along the way as you build your cloud applications.

Lots of Updates
Today we are making a host of updates to the Well-Architected Framework! Here’s an overview:

Well-Architected Framework – This update includes new and updated questions, best practices, and improvement plans, plus additional examples and architectural considerations. We have added new best practices in operational excellence (organization), reliability (workload architecture), and cost optimization (practice Cloud Financial Management). We are also making the framework available in eight additional languages (Spanish, French, German, Japanese, Korean, Brazilian Portuguese, Simplified Chinese, and Traditional Chinese). Read the Well-Architected Framework (PDF, Kindle) to learn more.

Pillar White Papers & Labs – We have updated the white papers that define each of the five pillars with additional content, including new & updated questions, real-world examples, additional cross-references, and a focus on actionable best practices. We also updated the labs that accompany each pillar:

Well-Architected Tool – We have updated the AWS Well-Architected Tool to reflect the updates that we made to the Framework and to the White Papers.

Learning More
In addition to the documents that I linked above, you should also watch these videos.

In this video, AWS customer Cox Automotive talks about how they are using AWS Well-Architected to deliver results across over 200 platforms:

In this video, my colleague Rodney Lester tells you how to build better workloads with the Well-Architected Framework and Tool:

Get Started Today
If you are like me, a lot of interesting services and ideas are stashed away in a pile of things that I hope to get to “someday.” Given the importance of the five pillars that I mentioned above, I’d suggest that Well-Architected does not belong in that pile, and that you should do all that you can to learn more and to become well-architected as soon as possible!

Jeff;

New – Create Amazon RDS DB Instances on AWS Outposts

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-create-amazon-rds-db-instances-on-aws-outposts/

Late last year I told you about AWS Outposts and invited you to Order Yours Today. As I said at the time, this is a comprehensive, single-vendor compute and storage offering designed to meet the needs of customers who need local processing and very low latency in their data centers and on factory floors. Outposts uses the same hardware that we use in AWS public regions.

I first told you about Amazon RDS back in 2009. This fully managed service makes it easy for you to launch, operate, and scale a relational database. Over the years we have added support for multiple open source and commercial databases, along with tons of features, all driven by customer requests.

DB Instances on AWS Outposts
Today I am happy to announce that you can now create RDS DB Instances on AWS Outposts. We are launching with support for MySQL and PostgreSQL, with plans to add other database engines in the future (as always, let us know what you need so that we can prioritize it).

You can make use of important RDS features including scheduled backups to Amazon Simple Storage Service (S3), built-in encryption at rest and in transit, and more.

Creating a DB Instance
I can create a DB Instance using the RDS Console, API (CreateDBInstance), CLI (create-db-instance), or CloudFormation (AWS::RDS::DBInstance).
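Before walking through the Console, here is a rough sketch of what the equivalent CLI call could look like for an Outpost. The subnet group and security group IDs are placeholders, and the DB subnet group must contain a subnet that lives on the Outpost:

aws rds create-db-instance \
  --db-instance-identifier jb-database-2 \
  --db-instance-class db.m5.large \
  --engine mysql \
  --engine-version 8.0.17 \
  --allocated-storage 100 \
  --master-username admin \
  --master-user-password <secret-password> \
  --db-subnet-group-name my-outpost-subnet-group \
  --vpc-security-group-ids sg-0123456789abcdef0 \
  --storage-encrypted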

I’ll use the Console, taking care to select the AWS Region that serves as “home base” for my Outpost. I open the Console and click Create database to get started:

I select On-premises for the Database location, and RDS on Outposts for the On-premises database option:

Next, I choose the Virtual Private Cloud (VPC). The VPC must already exist, and it must have a subnet for my Outpost. I also choose the Security Group and the Subnet:

Moving forward, I select the database engine, and version. We’re launching with support for MySQL 8.0.17 and PostgreSQL 12.2-R1, with plans to add more engines and versions based on your feedback:

I give my DB Instance a name (jb-database-2), and enter the credentials for the master user:

Then I choose the size of the instance. I can select between Standard classes (db.m5):

and Memory Optimized classes (db.r5):

Next, I configure the desired amount of SSD storage:

One thing to keep in mind is that each Outpost has a large, but finite amount of compute power and storage. If there’s not enough of either one free when I attempt to create the database, the request will fail.

Within the Additional configuration section I can set up several database options, customize my backups, and set up the maintenance window. Once everything is ready to go, I click Create database:

As usual when I use RDS, the state of my instance starts out as Creating and transitions to Available when my DB Instance is ready:

After the DB instance is ready, I simply configure my code (running in my VPC or in my Outpost) to use the new endpoint:

Things to Know
Here are a couple of things to keep in mind about this new way to use Amazon RDS:

Operations & Functions – Much of what you already know about RDS works as expected and is applicable. You can rename, reboot, stop, start, tag DB instances, and you can make use of point-in-time recovery; you can scale the instance up and down, and automatic minor version upgrades work as expected. You cannot make use of read replicas or create highly available clusters.

Backup & Recover – Automated backups work as expected, and are stored in S3. You can use them to create a fresh DB Instance in the cloud or in any of your Outposts. Manual snapshots also work, and are stored on the Outpost. They can be used to create a fresh DB Instance on the same Outpost.

Encryption – The storage associated with your DB instance is encrypted, as are your DB snapshots, both with KMS keys.

Pricing – RDS on Outposts pricing is based on a management fee that is charged on an hourly basis for each database that is managed. For more information, check out the RDS on Outposts pricing page.

Available Now
You can start creating RDS DB Instances on your Outposts today.

Jeff;

 

Announcing the Porting Assistant for .NET

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/announcing-the-porting-assistant-for-net/

.NET Core is the future of .NET! Version 4.8 of the .NET Framework is the last major version to be released, and Microsoft has stated that it will receive only bug-, reliability-, and security-related fixes going forward. For applications where you want to continue to take advantage of future investments and innovations in the .NET platform, you need to consider porting your applications to .NET Core. There are also additional reasons to consider porting to .NET Core, such as benefiting from innovation in Linux and open source, improved application scaling and performance, and reduced licensing spend. Porting can, however, entail significant manual effort, some of which is undifferentiated, such as updating references to project dependencies.

When porting .NET Framework applications, developers need to search for compatible NuGet packages and update those package references in the application’s project files, which also need to be updated to the .NET Core project file format. Additionally, they need to discover replacement APIs since .NET Core contains a subset of the APIs available in the .NET Framework. As porting progresses, developers have to wade through long lists of compile errors and warnings to determine the best or highest priority places to continue chipping away at the task. Needless to say, this is challenging, and the added friction could be a deterrent for customers with large portfolios of applications.

Today we announced the Porting Assistant for .NET, a new tool that helps customers analyze and port their .NET Framework applications to .NET Core running on Linux. The Porting Assistant for .NET assesses both the application source code and the full tree of public API and NuGet package dependencies to identify those incompatible with .NET Core and guides developers to compatible replacements when available. The suggestion engine for API and package replacements is designed to improve over time as the assistant learns more about the usage patterns and frequency of missing packages and APIs.

The Porting Assistant for .NET differs from other tools in that it is able to assess the full tree of package dependencies, not just incompatible APIs. It also uses solution files as the starting point, which makes it easier to assess monolithic solutions containing large numbers of projects, instead of having to analyze and aggregate information on individual binaries. These and other abilities give developers a jump start in the porting process.

Analyzing and porting an application
Getting started with porting applications using the Porting Assistant for .NET is easy, with just a couple of prerequisites. First, I need to install the .NET Core 3.1 SDK. Second, I need a credential profile (compatible with the AWS Command Line Interface (CLI), although the CLI is not used or required). The credential profile is used to collect compatibility information on the public APIs and packages (from NuGet and core Microsoft packages) used in your application and the public NuGet packages that it references. With those prerequisites taken care of, I download and run the installer for the assistant.

With the assistant installed, I check out my application source code and launch the Porting Assistant for .NET from the Start menu. If I’ve previously assessed some solutions, I can view and open those from the Assessed solutions screen, enabling me to pick up where I left off. Or I can select Get started, as I’ll do here, from the home page to begin assessing my application’s solution file.

I’m asked to select the credential profile I want to use, and here I can also elect to opt-in to share my telemetry data. Sharing this data helps to further improve on suggestion accuracy for all users as time goes on, and is useful in identifying issues, so we hope you consider opting-in.

I click Next, browse to select the solution file that I want, and then click Assess to begin the analysis. For this post I’m going to use the open source NopCommerce project.

When analysis is complete I am shown the overall results – the number of incompatible packages the application depends on, APIs it uses that are incompatible, and an overall Portability score. This score is an estimation of the effort required to port the application to .NET Core, based on the number of incompatible APIs it uses. If I’m working on porting multiple applications I can use this to identify and prioritize the applications I want to start on first.

Let’s dig into the assessment overview to see what was discovered. Clicking on the solution name takes me to a more detailed dashboard and here I can see the projects that make up the application in the solution file, and for each the numbers of incompatible package and API dependencies, along with the portability score for each particular project. The current port status of each project is also listed, if I’ve already begun porting the application and have reopened the assessment.

Note that with no project selected in the Projects tab the data shown in the Project references, NuGet packages, APIs, and Source files tabs is solution-wide, but I can scope the data if I wish by first selecting a project.

The Project references tab shows me a graphical view of the package dependencies, and I can see where the majority of the dependencies are consumed, in this case the Nop.Core, Nop.Services, and Nop.Web.Framework projects. This view can help me decide where I might want to start first, so as to get the most ‘bang for my buck’ when I begin. I can also select projects to see the specific dependencies more clearly.

The NuGet packages tab gives me a look at the compatible and incompatible dependencies, and suggested replacements if available. The APIs tab lists the incompatible APIs, what package they are in, and how many times they are referenced. Source files lists all of the source files making up the projects in the application, with an indication of how many incompatible API calls can be found in each file. Selecting a source file opens a view showing me where the incompatible APIs are being used and suggested package versions to upgrade to, if they exist, to resolve the issue. If there is no suggested replacement by simply updating to a different package version then I need to crack open a source editor and update the code to use a different API or approach. Here I’m looking at the report for DependencyRegistrar.cs, that exists in the Nop.Web project, and uses the Autofac NuGet package.

Let’s start porting the application, starting with the Nop.Core project. First, I navigate back to the Projects tab, select the project, and then click Port project. During porting the tool will help me update project references to NuGet packages, and also updates the project files themselves to the newer .NET Core formats. I have the option of either making a copy of the application’s solution file, project files, and source files, or I can have the changes made in-place. Here I’ve elected to make a copy.

Clicking Save copies the application source code to the selected location, and opens the Port projects view where I can set the new target framework version (in this case netcoreapp3.1), and the list of NuGet dependencies for the project that I need to upgrade. For each incompatible package the Porting Assistant for .NET gives me a list of possible version upgrades and for each version, I am shown the number of incompatible APIs that will either remain, or will additionally become incompatible. For the package I selected here there’s no difference but for cases where later versions potentially increase the number of incompatible APIs that I would then need to manually fix up in the source code, this indication helps me to make a trade-off decision around whether to upgrade to the latest versions of a package, or stay with an older one.

Once I select a version, the Deprecated API calls field alongside the package will give me a reminder of what I need to fix up in a code editor. Clicking on the value summarizes the deprecated calls.

I continue with this process for each package dependency and when I’m ready, click Port to have the references updated. Using my IDE, I can then go into the source files and work on replacing the incompatible API calls using the Porting Assistant for .NET‘s source file and deprecated API list views as a reference, and follow a similar process for the other projects in my application.

Improving the suggestion engine
The suggestion engine behind the Porting Assistant for .NET is designed to learn and give improved results over time, as customers opt-in to sharing their telemetry. The data models behind the engine, which are the result of analyzing hundreds of thousands of unique packages with millions of package versions, are available on GitHub. We hope that you’ll consider helping improve accuracy and completeness of the results by contributing your data. The user guide gives more details on how the data is used.

The Porting Assistant for .NET is free to use and is available now.

— Steve

Amazon RDS Proxy – Now Generally Available

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/amazon-rds-proxy-now-generally-available/

At AWS re:Invent 2019, we launched the preview of Amazon RDS Proxy, a fully managed, highly available database proxy for Amazon Relational Database Service (RDS) that makes applications more scalable, more resilient to database failures, and more secure. Following the preview of the MySQL engine, we extended it with PostgreSQL compatibility. Today, I am pleased to announce that RDS Proxy is now generally available for both engines.

Many applications, including those built on modern serverless architectures using AWS Lambda, AWS Fargate, Amazon ECS, or Amazon EKS, can have a large number of open connections to the database server, and may open and close database connections at a high rate, exhausting database memory and compute resources.

Amazon RDS Proxy allows applications to pool and share connections established with the database, improving database efficiency, application scalability, and security. With RDS Proxy, failover times for Amazon Aurora and RDS databases are reduced by up to 66%, and database credentials, authentication, and access can be managed through integration with AWS Secrets Manager and AWS Identity and Access Management (IAM).

Amazon RDS Proxy can be enabled for most applications with no code changes. You don’t need to provision or manage any additional infrastructure, and you pay only per vCPU of the database instance for which the proxy is enabled.

Amazon RDS Proxy – Getting started
You can get started with Amazon RDS Proxy in just a few clicks by going to the AWS Management Console and creating an RDS Proxy endpoint for your RDS databases. In the navigation pane, choose Proxies and then Create proxy. You can also see the proxy panel below.

To create your proxy, specify the Proxy identifier, a unique name of your choosing, and choose the database engine, either MySQL or PostgreSQL. Choose the encryption setting if you want the proxy to enforce TLS/SSL for all connections between the application and the proxy, and specify a time period that a client connection can be idle before the proxy closes it.

A client connection is considered idle when the application doesn’t submit a new request within the specified time after the previous request completed. The underlying connection between the proxy and database stays open and is returned to the connection pool. Thus, it’s available to be reused for new client connections.

Next, choose one RDS DB instance or Aurora DB cluster in Database to access through this proxy. The list only includes DB instances and clusters with compatible database engines, engine versions, and other settings.

Specify Connection pool maximum connections, a value between 1 and 100. This setting represents the percentage of the max_connections value that RDS Proxy can use for its connections. For example, if your database’s max_connections is 1,000 and you set this value to 90, the proxy can open up to 900 connections to the database. If you only intend to use one proxy with this DB instance or cluster, you can set it to 100. For details about how RDS Proxy uses this setting, see Connection Limits and Timeouts.

Choose at least one Secrets Manager secret associated with the RDS DB instance or Aurora DB cluster that you intend to access with this proxy, and select an IAM role that has permission to access the Secrets Manager secrets you chose. If you don’t have an existing secret, please click Create a new secret before setting up the RDS proxy.
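If you want to prepare that secret from the command line instead, a minimal sketch is below; the secret name and credentials are placeholders:

aws secretsmanager create-secret \
  --name channy-proxy-credentials \
  --description "Database credentials for RDS Proxy" \
  --secret-string '{"username":"admin_user","password":"<your-password>"}'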

After setting the VPC Subnets and a security group, click Create proxy. If you want to configure more settings in detail, please refer to the documentation.

You can see the new RDS proxy after waiting a few minutes and then point your application to the RDS Proxy endpoint. That’s it!

You can also create an RDS proxy easily via the AWS CLI:

aws rds create-db-proxy \
    --db-proxy-name channy-proxy \
    --role-arn iam_role \
    --engine-family { MYSQL|POSTGRESQL } \
    --vpc-subnet-ids space_separated_list \
    [--vpc-security-group-ids space_separated_list] \
    [--auth ProxyAuthenticationConfig_JSON_string] \
    [--require-tls | --no-require-tls] \
    [--idle-client-timeout value] \
    [--debug-logging | --no-debug-logging] \
    [--tags comma_separated_list]
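Note that when you create a proxy from the CLI, you also register the target DB instance or Aurora DB cluster yourself (the console does this for you as part of Create proxy). A sketch of that step, using the Aurora cluster identifier from the failover example below, would be:

aws rds register-db-proxy-targets \
  --db-proxy-name channy-proxy \
  --db-cluster-identifiers cluster-56-2019-11-14-1399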

How RDS Proxy works
Let’s see an example that demonstrates how open connections continue working during a failover when you reboot a database or it becomes unavailable due to a problem. This example uses a proxy named channy-proxy and an Aurora DB cluster with DB instances instance-8898 and instance-9814. When the failover-db-cluster command is run from the Linux command line, the writer instance that the proxy is connected to changes to a different DB instance. You can see that the DB instance associated with the proxy changes while the connection remains open.

$ mysql -h channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com -u admin_user -p
Enter password:
...
mysql> select @@aurora_server_id;
+--------------------+
| @@aurora_server_id |
+--------------------+
| instance-9814 |
+--------------------+
1 row in set (0.01 sec)

mysql>
[1]+ Stopped mysql -h channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com -u admin_user -p
$ # Initially, instance-9814 is the writer.
$ aws rds failover-db-cluster --db-cluster-id cluster-56-2019-11-14-1399
JSON output
$ # After a short time, the console shows that the failover operation is complete.
$ # Now instance-8898 is the writer.
$ fg
mysql -h channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com -u admin_user -p

mysql> select @@aurora_server_id;
+--------------------+
| @@aurora_server_id |
+--------------------+
| instance-8898 |
+--------------------+
1 row in set (0.01 sec)

mysql>
[1]+ Stopped mysql -h channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com -u admin_user -p
$ aws rds failover-db-cluster --db-cluster-id cluster-56-2019-11-14-1399
JSON output
$ # After a short time, the console shows that the failover operation is complete.
$ # Now instance-9814 is the writer again.
$ fg
mysql -h channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com -u admin_user -p

mysql> select @@aurora_server_id;
+--------------------+
| @@aurora_server_id |
+--------------------+
| instance-9814 |
+--------------------+
1 row in set (0.01 sec)
+---------------+---------------+
| Variable_name | Value |
+---------------+---------------+
| hostname | ip-10-1-3-178 |
+---------------+---------------+
1 row in set (0.02 sec)

With RDS Proxy, you can build applications that can transparently tolerate database failures without needing to write complex failure handling code. RDS Proxy automatically routes traffic to a new database instance while preserving application connections.

You can review the demo for an overview of RDS Proxy and the steps you need to take to access RDS Proxy from a Lambda function.
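If you enable IAM authentication on the proxy, clients can connect with a short-lived token instead of a password. A rough sketch from a shell, using the example proxy endpoint above and a placeholder certificate bundle file, might look like this:

TOKEN=$(aws rds generate-db-auth-token \
  --hostname channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com \
  --port 3306 \
  --username admin_user \
  --region us-east-1)

mysql -h channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com \
  -u admin_user --password="$TOKEN" \
  --ssl-ca=AmazonRootCA1.pem --enable-cleartext-plugin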

If you want to know how your serverless applications maintain excellent performance even at peak loads, please read this blog post. For a deeper dive into using RDS Proxy for MySQL with serverless, visit this post.

The following are a few things that you should be aware of:

  • Currently, RDS Proxy is available for the MySQL and PostgreSQL engine families. This includes RDS for MySQL 5.6 and 5.7, and RDS for PostgreSQL 10.11 and 11.5.
  • In an Aurora cluster, all of the connections in the connection pool are handled by the Aurora primary instance. To perform load balancing for read-intensive workloads, you still use the reader endpoint directly for the Aurora cluster.
  • Your RDS Proxy must be in the same VPC as the database. Although the database can be publicly accessible, the proxy can’t be.
  • Proxies don’t support compressed mode. For example, they don’t support the compression used by the --compress or -C options of the mysql command.

Now Available!
Amazon RDS Proxy is generally available in US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (London), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney) and Asia Pacific (Tokyo) regions for Aurora MySQL, RDS for MySQL, Aurora PostgreSQL, and RDS for PostgreSQL, and it includes support for Aurora Serverless and Aurora Multi-Master.

Take a look at the product page, pricing, and the documentation to learn more. Please send us feedback either in the AWS forum for Amazon RDS or through your usual AWS support contacts.

Channy;

Find Your Most Expensive Lines of Code – Amazon CodeGuru Is Now Generally Available

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/find-your-most-expensive-lines-of-code-amazon-codeguru-is-now-generally-available/

Bringing new applications into production, maintaining their code base as they grow and evolve, and at the same time responding to operational issues, is a challenging task. For this reason, you can find many ideas on how to structure your teams, on which methodologies to apply, and how to safely automate your software delivery pipeline.

At re:Invent last year, we introduced in preview Amazon CodeGuru, a developer tool powered by machine learning that helps you improve your applications and troubleshoot issues with automated code reviews and performance recommendations based on runtime data. During the last few months, many improvements have been launched, including a more cost-effective pricing model, support for Bitbucket repositories, and the ability to start the profiling agent using a command line switch, so that you no longer need to modify the code of your application, or add dependencies, to run the agent.

You can use CodeGuru in two ways:

  • CodeGuru Reviewer uses program analysis and machine learning to detect potential defects that are difficult for developers to find, and recommends fixes in your Java code. The code can be stored in GitHub (now also in GitHub Enterprise), AWS CodeCommit, or Bitbucket repositories. When you submit a pull request on a repository that is associated with CodeGuru Reviewer, it provides recommendations for how to improve your code. Each pull request corresponds to a code review, and each code review can include multiple recommendations that appear as comments on the pull request.
  • CodeGuru Profiler provides interactive visualizations and recommendations that help you fine-tune your application performance and troubleshoot operational issues using runtime data from your live applications. It currently supports applications written in Java virtual machine (JVM) languages such as Java, Scala, Kotlin, Groovy, Jython, JRuby, and Clojure. CodeGuru Profiler can help you find the most expensive lines of code, in terms of CPU usage or introduced latency, and suggest ways you can improve efficiency and remove bottlenecks. You can use CodeGuru Profiler in production, and when you test your application with a meaningful workload, for example in a pre-production environment.

Today, Amazon CodeGuru is generally available with the addition of many new features.

In CodeGuru Reviewer, we included the following:

  • Support for GitHub Enterprise – You can now scan your pull requests and get recommendations against your source code on GitHub Enterprise on-premises repositories, together with a description of what’s causing the issue and how to remediate it.
  • New types of recommendations to solve defects and improve your code – For example, checking input validation, to avoid issues that can compromise security and performance, and looking for multiple copies of code that do the same thing.

In CodeGuru Profiler, you can find these new capabilities:

  • Anomaly detection – We automatically detect anomalies in the application profile for those methods that represent the highest proportion of CPU time or latency.
  • Lambda function support – You can now profile AWS Lambda functions just like applications hosted on Amazon Elastic Compute Cloud (EC2) and containerized applications running on Amazon ECS and Amazon Elastic Kubernetes Service, including those using AWS Fargate.
  • Cost of issues in the recommendation report – Recommendations contain actionable resolution steps which explain what the problem is, the CPU impact, and how to fix the issue. To help you better prioritize your activities, you now have an estimation of the savings introduced by applying the recommendation.
  • Color-my-code – In the visualizations, to help you easily find your own code, we are coloring your methods differently from frameworks and other libraries you may use.
  • CloudWatch metrics and alerts – To keep track and monitor efficiency issues that have been discovered.

Let’s see some of these new features at work!

Using CodeGuru Reviewer with a Lambda Function
I create a new repo in my GitHub account, and leave it empty for now. Locally, where I am developing a Lambda function using the Java 11 runtime, I initialize my Git repo and add only the README.md file to the master branch. In this way, I can add all the code as a pull request later and have it go through a code review by CodeGuru.

git init
git add README.md
git commit -m "First commit"

Now, I add the GitHub repo as origin, and push my changes to the new repo:

git remote add origin https://github.com/<my-user-id>/amazon-codeguru-sample-lambda-function.git
git push -u origin master

I associate the repository in the CodeGuru console:
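Associating a GitHub repository is done through the console because CodeGuru needs to connect to my GitHub account, but for an AWS CodeCommit repository the association could also be scripted. I believe the CLI shorthand looks roughly like this, with the repository name as a placeholder:

aws codeguru-reviewer associate-repository \
  --repository 'CodeCommit={Name=amazon-codeguru-sample-lambda-function}'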

When the repository is associated, I create a new dev branch, add all my local files to it, and push it remotely:

git checkout -b dev
git add .
git commit -m "Code added to the dev branch"
git push --set-upstream origin dev

In the GitHub console, I open a new pull request by comparing changes across the two branches, master and dev. I verify that the pull request is able to merge, then I create it.

Since the repository is associated with CodeGuru, a code review is listed as Pending in the Code reviews section of the CodeGuru console.

After a few minutes, the code review status is Completed, and CodeGuru Reviewer issues a recommendation on the same GitHub page where the pull request was created.

Oops! I am creating the Amazon DynamoDB service object inside the function invocation method. In this way, it cannot be reused across invocations. This is not efficient.

To improve the performance of my Lambda function, I follow the CodeGuru recommendation, and move the declaration of the DynamoDB service object to a static final attribute of the Java application object, so that it is instantiated only once, during function initialization. Then, I follow the link in the recommendation to learn more best practices for working with Lambda functions.

Using CodeGuru Profiler with a Lambda Function
In the CodeGuru console, I create a MyServerlessApp-Development profiling group and select the Lambda compute platform.
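The same profiling group can be created from the command line. This is only a sketch based on the console flow above; the compute platform value is my assumption of how the Lambda option is expressed in the API:

aws codeguruprofiler create-profiling-group \
  --profiling-group-name MyServerlessApp-Development \
  --compute-platform AWSLambda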

Next, I give the AWS Identity and Access Management (IAM) role used by my Lambda function permissions to submit data to this profiling group.

Now, the console is giving me all the info I need to profile my Lambda function. To configure the profiling agent, I use a couple of environment variables:

  • AWS_CODEGURU_PROFILER_GROUP_ARN to specify the ARN of the profiling group to use.
  • AWS_CODEGURU_PROFILER_ENABLED to enable (TRUE) or disable (FALSE) profiling.

I follow the instructions (for Maven and Gradle) to add a dependency, and include the profiling agent in the build. Then, I update the code of the Lambda function to wrap the handler function inside the LambdaProfiler provided by the agent.
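Those two environment variables can also be set on the function from the command line. In this sketch the function name, account ID, and Region are placeholders:

aws lambda update-function-configuration \
  --function-name my-serverless-app-function \
  --environment "Variables={AWS_CODEGURU_PROFILER_GROUP_ARN=arn:aws:codeguru-profiler:us-east-1:123456789012:profilingGroup/MyServerlessApp-Development,AWS_CODEGURU_PROFILER_ENABLED=TRUE}"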

To generate some load, I start a few scripts invoking my function using the Amazon API Gateway as trigger. After a few minutes, the profiling group starts to show visualizations describing the runtime behavior of my Lambda function.

For example, I can see how much CPU time is spent in the different methods of my function. At the bottom, there are the entry point methods. As I scroll up, I find methods that are called deeper in the stack trace. I right-click and hide the LambdaRuntimeClient methods to focus on my code. Note that my methods are colored differently than those in the packages I am using, such as the AWS SDK for Java.

I am mostly interested in what happens in the handler method invoked by the Lambda platform. I select the handler method, and now it becomes the new “base” of the visualization.

As I move my pointer on each of my methods, I get more information, including an estimation of the yearly cost of running that specific part of the code in production, based on the load experienced by the profiling agent during the selected time window. In my case, the handler function cost is estimated to be $6. If I select the two main functions above, I have an estimation of $3 each. The cost estimation works for code running on Lambda functions, EC2 instances, and containerized applications.

Similarly, I can visualize Latency, to understand how much time is spent inside the methods in my code. I keep the Lambda function handler method selected to drill down into what is under my control, and see where time is being spent the most.

CodeGuru Profiler also provides a recommendation based on the data collected. I am spending too much time (more than 4%) managing encryption. I can use a more efficient crypto provider, such as the open source Amazon Corretto Crypto Provider, described in this blog post. This should lower the time spent to what is expected, about 1% of my profile.

Finally, I edit the profiling group to enable notifications. In this way, if CodeGuru detects an anomaly in the profile of my application, I am notified in one or more Amazon Simple Notification Service (SNS) topics.

Available Now
Amazon CodeGuru is available today in 10 regions, and we are working to add more regions in the coming months. For regional availability, please see the AWS Region Table.

CodeGuru helps you improve your application code and reduce compute and infrastructure costs with an automated code reviewer and application profiler that provide intelligent recommendations. Using visualizations based on runtime data, you can quickly find the most expensive lines of code of your applications. With CodeGuru, you pay only for what you use. Pricing is based on the lines of code analyzed by CodeGuru Reviewer, and on sampling hours for CodeGuru Profiler.

To learn more, please see the documentation.

Danilo

Introducing Amazon Honeycode – Build Web & Mobile Apps Without Writing Code

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/introducing-amazon-honeycode-build-web-mobile-apps-without-writing-code/

VisiCalc was launched in 1979, and I purchased a copy (shown at right) for my Apple II. The spreadsheet model was clean, easy to use, and most of all, easy to teach. I was working in a retail computer store at that time, and knew that this product was a big deal when customers started asking to purchase the software, and for whatever hardware that was needed to run it.

Today’s spreadsheets fill an important gap between mass-produced packaged applications and custom-built code created by teams of dedicated developers. Every tool has its limits, however. Sharing data across multiple users and multiple spreadsheets is difficult, as is dealing with large amounts of data. Integration & automation are also challenging, and require specialized skills. In many cases, those custom-built apps would be a better solution than a spreadsheet, but a lack of developers or other IT resources means that these apps rarely get built.

Introducing Amazon Honeycode
Today we are launching Amazon Honeycode in beta form. This new fully-managed AWS service gives you the power to build powerful mobile & web applications without writing any code. It uses the familiar spreadsheet model and lets you get started in minutes. If you or your teammates are already familiar with spreadsheets and formulas, you’ll be happy to hear that just about everything you know about sheets, tables, values, and formulas still applies.

Amazon Honeycode includes templates for some common applications that you and other members of your team can use right away:

You can customize these apps at any time and the changes will be deployed immediately. You can also start with an empty table, or by importing some existing data in CSV form. The applications that you build with Honeycode can make use of a rich palette of user interface objects including lists, buttons, and input fields:

You can also take advantage of a repertoire of built-in, trigger-driven actions that can generate email notifications and modify tables:

Honeycode also includes a lengthy list of built-in functions. The list includes many functions that will be familiar to users of existing spreadsheets, along with others that are new to Honeycode. For example, FindRow is a more powerful version of the popular Vlookup function.

Getting Started with Honeycode
It is easy to get started. I visit the Honeycode Builder, and create my account:

After logging in I see My Drive, with my workbooks & apps, along with multiple search, filter, & view options:

I can open & explore my existing items, or I can click Create workbook to make something new. I do that, and then select the Simple To-do template:

The workbook, tables, and the apps are created and ready to use right away. I can simply clear the sample data from the tables and share the app with the users, or I can inspect and customize it. Let’s inspect it first, and then share it!

After I create the new workbook, the Tasks table is displayed and I can see the sample data:

Although this looks like a traditional spreadsheet, there’s a lot going on beneath the surface. Let’s go through, column-by-column:

A (Task) – Plain text.

B (Assignee) – Text, formatted as a Contact.

C (First Name) – Text, computed by a formula:

In the formula, Assignee refers to column B, and First Name refers to the first name of the contact.

D (Due) – A date, with multiple formatting options:

E (Done) – A picklist that pulls values from another table, and that is formatted as a Honeycode rowlink. Together, this restricts the values in this column to those found in the other table (Done, in this case, with the values Yes and No), and also makes the values from that table visible within the context of this one:

F (Remind On) – Another picklist, this one taking values from the ReminderOptions table:

G (Notification) – Another date.

This particular table uses just a few of the features and options that are available to you.

I can use the icons on the left to explore my workbook:

I can see the tables:

I can also see the apps. A single Honeycode workbook can contain multiple apps that make use of the same tables:

I’ll return to the apps and the App Builder in a minute, but first I’ll take a look at the automations:

Again, all of the tables and apps in the workbook can use any of the automations in the workbook.

The Honeycode App Builder
Let’s take a closer look at the app builder. As was the case with the tables, I will show you some highlights and let you explore the rest yourself. Here’s what I see when I open my Simple To-do app in the builder:

This app contains four screens (My Tasks, All Tasks, Edit, and Add Task). All screens have both web and mobile layouts. Newly created screens, including those in this app, have the layouts linked, so that changes to one are reflected in the other. I can unlink the layouts if I want to exercise more control over the controls and presentation, or to otherwise differentiate the two:

Objects within a screen can reference data in tables. For example, the List object on the My Tasks screen filters rows of the Tasks table, selecting the undone tasks and ordering them by due date:

Here’s the source expression:

=Filter(Tasks,"Tasks[Done]<>% ORDER BY Tasks[Due]","Yes")

The “%”  in the filter condition is replaced by the second parameter (“Yes”) when the filter is evaluated. This substitution system makes it easy for you to create interesting & powerful filters using the FILTER() function.

When the app runs, the objects within the List are replicated, one per task:

Objects on screens can run automations and initiate actions. For example, the ADD TASK button navigates to the Add Task screen:

The Add Task screen prompts for the values that specify the new task, and the ADD button uses an automation that writes the values to the Tasks table:

Automations can be triggered in four different ways. Here’s the automation that generates reminders for tasks that have not been marked as done. The automation runs once for each row in the Tasks table:

The notification runs only if the task has not been marked as done, and could also use the FILTER() function:

While I don’t have the space to show you how to build an app from scratch, here’s a quick overview.

Click Create workbook and Import CSV file or Start from scratch:

Click the Tables icon and create reference and data tables:

Click the Apps icon and build the app. You can select a wizard that uses your tables as a starting point, or you can start from an empty canvas.

Click the Automations icon and add time-driven or data-driven automations:

Share the app, as detailed in the next section.

Sharing Apps
After my app is ready to go, I can share it with other members of my team. Each Honeycode user can be a member of one or more teams:

To share my app, I click Share app:

Then I search for the desired team members and share the app:

They will receive an email that contains a link, and can start using the app immediately. Users with mobile devices can install the Honeycode Player (iOS, Android) and make use of any apps that have been shared with them. Here’s the Simple To-do app:

Amazon Honeycode APIs
External applications can also use the Honeycode APIs to interact with the applications you build with Honeycode. The functions include:

GetScreenData – Retrieve data from any screen of a Honeycode application.

InvokeScreenAutomation – Invoke an automation or action defined in the screen of a Honeycode application.
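Both operations can also be called from the AWS CLI. The following is a sketch with placeholder IDs, which you can find in the Honeycode Builder:

aws honeycode get-screen-data \
  --workbook-id <workbook-id> \
  --app-id <app-id> \
  --screen-id <screen-id>

aws honeycode invoke-screen-automation \
  --workbook-id <workbook-id> \
  --app-id <app-id> \
  --screen-id <screen-id> \
  --screen-automation-id <automation-id>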

Check it Out
As you can see, Amazon Honeycode is easy to use, with plenty of power to let you build apps that help you and your team to be more productive. Check it out, build something cool, and let me know what you think! You can find out more in the announcement video from Larry Augustin here:

Jeff;

PS – The Amazon Honeycode Forum is the place for you to ask questions, learn from other users, and to find tutorials and other resources that will help you to get started.

AWS Solutions Constructs – A Library of Architecture Patterns for the AWS CDK

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/aws-solutions-constructs-a-library-of-architecture-patterns-for-the-aws-cdk/

Cloud applications are built using multiple components, such as virtual servers, containers, serverless functions, storage buckets, and databases. Being able to provision and configure these resources in a safe, repeatable way is incredibly important to automate your processes and let you focus on the unique parts of your implementation.

With the AWS Cloud Development Kit, you can leverage the expressive power of your favorite programming languages to model your applications. You can use high-level components called constructs, preconfigured with “sensible defaults” that you can customize, to quickly build a new application. The CDK provisions your resources using AWS CloudFormation to get all the benefits of managing your infrastructure as code. One of the reasons I like the CDK is that you can compose and share your own custom components as higher-level constructs.

As you can imagine, there are recurring patterns that can be useful to more than one customer. For this reason, today we are launching the AWS Solutions Constructs, an open source extension library for the CDK that provides well-architected patterns to help you build your unique solutions. CDK constructs mostly cover single services. AWS Solutions Constructs provide multi-service patterns that combine two or more CDK resources, and implement best practices such as logging and encryption.

Using AWS Solutions Constructs
To see the power of a pattern-based approach, let’s take a look at how that works when building a new application. As an example, I want to build an HTTP API to store data in an Amazon DynamoDB table. To keep the content of the table small, I can use DynamoDB Time to Live (TTL) to expire items after a few days. After the TTL expires, data is deleted from the table and sent, via DynamoDB Streams, to an AWS Lambda function that archives the expired data on Amazon Simple Storage Service (S3).

To build this application, I can use a few components:

  • An Amazon API Gateway endpoint for the API.
  • A DynamoDB table to store data.
  • A Lambda function to process the API requests, and store data in the DynamoDB table.
  • DynamoDB Streams to capture data changes.
  • A Lambda function processing data changes to archive the expired data.

Can I make it simpler? Looking at the available patterns in the AWS Solutions Constructs, I find two that can help me build my app:

  • aws-apigateway-lambda, a Construct that implements an API Gateway REST API connected to a Lambda function. As an example of the “sensible defaults” used by AWS Solutions Constructs, this pattern enables CloudWatch logging for the API Gateway.
  • aws-dynamodb-stream-lambda, a Construct implementing a DynamoDB table streaming data changes to a Lambda function with the least privileged permissions.

To build the final architecture, I simply connect those two Constructs together:

I am using TypeScript to define the CDK stack, and Node.js for the Lambda functions. Let’s start with the CDK stack:

 

import * as cdk from '@aws-cdk/core';
import * as lambda from '@aws-cdk/aws-lambda';
import * as apigw from '@aws-cdk/aws-apigateway';
import * as dynamodb from '@aws-cdk/aws-dynamodb';
import { ApiGatewayToLambda } from '@aws-solutions-constructs/aws-apigateway-lambda';
import { DynamoDBStreamToLambda } from '@aws-solutions-constructs/aws-dynamodb-stream-lambda';

export class DemoConstructsStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const apiGatewayToLambda = new ApiGatewayToLambda(this, 'ApiGatewayToLambda', {
      deployLambda: true,
      lambdaFunctionProps: {
        code: lambda.Code.fromAsset('lambda'),
        runtime: lambda.Runtime.NODEJS_12_X,
        handler: 'restApi.handler'
      },
      apiGatewayProps: {
        defaultMethodOptions: {
          authorizationType: apigw.AuthorizationType.NONE
        }
      }
    });

    const dynamoDBStreamToLambda = new DynamoDBStreamToLambda(this, 'DynamoDBStreamToLambda', {
      deployLambda: true,
      lambdaFunctionProps: {
        code: lambda.Code.fromAsset('lambda'),
        runtime: lambda.Runtime.NODEJS_12_X,
        handler: 'processStream.handler'
      },
      dynamoTableProps: {
        tableName: 'my-table',
        partitionKey: { name: 'id', type: dynamodb.AttributeType.STRING },
        timeToLiveAttribute: 'ttl'
      }
    });

    const apiFunction = apiGatewayToLambda.lambdaFunction;
    const dynamoTable = dynamoDBStreamToLambda.dynamoTable;

    dynamoTable.grantReadWriteData(apiFunction);
    apiFunction.addEnvironment('TABLE_NAME', dynamoTable.tableName);
  }
}

At the beginning of the stack, I import the standard CDK constructs for the Lambda function, the API Gateway endpoint, and the DynamoDB table. Then, I add the two patterns from the AWS Solutions Constructs, ApiGatewayToLambda and DynamoDBStreamToLambda.

After declaring the ApiGatewayToLambda and DynamoDBStreamToLambda constructs, I store the Lambda function created by the ApiGatewayToLambda construct, and the DynamoDB table created by DynamoDBStreamToLambda, in two variables.

At the end of the stack, I “connect” the two patterns together by granting the Lambda function permission to read and write the DynamoDB table, and by adding the table name to the environment of the Lambda function, so that the function code can use it to store data in the table.

The code of the two Lambda functions is in the lambda folder of the CDK application. I am using the Node.js 12 runtime.

The restApi.js function implements the API and writes data to the DynamoDB table. The URL path is used as the partition key, and all the query string parameters in the URL are stored as attributes. The TTL for the item is computed by adding a time window of 7 days to the current time.

const { DynamoDB } = require("aws-sdk");

const docClient = new DynamoDB.DocumentClient();

const TABLE_NAME = process.env.TABLE_NAME;
const TTL_WINDOW = 7 * 24 * 60 * 60; // 7 days expressed in seconds

exports.handler = async function (event) {

  // Use the URL path as the partition key and the query string parameters as attributes
  const item = event.queryStringParameters || {};
  item.id = event.pathParameters.proxy;

  // Expire the item TTL_WINDOW seconds from now
  const now = new Date();
  item.ttl = Math.round(now.getTime() / 1000) + TTL_WINDOW;

  let statusCode = 204;

  try {
    await docClient.put({
      TableName: TABLE_NAME,
      Item: item
    }).promise();
  } catch (err) {
    // put() rejects on failure, so errors are handled here
    console.error('request: ', JSON.stringify(event, undefined, 2));
    console.error('error: ', err);
    statusCode = 500;
  }

  return {
    statusCode: statusCode
  };
};

The processStream.js function processes the data change records coming from the DynamoDB stream, looking for items deleted by TTL. The archive functionality is not implemented in this sample code.

exports.handler = async function (event) {
  event.Records.forEach((record) => {
    console.log('Stream record: ', JSON.stringify(record, null, 2));
    // userIdentity is only present for records written by DynamoDB itself, such as TTL deletions
    if (record.userIdentity &&
      record.userIdentity.type === "Service" &&
      record.userIdentity.principalId === "dynamodb.amazonaws.com") {

      // Record deleted by DynamoDB Time to Live (TTL)

      // I can archive the record to S3, for example using Kinesis Data Firehose.
    }
  });
};
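
As a minimal sketch of that archive step, the function could forward the old image of each expired item to a Kinesis Data Firehose delivery stream that buffers and delivers to S3. The DELIVERY_STREAM_NAME environment variable and the delivery stream itself are assumptions, not part of the stack defined above, and the old image is only available if the table's stream view type includes old images:

const { Firehose } = require("aws-sdk");

const firehose = new Firehose();
// Hypothetical: this delivery stream is not created by the CDK stack in this post
const DELIVERY_STREAM_NAME = process.env.DELIVERY_STREAM_NAME;

async function archiveRecord(record) {
  // Send the expired item's old image to Firehose, which batches and writes it to S3
  await firehose.putRecord({
    DeliveryStreamName: DELIVERY_STREAM_NAME,
    Record: { Data: JSON.stringify(record.dynamodb.OldImage) + "\n" }
  }).promise();
}

In the handler above, I would then collect the promises returned by archiveRecord and await them all, rather than calling an async function inside forEach.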

Let's see if this works! First, I need to install all dependencies. To simplify dependency management, each release of AWS Solutions Constructs is linked to the corresponding version of the CDK. In this case, I am using version 1.46.0 for both the CDK and the AWS Solutions Constructs patterns. The first three commands install plain CDK constructs. The last two commands install the AWS Solutions Constructs patterns I am using for this application.

npm install @aws-cdk/aws-lambda@1.46.0
npm install @aws-cdk/aws-apigateway@1.46.0
npm install @aws-cdk/aws-dynamodb@1.46.0
npm install @aws-solutions-constructs/aws-apigateway-lambda@1.46.0
npm install @aws-solutions-constructs/aws-dynamodb-stream-lambda@1.46.0

Now, I build the application and use the CDK to deploy it.

npm run build
cdk deploy

Towards the end of the output of the cdk deploy command, a green check mark tells me that the deployment of the stack is complete. Just below, in the Outputs, I find the endpoint of the API Gateway.

 ✅  DemoConstructsStack

Outputs:
DemoConstructsStack.ApiGatewayToLambdaLambdaRestApiEndpoint9800D4B5 = https://1a2c3c4d.execute-api.eu-west-1.amazonaws.com/prod/

I can now use curl to test the API:

curl "https://1a2c3c4d.execute-api.eu-west-1.amazonaws.com/prod/danilop?name=Danilo&amp;company=AWS"

Let’s have a look at the DynamoDB table:

The item is stored, and the TTL is set. After a week, the item will be deleted and sent via DynamoDB Streams to the processStream.js function.
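
Without the console screenshot, a quick way to verify is the AWS CLI; the table name and key schema come from the CDK stack above, and the id value matches the URL path used in the curl call:

aws dynamodb get-item \
    --table-name my-table \
    --key '{"id": {"S": "danilop"}}'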

After I complete my testing, I use the CDK again to quickly delete all resources created for this application:

cdk destroy

Available Now
The AWS Solutions Constructs are available now for TypeScript and Python. The AWS Solutions Builders team is working to make these constructs available for Java and C# with the CDK as well; stay tuned. There is no cost for using the AWS Solutions Constructs or the CDK; you only pay for the resources created when deploying the stack.

In this first release, 25 patterns are included, covering lots of different use cases. Which new patterns and features should we focus on next? Give us your feedback in the open source project repository!

Danilo

HASH: a free, online platform for modeling the world

Post Syndicated from Joel Spolsky original https://www.joelonsoftware.com/2020/06/18/hash-a-free-online-platform-for-modeling-the-world/

Sometimes when you’re trying to figure out the way the world works, basic math is enough to get you going. If we increase the hot water flow by x, the temperature of the mixture goes up by y.

Sometimes you're working on something that's just too complicated for that, and you can't even begin to guess how the inputs affect the outputs. At the warehouse, everything seems to go fine when you have fewer than four employees, but when you hit five employees, they get in each other's way so much that the fifth employee effectively does no additional work.

You may not understand the relationship between the number of employees and the throughput of the warehouse, but you definitely know what everybody is doing. If you can imagine writing a little bit of JavaScript code to simulate the behavior of each of your workers, you can run a simulation and see what actually happens. You can tweak the parameters and the rules the employees follow to see how it would help, and you can really gain some traction understanding, and then solving, very complex problems.
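
To make that concrete, here's a tiny sketch in plain JavaScript. It's not HASH's API, just the bare idea of per-agent rules plus a made-up congestion limit for the warehouse example:

// A toy agent-based model of the warehouse: each worker tries to pick one item per step,
// but only `capacity` workers fit in the aisle at once (the limit of 4 is an assumption).
function runSimulation(numWorkers, steps, capacity = 4) {
  const workers = Array.from({ length: numWorkers }, () => ({ itemsPicked: 0 }));
  for (let t = 0; t < steps; t++) {
    workers.forEach((worker, i) => {
      if (i < capacity) {          // workers beyond the capacity are in each other's way
        worker.itemsPicked += 1;
      }
    });
  }
  return workers.reduce((sum, w) => sum + w.itemsPicked, 0); // total throughput
}

console.log(runSimulation(4, 100)); // 400
console.log(runSimulation(5, 100)); // still 400: the fifth worker adds nothing

Run it and the totals for four and five workers come out the same, which is exactly the kind of non-obvious behavior a simulation makes visible and lets you experiment with.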

That’s what hash.ai is all about. Read David’s launch blog post, then try building your own simulations!

Introducing AWS Snowcone – A Small, Lightweight, Rugged, Secure Edge Computing, Edge Storage, and Data Transfer Device

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/introducing-aws-snowcone-small-lightweight-edge-storage-and-processing/

Last month I published my AWS Snowball Edge Update and told you about the latest updates to Snowball Edge, including faster storage-optimized devices with more memory & vCPUs, the AWS OpsHub for Snow Family GUI-based management tool, IAM for Snowball Edge, and Snowball Edge Support for AWS Systems Manager.

AWS Snowcone
Today I would like to introduce you to the newest and smallest member of the AWS Snow Family of physical edge computing, edge storage, and data transfer devices for rugged or disconnected environments, AWS Snowcone:

AWS Snowcone weighs 4.5 pounds and includes 8 terabytes of usable storage. It is small (9″ long, 6″ wide, and 3″ tall) and rugged, and can be used in a variety of environments including desktops, data centers, messenger bags, vehicles, and in conjunction with drones. Snowcone runs on either AC power or an optional battery, making it great for many different types of use cases where self-sufficiency is vital.

The device enclosure is both tamper-evident and tamper-resistant, and also uses a Trusted Platform Module (TPM) designed to ensure both security and full chain-of-custody for your data. The device encrypts data at rest and in transit using keys that are managed by AWS Key Management Service (KMS) and are never stored on the device.

Like other Snow Family devices, Snowcone includes an E Ink shipping label designed to ensure the device is automatically sent to the correct AWS facility and to aid in tracking. It also includes 2 CPUs, 4 GB of memory, wired or wireless access, and USB-C power using a cord or the optional battery. There’s enough compute power for you to launch EC2 instances and to use AWS IoT Greengrass.

You can use Snowcone for data migration, content distribution, tactical edge computing, healthcare IoT, industrial IoT, transportation, logistics, and autonomous vehicle use cases. You can ship data-laden devices to AWS for offline data transfer, or you can use AWS DataSync for online data transfer.

Ordering a Snowcone
The ordering process for Snowcone is similar to that for Snowball Edge. I open the Snow Family Console, and click Create Job:

I select the Import into Amazon S3 job type and click Next:

I choose my address (or enter a new one), and a shipping speed:

Next, I give my job a name (Snowcone2) and indicate that I want a Snowcone. I also acknowledge that I will provide my own power supply:

Deeper into the page, I choose an S3 bucket for my data, opt in to WiFi connectivity, and choose an EC2 AMI that will be loaded on the device before it is shipped to me:

As you can see from the image, I can choose multiple buckets and/or multiple AMIs. The AMIs must be made from an instance launched from a CentOS or Ubuntu product in AWS Marketplace, and must contain an SSH key.

On successive pages (not shown), I specify permissions (an IAM role), choose an AWS Key Management Service (KMS) key to encrypt my data, and set up an SNS topic for job notifications. Then I confirm my choices and click Create job:

Then I await delivery of my device! I can check the status at any time:

As noted in one of the earlier screens, I will also need a suitable power supply or battery (you can find several on the Snowcone Accessories page).

Time passes, the doorbell rings, Luna barks, and my device is delivered…

Luna and a Snowcone

The console also updates to show that my device has been delivered:

On that page, I click Get credentials, copy the client unlock code, and download the manifest file:

Setting up My Snowcone
I connect the Snowcone to the power supply and to my network, and power up! After a few seconds of initialization, the device shows its IP address and invites me to connect:

The IP address was supplied by the DHCP server on my home network, and should be fine. If not, I can touch Network and configure a static IP address or log in to my WiFi network.

Next, I download AWS OpsHub for Snow Family, install it, and then configure it to access the device. I select Snowcone and click Next:

I enter the IP address as shown on the display:

Then I enter the unlock code, upload the manifest file, and click Unlock device:

After a minute or two, the device is unlocked and ready. I enter a name (Snowcone1) that I’ll use within AWS OpsHub and click Save profile name:

I’m all set:

AWS OpsHub for Snow Family
Now that I have ordered & received my device, installed AWS OpsHub for Snow Family, and unlocked my device, I am ready to start managing some file storage and doing some edge computing!

I click on Get started within Manage file storage, and Start NFS. I have several network options, and I’ll use the defaults:

The NFS server is ready within a minute or so, and it has its own IP address:

Once it is ready I can mount the NFS volume and copy files to the Snowcone:
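
On a Linux client, mounting the share could look like this minimal sketch; the NFS server IP and export path are placeholders for the values shown in AWS OpsHub:

sudo mkdir -p /mnt/snowcone
sudo mount -t nfs <nfs-server-ip>:/<export-path> /mnt/snowcone
cp <local-files> /mnt/snowcone/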

I can store & process these files locally, or I can use AWS DataSync to transfer them to the cloud.

As I showed you earlier in this post, I selected an EC2 AMI when I created my job. I can launch instances on the Snowcone using this AMI. I click on Compute, and Launch instance:

I have three instance types to choose from:

Instance Name    CPUs    RAM
snc1.micro       1       1 GiB
snc1.small       1       2 GiB
snc1.medium      2       4 GiB

I select my AMI & instance type, confirm the networking options, and click Launch:

I can also create storage volumes and attach them to the instance.

The ability to build AMIs and run them on Snowcones gives you the power to build applications that do all sorts of interesting filtering, pre-processing, and analysis at the edge.

I can use AWS DataSync to transfer data from the device to a wide variety of AWS storage services including Amazon Simple Storage Service (S3), Amazon Elastic File System (EFS), or Amazon FSx for Windows File Server. I click on Get started, then Start DataSync Agent, confirm my network settings, and click Start agent:

Once the agent is up and running, I copy the IP address:

Then I follow the link and create a DataSync agent (the deploy step is not necessary because the agent is already running). I choose an endpoint and paste the IP address of the agent, then click Get key:

I give my agent a name (SnowAgent), tag it, and click Create agent:

Then I configure the NFS server in the Snowcone as a DataSync location, and use it to transfer data in or out using a DataSync Task.
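
A hedged sketch of that configuration with the AWS CLI; the agent ARN, NFS server address, export path, bucket, role, and the resulting location and task ARNs are all placeholders:

# Register the Snowcone NFS share as the source location
aws datasync create-location-nfs \
    --server-hostname <nfs-server-ip> \
    --subdirectory /<export-path> \
    --on-prem-config AgentArns=<agent-arn>

# Register the destination S3 bucket
aws datasync create-location-s3 \
    --s3-bucket-arn arn:aws:s3:::<bucket-name> \
    --s3-config BucketAccessRoleArn=<role-arn>

# Create and start the transfer task
aws datasync create-task \
    --source-location-arn <nfs-location-arn> \
    --destination-location-arn <s3-location-arn>
aws datasync start-task-execution --task-arn <task-arn>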

API / CLI
While AWS OpsHub is going to be the primary access method for most users, the device can also be accessed programmatically. I can use the Snow Family tools to retrieve the AWS Access Key and Secret Key from the device, create a CLI profile (region is snow), and run commands (or issue API calls) as usual:
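
For example, a profile for the device could look like this sketch; the keys, shown as placeholders, are the ones retrieved from the device, and the region is literally snow:

# ~/.aws/config
[profile snowcone1]
region = snow

# ~/.aws/credentials
[snowcone1]
aws_access_key_id = <access key retrieved from the device>
aws_secret_access_key = <secret key retrieved from the device>

With the profile in place, I point the CLI at the device endpoint, as in the describe-images call below: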

C:\> aws ec2 describe-images ^
    --endpoint http://192.168.7.154:8008 ^
    --profile snowcone1
{
    "Images": [
        {
            "ImageId": "s.ami-0581034c71faf08d9",
            "Public": false,
            "State": "AVAILABLE",
            "BlockDeviceMappings": [
                {
                    "DeviceName": "/dev/sda1",
                    "Ebs": {
                        "DeleteOnTermination": false,
                        "Iops": 0,
                        "SnapshotId": "s.snap-01f2a33baebb50f0e",
                        "VolumeSize": 8
                    }
                }
            ],
            "Description": "Image for Snowcone delivery #1",
            "EnaSupport": false,
            "Name": "Snowcone v1",
            "RootDeviceName": "/dev/sda1"
        },
        {
            "ImageId": "s.ami-0bb6828757f6a23cf",
            "Public": false,
            "State": "AVAILABLE",
            "BlockDeviceMappings": [
                {
                    "DeviceName": "/dev/sda",
                    "Ebs": {
                        "DeleteOnTermination": true,
                        "Iops": 0,
                        "SnapshotId": "s.snap-003d9042a046f11f9",
                        "VolumeSize": 20
                    }
                }
            ],
            "Description": "AWS DataSync AMI for online data transfer",
            "EnaSupport": false,
            "Name": "scn-datasync-ami",
            "RootDeviceName": "/dev/sda"
        }
    ]
}
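
With the same profile and endpoint, I could also launch an instance from the command line. This is a sketch rather than a command I ran for this post, using the snc1.micro instance type and the first image ID from the output above:

C:\> aws ec2 run-instances ^
    --endpoint http://192.168.7.154:8008 ^
    --image-id s.ami-0581034c71faf08d9 ^
    --instance-type snc1.micro ^
    --profile snowcone1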

Get One Today
You can order a Snowcone today for use in US locations.

Jeff;