All posts by Steve Roberts

Introducing Bob’s Used Books—a New, Real-World, .NET Sample Application

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/introducing-bobs-used-books-a-new-real-world-net-sample-application/

Today, I’m happy to announce that a new open-source sample application, a fictitious used books eCommerce store we call Bob’s Used Books, is available for .NET developers working with AWS. The .NET advocacy and development teams at AWS talk to customers regularly and, during those conversations, often receive requests for more in-depth samples. Customers tell us that, while small code snippets serve well to illustrate the mechanics of an API, their development teams also need and want to make use of fuller, more real-world samples to understand better how to construct modern applications for the cloud. Today’s sample application release is in response to those requests.

Bob’s Used Books is a sample eCommerce application built using ASP.NET Core version 6 and represents an initial modernization of a typical on-premises custom application. Representing a first stage of modernization, the application uses modern cross-platform .NET, enabling it to run on both Windows and Linux systems in the cloud. It’s typical of what many .NET developers are just now going through, porting their own applications from .NET Framework to .NET using freely available tools from AWS such as the Toolkit for .NET Refactoring and the Porting Assistant for .NET.

Bob's Used Books sample application homepage

Sample application features
Customers of our fictional bookstore can browse and search the store for used books and view details of selected books, such as price, condition, genre, and more:

Bob's Used Books sample application search results page, which shows 8 books and their prices.

Bob's Used Books sample application book details page

Just like a real e-commerce store, customers can add books to a shopping cart, pending subsequent checkout, or to a personal wish list. When the time comes to purchase, the customer can start the checkout process, which prompts them to sign in if they’re an existing customer or to sign up during the process.

Bob's Used Books sample application checkout page

In this sample application, the bookstore’s staff uses the same web application to manage inventory and customer orders. Role-based authentication determines whether it’s a staff member signing in, in which case they can access an administrative portal, or a regular store customer. Staff who access the admin portal start with a dashboard view that summarizes pending, in-process, and completed orders and the state of the store’s inventory:

Bob's Used Books sample application staff dashboard page

Staff can edit inventory to add new books, complete with cover images, or adjust stock levels. From the same dashboard, staff can also view and process pending orders.

Bob's Used Books sample application staff order processing page

Not shown here, but something I think is pretty cool, is a simulated workflow where customers can re-sell their books through the store. This involves the customer submitting an application, the store admin evaluating it and deciding whether to purchase from the customer, the customer “posting” the book to the store if accepted, and finally the admin adding the book to inventory and reimbursing the customer. Remember, this is all fictional—no actual financial transactions take place!

Application architecture
The bookstore sample didn’t start as a .NET Framework-based application that needed porting to .NET, but it does use a monolithic MVC (model-view-controller) application design, typical of the .NET Framework development era (and still in use today). It also uses a single Microsoft SQL Server database to contain inventory, shopping cart, user data, and more.

Bob's Used Books sample application outline architecture

When fully deployed to AWS, the application makes use of several services. These provide resources to host the application, supply configuration to the running application, and add useful functionality to the running code, such as image verification (a code sketch after the list shows how two of these calls might look):

  • Amazon Cognito – used for customer and bookstore staff authentication. The application uses Cognito’s Hosted UI to provide sign-in and sign-up functionality.
  • Amazon Relational Database Service (RDS) – manages a single Microsoft SQL Server Express instance containing inventory, customer, and other typical data for an e-commerce application.
  • Amazon Simple Storage Service (Amazon S3) – an S3 bucket is used to store cover images for books.
  • AWS Systems Manager Parameter Store – contains runtime configuration data, including the name of the S3 bucket for cover images, and Cognito user pool details.
  • AWS Secrets Manager – holds the user and password details for the underlying SQL Server database in RDS.
  • Amazon CloudFront – provides a domain for accessing the cover images in the S3 bucket, which means the bucket does not need to be publicly available.
  • Amazon Rekognition – used to verify that cover images uploaded for a book do not contain objectionable content.
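
As an illustration of how a couple of these services might be called from the application code, here’s a minimal sketch using the AWS SDK for .NET. The parameter name, bucket, and confidence threshold are hypothetical, for illustration only, and not taken from the sample’s actual source:

using System.Threading.Tasks;
using Amazon.Rekognition;
using Amazon.Rekognition.Model;
using Amazon.SimpleSystemsManagement;
using Amazon.SimpleSystemsManagement.Model;

public class CoverImageChecker
{
    private readonly AmazonSimpleSystemsManagementClient _ssm = new();
    private readonly AmazonRekognitionClient _rekognition = new();

    // Read the name of the cover image bucket from Parameter Store
    public async Task<string> GetCoverImageBucketAsync()
    {
        var response = await _ssm.GetParameterAsync(new GetParameterRequest
        {
            Name = "/BobsUsedBooks/CoverImageBucket" // hypothetical parameter name
        });
        return response.Parameter.Value;
    }

    // Returns true if Rekognition finds no objectionable content in an uploaded cover image
    public async Task<bool> IsImageSafeAsync(string bucket, string objectKey)
    {
        var response = await _rekognition.DetectModerationLabelsAsync(new DetectModerationLabelsRequest
        {
            Image = new Image { S3Object = new S3Object { Bucket = bucket, Name = objectKey } },
            MinConfidence = 80F // hypothetical threshold
        });
        return response.ModerationLabels.Count == 0;
    }
}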

The application is a starting point to showcase further modernization opportunities in the future, such as adopting purpose-built databases instead of a single relational database, decomposing the monolith into microservices (for which AWS provides the Microservice Extractor for .NET), and more. The .NET development, advocacy, and solutions architect teams here at AWS are excited about using this sample in new content illustrating those modernization paths in the upcoming months. And, as the sample is open-source, we’re also interested to see where the .NET development community takes its modernization.

Running the application
My colleague Brad Webber, a Solutions Architect at AWS, has written the first in a series of technical blog posts we’ll be publishing about the sample. You’ll find these on the new .NET on AWS blog channel. In his first post, you’ll learn more about how to run or debug the application on your own machine as well as deploy it completely to the AWS cloud.

The application uses a SQL Server Express LocalDB instance for its database needs when running outside the cloud, which means you currently need a Windows machine to run or debug it. Launch profiles, accessible from Visual Studio, Visual Studio Code, or JetBrains Rider (all on Windows), are used to select how the application runs, for example, with no or some cloud resources (a sketch of how such profile-based wiring might look follows the list):

  • Local – When you select this launch profile, the application runs completely on your machine, using no cloud resources, and doesn’t need an AWS account. This enables you to investigate and experiment with the code without incurring charges for cloud resources.
  • Integrated – When you use this profile, the application still runs locally on your Windows machine and continues to use the localdb database instance, but now also uses some AWS resources, such as an S3 bucket, Rekognition, Cognito, and others. This profile enables you to learn how you can use AWS services within your application code, using the AWS SDK for .NET and various extension libraries that we distribute on NuGet (for a full list of all available libraries you can use when developing your applications, see the .NET on AWS repository on GitHub). To enable you to set up the cloud resources needed by the application when using this profile, an AWS Cloud Development Kit (AWS CDK) project is provided in the sample repository, making it easy to set up and tear down those resources on demand.
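
To illustrate the idea behind these profiles, here’s a hedged sketch of how an ASP.NET Core application might register AWS service clients only when running under an “Integrated”-style profile, using the AWSSDK.Extensions.NETCore.Setup package. The profile name and service choices are illustrative, not the sample’s actual wiring:

using Amazon.Rekognition;
using Amazon.S3;

var builder = WebApplication.CreateBuilder(args);

// Wire up AWS service clients only for the Integrated profile; the Local
// profile would substitute local implementations and use no cloud resources.
if (builder.Environment.IsEnvironment("Integrated")) // hypothetical environment name
{
    builder.Services.AddAWSService<IAmazonS3>();
    builder.Services.AddAWSService<IAmazonRekognition>();
}

var app = builder.Build();
app.MapGet("/", () => "Bob's Used Books");
app.Run();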

Deploying the Sample to AWS
You can also deploy the entire application to the AWS Cloud, in this case to virtual machines in Amazon Elastic Compute Cloud (Amazon EC2) with a SQL Server Express database instance in Amazon Relational Database Service (RDS). The deployment uses resources compatible with the AWS Free Tier; note, however, that you may still incur charges if you exceed the Free Tier limits. Unlike running the application on your own machine, which requires Windows because of the LocalDB dependency, you can deploy the application to AWS from any machine, including those running macOS and Linux. Once again, a CDK project is included in the repository to get you started, and Brad’s blog post goes into more detail on these steps, so I won’t repeat them here.

Using virtual machines in the cloud is often a first step in modernizing on-premises applications because of the similarity to an on-premises server setup, which is why the sample supports Amazon EC2 deployments out of the box. In the future, we’ll be adding content showing how to deploy the application to container services on AWS, such as AWS App Runner, Amazon Elastic Container Service (Amazon ECS), and Amazon Elastic Kubernetes Service (Amazon EKS).

Next steps
The Bob’s Used Books sample application is available now on GitHub. If you’re a .NET developer working on AWS and looking for a deeper, more real-world sample, we encourage you to clone the repository and take the application for a spin. We’re also curious about what modernization journeys you would take with the application, which will help us create future content for the sample. Let us know in the issues section of the repository. And if you want to contribute to the sample, we welcome contributions!

Discover Building without Limits at AWS Developer Innovation Day

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/discover-building-without-limits-at-aws-developer-innovation-day/

We hope you can join us on Wednesday, April 26, for a free-to-attend online event, AWS Developer Innovation Day. AWS will stream the event simultaneously across multiple platforms, including LinkedIn Live, Twitter, YouTube, and Twitch.

Developer Innovation Day is a brand-new event designed specifically for developers and development teams. Sessions throughout the day will show how you can improve productivity and collaboration, provide a first look at new product updates, and go deep on AWS tools for development and delivery. Topics include how to speed up web and mobile application development and how to build and deliver faster by taking advantage of modern infrastructure, DevOps, and generative AI-enabled tools.

We’ll be starting the day with a keynote from Adam Seligman, Vice President of Developer Experience at AWS. And, to wrap up the event, there are closing sessions from Dr. Werner Vogels, CTO at Amazon, Harry Mower, Director, AWS Code Suite, and Doug Seven, Director of Software Development, Amazon CodeWhisperer. A panel of experts will join Werner to discuss the latest trends in generative AI and what it all means for developers. Harry and Doug will discuss trends in developer and team productivity. Of course, no event would be complete without our own Jeff Barr, VP of AWS Evangelism, who’ll be sharing a “Best practices and key takeaways” session with a recap of key stories, announcements, and moments from the day!

AWS staff will present the technical breakout sessions during the day, and AWS Heroes and Community Builders will also be involved, presenting community spotlights, which are an excellent chance to get involved with the global AWS community. During and after the event, there’s also a chance to sign up for an AWS Amplify hackathon, to be held in May, and an Amazon CodeCatalyst immersion day. Other fun events are also in the works—watch the event details page for more information.

AWS Developer Innovation Day Event Banner

There’s no up-front registration required to join AWS Developer Innovation Day, which, I’ll remind you, is also free to attend. Simply head on over to the event page and select Add to calendar. I’m excited to be joined by colleagues from the AWS on Air team to host the event, and we all hope you’ll choose to join in the fun and learning opportunities at this brand-new event!

Amazon CodeWhisperer, Free for Individual Use, is Now Generally Available

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/amazon-codewhisperer-free-for-individual-use-is-now-generally-available/

Today, Amazon CodeWhisperer, a real-time AI coding companion, is generally available, and it includes a CodeWhisperer Individual tier that’s free to use for all developers. Originally launched in preview last year, CodeWhisperer keeps developers in the zone and productive, helping them write code quickly and securely, without needing to break their flow by leaving the IDE to research something. Faced with creating code for complex and ever-changing environments, developers can improve their productivity and simplify their work by making use of CodeWhisperer inside their favorite IDEs, including Visual Studio Code, IntelliJ IDEA, and others. CodeWhisperer helps with creating code for routine or time-consuming, undifferentiated tasks, working with unfamiliar APIs or SDKs, making correct and effective use of AWS APIs, and other common coding scenarios such as reading and writing files, image processing, writing unit tests, and lots more.

Using just an email account, you can sign up and, in just a few minutes, become more productive writing code—and you don’t even need to be an AWS customer. For business users, CodeWhisperer offers a Professional tier that adds administrative features, like SSO and IAM Identity Center integration, policy control for referenced code suggestions, and higher limits on security scanning. And in addition to generating code suggestions for Python, Java, JavaScript, TypeScript, and C#, the generally available release also now supports Go, Rust, PHP, Ruby, Kotlin, C, C++, shell scripting, SQL, and Scala. CodeWhisperer is available to developers working in Visual Studio Code, IntelliJ IDEA, CLion, GoLand, WebStorm, Rider, PhpStorm, PyCharm, RubyMine, and DataGrip IDEs (when the appropriate AWS extensions for those IDEs are installed), or natively in AWS Cloud9 or the AWS Lambda console.

Helping developers stay in their flow is increasingly important as, under mounting time pressure to get their work done, they’re often forced to break away and turn to an internet search, sites such as StackOverflow, or their colleagues for help completing tasks. While this can help them obtain the starter code they need, leaving the IDE to search, ask questions in a forum, or find a colleague is disruptive. Instead, CodeWhisperer meets developers where they are most productive, providing recommendations in real time as they write code or comments in their IDE. During the preview we ran a productivity challenge, and participants who used CodeWhisperer were 27% more likely to complete tasks successfully and did so an average of 57% faster than those who didn’t use CodeWhisperer.

Code generation from a comment in CodeWhisperer
Code generation from a comment
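
To give a flavor of the experience (this is a hand-written illustration, not actual CodeWhisperer output), a developer might type the comment shown inside the class below and accept a whole-function suggestion along these lines in C#:

using System.IO;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

public static class S3Uploader
{
    // upload a local file to an S3 bucket
    public static async Task UploadFileAsync(string bucketName, string filePath)
    {
        using var client = new AmazonS3Client();
        var request = new PutObjectRequest
        {
            BucketName = bucketName,
            Key = Path.GetFileName(filePath),
            FilePath = filePath
        };
        await client.PutObjectAsync(request);
    }
}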

The code developers eventually locate may, however, contain issues such as hidden security vulnerabilities, be biased or unfair, or fail to handle open source responsibly. These issues won’t improve the developer’s productivity when they later have to resolve them. CodeWhisperer is the best coding companion when it comes to coding securely and using AI responsibly. To help you code responsibly, CodeWhisperer filters out code suggestions that might be considered biased or unfair, and it’s the only coding companion that can filter or flag code suggestions that may resemble particular open-source training data. It provides additional data for suggestions—for example, the repository URL and license—when code similar to training data is generated, helping lower the risk of using the code and enabling developers to reuse it with confidence.

Reference tracking in CodeWhisperer
Open-source reference tracking

CodeWhisperer is also the only AI coding companion with security scanning for finding and suggesting remediations for hard-to-detect vulnerabilities, scanning both generated and developer-written code for issues such as those in the Open Web Application Security Project (OWASP) top ten. If it finds a vulnerability, CodeWhisperer provides suggestions to help remediate the issue.

Scanning for vulnerabilities in CodeWhisperer
Scanning for vulnerabilities
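
As a hand-written illustration of the kind of issue such a scan targets (not actual scanner output), consider a path traversal vulnerability, one of the OWASP top ten categories, alongside a possible remediation:

using System;
using System.IO;

public static class FileReader
{
    // Vulnerable: builds a path directly from user input, allowing "../" traversal
    public static string ReadUserFile(string fileName)
    {
        var path = "/var/app/data/" + fileName; // unsanitized user input
        return File.ReadAllText(path);
    }

    // Remediated: resolve the path and verify it stays inside the base directory
    public static string ReadUserFileSafely(string fileName)
    {
        var baseDir = Path.GetFullPath("/var/app/data");
        var fullPath = Path.GetFullPath(Path.Combine(baseDir, fileName));
        if (!fullPath.StartsWith(baseDir + Path.DirectorySeparatorChar, StringComparison.Ordinal))
            throw new UnauthorizedAccessException("Access outside the data directory is not allowed.");
        return File.ReadAllText(fullPath);
    }
}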

Code suggestions provided by CodeWhisperer are not specific to working with AWS. However, CodeWhisperer is optimized for the most-used AWS APIs, for example those of AWS Lambda and Amazon Simple Storage Service (Amazon S3), making it the best coding companion for those building applications on AWS. While CodeWhisperer provides suggestions for general-purpose use cases across a variety of languages, the tuning performed using additional data on AWS APIs means you can be confident it provides the highest-quality, most accurate code generation you can get for working with AWS.

Meet Your New AI Code Companion Today
Amazon CodeWhisperer is generally available today to all developers—not just those with an AWS account or working with AWS—writing code in Python, Java, JavaScript, TypeScript, C#, Go, Rust, PHP, Ruby, Kotlin, C, C++, shell scripting, SQL, and Scala. You can sign up with just an email address, and, as I mentioned at the top of this post, CodeWhisperer offers an Individual tier that’s freely available to all developers. More information on the Individual tier, and pricing for the Professional tier, can be found at https://aws.amazon.com/codewhisperer/pricing.

AWS Week in Review – March 13, 2023

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/aws-week-in-review-march-13-2023/

It seems like only yesterday I was writing the last Week in Review post, at the end of January, and now here we are almost midway through March, almost into spring in the northern hemisphere, and close to a quarter of the way through 2023. Where does the time go?!

Last Week’s Launches
Here are some of the launches and other news from the past week that I want to bring to your attention:

New AWS Heroes: At the center of the AWS Community around the globe, Heroes share their knowledge and enthusiasm. Welcome to Ananda in Indonesia, and Aidan and Wendy in Australia, our newly announced Heroes!

General Availability of AWS Application Composer: Launched in preview during Dr. Werner Vogels’ re:Invent 2022 keynote, AWS Application Composer is a tool enabling the composition and configuration of serverless applications using a visual design surface. The visual design is backed by an AWS CloudFormation template, making it deployment-ready.

What I find particularly cool about Application Composer is that it also works on existing serverless application templates and round-trips changes to the template made in either a code editor or the visual designer. This makes it ideal for both new developers and experienced serverless developers with existing applications.

My colleague Channy’s post provides an introduction, and Application Composer is also featured in last Friday’s AWS on Air show, available to watch on-demand.

Get daily feature updates via Amazon SNS: One thing I’ve learned since joining AWS is that the service teams don’t stand still and are releasing something new pretty much every day. Sometimes, multiple things! This can, however, make it hard to keep up. So, I was interested to read that you can now receive daily feature updates, in email, by subscribing to an Amazon Simple Notification Service (Amazon SNS) topic. As usual, Jeff’s post has all the details you need to get started.

Using up to 10 GB of ephemeral storage for AWS Lambda functions: If you use Lambda for Extract-Transform-Load (ETL) jobs, or any data-intensive jobs that require temporary storage of data during processing, you can now configure up to 10 GB of ephemeral storage, mounted at /tmp, for your functions in six additional Regions – Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Asia Pacific (Melbourne), Europe (Spain), Europe (Zurich), and Middle East (UAE). More information on using ephemeral storage with Lambda functions can be found in this blog post.
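
As a quick hedged sketch (the function name here is hypothetical), the ephemeral storage size can be set through the AWS SDK for .NET, specified in MB from the 512 MB default up to 10,240 MB:

using Amazon.Lambda;
using Amazon.Lambda.Model;

var client = new AmazonLambdaClient();

// Raise /tmp storage for a data-intensive function to the 10 GB maximum
await client.UpdateFunctionConfigurationAsync(new UpdateFunctionConfigurationRequest
{
    FunctionName = "my-etl-function",                        // hypothetical function name
    EphemeralStorage = new EphemeralStorage { Size = 10240 } // size in MB
});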

Increased table counts for Amazon Redshift: Workloads that require large numbers of tables can now take advantage of up to 200K tables, avoiding the need to split tables across multiple data warehouses. The updated limit is available to workloads using the ra3.4xlarge, ra3.16xlarge, and dc2.8xlarge node types with Redshift Serverless and data warehouse clusters.

Faster, simpler permissions setup for AWS Glue: Glue is a serverless data integration and ETL service for discovering, preparing, moving, and integrating data intended for use in analytics and machine learning (ML) workloads. A new guided permissions setup process, available in the AWS Management Console, makes it simpler and easier to grant AWS Identity and Access Management (IAM) roles and users access to Glue and to use a default role for running jobs and working with notebooks. This simpler, guided approach helps users start authoring jobs and working with the Data Catalog without further setup.

Microsoft Active Directory authentication for the MySQL-Compatible Edition of Amazon Aurora: You can now use Active Directory, either with an existing on-premises directory or with AWS Directory Service for Microsoft Active Directory, to authenticate database users when accessing Amazon Aurora MySQL-Compatible Edition instances, helping reduce operational overhead. It also enables you to make use of native Active Directory credential management capabilities to manage password complexities and rotation, helping you stay in step with your compliance and security requirements.

Launch of the 2023 AWS DeepRacer League and new competition structure: The DeepRacer tracks are one of my favorite things to visit and watch at AWS events, so I was happy to learn the new 2023 league is now underway. If you’ve not heard of DeepRacer, it’s the world’s first global racing league featuring autonomous vehicles, enabling developers of all skill levels to not only compete to complete the track in the shortest time but also advance their knowledge of machine learning (ML) in the process. Along with the new league, there are now more chances to earn achievements and prizes using an all-new three-tier competition spanning national and regional races. Racers compete for a chance to win a spot in the World Championship, held at AWS re:Invent, and a $43,000 prize purse. What are you waiting for? Start your (ML) engines today!

AWS open-source news and updates: The latest newsletter highlighting open-source projects, tools, and demos from the AWS Community is now available. The newsletter is published weekly, and you can find edition 148 here.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Upcoming AWS Events
Here are some upcoming events you may be interested in checking out:

AWS Pi Day: March 14th is the third annual AWS Pi Day. Join in the celebrations of the 17th birthday of Amazon Simple Storage Service (Amazon S3) and the cloud in a live virtual event hosted on the AWS on Air channel. There’ll also be news and discussions on the latest innovations across data services on AWS, including storage, analytics, AI/ML, and more.

.NET developers and architects looking to modernize their applications will be interested in an upcoming webinar, Modernize and Optimize by Containerizing .NET Applications on AWS, scheduled for March 22nd. In this webinar, you’ll see demonstrations of how you can enhance the security of legacy .NET applications by modernizing them to containers, updating to a modern version of .NET, and running them on the latest versions of Windows. Registration for the online event is open now.

You can find details on all upcoming events, in-person and virtual, here.

New Livestream Shows
There are some new livestream shows, launched recently, that I’d like to bring to your attention:

My colleague Isaac has started a new .NET on AWS show, streaming on Twitch. The second episode was live last week; catch up here on demand. Episode 1 is also available here.

I mentioned AWS on Air earlier in this post, and hopefully you’ve caught our weekly Friday show streaming on Twitch, Twitter, YouTube, and LinkedIn. Or, maybe you’ve seen us broadcasting live from AWS events such as Summits or AWS re:Invent. But did you know that some of the hosts of the shows have recently started their own individual shows too? Check out these new shows below:

  • AWS on Air: Startup! – hosted by Jillian Forde, this show focuses on learning technical and business strategies from startup experts to build and scale your startup on AWS. The show runs every Tuesday at 10am PT/1pm ET.
  • AWS On Air: Under the Hood with AWS – in this show, host Art Baudo chats with guests, including AWS technical leaders and customers, about cloud infrastructure. In last week’s show, the discussion centered around Amazon Elastic Compute Cloud (Amazon EC2) Mac instances. Watch live every Tuesday at 2pm PT/5pm ET.
  • AWS on Air: Lockdown! – join host Kyle Dickinson each Tuesday at 11am PT/2pm ET for this show, covering a breadth of AWS security topics in an approachable way that’s suitable for all levels of AWS experience. You’ll encounter demos and guest speakers from AWS, AWS Heroes, and AWS Community Builders.
  • AWS on Air: Step up your GameDay – hosts AM Grobelny and James Spencer are joined by special guests to strategize and navigate through an AWS GameDay, a fun and challenge-oriented way to learn about AWS. You’ll find this show every second Wednesday at 11am PT/2pm ET.
  • AWS on Air: AMster & the Brit’s Code Corner – join AM Grobelny and me as we chat about and illustrate cloud development. In Beginners Corner, we answer your questions and try to demystify this strange thing called “coding”, and in Project Corner, we tackle slightly larger projects of interest to more experienced developers. There’s something for everyone in Code Corner, live on the third Thursday of each month at 11am PT/2pm ET.

You’ll find all these AWS on Air shows in the published schedule. We hope you can join us!

That’s all for this week – check back next Monday for another AWS Week in Review.

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

AWS Week in Review – January 30, 2023

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/aws-week-in-review-january-30-2023/

This week’s review post comes to you from the road, having just wrapped up sponsorship of NDC London. While there, we got to speak to many .NET developers, both new to AWS and experienced with it, all eager to learn more. Thanks to everyone who stopped by our expo booth to chat or ask the team questions!

.NET on AWS booth, NDC London 2023

Last Week’s Launches
My team will be back on the road to our next events soon, but first, here are just some launches that caught my attention while I was at the expo booth last week:

General availability of Porting Advisor for Graviton: AWS Graviton2 processors are custom-designed Arm64 processors that deliver increased price performance over comparable x86-64 processors. They’re suitable for a wide range of compute workloads on Amazon Elastic Compute Cloud (Amazon EC2), including application servers, microservices, high-performance computing (HPC), CPU-based ML inference, gaming, and many more. They’re also available in other AWS services, such as AWS Lambda and AWS Fargate, to name just a few. The new Porting Advisor for Graviton is a freely available, open-source command-line tool for analyzing the compatibility of applications you want to run on Graviton-based compute environments. It produces a report that highlights missing or outdated libraries and code that you may need to update in order to port your application to run on Graviton processors.

Runtime management controls for AWS Lambda: Automated feature updates, performance improvements, and security patches to runtime environments for Lambda functions are popular with many customers. However, some customers have asked for increased visibility into when these updates occur and control over when they’re applied. The new runtime management controls for Lambda provide optional capabilities for those customers who require more control over runtime changes. The controls are optional; by default, all your Lambda functions will continue to receive automatic updates. But, if you wish, you can now apply a runtime management configuration to your functions that specifies how you want updates to be applied. You can find full details on the new runtime management controls in this blog post on the AWS Compute Blog.
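
As a hedged sketch of applying such a configuration in code (assuming the AWS SDK for .NET surface mirrors the service’s PutRuntimeManagementConfig operation; the function name is hypothetical):

using Amazon.Lambda;
using Amazon.Lambda.Model;

var client = new AmazonLambdaClient();

// Apply runtime updates only when the function itself is next updated,
// rather than through the default automatic rollout
await client.PutRuntimeManagementConfigAsync(new PutRuntimeManagementConfigRequest
{
    FunctionName = "my-function", // hypothetical function name
    UpdateRuntimeOn = UpdateRuntimeOn.FunctionUpdate
});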

General availability of Amazon OpenSearch Serverless: OpenSearch Serverless featured in one of the livestream segments of the recent AWS on Air re:Invent Recap covering previews announced at the conference last December, and it is now generally available. As a serverless option for Amazon OpenSearch Service, it removes the need to configure, manage, or scale OpenSearch clusters, offering automatic provisioning and scaling of resources to enable fast ingestion and query responses.

Additional connectors for Amazon AppFlow: At AWS re:Invent 2022, I blogged about a release of new data connectors enabling data transfer from a variety of Software-as-a-Service (SaaS) applications to Amazon AppFlow. An additional set of 10 connectors, enabling connectivity from Asana, Google Calendar, JDBC, PayPal, and more, is also now available. Check out the full list of additional connectors launched this past week in this What’s New post.

AWS open-source news and updates: As usual, there’s a new edition of the weekly open-source newsletter highlighting new open-source projects, tools, and demos from the AWS Community. Read edition #143 here – LINK TBD.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Upcoming AWS Events
Check your calendars and sign up for these AWS events:

AWS Innovate Data and AI/ML edition: AWS Innovate is a free online event to learn the latest from AWS experts and get step-by-step guidance on using AI/ML to drive fast, efficient, and measurable results.

  • AWS Innovate Data and AI/ML edition for Asia Pacific and Japan is taking place on February 22, 2023. Register here.
  • Registrations for AWS Innovate EMEA (March 9, 2023) and the Americas (March 14, 2023) will open soon. Check the AWS Innovate page for updates.

You can find details on all upcoming events, in-person or virtual, here.

And finally, if you’re a .NET developer, my team will be at Swetugg, in Sweden, February 8-9, and DeveloperWeek, in Oakland, California, February 15-17. If you’re at either of these events, be sure to stop by and say hello!

That’s all for this week. Check back next Monday for another Week in Review!

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Announcing Amazon CodeCatalyst (preview), a Unified Software Development Service

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/announcing-amazon-codecatalyst-preview-a-unified-software-development-service/

Today, we announced the preview release of Amazon CodeCatalyst. A unified software development and delivery service, Amazon CodeCatalyst enables software development teams to quickly and easily plan, develop, collaborate on, build, and deliver applications on AWS, reducing friction throughout the development lifecycle.

In my time as a developer, the biggest excitement—besides shipping software to users—was the start of a new project, or being invited to join a project. Both came with the anticipation of building something cool and cutting new code—seeing an idea come to life. However, starting out was sometimes a slow process. My team or I would need to update our local development environments—or set up entirely new machines—with tools, libraries, and programming frameworks. We had to create source code repositories and set up other shared tools such as Jira, Confluence, or Jenkins, configure build pipelines and other automation workflows, create test environments, and so on. Day-to-day maintenance of development and build environments consumed valuable team cycles and energy. Collaboration within the team took effort, too, because tools to share information and provide a single source of truth weren’t available. Context switching between projects and dealing with conflicting dependencies in those projects, e.g., Python 3.6 for project X and Python 2.7 for project Y—especially when we had only a single machine to work on—further increased the burden.

It doesn’t seem to have gotten any better! These days, when talking to developers about their experiences, I often hear them express that they feel modern development has become even more complicated. This is due to having to select and configure a wider collection of modern frameworks and libraries, tools, cloud services, continuous integration and delivery pipelines, and many other choices that all need to work together to deliver the application experience. What was once manageable by one developer on one machine is now a sprawling, dynamic, complex net of decisions and tradeoffs, made even more challenging by the need to coordinate all this across dispersed teams.

Enter Amazon CodeCatalyst
I’ve spent some time talking with the team behind Amazon CodeCatalyst about their sources of inspiration and goals. Taking feedback from both new and experienced developers and service teams here at AWS, they examined the challenges typically experienced by teams and individual developers when building software for the cloud. Having gathered and reviewed this feedback, they set about creating a unified tool that smooths out the rough edges that needlessly slow down software delivery, and they added features to make it easier for teams to work together and collaborate. Features in Amazon CodeCatalyst to address these challenges include:

  • Blueprints that set up the project’s resources—not just scaffolding for new projects, but also the resources needed to support software delivery and deployment.
  • On-demand cloud-based Dev Environments, to make it easy to replicate consistent development environments for you or your teams.
  • Issue management, enabling tracing of changes across commits, pull requests, and deployments.
  • Automated build and release (CI/CD) pipelines using flexible, managed build infrastructure.
  • Dashboards to surface a feed of project activities such as commits, pull requests, and test reporting.
  • The ability to invite others to collaborate on a project with just an email.
  • Unified search, making it easy to find what you’re looking for across users, issues, code, and other project resources.

There’s a lot in Amazon CodeCatalyst that I don’t have space to cover in this post, so I’m going to briefly cover some specific features, like blueprints, Dev Environments, and project collaboration. Other upcoming posts will cover additional features.

Project Blueprints
When I first heard about blueprints, they sounded like a feature to scaffold some initial code for a project. However, they’re much more! Parameterized application blueprints enable you to set up shared project resources to support the application development lifecycle and team collaboration in minutes—not just initial starter code for an application. The resources that a blueprint creates for a project include a source code repository, complete with initial sample code and AWS service configuration for popular application patterns, which follow AWS best practices by default. If you prefer, an external Git repository such as GitHub may be used instead. The blueprint can also add an issue tracker, but external trackers such as Jira can also be used. Then, the blueprint adds a build and release pipeline for CI/CD, which I’ll come to shortly, as well as other integrated tooling.

The project resources and integrated tools set up using blueprints, including the CI/CD pipeline and the AWS resources to host your application, make it so that you can press “deploy” and get sample code running in a few minutes, enabling you to jump right in and start working on your specific business logic.

Project blueprints when starting a new project

At launch, customers can choose from blueprints with TypeScript, Python, Java, .NET, and JavaScript for languages and React, Angular, and Vue frameworks, with more to come. And you don’t need to start with a blueprint. You can build projects with workflows that run on anything that works with Linux and Windows operating systems.

Cloud-Based Dev Environments
Development teams can often run into a problem of “environment drift,” where one team member has a slightly different version of a toolchain or library compared to everyone else or to the test environments. This can introduce subtle bugs that might go unnoticed for some time. The Dev Environment specifications, and the other shared resources, that blueprints create help ensure there’s no unnecessary variance, and everyone on the team gets the same setup, providing a consistent, repeatable experience between developers.

Amazon CodeCatalyst uses a devfile to define the configuration of an on-demand, cloud-based Dev Environment, which currently supports four resizable instance size options with 2, 4, 8, or 16 vCPUs. The devfile defines and configures all of the resources needed to code, test, and debug a given project, minimizing the time development team members need to spend creating and maintaining their local development environments. Devfiles, which are added to the source code repository by the selected blueprint, can also be modified if required. With Dev Environments, context switching between projects incurs less overhead—with one click, you can simply switch to a different environment and start working. This means you can easily work concurrently on multiple codebases without reconfiguring. Being on-demand, Dev Environments can also be paused, restarted, or deleted as needed.

Below is an example of a devfile that bootstraps a Dev Environment.

schemaVersion: 2.0.0
# metadata describing the Dev Environment stack
metadata:
  name: aws-universal
  version: 1.0.1
  displayName: AWS Universal
  description: Stack with AWS Universal Tooling
  tags:
    - aws
    - a12
  projectType: aws
# commands that can be referenced from the lifecycle events below
commands:
  - id: npm_install
    exec:
      component: aws-runtime
      commandLine: "npm install"
      workingDir: /projects/spa-app
# run npm_install automatically each time the Dev Environment starts
events:
  postStart:
    - npm_install
# the container image and persistent volume that make up the environment
components:
  - name: aws-runtime
    container:
      image: public.ecr.aws/aws-mde/universal-image:latest
      mountSources: true
      volumeMounts:
        - name: docker-store
          path: /var/lib/docker
  - name: docker-store
    volume:
      size: 16Gi

Developers working in cloud-based Dev Environments provisioned by Amazon CodeCatalyst can use AWS Cloud9 as their IDE. However, they can just as easily work with Amazon CodeCatalyst from other IDEs on their local machines, such as JetBrains IntelliJ IDEA Ultimate, PyCharm Pro, GoLand, and Visual Studio Code. Developers can also create Dev Environments from within their IDE, such as Visual Studio Code, or, for JetBrains IDEs, using the JetBrains Gateway app. Below, JetBrains IntelliJ is being used.

Editing an application source file in JetBrains IntelliJ

Build and Release Pipelines
The build and release pipelines created by a blueprint run on flexible, managed infrastructure. The pipelines can use on-demand compute or preprovisioned builds, including a choice of machine sizes, and you can bring your own container environments. You can incorporate build actions that are built in or provided by partners (e.g., Mend, which provides a software composition analysis build action), and you can also incorporate GitHub Actions to compose fully automated pipelines. Pipelines are configurable using either a visual editor or YAML files.

Build and release pipelines enable deployment to popular AWS services, including Amazon Elastic Container Service (Amazon ECS), AWS Lambda, and Amazon Elastic Compute Cloud (Amazon EC2). Amazon CodeCatalyst makes it trivial to set up test and production environments and deploy using pipelines to one or many Regions or even multiple accounts for security.

Running automated workflow

Project Collaboration
As a unified software development service, Amazon CodeCatalyst not only makes it easier to get started building and delivering applications on AWS, it helps developers of all levels collaborate on projects through a single shared project space and source of truth. Developers can be invited to collaborate using just an email. On accepting the invitation, the developer sees the full project context and can begin work at once using the project’s Dev Environments—no need to spend time updating or reconfiguring their local machine with required tools, libraries, or other prerequisites.

Existing members of an Amazon CodeCatalyst space, or new members using their email, can be invited to collaborate on a project:

Inviting new members to collaborate on a project

Each will receive an invitation email containing a link titled Accept Invitation, which when clicked, opens a browser tab to sign in. Once signed in, they can view all the projects in the Amazon CodeCatalyst space they’ve been invited to and can also quickly switch to other spaces in which they are the owner or to which they’ve been invited.

Projects I'm invited to collaborate on

From there, they can select a project and get an immediate overview of where things stand, for example, the status of recent workflows, any open pull requests, and available Dev Environments.

CodeCatalyst project summary

On the Issues board, team members can see which issues need to be worked on, select one, and get started.

Viewing issues

Being able to immediately see the context for the project, together with access to on-demand, cloud-based Dev Environments, helps new collaborators start contributing more quickly, eliminating setup delays.

Get Started with Amazon CodeCatalyst in the Free Tier Today!
Blueprints that scaffold not just application code but also shared project resources supporting the development and deployment of applications, issue tracking, invite-by-email collaboration, automated workflows, and more are all available today in the newly released preview of Amazon CodeCatalyst to help accelerate your cloud development and delivery efforts. Learn more in the Amazon CodeCatalyst User Guide. And, as I mentioned earlier, additional blog posts and other supporting content are planned by the team to dive into the range of features in more detail, so be sure to look out for them!

Announcing Additional Data Connectors for Amazon AppFlow

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/announcing-additional-data-connectors-for-amazon-appflow/

Gathering insights from data is a more effective process if that data isn’t fragmented across multiple systems and data stores, whether on premises or in the cloud. Amazon AppFlow provides bidirectional data integration between on-premises systems and applications, SaaS applications, and AWS services. It helps customers break down data silos using a low- or no-code, cost-effective solution that’s easy to reconfigure in minutes as business needs change.

Today, we’re pleased to announce the addition of 22 new data connectors for Amazon AppFlow, including:

  • Marketing connectors (e.g., Facebook Ads, Google Ads, Instagram Ads, LinkedIn Ads).
  • Connectors for customer service and engagement (e.g., MailChimp, Sendgrid, Zendesk Sell or Chat, and more).
  • Business operations (Stripe, QuickBooks Online, and GitHub).

In total, Amazon AppFlow now supports over 50 integrations with different SaaS applications and AWS services. This growing set of connectors can be combined to give you 360-degree visibility across the data your organization generates. For instance, you could combine CRM (Salesforce), e-commerce (Stripe), and customer service (ServiceNow, Zendesk) data to build integrated analytics and predictive modeling that can guide your next-best-offer decisions and more. Using web (Google Analytics 4) and social surfaces (Facebook Ads, Instagram Ads) allows you to build comprehensive reporting for your marketing investments, helping you understand how customers are engaging with your brand. Or, sync ERP data (SAP S/4HANA) with custom order management applications running on AWS. For more information on the current range of connectors and integrations, visit the Amazon AppFlow integrations page.

Datasource connectors for Amazon AppFlow

Amazon AppFlow and AWS Glue Data Catalog
Amazon AppFlow has also recently announced integration with the AWS Glue Data Catalog to automate the preparation and registration of your SaaS data into the AWS Glue Data Catalog. Previously, customers using Amazon AppFlow to store data from supported SaaS applications into Amazon Simple Storage Service (Amazon S3) had to manually create and run AWS Glue Crawlers to make their data available to other AWS services such as Amazon Athena, Amazon SageMaker, or Amazon QuickSight. With this new integration, you can populate AWS Glue Data Catalog with a few clicks directly from the Amazon AppFlow configuration without the need to run any crawlers.

To simplify data preparation and improve query performance when using analytics engines such as Amazon Athena, Amazon AppFlow also now enables you to organize your data into partitioned folders in Amazon S3. Amazon AppFlow also automates the aggregation of records into files that are optimized to the size you specify. This increases performance by reducing processing overhead and improving parallelism.

You can find more information on the AWS Glue Data Catalog integration in the recent What’s New post.

Getting Started with Amazon AppFlow
Visit the Amazon AppFlow product page to learn more about the service and view all the available integrations. To help you get started, there’s also a variety of videos and demos, plus some sample integrations on GitHub. And finally, should you need a custom integration, try the Amazon AppFlow Connector SDK, detailed in the Amazon AppFlow documentation. The SDK enables you to build your own connectors to securely transfer data between your custom endpoint, application, or other cloud service and Amazon AppFlow’s library of managed SaaS and AWS connectors.

— Steve

Classifying and Extracting Mortgage Loan Data with Amazon Textract

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/classifying-and-extracting-mortgage-loan-data-with-amazon-textract/

Mortgage loan applications, at least in the United States, comprise 500 or more pages of diverse documents. For applications to be reviewed, all these documents need to be classified and the data on each form extracted. This isn’t as easy as it might sound! Besides the different data structures in each document, the same data element may have different names on different documents—for example, SSN, Social Security Number, and Tax ID all refer to the same data.

Today, a new Analyze Lending API, for analyzing and classifying the documents contained in mortgage loan application packages and extracting the data they contain, is available for Amazon Textract. The new API was created in response to requests from major lenders in the industry to help them process applications faster and reduce errors, which improves the end-customer experience and lowers operating costs.

Until now, classification and extraction of data from mortgage loan application packages have been human-intensive tasks, although some lenders have used a hybrid approach, using technology such as Amazon Textract. However, customers told us that they needed even greater workflow automation to speed up automation efforts and reduce human error so that their staff could focus on higher-value tasks.

The new API also provides additional value-add capabilities. It can detect which documents contain signatures and which don’t. It also provides a summary output of the documents in a mortgage application package and identifies select important documents, such as bank statements and 1003 forms, that would normally be present. The new workflow is powered by a collection of machine learning (ML) models. When a mortgage application package is uploaded, the workflow classifies the documents in the package before routing them to the right ML model, based on their classification, for data extraction.

Test-Driving the New Analyze Lending API
Although the new API is intended for lenders to incorporate into their business process workflows and applications, anyone can actually try it using the Amazon Textract console. This enables you to see how the API classifies documents and extracts the data elements they contain. If you’re interested in the application of machine learning and artificial intelligence, this may be of interest to you even if you’re not processing a mortgage application package.

I start by opening the Amazon Textract console, expanding Analyze Lending in the navigation panel, and then selecting Demo. The demo console immediately analyzes a set of synthetic test files, and outputs the results shown below (you can always restart the demo by clicking the Reset demo button). I get a summary of the analysis results and a document carousel for each of the documents in the package. The demo console also has a handy help panel containing (among other things) a summary of terminology related to the documents.

Mortgage document analysis summary, carousel, and terminology help text

In the carousel I can see that one document has a signature badge, indicating a signature was detected, but, before taking a look, if I scroll the carousel, I can see that one document was labeled Unclassified:

Unclassified document notification

Returning to the document marked with the signature badge in the carousel, I can see that it’s a check. Signature detection is usually a highly manual process, so having the document analysis automatically mark when a signature is detected is a significant time-saver.

Signature detection

Payslips are another document type that customers have told us can be difficult and time-consuming to handle. Selecting the detected payslip in the carousel shows the data extracted from it.

Payslip detection and data extraction

The synthetic data in the demo console provides an overview of how the API is able to analyze, classify, and extract data from the documents in a mortgage application package. However, I can also use my own documents. To do this in the demo console, I click the Upload package button and provide a single file containing the documents to analyze (for the demo console, up to 5 MB and a maximum of 10 pages). Outside the demo console, the API supports documents with up to 3,000 pages.
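
Outside the console, the analysis runs as an asynchronous job against a package stored in Amazon S3. Below is a minimal hedged sketch using the AWS SDK for .NET; the bucket and object names are hypothetical, and production code would typically use a notification channel rather than polling:

using System;
using System.Threading.Tasks;
using Amazon.Textract;
using Amazon.Textract.Model;

var client = new AmazonTextractClient();

// Start the asynchronous analysis of a mortgage application package stored in S3
var start = await client.StartLendingAnalysisAsync(new StartLendingAnalysisRequest
{
    DocumentLocation = new DocumentLocation
    {
        S3Object = new S3Object { Bucket = "my-loan-packages", Name = "application-123.pdf" }
    }
});

// Poll until the job completes, then read the per-document classification summary
GetLendingAnalysisSummaryResponse summary;
do
{
    await Task.Delay(TimeSpan.FromSeconds(5));
    summary = await client.GetLendingAnalysisSummaryAsync(
        new GetLendingAnalysisSummaryRequest { JobId = start.JobId });
} while (summary.JobStatus == JobStatus.IN_PROGRESS);

Console.WriteLine($"Analysis finished with status: {summary.JobStatus}");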

The results, for both the synthetic data and your own, can be downloaded by clicking the Download results button. This provides a .zip file containing four files—two are the raw JSON responses from the API, and the other two are CSV-format files containing the summary (summary.csv) and the extracted data (extractions.csv). Both files are in key-value format.

The contents of the summary data file, for the synthetic test data, are below.

'DocumentName,'FirstPage,'LastPage
"'Payslips","'1","'1"
"'Checks","'2","'2"
"'Identity document","'3","'3"
"'1099 DIV","'4","'4"
"'Bank statement","'5","'5"
"'W2","'6","'6"
"'Unclassified","'7","'7"

Below is an example of the data contained in the extractions file.

'key,'value
"'PAY PERIOD END DATE","'7/18/2008"
"'PAY DATE","'7/25/2008"
"'BORROWER NAME","'JOHN STILES"
"'BORROWER ADDRESS","'101 MAIN STREET ANYTOWN, USA 12345"
"'COMPANY NAME","'ANY COMPANY CORP."
"'COMPANY ADDRESS","'475 ANY AVENUE ANYTOWN, USA 10101"
"'FEDERAL FILING STATUS","'Married"
"'STATE FILING STATUS","'2"
"'CURRENT GROSS PAY","'$ 452.43"
"'YTD GROSS PAY","'23,526.80"
"'CURRENT NET PAY","'$ 291.90"
"'REGULAR HOURLY RATE","'10.00"
"'HOLIDAY HOURLY RATE","'10.00"
"'WARNINGS MESSAGES NOTES","'EFFECTIVE THIS PAY PERIOD YOUR REGULAR HOURLY RATE HAS BEEN CHANGED FROM $8.00 TO $10.00 PER HOUR."
"'CURRENT REGULAR PAY","'320"
...

Try the Analyze Lending API Yourself
The new API is available in all Regions where Amazon Textract is offered, but do be aware that the workflow and processing are focused on US-centric documents. Pricing for the new API is the same as for the existing table, form, and query features. You can find more details on the service pricing page. Finally, you can read more on the API in the Developer Guide.

Explore the new Analyze Lending API for yourself today in the Amazon Textract console!

— Steve

Automated in-AWS Failback for AWS Elastic Disaster Recovery

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/automated-in-aws-failback-for-aws-elastic-disaster-recovery/

I first covered AWS Elastic Disaster Recovery (DRS) in a 2021 blog post. In that post, I described how DRS “enables customers to use AWS as an elastic recovery site for their on-premises applications without needing to invest in on-premises DR infrastructure that lies idle until needed. Once enabled, DRS maintains a constant replication posture for your operating systems, applications, and databases.” I’m happy to announce that, today, DRS now also supports in-AWS failback, adding to the existing support for non-disruptive recovery drills and on-premises failback included in the original release.

I also wrote in my earlier post that drills are an important part of disaster recovery since, if you don’t test, you simply won’t know for sure that your disaster recovery solution will work properly when you need it to. However, customers rarely like to test because it’s a time-consuming activity and also disruptive. Automation and simplification encourage frequent drills, even at scale, enabling you to be better prepared for disaster, and now you can use them irrespective of whether your applications are on-premises or in AWS. Non-disruptive recovery drills provide confidence that you will meet your recovery time objectives (RTOs) and recovery point objectives (RPOs) should you ever need to initiate a recovery or failback. More information on RTOs and RPOs, and why they’re important to define, can be found in the recovery objectives documentation.

The new automated support provides a simplified and expedited experience to fail back Amazon Elastic Compute Cloud (Amazon EC2) instances to the original Region, and both failover and failback processes (for on-premises or in-AWS recovery) can be conveniently started from the AWS Management Console. Also, for customers that want to customize the granular steps that make up a recovery workflow, DRS provides three new APIs, linked at the bottom of this post.

Failover vs. Failback
Failover is switching the running application to another Availability Zone, or even a different Region, should outages or issues occur that threaten the availability of the application. Failback is the process of returning the application to its original on-premises location or Region. For failovers to another Availability Zone, customers who are agnostic to the zone may continue running the application in its new zone indefinitely if required. In this case, they will reverse the recovery replication so the recovered instance is protected for future recovery. However, if the failover was to a different Region, it’s likely customers will want to eventually fail back and return to the original Region once the issues that caused the failover have been resolved.

The images below illustrate architectures for in-AWS applications protected by DRS. The architecture in the first image is for cross-Availability Zone scenarios.

Cross-Availability Zone architecture for DRS

The architecture diagram below is for cross-Region scenarios.

Cross-Region architecture for DRS

Let’s assume an incident occurs with an in-AWS application, so we initiate a failover to another AWS Region. When the issue has been resolved, we want to fail back to the original Region. The following animation illustrates the failover and failback processes.

Illustration of the failover and failback processes

Learn more about in-AWS failback with Elastic Disaster Recovery
As I mentioned earlier, three new APIs are also available for customers who want to customize the granular steps involved. The documentation for these can be found using the links below.

The new in-AWS failback support is available in all Regions where AWS Elastic Disaster Recovery is available. Learn more about AWS Elastic Disaster Recovery in the User Guide. For specific information on the new failback support, I recommend consulting this topic in the service User Guide.

— Steve

AWS Week in Review – November 14, 2022

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/aws-week-in-review-november-14-2022/

It’s now just two weeks to AWS re:Invent in Las Vegas, and the pace is picking up, both here on the News Blog and throughout AWS, as everyone gets ready for the big event! I hope you get the chance to join us; I’ve shared links and other information at the bottom of this post. First, though, let’s dive straight into this week’s review of news and announcements from AWS.

Last Week’s Launches
As usual, let’s start with a summary of some launches from the last week that I want to remind you of:

New Switzerland Region – First and foremost, AWS has opened a new Region, this time in Switzerland. Check out Seb’s post here on the News Blog announcing the launch.

New AWS Resource Explorer – If you’ve ever spent time searching for specific resources in your AWS account, especially across Regions, be sure to take a look at the new AWS Resource Explorer, described in this post by Danilo. Once enabled, it builds and maintains indexes of the resources in your account (you control which resources are indexed). With the indexes in place, you can issue queries to arrive at the required resource more quickly, without jumping between different Regions and service dashboards in the Management Console.
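
If you’d like a feel for the query experience, here’s a minimal sketch using the AWS SDK for Python (boto3). It assumes the indexes and a default view have already been set up in the account, and the tag-based query string is illustrative:

    import boto3

    # Assumes the client Region hosts the aggregator index and a default view.
    explorer = boto3.client("resource-explorer-2")

    # Search across indexed Regions; the query syntax shown is illustrative.
    response = explorer.search(QueryString="tag:stage=prod", MaxResults=50)
    for resource in response["Resources"]:
        print(resource["ResourceType"], resource["Region"], resource["Arn"])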

Amazon Lightsail domain registration and DNS autoconfiguration – Amazon Lightsail users can now take advantage of new support for registering domain names with automatic configuration of DNS records. Within the Lightsail console, you’re now able to create and register an Amazon Route 53 domain with just a few clicks.

New models for Amazon SageMaker JumpStart – Two new state-of-the-art models have been released for Amazon SageMaker JumpStart. SageMaker JumpStart provides pretrained, open-source models covering a wide variety of problem types that help you get started with machine learning. The first new model, Bloom, can be used to complete sentences or generate long paragraphs of text in 46 different languages. The second model, Stable Diffusion, generates realistic images from given text. Find out more about the new models in this What’s New post.

Mac instances and macOS Ventura – Amazon Elastic Compute Cloud (Amazon EC2) now has support for running the latest version of macOS, Ventura (13.0), on both EC2 x86 Mac and EC2 M1 Mac instances. These instances enable you to provision and run macOS environments in the AWS Cloud, for developers creating apps for iPhone, iPad, Mac, Apple Watch, Apple TV, and Safari.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Some other news items you may want to explore:

AWS Open Source News and Updates – This blog is published each week, and Installment 135 is now available, highlighting new open-source projects, tools, and demos from the AWS community.

Upcoming AWS Events
AWS re:Invent 2022 – As I noted at the top of this post, we’re now just two weeks away from the event! Join us live in Las Vegas November 28–December 2 for keynotes, opportunities for training and certification, and over 1,500 technical sessions. If you are joining us, be sure to check out the re:Invent 2022 Attendee Guides, each curated by an AWS Hero, AWS industry team, or AWS partner.

If you can’t join us live in Las Vegas, be sure to join us online to watch the keynotes and leadership sessions. My cohosts and I on the AWS on Air show will also be livestreaming daily from the event, chatting with service teams and special guests about all the launches and other announcements. You can find us on Twitch.tv (we’ll be on the front page throughout the event), the AWS channel on LinkedIn Live, Twitter.com/awsonair, and YouTube Live.

And one final update for the event – if you’re a .NET developer, be sure to check out the XNT track in the session catalog to find details on the seven breakouts, three chalk talks, and the workshop we have available for you at the conference!

Check back next Monday for our last week in review before the start of re:Invent!

— Steve

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS.

AWS Batch for Amazon Elastic Kubernetes Service

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/aws-batch-for-amazon-elastic-kubernetes-service/

Today I’m pleased to announce AWS Batch for Amazon Elastic Kubernetes Service (Amazon EKS). AWS Batch for Amazon EKS is ideal for customers who no longer want to shoulder the burden of configuring, fine-tuning, and managing Kubernetes clusters and pods to use with their batch processing workflows. Furthermore, there is no charge for this service. You only pay for the resources that your batch jobs launch.

When I’ve previously considered Kubernetes, it appeared to be focused on the management and hosting of microservice workloads. I was therefore surprised to discover that Kubernetes is also used by some customers to run large-scale, compute-intensive batch workloads. The differences between batch and microservice workloads mean that using Kubernetes for batch processing can be difficult and requires you to invest significant time in custom configuration and management to fine-tune a suitable solution.

Microservice and batch workloads on Kubernetes
Before we look further at AWS Batch for Amazon EKS, let’s consider some of the important differences between batch and microservice workloads to help set some context on why running batch workloads on Kubernetes can be difficult:

  • Microservice workloads are assumed to start and not stop—we expect them to be continuously available. In contrast, batch workloads run to completion and then exit—regardless of success or failure.
  • The results from a batch workload might not be available for several minutes—and sometimes hours or even days. Microservice workloads are expected to respond to requests within milliseconds.
  • We usually deploy microservice workloads across several Availability Zones to ensure high availability. This isn’t a requirement for batch workloads. Although we might distribute a batch job to allow it to process different input data in a distributed analysis, we more typically want to prioritize fast and optimal access to resources the job needs within the Availability Zone in which it is running.
  • Microservice and batch workloads scale differently. For microservices, scaling is generally predictable and usually linear as load increases (or decreases). With batch workloads, you might first perform an initial, or infrequently repeated, proof-of-concept run to analyze performance and discover the correct tuning needed for a full production run. The difference in scale between the two can be several orders of magnitude. Furthermore, with batch workloads, we might scale to an extreme level for a run, then scale back to zero instances for long periods of time, sometimes months.

Although third-party frameworks can help with running batch workloads on Kubernetes, you can also roll your own. Whichever approach you take, significant gaps and challenges can remain in handling the undifferentiated heavy lifting of building, configuring, and maintaining custom batch solutions. Then you also need to consider the scheduling, placing, and scaling of batch workloads on Kubernetes in a cost-effective manner. So how does AWS Batch on Amazon EKS help?

AWS Batch for Amazon EKS
AWS Batch for Amazon EKS offers a fully managed service to run batch workloads using clusters hosted on Amazon Elastic Compute Cloud (Amazon EC2) with no need to install and manage complex, custom batch solutions to address the differences highlighted earlier. AWS Batch provides a scheduler that controls and runs high-volume batch jobs, together with an orchestration component that evaluates when, where, and how to place jobs submitted to a queue. There’s no need for you, as the user, to coordinate any of this work—you just submit a job request into the queue.

Job queueing, dependency tracking, retries, prioritization, compute resource provisioning for Amazon EC2 On-Demand and Spot Instances, and pod submission are all handled using a serverless queue. As a managed service, AWS Batch for Amazon EKS enables you to reduce your operational and management overhead and focus instead on your business requirements. It provides integration with other services such as AWS Identity and Access Management (IAM), Amazon EventBridge, and AWS Step Functions and allows you to take advantage of other partners and tools in the Kubernetes ecosystem.

When running batch jobs on Amazon EKS clusters, AWS Batch is the main entry point to submit workload requests. Based on the queued jobs, AWS Batch then launches worker nodes in your cluster to process the jobs. These nodes are kept separate, in a distinct namespace, from your other node groups in Amazon EKS. Similarly, pods belonging to your other workloads are isolated from the nodes used with AWS Batch.

How it works
AWS Batch uses managed Amazon EKS clusters, which need to be registered with AWS Batch, and permissions set so that AWS Batch can launch and manage compute environments in those clusters to process jobs submitted to the queue. You can find instructions on how to launch a managed cluster that AWS Batch can use in this topic in the Amazon EKS User Guide. Instructions for configuring permissions can be found in the AWS Batch User Guide.

Once one or more clusters have been registered, and permissions set, users can submit jobs to the queue. When a job is submitted, the following actions take place to process the request:

  • On receiving a job request, the queue dispatches a request to the configured compute environment for resources. If an AWS Batch managed scaling group does not yet exist, one is created, and AWS Batch then starts launching Amazon EC2 instances in the group. These new instances are added to the AWS Batch Kubernetes namespace of the cluster.
  • The Kubernetes scheduler places any configured DaemonSet on the node.
  • Once the node is ready, AWS Batch starts sending pod placement requests to your cluster, using labels and taints to make the placement choices for the pods, bypassing much of the logic of the Kubernetes scheduler.
  • This process is repeated, scaling as needed across more EC2 instances in the scaling group until the maximum configured capacity is reached.
  • If the job queue has another compute environment defined, such as one configured to use Spot instances, it will launch additional nodes in that compute environment.
  • Once all work is complete, AWS Batch removes the nodes from the cluster, and terminates the instances.

These steps are illustrated in the animation below.

Animation showing the steps AWS Batch takes when processing a request using an Amazon EKS cluster
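
To give a sense of the submission side of this workflow, here’s a hedged sketch using the AWS SDK for Python (boto3). The job definition, queue, and image names are hypothetical placeholders, and it assumes a job queue backed by a registered EKS compute environment already exists:

    import boto3

    batch = boto3.client("batch")

    # Register a job definition whose pod runs a single container on the cluster.
    batch.register_job_definition(
        jobDefinitionName="sample-eks-job-def",  # hypothetical name
        type="container",
        eksProperties={
            "podProperties": {
                "containers": [
                    {
                        "image": "public.ecr.aws/amazonlinux/amazonlinux:2",
                        "command": ["sh", "-c", "echo hello from AWS Batch on EKS"],
                        "resources": {"requests": {"cpu": "1", "memory": "512Mi"}},
                    }
                ]
            }
        },
    )

    # Submit the job; AWS Batch handles node scaling and pod placement from here.
    batch.submit_job(
        jobName="sample-eks-job",
        jobQueue="my-eks-job-queue",  # hypothetical queue name
        jobDefinition="sample-eks-job-def",
    )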

Start using your clusters with AWS Batch today
AWS Batch for Amazon Elastic Kubernetes Service (Amazon EKS) is available today. As I noted earlier, there is no charge for this service, and you pay only for the resources your jobs consume. To learn more, visit the Getting Started with Amazon EKS topic in the AWS Batch User Guide. There is also a self-guided workshop to help introduce you to AWS Batch on Amazon EKS.

— Steve

AWS Week in Review – October 17, 2022

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/aws-week-in-review-october-17-2020/

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Monday means it’s time for another Week in Review post, so, without further ado, let’s dive right in!

Last Week’s Launches
Here are some launch announcements from last week that you may have missed.

AWS Directory Service for Microsoft Active Directory is now available on Windows Server 2019, and all new directories will run on this server platform. Those of you with existing directories can choose to update with either a few clicks in the AWS Managed Microsoft AD console or programmatically using an API. With either approach, you can update at a time convenient to you and your organization between now and March 2023. After March 2023, directories will be updated automatically.

Users of SAP Solution Manager can now use automated deployments to provision it, in accordance with AWS and SAP best practices, to both single-node and distributed architectures using AWS Launch Wizard.

AWS Activate is a program that offers smaller early-stage businesses, as well as more advanced digital businesses, free tools, resources, and the opportunity to apply for credits, helping them get started quickly on AWS. The program is now open to any self-identified startup.

Amazon QuickSight users who employ row-level security (RLS) to control access to restricted datasets will be interested in a new feature that enables you to ask questions against topics in these datasets. User-based rules control the answers received to questions and any auto-complete suggestions provided when the questions are being framed. This ensures that users only ever receive answer data that they are granted permission to access.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
This interesting blog post focuses on the startup Pieces Technologies, which is putting predictive artificial intelligence (AI) and machine learning (ML) tools to work on AWS to predict and offer clinical insights on patient outcomes such as projected discharge dates, anticipated clinical and non-clinical barriers to discharge, and risk of readmission. To help healthcare teams work more efficiently, the insights are provided in natural language and seek to optimize the overall clarity of a patient’s clinical issues.

As usual, there’s a new installment of the AWS open-source news and updates newsletter. The newsletter is published weekly to bring you up to date on the latest news on open-source projects, posts, and events.

Upcoming Events
Speaking of upcoming events, the following are some you may be interested in joining, especially if you work with .NET:

Looking to modernize .NET workloads using Windows containers on AWS? There’s a free webinar, with a follow-along lab, in just a couple of days, on October 20. You can find more details and register here.

My .NET colleagues are also hosting another webinar on November 2 related to building modern .NET applications on AWS. If you’re curious about the hosting and development capabilities of AWS for .NET applications, this is a webinar you should definitely check out. You’ll find further information and registration here.

And finally, a reminder that reserved seating for sessions at AWS re:Invent 2022 is now open. We’re now just six weeks away from the event! There are lots of great sessions competing for your attention, but the ones of particular interest to me relate to .NET: at this year’s event we have seven breakouts, three chalk talks, and a workshop for you. You can find all the details using the .NET filter in the session catalog (the sessions all start with the prefix XNT, by the way).

That’s all for this week. Check back next Monday for another AWS Week in Review!

— Steve

New – Direct VPC Routing Between On-Premises Networks and AWS Outposts Rack

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/new-direct-vpc-routing-between-on-premises-networks-and-aws-outposts-rack/

Today, we announced direct VPC routing for AWS Outposts rack. This enables you to connect Outposts racks and on-premises networks using simplified IP address management. Direct VPC routing automatically advertises Amazon Virtual Private Cloud (Amazon VPC) subnet CIDR addresses to on-premises networks. This enables you to use the private IP addresses of resources in your VPC when communicating with your on-premises network. Furthermore, you can enable direct VPC routing using a self-serve process without needing to contact AWS.

AWS Outposts rack

If you’re unfamiliar, AWS Outposts rack, part of the Outposts family, is a fully managed service that brings the same AWS infrastructure, AWS services, APIs, and tools to virtually any on-premises data center or co-location space for a consistent hybrid experience. It’s ideal for workloads that require low-latency access to on-premises systems, local data processing, data residency, and migration of applications with local system interdependencies. Once installed, your Outposts rack becomes an extension of your VPC, and it’s managed using the same APIs, tools, and management controls that you already use in the cloud.

With direct VPC routing, you now have two options to configure and connect your Outposts rack to your on-premises networks. Previously, to configure network routing between an on-premises network and an Outposts rack, you needed to use Customer-owned IP addresses (CoIP). During an Outposts rack installation, this involved providing a separate IP address range/CIDR from your on-premises network for AWS to create an address pool, which is known as a CoIP pool. When an Amazon Elastic Compute Cloud (Amazon EC2) instance on your Outposts rack needed to communicate with your on-premises network, Outposts rack would perform a 1:1 network address translation (NAT) from the VPC private IP address to a CoIP address in the CoIP pool. Using CoIP means that you must manage both VPC and CoIP address pools, without overlap, and configure route propagation between the two sets of addresses. When adding a subnet to a VPC, you also needed to follow several steps to update route propagation between your networks to recognize the new subnet addresses.

Managing IP address ranges for AWS cloud and onsite resources, as well as dealing with CoIP ranges on Outposts rack, can be an operational burden. Although the option to use CoIP is still available and will continue to be fully supported, the new direct VPC routing option simplifies your IP address management. Automatic advertisement of CIDR addresses for subnets, including new subnets added in the future, between the VPC and your Outposts rack, removes the need for you to reconfigure IP addresses. This also keeps route propagation up-to-date, thereby saving you time and effort. Furthermore, as mentioned earlier, you can enable all of this with a self-serve option.

Enabling Direct VPC Routing
You can select either the CoIP or the direct VPC routing approach and use a new self-service API, CreateLocalGatewayRouteTable, to configure direct VPC routing for both new and existing Outposts racks. This eliminates the need to contact AWS to enable the configuration. To enable direct VPC routing, simply set the mode property in the CreateLocalGatewayRouteTable API’s request parameters to the value direct-vpc-routing. If you’re already using CoIP, then you must delete and recreate the route table that’s propagating traffic between the Outposts rack and your on-premises network.
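
As a sketch of what that call might look like with the AWS SDK for Python (boto3), using a hypothetical local gateway ID:

    import boto3

    ec2 = boto3.client("ec2")

    # Create a local gateway route table for the Outpost in direct VPC routing mode.
    ec2.create_local_gateway_route_table(
        LocalGatewayId="lgw-0abc123de456789f0",  # hypothetical ID
        Mode="direct-vpc-routing",
    )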

The following example diagram, taken from the user guide, illustrates the setup for an Outposts rack running several Amazon EC2 instances and connected to an on-premises network, with automatic address advertisement. Note that private IP address ranges are utilized across the Outposts rack resources and the on-premises network.

Example of direct VPC routing

Get started with Direct VPC Routing today
The option to enable direct VPC routing is available now for both new and existing Outposts racks. As mentioned earlier, the option to use CoIP will continue to be supported, but now you can choose between direct VPC routing and CoIP based on your on-premises networking needs. Direct VPC routing is available in all AWS Regions where Outposts rack is supported.

Find more information on this topic in the AWS Outposts User Guide. More information on AWS Outposts rack is available here.

— Steve

AWS Week in Review – August 8, 2022

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/aws-week-in-review-august-8-2022/

As an ex-.NET developer, and now Developer Advocate for .NET at AWS, I’m excited to bring you this week’s Week in Review post, for reasons that will quickly become apparent! There are several updates, customer stories, and events I want to bring to your attention, so let’s dive straight in!

Last Week’s Launches
.NET developers, here are two new updates to be aware of—and be sure to check out the events section below for another big announcement:

Tiered pricing for AWS Lambda will interest customers running large workloads on Lambda. The tiers, based on compute duration (measured in GB-seconds), help you save on monthly costs—automatically. Find out more about the new tiers, and see some worked examples showing just how they can help reduce costs, in this AWS Compute Blog post by Heeki Park, a Principal Solutions Architect for Serverless.
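
To illustrate just the mechanics of tiered, duration-based pricing, here’s a small sketch in Python. The tier boundaries and per-GB-second rates below are hypothetical placeholders, not published AWS prices; see the linked post for real worked examples:

    def monthly_compute_cost(gb_seconds: float) -> float:
        # (tier size in GB-seconds, rate per GB-second); hypothetical values
        tiers = [
            (6_000_000_000, 0.0000166667),
            (9_000_000_000, 0.0000150000),
            (float("inf"), 0.0000133334),
        ]
        cost, remaining = 0.0, gb_seconds
        for size, rate in tiers:
            used = min(remaining, size)
            cost += used * rate
            remaining -= used
            if remaining <= 0:
                break
        return cost

    # 20 billion GB-seconds spills across all three hypothetical tiers.
    print(f"${monthly_compute_cost(20_000_000_000):,.2f}")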

Amazon Relational Database Service (RDS) released updates for several popular database engines:

  • RDS for Oracle now supports the April 2022 patch.
  • RDS for PostgreSQL now supports new minor versions. Besides the version upgrades, there are also updates for the PostgreSQL extensions pglogical, pg_hint_plan, and hll.
  • RDS for MySQL can now enforce SSL/TLS for client connections to your databases to help enhance transport layer security. You can enforce SSL/TLS by simply enabling the require_secure_transport parameter (disabled by default) via the Amazon RDS Management Console, the AWS Command Line Interface (AWS CLI), AWS Tools for PowerShell, or the API, as shown in the sketch after this list. When you enable this parameter, clients will only be able to connect if an encrypted connection can be established.
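
As a sketch, enabling the parameter with the AWS SDK for Python (boto3) might look like the following; the parameter group name is hypothetical, and the group must be associated with your MySQL DB instance:

    import boto3

    rds = boto3.client("rds")

    rds.modify_db_parameter_group(
        DBParameterGroupName="my-mysql-params",  # hypothetical parameter group
        Parameters=[
            {
                "ParameterName": "require_secure_transport",
                "ParameterValue": "1",       # enforce SSL/TLS for client connections
                "ApplyMethod": "immediate",  # dynamic parameter; applies without reboot
            }
        ],
    )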

Amazon Elastic Compute Cloud (Amazon EC2) expanded availability of the latest generation storage-optimized Is4gen and Im4gn instances to the Asia Pacific (Sydney), Canada (Central), Europe (Frankfurt), and Europe (London) Regions. Built on the AWS Nitro System and powered by AWS Graviton2 processors, these instance types feature up to 30 TB of storage using the new custom-designed AWS Nitro System SSDs. They’re ideal for maximizing the storage performance of I/O intensive workloads that continuously read and write from the SSDs in a sustained manner, for example SQL/NoSQL databases, search engines, distributed file systems, and data analytics.

Lastly, there’s a new URL to use when you need to access the AWS Support Center console. I recommend bookmarking the new URL, https://support.console.aws.amazon.com/, which the team built using the latest architectural standards for high availability and Region redundancy to ensure you’re always able to contact AWS Support via the console.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Here’s some other news items and customer stories that you may find interesting:

AWS Open Source News and Updates – Catch up on all the latest open-source projects, tools, and demos from the AWS community in installment #123 of the weekly open source newsletter.

In one recent AWS on Air livestream segment from AWS re:MARS, discussing the increasing scale of machine learning (ML) models, our guests mentioned billion-parameter ML models which quite intrigued me. As an ex-developer, my mental model of parameters is a handful of values, if that, supplied to methods or functions—not billions. Of course, I’ve since learned they’re not the same thing! As I continue my own ML learning journey I was particularly interested in reading this Amazon Science blog on 20B-parameter Alexa Teacher Models (AlexaTM). These large-scale multilingual language models can learn new concepts and transfer knowledge from one language or task to another with minimal human input, given only a few examples of a task in a new language.

When developing games intended to run fully in the cloud, what benefits might there be in going fully cloud-native and moving the entire process into the cloud? Find out in this customer story from Return Entertainment, who did just that to build a cloud-native gaming infrastructure in a few months, reducing time and cost with AWS services.

Upcoming events
Check your calendar and sign up for these online and in-person AWS events:

AWS Storage Day: On August 10, tune into this virtual event on twitch.tv/aws, 9:00 AM–4:30 PM PT, where we’ll be diving into building data resiliency into your organization, and how to put data to work to gain insights and realize its potential, while also optimizing your storage costs. Register for the event here.

AWS Global Summits: These free events bring the cloud computing community together to connect, collaborate, and learn about AWS. Registration is open for the following AWS Summits in August:

AWS .NET Enterprise Developer Days 2022 – North America: Registration for this free, 2-day, in-person event and follow-up 2-day virtual event opened this past week. The in-person event runs September 7–8, at the Palmer Events Center in Austin, Texas. The virtual event runs September 13–14. AWS .NET Enterprise Developer Days (.NET EDD) runs as a mini-conference within the DeveloperWeek Cloud conference (also in-person and virtual). Anyone registering for .NET EDD is eligible for a free pass to DeveloperWeek Cloud, and vice versa! I’m super excited to be helping organize this third .NET event from AWS, our first that has an in-person version. If you’re a .NET developer working with AWS, I encourage you to check it out!

That’s all for this week. Be sure to check back next Monday for another Week in Review roundup!

— Steve
This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

AWS Week in Review – June 20, 2022

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/aws-week-in-review-june-20-2022/

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Last Week’s Launches
It’s been a quiet week on the AWS News Blog; however, a glance at the What’s New page shows that the various service teams have been as busy as usual. Here’s a roundup of announcements that caught my attention this past week.

Support for 15 new resource types in AWS Config – AWS Config is a service for assessment, audit, and evaluation of the configuration of resources in your account. You can monitor and review changes in resource configuration, using automation to check against desired configurations. The newly expanded set of types includes resources from Amazon SageMaker, Elastic Load Balancing, AWS Batch, AWS Step Functions, AWS Identity and Access Management (IAM), and more.

New console experience for AWS Budgets – A new split-view panel allows for viewing details of a budget without needing to leave the overview page. The new panel will save you time (and clicks!) when you’re analyzing performance across a set of budgets. By the way, you can also now select multiple budgets at the same time.

VPC endpoint support is now available in Amazon SageMaker Canvas – SageMaker Canvas is a visual, point-and-click service enabling business analysts to generate accurate machine learning (ML) models without requiring ML experience or needing to write code. The new VPC endpoint support, available in all Regions where SageMaker Canvas is supported, eliminates the need for an internet gateway, NAT instance, or a VPN connection when connecting from your SageMaker Canvas environment to services such as Amazon Simple Storage Service (Amazon S3), Amazon Redshift, and more.

Additional data sources for Amazon AppFlow – Facebook Ads, Google Ads, and Mixpanel are now supported as data sources, providing the ability to ingest marketing and product analytics for downstream analysis in AppFlow-connected software-as-a-service (SaaS) applications such as Marketo and Salesforce Marketing Cloud.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Some other updates you may have missed from the past week:

Amazon Elastic Compute Cloud (Amazon EC2) expanded the Regional availability of AWS Nitro System-based C6 instance types. C6gn instance types, powered by Arm-based AWS Graviton2 processors, are now available in the Asia Pacific (Seoul), Europe (Milan), Europe (Paris), and Middle East (Bahrain) Regions, while C6i instance types, powered by 3rd generation Intel Xeon Scalable processors, are now available in the Europe (Frankfurt) Region.

As a .NET and PowerShell Developer Advocate here at AWS, there are some news and updates related to .NET I want to highlight:

Upcoming AWS Events
The AWS New York Summit is approaching quickly, on July 12. Registration is also now open for the AWS Summit Canberra, an in-person event scheduled for August 31.

Microsoft SQL Server users may be interested in registering for the SQL Server Database Modernization webinar on June 21. The webinar will show you how to go about modernizing and how to cost-optimize SQL Server on AWS.

Amazon re:MARS is taking place this week in Las Vegas. I’ll be there as a host of the AWS on Air show, along with special guests highlighting their latest news from the conference. I also have some On Air sessions on using our AI services from .NET lined up! As usual, we’ll be streaming live from the expo hall, so if you’re at the conference, give us a wave. You can watch the show live on Twitch.tv/aws, Twitter.com/AWSOnAir, and LinkedIn Live.

A reminder that if you’re a podcast listener, check out the official AWS Podcast Update Show. There is also the latest installment of the AWS Open Source News and Updates newsletter to help keep you up to date.

No doubt there’ll be a whole new batch of releases and announcements from re:MARS, so be sure to check back next Monday for a summary of the announcements that caught our attention!

— Steve

AWS Week in Review – May 2, 2022

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/aws-week-in-review-may-2-2022/

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Wow, May already! Here in the Pacific Northwest, spring is in full bloom and nature has emerged completely from her winter slumbers. It feels that way here at AWS, too, with a burst of new releases and updates and our in-person summits and other events now in full flow. Two weeks ago, we had the San Francisco summit; last week, we held the London summit and also our .NET Enterprise Developer Day virtual event in EMEA. This week we have the Madrid summit, with more summits and events to come in the weeks ahead. Be sure to check the events section at the end of this post for a summary and registration links.

Last week’s launches
Here are some of the launches and updates last week that caught my eye:

If you’re looking to reduce or eliminate the operational overhead of managing your Apache Kafka clusters, then the general availability of Amazon Managed Streaming for Apache Kafka (MSK) Serverless will be of interest. Starting with the original release of Amazon MSK in 2019, the work needed to set up, scale, and manage Apache Kafka has been reduced, requiring just minutes to create a cluster. With Amazon MSK Serverless, the provisioning, scaling, and management of the required resources is automated, eliminating the undifferentiated heavy lifting. As my colleague Marcia notes in her blog post, Amazon MSK Serverless is a perfect solution when you’re getting started with a new Apache Kafka workload where you don’t know how much capacity you will need, or when your applications produce unpredictable or highly variable throughput and you don’t want to pay for idle capacity.
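
For a feel of how little configuration is involved, here’s a hedged sketch of creating a serverless cluster with the AWS SDK for Python (boto3); the cluster name, subnet IDs, and security group ID are hypothetical placeholders:

    import boto3

    kafka = boto3.client("kafka")

    # Create a serverless MSK cluster; capacity is provisioned and scaled for you.
    kafka.create_cluster_v2(
        ClusterName="my-serverless-cluster",  # hypothetical name
        Serverless={
            "VpcConfigs": [
                {
                    "SubnetIds": ["subnet-0aaa", "subnet-0bbb", "subnet-0ccc"],  # hypothetical
                    "SecurityGroupIds": ["sg-0ddd"],                             # hypothetical
                }
            ],
            "ClientAuthentication": {"Sasl": {"Iam": {"Enabled": True}}},
        },
    )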

Another week, another set of Amazon Elastic Compute Cloud (Amazon EC2) instances! This time around, it’s the new storage-optimized I4i instances based on the latest generation Intel Xeon Scalable (Ice Lake) processors. These new instances are ideal for workloads that need minimal latency and fast access to data held on local storage. Examples of these workloads include transactional databases such as MySQL, Oracle DB, and Microsoft SQL Server, as well as NoSQL databases including MongoDB, Couchbase, Aerospike, and Redis. Additionally, workloads that benefit from very high compute performance per TB of storage (for example, data analytics and search engines) are also an ideal target for these instance types, which offer up to 30 TB of AWS Nitro SSD storage.

Deploying AWS compute and storage services within telecommunications providers’ data centers, at the edge of the 5G networks, opens up interesting new possibilities for applications requiring end-to-end low latency (for example, delivery of high-resolution and high-fidelity live video streaming, and improved augmented/virtual reality (AR/VR) experiences). The first AWS Wavelength deployments started in the US in 2020, and have expanded to additional countries since. This week we announced the opening of the first Canadian AWS Wavelength zone, in Toronto.

Other AWS News
Some other launches and news items you may have missed:

Amazon Relational Database Service (RDS) had a busy week. I don’t have room to list them all, so below is just a subset of updates!

  • The addition of IPv6 support enables customers to simplify their networking stack. The increase in address space offered by IPv6 removes the need to manage overlapping address spaces in your Amazon Virtual Private Cloud (VPC)s. IPv6 addressing can be enabled on both new and existing RDS instances.
  • Customers in the Asia Pacific (Sydney) and Asia Pacific (Singapore) Regions now have the option to use Multi-AZ deployments to provide enhanced availability and durability for Amazon RDS DB instances, offering one primary and two readable standby database instances spanning three Availability Zones (AZs). These deployments benefit from up to 2x faster transaction commit latency and automated failovers that typically complete in under 35 seconds.
  • Amazon RDS PostgreSQL users can now choose from General-Purpose M6i and Memory-Optimized R6i instance types. Both of these sixth-generation instance types are AWS Nitro System-based, delivering practically all of the compute and memory resources of the host hardware to your instances.
  • Applications using the RDS Data API can now elect to receive SQL results as a simplified JSON string, making it easier to deserialize results to an object (see the sketch after this list). Previously, the API returned a JSON string as an array of data type and value pairs, which required developers to write custom code to parse the response and extract the values in order to translate the JSON string into an object. Applications that use the API to receive the previous JSON format are still supported and will continue to work unchanged.
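
Here’s a minimal sketch of opting in to the new format with the AWS SDK for Python (boto3); the cluster ARN, secret ARN, database, and table names are hypothetical:

    import json

    import boto3

    data_api = boto3.client("rds-data")

    result = data_api.execute_statement(
        resourceArn="arn:aws:rds:us-east-1:123456789012:cluster:my-cluster",        # hypothetical
        secretArn="arn:aws:secretsmanager:us-east-1:123456789012:secret:my-secret", # hypothetical
        database="mydb",
        sql="SELECT id, title FROM books",
        formatRecordsAs="JSON",  # opt in to the simplified JSON string result
    )

    # formattedRecords is a JSON string that deserializes straight to a list of objects.
    books = json.loads(result["formattedRecords"])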

Applications using Amazon Interactive Video Service (IVS), offering low-latency interactive video experiences, can now add a livestream chat feature, complete with built-in moderation, to help foster community participation in livestreams using Q&A discussions. The new chat support provides chat room resource management and a messaging API for sending, receiving, and moderating chat messages.

Amazon Polly now offers a new Neural Text-to-Speech (TTS) voice, Vitória, for Brazilian Portuguese. The original Vitória voice, dating back to 2016, used standard technology. The new voice offers a more natural-sounding rhythm, intonation, and sound articulation. In addition to Vitória, Polly also offers a second Brazilian Portuguese neural voice, Camila.

Finally, if you’re a .NET developer who’s modernizing .NET Framework applications to run in the cloud, then the announcement that the open-source CoreWCF project has reached its 1.0 release milestone may be of interest. AWS is a major contributor to the project, a port of Windows Communication Foundation (WCF), to run on modern cross-platform .NET versions (.NET Core 3.1, or .NET 5 or higher). This project benefits all .NET developers working on WCF applications, not just those on AWS. You can read more about the project in my blog post from last year, where I spoke with one of the contributing AWS developers. Congratulations to all concerned on reaching the 1.0 milestone!

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Upcoming AWS Events
As I mentioned earlier, the AWS Summits are in full flow, with some virtual and in-person events in the very near future that you may want to check out:

I’m also happy to share that I’ll be joining the AWS on Air crew at AWS Summit Washington, DC. This in-person event is coming up May 23–25. Be sure to tune in to the livestream for all the latest news from the event, and if you’re there in person feel free to come say hi!

Registration is also now open for re:MARS, our conference for topics related to machine learning, automation, robotics, and space. The conference will be in-person in Las Vegas, June 21–24.

That’s all the news I have room for this week — check back next Monday for another week in review!

— Steve

Announcing the General Availability of AWS Amplify Studio

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/announcing-the-general-availability-of-aws-amplify-studio/

Amplify Studio is a visual interface that simplifies front- and backend development for web and mobile applications. We released it as a preview during AWS re:Invent 2021, and today, I’m happy to announce that it is now generally available (GA). A key feature of Amplify Studio is integration with Figma, helping designers and front-end developers to work collaboratively on design and development tasks. To stay in sync as designs change, developers simply pull the new component designs from Figma into their application in Amplify Studio. The GA version of Amplify Studio also includes some new features such as support for UI event handlers, component theming, and improvements in how you can extend and customize generated components from code.

You may be familiar with AWS Amplify, a set of tools and features to help developers get started faster with configuring various AWS services to support their backend use cases such as user authentication, real-time data, AI/ML, and file storage. Amplify Studio extends this ease of configuration to front-end developers, who can use it to work with prebuilt and custom rich user interface (UI) components for those applications. Backend developers can also make use of Amplify Studio to continue development and configuration of the application’s backend services.

Amplify Studio’s point-and-click visual environment enables front-end developers to quickly and easily compose user interfaces from a library of prebuilt and custom UI components. Components are themeable, enabling you to override Amplify Studio‘s default themes to customize components according to your own or your company’s style guides. Components can also be bound to backend services with no cloud or AWS expertise.

Support for developing the front- and backend tiers of an application isn’t all that’s available. From within Amplify Studio, developers can also take advantage of AWS Amplify Hosting services, Amplify‘s fully managed CI/CD and hosting service for scalable web apps. This service offers a zero-configuration way to deploy the application by simply connecting a Git repository with a built-in continuous integration and deployment workflow. Deployment artifacts can be exported to tools such as the AWS Cloud Development Kit (AWS CDK), making it easy to add support for other AWS services unavailable directly within Amplify Studio. In fact, all of the artifacts that are created in Amplify Studio can be exported as code for you to edit in the IDE of your choice.

You can read all about the original preview, and walk through an example of using Amplify Studio and Figma together, in this blog post published during re:Invent.

UI Event Handlers
Front-end developers are likely familiar with the concepts behind binding events on UI components to invoke some action. For example, selecting a button might cause a transition to another screen or populate some other field with data, potentially supplied from a backend service. In the following screenshot, we’re configuring an event handler for the onClick event on a Card component to open a new browser tab:

Setting a UI event binding

For the selected action we then define the settings, in this case to open a map view onto the location using the latitude and longitude in the card object’s model:

Setting the properties for the action

Extending Components with Code
When you pull your component designs from Figma into your project in Amplify Studio using the amplify pull command, generated JSX code and TypeScript definition files that map to the Figma designs are added to your project. While you could then edit the generated code, the next time you run the pull command, your changes would be overwritten.

Instead of requiring you to edit the generated code, Amplify Studio exposes mechanisms that enable you to extend the generated code to achieve the changes you need without risking losing those changes if the component code files get regenerated. While this was possible in the original preview, the GA version of Amplify Studio makes this process much simpler and more convenient. There are four ways to change generated components within Amplify Studio:

  • Modifying default properties
    Modifying the default properties of components is simple and an approach that’s probably familiar to most developers. These default properties stem from the Amplify UI Library. For example, let’s say we have a custom collection component that derives from the base Collection type, and we want to control how (or even if) the items in the collection wrap when rendered. The Collection type exposes a wrap property which we can make use of:

    <MyCustomCollection wrap={"nowrap"} />
  • Override child UI elements
    Going beyond individual exposed properties, the code that’s generated for components (and all child components) exposes an overrides prop. This prop enables you to supply an object containing multiple prop overrides, giving you full control over extending that generated code. In the following example, I’m changing the color prop belonging to the Title prop of my collection’s items to orange. As I mentioned, the settings object I’m using could contain other properties I want to override too:

    <MyCustomCollectionItem overrides={{"Title": { color: "orange" } }} />
  • Extending collection items with data
    A useful feature when working with items in a collection is to augment items with additional data, and you can do this with the overrideItems prop. You supply a function to this property, accepting parameters for the item and the item’s index in the collection. The output from the function is a set of override props to apply to that item. In the following example, I’m toggling the background color for a collection item depending on whether the item’s index is odd or even. Note that I’m also able to attach code to the item, in this case, an onClick handler that reports the ID of the item that was clicked:

    <MyCustomCollection overrideItems={({ item, index })=>({
      backgroundColor: index % 2 === 0 ? 'white' : 'lightgray',
      onClick: () => alert(`You clicked item with id: ${item.id}`)
    })} />
  • Custom business logic for events
    Sometimes you want to run custom business logic in response to higher-level, logical events. An example would be code to run when an object is created, updated, or deleted in a datastore. This extensibility option provides that ability. In your code, you attach a listener to Amplify Hub’s ui channel and, in the listener, inspect the received events and take action on those of interest. You identify events by name, using the format actions:[category]:[action_name]:[status]; you can find a list of all action event names in the documentation. In the following example, I want to run some custom code when a new item in a DataStore has completed creation, so my listener checks for an event with the name actions:datastore:create:finished:

    import { Hub } from 'aws-amplify'
    …
    Hub.listen("ui", (capsule) => {
      if (capsule.payload.event === "actions:datastore:create:finished"){
          // An object has been created, do something in response
      }
    });

Component Theming
To accompany the GA release of Amplify Studio, we’ve also released a Figma plugin that allows you to match UI components to your company’s brand and style. To enable it, simply install the Theme Editor plugin from the Figma community. For example, let’s say I wanted to match Amazon’s brand colors. All I’d have to do is configure the primary color to Amazon orange (#ff9900), and then all components would automatically reflect that primary color.

Get Started with AWS Amplify Studio Today
Visit the AWS Amplify Studio homepage to discover more features, whether you’re a backend or front-end developer, or both! It’s free to get started and designed to help simplify not only the configuration of backend services supporting your application but also the development of your application’s front end and the connections to those backend services. If you’re new to Amplify Studio, you’ll find a tutorial on developing a React-based UI and information on connecting your application to designs in Figma in the documentation.

— Steve

AWS Week in Review – March 14, 2022

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/aws-week-in-review-march-14-2022/

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Welcome to the March 14 AWS Week in Review post, and Happy Pi Day! I hope you managed to catch some of our livestreamed Pi Day celebration of the 16th birthday of Amazon Simple Storage Service (Amazon S3). I certainly had a lot of fun at the event, along with my co-hosts – check out the end of this post for some interesting facts and fun from the day.

First, let’s dive right into the news items and launches from the last week that caught my attention.

Last Week’s Launches
New X2idn and X2iedn EC2 Instance Types – Customers with memory-intensive workloads and a requirement for high networking bandwidth may be interested in the newly announced X2idn and X2iedn instance types, which are built on the AWS Nitro system. Featuring third-generation Intel Xeon Scalable (Ice Lake) processors, these instance types can yield up to 50 percent higher compute price performance and up to 45 percent higher SAP Application Performance Standard (SAPS) performance than comparable X1 instances. If you’re curious about the suffixes on those instance type names, they specify processor and other information. In this case, the i suffix indicates that the instances are using an Intel processor, e means it’s a memory-optimized instance family, d indicates local NVMe-based SSDs physically connected to the host server, and n means the instance types support higher network bandwidth up to 100 Gbps. You can find out more about the new instance types in this news blog post.

Amazon DynamoDB released two updates – First, an increase in the default service quotas raises the number of tables allowed by default from 256 to 2,500 tables. This will help customers working with large numbers of tables. At the same time, the service also increased the allowed number of concurrent table management operations from 50 to 500. Table management operations are those that create, update, or delete tables. The second update relates to PartiQL, a SQL-compatible query language you can use to query, insert, update, or delete DynamoDB table data. You can now specify a limit on the number of items processed, which you’ll find useful when you know you only need to process a certain number of items, helping reduce the cost and duration of requests.
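
As a quick sketch of the new limit option with the AWS SDK for Python (boto3), using a hypothetical Orders table:

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Stop after at most 10 processed items, reducing request cost and duration.
    response = dynamodb.execute_statement(
        Statement="SELECT * FROM Orders WHERE OrderStatus = 'PENDING'",
        Limit=10,
    )
    items = response["Items"]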

If you’re coding against Amazon ECS‘s API, you may want to take a look at the change to UpdateService that now enables you to update load balancers, service registries, tag propagation, and ECS managed tags for a service. Previously, you would have had to delete and recreate the service to make changes to these resources. Now you can do it all with one call, making for a hassle-free, less disruptive, and more efficient experience. Take a look at the What’s New post for more details.
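
As a hedged sketch with the AWS SDK for Python (boto3), using hypothetical cluster, service, and target group values:

    import boto3

    ecs = boto3.client("ecs")

    # Swap the service's load balancer target group in place; previously this
    # required deleting and recreating the service.
    ecs.update_service(
        cluster="my-cluster",  # hypothetical
        service="my-service",  # hypothetical
        loadBalancers=[
            {
                "targetGroupArn": (
                    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                    "targetgroup/new-tg/0123456789abcdef"  # hypothetical ARN
                ),
                "containerName": "web",
                "containerPort": 80,
            }
        ],
    )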

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
If you’re analyzing time series data, take a look at this new book on building forecasting models and detecting anomalies in your data. It’s authored by Michael Hoarau, an AI/ML Specialist Solutions Architect at AWS.

March 8 was International Women’s Day and we published a post featuring several women, including fellow news blogger and published author Antje Barth, chatting about their experiences working in Developer Relations at AWS.

Upcoming AWS Events
Check your calendars and sign up for these AWS events:

.NET Application Modernization Webinar (March 23) – Sign up today to learn about .NET modernization, what it is, and why you might want to modernize. The webinar will include a deep dive focusing on the AWS Microservice Extractor for .NET.

AWS Summit Brussels is fast approaching on March 31st. Register here.

Pi Day Fun & Facts
As this post is published, we’re coming to the end of our livestreamed Pi Day event celebrating the 16th birthday of S3 – how time flies! Here are some interesting facts & fun snippets from the event:

  • In the keynote, we learned S3 currently stores over 200 trillion objects, and serves over 100 million requests per second!
  • S3‘s Intelligent Tiering has saved customers over $250 million to date.
  • Did you know that S3, having reached 16 years of age, is now eligible for a Washington State driver’s license? Or that it can now buy a lottery ticket, get a passport, or – check this – it can pilot a hang glider!
  • We asked each of our guests on the livestream, and the team of AWS news bloggers, to nominate their favorite pie. The winner? It’s a tie between apple and pecan pie!

That’s all for this week. Check back next Monday for another Week in Review!

— Steve

Demonstrate your AWS Cloud Storage knowledge and skills with new digital badges!

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/demonstrate-your-aws-cloud-storage-knowledge-and-skills-with-new-digital-badges/

Are you a cloud storage professional, or an on-premises storage pro who’s curious about cloud storage? Are you interested in demonstrating your AWS Storage knowledge and skills to potential employers and your community of peers? If so, I’d like to bring to your attention the recent launch of digital badges aligned to the Learning Plans for Block Storage and Object Storage on AWS Skill Builder. According to this 2021 blog post by Indeed, cloud computing is the number one in-demand skill employers are looking for.

The new, verifiable digital badges are available to everyone who scores at least 80 percent in the assessments associated with the Learning Plans. The badges demonstrate your knowledge and skills for Object Storage and/or Block Storage in the AWS Cloud. Badges, distributed and managed through Credly, carry metadata that enables verification of the issuer and the credential, and lists the skills and knowledge demonstrated by the holder. Sharing badges on your résumé, within your peer community, and via social media helps develop your career in cloud computing and celebrates your achievements. Some of you may be familiar with AWS re:Post, which launched during re:Invent 2021, and your badges can be showcased in your AWS re:Post user profile too.

Object and Block Storage digital badges

AWS Skill Builder Learning Plans and digital badges for Block and Object Storage
Digital badges are available today for the Block Storage and Object Storage Learning Plans on AWS Skill Builder. Block Storage has a focus on Amazon Elastic Block Store (EBS), while Object Storage is focused on Amazon Simple Storage Service (Amazon S3). Both plans contain free learning content to help you build your knowledge in each of these areas and get ready for the assessments.

AWS Skill Builder offers a range of Learning Plans related to cloud computing skills. Learning Plans correspond to roles (architect, developer, etc.) and domains (databases, storage, etc.); each one is specifically designed to build your knowledge, with a clear set of outcomes for you to achieve. Freely available, the Learning Plans and related assessments can be taken anywhere, anytime, providing equal and fair learning for all.

Badge assessments are linked to curriculum standards and are developed by service teams, field subject matter experts (SMEs), and content/curriculum SMEs. Employers can therefore be satisfied that the badges attained by a potential employee were awarded for actual, demonstrated skills and knowledge of Block and/or Object Storage. By the way, if you feel you have existing skills and knowledge and would prefer to skip straight to the assessment, you can. If you don’t pass, you’ll be guided to fill in your knowledge gaps, and you can retake the assessment after 24 hours. To earn a badge, you need to score a minimum of 80 percent in the assessment.

The Block Storage and Object Storage Learning Plans are designed for you to take on your own, and you can track your progress, making it easier to learn in your own time and manage your own learning development. They’re a great opportunity to refresh or validate your existing skills, or to learn new ones.

Start collecting digital storage learning badges today
The Learning Plans and new digital badges for Block Storage and Object Storage help you showcase your in-demand knowledge and skills related to AWS Storage. As I mentioned earlier, enrollment in the Learning Plans and the subsequent assessments is free for everyone. Find out more, and get started, at https://aws.amazon.com/training/badges. And be sure to share your accomplishment by posting on social media with the hashtag #AWSTraining to show off your badges!

— Steve

AWS re:Post – A Reimagined Q&A Experience for the AWS Community

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/aws-repost-a-reimagined-qa-experience-for-the-aws-community/

The internet is an excellent resource for well-intentioned guidance and answers. However, it can sometimes be hard to tell if what you’re reading is, in fact, advice you should follow. Also, some users have a preference toward using a single, trusted online community rather than the open internet to provide them with reliable, vetted, and up-to-date answers to their questions.

Today, I’m happy to announce AWS re:Post, a new question and answer (Q&A) service, part of the AWS Free Tier, that is driven by the community of AWS customers, partners, and employees. AWS re:Post is an AWS-managed Q&A service offering crowd-sourced, expert-reviewed answers to your technical questions about AWS, and it replaces the original AWS Forums. Community members can earn reputation points to build up their community expert status by providing accepted answers and reviewing answers from other users, helping to continually expand the availability of public knowledge across all AWS services.

AWS re:Post home page

You’ll find AWS re:Post to be an ideal resource when:

  • You are building an application using AWS, and you have a technical question about an AWS service or best practices.
  • You are learning about AWS or preparing for an AWS certification, and you have a question on an AWS service.
  • Your team is debating issues related to design, development, deployment, or operations on AWS.
  • You’d like to share your AWS expertise with the community and build a reputation as a community expert.

Example of a question and answer in AWS re:Post

There is no requirement to sign in to AWS re:Post to browse the content. Users who do choose to sign in, using their AWS account, have the opportunity to create a profile, post questions and answers, and interact with the community. Profiles enable users to link their AWS certifications through Credly and to indicate interests in specific AWS technology domains, services, and experts. AWS re:Post automatically shares new questions with these community experts based on their areas of expertise, improving the accuracy of responses and encouraging responses to unanswered questions. Users can also opt in to email notifications to help stay informed.

User profile in the re:Post community

Over the last four years, AWS re:Post has been used internally by AWS employees helping customers with their cloud journeys. Today, that same trusted technical guidance becomes available to the entire AWS community. Additionally, all active users from the previous AWS Forums have been migrated onto AWS re:Post, as well as the most-viewed content.

Questions from AWS Premium Support customers that do not receive a response from the community are passed on to AWS Support engineers. If the question is related to a customer-specific workload, AWS support will open a support case to take the conversation into a private setting. Note, however, that AWS re:Post is not intended to be used for questions that are time-sensitive or involve any proprietary information, such as customer account details, personally identifiable information, or AWS account resource data.

AWS Support Engineer presence on re:Post

Have Questions? Need Answers? Try AWS re:Post Today
If you have a technical question about an AWS service or product or are eager to get started on your journey to becoming a recognized community expert, I invite you to get started with AWS re:Post today!