All posts by Sébastien Stormacq

Single Sign-On between Okta Universal Directory and AWS

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/single-sign-on-between-okta-universal-directory-and-aws/

Enterprises adopting the AWS Cloud want to manage identities effectively. Having one central place to manage identities makes it easier to enforce policies, to manage access permissions, and to reduce overhead by removing the need to duplicate users and user permissions across multiple identity silos. Having a unique identity also simplifies access for all of us, the users. We all have access to multiple systems, and we all have trouble remembering multiple distinct passwords. Being able to connect to multiple systems using one single combination of user name and password is a daily security and productivity gain. Being able to link an identity from one system with an identity managed on another trusted system is known as “Identity Federation”, of which single sign-on is a subset. Identity Federation is made possible thanks to industry standards such as Security Assertion Markup Language (SAML), OAuth, OpenID, and others.

Recently, we announced a new evolution of AWS Single Sign-On, allowing you to link AWS identities with Azure Active Directory identities. We did not stop there. Today, we are announcing the integration of AWS Single Sign-On with Okta Universal Directory.

Let me show you the experience for System Administrators, then I will demonstrate the single sign-on experience for the users.

First, let’s imagine that I am an administrator for an enterprise that already uses Okta Universal Directory to manage my workforce identities. Now I want to enable simple, easy-to-use access to our AWS environments for my users, using their existing identities. Like most enterprises, I manage multiple AWS Accounts. I want more than just a single sign-on solution; I want to manage access to my AWS Accounts centrally. I do not want to duplicate my Okta groups and user memberships by hand, nor maintain multiple identity systems (Okta Universal Directory and one for each AWS Account I manage). I want to enable automatic user synchronization between Okta and AWS. My users will sign in to the AWS environments using the experience they are already familiar with in Okta.

Connecting Okta as an identity source for AWS Single Sign-On
The first step is to add AWS Single Sign-On as an “application” Okta users can connect to. I navigate to the Okta administration console, log in with my Okta administrator credentials, then navigate to the Applications tab.

Okta admin console

I click the green Add Application button and search for the AWS SSO application. I click Add.

Okta add application

I enter a name for the app (you can choose whatever name you like) and click Done.

On the next screen, I configure the mutual agreement between AWS Single Sign-On and Okta. I first download the SAML metadata file generated by Okta by clicking the blue Identity Provider Metadata link. I keep this file; I will need it later to configure the AWS side of the single sign-on.

Okta Identity Provider metadata

Now that I have the metadata file, I open the AWS Management Console in a new tab. I keep the Okta tab open, as the procedure is not finished there yet. I navigate to AWS Single Sign-On and click Enable AWS SSO.

I click Settings in the navigation panel. I first set the Identity source by clicking the Change link and selecting External identity provider from the list of options. I then browse to and select the XML file I downloaded from Okta in the Identity provider metadata section.

SSO configure metadata

I click Next: Review, enter CONFIRM in the provided field, and finally click Change identity source to complete the AWS Single Sign-On side of the process. I take note of the two values AWS SSO ACS URL and AWS SSO Issuer URL as I must enter these in the Okta console.

AWS SSO Save URLs

I return to the tab I left open to my Okta console, and enter the values for AWS SSO ACS URL and AWS SSO Issuer URL.

OKTA ACS URLs

I click Save to complete the configuration.

Configuring Automatic Provisioning
Now that Okta is configured for my users to sign in using AWS Single Sign-On, I am going to enable automatic provisioning of user accounts. As new accounts are added to Okta and assigned to the AWS SSO application, a corresponding AWS Single Sign-On user is created automatically. As an administrator, I do not need to do any work to configure a corresponding account in AWS to map to the Okta user.

From the AWS Single Sign-On Console, I navigate to Settings and then click the Enable identity synchronization link. This opens a dialog containing the values for the SCIM endpoint and an OAuth bearer access token (hidden by default). I need both of these values to use in the Okta application settings.

AWS SSO SCIM

I switch back to the tab open on the Okta console, and click the Provisioning tab under the AWS SSO application. I select Enable API Integration. Then I fill in the two values: for Base URL, I paste the SCIM endpoint copied from the AWS Single Sign-On console; for API Token, I paste the access token.

Okta API Integration

I click Test API Credentials to verify everything works as expected. Then I click To App to enable user creation, update, and deactivation.

Okta Provisioning To App

With provisioning enabled, my final task is to assign the users and groups that I want to synchronize from Okta to AWS Single Sign-On. I click the Assignments tab, click Assign, and select the Okta users and groups I want to have access to AWS.

OKTA Assignments

These users are synchronized to AWS Single Sign-On, and the users now see the AWS Single Sign-On application appear in their Okta portal.

Okta Portal User View

To verify user synchronization is working, I switch back to the AWS Single Sign-On console and select the Users tab. The users I assigned in the Okta console are present.

AWS SSO User View
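If you prefer to verify from the command line, the SCIM endpoint itself can be queried. Here is a minimal sketch, assuming the endpoint follows the standard SCIM 2.0 protocol; the endpoint, token, and user name below are placeholders for the values from the identity synchronization dialog and your own directory:

# Look up a synchronized user through the SCIM 2.0 API (sketch).
# SCIM_ENDPOINT and SCIM_TOKEN come from the AWS SSO identity
# synchronization dialog; the values below are placeholders.
SCIM_ENDPOINT="https://scim.eu-west-1.amazonaws.com/AbCdEfGhIj0123456789/scim/v2"
SCIM_TOKEN="paste-your-access-token-here"

curl -s -H "Authorization: Bearer ${SCIM_TOKEN}" \
  "${SCIM_ENDPOINT}/Users?filter=userName%20eq%20%22bob@example.com%22"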

I Configured Single Sign-On, Now What?
Okta is now my single source of truth for my user identities and their group assignments, and periodic synchronization automatically creates corresponding identities in AWS Single Sign-On. My users sign in to their AWS accounts and applications with their familiar Okta credentials and experience, and do not have to remember an additional user name or password. However, as things stand, my users can only sign in. To manage what they can access once signed in to AWS, I must set up permissions in AWS Single Sign-On.

Back in the AWS SSO console, I click AWS Accounts in the left tab bar and select the account from my AWS Organizations that I am giving access to. For enterprises with multiple accounts for different applications or environments, this gives you the granularity to grant access to a subset of your AWS accounts.

AWS SSO Select AWS Account

I click Assign users to assign SSO users or groups to a set of IAM permissions. For this example, I assign just one user, the one with the @example.com email address.

Assign SSO Users

I click Next: Permission sets and Create new permission set to create a set of IAM policies describing the permissions I am granting to these Okta users. For this example, I grant a read-only permission on all AWS services.

SSO Permission set

And voilà, I am ready to test this setup.
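This part can also be scripted. Here is a minimal sketch using the AWS SSO account assignment APIs (the aws sso-admin commands); the instance ARN, account ID, and principal ID are placeholders you replace with your own values:

# Create a read-only permission set and assign a synchronized user (sketch).
INSTANCE_ARN="arn:aws:sso:::instance/ssoins-EXAMPLE"
ACCOUNT_ID="123456789012"

# Create the permission set, backed by the ViewOnlyAccess managed policy
PS_ARN=$(aws sso-admin create-permission-set \
  --instance-arn "$INSTANCE_ARN" \
  --name "ViewOnlyAccess" \
  --query 'PermissionSet.PermissionSetArn' --output text)

aws sso-admin attach-managed-policy-to-permission-set \
  --instance-arn "$INSTANCE_ARN" \
  --permission-set-arn "$PS_ARN" \
  --managed-policy-arn "arn:aws:iam::aws:policy/job-function/ViewOnlyAccess"

# Assign a user synchronized from Okta (the principal ID is a placeholder)
aws sso-admin create-account-assignment \
  --instance-arn "$INSTANCE_ARN" \
  --target-id "$ACCOUNT_ID" --target-type AWS_ACCOUNT \
  --permission-set-arn "$PS_ARN" \
  --principal-type USER --principal-id "906712345678-abcd-example"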

SSO User Experience for the console
Now that I have shown you the steps System Administrators take to configure the integration, let me show you the user experience.

As an AWS Account user, I can sign in to Okta and get access to my AWS Management Console. I can start either from the AWS Single Sign-On user portal (the URL is on the AWS Single Sign-On settings page) or from the Okta user portal page, where I select the AWS SSO app.

I choose to start from the AWS SSO User Portal. I am redirected to the Okta login page. I enter my Okta credentials and I land on the AWS Account and Role selection page. I click on AWS Account, select the account I want to log into, and click Management console. After a few additional redirections, I land on the AWS Console page.

SSO User experience

SSO User Experience for the CLI
System administrators, DevOps engineers, developers, and your automation scripts do not only use the AWS console. They also use the AWS Command Line Interface (CLI). To configure SSO for the command line, I open a terminal and type aws configure sso. I enter the AWS SSO User Portal URL and the Region.

$ aws configure sso
SSO start URL [None]: https://d-0123456789.awsapps.com/start
SSO Region [None]: eu-west-1
Attempting to automatically open the SSO authorization page in your default browser.
If the browser does not open or you wish to use a different device to authorize this request, open the following URL:

https://device.sso.eu-west-1.amazonaws.com/

Then enter the code:

AAAA-BBBB

At this stage, my default browser pops up and I enter my Okta credentials on the Okta login page. I confirm I want to enable SSO for the CLI.

SSO for the CLI

I close the browser when I receive this message:

AWS SSO CLI Close Browser Message

The CLI automatically resumes the configuration. I enter the default Region, the default output format, and the name of the CLI profile I want to use.

The only AWS account available to you is: 012345678901
Using the account ID 012345678901
The only role available to you is: ViewOnlyAccess
Using the role name "ViewOnlyAccess"
CLI default client Region [eu-west-1]:
CLI default output format [None]:
CLI profile name [okta]:

To use this profile, specify the profile name using --profile, as shown:

aws s3 ls --profile okta

I am now ready to use the CLI with SSO. In my terminal, I type:

aws --profile okta s3 ls
2020-05-04 23:14:49 do-not-delete-gatedgarden-audit-012345678901
2015-09-24 16:46:30 elasticbeanstalk-eu-west-1-012345678901
2015-06-11 08:23:17 elasticbeanstalk-us-west-2-012345678901

If the machine on which you want to configure CLI SSO has no graphical user interface, you can configure SSO in headless mode, using the URL and the code provided by the CLI (https://device.sso.eu-west-1.amazonaws.com/ and AAAA-BBBB in the example above).
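Note that SSO credentials are temporary. When they expire, there is no need to run the full configuration again; I simply re-authenticate with the same browser-based flow and keep using my named profile:

$ aws sso login --profile okta
$ aws s3 ls --profile okta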

In this post, I showed how you can take advantage of the new AWS Single Sign-On capabilities to link Okta identities to AWS accounts for single sign-on. I also made use of the automatic provisioning support to reduce the complexity of managing and using identities. Administrators can now use a single source of truth for managing their users, and users no longer need to manage an additional identity and password to sign in to their AWS accounts and applications.

AWS Single Sign-On with Okta is free to use, and is available in all Regions where AWS Single Sign-On is available. The full list is here.

To see all this in motion, you can check out the following demo video for more details on getting started.

— seb

New – AWS Amplify Libraries for Android and iOS

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-aws-amplify-libraries-for-android-and-ios/

When you develop mobile applications, you must build a set of cloud-powered functionalities for each project. For example, most applications require user authentication or detailed in-app analytics. Your application most probably calls REST or GraphQL APIs and must support offline scenarios and data synchronization. AWS Amplify makes it easy to integrate such functionalities in your mobile and web applications.

AWS Amplify is a set of tools and services for building secure, scalable mobile and web applications. It is made of three components: an open source set of libraries and UI components for adding cloud-powered functionalities, an interactive command line toolchain to create and manage a cloud backend, and the AWS Amplify Console, an AWS service to deploy and host full stack serverless web applications.

Today, I am happy to announce the availability of the Amplify iOS and Amplify Android libraries and tools, to help mobile application developers easily build secure and scalable cloud-powered applications.

Until today, when you developed a cloud-powered mobile application, you were using a combination of tools and SDKs: the Amplify CLI to create and manage your backend, and one or several AWS Mobile SDKs to access the backend. In general, AWS Mobile SDKs are low-level wrappers around the AWS Services APIs. They require you to understand the API details and, most of the time, to write many lines of undifferentiated code, such as object (de)serialization, error handling, etc.

Amplify iOS and Amplify Android simplify this. First, they provide native libraries oriented around use cases, such as authentication, data storage and access, machine learning predictions, etc. They provide a declarative interface that enables you to programmatically apply best practices with abstractions. Thinking in terms of use cases instead of AWS services results in a higher-level abstraction, faster development cycles, and fewer lines of code. Second, they provide tools that integrate with your native IDE toolchain: Xcode for iOS and Gradle for Android.

Using Amplify iOS or Amplify Android is our recommended way to integrate a cloud-based backend in your mobile application.

How to get started?
I’ve built two simple mobile applications (one on iOS and one on Android) to show you how to get started. The sources for these examples are available on my GitHub. As you see, I am not a graphic designer. The applications have a list of UI buttons to trigger different flows and the results are only visible in the console.

Amplify iOS & Android Demo

Amplify libraries for mobile are organized around categories for Auth, API (REST and GraphQL), Analytics, File Storage, DataStore, and Predictions. In this example, I use three categories: Auth, to implement the sign-in, sign-up, and Login with Facebook flows; DataStore, to use a query-able, on-device persistent storage engine that seamlessly synchronizes data between the app and the cloud, with built-in versioning, conflict detection, and resolution capabilities; and Predictions, to add automatic translation between the English and French languages.

Let’s review the four main steps and lines of code to get started on each platform. For a detailed step-by-step tutorial, have a look at the Amplify iOS or Amplify Android documentation.

The first step is to set up your project, to add required dependencies and build steps.

On iOS, you add a couple of lines to your Podfile and add the AWS Amplify build script to the build phase of your project.
On Android, you do the same in your Gradle file for the module and for the app.

// iOS Podfile
target 'amplify-lib-ios-demo' do
  # Comment the next line if you don't want to use dynamic frameworks
  use_frameworks!

  # Pods for amplify-lib-ios-demo
    pod 'Amplify'
    pod 'Amplify/Tools'

    pod 'AmplifyPlugins/AWSAPIPlugin'
    pod 'AmplifyPlugins/AWSDataStorePlugin'
    pod 'AmplifyPlugins/AWSCognitoAuthPlugin'
    pod 'AWSPredictionsPlugin'
end
// Android build.gradle fragment (Module: app) 
...
compileOptions {
    sourceCompatibility JavaVersion.VERSION_1_8
    targetCompatibility JavaVersion.VERSION_1_8
}
dependencies {
    implementation 'com.amplifyframework:core:1.0.0'
    implementation 'com.amplifyframework:aws-datastore:1.0.0'
    implementation 'com.amplifyframework:aws-api:1.0.0'
    implementation 'com.amplifyframework:aws-predictions:1.0.0'
    implementation 'com.amplifyframework:aws-auth-cognito:1.0.0'
}
...
// Android build.gradle fragment (Project: My Application)
...
repositories {
    mavenCentral()
    google()
    jcenter()
}
dependencies {
        classpath 'com.amplifyframework:amplify-tools-gradle-plugin:1.0.0'
}
apply plugin: 'com.amplifyframework.amplifytools'
...

On iOS, you must also manually add the amplify-tools.sh script to your build steps.

When this is done, you type pod install for iOS or you sync the project with Gradle.

The second step is to add the plugins for each category to Amplify at application initialization time. On iOS, I use didFinishLaunchingWithOptions from the AppDelegate. On Android, I use onCreate from MainActivity. You are free to initialize Amplify at any stage of your app; it does not have to happen at app startup time.

    // iOS AppDelegate class
    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        
        do {
            try Amplify.add(plugin: AWSAPIPlugin())
            try Amplify.add(plugin: AWSDataStorePlugin(modelRegistration: AmplifyModels()))
            try Amplify.add(plugin: AWSCognitoAuthPlugin())
            try Amplify.add(plugin: AWSPredictionsPlugin())
            
            try Amplify.configure()
            print("Amplify initialized")
        } catch {
            print("Failed to configure Amplify \(error)")
        }
        return true
    }
   // Android MainActivity class (Kotlin version)
   override fun onCreate(savedInstanceState: Bundle?) {
        // ...

        try {
            Amplify.addPlugin(AWSDataStorePlugin())
            Amplify.addPlugin(AWSApiPlugin())
            Amplify.addPlugin(AWSCognitoAuthPlugin())
            Amplify.addPlugin(AWSPredictionsPlugin())
            Amplify.configure(applicationContext)
            Log.i(TAG, "Initialized Amplify")
        } catch (error: AmplifyException) {
            Log.e(TAG, "Could not initialize Amplify", error)
        }
    }

The third step varies from one category to the other. Usually, it involves using the AWS Amplify command line to provision and configure your backend. Type commands like amplify add auth or amplify add predictions to configure a category.

For example, to configure user authentication with Amazon Cognito and social identity providers, such as Login With Facebook, you type something like the below. This step is identical for iOS and Android, as we are creating and configuring the cloud backend.
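A minimal sequence of commands could look like the sketch below; at each step, the Amplify CLI asks interactive questions (Cognito settings, the Facebook application ID, the translation languages, and so on):

amplify add auth          # configure Amazon Cognito and social sign-in
amplify add predictions   # configure the translation backend
amplify push              # provision the backend resources in the cloud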

To learn how to configure single sign-on with social identity providers such as Facebook, Google or Amazon, you can refer to the step-by-step instructions I wrote in this Amplify iOS Workshop (I will update the workshop soon to take advantage of these new AWS Amplify libraries).

Configuring the DataStore involves creating a GraphQL schema for your data. Amplify generates native (Swift or Java) code to represent your data in your app. It transparently handles an offline datastore that stores your data and syncs it with the backend when network connectivity is available.

The fourth and last step is to actually invoke Amplify’s library code at runtime.

For example, to trigger an authentication using Amazon Cognito hosted web user interface, you use the following code:

// iOS (swift) in AppDelegate object
    func signIn() {
        _ = Amplify.Auth.signInWithWebUI(presentationAnchor: UIApplication.shared.windows.first!) { (result) in
            switch(result) {
                case .success(let result):
                    print(result)
                case .failure(let error):
                    print("Can not signin \(error)")
            }
        }
    }
// Android (Kotlin) in MainActivity 
    fun signIn(view: View?) {
        Amplify.Auth.signInWithWebUI(
            this,
            { result: AuthSignInResult -> Log.i(TAG, result.toString()) },
            { error: AuthException -> Log.e(TAG, error.toString()) }
        )
    }

The above triggers the following web view:

Hosted UI for Cognito

Similarly, to create an item in the Datastore (and persisting it to Amazon DynamoDB over GraphQL), you need the following code:

    // iOS 
    func create() {
        let note = Note(content: "Build iOS application")
        Amplify.DataStore.save(note) {
            switch $0 {
            case .success:
                print("Added note")
            case .failure(let error):
                print("Error adding note - \(error.localizedDescription)")
            }
        }
    }
   // Android 
    fun create(view: View?) {
        val note: Note = Note.builder()
            .content("Build Android application")
            .build()

        Amplify.DataStore.save(
            note,
            { success -> Log.i(TAG, "Saved item: " + success.item.content) },
            { error -> Log.e(TAG, "Could not save item to DataStore", error) }
        )
    }

And to trigger a text translation with the Predictions category, you just need the following code:

    // iOS 
    func translate(text: String) {
        _ = Amplify.Predictions.convert(textToTranslate: text, language: LanguageType.english, targetLanguage: LanguageType.french) {
            switch $0 {
            case .success(let result):
                // update UI on main thread 
                DispatchQueue.main.async() {
                    self.data.translatedText = result.text
                }
            case .failure(let error):
                print("Error adding note - \(error.localizedDescription)")
            }
        }
    }
   // Android
    fun translate(view: View?) {
        Log.i(TAG, "Translating")

        val et : EditText = findViewById(R.id.toBeTranslated)
        val tv : TextView = findViewById(R.id.translated)

        Amplify.Predictions.translateText(
            et.text.toString(),
            LanguageType.ENGLISH,
            LanguageType.FRENCH,
            { success -> tv.setText(success.translatedText) },
            { failure -> Log.e(TAG, failure.localizedMessage) }
        )
    }

Short and slick, isn’t it?

Amplify Mobile demo translation

Price and Availability
AWS Amplify is available free of charge; you only pay for the backend services your application uses, beyond the free tier.

Amplify iOS and Amplify Android are available today from the CocoaPods and Maven Central repositories. The source code is available on GitHub (iOS or Android). Do not hesitate to send us your feedback (Doc, iOS, and Android) or to send us a Pull Request 🙂

I am also curious to learn about the amazing mobile apps you are building with AWS Amplify. Do not hesitate to share your screenshots or App Store links with me.

Happy building!

— seb

New – EC2 M6g Instances, powered by AWS Graviton2

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-m6g-ec2-instances-powered-by-arm-based-aws-graviton2/

Starting today, you can use our first 6th generation Amazon Elastic Compute Cloud (EC2) General Purpose instance: the M6g. The “g” stands for “Graviton2”, our next generation Arm-based chip designed by AWS (and Annapurna Labs, an Amazon company), utilizing 64-bit Arm Neoverse N1 cores.

Graviton 2 chipset

These processors support 256-bit, always-on, DRAM encryption. They also include dual SIMD units to double the floating point performance versus the first generation Graviton, and they support int8/fp16 instructions to accelerate machine learning inference workloads. You can read this full review published by AnandTech for in-depth details.

The M6g instances are available in 8 sizes with 1, 2, 4, 8, 16, 32, 48, and 64 vCPUs, or as bare metal instances. They support configurations with up to 256 GiB of memory, 25 Gbps of network performance, and 19 Gbps of EBS bandwidth. These instances are powered by the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor.

For those of you running typical open-source application stacks, generally deployed on x86-64 architectures, migrating to Graviton2 will give you up to a 40% improvement in price/performance compared to similar-sized M5 instances. M6g instances are well-suited for workloads such as application servers, gaming servers, mid-size databases, caching fleets, web tiers, and the like.

We ran an extensive preview program to collect customer feedback on this 6th generation instance type. For example, Honeycomb uses 30% fewer instances vs C5, KeyDB observes 65% better performance and a 20% cost reduction vs M5, InterSystems reported a 28% performance improvement and a 20% cost reduction compared to equivalent M5 instances, and Treasure Data benchmarked a 30% performance increase with a 20% cost reduction compared to similarly sized M5 instances. You can read more customer stories, including Hotelbeds, Redbox, Nielsen, Mobiuspace, and RayGun, on the M6g web page.

Several AWS service teams are evaluating these instances too. For example, during their testing, the Amazon ElastiCache service team found that M6g instances deliver up to 50% throughput improvement over M5 instances on Redis.

Major Linux distributions are available on the Arm architecture; just select the Amazon Machine Image (AMI) corresponding to the Arm version of your favorite distribution when launching an instance in the AWS Management Console. Be sure to select the 64-bit (Arm) button on the right side of the screen.

Launch ARM AMI in the console

If you choose the AWS Command Line Interface (CLI) instead, use the corresponding image-id for your region, architecture, and distribution. For example, to start an Amazon Linux 2 instance:

AMI_ID=$(aws ssm get-parameters-by-path --path /aws/service/ami-amazon-linux-latest --output text --query "Parameters[?contains(Name, 'ami-hvm-arm64')].Value")
aws ec2 run-instances --image-id $AMI_ID --instance-type m6g.large --key-name my-ssh-key-name --security-group-ids sg-1234567

(you need to adjust the SSH key name and the security group ID in the above command)

Once the instance is started, it behaves like any Amazon Elastic Compute Cloud (EC2) instance:

~ % ssh ec2-user@ec2-01-01-01-01.compute-1.amazonaws.com
Warning: Permanently added 'ec2-01-01-01-01.compute-1.amazonaws.com,01.01.01.01' (ECDSA) to the list of known hosts.
Last login: Wed Apr 22 12:26:44 2020 from 01-01-01-01.amazon.com

       __|  __|_  )
       _|  (     /   Amazon Linux 2 AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-2/
[ec2-user@ip-172-31-16-155 ~]$ uname -a
Linux ip-172-31-16-155.ec2.internal 4.14.171-136.231.amzn2.aarch64 #1 SMP Thu Feb 27 20:25:45 UTC 2020 aarch64 aarch64 aarch64 GNU/Linux

The Arm software ecosystem is broad and deep, from Linux distributions (Amazon Linux 2, Ubuntu, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, Fedora, Debian, FreeBSD), to language runtimes (Java with Amazon Corretto, NodeJS, Python, Go, …), container services (Docker, Amazon ECS, Amazon Elastic Kubernetes Service, Amazon Elastic Container Registry), agents (Amazon CloudWatch, AWS Systems Manager, Amazon Inspector), developer tools (AWS Code Suite, Jenkins, GitLab, Chef, Drone.io, Travis CI), and security & monitoring solutions such as Datadog, Crowdstrike, Qualys, Rapid7, Tenable, or Honeycomb.io.

You will find Arm versions of commonly used software packages available for installation through the same mechanisms that you currently use (yum, apt-get, pip, npm, …). While some applications may require re-compilation, the vast majority of applications based on interpreted or easily portable languages (such as Java, NodeJS, Python, or Go) should run unmodified on M6g instances. In the rare cases where you need to recompile or debug code, we have assembled some resources to help you get started.
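For example, here is a quick, minimal check that you are running on the Arm architecture and that a given binary is an aarch64 build (the exact file output varies by distribution):

$ uname -m
aarch64
$ file /usr/bin/bash
/usr/bin/bash: ELF 64-bit LSB executable, ARM aarch64, ...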

We are not going to stop at the general-purpose M6g instances: compute-optimized C6g and memory-optimized R6g instances are coming soon. Stay tuned!

Now it’s your turn to give it a try in one of the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), and Asia Pacific (Tokyo).

As usual, let us know your feedback.

— seb

Now Open – AWS Africa (Cape Town) Region

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/now-open-aws-africa-cape-town-region/

The AWS Region in Africa that Jeff promised you in 2018 is now open. The official name is Africa (Cape Town) and the API name is af-south-1. You can start using this new Region today to deploy workloads and store your data in South Africa.

The addition of this new Region enables all organizations to bring lower-latency services to their end users across Africa, and allows more African organisations to benefit from the performance, security, flexibility, scalability, reliability, and ease of use of the AWS cloud. It enables organisations of all sizes to experiment and innovate faster.

AWS Regions meet the highest levels of security, compliance, and data protection. With the new Region, local customers with data residency requirements, and those looking to comply with the Protection of Personal Information Act (POPIA), will be able to store their content in South Africa with the assurance that they retain complete ownership of their data and it will not move unless they choose to move it.

Africa (Cape Town) is the 23rd AWS Region, and the first one in Africa. It is made up of three Availability Zones, bringing the global AWS infrastructure to a total of 73 Availability Zones (AZs).
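You can verify the new Region and its Availability Zones from the command line. Note that af-south-1 is an opt-in Region: you must first enable it for your account. A minimal sketch:

$ aws ec2 describe-availability-zones \
    --region af-south-1 \
    --query 'AvailabilityZones[].ZoneName' \
    --output text
af-south-1a    af-south-1b    af-south-1c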

Instances and Services
Applications running in this three-AZ Region can use C5d, D2, I3, M5, M5d, R5, R5d, and T3 instances, and can use a long list of AWS services including Amazon API Gateway, Amazon Aurora (both MySQL and PostgreSQL), Amazon CloudWatch, Amazon CloudWatch Logs, CloudWatch Events, Amazon DynamoDB, Amazon Elastic Block Store (EBS), Amazon Elastic Compute Cloud (EC2), Amazon Elastic Container Registry, Amazon ECS, Elastic Load Balancing (Classic, Network, and Application), Amazon EMR, Amazon ElastiCache, Amazon Elasticsearch Service, Amazon Glacier, Amazon Kinesis Data Streams, Amazon Relational Database Service (RDS), Amazon Redshift, Amazon Route 53, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Storage Service (S3), Amazon Simple Workflow Service (SWF), AWS Auto Scaling, AWS Artifact, AWS Certificate Manager, AWS CloudFormation, AWS CloudTrail, AWS CodeDeploy, AWS Config, AWS Personal Health Dashboard, AWS Database Migration Service, AWS Direct Connect, AWS Elastic Beanstalk, AWS Identity and Access Management (IAM), AWS Key Management Service (KMS), AWS Marketplace, AWS Mobile SDK, AWS Shield (regional), AWS Site-to-Site VPN, AWS Step Functions, AWS Support, AWS Systems Manager, AWS Trusted Advisor, AWS X-Ray, and AWS Import/Export.

A Growing Presence in Africa
This new Region is a continuation of the AWS investment in Africa. In 2004, Amazon opened a Development Center in Cape Town that focuses on building pioneering networking technologies, next generation software for customer support, and the technology behind Amazon Elastic Compute Cloud (EC2). AWS has also added a number of teams including account managers, customer service reps, partner managers, solutions architects, developer advocates, and more, helping customers of all sizes as they move to the cloud.

In 2015, we continued our expansion, opening an office in Johannesburg, and in 2017 we brought the Amazon Global Network to Africa through AWS Direct Connect. In 2018 we launched infrastructure on the African continent introducing Amazon CloudFront to South Africa, with two edge locations in Cape Town and Johannesburg, and recently in Nairobi, Kenya. We also support the growth of technology education with AWS Academy and AWS Educate, and continue to support the growth of new businesses through AWS Activate. The addition of the AWS Region in South Africa helps builders in organisations of all sizes, from startups to enterprises, as well as educational institutions, NGOs, and the public sector across Africa, to innovate and grow.

The new Region is open to all: existing AWS customers, partners, and new African customers working with local partners across the region. If you are planning to deploy workloads in the Africa (Cape Town) Region, don’t hesitate to contact us. I am also taking advantage of this post to remind you that we have dozens of open positions in the region, for many different roles, such as Account Management, Solutions Architect, Customer Support, Product Management, Software Development, and more. Visit amazon.jobs and send us your resume.

More to Come
We are continuously expanding our global infrastructure to allow you to deploy workloads close to your end-users. We already announced two future AWS Regions in APAC: Indonesia and Japan, and two in Europe: Italy and Spain. Stay tuned for more posts like this one.

— seb

(Photo via Good Free Photos)

Amazon Redshift update – ra3.4xlarge instances

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/amazon-redshift-update-ra3-4xlarge-instances/

Since we launched Amazon Redshift as a cloud data warehouse service more than seven years ago, tens of thousands of customers have built their workloads using it. We are always listening to your feedback and, in December last year, we announced our 3rd generation RA3 node type, giving you the ability to scale compute and storage separately. Previous generation DS2 and DC2 nodes had a fixed amount of storage and required adding more nodes to your cluster to increase storage capacity. The new RA3 nodes let you determine how much compute capacity you need to support your workload and then scale the amount of storage based on your needs. The first member of the RA3 family was the ra3.16xlarge, which we heard from many customers was fantastic, but more than they needed for their workloads.

Today we are adding a new smaller member to the RA3 family: the ra3.4xlarge.

The RA3 node type is based on AWS Nitro and includes support for Redshift managed storage. Redshift managed storage automatically manages data placement across tiers of storage and caches the hottest data in high-performance SSD storage while automatically offloading colder data to Amazon Simple Storage Service (S3). Redshift managed storage uses advanced techniques such as block temperature, data block age, and workload patterns to optimize performance.

RA3 nodes with managed storage are a great fit for analytics workloads that require massive storage capacity and can be a great fit for workloads such as operational analytics, where the subset of data that is most important evolves constantly over time. In the past, there was pressure to offload or archive old data to other storage because of fixed storage limits. This made maintaining the operational analytics data set and the larger historical dataset difficult to query when needed.

The new ra3.4xlarge node provides 12 vCPUs, 96 GiB of RAM, and addresses up to 64 TB of managed storage. A cluster can contain up to 32 of these instances, for a total storage of 2048 TB (that’s 2 petabytes!).

The differences between ra3.16xlarge and ra3.4xlarge nodes are summarized in the table below.

                vCPU   Memory    Addressable Storage   I/O        Price (US East (N. Virginia))
ra3.4xlarge     12     96 GiB    64 TB RMS             2 GB/sec   $3.26 per hour
ra3.16xlarge    48     384 GiB   64 TB RMS             8 GB/sec   $13.04 per hour

To create a new cluster, I can use the Redshift AWS Management Console or the AWS Command Line Interface (CLI). In the console, I click Create cluster and choose ra3.4xlarge instances.

If you have a DS2 or DC2 instance-based cluster, you can create a new RA3 cluster to evaluate the new instance type with managed storage. You use a recent snapshot of your Redshift DS2 or DC2 cluster to create a new cluster based on ra3.4xlarge instances, and you keep the two clusters running in parallel to evaluate the compute needs of your application.

You can resize your RA3 cluster at any time by using elastic resize to add or remove compute capacity. If elastic resize is not available for your chosen configuration, you can do a classic resize.
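The same evaluation flow can be scripted with the AWS Command Line Interface (CLI). Here is a minimal sketch; the cluster and snapshot identifiers are placeholders:

# Create an RA3 cluster from a recent snapshot of a DS2 or DC2 cluster
aws redshift restore-from-cluster-snapshot \
    --cluster-identifier my-ra3-cluster \
    --snapshot-identifier my-dc2-snapshot \
    --node-type ra3.4xlarge \
    --number-of-nodes 2

# Later, add compute capacity with an elastic resize
aws redshift resize-cluster \
    --cluster-identifier my-ra3-cluster \
    --number-of-nodes 4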

RA3 instances are now available in 14 AWS Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Canada (Central), and South America (São Paulo).

Prices vary from one Region to another, starting at $3.26/hr/node in US East (N. Virginia). Check the Amazon Redshift pricing page for details.

— seb

Amazon Detective – Rapid Security Investigation and Analysis

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/amazon-detective-rapid-security-investigation-and-analysis/

Almost five years ago, I blogged about a solution that automatically analyzes AWS CloudTrail data to generate alerts upon sensitive API usage. It was a simple and basic solution for security analysis and automation. But demanding AWS customers have multiple AWS accounts, collect data from multiple sources, and simple searches based on regular expressions are not enough to conduct in-depth analysis of suspected security-related events. Today, when a security issue is detected, such as compromised credentials or unauthorized access to a resource, security analysts cross-analyze several data logs to understand the root cause of the issue and its impact on the environment. In-depth analysis often requires scripting and ETL to connect the dots between data generated by multiple siloed systems. It requires skilled data engineers to answer basic questions such as “is this normal?”. Analysts use Security Information and Event Management (SIEM) tools, third-party libraries, and data visualization tools to validate, compare, and correlate data to reach their conclusions. To further complicate matters, new AWS accounts and new applications are constantly introduced, forcing analysts to constantly reestablish baselines of normal behavior, and to understand new patterns of activities every time they evaluate a new security issue.

Amazon Detective is a fully managed service that empowers users to automate the heavy lifting involved in processing large quantities of AWS log data to determine the cause and impact of a security issue. Once enabled, Detective automatically begins distilling and organizing data from Amazon GuardDuty, AWS CloudTrail, and Amazon Virtual Private Cloud Flow Logs into a graph model that summarizes the resource behaviors and interactions observed across your entire AWS environment.

At re:Invent 2019, we announced a preview of Amazon Detective. Today, it is our pleasure to announce its availability for all AWS customers.

Amazon Detective uses machine learning models to produce graphical representations of your account behavior and helps you answer questions such as “is this an unusual API call for this role?” or “is this spike in traffic from this instance expected?”. You do not need to write code, or to configure or tune your own queries.

To get started with Amazon Detective, I open the AWS Management Console, I type “detective” in the search bar and I select Amazon Detective from the provided results to launch the service. I enable the service and I let the console guide me to configure “member” accounts to monitor and the “master” account in which to aggregate the data. After this one-time setup, Amazon Detective immediately starts analyzing AWS telemetry data and, within a few minutes, I have access to a set of visual interfaces that summarize my AWS resources and their associated behaviors such as logins, API calls, and network traffic. I search for a finding or resource from the Amazon Detective Search bar and, after a short while, I am able to visualize the baseline and current value for a set of metrics.

I select the resource type and ID and start to browse the various graphs.

I can also investigate an Amazon GuardDuty finding by using the native integrations within the GuardDuty and AWS Security Hub consoles. I click the “Investigate” link from any finding in GuardDuty and jump directly into the Amazon Detective console, which provides related details, context, and guidance to investigate and respond to the issue. In the example below, GuardDuty reports an unauthorized access that I decide to investigate:

The Amazon Detective console opens:

I scroll down the page to check the graph of failed API calls. I click a bar in the graph to get the details, such as the IP addresses where the calls originated:

Once I know the source IP addresses, I click New behavior: AWS role and observe where these calls originated from to compare with the automatically discovered baseline.

Amazon Detective works across your AWS accounts; it is a multi-account solution that aggregates data and findings from up to 1000 AWS accounts into a single security-owned “master” account, making it easy to view behavioral patterns and connections across your entire AWS environment.

There are no agents, sensors, or additional software to deploy in order to use the service. Amazon Detective retrieves, aggregates, and analyzes data from Amazon GuardDuty, AWS CloudTrail, and Amazon Virtual Private Cloud Flow Logs. Amazon Detective collects existing logs directly from AWS without touching your infrastructure, thereby causing no impact to cost or performance.

Amazon Detective can be administered via the AWS Management Console or via the Amazon Detective management APIs. The management APIs enable you to build Amazon Detective into your standard account registration, enablement, and deployment processes.
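For example, here is a minimal sketch of enabling Detective and inviting a member account from the command line; the account ID and email address are placeholders:

# Create the behavior graph in the "master" (administrator) account
GRAPH_ARN=$(aws detective create-graph --query 'GraphArn' --output text)

# Invite a member account into the behavior graph
aws detective create-members \
    --graph-arn "$GRAPH_ARN" \
    --accounts AccountId=111122223333,EmailAddress=security@example.com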

Amazon Detective is a regional service. I activate the service in every AWS Region in which I want to analyze findings. All data is processed in the AWS Region where it is generated. Amazon Detective maintains data analytics and log summaries in the behavior graph for a one-year rolling period from the date of log ingestion. This allows for visual analysis and deep dives over a large data set for a long period of time. When I disable the service, all data is expunged to ensure no data remains.

There are no additional charges or upfront commitments required to use Amazon Detective. We charge per GB of data ingested from AWS CloudTrail, Amazon Virtual Private Cloud Flow Logs, and Amazon GuardDuty findings. Amazon Detective offers a 30-day free trial. As usual, check the pricing page for the details.

Amazon Detective is available in all commercial AWS Regions, except China. You can start to use it today.

— seb

Materialize your Amazon Redshift Views to Speed Up Query Execution

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/materialize-your-amazon-redshift-views-to-speed-up-query-execution/

At AWS, we take pride in building state-of-the-art virtualization technologies to simplify the management of and access to cloud services such as networks, computing resources, or object storage.

In a Relational Database Management System (RDBMS), a view is virtualization applied to tables: it is a virtual table representing the result of a database query. Views are frequently used when designing a schema, to present a subset of the data, summarized data (such as aggregated or transformed data), or to simplify data access across multiple tables. When using data warehouses, such as Amazon Redshift, a view simplifies access to aggregated data from multiple tables for Business Intelligence (BI) tools such as Amazon QuickSight or Tableau.

Views provide ease of use and flexibility, but they do not speed up data access. The database system must evaluate the underlying query representing the view each time your application accesses the view. When performance is key, data engineers use CREATE TABLE AS (CTAS) as an alternative. A CTAS is a table defined by a query. The query is executed at table creation time and your applications can use it like a normal table, with the downside that the CTAS data set is not refreshed when the underlying data are updated. Furthermore, the CTAS definition is not stored in the database system. It is not possible to know whether a table was created by a CTAS or not, making it difficult to track which CTAS needs to be refreshed and which is current.

Today, we are introducing materialized views for Amazon Redshift. A materialized view (MV) is a database object containing the data of a query. A materialized view is like a cache for your view. Instead of building and computing the data set at run-time, the materialized view pre-computes, stores and optimizes data access at the time you create it. Data are ready and available to your queries just like regular table data.

Using materialized views in your analytics queries can speed up the query execution time by orders of magnitude because the query defining the materialized view is already executed and the data is already available to the database system.

Materialized views are especially useful for queries that are predictable and repeated over and over. Instead of performing resource-intensive queries on large tables, applications can query the pre-computed data stored in the materialized view.

When the data in the base tables change, you refresh the materialized view by issuing the Redshift SQL statement REFRESH MATERIALIZED VIEW. After issuing a refresh statement, your materialized view contains the same data as would have been returned by a regular view. Refreshes can be incremental or full (recompute). When possible, Redshift incrementally refreshes data that changed in the base tables since the materialized view was last refreshed.

Let’s see how it works. I create a sample schema to store sales information: each sales transaction and details about the store where the sales took place.

To view the total amount of sales per city, I create a materialized view with the CREATE MATERIALIZED VIEW SQL statement. I connect to the Redshift console, select the Query Editor, and type the following statement to create a materialized view (city_sales) joining records from two tables and aggregating the sales amount (sum(sales.amount)) per city (group by city):

CREATE MATERIALIZED VIEW city_sales AS (
  SELECT st.city, SUM(sa.amount) as total_sales
  FROM sales sa, store st
  WHERE sa.store_id = st.id
  GROUP BY st.city
);

The resulting schema is below:

Now I can query the materialized view just like a regular view or table and issue statements like “SELECT city, total_sales FROM city_sales” to get the below results. The join between the two tables and the aggregate (sum and group by) are already computed, resulting in significantly less data to scan.

When the data in the underlying base tables change, the materialized view does not automatically reflect those changes. The data stored in the materialized view can be refreshed on demand, with the latest changes from the base tables, using the REFRESH MATERIALIZED VIEW SQL command. Let’s see a practical example:

-- let's add a row in the sales base table
INSERT INTO sales (id, item, store_id, customer_id, amount)
VALUES(8, 'Gaming PC Super ProXXL', 1, 1, 3000);

SELECT city, total_sales FROM city_sales WHERE city = 'Paris';

city |total_sales|
-----|-----------|
Paris|        690|

-- the new sale is not taken into account!

-- let's refresh the materialized view
REFRESH MATERIALIZED VIEW city_sales;

SELECT city, total_sales FROM city_sales WHERE city = 'Paris';

city |total_sales|
-----|-----------|
Paris|       3690|

-- now the view has the latest sales data

The full code for this very simple demo is available as a gist.

You can start to use materialized views today in all AWS Regions.

There is nothing to change in your existing clusters; you can start creating materialized views today at no additional cost.

Happy building!

New: Use AWS CloudFormation StackSets for Multiple Accounts in an AWS Organization

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-use-aws-cloudformation-stacksets-for-multiple-accounts-in-an-aws-organization/

Infrastructure-as-code is the process of managing and creating IT infrastructure through machine-readable text files, such as JSON or YAML definitions, or using familiar programming languages, such as Java, Python, or TypeScript. AWS customers typically use AWS CloudFormation or the AWS Cloud Development Kit to automate the creation and management of their cloud infrastructure.

CloudFormation StackSets allows you to roll out CloudFormation stacks over multiple AWS accounts and in multiple Regions with just a couple of clicks. When we launched StackSets, grouping accounts was primarily for billing purposes. Since the launch of AWS Organizations, you can centrally manage multiple AWS accounts across diverse business needs including billing, access control, compliance, security, and resource sharing.

Use CloudFormation StackSets with Organizations
Today, we are simplifying the use of CloudFormation StackSets for customers managing multiple accounts with AWS Organizations.

You can now centrally orchestrate any AWS CloudFormation-enabled service across multiple AWS accounts and Regions. For example, you can deploy your centralized AWS Identity and Access Management (IAM) roles, and provision Amazon Elastic Compute Cloud (EC2) instances or AWS Lambda functions across the AWS Regions and accounts in your organization. CloudFormation StackSets simplifies the configuration of cross-account permissions and allows for automatic creation and deletion of resources when accounts join or are removed from your Organization.

You can get started by enabling data sharing between CloudFormation and Organizations from the StackSets console. Once done, you will be able to use StackSets in the Organizations master account to deploy stacks to all accounts in your organization or in specific organizational units (OUs). A new service managed permission model is available with these StackSets. Choosing Service managed permissions allows StackSets to automatically configure the necessary IAM permissions required to deploy your stack to the accounts in your organization.

In addition to setting permissions, CloudFormation StackSets now offers the option of automatically creating or removing your CloudFormation stacks when a new AWS account joins or leaves your Organization. You no longer need to remember to manually connect to the new account to deploy your common infrastructure, or to delete infrastructure when an account is removed from your Organization. When an account leaves the organization, the stack is removed from the management of StackSets. However, you can choose to either delete or retain the resources managed by the stack.

Lastly, you choose whether to deploy a stack set to your entire Organization or just to one or more Organizational Units (OUs). You also choose a couple of deployment options: how many accounts are deployed to in parallel, and how many failures you tolerate before stopping the entire deployment.
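Here is a minimal sketch of the same deployment from the CLI; the stack set name, template, OU identifier, and Region are placeholders:

# Create a stack set with the service-managed permission model and
# automatic deployment to accounts joining or leaving the target OU
aws cloudformation create-stack-set \
    --stack-set-name common-iam-roles \
    --template-body file://roles.yaml \
    --capabilities CAPABILITY_NAMED_IAM \
    --permission-model SERVICE_MANAGED \
    --auto-deployment Enabled=true,RetainStacksOnAccountRemoval=false

# Deploy stack instances to an OU, two accounts at a time
aws cloudformation create-stack-instances \
    --stack-set-name common-iam-roles \
    --deployment-targets OrganizationalUnitIds=ou-examplerootid111-exampleouid1 \
    --regions eu-west-1 \
    --operation-preferences MaxConcurrentCount=2,FailureToleranceCount=0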

For a full description of how StackSets works, you can read the initial blog article from Jeff.

There is no extra cost for using AWS CloudFormation StackSets with AWS Organizations. The integration is available in all AWS Regions where StackSets is available.

— seb

AWS Backup: EC2 Instances, EFS Single File Restore, and Cross-Region Backup

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/aws-backup-ec2-instances-efs-single-file-restore-and-cross-region-backup/

Since we launched AWS Backup last year, over 20,000 AWS customers have been protecting petabytes of data every day. AWS Backup is a fully managed, centralized backup service simplifying the management of backups for your Amazon Elastic Block Store (EBS) volumes, your databases (Amazon Relational Database Service (RDS) or Amazon DynamoDB), AWS Storage Gateway, and your Amazon Elastic File System (EFS) filesystems.

We continuously listen to your feedback and today, we are bringing additional enterprise data protection capabilities to AWS Backup:

  • backup and restore of whole Amazon EC2 instances,
  • single file and directory restore for Amazon EFS filesystems,
  • and cross-region backup, on demand or as part of a backup plan.

Here are the details.

EC2 Instance Backup
Backing up and restoring an EC2 instance requires protecting more than just the instance’s individual EBS volumes. To restore an instance, you need to restore all of its EBS volumes, but also recreate an identical instance: instance type, VPC, Security Group, IAM role, etc.

Today, we are adding the ability to perform backup and recovery tasks on whole EC2 instances. When you back up an EC2 instance, AWS Backup will protect all EBS volumes attached to the instance, and it will attach them to an AMI that stores all parameters from the original EC2 instance except for two (Elastic Inference Accelerator and user data script).

Once the backup is complete, you can easily restore the full instance using the console, API, or AWS Command Line Interface (CLI). You will be able to restore and edit all parameters using the API or AWS Command Line Interface (CLI), and in the console, you will be able to restore and edit 16 parameters from your original EC2 instance.

To get started, I open the Backup console and select either a backup plan or an on-demand backup. For this example, I choose On-demand backup. I select EC2 from the list of services and select the ID of the instance I want to back up.

Note that you need to stop write activity and flush filesystem caches in case you’re using RAID volumes or any other type of technique to group your volumes.
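The same on-demand backup can also be started from the CLI. Here is a minimal sketch, using the default vault and default service role; the instance ARN is a placeholder:

aws backup start-backup-job \
    --backup-vault-name Default \
    --resource-arn arn:aws:ec2:eu-west-1:123456789012:instance/i-0abcd1234example \
    --iam-role-arn arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole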

After a while, I see the backup available in my vault. To restore the backup, I select the backup and click Restore.

Before actually starting the restore, I can see the EC2 configuration options that have been backed up and I have the opportunity to modify any value listed before re-creating the instance.

After a few seconds, my restored instance starts and is available in the EC2 console.

Single File Restore for EFS
AWS Backup customers often would like to restore an accidentally deleted or corrupted file or folder. Before today, you needed to perform a full restore of the entire filesystem, which made it difficult to meet strict RTO objectives.

Starting today, you can restore a single file or directory from your Elastic File System filesystem. You select the backup, type the relative path of the file or directory to restore, and AWS Backup will create a new Elastic File System recovery directory at the root of your filesystem, preserving the original path hierarchy. You can restore your files to an existing filesystem or to a new filesystem.

To restore a single file from an Elastic File System backup, I choose the backup from the vault and I click Restore. On the Restore backup window, I choose between restoring the full filesystem or individual items. I enter the path relative to the root of the filesystem (not including the mount point) for the files and directories I want to restore. I also choose if I want to restore the items in the existing filesystem or in a new filesystem. Finally, I click Restore backup to start the restore job.

Cross-region Backup
Many enterprise AWS customers have strict business continuity policies requiring a minimum distance between two copies of their backups. To help enterprises meet this requirement, we are adding the capability to copy a backup to another Region, either on demand when you need it, or automatically, as part of a backup plan.

To initiate an on-demand copy of my backup to another Region, I use the console to browse my vaults, select the backup I want to copy, and click Copy. I choose the destination Region and the destination vault, and keep the default values for the other options. I click Copy at the bottom of the page.

The time to make the copy depends on the size of the backup. I monitor the status on the new Copy Jobs tab of the Job section:

Once the copy is finished, I switch my console to the target Region, I see the backup in the target vault and I can initiate a restore operation, just like usual.

I can also use the AWS Command Line Interface (CLI) or one of our AWS SDKs to automate any of these processes or to integrate them into other applications.
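For example, here is a minimal sketch of starting a cross-region copy job from the CLI; the ARNs are placeholders:

aws backup start-copy-job \
    --recovery-point-arn arn:aws:ec2:eu-west-1:123456789012:snapshot/snap-0abcd1234example \
    --source-backup-vault-name Default \
    --destination-backup-vault-arn arn:aws:backup:us-east-1:123456789012:backup-vault:Default \
    --iam-role-arn arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole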

Pricing
Pricing depends on the type of backup:

  • there is no additional charge for EC2 instance backup, you will be charged for the storage used by all EBS volumes attached to your instance,
  • for Elastic File System single file restore, you will be charged a fixed fee per restore and for the number of bytes you restore,
  • and for cross-region backup, you will be charged for the cross-region data transfer bandwidth and for the new warm storage space in the target Region.

These three new features are available today in all commercial AWS Regions where AWS Backup is available (you can verify services availability per Region on this web page).

As is usual with any backup system, it is best practice to regularly perform backups and backup testing. Restorable backups are the best kind of backups.

— seb

Amplify DataStore – Simplify Development of Offline Apps with GraphQL

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/amplify-datastore-simplify-development-of-offline-apps-with-graphql/

The open source Amplify Framework is a command line tool and a library allowing web & mobile developers to easily provision and access cloud-based services. For example, if I want to create a GraphQL API for my mobile application, I use amplify add api on my development machine to configure the backend API. After answering a few questions, I type amplify push to create an AWS AppSync API backend in the cloud. Amplify generates code allowing my app to easily access the newly created API. Amplify supports popular web frameworks, such as Angular, React, and Vue. It also supports mobile applications developed with React Native, Swift for iOS, or Java for Android. If you want to learn more about how to use Amplify for your mobile applications, feel free to attend one of the workshops (iOS or React Native) we prepared for the re:Invent 2019 conference.

AWS customers told us that the most difficult tasks when developing web & mobile applications are synchronizing data across devices and handling offline operations. Ideally, when a device is offline, your customers should be able to continue to use your application, not only to access data but also to create and modify it. When the device comes back online, the application must reconnect to the backend, synchronize the data, and resolve conflicts, if any. It requires a lot of undifferentiated code to correctly handle all edge cases, even when using the AWS AppSync SDK’s on-device cache with offline mutations and delta sync.

Today, we are introducing Amplify DataStore, a persistent on-device storage repository for developers to write, read, and observe changes to data. Amplify DataStore allows developers to write apps leveraging distributed data without writing additional code for offline or online scenarios. Amplify DataStore can be used as a stand-alone local datastore in web and mobile applications, with no connection to the cloud or the need to have an AWS Account. However, when used with a cloud backend, Amplify DataStore transparently synchronizes data with an AWS AppSync API when network connectivity is available. Amplify DataStore automatically versions data and implements conflict detection and resolution in the cloud using AppSync. The toolchain also generates object definitions for your programming language based on the GraphQL schema you provide.

Let’s see how it works.

I first install the Amplify CLI and create a React App. This is standard React; you can find the script on my git repo. I add Amplify DataStore to the app with npx amplify-app. npx is specific to NodeJS; Amplify DataStore also integrates with native mobile toolchains, such as the Gradle plugin for Android Studio and CocoaPods, which creates custom Xcode build phases for iOS.

Now that the scaffolding of my app is done, I add a GraphQL schema representing two entities: Posts and Comments on these posts. I install the dependencies and use AWS Amplify CLI to generate the source code for the objects defined in the GraphQL schema.

# add a graphql schema to amplify/backend/api/amplifyDatasource/schema.graphql
echo "enum PostStatus {
  ACTIVE
  INACTIVE
}

type Post @model {
  id: ID!
  title: String!
  comments: [Comment] @connection(name: "PostComments")
  rating: Int!
  status: PostStatus!
}
type Comment @model {
  id: ID!
  content: String
  post: Post @connection(name: "PostComments")
}" > amplify/backend/api/amplifyDatasource/schema.graphql

# install dependencies 
npm i @aws-amplify/core @aws-amplify/datastore @aws-amplify/pubsub

# generate the source code representing the model 
npm run amplify-modelgen

# create the API in the cloud 
npm run amplify-push

@model and @connection are directives that the Amplify GraphQL Transformer uses to generate code. Objects annotated with @model are top-level objects in your API; they are stored in DynamoDB, and you can make them searchable, version them, or restrict their access to authorized users only. @connection allows you to express 1-n relationships between objects, similar to what you would define when using a relational database (you can use the @key directive to model n-n relationships).

The last step is to create the React app itself. To get started quickly, I download a very simple sample app:

# download a simple react app
curl -o src/App.js https://raw.githubusercontent.com/sebsto/amplify-datastore-js-e2e/master/src/App.js

# start the app 
npm run start

I connect my browser to the app at http://localhost:8080 and start testing the app.

The demo app provides a basic UI (as you can guess, I am not a graphic designer!) to create, query, and delete items. Amplify DataStore provides developers with an easy-to-use API to store, query, and delete data. Reads and writes are propagated in the background to your AppSync endpoint in the cloud. Amplify DataStore uses a local data store via a storage adapter; we ship IndexedDB for web and SQLite for mobile. Amplify DataStore is open source, so you can add support for other databases if needed.

From a code perspective, interacting with data is as easy as invoking the save(), delete(), or query() operations on the DataStore object (this is a JavaScript example; you would write similar code for Swift or Java). Notice that the query() operation accepts filters based on predicate expressions, such as item.rating("gt", 4) or Predicates.ALL.

function onCreate() {
  DataStore.save(
    new Post({
      title: `New title ${Date.now()}`,
      rating: 1,
      status: PostStatus.ACTIVE
    })
  );
}

function onDeleteAll() {
  DataStore.delete(Post, Predicates.ALL);
}

async function onQuery(setPosts) {
  const posts = await DataStore.query(Post, c => c.rating("gt", 4));
  setPosts(posts)
}

async function listPosts(setPosts) {
  const posts = await DataStore.query(Post, Predicates.ALL);
  setPosts(posts);
}

I connect to the Amazon DynamoDB console and observe that the items are stored in my backend:

There is nothing to change in my code to support offline mode. To simulate offline mode, I turn off my wifi. I add two items in the app and turn the wifi on again. The app continues to operate as usual while offline. The only noticeable change is that the _version field is not updated while offline, as it is populated by the backend.

When the network is back, Amplify DataStore transparently synchronizes with the backend. I verify there are 5 items now in DynamoDB (the table name is different for each deployment, be sure to adjust the name for your table below):

aws dynamodb scan --table-name Post-raherug3frfibkwsuzphkexewa-amplify \
                   --filter-expression "#deleted <> :value"            \
                   --expression-attribute-names '{"#deleted" : "_deleted"}' \
                   --expression-attribute-values '{":value" : { "BOOL": true} }' \
                   --query "Count"

5 // <= there are now 5 non deleted items in the table !

Amplify DataStore leverages GraphQL subscriptions to keep track of changes that happen on the backend. Your customers can modify the data from another device and Amplify DataStore takes care of synchronizing the local data store transparently. No GraphQL knowledge is required; Amplify DataStore makes the low-level GraphQL API calls for you automatically. Real-time data, connections, scalability, fan-out, and broadcasting are all handled by the Amplify client and AppSync, using the WebSocket protocol under the covers.

We are effectively using GraphQL as a network protocol to dynamically transform model instances to GraphQL documents over HTTPS.

To refresh the UI when a change happens on the backend, I add the following code in the useEffect() React hook. It uses the DataStore.observe() method to register a callback function (msg => { ... }). Amplify DataStore calls this function when an instance of Post changes on the backend.

const subscription = DataStore.observe(Post).subscribe(msg => {
  console.log(msg.model, msg.opType, msg.element);
  listPosts(setPosts);
});

Now, I open the AppSync console. I query existing Posts to retrieve a Post ID.

query ListPost {
  listPosts(limit: 10) {
    items {
      id
      title
      status
      rating
      _version
    }
  }
}

I choose the first post in my app, the one starting with 7d8… and I send the following GraphQL mutation:

mutation UpdatePost {
  updatePost(input: {
    id: "7d80688f-898d-4fb6-a632-8cbe060b9691"
    title: "updated title 13:56"
    status: ACTIVE
    rating: 7
    _version: 1
  }) {
    id
    title
    status
    rating
    _lastChangedAt
    _version
    _deleted    
  }
}

Immediately, I see the app receiving the notification and refreshing its user interface.

Finally, I test with multiple devices. I first create a hosting environment for my app using amplify add hosting and amplify publish. Once the app is published, I open the iOS Simulator and Chrome side by side. Both apps initially display the same list of items. I create new items in both apps and observe the apps refreshing their UI in near real time. At the end of my test, I delete all items.

I verify there are no more items in DynamoDB (the table name is different for each deployment, be sure to adjust the name for your table below):

aws dynamodb scan --table-name Post-raherug3frfibkwsuzphkexewa-amplify \
                   --filter-expression "#deleted <> :value"            \
                   --expression-attribute-names '{"#deleted" : "_deleted"}' \
                   --expression-attribute-values '{":value" : { "BOOL": true} }' \
                   --query "Count"

0 // <= all the items have been deleted !

When syncing local data with the backend, AWS AppSync keeps track of version numbers to detect conflicts. When there is a conflict, the default resolution strategy is to automerge the changes on the backend. Automerge is an easy strategy to resolve conflicts without writing client-side code. For example, let’s pretend I have an initial Post, and Bob & Alice update the post at the same time:

The original item:

{
   "_version": 1,
   "id": "25",
   "rating": 6,
   "status": "ACTIVE",
   "title": "DataStore is Available"
}

Alice updates the rating:

{
   "_version": 2,
   "id": "25",
   "rating": 10,
   "status": "ACTIVE",
   "title": "DataStore is Available"
}

At the same time, Bob updates the title:

{
   "_version": 2,
   "id": "25",
   "rating": 6,
   "status": "ACTIVE",
   "title": "DataStore is great !"
}

The final item after auto-merge is:

{
   "_version": 3,
   "id": "25",
   "rating": 10,
   "status": "ACTIVE",
   "title": "DataStore is great !"
}

Automerge strictly defines merging rules at the field level, based on type information defined in the GraphQL schema. For example, lists and maps are merged, while conflicting updates on scalars (such as numbers and strings) preserve the value existing on the server. Developers can choose other conflict resolution strategies: optimistic concurrency (conflicting updates are rejected) or custom (an AWS Lambda function is called to decide which version is the correct one). You choose the conflict resolution strategy with amplify update api. You can read more about these different strategies in the AppSync documentation.

The full source code for this demo is available on my git repository. The app has fewer than 100 lines of code, 20% of which are just UI related. Notice that I did not write a single line of GraphQL code; everything happens in Amplify DataStore.

Your Amplify DataStore cloud backend is available in all AWS Regions where AppSync is available, which, at the time I write this post are: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (London).

There is no additional charge to use Amplify DataStore in your application; you only pay for the backend resources you use, such as AppSync and DynamoDB (see here and here for the pricing details). Both services have a free tier allowing you to discover and to experiment for free.

Amplify DataStore allows you to focus on the business value of your apps, instead of writing undifferentiated code. I can’t wait to discover the great applications you’re going to build with it.

— seb

AWS Compute Optimizer – Your Customized Resource Optimization Service

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/aws-compute-optimizer-your-customized-resource-optimization-service/

When I publicly speak about Amazon Elastic Compute Cloud (EC2) instance types, one frequently asked question I receive is “How can I be sure I choose the right instance type for my application?” Choosing the correct instance type is part art and part science. It usually involves knowing your application’s performance characteristics under normal circumstances (the baseline) and its expected daily variations, and picking an instance type that matches these characteristics. After that, you monitor key metrics to validate your choice, and you iterate over time to adjust the instance type that best suits the cost vs. performance ratio for your application. Over-provisioning resources results in paying too much for your infrastructure, and under-provisioning resources results in lower application performance, possibly impacting customer experience.
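
As a small illustration of that monitoring step, here is one way to retrieve the maximum CPU utilization of an instance over the last 24 hours from the AWS Command Line Interface (CLI); the instance ID is a placeholder and the date syntax shown is for GNU date:

# maximum CPU utilization of one instance, per hour, over the last 24 hours
aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --start-time $(date -u -d '24 hours ago' +%Y-%m-%dT%H:%M:%SZ) \
    --end-time $(date -u +%Y-%m-%dT%H:%M:%SZ) \
    --period 3600 \
    --statistics Maximum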

Earlier this year, we launched Cost Explorer Rightsizing Recommendations, which help you identify under-utilized Amazon Elastic Compute Cloud (EC2) instances that may be downsized within the same family to save money. We received great feedback and customers are asking for more recommendations beyond just downsizing within the same instance family.

Today, we are announcing a new service to help you optimize compute resources for your workloads: AWS Compute Optimizer. AWS Compute Optimizer uses machine learning techniques to analyze the history of resource consumption on your account, and makes well-articulated and actionable recommendations tailored to your resource usage. AWS Compute Optimizer is integrated with AWS Organizations, so you can view recommendations for multiple accounts from your master AWS Organizations account.

To get started with AWS Compute Optimizer, I navigate to the AWS Management Console, select AWS Compute Optimizer and activate the service. It immediately starts to analyze my resource usage and history using Amazon CloudWatch metrics and delivers the first recommendations a few hours later.
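
If you prefer the command line, the opt-in can also be done with a single call; treat the command names below as a sketch and verify them against your CLI version:

# opt the current account in to AWS Compute Optimizer
aws compute-optimizer update-enrollment-status --status Active

# verify the enrollment status
aws compute-optimizer get-enrollment-status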

I can see the first recommendations on the AWS Compute Optimizer dashboard:

I click Over-provisioned: 8 instances to get the details:

I click on one of the eight links to get the actionable findings:

AWS Compute Optimizer offers multiple options. I scroll down to the bottom of that page to verify the impact if I decide to apply this recommendation:

I can also access the recommendation from the AWS Command Line Interface (CLI):

$ aws compute-optimizer get-ec2-instance-recommendations --instance-arns arn:aws:ec2:us-east-1:012345678912:instance/i-0218a45abd8b53658
{
    "instanceRecommendations": [
        {
            "instanceArn": "arn:aws:ec2:us-east-1:012345678912:instance/i-0218a45abd8b53658",
            "accountId": "012345678912",
            "currentInstanceType": "m5.xlarge",
            "finding": "OVER_PROVISIONED",
            "utilizationMetrics": [
                {
                    "name": "CPU",
                    "statistic": "MAXIMUM",
                    "value": 2.0
                }
            ],
            "lookBackPeriodInDays": 14.0,
            "recommendationOptions": [
                {
                    "instanceType": "r5.large",
                    "projectedUtilizationMetrics": [
                        {
                            "name": "CPU",
                            "statistic": "MAXIMUM",
                            "value": 3.2
                        }
                    ],
                    "performanceRisk": 1.0,
                    "rank": 1
                },
                {
                    "instanceType": "t3.xlarge",
                    "projectedUtilizationMetrics": [
                        {
                            "name": "CPU",
                            "statistic": "MAXIMUM",
                            "value": 2.0
                        }
                    ],
                    "performanceRisk": 3.0,
                    "rank": 2
                },
                {
                    "instanceType": "m5.xlarge",
                    "projectedUtilizationMetrics": [
                        {
                            "name": "CPU",
                            "statistic": "MAXIMUM",
                            "value": 2.0
                        }
                    ],
                    "performanceRisk": 1.0,
                    "rank": 3
                }
            ],
            "recommendationSources": [
                {
                    "recommendationSourceArn": "arn:aws:ec2:us-east-1:012345678912:instance/i-0218a45abd8b53658",
                    "recommendationSourceType": "Ec2Instance"
                }
            ],
            "lastRefreshTimestamp": 1575006953.102
        }
    ],
    "errors": []
}

Keep in mind that AWS Compute Optimizer uses Amazon CloudWatch metrics as the basis for its recommendations. By default, CloudWatch metrics are the ones observable from a hypervisor point of view, such as CPU utilization, disk I/O, and network I/O. If I want AWS Compute Optimizer to take into account operating system level metrics, such as memory usage, I need to install the CloudWatch agent on my EC2 instances. AWS Compute Optimizer automatically recognizes these metrics when available and takes them into account when creating recommendations; otherwise, it shows “Data Unavailable” in the console.
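
As a minimal sketch of that agent setup on Amazon Linux 2 (assuming the amazon-cloudwatch-agent package is already installed and using its default file locations), a configuration collecting memory utilization looks like this:

# write a minimal agent configuration that collects memory utilization
sudo tee /opt/aws/amazon-cloudwatch-agent/etc/config.json > /dev/null <<'EOF'
{
  "metrics": {
    "metrics_collected": {
      "mem": {
        "measurement": ["mem_used_percent"]
      }
    }
  }
}
EOF

# load the configuration and start the agent
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
     -a fetch-config -m ec2 \
     -c file:/opt/aws/amazon-cloudwatch-agent/etc/config.json -s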

AWS customers told us performance is not the only metric they look at when choosing a resource; the price vs. performance ratio is important too. For example, it might make sense to use a new generation instance family, such as m5, rather than an older generation (m3 or m4), even when the new generation seems over-provisioned for the workload. This is why, after AWS Compute Optimizer identifies a list of optimal AWS resources for your workload, it presents on-demand pricing, reserved instance pricing, reserved instance utilization, and reserved instance coverage, along with expected resource efficiency, alongside its recommendations.

AWS Compute Optimizer makes it easy to right-size your resources. However, keep in mind that while it is relatively easy to right-size resources for modern applications, or for stateless applications that scale horizontally, it might be very difficult to right-size older apps. Some older apps might not run correctly on a different hardware architecture, might need different drivers, or might not be supported by the application vendor at all. Be sure to check with your vendor before trying to optimize cloud resources for packaged or older apps.

We strongly advise you to thoroughly test your applications on the new recommended instance type before applying any recommendation into production.

Compute Optimizer is free to use and available initially in these AWS Regions: US East (N. Virginia), US West (Oregon), Europe (Ireland), US East (Ohio), South America (São Paulo). Connect to the AWS Management Console today and discover how much you can save by choosing the right resource size for your cloud applications.

— seb

New – VPC Ingress Routing – Simplifying Integration of Third-Party Appliances

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-vpc-ingress-routing-simplifying-integration-of-third-party-appliances/

When I was delivering the Architecting on AWS class, customers often asked me how to configure an Amazon Virtual Private Cloud to enforce the same network security policies in the cloud as they have on-premises. For example, to scan all ingress traffic with an Intrusion Detection System (IDS) appliance or to use the same firewall in the cloud as on-premises. Until today, the only answer I could provide was to route all traffic back from their VPC to an on-premises appliance or firewall in order to inspect the traffic with their usual networking gear before routing it back to the cloud. This is obviously not an ideal configuration: it adds latency and complexity.

Today, we announce new VPC networking routing primitives that allow you to route all incoming and outgoing traffic to/from an Internet Gateway (IGW) or Virtual Private Gateway (VGW) to a specific Amazon Elastic Compute Cloud (EC2) instance’s Elastic Network Interface. This means you can now configure your Virtual Private Cloud to send all traffic to an EC2 instance before the traffic reaches your business workloads. The instance typically runs network security tools to inspect or block suspicious network traffic (such as an IDS/IPS or firewall) or to perform any other network traffic inspection before relaying the traffic to other EC2 instances.

How Does it Work?
To learn how it works, I wrote this CDK script to create a VPC with two public subnets: one subnet for the appliance and one subnet for a business application. The script launches two EC2 instances with public IP addresses, one in each subnet. The script creates the architecture below:

This is a regular VPC; the subnets have routing tables to the Internet Gateway and the traffic flows in and out as expected. The application instance hosts a static web site, which is accessible from any browser. You can retrieve the application’s public DNS name from the EC2 Console (for your convenience, I also included the CLI version in the comments of the CDK script).

AWS_REGION=us-west-2
APPLICATION_IP=$(aws ec2 describe-instances                           \
                     --region $AWS_REGION                             \
                     --query "Reservations[].Instances[] | [?Tags[?Key=='Name' && Value=='application']].NetworkInterfaces[].Association.PublicDnsName"  \
                     --output text)
				   
curl -I $APPLICATION_IP

Configure Routing
To configure routing, you need to know the VPC ID, the ENI ID of the ENI attached to the appliance instance, and the Internet Gateway ID. Assuming you created the infrastructure using the CDK script I provided, here are the commands I use to find these three IDs (be sure to adjust to the AWS Region you use):

AWS_REGION=us-west-2
VPC_ID=$(aws cloudformation describe-stacks                              \
             --region $AWS_REGION                                        \
             --stack-name VpcIngressRoutingStack                         \
             --query "Stacks[].Outputs[?OutputKey=='VPCID'].OutputValue" \
             --output text)

ENI_ID=$(aws ec2 describe-instances                                       \
             --region $AWS_REGION                                         \
             --query "Reservations[].Instances[] | [?Tags[?Key=='Name' &&  Value=='appliance']].NetworkInterfaces[].NetworkInterfaceId" \
             --output text)

IGW_ID=$(aws ec2 describe-internet-gateways                               \
             --region $AWS_REGION                                         \
             --query "InternetGateways[] | [?Attachments[?VpcId=='${VPC_ID}']].InternetGatewayId" \
             --output text)

To route all incoming traffic through my appliance, I create a routing table for the Internet Gateway and add a route directing all traffic destined for the application subnet (10.0.1.0/24) to the appliance’s Elastic Network Interface (ENI):

# create a new routing table for the Internet Gateway
ROUTE_TABLE_ID=$(aws ec2 create-route-table                      \
                     --region $AWS_REGION                        \
                     --vpc-id $VPC_ID                            \
                     --query "RouteTable.RouteTableId"           \
                     --output text)

# create a route for 10.0.1.0/24 pointing to the appliance ENI
aws ec2 create-route                             \
    --region $AWS_REGION                         \
    --route-table-id $ROUTE_TABLE_ID             \
    --destination-cidr-block 10.0.1.0/24         \
    --network-interface-id $ENI_ID

# associate the routing table to the Internet Gateway
aws ec2 associate-route-table                      \
    --region $AWS_REGION                           \
    --route-table-id $ROUTE_TABLE_ID               \
    --gateway-id $IGW_ID

Alternatively, I can use the VPC Console under the new Edge Associations tab.

To route all application outgoing traffic through the appliance, I replace the default route for the application subnet to point to the appliance’s ENI:

SUBNET_ID=$(aws ec2 describe-instances                                  \
                --region $AWS_REGION                                    \
                --query "Reservations[].Instances[] | [?Tags[?Key=='Name' && Value=='application']].NetworkInterfaces[].SubnetId"    \
                --output text)
ROUTING_TABLE=$(aws ec2 describe-route-tables                           \
                    --region $AWS_REGION                                \
                    --query "RouteTables[?VpcId=='${VPC_ID}'] | [?Associations[?SubnetId=='${SUBNET_ID}']].RouteTableId" \
                    --output text)

# delete the existing default route (the one pointing to the internet gateway)
aws ec2 delete-route                       \
    --region $AWS_REGION                   \
    --route-table-id $ROUTING_TABLE        \
    --destination-cidr-block 0.0.0.0/0
	
# create a default route pointing to the appliance's ENI
aws ec2 create-route                          \
    --region $AWS_REGION                      \
    --route-table-id $ROUTING_TABLE           \
    --destination-cidr-block 0.0.0.0/0        \
    --network-interface-id $ENI_ID
	
aws ec2 associate-route-table       \
    --region $AWS_REGION            \
    --route-table-id $ROUTING_TABLE \
    --subnet-id $SUBNET_ID

Alternatively, I can use the VPC Console. Within the correct routing table, I select the Routes tab and click Edit routes to replace the default route (the one pointing to 0.0.0.0/0) with one targeting the appliance’s ENI.

Now I have the routing configuration in place. The new routing looks like:

Configure the Appliance Instance
Finally, I configure the appliance instance to forward all traffic it receives. Your software appliance usually does that for you; no extra step is required when you use AWS Marketplace appliances. When using a plain Linux instance, two extra steps are required:

1. Connect to the EC2 appliance instance and configure IP traffic forwarding in the kernel:

APPLIANCE_ID=$(aws ec2 describe-instances  \
                   --region $AWS_REGION    \
                   --query "Reservations[].Instances[] | [?Tags[?Key=='Name' && Value=='appliance']].InstanceId" \
                   --output text)
aws ssm start-session --region $AWS_REGION --target $APPLIANCE_ID	

##
## once connected (you see the 'sh-4.2$' prompt), type:
##

sudo sysctl -w net.ipv4.ip_forward=1
sudo sysctl -w net.ipv6.conf.all.forwarding=1
exit

2. Configure the EC2 instance to accept traffic destined for other hosts than itself, by disabling the source/destination check:

aws ec2 modify-instance-attribute --region $AWS_REGION \
                         --no-source-dest-check        \
                         --instance-id $APPLIANCE_ID

Now, the appliance is ready to forward traffic to the other EC2 instances. You can test this by pointing your browser (or using `cURL`) to the application instance.

APPLICATION_IP=$(aws ec2 describe-instances --region $AWS_REGION                          \
                     --query "Reservations[].Instances[] | [?Tags[?Key=='Name' && Value=='application']].NetworkInterfaces[].Association.PublicDnsName"  \
                     --output text)
				   
curl -I $APPLICATION_IP

To verify the traffic is really flowing through the appliance, you can enable the source/destination check on the instance again (use the --source-dest-check parameter with the modify-instance-attribute CLI command above). The traffic is blocked when the source/destination check is enabled.
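
For example, re-enabling the check simply mirrors the earlier command:

# re-enable the source/destination check; traffic forwarded by the appliance is now dropped
aws ec2 modify-instance-attribute --region $AWS_REGION \
                         --source-dest-check          \
                         --instance-id $APPLIANCE_ID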

Cleanup
Should you use the CDK script I provided for this article, be sure to run cdk destroy when finished. This ensures you are not billed for the two EC2 instances used in this demo. As I modified routing tables behind the back of AWS CloudFormation, I need to manually delete the routing tables, the subnet, and the VPC. The easiest way is to navigate to the VPC Console, select the VPC, and click Actions => Delete VPC. The console deletes all components in the correct order. You might need to wait 5-10 minutes after the end of cdk destroy before the console is able to delete the VPC.

From our Partners
During the beta test of these new routing capabilities, we granted early access to a collection of AWS partners. They provided us with tons of helpful feedback. Here are some of the blog posts that they wrote in order to share their experiences (I am updating this article with links as they are published):

  • 128 Technology
  • Aviatrix
  • Checkpoint
  • Cisco
  • Citrix
  • FireEye
  • Fortinet
  • HashiCorp
  • IBM Security
  • Lastline
  • Netscout
  • Palo Alto Networks
  • ShieldX Networks
  • Sophos
  • Trend Micro
  • Valtix
  • Vectra AI
  • Versa Networks

Availability
There is no additional cost to use Virtual Private Cloud ingress routing. It is available in all Regions (including AWS GovCloud (US-West)) and you can start to use it today.

You can learn more about gateway route tables in the updated VPC documentation.

What are the appliances you are going to use with this new VPC routing capability?

— seb

New for Identity Federation – Use Employee Attributes for Access Control in AWS

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-for-identity-federation-use-employee-attributes-for-access-control-in-aws/

When you manage access to resources on AWS or many other systems, you most probably use Role-Based Access Control (RBAC). When you use RBAC, you define access permissions to resources, group these permissions in policies, assign policies to roles, and assign roles to entities such as a person, a group of persons, a server, an application, etc. Many AWS customers told us they do so to simplify granting access permissions to related entities, such as persons sharing similar business functions in the organization.

For example, you might create a role for a finance database administrator and give that role access to the tables and compute resources necessary for finance. When Alice, a database admin, moves into that department, you assign her the finance database administrator role.

On AWS, you use AWS Identity and Access Management (IAM) permissions policies and IAM roles to implement your RBAC strategy.

The multiplication of resources makes it difficult to scale. When a new resource is added to the system, system administrators must add permissions for that new resource to all relevant policies. How do you scale this to thousands of resources and thousands of policies? How do you verify that a change in one policy does not grant unnecessary privileges to a user or application?

Attribute-Based Access Control
To simplify the management of permissions in the context of an ever-growing number of resources, a new paradigm emerged: Attribute-Based Access Control (ABAC). When you use ABAC, you define permissions based on matching attributes. You can use any type of attribute in the policies: user attributes, resource attributes, environment attributes. Policies are IF … THEN rules, for example: IF user attribute role == manager THEN she can access file resources having attribute sensitivity == confidential.

Using ABAC permission control allows you to scale your permission system, as you no longer need to update policies when adding resources. Instead, you ensure that resources have the proper attributes attached to them. ABAC also allows you to manage fewer policies because you do not need to create policies per job role.

On AWS, attributes are called tags. You can attach tags to resources such as Amazon Elastic Compute Cloud (EC2) instances, Amazon Elastic Block Store (EBS) volumes, AWS Identity and Access Management (IAM) users, and many others. The ability to tag resources, combined with the ability to define permission conditions on tags, effectively allows you to adopt the ABAC paradigm to control access to your AWS resources.

You can learn more about how to use ABAC permissions on AWS by reading the new ABAC section of the documentation or taking the tutorial, or watching Brigid’s session at re:Inforce.

This was a big step, but it only worked if your user attributes were stored in AWS. Many AWS customers manage identities (and their attributes) in another source and use federation to manage AWS access for their users.

Pass in Attributes for Federated Users
We’re excited to announce that you can now pass user attributes in the AWS session when your users federate into AWS, using standards-based SAML. You can now use attributes defined in external identity systems as part of attributes-based access control decisions within AWS. Administrators of the external identity system manage user attributes and define attributes to pass in during federation. The attributes you pass in are called “session tags”. Session tags are temporary tags which are only valid for the duration of the federated session.

Granting access to cloud resources using ABAC has several advantages. One of them is you have fewer roles to manage. For example, imagine a situation where Bob and Alice share the same job function, but different cost centers; and you want to grant access only to resources belonging to each individual’s cost center. With ABAC, only one role is required, instead of two roles with RBAC. Alice and Bob assume the same role. The policy will grant access to resources where their cost center tag value matches the resource cost center tag value. Imagine now you have over 1,000 people across 20 cost centers. ABAC can reduce the cost center roles from 20 to 1.

Let us consider another example. Let’s say your systems engineer configures your external identity system to include CostCenter as a session tag when developers federate into AWS using an IAM role. All federated developers assume the same role, but are granted access only to AWS resources belonging to their cost center, because permissions apply based on the CostCenter tag included in their federated session and on the resources.

Let’s illustrate this example with the diagram below:


In the figure above, blue, yellow, and green represent the three cost centers my workforce users are attached to. To set up ABAC, I first tag all project resources with their respective CostCenter tags and configure my external identity system to include the CostCenter tag in the developer session. The IAM role in this scenario grants access to project resources based on the CostCenter tag. The IAM permissions might look like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [ "ec2:DescribeInstances"],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": ["ec2:StartInstances","ec2:StopInstances"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/CostCenter": "${aws:PrincipalTag/CostCenter}"
                }
            }
        }
    ]
}

Access is granted (Allow) only when the condition matches: when the value of the resource’s CostCenter tag matches the value of the principal’s CostCenter tag. Now, whenever my workforce users federate into AWS using this role, they only get access to the resources belonging to their cost center, based on the CostCenter tag included in the federated session.

If a user switches from cost center green to blue, your system administrator will update the external identity system with CostCenter = blue, and permissions in AWS automatically apply to grant access to the blue cost center AWS resources, without requiring permissions update in AWS. Similarly, when your system administrator adds a new workforce user in the external identity system, this user immediately gets access to the AWS resources belonging to her cost center.
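
To test such a policy without going through a full federation flow, I can attach a session tag while assuming the role directly from the AWS Command Line Interface (CLI); the role ARN below is a placeholder:

# assume the role with a CostCenter session tag attached (role ARN is a placeholder)
aws sts assume-role \
    --role-arn arn:aws:iam::123456789012:role/abac-developer-role \
    --role-session-name alice \
    --tags Key=CostCenter,Value=blue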

We have worked with Auth0, ForgeRock, IBM, Okta, OneLogin, Ping Identity, and RSA to ensure the attributes defined in their systems are correctly propagated to AWS sessions. You can refer to their published guidelines on configuring session tags for AWS for more details. In case you are using other Identity Providers, you may still be able to configure session tags, if they support the industry standards SAML 2.0 or OpenID Connect (OIDC). We look forward to working with additional Identity Providers to certify Session Tags with their identity solutions.

Session Tags are available in all AWS Regions today at no additional cost. You can read our new session tags documentation page to follow step-by-step instructions to configure an ABAC-based permission system.

— seb

New – Convert Your Single-Region Amazon DynamoDB Tables to Global Tables

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-convert-your-single-region-amazon-dynamodb-tables-to-global-tables/

Hundreds of thousands of AWS customers are using Amazon DynamoDB. In 2017, we launched DynamoDB global tables, a fully managed solution to deploy multi-region, multi-master DynamoDB tables without having to build and maintain your own replication solution. When you create a global table, you specify the AWS Regions where you want the table to be available. DynamoDB performs all of the necessary tasks to create identical tables in these regions and propagate ongoing data changes to all of them.

AWS customers are using DynamoDB global tables for two main reasons: to provide a low-latency experience to their clients and to facilitate their backup or disaster recovery process. Latency is the time it takes for a piece of information to travel through the network and back. Lower-latency apps have higher customer engagement and generate more revenue. Deploying your backend to multiple regions close to your customers allows you to reduce the latency in your app. Having a full copy of your data in another region makes it easy to switch traffic to that other region in case you break your regional setup, or in the exceedingly rare case of a regional failure. As our CTO Dr. Werner Vogels wrote: “failures are a given, and everything will eventually fail over time.”

Starting today, you can convert your existing DynamoDB tables to global tables with a few clicks in the AWS Management Console, or using the AWS Command Line Interface (CLI), or the Amazon DynamoDB API. Previously, only empty tables could be converted to global tables. You had to guess your regional usage of a table at the time you created it. Now you can go global, or you can extend existing global tables to additional regions at any time.

Your applications can continue to use the table while we set up the replication. When you add a region to your table, DynamoDB begins populating the new replica using a snapshot of your existing table. Your applications can continue writing to your existing region while DynamoDB builds the new replica, and all in-flight updates will be eventually replicated to your new replica.

To create a DynamoDB global table using the AWS Command Line Interface (CLI), I first create a local table in the US West (Oregon) Region (us-west-2):

aws dynamodb create-table --region us-west-2 \
                          --table-name demo-global-table \
                          --key-schema AttributeName=id,KeyType=HASH \
                          --attribute-definitions AttributeName=id,AttributeType=S \
                          --billing-mode PAY_PER_REQUEST

The command returns:

{
    "TableDescription": {
        "AttributeDefinitions": [
            {
                "AttributeName": "id",
                "AttributeType": "S"
            }
        ],
        "TableName": "demo-global-table",
        "KeySchema": [
            {
                "AttributeName": "id",
                "KeyType": "HASH"
            }
        ],
        "TableStatus": "CREATING",
        "CreationDateTime": 1570278914.419,
        "ProvisionedThroughput": {
            "NumberOfDecreasesToday": 0,
            "ReadCapacityUnits": 0,
            "WriteCapacityUnits": 0
        },
        "TableSizeBytes": 0,
        "ItemCount": 0,
        "TableArn": "arn:aws:dynamodb:us-west-2:400000000003:table/demo-global-table",
        "TableId": "0a04bd34-bbff-42dd-ae18-78d05ce641fd",
        "BillingModeSummary": {
            "BillingMode": "PAY_PER_REQUEST"
        }
    }
}

Once the table is created, I insert some items:

aws dynamodb batch-write-item --region us-west-2 --request-items file://./batch-write-items.json

(The json file is available as a gist)

Then, I update the table to add an additional region, the US East (N. Virginia) Region (us-east-1):

aws dynamodb update-table --region us-west-2 \
                          --table-name demo-global-table \
                          --replica-updates \
  '[
     {
       "Create": {
         "RegionName": "us-east-1"
       }
     }
   ]'

The command returns a long JSON document; the attributes you need to pay attention to are:

{
...
        "TableStatus": "UPDATING",
        "TableSizeBytes": 124,
        "ItemCount": 3,
        "StreamSpecification": {
            "StreamEnabled": true,
            "StreamViewType": "NEW_AND_OLD_IMAGES"
        },
        "LatestStreamLabel": "2019-10-22T19:33:37.819",
        "LatestStreamArn": "arn:aws:dynamodb:us-west-2:400000000003:table/demo-global-table/stream/2019-10-22T19:33:37.819"
    }
...
}

I can make the same update in the AWS Management Console. I select the table to update and click Global Tables.

Enabling streaming is a requirement for global tables. I first click on Enable stream, then Add region:

I choose the Region I want to replicate to, for this example Europe (Ireland), and click Create replica.

DynamoDB asynchronously replicates the table to the new region. I monitor the progress of the replication in the AWS Management Console. The table’s status eventually changes from Creating to Active. I can also check the status by calling the DescribeTable API and verifying that TableStatus = Active.
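
For example, from the CLI, in the replica Region:

# check the replica table status from the new Region
aws dynamodb describe-table --region eu-west-1 \
                            --table-name demo-global-table \
                            --query "Table.TableStatus"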

After a while, I can query the table in the new region:

aws dynamodb get-item --region eu-west-1 --table-name demo-global-table --key '{"id" : {"S" : "0123456789"}}'

{
    "Item": {
        "firstname": {
            "S": "Jeff"
        },
        "id": {
            "S": "0123456789"
        },
        "lastname": {
            "S": "Barr"
        }
    }
}

Starting today, you can update existing local tables to global tables. In a few weeks, we’ll release a tool that enables you to update your existing global tables to take advantage of this new capability. The update itself will take a few minutes at most. Your table will be available for your applications during the update process.

Other Improvements
We are also simplifying the internal mechanism used for data synchronization. Previously, DynamoDB global tables leveraged DynamoDB Streams and added three attributes (aws:rep:*) to your schema to keep your data in sync. DynamoDB now manages replication natively. It does not expose synchronization attributes in your data and it does not consume additional write capacity:

  • Only one write operation occurs in each region of your global table, which reduces the consumed replicated write capacity that is required on your table.
  • Because of that, a second DynamoDB Streams record is no longer published.
  • The three aws:rep:* attributes that were previously populated are no longer inserted in the item record.

These changes have two consequences for your apps. First, they reduce your DynamoDB costs when using global tables, because no extra write capacity is required to manage the synchronization. Second, if your application relies on the three technical attributes (aws:rep:*), it requires a slight code change. In particular, DynamoDB Mapper must not require the aws:rep:* attributes to exist in the item record.

With this change, we are also updating the UpdateTable API. Any operation that modifies global secondary indexes (GSIs), billing mode, server-side encryption, or write capacity units on a global table is applied to all other replicas asynchronously.

Availability
The improved Amazon DynamoDB global tables are available today in the 13 Regions where Amazon DynamoDB global tables are offered, and more Regions are planned in the future. As of today, the list of AWS Regions is us-east-1 (Northern Virginia), us-west-2 (Oregon), us-east-2 (Ohio), us-west-1 (Northern California), ap-northeast-2 (Seoul), ap-southeast-1 (Singapore), ap-southeast-2 (Sydney), ap-northeast-1 (Tokyo), eu-central-1 (Frankfurt), eu-west-1 (Ireland), eu-west-2 (London), GovCloud (US-East), and GovCloud (US-West).

There is no change in pricing. You pay only for the resources you use in the additional regions and the data transfer between regions.

This update addresses the most common feedback that we have heard from you and will serve as the platform on which we will build additional features in the future. Continue to tell us how you are using global tables and what is important for your apps.

— seb

New – Application Load Balancer Simplifies Deployment with Weighted Target Groups

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-application-load-balancer-simplifies-deployment-with-weighted-target-groups/

One of the benefits of cloud computing is the ability to create infrastructure programmatically and to tear it down when it is no longer needed. This radically changes the way developers deploy their applications. When developers deployed applications on premises, they had to reuse existing infrastructure for new versions of their applications. In the cloud, developers create new infrastructure for new versions of their applications. They keep the previous version running in parallel for a while before tearing it down. This technique is called blue/green deployment. It allows you to progressively switch traffic between two versions of your apps, to monitor business and operational metrics on the new version, and to switch traffic back to the previous version in case anything goes wrong.

To adopt blue/green deployments, AWS customers use two strategies. The first strategy consists of creating a second application stack, including a second load balancer, and using a weighted routing technique, such as DNS, to direct part of the traffic to each stack. The second strategy consists of replacing the infrastructure behind the load balancer. Both strategies can cause delays in moving traffic between versions, depending on DNS TTLs and caching on client machines. They can also incur additional costs to run the extra load balancer, and potential delays to warm it up.

A target group tells a load balancer where to direct traffic: to EC2 instances, fixed IP addresses, or AWS Lambda functions, amongst others. When creating a load balancer, you create one or more listeners and configure listener rules to direct the traffic to one target group.

Today, we are announcing weighted target groups for application load balancers. It allows developers to control how to distribute traffic to multiple versions of their application.

Multiple, Weighted Target Groups
You can now add more than one target group to the forward action of a listener rule, and specify a weight for each group. For example, when you define a rule having two target groups with weights of 8 and 2, the load balancer will route 80% of the traffic to the first target group and 20% to the other.

To experiment with weighted target groups today, you can use this CDK code. It creates two auto scaling groups with EC2 instances and an Elastic Load Balancer in front of them. It also deploys a sample web app on the instances. The blue version of the web app is deployed to the blue instance and the green version of the web app is deployed to the green instance. The infrastructure looks like this:

You can git clone the CDK project and type npm run build && cdk bootstrap && cdk deploy to deploy the above infrastructure. To show you how to configure the load balancer, the CDK code creates the auto scaling groups, the load balancer, and a generic target group. Let’s manually finish the configuration and create two weighted target groups, one for each version of the application.

First, I navigate to the EC2 console, select Target Groups, and click the Create Target Group button. I create a target group called green. Be sure to select the correct Amazon Virtual Private Cloud (the one created by the CDK script has a name starting with “AlbWtgStack...“), then click Create.

I repeat the operation to create a blue target group. My Target Groups console looks like this:

Next, I change the two auto scaling groups to point them to the blue and green target groups. In the AWS Management Console, I click Auto Scaling Groups and select one of the two auto scaling groups, paying attention to the name (it contains either ‘green’ or ‘blue’). I then click Actions, then Edit.

In the Edit details screen, I remove the target group that has been created by the CDK script and add the target group matching the name of the auto scaling group (green or blue). I click Save at the bottom of the screen and I repeat the operation for the other auto scaling group.

Next, I change the listener rule to add these two target groups, each having their own weight. In the EC2 console, I select Load Balancers on the left side, then I search for the load balancer created by the CDK code (the name starts with “alb”). I click Listeners, then View / edit rules:

There is one rule created by the CDK script. I modify it by clicking the edit icon at the top, then again the edit icon on the left of the rule. I delete the Forward to rule by clicking the trash can icon.

Then I click “+ Add Action” to add two Forward to actions, each pointing to one of the target groups (blue and green), weighted 50 and 50.

Finally, I click Update on the right side. I am now ready to test the weighted load balancing.

I point my browser to the DNS name of the load balancer. I see either the green or the blue version of the web app. I force my browser to reload the page and I observe the load balancer in action, sending 50% of the requests to the green application and 50% to the blue application. Some browsers might cache the page and not reflect the weight I defined. Safari and Chrome are less aggressive than Firefox at this exercise.

Now, in the AWS Management Console, I change the weights to 80 and 20 and continue to refresh my browser. I observe that the blue version is displayed 8 times out of 10, on average.

I can also adjust the weight from the ALB ModifyListener API, the AWS Command Line Interface (CLI) or with AWS CloudFormation.

For example, I use the AWS Command Line Interface (CLI) like this:

aws elbv2 modify-listener    \
     --listener-arn "<listener arn>" \
     --default-actions        \
        '[{
          "Type": "forward",
          "Order": 1,
          "ForwardConfig": {
             "TargetGroups": [
               { "TargetGroupArn": "<target group 1 arn>",
                 "Weight": 80 },
               { "TargetGroupArn": "<target group 2 arn>",
                 "Weight": 20 }
             ]
          }
         }]'

Or I use AWS CloudFormation with this JSON extract:

"ListenerRule1": {
      "Type": "AWS::ElasticLoadBalancingV2::ListenerRule",
      "Properties": {
        "Actions": [{
          "Type": "forward",
          "ForwardConfig": {
            "TargetGroups": [{
              "TargetGroupArn": { "Ref": "TargetGroup1" },
              "Weight": 1
            }, {
              "TargetGroupArn": { "Ref": "TargetGroup2" },
              "Weight": 1
            }]
          }
        }],
        "Conditions": [{
          "Field": "path-pattern",
          "Values": ["foo"]
        }],
        "ListenerArn": { "Ref": "Listener" },
        "Priority": 1
      }
    }

If you are using an external service or tool to manage your load balancer, you may need to wait till the provider updates their APIs to support weighted routing configuration on Application load balancer.

Other uses
In addition to blue/green deployments, AWS customers can use weighted target groups for two other use cases: cloud migration, or migration between different AWS compute resources.

When you migrate an on-premises application to the cloud, you may want to do it progressively, with a period where the application runs both in the on-premises data center and in the cloud. Eventually, when you have verified that the cloud version performs satisfactorily, you can completely decommission the on-premises application.

Similarly, when you migrate a workload from EC2 instances to Docker containers running on AWS Fargate for example, you can easily bring up your new application stack on a new target group and gradually move the traffic by changing the target group weights, with no downtime for end users. With Application Load Balancer supporting a variety of AWS resources like EC2, Containers (Amazon ECS, Amazon Elastic Kubernetes Service, AWS Fargate), AWS Lambda functions and IP addresses as targets, you can choose to move traffic between any of these.

Target Group Stickiness
There are situations when you want clients to experience the same version of the application for a specified duration, or you want clients currently using the app not to switch to the newly deployed (green) version during their session. For these use cases, we are also introducing target group stickiness. When target group stickiness is enabled, the requests from a client are all sent to the same target group for the specified duration. When the duration expires, the requests are distributed to a target group according to the weights. ALB issues a cookie to maintain target group stickiness.

Note that target group stickiness is different from the already existing target stickiness (also known as Sticky Sessions). Sticky Sessions makes sure that the requests from a client always stick to a particular target within a target group. Target group stickiness only ensures the requests are sent to a particular target group. Sticky Sessions can be used in conjunction with target group stickiness.

To add or configure target group stickiness from the AWS Command Line Interface (CLI), you use the TargetGroupStickinessConfig parameter, like below:

aws elbv2 modify-listener \
    --listener-arn "<listener arn>" \
    --default-actions \
    '[{
       "Type": "forward",
       "Order": 1,
       "ForwardConfig": {
          "TargetGroups": [
             {"TargetGroupArn": "<target group 1 arn>", "Weight": 20},
             {"TargetGroupArn": "<target group 2 arn>", "Weight": 80}
          ],
          "TargetGroupStickinessConfig": {
             "Enabled": true,
             "DurationSeconds": 2000
          }
       }
   }]'

Availability
Application Load Balancer supports up to 5 target groups per listener rule, each with its own weight. You can adjust the weights as many times as you need, up to the API threshold limit. There might be a slight delay before the actual traffic weight is updated.
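
To verify the weights currently in effect on a listener, one option is to describe its rules from the CLI; the listener ARN is a placeholder and the query expression is only one possible way to filter the output:

# list the forward actions of the listener rules, including target group weights
aws elbv2 describe-rules \
    --listener-arn "<listener arn>" \
    --query "Rules[].Actions[].ForwardConfig.TargetGroups"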

Weighted target groups are available in all AWS Regions today. There is no additional cost to use weighted target groups on Application Load Balancer.

— seb

PS: do not forget to delete the example infrastructure created for this blog post, to stop accruing AWS charges. As we manually modified an infrastructure created by the CDK, a simple cdk destroy will immediately return. Connect to the AWS CloudFormation console instead and delete the AlbWtgStack. You also need to manually delete the blue and green target groups in the EC2 console.

Amazon Connect Introduces Web & Mobile Chat for a True Omnichannel Contact Center Experience

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-omnichannel-contact-center-web-and-mobile-chat-for-amazon-connect/

When we started Amazon in 1995, it was with the mission to be the earth’s most customer-centric company. It obviously requires many talented individuals and technologies to deliver on that vision, including contact centers. As Amazon’s retail business scaled, we first shopped for third-party contact center solutions, but we could not find one that fit our needs, so we decided to build our own. After we built an initial version, we listened to our contact center team feedback and iterated for several years to meet our strict requirements of security, elasticity, flexibility, reliability, and high customer experience standards. Many AWS customers told us they have the same challenges to procure, install, configure, and operate their contact centers. They asked us to make our solution available to all businesses.

Since we launched Amazon Connect, thousands of customers have created their own contact centers in the cloud. Amazon Connect makes it easy for non-technical customers to design contact flows, manage agents, and track performance metrics. It is easy to integrate Amazon Connect to other systems, such as customer relationship management (CRM) or to integrate Amazon Lex intelligent conversational bots into contact flows. For example, Intuit integrates Amazon Connect with Salesforce to build contact flow experiences that adapt to their customer needs in real-time. In the United Kingdom, the National Health Service (NHS) is using Amazon Connect and Amazon Lex to automatically answer most frequently asked questions about European Health Insurance Card (EHIC). During the first four weeks of operation, 42 percent of EHIC calls were resolved via the integrated Amazon Connect and Amazon Lex solution, and did not have to be passed back to a human agent. There was a 26 percent reduction in EHIC contact center calls handled by human agents.

But voice-based contact centers are only one part of the story. Today, we communicate more and more with messaging, and customers use multiple channels to communicate with businesses. Often, simple questions can be answered by a short chat message and do not involve a voice conversation with an agent. This is why we are announcing web and mobile chat for Amazon Connect. Your customers can now choose between using chat or making a phone call to get their questions or concerns addressed. When they choose to chat with a contact center agent, they can do it at their own pace, making it as familiar as messaging a friend. Conversation context is maintained across both chat and voice, giving customers freedom to move between channels without forcing them to start all over again or to wait for an agent.

Amazon Connect chat gives businesses a single unified contact center service for voice and chat. Amazon Connect provides a single routing engine that efficiently distributes work amongst agents and reduces end-customer wait times. Agents have a single user interface to help end-customers over both voice and chat, reducing the number of tools they have to learn and the number of screens they have to interact with. Chat activities integrate nicely into your existing contact center flows and the automation you built for voice. You build your flows once and reuse them across multiple channels. Likewise, the metrics you collect and the dashboards you built automatically benefit from unified metrics across multiple channels.

Your customers can start chatting with contact center agents from any of your business applications, web or mobile. Let’s see what it looks like. In the example below, the chat is integrated into your company web site.
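
Behind the scenes, a chat conversation is typically initiated by calling the Amazon Connect StartChatContact API from your website backend. Here is a minimal sketch with the AWS CLI, not the exact integration shown in this example; the instance and contact flow IDs are placeholders:

# start a chat contact and obtain a participant token for the web chat widget
aws connect start-chat-contact \
    --instance-id "<your Amazon Connect instance id>" \
    --contact-flow-id "<your contact flow id>" \
    --participant-details DisplayName="Customer"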

Contact center agents receive chat requests in the same web-based Contact Control Panel (CCP) they use for voice engagements. Since it is web-based, agents can work from virtually anywhere. Customers can integrate the CCP directly into the applications their contact center agents use, such as their customer relationship management (CRM) system, using the CCP SDK.

To add chat capabilities to your Amazon Connect contact center, open the console and enable your agents to take chats by enabling Chat in their Routing Profile; no code is required. Once this is done, they can begin accepting chats through the updated agent experience.

Should you need help adding Amazon Connect chat capabilities to your website or applications, please reach out to one of the dozens of Amazon Connect partners available worldwide.

Amazon Connect chat is charged on a per-use basis. There are no required up-front payments, long-term commitments, or minimum monthly fees. You pay per chat message, independently of the number of agents or customers using it. Regional pricing may vary; read the pricing page for the details.

Amazon Connect chat will be generally available this week in all AWS Regions where Amazon Connect is offered: US East (Northern Virginia), US West (Oregon), EU (Frankfurt), Asia Pacific (Sydney), and Asia Pacific (Tokyo).

As usual, we are eager to hear your feedback; do not hesitate to share your thoughts with us.

— seb

Improve Your App Testing With Amplify Console’s Pull Requests Previews and Cypress Testing

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/improve-your-app-testing-with-amplify-consoles-pull-requests-previews-and-cypress-testing/

Amplify Console allows developers to easily configure a Git-based workflow for continuous deployment and hosting of fullstack serverless web apps. Fullstack serverless apps comprise backend resources such as GraphQL APIs, Data and File Storage, Authentication, or Analytics, integrated with a frontend framework such as React, Gatsby, or Angular. You can read more about the Amplify Console in a previous article I wrote.

Today, we are announcing the ability to create preview URLs and to run end-to-end tests on pull requests before releasing code to production.

Pull Request previews
You can now configure Amplify Console to deploy your application to a unique URL every time a developer submits a pull request to your Git repository. The preview URL is completely different from the one used by the production site. You can see how changes look before merging the pull request into the main branch of your code repository, triggering a new release in the Amplify Console. For fullstack apps with backend environments provisioned via the Amplify CLI, every pull request spins up an ephemeral backend that is deleted when the pull request is closed. You can test changes in complete isolation from the production environment. Amplify Console creates backend infrastructure for pull requests on private Git repositories only. This avoids incurring extra costs in case of unsolicited pull requests.

To learn how it works, let’s start a web application with a cloud-based authentication backend, and deploy it on Amplify Console. I first create a React application (check here to learn how to install React).

npx create-react-app amplify-console-demo                                                
cd amplify-console-demo

I initialize the Amplify environment (learn how to install the Amplify CLI first). I add a cloud-based authentication backend powered by Amazon Cognito. I accept all the default answers proposed by the Amplify CLI.

npm install aws-amplify aws-amplify-react
amplify init
amplify add auth
amplify push

I then modify src/App.js to add the front end authentication user interface. The code is available in the AWS Amplify documentation. Once ready, I start the local development server to test the application locally.

npm run start

I point my browser to http://localhost:8080 to verify the scaffolding (the screenshot below is taken from my AWS Cloud9 development environment). I click Create account to create a user, verify the SignUp flow, and authenticate to the app.

After signing up, I see the application page.

There are two important details to note. First, I am using a private GitHub repository. Amplify Console only creates backend infrastructure on pull requests for private repositories, to avoid creating unnecessary infrastructure for unsolicited pull requests. Second, the Amplify Console build process looks for dependencies in package-lock.json only. This is why I added the amplify packages with npm and not with yarn.

When I am happy with my app, I push the code to a GitHub repo (let’s assume I already did git remote add origin ...).

git add amplify
git commit -am "initial commit"
git push origin master

The next step consists of configuring Amplify Console to build and deploy my app on every git commit. I log in to the Amplify Console, click Connect App, choose GitHub as the repository service, and click Continue (the first time I do this, I need to authenticate on GitHub using my GitHub username and password).

I select my repository and the branch I want to use as source:

Amplify Console detects the type of project and proposes a build file. I select the name of my environment (dev). The first time I use Amplify Console, I follow the instructions to create a new service role. This role authorises Amplify Console to access AWS backend services on my behalf.

I click Next. I review the settings and click Save and Deploy. After a few seconds or minutes, my application is ready. I can point my browser to the deployment URL and verify the app is working correctly.

Now, let’s enable previews for pull requests. Click Preview on the left menu and Enable Previews. To enable the previews, Amplify Console requires an app to be installed in my GitHub account. I follow the instructions provided by the console to configure my GitHub account. Once set up, I select a branch and click Manage to enable or disable pull request previews. (At any time, I can uninstall the Amplify app from my GitHub account by visiting the Applications section of my GitHub account’s settings.)

Now that the mechanism is in place, let’s create a pull request.

I edit App.js directly on GitHub. I customize the withAuthenticator component to change the color of the Sign In button from orange to green. I save the changes and I create a pull request.

On the Pull Request detail page, I click Show all checks to get the status of the Amplify Console test. I see AWS Amplify Console Web Preview in progress. Amplify Console creates a full backend environment to test the pull request, to build and to deploy the frontend.

Eventually, I see All checks have passed and a green mark. I click Details to get the preview URL. In case of an error, you can see the detailed log file of the build phase in the Amplify Console.

I can also check the status of the preview in the Amplify Console.

I point my browser to the preview URL to test my change. I can see the green Sign In button instead of the orange one.

When I try to authenticate using the username and password I created previously, I receive a User does not exist error message because this preview URL points to a different backend than the main application. I can see two Cognito user pools in the Cognito console, one for each environment.
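
As a quick check, I can also list the user pools from the command line (a sketch, assuming the AWS CLI is configured for the same account and Region):

# each Amplify backend environment has its own Cognito user pool
aws cognito-idp list-user-pools --max-results 20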

I can control who can access the preview URL using access control settings similar to the ones I use for the main URL.

When I am happy with the proposed changes, I merge the pull request on GitHub to trigger a new build and deploy the change to the production environment. Upon merging, Amplify Console deletes the preview environment, including the ephemeral backend environment created for the pull request.

Cypress testing
In addition to previewing changes before merging them to the main branch, we also added the capability to run end-to-end tests during your build process. You can use your favorite test framework to add unit or end-to-end tests to your application and automatically run the tests during the build phase. When you use the Cypress test framework, Amplify Console detects the tests in your source tree and automatically adds a testing phase to your application build process.
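
For instance, adding Cypress to the sample React project could look like this (a sketch; the test specs themselves live in the cypress directory that Amplify Console detects):

# add Cypress as a development dependency and run the test suite locally
npm install cypress --save-dev
npx cypress run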

Only builds that pass all tests are pushed down your pipeline to the deployment phase. You can learn more about this and follow the step-by-step instructions we posted a few weeks ago.

These two additions to Amplify Console allow you to gain higher confidence in the robustness of your pipeline and the quality of the code delivered to your production environment.

Availability
Web previews are available in all Regions where AWS Amplify Console is available today, at no additional cost on top of the regular Amplify Console pricing. With the AWS Free Usage Tier, you can get started for free. Upon sign-up, new AWS customers receive 1,000 build minutes per month for the build and deploy feature, as well as 15 GB served per month and 5 GB of data storage per month for hosting.

— seb

Learn From Your VPC Flow Logs With Additional Meta-Data

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/learn-from-your-vpc-flow-logs-with-additional-meta-data/

Flow Logs for Amazon Virtual Private Cloud enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow Logs data can be published to Amazon CloudWatch Logs or Amazon Simple Storage Service (S3).

Since we launched VPC Flow Logs in 2015, you have been using it for a variety of use cases, such as troubleshooting connectivity issues across your VPCs, intrusion detection, anomaly detection, or archival for compliance purposes. Until today, VPC Flow Logs provided information that included source IP, source port, destination IP, destination port, action (accept, reject) and status. Once enabled, a VPC Flow Log entry looks like the one below.
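
For reference, a record in the default format contains the version, account ID, interface ID, source and destination addresses and ports, protocol, packet and byte counts, start and end timestamps, action, and log status, and looks similar to this (illustrative values):

2 123456789010 eni-0123456789abcdef0 172.31.16.139 172.31.16.21 20641 22 6 20 4249 1566328660 1566328672 ACCEPT OK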

While this information was sufficient to understand most flows, it required additional computation and lookup to match IP addresses to instance IDs or to guess the directionality of the flow to come to meaningful conclusions.

Today we are announcing the availability of additional meta-data to include in your Flow Log records to better understand network flows. The enriched Flow Logs will allow you to simplify your scripts or remove the need for post-processing altogether, by reducing the number of computations or lookups required to extract meaningful information from the log data.

When you create a new VPC Flow Log, in addition to existing fields, you can now choose to add the following meta-data:

  • vpc-id : the ID of the VPC containing the source Elastic Network Interface (ENI).
  • subnet-id : the ID of the subnet containing the source ENI.
  • instance-id : the Amazon Elastic Compute Cloud (EC2) instance ID of the instance associated with the source interface. When the ENI is placed by AWS services (for example, AWS PrivateLink, NAT Gateway, Network Load Balancer, etc.), this field will be “-”.
  • tcp-flags : the bitmask for TCP flags observed within the aggregation period. For example, FIN is 0x01 (1), SYN is 0x02 (2), ACK is 0x10 (16), SYN + ACK is 0x12 (18), etc. (the bits are specified in the “Control Bits” section of RFC 793, “Transmission Control Protocol Specification”).
    This allows you to understand who initiated or terminated the connection. TCP uses a three-way handshake to establish a connection. The connecting machine sends a SYN packet to the destination, the destination replies with a SYN + ACK and, finally, the connecting machine sends an ACK. In the Flow Logs, the handshake is shown as two lines, with tcp-flags values of 2 (SYN) and 18 (SYN + ACK). ACK is reported only when it is accompanied by SYN (otherwise there would be too much noise for you to filter out).
  • type : the type of traffic: IPv4, IPv6 or Elastic Fabric Adapter.
  • pkt-srcaddr : the packet-level IP address of the source. You typically use this field in conjunction with srcaddr to distinguish between the IP address of an intermediate layer through which traffic flows, such as a NAT gateway.
  • pkt-dstaddr : the packet-level destination IP address, similar to the previous one, but for destination IP addresses.

To create a VPC Flow Log, you can use the AWS Management Console, the AWS Command Line Interface (CLI) or the CreateFlowLogs API, and select which additional fields to include and the order in which you want them, for example:

Or using the AWS Command Line Interface (CLI) as below:

$ aws ec2 create-flow-logs --resource-type VPC \
                            --region eu-west-1 \
                            --resource-ids vpc-12345678 \
                            --traffic-type ALL  \
                            --log-destination-type s3 \
                            --log-destination arn:aws:s3:::sst-vpc-demo \
                            --log-format '${version} ${vpc-id} ${subnet-id} ${instance-id} ${interface-id} ${account-id} ${type} ${srcaddr} ${dstaddr} ${srcport} ${dstport} ${pkt-srcaddr} ${pkt-dstaddr} ${protocol} ${bytes} ${packets} ${start} ${end} ${action} ${tcp-flags} ${log-status}'

# be sure to replace the bucket name and VPC ID!

{
    "ClientToken": "1A....HoP=",
    "FlowLogIds": [
        "fl-12345678123456789"
    ],
    "Unsuccessful": [] 
}

Enriched VPC Flow Logs are delivered to S3. We will automatically add the required S3 bucket policy to authorize VPC Flow Logs to write to your S3 bucket. VPC Flow Logs does not capture real-time log streams for your network interfaces; it might take several minutes to begin collecting and publishing data to the chosen destinations. Your logs will eventually be available on S3 at s3://<bucket name>/AWSLogs/<account id>/vpcflowlogs/<region>/<year>/<month>/<day>/
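
Once files start arriving, you can list them with the AWS CLI, for example (a sketch using the demo bucket name from the command above):

# list the compressed flow log files delivered to the bucket
$ aws s3 ls --recursive s3://sst-vpc-demo/AWSLogs/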

An SSH connection from my laptop with IP address 90.90.0.200 to an EC2 instance would appear like this:

3 vpc-exxxxxx2 subnet-8xxxxf3 i-0bfxxxxxxaf eni-08xxxxxxa5 48xxxxxx93 IPv4 172.31.22.145 90.90.0.200 22 62897 172.31.22.145 90.90.0.200 6 5225 24 1566328660 1566328672 ACCEPT 18 OK
3 vpc-exxxxxx2 subnet-8xxxxf3 i-0bfxxxxxxaf eni-08xxxxxxa5 48xxxxxx93 IPv4 90.90.0.200 172.31.22.145 62897 22 90.90.0.200 172.31.22.145 6 4877 29 1566328660 1566328672 ACCEPT 2 OK

172.31.22.145 is the private IP address of the EC2 instance, the one you see when you type ifconfig on the instance. All flags are OR’ed during the aggregation period. When the connection is short, both SYN and FIN (3), as well as SYN + ACK and FIN (19), will likely be set on the same lines.

Once a Flow Log is created, you cannot add fields or modify the structure of the log; this ensures you will not accidentally break scripts consuming this data. Any modification requires you to delete and recreate the VPC Flow Log. There is no additional cost to capture the extra information in the VPC Flow Logs; normal VPC Flow Log pricing applies. Remember that enriched VPC Flow Log records might consume more storage when you select all fields. We recommend selecting only the fields relevant to your use cases.

Enriched VPC Flow Logs are available in all Regions where VPC Flow Logs is available; you can start using them today.

— seb

PS: I heard from the team that they are working on adding more meta-data to the logs; stay tuned for updates.

New – Port Forwarding Using AWS System Manager Sessions Manager

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-port-forwarding-using-aws-system-manager-sessions-manager/

I increasingly see customers adopting the immutable infrastructure architecture pattern: they rebuild and redeploy an entire infrastructure for each update. They very rarely connect to servers over SSH or RDP to update configuration or to deploy software updates. However, when migrating existing applications to the cloud, it is common to connect to your Amazon Elastic Compute Cloud (EC2) instances to perform a variety of management or operational tasks. To reduce the attack surface, AWS recommends using a bastion host, also known as a jump host. This special-purpose EC2 instance is designed to be the primary access point from the Internet and acts as a proxy to your other EC2 instances. To connect to your EC2 instance, you first SSH / RDP into the bastion host and, from there, connect to the destination EC2 instance.

To further reduce the attack surface, the operational burden of managing bastion hosts, and the additional costs incurred, AWS Systems Manager Session Manager allows you to securely connect to your EC2 instances without the need to run and operate your own bastion hosts and without the need to run SSH on your EC2 instances. When the Systems Manager Agent is installed on your instances and you have IAM permissions to call the Systems Manager API, you can use the AWS Management Console or the AWS Command Line Interface (CLI) to securely connect to your Linux or Windows EC2 instances.

An interactive shell on EC2 instances is not the only use case for SSH. Many customers also use SSH tunnels to remotely access services not exposed to the public internet. SSH tunneling is a powerful but lesser-known feature of SSH that allows you to create a secure tunnel between a local host and a remote service. Let’s imagine I am running a web server for easy private file transfer between an EC2 instance and my laptop. These files are private; I do not want anybody else to access that web server, so I configure my web server to bind only on 127.0.0.1 and I do not add port 80 to the instance’s security group. Only local processes can access the web server. To access the web server from my laptop, I create an SSH tunnel between my laptop and the web server, as shown below.
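
A command along these lines creates that tunnel (a sketch; the key file name and instance address are placeholders):

# open local port 9999 and forward it to port 80 on the instance
ssh -i "<my key file>.pem" -L 9999:localhost:80 ec2-user@<instance address>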

This command tells SSH to connect to the instance as user ec2-user, open port 9999 on my local laptop, and forward everything from there to localhost:80 on the instance. When the tunnel is established, I can point my browser at http://localhost:9999 to connect to my private web server on port 80.

Today, we are announcing Port Forwarding for AWS Systems Manager Session Manager. Port Forwarding allows you to securely create tunnels between your instances deployed in private subnets, without the need to start the SSH service on the server, to open the SSH port in the security group, or to use a bastion host.

Similar to SSH tunnels, Port Forwarding allows you to forward traffic between your laptop and open ports on your instance. Once port forwarding is configured, you can connect to the local port and access the server application running inside the instance. Use of Systems Manager Session Manager Port Forwarding is controlled through IAM policies on API access and on the Port Forwarding SSM document. These are two different places where you can control who in your organisation is authorised to create tunnels.

To experiment with Port Forwarding today, you can use this CDK script to deploy a VPC with private and public subnets, and a single instance running a web server in the private subnet. The drawing below illustrates the infrastructure that I am using for this blog post.

The instance is private: it has neither a public IP address nor a public DNS name. The VPC default security group does not authorise connections over SSH. The Systems Manager Agent, running on your EC2 instance, must be able to communicate with the Systems Manager service endpoint. The private subnet must therefore have a route to a NAT Gateway, or you must configure an AWS PrivateLink endpoint to do so.

Let’s use Systems Manager Session Manager Port Forwarding to access the web server running on this private instance.

Before doing so, you must ensure the following prerequisites are met on the EC2 instance:

  • The Systems Manager Agent must be installed and running (version 2.3.672.0 or more recent; see the instructions for Linux or Windows). The agent is installed and started by default on the Amazon Linux 1 & 2, Windows, and Ubuntu AMIs provided by Amazon (see this page for the exact versions); no action is required when you are using these. You can verify the agent status with the command shown after this list.
  • The EC2 instance must have an IAM role with permission to invoke the Systems Manager API. For this example, I am using AmazonSSMManagedInstanceCore.
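
A quick way to verify both prerequisites is to ask Systems Manager about the managed instance (a sketch; replace the instance ID):

# shows the agent version and ping status when the instance is properly registered
aws ssm describe-instance-information \
    --filters "Key=InstanceIds,Values=<instance id>"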

On your laptop, you must:

  • have the AWS Command Line Interface (CLI) installed and configured with credentials that are authorised to call the Systems Manager API, and
  • install the Session Manager plugin for the AWS CLI.

Once the prerequisites are met, you use the AWS Command Line Interface (CLI) to create the tunnel (assuming you created the instance using the CDK script mentioned above):

# find the instance ID based on Tag Name
INSTANCE_ID=$(aws ec2 describe-instances \
               --filter "Name=tag:Name,Values=CodeStack/NewsBlogInstance" \
               --query "Reservations[].Instances[?State.Name == 'running'].InstanceId[]" \
               --output text)
# create the port forwarding tunnel
aws ssm start-session --target $INSTANCE_ID \
                       --document-name AWS-StartPortForwardingSession \
                       --parameters '{"portNumber":["80"],"localPortNumber":["9999"]}'

Starting session with SessionId: sst-00xxx63
Port 9999 opened for sessionId sst-00xxx63
Connection accepted for session sst-00xxx63.

You can now point your browser to http://localhost:9999 and access your private web server. Type Ctrl-C to terminate the port forwarding session.
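
You can also test the tunnel from a second terminal on your laptop:

# the request travels through the Session Manager tunnel to port 80 on the instance
curl -I http://localhost:9999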

The Session Manager Port Forwarding creates a tunnel similar to SSH tunneling, as illustrated below.

Port Forwarding works for Windows and Linux instances. It is available in every public AWS Region today, at no additional cost when connecting to EC2 instances; you will be charged for the outgoing bandwidth from the NAT Gateway or your VPC PrivateLink endpoint.

— seb

New – Trigger a Kernel Panic to Diagnose Unresponsive EC2 Instances

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-trigger-a-kernel-panic-to-diagnose-unresponsive-ec2-instances/

When I was working on systems deployed in on-premises data centers, it sometimes happened that I had to debug an unresponsive server. It usually involved asking someone to physically press a non-maskable interrupt (NMI) button on the frozen server or to send a signal to a command controller over a serial interface (yes, serial, as in RS-232). This command triggered the system to dump the state of the frozen kernel to a file for further analysis. Such a file is usually called a core dump or a crash dump. The crash dump includes an image of the memory of the crashed process, the system registers, program counter, and other information useful in determining the root cause of the freeze.

Today, we are announcing a new Amazon Elastic Compute Cloud (EC2) API allowing you to remotely trigger the generation of a kernel panic on EC2 instances. The EC2:SendDiagnosticInterrupt API sends a diagnostic interrupt, similar to pressing a NMI button on a physical machine, to a running EC2 instance. It causes the instance’s hypervisor to send a non-maskable interrupt (NMI) to the operating system. The behaviour of your operating system when an NMI interrupt is received depends on its configuration. Typically, it involves entering a kernel panic. The kernel panic behaviour also depends on the operating system configuration: it might trigger the generation of a crash dump data file, obtain a backtrace, load a replacement kernel, or restart the system.

You can control who in your organisation is authorized to use that API through IAM policies; I will give an example below.

Cloud and system engineers, or specialists in kernel diagnosis and debugging, find invaluable information in the crash dump to analyse the causes of a kernel freeze. Tools like WinDbg (on Windows) and crash (on Linux) can be used to inspect the dump.

Using Diagnostic Interrupt
Using this API is a three-step process. First, you need to configure the behaviour of your OS when it receives the interrupt.

By default, our Windows Server AMIs have memory dump already turned on. Automatic restart after the memory dump has been saved is also selected. The default location for the memory dump file is %SystemRoot% which is equivalent to C:\Windows.

You can access these options by going to :
Start > Control Panel > System > Advanced System Settings > Startup and Recovery

On Amazon Linux 2, you need to install and configure kdump & kexec. This is a one-time setup.

$ sudo yum install kexec-tools

Then edit the file /etc/default/grub to allocate the amount of memory to be reserved for the crash kernel. In this example, we reserve 160M by adding crashkernel=160M. The amount of memory to allocate depends on your instance’s memory size. The general recommendation is to test kdump to see if the allocated memory is sufficient. The kernel doc has the full syntax of the crashkernel kernel parameter.

GRUB_CMDLINE_LINUX_DEFAULT="crashkernel=160M console=tty0 console=ttyS0,115200n8 net.ifnames=0 biosdevname=0 nvme_core.io_timeout=4294967295 rd.emergency=poweroff rd.shell=0"

And rebuild the grub configuration:

$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg

Finally, edit /etc/sysctl.conf and add a line: kernel.unknown_nmi_panic=1. This tells the kernel to trigger a kernel panic upon receiving the interrupt.
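
From a shell, one way to apply this setting looks like the following (a sketch):

# append the setting to /etc/sysctl.conf and load it
$ echo "kernel.unknown_nmi_panic=1" | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p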

You are now ready to reboot your instance. Be sure to include these commands in your user data script or in your AMI to automatically configure this on all your instances. Once the instance is rebooted, verify that kdump is correctly started.

$ systemctl status kdump.service
● kdump.service - Crash recovery kernel arming
   Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; vendor preset: enabled)
   Active: active (exited) since Fri 2019-07-05 15:09:04 UTC; 3h 13min ago
  Process: 2494 ExecStart=/usr/bin/kdumpctl start (code=exited, status=0/SUCCESS)
 Main PID: 2494 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/kdump.service

Jul 05 15:09:02 ip-172-31-15-244.ec2.internal systemd[1]: Starting Crash recovery kernel arming...
Jul 05 15:09:04 ip-172-31-15-244.ec2.internal kdumpctl[2494]: kexec: loaded kdump kernel
Jul 05 15:09:04 ip-172-31-15-244.ec2.internal kdumpctl[2494]: Starting kdump: [OK]
Jul 05 15:09:04 ip-172-31-15-244.ec2.internal systemd[1]: Started Crash recovery kernel arming.

Our documentation contains the instructions for other operating systems.

Once this one-time configuration is done, you’re ready for the second step: triggering the API. You can do this from any machine where the AWS CLI or SDK is configured. For example:

$ aws ec2 send-diagnostic-interrupt --region us-east-1 --instance-id <value>

There is no return value from the CLI; this is expected. If you have a terminal session open on that instance, it disconnects and your instance reboots. Once you reconnect to your instance, you find the crash dump in /var/crash.

The third and last step is to analyse the content of the crash dump. On Linux systems, you need to install the crash utility and the debugging symbols for your version of the kernel. Note that the kernel version should be the same as the one that was captured by kdump. To find out which kernel you are currently running, use the uname -r command.

$ sudo yum install crash
$ sudo debuginfo-install kernel
$ sudo crash /usr/lib/debug/lib/modules/4.14.128-112.105.amzn2.x86_64/vmlinux /var/crash/127.0.0.1-2019-07-05-15\:08\:43/vmcore

crash 7.2.6-1.amzn2.0.1
... output suppressed for brevity ...

      KERNEL: /usr/lib/debug/lib/modules/4.14.128-112.105.amzn2.x86_64/vmlinux
    DUMPFILE: /var/crash/127.0.0.1-2019-07-05-15:08:43/vmcore  [PARTIAL DUMP]
        CPUS: 2
        DATE: Fri Jul  5 15:08:38 2019
      UPTIME: 00:07:23
LOAD AVERAGE: 0.00, 0.00, 0.00
       TASKS: 104
    NODENAME: ip-172-31-15-244.ec2.internal
     RELEASE: 4.14.128-112.105.amzn2.x86_64
     VERSION: #1 SMP Wed Jun 19 16:53:40 UTC 2019
     MACHINE: x86_64  (2500 Mhz)
      MEMORY: 7.9 GB
       PANIC: "Kernel panic - not syncing: NMI: Not continuing"
         PID: 0
     COMMAND: "swapper/0"
        TASK: ffffffff82013480  (1 of 2)  [THREAD_INFO: ffffffff82013480]
         CPU: 0
       STATE: TASK_RUNNING (PANIC)

Collecting kernel crash dumps is often the only way to collect kernel debugging information; be sure to test this procedure frequently, in particular after updating your operating system or when you create new AMIs.

Control Who Is Authorized to Send Diagnostic Interrupt
You can control who in your organisation is authorized to send the Diagnostic Interrupt, and to which instances, through IAM policies with resource-level permissions, like in the example below.

{
   "Version": "2012-10-17",
   "Statement": [
      {
      "Effect": "Allow",
      "Action": "ec2:SendDiagnosticInterrupt",
      "Resource": "arn:aws:ec2:region:account-id:instance/instance-id"
      }
   ]
}

Pricing
There are no additional charges for using this feature. However, as your instance continues to be in a ‘running’ state after it receives the diagnostic interrupt, instance billing will continue as usual.

Availability
You can send Diagnostic Interrupts to all EC2 instances powered by the AWS Nitro System, except A1 (Arm-based). This is C5, C5d, C5n, i3.metal, I3en, M5, M5a, M5ad, M5d, p3dn.24xlarge, R5, R5a, R5ad, R5d, T3, T3a, and Z1d as I write this.

The Diagnostic Interrupt API is now available in all public AWS Regions and AWS GovCloud (US); you can start using it today.

— seb