Tag Archives: AWS Marketplace

New – Multiple Private Marketplace Catalogs

Post Syndicated from Martin Beeby original https://aws.amazon.com/blogs/aws/new-multiple-private-marketplace-catalogs/

We launched AWS Marketplace in 2014; it allows customers to find, buy, and immediately start using cloud-based applications developed by independent software vendors (ISVs). In 2018, we added Private Marketplace, which lets you curate a list of approved products your users can purchase from AWS Marketplace. Today we are adding a new feature so that you can create multiple Private Marketplace catalogs within your AWS Organization. Each Private Marketplace can contain a different set of products to provide a tailored experience for particular groups of accounts.

Customers with diverse sets of users need to scale their governance to meet business needs. For example, a large enterprise with subsidiaries in different industries has varying software needs and policies for each subsidiary. IT administrators struggle with scaling their procurement process to address these differing needs across their organization and often revert to one procurement policy to govern the entire organization. Where exceptions are needed, companies depend on time-consuming, manual processes to ensure the correct products are approved and procured. As a result, customers struggle to scale their procurement and governance process to meet their organization's speed and agility demands.

After listening to customers explain these sorts of issues, we set about developing a solution to help, and the concept of multiple Private Marketplace catalogs was born. We call these Private Marketplace experiences. They represent the experience that your users will see when they browse their AWS Marketplace. Each experience is a collection of approved AWS Marketplace products that you, as an administrator, select and curate. You can add any of the 8000-plus products from AWS Marketplace to an experience and customize the look and feel by changing the Private Marketplace logo, wording, and color.

In the Private Marketplace admin portal, within the AWS Marketplace interface, you can create an Account group, which is essentially a way of grouping together a number of AWS accounts. You can then associate an experience with an Account group, enabling you to give different users distinct experiences. This association allows you to govern the use of third-party software subscriptions and ensure they adhere to your internal procurement policies.

To give you a flavor of this feature, I will create a new Private Marketplace experience.

How to Create a Private Marketplace Experience
I head over to the Private Marketplace interface and select the Experiences link; this lists the default experience called Private Marketplace Experience. I then press the Create experience button.

I give the experience a name and a description and click the Create experience button.

If I drill into the details for my newly created experience, I can then start to add products to my catalog. I use the search in the All AWS Marketplace products section and look for Tableau. I find the Tableau Server product and select it. I then click the Add button which adds the product to my catalog.

Screenshot, adding a product

To associate this experience with a set of AWS accounts, I click on the Create association button; this allows me to create a new Account group.

I give my account group the title Business Intelligence Group and add a description. I then associate the AWS accounts that I want to be part of this group. Finally, I associate the group with the Business Intelligence Team experience I created earlier and click Create account group.

I now go to the settings for the experience and change the switch so that the experience is now live. On this screen I can also customize the look and feel of the Private Marketplace by uploading a logo, changing the color, and adding custom wording.

Now, if I log in and browse the AWS Marketplace using an account that is in the Account group I just created, I will see the Private Marketplace experience rather than the regular AWS Marketplace.

Available now
The ability to create multiple Private Marketplace catalogs is available now in all regions that support AWS Marketplace, and you can start using the feature directly from the Private Marketplace interface. Additionally, you can use the accompanying set of APIs so that you can integrate with existing approval or ticketing systems to simplify management across catalogs.
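
If you want to script this curation, the AWS Marketplace Catalog API is the integration point. The following is a minimal boto3 sketch that lists Private Marketplace experiences and submits a change set to approve a product for one of them. The entity type, change type, Details payload, and identifiers shown here (Experience, AllowProductProcurement, the experience and product IDs) are illustrative assumptions; check the Catalog API reference for the exact values your catalog expects.

# Minimal sketch: approving a product for a Private Marketplace experience
# with the AWS Marketplace Catalog API. Entity and change type names below
# are placeholders to verify against the Catalog API documentation.
import json
import boto3

catalog = boto3.client("marketplace-catalog", region_name="us-east-1")

# List the Private Marketplace experiences in this account (entity type is an assumption).
experiences = catalog.list_entities(Catalog="AWSMarketplace", EntityType="Experience")
for entity in experiences["EntitySummaryList"]:
    print(entity["EntityId"], entity.get("Name"))

# Submit a change set that approves a product for a given experience.
response = catalog.start_change_set(
    Catalog="AWSMarketplace",
    ChangeSet=[
        {
            "ChangeType": "AllowProductProcurement",  # placeholder change type
            "Entity": {"Type": "Experience@1.0", "Identifier": "exp-1234567890abc"},
            "Details": json.dumps({"Products": [{"Ids": ["prod-1234567890abc"]}]}),
        }
    ],
)

# Change sets are asynchronous; poll the status until the change is applied.
status = catalog.describe_change_set(
    Catalog="AWSMarketplace", ChangeSetId=response["ChangeSetId"]
)
print(status["Status"])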

Visit the Private Marketplace page to get started and learn more. Happy curating.

— Martin

Developing, testing, and deploying custom connectors for your data stores with AWS Glue

Post Syndicated from Bo Li original https://aws.amazon.com/blogs/big-data/developing-testing-and-deploying-custom-connectors-for-your-data-stores-with-aws-glue/

AWS Glue is a serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning, and application development. AWS Glue already integrates with various popular data stores such as Amazon Redshift, Amazon RDS, MongoDB, and Amazon S3. Organizations continue to evolve and use a variety of data stores that best fit their applications and data requirements. We recently announced general availability of AWS Glue custom connectors, which make it easy to discover and integrate with a variety of additional data sources, such as SaaS applications and your custom data sources. With just a few clicks, you can search and select connectors from AWS Marketplace and begin your data preparation workflow in minutes. We are also releasing a new framework to develop, validate, and deploy your own custom connectors (bring your own connectors / BYOC).

In this blog post, we go over three key aspects of AWS Glue custom connectors. First, we introduce the two mechanisms for plugging in a custom connector: subscribing to one from AWS Marketplace or bringing your own connector into Glue Spark jobs. Second, we describe the three interfaces, based on Apache Spark DataSource, Amazon Athena Federated Query, and JDBC, that you can use to develop a custom connector with the released Glue Spark runtime. Finally, we go deeper into the development process and describe how the Glue Spark runtime interfaces simplify data integration by offering powerful features that are built in for Glue custom connectors. These features include job bookmarks for incremental loads of your data, at-source data filtering with SQL queries, partitioned execution for data parallelism, data type mapping, advanced Spark and built-in AWS Glue data transformations, integration with AWS Secrets Manager to securely store authentication credentials, and the AWS Glue Data Catalog for storing connections and table metadata. Glue custom connectors are also supported with AWS Glue Studio, which enables visual authoring of your data integration jobs.

This post introduces two mechanisms to use custom connectors with AWS Glue Spark runtime and AWS Glue Studio console. First, we go over the user experience for seamless discovery and subscription to custom connectors developed by AWS Glue partners that are hosted on AWS Marketplace. Next, we go deeper into the five simple steps to develop and test your own connectors with AWS Glue Spark runtime, and deploy them into your production Apache Spark applications for ETL and analytics workloads that run on AWS Glue.

AWS Glue custom connectors: AWS Marketplace and BYOC

You can use an AWS Glue connector available on AWS Marketplace or bring your own connector (BYOC) and plug it into AWS Glue Spark runtime. This is in addition to the native connectors available with AWS Glue.

Connectors available on AWS Marketplace

As we make AWS Glue custom connectors generally available today, we have an ecosystem of Glue connectors listed on AWS Marketplace available from different AWS Glue partners, big data architects, and third-party developers. The following posts go into more detail on using some of these connectors for different use cases with AWS Glue:

BYOC connector example

Customers and application developers also need a method to develop connectors for custom data stores. The next section describes the end-to-end process to develop and test a custom connector using the AWS Glue Spark runtime library and interfaces locally.

After testing and validating, you can package and deploy the custom connector using the BYOC workflow in AWS Glue Studio. For instructions on deploying and using the Snowflake connector with AWS Glue jobs as a BYOC custom connector, see Performing data transformations using Snowflake and AWS Glue.

AWS Glue Spark runtime connector interfaces

AWS Glue Spark runtime offers three interfaces to plug in custom connectors built for existing frameworks: the Spark DataSource API, Amazon Athena Data Source API, or Java JDBC API. The following code snippets show how you can plug in these connectors into AWS Glue Spark runtime without any changes.

For connectors subscribed from AWS Marketplace, use the following code:

Datasource = glueContext.create_dynamic_frame.from_options(connection_type = "marketplace.spark|athena|jdbc", connection_options = {"dbTable":"Account","connectionName":"my-marketplace-connection"}, transformation_ctx = "DataSource0")

For custom connectors developed and deployed with AWS Glue, use the following code:

Datasource = glueContext.create_dynamic_frame.from_options(connection_type = "custom.spark|athena|jdbc", connection_options = {"dbTable":"Account","connectionName":"my-custom-connection"}, transformation_ctx = "DataSource0")

The following table summarizes the interfaces you need to implement for connectivity with AWS Glue Spark runtime using the Spark DataSource API.

Interfaces Description
DataSourceV2 The base interface for Spark DataSource v2 API.
ReadSupport A mix-in interface for DataSourceV2 for the connector to provide data reading ability and scan the data from the data source.
DataSourceReader A data source reader that is created by ReadSupport to scan the data from this data source. It also supports reading actual schema and generating a list of InputPartition for parallel reads from Spark executors.
InputPartition Each InputPartition is responsible for creating a data reader to read data into one RDD partition. InputPartitions are serialized and sent to executors, then the reader is created on executors to do the actual reading.
InputPartitionReader Responsible for reading data into an RDD partition.

The following table summarizes the interfaces you need to implement for connectivity with AWS Glue Spark runtime using the Athena Data Source API.

Interfaces Description
MetadataHandler
doGetSplits Splits up the reads required to scan the requested batch of partitions.
doListSchemaNames Gets the list of schemas (databases) that this source contains.
doGetTable Gets a definition (such as field names, types, and descriptions) of a table.
doListTables Gets the list of tables that this source contains.
getPartitions Gets the partitions that must be read from the request table.
RecordHandler
doReadRecords Reads the row data associated with the provided split.

The following diagram shows the class structure for the three interfaces and their execution on Spark drivers to read metadata and Spark executors to read data from the underlying datasource. The classes shown in pink are the ones that need to be implemented as part of the connector. Classes shown in green are already implemented as part of the Glue Spark runtime.

Steps to develop a custom connector

In the following sections, we describe how to develop, test, and validate an AWS Glue custom connector. We also show how to deploy the connectors to AWS Glue jobs using the AWS Glue Studio console.

Implementing the solution includes the following five high-level steps:

  1. Download and install AWS Glue Spark runtime, and review sample connectors.
  2. Develop using the required connector interface.
  3. Build, test, and validate your connector locally.
  4. Package and deploy the connector on AWS Glue.
  5. Use AWS Glue Studio to author a Spark application with the connector.

Downloading and installing AWS Glue Spark runtime and reviewing sample connectors

The first step to developing a connector is to install the AWS Glue Spark runtime from Maven and review the AWS Glue sample connectors in the AWS Glue GitHub repository.

Developing and testing using the required connector interface

As discussed earlier, you can develop AWS Glue custom connectors with one of the following interfaces:

  • Spark DataSource
  • Athena Federated Query
  • JDBC

In this section, we walk you through each interface.

Spark DataSource interface

We use a simple example to illustrate the development of an AWS Glue custom connector with the Spark DataSource interface. You can also find intermediate and complex examples for developing connectors with more functionality for different data sources.

This solution implements a DataSourceReader that returns predefined data as InputPartitions stored in-memory with a given schema. The following interfaces need to be implemented for DataSourceReader. The DataSourceReader implementation runs on the Spark driver and plans the execution of Spark executors reading the data in InputPartitions:

class Reader implements DataSourceReader {
        public StructType readSchema() { ... }
        
        public List<InputPartition<InternalRow>> planInputPartitions() { ... }
}

The InputPartitions are read in parallel by different Spark executors using the InputPartitionReader implementation, which returns the records in Spark's InternalRow format. The InputPartitionReader is essentially implemented to return an iterator of the records scanned from the underlying data store. Refer to the following code:

class SimpleInputPartitionReader implements InputPartitionReader<InternalRow> {
    public boolean next() { ... }

    public InternalRow get() { ... }

    public void close() throws IOException { ... }
}

The second connector example shows how to use an Amazon S3 client to read the data in CSV format from an S3 bucket and path supplied as reader options. The third connector example shows how to use a JDBC driver to read data from a MySQL source. It also shows how to push down a SQL query to filter records at source and authenticate with the user name and password supplied as reader options.

You can plug connectors based on the Spark DataSource API into AWS Glue Spark runtime as follows. You need to supply the connection_type as custom.spark and an AWS Glue Data Catalog connection containing the reader options, such as user name and password. AWS Glue Spark runtime automatically converts the data source into a Glue DynamicFrame. The following code is an example of plugging in the Elasticsearch Spark connector:

Datasource = glueContext.create_dynamic_frame.from_options(connection_type = "custom.spark", connection_options = {"path": "test", "es.nodes": "https://search-glue-etl-job-xxx.us-east-1.es.amazonaws.com","es.net.http.auth.user": "user","es.net.http.auth.pass": "pwd","es.port": "443","es.nodes.wan.only": "true" ,"connectionName":"my-custom-es-connection"}, transformation_ctx = "DataSource0")

AWS Glue Studio provides a visual ETL console that can also auto-generate the preceding code to construct a DynamicFrame for a deployed Spark connector (as described later in this post).

Athena Federated Query interface

AWS Glue Spark runtime also supports connectors developed with the Athena connector interface for federated queries. Similar to the Spark DataSource API, it requires implementation of two key handler interfaces: MetadataHandler and RecordHandler.

The MetadataHandler implementation runs on the Spark driver and contains the functions required to compute the schema, tables, and table partitions, and plan the actual scan by splitting the reads of individual partitions into different splits. See the following code:

public class MyMetadataHandler extends MetadataHandler{
       ListSchemasResponse doListSchemaNames(BlockAllocator allocator, ListSchemasRequest request) { … }

       ListTablesResponse doListTables(BlockAllocator allocator, ListTablesRequest request) { … }

       GetTableResponse doGetTable(BlockAllocator allocator, GetTableRequest request) { … }

       void getPartitions(BlockWriter blockWriter, GetTableLayoutRequest request, QueryStatusChecker queryStatusChecker) { … }

       GetSplitsResponse doGetSplits(BlockAllocator allocator, GetSplitsRequest request) { … }
}

The RecordHandler implements the reader to scan the data from the underlying data store associated with the split contained in the ReadRecordsRequest structure.

AWS Glue custom connectors use the Athena RecordHandler interface, but they do not need the BlockSpiller implementation or AWS Lambda to read the data from the underlying data store. Instead, the implementation runs directly inline within each Spark executor to return the records as Apache Arrow column vector batches. Refer to the following code:

public class MyRecordHandler extends RecordHandler{

void readWithConstraint(ConstraintEvaluator constraints, BlockSpiller spiller, ReadRecordsRequest recordsRequest, QueryStatusChecker queryStatusChecker){…}
}

AWS Glue Spark runtime can convert records returned by plugging in an Athena connector to an AWS Glue DynamicFrame as follows:

Datasource = glueContext.create_dynamic_frame.from_options(connection_type = "custom.athena", connection_options = {"tableName":"table","schemaName":"schema","connectionName":"my-custom-athena-connection"}, transformation_ctx = "DataSource0")

JDBC interface

AWS Glue Spark runtime also allows you to plug in any connector compliant with the JDBC interface, and you can pass in any connection option available with the JDBC connector as follows:

Datasource = glueContext.create_dynamic_frame.from_options(connection_type = "custom.jdbc", connection_options = {"dbTable":"Account","connectionName":"my-custom-jdbc-connection"}, transformation_ctx = "DataSource0")

Advanced ETL and analytics with AWS Glue

AWS Glue Spark runtime also provides several features, supported out of the box with custom connectors, to enable advanced extract, transform, and load (ETL) and analytics.

AWS Glue Studio for visual authoring of ETL jobs

Data type mapping

You can type cast the columns while reading them from the underlying data store. For example, a dataTypeMapping of {"INTEGER":"STRING"} casts all integer columns to string while parsing the records and constructing the DynamicFrame, which helps you cast columns to the types of your choice. Refer to the following code:

DataSource0 = glueContext.create_dynamic_frame.from_options(connection_type = "custom.jdbc", connection_options = {"dataTypeMapping":{"INTEGER":"STRING"}, "connectionName":"test-connection-snowflake-jdbc"}, transformation_ctx = "DataSource0")

Partitioning for parallel reads

AWS Glue allows you to read data in parallel from the data store by partitioning it on a column, using the partitionColumn, lowerBound, upperBound, and numPartitions options. This lets you use data parallelism and the multiple Spark executors allocated to the Spark application. Refer to the following code, which reads data from Snowflake using 4 Spark executors in parallel. Data is partitioned across executors uniformly along the id column in the range [0, 200]:

DataSource0 = glueContext.create_dynamic_frame.from_options(connection_type = "custom.jdbc", connection_options = {"upperBound":"200","numPartitions":"4","partitionColumn":"id","lowerBound":"0","connectionName":"my-connection-snowflake"}, transformation_ctx = "DataSource0")

Glue Data Catalog connections

You can encapsulate all your connection properties with Glue Data Catalog connections and supply the connection name as follows. Integration with Glue Data Catalog connections allows you to use the same connection properties across multiple calls in a single Spark application or across different applications. See the following code:

DataSource = glueContext.create_dynamic_frame.from_options(connection_type = "custom.jdbc", connection_options = {"connectionName":"my-connection-snowflake"}, transformation_ctx = "DataSource0")

Secrets Manager for credentials

The Data Catalog connection can also contain a secretId corresponding to a secret stored in AWS Secrets Manager, which can be used to securely gather authentication and credentials information at runtime. For more details on using a secretId on the AWS Glue Studio console, see Adding connectors to AWS Glue Studio. A secretId can also be specified within the ETL script as follows:

DataSource = glueContext.create_dynamic_frame.from_options(connection_type = "custom.jdbc", connection_options = {"connectionName":"my-connection-snowflake", "secretId":"my-secret-id"}, transformation_ctx = "DataSource0")

The secretId can be used to store credentials for the different authentication mechanisms that your connector supports, such as username/password, access keys, and OAuth.

SQL queries at source: Filtering with row predicates and column projections

AWS Glue Spark runtime allows you to push down SQL queries to filter data at the source with row predicates and column projections. This allows you to load filtered data faster from data stores that support pushdown. An example SQL query pushed down to a JDBC data source is SELECT id, name, department FROM department WHERE id < 200. Refer to the following code:

DataSource = glueContext.create_dynamic_frame.from_options(connection_type = "custom.jdbc", connection_options = {"query":"SELECT id, name, department FROM department WHERE id < 200","connectionName":"my-connection-snowflake"}, transformation_ctx = "DataSource0")

Job bookmarks

AWS Glue job bookmarks allow for incremental loading of data from JDBC sources. They keep track of the last processed record from the data store and process new data records in subsequent AWS Glue job runs. Job bookmarks use the primary key as the default bookmark key if it increases or decreases sequentially. Refer to the following code, which uses a transformation_ctx with job bookmarks enabled for the job:

DataSource0 = glueContext.create_dynamic_frame.from_options(connection_type = "custom.jdbc", connection_options = {"connectionName":"test-connection-snowflake-jdbc"}, transformation_ctx = "DataSource0")
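
To show where bookmarks fit in a complete script, here is a minimal sketch of a Glue job that reads through a custom JDBC connector with bookmarks enabled (the job must be created with the --job-bookmark-option job-bookmark-enable parameter); the connection and table names are placeholders reused from the snippet above.

# Minimal Glue job sketch: bookmarks are keyed by the transformation_ctx,
# and job.commit() persists the bookmark state at the end of a successful run.
import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glueContext = GlueContext(SparkContext.getOrCreate())
job = Job(glueContext)
job.init(args["JOB_NAME"], args)   # initializes the bookmark state for this run

DataSource0 = glueContext.create_dynamic_frame.from_options(
    connection_type="custom.jdbc",
    connection_options={"dbTable": "Account",
                        "connectionName": "test-connection-snowflake-jdbc"},
    transformation_ctx="DataSource0")  # bookmark state is tracked per context name

# ... transforms and sinks go here ...

job.commit()  # persists the bookmark so the next run only reads new records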

AWS Glue transformations

AWS Glue offers more than 35 commonly used data transformation operations with DynamicFrames and Spark DataFrames. These transforms allow you to get insights from your data and prepare it for further analytics using hundreds of available Spark SQL functions. These transformations include popular functions for schema manipulation, projecting columns, and performing joins across different data sources; transforming data with map, split, and explode; flattening nested datasets with relationalize and unnest; grouping and aggregating records; and running arbitrary SQL on datasets.
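
As a rough illustration of how these transforms chain together, the following sketch applies a few of them to a DynamicFrame like the DataSource0 read in the preceding snippets; the column names, S3 staging path, and view name are placeholders.

# Sketch: project and rename columns, flatten nested fields, then run Spark SQL.
# Assumes glueContext and DataSource0 from the earlier snippets.
from awsglue.transforms import ApplyMapping

# Project and rename columns, casting types along the way.
mapped = ApplyMapping.apply(
    frame=DataSource0,
    mappings=[("id", "string", "account_id", "string"),
              ("name", "string", "account_name", "string")],
    transformation_ctx="Mapped0")

# Flatten nested structures into a collection of relational tables.
flattened = mapped.relationalize("root", "s3://my-temp-bucket/staging/")
root = flattened.select("root")

# Run arbitrary Spark SQL on the result.
root.toDF().createOrReplaceTempView("accounts")
filtered = glueContext.spark_session.sql(
    "SELECT account_id, account_name FROM accounts WHERE account_id IS NOT NULL")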

VPC support for networking

AWS Glue jobs allow you to securely connect to your data stores within a private VPC subnet. You can also enable a NAT (Network Address Translation) gateway within the VPC to access both VPC resources and the public internet.

Building, testing, and validating your connector locally

After developing the connector for your favorite data store with the interface of your choice, you can follow the instructions to build the connector with Maven by running mvn install. This builds the connector and installs the resulting JAR into your local Maven repository. You can then include this JAR in the classpath of your IDE or of the AWS Glue Spark runtime downloaded from Maven.

After you build and import the JAR, you can test it locally by plugging it into the AWS Glue Spark runtime and writing a validation test. We provide sample validation tests in the AWS Glue GitHub repository. These cover several scenarios for both local testing and validation on the AWS Glue job system. The following table lists the validation tests, the functionality they test, and the associated interfaces.

Test Name Description JDBC Spark Athena
DataSourceTest Tests connector connectivity and reading functionality. x x x
ReadWriteTest Tests reading and writing end-to-end workflow. x x x
CatalogConnectionTest Tests catalog connection integration. x x x
DataSchemaTest Tests data schema from reading with the connector. x x x
SecretsManagerTest Tests Secrets Manager integration. x x
DataSinkTest Tests connector connectivity and writing functionality x x
ColumnPartitioningTest Tests connector column partitioning functionality. x
FilterPredicateTest Tests connector filter predicate functionality. x
JDBCUrlTest Tests connector extra parameters for JDBC Url functionality. x
DbtableQueryTest Tests connector dbTable and query option functionality. x
DataTypeMappingTest Tests connector custom data type mapping functionality. x

Functionality such as AWS Glue job bookmarks that allow incremental loads can be tested on the AWS Glue job system only. We also provide a Python script to run all tests together as a suite on the AWS Glue job system.

Packaging and deploying the connector on AWS Glue

We now discuss how you can package your connector and deploy it on AWS Glue using the BYOC workflow:

  1. Package the custom connector as a JAR and upload the JAR file to an Amazon S3 bucket in your account.
  2. Follow the flow to create a custom connector referencing the JAR in Amazon S3 from AWS Glue Studio.
  3. Instantiate a connection for that connector and create an AWS Glue job using it.

For step-by-step instructions on the BYOC workflow, see Creating custom connectors.
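
As a rough sketch of how step 3 can be scripted, the following boto3 call creates a Glue connection that points at a BYOC connector JAR in Amazon S3. The CUSTOM connection type is used for BYOC connectors, but the property keys and values shown (CONNECTOR_URL, CONNECTOR_TYPE, CONNECTOR_CLASS_NAME, the bucket, and the class name) are illustrative assumptions to verify against the Glue custom connector documentation.

# Hedged sketch: create a Glue connection for a BYOC connector JAR already
# uploaded to S3 (step 1). Property keys and values are placeholders.
import boto3

glue = boto3.client("glue", region_name="us-east-1")

glue.create_connection(
    ConnectionInput={
        "Name": "my-custom-jdbc-connection",
        "ConnectionType": "CUSTOM",
        "ConnectionProperties": {
            "CONNECTOR_TYPE": "Jdbc",
            "CONNECTOR_URL": "s3://my-bucket/connectors/my-connector.jar",
            "CONNECTOR_CLASS_NAME": "com.example.MyJdbcDriver",
            "SECRET_ID": "my-secret-id",
        },
    }
)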

Alternatively, we also provide the scripts and instructions for you to share the connector publicly using AWS Marketplace, either for a price or for free. For instructions on subscribing to the connector, see Subscribing to AWS Marketplace connectors.

Using AWS Glue Studio to author a Spark application

After you create a connection for a BYOC or AWS Marketplace AWS Glue connector, you can follow the instructions to visually author a Spark ETL application with AWS Glue Studio; see Job Authoring with custom connectors. Following are screenshots from AWS Glue Studio:

Connectors on AWS Marketplace

Visually author Glue jobs using connectors with AWS Glue Studio

Step 1 – Select a connector

Step 2 – Visually author the job using the associated connection

Conclusion

You can use two different mechanisms to use custom connectors with AWS Glue Spark runtime and AWS Glue Studio console. In this post, we discussed the user experience for seamless discovery and subscription to custom connectors, and walked you through developing and testing your own connectors with AWS Glue Spark runtime, and deploying them into your production Apache Spark applications for ETL and analytics workloads that run on AWS Glue.

Build a custom connector yourself or try one on AWS Marketplace with AWS Glue Studio.

If you would like to partner with us or add a new AWS Glue connector to AWS Marketplace, please reach out to us at [email protected].


About the Authors

Bo Li is a software engineer on the AWS Glue team, devoted to designing and building end-to-end solutions that address customers' data analytics and processing needs with cloud-based, data-intensive technologies.

Yubo Xu is a Software Development Engineer on the AWS Glue team. His main focus is to improve the stability and efficiency of the Spark runtime for AWS Glue and to make it easier to connect to various data sources. Outside of work, he enjoys reading books and hiking the trails in the Bay Area with his dog, Luffy, a one-year-old Shiba Inu.

Xiaoxi Liu is a software engineer on the AWS Glue team. Her passion is building scalable distributed systems for efficiently managing big data in the cloud, and her areas of focus are distributed systems, big data, and cloud computing.

Mohit Saxena is a Software Development Manager at AWS Glue. His team works on Glue's Spark runtime to enable new customer use cases for efficiently managing data lakes on AWS and to optimize Apache Spark for performance and reliability.

AWS Marketplace Now Offers Professional Services

Post Syndicated from Marcia Villalba original https://aws.amazon.com/blogs/aws/aws-marketplace-now-offers-professional-services/

Now with AWS Marketplace, customers can find and buy not only third-party software but also the professional services needed to support the full lifecycle of those products, including planning, deployment, and support. This simplifies the software supply chain, including tasks like managing provider relationships and procurement processes, and consolidates billing and invoices in one place.

Until today, customers have used AWS Marketplace for buying software and then used a separate process for contracting professional services. Many customers need extra professional services when they purchase third-party software, like premium support, implementation, or training. The additional effort to support different procurement processes impacts customers' project timelines and adds a lot of complexity to the customer's organization.

Last year we announced AWS IQ, a service that helps you engage with AWS Certified third-party experts for AWS project work. This year we want to go one step further and help you find professional services for all those third-party software solutions you currently buy from AWS Marketplace.

For the Buyers
Buyers can now discover professional services from multiple trusted sellers using AWS Marketplace, manage the invoices and payments for software and services together, and reduce procurement time, accelerating the process from months to days.

This new feature allows buyers to choose from a selection of professional services, such as assessments, implementation, premium support, managed services, and training, from consulting partners, managed service providers, and independent software vendors.

To get started finding and buying professional services, first you need to find the right service for you. If you are looking for a professional service associated with a particular piece of software, you can search for the software in AWS Marketplace, and the related professional services will appear in the search results. Use the delivery method filter to narrow the results to professional services.

Screenshot of searching for professional services

After you find the service you are looking for, you can visit the service details page and learn more information about the listing. If you want to buy the service, just click continue.

Screenshot of service page

That will open the request service form where you can connect to the seller and request the service. The seller will receive a notification and then they can contact you to agree on the scope of the work including deliverables, milestones, pricing, payment schedules, and service terms.

Screenshot of request service form

Once you agree with the seller on all the specific details of the contract, the seller sends you a private offer. Now the offer page will show the private offer details instead of a request for service form. You can review the pricing, payment schedule, and contract terms and create the contract.

Screenshot of private offer

The service subscription starts after you review and accept the private offer on AWS Marketplace. You will also receive an invoice from AWS Marketplace, and you can track your subscriptions in the buyer's management console. The purchases of the services are itemized on your AWS invoice, simplifying payments and cost management.

For the Sellers
This new feature of AWS Marketplace enables you, the seller, to grow your business and reach new customers by listing your professional service offerings. You can list professional services offerings as individual products or alongside existing software products in AWS Marketplace using pricing, payment schedule, and service terms that are independent from the software.

In AWS Marketplace you will create your seller page, where all your information as a seller will be displayed to the potential buyers.

Public professional service listings are discoverable by search and visible in your seller profile. You will receive customer requests for each of the services listed. Agree with the customer on the details of the service contract and then send a private offer to them.

Screenshot for creating a professional service

AWS Marketplace will invoice and collect the payments from the customers and distribute the funds to your bank account after the customers pay. AWS Marketplace also offers you seller reports that are updated daily to understand how your business is doing.

Availability
To learn more about buying and selling professional services in AWS Marketplace, visit the AWS Marketplace service page.

Marcia

Managed Entitlements in AWS License Manager Streamlines License Tracking and Distribution for Customers and ISVs

Post Syndicated from Harunobu Kameda original https://aws.amazon.com/blogs/aws/managed-entitlements-for-aws-license-manager-streamlines-license-management-for-customers-and-isvs/

AWS License Manager is a service that helps you easily manage software licenses from vendors such as Microsoft, SAP, Oracle, and IBM across your Amazon Web Services (AWS) and on-premises environments. You can define rules based on your licensing agreements to help prevent license violations, such as using more licenses than are available, or to notify you of breaches. AWS License Manager also offers automated discovery of bring-your-own-license (BYOL) usage that keeps you informed of all software installations and uninstallations across your environment and alerts you of licensing violations.

License Manager can manage licenses purchased in AWS Marketplace, a curated digital catalog where you can easily find, purchase, deploy, and manage third-party software, data, and services to build solutions and run your business. Marketplace lists thousands of software listings from independent software vendors (ISVs) in popular categories such as security, networking, storage, machine learning, business intelligence, database, and DevOps.

Managed entitlements for AWS License Manager
Starting today, you can use managed entitlements, a new feature of AWS License Manager that lets you distribute licenses across your AWS Organizations, automate software deployments quickly and track licenses – all from a single, central account. Previously, each of your users would have to independently accept licensing terms and subscribe through their own individual AWS accounts. As your business grows and scales, this becomes increasingly inefficient.

Customers can use managed entitlements to manage more than 8,000 listings available for purchase from more than 1,600 vendors in AWS Marketplace. Today, AWS License Manager automates license entitlement distribution for a variety of solutions purchased in AWS Marketplace, including Amazon Machine Image (AMI), container, and machine learning products.

How It Works
Managed entitlements provides built-in controls that allow only authorized users and workloads to consume a license within vendor-defined limits. This new license management mechanism also eliminates the need for ISVs to maintain their own licensing systems and conduct costly audits.

overview

Each time a customer purchases licenses from AWS Marketplace or a supported ISV, the license is activated based on AWS IAM credentials, and the details are registered to License Manager.

list of granted license

Administrators distribute licenses to AWS accounts. They can manage a list of grants for each license.

list of grants
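
If you prefer to automate this distribution, the License Manager API exposes it directly. Below is a minimal boto3 sketch that lists received licenses and creates a grant for another account; the license ARN, account ID, grant name, and allowed operations are placeholder assumptions to adapt from your own licenses.

# Hedged sketch: distribute a Marketplace license to a member account as a grant.
import uuid
import boto3

lm = boto3.client("license-manager", region_name="us-east-1")

# Licenses purchased in AWS Marketplace show up as received licenses.
for lic in lm.list_received_licenses()["Licenses"]:
    print(lic["LicenseArn"], lic["ProductName"])

# Grant a member account permission to check out entitlements from a license.
grant = lm.create_grant(
    ClientToken=str(uuid.uuid4()),
    GrantName="bi-team-grant",
    LicenseArn="arn:aws:license-manager::111122223333:license:l-0123456789abcdef",
    Principals=["arn:aws:iam::444455556666:root"],
    HomeRegion="us-east-1",
    AllowedOperations=["CheckoutLicense", "CheckInLicense", "ListPurchasedLicenses"],
)
print(grant["GrantArn"], grant["Status"])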

Benefits for ISVs
AWS License Manager managed entitlements provides several benefits to ISVs by simplifying automatic license creation and distribution as part of their transactional workflow. License entitlements can be distributed to end users with or without AWS accounts. Managed entitlements streamlines upgrades and renewals by removing expensive license audits, and gives customers a self-service tool with built-in license tracking capabilities. There are no fees for this feature.

Managed entitlements provides the ability to distribute licenses to end users who do not have AWS accounts. In conjunction with AWS License Manager, ISVs create a unique long-term token to identify the customer. The token is generated and shared with the customer. When the software is launched, the customer enters the token to activate the license. The software exchanges the long-term customer token for a short-term token that is passed to the API, and the license activation is complete. For on-premises workloads that are not connected to the internet, ISVs can generate a host-specific license file that customers can use to run the software on that host.
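
A hedged sketch of that token flow with boto3 might look like the following, with CreateToken on the ISV side and GetAccessToken on the customer side; the license ARN, expiry, and the final CheckoutLicense wiring are assumptions to confirm against the License Manager documentation.

# Hedged sketch of the token flow described above; all identifiers are placeholders.
import uuid
import boto3

lm = boto3.client("license-manager", region_name="us-east-1")

# ISV side: generate a long-lived token tied to the customer's license.
token = lm.create_token(
    LicenseArn="arn:aws:license-manager::111122223333:license:l-0123456789abcdef",
    ExpirationInDays=365,
    ClientToken=str(uuid.uuid4()),
)
long_term_token = token["Token"]          # shared with the customer

# Customer software: exchange the long-term token for a short-term access token,
# which is then used to check out entitlements with CheckoutLicense (see docs).
access = lm.get_access_token(Token=long_term_token)
short_term_token = access["AccessToken"]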

Now Available
This new enhancement to AWS License Manager is available today in US East (N. Virginia), US West (Oregon), and Europe (Ireland), with other AWS Regions coming soon.

Licenses purchased on AWS Marketplace are automatically created in AWS License Manager, and no special steps are required to use managed entitlements. For more details about the new feature, see the managed entitlements pages on AWS Marketplace and the documentation. For ISVs to use this new feature, please visit our getting started guide.

Get started with AWS License Manager and the new managed entitlements feature today.

– Kame

Introducing AWS Gateway Load Balancer – Easy Deployment, Scalability, and High Availability for Partner Appliances

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/introducing-aws-gateway-load-balancer-easy-deployment-scalability-and-high-availability-for-partner-appliances/

Last year, we launched Virtual Private Cloud (VPC) Ingress Routing to allow routing of all incoming and outgoing traffic to/from an Internet Gateway (IGW) or Virtual Private Gateway (VGW) to the Elastic Network Interface of a specific Amazon Elastic Compute Cloud (EC2) instance. With VPC Ingress Routing, you can now configure your VPC to send all […]

New – SaaS Contract Upgrades and Renewals for AWS Marketplace

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-saas-contract-upgrades-and-renewals-for-aws-marketplace/

AWS Marketplace currently contains over 7,500 listings from 1,500 independent software vendors (ISVs). You can browse the digital catalog to find, test, buy, and deploy software that runs on AWS:

Each ISV sets the pricing model and prices for their software. There are a variety of options available, including free trials, hourly or usage-based pricing, monthly and annual AMI pricing, and up-front pricing for 1-, 2-, and 3-year contracts. These options give each ISV the flexibility to define the models that work best for their customers. If their offering is delivered via a Software as a Service (SaaS) contract model, the seller can define the usage categories, dimensions, and contract length.

Upgrades & Renewals
AWS customers that make use of the SaaS and usage-based products that they find in AWS Marketplace generally start with a small commitment and then want to upgrade or renew them early as their workloads expand.

Today we are making the process of upgrading and renewing these contracts easier than ever before. While the initial contract is still in effect, buyers can communicate with sellers to negotiate a new Private Offer that best meets their needs. The offer can include additional entitlements to use the product, pricing discounts, a payment schedule, a revised contract end-date, and changes to the end-user license agreement (EULA), all in accord with the needs of a specific buyer.

Once the buyer accepts the offer, the new terms go into effect immediately. This new, streamlined process means that sellers no longer need to track parallel (paper and digital) contracts, and also ensures that buyers receive continuous service.

Let’s say I am already using a product from AWS Marketplace and negotiate an extended contract end-date with the seller. The seller creates a Private Offer for me and sends me a link that I follow in order to find & review it:

I select the Upgrade offer, and I can see I have a new contract end date, the number of dimensions on my upgrade contract, and the payment schedule. I click Upgrade current contract to proceed:

I confirm my intent:

And I am good to go:

This feature is available to all buyers & SaaS sellers, and applies to SaaS contracts and contracts with consumption pricing.

Jeff;

New – VPC Ingress Routing – Simplifying Integration of Third-Party Appliances

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-vpc-ingress-routing-simplifying-integration-of-third-party-appliances/

When I was delivering the Architecting on AWS class, customers often asked me how to configure an Amazon Virtual Private Cloud to enforce the same network security policies in the cloud as they have on-premises. For example, to scan all ingress traffic with an Intrusion Detection System (IDS) appliance or to use the same firewall in the cloud as on-premises. Until today, the only answer I could provide was to route all traffic back from their VPC to an on-premises appliance or firewall in order to inspect the traffic with their usual networking gear before routing it back to the cloud. This is obviously not an ideal configuration; it adds latency and complexity.

Today, we announce new VPC networking routing primitives that allow you to route all incoming and outgoing traffic to/from an Internet Gateway (IGW) or Virtual Private Gateway (VGW) to a specific Amazon Elastic Compute Cloud (EC2) instance's Elastic Network Interface. This means you can now configure your Virtual Private Cloud to send all traffic to an EC2 instance before the traffic reaches your business workloads. The instance typically runs network security tools to inspect or block suspicious network traffic (such as an IDS/IPS or firewall) or to perform any other network traffic inspection before relaying the traffic to other EC2 instances.

How Does it Work?
To learn how it works, I wrote this CDK script to create a VPC with two public subnets: one subnet for the appliance and one subnet for a business application. The script launches two EC2 instances with public IP addresses, one in each subnet. The script creates the below architecture:

This is a regular VPC; the subnets have routing tables to the Internet Gateway and the traffic flows in and out as expected. The application instance hosts a static web site that is accessible from any browser. You can retrieve the application public DNS name from the EC2 Console (for your convenience, I also included the CLI version in the comments of the CDK script).

AWS_REGION=us-west-2
APPLICATION_IP=$(aws ec2 describe-instances                           \
                     --region $AWS_REGION                             \
                     --query "Reservations[].Instances[] | [?Tags[?Key=='Name' && Value=='application']].NetworkInterfaces[].Association.PublicDnsName"  \
                     --output text)
				   
curl -I $APPLICATION_IP

Configure Routing
To configure routing, you need to know the VPC ID, the ENI ID of the ENI attached to the appliance instance, and the Internet Gateway ID. Assuming you created the infrastructure using the CDK script I provided, here are the commands I use to find these three IDs (be sure to adjust to the AWS Region you use):

AWS_REGION=us-west-2
VPC_ID=$(aws cloudformation describe-stacks                              \
             --region $AWS_REGION                                        \
             --stack-name VpcIngressRoutingStack                         \
             --query "Stacks[].Outputs[?OutputKey=='VPCID'].OutputValue" \
             --output text)

ENI_ID=$(aws ec2 describe-instances                                       \
             --region $AWS_REGION                                         \
             --query "Reservations[].Instances[] | [?Tags[?Key=='Name' &&  Value=='appliance']].NetworkInterfaces[].NetworkInterfaceId" \
             --output text)

IGW_ID=$(aws ec2 describe-internet-gateways                               \
             --region $AWS_REGION                                         \
             --query "InternetGateways[] | [?Attachments[?VpcId=='${VPC_ID}']].InternetGatewayId" \
             --output text)

To route all incoming traffic through my appliance, I create a routing table for the Internet Gateway and I attach a rule to direct all traffic to the EC2 instance Elastic Network Interface (ENI):

# create a new routing table for the Internet Gateway
ROUTE_TABLE_ID=$(aws ec2 create-route-table                      \
                     --region $AWS_REGION                        \
                     --vpc-id $VPC_ID                            \
                     --query "RouteTable.RouteTableId"           \
                     --output text)

# create a route for 10.0.1.0/24 pointing to the appliance ENI
aws ec2 create-route                             \
    --region $AWS_REGION                         \
    --route-table-id $ROUTE_TABLE_ID             \
    --destination-cidr-block 10.0.1.0/24         \
    --network-interface-id $ENI_ID

# associate the routing table to the Internet Gateway
aws ec2 associate-route-table                      \
    --region $AWS_REGION                           \
    --route-table-id $ROUTE_TABLE_ID               \
    --gateway-id $IGW_ID

Alternatively, I can use the VPC Console under the new Edge Associations tab.

To route all application outgoing traffic through the appliance, I replace the default route for the application subnet to point to the appliance’s ENI:

SUBNET_ID=$(aws ec2 describe-instances                                  \
                --region $AWS_REGION                                    \
                --query "Reservations[].Instances[] | [?Tags[?Key=='Name' && Value=='application']].NetworkInterfaces[].SubnetId"    \
                --output text)
ROUTING_TABLE=$(aws ec2 describe-route-tables                           \
                    --region $AWS_REGION                                \
                    --query "RouteTables[?VpcId=='${VPC_ID}'] | [?Associations[?SubnetId=='${SUBNET_ID}']].RouteTableId" \
                    --output text)

# delete the existing default route (the one pointing to the internet gateway)
aws ec2 delete-route                       \
    --region $AWS_REGION                   \
    --route-table-id $ROUTING_TABLE        \
    --destination-cidr-block 0.0.0.0/0
	
# create a default route pointing to the appliance's ENI
aws ec2 create-route                          \
    --region $AWS_REGION                      \
    --route-table-id $ROUTING_TABLE           \
    --destination-cidr-block 0.0.0.0/0        \
    --network-interface-id $ENI_ID
	
aws ec2 associate-route-table       \
    --region $AWS_REGION            \
    --route-table-id $ROUTING_TABLE \
    --subnet-id $SUBNET_ID

Alternatively, I can use the VPC Console. Within the correct routing table, I select the Routes tab and click Edit routes to replace the default route (the one pointing to 0.0.0.0/0) to target the appliance’s ENI.

Now I have the routing configuration in place. The new routing looks like:

Configure the Appliance Instance
Finally, I configure the appliance instance to forward all traffic it receives. Your software appliance usually does that for you; no extra step is required when you use AWS Marketplace appliances. When using a plain Linux instance, two extra steps are required:

1. Connect to the EC2 appliance instance and configure IP traffic forwarding in the kernel:

APPLIANCE_ID=$(aws ec2 describe-instances  \
                   --region $AWS_REGION    \
                   --query "Reservations[].Instances[] | [?Tags[?Key=='Name' && Value=='appliance']].InstanceId" \
                   --output text)
aws ssm start-session --region $AWS_REGION --target $APPLIANCE_ID	

##
## once connected (you see the 'sh-4.2$' prompt), type:
##

sudo sysctl -w net.ipv4.ip_forward=1
sudo sysctl -w net.ipv6.conf.all.forwarding=1
exit

2. Configure the EC2 instance to accept traffic for destinations other than itself (known as disabling the source/destination check):

aws ec2 modify-instance-attribute --region $AWS_REGION \
                         --no-source-dest-check        \
                         --instance-id $APPLIANCE_ID

Now, the appliance is ready to forward traffic to the other EC2 instances. You can test this by pointing your browser (or using `cURL`) to the application instance.

APPLICATION_IP=$(aws ec2 describe-instances --region $AWS_REGION                          \
                     --query "Reservations[].Instances[] | [?Tags[?Key=='Name' && Value=='application']].NetworkInterfaces[].Association.PublicDnsName"  \
                     --output text)
				   
curl -I $APPLICATION_IP

To verify the traffic is really flowing through the appliance, you can enable the source/destination check on the instance again (use the --source-dest-check parameter with the modify-instance-attribute CLI command above). The traffic is blocked when the source/destination check is enabled.

Cleanup
Should you use the CDK script I provided for this article, be sure to run cdk destroy when finished. This ensures you are not billed for the two EC2 instances I use for this demo. Because I modified routing tables behind the back of AWS CloudFormation, I need to manually delete the routing tables, the subnet, and the VPC. The easiest way is to navigate to the VPC Console, select the VPC, and click Actions => Delete VPC. The console deletes all components in the correct order. You might need to wait 5-10 minutes after the end of cdk destroy before the console is able to delete the VPC.

From our Partners
During the beta test of these new routing capabilities, we granted early access to a collection of AWS partners. They provided us with tons of helpful feedback. Here are some of the blog posts that they wrote in order to share their experiences (I am updating this article with links as they are published):

  • 128 Technology
  • Aviatrix
  • Checkpoint
  • Cisco
  • Citrix
  • FireEye
  • Fortinet
  • HashiCorp
  • IBM Security
  • Lastline
  • Netscout
  • Palo Alto Networks
  • ShieldX Networks
  • Sophos
  • Trend Micro
  • Valtix
  • Vectra AI
  • Versa Networks

Availability
There is no additional cost to use Virtual Private Cloud ingress routing. It is available in all regions (including AWS GovCloud (US-West)) and you can start to use it today.

You can learn more about gateway route tables in the updated VPC documentation.

What are the appliances you are going to use with this new VPC routing capability?

— seb

Learn about AWS Services & Solutions – September AWS Online Tech Talks

Post Syndicated from Jenny Hang original https://aws.amazon.com/blogs/aws/learn-about-aws-services-solutions-september-aws-online-tech-talks/

Learn about AWS Services & Solutions – September AWS Online Tech Talks

AWS Tech Talks

Join us this September to learn about AWS services and solutions. The AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. These tech talks, led by AWS solutions architects and engineers, feature technical deep dives, live demonstrations, customer examples, and Q&A with AWS experts. Register Now!

Note – All sessions are free and in Pacific Time.

Tech talks this month:

 

Compute:

September 23, 2019 | 11:00 AM – 12:00 PM PT – Build Your Hybrid Cloud Architecture with AWS – Learn about the extensive range of services AWS offers to help you build a hybrid cloud architecture best suited for your use case.

September 26, 2019 | 1:00 PM – 2:00 PM PT – Self-Hosted WordPress: It’s Easier Than You Think – Learn how you can easily build a fault-tolerant WordPress site using Amazon Lightsail.

October 3, 2019 | 11:00 AM – 12:00 PM PT – Lower Costs by Right Sizing Your Instance with Amazon EC2 T3 General Purpose Burstable Instances – Get an overview of T3 instances, understand what workloads are ideal for them, and understand how the T3 credit system works so that you can lower your EC2 instance costs today.

 

Containers:

September 26, 2019 | 11:00 AM – 12:00 PM PT – Develop a Web App Using Amazon ECS and AWS Cloud Development Kit (CDK) – Learn how to build your first app using CDK and AWS container services.

 

Data Lakes & Analytics:

September 26, 2019 | 9:00 AM – 10:00 AM PT – Best Practices for Provisioning Amazon MSK Clusters and Using Popular Apache Kafka-Compatible Tooling – Learn best practices on running Apache Kafka production workloads at a lower cost on Amazon MSK.

 

Databases:

September 25, 2019 | 1:00 PM – 2:00 PM PT – What’s New in Amazon DocumentDB (with MongoDB compatibility) – Learn what’s new in Amazon DocumentDB, a fully managed MongoDB compatible database service designed from the ground up to be fast, scalable, and highly available.

October 3, 2019 | 9:00 AM – 10:00 AM PT – Best Practices for Enterprise-Class Security, High-Availability, and Scalability with Amazon ElastiCache – Learn about new enterprise-friendly Amazon ElastiCache enhancements like customer managed key and online scaling up or down to make your critical workloads more secure, scalable and available.

 

DevOps:

October 1, 2019 | 9:00 AM – 10:00 AM PT – CI/CD for Containers: A Way Forward for Your DevOps Pipeline – Learn how to build CI/CD pipelines using AWS services to get the most out of the agility afforded by containers.

 

Enterprise & Hybrid:

September 24, 2019 | 1:00 PM – 2:30 PM PT – Virtual Workshop: How to Monitor and Manage Your AWS Costs – Learn how to visualize and manage your AWS cost and usage in this virtual hands-on workshop.

October 2, 2019 | 1:00 PM – 2:00 PM PT – Accelerate Cloud Adoption and Reduce Operational Risk with AWS Managed Services – Learn how AMS accelerates your migration to AWS, reduces your operating costs, improves security and compliance, and enables you to focus on your differentiating business priorities.

 

IoT:

September 25, 2019 | 9:00 AM – 10:00 AM PT – Complex Monitoring for Industrial with AWS IoT Data Services – Learn how to solve your complex event monitoring challenges with AWS IoT Data Services.

 

Machine Learning:

September 23, 2019 | 9:00 AM – 10:00 AM PT – Training Machine Learning Models Faster – Learn how to train machine learning models quickly and with a single click using Amazon SageMaker.

September 30, 2019 | 11:00 AM – 12:00 PM PT – Using Containers for Deep Learning Workflows – Learn how containers can help address challenges in deploying deep learning environments.

October 3, 2019 | 1:00 PM – 2:30 PM PT – Virtual Workshop: Getting Hands-On with Machine Learning and Ready to Race in the AWS DeepRacer League – Join DeClercq Wentzel, Senior Product Manager for AWS DeepRacer, for a presentation on the basics of machine learning and how to build a reinforcement learning model that you can use to join the AWS DeepRacer League.

 

AWS Marketplace:

September 30, 2019 | 9:00 AM – 10:00 AM PT – Advancing Software Procurement in a Containerized World – Learn how to deploy applications faster with third-party container products.

 

Migration:

September 24, 2019 | 11:00 AM – 12:00 PM PT – Application Migrations Using AWS Server Migration Service (SMS) – Learn how to use AWS Server Migration Service (SMS) for automating application migration and scheduling continuous replication, from your on-premises data centers or Microsoft Azure to AWS.

 

Networking & Content Delivery:

September 25, 2019 | 11:00 AM – 12:00 PM PT – Building Highly Available and Performant Applications using AWS Global Accelerator – Learn how to build highly available and performant architectures for your applications with AWS Global Accelerator, now with source IP preservation.

September 30, 2019 | 1:00 PM – 2:00 PM PT – AWS Office Hours: Amazon CloudFront – Just getting started with Amazon CloudFront and Lambda@Edge? Get answers directly from our experts during AWS Office Hours.

 

Robotics:

October 1, 2019 | 11:00 AM – 12:00 PM PT – Robots and STEM: AWS RoboMaker and AWS Educate Unite! – Come join members of the AWS RoboMaker and AWS Educate teams as we provide an overview of our education initiatives and walk you through the newly launched RoboMaker Badge.

 

Security, Identity & Compliance:

October 1, 2019 | 1:00 PM – 2:00 PM PT – Deep Dive on Running Active Directory on AWS – Learn how to deploy Active Directory on AWS and start migrating your Windows workloads.

 

Serverless:

October 2, 2019 | 9:00 AM – 10:00 AM PT – Deep Dive on Amazon EventBridge – Learn how to optimize event-driven applications, and use rules and policies to route, transform, and control access to these events that react to data from SaaS apps.

 

Storage:

September 24, 2019 | 9:00 AM – 10:00 AM PT – Optimize Your Amazon S3 Data Lake with S3 Storage Classes and Management Tools – Learn how to use the Amazon S3 Storage Classes and management tools to better manage your data lake at scale and to optimize storage costs and resources.

October 2, 2019 | 11:00 AM – 12:00 PM PT – The Great Migration to Cloud Storage: Choosing the Right Storage Solution for Your Workload – Learn more about AWS storage services and identify which service is the right fit for your business.

 

 

NEW – Machine Learning algorithms and model packages now available in AWS Marketplace

Post Syndicated from Shaun Ray original https://aws.amazon.com/blogs/aws/new-machine-learning-algorithms-and-model-packages-now-available-in-aws-marketplace/

At AWS, our mission is to put machine learning in the hands of every developer. That’s why in 2017 we launched Amazon SageMaker. Since then, it has become one of the fastest growing services in AWS history, used by thousands of customers globally. Customers using Amazon SageMaker can choose from the optimized algorithms built into Amazon SageMaker, run fully managed MXNet, TensorFlow, PyTorch, and Chainer frameworks, or bring their own algorithms and models. When it comes to building their own machine learning models, many customers spend significant time developing algorithms and models for problems that have already been solved.

 

Introducing Machine Learning in AWS Marketplace

I am pleased to announce the new Machine Learning category of products offered by AWS Marketplace, which includes more than 150 algorithms and model packages, with more coming every day. AWS Marketplace offers a tailored selection for vertical industries like retail (35 products), media (19 products), manufacturing (17 products), HCLS (15 products), and more. Customers can find solutions to critical use cases like breast cancer prediction, lymphoma classifications, hospital readmissions, loan risk prediction, vehicle recognition, retail localizer, botnet attack detection, automotive telematics, motion detection, demand forecasting, and speech recognition.

Customers can search and browse the list of algorithms and model packages in AWS Marketplace. Once customers have subscribed to a machine learning solution, they can deploy it directly from the SageMaker console, a Jupyter notebook, the SageMaker SDK, or the AWS CLI. Amazon SageMaker protects buyers’ data by employing security measures such as static scans, network isolation, and runtime monitoring.
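To give a flavor of the SDK path, here is a minimal sketch (in Python, using the SageMaker Python SDK) of deploying a subscribed model package; the model package ARN, IAM role, instance type, and endpoint name are placeholders rather than values from any real listing, so treat this as an outline rather than a definitive recipe.

# Minimal sketch: deploy a subscribed AWS Marketplace model package with
# the SageMaker Python SDK. All ARNs and names below are placeholders.
import sagemaker
from sagemaker import ModelPackage

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/MySageMakerRole"  # hypothetical role

# ARN of the model package you subscribed to in AWS Marketplace (placeholder)
model_package_arn = (
    "arn:aws:sagemaker:us-east-1:123456789012:model-package/example-package"
)

model = ModelPackage(
    role=role,
    model_package_arn=model_package_arn,
    sagemaker_session=session,
)

# Create a real-time endpoint; use an instance type supported by the listing
model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
    endpoint_name="marketplace-model-endpoint",
)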

The intellectual property of sellers in AWS Marketplace is protected by encrypting the algorithm and model package artifacts in transit and at rest, using secure (SSL) connections for communications, and ensuring role-based access for deployment of artifacts. AWS provides a secure way for sellers to monetize their work, with a frictionless self-service process to publish their algorithms and model packages.

 

Machine Learning category in Action

Having tried to build my own models in the past, I sure am excited about this feature. After browsing through the available algorithms and model packages in AWS Marketplace, I’ve decided to try the Deep Vision vehicle recognition model, published by Deep Vision AI. This model identifies the make, model, and type of a car from a set of uploaded images. You could use it for insurance claims, online car sales, and vehicle identification in your business.

I subscribe and accept the default options for the recommended instance type and Region. I read and accept the subscription contract, and I am ready to get started with the model.

My subscription is listed in the Amazon SageMaker console and is ready to use. Deploying the model with Amazon SageMaker is the same as for any other model package; I complete the steps in this guide to create and deploy my endpoint.

With the endpoint deployed, I can start asking the model questions. In this case, I will be using a single image of a car; the model is trained to detect the make, model, and year from any angle. First, I will start with a Volvo XC70 and see what results I get:
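Under the hood, this is a single invoke_endpoint call against the SageMaker runtime. Here is a hedged sketch in Python with boto3; the endpoint name and image path are placeholders, and the content type a given model accepts depends on the seller’s documentation.

# Minimal sketch: send a single car image to the deployed endpoint.
import boto3

runtime = boto3.client("sagemaker-runtime")

with open("volvo_xc70.jpg", "rb") as f:  # hypothetical local image
    payload = f.read()

response = runtime.invoke_endpoint(
    EndpointName="marketplace-model-endpoint",  # placeholder endpoint name
    ContentType="image/jpeg",                   # check the listing's docs
    Body=payload,
)

print(response["Body"].read().decode("utf-8"))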

Results:

{'result': [{'mmy': {'make': 'Volvo', 'score': 0.97, 'model': 'Xc70', 'year': '2016-2016'}, 'bbox': {'top': 146, 'left': 50, 'right': 1596, 'bottom': 813}, 'View': 'Front Left View'}]}

My model has detected the make, model, and year correctly for the supplied image. I was recently on holiday in the UK and stayed with a relative who had a McLaren 570S supercar. The thought that crossed my mind as the gull-wing doors opened for the first time, and I was about to sit in the car, was how much the insurance excess would cost if things went wrong! Quite apt for our use case today.

Results:

{'result': [{'mmy': {'make': 'Mclaren', 'score': 0.95, 'model': '570S', 'year': '2016-2017'}, 'bbox': {'top': 195, 'left': 126, 'right': 757, 'bottom': 494}, 'View': 'Front Right View'}]}

The score (0.95) measures how confident the model is that the result is right; the range of the score is 0.0 to 1.0. The prediction for the McLaren is both confident and correct, with the make, model, and year all identified accurately. Impressive results for a relatively rare car on the road. I test a few more cars supplied by the launch team, who are excitedly looking over my shoulder, and now it’s time to wrap up.

Within ten minutes, I have been able to choose a model package, deploy an endpoint, and accurately detect the make, model, and year of vehicles, with no data scientists, no expensive GPUs for training, and no code to write. You can be sure I will be subscribing to a whole lot more of these models from AWS Marketplace throughout re:Invent week and trying to solve other use cases in less than 15 minutes!

You can access the Machine Learning category through the Amazon SageMaker console or directly through AWS Marketplace itself. Once you have subscribed to an algorithm or model, it is accessible via the console, SDK, and AWS CLI. Algorithms and models from AWS Marketplace can be deployed just like any other model or algorithm, by selecting AWS Marketplace as your package source. Once you have chosen an algorithm or model, you can deploy it to Amazon SageMaker by following this guide.

 

Availability & Pricing

Customers pay a subscription fee for the use of an algorithm or model package, plus the fees for the underlying AWS resources. AWS Marketplace provides a consolidated monthly bill for all purchased subscriptions.

At launch, AWS Marketplace for Machine Learning includes algorithms and models from Deep Vision AI Inc, Knowledgent, RocketML, Sensifai, Cloudwick Technologies, Persistent Systems, Modjoul, H2Oai Inc, Figure Eight [Crowdflower], Intel Corporation, AWS Gluon Model Zoos, and more with new sellers being added regularly. If you are interested in selling machine learning algorithms and model packages, please reach out to [email protected]

 

 

NEW – AWS Marketplace makes it easier to govern software procurement with Private Marketplace

Post Syndicated from Shaun Ray original https://aws.amazon.com/blogs/aws/new-aws-marketplace-makes-it-easier-to-govern-software-procurement-with-private-marketplace/

Over six years ago, we launched AWS Marketplace with the ambitious goal of providing users of the cloud with the software applications and infrastructure they needed to run their business. Today, more than 200,000 active AWS customers are using software from AWS Marketplace in categories such as security, data and analytics, log analysis, and machine learning. Those customers use over 650 million hours a month of Amazon EC2 for products in AWS Marketplace and have more than 950,000 active software subscriptions. AWS Marketplace offers 35 categories and more than 4,500 software listings from more than 1,400 Independent Software Vendors (ISVs) to help you on your cloud journey, no matter what stage of adoption you are at.

Customers have told us that they love the flexibility and the myriad options that AWS Marketplace provides. Today, I am excited to announce that we are offering even more flexibility with the launch of Private Marketplace from AWS Marketplace.

Private Marketplace is a new feature that enables you to create a custom digital catalog of pre-approved products from AWS Marketplace. As an administrator, you can select products that meet your procurement policies and make them available for your users. You can also further customize Private Marketplace with company branding, such as logo, messaging, and color scheme. All controls for Private Marketplace apply across your entire AWS Organizations entity, and you can define fine-grained controls using AWS Identity and Access Management (IAM) for roles such as administrator, subscription manager, and end user.

Once you enable Private Marketplace, users within your AWS Organizations entity are redirected to Private Marketplace when they sign in to AWS Marketplace. Now your users can quickly find, buy, and deploy products, knowing they are pre-approved.

 

Private Marketplace in Action

To get started, we need to be using a master account. If you have a single account, it is automatically classified as a master account. If you are a member of an organization managed with AWS Organizations, the master account needs to enable Private Marketplace access. Once that is done, you can add subscription managers and administrators through AWS Identity and Access Management (IAM) policies.
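If you want to confirm programmatically which account is the master account before enabling Private Marketplace, a quick check with boto3 might look like the sketch below; the console shows the same information, so this is purely illustrative.

# Minimal sketch: look up the organization's master account with boto3.
import boto3

org = boto3.client("organizations")
details = org.describe_organization()["Organization"]
print("Master account ID:", details["MasterAccountId"])
print("Master account email:", details["MasterAccountEmail"])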

 

1- My account meets the requirement of being a master account, so I can proceed to create a Private Marketplace. I click “Create Private Marketplace” and am redirected to the admin page, where I can whitelist products from AWS Marketplace. To grant other users access to approve products for listing, I can grant them the AWSMarketplaceManageSubscriptions permissions through IAM.
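As a rough illustration, granting those permissions to a group of administrators could look like the sketch below; the group name is hypothetical, and your organization may prefer IAM roles or federated access instead.

# Minimal sketch: attach the AWSMarketplaceManageSubscriptions managed
# policy to an IAM group. The group name is a placeholder.
import boto3

iam = boto3.client("iam")

iam.attach_group_policy(
    GroupName="MarketplaceAdmins",  # hypothetical IAM group
    PolicyArn="arn:aws:iam::aws:policy/AWSMarketplaceManageSubscriptions",
)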

2- I select some popular software and operating systems from the list and add them to Private Marketplace. Once selected, we can see our whitelisted products.

3- One thing that I appreciate, and I am sure the administrators of an organization’s Private Marketplace will too, is some customization to bring the style and branding in line with the company. In this case, we can choose the name, logo, color, and description of our Private Marketplace.

4- After a couple of minutes, we have our freshly minted Private Marketplace ready to go. There is an explicit step that we need to complete to push our Private Marketplace live, which allows us to create and edit without enabling access for users.

 

5- For the next part, we will switch to a member account and see what our Private Marketplace looks like.

6- We can see the five pieces of software I whitelisted and our customizations to our Private Marketplace. We can also see that these products are “Approved for Procurement” and can be subscribed to by our end users. Other products are still discoverable by our users, but cannot be subscribed to until an administrator whitelists them.

 

Conclusion

Users in a Private Marketplace can launch products knowing that all products in their Private Marketplace comply with their company’s procurement policies. When users search for products in Private Marketplace, they can see which products are labeled as “Approved for Procurement” and quickly filter between their company’s catalog and the full catalog of software products in AWS Marketplace.

 

Pricing and Availability

Subscription costs remain the same as for any other product in AWS Marketplace. Private Marketplace from AWS Marketplace is available today in all commercial AWS Regions.

 

 

 

Leveraging AWS Marketplace Partner Storage Solutions for Microsoft

Post Syndicated from islawson original https://aws.amazon.com/blogs/architecture/leveraging-aws-marketplace-partner-storage-solutions-for-microsoft/

Designing a cloud storage solution to accommodate traditional enterprise software such as Microsoft SharePoint can be challenging. Microsoft SharePoint is complex and demands a lot of the underlying storage that’s used for its many databases and content repositories. To ensure that the selected storage platform can accommodate the availability, connectivity, and performance requirements recommended by Microsoft, you need to use third-party storage solutions that build on and extend the functionality and performance of AWS storage services.

An appropriate storage solution for Microsoft SharePoint needs to provide data redundancy, high availability, fault tolerance, strong encryption, standard connectivity protocols, point-in-time data recovery, compression, ease of management, directory integration, and support.

AWS Marketplace is uniquely positioned as a procurement channel to find a third-party storage product that provides the additional technology layered on top of AWS storage services. The third-party storage products are provided and maintained by industry newcomers with born-in-the-cloud solutions as well as existing industry leaders. They include many mainstream storage products that are already familiar and commonly deployed in enterprises.

We recently released the “Leveraging AWS Marketplace Storage Solutions for Microsoft SharePoint” whitepaper to walk through the deployment and configuration of SoftNAS Cloud NAS, an AWS Marketplace third-party storage product that provides secure, highly available, redundant, and fault-tolerant storage to the Microsoft SharePoint collaboration suite.

About the Author

Israel Lawson is a senior solutions architect on the AWS Marketplace team.

Amazon Redshift – 2017 Recap

Post Syndicated from Larry Heathcote original https://aws.amazon.com/blogs/big-data/amazon-redshift-2017-recap/

We have been busy adding new features and capabilities to Amazon Redshift, and we wanted to give you a glimpse of what we’ve been doing over the past year. In this article, we recap a few of our enhancements and provide a set of resources that you can use to learn more and get the most out of your Amazon Redshift implementation.

In 2017, we made more than 30 announcements about Amazon Redshift. We listened to you, our customers, and delivered Redshift Spectrum, a feature of Amazon Redshift that gives you the ability to extend analytics to your data lake—without moving data. We launched new DC2 nodes, doubling performance at the same price. We also announced many new features that provide greater scalability, better performance, more automation, and easier ways to manage your analytics workloads.

To see a full list of our launches, visit our what’s new page—and be sure to subscribe to our RSS feed.

Major launches in 2017

Amazon Redshift Spectrum—extend analytics to your data lake, without moving data

We launched Amazon Redshift Spectrum to give you the freedom to store data in Amazon S3, in open file formats, and have it available for analytics without the need to load it into your Amazon Redshift cluster. It enables you to easily join datasets across Redshift clusters and S3 to provide unique insights that you would not be able to obtain by querying independent data silos.

With Redshift Spectrum, you can run SQL queries against data in an Amazon S3 data lake as easily as you analyze data stored in Amazon Redshift. And you can do it without loading data or resizing the Amazon Redshift cluster based on growing data volumes. Redshift Spectrum separates compute and storage to meet workload demands for data size, concurrency, and performance. Redshift Spectrum scales processing across thousands of nodes, so results are fast, even with massive datasets and complex queries. You can query open file formats that you already use—such as Apache Avro, CSV, Grok, ORC, Apache Parquet, RCFile, RegexSerDe, SequenceFile, TextFile, and TSV—directly in Amazon S3, without any data movement.
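As a rough sketch of what this looks like in practice, the following Python snippet (using psycopg2, one common driver choice) registers an external schema and table and then queries Parquet files in S3; the cluster endpoint, credentials, IAM role, table layout, and bucket are all placeholders.

# Minimal sketch: query S3-resident Parquet data through Redshift Spectrum.
# Connection details, the IAM role, the table layout, and the S3 location
# are placeholders, not values from a real cluster.
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.example.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439,
    dbname="dev",
    user="awsuser",
    password="...",  # use your own secrets management
)
conn.autocommit = True  # external DDL cannot run inside a transaction
cur = conn.cursor()

# Register an external schema backed by the data catalog (placeholder names)
cur.execute("""
    CREATE EXTERNAL SCHEMA IF NOT EXISTS spectrum
    FROM DATA CATALOG DATABASE 'spectrumdb'
    IAM_ROLE 'arn:aws:iam::123456789012:role/MySpectrumRole'
    CREATE EXTERNAL DATABASE IF NOT EXISTS;
""")

# Define an external table over Parquet files in S3 (placeholder bucket)
cur.execute("""
    CREATE EXTERNAL TABLE spectrum.sales (
        sale_id   BIGINT,
        sale_date DATE,
        amount    DECIMAL(10,2)
    )
    STORED AS PARQUET
    LOCATION 's3://my-example-bucket/sales/';
""")

# Query the S3 data as if it were a local table
cur.execute("SELECT sale_date, SUM(amount) FROM spectrum.sales GROUP BY sale_date;")
print(cur.fetchall())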

“For complex queries, Redshift Spectrum provided a 67 percent performance gain,” said Rafi Ton, CEO, NUVIAD. “Using the Parquet data format, Redshift Spectrum delivered an 80 percent performance improvement. For us, this was substantial.”

To learn more about Redshift Spectrum, watch our AWS Summit session Intro to Amazon Redshift Spectrum: Now Query Exabytes of Data in S3, and read our announcement blog post Amazon Redshift Spectrum – Exabyte-Scale In-Place Queries of S3 Data.

DC2 nodes—twice the performance of DC1 at the same price

We launched second-generation Dense Compute (DC2) nodes to provide low latency and high throughput for demanding data warehousing workloads. DC2 nodes feature powerful Intel E5-2686 v4 (Broadwell) CPUs, fast DDR4 memory, and NVMe-based solid state disks (SSDs). We’ve tuned Amazon Redshift to take advantage of the better CPU, network, and disk on DC2 nodes, providing up to twice the performance of DC1 at the same price. Our DC2.8xlarge instances now provide twice the memory per slice of data and an optimized storage layout with 30 percent better storage utilization.

“Redshift allows us to quickly spin up clusters and provide our data scientists with a fast and easy method to access data and generate insights,” said Bradley Todd, technology architect at Liberty Mutual. “We saw a 9x reduction in month-end reporting time with Redshift DC2 nodes as compared to DC1.”

Read our customer testimonials to see the performance gains our customers are experiencing with DC2 nodes. To learn more, read our blog post Amazon Redshift Dense Compute (DC2) Nodes Deliver Twice the Performance as DC1 at the Same Price.

Performance enhancements—3x to 5x faster queries

On average, our customers are seeing 3x to 5x performance gains for most of their critical workloads.

We introduced short query acceleration to speed up execution of queries such as reports, dashboards, and interactive analysis. Short query acceleration uses machine learning to predict the execution time of a query, and to move short running queries to an express short query queue for faster processing.

We launched results caching to deliver sub-second response times for queries that are repeated, such as dashboards, visualizations, and those from BI tools. Results caching has an added benefit of freeing up resources to improve the performance of all other queries.

We also introduced late materialization to reduce the amount of data scanned for queries with predicate filters by batching and factoring in the filtering of predicates before fetching data blocks in the next column. For example, if only 10 percent of the table rows satisfy the predicate filters, Amazon Redshift can potentially save 90 percent of the I/O for the remaining columns to improve query performance.

We launched query monitoring rules and pre-defined rule templates. These features make it easier for you to set metrics-based performance boundaries for workload management (WLM) queries, and specify what action to take when a query goes beyond those boundaries. For example, for a queue that’s dedicated to short-running queries, you might create a rule that aborts queries that run for more than 60 seconds. To track poorly designed queries, you might have another rule that logs queries that contain nested loops.
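To make those two example rules concrete, here is a hedged sketch of applying them through a cluster parameter group with boto3; the queue layout, rule names, thresholds, and parameter group name are illustrative only, so check the current wlm_json_configuration documentation for the exact schema.

# Minimal sketch: query monitoring rules in a WLM configuration, applied
# via a cluster parameter group. Names and thresholds are examples only.
import json
import boto3

redshift = boto3.client("redshift")

wlm_config = [
    {
        "query_group": [],
        "user_group": [],
        "query_concurrency": 5,
        "rules": [
            {
                "rule_name": "abort_long_queries",
                "predicate": [
                    {"metric_name": "query_execution_time",
                     "operator": ">", "value": 60}
                ],
                "action": "abort",
            },
            {
                "rule_name": "log_nested_loops",
                "predicate": [
                    {"metric_name": "nested_loop_join_row_count",
                     "operator": ">", "value": 100}
                ],
                "action": "log",
            },
        ],
    }
]

redshift.modify_cluster_parameter_group(
    ParameterGroupName="my-wlm-parameter-group",  # placeholder group name
    Parameters=[
        {
            "ParameterName": "wlm_json_configuration",
            "ParameterValue": json.dumps(wlm_config),
        }
    ],
)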

Customer insights

Amazon Redshift and Redshift Spectrum serve customers across a variety of industries and sizes, from startups to large enterprises. Visit our customer page to see the success that customers are having with our recent enhancements. Learn how companies like Liberty Mutual Insurance saw a 9x reduction in month-end reporting time using DC2 nodes. On this page, you can find case studies, videos, and other content that show how our customers are using Amazon Redshift to drive innovation and business results.

In addition, check out these resources to learn about the success our customers are having building out a data warehouse and data lake integration solution with Amazon Redshift:

Partner solutions

You can enhance your Amazon Redshift data warehouse by working with industry-leading experts. Our AWS Partner Network (APN) Partners have certified their solutions to work with Amazon Redshift. They offer software, tools, integration, and consulting services to help you at every step. Visit our Amazon Redshift Partner page and choose an APN Partner. Or, use AWS Marketplace to find and immediately start using third-party software.

To see what our Partners are saying about Amazon Redshift Spectrum and our DC2 nodes mentioned earlier, read these blog posts:

Resources

Blog posts

Visit the AWS Big Data Blog for a list of all Amazon Redshift articles.

YouTube videos

GitHub

Our community of experts contributes on GitHub to provide tips and hints that can help you get the most out of your deployment. Visit GitHub frequently to get the latest technical guidance, code samples, administrative task automation utilities, the analyze & vacuum schema utility, and more.

Customer support

If you are evaluating or considering a proof of concept with Amazon Redshift, or you need assistance migrating your on-premises or other cloud-based data warehouse to Amazon Redshift, our team of product experts and solutions architects can help you with architecting, sizing, and optimizing your data warehouse. Contact us using this support request form, and let us know how we can assist you.

If you are an Amazon Redshift customer, we offer a no-cost health check program. Our team of database engineers and solutions architects give you recommendations for optimizing Amazon Redshift and Amazon Redshift Spectrum for your specific workloads. To learn more, email us at [email protected].

If you have any questions, email us at [email protected].

 


Additional Reading

If you found this post useful, be sure to check out Amazon Redshift Spectrum – Exabyte-Scale In-Place Queries of S3 Data, Using Amazon Redshift for Fast Analytical Reports, and How to Migrate Your Oracle Data Warehouse to Amazon Redshift Using AWS SCT and AWS DMS.


About the Author

Larry Heathcote is a Principal Product Marketing Manager at Amazon Web Services for data warehousing and analytics. Larry is passionate about seeing the results of data-driven insights on business outcomes. He enjoys family time, home projects, grilling out, and the taste of classic barbeque.

 

 

 

The Floodgates Are Open – Increased Network Bandwidth for EC2 Instances

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/the-floodgates-are-open-increased-network-bandwidth-for-ec2-instances/

I hope that you have configured your AMIs and your current-generation EC2 instances to use the Elastic Network Adapter (ENA) that I told you about back in mid-2016. The ENA gives you high throughput and low latency, while minimizing the load on the host processor. It is designed to work well in the presence of multiple vCPUs, with intelligent packet routing backed up by multiple transmit and receive queues.

Today we are opening up the floodgates and giving you access to more bandwidth in all AWS Regions. Here are the specifics (in each case, the actual bandwidth is dependent on the instance type and size):

EC2 to S3 – Traffic to and from Amazon Simple Storage Service (S3) can now take advantage of up to 25 Gbps of bandwidth. Previously, traffic of this type had access to 5 Gbps of bandwidth. This will be of benefit to applications that access large amounts of data in S3 or that make use of S3 for backup and restore.

EC2 to EC2 – Traffic to and from EC2 instances in the same or different Availability Zones within a region can now take advantage of up to 5 Gbps of bandwidth for single-flow traffic, or 25 Gbps of bandwidth for multi-flow traffic (a flow represents a single, point-to-point network connection) by using private IPv4 or IPv6 addresses, as described here.

EC2 to EC2 (Cluster Placement Group) – Traffic to and from EC2 instances within a cluster placement group can continue to take advantage of up to 10 Gbps of lower-latency bandwidth for single-flow traffic, or 25 Gbps of lower-latency bandwidth for multi-flow traffic.

To take advantage of this additional bandwidth, make sure that you are using the latest, ENA-enabled AMIs on current-generation EC2 instances. ENA-enabled AMIs are available for Amazon Linux, Ubuntu 14.04 & 16.04, RHEL 7.4, SLES 12, and Windows Server (2008 R2, 2012, 2012 R2, and 2016). The FreeBSD AMI in AWS Marketplace is also ENA-enabled, as is VMware Cloud on AWS.
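If you are unsure whether a particular AMI or instance already has ENA enabled, a quick check with boto3 might look like the sketch below; the instance and image IDs are placeholders.

# Minimal sketch: check ENA support on an instance and an AMI.
import boto3

ec2 = boto3.client("ec2")

# ENA support on an existing instance
attr = ec2.describe_instance_attribute(
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    Attribute="enaSupport",
)
print("Instance ENA enabled:", attr.get("EnaSupport", {}).get("Value", False))

# ENA support baked into an AMI
images = ec2.describe_images(ImageIds=["ami-0123456789abcdef0"])  # placeholder
print("AMI ENA enabled:", images["Images"][0].get("EnaSupport", False))

# To enable ENA on an existing (stopped) instance:
# ec2.modify_instance_attribute(
#     InstanceId="i-0123456789abcdef0",
#     EnaSupport={"Value": True},
# )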

Jeff;

Building Blocks of Amazon ECS

Post Syndicated from Tiffany Jernigan original https://aws.amazon.com/blogs/compute/building-blocks-of-amazon-ecs/

So, what’s Amazon Elastic Container Service (ECS)? ECS is a managed service for running containers on AWS, designed to make it easy to run applications in the cloud without worrying about configuring the environment for your code to run in. Using ECS, you can easily deploy containers to host a simple website or run complex distributed microservices using thousands of containers.

Getting started with ECS isn’t too difficult. To fully understand how it works and how you can use it, it helps to understand the basic building blocks of ECS and how they fit together!

Let’s begin with an analogy

Imagine you’re in a virtual reality game with blocks and portals, in which your task is to build kingdoms.

In your spaceship, you pull up a holographic map of your upcoming destination: Nozama, a golden-orange planet. Looking at its various regions, you see that the nearest one is za-southwest-1 (SW Nozama). You set your destination, and use your jump drive to jump to the outer atmosphere of za-southwest-1.

As you approach SW Nozama, you see three portals, 1a, 1b, and 1c. Each portal lets you transport directly to an isolated zone (Availability Zone), where you can start construction on your new kingdom (cluster), Royaume.

With your supply of blocks, you take the portal to 1b, and erect the surrounding walls of your first territory (instance)*.

Before you get ahead of yourself, there are some rules to keep in mind. For your territory to be a part of Royaume, the land ordinance requires construction of a building (container), specifically a castle, from which your territory’s lord (agent)* rules.

You can then create architectural plans (task definitions) to build your developments (tasks), consisting of up to 10 buildings per plan. A development can be built now within this or any territory, or multiple territories.

If you do decide to create more territories, you can either stay here in 1b or take a portal to another location in SW Nozama and start building there.

Amazon EC2 building blocks

We currently provide two launch types: EC2 and Fargate. With Fargate, the Amazon EC2 instances are abstracted away and managed for you. Instead of worrying about ECS container instances, you can just worry about tasks. In this post, the infrastructure components used by ECS that are handled by Fargate are marked with a *.

Instance*

EC2 instances are good ol’ virtual machines (VMs). And yes, don’t worry, you can connect to them (via SSH). Because customers have varying needs in memory, storage, and computing power, many different instance types are offered. Just want to run a small application or try a free trial? Try t2.micro. Want to run memory-optimized workloads? R3 and X1 instances are a couple options. There are many more instance types as well, which cater to various use cases.

AMI*

Sorry if you wanted to immediately march forward, but before you create your instance, you need to choose an AMI. AMI stands for Amazon Machine Image. What does that mean? Basically, an AMI provides the information required to launch an instance: root volume, launch permissions, and volume-attachment specifications. You can find and choose a Linux or Windows AMI provided by AWS, the user community, or the AWS Marketplace (for example, the Amazon ECS-Optimized AMI), or you can create your own.

Region

AWS is divided into regions that are geographic areas around the world (for now it’s just Earth, but maybe someday…). These regions have semi-evocative names such as us-east-1 (N. Virginia), us-west-2 (Oregon), eu-central-1 (Frankfurt), ap-northeast-1 (Tokyo), etc.

Each region is designed to be completely isolated from the others, and consists of multiple, distinct data centers. This creates a “blast radius” for failure so that even if an entire region goes down, the others aren’t affected. Like many AWS services, to start using ECS, you first need to decide the region in which to operate. Typically, this is the region nearest to you or your users.

Availability Zone

AWS regions are subdivided into Availability Zones. A region has at minimum two zones, and up to a handful. Zones are physically isolated from each other, spanning one or more different data centers, but are connected through low-latency, fiber-optic networking, and share some common facilities. EC2 is designed so that the most common failures only affect a single zone to prevent region-wide outages. This means you can achieve high availability in a region by spanning your services across multiple zones and distributing across hosts.

Amazon ECS building blocks

Container

Well, without containers, ECS wouldn’t exist!

Are containers virtual machines?
Nope! Virtual machines virtualize the hardware (benefits), while containers virtualize the operating system (even more benefits!). If you look inside a container, you would see that it is made up of processes running on the host, tied together by kernel constructs like namespaces, cgroups, etc. But you don’t need to bother with that level of detail, at least not in this post!

Why containers?
Containers give you the ability to build, ship, and run your code anywhere!

Before the cloud, you needed to self-host and therefore had to buy machines in addition to setting up and configuring the operating system (OS), and running your code. In the cloud, with virtualization, you can just skip to setting up the OS and running your code. Containers make the process even easier—you can just run your code.

Additionally, all of the dependencies travel in a package with the code, which is called an image. This allows containers to be deployed on any host machine. From the outside, it looks like a host is just holding a bunch of containers. They all look the same, in the sense that they are generic enough to be deployed on any host.

With ECS, you can easily run your containerized code and applications across a managed cluster of EC2 instances.

Are containers a fairly new technology?
The concept of containerization is not new. Its origins date back to 1979 with the creation of chroot. However, it wasn’t until the early 2000s that containers became a major technology. The most significant milestone to date was the release of Docker in 2013, which led to the popularization and widespread adoption of containers.

What does ECS use?
While other container technologies exist (LXC, rkt, etc.), because of its massive adoption and use by our customers, ECS was designed first to work natively with Docker containers.

Container instance*

Yep, you are back to instances. An instance is just slightly more complex in the ECS realm, though. Here, an ECS container instance is an EC2 instance that runs the agent, has a specifically defined IAM policy and role, and has been registered into your cluster.

And as you probably guessed, in these instances, you are running containers. 

AMI*

These container instances can use any AMI as long as it meets the following specifications: a modern Linux distribution running the agent and the Docker daemon, along with any Docker runtime dependencies.

Want it more simplified? Well, AWS created the Amazon ECS-Optimized AMI for just that. Not only does that AMI come preconfigured with all of the previously mentioned specifications, it’s tested and includes the recommended ecs-init upstart process to run and monitor the agent.

Cluster

An ECS cluster is a grouping of (container) instances* (or tasks in Fargate) that lie within a single region but can span multiple Availability Zones – which is even a good idea for redundancy. When launching an instance (or tasks in Fargate), unless specified, it registers with the cluster named “default”. If “default” doesn’t exist, it is created. You can also scale and delete your clusters.

Agent*

The Amazon ECS container agent is a Go program that runs in its own container within each EC2 instance that you use with ECS. (It’s also available open source on GitHub!) The agent is the intermediary component that takes care of the communication between the scheduler and your instances. Want to register your instance into a cluster? (Why wouldn’t you? A cluster is both a logical boundary and a provider of a pool of resources!) Then you need to run the agent on it.

Task

When you want to start a container, it has to be part of a task. Therefore, you have to create a task first. Succinctly, tasks are a logical grouping of 1 to N containers that run together on the same instance, with N defined by you, up to 10. Let’s say you want to run a custom blog engine. You could put together a web server, an application server, and an in-memory cache, each in their own container. Together, they form a basic frontend unit.

Task definition

Ah, but you cannot create a task directly. You have to create a task definition that tells ECS that “task definition X is composed of this container (and maybe that other container and that other container too!).” It’s kind of like an architectural plan for a city. Some other details it can include are how the containers interact, container CPU and memory constraints, and task permissions using IAM roles.

Then you can tell ECS, “start one task using task definition X.” It might sound like unnecessary planning at first. As soon as you start to deal with multiple tasks, scaling, upgrades, and other “real life” scenarios, you’ll be glad that you have task definitions to keep track of things!
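For the blog-engine example above, a task definition registered through boto3 might look something like the sketch below; the container images, memory sizes, and port mappings are placeholders, not a recommended configuration.

# Minimal sketch: register a task definition for the blog-engine example.
# Image names, ports, and resource sizes are placeholders.
import boto3

ecs = boto3.client("ecs")

response = ecs.register_task_definition(
    family="blog-engine",  # hypothetical task definition family
    containerDefinitions=[
        {
            "name": "web",
            "image": "nginx:latest",
            "memory": 256,
            "essential": True,
            "portMappings": [{"containerPort": 80, "hostPort": 80}],
        },
        {
            "name": "app",
            "image": "my-repo/blog-app:latest",  # placeholder image
            "memory": 512,
            "essential": True,
        },
        {
            "name": "cache",
            "image": "redis:latest",
            "memory": 128,
            "essential": False,
        },
    ],
)
print(response["taskDefinition"]["taskDefinitionArn"])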

Scheduler*

So, the scheduler schedules… sorry, this should be more helpful, huh? The scheduler is part of the “hosted orchestration layer” provided by ECS. Wait a minute, what do I mean by “hosted orchestration”? Simply put, hosted means that it’s operated by ECS on your behalf, without you having to care about it. Your applications are deployed in containers running on your instances, but the managing of tasks is taken care of by ECS. One less thing to worry about!

Also, the scheduler is the component that decides what (which containers) gets to run where (on which instances), according to a number of constraints. Say that you have a custom blog engine to scale for high availability. You could create a service, which, by default, spreads tasks across all zones in the chosen region. And if you want each task to be on a different instance, you can use the distinctInstance task placement constraint. ECS not only makes sure that this happens, but also restarts a task if it fails.

Service

To ensure that you always have your task running without managing it yourself, you can create a service based on the task that you defined and ECS ensures that it stays running. A service is a special construct that says, “at any given time, I want to make sure that N tasks using task definition X1 are running.” If N=1, it just means “make sure that this task is running, and restart it if needed!” And with N>1, you’re basically scaling your application until you hit N, while also ensuring each task is running.
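Continuing the same hypothetical blog-engine example, creating such a service with boto3 might look like this sketch; the cluster name, service name, and desired count are placeholders.

# Minimal sketch: keep N copies of the blog-engine task running as a service,
# with at most one task per container instance. Names are placeholders.
import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="default",
    serviceName="blog-engine-service",   # hypothetical service name
    taskDefinition="blog-engine",        # family registered earlier
    desiredCount=3,                      # "make sure N=3 tasks are running"
    placementConstraints=[
        {"type": "distinctInstance"}     # each task on a different instance
    ],
)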

So, what now?

Hopefully you, at the very least, learned a tiny something. All comments are very welcome!

Want to discuss ECS with others? Join the amazon-ecs slack group, which members of the community created and manage.

Also, if you’re interested in learning more about the core concepts of ECS and its relation to EC2, here are some resources:

Pages
Amazon ECS landing page
AWS Fargate landing page
Amazon ECS Getting Started
Nathan Peck’s AWSome ECS

Docs
Amazon EC2
Amazon ECS

Blogs
AWS Compute Blog
AWS Blog

GitHub code
Amazon ECS container agent
Amazon ECS CLI

AWS videos
Learn Amazon ECS
AWS videos
AWS webinars

 

— tiffany

 @tiffanyfayj

 

Now Open – AWS EU (Paris) Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-aws-eu-paris-region/

Today we are launching our 18th AWS Region, our fourth in Europe. Located in the Paris area, AWS customers can use this Region to better serve customers in and around France.

The Details
The new EU (Paris) Region provides a broad suite of AWS services including Amazon API Gateway, Amazon Aurora, Amazon CloudFront, Amazon CloudWatch, CloudWatch Events, Amazon CloudWatch Logs, Amazon DynamoDB, Amazon Elastic Compute Cloud (EC2), EC2 Container Registry, Amazon ECS, Amazon Elastic Block Store (EBS), Amazon EMR, Amazon ElastiCache, Amazon Elasticsearch Service, Amazon Glacier, Amazon Kinesis Streams, Polly, Amazon Redshift, Amazon Relational Database Service (RDS), Amazon Route 53, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Storage Service (S3), Amazon Simple Workflow Service (SWF), Amazon Virtual Private Cloud, Auto Scaling, AWS Certificate Manager (ACM), AWS CloudFormation, AWS CloudTrail, AWS CodeDeploy, AWS Config, AWS Database Migration Service, AWS Direct Connect, AWS Elastic Beanstalk, AWS Identity and Access Management (IAM), AWS Key Management Service (KMS), AWS Lambda, AWS Marketplace, AWS OpsWorks Stacks, AWS Personal Health Dashboard, AWS Server Migration Service, AWS Service Catalog, AWS Shield Standard, AWS Snowball, AWS Snowball Edge, AWS Snowmobile, AWS Storage Gateway, AWS Support (including AWS Trusted Advisor), Elastic Load Balancing, and VM Import.

The Paris Region supports all sizes of C5, M5, R4, T2, D2, I3, and X1 instances.

There are also four edge locations for Amazon Route 53 and Amazon CloudFront: three in Paris and one in Marseille, all with AWS WAF and AWS Shield. Check out the AWS Global Infrastructure page to learn more about current and future AWS Regions.

The Paris Region will benefit from three AWS Direct Connect locations. Telehouse Voltaire is available today. AWS Direct Connect will also become available at Equinix Paris in early 2018, followed by Interxion Paris.

All AWS infrastructure regions around the world are designed, built, and regularly audited to meet the most rigorous compliance standards and to provide high levels of security for all AWS customers. These include ISO 27001, ISO 27017, ISO 27018, SOC 1 (Formerly SAS 70), SOC 2 and SOC 3 Security & Availability, PCI DSS Level 1, and many more. This means customers benefit from all the best practices of AWS policies, architecture, and operational processes built to satisfy the needs of even the most security sensitive customers.

AWS is certified under the EU-US Privacy Shield, and the AWS Data Processing Addendum (DPA) is GDPR-ready and available now to all AWS customers to help them prepare for May 25, 2018 when the GDPR becomes enforceable. The current AWS DPA, as well as the AWS GDPR DPA, allows customers to transfer personal data to countries outside the European Economic Area (EEA) in compliance with European Union (EU) data protection laws. AWS also adheres to the Cloud Infrastructure Service Providers in Europe (CISPE) Code of Conduct. The CISPE Code of Conduct helps customers ensure that AWS is using appropriate data protection standards to protect their data, consistent with the GDPR. In addition, AWS offers a wide range of services and features to help customers meet the requirements of the GDPR, including services for access controls, monitoring, logging, and encryption.

From Our Customers
Many AWS customers are preparing to use this new Region. Here’s a small sample:

Societe Generale, one of the largest banks in France and the world, has accelerated their digital transformation while working with AWS. They developed SG Research, an application that makes reports from Societe Generale’s analysts available to corporate customers in order to improve the decision-making process for investments. The new AWS Region will reduce latency between applications running in the cloud and in their French data centers.

SNCF is the national railway company of France. Their mobile app, powered by AWS, delivers real-time traffic information to 14 million riders. Extreme weather, traffic events, holidays, and engineering works can cause usage to peak at hundreds of thousands of users per second. They are planning to use machine learning and big data to add predictive features to the app.

Radio France, the French public radio broadcaster, offers seven national networks, and uses AWS to accelerate its innovation and stay competitive.

Les Restos du Coeur is a French charity that provides assistance to the needy, delivering food packages and helping with their social and economic integration back into French society. Les Restos du Coeur is using AWS for its CRM system to track the assistance given to each of their beneficiaries and the impact this is having on their lives.

AlloResto by JustEat (a leader in the French FoodTech industry) is using AWS to scale during traffic peaks and to accelerate their innovation process.

AWS Consulting and Technology Partners
We are already working with a wide variety of consulting, technology, managed service, and Direct Connect partners in France. Here’s a partial list:

AWS Premier Consulting Partners – Accenture, Capgemini, Claranet, CloudReach, DXC, and Edifixio.

AWS Consulting Partners – ABC Systemes, Atos International SAS, CoreExpert, Cycloid, Devoteam, LINKBYNET, Oxalide, Ozones, Scaleo Information Systems, and Sopra Steria.

AWS Technology Partners – Axway, Commerce Guys, MicroStrategy, Sage, Software AG, Splunk, Tibco, and Zerolight.

AWS in France
We have been investing in Europe, with a focus on France, for the last 11 years. We have also been developing documentation and training programs to help our customers to improve their skills and to accelerate their journey to the AWS Cloud.

As part of our commitment to AWS customers in France, we plan to train more than 25,000 people in the coming years, helping them develop highly sought after cloud skills. They will have access to AWS training resources in France via AWS Academy, AWSome days, AWS Educate, and webinars, all delivered in French by AWS Technical Trainers and AWS Certified Trainers.

Use it Today
The EU (Paris) Region is open for business now and you can start using it today!

Jeff;

 

AWS PrivateLink Update – VPC Endpoints for Your Own Applications & Services

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-privatelink-update-vpc-endpoints-for-your-own-applications-services/

Earlier this month, my colleague Colm MacCárthaigh told you about AWS PrivateLink and showed you how to use it to access AWS services such as Amazon Kinesis Streams, AWS Service Catalog, EC2 Systems Manager, the EC2 APIs, and the ELB APIs by way of VPC Endpoints. The endpoint (represented by one or more Elastic Network Interfaces or ENIs) resides within your VPC and has IP addresses drawn from the VPC’s subnets, without the need for an Internet or NAT Gateway. This model is clear and easy to understand, not to mention secure and scalable!

Endpoints for Private Connectivity
Today we are building upon the initial launch and extending the PrivateLink model, allowing you to set up and use VPC Endpoints to access your own services and those made available by others. Even before we launched PrivateLink for AWS services, we had a lot of requests for this feature, so I expect it to be pretty popular. For example, one customer told us that they plan to create hundreds of VPCs, each hosting and providing a single microservice (read Microservices on AWS to learn more).

Companies can now create services and offer them for sale to other AWS customers, for access via a private connection. They create a service that accepts TCP traffic, host it behind a Network Load Balancer, and then make the service available, either directly or in AWS Marketplace. They will be notified of new subscription requests and can choose to accept or reject each one. I expect that this feature will be used to create a strong, vibrant ecosystem of service providers in 2018.

The service provider and the service consumer run in separate VPCs and AWS accounts and communicate solely through the endpoint, with all traffic flowing across Amazon’s private network. Service consumers don’t have to worry about overlapping IP addresses, arrange for VPC peering, or use a VPC Gateway. You can also use AWS Direct Connect to connect your existing data center to one of your VPCs in order to allow your cloud-based applications to access services running on-premises, or vice versa.

Providing and Consuming Services
This new feature puts a lot of power at your fingertips. You can set it all up using the VPC APIs, the VPC CLI, or the AWS Management Console. I’ll use the console, and will show you how to provide and then consume a service. I am going to do both within a single AWS account, but that’s just for demo purposes.

Let’s talk about providing a service. It must run behind a Network Load Balancer and must be accessible over TCP. It can be hosted on EC2 instances, ECS containers, or on-premises (configured as an IP target), and should be able to scale in order to meet the expected level of demand. For low latency and fault tolerance, we recommend using an NLB with targets in every AZ of its region. Here’s mine:

I open up the VPC Console and navigate to Endpoint Services, then click on Create Endpoint Service:

I choose my NLB (just one in this case, but I can choose two or more and they will be mapped to consumers on a round-robin basis). By clicking on Acceptance required, I get to control access to my endpoint on a request-by-request basis:

I click on Create service and my service is ready immediately:

If I was going to make this service available in AWS Marketplace, I would go ahead and create a listing now. Since I am going to be the producer and the consumer in this blog post, I’ll skip that step. I will, however, copy the Service name for use in the next step.

I return to the VPC Dashboard and navigate to Endpoints, then click on Create endpoint. Then I select Find service by name, paste the service name, and click on Verify to move ahead. Then I select the desired AZs, and a subnet in each one, pick my security groups, and click on Create endpoint:

Because I checked Acceptance required when I created the endpoint service, the connection is pending acceptance:

Back on the endpoint service side (typically in a separate AWS account), I can see and accept the pending request:

The endpoint becomes available and ready to use within a minute or so. If I was creating a service and selling access on a paid basis, I would accept the request as part of a larger, and perhaps automated, onboarding workflow for a new customer.

On the consumer side, my new endpoint is accessible via DNS name:

Services provided by AWS and services in AWS Marketplace are accessible through split-horizon DNS. Accessing the service through this name will resolve to the “best” endpoint, taking Region and Availability Zone into consideration.
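For teams that would rather script this flow than click through the console, a hedged sketch with boto3 follows; the NLB ARN, VPC, subnet, and security group IDs are placeholders, and in practice the provider and consumer calls would run in different AWS accounts.

# Minimal sketch of the provider/consumer PrivateLink flow with boto3.
# All ARNs and resource IDs are placeholders.
import boto3

ec2 = boto3.client("ec2")

# Provider side: expose the NLB as an endpoint service requiring acceptance
service = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111111111111:"
        "loadbalancer/net/my-nlb/abc123"  # placeholder NLB ARN
    ],
    AcceptanceRequired=True,
)
service_name = service["ServiceConfiguration"]["ServiceName"]
service_id = service["ServiceConfiguration"]["ServiceId"]

# Consumer side: create an interface endpoint that points at the service
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",              # placeholder VPC
    ServiceName=service_name,
    SubnetIds=["subnet-0123456789abcdef0"],     # placeholder subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder security group
)
endpoint_id = endpoint["VpcEndpoint"]["VpcEndpointId"]

# Provider side: accept the pending connection request
ec2.accept_vpc_endpoint_connections(
    ServiceId=service_id,
    VpcEndpointIds=[endpoint_id],
)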

In the Marketplace
As I noted earlier, this new PrivateLink feature creates an opportunity for new and existing sellers in AWS Marketplace. The following SaaS offerings are already available as endpoints and I expect many more to follow (read Sell on AWS Marketplace to get started):

CA Technologies – CA App Experience Analytics Essentials.

Aqua Security – Aqua Container Image Security Scanner.

Dynatrace – Cloud-Native Monitoring powered by AI.

Cisco Stealthwatch – Public Cloud Monitoring – Metered, Public Cloud Monitoring – Contracts.

SigOpt – ML Optimization & Tuning.

Available Today
This new PrivateLink feature is available now and you can start using it today!

Jeff;