Tag Archives: ireland

EC2 Instance Update – M5 Instances with Local NVMe Storage (M5d)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-instance-update-m5-instances-with-local-nvme-storage-m5d/

Earlier this month we launched the C5 Instances with Local NVMe Storage and I told you that we would be doing the same for additional instance types in the near future!

Today we are introducing M5 instances equipped with local NVMe storage. Available for immediate use in 5 regions, these instances are a great fit for workloads that require a balance of compute and memory resources. Here are the specs:

Instance Name vCPUs RAM Local Storage EBS-Optimized Bandwidth Network Bandwidth
m5d.large 2 8 GiB 1 x 75 GB NVMe SSD Up to 2.120 Gbps Up to 10 Gbps
m5d.xlarge 4 16 GiB 1 x 150 GB NVMe SSD Up to 2.120 Gbps Up to 10 Gbps
m5d.2xlarge 8 32 GiB 1 x 300 GB NVMe SSD Up to 2.120 Gbps Up to 10 Gbps
m5d.4xlarge 16 64 GiB 1 x 600 GB NVMe SSD 2.210 Gbps Up to 10 Gbps
m5d.12xlarge 48 192 GiB 2 x 900 GB NVMe SSD 5.0 Gbps 10 Gbps
m5d.24xlarge 96 384 GiB 4 x 900 GB NVMe SSD 10.0 Gbps 25 Gbps

The M5d instances are powered by Custom Intel® Xeon® Platinum 8175M series processors running at 2.5 GHz, including support for AVX-512.

You can use any AMI that includes drivers for the Elastic Network Adapter (ENA) and NVMe; this includes the latest Amazon Linux, Microsoft Windows (Server 2008 R2, Server 2012, Server 2012 R2 and Server 2016), Ubuntu, RHEL, SUSE, and CentOS AMIs.
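If you'd rather script the launch than use the console, a minimal boto3 sketch looks like the following (the AMI ID and key pair name are placeholders you'd swap for your own ENA/NVMe-capable AMI and key):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single m5d.large; the local NVMe volume comes with the instance,
# so no block device mapping is needed for it.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI with ENA and NVMe drivers
    InstanceType="m5d.large",
    KeyName="my-key-pair",             # placeholder key pair
    MinCount=1,
    MaxCount=1
)
print(response["Instances"][0]["InstanceId"])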

Here are a couple of things to keep in mind about the local NVMe storage on the M5d instances:

Naming – You don’t have to specify a block device mapping in your AMI or during the instance launch; the local storage will show up as one or more devices (/dev/nvme*1 on Linux) after the guest operating system has booted.

Encryption – Each local NVMe device is hardware encrypted using the XTS-AES-256 block cipher and a unique key. Each key is destroyed when the instance is stopped or terminated.

Lifetime – Local NVMe devices have the same lifetime as the instance they are attached to, and do not stick around after the instance has been stopped or terminated.

Available Now
M5d instances are available in On-Demand, Reserved Instance, and Spot form in the US East (N. Virginia), US West (Oregon), EU (Ireland), US East (Ohio), and Canada (Central) Regions. Prices vary by Region, and are just a bit higher than for the equivalent M5 instances.

Jeff;

 

Amazon Neptune Generally Available

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/amazon-neptune-generally-available/

Amazon Neptune is now Generally Available in US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Ireland). Amazon Neptune is a fast, reliable, fully-managed graph database service that makes it easy to build and run applications that work with highly connected datasets. At the core of Neptune is a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with millisecond latencies. Neptune supports two popular graph models, Property Graph and RDF, through Apache TinkerPop Gremlin and SPARQL, allowing you to easily build queries that efficiently navigate highly connected datasets. Neptune can be used to power everything from recommendation engines and knowledge graphs to drug discovery and network security. Neptune is fully-managed with automatic minor version upgrades, backups, encryption, and fail-over. I wrote about Neptune in detail for AWS re:Invent last year and customers have been using the preview and providing great feedback that the team has used to prepare the service for GA.

Now that Amazon Neptune is generally available, there are a few changes from the preview.

Launching an Amazon Neptune Cluster

Launching a Neptune cluster is as easy as navigating to the AWS Management Console and clicking create cluster. Of course you can also launch with CloudFormation, the CLI, or the SDKs.
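For example, here's a rough boto3 sketch of the API route (the cluster and instance identifiers and the instance class are just illustrative placeholders):

import boto3

neptune = boto3.client("neptune")

# Create the cluster, then add a database instance to it.
neptune.create_db_cluster(
    DBClusterIdentifier="my-neptune-cluster",    # placeholder identifier
    Engine="neptune"
)
neptune.create_db_instance(
    DBInstanceIdentifier="my-neptune-instance",  # placeholder identifier
    DBInstanceClass="db.r4.large",               # example instance class
    Engine="neptune",
    DBClusterIdentifier="my-neptune-cluster"
)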

You can monitor your cluster health and the health of individual instances through Amazon CloudWatch and the console.

Additional Resources

We’ve created two repos with some additional tools and examples here. You can expect continuous development on these repos as we add additional tools and examples.

  • Amazon Neptune Tools Repo
    This repo has a useful tool for converting GraphML files into Neptune compatible CSVs for bulk loading from S3.
  • Amazon Neptune Samples Repo
    This repo has a really cool example of building a collaborative filtering recommendation engine for video game preferences.

Purpose Built Databases

There’s an industry trend toward purpose-built databases: developers and businesses want to access their data in the format that makes the most sense for their applications. As cloud resources make transforming large datasets easier with tools like AWS Glue, we have far more options than we used to for accessing our data. With tools like Amazon Redshift, Amazon Athena, Amazon Aurora, Amazon DynamoDB, and more, we get to choose the best database for the job or even enable entirely new use cases. Amazon Neptune is perfect for workloads where the data is highly connected across data-rich edges.

I’m really excited about graph databases and I see a huge number of applications. Looking for ideas of cool things to build? I’d love to build a web crawler in AWS Lambda that uses Neptune as the backing store. You could further enrich it by running Amazon Comprehend or Amazon Rekognition on the text and images found and creating a search engine on top of Neptune.

As always, feel free to reach out in the comments or on twitter to provide any feedback!

Randall

EC2 Instance Update – C5 Instances with Local NVMe Storage (C5d)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-instance-update-c5-instances-with-local-nvme-storage-c5d/

As you can see from my EC2 Instance History post, we add new instance types on a regular and frequent basis. Driven by increasingly powerful processors and designed to address an ever-widening set of use cases, the size and diversity of this list reflects the equally diverse group of EC2 customers!

Near the bottom of that list you will find the new compute-intensive C5 instances. With a 25% to 50% improvement in price-performance over the C4 instances, the C5 instances are designed for applications like batch and log processing, distributed and or real-time analytics, high-performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding. Some of these applications can benefit from access to high-speed, ultra-low latency local storage. For example, video encoding, image manipulation, and other forms of media processing often necessitates large amounts of I/O to temporary storage. While the input and output files are valuable assets and are typically stored as Amazon Simple Storage Service (S3) objects, the intermediate files are expendable. Similarly, batch and log processing runs in a race-to-idle model, flushing volatile data to disk as fast as possible in order to make full use of compute resources.

New C5d Instances with Local Storage
In order to meet this need, we are introducing C5 instances equipped with local NVMe storage. Available for immediate use in 5 regions, these instances are a great fit for the applications that I described above, as well as others that you will undoubtedly dream up! Here are the specs:

Instance Name vCPUs RAM Local Storage EBS Bandwidth Network Bandwidth
c5d.large 2 4 GiB 1 x 50 GB NVMe SSD Up to 2.25 Gbps Up to 10 Gbps
c5d.xlarge 4 8 GiB 1 x 100 GB NVMe SSD Up to 2.25 Gbps Up to 10 Gbps
c5d.2xlarge 8 16 GiB 1 x 200 GB NVMe SSD Up to 2.25 Gbps Up to 10 Gbps
c5d.4xlarge 16 32 GiB 1 x 400 GB NVMe SSD 2.25 Gbps Up to 10 Gbps
c5d.9xlarge 36 72 GiB 1 x 900 GB NVMe SSD 4.5 Gbps 10 Gbps
c5d.18xlarge 72 144 GiB 2 x 900 GB NVMe SSD 9 Gbps 25 Gbps

Other than the addition of local storage, the C5 and C5d share the same specs. Both are powered by 3.0 GHz Intel Xeon Platinum 8000-series processors, optimized for EC2 and with full control over C-states on the two largest sizes, giving you the ability to run two cores at up to 3.5 GHz using Intel Turbo Boost Technology.

You can use any AMI that includes drivers for the Elastic Network Adapter (ENA) and NVMe; this includes the latest Amazon Linux, Microsoft Windows (Server 2008 R2, Server 2012, Server 2012 R2 and Server 2016), Ubuntu, RHEL, SUSE, and CentOS AMIs.

Here are a couple of things to keep in mind about the local NVMe storage:

Naming – You don’t have to specify a block device mapping in your AMI or during the instance launch; the local storage will show up as one or more devices (/dev/nvme*1 on Linux) after the guest operating system has booted.

Encryption – Each local NVMe device is hardware encrypted using the XTS-AES-256 block cipher and a unique key. Each key is destroyed when the instance is stopped or terminated.

Lifetime – Local NVMe devices have the same lifetime as the instance they are attached to, and do not stick around after the instance has been stopped or terminated.
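Tying back to the naming note above, a quick way to confirm which NVMe devices showed up after boot is to glob for them from the guest OS (a Linux-only sketch):

import glob

# Local NVMe volumes (as well as any NVMe-attached EBS volumes) appear as /dev/nvme*n1 on Linux.
for device in sorted(glob.glob("/dev/nvme*n1")):
    print(device)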

Available Now
C5d instances are available in On-Demand, Reserved Instance, and Spot form in the US East (N. Virginia), US West (Oregon), EU (Ireland), US East (Ohio), and Canada (Central) Regions. Prices vary by Region, and are just a bit higher than for the equivalent C5 instances.

Jeff;

PS – We will be adding local NVMe storage to other EC2 instance types in the months to come, so stay tuned!

Court of Justice of the EU: handing over user data to the police – when?

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/05/16/ecj_privacy/

The Opinion of the Advocate General in Case C‑207/16 has been published, based on a reference for a preliminary ruling from the Audiencia Provincial de Tarragona (Provincial Court of Tarragona, Spain).

The reference concerns the interpretation of the concept of “serious crime” within the meaning of the Court’s case law established in Digital Rights Ireland and in Tele2 Sverige and Watson, where that concept serves as a criterion for assessing the lawfulness and proportionality of interference with the rights under Articles 7 and 8 of the Charter of Fundamental Rights of the European Union, namely the right to respect for private and family life and the right to protection of personal data.

The reference was made in the context of an appeal against a judicial decision refusing to hand over to police authorities data held by mobile telephone operators for the purpose of identifying individuals in a criminal investigation. The contested decision was reasoned in particular on the ground that the acts under investigation did not constitute a serious crime, as required by the applicable Spanish legislation.

The questions referred:

“1)      Can the sufficient seriousness of an offence, as a criterion justifying interference with the fundamental rights recognised in Articles 7 and 8 of [the Charter], be determined solely by reference to the penalty that may be imposed for the offence under investigation, or is it also necessary to establish that the criminal conduct causes particular harm to individual and/or collective legal interests?

2)      In the alternative, if determining the seriousness of the offence solely by reference to the penalty that may be imposed is compatible with the constitutional principles of the Union, as applied by the Court of Justice in its judgment in [Digital Rights] as criteria for strict review of the Directive [declared invalid by that judgment], what should the minimum threshold for the penalty be? Would a general threshold of three years’ imprisonment be permissible?”

This debate is well known to Bulgarians from the time of the implementation of Directive 2006/24/EC on the retention of traffic data, which the Court declared invalid. At the time there was disagreement over what counts as a grave as opposed to a serious crime, and over how the aim of protecting the public interest is to be weighed against the right to protection of private life and private correspondence. The Advocate General also refers to Directive 2006/24/EC.

The Advocate General on the first question:

90.      In my view, care should be taken not to read the requirements laid down by the Court in those two judgments too broadly, lest the ability of Member States to derogate from the regime established by Directive 2002/58, granted to them by Article 15(1) of that directive, be precluded, or at least excessively restricted, in cases where the interferences with private life at issue both pursue a legitimate aim and are limited in scope, as may result in this case from the request of the investigating police service. More specifically, I consider that EU law does not preclude the competent authorities from having access to identification data held by providers of electronic communications services that make it possible to trace the presumed perpetrators of an offence which is not serious.

91.      In the light of the above, I recommend that the Court answer the reformulated question referred as follows: Article 15(1) of Directive 2002/58, read in conjunction with Articles 7 and 8 and Article 52(1) of the Charter, must be interpreted as meaning that a measure which, for the purposes of combating crime, gives the competent national authorities access to the identification data of users of telephone numbers activated with a particular mobile telephone during a limited period, in circumstances such as those at issue in the main proceedings, entails an interference with the fundamental rights guaranteed by that directive and by the Charter which is not sufficiently serious to require that such access be granted only in cases in which the offence concerned is serious.

The Advocate General on the second question:

96.      In my view, it is in principle for the competent authorities of the Member States to determine what constitutes a “serious crime”. Nevertheless, thanks to the references for a preliminary ruling by which the courts of the Member States may bring matters before the Court, the Court is tasked with ensuring compliance with all the requirements arising from EU law and, in particular, with ensuring the consistent application of the protection afforded by the provisions of the Charter.

107.  If the concept of “serious crime” within the meaning of the case law established in Digital Rights and Tele2 were held by the Court to be an autonomous concept of EU law, it would have to be interpreted as meaning that the seriousness of an offence capable of justifying access by the competent national authorities to personal data under Article 15(1) of Directive 2002/58 must be measured by taking into account not only the penalties that may be imposed, but also a set of other objective assessment criteria such as those mentioned above.

121. In conclusion, I consider that if the Court were to hold, contrary to what I recommend, that only the prescribed penalty should be taken into account when classifying an offence as “serious” within the meaning of its case law established in Digital Rights, the answer to the second question referred should be that the Member States are free to set the minimum level of that penalty for this purpose, provided that they comply with the requirements arising from EU law, and in particular with those according to which interference with the fundamental rights guaranteed by Articles 7 and 8 of the Charter must remain an exception and must observe the principle of proportionality.

Let me also record the name of this Advocate General – Henrik Saugmandsgaard Øe of Denmark. He has managed to occupy the most varied positions all at once, like an electron.

EC2 Price Reduction – H1 Instances

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-price-reduction-h1-instances/

EC2’s H1 instances offer 2 to 16 terabytes of fast, dense storage for big data applications, optimized to deliver high throughput for sequential I/O. Enhanced Networking, 32 to 256 gigabytes of RAM, and Intel Xeon E5-2686 v4 processors running at a base frequency of 2.3 GHz round out the feature set.

I am happy to announce that we are reducing the On-Demand and Reserved Instance prices for H1 instances in the US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Ireland) Regions by 15%, effective immediately.

Jeff;

 

Facebook excludes a billion and a half users from the scope of the GDPR

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/04/19/facebook-13/

Under this headline, Reuters reports (exclusively) the following:

Facebook users outside the United States and Canada, whether they know it or not, currently have a service contract with the Facebook company headquartered in Ireland. Like Google, LinkedIn, and other companies, Facebook operates through both a Californian and an Irish entity – Facebook Inc. of Menlo Park, California, and Facebook Ireland – with the latter falling under Irish jurisdiction.

Facebook plans for the contract with Facebook Ireland to remain valid only for European users, meaning that 1.5 billion users in Africa, Asia, Australia, and Latin America will not fall within the scope of the European Union’s General Data Protection Regulation (GDPR), which takes effect on May 25, 2018. The world’s largest online social network is thereby narrowing the reach of the GDPR – the regulation allows European regulators to penalize companies for collecting or using personal data without users’ consent.

This avoids an enormous risk, Reuters writes, since the new regulation allows fines of up to 4% of global annual revenue for violations – in Facebook’s case, that means billions of dollars.

Meanwhile, Zuckerberg spoke yesterday at a conference in San Jose, California, and said that he is introducing new privacy and personal data settings in Europe that would ultimately extend to users around the world: “We not only want to comply with the law, but also go beyond our obligations and build new and better privacy practices for everyone on Facebook.”

AWS Certificate Manager Launches Private Certificate Authority

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/aws-certificate-manager-launches-private-certificate-authority/

Today we’re launching a new feature for AWS Certificate Manager (ACM), Private Certificate Authority (CA). This new service allows ACM to act as a private subordinate CA. Previously, if a customer wanted to use private certificates, they needed specialized infrastructure and security expertise that could be expensive to maintain and operate. ACM Private CA builds on ACM’s existing certificate capabilities to help you easily and securely manage the lifecycle of your private certificates with pay-as-you-go pricing. This enables developers to provision certificates in just a few simple API calls while administrators have a central CA management console and fine-grained access control through granular IAM policies. ACM Private CA keys are stored securely in AWS managed hardware security modules (HSMs) that adhere to FIPS 140-2 Level 3 security standards. ACM Private CA automatically maintains certificate revocation lists (CRLs) in Amazon Simple Storage Service (S3) and lets administrators generate audit reports of certificate creation with the API or console. This service is packed full of features so let’s jump in and provision a CA.

Provisioning a Private Certificate Authority (CA)

First, I’ll navigate to the ACM console in my region and select the new Private CAs section in the sidebar. From there I’ll click Get Started to start the CA wizard. For now, I only have the option to provision a subordinate CA so we’ll select that and use my super secure desktop as the root CA and click Next. This isn’t what I would do in a production setting but it will work for testing out our private CA.

Now, I’ll configure the CA with some common details. The most important thing here is the Common Name which I’ll set as secure.internal to represent my internal domain.

Now I need to choose my key algorithm. You should choose the best algorithm for your needs but know that ACM has a limitation today that it can only manage certificates that chain up to RSA CAs. For now, I’ll go with RSA 2048 bit and click Next.
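Although I’m clicking through the console here, the same configuration can be scripted. Here’s a hedged sketch with the acm-pca client (the subject matches the common name I used above; everything else is illustrative):

import boto3

acmpca = boto3.client("acm-pca")

# Provision a subordinate CA with the RSA 2048 key algorithm chosen above.
response = acmpca.create_certificate_authority(
    CertificateAuthorityType="SUBORDINATE",
    CertificateAuthorityConfiguration={
        "KeyAlgorithm": "RSA_2048",
        "SigningAlgorithm": "SHA256WITHRSA",
        "Subject": {"CommonName": "secure.internal"}
    }
)
print(response["CertificateAuthorityArn"])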

In this next screen, I’m able to configure my certificate revocation list (CRL). CRLs are essential for notifying clients in the case that a certificate has been compromised before certificate expiration. ACM will maintain the revocation list for me and I have the option of routing my S3 bucket to a custom domain. In this case I’ll create a new S3 bucket to store my CRL in and click Next.

Finally, I’ll review all the details to make sure I didn’t make any typos and click Confirm and create.

A few seconds later, I’m greeted with a fancy screen saying I successfully provisioned a certificate authority. Hooray! I’m not done yet though. I still need to activate my CA by creating a certificate signing request (CSR) and signing that with my root CA. I’ll click Get started to begin that process.

Now I’ll copy the CSR or download it to a server or desktop that has access to my root CA (or potentially another subordinate – so long as it chains to a trusted root for my clients).

Now I can use a tool like openssl to sign my cert and generate the certificate chain.


$ openssl ca -config openssl_root.cnf -extensions v3_intermediate_ca -days 3650 -notext -md sha256 -in csr/CSR.pem -out certs/subordinate_cert.pem
Using configuration from openssl_root.cnf
Enter pass phrase for /Users/randhunt/dev/amzn/ca/private/root_private_key.pem:
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
stateOrProvinceName   :ASN.1 12:'Washington'
localityName          :ASN.1 12:'Seattle'
organizationName      :ASN.1 12:'Amazon'
organizationalUnitName:ASN.1 12:'Engineering'
commonName            :ASN.1 12:'secure.internal'
Certificate is to be certified until Mar 31 06:05:30 2028 GMT (3650 days)
Sign the certificate? [y/n]:y


1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Data Base Updated

After that I’ll copy my subordinate_cert.pem and certificate chain back into the console and click Next.

Finally, I’ll review all the information and click Confirm and import. I should see a screen like the one below that shows my CA has been activated successfully.

Now that I have a private CA, we can provision private certificates by hopping back to the ACM console and creating a new certificate. After clicking Create a new certificate, I’ll select the Request a private certificate radio button and then click Request a certificate.

From there, it’s just like provisioning a normal certificate in ACM.
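As a sketch of the API equivalent (the domain name and CA ARN below are placeholders), requesting a private certificate is a single call once you have the CA’s ARN:

import boto3

acm = boto3.client("acm")

# Request a certificate signed by the private CA instead of a public one.
response = acm.request_certificate(
    DomainName="api.secure.internal",   # placeholder domain under my private namespace
    CertificateAuthorityArn="arn:aws:acm-pca:us-east-1:123456789012:certificate-authority/example"  # placeholder ARN
)
print(response["CertificateArn"])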

Now I have a private certificate that I can bind to my ELBs, CloudFront Distributions, API Gateways, and more. I can also export the certificate for use on embedded devices or outside of ACM managed environments.

Available Now
ACM Private CA is a service in and of itself and it is packed full of features that won’t fit into a blog post. I strongly encourage the interested readers to go through the developer guide and familiarize themselves with certificate-based security. ACM Private CA is available in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), EU (Frankfurt) and EU (Ireland). Private CAs cost $400 per month (prorated) for each private CA. You are not charged for certificates created and maintained in ACM but you are charged for certificates where you have access to the private key (exported or created outside of ACM). The pricing per certificate is tiered starting at $0.75 per certificate for the first 1000 certificates and going down to $0.001 per certificate after 10,000 certificates.

I’m excited to see administrators and developers take advantage of this new service. As always please let us know what you think of this service on Twitter or in the comments below.

Randall

AWS Secrets Manager: Store, Distribute, and Rotate Credentials Securely

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/aws-secrets-manager-store-distribute-and-rotate-credentials-securely/

Today we’re launching AWS Secrets Manager which makes it easy to store and retrieve your secrets via API or the AWS Command Line Interface (CLI) and rotate your credentials with built-in or custom AWS Lambda functions. Managing application secrets like database credentials, passwords, or API Keys is easy when you’re working locally with one machine and one application. As you grow and scale to many distributed microservices, it becomes a daunting task to securely store, distribute, rotate, and consume secrets. Previously, customers needed to provision and maintain additional infrastructure solely for secrets management which could incur costs and introduce unneeded complexity into systems.

AWS Secrets Manager

Imagine that I have an application that takes incoming tweets from Twitter and stores them in an Amazon Aurora database. Previously, I would have had to request a username and password from my database administrator and embed those credentials in environment variables or, in my race to production, even in the application itself. I would also need to have our social media manager create the Twitter API credentials and figure out how to store those. This is a fairly manual process, involving multiple people, that I have to restart every time I want to rotate these credentials. With Secrets Manager my database administrator can provide the credentials in Secrets Manager once and subsequently rely on a Secrets Manager provided Lambda function to automatically update and rotate those credentials. My social media manager can put the Twitter API keys in Secrets Manager which I can then access with a simple API call and I can even rotate these programmatically with a custom Lambda function calling out to the Twitter API. My secrets are encrypted with the KMS key of my choice, and each of these administrators can explicitly grant access to these secrets with granular IAM policies for individual roles or users.

Let’s take a look at how I would store a secret using the AWS Secrets Manager console. First, I’ll click Store a new secret to get to the new secrets wizard. For my RDS Aurora instance it’s straightforward to simply select the instance and provide the initial username and password to connect to the database.

Next, I’ll fill in a quick description and a name to access my secret by. You can use whatever naming scheme you want here.

Next, we’ll configure rotation to use the Secrets Manager-provided Lambda function to rotate our password every 10 days.

Finally, we’ll review all the details and check out our sample code for storing and retrieving our secret!

Finally I can review the secrets in the console.

Now, if I needed to access these secrets I’d simply call the API.

import json
import boto3

secrets = boto3.client("secretsmanager")
# get_secret_value returns SecretString as a JSON document; parse it into a dict
rds = json.loads(secrets.get_secret_value(SecretId="prod/TwitterApp/Database")['SecretString'])
print(rds)

Which would give me the following values:


{'engine': 'mysql',
 'host': 'twitterapp2.abcdefg.us-east-1.rds.amazonaws.com',
 'password': '-)Kw>THISISAFAKEPASSWORD:lg{&sad+Canr',
 'port': 3306,
 'username': 'ranman'}

More than passwords

AWS Secrets Manager works for more than just passwords. I can store OAuth credentials, binary data, and more. Let’s look at storing my Twitter OAuth application keys.
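Here’s a minimal sketch of storing those keys through the API (the secret name and field names are just examples I’ve made up for illustration):

import json
import boto3

secrets = boto3.client("secretsmanager")

# Store the Twitter OAuth credentials as a JSON document in a single secret.
secrets.create_secret(
    Name="prod/TwitterApp/OAuth",          # example secret name
    Description="Twitter API credentials for the tweet ingestion app",
    SecretString=json.dumps({
        "consumer_key": "example-key",      # placeholder values
        "consumer_secret": "example-secret"
    })
)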

Now, I can define the rotation for these third-party OAuth credentials with a custom AWS Lambda function that can call out to Twitter whenever we need to rotate our credentials.

Custom Rotation

One of the niftiest features of AWS Secrets Manager is custom AWS Lambda functions for credential rotation. This allows you to define completely custom workflows for credentials. Secrets Manager will call your lambda with a payload that includes a Step which specifies which step of the rotation you’re in, a SecretId which specifies which secret the rotation is for, and importantly a ClientRequestToken which is used to ensure idempotency in any changes to the underlying secret.

When you’re rotating secrets you go through a few different steps:

  1. createSecret
  2. setSecret
  3. testSecret
  4. finishSecret

The advantage of these steps is that you can add any kind of approval steps you want for each phase of the rotation. For more details on custom rotation check out the documentation.
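As a sketch (not a complete implementation), a rotation function’s handler dispatches on those steps roughly like this:

def lambda_handler(event, context):
    # Secrets Manager invokes the function once per rotation step.
    step = event["Step"]
    secret_id = event["SecretId"]
    token = event["ClientRequestToken"]   # keeps changes to the underlying secret idempotent

    if step == "createSecret":
        pass  # generate the new credential and stage it as a pending version
    elif step == "setSecret":
        pass  # apply the staged credential to the target service (e.g. the Twitter API)
    elif step == "testSecret":
        pass  # verify that the staged credential actually works
    elif step == "finishSecret":
        pass  # promote the staged version to the current version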

Available Now
AWS Secrets Manager is available today in US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), EU (Frankfurt), EU (Ireland), EU (London), and South America (São Paulo). Secrets are priced at $0.40 per month per secret and $0.05 per 10,000 API calls. I’m looking forward to seeing more users adopt rotating credentials to secure their applications!

Randall

Amazon Transcribe Now Generally Available

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/amazon-transcribe-now-generally-available/


At AWS re:Invent 2017 we launched Amazon Transcribe in private preview. Today we’re excited to make Amazon Transcribe generally available for all developers. Amazon Transcribe is an automatic speech recognition service (ASR) that makes it easy for developers to add speech to text capabilities to their applications. We’ve iterated on customer feedback in the preview to make a number of enhancements to Amazon Transcribe.

New Amazon Transcribe Features in GA

To start off we’ve made the SampleRate parameter optional which means you only need to know the file type of your media and the input language. We’ve added two new features – the ability to differentiate multiple speakers in the audio to provide more intelligible transcripts (“who spoke when”), and a custom vocabulary to improve the accuracy of speech recognition for product names, industry-specific terminology, or names of individuals. To refresh our memories on how Amazon Transcribe works, let’s look at a quick example. I’ll convert this audio in my S3 bucket.

import boto3

transcribe = boto3.client("transcribe")
transcribe.start_transcription_job(
    TranscriptionJobName="TranscribeDemo",
    LanguageCode="en-US",
    MediaFormat="mp3",
    Media={"MediaFileUri": "https://s3.amazonaws.com/randhunt-transcribe-demo-us-east-1/out.mp3"},
    # speaker identification ("who spoke when") is enabled via Settings, e.g.
    # Settings={"ShowSpeakerLabels": True, "MaxSpeakerLabels": 2}
)

This will output JSON similar to this (I’ve stripped out most of the response) with individual speakers identified:

{
  "jobName": "reinvent",
  "accountId": "1234",
  "results": {
    "transcripts": [
      {
        "transcript": "Hi, everybody, i'm randall ..."
      }
    ],
    "speaker_labels": {
      "speakers": 2,
      "segments": [
        {
          "start_time": "0.000000",
          "speaker_label": "spk_0",
          "end_time": "0.010",
          "items": []
        },
        {
          "start_time": "0.010000",
          "speaker_label": "spk_1",
          "end_time": "4.990",
          "items": [
            {
              "start_time": "1.000",
              "speaker_label": "spk_1",
              "end_time": "1.190"
            },
            {
              "start_time": "1.190",
              "speaker_label": "spk_1",
              "end_time": "1.700"
            }
          ]
        }
      ]
    },
    "items": [
      {
        "start_time": "1.000",
        "end_time": "1.190",
        "alternatives": [
          {
            "confidence": "0.9971",
            "content": "Hi"
          }
        ],
        "type": "pronunciation"
      },
      {
        "alternatives": [
          {
            "content": ","
          }
        ],
        "type": "punctuation"
      },
      {
        "start_time": "1.190",
        "end_time": "1.700",
        "alternatives": [
          {
            "confidence": "1.0000",
            "content": "everybody"
          }
        ],
        "type": "pronunciation"
      }
    ]
  },
  "status": "COMPLETED"
}

Custom Vocabulary

Now if I needed to have a more complex technical discussion with a colleague I could create a custom vocabulary. A custom vocabulary is specified as an array of strings passed to the CreateVocabulary API and you can include your custom vocabulary in a transcription job by passing in the name as part of the Settings in a StartTranscriptionJob API call. An individual vocabulary can be as large as 50KB and each phrase must be less than 256 characters. If I wanted to transcribe the recordings of my high school AP Biology class I could create a custom vocabulary in Python like this:

import boto3

transcribe = boto3.client("transcribe")
transcribe.create_vocabulary(
    LanguageCode="en-US",
    VocabularyName="APBiology",
    Phrases=[
        "endoplasmic-reticulum",
        "organelle",
        "cisternae",
        "eukaryotic",
        "ribosomes",
        "hepatocytes",
        "cell-membrane"
    ]
)

I can refer to this vocabulary later on by the name APBiology and update it programmatically based on any errors I may find in the transcriptions.
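Using the vocabulary in a job is then just a matter of naming it in the job’s Settings; a quick sketch (the job name and media URI are placeholders):

import boto3

transcribe = boto3.client("transcribe")
transcribe.start_transcription_job(
    TranscriptionJobName="APBiologyLecture01",   # placeholder job name
    LanguageCode="en-US",
    MediaFormat="mp3",
    Media={"MediaFileUri": "https://s3.amazonaws.com/my-bucket/lecture01.mp3"},  # placeholder URI
    Settings={"VocabularyName": "APBiology"}
)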

Available Now

Amazon Transcribe is available now in US East (N. Virginia), US West (Oregon), US East (Ohio) and EU (Ireland). Transcribe’s free tier gives you 60 minutes of transcription for free per month for the first 12 months with a pay-as-you-go model of $0.0004 per second of transcribed audio after that, with a minimum charge of 15 seconds.

When combined with other tools and services I think Transcribe opens up entirely new opportunities for application development. I’m excited to see what technologies developers build with this new service.

Randall

Amazon Translate Now Generally Available

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/amazon-translate-now-generally-available/


Today we’re excited to make Amazon Translate generally available. Late last year at AWS re:Invent my colleague Tara Walker wrote about a preview of a new AI service, Amazon Translate. Starting today you can access Amazon Translate in US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Ireland) with a 2 million character monthly free tier for the first 12 months and $15 per million characters after that. There are a number of new features available in GA: automatic source language inference, Amazon CloudWatch support, and up to 5000 characters in a single TranslateText call. Let’s take a quick look at the service in general availability.

Amazon Translate New Features

Since Tara’s post already covered the basics of the service I want to point out some of the new features of the service released today. Let’s start with a code sample:

import boto3
translate = boto3.client("translate")
resp = translate.translate_text(
    Text="🇫🇷Je suis très excité pour Amazon Traduire🇫🇷",
    SourceLanguageCode="auto",
    TargetLanguageCode="en"
)
print(resp['TranslatedText'])

Since I have specified my source language as auto, Amazon Translate will call Amazon Comprehend on my behalf to determine the source language used in this text. If you couldn’t guess it, we’re writing some French and the output is 🇫🇷I'm very excited about Amazon Translate 🇫🇷. You’ll notice that our emojis are preserved in the output text which is definitely a bonus feature for Millennials like me.

The Translate console is a great way to get started and see some sample responses.

Translate is extremely easy to use in AWS Lambda functions which allows you to use it with almost any AWS service. There are a number of examples in the Translate documentation showing how to do everything from translating a web page to translating an Amazon DynamoDB table. Paired with other ML services like Amazon Comprehend and Amazon Transcribe you can build everything from closed captioning to real-time chat translation to a robust text analysis pipeline for call center transcriptions and other textual data.

New Languages Coming Soon

Today, Amazon Translate allows you to translate text to or from English, to any of the following languages: Arabic, Chinese (Simplified), French, German, Portuguese, and Spanish. We’ve announced support for additional languages coming soon: Japanese (go JAWSUG), Russian, Italian, Chinese (Traditional), Turkish, and Czech.

Amazon Translate can also be used to increase professional translator efficiency, and reduce costs and turnaround times for their clients. We’ve already partnered with a number of Language Service Providers (LSPs) to offer their customers end-to-end translation services at a lower cost by allowing Amazon Translate to produce a high-quality draft translation that’s then edited by the LSP for a guaranteed human quality result.

I’m excited to see what applications our customers are able to build with high quality machine translation just one API call away.

Randall

Amazon ECS Service Discovery

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/amazon-ecs-service-discovery/

Amazon ECS now includes integrated service discovery. This makes it possible for an ECS service to automatically register itself with a predictable and friendly DNS name in Amazon Route 53. As your services scale up or down in response to load or container health, the Route 53 hosted zone is kept up to date, allowing other services to look up where they need to make connections based on the state of each service. You can see a demo of service discovery in an imaginary social networking app over at: https://servicediscovery.ranman.com/.

Service Discovery


Part of the transition to microservices and modern architectures involves having dynamic, autoscaling, and robust services that can respond quickly to failures and changing loads. Your services probably have complex dependency graphs of services they rely on and services they provide. A modern architectural best practice is to loosely couple these services by allowing them to specify their own dependencies, but this can be complicated in dynamic environments as your individual services are forced to find their own connection points.

Traditional approaches to service discovery like consul, etcd, or zookeeper all solve this problem well, but they require provisioning and maintaining additional infrastructure or installation of agents in your containers or on your instances. Previously, to ensure that services were able to discover and connect with each other, you had to configure and run your own service discovery system or connect every service to a load balancer. Now, you can enable service discovery for your containerized services in the ECS console, AWS CLI, or using the ECS API.
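For the API route, the registry is referenced when the ECS service is created. Here’s a hedged boto3 sketch (the cluster, task definition, subnet, and registry ARN are placeholders for values you’d already have):

import boto3

ecs = boto3.client("ecs")

# Create an ECS service that registers its tasks with a service discovery registry.
ecs.create_service(
    cluster="demo-cluster",                    # placeholder cluster
    serviceName="flask-backend",
    taskDefinition="flask-backend:1",          # placeholder task definition
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],   # placeholder subnet
            "assignPublicIp": "ENABLED"
        }
    },
    serviceRegistries=[
        {"registryArn": "arn:aws:servicediscovery:us-east-1:123456789012:service/srv-example"}  # placeholder
    ]
)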

Introducing Amazon Route 53 Service Registry and Auto Naming APIs

Amazon ECS Service Discovery works by communicating with the Amazon Route 53 Service Registry and Auto Naming APIs. Since we haven’t talked about it before on this blog, I want to briefly outline how these Route 53 APIs work. First, some vocabulary:

  • Namespaces – A namespace specifies a domain name you want to route traffic to (e.g. internal, local, corp). You can think of it as a logical boundary between which services should be able to discover each other. You can create a namespace with a call to the aws servicediscovery create-private-dns-namespace command or in the ECS console. Namespaces are roughly equivalent to hosted zones in Route 53. A namespace contains services, our next vocabulary word.
  • Service – A service is a specific application or set of applications in your namespace like “auth”, “timeline”, or “worker”. A service contains service instances.
  • Service Instance – A service instance contains information about how Route 53 should respond to DNS queries for a resource.

Route 53 provides APIs to create: namespaces, A records per task IP, and SRV records per task IP + port.
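As a rough sketch of those calls with the SDK (the VPC ID and namespace ID are placeholders, and create_private_dns_namespace is asynchronous, so in practice you’d poll the returned operation before creating the service):

import boto3

sd = boto3.client("servicediscovery")

# Create a private DNS namespace for the "corp" domain used in the example below.
sd.create_private_dns_namespace(
    Name="corp",
    Vpc="vpc-0123456789abcdef0"        # placeholder VPC ID
)

# Register a "worker" service whose instances get A records with a 10 second TTL.
sd.create_service(
    Name="worker",
    DnsConfig={
        "NamespaceId": "ns-example",   # placeholder: the ID of the namespace created above
        "DnsRecords": [{"Type": "A", "TTL": 10}]
    }
)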

When we ask Route 53 for something like: worker.corp we should get back a set of possible IPs that could fulfill that request. If the application we’re connecting to exposes dynamic ports then the calling application can easily query the SRV record to get more information.

ECS service discovery is built on top of the Route 53 APIs and manages all of the underlying API calls for you. Now that we understand how the service registry works, let’s take a look at the ECS side to see service discovery in action.

Amazon ECS Service Discovery

Let’s launch an application with service discovery! First, I’ll create two task definitions: “flask-backend” and “flask-worker”. Both are simple AWS Fargate tasks with a single container serving HTTP requests. I’ll have flask-backend ask worker.corp to do some work and I’ll return the response as well as the address Route 53 returned for worker. Something like the code below:

import os
import socket
import requests
from flask import Flask

app = Flask(__name__)
namespace = os.getenv("namespace")
worker_host = "worker." + namespace

@app.route("/")
def backend():
    r = requests.get("http://" + worker_host)
    worker = socket.gethostbyname(worker_host)
    return "Worker Message: {}\nFrom: {}".format(r.content, worker)

 

Now, with my containers and task definitions in place, I’ll create a service in the console.

As I move to step two in the service wizard I’ll fill out the service discovery section and have ECS create a new namespace for me.

I’ll also tell ECS to monitor the health of the tasks in my service and add or remove them from Route 53 as needed. Then I’ll set a TTL of 10 seconds on the A records we’ll use.

I’ll repeat those same steps for my “worker” service and after a minute or so most of my tasks should be up and running.

Over in the Route 53 console I can see all the records for my tasks!

We can use the Route 53 service discovery APIs to list all of our available services and tasks and programmatically reach out to each one. We could easily extend to any number of services past just backend and worker. I’ve created a simple demo of an imaginary social network with services like “auth”, “feed”, “timeline”, “worker”, “user” and more here: https://servicediscovery.ranman.com/. You can see the code used to run that page on github.

Available Now
Amazon ECS service discovery is available now in US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Ireland). AWS Fargate is currently only available in US East (N. Virginia). When you use ECS service discovery, you pay for the Route 53 resources that you consume, including each namespace that you create, and for the lookup queries your services make. Container level health checks are provided at no cost. For more information on pricing check out the documentation.

Please let us know what you’ll be building or refactoring with service discovery either in the comments or on Twitter!

Randall

 

P.S. Every blog post I write is made with a tremendous amount of help from numerous AWS colleagues. To everyone that helped build service discovery across all of our teams – thank you :)!

Message Filtering Operators for Numeric Matching, Prefix Matching, and Blacklisting in Amazon SNS

Post Syndicated from Christie Gifrin original https://aws.amazon.com/blogs/compute/message-filtering-operators-for-numeric-matching-prefix-matching-and-blacklisting-in-amazon-sns/

This blog was contributed by Otavio Ferreira, Software Development Manager for Amazon SNS

Message filtering simplifies the overall pub/sub messaging architecture by offloading message filtering logic from subscribers, as well as message routing logic from publishers. The initial launch of message filtering provided a basic operator that was based on exact string comparison. For more information, see Simplify Your Pub/Sub Messaging with Amazon SNS Message Filtering.

Today, AWS is announcing an additional set of filtering operators that bring even more power and flexibility to your pub/sub messaging use cases.

Message filtering operators

Amazon SNS now supports both numeric and string matching. Specifically, string matching operators allow for exact, prefix, and “anything-but” comparisons, while numeric matching operators allow for exact and range comparisons, as outlined below. Numeric matching operators work for values between -10e9 and +10e9 inclusive, with five digits of accuracy right of the decimal point.

  • Exact matching on string values (Whitelisting): Subscription filter policy   {"sport": ["rugby"]} matches message attribute {"sport": "rugby"} only.
  • Anything-but matching on string values (Blacklisting): Subscription filter policy {"sport": [{"anything-but": "rugby"}]} matches message attributes such as {"sport": "baseball"} and {"sport": "basketball"} and {"sport": "football"} but not {"sport": "rugby"}
  • Prefix matching on string values: Subscription filter policy {"sport": [{"prefix": "bas"}]} matches message attributes such as {"sport": "baseball"} and {"sport": "basketball"}
  • Exact matching on numeric values: Subscription filter policy {"balance": [{"numeric": ["=", 301.5]}]} matches message attributes {"balance": 301.500} and {"balance": 3.015e2}
  • Range matching on numeric values: Subscription filter policy {"balance": [{"numeric": ["<", 0]}]} matches negative numbers only, and {"balance": [{"numeric": [">", 0, "<=", 150]}]} matches any positive number up to 150.

As usual, you may apply the “AND” logic by appending multiple keys in the subscription filter policy, and the “OR” logic by appending multiple values for the same key, as follows:

  • AND logic: Subscription filter policy {"sport": ["rugby"], "language": ["English"]} matches only messages that carry both attributes {"sport": "rugby"} and {"language": "English"}
  • OR logic: Subscription filter policy {"sport": ["rugby", "football"]} matches messages that carry either the attribute {"sport": "rugby"} or {"sport": "football"}
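To make this concrete, here’s a short sketch of attaching such a policy to an existing subscription with the SDK (the subscription ARN is a placeholder):

import json
import boto3

sns = boto3.client("sns")

# Apply a filter policy that combines AND logic (both keys must match)
# with OR logic (either value of "sport" matches).
sns.set_subscription_attributes(
    SubscriptionArn="arn:aws:sns:us-east-1:123456789012:Orders:example-subscription-id",  # placeholder
    AttributeName="FilterPolicy",
    AttributeValue=json.dumps({
        "sport": ["rugby", "football"],
        "language": ["English"]
    })
)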

Message filtering operators in action

Here’s how this new set of filtering operators works. The following example is based on a pharmaceutical company that develops, produces, and markets a variety of prescription drugs, with research labs located in Asia Pacific and Europe. The company built an internal procurement system to manage the purchasing of lab supplies (for example, chemicals and utensils), office supplies (for example, paper, folders, and markers) and tech supplies (for example, laptops, monitors, and printers) from global suppliers.

This distributed system is composed of the four following subsystems:

  • A requisition system that presents the catalog of products from suppliers, and takes orders from buyers
  • An approval system for orders targeted to Asia Pacific labs
  • Another approval system for orders targeted to European labs
  • A fulfillment system that integrates with shipping partners

As shown in the following diagram, the company leverages AWS messaging services to integrate these distributed systems.

  • Firstly, an SNS topic named “Orders” was created to take all orders placed by buyers on the requisition system.
  • Secondly, two Amazon SQS queues, named “Lab-Orders-AP” and “Lab-Orders-EU” (for Asia Pacific and Europe respectively), were created to backlog orders that are up for review on the approval systems.
  • Lastly, an SQS queue named “Common-Orders” was created to backlog orders that aren’t related to lab supplies, which can already be picked up by shipping partners on the fulfillment system.

The company also uses AWS Lambda functions to automatically process lab supply orders that don’t require approval or which are invalid.

In this example, because different types of orders have been published to the SNS topic, the subscribing endpoints have had to set advanced filter policies on their SNS subscriptions, to have SNS automatically filter out orders they can’t deal with.

As depicted in the above diagram, the following five filter policies have been created:

  • The SNS subscription that points to the SQS queue “Lab-Orders-AP” sets a filter policy that matches lab supply orders, with a total value greater than $1,000, and that target Asia Pacific labs only. These more expensive transactions require an approver to review orders placed by buyers.
  • The SNS subscription that points to the SQS queue “Lab-Orders-EU” sets a filter policy that matches lab supply orders, also with a total value greater than $1,000, but that target European labs instead.
  • The SNS subscription that points to the Lambda function “Lab-Preapproved” sets a filter policy that only matches lab supply orders that aren’t as expensive, up to $1,000, regardless of their target lab location. These orders simply don’t require approval and can be automatically processed.
  • The SNS subscription that points to the Lambda function “Lab-Cancelled” sets a filter policy that only matches lab supply orders with total value of $0 (zero), regardless of their target lab location. These orders carry no actual items, obviously need neither approval nor fulfillment, and as such can be automatically canceled.
  • The SNS subscription that points to the SQS queue “Common-Orders” sets a filter policy that blacklists lab supply orders. Hence, this policy matches only office and tech supply orders, which have a more streamlined fulfillment process, and require no approval, regardless of price or target location.

After the company finished building this advanced pub/sub architecture, they were then able to launch their internal procurement system and allow buyers to begin placing orders. The diagram above shows six example orders published to the SNS topic. Each order contains message attributes that describe the order, and cause them to be filtered in a different manner, as follows:

  • Message #1 is a lab supply order, with a total value of $15,700 and targeting a research lab in Singapore. Because the value is greater than $1,000, and the location “Asia-Pacific-Southeast” matches the prefix “Asia-Pacific-“, this message matches the first SNS subscription and is delivered to SQS queue “Lab-Orders-AP”.
  • Message #2 is a lab supply order, with a total value of $1,833 and targeting a research lab in Ireland. Because the value is greater than $1,000, and the location “Europe-West” matches the prefix “Europe-“, this message matches the second SNS subscription and is delivered to SQS queue “Lab-Orders-EU”.
  • Message #3 is a lab supply order, with a total value of $415. Because the value is greater than $0 and less than $1,000, this message matches the third SNS subscription and is delivered to Lambda function “Lab-Preapproved”.
  • Message #4 is a lab supply order, but with a total value of $0. Therefore, it only matches the fourth SNS subscription, and is delivered to Lambda function “Lab-Cancelled”.
  • Messages #5 and #6 aren’t actually lab supply orders; one is an office supply order, and the other is a tech supply order. Therefore, they only match the fifth SNS subscription, and are both delivered to SQS queue “Common-Orders”.

Although each message only matched a single subscription, each was tested against the filter policy of every subscription in the topic. Hence, depending on which attributes are set on the incoming message, the message might actually match multiple subscriptions, and multiple deliveries will take place. Also, it is important to bear in mind that subscriptions with no filter policies catch every single message published to the topic, as a blank filter policy equates to a catch-all behavior.

Summary

Amazon SNS allows for both string and numeric filtering operators. As explained in this post, string operators allow for exact, prefix, and “anything-but” comparisons, while numeric operators allow for exact and range comparisons. These advanced filtering operators bring even more power and flexibility to your pub/sub messaging functionality and also allow you to simplify your architecture further by removing even more logic from your subscribers.

Message filtering can be implemented easily with existing AWS SDKs by applying message and subscription attributes across all SNS supported protocols (Amazon SQS, AWS Lambda, HTTP, SMS, email, and mobile push). SNS filtering operators for numeric matching, prefix matching, and blacklisting are available now in all AWS Regions, for no extra charge.

To experiment with these new filtering operators yourself, and continue learning, try the 10-minute Tutorial Filter Messages Published to Topics. For more information, see Filtering Messages with Amazon SNS in the SNS documentation.

UK Govt. Met With Copyright Holders Dozens of Times in Just Three Months

Post Syndicated from Andy original https://torrentfreak.com/uk-govt-met-with-copyright-holders-dozens-of-times-in-just-three-months-180310/

While doing business with clients and suppliers is the usual day-to-day routine for most businesses, companies in the entertainment sector seem keener than most to spend time with those in power.

Whether there’s pressure to be applied in respect of upcoming changes in policy or long-term plans for modifying legislation, at least a few times a year news breaks of rightsholders having private meetings with officials. Most of the time, however, the head-to-heads fly under the radar.

This week, however, the UK government published a response to a Freedom of Information Request which asked for details of meetings between the government and copyright owner organizations, enforcement organizations, and collection societies (think BPI, MPA, FACT, Publishers Association, PRS, etc) including times, dates and topics discussed.

The request asked for details of meetings held between May 2016 and April 2017 but the government declined to provide all of this information since the effort required to extract the information “would exceed the cost limit.”

Given the amount of data published, this isn’t a surprise. Even though the government chose to limit the response to events held between January 16, 2017 and April 17, 2017, the meetings between the government and the above groups number in their dozens.

January 2017 got off to a pretty slow start but week three and beyond saw a flurry of meetings with groups and companies such as ITV, BBC, PRS for Music, Copyright Licensing Agency and several other organizations to discuss the EU’s Digital Single Market proposals.

On January 18, 2017 Time Warner had a meeting to discuss content protection and analytics, followed a day later by the Premier League who were booked in to discuss “illicit streaming devices” (a topic mirrored in March during a meeting with the Audiovisual Anti-Piracy Alliance).

Just a few days later the Police Intellectual Property Crime Unit held a “Partnership Working Group Meeting involving industry” and two days after that the police, Trading Standards, and the EU Police Agency convened to discuss enforcement activity.

January 26, 2017 saw an IP Outreach Workshop involving members of the IP Crime Group. This was potentially a big meeting. The IPCG consists of several regional police forces, PIPCU, National Crime Agency, Crown Prosecution Service, Department of Culture, Media and Sport, Trading Standards, HMRC, IFPI, BPI, FACT, Sky TV, PRS, FAST and the Publishers Association, to name just a few.

As the first month of the year was drawing to a close, Amazon met with the government to discuss “current procedures for removing copyright, design and trademark infringing material from their platform.” A similar meeting was held with eBay on February 1 and on February 20, Facebook had its turn on the same topic.

All three companies had come in for criticism from copyright holders for not doing enough to stem the tide of infringing content available on their platforms, particularly so-called Kodi boxes that provide access to movies, shows, and live TV.

However, in the months that followed they each responded positively, with eBay, Amazon and Facebook announcing restrictions on devices sold. While all three platforms still have a problem with infringing device sales, the situation appears to have improved since last year.

On the final day of January 2017, the MPAA attended a meeting to discuss the looming Digital Economy Bill and digital TV piracy. A couple of days later they were back again for a “business awareness seminar” with other big shots including the Alliance for IP, the Anti-Counterfeiting Group, Trading Standards and the Premier League.

However, given the dozens that took place, perhaps one of the more interesting meetings in terms of the mix of those in attendance took place February 7.

Titled “Organized Crime Task Force Meeting – Belfast” it was attended by the Police Service of Northern Ireland, the National Crime Agency, Trading Standards, HM Revenue and Customs, the Border Force, and (spot the odd one out) the Federation Against Copyright Theft.

This seems to suggest that FACT (a private company) is effectively embedded at the highest level of law enforcement, something that has made people very uncomfortable in the past.

Later in February, there was a roundtable meeting with the Alliance for IP, MPAA, Publishers’ Association, BPI, Premier League and Federation Against Copyright Theft (again) to discuss Brexit, the Digital Single Market, IP enforcement and industrial strategy. A similar meeting was held in March which was attended by UK Music, BPI, PRS, Featured Artists Coalition, and many more.

The full list of meetings, which number in the dozens for just a three-month period, can be found here (pdf). Whether the volume is representative of other three-month periods isn’t clear, but it seems reasonable to conclude that copyright organizations have the ears of government officials in the UK on an almost continual basis.

Source: TF.

Auto Scaling is now available for Amazon SageMaker

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/auto-scaling-is-now-available-for-amazon-sagemaker/

Kumar Venkateswar, Product Manager on the AWS ML Platforms Team, shares details on the announcement of Auto Scaling with Amazon SageMaker.


With Amazon SageMaker, thousands of customers have been able to easily build, train and deploy their machine learning (ML) models. Today, we’re making it even easier to manage production ML models, with Auto Scaling for Amazon SageMaker. Instead of having to manually manage the number of instances to match the scale that you need for your inferences, you can now have SageMaker automatically scale the number of instances based on an AWS Auto Scaling Policy.

SageMaker has made managing the ML process easier for many customers. We’ve seen customers take advantage of managed Jupyter notebooks and managed distributed training. We’ve seen customers deploying their models to SageMaker hosting for inferences, as they integrate machine learning with their applications. SageMaker makes this easy – you don’t have to think about patching the operating system (OS) or frameworks on your inference hosts, and you don’t have to configure inference hosts across Availability Zones. You just deploy your models to SageMaker, and it handles the rest.

Until now, you have needed to specify the number and type of instances per endpoint (or production variant) to provide the scale that you need for your inferences. If your inference volume changes, you can change the number and/or type of instances that back each endpoint to accommodate that change, without incurring any downtime. In addition to making it easy to change provisioning, customers have asked us how we can make managing capacity for SageMaker even easier.

With Auto Scaling for Amazon SageMaker, available in the SageMaker console, the AWS Auto Scaling API, and the AWS SDK, this becomes much easier. Now, instead of having to closely monitor inference volume and change the endpoint configuration in response, customers can configure a scaling policy to be used by AWS Auto Scaling. Auto Scaling adjusts the number of instances up or down in response to actual workloads, determined by using Amazon CloudWatch metrics and target values defined in the policy. In this way, customers can automatically adjust their inference capacity to maintain predictable performance at a low cost. You simply specify the target inference throughput per instance and provide upper and lower bounds for the number of instances for each production variant. SageMaker will then monitor throughput per instance using Amazon CloudWatch alarms, and adjust provisioned capacity up or down as needed.
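As an illustration, here is a minimal sketch of how such a policy could be attached with the AWS SDK for Python (boto3); the endpoint name, variant name, target value, and cooldown settings below are placeholders for this example rather than values from the post:

    import boto3

    # Application Auto Scaling manages scaling for SageMaker endpoint variants.
    autoscaling = boto3.client("application-autoscaling")

    # Placeholder endpoint and variant names -- substitute your own.
    resource_id = "endpoint/my-endpoint/variant/AllTraffic"

    # Register the production variant as a scalable target, with lower and
    # upper bounds on the instance count.
    autoscaling.register_scalable_target(
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:variant:DesiredInstanceCount",
        MinCapacity=1,
        MaxCapacity=4,
    )

    # Attach a target-tracking policy: Auto Scaling adds or removes instances
    # to keep invocations per instance near the target value.
    autoscaling.put_scaling_policy(
        PolicyName="keep-invocations-per-instance-near-target",
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:variant:DesiredInstanceCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 1000.0,  # throughput per instance, from load testing
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
            },
            "ScaleInCooldown": 300,   # optional cooldowns to damp oscillation
            "ScaleOutCooldown": 60,
        },
    )

The console walkthrough below configures the same settings without any code.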

After you configure the endpoint with Auto Scaling, SageMaker will continue to monitor your deployed models to automatically adjust the instance count. SageMaker will keep throughput within desired levels, in response to changes in application traffic. This makes it easier to manage models in production, and it can help reduce the cost of deployed models, as you no longer have to provision sufficient capacity in order to manage your peak load. Instead, you configure the limits to accommodate your minimum expected traffic and the maximum peak, and Amazon SageMaker will work within those limits to minimize cost.

How do you get started? Open the SageMaker console. For existing endpoints, you first access the endpoint to modify the settings.


Then, scroll to the Endpoint runtime settings section, select the variant, and choose Configure auto scaling.


First, configure the minimum and maximum number of instances.

Next, choose the throughput per instance at which you want to add an additional instance, given previous load testing.

You can optionally set cooldown periods for scaling in or out, to avoid oscillation during periods of wide fluctuation in workload. If you don’t, SageMaker assumes default values.

And that’s it! You now have an endpoint that will automatically scale with increasing inferences.

You pay for the capacity used at regular SageMaker pay-as-you-go pricing, so you no longer have to pay for unused capacity during relative idle periods!

Auto Scaling in Amazon SageMaker is available today in the US East (N. Virginia), US East (Ohio), EU (Ireland), and US West (Oregon) AWS Regions. To learn more, see the Amazon SageMaker Auto Scaling documentation.


Kumar Venkateswar is a Product Manager in the AWS ML Platforms team, which includes Amazon SageMaker, Amazon Machine Learning, and the AWS Deep Learning AMIs. When not working, Kumar plays the violin and Magic: The Gathering.

 

Now Available – AWS Serverless Application Repository

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-aws-serverless-application-repository/

Last year I suggested that you Get Ready for the AWS Serverless Application Repository and gave you a sneak peek. The Repository is designed to make it as easy as possible for you to discover, configure, and deploy serverless applications and components on AWS. It is also an ideal venue for AWS partners, enterprise customers, and independent developers to share their serverless creations.

Now Available
After a well-received public preview, the AWS Serverless Application Repository is now generally available and you can start using it today!

As a consumer, you will be able to tap into a thriving ecosystem of serverless applications and components that will be a perfect complement to your machine learning, image processing, IoT, and general-purpose work. You can configure and consume them as-is, or you can take them apart, add features, and submit pull requests to the author.

As a publisher, you can publish your contribution in the Serverless Application Repository with ease. You simply enter a name and a description, choose some labels to increase discoverability, select an appropriate open source license from a menu, and supply a README to help users get started. Then you enter a link to your existing source code repo, choose a SAM template, and designate a semantic version.

Let’s take a look at both operations…

Consuming a Serverless Application
The Serverless Application Repository is accessible from the Lambda Console. I can page through the existing applications or I can initiate a search:

A search for “todo” returns some interesting results:

I simply click on an application to learn more:

I can configure the application and deploy it right away if I am already familiar with the application:

I can expand each of the sections to learn more. The Permissions section tells me which IAM policies will be used:

And the Template section displays the SAM template that will be used to deploy the application:

I can inspect the template to learn more about the AWS resources that will be created when the template is deployed. I can also use the templates as a learning resource in preparation for creating and publishing my own application.

The License section displays the application’s license:

To deploy todo, I name the application and click Deploy:

Deployment starts immediately and is done within a minute (application deployment time will vary, depending on the number and type of resources to be created):

I can see all of my deployed applications in the Lambda Console:

There’s currently no way for a SAM template to indicate that an API Gateway function returns binary media types, so I set this up by hand and then re-deploy the API:

Following the directions in the Readme, I open the API Gateway Console and find the URL for the app in the API Gateway Dashboard:

I visit the URL and enter some items into my list:
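If you prefer to script deployments rather than click through the console, the same consume-and-deploy flow can be driven through the API. Here is a rough sketch using boto3’s serverlessrepo and cloudformation clients; the application ARN and stack name are placeholders, and some applications will also need ParameterOverrides or capability acknowledgements:

    import boto3

    serverlessrepo = boto3.client("serverlessrepo")
    cloudformation = boto3.client("cloudformation")

    # Placeholder application ARN -- discover real ones via the console search
    # or serverlessrepo.list_applications().
    app_id = "arn:aws:serverlessrepo:us-east-1:123456789012:applications/todo-app"

    # Ask the repository to prepare a CloudFormation change set for the app.
    change_set = serverlessrepo.create_cloud_formation_change_set(
        ApplicationId=app_id,
        StackName="my-todo-app",
    )

    # Wait for the change set to finish creating, then execute it to deploy.
    waiter = cloudformation.get_waiter("change_set_create_complete")
    waiter.wait(ChangeSetName=change_set["ChangeSetId"])
    cloudformation.execute_change_set(ChangeSetName=change_set["ChangeSetId"])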

Publishing a Serverless Application
Publishing applications is a breeze! I visit the Serverless App Repository page and click on Publish application to get started:

Then I assign a name to my application, enter my own name, and so forth:

I can choose from a long list of open-source friendly SPDX licenses:

I can create an initial version of my application at this point, or I can do it later. Either way, I simply provide a version number, a URL to a public repository containing my code, and a SAM template:
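The same publish flow can also be automated, for example from a CI pipeline. Below is a minimal sketch using boto3’s serverlessrepo client; all of the names, URLs, and file paths are illustrative placeholders rather than the exact settings used in this walkthrough:

    import boto3

    serverlessrepo = boto3.client("serverlessrepo")

    # Read the SAM template to publish alongside the application metadata.
    with open("template.yaml") as f:
        template_body = f.read()

    response = serverlessrepo.create_application(
        Name="my-todo-app",
        Description="A simple serverless to-do list",
        Author="Jane Developer",
        SpdxLicenseId="MIT",
        ReadmeBody="# my-todo-app\nHow to get started...",
        Labels=["todo", "demo"],
        SourceCodeUrl="https://github.com/example/my-todo-app",
        SemanticVersion="1.0.0",
        TemplateBody=template_body,
    )

    print("Published application:", response["ApplicationId"])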

Available Now
The AWS Serverless Application Repository is available now and you can start using it today, paying only for the AWS resources consumed by the serverless applications that you deploy.

You can deploy applications in the US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central), EU (Frankfurt), EU (Ireland), EU (London), and South America (São Paulo) Regions. You can publish from the US East (N. Virginia) or US East (Ohio) Regions for global availability.

Jeff;

 

Mission Space Lab flight status announced!

Post Syndicated from Erin Brindley original https://www.raspberrypi.org/blog/mission-space-lab-flight-status-announced/

In September of last year, we launched our 2017/2018 Astro Pi challenge with our partners at the European Space Agency (ESA). Students from ESA member and associate countries had the chance to design science experiments and write code to be run on one of our two Raspberry Pis on the International Space Station (ISS).

Astro Pi Mission Space Lab logo

Submissions for the Mission Space Lab challenge have just closed, and the results are in! Students had the opportunity to design an experiment for one of the following two themes:

  • Life in space
    Making use of Astro Pi Vis (Ed) in the European Columbus module to learn about the conditions inside the ISS.
  • Life on Earth
    Making use of Astro Pi IR (Izzy), which will be aimed towards the Earth through a window to learn about Earth from space.

ESA astronaut Alexander Gerst, speaking from the replica of the Columbus module at the European Astronaut Center in Cologne, has a message for all Mission Space Lab participants:

ESA astronaut Alexander Gerst congratulates Astro Pi 2017-18 winners


Flight status

We had a total of 212 Mission Space Lab entries from 22 countries. Of these, 114 fantastic projects have been given flight status, and the teams’ project code will run in space!

But they’re not winners yet. In April, the code will be sent to the ISS, and then the teams will receive back their experimental data. Next, to get deeper insight into the process of scientific endeavour, they will need to produce a final report analysing their findings. Winners will be chosen based on the merit of their final report, and the winning teams will get exclusive prizes. Check the list below to see if your team got flight status.

Belgium

Flight status achieved:

  • Team De Vesten, Campus De Vesten, Antwerpen
  • Ursa Major, CoderDojo Belgium, West-Vlaanderen
  • Special operations STEM, Sint-Claracollege, Antwerpen

Canada

Flight status achieved:

  • Let It Grow, Branksome Hall, Toronto
  • The Dark Side of Light, Branksome Hall, Toronto
  • Genie On The ISS, Branksome Hall, Toronto
  • Byte by PIthons, Youth Tech Education Society & Kid Code Jeunesse, Edmonton
  • The Broadviewnauts, Broadview, Ottawa

Czech Republic

Flight status achieved:

  • BLEK, Střední Odborná Škola Blatná, Strakonice

Denmark

Flight status achieved:

  • 2y Infotek, Nærum Gymnasium, Nærum
  • Equation Quotation, Allerød Gymnasium, Lillerød
  • Team Weather Watchers, Allerød Gymnasium, Allerød
  • Space Gardners, Nærum Gymnasium, Nærum

Finland

Flight status achieved:

  • Team Aurora, Hyvinkään yhteiskoulun lukio, Hyvinkää

France

Flight status achieved:

  • INC2, Lycée Raoul Follereau, Bourgogne
  • Space Project SP4, Lycée Saint-Paul IV, Reunion Island
  • Dresseurs2Python, clg Albert CAMUS, essonne
  • Lazos, Lycée Aux Lazaristes, Rhone
  • The space nerds, Lycée Saint André Colmar, Alsace
  • Les Spationautes Valériquais, lycée de la Côte d’Albâtre, Normandie
  • AstroMega, Institut de Genech, north
  • Al’Crew, Lycée Algoud-Laffemas, Auvergne-Rhône-Alpes
  • AstroPython, clg Albert CAMUS, essonne
  • Aruden Corp, Lycée Pablo Neruda, Normandie
  • HeroSpace, clg Albert CAMUS, essonne
  • GalaXess [R]evolution, Lycée Saint Cricq, Nouvelle-Aquitaine
  • AstroBerry, clg Albert CAMUS, essonne
  • Ambitious Girls, Lycée Adam de Craponne, PACA

Germany

Flight status achieved:

  • Uschis, St. Ursula Gymnasium Freiburg im Breisgau, Breisgau
  • Dosi-Pi, Max-Born-Gymnasium Germering, Bavaria

Greece

Flight status achieved:

  • Deep Space Pi, 1o Epal Grevenon, Grevena
  • Flox Team, 1st Lyceum of Kifissia, Attiki
  • Kalamaria Space Team, Second Lyceum of Kalamaria, Central Macedonia
  • The Earth Watchers, STEM Robotics Academy, Thessaly
  • Celestial_Distance, Gymnasium of Kanithos, Sterea Ellada – Evia
  • Pi Stars, Primary School of Rododaphne, Achaias
  • Flarions, 5th Primary School of Salamina, Attica

Ireland

Flight status achieved:

  • Plant Parade, Templeogue College, Leinster
  • For Peats Sake, Templeogue College, Leinster
  • CoderDojo Clonakilty, Co. Cork

Italy

Flight status achieved:

  • Trentini DOP, CoderDojo Trento, TN
  • Tarantino Space Lab, Liceo G. Tarantino, BA
  • Murgia Sky Lab, Liceo G. Tarantino, BA
  • Enrico Fermi, Liceo XXV Aprile, Veneto
  • Team Lampone, CoderDojoTrento, TN
  • GCC, Gali Code Club, Trentino Alto Adige/Südtirol
  • Another Earth, IISS “Laporta/Falcone-Borsellino”
  • Anti Pollution Team, IIS “L. Einaudi”, Sicily
  • e-HAND, Liceo Statale Scientifico e Classico ‘Ettore Majorana’, Lombardia
  • scossa team, ITTS Volterra, Venezia
  • Space Comet Sisters, Scuola don Bosco, Torino

Luxembourg

Flight status achieved:

  • Spaceballs, Atert Lycée Rédange, Diekirch
  • Aline in space, Lycée Aline Mayrisch Luxembourg (LAML)

Poland

Flight status achieved:

  • AstroLeszczynPi, I Liceum Ogolnoksztalcace im. Krola Stanislawa Leszczynskiego w Jasle, podkarpackie
  • Astrokompasy, High School nr XVII in Wrocław named after Agnieszka Osiecka, Lower Silesian
  • Cosmic Investigators, Publiczna Szkoła Podstawowa im. Św. Jadwigi Królowej w Rzezawie, Małopolska
  • ApplePi, III Liceum Ogólnokształcące im. prof. T. Kotarbińskiego w Zielonej Górze, Lubusz Voivodeship
  • ELE Society 2, Zespol Szkol Elektronicznych i Samochodowych, Lubuskie
  • ELE Society 1, Zespol Szkol Elektronicznych i Samochodowych, Lubuskie
  • SpaceOn, Szkola Podstawowa nr 12 w Jasle – Gimnazjum Nr 2, Podkarpackie
  • Dewnald Ducks, III Liceum Ogólnokształcące w Zielonej Górze, lubuskie
  • Nova Team, III Liceum Ogolnoksztalcace im. prof. T. Kotarbinskiego, lubuskie district
  • The Moons, Szkola Podstawowa nr 12 w Jasle – Gimnazjum Nr 2, Podkarpackie
  • Live, Szkoła Podstawowa nr 1 im. Tadeusza Kościuszki w Zawierciu, śląskie
  • Storm Hunters, I Liceum Ogolnoksztalcace im. Krola Stanislawa Leszczynskiego w Jasle, podkarpackie
  • DeepSky, Szkoła Podstawowa nr 1 im. Tadeusza Kościuszki w Zawierciu, śląskie
  • Small Explorers, ZPO Konina, Malopolska
  • AstroZSCL, Zespół Szkół w Czerwionce-Leszczynach, śląskie
  • Orchestra, Szkola Podstawowa nr 12 w Jasle, Podkarpackie
  • ApplePi, I Liceum Ogolnoksztalcace im. Krola Stanislawa Leszczynskiego w Jasle, podkarpackie
  • Green Crew, Szkoła Podstawowa nr 2 w Czeladzi, Silesia

Portugal

Flight status achieved:

  • Magnetics, Escola Secundária João de Deus, Faro
  • ECA_QUEIROS_PI, Secondary School Eça de Queirós, Lisboa
  • ESDMM Pi, Escola Secundária D. Manuel Martins, Setúbal
  • AstroPhysicists, EB 2,3 D. Afonso Henriques, Braga

Romania

Flight status achieved:

  • Caelus, “Tudor Vianu” National High School of Computer Science, District One
  • CodeWarriors, “Tudor Vianu” National High School of Computer Science, District One
  • Dark Phoenix, “Tudor Vianu” National High School of Computer Science, District One
  • ShootingStars, “Tudor Vianu” National High School of Computer Science, District One
  • Astro Pi Carmen Sylva 2, Liceul Teoretic “Carmen Sylva”, Constanta
  • Astro Meridian, Astro Club Meridian 0, Bihor

Slovenia

Flight status achieved:

  • astrOSRence, OS Rence
  • Jakopičevca, Osnovna šola Riharda Jakopiča, Ljubljana

Spain

Flight status achieved:

  • Exea in Orbit, IES Cinco Villas, Zaragoza
  • Valdespartans, IES Valdespartera, Zaragoza
  • Valdespartans2, IES Valdespartera, Zaragoza
  • Astropithecus, Institut de Bruguers, Barcelona
  • SkyPi-line, Colegio Corazón de María, Asturias
  • ClimSOLatic, Colegio Corazón de María, Asturias
  • Científicosdelsaz, IES Profesor Pablo del Saz, Málaga
  • Canarias 2, IES El Calero, Las Palmas
  • Dreamers, M. Peleteiro, A Coruña
  • Canarias 1, IES El Calero, Las Palmas

The Netherlands

Flight status achieved:

  • Team Kaki-FM, Rkbs De Reiger, Noord-Holland

United Kingdom

Flight status achieved:

  • Binco, Teignmouth Community School, Devon
  • 2200 (Saddleworth), Detached Flight Royal Air Force Air Cadets, Lancashire
  • Whatevernext, Albyn School, Highlands
  • GraviTeam, Limehurst Academy, Leicestershire
  • LSA Digital Leaders, Lytham St Annes Technology and Performing Arts College, Lancashire
  • Mead Astronauts, Mead Community Primary School, Wiltshire
  • STEAMCademy, Castlewood Primary School, West Sussex
  • Lux Quest, CoderDojo Banbridge, Co. Down
  • Temparatus, Dyffryn Taf, Carmarthenshire
  • Discovery STEMers, Discovery STEM Education, South Yorkshire
  • Code Inverness, Code Club Inverness, Highland
  • JJB, Ashton Sixth Form College, Tameside
  • Astro Lab, East Kent College, Kent
  • The Life Savers, Scratch and Python, Middlesex
  • JAAPiT, Taylor Household, Nottingham
  • The Heat Guys, The Archer Academy, Greater London
  • Astro Wantenauts, Wantage C of E Primary School, Oxfordshire
  • Derby Radio Museum, Radio Communication Museum of Great Britain, Derbyshire
  • Bytesyze, King’s College School, Cambridgeshire

Other

Flight status achieved:

  • Intellectual Savage Stars, Lycée français de Luanda, Luanda

 

Congratulations to all successful teams! We are looking forward to reading your reports.

The post Mission Space Lab flight status announced! appeared first on Raspberry Pi.