Tag Archives: amazon

Textbook Rental Scam

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/10/textbook-rental-scam.html

Here’s a story of someone who, with three compatriots, rented textbooks from Amazon and then sold them instead of returning them. They used gift cards and prepaid credit cards to buy the books, so there was no available balance when Amazon tried to charge them the buyout price for non-returned books. They also used various aliases and other tricks to bypass Amazon’s fifteen-book limit. In all, they stole 14,000 textbooks worth over $1.5 million.

The article doesn’t link to the indictment, so I don’t know how they were discovered.

Opt-in to the new Amazon SES console experience

Post Syndicated from Simon Poile original https://aws.amazon.com/blogs/messaging-and-targeting/amazon-ses-console-opt-in/

Amazon Web Services (AWS) is pleased to announce the launch of the newly redesigned Amazon Simple Email Service (SES) console. With its streamlined look and feel, the new console makes it even easier for customers to leverage the speed, reliability, and flexibility that Amazon SES has to offer. Customers can access the new console experience via an opt-in link on the classic console.

Amazon SES now offers a new, optimized console to provide customers with a simpler, more intuitive way to create and manage their resources, collect sending activity data, and monitor reputation health. It also has a more robust set of configuration options and new features and functionality not previously available in the classic console.

Here are a few of the improvements customers can find in the new Amazon SES console:

Verified identities

The new console streamlines how customers manage their sender identities in Amazon SES by replacing the classic console’s identity management section with verified identities. The verified identities page is a centralized place in which customers can view, create, and configure both domain and email address identities. Other notable improvements include:

  • DKIM-based verification
    DKIM-based domain verification replaces the previous verification method which was based on TXT records. DomainKeys Identified Mail (DKIM) is an email authentication mechanism that receiving mail servers use to validate email. This new verification method offers customers the added benefit of enhancing their deliverability with DKIM-compliant email providers, and helping them achieve compliance with DMARC (Domain-based Message Authentication, Reporting and Conformance).
  • Amazon SES mailbox simulator
    The new mailbox simulator makes it significantly easier for customers to test how their applications handle different email sending scenarios. From a dropdown, customers select which scenario they’d like to simulate. Scenario options include bounces, complaints, and automatic out-of-office responses. The mailbox simulator provides customers with a safe environment in which to test their email sending capabilities.

Configuration sets

The new console makes it easier for customers to experience the benefits of using configuration sets. Configuration sets enable customers to capture and publish event data for specific segments of their email sending program. It also isolates IP reputation by segment by assigning dedicated IP pools. With a wider range of configuration options, such as reputation tracking and custom suppression options, customers get even more out of this powerful feature.

  • Default configuration set
    One important feature to highlight is the introduction of the default configuration set. By assigning a default configuration set to an identity, customers ensure that the assigned configuration set is always applied to messages sent from that identity at the time of sending. This enables customers to associate a dedicated IP pool or set up event publishing for an identity without having to modify their email headers.
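
If you manage identities programmatically, the same association can be made through the SES v2 API. A minimal boto3 sketch, assuming a verified identity and a configuration set whose names here are placeholders:

import boto3

# Placeholder names; substitute your own verified identity and configuration set.
sesv2 = boto3.client("sesv2")
sesv2.put_email_identity_configuration_set_attributes(
    EmailIdentity="example.com",
    ConfigurationSetName="my-default-config-set",
)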

Account dashboard

There is also an account dashboard for the new SES console. This feature provides customers with fast access to key information about their account, including sending limits and restrictions, and overall account health. A visual representation of the customer’s daily email usage helps them ensure that they aren’t approaching their sending limits. Additionally, customers who use the Amazon SES SMTP interface to send emails can visit the account dashboard to obtain or update their SMTP credentials.

Reputation metrics

The new reputation metrics page provides customers with high-level insight into historic bounce and complaint rates. This is viewed at both the account level and the configuration set level. Bounce and complaint rates are two important metrics that Amazon SES considers when assessing a customer’s sender reputation, as well as the overall health of their account.

The redesigned Amazon SES console, with its easy-to-use workflows, will not only enhance customers’ onboarding experience but also streamline their ongoing usage. The Amazon SES team remains committed to investing on behalf of our customers and empowering them to be productive anywhere, anytime. We invite you to opt in to the new Amazon SES console experience and let us know what you think.

Amazon Has Trucks Filled with Hard Drives and an Armed Guard

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/01/amazon-has-trucks-filled-with-hard-drives-and-an-armed-guard.html

From an interview with an Amazon Web Services security engineer:

So when you use AWS, part of what you’re paying for is security.

Right; it’s part of what we sell. Let’s say a prospective customer comes to AWS. They say, “I like pay-as-you-go pricing. Tell me more about that.” We say, “Okay, here’s how much you can use at peak capacity. Here are the savings we can see in your case.”

Then the company says, “How do I know that I’m secure on AWS?” And this is where the heat turns up. This is where we get them. We say, “Well, let’s take a look at what you’re doing right now and see if we can offer a comparable level of security.” So they tell us about the setup of their data centers.

We say, “Oh my! It seems like we have level five security and your data center has level three security. Are you really comfortable staying where you are?” The customer figures, not only am I going to save money by going with AWS, I also just became aware that I’m not nearly as secure as I thought.

Plus, we make it easy to migrate and difficult to leave. If you have a ton of data in your data center and you want to move it to AWS but you don’t want to send it over the internet, we’ll send an eighteen-wheeler to you filled with hard drives, plug it into your data center with a fiber optic cable, and then drive it across the country to us after loading it up with your data.

What? How do you do that?

We have a product called Snowmobile. It’s a gas-guzzling truck. There are no public pictures of the inside, but it’s pretty cool. It’s like a modular datacenter on wheels. And customers rightly expect that if they load a truck with all their data, they want security for that truck. So there’s an armed guard in it at all times.

It’s a pretty easy sell. If a customer looks at that option, they say, yeah, of course I want the giant truck and the guy with a gun to move my data, not some crappy system that I develop on my own.

Lots more about how AWS views security, and Keith Alexander’s position on Amazon’s board of directors, in the interview.

Found on Slashdot.

Manipulating Systems Using Remote Lasers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/12/manipulating-systems-using-remote-lasers.html

Many systems are vulnerable:

Researchers at the time said that they were able to launch inaudible commands by shining lasers — from as far as 360 feet — at the microphones on various popular voice assistants, including Amazon Alexa, Apple Siri, Facebook Portal, and Google Assistant.

[…]

They broadened their research to show how light can be used to manipulate a wider range of digital assistants — including Amazon Echo 3 — but also sensing systems found in medical devices, autonomous vehicles, industrial systems and even space systems.

The researchers also delved into how the ecosystem of devices connected to voice-activated assistants — such as smart-locks, home switches and even cars — also fail under common security vulnerabilities that can make these attacks even more dangerous. The paper shows how using a digital assistant as the gateway can allow attackers to take control of other devices in the home: Once an attacker takes control of a digital assistant, he or she can have the run of any device connected to it that also responds to voice commands. Indeed, these attacks can get even more interesting if these devices are connected to other aspects of the smart home, such as smart door locks, garage doors, computers and even people’s cars, they said.

Another article. The researchers will present their findings at Black Hat Europe — which, of course, will be happening virtually — on December 10.

Analyze and improve email campaigns with Amazon Simple Email Service and Amazon QuickSight

Post Syndicated from Apoorv Gakhar original https://aws.amazon.com/blogs/messaging-and-targeting/analyze-and-improve-email-campaigns-with-amazon-simple-email-service-and-amazon-quicksight/

Email is a popular channel for applications, used both in marketing campaigns and in other outbound customer communications. The challenge with email is that it can become increasingly complex to manage for companies that send large quantities of messages per month. This is especially true when companies need to measure detailed email engagement metrics to track campaign success.

As a marketer, you want to monitor several metrics, including open rates, click-through rates, bounce rates, and delivery rates. If you do not track your email results, you could potentially be wasting your campaign resources. Monitoring and interpreting your sending results can help you deliver the best content possible to your subscribers’ inboxes, and it can also ensure that your IP reputation stays high. Mailbox providers prioritize inbox placement for senders that deliver relevant content. As a business professional, tracking your emails can also help you stay on top of hot leads and important clients. For example, if someone has opened your email multiple times in one day, it might be a good idea to send out another follow-up email to touch base.

Building a large-scale email solution is a complex and expensive challenge for any business. You would need to build infrastructure, assemble your network, and warm up your IP addresses. Alternatively, working with some third-party email solutions requires contract negotiations and upfront costs.

Fortunately, Amazon Simple Email Service (SES) has a highly scalable and reliable backend infrastructure that reduces the preceding challenges. It has improved content filtering techniques, reputation management features, and a vast array of analytics and reporting functions. These features help email senders reach their audiences and make it easier to manage email channels across applications. Amazon SES also provides API operations to monitor your sending activities through simple API calls. You can publish these events to Amazon CloudWatch, Amazon Kinesis Data Firehose, or Amazon Simple Notification Service (SNS).

In this post, you learn how to build and automate a serverless architecture that analyzes email events. We explore how to track important metrics such as open and click rate of the emails.

Solution overview

 

The metrics that you can measure using Amazon SES are referred to as email sending events. You can use Amazon CloudWatch to retrieve Amazon SES event data. You can also use Amazon SNS to interpret Amazon SES event data. However, in this post, we are going to use Amazon Kinesis Data Firehose to monitor our user sending activity.

You enable Amazon SES configuration sets with open and click metrics and publish email sending events to Amazon Kinesis Data Firehose as JSON records. A Lambda function parses the JSON records and publishes the content to an Amazon S3 bucket.

Ingested data lands in an Amazon S3 bucket that we refer to as the raw zone. To make that data available, you have to catalog its schema in the AWS Glue Data Catalog. You create and run an AWS Glue crawler that crawls your data sources and constructs your Data Catalog. The Data Catalog uses pre-built classifiers for many popular source formats and data types, including JSON, CSV, and Parquet.

When the crawler is finished creating the table definition and schema, you analyze the data using Amazon Athena. It is an interactive query service that makes it easy to analyze data in Amazon S3 using SQL. Point to your data in Amazon S3, define the schema, and start querying using standard SQL, with most results delivered in seconds.

Now you can build visualizations, perform ad hoc analysis, and quickly get business insights from the Amazon SES event data using Amazon QuickSight. You can easily run SQL queries using Amazon Athena on data stored in Amazon S3, and build business dashboards within Amazon QuickSight.

 

Deploying the architecture:

Configuring Amazon Kinesis Data Firehose to write to Amazon S3:

  1. Navigate to Amazon Kinesis in the AWS Management Console. Choose Kinesis Data Firehose and create a delivery stream.
  2. Enter the delivery stream name as “SES_Firehose_Demo”.
  3. Under the source category, select “Direct PUT or other sources”.
  4. On the next page, make sure to enable data transformation of source records with AWS Lambda. We use AWS Lambda to parse the notification contents so that we only process the information required for this use case.
  5. Click “Create new” to create a new Lambda function.
  6. Click on the “General Kinesis Data Firehose Processing” Lambda blueprint; this opens the Lambda console. Enter the following values in Lambda:
    • Name: SES-Firehose-Json-Parser
    • Execution role: Create a new role with basic Lambda permissions.
  7. Click “Create Function”. Now replace the Lambda code with the following provided code and save the function.
    • 'use strict';
      console.log('Loading function');

      exports.handler = (event, context, callback) => {
          /* Process the list of records and transform them */
          const output = event.records.map((record) => {
              console.log(record.recordId);
              // Each Firehose record carries a base64-encoded SES event notification
              const payload = JSON.parse(Buffer.from(record.data, 'base64').toString());
              console.log('payload : ' + JSON.stringify(payload));

              if (payload.eventType === 'Click') {
                  // Keep only the fields we need from click events
                  const resultPayLoadClick = {
                      eventType: payload.eventType,
                      destinationEmailId: payload.mail.destination[0],
                      sourceIp: payload.click.ipAddress,
                  };
                  console.log('resultPayLoad : ' + resultPayLoadClick.eventType + resultPayLoadClick.destinationEmailId + resultPayLoadClick.sourceIp);

                  return {
                      recordId: record.recordId,
                      result: 'Ok',
                      data: Buffer.from(JSON.stringify(resultPayLoadClick)).toString('base64'),
                  };
              } else {
                  // Open events expose the source IP under payload.open instead of payload.click
                  const resultPayLoadOpen = {
                      eventType: payload.eventType,
                      destinationEmailId: payload.mail.destination[0],
                      sourceIp: payload.open.ipAddress,
                  };
                  console.log('resultPayLoad : ' + resultPayLoadOpen.eventType + resultPayLoadOpen.destinationEmailId + resultPayLoadOpen.sourceIp);

                  return {
                      recordId: record.recordId,
                      result: 'Ok',
                      data: Buffer.from(JSON.stringify(resultPayLoadOpen)).toString('base64'),
                  };
              }
          });
          console.log(`Processing completed.  Successful records ${output.length}.`);
          callback(null, { records: output });
      };

      Please note:

      For this blog, we keep only three fields from each notification: eventType, destinationEmailId, and sourceIp. If you want to store other parameters, you can modify the code accordingly. For the full list of information included in the notifications, see the following document.

      https://docs.aws.amazon.com/ses/latest/DeveloperGuide/event-publishing-retrieving-firehose-examples.html

  8. Now, navigate back to your Amazon Kinesis Data Firehose console and choose the newly created Lambda function.
  9. Keep record format conversion disabled and click “Next”.
  10. In the destination, choose Amazon S3 and select a target Amazon S3 bucket. Create a new bucket if you do not want to use the existing bucket.
  11. Enter the following values for the Amazon S3 prefix and error prefix under which event data is published.
    • Prefix:
      fhbase/year=!{timestamp:yyyy}/month=!{timestamp:MM}/day=!{timestamp:dd}/hour=!{timestamp:HH}/
    • Error Prefix:
      fherroroutputbase/!{firehose:random-string}/!{firehose:error-output-type}/!{timestamp:yyyy/MM/dd}/
  12. You may use the above values for the Amazon S3 prefix and error prefix. If you use your own prefixes, make sure to update the target path in AWS Glue accordingly in the later steps.
  13. Keep the Amazon S3 backup option disabled and click “Next”.
  14. On the next page, under the Permissions section, select “Create a new role”. This opens a new tab; click “Allow” to create the role.
  15. Navigate back to the Amazon Kinesis Data Firehose console and click “Next”.
  16. Review the changes and click on “Create delivery stream”.

Configure Amazon SES to publish event data to Kinesis Data Firehose:

  1. Navigate to the Amazon SES console and select “Email Addresses” from the left side.
  2. Click on “Verify a New Email Address” at the top. Enter the email address to which you will send a test email.
  3. Go to your email inbox and click on the verification link. Navigate back to the Amazon SES console and you will see a verified status on the email address provided.
  4. In the Amazon SES console, select “Configuration Sets” from the left side.
  5. Create a new configuration set. Enter “SES_Firehose_Demo” as the configuration set name and click “Create”.
  6. Choose Kinesis Data Firehose as the destination and provide the following details.
    • Name: OpenClick
    • Event Types: Open and Click
  7. In the IAM Role field, select ‘Let SES make a new role’. This allows SES to create a new role and add sufficient permissions for this use case in that role.
  8. Click “Save”.
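
If you prefer to script this step, the classic SES API exposes the same configuration. A rough boto3 sketch; the role and delivery stream ARNs shown are placeholders you replace with your own:

import boto3

ses = boto3.client("ses")

# Create the configuration set used throughout this walkthrough.
ses.create_configuration_set(
    ConfigurationSet={"Name": "SES_Firehose_Demo"}
)

# Publish open and click events to the Kinesis Data Firehose delivery stream.
ses.create_configuration_set_event_destination(
    ConfigurationSetName="SES_Firehose_Demo",
    EventDestination={
        "Name": "OpenClick",
        "Enabled": True,
        "MatchingEventTypes": ["open", "click"],
        "KinesisFirehoseDestination": {
            # Placeholder ARNs; use the role SES creates for you or your own.
            "IAMRoleARN": "arn:aws:iam::123456789012:role/ses-firehose-role",
            "DeliveryStreamARN": "arn:aws:firehose:us-east-1:123456789012:deliverystream/SES_Firehose_Demo",
        },
    },
)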

Sending a Test email:

  1. Navigate to the Amazon SES console and click on “Email Addresses” on the left side.
  2. Select your verified email address and click on “Send a Test email”.
  3. Make sure you select the raw email format. You may use the following format to send out a test email from the console. Make sure you send this email to a recipient inbox to which you have access.
    • X-SES-CONFIGURATION-SET: SES_Firehose_Demo
      X-SES-MESSAGE-TAGS: Email=NULL
      From: [email protected]
      To: [email protected]
      Subject: Test email
      Content-Type: multipart/alternative;
          boundary="----=_boundary"
      
      ------=_boundary
      Content-Type: text/html; charset=UTF-8
      Content-Transfer-Encoding: 7bit
      
      This is a test email.
      
      <a href="https://aws.amazon.com/">Amazon Web Services</a>
      ------=_boundary--
  4. Once the email arrives in the recipient’s inbox, open it and click the link in it. This generates open and click events and sends them back to SES.
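
You can also generate the same events from code rather than the console. A rough boto3 sketch, assuming placeholder sender and recipient addresses that you have verified in SES:

import boto3

ses = boto3.client("ses")

# The HTML link is what produces a click event when the recipient clicks it.
ses.send_email(
    Source="sender@example.com",                              # placeholder verified sender
    Destination={"ToAddresses": ["recipient@example.com"]},   # placeholder recipient
    Message={
        "Subject": {"Data": "Test email"},
        "Body": {
            "Html": {
                "Data": 'This is a test email. <a href="https://aws.amazon.com/">Amazon Web Services</a>'
            }
        },
    },
    ConfigurationSetName="SES_Firehose_Demo",  # routes open/click events to Firehose
)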

Creating Glue Crawler:

  1. Navigate to the AWS Glue console, select “crawler” from the left side, and then click on “Add crawler” on the top.
  2. Enter the crawler name as “SES_Firehose_Crawler” and click “Next”.
  3. Under Crawler source type, select “Data stores” and click “Next”.
  4. Select Amazon S3 as the data source and provide the required path. Include the path up to the “fhbase” folder.
  5. Select “no” under Add another data source section.
  6. In the IAM role, select the option to ‘Create an IAM role’. Enter the name as “SES_Firehose-Crawler”. This provides the necessary permissions automatically to the newly created role.
  7. In the frequency section, select run on demand and click “Next”. You may choose this value as per your use case.
  8. Click on add Database and provide the name as “ses_firehose_glue_db”. Click on create and then click “Next”.
  9. Review your Glue crawler setting and click on “Finish”.
  10. Run the crawler created above. It crawls the data from the specified Amazon S3 bucket and creates a catalog and table definition.
  11. Now navigate to “tables” on the left and verify that a “fhbase” table was created after the crawler ran.
If you want to analyze the data stored so far, you can use Amazon Athena and test the queries. If not, you can move on to Amazon QuickSight directly.

Analyzing the data using Amazon Athena:

  1. Open the Athena console and select the database that was created using AWS Glue.
  2. Click on “set up a query result location in Amazon S3”.
  3. Navigate to the Amazon S3 bucket created in the earlier steps and create a folder called “AthenaQueryResult”. We store our Athena query results in this bucket.
  4. Now navigate back to Amazon Athena, select the Amazon S3 bucket with the folder location, and click “Save”.
  5. Run the following query to test the sample output and accordingly modify your SQL query to get the desired output.
    • Select * from "ses_firehose_glue_db"."fhbase"

Note: If you want to track opened emails by unique IP addresses, you can modify your SQL query accordingly. This is because every time an email is opened, you receive a notification, even if the same email was opened before.
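
For example, a query along the following lines counts distinct source IP addresses per recipient for open events. The column names are the ones produced by the Lambda function (lower-cased by Athena); adjust them if you changed the transformation:

SELECT destinationemailid,
       COUNT(DISTINCT sourceip) AS unique_open_ips
FROM "ses_firehose_glue_db"."fhbase"
WHERE eventtype = 'Open'
GROUP BY destinationemailid;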

 

Visualizing the data in Amazon QuickSight dashboards:

  1. Now, let’s analyze this data using Amazon Athena via Amazon QuickSight.
  2. Log in to Amazon QuickSight and choose Manage data, New dataset. Choose Amazon Athena as the new data source.
  3. Enter the data source name as “SES-Demo” and click on “Create the data source”.
  4. Select the “ses_firehose_glue_db” database from the drop-down and the “fhbase” table that you created in AWS Glue.
  5. Add a custom SQL query based on your use case and click on “Confirm query”. Refer to the example after this list.
  6. You can perform ad hoc analysis and modify your query according to your business needs. Click “Save & Visualize”.
  7. You can now visualize your event data on the Amazon QuickSight dashboard. You can use various graphs to represent your data. For this demo, the default graph is used with two fields selected to populate the graph.
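
As referenced in step 5, one possible custom SQL query for the dataset simply aggregates events by type and recipient; treat it as a starting point and adapt it to your own analysis:

SELECT eventtype,
       destinationemailid,
       COUNT(*) AS event_count
FROM "ses_firehose_glue_db"."fhbase"
GROUP BY eventtype, destinationemailid;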

 

Conclusion:

This architecture shows how to track your email sending activity at a granular level. You set up Amazon SES to publish event data to Amazon Kinesis Data Firehose based on fine-grained email characteristics that you define. You can also track several types of email sending events, including sends, deliveries, bounces, complaints, rejections, rendering failures, and delivery delays. This information can be useful for operational and analytical purposes.

To get started with Amazon SES, follow this quick start guide and you can learn more about monitoring sending activity here.

About the Authors

Chirag Oswal is a solutions architect and AR/VR specialist working with the public sector in India. He works with AWS customers to help them adopt the cloud operating model on a large scale.

Apoorv Gakhar is a Cloud Support Engineer and an Amazon SES expert. He works with AWS customers to help them integrate their applications with various AWS services.

 

Additional Resources:

Amazon SES Dedicated IP Pools

Amazon Personalize optimizer using Amazon Pinpoint events

Template Personalization using Amazon Pinpoint

 

 

Amazon Delivery Drivers Hacking Scheduling System

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/09/amazon-delivery-drivers-hacking-scheduling-system.html

Amazon drivers — all gig workers who don’t work for the company — are hanging cell phones in trees near Amazon delivery stations, fooling the system into thinking that they are closer than they actually are:

The phones in trees seem to serve as master devices that dispatch routes to multiple nearby drivers in on the plot, according to drivers who have observed the process. They believe an unidentified person or entity is acting as an intermediary between Amazon and the drivers and charging drivers to secure more routes, which is against Amazon’s policies.

The perpetrators likely dangle multiple phones in the trees to spread the work around to multiple Amazon Flex accounts and avoid detection by Amazon, said Chetan Sharma, a wireless industry consultant. If all the routes were fed through one device, it would be easy for Amazon to detect, he said.

“They’re gaming the system in a way that makes it harder for Amazon to figure it out,” Sharma said. “They’re just a step ahead of Amazon’s algorithm and its developers.”

Amazon Supplier Fraud

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/08/amazon_supplier.html

Interesting story of an Amazon supplier fraud:

According to the indictment, the brothers swapped ASINs for items Amazon ordered to send large quantities of different goods instead. In one instance, Amazon ordered 12 canisters of disinfectant spray costing $94.03. The defendants allegedly shipped 7,000 toothbrushes costing $94.03 each, using the code for the disinfectant spray, and later billed Amazon for over $650,000.

In another instance, Amazon ordered a single bottle of designer perfume for $289.78. In response, according to the indictment, the defendants sent 927 plastic beard trimmers costing $289.79 each, using the ASIN for the perfume. Prosecutors say the brothers frequently shipped and charged Amazon for more than 10,000 units of an item when it had requested fewer than 100. Once Amazon detected the fraud and shut down their accounts, the brothers allegedly tried to open new ones using fake names, different email addresses, and VPNs to obscure their identity.

It all worked because Amazon is so huge that everything is automated.

Build a Raspberry Pi Zero W Amazon price tracker

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/build-a-raspberry-pi-zero-w-amazon-price-tracker/

Have you ever missed out on a great deal on Amazon because you were completely unaware it existed? Are you interested in a specific item but waiting for it to go on sale? Here’s help: Devscover’s latest video shows you how to create an Amazon price tracker using Raspberry Pi Zero W and Python.

Build An Amazon Price Tracker With Python

Wayne from Devscover shows you how to code a Amazon Price Tracker with Python! Get started with your first Python project. Land a job at a big firm like Google, Facebook, Twitter or even the less well known but equally exciting big retail organisations or Government with Devscover tutorials and tips.

By following their video tutorial, you can set up a notification system on Raspberry Pi Zero W that emails you every time your chosen item’s price drops. Very nice.

Devscover’s tutorial is so detailed that it seems a waste to try and summarise it here. So instead, why not make yourself a cup of tea and sit down with the video? It’s worth the time investment: if you follow the instructions, you’ll end up with a great piece of tech that’ll save you money!
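
The video walks through the full build, so the following is only a rough sketch of the general idea rather than Devscover's actual script. It assumes a hypothetical product URL, a price selector that may change whenever Amazon updates its pages, and Gmail credentials supplied through environment variables:

import os
import smtplib
from email.message import EmailMessage

import requests
from bs4 import BeautifulSoup

PRODUCT_URL = "https://www.amazon.co.uk/dp/EXAMPLEASIN"  # hypothetical product page
TARGET_PRICE = 50.00                                      # alert when the price drops below this
HEADERS = {"User-Agent": "Mozilla/5.0"}                   # Amazon blocks the default user agent

def current_price():
    page = requests.get(PRODUCT_URL, headers=HEADERS, timeout=10)
    soup = BeautifulSoup(page.text, "html.parser")
    # The selector is an assumption; inspect the page and adjust it if Amazon changes its markup.
    price_text = soup.select_one("#priceblock_ourprice").get_text()
    return float(price_text.replace("£", "").replace(",", ""))

def send_alert(price):
    msg = EmailMessage()
    msg["Subject"] = f"Price drop: now £{price:.2f}"
    msg["From"] = os.environ["GMAIL_USER"]
    msg["To"] = os.environ["ALERT_RECIPIENT"]
    msg.set_content(f"The item is now £{price:.2f}: {PRODUCT_URL}")
    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as smtp:
        smtp.login(os.environ["GMAIL_USER"], os.environ["GMAIL_APP_PASSWORD"])
        smtp.send_message(msg)

if __name__ == "__main__":
    price = current_price()
    if price < TARGET_PRICE:
        send_alert(price)

Run it on a schedule (for example from cron on the Pi Zero W) so the check happens automatically.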

Remember, if you like what you see, subscribe to the Devscover YouTube channel and give them a thumbs-up for making wonderful Raspberry Pi content!

The post Build a Raspberry Pi Zero W Amazon price tracker appeared first on Raspberry Pi.

Technical Report of the Bezos Phone Hack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/01/technical_repor.html

Motherboard obtained and published the technical report on the hack of Jeff Bezos’s phone, which is being attributed to Saudi Arabia, specifically to Crown Prince Mohammed bin Salman.

…investigators set up a secure lab to examine the phone and its artifacts and spent two days poring over the device but were unable to find any malware on it. Instead, they only found a suspicious video file sent to Bezos on May 1, 2018 that “appears to be an Arabic language promotional film about telecommunications.”

That file shows an image of the Saudi Arabian flag and Swedish flags and arrived with an encrypted downloader. Because the downloader was encrypted this delayed or further prevented “study of the code delivered along with the video.”

Investigators determined the video or downloader were suspicious only because Bezos’ phone subsequently began transmitting large amounts of data. “[W]ithin hours of the encrypted downloader being received, a massive and unauthorized exfiltration of data from Bezos’ phone began, continuing and escalating for months thereafter,” the report states.

“The amount of data being transmitted out of Bezos’ phone changed dramatically after receiving the WhatsApp video file and never returned to baseline. Following execution of the encrypted downloader sent from MBS’ account, egress on the device immediately jumped by approximately 29,000 percent,” it notes. “Forensic artifacts show that in the six (6) months prior to receiving the WhatsApp video, Bezos’ phone had an average of 430KB of egress per day, fairly typical of an iPhone. Within hours of the WhatsApp video, egress jumped to 126MB. The phone maintained an unusually high average of 101MB of egress data per day for months thereafter, including many massive and highly atypical spikes of egress data.”

The Motherboard article also quotes forensic experts on the report:

A mobile forensic expert told Motherboard that the investigation as depicted in the report is significantly incomplete and would only have provided the investigators with about 50 percent of what they needed, especially if this is a nation-state attack. She says the iTunes backup and other extractions they did would get them only messages, photo files, contacts and other files that the user is interested in saving from their applications, but not the core files.

“They would need to use a tool like Graykey or Cellebrite Premium or do a jailbreak to get a look at the full file system. That’s where that state-sponsored malware is going to be found. Good state-sponsored malware should never show up in a backup,” said Sarah Edwards, an author and teacher of mobile forensics for the SANS Institute.

“The full file system is getting into the device and getting every single file on there — the whole operating system, the application data, the databases that will not be backed up. So really the in-depth analysis should be done on that full file system, for this level of investigation anyway. I would have insisted on that right from the start.”

The investigators do note on the last page of their report that they need to jailbreak Bezos’s phone to examine the root file system. Edwards said this would indeed get them everything they would need to search for persistent spyware like the kind created and sold by the NSO Group. But the report doesn’t indicate if that did get done.

Fooling Voice Assistants with Lasers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/11/fooling_voice_a.html

Interesting:

Siri, Alexa, and Google Assistant are vulnerable to attacks that use lasers to inject inaudible — and sometimes invisible — commands into the devices and surreptitiously cause them to unlock doors, visit websites, and locate, unlock, and start vehicles, researchers report in a research paper published on Monday. Dubbed Light Commands, the attack works against Facebook Portal and a variety of phones.

Shining a low-powered laser into these voice-activated systems allows attackers to inject commands of their choice from as far away as 360 feet (110m). Because voice-controlled systems often don’t require users to authenticate themselves, the attack can frequently be carried out without the need of a password or PIN. Even when the systems require authentication for certain actions, it may be feasible to brute force the PIN, since many devices don’t limit the number of guesses a user can make. Among other things, light-based commands can be sent from one building to another and penetrate glass when a vulnerable device is kept near a closed window.

Architecting multiple microservices behind a single domain with Amazon API Gateway

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/architecting-multiple-microservices-behind-a-single-domain-with-amazon-api-gateway/

This post is courtesy of Roberto Iturralde, Solutions Architect.

Today’s modern architectures are increasingly microservices-based, with separate engineering teams working independently on services with their own feature requirements and deployment pipelines. The benefits of this approach include increased agility and release velocity.

Microservice architectures also come with some challenges, particularly when they make up parts of a public service or API. These include enforcing engineering and security standards and collating application logs and metrics for a cross-service operational view.

It’s also important to have the microservices feel like a cohesive product to external customers, for authentication and metering in particular:

  • The engineering teams want autonomy.
  • The security team wants a cross-service view and to make it easy for the teams to adhere to the organization’s guidelines.
  • Customers want to feel like they’re using a unified product.

The AWS toolbox

AWS offers many services that you can weave together to meet these needs.

Amazon API Gateway is a fully managed service for deploying and managing a unified front door to your applications. It has features for routing your domain’s traffic to different backing microservices, enforcing consistent authentication and authorization with fine-grained permissions across them, and implementing consistent API throttling and usage metering. The microservice that backs a given API can live in another AWS account. You don’t have to expose it to the internet.

Amazon Cognito is a user management service with rich support for authentication and authorization of users. You can manage those users within Amazon Cognito or from other federated IdPs. Amazon Cognito can vend JSON Web Tokens and integrates natively with API Gateway to support OAuth scopes for fine-grained API access.

Amazon CloudWatch is a monitoring and management service that collects and visualizes data across AWS services. CloudWatch dashboards are customizable home pages that can contain graphs showing metrics and alarms. You can customize these to represent a specific microservice, a collection of microservices that comprise a product, or any other meaningful view with fine-grained access control to the dashboard.

AWS X-Ray is an analysis and debugging tool designed for distributed applications. It has tools to help gain insight into the performance of your microservices, and the APIs that front them, to measure and debug any potential customer impact.

AWS Service Catalog allows the central management and self-service creation of AWS resources that meet your organization’s guidelines and best practices. You can require separate permissions for managing catalog entries from deploying catalog entries, allowing a central team to define and publish templates for resources across the company.

Architectural options

There are many options for how you can combine these AWS services to meet your requirements. Your decisions may also depend on your expertise with AWS. The following features are common to all the designs below:

  • Amazon Route 53 has registered custom domains and hosts their DNS. You could also use an external registrar and DNS service.
  • AWS Certificate Manager (ACM) manages Transport Layer Security (TLS) certificates for the custom domains that route traffic to API Gateway APIs in a given account.
  • Amazon Cognito manages the users who access the APIs in API Gateway.
  • Service Catalog holds catalog products for API Gateway APIs that adhere to the organizational guidelines and best practices, such as security configuration and default API throttling. Microservice teams have permission to create an API pointed to their service and configure specific parameters, with approvals required for production environments. For more information, see Standardizing infrastructure delivery in distributed environments using AWS Service Catalog.

The following shows common design patterns and their high-level benefits and challenges.

Single AWS account

Microservices, their fronting API Gateway APIs, and supporting services are in the same AWS account. This account also includes core AWS services such as the following:

  • Route 53 for domain name registration and DNS
  • ACM for managing server certificates for your domain
  • Amazon Cognito for user management
  • Service Catalog for the catalog of best-practice product templates to use across the organization

Single AWS account example

Use this approach if you do not yet have a multi-account strategy or if you use AWS native tools for observability. With a single AWS account, the microservices can share the same networking topology, and so more easily communicate with each other when needed. With all the API Gateway APIs in the same AWS account, you can configure API throttling, metering, authentication, and authorization features for a unified experience for customers. You can also route traffic to a given API using subdomains or base path mapping in API Gateway.
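
For example, routing two microservice APIs under one custom domain with base path mappings might look like the following boto3 sketch; the domain name, API IDs, and stage are placeholders:

import boto3

apigw = boto3.client("apigateway")

# Map https://api.example.com/orders and /payments to two different APIs.
for base_path, rest_api_id in [("orders", "abc123"), ("payments", "def456")]:  # placeholder IDs
    apigw.create_base_path_mapping(
        domainName="api.example.com",  # custom domain already configured in API Gateway
        basePath=base_path,
        restApiId=rest_api_id,
        stage="prod",
    )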

A single AWS account can manage TLS certificates for AWS domains in one place. This feature is available to all API Gateway APIs. Having the microservices and their API Gateway APIs in the same AWS account gives more complete X-Ray service maps, given that X-Ray currently can’t analyze traces across AWS accounts. Similarly, you have a complete view of the metrics all AWS services publish to CloudWatch. This feature allows you to create CloudWatch dashboards that span the API Gateway APIs and their backing microservices.

There is an increased blast radius with this architecture, because the microservices share the same account. The microservices can impact each other through shared AWS service limits or mistakes by team members on other microservice teams. Most AWS services support tagging for cost allocation and granular access control, but there are some features of AWS services that do not. Because of this, it’s more difficult to separate the costs of each microservice completely.

Separate AWS accounts

When using separate AWS accounts, each API Gateway API lives in the same AWS account as its backing microservice. Separate AWS accounts hold the Service Catalog portfolio, domain registration (using Route 53), and aggregated logs from the microservices. The organization account, security account, and other core accounts are discussed further in the AWS Landing Zone Solution.

Separate AWS accounts

Use this architecture if you have a mature multi-account strategy and existing tooling for cross-account observability. In this approach, an AWS account encapsulates a microservice completely, for cost isolation and reduced blast radius. With the API Gateway API in the same account as the backing microservice, you have a complete view of the microservice in CloudWatch and X-Ray.

You can only meter API usage by microservice because API Gateway usage plans can’t track activity across accounts. Implement a process to ensure each customer’s API Gateway API key is the same across accounts for a smooth customer experience.
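
One way to keep a customer's key consistent is to import the same key value into API Gateway in every account. A rough boto3 sketch, with a placeholder customer name, key value, and usage plan ID:

import boto3

def create_matching_key(session, key_value, usage_plan_id):
    """Create the same API key value in the account behind this boto3 session."""
    apigw = session.client("apigateway")
    key = apigw.create_api_key(
        name="customer-alpha",   # placeholder customer name
        value=key_value,         # identical value reused in every account
        enabled=True,
    )
    apigw.create_usage_plan_key(
        usagePlanId=usage_plan_id,  # placeholder usage plan in this account
        keyId=key["id"],
        keyType="API_KEY",
    )

# Repeat for each microservice account, passing a session scoped to that account.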

API Gateway base path mappings are local to an AWS account, so you must use subdomains to separate the microservices that comprise a product under a single domain. You still have a complete view of each microservice in the CloudWatch dashboards and X-Ray console of its own AWS account, but creating a view across microservices requires aggregation in a central AWS account or an external tool.

Central API account

Using a central API account is similar to the separate account architecture, except the API Gateway APIs are in a central account.

Central API account

This architecture is the best approach for most users. It offers a balance of the benefits of microservice separation with the unification of particular services for a better end-user experience. Each microservice has an AWS account, which isolates it from the other services and reduces the risk of AWS service limit contention or accidents due to sharing the account with other engineering teams.

Because each microservice lives in a separate account, that account’s bill captures all the costs for that microservice. You can track the API costs, which are in the shared API account, using tags on API Gateway resources.

While the microservices are isolated in separate AWS accounts, the API Gateway throttling, metering, authentication, and authorization features are centralized for a consistent experience for customers. You can use subdomains or API Gateway base path mappings to route traffic to different API Gateway APIs. Also, the TLS certificates for your domains are centrally managed and available to all API Gateway APIs.

CloudWatch metrics, X-Ray traces, and application logs for a given microservice and its fronting API Gateway API are now split across accounts. Unify these in a central AWS account or a third-party tool.

Conclusion

The breadth of the AWS Cloud presents many architectural options to customers. When designing your systems, it’s essential to understand the benefits and challenges of design decisions before implementing a solution.

This post walked you through three common architectural patterns for allowing independent microservice teams to operate behind a unified domain presented to your customers. The best approach for your organization depends on your priorities, experience, and familiarity with AWS.

Amazon Is Losing the War on Fraudulent Sellers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/05/amazon_is_losin.html

Excellent article on fraudulent seller tactics on Amazon.

The most prominent black hat companies for US Amazon sellers offer ways to manipulate Amazon’s ranking system to promote products, protect accounts from disciplinary actions, and crush competitors. Sometimes, these black hat companies bribe corporate Amazon employees to leak information from the company’s wiki pages and business reports, which they then resell to marketplace sellers for steep prices. One black hat company charges as much as $10,000 a month to help Amazon sellers appear at the top of product search results. Other tactics to promote sellers’ products include removing negative reviews from product pages and exploiting technical loopholes on Amazon’s site to lift products’ overall sales rankings.

[…]

AmzPandora’s services ranged from small tasks to more ambitious strategies to rank a product higher using Amazon’s algorithm. While it was online, it offered to ping internal contacts at Amazon for $500 to get information about why a seller’s account had been suspended, as well as advice on how to appeal the suspension. For $300, the company promised to remove an unspecified number of negative reviews on a listing within three to seven days, which would help increase the overall star rating for a product. For $1.50, the company offered a service to fool the algorithm into believing a product had been added to a shopper’s cart or wish list by writing a super URL. And for $1,200, an Amazon seller could purchase a “frequently bought together” spot on another marketplace product’s page that would appear for two weeks, which AmzPandora promised would lead to a 10% increase in sales.

This was a good article on this from last year. (My blog post.)

Amazon has a real problem here, primarily because trust in the system is paramount to Amazon’s success. As much as they need to crack down on fraudulent sellers, they really want articles like these to not be written.

Slashdot thread. Boing Boing post.

Fraudulent Tactics on Amazon Marketplace

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/12/fraudulent_tact.html

Fascinating article about the many ways Amazon Marketplace sellers sabotage each other and defraud customers. The opening example: framing a seller for false advertising by buying fake five-star reviews for their products.

Defacement: Sellers armed with the accounts of Amazon distributors (sometimes legitimately, sometimes through the black market) can make all manner of changes to a rival’s listings, from changing images to altering text to reclassifying a product into an irrelevant category, like “sex toys.”

Phony fires: Sellers will buy their rival’s product, light it on fire, and post a picture to the reviews, claiming it exploded. Amazon is quick to suspend sellers for safety claims.

[…]

Over the following days, Harris came to realize that someone had been targeting him for almost a year, preparing an intricate trap. While he had trademarked his watch and registered his brand, Dead End Survival, with Amazon, Harris hadn’t trademarked the name of his Amazon seller account, SharpSurvival. So the interloper did just that, submitting to the patent office as evidence that he owned the goods a photo taken from Harris’ Amazon listings, including one of Harris’ own hands lighting a fire using the clasp of his survival watch. The hijacker then took that trademark to Amazon and registered it, giving him the power to kick Harris off his own listings and commandeer his name.

[…]

There are more subtle methods of sabotage as well. Sellers will sometimes buy Google ads for their competitors for unrelated products — say, a dog food ad linking to a shampoo listing — so that Amazon’s algorithm sees the rate of clicks converting to sales drop and automatically demotes their product.

What’s also interesting is how Amazon is basically its own government — with its own rules that its suppliers have no choice but to follow. And, of course, increasingly there is no option but to sell your stuff on Amazon.

EC2 Instance Update – M5 Instances with Local NVMe Storage (M5d)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-instance-update-m5-instances-with-local-nvme-storage-m5d/

Earlier this month we launched the C5 Instances with Local NVMe Storage and I told you that we would be doing the same for additional instance types in the near future!

Today we are introducing M5 instances equipped with local NVMe storage. Available for immediate use in 5 regions, these instances are a great fit for workloads that require a balance of compute and memory resources. Here are the specs:

Instance Name | vCPUs | RAM     | Local Storage       | EBS-Optimized Bandwidth | Network Bandwidth
m5d.large     | 2     | 8 GiB   | 1 x 75 GB NVMe SSD  | Up to 2.120 Gbps        | Up to 10 Gbps
m5d.xlarge    | 4     | 16 GiB  | 1 x 150 GB NVMe SSD | Up to 2.120 Gbps        | Up to 10 Gbps
m5d.2xlarge   | 8     | 32 GiB  | 1 x 300 GB NVMe SSD | Up to 2.120 Gbps        | Up to 10 Gbps
m5d.4xlarge   | 16    | 64 GiB  | 1 x 600 GB NVMe SSD | 2.210 Gbps              | Up to 10 Gbps
m5d.12xlarge  | 48    | 192 GiB | 2 x 900 GB NVMe SSD | 5.0 Gbps                | 10 Gbps
m5d.24xlarge  | 96    | 384 GiB | 4 x 900 GB NVMe SSD | 10.0 Gbps               | 25 Gbps

The M5d instances are powered by Custom Intel® Xeon® Platinum 8175M series processors running at 2.5 GHz, including support for AVX-512.

You can use any AMI that includes drivers for the Elastic Network Adapter (ENA) and NVMe; this includes the latest Amazon Linux, Microsoft Windows (Server 2008 R2, Server 2012, Server 2012 R2 and Server 2016), Ubuntu, RHEL, SUSE, and CentOS AMIs.

Here are a couple of things to keep in mind about the local NVMe storage on the M5d instances:

Naming – You don’t have to specify a block device mapping in your AMI or during the instance launch; the local storage will show up as one or more devices (/dev/nvme*1 on Linux) after the guest operating system has booted.

Encryption – Each local NVMe device is hardware encrypted using the XTS-AES-256 block cipher and a unique key. Each key is destroyed when the instance is stopped or terminated.

Lifetime – Local NVMe devices have the same lifetime as the instance they are attached to, and do not stick around after the instance has been stopped or terminated.

Available Now
M5d instances are available in On-Demand, Reserved Instance, and Spot form in the US East (N. Virginia), US West (Oregon), EU (Ireland), US East (Ohio), and Canada (Central) Regions. Prices vary by Region, and are just a bit higher than for the equivalent M5 instances.

Jeff;

 

AWS Online Tech Talks – June 2018

Post Syndicated from Devin Watson original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-june-2018/

AWS Online Tech Talks – June 2018

Join us this month to learn about AWS services and solutions. New this month, we have a fireside chat with the GM of Amazon WorkSpaces and our 2nd episode of the “How to re:Invent” series. We’ll also cover best practices, deep dives, use cases and more! Join us and register today!

Note – All sessions are free and in Pacific Time.

Tech talks featured this month:

 

Analytics & Big Data

June 18, 2018 | 11:00 AM – 11:45 AM PT – Get Started with Real-Time Streaming Data in Under 5 Minutes – Learn how to use Amazon Kinesis to capture, store, and analyze streaming data in real-time including IoT device data, VPC flow logs, and clickstream data.
June 20, 2018 | 11:00 AM – 11:45 AM PT – Insights For Everyone – Deploying Data across your Organization – Learn how to deploy data at scale using AWS Analytics and QuickSight’s new reader role and usage based pricing.

 

AWS re:Invent
June 13, 2018 | 05:00 PM – 05:30 PM PT – Episode 2: AWS re:Invent Breakout Content Secret Sauce – Hear from one of our own AWS content experts as we dive deep into the re:Invent content strategy and how we maintain a high bar.
Compute

June 25, 2018 | 01:00 PM – 01:45 PM PT – Accelerating Containerized Workloads with Amazon EC2 Spot Instances – Learn how to efficiently deploy containerized workloads and easily manage clusters at any scale at a fraction of the cost with Spot Instances.

June 26, 2018 | 01:00 PM – 01:45 PM PT – Ensuring Your Windows Server Workloads Are Well-Architected – Get the benefits, best practices and tools on running your Microsoft Workloads on AWS leveraging a well-architected approach.

 

Containers
June 25, 2018 | 09:00 AM – 09:45 AM PT – Running Kubernetes on AWS – Learn about the basics of running Kubernetes on AWS, including how to set up masters, networking, and security, and how to add auto-scaling to your cluster.

 

Databases

June 18, 2018 | 01:00 PM – 01:45 PM PT – Oracle to Amazon Aurora Migration, Step by Step – Learn how to migrate your Oracle database to Amazon Aurora.
DevOps

June 20, 2018 | 09:00 AM – 09:45 AM PT – Set Up a CI/CD Pipeline for Deploying Containers Using the AWS Developer Tools – Learn how to set up a CI/CD pipeline for deploying containers using the AWS Developer Tools.

 

Enterprise & Hybrid
June 18, 2018 | 09:00 AM – 09:45 AM PT – De-risking Enterprise Migration with AWS Managed Services – Learn how enterprise customers are de-risking cloud adoption with AWS Managed Services.

June 19, 2018 | 11:00 AM – 11:45 AM PT – Launch AWS Faster using Automated Landing Zones – Learn how the AWS Landing Zone can automate the set up of best practice baselines when setting up new AWS environments.

June 21, 2018 | 11:00 AM – 11:45 AM PT – Leading Your Team Through a Cloud Transformation – Learn how you can help lead your organization through a cloud transformation.

June 21, 2018 | 01:00 PM – 01:45 PM PT – Enabling New Retail Customer Experiences with Big Data – Learn how AWS can help retailers realize actual value from their big data and deliver on differentiated retail customer experiences.

June 28, 2018 | 01:00 PM – 01:45 PM PT – Fireside Chat: End User Collaboration on AWS – Learn how End User Compute services can help you deliver access to desktops and applications anywhere, anytime, using any device.
IoT

June 27, 2018 | 11:00 AM – 11:45 AM PT – AWS IoT in the Connected Home – Learn how to use AWS IoT to build innovative Connected Home products.

 

Machine Learning

June 19, 2018 | 09:00 AM – 09:45 AM PT – Integrating Amazon SageMaker into your Enterprise – Learn how to integrate Amazon SageMaker and other AWS Services within an Enterprise environment.

June 21, 2018 | 09:00 AM – 09:45 AM PT – Building Text Analytics Applications on AWS using Amazon Comprehend – Learn how you can unlock the value of your unstructured data with NLP-based text analytics.

 

Management Tools

June 20, 2018 | 01:00 PM – 01:45 PM PT – Optimizing Application Performance and Costs with Auto Scaling – Learn how selecting the right scaling option can help optimize application performance and costs.

 

Mobile
June 25, 2018 | 11:00 AM – 11:45 AM PT – Drive User Engagement with Amazon Pinpoint – Learn how Amazon Pinpoint simplifies and streamlines effective user engagement.

 

Security, Identity & Compliance

June 26, 2018 | 09:00 AM – 09:45 AM PT – Understanding AWS Secrets Manager – Learn how AWS Secrets Manager helps you rotate and manage access to secrets centrally.
June 28, 2018 | 09:00 AM – 09:45 AM PT – Using Amazon Inspector to Discover Potential Security Issues – See how Amazon Inspector can be used to discover security issues of your instances.

 

Serverless

June 19, 2018 | 01:00 PM – 01:45 PM PT – Productionize Serverless Application Building and Deployments with AWS SAM – Learn expert tips and techniques for building and deploying serverless applications at scale with AWS SAM.

 

Storage

June 26, 2018 | 11:00 AM – 11:45 AM PT – Deep Dive: Hybrid Cloud Storage with AWS Storage Gateway – Learn how you can reduce your on-premises infrastructure by using the AWS Storage Gateway to connect your applications to the scalable and reliable AWS storage services.
June 27, 2018 | 01:00 PM – 01:45 PM PT – Changing the Game: Extending Compute Capabilities to the Edge – Discover how to change the game for IIoT and edge analytics applications with AWS Snowball Edge plus enhanced Compute instances.
June 28, 2018 | 11:00 AM – 11:45 AM PT – Big Data and Analytics Workloads on Amazon EFS – Get best practices and deployment advice for running big data and analytics workloads on Amazon EFS.

Some quick thoughts on the public discussion regarding facial recognition and Amazon Rekognition this past week

Post Syndicated from Dr. Matt Wood original https://aws.amazon.com/blogs/aws/some-quick-thoughts-on-the-public-discussion-regarding-facial-recognition-and-amazon-rekognition-this-past-week/

We have seen a lot of discussion this past week about the role of Amazon Rekognition in facial recognition, surveillance, and civil liberties, and we wanted to share some thoughts.

Amazon Rekognition is a service we announced in 2016. It makes use of new technologies – such as deep learning – and puts them in the hands of developers in an easy-to-use, low-cost way. Since then, we have seen customers use the image and video analysis capabilities of Amazon Rekognition in ways that materially benefit both society (e.g. preventing human trafficking, inhibiting child exploitation, reuniting missing children with their families, and building educational apps for children), and organizations (enhancing security through multi-factor authentication, finding images more easily, or preventing package theft). Amazon Web Services (AWS) is not the only provider of services like these, and we remain excited about how image and video analysis can be a driver for good in the world, including in the public sector and law enforcement.

There have always been and will always be risks with new technology capabilities. Each organization choosing to employ technology must act responsibly or risk legal penalties and public condemnation. AWS takes its responsibilities seriously. But we believe it is the wrong approach to impose a ban on promising new technologies because they might be used by bad actors for nefarious purposes in the future. The world would be a very different place if we had restricted people from buying computers because it was possible to use that computer to do harm. The same can be said of thousands of technologies upon which we all rely each day. Through responsible use, the benefits have far outweighed the risks.

Customers are off to a great start with Amazon Rekognition; the evidence of the positive impact this new technology can provide is strong (and growing by the week), and we’re excited to continue to support our customers in its responsible use.

-Dr. Matt Wood, general manager of artificial intelligence at AWS

Amazon SageMaker Updates – Tokyo Region, CloudFormation, Chainer, and GreenGrass ML

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/sagemaker-tokyo-summit-2018/

Today, at the AWS Summit in Tokyo we announced a number of updates and new features for Amazon SageMaker. Starting today, SageMaker is available in Asia Pacific (Tokyo)! SageMaker also now supports CloudFormation. A new machine learning framework, Chainer, is now available in the SageMaker Python SDK, in addition to MXNet and TensorFlow. Finally, support for running Chainer models on several devices was added to AWS Greengrass Machine Learning.

Amazon SageMaker Chainer Estimator


Chainer is a popular, flexible, and intuitive deep learning framework. Chainer networks work on a “Define-by-Run” scheme, where the network topology is defined dynamically via forward computation. This is in contrast to many other frameworks which work on a “Define-and-Run” scheme where the topology of the network is defined separately from the data. A lot of developers enjoy the Chainer scheme since it allows them to write their networks with native python constructs and tools.

Luckily, using Chainer with SageMaker is just as easy as using a TensorFlow or MXNet estimator. In fact, it might even be a bit easier, since it’s likely you can take your existing scripts and use them to train on SageMaker with very few modifications. With TensorFlow or MXNet, users have to implement a train function with a particular signature. With Chainer, your scripts can be a little more portable, as you can simply read from a few environment variables like SM_MODEL_DIR, SM_NUM_GPUS, and others. We can wrap our existing script in an if __name__ == '__main__': guard and invoke it locally or on SageMaker.


import argparse
import os

if __name__ =='__main__':

    parser = argparse.ArgumentParser()

    # hyperparameters sent by the client are passed as command-line arguments to the script.
    parser.add_argument('--epochs', type=int, default=10)
    parser.add_argument('--batch-size', type=int, default=64)
    parser.add_argument('--learning-rate', type=float, default=0.05)

    # Data, model, and output directories
    parser.add_argument('--output-data-dir', type=str, default=os.environ['SM_OUTPUT_DATA_DIR'])
    parser.add_argument('--model-dir', type=str, default=os.environ['SM_MODEL_DIR'])
    parser.add_argument('--train', type=str, default=os.environ['SM_CHANNEL_TRAIN'])
    parser.add_argument('--test', type=str, default=os.environ['SM_CHANNEL_TEST'])

    args, _ = parser.parse_known_args()

    # ... load from args.train and args.test, train a model, write model to args.model_dir.

Then, we can run that script locally or use the SageMaker Python SDK to launch it on some GPU instances in SageMaker. The hyperparameters will get passed in to the script as command-line arguments and the environment variables above will be auto-populated. When we call fit, the input channels we pass will be populated in the SM_CHANNEL_* environment variables.


from sagemaker.chainer.estimator import Chainer
# Create my estimator
chainer_estimator = Chainer(
    entry_point='example.py',
    train_instance_count=1,
    train_instance_type='ml.p3.2xlarge',
    hyperparameters={'epochs': 10, 'batch-size': 64}
)
# Train my estimator
chainer_estimator.fit({'train': train_input, 'test': test_input})

# Deploy my estimator to a SageMaker Endpoint and get a Predictor
predictor = chainer_estimator.deploy(
    instance_type="ml.m4.xlarge",
    initial_instance_count=1
)

Now, instead of bringing your own docker container for training and hosting with Chainer, you can just maintain your script. You can see the full sagemaker-chainer-containers on github. One of my favorite features of the new container is built-in chainermn for easy multi-node distribution of your chainer training jobs.

There’s a lot more documentation and information available in both the README and the example notebooks.

AWS GreenGrass ML with Chainer

AWS GreenGrass ML now includes a pre-built Chainer package for all devices powered by Intel Atom, NVIDIA Jetson TX2, and Raspberry Pi. So, now GreenGrass ML provides pre-built packages for TensorFlow, Apache MXNet, and Chainer! You can train your models on SageMaker and then easily deploy them to any GreenGrass-enabled device using GreenGrass ML.

JAWS UG

I want to give a quick shout out to all of our wonderful and inspirational friends in the JAWS UG who attended the AWS Summit in Tokyo today. I’ve very much enjoyed seeing your pictures of the summit. Thanks for making Japan an amazing place for AWS developers! I can’t wait to visit again and meet with all of you.

Randall