Tag Archives: artificial intelligence

Find Your Most Expensive Lines of Code – Amazon CodeGuru Is Now Generally Available

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/find-your-most-expensive-lines-of-code-amazon-codeguru-is-now-generally-available/

Bringing new applications into production, maintaining their code base as they grow and evolve, and responding to operational issues at the same time is a challenging task. For this reason, you can find many ideas on how to structure your teams, which methodologies to apply, and how to safely automate your software delivery pipeline.

At re:Invent last year, we introduced a preview of Amazon CodeGuru, a developer tool powered by machine learning that helps you improve your applications and troubleshoot issues with automated code reviews and performance recommendations based on runtime data. Over the last few months, many improvements have been launched, including a more cost-effective pricing model, support for Bitbucket repositories, and the ability to start the profiling agent using a command line switch, so that you no longer need to modify the code of your application, or add dependencies, to run the agent.

You can use CodeGuru in two ways:

  • CodeGuru Reviewer uses program analysis and machine learning to detect potential defects that are difficult for developers to find, and recommends fixes in your Java code. The code can be stored in GitHub (now also in GitHub Enterprise), AWS CodeCommit, or Bitbucket repositories. When you submit a pull request on a repository that is associated with CodeGuru Reviewer, it provides recommendations for how to improve your code. Each pull request corresponds to a code review, and each code review can include multiple recommendations that appear as comments on the pull request.
  • CodeGuru Profiler provides interactive visualizations and recommendations that help you fine-tune your application performance and troubleshoot operational issues using runtime data from your live applications. It currently supports applications written in Java virtual machine (JVM) languages such as Java, Scala, Kotlin, Groovy, Jython, JRuby, and Clojure. CodeGuru Profiler can help you find the most expensive lines of code, in terms of CPU usage or introduced latency, and suggest ways you can improve efficiency and remove bottlenecks. You can use CodeGuru Profiler in production, and when you test your application with a meaningful workload, for example in a pre-production environment.

Today, Amazon CodeGuru is generally available with the addition of many new features.

In CodeGuru Reviewer, we included the following:

  • Support for GitHub Enterprise – You can now scan your pull requests and get recommendations against your source code on GitHub Enterprise on-premises repositories, together with a description of what’s causing the issue and how to remediate it.
  • New types of recommendations to solve defects and improve your code – For example, checking input validation, to avoid issues that can compromise security and performance, and looking for multiple copies of code that do the same thing.

In CodeGuru Profiler, you can find these new capabilities:

  • Anomaly detection – We automatically detect anomalies in the application profile for those methods that represent the highest proportion of CPU time or latency.
  • Lambda function support – You can now profile AWS Lambda functions just like applications hosted on Amazon Elastic Compute Cloud (EC2) and containerized applications running on Amazon ECS and Amazon Elastic Kubernetes Service, including those using AWS Fargate.
  • Cost of issues in the recommendation report – Recommendations contain actionable resolution steps that explain what the problem is, the CPU impact, and how to fix the issue. To help you better prioritize your activities, you now also get an estimate of the savings you can achieve by applying the recommendation.
  • Color-my-code – In the visualizations, to help you easily find your own code, we are coloring your methods differently from frameworks and other libraries you may use.
  • CloudWatch metrics and alerts – To keep track of and monitor efficiency issues that have been discovered.

Let’s see some of these new features at work!

Using CodeGuru Reviewer with a Lambda Function
I create a new repo in my GitHub account, and leave it empty for now. Locally, where I am developing a Lambda function using the Java 11 runtime, I initialize my Git repo and add only the README.md file to the master branch. In this way, I can add all the code as a pull request later and have it go through a code review by CodeGuru.

git init
git add README.md
git commit -m "First commit"

Now, I add the GitHub repo as origin, and push my changes to the new repo:

git remote add origin https://github.com/<my-user-id>/amazon-codeguru-sample-lambda-function.git
git push -u origin master

I associate the repository in the CodeGuru console:

When the repository is associated, I create a new dev branch, add all my local files to it, and push it remotely:

git checkout -b dev
git add .
git commit -m "Code added to the dev branch"
git push --set-upstream origin dev

In the GitHub console, I open a new pull request by comparing changes across the two branches, master and dev. I verify that the pull request is able to merge, then I create it.

Since the repository is associated with CodeGuru, a code review is listed as Pending in the Code reviews section of the CodeGuru console.

After a few minutes, the code review status is Completed, and CodeGuru Reviewer issues a recommendation on the same GitHub page where the pull request was created.

Oops! I am creating the Amazon DynamoDB service object inside the function invocation method. In this way, it cannot be reused across invocations. This is not efficient.

To improve the performance of my Lambda function, I follow the CodeGuru recommendation, and move the declaration of the DynamoDB service object to a static final attribute of the Java application object, so that it is instantiated only once, during function initialization. Then, I follow the link in the recommendation to learn more best practices for working with Lambda functions.
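
The same pattern applies to any AWS SDK client. As a rough illustration (my actual function is written in Java; this is a minimal Python sketch of the same idea, with a hypothetical table name):

import boto3

# Create the service object once, during function initialization, so that it is
# reused across invocations instead of being rebuilt on every request.
# The table name below is hypothetical.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("my-sample-table")

def handler(event, context):
    # The handler only uses the already-initialized client.
    table.put_item(Item={"id": event["id"]})
    return {"statusCode": 200}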

Using CodeGuru Profiler with a Lambda Function
In the CodeGuru console, I create a MyServerlessApp-Development profiling group and select the Lambda compute platform.

Next, I give the AWS Identity and Access Management (IAM) role used by my Lambda function permissions to submit data to this profiling group.

Now the console gives me all the information I need to profile my Lambda function. To configure the profiling agent, I use a couple of environment variables (see the sketch after this list):

  • AWS_CODEGURU_PROFILER_GROUP_ARN to specify the ARN of the profiling group to use.
  • AWS_CODEGURU_PROFILER_ENABLED to enable (TRUE) or disable (FALSE) profiling.
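
For example, here is a minimal sketch of setting both variables on the function with the AWS SDK for Python; the function name and profiling group ARN below are hypothetical placeholders:

import boto3

lambda_client = boto3.client("lambda")

# Hypothetical function name and profiling group ARN.
lambda_client.update_function_configuration(
    FunctionName="my-serverless-app-function",
    Environment={
        "Variables": {
            "AWS_CODEGURU_PROFILER_GROUP_ARN": "arn:aws:codeguru-profiler:us-east-1:123456789012:profilingGroup/MyServerlessApp-Development",
            "AWS_CODEGURU_PROFILER_ENABLED": "TRUE",
        }
    },
)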

I follow the instructions (for Maven and Gradle) to add a dependency, and include the profiling agent in the build. Then, I update the code of the Lambda function to wrap the handler function inside the LambdaProfiler provided by the agent.

To generate some load, I start a few scripts invoking my function using Amazon API Gateway as the trigger. After a few minutes, the profiling group starts to show visualizations describing the runtime behavior of my Lambda function.

For example, I can see how much CPU time is spent in the different methods of my function. At the bottom, there are the entry point methods. As I scroll up, I find methods that are called deeper in the stack trace. I right-click and hide the LambdaRuntimeClient methods to focus on my code. Note that my methods are colored differently than those in the packages I am using, such as the AWS SDK for Java.

I am mostly interested in what happens in the handler method invoked by the Lambda platform. I select the handler method, and now it becomes the new “base” of the visualization.

As I move my pointer over each of my methods, I get more information, including an estimate of the yearly cost of running that specific part of the code in production, based on the load experienced by the profiling agent during the selected time window. In my case, the handler function cost is estimated to be $6. If I select the two main functions above it, I get an estimate of $3 each. The cost estimation works for code running on Lambda functions, EC2 instances, and containerized applications.

Similarly, I can visualize Latency, to understand how much time is spent inside the methods in my code. I keep the Lambda function handler method selected to drill down into what is under my control, and see where time is being spent the most.

CodeGuru Profiler also provides a recommendation based on the data collected. I am spending too much time (more than 4%) managing encryption. I can use a more efficient crypto provider, such as the open source Amazon Corretto Crypto Provider, described in this blog post. This should lower the time spent to the expected level, about 1% of my profile.

Finally, I edit the profiling group to enable notifications. In this way, if CodeGuru detects an anomaly in the profile of my application, I am notified in one or more Amazon Simple Notification Service (SNS) topics.

Available Now
Amazon CodeGuru is available today in 10 regions, and we are working to add more regions in the coming months. For regional availability, please see the AWS Region Table.

CodeGuru helps you improve your application code and reduce compute and infrastructure costs with an automated code reviewer and application profiler that provide intelligent recommendations. Using visualizations based on runtime data, you can quickly find the most expensive lines of code of your applications. With CodeGuru, you pay only for what you use. Pricing is based on the lines of code analyzed by CodeGuru Reviewer, and on sampling hours for CodeGuru Profiler.

To learn more, please see the documentation.

Danilo

Amazon EKS Now Supports EC2 Inf1 Instances

Post Syndicated from Julien Simon original https://aws.amazon.com/blogs/aws/amazon-eks-now-supports-ec2-inf1-instances/

Amazon Elastic Kubernetes Service (EKS) has quickly become a leading choice for machine learning workloads. It combines the developer agility and scalability of Kubernetes with the wide selection of Amazon Elastic Compute Cloud (EC2) instance types available on AWS, such as the C5, P3, and G4 families.

As models become more sophisticated, hardware acceleration is increasingly required to deliver fast predictions at high throughput. Today, we’re very happy to announce that AWS customers can now use the Amazon EC2 Inf1 instances on Amazon Elastic Kubernetes Service, for high performance and the lowest prediction cost in the cloud.

A primer on EC2 Inf1 instances
Inf1 instances were launched at AWS re:Invent 2019. They are powered by AWS Inferentia, a custom chip built from the ground up by AWS to accelerate machine learning inference workloads.

Inf1 instances are available in multiple sizes, with 1, 4, or 16 AWS Inferentia chips, with up to 100 Gbps network bandwidth and up to 19 Gbps EBS bandwidth. An AWS Inferentia chip contains four NeuronCores. Each one implements a high-performance systolic array matrix multiply engine, which massively speeds up typical deep learning operations such as convolution and transformers. NeuronCores are also equipped with a large on-chip cache, which helps cut down on external memory accesses, saving I/O time in the process. When several AWS Inferentia chips are available on an Inf1 instance, you can partition a model across them and store it entirely in cache memory. Alternatively, to serve multi-model predictions from a single Inf1 instance, you can partition the NeuronCores of an AWS Inferentia chip across several models.

Compiling Models for EC2 Inf1 Instances
To run machine learning models on Inf1 instances, you need to compile them to a hardware-optimized representation using the AWS Neuron SDK. All tools are readily available on the AWS Deep Learning AMI, and you can also install them on your own instances. You’ll find instructions in the Deep Learning AMI documentation, as well as tutorials for TensorFlow, PyTorch, and Apache MXNet in the AWS Neuron SDK repository.
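
As a rough sketch of what compilation looks like for a TensorFlow model (assuming the tensorflow-neuron package from the Neuron SDK and hypothetical SavedModel paths; check the Neuron tutorials for the exact API of your framework version):

import tensorflow.neuron as tfn

# Compile a standard TensorFlow SavedModel into a Neuron-optimized SavedModel.
# The output directory is what TensorFlow Serving loads on an Inf1 instance.
tfn.saved_model.compile(
    "bert_savedmodel/",         # input: regular SavedModel directory (hypothetical path)
    "bert_savedmodel_neuron/",  # output: Neuron-compiled SavedModel directory
)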

In the demo below, I will show you how to deploy a Neuron-optimized model on an EKS cluster of Inf1 instances, and how to serve predictions with TensorFlow Serving. The model in question is BERT, a state-of-the-art model for natural language processing tasks. This is a huge model with hundreds of millions of parameters, making it a great candidate for hardware acceleration.

Building an EKS Cluster of EC2 Inf1 Instances
First of all, let’s build a cluster with two inf1.2xlarge instances. I can easily do this with eksctl, the command-line tool to provision and manage EKS clusters. You can find installation instructions in the EKS documentation.

Here is the configuration file for my cluster. Eksctl detects that I’m launching a node group with an Inf1 instance type, and will start the worker nodes using the EKS-optimized Accelerated AMI.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: cluster-inf1
  region: us-west-2
nodeGroups:
  - name: ng1-public
    instanceType: inf1.2xlarge
    minSize: 0
    maxSize: 3
    desiredCapacity: 2
    ssh:
      allow: true

Then, I use eksctl to create the cluster. This process will take approximately 10 minutes.

$ eksctl create cluster -f inf1-cluster.yaml

Eksctl automatically installs the Neuron device plugin in your cluster. This plugin advertises Neuron devices to the Kubernetes scheduler, so that containers can request them in a deployment spec. I can check with kubectl that the device plugin container is running fine on both Inf1 instances.

$ kubectl get pods -n kube-system
NAME                                  READY STATUS  RESTARTS AGE
aws-node-tl5xv                        1/1   Running 0        14h
aws-node-wk6qm                        1/1   Running 0        14h
coredns-86d5cbb4bd-4fxrh              1/1   Running 0        14h
coredns-86d5cbb4bd-sts7g              1/1   Running 0        14h
kube-proxy-7px8d                      1/1   Running 0        14h
kube-proxy-zqvtc                      1/1   Running 0        14h
neuron-device-plugin-daemonset-888j4  1/1   Running 0        14h
neuron-device-plugin-daemonset-tq9kc  1/1   Running 0        14h

Next, I define AWS credentials in a Kubernetes secret. They will allow me to grab my BERT model stored in S3. Please note that both keys need to be base64-encoded (see the encoding snippet after the manifest).

apiVersion: v1 
kind: Secret 
metadata: 
  name: aws-s3-secret 
type: Opaque 
data: 
  AWS_ACCESS_KEY_ID: <base64-encoded value> 
  AWS_SECRET_ACCESS_KEY: <base64-encoded value>
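
A quick way to produce the encoded values (the credentials below are AWS documentation placeholders, not real keys):

import base64

# Encode each value before pasting it into the secret manifest.
print(base64.b64encode(b"AKIAIOSFODNN7EXAMPLE").decode())
print(base64.b64encode(b"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY").decode())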

Finally, I store these credentials on the cluster.

$ kubectl apply -f secret.yaml

The cluster is correctly set up. Now, let’s build an application container storing a Neuron-enabled version of TensorFlow Serving.

Building an Application Container for TensorFlow Serving
The Dockerfile is very simple. We start from an Amazon Linux 2 base image. Then, we install the AWS CLI, and the TensorFlow Serving package available in the Neuron repository.

FROM amazonlinux:2
RUN yum install -y awscli
RUN echo $'[neuron] \n\
name=Neuron YUM Repository \n\
baseurl=https://yum.repos.neuron.amazonaws.com \n\
enabled=1' > /etc/yum.repos.d/neuron.repo
RUN rpm --import https://yum.repos.neuron.amazonaws.com/GPG-PUB-KEY-AMAZON-AWS-NEURON.PUB
RUN yum install -y tensorflow-model-server-neuron

I build the image, create an Amazon Elastic Container Registry repository, and push the image to it.

$ docker build . -f Dockerfile -t tensorflow-model-server-neuron
$ docker tag tensorflow-model-server-neuron 123456789012.dkr.ecr.us-west-2.amazonaws.com/inf1-demo
$ aws ecr create-repository --repository-name inf1-demo
$ docker push 123456789012.dkr.ecr.us-west-2.amazonaws.com/inf1-demo

Our application container is ready. Now, let’s define a Kubernetes service that will use this container to serve BERT predictions. I’m using a model that has already been compiled with the Neuron SDK. You can compile your own using the instructions available in the Neuron SDK repository.

Deploying BERT as a Kubernetes Service
The deployment manages two containers: the Neuron runtime container, and my application container. The Neuron runtime runs as a sidecar container, and is used to interact with the AWS Inferentia chips. At startup, the application container configures the AWS CLI with the appropriate security credentials. Then, it fetches the BERT model from S3. Finally, it launches TensorFlow Serving, loading the BERT model and waiting for prediction requests. For this purpose, the HTTP and gRPC ports are open. Here is the full manifest.

kind: Service
apiVersion: v1
metadata:
  name: eks-neuron-test
  labels:
    app: eks-neuron-test
spec:
  ports:
  - name: http-tf-serving
    port: 8500
    targetPort: 8500
  - name: grpc-tf-serving
    port: 9000
    targetPort: 9000
  selector:
    app: eks-neuron-test
    role: master
  type: ClusterIP
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: eks-neuron-test
  labels:
    app: eks-neuron-test
    role: master
spec:
  replicas: 2
  selector:
    matchLabels:
      app: eks-neuron-test
      role: master
  template:
    metadata:
      labels:
        app: eks-neuron-test
        role: master
    spec:
      volumes:
        - name: sock
          emptyDir: {}
      containers:
      - name: eks-neuron-test
        image: 123456789012.dkr.ecr.us-west-2.amazonaws.com/inf1-demo:latest
        command: ["/bin/sh","-c"]
        args:
          - "mkdir ~/.aws/ && \
           echo '[eks-test-profile]' > ~/.aws/credentials && \
           echo AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID >> ~/.aws/credentials && \
           echo AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY >> ~/.aws/credentials; \
           /usr/bin/aws --profile eks-test-profile s3 sync s3://jsimon-inf1-demo/bert /tmp/bert && \
           /usr/local/bin/tensorflow_model_server_neuron --port=9000 --rest_api_port=8500 --model_name=bert_mrpc_hc_gelus_b4_l24_0926_02 --model_base_path=/tmp/bert/"
        ports:
        - containerPort: 8500
        - containerPort: 9000
        imagePullPolicy: Always
        env:
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              key: AWS_ACCESS_KEY_ID
              name: aws-s3-secret
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              key: AWS_SECRET_ACCESS_KEY
              name: aws-s3-secret
        - name: NEURON_RTD_ADDRESS
          value: unix:/sock/neuron.sock

        resources:
          limits:
            cpu: 4
            memory: 4Gi
          requests:
            cpu: "1"
            memory: 1Gi
        volumeMounts:
          - name: sock
            mountPath: /sock

      - name: neuron-rtd
        image: 790709498068.dkr.ecr.us-west-2.amazonaws.com/neuron-rtd:1.0.6905.0
        securityContext:
          capabilities:
            add:
            - SYS_ADMIN
            - IPC_LOCK

        volumeMounts:
          - name: sock
            mountPath: /sock
        resources:
          limits:
            hugepages-2Mi: 256Mi
            aws.amazon.com/neuron: 1
          requests:
            memory: 1024Mi

I use kubectl to create the service.

$ kubectl create -f bert_service.yml

A few seconds later, the pods are up and running.

$ kubectl get pods
NAME                           READY STATUS  RESTARTS AGE
eks-neuron-test-5d59b55986-7kdml 2/2   Running 0        14h
eks-neuron-test-5d59b55986-gljlq 2/2   Running 0        14h

Finally, I redirect service port 9000 to local port 9000, to let my prediction client connect locally.

$ kubectl port-forward svc/eks-neuron-test 9000:9000 &

Now, everything is ready for prediction, so let’s invoke the model.

Predicting with BERT on EKS and Inf1
The inner workings of BERT are beyond the scope of this post. This particular model expects a sequence of 128 tokens, encoding the words of two sentences we’d like to compare for semantic equivalence.

Here, I’m only interested in measuring prediction latency, so dummy data is fine. I build 100 prediction requests storing a sequence of 128 zeros. I send them to the TensorFlow Serving endpoint via grpc, and I compute the average prediction time.

import numpy as np
import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc
import time

if __name__ == '__main__':
    channel = grpc.insecure_channel('localhost:9000')
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'bert_mrpc_hc_gelus_b4_l24_0926_02'
    i = np.zeros([1, 128], dtype=np.int32)
    request.inputs['input_ids'].CopyFrom(tf.contrib.util.make_tensor_proto(i, shape=i.shape))
    request.inputs['input_mask'].CopyFrom(tf.contrib.util.make_tensor_proto(i, shape=i.shape))
    request.inputs['segment_ids'].CopyFrom(tf.contrib.util.make_tensor_proto(i, shape=i.shape))

    latencies = []
    for i in range(100):
        start = time.time()
        result = stub.Predict(request)
        latencies.append(time.time() - start)
        print("Inference successful: {}".format(i))
    print ("Ran {} inferences successfully. Latency average = {}".format(len(latencies), np.average(latencies)))

On average, prediction took 59.2 ms, as shown in the output below. As far as BERT goes, this is pretty good!

Ran 100 inferences successfully. Latency average = 0.05920819044113159

In real life, we would certainly batch prediction requests in order to increase throughput. If needed, we could also scale to larger Inf1 instances with several Inferentia chips, and deliver even more prediction performance at low cost.
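
For illustration, here is a minimal sketch of what a batched request could look like with the same client, assuming the compiled model accepts a larger batch dimension (the batch size is hypothetical):

import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2

# Batch 8 sequences of 128 tokens into a single request (hypothetical batch size).
batch = np.zeros([8, 128], dtype=np.int32)
request = predict_pb2.PredictRequest()
request.model_spec.name = 'bert_mrpc_hc_gelus_b4_l24_0926_02'
for name in ('input_ids', 'input_mask', 'segment_ids'):
    request.inputs[name].CopyFrom(tf.contrib.util.make_tensor_proto(batch, shape=batch.shape))
# stub.Predict(request) now returns predictions for all 8 sequences at once.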

Getting Started
Kubernetes users can deploy Amazon Elastic Compute Cloud (EC2) Inf1 instances on Amazon Elastic Kubernetes Service today in the US East (N. Virginia) and US West (Oregon) regions. As Inf1 deployment progresses, you’ll be able to use them with Amazon Elastic Kubernetes Service in more regions.

Give this a try, and please send us feedback either through your usual AWS Support contacts, on the AWS Forum for Amazon Elastic Kubernetes Service, or on the container roadmap on GitHub.

– Julien

New – Label 3D Point Clouds with Amazon SageMaker Ground Truth

Post Syndicated from Julien Simon original https://aws.amazon.com/blogs/aws/new-label-3d-point-clouds-with-amazon-sagemaker-ground-truth/

Launched at AWS re:Invent 2018, Amazon SageMaker Ground Truth is a capability of Amazon SageMaker that makes it easy to annotate machine learning datasets. Customers can efficiently and accurately label image and text data with built-in workflows, or any other type of data with custom workflows. Data samples are automatically distributed to a workforce (private, third-party, or Amazon Mechanical Turk), and annotations are stored in Amazon Simple Storage Service (S3). Optionally, automated data labeling may also be enabled, reducing both the amount of time required to label the dataset, and the associated costs.

About a year ago, I met with Automotive customers who expressed interest in labeling 3-dimensional (3D) datasets for autonomous driving. Captured by LIDAR sensors, these datasets are particularly large and complex. Data is stored in frames that typically contain 50,000 to 5 million points, and can weigh up to hundreds of Megabytes each. Frames are either stored individually, or in sequences that make it easier to track moving objects.

As you can imagine, labeling these datasets is extremely time-consuming, as workers need to navigate complex 3D scenes and annotate many different object classes. This often requires building and managing very complex tools. Always looking to help customers build simpler and more efficient workflows, the Ground Truth team gathered more feedback, and got to work.

Today, I’m extremely happy to announce that you can use Amazon SageMaker Ground Truth to label 3D point clouds using a built-in editor, and state-of-the-art assistive labeling features.

Introducing 3D Point Cloud Labeling
Just like for other Ground Truth task types, input data for 3D point clouds has to be stored in an S3 bucket. It also needs to be described by a manifest file, a JSON file containing both the location of the frames in S3 and their attributes. A dataset may contain either single-frame data, or multi-frame sequences.

Optionally, the dataset may also include image data captured by on-board cameras. Using a feature called “sensor fusion”, Ground Truth can synchronize a 3D point cloud with up to 8 cameras. Thanks to this, workers get a real-life view of the scene, and they can also interchangeably apply labels to 2D images and 3D point clouds.

Once the manifest file is ready, Ground Truth lets you create the following task types:

  • Object Detection: identify objects of interest within a 3D point cloud frame.
  • Object Tracking: track objects of interest across a sequence of 3D point cloud frames.
  • Semantic Segmentation: segment the points of a 3D point cloud frame into predefined categories.

These can either be labeling jobs where workers annotate new frames, or adjustment jobs where they review and fine-tune existing annotations. Jobs may be distributed either to a private workforce or to a vendor workforce you picked on AWS Marketplace.

Using the built-in graphical user interface (GUI) and its shortcuts for navigation and labeling, workers can quickly and accurately apply labels, boxes and categories to 3D objects (“car”, “pedestrian”, and so on). They can also add user-defined attributes, such as the color of a car, or whether an object is fully or partially visible.

The GUI includes many assistive labeling features that significantly simplify labeling work, save time, and improve the quality of annotations. Here are a few examples:

  • Snapping: Ground Truth infers a tight-fitting box around the object.
  • Interpolation: the labeler annotates an object in the first and last frames of a sequence. Ground Truth automatically annotates it in the middle frames.
  • Ground detection and removal: Ground Truth can automatically detect and remove 3D points belonging to the ground from object boxes.

Even with assistive labeling, it may take a while to annotate complex frames and sequences, so work is saved periodically to avoid any data loss.

Preparing 3D Point Cloud Datasets
As previously mentioned, you have to provide a manifest file describing your 3D dataset. The format of this file is defined in the Ground Truth documentation. Of course, the steps required to build it will vary from one dataset to the next. For example, the Audi A2D2 dataset contains almost 400,000 frames, with 360-degree 3D LIDAR data and 2D images. KITTI, another popular choice for autonomous driving research, includes a 3D dataset with 15,000 images and their corresponding point clouds, for a total of 80,256 labeled objects. This notebook shows you how to convert KITTI data to the Ground Truth format.

When datasets contain both 3D LIDAR data and 2D camera images, one challenge is to synchronize them. This allows us to project 3D points to 2D coordinates, map them on the pictures captured by on-board cameras, and vice versa. Another challenge is that data captured by a given device uses coordinates local to this device. Fortunately, we know where the device is located on the car, and where it’s pointed to. All of this can be solved by building a global coordinate system, also known as a World Coordinate System (WCS). Using matrix operations (which I’ll spare you), we can compute the coordinates of all data points inside the WCS.
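
To make the idea concrete, here is a minimal numpy sketch of mapping a LIDAR point from the sensor’s local frame into the world coordinate system, assuming the sensor’s pose (rotation and translation) is known; the numbers are hypothetical:

import numpy as np

# Hypothetical pose of the LIDAR sensor in the world frame:
# a rotation matrix R and a translation vector t (meters).
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])   # 90-degree rotation around the z axis
t = np.array([10.0, 5.0, 1.8])

# A point measured in the sensor's local coordinate system.
p_local = np.array([2.0, 0.5, -0.3])

# The same point expressed in the world coordinate system.
p_world = R @ p_local + t
print(p_world)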

Once frames have been processed, their information is saved in the manifest file: the position of the vehicle, the location of LIDAR data in S3, the location of associated pictures in S3, and so on. For large datasets, the whole process is a significant workload, and you could run it on a managed service such as Amazon SageMaker Processing, Amazon EMR or AWS Glue.

Labeling 3D Point Clouds with Amazon SageMaker Ground Truth
Let’s do a quick demo, based on this notebook. Starting from pre-processed sample frames, it streamlines the process of creating a 3D point cloud labeling job for each of the six task types (Object Detection, Object Tracking, Semantic Segmentation, and the associated adjustment task types). You can easily make yourself a private worker, and start labeling frames with the worker GUI and its labeling tools.

A picture is worth a thousand words, and a video even more! In this first video, I annotate a couple of cars using two assistive labeling features. First, I fit the box to the ground, which helps me capture object points that are close to the ground without actually capturing the ground itself. Second, I fit the box to the object, which ensures a tight fit without any blank space.

Amazon SageMaker Ground Truth

In this second video, I annotate a third car using the same technique. It’s quite a bit harder to “see” than the previous ones, but I still manage to fit a tight box around it. Playing the next nine frames, I see that this car is actually moving. Jumping directly to the tenth frame, I adjust the bounding box to the new location of the car. Ground Truth automatically labels the eight middle frames, using another assistive labeling feature called interpolation.

Amazon SageMaker Ground Truth
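
As a rough illustration of what interpolation does under the hood (this is not Ground Truth’s actual algorithm, just a simplified sketch that interpolates a box center between two annotated frames):

import numpy as np

def interpolate_box_centers(center_first, center_last, num_frames):
    # Linearly interpolate a 3D box center between the first and last
    # annotated frames; a real implementation would also interpolate
    # the box dimensions and heading.
    start = np.asarray(center_first, dtype=float)
    end = np.asarray(center_last, dtype=float)
    return [start + s * (end - start) for s in np.linspace(0.0, 1.0, num_frames)]

# Box annotated in frame 1 and frame 10; frames 2 to 9 are filled in automatically.
for center in interpolate_box_centers((12.0, 3.0, 0.9), (21.0, 4.5, 0.9), 10):
    print(center)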

I’ve barely scratched the surface, and there’s plenty more to learn. Now it’s your turn!

Getting Started
You can start labeling 3D point clouds with Amazon SageMaker Ground Truth today in the following regions:

  • US East (N. Virginia), US East (Ohio), US West (Oregon),
  • Canada (Central),
  • Europe (Ireland), Europe (London), Europe (Frankfurt),
  • Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Seoul), Asia Pacific (Sydney), Asia Pacific (Tokyo).

We’re looking forward to reading your feedback. You can send it through your usual support contacts, or in the AWS Forum for Amazon SageMaker.

– Julien

Learning AI at school — a peek into the black box

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/research-seminar-learning-ai-at-school/

“In the near future, perhaps sooner than we think, virtually everyone will need a basic understanding of the technologies that underpin machine learning and artificial intelligence.” — from the 2018 Informatics Europe & EUACM report about machine learning

As the quote above highlights, AI and machine learning (ML) are increasingly affecting society and will continue to change the landscape of work and leisure — with a huge impact on young people in the early stages of their education.

But how are we preparing our young people for this future? What skills do they need, and how do we teach them these skills? This was the topic of last week’s online research seminar at the Raspberry Pi Foundation, with our guest speaker Juan David Rodríguez Garcia. Juan’s doctoral studies around AI in school complement his work at the Ministry of Education and Vocational Training in Spain.


Juan’s LearningML tool for young people

Juan started his presentation by sharing numerous current examples of AI and machine learning, which young people can easily relate to and be excited to engage with, and which will bring up ethical questions that we need to be discussing with them.

Of course, it’s not enough for learners to be aware of AI applications. While machine learning is a complex field of study, we need to consider what aspects of it we can make accessible to young people to enable them to learn about the concepts, practices, and skills underlying it. During his talk Juan demonstrated a tool called LearningML, which he has developed as a practical introduction to AI for young people.


Juan demonstrates image recognition with his LearningML tool

LearningML takes inspiration from some of the other in-development tools around machine learning for children, such as Machine Learning for Kids, and it is available in one integrated platform. Juan gave an enticing demo of the tool, showing how to use visual image data (lots of pictures of Juan with hats, glasses on, etc.) to train and test a model. He then demonstrated how to use Scratch programming to also test the model and apply it to new data. The seminar audience was very positive about LearningML, and of course we’d like it translated into English!

Juan’s talk generated many questions from the audience, from technical questions to the key question of the way we use the tool to introduce children to bias in AI. Seminar participants also highlighted opportunities to bring machine learning to other school subjects such as science.

AI in schools — what and how to teach

Machine learning demonstrates that computers can learn from data. This is just one of the five big ideas in AI that the AI4K12 group has identified for teaching AI in school in order to frame this broad domain:

  1. Perception: Computers perceive the world using sensors
  2. Representation & reasoning: Agents maintain models/representations of the world and use them for reasoning
  3. Learning: Computers can learn from data
  4. Natural interaction: Making agents interact comfortably with humans is a substantial challenge for AI developers
  5. Societal impact: AI applications can impact society in both positive and negative ways

One general concern I have is that in our teaching of computing in school (if we touch on AI at all), we may only focus on the fifth of the ‘big AI ideas’: the implications of AI for society. Being able to understand the ethical, economic, and societal implications of AI as this technology advances is indeed crucial. However, the principles and skills underpinning AI are also important, and how we introduce these at an age-appropriate level remains a significant question.


There are some great resources for developing a general understanding of AI principles, including unplugged activities from Computer Science For Fun. Yet there’s a large gap between understanding what AI is and has the potential to do, and actually developing the highly mathematical skills to program models. It’s not an easy issue to solve, but Juan’s tool goes a little way towards this. At the Raspberry Pi Foundation, we’re also developing resources to bridge this educational gap, including new online projects building on our existing machine learning projects, and an online course. Watch this space!

AI in the school curriculum and workforce

All in all, we seem to be a long way off introducing AI into the school curriculum. Looking around the world, in the USA, Hong Kong, and Australia there have been moves to introduce AI into K-12 education through pilot initiatives, and hopefully more will follow. In England, with a computing curriculum that was written in 2013, there is no requirement to teach any AI or machine learning, or even to focus much on data.

Let’s hope England doesn’t get left too far behind, as there is a massive AI skills shortage, with millions of workers needing to be retrained in the next few years. Moreover, a recent House of Lords report outlines that introducing all young people to this area of computing also has the potential to improve diversity in the workforce — something we should all be striving towards.

We look forward to hearing more from Juan and his colleagues as this important work continues.

Next up in our seminar series

If you missed the seminar, you can find Juan’s presentation slides and a recording of his talk on our seminars page.

In our next seminar on Tuesday 2 June at 17:00–18:00 BST / 12:00–13:00 EDT / 9:00–10:00 PDT / 18:00–19:00 CEST, we’ll welcome Dame Celia Hoyles, Professor of Mathematics Education at University College London. Celia will be sharing insights from her research into programming and mathematics. To join the seminar, simply sign up with your name and email address and we’ll email the link and instructions. If you attended Juan’s seminar, the link remains the same.


Baidu’s AI Produces Short Videos in One Click

Post Syndicated from Cici Zhang original https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/baidus-ai-produces-short-videos-in-one-click

Near the end of 2019, when Baidu’s AI, named ERNIE, beat Google’s AI, named BERT, in its understanding of human language, a team at Baidu Research was already prepping ERNIE for a new tool. They envisioned a program that could analyze the text from a URL, synthesize a pithy narrative, and align it with machine-selected clips to churn out a 2-minute video with voice over—all in less time than it would take to play a song.

Last month, a prototype version of such a program, called VidPress, debuted. The AI’s goal is to not only save human video editors’ time but also to outperform them in quality.

Optimize Power and Performance of your Chip using HLS

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/optimize_power_and_performance_of_your_chip_using_hls

While machine learning algorithms and hardware have moved into the mainstream, designers have only skimmed the surface of what is possible. The complexity of the next-generation hardware and algorithms needed for tomorrow has already exceeded what can be done today. This means creating new power- and memory-efficient hardware architectures to meet these next-generation demands. This paper explains why only High-Level Synthesis can provide a reliable path to getting this done.

How Facebook is Using AI to Fight COVID-19 Misinformation

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/artificial-intelligence/machine-learning/how-facebook-is-using-ai-to-fight-covid19-misinformation

Fifty million posts full of falsehoods about the coronavirus were disseminated on Facebook in April. And 2.5 million ads for face masks, COVID-19 test kits, and other coronavirus products have tried to circumvent an advertising ban in place since 1 March.

Those are just the pieces of coronavirus content that were identified as problematic and flagged or removed by Facebook during the period. The flood of misinformation is certainly larger, as not every fake or exploitive post is easily detectable.

“These are difficult challenges, and our tools are far from perfect,” said a blog post reporting the statistics that was posted today as part of Facebook’s quarterly Community Standards Enforcement Report. The company regularly provides data on its efforts to fight hate speech and other problematic content; today’s report was the first to specifically address coronavirus policy violations.

While Facebook relies on human fact checkers (it works with 60 fact-checking organizations around the world), the report indicated that the company relies on AI to supplement the scrutiny done by human eyes. The 50 million posts flagged were based on 7500 false articles identified by fact checkers. (When the company detects misinformation, it flags it with a warning label which, Facebook indicates, keeps about 95 percent of users from clicking through to view it.)

Some of the tools Facebook deployed were already in place to deal with general misinformation; some were new.

To identify misinformation related to articles spotted by fact checkers, Facebook reported, its systems had to be trained to detect images that appeared alike to a person, not to a computer. A good example would be an image screenshot from an existing post. To a computer, the pixels are far different, but to most humans, they seem the same. The AI also had to detect the difference between images that look very similar, but carry far different meanings, for example, an image in which text was altered from “COVID-19 isn’t found in toilet paper” to “COVID-19 is found in toilet paper.” To do this, it used its existing SimSearchNet tool, which creates compact versions of images on the site and indexes them, allowing them to be easily checked against databases containing COVID-19 misinformation. It is also applying its multimodal content analysis tools that look at both text and images together to interpret a post.

To block coronavirus product ads, Facebook launched a new system that extracts objects from images known to violate its policy, adds those to a database, and then automatically checks any new images posted against the database.

“This local feature-based solution is…more robust to common adversarial modification tactics like cropping, rotation, occlusion, and noise,” Facebook indicated in the blog post. The database also allowed it to train its classifier to find specific objects—like face masks or hand sanitizer—in new images, rather than relying entirely on finding image matches, the company reported. To improve accuracy, Facebook included what it calls a negative image set, for example, images that are not face masks—a sleep mask, a handkerchief—that the classifier might mistake for a face mask.

“We have much more work to be done,” said the Facebook post. “But we are confident we can build on our efforts so far, further improve our systems, and do more to protect people from harmful content related to the pandemic.”

Reinventing Enterprise Search – Amazon Kendra is Now Generally Available

Post Syndicated from Julien Simon original https://aws.amazon.com/blogs/aws/reinventing-enterprise-search-amazon-kendra-is-now-generally-available/

At the end of 2019, we launched a preview version of Amazon Kendra, a highly accurate and easy to use enterprise search service powered by machine learning. Today, I’m very happy to announce that Amazon Kendra is now generally available.

For all its amazing achievements in past decades, information technology has yet to solve a problem that all of us face every day: quickly and easily finding the information we need. Whether we’re looking for the latest version of the company travel policy, or asking a more technical question like “what’s the tensile strength of epoxy adhesives?”, we never seem to be able to get the correct answer right away. Sometimes, we never get it at all!

Not only are these issues frustrating for users, they’re also responsible for major productivity losses. According to an IDC study, the cost of inefficient search is $5,700 per employee per year: for a 1,000-employee company, $5.7 million evaporate every year, not counting the liability and compliance risks imposed by low accuracy search.

This problem has several causes. First, most enterprise data is unstructured, making it difficult to pinpoint the information you need. Second, data is often spread across organizational silos, and stored in heterogeneous backends: network shares, relational databases, 3rd party applications, and more. Lastly, keyword-based search systems require figuring out the right combination of keywords, and usually return a large number of hits, most of them irrelevant to our query.

Taking note of these pain points, we decided to help our customers build the search capabilities that they deserve. The result of this effort is Amazon Kendra.

Introducing Amazon Kendra
With just a few clicks, Amazon Kendra enables organizations to index structured and unstructured data stored in different backends, such as file systems, applications, Intranet, and relational databases. As you would expect, all data is encrypted in flight using HTTPS, and can be encrypted at rest with AWS Key Management Service (KMS).

Amazon Kendra is optimized to understand complex language from domains like IT (e.g. “How do I set up my VPN?”), healthcare and life sciences (e.g. “What is the genetic marker for ALS?”), and many other domains. This multi-domain expertise allows Kendra to find more accurate answers. In addition, developers can explicitly tune the relevance of results, using criteria such as authoritative data sources or document freshness.

Kendra search can be quickly deployed to any application (search page, chat apps, chatbots, etc.) via the code samples available in the AWS console, or via APIs. Customers can be up and running with state-of-the-art semantic search from Kendra in minutes.

Many organizations already use Amazon Kendra today. For example, the Allen Institute is committed to solving some of the biggest mysteries of bioscience, researching the unknowns of human biology in the brain, the human cell, and the immune system. Says Dr. Oren Etzioni, Chief Executive Officer of the Allen Institute for AI: “One of the most impactful things AI like Amazon Kendra can do right now is help scientists, academics, and technologists quickly find the right information in a sea of scientific literature and move important research faster. The Semantic Scholar team at Allen Institute for AI, along with our partners, is proud to provide CORD-19 and to support the AI resources the community is building to leverage this resource to tackle this crucial problem”.

Introducing New Features in Amazon Kendra
Based on customer feedback collected during the preview phase, we added the following features to Amazon Kendra.

  • New scaling options for the Enterprise Edition, as well as a newly-introduced Developer Edition (see details below).
  • 3 new Cloud Connectors: OneDrive, Salesforce, and ServiceNow (in addition to S3, RDS, and SharePoint Online).
  • Expertise on 8 new domains: Automotive, Health, HR, Legal, Media and Entertainment, News, Telecom, Travel and Leisure (in addition to Chemical, Energy, Finance, Insurance, IT, and Pharmaceuticals).
  • Faster indexing, and improved accuracy.

Indexing Data with Amazon Kendra
For the purpose of this demo, I downloaded a small subset of Wikipedia (about 50,000 web pages). I uploaded the individual files in HTML format to an Amazon Simple Storage Service (S3) bucket.

Heading out to the Kendra console, I start by creating a new index, giving it a name and a description. One click is all it takes to enable encryption with AWS Key Management Service (KMS).

After 30 minutes or so, the index is in service. I can now add data sources to it.

Adding my S3 bucket is extremely easy. I first enter a name for the data source.

Then, I define the name of the S3 bucket. I also need to specify the name of the IAM role used by Kendra, either selecting an existing role or creating a new one.

I’m given the choice to schedule synchronization at periodic intervals, in order to refresh the index with new data added to the data source. I go for a daily refresh running at midnight.

The next screen lets me review all parameters, and create the data source. Once it’s active, I launch the initial synchronization by clicking on the “Sync now” button.
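
The same step can also be scripted. Here is a minimal sketch using the AWS SDK for Python, with hypothetical index and data source IDs (in practice, use the values shown in the Kendra console or returned by the Create* API calls):

import boto3

kendra = boto3.client("kendra")

# Hypothetical IDs for the index and the S3 data source.
response = kendra.start_data_source_sync_job(
    Id="my-s3-data-source-id",
    IndexId="my-index-id",
)
print(response["ExecutionId"])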

After a little while, synchronization is complete. Moving to the test window, I can now start running queries on the index.

Querying Data with Amazon Kendra
While working on one of my posts the other day, I listened to a Jazz song that I really liked, played by a musician named Thad Jones. Knowing absolutely nothing about Jazz players, I’m curious whether Kendra can help me learn more.

Unsurprisingly, this query matches a large number of documents. However, Kendra comes up with a suggested answer, a high-confidence answer to my query. It points at a specific paragraph in one of the indexed pages. Relevant content is highlighted for more convenience, and I can immediately see that this is the right answer to my query. No need to look any further! Accordingly, I give it a thumbs up so that Amazon Kendra knows that this is indeed a good answer.
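
For completeness, here is a minimal sketch of issuing the same kind of query through the API with the AWS SDK for Python; the index ID and query text are hypothetical:

import boto3

kendra = boto3.client("kendra")

# Hypothetical index ID and query text.
response = kendra.query(
    IndexId="my-index-id",
    QueryText="who is Thad Jones?",
)

for item in response["ResultItems"]:
    # Suggested answers are returned alongside regular document results.
    print(item["Type"], ":", item["DocumentExcerpt"]["Text"][:120])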

Looking to learn more about Thad Jones, I ask a second question.

Once again, I get a suggested answer. This time, Kendra went one step further by returning the exact answer from the document, instead of just returning the document itself. This shows how Kendra is able to understand context and extract relationships, in this case the link between an individual and their city of birth.

Still curious, I ask a third question.

I get another suggested answer, and it’s once again right on target. The information I’m looking for is in the first sentence: Thad Jones has played with Count Basie. As you can see, the paragraph above doesn’t even include the word “play”. Yet, Amazon Kendra interpreted my question correctly. Thad Jones is a musician: if I’m asking about him playing with someone else, it’s very likely that I’m looking for other musicians, not for sport partners! This ability to understand natural language queries and to extract deep domain knowledge is what makes Amazon Kendra so accurate.

Getting Started
Amazon Kendra is available today in US East (N. Virginia), US West (Oregon), and Europe (Ireland).

You can pick one of two editions.

The Enterprise Edition lets you search up to 500,000 documents, and run up to 40,000 queries per day for $7 per hour. You will also be charged $0.000001 per document scanned, and $0.35 per hour per connector when syncing. If you need more indexing or querying capacity, you can now scale each independently: $3.5 per hour for additional 40,000 queries, and $3.5 per hour for additional 500,000 searchable documents.

The Developer Edition has the same features as the Enterprise Edition. However, it’s limited to 4,000 queries per day, on up to 10,000 searchable documents across 5 data sources. No scaling options are available. Please note that the Developer Edition runs in a single Availability Zone, which is why it shouldn’t be used for production purposes.

Please give Amazon Kendra a try! We’d love to get your feedback, either through your usual AWS Support contacts, or on the AWS Forum for Kendra.

– Julien

COVID Moonshot: Can AI Algorithms and Volunteer Chemists Design a Knockout Antiviral?

Post Syndicated from Megan Scudellari original https://spectrum.ieee.org/the-human-os/artificial-intelligence/medical-ai/covid-moonshot-can-ai-algorithms-and-volunteer-chemists-design-a-knockout-antiviral


It started with a tweet. Alpha Lee, co-founder and chief scientific officer of machine-learning company PostEra, read on Twitter that Diamond Light Source, the UK’s national synchrotron facility, had identified a set of chemical fragments that attach to an important coronavirus protein.

Lee wondered if his company, formed just six months earlier, could help connect the dots from fragments to viable drugs to fight COVID-19. PostEra uses AI algorithms to map routes for drug synthesis to speed the drug discovery process. But to do so, they would need some design ideas. So Lee asked the Internet.

On 17 March, in collaboration with Diamond, the PostEra team launched the COVID Moonshot to crowdsource drug designs from medicinal chemists. Then PostEra applied their technology, pro-bono, to determine if and how those designs could be made.

Preventing AI From Divulging Its Own Secrets

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/how-prevent-ai-power-usage-secrets

One of the sneakiest ways to spill the secrets of a computer system involves studying its pattern of power usage while it performs operations. That’s why researchers have begun developing ways to shield the power signatures of AI systems from prying eyes.

This AI Poet Mastered Rhythm, Rhyme, and Natural Language to Write Like Shakespeare

Post Syndicated from Jey Han Lau original https://spectrum.ieee.org/artificial-intelligence/machine-learning/this-ai-poet-mastered-rhythm-rhyme-and-natural-language-to-write-like-shakespeare

Here’s a stanza from a sonnet written by William Shakespeare:

And here’s one written by Deep-speare, an artificial intelligence program that we trained to write sonnets:

Deep-speare’s creation is nonsensical when you read it closely, but it certainly “scans well,” as an English teacher would say—its rhythm, rhyme scheme, and the basic grammar of its individual lines all seem fine at first glance. As our research team discovered when we showed our AI’s poetry to the world, that’s enough to fool quite a lot of people; most readers couldn’t distinguish the AI-generated poetry from human-written works.

Our team, composed of three machine-learning researchers and one scholar of literature, trained our AI poet using about 2,700 sonnets taken from the online library Project Gutenberg. Our “poet” learned how to compose poetry on its own, using the AI approach known as deep learning—it cranked through the poems in its training database, trying again and again to create lines of poetry that matched the examples. We didn’t give it rhyming dictionaries, pronunciation dictionaries, or other resources, as has often been the case in previous computer-generated poetry projects. Instead, Deep-speare independently learned three sets of rules that pertain to sonnet writing: rhythm, rhyme scheme, and the fundamentals of natural language (which words go together).

Our goal was to see how far we could push deep learning for natural-language generation, and to make use of the interesting qualities of poetry. Poetic forms such as sonnets have fairly rigid patterns when it comes to rhyme and rhythm, and we wondered if we could design the system’s architecture so that Deep-speare would learn these patterns autonomously.

Our efforts fall within the booming research field of computational creativity. AI-generated paintings have been auctioned off at Christie’s, the DeepBach program has composed convincing music in the style of Bach, and there has been work in other media such as sculpture and choreography. In the realm of language and literature, a text-generating system called GPT-2 from the research lab OpenAI proved able to generate fairly coherent paragraphs of text based on a starter sentence.

These experiments in computational creativity are enabled by the dramatic advances in deep learning over the past decade. Deep learning has several key advantages for creative pursuits. For starters, it’s extremely flexible, and it’s relatively easy to train deep-learning systems (which we call models) to take on a wide variety of tasks. These models are also very good at discovering patterns and generalizing from those patterns—sometimes with surprising results, which can be interpreted as “accidental creativity.” What’s more, the inherent element of randomness within deep-learning algorithms leads to variability in the models’ output. This variability lends itself well to creative applications, assuming the human collaborator has the patience to sift through the different outputs. Finally, it’s relatively easy to build models that work with different types of data, including text, speech, images, and videos.

A sonnet is chiefly distinguished by two features: its 14-line length and its two-part “argument” structure, in which the poem first describes a problem or lays out a question and then offers a solution or resolution. In the 16th century, English poets developed a distinctive sonnet style using a rhythm called iambic pentameter, where 10-syllable lines have a regular unstressed-stressed rhythmic pattern. An English sonnet typically consisted of three four-line stanzas (called quatrains) that presented the “problem,” followed by a two-line couplet, often with a rhyme scheme of ABAB CDCD EFEF GG. Shakespeare made such frequent use of this poetic form that today it’s called the Shakespearean sonnet.

In the Deep-speare project, we sought to produce individual quatrains from the problem section of Shakespearean sonnets. We therefore focused on producing verses in iambic pentameter with regular rhyme schemes, rather than trying to replicate the full 14-line form of the sonnet or its two-part argumentative structure. We’d like to work on that greater challenge someday, but first we have to prove that our AI poet has mastered individual quatrains.

Our system was powered by three components: a rhythm model that learned iambic pentameter, a rhyme model that learned which words rhyme with each other, and a language model that learned which words are typically found together. The language model was the main component that generated the sonnet, word by word.

A language model judges which sentences are valid within a language (in this case, English) by ranking any arbitrary sentence with a probability score. A properly trained language model will assign fluent sentences higher probabilities and nonsensical sentences lower probabilities. But consider how language is both produced and interpreted: sequentially, one word after another. This same principle allows us to break down the very complex problem of creating sentences into a series of simpler problems involving words. A language model’s job is to look at a partial sentence and predict what word will come next. To make this prediction, it looks at all of the words it knows and gives each possible next word an individual probability score, which is contingent on the words that are already in the sentence.

A language model learns these probabilities by ingesting all the words and sentences in its training corpus; researchers use Wikipedia entries, discussions on Reddit, or databases specifically constructed for training natural-language-processing systems. From that trove of text, the AI learns which words are most often found together. In the case of our Deep-speare project, the model learned basic lessons about language from Project Gutenberg’s whole collection of poetry, and refined its sonnet-writing abilities using roughly 2,700 Shakespearean sonnets in the online library, which contained about 367,000 words.

The quality of a language model can be characterized by measuring the amount of “surprise” upon observing the next word. If it is assigned a high probability score, the word is unsurprising; words with low probability scores are quite surprising. This degree of surprise is used as a signal while training a language model from text. If the model is not surprised by each successive word, as we progress one word at a time through a large corpus of text, then the model can be considered to have captured much of the complexity of language. This includes the existence of multiword units like “San Francisco” that frequently co-occur, the rules of grammar and syntax that govern sentence structure, and semantic information, such as the fact that “coffee” tends to be “strong” or “weak,” but rarely “powerful” or “lightweight.”
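
As a rough illustration of how this surprise can be quantified, here is a minimal Python sketch (not Deep-speare’s actual code) that converts the probabilities a model assigns to the words it actually observes into surprisal scores and averages them into the standard perplexity measure; the probability values below are invented for the example.

import math

def surprisal(probability):
    # Surprise is the negative log of the probability the model assigned
    # to the word that actually appeared; improbable words score high.
    return -math.log2(probability)

def perplexity(probabilities):
    # Average surprisal over a passage, converted back into an
    # effective "number of equally likely next words."
    average = sum(surprisal(p) for p in probabilities) / len(probabilities)
    return 2 ** average

# Hypothetical probabilities a model assigned to four successive words.
print(perplexity([0.20, 0.05, 0.50, 0.01]))   # lower means less surprised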

Once we had our trained language model, it could finish a sentence or generate sentences entirely from scratch. It performed either function by randomly choosing a word that had a high probability score, adding it to the growing sentence, and recomputing the probabilities of all the possible words that could come next. By repeating this process, Deep-speare generated its lines of poetry.
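
A minimal sketch of that generation loop in Python, with a hypothetical next_word_distribution function standing in for the trained language model (the tiny vocabulary and probabilities are invented purely for illustration):

import random

def next_word_distribution(words_so_far):
    # Stand-in for a trained language model: return a probability for each
    # candidate next word, given the words generated so far. A real model
    # would compute these scores from its learned parameters.
    return {"shall": 0.30, "i": 0.25, "behold": 0.20, "him": 0.15, "state": 0.10}

def generate_line(num_words=10):
    line = []
    for _ in range(num_words):
        distribution = next_word_distribution(line)
        words = list(distribution)
        weights = list(distribution.values())
        # Sample in proportion to probability, so high-scoring words are
        # favored but the output still varies from one run to the next.
        line.append(random.choices(words, weights=weights, k=1)[0])
    return " ".join(line)

print(generate_line())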

While Deep-speare’s language model was learning about word probabilities from Project Gutenberg’s collection of sonnets, a separate rhythm model was learning about iambic pentameter. We told the rhythm model that each line was composed of 10 syllables in an unstressed-stressed pattern. The model looked at the letters and punctuation within each line and determined which characters corresponded to a syllable and which syllables received the stress. For example, the word “summer” should be understood as two syllables—the stressed “sum” and the unstressed “mer.” When Deep-speare was writing its quatrains, the language model generated candidate lines of poetry, from which the rhythm model picked one that fit the iambic pentameter pattern. Then the process repeated for the next line.
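
A toy version of that metrical check might look like the following Python sketch; the hand-made stress lexicon is hypothetical (0 for unstressed, 1 for stressed), whereas Deep-speare learned stress patterns from raw characters on its own.

IAMBIC_PENTAMETER = "01" * 5   # ten syllables, alternating unstressed-stressed

# Hypothetical stress lexicon for one famous line; a real system would need
# far broader coverage or, like Deep-speare, would learn stress itself.
STRESS = {
    "shall": "0", "i": "1", "compare": "01", "thee": "0",
    "to": "1", "a": "0", "summer's": "10", "day": "1",
}

def fits_iambic_pentameter(line):
    pattern = "".join(STRESS.get(word, "") for word in line.lower().split())
    return pattern == IAMBIC_PENTAMETER

print(fits_iambic_pentameter("shall i compare thee to a summer's day"))   # True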

The rhyme model also learned its lessons from the collection of sonnets, but it looked only at the characters within the final word of each line. During its training process, we told the model that each sentence-ending word should rhyme with one other word within the quatrain, and then we let it figure out which of those words were most similar and thus most likely to rhyme. To take the example of the Shakespeare sonnet quoted earlier, the rhyme model determined that “day” and “May” had a high “rhymability” score, as did “temperate” and “date.”
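
As a crude stand-in for the learned rhyme model, the following Python sketch scores “rhymability” simply by counting the characters two words share at their ends; Deep-speare learned a richer notion of similarity from the sonnets’ line-ending words themselves.

def rhymability(word_a, word_b):
    # Count matching characters from the ends of the two words inward.
    # This is only a rough proxy for how the trained model judges rhyme.
    word_a, word_b = word_a.lower(), word_b.lower()
    score = 0
    for a, b in zip(reversed(word_a), reversed(word_b)):
        if a != b:
            break
        score += 1
    return score

print(rhymability("day", "may"))          # 2: likely rhyme
print(rhymability("temperate", "date"))   # 3: likely rhyme
print(rhymability("day", "state"))        # 0: unlikely rhyme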

Once Deep-speare was trained and ready to compose, we gave it three different rhyme templates to choose from: AABB, ABBA, and the ABAB that’s most typical of Shakespearean sonnets. During its writing process, Deep-speare first randomly picked one of the templates. Then the language model proceeded to generate the lines of poetry, word by word; when it reached a word that should rhyme, it offered candidate words to the rhyme model.

Here are two examples of quatrains generated by Deep-speare. The first shows a partially trained model that’s beginning to grasp the rhyme scheme but hasn’t yet found the rhythm and isn’t making much sense.

by complex grief’s petty nurse. had wise upon
along
came all me’s beauty, except a nymph of song
to be in the prospect, he th of forms i join
and long in the hears and must can god to run

This second quatrain shows the progress made by a model that has nearly finished its training. Its rhymes (in the ABBA pattern) are correct, it nails the iambic pentameter, and its language is not just coherent, it’s reasonably poetic!

shall i behold him in his cloudy state
for just but tempteth me to stop and pray
a cry: if it will drag me, find no way
from pardon to him, who will stand and wait

In assessing Deep-speare’s poetic output, we first checked to be sure it wasn’t just copying sentences from its training data. We found that the phrases in its generated poems didn’t overlap much with phrases in the training data, so we were confident that Deep-speare wasn’t merely memorizing existing sonnets; it was creating original poems.

But an original sonnet isn’t necessarily a good sonnet. To assess the quality of Deep-speare’s quatrains, we worked with two types of human evaluators. The first judges were crowdworkers employed through Amazon’s Mechanical Turk platform who had a basic command of the English language but no expertise in poetry. We presented them with a pair of sonnet quatrains, one composed by a human and the other generated by a machine, and asked them to guess which one was written by a human.

We were greatly dismayed by the initial results. When we first posted the task, the crowdworkers identified the human-written sonnets with near-perfect accuracy. It seemed like the end of the road for our research, as the results indicated that the machine-generated poems were clearly not up to standard.

Then we considered an alternative explanation for the near-perfect accuracy: The crowdworkers had cheated. As our human-written poems were taken from Project Gutenberg (in which all text is indexed online and searchable), we wondered if the workers had copied the poems’ text and searched for it online. We tested this ourselves, and it worked—the human-written poem always returned some search results, so achieving perfect accuracy on the guessing game was a trivial accomplishment.

To discourage the crowdworkers from cheating, we converted all the poems’ text into images, then put the task up for evaluation again. Lo and behold, the workers’ accuracy plunged from nearly 100 percent to about 50 percent, indicating that they could not reliably distinguish between human poetry and machine poetry. Although the workers could still cheat by manually typing the text of the poems into a Google search bar, that procedure apparently required too much effort.

Our second evaluator was coauthor Adam Hammond, an assistant professor of literature at the University of Toronto. Unlike the crowdworker experiment, this evaluation did not involve a guessing game. Instead, Hammond received a random mix of human-written and machine-generated sonnets and had to rate each poem on four attributes: rhyme, rhythm, readability, and emotional impact.

Hammond gave Deep-speare’s quatrains very high marks for rhyme and rhythm. In fact, they got higher ratings on these attributes than the human-written sonnets. Hammond wasn’t surprised by this result, explaining that human poets often break rules to achieve certain effects. But in the readability and emotional-impact categories, Hammond judged the machine-generated sonnets to be markedly inferior. The literature expert could easily tell which poems were generated by Deep-speare.

One of the most interesting aspects of the project was the response it elicited. Shortly after we presented our paper at a 2018 conference on computational linguistics, news outlets around the world picked up the story. Many articles quoted the following quatrain as evidence of the humanlike poetry Deep-speare was capable of producing:

With joyous gambols gay and still array,
no longer when he ’twas, while in his day
at first to pass in all delightful ways
around him, charming, and of all his days.

When Hammond was interviewed on BBC Radio, the presenter read this same quatrain aloud and asked for an interpretation. Hammond responded by asking the presenter if she had noticed that the quatrain contained a glaring grammatical error: “he ’twas,” a contraction of the nonsense phrase “he it was.” The presenter’s response indicated that she had not noticed.

Such willingness to look past obvious errors in order to marvel at the wonders of AI, a phenomenon that the social scientist Sherry Turkle names “the Eliza effect,” dates back to the earliest experiments in text-based AI. At MIT in the 1960s, computer scientist Joseph Weizenbaum developed Eliza, the first chatbot, which replicated the conversational style of a psychotherapist. Although the program was quite crude, and its limitations easy to expose, Weizenbaum was shocked to discover how easily users were taken in by his creation. Turkle, a colleague of Weizenbaum’s at MIT in the 1970s, noticed that even graduate students who understood Eliza’s limitations nonetheless fed it questions it was able to answer in a humanlike way.

The Eliza effect—which Turkle defines as “human complicity in a digital fantasy”—seems to have been at work in the public response to Deep-speare as well. The public so wanted the quatrains to demonstrate the powers of AI that it looked past evidence to the contrary.

Such willful misunderstandings of AI may be increasingly problematic as Deep-speare’s capacities grow. We’re continuing with this research, and one of our goals is to improve our AI poet’s scores on readability and emotional impact. To improve overall coherence, one tactic may be to “pretrain” the language model on a very large corpus of text, such as the entirety of Wikipedia, to give it a better grasp of which words are likely to appear together in a long narrative; then we could take that general language model and give it special training in the language of sonnets.

We’re also thinking about how human poets compose their works: A poet doesn’t sit down at a desk and think, “Hmm, what should my first word be?” and then, having made that tough decision, contemplate the second word. Instead, the poet has a theme or narrative in mind, and then searches for the words to express that idea. We’ve already taken a step in that direction by giving Deep-speare the ability to generate a poem based on a specific topic, such as love or loss. Sticking to one topic may increase the coherence and continuity of the quatrain; the model’s word choices will be constrained because it will have learned which words fit with a given theme. We’re also planning experiments with a more hierarchical language model that first generates a high-level narrative for the poem, and then uses that framework to generate the individual words.

It’s an ambitious goal, to be sure. We hope that Deep-speare will measure up, if not to Shakespeare, then to a character described in one of Shakespeare’s poems:

He had the dialect and different skill,
Catching all passions in his craft of will.

This article appears in the May 2020 print issue as “The AI Poet.”

AI-Powered Rat Could Be a Valuable New Tool for Neuroscience

Post Syndicated from Edd Gent original https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/ai-powered-rat-valuable-new-tool-neuroscience

Can we study AI the same way we study lab rats? Researchers at DeepMind and Harvard University seem to think so. They built an AI-powered virtual rat that can carry out multiple complex tasks. Then, they used neuroscience techniques to understand how its artificial “brain” controls its movements.

Today’s most advanced AI is powered by artificial neural networks—machine learning algorithms made up of layers of interconnected components called “neurons” that are loosely inspired by the structure of the brain. While they operate in very different ways, a growing number of researchers believe drawing parallels between the two could both improve our understanding of neuroscience and make smarter AI.

Now the authors of a new paper due to be presented this week at the International Conference on Learning Representations have created a biologically accurate 3D model of a rat that can be controlled by a neural network in a simulated environment. They also showed that they could use neuroscience techniques for analyzing biological brain activity to understand how the neural net controlled the rat’s movements.

Announcing TorchServe, An Open Source Model Server for PyTorch

Post Syndicated from Julien Simon original https://aws.amazon.com/blogs/aws/announcing-torchserve-an-open-source-model-server-for-pytorch/

PyTorch is one of the most popular open source libraries for deep learning. Developers and researchers particularly enjoy the flexibility it gives them in building and training models. Yet, this is only half the story, and deploying and managing models in production is often the most difficult part of the machine learning process: building bespoke prediction APIs, scaling them, securing them, etc.

One way to simplify the model deployment process is to use a model server, i.e. an off-the-shelf web application specially designed to serve machine learning predictions in production. Model servers make it easy to load one or several models, automatically creating a prediction API backed by a scalable web server. They’re also able to run preprocessing and postprocessing code on prediction requests. Last but not least, model servers also provide production-critical features like logging, monitoring, and security. Popular model servers include TensorFlow Serving and the Multi Model Server.

Today, I’m extremely happy to announce TorchServe, a PyTorch model serving library that makes it easy to deploy trained PyTorch models at scale without having to write custom code.

Introducing TorchServe
TorchServe is a collaboration between AWS and Facebook, and it’s available as part of the PyTorch open source project. If you’re interested in how the project was initiated, you can read the initial RFC on Github.

With TorchServe, PyTorch users can now bring their models to production quicker, without having to write custom code: on top of providing a low latency prediction API, TorchServe embeds default handlers for the most common applications such as object detection and text classification. In addition, TorchServe includes multi-model serving, model versioning for A/B testing, monitoring metrics, and RESTful endpoints for application integration. As you would expect, TorchServe supports any machine learning environment, including Amazon SageMaker, container services, and Amazon Elastic Compute Cloud (EC2).

Several customers are already enjoying the benefits of TorchServe.

Toyota Research Institute Advanced Development, Inc. (TRI-AD) is developing software for automated driving at Toyota Motor Corporation. Says Yusuke Yachide, Lead of ML Tools at TRI-AD: “we continuously optimize and improve our computer vision models, which are critical to TRI-AD’s mission of achieving safe mobility for all with autonomous driving. Our models are trained with PyTorch on AWS, but until now PyTorch lacked a model serving framework. As a result, we spent significant engineering effort in creating and maintaining software for deploying PyTorch models to our fleet of vehicles and cloud servers. With TorchServe, we now have a performant and lightweight model server that is officially supported and maintained by AWS and the PyTorch community”.

Matroid is a maker of computer vision software that detects objects and events in video footage. Says Reza Zadeh, Founder and CEO at Matroid Inc.: “we develop a rapidly growing number of machine learning models using PyTorch on AWS and on-premise environments. The models are deployed using a custom model server that requires converting the models to a different format, which is time-consuming and burdensome. TorchServe allows us to simplify model deployment using a single servable file that also serves as the single source of truth, and is easy to share and manage”.

Now, I’d like to show you how to install TorchServe, and load a pretrained model on Amazon Elastic Compute Cloud (EC2). You can try other environments by following the documentation.

Installing TorchServe
First, I fire up a CPU-based Amazon Elastic Compute Cloud (EC2) instance running the Deep Learning AMI (Ubuntu edition). This AMI comes preinstalled with several dependencies that I’ll need, which will speed up setup. Of course you could use any AMI instead.

TorchServe is implemented in Java, and I need the latest OpenJDK to run it.

sudo apt install openjdk-11-jdk

Next, I create and activate a new Conda environment for TorchServe. This will keep my Python packages nice and tidy (virtualenv works too, of course).

conda create -n torchserve

source activate torchserve

Next, I install dependencies for TorchServe.

pip install sentencepiece       # not available as a Conda package

conda install psutil pytorch torchvision torchtext -c pytorch

If you’re using a GPU instance, you’ll need an extra package.

conda install cudatoolkit=10.1

Now that dependencies are installed, I can clone the TorchServe repository, and install TorchServe.

git clone https://github.com/pytorch/serve.git

cd serve

pip install .

cd model-archiver

pip install .

Setup is complete; let’s deploy a model!

Deploying a Model
For the sake of this demo, I’ll simply download a pretrained model from the PyTorch model zoo. In real life, you would probably use your own model.

wget https://download.pytorch.org/models/densenet161-8d451a50.pth

Next, I need to package the model into a model archive. A model archive is a ZIP file storing all model artefacts, i.e. the model itself (densenet161-8d451a50.pth), a Python script to load the state dictionary (matching tensors to layers), and any extra file you may need. Here, I include a file named index_to_name.json, which maps class identifiers to class names. This will be used by the built-in image_classifier handler, which is in charge of the prediction logic. Other built-in handlers are available (object_detector, text_classifier, image_segmenter), and you can implement your own.

torch-model-archiver --model-name densenet161 --version 1.0 \
--model-file examples/image_classifier/densenet_161/model.py \
--serialized-file densenet161-8d451a50.pth \
--extra-files examples/image_classifier/index_to_name.json \
--handler image_classifier

Next, I create a directory to store model archives, and I move the one I just created there.

mkdir model_store

mv densenet161.mar model_store/

Now, I can start TorchServe, pointing it at the model store and at the model I want to load. Of course, I could load several models if needed.

torchserve --start --model-store model_store --models densenet161=densenet161.mar

Still on the same machine, I grab an image and easily send it to TorchServe for local serving using an HTTP POST request. Note the format of the URL, which includes the name of the model I want to use.

curl -O https://s3.amazonaws.com/model-server/inputs/kitten.jpg

curl -X POST http://127.0.0.1:8080/predictions/densenet161 -T kitten.jpg

The result appears immediately. Note that class names are visible, thanks to the built-in handler.

[
{"tiger_cat": 0.4693356156349182},
{"tabby": 0.46338796615600586},
{"Egyptian_cat": 0.06456131488084793},
{"lynx": 0.0012828155886381865},
{"plastic_bag": 0.00023323005007114261}
]
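
If you prefer calling the endpoint from Python rather than curl, here’s a minimal sketch using the requests library; it assumes TorchServe is still running locally with the densenet161 model loaded as shown above.

import requests   # pip install requests

# Send the raw image bytes to the local prediction endpoint, just like the
# curl command above, and print the JSON response with class probabilities.
with open("kitten.jpg", "rb") as image_file:
    response = requests.post(
        "http://127.0.0.1:8080/predictions/densenet161",
        data=image_file,
    )

print(response.json())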

I then stop TorchServe with the ‘stop‘ command.

torchserve --stop

As you can see, it’s easy to get started with TorchServe using the default configuration. Now let me show you how to set it up for remote serving.

Configuring TorchServe for Remote Serving
Let’s create a configuration file for TorchServe, named config.properties (the default name). This file defines which model to load, and sets up remote serving. Here, I’m binding the server to all public IP addresses, but you can restrict it to a specific address if you want to. As this is running on an EC2 instance, I need to make sure that ports 8080 and 8081 are open in the Security Group.

model_store=model_store
load_models=densenet161.mar
inference_address=http://0.0.0.0:8080
management_address=http://0.0.0.0:8081

Now I can start TorchServe in the same directory, without having to pass any command line arguments.

torchserve --start

Moving back to my local machine, I can now invoke TorchServe remotely, and get the same result.

curl -X POST http://ec2-54-85-61-250.compute-1.amazonaws.com:8080/predictions/densenet161 -T kitten.jpg

You probably noticed that I used HTTP. I’m guessing a lot of you will require HTTPS in production, so let me show you how to set it up.

Configuring TorchServe for HTTPS
TorchServe can use either the Java keystore or a certificate. I’ll go with the latter.

First, I create a certificate and a private key with openssl.

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout mykey.key -out mycert.pem

Then, I update the configuration file to define the location of the certificate and key, and I bind TorchServe to its default secure ports (don’t forget to update the Security Group).

model_store=model_store
load_models=densenet161.mar
inference_address=https://0.0.0.0:8443
management_address=https://0.0.0.0:8444
private_key_file=mykey.key
certificate_file=mycert.pem

I restart TorchServe, and I can now invoke it with HTTPS. As I use a self-signed certificate, I need to pass the ‘--insecure’ flag to curl.

curl --insecure -X POST https://ec2-54-85-61-250.compute-1.amazonaws.com:8443/predictions/densenet161 -T kitten.jpg

There’s a lot more to TorchServe configuration, and I encourage you to read its documentation!

Getting Started
TorchServe is available now at https://github.com/pytorch/serve.

Give it a try, and please send us feedback on Github.

– Julien

AI Can Help Hospitals Triage COVID-19 Patients

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/the-human-os/artificial-intelligence/medical-ai/ai-can-help-hospitals-triage-covid19-patients

As the coronavirus pandemic brings floods of people to hospital emergency rooms around the world, physicians are struggling to triage patients, trying to determine which ones will need intensive care. Volunteer doctors and nurses with no special pulmonary training must assess the condition of patients’ lungs. In Italy, at the peak of that country’s crisis, doctors faced terrible decisions about who should receive help and resources. 

AWS DeepComposer – Now Generally Available With New Features

Post Syndicated from Julien Simon original https://aws.amazon.com/blogs/aws/aws-deepcomposer-now-generally-available-with-new-features/

AWS DeepComposer, a creative way to get started with machine learning, was launched in preview at AWS re:Invent 2019. Today, I’m extremely happy to announce that DeepComposer is now available to all AWS customers, and that it has been expanded with new features.

A primer on AWS DeepComposer
If you’re new to AWS DeepComposer, here’s how to get started.

  • Log into the AWS DeepComposer console.
  • Learn about the service and how it uses generative AI.
  • Record a short musical tune, using either the virtual keyboard in the console, or a physical keyboard available for order on Amazon.com.
  • Select a pretrained model for your favorite genre.
  • Use this model to generate a new polyphonic composition based on your tune.
  • Play the composition in the console.
  • Export the composition, or share it on SoundCloud.

Now let’s look at the new features, which make it even easier to get started with generative AI.

Learning Capsules
DeepComposer is powered by Generative Adversarial Networks (aka GANs, research paper), a neural network architecture built specifically to generate new samples from an existing data set. A GAN pits two different neural networks against each other to produce original digital works based on sample inputs: with DeepComposer, you can train and optimize GAN models to create original music.
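
To make the idea concrete, here is a minimal, illustrative GAN training loop written with PyTorch. It is not DeepComposer’s actual model; the toy data, layer sizes, and training length are invented for the example.

import torch
import torch.nn as nn

latent_dim, sample_dim = 16, 64

# Generator maps random noise to a fake sample; discriminator outputs a
# single logit saying how "real" a sample looks.
generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, sample_dim))
discriminator = nn.Sequential(nn.Linear(sample_dim, 128), nn.ReLU(), nn.Linear(128, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_batch = torch.randn(32, sample_dim)   # stand-in for real training data

for step in range(100):
    # Discriminator step: real samples should score 1, generated fakes 0.
    fakes = generator(torch.randn(32, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_batch), torch.ones(32, 1)) +
              loss_fn(discriminator(fakes), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator label fakes as real.
    fakes = generator(torch.randn(32, latent_dim))
    g_loss = loss_fn(discriminator(fakes), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()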

Until now, developers interested in growing skills in GANs haven’t had an easy way to get started. To help them regardless of their background in ML or music, we are building a collection of easy learning capsules that introduce key concepts and explain how to train and evaluate GANs. This includes a hands-on lab with step-by-step instructions and code to build a GAN model.

Once you’re familiar with GANs, you’ll be ready to move on to training your own model!

In-console Training
You now have the ability to train your own generative model right in the DeepComposer console, without having to write a single line of machine learning code.

First, let’s select a GAN architecture:

  • MuseGAN, by Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang and Yi-Hsuan Yang (research paper, Github): MuseGAN has been specifically designed for generating music. The generator in MuseGAN is composed of a shared network to learn a high level representation of the song, and a series of private networks to learn how to generate individual music tracks.
  • U-Net, by Olaf Ronneberger, Philipp Fischer and Thomas Brox (research paper, project page): U-Net has been extremely successful in the image translation domain (e.g. converting winter images to summer images), and it can also be used for music generation. It’s a simpler architecture than MuseGAN, and therefore easier for beginners to understand. If you’re curious what’s happening under the hood, you can learn more about the U-Net architecture in this Jupyter notebook.

Let’s go with MuseGAN, and give the new model a name.

Next, I just have to pick the dataset I want to train my model on.

Optionally, I can also set hyperparameters (i.e. training parameters), but I’ll go with default settings this time. Finally, I click on ‘Start training’, and AWS DeepComposer fires up a training job, taking care of all the infrastructure and machine learning setup for me.

About 8 hours later, the model has been trained, and I can use it to generate compositions. Here, I can add the new ‘rhythm assist’ feature, which helps correct the timing of musical notes in my input and makes sure they are in time with the beat.

Getting started
AWS DeepComposer is available today in the US East (N. Virginia) region.

The service includes a 12-month Free Tier for all AWS customers, so you can generate 500 compositions using our sample models at no cost.

In addition to the Free Tier, ordering the keyboard from Amazon.com in the US, and linking it to the DeepComposer console will get you another 3 months of free trial!


Give AWS DeepComposer a try, and let us know what you think! You can send your feedback through your usual AWS Support contacts, or on the AWS Forum for DeepComposer.

– Julien

Formula 1: Using Amazon SageMaker to Deliver Real-Time Insights to Fans

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/formula-1-using-amazon-sagemaker-to-deliver-real-time-insights-to-fans-live/

The Formula One Group (F1) is responsible for the promotion of the FIA Formula One World Championship, a series of auto racing events in 21 countries where professional drivers race single-seat cars on custom tracks or through city courses in pursuit of the World Championship title.

Formula 1 works with AWS to enhance its race strategies, data tracking systems, and digital broadcasts through a wide variety of AWS services—including Amazon SageMaker, AWS Lambda, and AWS analytics services—to deliver new race metrics that change the way fans and teams experience racing.

In this special live segment of This is My Architecture, you’ll get a look at what’s under the hood of Formula 1’s F1 Insights. Hear about the machine learning algorithms the company trains on Amazon SageMaker and how inferences are made during races to deliver insights to fans.

For more content like this, subscribe to our YouTube channels This is My Architecture, This is My Code, and This is My Model, or visit the This is My Architecture AWS website, which has search functionality and the ability to filter by industry, language, and service.

Delve into the Formula 1 case study to learn more about how AWS fuels analytics through machine learning.

Five Companies Using AI to Fight Coronavirus

Post Syndicated from Megan Scudellari original https://spectrum.ieee.org/the-human-os/artificial-intelligence/medical-ai/companies-ai-coronavirus

As of Thursday afternoon, there are 10,985 confirmed cases of COVID-19 in the United States and zero FDA-approved drugs to treat the infection.

While DARPA works on short-term “firebreak” countermeasures and computational scientists track sources of new cases of the virus, a host of drug discovery companies are putting their AI technologies to work predicting which existing drugs, or brand-new drug-like molecules, could treat the virus.

Drug development typically takes at least a decade to move from idea to market, with failure rates of over 90% and a price tag of $2 billion to $3 billion. “We can substantially accelerate this process using AI and make it much cheaper, faster, and more likely to succeed,” says Alex Zhavoronkov, CEO of Insilico Medicine, an AI company focused on drug discovery.

Here’s an update on five AI-centered companies targeting coronavirus:

Deargen

In early February, scientists at South Korea-based Deargen published a preprint paper (a paper that has not yet been peer-reviewed by other scientists) with the results from a deep learning-based model called MT-DTI. This model uses simplified chemical sequences, rather than 2D or 3D molecular structures, to predict how strongly a molecule of interest will bind to a target protein.

The model predicted that of available FDA-approved antiviral drugs, the HIV medication atazanavir is the most likely to bind and block a prominent protein on the outside of SARS-CoV-2, the virus that causes COVID-19. It also identified three other antivirals that might bind the virus.

While the company is unaware of any official organization following up on their recommendations, their model also predicted several not-yet-approved drugs, such as the antiviral remdesivir, that are now being tested in patients, according to Sungsoo Park, co-founder and CTO of Deargen.

Deargen is now using their deep learning technology to generate new antivirals, but they need partners to help them develop the molecules, says Park. “We currently do not have a facility to test these drug candidates,” he notes. “If there are pharmaceutical companies or research institutes that want to test these drug candidates for SARS-CoV-2, [they would] always be welcome.”

Insilico Medicine

Hong Kong-based Insilico Medicine similarly jumped into the field in early February with a pre-print paper. Instead of seeking to re-purpose available drugs, the team used an AI-based drug discovery platform to generate tens of thousands of novel molecules with the potential to bind a specific SARS-CoV-2 protein and block the virus’s ability to replicate. A deep learning filtering system narrowed down the list.

“We published the original 100 molecules after a 4-day AI sprint,” says Insilico CEO Alex Zhavoronkov. The group next planned to make and test seven of the molecules, but the pandemic interrupted: Over 20 of their contract chemists were quarantined in Wuhan.

Since then, Insilico has synthesized two of the seven molecules and, with a pharmaceutical partner, plans to put them to the test in the next two weeks, Zhavoronkov tells IEEE. The company is also in the process of licensing their AI platform to two large pharmaceutical companies.

Insilico is also actively investigating drugs that might improve the immune systems of the elderly—so an older individual might respond to SARS-CoV-2 infection as a younger person does, with milder symptoms and faster recovery—and drugs to help restore lung function after infection. They hope to publish additional results soon.

SRI Biosciences and Iktos

On March 4, Menlo Park-based research center SRI International and AI company Iktos in Paris announced a collaboration to discover and develop new anti-viral therapies. Iktos’s deep learning model designs virtual novel molecules while SRI’s SynFini automated synthetic chemistry platform figures out the best way to make a molecule, then makes it.

With their powers combined, the systems can design, make and test new drug-like molecules in 1 to 2 weeks, says Iktos CEO Yann Gaston-Mathé. AI-based generation of drug candidates is currently in progress, and “the first round of target compounds will be handed to SRI’s SynFini automated synthesis platform shortly,” he tells IEEE.

Iktos also recently released two AI-based software platforms to accelerate drug discovery: one for new drug design, and another, with a free online beta version, to help synthetic chemists deconstruct how to better build a compound. “We are eager to attract as many users as possible on this free platform and to get their feedback to help us improve this young technology,” says Gaston-Mathé.

Benevolent AI

In February, British AI startup Benevolent AI published two articles, one in The Lancet and one in The Lancet Infectious Diseases, identifying approved drugs that might block the viral replication process of SARS-CoV-2.

Using a large repository of medical information, including data extracted from the scientific literature by machine learning, the company’s AI system identified 6 compounds that effectively block a cellular pathway that appears to allow the virus into cells to make more virus particles.

One of those six, baricitinib, a once-daily pill approved to treat rheumatoid arthritis, looks to be the best of the group for both safety and efficacy against SARS-CoV-2, the authors wrote. Benevolent’s co-founder, Ivan Griffin, told Recode that Benevolent has reached out to drug manufacturers who make the drug about testing it as a potential treatment.

Currently, ruxolitinib, a drug that works by a similar mechanism, is in clinical trials for COVID-19.

Halodoc: Building the Future of Tele-Health One Microservice at a Time

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/halodoc-building-the-future-of-tele-health-one-microservice-at-a-time/

Halodoc, a Jakarta-based healthtech platform, uses tele-health and artificial intelligence to connect patients, doctors, and pharmacies. Join builder Adrian De Luca for this special edition of This is My Architecture as he dives deep into the solutions architecture of this Indonesian healthtech platform that provides healthcare services in one of the most challenging traffic environments in the world.

Explore how the company evolved its monolithic backend into decoupled microservices with Amazon EC2 and Amazon Simple Queue Service (SQS), adopted serverless to cost effectively support new user functionality with AWS Lambda, and manages the high volume and velocity of data with Amazon DynamoDB, Amazon Relational Database Service (RDS), and Amazon Redshift.

For more content like this, subscribe to our YouTube channels This is My Architecture, This is My Code, and This is My Model, or visit the This is My Architecture AWS website, which has search functionality and the ability to filter by industry, language, and service.

Intel’s Neuromorphic Nose Learns Scents in Just One Sniff

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/intels-neuromorphic-nose-learns-scents-in-just-one-sniff

Researchers at Intel and Cornell University report that they’ve made an electronic nose that can learn the scent of a chemical after just one exposure to it and then identify that scent even when it’s masked by others. The system is built around Intel’s neuromorphic research chip, Loihi, and an array of 72 chemical sensors. Loihi was programmed to mimic the workings of neurons in the olfactory bulb, the part of the brain that distinguishes different smells. The system’s inventors say it could one day watch for hazardous substances in the air, sniff out hidden drugs or explosives, or aid in medical diagnoses.

Satellites and AI Monitor Chinese Economy’s Reaction to Coronavirus

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/artificial-intelligence/machine-learning/satellites-and-ai-monitor-chinese-economys-reaction-to-coronavirus

Researchers on WeBank’s AI Moonshot Team have taken a deep learning system developed to detect solar panel installations from satellite imagery and repurposed it to track China’s economic recovery from the novel coronavirus outbreak.

This, as far as the researchers know, is the first time big data and AI have been used to measure the impact of the new coronavirus on China, Haishan Wu, vice general manager of WeBank’s AI department, told IEEE Spectrum. WeBank is a private Chinese online banking company founded by Tencent.

The team used its neural network to analyze visible, near-infrared, and short-wave infrared images from various satellites, including the infrared bands from the Sentinel-2 satellite. This allowed the system to look for hot spots indicative of actual steel manufacturing inside a plant.  In the early days of the outbreak, this analysis showed that steel manufacturing had dropped to a low of 29 percent of capacity. But by 9 February, it had recovered to 76 percent.

The researchers then looked at other types of manufacturing and commercial activity using AI. One of the techniques was simply counting cars in large corporate parking lots. From that analysis, it appeared that, by 10 February, Tesla’s Shanghai car production had fully recovered, while tourism operations, like Shanghai Disneyland, were still shut down.

Moving beyond satellite data, the researchers took daily anonymized GPS data from several million mobile phone users in 2019 and 2020, and used AI to determine which of those users were commuters. The software then counted the number of commuters in each city, and compared the number of commuters on a given day in 2019 and its corresponding date in 2020, starting with Chinese New Year. In both cases, Chinese New Year saw a huge dip in commuting, but unlike in 2019, the number of people going to work didn’t bounce back after the holiday. While things picked up slowly, the WeBank researchers calculated that by 10 March 2020, about 75 percent of the workforce had returned to work.
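
WeBank hasn’t published its pipeline, but the day-by-day comparison described here can be sketched in a few lines of pandas; the commuter counts below are hypothetical and are aligned on the number of days after Chinese New Year in each year.

import pandas as pd

# Hypothetical per-city commuter counts inferred from anonymized GPS traces.
counts = pd.DataFrame({
    "city": ["Shanghai"] * 4,
    "year": [2019, 2019, 2020, 2020],
    "days_after_new_year": [10, 20, 10, 20],
    "commuters": [950_000, 1_000_000, 300_000, 750_000],
})

# Line up each 2020 day with the corresponding 2019 day and compute the
# fraction of the workforce that has returned.
pivot = counts.pivot_table(index=["city", "days_after_new_year"],
                           columns="year", values="commuters")
pivot["recovery"] = pivot[2020] / pivot[2019]
print(pivot)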

Projecting out from these curves, the researchers concluded that most Chinese workers, with the exception of Wuhan, will be back to work by the end of March. Economic growth in the first quarter, their study indicated, will take a 36 percent hit.

Finally, the team used natural language processing technology to mine Twitter-like services and other social media platforms for mentions of companies that provide online working, gaming, education, streaming video, social networking, e-commerce, and express delivery services. According to this analysis, telecommuting for work is booming, up 537 percent from the first day of 2020; online education is up 169 percent; gaming is up 124 percent; video streaming is up 55 percent; social networking is up 47 percent. Meanwhile,  e-commerce is flat, and express delivery is down a little less than 1 percent. The analysis of China’s social media activity also yielded the prediction that the Chinese economy will be mostly back to normal by the end of March.