Tag Archives: artificial intelligence

AWS Online Tech Talks – January 2018

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-january-2018/

Happy New Year! Kick off 2018 right by expanding your AWS knowledge with a great batch of new Tech Talks. We’re covering some of the biggest launches from re:Invent, including Amazon Neptune, Amazon Rekognition Video, AWS Fargate, AWS Cloud9, Amazon Kinesis Video Streams, AWS PrivateLink, AWS Single Sign-On, and more!

January 2018 – Schedule

Noted below are the upcoming scheduled live, online technical sessions being held during the month of January. Make sure to register ahead of time so you won’t miss out on these free talks conducted by AWS subject matter experts.

Webinars featured this month are:

Monday, January 22

Analytics & Big Data
11:00 AM – 11:45 AM PT Analyze your Data Lake, Fast @ Any Scale Lvl 300

Database
01:00 PM – 01:45 PM PT Deep Dive on Amazon Neptune Lvl 200

Tuesday, January 23

Artificial Intelligence
09:00 AM – 09:45 AM PT How to get the most out of Amazon Rekognition Video, a deep learning based video analysis service Lvl 300

Containers

11:00 AM – 11:45 AM PT Introducing AWS Fargate Lvl 200

Serverless
01:00 PM – 02:00 PM PT Overview of Serverless Application Deployment Patterns Lvl 400

Wednesday, January 24

DevOps
09:00 AM – 09:45 AM PT Introducing AWS Cloud9  Lvl 200

Analytics & Big Data
11:00 AM – 11:45 AM PT Deep Dive: Amazon Kinesis Video Streams Lvl 300

Database
01:00 PM – 01:45 PM PT Introducing Amazon Aurora with PostgreSQL Compatibility Lvl 200

Thursday, January 25

Artificial Intelligence
09:00 AM – 09:45 AM PT Introducing Amazon SageMaker Lvl 200

Mobile
11:00 AM – 11:45 AM PT Ionic and React Hybrid Web/Native Mobile Applications with Mobile Hub Lvl 200

IoT
01:00 PM – 01:45 PM PT Connected Product Development: Secure Cloud & Local Connectivity for Microcontroller-based Devices Lvl 200

Monday, January 29

Enterprise
11:00 AM – 11:45 AM PT Enterprise Solutions Best Practices 100: Achieving Business Value with AWS Lvl 100

Compute
01:00 PM – 01:45 PM PT Introduction to Amazon Lightsail Lvl 200

Tuesday, January 30

Security, Identity & Compliance
09:00 AM – 09:45 AM PT Introducing Managed Rules for AWS WAF Lvl 200

Storage
11:00 AM – 11:45 AM PT  Improving Backup & DR – AWS Storage Gateway Lvl 300

Compute
01:00 PM – 01:45 PM PT  Introducing the New Simplified Access Model for EC2 Spot Instances Lvl 200

Wednesday, January 31

Networking
09:00 AM – 09:45 AM PT  Deep Dive on AWS PrivateLink Lvl 300

Enterprise
11:00 AM – 11:45 AM PT Preparing Your Team for a Cloud Transformation Lvl 200

Compute
01:00 PM – 01:45 PM PT  The Nitro Project: Next-Generation EC2 Infrastructure Lvl 300

Thursday, February 1

Security, Identity & Compliance
09:00 AM – 09:45 AM PT  Deep Dive on AWS Single Sign-On Lvl 300

Storage
11:00 AM – 11:45 AM PT How to Build a Data Lake in Amazon S3 & Amazon Glacier Lvl 300

Google Blocks Pirate Search Results Prophylactically

Post Syndicated from Ernesto original https://torrentfreak.com/google-blocks-pirated-search-results-prophylactically-180103/

On an average day, Google processes more than three million takedown notices from copyright holders, and that’s for its search engine alone.

Under current DMCA legislation, US-based Internet service providers are expected to remove infringing links if a copyright holder complains.

This process shields these services from direct liability. In recent years there has been a lot of discussion about the effectiveness of the system, but Google has always maintained that it works well.

This was also highlighted by Google’s copyright counsel Caleb Donaldson, in an article he wrote for the American Bar Association’s publication Landslide.

“The DMCA provided Google and other online service providers the legal certainty they needed to grow,” Donaldson writes.

“And the DMCA’s takedown notices help us fight piracy in other ways as well. Indeed, the Web Search notice-and-takedown process provides the cornerstone of Google’s fight against piracy.”

The search engine does indeed go beyond ‘just’ removing links. The takedown notices are also used as a signal to demote domains. Websites for which it receives a lot of takedown notices will be placed lower in search results, for example.

These measures can be expanded and complemented by artificial intelligence in the future, Google’s copyright counsel envisions.

“As we move into a world where artificial intelligence can learn from vast troves of data like these, we will only get better at using the information to better fight against piracy,” Donaldson writes.

Artificial intelligence (AI) is a buzz-term with a pretty broad meaning nowadays. Donaldson doesn’t go into detail on how AI can fight piracy. On the one hand, it could help to spot erroneous notices; on the other, it could be applied to filter content proactively.

The latter is something Google is slowly opening up to.

Over the past year, we’ve noticed on a few occasions that Google is processing takedown notices for non-indexed links. While we assumed that this was an ‘error’ on the sender’s part, it appears to be a new policy.

“Google has critically expanded notice and takedown in another important way: We accept notices for URLs that are not even in our index in the first place. That way, we can collect information even about pages and domains we have not yet crawled,” Donaldson writes.

In other words, Google blocks URLs before they appear in the search results, as some sort of piracy vaccine.

“We process these URLs as we do the others. Once one of these not-in-index URLs is approved for takedown, we prophylactically block it from appearing in our Search results, and we take all the additional deterrent measures listed above.”

Some submitters rely heavily on the new feature, Google found. In some cases, the majority of the URLs submitted in a notice are not yet indexed.

The search engine will keep a close eye on these developments. At TorrentFreak, we also found that copyright holders sometimes target links that don’t even exist. Whether Google will also accept these takedown requests in the future is unknown.

It’s clear that artificial intelligence and proactive filtering are becoming more and more common, but Google says that the company will also keep an eye on possible abuse of the system.

“Google will push back if we suspect a notice is mistaken, fraudulent, or abusive, or if we think fair use or another defense excuses that particular use of copyrighted content,” Donaldson notes.

Artificial intelligence and prophylactic blocking surely add a new dimension to the standard DMCA takedown procedure, but whether that will be enough to convince copyright holders that the system works remains to be seen.


The deep learning Santa/Not Santa detector

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/deep-learning-santa-detector/

Did you see Mommy kissing Santa Claus? Or was it simply an imposter? The Not Santa detector is here to help solve the mystery once and for all.

Building a “Not Santa” detector on the Raspberry Pi using deep learning, Keras, and Python

The video is a demo of my “Not Santa” detector that I deployed to the Raspberry Pi. I trained the detector using deep learning, Keras, and Python. You can find the full source code and tutorial here: https://www.pyimagesearch.com/2017/12/18/keras-deep-learning-raspberry-pi/

Ho-ho-how does it work?

Note: Adrian Rosebrock is not Santa. But he does a good enough impression of the jolly old fellow that his disguise can fool a Raspberry Pi into thinking otherwise.

Raspberry Pi 'Not Santa' detector

We jest, but has anyone seen Adrian and Santa in the same room together?
Image c/o Adrian Rosebrock

But how is the Raspberry Pi able to detect the Santa-ness or Not-Santa-ness of people who walk into the frame?

Two words: deep learning

If you’re not sure what deep learning is, you’re not alone. It’s a hefty topic, and one that Adrian has written a book about, so I grilled him for a bluffers’ guide. In his words, deep learning is:

…a subfield of machine learning, which is, in turn, a subfield of artificial intelligence (AI). While AI embodies a large, diverse set of techniques and algorithms related to automatic reasoning (inference, planning, heuristics, etc.), the machine learning subfields are specifically interested in pattern recognition and learning from data.

Artificial Neural Networks (ANNs) are a class of machine learning algorithms that can learn from data. We have been using ANNs successfully for over 60 years, but something special happened in the past 5 years — (1) we’ve been able to accumulate massive datasets, orders of magnitude larger than previous datasets, and (2) we have access to specialized hardware to train networks faster (i.e., GPUs).

Given these large datasets and specialized hardware, deeper neural networks can be trained, leading to the term “deep learning”.

Now that we have a bird’s-eye view of deep learning, how does the detector detect?

Cameras and twinkly lights

Adrian used a model he had trained on two datasets to detect whether or not an image contains Santa. He deployed the Not Santa detector code to a Raspberry Pi, then attached a camera, speakers, and The Pi Hut’s 3D Xmas Tree.

Raspberry Pi 'Not Santa' detector

Components for Santa detection
Image c/o Adrian Rosebrock

The camera captures footage of Santa in the wild, while the Christmas tree add-on provides a twinkly notification, accompanied by a resonant ho, ho, ho from the speakers.
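
For readers who want a feel for what the detector code is doing, here is a minimal, illustrative sketch of the kind of inference loop Adrian describes: grab a frame, resize it to the network’s input size, and ask a trained Keras model whether Santa is in it. The model filename, the 28×28 input size, and the class ordering are assumptions for illustration; the real implementation lives in Adrian’s tutorial linked above.

# Illustrative sketch only; see Adrian's PyImageSearch tutorial for the real code.
import cv2
import numpy as np
from keras.models import load_model
from keras.preprocessing.image import img_to_array

model = load_model("santa_not_santa.model")  # hypothetical filename for the trained model
camera = cv2.VideoCapture(0)                 # Pi camera exposed as a video device

while True:
    grabbed, frame = camera.read()
    if not grabbed:
        break

    # Shrink the frame to the small input size a LeNet-style network expects
    image = cv2.resize(frame, (28, 28)).astype("float") / 255.0
    image = np.expand_dims(img_to_array(image), axis=0)

    not_santa_prob, santa_prob = model.predict(image)[0]
    if santa_prob > 0.5:
        print("Santa detected! Light the tree and play the ho, ho, ho.")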

A deeper deep dive into deep learning

A full breakdown of the project and the workings of the Not Santa detector can be found on Adrian’s blog, PyImageSearch, which includes links to other deep learning and image classification tutorials using TensorFlow and Keras. It’s an excellent place to start if you’d like to understand more about deep learning.

Build your own Santa detector

Santa might catch on to Adrian’s clever detector and start avoiding the camera, and for that eventuality, we have our own Santa detector. It uses motion detection to notify you of his presence (and your presents!).

Raspberry Pi Santa detector

Check out our Santa Detector resource here and use a passive infrared sensor, Raspberry Pi, and Scratch to catch the big man in action.

The post The deep learning Santa/Not Santa detector appeared first on Raspberry Pi.

What is HAMR and How Does It Enable the High-Capacity Needs of the Future?

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/hamr-hard-drives/

HAMR drive illustration

During Q4, Backblaze deployed 100 petabytes worth of Seagate hard drives to our data centers. The newly deployed Seagate 10 and 12 TB drives are doing well and will help us meet our near term storage needs, but we know we’re going to need more drives — with higher capacities. That’s why the success of new hard drive technologies like Heat-Assisted Magnetic Recording (HAMR) from Seagate is very relevant to us here at Backblaze and to the storage industry in general. In today’s guest post we are pleased to have Mark Re, CTO at Seagate, give us an insider’s look behind the hard drive curtain and tell us how Seagate engineers are developing HAMR technology and making it market ready starting in late 2018.

What is HAMR and How Does It Enable the High-Capacity Needs of the Future?

Guest Blog Post by Mark Re, Seagate Senior Vice President and Chief Technology Officer

Earlier this year Seagate announced plans to make the first hard drives using Heat-Assisted Magnetic Recording, or HAMR, available by the end of 2018 in pilot volumes. Even as today’s market has embraced 10TB+ drives, the need for 20TB+ drives remains imperative in the relative near term. HAMR is the Seagate research team’s next major advance in hard drive technology.

HAMR is a technology that over time will enable a big increase in the amount of data that can be stored on a disk. A small laser is attached to a recording head, designed to heat a tiny spot on the disk where the data will be written. This allows a smaller bit cell to be written as either a 0 or a 1. The smaller bit cell size enables more bits to be crammed into a given surface area — increasing the areal density of data, and increasing drive capacity.

It sounds almost simple, but the science and engineering expertise required to perfect this technology, spanning research, experimentation, lab development, and product development, has been enormous. Below is an overview of the HAMR technology, and you can dig into the details in our technical brief, which provides a point-by-point rundown of several key advances enabling the HAMR design.

As much time and resources as have been committed to developing HAMR, the need for its increased data density is indisputable. Demand for data storage keeps increasing. Businesses’ ability to manage and leverage more capacity is a competitive necessity, and IT spending on capacity continues to increase.

History of Increasing Storage Capacity

For the last 50 years, areal density in the hard disk drive has been growing faster than Moore’s law, which is a very good thing. After all, customers from data centers and cloud service providers to creative professionals and game enthusiasts rarely go shopping looking for a hard drive just like the one they bought two years ago. The demand for more storage capacity inevitably grows as data keeps increasing, so the technology constantly evolves.

According to the Advanced Storage Technology Consortium, HAMR will be the next significant storage technology innovation to increase the amount of storage in the area available to store data, also called the disk’s “areal density.” We believe this boost in areal density will help fuel hard drive product development and growth through the next decade.

Why do we Need to Develop Higher-Capacity Hard Drives? Can’t Current Technologies do the Job?

Why is HAMR’s increased data density so important?

Data has become critical to all aspects of human life, changing how we’re educated and entertained. It affects and informs the ways we experience each other and interact with businesses and the wider world. IDC research shows the datasphere — all the data generated by the world’s businesses and billions of consumer endpoints — will continue to double in size every two years. IDC forecasts that by 2025 the global datasphere will grow to 163 zettabytes (that is a trillion gigabytes). That’s ten times the 16.1 ZB of data generated in 2016. IDC cites five key trends intensifying the role of data in changing our world: embedded systems and the Internet of Things (IoT), instantly available mobile and real-time data, cognitive artificial intelligence (AI) systems, increased security data requirements, and critically, the evolution of data from playing a background role in business to playing a life-critical one.

Consumers use the cloud to manage everything from family photos and videos to data about their health and exercise routines. Real-time data created by connected devices — everything from Fitbit, Alexa and smart phones to home security systems, solar systems and autonomous cars — are fueling the emerging Data Age. On top of the obvious business and consumer data growth, our critical infrastructure like power grids, water systems, hospitals, road infrastructure and public transportation all demand and add to the growth of real-time data. Data is now a vital element in the smooth operation of all aspects of daily life.

All of this comes with a significant behind-the-scenes infrastructure cost to feed the insatiable, global appetite for data storage. While a variety of storage technologies will continue to advance in data density (Seagate announced the first 60TB 3.5-inch SSD unit, for example), high-capacity hard drives serve as the primary foundational core of our interconnected, cloud- and IoT-based dependence on data.

HAMR Hard Drive Technology

Seagate has been working on heat assisted magnetic recording (HAMR) in one form or another since the late 1990s. During this time we’ve made many breakthroughs in making reliable near field transducers, special high capacity HAMR media, and figuring out a way to put a laser on each and every head that is no larger than a grain of salt.

The development of HAMR has required Seagate to consider and overcome a myriad of scientific and technical challenges including new kinds of magnetic media, nano-plasmonic device design and fabrication, laser integration, high-temperature head-disk interactions, and thermal regulation.

A typical hard drive inside any computer or server contains one or more rigid disks coated with a magnetically sensitive film consisting of tiny magnetic grains. Data is recorded when a magnetic write-head flies just above the spinning disk; the write head rapidly flips the magnetization of one magnetic region of grains so that its magnetic pole points up or down, to encode a 1 or a 0 in binary code.

Increasing the amount of data you can store on a disk requires cramming magnetic regions closer together, which means the grains need to be smaller so they won’t interfere with each other.

Heat Assisted Magnetic Recording (HAMR) is the next step to enable us to increase the density of grains — or bit density. Current projections are that HAMR can achieve 5 Tbpsi (Terabits per square inch) on conventional HAMR media, and in the future will be able to achieve 10 Tbpsi or higher with bit patterned media (in which discrete dots are predefined on the media in regular, efficient, very dense patterns). These technologies will enable hard drives with capacities higher than 100 TB before 2030.
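
As a rough sense of scale, drive capacity is approximately areal density multiplied by the usable recording area across all platter surfaces. The numbers below are illustrative assumptions for a back-of-the-envelope estimate, not Seagate figures, but they show how 5–10 Tbpsi translates into the kinds of capacities mentioned above.

# Back-of-the-envelope capacity estimate from areal density (illustrative assumptions only).
TBITS_PER_SQ_INCH = 5          # projected HAMR areal density, in terabits per square inch
USABLE_SQ_INCH_PER_SIDE = 6.5  # assumed usable recording area per platter surface
SURFACES = 9 * 2               # assumed 9 platters, two recording surfaces each

raw_terabits = TBITS_PER_SQ_INCH * USABLE_SQ_INCH_PER_SIDE * SURFACES
raw_terabytes = raw_terabits / 8
print(f"~{raw_terabytes:.0f} TB raw capacity")  # about 73 TB; 10 Tbpsi would roughly double it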

The major problem with packing bits so closely together is that if you do that on conventional magnetic media, the bits (and the data they represent) become thermally unstable, and may flip. So, to make the grains maintain their stability — their ability to store bits over a long period of time — we need to develop a recording media that has higher coercivity. That means it’s magnetically more stable during storage, but it is more difficult to change the magnetic characteristics of the media when writing (harder to flip a grain from a 0 to a 1 or vice versa).

That’s why HAMR’s first key hardware advance required developing a new recording media that keeps bits stable — using high anisotropy (or “hard”) magnetic materials such as iron-platinum alloy (FePt), which resist magnetic change at normal temperatures. Over years of HAMR development, Seagate researchers have tested and proven out a variety of FePt granular media films, with varying alloy composition and chemical ordering.

In fact the new media is so “hard” that conventional recording heads won’t be able to flip the bits, or write new data, under normal temperatures. If you add heat to the tiny spot on which you want to write data, you can make the media’s coercive field lower than the magnetic field provided by the recording head — in other words, enable the write head to flip that bit.

So, a challenge with HAMR has been to replace conventional perpendicular magnetic recording (PMR), in which the write head operates at room temperature, with a write technology that heats the thin film recording medium on the disk platter to temperatures above 400 °C. The basic principle is to heat a tiny region of several magnetic grains for a very short time (~1 nanosecond) to a temperature high enough to make the media’s coercive field lower than the write head’s magnetic field. Immediately after the heat pulse, the region quickly cools down and the bit’s magnetic orientation is frozen in place.

Applying this dynamic nano-heating is where HAMR’s famous “laser” comes in. A plasmonic near-field transducer (NFT) has been integrated into the recording head to heat the media and enable magnetic change at a specific point. Plasmonic NFTs are used to focus and confine light energy to regions smaller than the wavelength of light. This enables us to heat an extremely small region, measured in nanometers, on the disk media to reduce its magnetic coercivity.

Moving HAMR Forward

HAMR write head

As always in advanced engineering, the devil — or many devils — is in the details. As noted earlier, our technical brief provides a point-by-point short illustrated summary of HAMR’s key changes.

Although hard work remains, we believe this technology is nearly ready for commercialization. Seagate has the best engineers in the world working towards a goal of a 20 Terabyte drive by 2019. We hope we’ve given you a glimpse into the amount of engineering that goes into a hard drive. Keeping up with the world’s insatiable appetite to create, capture, store, secure, manage, analyze, rapidly access and share data is a challenge we work on every day.

With thousands of HAMR drives already being made in our manufacturing facilities, our internal and external supply chain is solidly in place, and volume manufacturing tools are online. This year we began shipping initial units for customer tests, and production units will ship to key customers by the end of 2018. Prepare for breakthrough capacities.

The post What is HAMR and How Does It Enable the High-Capacity Needs of the Future? appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Marvellous retrofitted home assistants

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/retrofitted-home-assistants/

As more and more digital home assistants are appearing on the consumer market, it’s not uncommon to see the towering Amazon Echo or sleek Google Home when visiting friends or family. But we, the maker community, are rarely happy unless our tech stands out from the rest. So without further ado, here’s a roundup of some fantastic retrofitted home assistant projects you can recreate and give pride of place in your kitchen, on your bookshelf, or wherever else you’d like to talk to your virtual, disembodied PA.

Google AIY Robot Conversion

Turned an 80s Tomy Mr Money into a little Google AIY / Raspberry Pi based assistant.

Matt ‘Circuitbeard’ Brailsford’s Tomy Mr Money Google AIY Assistant is just one of many home-brew home assistants makers have built since the release of APIs for Amazon Alexa and Google Home. Here are some more…

Teddy Ruxpin

Oh Teddy, how exciting and mysterious you were when I unwrapped you back in the mid-eighties. With your awkwardly moving lips and twitching eyelids, you were the cream of the crop of robotic toys! How was I to know that during my thirties, you would become augmented with home assistant software and suddenly instil within me a fear unlike any I’d felt before? (Save for my lifelong horror of ET…)

Alexa Ruxpin – Raspberry Pi & Alexa Powered Teddy Bear

There are tons of virtual assistants out on the market: Siri, Ok Google, Alexa, etc. I had this crazy idea…what if I made the virtual assistant real…kinda. I decided to take an old animatronic teddy bear and hack it so that it ran Amazon Alexa.

Several makers around the world have performed surgery on Teddy to install a Raspberry Pi within his stomach and integrate him with Amazon Alexa Voice or Google’s AIY Projects Voice kit. And because these makers are talented, they’ve also managed to hijack Teddy’s wiring to make his lips move in time with his responses to your commands. Freaky…

Speaking of freaky: check out Zack’s Furlexa — an Amazon Alexa Furby that will haunt your nightmares.

Give old tech new life

Devices that were the height of technology when you purchased them may now be languishing in your attic collecting dust. With new and improved versions of gadgets and gizmos being released almost constantly, it is likely that your household harbours a spare whosit or whatsit which you can dismantle and give a new Raspberry Pi heart and purpose.

Take, for example, Martin Mander’s Google Pi intercom. By gutting and thoroughly cleaning a vintage intercom, Martin fashioned a suitable housing for the Google AIY Projects Voice kit to create a new home assistant for his house:

1986 Google Pi Intercom

This is a 1986 Radio Shack Intercom that I’ve converted into a Google Home style device using a Raspberry Pi and the Google AIY (Artificial Intelligence Yourself) kit that came free with the MagPi magazine (issue 57). It uses the Google Assistant to answer questions and perform actions, using IFTTT to integrate with smart home accessories and other web services.

Not only does this build look fantastic, it’s also a great conversation starter for any visitors who had a similar device during the eighties.

Also take a look at Martin’s 1970s Amazon Alexa phone for more nostalgic splendour.

Put it in a box

…and then I’ll put that box inside of another box, and then I’ll mail that box to myself, and when it arrives…

A GIF from the emperors new groove - Raspberry Pi Home Assistant

A GIF. A harmless, little GIF…and proof of the comms team’s obsession with The Emperor’s New Groove.

You don’t have to be fancy when it comes to housing your home assistant. And often, especially if you’re working with the smaller people in your household, the results of a simple homespun approach are just as delightful.

Here are Hannah and her dad Tom, explaining how they built a home assistant together and fit it inside an old cigar box:

Raspberry Pi 3 Amazon Echo – The Alexa Kids Build!

My 7 year old daughter and I decided to play around with the Raspberry Pi and build ourselves an Amazon Echo (Alexa). The video tells you about what we did and the links below will take you to all the sites we used to get this up and running.

Also see the Google AIY Projects Voice kit — the cardboard box-est of home assistant boxes.

Make your own home assistant

And now it’s your turn! I challenge you all (and also myself) to create a home assistant using the Raspberry Pi. Whether you decide to fit Amazon Alexa inside an old shoebox or Google Home inside your sister’s Barbie, I’d love to see what you create using the free home assistant software available online.

Check out these other home assistants for Raspberry Pi, and keep an eye on our blog to see what I manage to create as part of the challenge.

Ten virtual house points for everyone who shares their build with us online, either in the comments below or by tagging us on your social media account.

The post Marvellous retrofitted home assistants appeared first on Raspberry Pi.

AWS Contributes to Milestone 1.0 Release and Adds Model Serving Capability for Apache MXNet

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/aws-contributes-to-milestone-1-0-release-and-adds-model-serving-capability-for-apache-mxnet/

Post by Dr. Matt Wood

Today AWS announced contributions to the milestone 1.0 release of the Apache MXNet deep learning engine including the introduction of a new model-serving capability for MXNet. The new capabilities in MXNet provide the following benefits to users:

1) MXNet is easier to use: The model server for MXNet is a new capability introduced by AWS, and it packages, runs, and serves deep learning models in seconds with just a few lines of code, making them accessible over the internet via an API endpoint and thus easy to integrate into applications. The 1.0 release also includes an advanced indexing capability that enables users to perform matrix operations in a more intuitive manner.

  • Model Serving enables setup of an API endpoint for prediction: It saves developers time and effort by condensing the task of setting up an API endpoint for running and integrating prediction functionality into an application to just a few lines of code. It bridges the gap between Python-based deep learning frameworks and production systems through a Docker container-based deployment model.
  • Advanced indexing for array operations in MXNet: It is now more intuitive for developers to leverage the powerful array operations in MXNet. They can use the advanced indexing capability by leveraging existing knowledge of NumPy/SciPy arrays. For example, it supports using an MXNet NDArray or a NumPy ndarray as an index, e.g. a[mx.nd.array([1, 2], dtype='int32')]; a short sketch follows below.
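
As a quick illustration of the advanced indexing feature, the snippet below indexes an NDArray with both an MXNet NDArray and a NumPy ndarray. It assumes MXNet 1.0 or later is installed as the mxnet package.

import mxnet as mx
import numpy as np

a = mx.nd.array([[1, 2], [3, 4], [5, 6]])

# Index with an MXNet NDArray of integer indices...
print(a[mx.nd.array([1, 2], dtype='int32')])

# ...or with a NumPy ndarray, NumPy-style.
print(a[np.array([0, 2])])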

2) MXNet is faster: The 1.0 release includes implementation of cutting-edge features that optimize the performance of training and inference. Gradient compression enables users to train models up to five times faster by reducing communication bandwidth between compute nodes without loss in convergence rate or accuracy. For speech recognition acoustic modeling like the Alexa voice, this feature can reduce network bandwidth by up to three orders of magnitude during training. With the support of NVIDIA Collective Communication Library (NCCL), users can train a model 20% faster on multi-GPU systems.

  • Optimize network bandwidth with gradient compression: In distributed training, each machine must communicate frequently with others to update the weight-vectors and thereby collectively build a single model, leading to high network traffic. The gradient compression algorithm enables users to train models up to five times faster by compressing the model changes communicated by each instance.
  • Optimize the training performance by taking advantage of NCCL: NCCL implements multi-GPU and multi-node collective communication primitives that are performance optimized for NVIDIA GPUs. NCCL provides communication routines that are optimized to achieve high bandwidth over the interconnect between multiple GPUs. MXNet supports NCCL to train models about 20% faster on multi-GPU systems. A configuration sketch for both of these options follows below.
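
For the curious, here is a rough sketch of how these two optimizations are typically switched on via the kvstore. The '2bit' compression type, the 0.5 threshold, and the 'nccl' kvstore name reflect my reading of the 1.0 release; treat the exact parameter names as assumptions and confirm them against the MXNet documentation.

import mxnet as mx

# Distributed training with 2-bit gradient compression. Creating a 'dist_sync'
# kvstore assumes the job is launched with MXNet's distributed launcher so a
# scheduler and workers actually exist.
kv = mx.kv.create('dist_sync')
kv.set_gradient_compression({'type': '2bit', 'threshold': 0.5})

# Single-machine, multi-GPU training backed by NCCL.
kv_local = mx.kv.create('nccl')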

3) MXNet provides easy interoperability: MXNet now includes a tool for converting neural network code written with the Caffe framework to MXNet code, making it easier for users to take advantage of MXNet’s scalability and performance.

  • Migrate Caffe models to MXNet: It is now possible to easily migrate Caffe code to MXNet, using the new source code translation tool for converting Caffe code to MXNet code.

MXNet has helped developers and researchers make progress with everything from language translation to autonomous vehicles and behavioral biometric security. We are excited to see the broad base of users that are building production artificial intelligence applications powered by neural network models developed and trained with MXNet. For example, the autonomous driving company TuSimple recently piloted a self-driving truck on a 200-mile journey from Yuma, Arizona to San Diego, California using MXNet. This release also includes a full-featured and performance optimized version of the Gluon programming interface. Its ease of use, combined with the extensive set of tutorials, has led to significant adoption among developers new to deep learning. The flexibility of the interface has driven interest within the research community, especially in the natural language processing domain.

Getting started with MXNet
Getting started with MXNet is simple. To learn more about the Gluon interface and deep learning, you can reference this comprehensive set of tutorials, which covers everything from an introduction to deep learning to how to implement cutting-edge neural network models. If you’re a contributor to a machine learning framework, check out the interface specs on GitHub.

To get started with the Model Server for Apache MXNet, install the library with the following command:

$ pip install mxnet-model-server

The Model Server library has a Model Zoo with 10 pre-trained deep learning models, including the SqueezeNet 1.1 object classification model. You can start serving the SqueezeNet model with just the following command:

$ mxnet-model-server \
  --models squeezenet=https://s3.amazonaws.com/model-server/models/squeezenet_v1.1/squeezenet_v1.1.model \
  --service dms/model_service/mxnet_vision_service.py
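
Once the server is up, predictions are served over HTTP. The sketch below assumes the server’s default port (8080), the /squeezenet/predict endpoint path, and an input field named input0; check the mxnet-model-server documentation for the exact names, and substitute any local test image.

import requests

with open("kitten.jpg", "rb") as f:  # any local test image
    resp = requests.post(
        "http://127.0.0.1:8080/squeezenet/predict",  # assumed default host/port/path
        files={"input0": f},                         # assumed default input field name
    )

print(resp.json())  # top object classes with their probabilities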

Learn more about the Model Server and view the source code, reference examples, and tutorials here: https://github.com/awslabs/mxnet-model-server/

-Dr. Matt Wood

Amazon EC2 Bare Metal Instances with Direct Access to Hardware

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-amazon-ec2-bare-metal-instances-with-direct-access-to-hardware/

When customers come to us with new and unique requirements for AWS, we listen closely, ask lots of questions, and do our best to understand and address their needs. When we do this, we make the resulting service or feature generally available; we do not build one-offs or “snowflakes” for individual customers. That model is messy and hard to scale and is not the way we work.

Instead, every AWS customer has access to whatever it is that we build, and everyone benefits. VMware Cloud on AWS is a good example of this strategy in action. They told us that they wanted to run their virtualization stack directly on the hardware, within the AWS Cloud, giving their customers access to the elasticity, security, and reliability (not to mention the broad array of services) that AWS offers.

We knew that other customers also had interesting use cases for bare metal hardware and didn’t want to take the performance hit of nested virtualization. They wanted access to the physical resources for applications that take advantage of low-level hardware features such as performance counters and Intel® VT that are not always available or fully supported in virtualized environments, and also for applications intended to run directly on the hardware or licensed and supported for use in non-virtualized environments.

Our multi-year effort to move networking, storage, and other EC2 features out of our virtualization platform and into dedicated hardware was already well underway and provided the perfect foundation for a possible solution. This work, as I described in Now Available – Compute-Intensive C5 Instances for Amazon EC2, includes a set of dedicated hardware accelerators.

Now that we have provided VMware with the bare metal access that they requested, we are doing the same for all AWS customers. I’m really looking forward to seeing what you can do with them!

New Bare Metal Instances
Today we are launching a public preview of the i3.metal instance, the first in a series of EC2 instances that offer the best of both worlds, allowing the operating system to run directly on the underlying hardware while still providing access to all of the benefits of the cloud. The instance gives you direct access to the processor and other hardware, and has the following specifications:

  • Processing – Two Intel Xeon E5-2686 v4 processors running at 2.3 GHz, with a total of 36 hyperthreaded cores (72 logical processors).
  • Memory – 512 GiB.
  • Storage – 15.2 terabytes of local, SSD-based NVMe storage.
  • Network – 25 Gbps of ENA-based enhanced networking.

Bare Metal instances are full-fledged members of the EC2 family and can take advantage of Elastic Load Balancing, Auto Scaling, Amazon CloudWatch, Auto Recovery, and so forth. They can also access the full suite of AWS database, IoT, mobile, analytics, artificial intelligence, and security services.

Previewing Now
We are launching a public preview of the Bare Metal instances today; please sign up now if you want to try them out.

You can now bring your specialized applications or your own stack of virtualized components to AWS and run them on Bare Metal instances. If you are using or thinking about using containers, these instances make a great host for CoreOS.

An AMI that works on one of the new C5 instances should also work on an I3 Bare Metal Instance. It must have the ENA and NVMe drivers, and must be tagged for ENA.
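
If you want to check that programmatically, a small boto3 sketch like the one below can verify the ENA flag on an AMI before trying to launch it on i3.metal. The AMI ID and region are placeholders, and the launch call will only succeed once the instance type is available to your account.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Placeholder AMI ID; substitute your own ENA/NVMe-enabled image.
image = ec2.describe_images(ImageIds=["ami-0123456789abcdef0"])["Images"][0]

if image.get("EnaSupport"):
    ec2.run_instances(
        ImageId=image["ImageId"],
        InstanceType="i3.metal",
        MinCount=1,
        MaxCount=1,
    )
else:
    print("AMI is not ENA-enabled; rebuild it with the ENA and NVMe drivers.")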

Jeff;

 

Presenting Amazon Sumerian: An easy way to create VR, AR, and 3D experiences

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/launch-presenting-amazon-sumerian/

If you have had an opportunity to read any of my blog posts or attended any session I’ve conducted at various conferences, you are probably aware that I am definitively a geek girl. I am absolutely enamored with all of the latest advancements in technology areas like cloud, artificial intelligence, the internet of things, and the maker space, as well as with virtual reality and augmented reality. In my opinion, it is a wonderful time to be a geek. All the things we dreamed about building while we sweated through our algorithms and discrete mathematics classes, or the technology we marveled at when watching Star Wars and Star Trek, are now coming to fruition. So hopefully this means it will only be a matter of time before I can hyperdrive to other galaxies in space, but until then I can at least build 3D, virtual reality, and augmented reality characters and images like those featured in some of my favorite shows.

Amazon Sumerian provides tools and resources that allow anyone to create and run augmented reality (AR), virtual reality (VR), and 3D applications with ease. With Sumerian, you can build multi-platform experiences that run on hardware like the Oculus, HTC Vive, and iOS devices using WebVR-compatible browsers, with support for ARCore on Android devices coming soon.

This exciting new service, currently in preview, delivers features to allow you to design highly immersive and interactive 3D experiences from your browser. Some of these features are:

  • Editor: A web-based editor for constructing 3D scenes, importing assets, scripting interactions and special effects, with cross-platform publishing.
  • Object Library: a library of pre-built objects and templates.
  • Asset Import: Upload 3D assets to use in your scene. Sumerian supports importing FBX and OBJ files, with support for Unity projects coming soon.
  • Scripting Library: provides a JavaScript scripting library via its 3D engine for advanced scripting capabilities.
  • Hosts: animated, lifelike 3D characters that can be customized for gender, voice, and language.
  • AWS Services Integration: baked-in integration with Amazon Polly and Amazon Lex to add speech and natural language understanding to Sumerian hosts. Additionally, the scripting library can be used with AWS Lambda, allowing use of the full range of AWS services.

Since Amazon Sumerian doesn’t require you to have 3D graphics or programming experience to build rich, interactive VR and AR scenes, let’s take a quick run to the Sumerian Dashboard and check it out.

From the Sumerian Dashboard, I can easily create a new scene with a push of a button.

A default view of the new scene opens and is displayed in the Sumerian Editor. With the Tara Blog Scene opened in the editor, I can easily import assets into my scene.

I’ll click the Import Asset button and pick an asset, View Room, to import into the scene. With the desired asset selected, I’ll click the Add button to import it.

Excellent, my asset was successfully imported into the Sumerian Editor and is shown in the Asset panel.  Now, I have the option to add the View Room object into my scene by selecting it in the Asset panel and then dragging it onto the editor’s canvas.

I’ll repeat the import asset process and this time I will add the Mannequin asset to the scene.

Additionally, with Sumerian, I can add scripting to Entity assets to make my scene even more exciting by adding a ScriptComponent to an entity and creating a script.  I can use the provided built-in scripts or create my own custom scripts. If I create a new custom script, I will get a blank script with some base JavaScript code that looks similar to the code below.

'use strict';
/* global sumerian */
// This is me trying out the custom scripts - Tara

var setup = function (args, ctx) {
  // Called when play mode starts.
};

var fixedUpdate = function (args, ctx) {
  // Called on every physics update, after setup().
};

var update = function (args, ctx) {
  // Called on every render frame, after setup().
};

var lateUpdate = function (args, ctx) {
  // Called after all script "update" methods in the scene have been called.
};

var cleanup = function (args, ctx) {
  // Called when play mode stops.
};

var parameters = [];

Very cool, I just created a 3D scene using Amazon Sumerian in a matter of minutes and I have only scratched the surface.

Summary

The Amazon Sumerian service enables you to create, build, and run virtual reality (VR), augmented reality (AR), and 3D applications with ease.  You don’t need any 3D graphics or specialized programming knowledge to get started building scenes and immersive experiences.  You can import FBX, OBJ, and Unity projects in Sumerian, as well as upload your own 3D assets for use in your scene. In addition, you can create digital characters to narrate your scene and with these digital assets, you have choices for the character’s appearance, speech and behavior.

You can learn more about Amazon Sumerian and sign up for the preview to get started with the new service on the product page.  I can’t wait to see what rich experiences you all will build.

Tara

 

How to Recover From Ransomware

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/complete-guide-ransomware/

Here’s the scenario. You’re working on your computer and you notice that it seems slower. Or perhaps you can’t access document or media files that were previously available.

You might be getting error messages from Windows telling you that a file is of an “Unknown file type” or “Windows can’t open this file.”

Windows error message

If you’re on a Mac, you might see the message “No associated application,” or “There is no application set to open the document.”

MacOS error message

Another possibility is that you’re completely locked out of your system. If you’re in an office, you might be looking around and seeing that other people are experiencing the same problem. Some are already locked out, and others are just now wondering what’s going on, just as you are.

Then you see a message confirming your fears.

wana decrypt0r ransomware message

You’ve been infected with ransomware.

You’ll have lots of company this year. The number of ransomware attacks on businesses tripled in the past year, jumping from one attack every two minutes in Q1 to one every 40 seconds by Q3. There were over four times more new ransomware variants in the first quarter of 2017 than in the first quarter of 2016, and damages from ransomware are expected to exceed $5 billion this year.

Growth in Ransomware Variants Since December 2015

Source: Proofpoint Q1 2017 Quarterly Threat Report

This past summer, our local PBS and NPR station in San Francisco, KQED, was debilitated for weeks by a ransomware attack that forced them to go back to working the way they used to prior to computers. Five months have passed since the attack and they’re still recovering and trying to figure out how to prevent it from happening again.

How Does Ransomware Work?

Ransomware typically spreads via spam or phishing emails, but also through websites or drive-by downloads, to infect an endpoint and penetrate the network. Once in place, the ransomware then locks all files it can access using strong encryption. Finally, the malware demands a ransom (typically payable in bitcoins) to decrypt the files and restore full operations to the affected IT systems.

Encrypting ransomware or “cryptoware” is by far the most common recent variety of ransomware. Other types that might be encountered are:

  • Non-encrypting ransomware or lock screens (restricts access to files and data, but does not encrypt them)
  • Ransomware that encrypts the Master Boot Record (MBR) of a drive or Microsoft’s NTFS, which prevents victims’ computers from being booted up in a live OS environment
  • Leakware or extortionware (exfiltrates data that the attackers threaten to release if ransom is not paid)
  • Mobile Device Ransomware (infects cell-phones through “drive-by downloads” or fake apps)

The typical steps in a ransomware attack are:

  1. Infection: After it has been delivered to the system via email attachment, phishing email, infected application or other method, the ransomware installs itself on the endpoint and any network devices it can access.
  2. Secure Key Exchange: The ransomware contacts the command and control server operated by the cybercriminals behind the attack to generate the cryptographic keys to be used on the local system.
  3. Encryption: The ransomware starts encrypting any files it can find on local machines and the network.
  4. Extortion: With the encryption work done, the ransomware displays instructions for extortion and ransom payment, threatening destruction of data if payment is not made.
  5. Unlocking: Organizations can either pay the ransom and hope for the cybercriminals to actually decrypt the affected files (which in many cases does not happen), or they can attempt recovery by removing infected files and systems from the network and restoring data from clean backups.

Who Gets Attacked?

Ransomware attacks target firms of all sizes — 5% or more of businesses in the top 10 industry sectors have been attacked — and no size of business, from SMBs to enterprises, is immune. Attacks are on the rise in every sector and in every size of business.

Recent attacks, such as WannaCry earlier this year, mainly affected systems outside of the United States. Hundreds of thousands of computers were infected from Taiwan to the United Kingdom, where it crippled the National Health Service.

The US has not been so lucky in other attacks, though. The US ranks the highest in the number of ransomware attacks, followed by Germany and then France. Windows computers are the main targets, but ransomware strains exist for Macintosh and Linux, as well.

The unfortunate truth is that ransomware has become so widespread that for most companies it is a certainty that they will be exposed to some degree to a ransomware or malware attack. The best they can do is to be prepared and understand the best ways to minimize the impact of ransomware.

“Ransomware is more about manipulating vulnerabilities in human psychology than the adversary’s technological sophistication.” — James Scott, expert in Artificial Intelligence

Phishing emails, malicious email attachments, and visiting compromised websites have been common vehicles of infection (we wrote about protecting against phishing recently), but other methods have become more common in past months. Weaknesses in Microsoft’s Server Message Block (SMB) and Remote Desktop Protocol (RDP) have allowed cryptoworms to spread. Desktop applications — in one case an accounting package — and even Microsoft Office (Microsoft’s Dynamic Data Exchange — DDE) have been the agents of infection.

Recent ransomware strains such as Petya, CryptoLocker, and WannaCry have incorporated worms to spread themselves across networks, earning the nickname, “cryptoworms.”

How to Defeat Ransomware

  1. Isolate the Infection: Prevent the infection from spreading by separating all infected computers from each other, shared storage, and the network.
  2. Identify the Infection: From messages, evidence on the computer, and identification tools, determine which malware strain you are dealing with.
  3. Report: Report to the authorities to support and coordinate measures to counter attacks.
  4. Determine Your Options: You have a number of ways to deal with the infection. Determine which approach is best for you.
  5. Restore and Refresh: Use safe backups and program and software sources to restore your computer or outfit a new platform.
  6. Plan to Prevent Recurrence: Make an assessment of how the infection occurred and what you can do to put measures into place that will prevent it from happening again.

1 — Isolate the Infection

The rate and speed of ransomware detection is critical in combating fast moving attacks before they succeed in spreading across networks and encrypting vital data.

The first thing to do when a computer is suspected of being infected is to isolate it from other computers and storage devices. Disconnect it from the network (both wired and Wi-Fi) and from any external storage devices. Cryptoworms actively seek out connections and other computers, so you want to prevent that happening. You also don’t want the ransomware communicating across the network with its command and control center.

Be aware that there may be more than just one patient zero, meaning that the ransomware may have entered your organization or home through multiple computers, or may be dormant and not yet shown itself on some systems. Treat all connected and networked computers with suspicion and apply measures to ensure that all systems are not infected.

This Week in Tech (TWiT.tv) did a videocast showing what happens when WannaCry is released on an isolated system and encrypts files and tries to spread itself to other computers. It’s a great lesson on how these types of cryptoworms operate.

2 — Identify the Infection

Most often the ransomware will identify itself when it asks for ransom. There are numerous sites that help you identify the ransomware, including ID Ransomware. The No More Ransom! Project provides the Crypto Sheriff to help identify ransomware.

Identifying the ransomware will help you understand what type of ransomware you have, how it propagates, what types of files it encrypts, and maybe what your options are for removal and disinfection. It also will enable you to report the attack to the authorities, which is recommended.

wanna decryptor 2.0 ransomware message

WannaCry Ransomware Extortion Dialog

3 — Report to the Authorities

You’ll be doing everyone a favor by reporting all ransomware attacks to the authorities. The FBI urges ransomware victims to report ransomware incidents regardless of the outcome. Victim reporting provides law enforcement with a greater understanding of the threat, provides justification for ransomware investigations, and contributes relevant information to ongoing ransomware cases. Knowing more about victims and their experiences with ransomware will help the FBI to determine who is behind the attacks and how they are identifying or targeting victims.

You can file a report with the FBI at the Internet Crime Complaint Center.

There are other ways to report ransomware, as well.

4 — Determine Your Options

Your options when infected with ransomware are:

  1. Pay the ransom
  2. Try to remove the malware
  3. Wipe the system(s) and reinstall from scratch

It’s generally considered a bad idea to pay the ransom. Paying the ransom encourages more ransomware, and in most cases the unlocking of the encrypted files is not successful.

In a recent survey, more than three-quarters of respondents said their organization is not at all likely to pay the ransom in order to recover their data (77%). Only a small minority said they were willing to pay some ransom (3% of companies have already set up a Bitcoin account in preparation).

Even if you decide to pay, it’s very possible you won’t get back your data.

5 — Restore or Start Fresh

You have the choice of trying to remove the malware from your systems or wiping your systems and reinstalling from safe backups and clean OS and application sources.

Get Rid of the Infection

There are internet sites and software packages that claim to be able to remove ransomware from systems. The No More Ransom! Project is one. Other options can be found, as well.

Whether you can successfully and completely remove an infection is up for debate. A working decryptor doesn’t exist for every known ransomware, and unfortunately it’s true that the newer the ransomware, the more sophisticated it’s likely to be, and perhaps a decryptor has not yet been created.

It’s Best to Wipe All Systems Completely

The surest way of being certain that malware or ransomware has been removed from a system is to do a complete wipe of all storage devices and reinstall everything from scratch. If you’ve been following a sound backup strategy, you should have copies of all your documents, media, and important files right up to the time of the infection.

Be sure to determine as well as you can from file dates and other information what was the date of infection. Consider that an infection might have been dormant in your system for a while before it activated and made significant changes to your system. Identifying and learning about the particular malware that attacked your systems will enable you to understand how that malware operates and what your best strategy should be for restoring your systems.

Backblaze Backup enables you to go back in time and specify the date prior to which you wish to restore files. That date should precede the date your system was infected.

Choose files to restore from earlier date in Backblaze Backup

If you’ve been following a good backup policy with both local and off-site backups, you should be able to use backup copies that you are sure were not connected to your network after the time of attack and hence protected from infection. Backup drives that were completely disconnected should be safe, as are files stored in the cloud, as with Backblaze Backup.

System Restores Are not the Best Strategy for Dealing with Ransomware and Malware

You might be tempted to use a System Restore point to get your system back up and running. System Restore is not a good solution for removing viruses or other malware. Since malicious software is typically buried within all kinds of places on a system, you can’t rely on System Restore being able to root out all parts of the malware. Instead, you should rely on a quality virus scanner that you keep up to date. Also, System Restore does not save old copies of your personal files as part of its snapshot. It also will not delete or replace any of your personal files when you perform a restoration, so don’t count on System Restore as working like a backup. You should always have a good backup procedure in place for all your personal files.

Local backups can be encrypted by ransomware. If your backup solution is local and connected to a computer that gets hit with ransomware, the chances are good your backups will be encrypted along with the rest of your data.

With a good backup solution that is isolated from your local computers, such as Backblaze Backup, you can easily obtain the files you need to get your system working again. You have the flexibility to determine which files to restore, from which date you want to restore, and how to obtain the files you need to restore your system.

Choose how to obtain your backup files

You’ll need to reinstall your OS and software applications from the source media or the internet. If you’ve been managing your account and software credentials in a sound manner, you should be able to reactivate accounts for applications that require it.

If you use a password manager, such as 1Password or LastPass, to store your account numbers, usernames, passwords, and other essential information, you can access that information through their web interface or mobile applications. You just need to be sure that you still know your master username and password to obtain access to these programs.

6 — How to Prevent a Ransomware Attack

“Ransomware is at an unprecedented level and requires international investigation.” — European police agency Europol

A ransomware attack can be devastating for a home or a business. Valuable and irreplaceable files can be lost and tens or even hundreds of hours of effort can be required to get rid of the infection and get systems working again.

Security experts suggest several precautionary measures for preventing a ransomware attack.

  1. Use anti-virus and anti-malware software or other security policies to block known payloads from launching.
  2. Make frequent, comprehensive backups of all important files and isolate them from local and open networks. Cybersecurity professionals view data backup and recovery (74% in a recent survey) by far as the most effective solution to respond to a successful ransomware attack.
  3. Keep offline backups of data stored in locations inaccessible from any potentially infected computer, such as external storage drives or the cloud, which prevents them from being accessed by the ransomware.
  4. Install the latest security updates issued by software vendors of your OS and applications. Remember to Patch Early and Patch Often to close known vulnerabilities in operating systems, browsers, and web plugins.
  5. Consider deploying security software to protect endpoints, email servers, and network systems from infection.
  6. Exercise cyber hygiene, such as using caution when opening email attachments and links.
  7. Segment your networks to keep critical computers isolated and to prevent the spread of malware in case of attack. Turn off unneeded network shares.
  8. Turn off admin rights for users who don’t require them. Give users the lowest system permissions they need to do their work.
  9. Restrict write permissions on file servers as much as possible.
  10. Educate yourself, your employees, and your family in best practices to keep malware out of your systems. Update everyone on the latest email phishing scams and social engineering tactics aimed at turning victims into abettors.

It’s clear that the best way to respond to a ransomware attack is to avoid having one in the first place. Beyond that, making sure your valuable data is backed up and unreachable by a ransomware infection will ensure that your downtime and data loss are minimal or avoided completely.
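One practical way to keep that assurance honest is to verify your backups from a known-clean machine against a hash manifest. The sketch below illustrates the idea in Python; the mount point and manifest layout are hypothetical and not tied to any particular backup product.

```python
import hashlib
import json
from pathlib import Path

BACKUP_ROOT = Path("/mnt/offline-backup")    # hypothetical mount point for the backup copy
MANIFEST = BACKUP_ROOT / "manifest.json"     # hypothetical file: {"relative/path": "sha256 hex", ...}

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup() -> bool:
    """Return True only if every file in the manifest is present and unmodified."""
    expected = json.loads(MANIFEST.read_text())
    intact = True
    for rel_path, known_hash in expected.items():
        file_path = BACKUP_ROOT / rel_path
        if not file_path.exists():
            print(f"MISSING  {rel_path}")
            intact = False
        elif sha256_of(file_path) != known_hash:
            # A hash mismatch on a file you never touched is a warning sign that the
            # backup copy has been altered or encrypted.
            print(f"CHANGED  {rel_path}")
            intact = False
    return intact

if __name__ == "__main__":
    print("Backup verified" if verify_backup() else "Backup FAILED verification")
```

Run periodically, a check like this tells you before disaster strikes whether the copy you are counting on can actually be restored.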

Have you endured a ransomware attack or have a strategy to avoid becoming a victim? Please let us know in the comments.

The post How to Recover From Ransomware appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Amazon and Microsoft Ramp Up their AI Efforts

Post Syndicated from Chris De Santis original https://www.anchor.com.au/blog/2017/11/amazon-microsoft-ai-efforts/

In recent years, Amazon Web Services (AWS) and Microsoft have shifted more of their focus to artificial intelligence (AI) and, as a result, have vastly increased their investment in the implementation and advancement of AI across their fleets of online products and services.

Recently, at Sydney’s Microsoft Summit (16-17 November 2017), Microsoft announced that they have stepped up their investment in the AI arena by releasing and showcasing new tools to enhance human and organisational capabilities. These tools include Visual Studio Tools for AI, Azure IoT Edge, Microsoft Translator, and Seeing AI.

The reasoning that Microsoft gave to justify their focus is that the popularity of AI has been steadily increasing, which can be attributed to three main factors: cloud computing, powerful algorithms, and multitudes of data.

Amazon, on the other hand, are planning to introduce a number of new AI products into the market in order to catch up to the likes of Microsoft and Google. Through a project codenamed “Ironman”, AWS are working internally, as well as partnering with startups, to take an AWS data warehousing service that is a few years old and gear it towards organising data for workloads that employ machine learning techniques.

For more information on recent AI endeavours in the cloud computing market, check out our previous article below:

AI in the Cloud Market: AWS & Microsoft Lend a Big Hand


The post Amazon and Microsoft Ramp Up their AI Efforts appeared first on AWS Managed Services by Anchor.

Community Profile: Matthew Timmons-Brown

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/community-profile-matthew-timmons-brown/

This column is from The MagPi issue 57. You can download a PDF of the full issue for free, or subscribe to receive the print edition in your mailbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve its charitable goals.

“I first set up my YouTube channel because I noticed a massive lack of video tutorials for the Raspberry Pi,” explains Matthew Timmons-Brown, known to many as The Raspberry Pi Guy. At 18 years old, the Cambridge-based student has more than 60,000 subscribers to his channel, making his account the most successful Raspberry Pi–specific YouTube account to date.


Matt gives a talk at the Raspberry Pi 5th Birthday weekend event

The Raspberry Pi Guy

If you’ve attended a Raspberry Pi event, there’s a good chance you’ve already met Matt. And if not, you’ll have no doubt come across one or more of his tutorials and builds online. On more than one occasion, his work has featured on the Raspberry Pi blog, with his yearly Raspberry Pi roundup videos being a staple of the birthday celebrations.


With his website, Matt aimed to collect together “the many strands of The Raspberry Pi Guy” into one, neat, cohesive resource — and it works. From newcomers to the credit card-sized computer to hardened Pi veterans, The Raspberry Pi Guy offers aid and inspiration for many. Looking for a review of the Raspberry Pi Zero W? He’s filmed one. Looking for a step-by-step guide to building a Pi-powered Amazon Alexa? No problem, there’s one of those too.

Make your Raspberry Pi artificially intelligent! – Amazon Alexa Personal Assistant Tutorial

Artificial Intelligence. A hefty topic that has dominated the field since computers were first conceived. What if I told you that you could put an artificial intelligence service on your own $30 computer?! That’s right! In this tutorial I will show you how to create your own artificially intelligent personal assistant, using Amazon’s Alexa voice recognition and information service!

Raspberry Pi electric skateboard

Last summer, Matt introduced the world to his Raspberry Pi-controlled electric skateboard, soon finding himself plastered over local press as well as the BBC and tech sites like Adafruit and geek.com. And there’s no question as to why the build was so popular. With YouTubers such as Casey Neistat increasing the demand for electric skateboards on a near-daily basis, the call for a cheaper, home-brew version has quickly grown.

DIY 30KM/H ELECTRIC SKATEBOARD – RASPBERRY PI/WIIMOTE POWERED

Over the summer, I made my own electric skateboard using a £4 Raspberry Pi Zero. Controlled with a Nintendo Wiimote, capable of going 30km/h, and with a range of over 10km, this project has been pretty darn fun. In this video, you see me racing around Cambridge and I explain the ins and outs of this project.

Using a Raspberry Pi Zero, a Nintendo Wii Remote, and a little help from members of the Cambridge Makespace community, Matt built a board capable of reaching 30km/h, with a battery range of 10km per charge. Alongside Neistat, Matt attributes the project inspiration to Australian student Tim Maier, whose build we previously covered in The MagPi.
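The post doesn’t include Matt’s code, but the general shape of such a build is easy to sketch: read the Wii Remote over Bluetooth and map its buttons to the throttle of a motor controller. The snippet below is a rough, hypothetical sketch assuming the cwiid library for the Wiimote and gpiozero for PWM output on the Pi; it is not Matt’s actual implementation, and a real electric skateboard ESC would need properly calibrated servo-style pulses and far more safety handling.

```python
import time
import cwiid                       # Wii Remote over Bluetooth (Linux only)
from gpiozero import PWMOutputDevice

THROTTLE_PIN = 18                  # hypothetical GPIO pin wired to the motor controller input

print("Press 1+2 on the Wiimote to pair...")
wiimote = cwiid.Wiimote()          # blocks until the remote connects
wiimote.rpt_mode = cwiid.RPT_BTN   # ask the remote to report button state

throttle = PWMOutputDevice(THROTTLE_PIN, frequency=50)

try:
    while True:
        buttons = wiimote.state["buttons"]
        if buttons & cwiid.BTN_B:                        # trigger held: ramp the throttle up
            throttle.value = min(throttle.value + 0.02, 1.0)
        else:                                            # trigger released: ease back to zero
            throttle.value = max(throttle.value - 0.05, 0.0)
        time.sleep(0.05)
except KeyboardInterrupt:
    throttle.value = 0.0                                 # always cut power on exit
```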

Matthew Timmons-Brown and Eben Upton standing in a car park looking at a smartphone

LiDAR

Despite the success and the fun of the electric skateboard (including convincing Raspberry Pi Trading CEO Eben Upton to have a go for local television news coverage), the project Matt is most proud of is his wireless LiDAR system for theoretical use on the Mars rovers.

Matthew Timmons-Brown's LiDAR project for scanning terrains with lasers

Using a tablet app to define the angles, Matt’s A Level coursework LiDAR build scans the surrounding area, returning the results to the touchscreen, where they can be manipulated by the user. With his passion for the cosmos and the International Space Station, it’s no wonder that this is Matt’s proudest build.

Built for his A Level Computer Science coursework, the build demonstrates Matt’s passion for space and physics. Used as a means of surveying terrain, LiDAR uses laser light to measure distance, allowing users to create 3D-scanned, high-resolution maps of a specific area. It is a perfect technology for exploring unknown worlds.
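The core maths behind a scan like this is converting each reading, a pan angle, a tilt angle, and a distance, into a point in 3D space. As a hedged illustration of the idea (not Matt’s coursework code), the conversion looks something like this:

```python
import math

def reading_to_point(pan_deg, tilt_deg, distance_m):
    """Convert one LiDAR reading (spherical coordinates) into an (x, y, z) point.

    pan_deg    - horizontal angle of the scan head, in degrees
    tilt_deg   - vertical angle of the scan head, in degrees
    distance_m - distance reported by the laser rangefinder, in metres
    """
    pan = math.radians(pan_deg)
    tilt = math.radians(tilt_deg)
    x = distance_m * math.cos(tilt) * math.cos(pan)
    y = distance_m * math.cos(tilt) * math.sin(pan)
    z = distance_m * math.sin(tilt)
    return (x, y, z)

# A sweep of readings becomes a point cloud that can be rendered as a 3D map.
readings = [(0, 0, 2.4), (10, 0, 2.5), (10, 15, 3.1)]   # hypothetical sample data
point_cloud = [reading_to_point(*r) for r in readings]
print(point_cloud)
```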

Matthew Timmons-Brown and two other young people at a reception in the Houses of Parliament

Matt was invited to St James’s Palace and the Houses of Parliament as part of the Raspberry Pi community celebrations in 2016

Joining the community

In a recent interview at Hills Road Sixth Form College, where he is studying mathematics, further mathematics, physics, and computer science, Matt revealed where his love of electronics and computer science started. “I originally became interested in computer science in 2012, when I read a tiny magazine article about a computer that I would be able to buy with pocket money. This was a pretty exciting thing for a 12-year-old! Your own computer… for less than £30?!” He went on to explain how it became his mission to learn all he could on the subject and how, months later, his YouTube channel came to life, cementing him firmly into the Raspberry Pi community.

The post Community Profile: Matthew Timmons-Brown appeared first on Raspberry Pi.

AWS Online Tech Talks – November 2017

Post Syndicated from Sara Rodas original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-november-2017/

Leaves are crunching under my boots, Halloween is tomorrow, and pumpkin is having its annual moment in the sun – it’s fall everybody! And just in time to celebrate, we have whipped up a fresh batch of pumpkin spice Tech Talks. Grab your planner (Outlook calendar) and pencil these puppies in. This month we are covering re:Invent, serverless, and everything in between.

November 2017 – Schedule

Noted below are the upcoming scheduled live, online technical sessions being held during the month of November. Make sure to register ahead of time so you won’t miss out on these free talks conducted by AWS subject matter experts.

Webinars featured this month are:

Monday, November 6

Compute

9:00 – 9:40 AM PDT: Set it and Forget it: Auto Scaling Target Tracking Policies

Tuesday, November 7

Big Data

9:00 – 9:40 AM PDT: Real-time Application Monitoring with Amazon Kinesis and Amazon CloudWatch

Compute

10:30 – 11:10 AM PDT: Simplify Microsoft Windows Server Management with Amazon Lightsail

Mobile

12:00 – 12:40 PM PDT: Deep Dive on Amazon SES What’s New

Wednesday, November 8

Databases

10:30 – 11:10 AM PDT: Migrating Your Oracle Database to PostgreSQL

Compute

12:00 – 12:40 PM PDT: Run Your CI/CD Pipeline at Scale for a Fraction of the Cost

Thursday, November 9

Databases

10:30 – 11:10 AM PDT: Migrating Your Oracle Database to PostgreSQL

Containers

9:00 – 9:40 AM PDT: Managing Container Images with Amazon ECR

Big Data

12:00 – 12:40 PM PDT: Amazon Elasticsearch Service Security Deep Dive

Monday, November 13

re:Invent

10:30 – 11:10 AM PDT: AWS re:Invent 2017: Know Before You Go

5:00 – 5:40 PM PDT: AWS re:Invent 2017: Know Before You Go

Tuesday, November 14

AI

9:00 – 9:40 AM PDT: Sentiment Analysis Using Apache MXNet and Gluon

10:30 – 11:10 AM PDT: Bringing Characters to Life with Amazon Polly Text-to-Speech

IoT

12:00 – 12:40 PM PDT: Essential Capabilities of an IoT Cloud Platform

Enterprise

2:00 – 2:40 PM PDT: Everything you wanted to know about licensing Windows workloads on AWS, but were afraid to ask

Wednesday, November 15

Security & Identity

9:00 – 9:40 AM PDT: How to Integrate AWS Directory Service with Office365

Storage

10:30 – 11:10 AM PDT: Disaster Recovery Options with AWS

Hands on Lab

12:30 – 2:00 PM PDT: Hands on Lab: Windows Workloads

Thursday, November 16

Serverless

9:00 – 9:40 AM PDT: Building Serverless Websites with Lambda@Edge

Hands on Lab

12:30 – 2:00 PM PDT: Hands on Lab: Deploy .NET Code to AWS from Visual Studio

– Sara

Linux Foundation debuts Community Data License Agreement

Post Syndicated from jake original https://lwn.net/Articles/737212/rss

The Linux Foundation has announced a pair of licenses for data that are modeled on the two broad categories of free-software licenses: permissive and copyleft. The Community Data License Agreement (CDLA) comes in two flavors: Sharing that “encourages contributions of data back to the data community” and Permissive that allows the data to be used without any further requirements.

Inspired by the collaborative software development models of open source software, the CDLA licenses are designed to enable individuals and organizations of all types to share data as easily as they currently share open source software code. Soundly drafted licensing models can help people form communities to assemble, curate and maintain vast amounts of data, measured in petabytes and exabytes, to bring new value to communities of all types, to build new business opportunities and to power new applications that promise to enhance safety and services.

The growth of big data analytics, machine learning and artificial intelligence (AI) technologies has allowed people to extract unprecedented levels of insight from data. Now the challenge is to assemble the critical mass of data for those tools to analyze. The CDLA licenses are designed to help governments, academic institutions, businesses and other organizations open up and share data, with the goal of creating communities that curate and share data openly.

AI in the Cloud Market: AWS & Microsoft Lend a Big Hand

Post Syndicated from Chris De Santis original https://www.anchor.com.au/blog/2017/10/aws-microsoft-launch-ai-platform/

Artificial intelligence (or AI) doesn’t necessarily play a big role in the current cloud hosting market, but Amazon Web Services (AWS) and Microsoft are looking to change that.

AI is starting to grow at an alarming rate and may be a significant role-player in the near future. According to Bernie Trudel, chairman of the Asia Cloud Computing Association (ACCA), AI “will become the killer application that will drive cloud computing forward”. He continues to mention that, although AI only accounts for 1% of today’s global cloud computing market, its overall IT market share is growing at 52%, and it’s expected to grow rapidly to 10% of cloud revenue by 2025.

Trudel noted that, although the big players in the cloud game currently offer AI capabilities, the cloud-based AI market is still in its early stages. These big players include AWS, Microsoft, Google, and IBM. He also states that AWS is certainly the leader in the cloud market, but that they’re playing catch-up from an AI perspective.

AWS 💘 Microsoft?

Here’s the funny bit: a day or two after Trudel said all of this at Cloud Expo Asia, AWS announced (on their blog) a combined effort with Microsoft to create a new open-source deep-learning interface that “allows developers to more easily and quickly build machine learning models”. In other words, the new interface, named Gluon, is a library developers can use to create their own AI models, to the benefit of their own cloud applications and technical endeavours.

If you’d like to learn more about Gluon and the details of the project, head over to the AWS blog here.

AWS + Microsoft


The post AI in the Cloud Market: AWS & Microsoft Lend a Big Hand appeared first on AWS Managed Services by Anchor.

Introducing Gluon: a new library for machine learning from AWS and Microsoft

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/introducing-gluon-a-new-library-for-machine-learning-from-aws-and-microsoft/

Post by Dr. Matt Wood

Today, AWS and Microsoft announced Gluon, a new open source deep learning interface which allows developers to more easily and quickly build machine learning models, without compromising performance.

Gluon Logo

Gluon provides a clear, concise API for defining machine learning models using a collection of pre-built, optimized neural network components. Developers who are new to machine learning will find this interface more familiar to traditional code, since machine learning models can be defined and manipulated just like any other data structure. More seasoned data scientists and researchers will value the ability to build prototypes quickly and utilize dynamic neural network graphs for entirely new model architectures, all without sacrificing training speed.

Gluon is available in Apache MXNet today, a forthcoming Microsoft Cognitive Toolkit release, and in more frameworks over time.

Neural Networks vs Developers
Machine learning with neural networks (including ‘deep learning’) has three main components: data for training, a neural network model, and an algorithm which trains the neural network. You can think of the neural network in a similar way to a directed graph; it has a series of inputs (which represent the data), which connect to a series of outputs (the prediction), through a series of connected layers and weights. During training, the algorithm adjusts the weights in the network based on the error in the network output. This is the process by which the network learns; it is a memory- and compute-intensive process which can take days.
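As a toy illustration of that weight-adjustment step, stripped of any particular framework, training a single linear layer by gradient descent boils down to a loop like this (synthetic data, purely for intuition):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic dataset: inputs x and noisy targets y for y ≈ 3x + 1.
x = rng.uniform(-1, 1, size=(100, 1))
y = 3 * x + 1 + rng.normal(scale=0.05, size=(100, 1))

w = np.zeros((1, 1))            # the "weights" the network learns
b = np.zeros(1)
learning_rate = 0.1

for epoch in range(200):
    pred = x @ w + b                     # forward pass through the "network"
    error = pred - y                     # how wrong the output is
    grad_w = x.T @ error / len(x)        # gradient of mean squared error w.r.t. w
    grad_b = error.mean()
    w -= learning_rate * grad_w          # adjust the weights based on the error
    b -= learning_rate * grad_b

print(w.ravel(), b)                      # should approach [3.] and [1.]
```

Real deep learning scales this same idea to millions of weights and many layers, which is why the frameworks described next put so much effort into optimizing it.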

Deep learning frameworks such as Caffe2, Cognitive Toolkit, TensorFlow, and Apache MXNet are, in part, an answer to the question ‘how can we speed this process up?’ Just like query optimizers in databases, the more a training engine knows about the network and the algorithm, the more optimizations it can make to the training process (for example, it can infer what needs to be re-computed on the graph based on what else has changed, and skip the unaffected weights to speed things up). These frameworks also provide parallelization to distribute the computation process, and reduce the overall training time.

However, in order to achieve these optimizations, most frameworks require the developer to do some extra work: specifically, by providing a formal definition of the network graph, up-front, and then ‘freezing’ the graph, and just adjusting the weights.

The network definition, which can be large and complex with millions of connections, usually has to be constructed by hand. Not only are deep learning networks unwieldy, but they can be difficult to debug and it’s hard to re-use the code between projects.

This complexity can be a barrier for beginners and a time-consuming burden for more experienced researchers. At AWS, we’ve been experimenting with some ideas in MXNet around new, flexible, more approachable ways to define and train neural networks. Microsoft is also a contributor to the open source MXNet project, and was interested in some of these same ideas. Based on this, we got talking, and found we had a similar vision: to use these techniques to reduce the complexity of machine learning, making it accessible to more developers.

Enter Gluon: dynamic graphs, rapid iteration, scalable training
Gluon introduces four key innovations.

  1. Friendly API: Gluon networks can be defined using simple, clear, concise code – this is easier for developers to learn, and much easier to understand than some of the more arcane and formal ways of defining networks and their associated weighted scoring functions.
  2. Dynamic networks: the network definition in Gluon is dynamic: it can bend and flex just like any other data structure. This is in contrast to the more common, formal, symbolic definition of a network which the deep learning framework has to effectively carve into stone in order to be able to optimize computation effectively during training. Dynamic networks are easier to manage, and with Gluon, developers can easily ‘hybridize’ between these fast symbolic representations and the more friendly, dynamic ‘imperative’ definitions of the network and algorithms.
  3. The algorithm can define the network: the model and the training algorithm are brought much closer together. Instead of separate definitions, the algorithm can adjust the network dynamically during definition and training. Not only does this mean that developers can use standard programming loops and conditionals to create these networks, but researchers can now define even more sophisticated algorithms and models which were not possible before. They are all easier to create, change, and debug.
  4. High performance operators for training: these make it possible to have a friendly, concise API and dynamic graphs without sacrificing training speed. This is a huge step forward in machine learning. Some frameworks bring a friendly API or dynamic graphs to deep learning, but these previous methods all incur a cost in terms of training speed. As with other areas of software, abstraction can slow down computation since it needs to be negotiated and interpreted at run time. Gluon can efficiently blend together a concise API with the formal definition under the hood, without the developer having to know about the specific details or to accommodate the compiler optimizations manually.

The team here at AWS, and our collaborators at Microsoft, couldn’t be more excited to bring these improvements to developers through Gluon. We’re already seeing quite a bit of excitement from developers and researchers alike.

Getting started with Gluon
Gluon is available today in Apache MXNet, with support coming for the Microsoft Cognitive Toolkit in a future release. We’re also publishing the front-end interface and the low-level API specifications so it can be included in other frameworks in the fullness of time.

You can get started with Gluon today. Fire up the AWS Deep Learning AMI with a single click and jump into one of 50 fully worked, notebook examples. If you’re a contributor to a machine learning framework, check out the interface specs on GitHub.
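As a taste of what those worked examples cover, here is a minimal, hedged sketch of defining and training a small network with the Gluon API in Apache MXNet; the data here is a random placeholder batch rather than a real dataset.

```python
import numpy as np
import mxnet as mx
from mxnet import nd, autograd, gluon

# A small multi-layer perceptron assembled from pre-built Gluon blocks.
net = gluon.nn.Sequential()
with net.name_scope():
    net.add(gluon.nn.Dense(64, activation="relu"))
    net.add(gluon.nn.Dense(10))
net.collect_params().initialize(mx.init.Xavier())

loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), "sgd", {"learning_rate": 0.1})

# Placeholder batch; in practice this would come from a gluon.data.DataLoader.
data = nd.random.uniform(shape=(32, 784))
label = nd.array(np.random.randint(0, 10, size=32))

with autograd.record():                  # record the forward pass dynamically
    output = net(data)
    loss = loss_fn(output, label)
loss.backward()                          # compute gradients through the recorded graph
trainer.step(batch_size=data.shape[0])   # update the weights
print("batch loss:", loss.mean().asscalar())
```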

-Dr. Matt Wood

Things Go Better With Step Functions

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/things-go-better-with-step-functions/

I often give presentations on Amazon’s culture of innovation, and start out with a slide that features a revealing quote from Amazon founder Jeff Bezos:

I love to sit down with our customers and learn how we have empowered their creativity and helped them pursue their dreams. Earlier this year I chatted with Patrick from The Coca-Cola Company in order to learn how they used AWS Step Functions and other AWS services to support the Coke.com Vending Pass program. This program includes drink rewards earned by purchasing products at vending machines equipped to support mobile payments using the Coca-Cola Vending Pass. Participants swipe their NFC-enabled phones to complete an Apple Pay or Android Pay purchase, identifying themselves to the vending machine and earning credit towards future free vending purchases in the process.

After the swipe, a combination of SNS topics and AWS Lambda functions initiated a pair of calls to some existing backend code to count the vending points and update the participant’s record. Unfortunately, the backend code was slow to react and had some timing dependencies, leading to missing updates that had the potential to confuse Vending Pass participants. The initial solution to this issue was very simple: modify the Lambda code to include a 90 second delay between the two calls. This solved the problem, but ate up process time for no good reason (billing for the use of Lambda functions is based on the duration of the request, in 100 ms intervals).

In order to make their solution more cost-effective, the team turned to AWS Step Functions, building a very simple state machine. As I wrote in an earlier blog post, Step Functions coordinate the components of distributed applications and microservices at scale, using visual workflows that are easy to build.

Coke built a very simple state machine to simplify their business logic and reduce their costs. Your state machines can be equally simple, or they can make use of other Step Functions features such as sequential and parallel execution and the ability to make decisions and choose alternate states. The Coke state machine looks like this:

The FirstState and the SecondState states (Task states) call the appropriate Lambda functions while Step Functions implements the 90 second delay (a Wait state). This modification simplified their logic and reduced their costs. Here’s how it all fits together:
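To make the shape of that state machine concrete, here is a hedged sketch of how a definition like it could be created with boto3. The state names follow the description above, but the Lambda ARNs, IAM role, and state machine name are placeholders rather than Coke’s actual resources.

```python
import json
import boto3

# Placeholder ARNs; substitute your own Lambda functions and IAM role.
FIRST_LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:FirstBackendCall"
SECOND_LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:SecondBackendCall"
ROLE_ARN = "arn:aws:iam::123456789012:role/StepFunctionsExecutionRole"

definition = {
    "Comment": "Call the first backend function, wait 90 seconds, then call the second.",
    "StartAt": "FirstState",
    "States": {
        "FirstState": {"Type": "Task", "Resource": FIRST_LAMBDA_ARN, "Next": "wait_90_seconds"},
        "wait_90_seconds": {"Type": "Wait", "Seconds": 90, "Next": "SecondState"},
        "SecondState": {"Type": "Task", "Resource": SECOND_LAMBDA_ARN, "End": True},
    },
}

sfn = boto3.client("stepfunctions")
response = sfn.create_state_machine(
    name="VendingPassPoints",             # hypothetical name
    definition=json.dumps(definition),
    roleArn=ROLE_ARN,
)
print(response["stateMachineArn"])
```

Step Functions charges per state transition rather than per second, so the 90-second Wait state does not accumulate compute charges the way the Lambda sleep did.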


What’s Next
This initial success led them to take a closer look at serverless computing and to consider using it for other projects. Patrick told me that they have already seen a boost in productivity and developer happiness. Developers no longer need to wait for servers to be provisioned, and can now (as Jeff says) unleash their creativity and pursue their dreams. They expect to use Step Functions to improve the scalability, functionality, and reliability of their applications, going far beyond the initial use for the Coca-Cola Vending Pass. For example, Coke has built a serverless solution for publishing nutrition information to their food service partners using Lambda, Step Functions, and API Gateway.

Patrick and his team are now experimenting with machine learning and artificial intelligence. They built a prototype application to analyze a stream of photos from Instagram and extract trends in tastes and flavors. The application (built as a quick, one-day prototype) made use of Lambda, Amazon DynamoDB, Amazon API Gateway, and Amazon Rekognition and was, in Patrick’s words, a “big win and an enabler.”

In order to build serverless applications even more quickly, the development team has created an internal CI/CD reference architecture that builds on the Serverless Application Framework. The architecture includes a guided tour of Serverless and some boilerplate code to access internal services and assets. Patrick told me that this model allows them to easily scale promising projects from “a guy with a computer” to an entire development team.

Patrick will be on stage at AWS re:Invent next to my colleague Tim Bray. To meet them in person, be sure to attend SRV306 – State Machines in the Wild! How Customers Use AWS Step Functions.

Jeff;

Natural Language Processing at Clemson University – 1.1 Million vCPUs & EC2 Spot Instances

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/natural-language-processing-at-clemson-university-1-1-million-vcpus-ec2-spot-instances/

My colleague Sanjay Padhi shared the guest post below in order to recognize an important milestone in the use of EC2 Spot Instances.

Jeff;


A group of researchers from Clemson University achieved a remarkable milestone while studying topic modeling, an important component of machine learning associated with natural language processing, breaking the record for creating the largest high-performance cluster by using more than 1,100,000 vCPUs on Amazon EC2 Spot Instances running in a single AWS region. The researchers conducted nearly half a million topic modeling experiments to study how human language is processed by computers. Topic modeling helps in discovering the underlying themes that are present across a collection of documents. Topic models are important because they are used to forecast business trends and help in making policy or funding decisions. These topic models can be run with many different parameters and the goal of the experiments is to explore how these parameters affect the model outputs.

The Experiment
Professor Amy Apon, Co-Director of the Complex Systems, Analytics and Visualization Institute at Clemson University, performed the experiments with Professor Alexander Herzog and graduate students Brandon Posey and Christopher Gropp, in collaboration with members of the AWS team as well as AWS Partner Omnibond. They used software infrastructure based on CloudyCluster that provisions high performance computing clusters on dynamically allocated AWS resources using Amazon EC2 Spot Fleet. Spot Fleet is a collection of biddable spot instances in EC2 responsible for maintaining a target capacity specified during the request. The SLURM scheduler was used as an overlay virtual workload manager for the data analytics workflows. The team developed additional provisioning and workflow automation software, as shown below, for the design and orchestration of the experiments. This setup allowed them to evaluate various topic models on different data sets with massively parallel parameter sweeps on dynamically allocated AWS resources. This framework can easily be used beyond the current study for other scientific applications that use parallel computing.
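For readers curious about the provisioning side, fleets like this are requested through the EC2 Spot Fleet API. Below is a minimal, hypothetical boto3 sketch; the AMI, key pair, role ARN, instance types, and target capacity are placeholders and far smaller than the study’s actual fleet.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.request_spot_fleet(
    SpotFleetRequestConfig={
        "IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-role",  # placeholder
        "AllocationStrategy": "lowestPrice",
        "TargetCapacity": 100,                       # the study scaled far beyond this
        "SpotPrice": "0.50",
        "TerminateInstancesWithExpiration": True,
        "LaunchSpecifications": [
            {
                "ImageId": "ami-0123456789abcdef0",  # placeholder AMI with the cluster software
                "InstanceType": instance_type,
                "KeyName": "cluster-key",            # placeholder key pair
            }
            for instance_type in ["c4.8xlarge", "m4.16xlarge", "r4.8xlarge"]
        ],
    }
)
print(response["SpotFleetRequestId"])
```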

Ramping to 1.1 Million vCPUs
The figure below shows elastic, automatic expansion of resources as a function of time, in the US East (Northern Virginia) Region. At just after 21:40 (GMT-1) on Aug. 26, 2017, the number of vCPUs utilized was 1,119,196. Clemson researchers also took advantage of the new per-second billing for the EC2 instances that they launched. The vCPU count usage is comparable to the core count on the largest supercomputers in the world.

Here’s the breakdown of the EC2 instance types that they used:

Campus resources at Clemson, funded by the National Science Foundation, were used to determine an effective configuration for the AWS experiments, and the AWS cloud resources complement the campus resources for large-scale experiments.

Meet the Team
Here’s the team that ran the experiment (Professor Alexander Herzog, graduate students Christopher Gropp and Brandon Posey, and Professor Amy Apon):

Professor Apon said about the experiment:

I am absolutely thrilled with the outcome of this experiment. The graduate students on the project are amazing. They used resources from AWS and Omnibond and developed a new software infrastructure to perform research at a scale and time-to-completion not possible with only campus resources. Per-second billing was a key enabler of these experiments.

Boyd Wilson (CEO, Omnibond, member of the AWS Partner Network) told me:

Participating in this project was exciting, seeing how the Clemson team developed a provisioning and workflow automation tool that tied into CloudyCluster to build a huge Spot Fleet supercomputer in a single region in AWS was outstanding.

About the Experiment
The experiments test parameter combinations on a range of topics and other parameters used in the topic model. The topic model outputs are stored in Amazon S3 and are currently being analyzed. The models have been applied to 17 years of computer science journal abstracts (533,560 documents and 32,551,540 words) and full text papers from the NIPS (Neural Information Processing Systems) Conference (2,484 documents and 3,280,697 words). This study allows the research team to systematically measure and analyze the impact of parameters and model selection on model convergence, topic composition and quality.
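To give a sense of what a single cell of that parameter sweep involves, here is a hedged sketch using the gensim library; the corpus and parameter grid are tiny placeholders, and this is not the team’s actual experimental code.

```python
from itertools import product
from gensim import corpora
from gensim.models import LdaModel

# Placeholder corpus; in the study this would be hundreds of thousands of documents.
documents = [
    ["neural", "network", "training", "gpu"],
    ["cloud", "spot", "instance", "pricing"],
    ["topic", "model", "inference", "text"],
]
dictionary = corpora.Dictionary(documents)
corpus = [dictionary.doc2bow(doc) for doc in documents]

# A small grid over two of the parameters such experiments vary.
num_topics_grid = [2, 5, 10]
passes_grid = [5, 10]

for num_topics, passes in product(num_topics_grid, passes_grid):
    lda = LdaModel(
        corpus=corpus,
        id2word=dictionary,
        num_topics=num_topics,
        passes=passes,
        random_state=0,
    )
    # In the real experiments each trained model and its outputs are stored (e.g. in S3)
    # so that convergence, topic composition, and quality can be compared afterwards.
    print(num_topics, passes, lda.print_topics(num_topics=num_topics, num_words=3))
```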

Looking Forward
This study constitutes an interaction between computer science, artificial intelligence, and high performance computing. Papers describing the full study are being submitted for peer-reviewed publication. I hope that you enjoyed this brief insight into the ways in which AWS is helping to break the boundaries in the frontiers of natural language processing!

Sanjay Padhi, Ph.D, AWS Research and Technical Computing