Tag Archives: Tutorial

Kodi ‘Trademark Troll’ Has Interesting Views on Co-Opting Other People’s Work

Post Syndicated from Andy original https://torrentfreak.com/kodi-trademark-troll-has-interesting-views-on-co-opting-other-peoples-work-170917/

The Kodi team, operating under the XBMC Foundation, announced last week that a third party had registered the Kodi trademark in Canada and was using it for their own purposes.

That person was Geoff Gavora, who had previously been in communication with the Kodi team, expressing how important the software was to his sales.

“We had hoped, given the positive nature of his past emails, that perhaps he was doing this for the benefit of the Foundation. We learned, unfortunately, that this was not the case,” XBMC Foundation President Nathan Betzen said.

According to the Kodi team, Gavora began delisting Amazon ads placed by companies selling Kodi-enabled products, based on infringement of Gavora’s trademark rights.

“[O]nly Gavora’s hardware can be sold, unless those companies pay him a fee to stay on the store,” Betzen explained.

Predictably, Gavora’s move is being viewed as highly controversial, not least since he’s effectively claiming licensing rights in Canada over what should be a free and open source piece of software. TF obtained one of the notices Amazon sent to a seller of a Kodi-enabled device in Canada, following a complaint from Gavora.

Take down Kodi from Amazon, or pay Gavora

So who is Geoff Gavora and what makes him tick? Thanks to a 2016 interview with Ali Salman of the Rapid Growth Podcast, we have a lot of information from the horse’s mouth.

It all began in 2011, when Gavora began jailbreaking Apple TVs, loading them with XBMC, and selling them to friends.

“I did it as a joke, for beer money from my friends,” Gavora told Salman.

“I’d do it for $25 to $50 and word of mouth spread that I was doing this so we could load on this media center to watch content and online streams from it.”

Intro to the interview with Ali Salman

Soon, however, word of mouth caused the business to grow wings, Gavora claims.

“So they started telling people and I start telling people it’s $50, and then I got so busy so I start telling people it’s $75. I’m getting too busy with my work and with this. And it got to the point where I was making more jailbreaking these Apple TVs than I was at my career, and I wasn’t very happy at my career at that time.”

Jailbreaking was supposed to be a side thing to tide Gavora over until another job came along, but he had a problem – he didn’t come from a technical background. What Gavora did have, however, was a background in marketing and a decent knowledge of how to succeed in customer service, so he focused on that front.

Gavora had come to learn that while people wanted his devices, they weren’t very good at operating XBMC (Kodi’s former name) which he’d loaded onto them. With this in mind, he began offering web support and phone support via a toll-free line.

“I started receiving calls from New York, Dallas, and then Australia, Hong Kong. Everyone around the world was calling me and saying ‘we hear there’s some kid in Calgary, some young child, who’s offering tech support for the Apple TV’,” Gavora said.

But with things apparently going well, a wrench was soon thrown into the works when Apple released the third variant of its Apple TV and Gavora was unable to jailbreak it. This prompted him to market his own Linux-based set-top device, and his business, Raw-Media, grew from there.

While it seems likely that so-called ‘Raw Boxes’ were doing reasonably well with consumers, what was the secret of their success? Podcast host Salman asked Gavora for his ‘networking party 10-second pitch’, and the Canadian was happy to oblige.

“I get this all the time actually. I basically tell people that I sell a box that gives them free TV and movies,” he said.

This was met with laughter from the host, to which Gavora added, “That’s sort of the three-second pitch and everyone’s like ‘Oh, tell me more’.”

“Who doesn’t like free TV, come on?” Salman responded. “Yeah exactly,” Gavora said.

The image below, taken from a January 2016 YouTube unboxing video, shows one of the products sold by Gavora’s company.

Raw-Media Kodi Box packaging (note Kodi logo)

Bearing in mind the offer of free movies and TV, the tagline on the box, “Stop paying for things you don’t want to watch, watch more free tv!” initially looks quite provocative. That being said, both the device and Kodi are perfectly capable of playing plenty of legal content from free sources, so there’s no problem there.

What is surprising, however, is that the unboxing video shows the device being booted up, apparently already loaded with infamous third-party Kodi addons including PrimeWire, Genesis, Icefilms, and Navi-X.

The unboxing video showing the Kodi setup

Given that Gavora has registered the Kodi trademark in Canada and prints the official logo on his packaging, this runs counter to the official Kodi team’s aggressive stance towards boxes ready-configured with what they categorize as banned addons. Matters are compounded when one visits the product support site.

As seen in the image below, Raw-Media devices are delivered with a printed card in the packaging informing people where to get the after-sales services Gavora says he built his business upon. The cards advise people to visit No-Issue.ca, a site set up to offer text and video-based support to set-top box buyers.

No-Issue.ca (which is hosted on the same server as raw-media.ca and claimed officially as a sister site here) now redirects to No-Issue.is, as per a 2016 announcement. It has a fairly bland forum but the connected tutorial videos, found on No Issue’s YouTube channel, offer a lot more spice.

Registered under Gavora’s online nickname Gombeek (which is also used on the official Kodi forums), the channel is full of videos detailing how to install and use a wide range of addons.

The No-issue YouTube Channel tutorials

But while supplying tutorial videos is one thing, providing the actual software addons is another. Surprisingly, No-Issue does that too. Filed away under the URL http://solved.no-issue.is/ is a Kodi repository which distributes a wide range of addons, including many that specialize in infringing content, according to the Kodi team.

The No-Issue repository

A source familiar with Raw-Media’s devices informs TF that they’re no longer delivered with addons installed. However, tools hosted on No-Issue.is automate the installation process for the customer, with unlisted YouTube Videos (1,2) providing the instructions.

XBMC Foundation President Nathan Betzen says that situation isn’t ideal.

“If that really is his repo it is disappointing to see that Gavora is charging a fee or outright preventing the sale of boxes with Kodi installed that do not include infringing add-ons, while at the same time he is distributing boxes himself that do include the infringing add-ons like this,” Betzen told TF.

While the legality of this type of service is yet to be properly tested in Canada and may yet emerge as entirely permissible under local law, Gavora himself previously described his business as operating in a gray area.

“If I could go back in time four years, I would’ve been more aggressive in the beginning because there was a lot of uncertainty being in a gray market business about how far I could push it,” he said.

“I really shouldn’t say it’s a gray market because everything I do is completely above board, I just felt it was more gray market so I was a bit scared,” he added.

But, legality aside (which will be determined in due course through various cases 1,2), the situation is still problematic when it comes to the Kodi trademark.

The official Kodi team indicate they don’t want to be associated with any kind of questionable addon or even tutorials for the same. Nevertheless, several of the addons installed by No-Issue (including PrimeWire, cCloud TV, Genesis, Icefilms, MoviesHD, MuchMovies and Navi-X, to name a few), are present on the Kodi team’s official ban list.

The fact remains, however, that Gavora successfully registered the trademark in Canada (one month later it was transferred to a brand new company at the same address), and Kodi now have no control over the situation in the country, short of a settlement or some kind of legal action.

Kodi matters aside, though, we get more insight into Gavora’s attitudes towards intellectual property after learning that he studied gemology and jewelry at school. He’s a long-standing member of the jewelry discussion forum Ganoksin.com (his profile links to Gavora.com, a domain Gavora owns, as per information supplied by Amazon).

Things get particularly topical in a 2006 thread titled “When your work gets ripped”. The original poster asked how people feel when their jewelry work gets copied, and Gavora made his opinions known.

“I think that what most people forget to remember is that when a piece from Tiffany’s or Cartier is ripped off or copied they don’t usually just copy the work, they will stamp it with their name as well,” Gavora said.

“This is, in fact, fraud and they are deceiving clients into believing they are purchasing genuine Tiffany’s or Cartier pieces. The client is in fact more interested in purchasing from an artist than they are the piece. Laying claim to designs (unless a symbol or name is involved) is outrageous.”

Unless that ‘design’ is called Kodi, of course, then it’s possible to claim it as your own through an administrative process and begin demanding licensing fees from the public. That being said, Gavora does seem to flip back and forth a little, later suggesting that being copied is sometimes ok.

“If someone copies your design and produces it under their own name, I think one should be honored and revel in the fact that your design is successful and has caused others to imitate it and grow from it,” he wrote.

“I look forward to the day I see one of my original designs copied, that is the day I will know my design is a success.”

From their public statements, this opinion isn’t shared by the Kodi team in respect of their product. Despite the Kodi name, software and logo being all their own work, they now find themselves having to claw back rights in Canada, in order to keep the product free in the region. For now, however, that seems like a difficult task.

TorrentFreak wrote to Gavora and asked him why he felt the need to register the Kodi trademark, but we received no response. That means we didn’t get the chance to ask him why he’s taking down Amazon listings for other people’s devices, or about something else that came up in the podcast.

“My biggest weakness, I guess, is that I’m too ethical about how I do my business,” he said, referring to how he deals with customers.

Only time will tell how that philosophy will affect Gavora’s attitudes to trademarks and people’s desire not to be charged for using free, open source software.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Manage Kubernetes Clusters on AWS Using CoreOS Tectonic

Post Syndicated from Arun Gupta original https://aws.amazon.com/blogs/compute/kubernetes-clusters-aws-coreos-tectonic/

There are multiple ways to run a Kubernetes cluster on Amazon Web Services (AWS). The first post in this series explained how to manage a Kubernetes cluster on AWS using kops. This second post explains how to manage a Kubernetes cluster on AWS using CoreOS Tectonic.

Tectonic overview

Tectonic delivers the most current upstream version of Kubernetes with additional features. It is a commercial offering from CoreOS and adds the following features over the upstream:

  • Installer
    Comes with a graphical installer that installs a highly available Kubernetes cluster. Alternatively, the cluster can be installed using AWS CloudFormation templates or Terraform scripts.
  • Operators
    An operator is an application-specific controller that extends the Kubernetes API to create, configure, and manage instances of complex stateful applications on behalf of a Kubernetes user. This release includes an etcd operator for rolling upgrades and a Prometheus operator for monitoring capabilities.
  • Console
    A web console provides a full view of applications running in the cluster. It also allows you to deploy applications to the cluster and start the rolling upgrade of the cluster.
  • Monitoring
    Node CPU and memory metrics are powered by the Prometheus operator. The graphs are available in the console. A large set of preconfigured Prometheus alerts are also available.
  • Security
    Tectonic ensures that the cluster is always up to date with the most recent patches and fixes. Tectonic clusters also enable role-based access control (RBAC). Different roles can be mapped to an LDAP service.
  • Support
    CoreOS provides commercial support for clusters created using Tectonic.

Tectonic can be installed on AWS using a GUI installer or Terraform scripts. The installer prompts you for the information needed to boot the Kubernetes cluster, such as the AWS access and secret keys, the number of master and worker nodes, and the instance size for the master and worker nodes. The cluster can be created after all the options are specified. Alternatively, Terraform assets can be downloaded and the cluster can be created later. This post shows how to use the installer.

CoreOS License and Pull Secret

Even though Tectonic is a commercial offering, a cluster of up to 10 nodes can be created with a free account at Get Tectonic for Kubernetes. After signup, CoreOS License and Pull Secret files are provided on your CoreOS account page. Download these files, as they are needed by the installer to boot the cluster.

IAM user permission

The IAM user to create the Kubernetes cluster must have access to the following services and features:

  • Amazon Route 53
  • Amazon EC2
  • Elastic Load Balancing
  • Amazon S3
  • Amazon VPC
  • Security groups

Use the aws-policy policy to grant the required permissions for the IAM user.
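
As one hedged example, after saving that policy document locally, you could attach it to the installer’s IAM user with the AWS CLI (the user name and file name below are placeholders, not values from this post):

# Hypothetical example: attach the downloaded policy document to the
# IAM user that will run the installer (names are placeholders)
aws iam put-user-policy \
  --user-name tectonic-installer \
  --policy-name tectonic-installer-policy \
  --policy-document file://aws-policy.json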

DNS configuration

A subdomain is required to create the cluster, and it must be registered as a public Route 53 hosted zone. The zone is used to host and expose the console web application. It is also used as the static namespace for the Kubernetes API server. This allows kubectl to talk directly with the master.

The domain may be registered using Route 53. Alternatively, a domain may be registered at a third-party registrar. This post uses a kubernetes-aws.io domain registered at a third-party registrar and a tectonic subdomain within it.

Generate a Route 53 hosted zone using the AWS CLI. Download jq to run this command:

ID=$(uuidgen) && \
aws route53 create-hosted-zone \
--name tectonic.kubernetes-aws.io \
--caller-reference $ID \
| jq .DelegationSet.NameServers

The command shows an output such as the following:

[
  "ns-1924.awsdns-48.co.uk",
  "ns-501.awsdns-62.com",
  "ns-1259.awsdns-29.org",
  "ns-749.awsdns-29.net"
]

Create NS records for the domain with your registrar. Make sure that the NS records can be resolved using a utility like dig or the dig web interface. A sample output would look like the following:

The bottom of the screenshot shows NS records configured for the subdomain.
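
From the command line, dig gives you an equivalent check. For the subdomain used in this post, the following should return the four name servers shown above:

# Verify that the registrar's NS records resolve
dig +short NS tectonic.kubernetes-aws.io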

Download and run the Tectonic installer

Download the Tectonic installer (version 1.7.1) and extract it. The latest installer can always be found at coreos.com/tectonic. Start the installer:

./tectonic/tectonic-installer/$PLATFORM/installer

Replace $PLATFORM with either darwin or linux. The installer opens your default browser and prompts you to select the cloud provider. Choose Amazon Web Services as the platform. Choose Next Step.

Specify the Access Key ID and Secret Access Key for the IAM user that you created earlier. This allows the installer to create resources required for the Kubernetes cluster. This also gives the installer full access to your AWS account. Alternatively, to protect the integrity of your main AWS credentials, use a temporary session token to generate temporary credentials.

You also need to choose a region in which to install the cluster. For the purpose of this post, I chose a region close to where I live, Northern California. Choose Next Step.

Give your cluster a name. This name is part of the static namespace for the master and the address of the console.

To enable in-place updates to the Kubernetes cluster, select the checkbox next to Automated Updates. This also enables updates to the etcd and Prometheus operators. This feature may become a default in future releases.

Choose Upload “tectonic-license.txt” and upload the previously downloaded license file.

Choose Upload “config.json” and upload the previously downloaded pull secret file. Choose Next Step.

Let the installer generate a CA certificate and key. In this case, the browser may not recognize this certificate, which I discuss later in the post. Alternatively, you can provide a CA certificate and a key in PEM format issued by an authorized certificate authority. Choose Next Step.

Use the SSH key for the region specified earlier. You also have the option to generate a new key. This allows you to later connect via SSH to the Amazon EC2 instances provisioned by the cluster. Here is the command that can be used to log in:

ssh -i <key> core@<ec2-instance-ip>

Choose Next Step.

Define the number and instance type of master and worker nodes. In this case, create a six-node cluster. Make sure that the worker nodes have enough processing power and memory to run the containers.

An etcd cluster is used as persistent storage for all Kubernetes API objects. This cluster is required for the Kubernetes cluster to operate. There are three ways to use the etcd cluster as part of the Tectonic installer:

  • (Default) Provision the cluster using EC2 instances. Additional EC2 instances are used in this case.
  • Use an alpha support for cluster provisioning using the etcd operator. The etcd operator is used for automated operations of the etcd master nodes for the cluster itself, as well as for etcd instances created for application usage. The etcd cluster is provisioned within the Tectonic installer.
  • Bring your own pre-provisioned etcd cluster.

Use the first option in this case.

For more information about choosing the appropriate instance type, see the etcd hardware recommendation. Choose Next Step.

Specify the networking options. The installer can create a new public VPC or use a pre-existing public or private VPC. Make sure that the VPC requirements are met for an existing VPC.

Give a DNS name for the cluster. Choose the domain for which the Route 53 hosted zone was configured earlier, such as tectonic.kubernetes-aws.io. Multiple clusters may be created under a single domain. The cluster name and the DNS name would typically match each other.

To select the CIDR range, choose Show Advanced Settings. You can also choose the Availability Zones for the master and worker nodes. By default, the master and worker nodes are spread across multiple Availability Zones in the chosen region. This makes the cluster highly available.

Leave the other values as default. Choose Next Step.

Specify an email address and password to be used as credentials to log in to the console. Choose Next Step.

At any point during the installation, you can choose Save progress. This allows you to save configurations specified in the installer. This configuration file can then be used to restore progress in the installer at a later point.

To start the cluster installation, choose Submit. At another time, you can download the Terraform assets by choosing Manually boot. This allows you to boot the cluster later.
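
If you choose Manually boot, booting later from the downloaded assets uses the same Terraform wrapper as the delete step shown at the end of this post. From the assets directory, it would look something like this (exact flags may vary between Tectonic versions):

# Hypothetical manual boot from the downloaded Terraform assets
TERRAFORM_CONFIG=$(pwd)/.terraformrc terraform apply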

The logs from the Terraform scripts are shown in the installer. When the installation is complete, the console shows that the Terraform scripts were successfully applied, the domain name was resolved successfully, and that the console has started. The domain works successfully if the DNS resolution worked earlier, and it’s the address where the console is accessible.

Choose Download assets to download assets related to your cluster. It contains your generated CA, kubectl configuration file, and the Terraform state. This download is an important step as it allows you to delete the cluster later.

Choose Next Step for the final installation screen. It allows you to access the Tectonic console, gives you instructions about how to configure kubectl to manage this cluster, and finally deploys an application using kubectl.

Choose Go to my Tectonic Console. In our case, it is also accessible at http://cluster.tectonic.kubernetes-aws.io/.

As I mentioned earlier, the browser does not recognize the self-generated CA certificate. Choose Advanced and connect to the console. Enter the login credentials specified earlier in the installer and choose Login.

The Kubernetes upstream and console versions are shown under Software Details. Cluster health shows All systems go, which means that the API server and the backend API can be reached.

To view different Kubernetes resources in the cluster, choose the resource in the left navigation bar. For example, all deployments can be seen by choosing Deployments.

By default, resources in the all namespace are shown. Other namespaces may be chosen from a menu at the top of the screen. Different administration tasks, such as managing namespaces, listing nodes, and configuring RBAC, can be performed as well.

Download and run Kubectl

Kubectl is required to manage the Kubernetes cluster. The latest version of kubectl can be downloaded using the following command:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
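
On Linux, replace darwin with linux in the URL. A common follow-up, per the Kubernetes documentation, is to make the binary executable and move it onto your path:

# Make kubectl executable and place it on the PATH
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl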

It can also be conveniently installed using the Homebrew package manager. To find and access a cluster, Kubectl needs a kubeconfig file. By default, this configuration file is at ~/.kube/config. This file is created when a Kubernetes cluster is created from your machine. However, in this case, download this file from the console.

In the console, choose admin, My Account, Download Configuration and follow the steps to download the kubectl configuration file. Move this file to ~/.kube/config. If kubectl has already been used on your machine before, then this file already exists. Make sure to take a backup of that file first.
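
A backup-then-replace sequence might look like this (the downloaded file name is a placeholder; use whatever name your browser saved it under):

# Preserve any existing kubeconfig, then install the downloaded one
cp ~/.kube/config ~/.kube/config.backup
mv ~/Downloads/kubectl-config ~/.kube/config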

Now you can run the commands to view the list of deployments:

~ $ kubectl get deployments --all-namespaces
NAMESPACE         NAME                                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system       etcd-operator                           1         1         1            1           43m
kube-system       heapster                                1         1         1            1           40m
kube-system       kube-controller-manager                 3         3         3            3           43m
kube-system       kube-dns                                1         1         1            1           43m
kube-system       kube-scheduler                          3         3         3            3           43m
tectonic-system   container-linux-update-operator         1         1         1            1           40m
tectonic-system   default-http-backend                    1         1         1            1           40m
tectonic-system   kube-state-metrics                      1         1         1            1           40m
tectonic-system   kube-version-operator                   1         1         1            1           40m
tectonic-system   prometheus-operator                     1         1         1            1           40m
tectonic-system   tectonic-channel-operator               1         1         1            1           40m
tectonic-system   tectonic-console                        2         2         2            2           40m
tectonic-system   tectonic-identity                       2         2         2            2           40m
tectonic-system   tectonic-ingress-controller             1         1         1            1           40m
tectonic-system   tectonic-monitoring-auth-alertmanager   1         1         1            1           40m
tectonic-system   tectonic-monitoring-auth-prometheus     1         1         1            1           40m
tectonic-system   tectonic-prometheus-operator            1         1         1            1           40m
tectonic-system   tectonic-stats-emitter                  1         1         1            1           40m

This output is similar to the one shown in the console earlier. Now, this kubectl can be used to manage your resources.

Upgrade the Kubernetes cluster

Tectonic allows in-place upgrade of the cluster. This is an experimental feature as of this release. Clusters can be updated either automatically or with manual approval.

To perform the update, choose Administration, Cluster Settings. If an earlier Tectonic installer, version 1.6.2 in this case, is used to install the cluster, then this screen would look like the following:

Choose Check for Updates. If any updates are available, choose Start Upgrade. After the upgrade is completed, the screen is refreshed.

This is an experimental feature in this release and so should only be used on clusters that can be easily replaced. This feature may become fully supported in a future release. For more information about the upgrade process, see Upgrading Tectonic & Kubernetes.

Delete the Kubernetes cluster

Typically, the Kubernetes cluster is a long-running cluster to serve your applications. After its purpose is served, you may delete it. It is important to delete the cluster as this ensures that all resources created by the cluster are appropriately cleaned up.

The easiest way to delete the cluster is by using the assets downloaded in the last step of the installer. Extract the downloaded zip file. This creates a directory named <cluster-name>_TIMESTAMP. In that directory, run the following command to delete the cluster:

TERRAFORM_CONFIG=$(pwd)/.terraformrc terraform destroy --force

This destroys the cluster and all associated resources.

If you forgot to download the assets, there is a copy in the directory tectonic/tectonic-installer/darwin/clusters. In this directory, another directory named <cluster-name>_TIMESTAMP contains your assets.

Conclusion

This post explained how to manage Kubernetes clusters using the CoreOS Tectonic graphical installer.  For more details, see Graphical Installer with AWS. If the installation does not succeed, see the helpful Troubleshooting tips. After the cluster is created, see the Tectonic tutorials to learn how to deploy, scale, version, and delete an application.

Future posts in this series will explain other ways of creating and running a Kubernetes cluster on AWS.

Arun

A printing GIF camera? Is that even a thing?

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/printing-gif-camera/

Abhishek Singh’s printing GIF camera uses two Raspberry Pis, the Model 3 and the Zero W, to take animated images and display them on an ejectable secondary screen.

Instagif – A DIY Camera that prints GIFs instantly

I built a camera that snaps a GIF and ejects a little cartridge so you can hold a moving photo in your hand! I’m calling it the “Instagif NextStep”.

The humble GIF

Created in 1987, Graphics Interchange Format files, better known as GIFs, have somewhat taken over the internet. And whether you pronounce it G-IF or J-IF, you’ve probably used at least one to express an emotion, animate images on your screen, or create small, movie-like memories of events.

In 2004, all patents on the humble GIF expired, which added to the increased usage of the file format. And by the early 2010s, sites such as giphy.com and phone-based GIF keyboards were introduced into our day-to-day lives.

A GIF from a scene in The Great Gatsby - Raspberry Pi GIF Camera

Welcome to the age of the GIF

Polaroid cameras

Polaroid cameras have a somewhat older history. While the first documented instant camera came into existence in 1923, commercial iterations made their way to market in the 1940s, with Polaroid’s Model 95 Land Camera.

In recent years, the instant camera has come back into fashion, with camera stores and high street fashion retailers alike stocking their shelves with pastel-coloured, affordable models. But nothing beats the iconic look of the Polaroid Spirit series, and the rainbow colour stripe that separates it from its competitors.

Polaroid Spirit Camera - Raspberry Pi GIF Camera

Shake it like a Polaroid picture…

And if you’re one of our younger readers and find yourself wondering where else you’ve seen those stripes, you’re probably more familiar with previous versions of the Instagram logo, because, well…

Instagram Logo - Raspberry Pi GIF Camera

I’m sorry for the comment on the previous image. It was just too easy.

Abhishek Singh’s printing GIF camera

Abhishek labels his creation the Instagif NextStep, and cites his inspiration for the project as simply wanting to give it a go, and to see if he could hold a ‘moving photo’.

“What I love about these kinds of projects is that they involve a bunch of different skill sets and disciplines”, he explains at the start of his lengthy, highly GIFed and wonderfully detailed imgur tutorial. “Hardware, software, 3D modeling, 3D printing, circuit design, mechanical/electrical engineering, design, fabrication etc. that need to be integrated for it to work seamlessly. Ironically, this is also what I hate about these kinds of projects.”

Care to see how the whole thing comes together? Well, in the true spirit of the project, Abhishek created this handy step-by-step GIF.

Piecing it together

I thought I’ll start off with the entire assembly and then break down the different elements. As you can see, everything is assembled from the base up in layers helping in easy assembly and quick disassembly for troubleshooting

The build comes in two parts – the main camera housing a Raspberry Pi 3 and Camera Module V2, and the ejectable cartridge fitted with a Raspberry Pi Zero W and an Adafruit PiTFT screen.

When the capture button is pressed, the camera takes 3 seconds’ worth of images and converts them into .gif format via a Python script. Once compressed and complete, the Pi 3 sends the file to the Zero W via a network connection. When it is satisfied that the Zero W has the image, the Pi 3 automatically ejects the ‘printed GIF’ cartridge, and the image is displayed.
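
Abhishek’s actual Python script isn’t reproduced in the post, but a rough command-line sketch of the capture-and-convert step might look like the following, using raspistill and ImageMagick (the tools, file names, and address here are illustrative assumptions, not details from the project):

# Capture 3 seconds of frames, one every 100 ms
raspistill -t 3000 -tl 100 -o frame%04d.jpg
# Stitch the frames into a looping GIF (delay is in hundredths of a second)
convert -delay 10 -loop 0 frame*.jpg capture.gif
# Hand the file to the Zero W over the network
scp capture.gif pi@<zero-w-address>:/home/pi/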

A demonstration of how the GIF is displayed on the Raspberry Pi GIF Camera

For a full breakdown of code, 3D-printable files, and images, check out the full imgur post. You can see more of Abhishek’s work at his website here.

Create GIFs with a Raspberry Pi

Want to create GIFs with your Raspberry Pi? Of course you do. Who wouldn’t? So check out our free time-lapse animations resource. As with all our learning resources, the project is free for you to use at home and in your clubs or classrooms. And once you’ve mastered the art of Pi-based GIF creation, why not incorporate it into another project? Say, a motion-detecting security camera or an on-the-go tweeting GIF camera – the possibilities are endless.

And make sure you check out Abhishek’s other Raspberry Pi GIF project, Peeqo, which we covered previously on the blog. So cute. SO CUTE.

The post A printing GIF camera? Is that even a thing? appeared first on Raspberry Pi.

Hello World Issue 3: Approaching Assessment

Post Syndicated from Carrie Anne Philbin original https://www.raspberrypi.org/blog/hello-world-3/

It’s the beginning of a new school year, and the latest issue of Hello World is here! Hello World is our magazine about computing and digital making for educators, and it’s a collaboration between The Raspberry Pi Foundation and Computing at School, part of the British Computer Society.

The front cover of Hello World Issue 3

In issue 3, our international panel of experts takes an in-depth look at assessment in computer science.

Approaching assessment, and much more

Our cover feature explores innovative, practical, and effective approaches to testing and learning. The issue is packed with other great resources, guides, features and lesson plans to support educators.

Highlights include:

  • Tutorials and lesson plans on Scratch Pong, games design, and the database-building Python library, SQLite3
  • Supporting learning with online video
  • The potential of open-source resources in education
  • A bluffer’s guide to Non-Examination Assessments (NEA) for GCSE Computer Science
  • A look at play and creativity in programming

Get your copy of Hello World 3

Hello World is available as a free Creative Commons download for anyone around the world who is interested in Computer Science and digital making education. Grab the latest issue straight from the Hello World website.

Thanks to the very generous support of our sponsors BT, we are able to offer free printed versions of the magazine to serving educators in the UK. It’s for teachers, Code Club volunteers, teaching assistants, teacher trainers, and others who help children and young people learn about computing and digital making. Remember to subscribe to receive your free copy, posted directly to your home.

Free book!

As a special bonus for our print subscribers, this issue comes bundled with a copy of Ian Livingstone and Shahneila Saeed’s new book, Hacking the Curriculum: Creative Computing and the Power of Play.

Front cover of Hacking the Curriculum by Ian Livingstone and Shahneila Saeed - Hello World 3

This gorgeous-looking image comes courtesy of Jonathan Green

The book explains the critical importance of coding and computing in modern schools, and offers teachers and school leaders practical guidance on how to improve their computing provision. Thanks to Ian Livingstone, Shahneila Saeed, and John Catt Educational Ltd. for helping to make this possible. The book will be available with issue 3 to new subscribers while stocks last.

10,000 subscribers

We are very excited to announce that Hello World now has more than 10,000 subscribers!

Banner to celebrate 10000 subscribers

We’re celebrating this milestone, but we’d love to reach even more computing and digital making educators. Help us to spread the word to teachers, volunteers and home educators in the UK.

Get involved

Share your teaching experiences in computing and related subjects with Hello World, and help us to help other educators! When you air your questions and challenges on our letters page, other educators are ready to help you. Drop us an email to submit letters, articles, lesson plans, and questions for our FAQ pages – wherever you are in the world, get in touch with us by emailing contact@helloworld.cc.

The post Hello World Issue 3: Approaching Assessment appeared first on Raspberry Pi.

AWS Hot Startups – August 2017

Post Syndicated from Tina Barr original https://aws.amazon.com/blogs/aws/aws-hot-startups-august-2017/

There’s no doubt about it – Artificial Intelligence is changing the world and how it operates. Across industries, organizations from startups to Fortune 500s are embracing AI to develop new products, services, and opportunities that are more efficient and accessible for their consumers. From driverless cars to better preventative healthcare to smart home devices, AI is driving innovation at a fast rate and will continue to play a more important role in our everyday lives.

This month we’d like to highlight startups using AI solutions to help companies grow. We are pleased to feature:

  • SignalBox – a simple and accessible deep learning platform to help businesses get started with AI.
  • Valossa – an AI video recognition platform for the media and entertainment industry.
  • Kaliber – innovative applications for businesses using facial recognition, deep learning, and big data.

SignalBox (UK)

In 2016, SignalBox founder Alain Richardt was hearing the same comments being made by developers, data scientists, and business leaders. They wanted to get into deep learning but didn’t know where to start. Alain saw an opportunity to commodify and apply deep learning by providing a platform that does the heavy lifting with an easy-to-use web interface, blueprints for common tasks, and just a single click to productize the models. With SignalBox, companies can start building deep learning models with no coding at all – they just select a data set, choose a network architecture, and go. SignalBox also offers step-by-step tutorials, tips and tricks from industry experts, and consulting services for customers that want an end-to-end AI solution.

SignalBox offers a variety of solutions that are being used across many industries for energy modeling, fraud detection, customer segmentation, insurance risk modeling, inventory prediction, real estate prediction, and more. Existing data science teams are using SignalBox to accelerate their innovation cycle. One innovative UK startup, Energi Mine, recently worked with SignalBox to develop deep networks that predict anomalous energy consumption patterns and do time series predictions on energy usage for businesses with hundreds of sites.

SignalBox uses a variety of AWS services including Amazon EC2, Amazon VPC, Amazon Elastic Block Store, and Amazon S3. The ability to rapidly provision EC2 GPU instances has been a critical factor in their success, both in keeping operational expenses low and in speed to market. Amazon API Gateway has allowed for operational automation, giving SignalBox the ability to control its infrastructure.

To learn more about SignalBox, visit here.

Valossa (Finland)

As students at the University of Oulu in Finland, the Valossa founders spent years doing research in the computer science and AI labs. During that time, the team witnessed how the world was moving beyond text, with video playing a greater role in day-to-day communication. This spawned an idea to use technology to automatically understand what an audience is viewing and share that information with a global network of content producers. Since 2015, Valossa has been building next generation AI applications to benefit the media and entertainment industry and is moving beyond the capabilities of traditional visual recognition systems.

Valossa’s AI is capable of analyzing any video stream. The AI studies a vast array of data within videos and converts that information into descriptive tags, categories, and overviews automatically. Basically, it sees, hears, and understands videos like a human does. The Valossa AI can detect people, visual and auditory concepts, and key speech elements, and it labels explicit content to make moderating and filtering content simpler. Valossa’s solutions are designed to provide value for the content production workflow, from media asset management to end-user applications for content discovery. AI-annotated content allows online viewers to jump directly to their favorite scenes or search specific topics and actors within a video.

Valossa leverages AWS to deliver the industry’s first complete AI video recognition platform. Using Amazon EC2 GPU instances, Valossa can easily scale their computation capacity based on customer activity. High-volume video processing with GPU instances provides the necessary speed for time-sensitive workflows. The geo-located Availability Zones in EC2 allow Valossa to bring resources close to their customers to minimize network delays. Valossa also uses Amazon S3 for video ingestion and to provide end-user video analytics, which makes managing and accessing media data easy and highly scalable.

To see how Valossa works, check out www.WhatIsMyMovie.com or enable the Alexa Skill, Valossa Movie Finder. To try the Valossa AI, sign up for free at www.valossa.com.

Kaliber (San Francisco, CA)

Serial entrepreneurs Ray Rahman and Risto Haukioja founded Kaliber in 2016. The pair had previously worked in startups building smart cities and online privacy tools, and teamed up to bring AI to the workplace and change the hospitality industry. Our world is designed to appeal to our senses – stores and warehouses have clearly marked aisles, products are colorfully packaged, and we use these designs to differentiate one thing from another. We tell each other apart by our faces, and previously that was something only humans could measure or act upon. Kaliber is using facial recognition, deep learning, and big data to create solutions for business use. Markets and companies that aren’t typically associated with cutting-edge technology will be able to use their existing camera infrastructure in a whole new way, making them more efficient and better able to serve their customers.

Computer video processing is rapidly expanding, and Kaliber believes that video recognition will extend to far more than security cameras and robots. Using the clients’ network of in-house cameras, Kaliber’s platform extracts key data points and maps them to actionable insights using their machine learning (ML) algorithm. Dashboards connect users to the client’s BI tools via the Kaliber enterprise APIs, and managers can view these analytics to improve their real-world processes, taking immediate corrective action with real-time alerts. Kaliber’s Real Metrics are aimed at combining the power of image recognition with ML to ultimately provide a more meaningful experience for all.

Kaliber uses many AWS services, including Amazon Rekognition, Amazon Kinesis, AWS Lambda, Amazon EC2 GPU instances, and Amazon S3. These services have been instrumental in helping Kaliber meet the needs of enterprise customers in record time.

Learn more about Kaliber here.

Thanks for reading and we’ll see you next month!

-Tina

MagPi 61: ten amazing Raspberry Pi Zero W projects

Post Syndicated from Rob Zwetsloot original https://www.raspberrypi.org/blog/magpi-61-10-pi-zero-projects/

Hey folks! Rob here, with another roundup of the latest issue of The MagPi magazine. MagPi 61 focuses on some incredible ‘must make’ Raspberry Pi Zero W projects, 3D printers and – oh, did someone mention the Google AIY Voice Projects Kit?

Cover of The MagPi magazine with a picture of the Pi Zero W - MagPi 61

Make amazing Raspberry Pi Zero W projects with our latest issue

Inside MagPi 61

In issue 61, we’re focusing on the small but mighty wonder that is the Raspberry Pi Zero W, and on some of the very best projects we’ve found for you to build with it. From arcade machines to robots, dash cams, and more – it’s time to make the most of our $10 computer.

And if that’s not enough, we’ve also delved deeper into the maker relationship between Raspberry Pi and Arduino, with some great creations such as piano stairs, a jukebox, and a smart home system. There’s also a selection of excellent tutorials on building 3D printers, controlling Hue lights, and making cool musical instruments.

A spread of The MagPi magazine showing a DJ deck tutorial - MagPi 61

Spin it, DJ!

Get the MagPi 61

The new issue is out right now, and you can pick up a copy at WH Smith, Tesco, Sainsbury’s, and Asda. If you live in the US, check out your local Barnes & Noble or Micro Center over the next few days. You can also get the new issue online from our store, or digitally via our Android or iOS app. And don’t forget, there’s always the free PDF as well.

Subscribe for free goodies

Some of you have asked me about the goodies that we give out to subscribers. This is how it works: if you take out a twelve-month print subscription to The MagPi, you’ll get a Pi Zero W, Pi Zero case, and adapter cables, absolutely free! This offer does not currently have an end date.

Pre-order AIY Kits

We have some AIY Voice Kit news! Micro Center has opened pre-orders for the kits in America, and Pimoroni has set up a notification service for those closer to the UK.

We hope you all enjoy the issue. Oh, and if you’re at World Maker Faire, New York, come and see us at the Raspberry Pi stall! Otherwise – see you next month.

The post MagPi 61: ten amazing Raspberry Pi Zero W projects appeared first on Raspberry Pi.

Netflix develops Morse code search option

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/netflix-morse-code/

What happens when Netflix gives its staff two days to hack the platform and create innovative (and often unnecessary) variations on the streaming service?

This. This is what happens.

Hack Day Summer 2017 Teleflix

Uploaded by NetflixOpenSource on 2017-08-28.

Netflix Hack Day

Twice a year, the wonderful team at Netflix is given two days to go nuts and create fun, random builds, taking inspiration from Netflix and its content. So far they’ve debuted a downgraded version of the streaming platform played on an original Nintendo Entertainment System (NES), turned hit show Narcos into a video game, and utilised VR technology in many more builds that, while they’ll never be made public, have no doubt led to some lightbulb moments for the creative teams involved.

DarNES – Netflix Hack Day – Winter 2015

In a world… where devices proliferate… darNES digs back in time to provide Netflix access to the original Nintendo Entertainment System.

Kevin Spacey? More like ‘Kevin Spacebar’, am I right? Aha…ha…haaaa…I’ll get my coat.

Teleflix

The Teleflix build from this summer’s Hack Day is obviously the best one yet, as it uses a Raspberry Pi. By writing code that decodes the dots and dashes from an original 1920s telegraph (provided by AT&T, and lovingly restored by the team using ketchup!) into keystrokes, they’re able to search for their favourite shows via Morse code.

Netflix Morse Code

Morse code, for the unaware, is a method for transmitting letters and numbers via a standardised series of beeps, clicks, or flashes. Stuck in a sticky situation? Three dots followed by three dashes and a further three dots gives you ‘SOS’. Sorted – so long as there’s someone there to see or hear it who also understands Morse code.

Morse Code

Morse code was a method of transmitting textual information as a series of on-off tones that could be directly understood by a skilled listener. Mooo-Theme: http://soundcloud.com/mooojvm/mooo-theme

So if you’d like to watch, for example, The Unbreakable Kimmy Schmidt, you simply send: - .... . / ..- -. -... .-. . .- -.- .- -... .-.. . / -.- .. -- -- -.-- / ... -.-. .... -- .. -.. - and you’re set. Easy!
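
If you’d rather not translate by hand, a short encoder script does the job. Below is a minimal sketch in Bash (requires Bash 4 for associative arrays; letters only – extend the table for digits and punctuation as needed):

#!/usr/bin/env bash
# Minimal Morse encoder: letters only, words separated by "/"
declare -A MORSE=(
  [a]='.-'   [b]='-...' [c]='-.-.' [d]='-..'  [e]='.'    [f]='..-.'
  [g]='--.'  [h]='....' [i]='..'   [j]='.---' [k]='-.-'  [l]='.-..'
  [m]='--'   [n]='-.'   [o]='---'  [p]='.--.' [q]='--.-' [r]='.-.'
  [s]='...'  [t]='-'    [u]='..-'  [v]='...-' [w]='.--'  [x]='-..-'
  [y]='-.--' [z]='--..'
)
# Lowercase the input so the table lookup works for any capitalisation
text=$(printf '%s' "${1:-sos}" | tr '[:upper:]' '[:lower:]')
for (( i = 0; i < ${#text}; i++ )); do
  ch="${text:i:1}"
  [[ "$ch" == ' ' ]] && printf '/ ' || printf '%s ' "${MORSE[$ch]}"
done
printf '\n'

Running it with SOS prints ... --- ... – handy for checking your answers to the quiz below.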

To reach Netflix, the team used a Playstation 4. However, if you want to skip a tech step, you could stream Netflix directly to your Raspberry Pi by following this relatively new tutorial. Nobody at Pi Towers has tried it out yet, but if you have we’d be interested to see how you got on in the comments below.

And if you’d like to play around a little more with the Raspberry Pi and Morse code, you can pick up your own Morse code key, or build one using conductive components such as buttons or bananas, and try it out for yourself.

Alex’s Netflix-themed Morse code quiz

Just for fun, here are the titles of some of my favourite shows to watch on Netflix, translated into Morse code. Using the key below, why not take a break and challenge your mind to translate them back into English? Reward yourself +10 imaginary House Points for each correct answer.

Netflix Morse Code

  1. -.. --- -.-. - --- .-. / .-- .... ---
  2. .... .- -. -. .. -... .- .-..
  3. - .... . / --- .-
  4. ... . -. ... . ---..
  5. .--- . ... ... .. -.-. .- / .--- --- -. . ...
  6. --. .. .-.. -- --- .-. . / --. .. .-. .-.. ...
  7. --. .-.. --- .--

The post Netflix develops Morse code search option appeared first on Raspberry Pi.

Analyzing Salesforce Data with Amazon QuickSight

Post Syndicated from David McAmis original https://aws.amazon.com/blogs/big-data/analyzing-salesforce-data-with-amazon-quicksight/

Salesforce Sales Cloud is a powerful platform for managing customer data. One of the key functions that the platform provides is the ability to track customer opportunities. Opportunities in Salesforce are used to track revenue, sales pipelines, and other activities from the very first contact with a potential customer to a closed sale.

Amazon QuickSight is a rich data visualization tool that provides the ability to connect to Salesforce data, use it as a data source for creating analyses, stories, and dashboards, and easily share them with others in the organization. This post focuses on how to connect to Salesforce as a data source and create a useful opportunity dashboard, incorporating Amazon QuickSight features like relative date filters, Key Performance Indicator (KPI) charts, and more.

Walkthrough

In this post, you walk through the following tasks:

  • Creating a new data set based on Salesforce data
  • Creating your analysis and adding visuals
  • Creating an Amazon QuickSight dashboard
  • Working with filters

Note: For this walkthrough, I am using my own Salesforce.com Developer Edition account. You can sign up for your own free developer account at https://developer.salesforce.com/.

Creating a new Amazon QuickSight data set based on Salesforce data

To start, you need to create a new Amazon QuickSight data set. Sign in to Amazon QuickSight at https://quicksight.aws using the link from the home page. Enter your Amazon QuickSight account name and choose Continue. Next, enter your Email address or user name and password, then choose Sign In.

On the Amazon QuickSight start page, choose Manage Data, which takes you to a list of your data sets. Choose New Data Set, and choose Salesforce as your data source. Enter a data source name—in this example, I called mine “SFDC Opportunity.” Choose Create Data Source to open the Salesforce authentication page, where you can enter your Salesforce user name and password.

After you are authenticated to Salesforce, you are presented with a drop-down list that lets you select data from Reports or Objects. For this tutorial, choose Object. Scroll down in the list to choose the Opportunity object, and then choose Select.

To finish creating your data set, choose Visualize to go to where you can create a new Amazon QuickSight analysis from this data.

Creating your analysis and adding visuals

Now that you have acquired your data, it’s time to start working with your analysis. In Amazon QuickSight, an analysis is a container for a set of related visual stories. When you chose Visualize, a new analysis was created for you. This is where you start to create the visuals (charts, graphs, etc.) that will be the building blocks for your dashboard.

In Amazon QuickSight, Salesforce objects look like database tables. In the analysis that you just created, you can see the columns in the Fields list for the Opportunity object.

The Opportunity object in Salesforce has a number of default fields. Salesforce administrators can extend this object by adding other custom fields as required—these custom fields are marked with a “__c” suffix at the end.
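
As an aside, you can inspect the same standard and custom fields outside of Amazon QuickSight by querying the object directly. A hedged sketch against the Salesforce REST API (the instance URL, API version, and access token are placeholders):

# List open opportunities with the fields used throughout this post
curl "https://yourInstance.salesforce.com/services/data/v40.0/query/?q=SELECT+Name,StageName,Amount,ExpectedRevenue,CloseDate,LeadSource+FROM+Opportunity+WHERE+IsClosed=false" \
  -H "Authorization: Bearer $SALESFORCE_ACCESS_TOKEN"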

In the Fields List, you can see that Amazon QuickSight has divided the fields into Dimensions and Measures.  You use these to create your visualizations and dashboard. For this particular dashboard, you create five different visuals to display the data in a few different ways.

Opportunity by Stage

For the first visualization, you create a horizontal bar chart showing “Opportunity by Stage”. In the Fields List, choose the StageName dimension and the ExpectedRevenue measure. By default, this should create a horizontal bar chart for you, as shown in the following image.

Notice that this chart includes the Closed Won category, which we aren’t interested in showing. Choose the bar for Closed Won, and in the pop-up menu, choose Exclude Closed Won. This filters the chart to show only opportunities that are in progress.

It’s important to note that for this dashboard, we only want to show the opportunities that are not Closed Won. So in the menu bar on the left side, choose Filter.

By default, the filter that you just created was only applied to a single visualization. To change this, choose the filter, and then choose All Visuals from the drop-down list. This applies the filter to all visuals in the analysis.

To finish, select the chart title and rename the chart to Opportunity by Stage.

Opportunity by Month

Next, you need to create a new visual to show “Opportunity by Month.” You use a vertical bar chart to display the data. On the Amazon QuickSight toolbar, choose Add, and then choose Add visual. For this visual, choose CloseDate from the dimensions and ExpectedRevenue from the measures.

Using the Visual Types menu, change the chart type to a Vertical Bar Chart. By default, the chart displays the revenue by year, but we want to break it down a bit further. Choose Field Wells, and using the CloseDate drop-down menu, change the Aggregate to Month.

With the change to a monthly aggregate, your chart should look something like the following:

Select the chart title and rename the chart to Opportunity by Month.

Expected Revenue

When working with Salesforce opportunities, there are two measures that are important to most sales managers—the first is the total amount associated with the opportunity, and the second is what the actual expected revenue will be. For the next visual, you use the KPI chart to display these measures.

Choose Add on the Amazon QuickSight toolbar, and then choose Add visual. From the measures, choose ExpectedRevenue, and then Amount. To change your visualization, go to the Visual Types menu and choose the Key Performance Indicator (KPI). Your visualization should change and be similar to the following:

Select the chart title and rename the chart to Expected Revenue.

Opportunity by Lead Source

Next, you need to look at where the opportunity actually came from. This helps your dashboard users understand where leads are being generated and their value to the business. For this visual, you use a Horizontal Bar Chart.

On the Amazon QuickSight toolbar, choose Add, and then choose Add visual. From the measures, choose Amount, and for the dimensions, choose LeadSource. To change your visualization, go to the Visual Types menu and choose the Horizontal Bar Chart. Your visualization should change and be similar to the following:

Note: If you can’t read the chart labels for the bars, grab the axis line and drag to resize.

Select the chart title and rename the chart to Opportunity by Lead Source.

Expected Revenue vs. Opportunity Amount

For the last visual, you look at the individual opportunities and how they contribute to the total pipeline. A tree map is a specialized chart type that lets your dashboard users see how each opportunity amount contributes to the whole. Additionally, you can highlight any difference between the Expected Revenue and the Amount by sizing the marks by the Amount and coloring them by the ExpectedRevenue.

On the Amazon QuickSight toolbar, choose Add, and then choose Add visual. From the measures, choose ExpectedRevenue and Amount. From the dimensions, choose Name. To change your visualization, go to the Visual Types menu and choose the Tree Map. Your visualization should change and be similar to the following:

Select the chart title and rename the chart to Expected Revenue vs Opportunity Amount.

Creating an Amazon QuickSight dashboard

Now that your visuals are created, it’s time to do the fun part—actually putting your Amazon QuickSight dashboard together. To create a dashboard, resize and position your visuals on the page, using the following layout:

To resize a visual, grab the handle in the lower-right corner and drag it to the height and width that you want.

To move your visual, use the grab bar at the top of the visual, as shown here:

When you are done resizing your visuals, your canvas should look something like this:

To create a dashboard, choose Share in the Amazon QuickSight toolbar. Then choose Create Dashboard. For this dashboard, give it a name of SFDC Opportunity Dashboard, and choose Create Dashboard. You are prompted to enter the email address or user name of the users you want to share this dashboard with.

Because we are just concentrating on the design at the moment, you can choose Cancel and share your dashboard later using the Share button on the dashboard toolbar.

Working with filters

There is one more feature that you can use when viewing your dashboard to make it even more useful. Earlier, when you were working with the Analysis, you added a filter to remove any opportunities that were tagged as Closed Won. Now, as you are viewing the dashboard, you add a filter that you can use to filter on a relative date.

This feature in Amazon QuickSight allows you to choose a time period (years, quarters, months, weeks, etc.) and then select from a list of relative time periods. For example, if you choose Year, you could set the filter options to Previous Year, This Year, Year to Date, or Last N Years.

This is especially handy for a Salesforce Opportunity dashboard, as you might want to filter the data using the Close Date field to see when the opportunity is actually set to close.
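To pin down exactly what that filter selects, here’s a small sketch in plain Python (not part of QuickSight, just an illustration) that computes the half-open date range behind a This Quarter option; an opportunity passes the filter when its CloseDate falls inside the range:

from datetime import date

def this_quarter(today=None):
    # Return [start, end) for the quarter containing 'today'.
    today = today or date.today()
    start_month = 3 * ((today.month - 1) // 3) + 1  # 1, 4, 7, or 10
    start = date(today.year, start_month, 1)
    if start_month == 10:
        end = date(today.year + 1, 1, 1)  # Q4 ends at the new year
    else:
        end = date(today.year, start_month + 3, 1)
    return start, end

print(this_quarter(date(2017, 8, 15)))
# (datetime.date(2017, 7, 1), datetime.date(2017, 10, 1))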

To create a relative date filter, choose Filter on the toolbar. Choose the filter icon, and then choose CloseDate, as shown in the following image:

At the top of the Edit Filter pane, change the drop-down list to apply the filter to All Visuals. The default filter type is Time Range, so use the drop-down list to change the filter type to Relative Dates. For the time period, choose Quarters. To view all the current opportunities in your dashboard, choose the option for This Quarter, and choose Apply.

With the date filter in place, you have the final component for your dashboard, which should look something like the following example:

It’s important to note that at this point, you have added the filter when viewing the dashboard. If you think this is something that other users might want to do, you can go back to your Amazon QuickSight Analysis and add the filter there—that way it will be available for all dashboard users.

Summary

In this post, you learned how to connect to Salesforce data and create a basic dashboard. You can apply the same techniques to create analyses and dashboards from all different types of Salesforce data and objects. Whether you want to analyze your Salesforce account demographics or where your leads are coming from, or evaluate any other data stored in Salesforce, Amazon QuickSight helps you quickly connect to and visualize your data with only a few clicks.

Additional Reading

Learn how to visualize Amazon S3 analytics data with Amazon QuickSight!


About the Author

David McAmis is a Big Data & Analytics Consultant with Amazon Web Services. He works with customers to develop scalable platforms to gather, process and analyze data on AWS.

Amazon AppStream 2.0 Launch Recap – Domain Join, Simple Network Setup, and Lots More

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-appstream-2-0-launch-recap-domain-join-simple-network-setup-and-lots-more/

We (the AWS Blog Team) work to maintain a delicate balance between coverage and volume! On the one hand, we want to make sure that you are aware of as many features as possible. On the other, we don’t want to bury you in blog posts. As a happy medium between these two extremes we sometimes let interesting new features pile up for a couple of weeks and then pull them together in the form of a recap post such as this one.

Today I would like to tell you about the latest and greatest additions to Amazon AppStream 2.0, our application streaming service (read Amazon AppStream 2.0 – Stream Desktop Apps from AWS to learn more). We launched GPU-powered streaming instances just a month ago and have been adding features rapidly; here are some recent launches that did not get covered in individual posts at launch time:

  • Microsoft Active Directory Domains – Connect AppStream 2.0 streaming instances to your Microsoft Active Directory domain.
  • User Management & Web Portal – Create and manage users from within the AppStream 2.0 management console.
  • Persistent Storage for User Files – Use persistent, S3-backed storage for user home folders.
  • Simple Network Setup – Enable Internet access for image builder and instance fleets more easily.
  • Custom VPC Security Groups – Use VPC security groups to control network traffic.
  • Audio-In – Use microphones with your streaming applications.

These features were prioritized based on early feedback from AWS customers who are using or are considering the use of AppStream 2.0 in their enterprises. Let’s take a quick look at each one.

Domain Join
This much-requested feature allows you to connect your AppStream 2.0 streaming instances to your Microsoft Active Directory (AD) domain. After you do this you can apply existing policies to your streaming instances, and provide your users with single sign-on access to intranet resources such as web sites, printers, and file shares. Your users are authenticated using the SAML 2.0 provider of your choice, and can access applications that require a connection to your AD domain.

To get started, visit the AppStream 2.0 Console, create and store a Directory Configuration:

Newly created image builders and newly launched fleets can then use the stored Directory Configuration to join the AD domain in an Organizational Unit (OU) that you provide:

To learn more, read Using Active Directory Domains with AppStream 2.0 and follow the Setting Up the Active Directory tutorial. You can also learn more in the What’s New.

User Management & Web Portal
This feature makes it easier for you to give new users access to the applications that you are streaming with AppStream 2.0 if you are not using the Domain Join feature that I described earlier.

You can create and manage users, give them access to applications through a web portal, and send them welcome emails, all with a couple of clicks:

AppStream 2.0 sends each new user a welcome email that directs them to a web portal where they will be prompted to create a permanent password. Once they are logged in they are able to access the applications that have been assigned to them.

To learn more, read Using the AppStream 2.0 User Pool and the What’s New.

Persistent Storage
This feature allows users of streaming applications to store files for use in later AppStream 2.0 sessions. Each user is given a home folder which is stored in Amazon Simple Storage Service (S3) between sessions. The folder is made available to the streaming instance at the start of the session and changed files are periodically synced back to S3. To enable this feature, simply check Enable Home Folders when you create your next fleet:

All folders (and the files within) are stored in an S3 bucket that is automatically created within your account when the feature is enabled. There is no limit on total file storage but we recommend that individual files be limited to 5 gigabytes.

Regular S3 pricing applies; to learn more about this feature read about Persistent Storage with AppStream 2.0 Home Folders and check out the What’s New.

Simple Network Setup
Setting up Internet access for your image builder and your streaming instances was once a multi-step process. You had to create a Network Address Translation (NAT) gateway in a public subnet of one of your VPCs and configure traffic routing rules.

Now, you can do this by marking the image builder or the fleet for Internet access, selecting a VPC that has at least one public subnet, and choosing the public subnet(s), all from the AppStream 2.0 Console:

To learn more, read Network Settings for Fleet and Image Builder Instances and Enabling Internet Access Using a Public Subnet and check out the What’s New.
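If you manage your environments with scripts rather than the console, the same settings map to fleet parameters in the AWS SDK. Here’s a minimal sketch using boto3; the fleet, image, subnet, and directory names are all illustrative placeholders, and the console flow above remains the path this post describes:

import boto3

appstream = boto3.client('appstream')

# Create a fleet with internet access through a public subnet, joined to an
# Active Directory domain. All names and IDs below are placeholders.
appstream.create_fleet(
    Name='example-fleet',
    ImageName='example-image',
    InstanceType='stream.standard.medium',
    ComputeCapacity={'DesiredInstances': 1},
    VpcConfig={'SubnetIds': ['subnet-0example']},
    EnableDefaultInternetAccess=True,   # Simple Network Setup
    DomainJoinInfo={                    # Domain Join
        'DirectoryName': 'corp.example.com',
        'OrganizationalUnitDistinguishedName': 'OU=AppStream,DC=corp,DC=example,DC=com',
    },
)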

Custom VPC Security Groups
You can create VPC security groups and associate them with your image builders and your fleets. This gives you fine-grained control over inbound and outbound traffic to databases, license servers, file shares, and application servers. Read the What’s New to learn more.

Audio-In
You can use analog and USB microphones, mixing consoles, and other audio input devices with your streaming applications. Simply click on Enable Microphone in the AppStream 2.0 toolbar to get started. Read the What’s New to learn more.

Available Now
All of these features are available now and you can start using them today in all AWS Regions where Amazon AppStream 2.0 is available.

Jeff;

PS – If you are new to AppStream 2.0, try out some pre-installed applications. No setup needed and you’ll get to experience the power of streaming applications first-hand.

Mod your Nerf gun with a Pi

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/mod-nerf-gun-pi/

Michael Darby, who blogs at 314reactor, has created a new Raspberry Pi build, and it’s pretty darn cool. Though it’s not the first Raspberry Pi-modded Nerf gun we’ve seen, it’s definitely one of the most complex!

Nerf Gun Ammo Counter / Range Finder – Raspberry Pi

An ammo counter and range finder made from a Raspberry Pi for a Nerf Gun.

Nerf guns

Nerf guns are toy dart guns that have been on the market since the early 1990s. They are popular with kids and adults who enjoy playing paintball, laser tag, and first-person shooter video games. Michael loves Nerf guns, and he wanted to give his toy a sci-fi overhaul, making it look and function more like a gun that an avatar might use in Half-Life, Quake, or Doom.

Modding a Nerf gun

A busy and creative member of the Raspberry Pi community, Michael has previously delighted us with his Windows 98 wristwatch. Now, he has upgraded his Nerf gun with a rangefinder and an ammo counter by adding a Pi, a Pimoroni Rainbow HAT, and some sensors.

Setting up a rangefinder was straightforward. Michael fixed an ultrasonic distance sensor pointing in the direction of the gun’s barrel. Live information about how far away he is from his target is shown on the Rainbow HAT’s alphanumeric display.
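Michael links his own code from his blog (see below), but the core of the rangefinder amounts to very little. Here’s a rough sketch using gpiozero’s DistanceSensor and Pimoroni’s rainbowhat library; the GPIO pins are illustrative, and this is our reconstruction rather than Michael’s code:

from gpiozero import DistanceSensor
from time import sleep
import rainbowhat as rh

sensor = DistanceSensor(echo=24, trigger=23)  # pins illustrative

while True:
    cm = sensor.distance * 100  # .distance reports metres
    rh.display.print_str('{:4.0f}'.format(min(cm, 9999)))
    rh.display.show()
    sleep(0.2)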

View of Michael Darby's nerf gun range finder

To create an ammo counter, Michael had to follow a more circuitous route. Since he couldn’t think of a way to read out how many darts are in the Nerf gun’s magazine, he ended up counting how many darts have been shot instead. This data is collected via a proximity sensor, a device that can measure shorter distances than an ultrasonic sensor. Michael aimed the sensor towards the end of the barrel, attaching it with Blu-Tack.

View of Michael Darby's nerf gun proximity sensor

The number of shots left in the magazine is indicated by the seven LEDs above the Rainbow HAT’s alphanumeric display. The countdown works for more than seven darts, thanks to colour coding: the LEDs count down first in red, then in orange, and finally in green.

In a Python script running on the Pi, Michael has included a default number of shots per magazine. When he changes a magazine, he uses one of the HAT’s buttons as a ‘Reload’ button, resetting the counter. He has also set up the HAT so that the number of available shots can be entered manually instead.
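Again, Michael’s actual script is on his blog, but the counting logic is simple enough to sketch. The version below assumes Pimoroni’s rainbowhat library, and dart_fired() stands in for the proximity-sensor reading that actually detects a shot:

import rainbowhat as rh

MAG_SIZE = 21          # darts per magazine (illustrative default)
shots_left = MAG_SIZE

# Colour bands, indexed by (shots_left - 1) // 7: green for 1-7 darts,
# orange for 8-14, red for 15-21, so a full magazine counts down in red first.
BANDS = [(0, 255, 0), (255, 128, 0), (255, 0, 0)]

def show_count():
    rh.rainbow.clear()
    if shots_left > 0:
        r, g, b = BANDS[(shots_left - 1) // 7]
        for i in range((shots_left - 1) % 7 + 1):
            rh.rainbow.set_pixel(i, r, g, b, brightness=0.1)
    rh.rainbow.show()
    rh.display.print_str('{:4d}'.format(shots_left))
    rh.display.show()

def dart_fired():
    # Hook this up to the proximity-sensor reading loop.
    global shots_left
    shots_left = max(0, shots_left - 1)
    show_count()

@rh.touch.A.press()
def reload(channel):
    # Button A doubles as the 'Reload' button.
    global shots_left
    shots_left = MAG_SIZE
    show_count()

show_count()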

Nerf gun modding tutorial

On Michael’s blog you will find a thorough step-by-step guide to how he created this build. He has also included his code, and links to all the components, software installation guides, and test scripts he has used. So head on over there if you’re keen to mod your own Nerf gun like this, and take a look at some of his other projects while you’re there!

Michael welcomes suggestions for how to improve upon his mods, especially for how to count shots in a magazine automatically. Do you have an idea? Let us – and him – know in the comments!

Toy mods

Over the years, we’ve covered quite a few fun toy upgrades, and some that may have to be approached with caution. The Pi-powered busy board for babies, the ‘weaponized’ teddy bear, and the inevitable smart Fisher Price phone are just a few from our archives.

What’s your favourite childhood toy, and how could it be improved by the addition of a Pi? Share your ideas with us in the comments below.

The post Mod your Nerf gun with a Pi appeared first on Raspberry Pi.

Michael Reeves and the ridiculous Subscriber Robot

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/michael-reeves-subscriber-robot/

At the beginning of his new build’s video, YouTuber Michael Reeves discusses a revelation he had about why some people don’t subscribe to his channel:

The real reason some people don’t subscribe is that when you hit this button, that’s all, that’s it, it’s done. It’s not special, it’s not enjoyable. So how do we make subscribing a fun, enjoyable process? Well, we do it by slowly chipping away at the content creator’s psyche every time someone subscribes.

His fix? The ‘fun’ interactive Subscriber Robot that is the subject of the video.

Be aware that Michael uses a couple of mild swears in this video, so maybe don’t watch it with a child.

The Subscriber Robot

Just showing that subscriber dedication My Patreon Page: https://www.patreon.com/michaelreeves Personal Site: https://michaelreeves.us/ Twitter: https://twitter.com/michaelreeves08 Song: Summer Salt – Sweet To Me

Who is Michael Reeves?

Software developer and student Michael Reeves started his YouTube account a mere four months ago, with the premiere of his robot that shines lasers into your eyes – now he has 110k+ subscribers. At only 19, Michael co-owns and manages a company together with friends, and is set on his career path in software and computing. So when he is not making videos, he works a nine-to-five job “to pay for college and, y’know, live”.

The Subscriber Robot

Michael shot to YouTube fame with the aforementioned laser robot built around an Arduino. But by now he has also released videos of a few Raspberry Pi-based contraptions.

Michael Reeves Raspberry Pi Subscriber Robot

Michael, talking us through the details of one of the worst ideas ever made

His Subscriber Robot uses a series of Python scripts running on a Raspberry Pi to check for new subscribers to Michael’s channel via the YouTube API. When it identifies one, the Pi uses a relay to make the ceiling lights in Michael’s office flash ten times a second while ear-splitting noise is emitted by a 102-decibel-rated buzzer. Needless to say, this buzzer is not recommended for home use, work use, or any use whatsoever! Moreover, the Raspberry Pi also connects to a speaker that announces the name of the new subscriber, so Michael knows who to thank.
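Michael hasn’t published this exact script, so here’s only a rough sketch of what the polling loop might look like. It assumes the YouTube Data API v3 channels endpoint (which reports a subscriber count; fetching new subscribers’ names needs additional authorized API access) and gpiozero for the relay and buzzer, with placeholder pins and keys:

import time
import requests
from gpiozero import DigitalOutputDevice, Buzzer

API_KEY = 'YOUR_API_KEY'        # placeholder
CHANNEL_ID = 'YOUR_CHANNEL_ID'  # placeholder
URL = ('https://www.googleapis.com/youtube/v3/channels'
       '?part=statistics&id={}&key={}'.format(CHANNEL_ID, API_KEY))

lights = DigitalOutputDevice(17)  # relay driving the ceiling lights (pin illustrative)
buzzer = Buzzer(27)               # the infamous 102-decibel buzzer (pin illustrative)

def subscriber_count():
    stats = requests.get(URL).json()['items'][0]['statistics']
    return int(stats['subscriberCount'])

last = subscriber_count()
while True:
    time.sleep(10)
    current = subscriber_count()
    if current > last:
        # Flash the lights ten times a second for three seconds, buzzing throughout.
        buzzer.on()
        lights.blink(on_time=0.05, off_time=0.05, n=30, background=False)
        buzzer.off()
    last = current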

Michael Reeves Raspberry Pi Subscriber Robot

Subscriber Robot: EEH! EEH! EEH! MoistPretzels has subscribed.
Michael: Thank you, MoistPretzels…

Given that Michael has gained a whopping 30,000 followers in the ten days since the release of this video, it’s fair to assume he is currently curled up in a ball on the office floor, quietly crying to himself.

If you think Michael only makes videos about ridiculous builds, you’re mistaken. He also uses YouTube to provide educational content, because he believes that “it’s super important for people to teach themselves how to program”. For example, he has just released a new C# beginners tutorial, the third in the series.

Support Michael

If you’d like to help Michael in his mission to fill the world with both tutorials and ridiculous robot builds, make sure to subscribe to his channel. You can also follow him on Twitter and support him on Patreon.

You may also want to check out the Useless Duck Company and Simone Giertz if you’re in the mood for more impractical, yet highly amusing, robot builds.

Good luck with your channel, Michael! We are looking forward to, and slightly dreading, more videos from one of our favourite new YouTubers.

The post Michael Reeves and the ridiculous Subscriber Robot appeared first on Raspberry Pi.

Analyzing AWS Cost and Usage Reports with Looker and Amazon Athena

Post Syndicated from Dillon Morrison original https://aws.amazon.com/blogs/big-data/analyzing-aws-cost-and-usage-reports-with-looker-and-amazon-athena/

This is a guest post by Dillon Morrison at Looker. Looker is, in their own words, “a new kind of analytics platform–letting everyone in your business make better decisions by getting reliable answers from a tool they can use.” 

As the breadth of AWS products and services continues to grow, customers are able to more easily move their technology stack and core infrastructure to AWS. One of the attractive benefits of AWS is the cost savings. Rather than paying upfront capital expenses for large on-premises systems, customers can instead pay variable expenses for on-demand services. To further reduce expenses, AWS users can reserve resources for specific periods of time, and automatically scale resources as needed.

The AWS Cost Explorer is great for aggregated reporting. However, conducting analysis on the raw data using the flexibility and power of SQL allows for much richer detail and insight, and can be the better choice for the long term. Thankfully, with the introduction of Amazon Athena, monitoring and managing these costs is now easier than ever.

In the post, I walk through setting up the data pipeline for cost and usage reports, Amazon S3, and Athena, and discuss some of the most common levers for cost savings. I surface tables through Looker, which comes with a host of pre-built data models and dashboards to make analysis of your cost and usage data simple and intuitive.

Analysis with Athena

With Athena, there’s no need to create hundreds of Excel reports, move data around, or deploy clusters to house and process data. Athena uses Apache Hive’s DDL to create tables, and the Presto querying engine to process queries. Analysis can be performed directly on raw data in S3. Conveniently, AWS exports raw cost and usage data directly into a user-specified S3 bucket, making it simple to start querying with Athena quickly. This makes continuous monitoring of costs virtually seamless, since there is no infrastructure to manage. Instead, users can leverage the power of the Athena SQL engine to easily perform ad-hoc analysis and data discovery without needing to set up a data warehouse.

After the data pipeline is established, cost and usage data (the recommended billing data, per AWS documentation) provides a plethora of comprehensive information around usage of AWS services and the associated costs. Whether you need the report segmented by product type, user identity, or region, it can be sliced and diced any number of ways to properly allocate costs for any of your business needs. You can then drill into any specific line item to see even further detail, such as the selected operating system, tenancy, purchase option (on-demand, spot, or reserved), and so on.

Walkthrough

By default, the Cost and Usage report exports CSV files, which you can compress using gzip (recommended for performance). There are some additional configuration options for tuning performance further, which are discussed below.

Prerequisites

If you want to follow along, you need the following resources:

Enable the cost and usage reports

First, enable the Cost and Usage report. For Time unit, select Hourly. For Include, select Resource IDs. All of these options are presented in the report-creation window.

The Cost and Usage report dumps CSV files into the specified S3 bucket. Please note that it can take up to 24 hours for the first file to be delivered after enabling the report.

Configure the S3 bucket and files for Athena querying

In addition to the CSV file, AWS also creates a JSON manifest file for each cost and usage report. Athena requires that all of the files in the S3 bucket are in the same format, so we need to get rid of all these manifest files. If you’re looking to get started with Athena quickly, you can simply go into your S3 bucket and delete the manifest file manually, skip the automation described below, and move on to the next section.

To automate the process of removing the manifest file each time a new report is dumped into S3, which I recommend as you scale, there are a few additional steps. The folks at Concurrency labs wrote a great overview and set of scripts for this, which you can find in their GitHub repo.

These scripts take the data from an input bucket, remove anything unnecessary, and dump it into a new output bucket. We can utilize AWS Lambda to trigger this process whenever new data is dropped into S3, or on a nightly basis, or whatever makes most sense for your use-case, depending on how often you’re querying the data. Please note that enabling the “hourly” report means that data is reported at the hour-level of granularity, not that a new file is generated every hour.
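If you’d rather not adopt the full set of scripts, the core of that Lambda job is small. Here’s a sketch, under our own assumptions, of an S3-triggered function that copies report files to a clean output bucket and skips the JSON manifests; the output bucket name is a placeholder:

import boto3
from urllib.parse import unquote_plus

s3 = boto3.client('s3')
OUTPUT_BUCKET = 'my-clean-cur-bucket'  # placeholder

def lambda_handler(event, context):
    # Fires on s3:ObjectCreated events in the raw report bucket.
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = unquote_plus(record['s3']['object']['key'])
        if key.endswith('.json'):
            continue  # skip the manifest files that would break Athena
        s3.copy_object(Bucket=OUTPUT_BUCKET, Key=key,
                       CopySource={'Bucket': bucket, 'Key': key})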

Following these scripts, you’ll notice that we’re adding a date partition field, which isn’t necessary but improves query performance. In addition, converting data from CSV to a columnar format like ORC or Parquet also improves performance. We can automate this process using Lambda whenever new data is dropped in our S3 bucket. Amazon Web Services discusses columnar conversion at length, and provides walkthrough examples, in their documentation.

As a long-term solution, best practice is to use compression, partitioning, and conversion. However, for purposes of this walkthrough, we’re not going to worry about them so we can get up-and-running quicker.

Set up the Athena query engine

In your AWS console, navigate to the Athena service, and click “Get Started”. Follow the tutorial and set up a new database (we’ve called ours “AWS Optimizer” in this example). Don’t worry about configuring your initial table, per the tutorial instructions; we’ll be creating a new table for cost and usage analysis. Once you’ve walked through the tutorial steps, you’ll be able to access the Athena interface, and can begin running Hive DDL statements to create new tables.

One thing that’s important to note is that the Cost and Usage CSVs also contain the column headers in their first row, meaning that the column headers would be included in the dataset and in any queries. For testing and quick set-up, you can remove this line manually from your first few CSV files. Long-term, you’ll want to use a script to programmatically remove this row each time a new file is dropped in S3 (every few hours, typically). We’ve drafted up a sample script for ease of reference, which we run on Lambda. We utilize Lambda’s native ability to invoke the script whenever a new object is dropped in S3.
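Our sample script isn’t reproduced here, but the operation boils down to an S3 get/put pair. A sketch, assuming uncompressed CSVs (gzipped reports would need a decompression step first) and a placeholder output bucket:

import boto3
from urllib.parse import unquote_plus

s3 = boto3.client('s3')
OUTPUT_BUCKET = 'my-headerless-cur-bucket'  # placeholder

def lambda_handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = unquote_plus(record['s3']['object']['key'])
        body = s3.get_object(Bucket=bucket, Key=key)['Body'].read().decode('utf-8')
        _, _, data = body.partition('\n')  # keep everything after the header row
        s3.put_object(Bucket=OUTPUT_BUCKET, Key=key, Body=data.encode('utf-8'))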

For cost and usage, we recommend using the DDL statement below. Since our data is in CSV format, we don’t need to use a SerDe; we can simply specify the field delimiter, escape character, and line terminator, along with the structure of the files (TEXTFILE). Note that AWS does have an OpenCSV SerDe as well, if you prefer to use that.


CREATE EXTERNAL TABLE IF NOT EXISTS cost_and_usage (
identity_LineItemId String,
identity_TimeInterval String,
bill_InvoiceId String,
bill_BillingEntity String,
bill_BillType String,
bill_PayerAccountId String,
bill_BillingPeriodStartDate String,
bill_BillingPeriodEndDate String,
lineItem_UsageAccountId String,
lineItem_LineItemType String,
lineItem_UsageStartDate String,
lineItem_UsageEndDate String,
lineItem_ProductCode String,
lineItem_UsageType String,
lineItem_Operation String,
lineItem_AvailabilityZone String,
lineItem_ResourceId String,
lineItem_UsageAmount String,
lineItem_NormalizationFactor String,
lineItem_NormalizedUsageAmount String,
lineItem_CurrencyCode String,
lineItem_UnblendedRate String,
lineItem_UnblendedCost String,
lineItem_BlendedRate String,
lineItem_BlendedCost String,
lineItem_LineItemDescription String,
lineItem_TaxType String,
product_ProductName String,
product_accountAssistance String,
product_architecturalReview String,
product_architectureSupport String,
product_availability String,
product_bestPractices String,
product_cacheEngine String,
product_caseSeverityresponseTimes String,
product_clockSpeed String,
product_currentGeneration String,
product_customerServiceAndCommunities String,
product_databaseEdition String,
product_databaseEngine String,
product_dedicatedEbsThroughput String,
product_deploymentOption String,
product_description String,
product_durability String,
product_ebsOptimized String,
product_ecu String,
product_endpointType String,
product_engineCode String,
product_enhancedNetworkingSupported String,
product_executionFrequency String,
product_executionLocation String,
product_feeCode String,
product_feeDescription String,
product_freeQueryTypes String,
product_freeTrial String,
product_frequencyMode String,
product_fromLocation String,
product_fromLocationType String,
product_group String,
product_groupDescription String,
product_includedServices String,
product_instanceFamily String,
product_instanceType String,
product_io String,
product_launchSupport String,
product_licenseModel String,
product_location String,
product_locationType String,
product_maxIopsBurstPerformance String,
product_maxIopsvolume String,
product_maxThroughputvolume String,
product_maxVolumeSize String,
product_maximumStorageVolume String,
product_memory String,
product_messageDeliveryFrequency String,
product_messageDeliveryOrder String,
product_minVolumeSize String,
product_minimumStorageVolume String,
product_networkPerformance String,
product_operatingSystem String,
product_operation String,
product_operationsSupport String,
product_physicalProcessor String,
product_preInstalledSw String,
product_proactiveGuidance String,
product_processorArchitecture String,
product_processorFeatures String,
product_productFamily String,
product_programmaticCaseManagement String,
product_provisioned String,
product_queueType String,
product_requestDescription String,
product_requestType String,
product_routingTarget String,
product_routingType String,
product_servicecode String,
product_sku String,
product_softwareType String,
product_storage String,
product_storageClass String,
product_storageMedia String,
product_technicalSupport String,
product_tenancy String,
product_thirdpartySoftwareSupport String,
product_toLocation String,
product_toLocationType String,
product_training String,
product_transferType String,
product_usageFamily String,
product_usagetype String,
product_vcpu String,
product_version String,
product_volumeType String,
product_whoCanOpenCases String,
pricing_LeaseContractLength String,
pricing_OfferingClass String,
pricing_PurchaseOption String,
pricing_publicOnDemandCost String,
pricing_publicOnDemandRate String,
pricing_term String,
pricing_unit String,
reservation_AvailabilityZone String,
reservation_NormalizedUnitsPerReservation String,
reservation_NumberOfReservations String,
reservation_ReservationARN String,
reservation_TotalReservedNormalizedUnits String,
reservation_TotalReservedUnits String,
reservation_UnitsPerReservation String,
resourceTags_userName String,
resourceTags_usercostcategory String
)
    ROW FORMAT DELIMITED
      FIELDS TERMINATED BY ','
      ESCAPED BY '\\'
      LINES TERMINATED BY '\n'

STORED AS TEXTFILE
    LOCATION 's3://<<your bucket name>>';

Once you’ve successfully executed the command, you should see a new table named “cost_and_usage” with the below properties. Now we’re ready to start executing queries and running analysis!

Start with Looker and connect to Athena

Setting up Looker is a quick process, and you can try it out for free here (or download from Amazon Marketplace). It takes just a few seconds to connect Looker to your Athena database, and Looker comes with a host of pre-built data models and dashboards to make analysis of your cost and usage data simple and intuitive. After you’re connected, you can use the Looker UI to run whatever analysis you’d like. Looker translates this UI to optimized SQL, so any user can execute and visualize queries for true self-service analytics.

Major cost saving levers

Now that the data pipeline is configured, you can dive into the most popular use cases for cost savings. In this post, I focus on:

  • Purchasing Reserved Instances vs. On-Demand Instances
  • Data transfer costs
  • Allocating costs over users or other Attributes (denoted with resource tags)

On-Demand, Spot, and Reserved Instances

Purchasing Reserved Instances vs On-Demand Instances is arguably going to be the biggest cost lever for heavy AWS users (Reserved Instances run up to 75% cheaper!). AWS offers three options for purchasing instances:

  • On-Demand—Pay as you use.
  • Spot (variable cost)—Bid on spare Amazon EC2 computing capacity.
  • Reserved Instances—Pay for an instance for a specific, allotted period of time.

When purchasing a Reserved Instance, you can also choose to pay all-upfront, partial-upfront, or monthly. The more you pay upfront, the greater the discount.

If your company has been using AWS for some time now, you should have a good sense of your overall instance usage on a per-month or per-day basis. Rather than paying for these instances On-Demand, you should try to forecast the number of instances you’ll need, and reserve them with upfront payments.

The total usage covered by Reserved Instances, relative to overall usage across all instances, is called your coverage ratio. It’s important not to confuse your coverage ratio with your Reserved Instance utilization: utilization represents the amount of reserved hours that were actually used. Don’t worry about exceeding capacity; you can still set up Auto Scaling preferences so that more instances get added whenever your coverage or utilization crosses a certain threshold (we often see a target of 80% for both coverage and utilization among savvy customers).

Calculating the reserved costs and coverage can be a bit tricky with the level of granularity provided by the cost and usage report. The following query shows your total cost over the last 6 months, broken out by Reserved Instance vs other instance usage. You can substitute the cost field for usage if you’d prefer. Please note that you should only have data for the time period after the cost and usage report has been enabled (though you can opt for up to 3 months of historical data by contacting your AWS Account Executive). If you’re just getting started, this query will only show a few days.


SELECT 
	DATE_FORMAT(from_iso8601_timestamp(cost_and_usage.lineitem_usagestartdate),'%Y-%m') AS "cost_and_usage.usage_start_month",
	COALESCE(SUM(cost_and_usage.lineitem_unblendedcost ), 0) AS "cost_and_usage.total_unblended_cost",
	COALESCE(SUM(CASE WHEN (CASE
         WHEN cost_and_usage.lineitem_lineitemtype = 'DiscountedUsage' THEN 'RI Line Item'
         WHEN cost_and_usage.lineitem_lineitemtype = 'RIFee' THEN 'RI Line Item'
         WHEN cost_and_usage.lineitem_lineitemtype = 'Fee' THEN 'RI Line Item'
         ELSE 'Non RI Line Item'
        END = 'RI Line Item') THEN cost_and_usage.lineitem_unblendedcost  ELSE NULL END), 0) AS "cost_and_usage.total_reserved_unblended_cost",
	1.0 * (COALESCE(SUM(CASE WHEN (CASE
         WHEN cost_and_usage.lineitem_lineitemtype = 'DiscountedUsage' THEN 'RI Line Item'
         WHEN cost_and_usage.lineitem_lineitemtype = 'RIFee' THEN 'RI Line Item'
         WHEN cost_and_usage.lineitem_lineitemtype = 'Fee' THEN 'RI Line Item'
         ELSE 'Non RI Line Item'
        END = 'RI Line Item') THEN cost_and_usage.lineitem_unblendedcost  ELSE NULL END), 0)) / NULLIF((COALESCE(SUM(cost_and_usage.lineitem_unblendedcost ), 0)),0)  AS "cost_and_usage.percent_spend_on_ris",
	COALESCE(SUM(CASE WHEN (CASE
         WHEN cost_and_usage.lineitem_lineitemtype = 'DiscountedUsage' THEN 'RI Line Item'
         WHEN cost_and_usage.lineitem_lineitemtype = 'RIFee' THEN 'RI Line Item'
         WHEN cost_and_usage.lineitem_lineitemtype = 'Fee' THEN 'RI Line Item'
         ELSE 'Non RI Line Item'
        END = 'Non RI Line Item') THEN cost_and_usage.lineitem_unblendedcost  ELSE NULL END), 0) AS "cost_and_usage.total_non_reserved_unblended_cost",
	1.0 * (COALESCE(SUM(CASE WHEN (CASE
         WHEN cost_and_usage.lineitem_lineitemtype = 'DiscountedUsage' THEN 'RI Line Item'
         WHEN cost_and_usage.lineitem_lineitemtype = 'RIFee' THEN 'RI Line Item'
         WHEN cost_and_usage.lineitem_lineitemtype = 'Fee' THEN 'RI Line Item'
         ELSE 'Non RI Line Item'
        END = 'Non RI Line Item') THEN cost_and_usage.lineitem_unblendedcost  ELSE NULL END), 0)) / NULLIF((COALESCE(SUM(cost_and_usage.lineitem_unblendedcost ), 0)),0)  AS "cost_and_usage.percent_spend_on_non_ris"
FROM aws_optimizer.cost_and_usage  AS cost_and_usage

WHERE 
	(((from_iso8601_timestamp(cost_and_usage.lineitem_usagestartdate)) >= ((DATE_ADD('month', -5, DATE_TRUNC('MONTH', CAST(NOW() AS DATE))))) AND (from_iso8601_timestamp(cost_and_usage.lineitem_usagestartdate)) < ((DATE_ADD('month', 6, DATE_ADD('month', -5, DATE_TRUNC('MONTH', CAST(NOW() AS DATE))))))))
GROUP BY 1
ORDER BY 2 DESC
LIMIT 500

The resulting table should look something like the image below (I’m surfacing tables through Looker, though the same table would result from querying via command line or any other interface).

With a BI tool, you can create dashboards for easy reference and monitoring. New data is dumped into S3 every few hours, so your dashboards can update several times per day.

It’s an iterative process to understand the appropriate number of Reserved Instances needed to meet your business needs. After you’ve properly integrated Reserved Instances into your purchasing patterns, the savings can be significant. If your coverage is consistently below 70%, you should seriously consider adjusting your purchase types and opting for more Reserved Instances.

Data transfer costs

One of the great things about AWS data storage is that it’s incredibly cheap. Most charges often come from moving and processing that data. There are several different prices for transferring data, broken out largely by transfers between regions and availability zones. Transfers between regions are the most costly, followed by transfers between Availability Zones. Transfers within the same region and same availability zone are free unless using elastic or public IP addresses, in which case there is a cost. You can find more detailed information in the AWS Pricing Docs. With this in mind, there are several simple strategies for helping reduce costs.

First, since costs increase when transferring data between regions, it’s wise to ensure that as many services as possible reside within the same region. The more you can localize services to one specific region, the lower your costs will be.

Second, you should maximize the data you’re routing directly within AWS services and IP addresses. Transfers out to the open internet are the most costly and least performant mechanisms of data transfers, so it’s best to keep transfers within AWS services.

Lastly, data transfers between private IP addresses are cheaper than between elastic or public IP addresses, so utilizing private IP addresses as much as possible is the most cost-effective strategy.

The following query provides a table depicting the total costs for each AWS product, broken out by transfer cost type. Substitute the “lineitem_productcode” field in the query to segment the costs by any other attribute. If you notice any unusually high spikes in cost, you’ll need to dig deeper to understand what’s driving that spike: location, volume, and so on. Drill down into specific costs by including “product_usagetype” and “product_transfertype” in your query to identify the types of transfer costs that are driving up your bill.

SELECT 
	cost_and_usage.lineitem_productcode  AS "cost_and_usage.product_code",
	COALESCE(SUM(cost_and_usage.lineitem_unblendedcost), 0) AS "cost_and_usage.total_unblended_cost",
	COALESCE(SUM(CASE WHEN REGEXP_LIKE(cost_and_usage.product_usagetype, 'DataTransfer')    THEN cost_and_usage.lineitem_unblendedcost  ELSE NULL END), 0) AS "cost_and_usage.total_data_transfer_cost",
	COALESCE(SUM(CASE WHEN REGEXP_LIKE(cost_and_usage.product_usagetype, 'DataTransfer-In')    THEN cost_and_usage.lineitem_unblendedcost  ELSE NULL END), 0) AS "cost_and_usage.total_inbound_data_transfer_cost",
	COALESCE(SUM(CASE WHEN REGEXP_LIKE(cost_and_usage.product_usagetype, 'DataTransfer-Out')    THEN cost_and_usage.lineitem_unblendedcost  ELSE NULL END), 0) AS "cost_and_usage.total_outbound_data_transfer_cost"
FROM aws_optimizer.cost_and_usage  AS cost_and_usage

WHERE 
	(((from_iso8601_timestamp(cost_and_usage.lineitem_usagestartdate)) >= ((DATE_ADD('month', -5, DATE_TRUNC('MONTH', CAST(NOW() AS DATE))))) AND (from_iso8601_timestamp(cost_and_usage.lineitem_usagestartdate)) < ((DATE_ADD('month', 6, DATE_ADD('month', -5, DATE_TRUNC('MONTH', CAST(NOW() AS DATE))))))))
GROUP BY 1
ORDER BY 2 DESC
LIMIT 500

When moving between regions or over the open web, many data transfer costs also include the origin and destination location of the data movement. Using a BI tool with mapping capabilities, you can get a nice visual of data flows. The point at the center of the map is used to represent external data flows over the open internet.

Analysis by tags

AWS provides the option to apply custom tags to individual resources, so you can allocate costs over whatever customized segment makes the most sense for your business. For a SaaS company that hosts software for customers on AWS, you might want to tag resources by customer size. The following query uses custom tags to display the reserved, data transfer, and total cost for each AWS service, broken out by tag categories, over the last 6 months. Replace cost_and_usage.resourcetags_customersegment and cost_and_usage.customer_segment with the name of your customer field.


SELECT * FROM (
SELECT *, DENSE_RANK() OVER (ORDER BY z___min_rank) as z___pivot_row_rank, RANK() OVER (PARTITION BY z__pivot_col_rank ORDER BY z___min_rank) as z__pivot_col_ordering FROM (
SELECT *, MIN(z___rank) OVER (PARTITION BY "cost_and_usage.product_code") as z___min_rank FROM (
SELECT *, RANK() OVER (ORDER BY CASE WHEN z__pivot_col_rank=1 THEN (CASE WHEN "cost_and_usage.total_unblended_cost" IS NOT NULL THEN 0 ELSE 1 END) ELSE 2 END, CASE WHEN z__pivot_col_rank=1 THEN "cost_and_usage.total_unblended_cost" ELSE NULL END DESC, "cost_and_usage.total_unblended_cost" DESC, z__pivot_col_rank, "cost_and_usage.product_code") AS z___rank FROM (
SELECT *, DENSE_RANK() OVER (ORDER BY CASE WHEN "cost_and_usage.customer_segment" IS NULL THEN 1 ELSE 0 END, "cost_and_usage.customer_segment") AS z__pivot_col_rank FROM (
SELECT 
	cost_and_usage.lineitem_productcode  AS "cost_and_usage.product_code",
	cost_and_usage.resourcetags_customersegment  AS "cost_and_usage.customer_segment",
	COALESCE(SUM(cost_and_usage.lineitem_unblendedcost ), 0) AS "cost_and_usage.total_unblended_cost",
	1.0 * (COALESCE(SUM(CASE WHEN REGEXP_LIKE(cost_and_usage.product_usagetype, 'DataTransfer')    THEN cost_and_usage.lineitem_unblendedcost  ELSE NULL END), 0)) / NULLIF((COALESCE(SUM(cost_and_usage.lineitem_unblendedcost ), 0)),0)  AS "cost_and_usage.percent_spend_data_transfers_unblended",
	1.0 * (COALESCE(SUM(CASE WHEN (CASE
         WHEN cost_and_usage.lineitem_lineitemtype = 'DiscountedUsage' THEN 'RI Line Item'
         WHEN cost_and_usage.lineitem_lineitemtype = 'RIFee' THEN 'RI Line Item'
         WHEN cost_and_usage.lineitem_lineitemtype = 'Fee' THEN 'RI Line Item'
         ELSE 'Non RI Line Item'
        END = 'Non RI Line Item') THEN cost_and_usage.lineitem_unblendedcost  ELSE NULL END), 0)) / NULLIF((COALESCE(SUM(cost_and_usage.lineitem_unblendedcost ), 0)),0)  AS "cost_and_usage.unblended_percent_spend_on_ris"
FROM aws_optimizer.cost_and_usage  AS cost_and_usage

WHERE 
	(((from_iso8601_timestamp(cost_and_usage.lineitem_usagestartdate)) >= ((DATE_ADD('month', -5, DATE_TRUNC('MONTH', CAST(NOW() AS DATE))))) AND (from_iso8601_timestamp(cost_and_usage.lineitem_usagestartdate)) < ((DATE_ADD('month', 6, DATE_ADD('month', -5, DATE_TRUNC('MONTH', CAST(NOW() AS DATE))))))))
GROUP BY 1,2) ww
) bb WHERE z__pivot_col_rank <= 16384
) aa
) xx
) zz
 WHERE z___pivot_row_rank <= 500 OR z__pivot_col_ordering = 1 ORDER BY z___pivot_row_rank

The resulting table in this example looks like the results below. In this example, you can tell that we’re making poor use of Reserved Instances because they represent such a small portion of our overall costs.

Again, using a BI tool to visualize these costs and trends over time makes the analysis much easier to consume and take action on.

Summary

Reducing your AWS spend is always an iterative, ongoing process. Hopefully with these queries alone, you can start to understand your spending patterns and identify opportunities for savings. However, this is just a peek into the many opportunities available through analysis of the Cost and Usage report. Each company is different, with unique needs and usage patterns. To achieve maximum cost savings, we encourage you to set up an analytics environment that enables your team to explore all potential cuts and slices of your usage data, whenever it’s necessary. Exploring different trends and spikes across regions, services, user types, and so on helps you gain a comprehensive understanding of your major cost levers and consistently implement new cost-reduction strategies.

Note that all of the queries and analysis provided in this post were generated using the Looker data platform. If you’re already a Looker customer, you can get all of this analysis, additional pre-configured dashboards, and much more using Looker Blocks for AWS.


About the Author

Dillon Morrison leads the Platform Ecosystem at Looker. He enjoys exploring new technologies and architecting the most efficient data solutions for the business needs of his company and their customers. In his spare time, you’ll find Dillon rock climbing in the Bay Area or nose deep in the docs of the latest AWS product release at his favorite cafe (“Arlequin in SF is unbeatable!”).

OK Google, be aesthetically pleasing

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/aesthetically-pleasing-ok-google/

Maker Andrew Jones took a Raspberry Pi and the Google Assistant SDK and created a gorgeous-looking, and highly functional, alternative to store-bought smart speakers.

Raspberry Pi Google AI Assistant

In this video I get an “Ok Google” voice activated AI assistant running on a raspberry pi. I also hand make a nice wooden box for it to live in.

OK Google, what are you?

Google Assistant is software of the same ilk as Amazon’s Alexa, Apple’s Siri and Microsoft’s Cortana. It’s a virtual assistant that allows you to request information, play audio, and control smart home devices via voice commands.

Infinite Looping Siri, Alexa and Google Home

One can barely see the iPhone’s screen. That’s because I have a privacy protection screen. Sorry, did not check the camera angle. Learn how to create your own loop, why we put Cortana out of the loop, and how to train Siri to an artificial voice: https://www.danrl.com/2016/12/01/looping-ais-siri-alexa-google-home.html

You probably have a digital assistant on your mobile phone, and if you go to the home of someone even mildly tech-savvy, you may see a device awaiting commands via a wake word such as the device’s name or, for the Google Assistant, the phrase “OK, Google”.

Homebrew versions

Understanding the maker need to ‘put tech into stuff’ and upgrade everyday objects into everyday objects 2.0, the creators of these virtual assistants have allowed developers to run their software on devices such as the Raspberry Pi. This means that your common-or-garden homemade robot can now be controlled via voice, and your shed-built home automation system can have easy-to-use internet connectivity via a reliable, multi-device platform.

Andrew’s Google Assistant build

Andrew gives a peerless explanation of how the Google Assistant works:

There’s Google’s Cloud. You log into Google’s Cloud and you do a bunch of cloud configuration cloud stuff. And then on the Raspberry Pi you install some Python software and you do a bunch of configuration. And then the cloud and the Pi talk the clouds kitten rainbow protocol and then you get a Google AI assistant.

It all makes perfect sense. Though for extra detail, you could always head directly to Google.

Andrew Jones Raspberry Pi OK Google Assistant

I couldn’t have explained it better myself

Andrew decided to take his Google Assistant-enabled Raspberry Pi and create a new body for it. One that was more aesthetically pleasing than the standard Pi-inna-box. After wiring his build and cannibalising some speakers and a microphone, he created a sleek, wooden body that would sit quite comfortably in any Bang & Olufsen shop window.

Find the entire build tutorial on Instructables.

Make your own

It’s more straightforward than Andrew’s explanation suggests, we promise! And with an array of useful resources online, you should be able to incorporate your choice of virtual assistants into your build.

There’s The Raspberry Pi Guy’s tutorial on setting up Amazon Alexa on the Raspberry Pi. If you’re looking to use Siri on your Pi, YouTube has a plethora of tutorials waiting for you. And lastly, check out Microsoft’s site for using Cortana on the Pi!

If you’re looking for more information on Google Assistant, check out issue 57 of The MagPi Magazine, free to download as a PDF. The print edition of this issue came with a free AIY Projects Voice Kit, and you can sign up for The MagPi newsletter to be the first to know about the kit’s availability for purchase.

The post OK Google, be aesthetically pleasing appeared first on Raspberry Pi.

Thomas and Ed become a RealLifeDoodle on the ISS

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/astro-pi-reallifedoodle/

Thanks to the very talented sooperdavid, creator of some of the wonderful animations known as RealLifeDoodles, Thomas Pesquet and Astro Pi Ed have been turned into one of the cutest videos on the internet.

space pi – Create, Discover and Share Awesome GIFs on Gfycat

Watch space pi GIF by sooperdave on Gfycat. Discover more GIFS online on Gfycat

And RealLifeDoodles aaaaare?

Thanks to the power of viral video, many will be aware of the ongoing Real Life Doodle phenomenon. Wait, you’re not aware?

Oh. Well, let me explain it to you.

Taking often comical video clips, those with a know-how and skill level that outweighs my own in spades add faces and emotions to inanimate objects, creating what the social media world refers to as a Real Life Doodle. From disappointed exercise balls to cannibalistic piles of leaves, these video clips are both cute and sometimes, though thankfully not always, a little heartbreaking.

letmegofree – Create, Discover and Share Awesome GIFs on Gfycat

Watch letmegofree GIF by sooperdave on Gfycat. Discover more reallifedoodles GIFs on Gfycat

Our own RealLifeDoodle

A few months back, when Programme Manager Dave Honess, better known to many as SpaceDave, sent me these Astro Pi videos for me to upload to YouTube, a small plan hatched in my brain. For in the midst of the video, and pointed out to me by SpaceDave – “I kind of love the way he just lets the unit drop out of shot” – was the most adorable sight as poor Ed drifted off into the great unknown of the ISS. Finding that I have this odd ability to consider many inanimate objects as ‘cute’, I wanted to see whether we could turn poor Ed into a RealLifeDoodle.

Heading to the Reddit RealLifeDoodle subreddit, I sent moderator sooperdavid a private message, asking if he’d be so kind as to bring our beloved Ed to life.

Yesterday, our dream came true!

Astro Pi

Unless you’re new to the world of the Raspberry Pi blog (in which case, welcome!), you’ll probably know about the Astro Pi Challenge. But for those who are unaware, let me break it down for you.

Raspberry Pi RealLifeDoodle

In 2015, two weeks before British ESA Astronaut Tim Peake journeyed to the International Space Station, two Raspberry Pis were sent up to await his arrival. Clad in 6063-grade aluminium flight cases and fitted with their own Sense HATs and camera modules, the Astro Pis Ed and Izzy were ready to receive the winning codes from school children in the UK. The following year, this time maintained by French ESA Astronaut Thomas Pesquet, children from every ESA member country got involved to send even more code to the ISS.

Get involved

Will there be another Astro Pi Challenge? Well, I just asked SpaceDave and he didn’t say no! So why not get yourself into training now and try out some of our space-themed free resources, including our 3D-print your own Astro Pi case tutorial? You can also follow the adventures of Ed and Izzy in our brilliant Story of Astro Pi cartoons.

Raspberry Pi RealLifeDoodle

And if you’re quick, there’s still time to take part in tomorrow’s Moonhack! Check out their website for more information and help the team at Code Club Australia beat their own world record!

The post Thomas and Ed become a RealLifeDoodle on the ISS appeared first on Raspberry Pi.

Ms. Haughs’ tote-ally awesome Raspberry Pi bag

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/pi-tote-bag/

While planning her trips to upcoming educational events, Raspberry Pi Certified Educator Amanda Haughs decided to incorporate the Pi Zero W into a rather nifty accessory.

Final Pi Tote bag

Uploaded by Amanda Haughs on 2017-07-08.

The idea

Commenting on the convenient size of the Raspberry Pi Zero W, Amanda explains on her blog “I decided that I wanted to make something that would fully take advantage of the compact size of the Pi Zero, that was somewhat useful, and that I could take with me and share with my maker friends during my summer tech travels.”

Amanda Haughs Raspberry Pi Tote Bag

Awesome grandmothers and wearable tech are an instant recipe for success!

With access to her grandmother’s “high-tech embroidery machine”, Amanda was able to incorporate various maker skills into her project.

The Tech

Amanda used five clear white LEDs and the Raspberry Pi Zero for the project. Taking inspiration from the LED-adorned Babbage Bear her team created at Picademy, she decided to connect the LEDs using female-to-female jumper wires.

Amanda Haughs Pi Tote Bag

Poor Babbage really does suffer at Picademy events

It’s worth noting that she could also have used conductive thread, though we wonder how this slightly less flexible thread would work in a sewing machine, so don’t try this at home. Or do, but don’t blame me if it goes wonky.

Having set the LEDs in place, Amanda worked on the code. Unsure about how she wanted the LEDs to blink, she finally settled on a random pulsing of the lights, and used the GPIO Zero library to achieve the effect.

Raspberry Pi Tote Bag

Check out the GPIO Zero library for some great LED effects

The GPIO Zero pulse effect allows users to easily fade an LED in and out without the need for long strings of code. Very handy.
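Amanda’s full script is on her blog; a minimal take on the random pulsing idea looks something like this, with illustrative GPIO pins:

from gpiozero import PWMLED
from random import choice, uniform
from time import sleep

# Five LEDs poking through the front of the bag (pin numbers illustrative).
leds = [PWMLED(pin) for pin in (5, 6, 13, 19, 26)]

while True:
    led = choice(leds)
    # Fade a randomly chosen LED in and out once, in the background.
    led.pulse(fade_in_time=uniform(0.2, 1.0), fade_out_time=uniform(0.2, 1.0),
              n=1, background=True)
    sleep(uniform(0.1, 0.7))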

The Bag

Inspiration for the bag’s final design came thanks to a YouTube video, and Amanda and her grandmother were able to recreate the make using their fabric of choice.

DIY Tote Bag – Beginner’s Sewing Tutorial

Learn how to make this cute tote bag. A great project for beginning seamstresses!

A small pocket was added on the outside of the bag to allow for the Raspberry Pi Zero to be snugly secured, and the pattern was stitched into the front, allowing spaces for the LEDs to pop through.

Raspberry Pi Tote Bag

Amanda shows off her bag to Philip at ISTE 2017

You can find more information on the project, including Amanda’s initial experimentation with the Sense HAT, on her blog. If you’re a maker, an educator or, (and here’s a word I’m pretty sure I’ve made up) an edumaker, be sure to keep her blog bookmarked!

Make your own wearable tech

Whether you use jumper leads, conductive thread, or paint, we’d love to see your wearable tech projects.

Getting started with wearables

To help you get started, we’ve created this Getting started with wearables free resource that allows you to get making with the Adafruit FLORA and NeoPixel. Check it out!

The post Ms. Haughs’ tote-ally awesome Raspberry Pi bag appeared first on Raspberry Pi.

Video playback on freely-arranged screens with info-beamer

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/info-beamer/

When the creator of the digital signage software info-beamer, Florian Wesch, shared this project on Reddit, I don’t think he was prepared for the excited reaction of the community. Florian’s post, which by now has thousands of upvotes, showcased the power of info-beamer. Not only can the software display a video via multiple Raspberry Pis, it also automatically rejigs the output to match the size and angle of the Pis’ monitors.

info-beamer raspberry pi

Wait…what?

I know, right? We’ve seen many video-based Raspberry Pi projects, but this is definitely one of the most impressive ones. While those of us with a creative streak were imagining cool visual arts installations using monitors and old televisions of various sizes, the more technically-minded puzzled over how Florian pulled this off.

It’s obvious that info-beamer has manifold potential uses. But we had absolutely zero understanding of how it works!

How does info-beamer do this?

Lucky for us, Florian returned to Reddit a few days later with a how-to video, explaining in layman’s terms how you too can get a video to play on a multi-screen, multi-Pi setup.

Automatic video wall configuration with info-beamer hosted

This is an exciting new feature I’ve made available for the info-beamer hosted digital signage system: You can create a video wall consisting of freely arranged screens in seconds. The screens don’t even have to be planar. Just rotate and place them as you like.

First you’ll need to set up info-beamer, which will allow you to introduce multiple Raspberry Pis, and their attached monitors, into a joint network. To make the software work, there’s some Python code you have to write yourself, but hands-on tutorials and example code exist to make this fairly easy, even if you have little experience in Python.

info-beamer raspberry pi

As you can see in Florian’s video, info-beamer assigns each monitor its own, unique section of video. Taking a photo of the monitors and uploading it to a site provides enough information for the software to play a movie trailer split across multiple screens.
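How does one photo tell the software what to do? Conceptually, each screen’s position and size within the photo determine which rectangle of the full video frame that screen should display. The sketch below is not info-beamer’s code, just the underlying idea with rotation left out:

def crop_for_screen(screen, wall, video_w, video_h):
    # screen and wall are dicts of x, y, w, h in the same physical units,
    # e.g. centimetres measured off the uploaded photo.
    sx = video_w / wall['w']
    sy = video_h / wall['h']
    return (int((screen['x'] - wall['x']) * sx),
            int((screen['y'] - wall['y']) * sy),
            int(screen['w'] * sx),
            int(screen['h'] * sy))

wall = {'x': 0, 'y': 0, 'w': 300, 'h': 120}   # bounding box of the whole wall
screens = [{'x': 10, 'y': 20, 'w': 88, 'h': 50},
           {'x': 160, 'y': 35, 'w': 88, 'h': 50}]

for s in screens:
    print(crop_for_screen(s, wall, 1920, 1080))  # (x, y, w, h) within the frame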

info-beamer raspberry pi

A step that’s missing in the video, but that Florian described on Reddit, is how to configure the screens via a drag-and-drop interface so that the software recognizes them. Once this is done, your video display is good to go.

For more information about info-beamer check out the website, and follow the official Twitter account for updates.

Using Raspberry Pi in video-based projects

Since it has an HDMI port, connecting your Raspberry Pi to any compatible monitor, including your television, is an easy task. And with a little tweaking and soldering you can even connect your Pi to that ageing SCART TV/Video combo you might have in the loft.

As I said earlier, there’s an abundance of Pi-powered video-based projects. Many digital art installations, and even commercial media devices, rely on the Raspberry Pi because of its low cost, small size, and high-quality multimedia capabilities.

Have you used a Raspberry Pi in a video-playback project? Share it with us below – we’d love to see it!

The post Video playback on freely-arranged screens with info-beamer appeared first on Raspberry Pi.

Updates to GPIO Zero, the physical computing API

Post Syndicated from Ben Nuttall original https://www.raspberrypi.org/blog/gpio-zero-update/

GPIO Zero v1.4 is out now! It comes with a set of new features, including a handy pinout command line tool. To start using this newest version of the API, update your Raspbian OS now:

sudo apt update && sudo apt upgrade

Some of the things we’ve added will make it easier for you to try your hand at different programming styles. In doing so you’ll build your coding skills and improve as a programmer. As a consequence, you’ll learn to write more complex code, which will enable you to take on advanced electronics builds. And on top of that, you can use the skills you’ll acquire in other computing projects.

GPIO Zero pinout tool

The new pinout tool

Developing GPIO Zero

Nearly two years ago, I started the GPIO Zero project as a simple wrapper around the low-level RPi.GPIO library. I wanted to create a simpler way to control GPIO-connected devices in Python, based on three years’ experience of training teachers, running workshops, and building projects. The idea grew over time, and the more we built for our Python library, the more sophisticated and powerful it became.

One of the great things about Python is that it’s a multi-paradigm programming language. You can write code in a number of different styles, according to your needs. You don’t have to write classes, but you can if you need them. There are functional programming tools available, but beginners get by without them. Importantly, the more advanced features of the language are not a barrier to entry.

Become a more advanced programmer

As a beginner to programming, you usually start by writing procedural programs, in which the flow moves from top to bottom. Then you’ll probably add loops and create your own functions. Your next step might be to start using libraries which introduce new patterns that operate in a different manner to what you’ve written before, for example threaded callbacks (event-driven programming). You might move on to object-oriented programming, extending the functionality of classes provided by other libraries, and starting to write your own classes. Occasionally, you may make use of tools created with functional programming techniques.

Five buttons in different colours

Take control of the buttons in your life

It’s much the same with GPIO Zero: you can start using it very easily, and we’ve made it simple to progress along the learning curve towards more advanced programming techniques. For example, if you want to make a push button control an LED, the easiest way to do this is via procedural programming using a while loop:

from gpiozero import LED, Button

led = LED(17)
button = Button(2)

while True:
    if button.is_pressed:
        led.on()
    else:
        led.off()

But another way to achieve the same thing is to use events:

from gpiozero import LED, Button
from signal import pause

led = LED(17)
button = Button(2)

button.when_pressed = led.on
button.when_released = led.off

pause()

You could even use a declarative approach, and set the LED’s behaviour in a single line:

from gpiozero import LED, Button
from signal import pause

led = LED(17)
button = Button(2)

led.source = button.values

pause()

You will find that the procedural approach is a great start, but at some point you’ll hit its limits and have to try something different. As the examples above show, the same task can be approached in several programming styles. However, if you’d like to control a wider range of devices, or a more complex system, you need to consider carefully which style works best for what you want to achieve. Being able to choose the right programming style for a task is a skill in itself.

Source/values properties

So how does the led.source = button.values thing actually work?

Every GPIO Zero device has a .value property. For example, you can read a button’s state (True or False), and read or set an LED’s state (so led.value = True is the same as led.on()). Since LEDs and buttons operate with the same value set (True and False), you could say led.value = button.value. However, this only sets the LED to match the button once. If you wanted it to always match the button’s state, you’d have to use a while loop. To make things easier, we came up with a way of telling devices they’re connected: we added a .values property to all devices, and a .source to output devices. Now, a loop is no longer necessary, because this will do the job:

led.source = button.values

This is a simple approach to connecting devices using a declarative style of programming. In one single line, we declare that the LED should get its values from the button, i.e. when the button is pressed, the LED should be on. You can even mix the procedural with the declarative style: at one stage of the program, the LED could be set to match the button, while in the next stage it could just be blinking, and finally it might return back to its original state.
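
Here’s a minimal sketch of that staged approach (the pin numbers and ten-second stages are arbitrary; setting source to None should detach the previous feed):

from gpiozero import LED, Button
from time import sleep
from signal import pause

led = LED(17)
button = Button(2)

# Stage one (declarative): the LED follows the button
led.source = button.values
sleep(10)

# Stage two (procedural): detach the source and just blink
led.source = None
led.blink()
sleep(10)

# Stage three: back to the original behaviour
led.source = button.values
pause()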

These additions are useful for connecting other devices as well. For example, a PWMLED (LED with variable brightness) has a value between 0 and 1, and so does a potentiometer connected via an ADC (analogue-digital converter) such as the MCP3008. The new GPIO Zero update allows you to say led.source = pot.values, and then twist the potentiometer to control the brightness of the LED.
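
In code, that pairing looks just like the button example, only with analogue values flowing through it. A minimal sketch, assuming the potentiometer is wired to channel 0 of the MCP3008:

from gpiozero import PWMLED, MCP3008
from signal import pause

led = PWMLED(17)
pot = MCP3008()  # channel 0 by default

# The LED's brightness (0 to 1) tracks the potentiometer's position
led.source = pot.values

pause()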

But what if you want to do something more complex, like connect two devices with different value sets or combine multiple inputs?

We provide a set of device source tools, which allow you to process values as they flow from one device to another. They also let you send in artificial values such as random data, and you can even write your own functions to generate values to pass to a device’s source. For example, to control a motor’s speed with a potentiometer, you could use this code:

from gpiozero import Motor, MCP3008
from signal import pause

motor = Motor(20, 21)
pot = MCP3008()

motor.source = pot.values

pause()

This works, but it will only drive the motor forwards. If you wanted the potentiometer to drive it forwards and backwards, you’d use the scaled tool to scale its values to a range of -1 to 1:

from gpiozero import Motor, MCP3008
from gpiozero.tools import scaled
from signal import pause

motor = Motor(20, 21)
pot = MCP3008()

motor.source = scaled(pot.values, -1, 1)

pause()

And to separately control a robot’s left and right motor speeds with two potentiometers, you could do this:

from gpiozero import Robot, MCP3008
from signal import pause

robot = Robot(left=(2, 3), right=(4, 5))
left = MCP3008(0)
right = MCP3008(1)

robot.source = zip(left.values, right.values)

pause()
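
And, as mentioned above, a source doesn’t have to come from another device at all: any iterable of values will do, including a generator function you write yourself. Here’s a sketch of a home-made generator feeding random brightness levels to a PWMLED (the noise function is just an illustration; the gpiozero.tools module provides ready-made tools along these lines too):

from gpiozero import PWMLED
from signal import pause
from random import random

led = PWMLED(17)

def noise():
    # An endless stream of artificial brightness values between 0 and 1
    while True:
        yield random()

led.source = noise()

pause()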

GPIO Zero and Blue Dot

Martin O’Hanlon created a Python library called Blue Dot which allows you to use your Android device to remotely control things on your Raspberry Pi. The API is very similar to GPIO Zero’s, and it even incorporates the value/values properties, which means you can hook it up to GPIO devices easily:

from bluedot import BlueDot
from gpiozero import LED
from signal import pause

bd = BlueDot()
led = LED(17)

led.source = bd.values

pause()

We even included a couple of Blue Dot examples in our recipes.

Make a series of binary logic gates using source/values

Read more in this source/values tutorial from The MagPi, and on the source/values documentation page.

Remote GPIO control

GPIO Zero supports multiple low-level GPIO libraries. We use RPi.GPIO by default, but you can choose to use RPIO or pigpio instead. The pigpio library supports remote connections, so you can run GPIO Zero on one Raspberry Pi to control the GPIO pins of another, or run code on a PC (running Windows, Mac, or Linux) to remotely control the pins of a Pi on the same network. You can even control two or more Pis at once!

If you’re using Raspbian on a Raspberry Pi (or a PC running our x86 Raspbian OS), you have everything you need to remotely control GPIO. If you’re on a PC running Windows, Mac, or Linux, you just need to install gpiozero and pigpio using pip. See our guide on configuring remote GPIO.
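
On the PC side, that typically amounts to something like the following (exact commands depend on your Python setup, and the guide covers enabling remote connections on the Pi itself):

$ pip3 install gpiozero pigpio   # on the PC
$ sudo pigpiod                   # start the pigpio daemon on the Pi being controlled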

I road-tested the new pin_factory syntax at the Raspberry Jam @ Pi Towers

There are a number of different ways to use remote pins:

  • Set the default pin factory and remote IP address with environment variables:
$ GPIOZERO_PIN_FACTORY=pigpio PIGPIO_ADDR=192.168.1.2 python3 blink.py
  • Set the default pin factory in your script:
import gpiozero
from gpiozero import LED
from gpiozero.pins.pigpio import PiGPIOFactory

gpiozero.Device.pin_factory = PiGPIOFactory(host='192.168.1.2')

led = LED(17)
  • The pin_factory keyword argument allows you to use multiple Pis in the same script:
from gpiozero import TrafficHat
from gpiozero.pins.pigpio import PiGPIOFactory

factory2 = PiGPIOFactory(host='192.168.1.2')
factory3 = PiGPIOFactory(host='192.168.1.3')

local_hat = TrafficHat()
remote_hat2 = TrafficHat(pin_factory=factory2)
remote_hat3 = TrafficHat(pin_factory=factory3)

This is a really powerful feature! For more, read this remote GPIO tutorial in The MagPi, and check out the remote GPIO recipes in our documentation.

GPIO Zero on your PC

GPIO Zero doesn’t have any dependencies, so you can install it on your PC using pip. In addition to the API’s remote GPIO control, you can use its ‘mock’ pin factory on your PC. We originally created the mock pin feature for the GPIO Zero test suite, but we found it’s also really useful for checking that GPIO Zero code works without running it on real hardware:

$ GPIOZERO_PIN_FACTORY=mock python3
>>> from gpiozero import LED
>>> led = LED(22)
>>> led.blink()
>>> led.value
True
>>> led.value
False

You can even tell pins to change state (e.g. to simulate a button being pressed) by accessing an object’s pin property:

>>> from gpiozero import LED, Button
>>> led = LED(22)
>>> button = Button(23)
>>> led.source = button.values
>>> led.value
False
>>> button.pin.drive_low()
>>> led.value
True

You can also use the pinout command line tool if you set your pin factory to ‘mock’. It gives you a Pi 3 diagram by default, but you can supply a revision code to see information about other Pi models. For example, to use the pinout tool for the original 256MB Model B, just type pinout -r 2.
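
Putting the environment variable and the revision flag together, that looks something like this on a PC:

$ GPIOZERO_PIN_FACTORY=mock pinout -r 2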

GPIO Zero documentation and resources

On the API’s website, we provide beginner recipes and advanced recipes, and we’ve added documentation on remote GPIO configuration (including PC/Mac/Linux and Pi Zero OTG) along with a section of remote GPIO recipes. There are also new sections on source/values, command-line tools, FAQs, Pi information, and library development.

You’ll find plenty of cool projects using GPIO Zero in our learning resources. For example, you could check out the one that introduces physical computing with Python and get stuck in! We even provide a GPIO Zero cheat sheet you can download and print.

There are great GPIO Zero tutorials and projects in The MagPi magazine every month. The MagPi also publishes Simple Electronics with GPIO Zero, a book collecting a series of tutorials that will build up your physical computing knowledge. And the best thing is, you can download it, and all magazine issues, for free!

Check out the API documentation and read more about what’s new in GPIO Zero on my blog. We have lots planned for the next release. Watch this space.

Get building!

The world of physical computing is at your fingertips! Are you feeling inspired?

If you’ve never tried your hand at physical computing, our Build a robot buggy learning resource is the perfect place to start! It’s a step-by-step guide to building a simple robot controlled using GPIO Zero.

If you have a gee-whizz idea for an electronics project, do share it with us below. And if you’re currently working on a cool build and would like to show us how it’s going, pop a link to it in the comments.

The post Updates to GPIO Zero, the physical computing API appeared first on Raspberry Pi.

Awesome Raspberry Pi cases to 3D print at home

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/3d-printed-raspberry-pi-cases/

Unless you’re planning to fit your Raspberry Pi inside a build, you may find yourself in need of a case to protect it from dust, damage and/or the occasional pet attack. Here are some of our favourite 3D-printed cases, for which files are available online so you can recreate them at home.

TARDIS

TARDIS Raspberry PI 3 case – 3D Printing Time lapse

TARDIS Raspberry Pi 3 case by Jason3030 on Thingiverse: https://www.thingiverse.com/thing:2430122/ (printed in blue PLA in roughly 3 hours 20 minutes).

Since I am an avid Whovian, it’s not surprising that this case made its way onto the list. Its outside is aesthetically pleasing to the aspiring Time Lord, and it snugly fits your treasured Pi.



Pop this case on your desk and chuckle with glee every time someone asks what’s inside it:

Person: What’s that?
You: My Raspberry Pi.
Person: What’s a Raspberry Pi?
You: It’s a computer!
Person: There’s a whole computer in that tiny case?
You: Yes…it’s BIGGER ON THE INSIDE!

I’ll get my coat.

Pi crust

Yes, we all wish we’d thought of it first. What better case for a Raspberry Pi than a pie crust?

3D-printed Raspberry Pi cases

While the case is designed to fit the Raspberry Pi Model B, a few tweaks will let you adapt it to accommodate newer models.



Just make sure that if you do, you credit Marco Valenzuela, its original baker.

Consoles

Since many people use the Raspberry Pi to run RetroPie, there is a growing trend of 3D-printed console-style Pi cases.

3D-printed Raspberry Pi cases

So why not pop your Raspberry Pi into a case made to look like your favourite vintage console, such as the Nintendo NES or N64?



You could also use an adapter to fit a Raspberry Pi Zero within an actual Atari cartridge, or go modern and print a PlayStation 4 case!

Functional

Maybe you’re looking to use your Raspberry Pi as a component of a larger project, such as a home automation system, learning suite, or makerspace. In that case, you may need to mount it on a wall, under a desk, or behind a monitor.

3D-printed Raspberry Pi cases

Coo! Coo!

The Pidgeon, shown above, allows you to turn your Zero W into a surveillance camera, while the piPad lets you keep a breadboard attached for easy access to your Pi’s GPIO pins.



Functional cases with added brackets are great for incorporating your Pi on the sly. The VESA mount case will allow you to attach your Pi to any VESA-compatible monitor, and the Fallout 4 Terminal is just really cool.

Cute

You might want your case to just look cute, especially if it’s going to sit in full view on your desk or shelf.

3D-printed Raspberry Pi cases

The tired cube above is the only one of our featured 3D prints for which you have to buy the files ($1.30), but its adorable face begged to be shared anyway.



If you’d rather save your money for another day, you may want to check out this adorable monster from Adafruit. Be aware that this case will also need some altering to fit newer versions of the Pi.

Our cases

Finally, there are great options for you if you don’t have access to a 3D printer, or if you would like to help the Raspberry Pi Foundation’s mission. You can buy one of the official Raspberry Pi cases for the Raspberry Pi 3 and Raspberry Pi Zero (and Zero W)!

3D-printed Raspberry Pi cases



As with all official Raspberry Pi accessories (and with the Pi itself), your money goes toward helping the Foundation to put the power of digital making into the hands of people all over the world.

3D-printed Raspberry Pi cases

You could also print a replica of the official Astro Pi cases, in which two Pis are currently orbiting the Earth aboard the International Space Station.

Design your own Raspberry Pi case!

If you’ve built a case for your Raspberry Pi, be it with a 3D printer, laser-cutter, or your bare hands, make sure to share it with us in the comments below, or via our social media channels.

And if you’d like to give 3D printing a go, there are plenty of free online learning resources, and sites that offer tutorials and software to get you started, such as TinkerCAD, Instructables, and Adafruit.

The post Awesome Raspberry Pi cases to 3D print at home appeared first on Raspberry Pi.

MagPi 60: the ultimate troubleshooting guide

Post Syndicated from Rob Zwetsloot original https://www.raspberrypi.org/blog/magpi-60/

Hey folks, Rob from The MagPi here! It’s the last Thursday of the month, and that can only mean one thing: a brand-new issue of The MagPi is out! In The MagPi 60, we’re bringing you the top troubleshooting tips for your Raspberry Pi, sourced directly from our amazing community.

The MagPi 60 cover with DVD slip case shown

The MagPi #60 comes with a huge troubleshooting guide

The MagPi 60

Our feature-length guide covers snags you might encounter while using a Raspberry Pi, and it is written for newcomers and veterans alike! Did you hit a roadblock while booting up your Pi? Are you having trouble connecting it to a network? Don’t worry – in this issue you’ll find troubleshooting advice to solve your problem. And, as always, if you’re still stuck, you can head over to the Raspberry Pi forums for help.

More than troubleshooting

That’s not all though – Issue 60 also includes a disc with Raspbian-x86! This version of Raspbian for PCs contains all the recent updates and additions, such as offline Scratch 2.0 and the new Thonny IDE. And – *drumroll* – the disc version can be installed to your PC or Mac. The last time we had a Raspbian disc on the cover, many of you requested an installable version, so here you are! There is an installation guide inside the mag, so you’ll be all set to get going.

On top of that, you’ll find our usual array of amazing tutorials, projects, and reviews. There’s a giant guitar, Siri voice control, Pi Zeros turned into wireless-connected USB drives, and even a review of a new robot kit. You won’t want to miss it!

A spread from The MagPi 60 showing a giant Raspberry Pi-powered guitar

I wasn’t kidding about the giant guitar

How to get a copy

Grab your copy today in the UK from WHSmith, Sainsbury’s, Asda, and Tesco. Copies will be arriving very soon in US stores, including Barnes & Noble and Micro Center. You can also get the new issue online from our store, or digitally via our Android or iOS app. And don’t forget, there’s always the free PDF as well.

Subscribe for free goodies

Some of you have asked me about the goodies that we give out to subscribers. This is how it works: if you take out a twelve-month print subscription of The MagPi, you’ll get a Pi Zero W, Pi Zero case, and adapter cables absolutely free! This offer does not currently have an end date.

Alright, I think I’ve covered everything! So that’s it. I’ll see you next month.

Jean-Luc Picard sitting at a desk playing with a pen and sighing

The post MagPi 60: the ultimate troubleshooting guide appeared first on Raspberry Pi.