Tag Archives: 3D

FRED-209 Nerf gun tank

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/nerf-gun-tank-fred-209/

David Pride, known to many of you as an active member of our maker community, has done it again! His FRED-209 build combines a Nerf gun, 3D printing, a Raspberry Pi Zero, and robotics to make one neat remotely controlled Nerf tank.

FRED-209 – 3D printed Raspberry Pi Nerf Tank

Uploaded by David Pride on 2017-09-17.

A Nerf gun for FRED-209

David says he worked on FRED-209 over the summer in order to have some fun with Nerf guns, which weren’t around when he was a kid. He purchased an Elite Stryfe model at a car boot sale, and took it apart to see what made it tick. Then he set about figuring out how to power it with motors and a servo.

Nerf Elite Stryfe components for the FRED-209 Nerf tank of David Pride

To control the motors, David used a ZeroBorg add-on board for the Pi Zero, and he set up a PlayStation 3 controller to pilot his tank. These components were also part of a robot that David entered into the Pi Wars competition, so he had already written code for them.
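
David hasn’t published his code yet (he plans to upload it soon, see below), but as a rough sketch of how a ZeroBorg board and a PS3 controller typically fit together, assuming PiBorg’s ZeroBorg Python library and pygame’s joystick module, tank-style driving might look something like this. The axis numbers and scaling are illustrative placeholders, not taken from David’s build:

import os
import time
import pygame
import ZeroBorg  # PiBorg's ZeroBorg library, assumed to be alongside this script

# Set up the ZeroBorg motor controller
ZB = ZeroBorg.ZeroBorg()
ZB.Init()
ZB.ResetEpo()

# Read the PS3 controller through pygame without needing a display
os.environ.setdefault("SDL_VIDEODRIVER", "dummy")
pygame.init()
pygame.joystick.init()
pad = pygame.joystick.Joystick(0)
pad.init()

try:
    while True:
        pygame.event.pump()
        # Left stick drives the left track, right stick the right track
        # (axis numbers vary between controllers; these are placeholders)
        left = -pad.get_axis(1)
        right = -pad.get_axis(3)
        ZB.SetMotor1(left)
        ZB.SetMotor2(right)
        time.sleep(0.05)
finally:
    ZB.MotorsOff()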

3D printing for FRED-209

During prototyping for his Nerf tank, which David named after ED-209 from RoboCop, he used lots of eBay loot and several 3D-printed parts. He used the free OpenSCAD software package to design the parts he wanted to print. If you’re a novice at 3D printing, you might find the printing advice he shares in the write-up on his blog very useful.

3D-printed lid of FRED-209 nerf gun tank by David Pride

David found the 3D printing of the 24cm-long lid of FRED-209 tricky

On eBay, David found some cool-looking chunky wheels, but these turned out to be too heavy for the motors. In the end, he decided to use a Rover 5 chassis, which changed the look of FRED-209 from ‘monster truck’ to ‘tank’.

FRED-209 Nerf tank by David Pride

Next step: teach it to use stairs

The final result looks awesome, and David’s video demonstrates that it shoots very accurately as well. A make like this might be a great defensive project for our new apocalypse-themed Pioneers challenge!

Taking FRED-209 further

David will be uploading code and STL files for FRED-209 soon, so keep an eye on his blog or Twitter for updates. He’s also bringing the Nerf tank to the Cotswold Raspberry Jam this weekend. If you’re attending the event, make sure you catch him and try FRED-209 out yourself.

Never one to rest on his laurels, David is already working on taking his build to the next level. He wants to include a web interface controller and a camera, and is working on implementing OpenCV to give the Nerf tank the ability to autonomously detect targets.

Pi Wars 2018

I have a feeling we might get to see an advanced version of David’s project at next year’s Pi Wars!

The 2018 Pi Wars have just been announced. They will take place on 21-22 April at the Cambridge Computer Laboratory, and you have until 3 October to apply to enter the competition. What are you waiting for? Get making! And as always, do share your robot builds with us via social media.

The post FRED-209 Nerf gun tank appeared first on Raspberry Pi.

Security updates for Wednesday

Post Syndicated from ris original https://lwn.net/Articles/734318/rss

Security updates have been issued by CentOS (emacs), Debian (apache2, gdk-pixbuf, and pyjwt), Fedora (autotrace, converseen, dmtx-utils, drawtiming, emacs, gtatool, imageinfo, ImageMagick, inkscape, jasper, k3d, kxstitch, libwpd, mingw-libzip, perl-Image-SubImageFind, pfstools, php-pecl-imagick, psiconv, q, rawtherapee, ripright, rss-glx, rubygem-rmagick, synfig, synfigstudio, techne, vdr-scraper2vdr, vips, and WindowMaker), Oracle (emacs and kernel), Red Hat (emacs and kernel), Scientific Linux (emacs), SUSE (emacs), and Ubuntu (apache2).

Robinson: The state of open source accelerated graphics on ARM devices

Post Syndicated from ris original https://lwn.net/Articles/734043/rss

Peter Robinson looks at the state of open source accelerated graphics on ARM devices. “Despite the two bad examples above there’s actually been a lot of good change in the last five years. We now have a number of options for fully accelerated 2D/3D graphics on ARM SoCs and I run GNOME Shell on Wayland, yes the full open source shiny, on a number of different devices regularly.”

New – Per-Second Billing for EC2 Instances and EBS Volumes

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-per-second-billing-for-ec2-instances-and-ebs-volumes/

Back in the old days, you needed to buy or lease a server if you needed access to compute power. When we launched EC2 back in 2006, the ability to use an instance for an hour, and to pay only for that hour, was big news. The pay-as-you-go model inspired our customers to think about new ways to develop, test, and run applications of all types.

Today, services like AWS Lambda prove that we can do a lot of useful work in a short time. Many of our customers are dreaming up applications for EC2 that can make good use of a large number of instances for shorter amounts of time, sometimes just a few minutes.

Per-Second Billing for EC2 and EBS
Effective October 2nd, usage of Linux instances that are launched in On-Demand, Reserved, and Spot form will be billed in one-second increments. Similarly, provisioned storage for EBS volumes will be billed in one-second increments.

Per-second billing also applies to Amazon EMR and AWS Batch:

Amazon EMR – Our customers add capacity to their EMR clusters in order to get their results more quickly. With per-second billing for the EC2 instances in the clusters, adding nodes is more cost-effective than ever.

AWS Batch – Many of the batch jobs that our customers run complete in less than an hour. AWS Batch already launches and terminates Spot Instances; with per-second billing batch processing will become even more economical.

Some of our more sophisticated customers have built systems to get the most value from EC2 by strategically choosing the most advantageous target instances when managing their gaming, ad tech, or 3D rendering fleets. Per-second billing obviates the need for this extra layer of instance management, and brings the cost savings to all customers and all workloads.

While this will result in a price reduction for many workloads (and you know we love price reductions), I don’t think that’s the most important aspect of this change. I believe that this change will inspire you to innovate and to think about your compute-bound problems in new ways. How can you use it to improve your support for continuous integration? Can it change the way that you provision transient environments for your dev and test workloads? What about your analytics, batch processing, and 3D rendering?

One of the many advantages of cloud computing is the elastic nature of provisioning or deprovisioning resources as you need them. By billing usage down to the second, we will enable customers to level up their elasticity, save money, and be positioned to take advantage of continuing advances in computing.

Things to Know
This change applies in all AWS Regions and will be effective October 2nd for all Linux instances that are newly launched or already running. Per-second billing is not currently applicable to instances running Microsoft Windows or Linux distributions that have a separate hourly charge. There is a one-minute minimum charge per instance.

List prices and Spot Market prices are still listed on a per-hour basis, but bills are calculated down to the second, as is Reserved Instance usage (you can launch, use, and terminate multiple instances within an hour and get the Reserved Instance Benefit for all of the instances). Also, bills will show usage times in decimal form.
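
As a quick back-of-the-envelope illustration (mine, not from the announcement), here is how the one-minute minimum and per-second metering combine for a Linux instance with an hourly list price:

def billed_seconds(run_seconds, minimum=60):
    # Per-second billing with a 60-second minimum charge per instance
    return max(run_seconds, minimum)

def cost(run_seconds, hourly_price):
    return billed_seconds(run_seconds) * hourly_price / 3600.0

# At a hypothetical $0.10/hour list price:
print(cost(25, 0.10))            # a 25-second run is billed as 60 seconds -> ~$0.0017
print(cost(10 * 60 + 30, 0.10))  # 10 min 30 s -> 630 seconds -> $0.0175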

The Dedicated Per Region Fee, EBS Snapshots, and products in AWS Marketplace are still billed on an hourly basis.

Jeff;

 

Industry 4.0

Post Syndicated from Йовко Ламбрев original https://yovko.net/industry-4-0/

The summer was hot, and not only because of the temperatures the thermometers recorded. A group of like-minded people and I decided to join forces and start a project that aims to bring together people with ideas for developing the traditional industries and the financial sector, or more precisely what has lately been called digital transformation. We named ourselves Trakia Tech, because it all began in Plovdiv, though we hope not to set geographical boundaries for it.

Soon, if all goes to plan, we will also have a place of our own where we can host events, run training sessions, or house teams whose ideas we like and want to help happen faster, with mentorship, shared experience, or an introduction to a suitable investor.

We started with two events ([1], [2]) to announce our presence while we were still waiting for the court to register us, and we are grateful for the warm response they received. As of last week we are officially registered by the Plovdiv District Court, just in time to be full co-organisers of the two-day Industry 4.0 conference, together with Kapital and the Plovdiv municipality.

This is in fact the traditional Doing Business conference, which has been held in Plovdiv since last year and which this year has a new name: Industry 4.0.

The fourth industrial revolution is a fascinating moment in humanity's technological history. The previous three (the first associated with steam engines and mechanisation; the second with electricity, assembly lines, and mass production; the third with automation and electronics) added innovations mainly to the material world, while the fourth, which is happening right now, combines advances in information technology, new materials including nanotechnology, and even the latest discoveries in genetics and biology. The fourth industrial revolution is a perfect storm of innovations that even the boldest futurists examine carefully and cautiously. Best of all, many of these new technologies are readily accessible or have a low barrier to adoption: hey, you can order a 3D printer from a little shop in Kapana!

And yes, the hallmark of Industry 4.0 is digital transformation: software, robotics, IoT, AI, autonomy, and virtualisation.

We will be talking about all of this (and many related topics) for two full days, on 28 and 29 September 2017, in Plovdiv, in Hall 7 of the fairgrounds. Besides the chance to meet the interesting guests who traditionally attend the event, you will also be able to hear the differing opinions and viewpoints of interesting panellists.

We at Trakia Tech will be moderating a panel on “Digital transformation and readiness for a digital culture in traditional industries” at the start of the second day of the conference.

Registration is required for the event, and hurry, because after 15 September the price goes up. If you attended either of Trakia Tech's two events over the summer, you have already received a discount code to use when registering 🙂

Come and join us!

Delivering Graphics Apps with Amazon AppStream 2.0

Post Syndicated from Deepak Suryanarayanan original https://aws.amazon.com/blogs/compute/delivering-graphics-apps-with-amazon-appstream-2-0/

Sahil Bahri, Sr. Product Manager, Amazon AppStream 2.0

Do you need to provide a workstation class experience for users who run graphics apps? With Amazon AppStream 2.0, you can stream graphics apps from AWS to a web browser running on any supported device. AppStream 2.0 offers a choice of GPU instance types. The range includes the newly launched Graphics Design instance, which allows you to offer a fast, fluid user experience at a fraction of the cost of using a graphics workstation, without upfront investments or long-term commitments.

In this post, I discuss the Graphics Design instance type in detail, and how you can use it to deliver a graphics application such as Siemens NX―a popular CAD/CAM application that we have been testing on AppStream 2.0 with engineers from Siemens PLM.

Graphics Instance Types on AppStream 2.0

First, a quick recap on the GPU instance types available with AppStream 2.0. In July, 2017, we launched graphics support for AppStream 2.0 with two new instance types that Jeff Barr discussed on the AWS Blog:

  • Graphics Desktop
  • Graphics Pro

Many customers in industries such as engineering, media, entertainment, and oil and gas are using these instances to deliver high-performance graphics applications to their users. These instance types are based on dedicated NVIDIA GPUs and can run the most demanding graphics applications, including those that rely on CUDA graphics API libraries.

Last week, we added a new lower-cost instance type: Graphics Design. This instance type is a great fit for engineers, 3D modelers, and designers who use graphics applications that rely on the hardware acceleration of DirectX, OpenGL, or OpenCL APIs, such as Siemens NX, Autodesk AutoCAD, or Adobe Photoshop. The Graphics Design instance is based on AMD’s FirePro S7150x2 Server GPUs and equipped with AMD Multiuser GPU technology. The instance type uses virtualized GPUs to achieve lower costs, and is available in four instance sizes to scale and match the requirements of your applications.

Instance                         vCPUs   Instance RAM (GiB)   GPU memory (GiB)
stream.graphics-design.large     2       7.5                  1
stream.graphics-design.xlarge    4       15.3                 2
stream.graphics-design.2xlarge   8       30.5                 4
stream.graphics-design.4xlarge   16      61                   8

The following table compares all three graphics instance types on AppStream 2.0, along with example applications you could use with each.

                                   Graphics Design                                  Graphics Desktop        Graphics Pro
Number of instance sizes           4                                                1                       3
GPU memory range                   1–8 GiB                                          4 GiB                   8–32 GiB
vCPU range                         2–16                                             8                       16–32
Memory range                       7.5–61 GiB                                       15 GiB                  122–488 GiB
GPU                                AMD FirePro S7150x2                              NVIDIA GRID K520        NVIDIA Tesla M60
Price range (N. Virginia Region)   $0.25–$2.00/hour                                 $0.50/hour              $2.05–$8.20/hour
Example applications               Adobe Premiere Pro, Autodesk Revit, Siemens NX   AVEVA E3D, SOLIDWORKS   Autodesk Maya, Landmark DecisionSpace, Schlumberger Petrel

Example graphics instance set up with Siemens NX

In this section, I walk through setting up Siemens NX with Graphics Design instances on AppStream 2.0. After setup is complete, users are able to access NX from within their browser and also access their design files from a file share. You can also use these steps to set up and test your own graphics applications on AppStream 2.0. Here’s the workflow:

  1. Create a file share to load and save design files.
  2. Create an AppStream 2.0 image with Siemens NX installed.
  3. Create an AppStream 2.0 fleet and stack.
  4. Invite users to access Siemens NX through a browser.
  5. Validate the setup.

To learn more about AppStream 2.0 concepts and set up, see the previous post Scaling Your Desktop Application Streams with Amazon AppStream 2.0. For a deeper review of all the setup and maintenance steps, see Amazon AppStream 2.0 Developer Guide.

Step 1: Create a file share to load and save design files

To launch and configure the file server

  1. Open the EC2 console and choose Launch Instance.
  2. Scroll to the Microsoft Windows Server 2016 Base Image and choose Select.
  3. Choose an instance type and size for your file server (I chose the general purpose m4.large instance). Choose Next: Configure Instance Details.
  4. Select a VPC and subnet. You launch AppStream 2.0 resources in the same VPC. Choose Next: Add Storage.
  5. If necessary, adjust the size of your EBS volume. Choose Review and Launch, Launch.
  6. On the Instances page, give your file server a name, such as My File Server.
  7. Ensure that the security group associated with the file server instance allows for incoming traffic from the security group that you select for your AppStream 2.0 fleets or image builders. You can use the default security group and select the same group while creating the image builder and fleet in later steps.

Log in to the file server using a remote access client such as Microsoft Remote Desktop. For more information about connecting to an EC2 Windows instance, see Connect to Your Windows Instance.

To enable file sharing

  1. Create a new folder (such as C:\My Graphics Files) and upload the shared files to make available to your users.
  2. From the Windows control panel, enable network discovery.
  3. Choose Server Manager, File and Storage Services, Volumes.
  4. Scroll to Shares and choose Start the Add Roles and Features Wizard. Go through the wizard to install the File Server and Share role.
  5. From the left navigation menu, choose Shares.
  6. Choose Start the New Share Wizard to set up your folder as a file share.
  7. Open the context (right-click) menu on the share and choose Properties, Permissions, Customize Permissions.
  8. Choose Permissions, Add. Add Read and Execute permissions for everyone on the network.

Step 2:  Create an AppStream 2.0 image with Siemens NX installed

To connect to the image builder and install applications

  1. Open the AppStream 2.0 management console and choose Images, Image Builder, Launch Image Builder.
  2. Create a graphics design image builder in the same VPC as your file server.
  3. From the Image builder tab, select your image builder and choose Connect. This opens a new browser tab and displays a desktop to log in to.
  4. Log in to your image builder as ImageBuilderAdmin.
  5. Launch the Image Assistant.
  6. Download and install Siemens NX and other applications on the image builder. I added Blender and Firefox, but you could replace these with your own applications.
  7. To verify the user experience, you can test the application performance on the instance.

Before you finish creating the image, you must mount the file share by enabling a few Microsoft Windows services.

To mount the file share

  1. Open services.msc and check the following services:
  • DNS Client
  • Function Discovery Resource Publication
  • SSDP Discovery
  • UPnP Device Host
  2. If any of the preceding services have Startup Type set to Manual, open the context (right-click) menu on the service and choose Start. Otherwise, open the context (right-click) menu on the service and choose Properties. For Startup Type, choose Manual, Apply. To start the service, choose Start.
  3. From the Windows control panel, enable network discovery.
  4. Create a batch script that mounts a file share from the storage server set up earlier. The file share is mounted automatically when a user connects to the AppStream 2.0 environment.

Logon Script Location: C:\Users\Public\logon.bat

Script Contents:

:loop
REM Map the H: drive to the file share on the storage server
net use H: \\path\to\network\share
REM Wait roughly 30 seconds before checking whether the mapping succeeded
PING localhost -n 30 >NUL
REM Retry until the share is mounted
IF NOT EXIST H:\ GOTO loop

  5. Open gpedit.msc and choose User Configuration, Windows Settings, Scripts. Set logon.bat as the user logon script.
  6. Next, create a batch script that makes the mounted drive visible to the user.

Logon Script Location: C:\Users\Public\startup.bat

Script Contents:
REG DELETE "HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v "NoDrives" /f

  7. Open Task Scheduler and choose Create Task.
  8. Choose General, provide a task name, and then choose Change User or Group.
  9. For Enter the object name to select, enter SYSTEM and choose Check Names, OK.
  10. Choose Triggers, New. For Begin the task, choose At startup. Under Advanced Settings, change Delay task for to 5 minutes. Choose OK.
  11. Choose Actions, New. Under Settings, for Program/script, enter C:\Users\Public\startup.bat. Choose OK.
  12. Choose Conditions. Under Power, clear the Start the task only if the computer is on AC power check box. Choose OK.
  13. To view your scheduled task, choose Task Scheduler Library. Close Task Scheduler when you are done.

Step 3:  Create an AppStream 2.0 fleet and stack

To create a fleet and stack

  1. In the AppStream 2.0 management console, choose Fleets, Create Fleet.
  2. Give the fleet a name, such as Graphics-Demo-Fleet, and configure it to use the newly created image and the same VPC as your file server.
  3. Choose Stacks, Create Stack. Give the stack a name, such as Graphics-Demo-Stack.
  4. After the stack is created, select it and choose Actions, Associate Fleet. Associate the stack with the fleet you created in step 1.
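
The console steps above are all you need; if you would rather script this part, a rough equivalent with the AWS SDK for Python (boto3) might look like the sketch below. The fleet, stack, and image names, the subnet, and the security group are placeholders, not required values.

import boto3

appstream = boto3.client("appstream")

# Create a fleet from the image built in step 2, on a Graphics Design instance size
appstream.create_fleet(
    Name="Graphics-Demo-Fleet",
    ImageName="My-Siemens-NX-Image",               # placeholder image name
    InstanceType="stream.graphics-design.xlarge",
    ComputeCapacity={"DesiredInstances": 1},
    VpcConfig={
        "SubnetIds": ["subnet-xxxxxxxx"],          # same VPC/subnet as the file server
        "SecurityGroupIds": ["sg-xxxxxxxx"],       # placeholder security group
    },
)
appstream.start_fleet(Name="Graphics-Demo-Fleet")

# Create a stack and associate it with the fleet
appstream.create_stack(Name="Graphics-Demo-Stack")
appstream.associate_fleet(FleetName="Graphics-Demo-Fleet", StackName="Graphics-Demo-Stack")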

Step 4:  Invite users to access Siemens NX through a browser

To invite users

  1. Choose User Pools, Create User to create users.
  2. Enter a name and email address for each user.
  3. Select the users just created, and choose Actions, Assign Stack to provide access to the stack created in step 2. You can also provide access using SAML 2.0 and connect to your Active Directory if necessary. For more information, see the Enabling Identity Federation with AD FS 3.0 and Amazon AppStream 2.0 post.

Your user receives an email invitation to set up an account and use a web portal to access the applications that you have included in your stack.

Step 5:  Validate the setup

Time for a test drive with Siemens NX on AppStream 2.0!

  1. Open the link for the AppStream 2.0 web portal shared through the email invitation. The web portal opens in your default browser. You must sign in with the temporary password and set a new password. After that, you are taken to your app catalog.
  2. Launch Siemens NX and interact with it using the demo files available in the shared storage folder – My Graphics Files. 

After I launched NX, I captured the screenshot below. The Siemens PLM team also recorded a video with NX running on AppStream 2.0.

Summary

In this post, I discussed the GPU instances available for delivering rich graphics applications to users in a web browser. While I demonstrated a simple setup, you can scale this out to launch a production environment with users signing in using Active Directory credentials,  accessing persistent storage with Amazon S3, and using other commonly requested features reviewed in the Amazon AppStream 2.0 Launch Recap – Domain Join, Simple Network Setup, and Lots More post.

To learn more about AppStream 2.0 and capabilities added this year, see Amazon AppStream 2.0 Resources.

A printing GIF camera? Is that even a thing?

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/printing-gif-camera/

Abhishek Singh’s printing GIF camera uses two Raspberry Pis, the Model 3 and the Zero W, to take animated images and display them on an ejectable secondary screen.

Instagif – A DIY Camera that prints GIFs instantly

I built a camera that snaps a GIF and ejects a little cartridge so you can hold a moving photo in your hand! I’m calling it the “Instagif NextStep”.

The humble GIF

Created in 1987, Graphics Interchange Format files, better known as GIFs, have somewhat taken over the internet. And whether you pronounce it G-IF or J-IF, you’ve probably used at least one to express an emotion, animate images on your screen, or create small, movie-like memories of events.

In 2004, all patents on the humble GIF expired, which added to the increased usage of the file format. And by the early 2010s, sites such as giphy.com and phone-based GIF keyboards were introduced into our day-to-day lives.

A GIF from a scene in The Great Gatsby - Raspberry Pi GIF Camera

Welcome to the age of the GIF

Polaroid cameras

Polaroid cameras have a somewhat older history. While the first documented instant camera came into existence in 1923, commercial iterations made their way to market in the 1940s, with Polaroid’s model 95 Land Camera.

In recent years, the instant camera has come back into fashion, with camera stores and high street fashion retailers alike stocking their shelves with pastel-coloured, affordable models. But nothing beats the iconic look of the Polaroid Spirit series, and the rainbow colour stripe that separates it from its competitors.

Polaroid Spirit Camera - Raspberry Pi GIF Camera

Shake it like a Polaroid picture…

And if you’re one of our younger readers and find yourself wondering where else you’ve seen those stripes, you’re probably more familiar with previous versions of the Instagram logo, because, well…

Instagram Logo - Raspberry Pi GIF Camera

I’m sorry for the comment on the previous image. It was just too easy.

Abhishek Singh’s printing GIF camera

Abhishek labels his creation the Instagif NextStep, and cites his inspiration for the project as simply wanting to give it a go, and to see if he could hold a ‘moving photo’.

“What I love about these kinds of projects is that they involve a bunch of different skill sets and disciplines”, he explains at the start of his lengthy, highly GIFed and wonderfully detailed imgur tutorial. “Hardware, software, 3D modeling, 3D printing, circuit design, mechanical/electrical engineering, design, fabrication etc. that need to be integrated for it to work seamlessly. Ironically, this is also what I hate about these kinds of projects”.

Care to see how the whole thing comes together? Well, in the true spirit of the project, Abhishek created this handy step-by-step GIF.

Piecing it together

I thought I’ll start off with the entire assembly and then break down the different elements. As you can see, everything is assembled from the base up in layers helping in easy assembly and quick disassembly for troubleshooting

The build comes in two parts – the main camera housing a Raspberry Pi 3 and Camera Module V2, and the ejectable cartridge fitted with Raspberry Pi Zero W and Adafruit PiTFT screen.

When the capture button is pressed, the camera takes 3 seconds’ worth of images and converts them into .gif format via a Python script. Once compressed and complete, the Pi 3 sends the file to the Zero W via a network connection. When it is satisfied that the Zero W has the image, the Pi 3 automatically ejects the ‘printed GIF’ cartridge, and the image is displayed.
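
Abhishek’s actual code is in the imgur post linked below; purely to illustrate the capture side of that flow, a hedged sketch using the picamera library and ImageMagick’s convert tool might look roughly like this (file names, frame count, and the Zero W’s address are my own placeholders):

import subprocess
from picamera import PiCamera

camera = PiCamera(resolution=(640, 480))

# Grab roughly 3 seconds of frames as numbered JPEGs
frames = []
for i, filename in enumerate(camera.capture_continuous("frame{counter:03d}.jpg")):
    frames.append(filename)
    if i >= 29:  # ~3 seconds of frames; the exact rate depends on the camera settings
        break

# Stitch the frames into an animated GIF with ImageMagick
subprocess.check_call(["convert", "-delay", "10", "-loop", "0"] + frames + ["capture.gif"])

# Hand the finished GIF to the Pi Zero W in the cartridge, e.g. over SSH
subprocess.check_call(["scp", "capture.gif", "pi@zero.local:/home/pi/capture.gif"])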

A demonstration of how the GIF is displayed on the Raspberry Pi GIF Camera

For a full breakdown of code, 3D-printable files, and images, check out the full imgur post. You can see more of Abhishek’s work at his website here.

Create GIFs with a Raspberry Pi

Want to create GIFs with your Raspberry Pi? Of course you do. Who wouldn’t? So check out our free time-lapse animations resource. As with all our learning resources, the project is free for you to use at home and in your clubs or classrooms. And once you’ve mastered the art of Pi-based GIF creation, why not incorporate it into another project? Say, a motion-detecting security camera or an on-the-go tweeting GIF camera – the possibilities are endless.

And make sure you check out Abhishek’s other Raspberry Pi GIF project, Peeqo, which we covered previously on the blog. So cute. SO CUTE.

The post A printing GIF camera? Is that even a thing? appeared first on Raspberry Pi.

MagPi 61: ten amazing Raspberry Pi Zero W projects

Post Syndicated from Rob Zwetsloot original https://www.raspberrypi.org/blog/magpi-61-10-pi-zero-projects/

Hey folks! Rob here, with another roundup of the latest issue of The MagPi magazine. MagPi 61 focuses on some incredible ‘must make’ Raspberry Pi Zero W projects, 3D printers and – oh, did someone mention the Google AIY Voice Projects Kit?

Cover of The MagPi magazine with a picture of the Pi Zero W - MagPi 61

Make amazing Raspberry Pi Zero W projects with our latest issue

Inside MagPi 61

In issue 61, we’re focusing on the small but mighty wonder that is the Raspberry Pi Zero W, and on some of the very best projects we’ve found for you to build with it. From arcade machines to robots, dash cams, and more – it’s time to make the most of our $10 computer.

And if that’s not enough, we’ve also delved deeper into the maker relationship between Raspberry Pi and Arduino, with some great creations such as piano stairs, a jukebox, and a smart home system. There’s also a selection of excellent tutorials on building 3D printers, controlling Hue lights, and making cool musical instruments.

A spread of The MagPi magazine showing a DJ deck tutorial - MagPi 61

Spin it, DJ!

Get the MagPi 61

The new issue is out right now, and you can pick up a copy at WH Smith, Tesco, Sainsbury’s, and Asda. If you live in the US, check out your local Barnes & Noble or Micro Center over the next few days. You can also get the new issue online from our store, or digitally via our Android or iOS app. And don’t forget, there’s always the free PDF as well.

Subscribe for free goodies

Some of you have asked me about the goodies that we give out to subscribers. This is how it works: if you take out a twelve-month print subscription to The MagPi, you’ll get a Pi Zero W, Pi Zero case, and adapter cables, absolutely free! This offer does not currently have an end date.

Pre-order AIY Kits

We have some AIY Voice Kit news! Micro Center has opened pre-orders for the kits in America, and Pimoroni has set up a notification service for those closer to the UK.

We hope you all enjoy the issue. Oh, and if you’re at World Maker Faire, New York, come and see us at the Raspberry Pi stall! Otherwise – see you next month.

The post MagPi 61: ten amazing Raspberry Pi Zero W projects appeared first on Raspberry Pi.

Deadline 10 – Launch a Rendering Fleet in AWS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/deadline-10-launch-a-rendering-fleet-in-aws/

Graphical rendering is a compute-intensive task that is, as they say, embarrassingly parallel. Looked at another way, this means that there’s a more or less linear relationship between the number of processors that are working on the problem and the overall wall-clock time that it takes to complete the task. In a creative endeavor such as movie-making, getting the results faster spurs creativity, improves the feedback loop, gives you time to make more iterations and trials, and leads to a better result. Even if you have a render farm in-house, you may still want to turn to the cloud in order to gain access to more compute power at peak times. Once you do this, the next challenge is to manage the combination of in-house resources, cloud resources, and the digital assets in a unified fashion.

Deadline 10
Earlier this week we launched Deadline 10, a powerful render management system. Building on technology that we brought on board with the acquisition of Thinkbox Software, Deadline 10 is designed to extend existing on-premises rendering into the AWS Cloud, giving you elasticity and flexibility while remaining simple and easy to use. You can set up and manage large-scale distributed jobs that span multiple AWS regions and benefit from elastic, usage-based AWS licensing for popular applications like Deadline for Autodesk 3ds Max, Maya, Arnold, and dozens more, all available from the Thinkbox Marketplace. You can purchase software licenses from the marketplace, use your existing licenses, or use them together.

Deadline 10 obtains cloud-based compute resources by managing bids for EC2 Spot Instances, providing you with access to enough low-cost compute capacity to let your imagination run wild! It uses your existing AWS account, tags EC2 instances for tracking, and synchronizes your local assets to the cloud before rendering begins.

A Quick Tour
Let’s take a quick tour of Deadline 10 and see how it makes use of AWS. The AWS Portal is available from the View menu:

The first step is to log in to my AWS account:

Then I configure the connection server, license server, and the S3 bucket that will be used to store rendering assets:

Next, I set up my Spot fleet, establishing a maximum price per hour for each EC2 instance, setting target capacity, and choosing the desired rendering application:

I can also choose any desired combination of EC2 instance types:

When I am ready to render I click on Start Spot Fleet:

This will initiate the process of bidding for and managing Spot Instances. The running instances are visible from the Portal:

I can monitor the progress of my rendering pipeline:

I can stop my Spot fleet when I no longer need it:

Deadline 10 is now available for usage-based license customers; a new license is needed for traditional floating license users. Pricing for yearly Deadline licenses has been reduced to $48. If you are already using an earlier version of Deadline, feel free to contact us to learn more about licensing options.

Jeff;

HDClub, Russia’s Leading HD-Only Torrent Site, Permanently Shuts Down

Post Syndicated from Andy original https://torrentfreak.com/hdclub-russias-leading-hd-torrent-site-permanently-shuts-down-170830/

While millions of users frequent popular public torrent sites such as The Pirate Bay and RARBG every day, there’s a thriving scene that’s hidden from the wider public eye.

Every week, private torrent trackers cater to tens of millions of BitTorrent users who have taken the time and effort to gain access to these more secretive communities. Often labeled as elitist and running counter to the broad sharing ethos that made file-sharing the beast it is today, private sites pride themselves on quality, order and speed, something public sites typically struggle to match.

In addition to these notable qualities, many private sites choose to focus on a particular niche. There are sites dedicated to obscure electronic music, comedy, and even magic, but HDClub’s focus was given away by its name.

Dubbing itself “The HighDefinition BitTorrent Community”, HDClub specialized in HD productions including Blu-ray and 3D content, covering movies, TV shows, music videos, and animation.

Born in 2007, HDClub celebrated its ninth birthday on March 9 last year, with 2017 heralding a full decade online for the site. Catering mainly to the Russian and Ukrainian markets, the site’s releases often preserved an English audio option, ideal for those looking for high-quality releases from an unorthodox source at decent speeds.

Of course, HDClub releases often leaked out of the site, meaning that thousands are still available on regular public trackers, as a search on any Western torrent engine reveals.

A sample of HDClub releases listed on Torrentz2

Importantly, the site offered thousands of releases completely unavailable in Russia from licensed sources, meaning it filled a niche in which official outlets either wouldn’t or couldn’t compete. This earned the site a place in Russia’s Top 1000 sites list, despite it being a closed-membership platform.

The site’s attention to detail and focus earned it a considerable following. For the past few years the site capped membership at 190,000 people but in practice, attendance floated around the 170,000 mark. Seeders peaked at approximately 400,000 with leechers considerably less, making seeding as difficult as one might expect on a ratio-based tracker.

Now, however, the decade-long run of HDClub has come to an abrupt end. Early this week the tracker went dark, reportedly without advance notice. A Russian language announcement now present on its main page explains the reasons for the site’s demise.

“Recently, we received several dozens of complaints from rightsholders weekly, and our community is subjected to attacks and espionage,” the announcement reads.

While public torrent sites are always bombarded with DMCA-style notices, private sites tend to avoid large numbers of complaints. In this case, however, HDClub were clearly feeling the pressure. The site’s main page was open to the public while featuring popular releases, so this probably didn’t help with the load.

It’s not clear what is meant by “attacks and espionage” but it’s possibly a reference to DDoS assaults and third-parties attempting to monitor the site. Nevertheless, as HDClub points out, the climate for torrent, streaming, and similar sites has become increasingly hostile in the region recently.

“In parallel, there is a tightening of Internet legislation in Russia, Ukraine and EU countries,” the site says.

Interestingly, the site’s operators also suggest that interest from some quarters had waned, noting that “the time of enthusiasts irretrievably goes away.” It’s unclear whether that’s a reference to site users, the site’s operators, or indeed both. But in any event, any significant decline in any area can prove fatal, particularly when other pressures are at play.

“In the circumstances, we can no longer support the work of the club in the originally conceived format. The project is closed, but we ask you to refrain from long farewells. Thank you all and goodbye!” the message concludes.

Interestingly, the site ends with a little teaser, which may indicate some hope for the future.

“There are talks on preserving the heritage of the club,” it reads, without adding further details.

Possibly stay tuned…..

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

New – Amazon EC2 Elastic GPUs for Windows

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/new-ec2-elastic-gpus-for-windows/

Today we’re excited to announce the general availability of Amazon EC2 Elastic GPUs for Windows. An Elastic GPU is a GPU resource that you can attach to your Amazon Elastic Compute Cloud (EC2) instance to accelerate the graphics performance of your applications. Elastic GPUs come in medium (1GB), large (2GB), xlarge (4GB), and 2xlarge (8GB) sizes and are lower cost alternatives to using GPU instance types like G3 or G2 (for OpenGL 3.3 applications). You can use Elastic GPUs with many instance types allowing you the flexibility to choose the right compute, memory, and storage balance for your application. Today you can provision elastic GPUs in us-east-1 and us-east-2.

Elastic GPUs start at just $0.05 per hour for an eg1.medium. A nickel an hour. If we attach that Elastic GPU to a t2.medium ($0.065/hour) we pay a total of less than 12 cents per hour for an instance with a GPU. Previously, the cheapest graphical workstation (G2/3 class) cost 76 cents per hour. That’s over an 80% reduction in the price for running certain graphical workloads.

When should I use Elastic GPUs?

Elastic GPUs are best suited for applications that require a small or intermittent amount of additional GPU power for graphics acceleration and support OpenGL. Elastic GPUs support up to and including the OpenGL 3.3 API standards with expanded API support coming soon.

Elastic GPUs are not part of the hardware of your instance. Instead they’re attached through an elastic GPU network interface in your subnet which is created when you launch an instance with an Elastic GPU. The image below shows how Elastic GPUs are attached.

Since Elastic GPUs are network attached it’s important to provision an instance with adequate network bandwidth to support your application. It’s also important to make sure your instance security group allows traffic on port 2007.
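
For example (my own sketch, not from the original post), opening port 2007 to the VPC with boto3 could look like this; the security group ID and CIDR range are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Allow the Elastic GPU interface to reach the instance on TCP 2007 from within the VPC
ec2.authorize_security_group_ingress(
    GroupId="sg-xxxxxxxx",                        # placeholder security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 2007,
        "ToPort": 2007,
        "IpRanges": [{"CidrIp": "172.31.0.0/16",  # placeholder VPC CIDR block
                      "Description": "Elastic GPU traffic"}],
    }],
)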

Any application that can use the OpenGL APIs can take advantage of Elastic GPUs so Blender, Google Earth, SIEMENS SolidEdge, and more could all run with Elastic GPUs. Even Kerbal Space Program!

Ok, now that we know when to use Elastic GPUs and how they work, let’s launch an instance and use one.

Using Elastic GPUs

First, we’ll navigate to the EC2 console and click Launch Instance. Next we’ll select a Windows AMI like: “Microsoft Windows Server 2016 Base”. Then we’ll select an instance type. Then we’ll make sure we select the “Elastic GPU” section and allocate an eg1.medium (1GB) Elastic GPU.

We’ll also include some userdata in the advanced details section. We’ll write a quick PowerShell script to download and install our Elastic GPU software.


<powershell>
# Log the install steps for troubleshooting
Start-Transcript -Path "C:\egpu_install.log" -Append
# Download the Elastic GPU software package
(new-object net.webclient).DownloadFile('http://ec2-elasticgpus.s3-website-us-east-1.amazonaws.com/latest', 'C:\egpu.msi')
# Install it silently and log the MSI output
Start-Process "msiexec.exe" -Wait -ArgumentList "/i C:\egpu.msi /qn /L*v C:\egpu_msi_install.log"
# Add the Elastic GPU manager to the system PATH
[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\Program Files\Amazon\EC2ElasticGPUs\manager\", [EnvironmentVariableTarget]::Machine)
# Reboot so the changes take effect
Restart-Computer -Force
</powershell>

This software sends all OpenGL API calls to the attached Elastic GPU.

Next, we’ll double-check that the security group exposes TCP port 2007 to the VPC so the Elastic GPU can connect to the instance; the best way to do this is to create a separate security group that you can attach to the instance. Finally, we’ll click Launch and wait for the instance and Elastic GPU to provision.

You can see an animation of the launch procedure below.

Alternatively we could have launched on the AWS CLI with a quick call like this:

$ aws ec2 run-instances --elastic-gpu-specification Type=eg1.2xlarge \
--image-id ami-1a2b3c4d \
--subnet-id subnet-11223344 \
--instance-type r4.large \
--security-groups "default" "elasticgpu-sg"

then we could have followed the Elastic GPU software installation instructions here.

We can now see our Elastic GPU is humming along and attached by checking out the Elastic GPU status in the taskbar.

We welcome any feedback on the service and you can click on the Feedback link in the bottom left corner of the GPU Status Box to let us know about your experience with Elastic GPUs.

Elastic GPU Demonstration

Ok, so we have our instance provisioned and our Elastic GPU attached. My teammates here at AWS wanted me to talk about the amazingly wonderful 3D applications you can run, but when I learned about Elastic GPUs the first thing that came to mind was Kerbal Space Program (KSP), so I’m going to run a quick test with that. After all, if you can’t launch Jebediah Kerman into space then what was the point of all of that software? I’ve downloaded KSP and added the launch parameter of -force-opengl to make sure we’re using OpenGL to do our rendering. Below you can see my poor attempt at building a spaceship – I used to build better ones. It looks pretty smooth considering we’re going over a network with a lossy remote desktop protocol.

I’d show a picture of the rocket launch but I didn’t even make it off the ground before I experienced a rapid unscheduled disassembly of the rocket. Back to the drawing board for me.

In the mean time I can check my Amazon CloudWatch metrics and see how much GPU memory I used during my brief game.

Partners, Pricing, and Documentation

To continue to build out great experiences for our customers, our 3D software partners like ANSYS and Siemens are looking to take advantage of the OpenGL APIs on Elastic GPUs, and are currently certifying Elastic GPUs for their software. You can learn more about our partnerships here.

You can find information on Elastic GPU pricing here. You can find additional documentation here.

Now, if you’ll excuse me I have some virtual rockets to build.

Randall

3D print your own Rubik’s Cube Solver

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/rubiks-cube-solver/

Why use logic and your hands to solve a Rubik’s Cube, when you could 3D print your own Rubik’s Cube Solver and thus avoid overexerting your fingers and brain cells? Here to help you with this is Otvinta‘s new robotic make:

Fully 3D-Printed Rubik’s Cube Solving Robot

This 3D-printed Raspberry PI-powered Rubik’s Cube solving robot has everything any serious robot does — arms, servos, gears, vision, artificial intelligence and a task to complete. If you want to introduce robotics to your kids or your students, this is the perfect machine for it. This robot is fully 3D-printable.

Rubik’s Cubes

As Liz has said before, we have a lot of Rubik’s cubes here at Pi Towers. In fact, let me just…hold on…I’ll be right back.

Okay, these are all the ones I found on Gordon’s desk, and I’m 99% sure there are more in his drawers.

Raspberry Pi Rubik's Cube Solver

And that’s just Gordon. Given that there’s a multitude of other Pi Towers staff members who are also obsessed with the little twisty cube of wonder, you could use what you find in our office to restock an entire toy shop for the pre-Christmas rush!

So yeah, we like Rubik’s Cubes.

The 3D-Printable Rubik’s Cube Solver

Aside from the obvious electronic elements, Otvinta’s Rubik’s Cube Solving Robot is completely 3D-printable. While it may take a whopping 70 hours of print time and a whole spool of filament to make your solving robot a reality, we’ve seen far more time-consuming prints with a lot less purpose than this.

(If you’ve clicked the link above, I’d just like to point out that, while that build might be 3D printing overkill, I want one anyway.)

Rubik's Cube Solver

After 3D printing all the necessary parts of your Rubik’s Cube Solving Robot, you’ll need to run the Windows 10 IoT Core on your Raspberry Pi. Once connected to your network, you can select the Pi from the IoT Dashboard on your main PC and install the RubiksCubeRobot app.

Raspberry Pi Rubik's Cube Solver

Then simply configure the robot via the app, and you’re good to go!

You might not necessarily need a Raspberry Pi to create this build, since you could simply run the app on your main PC. However, using a Pi will make your project more manageable and less bulky.

You can find all the details of how to make your own Rubik’s Cube Solving Robot on Otvinta’s website, so do make sure to head over there if you want to learn more.

All the robots!

This isn’t the first Raspberry Pi-powered Rubik’s Cube out there, and it surely won’t be the last. There’s this one by Francesco Georg using LEGO Mindstorms; this one was originally shared on Reddit; Liz wrote about this one; and there’s one more which I can’t seem to find but I swear exists, and it looks like the Eye of Sauron! Ten House Points to whoever shares it with me in the comments below.

The post 3D print your own Rubik’s Cube Solver appeared first on Raspberry Pi.

Affordable Raspberry Pi 3D Body Scanner

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/affordable-raspberry-pi-3d-body-scanner/

With a £1000 grant from Santander, Poppy Mosbacher set out to build a full-body 3D body scanner with the intention of creating an affordable setup for makespaces and similar community groups.

First Scan from DIY Raspberry Pi Scanner

Head and Shoulders Scan with 29 Raspberry Pi Cameras

Uses for full-body 3D scanning

Poppy herself wanted to use the scanner in her work as a fashion designer. With the help of 3D scans of her models, she would be able to create custom cardboard dressmaker’s dummies to ensure her designs fit perfectly. This is a brilliant way of incorporating digital tech into another industry – and it’s not the only application for this sort of build. Growing numbers of businesses use 3D body scanning, for example the stores around the world where customers can 3D scan and print themselves as action-figure-sized replicas.

Print your own family right on the high street!
image c/o Tom’s Guide and Shapify

We’ve also seen the same technology used in video games for more immersive virtual reality. Moreover, there are various uses for it in healthcare and fitness, such as monitoring the effect of exercise regimes or physiotherapy on body shape or posture.

Within a makespace environment, a 3D body scanner opens the door to including new groups of people in community make projects: imagine 3D printing miniatures of a theatrical cast to allow more realistic blocking of stage productions and better set design, or annually sending grandparents a print of their grandchild so they can compare the child’s year-on-year growth in a hands-on way.

Raspberry Pi 3d Body Scan

The Germany-based clothing business Outfittery uses full-body scanners to take the stress out of finding clothes that fit well.
image c/o Outfittery

As cheesy as it sounds, the only limit for the use of 3D scanning is your imagination…and maybe storage space for miniature prints.

Poppy’s Raspberry Pi 3D Body Scanner

For her build, Poppy acquired 27 Raspberry Pi Zeros and 27 Raspberry Pi Camera Modules. With various other components, some 3D-printed or made of cardboard, Poppy got to work. She was helped by members of Build Brighton and by her friend Arthur Guy, who also wrote the code for the scanner.

Raspberry Pi 3D Body Scanner

The Pi Zeros run Raspbian Lite, and are connected to a main server running a node application. Each is fitted into its own laser-cut cardboard case, and secured to a structure of cardboard tubing and 3D-printed connectors.

Raspberry Pi 3D Body Scanner

In the finished build, the person to be scanned stands within the centre of the structure, and the press of a button sends the signal for all Pis to take a photo. The images are sent back to the server, and processed through Autodesk ReMake, freemium software available for PC (Poppy discovered part-way through the project that the Mac version has recently lost support).
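
Poppy and Arthur’s actual code is linked from her Instructables page; just to illustrate the shape of the capture side, each Pi Zero could expose a tiny HTTP endpoint along the lines of the hedged Flask/picamera sketch below, which the central server would call on all the Pis when the button is pressed (the route, port, and resolution are my own placeholders):

from io import BytesIO
from flask import Flask, send_file
from picamera import PiCamera

app = Flask(__name__)
camera = PiCamera(resolution=(1640, 1232))

@app.route("/capture")
def capture():
    # Take a still and stream the JPEG straight back to the requesting server
    stream = BytesIO()
    camera.capture(stream, format="jpeg")
    stream.seek(0)
    return send_file(stream, mimetype="image/jpeg")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)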

Build your own

Obviously there’s a lot more to the process of building this full-body 3D scanner than what I’ve reported in these few paragraphs. And since it was Poppy’s goal to make a readily available and affordable scanner that anyone can recreate, she’s provided all the instructions and code for it on her Instructables page.

Projects like this, in which people use the Raspberry Pi to create affordable and interesting tech for communities, are exactly the type of thing we love to see. Always make sure to share your Pi-based projects with us on social media, so we can boost their visibility!

If you’re a member of a makespace, run a workshop in a school or club, or simply love to tinker and create, this build could be the perfect addition to your workshop. And if you recreate Poppy’s scanner, or build something similar, we’d love to see the results in the comments below.

The post Affordable Raspberry Pi 3D Body Scanner appeared first on Raspberry Pi.

Sean’s DIY Bitcoin Lottery with a Raspberry Pi

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/seans-diy-bitcoin-lottery/

After several explorations into the world of 3D printing, and fresh off the back of his $5 fidget spinner crowd funding campaign, Sean Hodgins brings us his latest project: a DIY Bitcoin Lottery!

DIY Bitcoin Lottery with a Raspberry Pi

Build your own lottery! Thingiverse Files: https://www.thingiverse.com/thing:2494568 Pi How-to: http://www.idlehandsproject.com/raspberry-pi-bitcoin-lottery/ Instructables: https://www.instructables.com/id/DIY-Bitcoin-Lottery-With-Raspberry-Pi/ Send me bitcoins if you want!

What is Bitcoin mining?

According to the internet, Bitcoin mining is:

[A] record-keeping service. Miners keep the blockchain consistent, complete, and unalterable by repeatedly verifying and collecting newly broadcast transactions into a new group of transactions called a block. Each block contains a cryptographic hash of the previous block, using the SHA-256 hashing algorithm, which links it to the previous block, thus giving the blockchain its name.
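
In code terms, that verification race boils down to hashing candidate block data over and over until the (double) SHA-256 digest meets a difficulty target. A deliberately simplified, toy-difficulty sketch of that loop:

import hashlib

def mine(header, difficulty_prefix="0000"):
    # Toy proof-of-work: find a nonce whose double SHA-256 digest starts with the prefix
    nonce = 0
    while True:
        payload = "{}{}".format(header, nonce).encode()
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).hexdigest()
        if digest.startswith(difficulty_prefix):
            return nonce, digest
        nonce += 1

print(mine("previous-block-hash|merkle-root|timestamp"))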

If that makes no sense to you, welcome to the club. So here’s a handy video which explains it better.

What is Bitcoin Mining?

For more information: https://www.bitcoinmining.com and https://www.weusecoins.com What is Bitcoin Mining? Have you ever wondered how Bitcoin is generated? This short video is an animated introduction to Bitcoin Mining. Credits: Voice – Chris Rice (www.ricevoice.com) Motion Graphics – Fabian Rühle (www.fabianruehle.de) Music/Sound Design – Christian Barth (www.akkord-arbeiter.de) Andrew Mottl (www.andrewmottl.com)

Okay, now I get it.

I swear.

Sean’s Bitcoin Lottery

As a retired Bitcoin miner, Sean understands how the system works and what is required for mining. And since news sources report that Bitcoin is currently valued at around $4000, Sean decided to use a Raspberry Pi to bring to life an idea he’d been thinking about for a little while.

Sean Hodgins Raspberry Pi Bitcoin Lottery

He fitted the Raspberry Pi into a 3D-printed body, together with a small fan, a strip of NeoPixels, and a Block Eruptor ASIC, which is the dedicated mining hardware. The Pi runs a Python script compatible with CGMiner, mining software that needs far more explanation than I can offer in this short blog post.

The NeoPixels take the first 6 characters of the 64-character-long number of the current block, and interpret them as a hex colour code. In this way, the block’s data is converted into colour, which, when you think about it, is kind of beautiful.
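
As a tiny illustration of that mapping (a sketch of the idea, not Sean’s actual code), interpreting the leading six hex characters as red, green, and blue bytes is a short conversion:

def hash_prefix_to_rgb(block_hex):
    # Treat the first six hex characters as three two-character colour channels
    r = int(block_hex[0:2], 16)
    g = int(block_hex[2:4], 16)
    b = int(block_hex[4:6], 16)
    return (r, g, b)

# Arbitrary example string standing in for real block data
print(hash_prefix_to_rgb("a1b2c3" + "0" * 58))  # -> (161, 178, 195)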

The device moves on to trying to solve a new block every 20 minutes. When it does, the NeoPixel LEDs play a flashing ‘Win’ or ‘Lose’ animation to let you know whether you were the one to solve the previous block.

Sean Hodgins Raspberry Pi Bitcoin Lottery

Lottery results

Sean has done the maths to calculate the power consumption of the device. He says that the annual cost of running his Bitcoin Lottery is roughly what you would pay for two lottery scratch cards. Now, the odds of solving a block are much lower than those of buying a winning scratch card. However, since the mining device moves on to a new block every 20 minutes, the odds of being a winner with Bitcoin using Sean’s build are actually better than those of winning the lottery.

Sean Hodgins Raspberry Pi Bitcoin Lottery

MATHS!

But even if you don’t win, Sean’s project is a fun experiment in Bitcoin mining and creating colour through code. And if you want to make your own, you can download the 3D-files here, find the code here, and view the step-by-step guide here on Instructables.

Good luck and happy mining!

The post Sean’s DIY Bitcoin Lottery with a Raspberry Pi appeared first on Raspberry Pi.

Thomas and Ed become a RealLifeDoodle on the ISS

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/astro-pi-reallifedoodle/

Thanks to the very talented sooperdavid, creator of some of the wonderful animations known as RealLifeDoodles, Thomas Pesquet and Astro Pi Ed have been turned into one of the cutest videos on the internet.

space pi – Create, Discover and Share Awesome GIFs on Gfycat

Watch space pi GIF by sooperdave on Gfycat. Discover more GIFS online on Gfycat

And RealLifeDoodles aaaaare?

Thanks to the power of viral video, many will be aware of the ongoing Real Life Doodle phenomenon. Wait, you’re not aware?

Oh. Well, let me explain it to you.

Taking often comical video clips, those with a know-how and skill level that outweighs my own in spades add faces and emotions to inanimate objects, creating what the social media world refers to as a Real Life Doodle. From disappointed exercise balls to cannibalistic piles of leaves, these video clips are both cute and sometimes, though thankfully not always, a little heartbreaking.

letmegofree – Create, Discover and Share Awesome GIFs on Gfycat

Watch letmegofree GIF by sooperdave on Gfycat. Discover more reallifedoodles GIFs on Gfycat

Our own RealLifeDoodle

A few months back, when Programme Manager Dave Honess, better known to many as SpaceDave, sent me these Astro Pi videos for me to upload to YouTube, a small plan hatched in my brain. For in the midst of the video, and pointed out to me by SpaceDave – “I kind of love the way he just lets the unit drop out of shot” – was the most adorable sight as poor Ed drifted off into the great unknown of the ISS. Finding that I have this odd ability to consider many inanimate objects as ‘cute’, I wanted to see whether we could turn poor Ed into a RealLifeDoodle.

Heading to the Reddit RealLifeDoodle subreddit, I sent moderator sooperdavid a private message, asking if he’d be so kind as to bring our beloved Ed to life.

Yesterday, our dream came true!

Astro Pi

Unless you’re new to the world of the Raspberry Pi blog (in which case, welcome!), you’ll probably know about the Astro Pi Challenge. But for those who are unaware, let me break it down for you.

Raspberry Pi RealLifeDoodle

In 2015, two weeks before British ESA Astronaut Tim Peake journeyed to the International Space Station, two Raspberry Pis were sent up to await his arrival. Clad in 6063-grade aluminium flight cases and fitted with their own Sense HATs and camera modules, the Astro Pis Ed and Izzy were ready to receive the winning codes from school children in the UK. The following year, this time maintained by French ESA Astronaut Thomas Pesquet, children from every ESA member country got involved to send even more code to the ISS.

Get involved

Will there be another Astro Pi Challenge? Well, I just asked SpaceDave and he didn’t say no! So why not get yourself into training now and try out some of our space-themed free resources, including our 3D-print your own Astro Pi case tutorial? You can also follow the adventures of Ed and Izzy in our brilliant Story of Astro Pi cartoons.

Raspberry Pi RealLifeDoodle

And if you’re quick, there’s still time to take part in tomorrow’s Moonhack! Check out their website for more information and help the team at Code Club Australia beat their own world record!

The post Thomas and Ed become a RealLifeDoodle on the ISS appeared first on Raspberry Pi.

Awesome Raspberry Pi cases to 3D print at home

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/3d-printed-raspberry-pi-cases/

Unless you’re planning to fit your Raspberry Pi inside a build, you may find yourself in need of a case to protect it from dust, damage and/or the occasional pet attack. Here are some of our favourite 3D-printed cases, for which files are available online so you can recreate them at home.

TARDIS

TARDIS Raspberry PI 3 case – 3D Printing Time lapse

A time-lapse of the TARDIS Raspberry Pi 3 case by Jason3030 (https://www.thingiverse.com/thing:2430122/), printed on a BCN3D Sigma in blue PLA: 3 hrs 20 min, 73 × 73 × 165 mm, 0.4 mm layers with a 0.6 mm nozzle, 0% infill, 4 mm retraction, 230 °C, 60 mm/s, using 114 g of filament.

Since I am an avid Whovian, it’s not surprising that this case made its way onto the list. Its outside is aesthetically pleasing to the aspiring Time Lord, and it snugly fits your treasured Pi.



Pop this case on your desk and chuckle with glee every time someone asks what’s inside it:

Person: What’s that?
You: My Raspberry Pi.
Person: What’s a Raspberry Pi?
You: It’s a computer!
Person: There’s a whole computer in that tiny case?
You: Yes…it’s BIGGER ON THE INSIDE!

I’ll get my coat.

Pi crust

Yes, we all wish we’d thought of it first. What better case for a Raspberry Pi than a pie crust?

3D-printed Raspberry Pi cases

While the case is designed to fit the Raspberry Pi Model B, a few tweaks will let you adapt the build to accommodate newer models.



Just make sure that if you do, you credit Marco Valenzuela, its original baker.

Consoles

Since many people use the Raspberry Pi to run RetroPie, there is a growing trend of 3D-printed console-style Pi cases.

3D-printed Raspberry Pi cases

So why not pop your Raspberry Pi into a case made to look like your favourite vintage console, such as the Nintendo NES or N64?



You could also use an adapter to fit a Raspberry Pi Zero within an actual Atari cartridge, or go modern and print a PlayStation 4 case!

Functional

Maybe you’re looking to use your Raspberry Pi as a component of a larger project, such as a home automation system, learning suite, or makerspace. In that case you may need to attach it to a wall, under a desk, or behind a monitor.

3D-printed Raspberry Pi cases

Coo! Coo!

The Pidgeon, shown above, allows you to turn your Zero W into a surveillance camera, while the piPad lets you keep a breadboard attached for easy access to your Pi’s GPIO pins.



Functional cases with added brackets are great for incorporating your Pi on the sly. The VESA mount case will allow you to attach your Pi to any VESA-compatible monitor, and the Fallout 4 Terminal is just really cool.

Cute

You might want your case to just look cute, especially if it’s going to sit in full view on your desk or shelf.

3D-printed Raspberry Pi cases

The tired cube above is the only one of our featured 3D prints for which you have to buy the files ($1.30), but its adorable face begged to be shared anyway.



If you’d rather save your money for another day, you may want to check out this adorable monster from Adafruit. Be aware that this case will also need some altering to fit newer versions of the Pi.

Our cases

Finally, there are great options for you if you don’t have access to a 3D printer, or if you would like to help the Raspberry Pi Foundation’s mission. You can buy one of the official Raspberry Pi cases for the Raspberry Pi 3 and Raspberry Pi Zero (and Zero W)!

3D-printed Raspberry Pi cases



As with all official Raspberry Pi accessories (and with the Pi itself), your money goes toward helping the Foundation to put the power of digital making into the hands of people all over the world.

3D-printed Raspberry Pi cases

You could also print a replica of the official Astro Pi cases, in which two Pis are currently orbiting the earth on the International Space Station.

Design your own Raspberry Pi case!

If you’ve built a case for your Raspberry Pi, be it with a 3D printer, laser-cutter, or your bare hands, make sure to share it with us in the comments below, or via our social media channels.

And if you’d like to give 3D printing a go, there are plenty of free online learning resources, and sites that offer tutorials and software to get you started, such as TinkerCAD, Instructables, and Adafruit.

The post Awesome Raspberry Pi cases to 3D print at home appeared first on Raspberry Pi.

New – GPU-Powered Streaming Instances for Amazon AppStream 2.0

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-gpu-powered-streaming-instances-for-amazon-appstream-2-0/

We launched Amazon AppStream 2.0 at re:Invent 2016. This application streaming service allows you to deliver Windows applications to a desktop browser.

AppStream 2.0 is fully managed and provides consistent, scalable performance by running applications on general purpose, compute optimized, and memory optimized streaming instances, with delivery via NICE DCV – a secure, high-fidelity streaming protocol. Our enterprise and public sector customers have started using AppStream 2.0 in place of legacy application streaming environments that are installed on-premises. They use AppStream 2.0 to deliver both commercial and line of business applications to a desktop browser. Our ISV customers are using AppStream 2.0 to move their applications to the cloud as-is, with no changes to their code. These customers focus on demos, workshops, and commercial SaaS subscriptions.

We are getting great feedback on AppStream 2.0 and have been adding new features very quickly (even by AWS standards). So far this year we have added an image builder, federated access via SAML 2.0, CloudWatch monitoring, Fleet Auto Scaling, Simple Network Setup, persistent storage for user files (backed by Amazon S3), support for VPC security groups, and built-in user management including web portals for users.

New GPU-Powered Streaming Instances
Many of our customers have told us that they want to use AppStream 2.0 to deliver specialized design, engineering, HPC, and media applications to their users. These applications are generally graphically intensive and are designed to run on expensive, high-end PCs in conjunction with a GPU (Graphics Processing Unit). Due to the hardware requirements of these applications, cost considerations have traditionally kept them out of situations where part-time or occasional access would otherwise make sense. Recently, another requirement has come to the forefront. These applications almost always need shared, read-write access to large amounts of sensitive data that is best stored, processed, and secured in the cloud. In order to meet the needs of these users and applications, we are launching two new types of streaming instances today:

Graphics Desktop – Based on the G2 instance type, Graphics Desktop instances are designed for desktop applications that use CUDA, DirectX, or OpenGL for rendering. These instances are equipped with 15 GiB of memory and 8 vCPUs. You can select this instance family when you build an AppStream image or configure an AppStream fleet.

Graphics Pro – Based on the brand-new G3 instance type, Graphics Pro instances are designed for high-end, high-performance applications that can use the NVIDIA APIs and/or need access to large amounts of memory. These instances are available in three sizes, with 122 to 488 GiB of memory and 16 to 64 vCPUs. Again, you can select this instance family when you configure an AppStream fleet.

To learn more about how to launch, run, and scale a streaming application environment, read Scaling Your Desktop Application Streams with Amazon AppStream 2.0.

As I noted earlier, you can use either of these two instance types to build an AppStream image. This will allow you to test and fine-tune your applications and to see the instances in action.
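If you prefer to script this, here’s a minimal sketch using boto3 that creates and starts a fleet on one of the new instance types. The fleet name and image name are hypothetical placeholders for resources you would create yourself:

# A minimal sketch (assumptions noted inline): create and start an AppStream 2.0
# fleet on a GPU-powered streaming instance using boto3.
import boto3

appstream = boto3.client("appstream", region_name="us-east-1")

# Create a fleet of Graphics Pro instances from an image built with the image builder.
# "design-apps-fleet" and "my-graphics-apps-image" are hypothetical names.
appstream.create_fleet(
    Name="design-apps-fleet",
    ImageName="my-graphics-apps-image",
    InstanceType="stream.graphics-pro.4xlarge",   # 16 vCPUs, 122 GiB of memory
    ComputeCapacity={"DesiredInstances": 2},      # number of concurrent streaming sessions
    Description="GPU-powered fleet for 3D design applications",
)

# Start the fleet so that users can launch streaming sessions.
appstream.start_fleet(Name="design-apps-fleet")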

Streaming Instances in Action
We’ve been working with several customers during a private beta program for the new instance types. Here are a few stories (and some cool screenshots) to show you some of the applications that they are streaming via AppStream 2.0:

AVEVA is a world-leading provider of engineering design and information management software solutions for the marine, power, plant, offshore and oil & gas industries. As part of their work on massive capital projects, their customers need to bring many groups of specialist engineers together to collaborate on the creation of digital assets. In order to support this requirement, AVEVA is building SaaS solutions that combine the streamed delivery of engineering applications with access to a scalable project data environment that is shared between engineers across the globe. The new instances will allow AVEVA to deliver their engineering design software in SaaS form while maximizing quality and performance. Here’s a screenshot of their Everything 3D app being streamed from AppStream:

Nissan, a Japanese multinational automobile manufacturer, trains its automotive specialists using 3D simulation software running on expensive graphics workstations. The training software, developed by The DiSti Corporation, allows its specialists to simulate maintenance processes by interacting with realistic 3D models of the vehicles they work on. AppStream 2.0’s new graphics capability now allows Nissan to deliver these training tools in real time, with up to date content, to a desktop browser running on low-cost commodity PCs. Their specialists can now interact with highly realistic renderings of a vehicle that allows them to train for and plan maintenance operations with higher efficiency.

Cornell University is an American private Ivy League and land-grant doctoral university located in Ithaca, New York. They deliver advanced 3D tools such as Autodesk AutoCAD and Inventor to students and faculty to support their course work, teaching, and research. Until now, these tools could only be used on GPU-powered workstations in a lab or classroom. AppStream 2.0 allows them to deliver the applications to a web browser running on any desktop, where they run as if they were on a local workstation. Their users are no longer limited by available workstations in labs and classrooms, and can bring their own devices and have access to their course software. This increased flexibility also means that faculty members no longer need to take lab availability into account when they build course schedules. Here’s a copy of Autodesk Inventor Professional running on AppStream at Cornell:

Now Available
Both of the graphics streaming instance families are available in the US East (Northern Virginia), US West (Oregon), EU (Ireland), and Asia Pacific (Tokyo) Regions and you can start streaming from them today. Your applications must run in a Windows 2012 R2 environment, and can make use of DirectX, OpenGL, CUDA, OpenCL, and Vulkan.

With prices in the US East (Northern Virginia) Region starting at $0.50 per hour for Graphics Desktop instances and $2.05 per hour for Graphics Pro instances, you can now run your simulation, visualization, and HPC workloads in the AWS Cloud on an economical, pay-by-the-hour basis. You can also take advantage of fast, low-latency access to Amazon Elastic Compute Cloud (EC2), Amazon Simple Storage Service (S3), AWS Lambda, Amazon Redshift, and other AWS services to build processing workflows that handle pre- and post-processing of your data.

Jeff;

 

IoT Sleepbuddy, the robotic babysitter

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/sleepbuddy-robotic-babysitter/

You’re watching the new episode of Game of Thrones, and suddenly you hear your children, up and about after their bedtime! Now you’ll probably miss a crucial moment of the show because you have to put them to bed again. Or you’re out to dinner with friends and longing for the sight of your sleeping small humans. What do you do? Text the babysitter to check on them? Well, luckily for you these issues could soon be things of the past, thanks to Bert Vuylsteke and his Pi-powered Sleepbuddy. This IoT-controlled social robot could fulfil all your remote babysitting needs!

IoT Sleepbuddy – babyphone – Design concept

This video shows the concept of the robot and the context in which it can be used.

A social robot?

A social robot fulfils a role normally played by a person, and interacts with humans via human language, gestures, and facial expressions. This is what Bert says about the role of the Sleepbuddy:

[For children, it] is a friend or safeguard from nightmares, but it is so much more for the babysitters or parents. The babysitters or parents connect their smartphone/tablet/PC to the Sleepbuddy. This will give them access to control all his emotions, gestures, microphone, speaker and camera. In the eye is a hidden camera to see the kids sleeping. The speaker and microphone allow communication with the kids through WiFi.

The roots of the Sleepbuddy

As a student at Ghent University, Bert had to build a social robot using OPSORO, the university’s open-source robotics platform. The developers of this platform create social robots for research purposes. They are also making all software, as well as hardware design plans, available on GitHub. In addition, you will soon be able to purchase their robot kits via a Kickstarter. OPSORO robots are designed around the Raspberry Pi, and controlled via a web interface. The interface allows you to customise your robot’s behaviour, using visual or text-based programming languages.
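To give a flavour of how a Pi-based social robot can be driven from a browser or phone, here’s a minimal, purely illustrative sketch of a web-controlled robot using Flask. This is not the OPSORO API: the routes and the set_emotion()/say() helpers are hypothetical stand-ins for whatever code drives your robot’s servos and speaker.

# An illustrative sketch of a WiFi-controlled social robot on a Raspberry Pi.
# NOT the OPSORO platform's API – the endpoints and helpers below are placeholders.
from flask import Flask, jsonify, request

app = Flask(__name__)

EMOTIONS = {"happy", "sleepy", "surprised"}   # assumed set of supported expressions

def set_emotion(emotion):
    # Placeholder: a real build would drive servos or an LED matrix here.
    print("Robot now looks {}".format(emotion))

def say(text):
    # Placeholder: a real build would pipe text-to-speech to the speaker here.
    print("Robot says: {}".format(text))

@app.route("/emotion/<emotion>", methods=["POST"])
def emotion_route(emotion):
    if emotion not in EMOTIONS:
        return jsonify(error="unknown emotion"), 400
    set_emotion(emotion)
    return jsonify(status="ok", emotion=emotion)

@app.route("/say", methods=["POST"])
def say_route():
    text = (request.get_json(force=True, silent=True) or {}).get("text", "")
    say(text)
    return jsonify(status="ok")

if __name__ == "__main__":
    # Listen on all interfaces so a phone or tablet on the same WiFi can connect.
    app.run(host="0.0.0.0", port=5000)

A parent’s phone on the same network could then trigger an expression with a simple HTTP request, for example: curl -X POST http://raspberrypi.local:5000/emotion/happy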

Sleepbuddy Bert Vuylsteke components

The Sleepbuddy’s components

Building the Sleepbuddy

Bert has provided a detailed Instructable describing the process of putting the Sleepbuddy together, complete with video walk-throughs. However, the making techniques he has used include thermoforming, laser cutting, and 3D printing. If you want to recreate this build, you may need to contact your local makerspace to find out whether they have the necessary equipment.

Sleepbuddy Bert Vuylsteke assembly

Assembling the Sleepbuddy

Finally, Bert added an especially cute touch to this project by covering the Sleepbuddy in blackboard paint, which means kids can draw on the robot to really make it their own!

So many robots!

At Pi Towers we are partial to all kinds of robots, be they ones that test medical devices, play chess or Connect 4, or fight other robots. If they twerk, or are cute, tiny, or shoddy, we might even like them a tiny bit more.

Do you share our love of robots? Would you like to make your own? Then check out our resource for building a simple robot buggy. Maybe it will kick-start your career as the general of a robot army. A robot army that does good, of course! Let us know your benevolent robot overlord plans in the comments.

The post IoT Sleepbuddy, the robotic babysitter appeared first on Raspberry Pi.