Tag Archives: robotics

A Robot the Size of the World

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/12/a-robot-the-size-of-the-world.html

In 2016, I wrote about an Internet that affected the world in a direct, physical manner. It was connected to your smartphone. It had sensors like cameras and thermostats. It had actuators: Drones, autonomous cars. And it had smarts in the middle, using sensor data to figure out what to do and then actually do it. This was the Internet of Things (IoT).

The classical definition of a robot is something that senses, thinks, and acts—that’s today’s Internet. We’ve been building a world-sized robot without even realizing it.

In 2023, we upgraded the “thinking” part with large language models (LLMs) like GPT. ChatGPT both surprised and amazed the world with its ability to understand human language and generate credible, on-topic, humanlike responses. But what these are really good at is interacting with systems formerly designed for humans. Their accuracy will get better, and they will be used to replace actual humans.

In 2024, we’re going to start connecting those LLMs and other AI systems to both sensors and actuators. In other words, they will be connected to the larger world, through APIs. They will receive direct inputs from our environment, in all the forms I thought about in 2016. And they will increasingly control our environment, through IoT devices and beyond.

It will start small: Summarizing emails and writing limited responses. Arguing with customer service—on chat—for service changes and refunds. Making travel reservations.

But these AIs will interact with the physical world as well, first controlling robots and then having those robots as part of them. Your AI-driven thermostat will turn the heat and air conditioning on based on who’s in which room, their preferences, and where they are likely to go next. It will negotiate with the power company for the cheapest rates by scheduling usage of high-energy appliances or car recharging.

This is the easy stuff. The real changes will happen when these AIs group together in a larger intelligence: A vast network of power generation and power consumption with each building just a node, like an ant colony or a human army.

Future industrial-control systems will include traditional factory robots, as well as AI systems to schedule their operation. They will automatically order supplies, as well as coordinate final product shipping. The AI will manage its own finances, interacting with other systems in the banking world. It will call on humans as needed: to repair individual subsystems or to do things too specialized for the robots.

Consider driverless cars. Individual vehicles have sensors, of course, but they also make use of sensors embedded in the roads and on poles. The real processing is done in the cloud, by a centralized system that is piloting all the vehicles. This allows individual cars to coordinate their movement for more efficiency: braking in synchronization, for example.

These are robots, but not the sort familiar from movies and television. We think of robots as discrete metal objects, with sensors and actuators on their surface, and processing logic inside. But our new robots are different. Their sensors and actuators are distributed in the environment. Their processing is somewhere else. They’re a network of individual units that become a robot only in aggregate.

This turns our notion of security on its head. If massive, decentralized AIs run everything, then who controls those AIs matters a lot. It’s as if all the executive assistants or lawyers in an industry worked for the same agency. An AI that is both trusted and trustworthy will become a critical requirement.

This future requires us to see ourselves less as individuals, and more as parts of larger systems. It’s AI as nature, as Gaia—everything as one system. It’s a future more aligned with the Buddhist philosophy of interconnectedness than Western ideas of individuality. (And also with science-fiction dystopias, like Skynet from the Terminator movies.) It will require rethinking many of our assumptions about governance and the economy. That’s not going to happen soon, but in 2024 we will see the first steps along that path.

This essay previously appeared in Wired.

On Robots Killing People

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/09/on-robots-killing-people.html

The robot revolution began long ago, and so did the killing. One day in 1979, a robot at a Ford Motor Company casting plant malfunctioned—human workers determined that it was not going fast enough. And so twenty-five-year-old Robert Williams was asked to climb into a storage rack to help move things along. The one-ton robot continued to work silently, smashing into Williams’s head and instantly killing him. This was reportedly the first incident in which a robot killed a human; many more would follow.

At Kawasaki Heavy Industries in 1981, Kenji Urada died in similar circumstances. A malfunctioning robot he went to inspect killed him when he obstructed its path, according to Gabriel Hallevy in his 2013 book, When Robots Kill: Artificial Intelligence Under Criminal Law. As Hallevy puts it, the robot simply determined that “the most efficient way to eliminate the threat was to push the worker into an adjacent machine.” From 1992 to 2017, workplace robots were responsible for 41 recorded deaths in the United States—and that’s likely an underestimate, especially when you consider knock-on effects from automation, such as job loss. A robotic anti-aircraft cannon killed nine South African soldiers in 2007 when a possible software failure led the machine to swing itself wildly and fire dozens of lethal rounds in less than a second. In a 2018 trial, a medical robot was implicated in killing Stephen Pettitt during a routine operation that had occurred a few years earlier.

You get the picture. Robots—“intelligent” and not—have been killing people for decades. And the development of more advanced artificial intelligence has only increased the potential for machines to cause harm. Self-driving cars are already on American streets, and robotic “dogs” are being used by law enforcement. Computerized systems are being given the capabilities to use tools, allowing them to directly affect the physical world. Why worry about the theoretical emergence of an all-powerful, superintelligent program when more immediate problems are at our doorstep? Regulation must push companies toward safe innovation and innovation in safety. We are not there yet.

Historically, major disasters have needed to occur to spur regulation—the types of disasters we would ideally foresee and avoid in today’s AI paradigm. The 1905 Grover Shoe Factory disaster led to regulations governing the safe operation of steam boilers. At the time, companies claimed that large steam-automation machines were too complex to rush safety regulations. This, of course, led to overlooked safety flaws and escalating disasters. It wasn’t until the American Society of Mechanical Engineers demanded risk analysis and transparency that dangers from these huge tanks of boiling water, once considered mystifying, were made easily understandable. The 1911 Triangle Shirtwaist Factory fire led to regulations on sprinkler systems and emergency exits. And the preventable 1912 sinking of the Titanic resulted in new regulations on lifeboats, safety audits, and on-ship radios.

Perhaps the best analogy is the evolution of the Federal Aviation Administration. Fatalities in the first decades of aviation forced regulation, which required new developments in both law and technology. Starting with the Air Commerce Act of 1926, Congress recognized that the integration of aerospace tech into people’s lives and our economy demanded the highest scrutiny. Today, every airline crash is closely examined, motivating new technologies and procedures.

Any regulation of industrial robots stems from existing industrial regulation, which has been evolving for many decades. The Occupational Safety and Health Act of 1970 established safety standards for machinery, and the Robotic Industries Association, now merged into the Association for Advancing Automation, has been instrumental in developing and updating specific robot-safety standards since its founding in 1974. Those standards, with obscure names such as R15.06 and ISO 10218, emphasize inherent safe design, protective measures, and rigorous risk assessments for industrial robots.

But as technology continues to change, the government needs to more clearly regulate how and when robots can be used in society. Laws need to clarify who is responsible, and what the legal consequences are, when a robot’s actions result in harm. Yes, accidents happen. But the lessons of aviation and workplace safety demonstrate that accidents are preventable when they are openly discussed and subjected to proper expert scrutiny.

AI and robotics companies don’t want this to happen. OpenAI, for example, has reportedly fought to “water down” safety regulations and reduce AI-quality requirements. According to an article in Time, it lobbied European Union officials against classifying models like ChatGPT as “high risk,” which would have brought “stringent legal requirements including transparency, traceability, and human oversight.” The reasoning was supposedly that OpenAI did not intend to put its products to high-risk use—a logical twist akin to the Titanic owners lobbying that the ship should not be inspected for lifeboats on the principle that it was a “general purpose” vessel that also could sail in warm waters where there were no icebergs and people could float for days. (OpenAI did not comment when asked about its stance on regulation; previously, it has said that “achieving our mission requires that we work to mitigate both current and longer-term risks,” and that it is working toward that goal by “collaborating with policymakers, researchers and users.”)

Large corporations have a tendency to develop computer technologies to self-servingly shift the burdens of their own shortcomings onto society at large, or to claim that safety regulations protecting society impose an unjust cost on corporations themselves, or that security baselines stifle innovation. We’ve heard it all before, and we should be extremely skeptical of such claims. Today’s AI-related robot deaths are no different from the robot accidents of the past. Those industrial robots malfunctioned, and human operators trying to assist were killed in unexpected ways. Since the first known death resulting from the feature in January 2016, Tesla’s Autopilot has been implicated in more than 40 deaths according to official report estimates. Malfunctioning Teslas on Autopilot have deviated from their advertised capabilities by misreading road markings, suddenly veering into other cars or trees, crashing into well-marked service vehicles, or ignoring red lights, stop signs, and crosswalks. We’re concerned that AI-controlled robots already are moving beyond accidental killing in the name of efficiency and “deciding” to kill someone in order to achieve opaque and remotely controlled objectives.

As we move into a future where robots are becoming integral to our lives, we can’t forget that safety is a crucial part of innovation. True technological progress comes from applying comprehensive safety standards across technologies, even in the realm of the most futuristic and captivating robotic visions. By learning lessons from past fatalities, we can enhance safety protocols, rectify design flaws, and prevent further unnecessary loss of life.

For example, the UK government already sets out statements that safety matters. Lawmakers must reach further back in history to become more future-focused on what we must demand right now: modeling threats, calculating potential scenarios, enabling technical blueprints, and ensuring responsible engineering for building within parameters that protect society at large. Decades of experience have given us the empirical evidence to guide our actions toward a safer future with robots. Now we need the political will to regulate.

This essay was written with Davi Ottenheimer, and previously appeared on Atlantic.com.

Make a robot: A fun and educational journey into robotics for kids

Post Syndicated from Marc Scott original https://www.raspberrypi.org/blog/make-a-robot/

Lots of kids are excited about robotics, and we have the free resources you need to help your children start making robots.

A smiling girl holding a robot buggy in her lap

What’s a robot anyway?

Did you know that the concept of robotics dates back to ancient Greece, where a mathematician built a self-propelled flying pigeon to understand bird flight? Today, we have robots assisting people in everything from manufacturing to medicine. But what exactly is a robot? Ask two people, and you might get two different answers. Some may tell you about Star Wars’ C-3PO and R2-D2, while others may tell you about self-driving cars or even toys.

In my view, a robot is a machine that can carry out a series of physical tasks, programmed via a computer. These tasks could range from picking up an object and placing it elsewhere, to navigating a maze, to even assembling a car without human interaction.

Why robotics?

My first encounter with robotics was the Big Trak, a programmable toy vehicle created in 1979. You could program up to 16 commands into Big Trak, which it then executed in sequence. My family and I used the toy to transport items to each other around our house. It was a fun and engaging way to explore the basics of robotics and programming.

A Big Trak toy robot on wheels with a keypad on top and with a cart attached.

Understanding something about robotics is not just for scientists and engineers. It involves learning a range of skills that empower your kids to be creators of our digital world, instead of just consumers.

A child codes at a desktop computer.

Robotics combines various aspects of science, technology, engineering, and mathematics (STEM) in a fun and engaging way. It also encourages young people’s problem-solving abilities, creativity, and critical thinking — skills that are key for the innovators of tomorrow.

Machine learning and robotics: A powerful duo

What happens when we add machine learning to robotics? Machine learning is an area of artificial intelligence where people design computer systems so they “learn” from data. This is not unlike how people learn from experience. Machine learning can enable robots to adapt to new situations and perform tasks that only people used to do.

A girl shows off a robot she has built.

We’ve already built robots that can play chess with you, or clean your house, or deliver your food. As people develop machine learning for robotics further, the possibilities are vast. By the time our children start their careers, it might be normal to have robots as software-driven “coworkers”. It’s important that we prepare children for the possible future that robotics and machine learning could open up. We need to empower them to contribute to creating robots with capabilities that complement and benefit all people.

To see what free resources we’re offering to help young people understand and create with machine learning and AI, check out this blog post about our Experience AI learning programme.

Getting started with robotics

So, how can kids start diving into the world of robotics? Here are three online resources to kickstart their journey:

Physical computing with Scratch and the Raspberry Pi

‘Physical computing with Scratch and the Raspberry Pi’ is a fantastic introduction to using electronics with the block-based Scratch programming language for young learners.

A girl with a Raspberry Pi computer.

Kids will learn to create interactive stories, games, and animations, all while getting a taste of physical computing. They’ll explore how to use sound and light, and even learn how to create improvised buttons.

Introduction to Raspberry Pi Pico and MicroPython

This project path introduces the Raspberry Pi Pico, a tiny yet powerful digital device that kids can program using the text-based MicroPython language.

Blink on Raspberry Pi Pico.
A Raspberry Pi Pico.

It’s a great way to delve deeper into the world of electronics and programming. The path includes a variety of fun and engaging projects that incorporate crafting and allow children to see the tangible results of their coding efforts.
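
To give a flavour of what that text-based coding looks like, here is the classic first program for the Pico in MicroPython: blinking the onboard LED. This is an illustrative sketch rather than one of the path’s projects, and it assumes the original Pico, where the onboard LED sits on GPIO 25.

```python
# Blink the Raspberry Pi Pico's onboard LED in MicroPython.
# Assumes the original Pico (LED on GPIO 25); a Pico W addresses
# its onboard LED as Pin("LED") instead.
from machine import Pin
import time

led = Pin(25, Pin.OUT)

while True:
    led.toggle()     # flip the LED on or off
    time.sleep(0.5)  # wait half a second
```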

Build a robot

‘Build a robot’ is a project path that allows young people to create a simple programmable buggy. They can then make it remote-controlled and even transform it so it can follow a line by itself.

A robot buggy with a Raspberry Pi.

This hands-on project path not only teaches the basics of robotics but also encourages problem-solving as kids iteratively improve their robot buggy’s design.

The robot building community

Let’s take a moment to celebrate two young tech creators who love building robots.

Selin is a digital maker from Istanbul, Turkey, who is passionate about robotics and AI. Selin’s journey into the world of digital making began with a wish: after her family’s beloved dog Korsan passed away, she wanted to bring him back to life. This led her to design a robotic dog on paper, and to learn coding and digital making to build that robot.

Selin is posing on one knee, next to her robot.

Selin has since built seven different robotics projects. One of them is IC4U, a robotic guide dog designed to help people with impaired sight. Selin’s commitment to making projects that help make the world a better place was recognised when she was awarded the Aspiring Teen Award by Women in Tech.

Jay, a young digital maker from Preston, UK, started experimenting with code at a young age to make his own games. He attended free local coding groups, such as CoderDojo, and was introduced to the block-based programming language Scratch. Soon, Jay was combining his interests in programming with robotics to make his own inventions.

Young coder Jay shows off some of his robotics projects.

Jay’s dad, Biren, comments: “With robotics and coding, what Jay has learned is to think outside of the box and without any limits. This has helped him achieve amazing things.”

Open up the world of making robots for your child

Robotics and machine learning are not just science fiction — they shape our lives today in ways kids might not even realise. Whether your child is just interested in playing with robots, wants to learn more about them, or is considering a career in robotics, our free resources are a great place to start.

If a Greek mathematician was able to build a flying pigeon millennia ago, imagine what children could create today!

The post Make a robot: A fun and educational journey into robotics for kids appeared first on Raspberry Pi Foundation.

Credible Handwriting Machine

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/05/credible-handwriting-machine.html

In case you don’t have enough to worry about, someone has built a credible handwriting machine:

This is still a work in progress, but the project seeks to solve one of the biggest problems with other homework machines, such as this one that I covered a few months ago after it blew up on social media. The problem with most homework machines is that they’re too perfect. Not only is their content output too well-written for most students, but they also have perfect grammar and punctuation, something even we professional writers fail to consistently achieve. Most importantly, the machine’s “handwriting” is too consistent. Humans always include small variations in their writing, no matter how honed their penmanship.

Devadath is on a quest to fix the issue with perfect penmanship by making his machine mimic human handwriting. Even better, it will reflect the handwriting of its specific user so that AI-written submissions match those written by the student themselves.

Like other machines, this starts with asking ChatGPT to write an essay based on the assignment prompt. That generates a chunk of text, which would normally be stylized with a script-style font and then output as G-code for a pen plotter. But instead, Devadath created custom software that records examples of the user’s own handwriting. The software then uses that as a font, with small random variations, to create a document image that looks like it was actually handwritten.
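
To make the idea concrete, here is a small sketch of how “small random variations” might be applied when turning recorded glyphs into pen-plotter G-code. The glyph data, jitter ranges, and pen up/down commands are illustrative assumptions, not Devadath’s actual software.

```python
import random

# Hypothetical recorded glyphs: character -> list of pen strokes,
# each stroke a list of (x, y) points in millimetres.
GLYPHS = {
    "h": [[(0, 0), (0, 7)], [(0, 4), (1.5, 4.5), (2.5, 3.5), (2.5, 0)]],
    "i": [[(0, 0), (0, 4)], [(0, 5.5), (0, 6)]],
}

def glyph_to_gcode(char, x_offset, lines):
    """Emit jittered G-code strokes for one character."""
    scale = random.uniform(0.95, 1.05)          # per-character size wobble
    jitter = lambda: random.uniform(-0.3, 0.3)  # per-point position wobble
    for stroke in GLYPHS[char]:
        x0, y0 = stroke[0]
        lines.append(f"G0 X{x_offset + x0 * scale + jitter():.2f} "
                     f"Y{y0 * scale + jitter():.2f}")   # travel, pen up
        lines.append("M3")                              # pen down (one common plotter convention)
        for x, y in stroke[1:]:
            lines.append(f"G1 X{x_offset + x * scale + jitter():.2f} "
                         f"Y{y * scale + jitter():.2f}")
        lines.append("M5")                              # pen up

lines = []
for i, ch in enumerate("hi"):
    glyph_to_gcode(ch, x_offset=i * 5.0, lines=lines)
print("\n".join(lines))
```

Because every point is re-jittered on each run, no two renderings of the same text come out identical, which is exactly the property that defeats the “too consistent” tell.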

Watch the video.

My guess is that this is another detection/detection avoidance arms race.

AIs as Computer Hackers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/02/ais-as-computer-hackers.html

Hacker “Capture the Flag” has been a mainstay at hacker gatherings since the mid-1990s. It’s like the outdoor game, but played on computer networks. Teams of hackers defend their own computers while attacking other teams’. It’s a controlled setting for what computer hackers do in real life: finding and fixing vulnerabilities in their own systems and exploiting them in others’. It’s the software vulnerability lifecycle.

These days, dozens of teams from around the world compete in weekend-long marathon events held all over the world. People train for months. Winning is a big deal. If you’re into this sort of thing, it’s pretty much the most fun you can possibly have on the Internet without committing multiple felonies.

In 2016, DARPA ran a similarly styled event for artificial intelligence (AI). One hundred teams entered their systems into the Cyber Grand Challenge. After completing qualifying rounds, seven finalists competed at the DEFCON hacker convention in Las Vegas. The competition occurred in a specially designed test environment filled with custom software that had never been analyzed or tested. The AIs were given 10 hours to find vulnerabilities to exploit against the other AIs in the competition and to patch themselves against exploitation. A system called Mayhem, created by a team of Carnegie Mellon computer security researchers, won. The researchers have since commercialized the technology, which is now busily defending networks for customers like the U.S. Department of Defense.

There was a traditional human–team capture-the-flag event at DEFCON that same year. Mayhem was invited to participate. It came in last overall, but it didn’t come in last in every category all of the time.

I figured it was only a matter of time. It would be the same story we’ve seen in so many other areas of AI: the games of chess and go, X-ray and disease diagnostics, writing fake news. AIs would improve every year because all of the core technologies are continually improving. Humans would largely stay the same because we remain humans even as our tools improve. Eventually, the AIs would routinely beat the humans. I guessed that it would take about a decade.

But now, five years later, I have no idea if that prediction is still on track. Inexplicably, DARPA never repeated the event. Research on the individual components of the software vulnerability lifecycle does continue. There’s an enormous amount of work being done on automatic vulnerability finding. Going through software code line by line is exactly the sort of tedious problem at which machine learning systems excel, if they can only be taught how to recognize a vulnerability. There is also work on automatic vulnerability exploitation and lots on automatic update and patching. Still, there is something uniquely powerful about a competition that puts all of the components together and tests them against others.

To see that in action, you have to go to China. Since 2017, China has held at least seven of these competitions—called Robot Hacking Games—many with multiple qualifying rounds. The first included one team each from the United States, Russia, and Ukraine. The rest have been Chinese only: teams from Chinese universities, teams from companies like Baidu and Tencent, teams from the military. Rules seem to vary. Sometimes human–AI hybrid teams compete.

Details of these events are few. They’re Chinese language only, which naturally limits what the West knows about them. I didn’t even know they existed until Dakota Cary, a research analyst at the Center for Security and Emerging Technology and a Chinese speaker, wrote a report about them a few months ago. And they’re increasingly hosted by the People’s Liberation Army, which presumably controls how much detail becomes public.

Some things we can infer. In 2016, none of the Cyber Grand Challenge teams used modern machine learning techniques. Certainly most of the Robot Hacking Games entrants are using them today. And the competitions encourage collaboration as well as competition between the teams. Presumably that accelerates advances in the field.

None of this is to say that real robot hackers are poised to attack us today, but I wish I could predict with some certainty when that day will come. In 2018, I wrote about how AI could change the attack/defense balance in cybersecurity. I said that it is impossible to know which side would benefit more but predicted that the technologies would benefit the defense more, at least in the short term. I wrote: “Defense is currently in a worse position than offense precisely because of the human components. Present-day attacks pit the relative advantages of computers and humans against the relative weaknesses of computers and humans. Computers moving into what are traditionally human areas will rebalance that equation.”

Unfortunately, it’s the People’s Liberation Army and not DARPA that will be the first to learn if I am right or wrong and how soon it matters.

This essay originally appeared in the January/February 2022 issue of IEEE Security & Privacy.

AWS Week in Review – December 12, 2022

Post Syndicated from Marcia Villalba original https://aws.amazon.com/blogs/aws/aws-week-in-review-december-12-2022/

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

“The world is asynchronous,” Amazon CTO Werner Vogels reminded us during his keynote last week at AWS re:Invent. At the beginning of the keynote, he showed us how weird a synchronous world would be and how everything in nature is asynchronous. One example of an event-driven application he showcased during his keynote is Serverlesspresso, a project my team has been working on for the last year. And last week, we announced Serverlesspresso extensions, a new program that lets you contribute to Serverlesspresso and learn how event-driven applications can be extended.

Last Week’s Launches
Here are some launches that got my attention during the previous week.

Amazon SageMaker Studio now supports fine-grained data access control with AWS Lake Formation when accessing data through Amazon EMR. Now, when you connect EMR clusters to SageMaker Studio notebooks, you can choose which runtime IAM role you want to connect with, and the notebooks will only access data and resources permitted by the attached runtime role.

Amazon Lex has now added support for Arabic, Cantonese, Norwegian, Swedish, Polish, and Finnish. This opens new possibilities to create chatbots and conversational experiences in more languages.

Amazon RDS Proxy now supports creating proxies in Amazon Aurora Global Database primary and secondary Regions. Now, building multi-Region applications with Amazon Aurora is simpler. RDS Proxy sits between your application and the database, pooling and sharing established database connections.

Amazon FSx for NetApp ONTAP launched many new features. First, it added the support for Nitro-based encryption of data in transit. It also extended NVMe read cache support to Single-AZ file systems. And it added four new features to ease the use of the service: easily assign a snapshot policy to your volumes, easily create data protection volumes, configure volumes so their tags are automatically copied to the backups, and finally, add or remove VPC route tables for your existing Multi-AZ file systems.

I would also like to mention two launches that happened before re:Invent but were not covered on the News Blog:

Amazon EventBridge Scheduler is a new capability from Amazon EventBridge that allows you to create, run, and manage scheduled tasks at scale. Using this new capability, you can schedule one-time or recurring tasks across 270 AWS services.
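
As a rough sketch of what that looks like in code, here is how you might create a recurring schedule with the EventBridge Scheduler API via boto3. The Lambda function and IAM role ARNs are placeholders you would replace with your own.

```python
# Minimal sketch: a recurring EventBridge Scheduler schedule that
# invokes a Lambda function. ARNs below are illustrative placeholders.
import boto3

scheduler = boto3.client("scheduler")

scheduler.create_schedule(
    Name="nightly-report",
    ScheduleExpression="cron(0 2 * * ? *)",  # every day at 02:00 UTC
    FlexibleTimeWindow={"Mode": "OFF"},      # fire at the exact time
    Target={
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:report",
        "RoleArn": "arn:aws:iam::123456789012:role/scheduler-invoke-role",
    },
)
```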

AWS IoT RoboRunner is now generally available. Last year at re:Invent, Channy wrote a blog post introducing the preview of this service. IoT RoboRunner is a robotics service that makes it easier to build and deploy applications for fleets of robots working seamlessly together.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Some other updates and news that you may have missed:

I would like to recommend this really interesting Amazon Science article about federated learning. This is a framework that allows edge devices to work together to train a global model while keeping customers’ data on-device.

Podcast Charlas Técnicas de AWS – If you understand Spanish, this podcast is for you. Podcast Charlas Técnicas is one of the official AWS podcasts in Spanish, and every other week there is a new episode. Today the final episode for season three launched, and in it, we discussed many of the re:Invent launches. You can listen to all the episodes directly from your favorite podcast app or at AWS Podcasts en español.

AWS open-source news and updates–This is a newsletter curated by my colleague Ricardo to bring you the latest open-source projects, posts, events, and more.

Upcoming AWS Events
Check your calendars and sign up for these AWS events:

AWS Resilience Hub Activation Day is a half-day technical virtual session to deep dive into the features and functionality of Resilience Hub. You can register for free here.

AWS re:Invent recaps in your area. During the re:Invent week, we had lots of new announcements, and in the coming weeks you can find a recap of all these launches in your area. All the events will be posted on this site, so check it regularly to find an event nearby.

AWS re:Invent keynotes, leadership sessions, and breakout sessions are available on demand. I recommend that you check the playlists and find the talks about your favorite topics in one collection.

That’s all for this week. Check back next Monday for another Week in Review!

— Marcia

Celebrating the community: Selin

Post Syndicated from Rosa Brown original https://www.raspberrypi.org/blog/celebrating-the-community-selin/

We are so excited to share another story from the community! Our series of community stories takes you across the world to hear from young people and educators who are engaging with creating digital technologies in their own personal ways. 

Selin and a robot she has built.
Selin and her robot guide dog IC4U.

In this story we introduce you to Selin, a digital maker from Istanbul, Turkey, who is passionate about robotics and AI. Watch the video to hear how Selin’s childhood pet inspired her to build tech projects that aim to help others live well.  

Meet Selin 

Selin (16) started her digital making journey because she wanted to solve a problem: after her family’s beloved dog Korsan passed away, she wanted to bring him back to life. Selin thought a robotic dog could be the answer, and so she started to design her project on paper. When she found out that learning to code would mean she could actually make a robotic dog, Selin began to teach herself about coding and digital making. Selin has since built seven robots, and her enthusiasm for creating digital technologies shows no sign of stopping.    

Selin is on one knee, next to her robot.
Selin and her robot guide dog IC4U.

One of Selin’s big motivations to explore digital making was having an event to work towards. When she discovered Coolest Projects, our global technology showcase for young people, Selin set herself the task of making a robot that she could present at the Coolest Projects event in 2018. 

When thinking about ideas for what to make for Coolest Projects, Selin remembered how it felt to lose her dog. She wondered what it must be like when a blind person’s guide dog passes away, as that person loses their friend as well as their support. So Selin decided to make a robotic guide dog called IC4U. She contacted several guide dog organisations to find out how guide dogs are trained and what they need to be able to do so she could replicate their behaviour in her robot. The robot is voice-controlled so that people with impaired sight can interact with it easily. 

Selin and the judges at Coolest Projects.
Selin at Coolest Projects International in 2018.

Selin and her parents travelled to Coolest Projects International in Dublin with Selin’s robotic guide dog, and Selin and IC4U became a judges’ favourite in the Hardware category. Selin enjoyed participating in Coolest Projects so much that she started designing her project for next year’s event straight away:    

“When I returned back I immediately started working for next year’s Coolest Projects.”  

Selin

Many of Selin’s tech projects share a theme: to help make the world a better place. For example, another robot made by Selin is the BB4All — a school assistant robot to tackle bullying. And last year, while she attended the Stanford AI4ALL summer camp, Selin worked with a group of young people to design a tech project to increase the speed and accuracy of lung cancer diagnoses.

Through her digital making projects, Selin wants to show how people can use robotics and AI technology to support people and their well-being. In 2021, Selin’s commitment to making these projects was recognised when she was awarded the Aspiring Teen Award by Women in Tech.           

Selin stands next to a photograph of herself. In the photograph she has a dog on one side and a robot dog on the other.

Listening to Selin, it is inspiring to hear how a person can use technology to express themselves as well as create projects that have the potential to do so much good. Selin acknowledges that sometimes the first steps can be the hardest, especially for girls interested in tech: “I know it’s hard to start at first, but interests are gender-free.”

“Be curious and courageous, and never let setbacks stop you so you can actually accomplish your dream.”    

Selin

We have loved seeing all the wonderful projects that Selin has made in the years since she first designed a robot dog on paper. And it’s especially cool to see that Selin has also continued to work on her robot IC4U, the original project that led her to coding, Coolest Projects, and more. Selin’s robot has developed with its maker, and we can’t wait to see what they both go on to do next.

Help us celebrate Selin and inspire other young people to discover coding and digital making as a passion, by sharing her story on Twitter, LinkedIn, and Facebook.

The post Celebrating the community: Selin appeared first on Raspberry Pi.

Preview – AWS IoT RoboRunner for Building Robot Fleet Management Applications

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/preview-aws-iot-roborunner-for-building-robot-fleet-management-applications/

In 2018, we launched AWS RoboMaker, a cloud-based simulation service that enables robotics developers to run, scale, and automate simulation without managing any infrastructure. As we worked with robot developers and operators, we have repeatedly heard that they face challenges in operating different robot types in their automation efforts, including autonomous guided vehicles (AGV), autonomous mobile vehicles (AMR), and robotic manipulators.

Many customers use different types of robots, often from different vendors, in a single facility. Robot operators want to access the unified data required to build applications that work across a fleet of robots. However, when a new robot is added to an autonomous operation, complex and time-consuming software integration work is required to connect the robot control software to work management systems.

Today, we are launching a public preview of AWS IoT RoboRunner, a new robotics service that makes it easier for enterprises to build and deploy applications that help fleets of robots work seamlessly together. AWS IoT RoboRunner lets you connect your robots and work management systems, thereby enabling you to orchestrate work across your operation through a single system view.

This new service builds on the same technology used in Amazon fulfillment centers, and now we are excited to make it available to all developers to build advanced robotics applications for their businesses.

AWS IoT RoboRunner in Action
You can create a single facility (e.g., site name and location) in the AWS Management Console to get started with AWS IoT RoboRunner. Behind the scenes, AWS IoT RoboRunner automatically creates centralized repositories for storing facility, robot, destination, and task data. Then, the robots working on this site are set up as a “Fleet”, and each individual robot is set up in AWS IoT RoboRunner as a “Robot” within a fleet.
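
For illustration, here is a rough sketch of that setup flow using the AWS SDK for Python (boto3), assuming the preview exposes an “iot-roborunner” client with site, fleet, and worker operations. All names and parameters below are placeholders and may differ in the preview SDK.

```python
# Rough sketch of the site -> fleet -> robot setup described above.
# Assumes the preview boto3 "iot-roborunner" client; names are
# illustrative placeholders, not a confirmed API walkthrough.
import boto3

roborunner = boto3.client("iot-roborunner")

# A "Site" represents the facility.
site = roborunner.create_site(name="fulfillment-center-1", countryCode="US")

# Robots of one type/vendor are grouped into a "Fleet" at that site.
fleet = roborunner.create_worker_fleet(
    name="agv-fleet-a",
    site=site["arn"],
)

# Each individual robot is a "Worker" within the fleet.
worker = roborunner.create_worker(
    name="agv-001",
    fleet=fleet["arn"],
)
print(worker["arn"])
```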

You can download the Fleet Gateway Library to develop integration code for connecting your robots and work management systems with AWS IoT RoboRunner to send and receive data from individual robot fleets. You can also develop your first robotics management application using the Task Manager Library, deploying the Task Manager code as an AWS Lambda function and the Fleet Gateway code on-premises as an AWS IoT Greengrass component.

To enable a single-system view of the robots, status of the systems, and progress of tasks on the same interface, AWS IoT RoboRunner provides APIs that let you build a user application. AWS IoT RoboRunner provides sample applications for allocating tasks to robot fleets so that you can get started quickly. You can customize the task allocation code with business requirements that align to your use case.

Learn more by reading Getting started with AWS IoT RoboRunner in the AWS IoT RoboRunner Developer Guide. Watch a quick introductory video about AWS IoT RoboRunner for more information.

Try Public Preview Now
AWS IoT RoboRunner is now available in public preview, and you can start using it today in the US East (N. Virginia) and Europe (Frankfurt) Regions. There will be no additional cost to use this service during the preview period.

You can send feedback to [email protected], the AWS forum for AWS IoT, or through your usual AWS Support contacts.

Channy

The Raspberry Pi Build HAT and LEGO® components at our CoderDojo

Post Syndicated from Mark Calleja original https://www.raspberrypi.org/blog/raspberry-pi-build-hat-lego-education-robotics-coderdojo/

Like so many CoderDojos around the world, our office-based CoderDojo hadn’t been able to bring learners together in person since the start of the coronavirus pandemic. So we decided that our first time back in the Raspberry Pi Foundation headquarters should be something special. Having literally just launched the new Raspberry Pi Build HAT for programming LEGO® projects with Raspberry Pi computers, we wanted to celebrate our Dojo’s triumphant return to in-person sessions by offering a ‘LEGO bricks and Raspberry Pi’ activity!

A robot buggy built by young people with LEGO bricks and the Raspberry Pi Build HAT.

Back in person, with new ways to create with code

The Raspberry Pi Build HAT allows learners to build and program projects with Raspberry Pi computers and LEGO® Technic™ motors and sensors from the LEGO® Education SPIKE™ Portfolio.

A close-up of the Raspberry Pi Build HAT on a Maker Plate and connected to electronic components.

What better way could there be to get the more experienced coders among our Dojo’s young people (Ninjas) properly excited to be back? We knew they were fond of building things with LEGO bricks, as so many young people are, so we were sure they would have great fun with this activity!

Two girls work together on a coding project.

For our beginners, we set up Raspberry Pi workstations and got them coding the projects on the Home island on our brand-new Code Club World platform, which they absolutely loved, so their jealousy was mitigated somewhat. 

We wanted to keep our first Dojo back small, so for the ‘LEGO bricks and Raspberry Pi’ activity, we set up just four workstations, each with a Raspberry Pi 4 with 4GB RAM, a Raspberry Pi Build HAT on top, and a LEGO Education SPIKE Prime set. We put eight participants into teams of two, and made sure that all of them brought a little experience with text-based coding, because we wanted them to be able to focus on making projects in their own style, rather than first learning the basics of coding in Python. Then we offered our Ninjas the choice of the first two projects in the Introduction to the Raspberry Pi Build HAT and LEGO path: make Pong game controllers, or make a remote-controlled robot buggy. As I had predicted, all the teams chose to make a robot buggy!

""

Teamwork and design

The teams of Ninjas were immediately off and making — in fact, they couldn’t wait to get the lids off the boxes of brightly coloured bricks and beams!

Two young people work as a team at a CoderDojo coding club.

Our project instructions focus primarily on supporting learners through coding and testing the mechanics of their creations, leaving the design and build totally up to them. This was evidenced by the variety of buggy designs we saw at the project showcase at the end of the two-hour Dojo session!

While beginner-friendly, the projects in the Introduction path involve a mix of coding, testing, designing, and building. So it required focus and solid teamwork for the Ninjas to finish their buggies in time for the project showcase. And this is where building with LEGO pieces was really helpful.

Coding front and centre, thanks to the Raspberry Pi Build HAT

Having LEGO bricks and the Build HAT available to create their Raspberry Pi–powered robot buggies made it easy for our Ninjas to focus on writing the code to get their buggies to work. They weren’t relying on crafting skills or duct tape and glue guns to make a chassis in the relatively short time they had, and the coding could be front and centre for them.

The most exciting part for the Ninjas was that they were building remote-controlled robot buggies. This is one of the amazing things Raspberry Pi makes possible when you use it with the Build HAT and SPIKE™ Prime set: it’s simple to make the Raspberry Pi at the heart of the creation talk to a mobile device via Bluetooth, and off you go controlling what you’ve created via a phone or tablet.

The LEGO Technic motors that are part of the LEGO Education SPIKE Prime set are of really high quality, and they’re super easy to program with the Build HAT and its Python library! You can change the motors’ speed by setting a single parameter in your code. You can also easily write code to set or read the motors’ exact angle (their absolute position). That allows you to finely control the motors’ movements, or to use them as sensors.
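
Here’s a minimal sketch of what that looks like with the Build HAT Python library, assuming a SPIKE motor plugged into port A; speed is a percentage and positions are in degrees.

```python
# Minimal sketch of driving a SPIKE motor via the Build HAT Python
# library. Assumes a motor on port A.
from buildhat import Motor

motor = Motor('A')

motor.run_for_seconds(3, speed=50)  # spin at half speed for 3 seconds
motor.run_to_position(90)           # turn to an absolute angle of 90 degrees
print(motor.get_aposition())        # read back the absolute position (sensor use)
```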

Some of our teams, inspired by everything the SPIKE Prime set has to offer, tried out programming the set’s sensors, to switch their robot buggy on or help it avoid obstacles. Because we only had about 90 minutes of digital making, not all teams managed to finish adding the extra features they wanted — but next time for sure!

A young person programs a robot buggy built with LEGO bricks and the Raspberry Pi Build HAT.

With a little more time (or another Dojo session), it would have been possible for the Ninjas to make some very advanced remote-controlled buggies indeed, complete with headlights, brake lights, sensors, and sound.

Learning with LEGO® elements and Raspberry Pi computers

If you have access to LEGO Education SPIKE Prime sets for your learners, then the Raspberry Pi Build HAT is a great addition that allows them to build complex robotics projects with very simple code — but I think that’s not its main benefit.

A robot buggy built by young people with LEGO bricks and the Raspberry Pi Build HAT.

Because the Build HAT allows your learners to work with LEGO elements, you know that many of them already understand one aspect of the creation process: they’ve got experience of using LEGO bricks to solve a problem. In a coding or STEM club session, or in a classroom lesson, you can only give your learners a limited amount of time to complete a project, or get their project prototype to a stable point. So being able to rely on your learners’ existing skills in making the physical build leaves you a lot more time to support them with what they’re actually here to learn: the coding and digital making skills.

You and your young people next!

The projects using the Raspberry Pi Build HATs were such a hit, we’ll be getting them and the LEGO Education SPIKE Prime sets out at every Dojo session from now on! We’re excited to see what young people around the world will be creating thanks to our new collaboration with LEGO Education.

Have you used the Raspberry Pi Build HAT with your learners or young people at home yet? Share their stories and creations in the comments here, or on social media using #BuildHAT.

The post The Raspberry Pi Build HAT and LEGO® components at our CoderDojo appeared first on Raspberry Pi.

Robotic waiter learning to serve drinks

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/robotic-waiter-learning-to-serve-drinks/

The maker of this robotic waiter had almost all of the parts for this project just sitting around collecting dust on a shelf. We’re delighted they decided to take the time to pick up the few extra bits they needed online, then take the extra hour (just an hour?!) to write a program in Python to get this robotic waiter up and running.

It’s learning! Bartending is hard

We are also thrilled to report (having spotted it in the reddit post we found this project on) that the maker had “so much fun picking up and sometimes crushing small things with this claw.” The line between serving drinks and wanting to crush things is thinner than you might imagine.

And in even better news, all the code you need to recreate this build is on GitHub.

Robo arm, HAT, and Raspberry Pi all together

Parts list

First successful straw-drop. Perfecto!

reddit comments bantz

One of our favourite things about finding Raspberry Pi-powered projects on reddit is the comments section. It’s (usually) the perfect mix of light adoration, constructive suggestions, and gateways to tangents we cannot ignore.

Like this one recalling the Rick and Morty sketch in which a cute tiny robot realises their sole purpose is to pass butter:

No swears in this scene! But it is an adult cartoon in general

And also this one pointing us to another robotic arm having a grand old time picking up a tiny ball, sending it down a tiny slide, and then doing it all over again. Because it’s important we know how to make our own fun:

We also greatly enjoyed the fact that the original maker couldn’t use the Rick and Morty “what is my purpose” line to share this project because they are such an uber fan that they already used it for a project they posted just the day before. This cute creation’s sole reason for existing is to hold an Apple pencil while looking fabulous. And we are HERE for it:

The post Robotic waiter learning to serve drinks appeared first on Raspberry Pi.

Community stories: Avye

Post Syndicated from Katie Gouskos original https://www.raspberrypi.org/blog/community-stories-avye-robotics-girls-tech/

We’re excited to share another incredible story from the community — the second in our new series of inspirational short films that celebrate young tech creators across the world.

A young teenager with glasses smiles
Avye discovered robotics at her local CoderDojo and is on a mission to get more girls like her into tech.

These stories showcase some of the wonderful things that young people are empowered to do when they learn how to create with technology. We hope that they will inspire many more young people to get creative with technology too!

Meet Avye

This time, you will meet an accomplished, young community member who is on a quest to encourage more girls to join her and get into digital making.

Help us celebrate Avye by liking and sharing her story on Twitter, LinkedIn, or Facebook!

For as long as she can remember, Avye (13) has enjoyed creating things. It was at her local CoderDojo that seven-year-old Avye was introduced to the world of robotics. Avye’s second-ever robot, the Raspberry Pi–powered Voice O’Tronik Bot, went on to win the Hardware category at our Coolest Projects UK event in 2018.

A girl shows off a robot she has built
Avye showcased her Raspberry Pi–powered Voice O’Tronik Bot at Coolest Projects UK in 2018.

Coding and digital making have become an integral part of Avye’s life, and she wants to help other girls discover these skills too. She says, “I believe that it’s important for girls and women to see and be aware of ordinary girls and women doing cool things in the STEM world.” Avye started running her own workshops for girls in her community and in 2018 founded Girls Into Coding. She has now teamed up with her mum Helene, who is committed to helping to drive the Girls Into Coding mission forwards.

I want to get other girls like me interested in tech.

Avye

Avye has received multiple awards to celebrate her achievements, including the Princess Diana Award and Legacy Award in 2019. Most recently, in 2020, Avye won the TechWomen100 Award, the Women in Tech’s Aspiring Teen Award, and the FDM Everywoman in Tech Award!

We cannot wait to see what the future has in store for her. Help us celebrate Avye and inspire others by liking and sharing her story on Twitter, LinkedIn, or Facebook!

The post Community stories: Avye appeared first on Raspberry Pi.

Friday Squid Blogging: Underwater Robot Uses Squid-Like Propulsion

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/11/friday-squid-blogging-underwater-robot-uses-squid-like-propulsion.html

This is neat:

By generating powerful streams of water, UCSD’s squid-like robot can swim untethered. The “squidbot” carries its own power source, and has the room to hold more, including a sensor or camera for underwater exploration.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

AWS Architecture Monthly Magazine: Robotics

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/architecture-monthly-magazine-robotics/

September’s issue of AWS Architecture Monthly is all about robotics. Discover why iRobot, the creator of your favorite (though maybe not your pet’s favorite) little robot vacuum, decided to move its mission-critical platform to the serverless architecture of AWS. Learn how and why you sometimes need to test in a virtual environment instead of a physical one. You’ll also have the opportunity to hear from technical experts from across the robotics industry who came together for the AWS Cloud Robotics Summit in August.

Our expert this month, Matt Hansen (who has dreamed of building robots since he was a teen), gives us his outlook for the industry and explains why cloud will be an essential part of that.

In September’s Robotics issue

  • Ask an Expert: Matt Hansen, Principal Solutions Architect
  • Blog: Testing a PR2 Robot in a Simulated Hospital
  • Case Study: iRobot
  • Blog: Introduction to Automatic Testing of Robotics Applications
  • Case Study: Multiply Labs Uses AWS RoboMaker to Manufacture Individualized Medicines
  • Demos & Videos: AWS Cloud Robotics Summit (August 18-19, 2020)
  • Related Videos: iRobot and ZS Associates

Survey opportunity

This month, we’re also asking you to take a 10-question survey about your experiences with this magazine. The survey is hosted by an external company (Qualtrics), so the below survey button doesn’t lead to our website. Please note that AWS will own the data gathered from this survey, and we will not share the results we collect with survey respondents. Your responses to this survey will be subject to Amazon’s Privacy Notice. Please take a few moments to give us your opinions.

How to access the magazine

We hope you’re enjoying Architecture Monthly, and we’d like to hear from you—leave us a star rating and comment on the Amazon Kindle Newsstand page or contact us anytime at [email protected].

Field Notes: Deploying UiPath RPA Software on AWS

Post Syndicated from Yuchen Lin original https://aws.amazon.com/blogs/architecture/field-notes-deploying-uipath-rpa-software-on-aws/

Running UiPath RPA software on AWS leverages the elasticity of the AWS Cloud to set up, operate, and scale robotic process automation. It provides cost-efficient and resizable capacity, and scales the robots to meet your business workload. This reduces the need for administration tasks, such as hardware provisioning, environment setup, and backups. It frees you to focus on business process optimization by automating more processes.

This blog post guides you in deploying UiPath robotic process automation (RPA) software on AWS. RPA software uses the user interface to capture data and manipulate applications just like humans do. It runs as a software robot that interprets data, triggers responses, and communicates with other systems to perform a variety of repetitive tasks.

UiPath Enterprise RPA Platform provides the full automation lifecycle including discover, build, manage, run, engage, and measure with different products. This blog post focuses on the Platform’s core products: build with UiPath Studio, manage with UiPath Orchestrator and run with UiPath Robots.

About UiPath software

UiPath Enterprise RPA Platform’s core products are UiPath Studio, UiPath Orchestrator, and UiPath Robot.

UiPath Studio and UiPath Robot are individual products; you can deploy each on a standalone machine.

UiPath Orchestrator contains Web Servers, a SQL Server, and an Indexer Server (Elasticsearch). You can use a Single Machine deployment or a Multi-Node deployment, depending on your workload capacity and availability requirements.

For information on UiPath platform offerings, review UiPath platform products.

UiPath on AWS

You can deploy all UiPath products on AWS.

  • UiPath Studio is needed for automation design jobs and runs on single machine. You deploy it with Amazon EC2.
  • UiPath Robots are needed for automation tasks; each runs on a single machine and scales with the business workload. You deploy them with Amazon EC2 and scale with Amazon EC2 Auto Scaling.
  • UiPath Orchestrator is needed for automation administration jobs and contains three logical components that run on multiple machines. You deploy Web Server with Amazon EC2, SQL Server with Amazon RDS, and Indexer Server with Amazon Elasticsearch Service. For Multi-Node deployment, you deploy High Availability Add-On with Amazon EC2.

The architecture of UiPath Enterprise RPA Platform on AWS looks like the following diagram:

Figure 1 – UiPath Enterprise RPA Platform on AWS

By deploying the UiPath Enterprise RPA Platform on AWS, you can set up, operate, and scale automation workloads while controlling infrastructure costs.

Prerequisites

For this walkthrough, you should have the following prerequisites:

  • An AWS account
  • AWS resources
  • UiPath Enterprise RPA Platform software
  • Basic knowledge of Amazon EC2, EC2 Auto Scaling, Amazon RDS, Amazon Elasticsearch Service.
  • Basic knowledge to set up Windows Server, IIS, SQL Server, Elasticsearch.
  • Basic knowledge of Redis Enterprise to set up High Availability Add-on.
  • Basic knowledge of UiPath Studio, UiPath Robot, UiPath Orchestrator.

Deployment Steps

Deploy UiPath Studio
UiPath Studio deploys on a single machine. Amazon EC2 instances provide secure and resizable compute capacity in the cloud, and the ability to launch applications when needed without upfront commitments.

  1. Download the UiPath Enterprise RPA Platform. UiPath Studio is integrated in the installation package.
  2. Launch an EC2 instance with a Windows OS-based Amazon Machine Image (AMI) that meets the UiPath Studio hardware requirements and software requirements.
  3. Install the UiPath Studio software. For UiPath Studio installation steps, review the UiPath Studio Guide.

Optionally, you can save the installation and pre-configuration work completed for UiPath Studio as a custom Amazon Machine Image (AMI). Then, you can launch more UiPath Studio instances from this AMI. For details, visit the Launch an EC2 instance from a custom AMI tutorial.
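If you script your launches, the following boto3 sketch shows how a Studio instance might be launched from such a custom AMI. The AMI ID, key pair name, and instance type are placeholder assumptions, not values from this walkthrough:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch one Studio instance from the pre-configured custom AMI.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical custom UiPath Studio AMI
        InstanceType="t3.large",          # pick a size that meets the hardware requirements
        KeyName="my-key-pair",            # hypothetical key pair for RDP access
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": "uipath-studio"}],
        }],
    )
    print(response["Instances"][0]["InstanceId"])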

UiPath Robot Deployment

Each UiPath Robot deploys on a single machine, launched with Amazon EC2. Amazon EC2 Auto Scaling helps you add or remove Robots to meet changes in automation workload demand.

  1. Download the UiPath Enterprise RPA Platform. The UiPath Robot is integrated in the installation package.
  2. Launch an EC2 instance with a Windows OS-based Amazon Machine Image (AMI) that meets the UiPath Robot hardware requirements and software requirements.
  3. Install the business application (Microsoft Office, SAP, etc.) required for your business processes. Alternatively, select the business application AMI from the AWS Marketplace.
  4. Install the UiPath Robot software. For UiPath Robot installation steps, review Installing the Robot.

Optionally, you can save the installation and pre-configuration work completed for UiPath Robot as a custom Amazon Machine Image (AMI). Then you can create a launch template with the instance configuration information, create Auto Scaling groups from that template, and scale the Robots.
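A sketch of those two steps with boto3 follows; the AMI ID, names, sizes, and subnet IDs are placeholders:

    import boto3

    ec2 = boto3.client("ec2")
    autoscaling = boto3.client("autoscaling")

    # Create a launch template from the pre-configured Robot AMI.
    ec2.create_launch_template(
        LaunchTemplateName="uipath-robot-template",
        LaunchTemplateData={
            "ImageId": "ami-0123456789abcdef0",  # hypothetical custom UiPath Robot AMI
            "InstanceType": "t3.large",
        },
    )

    # Create an Auto Scaling group that keeps between 1 and 10 Robots running.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="uipath-robot-asg",
        LaunchTemplate={"LaunchTemplateName": "uipath-robot-template",
                        "Version": "$Latest"},
        MinSize=1,
        MaxSize=10,
        DesiredCapacity=2,
        VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # hypothetical subnets
    )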

Scale the Robots’ Capacity

Amazon EC2 Auto Scaling groups let you use scaling policies to scale compute capacity based on resource usage. By monitoring the process queue and creating a customized scaling policy, you can scale the UiPath Robots automatically based on the workload. For details, review Scaling the size of your Auto Scaling group.
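As a sketch of what such a customized policy could look like with boto3 (the metric name and namespace are assumptions; UiPath does not publish queue depth to CloudWatch by default, so you would publish it yourself, for example from the Orchestrator API):

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Target-tracking policy against a custom CloudWatch metric.
    # "PendingQueueItems" in the "UiPath/Orchestrator" namespace is a
    # hypothetical metric you would publish yourself.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="uipath-robot-asg",
        PolicyName="scale-on-queue-depth",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "CustomizedMetricSpecification": {
                "MetricName": "PendingQueueItems",
                "Namespace": "UiPath/Orchestrator",
                "Statistic": "Average",
            },
            "TargetValue": 20.0,  # aim for roughly 20 pending items per Robot
        },
    )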

Use the Robot Logs

UiPath Robot generates multiple diagnostic and execution logs. Amazon CloudWatch provides log collection, storage, and analysis, and gives you complete visibility into the Robots and their automation tasks. For CloudWatch agent setup on the Robot instances, review Quick Start: Enable Your Amazon EC2 Instances Running Windows Server to Send logs to CloudWatch Logs.
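If you manage the log destination yourself, a short boto3 sketch creates a dedicated log group with a retention policy (the log group name is a placeholder; it must match whatever you configure in the agent):

    import boto3

    logs = boto3.client("logs")

    # A dedicated log group for the Robot logs shipped by the CloudWatch agent.
    logs.create_log_group(logGroupName="/uipath/robot")
    logs.put_retention_policy(logGroupName="/uipath/robot", retentionInDays=90)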

Monitor the Automation Jobs

UiPath Robot uses the user interface to capture data and manipulate applications. When UiPath Robot runs, it is important to capture processing screens for troubleshooting and auditing. This screen capture activity can be built into the process with UiPath Studio.

Amazon S3 provides cost-effective storage for retaining all Robot logs and processing screen captures. Amazon S3 Object Lifecycle Management automates the transition between different storage classes, and helps you manage the screenshots so that they are stored cost effectively throughout their lifecycle. For lifecycle policy creation, review How Do I Create a Lifecycle Policy for an S3 Bucket?.
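For example, a lifecycle rule that moves screenshots to a cheaper storage class after 30 days and expires them after a year might look like this with boto3 (the bucket name and prefix are placeholders):

    import boto3

    s3 = boto3.client("s3")

    # Move screenshots to Infrequent Access after 30 days; expire them after a year.
    s3.put_bucket_lifecycle_configuration(
        Bucket="uipath-robot-artifacts",          # hypothetical bucket name
        LifecycleConfiguration={
            "Rules": [{
                "ID": "archive-robot-screenshots",
                "Filter": {"Prefix": "screenshots/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
                "Expiration": {"Days": 365},
            }]
        },
    )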

UiPath Orchestrator Deployment

Deployment Components
UiPath Orchestrator Server Platform has many logical components, grouped in three layers:

  • presentation layer
  • web service layer
  • persistence layer

The presentation layer and web service layer are built into one ASP.NET website. The persistence layer contains SQL Server and Elasticsearch. There are three deployment components to be set up:

  • web application
  • SQL Server
  • Elasticsearch

The Web Server, SQL Server, and Elasticsearch Server each require their own environment. Review the hardware requirements and software requirements for more details.

Note: Set up the Web Server, SQL Server, and Elasticsearch Server environments before running the UiPath Enterprise Platform installation wizard.

Set up Web Server with Amazon EC2

UiPath Orchestrator Web Server deploys on Windows Server with IIS 7.5 or later. For details, review the software requirements.

AWS provides various AMIs for Windows Server that can help you set up the environment required for the Web Server.

The Microsoft Windows Server 2019 Base AMI includes most of the installation prerequisites, except for some Web Server (IIS) features that must be enabled. For configuration steps, review Server Roles and Features.

The Web Server should be placed in the correct subnet (public or private) and given a security group that permits HTTPS traffic, according to your business requirements. Review Allow user to connect EC2 on HTTP or HTTPS.
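A minimal boto3 sketch of opening HTTPS on the Web Server’s security group (the group ID is a placeholder; tighten the CIDR range if Orchestrator is internal-only):

    import boto3

    ec2 = boto3.client("ec2")

    # Allow inbound HTTPS to the Orchestrator Web Server.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # hypothetical security group ID
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0",
                          "Description": "HTTPS to Orchestrator"}],
        }],
    )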

Set up SQL Server with Amazon RDS

Amazon Relational Database Service (Amazon RDS) provides you with a managed database service. With a few clicks, you can set up, operate, and scale a relational database in the AWS Cloud.

Amazon RDS supports the SQL Server engine. For UiPath Orchestrator, both Standard Edition and Enterprise Edition are supported. For details, review the software requirements.

Amazon RDS can be set up across multiple Availability Zones to meet high availability requirements.
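As an illustration, a Multi-AZ SQL Server Standard Edition instance could be created like this with boto3 (the identifier, instance class, storage size, and credentials are placeholders):

    import boto3

    rds = boto3.client("rds")

    # Multi-AZ SQL Server Standard Edition instance for the Orchestrator database.
    rds.create_db_instance(
        DBInstanceIdentifier="uipath-orchestrator-db",
        Engine="sqlserver-se",            # SQL Server Standard Edition
        LicenseModel="license-included",
        DBInstanceClass="db.m5.large",
        AllocatedStorage=100,
        MasterUsername="orchestrator",
        MasterUserPassword="REPLACE_ME",  # store real credentials in Secrets Manager
        MultiAZ=True,
    )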

UiPath Orchestrator can connect to the created Amazon RDS database with SQL Server Authentication.
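From the Web Server, the connection uses an ordinary SQL Server connection string. A sketch with pyodbc, assuming a hypothetical endpoint, database name, and credentials:

    import pyodbc

    # SQL Server Authentication against the RDS endpoint; the hostname,
    # database, and credentials below are hypothetical placeholders.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=uipath-orchestrator-db.abcdefghij.us-east-1.rds.amazonaws.com,1433;"
        "DATABASE=UiPath;"
        "UID=orchestrator;PWD=REPLACE_ME"
    )
    print(conn.cursor().execute("SELECT @@VERSION").fetchone()[0])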

Set up Elasticsearch Server with Amazon Elasticsearch Service (Amazon ES)

Amazon ES is a fully managed service that lets you deploy, secure, and operate Elasticsearch at scale with generally zero downtime.

Elasticsearch Service provides a managed Elasticsearch stack, with no upfront costs or usage requirements, and without the operational overhead.

All messages logged by UiPath Robots are sent through the Logging REST endpoint to the Indexer Server where they are indexed for future utilization.
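A small domain for the Indexer Server could be provisioned like this with boto3 (the domain name, version, and sizing are assumptions; match them to your logging volume):

    import boto3

    es = boto3.client("es")

    # A small two-node domain for the Indexer Server.
    es.create_elasticsearch_domain(
        DomainName="uipath-orchestrator-logs",   # hypothetical domain name
        ElasticsearchVersion="7.10",
        ElasticsearchClusterConfig={
            "InstanceType": "m5.large.elasticsearch",
            "InstanceCount": 2,
        },
        EBSOptions={"EBSEnabled": True, "VolumeType": "gp2", "VolumeSize": 100},
    )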

Install UiPath Orchestrator on the Web Server

After the Web Server, SQL Server, and Elasticsearch Server environments are ready, download the UiPath Enterprise RPA Platform and install it on the Web Server.

The UiPath Enterprise Platform installation wizard guides you in configuring and setting up each environment, including connecting to SQL Server and configuring the Elasticsearch API URL.

After you complete setup, the UiPath Orchestrator Portal is available for you to visit and manage processes, jobs, and robots.

The UiPath Orchestrator dashboard appears as in the following screenshot:

Figure 2 – UiPath Orchestrator Portal

Set up Orchestrator High Availability Architecture

One Orchestrator can handle many robots in a typical configuration, but any product running on a single server is vulnerable to failure if something happens to that server.

The High Availability Add-on (HAA) enables you to add a second Orchestrator server to your environment that is kept fully synchronized with the first server.

To set up multi-node deployment, launch Amazon EC2 instances with a Linux OS-based Amazon Machine Image (AMI) that meets the HAA hardware and software requirements. Follow the installation guide to set up HAA.

Elastic Load Balancing automatically distributes incoming application traffic across multiple targets. A Network Load Balancer should be set up so that the Robots can communicate with the multi-node Orchestrator.
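A minimal boto3 sketch of creating such a load balancer (the subnet IDs are placeholders; you would also register the Orchestrator nodes in a TCP/443 target group):

    import boto3

    elbv2 = boto3.client("elbv2")

    # Internal Network Load Balancer in front of the Orchestrator nodes.
    response = elbv2.create_load_balancer(
        Name="uipath-orchestrator-nlb",
        Type="network",
        Scheme="internal",
        Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],  # hypothetical subnets
    )
    print(response["LoadBalancers"][0]["DNSName"])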

Cleaning up

To avoid incurring future charges, delete all the resources created for this deployment.

Conclusion

In this post, I showed you how to deploy the UiPath Enterprise RPA Platform on AWS to further optimize and automate your business processes. Managed AWS services like Amazon EC2, Amazon RDS, and Amazon Elasticsearch Service help you set up the environment with high availability, which reduces the maintenance effort for backend services and makes it easier to scale Orchestrator. Amazon EC2 Auto Scaling helps you add or remove Robots to meet changes in automation workload demand.

To learn more about how to integrate UiPath with AWS services, check out The UiPath and AWS partnership.

Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.

Boston Dynamics’ Handle robot recreated with Raspberry Pi

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/boston-dynamics-handle-robot-recreated-with-raspberry-pi/

You in the community seemed so impressed with this recent Boston Dynamics–inspired build that we decided to feature another. This time, maker Harry was inspired by Boston Dynamics’ research robot Handle, which stands 6.5 ft tall, travels at 9 mph, and jumps 4 feet vertically. Here’s how Harry made his miniature version, MABEL (Multi Axis Balancer Electronically Levelled).

MABEL has individually articulated legs to enhance off-road stability, prevent it from tipping, and even make it jump (if you use some really fast servos). Harry is certain that anyone with a 3D printer and a “few bits” can build one.

MABEL builds on the open-source YABR project for its PID controller, adding servos and a Raspberry Pi to interface with them and control everything.

Installing MABEL’s Raspberry Pi brain and wiring the servos

Thanks to a program based on the open-source YABR firmware, an Arduino handles all of the PID calculations using data from an MPU-6050 accelerometer/gyro. Raspberry Pi, using Python code, manages Bluetooth and servo control, running an inverse kinematics algorithm to translate the robot’s legs precisely in two axes.

Kit list

If you want to attempt this project yourself, the files for all the hard 3D-printed bits are on Thingiverse, and all the soft insides are on GitHub.

IKSolve is the class that handles the inverse kinematics functionality for MABEL (IKSolve.py) and allows the legs to be translated using (x, y) coordinates. It’s really simple to use: all you need to specify are the home values of each servo (these are the angles that, when passed to your servos, make the legs point straight down at 90 degrees).
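If you’re wondering what a two-link IK solver of this sort looks like, here’s an illustrative Python sketch in the same spirit; the link lengths, angle conventions, and home-offset handling are assumptions for the example, not Harry’s actual code:

    import math

    class TwoLinkIK:
        """Illustrative two-link planar leg IK (not MABEL's actual IKSolve)."""

        def __init__(self, upper=50.0, lower=50.0, home_hip=90.0, home_knee=90.0):
            self.upper, self.lower = upper, lower    # link lengths in mm (assumed)
            self.home_hip, self.home_knee = home_hip, home_knee

        def solve(self, x, y):
            """Return (hip, knee) servo angles in degrees for a foot target (x, y),
            where y points straight down from the hip."""
            d = math.hypot(x, y)                     # hip-to-foot distance
            d = max(1e-6, min(d, self.upper + self.lower - 1e-6))  # keep reachable
            # Law of cosines: interior knee angle, then the hip's lean plus offset.
            knee = math.acos((self.upper**2 + self.lower**2 - d**2)
                             / (2 * self.upper * self.lower))
            alpha = math.atan2(x, y)                 # lean away from vertical
            beta = math.acos((self.upper**2 + d**2 - self.lower**2)
                             / (2 * self.upper * d))
            # At home, the leg points straight down (alpha + beta = 0, knee = 180°).
            return (self.home_hip + math.degrees(alpha + beta),
                    self.home_knee + math.degrees(knee) - 180.0)

    leg = TwoLinkIK()
    print(leg.solve(0.0, 99.9))  # nearly fully extended: angles close to home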

When MABEL was just a twinkle in Harry’s eye

MABEL is designed to work by listening to commands on the Arduino (PID controller) end that are sent to it by Raspberry Pi over serial using pySerial. Joystick data is sent to Raspberry Pi using the inputs Python library. Harry first tried to get the joystick data from an old PlayStation 3 controller, but went with the PiHut’s Raspberry Pi Compatible Wireless Gamepad in the end for ease.
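As a flavour of how that plumbing might look (the port name, baud rate, and one-line command format here are assumptions, not Harry’s actual protocol):

    import serial
    from inputs import get_gamepad

    # Forward analogue-stick events from the gamepad to the Arduino PID controller.
    arduino = serial.Serial("/dev/ttyACM0", 115200, timeout=1)  # assumed port/baud

    while True:
        for event in get_gamepad():          # blocks until gamepad events arrive
            if event.ev_type == "Absolute":  # stick/trigger movement
                command = f"{event.code}:{event.state}\n"
                arduino.write(command.encode("ascii"))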

Keep up with Harry’s blog or give Raspibotics a follow on Twitter, as part 3 of his build write-up should be dropping imminently, featuring updates that will hopefully get MABEL jumping!

The post Boston Dynamics’ Handle robot recreated with Raspberry Pi appeared first on Raspberry Pi.