Tag Archives: drones

Corvus Robotics’ Autonomous Drones Tackle Warehouses

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/drone-warehouse-corvus-robotics

Warehouses offer all kinds of opportunities for robots. Semi-structured controlled environments, lots of repetitive tasks, and humans that would almost universally rather be somewhere else. Robots have been doing great at taking over jobs that involve moving stuff from one place to another, but there are all kinds of other things that have to happen to keep warehouses operating efficiently.

Corvus Robotics, a YC-backed startup that’s just coming out of stealth, has decided that they want to go after warehouse inventory tracking. That is, making sure that a warehouse knows exactly what’s inside of it and where. This is a more complicated task than it seems like it should be, and not just any robot is able to do it. Corvus’ solution involves autonomous drones that can fly unattended for weeks on end, collecting inventory data without any human intervention at all.




Many warehouses have a dedicated team of humans whose job is to wander around the warehouse scanning stuff to maintain an up-to-date list of where everything is, a task which is both very important and very boring. As it turns out, autonomous drones can scan up to ten times faster than humans—Corvus Robotics’ drones are able to inventory an entire warehouse on a rolling basis in just a couple of days, while it would take a human team weeks to do the same task.

Inventory is a significant opportunity for robotics, and we’ve seen a bunch of different attempts at doing inventory in places like supermarkets, but warehouses are different. Warehouses can be huge, in every dimension, meaning that the kinds of robots that can make supermarket inventory work just won’t cut it in a warehouse environment for the simple reason that they can’t see inventory stacked on shelves all the way to the ceiling, which can be over 20m high. And this is why the drone form factor, while novel, actually offers a uniquely useful solution.

It’s probably fair to think of a warehouse as a semi-structured environment, with emphasis on the “semi.” At the beginning of a deployment, Corvus will generate one map of the operating area that includes both geometric and semantic information. After that, the drones will autonomously update that map with each flight throughout their entire lifetimes. There are walls and ceilings that don’t move, along with large shelving units that are mostly stationary, but those things aren’t going to do your localization system any favors since they all look the same. And the stuff that does offer some uniqueness, like the items on those shelves, is changing all the time. “That’s a huge problem for us,” says Mohammed Kabir, Corvus Robotics’ CTO. “Being able to do place recognition at the granularity that we need while everything is changing is really hard.” If you were looking closely at the video, you may have spotted some fiducials (optical patterns placed in the environment that vision systems find easy to spot), but we’re told that the video was shot in Corvus Robotics’ development warehouse where those markers are used for ground truth testing.

In real deployments, fiducials (or any other external infrastructure) aren’t necessary. The drone has its charging dock and the initial map, but otherwise it’s doing onboard visual-inertial SLAM (simultaneous localization and mapping), dense volumetric mapping, and motion planning with its 10-camera array and an autonomy stack running on ROS and PX4 for real-time flight control. Corvus isn’t willing to let us in on all of their secrets, but they did tell us that they incorporate some of the structured components of the environment into their SLAM solution, as well as some things that are semi-static—that is, things that are unlikely to change over the duration of a single flight, which helps the drone with loop closure.
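
To make that idea concrete, here is a minimal sketch (not Corvus’ actual code) of one way a place-recognition step could weight landmarks by how persistent they are expected to be, so that features on walls and racking count for more than features on inventory that may have moved since the map was built. The class names and weights are illustrative assumptions.

```python
from dataclasses import dataclass

# Persistence weights are illustrative, not Corvus' actual values.
PERSISTENCE = {
    "structural": 1.0,    # walls, ceiling, racking uprights
    "semi_static": 0.6,   # pallets unlikely to move within one flight
    "dynamic": 0.0,       # boxes, people, forklifts
}

@dataclass
class Landmark:
    descriptor_distance: float  # appearance distance to the query feature
    semantic_class: str         # label from the semantic layer of the map

def loop_closure_score(matches):
    """Aggregate match quality, weighted by how persistent each landmark is."""
    score = 0.0
    for m in matches:
        weight = PERSISTENCE.get(m.semantic_class, 0.0)
        score += weight * max(0.0, 1.0 - m.descriptor_distance)
    return score

candidates = [
    Landmark(0.2, "structural"),
    Landmark(0.4, "semi_static"),
    Landmark(0.1, "dynamic"),   # good appearance match, but untrustworthy
]
print(loop_closure_score(candidates))  # the dynamic match contributes nothing
```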

One of the big parts of being able to do this is the ability to localize in very large, unstructured environments where things are constantly changing without having to rely on external infrastructure. For example, a WiFi connection back to our base station is not guaranteed, so everything needs to run on-board the drone, which is a non-trivial task. It’s essentially all of the compute of a self-driving car, compressed into the drone. -Mohammed Kabir

Corvus is able to scan between 200 and 400 pallet positions per hour per drone, inclusive of recharge time. At ground level, this is probably about equivalent in speed to a human (although more sustainable). But as you start looking at inventory higher off the ground, the drone maintains a constant scan rate, while for a human, it gets exponentially harder, involving things like strapping yourself to a forklift. And of course the majority of the items in a high warehouse are not at ground level, because ground level only covers a tier or two of a space that may soar to 20 meters. Overall, Corvus says that they can do inventory up to 10x faster than a human.
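
As a rough sanity check on those numbers, here is a back-of-envelope calculation of what the quoted scan rates imply for wall-clock inventory time. The warehouse size, fleet size, and flight hours are made-up illustrations, not Corvus figures.

```python
# Back-of-envelope check on the scan rates quoted above; the warehouse and
# fleet sizes here are hypothetical, not Corvus figures.
pallet_positions = 30_000                   # hypothetical warehouse
scan_rate_low, scan_rate_high = 200, 400    # positions/hour/drone, incl. recharging
drones = 3
flight_hours_per_day = 16                   # e.g. overnight plus off-peak shifts

for rate in (scan_rate_low, scan_rate_high):
    hours = pallet_positions / (rate * drones)
    days = hours / flight_hours_per_day
    print(f"{rate} positions/h/drone -> {hours:.0f} h of flying, ~{days:.1f} calendar days")
```

At the higher scan rate, a small fleet covers a 30,000-position warehouse in under two days, which lines up with the rolling inventory cadence described above.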

With a few exceptions, it’s unlikely that most warehouses are going to be able to go human-free in the foreseeable future, meaning that any time you talk about robot autonomy, you also have to talk about safety. “We can operate when no one’s around, so our customers often schedule the drones during the third shift when the warehouse is dark,” says Mohammed Kabir. “There are also customers who want us to operate around people, which initially terrified us, because interacting with humans can be quite tricky. But over the last couple years, we’ve built safety systems to be able to deal with that.” In addition to the collision avoidance that comes with the 360 degree vision system that the drone uses to navigate, it has a variety of safety-first behaviors all the way up to searching for clear flat spots to land in the event of an emergency. But it sounds like the primary way that Corvus tries to maintain safety is by keeping drones and humans as separate as possible, which may involve process changes for the warehouse, explains Corvus Robotics CEO Jackie Wu. “If you see a drone in an aisle, just don’t go in until it’s done.”

We also asked Wu about what exactly he means when he calls the Corvus Robotics drone “fully autonomous,” because depending on who you ask (and what kind of robot and task you’re talking about), full autonomy can mean a lot of different things.

For us, full autonomy means continuous end to end operation with no human in the loop within a certain scenario or environment. Obviously, it’s not level five autonomy, because nobody is doing level five, which would take some kind of generalized intelligence that can fly anywhere. But, for level four, for the warehouse interior, the drones fly on scheduled missions, intelligently find objects of interest while avoiding collisions, come back to land, recharge and share that data, all without anybody touching them. And we’re able to do this repeatedly, without external localization infrastructure. -Jackie Wu

As tempting as it is, we’re not going to get into the weeds here about what exactly constitutes “full autonomy” in the context of drones. Well, okay, maybe we’ll get into the weeds a little bit, just to say that being able to repeatedly do a useful task end-to-end without a human in the loop seems close enough to whatever your definition of full autonomy is that it’s probably a fair term to apply here. Are there other drones that are arguably more autonomous, in the sense that they require even less structure in the environment? Sure. Are those same drones arguably less autonomous because they don’t autonomously recharge? Probably. Corvus Robotics’ perspective that the ability to run a drone autonomously for weeks at a time is a more important component of autonomy is perfectly valid considering their use case, but I think we’re at the point where “full autonomy” at this level is becoming domain-specific enough to make direct comparisons difficult and maybe not all that useful.

Corvus has just recently come out of stealth, and they’re currently working on pilot projects with a handful of Global 2000 companies.

How Robots Helped Out After the Surfside Condo Collapse

Post Syndicated from Robin R. Murphy original https://spectrum.ieee.org/building-collapse-surfside-robots

Editor’s Note: Along with Robin Murphy, the authors of this article include David Merrick, Justin Adams, Jarrett Broder, Austin Bush, Laura Hart, and Rayne Hawkins. This team is with Florida State University’s Disaster Incident Response Team, which was in Surfside for 24 days at the request of Florida US&R Task Force 1 (Miami Dade Fire Rescue Department).

On June 24, 2021, at 1:25 AM, portions of the 12-story Champlain Towers South condominium in Surfside, Florida, collapsed, killing 98 people and injuring 11, making it the third-largest fatal collapse in US history. The life-saving and mitigation Response Phase, the phase where responders from local, state, and federal agencies searched for survivors, spanned June 24 to July 7, 2021. This article summarizes what is known about the use of robots at Champlain Towers South, and offers insights into challenges for unmanned systems.


Small unmanned aerial systems (drones) were used immediately upon arrival by the Miami Dade Fire Rescue (MDFR) Department to survey the roughly 2.68 acre affected area. Drones, such as the DJI Mavic Enterprise Dual with a spotlight payload and thermal imaging, flew in the dark to determine the scope of the collapse and search for survivors. Regional and state emergency management drone teams were requested later that day to supplement the effort of flying day and night for tactical life-saving operations and to add flights for strategic operations to support managing the overall response.


View of a Phantom 4 Pro in use for mapping the collapse on July 2, 2021. Two other drones were also in the airspace conducting other missions but not visible. Photo: Robin R. Murphy

The teams brought at least 9 models of rotorcraft drones, including the DJI Mavic 2 Enterprise Dual, Mavic 2 Enterprise Advanced, DJI Mavic 2 Zoom, DJI Mavic Mini, DJI Phantom 4 Pro, DJI Matrice 210, Autel Dragonfish, and Autel EVO II Pro, plus a tethered Fotokite drone. The picture above shows a DJI Phantom 4 Pro in use, with one of the multiple cranes on the site visible. The number of flights for tactical operations was not recorded, but drones were flown for 304 missions for strategic operations alone, making the Surfside collapse the largest and longest recorded use of drones for a disaster, exceeding the records set by Hurricane Harvey (112 missions) and Hurricane Florence (260).

Unmanned ground bomb squad robots were reportedly used on at least two occasions in the standing portion of the structure during the response, once to investigate and document the garage and once on July 9 to hold a repeater for a drone flying in the standing portion of the garage. Note that details about the ground robots are not yet available and there may have been more missions, though not on the order of magnitude of the drone use. Bomb squad robots tend to be too large for use in areas other than the standing portions of the collapse.

We concentrate on the use of the drones for tactical and strategic operations, as the authors were directly involved in those operations. This article offers a preliminary analysis of the lessons learned. The full details of the response will not be available for many months due to the nature of an active investigation into the causes of the collapse and due to privacy of the victims and their families.

Drone Use for Tactical Operations

Tactical operations were carried out primarily by MDFR, with other drone teams supporting when necessary to meet the workload. Drones were first used by the MDFR drone team, which arrived within minutes of the collapse as part of the escalating calls. The drone effort started with night operations for direct life-saving and mitigation activities. Small DJI Mavic 2 Enterprise Dual drones with thermal camera and spotlight payloads were used for general situation awareness, helping responders understand the extent of the collapse beyond what could be seen from the street side. The built-in thermal imager lacked the resolution to show much detail, since most of the material was at the same temperature and heat emissions were diffuse. The spotlight with the standard visible-light camera was more effective, though the view was constricted. The drones were also used to look for survivors or trapped victims, help identify safety hazards to responders, and provide task force leaders with overwatch of the responders. During daylight, DJI Mavic 2 Zoom drones were added because of their higher-resolution zoom cameras. When fires started in the rubble, drones streaming video to bucket truck operators were used to help optimize the placement of water. Drones were also used to locate civilians entering the restricted area or flying their own drones to take pictures.

As the response evolved, the use of drones was expanded to missions where the drones would fly in close proximity to structures and objects, fly indoors, and physically interact with the environment. For example, drones were used to read license plates to help identify residents, search for pets, and document belongings inside parts of the standing structure for families. In a novel use of drones for physical interaction, MDFR squads flew drones to attempt to find and pick up items of immense value to survivors in the standing portion of the structure. Before the demolition of the standing portion of the tower, MDFR used a drone to remove an American flag that had been placed on the structure during the initial search.

Drone Use for Strategic Operations


An orthomosaic of the collapse constructed from imagery collected by a drone on July 1, 2021.

Strategic operations were carried out by the Disaster Incident Research Team (DIRT) from the Florida State University Center for Disaster Risk Policy. The DIRT team is a state of Florida asset and was requested by Florida Task Force 1 when it was activated to assist later on June 24. FSU supported tactical operations but was solely responsible for collecting and processing imagery for use in managing the response. This data was primarily orthomosaic maps (a single high-resolution image of the collapse created by stitching together individual high-resolution images, as in the image above) and digital elevation maps (created from structure from motion, below).


Digital elevation map constructed from imagery collected by a drone on June 27, 2021. Photo: Robin R. Murphy

These maps were collected every two to four hours during daylight, with FSU flying an average of 15.75 missions per day for the first two weeks of the response. The latest orthomosaic maps were downloaded at the start of a shift by the tactical responders for use as base maps on their mobile devices. In addition, a 3D reconstruction of the state of the collapse on July 4 was flown the afternoon before the standing portion was demolished, shown below.


GeoCam 3D reconstruction of the collapse on July 4, 2021. Photo: Robin R. Murphy

The mapping functions are notable because they require specialized software for data collection and post-processing, and the speed of post-processing relied on wireless connectivity. In order to stitch and fuse images without gaps or major misalignments, dedicated software packages are used to generate flight paths and to autonomously fly the drone and trigger image capture with sufficient coverage of the collapse and overlap between images.
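
The underlying geometry of that coverage planning is straightforward. As a hedged illustration (not the specific software used at Surfside), the sketch below derives the spacing between flight lines and the image-trigger distance from camera field of view, altitude, and desired overlap; all numbers are placeholders.

```python
import math

# Generic coverage-planning geometry: given camera field of view, altitude,
# and desired overlap, compute the spacing between images and between
# parallel flight lines. Values are illustrative only.
def footprint(fov_deg, altitude_m):
    """Ground footprint (m) of one image dimension for a nadir-pointing camera."""
    return 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)

def spacing(fov_deg, altitude_m, overlap):
    """Distance between consecutive images (or lines) for a given overlap fraction."""
    return footprint(fov_deg, altitude_m) * (1 - overlap)

altitude = 60.0                      # meters above the pile
front_overlap, side_overlap = 0.8, 0.7
horiz_fov, vert_fov = 73.0, 53.0     # rough values for a small-drone camera

line_spacing = spacing(horiz_fov, altitude, side_overlap)
trigger_distance = spacing(vert_fov, altitude, front_overlap)
print(f"fly parallel lines {line_spacing:.1f} m apart, "
      f"trigger an image every {trigger_distance:.1f} m")
```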

Coordination of Drones on Site

The aerial assets were loosely coordinated through social media. All drone teams and Federal Aviation Administration (FAA) officials shared a WhatsApp group chat managed by MDFR. WhatsApp offered ease of use, compatibility with everyone’s smartphones and mobile devices, and ease of adding pilots. Ease of adding pilots was important because many were not from MDFR and thus would not be in any personnel-oriented coordination system. The pilots did not have physical meetings or briefings as a whole, though the tactical and strategic operations teams did share a common space (nicknamed “Drone Zone”) while the National Institute of Standards and Technology teams worked from a separate staging location. If a pilot was approved by the MDFR drone captain, who served as the “air boss,” they were invited to the WhatsApp group chat and could then begin flying immediately without physically meeting the other pilots.

The teams flew concurrently and independently, without rigid, pre-specified altitude or area restrictions. A team would post which area of the collapse they were taking off to fly and at what altitude, and then post again when they landed. The easiest solution was for the pilots to be aware of each others’ drones and adjust their missions, pause, or temporarily defer flights. If a pilot forgot to post, someone would send a teasing chat eliciting a rapid apology.

Incursions by civilian manned and unmanned aircraft into the restricted airspace did occur. If FAA observers or other pilots saw a drone flying that was not accounted for in the chat (e.g., five drones were visible over the area but only four were posted), or if a drone pilot saw a drone in an unexpected area, they would post a query asking whether someone had forgotten to post or update a flight. If the drone remained unaccounted for, the FAA would assume that a civilian drone had violated the temporary flight restrictions and search the surrounding area for the offending pilot.

Preliminary Lessons Learned

While the drone data and performance are still being analyzed, some lessons learned have emerged that may be of value to the robotics, AI, and engineering communities.

Tactical and strategic operations during the response phase favored small, inexpensive, easy-to-carry platforms with cameras supporting coarse structure from motion, rather than larger, more expensive lidar systems. The added accuracy of lidar was not needed for those missions, though the greater accuracy and resolution of such systems were valuable for the forensic structural analysis. For tactical and strategic operations, the benefits of lidar were not worth the capital costs and logistical burden. Indeed, general-purpose consumer/prosumer drones that could fly day or night, indoors and outdoors, and for both mapping and first-person-view missions were highly preferred over specialized drones. The reliability of a drone was another major factor in choosing a specific model to field, again favoring consumer/prosumer drones, as they typically have hundreds of thousands of hours more flight time than specialized or novel drones. Tethered drones offer some advantages for overwatch, but many tactical operations missions require a great deal of mobility. Strategic mapping necessitates flying directly over the entire area being mapped.

While small, inexpensive general-purpose drones offered many advantages, they could be further improved for flying at night and indoors. A wider area of lighting would be helpful. A 360-degree (spherical) field of coverage for obstacle avoidance would also be useful for working indoors or at low altitudes, in close proximity to irregular work envelopes, and near people, especially at night. Systems such as the Flyability ELIOS 2 are designed to fly in narrow and highly cluttered indoor areas, but no models were available for the immediate response. Drone camera systems need to be able to look straight up to inspect the underside of structures or ceilings. Mechanisms for determining the accurate GPS location of a pixel in an image, not just the GPS location of the drone, are becoming increasingly desirable.
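
As a rough illustration of that last point, the sketch below georeferences a pixel under a flat-ground, nadir-camera assumption, ignoring lens distortion and terrain relief. Real systems would need to handle camera tilt and the rubble’s 3D surface; every value shown is hypothetical.

```python
import math

# Minimal sketch: project a pixel ray from a nadir-pointing camera onto flat
# ground and offset the drone's GPS position accordingly. Illustrative only.
def pixel_to_gps(lat, lon, alt_agl, yaw_deg, px, py, width, height, fov_h_deg, fov_v_deg):
    # Angular offset of the pixel from the image center.
    ang_x = math.radians((px / width - 0.5) * fov_h_deg)
    ang_y = math.radians((0.5 - py / height) * fov_v_deg)
    # Ground offsets in the camera frame (meters): right of and ahead of the drone.
    dx_cam = alt_agl * math.tan(ang_x)
    dy_cam = alt_agl * math.tan(ang_y)
    # Rotate by the drone's heading (clockwise from north) into east/north offsets.
    yaw = math.radians(yaw_deg)
    d_east = dx_cam * math.cos(yaw) + dy_cam * math.sin(yaw)
    d_north = -dx_cam * math.sin(yaw) + dy_cam * math.cos(yaw)
    # Convert meters to degrees (small-offset approximation).
    dlat = d_north / 111_320.0
    dlon = d_east / (111_320.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

# Hypothetical drone pose and pixel coordinates.
print(pixel_to_gps(25.8796, -80.1208, 60.0, 90.0, 3200, 500, 4000, 3000, 73.0, 53.0))
```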

Other technologies could be of benefit to the enterprise but face challenges. Computer vision/machine learning (CV/ML) for searching for victims in rubble is often mentioned as a possible goal, but a search for victims who are not on the surface of the collapse is not visually directed. The portions of victims that are not covered by rubble are usually camouflaged with gray dust, so searches tend to favor canines using scent. Another challenge for CV/ML methods is the lack of access to training data. Privacy and ethical concerns pose barriers to the research community gaining access to imagery with victims in the rubble, but simulations may not have sufficient fidelity.

The collapse supplies motivation for how informatics, human-computer interaction, and human-robot interaction research can contribute to the effective use of robots during a disaster, and it illustrates that a response does not follow a strictly centralized, hierarchical command structure and that the agencies and members of the response are not known in advance. Proposed systems must be flexible, robust, and easy to use. Furthermore, it is not clear that responders will accept a totally new software app versus making do with a general-purpose app such as WhatsApp that the majority routinely use for other purposes.

However, the biggest lesson learned is that robots are helpful and warrant more investment, particularly as many US states are proposing to terminate purchases of the very drone models that were so effective, over cybersecurity concerns. There remains much work to be done by researchers, manufacturers, and emergency management to make these critical technologies more useful for extreme environments. Our current work is focusing on creating open source datasets and documentation and conducting a more thorough analysis to accelerate the process.

Value of Drones

The pervasive use of the drones indicates their implicit value to responding to, and documenting, the disaster. It is difficult to quantify the impact of drones, similar to the difficulties in quantifying the impact of a fire truck on firefighting or the use of mobile devices in general. Simply put, drones would not have been used beyond a few flights if they were not valuable.

The impact of the drones on tactical operations was immediate, as upon arrival MDFR flew drones to assess the extent of the collapse. Lighting on fire trucks primarily illuminated the street side of the standing portion of the building, while the drones, unrestricted by streets or debris, quickly expanded situation awareness of the disaster. The drones were used to optimize placement of water to suppress the fires in the debris. The impact of the use of drones for other tactical activities is harder to quantify, but the frequent flights and pilots remaining on stand-by 24/7 indicate their value.

The impact of the drones on strategic operations was also considerable. The data collected by the drones and then processed into 2D maps and 3D models became a critical part of the US&R operations, as well as one part of the nascent investigation into why the building failed. During initial operations, DIRT provided 2D maps to the US&R teams four times per day. These maps became the base layers for the mobile apps used on the pile to mark the locations of human remains, structural members of the building, personal effects, or other identifiable information. Updated orthophotos were critical to the accuracy of these reports. The apps running on mobile devices suffered from GPS accuracy issues, often with errors as high as ten meters. By having base imagery that was only hours old, mobile app users were able to ‘drag the pin’ on the mobile app to a more accurate report location on the pile, all by visualizing where they were standing compared to fresh UAS imagery. Without this capability, none of the GPS field data would be of use to US&R or investigators looking at why the structural collapse occurred. In addition to serving as a base layer on mobile applications, the updated map imagery was used in all tactical, operational, and strategic dashboards by the individual US&R teams as well as the FEMA US&R Incident Support Team (IST) on site to assist in the management of the incident.

Aside from the 2D maps and orthophotos, 3D models were created from the drone data and used by structural experts to plan operations, including identifying areas with high probabilities of finding survivors or victims. Three-dimensional data created through post-processing also supported the demand for up-to-date volumetric estimates – how much material was being removed from the pile, and how much remained. These metrics provided clear indications of progress throughout the operations.
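
Conceptually, those volumetric estimates come from differencing successive digital elevation models (DEMs) of the pile. The sketch below shows the idea with randomly generated stand-in DEMs; actual estimates would use the georeferenced surfaces produced by the mapping flights.

```python
import numpy as np

# Estimate how much material was removed between two gridded DEMs of the pile
# by differencing cell heights. The DEMs here are random stand-ins.
cell_size = 0.05                                    # meters per DEM cell edge
dem_before = np.random.rand(200, 200) * 3.0 + 2.0   # heights in meters
dem_after = dem_before - np.random.rand(200, 200) * 0.5

height_change = dem_before - dem_after              # positive where material was removed
removed_m3 = np.clip(height_change, 0, None).sum() * cell_size ** 2
added_m3 = np.clip(-height_change, 0, None).sum() * cell_size ** 2
print(f"removed ~{removed_m3:.1f} m^3, added ~{added_m3:.1f} m^3")
```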

Acknowledgments

Portions of this work were supported by NSF grants IIS-1945105 and CMMI-2140451. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

The authors express their sincere condolences to the families of the victims.

Nothing Can Keep This Drone Down

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/nothing-can-keep-this-drone-down

When life knocks you down, you’ve got to get back up. Ladybugs take this advice seriously in the most literal sense. If caught on their backs, the insects are able to use their tough exterior wings, called elytra (of late made famous in the game Minecraft), to self-right themselves in just a fraction of a second.

Inspired by this approach, researchers have created self-righting drones with artificial elytra. Simulations and experiments show that the artificial elytra can not only help salvage fixed-wing drones from compromising positions, but also improve the aerodynamics of the vehicles during flight. The results are described in a study published July 9 in IEEE Robotics and Automation Letters.

Charalampos Vourtsis is a doctoral assistant at the Laboratory of Intelligent Systems, Ecole Polytechnique Federale de Lausanne in Switzerland who co-created the new design. He notes that beetles, including ladybugs, have existed for tens of millions of years. “Over that time, they have developed several survival mechanisms that we found to be a source of inspiration for applications in modern robotics,” he says.

His team was particularly intrigued by beetles’ elytra, which for ladybugs are their famous black-spotted, red exterior wings. Underneath the elytra is the hind wing, the semi-transparent appendage that’s actually used for flight.

When stuck on their backs, ladybugs use their elytra to stabilize themselves, and then thrust their legs or hind wings in order to pitch over and self-right. Vourtsis’ team designed Micro Aerial Vehicles (MAVs) that use a similar technique, but with actuators to provide the self-righting force. “Similar to the insect, the artificial elytra feature degrees of freedom that allow them to reorient the vehicle if it flips over or lands upside down,” explains Vourtsis.



The researchers created and tested artificial elytra of different lengths (11, 14 and 17 centimeters) and torques to determine the most effective combination for self-righting a fixed-wing drone. While torque had little impact on performance, the length of elytra was found to be influential.

On a flat, hard surface, the shorter elytra lengths yielded mixed results. However, the longer length was associated with a perfect success rate. The longer elytra were then tested on different inclines of 10°, 20° and 30°, and at different orientations. The drones used the elytra to self-right themselves in all scenarios, except for one position at the steepest incline.  

The design was also tested on seven different terrains: pavement, coarse sand, fine sand, rocks, shells, wood chips and grass. The drones were able to self-right with a perfect success rate across all terrains, with the exception of grass and fine sand. Vourtsis notes that the current design was made from widely available materials and a simple scale model of the beetle’s elytra—but further optimization may help the drones self-right on these more difficult terrains.

As an added bonus, the elytra were found to add non-negligible lift during flight, which offsets their weight.  

Vourtsis says his team hopes to benefit from other design features of the beetles’ elytra. “We are currently investigating elytra for protecting folding wings when the drone moves on the ground among bushes, stones, and other obstacles, just like beetles do,” explains Vourtsis. “That would enable drones to fly long distances with large, unfolded wings, and safely land and locomote in a compact format in narrow spaces.”

New US Electronic Warfare Platform

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/05/new_us_electron.html

The Army is developing a new electronic warfare pod capable of being put on drones and on trucks.

…the Silent Crow pod is now the leading contender for the flying flagship of the Army’s rebuilt electronic warfare force. Army EW was largely disbanded after the Cold War, except for short-range jammers to shut down remote-controlled roadside bombs. Now it’s being urgently rebuilt to counter Russia and China, whose high-tech forces — unlike Afghan guerrillas — rely heavily on radio and radar systems, whose transmissions US forces must be able to detect, analyze and disrupt.

It’s hard to tell what this thing can do. Possibly a lot, but it’s all still in prototype stage.

Historically, cyber operations occurred over landline networks and electronic warfare over radio-frequency (RF) airwaves. The rise of wireless networks has caused the two to blur. The military wants to move away from traditional high-powered jamming, which filled the frequencies the enemy used with blasts of static, to precisely targeted techniques, designed to subtly disrupt the enemy’s communications and radar networks without their realizing they’re being deceived. There are even reports that “RF-enabled cyber” can transmit computer viruses wirelessly into an enemy network, although Wojnar declined to confirm or deny such sensitive details.

[…]

The pod’s digital brain also uses machine-learning algorithms to analyze enemy signals it detects and compute effective countermeasures on the fly, instead of having to return to base and download new data to human analysts. (Insiders call this cognitive electronic warfare). Lockheed also offers larger artificial intelligences to assist post-mission analysis on the ground, Wojnar said. But while an AI small enough to fit inside the pod is necessarily less powerful, it can respond immediately in a way a traditional system never could.

EDITED TO ADD (5/14): Here are two reports on Russian electronic warfare capabilities.

Another Attack Against Driverless Cars

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/07/another_attack_.html

In this piece of research, attackers successfully attack a driverless car system — Renault Captur’s “Level 0” autopilot (Level 0 systems advise human drivers but do not directly operate cars) — by following the car with drones that project images of fake road signs in 100ms bursts. The time is too short for human perception, but long enough to fool the autopilot’s sensors.

Boing Boing post.

Drone Denial-of-Service Attack against Gatwick Airport

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/12/drone_denial-of.html

Someone is flying a drone over Gatwick Airport in order to disrupt service:

Chris Woodroofe, Gatwick’s chief operating officer, said on Thursday afternoon there had been another drone sighting which meant it was impossible to say when the airport would reopen.

He told BBC News: “There are 110,000 passengers due to fly today, and the vast majority of those will see cancellations and disruption. We have had within the last hour another drone sighting so at this stage we are not open and I cannot tell you what time we will open.

“It was on the airport, seen by the police and corroborated. So having seen that drone that close to the runway it was unsafe to reopen.”

The economics of this kind of thing isn’t in our favor. A drone is cheap. Closing an airport for a day is very expensive.

I don’t think we’re going to solve this by jammers, or GPS-enabled drones that won’t fly over restricted areas. I’ve seen some technologies that will safely disable drones in flight, but I’m not optimistic about those in the near term. The best defense is probably punitive penalties for anyone doing something like this — enough to discourage others.

There are a lot of similar security situations, in which the cost to attack is vastly cheaper than 1) the damage caused by the attack, and 2) the cost to defend. I have long believed that this sort of thing represents an existential threat to our society.

EDITED TO ADD (12/23): The airport has deployed some anti-drone technology and reopened.

Autonomous drones (only slightly flammable)

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/autonomous-drones-only-slightly-flammable/

I had an email a little while ago, which opened: “I don’t know if you remember me, but…”

As it happens, I remembered Andy Baker very well, in large part because an indoor autonomous drone demo he ran at a Raspberry Pi birthday party a couple of years ago ACTUALLY CAUGHT FIRE. Here’s a refresher.

Raspberry Pi Party Autonomous drone demo + fire

At the Raspberry Pi IV party and there is a great demo of an Autonomous drone which is very impressive with only using a Pi. However it caught on fire. But i believe it does actually work.

We’ve been very careful since then to make sure that speakers are always accompanied by a fire extinguisher.

I love stories like Andy’s. He started working with the Raspberry Pi shortly after our first release in 2012, and had absolutely no experience with drones or programming them; there’s nothing more interesting than watching someone go from a standing start to something really impressive. It’s been a couple of years since we were last in touch, but Andy mailed me last week to let me know he’s just completed his piDrone project, after years of development. I thought you’d like to hear about it too. Over to Andy!

Building an autonomous drone from scratch

I suffer from “terminal boredom syndrome”; I always need a challenging hobby to keep me sane. In 2012, the Raspberry Pi was launched just as my previous hobby had come to an end. After six months of playing (including a Raspberry Pi version of a BBC Micro Turtle robot I did at school 30+ years ago), I was looking for something really challenging. DIY drones were emerging, so I set out making one with a Raspberry Pi and Python, from absolute ignorance but loads of motivation.  Six years later, with only one fire (at the Raspberry Pi 4th Birthday Party, no less!), the job is done.

Here’s smaller Zoë, larger Hermione and their remote-controller, Ivy:

Zoë (as in “Ball”), the smallest drone, is based on a Pi ZeroW, supporting preset- and manual-flight controls. Hermione (as in “Granger”) is a Pi3 drone, supporting the above along with GPS and obstacle-avoidance.

Penelope (as in “Pitstop”), not shown above, is a Pi 3B+ drone with a mix of the features of the two above.

Development history

It probably took four years(!) to get the drone simply to hover stably for more than a few seconds. For example, the accelerometer (IMU) measures gravity and acceleration in 3D, and from some math(s) you can derive angles, speed and distance. But IMU output is very noisy. It drifts with temperature, and because gravity is huge compared to the changes from the propellers, it doesn’t take long before the calculated speed and distance values drift significantly. It took a lot of time, experimentation and guesswork to get the accelerometer, gyrometer, ground-facing LiDAR and a Raspberry Pi camera to work together to get a stable hover for minutes rather than seconds. And during that experimentation, there were plenty of crashes: replacement parts were needed many, many times! However, with a sixty-second stable hover finally working, adding cool features like GPS tracking, object avoidance and human control was trivial in comparison.
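
For readers who haven’t fought this particular battle, the toy example below shows why raw integration drifts and why fusing sensors helps: it compares integrating a slightly biased gyro on its own against blending it with a noisy but unbiased accelerometer angle. This is a generic complementary filter, not the piDrone’s actual fusion code, and all the noise figures are invented.

```python
import math
import random

dt = 0.01                 # 100 Hz control loop
true_angle = 0.0          # the drone is actually holding level
gyro_bias = 0.02          # rad/s of uncorrected bias (e.g. temperature drift)
alpha = 0.98              # how much we trust the integrated gyro

gyro_only = 0.0
fused = 0.0
for step in range(6000):  # one minute of hover
    gyro = gyro_bias + random.gauss(0, 0.01)          # rad/s, biased and noisy
    accel_angle = true_angle + random.gauss(0, 0.05)  # rad, noisy but unbiased
    gyro_only += gyro * dt                            # pure integration drifts
    fused = alpha * (fused + gyro * dt) + (1 - alpha) * accel_angle

print(f"gyro-only estimate after 60 s: {math.degrees(gyro_only):.1f} deg of drift")
print(f"complementary-filter estimate: {math.degrees(fused):.2f} deg")
```

After a minute, pure integration has wandered tens of degrees, while the fused estimate stays within a fraction of a degree: the same drift problem, in miniature, that took years of LiDAR and camera fusion to solve on the real aircraft.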

GNSS waypoint tracked successfully!

See http://blog.pistuffing.co.uk/whoohoo/

Obstruction avoidance test 2 – PASSED!!!!!

Details at http://pidrone.io/posts/obstruction-avoidance-test-2-passed/

Human control (iPhone)

See http://pidrone.io/posts/human-i-am-human/

In passing, I’m a co-founder and assistant at the Cotswold Raspberry Jam (cotswoldjam.org). I’m hoping to take Zoë to the next event on September 15th – tickets are free – and there’s so much more to learn, interact with, and play with beyond the piDrone.

Finally, a few years ago, my goal became getting the piDrone exploring a maze: all but minor tweaks are now in place. Sadly, piDrone battery power for exploring a large maze currently doesn’t exist. Perhaps my next project will be designing a nuclear-fusion battery pack? Deuterium oxide (heavy water) is surprisingly cheap, it seems…

More resources

If you want to learn more, there’s years of development on Andy’s blog at http://pidrone.io, and he’s made considerable documentation available at GitHub if you want to explore things further after this blog post. Thanks Andy!

The post Autonomous drones (only slightly flammable) appeared first on Raspberry Pi.

New – Machine Learning Inference at the Edge Using AWS Greengrass

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-machine-learning-inference-at-the-edge-using-aws-greengrass/

What happens when you combine the Internet of Things, Machine Learning, and Edge Computing? Before I tell you, let’s review each one and discuss what AWS has to offer.

Internet of Things (IoT) – Devices that connect the physical world and the digital one. The devices, often equipped with one or more types of sensors, can be found in factories, vehicles, mines, fields, homes, and so forth. Important AWS services include AWS IoT Core, AWS IoT Analytics, AWS IoT Device Management, and Amazon FreeRTOS, along with others that you can find on the AWS IoT page.

Machine Learning (ML) – Systems that can be trained using an at-scale dataset and statistical algorithms, and used to make inferences from fresh data. At Amazon we use machine learning to drive the recommendations that you see when you shop, to optimize the paths in our fulfillment centers, fly drones, and much more. We support leading open source machine learning frameworks such as TensorFlow and MXNet, and make ML accessible and easy to use through Amazon SageMaker. We also provide Amazon Rekognition for images and for video, Amazon Lex for chatbots, and a wide array of language services for text analysis, translation, speech recognition, and text to speech.

Edge Computing – The power to have compute resources and decision-making capabilities in disparate locations, often with intermittent or no connectivity to the cloud. AWS Greengrass builds on AWS IoT, giving you the ability to run Lambda functions and keep device state in sync even when not connected to the Internet.

ML Inference at the Edge
Today I would like to toss all three of these important new technologies into a blender! You can now perform Machine Learning inference at the edge using AWS Greengrass. This allows you to use the power of the AWS cloud (including fast, powerful instances equipped with GPUs) to build, train, and test your ML models before deploying them to small, low-powered, intermittently-connected IoT devices running in those factories, vehicles, mines, fields, and homes that I mentioned.
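
To give a feel for the shape of this, here is a hedged sketch of what a Greengrass Lambda function doing local inference might look like: it loads a model from a local resource path and publishes results over MQTT with the Greengrass SDK. The resource path, topic name, and the load_model/predict placeholders are assumptions standing in for whichever ML framework you deploy.

```python
import json
import greengrasssdk

iot_client = greengrasssdk.client("iot-data")

MODEL_PATH = "/greengrass-machine-learning/mxnet/model"   # assumed local resource path
TOPIC = "factory/camera1/inference"                       # assumed MQTT topic

def load_model(path):
    # Placeholder: load the precompiled MXNet/TensorFlow model that Greengrass
    # deployed to the local resource path configured for this group.
    return path

def predict(model, image_bytes):
    # Placeholder for a real forward pass on the edge device.
    return {"label": "anomaly", "confidence": 0.91}

model = load_model(MODEL_PATH)

def function_handler(event, context):
    # Greengrass invokes this handler; 'event' is assumed to carry an image.
    result = predict(model, event.get("image"))
    iot_client.publish(topic=TOPIC, payload=json.dumps(result))
```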

Here are a few of the many ways that you can put Greengrass ML Inference to use:

Precision Farming – With an ever-growing world population and unpredictable weather that can affect crop yields, the opportunity to use technology to increase yields is immense. Intelligent devices that are literally in the field can process images of soil, plants, pests, and crops, taking local corrective action and sending status reports to the cloud.

Physical Security – Smart devices (including the AWS DeepLens) can process images and scenes locally, looking for objects, watching for changes, and even detecting faces. When something of interest or concern arises, the device can pass the image or the video to the cloud and use Amazon Rekognition to take a closer look.

Industrial Maintenance – Smart, local monitoring can increase operational efficiency and reduce unplanned downtime. The monitors can run inference operations on power consumption, noise levels, and vibration to flag anomalies, predict failures, and detect faulty equipment.

Greengrass ML Inference Overview
There are several different aspects to this new AWS feature. Let’s take a look at each one:

Machine Learning Models – Precompiled TensorFlow and MXNet libraries, optimized for production use on the NVIDIA Jetson TX2 and Intel Atom devices, and development use on 32-bit Raspberry Pi devices. The optimized libraries can take advantage of GPU and FPGA hardware accelerators at the edge in order to provide fast, local inferences.

Model Building and Training – The ability to use Amazon SageMaker and other cloud-based ML tools to build, train, and test your models before deploying them to your IoT devices. To learn more about SageMaker, read Amazon SageMaker – Accelerated Machine Learning.

Model Deployment – SageMaker models can (if you give them the proper IAM permissions) be referenced directly from your Greengrass groups. You can also make use of models stored in S3 buckets. You can add a new machine learning resource to a group with a couple of clicks.

These new features are available now and you can start using them today! To learn more read Perform Machine Learning Inference.

Jeff;

 

HackSpace magazine 3: Scrap Heap Hacking

Post Syndicated from Andrew Gregory original https://www.raspberrypi.org/blog/hackspace-magazine-3-scrap-heap-hacking/

We’re making with a purpose in issue 3 of HackSpace magazine. Not only are we discovering ways in which 3D printing is helping to save resources — and in some cases lives — in the developing world, we’re also going all out with recycling. While others might be content with separating their glass and plastic waste, we’re going much, much further by making useful things out of discarded old bits of rubbish you can find at your local scrapyard.

Hackspaces

We’re going to Cheltenham Hackspace to learn how to make a leather belt, to Liverpool to discover the ways in which an open-source design and some bits and bobs from IKEA are protecting our food supply, and we also take a peek through the doors of Nottingham Hackspace.

Tutorials

The new issue also has the most tutorials you’ll have seen anywhere since…well, since HackSpace magazine issue 2! Guides to 3D-printing on fabric, Arduino programming, and ESP8266 hacking are all to be found in issue 3. Plus, we’ve come up with yet another way to pipe numbers from the internet into big, red, glowing boxes — it’s what LEDs were made for.



With the addition of racing drones, an angry reindeer, and an intelligent toaster, we think we’ve definitely put together an issue you’ll enjoy.

Get your copy

The physical copy of HackSpace magazine is available at all good UK newsagents today, and you can order it online from the Raspberry Pi Press store wherever you are based. Moreover, you can download the free PDF version from our website. And if you’ve read our first two issues and enjoyed what you’ve seen, be sure to subscribe!

Write for us

Are you working on a cool project? Do you want to share your skills with the world, inspire others, and maybe show off a little? HackSpace magazine wants your article! Send an outline of your piece to us, and we’ll get back to you about including it in a future issue.

The post HackSpace magazine 3: Scrap Heap Hacking appeared first on Raspberry Pi.

Detecting Drone Surveillance with Traffic Analysis

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/01/detecting_drone.html

This is clever:

Researchers at Ben Gurion University in Beer Sheva, Israel have built a proof-of-concept system for counter-surveillance against spy drones that demonstrates a clever, if not exactly simple, way to determine whether a certain person or object is under aerial surveillance. They first generate a recognizable pattern on whatever subject — a window, say — someone might want to guard from potential surveillance. Then they remotely intercept a drone’s radio signals to look for that pattern in the streaming video the drone sends back to its operator. If they spot it, they can determine that the drone is looking at their subject.

In other words, they can see what the drone sees, pulling out their recognizable pattern from the radio signal, even without breaking the drone’s encrypted video.

The details have to do with the way drone video is compressed:

The researchers’ technique takes advantage of an efficiency feature streaming video has used for years, known as “delta frames.” Instead of encoding video as a series of raw images, it’s compressed into a series of changes from the previous image in the video. That means when a streaming video shows a still object, it transmits fewer bytes of data than when it shows one that moves or changes color.

That compression feature can reveal key information about the content of the video to someone who’s intercepting the streaming data, security researchers have shown in recent research, even when the data is encrypted.
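
In simplified form, the detection boils down to correlating a physical stimulus with the intercepted stream’s bitrate. The toy sketch below simulates that: it toggles a stimulus with a known on/off pattern and checks whether the (still encrypted) stream gets noticeably bigger in the “on” windows. The bitrates are simulated, and the real paper’s statistics are considerably more careful.

```python
import random

pattern = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]   # stimulus on/off per time window
watching = True                                   # is the drone actually looking at our window?

bitrates = []
for on in pattern:
    base = 900 + random.gauss(0, 30)              # kbps of a mostly-still scene
    delta = 250 if (watching and on) else 0       # a moving stimulus inflates delta frames
    bitrates.append(base + delta)

on_mean = sum(b for b, p in zip(bitrates, pattern) if p) / sum(pattern)
off_mean = sum(b for b, p in zip(bitrates, pattern) if not p) / (len(pattern) - sum(pattern))
print(f"mean bitrate with stimulus on: {on_mean:.0f} kbps, off: {off_mean:.0f} kbps")
print("drone appears to be watching" if on_mean - off_mean > 100 else "no evidence of surveillance")
```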

Research paper and video.

Hello World Issue 4: Professional Development

Post Syndicated from Carrie Anne Philbin original https://www.raspberrypi.org/blog/hello-world-issue-4/

Another new year brings with it thoughts of setting goals and targets. Thankfully, there is a new issue of Hello World packed with practical advice to set you on the road to success.

Hello World is our magazine about computing and digital making for educators, and it’s a collaboration between the Raspberry Pi Foundation and Computing at School, which is part of the British Computer Society.

In issue 4, our international panel of educators and experts recommends approaches to continuing professional development in computer science education.

Approaches to professional development, and much more

With recommendations for more professional development in the Royal Society’s report, and government funding to support this, our cover feature explores some successful approaches. In addition, the issue is packed with other great resources, guides, features, and lesson plans to support educators.

Highlights include:

  • The Royal Society: After the Reboot — learn about the latest report and its findings about computing education
  • The Cyber Games — a new programme looking for the next generation of security experts
  • Engaging Students with Drones
  • Digital Literacy: Lost in Translation?
  • Object-oriented Coding with Python

Get your copy of Hello World 4

Hello World is available as a free Creative Commons download for anyone around the world who is interested in computer science and digital making education. You can get the latest issue as a PDF file straight from the Hello World website.

Thanks to the very generous sponsorship of BT, we are able to offer free print copies of the magazine to serving educators in the UK. It’s for teachers, Code Club volunteers, teaching assistants, teacher trainers, and others who help children and young people learn about computing and digital making. So remember to subscribe to have your free print magazine posted directly to your home — 6000 educators have already signed up to receive theirs!

Could you write for Hello World?

By sharing your knowledge and experience of working with young people to learn about computing, computer science, and digital making in Hello World, you will help inspire others to get involved. You will also help bring the power of digital making to more and more educators and learners.

The computing education community is full of people who lend their experience to help colleagues. Contributing to Hello World is a great way to take an active part in this supportive community, and you’ll be adding to a body of free, open-source learning resources that are available for anyone to use, adapt, and share. It’s also a tremendous platform to broadcast your work: Hello World digital versions alone have been downloaded more than 50000 times!

Wherever you are in the world, get in touch with us by emailing our editorial team about your article idea.

The post Hello World Issue 4: Professional Development appeared first on Raspberry Pi.

Is blockchain a security topic? (Opensource.com)

Post Syndicated from jake original https://lwn.net/Articles/740929/rss

At Opensource.com, Mike Bursell looks at blockchain security from the angle of trust. Unlike cryptocurrencies, which are typically pseudonymous, other kinds of blockchains will require mapping users to real-life identities; that raises the trust issue.

What’s really interesting is that, if you’re thinking about moving to a permissioned blockchain or distributed ledger with permissioned actors, then you’re going to have to spend some time thinking about trust. You’re unlikely to be using a proof-of-work system for making blocks—there’s little point in a permissioned system—so who decides what comprises a “valid” block that the rest of the system should agree on? Well, you can rotate around some (or all) of the entities, or you can have a random choice, or you can elect a small number of über-trusted entities. Combinations of these schemes may also work.

If these entities all exist within one trust domain, which you control, then fine, but what if they’re distributors, or customers, or partners, or other banks, or manufacturers, or semi-autonomous drones, or vehicles in a commercial fleet? You really need to ensure that the trust relationships that you’re encoding into your implementation/deployment truly reflect the legal and IRL [in real life] trust relationships that you have with the entities that are being represented in your system.

And the problem is that, once you’ve deployed that system, it’s likely to be very difficult to backtrack, adjust, or reset the trust relationships that you’ve designed.

NSA "Red Disk" Data Leak

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/11/nsa_red_disk_da.html

ZDNet is reporting about another data leak, this one from the US Army’s Intelligence and Security Command (INSCOM), which is also within the NSA.

The disk image, when unpacked and loaded, is a snapshot of a hard drive dating back to May 2013 from a Linux-based server that forms part of a cloud-based intelligence sharing system, known as Red Disk. The project, developed by INSCOM’s Futures Directorate, was slated to complement the Army’s so-called distributed common ground system (DCGS), a legacy platform for processing and sharing intelligence, surveillance, and reconnaissance information.

[…]

Red Disk was envisioned as a highly customizable cloud system that could meet the demands of large, complex military operations. The hope was that Red Disk could provide a consistent picture from the Pentagon to deployed soldiers in the Afghan battlefield, including satellite images and video feeds from drones trained on terrorists and enemy fighters, according to a Foreign Policy report.

[…]

Red Disk was a modular, customizable, and scalable system for sharing intelligence across the battlefield, like electronic intercepts, drone footage and satellite imagery, and classified reports, for troops to access with laptops and tablets on the battlefield. Markings on files found in several directories imply the disk is “top secret,” and restricted from being shared with foreign intelligence partners.

A couple of points. One, this isn’t particularly sensitive. It’s an intelligence distribution system under development. It’s not raw intelligence. Two, this doesn’t seem to be classified data. Even the article hedges, using the unofficial term of “highly sensitive.” Three, it doesn’t seem that Chris Vickery, the researcher that discovered the data, has published it.

Chris Vickery, director of cyber risk research at security firm UpGuard, found the data and informed the government of the breach in October. The storage server was subsequently secured, though its owner remains unknown.

This doesn’t feel like a big deal to me.

Slashdot thread.

IoT Cybersecurity: What’s Plan B?

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/10/iot_cybersecuri.html

In August, four US Senators introduced a bill designed to improve Internet of Things (IoT) security. The IoT Cybersecurity Improvement Act of 2017 is a modest piece of legislation. It doesn’t regulate the IoT market. It doesn’t single out any industries for particular attention, or force any companies to do anything. It doesn’t even modify the liability laws for embedded software. Companies can continue to sell IoT devices with whatever lousy security they want.

What the bill does do is leverage the government’s buying power to nudge the market: any IoT product that the government buys must meet minimum security standards. It requires vendors to ensure that devices can not only be patched, but are patched in an authenticated and timely manner; don’t have unchangeable default passwords; and are free from known vulnerabilities. It’s about as low a security bar as you can set, and that it will considerably improve security speaks volumes about the current state of IoT security. (Full disclosure: I helped draft some of the bill’s security requirements.)

The bill would also modify the Computer Fraud and Abuse and the Digital Millennium Copyright Acts to allow security researchers to study the security of IoT devices purchased by the government. It’s a far narrower exemption than our industry needs. But it’s a good first step, which is probably the best thing you can say about this legislation.

However, it’s unlikely this first step will even be taken. I am writing this column in August, and have no doubt that the bill will have gone nowhere by the time you read it in October or later. If hearings are held, they won’t matter. The bill won’t have been voted on by any committee, and it won’t be on any legislative calendar. The odds of this bill becoming law are zero. And that’s not just because of current politics — I’d be equally pessimistic under the Obama administration.

But the situation is critical. The Internet is dangerous — and the IoT gives it not just eyes and ears, but also hands and feet. Security vulnerabilities, exploits, and attacks that once affected only bits and bytes now affect flesh and blood.

Markets, as we’ve repeatedly learned over the past century, are terrible mechanisms for improving the safety of products and services. It was true for automobile, food, restaurant, airplane, fire, and financial-instrument safety. The reasons are complicated, but basically, sellers don’t compete on safety features because buyers can’t efficiently differentiate products based on safety considerations. The race-to-the-bottom mechanism that markets use to minimize prices also minimizes quality. Without government intervention, the IoT remains dangerously insecure.

The US government has no appetite for intervention, so we won’t see serious safety and security regulations, a new federal agency, or better liability laws. We might have a better chance in the EU. Depending on how the General Data Protection Regulation on data privacy pans out, the EU might pass a similar security law in 5 years. No other country has a large enough market share to make a difference.

Sometimes we can opt out of the IoT, but that option is becoming increasingly rare. Last year, I tried and failed to purchase a new car without an Internet connection. In a few years, it’s going to be nearly impossible to not be multiply connected to the IoT. And our biggest IoT security risks will stem not from devices we have a market relationship with, but from everyone else’s cars, cameras, routers, drones, and so on.

We can try to shop our ideals and demand more security, but companies don’t compete on IoT safety — and we security experts aren’t a large enough market force to make a difference.

We need a Plan B, although I’m not sure what that is. E-mail me if you have any ideas.

This essay previously appeared in the September/October issue of IEEE Security & Privacy.

Military Robots as a Nature Analog

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/08/military_robots.html

This very interesting essay looks at the future of military robotics and finds many analogs in nature:

Imagine a low-cost drone with the range of a Canada goose, a bird that can cover 1,500 miles in a single day at an average speed of 60 miles per hour. Planet Earth profiled a single flock of snow geese, birds that make similar marathon journeys, albeit slower. The flock of six-pound snow geese was so large it formed a sky-darkening cloud 12 miles long. How would an aircraft carrier battlegroup respond to an attack from millions of aerial kamikaze explosive drones that, like geese, can fly hundreds of miles? A single aircraft carrier costs billions of dollars, and the United States relies heavily on its ten aircraft carrier strike groups to project power around the globe. But as military robots match more capabilities found in nature, some of the major systems and strategies upon which U.S. national security currently relies — perhaps even the fearsome aircraft carrier strike group — might experience the same sort of technological disruption that the smartphone revolution brought about in the consumer world.

US Army Researching Bot Swarms

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/07/us_army_researc.html

The US Army Research Agency is funding research into autonomous bot swarms. From the announcement:

The objective of this CRA is to perform enabling basic and applied research to extend the reach, situational awareness, and operational effectiveness of large heterogeneous teams of intelligent systems and Soldiers against dynamic threats in complex and contested environments and provide technical and operational superiority through fast, intelligent, resilient and collaborative behaviors. To achieve this, ARL is requesting proposals that address three key Research Areas (RAs):

RA1: Distributed Intelligence: Establish the theoretical foundations of multi-faceted distributed networked intelligent systems combining autonomous agents, sensors, tactical super-computing, knowledge bases in the tactical cloud, and human experts to acquire and apply knowledge to affect and inform decisions of the collective team.

RA2: Heterogeneous Group Control: Develop theory and algorithms for control of large autonomous teams with varying levels of heterogeneity and modularity across sensing, computing, platforms, and degree of autonomy.

RA3: Adaptive and Resilient Behaviors: Develop theory and experimental methods for heterogeneous teams to carry out tasks under the dynamic and varying conditions in the physical world.

Slashdot thread.

And while we’re on the subject, this is an excellent report on AI and national security.