Alphabet’s enthusiasm for balloons deflated earlier this year, when it announced that its high-altitude Internet company, Loon, could not become commercially viable.
But while the stratosphere might not be a great place to put a cellphone tower, it could be the sweet spot for cameras, argue a host of high-tech startups.
The market for Earth-observation services from satellites is expected to top US $4 billion by 2025, as orbiting cameras, radars, and other devices monitor crops, assess infrastructure, and detect greenhouse gas emissions. Low-altitude observations from drones represent a sizable market of their own.
Neither platform is perfect. Satellites can cover huge swaths of the planet but remain expensive to develop, launch, and operate. Their cameras are also hundreds of kilometers from the things they are trying to see, and often moving at tens of thousands of kilometers per hour.
Drones, on the other hand, can take supersharp images, but only over a relatively small area. They also need careful human piloting to coexist with planes and helicopters.
Balloons in the stratosphere, 20 kilometers above Earth (and 10 km above most jets), split the difference. They are high enough not to bother other aircraft and yet low enough to observe broad areas in plenty of detail. For a fraction of the price of a satellite, an operator can launch a balloon that lasts for weeks (even months), carrying large, capable sensors.
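The resolution advantage of flying lower is straightforward arithmetic. A ground-sample-distance (GSD) calculation shows how much finer a balloon's view can be than a satellite's with the same optics; the pixel pitch and focal length below are illustrative assumptions, not the specs of any actual operator:

```python
# Ground sample distance (GSD): the real-world size of one image pixel.
#   GSD = altitude * pixel_pitch / focal_length
# Illustrative camera: 4.5-micron pixels behind a 400-mm lens (assumed).

def gsd_cm(altitude_m, pixel_pitch_m=4.5e-6, focal_length_m=0.4):
    """Ground sample distance in centimeters per pixel."""
    return altitude_m * pixel_pitch_m / focal_length_m * 100

balloon = gsd_cm(20_000)     # stratospheric balloon at 20 km
satellite = gsd_cm(500_000)  # low Earth orbit satellite at 500 km

print(f"Balloon GSD:   {balloon:.1f} cm/pixel")    # 22.5 cm
print(f"Satellite GSD: {satellite:.1f} cm/pixel")  # 562.5 cm
```

With identical (hypothetical) optics, the balloon resolves detail 25 times finer, simply because it is 25 times closer to the ground.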
Unsurprisingly, perhaps, the U.S. military has funded stratospheric balloon tests across six Midwest states to “provide a persistent surveillance system to locate and deter narcotic trafficking and homeland security threats.”
But the Pentagon is far from the only organization flying high. An IEEE Spectrum analysis of applications filed with the U.S. Federal Communications Commission reveals at least six companies conducting observation experiments in the stratosphere. Some are testing the communications, navigation, and flight infrastructure required for such balloons. Others are running trials for commercial, government, and military customers.
The illustration above depicts experimental test permits granted by the FCC from January 2020 to June 2021, together covering much of the continental United States. Some tests were for only a matter of hours; others spanned days or more.
Spies, gumshoes and hard-driving investigative reporters—no matter how amazing they may be at their jobs—all suffer one inescapable drawback: They don’t scale.
By contrast, consider the case of Eliot Higgins. In 2012, this business administrator began blogging about videos and other social media feeds he was following on the conflicts in Libya and Syria. By systematically and minutely analyzing the locations, people, and weapons he saw, Higgins was able to identify the use of cluster bombs and chemical weapons, and eventually uncover a weapons smuggling operation.
In 2014, he used a Kickstarter campaign to launch Bellingcat, an online platform for citizen journalists to verify and publish research that taps into the hive mind approach to worldwide, Internet-era collaboration and open-source sleuthing. Whereas traditional investigative journalism—commissioned and published by newspapers or foundations or blogs—is only as expansive and wide-ranging as the team of reporters assigned to any given project, Bellingcat is more like an online meritocracy of ideas à la Wikipedia: Start a thread or page or investigation or project, and if it yields something good, some will read while others will contribute. Get enough people involved, and things can even start to snowball.
Bellingcat has gone on to conduct important investigations into the downing of the MH17 airliner over Ukraine, atrocities in Cameroon and Tigray, and the poisonings of political rivals by the Russian government. The organization has collaborated with human rights agencies, the International Criminal Court, and traditional news outlets, including the BBC and The New York Times. Bellingcat now has 22 staff and a host of volunteers.
Many of the tools Bellingcat uses in its investigations are available online, for anyone to use or improve upon, including software to determine the time of day from shadows in photographs, identify the locations of Instagram posts, or find cloud-free areas in satellite imagery.
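The shadow tool, for instance, rests on simple trigonometry: an object's height and the length of its shadow fix the sun's elevation angle, which, combined with a date and a location, narrows down the time of day. The following is a minimal sketch of that first step, not Bellingcat's actual code:

```python
import math

def sun_elevation_deg(object_height_m, shadow_length_m):
    """Sun elevation angle (degrees) implied by an object and its shadow."""
    return math.degrees(math.atan2(object_height_m, shadow_length_m))

# A 2-meter fence post casting a 3.5-meter shadow:
print(round(sun_elevation_deg(2.0, 3.5), 1))  # 29.7 degrees
```

Inverting that elevation into a clock time for a known date and place is what solar calculators such as SunCalc automate.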
The following is a condensed version of a phone interview IEEE Spectrum conducted with Higgins in June.
Spectrum: What’s the story behind the name “Bellingcat”?
Higgins: I wanted part of the website to be dedicated to showing people how to do open source investigations, and a friend suggested the fable of “Belling the Cat.” It’s about a group of mice who are afraid of a ferocious cat, so they come up with the idea of putting a bell around its neck to protect them. We’re teaching people how to bell the cat.
Spectrum: It’s interesting that with their phones and digital activity inadvertently documenting their movements, some of the “cats” you investigate are now belling themselves.
Higgins: As smartphone technology has become more available, people are recording and sharing every aspect of their lives. They give away a huge amount of information, everything from their day-to-day activities to war crimes and some of the most horrific acts you can imagine. Some of that is done on purpose, and sometimes it’s just accidental or incidental. But because that’s all online, it’s all information that we can use to piece together what happened around a wide variety of events.
Spectrum: So how does a typical investigation come together?
Higgins: We break it down into three steps: identify, verify and amplify. Identify is finding information, such as videos and photographs from a specific incident. We then verify that by figuring out if we can confirm details. We don’t have any special high-end tools—it’s all stuff that’s free or very cheap for anyone to access. Google is probably one of the most useful tools, just having a search engine that can help you trawl massive amounts of material. Satellite imagery from Google Earth is a very important part of the geolocation verification process, as is Google Street View.
Amplification might be a blog on Bellingcat or a collaboration with a media outlet or NGO. One of the reasons that these organizations work with us is that we use stuff that’s on the internet already. They don’t have to worry whether this information is actually correct. There are concerns that Wikileaks, for example, is being sent stuff by intelligence organizations. But with us, we can show the social media posts or the videos that make our point, and then tell them how we analyzed it.
For Bellingcat, building an audience isn’t just about reaching more people, it’s about involving more people in the investigation. Getting more eyeballs on the things we’re trying to discover is often something that’s quite important.
Spectrum: How does that audience engagement work in practice?
Higgins: Europol had a Stop Child Abuse campaign where it was asking members of the public to identify objects from abuse imagery. We amplified that to our audience and just through sharing these object images, children have been rescued and perpetrators arrested. It doesn’t have to be a particularly complex task, but if you can get half a million people working on it, you often get a correct answer. It’s about a community built around Bellingcat, not just Bellingcat doing stuff itself.
Spectrum: Can you give an example of how one of these investigations unfolds?

Higgins: One of our contributors saw on social media a photo of what appeared to be a nuclear weapon at a foreign military base. As he was doing keyword searches on terms related to nuclear weapons, he started getting results from flashcard apps [smartphone apps people use to prepare themselves for school and work tests].
He discovered that people were using these apps to save quizzes about the security at nuclear bases, seemingly unaware that they were discoverable on Google. As he searched for the names of more bases believed to have nuclear weapons, he found more and more profiles, and more and more information about the storage of nuclear devices.
Open source investigation is often like this—choosing the right words to find the thing that you’re looking for and then digging through all the information.
Spectrum: But then you had to decide how much of that information to make public.
Higgins: Right. We published only a very small selection that was older and out of date, and gave those involved a good chance to take precautions, like changing their protocols and passwords.
One of the paradoxes of our work is that we want as much information available online for our investigations but we also recognize that it can pose a risk to society. It’s interesting but also probably quite a bad thing that it is all out there.
Spectrum: How much do you worry about misinformation, such as fake social media profiles, misleading geocoding, or other information created with the intention of driving investigators in the wrong direction?
Higgins: We often deal with actors who will put out fake information, and part of the verification process is looking out for that. Russia put out massive amounts of falsified images during our MH17 investigation, for example, all of which we could check using open source investigation techniques. Anything that we’re using or publishing is triangulated with other open source evidence, or sources independent from the original.
So if a really important video appears on a brand new YouTube account, you are immediately suspicious. And it’s actually very hard to make a convincing fake. It’s one thing to post a single fake image but it’s not just the thing you’re posting, it’s also the social media account itself. Where and when did it originate? Who is it following, who follows it? If something’s fake, it doesn’t exist in the network that genuine information exists in.
When Russia presented satellite imagery related to MH17, we were able to show it was actually taken six weeks before the date they claimed, based on various details we found in [publicly available] satellite imagery.
Spectrum: I’m interested in whether the cat ever listens to its bell. Does Russian intelligence, for instance, learn from your investigations, or are they making the same mistakes over and over again?
Higgins: Open-source investigation is so new that even if one nation starts figuring this stuff out and [takes] steps to stop us, the rest of the world isn’t so aware. And no matter what restrictions there are, there’s always something else to investigate.
There are a million worthy things we could be looking into, any day of the week. That’s partly why Bellingcat does so much training and involves such a big audience. The more people who are doing it, the more stuff gets investigated.
And because so much of open source investigation is online, you don’t have to be in the same country you’re writing about. Most of the work that has been done on Syria, for example, is by people who are not Syrian and who don’t live in Syria.

There’s a sense of internationalism, where people in the UK and Germany support people in distant countries. They’re on the ground gathering the evidence and we’re the ones piecing it together and analyzing it.
Spectrum: What’s your take on digital technologies like social media now? Are they a force for good that can create positive change, or a malign influence where misinformation spreads and polarizes communities?
Higgins: There’s good and bad in all of it. Some of the work we’re doing now is focused on teaching students and schools to do open source investigations on issues in their area and working with local media.
I’ve been helping a community in the UK that rescues stolen dogs by using license plate analysis from security camera footage. Often the plates are too blurred to read, but we’ve adapted technology developed during our investigations into things like murders to help them track down these missing dogs.
It’s not Syrian war crimes or Russian assassinations but having your pet stolen is a big issue. We’re doing tech development as well, creating new tech platforms for volunteers to organize on, and new ways for evidence to be gathered for justice and accountability.
If you can teach people on a large scale and start building communities around that, you’ll also build resilience to people being drawn into conspiracy theories, where they think they’re finding answers but they’re just getting a false sense of power.
What our students are finding is a way to do the work themselves and discover something for themselves, and that’s having an impact which has measurable results.
When Kevin Wells’ private jet lost its GPS reception after taking off from Hayward Executive Airport in California’s Bay Area in February 2019, he whipped out his phone and started filming the instrument panel. For a few seconds, the signals from over a dozen GPS satellites can be seen blinking away, before slowly returning as Wells continues his ascent. Wells was able to continue his flight safely, but may have accidentally flown too high, into commercial airspace near Oakland.
Wells was quick to film the incident because this was not the first time he had suffered GPS problems. Less than a month earlier, his Cessna had been hit by a similar outage, at almost the identical location. “When I asked the Hayward Tower about the loss of signal, they did not know of [it],” he wrote later in a report to NASA’s Aviation Safety Reporting System.
“It wasn’t a big event for me because I came out of the clouds at the beginning of the second event,” says Wells of the 2019 interference.
Wells had fallen victim to a GPS interference event, where rogue or malicious signals drown out the faint signals from navigation satellites in orbit. Such events, often linked to U.S. military tests, can cause dangerous situations and near-misses. A recent Spectrum investigation discovered that they are far more prevalent, particularly in the western United States, than had previously been thought.
Luckily, Wells was in a position to do something about it. As Executive Director of the Stanford Institute for Theoretical Physics, Wells knew many of the university’s researchers. He took his video to the GPS Lab at Stanford Engineering, where Professor Todd Walter was already studying the problem of GPS jamming.
The Federal Aviation Administration (FAA) and the Federal Communications Commission (FCC) do have procedures for finding people deliberately or accidentally interfering with GPS signals. When pilots reported mysterious, sporadic GPS jamming near Wilmington Airport in North Carolina, the FAA eventually identified a poorly designed antenna on a utility company’s wireless control system. “But this took many weeks, maybe months,” Walter tells Spectrum. “Our goal is to track down the culprit in days.”
Walter’s team was working on a drone that would autonomously sniff out local signals in the GPS band, without having to rely on GPS for its own navigation. “But we didn’t have permission to fly drones in Hayward’s airspace and it wasn’t quite at the point where we could just launch it and seek out the source of the interference,” says Walter.
Instead, Walter had a different idea: Why not use the GPS receivers in other aircraft to crowdsource a solution? All modern planes carry ADS-B transponders—devices that continually broadcast their GPS location, speed and heading to aid air traffic control and avoid potential collisions. These ADS-B signals are collected by nearby aircraft but also by many terrestrial sensors, including a network of open-access receivers organized by OpenSky, a Swiss nonprofit.
With OpenSky’s data in hand, the Stanford researchers’ first task was accurately identifying interference events. They found that the vast majority of times that ADS-B receivers lost data had nothing to do with interference. Some receivers were unreliable, others were obscured from planes overhead by buildings or trees.
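One way to separate genuine jamming from these ordinary dropouts is to require that several independent aircraft lose GPS-derived position in the same time window: a lone dropout is probably a flaky receiver or terrain masking, while many simultaneous losses suggest interference. A simplified sketch of that idea, not the Stanford team's actual pipeline (the aircraft IDs and hour buckets below are made up):

```python
from collections import defaultdict

def likely_jamming(dropouts, min_aircraft=3):
    """Return time windows in which at least `min_aircraft` distinct
    aircraft lost GPS-derived position, sorted chronologically.

    dropouts: iterable of (aircraft_id, window) pairs, one per
    observed position loss; `window` is any hashable time bucket.
    """
    aircraft_by_window = defaultdict(set)
    for aircraft_id, window in dropouts:
        aircraft_by_window[window].add(aircraft_id)
    return sorted(w for w, ids in aircraft_by_window.items()
                  if len(ids) >= min_aircraft)

# Three planes drop out in hour 9; one lone dropout in hour 14.
events = [("N1", 9), ("N2", 9), ("N3", 9), ("N4", 14)]
print(likely_jamming(events))  # [9]
```

A real implementation would also have to weigh receiver reliability and the geometry of where each dropout occurred, which is where the ADS-B integrity indicators mentioned below come in.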
Being able to link the loss of Wells’ GPS signals with data from the OpenSky database was an important step in characterizing genuine jamming. Integrity and accuracy indicators built into the ADS-B data stream also helped the researchers. “I think certain interference events have a characteristic that we can now recognize,” says Walter. “But I’d be concerned that there would be other interference events that we don’t have the pattern for. We need more resources and more data.”
For the Hayward incidents, Walter’s team managed to identify 17 days of potential jamming between January and March 2019. Of the 265 aircraft that flew near Hayward during that time, the ADS-B data showed that 25 had probably experienced GPS interference. This intermittent jamming was not enough to narrow down the source of the signals, says Walter: “We can say it’s in this region of Hayward but you don’t want to go searching ten city blocks. We want to localize it to a house or building.”
The FAA eventually issued a warning to pilots flying in and out of Hayward and much of the southern Bay Area, and the interference seemed to quiet down. However, Walter recently looked at 2020 ADS-B data near Hayward and found 13 more potentially jammed flights.
Walter now hopes to expand on his Hayward study by tapping into the FAA’s own network of ADS-B receivers to help uncover more interference signals hidden in the data.
Leveraging ADS-B signals is not the only way researchers can troubleshoot GPS reception. Earlier this month, John Stader and Sanjeev Gunawardena at the U.S. Air Force Institute of Technology presented a paper at the Institute of Navigation’s PTTI 2021 conference detailing an alternative interference detection system.
The AFIT system also uses free and open-source data, but this time from a network of continuously-operating terrestrial GPS receivers across the globe. These provide high-quality data on GPS signals in their vicinity, which the AFIT researchers then mined to identify interference events. The AFIT team was able to identify 30 possible jamming events in one 24-hour period, with detection within a matter of minutes of each jamming incident occurring. “The end goal of this research effort is to create a worldwide, automated system for detecting interference events using publicly available data,” wrote the authors.
A small system like this is already up and running around Madrid Airport in Spain, where 11 GPS receivers constantly monitor for jamming or accidental interference. “This is costly and takes a while to install,” says Walter, “I feel like major airports probably do want something like that to protect their airspace, but it’s never going to be possible for smaller airports like Hayward.”
Another problem facing both these systems is the sparse distribution of sensors, says Todd Humphreys, Director of the Radionavigation Laboratory at UT Austin: “There are fewer than 3000 GPS reference stations with publicly-accessible data across the globe; these can be separated by hundreds of miles. Likewise, global coverage by ships and planes is still sparse enough to make detection challenging, and localization nearly impossible, except around ports and airports.”
Humphreys favors using other satellites to monitor the GPS satellites, and published a paper last year that zeroed in on a GPS jamming system in Syria. Aireon is already monitoring ADS-B signals worldwide from Iridium’s medium earth orbit satellites, while HawkEye 360 is building out its own radio-frequency sensing satellites in low earth orbit (LEO).
“That’s very exciting,” says Walter. “If you have dedicated equipment on these LEO satellites, that can be very powerful and maybe in the long term much lower cost than a whole bunch of terrestrial sensors.”
Until then, pilots like Kevin Wells will have to keep their wits about them as they navigate areas prone to GPS interference. The source of the jamming near Hayward Airport was never identified.
When Mark Zuckerberg said “Move fast and break things,” this is surely not what he meant.
Nevertheless, at a technological level the 6 January attacks on the U.S. Capitol could be contextualized by a line of patents filed or purchased by Facebook, tracing back 20 years. This portfolio arguably sheds light on how the most powerful country in the world was brought low by a rampaging mob nurtured on lies and demagoguery.
While Facebook’s arsenal of over 9,000 patents spans a bewildering range of topics, at its heart are technologies that allow individuals to see an exclusive feed of content uniquely curated to their interests, their connections, and increasingly, their prejudices.
Algorithms create intensely personal “filter bubbles,” which are powerfully addictive to users, irresistible to advertisers, and a welcoming environment for rampant misinformation and disinformation such as QAnon, antivaxxer propaganda, and election conspiracy theories.
As Facebook turns 17—it was “born” 4 February 2004—a close reading of the company’s patent history shows how the social network has persistently sought to attract, categorize, and retain users by giving them more and more of what keeps them engaged on the site. In other words, hyperpartisan communities on Facebook that grow disconnected from reality are arguably less a bug than a feature.
Anyone who has used social media in recent years will likely have seen both misinformation (innocently shared false information) and disinformation (lies and propaganda). Last March, in a survey of over 300 social media users by the Center for an Informed Public at the University of Washington (UW), published on Medium, almost 80 percent reported seeing COVID-19 misinformation online, with over a third believing something false themselves. A larger survey covering the United States and Brazil in 2019, by the University of Liverpool and others, found that a quarter of Facebook users had accidentally shared misinformation. Nearly one in seven admitted sharing fake news on purpose.
“Misinformation tends to be more compelling than journalistic content, as it’s easy to make something interesting and fun if you have no commitment to the truth,” says Patricia Rossini, the social-media researcher who conducted the Liverpool study.
In December, a complaint filed by dozens of U.S. states asserted, “Due to Facebook’s unlawful conduct and the lack of competitive constraints…there has been a proliferation of misinformation and violent or otherwise objectionable content on Facebook’s properties.”
When a platform is open, like Twitter, most users can see almost everyone’s tweets. Therefore, tracking the source and spread of misinformation is comparatively straightforward. Facebook, on the other hand, has spent a decade and a half building a mostly closed information ecosystem.
Last year, Forbes estimated that the company’s 15,000 content moderators make some 300,000 bad calls every day. Precious little of that process is ever open to public scrutiny, although Facebook recently referred its decision to suspend Donald Trump’s Facebook and Instagram accounts to its Oversight Board. This independent 11-member “Supreme Court” is designed to review thorny content moderation decisions.
Meanwhile, even some glimpses of sunlight prove fleeting: After the 2020 U.S. presidential election, Facebook temporarily tweaked its algorithms to promote authoritative, fact-based news sources like NPR, a U.S. public-radio network. According to The New York Times, it soon reversed that decision, though, effectively cutting short its ability to curtail what a spokesperson called “inaccurate claims about the election.”
The company began filing patents soon after it was founded in 2004. A 2006 patent described how to automatically track your activity to detect relationships with other users, while another the same year laid out how those relationships could determine which media content and news might appear in your feed.
In 2006, Facebook patented a way to “characterize major differences between two sets of users.” In 2009, Mark Zuckerberg himself filed a patent that showed how Facebook “and/or external parties” could “target information delivery,” including political news, that might be of particular interest to a group.
This automated curation can drive people down partisan rabbit holes, fears Jennifer Stromer-Galley, a professor in the School of Information Studies at Syracuse University. “When you see perspectives that are different from yours, it requires thinking and creates aggravation,” she says. “As a for-profit company that’s selling attention to advertisers, Facebook doesn’t want that, so there’s a risk of algorithmic reinforcement of homogeneity, and filter bubbles.”
In the run-up to Facebook’s IPO in 2012, the company moved to protect its rapidly growing business from intellectual property lawsuits. A 2011 Facebook patent describes how to filter content according to biographic, geographic, and other information shared by a user. Another patent, bought by Facebook that year from the consumer electronics company Philips, concerns “an intelligent information delivery system” that, based on someone’s personal preferences, collects, prioritizes, and “selectively delivers relevant and timely” information.
In recent years, as the negative consequences of Facebook’s drive to serve users ever more attention-grabbing content emerged, the company’s patent strategy seems to have shifted. Newer patents appear to be trying to rein in the worst excesses of the filter bubbles Facebook pioneered.
The word “misinformation” appeared in a Facebook patent for the first time in 2015, for technology designed to demote “objectionable material that degrades user experience with the news feed and otherwise compromises the integrity of the social network.” A pair of patents in 2017 described providing users with more diverse content from both sides of the political aisle and adding contextual tags to help rein in misleading “false news.”
Such tags using information from independent fact-checking organizations could help, according to a study by Ethan Porter, coauthor of False Alarm: The Truth About Political Mistruths in the Trump Era (Cambridge University Press, 2019). “It’s no longer a controversy that fact-checks reliably improve factual accuracy,” he says. “And contrary to popular misconception, there is no evident exception for controversial or highly politicized topics.”
Franziska Roesner, a computer scientist and part of the UW team, was involved in a similar, qualitative study last year that also gave a glimmer of hope. “People are now much more aware of the spread and impact of misinformation than they were in 2016 and can articulate robust strategies for vetting content,” she says. “The problem is that they don’t always follow them.”
Rossini’s Liverpool study also found that behaviors usually associated with democratic gains, such as discussing politics and being exposed to differing opinions, were associated with dysfunctional information sharing. Put simply, the worst offenders for sharing fake news were also the best at building online communities; they shared a lot of information, both good and bad.
Moreover, Rossini doubts the very existence of filter bubbles. Because many Facebook users have more, and more varied, digital friends than they do in-person connections, she says “most social media users are systematically exposed to more diversity than they would be in their offline life.”
The problem is that some of that diversity includes hate speech, lies, and propaganda that very few of us would ever seek out voluntarily—but that goes on to radicalize some.
“I personally quit Facebook two and a half years ago when the Cambridge Analytica scandal happened,” says Lalitha Agnihotri, formerly a data scientist for the Dutch company Philips, who in 2001 was part of the team that filed the patent Facebook acquired in 2011. “I don’t think Facebook treats my data right. Now that I realize that IP generated by me enabled Facebook to do things wrong, I feel terrible about it.”
Agnihotri says that she has been contacted by Facebook recruiters several times over the years but has always turned them down. “My 12-year-old suggested that maybe I need to join them, to make sure they do things right,” she says. “But it will be hard, if not impossible, to change a culture that comes from their founder.”
This article appears in the February 2021 print issue as “The Careful Engineering of Facebook’s Filter Bubble.”
The FBI is still trying to identify some of the hundreds of people who launched a deadly attack on the U.S. Congress last week. “We have deployed our full investigative resources and are working closely with our federal, state, and local partners to aggressively pursue those involved in criminal activity during the events of January 6,” reads a page that contains images of dozens of unknown individuals, including one suspected of planting several bombs around Washington, D.C.
But while the public is being urged to put names to faces, America’s law enforcement agencies already have access to technologies that could do much of the heavy lifting. “We have over three billion photos that we indexed from the public internet, like Google for faces,” Hoan Ton-That, CEO of facial recognition start-up Clearview AI, told Spectrum.
Ton-That said that Clearview’s customers, including the FBI, were using it to help identify the perpetrators: “Use our system, and in about a second it might point to someone’s Instagram page.”
Clearview has attracted criticism because it relies on images scraped from social media sites without their—or their users’—permission.
“The Capitol images are very good quality for automatic face recognition,” agreed a senior face recognition expert at one of America’s largest law enforcement agencies, who asked not to be named because they were talking to Spectrum without the permission of their superiors.
Face recognition technology is commonplace in 2021. But the smartphone that recognizes your face in lieu of a passcode is solving a much simpler problem than trying to ID a masked (or, often in the Capitol attacks, surprisingly unmasked) intruder from a snatched webcam frame. The two tasks are distinct recognition scenarios.
The first scenario is comparing a live, high-resolution image to a single, detailed record stored in the phone. “Modern algorithms can basically see past issues such as how your head is oriented and variations in illumination or expression,” says Arun Vemury, director of the Department of Homeland Security (DHS) Science and Technology Directorate Biometric and Identity Technology Center. In a recent DHS test of such screening systems at airports, the best algorithm identified the correct person at least 96 percent of the time.
The second scenario, however, is attempting to connect a fleeting, unposed image against one of hundreds of millions of people in the country or around the world. “Most law enforcement agencies can only search against mugshots of people who have been arrested in their jurisdictions, not even DMV records,” says the law enforcement officer.
And as the size of the database grows, so does the likelihood of the system generating incorrect identifications. “Very low false positive rates are still fairly elusive,” says Vemury. “Because there are lots of people out there who might look like you, from siblings and children to complete strangers. Honestly, faces are not all that different from one another.”
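The scaling problem Vemury describes is easy to quantify: the expected number of false matches for a single search grows linearly with the size of the gallery being searched. A back-of-the-envelope sketch, with illustrative numbers rather than any vendor's measured rates:

```python
def expected_false_matches(gallery_size, fpr_per_comparison):
    """Expected number of wrong candidates returned when one probe
    face is compared against every face in a gallery."""
    return gallery_size * fpr_per_comparison

# A per-comparison false positive rate of 1 in a million sounds
# excellent -- until the gallery holds 300 million faces:
print(expected_false_matches(300_000_000, 1e-6))  # 300.0
```

Even a superb algorithm, searched against a nation-scale database, can bury the one true match under hundreds of lookalikes, which is why a human adjudicator still reviews candidate lists.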
Nevertheless, advances in machine learning techniques and algorithms mean that facial recognition technologies are improving. After the COVID-19 pandemic hit last year, the National Institute of Standards and Technology tested industry-leading algorithms [PDF] with images of people wearing facemasks. While some of the algorithms saw error rates soar, others had only a modest decrease in effectiveness compared to maskless facial recognition efforts. Incredibly, the best algorithm’s performance with masks on was comparable to the state-of-the-art on unmasked images from just three years earlier.
In fact, claims Vemury, AI-powered facial recognition systems are now better at matching unfamiliar faces than even the best trained human. “There’s almost always a human adjudicating the result or figuring out whether or not to follow up,” he says. “But if a human is more likely to make an error than the algorithm, are we really thinking about this process correctly? It’s almost like asking a third grader to check a high school student’s calculus homework.”
Yet such technological optimism worries Elizabeth Rowe, a law professor at the University of Florida Levin College of Law. “Just because we have access to all of this information doesn’t mean that we should necessarily use it,” she says. “Part of the problem is that there’s no reporting accountability of who’s using what and why, especially among private companies.”
There are also ongoing concerns that the way some face recognition technologies work (or fail to work) with different demographic groups [PDF] can exacerbate institutional racial biases. “We’re doing some additional research in this area,” says Vemury. “But even if you made the technologies totally fair, you could still deploy them in ways that could have a discriminatory outcome.”
But if facial recognition technologies are linked to apprehending high profile suspects such as the Capitol attackers, enthusiasm for their use is likely only going to grow, says Rowe.
“Just as consumers have gotten attached to the convenience of using biometrics to access our toys, I think we’ll find law enforcement agencies doing exactly the same thing,” she says. “It’s easy, and it gives them the potential to conduct investigations in a way that they couldn’t before.”
In March last year, Google co-founder Sergey Brin finally saw a return on the millions he has invested in a quest to build the world’s largest and greenest airship for humanitarian missions. After six years of development, his secretive airship company, LTA Research and Exploration, quietly made its first sale: an 18-meter-long, 12-engined, all-electric aircraft called Airship 3.0. The price? According to an FAA filing obtained by IEEE Spectrum, it was just $18.70.
This was not an effort by Brin to cash out his airship investment, but a key part of its development process. The FAA records show that the buyer of Airship 3.0 was Nicolas Garafolo, an associate professor in the Mechanical Engineering department at the University of Akron in Ohio.
When not working at the university, Garafolo leads LTA’s Akron research team, which includes undergraduates, graduate students and a number of alumni from UA’s College of Engineering. The nominal purchase price is probably a nod to UA’s founding year: 1870.
Airship 3.0 is actually LTA Akron’s second prototype. The first, registered in September 2018, was also a 12-engined electric airship, but only 15 meters long, or a little longer than a typical American school bus. It underwent flight tests in Akron earlier this year. Akron was where the US Navy built gargantuan airships in the 1930s, and is still home to the largest active airship hangar in the world—which, according to emails obtained under a public records request, LTA is also interested in leasing.
But Brin doesn’t want to recreate the glory days of the past, he wants to surpass them. Patents, official records and job listings suggest that LTA’s new airship will be safer, smarter and much more environmentally sustainable than the wallowing and dangerous airships of yore.
The biggest question in designing an airship is how to make it float. Hydrogen is cheap, plentiful and the lightest gas in the universe, but also extremely flammable and difficult to contain. Helium, the next lightest gas, is safely inert, but expensive and increasingly scarce. Virtually all new airships since the Hindenburg disaster have opted for helium as a lifting gas.
LTA’s airship, uniquely, will have both gases on board. Helium will be used to provide lift, while hydrogen will be used to power its electric engines. The lithium-ion batteries used in today’s electric cars are too heavy for the airships with which LTA intends to deliver humanitarian aid to remote disaster zones. Instead, a hydrogen fuel cell will provide reliable power and could enable long-range missions.
A patent application published last year shows that LTA is also rethinking how to manufacture very large airships. Traditionally, airships are kept stationary while their rigid frames, used to hold and shape the gas envelope, are being built. This requires workers to climb to great heights, adding risks and delays.
LTA’s patent covers a “rollercoaster” structure that allows a partially completed airship to be rotated around its central axis during construction, so that workers can stay safely on the ground. The patent application also describes a method for 3D printing airship components out of strong, lightweight carbon fiber.
LTA’s website says that it is working to create a family of aircraft with no operational carbon footprint to “substantially reduce the total global carbon footprint of aviation.” That would require both generating hydrogen using renewable electricity, and producing a variety of aircraft to satisfy passenger as well as cargo demand.
Paperwork filed by LTA suggests that its first full-size airship, called Pathfinder 1, could already be almost ready to take to the air from LTA’s headquarters at Moffett Field in Silicon Valley. FAA records show that the Pathfinder is powered by 12 electric motors and able to carry 14 people. That would make it about the same size as the only passenger airship operating today, the Zeppelin NT, which conducts sightseeing tours in Germany and Switzerland.
In fact, LTA’s first airship could even be based on the Zeppelin NT, modified to use electric propulsion. LTA has received numerous imports from Zeppelin over the last few years, including fins, rudders, and equipment for a passenger gondola.
Unlike experimental fixed-wing or VTOL aircraft, which can be quietly tested and flown from remote airfields, there will be no keeping the enormous Pathfinder 1 secret when it finally leaves its hangar at Moffett Field. In January, LTA flew a small unmarked airship, usually operated for aerial advertising, from there, probably to test LTA’s flight systems. Even the sight of that modest craft got observers chattering. The unveiling of Pathfinder 1 will show once and for all that Sergey Brin’s dreams of an airship renaissance are anything but hot air.
The projections were horrifying. Experts were forecasting upwards of 100 million people in the United States infected with the novel coronavirus, with 2 percent needing intensive care, and half of those requiring the use of medical ventilators.
In early March, it seemed as if the United States might need a million ventilators to cope with COVID-19—six times as many as hospitals had at the time. The federal government launched a crash purchasing program for 200,000 of the complex devices, but they would take months to arrive and cost tens of thousands of dollars each.
Across the United States and around the world, engineers sat up and took notice. At NASA’s Jet Propulsion Laboratory (JPL), in Pasadena, Calif., a chance meeting between engineers at a coffee machine led to a prototype low-cost ventilator in five days. At Virgin Orbit, a rocket startup in nearby Long Beach, engineers assembled their own ultrasimple but functional ventilator in three days.
And in Cleveland, a team at the Dan T. Moore Company, a holding company with an impressive portfolio in industrial R&D, had its first prototype up and running in just 12 hours. “It was made out of plywood and very crude,” says senior engineering manager Ryan Sarkisian. “But it gave us a good understanding of what the next steps would be for a rapid response–style solution.”
From the largest universities to domestic garages, hundreds of teams and even individuals scrambled to build ventilators for the expected onslaught. An evaluation of open-source ventilator projects has tracked 116 efforts globally, and it is far from comprehensive.
Meanwhile, the U.S. Food and Drug Administration (FDA) rushed through rules at the end of March that would allow, if the worst-case scenarios came to pass, new ventilators and other medical devices intended to treat COVID-19 to be deployed without the usual years-long safety assessments. Around the same time, General Electric and Ford announced that they were joining forces to rapidly manufacture 50,000 ventilators based on one of GE’s existing designs.
With hundreds of thousands of traditional ventilators on order and potentially even more DIY devices coming soon, President Trump boasted in a speech on 29 April, “We became the king of ventilators, thousands and thousands of ventilators.”
Now, though, before most of the DIY ventilators could make it to production, let alone treat a patient, the need for them has faded away. Aggressive social distancing and isolation policies have slowed transmission of the coronavirus, while hospitals and state governments readily shared surplus ventilators with locations that were suffering the worst outbreaks, like New York City.
Today, some hospitals are even quieter than they were before COVID-19. “This morning in Cincinnati, there were 12 COVID-19 patients on a ventilator,” said Richard Branson, a professor of surgery emeritus at the University of Cincinnati College of Medicine. In a phone interview with IEEE Spectrum in late April, Branson, who is also editor in chief of the journal Respiratory Care, added, “We usually have more patients on ventilators than that on a regular day, but we canceled all the [elective] surgeries.”
No one is complaining about having too many ventilators, of course. The story line, starting with the earnest pleas and the ensuing media frenzy, and continuing with the massive engineering response, certainly has a heartwarming ring. Countless engineers dropped what they were doing and worked long hours to design, build, and test impressive machines in weeks, rather than months or years. But an unforeseen twist in that story line raises some vexing questions. What has become of all these rapid-response ventilator projects and the tens of thousands of home-brewed devices they planned to produce? Has it all been a well-intentioned waste of time and money, a squandering of resources that could have been better put toward producing protective equipment or other materials? And even if any of these devices do find their way into hospitals, for example, in developing countries with a real need for ventilators, will they be safe and effective enough to actually use?
“There was one particular day where I was scrolling through Facebook, and one, two, three friends had lost a friend or family member to COVID-19,” remembers Wallace Santos, cofounder of Maingear, a maker of high-performance gaming computers based in Kenilworth, N.J. “That’s when I thought, Holy crap, this is really happening.”
So when Rahul Sood, the chairman of Maingear’s board and a longtime tech entrepreneur, suggested that Maingear build a ventilator itself, Santos was interested—if skeptical. “I didn’t know if we could do it, but we started to investigate,” he says. “And the truth is that if we can build really complex and beautiful liquid-cooling systems [for computers], we can build a ventilator as well. It’s not that hard actually.”
Like most medical gear, ventilators come in different types and sizes, and with different features, capabilities, and levels of complexity. They all perform the same basic function: getting oxygen into, and in some cases clearing carbon dioxide away from, the lungs of people who are having trouble breathing or who have ceased to be able to breathe at all. In relatively mild cases, physicians may use a noninvasive type, in which a tight-fitting mask, akin to a full-face scuba mask, provides pressurized air to the patient’s nose and mouth.
More severe cases are treated with an invasive system, meaning they make use of a tube through the mouth or through an opening in the neck into the patient’s windpipe (trachea). With this setup, the patient is usually sedated or kept unconscious for the days or weeks it can take for the patient’s body to fight off the infection and regain the ability to breathe independently.
Ventilators must carry out several functions with extreme reliability. They must, of course, supply oxygen at higher-than-ambient pressure and allow carbon dioxide to be exhaled and cleared away from the patient’s lungs. The air they provide to the patient must be warm and moist, and yet free of bacteria and other pathogens. Also, they must be equipped with sensors and software that detect if the breathing mask or tube has been dislodged, if the patient’s breathing has become erratic or weaker, or if the breathing rate has simply changed. Finally, the machines must be designed so that they can be thoroughly cleaned and also contain, as much as possible, any pathogens in the patient’s exhalations. The systems must also be compatible with existing hospital infrastructure and procedures.
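To make those monitoring duties concrete, here is a minimal sketch of the kind of alarm logic described above. The thresholds and names here are hypothetical illustrations, not any vendor’s actual code or clinically validated limits:

```python
# Minimal sketch of per-breath alarm checks on a ventilator, using
# invented example thresholds. Real devices use clinically validated
# limits and far more sensors.
from dataclasses import dataclass

@dataclass
class BreathSample:
    peak_pressure_cmH2O: float  # peak inspiratory pressure this breath
    rate_bpm: float             # measured breaths per minute

def check_alarms(sample, set_rate_bpm=15.0):
    alarms = []
    if sample.peak_pressure_cmH2O < 5:
        # Pressure never built up: circuit disconnected or mask dislodged
        alarms.append("LOW PRESSURE")
    if sample.peak_pressure_cmH2O > 40:
        # Pressure spiked: blocked tube, kinked line, or patient coughing
        alarms.append("HIGH PRESSURE")
    if abs(sample.rate_bpm - set_rate_bpm) > 4:
        # Breathing rate drifted from the setpoint: erratic breathing
        alarms.append("RATE DEVIATION")
    return alarms

print(check_alarms(BreathSample(peak_pressure_cmH2O=2.0, rate_bpm=15.0)))
```

Even this toy version shows why the sensing and alarm suite, rather than the air-pushing mechanism, is where much of a ventilator’s complexity lives.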
Consider Maingear’s design, which was based on an emergency ventilator that had already been used in Italy and Switzerland. To repurpose it for COVID-19, engineers made the part that touches a patient disposable rather than cleanable, to reduce the risk of cross infection. Maingear also rewrote some of its software and designed a tough PC-style case so that it could either be housed in a standard medical equipment rack, or moved about on wheels. “This thing is seven grand out the door,” says Sood. “And we can manufacture them very quickly in New York or New Jersey once we get FDA approval.”
At Dan T. Moore, engineering manager Sarkisian had a similarly abrupt introduction to ventilators. Before working on its ventilator, dubbed SecondBreath, his team had been developing lightweight, high-strength metal matrix composites for automotive brake pads. “We’re fairly green when it comes to medical devices,” he admits. “But understanding that manual resuscitation bags are readily available and already have FDA approval, we wanted to utilize that concept as a way to transfer breath to a patient.”
Just as it sounds, a manual resuscitation bag allows a trained medic to provide ventilation by squeezing on a rubberized bag attached to a face mask. Originally designed in the 1950s, these bags are simple and flexible, but they do require a trained operator. To bring the tactic into the modern age, Dan T. Moore’s engineers focused on automating, controlling, and monitoring that squeezing process.
After building a bag-squeezing prototype in 12 hours, Sarkisian’s team of nine automotive engineers refined the design by talking to local doctors and experimenting with different components. “We were trying to make it as cheap as possible, and to really understand the features that it needs to have to keep people alive,” Sarkisian says.
At a minimum, that meant a system that could reliably squeeze a bag for hours, or even days, on end, be readily usable by doctors accustomed to working with more sophisticated ventilators, and have a suite of alarms should anything go wrong.
“We had about two weeks working through prototypes, adding more alarms and different sensors to optimize our system,” says Sarkisian. “Then a little over a week testing out the system, and another two to three days at a local hospital on lung-simulator machines.” It took just 21 days from the first SecondBreath prototype to submitting the device to the FDA, and the company hopes to sell its devices for around US $6,000.
“It just shows you that medical devices are pretty dang expensive,” says Andrew Dorman, an engineer who worked on the SecondBreath project. “When people learn that we can make these ventilators in three weeks and can sell them for a fraction of the price, they might take a look at [the traditional] medical system and say, something’s wrong here.”
Virgin Orbit also made a bag-squeezing ventilator after its CEO, Dan Hart, offered his factory and workers to California governor Gavin Newsom. Newsom connected Hart to the California Emergency Medical Services Authority (EMSA), which identified ventilators as its key need at the time. “A consortium of physicians, scientists, and engineers led by the University of California, Irvine, and the University of Texas at Austin directed us [to] go and make the simplest possible ventilator,” says Virgin Orbit vice president for special projects Will Pomerantz. “We essentially took all the people who were going to be building next year’s rockets and said, ‘Next year’s rockets can probably wait a little bit—you’re going to be building or testing ventilators.’ ”
As a manufacturer of air-launched rockets for small satellites, Virgin Orbit realized that one of the bigger challenges was going to be dealing with supply chains disrupted by COVID-19 itself. “We tried to build it without requiring anything complex or specialized, or if it was, from an industry that does not touch upon medical devices at all,” says Pomerantz.
For example, instead of building a motor from scratch, the Virgin Orbit team utilized something they could find in almost any small town: the windshield wiper motor from one of the country’s most ubiquitous cars, a 2010 Toyota Camry.
This concern for manufacturability also drove the scientists and engineers at NASA’s JPL. They wanted their ventilator, an open-source design, to be within the reach of almost any competent mechanic, anywhere in the world. “Our target was that if a person had all the parts sitting in front of them, they could put it together alone in about 45 minutes, with as few tools as possible,” says Michael R. Johnson, a spacecraft mechatronics expert who served as chief mechanical engineer for the JPL’s ventilator, called VITAL (Ventilator Intervention Technology Accessible Locally). “We couldn’t make them fast enough. Not that anybody was saying anything—it was just understood.”
In the end, the JPL actually designed two different low-cost, easy-to-assemble ventilators, neither of which relied on existing resuscitation bags. A pneumatic device uses stored energy in the hospital gas supply to power the ventilation, while a design that uses a separate compressor serves situations where pressurized gases are unavailable. Not only would the two designs serve different needs, they would also reduce the chance of component shortages halting production entirely. A single circuit board serves both designs, using a simple microcontroller assembly running Arduino code.
The pneumatic ventilator was ready first and was sent straight to the Icahn School of Medicine at Mount Sinai, in New York City, for testing on human simulator machines and by medical staff. “The whole time it was there, we had a Zoom meeting running, and we were watching them use it,” says Johnson. “We would watch things like how they were pushing the buttons. There was one they were pushing really hard, and I thought, okay, let’s add a couple more support screws to the circuit board. Or we’d hear someone say how it wasn’t adjusting quite the way they’d like, so we made a note to change the sensitivity.”
Beyond the engineering work, preparing the descriptive and safety paperwork required by the FDA involved a significant investment of time and resources. Most efforts used outside lawyers and experts, with the JPL’s comprehensive submission running to 505 pages. “A big thing for the FDA is the failure modes and effects analysis, which we do all the time for our spacecraft,” says Johnson. “We asked a couple of medical companies if they could send us examples, and it turns out they’re actually less rigorous than what we do for our space missions.”
Probably best prepared for the administrative burden of developing a new ventilator in a matter of weeks was the team at the University of Minnesota’s Earl E. Bakken Medical Devices Center. For their day jobs, the engineers here work with industrial partners to come up with ideas for, and develop prototypes of, medical devices.
“We have iterative design processes that I’ve both learned and taught here, so I knew that this was possible if you took the right approach,” says Aaron Tucker, lead engineer for the university’s Coventor ventilator project. “We boiled it down to the key concepts—what you need at a bare minimum to survive when you’re being ventilated.”
The Coventor is another design that compresses commercially available ventilation bags, paring the parts down to just $150 of readily available components housed in plain sheet metal. The Coventor team worked closely with Boston Scientific, a large manufacturer of medical devices, to label their device so as to mitigate the risks around its limited capabilities. “Our experience and our ability to collaborate was why we ended up being [one of the first] approved for use by the FDA,” says Tucker. The Coventor’s selling price will be less than $1,000 when Boston Scientific moves it into production shortly.
By mid-May, the FDA had approved six new ventilators for emergency use, including devices from Coventor, SecondBreath, Virgin Orbit, and the JPL. All are limited to use during the pandemic, and only when standard ventilators are unavailable.
SecondBreath has manufactured 36 devices so far, while Virgin Orbit has produced a couple of hundred. Coventor and Boston Scientific have an order for 3,000 of their ventilators from UnitedHealth Group, a health care company in Minnesota. As of early June, none had been deployed in the United States.
“I don’t think any of these devices will ever be used in the United States,” says Branson, the University of Cincinnati surgery professor. “I give these people a lot of credit. They’re trying to do something positive. They’re very smart, they’re motivated, they’re well meaning, but they don’t know what they don’t know.”
“Several of the ventilators the government is purchasing don’t meet these basic requirements,” Branson says. “Some don’t meet even half of them.” The ventilators authorized by the FDA for emergency use seem to similarly fall short, often in multiple areas. Branson notes that the bag-squeezing designs, in particular, are problematic.
“If everybody had paralyzed respiratory muscles and normal lungs, as in polio, a bag squeezer would work,” he says. “But this is an acute respiratory distress syndrome with parts of the lung that are stiff right next to parts of the lung that aren’t. If you’re not careful with how you deliver the breath, areas that are stiff get very little gas, and areas that are not get too much gas, and that injures them.”
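Branson’s warning follows from basic respiratory mechanics: lung regions connected in parallel see the same airway pressure, so a delivered breath splits in proportion to each region’s compliance (V = C × P). The back-of-the-envelope illustration below uses my own example numbers, not Branson’s:

```python
# Toy model of a pressure-driven breath split between parallel lung
# regions. At a shared airway pressure, each region receives volume in
# proportion to its compliance (V = C * P), so stiff ARDS-damaged
# tissue is starved of gas while healthy tissue is overinflated.
def volume_split(total_volume_ml, compliances_ml_per_cmH2O):
    total_c = sum(compliances_ml_per_cmH2O)
    return [total_volume_ml * c / total_c for c in compliances_ml_per_cmH2O]

# Hypothetical numbers: a healthy region (50 mL/cmH2O) next to a stiff,
# damaged one (10 mL/cmH2O), sharing a 500 mL breath.
healthy, stiff = volume_split(500, [50, 10])
print(f"healthy region: {healthy:.0f} mL, stiff region: {stiff:.0f} mL")
```

In this example the healthy region takes five times the gas of the stiff one, which is exactly the overdistension-plus-underventilation pattern Branson says a simple bag squeezer cannot manage.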
Another problem is that most of the DIY ventilators do not allow for patients to breathe on their own, requiring them to be heavily sedated. “But in New York City [at the height of the outbreak], they ran out of drugs to sedate people,” Branson notes. “And paralyzing people has its own negative consequences,” he adds. “In general, the less sophisticated the device is, the more sophisticated the caregiver has to be.”
But in the kind of crises where emergency ventilators will be needed, medical staff will already be stretched dangerously thin. Branson believes that asking them to suddenly start using unfamiliar new devices that lack traditional protective features and alarms is a recipe for disaster. “You can’t change the standards of care to meet the requirements of the ventilator,” he says.
With the need for ventilators down sharply in North America, and with many medical professionals there reluctant to use home-brewed ventilators, what will become of all this work? Some DIY ventilator teams are already looking overseas. Indeed, Coventor, SecondBreath, and the JPL all designed their devices with an eye on developing countries. “We know that the ‘Cadillac’ ventilators we’re used to in the U.S. are not available in many countries, for cost and other reasons,” says Coventor’s Tucker. “We’re thinking about whether and how we can start to move the Coventor overseas. Nothing prevents it from being used globally. We even picked a global power supply.”
Virgin Orbit has already found a manufacturer in South Africa to produce at least 1,000 of its resuscitators for use by the African Union.
Whether the U.S. startups can sail past foreign regulators as swiftly as they did the FDA remains to be seen. And there are already plenty of engineers innovating overseas, of course. In India, for example, AgVa Healthcare has been marketing a compact ventilator for less than $3,000 and claims to be building 10,000 units a month.
None of the organizations Spectrum spoke with would put a dollar amount on their DIY ventilator efforts, but the combined total is very likely well into the millions. Branson suggests that some of those funds might have been better deployed in proven technologies. “The answer would be to have money and resources given to people who already make FDA-approved devices, to make more of them,” he says. Other tech companies also focused on much simpler items that are still in short supply, such as protective masks and gloves.
For now, as the adrenaline rush from having produced their first ventilators ebbs, some of the ventilator teams are experiencing a welcome lull. “It did a lot for employee morale to feel like we were all putting our shoulders to it and pushing together,” says Virgin Orbit’s Pomerantz. “If the world needs our ventilators, we’ll keep building them. And if it doesn’t, thank goodness. We’re happy to get back to our day jobs.”
Not everyone is convinced that the worst is over. Sood is keen to keep Maingear’s development effort on track. “Based on everything that we’ve seen and all the data we looked at, this is just wave one of a multiwave process,” he says. “We think that wave two, sometime in the fall, might even be worse, and they’re going to start asking for ventilators again. We want to get our machines prepared and ready ahead of time.”
Sood’s pessimism has plenty of company. When the JPL recently invited firms to ask for (free) licenses to manufacture NASA’s open-source ventilators, it received more than 200 applications from organizations all over the world. The best-case scenario, from all perspectives, is that such efforts continue to be a magnificent waste of time and money.
As a transportation technology journalist, I’ve ridden in a lot of self-driving cars, both with and without safety drivers. A key part of the experience has always been a laptop or screen showing a visualization of other road users and pedestrians, using data from one or more laser-ranging lidar sensors.
Ghostly three-dimensional shapes made of shimmering point clouds appear at the edge of the screen, and are often immediately recognizable as cars, trucks, and people.
At first glance, the screen in Echodyne’s Ford Flex SUV looks like a lidar visualization gone wrong. As we explore the suburban streets of Kirkland, Washington, blurry points and smeary lines move across the display, changing color as they go. They bear little resemblance to the vehicles and cyclists I can see out of the window.
“The launch environment of tomorrow will more closely resemble that of airline operations—with frequent launches from a myriad of locations worldwide,” said Todd Master, DARPA’s program manager for the competition at the time. The U.S. military relies on space-based systems for much of its navigation and surveillance needs, and wants a way to quickly replace damaged or destroyed satellites in the future. At the moment, it takes at least three years to build, test, and launch spacecraft.
The Uber car that hit and killed Elaine Herzberg in Tempe, Ariz., in March 2018 could not recognize all pedestrians, and was being driven by an operator likely distracted by streaming video, according to documents released by the U.S. National Transportation Safety Board (NTSB) this week.
But while the technical failures and omissions in Uber’s self-driving car program are shocking, the NTSB investigation also highlights safety failures that include the vehicle operator’s lapses, lax corporate governance of the project, and limited public oversight.
This week, the NTSB released over 400 pages ahead of a 19 November meeting aimed at determining the official cause of the accident and reporting on its conclusions. The Board’s technical review of Uber’s autonomous vehicle technology reveals a cascade of poor design decisions that led to the car being unable to properly process and respond to Herzberg’s presence as she crossed the roadway with her bicycle.
A radar on the modified Volvo XC90 SUV first detected Herzberg roughly six seconds before the impact, followed quickly by the car’s laser-ranging lidar. However, the car’s self-driving system could not classify an object as a pedestrian unless it was near a crosswalk.
For the next five seconds, the system alternated between classifying Herzberg as a vehicle, a bike, and an unknown object. Each inaccurate classification had dangerous consequences. When the car thought Herzberg was a vehicle or bicycle, it assumed she would be traveling in the same direction as the Uber vehicle but in the neighboring lane. When it classified her as an unknown object, it assumed she was static.
Worse still, each time the classification flipped, the car treated her as a brand new object. That meant it could not track her previous trajectory and calculate that a collision was likely, and thus did not even slow down. Tragically, Volvo’s own City Safety automatic braking system had been disabled because its radars could have interfered with Uber’s self-driving sensors.
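The consequence of that design choice can be seen in a toy sketch (emphatically not Uber’s actual software): a tracker that discards its history whenever an object’s class changes can never accumulate the trajectory it needs to extrapolate a crossing path.

```python
# Toy illustration of the failure mode the NTSB describes: resetting an
# object's history on every reclassification destroys the trajectory
# data needed to predict where the object is heading.
class NaiveTracker:
    def __init__(self):
        self.last_class = None
        self.history = []          # past (x, y) observations

    def update(self, obj_class, position):
        if obj_class != self.last_class:
            self.history = []      # treated as a brand-new object
            self.last_class = obj_class
        self.history.append(position)

    def predicted_velocity(self):
        if len(self.history) < 2:
            return None            # too little history to extrapolate
        (x0, y0), (x1, y1) = self.history[-2], self.history[-1]
        return (x1 - x0, y1 - y0)

tracker = NaiveTracker()
# The object moves steadily, but the classifier flips every frame...
for cls, pos in [("vehicle", (0, 0)), ("bicycle", (1, 0)), ("other", (2, 0))]:
    tracker.update(cls, pos)
print(tracker.predicted_velocity())  # None: each flip wiped the track
```

With a stable classification, the same three observations would have yielded a clear velocity estimate and a collision warning with seconds to spare.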
By the time the XC90 was just a second away from Herzberg, the car finally realized that whatever was in front of it could not be avoided. At this point, it could have still slammed on the brakes to mitigate the impact. Instead, a system called “action suppression” kicked in.
This was a feature Uber engineers had implemented to avoid unnecessary extreme maneuvers in response to false alarms. It suppressed any planned braking for a full second, while simultaneously alerting and handing control back to its human safety driver. But it was too late. The driver began braking after the car had already hit Herzberg. She was thrown 23 meters (75 feet) by the impact and died of her injuries at the scene.
Four days after the crash, at the same time of night, Tempe police carried out a rather macabre re-enactment. While an officer dressed as Herzberg stood with a bicycle at the spot she was killed, another drove the actual crash vehicle slowly towards her. The driver was able to see the officer from at least 194 meters (638 feet) away.
Key duties for Uber’s 254 human safety drivers in Tempe were actively monitoring the self-driving technology and the road ahead. Yet recordings from cameras in the crash vehicle show that the driver spent much of the ill-fated trip looking at something placed near the vehicle’s center console, and occasionally yawning or singing. The cameras show that she was looking away from the road for at least five seconds directly before the collision.
Police investigators later established that the driver had likely been streaming a television show on her personal smartphone. Prosecutors are reportedly still considering criminal charges against her.
Uber’s Tempe facility, nicknamed “Ghost Town,” did have strict prohibitions against using drugs, alcohol or mobile devices while driving. The company also had a policy of spot-checking logs and in-dash camera footage on a random basis. However, Uber was unable to supply NTSB investigators with documents or logs that revealed if and when phone checks were performed. The company also admitted that it had never carried out any drug checks.
Originally, the company had required two safety drivers in its cars at all times, with operators encouraged to report colleagues who violated its safety rules. In October 2017, it switched to having just one.
The investigation also revealed that Uber didn’t have a comprehensive policy on vigilance and fatigue. In fact, the NTSB found that Uber’s self-driving car division “did not have a standalone operational safety division or safety manager. Additionally, [it] did not have a formal safety plan, a standardized operations procedure (SOP) or guiding document for safety.”
Instead, engineers and drivers were encouraged to follow Uber’s core values or norms, which include phrases such as: “We have a bias for action and accountability”; “We look for the toughest challenges, and we push”; and, “Sometimes we fail, but failure makes us smarter.”
NTSB investigators found that the state of Arizona had a similarly relaxed attitude toward safety. A 2015 executive order from Governor Doug Ducey established a Self-Driving Vehicle Oversight Committee. That committee met only twice, with one of its representatives telling NTSB investigators that “the committee decided that many of the [laws enacted in other states] stifled innovation and did not substantially increase safety. Further, it felt that as long as the companies were abiding by the executive order and existing statutes, further actions were unnecessary.”
When investigators inquired whether the committee, the Arizona Department of Transportation, or the Arizona Department of Public Safety had sought any information from autonomous driving companies to monitor the safety of their operations, they were told that none had been collected.
As it turns out, the fatal collision was far from the first crash that Uber’s 40 self-driving cars in Tempe had been involved in. Between September 2016 and March 2018, the NTSB learned there had been 37 other crashes and incidents involving Uber’s test vehicles in autonomous mode. Most were minor rear-end fender-benders, but on one occasion, a test vehicle drove into a bicycle lane bollard. Another time, a safety driver had been forced to take control of the car to avoid a head-on collision. The result: the car struck a parked vehicle.
For the first time, we have a complete, representative number for the overall orbital collision risk of a satellite mega-constellation.
Last month, Amazon provided the U.S. Federal Communications Commission (FCC) with data for its planned fleet of 3,236 Kuiper System broadband Internet satellites.
If one in 10 satellites fails while on orbit, and loses its ability to dodge other spacecraft or space junk, Amazon’s figures [PDF] show that there is a 12 percent chance that one of those failed satellites will suffer a collision with a piece of space debris measuring 10 centimeters or larger. If one in 20 satellites fails—the same proportion as failed in rival SpaceX’s first tranche of Starlink satellites—there is a six percent chance of a collision.
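The two headline figures are consistent with a simple statistical model. As an illustrative sketch (my own back-of-envelope check, not Amazon's actual methodology), treat each failed satellite's chance of a debris strike as independent, back out the implied per-satellite probability from the 1-in-10 case, and see whether it predicts the 1-in-20 figure:

```python
# Illustrative consistency check on the figures in Amazon's filing.
# Assumes each failed satellite's collision risk is independent, so the
# fleet-wide probability with n failed satellites is 1 - (1 - p)**n.
n_sats = 3236

# Back out the implied per-satellite probability from the 1-in-10 case,
# where the filing reports a 12 percent fleet-wide chance of a collision.
n_failed_10 = n_sats // 10               # ~323 failed satellites
p_single = 1 - (1 - 0.12) ** (1 / n_failed_10)

# Predict the 1-in-20 case and compare with the filing's 6 percent figure.
n_failed_20 = n_sats // 20               # ~161 failed satellites
p_fleet_20 = 1 - (1 - p_single) ** n_failed_20

print(f"implied per-satellite risk: {p_single:.4%}")
print(f"predicted fleet risk at 1-in-20 failures: {p_fleet_20:.1%}")
```

The model predicts roughly 6.2 percent for the 1-in-20 case, in line with the six percent Amazon reported.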
More than a third of all the orbital debris being tracked today came from just two collisions that occurred about a decade ago. Researchers are concerned that more explosions or breakups could accelerate the Kessler Syndrome—a runaway chain reaction of orbital collisions that could render low Earth orbit (LEO) hostile to almost any spacecraft.
Satellite start-up UbiquitiLink’s patented technology allows ordinary cellphones to use satellites like cell towers, bringing cheap messaging to millions
“Tens of thousands of people every year die because they have no connectivity,” says Charles Miller, CEO of satellite communications start-up UbiquitiLink. “That is coming to an end.”
It’s a bold claim from a young start-up that has launched only a single experimental satellite to date, but Miller insists that UbiquitiLink has developed technology that enables everyday cellphones to communicate directly with satellites in orbit.
If true, this could enable a cheap and truly global messaging service without the need for expensive extra antennas or ground stations. For example, Miller points out that fishing is one of the most dangerous industries in the world, with communications failures contributing to many of its over 20,000 deaths each year.
“Around the world, most fishermen can’t afford a satellite phone,” says Miller. “They’re living on the edge already. Now with the phone in their pocket that they [already own], they can get connected.”
The received wisdom has been that cellphones lack the power and sensitivity to communicate with satellites in orbit, which are in any case moving far too fast to form useful connections.
UbiquitiLink engineers tackled one problem at a time. For a start, they calculated that cellphones should—just—have enough power to reach satellites in very low Earth orbits of around 400 kilometers, as long as they used frequencies below 1 GHz to minimize atmospheric attenuation. Messages would be queued until a satellite passes overhead—perhaps once a day at first, rising to hourly as more satellites are launched.
Satellites would use the same software found in terrestrial cell towers, with a few modifications. Signals would be Doppler shifted because of the satellite’s high velocity (around 7.5 kilometers/second).
“You have to compensate so that the phone doesn’t see that Doppler shift, and you have to trick the phone into accepting the time delay from the extra range,” says Miller. “Those two pieces are our secret sauce and are patented. The phone just thinks [the satellite is] a weak cell tower at the edge of its ability to connect to, but it tolerates that.”
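The scale of those two corrections is easy to estimate. The sketch below uses my own back-of-envelope figures (an assumed 900 MHz carrier and a nominal 35-kilometer GSM cell radius; only the 7.5 km/s velocity and 400 km altitude come from the article) to show roughly how much Doppler shift and extra delay the satellite must hide from an unmodified phone:

```python
# Back-of-envelope estimate of the Doppler shift and propagation delay
# a satellite "cell tower" must compensate for. Assumptions: a 900 MHz
# carrier and a ~35 km maximum terrestrial GSM cell radius.
C = 299_792_458.0        # speed of light, m/s

carrier_hz = 900e6       # assumed sub-GHz cellular band
sat_speed = 7_500.0      # orbital velocity, m/s (from the article)
altitude_m = 400e3       # very low Earth orbit (from the article)

# Worst-case Doppler shift, with the satellite low on the horizon and
# moving almost directly toward or away from the phone.
doppler_hz = carrier_hz * sat_speed / C

# One-way propagation delay with the satellite directly overhead,
# compared with the delay from the edge of a large terrestrial cell.
delay_s = altitude_m / C
cell_delay_s = 35e3 / C

print(f"Doppler shift: up to ±{doppler_hz / 1e3:.1f} kHz")
print(f"One-way delay: {delay_s * 1e3:.2f} ms "
      f"(~{delay_s / cell_delay_s:.0f}x a 35 km terrestrial cell)")
```

Even at the satellite's closest approach, the signal arrives with roughly ten times the delay a terrestrial protocol expects, and the carrier can be shifted by tens of kilohertz, which is why both effects must be pre-compensated on board.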
UbiquitiLink also brushes off concerns about interference. In a filing with the FCC, the company noted that the downlink signal from its satellite “is very low and is intended to be the ‘tower of last resort.’” In cities, the satellite’s broadcasts would be drowned out by powerful urban cell towers, while in areas with no cell coverage at all, there is nothing to interfere with.
It is only in rural or suburban areas, with sparse and widely separated towers, that interference is a potential concern. Even there, wrote UbiquitiLink, the design of cellular networks, and the fact that the satellite uses time-sharing protocols, means there is just a 0.0000117 percent chance of a conflict, which would last only a very short time.
The technology has already been tested. In February, an experimental satellite briefly connected with cellular devices in New Zealand and the Falkland Islands before a computer on board failed. “This limited our ability to test but we got enough data to demonstrate the key fundamentals we couldn’t from the ground,” says Miller.
UbiquitiLink is now planning to try again. In a few days, its latest orbital cell tower will launch on board a SpaceX resupply mission to the International Space Station. Later this summer, the payload will be attached to a Cygnus capsule that brought supplies on a previous mission. When the capsule is jettisoned for its return to Earth, UbiquitiLink’s device will piggyback on it, hopefully for six months, testing 2G and LTE cell connections with wireless operators in up to a dozen countries.
Miller says UbiquitiLink has trial agreements with nearly 20 operators around the world, and plans to operate a basic messaging service in 56 countries. “From their perspective, we’re a roaming provider that extends their network everywhere. They keep the customer relationship and we’re just a wholesale provider. It’s a win-win relationship,” he says.
This week, the company also raised another $5.2 million in funding from a venture capital firm run by Steve Case, co-founder of AOL, bringing its total capitalization to over $12 million.
If these tests go well, UbiquitiLink wants to start launching operational satellites next year, with plans for several thousand satellites by 2023. Today’s smartphones could connect to UbiquitiLink’s satellites by simply downloading an app, and even a handful of satellites could provide a useful service, says Miller: “With 3 to 6 microsatellites, we can provide global coverage everywhere between +55 and -55 degrees latitude several times a day. Not all the 5 billion people with a phone will want to use that. But even if just one in a hundred thinks a periodic service is good enough, that’s still 50 million people.”
Beyond emergency messaging, UbiquitiLink is targeting internet of things users who might balk at buying additional hardware. “Most cars come off the assembly line today with a cellular chip already installed, for security or over the air updates,” says Miller. “Those cars will now stay connected everywhere.”
If UbiquitiLink’s technology works at scale, it could undercut other satellite start-ups, like Swarm, that are pinning their hopes on selling millions of earth stations for IoT. But UbiquitiLink is not shunning traditional satellites completely. The test device launching this weekend will use rival Globalstar’s satellites for telemetry, tracking and control.
In a cavernous building in Washington state, Blue Origin workers are constructing New Glenn’s BE-4 engine
Jeff Bezos, the founder of Amazon and the richest person on Earth, is of course a man who thinks big. But exactly how big is only now becoming clear.
“The solar system can support a trillion humans, and then we’d have 1,000 Mozarts, and 1,000 Einsteins,” he told a private aviation group at the Yale Club in New York City this past February. “Think how incredible and dynamic that civilization will be.” The pragmatic entrepreneur went on to say that “the first step [is] to build a low-cost, highly operable, reusable launch vehicle.” And that’s precisely what he is doing with his private aerospace firm, Blue Origin.
Blue Origin is not just a company; it’s a personal quest for Bezos, who currently sells around US $1 billion of his own Amazon stock each year to fund Blue Origin’s development of new spacecraft. The first, called New Shepard, is a suborbital space-tourist vehicle, which should make its first crewed flight later this year. But it is the next, a massive rocket called New Glenn, that could enable cheap lunar missions and kick-start Bezos’s grand vision of human beings living all over the solar system.
New Glenn’s first stage will use seven enormous new BE-4 engines, each powered by methane (the same fuel used in some of Amazon’s less-polluting delivery vans in Europe). Like SpaceX’s Falcon booster, the New Glenn’s first stage will also use its engines to steer itself gracefully back down to a landing ship for reuse.
After eight years of development, the BE-4 represents the cutting edge of rocket science. It promises to be simpler, safer, cheaper, and far more reusable than the engines of yesteryear.
Blue Origin is also working on two other engines, including one (the BE-7) destined for the company’s Blue Moon lunar lander. But the BE-4 is the largest of the three, designed to generate as much as 2,400 kilonewtons of thrust at sea level. That’s far less than the 6,770 kN provided by each of the five F-1 engines that sent men to the moon a half century ago. Even so, 2,400 kN is quite respectable for a single engine, which in multiples can produce more than enough oomph for the missions envisioned. For comparison, the Russian RD-171M engine provides a thrust of 7,257 kN, and Rocketdyne’s RS-68A, which powers the Delta IV launch vehicle, can generate 3,137 kN.
But the real competition now arguably comes from the other swashbuckling billionaire in the United States’ new space race: Elon Musk. His aerospace company, SpaceX, is testing a big engine called Raptor, which is similarly powered by liquid methane and liquid oxygen. Although the Raptor is slightly less powerful, at 1,700 kN, it is destined for an even larger rocket, the Super Heavy, which will employ 31 of the engines, and the Starship spacecraft, which will use 7 of them.
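What matters for a launch vehicle is not per-engine thrust but what the whole cluster delivers. A quick tally of the sea-level figures quoted above (all taken from the article; the totals are my own arithmetic) puts the comparison in perspective:

```python
# Total first-stage, sea-level thrust implied by the per-engine figures
# quoted in the article (values in kilonewtons).
stages = {
    "Saturn V (5 x F-1)":        5 * 6770,
    "New Glenn (7 x BE-4)":      7 * 2400,
    "Super Heavy (31 x Raptor)": 31 * 1700,
}
for name, total_kn in stages.items():
    print(f"{name}: {total_kn:,} kN")
```

By this count, New Glenn's seven BE-4s would deliver about half the Saturn V's liftoff thrust, while Super Heavy's 31 Raptors would comfortably exceed it.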
With SpaceX working at a blistering pace on various space missions and the oft-delayed BE-4 still two years from its first flight, Bezos could find his futuristic engine overshadowed before it begins launching payloads into orbit. Even so, Bezos’s new rocket engine could prove more reliable and less costly than its rivals, which would make it enormously influential in the long run.
Every aspect of the BE-4’s design can be traced back to Bezos’s requirements of low cost, reusability, and high operability.
The overwhelming majority of orbital rocket engines ever made, typically costing millions of dollars apiece, have been used just once, ending up on the bottom of the sea or scattered over a desert. That single-shot approach makes about as much sense, Musk likes to say, as scrapping a 747 airliner after every flight.
The space shuttle was supposed to change all that, combining two reusable boosters with an orbiter housing three main engines that could be flown over and over again. But the shuttle proved far different from the workhorse it was intended to be, requiring painstaking evaluation and reconstruction after every flight. As a result, each shuttle mission cost an estimated $450 million. Riffing on Musk’s airliner analogy, Bezos said recently, “You can’t fly your 767 to its destination and then X-ray the whole thing, disassemble it all, and expect to have acceptable costs.”
In the end, Blue Origin took inspiration for the BE-4 not from the U.S. space program but from the program’s archrival, that of the Soviets.
As far back as 1949, Soviet engineers started adopting staged combustion engines, where some fuel and oxidizer flows first through a preburner before reaching the main combustion chamber. That preburn is greatly restricted, providing just enough pressure increase to drive the turbines that pump fuel and oxidizer into the combustion chambers. This scheme is more efficient than those used in simpler engines in which some propellant is burned just to drive the engine’s pumps. In that case, the hot gases that result are vented, which squanders the energy left in them. In their designs, Russian engineers focused on a type of staged combustion that uses a high ratio of oxidizer to fuel in the preburner and delivers exceptional thrust-to-weight performance.
American engineers considered this approach to be impractical because high levels of hot, oxygen-rich gases from the preburner would attack and perhaps even ignite metallic components downstream. They opted instead to develop “fuel-rich” preburner technology, which doesn’t have this problem because the hot gases leaving the preburner contain little oxygen. American engineers used this approach, for example, in the shuttle’s main engines.
The Soviets persevered, using oxygen-rich staged combustion in an engine called the NK-33 for the USSR’s secret moon-shot program in the late 1960s. The result of that program, a powerful but ungainly rocket called the N1, suffered a series of spectacular launchpad failures and never reached orbit. Dozens of NK-33s were mothballed in a warehouse until the mid-1990s, when the U.S. engine company Aerojet bought them to study and rebuild.
By the time Blue Origin started work on the BE-4 in 2011, American rocket engineers were ready to take on the challenges of oxygen-rich staged combustion to achieve the higher efficiency it offered. So that’s what Blue Origin decided to use in this new rocket engine. SpaceX, too, will have an oxygen-rich preburner in its Raptor engines, which will also have a fuel-rich preburner, a configuration known as full-flow staged combustion.
As the Soviets learned vividly with the N1, complexity is the enemy of reliability—even more so when an engine needs to be reused many times. “Fatigue is the biggest issue with a reusable engine,” says Tim Ellis, a propulsion engineer who worked on the BE-4 from 2011 to 2015. “Rocket engines experience about 10 times more stress, thrust, and power than an aircraft engine, so it’s a much harder problem.”
To help solve that problem, Ellis suggested incorporating 3D-printed metal parts into the BE-4. Using 3D printing accelerated the design process, replacing cast or forged parts that used to take a year or more to source with parts made in-house in just a couple of months. The technology also allowed intricately shaped components to be made from fewer pieces.
“Fewer parts means fewer joints, and joints are one of the areas that can fatigue more than anything else,” says Ellis. The 3D metal printing process involves sintering metal powders with lasers, and the resulting material can end up even stronger than traditional machined or cast components. Ellis estimates that up to 5 percent of Blue Origin’s engine by mass could now be 3D printed.
“True operational reusability is what we have designed to from day one,” says Danette Smith, Blue Origin’s senior vice president of Blue Engines, in an interview over email. Each BE-4 should be able to fly at least 25 times before refurbishment, according to Bezos. When the expense of building each engine can be shared over dozens of flights, running costs become more important.
Blue Origin and SpaceX have both settled on methane for fueling their new engines, but for different reasons. For Musk, methane meshes with his interplanetary ambitions. Methane is fairly simple to produce from just carbon dioxide and water, both to be found on Mars. A spaceship powered by methane engines could theoretically manufacture its own fuel on Mars for a journey back to Earth or to other destinations in the solar system.
Blue Origin’s choice was driven by more pragmatic concerns, says Rob Meyerson, president of Blue Origin from 2003 to 2018: “We found that LNG [liquefied natural gas] you could buy right out of the pipeline is four times cheaper than rocket-grade kerosene,” a more traditional fuel choice. Unlike gaseous methane, which often contains high levels of impurities, LNG is 95 percent pure methane, says Meyerson. Methane is also less toxic than kerosene and is stored at temperatures similar to those used for liquid oxygen, making refueling simpler and safer.
For all of Blue Origin’s technical prowess, media headlines might suggest that it’s losing this new space race. Virgin Galactic astronauts have flown the company’s suborbital vehicle to space twice, and SpaceX has delivered cargo more than 70 times to Earth orbit and beyond. Blue Origin, meanwhile, is still tinkering with the uncrewed New Shepard and carrying out seemingly interminable ground tests of the BE-4.
But saying Blue Origin is lagging is to misunderstand its mission, says John Horack, professor of aerospace policy at Ohio State University: “Their motto is Gradatim Ferociter—to be ferociously incremental, as opposed to making spectacular leaps forward. Test, test, test. Data, data, data. Improve and then do it all again.”
Most of Blue Origin’s engine and flight tests are carried out on a remote ranch in West Texas, far from prying eyes. The only mishaps that are publicly known are a prototype launch vehicle crashing there in 2011, a booster failure on return in 2015, and a BE-4 exploding on a test stand in 2017.
“If they were funded differently, there would be a need to demonstrate milestone after milestone,” says Horack. “But because they’re funded through Mr. Bezos’s personal wealth, they can afford that strategy. And I think that in the end it will pay off handsomely.”
Arguably, it already has. In 2014, rival launch provider United Launch Alliance (ULA) was looking for an engine for its own next-generation launch vehicle, the Vulcan. It offered to invest in the BE-4 program, but only if Blue Origin could increase the engine’s planned thrust by nearly 40 percent. For Blue Origin, that would mean not only taking the BE-4 back to the drawing board but redesigning the entire New Glenn rocket to match, likely delaying its maiden launch by years. Worse still, there was no guarantee that ULA would end up buying any BE-4s at all.
For Meyerson, then Blue Origin president, the opportunity to power two new launch vehicles, potentially for a decade or more to come, was worth the risk. “There’s not a lot of new rockets,” he says. “It’s not like the automobile industry, where companies are designing and building new cars every year.”
Last September, that gamble finally paid off as ULA confirmed that the Vulcan would use a pair of BE-4 engines. Just weeks later, the U.S. Air Force announced hundreds of millions of dollars in funding for both the Vulcan and the New Glenn to support future military launches. “It’s brilliant, because Blue Origin found a way to monetize something they had to do anyway,” says Horack. “The more engines you make, the lower your unit cost, the more flight data you get, and the more reliability you can build in. It’s a virtuous cycle.”
ULA’s decision also cleared the way for Blue Origin to start work on a planned BE-4 factory in Huntsville, Ala. Groundbreaking for the $200 million facility began in January. The company already has a factory to build and refurbish New Glenn rockets near the Kennedy Space Center, in Florida. The first New Glenn and BE-4s could lift off at Cape Canaveral as soon as 2021.
Blue Origin would be well advised to keep to that schedule. Gradatim Ferociter is a great motto for a billionaire’s passion project. But for a rapidly growing business that needs to compete in the race to return to the moon, Blue Origin might need to be a little less gradatim, and a little more ferociter.
This article appears in the July 2019 print issue as “The Heavy Lift.”
A Q&A with ‘Eyes in the Sky’ author Arthur Holland Michel
A new type of aerial surveillance, enabled by rapid advances in imaging and computing technology, is quietly replacing traditional drone video cameras. Wide-area motion imaging (WAMI) aims to capture an entire city within a single image, giving operators a God-like view in which they can follow multiple incidents simultaneously, and track people or vehicles backward in time.
Arthur Holland Michel, founder and co-director of the Center for the Study of the Drone, a research institute at Bard College in New York, has written a new book about WAMI called Eyes in the Sky: The Secret Rise of Gorgon Stare and How It Will Watch Us All. This fascinating history details WAMI’s development by researchers at a national lab, its deployment by the US military, and its arrival as a crime-fighting tool—and possibly privacy nightmare—in the skies above America.
IEEE Spectrum talked with Michel prior to the publication of his book. What follows is a transcript of that interview, lightly edited for clarity and length.
Amazon Web Services has promised immediate service but FCC filings suggest the company has yet to obtain the long-term licenses necessary to operate
When Amazon Web Services (AWS) announced the general availability of its satellite Ground Station service last month, it claimed that operators switching to its facilities could control satellites and “download, process, store, analyze, and act upon satellite data quicker with substantial cost savings.” Amazon announced eight satellite customers and partners, saying that Ground Station would be available immediately in the United States, with service in other countries rolling out in the next 12 months.
However, a review of U.S. Federal Communications Commission (FCC) filings suggests that the full Ground Station experience is currently only available to a single company, with two more startups enjoying partial service. And while Amazon has earned a reputation for moving aggressively into new markets, the technology giant also made several regulatory missteps that suggest it’s still finding its way in the world of satellite operations.
Elon Musk says SpaceX’s Starlink satellites will autonomously avoid hazards in orbit. Experts are not so sure
The 60 Starlink satellites that SpaceX is preparing to launch tomorrow, after two delays last week, are the first in a planned mega-constellation of nearly 12,000 satellites that Elon Musk hopes will bring affordable broadband Internet to everyone in the world.
Last Wednesday, SpaceX revealed that its Starlink satellites will be the first to autonomously avoid collisions with objects in orbit. Elon Musk told reporters, “They will use their thrusters to maneuver automatically around anything that [the U.S. military] is tracking.”
According to multiple experts, however, a single source of orbital data is not good enough to automate critical decisions about safety.