Tag Archives: Aerospace

China to Launch Space Mining Bot

Post Syndicated from Andrew Jones original https://spectrum.ieee.org/tech-talk/aerospace/satellites/china-to-launch-space-mining-bot

The possibility of space mining has long captured the imagination and even inspired business ventures. Now, a space startup in China is taking its first steps towards testing capabilities to identify and extract off-Earth resources.

Origin Space, a Beijing-based private space resources company, is set to launch its first ‘space mining robot’ in November. NEO-1 is a small (around 30 kilograms) satellite intended to enter a 500-kilometer-altitude sun-synchronous orbit. It will be launched by a Chinese Long March series rocket as a secondary payload.

This small spacecraft will not be doing actual mining; instead, it will be testing technologies. “The goal is to verify and demonstrate multiple functions such as spacecraft orbital maneuver, simulated small celestial body capture, intelligent spacecraft identification and control,” says Yu Tianhong, an Origin Space co-founder.

Origin Space, established in 2017, describes itself as China’s first firm focused on the utilization of space resources. China’s private space sector emerged following a 2014 government decision to open up the industry. Because asteroid mining has often been talked of as potentially a trillion-dollar industry, it is no surprise that a company focused on this area has joined the likes of others developing rockets and small satellites.

Another mission, Yuanwang-1 (‘Look up-1’), nicknamed “Little Hubble,” is slated to launch in 2021. A deal for development of the satellite was reached earlier this year with DFH Satellite Co., Ltd., a subsidiary of China’s main state-owned space contractor, CASC.

The “Little Hubble” satellite will carry an optical telescope designed to observe and monitor Near Earth Asteroids. Origin Space notes that identifying suitable targets is the first step toward space resources utilization.

Beyond this, Origin Space will also be taking aim at the moon with NEO-2, with a target launch date of late 2021 or early 2022.

Yu says the lunar mission plan has not been finalized, but it includes an eventual lunar landing. The tentative mission profile envisions an indirect journey to our celestial neighbor. The spacecraft will first be launched into low-Earth orbit and then gradually raise its orbit with onboard propulsion until it reaches a lunar orbit. The spacecraft will—after completing its observation goals—make a hard landing on the lunar surface.

Chandrayaan-2, India’s second lunar mission, followed a similarly circuitous route from geosynchronous transfer orbit out to lunar orbit; a small spacecraft with limited propulsion may likewise take a long time to reach the moon.

The issue of space resources became a hot topic once again after NASA administrator Jim Bridenstine last week announced that the agency will purchase lunar regolith and rock samples from commercial companies once they have collected moon material.

But Brian Weeden, Director of Program Planning for the Secure World Foundation, says that space resources companies still face myriad challenges, including the logistics of extracting resources and the small matter of who (other than NASA) is going to buy them.

“We’ve heard a lot about water on the Moon, but if you talk to any lunar scientist they will tell you we don’t actually know what the chemical composition of that water is and how difficult it will be to extract and refine it into a usable product,” says Weeden.

“The same thing goes for asteroids to an even greater degree. On Earth, we have massive mining operations and factories and smelteries to refine raw materials into usable products. How much of that will you need in space and how do you build it?” Weeden says.

He adds: “Right now the only real customers are the national space agencies that are planning to do things on the Moon. They might have a use for lunar regolith as a building material and water for fuel and life support. But aside from the very small contract we saw from NASA last week, I haven’t seen any major interest from governments in buying those materials commercially or at what price.”

Origin Space is far from the only or first space mining company. Planetary Resources, a U.S.-based firm, was established back in 2009 before suffering funding issues and eventually being acquired by blockchain firm ConsenSys in 2018. Another U.S. venture, Deep Space Industries, was acquired in January 2019 and is apparently pivoting away from asteroid mining towards developing small satellites. Meanwhile Tokyo-based ispace recently raised $28 million for the first of a series of lunar landers.

Asked about learning from the case of companies such as Planetary Resources, Yu said the American firm was a pioneer in the space resources industry, adding that it is always challenging to be among the first players in the game. “We think they lack important milestones and revenue. We are working hard to accelerate the progress of milestone projects while generating revenue.”

36Kr, a Chinese technology publishing and data company, reports (Chinese) that Origin Space will launch a pre-A financing round at the end of the year to fund the planned lunar exploration mission.

NASA Study Proposes Airships, Cloud Cities for Venus Exploration

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/aerospace/space-flight/nasa-study-proposes-airships-cloud-cities-for-venus-exploration

Editor’s note: It’s been 35 years since a pair of robotic balloons explored the clouds of Venus. The Soviet Vega 1 and Vega 2 probes didn’t manage to find any Venusians, but that may have been because we didn’t know exactly what to look for. On 14 September 2020, a study published in Nature Astronomy suggested that traces of phosphine in Venus’s atmosphere could be an indication of a biological process: that is, of microbial alien life. If confirmed, such a finding could completely change the way we think about the universe, which has us taking a serious look at what it would take to get human explorers to Venus in the near future. This article was originally published on 16 December 2014.

It has been accepted for decades that Mars is the next logical place for humans to explore. Mars certainly seems to offer the most Earth-like environment of any other place in the solar system, and it’s closer to Earth than just about anyplace else, except Venus. But exploration of Venus has always been an enormous challenge: Venus’s surface is hellish, with 92 atmospheres of pressure and temperatures of nearly 500 °C.

The surface of Venus isn’t going to work for humans, but what if we ignore the surface and stick to the clouds? Dale Arney and Chris Jones, from the Space Mission Analysis Branch of NASA’s Systems Analysis and Concepts Directorate at Langley Research Center, in Virginia, have been exploring that idea. Perhaps humans could ride through the upper atmosphere of Venus in a solar-powered airship. Arney and Jones propose that it may make sense to go to Venus before we ever send humans to Mars.

To put NASA’s High Altitude Venus Operational Concept (HAVOC) mission in context, it helps to start thinking about exploring the atmosphere of Venus instead of exploring the surface. “The vast majority of people, when they hear the idea of going to Venus and exploring, think of the surface, where it’s hot enough to melt lead and the pressure is the same as if you were almost a mile underneath the ocean,” Jones says. “I think that not many people have gone and looked at the relatively much more hospitable atmosphere and how you might tackle operating there for a while.”

At 50 kilometers above its surface, Venus offers one atmosphere of pressure and only slightly lower gravity than Earth. Mars, in comparison, has a “sea level” atmospheric pressure of less than a hundredth of Earth’s, and gravity just over a third Earth normal. The temperature at 50 km on Venus is around 75 °C, which is a mere 17 degrees hotter than the highest temperature recorded on Earth. It averages -63 °C on Mars, and while neither extreme would be pleasant for an unprotected human, both are manageable.

What’s more important, especially relative to Mars, is the amount of solar power available on Venus and the amount of protection that Venus has from radiation. The amount of radiation an astronaut would be exposed to in Venus’s atmosphere would be “about the same as if you were in Canada,” says Arney. On Mars, unshielded astronauts would be exposed to about 0.67 millisieverts per day, which is 40 times as much as on Earth, and they’d likely need to bury their habitats several meters beneath the surface to minimize exposure. As for solar power, proximity to the sun gets Venus 40 percent more than we get here on Earth, and 240 percent more than we’d see on Mars. Put all of these numbers together and as long as you don’t worry about having something under your feet, Jones points out, the upper atmosphere of Venus is “probably the most Earth-like environment that’s out there.”


It’s also important to note that Venus is often significantly closer to Earth than Mars is. Because of how the orbits of Venus and Earth align over time, a crewed mission to Venus would take a total of 440 days using existing or very near-term propulsion technology: 110 days out, a 30-day stay, and then 300 days back—with the option to abort and begin the trip back to Earth immediately after arrival. That sounds like a long time to spend in space, and it absolutely is. But getting to Mars and back using the same propulsive technology would involve more than 500 days in space at a minimum. A more realistic Mars mission would probably last anywhere from 650 to 900 days (or longer) due to the need to wait for a favorable orbital alignment for the return journey, which means that there’s no option to abort the mission and come home earlier: If anything went wrong, astronauts would have to just wait around on Mars until their return window opened.

HAVOC comprises a series of missions that would begin by sending a robot into the atmosphere of Venus to check things out. That would be followed up by a crewed mission to Venus orbit with a stay of 30 days, and then a mission that includes a 30-day atmospheric stay. Later missions would have a crew of two spend a year in the atmosphere, and eventually there would be a permanent human presence there in a floating cloud city.

The defining feature of these missions is the vehicle that will be doing the atmospheric exploring: a helium-filled, solar-powered airship. The robotic version would be 31 meters long (about half the size of the Goodyear blimp), while the crewed version would be nearly 130 meters long, or twice the size of a Boeing 747. The top of the airship would be covered with more than 1,000 square meters of solar panels, with a gondola slung underneath for instruments and, in the crewed version, a small habitat and the ascent vehicle that the astronauts would use to return to Venus’s orbit, and home.

Getting an airship to Venus is not a trivial task, and getting an airship to Venus with humans inside it is even more difficult. The crewed mission would involve a Venus orbit rendezvous, where the airship itself (folded up inside a spacecraft) would be sent to Venus ahead of time. Humans would follow in a transit vehicle (based on NASA’s Deep Space Habitat), linking up with the airship in Venus orbit.

Since there’s no surface to land on, the “landing” would be extreme, to say the least. “Traditionally, say if you’re going to Mars, you talk about ‘entry, descent, and landing,’ or EDL,” explains Arney. “Obviously, in our case, ‘landing’ would represent a significant failure of the mission, so instead we have ‘entry, descent, and inflation,’ or EDI.” The airship would enter the Venusian atmosphere inside an aeroshell at 7,200 meters per second. Over the next seven minutes, the aeroshell would decelerate to 450 m/s, and it would deploy a parachute to slow itself down further. At this point, things get crazy. The aeroshell would drop away, and the airship would begin to unfurl and inflate itself, while still dropping through the atmosphere at 100 m/s. As the airship got larger, its lift and drag would both increase to the point where the parachute became redundant. The parachute would be jettisoned, the airship would fully inflate, and (if everything had gone as it’s supposed to), it would gently float to a stop at 50 km above Venus’s surface.
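
For a rough sense of the entry loads these numbers imply, here is a back-of-the-envelope check in Python. It assumes a constant deceleration over the stated seven minutes, which the real entry profile certainly would not be; peak loads would be considerably higher.

# Average deceleration implied by the HAVOC entry figures quoted above
# (illustrative arithmetic only; the real profile is far from uniform).
entry_speed_m_s = 7_200      # speed at atmospheric entry
parachute_speed_m_s = 450    # speed at parachute deployment
entry_duration_s = 7 * 60    # "over the next seven minutes"

avg_decel = (entry_speed_m_s - parachute_speed_m_s) / entry_duration_s
print(f"Average deceleration: {avg_decel:.1f} m/s^2, about {avg_decel / 9.81:.1f} g")
# Roughly 16 m/s^2, or about 1.6 g averaged over the entry phase.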

Near the equator of Venus (where the atmosphere is most stable), winds move at about 100 meters per second, circling the planet in just 110 hours. Venus itself barely rotates, and one Venusian day takes longer than a Venusian year does. The slow day doesn’t really matter, however, because for all practical purposes the 110-hour wind circumnavigation becomes the length of one day/night cycle. The winds also veer north, so to stay on course, the airship would push south during the day, when solar energy is plentiful, and drift north when it needs to conserve power at night.
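
The 110-hour figure is easy to sanity-check from the quoted wind speed and the size of the planet. The short calculation below uses Venus's mean radius, a simplification that ignores the extra 50 kilometers of altitude.

import math

venus_radius_km = 6_052     # mean radius of Venus
wind_speed_m_s = 100        # equatorial wind speed quoted above

circumference_km = 2 * math.pi * venus_radius_km          # about 38,000 km
hours_to_circle = circumference_km * 1_000 / wind_speed_m_s / 3_600
print(f"{hours_to_circle:.0f} hours to ride the winds once around Venus")
# About 106 hours, consistent with the roughly 110-hour figure in the text.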

Meanwhile, the humans would be busy doing science from inside a small (21-cubic-meter) habitat, based on NASA’s existing Space Exploration Vehicle concept. There’s not much reason to perform extravehicular activities, so that won’t even be an option, potentially making things much simpler and safer (if a bit less exciting) than a trip to Mars.

The airship has a payload capacity of 70,000 kilograms. Of that, nearly 60,000 kg will be taken up by the ascent vehicle, a winged two-stage rocket slung below the airship. (If this looks familiar, it’s because it’s based on the much smaller Pegasus rocket, which is used to launch satellites into Earth orbit from beneath a carrier aircraft.) When it’s time to head home, the astronauts would get into a tiny capsule on the front of the rocket, drop from the airship, and then blast back into orbit. There, they’ll meet up with their transit vehicle and take it back to Earth orbit. The final stage is to rendezvous in Earth orbit with one final capsule (likely Orion), which the crew will use to make the return to Earth’s surface.

The HAVOC team believes that its concept offers a realistic target for crewed exploration in the near future, pending moderate technological advancements and support from NASA. Little about HAVOC is dependent on technology that isn’t near-term. The primary restriction that a crewed version of HAVOC would face is that in its current incarnation it depends on the massive Block IIB configuration of the Space Launch System, which may not be ready to fly until the late 2020s. Several proof-of-concept studies have already been completed. These include testing Teflon coating that can protect solar cells (and other materials) from the droplets of concentrated sulfuric acid that are found throughout Venus’s atmosphere and verifying that an airship with solar panels can be packed into an aeroshell and successfully inflated out of it, at least at 1/50 scale.

Many of the reasons that we’d want to go to Venus are identical to the reasons that we’d want to go to Mars, or anywhere else in the solar system, beginning with the desire to learn and explore. With the notable exception of the European Space Agency’s Venus Express orbiter, the second planet from the sun has been largely ignored since the 1980s, despite its proximity and potential for scientific discovery. HAVOC, Jones says, “would be characterizing the environment not only for eventual human missions but also to understand the planet and how it’s evolved and the runaway greenhouse effect and everything else that makes Venus so interesting.” If the airships bring small robotic landers with them, HAVOC would complete many if not most of the science objectives that NASA’s own Venus Exploration Analysis Group has been promoting for the past two decades.

“Venus has value as a destination in and of itself for exploration and colonization,” says Jones. “But it’s also complementary to current Mars plans.…There are things that you would need to do for a Mars mission, but we see a little easier path through Venus.” For example, in order to get to Mars, or anywhere else outside of the Earth-moon system, we’ll need experience with long-duration habitats, aerobraking and aerocapture, and carbon dioxide processing, among many other things. Arney continues: “If you did Venus first, you could get a leg up on advancing those technologies and those capabilities ahead of doing a human-scale Mars mission. It’s a chance to do a practice run, if you will, of going to Mars.”

It would take a substantial policy shift at NASA to put a crewed mission to Venus ahead of one to Mars, no matter how much sense it might make to take a serious look at HAVOC. But that in no way invalidates the overall concept for the mission, the importance of a crewed mission to Venus, or the vision of an eventual long-term human presence there in cities in the clouds. “If one does see humanity’s future as expanding beyond just Earth, in all likelihood, Venus is probably no worse than the second planet you might go to behind Mars,” says Arney. “Given that Venus’s upper atmosphere is a fairly hospitable destination, we think it can play a role in humanity’s future in space.”

Hanford Has a Radioactive Capsule Problem

Post Syndicated from Maria Gallucci original https://spectrum.ieee.org/aerospace/military/hanford-has-a-radioactive-capsule-problem

At the vast reservation known as the Hanford Site in south-central Washington state, much of the activity these days concerns its 212 million liters (56 million gallons) of radioactive sludge. From World War II through the Cold War, the site produced plutonium for more than 60,000 nuclear weapons, creating enough toxic by-products to fill 177 giant underground tanks. The U.S. Department of Energy (DOE), which controls Hanford, is pushing to start “vitrifying,” or glassifying, some of that waste within two years. The monumental undertaking is the nation’s—and possibly the world’s—largest environmental cleanup effort. It has been going on for decades and will take decades more to complete.

But the tanks are not the only outsize radioactive hazard at Hanford. The site also houses nearly 2,000 capsules of highly radioactive cesium and strontium. Each of the double-walled, stainless-steel capsules weighs 11 kilograms and is roughly the size of a rolled-up yoga mat. Together, they contain over a third of the total radioactivity at Hanford.

For decades, the capsules have resided in a two-story building called the Waste Encapsulation and Storage Facility (WESF). Inside, the capsules sit beneath 4 meters of cooling water in concrete cells lined with stainless steel. The water surrounding the capsules glows neon blue as the cesium and strontium decay, a phenomenon known as Cherenkov radiation.

Built in 1973, the facility is well beyond its 30-year design life. In 2013, nuclear specialists in neighboring Oregon warned that the concrete walls of the pools had lost structural integrity due to gamma radiation emitted by the capsules. Hanford is located just 56 kilometers (35 miles) from Oregon’s border and sits beside the Columbia River. After leaving the site, the river flows through Oregon farms and fisheries and eventually through Portland, the state’s biggest city.

In 2014, the DOE’s Office of the Inspector General concluded that the WESF poses the “greatest risk” for serious accident of any DOE facility that’s beyond its design life. In the event of a severe earthquake, for instance, the degraded basins would likely collapse, draining the cooling water. In a worst-case scenario, the capsules would then overheat and break, releasing radioactivity that would contaminate the ground and air, render parts of the Hanford Site inaccessible for years, and potentially reach nearby cities.

“If it’s bad enough, it means all cleanup essentially stops,” says Dirk Dunning, an engineer and retired Hanford expert who worked for the Oregon Department of Energy and who helped flag initial concerns about the concrete. “We can’t fix it, we can’t stop it. It just becomes a horrible, intractable problem.”

To avoid such a catastrophe, in 2015 the DOE began taking steps to transfer capsules out of the basins and into dry casks on an outdoor storage pad. The plan is to place six capsules inside a cylindrical metal sleeve; air inside the cylinder is displaced with helium to dissipate heat from the capsules. The sleeves are then fitted inside a series of shielded canisters, like a nuclear nesting doll. The final vessel is a 3.3-meter-tall cylindrical cask made of a special steel alloy and reinforced concrete. A passive cooling system draws cool air into the cask and expels warm air, without the need for fans or pools of water. The cask will sit vertically on the concrete pad. Eventually, there will be 16 to 20 casks. Similar systems are used to store spent nuclear fuel at commercial power plants, including the Columbia Generating Station at Hanford. The agency has until 31 August 2025 to complete the work, according to a legal agreement between the DOE, the state of Washington, and the U.S. Environmental Protection Agency.
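
To get a feel for the scale of the transfer campaign, the rough arithmetic below combines the numbers given in this article: six capsules per sleeve, 16 to 20 casks, and the 1,936-capsule total cited later. The actual loading scheme is not spelled out here, so this is illustrative only.

import math

capsules = 1_936              # total capsule count given later in the article
capsules_per_sleeve = 6
sleeves = math.ceil(capsules / capsules_per_sleeve)       # about 323 sleeves
print(f"{sleeves} six-capsule sleeves to fill")
for casks in (16, 20):
    print(f"{casks} casks implies roughly {sleeves / casks:.0f} sleeves "
          f"({capsules / casks:.0f} capsules) per cask")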

When the transfer is completed, DOE estimates the new facility will save more than US $6 million per year in operating costs. But it’s intended only as a temporary fix. After 50 years in dry storage—around 2075, in other words—the capsules’ contents could be vitrified as well, or else buried in an unspecified deep geologic repository.

Even that timeline may be too ambitious. At a congressional hearing in March, DOE officials said that treatment of the tank waste was the “highest priority” and sought to defer the capsule-transfer work and other cleanup efforts at Hanford. They also proposed slashing Hanford’s annual budget by $700 million in fiscal year 2021. The DOE Office of Environmental Management’s “strategic vision” for 2020–2030 [PDF] noted only that the agency “will continue to evaluate” the transfer of capsules currently stored at the WESF.

And the COVID-19 pandemic has further complicated the department’s plans. The DOE now says it “will be assessing potential impacts on all projects” resulting from reduced operations due to the pandemic. The department’s FY2021 budget proposal calls for “safely” deferring work on the WESF capsule transfers for one year, while supporting “continued maintenance, monitoring, and assessment activities at WESF,” according to a written response sent to IEEE Spectrum.

Unsurprisingly, community leaders and state policymakers oppose the potential slowdowns and budget cuts. They argue that Hanford’s cleanup—now over three decades in the making—cannot be delayed further. David Reeploeg of the Tri-City Development Council (TRIDEC) says the DOE’s strategic vision and proposed budget cuts add to the “collective frustration” at “this pattern of kicking the can down the road.” TRIDEC advocates for Hanford-related priorities in the adjacent communities of Richland, Kennewick, and Pasco, Wash. Reeploeg adds that congressional support over the years has been key to increasing Hanford cleanup funding beyond the DOE’s request levels.

How did Hanford end up with 1,936 capsules of radioactive waste?

The cesium and strontium inside the capsules were once part of the toxic mix stored in Hanford’s giant underground tanks. The heat given off by these elements as they decayed was causing the high-level radioactive waste to dangerously overheat to the point of boiling. And so from 1967 to 1985, technicians extracted the elements from the tanks and put them in capsules.

Initially, the DOE believed that such materials, especially cesium-137, could be put to useful work: in thermoelectric power supplies, to calibrate industrial instruments, or to extend the shelf life of pork, wheat, and spices (though consumers are generally wary of irradiated foods). The department leased hundreds of capsules to private companies around the United States.

One of those companies was Radiation Sterilizers, which used Hanford’s cesium capsules to sterilize medical supplies at its facilities in Decatur, Ga., and Westerville, Ohio. In 1988, a capsule in Decatur developed a pinhole leak, and 0.02 percent of its contents escaped—a mess that took the DOE four years and $47 million to clean up. Federal investigators concluded that moving the capsules in and out of water more than 7,000 times caused temperature changes that damaged the steel. Radiation Sterilizers had removed temperature-measuring systems in its facility, among other failures cited by the DOE. The company, though, blamed the government for shipping a damaged capsule. Whatever the cause, the DOE recalled all capsules and returned them to the WESF.

The WESF now contains 1,335 capsules of cesium, in the form of cesium chloride. Most of that waste consists of nonradioactive isotopes of cesium; of the radioactive isotopes, cesium-137 dominates, with lesser amounts of cesium-135. Another 601 capsules contain strontium, in the form of strontium fluoride, with the main radioactive isotope being strontium-90.

Cesium-137 and strontium-90 have half-lives of 30 years and 29 years, respectively—relatively short periods compared with the half-lives of other materials in the nation’s nuclear inventory, such as uranium and plutonium. However, the present radioactivity of the capsules “is so great” that it will take more than 800 years for the strontium capsules to decay enough to be classified as low-level waste, according to a 2003 report by the U.S. National Research Council. And while the radioactivity of the cesium-137 will diminish significantly after several hundred years, cesium-135 has a half-life of 2.3 million years, which means that the isotope will eventually become the dominant source of radioactivity in the cesium capsules, the report said.
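
The timescales above follow from the standard half-life relation, N(t) = N0 x 2^(-t / half-life). The short sketch below applies it to the quoted half-lives; note that the report's 800-year figure also depends on regulatory waste-classification thresholds, not just this simple exponential.

def remaining_fraction(years: float, half_life_years: float) -> float:
    """Fraction of the original radionuclide left after a given time."""
    return 2 ** (-years / half_life_years)

for isotope, half_life in [("strontium-90", 29), ("cesium-137", 30)]:
    frac = remaining_fraction(800, half_life)
    print(f"After 800 years, about {frac:.1e} of the original {isotope} remains")
# Around 5e-9 for strontium-90 and 1e-8 for cesium-137, whereas cesium-135,
# with its 2.3-million-year half-life, barely decays at all over that span.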

Workers at Hanford continue to monitor the condition of the capsules by periodically shaking the containers using a long metal gripping tool. If they hear a “clunk,” it means the inner stainless-steel pipe is moving freely and is thus considered to be in good condition. Some capsules, though, fail the clunk test, which indicates the inner pipe is damaged, rusty, or swollen, and thus can’t move. About two dozen of the failed capsules have been “overpacked”—that is, sealed in a larger stainless-steel container and held separately.

Moving the capsules from wet storage to dry is only temporary

The DOE has made substantial progress on the capsule-transfer work in recent years. In August 2019, CH2M Hill Plateau Remediation Company, one of the main environmental cleanup contractors at Hanford, completed designs to modify the WESF for removal of the capsules. In the weeks before COVID-19 temporarily shut down the site in late March, crews had started fabricating equipment to load capsules into sleeves, transfer them into casks, and move them outside. A team cleaned and painted part of the WESF to make way for the loading crane. At the nearby Maintenance and Storage Facility, workers were building a mock-up system to allow people to train and test equipment.

During the lockdown, employees working remotely continued with technical and design reviews and nuclear-safety assessments. With Hanford now in a phased reopening, CH2M Hill workers recently broke ground on the site of the future dry cask storage pad and have resumed construction at the mock-up facility. Last October, the DOE awarded Intermech, a construction firm owned by Emcor, a nearly $5.6 million contract to build a reinforced-concrete pad surrounded by two chain-link fences, along with utility infrastructure and a heavy-duty road connecting the WESF to the pad.

However, plans for fiscal year 2021, which starts in October, are less certain. In its budget request to Congress in February, the DOE proposed shrinking Hanford’s annual cleanup budget from $2.5 billion to about $1.8 billion. Officials sought no funding for WESF modification and storage work, eliminating $11 million from the current budget. Meanwhile, the agency sought to boost funding for tank-waste vitrification from $15 million to $50 million. Under its legal agreements, the DOE is required to start glassifying Hanford’s low-activity waste by 2023.

Reeploeg of the Tri-City Development Council says the budget cuts, if approved, would make it harder for the capsule-transfer project to stay on track.

Along with vitrification, he told Spectrum, “we think WESF is a top priority, too. Considering that the potential consequences of an event there are so significant, we want those capsules out of the pool and into dry-cask storage as quickly as possible.”

Reeploeg said the failure of another aging Hanford facility should have been a wake-up call. In 2017, a tunnel that runs into the Plutonium Uranium Extraction Plant partially collapsed, exposing highly radioactive materials. Officials had been aware of the tunnel’s structural problems since the 1970s. Ultimately, no airborne radiation leaks were detected, and no workers were hurt. But in a February 2020 report, the Government Accountability Office said the DOE hadn’t done enough to prevent such an event.

Hanford experts at Washington state’s Department of Ecology said a short-term delay on the WESF mission won’t significantly increase the threat to the environment or workers.

“We don’t believe that there’s an immediate health risk from a slowdown of work,” says Alex Smith, the department’s nuclear waste program manager. So long as conditions are properly maintained in the pool cells, the capsules shouldn’t see any noticeable aging or decay in the near-term, she says, but it still makes sense to transfer the capsules to reduce the risk of a worst-case disaster.

In an email to Spectrum, the DOE noted that routine daily inspections of the WESF pool walls haven’t revealed any visible degradation or spalling—flaking that occurs due to moisture in the concrete.

Still, for Hanford watchdogs, the possibility of any new delays compounds the seemingly endless nature of the environmental cleanup mission. Ever since Hanford shuttered its last nuclear reactor in 1987, efforts to extract, treat, contain, and demolish radioactive waste and buildings have proceeded in fits and starts, marked by a few successes—such as the recent removal of 27 cubic meters of radioactive sludge near the Columbia River—but also budgeting issues, technical hurdles, and the occasional accident.

“There are all these competing [cleanup projects], but the clock is running on all of them,” says Dunning, the Oregon nuclear expert. “And you don’t know when it’s going to run out.”

Japan on Track to Introduce Flying Taxi Services in 2023

Post Syndicated from John Boyd original https://spectrum.ieee.org/cars-that-think/aerospace/aviation/japan-on-track-to-introduce-flying-taxi-services-in-2023

Last year, Spectrum reported on Japan’s public-private initiative to create a new industry around electric vertical takeoff and landing vehicles (eVTOLs) and flying cars. Last Friday, start-up company SkyDrive Inc. demonstrated the progress made since then when it held a press conference to spotlight its prototype vehicle and show reporters a video taken three days earlier of the craft undergoing a piloted test flight in front of staff and investors.

The sleek, single-seat eVTOL, dubbed SD-03 (SkyDrive third generation), resembles a hydroplane on skis and weighs in at 400 kilograms. The body is made of carbon fiber, aluminum, and other materials that have been chosen for their weight, balance, and durability. The craft measures 4 meters in length and width, and is about 2 meters tall. During operation, the nose of the craft is lit with white LED lights; red lights run around the bottom to enable the vehicle to be seen in the sky and to distinguish the direction the craft is flying. 

The SD-03 uses four pairs of electrically driven coaxial rotors, with one pair mounted at each quadrant. These enable a flight time of 5 to 10 minutes at speeds up to 50 kilometers per hour. “The propellers on each pair counter-rotate,” explains Nobuo Kishi, SkyDrive’s chief technology officer. “This cancels out propeller torque.” It also makes for a compact design, “so all the craft needs to land is the space of two parked cars,” he adds.

But when it came to providing more details of the drive system, Kishi declined, saying it’s a trade secret that’s a source of competitive advantage. The same goes for the craft’s energy storage system: Other than disclosing the fact that the flying taxi currently uses a lithium polymer battery, he’s also keeping details about the powertrain confidential.

Underlying this need for secrecy are the technology’s restricted capabilities. “Total energy that can be stored in a battery is a major limiting factor here,” says Steve Wright, Senior Research Fellow in Avionics and Aircraft Systems at the University of the West of England. “Which is why virtually every one of these projects is aiming at the air-taxi market within megacities.”

A SkyDrive video shows the SD-03 taking off vertically and then maneuvering as it hovers up to two meters off the ground within a netted enclosure. The craft is shown moving about at walking speed for roughly 4 minutes before landing on a designated spot. For monitoring purposes and as a backup, engineers used an additional computer-assisted control system to ensure the craft’s stability and safety.

Speaking at the press conference, Tomohiro Fukuzawa, SkyDrive’s CEO, estimated there are currently as many as 100 flying car projects underway around the world, “but only a few have succeeded with someone on board,” he said.

He went on to note that Japan lags behind other countries in the aviation industry but excels in manufacturing cars. Given the similarities between cars—especially electric cars—and eVTOLs, he believes Japan can compete with companies in the United States, Europe, and China that are also developing eVTOLs.

SkyDrive’s advances have encouraged new venture capital investors to come on board and nearly triple investment to a total of 5.9 billion yen ($56 million). Original investors include large corporations that saw an opportunity to get in on the ground floor of a promising new industry backed by government. One investor, NEC, is aiming to create more options for its air-traffic management systems, while Japan’s largest oil company, Eneos, is interested in developing electric charging stations for all kinds of electric vehicles.

In May, SkyDrive unveiled a drone for commercial use that is based on the same drive and power systems as the SD-03. Named the Cargo Drone, it’s able to transport payloads of up to 30 kg and can be preprogrammed to fly autonomously or be piloted manually. It will be operated as a service by SkyDrive, starting at a minimum monthly rental charge of 380,000 yen ($3,600) that rises according to the purpose and frequency of use. 

Kishi says the drone is designed to work within a 3 km range in locations that are difficult or time-consuming to get to by road. For instance, Obayashi Corp., one of Japan’s big five construction companies and an investor in SkyDrive, has been testing the Cargo Drone to autonomously deliver materials like sandbags and timber to a remote, hard-to-reach location.

Fukuzawa established SkyDrive in 2018 after leaving Toyota Motor and working with Cartivator, a group of volunteer engineers interested in developing flying cars. SkyDrive now has a staff of fifty.

Also in 2018, the Japanese government formed the Public-Private Conference for Air Mobility made up of private companies, universities, and government ministries. The stated aim was to make flying vehicles a reality by 2023. Tomohiko Kojima of Japan’s Civil Aviation Bureau told Spectrum that since the Conference’s formation, the Ministry of Land, Infrastructure, Transport and Tourism has held a number of meetings with members to discuss matters like airspace for eVTOL use, flight rules, and permitted altitudes. “And last month, the Ministry established a working-level group to discuss certification standards for eVTOLs, a standard for pilots, and operational safety standards,” Kojima added.

Fukuzawa is also targeting 2023 to begin taxi services (single passenger and pilot) in the Osaka Bay area, flying between locations like Kansai and Kobe airports and tourist attractions such as Universal Studios Japan. These flights will take less than ten minutes—a practical nod to the limitations of the battery energy storage system.

“What SkyDrive is proposing is entirely do-able,” says Wright. “Almost all rotor-only eVTOL projects are limited to sub-30-minute endurance, which, with safety reserves, equates to about 10 to 20 minutes of flying.”
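
Endurance estimates like Wright's come from a simple energy budget: usable battery energy divided by average power draw, minus a safety reserve. The sketch below illustrates that arithmetic with placeholder numbers; none of them are SkyDrive specifications, which the company has not disclosed.

# All figures below are hypothetical placeholders, not SkyDrive data.
battery_energy_wh = 15_000    # assumed onboard battery energy
usable_fraction = 0.8         # assumed usable depth of discharge
reserve_fraction = 0.3        # assumed energy held back as a safety reserve
hover_power_w = 80_000        # assumed average power draw in hover/slow flight

usable_wh = battery_energy_wh * usable_fraction * (1 - reserve_fraction)
endurance_min = usable_wh / hover_power_w * 60
print(f"Estimated endurance with reserves: {endurance_min:.0f} minutes")
# With these placeholder numbers, roughly 6 minutes: the same order of magnitude
# as the 5-to-10-minute flight time quoted for the SD-03.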

NASA’s Mars Rover Required a Special Touch for Its Robotic Arms

Post Syndicated from ATI Industrial Automation original https://spectrum.ieee.org/aerospace/robotic-exploration/nasa_mars_rover_required_a_special_touch_for_its_robotic_arms

In July, NASA launched the most sophisticated rover the agency has ever built: Perseverance (https://mars.nasa.gov/mars2020/). Scheduled to land on Mars in February 2021, Perseverance will be able to perform unique research into the history of microbial life on Mars in large part due to its robotic arms. To achieve this robotic capability, NASA needed to call upon innovation-driven contractors to make such an engineering feat a reality.

One of the companies that NASA enlisted to help develop Perseverance was ATI Industrial Automation (https://www.ati-ia.com/). NASA looked to ATI to adapt the company’s Force/Torque Sensor so that the robotic arm of Perseverance could operate in the environment of space. ATI Force/Torque sensors were initially developed to enable robots and automation systems to sense the forces applied while interacting with their environment in operating rooms or on factory floors.

However, the environment of space presented unique engineering challenges for ATI’s Force/Torque Sensor. The extreme environment and the need for redundancy to ensure that any single failure wouldn’t compromise the sensor function were the key challenges the ATI engineers faced, according to Ian Stern, Force/Torque Sensor Product Manager at ATI (https://www.linkedin.com/in/ianhstern/).

“ATI’s biggest technical challenge was developing the process and equipment needed to perform the testing at the environmental limits,” said Stern. “The challenges start when you consider the loads that the sensor sees during the launch of the Atlas 5 rocket from Earth. The large G forces cause the tooling on the end of the sensor to generate some of the highest loads that the sensor sees over its life.”

Once on Mars, the sensor must be able to accurately and reliably measure forces and torques at temperatures ranging from −110 °C to +70 °C. This presents several challenges because of how acutely temperature influences the accuracy of force-measurement devices. To meet these demands, ATI developed the capability to calibrate the sensors at −110 °C. “This required a lot of specialized equipment for achieving these temperatures while making it safe for our engineers to perform the calibration process,” added Stern.

In addition to the harsh environment, redundancy strategies are critical for a sensor technology on a space mission. While downtime on the factory floor can be costly, a component failure on Mars can render the entire mission worthless since there are no opportunities for repairs.

This need for a completely reliable product meant that ATI engineers had to develop their sensor so that it was capable of detecting failures in its measurements, as well as accurately measuring forces and torques should there be multiple failures on the measurement signals. ATI developed a patented process for achieving this mechanical and electrical redundancy.
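
Neither ATI's patented redundancy scheme nor its signal processing is public, but the general idea of catching a failed channel by cross-checking redundant measurements can be sketched simply. The Python below is a generic voting scheme, not ATI's method, and the tolerance value is arbitrary.

from statistics import median

def fuse_redundant_channels(readings_n, tolerance_n=2.0):
    """Return (fused_value, suspect_channel_indices) from redundant force readings."""
    mid = median(readings_n)
    suspects = [i for i, r in enumerate(readings_n) if abs(r - mid) > tolerance_n]
    healthy = [r for i, r in enumerate(readings_n) if i not in suspects]
    return sum(healthy) / len(healthy), suspects

# Example: channel 2 has drifted badly and is excluded from the fused measurement.
value, faults = fuse_redundant_channels([101.2, 100.8, 57.3, 101.0])
print(f"fused force = {value:.1f} N, suspect channels = {faults}")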

All of this effort to engineer a sensor for NASA’s Mars mission may enable a whole new generation of space exploration, but it’s also paying immediate dividends for ATI’s more terrestrial efforts in robotic sensors.

“The development of a sensor for the temperatures on Mars has helped us to develop and refine our process of temperature compensation,” said Stern. “This has benefits on the factory floor in compensating for thermal effects from tooling or the environment.”

As an example of these new temperature-compensation strategies, Stern points to a solution developed to address the heat produced by a motor mounted to a tool changer. This heat flux can cause undesirable output in the force/torque data, according to Stern.

“As a result of the Mars Rover project we now have several different processes to apply on our standard industrial sensors to mitigate the effects of temperature change,” said Stern.
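
As a generic illustration of first-order temperature compensation (not ATI's actual process), the sketch below subtracts an assumed thermal zero-drift from a raw force reading. The coefficient and reference temperature are made-up placeholders.

TEMP_COEFF_N_PER_C = 0.15   # hypothetical zero-drift, in newtons per degree Celsius
T_REF_C = 20.0              # temperature at which the sensor was calibrated

def compensate_force(raw_force_n: float, sensor_temp_c: float) -> float:
    """Remove the estimated thermal zero-drift from a raw force reading."""
    return raw_force_n - TEMP_COEFF_N_PER_C * (sensor_temp_c - T_REF_C)

# A tool-changer motor warming the sensor from 20 C to 60 C would otherwise
# masquerade as a force of roughly 6 N with this coefficient.
print(compensate_force(raw_force_n=6.0, sensor_temp_c=60.0))   # about 0.0 N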

The redundancy requirements translated into a prototype of a Standalone Safety Rated Force/Torque sensor capable of meeting Performance Level d (PL-d) safety requirements.

This type of sensor can actively check its health and provide extremely high-resolution data, allowing a large robot with a 500-kilogram payload that handles automotive body parts to safely detect whether a human finger has been pinched.

ATI is also leveraging the work it did for Perseverance to inform some of its ongoing space projects. One such project is a NASA tech demo targeting a moon rover for 2023; others include future Mars rovers and a potential mission to Europa that would use the sensors for drilling into ice.

Stern added: “The fundamental capability that we developed for the Perseverance Rover is scalable to different environments and different payloads for nearly any space application.”

With Ultralight Lithium-Sulfur Batteries, Electric Airplanes Could Finally Take Off

Post Syndicated from Mark Crittenden original https://spectrum.ieee.org/aerospace/aviation/with-ultralight-lithiumsulfur-batteries-electric-airplanes-could-finally-take-off

Electric aircraft are all the rage, with prototypes in development in every size from delivery drones to passenger aircraft. But the technology has yet to take off, and for one reason: lack of a suitable battery.

For a large passenger aircraft to take off, cruise, and land hundreds of kilometers away would take batteries that weigh thousands of kilograms—far too heavy for the plane to be able to get into the air in the first place. Even for relatively small aircraft, such as two-seat trainers, the sheer weight of batteries limits the plane’s payload, curtails its range, and thus constrains where the aircraft can fly. Reducing battery weight would be an advantage not only for aviation, but for other electric vehicles, such as cars, trucks, buses, and boats, all of whose performance is also directly tied to the energy-to-weight ratio of their batteries.

For such applications, today’s battery of choice is lithium ion. It reached maturity years ago, with each new incremental improvement smaller than the last. We need a new chemistry.

Since 2004 my company, Oxis Energy, in Oxfordshire, England, has been working on one of the leading contenders—lithium sulfur. Our battery technology is extremely lightweight: Our most recent models are achieving more than twice the energy density typical of lithium-ion batteries. Lithium sulfur is also capable of providing the required levels of power and durability needed for aviation, and, most important, it is safe enough. After all, a plane can’t handle a sudden fire or some other calamity by simply pulling to the side of the road.

The new technology has been a long time coming, but the wait is now over. The first set of flight trials have already been completed.

Fundamentally, a lithium-sulfur cell is composed of four components:

  • The positive electrode, known as the cathode, absorbs electrons during discharge. It is connected to an aluminum-foil current collector coated with a mixture of carbon and sulfur. Sulfur is the active material that takes part in the electrochemical reactions. But it is an electrical insulator, so carbon, a conductor, delivers electrons to where they are needed. There is also a small amount of binder added to ensure the carbon and sulfur hold together in the cathode.
  • The negative electrode, or anode, releases electrons during discharge. It is connected to pure lithium foil. The lithium, too, acts as a current collector, but it is also an active material, taking part in the electrochemical reaction.
  • A porous separator prevents the two electrodes from touching and causing a short circuit. The separator is bathed in an electrolyte containing lithium salts.
  • An electrolyte facilitates the electrochemical reaction by allowing the movement of ions between the two electrodes.

These components are connected and packaged in foil as a pouch cell rated at 20 ampere-hours and 2.15 volts. The cells are in turn connected together—both in series and in parallel—and packaged into battery packs. For a large vehicle such as an airplane, scores of packs are connected to create a battery capable of providing tens or hundreds of amp-hours at several hundred volts.
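
As a rough illustration of how such cells might be combined, the snippet below sizes a hypothetical series/parallel arrangement from the 2.15-volt, 20-ampere-hour cell figures above. The voltage and capacity targets are arbitrary examples, not an Oxis design.

import math

cell_voltage = 2.15         # volts per lithium-sulfur cell
cell_capacity_ah = 20       # amp-hours per cell
target_voltage = 600        # "several hundred volts" (hypothetical target)
target_capacity_ah = 120    # "tens or hundreds of amp-hours" (hypothetical target)

cells_in_series = math.ceil(target_voltage / cell_voltage)           # 280 cells
parallel_strings = math.ceil(target_capacity_ah / cell_capacity_ah)  # 6 strings
total_cells = cells_in_series * parallel_strings
print(f"{cells_in_series}s{parallel_strings}p layout: {total_cells} cells, "
      f"{cells_in_series * cell_voltage:.0f} V, "
      f"{parallel_strings * cell_capacity_ah} Ah")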

Lithium-sulfur batteries are unusual because they go through multiple stages as they discharge, each time forming a different, distinct molecular species of lithium and sulfur. When a cell discharges, lithium ions in the electrolyte migrate to the cathode, where they combine with sulfur and electrons to form a polysulfide, Li2S8. At the anode, meanwhile, lithium atoms give up electrons to form positively charged lithium ions; these freed electrons then move through the external circuit—the load—which takes them back to the cathode. In the electrolyte, the newly produced Li2S8 immediately reacts with more lithium ions and more electrons to form a new polysulfide, Li2S6. The process continues, stepping through further polysulfides, Li2S4 and Li2S2, to eventually become Li2S. At each step more energy is given up and passed to the load until at last the cell is depleted of energy.

Recharging reverses the sequence: An applied current forces electrons to flow in the opposite direction, causing the sulfur electrode, or cathode, to give up electrons, converting Li2S to Li2S2. The polysulfide continues to add sulfur atoms step-by-step until Li2S8 is created in the cathode. And each time electrons are given up, lithium ions are produced that then diffuse through the electrolyte, combining with electrons at the lithium electrode to form lithium metal. When all the Li2S has been converted to Li2S8, the cell is fully charged.

This description is simplified. In reality, the reactions are more complex and numerous, taking place also in the electrolyte and at the anode. In fact, over many charge and discharge cycles, it is these side reactions that cause degradation in a lithium-sulfur cell. Minimizing these, through the selection of the appropriate materials and cell configuration, is the fundamental, underlying challenge that must be met to produce an efficient cell with a long lifetime.

One great challenge for both lithium-ion and lithium-sulfur technologies has been the tendency for repeated charging and discharging cycles to degrade the anode. In the case of lithium ion, ions arriving at that electrode normally fit themselves into interstices in the metal, a process called intercalation. But sometimes ions plate the surface, forming a nucleus on which further plating can accumulate. Over many cycles a filament, or dendrite, may grow until it reaches the opposing electrode and short-circuits the cell, causing a surge of energy in the form of heat that irreparably damages the cell. If one cell breaks down like this, it can trigger a neighboring cell to do the same, beginning a domino effect known as a thermal runaway reaction—in common parlance, a fire.

With lithium-sulfur cells, degradation of the lithium-metal anode is also a problem. However, this occurs via a very different mechanism, one that does not involve the formation of dendrites. In lithium-sulfur cells, uneven current densities on the anode surface cause lithium to be plated and stripped unevenly as the battery is charged and discharged. Over time, this uneven plating and stripping causes mosslike deposits on the anode that react with the sulfide and polysulfides in the electrolyte. These mosslike deposits become electrically disconnected from the bulk anode, leaving less of the anode surface available for chemical reaction. Eventually, as this degradation progresses, the anode fails to operate, preventing the cell from accepting charge.

Developing solutions to this degradation problem is crucial to producing a cell that can perform at a high level over many charge-discharge cycles. A promising strategy we’ve been pursuing at Oxis involves coating the lithium-metal anode with thin layers of ceramic materials to prevent degradation. Such ceramic materials need to have high ionic conductivity and be electrically insulating, as well as mechanically and chemically robust. The ceramic layers allow lithium ions to pass through unimpeded and be incorporated into the bulk lithium metal beneath.

We are doing this work on the protection layer for the anode in partnership with Pulsedeon and Leitat, and we’re optimistic that it will dramatically increase the number of times a cell can be discharged and charged. And it’s not our only partnership. We’re also working with Arkema to improve the cathode in order to increase the power and energy density of the battery.

Indeed, the key advantage of lithium-ion batteries over their predecessors—and of lithium sulfur over lithium ion—is the great amount of energy the cells can pack into a small amount of mass. The lead-acid starter battery that cranks the internal combustion engine in a car can store about 50 watt-hours per kilogram. Typical lithium-ion designs can hold from 100 to 265 Wh/kg, depending on the other performance characteristics for which it has been optimized, such as peak power or long life. Oxis recently developed a prototype lithium-sulfur pouch cell that proved capable of 470 Wh/kg, and we expect to reach 500 Wh/kg within a year. And because the technology is still new and has room for improvement, it’s not unreasonable to anticipate 600 Wh/kg by 2025.

When cell manufacturers quote energy-density figures, they usually specify the energy that’s available when the cell is being discharged at constant, low power rates. In some applications such low rates are fine, but for the many envisioned electric aircraft that will take off vertically, the energy must be delivered at higher power rates. Such a high-power feature must be traded off for lower total energy-storage capacity.

Furthermore, the level of energy density achievable in a single cell might be considerably greater than what’s possible in a battery consisting of many such cells. The energy density doesn’t translate directly from the cell to the battery because cells require packaging—the case, the battery management system, the connections, and perhaps cooling systems. The weight must be kept in check, and for this reason our company is using advanced composite materials to develop light, strong, flameproof enclosures.

If the packaging is done right, the energy density of the battery can reach 80 percent of that of the cells: A cell rated at 450 Wh/kg can be packaged at more than 360 Wh/kg in the final battery. We expect to do better by integrating the battery into the aircraft, for instance, by making the wing space do double duty as the battery housing. We expect that doing so will get the figure up to 90 percent.
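
The packaging arithmetic is simple enough to spell out; the snippet below just applies the 80 and 90 percent figures to a 450-watt-hour-per-kilogram cell.

cell_wh_per_kg = 450

for label, pack_fraction in [("conventional packaging", 0.80),
                             ("battery integrated into the airframe", 0.90)]:
    print(f"{label}: {cell_wh_per_kg * pack_fraction:.0f} Wh/kg at the battery level")
# 360 Wh/kg and 405 Wh/kg, respectively.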

To optimize battery performance without compromising safety we rely, first and foremost, on a battery management system (BMS), which is a combination of software and hardware that controls and protects the battery. It also includes algorithms for measuring the energy remaining in a battery and others for minimizing the energy wasted during charging.

Like lithium-ion cells, lithium-sulfur cells vary slightly from one another. These differences, as well as differences in the cells’ position in the battery pack, may cause some cells to consistently run hotter than others. Over time, those high temperatures slowly degrade performance, so it is important to minimize the power differences from cell to cell. This is usually achieved using a simple balancing solution, in which several resistors are connected in parallel with a cell, all controlled by software in the BMS.
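
A minimal sketch of that passive balancing logic appears below: the BMS switches in a bleed resistor across any cell sitting more than a small threshold above the weakest cell. The threshold here is illustrative, not an Oxis parameter.

BALANCE_THRESHOLD_V = 0.01   # start bleeding when a cell is 10 mV above the minimum

def select_cells_to_bleed(cell_voltages):
    """Return indices of cells whose balancing resistors the BMS should switch in."""
    v_min = min(cell_voltages)
    return [i for i, v in enumerate(cell_voltages)
            if v - v_min > BALANCE_THRESHOLD_V]

# Example: cells 1 and 3 are slightly high, so their bleed resistors are enabled.
print(select_cells_to_bleed([2.150, 2.168, 2.151, 2.172]))   # prints [1, 3]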

Even when charging and discharging rates are kept within safe limits, any battery may still generate excessive heat. So, typically, a dedicated thermal-management system is necessary. An electric car can use liquid cooling, but in aviation, air cooling is much preferred because it adds less weight. Of course, the battery can be placed at a point where air is naturally moving across the surface of the airplane—perhaps the wing. If necessary, air can be shunted to the battery through ducts. At Oxis, we’re using computational modeling to optimize such cooling. For instance, when we introduced this technique in a project for a small fixed-wing aircraft, it allowed us to design an effective thermal-management system, without which the battery would reach its temperature limits before it was fully discharged.

As noted above, a battery pack is typically arranged with the cells both in parallel and in series. However, there’s more to the arrangement of cells. Of course, the battery is a mission-critical component of an e-plane, so you’ll want redundancy, for enhanced safety. You could, for instance, design the battery in two equal parts, so that if one half fails it can be disconnected, leaving the aircraft with at least enough energy to manage a controlled descent and landing.

Another software component within the BMS is the state-of-charge algorithm. Imagine having to drive a car whose fuel gauge had a measurement error equivalent to 25 percent of the tank’s capacity. You’d never let the indicator drop to 25 percent, just to make sure that the car wouldn’t sputter to a halt. Your practical range would be only three-quarters of the car’s actual range. To avoid such waste, Oxis has put a great emphasis on the development of state-of-charge algorithms.

In a lithium-ion battery you can estimate the charge by simply measuring the voltage, which falls as the energy level does. But it’s not so simple for a lithium-sulfur battery. Recall that in the lithium-sulfur battery, different polysulfides figure in the electrochemical process at different times during charge and discharge. The upshot is that voltage is not a good proxy for the state of charge and, to make things even more complicated, the voltage curve is asymmetrical for charge and for discharge. So the algorithms needed to keep track of the state of charge are much more sophisticated. We developed ours with Cranfield University, in England, using statistical techniques, among them the Kalman filter, as well as neural networks. We can estimate state of charge to an accuracy of a few percent, and we are working to do better still.
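
To make the idea concrete, here is a toy scalar Kalman filter that fuses coulomb counting (the prediction step) with a model-based state-of-charge observation (the update step). It is only a sketch of the general technique, not Oxis's algorithm, and the noise parameters are invented.

class SocKalmanFilter:
    """Toy 1-D Kalman filter for state of charge (SoC), for illustration only."""

    def __init__(self, soc0, capacity_ah, process_var=1e-6, meas_var=4e-2):
        self.soc = soc0                       # state of charge, 0..1
        self.p = 1e-2                         # variance of the SoC estimate
        self.capacity_as = capacity_ah * 3600.0
        self.q = process_var                  # noise added per prediction step
        self.r = meas_var                     # variance of the SoC observation

    def predict(self, current_a, dt_s):
        """Coulomb counting: positive current means discharge."""
        self.soc -= current_a * dt_s / self.capacity_as
        self.p += self.q

    def update(self, soc_observed):
        """Correct the estimate with a (noisy) model-based SoC observation."""
        k = self.p / (self.p + self.r)        # Kalman gain
        self.soc += k * (soc_observed - self.soc)
        self.p *= (1.0 - k)

kf = SocKalmanFilter(soc0=0.95, capacity_ah=20)
kf.predict(current_a=10.0, dt_s=60.0)         # one minute at a 10-ampere discharge
kf.update(soc_observed=0.92)                  # e.g. from an electrochemical model
print(f"estimated state of charge: {kf.soc:.3f}")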

All these design choices involve trade-offs, which are different for different airplanes. We vary how we manage these trade-offs in order to tailor our battery designs for three distinct types of aircraft.

  • High-altitude pseudo satellites (HAPS) are aircraft that fly at around 15,000 to 20,000 meters. The hope is to be able to fly for months at a time; the current record is 26 days, set in 2018 by the Airbus Zephyr S. By day, these aircraft use solar panels to power the motors and charge the batteries; by night, they fly on battery power. Because the 24-hour charge-and-discharge period demands only a little power, you can design a light battery and thus allow for a large payload. The lightness also makes it easier for such an aircraft to fly far from the equator, where the night lasts longer.
  • Electric vertical take-off and landing (eVTOL) aircraft are being developed as flying taxis. Lilium, in Germany, and Uber Elevate, among others, already have such projects under way. Again, weight is critical, but here the batteries need not only be light but must also be powerful. Oxis has therefore developed two versions of its cell chemistry. The high-energy version is optimized in many aspects of the cell design to minimize weight, but it is limited to relatively low power; it is best suited to HAPS applications. The high-power version weighs more, although still significantly less than a lithium-ion battery of comparable performance; it is well suited for such applications as eVTOL.
  • Light fixed-wing aircraft: The increasing demand for pilots is coming up against the high cost of training them; an all-electric trainer aircraft would dramatically reduce the operation costs. A key factor is longer flight duration, which is enabled by the lighter battery. Bye Aerospace, in Colorado, is one company leading the way in such aircraft. Furthermore, other companies—such as EasyJet, partnered with Wright Electric—are planning all-electric commercial passenger jets for short-haul, 2-hour flights.

Three factors will determine whether lithium-sulfur batteries ultimately succeed or fail. First is the successful integration of the batteries into multiple aircraft types, to prove the principle. Second is the continued refinement of the cell chemistry. Third is the continued reduction in the unit cost. A plus here is that sulfur is about as cheap as materials get, so there’s reason to hope that with volume manufacturing, the unit cost will fall below that of the lithium-ion design, as would be required for commercial success.

Oxis has already produced tens of thousands of cells and is now scaling up two new projects. It is establishing a manufacturing plant for the production of both the electrolyte and the cathode active material in Port Talbot, Wales. Later, the actual mass production of lithium-sulfur cells will begin on a site that belongs to Mercedes-Benz Brazil, in Minas Gerais, Brazil.

This state-of-the-art plant should be commissioned and operating by 2023. If the economies of scale prove out, and if the demand for electric aircraft rises as we expect, then lithium-sulfur batteries could begin to supplant lithium-ion batteries in this field. And what works in the air ought to work on the ground, as well.

This article appears in the August 2020 print issue as “Ultralight Batteries for Electric Airplanes.”

About the Author

Mark Crittenden is head of battery development and integration at Oxis Energy, in Oxfordshire, U.K.

Amazon’s Project Kuiper is More Than the Company’s Response to SpaceX

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/aerospace/satellites/amazons-project-kuiper-is-more-than-the-companys-response-to-spacex

Amazon cleared an important hurdle when the U.S. Federal Communications Commission (FCC) announced on 30 July that the company was authorized to deploy and operate its Kuiper satellite constellation. The authorization came with the caveat that Amazon would still have to demonstrate that Kuiper would not interfere with previously authorized satellite projects, such as SpaceX’s Starlink.

Even with the FCC’s caveat, it’s tempting to imagine that the idea of putting a mega-constellation of thousands of satellites in low-Earth orbit to provide uninterrupted broadband access anywhere on Earth will become a battle between Jeff Bezos’ Kuiper and Elon Musk’s Starlink. After all, even in space, how much room can there be for two mega-constellations, let alone additional efforts like that of the recently beleaguered OneWeb? But some experts suggest that Amazon’s real play will come from its ability to vertically integrate Kuiper into the rest of the Amazon ecosystem—an ability SpaceX cannot match with Starlink.

“With Amazon, it’s a whole different ballgame,” says Zac Manchester, an assistant professor of aeronautics and astronautics at Stanford University. “The thing that makes Amazon different from SpaceX and OneWeb is they have so much other stuff going for them.” If Kuiper succeeds, Amazon can not only offer global satellite broadband access—it can include that access as part of its Amazon Web Services (AWS), which already offers resources for cloud computing, machine learning, data analytics, and more.

First, some quick background on what Amazon plans with Kuiper itself. The FCC approved the launch of 3,236 satellites. Not all of those thousands of satellites have to be launched immediately, however. Amazon is now obligated to launch at least half of the total by 2026 to retain the operating license the FCC has granted the company.

Amazon has said it will invest US $10 billion to build out the constellation. The satellites themselves will circle the Earth in what’s referred to as “low Earth orbit,” or LEO, which is any orbital height below 2000 kilometers. The satellites will operate in the Ka band (26.5 to 40 gigahertz).

A common talking point for companies building satellite broadband systems is that the constellations will be able to provide ubiquitous broadband access. In reality, except for users in remote or rural locations, terrestrial fiber or cellular networks almost always win out. In other words, no one in a city or suburb should be clamoring for satellite broadband.

“If they think they’re competing against terrestrial providers, they’re deluded,” says Tim Farrar, a satellite communications consultant, adding that satellite broadband is for last-resort customers who don’t have any other choice for connectivity.

However, these last-resort customers also include industries that can weather the cost of expensive satellite broadband, such as defense, oil and gas, and aviation. There’s far more money to be made in catering to those industries than in building several thousand satellites just to connect individual rural broadband subscribers.

But what these far-flung industries also increasingly have in common, alongside industries like Earth-imaging and weather-monitoring that also depend on satellite connectivity, is data. Specifically, the need to move, store, and crunch large quantities of data. And that’s something Amazon already offers.

“You could see Project Kuiper being a middleman for getting data into AWS,” says Manchester. “SpaceX owns the space segment, they can get data from point A to point B through space. Amazon can get your data through the network and into their cloud and out to end users.” There are plenty of tech start-ups and other companies that already do machine learning and other data-intensive operations in AWS and could make use of Kuiper to move their data. (Amazon declined to comment on the record about their future plans for Kuiper for this story).

Amazon has also built AWS ground stations that connect satellites directly with the rest of the company’s web service infrastructure. Building and launching satellites is certainly expensive, but the ground stations to connect those satellites are also a not-insignificant cost. Because Amazon already offers access to these ground stations on a per-minute basis, Manchester thinks it’s not unreasonable for the company to expand that offering to Kuiper’s connectivity.

There’s also Blue Origin to consider. The rocket company owned by Bezos does not currently have a heavy-lift rocket that could bring Kuiper satellites to LEO, but that could change. The company has at least one such rocket—the New Glenn—in development. Farrar notes that Amazon could spend the next few years in satellite development before it needs to begin launching satellites in earnest, by which point Blue Origin could have a heavy-lift option available.

Farrar says that with an investment of $10 billion, Amazon will need to bring in millions of subscribers to consider the project a financial success. But Amazon can also play a longer game than, say, SpaceX. Whereas the latter is going to be dependent entirely on subscriptions to generate revenue for Starlink, Amazon’s wider business platform means Kuiper is not dependent solely on its own ability to attract users. Plus, Amazon has the resources to make a long-term investment in Kuiper before turning a profit, in a way Starlink cannot.

“They own all these things the other guys don’t,” says Manchester. “In a lot of ways, Amazon has a grander vision. They’re not trying to be a telco.”

China Launches Beidou, Its Own Version of GPS

Post Syndicated from Andrew Jones original https://spectrum.ieee.org/tech-talk/aerospace/satellites/final-piece-of-chinas-beidou-navigation-satellite-system-comes-online

The final satellite needed to complete China’s own navigation and positioning satellite system has passed final on-orbit tests. The completed independent system provides military and commercial value while also facilitating new technologies and services.

The final Beidou satellite was launched on a Long March 3B rocket from the Xichang Satellite Launch Center in a hilly region of Sichuan province at 01:43 UTC on Tuesday, 23 June. The satellite was sent into a geosynchronous transfer orbit before entering an orbital slot approximately 35,786 kilometers in altitude, which keeps it at a fixed point above the Earth.
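That altitude isn’t arbitrary: it is the height at which an orbit’s period matches one sidereal day, which you can check with Kepler’s third law. The short calculation below uses standard values for Earth’s gravitational parameter and radius (figures not given in the article).

```python
import math

# Sanity check of the geostationary altitude quoted above.
MU = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
T = 86164.0905           # one sidereal day, s
R_EARTH = 6378.137e3     # Earth's equatorial radius, m

# Kepler's third law: T^2 = 4*pi^2 * a^3 / MU  ->  solve for semi-major axis a.
a = (MU * T**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)
print(f"altitude ~{(a - R_EARTH) / 1e3:.0f} km")   # ~35786 km
```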

Like GPS, the main, initial motivation for Beidou was military. The People’s Liberation Army did not want to be dependent on GPS for accurate positioning data of military units and weapons guidance, as the U.S. Air Force could switch off open GPS signals in the event of conflict. 

As with GPS, Beidou also provides and facilitates a range of civilian and commercial services and activities, with an output value of $48.5 billion in 2019. 

Twenty-four satellites in medium Earth orbits (at around 21,500 kilometers above the Earth) provide positioning, navigation and timing (PNT) services. The satellites use rubidium and hydrogen atomic clocks for highly accurate timing that allows precise measurement of speed and location.
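In essence, those precise clocks let a receiver turn signal travel times into distances and then solve for its own position and clock error. The sketch below illustrates that navigation solution with a standard Gauss-Newton least-squares fit; the satellite positions and receiver location are invented for the example and have nothing to do with the real Beidou constellation.

```python
import numpy as np

# Toy illustration of PNT: recover a receiver's position and clock bias
# from pseudoranges to satellites at known positions. The geometry here
# is invented for the example, not real Beidou data.

C = 299_792_458.0  # speed of light, m/s

def solve_position(sat_pos, pseudoranges, iters=10):
    """Gauss-Newton solution of the standard GNSS navigation equations."""
    x = np.zeros(4)                          # [x, y, z, clock_bias * c]
    for _ in range(iters):
        diffs = x[:3] - sat_pos              # receiver-minus-satellite vectors
        ranges = np.linalg.norm(diffs, axis=1)
        predicted = ranges + x[3]
        H = np.hstack([diffs / ranges[:, None], np.ones((len(sat_pos), 1))])
        dx, *_ = np.linalg.lstsq(H, pseudoranges - predicted, rcond=None)
        x += dx
    return x[:3], x[3] / C                   # position (m), clock bias (s)

# Four satellites at roughly MEO distances (positions in meters, invented).
sats = np.array([[ 2.0e7,  5.0e6,  1.5e7],
                 [-1.5e7,  1.0e7,  1.8e7],
                 [ 5.0e6, -2.0e7,  1.2e7],
                 [ 1.0e7,  1.5e7, -1.9e7]])
truth = np.array([6.371e6, 0.0, 0.0])        # receiver on Earth's surface
clock_bias_s = 1e-3                          # 1 ms receiver clock error
pr = np.linalg.norm(sats - truth, axis=1) + clock_bias_s * C
pos, bias = solve_position(sats, pr)
print(pos, bias)                             # recovers the position and ~1e-3 s bias
```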

Additionally, thanks to a number of satellites in geosynchronous orbits, Beidou provides a short messaging service through which 120-character messages can be sent to other Beidou receivers. Beidou also aids international search and rescue services. Vessels at sea will be able to seek help from nearby ships in case of emergency even where there is no cellphone signal.

The Beidou satellite network is also testing inter-satellite links, removing reliance on ground stations for communications across the system.

Beidou joins the United States’ GPS and Russia’s GLONASS in providing global PNT services, with Europe’s Galileo soon to follow. These are all compatible and interoperable, meaning users can draw services from all of these to improve accuracy.

“The BeiDou-3 constellation transmits a civil signal that was designed to be interoperable with civil signals broadcast by Galileo, GPS III, and a future version of GLONASS. This means that civil users around the world will eventually be getting the same signal from more than 100 satellites across all these different constellations, greatly increasing availability, accuracy, and resilience,” says Brian Weeden, Director of Program Planning for the Secure World Foundation.

“This common signal is the result of international negotiations that have been going on since the mid-2000s within the International Committee of GNSS (ICG).”

The rollout of Beidou has taken two decades. The first Beidou satellites were launched in 2000, providing coverage to China. Second generation Beidou-2 satellites provided coverage for the Asia-Pacific region starting in 2012. Deployment of Beidou-3 satellites began in 2015, with Tuesday’s launch being the 30th such satellite. 

But this is far from the end of the line. China wants to establish a ‘ubiquitous, integrated, intelligent and comprehensive’ national PNT system, with Beidou as its core, by 2035, according to a white paper.

Chinese aerospace firms are also planning satellite constellations in low Earth orbit to augment the Beidou signal, improving accuracy while facilitating high-speed data transmission. Geely, an automotive giant, is now also planning its own constellation to improve accuracy for autonomous driving.

Although the space segment is complete, China still has work to do on the ground to make full use of Beidou, according to Weeden.

“It’s not just enough to launch the satellites; you also have to roll out the ground terminals and get them integrated into everything you want to make use of the system. Doing so is often much harder and takes much longer than putting up the satellites. 

“So, for the Chinese military to make use of the military signals offered by BeiDou-3, they need to install compatible receivers into every plane, tank, ship, bomb, and backpack. That will take a lot of time and effort,” Weeden states.

With the rollout of Beidou satellites complete, inhabitants downrange of Xichang will be spared any further disruption and possible harm. Long March 3B launches of Beidou satellites frequently see spent rocket stages fall near or on inhabited areas. Eighteen such launches have been carried out since 2018.

The areas calculated to be under threat from falling boosters were evacuated ahead of time for safety. Warnings about residual toxic hypergolic propellant were also issued. But close calls and damage to property were all too common.

Sergey Brin’s Revolutionary $19 Airship

Post Syndicated from Mark Harris original https://spectrum.ieee.org/tech-talk/aerospace/aviation/sergey-brins-revolutionary-20-airship

In March last year, Google co-founder Sergey Brin finally saw a return on the millions he has invested in a quest to build the world’s largest and greenest airship for humanitarian missions. After six years of development, his secretive airship company, LTA Research and Exploration, quietly made its first sale: an 18-meter-long, 12-engined, all-electric aircraft called Airship 3.0. The price? According to an FAA filing obtained by IEEE Spectrum, it was just $18.70.

This was not an effort by Brin to cash out his airship investment, but a key part of its development process. The FAA records show that the buyer of Airship 3.0 was Nicolas Garafolo, an associate professor in the Mechanical Engineering department at the University of Akron in Ohio.

When not working at the university, Garafolo leads LTA’s Akron research team, which includes undergraduates, graduate students and a number of alumni from UA’s College of Engineering. The nominal purchase price is probably a nod to UA’s founding year: 1870.

Airship 3.0 is actually LTA Akron’s second prototype. The first, registered in September 2018, was also a 12-engined electric airship, but only 15 meters long, or a little longer than a typical American school bus. It underwent flight tests in Akron earlier this year. Akron was where the US Navy built gargantuan airships in the 1930s, and is still home to the largest active airship hangar in the world—which, according to emails obtained under a public records request, LTA is also interested in leasing.

But Brin doesn’t want to recreate the glory days of the past, he wants to surpass them. Patents, official records and job listings suggest that LTA’s new airship will be safer, smarter and much more environmentally sustainable than the wallowing and dangerous airships of yore.

The biggest question in designing an airship is how to make it float. Hydrogen is cheap, plentiful and the lightest gas in the universe, but also extremely flammable and difficult to contain. Helium, the next lightest gas, is safely inert, but expensive and increasingly scarce. Virtually all new airships since the Hindenburg disaster have opted for helium as a lifting gas.

LTA’s airship, uniquely, will have both gases on board. Helium will be used to provide lift, while hydrogen will be used to power its electric engines. The lithium-ion batteries used in today’s electric cars are too heavy for the airships that LTA intends to use to deliver humanitarian aid to remote disaster zones. Instead, a hydrogen fuel cell will provide reliable power and could enable long-range missions.

A patent application published last year shows that LTA is also rethinking how to manufacture very large airships. Traditionally, airships are kept stationary while their rigid frames, used to hold and shape the gas envelope, are being built. This requires workers to climb to great heights, adding risks and delays. 

LTA’s patent covers a “rollercoaster” structure that allows a partially completed airship to be rotated around its central axis during construction, so that workers can stay safely on the ground. The patent application also describes a method for 3D printing airship components out of strong, lightweight carbon fiber.

LTA’s website says that it is working to create a family of aircraft with no operational carbon footprint to “substantially reduce the total global carbon footprint of aviation.” That would require both generating hydrogen using renewable electricity, and producing a variety of aircraft to satisfy passenger as well as cargo demand.

Paperwork filed by LTA suggests that its first full-size airship, called Pathfinder 1, could already be almost ready to take to the air from LTA’s headquarters at Moffett Field in Silicon Valley. FAA records show that the Pathfinder is powered by 12 electric motors and able to carry 14 people.  That would make it about the same size as the only passenger airship operating today, the Zeppelin NT, which conducts sightseeing tours in Germany and Switzerland.

In fact, LTA’s first airship could even be based on the  Zeppelin NT, modified to use electric propulsion. LTA has received numerous imports from Zeppelin over the last few years, including fins, rudders, and equipment for a passenger gondola.

Unlike experimental fixed-wing or VTOL aircraft, which can be quietly tested and flown from remote airfields, there will be no keeping the enormous Pathfinder 1 secret when it finally leaves its hangar at Moffett Field. In January, LTA flew a small unmarked airship, usually operated for aerial advertising, from there. Even the sight of that airship, probably for a test of LTA’s flight systems, got observers chattering. The unveiling of Pathfinder 1 will show once and for all that Sergey Brin’s dreams of an airship renaissance are anything but hot air.

AI Seeks ET: Machine Learning Powers Hunt for Life in the Solar System

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/tech-talk/aerospace/robotic-exploration/ai-seeks-et-machine-learning-life-solar-system

Can artificial intelligence help the search for life elsewhere in the solar system? NASA thinks the answer may be “yes”—and not just on Mars either.

A pilot AI system is now being tested for use on the ExoMars mission that is currently slated to launch in the summer or fall of 2022. The machine-learning algorithms being developed will help science teams decide how to test Martian soil samples to return only the most meaningful data.

For ExoMars, the AI system will only be used back on Earth to analyze data gathered by the ExoMars rover. But if the system proves to be as useful to the rover as now suspected, a NASA mission to Saturn’s moon Titan (now scheduled for a 2026 launch) could automate the scientific sleuthing process in the field. This mission will rely on the Dragonfly octocopter drone to fly from surface location to surface location through Titan’s dense atmosphere and drill for signs of life there.

The hunt for microbial life in another world’s soil, either as fossilized remnants or as present-day samples, is very challenging, says Eric Lyness, software lead of the NASA Goddard Planetary Environments Lab in Greenbelt, Md. There is of course no precedent to draw upon, because no one has yet succeeded in astrobiology’s holy grail quest.

But that doesn’t mean AI can’t provide substantial assistance. Lyness explained that for the past few years he’d been puzzling over how to automate portions of an exploratory mission’s geochemical investigation, wherever in the solar system the scientific craft may be.

Last year he decided to try machine learning. “So we got some interns,” he said. “People right out of college or in college, who have been studying machine learning. … And they did some amazing stuff. It turned into much more than we expected.” Lyness and his collaborators presented their scientific analysis algorithm at a geochemistry conference last month.

ExoMars’s rover—named Rosalind Franklin, after one of the co-discoverers of the structure of DNA—will be the first that can drill down to 2-meter depths, beyond where solar UV light might penetrate and kill any life forms. In other words, ExoMars will be the first Martian craft with the ability to reach soil depths where living soil bacteria could possibly be found.

“We could potentially find forms of life, microbes or other things like that,” Lyness said. However, he quickly added, very little conclusive evidence today exists to suggest that there’s present-day (microbial) life on Mars. (NASA’s Curiosity rover has sent back some inexplicable observations of both methane and molecular oxygen in the Martian atmosphere that could conceivably be a sign of microbial life forms, though non-biological processes could explain these anomalies too.)

Less controversially, the Rosalind Franklin rover’s drill could also turn up fossilized evidence of life in the Martian soil from earlier epochs when Mars was more hospitable.

NASA’s contribution to the joint Russian/European Space Agency ExoMars project is an instrument called a mass spectrometer that will be used to analyze soil samples from the drill cores. Here, Lyness said, is where AI could really provide a helping hand.

The spectrometer, which studies the mass distribution of ions in a sample of material, works by blasting the drilled soil sample with a laser and then mapping out the atomic masses of the various molecules and portions of molecules that the laser has liberated. The problem is that any given mass spectrum could originate from any number of source compounds, minerals, and components, which makes analyzing a mass spectrum a gigantic puzzle.

Lyness said his group is studying the mineral montmorillonite, a commonplace component of the Martian soil, to see the many ways it might reveal itself in a mass spectrum. Then his team sneaks in an organic compound with the montmorillonite sample to see how that changes the mass spectrometer output.
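One way to picture the machine-learning piece, purely as an illustration and not the Goddard team’s actual algorithm, is as a classifier trained on spectra with and without an organic compound mixed into the mineral background. The peak positions and intensities below are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative sketch only: flag spectra that carry an organic signature
# on top of a mineral background. All peaks and intensities are invented.

rng = np.random.default_rng(0)
N_BINS = 200   # hypothetical number of mass/charge bins

def synthetic_spectrum(with_organic):
    spectrum = rng.gamma(1.0, 0.05, N_BINS)              # noisy background
    spectrum[[23, 40, 56]] += rng.uniform(0.5, 1.5, 3)   # "mineral" peaks
    if with_organic:
        spectrum[[77, 91]] += rng.uniform(0.2, 0.8, 2)   # "organic" peaks
    return spectrum

X = np.array([synthetic_spectrum(i % 2 == 1) for i in range(400)])
y = np.arange(400) % 2
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:300], y[:300])
print("held-out accuracy:", clf.score(X[300:], y[300:]))
```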

“It could take a long time to really break down a spectrum and understand why you’re seeing peaks at certain [masses] in the spectrum,” he said. “So anything you can do to point scientists into a direction that says, ‘Don’t worry, I know it’s not this kind of thing or that kind of thing,’ they can more quickly identify what’s in there.”

Lyness said the ExoMars mission will provide a fertile training ground for his team’s as-yet-unnamed AI algorithm. (He said he’s open to suggestions—though, please, no spoof Boaty McBoatface submissions need apply.)

Because the Dragonfly drone and possibly a future astrobiology mission to Jupiter’s moon Europa would be operating in much more hostile environments with much less opportunity for data transmission back and forth to Earth, automating a craft’s astrobiological exploration would be practically a requirement.

All of which points to a future in the mid-2030s in which a nuclear-powered octocopter on a moon of Saturn flies from location to location to drill for evidence of life on this tantalizingly bio-possible world. And machine learning will help power the science.

“We should be researching how to make the science instruments smarter,” Lyness said. “If you can make it smarter at the source, especially for planetary exploration, it has huge payoffs.”

U.S. Eases Restrictions on Private Remote-Sensing Satellites

Post Syndicated from David Schneider original https://spectrum.ieee.org/tech-talk/aerospace/satellites/eased-restrictions-on-commercial-remote-sensing-satellites

Later this month, satellite-based remote-sensing in the United States will be getting a big boost. Not from a better rocket, but from the U.S. Commerce Department, which will be relaxing the rules that govern how companies provide such services.

For many years, the Commerce Department has been tightly regulating those satellite-imaging companies, because of worries about geopolitical adversaries buying images for nefarious purposes and compromising U.S. national security. But the newly announced rules, set to go into effect on July 20, represent a significant easing of restrictions.

Previously, obtaining permission to operate a remote-sensing satellite had been a gamble—the criteria by which a company’s plans were judged were vague, as was the process, an inter-agency review requiring input from the U.S. Department of Defense as well as the State Department. But in May of 2018, the Trump administration’s Space Policy Directive-2 made it apparent that the regulatory winds were changing. In an effort to promote economic growth, the Commerce Department was commanded to rescind or revise regulations established in the Land Remote Sensing Policy Act of 1992, a piece of legislation that compelled remote-sensing satellite companies to obtain licenses and required that their operations not compromise national security.

Following that directive, in May of 2019 the Commerce Department issued a Notice of Proposed Rulemaking in an attempt to streamline what many in the satellite remote-sensing industry saw as a cumbersome and restrictive process.

But the proposed rules didn’t please industry players. To the surprise of many of them, though, the final rules announced last May were significantly less strict. For example, they allow satellite remote-sensing companies to sell images of a particular type and resolution if substantially similar images are already commercially available in other countries. The new rules also drop earlier restrictions on nighttime imaging, radar imaging, and short-wave infrared imaging.

On June 25th, Commerce Secretary Wilbur Ross explained at a virtual meeting of the National Oceanic and Atmospheric Administration’s Advisory Committee on Commercial Remote Sensing why the final rules differ so much from what was proposed in 2019:

Last year at this meeting, you told us that our first draft of the rule would be detrimental to the U.S. industry and that it could threaten a decade’s worth of progress. You provided us with assessments of technology, foreign competition, and the impact of new remote sensing applications. We listened. We made the case with our government colleagues that the U.S. industry must innovate and introduce new products as quickly as possible. We argued that it was no longer possible to control new applications in the intensifying global competition for dominance.

In other words, the cat was already out of the bag: there’s no sense prohibiting U.S. companies from offering satellite-imaging services already available from foreign companies.

An area where the new rules remain relatively strict, though, concerns the taking of pictures of other objects in orbit. Companies that want to offer satellite inspection or maintenance services would need rules that allow what regulators call “non-Earth imaging.” But there are national security implications here, because pictures obtained in this way could blow the cover of U.S. spy satellites masquerading as space debris.

While the extent to which spy satellites cloak themselves in the guise of space debris isn’t known, it seems clear that this would be an ideal tactic for avoiding detection. That strategy won’t work, though, if images taken by commercial satellites reveal a radar-reflecting object to be a cubesat instead of a mangled mass of metal.

Because of that concern, the current rules demand that companies limit the detailed imaging of other objects in space to ones for which they have obtained permission from the satellite owner and from the Secretary of Commerce at least 5 days in advance of obtaining images. But that stipulation raises a key question: Who should a satellite-imaging company contact if it wants to take pictures of a piece of space debris? Maybe imaging space debris would only require the permission of the Secretary of Commerce. But then, would the Secretary ever give such a request a green light? After all, if permission were typically granted, instances when it wasn’t would become suspicious.

More likely, imaging space debris—or spy satellites trying to pass as junk—is going to remain off the table for the time being. So even though the new rules are a welcome development to most commercial satellite companies, some will remain disappointed, including those companies that make up the Consortium for the Execution of Rendezvous and Servicing Operations (CONFERS), which had recommended that “the U.S. government should declare the space domain as a public space and the ability to conduct [non-Earth imaging] as the equivalent of taking photos of public activities on a public street.”

No Propeller? No Problem. This Blimp Flies on Buoyancy Alone

Post Syndicated from Andrew Rae original https://spectrum.ieee.org/aerospace/aviation/no-propeller-no-problem-this-blimp-flies-on-buoyancy-alone

On a cold March night last year in Portsmouth, England, an entirely new type of aircraft flew for the first time, along a dimly lit 120-meter corridor in a cavernous building once used to build minesweepers for the Royal Navy.

This is the Phoenix, an uncrewed blimp that has no engines but propels itself forward by varying its buoyancy and its orientation. The prototype measures 15 meters in length, 10.5 meters in wingspan, and when fully loaded weighs 150 kilograms (330 pounds). It flew over the full length of the building, each flight requiring it to undulate up and down about five times.

Flying in this strange way has advantages. For one, it demands very little energy, allowing the craft to be used for long-duration missions. Also, it dispenses with whirring rotors and compressor blades and violent exhaust streams—all potentially dangerous to people or objects on the ground and even in the air. Finally, it’s cool: an airship that moves like a sea creature.

This propulsion concept has been around since 1864, when a patent for the technique, as applied to an airship, was granted to one Solomon Andrews, of New Jersey (U.S. Patent 43,449). Andrews called the ship the Aereon, and he proposed that it use hydrogen for lift, to make the ship ascend. The ship could then vent some of the hydrogen to reduce its buoyancy, allowing it to descend. A return to lighter-than-air buoyancy would then be achieved by discarding ballast carried aloft in a gondola suspended beneath the airship.

The pilot would control the ship’s attitude by walking along the length of the gondola. Walking to the front moved the center of gravity ahead of the center of buoyancy, making the nose of the airship pitch down; walking to the back would make the nose pitch up.

Andrews suggested that these two methods could be used in conjunction to propel the airship in a sinusoidal flight path. Raising the nose in ascent and lowering it in descent causes the combination of aerodynamic force with either buoyancy (when lighter than air) or with weight (when heavier than air) to have a vector component along the flight path. That component provides the thrust except at the top and bottom of the flight path, where momentum alone carries the craft through. The flight tests we performed were at walking pace, so the aerodynamic forces would have been very small; even so, there is always a component of either buoyancy or weight along the flight path.
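To see where the thrust comes from, consider the geometry: only the component of the net vertical force along the inclined flight path pushes the craft forward. The excess-buoyancy and flight-path-angle figures in this back-of-the-envelope sketch are assumptions for illustration, not measured Phoenix values.

```python
import math

# Forward force from buoyant "gliding": the component of excess buoyancy
# (on the climb) or excess weight (on the descent) along the flight path.
# The 20 N figure and 15-degree angle are assumed for illustration.

def forward_force(net_vertical_force_n, flight_path_angle_deg):
    return abs(net_vertical_force_n * math.sin(math.radians(flight_path_angle_deg)))

excess_buoyancy_n = 20.0   # assumed net lift when lighter than air
excess_weight_n = 20.0     # assumed net weight after inhaling air

print(f"{forward_force(excess_buoyancy_n, 15.0):.1f} N forward on a 15-degree climb")
print(f"{forward_force(excess_weight_n, -15.0):.1f} N forward on a 15-degree descent")
```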

The method Andrews describes in his patent means that the flight had to end when the airship ran out of either hydrogen or ballast. Later, he built a second airship, which used cables to compress the gas or let it expand again, so that the airship could go up and down without having to jettison ballast. His approach was sound: The key to unlocking this idea and creating a useful aircraft is thus the ability to vary the buoyancy in a sustainable manner.

A variation on this mode of propulsion has been demonstrated successfully underwater, in remotely operated vehicles. Many of these “gliders” vary the volume of water that they displace by using compressed air to expand and contract flexible bladders. Such gliders have been used as long-distance survey vehicles that surface periodically to upload the data they’ve collected. Because water is nearly 1,000 times as dense as air, these robot submarines needn’t change the volume of the bladders very much to attain the necessary changes in buoyancy.

Aeronautical versions of this variable-buoyancy concept have been tried—the Physical Science Laboratory at New Mexico State University ran a demonstration project called Aerobody in the early 2000s—but all anyone could do was demonstrate that this odd form of propulsion works. Before now, nobody ever took advantage of the commercial possibilities that it offered for ultralong endurance applications.

The Phoenix project grew out of a small demonstration system developed by Athene Works, a British company that specializes in innovation and that’s funded by the U.K. Ministry of Defense. That system was successful enough to interest Innovate UK, a government agency dedicated to testing new ideas, and the Aerospace Technology Institute, a government-funded body that promotes transformative technology in air transport. These two organizations put up half the £3.5 million budget for the Phoenix. The rest was supplied by four private companies, five universities, and three governmental organizations devoted to high-value manufacturing.

My colleagues and I had less than four years to develop many of the constituent technologies, most of which were bespoke solutions, and then build and test the new craft. A great number of organizations participated in the project, with the Centre for Process Innovation managing the overall collaboration. I served as the lead engineer.

For up-and-down motion, the aircraft takes in and compresses air into an internal “lung,” making itself heavier than air; then it releases that compressed air to again become lighter than air. Think of the aircraft as a creature that inhales and exhales as it propels itself forward.
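A rough sense of scale, under assumed numbers rather than the Phoenix’s actual figures: if the helium gives the craft a couple of kilograms’ worth of excess lift, the lung only has to swallow a few cubic meters of ambient air to tip it the other way, because the compressed air adds mass without meaningfully adding displaced volume.

```python
# Rough sizing of the internal "lung". All values are illustrative
# assumptions, not Phoenix design numbers.

g = 9.81
rho_air = 1.225            # kg/m^3, sea-level air density
excess_buoyancy_n = 20.0   # assumed net upward force with the lung empty

neutral_mass_kg = excess_buoyancy_n / g        # air mass to cancel the excess lift
inhaled_mass_kg = 2.0 * neutral_mass_kg        # assumed 100 % margin to become heavier than air
ambient_volume_m3 = inhaled_mass_kg / rho_air  # volume drawn in at ambient pressure

print(f"inhale ~{inhaled_mass_kg:.1f} kg of air (~{ambient_volume_m3:.1f} m^3 at ambient pressure)")
```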

The 15-meter-long fuselage, filled with helium to achieve buoyancy, has a teardrop shape, representing a compromise between a sphere (which would be the ideal shape for maximizing the volume of gas you can enclose with a given amount of material) and a long, thin needle (which would minimize drag). At the relatively low speeds such a craft can aspire to, it is enough that the teardrop be just streamlined enough to avoid eddy currents, which on a sphere would form when the boundary layer of air that lies next to the surface of the airship pulls away from it. With our teardrop, the only drag comes from the friction of the air as it flows smoothly over the surface.

The skin is made of Vectran, a fiber that’s strong enough to withstand the internal pressure and sufficiently closely knit so that together with a thermoplastic polyurethane coating it can seal the helium in. The point was to be strong enough to maintain the right shape, even when the airship’s internal bladder was inflating.

Whether ascending or descending, the aircraft must control its attitude. It therefore has wings with ailerons at the tips to control the aircraft’s roll. At the back is a cross-shaped structure with a pair of horizontal stabilizers incorporating elevators to control how the airship pitches up or down, and a similar pair of vertical stabilizers with rudders to control how it yaws left or right. These flight surfaces have much in common with the wood, fabric, and wire parts in the pioneering airplanes of the early 20th century.

Two carbon-fiber spars span the wings, giving them strength. Airfoil-shaped ribs are distributed along the spars, each made up of foam sandwiched between carbon fiber. A thin skin wraps around this skeleton to give the wing its shape. We designed the horizontal and vertical tail sections to be identical to one another and to the outer panels of the wings. Thus, we were able to limit the types of parts, making the craft easier to construct and repair.

An onboard power system supplies the electricity needed to operate the pumps and valves used to inflate and deflate the inner bladder. It also energizes the various actuators needed to adjust the flight-control surfaces and keeps the craft’s autonomous flight-control system functioning. A rechargeable lithium-ion battery with a capacity of 3 kilowatt-hours meets those requirements in darkness. During daylight hours, arrays of flexible solar cells (most of them on the upper surfaces of the wings, the rest on the upper surface of the horizontal tail) recharge that battery. We confirmed through ground tests outdoors in the sun that these solar cells could simultaneously power all of the aircraft’s systems and recharge the battery in a reasonable amount of time, proving that the Phoenix could be entirely self-sufficient for energy.

We had envisaged also using a hydrogen fuel cell, but because of fire-safety requirements it wasn’t quite ready for the indoor flight trials. We do plan to add this second power source later, for redundancy. Also, if we were to use hydrogen as the lift gas, the fuel cell could be used to replenish any hydrogen lost through the airship’s skin.

So how well did the thing fly? For our tests, we programmed the autonomous flight-control system to follow a sinusoidal flight path by operating the valves and compressors connected to the internal bladder. In this respect, the flight-control system has more in common with a submarine’s buoyancy controls than an airplane’s flight controls.

We had to set strict altitude limits to avoid contact with the roof and floor of the building during our indoor test. In normal operation, the aircraft will be free to determine for itself the amplitude of its up-and-down motion and the length of each undulation to achieve the necessary velocity. Doing that will require some complex calculations and precisely executed commands—a far cry from the meandering backward and forward in a wicker gondola that Andrews did.

Although our experiments to date are merely testing a previously unproven concept, the Phoenix can now serve as the prototype for a commercially valuable aircraft. The next step is getting the Phoenix certified as airworthy. For that, it must pass flight trials outdoors. When we planned the project, this certification had a series of weight thresholds, with 150 kg being the upper limit for approval through the U.K. Civil Aviation Authority under a “permit to fly.” Had it been heavier, approval by the European Union Aviation Safety Agency would have been needed, and trying to obtain that was beyond our budget of both time and money. After the United Kingdom fully exits from the European Union, certification will be different.

Commercial applications for such an aircraft are not hard to imagine. A good example is as a high-altitude pseudosatellite, a craft that can be positioned at will to convey wireless signals to remote places. Existing aircraft designed to perform this role all need very big arrays of solar cells and large batteries, which add to both the weight and cost of the aircraft. Because the Phoenix needs only small arrays of solar cells on the wings and horizontal tail, it can be built for a tenth the cost of the solar e-planes that have been designed for this purpose. It is a cheap, almost disposable, alternative with a much higher ratio of payload to mass than that of alternative aircraft. And our designs for a much larger version of the Phoenix show that it should be feasible to lift a payload of 100 kg to an altitude of 20 kilometers.

We are now beginning to develop such a successor to the Phoenix. Perhaps you will see it one day, a dot in the sky, hanging motionless or languidly porpoising to a new position high above your head.

This article appears in the July 2020 print issue as “This Blimp Flies on Buoyancy Alone.”

About the Author

Andrew Rae is a professor of engineering at the University of the Highlands and Islands, in Inverness, Scotland.

Tiny Satellites Could Distribute Quantum Keys

Post Syndicated from Neil Savage original https://spectrum.ieee.org/tech-talk/aerospace/satellites/tiny-satellites-could-distribute-quantum-keys

Unbreakable quantum keys that use the laws of physics to protect their secrets could be transmitted from orbiting devices a person could lift with one hand, according to experiments conducted from the International Space Station.

Researchers launched a tiny, experimental satellite loaded with optics that could emit entangled pairs of photons. Entangled photons share quantum mechanical properties in such a way that measuring the state of one member of the pair—such as its polarization—instantly tells you what the state of its partner is. Such entanglement could be the basis for quantum key distribution, in which cryptographic keys are used to decode messages. If an eavesdropper were to intercept one of the photons and measure it, that would change the state of both, and the user would know the key had been compromised.
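The logic of such a scheme can be illustrated with a toy classical model of the sifting and error-check steps (this is bookkeeping, not a simulation of the quantum physics, and the numbers are purely illustrative): both parties measure each pair in a randomly chosen basis, keep the rounds where their bases matched, and watch the error rate, which an intercept-and-resend eavesdropper pushes up to roughly 25 percent.

```python
import random

# Toy model of entanglement-based key distribution (BBM92-style sifting
# and error check). Classical bookkeeping only; not a quantum simulation.

def run(pairs=20_000, intercept_prob=0.0):
    key_a, key_b = [], []
    for _ in range(pairs):
        basis_a, basis_b = random.choice("ZX"), random.choice("ZX")
        if basis_a != basis_b:
            continue                           # mismatched bases are discarded
        bit_a = bit_b = random.getrandbits(1)  # perfectly correlated outcomes
        if random.random() < intercept_prob:
            eve_basis = random.choice("ZX")
            if eve_basis != basis_a:
                bit_b = random.getrandbits(1)  # wrong-basis measurement randomizes Bob's result
        key_a.append(bit_a)
        key_b.append(bit_b)
    errors = sum(a != b for a, b in zip(key_a, key_b))
    return len(key_a), errors / len(key_a)

print(run(intercept_prob=0.0))   # about half the pairs survive sifting, near-zero errors
print(run(intercept_prob=1.0))   # error rate climbs to ~25 %, exposing the eavesdropper
```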

NASA’s Next Mars Rover Will Carry a Tiny Helicopter


Post Syndicated from Ned Potter original https://spectrum.ieee.org/aerospace/robotic-exploration/nasas-next-mars-rover-will-carry-a-tiny-helicopter

If ever there was life on Mars, NASA’s Perseverance rover should be able to find signs of it. The rover, scheduled to launch from Kennedy Space Center, in Florida, in late July or early August, is designed to drill through rocks in an ancient lake bed and examine them for biosignatures, extract oxygen from the atmosphere, and collect soil samples that might someday be returned to Earth.

But to succeed at a Mars mission, you always need a little ingenuity; that’s literally what Perseverance is carrying. Bolted to the rover’s undercarriage is a small autonomous helicopter called Ingenuity. If all goes as planned, it will become the first aircraft to make a powered flight on another planet.

Flying a drone on Mars sounds simple, but it has been remarkably difficult to design a workable machine. Ingenuity’s worst enemy is the planet’s atmosphere, which is less than 1 percent as dense as Earth’s and can drop to –100 °C at night at the landing site.

“Imagine a breeze on Earth,” says Theodore Tzanetos, flight test conductor for the project at NASA’s Jet Propulsion Laboratory, in Pasadena, Calif. “Now imagine having 1 percent of that to bite into or grab onto for lift and control.” No earthly helicopter has ever flown in air that thin.

Perseverance and Ingenuity are set to land in a crater called Jezero on 18 February 2021 and then head off to explore. About 60 Martian days later, the rover should lower the drone to the ground, move about 100 meters away, and watch it take off.

While the car-size Perseverance has a mass of 1,025 kilograms, the drone is just 1.8 kg with a fuselage the size of a box of tissues. Ingenuity’s twin carbon-fiber rotors sit on top of one another and spin in opposite directions at about 2,400 rpm, five times as fast as most helicopter rotors on Earth. If they went any slower, the vehicle couldn’t get off the ground. Much faster and the outer edges of the rotors would approach supersonic speed, possibly causing shock waves and turbulence that would make the drone all but impossible to stabilize.
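A quick check of those numbers, using the roughly 1.2-meter rotor diameter widely reported for Ingenuity and an approximate Martian speed of sound (neither figure appears in this article), shows why the rotor speed sits where it does.

```python
import math

# Rotor-tip speed check. The 0.6 m tip radius and ~240 m/s Martian speed
# of sound are outside assumptions, not figures from this article.

rpm = 2400.0
tip_radius_m = 0.6
speed_of_sound_mars = 240.0   # m/s in cold, thin CO2 (approximate)

tip_speed = 2.0 * math.pi * tip_radius_m * rpm / 60.0
print(f"tip speed ~{tip_speed:.0f} m/s, Mach ~{tip_speed / speed_of_sound_mars:.2f}")
# ~151 m/s, about Mach 0.6 -- fast, but still short of compressibility trouble.
```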

Ingenuity is intended as a technology demonstration. Mission managers say they hope to make up to five flights over a 30-day period. No flight is planned to last more than 90 seconds, reach altitudes of more than 10 meters, or span more than 300 meters from takeoff to landing.

“It may be a bit less maneuverable than a drone on Earth,” says Josh Ravich, the project’s mechanical engineering lead at JPL, “but it has to survive the rocket launch from Earth, the flight from Earth to Mars, entry, descent, and landing on the Martian surface, and the cold nights there.”

That’s why engineers struggled through years of design work, trying to meet competing needs for power, durability, maneuverability, and weight. Most of the drone’s power, supplied by a small solar panel above the rotors and stored in lithium-ion batteries, will be spent not on flying but on keeping the radio and guidance systems warm overnight. They considered insulating the electronics with aerogel, a super-lightweight foam, but decided even that would add too much weight. Modeling showed that the Martian atmosphere, which is mainly carbon dioxide, would supply some thermal buffering.

The team calculated that the best time of day for the first flight will be late in the Martian morning. By then, the light is strong enough to charge the batteries for brief hops. But if they wait longer, the sun’s warmth would also cause air to rise, thinning it at the surface and making it even more difficult to generate lift.

To see if the drone would fly at all, they put a test model in a three-story chamber filled with a simulated Martian atmosphere. A wire rig pulled up on it to simulate Mars’s 0.38-g gravity. It flew, but, says Ravich, the real test will be on Mars.

If Ingenuity succeeds, future missions could use drones as scouts to help rovers—and perhaps astronauts—explore hard-to-reach cliff sides and volcanoes. “We’ve only seen Mars from the surface or from orbit,” says Ravich. “In a 90-second flight, we can see hundreds of meters ahead.” 

This article appears in the July 2020 print issue as “A Mars Helicopter Preps for Launch.”

Zombie Satellites Return From the Graveyard

Post Syndicated from Nola Taylor Redd original https://spectrum.ieee.org/tech-talk/aerospace/satellites/zombie-satellites-return-from-the-graveyard

New technology may help to bring dead satellites back to life. Earlier this year, the Mission Extension Vehicle (MEV), a spacecraft jointly managed by NASA and Northrop Grumman, made history when it resurrected a decrepit satellite from the satellite graveyard. Reviving the spacecraft is a key step in extending the lifetime of orbiting objects; a second mission is set to extend the lifetime of another satellite later this summer.

Most satellites in geosynchronous orbits (GEO) have a design life of 15 years and are launched with enough fuel to cover that timeframe. At the end of their lifetime, the crafts are required to enter a graveyard orbit mandated by the 2002 draft Mitigation Guidelines issued by the Inter-Agency Space Debris Coordination Committee (IADC). Graveyard orbits comprise paths at least 300 kilometers above the geosynchronous region, giving the zombie spacecraft room to have their orbits incrementally ground down by the gravity of the sun and moon.

Even when they’re out of fuel, most satellites are still fully capable of functioning. “The technical degradation—besides fuel—of the satellite subsystems beyond 15 years is very marginal,” says Joe Anderson, vice president of business development and operations at SpaceLogistics, a subsidiary of Northrop Grumman. Anderson says he’s aware of satellites providing valuable services for nearly 30 years.

Defunct satellites are tracked primarily by the United States Air Force, which follows their mass rather than their radio signals. Smaller bits such as debris are difficult for the Air Force to follow, but satellites are usually large enough (though new technology is bringing them down in size). The IADC, meanwhile, comprises just over a dozen space agencies and is the “gold standard” in space recommendations, says Jonathan McDowell, an astrophysicist at the Harvard-Smithsonian Center for Astrophysics.

According to McDowell, who is also author of Jonathan’s Space Report, a weekly newsletter cataloguing the comings and goings of spacecraft, there are 915 objects within 200 km of GEO, so they still have plenty of room to avoid one another. The looming issue is the 365 defunct spacecraft that—due to malfunction, lack of planning, or laziness—didn’t follow the IADC’s guidelines. In contrast, the graveyard region contains only 283 spacecraft. Dead satellites not parked in the agreed-upon spot could lead to collisions (and therefore more debris), which could damage active spacecraft.

“The level of compliance is a little disappointing,” McDowell says. “The junk satellite environment close to GEO is more than one would want.”

In February, MEV-1 successfully brought an Intelsat satellite back from the graveyard into geostationary orbit, where it now serves over 30 customers. Launched in 2001, the Intelsat eventually ran out of fuel and retired to the satellite graveyard. Without fuel, it could no longer adjust its orbit, though its other systems remained functional.

The MEV-1 was designed for interfacing with single-use satellites like the Intelsat. By docking with the satellite’s liquid apogee engine, a common feature that helps most geostationary satellites finalize their orbits at the start of their lifetime, MEV-1 captured the satellite and began to lower its orbit, putting the Intelsat back into play at the start of April. MEV-1 will remain connected to the Intelsat for the next five years, then return it to the graveyard. MEV-1 will then proceed to its next customer.

MEV-1 is only the beginning. Anderson says that a second MEV will be launched later this summer. It will reach geostationary orbit in January 2021 and extend the lifetime of a second Intelsat satellite for an additional five years. Northrop Grumman plans to have only two MEVs in space, but Anderson expects them to be utilized throughout their 15-year-plus lifetime. The company is also developing Mission Extension Pods, smaller propulsion augmentation devices installed on a client’s satellite to provide orbital boosts, extending missions for up to six years. The first pods should launch in 2023, Anderson says.

While Northrop Grumman touts refueling and refurbishing missions like MEV as a bonus to the company’s bottom line, McDowell sees it as a great way to solve the growing problem of space debris and the collection of defunct human-made objects in space. Spacecraft like the MEV could potentially relocate nonfunctioning satellites from GEO to the graveyard, though there are likely to be regulatory and legal issues over who has the right to haul away someone else’s trash. He has long advocated for a launch tax on the companies using space; those funds would sustain an “international space garbage trucking agency” responsible for cleaning up the messes from collisions and enforcing the removal of out-of-work satellites.

“The era of the space garbage truck is coming,” McDowell says.

New Microsatellite Will Focus on Industrial Methane Emissions

Post Syndicated from Maria Gallucci original https://spectrum.ieee.org/energywise/aerospace/satellites/microsatellite-industrial-methane-emissions

Claire, a microsatellite, was monitoring a mud volcano in Central Asia when a mysterious plume appeared in its peripheral view. The 15-kilogram spacecraft had spotted a massive leak of methane—a powerful climate pollutant—erupting from an oil and gas facility in western Turkmenistan. The sighting in January 2019 eventually spurred the operator to fix its equipment, plugging one of the world’s largest reported methane leaks to date.

Canadian startup GHGSat launched Claire four years ago to begin tracking greenhouse gas emissions. Now the company is ready to send its second satellite into orbit. On 20 June, the next-generation Iris satellite is expected to hitch a ride on Arianespace’s Vega 16 rocket from a site in French Guiana. The launch follows back-to-back delays due to a rocket failure last year and the COVID-19 outbreak.

GHGSat is part of a larger global effort by startups, energy companies, and environmental groups to develop new technologies for spotting and quantifying methane emissions. 

Although the phrase “greenhouse gas emissions” is almost synonymous with carbon dioxide, it refers to a collection of gases, including methane.  Methane traps significantly more heat in the atmosphere than carbon dioxide, and it’s responsible for about one-fourth of total atmospheric warming to date. While mud volcanoes, bogs, and permafrost are natural methane emitters, a rising share is linked to human activities, including cattle operations, landfills, and the production, storage, and transportation of natural gas. In February, a scientific study found that human-caused methane emissions might be 25 to 40 percent higher than previously estimated.

Iris’s launch also comes as the Trump administration works to ease regulations on U.S. fossil fuel companies. The U.S. Environmental Protection Agency in May sought to expedite a rollback of federal methane rules on oil and gas sites. The move could lead to an extra 5 million tons of methane emissions every year, according to the Environmental Defense Fund.

Stéphane Germain, president of Montreal-based GHGSat, said the much-improved Iris satellite will enhance the startup’s ability to document methane in North America and beyond.
 

“We’re expecting 10 times the performance relative to Claire, in terms of detection,” he said ahead of the planned launch date.

The older satellite is designed to spot light absorption patterns for both carbon dioxide and methane. But, as Germain explained, the broader spectral detection range requires some compromise on the precision and quality of measurements. Iris’s spectrometer, by contrast, is optimized for only methane plumes, which allows it to spot smaller emission sources in fewer measurements.

Claire also collects about 25 percent of the stray light from outside its field of view, which impinges on its detector. It also experiences “ghosting,” or the internal light reflections within the camera and lens that lead to spots or mirror images. And space radiation has caused more damage to the microsat’s detector than developers initially expected. 

With Iris, GHGSat has tweaked the optical equipment and added radiation shielding to minimize such issues on the new satellite, Germain said.

Other technology upgrades include a calibration feature that corrects for any dead or defective pixels that might mar the observational data. Iris will test an experimental computing system with 10 times the memory and four times the processing power of Claire. The new satellite will also test optical communications downlink, allowing the satellite to bypass shared radio frequencies. The laser-based, 1-gigabit-per-second downlink promises to be more than a thousand times faster than current radio transmission.

GHGSat is one of several ventures aiming to monitor methane from orbit. Silicon Valley startup Bluefield Technologies plans to launch a backpack-sized microsatellite in 2020, following a high-altitude balloon test of its methane sensors at nearly 31,000 meters. MethaneSAT, an independent subsidiary of the Environmental Defense Fund, expects to complete its satellite by 2022. 

The satellites could become a “big game changer” for methane-monitoring, said Arvind Ravikumar, an assistant professor of energy engineering at the Harrisburg University of Science and Technology in Pennsylvania. 

“The advantage of something like satellites is that it can be done remotely,” he said. “You don’t need to go and ask permission from an operator — you can just ask a satellite to point to a site and see what its emissions are. We’re not relying on the industry to report what their emissions are.”

Such transparency “puts a lot of public pressure on companies that are not managing their methane emissions well,” he added.

Ravikumar recently participated in two research initiatives to test methane-monitoring equipment on trucks, drones, and airplanes. The Mobile Monitoring Challenge, led by Stanford University’s Natural Gas Initiative and the Environmental Defense Fund, studied 10 technologies at controlled test sites in Colorado and California. The Alberta Methane Field Challenge, an industry-backed effort, studied similar equipment at active oil-and-gas production sites in Alberta, Canada.

Both studies suggest that a combination of technologies is needed to effectively identify leaks from wellheads, pipelines, tanks, and other equipment. A plane can quickly spot methane plumes during a flyover, but more precise equipment, such as a handheld optical-gas-imaging camera, might be necessary to further clarify the data.

GHGSat’s technology could play a similarly complementary role with government-led research missions, Germain said. 

Climate-monitoring satellites run by space agencies tend to have “very coarse resolutions, because they’re designed to monitor the whole planet all the time to inform climate change models. Whereas ours are designed to monitor individual facilities,” he said. The larger satellites can spot large leaks faster, while Iris or Claire could help pinpoint the exact point source.

After Iris, GHGSat plans to launch a third satellite in December, and it’s working to add an additional eight spacecraft — the first in a “constellation” of pollution-monitoring satellites. “The goal ultimately is to track every single source of carbon dioxide and methane in the world, routinely,” Germain said.

Quantum Satellite Links Extend More Than 1,000 Kilometers

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/aerospace/satellites/entangled-satellite

A space-based, virtually unhackable quantum Internet may be one step closer to reality due to satellite experiments that linked ground stations more than 1,000 kilometers apart, a new study finds.

Quantum physics makes a strange effect known as entanglement possible. Essentially, two or more particles such as photons that get linked or “entangled” can influence each other simultaneously no matter how far apart they are.

Entanglement is an essential factor in the operations of quantum computers, the networks that would connect them, and the most sophisticated kinds of quantum cryptography, a theoretically unhackable means of securing information exchange.
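As a minimal illustration (a sketch, not a description of this experiment’s specific configuration), a pair of entangled photons shared between two ground stations can be written as a Bell state in the horizontal/vertical polarization basis:

```latex
% A maximally entangled two-photon Bell state, shared between ground stations A and B.
% A polarization measurement on one photon immediately fixes the correlated outcome
% for the other, however far apart the two stations are.
\begin{equation}
  |\Phi^{+}\rangle = \frac{1}{\sqrt{2}}\Bigl( |H\rangle_{A}\,|H\rangle_{B} + |V\rangle_{A}\,|V\rangle_{B} \Bigr)
\end{equation}
```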

SpaceX Returns U.S. Astronauts to Space

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/tech-talk/aerospace/space-flight/spacex-returns-us-astronauts-to-space

With a “let’s light this candle,” an ear-shattering roar, and a blaze of light in Florida this afternoon, the United States rejoined an exclusive club of nations: those capable of launching people into space. Since the space shuttle fleet was retired in 2011, American astronauts have had to hitch a ride on Russian Soyuz spacecraft. Today’s launch was the second attempt, after the first earlier this week was scrubbed because of weather concerns. All went well, including a successful recovery of the Falcon 9 booster’s first stage, which made a soft landing on the drone ship Of Course I Still Love You, stationed in the Atlantic.

This nine-year gap has highlighted how moribund the United States’ official space program has become: after all, between 1960 and 1969, NASA developed and flew three new crewed spacecraft (Gemini, the Apollo Command and Service Module, and the Lunar Module), not to mention launch boosters such as the Saturn IB and Saturn V. But this return to human spaceflight is more than a return to the status quo; it heralds a new epoch in which commercial, rather than government, entities take the lead in designing, delivering, and operating new spacecraft. The Dragon 2 spacecraft and Falcon 9 booster were both created by SpaceX, the current leader in the still-nascent world of private spaceflight.

To be clear, NASA has always relied on commercial contractors to build most of its hardware. Gemini was built by McDonnell Aircraft (now absorbed into Boeing), for example, and each of the Saturn V’s three stages was built by a separate company (Boeing, North American, and Douglas, the latter two now also part of Boeing), with the rocket’s instrumentation, telemetry, and computer module built by IBM. But during the Apollo era, NASA kept these contractors on a very tight leash, dictating many aspects of the design, and it also had the advantage of vast budgets devoted to testing and debugging. NASA likewise exerted complete control over the entire flight, from delivery of the hardware through launch and the subsequent mission.

But today, NASA took a back seat. While NASA’s mission control in Houston remains in charge of the overall mission to the International Space Station (ISS), SpaceX’s mission control in Hawthorne, California, was responsible for the launch and the flight into orbit. The Dragon capsule won’t reach the ISS until tomorrow, when it will dock and essentially become part of the station until it is time for it to return.

The two astronauts flying onboard the Dragon, Robert Behnken and Douglas Hurley, are both veterans of the space shuttle program. SpaceX’s replacement is like the shuttle in that it is intended to be a largely reusable mode of transport, including a booster that can fly itself home to a safe landing, but the overall approach is more reminiscent of the Gemini spacecraft of the 1960s than of anything since.

Few remember today, but the snub-nosed, matte-black-and-white Gemini spacecraft, with its stylish red trim, was originally designed with reusability in mind. Components were arranged to make servicing between flights easier, and a version was in the works to bring astronauts to and from a military space station, the Manned Orbiting Laboratory (MOL). The MOL was cancelled abruptly, so only one refurbished Gemini capsule was ever reflown, in an uncrewed test, and its planned role as a reusable space station visitor is largely forgotten.

Like Gemini, the Dragon 2 consists of two main components: a bell-shaped crew compartment with a curved heatshield, and an unpressurized “trunk” section behind the heatshield that serves as an adapter for mating the spacecraft to the launch rocket and provides additional payload space. Also like Gemini, the Dragon 2 jettisons the unpressurized section when returning to Earth, and the crew capsule alone makes a so-called blunt-body descent through the atmosphere before deploying parachutes and splashing down in the ocean for pickup.

But the Dragon 2 also features enormous improvements. For one thing, it’s much bigger than Gemini, capable of carrying up to seven people, compared with Gemini’s two. The unpressurized section is also covered with solar cells to generate power, a first for U.S. crewed spacecraft, which previously relied exclusively on batteries or fuel cells. The crew capsule is outfitted with rockets powerful enough for orbital maneuvering, while Gemini relied on thrusters in its disposable section. These rockets also allow the crew capsule to act as its own launch escape system, pulling the crew to safety in the event of a booster failure. Gemini relied on two airplane-style ejection seats for launch emergencies, something few astronauts seemed to have much confidence in.

The Dragon spacecraft also has a sophisticated autonomous guidance system: in theory, the astronauts could just sit back at launch and let the spacecraft fly itself all the way to a docking at the ISS. During this proving flight, however, the astronauts will take control of the spacecraft to test the manual backup controls, albeit in a much slicker manner than in the old Gemini days, when Buzz Aldrin had to bust out a sextant to make a rendezvous after Gemini 12’s radar stopped working. The new interface is more reminiscent of a video game than a piece of navigation equipment; in fact, you can try your own hand at space station docking using a replica of the interface online.

Exactly how long the crew will stay onboard the ISS is unknown, although there is a limit of 119 days due to concerns about the longevity of the Dragon’s solar panels in orbit.

Can Cargo-Carrying Drones Jump Over Air Freight’s Logistical Logjams?

Post Syndicated from Ed De Reyes original https://spectrum.ieee.org/aerospace/aviation/can-cargocarrying-drones-jump-over-air-freights-logistical-logjams

It’s 4 a.m. on 23 December, and an MD-11 freighter has just landed at the logistics base near San Bernardino, Calif. It landed late because of heavy fog; many of the aircraft in San Bernardino won’t be able to depart because the other airports in the Los Angeles Basin are fogbound as well.

Ground crews remove cargo from the MD-11 and get the many packages into delivery trucks and small planes as quickly as possible before the start of the morning rush of cargo from Amazon, Walmart, and other online retailers scrambling to deliver Christmas gifts. Meanwhile, construction is snarling traffic on the I-10 freeway into Los Angeles, and because of the fog, the I-15 is no better. Similar messes are often seen at choke points throughout the world, where the package-delivery business faces a raft of problems.

Now imagine that the MD-11 landing in San Bernardino meets up with a number of small aircraft, each fully fueled for a day of flying, with no pilots on board and no weather restrictions. The ground crew slides four fully loaded LD-2 containers with 2,400 kilograms of cargo through the nose of each craft, closes them up, and poof, they’re cleared for takeoff, despite the thickening fog. The aircraft all take off, arriving 30 minutes later at various nearby airports, like the one in Oxnard, Calif., where they taxi to the ramps of the relevant package-delivery companies. There the ground crew removes the cargo, sorts it, and loads the small aircraft with new cargo, bound for the Beverly Hills package-sorting facility.

After being cleared for takeoff, the first of these aircraft rises like a helicopter before heading to the Beverly Hills site, which is just a parking lot with two ground handlers. As this odd-looking uncrewed aircraft comes in, the wings fold to help the plane clear any branches or power lines. Just then, a truck pulls right into the intended landing spot, but the aircraft perceives the obstruction and rises to a holding position until the ground crew can clear the area. Only then does it land, give up its cargo, accept new cargo, and get cleared again for takeoff.

And so it goes, right down the supply chain. Conditions that today would paralyze operations are shrugged off: fog, freezing rain, even failures of ground vehicles at improvised landing areas. Say a forklift can’t cross a muddy field to unload the cargo container. No problem; one of the ground crew simply tilts the aircraft’s nose up and unlatches the hooks, and the cargo container slides right out with the help of a cargo winch. Throughout the day, the process of sorting, loading, and takeoff is repeated 15 more times at Oxnard and other airports in Southern California before a final 4,500 kg of payload is loaded into the aircraft and it returns to San Bernardino. Refueling takes place just once, at the end of the day.

This vision of urban air mobility, built on the promise of electric propulsion and on autonomous flight, is no sci-fi dream but a practical project, one that a number of companies are pursuing. Airbus has finished testing its Vahana, a concept electric vertical takeoff and landing (VTOL) aircraft that is meant to fly passengers at low altitudes within and between cities and towns. There is also the Cora, a creation of Google impresarios; it, too, is meant only for short distances and low altitudes. Neither of those two aircraft can carry much cargo, though, particularly in bad weather.

Consider how much easier it would be to use such methods to move cargo instead of people. If there are no passengers on board, you can lose the heavy, bulky gear that assures passenger safety. Replace pilots and you can also dispense with the instruments that help them see where they’re going, as well as the equipment that soundproofs the cabin and supports the windows, floor beams, bulkheads, and so forth. In some cases, an aircraft can weigh 25 percent more with human-factor equipment than without it.
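As a back-of-the-envelope illustration of that 25 percent figure, consider a hypothetical airframe; the 4,000-kilogram empty weight below is an assumption chosen only for the arithmetic, not a Sabrewing number.

```python
# Hypothetical numbers chosen only to illustrate the article's "25 percent" claim.
empty_cargo_only_kg = 4_000                       # assumed empty weight, cargo-only design
empty_crewed_kg = empty_cargo_only_kg * 1.25      # 25 percent heavier with human-factor gear

freed_weight_kg = empty_crewed_kg - empty_cargo_only_kg
print(f"Weight freed for fuel or payload: {freed_weight_kg:,.0f} kg")  # 1,000 kg
```

At a fixed maximum takeoff weight, every kilogram of cabin and cockpit equipment removed is roughly a kilogram made available for cargo or fuel.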

My company, Sabrewing Aircraft, in Camarillo, Calif., was founded to exploit these advantages. By starting with a clean-sheet concept that was never meant to fly people, only cargo, and thus with no one on board to be put at risk, the aircraft can go to and from places no crewed rival can safely reach.

We call it the Rhaegal. If need be, it can lift almost 2,500 kg (5,500 pounds) of cargo straight up from the ground, like a helicopter; if a short runway is available, it can take off in the standard way, then fly straight ahead carrying as much as 4,500 kg (10,000 pounds). That’s more than the new Cessna 408 SkyCourier can manage, and the Rhaegal flies much faster and higher. Also, it is designed to load and unload without the help of forklifts, pallet jacks, or other specialized equipment.

The Rhaegal sits low to the ground, whether on tarmac or even a sand dune, then tilts its nose upward so that either containerized or bulk cargo can be quickly loaded and secured. The aircraft’s high-flotation “tundra tires” and four-post landing-gear arrangement allow it to land in mud, snow, sand, marsh, or deep puddles, and an integral loading ramp with rollers can be used to ease loading of pallets or containers.

Because the Rhaegal has a maximum gross weight above 600 kg (1,320 pounds), it falls under U.S. Federal Aviation Administration (FAA) Regulation Part 23, which requires that it be remotely monitored and controlled and that it remain in contact with air traffic control at all times. Its operator, who can be hundreds or even thousands of miles away, controls the aircraft via a satellite link. In this way, the local air traffic control authority speaks to the operator through the aircraft, just as if the operator were sitting in the cockpit itself.

Prior to takeoff, the operator loads into the computer an exact flight plan, provided by the air traffic control authorities, that includes procedures for departing in any weather and also establishes the frequencies, routes, and a clearance to the aircraft’s final destination. That way it can find its way home even if it loses communication with the operator or air traffic control.
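A sketch of what such a pre-loaded plan might look like as a data structure appears below; the field names and values are illustrative assumptions, not Sabrewing’s or the FAA’s actual format.

```python
# Hypothetical flight-plan structure, illustrating the elements named in the text:
# an all-weather departure procedure, ATC frequencies, a route, a clearance to the
# destination, and a lost-link behavior the aircraft can fall back on.
flight_plan = {
    "departure_procedure": "OXNARD-ALLWX-1",        # assumed departure procedure name
    "atc_frequencies_mhz": [124.2, 135.5],          # assumed frequencies along the route
    "route_waypoints": ["KOXR", "FIM", "KSBD"],     # assumed waypoints, Oxnard to San Bernardino
    "cleared_destination": "KSBD",                  # clearance to the final destination
    "lost_link_action": "CONTINUE_TO_DESTINATION",  # what to do if the satellite link drops
}
```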

The FAA requires that the human pilot of a conventional aircraft see and avoid any air traffic that may be on an intersecting flight path. The same rule applies to the Rhaegal, which must do this job by itself, without operator input. The system responsible, known as the Detect and Avoid (DAA) system, uses a mix of sensors, among them an anticollision radar (made by Garmin), a camera-based system that can spot conflicting air traffic and provide autopilot commands to avoid it (made by Iris Automation), and a lidar, or laser-ranging system, to detect power lines and other small obstructions at close range (made by Attollo Engineering). The DAA system also uses automatic dependent surveillance–broadcast (ADS-B), a surveillance technology based on satellite navigation that is now mandated by the FAA for virtually all aircraft of any size operating in controlled airspace. ADS-B tracks flights no matter what paths they take, allowing far more flexible routing than the older ground-based radars could manage.
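One common way to frame the core geometric test behind such a system is to project each aircraft’s track forward, find the closest point of approach, and flag a conflict if the predicted miss distance is too small. The code below is a minimal sketch with assumed thresholds, not Sabrewing’s actual DAA logic.

```python
# Minimal closest-point-of-approach (CPA) conflict check on ADS-B-style state
# vectors (position and velocity). The look-ahead horizon and separation
# threshold are illustrative assumptions, not FAA or Sabrewing values.
import numpy as np

def cpa_conflict(own_pos, own_vel, tgt_pos, tgt_vel,
                 horizon_s=120.0, min_sep_m=900.0):
    """Return (conflict?, time to CPA in seconds, miss distance in meters)."""
    rel_pos = np.asarray(tgt_pos, float) - np.asarray(own_pos, float)
    rel_vel = np.asarray(tgt_vel, float) - np.asarray(own_vel, float)
    speed2 = rel_vel.dot(rel_vel)
    # Time at which the two tracks are closest, clamped to [0, horizon].
    t_cpa = 0.0 if speed2 < 1e-9 else max(0.0, min(horizon_s, -rel_pos.dot(rel_vel) / speed2))
    miss = np.linalg.norm(rel_pos + rel_vel * t_cpa)
    return bool(miss < min_sep_m), float(t_cpa), float(miss)

# Example: a target 5 km ahead, closing head-on at a combined 100 m/s.
conflict, t_cpa, miss = cpa_conflict([0, 0, 0], [50, 0, 0], [5000, 200, 0], [-50, 0, 0])
print(f"conflict={conflict}, time to CPA={t_cpa:.0f} s, miss distance={miss:.0f} m")
```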

Not all traffic problems are in the air; some are on the ground. Cars or trucks may be moving around or even parked, for instance, when a parking lot itself serves as the landing zone. The Rhaegal uses an artificial-intelligence landing system to spot obstacles from above, including vehicles, people, rocks, and uneven surfaces. This landing system can recognize many types of obstacles and clearings, including landing pads aboard ships at sea.

Data fed from all the sensors are fused into a single picture of the plane’s surroundings by a sensor-interface computer, which monitors nearby air traffic and computes how to keep a safe distance away. When a possible conflict is detected, the computer sends a message to the operator on the ground; the operator then decides whether to change the flight path. If the operator does nothing, the computer will take the necessary steps on its own. Wherever the aircraft goes, the computer can also detect bad weather ahead and provide the data to the operator, who, together with air traffic controllers, can make changes to avoid storms, in some cases by flying well above them.
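The escalation logic described here (alert the operator, wait for a decision, and act autonomously if none comes) can be sketched as follows; the deadline and the callback interfaces are assumptions made only for illustration.

```python
# Hedged sketch of the alert-then-act-autonomously flow described in the text.
import time

def handle_conflict(send_to_operator, operator_reply, avoid_autonomously,
                    deadline_s=20.0):
    """Ask the operator for a decision; fall back to onboard avoidance on timeout."""
    send_to_operator("Predicted traffic conflict; request revised routing.")
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:
        decision = operator_reply()       # returns a new route, or None if no reply yet
        if decision is not None:
            return decision               # operator changed the flight path
        time.sleep(0.5)
    return avoid_autonomously()           # no response in time: aircraft acts on its own

# Demo with stand-in callables: the operator never replies, so after the deadline
# the aircraft's own avoidance routine is invoked.
new_route = handle_conflict(
    send_to_operator=print,
    operator_reply=lambda: None,
    avoid_autonomously=lambda: "CLIMB_500_FT_AND_OFFSET_RIGHT",
    deadline_s=2.0,
)
print(new_route)
```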

What’s more, the Rhaegal is semiautonomous, meaning that it can complete its mission even if it loses communication with the operator and with air traffic control in general. It simply follows a preplanned flight route, detecting and avoiding traffic on the way and then landing at a remote location.

The Rhaegal’s all-composite airframe is built in sections that can be quickly and easily repaired or even replaced in the field, with a minimum of hand tools. This modular design means that inspections that used to ground aircraft for weeks or even months can now be accomplished in hours. The Rhaegal is well suited for military applications: It can fly high and fast enough to avoid ground fire or fly low to avoid radar, enabling it to bring vital supplies to isolated units. It’s even versatile enough to whisk four casualties and two medics to a mobile hospital within the “golden hour” after an injury occurs, greatly increasing the patients’ chances of survival. In addition, the Rhaegal has a proprietary system that allows it to land safely if its propulsion system is damaged: It can glide to a safe landing spot or, if it is hovering, land even after losing the thrust of an entire duct unit.

The Rhaegal gets its power from a turboshaft engine, which is basically a gas turbine designed to turn a shaft rather than generate thrust, as a jet engine would. This engine drives a generator that sends power to electric motors, which turn rotor blades. These are like propellers, but they are shrouded to provide more thrust than an open rotor would and to protect both people on the ground and the blades themselves when landing near bushes or trees. The point of this turboelectric drivetrain is to provide high efficiency in cruise flight and high power during takeoff and landing. That heightened efficiency allows the Rhaegal to emit an estimated 70 percent less carbon than the Cessna 408 SkyCourier while carrying twice the load four times farther. And because its power comes from a gas turbine, it can be made “greener” still by burning biofuel.
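Taking the article’s comparison at face value, a quick calculation shows what those three ratios imply for emissions per tonne-kilometer; this is illustrative arithmetic, not a measured figure.

```python
# Combining the article's three stated ratios, relative to the Cessna 408 SkyCourier:
# 70 percent less total carbon, twice the payload, four times the range.
emissions_ratio = 0.30     # 70 percent less carbon per flight
payload_ratio = 2.0        # twice the load
range_ratio = 4.0          # four times farther

relative_carbon_per_tonne_km = emissions_ratio / (payload_ratio * range_ratio)
print(f"Carbon per tonne-km: about {relative_carbon_per_tonne_km:.3f}x")  # 0.038x
```

Under those assumptions, each tonne of cargo moved one kilometer would account for roughly 4 percent of the carbon a SkyCourier would emit doing the same work.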

The airframe of the first Rhaegal was completed in March 2020, and we expect flight testing will have begun by the time this article appears. Sabrewing has been in discussions with the FAA since 2017, and permission to start type certification—which assures the safety of a new type of aircraft—could come shortly.

Certification is no small administrative matter: Type certification for even a small private airplane can easily cost US $50 million to $100 million, but a cargo UAV should cost just a fraction as much to certify. And the Rhaegal stands first in line for such certification, ahead of all other electric cargo carriers using vertical takeoff and landing.

So don’t be surprised if you look up sometime soon and see a Rhaegal cruising above your head. And some December in the not-too-distant future it might be playing Santa Claus to kids in both remote villages and large industrial centers throughout the world.

This article appears in the June 2020 print issue as “Can Cargo Drones Solve Air Freight’s Logjams.”

About the Author

Ed De Reyes, a retired Air Force test pilot, is CEO of Sabrewing Aircraft Co., in Camarillo, Calif.

What It’s Like to Sweat the Launch of a New Spaceship

Post Syndicated from James Oberg original https://spectrum.ieee.org/tech-talk/aerospace/space-flight/launch-new-spaceship-spacex-nasa-news-history

Much has been made of the long gap—nine years—since the last human space launch from U.S. soil. Soon, astronauts will fly again from Cape Canaveral. But there’s an even longer gap that hasn’t been mentioned, even though it’s probably much more significant for the success of today’s SpaceX launch.

It’s been almost forty years since the last time Americans flew on a completely new spacecraft. That was on 12 April 1981 with the space shuttle Columbia. Special preparation is always needed for this kind of first, as I remember well. I was in mission control with the ascent team, looking after the launch.