All posts by Michael Dumiak

Ultra-quick Cameras Reveal How to Catch a Light Pulse Mid-Flight

Post Syndicated from Michael Dumiak original https://spectrum.ieee.org/tech-talk/sensors/imagers/super-fast-cameras-capture-light-pulse-midflight

The footage is something between the ghostlike and commonplace: a tiny laser pulse tracing through a box pattern, as if moving through a neon storefront sign. But this pulse is more than meets the eye, more than a pattern for the eye to follow, and there’s no good metaphor for it. It is quite simply and starkly a complete burst of light—a pulse train with both a front and a back—captured in a still image, mid-flight.

Electrical engineers and optics experts at the Advanced Quantum Architecture (AQUA) Laboratory in Neuchâtel, Switzerland, made this footage last year, mid-pandemic, using a single-photon avalanche diode camera, or SPAD. Its solid-state photodetectors are capable of very high-precision measurements of time and therefore distance, even as its single pixels are struck by individual light particles, or photons.

Edoardo Charbon and his colleagues at the Swiss Ecole Polytechnique Fédérale de Lausanne, working with camera maker Canon, were able in late 2019 to develop a SPAD array at megapixel size in a small camera they called Mega X. It can resolve one million pixels and process this information very quickly. 

The Charbon-led AQUA group—at that point comprising nanotechnology prizewinner Kazuhiro Morimoto, Ming-Lo Wu, and Andrei Ardelean—synchronized Mega X to a femtosecond laser. They fired the laser through an aerosol of water vapor, made using dry ice procured at a party shop. 

The photons hit the water droplets, and some of them scatter toward the camera. With its megapixel sensor, the camera captures 24,000 frames per second with exposure times as short as 3.8 nanoseconds.
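Those two figures set the scale of what each frame can freeze. A quick back-of-the-envelope calculation, using only the numbers quoted above, shows how far light travels during one exposure and how much time separates successive frames:

```python
# Distance light travels during one exposure, and the interval between frames.
C = 299_792_458          # speed of light in vacuum, m/s

exposure_s = 3.8e-9      # 3.8-nanosecond exposure (from the article)
fps = 24_000             # frames per second (from the article)

pulse_slice_m = C * exposure_s   # how much of the pulse's path one frame freezes
frame_interval_s = 1 / fps       # time between successive frames

print(f"Light travels {pulse_slice_m:.2f} m during one exposure")
print(f"Frames are {frame_interval_s * 1e6:.1f} microseconds apart")
```

So each frame captures roughly a meter of the pulse's flight, which is why a complete burst of light, front and back, fits in the field of view.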

The sort of photodetectors used in Mega X have been in development for several decades. Indeed, SPAD imagers can be found in smartphone cameras, industrial robots, and lab spectroscopy. The kind of footage caught in the Neuchâtel lab is called light-in-flight. MIT’s Ramesh Raskar in 2010 and 2011 used a streak camera—a kind of camera used in chemistry applications—to produce 2D images and build films of light in flight, in what he called femtophotography.

The development of light-in-flight imaging goes back at least to the late 1970s, when Nils Abramson at Stockholm’s Royal Institute of Technology used holography to record light wavefronts. Genevieve Gariepy, Daniele Faccio and other researchers then in Edinburgh in 2015 used a SPAD to show footage of light in flight. The first light-in-flight images took many hours to construct, but the Mega X camera can do it in a few minutes. 

Martin Zurek, a freelance photographer and consultant in Bavaria (he does 3D laser scanning and drone measurements for architectural and landscape projects) recalls many long hours working on femtosecond spectroscopy for his physics doctorate in Munich in the late 1990s. When Zurek watches the AQUA light footage, he’s impressed by the resolution in location and time, which opens up dimensions for the image. “The most interesting thing is how short you can make light pulses,” he says. “You’re observing something fundamental, a fundamental physical property or process. That should astound us every day.”

A potential use for megapixel SPAD cameras would be for faster and more detailed light-ranging 3D scans of landscapes and large objects in mapping, facsimile production, and image recording. For example, megapixel SPAD technology could greatly enhance LiDAR imaging and modeling of the kind done by the Factum Foundation in their studies of the tomb of the Egyptian Pharaoh Seti I.

Faccio, now at the University of Glasgow, is using SPAD imaging to obtain better time-resolved fluorescence microscopy images of cell metabolism. As with Raskar’s MIT group, Faccio hopes to apply the technology to human body imaging.

The AQUA researchers were able to observe an astrophysical effect in their footage called superluminal motion, an illusion akin to the Doppler effect. Only in this case light appears to the observer to speed up—which it can’t really do, already traveling as it does at the speed of light.

Charbon’s thoughts are more earthbound. “This is just like a conventional camera, except that in every single pixel you can see every single photon,” he says. “That blew me away. It’s why I got into this research in the first place.”

Chaos Engineering Saved Your Netflix

Post Syndicated from Michael Dumiak original https://spectrum.ieee.org/telecom/internet/chaos-engineering-saved-your-netflix

To hear Greg Orzell tell it, the original Chaos Monkey tool was simple: It randomly picked a virtual machine hosted somewhere on Netflix’s cloud and sent it a “Terminate” command. Unplugged it. Then the Netflix team would have to figure out what to do.

That was a decade ago now, when Netflix moved its systems to the cloud and subsequently navigated itself around a major U.S. East Coast service outage caused by its new partner, Amazon Web Services (AWS).

Orzell is currently a principal software engineer at GitHub and lives in Mainz, Germany. As he recently recalled the early days of Chaos Monkey, Germany got ready for another long round of COVID-related pandemic lockdowns and deathly fear. Chaos itself raced outside.

But while the coronavirus wrenched daily life upside-down and inside out, a practice called chaos engineering, applied in computer networks, might have helped many parts of the networked world limp through their coronavirus-compromised circumstances.

Chaos engineering is a kind of high-octane active analysis, stress testing taken to extremes. It is an emerging approach to evaluating distributed networks, running experiments against a system while it’s in active use. Companies do this to build confidence in their operation’s ability to withstand turbulent conditions.

Orzell and his Netflix colleagues built Chaos Monkey as a Java-based tool using the AWS software development kit. The tool acted almost like a random number generator. But when Chaos Monkey told a virtual machine to terminate, it was no simulation. The team wanted systems that could tolerate host servers and pieces of application services going down. “It was a lot easier to make that real by saying, ‘No, no, no, it’s going to happen,’ ” Orzell says. “We promise you it will happen twice in the next month, because we are going to make it happen.”
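The mechanism Orzell describes is simple enough to sketch. Below is a minimal, hypothetical Python rendition of the idea: pick one instance at random from a group and really terminate it. The terminate call is passed in as a function, standing in for whatever cloud SDK a team actually uses (Netflix's original was Java against AWS); none of these names come from the Chaos Monkey codebase.

```python
import random

def pick_victim(instance_ids, rng=random):
    """Choose one running instance at random, Chaos Monkey style."""
    if not instance_ids:
        return None
    return rng.choice(instance_ids)

def unleash(instance_ids, terminate, rng=random):
    """Pick a victim and really terminate it -- no simulation."""
    victim = pick_victim(instance_ids, rng)
    if victim is not None:
        terminate(victim)  # in practice: a cloud SDK call that kills the VM
    return victim

# Stubbed example run: the "termination" just records which instance was chosen.
killed = []
victim = unleash(["i-alpha", "i-beta", "i-gamma"], terminate=killed.append)
print(victim)
```

Keeping the termination call injectable is also how a team can dry-run the tool before letting it loose on production machines.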

In controlled and small but still significant ways, chaos engineers see if systems work by breaking them—on purpose, on a regular basis. Then they try to learn from it. The results show if a system works as expected, but they also build awareness that even in an engineering organization, things fail. All the time.

As practiced today, chaos engineering is more refined and ambitious still. Subsequent tools could intentionally slow things down to a crawl, send network traffic into black holes, and turn off network ports. (One related app called Chaos Kong could scale back company servers inside an entire geographic region. The system would then need to be resilient enough to compensate.) Concurrently, engineers also developed guardrails and safety practices to contain the blast radius. And the discipline took root.

At Netflix, chaos engineering has evolved into a platform called the Chaos Automated Platform, or ChAP, which is used to run specialized experiments. (See “Spawning Chaos,” above.) Nora Jones, a software engineer, founder and chief executive of a startup called Jeli, says teams need to understand when and where to experiment. She helped implement ChAP while still at Netflix. “Creating chaos in a random part of the system is not going to be that useful for you,” she says. “There needs to be some sort of reasoning behind it.”

Of course, the novel coronavirus has added entirely new kinds of chaos to network traffic. Traffic fluctuations during the pandemic did not all go in one direction either, says AWS principal solutions architect Constantin Gonzalez. Travel services like the German charter giant Touristik Union International (TUI), for instance, drastically pulled in their sails as traffic ground to a halt. But the point in building resilient networks is to make them elastic, he says.

Chaos engineering is geared for this. As an engineering mind-set, it alludes to Murphy’s Law, an adage born of postwar aerospace testing: If something can go wrong, it will go wrong.

It’s tough to say that the practice kept the groaning networks up and running during the pandemic. There are a million variables. But for those using chaos-engineering techniques—even for as far-flung and traditional a business as DBS Bank, a consumer and investment institution in Singapore with US $437 billion in assets—they helped. DBS is three years into a network resiliency program, site-reliability engineer Harpreet Singh says, and even as the program got off the ground in early 2018 the team was experimenting with chaos-engineering tools.

And chaos seems to be catching. Jones’s Jeli startup delivers a strategic view on what she calls catalyst events (events that might be simulated or sparked by chaos engineering), which show the difference between how an organization thinks it works and how it actually works. Gremlin, a four-year-old San Jose venture, offers chaos-engineering tools as service products. In January, the company also issued its first “State of Chaos Engineering” report for 2021. In a blog post announcing the publication, Gremlin vice president of marketing Aileen Horgan described chaos-engineering conferences these days as topping 3,500 registrants. Gremlin’s user base alone, she noted, has conducted nearly 500,000 chaos-engineering system attacks to date.

Gonzalez says AWS has been using chaos-engineering practices for a long time. This year—as the networked world, hopefully, recovers from stress that tested it like never before—AWS is launching a fault-injection service for its cloud customers to use to run their own experiments.

Who knows how they’ll be needed.

Tiny Ultralight Insulators, At the Push of a Button

Post Syndicated from Michael Dumiak original https://spectrum.ieee.org/tech-talk/geek-life/tools-toys/tiny-ultralight-spaceage-insulators-at-the-push-of-a-button

Using a 3D printer to make tiny translucent lotuses with millimeter-size petals: It’s eye-catching, but then again, printers are pretty good these days.

But to do it—the “it” being the process of building objects using a solid that becomes liquid when squeezed, flows through a needle, and reverts to a solid as it dries—with aerogel, a material that’s more than 99 percent air, is something else. Swiss researchers recently announced an additive manufacturing breakthrough with the material after three years of working on it. Potential applications include building tiny, thin superinsulators that save space on circuit boards and creating bespoke shapes that would help keep medical implants cool inside the body.

The researchers pulled back the curtain this August on a set of intricate demonstrator aerogel grids, cubes, and lotuses made using 3D microprinting techniques delivering structures as thin as one-tenth of a millimeter.

Aerogels are nanoporous solids; NASA sometimes calls them liquid or frozen smoke (the space agency has made extensive use of aerogels, including on a mission that used them to capture comet and interstellar dust). They can be made from silica, graphene, or any number of other base materials.

Silica aerogels are among the world’s best thermal insulators and a go-to object of Internet attention. Among the most dramatic illustrations of aerogels’ abilities are clips of flowers shielded from blowtorches or people protected from flamethrowers. Different base materials yield different aerogel properties: graphene or gold aerogels yield conductive paper, but make lousy insulators. Silica is what’s called for to ward off heat. But aerogels are as finicky as they are fascinating. They’re difficult to handle and expensive to make in large quantities. Touch them too often and they crumble. They can be molded to more or less small sizes, but heretofore couldn’t be machined into tiny objects.

But teams of researchers—from Empa, the Swiss materials science and technology lab, along with colleagues at ETH Zürich, the federal tech research institute, and at the Paul Scherrer Institute in Aargau—were aware of the potential applications and wanted to see if they could shape aerogels at micro size. These teams have been working with aerogels in different capacities for the past decade, and developed an aerogel-based plaster for insulating restored historic or listed buildings.

“If you can miniaturize aerogel, the cost aspects become much different. With one cubic meter, you could make a million components for a cell phone or something—the materials cost doesn’t matter anymore,” says Wim Malfait, leader of Empa’s superinsulation materials group. “The issue is how to make them. How do you make the components in the shape, size and format that you need them?”

The Empa team’s recipe for 3D printing takes off-the-shelf silica aerogel powder and adds it to a pentanol solvent with a silica precursor to bind the powder together. They first make an ink by using a spatula, then spin it in a food mixer at 3,000 rpm for five minutes. The result is a paste. Putting this paste under pressure—squeezing it—turns it into a liquid slurry that can flow easily through a printing needle. This is called shear thinning. After flowing into the needle, it can be printed out in the desired shape, layer by layer. Once the tiny object has been formed, they use ammonia gas to make the object into a gel. Then, a special drying process removes the solvent. There you have your aerogel lotus with millimeter-size petals. The group recently laid out its results and a few samples in a paper in the journal Nature.

Malfait says the Swiss institutes are now working with commercial industry partners on feasibility studies for the insulators.

Thermal insulation is a complex field, and there are other ways to diffuse or control heat without custom engineering. There are also other aerogel efforts underway, for example in Singapore, where Hai Duong’s group is using waste products, including cotton waste and pineapple leaf, in aerogel-making. Aerogels’ 90-year history is full of false starts down the road toward hoped-for applications, but engineers keep at it: Their properties look too good for them to remain a novelty.

Germany’s BASF and Massachusetts-based partner Aspen Aerogels are building a line of aerogel-based insulator mats. And the doors are open for tiny, custom-made, inexpensive insulators—if industry wants them. Richard Collins, a tech analyst at IDTechEx, offers the reminder that real commercial success requires market pull, rather than material push.

Tiny, heat-resistant lotus petals, though—they’ll get people looking.

Say Kaixo!

Post Syndicated from Michael Dumiak original https://spectrum.ieee.org/tech-talk/computing/software/say-kaixo

Organizers of a European Union–supported software sharing platform for language technologies are planting seeds for applications that could debut on it with some eye-catching results: We might see the sprouting of a Basque-speaking, Alexa-style home language assistant, for instance.

A first-release version of the platform, called the European Language Grid, is already being used to distribute and gain visibility for language usage and translation tools from some of the hundreds of European firms trading in language technology. Many of the tools offer the ability to communicate among speakers of complex languages (Irish Gaelic, Maltese, and Latvian, to name a few) that are spoken by relatively few people.

If it seems global technology giants such as Google or Amazon could deliver these kinds of tools, maybe that’s right. But they may not dedicate the time and ensure the polish that a dedicated niche developer might. Besides, supporters of the initiative say, Europe should take care of its own digital infrastructure. Getting linguistic architectures to work easily and freely is a key interest on a continent that is trying to hold together a strained economic and social union straddling dozens of mother tongues.

The Language Grid is meant to create a broad marketplace for language technology in Europe, says Georg Rehm, a principal researcher at the German Research Center for Artificial Intelligence (DFKI).

The Grid is a scalable web platform that allows access to data sets and tools that are docked behind the platform’s interface. The base infrastructure runs on a Kubernetes cluster—a set of node machines that run containerized applications built by service providers. It’s all hosted by the cloud provider SysEleven in Berlin. Users can access data and tools in the Docker containers without needing to install anything locally. Grid organizers recently picked 10 early-stage projects that can be supported by the platform, boosting them with small research grants. Another open call for projects is running through October and November. Results are likely in early January 2021.
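In practice, calling a Grid-hosted tool amounts to an HTTP request against the platform's interface, with the containerized service doing the work remotely. The sketch below is hypothetical: the base URL, path, and JSON schema are placeholders for illustration, not the Grid's documented API.

```python
import json
from urllib import request

GRID_BASE = "https://grid.example.eu/api/v1"  # placeholder, not the real endpoint

def build_service_call(service_id, text):
    """Assemble a request for a containerized Grid service (illustrative schema)."""
    payload = json.dumps({"content": text}).encode("utf-8")
    return request.Request(
        f"{GRID_BASE}/services/{service_id}/process",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = build_service_call("latvian-tts", "Labdien!")
print(req.full_url)  # the tool runs in its container; nothing is installed locally
```

Sending the request with `urllib.request.urlopen(req)` would then return the service's output, which is the whole point of the docked-container design: the user's machine never needs the tool itself.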

“Our technologies and services will be more visible to a broader market we would otherwise not be able to reach,” says Igor Leturia Azkarate, speech technologies manager at Elhuyar Fundazoia, a non-governmental organization promoting the everyday use of Basque, especially in science and technology. “We hope it will help other speakers of minority languages be aware of the possibilities, and that they will take advantage of our work.”

Azkarate and his colleagues are adapting Basque language text-to-speech and speech recognition tools to work within Mycroft AI, a Python-based open-source software voice assistant. The goal is to make a home assistant speaker, an Alexa-like device, that operates natively in Basque. Right now, the big home assistants operate in the world’s dozen or so most widely spoken languages. But rather than obliging users to go into Spanish or English—or wait for an as-yet-undeveloped Basque front-end facsimile or halfway solution that might still leave a user with a Julio Iglesias playlist on their hands rather than some Iñigo Muguruza—Azkarate’s after something better. Once the Elhuyar team adapts its Basque tools, they’ll be accessible on the Language Grid for others to use or experiment with.

Another early-stage project is coming from Jörg Tiedemann at the University of Helsinki, who is working with colleagues to develop open translation models for the Grid. These models use deep neural networks—layered software architectures that implement complex mathematical functions—to map text into numeric representations. Using data sets to train the models to find the best ways to solve problems takes a lot of computing power and is expensive. Making the models available for re-use will help developers build tools for low-density languages. “Minority languages get too little attention because they are not commercially interesting,” Tiedemann says. “This gap needs to be closed.”

Andrejs Vasiļjevs, chief executive of the language technology company Tilde, got his start because of a scarcity of digital tools in his native Latvia. In the late 1980s, he was studying computer science in Riga; in those days Latvia was part of the Soviet Union, with personal computing a limited realm. As the Union collapsed, PCs came in and people wanted to use them to start independent newspapers and magazines. But because there were no Latvian keyboards nor any Latvian fonts, it was not possible to write in Latvian. Vasiļjevs got to work on the problem and started Tilde in 1991 with a business partner, Uldis Dzenis.

Three decades later, Tilde is still making tools to spur communication—but now in machine translation, speech synthesis, and speech recognition. A Tilde translation engine is currently running underneath Germany’s EU presidency website; it provides on-the-fly translations of source documents from German, French, and English originals into all of the other 21 official EU languages. The Riga-based developer already has several datasets and models on the Language Grid for potential clients to test out, including a machine translation engine between English and Bulgarian, and a text-to-speech model for Latvian in a child’s voice. “We want to integrate our key services into the European Language Grid,” Vasiļjevs says. “It makes for more exposure to the market.”

Minsk’s Teetering Tech Scene

Post Syndicated from Michael Dumiak original https://spectrum.ieee.org/tech-talk/computing/it/minsks-teetering-tech-scene

Nearly every day, cops in black riot gear move in phalanxes through the streets, swinging batons to clear thousands of chanting protesters, cracking heads, and throwing people to the ground. An often tone-deaf president jails people on political grounds and refuses to leave office in the wake of a disputed election.

It’s a scene from Belarus, a landlocked former Soviet republic where, for the past couple of months, public outcry and the state’s authoritarian response have kept the nation on a razor’s edge.

Also hanging in the balance is the robust Belarusian digital scene, which has flourished over recent years, accounts for perhaps five to six percent of the nation’s economy, and has provided a steady engine for growth. This, in a place which may be better known for mining potash.

Belarus is led by President Alexander Lukashenko, who came to power in 1994. On 9 August, Lukashenko stood for his sixth term in office: This time, as the announced results in his favor topped 80 percent, the vote was widely seen as fixed and people took to the streets of Minsk by the thousands to call for new elections. They met harsh police response.

In the weeks since, the capital’s coders came under increasing pressure, then dialed up pressure of their own. State authorities arrested four employees at sales-process software developer PandaDoc, including the Minsk office director, who after more than a month in jail was released on 11 October. Belarusian mass protesters organized their response using digital tools. Open letters calling for new elections and the release of political prisoners circulated among tech industry executives, with one gaining 2,500 signatures. That missive came with a warning that conditions could get to where the industry would no longer function.

That would be a huge loss. More than 54,000 IT specialists and 1,500 tech companies called Belarus home as of 2019. Product companies span a broad swath of programming: natural language processing and computational linguistics; augmented reality; mobile technologies and consumer apps; and gaming. A Medellín, Colombia–based news service that covers startups even did a roundup of the machine learning and artificial intelligence development going on in Minsk. According to the report, this activity is worth some $3 billion a year in sales and revenue, with Minsk-built mobile apps drawing in more than a billion users.

Belarus’s tech trade has become vital to the structure of the local economy and its future. The sector showed double-digit growth over the past 10 years, says Dimitar Bogov, regional economist for the European Bank for Reconstruction and Development. “After manufacturing and agriculture, ICT is the biggest sector,” he says. “It is the most important. It is the source of growth during the last several years.”

Though it may seem surprising that the marshy Slavic plains of Belarus would bear digital fruit, it makes sense that computing found roots here. During the mid-1950s, the Soviet Union’s council of ministers wanted to ramp up computer production in the country, with Minsk selected as one of the hubs. It would produce as much as 70 percent of all computers built in the Soviet Union.

Lukashenko’s government itself had a hand in spurring digital growth in recent years by opening a High Tech Park—both a large incubator building in Minsk and a federal legal framework in the country—fertilized by tax breaks and a reduction in red tape. The scene hummed along from just after the turn of the century through the aughts: By 2012, IHS Markit, a business intelligence company that uses in-house digital development as part of its secret sauce, could snap up semantic search engine developers working in a Minsk coding factory by the dozen.

Eight years later, that team is still working in Belarus, but no longer in a brick warehouse adorned by a Lenin mural. They are in a glass office pod complex, neighbors to home furnishing corporates and the national franchise operations for McDonald’s. And despite the global economic downturn wrought by COVID-19, the tech sector in Belarus is even showing growth in 2020, Bogov says. “It grew by more than eight percent. This is less than in previous years, but it is still impressive to show growth during the pandemic.”

But a shadow hangs over all that now. Reports by media outlets including the Wall Street Journal, BBC, and Bloomberg have cited the PandaDoc chief executive and other tech sources as saying the whole sector could shut down.

Though—so far—there is no evidence of a mass exodus, there are reports of some techies leaving Belarus. There are protests every week, but people also go back to work, in a tense and somewhat murky standoff.

“I talk a lot to people in Belarusian IT. It looks like everyone is outraged,” says Sergei Guriev, a political economist at Paris’s Sciences Po Institute. “Even people who do not speak out support the opposition quietly with resources and technology.” Yuri Gurski, chief executive of the co-founding firm Palta and VC investor Haxus, announced he would help employees of the companies Haxus invests in—including the makers of image editors Fabby and Fabby Look, and the ovulation tracker app Flo—to temporarily relocate outside of Belarus if they fear for life and health.

But Youheni Preiherman, a Minsk-based analyst and director of the Minsk Dialogue Council on International Relations, hears a lot of uncertainty. “Some people ask their managers to let them go for the time being until the situation calms down a bit,” he says. “Some companies, on the contrary, are now saying no, we want to make sure we stay—we feel some change is in the air and we want to be here.”

Meanwhile, the Digital Transformation Ministry in Ukraine is already looking to snap up talent sitting on the fence. Former Bloomberg Moscow bureau chief James Brook reported on his Ukraine Business News site that in September, Ukraine retained the Belarusian lawyer who developed the Minsk Hi-Tech Park concept to do the same there. The Ukrainians are sweetening the pot by opening a Web portal to help Belarusian IT specialists wanting to make the move.

The standoff in Belarus could move into a deliberative state with brokered talks over a new constitution and eventual exit for Lukashenko, but analysts say it could also be prone to fast and unexpected moves—for good or for ill. The future direction for Belarus is being written by the week. But with AI engineers and augmented reality developers who had been content in Minsk no longer sure whether to stay or go, the outcome will be about more than just who runs the government. And the results will resound for years to come.

Ambitious Data Project Aims to Organize the World’s Geoscientific Records

Post Syndicated from Michael Dumiak original https://spectrum.ieee.org/computing/software/ambitious-data-project-aims-to-organize-the-worlds-geoscientific-records

Geoscience researchers are excited by a new big-data effort to connect millions of hard-won scientific records in databases around the world. When complete, the network will be a virtual portal into the ancient history of the planet.

The project is called Deep-time Digital Earth, and one of its leaders, Nanjing-based paleontologist Fan Junxuan, says it unites hundreds of researchers—geochemists, geologists, mineralogists, paleontologists—in an ambitious plan to link potentially hundreds of databases.

The Chinese government has lined up US $75 million for a planned complex near Shanghai that will house dedicated programming teams and academics supporting the project, and a supercomputer for related research. More support will come from other institutions and companies, with Fan estimating total costs to create the network at about $90 million.

Right now, a handful of independent databases with more than a million records each serve the geosciences. But there are hundreds more out there holding data related to Earth’s history. These smaller collections were built with assorted software and documentation formats. They’re kept on local hard drives or institutional servers, some decades old, and converted from one format into another as time, funding, and interest allow. The data might be in different languages and is often guided by informal or variably defined concepts. There is no standard for arranging the hundreds of tables or thousands of fields. This archipelago of information is potentially very useful but hard to access.

Fan saw an opportunity while building a database comprising the Chinese geological literature. Once it was complete, he and his colleagues were able to use parallel computing programs to examine data on 11,000 marine fossil species in 3,000 geological sections. The results dated patterns of paleobiodiversity—the appearance, flowering, and extinction of whole species—at a temporal resolution of 26,000 years. In geologic time, that’s pretty accurate.

The Deep-time project planners want to build a decentralized system that would bring these large and small data sources together. The main technical challenge is not to aggregate petabytes of data on centralized servers but rather to script strings of code. These strings would work through a programming interface to link individual databases so that any user could extract information through that interface.
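A hedged sketch of that architecture: a thin layer fans a query out to each member database through its own fetch function and merges what comes back, so the records themselves stay on their home servers. The database names and fields below are invented for illustration, not taken from the Deep-time design documents.

```python
def query_network(databases, taxon):
    """Fan a query out to independent databases and merge the records returned.

    `databases` maps a source name to a fetch function; in a real network each
    function would wrap that archive's own programming interface.
    """
    merged = []
    for source, fetch in databases.items():
        for record in fetch(taxon):
            record["source"] = source  # keep provenance with every record
            merged.append(record)
    # Harmonize on one shared field: age in millions of years
    return sorted(merged, key=lambda r: r["age_ma"])

# Two toy stand-ins for independent archives:
dbs = {
    "archive_a": lambda t: [{"taxon": t, "age_ma": 66.0}],
    "archive_b": lambda t: [{"taxon": t, "age_ma": 252.0},
                            {"taxon": t, "age_ma": 201.3}],
}
results = query_network(dbs, "conodont")
print([r["age_ma"] for r in results])  # ages merged and sorted across sources
```

The hard part, as the next paragraph notes, is not this plumbing but agreeing on what the shared fields mean across hundreds of independently built collections.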

Harmonizing these data fields requires human beings to talk to one another. Fan and his colleagues hope to kick off those discussions in New Delhi, which in March is hosting a big gathering of geoscientists. A linked network could be a gold mine for researchers scouring geologic data for clues.

In a 19th-century building behind Berlin’s Museum für Naturkunde, micropaleontology curator David Lazarus and paleobiologist postdoc Johan Renaudie run the group’s Neptune database, which is likely to be linked with Deep-time Digital Earth as it develops. Neptune holds a wealth of data on core samples from the world’s ocean floors. Lazarus started the database in the late 1980s, before the current SQL language standard was readily available—at that time it was mostly found only on mainframes. Renaudie explains that Neptune has been modified from its incarnation as a relational database using 4th Dimension for Mac, and has been carefully patched over the years.

There are many such patched-up archives in the field, and some researchers start, develop, and care for data centers that drift into oblivion when funding runs out. “We call them whale fall,” Lazarus says, referring to dead whales that sink to the ocean floor.

Creating a database network could keep this information alive longer and distribute it further. It could lead to new kinds of queries, says Mike Benton, a vertebrate paleontologist in Bristol, England, making it possible to combine independent data sources with iterative algorithms that run through millions or billions of equations. Doing this can deliver more precise time resolutions, which hitherto have been very difficult to achieve. “If you want to analyze the dynamics of ancient geography and climate and its influence on life, you need a high-resolution geological timeline,” Fan says. “Right now this analysis is not available.”

This article appears in the March 2020 print issue as “Data Project Aims to Organize Scientific Records.”

A Software Update Will Instruct Space Tomatoes to Sprout

Post Syndicated from Michael Dumiak original https://spectrum.ieee.org/tech-talk/aerospace/satellites/a-software-update-from-earth-will-instruct-tomatoes-to-sprout-in-space

It’s hard enough to grow tomatoes from seeds out in a sunny garden patch. To do it in sun-synchronous orbit—that is to say, in outer space—would seem that much harder. But is it? 

That’s what plant biologists and aerospace engineers in Cologne and Bremen, Germany, are set to find out. Researchers are preparing in the next couple of weeks to send a software upload to a satellite orbiting 575 kilometers (357 miles) above the Earth. Onboard the satellite are two small greenhouses, each bearing six tiny tomato seeds and a gardener’s measure of hope. The upload is going to tell these seeds to go ahead and try to sprout.

The experiment aims not only to grow tomatoes in space but to examine the workings of combined biological life-support systems under specific gravitational conditions, namely those on the moon and on Mars. Eu:CROPIS, which is the name of the satellite as well as of the orbital tomato-growing program, is right now spinning at a rate that simulates the gravitational pull felt on the surface of the moon.
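The required spin rate follows from the physics of rotation: centripetal acceleration is a = ω²r, so for lunar surface gravity of about 1.62 m/s² the rate depends only on how far the greenhouses sit from the spin axis. The radius below is an assumed, illustrative value; the satellite's actual geometry isn't given here.

```python
import math

MOON_G = 1.62    # lunar surface gravity, m/s^2
radius_m = 0.5   # assumed greenhouse distance from the spin axis (illustrative)

omega = math.sqrt(MOON_G / radius_m)  # a = omega^2 * r  ->  omega = sqrt(a / r)
rpm = omega * 60 / (2 * math.pi)

print(f"{rpm:.1f} rpm simulates lunar gravity at r = {radius_m} m")
```

Mars gravity (about 3.71 m/s²) would need a faster spin at the same radius, which is how one satellite can serve both test conditions.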

The environment is designed to work as a closed loop: the idea is to employ algae, lava filters, plants, and recycled human urine to create the cycle by which plants absorb nitrates and produce oxygen. Being able to accomplish all these tasks will be crucial to any long-term stay in space, be it on a moon base or a year-long flight to Mars. Any humans along for that kind of ride will be glad to get away from tinned applesauce and surely welcome fresh greens or, say, a tomato.