All posts by Jeff Hecht

New Hollow-core Optical Fiber Is Clearer Than Glass

Post Syndicated from Jeff Hecht original https://spectrum.ieee.org/tech-talk/semiconductors/optoelectronics/new-hollow-core-optical-fiber-is-clearer-than-glass

Conventional fiber optics guide light through solid glass cores that are extremely transparent. The clearest fibers have loss as low as 0.142 decibel per kilometer, which means that more than one percent of input light remains after a hundred kilometers. Yet solid glass fibers cannot carry very high powers, particularly in short pulses, limiting their use for applications such as delivering intense light for laser machining.
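That "more than one percent" claim follows directly from the definition of the decibel; a quick sketch of the arithmetic (illustrative, not from the paper):

```python
def transmitted_fraction(loss_db_per_km: float, length_km: float) -> float:
    """Fraction of input power remaining after the given fiber length."""
    return 10 ** (-(loss_db_per_km * length_km) / 10)

# Record-class solid-core fiber: 0.142 dB/km over 100 km
print(f"{transmitted_fraction(0.142, 100):.1%}")  # about 3.8% survives
```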

Now a team at the University of Southampton’s Optoelectronics Research Center has made hollow-core fibers that are clearer than solid-core fibers at some important wavelengths. They report their results in Nature Communications.

Interest in hollow-core fibers initially took off in 1998 after Philip St. John Russell, then at the University of Bath in the UK, showed that microstructured optical fibers could guide light within a hollow core by blocking transmission through layers around the core at some wavelengths. However, the air-filled microstructures in the fiber were very complex, and these “photonic bandgap” fibers transmitted too limited a range of wavelengths to be practical.

In recent years, Francesco Poletti at Southampton has pioneered a new range of experimental fibers that depart from Russell’s basic idea by limiting the microstructure only to the hollow core of the fiber.  Poletti’s 2014 design spaced several pairs of nested tubes, like small straws placed inside larger ones, around the outside of the central core.   

“Lumenisity Ltd., a startup from our group, has already installed [fibers with the new design] in optical cables running live traffic,” Poletti said in an email. Light travels 50% faster in air than in glass, so the initial market for hollow-core fibers is short data links requiring extreme speed. He cites financial transfers, 5G, and links inside and between data centers. It’s a potential boon for high-frequency securities traders. However, the new fiber has yet to match the transmission of solid-core fibers in the telecommunications band at 1550 nm.
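The latency advantage is easy to estimate from the group index of silica (about 1.47) versus air (about 1.0003); both index values below are illustrative assumptions, not figures from the article:

```python
C_KM_PER_S = 299_792.458  # speed of light in vacuum

def one_way_latency_ms(length_km: float, group_index: float) -> float:
    """Propagation delay through a medium with the given group index."""
    return length_km * group_index / C_KM_PER_S * 1000

glass = one_way_latency_ms(100, 1.47)    # assumed index of silica fiber
air = one_way_latency_ms(100, 1.0003)    # assumed index of air
print(round(glass, 3), round(air, 3))    # ~0.490 ms vs ~0.334 ms per 100 km
```

The ratio of the two indices is roughly 1.47, which is where the "50% faster" figure comes from.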

Now Poletti’s group has developed new hollow-core fibers that match or even beat the transparency of solid-core fibers at key wavelengths outside the telecommunications band, where glass is most transparent. They tailored fibers for 660- and 850-nm photons (these bands in the red and near infrared are widely used in biology and quantum networks) as well as for 1064 nm (another near-infrared band that’s popular for industrial laser machining).

He says the new technology could offer advantages over conventional solid-core fiber for high-precision sensors, laser beam delivery, and time-frequency measurement. A handful of vendors already sell older types of hollow-core fibers that deliver rapid series of ultrashort laser pulses to cut glass for smartphones. “There is no alternative to hollow-core fibers because solid-core fibers cannot compete,” Poletti says. The intense light pulses have to be kept out of the glass to avoid damaging it.

So far, glass cutting is a small share of the overall market. But he says his group’s new hollow-core fibers can transport “kilowatt scale continuous powers in a fundamental mode over tens or hundreds of meters, and could soon challenge solid-core delivery fibers in the much bigger market of [industrial machining with] high average power lasers.” Analysts put sales well into the billions.

The new hollow-core fibers have other advantages besides damage resistance and faster light speed. Keeping light out of the glass avoids nonlinear effects that can distort signals, and it can improve the quality of the transmitted laser beam.

Fundamental limits of the new fibers are unknown. Light scattering dominates the loss of solid-core fibers at wavelengths shorter than the telecommunications band. “Up to a few months ago, we thought that scattering of light at the air-glass interfaces [in hollow-core fibers] would eventually dominate loss at those shorter wavelengths,” Poletti says. But the loss keeps dropping as they improve fiber fabrication, and he now speculates that in a few years the new hollow fibers might be ten times clearer in that band than the fundamental limit of solid-core fibers.

Optical Conversion Tech Images Infrared “In Color”

Post Syndicated from Jeff Hecht original https://spectrum.ieee.org/tech-talk/semiconductors/optoelectronics/turning-infrared-into-new-colors

Mid-infrared (mid-IR) light is rich with information. To cameras sensitive to this portion of the spectrum, cancer cells are sometimes easily spotted. The earth under our feet glows like a campfire. Many “invisible” gases like carbon dioxide, methane, and propane are also very much visible in mid-IR.

This new realm of “colors” not available to the human eye may ultimately be coming to your smartphone camera, too. That’s if the research performed by Michael Mrejen, a research scientist at the Femto-Nano Laboratory at Tel Aviv University, can be scaled up to commercial-grade consumer technology.

He says the device they’ve begun to develop could “bring ‘color’ mid-IR imaging to the masses and [offer] a window to a whole new wealth of data not available so far to the public.”

They can do this, he says, by “leveraging the widespread availability of cheap, high resolution, fast and efficient silicon-based [visible light] color sensors.” 

Today, conventional mid-IR sensors are expensive, insensitive, operate below room temperature and lack the high resolution of mass-produced silicon camera chips. 

Yet Mrejen and colleagues are pioneering a new technique that shifts mid-IR light to visible wavelengths that mass-produced silicon camera chips can detect. This way smartphones and other portable cameras might be able to include mid-IR light in their images.

Wavelength or frequency shifting is an old trick in the radio spectrum, where signals have been shifted between frequency bands for more than a century. Light wavelengths were first shifted soon after the invention of the laser.

Today, the light from green laser pointers is produced by doubling the output frequency of invisible infrared light from tiny lasers. Yet shifting light to non-harmonic frequencies is more difficult, and shifting multiple wavelengths to higher frequencies—or equivalently, to shorter wavelengths—is even more difficult.
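The pointer example is simple second-harmonic generation: since wavelength is inversely proportional to frequency, doubling the frequency halves the vacuum wavelength. A one-line sketch:

```python
def second_harmonic_nm(pump_nm: float) -> float:
    """Vacuum wavelength after frequency doubling (second-harmonic generation)."""
    return pump_nm / 2

print(second_harmonic_nm(1064))  # 532.0 nm: the green of laser pointers
```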

The problem is that different wavelengths travel at different speeds through nonlinear materials, causing light waves with different frequencies to drift out of phase with each other, so little of the light is converted to the visible.

Until now, shifting multiple wavelengths has meant converting them one wavelength at a time. But that requires very sophisticated cameras too expensive for most users.

Seeking a better way to shift the mid-IR photons to a wavelength that silicon can see, Mrejen’s group relied on a nonlinear process his coauthor Haim Suchowski had previously developed that slowly changes the infrared waves into visible photons. This so-called adiabatic technique of frequency conversion, described in a recent issue of Laser and Photonics Reviews, compensates for the phase mismatch problem noted above. 

Suchowski says he hopes mass-produced special crystals and pump lasers (altogether costing a few hundred dollars) that use this adiabatic frequency conversion trick could be mounted on smartphones.

“We humans see between red and blue,” Suchowski said in a press release. “If we could see in the infrared realm, we would see that elements like hydrogen, carbon and sodium have a unique color. An environmental monitoring  satellite that would take a picture in this region would see a pollutant being now emitted from a plant, or a spy satellite would see where explosives or uranium are being hidden. In addition, since every object emits heat in the infrared, all this information can be seen even at night.”

Liquid Lasers Challenge Fiber Lasers As The Basis of Future High-Energy Weapons

Post Syndicated from Jeff Hecht original https://spectrum.ieee.org/tech-talk/aerospace/military/fiber-lasers-face-a-challenger-in-laser-weapons

Despite a lot of progress in recent years, practical laser weapons that can shoot down planes or missiles are still a ways off. But a new liquid laser may be bringing that day closer. 

Much of the effort in recent years has focused on high-power fiber lasers. These lasers use specially doped coils of optical fiber to amplify a laser beam, and were originally developed for industrial cutting and welding. Initially, fiber lasers were dark horses in the Pentagon’s effort to develop electrically powered solid-state laser weapons that began two decades ago. However, by 2013 the Navy was testing a 30-kilowatt fiber laser on a ship. Since then, their ability to deliver high-energy beams of excellent optical quality has earned fiber lasers the leading role in the current field trials of laser weapons in the 50- to 100-kilowatt class. But now aerospace giant Boeing has teamed with General Atomics—a defense contractor also known for research in nuclear fusion—to challenge fiber lasers in achieving the 250-kilowatt threshold that some believe will be essential for future generations of laser weapons. Higher laser powers would be needed for nuclear missile defense.

The challenging technology was developed to address crucial issues with high-energy solid-state lasers: size, weight, and power, plus the problem of dissipating waste heat that could disrupt laser operation and beam quality. General Atomics “had a couple of completely new ideas, including a liquid laser. They were considered completely crazy at the time, but DARPA funded us,” said company vice president Mike Perry in a 2016 interview. Liquid lasers are similar to solid-state lasers, but they use a cooling liquid that flows through channels integrated into the solid-state laser material. A crucial trick was ensuring that the cooling liquid has a refractive index exactly the same as that of the solid laser material. A perfect match of the liquid and solid could avoid any refraction or reflection at the boundary between them. Avoiding reflection or refraction in the cooling liquid also required making the fluid flow smoothly through the channels to prevent turbulence.

The system promised to be both compact and comparatively lightweight, just what DARPA wanted to fit a 150-kW laser into a fighter jet. The goal was a device that weighed only 750 kilograms, or just 5 kg/kW of output. The project went through multiple stages of development and testing that lasted a decade. In 2015, General Atomics delivered the HELLADS, the High Energy Liquid Laser Area Defense System, rated as 150-kW class, to the White Sands Missile Range in New Mexico for live-fire tests against military targets. A press release issued at the time boasted the laser held “the world’s record for the highest laser output power of any electrically powered laser.” At the time, General Atomics described it as a modular laser weapon weighing in at four kilograms per kilowatt.

Development has continued since then. A spokesperson says General Atomics is now on the seventh generation of its “distributed gain laser heads,” modules that can be combined to generate over 250 kW of laser output from a very compact package. Improvements over the past two years have enhanced beam quality and the ability to emit high-energy beams either continuously or in a series of pulses, giving more flexibility in attacking targets.

Sustaining good beam quality at that power level is important. Mike Griffin, former undersecretary of defense for research and engineering, told Congress that current fiber laser technology could be scaled to 300 kilowatts to protect Air Force tankers. However, that may be pushing the upper limits of how many beams from separate fiber lasers emitting at closely spaced wavelengths can be combined coherently to generate a single high-energy laser beam of high quality.

The agreement calls for General Atomics to supply the laser along with integrated thermal management equipment and a high-density, modular, high-power lithium-ion battery system able to store three megajoules of energy. Boeing will supply a beam director and the precision acquisition and pointing software it developed and supplied for other experimental laser-weapon testbeds, including the Air Force’s megawatt-class Airborne Laser, the last big chemically powered gas laser, scrapped in 2014. “Together, we’re leveraging six decades of directed energy experience and proven, deployed technologies,” said Norm Tew, Boeing Missile and Weapon Systems vice president and general manager, and Huntsville site senior executive, in a statement.

100 Million Zoom Sessions Over a Single Optical Fiber

Post Syndicated from Jeff Hecht original https://spectrum.ieee.org/tech-talk/telecom/internet/single-optical-fibers-100-million-zoom

Journal Watch report logo, link to report landing page

A team at the Optical Networks Group at University College London has sent 178 terabits per second through a commercial singlemode optical fiber that has been on the market since 2007. It’s a record for the standard singlemode fiber widely used in today’s networks, and twice the data rate of any system now in use. The key to their success was transmitting across a spectral range of 16.8 terahertz, more than double the broadest range in commercial use.
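Those two numbers imply an aggregate spectral efficiency above 10 bits per second per hertz; a back-of-envelope check (our arithmetic, not the paper's):

```python
rate_tbps = 178.0       # reported throughput
bandwidth_thz = 16.8    # spectral range used
spectral_efficiency = rate_tbps / bandwidth_thz  # (Tb/s)/THz = bit/s/Hz
print(f"{spectral_efficiency:.1f} bit/s/Hz")  # ~10.6
```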

The goal is to expand the capacity of today’s installed fiber network to serve the relentless demand for more bandwidth for Zoom meetings, streaming video, and cloud computing. Digging holes in the ground to lay new fiber-optic cables can run over $500,000 a kilometer in metropolitan areas, so upgrading transmission of fibers already in the ground by installing new optical transmitters, amplifiers, and receivers could save serious money. But it will require a new generation of optoelectronic technology.

A new generation of fibers has been in development for the past few years that promises higher capacity by carrying signals on multiple paths through single fibers. Called spatial-division multiplexing, the idea has been demonstrated in fibers with multiple cores, with multiple modes through individual cores, or with combinations of the two. It has demonstrated record capacity for single fibers, but the technology is immature and would require the expensive laying of new fibers. Boosting capacity of fibers already in the ground would be faster and cheaper. Moreover, many installed fibers remain dark, carrying no traffic, or transmit on only a few of the roughly 100 available wavelengths, making them a hot commodity for data networks.

“The fundamental issue is how much bandwidth we can get” through installed fibers, says Lidia Galdino, a University College lecturer who leads a team including engineers from equipment maker Xtera and Japanese telecom firm KDDI. For a baseline they tested Corning Inc.’s SMF-28 ULL (ultra-low-loss) fiber, which has been on the market since 2007. With a pure silica core, its attenuation is specified at no more than 0.17 dB/km at the 1550-nanometer minimum-loss wavelength, close to the theoretical limit. It can carry 100-gigabit-per-second signals more than a thousand kilometers through a series of amplifiers spaced every 125 km.
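The amplifier spacing follows from the attenuation figure: each 125-km span soaks up roughly 21 dB, so less than one percent of the launched power reaches the next amplifier. A sketch of that arithmetic (illustrative; real spans also budget for splices and margins):

```python
def span_loss_db(atten_db_per_km: float, span_km: float) -> float:
    """Total attenuation accumulated over one amplifier span."""
    return atten_db_per_km * span_km

loss = span_loss_db(0.17, 125)             # dB lost per span
surviving = 10 ** (-loss / 10)             # power fraction at the amplifier
print(round(loss, 2), f"{surviving:.2%}")  # 21.25 dB, about 0.75%
```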

Generally, such long-haul fiber systems operate in the C band of wavelengths from 1530 to 1565 nm. A few also operate in the L band from 1568 to 1605 nm, most notably the world’s highest-capacity submarine cable, the 13,000-km Pacific Light Cable, with nominal capacity at 24,000 gigabits per second on each of six fiber pairs. Both bands use well-developed erbium-doped fiber amplifiers, but that’s about the limit of their spectral range.

To cover a broader spectral range, UCL added the largely unused wavelengths of 1484 to 1520 nm in the shorter-wavelength S band. That required new amplifiers that used thulium to amplify those wavelengths. Because only two thulium amplifiers were available, they also added Raman-effect fiber amplifiers to balance gain across that band. They also used inexpensive semiconductor optical amplifiers to boost signals reaching the receiver after passing through 40 km of fiber. 

Another key to success is the modulation format. “We encoded the light in the best possible way,” using a geometrically shaped quadrature amplitude modulation (QAM) format to take advantage of differences in signal quality between bands. “Usually commercial systems use 64 points, but we went to 1024 [QAM levels]…an amazing achievement” for the best-quality signals, Galdino said.
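The jump from 64 to 1024 constellation points raises the raw information per symbol from 6 to 10 bits, since a QAM constellation of M points carries log2(M) bits per symbol:

```python
import math

def bits_per_symbol(constellation_points: int) -> float:
    """Raw bits carried by one symbol of an M-point constellation."""
    return math.log2(constellation_points)

print(bits_per_symbol(64), bits_per_symbol(1024))  # 6.0 10.0
```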

This experiment, reported in IEEE Photonics Technology Letters, is only the first in a planned series. Their results are close to the Shannon limit on communication rates imposed by noise in the channel. The next step, she says, will be buying more optical amplifiers so they can extend transmission beyond 40 km.

“This is fundamental research on the maximum capacity per channel,” Galdino says. The goal is to find limits, rather than to design new equipment. Their complex system used more than US $2.6 million of equipment, including multiple types of amplifiers and modulation schemes. It’s a testbed, optimized not for cost, performance, or reliability but for experimental flexibility. Industry will face the challenge of developing detectors, receivers, amplifiers, and high-quality lasers on new wavelengths, work that has already started. If it succeeds, a single fiber pair will be able to carry enough video for all 50 million school-age children in the US to be on two Zoom video channels at once.
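The Shannon limit they are approaching is C = B·log2(1 + SNR) for each channel. A hedged illustration with an assumed signal-to-noise ratio (the article does not report the actual SNR, and practical systems split the band into many channels with differing SNRs):

```python
import math

def shannon_capacity_tbps(bandwidth_thz: float, snr_db: float) -> float:
    """Capacity of an ideal noisy channel, single polarization."""
    snr = 10 ** (snr_db / 10)
    return bandwidth_thz * math.log2(1 + snr)

# 16.8 THz of spectrum at an assumed 25 dB SNR
print(round(shannon_capacity_tbps(16.8, 25), 1))  # ~139.6 Tb/s per polarization
```

Doubling for two polarizations puts the reported 178 Tb/s plausibly close to this bound.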

Optical Labs Set Terabit Transmission Records

Post Syndicated from Jeff Hecht original https://spectrum.ieee.org/tech-talk/computing/networks/optical-labs-set-terabit-transmission-records

Press releases about fiber optics, like baseball stories, report many types of records. The two new fiber-optics records reported last month at the Optical Fiber Communication Conference (OFC 2020) are definitely major league. A team from Japan’s National Institute of Information and Communications Technology (NICT) has sent a staggering 172 terabits per second through a single multicore fiber—more than the combined throughput of all the fibers in the world’s highest-capacity submarine cable. Not to be outdone, Nokia Bell Labs reported a record single-stream data rate of 1.52 terabits per second, close to four times the 400 gigabits per second achieved by the fastest links now used in data centers.

The IEEE-cosponsored OFC 2020 could have used some of that capacity during the 9–12 March meeting in San Diego. Although many exhibitors, speakers, and would-be attendees dropped plans for the show as the COVID-19 virus spread rapidly around the world, the organizers elected to go ahead with the show. In the end, a large fraction of the talks were streamed from homes and offices to the conference, then streamed to remote viewers around the world.

Telecommunications traffic has increased relentlessly over recent decades, made possible by tremendous increases in the data rates that fiber-optic transmitters can push through the standard single-mode fibers that have been used since the 1990s. But fiber throughputs are now approaching the nonlinear Shannon limit on information transfer, so developers are exploring ways to expand the number of parallel optical paths via spatial-division multiplexing.

Spatial-division multiplexing is an optical counterpart of MIMO, which uses multiple input and output antennas for high-capacity microwave transmission. The leading approaches: packing many light-guiding cores into optical fibers or redesigning fiber cores to transmit light along multiple parallel paths through the core that can be isolated at the end of the fiber.

Yet multiplying the number of cores has limits. They must be separated by at least 40 micrometers to prevent noise-inducing crosstalk between them. As a result, no more than five cores can fit into fibers based on the 125-micrometer diameter standard for long-haul and submarine networks. Adding more cores can allow higher data rates, but it leads to fiber diameters up to 300 µm; such fibers are stiff and would require expensive new cable designs for submarine applications.
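A rough geometry check (our own sketch, not from the article) shows why the 40-µm spacing is so constraining: placing n cores evenly on a ring requires a ring radius of s/(2·sin(π/n)), which quickly eats into the 62.5-µm cladding radius, and real designs also need substantial clearance between each core and the cladding edge to avoid leakage:

```python
import math

CLADDING_RADIUS_UM = 125 / 2  # standard long-haul fiber

def ring_radius_um(n_cores: int, min_sep_um: float = 40.0) -> float:
    """Smallest ring radius keeping n equally spaced cores min_sep apart."""
    return min_sep_um / (2 * math.sin(math.pi / n_cores))

for n in (4, 5, 6, 7):
    r = ring_radius_um(n)
    # ring radius grows with n, so edge clearance shrinks
    print(n, round(r, 1), round(CLADDING_RADIUS_UM - r, 1))
```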

In a late paper, Georg Rademacher of NICT described a new design called close-coupled multicore fibers. He explains that the key difference in that design is that “The cores are close to each other so that the signals intentionally couple with each other. Ideally, the light that is coupled into one core should spread to all other cores after only a few meters.” The signals resemble those from fibers in which individual cores carry multiple modes, and they require MIMO processing to extract the output signal. However, because signals couple between cores over a much shorter distance in the new fibers than in earlier few-mode fibers, the processing required is much simpler.

Earlier demonstrations of close coupling were limited to narrow wavelength ranges of less than 5 nanometers. In San Diego, Rademacher reported testing an 80-km length of three-core, 125-µm fiber with signals from a frequency-comb light source. The team transmitted 24.5-gigabaud 16-quadrature-amplitude-modulated (16-QAM) signals to sample performance on 359 optical channels in the C and L fiber bands spanning a 75-nm bandwidth. Signals were looped repeatedly through the test fiber and an optical amplifier to simulate a total distance of 2040 kilometers.
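Those parameters roughly reproduce the headline number. Assuming dual-polarization transmission and typical forward-error-correction overhead (neither is spelled out above), the raw line rate works out to about 211 Tb/s, consistent with a net 172 Tb/s:

```python
baud = 24.5e9           # symbols per second per channel
bits_per_symbol = 4     # 16-QAM
channels = 359
cores = 3
polarizations = 2       # assumed dual-polarization, not stated in the article

raw_tbps = baud * bits_per_symbol * polarizations * channels * cores / 1e12
print(round(raw_tbps, 1))  # ~211.1 Tb/s raw; FEC overhead brings net near 172
```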

The total data rate measured over that distance was 172 terabits per second, a record for 125-µm fibers. Fibers with much larger diameters and more cores have transmitted over 700 terabits over 2000 km, he says, but they remain in the laboratory. The world’s largest-capacity commercial system, the Pacific Light Cable Network, will require six fiber pairs to send 144 terabits when and if it comes into full operation. The close-coupled fibers “are far from the required maturity for subsea use,” says Rademacher. His group also is studying other ways that multicore fibers might improve on today’s single-mode fibers.

The Nokia record addresses the huge market for high-speed connections in massive Internet data centers. At last year’s OFC, the industry showed commercial 400-gigabit links for data centers, and announced a push for 800 gigabit rates. This year, Nokia reported a laboratory demonstration that came close to doubling the 800-Gig rate.

Achieving single-channel data rates exceeding 100 gigabits depends on transmitting signals coherently rather than via the simple on-off keying used for rates up to 10 gigabits. Coherent systems convert input digital signals to analog format at the transmitter; the receiver then converts the analog signals back to digital form for distribution. This makes the analog-to-digital converter a crucial choke point for achieving the highest possible data rates.

Speaking via the Internet from Stuttgart, Fred Buchali of Nokia Bell Labs explained how his group had used a new silicon-germanium chip to achieve a record single-carrier transmission rate of 1.52 terabits per second through 80 km of standard single-mode fiber. Their digital-to-analog converter generated 128 gigabaud at information rates of 6.2 bits per symbol for each of two polarizations. It broke a previous record of 1.3 terabits per second that Nokia reported in September 2019.
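Those figures check out: 128 gigabaud times 6.2 information bits per symbol on each of two polarizations gives a line rate just under 1.6 Tb/s, with the small gap to the net 1.52 Tb/s presumably taken by framing and pilot overhead (our assumption):

```python
baud_gbaud = 128.0
info_bits_per_symbol = 6.2
polarizations = 2

line_rate_tbps = baud_gbaud * info_bits_per_symbol * polarizations / 1000
print(round(line_rate_tbps, 3))  # 1.587 Tb/s
```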

Micram Microelectronic GmbH of Bochum, Germany, designed and made the prototype for Nokia using high-speed bipolar transistors and 55-nanometer CMOS technology. Buchali said that looping the fiber and adding erbium amplifiers allowed them to reach 240 km at a data rate of 1.46 Tbit/s. The apparent goal is to reach 1.6 Tbit/s, four times the current-best 400 gigabits per second, at the typical data center distance of 80 km. 

“If we are able to meet that target with a single carrier as is demonstrated here rather than a number of carriers—say, 4 at 400 Gbps—then it is quite likely that the solution would be both more efficient in spectrum usage and lower cost,” says Theodore Sizer, Smart Optical Fabric & Devices Research Lab leader at Nokia Bell Labs. That could be an important step in letting the fast-growing data center world handle the world’s insatiable demand for data.

Submarine Cable Repairs Underway in South Africa

Post Syndicated from Jeff Hecht original https://spectrum.ieee.org/tech-talk/telecom/internet/undersea-cable-repairs-south-africa

A clock in South Africa is counting down the seconds until a pair of broken cables are expected to go back in service. Every few hours, the service provider TENET, which keeps South Africa’s university and research facilities connected to the global Internet, tweets updates.

The eight-year-old West Africa Cable System (WACS) submarine cable, which runs parallel to Africa’s west coast, broke at two points early on 16 January. That same day, an 18-year-old cable called SAT-3 that runs along the same route also broke.

First Blue LED Emission From a Perovskite

Post Syndicated from Jeff Hecht original https://spectrum.ieee.org/tech-talk/semiconductors/materials/first-blue-led-emission-perovskite-news

Researchers in California have extended the spectral range of perovskite light-emitting diodes into the blue. How did they do it? By cracking the mystery of why the optical and electronic properties of the materials change when current flows through them.

Painstaking measurements revealed that current-induced heating deformed cells in the semiconductor crystals. That observation enabled the team to make the first single-crystal perovskite diodes, says group leader Peidong Yang, a chemistry professor at the University of California at Berkeley. 

Diode Lasers Jump to the Deep Ultraviolet

Post Syndicated from Jeff Hecht original https://spectrum.ieee.org/tech-talk/semiconductors/optoelectronics/diode-lasers-jump-to-the-deep-ultraviolet

The first electrically powered semiconductor laser emitting in the deep ultraviolet marks a big step into a new field, says Ramón Collazo, a materials science professor at North Carolina State University and a founder of Adroit Materials in Cary, NC.

Researchers had thought “there was a hard wall” blocking diode lasers from emitting ultraviolet light shorter than 315 nm even in laboratory lasers, Collazo says. Now a team including Hiroshi Amano of Nagoya University, who shared the 2014 Nobel Physics Prize for inventing efficient blue light-emitting diodes, has scored another breakthrough by demonstrating a 271.8-nm diode laser, more than 40 nm deeper into the ultraviolet.

“Bio-sensing and sterilization are expected to be the first key applications” of deep ultraviolet diode lasers, says Ziyi Zhang of the Asahi Kasei Corporate Research Center in Fuji, lead author of a paper coauthored by Amano. Bio-sensors based on diode lasers “could be far smaller, cheaper, and more easily replaceable” than the bulky gas lasers, which are now the only type available at wavelengths shorter than 300 nm, Zhang says.

The U.S. Pentagon had made a significant investment in developing deep-ultraviolet lasers for bio-sensing, says Collazo, but “after three DARPA programs, the Japanese got it,” he said with a chuckle. He says that demonstrating a deep-ultraviolet laser opens the door to making diode lasers across a broad range from 220 to 365 nm.

Commercial LEDs made of the same compound, aluminum-gallium nitride, can emit wavelengths as short as 210 nm. However, their light spreads rapidly, leaving little power after tens of centimeters, which limits their applications. Laser diodes concentrate their beam over a longer distance, delivering higher power to small spots, and their light is concentrated in a band of less than 1 nm, compared to more than 10 nm for LEDs. “These features of laser diodes should enable some medical applications,” says Zhang.

Diode lasers are much harder to make than LEDs because they require passing higher current densities through the layer where current carriers combine to emit light. This is a particular problem for the nitride compounds that emit in the blue, violet, and ultraviolet because they are prone to crystalline defects that can cause failures at high current densities. Such material problems had stalled progress in blue LEDs and diode lasers until Amano and Isamu Akasaki at Nagoya University and Shuji Nakamura, then at the Nichia Corporation, developed new ways of processing the mixture of gallium, indium, and nitrogen needed for blue emission in the early 1990s. They succeeded first with LEDs and later with diode lasers at the 405-nm wavelength needed to store high-definition digital video on Blu-ray discs.

Diodes made from gallium, indium, and nitrogen emit blue light, with the wavelength decreasing as the indium content decreases, reaching about 370 nm from pure GaN. Aluminum must be added to replace some gallium to reach shorter wavelengths, but adding aluminum also makes the compound more vulnerable to defects. That’s not a severe problem for LEDs, which reached 210 nanometers in the deep ultraviolet in 2006. However, the high current density in diode lasers stalled their development in the ultraviolet. The shortest wavelength in commercial diodes remains 375 nm, and short-lived laboratory versions were stuck around 320 nm for years.

In 2018, a team from North Carolina State and Adroit Materials led by Adroit chief operating officer Ronny Kirste was able to reduce defect levels in AlGaN containing more aluminum than gallium to produce laser light at 265 nm in the deep ultraviolet. However, their semiconductor lasers were powered by 193-nm light from a large pulsed gas laser, a technique useful in research but not practical for applications. The holy grail for practical deep-ultraviolet lasers is powering them directly by electrical current passing through the semiconductor.

A team from Asahi Kasei Corporate Research & Development in Fuji, Nagoya University, and Crystal IS in Green Island, NY, demonstrated the new electrically powered diode laser emitting at 271.8 nm. The keys to their success, Zhang says, were the design of the laser diode structure, their technique for doping the semiconductor, and epitaxial growth on a substrate of single-crystal AlN, which reduced threshold current and operating voltage. Layers in the structure contained up to twice as much aluminum as gallium.

Their laser generated 50-nanosecond pulses at a rate of 2000 Hz, but most applications are expected to require a continuous laser beam. Zhang says that further reductions in threshold current and operating voltage should allow continuous-wave operation. Asahi Kasei plans to continue teaming with Nagoya University to improve their understanding of the material system and develop commercial versions.

Adroit Materials was already working on AlGaN, and now is working to duplicate the Asahi Kasei results. “We want to replace those gas lasers,” which have long been the only laser sources practical for most short-wavelength ultraviolet applications, says Kirste. “The market is huge for that.” Much of that market is biological because DNA absorbs strongly at 260 nm. In addition to sensing biological material, including potential pathogens, bright deep-ultraviolet sources can break apart DNA, killing pathogens. UV LEDs emitting in that range can already sterilize small volumes of water, such as that needed by soldiers in the field where water supplies are suspect. Compact laser sources could sterilize larger volumes more quickly.

Bringing Legacy Fiber Optic Cables Up to Speed

Post Syndicated from Jeff Hecht original https://spectrum.ieee.org/tech-talk/telecom/internet/legacy-fiber-optic-cables-speed-data-rates

Installing optical fibers with fat cores once seemed like a good idea for local-area or campus data networks. It was easier to couple light into such “multimode” fibers than into the tiny cores of high-capacity “singlemode” fibers used for long-haul networks.

The fatter the core, the slower data flows through the fiber, but fiber with 50-micrometer (µm) cores can carry data at rates of 100 megabits per second up to a few hundred meters—good enough for local transmission.
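The tradeoff between core size and reach can be sketched with the usual bandwidth-distance product for multimode fiber. This is our illustrative arithmetic, not Cailabs' figures: the 500 MHz·km value is typical of OM2-grade 50-µm fiber, and the one-bit-per-hertz conversion is a rough rule of thumb.

```python
# Illustrative sketch (not vendor specs): modal dispersion in a multimode
# fiber roughly fixes a bandwidth x distance product, so shorter links can
# support proportionally higher data rates.
def max_rate_gbps(bw_distance_mhz_km, distance_km):
    """Approximate supportable rate from a bandwidth-distance product,
    assuming roughly one bit per hertz."""
    return bw_distance_mhz_km / distance_km / 1000.0  # MHz -> Gb/s

# Assume a 500 MHz*km product, typical of OM2-grade 50-micrometer fiber:
print(max_rate_gbps(500, 0.3))  # ~1.7 Gb/s over 300 m
print(max_rate_gbps(500, 10))   # ~0.05 Gb/s over 10 km -- why mode-shaping
                                # optics are needed to reach 10 Gb/s
```

The point of the sketch: at 10 km, ordinary multimode transmission falls two orders of magnitude short of 10 Gbps, which is the gap Cailabs' mode-control optics aim to close.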

Now Cailabs in Rennes, France, has developed special optics that it says can send signals at rates of 10 gigabits per second (Gbps) up to 10 kilometers through the same fiber, avoiding the need to replace legacy multimode fiber. The company hopes to reach rates of 100 Gbps, which are now widely required for large data centers.

Photonics Meets Plasmonics in New Switch that Could Steer Lidar Laser Beams

Post Syndicated from Jeff Hecht original https://spectrum.ieee.org/tech-talk/semiconductors/optoelectronics/new-electro-mechanical-switch-integrated-photonics

The synergy of electronic processing and optical communications has powered the decades-long boom in information technology. But the need to convert signals back and forth between electrical and optical forms is becoming a bottleneck for the emerging field of integrated photonics.

A new type of switch that combines electrical and mechanical effects to redirect light could open the door to large-scale reconfigurable photonic networks for several applications including beam steering for lidars and optical neural networks for computing. 

Currently, integrated photonics are used in high-performance fiber-optic systems, and a joint government-industry program called AIM Photonics is pushing their manufacture. However, current optical switches are too big and require too much power to blend well into integrated photonics. The new hybrid nano-opto-electro-mechanical switch has a footprint of 10 square micrometers and runs on only one volt, making it compatible with the CMOS (complementary metal-oxide-semiconductor) silicon electronics used in integrated photonics, says Christian Haffner, who led the team that developed it at the Swiss Federal Institute of Technology in Zurich and is now at the National Institute of Standards and Technology (NIST) in Gaithersburg, Maryland.

The root of the problem Haffner set out to solve is that photons and electrons behave very differently.

Photons are great for communications because they travel at the speed of light and interact weakly with each other and with matter. But those weak interactions mean high voltages are needed to redirect them, and their wavelengths are much larger than chip features.

Electrons are much smaller and interact much more strongly than photons, making them better for switching and for processing signals. However, electrons move slower than light, and more energy is needed to move them.

Long-distance communication systems process signals electronically and convert the signal into light for transmission, but converting between signal formats is cumbersome for local transmission. The new switch makes it possible to redirect optical signals on the integrated photonic circuit without having to convert them to electrical format and then back to optical format for further transmission. 

In Science, Haffner and colleagues describe a hybrid nano-opto-electro-mechanical switch that would occupy only about 10 square micrometers on an integrated photonic circuit. Their switch is a small multilayered disk sitting at a T-junction between two optical waveguides—stripes of transparent silica that guide light—that meet at a right angle. The top layer of the disk is a four-micrometer-diameter, 40-nanometer-thick gold membrane resting on a small piece of alumina atop a layer of silicon deposited on silica. That structure acts as a curved waveguide resonant with both the input and output waveguides, so it can transfer resonant light between the two.

Light within the silica waveguides remains as photons, but within the switch the light excites oscillations of surface electrons in the gold, producing plasmons that vibrate at the frequency of the light wave but over a length much smaller than the optical wavelength. That tight confinement of the plasmonic part of the energy in the air gap between the gold and silicon creates a strong opto-electro-mechanical effect concentrated in the small volume of the switch.

With no voltage applied to the switch, the plasmonic waveguide remains resonant with the silica waveguides, so it couples light from the input waveguide to the output waveguide with minimal loss, as shown in the animation.

Applying one volt to the switch produces a static charge that pulls the gold membrane toward the silicon layer, changing the shape of the waveguide in the switch so it shifts the phase of the light by 180 degrees. This causes destructive interference in the switch, breaking the resonance and the coupling of light into the side waveguide, so the light instead continues through the input waveguide to another switch.
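A toy two-path interference model (our illustration, not the team's simulation) shows why a 180-degree phase shift is enough to kill the coupling entirely:

```python
import cmath

# Toy model: light reaching the side waveguide is the sum of two paths of
# equal amplitude. With zero relative phase they add constructively; with a
# 180-degree shift they cancel, so the light continues straight through.
def coupled_power(phase_rad):
    """Normalized power coupled to the side waveguide for a given phase shift."""
    return abs((1 + cmath.exp(1j * phase_rad)) / 2) ** 2

print(coupled_power(0.0))        # 1.0 -- full coupling (no voltage applied)
print(coupled_power(cmath.pi))   # ~0.0 -- destructive interference (1 V applied)
```

The real device's contrast is set by losses and fabrication tolerances, but the cancellation mechanism is the same.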

“What we have in the end,” says Haffner, “is a hybrid [switch], partly photonic and partly plasmonic, that manipulates light very efficiently.” The plasmonic part concentrates the switching in a small area; the photonic part experiences low loss. Applying a one-volt bias compatible with CMOS electronics across such a short distance can produce a very strong force. That gives the switch a small footprint, low loss, and lower power consumption, which conventional electro-optic switches cannot achieve simultaneously.

The mass of the gold film is so low that the switch can operate millions of times a second. That’s adequate for most switching, says Haffner, but it does have limits. The mechanical part of the switch cannot reach the picosecond speeds needed to modulate light in an optical transmitter.

The first applications are likely to be in laser beam steering for lidar, particularly for autonomous vehicles where continual information on the local environment is vital for safety. Another potential application is optical routing of signals on integrated photonic chips to create optical neural networks for deep-learning applications. The switch can redirect signals millions of times a second, a time scale needed by such applications.

“I don’t see any issues in fabricating [the switches] with high yield,” says Haffner. 

Driving Tests Coming for Autonomous Cars

Post Syndicated from Jeff Hecht original https://spectrum.ieee.org/cars-that-think/transportation/self-driving/driving-tests-coming-for-autonomous-cars

The three laws of robotic safety in Isaac Asimov’s science fiction stories seem simple and straightforward, but the ways the fictional tales play out reveal unexpected complexities. Writers of safety standards for self-driving cars express their goals in similarly simple terms. But several groups now developing standards for how autonomous vehicles will interact with humans and with each other face real-world issues much more complex than science fiction.

Advocates of autonomous cars claim that turning the wheel over to robots could slash the horrific toll of 1.3 million people killed around the world each year by motor vehicles. Yet the public has become wary because robotic cars also can kill. Documents released last week by the U.S. National Transportation Safety Board blame the March 2018 death of an Arizona pedestrian struck by a self-driving Uber on safety failures by the car’s safety driver, the company, and the state of Arizona. Even less-deadly safety failures are damning, like the incident where a Tesla in Autopilot mode wasn’t smart enough to avoid crashing into a stopped fire engine whose warning lights were flashing.

Safety standards for autonomous vehicles “are absolutely critical” for public acceptance of the new technology, says Greg McGuire, associate director of the Mcity autonomous vehicle testing lab at the University of Michigan. “Without them, how do we know that [self-driving cars] are safe, and how do we gain public trust?” Earning that trust requires developing standards through an open process that the public can scrutinize, and may even require government regulation, he adds.

Companies developing autonomous technology have taken notice. Earlier this year, representatives from 11 companies including Aptiv, Audi, Baidu, BMW, Daimler, Infineon, Intel, and Volkswagen collaborated to write a wide-ranging whitepaper titled “Safety First for Automated Driving.” They urged designing safety features into the automated driving function, and using heightened cybersecurity to assure the integrity of vital data including the locations, movement, and identification of other objects in the vehicle environment. They also urged validating and verifying the performance of robotic functions in a wide range of operating conditions.

On 7 November, the International Telecommunication Union announced the formation of a focus group called AI for Autonomous and Assisted Driving. Its aim: to develop performance standards for artificial intelligence (AI) systems that control self-driving cars. (The ITU has come a long way since its 1865 founding as the International Telegraph Union, with a mandate to standardize the operations of telegraph services.)

ITU intends the standards to be “an equivalent of a Turing Test for AI on our roads,” says focus group chairman Bryn Balcombe of the Autonomous Drivers Alliance. A computer passes a Turing Test if it can fool a person into thinking it’s a human. The AI test is vital, he says, to assure that human drivers and the AI behind self-driving cars understand each other and predict each other’s behaviors and risks.

A planning document says AI development should match public expectations so:

• AI never engages in careless, dangerous, or reckless driving behavior

• AI remains aware, willing, and able to avoid collisions at all times

• AI meets or exceeds the performance of a competent, careful human driver

 

These broad goals for automotive AI algorithms resemble Asimov’s laws, insofar as they bar hurting humans and demand that they obey human commands and protect their own existence. But the ITU document includes a list of 15 “deliverables” including developing specifications for evaluating AIs and drafting technical reports needed for validating AI performance on the road.

A central issue is convincing the public to entrust the privilege of driving—a potentially life-and-death activity—to a technology which has suffered embarrassing failures like the misidentification of minorities that led San Francisco to ban the use of facial recognition by police and city agencies.

Testing how well an AI can drive is vastly complex, says McGuire. Human adaptability makes us fairly good drivers. “We’re not perfect, but we are very good at it, with typically a hundred million miles between fatal traffic crashes,” he says. Racking up that much distance in real-world testing is impractical—and it is but a fraction of the billions of vehicle miles needed for statistical significance. That’s a big reason developers have turned to simulations. Computers can help them run up the virtual mileage needed to find potential safety flaws that might arise only in rare situations, like a snowstorm, heavy rain, or a road under construction.
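The statistics behind that impracticality can be sketched with the standard "rule of three" (our arithmetic, not McGuire's): after N event-free trials, the 95 percent upper confidence bound on the event rate is roughly 3/N.

```python
# Back-of-envelope: if fatal crashes are rare Poisson events, driving N miles
# with zero fatalities bounds the crash rate at about 3/N with 95% confidence.
human_rate = 1 / 100e6          # ~1 fatal crash per 100 million miles

# Miles of flawless robotic driving needed just to show parity with humans:
miles_needed = 3 / human_rate
print(f"{miles_needed:.0f}")    # 300 million miles -- and demonstrating the AI
                                # is *better* takes far more, hence simulation
```

Demonstrating a statistically significant improvement over the human baseline, rather than mere parity, pushes the requirement into the billions of miles cited above.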

It’s not enough for an automotive AI to assure the vehicle’s safety, says McGuire. “The vehicle has to work in a way that humans would understand.” Self-driving cars have been rear-ended when they stopped in situations where most humans would not have expected a driver to stop. And a truck can be perfectly safe even when close enough to unnerve a bicyclist.

Other groups are also developing standards for robotic vehicles. ITU is covering both automated driver assistance and fully autonomous vehicles. Underwriters Laboratories is working on a standard for fully-autonomous vehicles. The Automated Vehicle Safety Consortium, a group including auto companies, plus Lyft, Uber, and SAE International (formerly the Society of Automotive Engineers) is developing safety principles for SAE Level 4 and 5 autonomous vehicles. The BSI Group (formerly the British Institute of Standards) developed a strategy for British standards for connected and autonomous vehicles and is now working on the standards themselves.

How long will it take to develop standards? “This is a research process,” says McGuire. “It takes as long as it takes” to establish public trust and social benefit. In the near term, Mcity has teamed with the city of Detroit, the U.S. Department of Transportation, and Verizon to test autonomous vehicles for transporting the elderly on city streets. But he says the field “needs to be a living thing that continues to evolve” over a longer period.

What Silicon Photonics Delivers at 1200 Gigabits Per Second

Post Syndicated from Jeff Hecht original https://spectrum.ieee.org/tech-talk/computing/networks/high-speed-silicon-photonics

Fiber-optic data rates on a single wavelength will take a big step up later this month. Acacia Communications says it will demonstrate a 1.2-terabit-per-second module at the European Conference on Optical Communications (ECOC) taking place from 22-26 September in Dublin. Transmission of up to 800 gigabits per second on a single wavelength was first demonstrated this March at the Optical Fiber Communications Conference (OFC) in San Diego. Both meetings are cosponsored by IEEE.

The explosive growth of cloud computing and traffic between data centers has given network operators a tremendous thirst for bandwidth on scales ranging from transmission inside data centers at the network edge to submarine cables stretching more than 10,000 kilometers. Data centers have long relied on coherent optical transmission of 100-gigabit Ethernet signals on each of many separate wavelengths carried by an optical fiber. Now operators have begun shifting traffic at busy data centers to 400-Gigabit Ethernet. With cloud data centers now mirrored around the globe, operators also are pushing for ever-increasing capacities for transoceanic submarine cables, most recently the Pacific Light Cable capable of carrying 144 terabits per second between Hong Kong and Los Angeles.

The crucial elements of those transmission systems are modules that convert digital signals between electronic and optical forms. They require both photonic circuits that convert digital signals between electronic and optical formats, and powerful digital signal processing (DSP) chips that extract error-free signals from the noise that accumulates during high-speed transmission. In 2017, Acacia introduced its 1.2-T Pico DSP chip, which operates at 1200 gigabits per second and was later integrated into a module that split the signal into a pair of 600-gigabit signals for transmission on two separate wavelengths.
 

Acacia’s new AC1200-SC2 module combines the DSP chip with integrated photonic circuits to generate a 1200-Gb/s signal for transmission on a single wavelength in a fiber. That makes the module optimizable for high speed over short distances or long distances at lower speeds, says Acacia marketing manager Tom Williams. The modules can transport three 400-GigE signals of data-center traffic using the powerful but delicate 64QAM modulation scheme. For transmission over longer distances, the module can switch to more robust modulation schemes: 16QAM can carry two 400-GigE signals, and the rugged DPSK (differential phase-shift keying) scheme can transport one 400-GigE signal over 10,000 kilometers, as shown in the figure. The module also can be adjusted to support legacy signal formats still in wide use, where the bandwidths of a single optical channel or wavelength are 50, 75, or 100 gigahertz.
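The QAM figures can be sanity-checked with rough coherent-link arithmetic (our illustrative numbers, not Acacia's spec sheet; real systems add forward-error-correction overhead, and the assumed ~100-gigabaud symbol rate is an approximation): line rate equals symbol rate times bits per symbol times two polarizations.

```python
# Rough sanity check of modulation-scheme capacities on a coherent link.
# Bits per symbol: 64QAM encodes 6 (2^6 = 64 constellation points),
# 16QAM encodes 4 (2^4 = 16).
BITS_PER_SYMBOL = {"64QAM": 6, "16QAM": 4}

def line_rate_gbps(baud_g, scheme, polarizations=2):
    """Raw line rate in Gb/s for a dual-polarization coherent signal."""
    return baud_g * BITS_PER_SYMBOL[scheme] * polarizations

# Assuming roughly 100 gigabaud:
print(line_rate_gbps(100, "64QAM"))  # 1200 Gb/s -- three 400-GigE signals
print(line_rate_gbps(100, "16QAM"))  # 800 Gb/s  -- two 400-GigE signals
```

Denser constellations pack more bits into each symbol but place the points closer together, which is why 64QAM is "delicate": accumulated noise over long hauls pushes operators down to 16QAM or simpler formats.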

A key to further enhancements, Williams adds, is the use of silicon photonics for converting the digital signals between optical and electronic formats. The strength of silicon photonics is its integration of silicon with optical materials to combine optical and electronic functions. It’s a field that has matured recently, as IEEE recognized when it announced that its annual Photonics Award would be given to Chris Doerr, Acacia’s vice president for advanced development, for his work in the field.

The new modules “could translate to better price per bit of transmission along a fiber,” depending on how vendors price their systems, says Jimmy Yu of the market analysis firm Dell’Oro Group. In the long term, “the words ‘400 Gbps everywhere’ resonate the most,” he says. Once 800- and 1200-gigabit systems become available, network operators can for the first time transmit 400 gigabit signals on long haul and ultra-long haul routes.
 

Backup Laser to Revive Aeolus Wind-Sensing Satellite

Post Syndicated from Jeff Hecht original https://spectrum.ieee.org/tech-talk/aerospace/satellites/backup-laser-to-revive-aeolus-satellite

Early results have shown that ultraviolet lidar can provide vital new data for weather and climate studies, but the laser power is slowly fading

Firing laser pulses from a satellite is the best way to measure wind velocity. Thus the launch of the lidar-equipped Aeolus satellite last August marked a breakthrough in measuring wind speeds around the world from the ground to the stratosphere for use in weather forecasting and climate research.

Its first nine months proved its worth. “The quality of the wind data is fantastic,” Josef Aschbacher, director of Earth observations at the European Space Agency, said in May. However, the high energy pulses from the Aladin ultraviolet laser used to make the measurements were growing dimmer.

This week ESA is switching on a backup laser, and expects to spend three to four weeks commissioning it. “The first data, available only to expert users, should be available around the end of July,” Aeolus mission manager Tommaso Parrinello told Spectrum in an email. ESA expects to need another month to confirm data quality for continuing studies of weather and climate.  

Weather and climate trends depend on wind velocities, so accurate measurements of winds throughout the atmosphere are essential for weather forecasting and climate research. We can estimate wind velocity indirectly by watching the motion of wind-driven clouds and the turbulence of wind-blown ocean surfaces. However, the most accurate velocity measurements come from inserting probes directly into the moving air. Weather balloons are the most common probes, supplemented by ground-based and aircraft-mounted instruments, but the number of these probes is limited.

Firing laser pulses into the air is another way of measuring wind velocity. Putting the laser on a satellite allows its pulses to probe the atmosphere around the whole world. Air molecules scatter some laser light directly back at the laser, in the process shifting the wavelength slightly by an amount that depends on the velocity of the molecule. A sensor mounted on the satellite can detect the difference between the laser output and the reflected light, thus measuring the air speed. This kind of laser is called a laser radar or lidar, and it is best known for its short-range uses in police speed traps and self-driving cars.

Lidars have been flown in space before, but mostly for measuring distance to the ground. One famous example is the Mars Orbiter Laser Altimeter, which mapped the red planet from the Mars Global Surveyor. That worked well because hard surfaces reflect much of the laser energy. Air, on the other hand, reflects only a small fraction of the laser light, making it harder to measure wind velocity. However, air scatters more light at shorter wavelengths, so lidar returns can be increased by using lasers firing high-energy pulses at short ultraviolet wavelengths.

ESA custom-built a high-power laser they called Aladin to operate very stably, firing 50 pulses per second in a precise band for measurement. The laser worked well, but when they tested the optical system in a vacuum in 2005 they found that the intense ultraviolet pulses destroyed fragile coatings on optical surfaces and clouded the optics. Fixing that took more than a decade and raised the total cost of Aeolus to $560 million when it launched last year.

The laser fired 65-millijoule ultraviolet pulses once it reached orbit, but that energy declined 20 to 30 percent in the first nine months, and Aschbacher said in May that it was losing one millijoule per week. Even so, that was a big advance over earlier high-energy ultraviolet lasers in space, which failed within a few hours, Parrinello says. The European Center for Medium-Range Weather Forecasts found the resulting data good enough to improve weather-forecast quality. However, at the rate power was dropping, Aeolus would not be able to collect the three years of data sought for research.
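A quick extrapolation, based only on the figures above (our arithmetic, not ESA's projection), shows why the decline was alarming:

```python
# Extrapolating the reported decline: starting from 65 mJ and losing about
# 1 mJ per week, how long until pulse energy falls to half its initial value?
START_MJ = 65.0
LOSS_MJ_PER_WEEK = 1.0

weeks_to_half = (START_MJ / 2) / LOSS_MJ_PER_WEEK
print(weeks_to_half)  # 32.5 weeks -- under a year, far short of the three
                      # years of data sought for research
```

The real decay was not perfectly linear, but the order of magnitude makes clear why ESA moved to the backup laser rather than ride out the fade.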

Analysis of satellite performance found the losses originated from multiple problems with the laser rather than a recurrence of the earlier optical difficulties. The laser beam seems to be drifting away from the target area, reducing power delivered. The semiconductor diode lasers that convert input electricity into the light that powers the ultraviolet laser appear to be dimming.

Fortunately, ESA’s design included a redundant laser as a backup to the original. The backup laser’s design is essentially the same as that of the fading laser, but it has not yet been used, and ground tests show that its pulse energy can be adjusted more readily. That extra margin should be enough for Aeolus to collect the desired three years of data, Parrinello says, but he warns that high-power laser missions “are only one shot away from failure.” Satellite lifetime is also limited by the fuel available to stabilize its orbit at 320 kilometers, an altitude so low that it requires a boost every week.

Results from Aeolus have already raised interest in a follow-up operational satellite for meteorological observations, and Parrinello says ESA has already exchanged letters of interest with the European Organization for the Exploitation of Meteorological Satellites.