All posts by Michael Koziol

Indoor Air-Quality Monitoring Can Allow Anxious Office Workers to Breathe Easier

Post Syndicated from Michael Koziol original

Although IEEE Spectrum’s offices reopened several weeks ago, I have not been back and continue to work remotely—as I suspect many, many other people are still doing. Reopening offices, schools, gyms, restaurants, or really any indoor space seems like a dicey proposition at the moment.

Generally speaking, masks are better than no masks, and outdoors is better than indoors, but short of remaining holed up somewhere with absolutely no contact with other humans, there's always some chance of contracting COVID-19. One of the reasons I personally haven't gone back to the office is the uncertainty over whether the virus, exhaled by a coworker, is present in the air. That risk doesn't seem worth the novelty of being able to work at my desk again for the first time in months.

Unfortunately, we can’t detect the virus directly, short of administering tests to people. There’s no sensor you can plug in and place on your desk that will detect virus nuclei hanging out in the air around you. What is possible, and might bring people some peace of mind, is installing IoT sensors in office buildings and schools to monitor air quality and ascertain how favorable the indoor environment is to COVID-19 transmission.

One such monitoring solution has been developed by wireless tech company eleven-x and environmental engineering consultant Pinchin. The companies have created a real-time monitoring system that measures carbon dioxide levels, relative humidity, and particulate levels to assess the quality of indoor air.

“COVID is the first time we’ve dealt with a source of environmental hazard that is the people themselves,” says David Shearer, the director of Pinchin’s indoor environmental quality group. But that also gives Pinchin and eleven-x some ideas on how to approach monitoring indoor spaces, in order to determine how likely transmission might be.

Again, because there's no such thing as a sensor that can directly detect COVID-19 virus nuclei in an environment, the companies have resorted to measuring secondhand effects. The first measurement is carbon dioxide levels. The biggest contributor to CO2 in an indoor space is people exhaling. Higher levels in a room or building suggest more people, and past a certain point, mean either that there are too many people in the space or that the air is not being vented properly. Indirectly, elevated carbon dioxide suggests that if someone in the space is a carrier of the coronavirus, there's a good chance more virus nuclei are in the air.

The IoT system also measures relative humidity. Dan Mathers, the CEO of eleven-x, says that his company and Pinchin designed their monitoring system based on recommendations by the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE). ASHRAE’s guidelines [PDF] suggest that a space’s relative humidity should hover between 40 and 60 percent. More or less than that, Shearer explains, and it’s easier for the virus to be picked up by the tissue of the lungs.

A joint sensor measures carbon dioxide levels and relative humidity. A separate sensor measures the level of 2.5-micrometer particles in the air. 2.5 µm (the basis of the PM 2.5 measure) is an established particulate size threshold in air-quality monitoring. And as circumstances would have it, it's also slightly smaller than the size of a virus nucleus, which is about 3 µm. The two are close enough in size that a high level of 2.5-µm particles in the air means a greater chance that some COVID-19 nuclei are present in the space.
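The threshold logic a monitoring system like this might apply can be sketched in a few lines. This is an illustrative sketch only: the 40-to-60-percent humidity band comes from the ASHRAE guidance mentioned above, while the CO2 and PM 2.5 cutoffs are hypothetical placeholders, not values from eleven-x or Pinchin.

```python
# Illustrative multi-sensor air-quality check. Only the 40-60 percent
# relative-humidity band comes from the ASHRAE guidance cited in the text;
# the CO2 and PM2.5 thresholds are hypothetical placeholders.

def assess_air_quality(co2_ppm: float, rh_percent: float, pm25_ugm3: float) -> list[str]:
    """Return a list of warnings; an empty list means no flags raised."""
    warnings = []
    if co2_ppm > 1000:  # hypothetical: suggests overcrowding or poor ventilation
        warnings.append("CO2 high: reduce occupancy or increase ventilation")
    if not 40 <= rh_percent <= 60:  # ASHRAE-recommended band from the article
        warnings.append("Relative humidity outside 40-60% band")
    if pm25_ugm3 > 35:  # hypothetical cutoff for 2.5-micron particulates
        warnings.append("PM2.5 elevated")
    return warnings

print(assess_air_quality(650, 45, 10))   # -> [] (no flags)
print(assess_air_quality(1400, 30, 50))  # -> three warnings
```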

According to Mathers, the eleven-x sensors are battery-operated and communicate using LoRaWAN, a wireless standard that is most notable for being extremely low power. Mathers says the batteries in eleven-x’s sensors can last for ten years or more without needing to be replaced.

LoRaWAN's principal drawback is that it isn't a good fit for transmitting large messages. But that's not a problem for eleven-x's sensors: Mathers says the average message they send is just 10 bytes. Where LoRa excels is in sending simple updates or measurements, such as carbon dioxide, relative humidity, and particulate levels. The sensors communicate on what's called an interrupt basis, meaning that once a measurement passes a particular threshold, the sensor wakes up and sends an update. Otherwise, it whiles away the hours in a very low power sleep mode.
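The combination of a compact payload and threshold-triggered reporting can be sketched as follows. The 10-byte figure comes from the article; the field layout and the 100-ppm wake-up threshold are hypothetical, not eleven-x's actual format.

```python
# Sketch of interrupt-basis reporting with a compact payload, in the spirit
# of the 10-byte LoRaWAN messages described above. Field layout and the
# wake-up threshold are hypothetical illustrations.
import struct

# 10-byte payload: sensor id (uint16), CO2 ppm (uint16), RH x100 (uint16),
# PM2.5 x10 (uint16), status flags (uint16), all big-endian.
PAYLOAD_FMT = ">HHHHH"

def build_payload(sensor_id, co2_ppm, rh_percent, pm25_ugm3, flags=0):
    return struct.pack(PAYLOAD_FMT, sensor_id, co2_ppm,
                       int(rh_percent * 100), int(pm25_ugm3 * 10), flags)

def should_wake(last_sent_co2, current_co2, delta_ppm=100):
    """Wake and transmit only when the reading moves past a threshold."""
    return abs(current_co2 - last_sent_co2) >= delta_ppm

payload = build_payload(7, 850, 47.5, 12.3)
print(len(payload))           # 10 bytes, matching the message size quoted above
print(should_wake(850, 990))  # True: the 140 ppm change exceeds the threshold
```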

Mathers explains that one particulate sensor is placed near a vent to get a good measurement of 2.5-micron-particle levels. Then, several carbon dioxide/relative humidity sensors are scattered around the indoor space to get multiple measurements. All of the sensors communicate their measurements to LoRaWAN gateways that relay the data to eleven-x and Pinchin for real-time monitoring.

Typically, indoor air quality tests are done in-person, maybe once or twice a year, with measuring equipment brought into the building. Shearer believes that the desire to reopen buildings, despite the specter of COVID-19 outbreaks, will trigger a sea change for real-time air monitoring.

It has to be stressed again that the monitoring system developed by eleven-x and Pinchin is not a “yes or no” style system. “We’re in discussions about creating an index,” says Shearer, “but it probably won’t ever be that simple.” Shifting regional guidelines, and the evolving understanding about how COVID-19 is transmitted will make it essentially impossible to create a hard-and-fast index for what qualities of indoor air are explicitly dangerous with respect to the virus.

Instead, like social distancing and mask wearing recommendations, the measurements are intended to give general guidelines to help people avoid unhealthy situations. What’s more, the sensors will notify building occupants should indoor conditions change from likely safe to likely problematic. It’s another way to make people feel safe if they have to return to work or school.

Apptricity Beams Bluetooth Signals Over 30 Kilometers


When you think about Bluetooth, you probably think about things like wireless headphones, computer mice, and other personal devices that utilize the short-range, low-power technology. That’s where Bluetooth has made its mark, after all—as an alternative to Wi-Fi, using unlicensed spectrum to make quick connections between devices.

But it turns out that Bluetooth can go much farther than the couple of meters for which most people rely on it. Apptricity, a company that provides asset and inventory tracking technologies, has developed a Bluetooth beacon that can transmit signals over 32 kilometers (20 miles). The company believes its beacon is a cheaper, secure alternative to established asset and inventory tracking technologies.

A quick primer, if you're not entirely clear on asset tracking versus inventory tracking: There are some gray areas in the middle, but by and large, “asset tracking” refers to an IT department registering which employee has which laptop, or a construction company keeping tabs on where its backhoes are on a large construction site. “Inventory tracking” refers more to things like a retail store keeping correct product counts on the shelves, or a hospital noting how quickly it's going through its store of gloves.

Asset and inventory tracking typically use labor-intensive techniques like barcode or passive RFID scanning, which are limited both by distance (a couple of meters at most, in both cases) and the fact that a person has to be directly involved in scanning. Alternatively, companies can use satellite or LTE tags to keep track of stuff. While such tags don’t require a person to actively track items, they are far more expensive, requiring a costly subscription to either a satellite or LTE network.

So, the burning question: How does one send a Bluetooth signal over 30-plus kilometers? Typically, Bluetooth's range is limited because covering large distances would require a prohibitive amount of power, and because its use of unlicensed spectrum means that the greater the distance, the more likely a signal will interfere with other wireless signals.

The key new wrinkle, according to Apptricity’s CEO Tim Garcia, is precise tuning within the Bluetooth spectrum. Garcia says it’s the same principle as a tightly-focused laser beam. A laser beam will travel farther without its signal weakening beyond recovery if the photons making up the beam are all as close to a specific frequency as possible. Apptricity’s Bluetooth beacons use firmware developed by the company to achieve such precise tuning, but with Bluetooth signals instead of photons. Thus, data can be sent and received by the beacons without interfering and without requiring unwieldy amounts of power.

Garcia says RFID tags and barcode scanning don’t actively provide information about assets or inventory. Bluetooth, however, can not only pinpoint where something is, it can send updates about a piece of equipment that needs maintenance or just requires a routine check-up.

By its own estimation, Apptricity’s Bluetooth beacons are 90 percent cheaper than LTE or satellite tags, specifically because Bluetooth devices don’t require paying for a subscription to an established network.

The company’s current transmission distance record for its Bluetooth beacons is 38 kilometers (23.6 miles). The company has also demonstrated non-commercial versions of the beacons for the U.S. Department of Defense with broadcast ranges between 80 and 120 kilometers.

Amazon’s Project Kuiper is More Than the Company’s Response to SpaceX


Amazon cleared an important hurdle when the U.S. Federal Communications Commission (FCC) announced on 30 July that the company was authorized to deploy and operate its Kuiper satellite constellation. The authorization came with the caveat that Amazon would still have to demonstrate that Kuiper would not interfere with previously authorized satellite projects, such as SpaceX’s Starlink.

Even with the FCC’s caveat, it’s tempting to imagine that the idea of putting a mega-constellation of thousands of satellites in low-Earth orbit to provide uninterrupted broadband access anywhere on Earth will become a battle between Jeff Bezos’ Kuiper and Elon Musk’s Starlink. After all, even in space, how much room can there be for two mega-constellations, let alone additional efforts like that of the recently-beleaguered OneWeb? But some experts suggest that Amazon’s real play will come from its ability to vertically integrate Kuiper into the rest of the Amazon ecosystem—an ability SpaceX cannot match with Starlink.

“With Amazon, it’s a whole different ballgame,” says Zac Manchester, an assistant professor of aeronautics and astronautics at Stanford University. “The thing that makes Amazon different from SpaceX and OneWeb is they have so much other stuff going for them.” If Kuiper succeeds, Amazon can not only offer global satellite broadband access—it can include that access as part of its Amazon Web Services (AWS), which already offers resources for cloud computing, machine learning, data analytics, and more.

First, some quick background on what Amazon plans with Kuiper itself. The FCC approved the launch of 3,236 satellites. Not all of those thousands of satellites have to be launched immediately, however. Amazon is now obligated to launch at least half of the total by 2026 to retain the operating license the FCC has granted the company.

Amazon has said it will invest US $10 billion to build out the constellation. The satellites themselves will circle the Earth in what’s referred to as “low Earth orbit,” or LEO, which is any orbital height below 2000 kilometers. The satellites will operate in the Ka band (26.5 to 40 gigahertz).

A common talking point for companies building satellite broadband systems is that the constellations will be able to provide ubiquitous broadband access. In reality, except for users in remote or rural locations, terrestrial fiber or cellular networks almost always win out. In other words, no one in a city or suburb should be clamoring for satellite broadband.

“If they think they’re competing against terrestrial providers, they’re deluded,” says Tim Farrar, a satellite communications consultant, adding that satellite broadband is for last-resort customers who don’t have any other choice for connectivity.

However, these last-resort customers also include industries that can weather the cost of expensive satellite broadband, such as defense, oil and gas, and aviation. There’s far more money to be made in catering to those industries than in building several thousand satellites just to connect individual rural broadband subscribers.

But what these far-flung industries also increasingly have in common, alongside industries like Earth-imaging and weather-monitoring that also depend on satellite connectivity, is data. Specifically, the need to move, store, and crunch large quantities of data. And that’s something Amazon already offers.

“You could see Project Kuiper being a middleman for getting data into AWS,” says Manchester. “SpaceX owns the space segment, they can get data from point A to point B through space. Amazon can get your data through the network and into their cloud and out to end users.” There are plenty of tech start-ups and other companies that already do machine learning and other data-intensive operations in AWS and could make use of Kuiper to move their data. (Amazon declined to comment on the record about their future plans for Kuiper for this story).

Amazon has also built AWS ground stations that connect satellites directly with the rest of the company’s web service infrastructure. Building and launching satellites is certainly expensive, but the ground stations to connect those satellites are also a not-insignificant cost. Because Amazon already offers access to these ground stations on a per-minute basis, Manchester thinks it’s not unreasonable for the company to expand that offering to Kuiper’s connectivity.

There's also Blue Origin to consider. While the rocket company owned by Bezos does not currently have a heavy-lift rocket that could bring Kuiper satellites to LEO, that could change. The company has at least one such rocket—the New Glenn—in development. Farrar notes that Amazon could spend the next few years in satellite development before it needs to begin launching satellites in earnest, by which point Blue Origin could have a heavy-lift option available.

Farrar says that with an investment of $10 billion, Amazon will need to bring in millions of subscribers to consider the project a financial success. But Amazon can also play a longer game than, say, SpaceX. Whereas the latter is going to be dependent entirely on subscriptions to generate revenue for Starlink, Amazon’s wider business platform means Kuiper is not dependent solely on its own ability to attract users. Plus, Amazon has the resources to make a long-term investment in Kuiper before turning a profit, in a way Starlink cannot.

“They own all these things the other guys don’t,” says Manchester. “In a lot of ways, Amazon has a grander vision. They’re not trying to be a telco.”

5G Just Got Weird


The only reason you’re able to read this right now is because of the Internet standards created by the Internet Engineering Task Force. So while standards may not always be the most exciting thing in the world, they make exciting things possible. And occasionally, even the standards themselves get weird.

That’s the case with the recent 5G standards codified by the 3rd Generation Partnership Project (3GPP), the industry group that establishes the standards for cellular networks. 3GPP finalized Release 16 on July 3.

Release 16 is where things are getting weird for 5G. While earlier releases focused on the core of 5G as a generation of cellular service, Release 16 lays the groundwork for new services that have never been addressed by cellular before. At least, not in such a rigorous, comprehensive way.

“Release 15 really focused on the situation we’re familiar with,” says Danny Tseng, a director of technical marketing at Qualcomm, referring to cellular service, adding, “release 16 really broke the barrier” for connected robots, cars, factories, and dozens of other applications and scenarios.

4G and other earlier generations of cellular focused on just that: cellular. But when 3GPP members started gathering to hammer out what 5G could be, there was interest in developing a wireless system that could do more than connect phones. “The 5G vision has always been this unifying platform,” says Tseng. When developing 5G standards, researchers and engineers saw no reason that wireless cellular couldn’t also be used to connect anything wireless.

With that in mind, here's an overview of what's new and weird in Release 16. If you'd rather pore over the standards yourself, here you go. But if you'd rather not drown in page after page of technical details, don't worry—keep reading for the cheat sheet.


V2X

One of the flashiest things in Release 16 is V2X, short for “Vehicle to Everything.” In other words, using 5G for cars to communicate with each other and everything else around them. Hanbyul Seo, an engineer at LG Electronics, says V2X technologies have previously been standardized in IEEE 802.11p and 3GPP LTE V2X, but that the intention in these cases was to enable basic safety services. Seo is one of the rapporteurs for 3GPP’s item on V2X, meaning he was responsible for reporting on the item’s progress to 3GPP.

In defining V2X for 5G, Seo says the most challenging thing was to provide high data throughput, high reliability, and low latency, all of which are essential for anything beyond the most basic communications. Seo explains that earlier standards typically deal with messages of hundreds of bytes that are expected to reach 90 percent of receivers in a 300-meter radius within a few hundred milliseconds. The 3GPP standards bring those benchmarks into the realm of gigabytes per second, 99.999 percent reliability, and just a few milliseconds.


Matthew Webb, a 3GPP delegate for Huawei and the other rapporteur for the 3GPP item on V2X, adds that Release 16 also introduces a new technique called sidelinking. Sidelinks will allow 5G-connected vehicles to communicate directly with one another, rather than going through a cell-tower intermediary. As you might imagine, that can make a big difference for cars speeding past each other on a highway as they alert each other about their positions.

Tseng says that sidelinking started as a component of the V2X work, but it can theoretically apply to any two devices that might need to communicate directly rather than go through a base station first. Factory robots are one example, or large-scale Internet of Things installations.

Location Services

Release 16 also includes information on location services. In past generations of cellular, three cell towers were required to triangulate where a phone was by measuring the round-trip distance of a signal from each tower. But 5G networks will be able to use the round-trip time from a single tower to locate a device. That's because massive MIMO and beamforming allow 5G towers to send precise signals directly to devices, so the network can measure the direction and angle of a beam, along with its distance from the tower, to locate the device.
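The single-tower geometry boils down to converting a round-trip time into a distance and projecting it along the beam's direction. The sketch below simplifies to a flat 2D plane and uses illustrative names; real 5G positioning involves more elaborate measurements.

```python
# Simplified sketch of single-tower positioning from round-trip time plus
# beam direction, as enabled by beamforming. Geometry reduced to a flat 2D
# plane; names and numbers are illustrative.
import math

C = 299_792_458.0  # speed of light, m/s

def locate_device(rtt_seconds: float, beam_azimuth_deg: float,
                  tower_xy=(0.0, 0.0)):
    """Estimate device position from one tower's RTT and beam angle."""
    distance = C * rtt_seconds / 2  # one-way distance from the round trip
    theta = math.radians(beam_azimuth_deg)
    return (tower_xy[0] + distance * math.cos(theta),
            tower_xy[1] + distance * math.sin(theta))

# A roughly 6.67-microsecond round trip puts the device about 1 km out.
x, y = locate_device(6.671e-6, 30.0)
print(round(math.hypot(x, y)))  # ~1000 meters from the tower
```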

Private Networks

Then there are private networks. When we think of cellular networks, we tend to think of wide networks that cover lots of ground so that you can always be sure you have a signal. But 5G incorporates millimeter waves, which are higher-frequency radio waves (30 to 300 GHz) that don't travel nearly as far as traditional cell signals. Millimeter waves mean it will be possible to build a network just for an office building, factory, or stadium. At those scales, 5G could function essentially like a Wi-Fi network.

Unlicensed Spectrum

The last area to touch on is Release 16's details on unlicensed spectrum. Jing Sun, an engineer at Qualcomm and the 3GPP's rapporteur on the subject, says Release 16 is the first time unlicensed spectrum has been included in 5G cellular service. According to Sun, it made sense to expand 5G into unlicensed spectrum in the 5 and 6 GHz bands because the bands are widely available around the world and ideal for use now that 5G is pushing cellular service into higher frequency bands. Unlicensed spectrum could be key for private networks because, just like Wi-Fi, those networks could use the spectrum without having to go through the rigorous process of licensing a frequency band that may or may not be available.

Release 17 Will “Extend Reality”

Release 16 has introduced a lot of new areas for 5G service, but very few of these areas are finished. “The Release 17 scope was decided last December,” says Tseng. “We’ve got a pretty good idea of what’s in there.” In general, that means building on a lot of the blocks established in Release 16. For example, Release 17 will include more mechanisms by which devices—not just cars—can sidelink.

And it will include entirely new things as well. Release 17 includes a work item on extended reality—the catch-all term for augmented reality and virtual reality technologies. Tseng says there is also an item about NR-Lite, which will attempt to address current gaps in IoT coverage by using the 20 MHz band. Sun says Release 17 also includes a study item to explore the possibility of using frequencies in the 52 to 71 GHz range, far higher than anything used in cellular today.

Finalizing Release 17 will almost certainly be pushed back by the ongoing COVID-19 pandemic, which makes it difficult if not impossible for the groups working on work items and study items to meet face-to-face. But the pandemic is not stopping the work entirely. And when Release 17 is published, it's certain to make 5G even weirder still.

Spacecraft of the Future Could Be Powered By Lattice Confinement Fusion


Nuclear fusion is hard to do. It requires extremely high densities and pressures to force the nuclei of elements like hydrogen and helium to overcome their natural inclination to repel each other. On Earth, fusion experiments typically require large, expensive equipment to pull off.

But researchers at NASA’s Glenn Research Center have now demonstrated a method of inducing nuclear fusion without building a massive stellarator or tokamak. In fact, all they needed was a bit of metal, some hydrogen, and an electron accelerator.

The team believes that their method, called lattice confinement fusion, could be a potential new power source for deep space missions. They have published their results in two papers in Physical Review C.

“Lattice confinement” refers to the lattice structure formed by the atoms making up a piece of solid metal. The NASA group used samples of erbium and titanium for their experiments. Under high pressure, a sample was “loaded” with deuterium gas, an isotope of hydrogen with one proton and one neutron. The metal confines the deuterium nuclei, called deuterons, until it’s time for fusion.

“During the loading process, the metal lattice starts breaking apart in order to hold the deuterium gas,” says Theresa Benyo, an analytical physicist and nuclear diagnostics lead on the project. “The result is more like a powder.” At that point, the metal is ready for the next step: overcoming the mutual electrostatic repulsion between the positively-charged deuteron nuclei, the so-called Coulomb barrier. 

To overcome that barrier requires a sequence of particle collisions. First, an electron accelerator speeds up and slams electrons into a nearby target made of tungsten. The collision between beam and target creates high-energy photons, just like in a conventional X-ray machine. The photons are focused and directed into the deuteron-loaded erbium or titanium sample. When a photon hits a deuteron within the metal, it splits the deuteron into an energetic proton and neutron. The neutron then collides with another deuteron, accelerating it.

At the end of this process of collisions and interactions, you’re left with a deuteron that’s moving with enough energy to overcome the Coulomb barrier and fuse with another deuteron in the lattice.
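A back-of-the-envelope estimate shows why the Coulomb barrier is such an obstacle: it is the electrostatic energy of two unit charges brought to roughly nuclear separation. The 3-femtometer separation below is an illustrative choice, not a value from the NASA papers.

```python
# Rough estimate of the deuteron-deuteron Coulomb barrier described in the
# text: the electrostatic potential energy of two unit charges at nuclear
# distances. The 3 fm separation is illustrative, not from the NASA papers.
KE2 = 1.44  # Coulomb constant times e^2, in MeV*femtometers

def coulomb_barrier_mev(z1: int, z2: int, separation_fm: float) -> float:
    return KE2 * z1 * z2 / separation_fm

barrier = coulomb_barrier_mev(1, 1, 3.0)  # two deuterons, Z = 1 each
print(f"{barrier:.2f} MeV")  # ~0.48 MeV: vastly above room-temperature
                             # thermal energies, hence "hot" fusion
```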

Key to this process is an effect called electron screening, or the shielding effect. Even with very energetic deuterons hurtling around, the Coulomb barrier can still be enough to prevent fusion. But the lattice helps again. “The electrons in the metal lattice form a screen around the stationary deuteron,” says Benyo. The electrons’ negative charge shields the energetic deuteron from the repulsive effects of the target deuteron’s positive charge until the nuclei are very close, maximizing the amount of energy that can be used to fuse.

Aside from deuteron-deuteron fusion, the NASA group found evidence of what are known as Oppenheimer-Phillips stripping reactions. Sometimes, rather than fusing with another deuteron, the energetic deuteron would collide with one of the lattice's metal atoms, either creating an isotope or converting the atom to a new element. The team found that both fusion and stripping reactions produced usable energy.

“What we did was not cold fusion,” says Lawrence Forsley, a senior lead experimental physicist for the project. Cold fusion, the idea that fusion can occur at relatively low energies in room-temperature materials, is viewed with skepticism by the vast majority of physicists. Forsley stresses this is hot fusion, but “We’ve come up with a new way of driving it.”

“Lattice confinement fusion initially has lower temperatures and pressures” than something like a tokamak, says Benyo. But “where the actual deuteron-deuteron fusion takes place is in these very hot, energetic locations.” Benyo says that when she would handle samples after an experiment, they were very warm. That warmth is partially from the fusion, but the energetic photons initiating the process also contribute heat.

There's still plenty of research to be done by the NASA team. Now that they've demonstrated nuclear fusion, the next step is to create reactions that are more efficient and more numerous. When two deuterons fuse, they create either a proton and tritium (a hydrogen atom with two neutrons), or helium-3 and a neutron. In the latter case, that extra neutron can start the process over again, allowing two more deuterons to fuse. The team plans to experiment with ways to coax more consistent and sustained reactions in the metal.

Benyo says that the ultimate goal is still to be able to power a deep-space mission with lattice confinement fusion. Power, space, and weight are all at a premium on a spacecraft, and this method of fusion offers a potentially reliable power source for craft operating in places where solar panels may not be usable, for example. And of course, what works in space could be used on Earth.

Infinera and Windstream Beam 800 Gigabits Per Second Through a Single Optical Fiber


For the first time, an 800-gigabit-per-second connection has been made over a live fiber-optic link. The connection, a joint test conducted in June by Infinera and Windstream, beamed through a fiber-optic line stretching from San Diego to Phoenix. If widely implemented, 800G connections could reduce the costs of operating long-haul fiber networks.

800G should not be confused with the more commonly-known 5G cellular service. In the latter, the “G” refers to the current generation of wireless technology. In fiber optics, the “G” indicates how many gigabits per second an individual cable can carry. For most long-haul routes today, 100G is standard.

The test conducted by Infinera, an optical transmission equipment manufacturer, and Windstream, a service provider, is not the first 800G demonstration, nor is it even the first 800G over long distances. It is, however, the first demonstration over a live network, where conditions are rarely, if ever, as ideal as a laboratory.

“We purposely selected this travel route because of how typical it looks,” says Art Nichols, Windstream’s vice president of architecture and technology.

In a real-world route, amplifiers and repeaters, which boost and regenerate optical signals respectively, are not placed regularly along the route for optimal performance. Instead, they’re placed near where people actually live, work, and transmit data. This means that a setup that might deliver 800 Gbps in a lab may not necessarily work over an irregular live network.

For 800G fiber, 800 Gbps is the maximum data rate possible and usually is not sustainable over very long distances, often falling off after about 100 kilometers. The 800G test conducted by Infinera and Windstream successfully delivered the maximum data rate through a single fiber across more than 730 km. “There’s really a fundamental shift in the underlying technology that made this happen,” says Rob Shore, the senior vice president of marketing at Infinera.

Shore credits Infinera’s Nyquist subcarriers [PDF] for sustaining maximum data rates over long distances. Named for electrical engineer Harry Nyquist, the subcarriers digitally divide a single laser beam into 8 components.

“It’s the same optical signal, and we’re essentially dividing it or compartmentalizing it into separate individual data streams,” Shore says.

Infinera's use of Nyquist subcarriers amplifies the effect of another, widely adopted optical technique: probabilistic constellation shaping. According to Shore, the technique, originally pioneered by Nokia, is a way to “groom” individual optical signals for better performance—including traveling longer distances before suffering from attenuation. Shore says that treating each optical signal as 8 separate signals, thanks to the Nyquist subcarriers, essentially compounds the effects of probabilistic constellation shaping, allowing Infinera's 800G headline data rates to travel much farther than is typically possible.
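The arithmetic of splitting one 800 Gb/s wavelength into 8 subcarriers is straightforward. The 64-QAM, dual-polarization modulation assumed below is a common coherent-optics configuration used here purely for illustration; it is not a detail confirmed in the article.

```python
# Illustrative arithmetic for the Nyquist-subcarrier scheme described above:
# one 800 Gb/s wavelength digitally divided into 8 subcarriers. The assumed
# 64-QAM dual-polarization modulation is a common coherent-optics setup,
# not a detail from the article.
TOTAL_GBPS = 800
NUM_SUBCARRIERS = 8
BITS_PER_SYMBOL = 6 * 2  # 64-QAM (6 bits/symbol) on each of two polarizations

per_subcarrier_gbps = TOTAL_GBPS / NUM_SUBCARRIERS
symbol_rate_gbaud = per_subcarrier_gbps / BITS_PER_SYMBOL

print(per_subcarrier_gbps)          # 100.0 Gb/s carried per subcarrier
print(round(symbol_rate_gbaud, 2))  # 8.33 Gbaud per subcarrier
```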

What's next for 800G after this test? “Obviously, the very first thing we need to do is to actually release the product” used for the demonstration, Shore says, which he expects Infinera to do before the end of the year. 800G fiber could come to play an important part in network backhaul, especially as 5G networks come online around the world. All that wireless data will have to travel through the wired infrastructure somehow, and 800G fiber could ensure there will be bandwidth to spare.

Researchers Demo a New Polarization-Based Encryption Strategy


Telecommunications can never be too secure. That’s why researchers continue to explore new encryption methods. One emerging method is called ghost polarization communication, or GPC. Researchers at the Technical University of Darmstadt, in Germany, recently demonstrated this approach in a proof-of-principle experiment.

GPC uses unpolarized light to keep messages safe. Sunlight is unpolarized light, as are fluorescent and incandescent lights and LEDs. All light waves are made up of an electric field and a magnetic field propagating in the same direction through space. In unpolarized light, the orientation of the electric-field component of a light wave fluctuates randomly. That’s the opposite of polarized light sources such as lasers, in which this orientation is fixed.

It’s often easiest to imagine unpolarized light as having no specific orientation at all, since it changes on the order of nanoseconds. However, according to Wolfgang Elsaesser, one of the Darmstadt researchers who developed GPC, there’s another way to look at it: “Unpolarized light can be viewed as a very fast distribution on the Poincaré sphere.” (The Poincaré sphere is a common mathematical tool for visualizing polarized light.)

In other words, unpolarized light could be a source of rapidly generated random numbers that can be used to encode a message—if the changing polarization can be measured quickly enough and decoded at the receiver.

Suppose Alice wants to send Bob a message using GPC. Using the proof of principle that the Darmstadt team developed, Alice would pass unpolarized light through a half-wave plate—a device that alters the polarization of a beam of light—to encode her message. In this specific case, the half-wave plate would alter the polarization according to the specific message being encoded.

Bob can decode the message only when he receives Alice’s altered beam as well as a reference beam, and then correlates the two. Anyone attempting to listen in on the conversation by intercepting the altered beam would be stymied, because they’d have no reference against which to decode the message.

GPC earned its name because a message may be decoded only by using both the altered beam and a reference beam. “Ghost” refers to the entangled nature of the beams; separately, each one is useless. Only together can they transmit a message. And messages are sent via the beams’ polarizations, hence “ghost polarization.”

Elsaesser says GPC is possible with both wired and wireless communications setups. For the proof-of-principle tests, they relied largely on wired setups, which were slightly easier to measure than over-the-air tests. The group used standard commercial equipment, including fiber-optic cable and 1,550-nanometer light sources (1,550 nanometers is the most common wavelength of light used for fiber communications).

The Darmstadt group confirmed GPC was possible by encoding a short message by mapping 0 bits and 1 bits using polarization angles agreed upon by the sender and receiver. The receiver could decode the message by comparing the polarization angles of the reference beam with those of the altered beam containing the message. They also confirmed the message could not be decoded by an outside observer who did not have access to the reference beam.
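
The protocol can be sketched in miniature. This toy Python model is not the Darmstadt setup: the bit-to-angle mapping (30 and 75 degrees) and the discrete per-symbol angles are invented for illustration, whereas the real experiment correlates analog polarization fluctuations measured in hardware.

```python
import random

random.seed(1)
OFFSET = {0: 30.0, 1: 75.0}   # agreed bit -> extra polarization rotation (deg)

def transmit(bits):
    """Return (reference, altered) beams as lists of polarization angles."""
    reference = [random.uniform(0, 180) for _ in bits]  # unpolarized: random angles
    altered = [(r + OFFSET[b]) % 180 for r, b in zip(reference, bits)]
    return reference, altered

def decode(reference, altered):
    """Correlate the two beams to recover the bit-dependent rotations."""
    bits = []
    for r, a in zip(reference, altered):
        shift = (a - r) % 180
        bits.append(min(OFFSET, key=lambda b: min(abs(shift - OFFSET[b]),
                                                  180 - abs(shift - OFFSET[b]))))
    return bits

message = [0, 1, 1, 0, 1]
ref, alt = transmit(message)
assert decode(ref, alt) == message
# An eavesdropper holding only `alt` sees angles indistinguishable from noise.
```

Note that `alt` on its own is just a sequence of uniformly random angles; only the correlation with the reference beam carries information, which is the core of the scheme’s security claim.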

However, Elsaesser stresses that the tests were preliminary. “The weakness at the moment,” he says, “is that we have not concentrated on the modulation speed or the transmission speed.” What mattered was proving that the idea could work at all. Elsaesser says this approach is similar to that taken for other forms of encryption, like chaos communications, which started out with low transmission speeds but have seen rates tick up as the techniques have been refined.

Robert Boyd, a professor of optics at the University of Rochester, in N.Y., says that the most important question to answer about GPC is how it compares to the Rivest-Shamir-Adleman (RSA) system commonly used to encode messages today. Boyd suspects that the Darmstadt approach, like the RSA system, is not absolutely secure. But he says that if GPC turned out to be more secure or more efficient to implement—even by a factor of two—it would have a tremendous advantage over RSA.

Moving forward, Elsaesser already has ideas on how to simplify the Darmstadt system. And because the team has now demonstrated GPC with commercial components, Elsaesser expects that a refined GPC system could simply be plugged into existing networks for an easy security upgrade.

This article appears in the July 2020 print issue as “New Encryption Strategy Passes Early Test.”

Researchers Use Lasers to Bring the Internet Under the Sea

Post Syndicated from Michael Koziol original

Communicating underwater has always been a hassle. Radio transmissions, the ubiquitous wireless standard above the waves, can’t transmit very far before being entirely absorbed by the water. Acoustic transmissions (think sonar) are the preferred choice underwater, but they suffer from very low data rates. Wouldn’t it be nice if we could all just have Wi-Fi underwater instead?

Underwater Wi-Fi is exactly what researchers at the King Abdullah University of Science and Technology (KAUST) in Thuwal, Saudi Arabia, have developed. The system, which they call Aqua-Fi, uses a combination of lasers and off-the-shelf components to create a bi-directional wireless connection for underwater devices. The system is fully compliant with IEEE 802.11 wireless standards, meaning it can easily connect to and function as part of the broader Internet.

Here’s how it works: Say you have a device underwater that needs to transmit data (for the KAUST researchers, it was a waterproofed smartphone). A regular Wi-Fi signal connects that device to an underwater “modem,” in this case a Raspberry Pi. The Raspberry Pi converts the wireless signal to an optical signal (here, a laser beam) aimed at a receiver attached to a surface buoy. From there, established communications techniques send the signal on to an orbiting satellite. For the underwater device to receive data, the process is simply reversed.
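
The conversion step at the heart of that “modem” can be sketched as simple on-off keying. This is a hypothetical model, not KAUST’s actual modulation scheme: each bit of a packet becomes a laser-on or laser-off sample, and the receiver rebuilds the bytes from the light it measures.

```python
def modulate(payload: bytes) -> list:
    """Map each bit to a light sample: 1 = laser on, 0 = laser off."""
    return [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]

def demodulate(samples: list) -> bytes:
    """Rebuild bytes from received light-intensity samples."""
    out = bytearray()
    for i in range(0, len(samples), 8):
        chunk = samples[i:i + 8]
        out.append(sum(bit << (7 - j) for j, bit in enumerate(chunk)))
    return bytes(out)

packet = b"ping"                       # stand-in for a Wi-Fi frame payload
assert demodulate(modulate(packet)) == packet
```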

Aqua-Fi stems from work that the KAUST researchers did back in 2017, when they used a blue laser to transmit a 1.2-gigabit file underwater. But that wasn’t interesting enough, according to Basem Shihada, an associate professor of computer science at KAUST and one of the researchers on the Aqua-Fi project. “Who cares about submitting just a file?” he says. “Let’s do something with a bit more life.”

It was that thinking that spurred the team to start looking at bi-directional communications, with the ultimate goal of building a system that can transfer high-resolution video.

Shihada says it was important to him that all of the components be off the shelf. “My first rule when we started this project: I do not want to have something that is [custom made for this],” he says. The only exception is the circuit in the Raspberry Pi that converts the wireless signal to an optical signal and vice versa.

The team used LEDs instead of lasers in their first design, but found the LEDs were not powerful enough for high data rates. With LEDs, the beams were limited to distances of about 7 meters and data rates of about 1 kilobit per second. When they upgraded to blue and green lasers, they achieved 2.11 megabits per second over 20 meters.

Shihada says that currently, the system is limited by the capabilities of the Raspberry Pi. The team burned out the custom circuit responsible for converting optical and wireless signals when, on two occasions, they used a laser that was too powerful. He says that in order for this setup to incorporate more powerful lasers that can both communicate farther and transmit more data, the Raspberry Pi will need to be swapped out for a dedicated optical modem.

Even with the limitations of the Raspberry Pi, the KAUST researchers were able to use Aqua-Fi to place Skype calls and transfer files.

But there’s still a big problem that needs to be addressed to make a system like Aqua-Fi commercially viable, and it can’t be solved as easily as swapping out the Raspberry Pi. “If you want to imagine how to build the Internet underwater,” says Shihada, “laser alignment remains the most challenging part.” Because lasers are so precise, even mildly turbulent waters can knock a beam off course and cause it to miss a receptor.

The KAUST researchers are exploring two options to solve the alignment problem. The first is to use a technique similar to the “photonic fence” developed to kill mosquitoes. A low-power guide laser would scan for the receptor. When a connection is made, it would inform another, higher-powered laser to begin sending data. If the waves misaligned the system again, the high-power laser would shut off and the guide laser would kick in, initiating another search.
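
The photonic-fence approach amounts to a small state machine. Here’s a minimal sketch, with invented boolean readings standing in for the guide laser’s view of the receptor: the high-power data laser fires only while alignment holds, and drops back to searching the moment waves break the lock.

```python
def data_laser_states(lock_readings):
    """Return the data laser's state for each guide-laser reading."""
    states = []
    searching = True
    for locked in lock_readings:
        if searching and locked:
            searching = False      # guide laser found the receptor; start data
        elif not searching and not locked:
            searching = True       # beam knocked off course; resume the search
        states.append("off" if searching else "on")
    return states

# True = guide laser sees the receptor, False = misaligned by the water.
assert data_laser_states([False, True, True, False, True]) == \
    ["off", "on", "on", "off", "on"]
```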

The other option is a MIMO-like solution using a small array of receptors, so that even if the laser emitter is jostled a bit by the water, it will still maintain a connection.

You might still be asking yourself at this point why anyone even needs the Internet underwater. First, there’s plenty of need in underwater conservation, such as the remote monitoring of sea life and coral reefs. High-definition video collected and transmitted by wireless undersea cameras can be immensely helpful to conservationists.

But it’s also helpful to the high-tech world. Companies like Microsoft are exploring the possibility of placing data centers offshore and underwater. Placing data centers on the ocean floor could save money both on cooling the equipment and on energy, if the kinetic energy of the waves can be harvested and converted to electricity. And if there are data centers underwater, the Internet will need to be there too.

How Off-Grid, Lights-Out Cell Sites Will Aid the Effort to Bring the Next Billion People Online

Post Syndicated from Michael Koziol original

Rural connectivity has always been a headache for service providers expanding their networks. Smaller, more spread-out communities by their nature require more investment and infrastructure for fewer customers and smaller returns. Lighting up a street of apartment buildings in a major city can bring in dozens if not hundreds of new customers with just one fiber cable, for example, while miles of cable might be needed for each new customer in a sparsely populated area.

The problem is even more acute in many parts of Africa, where a higher percentage of the population is rural. Connecting the people in those regions has so far been a tough task.

In an effort to change that, Clear Blue Technologies, a Canadian company developing off-grid power systems, will be working with South African provider MTN and others to roll out 500 new off-grid, no-maintenance 2G and 3G cell towers in Africa. The $3.5 million CAD contract is the first step in a 3- to 5-year program to expand cellular networks in a dozen countries in a way that’s suited to much of Africa’s large size and rural demographics.

“What’s unique about Africa,” says Miriam Tuerk, the CEO of Clear Blue, “is the pent-up demand.” People there have smartphones, but they don’t necessarily have the networks to use them to their full potential.

Tuerk outlines two key challenges faced by any rural network buildout. The first is the cost of the cell towers themselves, specifically the software and hardware. Because rural networks inherently require a lot of towers to cover a wide area, the cost of licensing or purchasing software and hardware can be staggering. Tuerk suggests that projects like OpenCellular, which are developing open source software, can help bring down those costs.

More critically, cell towers require power. Traditionally, a cell tower receives power through the grid or, often in cases of emergency, diesel generators. Of course, building out power grid infrastructure is just as expensive as building out wired communications networks. When it comes to refilling diesel generators—well, you can imagine how time consuming it could be to regularly drive out and refill generators in person.

“If you want to move into rural areas, you really have to have a completely lights-off, off-the-grid solution,” says Tuerk. That means no grid connections and no regular maintenance visits to the cell sites to cut down on installation and maintenance costs. Tuerk says Clear Blue’s power tech makes it possible for a cell site to be installed for about US $10,000, whereas a traditional installation can start at between $30,000 and $50,000 per site.

Clear Blue was selected by the Telecom Infra Project to supply tech for off-grid cell towers. The Telecom Infra Project is a collaborative effort by companies to expand telecom network infrastructure, with the stated goal of bringing the next billion people online. For those currently living in places without a grid, something like Clear Blue’s tech is essential.

Clear Blue provides power to cell sites through solar, but Tuerk says it’s more than just plugging some solar panels into a cell tower and calling it a day. “If you’re solar, it has to work through weather,” she says. Lights-out, off-grid cell towers require energy forecasting, to ensure that service doesn’t drop overnight or during a string of cloudy days. Tuerk compares it to driving an electric car between two cities, with no charging stations in between: You want to make sure you have enough juice to make it to the other side.

The sites to be installed will conduct energy forecasting on site using data analytics and weather forecasts to determine energy usage. The sites will also report data back to Clear Blue’s cloud, so that the company can refine energy forecasts using aggregate data.
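
In miniature, that kind of on-site energy budgeting might look like the following sketch. All the numbers are invented, not Clear Blue’s model: the site survives a cloudy stretch only if the battery plus forecast solar input covers the load each day.

```python
def battery_levels(battery_wh, solar_forecast_wh, daily_load_wh):
    """Return the battery's state of charge at the end of each day."""
    levels = []
    charge = battery_wh
    for solar in solar_forecast_wh:
        # Charge from solar (capped by capacity), then pay the day's load.
        charge = min(battery_wh, charge + solar) - daily_load_wh
        charge = max(charge, 0)    # hitting zero means a service outage
        levels.append(charge)
    return levels

# Three sunny days, then two cloudy ones (forecast solar input in Wh).
forecast = [1200, 1200, 1200, 300, 300]
print(battery_levels(2000, forecast, 800))   # -> [1200, 1200, 1200, 700, 200]
```

Running the forecast ahead of time is what lets the site ride out the cloudy days in this example rather than discovering the shortfall after dark.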

The initial rollouts have been delayed because of the coronavirus pandemic, but Tuerk is confident that they’ll still meet their goal of installing at least 500 sites by the end of the year. As Tuerk points out, it’s almost expected that projects in Africa get delayed because of some unforeseen hiccup. As the saying goes, “This is Africa.”

The Pandemic Has Slowed Wireless Network Buildouts

Post Syndicated from Michael Koziol original

When cities and states in the United States first began lockdowns due to the COVID-19 pandemic, a big concern initially was whether the Internet and other communications networks would hold up under the strain. Thankfully, aside from some minor hiccups and latency problems, the Internet’s infrastructure seems to be doing just fine.

It’s one thing to know that our current communications infrastructure is up to the task, but what about future networks? After all, plenty of networks were under construction when social distancing and shelter-in-place measures were instituted across the U.S.

The gist is that lockdowns have made it difficult, though not impossible, for companies to complete their network buildouts on schedule. For a wireless network operator, that can mean more than just deadlines—it could mean the loss of its license to operate a network over the frequencies it has been assigned.

Here’s how network buildout typically works in the U.S.: When a company wants to build and operate a wireless network, it must receive a license from the Federal Communications Commission to operate over a specific set of frequencies in a given area. That license also includes a deadline: By a specific date, the company must have finished its network buildout and started sending signals. Otherwise, the frequencies are up for grabs.

But now, companies are struggling to meet their deadlines. “It’s not like these guys are trying to skirt the rules,” says Mark Crosby, CEO of the Enterprise Wireless Alliance, an industry trade association. “It’s not that they don’t want to finish their construction, it’s just that things are difficult right now.”

Crosby says he’s heard plenty of stories from wireless companies that are members of EWA. Some can’t get cranes out to sites for tower construction, while others have had delays in getting technicians and utility workers on site. Crosby’s favorite story, so far, is a company that had a sheriff come out to a site to tell the workers that they could finish their work, but because of the risk that some of them might be carrying COVID-19, they shouldn’t plan on coming back into town and should leave as soon as they were done. “It’s like it’s straight out of the movies, the sheriff saying ‘I thought I told you to get out of town,’” says Crosby.

Because of these problems, the EWA petitioned the FCC for a waiver of buildout deadlines until 31 August 2020. The FCC responded by giving all companies with a deadline between 15 March and 15 May an additional 60 days to complete their buildouts [PDF]. Although it’s not as much as the EWA initially asked for, Crosby believes companies will be okay, and says that the EWA will continue to monitor the situation, should more time be needed.

Prior to the FCC’s decision, the agency was granting deadline waivers on a case-by-case basis. Although companies are traditionally able to apply for deadline extensions—and Crosby notes that such extensions are almost always granted, provided the company is actually working on their buildout—the EWA’s position was that a case-by-case approach was too cumbersome during the pandemic. “The FCC is all working from home,” Crosby says. “It’s just as difficult for them.” To avoid headaches and paperwork, it makes more sense to grant a blanket extension.

The FCC did not respond to a request for comment on the current state of buildout delays and why its response was not the same as the EWA’s original request.

While the pandemic is undoubtedly disrupting network buildouts, Crosby for one is equally certain it won’t cause lingering problems. He’s confident that when state and local governments begin lifting lockdown measures, companies will have no trouble wrapping up their delayed buildouts and getting new construction done on schedule. “The EWA wouldn’t support [taking undue advantage of the situation],” he says. “If everything’s normal, don’t roll deadlines. We do not want people sitting on the spectrum.”

How the Internet Can Cope With the Explosion of Demand for “Right Now” Data During the Coronavirus Outbreak

Post Syndicated from Michael Koziol original

The continuing spread of COVID-19 has forced far more people to work and learn remotely than ever before. And more people in self-isolation and quarantine means more people are streaming videos and playing online games. The spike in Internet usage has some countries looking at ways to curb streaming data to avoid overwhelming the Internet.

But the amount of data we’re collectively using now is not actually the main cause of the problem. It’s the fact that we’re all suddenly using many more low-latency applications: teleconferencing, video streaming, and so on. The issue is not that the Internet is running out of throughput. It’s that there’s a lot more demand for data that needs to be delivered without any perceivable delay.

“The Internet is getting overwhelmed,” says Bayan Towfiq, the founder and CEO of Subspace, a startup focusing on improving the delivery of low-latency data. “The problem is going to get worse before it gets better.” Subspace wasn’t planning to come out of stealth mode until the end of this year, but the COVID-19 crisis has caused the company to rethink those plans.

“What’s [been a noticeable problem while everyone is at home streaming and videoconferencing] is less than one percent of data. [For these applications] it’s more important than the other 99 percent,” says Towfiq. While we all collectively use far more data loading webpages and browsing social media, we don’t notice if a photo takes half a second to load in the same way we notice a half-second delay on a video conference call.

So if we’re actually not running out of data throughput, why the concern over streaming services and teleconferencing overloading the Internet?

“The Internet doesn’t know about the applications running on it,” says Towfiq. Put another way, the Internet is agnostic about the type of data moving from point A to point B. What matters most, based on how the Internet has been built, is moving as much data as possible.

And normally that’s fine, if most of the data is in the form of emails or people browsing Amazon. If a certain junction is overwhelmed by data, load times may be a little slower. But again, we barely notice a delay in most of the things for which we use the Internet.

The growing use of low-latency applications, however, means those same bottlenecks are painfully apparent. When a staff Zoom meeting has to contend with someone trying to watch The Mandalorian, the Internet sees no difference between your company’s videochat and Baby Yoda.

For Towfiq, the solution to the Internet’s current stress is not to cut back on the amount of video-conferencing, streaming, and online gaming, as has been suggested. Instead, the solution is what Subspace has been focused on since its founding last year: changing how the Internet works by forcing it to prioritize that one percent of data that absolutely, positively has to get there right away.
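
The kind of prioritization Towfiq describes can be sketched with a simple priority queue. The traffic classes and packets below are invented for illustration, not Subspace’s actual machinery: the point is that the network, not the application, ranks traffic so the latency-critical sliver always leaves first.

```python
import heapq

LATENCY_CLASS = {"videochat": 0, "game": 0, "web": 1, "bulk": 2}  # lower = sooner

def drain(packets):
    """Return payloads in transmit order: by class, then arrival (FIFO ties)."""
    queue = []
    for seq, (kind, payload) in enumerate(packets):
        # seq preserves arrival order among packets of the same class.
        heapq.heappush(queue, (LATENCY_CLASS[kind], seq, payload))
    return [heapq.heappop(queue)[2] for _ in range(len(queue))]

arrivals = [("bulk", "photo"), ("videochat", "frame1"),
            ("web", "page"), ("videochat", "frame2")]
assert drain(arrivals) == ["frame1", "frame2", "page", "photo"]
```

The photo loads a beat later and nobody notices; the video frames arrive on time and everybody does.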

Subspace has been installing both software and hardware for ISPs in cities around the world designed to do exactly that. Towfiq says ISPs already saw the value in Subspace’s tech after the company demonstrated that it could make online gaming far smoother for players by reducing the amount of lag they dealt with.

Initially Subspace was sending out engineers to personally install its equipment and software for ISPs and cities they were working with. But with the rising demand and the pandemic itself, the company is transitioning to “palletizing” its equipment: making it so that, after shipping it, the city or ISP can plug in just a few cables and change how their networks function.

Now, Towfiq says, the pandemic has made it clear that the startup needed to immediately come out of stealth. Even though Subspace was already connecting its new tech to cities’ network infrastructure at a rate of five per week in February, coming out of stealth will allow the company to publicly share information about what it’s working on. The urgency, says Towfiq, outweighed the company’s original plans to conduct some proof-of-concept trials and build out a customer base.

“There’s a business need that’s been pulled out of us to move faster and unveil right now,” Towfiq says. He adds that Subspace didn’t make the decision to come out of stealth until last Tuesday. “There’s a macro thing happening with governments and Internet providers not knowing what to do.”

Subspace could offer the guidance these entities need to avoid overwhelming their infrastructure. And once we’re all back to something approximating normal after the COVID-19 outbreak, the Internet will still benefit from the types of changes Subspace is making. As Towfiq says, “We’re becoming a new kind of hub for the Internet.”

Snag With Linking Google’s Undersea Cable to Saint Helena Could Leave Telecom Monopoly Entrenched

Post Syndicated from Michael Koziol original

Last June, Google announced an addition to the company’s planned Equiano undersea cable. In addition to stretching down the length of Africa’s western coastline, a branch would split off to connect to the remote island of Saint Helena, a British overseas territory. The cable would be an incredible boon for the island of about 4,500 people, who today rely on a shared 50-megabit-per-second satellite link for Internet connections.

The cable will deliver an expected minimum of several hundred gigabits per second. That’s far more data than the island’s residents can use, and would in fact be prohibitively expensive for Helenians. To make the cable’s costs feasible, Christian von der Ropp—who started the Connect Saint Helena campaign in 2011—worked on the possibility of getting satellite ground stations installed on the island.

These ground stations would be crucial links between the growing number of satellites in orbit and our global network infrastructure. One of the biggest problems with satellites, especially lower-orbiting ones, is that they spend significant chunks of time without a good backhaul connection—usually because they’re over an ocean or a remote area on land. The southern Atlantic, as you can surely guess, is one such spot. Saint Helena happens to be right in the middle of it.

Von der Ropp found there was interest among satellite companies to build ground stations on the island. OneWeb, Spire Global, and Laser Light have all expressed interest in building infrastructure on the island. The ground stations would be a perfect match for the cable’s throughput, taking up the bulk of the cable’s usage and effectively subsidizing the costs of high-speed access for Helenians.

But what seemed like smooth sailing for the cable has now run into another snag. The island government is currently at odds with the island’s telecom monopoly, Sure South Atlantic. If the dispute cannot be resolved by the time the cable lands in late 2021 or early 2022, Helenians could see incredibly fast Internet speeds come to their shores, only to go nowhere once they arrive.

“The arrival of unlimited broadband places [Sure’s] business at risk,” says von der Ropp. He points out that, in general, when broadband access becomes essentially unlimited, users move away from traditional phone and television services in favor of messaging and streaming services like Skype, WhatsApp, and Netflix. If fewer Helenians are paying for Sure’s service packages, the company may instead jack up the prices on Internet access—which Helenians would then be forced to pay.

Most pressing, however, is that the island’s infrastructure simply cannot handle the data rates the Equiano branch will deliver. Because Sure is a monopoly, the company has little incentive to upgrade or repair infrastructure in any but the direst circumstances (Sure did not respond to a request for comment for this story).

That could give satellite operators cold feet as well. Under Sure’s current contract with the island government, satellite operators would be forbidden from running their own fiber from their ground stations to the undersea cable’s terminus. They would be reliant on Sure’s existing infrastructure to make the connection.

Sure’s current monopoly contract is due to expire on December 31, 2022—about a year after the cable is due to land on the island—assuming that the Saint Helena government does not renew the contract. Given the dissatisfaction of many on the island with the quality of service, that appears to be a distinct possibility. Right now, for example, Helenians pay 82 pounds per month for 11 gigabytes of data under Sure’s Gold Package. The moment they exceed that cap, Sure charges them 5 pence per megabyte, nearly a sevenfold jump in the per-megabyte cost of data.

11 GB per month may seem hard to burn through, but remember that for Helenians, that data covers everything—streaming, browsing the Internet, phone calls, and texting. For a Helenian that has exceeded their data cap, a routine 1.5 GB iPhone update could cost them an additional 75 pounds.
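
The tariff arithmetic checks out; here is a quick sketch, taking 1 GB as 1,000 MB for simplicity.

```python
GOLD_PRICE_GBP = 82.0
GOLD_CAP_MB = 11 * 1000
OVERAGE_GBP_PER_MB = 0.05               # 5 pence per megabyte

in_plan_rate = GOLD_PRICE_GBP / GOLD_CAP_MB        # GBP per MB inside the cap
update_cost = 1.5 * 1000 * OVERAGE_GBP_PER_MB      # 1.5 GB iPhone update

print(f"in-plan: {in_plan_rate * 100:.2f}p/MB vs. overage: 5p/MB")
print(f"1.5 GB update beyond the cap: {update_cost:.0f} pounds")
```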

But it could be hard to remove Sure as a monopoly. If the island government ends the contract, Sure has a right of compensation for all assets on the island. Von der Ropp estimates that means the government would be required to compensate Sure in the ballpark of four or five million pounds. That’s an extremely hefty sum, considering the government’s total annual budget is between 10 and 20 million pounds.

“They will need cash to pay the monopoly’s ransom,” says von der Ropp, adding that it will likely be up to the United Kingdom to foot the bill. Meanwhile, the island will need to look for new providers to replace Sure, ones that will hopefully invest in upgrading the island’s deteriorating infrastructure.

There is interest in doing just that. As Councilor Cyril Leo put it in a recent speech to the island’s Legislative Council, “Corporate monopoly in St Helena cannot have the freedom to extract unregulated profits, from the fiber-optic cable enterprise, at the expense of the people of Saint Helena.” What remains to be seen is if the island can actually find a way to remove that corporate monopoly.

Finally, an Effort to Make Inflight Wi-Fi Less Awful

Post Syndicated from Michael Koziol original

Inflight Wi-Fi is terrible.

To be fair, the fact that we can connect to the Internet at all while we’re screaming across the sky at hundreds of kilometers an hour in a metal tube is pretty incredible. But still, the quality of that Internet connection often leaves something to be desired.

Jack Mandala, the CEO of the industry group Seamless Air Alliance, attributes the generally poor inflight Wi-Fi experience in part to the fact that industry-wide standards have never been developed for the technology. “With regard to inflight connectivity, it’s largely just been proprietary equipment,” he says.

A Radio Frequency Exposure Test Finds an iPhone 11 Pro Exceeds the FCC’s Limit

Post Syndicated from Michael Koziol original

A test by Penumbra Brands to measure how much radiofrequency energy an iPhone 11 Pro gives off found that the phone emits more than twice the amount allowable by the U.S. Federal Communications Commission.

The FCC measures exposure to RF energy as the amount of wireless power a person absorbs for each kilogram of their body. The agency calls this the specific absorption rate, or SAR. For a cellphone, the FCC’s threshold of safe exposure is 1.6 watts per kilogram. Penumbra’s test found that an iPhone 11 Pro emitted 3.8 W/kg.

Ryan McCaughey, Penumbra’s chief technology officer, said the test was a follow-up to an investigation conducted by the Chicago Tribune last year. The Tribune tested several generations of Apple, Samsung, and Motorola phones and found that many exceeded the FCC’s limit.

No Job Is Too Small for Compact Geostationary Satellites

Post Syndicated from Michael Koziol original

A typical geostationary satellite is as big as a van, as heavy as a hippopotamus, and as long-lived as a horse. These parameters shorten the list of companies and countries that can afford to build and operate the things.

Even so, such communications satellites have been critically important because their signals reach places that would otherwise be tricky or impossible to connect. For example, they can cover rural areas that are too expensive to hook up with optical fiber, and they provide Wi-Fi services to airliners and wireless communications to ocean-going ships.

The problem with these giant satellites is the cost of designing, building, and launching them. To become competitive, some companies are therefore scaling down: They’re building geostationary “smallsats.”

Two companies, Astranis and GapSat, plan to launch GEO smallsats in 2020 and 2021, respectively. Both companies acknowledge that there will always be a place for those hulking, minibus-size satellites. But they say there are plenty of business opportunities for a smaller, lighter, more flexible version—opportunities that the newly popular low Earth orbit (LEO) satellite constellations aren’t well suited for.

Market forces have made GEO smallsats desirable; technological advances have made them feasible. The most important innovations are in software-defined radio and in rocket launching and propulsion.

One reason geostationary satellites have to be so large is the sheer bulk of their analog radio components. “The biggest things are the filters,” says John Gedmark, the CEO and cofounder of Astranis, referring to the electronic component that keeps out unwanted radio frequencies. “This is hardware that’s bolted onto waveguides, and it’s very large and made of metal.”

Waveguides are essentially hollow metal tubes that carry signals between the transmitters and receivers and the actual antennas. For C-band frequencies, for instance, a waveguide can be 3.5 centimeters wide. While that may not seem so big, analog satellites need a waveguide for every frequency band they send and receive over, and the number of waveguides can add up quickly. Combined, the waveguides, filters, and other signal equipment make up a massive, expensive volume of hardware that, until recently, had no work-around.
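
The bulk comes down to physics: a rectangular waveguide only passes frequencies above a cutoff set by its width, so lower frequencies demand wider metal. A quick sketch of that relationship, assuming the guide’s dominant TE10 mode:

```python
# For the dominant TE10 mode of a rectangular waveguide, the cutoff
# frequency is f_c = c / (2a), where a is the broad-wall width. A roughly
# 3.5-centimeter guide therefore cuts off near 4.3 GHz, which suits C-band
# uplink signals around 6 GHz.
C = 299_792_458.0                      # speed of light, m/s

def te10_cutoff_ghz(width_m):
    return C / (2 * width_m) / 1e9

print(f"{te10_cutoff_ghz(0.035):.2f} GHz")   # ~4.28 GHz for a 3.5 cm guide
```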

Software-defined radio provides a much more compact digital alternative. Although SDR as a concept has been around for decades, in recent years improved processors have made it increasingly practical to replace analog parts with much smaller digital equivalents.
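
To see what “replacing analog parts with digital equivalents” means in miniature, here’s a toy sketch of a digital filter doing a job that would otherwise need metal hardware. A windowed-sinc FIR low-pass keeps a wanted in-band tone and rejects an out-of-band one; every sample rate, tone, and tap count here is illustrative, not drawn from any satellite design.

```python
import math

def fir_lowpass(cutoff_hz, fs_hz, ntaps=101):
    """Windowed-sinc low-pass FIR coefficients, normalized to unity DC gain."""
    fc = cutoff_hz / fs_hz
    mid = (ntaps - 1) / 2
    taps = []
    for n in range(ntaps):
        x = n - mid
        sinc = 2 * fc if x == 0 else math.sin(2 * math.pi * fc * x) / (math.pi * x)
        hamming = 0.54 - 0.46 * math.cos(2 * math.pi * n / (ntaps - 1))
        taps.append(sinc * hamming)
    gain = sum(taps)
    return [t / gain for t in taps]

def apply_fir(signal, taps):
    """Direct-form convolution of the signal with the filter taps."""
    return [sum(taps[k] * signal[n - k] for k in range(len(taps)) if n - k >= 0)
            for n in range(len(signal))]

fs = 48_000.0
t = [n / fs for n in range(2000)]
wanted = [math.sin(2 * math.pi * 1_000 * x) for x in t]     # in-band tone
blocker = [math.sin(2 * math.pi * 12_000 * x) for x in t]   # out-of-band tone
filtered = apply_fir([a + b for a, b in zip(wanted, blocker)],
                     fir_lowpass(4_000, fs))
# Once the filter settles, `filtered` tracks `wanted` (delayed by 50 samples)
# and the 12-kHz blocker is attenuated by tens of decibels.
```

In a real SDR, this arithmetic runs on a processor or FPGA; the point is that a few lines of math stand in for a bolted-on metal filter.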

Another technological improvement with an even longer pedigree is electric propulsion, which generates thrust by accelerating ionized propellant through an electric field. This method generates less thrust than chemical systems, which means it takes longer to move a satellite into a new orbit. However, electric propulsion gets far more mileage out of a given quantity of propellant, and that saves a lot of space and weight.
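
The propellant savings follow directly from the Tsiolkovsky rocket equation, which gives required propellant as m_dry * (exp(Δv / (Isp · g0)) − 1). A quick sketch with illustrative round numbers, not figures from either company:

```python
import math

G0 = 9.80665                           # standard gravity, m/s^2

def propellant_kg(dry_mass_kg, delta_v_ms, isp_s):
    """Propellant needed for a given delta-v, per the rocket equation."""
    return dry_mass_kg * (math.exp(delta_v_ms / (isp_s * G0)) - 1)

# A 500-kg satellite performing 1.5 km/s of orbit-raising:
chemical = propellant_kg(500, 1500, 300)    # chemical thruster, Isp ~300 s
electric = propellant_kg(500, 1500, 1800)   # ion thruster, Isp ~1800 s
print(f"chemical: {chemical:.0f} kg of propellant, electric: {electric:.0f} kg")
```

Under these assumptions the electric thruster needs several times less propellant for the same maneuver, which is exactly the space and weight savings described above, paid for in longer transit times.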

It’s worth mentioning that a smallsat may not be as minuscule as its name suggests; just how small it can be depends on whom you’re talking to. GapSat’s satellite, GapSat-2, for example, weighs 20 kilograms and takes up about as much space as a mailbox, according to David Gilmore, the company’s chief operating officer. At the smallest end, smallsats—sometimes called microsats or nanosats—are similar in size to the briefcase-size CubeSats being developed by SpaceX, OneWeb, and other companies for use in massive LEO constellations.

The miniaturization of geostationary satellites owes as much to market forces as to technology. Back in the 1970s, demand for broad spectrum access (for example, to broadcast television) favored large satellites bearing tons of transponders that could broadcast across many frequencies. But most of the fruits of the wireless spectrum have been harvested. Business opportunities now center on the scraps of spectrum remaining.

“There’s a lot of snippets of spectrum left over that the big guys have left behind,” says Rob Schwarz, the chief technology officer of space infrastructure at Maxar Technologies in Palo Alto, Calif. “That doesn’t fit into that big-satellite business model.”

Instead, companies like Astranis and GapSat are building business models around narrower spectrum access, transmitting over a smaller range of frequencies or covering a more localized geographic area.

Meanwhile, private rocket firms are cutting the cost of putting things up in orbit. SpaceX and Blue Origin have both developed reusable rockets that reduce costs per launch. And it’s easy enough to design rockets specifically to carry a lot of small packages rather than one huge one, which means the cost of a single launch can be spread over many more satellites.

That’s not to say that van-size GEO satellites are going extinct. “So there’s a trend to building bigger, more complicated, more sophisticated, much more expensive, and much higher-bandwidth satellites that make all the satellites ever built in the history of the world pale in comparison,” says Gregg Daffner, the chief executive officer of GapSat. These are the satellites that will replace today’s GEO communications satellites, the ones that can give wide-spectrum coverage to an entire continent.

But a second trend, the one GapSat is betting on, is that the new GEO smallsats won’t directly compete against those hemisphere-covering behemoths.

“For a customer that has the spectrum and the market demand, bigger is still better,” says Maxar’s Schwarz. “The challenge is whether or not their business case can consume a terabit per second. It’s sort of like going to Costco—you get the cheapest possible package of ground beef, but if you’re alone, you’ve got to wonder, am I going to eat 25 pounds of ground beef?” Companies like Astranis and GapSat are looking for the consumers that need just a bit of spectrum for only a very narrow application.

Astranis, for example, is targeting service providers that have no need for a terabit per second. “The smaller size means we can find customers who are interested in most, if not all, of the satellite’s capacity right off the bat,” says CEO Gedmark. Bigger satellites, by comparison, can take years to sell all their available capacity; Astranis instead intends to launch each satellite already knowing who its single customer is. Astranis plans to launch its first satellite on a SpaceX Falcon 9 rocket in the last quarter of 2020. The satellite, built for the Alaska-based service provider Pacific Dataport, will have 7.5 gigabits per second of capacity in the Ka-band (26.5 to 40 gigahertz).

According to Gedmark, preventing overheating was one of the biggest technical challenges Astranis faced, because of the great power the company is packing into a small volume. There’s no air up there to carry away excess heat, so the satellite is entirely reliant on thermal radiation.

Although the Astranis satellite in many ways functions like a larger communications satellite, just for customers that need less capacity, GapSat—as its name implies—is looking to plug holes in the coverage of those larger satellites. Often, satellite service providers find that for a few months they need to supply a bit more bandwidth than a satellite in orbit can currently handle. That typically means they’ll need to borrow bandwidth from another satellite, which can involve tricky negotiations. Instead, GapSat believes, GEO smallsats can meet these temporary demands.

Historically, GapSat has been a bandwidth broker, connecting customers that needed temporary coverage with satellite operators that were in a position to offer it. Now the company plans to launch GapSat-2 to provide its own temporary service. The satellite would sit in place for just a few months before moving to another orbital slot for the next customer.

However, GapSat’s plan created a bit of a design conundrum. On the one hand, GapSat-2 needed to be small, to keep costs manageable and to be able to quickly shift into a new orbital slot. On the other, the satellite also had to work in whatever frequencies each customer required. Daffner calls the specifics of the company’s solution its “secret sauce,” but the upshot is that GapSat has developed wideband transponders to offer coverage in the Ku-band (12 to 18 GHz), Ka-band (26.5 to 40 GHz), and V/Q-band (33 to 75 GHz), depending on what each customer needs.

Don’t expect there to be much of a clash between the smallsat companies and the companies deploying LEO constellations. Gedmark says there’s little to no overlap between the two emerging forms of space-based communications.

He notes that because LEO constellations are closer to Earth, they have an advantage for customers that require low latency. LEO constellations may be a better choice for real-time voice communications, for example. If you’ve ever been on a phone call with a delay, you know how jarring it can be to have to wait for the other person to respond to what you said. However, by their nature, LEO constellations are all-or-nothing affairs because you need most if not all of the constellation in space to reliably provide coverage, whereas a small GEO satellite can start operating all by itself.
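The latency gap is set by simple geometry and the speed of light. A quick sketch of the best-case propagation delay to a satellite directly overhead; the 550-kilometer LEO altitude is an assumption chosen for illustration:

```python
C = 299_792_458  # speed of light, m/s

def min_one_way_delay_ms(altitude_km: float) -> float:
    """Best-case (straight-up) propagation delay to a satellite, in ms."""
    return altitude_km * 1000 / C * 1000

geo = min_one_way_delay_ms(35_786)  # geostationary altitude
leo = min_one_way_delay_ms(550)     # an assumed LEO shell altitude
print(f"GEO: {geo:.0f} ms one way, LEO: {leo:.1f} ms one way")
```

At geostationary altitude the one-way trip alone is about 119 milliseconds, so a round trip through the satellite approaches half a second once both legs and processing are counted; the LEO figure is two orders of magnitude smaller, which is why latency-sensitive traffic favors constellations.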

Astranis and GapSat will test that proposition soon after they launch in late 2020 and early 2021, respectively. They’ll be joined by Ovzon and Saturn Satellite Networks, which are also building GEO smallsats for 2021 launches.

There’s one final area in which the Astranis and GapSat satellites will differ from the larger communications satellites: their life span. GapSat-2 will have to be raised into a graveyard orbit after roughly 6 years, compared with 20 to 25 years for today’s huge GEO satellites. Astranis is also intentionally shooting for shorter life spans than those common for GEO satellites.

“And that is a good thing!” Gedmark says. “You just get that much faster of an injection of new technology up there, rather than having these incredibly long, 25-year technology cycles. That’s just not the world we live in anymore.”

This article appears in the January 2020 print issue as “Geostationary Satellites for Small Jobs.”

The article has been updated from its print version to reflect the most recent information on GapSat’s plans.

Kumu Networks Launches an Analog Radio Module That Cancels Its Own Interference


It’s a problem as old as radio: Radios cannot send and receive signals at the same time on the same frequency. Or to be more accurate, whenever they do, any signals they receive are drowned out by the strength of their transmissions.

Being able to send and receive signals simultaneously—a technique called full duplex—would make for far more efficient use of our wireless spectrum, and make radio interference less of a headache. As it stands, wireless communications generally rely on frequency- and time-division duplexing techniques, which separate the send and receive signals based on either the frequency used or when they occur, respectively, to avoid interference.

Kumu Networks, based in Sunnyvale, Calif., is now selling an analog self-interference canceller that the company says can be easily installed in almost any wireless system. The device is a plug-and-play component that cancels out the noise of a transmitter so that a radio can hear much quieter incoming signals. It’s not true full duplex, but it tackles one of radio’s biggest problems: Transmitted signals are much more powerful than received signals.

“A transmitter signal is almost a trillion times more powerful than a receiver signal,” says Harish Krishnaswamy, an associate professor of electrical engineering at Columbia University, in New York City. That makes it extra hard to filter out the noise, he adds.

Krishnaswamy says that in order to cancel signals with precision, you have to do it in stages. One stage might perform some cancellation within the antenna itself; further stages can be implemented in analog chips and, finally, in the digital domain.

While such multistage cancellation may be theoretically possible, Krishnaswamy notes that reliably reaching the necessary level of cancellation has proven difficult, even for engineers in the lab. Out in the world, a radio’s environment is constantly changing. How a radio hears reflections and echoes of its own transmissions changes as well, and so cancellers must continuously adapt to precisely filter out extraneous signals.

Kumu’s K6 Canceller Co-Site Interference Mitigation Module (the new canceller’s official name) is strictly an analog approach. Joel Brand, the vice president of product management at Kumu, says the module can achieve 50 decibels of cancellation. Put in terms of a power ratio, that means it cancels the transmitted signal by a factor of 100,000. That’s still a far cry from what would be required to fully cancel the transmitted signal, but it’s enough to help a radio hear signals more easily while transmitting.
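Decibels map to power ratios as 10^(dB/10), which is where the factor of 100,000 comes from, and which also shows how far 50 dB falls short of the roughly 120 dB ("almost a trillion times") transmit-to-receive gap Krishnaswamy describes. A quick check:

```python
def db_to_power_ratio(db: float) -> float:
    """Convert a decibel value to a linear power ratio: 10^(dB/10)."""
    return 10 ** (db / 10)

print(db_to_power_ratio(50))   # 100000.0 -> the factor of 100,000 above
print(db_to_power_ratio(120))  # 1e12 -> the ~trillion-to-one tx/rx power gap
```

In other words, even after 50 dB of analog cancellation, roughly 70 dB of self-interference can remain for later digital stages to chip away at.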

Kumu’s module cancels a radio’s own transmissions by using analog components tuned to emit signals that are the inverse of the transmitted signals. The signal inversion allows the module to cancel out the transmitted signal and ensure that other signals the radio is listening for make it through.
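The principle can be shown with a toy numeric sketch. This is an idealized illustration, not Kumu's actual design: if the canceller's estimate of the leakage path is accurate, adding the gain-matched, inverted copy of the transmission leaves only the faint remote signal.

```python
import math

# Received samples = a weak remote signal + a strong leaked copy of our
# own transmission. Subtracting (i.e., adding the inverse of) a tuned
# estimate of that leakage removes the self-interference.
n = 64
tx = [math.sin(2 * math.pi * 5 * i / n) for i in range(n)]               # our transmission
remote = [0.001 * math.sin(2 * math.pi * 9 * i / n) for i in range(n)]   # faint incoming signal
rx = [r + 0.8 * t for r, t in zip(remote, tx)]                           # 0.8 = leakage gain

est_gain = 0.8  # the canceller's tuned estimate of the leakage path
cleaned = [r - est_gain * t for r, t in zip(rx, tx)]                     # add the inverse

residual = max(abs(c - m) for c, m in zip(cleaned, remote))
print(f"max residual self-interference: {residual:.2e}")
```

In practice the hard part is the tuning: the leakage path drifts with the environment, and any mismatch between the estimate and reality leaks back in as residual interference.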

Despite its limitations, Brand says there’s plenty of interest in a canceller of this caliber. One example he gives is an application in defense. Radio jammers are common tools, but with self-interference cancellation, they can ignore their own jamming signal to continue listening for other radios that are trying to broadcast. Another area where Brand says Kumu’s technology has received some interest is in aerospace, with one customer launching modules into space on satellites.

Kumu also develops digital cancellation techniques that can work in tandem with analog gear like its K6 canceller. However, according to Brand, digital cancellations tend to be highly bespoke. Performing them often means “cleaning” the signal after the radio has received it, which requires a deep knowledge of the radio systems involved.

Analog cancellation simply requires tuning the components to filter out the transmitted signal. And because of that simplicity, Kumu’s module may well find its place in noisy wireless environments such as the home—where digital assistants like Alexa and smart home devices have already moved in.

This article appears in the January 2020 print issue as “Helping Radios Hear Themselves.”

The Ultimate Generation


One of 5G’s biggest selling points is faster data rates than any previous generation of wireless—by a lot. Millimeter waves are part of what’s making that possible. 5G’s use of millimeter waves, a higher-frequency band than 2G, 3G, or 4G ever used, has forced service providers like AT&T and T-Mobile to rethink how 5G networks should be deployed, with higher frequencies requiring small cell sites located more closely together.

6G, while still just a hazy idea in the minds of wireless researchers, could very well follow in 5G’s footsteps by utilizing higher frequencies and pushing for faster data rates. So let’s have some fun with that—assuming such qualities remain important for future generations of wireless, where will that take us? What will 8G look like? 10G? And at what point does this extrapolation to future generations of wireless no longer make physical sense?

Expert Discusses Key Challenges for the Next Generation of Wearables


For decades, pedometers have counted our steps and offered insights into mobility. More recently, smartwatches have begun to track a suite of vital signs in real time. There is, however, a third category of information—in addition to mobility and vitals—that could be monitored by wearable devices: biochemistry.

To be clear, there are real-time biochemistry monitoring devices available, like the FreeStyle Libre, which can monitor blood glucose in people with diabetes for 14 days at a time, relatively unobtrusively. But Joseph Wang, a professor of nanoengineering at the University of California, San Diego, thinks there’s still room for improvement. During a talk about biochemistry wearables at ApplySci’s 12th Wearable Tech + Digital Health + Neurotech event at Harvard on 14 November, Wang outlined some of the key challenges to making such wearables as ubiquitous and unobtrusive as the Apple Watch.

Wang identified three engineering problems that must be tackled: flexibility, power, and treatment delivery. He also discussed potential solutions that his research team has identified for each of these problems.

What’s Next for Spectrum Sharing?


“You’ve graduated from the school of spectral hard knocks,” Paul Tilghman, a U.S. Defense Advanced Research Projects Agency (DARPA) program manager, told the teams competing in the agency’s Spectrum Collaboration Challenge (SC2) finale on 23 October. The three-year competition had just concluded, and the top three teams were being called on stage as a song that sounded vaguely like “Pomp and Circumstance” played overhead.

“Hard knocks” wasn’t an exaggeration. The 10 teams that made it to the finale, as well as others that were eliminated in earlier rounds, had been tasked with proving something that hadn’t been demonstrated before: that AI-managed radio systems could work together to share wireless spectrum more effectively than static, pre-allocated bands. They had spent years battling it out in match-ups inside Colosseum, a radio-frequency emulator DARPA built specially for the competition.

By the end of the finale, the top teams had demonstrated that their systems could transmit more data over less spectrum than existing standards like LTE, and shown an impressive ability to reuse spectrum across multiple radios. In some finale match-ups, the radio systems of five teams together transmitted 200 to 300 percent more data than is possible with today’s rigid spectrum band allocations. That matters, given that we’re facing a looming wireless spectrum crunch.

DARPA Grand Challenge Finale Reveals Which AI-Managed Radio System Shares Spectrum Best


It all started when Grant Imahara, electrical engineer and former host of MythBusters, walked onto the stage. A celebrity guest appearance is quite a way to kick off a U.S. Defense Advanced Research Projects Agency (DARPA) event, right?

The event in question was the finale of DARPA’s Spectrum Collaboration Challenge, or SC2. SC2 is one in a long list of DARPA grand challenges, starting with 2004’s eponymous Grand Challenge for self-driving cars. Over the three-year SC2 competition, teams from around the world worked to prove that it is possible to create AI-managed radio systems that can manage wireless spectrum better than traditional allocation strategies.

DARPA’s motivation in running the challenge—and the hope of Paul Tilghman, the program manager who’s overseen it—is to develop AI-managed radio systems to help avoid a looming spectrum crunch. Wireless spectrum is a finite and exceedingly valuable resource. And an ever-growing number of applications and industries that want a slice of spectrum (with 5G being just the latest newcomer in a long line of technologies) means there soon won’t be any available spectrum left. That is, if the wireless ecosystem sticks to its century-old method of pre-allocating spectrum for different uses in rigid bands.

At stake in the finale (beyond bragging rights, of course): nearly US $4 million in prize money. The winner: the team whose AI-managed radio system collaborated best when matched up with the diverse lineup of systems other teams had built and brought to the finale. The winning team walked away with a $2 million first place prize, while second and third left with $1 million and $750,000, respectively.

The finale was presented live by Imahara, Tilghman, and Ben Hilburn, president of the GNU Radio Foundation. There to witness it were attendees of MWC Los Angeles, a mobile industry convention. Though it was the culmination of three years of competition, the finale allowed the teams’ radio systems to duke it out in real time for the first time. The head-to-head matchup was made possible by DARPA’s efforts in relocating Colosseum, the massive radio frequency emulator at the heart of the competition, to the Los Angeles Convention Center floor just before the event.

In the end, GatorWings, the team representing the University of Florida’s Electrical and Computer Engineering Department, walked away with the first-place prize. MarmotE came in second, and rounding out the podium was Zylinium.

The format of the finale hewed closely to that of last December’s Preliminary Event 2, but with a few twists. All 10 finalists competed in the first set of round-robin match-ups in a scenario called “Alleys of Austin,” in which each radio system played the role of a squad of soldiers in a concentrated effort to sweep an urban neighborhood. By the end of the round, the lowest-performing team would be eliminated.

The first five rounds of match-ups, run and analyzed while Colosseum was still in its home at the Johns Hopkins University Applied Physics Laboratory, weeded out teams as DARPA tested them on the different qualities AI-managed radio systems will need to succeed in the real world. In each match-up, teams were awarded points based on how many radio connections their systems could make during the scenario. The catch was that if a competitor stifled one of the other teams and hogged too much of the spectrum, everyone would be penalized and no one would earn points. Awarding points based on collaborative performance avoided what Tilghman calls “spectrum anarchy,” in which teams would maximize their own spectrum usage at the expense of everyone else.

The second-round event, called “Off the Cuff,” saw the systems negotiating ad hoc communications as though the teams were disaster relief agencies setting up operations just after a natural disaster.

“Wildfire,” the third-round challenge, measured the teams’ ability to prioritize traffic by placing them in a wildfire response scenario where radio communications with air tankers had to take precedence over everything else.

“Slice of Life,” the fourth-round scenario, imagined the remaining teams in a shopping mall setting, where each team’s radio acted as a Wi-Fi hotspot for a different shop or restaurant—the idea being that they’d have to collaborate as different stores experienced a surge in traffic at different points during the day. (A coffee shop, for example, would require more spectrum in the morning, and a bar would need more at night).

The last round for the live finals was called “Trash Compactor.” That setup forced the radio systems to share an increasingly narrow slice of spectrum. Over the course of the match, the band narrowed from 20 megahertz to 10 MHz and again to only 5 MHz by the end.

Then, with Colosseum in place outside the partitioned room in which MWC Los Angeles attendees had gathered to see the spectacle, the competitors did it again, with results churned out in real time. The five remaining teams—MarmotE, Erebus, Zylinium, Andersons, and GatorWings—found themselves duking it out again in versions of the previous scenarios that came with added twists Tilghman described as “fun surprises.”

The amped-up “Slice of Life” scenario, for example, dropped the teams back into the shopping mall. But this time, there was a nearby legacy incumbent system, like a satellite base station. The incumbent had priority over a portion of the spectrum band, and would trip an alert if the collaborating teams drowned out its own communications. Suddenly, the teams were having to manage random surges in demand for spectrum via their hotspots while also not taking up too much and disrupting the legacy operator.

With a few exceptions, the radio systems showed they could roll with the punches SC2 threw at them. The fifth round of match-ups, “Trash Compactor,” proved perhaps the most challenging, with teams struggling to share the spectrum effectively as the available band got squeezed. And yet, situations like working alongside an incumbent operator—something that stymied many of the teams last December—seemed much easier this time around.

The teams’ systems also established radio connections more quickly, and made more effective use of the available spectrum to complete more connections than they had less than a year ago. In one of the “Slice of Life” match-ups, teams collaborated to achieve almost 340 percent spectrum occupancy. In essence, their radio systems crammed almost 70 MHz of traditional spectrum applications into 20 MHz of bandwidth.
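The occupancy figure is simply the aggregate spectrum delivered across all radios divided by the shared band, so values above 100 percent mean the same hertz were being productively reused by multiple radios at once:

```python
def occupancy_percent(total_spectrum_used_mhz: float, shared_band_mhz: float) -> float:
    """Aggregate spectrum delivered across all radios, relative to the shared band."""
    return 100 * total_spectrum_used_mhz / shared_band_mhz

# Roughly 68 MHz worth of traditional, exclusively allocated traffic
# squeezed into a 20 MHz band, as in the match-up described above:
print(occupancy_percent(68, 20))  # 340.0
```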

It’s hard not to be impressed by that level of spectral efficiency. But after the awards were handed out and the photographs taken, Tilghman stressed that the technology still has a long way to go before it is fully developed. Still, he thinks it will get there. “I think the paradigm of spectrum collaboration is here to stay,” Tilghman said to conclude the event, “and it will propel us from spectrum scarcity to spectrum abundance.”