All posts by Stacey Higginbotham

We Don’t Need a Jetsons Future, Just a Sustainable One

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/energy/environment/we-dont-need-a-jetsons-future-just-a-sustainable-one

For decades, our vision of the future has been stuck in a 1960s-era dream of science fiction embodied by The Jetsons and space travel. But that isn’t what we need right now. In fact, what if our vision of that particular technologically advanced future is all wrong?

What if, instead of self-driving cars, digital assistants whispering in our ears, and virtual-reality glasses, we viewed a technologically advanced society as one where everyone had sustainable housing? Where we could manage and then reduce the amount of carbon in our atmosphere? Where everyone had access to preventative health care that was both personalized and less invasive?

What we need is something called cozy futurism, a concept I first encountered while reading a blog post by software engineer Jose Luis Ricón Fernández de la Puente. In the post, he calls for a vision of technology that looks at human needs and attempts to meet them, not only through new technologies but also through cultural shifts and policy changes.

Take space travel as an example. Much of the motivation behind building new rockets or developing colonies on Mars is wrapped up in the rhetoric of our warming planet being something to escape from. By framing Earth as a place to flee, we miss opportunities to fix our home instead.

But we can change our attitudes. What’s more, we are changing. Climate change is a great example. Entrepreneurs who helped build the products and services of the past 20 years’ tech boom are now, albeit slowly, searching for technologies to address the crisis.

Jason Jacobs, the founder of the fitness app Runkeeper, has created an entire media business called My Climate Journey to find and help recruit tech folks to address climate change. Last year, Jeff Bezos created a US $10 billion fund to make investments in organizations fighting climate change. Bill Gates wrote an entire book, How to Avoid a Climate Disaster: The Solutions We Have and the Breakthroughs We Need.

Mitigating climate change is an easy way to understand the goals of cozy futurism, but I’m eager to see us all go further. What about reducing pollution in urban and poor communities? Nonprofits are already using cheap sensors to pinpoint heat islands in cities, or neighborhoods where air pollution disproportionately affects communities of color. With that information in hand, policy changes can redress the unfair distribution of harm.

And perhaps if we see the evidence of harm in data, more people will vote to attack pollution, climate change, and other problems at their sources, rather than looking to tech to put a Band-Aid on them or mitigate the effects—or worse, adding to the problem by producing a never-ending stream of throwaway gadgets. We should instead embrace tech as a tool to help governments hold companies accountable for meeting policy goals.

Cozy futurism is an opportunity to reframe the best use of technology as something actively working to help humanity—not individually, like a smartwatch monitoring your health or self-driving cars easing your commute, but in aggregate. That’s not to say we should do away with VR goggles or smart gadgets, but we should think a bit more about how and why we’re using them, and whether we’re overprioritizing them. What’s better than demonstrating that the existential challenges facing us all are problems we can solve, not just for those who can hitch a ride off-world but for everyone?

After all, I’d rather be cozy on Earth than stuck in a bubble on Mars.

This article appears in the August 2021 print issue as “Cozy Futurism.”

Reckoning With Tech Before It Becomes Invisible

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/artificial-intelligence/machine-learning/reckoning-with-tech-before-it-becomes-invisible

Ten years ago, venture capitalist Marc Andreessen proclaimed that software was eating the world. Today, the hottest features in the latest phones are software updates or AI improvements, not faster chips or new form factors. Technology is becoming more mundane, and ultimately, invisible. 

This probably doesn’t bother you. But even as technologies fade into the background of our lives, they still play a pervasive role. We still need to examine how technologies might be affecting us, even if—especially if—they’re commonplace. 

For example, Waze’s navigation software has been influencing drivers’ behavior in the real world for years, algorithmically routing too many cars to residential streets and clogging them. The devices and apps from home-security company Ring have turned neighborhoods into panopticons in which your next-door neighbor can become the subject of a notification. Connected medical devices can let an insurance company know if the patient isn’t using the device appropriately, allowing the insurer to stop covering the gadget.

Using technology to create or reinforce social norms might seem benign or even beneficial, but it doesn’t hurt to ask which norms the technology is enforcing. Likewise, technologies that promise to save time might be saving time for some at the expense of others. Most important, how do we know if a new technology is serving a greater good or policy goal, or merely boosting a company’s profit margins? Underneath concerns about Amazon and Facebook and Google is an understanding that big tech is everywhere, and we have no idea how to make it work for society’s goals, rather than a company’s, or an individual’s. 

A big part of the problem is that we haven’t even established what those benefits should be. Let’s take the idea of legislating AI, or even computer-mediated decisions in general. Should we declare such technology illegal on its face? Many municipalities in the United States are trying to ban law enforcement from using facial-recognition software in order to identify individuals. Then again, the FBI has used it to find the people who participated in the 6 January insurrection at the U.S. Capitol. 

To complicate the issue further, it’s well established that facial-recognition systems (and algorithms in general) are biased against Black faces and women’s faces. Personally, I don’t think the solution is to ban facial recognition outright. The European Union, for example, has proposed legislation to audit the outcomes of facial-recognition algorithms regularly to ensure policy goals are met. There’s no reason the United States and the rest of the world can’t do the same.

And while some in the technology industry have called for the United States to create a separate regulatory body to govern AI, I think the country and policymakers are best served by the addition of offices and experts within existing agencies who can audit the various algorithms and determine if they help meet the agency’s goals. For example, the U.S. Justice Department could monitor, or even be in charge of approving, programs used to release people on bail to keep an eye out for potential bias. 

The United States already has a model of how this might work. The Federal Communications Commission relies on its Office of Engineering and Technology to help regulate the airwaves. Crucially, the office hires experts in the field rather than political appointees. The government can build the same infrastructure into other agencies that can handle scientific and technological inquiry on demand. Doing so would make the invisible visible again—and then we could all see and control the results of our technology.

This article appears in the July 2021 print issue as “Reckoning With Tech.”

Treat Smart City Tech like Sewers, or Better

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/internet/treat-smart-city-tech-like-sewers-or-better

Smart cities, like many things, took a beating in 2020. Alphabet, Google’s parent company, pulled its Sidewalk Labs subsidiary out of a smart-city project in Toronto. Cisco killed its plans to sell smart-city technology. And in many places, city budgets will be affected for years to come by the pandemic’s economic shock, making it more difficult to justify smart-city expenses.

That said, the pandemic also provided municipalities around the world with reason to invest in new technologies for public transportation, contact tracing, and enforcing social distancing. In a way, the present moment is an ideal time for a new understanding of smart-city infrastructure and a new way of paying for it.

Cities need to treat their smart-city efforts as infrastructure, like roads and sewers. That means investing in it, owning it, maintaining it, and controlling how it’s used, just as they do with any other public asset. Smart-city deployments affect the citizenry, and citizens will have a lot to say about any implementation. The process of including that feedback and respecting citizens’ rights means that cities should own the procurement process and control the implementation.

In some cases, citizen backlash can kill a project outright, as it did with Sidewalk’s Toronto effort, where the dispute centered on who exactly had access to the data collected by the smart-city infrastructure. Even when cities do get permission from citizens for deployments, the end results are often neighborhood-size “innovation zones” that are little more than glorified test beds. A truly smart city needs a master plan, citizen accountability, and a means of funding that grants the city ownership.

One way to do this would be for cities to create public authorities, just as they do when investing in public transportation or even health care. These authorities should have publicly appointed commissioners who manage and operate the sensors and services included in smart-city projects. They would also have the ability to raise funds through bond issues backed by the revenue the smart-city projects generate.

For example, consider a public safety project that requires sensors at intersections to reduce collisions. The project might use the gathered data to meet its own safety goals, but the insights derived from analyzing traffic patterns could also be sold to taxi companies or logistics providers.

These sales would underpin the repayment of bonds issued to pay for the technology’s deployment and management. While some municipal bonds mature in 10- to 30-year time frames, there are also bonds with 3- to 5-year terms that would be better suited to the shorter life spans of technologies like traffic-light sensors.
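
To make the financing arithmetic concrete, here is a rough sketch of how a city might size such a bond. Every figure below is hypothetical; real rates and terms depend on the issuer and the market.

```python
# Rough, hypothetical illustration of sizing a revenue bond for a
# smart-city sensor deployment. All figures are invented for the example.

def annual_debt_service(principal: float, rate: float, years: int) -> float:
    """Level annual payment on a fully amortizing bond."""
    return principal * rate / (1 - (1 + rate) ** -years)

principal = 10_000_000  # hypothetical cost of sensors and installation
rate = 0.03             # hypothetical interest rate
years = 5               # short term, matched to the technology's useful life

payment = annual_debt_service(principal, rate, years)
print(f"Annual debt service: ${payment:,.0f}")
# Roughly $2.18 million per year that data sales and other smart-city
# revenue would need to cover.
```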

Even if bonds and public authorities aren’t the right way to proceed, owning and controlling the infrastructure has other advantages. Smart-city contracts could employ local contractors and act not just as a source of revenue for cities but also as an economic development tool that can create jobs and a halo effect to draw in new companies and residents.

For decades, cities have invested in their infrastructure using public debt. If cities invest in smart-city technologies the same way, they could give their citizens a bigger stake in the process, create new streams of revenue for the city, and improve quality of life. After all, people deserve to live in smarter cities, rather than innovation zones.

This article appears in the March 2021 print issue as “Smarter Smart Cities.”

To Close the Digital Divide, the FCC Must Redefine Broadband Speeds

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/internet/to-close-the-digital-divide-the-fcc-must-redefine-broadband-speeds

The coronavirus pandemic has brought the broadband gap in the United States into stark relief—5.6 percent of the population has no access to broadband infrastructure. But for an even larger percentage of the population, the issue is that they can’t afford access, or they get by on mobile phone plans. Recent estimates, for example, suggest that 15 million to 16 million students—roughly 30 percent of the grade-school population in the United States—lack broadband access for one of these reasons.

The Federal Communications Commission (FCC) has punted on broadband access for at least a decade. With the recent change in the regulatory regime, it’s time for the country that created the ARPANET to fix its broadband access problem. The lack of access is driven largely by broadband’s high cost, and the reason cost remains such a barrier is that the FCC’s current definition of broadband is stuck in the early 2000s.

The FCC defines broadband as a download speed of 25 megabits per second and an upload speed of 3 Mb/s. The agency set this definition in 2015, and it was outdated even then. At the time, I was already stressing a 50 Mb/s connection just from a couple of Netflix streams and working from home. Before 2015, the defined broadband speeds in the United States were an anemic 4 Mb/s down and 1 Mb/s up, set in 2010.

If the FCC wants to address the broadband gap rather than placate the telephone companies it’s supposed to regulate, it should again redefine broadband. The FCC could easily establish broadband as 100 Mb/s down and at least 10 Mb/s up. This isn’t a radical proposal: As of 2018, 90.5 percent of the U.S. population already had access to 100 Mb/s speeds, but only 45.7 percent were tapping into it, according to the FCC’s 2020 Broadband Deployment Report.

Redefining broadband will force upgrades where necessary and also reveal locations where competition is low and prices are high. As things stand, most people in need of speeds above 100 Mb/s have only one option: cable providers. Fiber is an alternative, but most U.S. fiber deployments are in wealthy suburban and dense urban areas, leaving rural students and those living on reservations behind. A lack of competition leaves cable providers able to impose data caps and raise fees.

What seems like a lack of demand is more likely a rejection of a high-cost service, even as more people require 100 Mb/s for their broadband needs. In the United States, 100 Mb/s plans cost $81.19 per month on average, according to data from consumer interest group New America. The group gathered broadband prices across 760 plans in 28 cities around the world, including 14 cities in the United States. When compared with other countries, prices in the United States are much higher. In Europe, the average cost of a 100/10 Mb/s plan is $48.48, and in Asia, a similar plan would cost $69.76.

Closing the broadband gap will still require more infrastructure and fewer monopolies, but redefining broadband is a start. With a new understanding of what constitutes reasonable broadband, the United States can proactively create new policies that promote the rollout of plans that will meet the needs of today and the future.

This article appears in the February 2021 print issue as “Redefining Broadband.”

The IoT’s E-Waste Problem Isn’t Inevitable

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/consumer-electronics/gadgets/the-iots-ewaste-problem-isnt-inevitable

In my office closet, I have a box full of perfectly good smart-home gadgets that are broken only because the companies that built them stopped updating their software. I can’t bear to toss them in a landfill, but I don’t really know how to recycle them. I’m not alone: Electronic waste, or e-waste, has become much more common.

The adoption of Project Connected Home Over IP (CHIP) standards by Amazon, Apple, Google, and the Zigbee Alliance will make smart homes more accessible to more people. But the smart devices these people bring into their homes will also eventually end up on the junk heap.

Perhaps surprisingly, we still don’t have a clear answer as to what we should do when a product’s software doesn’t outlive its hardware, or when its electronics don’t outlast the housing. Companies are building devices that used to last decades—such as thermostats, fridges, or even lights—with five- to seven-year life-spans.

When e-waste became a hot topic in the computing world, computer makers such as Dell and HP worked with recycling centers to better recycle their electronics. You might argue that those programs didn’t do enough, because e-waste is still a growing problem. In 2019 alone, the world generated 53.6 million metric tons of e-waste, according to a report from the Global E-waste Monitor. And the amount is rising: According to the same report, each year we produce 2.5 million metric tons more e-waste than the year before.

This is an obviously unsustainable amount of waste. While recycling programs might not be enough to solve the problem, I’d still like to see the makers of connected devices partner up with recycling centers to take back devices when they are at the end of their lives. The solution could be as simple as, say, Amazon adding a screen to the app for a smart device that offers the address of a local recycling partner whenever someone chooses to decommission that device.

The idea is not unprecedented for smart devices. The manufacturer of the Tile tracking device has an agreement with a startup called Emplacement that offers recycling information when the battery on one of Tile’s trackers dies and the device is useless. Another example is GE Appliances, which hauls away old appliances when people buy new ones, even as added software potentially shortens their years of usefulness.

Companies can also make the recycling process easier by designing products differently. For example, they should rely less on glues that make it hard to salvage recyclable metals from within electronic components and use smaller circuit boards with minimal components. Companies should also design their connected products so that they physically work in some fashion even if the software and app are defunct. In other words, no one should design a connected product that works only with an app, because doing so is all but forcing its obsolescence in just a few years. If the device still works, however, people might be able to pass it along for reuse even if some of the fancier features aren’t operational.

Connected devices won’t be in every home in the future, but they will become more common, and more people will come to rely on the features they offer. Which means we’re set for an explosion of new electronic waste in the next five to ten years as these devices reach the end of their life-spans. How we handle that waste—and how much of it we have to deal with—depends on the decisions companies make now.

This article appears in the January 2021 print issue as “E-Waste Isn’t Inevitable.”

Why We Need a Robot Registry


Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/robotics/humanoids/why-we-need-a-robot-registry

I have a confession to make: A robot haunts my nightmares. For me, Boston Dynamics’ Spot robot is 32.5 kilograms (71.7 pounds) of pure terror. It can climb stairs. It can open doors. Seeing it in a video cannot prepare you for the moment you cross paths on a trade-show floor. Now that companies can buy a Spot robot for US $74,500, you might encounter Spot anywhere.

Spot robots now patrol public parks in Singapore to enforce social distancing during the pandemic. They meet with COVID-19 patients at Boston’s Brigham and Women’s Hospital so that doctors can conduct remote consultations. Imagine coming across Spot while walking in the park or returning to your car in a parking garage. Wouldn’t you want to know why this hunk of metal is there and who’s operating it? Or at least whom to call to report a malfunction?

Robots are becoming more prominent in daily life, which is why I think governments need to create national registries of robots. Such a registry would let citizens and law enforcement look up the owner of any roaming robot, as well as learn that robot’s purpose. It’s not a far-fetched idea: The U.S. Federal Aviation Administration already has a registry for drones.

Governments could create national databases that require any companies operating robots in public spaces to report the robot make and model, its purpose, and whom to contact if the robot breaks down or causes problems. To allow anyone to use the database, all public robots would have an easily identifiable marker or model number on their bodies. Think of it as a license plate or pet microchip, but for bots.
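
As a sketch of what one entry in such a registry might hold, consider something like the following. The field names are my own invention, not drawn from the FAA’s drone registry or any existing database.

```python
# Hypothetical sketch of a single robot-registry record; the fields are
# illustrative, not taken from any actual government database.
from dataclasses import dataclass

@dataclass
class RobotRegistration:
    registration_id: str   # printed on the robot, like a license plate
    make: str              # e.g., "Boston Dynamics"
    model: str             # e.g., "Spot"
    operator: str          # company or agency running the robot
    purpose: str           # "security patrol", "food delivery", ...
    contact_phone: str     # whom to call if it breaks or causes problems
    collects_video: bool   # flags sensors that gather data on the public

spot = RobotRegistration(
    registration_id="WA-0042",
    make="Boston Dynamics",
    model="Spot",
    operator="Example Parks Department",
    purpose="Social-distancing announcements",
    contact_phone="+1-555-0100",
    collects_video=True,
)
```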

There are some smaller-scale registries today. San Jose’s Department of Transportation (SJDOT), for example, is working with Kiwibot, a delivery robot manufacturer, to get real-time data from the robots as they roam the city’s streets. The Kiwibots report their location to SJDOT using the open-source Mobility Data Specification, which was originally developed by Los Angeles to track Bird scooters.

Real-time location reporting makes sense for Kiwibots and Spots wandering the streets, but it’s probably overkill for bots confined to cleaning floors or patrolling parking lots. That said, any robots that come in contact with the general public should clearly provide basic credentials and a way to hold their operators accountable. Given that many robots use cameras, people may also be interested in looking up who’s collecting and using that data.

I started thinking about robot registries after Spot became available in June for anyone to purchase. The idea gained specificity after I listened to Andra Keay, founder and managing director at Silicon Valley Robotics, discuss her five rules of ethical robotics at an Arm event in October. I had already been thinking that we needed some way to track robots, but her suggestion to tie robot license plates to a formal registry made me realize that people also need a way to clearly identify individual robots.

Keay pointed out that in addition to sating public curiosity and keeping an eye on robots that could cause harm, a registry could also track robots that have been hacked. For example, robots at risk of being hacked and running amok could be required to report their movements to a database, even if they’re typically restricted to a grocery store or warehouse. While we’re at it, Spot robots should be required to have sirens, because there’s no way I want one of those sneaking up on me.

This article appears in the December 2020 print issue as “Who’s Behind That Robot?”

IoT Network Companies Have Cracked Their Chicken-and-Egg Problem

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/wireless/iot-network-companies-have-cracked-their-chicken-and-egg-problem

Along with everything else going on, we may look back at 2020 as the year that companies finally hit upon a better business model for Internet of Things (IoT) networks. Established network companies such as the Things Network and Helium, and new players such as Amazon, have seemingly given up on the idea of making money from selling network connectivity. Instead, they’re focused on getting the network out there for developers to use, assuming in the process that they’ll benefit from the effort in the long run.

IoT networks have a chicken-and-egg problem. Until device makers see widely available networks, they don’t want to build products that run on the network. And customers don’t want to pay for a network, and thus fund its development, if there aren’t devices available to use. So it’s hard to raise capital to build a new wide-area network (WAN) that provides significant coverage and supports a plethora of devices that are enticing to use.

It certainly didn’t help network companies such as Ingenu, MachineQ, Senet, and SigFox that they were collectively marketing a half dozen similar proprietary networks. Even the cellular carriers, which are promoting both LTE-M for machine-to-machine networks and NB-IoT for low-data-rate networks, have historically struggled to justify their investments in IoT network infrastructure. After COVID-19 started spreading in Japan, NTT DoCoMo called it quits on its NB-IoT network, citing a lack of demand.

“Personally, I don’t believe in business models for [low-power WANs],” says Wienke Giezeman, the CEO and cofounder of the Things Network. His company does deploy long-range low-power WAN gateways for customers that use the LoRa Alliance’s LoRaWAN specifications. (“LoRa” is short for “long-range.”) But Giezeman sees that as the necessary element for later providing the sensors and software that deliver the real IoT applications customers want to buy. Trying to sell both the network and the devices is like running a restaurant that makes diners buy and set up the stove before it cooks their meals.

The Things Network sets up the “stove” and includes the cost of operating it in the “meal.” But, because Giezeman is a big believer in the value of open source software and creating a sense of abundance around last-mile connectivity, he’s also asking customers to opt in to turning their networks into public networks.

Senet does something similar, letting customers share their networks. Helium is using cryptocurrencies to entice people to set up LoRaWAN hotspots on their networks and rewarding them with tokens for keeping the networks operational. When someone uses data from an individual’s Helium node, that individual also gets tokens that might be worth something one day. I actually run a Helium hotspot in my home, although I’m more interested in the LoRa coverage than the potential for wealth.

And there’s Amazon, which plans to embed its own version of a low-power WAN into its Echo and its Ring security devices. Whenever someone buys one of these devices, they’ll have the option of adding it as a node on the Amazon Sidewalk network. Amazon’s plan is to build out a decentralized network for IoT devices, starting with a deal to let manufacturer Tile use the network for its Bluetooth tracking devices.

After almost a decade of following various low-power IoT networks, I’m excited to see them abandon the idea that the network is the big value, and instead recognize that it’s the things on the network that entice people. Let’s hope this year marks a turning point for low-power WANs.

This article appears in the November 2020 print issue as “Network Included.”

Open-Source Vote-Auditing Software Can Boost Voter Confidence

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/computing/software/opensource-voteauditing-software-can-boost-voter-confidence

Election experts were already concerned about the security and accuracy of the 2020 U.S. presidential election. Now, with the ongoing COVID-19 pandemic and the new risk it creates for in-person voting—not to mention the debate about whether mail-in ballots lead to voter fraud—the amount of anxiety around the 2020 election is unprecedented.

“Elections are massively complicated, and they are run by the most OCD individuals, who are process oriented and love color coding,” says Monica Childers, a product manager with the nonprofit organization VotingWorks. “And in a massively complex system, the more you change things, especially at the last minute, the more you introduce the potential for chaos.” But that’s just what election officials are being forced to do.

Most of the conversation around election security focuses on the security of voting machines and preventing interference. But it’s equally important to prove that ballots were correctly counted. If a party or candidate cries foul, states will have to audit their votes to prove there were no miscounts.

VotingWorks has built an open-source vote-auditing software tool called Arlo, and the organization has teamed up with the U.S. Cybersecurity and Infrastructure Security Agency to help states adopt the tool. Arlo helps election officials conduct a risk-limiting audit [PDF], which ensures that the reported results match the actual results. And because it’s open source, all aspects of the software are available for inspection.

There are actually several ways to audit votes. You’re probably most familiar with recounts, a process dictated by law that orders a complete recounting of ballots if an election is very close. But full recounts are rare. More often, election officials will audit the ballots tabulated by a single machine, or verify the ballots cast in a few precincts. However, those techniques don’t give a representative sample of how an entire state may have voted.

This is where a risk-limiting audit excels. The audit takes a random sample of the ballots from across the area undergoing the audit and outlines precisely how the officials should proceed. This includes giving explicit instructions for choosing the ballots at random (pick the fourth box on shelf A and then select the 44th ballot down, for example). It also explains how to document a “chain of custody” for the selected ballots so that it’s clear which auditors handled which ballots.

The random-number generator that Arlo uses to select the ballots is published online. Anyone can use the tool to select the same ballots to audit and compare their results. The software provides the data-entry system for the teams of auditors entering the ballot results. Arlo will also indicate how likely it is that the entire election was reported correctly.
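
To see why publishing the random-number generator matters, here is a minimal sketch of a publicly seeded, reproducible ballot selection. It illustrates the concept only; it is not Arlo’s actual sampling code.

```python
# Minimal sketch of reproducible ballot selection from a public seed.
# This is an illustration of the idea, not Arlo's sampling implementation.
import hashlib

def select_ballots(seed: str, ballot_ids: list[str], sample_size: int) -> list[str]:
    """Pick `sample_size` ballots deterministically from a public seed,
    so anyone can rerun the selection and get the same ballots."""
    def ticket(ballot_id: str) -> str:
        return hashlib.sha256(f"{seed}|{ballot_id}".encode()).hexdigest()
    return sorted(ballot_ids, key=ticket)[:sample_size]

# Example: a manifest of 10,000 ballots, of which 120 will be audited.
manifest = [f"box-{b:03d}/ballot-{i:03d}" for b in range(100) for i in range(100)]
sample = select_ballots(seed="public seed chosen at audit kickoff",
                        ballot_ids=manifest, sample_size=120)
print(sample[:3])
```

Because the seed and the ballot manifest are both public, anyone can rerun the selection and confirm that officials pulled the right ballots.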

The technology may not be fancy, but the documentation and attention to a replicable process are. And that’s what matters most for validating the results of a contested election.

Arlo has been tested in elections in Michigan, Ohio, Pennsylvania, and a few other states. The software isn’t the only way a state or election official can conduct a risk-limiting audit, but it does make the process easier. Childers says Colorado took almost 10 years to set up risk-limiting audits. Using Arlo and its own staff, VotingWorks has helped several states set up these processes in less than a year.

The upcoming U.S. election is dominated by partisanship, but risk-limiting audits have been embraced by both parties. So far, it seems everyone agrees that if your vote gets counted, the government needs to count it correctly.

This article appears in the October 2020 print issue as “Making Sure Votes Count.”

For the IoT, User Anonymity Shouldn’t Be an Afterthought. It Should Be Baked In From the Start

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/security/for-the-iot-user-anonymity-shouldnt-be-an-afterthought-it-should-be-baked-in-from-the-start

The Internet of Things has the potential to usher in many possibilities—including a surveillance state. In the July issue, I wrote about how user consent is an important prerequisite for companies building connected devices. But there are other ways companies are trying to ensure that connected devices don’t invade people’s privacy.

Some IoT businesses are designing their products from the start to discard any personally identifiable information. Andrew Farah, the CEO of Density, which developed a people-counting sensor for commercial buildings, calls this “anonymity by design.” He says that rather than anonymizing a person’s data after the fact, the goal is to design products that make it impossible for the device maker to identify people in the first place.

“When you rely on anonymizing your data, then you’re only as good as your data governance,” Farah says. With anonymity by design, you can’t give up personally identifiable information, because you don’t have it. Density, located in Macon, Ga., settled on a design that uses four depth-perceiving sensors to count people by using height differentials.
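
Here is a toy illustration of how counting by height differentials can work without ever producing an identifiable image. The mounting height, threshold, and tiny “frame” are invented for the example; Density’s actual processing is certainly more sophisticated.

```python
# Toy sketch of counting people from a ceiling-mounted depth sensor using
# height differentials. The numbers are invented; a real sensor streams
# full depth maps, not an 8-by-8 array.
import numpy as np
from scipy import ndimage

SENSOR_HEIGHT_M = 2.7       # hypothetical ceiling-mount height
MIN_PERSON_HEIGHT_M = 1.2   # anything shorter is furniture, carts, or pets

def count_people(depth_frame: np.ndarray) -> int:
    """depth_frame holds the distance from sensor to surface, in meters."""
    heights = SENSOR_HEIGHT_M - depth_frame      # convert to object height
    person_mask = heights > MIN_PERSON_HEIGHT_M  # keep head-and-shoulder blobs
    _, num_blobs = ndimage.label(person_mask)    # count connected regions
    return num_blobs

frame = np.full((8, 8), SENSOR_HEIGHT_M)  # empty floor
frame[1:3, 1:3] = 1.0                     # a person about 1.7 m tall
frame[5:7, 5:7] = 1.1                     # a person about 1.6 m tall
print(count_people(frame))                # -> 2
```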

Density could have chosen to use a camera to easily track the number of people in a building, but Farah balked at the idea of creating a surveillance network. Taj Manku, the CEO of Cognitive Systems, was similarly concerned about the possibilities of his company’s technology. Cognitive, in Waterloo, Ont., Canada, developed software that interprets Wi-Fi signal disruptions in a room to understand people’s movements.

With the right algorithm, the company’s software could tell when someone is sleeping or going to the bathroom or getting a midnight snack. I think it’s natural to worry about what happens if a company could pull granular data about people’s behavior patterns.

Manku is worried about information gathered after the fact, like if police issued a subpoena for Wi-Fi disruption data that could reveal a person’s actions in their home. Cognitive does data processing on the device and then dumps that data. Nothing identifiable is sent to the cloud. Likewise, customers who buy Cognitive’s software can’t access the data on their devices, just the insight. In other words, the software would register a fall, without including a person’s earlier actions.
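
The pattern Manku describes, in which the device processes data locally, discards the raw signal, and reports only the insight, might look something like this in miniature. The event name and detection logic are placeholders, not Cognitive’s actual algorithms.

```python
# Sketch of the "process locally, send only the insight" pattern.
# The detection logic is a made-up placeholder; a real system would run a
# trained model over Wi-Fi channel-state measurements.

def detect_fall(disruption_window: list[float]) -> bool:
    """Placeholder detector: a sharp spike followed by stillness."""
    spike = max(disruption_window) > 0.9
    stillness = sum(disruption_window[-10:]) < 0.5
    return spike and stillness

def process_locally_and_report(window: list[float], send_to_cloud) -> None:
    if detect_fall(window):
        send_to_cloud({"event": "fall_detected"})  # insight only, no raw signal
    window.clear()  # raw measurements never leave the device

# A spike followed by ten quiet samples triggers a single, anonymous event.
process_locally_and_report([0.95] + [0.02] * 10, send_to_cloud=print)
```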

“You have to start thinking about it from day one when you’re architecting the product, because it’s very hard to think about it after,” Manku says. It’s difficult to shut things down retroactively to protect privacy. It’s best if sensitive information stays local and gets purged.

Companies that promote anonymity will lose helpful troves of data that could otherwise be used to train machine-learning models and optimize their devices’ performance. Cognitive gets around this limitation by having a set of employees and friends volunteer their data for training. Other companies decide they don’t want to get into the analytics market, or they take a more arduous route to acquire training data for improving their devices.

If nothing else, companies should embrace anonymity by design in light of the growing amount of comprehensive privacy legislation around the world, like the General Data Protection Regulation in Europe and the California Consumer Privacy Act. Not only will it save them from lapses in their data-governance policies, it will guarantee that when governments come knocking for surveillance data, these businesses can turn them away easily. After all, you can’t give away something you never had.

This article appears in the September 2020 print issue as “Anonymous by Design.”

Power Grids Should Be as Data Driven as the Internet

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/energy/the-smarter-grid/power-grids-should-be-as-data-driven-as-the-internet

Governments are setting ambitious renewable energy goals in response to climate change. The problem is, the availability of renewable sources doesn’t align with the times when our energy demands are the highest. We need more electricity for lights when the sun has set and solar is no longer available, for example. But if utilities could receive information about energy usage in real time, as Internet service providers already do with data usage, it would change the relationship we have with the production and consumption of our energy.

Utilities must still meet energy demands regardless of whether renewable sources are available, and they still have to mull whether to construct expensive new power plants to meet expected spikes in demand. But real-time information would make it easier to use more renewable energy sources when they’re available. Using this information, utilities could set prices in response to current availability and demand. This real-time pricing would serve as an incentive to customers to use more energy when those sources are available, and thus avoid putting more strain on power plants.

California is one example of this strategy. The California Energy Commission (CEC) hopes that establishing rules for real-time electricity pricing will demonstrate how overall demand and availability affect the cost. It’s like surge pricing for a ride share: The idea is that electricity would cost more during peak demand. But the strategy would likely generate savings for people most of the time.

Granted, most people won’t be thrilled with the idea of paying more to dry their towels in the afternoons and evenings, as the sun goes down and demand peaks. But new smart devices could make the pricing incentives both easier on the customer and less visible by handling most of the heavy lifting that a truly dynamic and responsive energy grid requires.

For example, companies such as Ecobee, Nest, Schneider Electric, and Siemens could offer small app-controlled computers that would sit on the breaker boxes outside a building. The computer would manage the flow of electricity from the breaker box to the devices in the building, while the app would help set priorities and prices. It might ask the user during setup to decide on an electricity budget, or to set devices to have priority over other devices during peak demand.
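
A minimal sketch of the kind of logic such a breaker-box computer might run appears below. The price threshold, device names, and priorities are all hypothetical; they are not drawn from any actual Ecobee, Nest, Schneider Electric, or Siemens product.

```python
# Hypothetical sketch of a breaker-box controller reacting to a real-time
# price feed. Threshold, devices, and priorities are invented for the example.

PRICE_CEILING_USD_PER_KWH = 0.30  # user-chosen budget threshold

# Lower number = keep running even when prices spike.
DEVICE_PRIORITY = {
    "refrigerator": 0,
    "ev_charger": 2,
    "clothes_dryer": 3,
}

def devices_to_defer(current_price: float) -> list[str]:
    """Defer low-priority loads whenever the real-time price exceeds the budget."""
    if current_price <= PRICE_CEILING_USD_PER_KWH:
        return []
    return [name for name, priority in DEVICE_PRIORITY.items() if priority >= 2]

print(devices_to_defer(0.12))  # -> []  (cheap power: run everything)
print(devices_to_defer(0.45))  # -> ['ev_charger', 'clothes_dryer']
```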

Back in 2009, Google created similar software called Google PowerMeter, but the tech was too early—the appliances that could respond to real-time information weren’t yet available. Google shut down the service in 2011. Karen Herter, an energy specialist for the California Energy Commission, believes that the state’s rules for real-time pricing will be the turning point that convinces energy and tech giants to build such smart devices again.

This year, the CEC is writing rules for real-time pricing. The agency is investigating rates that update every hour, every 15 minutes, and every 5 minutes. No matter what, the rates will be publicly available, so that breaker box computers at homes and businesses can make decisions about what to power and when.

We will all need to start caring about when we use electricity—whether to spend more money to run a dryer at 7 p.m., when demand is high, or run it overnight, when electricity may be cheaper. California, with the rules it’s going to have in place by January 2022, could be the first to create a market for real-time energy pricing. Then, we may see a surge of devices and services that could increase our use of renewable energy to 100 percent—and save money on our electric bills along the way.

This article appears in the August 2020 print issue as “Data-Driven Power.”

The Internet of Things Has a Consent Problem

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/security/the-internet-of-things-has-a-consent-problem

Consent has become a big topic in the wake of the Me Too movement. But consent isn’t just about sex. At its core, it’s about respect and meeting people where they are at. As we add connected devices to homes, offices, and public places, technologists need to think about consent.

Right now, we are building the tools of public, work, and home surveillance, and we’re not talking about consent before we implement those tools. Sensors used in workplaces and homes can track sound, temperature, occupancy, and motion to understand what a person is doing and what the surrounding environment is like. Plenty of devices have cameras and microphones that feed back into a cloud service.

In the cloud, images, conversations, and environmental cues could be accessed by hackers. Beyond that, simply by having a connected device, users give the manufacturer’s employees a clear window into their private lives. While I personally may not mind if Google knows my home temperature or independent contractors at Amazon can accidentally listen in on my conversations, others may.

For some, the issue with electronic surveillance is simply that they don’t want these records created. For others, getting picked up by a doorbell camera might represent a threat to their well-being, given the U.S. government’s increased use of facial recognition and attempts to gather large swaths of electronic data using broad warrants.

How should companies think about IoT consent? Transparency is important—any company selling a connected device should be up-front about its capabilities and about what happens to the device data. Informing the user is the first step.

But the company should encourage the user to inform others as well. It could be as simple as a sticker alerting visitors that a house is under video surveillance. Or it might be a notification in the app that asks the user to explain the device’s capabilities to housemates or loved ones. Such a notification won’t help those whose partners use connected devices as an avenue for abuse and control, but it will remind anyone setting up a device in their home that it could grant surveillance-like access to their family members.

In professional settings, consent can build trust in a connected product or automated system. For example, AdventHealth Celebration, a hospital in the Orlando, Fla., area, has implemented a tracking system for nurses that monitors their movements during a shift to determine optimal workflows. Rather than just turning the system loose, however, Celebration informed nurses before bringing in the system and has since worked with them to interpret the results.

So far, the hospital has shifted how it allocates patients to rooms to make sure high-needs patients aren’t next to one another and assigned to the same nurse. But getting the nurses involved at the start was crucial to success. Cities deploying facial recognition in schools or in airports without asking citizens for input would do well to pay attention to the success of Celebration’s system. A failure to ask for input or to inform citizens shows a clear lack of concern around consent.

Which in turn implies that our governments aren’t keen on respect and meeting people where they are at. Even if that’s true for governments, is that the message that tech companies want to send to customers?

This article appears in the July 2020 print issue as “The IoT’s Consent Problem.”

Tracking COVID-19 With the IoT May Put Your Privacy at Risk

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/security/tracking-covid19-with-the-iot-may-put-your-privacy-at-risk

The Internet of Things makes the invisible visible. That’s the IoT’s greatest feature, but also its biggest potential drawback. More sensors on more people means the IoT becomes a visible web of human connections that we can use to, say, track down a virus.

Track-and-trace programs are already being used to monitor outbreaks of COVID-19 and its spread. But because such programs amount to easily enabled mass surveillance, we need to put rules in place about how any attempt to track the movements of people should be undertaken.

In April, Google and Apple said they would work together to build an opt-in program for Android or iOS users. The program would use their phones’ Bluetooth connection to deliver exposure notifications—meaning that transmissions are tracked by who comes into contact with whom, rather than where people spend their time. Other proposals use location data provided by phone applications to determine where people are traveling.

All of these ideas have slightly different approaches, but at their core they’re still tracking programs. Any such program that we implement to track the spread of COVID-19 should follow some basic guidelines to ensure that the data is used only for public health research. This data should not be used for marketing, commercial gain, or law enforcement. It shouldn’t even be used for research outside of public health.

Let’s talk about the limits we should place around this data. A tracking program for COVID-19 should be implemented only for a prespecified duration that’s associated with a public health goal (like reducing the spread of the virus). So, if we’re going to collect device data and do so without requiring a user to opt in, governments need to enact legislation that explains what the tracking methodology is, requires an audit for accuracy and efficacy by a third party, and sets a predefined end.

Ethical data collection is also critical. Apple and Google’s Bluetooth method uses encrypted tokens to track people as they pass other people. The Bluetooth data is people-centric, not location-centric. Once a person uploads a confirmation that they’ve been infected, their device can issue notifications to other devices that were recently nearby, alerting users—anonymously—that they may have come in contact with someone who’s infected.
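
Here is a highly simplified sketch of that rotating-token idea. It leaves out the real key schedule and cryptography in the Apple and Google specification; it is meant only to show why no identity or location data needs to leave the phone.

```python
# Simplified sketch of Bluetooth exposure notification via rotating tokens.
# Real systems derive tokens from cryptographic keys on a schedule; here
# they are just random bytes, purely for illustration.
import secrets

class Phone:
    def __init__(self):
        self.my_tokens = []        # tokens this phone has broadcast
        self.heard_tokens = set()  # tokens heard from nearby phones

    def broadcast_token(self) -> bytes:
        token = secrets.token_bytes(16)  # rotates regularly; not tied to identity
        self.my_tokens.append(token)
        return token

    def hear(self, token: bytes) -> None:
        self.heard_tokens.add(token)

    def check_exposure(self, tokens_of_infected: set) -> bool:
        # Matching happens on the phone; no location or identity is uploaded.
        return bool(self.heard_tokens & tokens_of_infected)

alice, bob = Phone(), Phone()
bob.hear(alice.broadcast_token())         # the two phones pass each other
alice_reports = set(alice.my_tokens)      # Alice tests positive, shares her tokens
print(bob.check_exposure(alice_reports))  # -> True: Bob gets an anonymous alert
```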

This is good. And while it might be possible to match a person to a device, it would be difficult. Ultimately, linking cases anonymously to devices is safer than simply collecting location data on infected individuals. The latter makes it easy to identify people based on where they sleep at night and work during the day, for example.

Going further, this data must be encrypted on the device, in transit, and when stored on a cloud or government server, so that random hackers can’t access it. Only the agency in charge of track-and-trace efforts should have access to the data from the device. This means that police departments, immigration agencies, or private companies can’t access that data. Ever.

However, researchers should have access to some form of the data after a few years have passed. I don’t know what that time limit should be, but when that time comes, institutional review boards, like those that academic institutions use to protect human research subjects, should be in place to evaluate each request for what could be highly personal data.

If we can get this right, we can use the lessons learned during COVID-19 not only to protect public health but also to promote a more privacy-centric approach to the Internet of Things.

This article appears in the June 2020 print issue as “Pandemic vs. Privacy.”

COVID-19 Makes It Clear That Broadband Access Is a Human Right

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/internet/covid19-makes-clear-broadband-access-is-human-right

Like clean water and electricity, broadband access has become a modern-day necessity. The spread of COVID-19 and the ensuing closure of schools and workplaces and even the need for remote diagnostics make this seem like a new imperative, but the idea is over a decade old. Broadband is a fundamental human right, essential in times like now, but just as essential when the world isn’t in chaos.

A decade ago, Finland declared broadband a legal right. In 2011, the United Nations issued a report [PDF] with a similar conclusion. At the time, the United States was also debating its broadband policy and a series of efforts that would ensure everyone had access to broadband. But decisions made by the Federal Communications Commission between 2008 and 2012 pertaining to broadband mapping, network neutrality, data caps, and the very definition of broadband are now coming back to haunt the United States as cities lock themselves down to flatten the curve on COVID-19.

While some have voiced concerns about whether the strain of everyone working remotely might break the Internet, the bigger issue is that not everyone has Internet access in the first place. Most U.S. residential networks are built for peak demand, and even the 20 to 40 percent increase in network traffic seen in locations hard hit by the virus won’t be enough to buckle networks.

An estimated 21 million to 42 million people in the United States don’t have physical access to broadband, and even more cannot afford it or are reliant on mobile plans with data limits. For a significant portion of our population, this makes remote schooling and work prohibitively expensive at best and simply not an option at worst. This number hasn’t budged significantly in the last decade, and it’s not just a problem for the United States. In Hungary, Spain, and New Zealand, a similar percentage of households also lack a broadband subscription, according to data from the Organization for Economic Co-operation and Development.

Faced with the ongoing COVID-19 outbreak, Internet service providers in the United States have already taken several steps to expand broadband access. Comcast, for example, has made its public Wi-Fi network available to anyone. The company has also expanded its Internet Essentials program—which provides a US $9.95 monthly connection and a subsidized laptop—to a larger number of people on some form of government assistance.

To those who already have access but are now facing financial uncertainty, AT&T, Comcast, and more than 200 other U.S. ISPs have pledged not to cut off subscribers who can’t pay their bills and not to charge late fees, as part of an FCC plan called Keep Americans Connected. Additionally, AT&T, Comcast, and Verizon have also promised to eliminate data caps for the near future, so customers don’t have to worry about blowing past a data limit while learning and working remotely.

It’s good to keep people connected during quarantines and social distancing, but going forward, some of these changes should become permanent. It’s not enough to say that broadband is a basic necessity; we have to push for policies that ensure companies treat it that way.

“If it wasn’t clear before this crisis, it is crystal clear now that broadband is a necessity for every aspect of modern civic and commercial life. U.S. policymakers need to treat it that way,” FCC Commissioner Jessica Rosenworcel says. “We should applaud public spirited efforts from our companies, but we shouldn’t stop there.” 

This article appears in the May 2020 print issue as “We All Deserve Broadband.”

It’s Too Late to Undo Climate Change. We Need Tech in Order to Adapt

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/consumer-electronics/portable-devices/its-too-late-to-undo-climate-change-we-need-tech-in-order-to-adapt

On the CES floor in Las Vegas this past January, I saw dozens of companies showing off products designed to help us adapt to climate change. It was an unsettling reminder that we’ve tipped the balance on global warming and that hotter temperatures, wildfires, and floods are the new reality.

Based on our current carbon dioxide emissions, we can expect warming of up to 1.5 °C by 2033. Even if we stopped spewing carbon today, temperatures would continue to rise for a time, and weather would grow still more erratic.

The companies at CES recognize that it’s too late to stop climate change. Faced with that realization, this group of entrepreneurs is focusing on climate adaptation. For them, the goal is to make sure that people and the global economy will still survive across as much of the world as possible. These entrepreneurs’ companies are developing practical technologies, such as garments that adapt to the weather or new building materials with higher melting points so that roads won’t crack in extreme temperatures.

One of the biggest risks in a warming world is that both outdoor workers and their equipment will overheat more often. Scientists expect to see humans migrate from parts of the world where temperatures and humidity combine to repeatedly create heat indexes of 40.6 °C, because beyond that temperature humans have a hard time surviving [PDF]. But even in more temperate locations, the growing number of hotter days will also make it tough for outdoor workers.

Embr Labs is building a bracelet that the company says can lower a person’s perceived temperature a few degrees simply by changing the temperature on their wrist. The bracelet doesn’t change actual body temperature, so it can’t help outdoor workers avoid risk on a sweltering day. But it could still be used to keep workers cooler on safe yet still uncomfortably warm days. It might also allow companies to raise their indoor temperatures, saving on air-conditioning costs.

Elsewhere, Epicore Biosystems is building wearable microfluidic sensors that monitor people for dehydration or high body temperatures. The Epicore sensors are already being used for athletes. But it’s not hard to imagine that in the near future there’d be a market for putting them on construction, farm, and warehouse workers who have to perform outside jobs in hot weather.

Extreme temperatures—and extreme fluctuations between temperatures—are also terrible for our existing road and rail infrastructure. Companies such as RailPod, as well as universities, are building AI-powered drones and robots that can monitor miles of roadway or rail track and send back data on needed repairs.

And then there’s flooding. Coastal roads and roads near rivers will need to withstand king tides, flash floods, and sustained floodwaters. Pavement engineers are working on porous concrete to mitigate flood damage and on embedded sensors to communicate a road’s status in real time to transportation officials.

There are so many uncertainties about our warming planet, but what isn’t in doubt is that climate change will damage our infrastructure and disrupt our patterns of work. Plenty of companies are focused on the admirable goal of preventing further warming, but we need to also pay attention to the companies that can help us adapt. A warmer planet is already here.

This article appears in the April 2020 print issue as “Tech for a Warming World.”

Network Slicing is 5G’s Hottest Feature

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/wireless/network-slicing-5gs-hottest-feature

The hottest feature of 5G isn’t discussed very much. When people talk about 5G they tend to discuss the gigabit speeds or the lower latencies. But it’s network slicing, the ability to partition off segments of the 5G network with specific latency, bandwidth, and quality-of-service guarantees, that could change the underlying economics of cellular service. Network slicing could lead to new companies that provide connectivity and help offset the capital costs associated with deploying 5G networks.

How? Instead of selling data on a per-gigabyte basis, these companies could sell wireless connectivity with specific parameters. A manufacturing facility, for example, may prioritize low latency so that its robots operate as precisely as possible. A hospital may want not only low latency but also dedicated bandwidth for telemedicine, to ensure that signals aren’t lost at an inopportune moment.

Today, if a hospital or factory wants a dedicated wireless network with specific requirements, a telco has to custom-engineer it. But with network slicing, the telco can instead use software to allocate slices without human involvement. This would reduce the operating costs of a 5G network. That ease and flexibility, combined with the ability to price the network for different capabilities, will be what helps carriers justify the capital costs of deploying 5G, says Paul Challoner, the vice president of network product solutions for Ericsson North America.
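
To picture what ordering a slice might involve, here is an illustrative sketch of two slice requests and a toy pricing rule. The field names and numbers are invented; they do not come from 3GPP specifications or any carrier’s actual ordering API.

```python
# Illustrative, invented slice requests for the factory and hospital
# examples above, plus a toy pricing rule. Not a real carrier API.

factory_slice = {
    "name": "robot-control",
    "latency_ms_max": 5,               # tight motion-control loops
    "bandwidth_mbps_guaranteed": 50,
    "reliability_pct": 99.999,
    "duration_days": 365,
}

hospital_slice = {
    "name": "telemedicine",
    "latency_ms_max": 20,
    "bandwidth_mbps_guaranteed": 500,  # dedicated capacity for video consults
    "reliability_pct": 99.99,
    "duration_days": 30,
}

def monthly_price(slice_spec: dict) -> float:
    """Toy model: tighter latency and more guaranteed bandwidth cost more."""
    return 100 / slice_spec["latency_ms_max"] * slice_spec["bandwidth_mbps_guaranteed"]

print(monthly_price(factory_slice), monthly_price(hospital_slice))
```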

Challoner envisions that soon customers will be able to go to a telco’s website and define what they want, get the pricing for it, and then use the network slice for however long they need. He sees 2020 as being the year that equipment companies like Ericsson “race to the slice,” trying to show wireless carriers what they can do.

Mobile-tech consultant Chetan Sharma thinks network slicing will likely take a year or two longer to hit the mainstream. But he also sees it as a catalyst for new companies that will enter the market to resell connectivity for dedicated use cases. For example, companies like Twilio or Particle, which already resell network connectivity to clients, could bring together slices from different carriers to offer a global service with specific characteristics. A company like BMW could then use that service when it wants to roll out a software update at a specific time to all of its vehicles—and to ensure that the update made it through.

Or maybe Amazon or Microsoft Azure could offer an industrial IoT product to factories that have specific latency requirements, by bundling together wireless connectivity from multiple carriers. A few years back, the telecom industry was debating whether carriers were becoming a dumb pipe. Sharma thinks the ability to customize speed, latency, and quality of service means 5G will put an end to that particular debate.

That said, carriers charging customers based on the capabilities they need does mean that some people will bring up concerns around network neutrality and how to ensure that customers aren’t charged an arm and a leg for a decent best-effort service.

“It’s uncharted territory,” says Sharma. “When the FCC was looking at [network neutrality] they didn’t consider network slicing as part of the equation. So my view is that they will have to update what operators are allowed to do with network slicing. We’ll need more clarity on the ruling.”

This article appears in the March 2020 print issue as “What 5G Hype Gets Wrong.”

The Long Goodbye of Wi-Fi Has Begun

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/wireless/the-long-goodbye-of-wifi-has-begun

In ten years, we won’t need Wi-Fi.

At least, that’s what Azhar Hussain, the CEO of IoT company Hanhaa, told me on a phone call late last year. He thinks the end of Wi-Fi is nigh because he believes that allocating spectrum in smaller chunks will let municipalities, universities, and companies create private 5G cellular networks. The convenience of those networks will impel companies to choose cellular connections over Wi-Fi for their IoT devices.

There’s reason to think Hussain is right, at least for higher-value devices, such as medical devices, home appliances, and outdoor gear like pool-cleaning robots. Zach Supalla, the CEO of Particle, a company that supplies IoT components to businesses with little experience building connected products, says more than half of the IoT devices in Particle’s cloud that use cellular connections are also within range of a Wi-Fi network. Supalla says that companies choose cellular modules over Wi-Fi because the modules are easier to set up and businesses can better control the consumer experience.

Wi-Fi devices are notoriously difficult to connect to one another, or pair. To get a connected product onto their home Wi-Fi network, consumers must often connect their phone to a temporary, software-based access point broadcast by the device, hand over their network credentials, and then switch the device over to their own network.
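To make that flow concrete, here’s a schematic sketch of the steps in Python. Every function is a print-only stub standing in for what a real phone app and device would do; nothing here is an actual SDK.

```python
# Schematic sketch of the typical "SoftAP" onboarding flow for a Wi-Fi
# device. All functions are illustrative stubs, not real SDK calls.

def discover_setup_network() -> str:
    print("Scanning for the device's temporary access point...")
    return "SmartPlug-Setup-1A2B"  # hypothetical SSID broadcast by the device

def join_network(ssid: str) -> None:
    print(f"Phone joins Wi-Fi network: {ssid}")

def send_credentials(ssid: str, password: str) -> None:
    print(f"Handing the device credentials for '{ssid}'")

def onboard_device(home_ssid: str, home_password: str) -> None:
    setup_ssid = discover_setup_network()       # 1. device broadcasts its own AP
    join_network(setup_ssid)                    # 2. phone joins that temporary AP
    send_credentials(home_ssid, home_password)  # 3. hand over the real credentials
    join_network(home_ssid)                     # 4. device and phone rejoin the home network

onboard_device("MyHomeWiFi", "correct-horse-battery-staple")
```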

This process can be fraught with errors. Even I, a reporter who has tested hundreds of connected devices, fail to get a device on my network on the first try roughly a third of the time. To make it easier, Amazon and Google have both created proprietary onboarding processes that handle the setup on behalf of the user, so that when consumers power their devices on, they automatically try to join their network.

However, device manufacturers still have to implement both Amazon’s and Google’s programs separately, and that requires know-how that some companies don’t possess. Thankfully, Amazon, Apple, and Google are now working on a smart-home standard that may simplify things. But the details are scant, and any solution they develop won’t be available until 2021 at the earliest.

When you’re faced with multiple Wi-Fi ecosystems, cellular is just easier, Hussain says. Cellular connectivity costs more today because you have to put a cellular radio in each device and pay a subscription to use the network. Hussain sees those costs coming down, potentially even disappearing, given time.

That’s because he’s anticipating a future where universities, businesses, and municipalities set up their own cellular networks using spectrum obtained through new spectrum auctions, such as the Citizens Broadband Radio Service (CBRS) auctions occurring in the United States in June. Cellular equipment makers are already building gear and testing these private networks in factories and offices. If new roaming plans are developed to allow devices to come onto these local networks easily, similar to joining a Wi-Fi hotspot, cellular connectivity will become practically free.

Even if Hussain’s vision doesn’t come to pass in the next 10 years, the costs of low-data-rate cellular contracts will continue to drop, and that alone could eventually put a nail in Wi-Fi’s coffin. And I mostly agree: I don’t think Wi-Fi will ever disappear entirely, but I do think small cellular networks will take over much of its place in our lives.

This article appears in the February 2020 print issue as “Wi-Fi’s Long Goodbye.”

Engineers Are Pushing Machine Learning to the World’s Humblest Microprocessors

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/wireless/engineers-are-pushing-machine-learning-to-the-worlds-humblest-microprocessors

In February, a group of researchers from Google, Microsoft, Qualcomm, Samsung, and half a dozen universities will gather in San Jose, Calif., to discuss the challenge of bringing machine learning to the farthest edge of the network, specifically microprocessors running on sensors or other battery-powered devices.

The event is called the Tiny ML Summit (ML for “machine learning”), and its goal is to figure out how to run machine learning algorithms on the tiniest microprocessors out there. Machine learning at the edge will drive better privacy practices, lower energy consumption, and enable novel applications in future generations of devices.

As a refresher, at its core machine learning is the training of a neural network. Such training requires a ton of data manipulation. The end result is a model that is designed to complete a task, whether that’s playing Go or responding to a spoken command.

Many companies are currently focused on building specialized silicon for machine learning in order to train networks inside data centers. They also want silicon for conducting inference—running new data through a trained model to get a prediction—at the edge. But the goal of the Tiny ML community is to take inference to the smallest processors out there—like an 8-bit microcontroller that powers a remote sensor.
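One common route to fitting a model on such a chip is post-training quantization, which converts a network’s 32-bit floating-point weights into 8-bit integers. Here’s a minimal sketch using TensorFlow Lite’s converter; the toy model and random “representative” data are stand-ins for a real network and real samples.

```python
# Minimal sketch: shrink a Keras model to an 8-bit integer-only TensorFlow
# Lite model of the kind that can be compiled into microcontroller firmware.
import numpy as np
import tensorflow as tf

# Tiny stand-in model; in practice this would be your trained network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])

def representative_data():
    # Sample inputs set the quantization ranges; use real data in practice.
    for _ in range(100):
        yield [np.random.rand(1, 32).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8   # int8 in, int8 out, no floats
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```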

To be clear, there’s already been a lot of progress in bringing inference to the edge if we’re talking about something like a smartphone. In November 2019, Google open-sourced two versions of its machine learning algorithms, one of which required 50 percent less power to run, and the other of which performed twice as fast as previous versions of the algorithm. There are also several startups such as Flex Logix, Greenwaves, and Syntiant tackling similar challenges using dedicated silicon.

But the Tiny ML community has different goals. Imagine including a machine learning model that can separate a conversation from background noise on a hearing aid. If you can’t fit that model on the device itself, then you need to maintain a wireless connection to the cloud where the model is running. It’s more efficient, and more secure, to run the model directly on the hearing aid—if you can fit it.

Tiny ML researchers are also experimenting with better data classification by using ML on battery-powered edge devices. Jags Kandasamy, CEO of Latent AI, which is developing software to compress neural networks for tiny processors, says his company is in talks with companies that are building augmented-reality and virtual-reality headsets. These companies want to take the massive amounts of image data their headsets gather and classify the images seen on the device so that they send only useful data up to the cloud for later training. For example, “If you’ve already seen 10 Toyota Corollas, do they all need to get transferred to the cloud?” Kandasamy asks.
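Here’s a rough sketch of that kind of on-device filtering. The classifier is a random stub standing in for whatever compressed model would actually run on the headset; the logic just forwards low-confidence detections and caps how many examples of each label get uploaded.

```python
# Illustrative sketch of on-device filtering: classify locally, upload
# selectively. classify() is a stand-in for a real compressed model.
import random
from collections import Counter

def classify(frame):
    # Pretend model: returns a (label, confidence) pair.
    return random.choice(["corolla", "pedestrian", "stop sign"]), random.random()

seen_labels = Counter()
UPLOAD_CAP_PER_LABEL = 10  # e.g., stop after ten Toyota Corollas

def should_upload(frame) -> bool:
    label, confidence = classify(frame)
    seen_labels[label] += 1
    # Send anything the model is unsure about, plus the first few of each label.
    return confidence < 0.6 or seen_labels[label] <= UPLOAD_CAP_PER_LABEL

uploads = sum(should_upload(frame) for frame in range(1000))
print(f"{uploads} of 1,000 frames would be sent to the cloud")
```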

On-device classification could be a game changer in reducing the amount of data gathered and sent up to the cloud, which saves on bandwidth and electricity. That’s a good thing, because machine learning typically requires a lot of electricity.

There’s plenty of focus on the “bigger is better” approach when it comes to machine learning, but I’m excited about the opportunities to bring machine learning to the farthest edge. And while Tiny ML is still focused on the inference challenge, maybe someday we can even think about training the networks themselves on the edge.

This article appears in the January 2020 print issue as “Machine Learning on the Edge.”

Hey, Data Scientists: Show Your Machine-Learning Work

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/computing/software/hey-data-scientists-show-your-machinelearning-work

In the last two years, the U.S. Food and Drug Administration has approved several machine-learning models to accomplish tasks such as classifying skin cancer and detecting pulmonary embolisms. But for the companies that built those models, what happens if the data scientist who wrote the algorithms leaves the organization?

In many businesses, an individual or a small group of data scientists is responsible for building essential machine-learning models. Historically, they have developed these models on their own laptops through trial and error, passing them along to production when they work. But in that handoff, the data scientist might not think to pass along all the information about the model’s development. And if the data scientist leaves, that information is lost for good.

That potential loss of information is why experts in data science are calling for machine learning to become a formal, documented process overseen by more people inside an organization.

Companies need to think about what could happen if their data scientists take new jobs, or if a government organization or an important customer asks to see an audit of the algorithm to ensure it is fair and accurate. Not knowing what data was used to train the model and how the data was weighted could lead to a loss of business, bad press, and perhaps regulatory scrutiny, if the model turns out to be biased.

David Aronchick, the head of open-source machine-learning strategy at Microsoft Azure, says companies are realizing that they must run their machine-learning efforts the same way they run their software-development practices. That means encouraging documentation and codevelopment as much as possible.

Microsoft has some ideas about what the documentation process should look like. The process starts with the researcher structuring and organizing the raw data and annotating it appropriately. Not having a documented process at this stage could lead to poorly annotated data that has biases associated with it or is unrelated to the problem the business wants to solve.

Next, during training, a researcher feeds the data to a neural network and tweaks how it weighs various factors to get the desired result. Typically, the researcher is still working alone at this point, but other people should get involved to see how the model is being developed—just in case questions come up later during a compliance review or even a lawsuit.

A neural network is a black box when it comes to understanding how it makes its decisions, but the data, the number of layers, and how the network weights different parameters shouldn’t be mysterious. The researchers should be able to tell how the data was structured and weighted at a glance.
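What might that documentation look like in practice? One lightweight option is to write a machine-readable record of the data, architecture, and weighting decisions alongside every trained model. The sketch below uses entirely hypothetical fields and values; it isn’t Microsoft’s process or any formal standard.

```python
# Hedged sketch of a training record saved next to a model. Every field
# and value here is a hypothetical example, not a prescribed format.
import json
from datetime import date

training_record = {
    "model_name": "holiday-spend-forecaster",              # hypothetical
    "trained_on": str(date.today()),
    "dataset": {
        "source": "s3://example-bucket/transactions.csv",  # placeholder path
        "rows": 1_200_000,
        "date_range": ["2019-11-01", "2019-12-26"],
        "known_biases": ["holiday-season spending only"],
    },
    "architecture": {"layers": 4, "hidden_units": [128, 64, 32]},
    "weighting_notes": "seasonality features weighted heavily; see notebook",
    "validation_metrics": {"mean_absolute_error": 4.2},    # illustrative number
    "owner": "data-science-team@example.com",
}

with open("model_card.json", "w") as f:
    json.dump(training_record, f, indent=2)
```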

It’s also at this point that good documentation can make a model more flexible for future use. For example, a shopping site’s model trained specifically on Christmas spending patterns can’t simply be applied to Valentine’s Day spending. Without good documentation, a data scientist would have to essentially rebuild the model rather than going back and tweaking a few parameters to adjust it for a new holiday.

The last step in the process is actually deploying the model. Historically, only at this point would other people get involved and acquaint themselves with the data scientist’s hard work. Without good documentation, they’re sure to get headaches trying to make sense of it. But now that data is so essential to so many businesses—not to mention the need to adapt quickly—it’s time for companies to build machine-learning processes that rival the quality of their software-development processes.

This article appears in the December 2019 print issue as “Show Your Machine-Learning Work.”

Let’s Build Robots That Are as Smart as Babies

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/robotics/artificial-intelligence/lets-build-robots-that-are-as-smart-as-babies

Let’s face it: Robots are dumb. At best they are idiot savants, capable of doing one thing really well. In general, even those robots require specialized environments in which to do their one thing really well. This is why autonomous cars or robots for home health care are so difficult to build. They’ll need to react to an uncountable number of situations, and they’ll need a generalized understanding of the world in order to navigate them all.

Babies as young as two months already understand that an unsupported object will fall, while five-month-old babies know that materials like sand and water will pour from a container rather than plop out as a single chunk. Robots lack this intuitive understanding of the physical world, which hinders them whenever they must navigate it without a prescribed task and a scripted set of movements.

But we could see robots with a generalized understanding of the world (and the processing power required to wield it) thanks to the video-game industry. Researchers are bringing physics engines—the software that provides real-time physical interactions in complex video-game worlds—to robotics. The goal is to give robots an understanding of physics so that they can learn about the world in the same way babies do.

Giving robots a baby’s sense of physics helps them navigate the real world and can even save on computing power, according to Lochlainn Wilson, the CEO of SE4, a Japanese company building robots that could operate on Mars. SE4 plans to avoid the problems of latency caused by distance from Earth to Mars by building robots that can operate independently for a few hours before receiving more instructions from Earth.

Wilson says that his company uses simple physics engines such as PhysX to help build more-independent robots. He adds that if you can tie a physics engine to a coprocessor on the robot, the real-time basic physics intuitions won’t take compute cycles away from the robot’s primary processor, which will often be focused on a more complicated task.

Wilson’s firm occasionally still turns to a traditional graphics engine, such as Unity or the Unreal Engine, to handle the demands of a robot’s movement. In certain cases, however, such as a robot accounting for friction or understanding force, you really need a robust physics engine, Wilson says, not a graphics engine that simply simulates a virtual environment. For his projects, he often turns to the open-source Bullet Physics engine built by Erwin Coumans, who is now an employee at Google.
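As a tiny illustration of what a physics engine buys you, here’s a minimal sketch using Bullet through its open-source pybullet Python bindings (assuming pybullet is installed): drop an unsupported block and confirm that, like a two-month-old, the program “knows” it will fall.

```python
# Minimal Bullet (pybullet) sketch: an unsupported cube falls to the ground.
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)  # headless physics simulation, no GUI
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)

p.loadURDF("plane.urdf")                                        # the ground
cube = p.loadURDF("cube_small.urdf", basePosition=[0, 0, 1.0])  # held in midair

for _ in range(240):  # simulate one second at the default 240 Hz timestep
    p.stepSimulation()

position, _ = p.getBasePositionAndOrientation(cube)
print(f"Cube height after one second: {position[2]:.3f} m")  # near zero: it fell
p.disconnect()
```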

Bullet is a popular physics-engine option, but it isn’t the only one out there. Nvidia Corp., for example, has realized that its gaming and physics engines are well-placed to handle the computing demands required by robots. In a lab in Seattle, Nvidia is working with teams from the University of Washington to build kitchen robots, fully articulated robot hands and more, all equipped with Nvidia’s tech.

When I visited the lab, I watched a robot arm move boxes of food from counters to cabinets. That’s fairly straightforward, but that same robot arm could avoid my body if I got in its way, and it could adapt if I moved a box of food or dropped it onto the floor.

The robot could also understand that less pressure is needed to grasp something like a cardboard box of Cheez-It crackers versus something more durable like an aluminum can of tomato soup.

Nvidia’s silicon has already helped advance the fields of artificial intelligence and computer vision by making it possible to process multiple decisions in parallel. It’s possible that the company’s new focus on virtual worlds will help advance the field of robotics and teach robots to think like babies.

This article appears in the November 2019 print issue as “Robots as Smart as Babies.”

Where’s My Stuff? Now, Bluetooth and Ultrawideband Can Tell You

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/standards/wheres-my-stuff-now-bluetooth-and-ultrawideband-can-tell-you

We all lose things. Think about how much time you’ve spent searching for your keys or your wallet. Now imagine how much time big companies spend searching for lost items. In a hospital, for example, the quest for a crash cart can slow a response team during an emergency, while on a construction site, the hunt for the right tool can lead to escalating delays.

According to a recent study funded by Microsoft, roughly 33 percent of the companies that have adopted the Internet of Things use it to track their stuff. Quality location data is important for more than tracking misplaced tools; it’s also necessary for robots in manufacturing and for autonomous vehicles, so they can spot nearby humans and avoid them.

The growing interest in locating things is reflected in updated wireless standards. The Bluetooth Special Interest Group estimates that with the updated 5.1 standard, the wireless technology can now locate devices to within a few inches. Elsewhere, Texas Instruments has built a radar chip using 60-gigahertz signals that can help robots “see” where things are in a factory by bouncing radio waves off their surroundings.

But for me, the real excitement is in a newcomer to the scene. In August, NXP, Bosch, Samsung, and access company Assa Abloy launched the FiRa Consortium to handle location tracking using ultrawideband radios (FiRa stands for “fine-ranging”). This isn’t the ultrawideband of almost 20 years ago, which offered superfast wireless data transfers over short distances much like Wi-Fi does today. FiRa uses a wide band of spectrum in the 6- to 9-GHz range and relies on the new IEEE 802.15.4z standard. The base standard is used for other IoT network technologies, including Zigbee, Wi-SUN, 6LoWPAN, and Thread radios, but the z formulation is designed specifically for securely ascertaining the location of a device.

FiRa delivers location data based on a time-of-flight measurement—the time it takes a quick signal pulse to make a round trip to the device. This is different from Bluetooth’s method, which opens a connection between radios and then broadcasts the location. Charles Dachs, vice chair of the FiRa Consortium and vice president of mobile transactions at NXP, says FiRa’s pulselike data transmissions allow location data to be gleaned for items within 100 to 200 meters of a node without sucking up a lot of power. Time-of-flight measurements also allow for additional security: they make it harder to spoof a location, and they’re accurate enough to make it obvious that a person is right there rather than a few meters away. Also, because the radio transmissions aren’t constant, it’s possible for hundreds of devices to ping a node without overwhelming it. By comparison, Bluetooth nodes can handle only about 50 devices.
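The arithmetic behind those claims is simple, because radio pulses travel at the speed of light: halve the round-trip time, multiply by c, and you have the distance. A quick back-of-the-envelope sketch:

```python
# Time-of-flight arithmetic: the round-trip time of a pulse pins down distance.
C = 299_792_458  # speed of light in meters per second

def distance_from_round_trip(seconds: float) -> float:
    return C * seconds / 2  # halve it: the pulse goes out and comes back

# A pulse returning after ~667 nanoseconds puts the tag about 100 meters away.
print(f"{distance_from_round_trip(667e-9):.1f} m")

# Each nanosecond of timing error moves the estimate only ~15 centimeters,
# which is why delaying or replaying a pulse to fake a location is hard to hide.
print(f"{distance_from_round_trip(1e-9) * 100:.0f} cm per nanosecond")
```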

FiRa’s location-tracking feature is likely to be the application that entices many companies to adopt the standard, but it can do more. The consortium also hopes that automotive companies will use it for securely unlocking car doors or front doors wirelessly. However, there is a downside: Widespread FiRa use for locks would require either a separate fob or new radios on our smartphones.

I think it’s far more likely that FiRa will find its future in enterprise and industrial asset tracking. Historically, Bluetooth has struggled in this space because of the limited number of connections that can be made. Other radios have been a bit too niche, or not well designed for enterprise use. As for location tracking for us consumers? Apple and Google are both betting on Bluetooth, so that’s where I’d place my bets, too.

This article appears in the October 2019 print issue as “Where’s My Stuff?.”