All posts by Stacey Higginbotham

Open-Source Vote-Auditing Software Can Boost Voter Confidence

Post Syndicated from Stacey Higginbotham original

Election experts were already concerned about the security and accuracy of the 2020 U.S. presidential election. Now, with the ongoing COVID-19 pandemic and the new risk it creates for in-person voting—not to mention the debate about whether mail-in ballots lead to voter fraud—the amount of anxiety around the 2020 election is unprecedented.

“Elections are massively complicated, and they are run by the most OCD individuals, who are process oriented and love color coding,” says Monica Childers, a product manager with the nonprofit organization VotingWorks. “And in a massively complex system, the more you change things, especially at the last minute, the more you introduce the potential for chaos.” But that’s just what election officials are being forced to do.

Most of the conversation around election security focuses on the security of voting machines and preventing interference. But it’s equally important to prove that ballots were correctly counted. If a party or candidate cries foul, states will have to audit their votes to prove there were no miscounts.

VotingWorks has built an open-source vote-auditing software tool called Arlo, and the organization has teamed up with the U.S. Cybersecurity and Infrastructure Security Agency to help states adopt the tool. Arlo helps election officials conduct a risk-limiting audit [PDF], which ensures that the reported results match the actual results. And because it’s open source, all aspects of the software are available for inspection.

There are actually several ways to audit votes. You’re probably most familiar with recounts, a process dictated by law that orders a complete recounting of ballots if an election is very close. But full recounts are rare. More often, election officials will audit the ballots tabulated by a single machine, or verify the ballots cast in a few precincts. However, those techniques don’t give a representative sample of how an entire state may have voted.

This is where a risk-limiting audit excels. The audit takes a random sample of the ballots from across the area undergoing the audit and outlines precisely how the officials should proceed. This includes giving explicit instructions for choosing the ballots at random (pick the fourth box on shelf A and then select the 44th ballot down, for example). It also explains how to document a “chain of custody” for the selected ballots so that it’s clear which auditors handled which ballots.

The random-number generator that Arlo uses to select the ballots is published online. Anyone can use the tool to select the same ballots to audit and compare their results. The software provides the data-entry system for the teams of auditors entering the ballot results. Arlo will also indicate how likely it is that the entire election was reported correctly.
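The core idea is that a published seed makes the sample reproducible. Here's a minimal sketch of that idea (illustrative only; this is not Arlo's actual sampler, and the seed string is hypothetical):

```python
import hashlib

def select_ballots(seed, total_ballots, sample_size):
    """Deterministically choose ballot numbers from a public seed.

    Anyone who re-runs this with the same seed and ballot count gets
    exactly the same sample, so the selection can be checked
    independently. (Illustrative only; not Arlo's actual algorithm.)
    """
    chosen = []
    counter = 0
    while len(chosen) < sample_size:
        digest = hashlib.sha256("{},{}".format(seed, counter).encode()).hexdigest()
        index = int(digest, 16) % total_ballots + 1  # ballots numbered 1..N
        if index not in chosen:  # sample without replacement
            chosen.append(index)
        counter += 1
    return chosen

# Two independent observers using the published seed agree exactly.
sample_a = select_ballots("2020-11-03 dice roll: 8842", 10000, 5)
sample_b = select_ballots("2020-11-03 dice roll: 8842", 10000, 5)
```

Because the seed is public and the procedure is deterministic, any observer can verify that officials audited the ballots the process actually called for.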

The technology may not be fancy, but the documentation and the attention to a replicable process are. And that's what matters most for validating the results of a contested election.

Arlo has been tested in elections in Michigan, Ohio, Pennsylvania, and a few other states. The software isn't the only way a state or election official can conduct a risk-limiting audit, but it does make the process easier. Childers says Colorado took almost 10 years to set up risk-limiting audits on its own. With Arlo and VotingWorks staff helping, several states have set up these processes in less than a year.

The upcoming U.S. election is dominated by partisanship, but risk-limiting audits have been embraced by both parties. So far, it seems everyone agrees that when a vote is cast, the government needs to count it correctly.

This article appears in the October 2020 print issue as “Making Sure Votes Count.”

For the IoT, User Anonymity Shouldn’t Be an Afterthought. It Should Be Baked In From the Start


The Internet of Things has the potential to usher in many possibilities—including a surveillance state. In the July issue, I wrote about how user consent is an important prerequisite for companies building connected devices. But there are other ways companies are trying to ensure that connected devices don’t invade people’s privacy.

Some IoT businesses are designing their products from the start to discard any personally identifiable information. Andrew Farah, the CEO of Density, which developed a people-counting sensor for commercial buildings, calls this “anonymity by design.” He says that rather than anonymizing a person’s data after the fact, the goal is to design products that make it impossible for the device maker to identify people in the first place.

“When you rely on anonymizing your data, then you’re only as good as your data governance,” Farah says. With anonymity by design, you can’t give up personally identifiable information, because you don’t have it. Density, located in Macon, Ga., settled on a design that uses four depth-perceiving sensors to count people by using height differentials.
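Counting by height differential means the sensor never needs to know who anyone is. A toy sketch of the principle (not Density's actual algorithm; the readings and thresholds are hypothetical):

```python
def count_people(height_readings_mm, min_person_height_mm=1000):
    """Count people in a strip of depth-sensor readings by height alone.

    Each contiguous run of readings above the height threshold counts
    as one person; no image, face, or identity is ever captured.
    (Toy illustration, not Density's actual algorithm.)
    """
    count = 0
    in_person = False
    for h in height_readings_mm:
        if h >= min_person_height_mm and not in_person:
            count += 1        # rising edge: a new person enters the strip
            in_person = True
        elif h < min_person_height_mm:
            in_person = False
    return count

# Floor (~0 mm), two people (~1700 mm and ~1500 mm), a chair (~900 mm).
readings = [0, 0, 1700, 1710, 0, 900, 0, 1500, 1520, 0]
```

The sensor's output is just an integer. There is no personally identifiable information to leak, subpoena, or mishandle.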

Density could have chosen to use a camera to easily track the number of people in a building, but Farah balked at the idea of creating a surveillance network. Taj Manku, the CEO of Cognitive Systems, was similarly concerned about the possibilities of his company’s technology. Cognitive, in Waterloo, Ont., Canada, developed software that interprets Wi-Fi signal disruptions in a room to understand people’s movements.

With the right algorithm, the company’s software could tell when someone is sleeping or going to the bathroom or getting a midnight snack. I think it’s natural to worry about what happens if a company could pull granular data about people’s behavior patterns.

Manku worries about information gathered after the fact, such as police subpoenaing Wi-Fi disruption data that could reveal a person's actions in their home. So Cognitive does its data processing on the device and then dumps that data. Nothing identifiable is sent to the cloud. Likewise, customers who buy Cognitive's software can't access the raw data on their devices, just the insight. In other words, the software would register a fall without including a person's earlier actions.

“You have to start thinking about it from day one when you’re architecting the product, because it’s very hard to think about it after,” Manku says. It’s difficult to shut things down retroactively to protect privacy. It’s best if sensitive information stays local and gets purged.

Companies that embrace anonymity do give up helpful troves of data that could otherwise be used to train future machine-learning models and optimize their devices' performance. Cognitive gets around this limitation by having a set of employees and friends volunteer their data for training. Other companies decide they don't want to get into the analytics market at all, or take a more arduous route to acquiring training data for improving their devices.

If nothing else, companies should embrace anonymity by design in light of the growing amount of comprehensive privacy legislation around the world, like the General Data Protection Regulation in Europe and the California Consumer Privacy Act. Not only will it save them from lapses in their data-governance policies, it will guarantee that when governments come knocking for surveillance data, these businesses can turn them away easily. After all, you can’t give away something you never had.

This article appears in the September 2020 print issue as “Anonymous by Design.”

Power Grids Should Be as Data Driven as the Internet


Governments are setting ambitious renewable energy goals in response to climate change. The problem is, the availability of renewable sources doesn’t align with the times when our energy demands are the highest. We need more electricity for lights when the sun has set and solar is no longer available, for example. But if utilities could receive information about energy usage in real time, as Internet service providers already do with data usage, it would change the relationship we have with the production and consumption of our energy.

Utilities must still meet energy demands regardless of whether renewable sources are available, and they still have to mull whether to construct expensive new power plants to meet expected spikes in demand. But real-time information would make it easier to use more renewable energy sources when they’re available. Using this information, utilities could set prices in response to current availability and demand. This real-time pricing would serve as an incentive to customers to use more energy when those sources are available, and thus avoid putting more strain on power plants.

California is one example of this strategy. The California Energy Commission hopes that establishing rules for real-time electricity pricing will demonstrate how overall demand and availability affect cost. It's like surge pricing for a ride share: The idea is that electricity would cost more during peak demand. But the strategy would likely generate savings for people most of the time.

Granted, most people won’t be thrilled with the idea of paying more to dry their towels in the afternoons and evenings, as the sun goes down and demand peaks. But new smart devices could make the pricing incentives both easier on the customer and less visible by handling most of the heavy lifting that a truly dynamic and responsive energy grid requires.

For example, companies such as Ecobee, Nest, Schneider Electric, and Siemens could offer small app-controlled computers that would sit on the breaker boxes outside a building. The computer would manage the flow of electricity from the breaker box to the devices in the building, while the app would help set priorities and prices. It might ask the user during setup to decide on an electricity budget, or to give certain devices priority over others during peak demand.

Back in 2009, Google created similar software called Google PowerMeter, but the tech was too early—the appliances that could respond to real-time information weren’t yet available. Google shut down the service in 2011. Karen Herter, an energy specialist for the California Energy Commission, believes that the state’s rules for real-time pricing will be the turning point that convinces energy and tech giants to build such smart devices again.

This year, the CEC is writing rules for real-time pricing. The agency is investigating rates that update every hour, every 15 minutes, and every 5 minutes. No matter what, the rates will be publicly available, so that breaker box computers at homes and businesses can make decisions about what to power and when.
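At its heart, a breaker-box computer would run a simple scheduling policy against the published rate. A minimal sketch, with hypothetical appliance names and per-kilowatt-hour price caps, of how flexible loads might be deferred when prices spike:

```python
def plan_loads(current_price, appliances):
    """Decide which loads to run now, given the current $/kWh price.

    Each appliance is (name, price_cap, deferrable). Non-deferrable
    loads always run; deferrable ones wait until the published rate
    drops below their cap. (Hypothetical sketch of a pricing policy.)
    """
    run_now, deferred = [], []
    for name, price_cap, deferrable in appliances:
        if current_price <= price_cap or not deferrable:
            run_now.append(name)
        else:
            deferred.append(name)
    return run_now, deferred

appliances = [
    ("refrigerator", 0.50, False),  # never deferred
    ("dryer",        0.15, True),   # waits for cheap power
    ("ev_charger",   0.12, True),   # charges overnight
]
run, wait = plan_loads(0.30, appliances)  # evening peak: $0.30/kWh
```

With a policy like this set up once, the household saves money automatically; nobody has to watch the rate feed.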

We will all need to start caring about when we use electricity—whether to spend more money to run a dryer at 7 p.m., when demand is high, or run it overnight, when electricity may be cheaper. California, with the rules it’s going to have in place by January 2022, could be the first to create a market for real-time energy pricing. Then, we may see a surge of devices and services that could increase our use of renewable energy to 100 percent—and save money on our electric bills along the way.

This article appears in the August 2020 print issue as “Data-Driven Power.”

The Internet of Things Has a Consent Problem


Consent has become a big topic in the wake of the Me Too movement. But consent isn’t just about sex. At its core, it’s about respect and meeting people where they are at. As we add connected devices to homes, offices, and public places, technologists need to think about consent.

Right now, we are building the tools of public, work, and home surveillance, and we’re not talking about consent before we implement those tools. Sensors used in workplaces and homes can track sound, temperature, occupancy, and motion to understand what a person is doing and what the surrounding environment is like. Plenty of devices have cameras and microphones that feed back into a cloud service.

In the cloud, images, conversations, and environmental cues could be accessed by hackers. Beyond that, simply by having a connected device, users give the manufacturer’s employees a clear window into their private lives. While I personally may not mind if Google knows my home temperature or independent contractors at Amazon can accidentally listen in on my conversations, others may.

For some, the issue with electronic surveillance is simply that they don’t want these records created. For others, getting picked up by a doorbell camera might represent a threat to their well-being, given the U.S. government’s increased use of facial recognition and attempts to gather large swaths of electronic data using broad warrants.

How should companies think about IoT consent? Transparency is important—any company selling a connected device should be up-front about its capabilities and about what happens to the device data. Informing the user is the first step.

But the company should encourage the user to inform others as well. It could be as simple as a sticker alerting visitors that a house is under video surveillance. Or it might be a notification in the app that asks the user to explain the device's capabilities to housemates or loved ones. Such a notification won't help those whose partners use connected devices as an avenue for abuse and control, but it will remind anyone setting up a device in their home that it has the potential for surveillance-like access to their family members.

In professional settings, consent can build trust in a connected product or automated system. For example, AdventHealth Celebration, a hospital in the Orlando, Fla., area, has implemented a tracking system for nurses that monitors their movements during a shift to determine optimal workflows. Rather than just turning the system loose, however, Celebration informed nurses before bringing in the system and has since worked with them to interpret the results.

So far, the hospital has shifted how it allocates patients to rooms to make sure high-needs patients aren’t next to one another and assigned to the same nurse. But getting the nurses involved at the start was crucial to success. Cities deploying facial recognition in schools or in airports without asking citizens for input would do well to pay attention to the success of Celebration’s system. A failure to ask for input or to inform citizens shows a clear lack of concern around consent.

Which in turn implies that our governments aren’t keen on respect and meeting people where they are at. Even if that’s true for governments, is that the message that tech companies want to send to customers?

This article appears in the July 2020 print issue as “The IoT’s Consent Problem.”

Tracking COVID-19 With the IoT May Put Your Privacy at Risk



The Internet of Things makes the invisible visible. That’s the IoT’s greatest feature, but also its biggest potential drawback. More sensors on more people means the IoT becomes a visible web of human connections that we can use to, say, track down a virus.

Track-and-trace programs are already being used to monitor outbreaks of COVID-19 and its spread. But because such programs enable mass surveillance so easily, we need to put rules in place governing any attempt to track people's movements.

In April, Google and Apple said they would work together to build an opt-in program for Android or iOS users. The program would use their phones’ Bluetooth connection to deliver exposure notifications—meaning that transmissions are tracked by who comes into contact with whom, rather than where people spend their time. Other proposals use location data provided by phone applications to determine where people are traveling.

All of these ideas have slightly different approaches, but at their core they’re still tracking programs. Any such program that we implement to track the spread of COVID-19 should follow some basic guidelines to ensure that the data is used only for public health research. This data should not be used for marketing, commercial gain, or law enforcement. It shouldn’t even be used for research outside of public health.

Let’s talk about the limits we should place around this data. A tracking program for COVID-19 should be implemented only for a prespecified duration that’s associated with a public health goal (like reducing the spread of the virus). So, if we’re going to collect device data and do so without requiring a user to opt in, governments need to enact legislation that explains what the tracking methodology is, requires an audit for accuracy and efficacy by a third party, and sets a predefined end.

Ethical data collection is also critical. Apple and Google’s Bluetooth method uses encrypted tokens to track people as they pass other people. The Bluetooth data is people-centric, not location-centric. Once a person uploads a confirmation that they’ve been infected, their device can issue notifications to other devices that were recently nearby, alerting users—anonymously—that they may have come in contact with someone who’s infected.

This is good. And while it might be possible to match a person to a device, it would be difficult. Ultimately, linking cases anonymously to devices is safer than simply collecting location data on infected individuals. The latter makes it easy to identify people based on where they sleep at night and work during the day, for example.
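The privacy structure of that token exchange can be captured in a few lines. This is a toy model, not the actual Apple–Google protocol (which derives rotating identifiers cryptographically), but it shows why matching on-device keeps identities out of the picture:

```python
import secrets

def new_token():
    """A rotating, random broadcast token, unlinkable to any identity."""
    return secrets.token_hex(16)

class Phone:
    """Toy model of an exposure-notification participant."""
    def __init__(self):
        self.sent = []   # tokens this phone has broadcast
        self.heard = []  # tokens observed from nearby phones

    def broadcast(self):
        token = new_token()
        self.sent.append(token)
        return token

    def hear(self, token):
        self.heard.append(token)

    def check_exposure(self, published_infected_tokens):
        """Matching happens on-device; no location or identity leaves the phone."""
        return any(t in self.heard for t in published_infected_tokens)

alice, bob = Phone(), Phone()
bob.hear(alice.broadcast())        # the two pass each other on the street
infected_tokens = alice.sent       # Alice tests positive and uploads her tokens
```

Bob's phone learns only that one of the tokens it heard belongs to someone infected, never who or where. That's the whole point of a people-centric design.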

Going further, this data must be encrypted on the device, in transit, and when stored on a cloud or government server, so that hackers can't access it. Only the agency in charge of track-and-trace efforts should have access to the data from the device. This means that police departments, immigration agencies, or private companies can't access that data. Ever.

However, researchers should have access to some form of the data after a few years have passed. I don’t know what that time limit should be, but when that time comes, institutional review boards, like those that academic institutions use to protect human research subjects, should be in place to evaluate each request for what could be highly personal data.

If we can get this right, we can use the lessons learned during COVID-19 not only to protect public health but also to promote a more privacy-centric approach to the Internet of Things.

This article appears in the June 2020 print issue as “Pandemic vs. Privacy.”

COVID-19 Makes It Clear That Broadband Access Is a Human Right


Like clean water and electricity, broadband access has become a modern-day necessity. The spread of COVID-19 and the ensuing closure of schools and workplaces and even the need for remote diagnostics make this seem like a new imperative, but the idea is over a decade old. Broadband is a fundamental human right, essential in times like now, but just as essential when the world isn’t in chaos.

A decade ago, Finland declared broadband a legal right. In 2011, the United Nations issued a report [PDF] with a similar conclusion. At the time, the United States was also debating its broadband policy and a series of policy efforts that would ensure everyone had access to broadband. But decisions made by the Federal Communications Commission between 2008 and 2012 pertaining to broadband mapping, network neutrality, data caps, and the very definition of broadband are now coming back to haunt the United States as cities lock themselves down to flatten the curve on COVID-19.

While some have voiced concerns about whether the strain of everyone working remotely might break the Internet, the bigger issue is that not everyone has Internet access in the first place. Most U.S. residential networks are built for peak demand, and even the 20 to 40 percent increase in network traffic seen in locations hard-hit by the virus won't be enough to make networks buckle.

An estimated 21 to 42 million people in the United States don't have physical access to broadband, and even more cannot afford it or are reliant on mobile plans with data limits. For a significant portion of our population, this makes remote schooling and work prohibitively expensive at best and simply not an option at worst. This number hasn't budged significantly in the last decade, and it's not just a problem for the United States. In Hungary, Spain, and New Zealand, a similar percentage of households also lack a broadband subscription, according to data from the Organization for Economic Co-operation and Development.

Faced with the ongoing COVID-19 outbreak, Internet service providers in the United States have already taken several steps to expand broadband access. Comcast, for example, has made its public Wi-Fi network available to anyone. The company has also expanded its Internet Essentials program—which provides a US $9.95 monthly connection and a subsidized laptop—to a larger number of people on some form of government assistance.

For those who already have access but are now facing financial uncertainty, AT&T, Comcast, and more than 200 other U.S. ISPs have pledged not to cut off subscribers who can't pay their bills and not to charge late fees, as part of an FCC plan called Keep Americans Connected. Additionally, AT&T, Comcast, and Verizon have promised to eliminate data caps for the near future, so customers don't have to worry about blowing past a data limit while learning and working remotely.

It’s good to keep people connected during quarantines and social distancing, but going forward, some of these changes should become permanent. It’s not enough to say that broadband is a basic necessity; we have to push for policies that ensure companies treat it that way.

“If it wasn’t clear before this crisis, it is crystal clear now that broadband is a necessity for every aspect of modern civic and commercial life. U.S. policymakers need to treat it that way,” FCC Commissioner Jessica Rosenworcel says. “We should applaud public spirited efforts from our companies, but we shouldn’t stop there.” 

This article appears in the May 2020 print issue as “We All Deserve Broadband.”

It’s Too Late to Undo Climate Change. We Need Tech in Order to Adapt


On the CES floor in Las Vegas this past January, I saw dozens of companies showing off products designed to help us adapt to climate change. It was an unsettling reminder that we’ve tipped the balance on global warming and that hotter temperatures, wildfires, and floods are the new reality.

Based on our current carbon dioxide emissions, we can expect warming of up to 1.5 °C by 2033. Even if we stopped spewing carbon today, temperatures would continue to rise for a time, and weather would grow still more erratic.

The companies at CES recognize that it's too late to stop climate change. Faced with that realization, this group of entrepreneurs is focusing on climate adaptation. For them, the goal is to make sure that people and the global economy will still survive across as much of the world as possible. These entrepreneurs' companies are developing practical products, such as garments that adapt to the weather or new building materials with higher melting points so that roads won't crack in extreme temperatures.

One of the biggest risks in a warming world is that both outdoor workers and their equipment will overheat more often. Scientists expect to see humans migrate from parts of the world where temperatures and humidity combine to repeatedly create heat indexes of 40.6 °C, because beyond that temperature humans have a hard time surviving [PDF]. But even in more temperate locations, the growing number of hotter days will also make it tough for outdoor workers.
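That 40.6 °C figure is a heat index, a measure that folds humidity into temperature. The U.S. National Weather Service approximates it with the Rothfusz regression (valid roughly when the result exceeds 80 °F; the full NWS procedure adds small corrective terms omitted here):

```python
def heat_index_f(temp_f, rel_humidity_pct):
    """Approximate heat index via the NWS Rothfusz regression.

    Inputs in degrees Fahrenheit and percent relative humidity.
    Valid roughly when the result is above 80 F; the full NWS
    procedure applies small corrections that are omitted here.
    """
    T, R = temp_f, rel_humidity_pct
    return (-42.379 + 2.04901523 * T + 10.14333127 * R
            - 0.22475541 * T * R - 6.83783e-3 * T * T
            - 5.481717e-2 * R * R + 1.22874e-3 * T * T * R
            + 8.5282e-4 * T * R * R - 1.99e-6 * T * T * R * R)

def f_to_c(f):
    """Convert Fahrenheit to Celsius."""
    return (f - 32) * 5 / 9

# A 96 F (35.6 C) day at 65 percent humidity feels like roughly 121 F,
# well past the 40.6 C threshold the text describes.
feels_like_c = f_to_c(heat_index_f(96, 65))
```

The nonlinearity is the point: a few degrees of warming, combined with high humidity, pushes the heat index past survivable limits much faster than the thermometer alone suggests.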

Embr Labs is building a bracelet that the company says can lower a person’s perceived temperature a few degrees simply by changing the temperature on their wrist. The bracelet doesn’t change actual body temperature, so it can’t help outdoor workers avoid risk on a sweltering day. But it could still be used to keep workers cooler on safe yet still uncomfortably warm days. It might also allow companies to raise their indoor temperatures, saving on air-conditioning costs.

Elsewhere, Epicore Biosystems is building wearable microfluidic sensors that monitor people for dehydration or high body temperatures. The Epicore sensors are already being used for athletes. But it’s not hard to imagine that in the near future there’d be a market for putting them on construction, farm, and warehouse workers who have to perform outside jobs in hot weather.

Extreme temperatures—and extreme fluctuations between temperatures—are also terrible for our existing road and rail infrastructure. Companies such as RailPod, as well as universities, are building AI-powered drones and robots that can monitor miles of roadway or rail track and send back data on needed repairs.

And then there’s flooding. Coastal roads and roads near rivers will need to withstand king tides, flash floods, and sustained floodwaters. Pavement engineers are working on porous concrete to mitigate flood damage and on embedded sensors to communicate a road’s status in real time to transportation officials.

There are so many uncertainties about our warming planet, but what isn't in doubt is that climate change will damage our infrastructure and disrupt our patterns of work. Plenty of companies are focused on the admirable goal of preventing further warming, but we also need to pay attention to the companies that can help us adapt. A warmer planet is already here.

This article appears in the April 2020 print issue as “Tech for a Warming World.”

Network Slicing Is 5G’s Hottest Feature


The hottest feature of 5G isn’t discussed very much. When people talk about 5G they tend to discuss the gigabit speeds or the lower latencies. But it’s network slicing, the ability to partition off segments of the 5G network with specific latency, bandwidth, and quality-of-service guarantees, that could change the underlying economics of cellular service. Network slicing could lead to new companies that provide connectivity and help offset the capital costs associated with deploying 5G networks.

How? Instead of selling data on a per-gigabyte basis, these companies could sell wireless connectivity with specific parameters. A manufacturing facility, for example, may prioritize low latency so that its robots operate as precisely as possible. A hospital may want not only low latency but also dedicated bandwidth for telemedicine, to ensure that signals aren’t lost at an inopportune moment.

Today, if a hospital or factory wants a dedicated wireless network with specific requirements, a telco has to custom-engineer it. But with network slicing, the telco can instead use software to allocate slices without human involvement. This would reduce the operating costs of a 5G network. That ease and flexibility, combined with the ability to price the network for different capabilities, will be what helps carriers justify the capital costs of deploying 5G, says Paul Challoner, the vice president of network product solutions for Ericsson North America.

Challoner envisions that soon customers will be able to go to a telco’s website and define what they want, get the pricing for it, and then use the network slice for however long they need. He sees 2020 as being the year that equipment companies like Ericsson “race to the slice,” trying to show wireless carriers what they can do.
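What might such a self-service order look like? Here's a hypothetical sketch: the parameters a customer could specify and a toy pricing model in which tighter guarantees cost more. The field names and formula are illustrative, not any carrier's actual offering:

```python
from dataclasses import dataclass

@dataclass
class SliceRequest:
    """Hypothetical parameters a customer might specify for a slice."""
    max_latency_ms: float          # worst-case round-trip latency
    guaranteed_bandwidth_mbps: float
    reliability_pct: float         # e.g., 99.999 for "five nines"
    duration_hours: int

def price_quote(req):
    """Toy pricing model: tighter guarantees cost more (illustrative only)."""
    base = 10.0
    latency_premium = 100.0 / req.max_latency_ms   # lower latency, higher fee
    bandwidth_fee = 0.05 * req.guaranteed_bandwidth_mbps
    return round((base + latency_premium + bandwidth_fee) * req.duration_hours, 2)

# A factory floor needs tight latency; a sensor fleet can relax it.
factory = SliceRequest(max_latency_ms=5, guaranteed_bandwidth_mbps=200,
                       reliability_pct=99.999, duration_hours=24)
sensors = SliceRequest(max_latency_ms=50, guaranteed_bandwidth_mbps=10,
                       reliability_pct=99.9, duration_hours=24)
```

The economic shift is the interesting part: connectivity priced by its guarantees rather than by the gigabyte.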

Mobile-tech consultant Chetan Sharma thinks network slicing will likely take a year or two longer to hit the mainstream. But he also sees it as a catalyst for new companies that will enter the market to resell connectivity for dedicated use cases. For example, companies like Twilio or Particle, which already resell network connectivity to clients, could bring together slices from different carriers to offer a global service with specific characteristics. A company like BMW could then use that service when it wants to roll out a software update at a specific time to all of its vehicles—and to ensure that the update makes it through.

Or maybe Amazon or Microsoft Azure could offer an industrial IoT product to factories that have specific latency requirements, by bundling together wireless connectivity from multiple carriers. A few years back, the telecom industry was debating whether carriers were becoming a dumb pipe. Sharma thinks the ability to customize speed, latency, and quality of service means 5G will put an end to that particular debate.

That said, carriers charging customers based on the capabilities they need does mean that some people will bring up concerns around network neutrality and how to ensure that customers aren’t charged an arm and a leg for a decent best-effort service.

“It’s uncharted territory,” says Sharma. “When the FCC was looking at [network neutrality] they didn’t consider network slicing as part of the equation. So my view is that they will have to update what operators are allowed to do with network slicing. We’ll need more clarity on the ruling.”

This article appears in the March 2020 print issue as “What 5G Hype Gets Wrong.”

The Long Goodbye of Wi-Fi Has Begun


In ten years, we won’t need Wi-Fi.

At least, that’s what Azhar Hussain, the CEO of IoT company Hanhaa, told me on a phone call late last year. He thinks the end of Wi-Fi is nigh because he believes that allocating spectrum in smaller chunks will let municipalities, universities, and companies create private 5G cellular networks. The convenience of those networks will impel companies to choose cellular connections over Wi-Fi for their IoT devices.

There’s reason to think Hussain is right, at least for higher-value devices, such as medical devices, home appliances, and outdoor gear like pool-cleaning robots. Zach Supalla, the CEO of Particle, a company that supplies IoT components to businesses with little experience building connected products, says more than half of the IoT devices in Particle’s cloud that use cellular connections are also within range of a Wi-Fi network. Supalla says that companies choose cellular modules over Wi-Fi because the modules are easier to set up and businesses can better control the consumer experience.

Wi-Fi devices are notoriously difficult to connect to one another, or pair. To get a connected product on their home Wi-Fi network, consumers must often pair with a software-based access point before switching the device over to their own network.

This process can be fraught with errors. Even I, a reporter who has tested hundreds of connected devices, fail to get a device on my network on the first try roughly a third of the time. To make it easier, Amazon and Google have both created proprietary onboarding processes that handle the setup on behalf of the user, so that when consumers power their devices on, they automatically try to join their network.

However, device manufacturers still have to implement both Amazon’s and Google’s programs separately, and that requires know-how that some companies don’t possess. Thankfully, Amazon, Apple, and Google are now working on a smart-home standard that may simplify things. But the details are scant, and any solution they develop won’t be available until 2021 at the earliest.

When you’re faced with multiple Wi-Fi ecosystems, cellular is just easier, Hussain says. Cellular networks cost more now because you have to install radios on the devices and pay a subscription to use the cellular network. Hussain sees those costs coming down, potentially even disappearing, given time.

That’s because he’s anticipating a future where universities, businesses, and municipalities set up their own cellular networks using spectrum obtained through new spectrum auctions, such as the Citizens Broadband Radio fServices (CBRS) auctions occurring in the United States in June. Cellular equipment makers are already building gear and testing these private networks in factories and offices. If new roaming plans are developed to allow devices to come onto these local networks easily, similar to joining a Wi-Fi hotspot, cellular connectivity will become practically free.

Even if Hussain’s vision doesn’t come to pass in the next 10 years, the costs of low-data-rate cellular contracts will continue to drop, and that could still eventually put the nail in the coffin for Wi-Fi. And I mostly agree: I think there are plenty of reasons to believe that Wi-Fi will never disappear entirely, but I do think small cellular networks will take its place in our lives.

This article appears in the February 2020 print issue as “Wi-Fi’s Long Goodbye.”

Engineers Are Pushing Machine Learning to the World's Humblest Microprocessors

Post Syndicated from Stacey Higginbotham original

In February, a group of researchers from Google, Microsoft, Qualcomm, Samsung, and half a dozen universities will gather in San Jose, Calif., to discuss the challenge of bringing machine learning to the farthest edge of the network, specifically microprocessors running on sensors or other battery-powered devices.

The event is called the Tiny ML Summit (ML for "machine learning"), and its goal is to figure out how to run machine-learning algorithms on the tiniest microprocessors out there. Machine learning at the edge promises better privacy practices, lower energy consumption, and novel applications in future generations of devices.

As a refresher, at its core machine learning is the training of a neural network. Such training requires a ton of data manipulation. The end result is a model that is designed to complete a task, whether that’s playing Go or responding to a spoken command.

Many companies are currently focused on building specialized silicon for machine learning in order to train networks inside data centers. They also want silicon for conducting inference—running new data through a trained model to generate predictions—at the edge. But the goal of the Tiny ML community is to take inference to the smallest processors out there, like the 8-bit microcontrollers that power remote sensors.
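One of the main tricks that lets inference reach such small chips is quantization: shrinking a model's weights from 32-bit floats to 8-bit integers. Here's a hand-rolled sketch of the idea; real frameworks such as TensorFlow Lite do this with far more care (per-channel scales, calibration data, and so on):

```python
# Toy post-training quantization: map float weights onto int8 with a
# single scale factor, cutting storage from 4 bytes per weight to 1.

def quantize(weights):
    """Map float weights onto the int8 range [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.0, 0.9]
q, scale = quantize(weights)
print(q)                        # [42, -127, 0, 90]
print(dequantize(q, scale))     # close to the original floats
```

The round trip isn't lossless, but for many models the accuracy cost is small compared with a 4x memory saving, which is often the difference between fitting on a microcontroller and not.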

To be clear, there’s already been a lot of progress in bringing inference to the edge if we’re talking about something like a smartphone. In November 2019, Google open-sourced two versions of its machine learning algorithms, one of which required 50 percent less power to run, and the other of which performed twice as fast as previous versions of the algorithm. There are also several startups such as Flex Logix, Greenwaves, and Syntiant tackling similar challenges using dedicated silicon.

But the Tiny ML community has different goals. Imagine including a machine learning model that can separate a conversation from background noise on a hearing aid. If you can’t fit that model on the device itself, then you need to maintain a wireless connection to the cloud where the model is running. It’s more efficient, and more secure, to run the model directly on the hearing aid—if you can fit it.

Tiny ML researchers are also experimenting with better data classification by using ML on battery-powered edge devices. Jags Kandasamy, CEO of Latent AI, which is developing software to compress neural networks for tiny processors, says his company is in talks with companies that are building augmented-reality and virtual-reality headsets. These companies want to take the massive amounts of image data their headsets gather and classify the images seen on the device so that they send only useful data up to the cloud for later training. For example, “If you’ve already seen 10 Toyota Corollas, do they all need to get transferred to the cloud?” Kandasamy asks.
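The upload-filtering idea behind Kandasamy's Corolla example is simple to sketch. The cap and labels below are invented for illustration:

```python
# Illustrative "send only novel data" filter: classify on-device, and
# upload a frame for cloud training only until enough examples of that
# class have already been seen.

def filter_for_upload(labels, per_class_cap=10):
    """Return the indices of classified frames worth sending to the cloud."""
    seen = {}
    keep = []
    for i, label in enumerate(labels):
        count = seen.get(label, 0)
        if count < per_class_cap:
            keep.append(i)
        seen[label] = count + 1
    return keep

# 12 sightings of a Corolla and 1 truck: only the first 10 Corollas
# and the truck go up, saving bandwidth on the duplicates.
frames = ["corolla"] * 12 + ["truck"]
print(len(filter_for_upload(frames)))  # 11
```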

On-device classification could be a game changer in reducing the amount of data gathered and input into the cloud, which saves on bandwidth and electricity. Which is good, as machine learning typically requires a lot of electricity.

There’s plenty of focus on the “bigger is better” approach when it comes to machine learning, but I’m excited about the opportunities to bring machine learning to the farthest edge. And while Tiny ML is still focused on the inference challenge, maybe someday we can even think about training the networks themselves on the edge.

This article appears in the January 2020 print issue as “Machine Learning on the Edge.”

Hey, Data Scientists: Show Your Machine-Learning Work

Post Syndicated from Stacey Higginbotham original

In the last two years, the U.S. Food and Drug Administration has approved several machine-learning models to accomplish tasks such as classifying skin cancer and detecting pulmonary embolisms. But for the companies that built those models, what happens if the data scientist who wrote the algorithms leaves the organization?

In many businesses, an individual or a small group of data scientists is responsible for building essential machine-learning models. Historically, these scientists have developed the models on their own laptops through trial and error, passing a model along to production once it works. But in that handoff, the data scientist might not think to include all the information about the model's development. And if the data scientist leaves, that information is lost for good.

That potential loss of information is why experts in data science are calling for machine learning to become a formal, documented process overseen by more people inside an organization.

Companies need to think about what could happen if their data scientists take new jobs, or if a government organization or an important customer asks to see an audit of the algorithm to ensure it is fair and accurate. Not knowing what data was used to train the model and how the data was weighted could lead to a loss of business, bad press, and perhaps regulatory scrutiny, if the model turns out to be biased.

David Aronchick, the head of open-source machine-learning strategy at Microsoft Azure, says companies are realizing that they must run their machine-learning efforts the same way they run their software-development practices. That means encouraging documentation and codevelopment as much as possible.

Microsoft has some ideas about what the documentation process should look like. The process starts with the researcher structuring and organizing the raw data and annotating it appropriately. Not having a documented process at this stage could lead to poorly annotated data that has biases associated with it or is unrelated to the problem the business wants to solve.

Next, during training, a researcher feeds the data to a neural network and tweaks how it weighs various factors to get the desired result. Typically, the researcher is still working alone at this point, but other people should get involved to see how the model is being developed—just in case questions come up later during a compliance review or even a lawsuit.

A neural network is a black box when it comes to understanding how it makes its decisions, but the data, the number of layers, and how the network weights different parameters shouldn't be mysterious. The researchers should be able to tell at a glance how the data was structured and weighted.
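One lightweight way to capture that provenance is a structured model record that travels with the model. The schema below is a hypothetical sketch, not any particular company's format:

```python
# A minimal, hypothetical "model record": enough provenance that someone
# other than the original data scientist can audit or retrain the model.

from dataclasses import dataclass, field, asdict

@dataclass
class ModelRecord:
    name: str
    training_data: str          # where the data came from, how it was annotated
    feature_weights: dict       # how inputs were structured and weighted
    layers: int
    reviewed_by: list = field(default_factory=list)

record = ModelRecord(
    name="holiday-spend-v1",
    training_data="Dec 2018 transactions, hand-labeled by merchandising team",
    feature_weights={"recency": 0.5, "basket_size": 0.3, "category": 0.2},
    layers=4,
    reviewed_by=["data-science", "compliance"],
)
print(asdict(record)["name"])  # holiday-spend-v1
```

Serialized alongside the model artifact, a record like this is what makes the "tweak a few parameters for a new holiday" scenario possible instead of a full rebuild.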

It’s also at this point where having good documentation can help make a model more flexible for future use. For example, a shopping site’s model that crunched data specifically for Christmas spending patterns can’t apply that same model to Valentine’s Day spending. Without good documentation, a data scientist would have to essentially rebuild the model, rather than going back and tweaking a few parameters to adjust it for a new holiday.

The last step in the process is actually deploying the model. Historically, only at this point would other people get involved and acquaint themselves with the data scientist’s hard work. Without good documentation, they’re sure to get headaches trying to make sense of it. But now that data is so essential to so many businesses—not to mention the need to adapt quickly—it’s time for companies to build machine-learning processes that rival the quality of their software-development processes.

This article appears in the December 2019 print issue as “Show Your Machine-Learning Work.”

Let’s Build Robots That Are as Smart as Babies

Post Syndicated from Stacey Higginbotham original

Let’s face it: Robots are dumb. At best they are idiot savants, capable of doing one thing really well. In general, even those robots require specialized environments in which to do their one thing really well. This is why autonomous cars or robots for home health care are so difficult to build. They’ll need to react to an uncountable number of situations, and they’ll need a generalized understanding of the world in order to navigate them all.

Babies as young as two months already understand that an unsupported object will fall, while five-month-old babies know materials like sand and water will pour from a container rather than plop out as a single chunk. Robots lack these understandings, which hinders them as they try to navigate the world without a prescribed task and movement.

But we could see robots with a generalized understanding of the world (and the processing power required to wield it), thanks to the video-game industry. Researchers are bringing physics engines—the software that provides real-time physical interactions in complex video-game worlds—to robotics. The goal is to give robots the physical intuition they need to learn about the world the same way babies do.
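At its core, a physics engine just integrates forces over small time steps. This toy loop shows the kind of prediction it hands a robot: a baby's "unsupported objects fall" intuition in numerical form. Real engines such as Bullet and PhysX add collisions, friction, and joints on top of this.

```python
# Toy illustration of a physics engine's core integration loop:
# a numerical prediction that an unsupported object falls.

G = -9.81  # gravitational acceleration, m/s^2

def simulate_drop(height, dt=0.001):
    """Semi-implicit Euler integration of an object dropped from `height` (m)."""
    y, v, t = height, 0.0, 0.0
    while y > 0.0:
        v += G * dt        # update velocity from gravity...
        y += v * dt        # ...then position from the new velocity
        t += dt
    return t

# An object 1 m above the table hits it in about sqrt(2h/g) = 0.45 s.
print(round(simulate_drop(1.0), 2))
```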

Giving robots a baby’s sense of physics helps them navigate the real world and can even save on computing power, according to Lochlainn Wilson, the CEO of SE4, a Japanese company building robots that could operate on Mars. SE4 plans to avoid the problems of latency caused by distance from Earth to Mars by building robots that can operate independently for a few hours before receiving more instructions from Earth.

Wilson says that his company uses simple physics engines such as PhysX to help build more-independent robots. He adds that if you can tie a physics engine to a coprocessor on the robot, the real-time basic physics intuitions won’t take compute cycles away from the robot’s primary processor, which will often be focused on a more complicated task.

Wilson’s firm occasionally still turns to a traditional graphics engine, such as Unity or the Unreal Engine, to handle the demands of a robot’s movement. In certain cases, however, such as a robot accounting for friction or understanding force, you really need a robust physics engine, Wilson says, not a graphics engine that simply simulates a virtual environment. For his projects, he often turns to the open-source Bullet Physics engine built by Erwin Coumans, who is now an employee at Google.

Bullet is a popular physics-engine option, but it isn’t the only one out there. Nvidia Corp., for example, has realized that its gaming and physics engines are well-placed to handle the computing demands required by robots. In a lab in Seattle, Nvidia is working with teams from the University of Washington to build kitchen robots, fully articulated robot hands and more, all equipped with Nvidia’s tech.

When I visited the lab, I watched a robot arm move boxes of food from counters to cabinets. That’s fairly straightforward, but that same robot arm could avoid my body if I got in its way, and it could adapt if I moved a box of food or dropped it onto the floor.

The robot could also understand that less pressure is needed to grasp something like a cardboard box of Cheez-It crackers versus something more durable like an aluminum can of tomato soup.

Nvidia’s silicon has already helped advance the fields of artificial intelligence and computer vision by making it possible to process multiple decisions in parallel. It’s possible that the company’s new focus on virtual worlds will help advance the field of robotics and teach robots to think like babies.

This article appears in the November 2019 print issue as “Robots as Smart as Babies.”

Where’s My Stuff? Now, Bluetooth and Ultrawideband Can Tell You

Post Syndicated from Stacey Higginbotham original

We all lose things. Think about how much time you’ve spent searching for your keys or your wallet. Now imagine how much time big companies spend searching for lost items. In a hospital, for example, the quest for a crash cart can slow a response team during an emergency, while on a construction site, the hunt for the right tool can lead to escalating delays.

According to a recent study funded by Microsoft, roughly a third of companies that have adopted the Internet of Things use it to track their stuff. Quality location data matters for more than finding misplaced tools; robots in manufacturing and autonomous vehicles also need it to spot nearby humans and avoid them.

The growing interest in locating things is reflected in updated wireless standards. The Bluetooth Special Interest Group estimates that with the updated 5.1 standard, the wireless technology can now locate devices to within a few inches. Elsewhere, Texas Instruments has built a radar chip using 60-gigahertz signals that can help robots “see” where things are in a factory by bouncing radio waves off its surroundings.
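Bluetooth 5.1's direction finding rests on comparing a signal's phase across an antenna array: a transmission arriving off-axis reaches each antenna at a slightly different point in its cycle. A simplified version of the math, assuming just two antennas and ideal conditions:

```python
# Simplified angle-of-arrival estimate for a two-antenna array.
# The path difference between antennas is spacing * sin(theta), which
# shows up as a phase difference of 2*pi * path_difference / wavelength.

import math

def angle_of_arrival(phase_diff, antenna_spacing, wavelength):
    """Estimate arrival angle (radians) from the measured phase delta."""
    return math.asin(phase_diff * wavelength / (2 * math.pi * antenna_spacing))

# With antennas half a wavelength apart and a 90-degree phase shift,
# the transmitter sits 30 degrees off the array's broadside.
wavelength = 0.125  # roughly the 2.4-GHz band
theta = angle_of_arrival(math.pi / 2, wavelength / 2, wavelength)
print(round(math.degrees(theta)))  # 30
```

Combine angle estimates from two or more fixed locators and you can triangulate a tag's position, which is how the standard gets down to a few inches under good conditions.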

But for me, the real excitement is in a newcomer to the scene. In August, NXP, Bosch, Samsung, and access company Assa Abloy launched the FiRa Consortium to handle location tracking using ultrawideband radios (FiRa stands for “fine-ranging”). This isn’t the ultrawideband of almost 20 years ago, which offered superfast wireless data transfers over short distances much like Wi-Fi does today. FiRa uses a wide band of spectrum in the 6- to 9-GHz range and relies on the new IEEE 802.15.4z standard. The base standard is used for other IoT network technologies, including Zigbee, Wi-SUN, 6LoWPAN, and Thread radios, but the z formulation is designed specifically for securely ascertaining the location of a device.

FiRa delivers location data based on a time-of-flight measurement—the time it takes a quick signal pulse to make a round trip to the device. This is different from Bluetooth's method, which opens a connection between radios and then broadcasts the location. Charles Dachs, vice chair of the FiRa Consortium and vice president of mobile transactions at NXP, says FiRa's pulselike data transmissions allow location data to be gleaned for items within 100 to 200 meters of a node without sucking up a lot of power. Time-of-flight measurements also allow for additional security: They make it harder to spoof a location, and they're accurate enough to confirm that a person is right there, not a few meters away. And because the radio transmissions aren't constant, hundreds of devices can ping a node without overwhelming it. By comparison, Bluetooth nodes can handle only about 50 devices.
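The time-of-flight arithmetic itself is simple enough to sketch. In two-way ranging, the anchor timestamps its ping and the tag's reply, subtracts the tag's known turnaround delay, and converts what's left to distance; the delay value below is an invented example:

```python
# Back-of-the-envelope two-way ranging, the measurement FiRa builds on.

C = 299_792_458  # speed of light, m/s

def distance_from_tof(t_round, t_reply):
    """One-way flight time is half the round trip, net of the tag's delay."""
    return C * (t_round - t_reply) / 2

# A tag 10 m away adds about 66.7 ns of flight time on top of its
# fixed 2-microsecond turnaround delay (example value).
t_round = 2.06671e-6   # measured at the anchor, seconds
t_reply = 2.0e-6       # tag's known processing delay, seconds
print(round(distance_from_tof(t_round, t_reply), 1))  # 10.0
```

The nanosecond timescales are why spoofing is hard: an attacker relaying the signal from farther away can only add flight time, never subtract it, so a faked position always looks farther from the anchor, not closer.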

FiRa’s location-tracking feature is likely to be the application that entices many companies to adopt the standard, but it can do more. The consortium also hopes that automotive companies will use it for securely unlocking car doors or front doors wirelessly. However, there is a downside: Widespread FiRa use for locks would require either a separate fob or new radios on our smartphones.

I think it’s far more likely that FiRa will find its future in enterprise and industrial asset tracking. Historically, Bluetooth has struggled in this space because of the limited number of connections that can be made. Other radios have been a bit too niche, or not well designed for enterprise use. As for location tracking for us consumers? Apple and Google are both betting on Bluetooth, so that’s where I’d place my bets, too.

This article appears in the October 2019 print issue as "Where's My Stuff?"

Remaking the World for Robots

Post Syndicated from Stacey Higginbotham original

Over time, we will design physical spaces to accommodate robots and augmented reality

Every time I’m in a car in Europe and bumping along a narrow, cobblestone street, I am reminded that our physical buildings and infrastructure don’t always keep up with our technology. Whether we’re talking about cobblestone roads or the lack of Ethernet cables in the walls of old buildings, much of our established architecture stays the same while technology moves forward.

But embracing augmented reality, autonomous vehicles, and robots gives us new incentives to redevelop our physical environments. To really get the best experience from these technologies, we’ll have to create what Carla Diana, an established industrial designer and author, calls the “robot-readable world.”

Diana works with several businesses that make connected devices and robots. One such company is Diligent Robotics, of Austin, Texas, which is building Moxi, a one-handed robot designed for hospitals. Moxi will help nurses and orderlies by taking on routine tasks, such as fetching supplies and lab results, that don’t require patient interaction. However, many hospitals weren’t designed with rolling robots with pinchers for hands in mind.

Moxi can’t open every kind of door or use the stairs, so its usefulness is limited in the average hospital. For now, Diligent sends a human helper for Moxi during test runs. But the company’s thinking is that if hospitals see the value in an assistive robot, they might change their door handles and organize supplies around ramps, not stairs. The bonus is that these changes would make hospitals more accessible to the elderly and those with disabilities.

This design philosophy doesn’t have to be limited to the hospital, however. Autonomous cars will likely need road signs that are different from the ones we’ve grown accustomed to. Current road signs are easily read by humans, but they could be vandalized so as to trick autonomous vehicles into interpreting them incorrectly. Delivery drones will need markers to navigate as well as places to land, if Amazon wants to get serious about delivering packages this way.

Google has already developed one solution. Back in 2014, the company invented plus codes. These are short codes for places that don’t traditionally have street names and numbers, such as a residence in a São Paulo favela or a point along an oil pipeline. These codes are readable by humans and machines, thus making the world a little more bot friendly.
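The idea behind plus codes is easy to sketch: each pair of base-20 digits narrows the location by another factor of 20 per axis. The simplified encoder below ignores the real Open Location Code spec's code shortening and its finer refinement beyond ten digits:

```python
# Simplified plus-code (Open Location Code) encoder. Latitude and
# longitude are repeatedly divided into a 20x20 grid, emitting one
# base-20 digit per axis per level.

ALPHABET = "23456789CFGHJMPQRVWX"  # the spec's confusion-resistant digit set

def encode(lat, lng, pairs=5):
    lat, lng = lat + 90.0, lng + 180.0   # shift into positive ranges
    code, resolution = [], 20.0
    for _ in range(pairs):
        lat_digit = int(lat / resolution)
        lng_digit = int(lng / resolution)
        code.append(ALPHABET[lat_digit])
        code.append(ALPHABET[lng_digit])
        lat -= lat_digit * resolution
        lng -= lng_digit * resolution
        resolution /= 20.0
    code.insert(8, "+")                  # the "plus" separator
    return "".join(code)

# Ten digits pin a point down to roughly a 14-meter cell:
print(encode(47.365590, 8.524997))  # 8FVC9G8F+6X
```

Because the code is just a deterministic function of coordinates, both a human and a delivery drone can resolve it without a street-address database.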

Augmented reality (AR) also stands to benefit from this new design philosophy. Mark Rolston is the founder and chief creative officer of ArgoDesign, a company that helps tech companies design their products. Rolston has found that bringing AR—such as Magic Leap’s head-mounted virtual retinal display—into offices and homes can be tough, depending on the environment. For example, the Magic Leap reads glass walls as blank space, which results in AR images that are too faint to show up on the surface.

AR also struggles with white or dark walls. Rolston says the ideal wall is painted a light gray and has curved edges rather than sharp corners. While he doesn’t expect every room in an office or home to follow these guidelines, he does think we’ll start seeing a shift in design to accommodate AR needs.

In other words, we’ll still see the occasional cobblestone street and white wall, but more and more we’ll see our physical structures accommodate our tech-focused society.

The Factories of the Future Will Be Fast, Flexible, and Free of Wires

Post Syndicated from Stacey Higginbotham original

AI, 5G, and the IoT will allow factories to produce new goods on the fly

The future of manufacturing is software defined. You need look no further than ABB to understand why companies are turning to 5G networks, artificial intelligence, and computer vision. The Swiss company is using these new tools to boost reliability and agility in its nearly 300 factories around the world, which produce a host of goods, from simple plastic zip ties to complex robotic arms.

For ABB and other companies pushing software-defined networking, it’s all about being safer while adapting to a growing clamor for personalized products.

When it comes to safety, adding more sensors to machines and deploying AI can make the end product more consistently reliable. At its Heidelberg factory, for example, ABB makes circuit breakers. But even with 99.999 percent reliability at ABB’s factory, faulty circuit breakers would still kill 3,000 people a year, according to Guido Jouret, the company’s chief digital officer.
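The arithmetic behind Jouret's point is worth making explicit. The column doesn't give a production volume, so the figure below is an assumed one, chosen to match the quoted statistic:

```python
# Why "five nines" still isn't enough at scale: even a 0.001 percent
# failure rate leaves a large absolute number of faulty units.

def faulty_units(units_in_service, reliability):
    """Expected number of faulty units at a given reliability level."""
    return units_in_service * (1 - reliability)

# Matching the quoted 3,000 deaths a year at 99.999 percent reliability
# implies roughly 300 million breakers in service -- an assumption made
# here purely to illustrate the scale, not an ABB figure.
print(round(faulty_units(300_000_000, 0.99999)))  # 3000
```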

People can’t achieve 100 percent reliability when making and inspecting the completed circuit breakers—but a camera with machine learning can. When that camera detects any sort of variation, factory managers can go back to the machine to figure out what’s causing that defect.

Boosting safety and reliability isn't exactly news to anyone who has followed the automotive industry's adoption of lean-manufacturing methods such as Japanese kaizen and Six Sigma, which aim to improve reliability and reduce waste. But adding automation and robots is becoming more important today as our culture increasingly demands customized products, because these tools let factory managers reconfigure their production lines on the fly.

“It’s not always about being more efficient,” Jouret says. “It’s about being more agile.”

Kiva Allgood, the head of Internet of Things and automotive at Ericsson, calls the shift from efficiency to agility a move away from economies of scale toward an economy of one. In other words, the inefficiencies traditionally associated with making low quantities of goods will no longer apply. She saw this change coming as an executive at General Electric. Now she’s working on the wireless technology that will help make this possible.

But before we can reprogram the factory floor, we have to understand it. That starts with individual machines. We’ll need manufacturing equipment with sensors measuring both the machine’s work and the machine’s health. This is the stage where many manufacturers are today.

The factory should also have sensors that provide context to the overall environment, including temperature, workers’ movements, and more. Armed with that understanding as well as computer-vision algorithms designed to detect flaws in the manufactured product, it will become possible to quickly repurpose robots to make something new.

Perhaps more interestingly, future agile factories will remove the wires littering factory floors. Historically, factory automation has meant building a rigidly defined manufacturing line dictated by the robots making the product. But with developing tech, factories will free those robots from their data and power wires, and replace the wires with low-latency wireless 5G networks. Then, factories can turn days-long reconfiguration efforts into an overnight project.

By emphasizing agility over efficiency, the factories of the future will be able to turn on a dime to meet the demands of our fast-paced society.

This article appears in the July 2019 print issue as “One Factory Fits All.”

IoT Can Make Construction Less of a Headache

Post Syndicated from Stacey Higginbotham original

With more data, constructing buildings can look more like factory manufacturing

The construction industry can be a mess. When constructing any building, there are several steps you must take in a specific order, so a snag in one step tends to snowball into more problems down the line. You can’t start on drywall until the plumbers and electricians complete their work, for example, and if the drywall folks are behind, the crew working on the interior finish gets delayed even more.

Developers and general contractors hope that by adopting Internet of Things (IoT) solutions to cut costs, build faster, and use a limited labor pool more efficiently, they can turn the messy, fragmented world of building construction into something more closely resembling what it actually is—a manufacturing process.

“We’re looking at the most fragmented and nonstructured process ever, but it is still a manufacturing process,” says Meirav Oren, the CEO of Versatile Natures, an Israeli company that provides on-site data-collection technology to construction sites. The more you understand the process, says Oren, the better you are at automating it. In other conversations I’ve had with people in the construction sector, the focus isn’t on prefabricated housing or cool bricklaying robots. The focus is on turning construction into a regimented process that can be better understood and optimized.

Like agriculture—which has undergone its own revolution, thanks to connected tech—construction is labor intensive, dependent on environmental factors, and highly regulated. In farming today, lidar identifies insects while robots pick weeds with the aid of computer vision. The goal in agricultural tech is to make workers more efficient, rather than eliminating them. Construction technology startups, using artificial intelligence and the IoT, have a similar goal.

Oren’s goal, for example, is to make construction work more like a typical manufacturing process by using a sensor-packed device that’s mounted to a crane to track the flow of materials on a site. Versatile Natures’ devices also monitor environmental factors, such as wind speed, to make sure the crane isn’t pushed beyond its capabilities.

Another construction-tech startup, Pillar Technologies of New York City, aims to reduce the impact of on-site environments on workers’ safety and construction schedules. Pillar makes sensors that measure eight environmental metrics, including temperature, humidity, carbon monoxide, and particulates. The company then uses the gathered data to evaluate what is happening at the site and to make predictions about delays, such as whether the air is too humid to properly drywall a house.
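A rule fed by readings like Pillar's might look like the sketch below. The thresholds and data are invented for illustration, not Pillar's actual values:

```python
# Hypothetical environmental gate for a scheduled task: count the hours
# in a window whose conditions are out of spec for hanging drywall.

def drywall_risk(readings, max_humidity=70.0, min_temp_c=10.0):
    """Return the number of (temp_c, humidity_pct) readings unsuitable
    for drywall work."""
    return sum(
        1 for temp, humidity in readings
        if humidity > max_humidity or temp < min_temp_c
    )

# 24 hourly readings; a damp, cold morning pushes 5 hours out of spec.
readings = [(8.0, 80.0)] * 5 + [(18.0, 55.0)] * 19
print(f"{drywall_risk(readings)} of {len(readings)} hours unsuitable")
```

Roll a count like this forward against the schedule and you get exactly the kind of early warning Schwarzkopf describes: the drywall crew can be rebooked before anyone shows up to a site that's too humid to work.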

Because many work crews sign up for multiple job sites, a delay at one site often means a delay at others. Alex Schwarzkopf, cofounder and CEO of Pillar, hopes that one day Pillar’s devices will use data to monitor construction progress and then inform general contractors in advance that the plumbers are behind, for example. That way, the contractor can reschedule the drywall group or help the plumbers work faster.

Construction is full of fragmented processes, and understanding each fragment can lead to an improvement of the whole. As Oren says, there is a lot of low-hanging fruit in the construction industry, which means that startups can attack a small individual problem and still make a big impact.

This article appears in the June 2019 print issue as “Deconstructing the Construction Industry.”