All posts by Steven Cherry

Reversing Climate Change by Pulling Carbon Out of the Air

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/energy/environment/reversing-climate-change-by-pulling-carbon-out-of-the-air

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

Let’s face it. The United States, and, really, the entire world, has squandered much of the time that has elapsed since climate change first became a concern more than forty years ago.

Increasingly, scientists are warning that taking coal plants off line, building wind and solar farms here and there, and planting trees, even everywhere, aren’t going to keep our planet from heating to the point of human misery. Twenty years from now, we’re going to wish we had started thinking about not just carbon-zero technologies, but carbon-negative ones.

Last year we spoke with the founder of Air Company, which makes carbon-negative vodka by starting with liquid CO2 and turning it into ethanol, and then further refining it into a product sold in high-end liquor stores. Was it possible to skip the final refining steps and just use the ethanol as fuel? Yes, we were told, but that would be a waste of what was already close to being a premium product.

Which leads to the question, are there any efforts under way to take carbon out of the atmosphere on an industrial scale? And if so, what would be the entire product chain?

One company already doing that is Global Thermostat, and its CEO is our guest today.

Graciela Chichilnisky is, in addition to heading the startup, an Argentine-born Professor of Economics and Mathematical Statistics at Columbia University and Director of the school’s Consortium for Risk Management. She’s also co-author of a July 2020 book, Reversing Climate Change.

Welcome to the podcast.

Graciela Chichilnisky Thank you, Steven. Pleasure to be here.

Steven Cherry Graciela, you have two pilot facilities in California; they will each have the capacity to remove 3,000 to 4,000 metric tons of CO2 per year. How exactly do they operate?

Graciela Chichilnisky The actual capacity varies depending on the equipment, but you are right on the whole, and the facilities are at SRI, which used to be the Stanford Research Institute. They work by removing CO2 directly from the air. The technology is called “direct air capture,” and our firm, Global Thermostat, is the only American firm doing that. And it is the world leader.

The technology, essentially, scrubs air. You move a lot of air over capture equipment and chemicals that have a natural affinity for CO2, so as the air moves by, the CO2 is absorbed by the solvents. Then you separate the solvent from the CO2 and, lo and behold, you have 98 percent pure CO2 coming out as a gas at one atmosphere. That is, at a very high level, how it works.

And the details are, of course, much more complex and very, very interesting. What is most interesting, perhaps, is that chemists who are used to working with capture in constrained, limited volumes find that the natural chemical and physical properties of the process change when you are working in an unconstrained volume (in fact, the whole atmosphere). You are using the air directly from the atmosphere to remove the CO2. And that is why it is possible to do it in a way that we have patented (we have about 70 patents right now), a way that is actually economically feasible. It is possible to do it, sell the CO2, and make money. And that is, in fact, the business plan for our company, which includes reversing climate change through this process.

Steven Cherry Yes, so let’s take the next step of the process, what happens with the CO2 once it’s at its 98 percent purity?

Graciela Chichilnisky What is perhaps a well-kept secret for most people: CO2 is a very valuable gas. Even though it is a nuisance, and dangerous depending on its concentration in the atmosphere, down here on earth it sells for anywhere between $100 per tonne and $1,500 to $1,800 per tonne. So if you think about that, all you need to know is that the cost of obtaining the CO2 from the air should be lower than the price at which you sell it.

The question is what markets would satisfy that, and I’m going to give you cases in which we are already working and selling, and one in which we are not working yet. We’re already working on the production of synthetic fuels, in particular synthetic gasoline. Gasoline can be produced by combining CO2 and hydrogen: the hydrogen comes from water, produced using electrolysis, and the CO2 comes from the air, using our technology. Combining the two gives you hydrocarbons, and when properly mixed, you obtain a chemical that is, molecule by molecule, identical to gasoline, except that it comes from water and air instead of from petroleum. So if you burn it, you still produce CO2, but the CO2 that is emitted came from the atmosphere during the production of the gasoline, and therefore you have a closed cycle. In net terms you are emitting nothing when you use gasoline produced from CO2 and hydrogen, that is, from air and water.
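
To make the closed cycle concrete, here is one plausible reaction pathway, shown as a sketch; the interview does not specify Global Thermostat’s synthesis route, so the particular reactions below (electrolysis, then the reverse water-gas shift and Fischer–Tropsch steps) are illustrative assumptions, not the company’s confirmed process:

$$2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2} \quad \text{(electrolysis)}$$

$$\mathrm{CO_2} + \mathrm{H_2} \rightarrow \mathrm{CO} + \mathrm{H_2O} \quad \text{(reverse water-gas shift)}$$

$$n\,\mathrm{CO} + (2n+1)\,\mathrm{H_2} \rightarrow \mathrm{C}_n\mathrm{H}_{2n+2} + n\,\mathrm{H_2O} \quad \text{(Fischer–Tropsch)}$$

Burning the resulting fuel, $\mathrm{C}_n\mathrm{H}_{2n+2} + \tfrac{3n+1}{2}\,\mathrm{O_2} \rightarrow n\,\mathrm{CO_2} + (n+1)\,\mathrm{H_2O}$, returns exactly the captured carbon to the air, which is why the fuel is net-zero rather than carbon-free.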

Our current markets, in addition to synthetic gasoline, include water desalination. We work with the company that is the largest desalinator of water in the world, in Saudi Arabia, and they need a lot of CO2, because the process of desalinating water for human consumption requires the use of CO2. In addition to those two commercial applications, synthetic gasoline and desalinated water, there are carbonated beverages, for example beer and Coca-Cola. Indeed, we work with Coca-Cola, we work with Siemens, and we work with AME and with automobile companies such as Porsche to produce clean gasoline, the synthetic gasoline I mentioned.

From the CO2, you can actually produce elements of cement and other building materials. So as a whole, McKinsey has documented that there is a $1 trillion market per year globally for CO2. So CO2 is a very valuable chemical on Earth, even though it’s a nuisance and dangerous in the atmosphere. So the notion is—the notion of Global Thermostat is—bring it down. In other words, take it from the atmosphere where it is dangerous; bring it down to earth, where it is valuable.

Steven Cherry I love that our first carbon negative podcast involved vodka and our second one now involves beer. So that’s the economic case for what you’re doing. There’s also the question of the carbon budget. There’s a certain amount of energy used in the processes of removing CO2 from the air and then using it for some of these applications; what would be a typical net carbon budget?

Graciela Chichilnisky Negative. In other words, we don’t use electricity, which right now is mostly produced from fossil fuels. We use heat, and our heat can be waste heat from other processes; it doesn’t have to be electricity. In fact, we use very little electricity.

But think of it this way: In the year 2020, for the first time in history, humans are able to produce electricity directly from the sun less expensively than by using fossil fuels. Two and a half cents per kilowatt-hour or less, and heading continually downward, is the going price for solar photovoltaic electricity. It is the lowest cost; around two cents a kilowatt-hour is really the lowest possible cost.

Steven Cherry One wonderful thing about this is that you’re an economist and so you’re determined not just to develop technologies, but ensure that they find a home in the marketplace because that’s the most practical way to implement them at scale.

In 2019, Global Thermostat started working with Exxon Mobil. I understand they provided some money and I believe initially 10 employees. I gather the idea is for them to be one organization commercializing this technology further. How would that work?

Graciela Chichilnisky Well, first of all, I do have two Ph.D.s; I started in pure mathematics at MIT. That was my first Ph.D. My second Ph.D. was in economics at UC Berkeley. So I do have the mathematics as well as the economics in my background. What we’re doing requires several forms of expertise. You said it: Global Thermostat has made a joint development agreement with Exxon, is working with Coca-Cola, is now working with Siemens, and is working with a company called HIF, which is in Chile.

So, how does that work? As you probably know, Exxon Mobil is a multifaceted company. In addition to fossil fuels, they have huge expertise in carbon capture technology of the old-fashioned, I would say traditional, type. And by that I mean capture of CO2 from the fumes of power plants, for example.

They have the resources and the know-how, and we are a small company that wants to expand its production. So they offered us an opportunity to team up with an advanced company in the more traditional area of carbon capture, one that is willing to experiment and willing to advance, commercially, the removal of CO2 directly from the atmosphere.

Under our contract with them, we intend to build a one-gigaton plant; that is what we contracted to do, which means that we will scale up our technology so that every year it can eventually remove one billion (with a ‘b’ as in boy) tons of CO2 from the atmosphere. That is the scale-up I’m talking about, and that is the main purpose of our partnership with Exxon Mobil.

And if you think about it, you said it yourself, you want to know what the carbon budget really is, roughly speaking. Don’t forget that I worked on the Kyoto Protocol; I created the carbon market of the Kyoto Protocol. So I know a lot about carbon budgets, about how demanding they are and how far we are from what we need to do. We need to remove essentially 40 gigatons of CO2 from the atmosphere every year in order to reverse climate change. And what I’m telling you is that with these types of partnerships with companies like Exxon, we can do one gigaton; you are within shooting distance of that goal. That is why our contract with Exxon is to scale up our technology to remove one gigaton of CO2 per year. And if we had 40 of those plants, then we would be removing all the CO2 that humans need to remove from the atmosphere right now in order to reverse climate change.
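
The arithmetic behind those figures, using only the numbers given in the interview:

$$N_{\text{plants}} = \frac{40\ \text{Gt CO}_2\ \text{per year needed}}{1\ \text{Gt CO}_2\ \text{per year per plant}} = 40$$

For a sense of the scale-up involved, a one-gigaton plant would capture roughly $10^{9} / 3{,}500 \approx 300{,}000$ times as much CO2 per year as one of the 3,000-to-4,000-tonne pilot facilities mentioned earlier.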

Steven Cherry It seems paradoxical that it would make more sense to take carbon directly out of the air, the direct air capture, rather than focusing on concentrated sources of carbon and carbon dioxide, such as a power plant smokestack. How is that paradox resolved? How is it more sensible to take it directly out of the atmosphere?

Graciela Chichilnisky First of all, it is not just sensible, it is very creative, very unique; what we’re doing has never been done. And there is a good reason why it wasn’t done: because, as you point out, it is actually more difficult and more expensive to remove CO2 from the air than to remove it from a concentrated source. So why would we be doing that? The answer is, if you remove CO2 from the chimneys of any industrial facility, the best you can do (the very best possible) is to make that facility carbon neutral, to remove all the CO2 that it is emitting.

That’s the best, if you’re really lucky. Okay, that’s not enough anymore. When I was a lead author of the IPCC, the Intergovernmental Panel on Climate Change, working on this topic, I found (and it is well known now) that going carbon neutral does not suffice. I think you said that in your introduction. Now we have to go carbon negative, which means we have to remove, in net terms, more CO2 than what is emitted. And the CO2 that we remove should be stabilized on Earth. I’m not saying sequestered in the ground, but stabilized; you know, it could be in materials or instruments or whatever, stabilized on earth after it is removed.

If you need to remove more CO2 than what you emit, and we need to remove 40 gigatons more than what we emit right now, you cannot do it from industrial facilities; the best that you can achieve there is carbon neutrality. You need to go carbon negative, and for that you have to remove CO2 from air.

Steven Cherry I said that 20 years from now, we’ll wish we had started all this 20 years earlier, but you actually started this process a decade ago, you already foresaw that we would need carbon negative processes. But at the same time, as you mentioned, you were also working to develop the Kyoto Protocols, specifically creating carbon markets. Was that just a stopgap before getting to this point that you’re at now?

Graciela Chichilnisky No. No, no. The carbon market was the solution, an easy solution. Let me explain. The problem is that our prices are all wrong. When we try to maximize economic performance, we maximize our GDP, in which we don’t take into account the enormous damage that excessive CO2 emissions are causing to humans, to our economy, to our world, and even to our survival as a species. So the invention of the carbon market (I invented it, designed it, and wrote it into the Kyoto Protocol in 1997) was done with the purpose of changing the system of values.

In other words, introducing prices and values that make it more desirable to be clean than to over-emit. Right now, if we were to cut down all the trees in the United States and produce toilet paper, our economic system, the way we measure economic performance, would say that we are much better off. After all, more trees are being cut down and used to produce toilet paper than before.

So I decided that this had to change. And in fact, the carbon market that I designed and created in the Kyoto Protocol became international law in 2005. It is now what’s called the European Union Emissions Trading System, which encompasses 27 nations; it is also used in China and in 14 states in the United States, and essentially 25 percent of humankind is now using the carbon market that I designed and wrote into the protocol originally in 1997. But the most important statistic for me is this: in December 2019, Physics Today ran an article on the carbon market which says that the carbon market has succeeded, decreasing the emissions of the nations that use it by 30 percent from the base year in the years since 2005, when it became international law.

Another way of saying it is that if the whole world were using the carbon market, not just the 25 percent that I mentioned, we would be 30 percent below the level of emissions of 2005. And you know what? We really wouldn’t have the climate disaster, the catastrophe, that we fear. We would not have it, because we would be containing the emissions of CO2 through the use of the carbon market, as was done in all the nations that adopted it when it became international law in 2005.

So that’s a solution, but we haven’t adopted it; only the 25 percent of the world that used it succeeded. The rest of the world went south; we emitted even more. So now, in addition to decreasing emissions (and that remains critical), you have to remove the CO2, the legacy CO2, that we put into the atmosphere and which is still in the atmosphere after all these years. From the physical point of view, you have to know that CO2 doesn’t decay, or doesn’t decay as fast as other gases; once emitted, it remains in the atmosphere for decades, even hundreds of years in some cases. As a result, we have a lot of legacy CO2 that doesn’t decay.

Steven Cherry The title of your book is Reversing Climate Change. The subtitle is How Carbon Removals Can Resolve Climate Change and Fix the Economy. Perhaps you want to say another word about the “fix the economy” part.

Graciela Chichilnisky Yeah, I will do it with two sentences. Sentence number one: I just want to quote new President Biden, who said, “When I think about climate change, I think jobs, jobs, jobs.” A technological evolution of this nature, which could even be a revolution, creates a lot of jobs, and it creates the infrastructure that will allow us to solve the problem and grow the economy at the same time, because every time you remove CO2, you now make money. It doesn’t cost money; you have to invest initially, but you make money.

The second issue, which [Biden] doesn’t address because he doesn’t know this level of detail or have this type of focus, is that the problem of the environment and resources is very closely tied to the problem of inequity. You must be aware, because a number of prominently published and reviewed books have documented it, of the increase in inequity in the global economy: not just internationally, where we know it is huge (it has increased 20 times since 1945), but also within nations, like the United States. What’s interesting is that these new technologies not only solve the problem at the technological level, and not only can bring jobs, as I mentioned when I quoted Biden, but in addition they foster equity. And I will give you two examples very quickly. The first, as I mentioned already, is the solar photovoltaic revolution, in which the cost of producing electricity from photovoltaics has decreased by 80 percent in the last 20 years.

That revolution has created the most accessible form of energy ever, because while fossil fuels were the main raw material for the production of electricity in the $60 trillion power plant economy, they are really not very equitable at all. Fossil fuels come from a few regions in the world; they have to be extracted from under the earth, and so on. The result is that our whole energy production system lies at the foundation of the inequity of the modern economy, the industrial revolution. If you replace fossil fuels (natural gas, petroleum, and coal) with the sun as an input, you have a major equalizer, because everybody in the world has access to the sun in the same amount. The input is no longer fossil fuels that come from a few places that make a lot of money; the input now is the sun, which comes from everywhere, and everybody has access to it. That is far more equitable. It is a huge difference, a huge difference.

And the other example is new technology that transforms CO2 into materials for construction, or even into clean forms of energy like the synthetic gasoline I explained before. That is based on air as an input, and air has a property: it has the same concentration of CO2 all over the planet, which makes it an equalizer again. So we can now produce cement, let’s say, and beverages and food. You can even produce protein from CO2, of course, because of the carbon molecules; you can actually produce all the materials that we need, and even food and drinks, from air. And the air is equitably distributed; it is one of the last few public goods that everybody has access to, as is the sun. So we are now going into a new economy, powered by the sun and with resources coming from air. And you know what? That addresses the problem of equity in a big way; I would say inequity, which is so paralyzing to economies and to the world as a whole. So I want to say that this is not only an environmental change, some may say a revolution; it is in addition a social and economic change, and some would say revolution.

Steven Cherry Yeah, we could do an entire show on things like the resource paradox: countries that are rich in oil, for example, end up poorer through the extraction process than when they started. Well, Graciela, it’s going to take economists, businesspeople, scientists, and politicians to lead us out of this crisis, and we’re fortunate to have in you someone who is two of those things working with the other two. Thank you for your research, your book, your company, and your teaching, and for joining us today.

Graciela Chichilnisky Great. Thank you very, very much for your time and for your insightful questions.

We’ve been speaking with Graciela Chichilnisky: Columbia University economist, co-author of the 2020 book, Reversing Climate Change, and CEO of Global Thermostat, a startup devoted to pulling carbon out of the air cost-effectively.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers, a professional organization dedicated to advancing technology for the benefit of humanity.

This interview was recorded February 2, 2021, via Zoom and Adobe Audition. Our theme music is by Chad Crouch.

You can subscribe to Radio Spectrum on Spotify, Apple Podcasts, and wherever else you get your podcasts, or listen on the Spectrum website, where you can also sign up for alerts of new episodes. We welcome your feedback on the web or in social media.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

The Uneconomics of Coal, Fracking, and Developing ANWR

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/energy/environment/the-uneconomics-of-coal-fracking-and-developing-anwr

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

Many things have changed in 2020, and it’s an open question which are altered permanently and which are transitory. Work-from-home may be here to stay, as might the shift from movie theaters and cable TV networks to streaming services; pet adoption rates are so high that some animal shelters are empty; and global greenhouse gas emissions declined by record amounts.

That last fact has several causes—the lockdowns and voluntary confinements of the pandemic; an oil glut that preceded the pandemic and continued through it; the ways renewable energy—especially solar energy—is successfully competing with fossil-fuels. According to the Institute for Energy Economics and Financial Analysis, an Ohio-based non-profit that studies the energy economy, more than 100 banks and insurers have divested or are divesting from coal mining and coal power plants. Their analysis also shows that natural gas power plant projects—for example one that’s been proposed for central Virginia—are a poor investment, due to a combination of clean-energy regulations and the difficulty of amortizing big power-plant construction in the face of a growing clean-energy pipeline, expected to grow dramatically over the next four years.

Such continued growth in clean-energy projects is particularly notable, as it comes despite high job losses for the renewable energy industry, slowing construction activity, and difficulty in finding capital financing. Those same headwinds brought about a record number of bankruptcies in the fracking industry.

My guest today is eminently qualified to answer the question, are the changes we’re seeing in the U.S. energy-generation profile temporary or permanent? And what are the consequences for climate change? Kathy Hipple was formerly an analyst at the aforementioned Institute for Energy Economics and Financial Analysis and is a professor in Bard College’s Managing for Sustainability MBA program.

Kathy, welcome to the podcast.

Kathy Hipple Thank you, Steven. It’s great to be here.

Steven Cherry Kathy, your background is broader than most. You did a long stint on Wall Street at Merrill Lynch, but you’re also on the board of Meals on Wheels in Bennington, Vermont. There are issues of environmental justice in our decisions about what kind of energy generation to finance and where, and we’ll get to that. But first, it seems like the economics behind our energy sources are shifting almost faster than we can keep up. Where are we at currently with the economics of fossil fuels—coal, petroleum, natural gas?

Kathy Hipple Well, you’re right. It has seemed that 2020 saw an acceleration of trends. But this is not new; this has been going on for at least a decade, that fossil fuels have been in decline from a financial standpoint. The energy sector, which currently includes only oil and gas companies and does not include renewable energy, finished in last place in the market for the decade between 2010 and 2020. It also finished in last place in 2020, 2019, and 2018. So this is a sector in long-term financial decline. And as we know, and because I’m a finance professor, finance is all about the future. So the market is telling us that the future is not fossil fuels, which is why the energy sector is now only a little over 2 percent of the S&P 500. In the 1980s it was more than 28 percent. So we now have a world economy that is much less dependent on fossil fuels financially than it has ever been.

Steven Cherry Wall Street firms have promised to lead the charge toward sustainable energy use, but the world’s largest asset manager, BlackRock, a year after it said it would divest its portfolio from fossil fuels, still has something like $85 billion invested in coal companies, the worst of the fossil fuels in terms of pollution and greenhouse gases.

Kathy Hipple Yes, BlackRock has been a disappointment in many respects. They are not walking their talk. Their talk is impressive, but as for their follow-through, as you say, they’re still heavily invested in coal and still heavily invested in financing gas and oil projects around the world. They are also moving into clean energy. But they have not yet done the divestment that many activists have called on them to do and that the Larry Fink letter suggests they will do.

They have not been as transparent as they probably should be in terms of how they are working with the management of companies to see whether those companies are actually promoting the energy transition, or whether they are reporting under the Task Force on Climate-related Financial Disclosures, TCFD. They did grow their asset base tremendously in 2020, but they have a long way to go before they become a climate leader on the investment side.

Steven Cherry It’s impossible to talk about new drilling without talking about fracking. A 2019 study of 40 dedicated U.S. shale oil companies found that only four of them had a positive cash flow balance. Much of the easiest drilling has already been done, investors haven’t been getting good returns even on that, and the price of oil generally is pretty low. The thing that has puzzled some observers is that, besides the financial damage wrought by fracking, the industry seems to be driven more by enthusiasm than by results. Does fracking make sense financially?

Kathy Hipple Fracking does not make sense financially, and it never has. That is the big dirty secret, even from when oil prices were well above $100 per barrel and natural gas prices were much higher than they are now. These companies, year in and year out since 2010, have been cash flow negative in aggregate. Occasionally you’ll get one or two companies that outperform their peers. But in aggregate, the frackers that are going after oil, largely in the Permian Basin in Texas and New Mexico, have been cash flow negative each and every year; and in even worse shape than the oil frackers are the fossil gas (sometimes called natural gas) producers, largely in the Marcellus and Utica basins in Appalachia.

They have been in extremis, and they have produced negative cash flows again, even when gas prices were much higher than they are now. So the business case for fracking has never been proved; it’s a poor business model. As you mentioned, the decline rate is very high, which means you have to continue to put money into drilling new wells. And the industry has never found a way to be profitable and cash flow positive.

In fact, a former CEO of EQT, one of the largest gas frackers, said he had never seen an industry, in a sense, commit suicide the way the fracking industry has done. So you’re right, it’s been a terrible investment. It’s been driven by enthusiasm and a lot of investors saying wait until next year. But the investor base has largely moved away from this sector. The sector has no access to the public markets for either equity or debt. Many banks have walked away from them and closed their loan portfolios; one prominent bank sold its entire energy portfolio for roughly 50 or 60 cents on the dollar. So the sector can probably only go forward if it has access to higher-risk, higher-cost capital, from investors who are willing to gamble on a sector that has never yet shown financial success.

Steven Cherry There’s a lot of political momentum behind fracking, especially in western Pennsylvania and places like that, North Dakota. What is one to do when there’s such a disconnect between the politics and the finances?

Kathy Hipple That’s a great question, Steven. The industry has lost a tremendous amount of its economic and financial power, but it retains a lot of political power. That is particularly true in places like Texas and, as you mentioned, Pennsylvania. However, I think that the public view of fracking has started to change. In fact, there was an interesting study showing that the counties in Pennsylvania that had more fracking did not, in fact, vote for Trump at the same level they had four years earlier, and that the public is starting to really question whether they want pipelines under their land, whether they want orphaned gas and oil wells that have simply been abandoned. And they’re really questioning whether the number of jobs the industry promises will ever materialize.

Often the industry comes to a state and says, we will produce this many jobs. In fact, most of the jobs are in construction, and they’re short-term jobs. They are reasonably high-paying, but often the workers are brought in from outside the state. And once these wells are drilled, they don’t require people to operate them. So these are not good long-term sources of revenue for local counties, communities, or states.

Some of my students, interestingly enough, did a study on a wind farm in a small county, Paulding County, in Ohio, and they showed that the long-term revenue produced from the wind farm was actually very stable income, and that the county could make use of these payment-in-lieu-of-taxes (PILOT) funds to finance its school district, special ed, and DARE (drug-abuse-prevention) officers. A lot of counties throughout Texas, for example, are now very dependent on revenue streams coming from wind. So I think as more municipalities look at the long-term stable income that comes in from a wind farm, for example, versus the boom-bust cycle of the oil and gas industry, clean energy will become much, much more appealing, even more so than it is now.

Steven Cherry Historically, a lot of that revenue really has gone to communities; there’s no better example of that than Alaska. And in fact, in mid-November 2020, in other words, in the lame-duck period between election and inauguration, the Trump administration opened up ANWR, the Arctic National Wildlife Refuge in Alaska. That was our impetus for first contacting you for this show. It’s now mid-January as we record this. Where are we at with ANWR?

Kathy Hipple Well, it’s a beautiful, pristine part of the world, and it’s very costly to produce oil there. And since there’s a glut of oil and a glut of gas on the market worldwide, one questions whether there’s any rational reason for drilling there. But it was one of the final moves by the Trump administration to rush through the process of allowing bidding on these lands.

And it will be interesting to see what happens. Very few bids came in, and it doesn’t mean anybody will go forward, because this oil is not economically producible at current prices. Any firm that puts money into this is likely, at the end of the day, to lose money.

Steven Cherry You know, back in the mid-2010s, Shell ended up abandoning a $7 billion drilling project in the Arctic. Are the oil companies really enthusiastic about drilling there?

Kathy Hipple No, it doesn’t appear that they are. In fact, through most of 2020 there were massive, historic write-downs among the big oil companies around the world. The large oil companies did not participate in bidding for the land; a couple of smaller companies did, but the larger companies have largely stayed away.

Steven Cherry So is ANWR more of a symbol of a conflict between business and environmentalism?

Kathy Hipple I wouldn’t have put it in those terms, but I think that’s an excellent way to put that.

Steven Cherry The Biden administration promised an enormous infrastructure program oriented toward environmental concerns and shifting to a clean energy economy. Leaving aside the political difficulties in getting any such legislation through Congress, how big a program could we have and still remain within the bounds of good economic sense?

Kathy Hipple I don’t know the exact dollar amount to answer that question, but there’s still a tremendous amount of low-hanging fruit in infrastructure spending and energy-efficiency spending. We always talk about moving to clean energy and renewable energy, which is fantastic; there’s an enormous need to build that out in this country. But there’s also a lot of low-hanging fruit in plain energy efficiency, which ends up getting short shrift when we talk about the energy transition. And there could be billions and billions in building out an electric-vehicle charging system around the country. We need to move very quickly to decarbonize. Many decarbonization plans have target dates of 2030, 2040, 2050; the urgency is to act immediately, to act now. And I’m extraordinarily happy that the Biden administration is moving as quickly as it is, just a few days in.

Steven Cherry I was going to ask you about electric vehicles. It looks like Detroit is finally getting serious about them. How does that change the energy generation situation and the grid distribution system five years from now, 10 years from now?

Kathy Hipple Well, it’s essential to decarbonize the economy and much of the use of oil is for vehicle travel. The more vehicles can be electrified, the less need there will be for oil in this country. The United States has fallen behind Europe in terms of EVs and China is coming along very, very quickly and very aggressively. So the United States has a long way to go.

And part of it is that people do have a concern about range anxiety. There are not enough high-speed chargers. Many people live in apartments, and if they live in apartments, they can’t charge their vehicle overnight. They may not be going to an office, which you alluded to in your opening statement. So they can’t charge there. So if you live, for example, in New York City, where I split my time between Vermont and New York City, if you live in an apartment building, it’s very difficult in New York City to reliably have an EV. And that has to change and it has to change very, very quickly.

Steven Cherry Perhaps we could say a word about nuclear power. We’ve had three really bad accidents in almost three-quarters of a century; four, if you count Three Mile Island. That’s either a lot or a little, depending on how you look at these things. France still gets a steady 63 percent of its energy from nuclear; in fact, it gets only 10 percent from fossil fuels. Now there are a number of new designs, including one that puts small nuclear plants on barges in the ocean. Is there a future for new nuclear construction outside of China, which has been continuing to move that way?

Kathy Hipple I am not the world’s expert on nuclear power, but what I see is the cost of solar dropping 90 percent, wind dropping 70 percent, and battery storage dropping quickly, while the estimates I keep seeing for new nuclear power, surprisingly, continue to increase. So it is very difficult for a new energy plant, whether gas or nuclear, to compete with the declining costs of solar, wind, and battery storage.

So in the United States, I don’t see that there’s a future, certainly not for large nuclear. The question is how long the existing nuclear plants continue to operate in the United States. Most of the energy forecasts for getting to net-zero by 2030, 2040, or 2050 do assume that the currently existing nuclear plants continue to operate, but they do not generally call for new nuclear.

Steven Cherry Finally, there are issues of environmental justice that are economic, for example, the air pollution caused by fossil fuel extraction and consumption falls disproportionately on minorities and the poor. This is something that you’ve studied as well.

Kathy Hipple I think that the issue of environmental justice has always been there, but it has gained a tremendous amount of traction in the past couple of years, especially in 2020, when it became increasingly clear how disproportionately poor communities were being affected by fossil fuels, which include petrochemical plants.

If you look at Cancer Alley in Louisiana, and the number of refineries and petrochemical plants in a very small area there, it’s very difficult not to be very, very concerned about environmental justice issues. And the concept of a just transition is a very interesting one that really needs to be top of mind as we think about accelerating the energy transition. Simply as a matter of basic decency and fairness, we cannot allow the pollution caused by fossil fuels to fall disproportionately on poor communities, and especially Black communities and communities of color. It is terribly unfair.

Steven Cherry In some ways this is a part of a broader question about externalities and how they get paid for either financially or in terms of things like cancer that have tilted our economy toward fossil fuel consumption for a century now. Is there anything that can be done about that?

Kathy Hipple Well, it depends on who you ask. Take, for example, Bob Litterman, who chaired the Climate Leadership Committee; he has pushed hard for, essentially, a carbon tax. In his view, and I think that of his fellow committee board members, if carbon were taxed and the revenues generated were treated like a dividend, that would go a long way toward addressing some of the social costs of carbon pollution. That’s one possible solution. Other countries are figuring out how to do it with cap-and-trade. But I think it’s only a question of time before this country has some kind of a reckoning. And one of the things the Biden administration is doing is trying to actually calculate the social cost of carbon pollution.

Steven Cherry Kathy, we’ve been speaking about oil companies as a sort of hegemony, but are there distinctions you want to make among them?

Kathy Hipple I think that’s a very interesting question, Steven. In the last few years, some of the oil companies, especially the large ones we call the oil majors or the integrated oil companies, have started to diverge. The European oil companies, Shell, BP, and Total in particular, have taken a more forward-looking view of the energy transition than have their American counterparts, Exxon and Chevron. Exxon and Chevron have largely continued along the path of doubling down on oil and gas production and petrochemicals, whereas Total, for example, has been very forward-thinking for about a decade. Now, are they doing enough? No. A very small percentage of their capital expenditures is directed toward clean energy, but they are at least moving in the right direction. And Shell and BP are very involved as well, at least moving in that direction; again, not quickly enough, not aggressively enough, to be aligned with the Paris Agreement. But at least we’re seeing that they are aware of the energy transition and are not staking their entire future on oil and gas, but trying to move beyond it.

Steven Cherry Companies like BP have even set a date to be out of fossil fuels: 2040 or 2050. How painful is that going to be for them? Are there loopholes that make this more of a PR commitment than a serious one?

Kathy Hipple That’s a great question. BP did actually say they would reduce their fossil fuel production; the loophole is that some of their joint ventures have been carved out of that commitment. But it was one of the most significant pledges, because BP, along with Repsol, another European oil company, actually said they would reduce production. And we need more of that. This industry is mature; it’s declining. We need a managed decline for the industry, and that will not happen if companies just make empty statements.

Steven Cherry Well, Kathy, it seems like we’re not really going to get to where we need to on climate change until we restructure the economy around it. So thank you for your work toward that and for joining us today to talk about it.

Kathy Hipple Thank you very much for having me, Steven. And congratulations on the work that you’re doing with your students at NYU.

Steven Cherry We’ve been speaking with Kathy Hipple, of Bard College’s Managing for Sustainability MBA program, about the clean-energy economy.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers, a professional organization dedicated to advancing technology for the benefit of humanity.

This interview was recorded January 25, 2021, via Zoom. Our theme music is by Chad Crouch.

You can subscribe to Radio Spectrum on Spotify, Apple Podcasts, and wherever else you get your podcasts, or listen on the Spectrum website, where you can also sign up for alerts of new episodes. We welcome your feedback on the web or in social media.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

Bright X-Rays, AI, and Robotic Labs—A Roadmap for Better Batteries

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/energy/batteries-storage/bright-xrays-ai-and-robotic-labsa-roadmap-for-better-batteries

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

Batteries have come a long way. What used to power flashlights and toys, Timex watches and Sony Walkmans, is now found in everything from phones and laptops to cars and planes.

Batteries all work the same: Chemical energy is converted to electrical energy by creating a flow of electrons from one material to another; that flow generates an electrical current.

Yet batteries are also wildly different, both because the light bulb in a flashlight and the engine in a Tesla have different needs, and because battery technology keeps improving as researchers fiddle with every part of the system: the two chemistries that make up the anode and the cathode, and the electrolyte and how the ions pass through it from one to the other.

A Chinese proverb says, “Give a man a fish, and you feed him for a day. Teach a man to fish, and you feed him for a lifetime.” The Christian Bible says, “follow me and I will make you fishers of men.”

In other words, a more engineering-oriented proverb would say, “Let’s create a lab and develop techniques for measuring the efficacy of different fishing rods, which will help us develop different rods for different bodies of water and different species of fish.”

The Argonne National Laboratory is one such lab. There, under the leadership of Venkat Srinivasan, director of its Collaborative Center for Energy Storage Science, a team of scientists has developed a quiver of techniques for precisely measuring the velocity and behavior of ions and comparing them to mathematical models of battery designs.

Venkat Srinivasan is also deputy director of Argonne’s Joint Center for Energy Storage Research, a national program that looks beyond the current generation of lithium–ion batteries. He was previously a staff scientist at Lawrence Berkeley National Laboratory, wrote a popular blog, “This Week in Batteries,” and is my guest today via Teams.

Venkat, welcome to the podcast.

Venkat Srinivasan Thank you so much. I appreciate the time. I always love talking about batteries, so it’d be great to have this conversation.

Steven Cherry I think I gave about as simplistic a description of batteries as one could give. Maybe we could start with: What are the main battery types today, and why is one better than another for a given application?

Venkat Srinivasan So, Steve, there are two kinds of batteries that I think all of us use in our daily lives. One of them is a primary battery. The ones that you don’t recharge. So a common one is something that you might be putting in your children’s toys or something like that.

The second kind, which I think is the one powering everything we think of, things like electric cars and grid storage, is rechargeable batteries. These are the ones we have to go back and charge again. So let’s talk a little bit more about rechargeable batteries; there are a number of types sitting somewhere in the world. You have the lead–acid battery that is sitting in your car today. It has been there for the last 30, 40 years, where it is used to start the car and to light the car up when the engine is not on. This is something that will continue to be in our cars for quite some time.

You’re also seeing lithium–ion batteries that are now powering the car itself: instead of having an internal combustion engine and gasoline, you’re seeing more pure electric vehicles coming out that have lithium–ion batteries. And then the third kind, which we sort of don’t see, but which we have in different places, is nickel–cadmium and nickel–metal-hydride batteries. These are slowly going away, but the Toyota Prius is a great example of a nickel–metal-hydride hybrid, and many people still drive Priuses (I have one) that have nickel–metal-hydride batteries in them. Those are some of the more common classes of materials. But there are others, like flow batteries, that people probably haven’t thought about or seen, which are being researched quite a bit; there are companies trying to install flow batteries for grid storage. Those are also rechargeable batteries, of a different type.

The most prevalent of these is lithium–ion; that’s the chemistry that has completely changed electric vehicle transportation. It’s changed the way we speak on our phones; the iPhone would not be possible if not for the lithium–ion battery. It’s the battery that has pretty much revolutionized all of transportation. And it’s the reason the Nobel Prize two years ago went to the discovery and, ultimately, the commercialization of the lithium–ion battery: because it has had such a wide impact.

Steven Cherry I gather that remarkably, we’ve designed all these different batteries and can power a cell phone for a full day and power a car from New York to Boston without fully understanding the chemistry involved. I’m going to offer a comparison and I’d like you to say whether it’s accurate or not.

We developed vaccines for smallpox beginning in 1798; we ended smallpox as a threat to humanity—all without understanding the actual mechanisms at the genetic level or even the cellular level by which the vaccine confers immunity. But the coronavirus vaccines we’re now deploying were developed in record time because we were able to study the virus and how it interacts with human organs at those deeper levels. And the comparison here is that with these new techniques developed at Argonne and elsewhere, we can finally understand battery chemistry at the most fundamental level.

Venkat Srinivasan That is absolutely correct. If you go back in time and ask yourself, what about batteries like the lead–acid batteries and the nickel–cadmium batteries: did we invent them in some systematic fashion? Well, I guess not, right?

Certainly, once the materials were discovered, a lot of innovation went into them, using what were state-of-the-art techniques at the time to make them better and better. But to a large extent, the story you just told about the smallpox vaccine is probably very similar to what happened with batteries, the older chemistries.

The world has changed now. If you look at the kinds of things we are doing today, as you said, there are a variety of techniques, both experimental and mathematical, meaning computer simulations have now come to our aid, and we’re able to gain a deeper understanding of how batteries behave and then use that to discover new materials: first, maybe, on a computer, but certainly in the lab at some point. So this is something that is also happening in the battery world. The kinds of innovations you are seeing now with COVID vaccines are the kinds of things we are seeing happen in the battery world in terms of discovering the next big breakthrough.

Steven Cherry So I gather the main technology you’re using now is ultrabright X-rays, and you’re using it to measure, for the first time, the fraction of the current carried by the ions, something known as the transport number. Let’s start with the X-rays.

Venkat Srinivasan We used to cycle the battery, and things would happen to it. We then had to open up the battery and see what had happened on the inside. And as you can imagine, when you open up a battery, you hope that nothing changes by the time you take it to your experimental technique of choice to look at what’s happening on the inside. But oftentimes things change, so what you have inside the battery during its operation may not be the same as what you’re probing when you open up the cell. So a trend that’s been going on for some time now is to say, well, maybe we should be thinking about in situ or operando methods, meaning, inside the battery’s environment during operation, trying to find more information about the cell.

Typically, all battery people do is send a current into the battery and then measure the potential, or vice versa; that’s the common thing that’s done. So what we are trying to do now is one more thing on top of that: Can we probe something on the inside without opening up the cell? X-rays come into play because they are extremely powerful light; they can go through the battery casing, go into the cell, and you can actually start seeing things inside the battery itself during operando operation, meaning you can pass current, keep the battery in the environment you want it to be in, send in the X-ray beam, and see what’s happening on the inside.

So this is a trend that we’ve been slowly exploring, going back a decade. A decade ago, we probably did not have the resolution to see things at a very minute scale; we were seeing features of maybe a few tens of microns in these batteries, and measuring things once every minute or so. But we’re slowly getting better and better: we’re making the spatial resolution tighter, meaning we can see smaller features, and we are improving the time resolution so that we can see things faster and faster. So that trend is something that is helping us, and will continue to help us, make batteries better.

Steven Cherry So if I could push my comparison a little further, we developed the COVID vaccines in record time and with stunning efficiency. I mean, 95 percent effective right out of the gate. Will this new ability to look inside the battery while it’s in operation, will this create new generations of better batteries in record time?

Venkat Srinivasan That will be the hope. And I do want to bring in two aspects that I think work complementarily with each other. One is the experimental techniques, X-ray and related techniques, though we should not forget that there are non-X-ray techniques also that give us information that can be crucially important. But along with that, there has been this revolution in computing that has really come to the forefront in the last five to 10 years. The computing revolution is that, basically, because computers are getting more and more powerful and computing resources are getting cheaper, we are able to calculate all sorts of things on computers. For example, we can calculate how much lithium a material can hold, without actually having to go into the lab. And we can do this in a high-throughput fashion: screen a variety of materials and start to see which of them look the most promising. Similarly, we can do the same thing to ask: Can we find ion conductors, say, solid-state battery materials, using the same techniques?

Now, once you have these kinds of materials in play, and you can evaluate them very, very fast using computers, you can start to think about how to combine them with these X-ray techniques. So you could imagine that you find a material on the computer, you try to synthesize it, and during the synthesis you watch to see: Are you making the material you predicted, or did something happen during synthesis such that you were not able to make that particular material?

And using this complementary way of looking at things, I think in the next five to 10 years you’re going to see an amazing acceleration of materials discovery between the computing and the X-ray sources and other experimental methods; you’re going to see this incredible acceleration in terms of finding new things. You know, the big trick in materials, and this is certainly true for battery materials, is that if you screen, say, a thousand materials, maybe one of them looks interesting. So the job here is to cycle through those thousand as quickly as possible to find the one nugget that can be exciting. And what we’re seeing now with computing and with these X-rays is the ability to cycle through many materials very quickly, so that we can start to pin down which one among the thousand looks the most promising, and then spend a lot more resources and time on it.
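
The screening loop Srinivasan describes is easy to picture in code. Here is a minimal sketch in Python, under stated assumptions: estimate_capacity is a hypothetical stand-in for a real first-principles or machine-learned property calculation, and the material names, capacities, and threshold are illustrative placeholders, not Argonne data or Argonne’s actual pipeline.

```python
# Hypothetical sketch of high-throughput computational screening.
# estimate_capacity() stands in for a real physics-based calculation
# (e.g., a DFT job) of how much lithium a candidate material can hold.

def estimate_capacity(material: str) -> float:
    """Placeholder for an expensive property calculation (mAh/g)."""
    # In a real pipeline this would dispatch a simulation, not a lookup.
    fake_results = {"LiFePO4": 170.0, "LiCoO2": 140.0, "Li2MnO3": 250.0}
    return fake_results.get(material, 0.0)

def screen(candidates: list[str], threshold: float) -> list[tuple[str, float]]:
    """Keep only candidates whose computed capacity clears the threshold."""
    results = [(m, estimate_capacity(m)) for m in candidates]
    promising = [(m, c) for m, c in results if c >= threshold]
    # Rank best-first so lab time goes to the most promising nugget.
    return sorted(promising, key=lambda mc: mc[1], reverse=True)

if __name__ == "__main__":
    shortlist = screen(["LiFePO4", "LiCoO2", "Li2MnO3"], threshold=150.0)
    for material, capacity in shortlist:
        print(f"{material}: {capacity:.0f} mAh/g -> send to synthesis")
```

The design point is simply that the expensive step (the property calculation) sits inside a cheap, repeatable loop, so thousands of candidates can be ranked computationally before anything is synthesized and checked at the beamline.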

Steven Cherry We’ve been relying on lithium–ion for quite a while. It was first developed in 1985 and first used commercially by Sony in 1991. These batteries are somewhat infamous for occasionally exploding in phones and laptops and living rooms and on airplanes and even in the airplanes themselves in the case of the Boeing 787. Do you think this research will lead to safer batteries?

Venkat Srinivasan Absolutely. The first thing I should clarify is that the lithium–ion of the 1990s is not the same lithium–ion we use today. There have been many generations of materials that have changed over time; they’ve gotten better. The energy density has actually gone up by a factor of three in those twenty-five years, and there’s a chance that it will go up by another factor of two in the next decade or so. The reality is that when we use the word lithium–ion, we’re actually talking about a variety of material classes that go into the anodes, the cathodes, and the electrolytes that make up lithium–ion batteries. So the first thing to notice is that these materials are changing continuously. What the new techniques bring is a way for us to push the boundaries of lithium–ion, meaning there is still a lot of room left for lithium–ion to get better, and these new techniques are allowing us to invent the next generation of cathode materials, anode materials, and electrolytes that could be used in the system to continue to push on things like energy density, fast-charge capability, and cycle life. These are the kinds of big problems we’re worried about, and these techniques are certainly going to allow us to get there.
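
A quick worked figure for that trajectory, using only the factors quoted above:

$$3^{1/25} - 1 \approx 4.5\%\ \text{per year (historical)}, \qquad 2^{1/10} - 1 \approx 7.2\%\ \text{per year (projected)}$$

so the projected decade-long doubling would actually be a faster pace than the historical factor-of-three improvement.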

There is another important thing to think about for lithium–ion, which is recyclability. It’s pretty clear that as the market for batteries goes up, we’re going to have a lot of batteries reaching end-of-life at some stage, and we do not want to throw them away. We want to take out the precious metals, the ones that we think are going to be useful for the next generation of batteries, and we want to make sure we dispose of the rest in a safe and efficient manner for the environment. So I think that is also an area of R&D that’s going to be enabled by these kinds of techniques.

The last thing I’d say is that we’re thinking hard about systems that go beyond lithium–ion, things like solid-state batteries, things like magnesium-based batteries … And those kinds of chemistries, we really feel like taking these modern techniques and putting them in play is going to accelerate the development time frame. So you mentioned 1985 and 1991; lithium–ion battery research started in the 1950s and 60s, and it’s taken as many decades before we could get to a stage where Sony could actually go and commercialize it. And we think we can accelerate the timeline pretty significantly for things like solid-state batteries or magnesium-based batteries because of all the modern techniques.

Steven Cherry Charging time is also a big area for potential improvement, especially in electric cars, which still only have a driving range that maybe gets to 400 kilometers, in practical terms. Will we be getting to the point where we can recharge in the time it takes to get a White Chocolate Gingerbread Frappuccino at Starbucks?

Venkat Srinivasan That’s the that’s the dream. So Argonne actually leads a project for the Department of Energy working with multiple other national labs on enabling 10-minute charging of batteries. I will say that in the last two or three years, there’s been tremendous progress in this area. Instead of a forty-five-minute charge or a one-hour charge that was considered to be a fast charge. We now feel like there is a possibility of getting under 30 minutes of charging. They still have to be proven out. They have to be implemented at large scale. But more and more as we learn using these similar techniques that I can see a little bit more about, that there is a lot of work happening at the Advanced Photon Source looking at fast charging of batteries, trying to understand the phenomenon that is stopping us from charging very fast. These same techniques are allowing us to think about how to solve the problem.

And I’ll take a bet in the next five years, we’ll start to look at 10-minute charging as something that is going to be possible. Three or four years ago, I would not have said that. But in the next five years, I think they are going to start saying, hey, you know, I think there are ways in which you can start to get to this kind of charging time. Certainly it’s a big challenge. It’s not just a challenge in the battery side, it’s a challenge in how are we going to get the electricity to reach the electric car? I mean, there’s going to be a problem there. There’s a lot of heat generation that happens in these systems. We’ve got to find a way to pull it out. So there’s a lot of challenges that we have to solve. But I think these techniques are slowly giving us answers to, why is it a problem to begin with? And allowing us to start to test various hypotheses to find ways to solve the problem.

Steven Cherry The last area where I think people are looking for dramatic improvement is weight and bulk. It’s important in our cell phones and it’s also important in electric cars.

Venkat Srinivasan Yeah, absolutely. And frankly, it's not just electric cars. At Argonne we're starting to think about light-duty vehicles, which is our passenger cars, but also heavy-duty vehicles. I mean, what happens when you start to think about trucking across the country carrying a heavy payload? We are also trying to think hard about aviation, marine, and rail. As you start to get to these kinds of applications, the energy density requirement goes up dramatically.

I’ll give you some numbers. If you look at today’s lithium–ion batteries at the pack level, the energy density is approximately 180 watt-hours per kilogram, give or take. Depending on the company, That could be a little bit higher or lower, but approximately 180 Wh/kg. If we look at a 737 going across the country or a significant distance carrying a number of passengers, the kinds of energy density you would need is upwards of 800 Wh/kg. So just to give you a sense for that, right, we said it’s 180 for today’s lithium–ion. We’re talking about four to five times the energy density of today’s lithium–ion before we can start to think about electric aviation. So energy density would gravimetric and volumetric. It’s going to be extremely important in the future. Much of the R&D that we are doing is trying to discover materials that allow us to increase energy density. The hope is that you will increase energy density. You will make the battery charge very fine. To get them to last very long, all simultaneously, that tends to be a big deal, but it is not all about compromising between these different competing metrics—cycle life, calendar life, cost, safety, performance, all of them tend to play against each other. But the big hope is that we are able to improve the energy density without compromising on these other metrics. That’s kind of the big focus of the R&D that’s going on worldwide, but certainly at Argonne.

Steven Cherry I gather there's also a new business model for conducting this research, a nonprofit organization that brings corporate, government, and academic research all under one aegis. Tell us about CalCharge.

Venkat Srinivasan Yeah, if you think about the battery world (and this is true for many of these hard technologies, the cleantech or greentech as people have come to call them), there is a lot of innovation that is needed, which means that in-lab R&D, with the kinds of techniques and models we're talking about, is crucially important. But it's also important for us to find a way to take them to market, meaning you have to be able to take that lab innovation, manufacture it, and get it in the hands of, say, a car company that's going to test it, ultimately qualify it, and then integrate it into the vehicle.

So this is a long road from lab to market. And the traditional way people have thought about this is to throw it across the fence, right? So we at Argonne National Lab invent something, we throw it across the fence to industry, and then you hope that industry takes it from there, runs with it, and solves the problems. That tends to be an extremely inefficient process. That's because oftentimes the point where a national lab stops is not enough for industry to run with it—there are multiple gaps that show up. Problems show up when you integrate these devices with a company's existing components; problems show up when you get up to manufacturing, when you start to get to a larger scale; and problems show up when you make a pack out of it. And oftentimes the solution to these problems goes back to the material. So the fundamental principle that I and many others have started thinking about is that you do not want to keep the R&D, the manufacturing, and the market separate. You have to find a way to connect them up.

And if you connect them up very closely, then the market starts to drive the R&D, and the R&D innovation starts to get the people in the manufacturing world excited. And there is this close connection among all three of these things that makes everything go faster and faster. We've seen this in other industries, and it certainly will be true in the battery world. So we've been trying very, very hard to enable these kinds of what I would call public-private partnerships, ways in which we, the public, meaning the national lab system, can start to interact with private companies and find ways to move this along. This is a concept that a few others and I have been thinking about for quite some time. Before I moved to Argonne, I was at Lawrence Berkeley. And the Bay Area has a very rich ecosystem of battery companies, especially startup companies.

So I created this entity called CalCharge, which was a way to connect up the local ecosystem in the San Francisco Bay Area to the national labs in the area—Lawrence Berkeley, SLAC, and Sandia National Labs in Livermore. So those are the three that were connected. And the idea behind this is: How do we take the national lab facilities, the people, and the amazing brains that they have and use them to start to solve some of the problems that industry is facing? How do we take the IP that is sitting in the labs and move it to market using these startups, so that we can continuously work with each other, make sure we don't have these valleys of death, as we've come to call them, when we move from lab to market, and try to accelerate that? I've been doing very similar things at Argonne in the last four years, thinking hard about how you do this, but on a national scale.

So we’ve been working closely with the Department of Energy, working with various entities both in the Chicagoland area, but also in the wider U.S. community, to start to think about enabling these kinds of ecosystems where national labs like ours and others across the country—there are 70 national labs, Department of Energy national labs—maybe a dozen of them have expertise that can be used for the free world. How do we connect them up? And the local universities that are the different parts of the country with amazing expertise, how do you connect them up to these startups, the big companies, the manufacturers, the car companies that are coming in, but also the material companies, companies that are providing lithium for a supply chain perspective? So my dream is that we would have this big ecosystem of everybody talking to each other, finding ways to leverage each other and ultimately making this technology something that can reach the market as quickly as possible.

Steven Cherry And right now, who is waiting on whom? Is there enough new research that it's up to the corporations to do something with it? Or are they looking for specific improvements that they need to wait for you to make?

Venkat Srinivasan All of the above. There is probably quite a bit of R&D going on that industry is not aware of, and that tends to be a big problem—there's a visibility problem when it comes to the kinds of things that are going on in the national labs and the academic world. And there are things where we are not aware of the problems that industry is facing. I think these kinds of disconnects, where a lack of awareness keeps things from happening fast, are what we need to solve. The more connections we have, the more interactions we have, the more conversations we have with each other, the more the exposure increases. And when the exposure increases, we have a better chance of solving these kinds of problems, where a lack of information stops us from getting the kinds of innovation that we could get.

Steven Cherry And at your end, at the research end, I gather one immediate improvement you’re looking to make is the brightness of the X-rays. Is there anything else that we should look forward to?

Venkat Srinivasan Yeah, there are a couple of things that I think are very important. The first one is the brightness of the X-rays. There's an upgrade coming for the Advanced Photon Source that's going to change the time resolution at which we can see these batteries. So, for example, when you're charging batteries very fast, you can get data very quickly. That's going to be super important. Connected to that, you can also start to think about seeing features that are even smaller than the kinds of features we see today. So that's the first big thing.

The second thing is artificial intelligence and machine learning, which is permeating all forms of research, including battery research; we use AI and ML for all sorts of things. But one thing we've been thinking about is how to connect AI and ML to the kinds of X-ray techniques we've been using. So, for example, instead of looking all over the battery to see if there is a problem, can we use signatures of where the problems could be occurring, so that these machine learning tools can quickly go in and identify the spot where things could be going wrong, and you can spend all your time and energy taking data at that particular spot? That way, again, we're being very efficient with the time that we have, and we make sure we're catching the problems we have to catch. So I think the next big thing is this whole artificial-intelligence and machine-learning capability that is going to be integral for us in the battery discovery world.
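
As a toy illustration of that triage idea (a sketch under assumed inputs, not Argonne's actual pipeline), one could score coarse tiles of a scan and spend detector time only on the most anomalous regions:

import numpy as np

# Toy sketch of ML-guided data collection: score coarse tiles of a scan,
# then spend detector time only on the most anomalous regions.
rng = np.random.default_rng(0)
scan = rng.normal(size=(512, 512))   # stand-in for a coarse X-ray map of a cell

tile = 64
scores = {}
for i in range(0, scan.shape[0], tile):
    for j in range(0, scan.shape[1], tile):
        patch = scan[i:i + tile, j:j + tile]
        # A trained model would score the patch here; plain variance
        # serves as a crude stand-in for an anomaly score.
        scores[(i, j)] = patch.var()

# Rank the tiles and flag the top three for slow, high-resolution scans.
hotspots = sorted(scores, key=scores.get, reverse=True)[:3]
print(hotspots)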

The last thing, which is an emerging trend, is what are called automated labs or self-driving labs. The idea is that instead of a human being synthesizing a material starting in the morning and finishing in the evening, then characterizing it the next day, finding out what happened, and then going back and trying the next material, could we start to do this using robotics? This has been a trend for a while now, but where things are heading is that more and more, robots can do the things a human being could do. So you could imagine robots synthesizing electrolyte molecules, mixing them up, testing for conductivity, and seeing if the conductivity is higher than the one you had before. If it's not, they go back and iterate, finding a new molecule based on the previous results, so that you can efficiently search for an electrolyte with higher conductivity than your baseline. Robots work 24/7, so it is very useful for us to think about these ways of innovating. Robots also generate a lot of data, which we now know how to handle because of all the machine learning tools we've been developing in the last three, four, five years. So all of a sudden, the intersection of machine learning, the ability to analyze a lot of data, and robotics is starting to come into play. And I think that's going to open up new ways to discover materials in a rapid fashion.
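
Schematically, a self-driving-lab loop of the kind Srinivasan describes looks something like the following minimal sketch; propose_molecule, robot_synthesize, and measure_conductivity are hypothetical stand-ins for the planner, the robot, and the instrument:

import random

# Minimal closed-loop "self-driving lab" sketch: propose, synthesize,
# measure, iterate. All three steps below are placeholder stand-ins.
def propose_molecule(history):
    # Stand-in for an ML planner that would use previous results.
    return f"electrolyte-{random.randrange(10_000)}"

def robot_synthesize(candidate):
    # Stand-in for the robotic synthesis step.
    return candidate

def measure_conductivity(sample):
    # Stand-in for the automated conductivity measurement.
    return random.random()

baseline = 0.999   # conductivity of the current best electrolyte (arbitrary units)
history = []
for _ in range(10_000):                  # robots can iterate around the clock
    candidate = propose_molecule(history)
    sample = robot_synthesize(candidate)
    result = measure_conductivity(sample)
    history.append((candidate, result))
    if result > baseline:                # stop once we beat the baseline
        break

print(max(history, key=lambda pair: pair[1]))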

Steven Cherry Well, Venkat, if you will forgive a rather obvious pun, the future of battery technology seems bright. And I wish you and your colleagues at Argonne and CalCharge every success. Thank you for your role in this research and for being here today.

Venkat Srinivasan Thank you so much. I appreciate the time you've taken to ask me these questions.

We’ve been speaking with Venkat Srivinasan of Argonne National Lab about a newfound ability to study batteries at the molecular level and about improvements that might result from it.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers, a professional organization dedicated to advancing technology for the benefit of humanity.

This interview was recorded January 6, 2021 using Adobe Audition and edited in Audacity. Our theme music is by Chad Crouch.

You can subscribe to Radio Spectrum on the Spectrum website, where you can also sign up for alerts, or on Spotify, Apple, Google—wherever you get your podcasts. We welcome your feedback on the web or in social media.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

See Also:

Battery of tests: Scientists figure out how to track what happens inside batteries

Concentration and velocity profiles in a polymeric lithium-ion battery electrolyte

Data-Free Medicine

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/biomedical/diagnostics/datafree-medicine

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

The saddest fact about the coronavirus pandemic is certainly the deaths it has already caused and the many more deaths to come before the world gets the virus under at least as much control as, say, chicken pox or an ordinary flu.

The second-saddest fact about the pandemic is the economic and educational havoc it has wrought.

Perhaps the third-saddest fact is the unfortunate lack of agreement about the best strategies for living with the virus, which, at least in the U.S., is responsible for many of those deaths, and, arguably much of the havoc as well. It has roiled families as well as the presidential election, by politicizing the wearing of masks, the limits on gatherings, the openings and closings of restaurants and schools.

But yet another sad fact is that, as was said thousands of years ago, “there is nothing new under the sun,” and this too is nothing new; there is a shocking and unfortunate lack of widespread agreement about the best answers when it comes to many medical questions, even among doctors, because there is a shocking and unfortunate lack of evidence—and even respect for evidence—in the medical arena. That’s the contention of the authors of a rather prescient 2017 book, Unhealthy Politics: The Battle Over Evidence-Based Medicine, subtitled, how partisanship, polarization, and medical authority stand in the way of evidence-based medicine.

And so I’ve asked one of those authors to lay out the case that medicine isn’t nearly as evidence-based as we think it is and as it should be, and tell us what role that has played in the severity of the pandemic—and what we might do about it, both in the near pandemic future, and to improve American medicine overall.

Eric Patashnik is a professor of public policy and political science at Brown University and the director of its Master of Public Affairs Program. He joins us via Zoom.

Steven Cherry Eric, welcome to the podcast.

Eric Patashnik Thank you so much for having me. It’s a pleasure to be here.

Steven Cherry Eric, the book starts with a rather surprising fact about how little we know about an operation that’s performed millions of times in the U.S., which you call a sham procedure. What is the sham procedure and how is it possible that a procedure that’s known to be a sham can be performed millions of times?

Eric Patashnik Yeah, so the original motivation for my book, which is coauthored with Alan Gerber at Yale and Conor Dowling at the University of Mississippi, was a remarkable study published in the New England Journal of Medicine in 2002. And what the study found was that a very common surgical procedure, arthroscopy for osteoarthritis of the knee—an operation that many older folks who have arthritis get after they try more conservative treatments such as drugs or physical therapy—worked no better at relieving joint pain or improving function than a sham operation: in other words, a placebo intervention in which the surgeon merely pretended to operate.

And this was a groundbreaking study. It received a tremendous amount of media coverage. The three of us are social scientists, not physicians, but we were startled by the study. And we began asking very basic questions. How is it possible that a widely used surgery wouldn't work any better than a fake operation? And so what we began to do was investigate the medical literature. And what we found was that this surgery had diffused into widespread practice in advance of any rigorous evidence that it really worked. Essentially, a number of surgeons began performing this procedure, and they found that their patients said they felt better, but there really weren't rigorous data.

And that’s not so surprising because it turns out medical procedures, such as the kinds of things that your internist might do for you when you’re ill, or even surgeries, often become widely used without rigorous trials. We have a Food and Drug Administration that looks at the efficacy of pharmaceuticals, but there’s no FDA for surgery, for example. And so this turns out to be actually a very, very wide problem.

And what we learned as we started digging into the case is that this was really illustrative of a much broader problem in health care. Indeed, some experts believe that less than half of the medical care that Americans receive is based on adequate scientific evidence about what works best. Many treatment options have never been compared head-to-head with alternatives. So, for example, let’s say you have a bad back and you know there are different ways to treat it. You could try a drug, you could try physical therapy, you could get spinal fusion surgery. What patients would like to know is which of these treatments is really going to be best for me. And often the answer is we just don’t know. The studies haven’t been done. We just lack that information.

Steven Cherry You say that people in one part of the country get four times as many hip replacements as those in another and not for any sound medical reason. So why are there these regional variations in treatments?

Eric Patashnik Yeah, so basically there's no centralized process in the United States to investigate alternative treatments for common medical conditions and determine what works best. And scholars at the Dartmouth Institute have found that there are remarkable variations in practice. In other words, patients with the same medical condition might receive very, very different treatment options, and these variances in how patients with the same condition are treated are not driven by, for example, regional differences in disease, or patient preferences, or even clinical evidence.

It might just be, for example, that physicians in Cleveland use a particular drug, whereas those in Boston use a different one. And we don't have a system to figure out expeditiously which one is best for patients. As a result, it could be that some patients in one part of the country are receiving inferior treatments, and we might have no way of discovering that in short order. That, I think, is a major problem that a lot of patients don't realize. Even if you see a doctor that you trust, even if you have a very good relationship with your physician, the treatment that you receive might be quite different from what other patients in other parts of the country with the very same condition receive. There was, I think, a recognition that this was a big problem. As part of the Affordable Care Act, an agency was created, the Patient-Centered Outcomes Research Institute [PCORI], whose mission is to fund and disseminate research findings on the comparative effectiveness of different treatments, diagnostic tests, and other options. And I think that's a very good thing. But we've only made a limited amount of progress in generating the information we need to answer those basic questions.

Steven Cherry So to what extent is the problem that true evidence, especially via large randomized double-blind controlled trials, is hard to come by? It's expensive to do. It's time-consuming. It's rarely done. And if and when it's done, it's hardly ever replicated.

Eric Patashnik It’s true. There are there are major barriers and these studies can be expensive. But the information that comes from these comparative effectiveness studies is really a public good. It benefits all patients and payers and all stakeholders in the political system. So that’s one of the justifications for having a government role in subsidizing the production of these studies, because all of us would be better off if we had reliable information about what treatments work best and for whom and under what conditions. Individual physicians really don’t have a strong incentive to fund these studies or ability to fund the studies themselves. In fact, in the knee surgery case that I mentioned earlier, if it were not for the entrepreneurial initiative of a few physicians out of Texas, we might still not have those groundbreaking studies. It was really just because a few leaders decided that they really wanted to answer the question of whether this knee surgery worked. There was no system in place to ensure that those research holes are filled.

Steven Cherry Under our system, trials and other tests of efficacy are often in the hands of the large drug and medical device manufacturers. To the layperson, that sounds like putting the wolf in charge of determining the best way to build the chicken coop.

Eric Patashnik We do have a Food and Drug Administration that does a lot of tremendous work, and they're very valuable. And that's also a benefit for the pharmaceutical industry; they, too, have an incentive to make sure that products sold to the public are seen as effective and trustworthy. So the industry itself does want a certain level of regulatory scrutiny. But the current FDA process, while extremely valuable, has some limits. It's often the case that drug companies are not required to study whether a new drug works better than, say, a cheaper alternative way of treating the same disease. If there are medical and nonmedical options for treating a medical condition, say, for example, a surgery or a pharmaceutical agent, we have no process in place to ensure that those two options are forced to compete head-to-head. So the FDA process, I think, is extremely valuable, and it does ensure that drugs work better than a placebo. But we often don't get what we really want to know, what patients want to know, which is: Which drug is best for me? And should I be taking any drug at all?

We’ve also seen, I think over time, in part under industry pressure, a bit of a lowering of the evidentiary bar and even how we evaluate drugs. So oftentimes what the FDA is looking at is not answering the question that patients most want to know, which is, will this drug help me live longer? Will it help me improve my quality of life? Oftentimes, studies are looking at surrogate endpoints. For example, if I take this drug, will it lower my blood pressure or will it improve my cholesterol? And those surrogate outcomes, we hope, are correlated with the health outcomes that we really care about. But oftentimes the statistical causal relationship between them is much weaker than we would like it to be. And so sometimes we even have studies that are well done. But the questions that they’re answering are really not the ones that patients care most about.

Steven Cherry You would think that insurance companies would refuse to pay for a sham procedure, or for a procedure that's done four times as often in one region as in another.

Eric Patashnik The insurance companies are in a difficult position, because, of course, they would like to figure out what works best and conserve resources and not allocate them to low-value medical services. But in the United States, it's very difficult for private insurers not to cover a medical treatment that is covered by the Medicare program. And the Medicare program really only looks at whether interventions are reasonable, as judged essentially by physicians. The Medicare agency doesn't really have the authority to examine whether a particular intervention is cost-effective or good value for money. And it's also the case that individual insurers don't have the resources to pay for the studies themselves, because, after all, if one insurer were to spend hundreds of millions of dollars to answer the question of whether treatment A works better than treatment B, well, once that study is done, the information would be available to other insurance companies, their competitors. And so the first insurance company doesn't really have an incentive to make that initial investment.

So, yes, payers would like this information, but individual insurance companies really don't have a strong economic incentive to fund the studies. And it's also difficult for them to deny coverage for treatments that the Medicare agency covers.

The entire Medicare program, which was established in 1965, was of course very controversial at the start. Many physicians and the AMA [American Medical Association] originally opposed Medicare. They were very concerned that government would be intruding on their clinical autonomy and second-guessing them. In exchange for their buy-in, the government essentially said, look, we're going to pay for health care for senior citizens, but we will defer to your professional judgment about the best way to treat patients; that's not really government's role. And, of course, Medicare has changed dramatically over the last decades, and I don't want to say that Medicare doesn't scrutinize treatments at all or make coverage decisions. It does. But that basic model of essentially deferring decisions about coverage to physicians has remained the same.

And so we rely on physicians to exercise their best judgment to determine appropriate care. In many cases, that works very well. But if we have, for example, a breakdown of professional authority, if physicians in a particular practice area are widely using a surgical intervention that doesn't work, we don't really have a good way of fixing that quickly. And what we saw in that knee surgery case I mentioned earlier was that after that landmark New England Journal of Medicine study came out—and this is what was so disconcerting—a lot of the orthopedic surgeons who perform that surgery did not embrace it at all. They reacted extremely defensively. They attacked the study authors. They made all sorts of arguments about what was wrong with the study.

And, of course, any study could have some flaws, certainly any study should be replicated, and we might not want to change practices based on a single study. But the overall behavior of the orthopedic community was essentially to push away evidence, as opposed to embracing it as a way of learning how best to treat their patients. In our research, we found that, unfortunately, that pattern has repeated itself over and over again.

There was another recent study, just a couple of years ago, called ORBITA. Millions of heart patients have clogged arteries and receive a stent, inserted to reduce their chest pain. This is another example of a very widely used medical procedure. It's expensive. It carries risks. It's become the standard of care. And yet it diffused widely into practice on the basis of really little hard data. The ORBITA study was, like the knee surgery study, a sham-controlled or placebo-controlled trial, in which some patients were randomized to receive a stent and others received no intervention at all, beyond everyone in the trial receiving basic pharmaceutical drugs for cardiac disease. And what it found was that the patients who received a stent experienced no more improvement in chest pain or exercise tolerance—that was the other endpoint of the study—compared to patients who received the placebo procedure. And interventional cardiologists really saw that study as an attack on their specialty. They lashed out at the trial and at the investigators. It was another similar reaction of self-protective and defensive behavior from physicians who were not embracing the best available medical evidence.

Steven Cherry The book argues that the widespread disregard for scientific evidence or the lack of it is systematic within the medical community but it’s also political—that there’s little incentive for politicians to step in and demand medicine be more evidence-based. Is this physician pushback one of the reasons?

Eric Patashnik Yeah, I mean, I think what we’re talking about then is, essentially, as a society, we have a social contract. We delegate our authority to physicians and we give them the privilege of licensure and they can control who is a physician. They earn high salaries. They quite rightly are treated with great respect. And I should say, we think physicians are amazing people and we see our work as trying to help physicians for sure. But the way that social contract works is we rely on the medical community to self-regulate, to figure out if physicians are not practicing appropriate care, and to learn what would be best for patients.

And if that social contract is not working well, what we find in the book is that it's extremely difficult for government to do anything about it. Well, why is that? The main reason is that when it comes to medical care, the public really trusts physicians. We did a bunch of public opinion surveys to try to understand how the public thinks about health care. And the bottom line of the scores of surveys that we did was that when it comes to medicine, really the only actors the American public trusts are physicians. They don't trust pharmaceutical firms. They don't trust insurance companies, and they certainly don't trust government.

Steven Cherry I mean, part of that trust is grounded in the fact that medicine is pretty arcane, complex, technical … People go to school for years and years and years and train in various ways. I mean, if my life depended on my having an opinion on which version of string theory was more likely to be true, I’d study up on string theory, but I wouldn’t get very far. And I think neither would most people.

Eric Patashnik Absolutely. It's completely understandable and rational for ordinary Americans to look to doctors for advice about what works best. I certainly do in my own life, and I would steer people away from just Googling medical conditions at random; you can find a lot of misleading information on the Web. But what we really do want, at the end of the day, is the best available evidence about what treatments work best, a systematic evidence base, and then ideally physicians would be using that kind of data while taking into account the specific medical conditions and backgrounds and preferences of individual patients. So we certainly think there's a major role for clinical judgment in medicine. Medicine is always going to be both an art and a science. What we argue in the book is that the ratio of art to science has gotten out of whack: we're relying too little on rigorous evidence and too much on idiosyncratic decision making.

Steven Cherry So I take it your solution would involve an FDA for surgery and other medical procedures.

Eric Patashnik Well, we certainly think that there should be much more rigorous scrutiny of procedures.

That’s really a big part of medicine. We would like to see PCORI, the Patient Centered Outcomes Research Institute, begin taking on some of those harder questions in medicine. And then we would also like to see, I think, the Medicare agency begin looking more rigorously at the added value of particular treatments.

And if there’s a treatment that is extraordinarily expensive and there’s no evidence that it works better than a cheaper alternative, we don’t necessarily think that patients shouldn’t get it. They should certainly be free to choose it. But perhaps, for example, the Medicare agency should only pay in reimbursement up to the amount of the cheaper, equally effective treatment. So we really need, I think, more tailored kinds of coverage and reimbursement policies, as well as more rigorous studies. It’s been difficult, however, to move the needle on these kinds of policy solutions. The agency that I mentioned, PCORI, was very, very cautious in its first decade. It was just reauthorized, which I think is terrific, but it was very controversial. When it was first created back in 2010, it was seen as a rationing agency. It got caught up in charges of death panels, along with a lot of controversy, as part of the ACA.

Even in the current crisis, we've lacked the kind of rigorous information we need to figure out, for example, which COVID therapeutics would be most effective. There's been a paucity of randomized controlled trials on those kinds of questions. Of course, we've been in the middle of a pandemic, and it's quite understandable that physicians have been trying to do the best they can with available therapies. But other countries, like the UK, have done better at getting rigorous studies going during the pandemic, so they can get quick answers to these life-and-death questions.

Steven Cherry So fundamentally, these are data questions and data scientists are reinventing that sort of thing. Do you think that with electronic medical records and deep learning studies of procedures and outcomes and so forth, that even without a new federal agency, we could get better data just from the data that we have and don’t use properly?

Eric Patashnik I think that’s a fantastic question. My colleague Alan Gerber and I, the coauthor of the book, as I mentioned, there was that landmark Orbita study about the efficacy of heart stents. And we had a conference a couple of years ago at Yale where we brought the lead author of the study, along with other leading cardiologists and social scientists together to look at what had happened in that case and why we are so often struggling with questions of data. And one of the things that I think was most exciting about the conversation is there really is an opportunity, I believe, to connect medical researchers and data scientists and other kinds of scholars to figure out how we can tease out rigorous causal inferences about what treatments work best from observational data.

Now, there’s a long history in medicine of reaching conclusions on the basis of observational data and non-randomized controlled trials, where it turned out that some of our conclusions were wrong.  For example, beliefs that hormone replacement therapy would reduce heart disease, or that aggressive treatments for breast cancer would be better than standard treatments of breast cancer. And it turned out that in both of those cases, that was incorrect after randomized controlled trials were done.

So for very good reason, I think, some of the leading evidence-based-medicine physicians have been skeptical about learning through methods other than RCTs [randomized controlled trials]. But in recent years there really have been some breakthroughs in data science and other kinds of techniques that do allow us to learn which kinds of treatments work best. I'm hopeful that that kind of partnership in the coming decades will accelerate our ability to learn. We're certainly going to need more RCTs; we're doing too few of them. But there are other methods of learning.

The COVID pandemic is going to provide a remarkable opportunity to learn about the efficacy of a wide range of treatments, because we had a kind of national experiment, particularly during the early part of the pandemic, in March and April. Basically, Americans and people around the world just stopped going to the doctor for all sorts of conditions. Some of that was problematic: there were people with serious medical conditions who were fearful of going to the emergency room. But there were also people who might have had knee pain and otherwise would have gotten the orthopedic surgery, people who had a sore back and put off a procedure, or people who didn't get a colonoscopy or some other kind of screening. And what we don't know is what the health outcomes were of this dramatic change in people's consumption of medical services during this period. We have a remarkable opportunity now to figure out which of those medical interventions really were necessary, meaning the fact that people didn't get them, or got them in significantly smaller quantities, was actually bad for people's health, and which of those services, upon reflection, turned out not to be so necessary, such that even though people skipped them, they didn't have any negative health outcomes at all, or perhaps even escaped a cascade of further unnecessary treatments or overutilization. That's going to be crucial to learn, so we can make sure that people do get the vital treatments. Certainly, nobody wishes that this had happened. But now that it did happen, and we had this once-in-a-century shift in people's consumption of health care services, we really need to do our best to learn from this experience and figure out what the consequences were.

Steven Cherry Well, Eric, I started this podcast with a Bible verse, and I’m going to switch from the early Christians to the ancient Greeks. You might feel like you’re pushing a rock up a hill like Sisyphus, but like the similarly punished Prometheus, it was in the cause of empathy and imparting knowledge. So I thank you for co-authoring the book and thanks for joining us today.

Eric Patashnik Thank you so much. It’s a real pleasure to be here.

Steven Cherry We’ve been speaking with Brown University professor Eric Patashnik about the thesis of his co-authored book, Unhealthy Politics: The Battle Over Evidence-Based Medicine: How partisanship, polarization, and medical authority stand in the way of evidence-based medicine.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers, a professional organization dedicated to advancing technology for the benefit of humanity.

This interview was recorded December 16, 2020 via Zoom using Adobe Audition. Our theme music is by Chad Crouch.

You can subscribe to Radio Spectrum on the Spectrum website, spectrum.ieee.org, or on Spotify, Apple, Google—wherever you get your podcasts. We welcome your feedback on the web or in social media.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

See also

Proof and Consequences

A new book explores the deceptive power of numbers
A conversation with Charles Seife (06 October 2010)

5G Cellular Spectrum Auction—Can’t Tell the Players Without a Scorecard

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/telecom/wireless/5g-cellular-spectrum-auctioncant-tell-the-players-without-a-scorecard

Steven Cherry Today’s episode may confuse people and search engines alike—we’ve titled this podcast series Radio Spectrum, but our topic for today is the radio spectrum. Could you here the capital letters the first time I said “radio spectrum” and all lower case letters the second time? I thought not.

Anyway, let’s dive in. Here’s a quote and a 10-point quiz question—who said this?

“The most pressing communications problem at this particular time, however, is the scarcity of radio frequencies in relation to the steadily growing demand.”

Everyone who guessed that it was a U.S. President give yourselves a point. If you guessed it was President Harry Truman, in 1950, give yourself the other 9 points.

At the time, he was talking mainly about commercial radio, the nascent technology of television, and ham radios. We didn't even yet have Telstar, the satellite link in the land-based telephone monopoly; that would come a decade later. A decade after that, Motorola researcher Martin Cooper would make the first mobile telephone call.

By the late ’70s, the Federal Communications Commission was set to allocate spectrum specifically for these new mobile devices. But its first cellular spectrum allocation was a messy affair. The U.S. was divided up into 120 cellular markets, with two licenses each, and in some cases, hundreds of bidders. By 1984, the FCC had switched over to a lottery system. Unsurprisingly, people gamed the system. The barriers to entering the lottery were low, and many of the 37,000 applicants—yes, 37,000 applications—simply wanted to flip the spectrum for a profit if they won.

The FCC would soon move to an auction system. Overnight, the barrier to entry went from very low to very high. One observer noted that these auctions were not “for the weak of heart or those with shallow pockets.”

Cellular adoption grew at a pace no one could anticipate. In 1990 there were 12 million mobile subscriptions worldwide and no data services. Twenty-five years later, there were more than 7 billion subscriber accounts sending and receiving about 50 exabytes per day and accounting for something like four percent of global GDP.
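
A quick check of the arithmetic behind those numbers (my calculation, not from the original): going from 12 million subscriptions to 7 billion in 25 years implies compound growth of roughly 29 percent a year.

# Compound annual growth rate implied by the subscription figures above.
subs_1990, subs_2015, years = 12e6, 7e9, 25
cagr = (subs_2015 / subs_1990) ** (1 / years) - 1
print(f"{cagr:.1%}")  # about 29% per year, compounded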

Historically, cellular has occupied a chunk of the radio spectrum that had television transmissions on the one side and satellite use on the other. It should come as no surprise that to meet all that demand, our cellular systems have been muscling out their neighbors for some time.

The FCC is on the verge of yet another auction, to start on December 8. Some observers think this will be the last great auction, at least for a while. It's for the lower portion of what's called the C-band, which stretches from 3.7 to 4.2 gigahertz.

Here to sort out the who, what, when, why, and a bit of the how of this auction is Mark Gibson, Senior Director for Business Development and Spectrum Policy at CommScope, a venerable North Carolina-based manufacturer of cellular and other telecommunications equipment—the parent company of Arris, which might be a more recognizable name to our listeners. And most importantly, he’s an EE and an IEEE member.

Mark, welcome to the podcast.

Mark Gibson Thank you, Steven. I love that intro. I learned something. I got one point, but I didn’t get the other nine. So …  but thank you, that was very good.

Steven Cherry Thank you. Mark, maybe we could start with why this band, and this particular 280-MHz portion of it, is so important that one observer called it the Louisiana Purchase of cellular.

Mark Gibson Well, that’s a great question. Probably the best reason is how much spectrum is being offered. I believe this is the largest contiguous spectrum in the mid-bands, not considering the millimeter wave, which is the spectrum, about 28 gigahertz. Those were offered in large GHz chunks, but they have propagation issues. So this is the mid-band and of course, meaning you know, mid-band spectrum has superior propagation characteristics over millimeter wave. So you have 280 MHz of spectrum that’s in basically the sweet spot of the range—spectrum that Harry Truman didn’t think about when he talked about the dearth. And so that’s the primary reason. A large amount of spectrum in the middle—the upper-middle of the mid-band range—is one of the reasons this is of so much interest.

Steven Cherry I’m going to ask you a question about 5G that’s maybe a bit cynical. 5G makes our phones more complex, it uses a ton of battery life, it will often increase latency, something we had a recent podcast episode about, it’s fabulously expensive for providers to roll out, and therefore it will probably increase costs for customers, or at least prevent our already-too-high costs from decreasing. When, if ever, will it be a greater plus than a minus?

Mark Gibson That’s a great question, because when you consider the C-band—the Spectrum at 3700 to 3980—that … Let me back up a minute. One of the reasons the issues exist for 5G has to do with the fact that for all intents and purposes, 5G has been deployed in the millimeter wave bands. If you, for example, read a lot of the consumer press, the Wall Street Journal, and others, when they talk about 5G, they talk about it in the context of millimeter-wave.

They talk about small cells, indoor capability, and the like. When you think about 5G in the context of this spectrum, the C-band, a lot of the concerns you're talking about, at least in terms of some of the complexity with respect to propagation, begin to diminish. Now, the inherent complexities with 5G still exist; some of those are overcome with a lot of spectrum, especially since the spectrum will be TDD, time-division duplex. But I'd say that, for the most part, the fact that 5G will be deployed in this mid-band range will give it superior characteristics over millimeter wave, at least in that you've got a lot of spectrum and you have it in the lower mid-band, so signals can travel further.

The other inherent concerns about 5G, I guess you can say, are the same things we heard about 4G versus 3G. So I think it's just a question of getting used to the Gs as they get bigger. The one main thing that will help this out is this band of spectrum, as well as the band right below it, the CBRS band, which is supposed to be a 5G band as well.

Steven Cherry There was already an auction this summer of the band just above this at 3.5 GHz. The process of December’s auction started back in 2018. The auction itself will be held at the end of 2020. What took so long?

Mark Gibson Well, part of the problem is that the band is occupied. Right now I think the number is around 16,500 Earth stations on the ground, and then twenty-some satellites in the air. So it took a long time to figure out how to accommodate them. They're incumbents, and so there's the age-old question when new spectrum is made available: What do you do with the incumbents? Do you share, do you relocate, or do you do both and call it transitional sharing? There was a lot of discussion in that regard around the owners of the spectrum.

The interesting thing here is that you have an Earth station on the ground that transmits to the satellite. That could be anybody; a lot of these are broadcasters. NPR has a network; so do Disney, ESPN, the regular cable broadcasters, and whatnot. Then you have satellite owners, who are different, and then you have Earth station owners, who are different. So in this ecosystem there are three different classes of owners, and trying to sort out their rights to the spectrum was complicated. And then, how do you deal with them with regard to sharing or relocation? In the end, the commission came down on the side of relocation, and so now the complexity is around "How do you relocate?" And "relocate" is sort of a generic term; it means to move them in spectrum, basically to repack them. But how do you repack 16,000 Earth stations into the band above, which would be the 4.0 to 4.2 gigahertz band? So that's what took so long: sorting out who has the rights to the spectrum, how to make it equitable, how to repack, and then how to craft an auction. Those are some of the main reasons it's taken so long.

Steven Cherry Auction rules have evolved over time, but there’s still a little bit of the problem of bidders bidding with the intention of flipping the spectrum instead of using it themselves. Can you say a word about the 2016 auction?

Mark Gibson Well, that’s a good question. With all the spectrum auctions, the commission the SEC manages, what they try to do is establish what’s called substantial service requirements to ensure that those that are going to buy the spectrum put it to some use. But that doesn’t eliminate totally speculation. And so with the TV band auction, there was a fair amount of speculation, mostly because that 600 MHz spectrum is really very valuable. That auction went for $39 billion, if I’m not mistaken. But there was another auction that was complicated because of the situation of TV stations having to be moved. But we saw this in the AWS-3, which is the 1.70 to 2.1 gig bands. A lot of speculation went on and we saw this in great detail, a great sense in the millimeter-wave. In fact, there had been a lot of speculation in the millimeter-wave bands even before the auctions occurred, with companies having acquired the spectrum and hanging onto it, hoping that they could sell it. In fact, several of them did and made a lot of money.

But the way the commission tries to address that, primarily, is through substantial-service requirements over the period of the license term, which is typically 10 years. And what that says, loosely, is that in order for a licensee to keep their spectrum, they have to put it to some substantial service, which is usually defined as covering a percentage of the rural population and a percentage of the urban population over a period of time. The details have changed somewhat, but mostly this is what they try to do.

Steven Cherry Yeah, the 2016 auction was gamed by hedge funds and others who figured out that if they could hold particular slices of the spectrum, that is, particular TV stations, they could keep another buyer from holding a contiguous span of spectrum. I should point out that the designers of the 2016 auction won a Nobel Prize in economics for their innovations. But —.

Mark Gibson Yes.

Steven Cherry — I'm just curious: the hedge funds sold these slices at a premium so that there wouldn't be a gap in a range of spectrum. What would have happened if there had been those little gaps?

Mark Gibson Well, if you have gaps in spectrum, it's not terrible. It's just that you don't then have robust use of the spectrum. What our regulator, and most regulators, endeavor to do in any spectrum auction is to make the allocations, or the assignments if you will, contiguous. In other words, there are no gaps. If there are gaps, a couple of things happen. One is that if you have a gap in the spectrum for whatever reason, and people are using that gap, then you have adjacent-channel concerns. And these are concerns that could give rise to adjacent-channel interference from dissimilar services.

With the way that spectrum auction was run, they were able to separate, or bifurcate if you will, the auction so that there was a low portion and a high portion, and they sold them both at once. Base station frequencies were in the upper portion and handset frequencies were in the lower portion of that band. And so if you have gaps, then you have problems with who's in those gaps, and that gives rise to sharing issues. The other thing is that if there are gaps, the value of the spectrum tends to be diminished because it's not contiguous. We've seen that in some instances in some of the other, somewhat arcane bands, like AWS-4 [Advanced Wireless Service-4], some of the bands that are owned by Dish and whatnot.

Steven Cherry There’s been a lot of consolidation in the cellular market. In fact, we have only three major players now that T-Mobile acquired Sprint, a merger that finally finished merging earlier this year. So who are the buyers in this auction?

Mark Gibson Well, they run the gamut. I mean, there's a lot of them. A couple of interesting things have come out of it. The Tier 1s are well represented: AT&T, T-Mobile, Sprint, and Verizon. So we expect them all to participate. And interestingly enough, if you want a spectrum comparison with the band right below this one, the CBRS [Citizens Broadband Radio Service] band, T-Mobile only bought eight markets in that auction.

So anyhow, the Tier 1s are well represented, but so are the MVNOs [Mobile Virtual Network Operators]—the cable operators. And what's interesting is that Charter and Comcast are bidding together. Now, ostensibly, that's because they want to make sure they de-conflict some of the spectrum situations at market-area boundaries. In other words, they're generally separate in terms of their coverage areas, but there were some concerns that there might be problems with interference de-confliction at the edges of the boundaries. So they decided to participate together, which is interesting.

You know, at the end of the day, there are 58 companies that qualified to bid, and there are a lot of wireless Internet service providers, or WISPs as we call them, for whom this would be a fairly expensive endeavor. The WISPs did participate in the CBRS auction, but that auction, by comparison, was probably easily an order of magnitude below what the C-band will be.

But when you look at it, it's the usual suspects: most everybody that's participated in a major spectrum auction, going way back probably to the first ones, although their names have changed. You have a lot of rural telcos, rural telephone companies, and a lot of smaller cable operators. And there are just generally some people who could be speculating. I don't see a ton of that in this band and this spectrum. There are some that look like they could be speculators, but I don't see a lot of it.

Steven Cherry I think people think of Verizon as the biggest cellular provider in the U.S., with the biggest, best coverage overall, but they've been playing catch-up when it comes to spectrum.

Mark Gibson Well, their position is interesting. They act like they're playing catch-up, but they do have a lot of spectrum. They have a lot of spectrum in the millimeter-wave band; they have a lot of 28-gig spectrum from the acquisition of LMDS [Local Multipoint Distribution Service], which is a defunct service. They got a lot of spectrum in the AWS-3 [Advanced Wireless Service-3] auction. In fact, they were the second-largest spectrum acquirer in the CBRS auction. They got a lot of spectrum in the 600 MHz auction. So I don't know that they're spectrum-poor compared, perhaps, to some of the others; it looks like they just want to have a broader spectrum footprint, or spectrum position.

Steven Cherry You mentioned Dish Network before … The T-Mobile–Sprint merger requires that it give up some spectrum to help Dish become a bigger competitor.

Mark Gibson Right.

Steven Cherry I think the government’s intent was that it be a fourth major to replace Sprint. Did that happen? Is that happening? Is Dish heavily involved in this auction?

Mark Gibson Ah, Dish is involved … I'm not sure to what extent heavily. We haven't seen anything come out yet with respect to down payments, and down payments give you some indication of their interest in the spectrum. I will say that they spent the most money, over a billion dollars, in the CBRS auction. So they were very interested in that. And their spectrum position, like I said, is really a mélange of spectrum across the bands. They have a bunch of spectrum in AWS-4, a bunch of spectrum in the old 1.4 gig. Their spectrum holdings are really sort of all over the place, and they have some spectrum that they risk losing because of this whole substantial-service situation; they haven't built some of their networks out. And they're in a unique situation because they're a company that has not built a wireless network yet.

And so most of us expect that Dish, again, will build their spectrum portfolio as needed, and then they can decide what they want to do. But you're correct: they were considered by the commission as possibly being able to help with the 2.5-gig spectrum and some of the other spectrum that was made available through the Sprint-T-Mobile merger.

Steven Cherry Auctions are about as exciting as the British game of cricket, which is incredibly boring if you don’t know the rules of the game —

Mark Gibson And it goes on about as long!

Steven Cherry — Right! But it is very exciting if you do understand the rules and the teams and the best players and all that, what else should we be looking out for to get maximum excitement out of this auction?

Mark Gibson Well, I don’t think anybody who’s not interested in spectrum auctions is going to find this exciting other than the potential amount of monies coming into the Treasury. And at the end of the day, $28 to $35 billion dollars is not chump change. So there’s that interest there. And you compare that to CBRS and other auctions, you know, the auctions generate a fair amount of money. I think that’s something to keep in mind, just to watch the auction, are the posturing by the various key players, certainly the Tier 1s and the cable companies.

And just to watch what they do. It'll also be interesting to watch what the WISPs do, and how the WISPs manage their bidding in some of their market regions. It was interesting in the CBRS auction: the one location that generated the highest per-POP [population covered] revenue was a place called Loving, Texas, which has a total population, depending upon who you ask, of between 82 and 140 people. Yet that license went for many millions of dollars. So the dollar-per-MHz-POP, which is a measure of how much the spectrum is worth per megahertz of bandwidth per person covered, was in the hundreds of dollars. We all looked at that and kind of scratched our heads. For this one, like I said, it'll be the typical watching of what the Tier 1s choose to do, and all the telcos for that matter, and certainly what Dish is going to do and how they do it.
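
To make that metric concrete: dollars-per-MHz-POP divides a license's price by the bandwidth it carries times the population it covers. A minimal sketch with invented numbers (the Loving, Texas figures here are hypothetical, not the actual auction results):

```python
def dollars_per_mhz_pop(price_usd: float, bandwidth_mhz: float, population: int) -> float:
    """Price paid per megahertz of bandwidth per person covered."""
    return price_usd / (bandwidth_mhz * population)

# Hypothetical example: a 10-MHz license covering ~100 people that sells
# for $1 million works out to $1,000/MHz-POP, hundreds of times the
# figure typical of rural licenses.
print(dollars_per_mhz_pop(1_000_000, 10, 100))  # 1000.0
```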

But you won’t know that during the auction because you don’t know who’s bidding on what per se. A lot of the bidding is closed, and so you won’t know, for example, that AT&T or T-Mobile or Dish or whoever are bidding on a given market. You only know that at the end. What you will know, though, is at the end of the rounds how much the markets are going for. Then you can start making educated guesses as to what’s happening there based on what you might know about a given entity.

It’s also interesting to look at the entity names because there’s the applicant name and then there’s the name of the company and there’s sort of a lot of—it’s not chicanery, but hidden stuff going on. For example, Dish is bidding as Little Bear Wireless. So that’s interesting because they had a different name for the CBRS auction. So anyhow, I will be—and my cohorts will be—watching this auction to sort of seeing, first of all, how high it goes, how high some of the markets go. And then at the end, when the auction is all done, who bid for what? And then try to piece together what all that means.

Steven Cherry So what will Comcast’s role be in the auction and what would be a good outcome?

Mark Gibson Well, you know, it's interesting. Comcast does not own spectrum per se; they do a lot of leasing. I think they may have won some spectrum, a little bit of spectrum, in the 600-megahertz band, but they participated in AWS—AWS-1—as part of a consortium called SpectrumCo, and they were there with Cox and two other smaller cable companies. And they ended up getting a bunch of spectrum. I think they ended up bidding $4 billion in that auction and won a bunch but couldn't figure out what to do with it. The consortium dissolved and the spectrum then went to various other companies.

So I think for Comcast it'll be good if they get spectrum in the markets that they want to participate in, obviously. As you probably are aware, the spectrum is going to be awarded in two phases, and I think the auction is going to be done in two phases as well. Phase one is for the A Block, which is 3700 to 3800 [MHz], the lower 100, in the top 46 PEAs, partial economic areas. So that auction will happen, and then the rest of it, the remaining 180, will be sold. So the question is, where will they be bidding? Will they be bidding in their DMAs, their market areas, overlaid with the PEAs? I don't know, but that will be interesting to watch. But I think to the extent that Comcast gets any spectrum anywhere, it'll be good for them; I'm pretty sure they can turn around and put it to good use. They have interesting plans with strand-mounted approaches, which are really very interesting. So we'll see. That will be interesting to watch.

Steven Cherry And what will your company’s role in the auction be—and what would be a good outcome for CommScope?

Mark Gibson Our role is … We don't participate in the auction. I, and my spectrum colleagues, will be watching it closely just to kind of discern the parlor game of who's doing what. Of course, when the auction ends, we'll find out we're all wrong, but that's okay. But we'll be looking at who's doing what, trying to anticipate who's doing what. Obviously, CommScope will be selling all of the auction winners what we make, which is infrastructure equipment, base station antennas, and everything in our potpourri —

Steven Cherry What would be nice for you guys is if Dish and the cable companies do well, since they have to start from scratch when it comes to equipment.

True. That’s true. Yeah, that would be good. And we work with all of these folks, so it’ll be interesting to see how that comes together.

Steven Cherry And other than the money in the Treasury, what would be some good outcomes for consumers and citizens?

Well, that’s a good question. Having this spectrum now for 5G, and actually now we’re hearing 6G, in this band will be very useful for consumers. It’ll mean that your handsets will now be able to operate without putting your head in a weird position to accommodate the millimeter wave antennas that are inherent in the handsets. It’s right above CBRS, so there’s some contiguity. There’s another band that’s being looked at. There’s a rulemaking open on it right now. That’s 3450 to 3550. That’s right below CBRS. So that’s 100 megahertz.

So when you consider that, plus the CBRS band, plus the C-band, you're looking now at a possible 530 megahertz of mid-band spectrum. And so for consumers, that will open up a lot of the use cases that are unique to 5G: things like IoT and vehicular-technology use cases that can only be partially addressed with the 5.9-GHz band, if you've been following that. C-V2X [Cellular Vehicle-to-Everything] capabilities to push more data to the vehicle will be much more doable across these swaths of spectrum—certainly in this band. We've seen interesting applications that were born out of CBRS that should really see a good life in this band, things like precision agriculture and that sort of thing. So I think consumers will benefit from having the 280 contiguous megahertz of mid-band spectrum to enable all of the use cases that 5G claims to enable.
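
The band math behind that 530-megahertz figure is simple addition; a quick sketch using the band edges mentioned above:

```python
# Mid-band building blocks discussed above (all in MHz).
bands = {
    "3450-3550 (proposed, below CBRS)": 3550 - 3450,  # 100 MHz
    "CBRS (3550-3700)": 3700 - 3550,                  # 150 MHz
    "C-band (3700-3980)": 3980 - 3700,                # 280 MHz
}
print(sum(bands.values()))  # 530 MHz of possible mid-band spectrum
```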

Steven Cherry Well, Mark, as the saying goes, in the American form of cricket—that is, baseball—you can’t tell the players without a scorecard. Thanks for giving us the scorecard and for joining us today.

Mark Gibson Well, my pleasure, Steven. Very good questions. I really enjoyed it. Thanks again.

Steven Cherry That’s very nice of you to say. We’ve been speaking with Mark Gibson of telecommunications equipment provider CommScope about the upcoming auction, December 8th, of more cellular spectrum in the C-band.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers, a professional organization dedicated to advancing technology for the benefit of humanity. This interview was recorded November 30th, 2020. Our theme music is by Chad Crouch.

You can subscribe to Radio Spectrum on the Spectrum website—where you can sign up for alerts or for our newsletter—or on Spotify, Apple Podcast, and wherever else you get your podcasts. We welcome your feedback on the Web or in social media.

For Radio Spectrum about the radio spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

Polling Is Too Hard—for Humans

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/artificial-intelligence/machine-learning/polling-is-too-hardfor-humans

Steven Cherry Hi, this is Steven Cherry, for Radio Spectrum.

The Literary Digest, a now-defunct magazine, was founded in 1890. It offered—despite what you’d expect from its name—condensed versions of news-analysis and opinion pieces. By the mid-1920s, it had over a million subscribers. Some measure of its fame and popularity stemmed from accurately predicting every presidential election from 1916 to 1932, based on polls it conducted of its ever-growing readership.

Then came 1936. The Digest predicted that Kansas Governor Alf Landon would win in a landslide over the incumbent, Franklin Delano Roosevelt. Landon in fact captured only 38 percent of the vote. Roosevelt won 46 of the U.S.’s 48 states, the biggest landslide in presidential history. The magazine never recovered from its gaffe and folded two years later.

The Chicago Tribune did recover from its 1948 gaffe, one of the most famous newspaper headlines of all time, “Dewey Defeats Truman”—a headline that by the way was corrected in the second edition that election night to read “Democrats Make Sweep of State Offices,” and by the final edition, “Early Dewey Lead Narrow; Douglas, Stevenson Win,” referring to candidates that year for Senator and Governor. The Senator, Paul Douglas, by the way, was no relation to an earlier Senator from Illinois a century ago, Stephen Douglas.

The Literary Digest’s error was due, famously, to the way it conducted its polls— its readership, even though a million strong, was woefully unrepresentative of the nation’s voters as a whole.

The Tribune’s gaffe was in part due to a printer’s strike that forced the paper to settle on a first-edition banner headline hours earlier than it otherwise would have, but it made the guess with great confidence in part because the unanimous consensus of the polling that year that had Dewey ahead, despite his running one of the most lackluster, risk-averse campaigns of all time.

Polls have been making mistakes ever since, and it’s always, fundamentally, the same mistake. They’re based on representative samples of the electorate that aren’t sufficiently representative.

After the election of 2016, in which the polling was not only wrong but itself might have inspired decisions that affected the outcome—where the Clinton campaign shepherded its resources; whether James Comey would hold a press conference—pollsters looked inward, re-weighted various variables, assured us that the errors of 2016 had been identified and addressed, and then proceeded to systematically mis-predict the 2020 presidential election much as they had four years earlier.

After a century of often-wrong results, it would be reasonable to conclude that polling is just too difficult for humans to get right.

But what about software? Amazon, Netflix, and Google do a remarkable job of predicting consumer sentiment, preferences, and behavior. Could artificial intelligence predict voter sentiment, preferences, and behavior?

Well, it’s not as if they haven’t tried. And results in 2020 were mixed. One system predicted Biden’s lead in the popular vote to be large, but his electoral college margin small—not quite the actual outcome. Another system was even further from the mark, giving Biden wins in Florida, Texas, and Ohio—adding up to a wildly off-base electoral margin.

One system, though, did remarkably well. As a headline in Fortune magazine put it the morning of election day, “The polls are wrong. The U.S. presidential race is a near dead heat, this AI ‘sentiment analysis’ tool says.” The AI tool predicted a popular vote of 50.2 percent for Biden, only about one-sixth of one percent from the actual total, and 47.3 percent for Trump, off by a mere one-tenth of one percent.

The AI company that Fortune magazine referred to is called Expert.ai, and its Chief Technology Officer, Marco Varone, is my guest today.

Marco, welcome to the podcast.

Marco Varone Hi everybody.

Steven Cherry Marco, AI-based speech recognition has been pretty good for 20 years; AI has been getting better and better at fraud detection for 25 years; AI beat the reigning chess champion back in 1997. Why has it taken so long to apply AI to polling, which is, after all … well, even in 2017, it was a $20.1 billion industry, which is about $20 billion more than chess.

Marco Varone Well, there are two reasons for this. The first one is that if you want to apply artificial intelligence to this kind of problem, you need to have the capability of understanding language in a pretty specific, deep, and nuanced way. And that is something that, frankly, for many, many years was very difficult and required a lot of investment and a lot of work in trying to go deeper than the traditional shallow understanding of text. So this was one element.

The second element is that, as you have seen in this particular case, polls on average are still working pretty well. But there are particular events, particular situations, where there is a clear gap between what was predicted and the final result. And there is a tendency to say, okay, on average the results are not so bad, so don't change too much, because we can get good results without the big changes that always require investment, modification, and a complex process.

I would say that it’s a combination of the technology that needed to become better and better in understanding the capability of really extracting insights and small nuances from any kind of communication and the fact that for other types of polls, the current situation is not so bad.

The fact [is] that now there is a growing amount of information that you can easily analyze, because it is everywhere: in every social network, every communication, every blog and comment. That made it a bit easier to say, okay, now we have better technology, and even in specific situations we have access to a huge amount of data. So let's try it. And this is what we did. And I believe this will become a major trend in the future.

Steven Cherry Every AI system needs data; Expert.ai uses social posts. How does it work?

Marco Varone Well, the social posts are, I would say, the most valuable kind of content that you can analyze in a situation like this. On one side, it is a type of content that we know—when I say we know, it means that we have used this type of content for many other projects. It is normally the kind of content that we analyze for our traditional customers, looking for the actual comments and opinions about products, services, and particular events. Social content is easy to get—up to a point; with the recent scandals, it's becoming a bit more difficult to get access to a huge amount of social data. In the past it was a bit simpler. And it is also something where you can find really every kind of person, every kind of expression, and every kind of discussion.

So it’s easier to analyze this content, to extract a big amount of insight, a big amount of information, and trying to tune … to create reasonably solid models that can be tested in a sort of realtime—there is a continuous stream of social content. There is an infinite number of topics that are discussed. And so you have the opportunity to have something that is plentiful, that is cheap, but has a big [?] expression and where you can really tune your models and tune your algorithms in it much faster and more cost-effective way than with the other type of content.

Steven Cherry So that sort of thing requires something to count as a ground truth. What is your ground truth here?

Marco Varone Very, very, very good point … a very good question. The key point is that from the start, we decided to invest a lot of money and a lot of effort in creating a sort of representation of knowledge that we have stored in a big knowledge graph, crafted manually, initially.

So we created this knowledge representation, a sort of representation of world knowledge in a reduced form, and of the language and the way that you express this knowledge. And we created this solid foundation manually, so we have been able to build on a very solid and very structured foundation. On top of this foundation, it was possible, as I mentioned, to add new knowledge by analyzing a big amount of data—social data is an example, but there are many other types of data that we use to enrich our knowledge. And so we are not influenced, like many other approaches, by the bias that you can pick up by extracting knowledge only from data.

So it’s the start of a two-tier system where we have this solid ground-truth foundation—the knowledge and information that that expert linguists and the people that have a huge understanding of things that’s created. On top of that, we can add all the information that we can extract more or less automatically from a different type of data. We believe that this was a huge investment that we did during the years, but is paying big dividends and also giving us the possibility of understanding the language and the communication at a deeper level than with other approaches.

Steven Cherry And are you using only data from Twitter or from other social media as well?

Marco Varone No, no, we try to use as much social media as possible; the limitation sometimes is that Twitter is much easier and faster for getting access to a big amount of information. For other social sources, sometimes it's not that easy, because you can have issues accessing the content, or you have a very limited amount of information that you can download, or it is expensive—or some sources you cannot really access automatically. So Twitter becomes the first choice, for the reason that it is easier to get a big volume. And if you are ready to pay, you can have access to the full Twitter firehose.

Steven Cherry The universe of people who post on social networks would seem to be skewed in any number of ways. Some people post more than the average. Some people don’t post much at all. People are sometimes more extreme in their views. How do you go from social media sentiment to voter sentiment? How do you avoid the Literary Digest problem?

Marco Varone Probably the most relevant element is our huge experience. We started to analyze big amounts of textual data many, many years ago, and we were forced to really find ways of managing and balancing and avoiding the kinds of noise, duplicated information, and spurious information that can really impact the capability of our solution to extract the real insights.

So I think that experience—a lot of experience in doing this for many, many years—is the second secret element of our recipe for being able to do this kind of analysis. And I would add that you should also consider that we have done it several times: we started to analyze political content, things linked to political elections, a few years ago. So we have both generic experience and specific experience in finding how to tune the different parameters, how to set the different algorithms, to try to minimize these kinds of noisy elements. You can't remove them completely; it is impossible.
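
Expert.ai hasn't published its balancing method, but the textbook way to correct a skewed sample is post-stratification: weight each group's sentiment by its share of the actual electorate rather than its share of the posts. A minimal sketch of the idea, with invented strata and numbers:

```python
# Post-stratification: reweight per-group sentiment by real-world
# population shares instead of the (skewed) social-media shares.
observed = {  # hypothetical per-stratum data
    "18-29": {"share_of_posts": 0.45, "share_of_voters": 0.20, "pro_A": 0.60},
    "30-49": {"share_of_posts": 0.35, "share_of_voters": 0.35, "pro_A": 0.52},
    "50+":   {"share_of_posts": 0.20, "share_of_voters": 0.45, "pro_A": 0.44},
}

naive = sum(g["share_of_posts"] * g["pro_A"] for g in observed.values())
weighted = sum(g["share_of_voters"] * g["pro_A"] for g in observed.values())
print(f"naive: {naive:.3f}, post-stratified: {weighted:.3f}")
# naive: 0.540, post-stratified: 0.500 -- the raw feed overstates candidate A
```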

But for example, when we analyzed the social content for the Brexit referendum in the U.K., and we were able to guess—one of the few able to do this—the real result of it, we learned a lot of lessons and we were able to improve our capability. Clearly, this means that there is not a single formula that is good for every kind of analysis.

Steven Cherry It’s sort of a commonplace that people express more extreme views on the Internet than they do in face-to-face encounters. The results from 2016 and 2020—and the Brexit result as well—suggests that the opposite may be the case. People’s voting reflects truly-held extreme views, while the polling reflects a sort of face-to-face façade.

Marco Varone Yes, I must admit that we had a small advantage in this, compared with many other companies and probably many other players that tried to guess the result of this election or of Brexit: being based where our technology is, here in Italy, we saw this kind of situation happening much sooner than it happened in other countries. In Italy, even many years ago, we had the strange situation where people, when they were polled for an interview, were saying, "Oh, no, I think that is too extreme. I will never vote for this. I will vote for this other candidate or the other party." But in the end, when the elections were over, you saw that, oh, this is not what really happened in the secrecy of the vote.

So I would say that this is a small secret, a small advantage that we have over many other people who tried to guess this result: we created this kind of technology in Italy, where this split between the positions people expressed and the way they actually voted was happening before we saw it elsewhere. Now it's very common; it's happening not only in the U.S. but also in other countries. But it was happening in Italy before, so we have been able to understand it sooner and to adjust and balance our parameters accordingly.

Steven Cherry That’s so interesting. People have, of course, compared the Trump administration to the Berlusconi administration, but I didn’t realize that the comparison went back all the way to their initial candidacies. So in effect, the shy voter theory—especially the shy-Trump voter theory—is basically correct and people express themselves more authentically online.

Marco Varone Correct. This is what we are seeing, again and again. And it is something that I believe is not only happening in the political environment, but there it's somehow stronger than in other places. As I told you, we are applying our artificial-intelligence solution in many different fields, analyzing the feedback from customers of telco companies, banks, insurance companies. And you see that when you look at, for example, the content of the e-mails, or let me say the official communication exchanged between the customer and the company, everything is a bit smoother, more natural. The tone is under control. And then when you see the same kind of problem discussed in social content, everything is stronger. People are really trying to give a much stronger opinion, saying, "I'll never buy this kind of service," or "I had big problems with this company."

And so, again, this is something that we have seen in other spaces as well. In the political situation, I believe it is even stronger, because people are not really buying something, as when you are interacting with a company; you are trying to give your small contribution to the future of your country or your state or your local government. So probably there are even stronger sentiments and feelings for people. And in the social situation, they are really free, because you are not really identified—normally you can be recognized, but in many cases the post is not linked to the specific person writing it. So I believe that that is the place where this is strongest: "Okay, I really want to say what I think, and this is the only place where I will say it, because the risk of a negative result is smaller."

Steven Cherry Yeah. So not to belabor the point, but it does seem important. It’s commonly thought that the Internet goads people into holding more extreme positions than they really do, but the reality is that it instead frees them to express themselves more honestly.

A 2015 article in Nature argued that public opinion has become more extreme over time, and then the article looks at some of the possible causes. I’m wondering if you have seen that in your work and is it possible that standard polling techniques simply have not caught up with that?

Marco Varone Yes, I think that we can confirm we have seen this kind of change. We have been applying our solution to social content for a good number of years—I would say not exactly from the start, because you need to have a certain minimum amount of data, but it's been a good number of years. And I can confirm, yes, it's something that we have seen happening. I don't know exactly whether it is also linked to the fact that the people who are more vocal on social content are part of a new generation of people who are younger, who have been able to use these kinds of channels of communication actively more or less from the start. I think that there are different elements to this, but for sure I can confirm it.

And in different countries, we have seen some significant variation. For example, you would expect that here in Italy it's super-strong, because the Italian people are considered very … they don't fear to express their opinion. But I will say that in the U.S. and also in the U.K., we are seeing it even stronger. So it's happening in all the countries where we are operating, and there are some countries where it's even stronger than others. You will not be surprised that, for example, when you analyze the social content in Germany, everything is somehow more under control, exactly as you would expect. So sometimes there are surprises; in other situations, things are more or less as you expect.

Steven Cherry I mentioned earlier Amazon, Netflix and Google. Are there similarities between what you’re doing here and what recommendation engines do?

Marco Varone There are elements in common and there are significant differences. The element in common is that they are also using their capability of analyzing textual content to extract elements for the recommendation. But they are also using a lot of other information. For us, when we analyze something, more or less the only information that we can get access to is the tweets, the posts, the articles, and other similar things. But Amazon, or Netflix, has access to a lot of other information. On Amazon, you have the clicks, you have the history of the customer, you have the different paths that have been followed in navigating the site. They have historical information. So they have a much richer set of data, and the text part is only somehow a complement to it. So there are elements in common and differences.

And the other difference is that all these companies have a very shallow capability of understanding what is really written—in a comment, in a post, in a tweet. They tend to work more or less on a keyword level: okay, this is a negative keyword; this is a positive keyword. With our AI, we can go deeper than that. So we can get the emotion, the feeling; we can disambiguate much better the small differences in the expression of the person, because we can go to a deeper level of understanding. It is not like a person—a person is still better at understanding all the nuances—but it's something that can add more value, and it allows us to compensate, up to a point, for the fact that we don't have access to the huge set of other data that these big companies easily have, because they track and they log everything.
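
To make the keyword-versus-deeper distinction concrete, here is a toy illustration (this is not Expert.ai's actual pipeline): a pure keyword counter scores "I would never call this product great" as positive, while even one step deeper, handling negation, flips it.

```python
POSITIVE = {"great", "love", "excellent"}
NEGATIVE = {"bad", "terrible", "broken"}
NEGATORS = {"never", "not", "no"}

def keyword_score(text: str) -> int:
    """Shallow approach: count polarity keywords, ignoring context."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def negation_aware_score(text: str) -> int:
    """One step deeper: a negator flips the polarity of later words."""
    score, flip = 0, 1
    for w in text.lower().split():
        if w in NEGATORS:
            flip = -1  # flip polarity for the rest of the clause
        elif w in POSITIVE:
            score += flip
        elif w in NEGATIVE:
            score -= flip
        if w.endswith((".", ",", ";")):
            flip = 1   # clause boundary resets the negation
    return score

text = "I would never call this product great"
print(keyword_score(text), negation_aware_score(text))  # 1 -1
```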

Steven Cherry I’m not sure humans always do better. You know, one of my complaints about the movie rating site Rotten Tomatoes is they take reviews by film reviewers and assess whether the review was a generally positive or generally negative review. It’s incredibly simplistic. And yet, in my opinion, they often get it wrong. I’d love to see a sentiment analysis software attack the movie-rating problem. Speaking of which, polling is more of a way to show off your company’s capabilities, yes? Your main business involves applications in industries like insurance and banking and publishing?

Marco Varone Correct. Absolutely. We decided that we would do it from time to time, as I said, applying our technology and our solutions to this specific problem, not because we want to become a competitor of the companies doing these polls, but because we think it is a very good way to show the capability and the power of our technology and our solutions, applied to a problem that is easily understood by everybody.

Normally what we do is apply this kind of approach, for example, in analyzing the interactions between the customers and our clients, or in analyzing big amounts of social content to identify trends, patterns, emerging elements—which can be emerging technologies or management challenges.

Some of our customers are also in the intelligence space—police forces, national security, intelligence agencies—and they use our AI platform to try to recognize possible threats, to help investigators and analysts find the information they want in a much faster and more structured way. Finally, I will say that our historical market is in publishing. Publishers are always searching for a way to enrich the content they publish with additional metadata, so that the people reading and navigating inside the knowledge can really slice and dice the information across many dimensions, or can focus on a specific topic, a specific place, or a specific type of event.

Steven Cherry Returning to polling, the Pew Research Center is just one of many polling organizations that looked inward after 2020 and as far as I can tell, concluded that it needed to do still better sampling and weighting of voters. In other words, they just need to do a better job of what they had been doing. Do you think they could ever succeed at that or are they just on a failed path and they really need to start doing something more like what you’re doing?

Marco Varone I think that they are on a failed path, and they need to really merge the two approaches. I believe that for the future, they really need to keep the good part of what they did for many, many years, because there is still a lot of value in that. But they are obliged to add this additional dimension, because only by working with these two approaches together can you really find something that gives a good result—and, I would say, a good prediction in the majority of situations, even in these extreme events that are becoming more and more common. That is part of how the world is changing.

So we think that they need to look at the kind of artificial-intelligence technologies that we and other companies are making available, because you cannot continue as before. This is not a problem of tuning the existing formulas. They should not discard them—that would be a big mistake—but for sure, in my opinion, they need to blend the two things and spend the time to balance this combined model. Because, again, if you just merge the two approaches without spending time on balancing, the result will be even worse than what they have now.

Steven Cherry Well, Marco, I think there's a very natural human need to predict the future, to help us plan accordingly, and a very natural cultural need to understand where our fellow citizens stand and how they feel and think about the important issues that face us. Polling tries to meet those needs. And if it's been on the wrong path these many years, I hope there's a right path, and hopefully you're pointing the way to it. Thanks for your work and for joining us today.

Marco Varone Thank you. It was a pleasure.

Steven Cherry We’ve been speaking with Marco Varone, CTO of Expert.ai, about polling, prediction, social media, and natural language processing.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers, a professional organization dedicated to advancing technology for the benefit of humanity.

This interview was recorded November 24, 2020. Our theme music is by Chad Crouch.

You can subscribe to Radio Spectrum on the Spectrum website, where you can sign up for alerts or for our upcoming newsletter, or on Spotify, Apple Podcast, or wherever you get your podcasts. And we welcome your feedback on the web or in social media.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

Can Detroit Catch Tesla?

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/transportation/advanced-cars/can-detroit-catch-tesla

Steven Cherry There’s a Silicon Valley VC who might have been the first to call cars, “mobile phones with wheels,” at least, I heard him say that back in 2015. That’s only somewhat true for today’s cars—no phone has a multi-speed transmission or a timing belt, no phone needs oil changes and new spark plugs. You know what else doesn’t need any of those things? Electric cars.

If any cars are mobile phones with wheels, it's electric cars. And just as the switch from landline phones to mobile phones was quick, and the switch from computers to smartphones was maybe even quicker, the shift from engines to motors, from internal-combustion cars to electric cars, is starting to gain momentum, and when it starts to happen at scale, it will happen quickly.

How quickly? Pandemic aside, Tesla would be on track to sell half a million cars in 2020, all of them electric. By contrast, GM sold almost 3 million cars last year, almost none of them electric. But by 2025 or so, GM plans to sell a million electric cars a year, which the company thinks might be a tipping point toward electrics. Why? To quote one executive, the better driving and owning experience: "When you get used to charging your vehicle like a phone at night, when you charge it, and you don't worry about it, you never have to stop at a gas station. There's a lot to be said for that kind of lifestyle."

Of course, to do that, you need amazing batteries, and an amazing capacity to produce batteries—both of which are at the heart of the company’s plans. So much so that an upcoming article in IEEE Spectrum focuses on a new GM battery factory, in partnership with LG Chem, that will dwarf Tesla’s Gigafactory and power, pun intended, its drive, pun again intended, to that 2025 goal of a million electric cars.

The author of that article is Lawrence Ulrich, a longtime contributing editor at Spectrum and a noted auto maven. He also writes about cars for the New York Times, Car and Driver, and elsewhere, and has test-driven more cars than can be found on the Hertz lot at a mid-sized airport, or so it seems.

Lawrence, welcome to the podcast.

Lawrence Ulrich Hello, Steven.

Steven Cherry Let’s start with the batteries themselves. They’re very different from Tesla’s.

Lawrence Ulrich They are. They're called Ultium batteries. Tesla started off with the small, almost-AAA cells that you would find in a typical laptop and put thousands of them together for their original Tesla Roadster. And they're still using that approach today.

General Motors and some other legacy automakers are going to so-called large-format cells, or prismatic cells. And they're packaged up for General Motors in these big pouches; it's called a pouch format. The upshot is efficiency: ideally, you're going to be able to pack more energy into a smaller space. And that's what General Motors is up to. If all goes well, they'll be producing 250 million of these power cells a year from a plant in Lordstown, Ohio.

And Lordstown has its own significance. This is the site of a somewhat notorious General Motors plant from the '70s that became kind of a poster child for labor strife in America. This plant in Lordstown produced some basically crappy small cars with names like Vega and Cavalier before it turned into a model plant in the late '90s. The plant was shut down, and the area has been starving for jobs ever since.

President Trump got involved and the whole place became a kind of political football. But this plant is one of those heartwarming stories, potentially, bringing jobs back to a struggling area in Ohio that could really use them.

Steven Cherry So the batteries themselves differ from Tesla's in size and shape. You mentioned that the chemistry is different as well; apparently these will use a lot less cobalt. Why is that important?

Lawrence Ulrich Yeah, well, the big drive, of course, around the world is to eliminate the precious metals in batteries. Cobalt is mined under very inhumane conditions in many places around the world. We managed to spend several decades not worrying about the effect on the planet of breaking oil out of everywhere, and the attendant pollution. Now, that's an issue.

Steven Cherry So how do the two battery systems compare in price overall?

Lawrence Ulrich When it comes to pricing, in some ways we don't really know. Companies keep that information really close to the vest. But what we do know is that battery prices around the world are falling dramatically. They're essentially a commodity. You know, a mistake people make in this—and I can fall prey to it, too—is considering this solely as some kind of horse race, as if Tesla or another company is going to have some battery breakthrough, or a chemistry, that no one else has seen and that no one else will have access to.

In the history of automobiles, that's never, ever been the case. If one company has something, another company can either imitate it, license it, or get it their own way, in very, very short order. In fact, most of the big technologies in the world now come from global suppliers and not from the auto companies themselves. And they might have brief licensing agreements to work with a single company, but those never last. So it's better to look at it as a worldwide drive to reduce the price of these batteries.

Any advantage in batteries, I've come to believe, for one company or another is going to be a very, very short-lived advantage, just in the same way that any innovation in safety, from anti-lock brakes to adaptive cruise control, rapidly sweeps through the industry. But that's a good thing. This needs to be about more than technology and hundred-thousand-dollar Teslas and half-million-dollar electric supercars. It needs to be a technology that's in the bread-and-butter family cars that we all can afford. So battery pricing that was just exorbitant a decade ago is falling and falling and falling. And the Holy Grail is the $100/kWh level. It's looking like the entire industry is just beginning to push the price below that.

Steven Cherry One technology advantage that GM might have, as you say at least in the short term, is a unique monitoring system within the battery.

Lawrence Ulrich General Motors has what’s called a wireless battery management system, and it is absolutely a world’s first. Most of your battery packs if you look, they have this tangle of wiring that monitors the state of charge in individual cells. GM has found a pretty elegant solution for its new pack to make that an entirely wireless system. So instead of this wiring linking your battery modules, you’re going to have integrated RF antennas on circuit boards and they run just a wireless communications protocol. It’s a lot like Bluetooth but runs with lower power.

The upshot is that you’ve got cradle to grave monitoring of the battery health, and that means picture a warehouse filled with cells on the factory floor where they’re being installed in cars and then on to when these batteries are being used in cars. The company is going to be able to collect data on charges from that inventory to the cars and hopefully they’ll be able to use that data to improve the durability of their batteries.

Steven Cherry We make fun of the cliché that data is the new oil, but it really can be important, especially in the automotive context, right? When it comes to developing self-driving cars, we look at how many millions of miles each company has achieved, and the same thing could happen here. GM’s scale would allow it to leap ahead in data even once other manufacturers adopt similar systems.

That’s definitely what they’re hoping. And if they do have one advantage in this, it is their scale. It’s their manufacturing know-how. We’ve seen that as Tesla’s weak link; reliability is absolutely bottom of the barrel. Their cars are the least reliable cars of pretty much any manufacturer. And they don’t like that to be discussed a lot, but it’s true. And they’re improving. But for all their strength that’s their weak link—the manufacturing scale and the quality control. Where you look at for the peak, a Toyota, you know, the Toyota production system that really swept every area of manufacturing in the world. And it is based somewhat on scale. And as you said, data is king and Tesla is—when you mentioned self-driving cars, Tesla has the most cars with the most active driver assistance systems on them. I won’t call them self-driving because that oversells the technology they really have. But what’s brilliant about Tesla is they’ve got this army of guinea pigs basically out there and they’re constantly pulling data from these cars and using it to develop the next generation of their self-driving technology. So that, right, that onboard data of cars is just critical to the development—and the speedy development—of all these technologies.

Steven Cherry So weirdly, at least when it comes to batteries in the battle between geriatric GM and the teenager Tesla, it’s Tesla that’s trapped by its battery legacy. And GM has some kind of second mover advantage here.

Lawrence Ulrich Potentially, and we don’t know what Tesla is working on. They had a much-ballyhooed battery day a few months ago that was one of their biggest fizzle ever. And usually we expect one of Elon Musk’s grand pronouncements to really bump their stock price. And for one of the first time, investors around the world were unimpressed because lo and behold, they didn’t have any real battery breakthroughs to announce.

And I think it illustrates just what a tough nut this is to crack. Maybe it was unrealistic to expect that Tesla was going to have some miracle batteries that could do what other batteries in the world cannot. And we've got to remember, very smart scientists and engineers all around the world are working on this problem. So this eureka, Thomas Edison thing is probably not going to be the way it goes. It's going to be slow and steady. And again, potentially the advantage of a GM is to operate at scale. Tesla did it themselves—they wanted to have their own battery plants rather than just buying cells from LG Chem or Panasonic. General Motors is in a way mimicking their formula: they're building this giant gigafactory in Ohio with 50 percent more battery capacity than Tesla's in Nevada.

Steven Cherry This new battery factory in Lordstown, Ohio, is quite an undertaking. It’s going to cost $2.3 billion. It’s about 30 football fields in size. That’s still smaller than the [Tesla] Gigafactory, about 1.4 million square feet to almost 2 million. But the battery production is expected to be greater than Tesla’s when measured in gigawatt hours.

Lawrence Ulrich That’s exactly right, Steve. The plant itself is looking at an ultimate capacity of 30 gWh of batteries each year—250 million cells. And now, of course, they just need cars to put them in. And that’s an open question with GM. It’s always been the level of sincerity and the level of commitment in the company.

Every legacy automaker is facing this catch-22 right now. They know they need to start transferring their production to electric cars, and yet more than 98 percent of all the cars purchased in the world today still have some form of fossil-fuel combustion aboard. So they've got to continue to satisfy that market, or they're cutting their own throats, while also developing this new market. In a way, it's easier to be Tesla, all-electric all the time, whereas these companies are developing the technology that's going to put their old business out of business. It's got to be a really tricky proposition, and it's pretty obvious why companies would be reluctant to get into that business. It's almost like they need to spin themselves off and have their own electric divisions.

Steven Cherry Yeah, the first step is the batteries in that factory. But the second step for GM is to revamp assembly-line plants for which they’re spending $2.2 billion in Detroit alone.

Lawrence Ulrich Absolutely. And more hiring is going on. And the idea ultimately is a $20 billion investment over the next five years. And those one million cars, we should note, include China, which is currently by far the biggest electric-car market on the planet today.

So, yeah, these companies are just engaged in this huge drive to seed the market and, frankly, to find customers beyond Tesla. For all the good feelings and the good vibes about electric, no one has managed to really, in my mind, sell a hit EV in America other than Tesla models. We've had some decent first attempts.

You go all the way back to GM's ill-fated EV1 and see that they were, in a way, a first mover in electric way back when. It never took, and then the Chevy Volt was a pretty well-engineered car, but it didn't catch on. And I think one of the lessons people took was that, at least for now, what people don't want in an electric car is just an electric version of a frumpy, low-budget economy car. It seems that originally, that was our idea of what electric cars were going to be: these kind of eat-your-peas cars, strictly utilitarian passenger pods, not a lot of personality, very little excitement. Tesla, of course, changed that idea and said, "Aha! It turns out people still want style and performance and luxury."

And they don’t want to give any, like laptop buyers. They want more and they want it to cost the same or less. So that’s the big push for now. We see in all the General Motors cars coming out. There’s not a frumpy econo box in the bunch.

And in fact, they’re leading with a reborn Hummer EV. Ten years ago, you’d’ve been laughing—that’s going to be your your initial mover in this space? A Hummer? But America wants EVs and they want trucks. So they figured, hey, we might as well try to sell these in the most popular segments where the masses are buying and traditional cars are totally out in America. The SUV and trucks share now is pushing 75 percent. Three of every four cars sold in America are not cars at all. They are a pickup truck or an SUV.

Steven Cherry Was the idea that the Hummer was a symbol of everything wrong about carmaking from a sustainability and climate point of view, and now it's going to symbolize GM's woke status?

Lawrence Ulrich I think that somewhere in there, that's got to be what they believe. But I think it all traces back to the success that Tesla has had with their cars. The idea for GM, like Nissan with their Leaf, was that the first bid to attract people was going to be these affordable, economy-type cars. The idea was that electricity was about efficiency, and that was going to be the prime selling point: save the planet, save money, be efficient, save energy.

And it turns out that that sales pitch doesn't work on very many people. They still want their luxury car, their high-design car, or, shall we say in America, their luxury truck or their luxury SUV. If it runs on electricity and makes their life more convenient and saves them money and saves the planet, great. But saving money and saving fuel is not the prime buying motivation, especially with gasoline being dirt cheap, which is, of course, a whole other question. What Tesla showed is that, at least for now, people want all the attributes they always had in a car. They want design; they want a car that's going to impress their neighbors. They want a car that's fun and has good power and is not a boring, frumpy little shoebox.

And so for General Motors, which is hugely enmeshed in selling trucks and SUVs, like pretty much any automaker in America now, they know which side their bread is buttered on. They have a lot more chance selling an exciting product. And the Hummer, for better or worse, has a look and has a following. And it dovetails nicely with what people want in vehicles right now: any SUV that has an adventure-y, outdoor, four-wheeling vibe to it is just selling like gangbusters.

It’s counterintuitive, of course, but it’s where we are in a country where more than 75 percent of the sales are SUVs and pickups. And let’s underline that’s not just the old bad Detroit, that’s Mercedes, that’s BMW, that’s Porsche. In any case, any company that has an SUV, their SUVs are outselling their car models exponentially, whether that’s a German company or a Japanese company, whatever. Strangely enough, that’s starting to sweep the world as well. SUV sales are booming in Europe. They’re booming in Asia. They’re booming around the world.

Steven Cherry There’s a price-point issue here as well. In the long run, electric cars will be cheaper. They have fewer moving parts, maintenance is much lower, and so forth. But for the moment, electric cars are more expensive. And if you add $15 000 to a low-end car, you’ve jumped the price 50 percent or more. If you add $15 000 to a $60 000 car, you’re in a range where the buyer is a lot less price-sensitive.

Lawrence Ulrich That’s exactly it as well. And that’s why we’re seeing a Hummer priced at over $100 000 out of the gate; it’s why we’re seeing luxury SUVs from Mercedes, BMW, Porsche—all across the board. The mythical $35 000 Tesla Model 3 never really arrived. They discourage people from ordering it. And now they’ve pulled it off the order books entirely. And the cheapest Tesla Model 3 is now $39 000.

And you’re exactly right. There is a lot less price sensitivity in the luxury reaches of the market to tack on another $10, $15 000 in battery costs. And that also has spelled failure for EVs that the lower reaches of the market. And of course, that’s going to be the acid test. We hear so much about this vaunted price parity between EVs and fossil-fuel vehicles. And frankly, I think analysts have way jumped the gun. We keep hearing the price parity is two and three years away. Well, it’s not. And anyone who tells you that I believe is lying. Sure. Maybe price parity in an $80 000 luxury car. But when someone can sell you a $25 000 Honda Civic that also is stuffed full of batteries—it’s several years away. And that brings us back to the huge drive to reduce battery costs in that GMC Hummer.

The top version of it is going to have a 200-kWh battery pack—that's twice the size of the largest pack in a Tesla vehicle. And that's partly key to making a truck that big have decent range; we're looking at a 350-mile driving range. But a 200-kWh battery pack—do some back-of-the-envelope math at even $100/kWh—that's a $20 000 battery pack. That might work in a $100 000 vehicle; it's not going to fly in an affordable work truck or an affordable SUV. So there's still this huge, huge need to cut the price of batteries in half yet again from where they are today. And then maybe we really will see $20 000 and $25 000 EVs. The day is absolutely coming, but patience is still required.
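
Ulrich's back-of-the-envelope math generalizes; a quick sketch at the cell prices discussed above (pack cost here is simply capacity times price per kilowatt-hour, ignoring pack-assembly overhead):

```python
def pack_cost(capacity_kwh: float, price_per_kwh: float) -> float:
    """Cell cost of a battery pack, ignoring assembly and overhead."""
    return capacity_kwh * price_per_kwh

# The Hummer's 200-kWh pack at the $100/kWh "Holy Grail" price:
print(pack_cost(200, 100))  # 20000.0 -- workable in a $100 000 truck
# Halve the cell price again, and a 60-kWh commuter-car pack gets cheap:
print(pack_cost(60, 50))    # 3000.0
```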

Steven Cherry GM is planning all-electric Cadillacs but also all-electric Chevrolets. What do you think the prices will be like? And how will these compare in terms of driving range, which is still a pretty big issue for most buyers?

Lawrence Ulrich Right. Let’s take the Chevy Bolt as an example. It’s about a $37 000 car and take off the $7500 tax credit and it becomes $30 000, a lot more palatable at that rate.

But again, without government backing, without government incentives, you can see what happens. The buyer of a Nissan Leaf or a Chevy Bolt comes to the showroom and sees a $37 000 car that in form and function and features is really no different from a conventional car that sells for $20 000 or $22 000. Which one is that person going to buy, unless they're a real true believer in electricity? And that's the contradiction we're facing right now. Somehow, some way, the industry is going to have to give people the cars and the range that they want without a giant increase in price. Because Americans are greedy—of course! We want it all, and telling someone they need to spend $10 000 more because it's going to vaguely help the environment is just going to be a nonstarter with a lot of people.

Steven Cherry And I think I’m a typical buyer. I drive a Subaru and looking at the new Subarus, I could buy a really nice car that I’d be happy to drive for $28 000 or I can buy the new plug-in hybrid for $40 000, That seems like a big jump to me.

Lawrence Ulrich Right. And especially if you're not really seeing a return. The elephant in the room is the price of gasoline, in America especially. I hate to be cynical about it, but my hunch is that as long as gasoline is below $3 a gallon, there's just not a giant incentive for people to make such a big lifestyle switch. And we still lack the charging infrastructure.

Just a few hours ago, I was driving Volvo's new Polestar 2—and for people who aren't familiar with Polestar, it's Volvo's new electric-car division. Terrific car, a Tesla Model 3 competitor: great performance, a really awesome interior with this new Android-based operating system, really slick, very Tesla-like. But the thing was still a $60 000 car for basically a compact crossover.

The audience for a vehicle like that at that price is just limited, especially when you've got the issue of people who don't have the ability to charge at home. There are still a lot of apartment and condo dwellers in America, and the question of where you're supposed to charge—Tesla has done a brilliant job of addressing that on the road with its Supercharger network, but there's a long way to go in that regard. And my argument is always that the number one thing we could do to promote EV adoption in America is a gasoline tax—raise the price of fuel, a little bit of the stick part of the carrot-and-stick approach. But that is just a political nonstarter.

But like you said, when the person has the Subaru, the Honda, the Chevy, the Ford that they're perfectly content with … Asking them to inconvenience themselves in any way, and to spend more money on top of it to get less range—it's a tough nut to crack, even when you tell them all the advantages of electricity: potentially lower maintenance, not having to go to a gas station, being able to just charge overnight at home.

But on the subject of range, it looks like they're going to be strongly competitive. GM's battery leaders and EV leaders are saying that they believe 300 miles is the minimum for a viable EV. And that sounds about right—you want to get near the range that a tank of gasoline would take you. Two hundred miles is just not enough, with one exception. We do have tons of multicar households in America; in fact, once you get out of the cities, it's the norm for families to have two and three cars. One of the great hopes is that people might have that EV as their commuter car, their urban errand car, their around-town car.

A really good one that's out right now is the Mini Cooper SE. It's just about the most affordable EV sold in America. It starts right around $30 000; you take the $7500 tax credit, you take some local and state credits—in places like California, there are places you're going to be able to get into this Mini for like $20 000. And it's like, wow, now you're talking. It's barely got an effective range of maybe about 130 miles, and that doesn't sound like very much at all. But when you're living in a city and you're just driving around town and around the suburbs, it's amazing how far 130 miles actually is. That might be a week of driving—just little daily trips to take the kids to school, go to work and back—and you're pumping it back full as soon as you get back home anyway. So you've always got this car there that's got at least some useful range in it. So there's been a little eureka moment of people thinking maybe there is room for ultra-affordable EVs that carry smaller battery packs and have shorter ranges but still meet those very specific needs.
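
That "week of driving" claim checks out with simple arithmetic; a minimal sketch, where the 26-mile daily total is an illustrative assumption rather than a figure from the interview.

```python
# Rough check of how far a ~130-mile effective range goes around town.
effective_range = 130  # miles, the approximate real-world range cited
daily_miles = 26       # hypothetical school run + commute + errands

days_per_charge = effective_range / daily_miles
print(f"~{days_per_charge:.0f} days of driving per charge")  # ~5, about a work week
```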

But beyond that, absolutely, the drive for battery efficiency is huge, and Tesla is absolutely the leader there. It's really as much or more their electric-motor efficiency than it is the amount of energy that they're carrying aboard. And we're seeing that—this Volvo is a good example. It has a 75 kWh battery pack, pretty much identical in size to the pack in a Model 3. Yet this Volvo's official range is 230 miles. Real world, let's call it 200 to 210. Tesla has nearly the same size battery pack, and they're squeezing 350 miles out of it. Again, that's a hugely fudged figure—Tesla gets a huge break from the EPA in their testing. Three hundred fifty miles in real-world driving, let's call it 260, 270. Nothing like 350, but definitely more than what other people are squeezing out of the same size batteries, whether that's Volvo, Audi, Porsche, whoever. So that gets down to their software and their first-mover advantages: Their huge, in-depth knowledge of their own batteries, their software, their electric motors—their entire package is more efficient than the packages of other vehicles.
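
That efficiency gap is easy to quantify as miles per kilowatt-hour, using the 75 kWh pack size and the midpoints of the real-world range estimates above; a minimal sketch.

```python
# Miles-per-kWh comparison from the real-world estimates quoted above.
PACK_KWH = 75  # both cars carry roughly a 75 kWh pack

def miles_per_kwh(real_world_range_mi: float) -> float:
    return real_world_range_mi / PACK_KWH

print(f"Polestar 2: {miles_per_kwh(205):.1f} mi/kWh")  # ~2.7 (200-210 midpoint)
print(f"Model 3:    {miles_per_kwh(265):.1f} mi/kWh")  # ~3.5 (260-270 midpoint)
```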

Steven Cherry How will these new GM cars compare with Tesla’s in terms of range?

Lawrence Ulrich General Motors is on that same drive, packing in large battery packs but also trying to boost their efficiency. So pretty much anything we're going to see out of General Motors is going to be in the 300- to 400-mile range, basically.

Steven Cherry Driving is, for some of us, a burden. For others of us, it's still a pleasure. You drive more cars in a year than I have in my lifetime. What's been your favorite EV to date?

Lawrence Ulrich My favorite EV has got to be the Porsche Taycan at the moment. It's a little unfair—it's also one of the most expensive EVs. When it came out, the critical consensus was how much better it was than a Tesla Model S, and I concurred with that, but also said, well, the thing loaded up to the gills is also $180 000; it damn well better be better than a Model S! It's a five-year-newer, leading-edge design, and it's very, very expensive. There's a more affordable version that will start maybe just under $100 000.

But man, the thing is something else. It's one of those vehicles … You want to put somebody in it who's never been in an electric car before, because it just blows their minds. We tested it in Germany, from the Autobahn to Denmark. And the thing does 0 to 60 in 2.3 seconds, which is even just a touch quicker than the fastest Porsche sports cars in their lineup.

The Porsche advantage over the Tesla is, it's a Porsche. Their entire history is about building race-winning, great-driving, high-performance cars. That's their bread and butter. And so it's a real Porsche: It feels like a Porsche, drives like a Porsche, and it just happens to be electric. You get the best of both worlds, and it's a really tremendous car.

Steven Cherry I still drive a stick. Becau—.

Lawrence Ulrich Well, good for you. Good for you! Keep the faith!

Steven Cherry —Because I love being in just the right gear to power through a curve on a two-lane road of the Hudson Valley. Leaving Porsches aside if you could compare comparable cars, a good stick car versus an electric, does the electric hold up in terms of the experience?

Lawrence Ulrich It surely does. You know, performance people … The old-line people with gasoline in their veins were always skeptical of electric cars, probably because of that image that these weren't going to be fun. They were going to be hairshirts, you know, with all the joy of driving taken away … Turns out it's potentially the opposite. The instant torque of an electric car is really something to behold when you experience it. It doesn't have to be a car that can go 180 mph and be like a supercar. It can be the family truckster and still have this terrific acceleration—and dead quiet.

For some people, the sound of that engine is a dear loss … I'm among them. I love hearing a really good motor rev really high. But you learn to live with that, especially when it feels pretty selfish to say that I'm so wedded to the sound of this V-8 that I'm willing to keep pumping pollution out of the tailpipe.

And electric cars also handle well. They're cheating a little bit: They've got all their weight down really low in the floor, which creates a low center of gravity and helps the car slingshot through turns. To people who haven't experienced really high-end gasoline performance cars, it feels pretty remarkable at first, and they think that means the car is faster in all conditions than, say, a sports car. But the negative is, it's still saddled with a lot of extra weight, and all things being equal, a 5000-pound car is not going to handle as well as a 3500-pound car. So another performance thing that electrics are going to have to deal with is trying to pare down all that excess weight. Weight is the enemy. One of the weird things about electric cars is that once the battery runs low on power, you're still carrying the same weight around. Whether the battery is full of juice or dead empty, you've got 1000 pounds or even 2000 pounds of battery aboard. So you're really lugging this boat anchor around all the time. So making batteries lighter is going to be a big part of this as well.

Steven Cherry One of the most interesting things about the future that we're hurtling ourselves into is the way in which companies have expanded into platforms, and it can be an insidious intrusion … You're an Amazon Prime customer, and so you look around and end up with an Alexa; now you're looking for a thermostat and you end up with an Alexa-compatible thermostat; you're looking for a door lock and a security system and you end up with Amazon's.

We’re seeing this with the Google platform. We’re seeing this with the Apple platform, as well as the Amazon platform. The Volvo has the Android built into it. And it seems to me that this platform question is even entering into the car world. And I know for a fact people who have just bought a second Prius, for example, because they’re so used to the Prius interface and the big screen in the middle and the way the seat fits, and … Are we getting locked into platforms even in the car world?

Lawrence Ulrich Well, absolutely. It's part of their drive for efficiency, and we see it even in the basic building blocks of cars. General Motors is one example: If you took all their combinations of transmissions and fossil-fuel engines and car chassis, they had five hundred and fifty combinations around the world. Their new building block is what they call their BEV2 platform—their battery-electric-vehicle platform—and their Ultium batteries. They can build anything from the tiniest little performance coupe to the massive Hummer pickup with a combination of nineteen jigsaw pieces.

What’s that going to do for your manufacturing costs? It’s going to greatly—and your complexity—it’s going to greatly reduce it. So even before electrics started getting some traction, cars were going to that modular construction to save costs. And it’s brilliant in a way. And the fear is that it’s going to take the individuality out of automobiles and we see it in ridesharing as well … As they start expanding into rideshare … Where our cars—to some people cars are just a point A to B transportation. They don’t think about what they’re sitting in, who built it, what powertrain … They just want to get somewhere. And to them, it doesn’t matter what it looks like, who put it together, who owns the components, who owns the platform.

So that consolidation is absolutely a trend in automobiles. Quite likely we're going to see one or more legacy automakers continue to merge, and others fail, especially faced with these challenges of the electric revolution.

Yes, we’re going to see more of this and not less. And as you said, that gets right down to the infotainment systems in cars. One of the huge challenges they face is the car has to last for 10 or 20 years. Well, digital technology is moving so much more quickly than a car can evolve. Car companies spent years trying to keep Apple and Google out of their cars for very good reason. They saw them as taking over this potentially very valuable space and a profit center for them as well. They would want to sell you their own navigation system, their own very much marked-up audio system, and a $2000 navigation system that only cost them $200. That’s where the profit is in cars, not the car itself. It’s all the frou-frou. So allowing Apple and Google to take that over, you can see what it is, almost an existential threat. But because smartphones have become ubiquitous, the gig is up, the game is over and they know it. So they’ve had to make peace with it.

And this new Android Automotive system is an entirely Google-based embedded infotainment system that doesn't even run off your phone—it can eliminate your phone in the car. You can still hook it up if you want, but basically everything from Google Maps to Google Voice and the Google Play Store is just embedded in the car. I have to add, it works great. I've never been in a car with more consistent and better voice commands. Ask for an artist or a song, and it's searching everything from onboard Spotify to the onboard music library—boom! Two seconds later, a whole list of songs is cued up. So a cloud-based system just in your car is the way to go: When it needs to be updated, that takes a minute over the air, and I can have the newest stuff in my car without having to run out and buy a new car every five years.

Steven Cherry I’d be remiss, Lawrence, if I didn’t take this opportunity to ask you your thoughts about autonomous vehicles.

Lawrence Ulrich I hesitate to even put a timeline out, because it all depends on what you mean by self-driving. And here's where Tesla has some blame on it, pushing the idea of the very poorly named "Autopilot," which was really nothing more than an adaptive cruise-control system with some extra functionality—collision avoidance: If you're about to run into another car ahead of you, or a pedestrian, yes, the car will stop itself. It'll pace other cars in traffic, all those things. But the idea that that car was ready—or that any car was ready—to let you sit back, take a nap, read a book … That is such a huge, huge, huge technical problem.

People don’t realize it’s way more difficult than autopilot in the air. A plane can kind of cruise in these basically empty skies for hours upon hours and really not encounter a single thing in its path. The road is just this living organism of variables and pedestrians and people. And just making a car be able to respond to any and all eventualities is very difficult. So with that preamble, it can be done. It’s a huge engineering challenge. And one of the greatest experts of this is Gill Pratt, the guy who ran the Defense Agency research into autonomous vehicles. Now he’s Toyota’s self-driving guru and he calls where we’re now at the trough of disillusionment.

We were all promised these self-driving cars, and now we're seeing that we're a long way away. We'll see them first in very tightly controlled, geofenced situations—airport shuttles, things where a vehicle just has to repeat the same route at low speed. Corporate campuses, universities, airports, places like that, where cars can ferry people over short distances. We're very much on the verge of having vehicles that can handle that limited functionality today. But driving through Times Square at rush hour in an autonomous vehicle? A decade or more away, for sure.

Steven Cherry Well, this has been mainly very good news, Lawrence, especially hearing that I can save the planet without losing all the fun of driving. That’s reassuring. It’s a great story. It’s a great ride, as we say, in the storytelling business. Thanks for telling it in the pages of the magazine. And thanks for joining us today.

Lawrence Ulrich Thanks for having me, Steven.

Steven Cherry We’ve been speaking with IEEE Spectrum contributing editor Lawrence Ulrich, about his upcoming article, “GM Bets Big on Batteries” in the December issue.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers, a professional organization dedicated to advancing technology for the benefit of humanity.

This interview was recorded November 17, 2020. Our theme music is by Chad Crouch.

Radio Spectrum can be subscribed to on the Spectrum website, Spotify, Apple Podcast, Stitcher, or wherever you get your podcasts. We welcome your feedback on the web or in social media.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

Telemedicine Comes to the Operating Room

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/biomedical/devices/telemedicine-comes-to-the-operating-room

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

You know what a hospital operating room looks like—at least from TV shows. There’s the surgeon, of course, maybe a surgical resident, nurses, a scrub tech, the anesthesiologist, maybe a few aides; some students, if it’s a teaching hospital. 

But an actual modern hospital operating room probably has someone you never see on television: a medical device company representative. The device might be a special saw or probe or other tool for the surgeon to use; it might be a device being implanted, such as an artificial hip, knee, or mandible; a pacemaker—even, lately, internal braces to stabilize someone’s spine.

The toolkits for some of these devices might include dozens of wrenches and screws. The surgeon may be using the device and the kit for the first time. The medical device company representative quite probably knows more about the device and its insertion than anyone on the surgical team.

Obviously, in the time of the coronavirus, it’s a plus to have as few people in the OR as possible. But even in non-Covid times, it’s inefficient to fly these company reps around the country to observe and advise an operation that might only take an hour. And so, in a handful of ORs, you’ll see something else—one or more cameras, mounted strategically, and a flat-panel screen on a console, connected to a remote console. The medical device rep—or a consulting surgeon—can be a thousand kilometers away, controlling the cameras, looking at an MRI scan, and making notations on their tablet that can be seen on the one in the operating room.

It’s telemedicine for the OR, and it’s the brainchild of my guest today.

Daniel Hawkins is a serial inventor with well over 100 patents to his name and a serial entrepreneur with several startups to his résumé. His latest is the one whose system we're talking about today, Avail Medsystems. He joins us by Zoom.

Daniel, welcome to the podcast.

Daniel Hawkins Thanks for the opportunity, Steven. Happy to be here.

Steven Cherry Daniel, I didn’t know anything about these medical device reps. I gather they’re often part of the marketing or customer support teams at their companies, but they undergo some real surgical training before they start advising doctors.

Daniel Hawkins They do, in fact, Steven. Typically the training regimens are several weeks, if not several months, long. After reps complete those regimens, they're required to travel with somebody very experienced in the operating room. What was initially didactic training in a classroom setting—or possibly even a cadaveric-lab setting—then converts to real-world settings in operating rooms, where their teacher, if you will, someone who has been on the job for an extended period, runs a teacher-mentor kind of training session on an ongoing basis for several weeks, if not a few months, before the representative is turned loose.

Steven Cherry This isn’t just Zoom for operating rooms. The cameras, for example, aren’t like the webcam in my computer.

Daniel Hawkins No, they’re not. These are, in fact, 30x optical-zoom cameras. I can confidently say there’s not a camera on the planet that we haven’t tried! And have ultimately chosen a pair of cameras that have incredible clarity, color-balancing, and appropriate low-level-light image-capture capability. Because in operating rooms you need all of those things. The remote individual being a sales rep or a trained physician in an open surgery needs to have crystal clear images of the tissue that they are operating on. And color and color balancing, white-balancing, and tissue-plane identification are really relying on high-end optical clarity.

Steven Cherry The cameras were just one of the engineering challenges you faced.

Daniel Hawkins We acquire high-definition audio and high-definition video at the local source, meaning the operating room. We transfer that via a HIPAA-compliant, fully encrypted Internet connection, bouncing off the cloud and then down to a remote participant—the industry representative, or possibly an advising surgeon, who could be across town, across the country, or across the globe. And our system is designed to have latency of less than half a second. Now, of course, we're dependent on the quality of the local and remote Internet connections. But before we install a system, we take care of the local issues with provisioning of the network in the hospital.
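
That half-second budget is easy to sanity-check in software. Below is a minimal, hypothetical sketch, not Avail's actual protocol or endpoints: it assumes a TLS-wrapped relay that simply echoes what it receives (the hostname, port, and echo behavior are invented for illustration), times a round trip, and treats half of it as a crude one-way latency estimate.

```python
# Hypothetical latency check (not Avail's actual stack or endpoints).
import socket
import ssl
import time

HOST, PORT = "relay.example.com", 9999  # placeholder echo relay
BUDGET_S = 0.5                          # the sub-half-second target cited

def one_way_latency_estimate() -> float:
    """Time one TLS round trip to an assumed echo server; return half of it."""
    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, PORT), timeout=5) as raw:
        with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
            t0 = time.monotonic()
            tls.sendall(b"PING\n")  # assumed echo behavior, for illustration
            tls.recv(16)
            return (time.monotonic() - t0) / 2

latency = one_way_latency_estimate()
print(f"{latency * 1000:.0f} ms one way; budget "
      f"{'met' if latency < BUDGET_S else 'missed'}")
```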

Steven Cherry Another challenge was the business model. There's a hundred thousand dollars' worth of equipment here, but your solution doesn't involve customers shelling out that money.

Daniel Hawkins That’s right, I’ve been, Steven, twenty-six years in the medical device business and one of the first capital equipment businesses I was involved in with within health care is actually The Da Vinci surgical robot produced by Intuitive Surgical. That’s a two-million-dollar robot. Be it two million dollars, two hundred thousand dollars, or even two thousand dollars requires extensive approvals inside of hospitals to go through a capital acquisition process and model. And that really would delay our commercialization if we required that to get our systems placed. We decided instead to pursue a very aggressive model, inasmuch as we’re not charging at all for that hardware. We’re not charging a capital cost, we’re not charging a lease. We’re not even charging for the upkeep and maintenance or technical support. It’s fully free of charge to the hospitals from a capital perspective. What we do instead is market the utilization of the these systems in a fee-for-service based on time.

Steven Cherry In some sense, your customer is also the medical device manufacturer.

Daniel Hawkins Yes, we’re really a two-sided network. The first side, of course, is placing the consoles in hospitals or ambulatory surgery centers where we generate our revenues from the fees paid by the remote participant. And in the vast majority of cases, that is, in fact, the medical device manufacturer, that is the Johnson and Johnson or Medtronic or an Abbott or Boston Scientific. The variety of medical device companies have an aggregate of over 100 000 sales reps and clinical specialists. Those are folks that are somewhat like sales, that they don’t have a sales quota. Their whole job is to support procedures. There’s 110 000 just sales reps and probably something similar in the clinical specialist field force. These people need access to operating rooms every day. They waste an extraordinary amount of time driving between their different customers from one hospital to the next and waiting for a procedure once they arrive at the hospital waiting for the next procedure. The estimates are about 50 percent of their time is wasted in logistics. You can have a significant increase in the efficiency of time spent supporting your customers, those customers being the surgeons who were conducting the operation.

Steven Cherry We think of the remote experience as being inferior, but it seems there are some advantages here. For example, being able to look at scans more easily.

Daniel Hawkins That’s a great way of thinking about it. There are really a number of advantages. In an operating room, when you go as an industry representative to help a surgeon through the specifics of using some type of a device that you’re representing, you have to observe what’s called a sterile field—kind of an imaginary bubble that extends probably six or eight feet around every dimension of the operating table. That means you need to stand back. If you’re standing back, it’s kind of hard to see the operating field itself. And you can’t point to anything unless you use a laser pointer, which is a common tool in many reps bags.

And you also can’t really annotate or draw on a screen—if you kind of imagine there being a screen is displaying part of the procedure, could be from a moving X-ray called an angiogram if it’s an angioplasty placing a stent in the heart, or it could be a screen with a full video image, if it is a minimally invasive surgery procedure; it’s called laparoscopic surgery. And you might want to actually point something out to the surgeon. You can’t really do that with a tool that would allow you to draw and really point something out. Those are two examples of things that we solve with the Avail system. But because of the nature of our cameras and our console, you can actually get a better view of the operator field using our system than you could get if you were physically in the room. Our cameras, one of them is on a boom arm, is positioned over the operating field and you were able to see directly down under the operating field and zoom down and quite literally count the eyelashes on the patient if you wanted to do that. The level of of visual acuity is quite impressive. We also get an ability for somebody remote to draw on the screen, almost like you might see on Monday Night Football.

Steven Cherry So is there an increased interest in your system because of the pandemic, or maybe less so because so much in hospitals is on hold while they deal with that one overriding problem?

Daniel Hawkins That’s a great question. The fundamental issues that we’re solving have existed for forty years. Medical devices, have always been supported, trained, and introduced in person. And that’s a challenge. In fact, somewhere between 25 percent and 100 percent of cases require physical presence from industry. Some procedures like angioplasty, about one in four times, there’s a physical person in the room from a medical device company. For pacemakers, they’re actually not implanted unless there’s somebody in the world because the medical device representative is integral to the procedure. The pandemic shone a spotlight on the issues of access and needing that access. And interest levels, Steven actually went up. The awareness of the need for those people in the room against the restrictions of being able to come into the hospital made it very, very apparent that a remote capability was needed.

Another thing happened that was really interesting. What was otherwise an assumption—that health care needed to be delivered in person—has been shattered, and dozens and dozens of medical device companies have approached us; we are under contract with several dozen right now.

Steven Cherry Daniel, you have something like one hundred and fifty patents. Your last startup, which I guess you’re still an adviser to, took some medical techniques that were well-known in kidney stone treatment and applied them to arterial plaque. None of this seems like the kind of thing that somebody would come up with if their degrees were from Wharton and Stanford in business and management.

Daniel Hawkins So I have been, in many respects, a medical device junkie for a few decades here, 26 years in total. But really, my interest stems from even before that. My father was a physician. I grew up around medicine. I also grew up around entrepreneurship. What I really sought was a way to combine the two. I didn't know much about the medical-device industry, but what I did understand is that I really thought the tools that surgeons used were pretty interesting.

When I was an undergraduate, I actually attempted to pursue a joint undergrad Wharton-and-premed degree. And thankfully, the deans of the schools made a different recommendation for me and suggested I pick one. I knew I didn't want to actually be a physician, but I did know that I wanted to be involved in health care, and after business school I got involved in health care immediately. Really, I didn't have any patents at all until 2005, I believe it was.

I joined a couple of engineers in an incubator of sorts, and our task—we were sponsored by venture firms, actually—was to create new medical technologies for disease states that were underserved. "They showed me how to invent" is probably the best way to describe it, Steven. And after that, I was hooked. It just became something where I would observe that there's an issue. By the nature of that incubation process, I was the idea guy—I was the one trying to find the unmet needs. I would see those, and what I would hear from the engineers I was working with were so many different types of solutions that could be brought to bear. The beautiful part about that was that I was just informed enough to ask the question and just ignorant enough to not stop myself from wanting to pursue it.

Steven Cherry My grandmother was a doctor and, like your father, her office was downstairs in the house I grew up in, but I don't have scores of medical-related patents, so I knew there was more to this story. You were also an executive at Intuitive Surgical, which makes the da Vinci surgical robot. In some ways, the Avail system backs away from robot-aided surgery. Why did neither of your recent startups go further down the robotic path?

Daniel Hawkins Really, robotics is … it's fascinating. It's absolutely fascinating, and I think it's frankly undertapped. But there's a level of expertise that is needed in robotics that I simply don't have. Having said that, I am an adviser to a brand-new robotic-surgery company, and what they're working on is really just incredibly interesting—but I'm not at liberty to talk too much about it.

Steven Cherry Getting back to Avail, it would seem helpful for a rural community, say, where maybe there's no surgeon at all, but a doctor or even a nurse practitioner needs to perform a procedure for which they need trained guidance. Is there interest outside of big hospitals in big cities?

Daniel Hawkins There absolutely is. Rural applications, I think, are very relevant, as are military surgery centers. And, you know, there are many different use cases. In some ways, I'd encourage you to think of what we're doing as a telecommunications platform. We are connecting expertise from outside of the procedure room and delivering it inside the procedure room. And that means really anyone who is an outside expert can clinically contribute to a surgery where somebody might have incrementally less expertise.

It’s also relevant for ambulatory surgery centers where there tend not to be five or six or seven surgeons in a practice group all working the same day at that same location. If there’s a case in a large hospital that a surgeon is working on and they have a question that they think one of their colleagues might be able to help out, they’ll ask a circulating nurse or a technician to call doctor so-and-so. And that physician, if they’re otherwise available, might put on a mask and a pair of gloves and come in and have a look. And they might consult for five minutes or 15 minutes. That’s incredibly valuable and it happens all the time.

Steven Cherry I can imagine the expertise flipping around. This seems like a good tool for observing an operation if you're a student at a teaching hospital—better than being maybe dozens of feet away in the theater.

Daniel Hawkins Absolutely true. In fact, we're working with a couple of medical universities that are actually interested in revamping their curriculum to solve exactly that problem. The issue is that there might be a dozen and a half or two dozen surgeon trainees circulating around the operating rooms, trying to observe what they can. But as a practical matter, you can really only have two, maybe at most three, trainee surgeons in an operating room at any given time to observe. Past that, it becomes difficult to see, and didactically a lot more challenging.

Steven Cherry What about outside of medicine? I can imagine a complex engine repair on an oil rig in the Arctic, for example.

Daniel Hawkins Most certainly. Our technology is not really dependent on the content of what it's doing; the capability is really universal for anything that involves audio and video. It has been proposed for that type of remote repair setting that you just described. It's actually been proposed for use in hospitals in a similar fashion, where, for the repair of an MRI machine, the manufacturer, if you will, would consult with the biomedical engineer in a facility, who points the cameras at the MRI machine and can be walked through the steps. And for the remote setting you just described, out in the Arctic—one of the interesting use cases that we're actively exploring is a military application, where one of our units might be on a Marine vessel, as long as they're able to get a satellite Internet connection. We're talking about the military, so that should not be an issue.

Steven Cherry Well, Daniel, that’s a pretty creative solution to a problem I think most of us didn’t even know existed. I’m sure hospitals and medical device reps are grateful for it. And I’m grateful for your joining us today.

Daniel Hawkins Thanks very much.

Steven Cherry We’ve been speaking with Daniel Hawkins, founder of Avail Medsystems, a startup that’s moving telemedicine from the doctor’s office to the hospital operating room.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers, a professional organization dedicated to advancing technology for the benefit of humanity.

And we’re grateful to benefit from open-source—our music is by Chad Crouch and our editing tool is Audacity. This interview was recorded November 2, 2020. Radio Spectrum can be subscribed to on the Spectrum website, Spotify, Apple Podcast, Stitcher, or wherever you get your podcasts. We welcome your feedback on the web or in social media.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

The Battle for Videogame Culture Isn’t Playstation vs Xbox

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/computing/software/the-battle-for-videogame-culture-isnt-playstation-vs-xbox

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

About ten years ago, I wanted to write an article about why so many rock climbers were scientists and engineers. One person I was eager to talk with was Willie Crowther, who, while employed at BBN in the early 1970s, was one of the founders of the early Internet—there’s even a famous picture of Internet pioneers that includes him—but was also a pioneering rock climber in New England and the Shawangunk mountains of the Hudson Valley. Searching the web for an active email address for him, I kept coming up with a person with the same name who was also a computer programmer and who wrote one of the first—maybe the first—adventure-style computer game. I eventually figured out that there was only one Willie Crowther, who had done all three things—worked on the Arpanet, rock climbed, and wrote a seminal computer game.

November is a big month for the millions of people who devote their time and money to computer games: Within a two-day period, Sony will be releasing its fifth-generation PlayStation, and its main competitor, Microsoft's newest Xbox, comes out as well.

So it’s a good month to look at the culture of gaming and how it reflects the broader culture; how it reinforces it; and how it could potentially be a force for freeing us from some of the worse angels of our nature—or for trapping us further into them.

I’m not sure I can imagine someone better qualified to talk about this than Megan Condis. She a professor at Texas Tech University and is the author of the 2018 book, Gaming Masculinity: Trolls, Fake Geeks, and the Gendered Battle for Online Culture.

Megan, welcome to the podcast.

Megan Condis Thank you so much for having me.

Steven Cherry The origins of gaming are pretty well represented by rock-climbing Internet-pioneer Willie Crowther: white male engineers of the 1950s and 60s who had leisure time and unique access to the mainframe computers of the era. Megan, you say there are consequences and reverberations of that set of attributes even half a century later.

Megan Condis Yeah. So one of the things to think about with videogames is a lot of times we think about them as these technical objects. But I also like to think about them as stories or as texts. And it maybe is sort of obvious to say this, but people create the types of texts and the types of stories that they would like to see in the world and that appeal to them. And so when you have a medium whose origins are so narrow in terms of who was able to have access to the tools that were needed to create this particular kind of text, then it sets a certain kind of expectation about what type of stories this tool should be used to tell, the type of stories that appeal to, as you said, the straight white male engineers who had access to computer technology in the 70s.

And so even as new generations of gamers start to encounter the pastime and start to get interested in development and start to want to create their own stories within the media, there’s this pressure that exists in terms of what types of genres of story are expected—or what types of communities you imagine yourself to be creating for—that remains in place. Like there’s these pressures that say we expect gamers to be members of these certain demographics. And so, of course, if you want your game to be successful, then you should create for those demographics, at least within [the] triple-A version of the industry where we’re risking millions of dollars on creating a product in hopes that it’s going to recoup its costs.

We can see that even in the early origins of gaming in the 70s and 80s, there were exceptions to these rules. So there were women developers, people of color, queer people who are developing games. But oftentimes the communities in which they were rooted held them up as the exceptions to the rule. Or, these creators were making games for outsiders to the community or they were trying to bring people into the gamer community. All of which are descriptions that take for granted who the gamer community is or who we expect gamers to be.

Steven Cherry I was really surprised to learn that among consumers of videogames, African-Americans are currently overrepresented.

Megan Condis Yeah, I think that's interesting. If we break down that number, you might ask questions like, well, what machines are different demographics using in order to game? Who's gaming on PC versus who's gaming on console versus who's gaming on mobile? But yeah, I think if you look at games as a whole—you're playing an interactive game on a digital device—it's a lot more diverse a population than we might think. And yet when you look at images of gamers in advertisements or in the media—if you're watching a movie about gamers, like Steven Spielberg's Ready Player One—it's this one particular image of the nerdy white guy who lives in his parents' basement that comes to stand in for what we expect gamers to look like. Even though, depending on the context or the type of game or the type of hardware, different groups filter into gaming culture in different ways.

Steven Cherry Yeah, you’ve written and talked a lot about Ready Player One. I need to give a heads up for listeners: There are going to be some spoilers here. The book is about 10 years old and the movie almost three. This is a story that’s set in a gaming universe. A universe in which gaming is both an escape from reality and a way of creating real-life success.

Megan Condis I think Ready Player One is interesting because it's this fantasy version of our world today. Our world today is extremely gamified, in the sense that, if you go to school, a lot of times kids are learning via digital gaming apps. If you get a job, a lot of times the training that you go through takes the form of gaming apps. But also, you know, even things that aren't expressly presented to us in gaming contexts often are based in the same architecture of surveillance and measurement. And in order to succeed within this particular context, you have to assemble enough points or enough reputation or likes on social media or whatever. So there are so many different ways in which we engage with these digital gamified contexts. And being good at strategically engaging with those systems, being a gamer who is able to kind of hack those systems, is how you achieve success today. And so Ready Player One takes what our world already is, this world in which we're surrounded by these overlapping contexts of digital surveillance and gamification, and says, "But what if the games in which we were embroiled were fun games? What if they were games that let us engage with pop culture, or games that let us express our skill at manipulating a controller, as opposed to the kinds of reputation-management systems where you're just curating your online presence or maximizing the efficiency of your CV or whatever it happens to be?"

And so it’s this fantasy of we’re all gamers in a sense, by which I mean we all have to play the games that corporations and governments and institutions have set up for us. And navigating those games is how we make it in the world. But Ready Player One offers us the fantasy of what if those games were actually fun and what if we could, by participating in those games, be praised for the kinds of knowledge of the kinds of skill that we would enjoy cultivating, as opposed to the types of knowledge or skill that institutions require of us.

Steven Cherry In Ready Player One there are a bunch of ways in which the real lives of the gamers become game-like. For example, the way the hero Wade pursues the woman he loves—he reenacts scenes from rom-com movies such as the classic Cameron Crowe teen movie Say Anything—is very similar to the way the game at the center of the novel requires the competitors to reenact scenes from movies like the 1983 classic teen adventure story WarGames. Maybe nothing represents the fluidity between real life and gaming, and the fluidity across meta-levels for gamers, better than Gamergate. Maybe you can just remind us what Gamergate was about.

Megan Condis Ah, man, that's hard. I say that's difficult because Gamergate means a lot of different things to a lot of different people, and so to summarize it is necessarily to take a perspective on it. I'm sure there will be people who will object to the perspective that I take on it, and I'm OK with that. But to me, what Gamergate was about … It started off as a kind of interpersonal conflict between a guy who felt like he had been slighted by his girlfriend and who wanted to recruit the Internet onto his cause and get them to take his side in their interpersonal conflict. But then it ended up spiraling out from there into becoming a story about how women are treated within gaming culture, and particularly within the press that covers gaming culture.

So there was this feeling that gamers were getting denigrated in the press or looked down upon by the press. They were being dismissed as these geeky guys who were unsuccessful at getting the girl or who were unsuccessful according to the traditional measures of masculinity. And so it became this cause of fighting back against these people in the media who were giving gamers a bad name and trashing what it means to be a gamer. And I think because the origins of this squabble were in an interpersonal conflict—and because a lot of the journalists who were being targeted as people saying bad stuff about gamers were women—it ended up becoming this battle of "feminism is the thing that is giving gamers a bad name." "Feminists are calling gamer culture sexist." "They're saying that if you are a gamer, you by necessity hate women, and so we want to push back against that."

And ironically, unfortunately, the manner in which a lot of the participants in Gamergate pushed back against that was to attack female journalists, using the fact that they are women as their mode of attack. So gendered attacks, sexual harassment online, doxing people, threatening people, and just a lot of hatred towards the idea that women would have something to say about this culture that they considered to be a safe space to be a guy and to do masculine things. Like Donald Trump's proverbial locker-room talk: They felt that their locker room was now being invaded and they were being told, you're not allowed to have this kind of discourse in this space.

Steven Cherry It’s funny, I was going to ask you if you thought Gamergate in any way presaged the broader culture wars that we’re seeing in real life, especially in politics.

Megan Condis I do think Gamergate was a preview of some of the issues that were to come. So from 2014 to today, we see a lot of different venues in which this figure appears of the male who feels like his place in the world has been taken away from him, that he doesn't have the same opportunities he used to have. But also, I think Gamergate was a precursor to the rise of the alt-right in the culture wars, in the sense that there were outlets such as Breitbart.com, and various alt-right-affiliated writers, that very intentionally waded into the Gamergate debate and tried to stoke those fires, spread the hashtag, spread the kind of terminology or ideology that they wanted to spread within those circles—I'm going to use the word recruitment, or at least an onboarding mechanism—to try to introduce a group of people who probably before 2014 didn't think of themselves as particularly political to the political implications of what their hobby could mean, or how their feelings of being erased might be useful for recruiting them into a particular … not a political party, per se, but a political ideology.

Steven Cherry Getting back to the movies for a moment, your point that gaming and the broader mindsets of, and about, computer programmers spill over into other aspects of our culture made me think of the operating system in the 2013 Spike Jonze movie Her, which is given an active mind and personality and which (or who) the protagonist Theodore inevitably falls in love with. (Sorry, another spoiler.) Are chatbots becoming another overlap between virtual life and real life?

Megan Condis Hmm. So when I think about a chatbot, at least in the current iteration of a chatbot, I'm thinking about a machine that's designed to provide the esthetic of a conversation. And a lot of times, I think, the way that we like to engage with chatbots is to try to find the limits of what they can understand. I'm thinking about, for example, Microsoft's chatbot Tay, which was introduced on Twitter. The big selling point of Tay was that the more you talked with her, the more she would learn and the better she would be able to respond. And so people decided to turn engagement with Tay into a game designed to see how far they could push the limits of this chatbot—to see if there were any boundaries built into her software. And unfortunately, what they discovered was that the programmers who created Tay had not installed any protections to get her to filter out any content. And so she was taught by the Internet—by the people who were playing this game with her—to use a lot of racist and sexist language. I don't think the game was "we're going to turn a robot into a racist or sexist." I think the game was: What are the limits of the system that has been presented to me? Do those limits—the linguistic limits that were programmed into this robot—match the kind of social limits that are generally agreed upon in society? Was this robot built with the social contract already installed in it? Or could we create a new version of the social contract by teaching this robot that language that usually would be considered unacceptable or rude is okay? And what they discovered is that, actually, they could.
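
As a toy illustration of that failure mode, learning from user input with no content filter, here is a minimal sketch; it is not Microsoft's actual Tay code, and the blocklist tokens are placeholders for real moderation.

```python
# Toy illustration only (not Microsoft's Tay): a bot that "learns" by
# parroting user input verbatim, with and without a simple blocklist.
import random

BLOCKLIST = {"slur1", "slur2"}  # placeholder tokens for disallowed words

class NaiveChatbot:
    def __init__(self, filtered: bool):
        self.filtered = filtered
        self.learned = ["hello!"]  # seed phrase

    def chat(self, user_msg: str) -> str:
        words = set(user_msg.lower().split())
        # Without a filter, everything users say becomes training data.
        if not self.filtered or not (words & BLOCKLIST):
            self.learned.append(user_msg)
        return random.choice(self.learned)  # echo back "learned" speech

unfiltered = NaiveChatbot(filtered=False)
unfiltered.chat("slur1 slur2")  # toxic input is absorbed...
print(unfiltered.learned)       # ...and can now come back out in replies

filtered = NaiveChatbot(filtered=True)
filtered.chat("slur1 slur2")    # toxic input is rejected
print(filtered.learned)         # only the seed phrase remains
```

The only difference between the two bots is the filter check before anything joins the training data, which is the protection Condis notes Tay's programmers left out.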

Steven Cherry So the game was to get the chatbot to acquire characteristics that Microsoft hadn't intended. But that's a wide universe of potential characteristics. Do you think there's any significance to the fact that these people went straight for racism and sexism?

Megan Condis I’m not sure. It also could be Tay—the chatbot—was personified, was given this … You know, she was given a gender, she was given a face, she was turned into a human person. And so rather than being just the disembodied Alexa or the chatbot that pops up that says, can I help you with your purchase when you’re on a Web site. Because it was this bot that was personified as a teenage girl, maybe it becomes more interesting or more provocative to have the, quote-unquote face of this racist, sexist language be this teenage girl’s face. So, yeah, I’m not really sure about that. But that’s something interesting to think about.

Steven Cherry In a 2018 talk you said that gamers are ready to take over the world. Is that more true today or less?

Megan Condis I think it is more true, in the sense that, as I alluded to earlier, game developers took over the world 10 years ago. Even people whose job description doesn't say "I am a game developer" are often creating systems that we use to manage the world that are at root games, or at least gamified: They have a set of rules; you act within a structured system according to those rules; your success or loss condition is governed by those rules. And I think what is happening more and more is that gamers—game players who have been living within these gamified systems for the last decade—are starting to realize the power that they might be able to wield within those systems, and the ways in which they've been trained for a long time now to think about navigating these systems in terms of efficiency and strategy. And they're now starting to think: Okay, rather than getting really good at navigating these systems in the intended ways, what if we were able to find some of the unintended, unexpected ways to navigate them? What if we were able to take our skills at breaking a system down and finding the most efficient pathways through it, and turn that to our own advantage, or to the collective advantage of the users?

The Tay chatbot example is an example of people doing just that—not necessarily toward a productive end or a revolutionary end, but just for fun: Let's see if they haven't thought of all the ways in which we could break the system. But more and more, I think gamers are starting to come together and think about the ways in which, rather than just playing for the sake of play, what if we were to play for our own purposes? And what if we were able to have some say in how these gamified systems were developed, rather than just existing within those systems and trying our best to succeed within them?

Steven Cherry We’re currently living in something of an alternative reality with new rules for day-to-day life. I’m referring, of course, to the Coronavirus pandemic. Has it directly affected the gaming world and gamers?

Megan Condis So I think it's really interesting. A couple of years ago, the World Health Organization had put out this notice that they were going to be investigating addictive gaming behavior. And there was this big outcry within the gaming community that the World Health Organization was pathologizing gaming, stigmatizing it and saying that it was an unhealthy thing to engage with.

And when the coronavirus hit, the WHO ended up releasing a statement that talked about how, when people were in quarantine and isolated, gaming could be a crucial means of self-care—a really important way for people to have social interaction besides just watching TV or passively consuming media. It would be a way for people to reach out and talk and engage with others even though they were stuck at home.

And so I think what the coronavirus has done is force a lot of institutions that maybe wanted to dismiss gaming as frivolous or as escapism—as not real—to recognize that the social interactions you have in a virtual world are real; they can be productive and supportive, and they can be useful in keeping people's mental health up and great as self-care.

But then the flip side of that, of course, becomes … that also means that the negative social interactions that you have in the virtual worlds are also real. And so that, you know, that raises some questions about moderation practices and safety online, especially for young kids. You know, if young kids are going online and they’re having negative social interactions with trolls or people who are acting abusive to them online, then is that the same as being bullied in their classroom face to face? Is that something that we do have to worry about in addition to the sort of productive, positive relationships and friendships that they could be forming online?

Steven Cherry So that’s twice it’s come up that people used to—and maybe still do—look down on games and gamers, the first was the question of whether the press looked down on them in Gamergate. Do people in academia, look down on professors who focus on games and gaming culture?

Megan Condis Whoa. Okay. So I’m not tenured yet. So, as an object of study, I think academia is very welcoming towards looking at games as this object that’s worthy of study, if only because it’s so omnipresent. Most articles and books about gaming open with this paragraph that says there are so many millions of gaming consoles and households across America and the gaming industry makes so many millions of dollars or whatever. So, you know, I think academia is very open towards looking at games as an object of study.

Over the past 15 years, academia has gotten a lot better about being willing to entertain different methodologies of looking at games. So 15, 20 years ago, yes, let’s study video games. We’re gonna study them in terms of media psychology and are going to study them in terms of the effects of video games on the development of brains and stuff like that. But over the course of time, as people got more familiar with video games and more comfortable with video games, academia started to become more open to, well, maybe we could apply humanities-oriented methodologies, maybe we could close-read video games in the same way we would give a novel or a painting or statue close attention as an art object. Or maybe video game cultures—and fan cultures generally—might be worthy of study in the same way that other types of communities or other types of relationships or organizations are considered worthy of study.

Steven Cherry Ironically, Willie Crowther wrote his adventure game for his young daughters as something they could play while visiting him after he and his wife divorced. He’s quoted as saying that his adventure game was deliberately written in a way that would not be intimidating to non-computer people—using natural language commands, for example. You write your own games, I gather mainly as teaching tools. Do you think if you wrote a commercial game, it would be hard to navigate your way through the stereotypes of gaming and the expectations of gamers?

Megan Condis Oftentimes, it's not necessarily in the writing of a game that the process becomes difficult; it's more in terms of the marketing of the game. Because in the indie gaming scene, there are tons of people who are writing extremely personal stories, who are writing games that engage with political topics and culturally specific topics and that are narrowcasting towards a really specific audience. And that's OK when you're directly marketing your game to people through Kickstarter or Patreon or what have you, and you're able to directly communicate with your audience. I think that one of the problems with commercial games is there's this expectation in the video game industry—just like the film industry and the television industry—that you're going to need to target your game towards an audience that's considered safe, that can be relied upon to purchase not just the first game in a series but the 10th game in the series down the line. And up until extremely recently, the videogame industry had placed its bet on the young teenage male and said, this is going to be the audience that we're going to develop as our most reliable audience.

And so we don’t want to take risks in marketing games towards other people, even though we know in our own studies that other types of people are playing this game. We just don’t want to risk reaching out and marketing to those other people because we don’t want to alienate our core. And I think in the last five years, the video game industry has realized that that audience is pretty saturated. They are extremely reliable, but they’ve kind of kept out—in terms of how many people that are in that target demographic that they haven’t already reached yet.

Steven Cherry The movie industry is risk-averse in many of the same ways, especially with respect to audience … So are there equivalents of independent movies in the game world?

Megan Condis Absolutely. It’s kind of a loose category, just like the film industry, where … what makes an indie film? Lots of debates around that. But I think a kind of quick and easy definition would be unaffiliated developers who, either as individuals or small teams, create game projects that aren’t released through the studio system, or that you wouldn’t necessarily be able to buy as a physical disk at your local GameStop, but rather are distributed on the Internet.

And so there’s a lot of indie games that get released through Steam for PC or that even get distributed via crowdfunding. So they will go out and find their audience before they even begin the process of developing, in order to ensure that they have a sufficient wellspring of people to draw from in order to fund their game. But yeah, I think that the indie scene is a really exciting place for looking at how diversity is being improved within gaming culture.

And it’s also a great—I don’t know if this is the right word—like a great stable that the AAA industry can now pull from. So you see someone who created a really successful indie game that addresses some of these questions of diversity and inclusion, and then you have a big company like an EA or Ubisoft who says, you know, we really want to reach out to that demographic. We can look to the indie scene and see here are some developers who have already made relationships with these demographics that we’re hoping to court. We can pull this person up and hire them into our system in order to try to pursue those same demographics with our AAA games.

Steven Cherry We’ve seen that in the movie world, too, where people go from independent director to Star Wars director.

Megan Condis For sure.

Steven Cherry Final question. Are there cultural differences between the PlayStation and the Xbox and in any event, which device’s release are you more looking forward to?

Megan Condis Ooh. It’s one of those things where there are definitely fans of the PlayStation versus the Xbox, and they would say, “it’s totally different and we have this totally different culture.” But I think someone looking in from the outside would say, hey, they’re very similar. It’s like any fan culture, right—fans of Star Wars might say we like Empire Strikes Back better than Return of the Jedi. But if you’re not already in that conversation, it just all looks the same to you. So for myself, I mean, I’ve gone back and forth. Back when the PlayStation 1 and 2 were out, I was definitely diehard PlayStation.

And then I ended up switching over to the Xbox for the previous generation. But right now, I’ve been playing a lot of PlayStation exclusive titles; Horizon Zero Dawn was a big favorite of mine. It’s just now finally starting to migrate over to other consoles. Based on the last couple of years, I would say probably immediately following release, I would be excited about the PlayStation 5. But, you know, it just always depends on which ecosystem is able to land the games that you’re interested in. And the nice thing about being an adult … So when I was a kid, it was Nintendo versus Sega. And when you’re a kid, your parents are like, I’m only going to buy you one. So you have to pick one. And then you have to make sure and always argue for yourself: like, I picked the right one, it’s got all the best games. Because you’re a kid, you can’t go buy both. But the nice thing about being an adult is, well, if the Xbox does come out with something that I really want to go after, I don’t have to go call my mom and beg her to get me the other console. I can actually get both of them if I really want. Now that I’m saying that out loud, that’s very privileged, too, right? So I’m very grateful for that.

Steven Cherry So it seems, though, that there isn’t the same sort of lock-in to a platform that we’ve always seen with personal computers (Mac versus Windows) and phones (iPhone versus Android). Even in the car world, people are starting to be locked into a platform—somebody who has driven a Prius for 10 years is so used to the Prius interface, they’re going to get another one. But that doesn’t seem to be the case in the game world.

Megan Condis Well, I would say console exclusives, yeah, that idea that if you want to play Final Fantasy, it’s PlayStation or nothing, right, that idea is still kicking around in gamer culture. I think it meets a lot more resistance from gamers today than it used to. And it seems to me like usually what happens is, if a game is going to be exclusive to one console or another, it often stays exclusive for the first year or two after release, and then after that it will migrate to other consoles. So usually if you wait long enough, you can get a chance to play some of these games that maybe initially were kept away from you. But still, a lot of times that means you missed the critical discourse around a game; you didn’t get to participate in the initial moment of reaction. It’s the same with spoilers, which we talked about earlier for films: you can get spoiled for games, too. And if you don’t get to play it right when the game comes out, sometimes you feel like you missed out on being a part of that critical mass.

Steven Cherry Well, Megan, games provide a refuge from a fractious world, even as they reflect and even reinforce it. And maybe this episode can provide some refuge from a confusing world, even as we try to understand it better. Thank you for your research and thanks for joining us today.

Megan Condis Thank you so much. It was really fun. I hope I was able to be helpful.

Steven Cherry We’ve been speaking with Megan Condis, a professor of game studies at Texas Tech University and the author of Gaming Masculinity, published by the University of Iowa Press in 2018, about the manifold ways gaming culture influences our broader culture.

This interview was recorded October 12th, 2020.

Our thanks to Mike of Gotham Podcast Studio for audio engineering; our music is by Chad Crouch. Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.



5G, Robotics, AVs, and the Eternal Problem of Latency

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/computing/networks/5g-robotics-avs-and-the-eternal-problem-of-latency

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

In the winter of 2006 I was in Utah reporting on a high-speed broadband network, fiber-optic all the way to the home. Initial speeds were 100 megabits per second, to rise tenfold within a year.

I remember asking one of the engineers, “That’s a billion bits per second—who needs that?” He told me that some studies had been done in southern California, showing that for an orchestra to rehearse remotely, it would need at least 500 megabits per second to avoid any latency that would throw off the synchronicity of a concert performance. This was fourteen years before the coronavirus would make remote rehearsals a necessity.

You know who else needs ultra-low latency? Autonomous vehicles. Factory robots. Multiplayer games. And that’s today. What about virtual reality, piloting drones, or robotic surgery?

What’s interesting in hindsight about my Utah experience is what I didn’t ask and should have, which was, “So what you really need is low latency, and you’re using high bandwidth as a proxy for that?” We’re so used to adding bandwidth when what we really need is to reduce latency that we don’t even notice we’re doing it. But what if enterprising engineers got to work on latency itself? That’s what today’s episode is all about.

It turns out to be surprisingly hard. In fact, if we want to engineer our networks for low latency, we have to reengineer them entirely, developing new methods for encoding, transmitting, and routing. So says the author of an article in November’s IEEE Spectrum magazine, “Breaking the Latency Barrier.”

Shivendra Panwar is a Professor in the Electrical and Computer Engineering Department at New York University’s Tandon School of Engineering. He is also the Director of the New York State Center for Advanced Technology in Telecommunications and the Faculty Director of its New York City Media Lab. He is also an IEEE Fellow, quote, “For contributions to design and analysis of communication networks.” He joins us by a communications network, specifically Skype.

Steven Cherry Shiv. Welcome to the podcast.

Shivendra Panwar Thank you. Great to be here.

Steven Cherry Shiv, in the interests of disclosure, let me first rather grandly claim to be your colleague, in that I’m an adjunct professor at NYU Tandon, and let me quickly add that I teach journalism and creative writing, not engineering.

You have a striking graph in your article. It suggests that VoIP, FaceTime, Zoom, they all can tolerate up to 150 milliseconds of latency, while for virtual reality it’s about 10 milliseconds and for autonomous vehicles it’s just two. What makes some applications so much more demanding of low latency than others?

Shivendra Panwar So it turns out, and this was actually news to me, that we think the human being can react on the order of 100 or 150 milliseconds.

We hear about fighter pilots in the Air Force who react within 100 ms or the enemy gets ahead of them in a dogfight. But it turns out human beings can actually react at even a lower threshold when they are doing other actions, like trying to touch or feel or balance something. And that can get you down to tens of milliseconds. What has happened is in the 1980s, for example, people were concerned about applications like the ones you mentioned, which required 100, 150 ms, like a phone call or a teleconference. And we gradually figured out how to do that over a packet-switched network like the Internet. But it is only recently that we became aware of these other sets of applications, which require an even lower threshold in terms of delay or latency. And this is not even considering machines. So there are certain mechanical operations which require feedback loops of the order of milliseconds or tens of milliseconds.

Steven Cherry Am I right in thinking that we keep throwing bandwidth at the latency problem? And if so, what’s wrong with that strategy?

Shivendra Panwar So that’s a very interesting question. If you think of bandwidth in terms of a pipe. Okay, so this is going back to George W. Bush [the “Internet is a series of tubes” remark is usually attributed to Senator Ted Stevens—Ed.]. If you remember that famous interview or debate, he likened the Internet to a set of pipes, and everyone made fun of him. Actually, he was not far off. You can make the analogy that the Internet is a set of pipes. But coming back to your question, if you view the Internet as a pipe, there are two dimensions to a pipe: the diameter of the pipe, how wide it is, how fat it is, and then there’s the length of the pipe. If you think of bits as a liquid and you’re trying to pour something through that pipe, how fast you get it out at the other end depends on two things—the width of the pipe and the length of the pipe. So if you have a very wide pipe, you’ll drain the liquid really fast. That’s the bandwidth question. And if you shorten the length of the pipe, then it’ll come out faster because it has less length of pipe to traverse. So both are important. Bandwidth certainly helps in reducing latency if you’re trying to download a file, for example, because the pipe width will essentially make sure you can download the file faster.

But it also matters how long the pipe is. What are the fixed delays? What are the variable delays going through the Internet? So both are important.
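A back-of-the-envelope way to see this (an illustration of mine, not code from the article): one-way delivery time is roughly the latency plus the message size divided by the bandwidth, so for small messages the pipe’s length, not its width, dominates.

# One-way delivery time = propagation latency + size / bandwidth.
# Shows why, for a small control message, widening the pipe (more
# bandwidth) barely helps; shortening it (less latency) is the only
# real lever.

def transfer_time_ms(size_bits, bandwidth_bps, latency_ms):
    return latency_ms + 1000.0 * size_bits / bandwidth_bps

message = 8 * 1024  # a 1-kilobyte message, in bits
for bw in (100e6, 1e9, 10e9):  # 100 Mb/s, 1 Gb/s, 10 Gb/s
    t = transfer_time_ms(message, bw, latency_ms=20)
    print(f"{bw / 1e6:6.0f} Mb/s -> {t:.3f} ms")
# 100 Mb/s -> 20.082 ms; 10 Gb/s -> 20.001 ms: a hundredfold fatter
# pipe saves less than a tenth of a millisecond here.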

Steven Cherry You say one big creator of latency is congestion delays. To use the specific metaphor of the article, you describe pouring water into a bucket that has a hole in it. If the flow is too strong, water rises in the bucket and that’s congestion delay, water droplets—the packets, in effect—waiting to get out of the hole. And if the water overflows the bucket, if I understand the metaphor, those packets are just plain lost. So how do we keep the water flowing at one millisecond of latency or less?

Shivendra Panwar So that’s a great question. So if you’re pouring water into this bucket with a hole, first of all, you want to keep it from overflowing. So that was: don’t pour too much water, because even with the hole, the bucket will gradually fill up and overflow from the top. But the other and equally important issue is you want to fill maybe just the bottom of the bucket. You know, just maybe a little bit over that hole, so that the time it takes for the water that you are pouring to get out is minimized.

And that’s minimizing the queuing delay, minimizing the congestion, and ultimately minimizing the delay through the network. And so if you want it to be less than a millisecond, you want to be very careful pouring water into that bucket, so that it just fills, or uses, the capacity of that hole but does not start filling up the bucket.
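As a toy version of that bucket (my sketch, not from the article), a short discrete-time simulation makes the two failure modes visible: run the tap just under the drain rate and the delay stays near zero; run it over, and the backlog, which is the queueing delay, climbs until the bucket overflows and packets are lost.

# Toy leaky-bucket queue. Each time step, `arrival` packets pour in
# and up to `drain` packets leak out through the hole; anything above
# `capacity` overflows and is lost.

def leaky_bucket(arrival, drain, capacity, steps=1000):
    backlog, lost = 0, 0
    for _ in range(steps):
        backlog += arrival
        if backlog > capacity:       # bucket overflows: packets dropped
            lost += backlog - capacity
            backlog = capacity
        backlog = max(backlog - drain, 0)
    delay = backlog / drain          # residual queueing delay, in steps
    return delay, lost

print(leaky_bucket(arrival=9, drain=10, capacity=500))   # ~(0.0, 0): no congestion
print(leaky_bucket(arrival=11, drain=10, capacity=500))  # delay pinned near capacity, steady loss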

Steven Cherry Phone calls used to run on a dedicated circuit between the caller and the receiver. Everything runs on TCP now, I guess in hindsight, it’s remarkable that we can even have phone calls and Zoom sessions at all with our voices and video chopped up into packets and sent from hop to hop and reassembled at a destination. At a certain point, the retuning that you’re doing of TCP starts to look more and more like a dedicated circuit, doesn’t it? And how do you balance that against the fundamental point of TCP, which is to keep routers and other midpoints available to packets from other transmissions as well as your own?

Shivendra Panwar So that is the key point that you have mentioned here. And this was, in fact, a hugely controversial point back in the ’80s and ’90s, when the first experiments to switch voice from circuit-switched networks to packet-switched networks were considered. And there were many diehards who said you cannot equal the latency and reliability of a circuit-switched network. And to some extent, actually, that’s still right. The quality on a circuit-switched line, by the time of the 1970s and 1980s, when it had reached its peak of development, was excellent.

And sometimes we struggle to get to that quality today. However, the cost issue overrode it. And the fact that you are able to share the network infrastructure with millions and now billions of other people made the change inevitable. Now, having said that, this seems to be a tradeoff between quality and cost. And to some extent it is. But there is, of course, a ceaseless effort to try and improve the quality without giving up anything on the cost. And that’s where the engineering comes in, and where monitoring what’s happening to your connection on a continuous basis comes in: whenever TCP senses that congestion is building up, it backs off, or reduces its rate, so that it does not contribute to the congestion and its vital bits get through in time.
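The classic form of that backing off is TCP’s additive-increase/multiplicative-decrease rule; here is a stripped-down sketch (illustrative only, not any production TCP stack):

# Stripped-down AIMD (additive increase, multiplicative decrease):
# grow the congestion window by one packet per round trip while all
# is well; halve it the moment a loss signals congestion. The result
# is the familiar sawtooth that probes for capacity without flooding
# the queues.

import random

cwnd = 1.0        # congestion window, in packets
capacity = 100    # level at which the path starts dropping packets
for rtt in range(60):
    congested = cwnd > capacity or random.random() < 0.01
    if congested:
        cwnd = max(cwnd / 2, 1.0)   # multiplicative decrease: back off
    else:
        cwnd += 1.0                 # additive increase: probe for more
    print(f"RTT {rtt:2d}: window = {cwnd:5.1f} packets")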

Steven Cherry You make a comparison to shoppers at a grocery store picking which checkout lane has the shorter line or is moving faster. Maybe as a network engineer, you always get it right, but I often pick the wrong lane.

Shivendra Panwar That is true in terms of networking as well, because there are certain things that you cannot predict. A packet might go to one of two queues in a router, for example, and one may look a lot shorter than the other. But there is some holdup, right? A packet may need extra processing or some other issue may crop up. And so you may end up spending more time waiting in a line which initially appeared short to you. So there is actually a lot of randomness in networking. In fact, a knowledge of probability theory, queuing theory, and all of this probabilistic math is the basis of engineering networks.

Steven Cherry Let’s talk about cellular for a minute. The move to 5G will apparently help us reduce latency by reducing frame durations, but apparently it also potentially opens us up to more latency because of its use of millimeter waves?

Shivendra Panwar That is indeed a tradeoff. The engineers who work at the physical layer have been working very hard to increase the bandwidth, to get us into gigabits per second at this point in 5G, and to reduce the frame lengths, so the time you spend waiting to put your bits onto the channel is reduced. But in this quest for more bandwidth, they moved up the electromagnetic spectrum to millimeter waves, which have a lot more capacity but have poorer propagation characteristics. With millimeter waves, the signal can no longer go through the wall of a building, for example, or even the human body or a tree. If you can imagine yourself, let’s say, in Times Square before Covid, walking with your 5G phone, every passerby or every truck rolling by would potentially block the connection between your cell phone and the cell tower. Those interruptions are driven by the physical world. In fact, I joke that this is the case of Sir Isaac Newton meeting Maxwell, the electromagnetic guru. Because those interruptions are essentially uncontrollable, you can get typical interruptions of the order of half a second or a second before you switch to another base station, which is the current technology, and find another way to get your bits through. So those blockages, unfortunately, can easily add a couple of hundred milliseconds of delay, because you may not have an alternate way to get your bits through to the cell tower.

Steven Cherry I guess that’s especially important, not so much for phone conversations and Web use or whatever we’re using our phones for, where, as we said before, a certain amount of latency is not a big problem. But 5G is going to be used for the Internet of Things. And there, there will be applications that require very low latency.

Shivendra Panwar Okay, so there are some relatively straightforward solutions. If your application needs relatively low bandwidth—and many of the IoT applications need kilobits per second, which is a very low rate—what you could do is assign those applications to what is called sub-six gigahertz. Those are the frequencies that we currently use. They are more reliable in the sense that they penetrate buildings, they penetrate the human body.

And as long as your base station has decent coverage, you can have more predictable performance. It is only as we move up the frequency spectrum, and we try to send broadband applications—applications that use a gigabit per second or more—and we want the reliability and the low latency, that we start running into problems.

Steven Cherry I noticed that, as you alluded to earlier, there are all sorts of applications where we would benefit from very low latency or maybe can’t even tolerate anything but very low latency. So to take just one example, another of our colleagues, a young robotics professor at NYU Tandon, is working on exoskeletons and rehabilitation robots for Parkinson’s patients to help them control hand tremors. And he and his fellow researchers say, and I’m quoting here, “a lag of nearly 10 or 20 milliseconds can affect effective compensation by the machine and in some cases may even jeopardize safety.”

So are there latency issues even within the bus of an exoskeleton or a prosthetic device that they need to get down to single-digit millisecond latency?

Shivendra Panwar That sounds about right, in terms of the 10 to 20 milliseconds or perhaps even less. One solution to that, of course, is to make sure that all of the computational power—all of the data that you need to transmit—stays on the human subject (the person who’s using the exoskeleton), so that you do not depend on the networking infrastructure. That will work. The problem with that is the compute power and communications will, first of all, be heavy, even if we can keep reducing that thanks to Moore’s Law, and will also drain a lot of battery power. Another approach, if we can get the latency and reliability right, is to offload all of that computation to, let’s say, the nearest base station or a Wi-Fi access point. This would reduce the amount of weight that you’re carrying around in your exoskeleton and reduce the amount of battery power that you need to be able to do this for long periods of time.

Steven Cherry Yeah, something I hadn’t appreciated until your article was, you say that ordinary robots as well could be lighter and have greater uptime and might even be cheaper with ultra-low latency.

Shivendra Panwar That’s right. Especially if you think of flying robots. Right? You have UAVs. And there, weight is paramount to keep them up in the air.

Steven Cherry As I understand it, Shiv, there’s a final obstacle, or limit at least, to reducing latency. And that’s the speed of light.

Shivendra Panwar That’s correct. So most of us are aware of the magic number, which is 300 000 km/s. But that’s in a vacuum, or through free space. If you use a fiber-optic cable, which is very common these days, that goes down to 200 000 km/s. You always had to take that into account, but it was not a big issue.

But now, if you think about it, if you are trying to aim for a millisecond delay, let’s say, that is a distance of, quote unquote, only 300 km that light travels in free space—or even less in a fiber-optic cable, down to 200 kilometers. That means you cannot do some of the things we’ve been talking about sitting in New York if it happens to be something that we are controlling in Seoul, South Korea, right? The speed of light takes a perceptible amount of time to get there. Similarly, what has been happening is the service providers who want to host all these new applications now have to be physically close to you to meet those delay requirements. Earlier, we didn’t consider them very seriously, because there were so many other sources of delay, and delays were of the order of a hundred milliseconds—up to a second, even, if you think further back—so a few extra milliseconds didn’t matter. And so you could have a server farm in Utah dealing with the entire continental U.S., and that would be sufficient. But that is no longer possible.

And so a new field has come up, edge computing, which takes applications closer to the edge in order to support more of these applications. The other reason to consider edge computing is that you can keep the traffic off the Internet core if you push it closer to the edge. For both those reasons, computation may be coming closer and closer to you, in order to keep the latency down and to reduce costs.
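Those speed-of-light numbers put a hard radius on where an edge server can live; here is a quick check of my own (one-way distance, ignoring every other source of delay):

# Maximum one-way distance a signal can cover within a latency
# budget: distance = speed * time, ignoring queueing, routing, and
# radio overhead entirely.

C_FREE_SPACE = 300_000  # km/s, vacuum or free space
C_FIBER = 200_000       # km/s, light slowed inside glass fiber

for budget_ms in (1, 10, 150):
    t_s = budget_ms / 1000.0
    print(f"{budget_ms:3d} ms budget: {C_FREE_SPACE * t_s:7.0f} km free space, "
          f"{C_FIBER * t_s:7.0f} km fiber")
# 1 ms   ->   300 km /   200 km: the server must sit in your metro area
# 150 ms -> 45000 km / 30000 km: a phone call can circle the planet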

Steven Cherry Well, Shiv, it seems we have a never-ending ebb and flow between putting more of the computing at the endpoint, and then more of the computing at the center, from the mainframes and dumb terminals of the 1950s, to the networked workstations of the ’80s, to cloud computing today, to putting AI inside of IoT nodes tomorrow, but all through it, we always need the network itself to be faster and more reliable. Thanks for the thankless task of worrying about the network in the middle of it all, and for being my guest today.

Shivendra Panwar Thank you. It’s been a pleasure talking to you, Steve.

Steven Cherry We’ve been speaking with IEEE Fellow Shivendra Panwar about his research into ultra-low-latency networking at NYU’s Tandon School of Engineering.

This interview was recorded October 27, 2020. Our thanks to Mike of Gotham Podcast Studio for our audio engineering; our music is by Chad Crouch.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers, a professional organization dedicated to advancing technology for the benefit of humanity.

For Radio Spectrum, I’m Steven Cherry.

Are Electronic Media Any Good at Getting Out the Vote?

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/at-work/education/are-electronic-media-any-good-at-getting-out-the-vote

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

For some years and still today, there’s been a quiet but profound schism among political strategists. There are those who favor modern methods and modern media—mass mailings, robocalling, television advertising, and, increasingly, social-media advertising. On the other hand are those, including my guest today, who not only still see a value in traditional person-to-person messaging, but see it as, frequently, the better bang for the campaign buck.

Just last week [this was recorded Oct 5, 2020—Ed.] the attorney general of Michigan— a state that has been a battleground, not just for electoral delegates, but this methodological dispute—announced that two political operatives were charged with felonies in connection with robocalls that made a number of false claims about the risks of voting by mail, in an apparent attempt to discourage residents of Detroit from voting by mail. And last week as well, the Biden campaign announced a complete turnaround on the question of door-to-door canvassing, perhaps the gold standard of person-to-person political campaigning. Are they perhaps afraid of Democratic standard-bearers making the same mistake twice?

In the endless post-mortem of the 2016 Presidential election, an article in Politico argued that the Clinton campaign was too data-driven and model-driven, and refused local requests, especially in Michigan, for boots-on-the-ground support. It quoted a longtime political hand in Michigan as describing quote “months of failed attempts to get attention to the collapse she was watching unfold in slow-motion among women and African-American millennials.”

I confess I saw something of that phenomenon on a recent Saturday. I’m living in Pittsburgh these days, and in the morning, I worked a Pennsylvania-based phone bank for my preferred political party. One of my first calls was to someone in the Philadelphia area, who told me he had already made his absentee ballot request and asked, while he had me on the phone, when his ballot would come. “There used to be someone around here I forget what you call her but someone I could ask stuff of.” That was strike one.

In another call, to a man in the Erie area, the conversation turned to yard signs. He said he would like to put one out but he had no idea where to get it. Strike two. In the late afternoon, two of us went to a neighborhood near us to put out door-hangers, and if we saw someone face-to-face we would ask if they wanted a yard sign. One fellow said he would. “We were supposed to get one,” he told us. When he saw we had a stack of them in our car, he sheepishly added, “We were supposed to get two in fact, one for a friend.” That was my third indication in one day that there was a lack of political party involvement at the very local level—in three different parts of what could well be the most critical swing state of the 2020 Presidential election.

When I strung these three moments together over a beer, my partner immediately thought of a book she owned, Get Out the Vote, now in its fourth edition. Its authors, Donald Green and Alan Gerber, argue that political consultants and campaign managers have underappreciated boots-on-the-ground canvassing in person and on the phone, in favor of less personal, more easily-scaled methods—radio and TV advertising, robocalling, mass mailings, and the like.

Of particular interest, they base their case on real data from experimental research. The first edition of their book described a few dozen such experiments; their new edition, they say, summarizes hundreds.

One of those authors is Donald Green, a political scientist at Columbia University focusing on such issues as voting behavior and partisanship, and most importantly, methodologies for studying politics and elections. His teaching career started at Yale University, where he directed its Institution for Social and Policy Studies. He joins us via Skype.

Steven Cherry Don, welcome to the podcast.

Donald Green Thank you very much for having me.

Steven Cherry Modern campaigns can employ an army of advisers, consultants, direct mail specialists, phone bank vendors, and on and on. You say that much of the advice candidates get from these professionals comes from war stories and not evidence. Robocalls seem to be one example of that. A study of a 2006 Texas primary found that 65 000 calls for one candidate increased his vote total by about two votes.

Donald Green Yes, the robocalls have an almost perfect record of never working in randomized trials. These are trials in which we randomly assigned some voters to get a robocall and others not and allow the campaign to give it its best shot with the best possible robocall. And then at the end of the election, we look at voter turnout records to see who voted. And in that particular case, the results were rather dismal. But not just in that case. I think that there have been more than 10 such large-scale experiments, and it’s hard to think of an instance in which they’ve performed well.

Steven Cherry The two robocallers in Michigan allegedly made 12 000 calls into Detroit, which is majority black—85 000 calls in total to there and similar areas in other cities. According to a report in the Associated Press, calls falsely claimed that voting by mail would result in personal information going into databases that will be used by police to resolve old warrants, credit card companies to collect debts, and federal officials to track mandatory vaccines. It quoted the calls as saying, “Don’t be finessed into giving your private information to The Man. Beware of vote-by-mail.” You’ve studied plenty of affirmative campaigns, that is, attempts to increase voter participation. Do you have any thoughts about this negative robocalling?

Donald Green Well, that certainly seems like a clear case of attempted voter suppression—to try to scare people away from voting. I don’t think I’ve ever seen anything like this. I haven’t heard the call. I’d be curious to know something about the voiceover that was used. But let’s suppose that it seemed credible. You know, the question is whether people take it seriously enough or whether they questioned the content, maybe talking to others in ways that undercut its effectiveness. But if robocalls seldom work, it’s probably because people just don’t notice them. Not sure whether this one would potentially work because it would get somebody to notice at any rate. We don’t know how effective it would be. I suspect not terribly effective, but probably effective enough to be concerning.

Steven Cherry Yeah, it was noticed enough that complaints about it filtered up to the state attorney general, but that doesn’t give us any quantitative data.

For decades, campaigns have spent a lot of their money on television advertising. And it can influence strategy. To take just one example, there’s a debate among Democrats about whether their candidate should invest in Texas because there’s so many big media markets. It’s a very expensive state to contest. What does the experimental data tell us about television?

Donald Green Experiments on television are relatively rare. The one that I’m most familiar with is one that I helped conduct with my three coauthors back when we were studying the Texans for Rick Perry campaign in 2006. We randomly assigned 18 of the 20 media markets in Texas to receive varying amounts of TV advertising, and varied the timing at which it would be rolled out. And we conducted daily tracking polls to see the extent to which public opinion moved as ads rolled out in various media markets. And what we found was there was some effect of Rick Perry’s advertising campaign, but it subsided very quickly. Only a few days passed before it was essentially gone without a trace, which means that one can burn quite a lot of money for a relatively evanescent effect. I really don’t think that there’s much evidence that the very, very large amounts of money that are spent on television in the context of a presidential campaign have any lasting effect. And so it’s really an open question as to whether, say, the $300 million that the Clinton campaign spent in 2016 would have been better spent, or at least as well spent, on the ground.

Steven Cherry In contrast to war stories, you and your colleagues conduct true randomized experiments. Maybe you could say a little bit more about how hard that is to do in the middle of an election.

Donald Green Yes, it’s a juggling act for sure. The idea is, if we wanted to study, for example, the effects of direct mail on voter turnout, one would randomly assign large lists of registered voters, some to get the mail, some to be left alone. And then we’d use the fact that voting is a public record in the United States—and a few other countries as well—to gauge voter turnout after the election is over. This is often unsatisfactory for campaigns. They want to know the answer ahead of time. But there’s no good way of answering the question before people actually cast their ballots. And so this is something that’s been done in increasing numbers since 1998. And now hundreds of those trials have been done on everything ranging from radio, robocalls, TV, direct mail, phone calls, social media, etc., etc.

Steven Cherry One thing you would expect campaign professionals to have data on is cost-effectiveness, but apparently they don’t. But you do. You’ve found, for example, that you can generate the same 200 votes with a quarter of a million robocalls, 38 000 mailers, or 2500 door-to-door conversations.

Donald Green Yes, we try to not only gauge the effects of the intervention through randomized trials but also try to figure out what that amounts to in terms of dollars per vote. And these kinds of calculations are always going to be context-dependent because some campaigns are able to rely on inexpensive people power, to inspire volunteers in vast numbers. And so in some sense, the costs that we estimate could be greatly overstated for the kinds of boots-on-the-ground canvassing that are typical of presidential elections in battleground states. Nevertheless, I think that it is interesting to note that even with relatively cautious calculations, to the effect that people are getting $16 an hour for canvassing, canvassing still acquits itself rather well in terms of its comparisons to other campaign tactics.
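The contacts-per-vote arithmetic implied above is easy to tabulate. In the sketch below, the contact counts come straight from the conversation (each tactic yields roughly 200 votes), while the unit costs are hypothetical placeholders of mine, except for the $16-per-hour canvassing wage Green mentions.

# Contacts needed per vote, from the figures quoted above. The unit
# costs are ASSUMED placeholders, not figures from the book, except
# the $16/hour canvassing wage (here assumed to yield roughly 8
# completed conversations per hour).

contacts_per_vote = {
    "robocalls":     250_000 / 200,  # 1,250 calls per vote
    "direct mail":    38_000 / 200,  # 190 mailers per vote
    "door knocking":   2_500 / 200,  # 12.5 conversations per vote
}

cost_per_contact = {
    "robocalls":     0.05,      # assumed cost per call
    "direct mail":   0.75,      # assumed printing + postage per mailer
    "door knocking": 16.0 / 8,  # $16/hr wage over ~8 conversations/hr
}

for tactic, contacts in contacts_per_vote.items():
    dollars = contacts * cost_per_contact[tactic]
    print(f"{tactic:13s}: {contacts:7.1f} contacts/vote, ~${dollars:.0f}/vote")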

Steven Cherry Now that’s just for turnout, not votes for one candidate instead of another; a nonpartisan good-government group might be interested in turnout for its own sake, but a campaign wants a higher turnout of its own voters. How does it make that leap?

Donald Green Well, typically what they do is rely on voter files—and augmented voter files, which is, say, voter files that had other information about people appended to them—in order to make an educated guess about which people on the voter file are likely to be supportive of their own campaign. So Biden supporters have been micro-targeted and so have Trump supporters and so on and so forth, based on their history of donating to campaigns or signing petitions or showing up in party primaries. And that makes the job of the campaign much easier because instead of trying to persuade people or win them over from the other side, they’re trying to bring a bigger army to the battlefield by building up enthusiasm and mobilizing their own core supporters. So the ideal for that kind of campaign is a person who is very strongly aligned with the candidate that is sponsoring the campaign but has a low propensity of voting. And so that that kind of person is really perfect for a mobilization campaign.

Steven Cherry So that could also be done demographically. I mean, there are zip codes in Detroit that are 80 percent black.

Donald Green Yes, there are lots of ways of doing this based on aggregates. Now, you often don’t have to rely on aggregates, because you typically have information about each person. But if you were to do it, say, precinct by precinct, you could use percentage African-American as a proxy for the left, or demographics associated with Trump voting as proxies for the right. So it’s possible to do it, but it’s probably not state of the art.

Steven Cherry You mentioned door-to-door canvassing; it increases turnout but—perhaps counterintuitively—apparently it doesn’t matter much whether it’s a close contest or a likely blowout, and it doesn’t matter much what the canvasser’s message is.

Donald Green This is one of the most interesting things, actually about studying canvassing and other kinds of tactics experimentally. It appears that some of the most important communication at the door is nonverbal. You know, you show up at my door, and I wonder what you’re up to—are you trying to sell me something, trying to, you know, make your way in here? I figure, oh, actually you’re just having a pleasant conversation. You’re a person like me. You’re taking your time out to encourage me to vote. Well, that sounds okay. And I think that that message is probably the thing that sticks with people, perhaps more than the details of what you’re trying to say to me about the campaign or the particularities about why I should vote—should I vote because it’s my civic duty or should I vote because I need to stand up in solidarity with my community? Those kinds of nuances don’t seem to matter as much as we might suppose.

Steven Cherry So it seems reminiscent of what the sociologists would call a Hawthorne effect.

Donald Green Some of it is reminiscent of the Hawthorne effect. The Hawthorne effect is basically that we increase our productivity when we’re being watched. And so there’s some sense in which being monitored, being encouraged by another person, makes us feel as though we’ve got to give a bit more effort. So there’s a bit of that. But I think partly what’s going on is voting is a social activity. And just as you’re more likely to go to a party if you were invited by a person as opposed to by e-mail, so too you’re more likely to show up to vote if somebody makes an authentic, heartfelt appeal to you and encourages you to vote in person, or through something that’s very much like in person: some gathering or some friend-to-friend communication, as opposed to something impersonal, like getting a postcard.

Steven Cherry So without looking into the details of the Biden campaign flip-flop on door-to-door canvassing, your hunch would be that they’re making the right move?

Donald Green Yes, I think so. I mean, putting aside the other kinds of normative concerns, about whether people are putting themselves or others at risk if they go out to canvass … In terms of the raw politics of winning votes, it’s a good idea, in part because in 2018 they were able to field an enormous army of very committed activists in many of the closely contested congressional elections, with apparently very good results. And the tactic itself is so well tested that if they can do it with appropriate PPE and precautions, they could be quite effective.

Steven Cherry In your research you found, by contrast, that door-hangers and yard signs—the way I spent that Saturday afternoon I described—have little or maybe even no utility.

Donald Green Well, yard signs might have some utility to candidates, especially down-ballot candidates who are trying to increase their vote share. It doesn’t seem to have much of an effect on voter turnout. Maybe that’s because the election is already in full swing and everybody knows that there’s an election coming up—the yard sign isn’t going to convey any new information. But I do think the door hangers have some residual effect. They’re probably about as effective as a leaflet or a mailer, which is not very effective, but maybe a smidge better than zero.

Steven Cherry You’re more positive on phone banks, albeit with some qualifiers.

Donald Green Yes, I think that phone banking, especially authentic volunteer-staffed phone banking, can be rather effective. You know, I think that if you have an unhurried conversation with someone who is basically like-minded (they’re presumably targeted because they’re someone who shares more or less your political outlook), and you bring them around, explaining to them why it’s an important and historic election, giving them any guidance you can about when and how to vote, you can have an effect. It’s not an enormous effect. It’s something on the order of, say, three percentage points, or about one additional vote for every 30 calls you complete. But it’s a substantial effect.

And if you are able to extract a commitment to vote from that person and you were to be so bold as to call them back on the day before the election to make sure that they’re making good on their pledge, then you can have an even bigger effect, in fact, a very large effect. So I do think it can be effective. I also think that perfunctory, hurried calls by telemarketing operations are rather ineffective for a number of reasons, but especially the lack of authenticity.

Steven Cherry Let’s turn to social media, particularly Facebook. You described one rather pointless Facebook campaign that ended up costing $474 per vote. But your book also describes a very successful experiment in friend-to-friend communication on Facebook.

Donald Green That’s right. We have a number of randomized trials suggesting that encouragements to vote via Facebook ads, or other kinds of mass-produced Facebook media, seem to be relatively limited in their effects. Perhaps the biggest, most intensive Facebook advertising campaign was its banner ads that ran all day long—I think it was the 2010 election—and had precisely no effect, even though it was tested among 61 million people.

More effective on Facebook were ads that showed you whether your Facebook friends had claimed to vote. Now, that didn’t produce a huge harvest of votes, but it increased turnout by about a third of a percentage point. So better than nothing. The big effects you see on Facebook and elsewhere are where people are, in a personalized way, announcing the importance of the upcoming election and urging their Facebook friends—their own social networks—to vote.

And that seems to be rather effective, and indeed is part of a larger literature that’s now coming to light, suggesting that even text messaging, though not a particularly personal form of communication, is quite effective when friends are texting other friends about the importance of registering and voting. Surprisingly effective, and that, I think, opens up the door to a wide array of different theories about what can be done to increase voter turnout. It seems as though friend-to-friend communication, or neighbor-to-neighbor communication, or communication among people who are coworkers or co-congregants … that could be the key to raising turnout—not by just one or two percentage points, but more like eight to 10.

Steven Cherry On this continuum of personal versus impersonal, Facebook groups—which are a new phenomenon—seem to lie somewhere in between. Some people are calling them “toxic echo chambers,” but they would seem to maybe be a godsend for political engagement.

Donald Green I would think so, as long as the communication within the groups is authentic. If it’s automated, then probably not so much. But to the extent that the people in these groups have gotten to know each other, or knew each other before they came into the group, then I think communication among them could be quite compelling.

Steven Cherry Yes. Although, of course, that person that you think you’re getting to know might be some employee in St. Petersburg, Russia, of the Internet Research Agency. Snapchat has been getting some attention these days in terms of political advertising. They’ve tried to be more transparent than Facebook, and they do some fact-checking on political advertising. Could it be a better platform for political ads or engagement?

Donald Green I realize I just don’t know very much about the nuances of what they’re doing. I’m not sure that I have enough information to say.

Steven Cherry Getting back to more analog activities, your book discusses events like rallies and processions, but I didn’t see anything about smaller coffee-klatch-style events where, say, you invite all your neighbors and friends to hear a local candidate speak. That would seem to combine the effectiveness of door-to-door canvassing with the Facebook friend-to-friend campaign. But maybe it’s hard to study experimentally.

Donald Green That’s right. I would be very, very optimistic about the effects of those kinds of small gatherings. And it’s not that we are skeptical about their effects. It’s just, as you say, difficult to orchestrate a lot of experiments where people are basically opening their homes to friends. We need to rope in more volunteers to bring in their friends experimentally.

Steven Cherry The business model for some campaign professionals is to get paid relative to the amount of money that gets spent. Does that disincentivize the kind of person-to-person campaigning you generally favor?

Donald Green Yes, I would say that one of the biggest limiting factors on person-to-person campaigning is that it’s very difficult for campaign consultants to make serious money off of it. And that goes double for the kind of serious money that is poured into campaigns in the final weeks. Huge amounts of money tend to be donated within the last three weeks of an election. And by that point, it’s very difficult to build the infrastructure necessary for large-scale canvassing or really any kind of retail-type politics. For that reason, the last-minute money tends to be dumped into digital ads and in television advertising—and in lots and lots of robocalls.

Steven Cherry Don, as we record, this is less than a week after the first 2020 presidential debate and other events in the political news have maybe superseded the debate already. But I’m wondering if you have any thoughts about it in terms of getting out the vote. Many people, I have to say, myself included, found the debate disappointing. Do you think it’s possible for a debate to depress voter participation?

Donald Green I think it’s possible. I think it’s rather unlikely. To the extent that political science researchers have argued that negative campaigning depresses turnout, it tends to depress turnout among independent voters, not so much among committed partisans, who watch the debate and realize more than ever that their opponent is aligned with the forces of evil. The independent voters might say, “a plague on both your houses, I’m not going to participate.” But I think that this particular election is one that is so intrinsically interesting that the usual way that independents feel about partisan competition probably doesn’t apply here.

Steven Cherry On a lighter note, an upcoming podcast episode for me will be about video game culture. And it’ll be with a professor of communications who writes her own video games for her classes. Your hobby turns out to be designing board games. Are they oriented toward political science? Is there any overlap of these passions?

Donald Green You know, it’s strange that they really don’t overlap at all. My interest in board games goes back to when I was a child. I’ve always been passionate about abstract board games like chess or go. And it was an accident that I started to design them myself. I did it actually when my now fully-adult children were kids and we were playing with construction toys. And I began to see possibilities for games in those construction toys. And one thing led to another. And they were actually deployed to the world and marketed. And now I think they’re kind of going the way of the dinosaur. But there are still a few dinosaurs like me who enjoy playing on an actual physical board.

Steven Cherry My girlfriend and I still play Rack-O. So maybe this is not a completely lost cause.

Well, Don, I think in the US, everyone’s thoughts will never be far from the election until the counting stops. Opinions and loyalties differ. But the one thing I think we can all agree on is that participation is essential for the health of the body politic. On behalf of all voters, let me thank you for all that your book has done toward that end, and for myself and my listeners, thank you for joining me today.

Donald Green I very much appreciate it. Thanks.

Steven Cherry We’ve been speaking with Donald Green, a political scientist and co-author of Get Out the Vote, which takes a data-driven look at maximizing efforts to get out the vote.

This interview was recorded October 5th, 2020. Our thanks to Mike at Gotham Podcast Studio for audio engineering. Our music is by Chad Crouch.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.


Going Carbon-Negative—Starting with Vodka

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/energy/environment/going-carbonnegativestarting-with-vodka

Steven Cherry Hi this is Steven Cherry for Radio Spectrum.

In 2014, two Google engineers, writing in the pages of IEEE Spectrum, noted that “if all power plants and industrial facilities switch over to zero-carbon energy sources right now, we’ll still be left with a ruinous amount of CO2 in the atmosphere. It would take centuries for atmospheric levels to return to normal, which means centuries of warming and instability.” Citing the work of climatologist James Hansen, they continued: “To bring levels down below the safety threshold, Hansen’s models show that we must not only cease emitting CO2 as soon as possible but also actively remove the gas from the air and store the carbon in a stable form.”

One alternative is to grab carbon dioxide as it’s produced, and stuff it underground or elsewhere. People have been talking about CCS, which alternatively stands for carbon capture and storage, or carbon capture and sequestration, for well over a decade. But you can look around, for example at Exxon-Mobil’s website, and see how much progress hasn’t been made.

In fact, in 2015, a bunch of mostly Canadian energy producers decided on a different route. They went to the XPRIZE people and funded what came to be called the Carbon XPRIZE to, as a Spectrum article at the time said, turn “CO2 molecules into products with higher added value.”

In 2018, the XPRIZE announced 10 finalists, who divvied up a $5 million incremental prize. The prize timeline called for five teams each to begin an operational phase in two locations, one in Wyoming and the other in Alberta, culminating in a $20 million grand prize. And then the coronavirus hit, rebooting the prize timeline.

One of the more unlikely finalists emerged from the hipsterish Bushwick neighborhood of Brooklyn, N.Y. Their solution to climate change: vodka. Yes, vodka. The finalist, which calls itself the Air Company, takes carbon dioxide that has been liquified and distills it into ethanol, and then fine-tunes it into vodka. The resulting product is, the company claims, not only carbon-neutral but carbon negative.

The scientific half of founding duo of the Air Company is Stafford Sheehan—Staff, as he’s known. He had two startups under his belt by the time he graduated from Boston College. He started his next venture while in graduate school at Yale. He’s a prolific researcher but he’s determined to find commercially viable ways to reduce the carbon in the air, and he’s my guest today, via Skype.

Staff, welcome to the podcast.

Stafford Sheehan Thanks very much for having me. Steven.

Steven Cherry Staff, I’m sure people have been teasing you that maybe vodka doesn’t solve the problem of climate change entirely, but it can make us forget it for a while. But in serious engineering terms, the Air Company process seems a remarkable advance. Talk us through it. It starts with liquefied carbon dioxide.

Stafford Sheehan Yeah, happy to. So, we use liquefied carbon dioxide because we source it offsite in Bushwick. But really, we can feed any sort of carbon dioxide into our system. We combine the carbon dioxide with water by first splitting the water into hydrogen and oxygen. Water is H2O, so we use what’s called an electrolyzer to split water into hydrogen gas and oxygen gas, and then combine the hydrogen together with carbon dioxide in a reactor over proprietary catalysts that I and my coworkers developed over the course of the last several years. And that produces a mixture of ethanol and water that we then distill to make a very, very clean and very, very pure vodka.
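In textbook form, the two steps Sheehan describes balance roughly as below (my reconstruction of standard stoichiometry; the company’s catalysts, and possibly the actual reaction pathway, are proprietary and may differ):

\[ 2\,\mathrm{H_2O} \longrightarrow 2\,\mathrm{H_2} + \mathrm{O_2} \quad \text{(electrolysis)} \]
\[ 2\,\mathrm{CO_2} + 6\,\mathrm{H_2} \longrightarrow \mathrm{C_2H_5OH} + 3\,\mathrm{H_2O} \quad \text{(hydrogenation of CO}_2\text{ to ethanol)} \]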

Steven Cherry Your claim that the product is carbon-negative is based on a life-cycle analysis. The calculation starts with an initial minus of the amount of carbon you take out of the atmosphere, and then we start adding back the carbon and carbon equivalents needed to get it into a bottle and onto the shelf of a hipster bar. That first step, where your supplier takes carbon out of the atmosphere, puts it into liquefied form, and then delivers it to your distillery, puts about 10 percent of that carbon back into the atmosphere.

Stafford Sheehan Yeah, 10 to 20 percent. When a tonne of carbon dioxide arrives in liquid form at our Bushwick facility, we assume that it took 200 kilograms of CO2 emitted—not only for the capture of the carbon dioxide; most of the carbon dioxide that we get actually comes from fuel ethanol fermentation. So we take the carbon dioxide emissions of the existing ethanol industry and we’re turning that into a higher-purity ethanol. But it’s captured from those facilities and then it’s liquefied and transported to our Bushwick facility. And if you integrate the lifecycle carbon emissions of all of the equipment, all the steel, all of the transportation, every part of that process, then you get maximum life-cycle CO2 emissions for the carbon dioxide of about 200 kilograms per tonne. So we still have eight hundred kilograms to play with at our facility.

Steven Cherry So another 10 percent gets eaten up by that electrolysis process.

Stafford Sheehan Yeah. The electrolysis process is highly dependent on what sort of electricity you use to power it. We use a company called Clean Choice, and we work very closely with a number of solar and wind deployers in New York State to make sure that all the electricity that’s used at our facility is solar or wind. And if you use wind energy, that’s the most carbon-friendly energy source that we have available there. Right now, the mix that we have, which is certified through Con Edison, is actually very heavily wind and a little bit of solar. But that was the lowest lifecycle-intensity electricity that we could get. It’s actually a little bit less than 10 percent that is consumed by electrolysis. So the electrolysis is actually quite green, as long as you power it with a very low-carbon source of electricity.

Steven Cherry And the distilling process, even though it’s solar-based, takes maybe another 13 percent or so?

Stafford Sheehan It’s in that ballpark. The distilling process is powered by an electric steam boiler. So we use the same electricity that we use to split water, to heat our water for the distillation system. So we have a fully electric distillery process. You could say that we’ve electrified vodka distilling.
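Tallying the rough shares quoted in this conversation gives a quick sanity check on the carbon-negative claim (a sketch of mine using the approximate percentages above, not the company’s audited life-cycle analysis):

# Rough carbon ledger per tonne of CO2 delivered, using only the
# approximate percentages quoted in this conversation; a sketch,
# not an audited life-cycle analysis.

BUDGET_KG = 1000  # one tonne of captured CO2

reemitted_kg = {
    "capture, liquefaction, transport": 0.20 * BUDGET_KG,  # "10 to 20 percent" (worst case)
    "electrolysis (wind/solar power)":  0.10 * BUDGET_KG,  # "a little bit less than 10 percent"
    "electric distillation":            0.13 * BUDGET_KG,  # "maybe another 13 percent or so"
}

emitted = sum(reemitted_kg.values())
print(f"re-emitted: {emitted:.0f} kg; net removed: {BUDGET_KG - emitted:.0f} kg per tonne")
# -> re-emitted: 430 kg; net removed: 570 kg, before bottling and
#    shipping are added back in: still well on the carbon-negative side.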

Steven Cherry There’s presumably a bit more by way of carbon equivalents when it comes to the bottles the vodka comes in, shipping it to customers, and so on, but that’s true of any vodka that ends up on that shelf of any bar, and those also have a carbon-emitting farming process—whether it’s potatoes or sugar beets or wheat or whatever—that your process sidesteps.

Stafford Sheehan Yes. And I think one thing that's really important is this electrification aspect. By electrifying all of our distillery processes, we avoid combustion emissions: For example, if you're boiling water using a natural gas boiler, your carbon emissions are going to be much, much higher compared to boiling water using an electric steam boiler that's powered with wind energy.

Steven Cherry It seems like if you just poured the vodka down the drain or into the East River, you would be benefiting the environment. I mean, would it be possible to do that on an industrial scale as a form of carbon capture and storage that really works?

Stafford Sheehan Yeah. I don’t think you’d want to pour good alcohol down the drain in any capacity just because the alcohol that we make can offset the use of fossil fuel alcohol.

So by putting the alcohol that we make—this carbon-negative alcohol—into the market, that means you have to make less fossil alcohol. And I'm including corn ethanol in that, because so many fossil fuels go into its production. That makes our indirect CO2 utilization very, very high, because we're offsetting a very carbon-intensive product.

Steven Cherry That’s interesting. I was thinking that maybe you could earn carbon credits and sell them for more than you might make with having a, you know, another pricey competitor to Grey Goose and Ketel One.

Stafford Sheehan The carbon credit system is still very young, especially in the U.S.

We also … our technology still has a ways to scale between our Bushwick facility—which is, I would say, a micro-distillery—and a real bona fide industrial process. We're working on that right now.

Steven Cherry Speaking of which, though, it is rather pricey stuff at this point, isn’t it? Did I read $65 or $70 a bottle?

Stafford Sheehan Yeah, it’s pricey not only because you pay a premium for our electricity, for renewable electricity, but we also pay a premium for carbon dioxide that, you know, has that that only emits 10 to 20 percent of the carbon intensity of its actual weight, so we pay a lot more for the inputs than is typical—sustainability costs money—and also we’re building these systems, they’re R&D systems, and so they’re  more costly to operate on a R&D scale, on kind of our pilot plant scale. As we scale up, the cost will go down. But at the scales we’re at right now, we need to be able to sell a premium product to be able to have a viable business. Now, on top of that, the product is also won a lot of awards that put it in that price category. It’s won three gold medals in the three most prestigious blind taste test competitions. And it’s won a lot of other spirits and design industry awards that enable us to get that sort of cost for it.

Steven Cherry I’m eager to do my own blind taste testing. Vodka is typically 80 proof, meaning it’s 60 percent water. You and your co-founder went on an epic search for just the right water.

Stafford Sheehan That we did. We tested … probably over one hundred and thirty different types of water. We tried to find which one was best for making vodka with the very, very highly pure ethanol that comes out of our process. And it's a very nuanced thing. Water, by changing things like the mineral content, the pH, the very, very small trace impurities—which in many cases are good for you—can really change the way it feels in your mouth and the way that it tastes. And adding alcohol to water just amplifies that: It lowers the boiling point and makes the mixture more volatile, so that it feels different in your mouth. And so different types of water have a different mouthfeel; they have a different taste. We did a lot of research on water to find the right one to mix with our vodka.

Steven Cherry Did you end up where you started with New York water?

Stafford Sheehan Yes. In a sense, we're very, very close to where we started.

Steven Cherry I guess we have to add your vodka to the list that New Yorkers would claim includes New York's bagels and New York's pizza as uniquely good because of their water.

Stafford Sheehan Bagels, pizza, vodka … hand sanitizer …

Steven Cherry It’s a well-balanced diet. So where do things stand with the XPRIZE? I gather you finally made it to Canada for this operational round, but take us through the journey getting there.

Stafford Sheehan So I initially entered the XPRIZE when it was soliciting its very first submissions—I believe it was 2016—and we progressed through the different stages. At the end of 2017, we had very rigorous due diligence on our prototype scale; we passed through that, got good marks, and continuously progressed through to the finals, where we are now. Now, of course, coronavirus threw both our team and many other teams for a loop, delaying deployment, especially for us: We're the only American team deploying in Canada. The other four teams that are deploying at the ACCTC [Alberta Carbon Conversion Technology Centre] are all Canadian teams. So being the only international team in a time of a global pandemic that essentially halted all international travel—and a lot of international commerce—put some substantial barriers in our way. But over the course of the last seven months or so, we've been able to get back on our feet. I'm currently sitting in quarantine in Mississauga, Ontario, getting ready for a factory-acceptance test that's scheduled to happen right as quarantine ends. So at the end of this month we'll be landing our skid in Alberta for the finals, then in November going through diligence and everything else to prove out its operation, and then operating it through the rest of the year.

Steven Cherry I understand that you weren’t one of the original 10 finalists named in 2018.

Stafford Sheehan No, we were not. We were the runner-up. There was a runner-up for each track—the Wyoming track and the Alberta track. And ultimately, there were teams that dropped out or merged for reasons within their own businesses. We were given the opportunity to rejoin the competition. We decided to take it because it was a good proving ground for our next step of scale, and it provided a lot of infrastructure that allowed us to do that at a reasonable cost—at a reasonable cost for us and at a reasonable cost in terms of our time.

Steven Cherry Staff, you were previously a co-founder of a startup called Catalytic Innovations. In fact, you made Forbes magazine's 30 Under 30 list in 2016 because of it. What was it? And is it still? And how did it lead to Air Company and vodka?

Stafford Sheehan For sure. That was a company that I spun out of Yale University along with a professor at Yale, Paul Anastas. We initially targeted making new catalysts for the fuel-cell and electrolysis industries, focusing on the water-oxidation reaction. To turn carbon dioxide—or to produce fuel in general using renewable electricity—there are three major things that need to happen. You need to have a very efficient renewable energy source; trees, for example, use the sun—that's photosynthesis. You have to be able to oxidize water into oxygen gas—that's why trees breathe out oxygen. And you have to be able to use the protons and electrons that come out of water oxidation to either reduce carbon dioxide or, through some other method, produce a fuel. I studied all three of those when I was in graduate school, and upon graduating, I spun out Catalytic Innovations, which focused on the water-oxidation reaction and on commercializing materials that more efficiently produce oxygen for all of the man-made processes, such as metal refining, that do that chemistry. And that company found its niche in corrosion—anti-corrosion and corrosion protection—because one of the big challenges whenever you're producing oxygen, be it for renewable fuels or to produce zinc or to do a handful of different electrorefining and electrowinning processes in the metal industry, is that you always have a very serious corrosion problem. We did a lot of work in that industry at Catalytic Innovations, and they still continue to do work there to this day.

Steven Cherry You and your current co-founder, Greg Constantine, are a classic match—a technologist, in this case an electrochemist and a marketer. If this were a movie, you would have met in a bar drinking vodka. And I understand you actually did meet at a bar. Were you drinking vodka?

Stafford Sheehan No, we were actually drinking whiskey. I actually wasn't a big fan of vodka pre–Air Company, but it was the product that really gave us the best value proposition, where really, really clean, highly pure ethanol is most important. So I've always been more of a whiskey man myself. Greg and I met over whiskey in Israel, on a trip for Forbes—they sent us out there because we were both part of their 30 Under 30 list—and we became really good friends out there. And then, fast-forward several months, and we started Air Company.

Steven Cherry Air Company’s charter makes it look like you would like to go far beyond vodka when it comes to finding useful things to do with CO2. In the very near term, you turned to using your alcohol in a way that contributes to our safety.

Stafford Sheehan Yeah. So we had always planned Air Company, not the Air Vodka Company. We had always planned to go into several different verticals with the ultra-high-purity ethanol that we create. And spirits is one of the places where you can realize the value proposition of a very clean and highly pure alcohol very readily—spirits, and fragrance is another one. But down the list a little bit is sanitizer, specifically hand sanitizer. And when coronavirus hit, we pivoted all of our technology, because there was a really major shortage of sanitizer in New York City. A lot of my friends from graduate school who had gone the medical track were telling me that the hospitals they worked in, in New York, didn't have any hand sanitizer. And when the hospitals—for the nurses and doctors—run out of hand sanitizer, that means you really have a shortage. And so we pivoted all of our technology to produce sanitizer in March. And for three months after that, we gave it away. We donated it to these hospitals, to the fire department, to the NYPD, and to other organizations in the city that needed it most.

Yeah, the hand sanitizer, I like to think, is also a very premium product. You can't realize the benefits of the very, very clean and pure ethanol that we use for it as readily as you can with the spirits, since you're not tasting it. But we did have to go through all of the facility registrations and that sort of thing to make the sanitizer, because it is classified as a drug. Our pilot plant in Bushwick is a converted warehouse, and I used to tell people in March that I always knew my future was going to be sitting in a dark warehouse in Bushwick making drugs. But, you know, I never thought that it was actually going to become a reality.

Steven Cherry That was in the short term. By now, you can get sanitizer in every supermarket and Home Depot. What are the longer-term prospects for going beyond vodka?

Stafford Sheehan Longer term, we're looking at commodity chemicals, even going on to fuel. So longer term, we're looking at the other verticals where we can take advantage of the high-purity value proposition of our ethanol—pharmaceuticals, chemical feedstocks, things like that. But then as we scale, we want to be able to make renewable fuel from this as well, and renewable chemicals. Ultimately, we want to get to world scale with this technology, but we need to take the appropriate steps to get there. And what we're doing now are the stepping-stones to scaling it.

Steven Cherry It seems like if you could locate the distilling operation right at the ethanol plant, you would just be making more ethanol for them from their waste product and avoiding a lot of shipping and so forth. You would just become a value-add to their industry.

Stafford Sheehan That is something that we hope to do in the long term. You know, our current skids are fairly small-scale—we couldn't take a massive amount of CO2 with them. But as we scale, we do hope to get there gradually, when we get to larger scales—we're talking about several barrels per day rather than liters per hour, which is the scale we're at now.

There's a lot of stuff you can turn CO2 into. One of the prime examples is calcium carbonate—the carbonate in it is essentially bound CO2. You can very easily convert carbon dioxide into things like that for building materials: poured concrete, components of bricks, things like that. There are a lot of different ways to mineralize CO2 as well. You can inject it into the ground, and that will also turn it into carbon-based minerals. Beyond that, as far as more complex chemical conversion goes, the list is almost endless. You can make plastics. You can make pharmaceutical materials. You can make all sorts of crazy stuff from CO2. Almost any of the base chemicals that have carbon in them can come from CO2. And in a way, they do come from CO2, because all the petrochemicals that we mine from the ground come from photosynthesis that happened over the course of the last two billion years.
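
As a quick sense of scale for the mineralization route described here, carbonating lime binds CO2 in a fixed mass ratio. A minimal sketch (plain stoichiometry; note that real building materials carbonate only partially):

```python
# CO2 bound when lime carbonates to calcium carbonate: CaO + CO2 -> CaCO3.
# Plain stoichiometry; real building materials carbonate only partially.

M_CAO, M_CO2 = 56.08, 44.01  # molar masses, g/mol

tonnes_co2_per_tonne_cao = M_CO2 / M_CAO  # fixed by the 1:1 molar ratio
print(f"~{tonnes_co2_per_tonne_cao:.2f} t CO2 bound per tonne of CaO")
# -> ~0.78 t of CO2 per tonne of lime
```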

Have you ever seen the movie Forrest Gump? There's a part in that where Bubba, Gump's buddy in the Vietnam War, talks about all the things you can do with shrimp, and it kind of goes on and on and on. I could say the same about CO2. You can make plastic. You can make clothes. You can make sneakers. You can make alcohol. You can make any sort of carbon-based chemical—ethylene, carbon monoxide, formic acid, methanol, ethanol … the list goes on. Just about any carbon-based chemical you can think of, you can make from CO2.

Steven Cherry Would it be possible to pull carbon dioxide out of a plastic itself and thereby solve two problems at once?

Stafford Sheehan Yeah, you could take plastic and capture the CO2 that's emitted when you either incinerate it or gasify it. That is a strategy that's used in certain places—gasification of waste, municipal waste. It doesn't give you CO2, but it actually gives you something that you can do chemistry with a little more easily: It gives you a syngas, a mixture of carbon monoxide and hydrogen. So there are a lot of different strategies that you can use to convert CO2 into things better for the planet than global warming.

Steven Cherry If hydrogen is a byproduct of that, you have a ready use for it.

Stafford Sheehan Yeah, exactly, that is one of the many places where we could source feedstock materials for our process. Our process is versatile, and that's one of the big advantages to it.

If we get hydrogen as a byproduct of chlor-alkali production, for example, we can use that instead of having to run the electrolyzer. If our CO2 comes from direct air capture, we can use that. And that means we can place our plants pretty much wherever there's air, water, and sunlight. As far as the products that come out, liquid products made from CO2 have a big advantage in that they can be transported, and they're not as volatile, obviously, as the gases.

Steven Cherry Well, Staff, it's a remarkable story, one that certainly earns you that XPRIZE finalist berth. We wish you great luck with it. But it seems like your good fortune is self-made and assured, in any event, to the benefit of the planet. Thank you for joining us today.

Stafford Sheehan Thanks very much for having me, Steven.

Steven Cherry We’ve been speaking with Staff Sheehan, co-founder of the Air Company, a Brooklyn startup working to actively undo the toxic effects of global warming.

This interview was recorded October 2, 2020. Our thanks to Miles of Gotham Podcast Studio for our audio engineering; our music is by Chad Crouch.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers.

For Radio Spectrum, I’m Steven Cherry.

 

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

 

Why Does the U.S. Have Three Electrical Grids?

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/energy/renewables/why-does-the-us-have-three-electrical-grids

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

If you look at lists of the 100 greatest inventions of all time, electricity figures prominently. Once you get past some key enablers that can’t really be called inventions—fire, money, the wheel, calendars, the alphabet—you find things like light bulbs, the automobile, refrigeration, radios, the telegraph and telephone, airplanes, computers and the Internet. Antibiotics and the modern hospital would be impossible without refrigeration. The vaccines we’re all waiting for depend on electricity in a hundred different ways.

It’s the key to modern life as we know it, and yet, universal, reliable service remains an unsolved problem. By one estimate, a billion people still do without it. Even in a modern city like Mumbai, generators are commonplace, because of an uncertain electrical grid. This year, California once again saw rolling blackouts, and with our contemporary climate producing heat waves that can stretch from the Pacific Coast to the Rocky Mountains, they won’t be the last.

Electricity is hard to store and hard to move, and electrical grids are complex, creaky, and expensive to change. In the early 2010s, Europe began merging its distinct grids into a continent-wide supergrid, an algorithm-based project that IEEE Spectrum wrote about in 2014. The need for a continent-wide supergrid in the U.S. has been almost as great, and by 2018 the planning of one was pretty far along—until it hit a roadblock that, two years later, still stymies any progress. The problem is not the technology, and not even the cost. The problem is political. That's the conclusion of an extensively reported investigation jointly conducted by The Atlantic magazine and InvestigateWest, a watchdog nonprofit that was founded in 2009 after one of Seattle's daily newspapers stopped publishing. The resulting article, with the heading, “Who Killed the Supergrid?”, was written by Peter Fairley, who has been a longtime contributing editor for IEEE Spectrum and is my guest today. He joins us via Skype.

Peter, welcome to the podcast.

Peter Fairley It’s great to be here, Steven.

Steven Cherry Peter, you wrote that 2014 article in Spectrum about the Pan-European Hybrid Electricity Market Integration Algorithm, which you say was needed to tie together separate fiefdoms. Maybe you can tell us what was bad about the separate fiefdoms that served Europe nobly for a century.

Peter Fairley Thanks for the question, Steven. That story was about a pretty wonky development that was nevertheless very significant. Europe, over the last century, has amalgamated its power systems to the point where the European grid now exchanges electricity literally across the continent—north, south, east, west. But until fairly recently, there have been different power markets operating within it. So even though the different regions are all physically interconnected, there's a limit to how much power can actually flow all the way from Spain up to Central Europe. And so there are these individual regional markets that handle keeping power supply and demand in balance and putting prices on electricity. That algorithm basically made a big step toward integrating them all, so that you'd have one big, more competitive, open market—and the ability, for example, if you have spare wind power in one area, to make use of it a thousand kilometers away.

Steven Cherry The U.S. also has separate fiefdoms. Specifically, there are three that barely interact at all. What are they? And why can’t they share power?

Peter Fairley Now, in this case, when we’re talking about the U.S. fiefdoms, we’re talking about big zones that are physically divided. You have the Eastern—what’s called the Eastern Interconnection—which is a huge zone of synchronous AC power that’s basically most of North America east of the Rockies. You have the Western Interconnection, which is most of North America west of the Rockies. And then you have Texas, which has its own separate grid.

Steven Cherry And why can’t they share power?

Peter Fairley Everything within those separate zones is synched up. So you've got your 60-hertz AC wave: The alternating current runs through a full cycle 60 times a second. And all of the generators, all of the power consumption within each zone, is doing that synchronously. But the East is doing it on its own. The West is on a different phase. Same for Texas.

Now you can trickle some power across those divides—across what are called “seams” that separate them—using DC power converters: basically giant substations with some of the world's largest electronic devices, which take AC power from one zone, turn it into DC power, and then produce a synthetic AC wave to put that power into another zone. To give you a sense of just how small the transfers are: The East and the West interconnections together have about 950 gigawatts of power-generating capacity, and they can share a little over one gigawatt of electricity.

Steven Cherry So barely one-tenth of one percent. There are enormous financial benefits and reliability benefits to uniting the three. Let’s start with reliability.

Peter Fairley Historically, when grids started out, you would have literally a power system for one neighborhood and a separate power system for another. And then over the last century, they have amalgamated: Cities connected with each other, and then states connected with each other. Now we have these huge interconnections. And reliability has been one of the big drivers for that, because you can imagine a situation where, if you're in city X and your biggest power generator goes offline—burns out or whatever—and you're interconnected with your neighbor, they probably have some spare generating capacity and they can help you out. They can keep the system from going down.

So similarly, if you could interconnect the three big power systems in North America, they could support each other. So, for example, if you have a major blackout or a major weather event like we saw last month—there was this massive heatwave in the West, and much of the West was struggling to keep the lights on. It wasn’t just California. If they were more strongly interconnected with Texas or the Eastern Interconnect, they could have leaned on those neighbors for extra power supply.

Steven Cherry Yeah, your article imagines, for example, the sun rising in the West during a heatwave sending power east; the sun setting in the Midwest, wind farms could send power westward. What about the financial benefits of tying together these three interconnects? Are they substantial? And are they enough to pay for the work that would be needed to unify them into a supergrid?

Peter Fairley The financial benefits are substantial, and they would pay for themselves. And there are really two reasons for that. One is as old as our systems, and that is: If you interconnect your power grids, then all of the generators in the amalgamated system can, in theory, serve the total load. And what that means is they're all competing against each other, and power plants that are inefficient are more likely to be driven out of the market or to operate less frequently. And so the whole system becomes more efficient, more cost-effective, and prices tend to go down. You see that kind of savings when you look at interconnecting the big grids in North America. Consumers benefit—not necessarily all the power generators, right? There you get more winners and losers. And so that's the old part of transmission economics.

What’s new is the increasing reliance on renewable energy and particularly variable renewable energy supplies like wind and solar. Their production tends to be more kind of bunchy, where you have days when there’s no wind and you have days when you’ve got so much wind that the local system can barely handle it. So there are a number of reasons why renewable energy really benefits economically when it’s in a larger system. You just get better utilization of the same installations.

Steven Cherry And that’s all true, even though sending power 1000 miles or 3000 miles? You lose a fair amount of that generation, don’t you?

Peter Fairley It’s less than people imagine, especially if you’re using the latest high voltage direct current power transmission equipment. DC power lines transmit power more efficiently than AC lines do, because the physics are actually pretty straightforward. An AC current will ride on the outside of a power cable, whereas a DC current will use the entire cross-section of the metal. And so you get less resistance overall, less heating, and less loss. And so. And the power electronics that you need on either side of a long power line like that are also becoming much more efficient. So you’re talking about losses of a couple of percent on lines that, for example in China, span over 3000 kilometers.

Steven Cherry The reliability benefits, the financial benefits, the way a supergrid would be an important step for helping us move off of our largely carbon-based sources of power—we know all this in part because in the mid-2010s a study was made of the feasibility, including the financial feasibility, of unifying the U.S. grid into one single supergrid. Tell us about the Interconnections Seams Study.

Peter Fairley So the Interconnection Seams Study [Seams] was one of a suite of studies that got started in 2016 at the National Renewable Energy Laboratory in Colorado, which is one of the national labs operated by the U.S. Department of Energy. And the premise of the Seams study was that the electronic converters sitting between the East and the West grids were getting old: They were built largely in the '70s, and they're going to start to fail and need to be replaced.

And the people at NREL were saying, this is an opportunity. Let’s think—and the power operators along the seam were thinking the same thing—we’re gonna have to replace these things. Let’s study our strategic options rather than have them go out of service and just automatically replace them with similar equipment. So what they posited was, let’s look at some longer DC connections to tie the East and the West together—and maybe some bigger ones. And let’s see if they pay for themselves. Let’s see if they have the kind of transformative effects that one would imagine that they would, just based on the theory. So they set up a big simulation modeling effort and they started running the numbers…

Now, of course, this got started in 2016 under President Obama, and it continued through 2017 and 2018 under a very different president. And basically, they affirmed that tying these grids together with long DC lines was a great idea—that it would pay for itself, that it would make much better use of renewable energy. But it also showed that it would accelerate the shutdown of coal-fired power. And that got them in some hot water with the new masters at the Department of Energy.

Steven Cherry By 2018 the study was largely completed and researchers began to share its conclusions with other energy experts and policymakers. You describe a meeting in Iowa, for example, where there was a lot of excitement over the Seams study. But you write that things took a dramatic turn at one such gathering in Lawrence, Kansas.

Peter Fairley Yes. So the study was complete as far as the researchers were concerned. And they were working on their final task under their contract from the Department of Energy, which was to write and submit a journal article in this case. They were targeting an IEEE journal. And they, as you say, had started making some presentations. The second one was in August, in Kansas, and there’s a DOE official—a political appointee—who’s sitting in the audience and she does not like what she’s hearing. She, while the talk is going on, pulls out her cell phone, writes an email to DOE headquarters, and throws a red flag in the air.

Steven Cherry The drama moved up the political chain to a pretty high perch.

Peter Fairley According to an email from one of the researchers that I obtained, which is presented in the InvestigateWest version of this article, it went all the way to the current Secretary of Energy, Daniel Brouillette, and perhaps to the then–Secretary of Energy, former Texas Governor [Rick] Perry.

Steven Cherry And the problem, you say in that article, was essentially the U.S. administration's connections to—devotion to—the coal industry.

Peter Fairley Right. You’ve got a president who has made a lot of noise both during his election campaign and since then about clean, beautiful coal. He is committed to trying to stop the bleeding in the U.S. coal industry, to slow down or stop the ongoing mothballing of coal-fired power plants. His Secretary of Energy. Rick Perry is doing everything he can to deliver on Trump’s promises. And along comes this study that says we can have a cleaner, more efficient power system with less coal. And yes, so it just ran completely counter to the political narrative of the day.

Steven Cherry You said earlier the financial benefits to consumers are unequivocal. But in the case of the energy providers, there would be winners and losers, and the losers would largely come from the coal industry.

Peter Fairley I would just add one thing to that—and this depends on the different systems you're looking at, the different conditions and scenarios and assumptions—but in a scenario where you have more renewable energy, there are also going to be impacts on natural gas. And the oil and gas industry is definitely also a major political backer of the Trump administration.

Steven Cherry The irony is that the grid is moving off of coal anyway, and to some extent, oil and even natural gas, isn’t it?

Peter Fairley Definitely oil. It’s just a very expensive and inefficient way to produce power. So we’ve been shutting that down for a long time. There’s very little left. We are shutting down coal at a rapid rate in spite of every effort to save it. Natural gas is growing. So natural gas has really been—even more so than renewables—the beneficiary of the coal shutdown. Natural gas is very cheap in the U.S. thanks to widespread fracking. And so it’s coming on strong and it’s still growing.

Steven Cherry Where is the Seams study now?

Peter Fairley The Seams study is sitting at the National Renewable Energy Lab. Its leaders, under pressure from the political appointees at DOE, have kept it under wraps. It appears that there may have been some additional work done on the study since it got held up in 2018, but we don't know what the nature of that work was. So it's just kind of missing in action at this point.

My sources tell me that there is an effort underway at the lab to get it out. And I think the reason for that is that they've taken a real hit in terms of the morale of their staff. The NREL Seams study is not the only one being held up; in fact, it's one of dozens, according to my follow-up reporting. And, you know, NREL researchers are feeling pretty hard done by, and I think the management is trying to show its staff that it has some scientific integrity.

But I think it’s important to note that there are other political barriers to building a supergrid. It might be a no brainer on paper, but in addition to the pushback from the fossil-fuel industry that we’re seeing with Seams, there are other political crosscurrents that have long stood in the way of long-distance transmission in the U.S. For example—and this is a huge one—that, in the U.S., most states have their own public utility commission that has to approve new power lines. And when you’re looking at the kind of lines that Seams contemplated, or that would be part of a supergrid, you’re talking about long lines that have to span, in some cases, a dozen states. And so you need to get approval from each of those states to transit— to send power from point A to point X. And that is a huge challenge. There’s a wonderful book that really explores that side of things called Superpower [Simon & Schuster, 2019] by the Wall Street Journal’s Russell Gold.

Steven Cherry The politics that led to the suppression of the publication of the Seams study go beyond Seams itself don’t they? There are consequences, for example, at the Office of Energy Efficiency and Renewable Energy.

Peter Fairley Absolutely. Seams is one of several dozen studies that I know of right now that are held up, and they go way beyond transmission. They get into energy-efficiency upgrades to low-income housing, prices for solar power… So, for example—and I believe this hasn't been reported yet; I'm working on it—the Department of Energy has hitherto published annual reports on renewable energy technologies like wind and solar. In those, they provide the latest update on how much it costs to build a solar power plant, for example, and they also update their goals for the technology. Those annual reports have now been scaled back: They will come out every other year, if not less frequently. That's an example of politics getting in the way, because the cost savings from delaying those reports are not great, but the potential impact on the market is. There are many studies, not just those performed by the Department of Energy, that use those official price numbers in their simulations. And so if you delay updating those prices for something like solar, where the prices are coming down rapidly, you are making renewable energy look less competitive.

Steven Cherry And even beyond the Department of Energy, the EPA, for example, has censored itself on the topic of climate change, removing information and databases from its own Web sites.

Peter Fairley That’s right. The way I think of it is, when you tell a lie, it begets other lies. And you and you have to tell more lies to cover your initial lie and to maintain the fiction. And I see the same thing at work here with the Trump administration. When the president says that climate change is a hoax, when the president says that coal is a clean source of power, it then falls to the people below him on the political food chain to somehow make the world fit his fantastical and anti-science vision. And so, you just get this proliferation of information control in a hopeless bid to try and bend the facts to somehow make the great leader look reasonable and rational.

Steven Cherry You say even e-mails related to the Seams study have disappeared, something you found through your Freedom of Information Act requests. What about the national labs themselves? Historically, they have been almost academic research organizations, or at least a home for unfettered academic-freedom-style research.

Peter Fairley That’s the idea. There has been this presumption or practice in the past, under past administrations, that the national labs had some independence. And that’s not to say that there’s never been political oversight or influence on the labs. Certainly, the Department of Energy decides what research it’s going to fund at the labs. And so that in itself shapes the research landscape. But there was always this idea that the labs would then be—you fund the study and then it’s up to the labs to do the best work they can and to publish the results. And the idea that you are deep-sixing studies that are simply politically inconvenient or altering the content of the studies to fit the politics that’s new. That’s what people at the lab say is new under the Trump administration. It violates. DOE’s own scientific integrity policies in some cases, for example, with the Lawrence Berkeley National Laboratory. It violates the lab’s scientific integrity policy and the contract language under which the University of California system operates that lab for the Department of Energy. So, yeah, the independence of the national labs is under threat today. And there are absolutely concerns among scientists that precedents are being set that could affect how the labs operate, even if, let’s say, President Trump is voted out of office in November.

Steven Cherry Along those lines, what do you think the future of grid unification is?

Peter Fairley Well, Steven, I've been writing about climate and energy for over 20 years now, and I would have lost my mind if I weren't a hopeful person. So I still feel optimistic about our ability to recognize the huge challenge that climate change poses, to change the way we live, and to change our energy system. And so I do think that we will see longer power lines helping regions share energy in the future. I am hopeful about that. It just makes too much sense to leave it on the shelf.

Steven Cherry Well, Peter, it’s an amazing investigation of the sort that reminds us why the press is important enough to democracy to be called the fourth estate. Thanks for publishing this work and for joining us today.

Peter Fairley Thank you so much, Steven. It's been a pleasure.

Steven Cherry We’ve been speaking with Peter Fairley, a journalist who focuses on energy and the environment, about his researching and reporting on the suspension of work on a potential unification of the U.S. energy grid.

This interview was recorded September 11, 2020. Our audio engineering was by Gotham Podcast Studio; our music is by Chad Crouch.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

Fake News Is a Huge Problem, Unless It’s Not

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/telecom/internet/fake-news-is-a-huge-problem-unless-its-not

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

Jonathan Swift in 1710 definitely said, “Falsehood flies, and the truth comes limping after it.” Mark Twain, on the other hand, may or may not have said, “A lie can travel halfway around the world while the truth is putting on its shoes.”

Especially in the context of politics, we lately use the term “fake news” instead of “political lies” and the problem of fake news—especially when it originates abroad—seems to be much with us these days. It’s believed by some to have had a decisive effect upon the 2016 U.S. Presidential election and fears are widespread that the same foreign adversaries are at work attempting to influence the vote in the current contest.

A report in 2018 commissioned by the U.S. Senate Intelligence Committee centered its attention on the Internet Research Agency, a shadowy arm of Russia’s intelligence services. The report offers, to quote an account of it in Wired magazine, “the most extensive look at the IRA’s attempts to divide Americans, suppress the vote, and boost then-candidate Donald Trump before and after the 2016 presidential election.”

Countless hours of research have gone into identifying and combating fake news. A recent study found more than 2000 articles about fake news published between 2017 and 2020.

Nonetheless, there’s a dearth of actual data when it comes to the magnitude, extent, and impact, of fake news.

For one thing, we get news that might be fake in various ways—from the Web, from our phones, from television—yet it’s hard to aggregate these disparate sources. Nor do we know what portion of all our news is fake news. Finally, the impact of fake news may or may not exceed its prevalence—we just don’t know.

A new study looks into these very questions. Its authors include two researchers at Microsoft who listeners of the earlier incarnation of this podcast will recognize: David Rothschild and Duncan Watts were both interviewed here back in 2012. The lead author, Jennifer Allen, was a software engineer at Facebook before becoming a researcher at Microsoft in its Computational Social Science Group and she is also a Ph.D. student at the MIT Sloan School of Management and the MIT Initiative on the Digital Economy. She’s my guest today via Skype.

Jenny, welcome to the podcast.

Jennifer Allen Thank you, Steven. Happy to be here.

Steven Cherry Jenny, Wikipedia defines “fake news” as “a type of yellow journalism or propaganda that consists of deliberate misinformation or hoaxes spread via traditional print and broadcast news media or online social media.” The term made its way into the online version of the Random House dictionary in 2017 as “false news stories, often of a sensational nature, created to be widely shared or distributed for the purpose of generating revenue, or promoting or discrediting a public figure, political movement, company, etc.” Jenny, are you okay with either of these definitions? More simply, what is fake news?

Jennifer Allen Yeah. Starting off with a tough question. I think the way that we define fake news really changes whether or not we consider it to be a problem or the magnitude of the problem. So the way that we define fake news in our research—and how the academic community has defined fake news—is that it is false or misleading information masquerading as legitimate news. I think the way that, you know, we’re using fake news is really sort of this hoax news that’s masquerading as true news. And that is the definition that I’m going to be working with today.

Steven Cherry The first question you tackled in this study and here I’m quoting it: “Americans consume news online via desktop computers and increasingly mobile devices as well as on television. Yet no single source of data covers all three modes.” Was it hard to aggregate these disparate sources of data?

Jennifer Allen Yes. So that was one thing that was really cool about this paper, is that usually when people study fake news or misinformation, they do so in the context of a single platform. So there’s a lot of work that happens on Twitter, for example. And Twitter is interesting and it’s important for a lot of reasons, but it certainly does not give a representative picture of the way that people consume information today or consume news today. It might be popular among academics and journalists. But the average person is not necessarily on Twitter. And so one thing that was really cool and important, although, as you mentioned, it was difficult as well, was to combine different forms of data.

And so we looked at a panel of Nielsen TV data, as well as a desktop panel of individual Web traffic, also provided by Nielsen. And then finally, we also looked at mobile traffic with an aggregate data set provided to us by ComScore. And so we have these three different datasets that really allow us to triangulate the way that people are consuming information and give sort of a high-level view.

Steven Cherry You found a couple of interesting things. One was that most media consumption is not news-related—maybe it isn’t surprising—and there’s a big difference across age lines.

Jennifer Allen Yes, we did find that. As is perhaps not surprising, older people consume a lot more television than younger people do, and younger people spend more time on mobile and online than older people do. However, what might be surprising to you is that no matter whether it's old people or younger people, the vast majority of people are consuming more news on TV. And so that is a stat that surprises a lot of people—even as we look across age groups, television is the dominant news source, even among people age 18 to 24.

Steven Cherry When somebody looked at a television-originating news piece on the Web instead of actually on television, you characterized it as online. That is to say, you categorized news by how it was consumed, not by its source. How did you distinguish news from non-news, especially on social media?

Jennifer Allen Yes. So there are a lot of different definitions of news here that you could use. We tried to take the widest definition possible across all of our platforms. So on television, we categorized as news anything that Nielsen categorizes as news as part of their dataset. And they are the gold standard dataset for TV consumption. And so, think Fox News, think the Today show. But then we also added things that maybe they wouldn’t consider news. So Saturday Night Live often contains news clips and touches on the topical events of the day. And so we also included that show as news. And so, again, we tried to take a really wide definition. And the same online.

And so online, we also aggregated a list of, I think, several thousand Web sites that were mainstream news, hyper-partisan news, and fake news. We find hyper-partisan news and fake news using the news lists that have emerged from the large body of research that came out of the 2016-election fake-news phenomenon. And so there, again, we tried to take the widest definition of fake news. So not only things like your crappy single-article sites, but also things like Breitbart and the Daily Wire we categorize as hyper-partisan sites.

Steven Cherry Even though we associate online consumption with young people and television with older people, you found that fake news stories were more likely to be encountered on social media and that older viewers were heavier consumers than younger ones.

Jennifer Allen Yes, we did find that. This is a typical finding within the fake news literature: Older people tend to be more drawn to fake news, for whatever reason. And there's been work looking at why that might be. Maybe it's digital literacy. Maybe it's just greater interest in news generally. And it's true that on social media, there's more fake and hyper-partisan news than on the open web.

That being said, I would just emphasize that the dominant … that the majority of news that is consumed even on social media—and even among older Americans—is still mainstream. And so, think your New York Times or your Washington Post instead of your Daily Wire or Breitbart.

Steven Cherry You didn’t find much in the way of fake news on television at all.

Jennifer Allen Yes. And so this is sort of a function, as I was saying before, of the way that we defined fake news. We, by definition, did not find any fake news on television, because the way that fake news has been studied in the literature—and talked about in the mainstream media—is as this phenomenon of Web sites masquerading as legitimate news outlets. That being said, I definitely believe that there is misinformation that occurs on television. A recent study came out looking at who the biggest spreader of misinformation around the coronavirus was and found it to be Donald Trump. And just because we aren't defining that content as fake news—because it's not deceptive in the way that it presents itself—doesn't mean that it is necessarily legitimate information. It could still be misinformation, even though we do not define it as fake news.

Steven Cherry I think the same thing would end up being true about radio. I mean, there certainly seems to be a large group of voters—including, it's believed, the core supporters of one of the presidential candidates—who are thought to get a lot of their information, including fake information, from talk radio.

Jennifer Allen Yeah, talk radio is unfortunately a hole in our research; we were not able to get a good dataset on it. And indeed, talk radio—Rush Limbaugh's show, for example—can really be seen as a source of a lot of the polarization in the news environment.

And there’s been work done by Yochai Benkler at the Harvard Berkman Klein Center that looks at the origins of talk radio in creating a polarized and swampy news environment.

Steven Cherry Your third finding, and maybe the most interesting or important one, is and I’m going to quote again, “fake news consumption is a negligible fraction of Americans’ daily information diet.”

Jennifer Allen Yes. So it might be a stat that surprises people: We find that fake news comprises only 0.15 percent of Americans' daily media diet. Despite the outsized attention that fake news gets in the mainstream media and especially within the academic community—more than half of the journal articles that contain the word “news” in recent years are about fake news—it is actually just a small fraction of the news that people consume. And it's also a small fraction of the information that people consume: The vast majority of the content that people are engaging with online is not news at all. It's YouTube music videos. It's entertainment. It's Netflix.

And so I think that it’s an important reminder that when we consider conversations around fake news and its potential impact, for example, on the 2016 election, that we look at this information in the context of the information ecosystem and we look at it not just in terms of the numerator and the raw amount of fake news that people are consuming, but with the denominator as well. So how much of the news that people consume is actually fake news?

Steven Cherry So fake news ends up being only one percent or even less of our overall media diet. What percentage is it of news consumption?

Jennifer Allen It occupies less than one percent of overall news consumption—and that's including TV. Of course, when you zoom in, for example, to fake news on social media, the scale of the problem gets larger. And so maybe seven to 10 percent—and perhaps more, depending on your definition of fake news—of news that is consumed on social media could be considered what we call hyper-partisan or fake news. But still, again, to emphasize: The majority of people on Facebook are not seeing any news at all. Over 50 percent of people on Facebook in our data don't click on any news articles that we can see.
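
Allen's two shares pin down a third number. If fake news is 0.15 percent of the total media diet and just under 1 percent of news consumption, dividing one by the other gives the implied share of news in the overall media diet; a minimal sketch:

```python
# Relating the two shares quoted above: fake news as a share of the total
# media diet, divided by fake news as a share of news consumption, gives
# the implied share of news in the overall media diet.

fake_share_of_media = 0.0015  # "0.15 percent of Americans' daily media diet"
fake_share_of_news = 0.01     # "less than one percent", taken as an upper bound

news_share_of_media = fake_share_of_media / fake_share_of_news
print(f"news is at least ~{100 * news_share_of_media:.0f}% of the media diet")
# -> ~15%; a lower bound, since fake news is *less than* 1% of news
```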

Steven Cherry You found that our diet is pretty deficient in news in general. The one question that you weren’t able to answer in your study is whether fake news, albeit just a fraction of our news consumption—and certainly a tiny fraction of our media consumption—still might have an outsized impact compared with regular news.

Jennifer Allen Yeah, that’s very true. And I think here it’s important to distinguish between the primary and secondary impact of fake news. And so in terms of, you know, the primary exposure of people consuming fake news online and seeing a news article about Hillary Clinton running a pedophile ring out of a pizzeria and then changing their vote, I think we see very little data to show that that could be the case. 

That being said, I think there’s a lot we don’t know about the secondary sort of impact of fake news. So what does it mean for our information diets that we now have this concept of fake news that is known to the public and can be used and weaponized?

And so the extent to which fake news is covered and pointed to by the mainstream media as a problem also gives ammunition to people who oppose journalists and the mainstream media—people who want to erode trust in journalism—and gives them ammunition to attack information that they don't agree with. And I think that is a far more dangerous and potentially more negatively impactful effect of fake news, and perhaps its long-lasting legacy.

The impetus behind this paper was that there’s all this conversation around fake news out of the 2016 election. There is a strong sense that was perpetuated by the mainstream media that fake news on Facebook was responsible for the election of Trump. And that people were somehow tricked into voting for him because of a fake story that they saw online. And I think the reason that we wanted to write this paper is to contradict that narrative because you might read those stories and think people are just living in an alternate fake news reality. I think that this paper really shows that that just isn’t the case.

To the extent that people are misinformed or they make voting decisions that we think are bad for democracy, it is more likely due to the mainstream media, or to the fact that people don't read news at all, than to a proliferation of fake news on social media. One piece of research that David [Rothschild] and Duncan [Watts] did prior to this study, which I thought was really resonant, looked at the New York Times: In the lead-up to the 2016 election, there were more stories about Hillary Clinton's email scandal in the seven days before the election than there were about policy at all over the whole scope of the election process. And so instead of zeroing in on fake news, we should push our attention toward taking a hard look at the way the mainstream media operates—and also at what happens in this news vacuum where people aren't consuming any news at all.

Steven Cherry So people complain about people living inside information bubbles. What your study shows is fake news, if it’s a problem at all, is really the smallest part of the problem. A bigger part of the problem would be false news—false information that doesn’t rise to the level of fake news. And then finally, the question that you raise here of balance when it comes to the mainstream media. “Balance”—I should even say “emphasis.”

Jennifer Allen Yes. So again, to the extent that people are misinformed, I think we can look to the mainstream news—for example, its overwhelming coverage of Trump and the lies that he often spreads. Some of the new work that we're doing is trying to look at the mainstream media and its potential role in, not reporting false news that is masquerading as true, but reporting on people who say false things without appropriately taking the steps to discredit those claims and really strongly punch back against them. And so I think that is an area that is really understudied. And I would hope that researchers look at this research, and at the conversation that is happening around Covid and mail-in voting and the 2020 election, and really take a hard look at the mainstream media—so-called experts or politicians making wild claims in a way that we would not consider fake news, but that is still very dangerous.

Steven Cherry Well, Jenny, it’s all too often true that the things we all know to be true aren’t so true. And as usual, the devil is in the details. Thanks for taking a detailed look at fake news, maybe with a better sense of it quantitatively, people can go on and get a better sense of its qualitative impact. So thank you for your work here and thanks for joining us today.

Jennifer Allen Thank you so much. Happy to be here.

We’ve been speaking with Jennifer Allan, lead author of an important new study, “Evaluating the Fake News Problem at the Scale of the Information Ecosystem.” This interview was recorded October 7, 2020. Our thanks to Raul at Gotham Podcast’s Studio for our engineering today and to Chad Crouch for our music.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

Banking, Cash, and the Future of Money

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/at-work/innovation/banking-cash-and-the-future-of-money

Steven Cherry Hi this Steven Cherry for Radio Spectrum.

We’re used to the idea of gold and silver being used as money, but in the in the 1600s, Sweden didn’t have a lot of gold and silver—not enough to sustain its economy. The Swedes had a lot of copper, though, so that’s what they used for their money. Copper isn’t really great for the job—it’s not nearly scarce enough—so Swedish coins were big—the largest denomination weighed fortythree pounds and people carried them to market on their backs. So the Swedes created a bank that gave people paper money in exchange for giant copper coins.

The Swedes weren’t the first to create paper money—they missed that mark by several hundred years. Nor are they likely to be the last to get rid of paper money, though they may have the lead in that race. A few years ago, banks there started to refuse to take cash deposits or to allow cash withdrawals, until a law was passed requiring them to do so.

A new book about the history and future of money has just come out, imaginatively titled Money. It’s not specifically about Sweden—in fact, those are the only two times Sweden comes up. It’s about money itself, and how it has changed wildly across time and geography—from Greek city-states in 600 B.C., to China in the eighth century and Kublai Khan in the thirteenth, to Amsterdam in the seventeenth, Paris during the Enlightenment, the U.S. in the nineteenth century, and cyberspace in the twenty-first.

It’s a wild ride that the world is still in the middle of, and it’s told in a thoroughly researched but thoroughly entertaining way—and I mean laugh-out-loud entertaining, literally; I had to finish the book last night downstairs on the couch—as a series of stories by one of radio’s great storytellers. Jacob Goldstein was a newspaper reporter before joining National Public Radio’s popular show, Planet Money, which he currently co-hosts, and he’s the author of the destined-to-be-popular tome, Money, newly minted by Hachette Books. And he’s my guest today. He joins us via Skype.

Jacob, welcome to the podcast.

Jacob Goldstein Thanks so much for having me. And thank you for that very kind and generous introduction.

Steven Cherry Jacob, I wasn’t fair to the title of the book, and I wasn’t entirely fair—

Jacob Goldstein I didn’t want to be petty, but that thought crossed my mind—.

Steven Cherry —and I wasn’t entirely fair about what the book is about. The full title is Money: The True Story of a Made Up Thing, which hints at what I take your book’s thesis to be: Money is whatever we trust for the exchange of goods and services. Money is whatever we trust to be money.

Jacob Goldstein That’s right. And, you know, on one level I worry that that’s a little bit obvious. But I do think there is this thing that happens—and it was quite striking to me as I was doing the research for the book—and that is: in whatever period people are living, whatever monetary regime, they—we—seem to think that whatever we’re doing as money, whatever we’re using for money, however we’re doing money, is some kind of natural law. It’s like the way money must be is the way we’re doing it now, and everything else is just crazy or weird. And the point I was trying to make in the subtitle of the book is: That is not so. Right? We are constantly inventing and reinventing money. It has changed many times in the past and it will continue to change in the future.

Steven Cherry We can’t tell the whole story of money here in 20 or 30 minutes. But let’s touch on the history in order to understand some of the things you say about its future. You say that it’s easy to think of money as growing out of a barter economy, but there has never been a barter economy. We went straight from more or less self-sufficiency, maybe augmented by status-seeking gifting, to employing cowrie shells and other things as a way to store value over time—a kind of proto-money.

Jacob Goldstein Yes. So the piece about barter is really to refute the standard historical story of money. For a long time, the set-piece story about the origin of money was that it’s very inconvenient to barter, right? Because for you and me to trade—to do business with each other—you have to have what I want and I have to have what you want. But if we could just have some intermediate thing—some piece of silver, a dollar bill—that would solve the problem. Which is a very tidy story. But anthropologists in the 20th century started raising their hands and saying, sorry, economists, it just doesn’t appear that the world works that way. And what they described instead is a much more—I don’t know if organic is the right word—a much more social kind of construction of money, where you have lots of small nonindustrial societies with lots of rules about giving and getting: what you have to give somebody’s family if you’re going to marry them, or if you killed somebody in their family. And those kinds of norms really seem to be the roots of money.

Steven Cherry You looked at the history of the world, the beginnings of money, and kind of summarized things by saying the first writers weren’t poets, they were accountants—which, as a writer, I find a sobering thought, and you must too.

Jacob Goldstein I mean, I respect accountants. I feel like they do something very useful. Let’s not be too highfalutin.

I respect accountants—I respect them more now that I’ve written the book. That period—Mesopotamia, essentially the classic cradle of civilization, several thousand years ago—obviously, it’s a long time ago. But what seems to have happened there is that people initially would give each other a clay ball with maybe a little cone or sphere inside it as an IOU. So I would give you a clay ball with a cone in it, and that would mean, I don’t know, I owe you six sheep. And then from there, people were like, wait a minute, maybe we don’t have to put the little cone inside the ball. What if we just pressed it into the clay on the outside? And the notion is that that is proto-writing. And then these cities start to spring up, civilization gets more complex, and you get this class of accountants who are working at the temple—which is kind of like a temple/city-hall—and they develop cuneiform. They develop the first kind of writing, which is making marks in clay tablets, basically to keep the ledgers of the temple of the city-state.

Steven Cherry In the 5000-year history of money, the gold standard takes up three percent of that time. But it’s a pretty important century and a half and it still influences how we think of money. So maybe tell us how it began.

Jacob Goldstein I think you’re right. I think it does. Gold and silver were money for quite a long time—for thousands of years. But when economists use the phrase “the gold standard,” they mean this very particular period of time from, what, 1830-ish, give or take a few years, to basically the 1930s—that century-ish.

And what happened was, it started in Britain, which was the most important economy in the world. Lots of countries had sort of used gold and silver and kind of gone back and forth. And it’s hard to have two different metals as money, because their values can change. So Britain decides, all right, we’re just going to be on the gold standard. And because Britain was so important, lots of people followed it. The 19th century, as it progressed, was the first great wave of globalization, and lots of other countries followed Britain onto the gold standard. So by the end of the 19th century, most of the major economies of the world were on this kind of uniform gold standard.

Steven Cherry Yeah, I said a century and a half because we officially went off the gold standard in the 1970s, but it was really barely more than a century: Even before we officially went off the gold standard, you say FDR, in 1933, for all intents and purposes, took us off it.

Jacob Goldstein Yes, the Depression was really the big turn. And just that one year, 1933, is an incredibly momentous year in the history of money. So what’s happening is, the Depression, obviously. And people didn’t realize it at the time—and I feel like the vernacular version of the story doesn’t really include this fact—but a core problem with the Depression, maybe the core problem with the Depression, was the gold standard. And that—among economists now—is not controversial. That’s what everybody thinks. And what happened was, there was this crash of the stock market in 1929, and the economy started to plunge.

Now, what happens when there’s a crash and the economy starts to plunge is that the Federal Reserve, the central bank, can essentially create more money and make it easier for people who are in debt to stay afloat—make it easier for businesses that are in trouble to stay open. But that was not the case then. Under the gold standard, the Federal Reserve wound up doing essentially the opposite. The Federal Reserve raised interest rates, which took a bad crash and turned it into the Great Depression. And that sent prices falling and banks collapsing.

And so Roosevelt gets elected in ’32, takes office in ’33, and—against the advice of almost all of his advisors, who tell him that going off the gold standard will mean, quote, “the end of Western civilization”—he basically goes off the gold standard. The key rule under the gold standard is that a dollar is worth a fixed amount of gold; it was like 20 dollars and change got you an ounce of gold, and that had been the case decade after decade. He’s like, we’re not going to do that. A dollar is going to get you less gold than it used to. And that is why I say—and most people say—he was really the one who took us off the gold standard. And I should say that ’33, when he did that, was when the Depression started to get better. Now, clearly, it didn’t get all better until World War II, but very clearly there’s a turn: things are going down before then, and they start going up after that. And it’s true in many countries: as each country goes off the gold standard, you see it starting to recover from the depths of the Depression. Now, for a few decades after that, to your point, it was true that ordinary people could no longer exchange dollars for a fixed amount of gold, but other countries could exchange their currency for dollars and then exchange those dollars for a fixed amount of gold. But that was basically a formality. And Nixon ended that formality in ’71.

Steven Cherry One of the great things about your book is that I thought I knew the story of the gold standard and William Jennings Bryan and the Cross of Gold—and you tell that story, too. But then you come along with something really interesting and crazy, like the story of Irving Fisher, who you describe as a Yale economist, a health-food zealot, a Prohibitionist, and a fitness guru who filled a floor of his New Haven mansion with exercise equipment. It turns out that when he wasn’t exercising and making his hapless employees join him, he devoted much of his life to trying to untie our idea of money from gold and instead tie it to something a little more like the Consumer Price Index. Tell us about the money illusion.

Jacob Goldstein Sure. I’m glad you like Irving Fisher. I love Irving Fisher. You know, I had been covering economics for, I don’t know, 10 years when I wrote this book, and I didn’t really know about Fisher. I feel like he has largely been forgotten, in part because in 1929 he said the stock market was on a permanently high plateau, like, two weeks before the market crash. So: bad investing advice, but great economist. OK, the money illusion. The money illusion is this idea that he articulated that is really very resonant today. And the basic idea of the money illusion is that we get confused by inflation and deflation. A simple example: if your parents bought their house, whatever, 40 years ago and paid $100,000 for it, and they sold it this year and got $400,000 for it, they might think, great, I made $300,000—I made 4x my investment, I did great on that house. In fact, they are wrong—because of inflation. Right? Because $400,000 today buys you less than $100,000 bought 40 or 50 years ago. It may be semi-obvious in that case, but that kind of misunderstanding creates a lot of problems. It was one of the problems in the Depression, when you had deflation, when you had prices falling by like 30 percent. It’s very hard for people to take a wage cut of 30 percent, even though the stuff they buy gets 30 percent cheaper. So businesses end up laying off workers. So this idea that we are confused by inflation and deflation is really important. And it is, frankly, a big part of the reason that the Federal Reserve today tries to maintain a low, steady, two-percent rate of inflation.
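
To put rough numbers on that house example—a minimal sketch, assuming U.S. consumer prices quintupled over those 50 years (they roughly did, and then some); the figures here are illustrative, not from the book:

purchase_price = 100_000    # nominal dollars, 50 years ago
sale_price = 400_000        # nominal dollars, today
cumulative_inflation = 5.0  # assumed: price level today / price level then

# Convert today's sale price into purchase-year dollars.
real_sale_value = sale_price / cumulative_inflation
print(f"Real value of the sale: ${real_sale_value:,.0f}")           # $80,000
print(f"Real return: {real_sale_value / purchase_price - 1:+.0%}")  # -20%

Under that assumption, the 4x nominal gain is a 20 percent real loss—which is exactly the confusion Fisher was pointing at.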

Steven Cherry You describe the travails of the euro—how it miraculously worked for a while and how it stopped miraculously working. It seems it failed in some of the same ways as the gold standard itself, even though everyone was off it.

Jacob Goldstein Yeah, that’s really insightful. That is correct. And in some ways it is, in fact, quite similar. One of the fundamental features of the international gold standard—that high-19th-century gold standard—was that because each currency was fixed to a set amount of gold, each currency’s relationship to every other currency was also fixed. I don’t have the numbers in front of me, but if one dollar gets you four pounds today, one dollar will get you four pounds forever. Everything is in the same relationship. And the euro effectively did that to all of the countries in Europe.

It meant that the money in Greece was the same value as the money in Germany, as the money in Italy, as the money in France. And that is okay when everybody is doing fine—or badly. But when the economies diverge—when, say, Germany is doing great and Greece is doing really badly—that is actually quite bad. When the currencies are separate and they diverge, then Greece, if it had its own drachma, can say, oh, we need to put some more money into the system; we need to make it easier for people to borrow, so that we can get unemployment down and business will invest and hire. But when Greece gave up the drachma and joined the euro, it lost the ability to do that. And so in the same way that all these countries were linked together under the gold standard, under the euro all these countries are linked together.
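
The fixed-cross-rate arithmetic Goldstein is gesturing at is easy to reconstruct. The pegs below are the commonly cited pre-1914 ones—about $20.67 and £4.25 per troy ounce of gold—so take this as a sketch rather than figures from the book:

usd_per_oz = 20.67  # U.S. peg: dollars per troy ounce of gold
gbp_per_oz = 4.25   # British peg: pounds per troy ounce of gold

# With both currencies pegged to the same ounce of gold, the
# dollar-pound rate is fixed by simple division.
usd_per_gbp = usd_per_oz / gbp_per_oz
print(f"1 pound = ${usd_per_gbp:.2f}")  # about $4.86, decade after decade

The euro achieves the same rigidity more directly, by replacing the member currencies outright.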

Steven Cherry You point out that America is a confederation of states, but we don’t treat Arizona the way the Europeans treated Greece. And in fact, this was anticipated: The chancellor of Germany at the time said we need either to be politically as well as fiscally unified, or neither.

Jacob Goldstein That’s right. And I think that is still an open question. I think, in the long run, for the euro to survive, the eurozone needs to become more like a United States of Europe. And interestingly, they do seem, slowly, to be moving in that direction. This year, in response to the pandemic and the economic crisis that went with it, the E.U. authorized borrowing at the E.U. level—a lot of money. So that is a new thing, and that is more like being a single country. In the U.S., the federal government borrows trillions of dollars and sends that money to people in Arizona and in Florida and in Maine. And that is what the European Union, in a smaller way, is starting to do. So they do seem to be moving in that direction. And it does seem like, in the long run, they either have to be a lot more like one country or give up on sharing a currency.

Steven Cherry The first time I saw a millennial pay for a cup of coffee with a credit card, I was shocked, but now I do it myself.

Jacob Goldstein Do you mean the phone—use the phone? It’s pretty good.

Steven Cherry That day is coming.

Jacob Goldstein Okay.

Steven Cherry I mentioned, early on, the Swedes working their way toward ending cash and they’re not alone. Is that something you foresee?

Jacob Goldstein I mean, it seems like in the long run it will happen, right? I don’t think I have super insight into what’s going to happen, but it certainly seems, directionally, like that is the way we’re going. One interesting countervailing trend is the fact that more and more paper money is going out into the world. Even if you account for the growth of the economy, the amount of paper dollars in the world is growing faster than the economy.

Now, there is a gap between what we do in our everyday lives—which is pay for a cup of coffee with a credit card or our iPhone—and what’s going on with paper money. A lot of that paper money is hundred-dollar bills. There are more hundred-dollar bills than one-dollar bills. There’s something like 40 hundred-dollar bills for every man, woman, and child in America. And pretty clearly, a lot of that is just crime.

Paper money is really good for crime—or tax evasion, which is crime. Some of it is people in other countries, where the banks are not stable or the currency is not stable, holding hundreds. That is not a crime. The economist Ken Rogoff has said we should get rid of big bills, but I don’t know. I mean, sure, in the long run, I suppose cash will go away.

You know, there are a lot of people who don’t have bank accounts, and a standard refrain is that the end of cash would be bad for those people. And that is true. But it’s also worth pointing out that it is bad for them now not to have a bank account. If you don’t have a bank account, you tend to get screwed. You have to carry cash on you, which is not safe. You have to go to a check-cashing store. You have to pay a high fee. So the problem of people not having bank accounts gets lumped in with the end of cash, but it’s a problem in either world. The government could solve it by giving people bank accounts or debit cards. So that seems like a solvable problem that’s often lumped in with the end of cash; it seems worth solving on its own.

Steven Cherry Well, people want the post office to go back to acting a little bit like a bank. And that seems like a good idea to me.

Jacob Goldstein Yes. And it seems like a reasonable role for the government, frankly. Money is very much a government thing—certainly now, but really always. And giving people a way to bank, today, when even people who don’t have a bank account have a smartphone—it seems like a very solvable problem.

Steven Cherry The Hoover Institution seems about as mainstream-conservative as it gets. But you quote an economist affiliated with it as calling banks—and we mean ordinary banks, like Wells Fargo and Ulster Savings of Kingston, New York, which holds my mortgage—huge crony capitalist nightmares. Are banks huge crony capitalist nightmares? And, more to the point, can you imagine a world without banks?

Jacob Goldstein As a reporter, I’m not going to weigh in on whether they are huge crony capitalist nightmares. But to your point, it is striking that—that is John Cochrane, who is a very, very pro-free-market, classical-style economist, who told me that. And the reason he said that is banks—well, banks do this very special thing in the economy. They create money, right?

When banks make loans, they are actually creating money. And that is a public function—how much money there is matters to everybody, so creating money is this kind of public thing. They’re able to do it because the government, you know, guarantees our deposits at the bank. And the Federal Reserve, which is part of the government, promises—that’s a standing promise—that it will lend money to banks in a crisis. And in exchange for those guarantees, banks are very heavily regulated by the government. You could say not always regulated well; some people say not always regulated enough; but they are heavily regulated. So that is what the conservative economist is talking about when he describes it as a huge crony capitalist nightmare. It is this web, this close linkage, between the government and the banks.
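
Here is a minimal balance-sheet sketch of that money-creation point—an illustration of the mechanism Goldstein describes, with made-up names and amounts, not anything from the book:

# When a bank books a loan, it credits the borrower's deposit account,
# so total deposits -- spendable money -- go up by the loan amount.
deposits = {"alice": 50_000, "bob": 20_000}  # the bank's liabilities
loans = {}                                   # the bank's assets

def make_loan(borrower: str, amount: int) -> None:
    loans[borrower] = loans.get(borrower, 0) + amount
    deposits[borrower] = deposits.get(borrower, 0) + amount

money_before = sum(deposits.values())
make_loan("carol", 100_000)  # a mortgage, say
money_after = sum(deposits.values())
print(money_after - money_before)  # 100000 in new deposits, created by the loan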

And to your question about a future without banks—the reason you basically need to have that linkage, or at least the reason we have decided to have it, is: Banks are fundamentally unstable. And that’s not because they’re evil or anything like that. It’s just that the basic structure of the most plain-vanilla Main Street bank is, you have your money there on deposit, and you can take all of it out at any moment. But that money is also loaned out to somebody to buy their house—a mortgage that doesn’t have to be paid back for 30 years.

And so the nature of any bank is that if everybody with a deposit goes and asks for their money back at the same time, the bank doesn’t have it. And that is a big problem. The way we have solved it is by creating this whole web—this whole crony capitalist thing, in the words of this economist. And what he has suggested, and what a lot of economists going back to Irving Fisher, who you mentioned before, suggested all the way back in the Depression, was: Why do we have to do it this way? Why do we have to have this fundamental problem that we do all this work to solve? What if we just stopped, started from scratch, and imagined a different world? The problem is that banks are doing two different things. They’re holding our money—letting us get direct deposit, letting us pay our bills online. That’s one thing they’re doing. The other thing they’re doing is making loans that people may or may not pay back. Not everybody pays back a loan; there’s fundamental risk in lending. And there are moments when lots of people don’t pay back their loans, and that’s when we have financial crises. Why not separate those two things? So on the one hand, you would have a money warehouse, call it, where you would deposit your money, get your direct deposit, pay your bills—the basic things we do with a bank day-to-day, your checking account. Now, you might pay a fee for that, because they’re providing a service. Fair. Fine.

And then you would have another kind of thing, another kind of company, that is making loans. But that money is coming from people who (a) know they might lose it and (b) cannot demand it back at any time—which is basically like the bond mutual funds we have today; that’s basically how they work. You invest your money, and the bond fund essentially lends it out, and you can, in that case, ask for it back. But if the people who borrowed it don’t pay it back, you will lose money. The government doesn’t have to come in and bail you out. So that idea actually seems quite reasonable to me, and it would solve a lot of problems.

Now, politically, it doesn’t seem like it’s going to happen anytime soon. But you could imagine if there were another financial crisis, another big bank bailout, it is the kind of big change that we’ve seen before and it’s imaginable to me.

Steven Cherry Another thing that might be politically undoable involves your final question about the future of money in the book, which involves something called modern monetary theory—yet another way in which money might evolve. What is modern monetary theory?

Jacob Goldstein So modern monetary theory is a set of ideas about how money works that has become popular with a small group of economists who tend to be associated with the political left. Stephanie Kelton is maybe the most prominent; she was an adviser to Bernie Sanders a few years ago. And their basic idea is this: We have been too worried about the government running deficits. They don’t say you can always run deficits, but they say there are a lot of times when we would be better off if the government just spent more money—to get people working, to get the economy going. And the standard response to that in traditional economics has been, well, if that happens, interest rates are going to go up, inflation is going to go up, and that is going to be bad.

And what the modern monetary theorists say is, well, if inflation goes up, what we can do is raise taxes. We should keep spending—keep helping people, keep doing stuff in the economy—and wait for inflation to actually go up. And to be fair, they say there are different things you can do. But one of the important things they say you should do when inflation goes up is raise taxes, because raising taxes takes money back out of the economy. It is a way to fight inflation.

They also attach to their set of ideas a jobs guarantee: the government should offer everybody a job. So those are the basic pieces. The government should be more willing to run deficits. The government should offer a jobs guarantee. And one of the key ways you can fight inflation is to raise taxes.

Steven Cherry As I read the book, there seems to be a sort of historical line that can be drawn from Irving Fisher, the money-illusion fitness-nut guy, to Alexandria Ocasio-Cortez, who seems to be an endorser of modern monetary theory—and who, by the way, happens to be my mother’s congresswoman. Is that fair to say?

Jacob Goldstein I like all the Queens shout-outs.

Steven Cherry Is that fair to say? And what is that historical line?

Jacob Goldstein Well, let me think about that. Certainly an easier line, a more obvious line for me, is from Irving Fisher to … whoever … Jerome Powell, Janet Yellen—to the modern chairmen and -women of the Fed. Irving Fisher’s solution to the money problem and to the gold standard was that we should manage the dollar not based on gold, but based on how much the stuff everybody buys costs. What we want is for prices to be basically stable. And that is essentially the way the Fed works now. So that, to me, is the clearer line. I think to draw a line from him to modern monetary theory is a bit more of a stretch. I mean, if you zoomed out more, you could say, well, in both cases they’re saying, look, the way we do money now is wrong, is suboptimal, and we could be doing this better. So at an abstract level, there’s that connection. But practically speaking, he’s pretty close to the way money, in fact, works now.

Steven Cherry You seem sympathetic to Fisher’s ideas and maybe to modern monetary theory, but you also write in the book that for modern money to work—to have banks and a stock market and a central bank—there needs to be tension. Investors and bankers and activists and government officials all need to be arguing over who gets to do what and when. Does modern monetary theory eliminate too much of that tension?

Jacob Goldstein It may. I mean, one of the things working on the book has done for me is make me humble about trying to be prescriptive or predictive. I’m able to think about different things and talk about them. But you see, time and again, really smart people who know a lot being just wrong about the world—about the present, about the future. One thing I will say about modern monetary theory, just in terms of the political realities, is that the notion that Congress will raise taxes to fight inflation seems like a thing that might not happen. And there is an argument that, well, you wouldn’t have to have Congress vote every time; Congress could create an automatic mechanism that is out of their hands. But still, people know when their taxes go up, and they tend not to like it. So one place to look and think about with modern monetary theory is that particular crux: do you really think Congress will set up a system that will automatically raise taxes to fight inflation when necessary?

Steven Cherry In the fall of 1933, President Roosevelt wrote in a letter to a Harvard economist, “You place a former artificial gold standard among nations above human suffering and the crying needs of your own country.” You say in the book, the most important word in that sentence is “artificial.” And I was reminded of something you wrote earlier in the book: “The Incas had rivers full of gold and mountains full of silver. And they used gold and silver for art and for worship. But they never invented money because it was a fiction they had no use for.” The most important words in that sentence are “fiction” and “use.” Money is a useful fiction—maybe the most useful fiction we have because it shares with some other primordial fictions the same character. I’m thinking of love and faith. And that character is trust. Is that the basis of all of this?

Jacob Goldstein I think so. I mean, certainly today, when we have fiat money—money that is backed by nothing—the dollar is only backed by our trust in the United States government, the United States economy. It’s this notion that our country as a political entity, which obviously is another kind of fiction, will persist. And function. So it’s more obvious today. But I think even if you look back to a time when people were using, say, gold itself or silver itself as money, the gold- or silverness of the thing is not the money part. The part that makes it money—this thing we know we will be able to exchange for other things—is trust that other people will also think it’s money. It’s money if everybody thinks it’s money. And if everybody doesn’t think it’s money, it’s not money.

Steven Cherry Jacob, your book is a highly useful—and, as I said, wildly entertaining—work of nonfiction. I thank you for writing it and for joining us today.

Jacob Goldstein It’s very kind of you to say. I really had fun.

Steven Cherry We’ve been speaking with Jacob Goldstein, author of a new book just released: “Money: The True Story of a Made Up Thing,” about the past, present, and future of this most important made-up thing.

This interview was recorded September 2, 2020. Our audio engineering was by Gotham Podcast Studio in New York. Our music is by Chad Crouch. My thanks to the folks at Hachette for helping this along.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers.

For Radio Spectrum, I’m Steven Cherry.

Additional Resources:

The Last Days of Cash: How E-Money Technology Is Plugging Us into the Digital Economy (IEEE Spectrum special report on the future of money)

/static/future-of-money

The Murderer, The Boy King, And The Invention Of Modern Finance (Planet Money podcast)

https://www.npr.org/2020/09/04/909876702/the-murderer-the-boy-king-and-the-invention-of-modern-finance

The Economist Who Believes the Government Should Just Print More Money (New Yorker profile of Stephanie Kelton)

https://www.newyorker.com/news/news-desk/the-economist-who-believes-the-government-should-just-print-more-money

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

The Problem of Filter Bubbles Hasn’t Gone Away

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/telecom/internet/the-problem-of-filter-bubbles-hasnt-gone-away

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

In 2011, the former executive director of MoveOn gave a widely viewed TED talk, “Beware Online Filter Bubbles,” that became a 2012 book and a startup. In all the talk of fake news these days, many of us have forgotten the unseen power of filter bubbles in determining the ways in which we think about politics, culture, and society. That startup tried to get people to read news they might otherwise not see by repackaging stories with new headlines.

A recent app, called Ground News, has a different approach. It lets you look up a topic and see how it’s covered by media outlets with identifiably left-leaning or right-leaning slants. You can read the coverage itself—left, right, moderate, or international; look at its distribution; or track a story’s coverage over time. Most fundamentally, it’s a way of seeing stories that you wouldn’t ordinarily come across.

My guest today is Sukh Singh, the chief technology officer of Ground News and one of its co-founders.

Sukh, welcome to the podcast.

Sukh Singh Thanks for having me on, Steven.

Steven Cherry Back in the pre-Internet era, newspapers flourished, but overall news sources were limited. Besides a couple of newspapers in one’s area, there would be two or three television stations you could get, and a bunch of radio stations that were mainly devoted to music. Magazines were on newsstands and delivered by subscription, but only a few concentrated on weekly news. That world is gone forever, in favor of a million news sources, many of them suspect. And it seems the Ground News strategy is to embrace that diversity instead of lamenting it: putting stories in the context of who’s delivering them and what their agendas might be, and, most importantly, breaking us out of the bubbles we’re in.

Sukh Singh It’s true that we are embracing the diversity, as you mentioned, in moving from the print era and the TV era to the Internet era. The costs of having a news outlet, or any kind of media-distribution outlet, have dropped dramatically, to the point where a single-person operation has become viable. That has had the positive benefit of allowing people to cater to very niche interests that were previously glossed over. But on the negative side, the explosion in the number of outlets out there certainly has a lot of drawbacks. Our approach at Ground News is to take as wide a swath as we meaningfully can and put it in one destination for our subscribers.

Steven Cherry So has the problem of filter bubbles gotten worse since that term was coined a decade ago?

Sukh Singh It has. It has certainly gotten much worse. In fact, I would say that by the time the term was even coined, the development of filter bubbles was well underway—it began before the phenomenon was observed.

And that’s largely because it’s a natural outcome of the algorithms and, later, the machine-learning models used to determine what content is being served to people. In the age of the Internet, personalization of a news feed became possible. And that meant more and more individual personalization, with machines picking—virtually handpicking—what anybody gets served. By and large, we saw the upstream effect of that being that news publications found out what type of content appealed to a large enough number of people to make them viable. And that has resulted in a shift away from the aspiration of every news outlet to be the universal record of truth—an erosion of that. Now many outlets, certainly not all of them, embrace the fact that they’re not going to cater to everyone; they are going to cater to a certain set of people who agree with their worldview. And their mission then becomes reinforcing that worldview, that agenda, that specific set of beliefs, through reiterated and repeated content for everything that’s happening in the world.
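
The feedback loop Singh describes is easy to caricature in code. This is a toy sketch—not any platform’s actual model—of a feed ranked purely by predicted engagement, which naturally drifts toward whatever a user already clicks on:

def predicted_engagement(story: dict, user: dict) -> float:
    # Score rises when a story matches topics the user has clicked before;
    # sensationalism breaks ties. Both signals are stand-ins.
    overlap = len(set(story["topics"]) & set(user["clicked_topics"]))
    return overlap + story["sensationalism"]

def rank_feed(stories: list, user: dict) -> list:
    return sorted(stories, key=lambda s: predicted_engagement(s, user), reverse=True)

user = {"clicked_topics": ["elections", "scandal"]}
stories = [
    {"title": "Budget bill passes committee", "topics": ["policy"], "sensationalism": 0.1},
    {"title": "Scandal erupts on campaign trail", "topics": ["scandal"], "sensationalism": 0.9},
]
print(rank_feed(stories, user)[0]["title"])  # the scandal story wins, again

Serve the winner, record the click, and the user’s profile narrows a little further each day—that’s the bubble.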

Steven Cherry People complain about the filtering of social media. But that original TED talk was about Google and its search results, which seems even more insidious. Where is the biggest problem today?

Sukh Singh I would say that social media has shocked us all in terms of how bad the problem can get. If you think back 10 years, 15 years … social media as we have it today would not have been the prediction of most people. If we think back to the origin of, say, Facebook, it was very much in the social-networking era, when most of the content you were receiving was from friends and family. It’s a relatively recent phenomenon—this decade, not the last one—that we saw this chimera of social network plus digital media becoming social media, and the news feed there, going back to the personalization, catered to one success metric: how much time on platform could they get the user to spend. Compare that against Google, where the success metric isn’t time spent—it’s more about getting you to the right page, or to the most relevant page, as quickly as possible. But with social media, when that becomes a multi-hour-per-day activity, it certainly has had more wide-reaching and deeper consequences.

Steven Cherry So I gave just a little sketch of how Ground News works. Do you want to say a little bit more about that?

Sukh Singh Absolutely. So I think you alluded to this in your earlier questions—going from the past age of print and TV media to the Internet age. What we’ve seen is that news delivery today has really bifurcated into two types, into two categories. We have the more traditional, legacy news outlets coming to the digital age, to the mobile age, with websites and mobile apps tailored along the conventional sections: here’s the sports section, here’s the entertainment section, here’s the politics section. That was roughly how the world of information was divided up by these publications, and that has carried over. And on the other side, we see the social media feed, which has in many ways blown the legacy model out of the water.

It’s a single drip feed where the user has to do no work and can just scroll and keep finding more and more and more—an infinite supply of engaging content. That divide doesn’t map exactly to education versus entertainment. Entertainment and sensationalism have been a part of media as far back as media goes. But there certainly is more affinity toward entertainment in a social media feed, which caters to engagement.

So we at Ground News serve both those needs, through both those models, with two clearly labeled and divided feeds. One is Top Stories and the other is My Feed. Top Stories follows the legacy model: here are universally important news events that you should know about, no matter which walk of life you come from, no matter where you are located, no matter where your interests lie. And the second, My Feed, is the recognition that ultimately people will care more about certain interests, certain topics, certain issues than about others. So it is a nod to personalization, within limits that don’t send you down the same spiral of filter bubbles.

Steven Cherry There are only so many reporters at a newspaper. There are only so many minutes we have in a day to read the news. So in all of the coverage, for example, of the protests this past month—coverage we should be grateful for, of an issue that deserves all the prominence it can get—a lot of other stories got lost. For example, there was a dramatic and sudden announcement of a reduction in the U.S. troop count in Germany. [Note: This episode was recorded June 18, 2020 — Ed.] I happened to catch that story in the New York Times myself, but it was a pretty small headline, buried pretty far down. It was covered in your weekly newsletter, though. I take it you see yourselves as having a second mission besides countering one-sided news: the problem of under-covered news.

Sukh Singh Yes, we do, and that’s been a realization we’ve come to over the journey of Ground News. It wasn’t something we recognized from the onset, but something we discovered as our throughput of news increased. We spoke about the problem of filter bubbles, and we initially thought the problem was bias. The problem was that a news event happens—some real, concrete event happens in the real world—and then it is passed on as information through various news outlets, each one spinning it, or at least wording it, in a way that aligns with either their core agenda or the likings of their audience. More and more, we found that the problem isn’t just bias and spin; it’s also omission.

So if we look at the wide swath of left-leaning and right-leaning news publications in America today—if we were to go to the home pages of two publications fairly far apart on the political spectrum—you would find not just the same news stories with different headlines or different lenses, but an entirely different set of news stories. So much so that—you mentioned our weekly newsletter, the Blindspot report—in the Blindspot report, we pick each week five to six stories that were covered massively on one side of the political spectrum but entirely omitted on the other.

So in this case—the event that you mentioned, the troop withdrawal from Germany—it did go very unnoticed by certain parts of the political spectrum. So as a consumer who wants to be informed, going to one or two news sources, no matter how valuable, no matter how rigorous they are, will inevitably leave very large parts of the news out there omitted from your field of view. Whether you’re going to the right set of publications or not is a secondary conversation. The more primary and more concerning conversation is: how do you communicate with your neighbor when they’re coming from a completely different set of news stories, and a different worldview informed by them?

Steven Cherry The name Ground News seems to be a reference to the idea that there’s a ground truth. There are ground truths in science and engineering; it would be wonderful, for example, if we could do some truly random testing for coronavirus and get the ground truth on rates of infection. But are there ground truths in the news business anymore? Or are there only counterbalancings of partial truths?

Sukh Singh That’s a good question. I wouldn’t be so cynical as to say that there are no news publications out there reporting what they truly believe to be the ground truth. But we do find ourselves in a world of counterbalances. We turn on the TV news networks and we see a set of three talking heads, with a moderator in the middle and differing opinions on either side. So what we do at Ground News—as you said, the reference in the name—is try to have that flat, even playing field where different perspectives can come and make their case.

So our aspiration is always to take the ground truth—whether that’s in the world of science or the world of philosophy, whatever you want to call an atomic fact—take the real event, and then have dozens—typically, on average, about 20—different news perspectives, local, national, international, left, right, all across the board, covering the same news event. And our central thesis is that the ultimate solution is reader empowerment: no publication or technology can truly come to the conclusions for a person—and there’s perhaps a “shouldn’t” in there as well. So our mission really is to take the different news perspectives, present them on an even playing field to our subscribers, and then allow them to come to their own conclusions.

Steven Cherry So without getting entirely philosophical about this, it seems to me that—let’s say in the language of Plato’s Republic and the allegory of the cave—you’re able to help us look at more than just the shadows that are projected on the wall of the cave. We get to see all of the different people projecting the shadows, but we’re still not going to get to the Platonic forms of the actual truth of what’s happening in the world. Is that fair to say?

Sukh Singh Yes. Keeping with that allegory, I would say that our assertion is not that every single perspective is equally valid. That’s not a value judgment we ever make. We don’t ourselves label the left-right-moderate biases of any news publication or platform. We actually source them from three arm’s-length nonprofit agencies whose mission is labeling news publications by their demonstrated bias. We aggregate those and use them as labels in our platform. So we never pass a value judgment on any perspective. But my hope personally—and ours as a company—really is that some perspectives get you closer to a glimpse of the outside, rather than just being another shadow on the wall. The onus really is on the reader to say which perspective, which coverage, they think most closely resembles the ground truth.

Steven Cherry I think that’s fair enough. And I think it would also be fair to add that there are issues for which there really isn’t a pairing of two opposing sides—for example, climate change: responsible journalists pretty much ignore the idea of there being no climate change. But still, it’s important for people, politically, to understand that there are people out there who have not accepted climate change, and that they’re still writing about it and sharing views and so forth. And it seems to me that what you’re doing shines a light on that aspect of it.

Sukh Singh Absolutely, and one of our key aspirations, our mission, is to enable people to have those conversations. So even if you are 100 percent convinced that you are going to credible news publications and getting the most vetted and journalistically rigorous news coverage available on the free market, it may still be that you can’t reach across the aisle, or just go next door and talk to your neighbor or your friend who is living with a very different worldview. Better or worse—again, we won’t pass judgment—just having a more expanded scope of news stories come into your field of view, onto your radar, does enable you to have those conversations, even if you feel some of your peers may be misguided.

Steven Cherry The fundamental problem in news is that there are financial incentives for aggregators like Google and Facebook and for the news sources themselves to keep us in the bubbles that we’re in, feeding us only stories that fit our world view and giving us extreme versions of the news instead of more moderate ones. You yourself noted that with Facebook and other social networks, the user does no work in those cases. Using Ground News is something you have to do actively. Do you ever fear that it’s just a sort of Band-Aid that we can place on this gaping social wound?

Sukh Singh So let me deal with that in two parts. The first part is the financial sustainability of journalism. There certainly is a crisis there, and I think we could have several more of these conversations about the financial sustainability of journalism and solutions to that crisis.

But one very easily identifiable problem is the reliance on advertising. I think a lot of news publications all too willingly started publicizing their content on the Internet to increase their reach, and any advertising revenue they could get off of that from Google, and later Facebook, was incremental revenue on top of their print subscriptions. And they were, on the whole, very chipper to get incremental revenue by using the Internet. As we’ve seen, that problem has become more and more of a stranglehold on news publications, and media publications in general, as they fight for those ad dollars. And the natural end of that competition is sensationalism and clickbait. That’s speaking to the financial sustainability of journalism.

The path we’ve chosen to go down—exactly for that reason—is to charge subscriptions directly to our users. So we have thousands of paying subscribers now, paying a dollar a month or ten dollars a year to access the features on Ground News. That’s a nominal price point, but there’s also an ulterior motive to it. It really is about habit-building and getting people to pay for news again. Many of us have forgotten over the last couple of decades that paying for news used to be almost the same as paying for electricity or water; that sense of having to pay for news has disappeared. We’re trying to revive that, which, again, will hopefully pay dividends down the line for the financial sustainability of journalism.

In terms of being a Band-Aid solution: we do think there is more of a movement of people accepting the responsibility to do the work to inform themselves, which stands in direct contrast to the social media feed—which I think most of us have come to distrust, especially in recent years. There was, I believe, a Reuters study two years ago showing that fewer people went to Facebook for their news in 2018 than in the year before, and that was the first time in a decade. So I do think there’s a recognition that a social media feed is no longer a viable news-delivery mechanism. So we do see people doing that little bit of work, and on our part, we make it as accessible as possible. Your question reminds me of the adage that, as a consumer, if you’re not the customer, you’re the product. And that really is the divide between using a free social media feed and paying for a news-delivery mechanism.

Steven Cherry Ground News is actually a service of your earlier startup, Snapwise. Do you want to say a little bit about it and what it does?

Sukh Singh My co-founder is a former NASA satellite engineer who worked on earth-observation satellites. She was working on a constellation of satellites that passed over the planet every 24 hours and mapped every square foot of it—literally the ground truth of what was happening everywhere on the planet. And once she left her space career, and she and I started talking about the impact of technology on journalism, we realized that if we can map the entire planet every 24 hours and have an undeniable record of what’s happening in the world, why can’t we have the same in the news industry? So our earliest iteration of what is now Ground News was much more focused on citizen journalism: getting folks to use their phones to communicate what was happening in the world around them, and getting that firsthand data into the information stream that we consume as news consumers.

If this is starting to sound like Twitter—we ran into several of the same drawbacks, especially when it came to news integrity: verifying the facts, and making sure that what people were offering as information really was up to the same grade as professional journalism. And more and more, we realized we couldn’t diminish the role of professional journalists in delivering the news. So we started to aggregate more and more vetted, credible news publications from across the world. And before we knew it, we had fifty thousand different unique sources of news—local, national, international, left to right, all the way from your town newspaper up to a giant multinational press wire service like Thomson Reuters. We were taking all those different news sources and putting them on the same platform. So that’s really been our evolution, as people trying to solve some of these problems in the journalism industry.

Steven Cherry How do you identify publications as being on the left or on the right?

Sukh Singh As we started aggregating more and more news sources, we got over the 10,000 mark, and before we knew it we were up to 50,000 news sources. It’s humanly impossible for our small team—or, imagine, even a much, much larger team—to carefully go and label each of them. So we’ve taken that from a number of news-monitoring agencies whose entire mission and purpose as organizations is to review and rate news publications.

So we use three different ones today: Media Bias Fact Check, AllSides, and Ad Fontes Media. All three of these—I would call them rating agencies, if you want to use the stock-market analogy—rate the political leanings and demonstrated factuality of these news organizations. We take those as inputs and aggregate them, but we do make their original labels available on our platform. To use an analogy from the movie world, we’re sort of like Metacritic, aggregating ratings from IMDb and Rotten Tomatoes and different platforms and making that all transparently available for consumers.
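
Singh doesn’t spell out the aggregation method, so here is a hypothetical sketch of the Metacritic-style idea: map each agency’s label onto a common left-right scale, average them, and keep the originals available:

BIAS_SCALE = {"left": -2.0, "lean left": -1.0, "center": 0.0,
              "lean right": 1.0, "right": 2.0}

def aggregate_bias(ratings: dict) -> float:
    # Average the agencies' labels on a -2 (left) to +2 (right) scale.
    scores = [BIAS_SCALE[label] for label in ratings.values()]
    return sum(scores) / len(scores)

# Hypothetical labels for one outlet from the three agencies named above:
ratings = {"Media Bias Fact Check": "lean left",
           "AllSides": "center",
           "Ad Fontes Media": "lean left"}
print(f"{aggregate_bias(ratings):+.2f}")  # -0.67: a bit left of center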

Steven Cherry You’re based in Canada, in Kitchener, which is a small city about an hour from Toronto. I think Americans think of Canada as having avoided some of the extremisms of the U.S.—I mean, other than maybe burning down the White House a couple of centuries ago, it’s been a pretty easygoing, get-along kind of place. Do you think being Canadian, and looking at the U.S. from a bit of a distance, contributed to what you’re doing?

Sukh Singh I don’t think we’ve had a belligerent reputation since the War of 1812. As Canadians, we do enjoy a generally nice-person kind of stereotype. We are, as you said, at arm’s length—sitting not quite a safe distance away, but across the border from everything that happens in the U.S. And with frequent trips down, and being deeply integrated with the United States as a country, we do get a very, very close view of what’s happening.

North of the border, we do have our own political system to deal with, with all of its workings and all of its ins and outs. But in terms of where we’ve really seen Ground News deliver value, it certainly has been in the United States. That is both our biggest market and our largest set of subscribers by far.

Steven Cherry Thank you so much for giving us this time today and explaining a service that’s providing an essential function in this chaotic news and political world.

Sukh Singh Thanks, Steven.

Steven Cherry We’ve been speaking with Sukh Singh, CTO and co-founder of Ground News, an app that helps break us out of our filter bubbles and tries to provide a 360-degree view of the news.

Our audio engineering was by Gotham Podcast Studio in New York. Our music is by Chad Crouch.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

This interview was recorded June 18, 2020.

Resources

Spotify, Machine Learning, and the Business of Recommendation Engines

https://mediabiasfactcheck.com/

AllSides

Ad Fontes Media

“Beware Online Filter Bubbles” (TED talk)

The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think, by Eli Pariser (Penguin, 2012)

Reimagining Public Buses in the Age of Uber

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/transportation/alternative-transportation/reimagining-public-buses-in-the-age-of-uber

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

A thread on Reddit once started:

“So I recently got a job offer that is about 15 miles away from where I live. I don’t have a car so I’m planning on commuting by bus, however the commute is estimated to last anywhere from 70-85 minutes. This is my first post-grad job … I really need the experience. However, I’m wondering if it is worth it … two hours a day just in my commute.”

Metropolises like London, Tokyo, and New York are built on a backbone of subways and rail transit. But in much of the world, people without cars travel by bus. And that’s a problem if a 15-mile commute takes 70 to 85 minutes.

Marchetti’s Constant, named after Italian physicist Cesare Marchetti, is the average time people spend on their daily commute, which is approximately a half hour each way, all around the world. The average U.S. commute is about 27 minutes, up 8 percent from a decade earlier. But that averages people who walk 10 minutes to work with people who drive an hour; it averages people who have a quick subway ride and people taking two or three buses that run only infrequently.

In the mid-2000s, the megacity of São Paulo developed a system of buses making limited stops and with their own lanes that my colleague Erico Guizzo wrote about in 2007. It’s a scheme that perhaps made sense 15 years ago, trying to combine the best of highway transport with the best of rail transit.

But in the mind of another Italian physicist who has turned his attention and his career to transportation, we now have enough computing power—smartphones, AI, and the cloud—for a different kind of solution.

My guest today, Tommaso Gecchelin, is a physicist and industrial designer. After studying quantum mechanics in Padua, Italy, and industrial design in Venice, he co-founded something called NEXT Future Transportation. For the past seven years there he has been developing a system of bus pods, one that in effect chops up a bus into car-sized pieces and has the potential to combine the best of commuter buses with the best of Uber. He joins me via Skype.

Tom, welcome to the podcast.

Tommaso Gecchelin Thank you very much. Thank you for having me here.

Steven Cherry Tom, "chopping up a bus into car-sized pieces" is my characterization of this system. Why don't you describe it yourself?

Tommaso Gecchelin Yes, we would describe them as very short sections of a bus that can dock together, forming a longer unit. So it's like a train where all the cars can drive themselves. They can be independent—they can be like cars or like taxis when they are alone. And when they join together, they can form a bus. But the most important thing is that they communicate with each other physically. So when they are connected, the doors open in between. So basically, they create something that we call a "stopless station," because the passengers can freely walk between one unit and the other internally, without the need for the entire bus to stop, drop them off, and pick them up again.

Steven Cherry So they’re a little bit longer than a Smart car. They’re self-driving. They have doors at either end. They communicate with one another and their passengers constantly. They mate up at highway speeds so smoothly that you can walk from one to the other. And they know who needs to get where and which pod needs to connect with which potentially hundreds of pods and tens of thousands of people, maybe twenty-four hours a day. What could possibly go wrong?

Tommaso Gecchelin Yes. It seems like a very complicated thing, but actually we have tried to simplify a lot of the project from the start until now. For example, what we are trying to do now is to focus on the docking procedure and the modularity of the system instead of being too focused on the self-driving, because the self-driving part is the most difficult thing to do, and it's also the most difficult to certify and to make legal on the road at the moment.

Steven Cherry We’re going to get to all of that, but let me first ask what the experience will be like. As I understand it, I’m sitting at home in the morning getting ready for work. My phone tells me a pod has arrived; I get on; the system, figures out that another pod is going to go near my destination. It determines where the two pods should connect. My phone alerts me when that moment is near and tells me which pod to switch to because there may be several connected at this train like way at this point. Am I picturing this correctly?

Tommaso Gecchelin Exactly. The only difference is that it's very likely you won't have more than two pods docked together at the same time. Generally the first mile is covered by one pod that picks you up alone, like a taxi that you call. And then afterward, when you're merging onto the main roads, it docks with another pod that is generally already almost full, because it has done the same thing again and again. So you just walk to the pod that already has a few people inside. You leave your pod completely empty, and your pod will detach and go pick up other people. So it's like a relay race.
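For readers who want the mechanics, that relay hand-off reduces, at its simplest, to a matching rule. Here is a minimal sketch in Python—the class names, the capacity figure, and the same-direction test are illustrative assumptions, not NEXT's actual dispatch logic:

from dataclasses import dataclass, field

CAPACITY = 10  # seated-plus-standing capacity per pod, per the interview

@dataclass
class Pod:
    pod_id: str
    heading: str                      # e.g. "downtown"
    riders: list = field(default_factory=list)

    def has_room(self) -> bool:
        return len(self.riders) < CAPACITY

def hand_off(first_mile: Pod, trunk_pods: list) -> Pod:
    """Dock with a same-direction trunk pod that has room; otherwise
    the first-mile pod simply continues to the destination alone."""
    for trunk in trunk_pods:
        if trunk.heading == first_mile.heading and trunk.has_room():
            trunk.riders.extend(first_mile.riders)  # riders walk through the docked doors
            first_mile.riders.clear()               # emptied pod detaches to fetch others
            return trunk
    return first_mile

# One rider picked up at home, handed off to a pod already carrying eight.
solo = Pod("P1", "downtown", ["you"])
trunk = Pod("P7", "downtown", [f"rider{i}" for i in range(8)])
print(hand_off(solo, [trunk]).pod_id)               # -> P7

The real system, of course, must also decide where and when two moving pods can physically dock; this sketch only captures the who-goes-where bookkeeping.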

Steven Cherry And from the central city in the afternoon, back to the suburbs, it would be the same in reverse.

Tommaso Gecchelin Exactly. Generally, you are not that interested in timing when you are going back home. So the vehicle will split off with ten people inside and make two or three stops before you get home—you don't need to ride home completely alone. That's a slight difference from the beginning of the trip in the morning.

Steven Cherry Yes. And that’s in the central city model, people leave for work at very different times, but frequently they leave work at pretty close to the same time. So it is a little different in that respect. I’m wondering, does your modeling tell you—if I have, say, in my own car, a maybe a half-hour commute—with the pods connecting and multiple people and perhaps even waiting at a sort of pod bus shelter for a few minutes for my next pod to come, what would be the average transit time? I imagine it would increase a little bit at least.

Tommaso Gecchelin Well, this is the difference: You never have to stop and wait for another pod. The pods always dock together while they are traveling. And if you cannot find another pod traveling in your direction, you will go directly to your destination. We want to differentiate our system from traditional buses, to increase the comfort for the passengers. So we absolutely never stop to drop people off so they can wait for another pod. If there is a connection—a pod connection, let's say—the two pods will dock together, and the people just walk from one to the other. So it's very different from a bus; the transportation system is built around this feature. And to reply to your question: We did a lot of simulations, and the increase in travel time is roughly five percent.

Steven Cherry That’s very little. And there is an advantage of not having to park at the destination. So that might actually save that five percent as well.

This depends on a certain amount of scale, I would imagine. And so do you have any thoughts on what the optimum geographies are? Is it big metropolitan areas with lots of suburbs? What about smaller ones like, say, Pittsburgh—the city has 300,000 people and the metro area is about four times that? And what about cities like Albany, New York, or South Bend, Indiana, which have about 100,000 each?

Tommaso Gecchelin Well, we focus on cities that have a very dense downtown and very sparse suburban areas. Most of the cities in the U.S. are like that. We concentrated also on cities like Dubai, which has very concentrated traffic in the main part of the city—Sheikh Zayed Road. These cities are the most optimizable by our system. On the other side, European cities have a fairly homogeneous density of pick-ups and drop-offs—and so of the origin-destination matrix. In that case, the optimization level is slightly lower.

Steven Cherry Now you first developed a 1:10-scale prototype and brought it to Dubai, where I guess it was enough of a hit that they had you build two pods to be tested there. And you've trialed some key technology pieces: the linking up and the walking from one to the other at speed; the cloud intelligence; the communication to the mobile-device app … Each of these seems like a huge challenge.

Tommaso Gecchelin Yes, it was very critical for us to test the vehicle at 1:1 scale after the 1:10 scale was working. So we convinced the sheikh of Dubai to buy two vehicles, and up to the very end of the engineering and prototyping phase, we were a little bit afraid about the docking procedure. Then afterwards, it worked perfectly. Every calculation we had done was good. And we did a very, very good job and a good showcase in Dubai.

Steven Cherry Does Dubai’s cities have the kind of density and central city / sparse suburbs that you imagine to be optimal? And do they have any thoughts about building out a complete system for themselves?

Tommaso Gecchelin Yes, the city of Dubai is perfect for our system, especially because the destinations are very, very concentrated. Dubai Mall, for example, is a very specific—let's say punctual—destination. And on the other hand, the houses are sparse in the other part of the city, Sharjah [U.A.E.], which is like the residential area of Dubai. So we have done a lot of simulation, especially for Dubai, and the system optimizes the traffic there very well.

Steven Cherry Do they have much of a public transport network there now? And more generally, do you envision the system coexisting with transport systems or do you expect it to largely replace them?

Tommaso Gecchelin I think that it will coexist, because it's not trying to take passengers away from the buses or the Metro. It's trying to take out private cars. The system is, in fact, a little bit more expensive than regular buses, but nonetheless it picks you up at home, so it's much more similar to a taxi. Let's say the price tag is in between the two. It's much cheaper than having a private car—and managing it, and parking it—so it's much more convenient and also cheaper than a private car. But at the same time, it's cheaper than a taxi, even though it basically gives you the same service: the same timing to your destination and the same ubiquity as a taxi.

Steven Cherry In New York, for example, there are single and double buses; sometimes it's standing room only, but sometimes they're only carrying a handful of people. When you "chop up" a bus—to use my term—of maybe 40 or 50 or 80 people into four or six or eight pods, each pod runs closer to its capacity. How expensive might five pods end up being compared to, say, a 50-person bus?

Tommaso Gecchelin Five pods will be roughly equivalent to a 12-meter bus. So we are trying to get to the point where five pods will be equivalent, also in terms of price, to an electric bus—a premium electric bus. That is our goal.

Steven Cherry Just to be clear, a 12-meter bus would have what seating capacity?

Tommaso Gecchelin It really depends on whether it's a city bus or an intercity bus. But generally it goes from 50 to 70 people.

Steven Cherry So similar capacity really, because your pods would seat six and have a total capacity of 10.

Tommaso Gecchelin Exactly. Very comfortably. And they can go up to 15 people—each pod—if you want to have the same density per person—the same people-per-square-meter—as a typical city bus.

Steven Cherry A point I haven't heard in any of the presentations of yours that I've watched is that in an all-electric vehicle system, a single pod can go out of service to recharge instead of an entire 50-person or 75-person bus. So only one-fifth of the bus, so to speak, has to go offline.

Tommaso Gecchelin Exactly. This is a very interesting feature, because it's like having swappable batteries: you can swap out one pod, so you cut your capacity at that moment by 10 or 20 percent instead of losing the entire capacity of the bus.

Steven Cherry It seems like pods are also going to be much more manageable within the cities than buses—making left turns on narrow streets, parking, pulling over … Bus stops nowadays typically take up one hundred feet of road or sidewalk. There are a lot of things to like about these smaller pods.

Tommaso Gecchelin Yes, in fact, you can park two pods docked together in the space where you would generally park a traditional car.

Steven Cherry I think you know that I teach at New York University’s engineering school as an adjunct professor. There’s an NYU connection to the story, as I understand it.

Tommaso Gecchelin Yes, yes. Actually, Joseph Chow featured us in a paper, and afterwards we started a collaboration with his group. They are doing a research paper on this modularity and its benefit—what he's calling "in-route transfer." It's the transfer of passengers while they are on the road, without stopping. It's a very interesting collaboration, actually.

Steven Cherry Yes, Chow is the deputy director of C2Smart, which stands for Connected Cities with Smart Transportation. And there was also an important contribution by a graduate student as part of his master’s thesis?

Tommaso Gecchelin Yes, exactly. Nick Paros, as part of his master's thesis, did a great job describing the behavior of our vehicles.

Steven Cherry Tom, what sort of timeline are we on? Do we have to wait for others to perfect the self-driving aspect? Do you have any idea when we would see a full system ready to be built?

Tommaso Gecchelin Well, at the moment, we are doing a lot of studies to understand whether the system makes sense even before self-driving becomes legal. So, for example, when you split the bus, each pod will be driven by one of the passengers. It's like a hybrid between Uber and the traditional bus. Our next step is to certify the vehicle in Europe, under European law, to be road-legal on all public roads—with a driver. Driverless will be the step after that, but not right now.

Steven Cherry And in the long run, you envision that these pods could also do package delivery as almost an additional business model?

Tommaso Gecchelin Yes, they can do, let's say, package logistics. But the most interesting thing is to do retail logistics. So not just delivering a package to you, as Amazon is doing, but delivering the entire retail experience to you, because each pod can be dressed, can be customized, like a room, like a retail store. So when it's coming to you—or in motion, in the future—it will really be a new business line for us. And it will really be the future of retail, especially in this Covid period, in which it's a little bit more frightening to go to the mall.

Steven Cherry Tom, I mentioned at the top of the show your eclectic background. You also paint real paintings that have been featured in art exhibitions. And you’ve written that—and this is a quote of yours—”art reaches the eyes and the heart of the user.” Calling the viewer a user suggests that these are closely related passions for you, art and science and technology. Are they?

Tommaso Gecchelin Absolutely. I always try to merge them, to mix them, to create something that is more than the two parts separated, because generally art … it doesn't really use science to get to the point, to be fully useful for people. And I'm trying to do something that … it's not just expressing myself, but trying to be something really useful, something that does good for the whole world.

Steven Cherry Well, Tom, I think you've come up with an artful, elegant solution to what has been an intractable urban—and especially suburban—problem. I wish you and the project in bocca al lupo.

Tommaso Gecchelin Ah, yes [laughter], in bocca al lupo.

Steven Cherry And I thank you for joining me today. Grazie.

Tommaso Gecchelin Prego.

Steven Cherry We’ve been speaking with Tomaso Gecchelin, co-founder and CEO of Next Future Transportation, which wants to reimagine Uber as a public transit system, where connecting from one bus to another is as easy as walking from the kitchen to the living room.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.


We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

The Problem of Old Code and Older Coders

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/computing/it/the-problem-of-old-code-and-older-coders

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

The coronavirus pandemic has exposed any number of weaknesses in our technologies, business models, medical systems, media, and more. Perhaps none is more exposed than what my guest today calls "The Hidden World of Legacy IT." If you remember last April's infamous call for volunteer COBOL programmers by the governor of New Jersey, when his state's unemployment and disability benefits systems needed to be updated, that turned out to be just the tip of a ubiquitous multi-trillion-dollar iceberg—yes, trillion with a 't'—of outdated systems. Some of them are even more important to us than getting unemployment checks out—though that's pretty important in its own right. Water treatment plants, telephone exchanges, power grids, and air traffic control are just a few of the systems controlled by antiquated code.

In 2005, Bob Charette wrote a seminal article entitled "Why Software Fails." Now, fifteen years later, he strikes a similar nerve with another cover story, one that shines a light on the vast and largely hidden problem of legacy IT. Bob is a 30-year veteran of IT consulting, a fellow IEEE Spectrum contributing editor, and, I'm happy to say, my good friend as well as my guest today. He joins us by Skype.

Bob, welcome to the podcast.

Bob Charette Thank you, Steven.

Steven Cherry Bob, legacy software, like a middle child or your knee's meniscus, isn't something we think about much until there's a problem. You note that we know more about government systems because the costs are a matter of public record. But are the problems of the corporate world just as bad?

Bob Charette Yes. There’s really not a lot of difference between what’s happening in government and what’s happening in industry. As you mentioned, government is more visible because you have auditors who are looking at failures and [are] publishing reports. But there’s been major problems in airlines and banks and insurance companies—just about every industry that has IT has a problem with legacy systems in one way or another.

Steven Cherry Bob, the numbers are staggering. In the past 10 years, at least $2.5 trillion has been spent trying to replace legacy IT systems, of which some seven hundred and twenty billion dollars was utterly wasted on failed replacement efforts. And that’s before the last of the COBOL generation retires. Just how big a problem is this?

Bob Charette That’s a really good question. The size of the problem really is unknown. We have no clear count of the number of systems that are legacy in government where we should be able to have a pretty good idea. We have really no insight into what’s happening in industry. The only thing that we that we do know is that we’re spending trillions of dollars annually in terms of operations and maintenance of these systems, and as you mentioned, we’re spending hundreds of billions per year in trying to modernize them with large numbers failing. This is this is one of the things that when I was doing the research and you try to find some authoritative number, there just isn’t any there at all.

In fact, a recent report by the Social Security Administration’s Inspector General basically said that even they could not figure out how many systems were actually legacy in Social Security. And in fact, the Social Security Administration didn’t know itself either.

Steven Cherry So some of that is record-keeping problems, some of that is secrecy, especially on the corporate side. A little bit of that might be definitional. So maybe we should step back and ask what the philosophers call the ti esti [τὸ τί ἐστι] question—the what-is question. Does everybody agree on what legacy IT is? What counts as legacy?

Bob Charette No. And that’s another problem. What happens is there’s different definitions in different organizations and in different government agencies, even in the US government. And no one has a standard definition. About the closest that we come to is that it’s a system that does not meet the business need for some reason. Now, I want to make it very clear: The definition doesn’t say that it has to be old, or past a certain point in time Nor does it mean that it’s COBOL. There are systems that have been built and are less than 10 years old that are considered legacy because they no longer meet the business need. So the idea is, is that there’s lots of reasons why it may not meet the business needs—there may be obsolescent hardware, the software software may not be usable or feasible to be improved. There may be bugs in the system that just can’t be fixed at any reasonable cost. So there’s a lot of reasons why a system may be termed legacy, but there’s really no general definition that everybody agrees with.

Steven Cherry Bob, states like your Virginia and New York and every state in the Union keep building new roads, seemingly without a thought. A few years ago, a Bloomberg article noted that every mile of fresh new road will one day become a mile of crumbling old road that needs additional attention. Less than half of all road budgets go to maintenance. A Texas study found that the 40-year cost to maintain a $120 million highway was about $800 million. Do we see the same thing in IT? Do we keep building new systems, seemingly without a second thought that we're going to have to maintain them?

Bob Charette Yes, and for good reason. When we build a system and it actually works, it usually works for a fairly long time. There's kind of an irony and a paradox. The irony is that the longer these systems live, the harder they are to replace. Paradoxically, because they're so important, they also don't receive much attention in terms of spend. Typically, for every dollar that's spent on developing a system, somewhere between eight and ten dollars is spent to maintain it over its life. But very few systems are actually retired before their time. Almost every system that I know of, of any size, tends to last a lot longer than the designers ever intended.
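As a rough worked example of that ratio—the dollar figures below are invented; only the eight-to-ten-times multiplier comes from the interview:

dev_cost = 10_000_000                 # hypothetical cost to build the system: $10M
maintenance_multiplier = 9            # midpoint of Charette's 8x-10x range
lifetime_cost = dev_cost * (1 + maintenance_multiplier)
print(f"${lifetime_cost:,}")          # -> $100,000,000 over the system's life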

Steven Cherry The Bloomberg article noted that disproportionate spending by states on road expansion, at the expense of regular repair—again, less than half of state road budgets are spent on maintenance—has left many roads in poor condition. IT spends a lot of money on maintenance, but a GAO study found that a big part of IT budgets goes to operations and maintenance at the expense of modernization or replacement. And in fact, that ratio is getting higher—less and less money is available for upgrades.

Bob Charette Well, there are two factors at play. One is that it's easier to build new systems—so there's money to build new systems, and that's what we constantly do. We're building new IT systems over time, which has, again, proliferated the number of systems that we need to maintain. So as we build more systems, we eat up more of our funding, so that when it comes time to actually modernize them, there's less money available. The other aspect is that we don't build these systems as standalone systems. They are interconnected with others. And when you interconnect lots of different systems, you're not maintaining just an individual system—you're maintaining a system of systems. And that becomes more costly. Because the systems are interconnected, and because they are very costly to replace, we tend to hold onto them longer. So what happens is that the more systems you build and interconnect, the harder it is later to replace them, because the cost of replacement is huge. And the probability of failure is also huge.

Steven Cherry Finally—and I promise to get off the highway comparison after this—there seems to be a point at which roads, even when well maintained, need to be reconstructed, not just maintained and repaved. And that point for roads is typically the 40-year mark. Are we seeing something like that in IT?

Bob Charette Well, we’re starting to see a radical change. I think that one of the real changes in IT development and maintenance and support has been the notion of what’s called DevOps, this notion of having development and operations being merged into one.

Almost since the beginning of IT systems development, we've thought about it as being in two phases. There is the development phase, and then there is a separate maintenance phase. And a maintenance phase could last anywhere from 8 or 10 years—some systems now are 20, 30, 40 years old. The idea now is to say that when you develop a system, you have to think about how you're going to support it, and therefore development and maintenance are rolled into one. It's the idea that software is never done, and therefore, hopefully, in the future this problem of legacy will in many ways go away. We'll still have to worry about the point at which you can't really support a system anymore. But we should have a lot fewer failures, at least on the operational side. And our costs hopefully will also go down.

Steven Cherry So we can have the best of intentions, but we build roads and bridges and tunnels to last for 40 or 50 years, and then seventy-five years later, we realize we still need them and will for the foreseeable future. Are we still going to need COBOL programmers in 2030? 2050? 2100?

Bob Charette Probably. There’s so much coded in COBOL. And a lot of them work extremely well. And it’s not the software so much that that is the problem. It’s the hardware obsolescence. I can easily foresee COBOL systems being around for another 30, 40, maybe even 50 years. And even that I may be underestimating the longevity of these systems. What’s true in the military, where aircraft like to be 50 to, which was supposed to have about a 20 to 25 year life, is now one hundred years old, replacing everything in the aircraft over a period of time.

There is research being done by DARPA and others to look at how to extend systems and possibly have a system be around for 100 years. And you can do that if you're extremely clever in how you design it—and if you have this idea of how you're going to constantly upgrade and constantly repair the system and make it easy to move both the data and the hardware. So I think, again, we're starting to see the realization in IT—where systems designers were once thinking 10 years is great, twenty years is fantastic—that these core systems may be around for one hundred years.

Steven Cherry Age and complexity have another consequence: Unlike roads, there’s a cybersecurity aspect to all of this as well.

Bob Charette Yeah, that’s probably the biggest weakness that that occurs in new systems, as well as with legacy systems. Legacy systems were never really built with security in mind. And in fact, one of the common complaints even today with new systems is that security isn’t built in; it’s bolted on afterwards, which makes it extremely difficult.

I think security has really come to the fore, especially in the last two or three years. In fact, last year we had over 100 government systems in the United States—local, state, and federal systems—that were subject to successful ransomware attacks, because the attackers focused in on legacy systems, which were not as well maintained in terms of their security practices and are harder to make secure. So I think security is going to be an ongoing issue into the foreseeable future.

Steven Cherry The distinction between development and operations brings to mind another one, and that is we think of executable software and data as very separate things. That’s the basis of computing architectures ever since John von Neumann. But legacy IT has a problem with data as well as software, doesn’t it?

Bob Charette One of the areas that we didn't get to explore very deeply in the story, mostly because of space limitations, is the problem of data. Data is one of the most difficult things to move from one system to another. In the story, we talked about a Navy payroll system: The Navy was trying to consolidate 55 systems into one, and those systems used dozens of programming languages. They have multiple databases. The formats are different. How the data is accessed—which business processes use the data, and how—is different. And when you think about how you're going to move all that information, you have to make sure the information is relevant and correct—we want to make sure we don't have dirty data. Those things all need to come together so that when we move to a new system, the data actually is what we want. And if you take a look at the IRS, the IRS has 60-year-old systems, and the reason is that they have 60-year-old data on millions of companies and hundreds of millions of taxpayers. Trying to move that data to new systems without losing it or corrupting it has been a decades-long problem that they've been trying to solve.

Steven Cherry Making sure you don’t lose individuals or duplicate individuals across databases when you merge them.

Bob Charette One of the worst things that you can do is have not only duplicate data, but have data that actually is incorrect and then you just move that incorrect data into a new system.
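That merge problem can be made concrete with a toy example. In the sketch below, the same person appears in two legacy databases under slightly different keys; the normalization rules are hypothetical stand-ins for real record-linkage logic, which is far more involved:

def normalize(record: dict) -> tuple:
    """Reduce a record to a comparison key so near-duplicates collide."""
    name = " ".join(record["name"].lower().split())   # case and spacing
    ssn = record["ssn"].replace("-", "")              # punctuation in IDs
    return (name, ssn)

def merge(*databases: list) -> list:
    seen, merged = set(), []
    for db in databases:
        for rec in db:
            key = normalize(rec)
            if key not in seen:       # drop exact and near duplicates
                seen.add(key)
                merged.append(rec)
    return merged

legacy_a = [{"name": "Ada  Lovelace", "ssn": "123-45-6789"}]
legacy_b = [{"name": "ada lovelace", "ssn": "123456789"},
            {"name": "Charles Babbage", "ssn": "987-65-4321"}]
print(len(merge(legacy_a, legacy_b)))  # -> 2, not 3: one person, one record

Real migrations, as Charette notes, must also catch records that are subtly wrong rather than merely duplicated, which no simple key normalization can do.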

Steven Cherry Well, Bob, as I said, you did it before with "Why Software Fails" and you've done it again with this detailed investigation. Thanks for publishing "The Hidden World of Legacy IT," and thanks for joining us today.

Bob Charette My pleasure, Steven.

We’ve been speaking with IT consultant Bob Charette about the enormous and still-growing problem of legacy IT systems.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.


We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

Spotify, Machine Learning, and the Business of Recommendation Engines

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/consumer-electronics/audiovideo/spotify-machine-learning-and-the-business-of-recommendation-engines

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

You’re surely familiar—though you may not know it by name—with the Paradox of Choice; we’re surrounded by it: 175 salad dressing choices, 80,000 possible Starbucks beverages, 50 different mutual funds for your retirement account.

“All of this choice,” psychologists say, “starts to be not only unproductive, but counterproductive—a source of pain, regret, worry about missed opportunities, and unrealistically high expectations.”

And yet, we have more choices than ever—32,000 hours to watch on Netflix, 10 million e-books on our Kindles, 5000 different car makes and models, not counting color and dozens of options.

It’s too much. We need help. And that help is available in the form of recommendation engines. In fact, they may be helping us a bit too much, according to my guest today.

Michael Schrage is a research fellow at the MIT Sloan School's Initiative on the Digital Economy. He advises corporations—including Procter & Gamble, Google, Intel, and Siemens—on innovation and investment, and he's the author of several books, including 2014's The Innovator's Hypothesis and the 2020 book Recommendation Engines, newly published by MIT Press. He joins us today via Skype.

Steven Cherry Michael, welcome to the podcast.

Michael Schrage Thank you so much for having me.

Steven Cherry Michael, many of us think of recommendation engines as those helpful messages at Netflix or Amazon, such as "people like you also watched" or "also bought," but also Yelp and TripAdvisor, predictive text choices and spelling corrections. And, of course, Twitter posts, Google results, and the order of items in your Facebook feed. How ubiquitous are recommendation engines?

Michael Schrage They’re ubiquitous. They’re pervasive. They’re becoming more so, in no small part because of the way the reason for which you set up this talk. There’s more choices and more choices aren’t inherently better choices. So what are the ways that better data and better algorithms can personalize or customize or in some other way make more relevant a choice or an opportunity for you? That is the reason why I wrote the Recommendation Engines book, because this issue of, on one hand, the explosion of choice in the absence of time, the constraints of time, but the chance, the opportunity to get something that really resonates with you, that really pleasantly and instructively and empoweringly helps you—that’s a big deal. That’s a big deal. And I think it’s a big deal that’s going to become a bigger deal as machine learning algorithms kick in and our recommender systems, our recommendation engines become even smarter.

Steven Cherry I want to get to some examples, but before we do, it seems like prediction and recommendation are all tied up with one another. Do we need to distinguish them?

Michael Schrage That is an excellent, excellent question. And let me tell you why I think it's such an excellent question. When I really began looking into this area, I thought of recommendation as just that: analytics, with the analytics proffering relevant choices. In fact, depending upon the kind of datasets you have access to, one can and should think of recommendation engines as generators of predictions—predictions of things you will like, predictions of things you will engage with, predictions of things you will buy. Now, what you like and what you engage with and what you buy may be different things to optimize around, but they are all different predictions. And so, yes—recommendation engines are indeed predictors.
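To make the prediction framing concrete, here is a minimal sketch of a similarity-weighted predictor in Python. The ratings are toy data, and production systems use far richer models and implicit signals, but the shape is the same: a recommendation score is a predicted preference.

import math

ratings = {  # user -> {item: rating}
    "alice": {"A": 5, "B": 3, "C": 4},
    "bob":   {"A": 4, "B": 1, "C": 5, "D": 4},
    "carol": {"A": 1, "B": 5, "D": 2},
}

def cosine(u: dict, v: dict) -> float:
    """Cosine similarity between two users' rating vectors."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    return dot / (math.sqrt(sum(x * x for x in u.values())) *
                  math.sqrt(sum(x * x for x in v.values())))

def predict(user: str, item: str) -> float:
    """Similarity-weighted average of other users' ratings for the item."""
    num = den = 0.0
    for other, theirs in ratings.items():
        if other != user and item in theirs:
            w = cosine(ratings[user], theirs)
            num += w * theirs[item]
            den += abs(w)
    return num / den if den else 0.0

print(round(predict("alice", "D"), 2))  # a predicted rating, i.e. a recommendation score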

Steven Cherry Back in the day, Spectrum had some of the early articles on the Netflix Prize. Netflix now seems to collect data on just about every moment we spend on the service … what we queue up, how much bingeing we do, where exactly we stop a show for the night or forever … It switched from a five-star rating system to a thumbs up or down, and it seems it doesn't even need that information when there's so much actual behavior to go by.

Michael Schrage You are exactly right about the Netflix instrumentation. It is remarkable what they have learned—and what they decline to disclose about what they've learned—about the difference between what are called implicit versus explicit ratings. Explicit ratings are exactly what you've described: five stars. But in fact, thumbs up/thumbs down turns out to be statistically quite relevant and quite useful. The most important thing Netflix discovered—and of course, let's not forget that Netflix didn't begin as a streaming service; it began with the good old-fashioned United States Postal Service as its delivery system—was that people's behavior was far more revealing and far more useful in terms of, yes, predicting preference.

Steven Cherry So Netflix encourages us to binge-watch. But it seems YouTube is really the master at queuing up a next video and playing it … you start a two-minute video and find that an hour has gone by … In the book, you say Uber uses much the same technique with its drivers. How does that work?

Michael Schrage Yes! YouTube has really—literally, not just figuratively—re-engineered itself around recommendation. And TikTok was a born recommender. The circumstances for Uber were somewhat different, because what Uber discovered in its analytics was that if there was too much downtime between rides, some of its drivers would abandon the platform and become Lyft drivers. So the question became, how do we keep our power drivers constructively, productively, cost-effectively, time-effectively engaged? And they began to do stacking. The team—using a platform I think called Leonardo—began analyzing what kinds of requests were coming in, and they began sorting and stacking requests so that drivers could literally, as they were dropping somebody off, have the option, a recommendation, of one or two or three rides they could match next, depending on the flow and the queue.

And that was obviously a very, very big deal, because it was a win-win for the platform. It gave more choices to people who wanted to use the ride-hailing service, but it also kept the flow of drivers very, very smooth and consistent. So it was a demand-smoothing and a supply-smoothing approach, and recommendation is really key for that. In fact—forgive me for going on a bit longer on this—this was one of the reasons why Netflix went into recommender systems and recommendation engines, because, of course, everybody wanted the blockbuster films back in the DVD days. So what could we give people if we were out of the top five blockbusters? This was the emergence of recommendation engines to help manage the long-tail—or longer-tail—phenomenon. How can we customize the experience? How can we do a better job of managing inventory? So there are certain transcendent mathematical, algorithmic techniques that are as relevant to the Uber you hail as to the movie you watch.
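A hedged sketch of that stacking idea: as a driver nears a drop-off, rank the pending requests by how little downtime each would leave. Everything below—the names, the distance proxy, the cutoff—is an illustrative assumption, not Uber's Leonardo platform.

from dataclasses import dataclass

@dataclass
class Request:
    rider: str
    pickup: tuple        # (x, y) grid position

def minutes_away(drop_off: tuple, pickup: tuple) -> int:
    # Manhattan distance as a crude stand-in for travel time
    return abs(drop_off[0] - pickup[0]) + abs(drop_off[1] - pickup[1])

def stack_next(drop_off: tuple, pending: list, k: int = 3) -> list:
    """Offer the k pending requests that leave this driver the least downtime."""
    return sorted(pending, key=lambda r: minutes_away(drop_off, r.pickup))[:k]

pending = [Request("r1", (2, 3)), Request("r2", (9, 9)), Request("r3", (1, 1))]
for req in stack_next(drop_off=(2, 2), pending=pending):
    print(req.rider)     # nearest first — offered before the current ride ends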

Steven Cherry Recommendation engine designers draw from psychology, but also, you say in the book, from behavioral economics—and then even one more area, persuasion technology. How do these three things differ, and how do they fit into recommendation engines?

Michael Schrage They are very different flavors—and I very much appreciate that question and that perception—because there's the classic notion that, you know, recommendation is about persuasion, and there is persuasion technology. It's been called captology, and the folks at Stanford were pioneers in that regard. One of the people in one of those captology classes, those persuasion-technology classes, ended up being a co-founder of Instagram.

And the whole notion there—how do we persuade, how do we design a technology, an engagement, or an interaction to create persistent engagement, persistent awareness—was rooted in technology. So at Stanford, for understandable reasons, it was rooted in technology. Psychology, absolutely: the history of choice presentation—you mentioned the tyranny, the paradox, of choice—Barry Schwartz's work at Swarthmore. How do people cognitively process choice? When does choice become overwhelming? There's the famous article by George Miller, "The Magical Number Seven, Plus or Minus Two," which talks about the cognitive constraints people have when they are trying to make a decision. This is where psychology and economics began to intersect, with Herb Simon, who was one of the pioneers of artificial intelligence, from Carnegie Tech and then Carnegie Mellon—bounded rationality. There are limits; we can't remember everything. So what are the limits, and how do those limits constrain the quantity and quality of the choices that we make?

This evolved into behavioral economics. This was Daniel Kahneman and Amos Tversky—and Kahneman also won the Nobel Prize in economics. Cognitive heuristics, shortcuts. And basically, because of the Internet, you had all of these incredible software and technical tools, and the Internet became the greatest laboratory for behavioral, economic, psychological, and captological experiments in the history of the world. And the medium, the format, the template that made the most sense for doing this kind of experimentation—mashing up persuasion technology with behavioral economics—was recommender systems, recommendation engines. They really became the domain where this work was tested, explored, exploited. And in 2007, RecSys, the academic recommender-systems conference, was launched internationally, and people from all over the world—and most importantly, from all of these disciplines: computer science, psychology, economics, et cetera—came and began sharing work in this regard. So it's a remarkable, remarkable epistemological story, not just a technological or innovation story.

Steven Cherry You point out that startups nowadays are not only born digital but born recommending. You have three great case studies in the book: Spotify; ByteDance, which is the company behind TikTok; and Stitch Fix, a billion-dollar startup that applies data science to fashion. I want to talk about Spotify, because the density of data is just a bit mind-bending here: two hundred million users, 30 million songs, spectrogram data of each song's tempo, key, and loudness, activity logs and user playlists, micro-genre analysis … You were particularly impressed by Spotify's Discover Weekly service, which uses all of that data. Could you tell us about it?

Michael Schrage Yes. And it’s the fifth anniversary and I just got a release saying that they’ve been over 2.3 billion hours streamed under Discovery. And the whole idea was that it’s an obvious idea, but it’s an obvious idea that’s difficult to do. The idea was, what can you listen to that you’ve never heard before? Discover. This is key. One of the key elements of effective recommender-system/recommendation- engine design is discovery and serendipity. And what they did was basically launch a program where you get a playlist, where you get to hear stuff that you’ve never heard before but that you are likely to like. And how can they be confident that you are likely to like it? Because it deals directly with everything that you mentioned in setting up the question. The micro-genres, the tempo, the cadence, the different genres, what you have on your playlist, what your friends have on their playlists, etc. And of course, as with Netflix, they track your behavior. How long do you listen to the track? Do you skip it, etc? And it’s proven to be remarkably successful.

And it illustrates, to my mind, one of the most interesting ontological, epistemological, esthetic, and philosophical issues that recommendation-engine design raises: What is the nature of similarity? How similar is similar? What is the more important similarity in music—the lyrics, the tempo, the mood, the spectrograph, the acoustics, the time of day? What are the dimensions of similarity that matter most? And the algorithms, either individually or ensembled, tease out and identify and rank those similarities—and, based on those similarities, proffer this playlist of songs, of music, you are most likely to like. It's remarkable. It's a prediction of your future taste based on your past behavior. But! But! Not in a way that is simply, no pun intended, an echo of what you've liked in the past.

But it represents a trajectory of what you are likely to like in the future. And I find that fascinating because it’s using the past to predict serendipitous, surprising, and unexpected future preferences, seemingly unexpected future preferences. I think that’s a huge deal.
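One way to see the similarity question concretely: represent each track as a vector of audio features and rank candidates by cosine similarity to a song the listener loves. The feature names and values below are invented; Spotify's real pipeline blends many such signals with collaborative-filtering data.

import math

# hypothetical per-track features: (tempo_bpm / 200, energy, valence)
tracks = {
    "song_you_love": (0.62, 0.80, 0.70),
    "candidate_one": (0.60, 0.78, 0.72),
    "candidate_two": (0.30, 0.20, 0.10),
}

def cosine(a: tuple, b: tuple) -> float:
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

seed = tracks["song_you_love"]
ranked = sorted(
    (name for name in tracks if name != "song_you_love"),
    key=lambda name: cosine(seed, tracks[name]),
    reverse=True,
)
print(ranked)  # ['candidate_one', 'candidate_two'] — a Discover-style shortlist

Which features go into the vector is exactly the "dimensions of similarity" question Schrage raises; the math is easy, and the choice of dimensions is the hard, interesting part.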

Steven Cherry Yeah, the music case is so interesting to me, I think, because, you know, we want to hear new things that we’re going to like, but we also want to hear the old stuff that we know that we like. It’s that mix that’s really interesting. And it seems to me that you go to a concert and the artist, without all of the machinery of a recommendation engine, is doing that, him- or herself. They’re presenting the stuff off of the new album, but they’re making sure to give you a healthy selection of the old favorites.

I’m going to make a little bit of a leap here, but something like that, I think goes on with—you mentioned ranking and we have this big election coming up in the U.S. and a handful of jurisdictions have moved to ranked-choice voting. In its simplest form, this is where people select not just their preferred candidate, but they rank them. And then after an initial counting, the candidate with the fewest votes as the first choice has dropped out from the count and their ballots get redistributed based on people’s number-two choices and so on until there’s a clear winner.

The idea is to get to a candidate who is acceptable to the largest number of voters instead of one that's more strongly preferred by a smaller number. And here's the similarity, where I think a concert is, in a crude form, doing what the recommendation engine does; runoff elections do this in a much cruder way. And so my question for you is: Is there some way, in a manageable form, for recommendation systems to help us in voting for candidates—and help us get to the candidate who is most acceptable to the largest number of voters?
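The counting procedure just described is compact enough to sketch in code. Here is a minimal instant-runoff count in Python; the ballots are toy data, and real elections layer on tie-breaking and exhausted-ballot rules that this sketch ignores.

from collections import Counter

def instant_runoff(ballots: list) -> str:
    """Return the winner of a ranked-choice election.
    Each ballot lists candidates in order of preference."""
    ballots = [list(b) for b in ballots]
    while True:
        tally = Counter(b[0] for b in ballots if b)
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total:
            return leader                     # a candidate holds a majority
        loser = min(tally, key=tally.get)     # fewest first-choice votes
        for b in ballots:                     # redistribute the loser's ballots
            if loser in b:
                b.remove(loser)

# Five toy ballots: B is eliminated first, and its ballot flows to C.
ballots = [["A", "B"], ["A", "C"], ["B", "C"], ["C", "B"], ["C", "B"]]
print(instant_runoff(ballots))                # -> C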

Michael Schrage What a fascinating question. And just to be clear, my background is in computer science and economics—I'm not a political scientist, though some of my best friends are political scientists. And let me point out that there is a very, very rich literature on this. I would go so far as to say that people who are really interested in pursuing this question should look at the public-choice literature. They should go back to the French—you know, Condorcet; the French came up with a variety of voting systems in this regard. But let me tell you one of my immediate, visceral reactions: Are we voting for people or are we voting for policies? What would be the better option or opportunity: for people to vote on a referendum on immigration or public health, or to elect people who enact a variety of policies? We do not have direct democracy, although there are certain states where you can, of course, vote directly on a particular question. The way I would repurpose your question would be: Do we want recommendation engines that help us vote for people? Help us vote for policies? Or help us vote for some hybrid of the two?

Why am I complicating your seemingly simple question? Precisely because it is the kind of question that forces us to ask, “Well, what is the real purpose of the recommendation?” To get us to make a better vote for a person or to get a better outcome from the political system and the governance system?

Let me give you a real-world example that I've worked with companies on. We can come up with a recommender system, a recommendation engine, that is optimized around the transaction—getting you to buy now. Now! We're optimizing recommendations so that you will buy now! Or we say, hmm, that's a customer we can have a relationship with; maybe what we should do is make recommendations that optimize customer lifetime value. And this is one of the most important questions going on at every single Internet platform company that I know. Google had exactly this issue with YouTube; it still has this issue with its advertising partners and platform. This is the exact issue that Amazon has—clearly, it regards its Prime customers from a customer-lifetime-value perspective. So your political question raises the single most important issue: What are the recommendations optimized for—the vote in this election, or the general public welfare over the next three to four to six years?
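The trade-off Schrage describes is, at bottom, a choice of objective function. In this toy illustration, the same three candidate recommendations rank differently depending on whether you score them by immediate conversion or by estimated effect on customer lifetime value—all numbers are invented.

candidates = [
    # (item, P(buy now), estimated effect on customer lifetime value)
    ("impulse_gadget", 0.30, -2.0),    # converts well, erodes trust
    ("useful_accessory", 0.20, +5.0),
    ("deep_catalog_pick", 0.05, +9.0), # rarely bought now, builds the habit
]

rank_transactional = sorted(candidates, key=lambda c: c[1], reverse=True)
rank_lifetime = sorted(candidates, key=lambda c: c[2], reverse=True)

print([c[0] for c in rank_transactional])  # optimize the sale today
print([c[0] for c in rank_lifetime])       # optimize the relationship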

Steven Cherry That turned out to be more interesting than I thought it was going to be. I’d be remiss if I didn’t take a minute to ask you about your previous book, The Innovator’s Hypothesis. Its core idea is that cheap experiments are a better path to innovation than brainstorming sessions, innovation vacations, and a bunch of things that companies are often advised to do to promote innovation and creativity. Maybe you could just take a minute and tell us about the 5×5 framework.

Michael Schrage Oh, my goodness. So I’d be delighted. And just to be clear, the book on recommendation engines could not and would not have been written without the work that I did on The Innovator’s Hypothesis.

You have five people from different parts of the organization come up with a portfolio of five different experiments based on well-framed, rigorous, testable hypotheses. Imagine doing this with 15 or 20 teams throughout an organization. What you're creating is an innovation pipeline, an experimentation pipeline. You see what share of your hypotheses address the needs of users, customers, partners, suppliers. Which ones are efficiency-oriented? Which ones are new-value-oriented? Wow! What a fascinating way to gain insight into the creative and innovative—and, indeed, the business—culture of the enterprise. So I wanted to move the Schwerpunkt, the center of gravity, away from "Where are good ideas coming from?" to "Can we set up an infrastructure to frame, test, experiment, and scale business hypotheses that matter?" What do you think a recommendation engine is? A recommendation engine is a way to test hypotheses about what people want to watch, what they want to listen to, who they want to reach out to, who they want to share with. Recommendation engines are experimentation engines.

Steven Cherry I’m reminded of IDEO, the design company. It has a sort of motto, no meeting without a prototype.

Michael Schrage Yes. A prototype is an experiment. A prototype shouldn’t be…. Here’s where you know people are cheating: when they say “proof of concept.” Screw the proof of concept. You want skepticism when you want to validate a hypothesis. Screw validation. What do you want to learn? What do you want to learn? Prototypes are about learning. Experimentation is about learning. Recommendation engines are about learning—learning people’s preferences, what they’re willing to explore, what they’re not willing to explore. Learning is the key. Learning is the central organizing principle for why these technologies and these opportunities are so bloody exciting.

Steven Cherry You mentioned Stanford earlier, and it seems there's a direct connection between the two books here, and that is the famous 2007 class at Stanford where Facebook's future programmers were taught to "move fast and break things"—a key class taught by the experimental psychologist B.J. Fogg.

Michael Schrage Right. B.J. is a remarkable guy. He is one of the pioneers of captology and persuasion technology. And one of the really impressive things about B.J. is he really brought a prototyping/experimentation sensibility to all of this.

It used to be—and this is as recent as a decade ago—that if you took a class on entrepreneurship at Stanford or M.I.T., what did you have to come up with? What was your deliverable? A business plan. Screw the business plan! With things like GitHub and open-source software, you now have to come up with a prototype. What's the cliché phrase? The MVP: the minimum viable prototype. That's really the key point here. We're trying to turn prototypes into platforms for learning what matters most—learning what matters most in terms of our customer or user orientation and value, and learning what matters most about what we need to build and how it needs to be built. How modular should it be? How scalable should it be? How bespoke should it be?

What’s the big difference between this in 2020 and 2010? What we’re building now will have the capability to learn—machine learning capabilities. One of the bad things that happened to me is I wrote my book was, far faster than I expected machine learning algorithms colonized the recommendation engine/recommender system world. And so I had to get up to speed and get up to speed fast on machine learning and machine learning platforms because recommendation engines now are the paragon and paradigm of machine learning worldwide.

Steven Cherry Well, it seems that once our basic needs are satisfied, the most precious commodities we have are our time and attention. One of the central dilemmas of our age is that we may be giving over too much of our everyday lives to recommendation engines—but we certainly can't live our overly complex everyday lives without them. Michael, thank you for studying them, for writing this book about them, and for joining us today.

Michael Schrage My thanks for the opportunity. It was a pleasure.

Steven Cherry We’ve been speaking with Michael Schrage of the MIT Sloan School and author of the new book, Recommendation Engines, about how they are influencing more and more of our experiences.

This interview was recorded 26 August 2020. Our audio engineering was by Gotham Podcast Studio in New York. Our music is by Chad Crouch.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.