All posts by Mark Anderson

AI and the future of work: The prospects for tomorrow’s jobs

Post Syndicated from Mark Anderson original

AI experts gathered at MIT last week, with the aim of predicting the role artificial intelligence will play in the future of work. Will it be the enemy of the human worker? Will it prove to be a savior? Or will it be just another innovation—like electricity or the internet?

As IEEE Spectrum previously reported, this conference (“AI and the Future of Work Congress”), held at MIT’s Kresge Auditorium, offered sometimes pessimistic outlooks on the job- and industry-destroying path that AI and automation seem to be taking: Self-driving technology will put truck drivers out of work; smart law clerk algorithms will put paralegals out of work; robots will (continue to) put factory and warehouse workers out of work.

Andrew McAfee, co-director of MIT’s Initiative on the Digital Economy, said even just in the past couple years, he’s noticed a shift in the public’s perception of AI. “I remember from previous versions of this conference, it felt like we had to make the case that we’re living in a period of accelerating change and that AI’s going to have a big impact,” he said. “Nobody had to make that case today.”

Elisabeth Reynolds, executive director of MIT’s Task Force on the Work of the Future, noted that following the path of least resistance is not a viable way forward. “If we do nothing, we’re in trouble,” she said. “The future will not take care of itself. We have to do something about it.”

Panelists and speakers spoke about championing productive uses of AI in the workplace, which ultimately benefit both employees and customers.

As one example, Zeynep Ton, professor at MIT Sloan School of Management, highlighted retailer Sam’s Club’s recent rollout of a program called Sam’s Garage. Previously, customers shopping for tires for their cars spent somewhere between 30 and 45 minutes with a Sam’s Club associate paging through manuals and looking up specs on websites.

But with an AI algorithm, they were able to cut that spec hunting time down to 2.2 minutes. “Now instead of wasting their time trying to figure out the different tires, they can field the different options and talk about which one would work best [for the customer],” she said. “This is a great example of solving a real problem, including [enhancing] the experience of the associate as well as the customer.”

“We think of it as an AI-first world that’s coming,” said Scott Prevost, VP of engineering at Adobe. Prevost said AI agents in Adobe’s software will behave something like a creative assistant or intern who will take care of more mundane tasks for you.

Prevost cited an internal survey of Adobe customers that found 74 percent of respondents’ time was spent doing repetitive work—the kind that might be automated by an AI script or smart agent.

“It used to be you’d have the resources to work on three ideas [for a creative pitch or presentation],” Prevost said. “But if the AI can do a lot of the production work, then you can have 10 or 100. Which means you can actually explore some of the further out ideas. It’s also lowering the bar for everyday people to create really compelling output.”

In addition to changing the nature of work, noted a number of speakers at the event, AI is also directly transforming the workforce.

Jacob Hsu, CEO of the recruitment company Catalyte, spoke about using AI as a job placement tool. The company seeks to fill myriad positions including auto mechanics, baristas, and office workers—with its sights on candidates including young people and mid-career job changers. To find them, it advertises on Craigslist, social media, and traditional media.

The prospects who sign up with Catalyte take a battery of tests. The company’s AI algorithms then match each prospect’s skills with the field best suited for their talents.

“We want to be like the Harry Potter Sorting Hat,” Hsu said.

Guillermo Miranda, IBM’s global head of corporate social responsibility, said IBM has increasingly been hiring based not on credentials but on skills. For instance, he said, as much as 50 percent of the company’s new hires in some divisions do not have a traditional four-year college degree. “As a company, we need to be much more clear about hiring by skills,” he said. “It takes discipline. It takes conviction. It takes a little bit of enforcing with H.R. by the business leaders. But if you hire by skills, it works.”

Ardine Williams, Amazon’s VP of workforce development, said the e-commerce giant has been experimenting with developing skills of the employees at its warehouses (a.k.a. fulfillment centers) with an eye toward putting them in a position to get higher-paying work with other companies.

She described an agreement Amazon had made at its Dallas fulfillment center with aircraft maker Sikorsky, which had been experiencing a shortage of skilled workers at its nearby factory. So Amazon offered its employees free certification training to seek higher-paying work at Sikorsky.

“I do that because now I have an attraction mechanism—like a G.I. Bill,” Williams said. The program is also available only to employees who have worked at least a year with Amazon, so it encourages medium-term retention while ultimately moving workers up the wage ladder.

Radha Basu, CEO of AI data company iMerit, said her firm aggressively hires from the pool of women and under-resourced minority communities in the U.S. and India. The company specializes in turning unstructured data (e.g. video or audio feeds) into tagged and annotated data for machine learning, natural language processing, or computer vision applications.

“There is a motivation with these young people to learn these things,” she said. “It comes with no baggage.”

Alastair Fitzpayne, executive director of The Aspen Institute’s Future of Work Initiative, said the future of work ultimately means, in bottom-line terms, the future of human capital. “We have an R&D tax credit,” he said. “We’ve had it for decades. It provides credit for companies that make new investment in research and development. But we have nothing on the human capital side that’s analogous.”

So a company that makes a big investment in worker training does it on its own dime, without any of the tax benefits it might accrue if it spent the money instead on, say, new equipment or new technology. Fitzpayne said a simple tweak to the R&D tax credit could make a big difference by incentivizing new investment in worker training. That still means Amazon’s pre-existing worker training programs—for a company that already famously pays no taxes—would not count.

“We need a different way of developing new technologies,” said Daron Acemoglu, MIT Institute Professor of Economics. He pointed to the clean energy sector as an example. First a consensus around the problem needs to emerge. Then a broadly agreed-upon set of goals and measurements needs to be developed (e.g., that AI and automation would, for instance, create at least X new jobs for every Y jobs that it eliminates).

Then it just needs to be implemented.

“We need to build a consensus that, along the path we’re following at the moment, there are going to be increasing problems for labor,” Acemoglu said. “We need a mindset change. That it is not just about minimizing costs or maximizing tax benefits, but really worrying about what kind of society we’re creating and what kind of environment we’re creating if we keep on just automating and [eliminating] good jobs.”

AI and the Future of Work: The Economic Impacts of Artificial Intelligence

This week at MIT, academics and industry officials compared notes, studies, and predictions about AI and the future of work. During the discussions, an insurance company executive shared details about one AI program that rolled out at his firm earlier this year. A chatbot the company introduced, the executive said, now handles 150,000 calls per month.

Later in the day, a panelist—David Fanning, founder of PBS’s Frontline—remarked that this statistic is emblematic of broader fears he saw when reporting a new Frontline documentary about AI. “People are scared,” Fanning said of the public’s AI anxiety.

Fanning was part of a daylong symposium about AI’s economic consequences—good, bad, and otherwise—convened by MIT’s Task Force on the Work of the Future.

AI and the Future of Work: What to look out for

The robots have come for our jobs. This is the fear that artificial intelligence increasingly stokes among both the tech and policy elite and the public at large. But how worried should we really be?

To consider what impact AI will have on employment, a conference at MIT titled The Future of Work is convening this week, bringing together some leading thinkers and analysts. In advance of the conference, IEEE Spectrum talked with R. David Edelman, director of MIT’s Project on Technology, Economy & National Security about his take on AI’s coming role.

Edelman says he’s lately seen AI-related worry (mixed with economic anxiety) ramp up in much the same way that cybersecurity worries began ratcheting up ten or fifteen years ago.

“Increasingly, issues like the implications of AI on the future of work are table stakes for understanding our economic future and our ability to deliver prosperity for Americans,” he says. “That’s why you’re seeing broad interest, not just from the Department of Labor, not just from the Council of Economic Advisors, but across the government and, in turn, across society.”

Before coming to MIT, Edelman worked in the White House from 2010-’17 under a number of titles, including Special Assistant to the President on Economic and Technology Policy. Edelman also organizes a related conference in the spring at MIT, the MIT AI Policy Congress.

At this week’s Future of Work conference, though, Edelman says he’ll be keeping his ears open for a number of issues that he thinks are not quite on everyone’s radar yet. But they may be soon.

For starters, Edelman says, mainstream conversations pay too little attention to the boundary between AI-controlled systems and human-controlled ones.

“We need to figure out when people are comfortable handing decisions over to robots, and when they’re not,” he says. “There is a yawning gap between the percentage of Americans who are willing to turn their lives over to autopilot on a plane, and the percentage of Americans willing to turn their lives over to autopilot on a Tesla.”

To be clear, Edelman is not saying this fear is unfounded. His point is that public discussion of self-driving or self-piloting systems is very either/or: either a self-driving system is seen as 100 percent reliable in all situations, or it’s seen as immature and never to be used.

Second, not enough attention has yet been devoted to the question of metrics we can put in place to understand when an AI system has earned public trust and when it has not.

AI systems are, Edelman points out, only as reliable as the data that created them. So questions about racial and socioeconomic bias in, for instance, AI hiring algorithms are entirely appropriate. “Claims about AI-driven hiring are careening ever more quickly forward,” Edelman says. “There seems to be a disconnect. I’m eager to know: Are we in a place where we need to pump the brakes on AI-influenced hiring? Or do we have some models in a technical or legal context that can give us the confidence we lack today that these systems won’t create a second source of bias?”

A third area of the conversation that Edelman says deserves more media and policy attention is the question of which industries AI threatens most. While there’s been discussion about jobs that have been put in the AI crosshairs, less discussed, he says, is the bias inherent in the question itself.

A 2017 study by Yale University and the University of Oxford’s Future of Humanity Institute surveyed AI experts for their predictions about, in part, the gravest threats AI poses to jobs and economic prosperity. Edelman points out that the industry professionals surveyed all tipped their hands a bit in the survey: The very last profession AI researchers said would ever be automated was—surprise, surprise—AI researchers.

“Everyone believes that their job will be the last job to be automated, because it’s too complex for machines to possibly master,” Edelman says.

“It’s time we make sure we’re appropriately challenging this consensus that the only sort of preparation we need to do is for the lowest-wage and lowest-skilled jobs,” he says. “Because it may well be that what we think of as good middle-income jobs, maybe even requiring some education, might be displaced or have major skills within them displaced.”

Last is the belief that AI’s effect on industries will be to eliminate jobs and only to eliminate jobs. In fact, Edelman says, the evidence suggests any such threats could be more nuanced.

AI may indeed eliminate some categories of jobs but may also spawn hybrid jobs that incorporate the new technology into an old format. As was the case with the rollout of electricity at the turn of the 20th century, new fields of study spring up too. Electrical engineers weren’t really needed before electricity became something more than a parlor curiosity, after all. Could AI engineering one day be a field unto its own? (With, surely, its own categories of jobs and academic fields of study and professional membership organizations?)

“We should be doing the hard and technical and often untechnical and unglamorous work of designing systems to earn our trust,” he says. “Humanity has been spectacularly unsuccessful in placing technological genies back in bottles. … We’re at the vanguard of a revolution in teaching technology how to play nice with humans. But that’s gonna be a lot of work. Because it’s got a lot to learn.”

Supercomputers Simulate Solar Flares to Help Physicists Understand Magnetic Reconnection

New supercomputer simulations have successfully modeled a mysterious process believed to produce some of the hottest and most dangerous solar flares—flares that can disrupt satellites and telecommunications networks, cause power outages, and otherwise wreak havoc on the grid. And what researchers have learned may also help physicists design more efficient nuclear fusion reactors.

In the past, solar physicists have had to get creative when trying to understand and predict flares and solar storms. It’s difficult, to put it mildly, to simulate the surface of the sun in a lab. Doing so would involve creating and then containing an extended region of dense plasma with extremely high temperatures (between thousands of degrees and one million degrees Celsius) as well as strong magnetic fields (of up to 100 teslas).

However, a team of researchers based in the United States and France developed a supercomputer simulation (originally run on Oak Ridge National Lab’s recently retired Titan machine) that successfully modeled a key part of a mysterious process that produces solar flares. The group presented its results last month at the annual meeting of the American Physical Society’s (APS) Plasma Physics division, in Fort Lauderdale, Fla.

Alphabet’s Makani Tests Wind Energy Kites in the North Sea

The idea is simple: Send kites or tethered drones hundreds of meters up in the sky to generate electricity from the persistent winds aloft. With such technologies, it might even be possible to produce wind energy around the clock. However, the engineering required to realize this vision is still very much a work in progress.

Dozens of companies and researchers devoted to developing technologies that produce wind power while adrift high in the sky gathered at a conference in Glasgow, Scotland last week. They presented studies, experiments, field tests, and simulations describing the efficiency and cost-effectiveness of various technologies collectively described as airborne wind energy (AWE).

In August, Alameda, Calif.-based Makani Technologies ran demonstration flights of its airborne wind turbines—which the company calls energy kites—in the North Sea, some 10 kilometers off the coast of Norway. According to Makani CEO Fort Felker, the North Sea tests consisted of a launch and “landing” test for the flyer followed by a flight test, in which the kite stayed aloft for an hour in “robust crosswind(s).” The flights were the first offshore tests of the company’s kite-and-buoy setup. The company has, however, been conducting onshore flights of various incarnations of its energy kites in California and Hawaii.

Nuclear Weapons Inspection: Encryption System Could Thwart Spies and Expose Hoaxes

A new nuclear weapons inspection technology could enhance inspectors’ ability to verify that a nuclear warhead has been dismantled without compromising state secrets behind the weapon’s design.

This new non-proliferation tool, its inventors argue, would greatly assist the often delicate dance of nuclear weapons inspectors—who want to know they haven’t been hoaxed but are also sensitive to a military’s fear that spies may have infiltrated their ranks.

While nuclear non-proliferation treaties have historically verified the dismantlement of weapons delivery systems like ICBMs and cruise missiles, there have in fact never been any verified dismantlements of nuclear warheads themselves (in part for the reasons described above).

Yet there are 13,000 nuclear warheads in the world, meaning the entire globe is still just a hair trigger away from apocalypse—even as we approach the thirtieth anniversary of the Berlin Wall’s collapse.

As UN Secretary-General Antonio Guterres told world leaders last month, “I worry that we are slipping back into bad habits that will once again hold the entire world hostage to the threat of nuclear annihilation.”

How, then, to verifiably dismantle a nuclear bomb?

Microsize Lens Pushes Photonics Closer to an On-Chip Future

Optical microcomputing, next-generation compact LiDAR units, and on-chip spectrometers all took a step closer to reality with the recent announcement of a new kind of optical lens.

The lens is not made of glass or plastic, however. Rather, this low-loss, on-chip lens is made from thin layers of specialized materials on top of a silicon wafer. These “metasurfaces” have shown much promise in recent years as a kind of new, microscale medium for containing, transmitting, and manipulating light.

Photonics at the macro-scale is more than 50 years old and has applications today in fields including telecommunications, medicine, aviation, and agriculture. However, shrinking all the elements of traditional photonics down to microscale—to match the density of signals and processing operations inside a traditional microchip—requires entirely new optical methods and materials.

A team of researchers at the University of Delaware, including Tingyi Gu, an assistant professor of electrical and computer engineering, recently published a paper in the journal Nature Communications that describes their effort to build a lens from a thin metasurface material on top of a silicon wafer.

Gu says that metasurfaces have typically been made from thin metal films with nanosized structures in them. These “plasmonic” metasurfaces offered the promise of, as a Nature Photonics paper from 2017 put it, “Ultrathin, versatile, integrated optical devices and high-speed optical information processing.”

The problem, Gu says, is that these “plasmonic” materials are not exactly transparent like windowpanes. Light traveling just fractions of a micrometer through them can suffer a signal loss of a few decibels to tens of dB.

“This makes it less practical for optical communications and signal processing,” she says.

Her group uses an alternate kind of metasurface made from etched dielectric materials atop silicon wafers. Making optical components out of dielectric metasurfaces, she says, could sidestep the signal loss problem. Her group’s paper notes that their lens introduces a signal loss of less than one dB.

Even a small improvement (and going from handfuls of dB down to fractions of a dB is more than small) would make a big difference, because a real-world photonics chip might one day have many such components in it. And the more lossy the photonics chip, the greater the amount of laser power needed to be pumped through the chip. More power means more heat and noise, which might ultimately limit the extent to which the chip could be miniaturized. But with her team’s dielectric metasurface lens, “We can make a device much smaller and more compact,” she says.
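The arithmetic behind that concern can be sketched directly: insertion losses expressed in decibels add across cascaded components, so the laser power needed at the chip input grows exponentially with component count. The per-component figures below are illustrative assumptions for comparison, not measured values from the paper.

```python
def required_power_factor(per_component_loss_db: float, n_components: int) -> float:
    """Extra input laser power needed to offset cumulative insertion loss.

    Decibel losses add across cascaded components, so the linear power
    penalty grows exponentially with the number of components.
    """
    total_loss_db = per_component_loss_db * n_components
    return 10 ** (total_loss_db / 10)

# Hypothetical chip with 20 cascaded optical elements:
lossy = required_power_factor(3.0, 20)      # plasmonic-style, ~3 dB each
low_loss = required_power_factor(0.5, 20)   # dielectric-style, <1 dB each
print(lossy, low_loss)  # 1000000.0 10.0
```

With these assumed numbers, the lossy chip would need a million times more input power, while the low-loss chip needs only ten times more, which is why per-component loss dominates how far a photonic chip can be scaled down.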

Her group’s lens is made from a configuration of gratings etched in the metasurface, following a wavy pattern of vertical lines that looks a bit like the Cisco company logo. Gu’s group was able to achieve some of the familiar properties of lenses, including converging beams with a measurable focal length (8 micrometers) and object and image distances (44 and 10.1 µm, respectively).
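Those numbers are mutually consistent: plugging the reported object and image distances into the classical thin-lens relation (assuming it applies to this metasurface lens) recovers roughly the reported focal length.

```python
# Thin-lens relation: 1/f = 1/d_o + 1/d_i
d_o = 44.0    # object distance, micrometers (reported)
d_i = 10.1    # image distance, micrometers (reported)

f = 1.0 / (1.0 / d_o + 1.0 / d_i)
print(round(f, 1))  # 8.2, close to the reported 8-micrometer focal length
```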

The group further used the device’s lensing properties to perform a kind of optical Fourier transform, a capability shared by classical, macroscopic lenses.

Gu says that next steps for the device include exploring new materials and working toward a platform for on-chip signal processing.

“We’re trying to see if we can come up with good designs to do tasks as complicated as what traditional electronic circuits can do,” she says. “These devices have the advantage that they can process signals at the speed of light. It doesn’t need logic signals going back and forth between transistors. … It’s going to be fast.”

Making the Ultimate Software Sandbox

Is it possible to process highly sensitive data (say, a person’s genome or their tax returns) on an untrusted, remote system without ever exposing that data to that system? Some of the biggest names in tech are now working together to figure out a way to do just that.

Last month, the Linux Foundation announced the Confidential Computing Consortium (CCC)—a cross-industry effort to develop and standardize the safeguarding of data even when it’s in use by a potentially untrusted system. Alibaba, Arm, Baidu, Google Cloud, IBM, Intel, Microsoft, Red Hat, Swisscom, and Tencent are all CCC founding members.

Encrypting data at rest and in transit, the Foundation said in a press statement, are familiar challenges for cloud computing providers and users today. But the new challenge the Consortium is taking up concerns allowing sensitive, encrypted data to be processed by a system that otherwise would have no access to that data.

They may want to take a look at an open-source project first launched in 2016 called Ryoan.

“In our model, not only is the code untrusted, the platform is also untrusted,” says Emmett Witchel, professor of computer science at the University of Texas at Austin.

Witchel and co-authors developed Ryoan as a piece of code that would use security features in Intel CPUs to effectively sandbox encrypted data—and allow computation on that data without requiring that either the software or the hardware (other than the CPU) be secure or trusted.

Witchel says a key inspiration for his group was a 1973 paper by Butler Lampson of Xerox’s Palo Alto Research Center. “He talked about how difficult it is to confine untrusted code,” Witchel says. “That means if you have code that wasn’t written by you, you don’t know the motivations of the people who wrote it, and you want to make sure that code isn’t stealing any secrets [from your data], it’s very, very difficult.”

Witchel says one of his graduate students at the time argued that confining data completely within a sandbox—in the face of potentially adversarial code and hardware—was practically impossible.

Witchel adds that he agreed with his student, up to a point. “But it’s not 1973. And people are doing different things with computers now,” he says. “They’re recognizing images. They’re processing genome data. These are very specific tasks that have properties that we can take advantage of.”

According to Witchel, Ryoan uses what’s called a “one-shot data model.” That means the program looks at a user’s sensitive data only once in passing—a scheme that might be applicable to rapid-fire video or image streams that run image recognition software.

It was this transitory, one-time nature of the data processing that simplified Lampson’s Confinement Problem and made it solvable, Witchel says.
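The one-shot discipline can be sketched in a few lines. This is a toy illustration of the idea, not Ryoan’s actual code: the processor examines each piece of sensitive input exactly once, returns a result, and deliberately retains no state between calls.

```python
class OneShotProcessor:
    """Toy sketch of a one-shot data model (an illustration, not Ryoan's code).

    Each piece of sensitive input is examined exactly once; a result is
    returned and no state is kept between calls, so later (possibly
    adversarial) code has nothing to exfiltrate.
    """

    def process(self, frame: bytes) -> dict:
        # Hypothetical stand-in for, e.g., image recognition on one frame.
        result = {"length": len(frame), "checksum": sum(frame) % 256}
        # Nothing from `frame` is stored on self: once this returns,
        # the sensitive input is unreachable from the processor.
        return result

p = OneShotProcessor()
out = p.process(b"sensitive pixels")
```

The key property is in the last comment: because the processor keeps no reference to the input, confinement only has to hold for the duration of a single call.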

On the other hand, you couldn’t use Ryoan if you were processing a sensitive image on a remote, untrusted system that was storing and processing—and then storing and further processing—your data. Such a process might be necessary if, say, you were running Photoshop remotely on a sensitive image.

Ryoan relies on an Intel hardware security feature called Software Guard Extensions (SGX), which allowed its creators to begin addressing the problem. However, SGX is only a first step, says Tyler Hunt, a computer science graduate student at the University of Texas at Austin and co-developer of Ryoan.

“SGX only allows you to have 128 megabytes of physical memory, so there’s some challenge in finding applications that are small enough,” Hunt says. “And genomes are very large, so if you actually wanted to process an entire genome, you’d need some larger hardware to do it.”
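One generic workaround for such memory ceilings, sketched below as an assumption rather than a description of how Ryoan itself works, is to stream a large input through the size-limited protected region in fixed-size chunks; `handle` here is a hypothetical per-chunk processing function.

```python
import io

def process_in_chunks(stream, handle, chunk_bytes=64 * 1024 * 1024):
    """Stream a large input through a size-limited protected region.

    `chunk_bytes` is sized to fit comfortably inside a small enclave
    (SGX's protected memory is on the order of 128 MB); `handle` is a
    hypothetical per-chunk processing function running inside it.
    """
    results = []
    while True:
        chunk = stream.read(chunk_bytes)
        if not chunk:
            break
        results.append(handle(chunk))
    return results

# Tiny demonstration: a 10-byte "genome" in 4-byte chunks.
chunks = process_in_chunks(io.BytesIO(b"ACGTACGTAC"), handle=len, chunk_bytes=4)
print(chunks)  # [4, 4, 2]
```

Chunking only helps, of course, for workloads whose per-chunk results can be combined afterward; whole-genome analyses that need random access to the full sequence would still want the larger hardware Hunt describes.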

Witchel and Hunt say there are other efforts now afoot to make “enclaves” like Ryoan (that is, isolated, trusted computing environments) in larger and more diverse systems than just a CPU.

Microsoft researchers last year proposed a software environment called Graviton, which would run trusted computing enclaves on GPUs. RISC-V architecture hardware systems may soon be adopting a trusted computing enclave standard called Keystone. And Arm—a CCC member—offers its own trusted computing environment called TrustZone.

Witchel says trusted enclaves like Ryoan and its descendants could be important as users become more savvy about the permissions they give for use of their personal data.

People today could be better educated, Witchel says, so they can understand and assert their rights. They need to know “that their data is out there and is being used and monetized. And that they should have some control and some assurance that their data is only being used for the purposes that they released it for. I want my genome data to tell me about my ancestry. I don’t want you to keep that genome data to use it to develop a drug that you then sell to me at an inflated price. That seems unfair.”

Drawing Humor From STEM’s Absurdist Extremes

The guy who draws those comic strips with stick figures is actually a very capable and talented artist. Physics equations are the surefire road to absurdist humor. It only takes the 1000 most common words to sell a million books. These are some of the contradictions one grows accustomed to in Randall Munroe’s world.

Munroe, who is the author of two best-selling books What If? and Thing Explainer and has gained Internet celebrity for his incisive XKCD comic strip, has a new book out this month called How To: Absurd Scientific Advice for Common Real-World Problems. In it, he offers hard-won advice in such essential matters as How to build a lava moat and How to catch a drone with sports equipment. (In the latter chapter, he enlists tennis star Serena Williams to knock a quadcopter out of the air using her powerhouse serves. It only took three tries.)

IEEE Spectrum spoke with Munroe in his fortress of solitude somewhere in Massachusetts. (This interview transcript has been edited.)

Spectrum: Boundaries seem important to your humor—finding them, exploring them, defying them.

Munroe: I was always the kid who when I played a car racing game, and I’d see cool mountains on the horizon, I’d want to go off the track to get closer to them. You of course run into an invisible wall almost immediately. Games have gotten more sophisticated about that over the years. But I really like any story or any kind of exploration where you get to discover that the world is much bigger than you thought. Some of my favorite (XKCD) comics have been ones where I’ve taken something familiar and gotten to extend it beyond the bounds of where you think it can go.

You were a physics major in college, right?

I had a physics major with a math and computer science minor.

Were you much of a doodler as a student?

That’s actually how I got started doing XKCD. A lot of the early strips I posted were just scans of things from my notebook. I’d post them to share with friends, because the notebooks were falling apart. But then they started getting passed around. So I said, “Well, if you like these things, I can draw more.”

You famously draw stick figures in your books and comic strips. But in your work, everything other than the stick figures are actually quite well rendered and carefully drawn—from spaceships to animals to cars and houses and various crazy inventions.

Teaching seems so difficult, and I’m very impressed by people who do it. But one thing for physics specifically, I feel like every physics professor would really gain from having a weekend course in drawing — or even just the basics. A course that taught you how to draw a cube, in perspective. I feel like a lot of my professors would have benefited from just that one specific thing.

What’s your pile of rejected ideas look like?

There’s lots of stuff where I decide it just doesn’t seem too interesting to me. Or where I don’t have a satisfying answer. But there was one example of something I cut from How To, which was How to dry out your phone if it gets wet. It’s hard to get a definitive answer on that. The most common suggestion is to put it in a container of rice. And that might be better than doing nothing. But it’s not actually that helpful. You’d do better setting it in a room with a fan blowing over it.

But it happened that one person I was hanging out with accidentally dropped his phone in a lake. And I jumped up and said, “A-ha! I just read everything on the internet about how to dry out your phone.” But I realized that I didn’t have a great, practical answer. And that’s OK if the practical answer is easy to find somewhere else. But in a situation where I might be the authority that people have read on this, and if I don’t have a good answer and don’t even have a good authority to point them to, it felt weird using it as an example to explore all kinds of physics.

You’ve pioneered a certain kind of scientific absurdist humor. This might be because you make it look so easy, but why do you think more people don’t do stuff like this?

I grew up reading The Far Side, which had lots of biology jokes. There’s the Ig Nobel Prizes. There’s TV shows like MythBusters and Penn and Teller. I grew up watching Bill Nye the Science Guy. I feel like it’s around. But I think one thing that might get in the way of it is people are worried about being taken seriously. So there are a lot of barriers to freely exploring ridiculous problems in an accessible way. I notice that’s a problem for women who do science communication; they get seen as less serious when they do. I feel like people cut me a lot of slack by assuming that I know what I’m talking about. And I have women friends who are scientists who try to do the same thing, and get more immediate responses like, “Oh, well clearly she doesn’t know what she’s talking about.” I think that kind of insecurity breeds a lot of defensiveness and a lot of reluctance to speak in normal terms about things—which makes it harder to speak with the public.

You put disclaimers in all your books never to try your crazy ideas at home. Nevertheless, have you had fans who did try them out?

The nice thing about the kinds of problems I tackle is they’re, for the most part, thought experiments that would have large practical barriers to trying them out—like altering the rotation of the Earth to get to your meetings faster. A lot of the time, I will steer clear of questions that are easy to try and dangerous to do. Often because I’ve found the MythBusters have already done it.

But I do have a few chapters with some practical advice. I have a section on how to take a selfie backlit against the moon, or even the sun if you have the right filter. It just takes a huge amount of coordination and organization. I took it to an extreme, too. In principle, if you [and someone holding a camera] were several miles apart, you could take a photo of yourself, probably on a mountaintop, in front of the disc of Jupiter. It'd involve traveling to find two mountaintops that are aligned just right. But I think someone could do it. I don't know if anyone has done it.
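The geometry behind that stunt can be roughed out with a small-angle calculation. This is a sketch using assumed figures, not numbers from the book: a 1.8-meter subject, the moon's roughly half-degree disc, and Jupiter at about 50 arcseconds near opposition.

```python
import math

def standoff_distance(subject_height_m, disc_angular_diameter_deg):
    """Distance the camera must be from the subject so the subject
    fits inside a disc of the given angular diameter (small-angle geometry)."""
    theta_rad = math.radians(disc_angular_diameter_deg)
    return subject_height_m / theta_rad

# Assumed figures: a 1.8 m person; the moon at ~0.5 degrees;
# Jupiter at its largest, ~50 arcseconds (50/3600 of a degree).
moon = standoff_distance(1.8, 0.5)
jupiter = standoff_distance(1.8, 50 / 3600)
print(f"Moon: ~{moon:.0f} m, Jupiter: ~{jupiter / 1000:.1f} km")
```

The Jupiter figure comes out to a handful of kilometers, consistent with the "several miles" of mountaintop-to-mountaintop separation described above.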

We’ll notify the IEEE membership.

I did include the Jupiter and Venus examples (in How To) because I do hope someone tries it and posts their results online.

Publishers have a famous aversion to using equations anywhere in non-academic books. You flout that rule all the time. Do you think there's a good reason for that rule to be in place?

People have a lot of insecurities and frustrations with math. Everyone I talk to who didn't do a STEM degree of some kind has a story in which they realized they were not a math person. But I don't think there really are math people and not-math people. It's more like music. There is some amount of talent that some people have, sure. But you just have to practice for a long time to get better at it. The secret is that it's just tedious. You just do it enough, and it gets easier.

But that’s why I like using math. It lets you get answers to questions that you couldn’t have [arrived at] any other way. You can write down a bunch of things you know and follow these steps on paper. And get an answer that might surprise you and tell you something new. And that’s really cool. I like showing people that aspect of math. And I hope that showing people why I’m doing some equation is helpful. But maybe it does scare people off. So I don’t know the answer to that.

Monty Python has that skit about mosquito hunters who use bazookas and surface-to-air missiles to catch their prey. Like the Pythons, you have a tremendous talent for extracting humor from the dogged pursuit of whatever objective, no matter the apparent cost. Are you obsessive or focused like this in your everyday life?

Talking about how I work, one of the easiest questions to ask is: what happens if you extend something in this direction or that? Looking at those extremes can give you a better idea of how the thing behaves overall. And often those are also the most fun and vivid examples to think about. Physics is full of atoms and black holes, partly because they represent two extremes of massiveness. And everything else falls between.

Graphene Detectors Bring Terahertz Astronomy to Light

Post Syndicated from Mark Anderson original

A newly developed graphene-based telescope detector may usher in a new wave of astronomical observations in a band of radiation between microwaves and infrared light. Applications including medical imaging, remote sensing, and manufacturing could ultimately be beneficiaries of this detector, too.

Microwave and radio wave radiation oscillate at frequencies measured in gigahertz or megahertz—slow enough to be manipulated and electronically processed in conventional circuits and computer systems. Light in the infrared range (with frequencies beginning around 20 THz) can be manipulated by traditional optics and imaged by conventional CCDs.

But the no-man’s land between microwaves and infrared (known as the “terahertz gap”) has been a challenging although not entirely impossible band in which astronomers could observe the universe.

To observe terahertz waves from astronomical sources first requires getting up above the atmosphere or at least up to altitudes where the Earth’s atmosphere hasn’t completely quenched the signal. The state-of-the-art in THz astronomy today is conducted with superconducting detectors, says Samuel Lara-Avila, associate research professor in the Department of Microtechnology and Nanoscience at Chalmers University of Technology in Sweden.

Observatories like the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile and the South Pole Telescope might use such detectors combined with local oscillators pumping out reference signals at frequencies very close to the target signal the astronomers are trying to detect. If a telescope is looking for radiation at 1 THz, adding a local oscillator at 1.001 THz would produce a combined signal with beat frequencies in the 1 GHz (0.001 THz) range, for instance. And gigahertz signals represent a stream of data that won’t overwhelm a computer’s ability to track it.
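The arithmetic behind that heterodyne trick is simple enough to restate in a few lines. This sketch just encodes the article's own example; the function name is mine:

```python
def beat_frequency_ghz(signal_thz, lo_thz):
    """Difference ("beat") frequency produced when a sky signal is mixed
    with a nearby local oscillator, converted from THz to GHz."""
    return abs(signal_thz - lo_thz) * 1000.0  # 1 THz = 1000 GHz

# The article's example: a 1 THz sky signal mixed with a 1.001 THz LO
# yields a ~1 GHz beat, slow enough for conventional electronics to digitize.
print(beat_frequency_ghz(1.0, 1.001))
```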

Sounds simple. But here’s the rub: According to Lara-Avila, superconducting detectors require comparatively powerful local oscillators—ones that operate in the neighborhood of a microwatt of power. (That may not sound like much, but the detectors operate at cryogenic temperatures. So a little bit of local oscillator power goes a long way.)

By contrast, the new graphene detector would require less than a nanowatt of local oscillator power, or three orders of magnitude less. The upshot: A superconducting detector in this scenario might generate a single pixel of resolution on the sky, whereas the new graphene technology could enable detectors with as many as 1000 pixels.
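A back-of-envelope way to see where the 1,000-pixel figure comes from: with a fixed local-oscillator power budget, the pixel count scales inversely with the power each detector element consumes. The one-microwatt budget below is an assumption for illustration, not a figure from the paper:

```python
# Assumed LO power budget the cryostat can spare (~1 microwatt).
lo_budget_w = 1e-6

# Per-pixel LO power, per the article: ~1 uW for a superconducting
# detector, under ~1 nW for the graphene detector.
superconducting_pixels = lo_budget_w / 1e-6  # ~1 pixel
graphene_pixels = lo_budget_w / 1e-9         # ~1000 pixels

print(round(superconducting_pixels), round(graphene_pixels))
```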

“It’s possible to dream about making [THz] detector arrays,” Lara-Avila says.

Probably the most famous observation in THz or near-THz astronomy is the Event Horizon Telescope, which earlier this month won the Breakthrough Prize in Fundamental Physics. Some of the frequencies it operated at, according to Wikipedia, were between 0.23 and 0.45 THz.

The graphene detector pioneered by Lara-Avila and colleagues in Sweden, Finland, and the UK is described in a recent issue of the journal Nature Astronomy.

The group doped its graphene by adding polymer molecules (like good old 2,3,5,6-tetrafluoro-7,7,8,8-tetracyanoquinodimethane, or F4-TCNQ) atop the pure carbon sheets. Tuned just right, these dopants can bring the ensemble to a delicate quantum balance state (the so-called "Dirac point") in which the system is highly sensitive to a broad range of electromagnetic frequencies from 0.09 to 0.7 THz and, they speculate, potentially higher frequencies still.

All of which adds up to a potential THz detector that, the researchers say, could represent a new standard for THz astronomy. Yet astronomical applications often represent just the first wave of a technology that labs and companies then spin off into more down-to-earth uses. The CCD detector powering the cameras in your cellphone originated in no small part from the work of engineers in the 1970s and '80s developing sensitive CCDs whose first applications were in astronomy.

Terahertz technologies for medical applications, remote sensing, and manufacturing are already works in progress. This latest graphene detector could be a next-gen development in these or other as yet unanticipated applications.

At this point, says Lara-Avila, his group’s graphene-based detector version 1.0 is still a sensitive and refined piece of kit. It won’t directly beget THz technology that would find its way into consumers’ pockets. More likely, he says, is that this detector could be lofted into space for next-generation THz orbital telescopes.

“It’s like the saying that you shouldn’t shoot a mosquito with a cannon,” Lara-Avila says. “In this case, the graphene detector is a cannon. We need a range and a target for that.”

Green Data: The Next Step to Zero-Emissions Data Centers

Post Syndicated from Mark Anderson original

Data centers consume just two to three percent of the planet’s total electricity usage. So reducing data centers’ climate footprint may not seem, at first blush, to be a high priority as world leaders gather in New York next week to consider practical climate change solutions at the UN Climate Action Summit.

However, there are at least two reasons why data centers will likely play a key role in any attempt to curb global emissions. First, as cloud computing becomes more energy-efficient and increasingly relies on renewable sources, other sectors such as manufacturing, transportation, and buildings could turn to green data centers to reduce their own emissions. For example, a car manufacturer might outsource all of its in-house computing to zero-emission data centers.

Even without such partnerships, though, data centers will likely play an important part in the climate's future. The rise of AI, machine learning, big data, and the Internet of Things means that data centers' global electricity consumption will continue to increase. By one estimate, consumption could jump to as much as 13 percent of the world's total electricity demand by 2030.

For these reasons, says Johan Falk, senior innovation fellow at the Stockholm Resilience Center in Sweden, data centers will have outsized importance in climate change mitigation efforts. And the more progress society makes in the near term, the sooner the benefits will begin to multiply.

A Two-Track Algorithm To Detect Deepfake Images

Post Syndicated from Mark Anderson original

A neural-network-based tool can spot image manipulation at the level of single pixels

Researchers have demonstrated a new algorithm for detecting so-called deepfake images—those altered imperceptibly by AI systems, potentially for nefarious purposes. Initial tests of the algorithm picked out phony from undoctored images down to the individual pixel level with between 71 and 95 percent accuracy, depending on the sample data set used. The algorithm has not yet been expanded to include the detection of deepfake videos.

Deepfakes “are images or videos that have been doctored—either you insert something into it or remove something out of it—so it changes the meaning of the picture,” says Amit Roy-Chowdhury, professor of electrical and computer engineering at the University of California, Riverside. The challenge arises because it’s done “in a way so that to the human eye it’s not obvious immediately that it has been manipulated.”

In rapidly developing situations, such as a humanitarian crisis, a business's product launch, or an election campaign, deepfake videos and images could alter how events play out. Imagine a doctored image in which a political candidate was supposedly committing a violent crime, or a doctored video in which a CEO supposedly confesses to concealing safety problems with her company's signature product line.

Roy-Chowdhury is one of five authors of the deepfake-detecting algorithm, described in a recent issue of IEEE Transactions on Image Processing. He says such detection algorithms could be a powerful tool to fight this new menace of the social media age. But he warns that people need to be careful not to become over-dependent on these algorithms, either. An overly trusted detection algorithm that can be tricked could be weaponized by those seeking to spread false information. A deepfake crafted to exploit a trusted algorithm's particular weaknesses could effectively result in the algorithm blessing the fake with a certificate of authenticity in the minds of experts, journalists, and the public, rendering it even more damaging.

“I think we have to be careful in anything that has to do with AI and machine learning today,” Roy-Chowdhury says. “We need to understand that the results these systems give are probabilistic. And very often the probabilities are not in the range of 0.98 or 0.99. They’re much lower than that. We should not accept them on blind faith. These are hard problems.”

In that sense, he says, deepfakes are really just a new frontier in cybersecurity. And cybersecurity is a perpetual arms race with bad guys and good guys each making advances in often incremental steps.

Roy-Chowdhury says that with their latest work his group has harnessed a set of concepts that already exist separately in the literature, but which they have combined in a novel and potentially powerful way.

One component of the algorithm is a variety of so-called "recurrent neural network," which splits the image in question into small patches and looks at those patches pixel by pixel. The neural network has been trained by letting it examine thousands of both deepfake and genuine images, so it has learned some of the qualities that make fakes stand out at the single-pixel level.

Roy-Chowdhury says the boundaries around the doctored portion of an image are often what contain telltale signs of manipulation. "If an object is inserted, it is often the boundary regions that have certain characteristics," he says. "So the person who's tampering with the image will probably try to do it so that the boundary is very smooth. What we found out is the tampered images were often smoother than natural images. Because the person who did the manipulation was going out of his way to make it very smooth."

Another portion of the algorithm, on a parallel track to the part looking at single pixels, passes the whole image through a series of encoding filters—almost as if it were performing an image compression, as when you click the “compress image” box when saving a TIFF or a JPEG. These filters, in a mathematical sense, enable the algorithm to consider the entire image at larger, more holistic levels.

The algorithm then compares the output of the pixel-by-pixel and higher-level encoding filter analyses. When these parallel analyses trigger red flags over the same region of an image, it is then tagged as a possible deepfake.

For example, say that a stock image of a songbird has been pasted onto a picture of an empty tree branch. The pixel-by-pixel algorithm in this case might flag the pixels around the bird’s claws as problematic, while the encoder algorithm might spot patterns in the larger image (noticing, perhaps, other boundary problems or anomalies at the larger-scale level). So long as both of these neural nets flagged the same region of the image around the bird, then Roy-Chowdhury’s group’s algorithm would categorize the bird-and-branch photo as a possible deepfake.
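The fusion step described above amounts to intersecting the two tracks' suspicion maps. The sketch below is a toy illustration of that idea, not the published method; the score maps, threshold, and function name are all stand-ins:

```python
import numpy as np

def fuse_detections(pixel_scores, encoder_scores, threshold=0.5):
    """Toy fusion in the spirit of the two-track approach: a pixel is
    flagged as possibly doctored only when BOTH the patch/pixel-level
    track and the whole-image encoder track exceed the threshold there.
    (The real networks produce far richer outputs; these arrays are
    stand-ins for each track's per-pixel suspicion map.)"""
    return np.logical_and(pixel_scores > threshold, encoder_scores > threshold)

# Hypothetical 4x4 suspicion maps for the bird-and-branch example:
# the pixel track flags a tight region (the claws); the encoder track
# flags a broader region. Only the overlap survives fusion.
pixel_map = np.zeros((4, 4)); pixel_map[1:3, 1:3] = 0.9
encoder_map = np.zeros((4, 4)); encoder_map[1:4, 1:4] = 0.8
mask = fuse_detections(pixel_map, encoder_map)
print(int(mask.sum()))  # number of pixels flagged by both tracks
```

Requiring agreement between the two tracks is what keeps either one's false alarms from tagging an image on its own.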

Roy-Chowdhury says that the algorithm now needs to be expanded to handle video. Such a next-level algorithm, he says, would potentially include how the image evolves frame-by-frame and whether any detectable patterns can be discerned from that evolution in time.

Given the urgency of deepfake detection, as hostile actors around the world increasingly seek to manipulate political events using false information, Roy-Chowdhury encourages researchers to contact his group for code or pointers toward further developing this algorithm for deepfake detection in the wild.

Order for First All-Electric Passenger Airplane Placed by Massachusetts Carrier

Post Syndicated from Mark Anderson original

Cape Air recently ordered the Eviation “Alice” battery-powered, 9-seat regional aircraft—pointing toward aviation’s e-future

Commercial electric aviation took its first step forward last month when a Massachusetts-based regional airline announced the first order for an all-electric passenger airplane. The "Alice," a three-engine, battery-powered airplane with a 1,000-kilometer range on a single charge, is slated to be delivered to Cape Air for passenger flights in 2022.

The Alice, manufactured by Kadima, Israel-based startup company Eviation, has not yet been certified by the U.S. Federal Aviation Administration. However, the company’s e-airplane “could be certified right now to fly,” insists Lior Zivan, Eviation’s CTO. “It does not need a major rewrite of the rules to get this in the air,” he says.

Zivan says the company is “anticipating full certification by 2022.”

The Alice, Zivan says, will be powered by a 900-kilowatt-hour (kWh) lithium-ion battery manufactured by South Korean battery maker Kokam. (For comparison, the Tesla Model 3 electric car uses a 50- to 75-kWh battery pack, according to a 2017 investor call from company CEO Elon Musk.)

Cape Air is a Northeast regional airline that flies to Cape Cod, Martha’s Vineyard, Nantucket, and numerous other vacation and regional destinations. According to Trish Lorino, Cape Air vice president of marketing and public relations, the company’s historic order of Eviation’s Alice aircraft “makes sense for us because we are a short-haul carrier.” Lorino notes that, “For 30 years, we have specialized in serving short-haul routes, particularly to niche and island destinations.”

According to Cape Air’s website, the carrier currently operates 88 Cessna 402s (which seat 6 to 10 passengers) and 4 Islander planes (9-seat capacity) made by the British company Britten-Norman. The 9-seater Alice e-aircraft thus fits within the Cape Air fleet’s general size and passenger capacity.

Lorino says that although the carrier has not yet decided which routes will feature the Alice, company officials currently anticipate that e-flights will cover routes that keep the plane close to the company’s Massachusetts headquarters. “Short-haul routes ‘in our backyard’ such as Nantucket, Martha’s Vineyard, and Provincetown would be the likely routes,” she says.

Eviation CEO Omer Bar-Yohay showcased the Alice at the Paris Air Show last month, giving an informal tour and a 30-minute talk.

Bar-Yohay's remarks at the show highlighted the differences between designing and engineering an all-electric airplane and a conventional, petroleum-fueled one. As he pointed out, the Alice has a maximum takeoff weight of 6,350 kilograms (14,000 pounds), but 3,700 kg of that is the battery. (And of course there is no fuel burned, so its takeoff weight is more or less its landing weight.)
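A quick back-of-envelope using the figures quoted in this article (takeoff weight, battery mass, pack energy, and range) shows just how battery-dominated the design is. The derived numbers are rough: they ignore reserves, climb, and payload variation.

```python
# Figures from the article: 6,350 kg maximum takeoff weight, of which
# 3,700 kg is the 900 kWh battery, good for a 1,000 km range.
mtow_kg, battery_kg = 6350, 3700
battery_kwh, range_km = 900, 1000

battery_fraction = battery_kg / mtow_kg             # ~58% of takeoff weight
specific_energy = battery_kwh * 1000 / battery_kg   # ~243 Wh/kg at pack level
energy_per_km = battery_kwh / range_km              # ~0.9 kWh per km flown

print(f"{battery_fraction:.0%}, {specific_energy:.0f} Wh/kg, {energy_per_km:.1f} kWh/km")
```

Well over half the airplane's takeoff weight is battery, which is why the weight trades Bar-Yohay described differ so sharply from those of a fuel-burning design.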

Each of the Alice’s three motors, according to Zivan, has one moving part. “A similar [petroleum-fueled] reciprocating engine has about 10: six pistons, a crankshaft, oil pump, and a two-shaft gearbox,” Zivan says. “Obviously, electric propulsion has a major advantage in both reliability and maintenance.”

There are redundant systems in the Alice, Zivan says, in both the propulsion and the battery assembly. The e-aircraft’s three engines (two “pusher” motors mounted at the rear ends of the two wingtips and another “pusher” motor mounted at the rear of the plane) have, he says, “mostly dual and for some components triple [redundancy].”

As for the electrical system, Zivan says, "The battery assembly is redundant on many levels, starting at the parallelism of the cells and ending at the number of in-series cell branches. The battery is designed in such a way that any malfunction or failure will result in a minimal reduction in capacity, if any."

Because the Alice doesn't burn any fuel in flight and runs on comparatively cheap electricity, the cost of operating the plane is expected to be lower than that of its petroleum-fueled counterparts. The noise emitted by a plane with no internal combustion engines is also lower; this is especially true for the Alice, given its ability (unique to e-aircraft) to vary its propeller speeds to compensate for crosswinds and to lower cabin noise.

As an early standard-bearer in electric passenger flight, Cape Air says its decision to purchase Alice (the number of electric aircraft that will join its fleet has not been finalized) was also partly motivated by the company’s “deep sense of social responsibility,” Lorino says. (The company’s headquarters is 100-percent solar powered, she says, and the company is now hoping to use sustainable energy sources for charging its fleet of e-airplanes.)

“Our hope is that electric-powered flight is a reality in the next decade and that there is adoption from the public to view this as a viable, natural form of transportation,” she says.