Tag Archives: computing

Q&A: Sourcegraph’s Universal Code Search Tool

Post Syndicated from Rina Diane Caballar original https://spectrum.ieee.org/tech-talk/computing/software/sourcegraph-universal-code-search-tool

In software development, code search is a way to better navigate and understand code. But it’s an often overlooked technique, with development tools and coding environments offering clunky and limited search functionalities.

Tech startup Sourcegraph aims to change that with a universal code search tool of the same name, which makes searching code as seamless as running a Google search on the web. To achieve that efficiency, Sourcegraph models code and its dependencies as a graph and performs queries on that graph in real time.
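
As a rough illustration of the graph idea (this is not Sourcegraph’s implementation, and the symbol names are made up), code can be modeled as a dependency graph, and a query like “what depends on this function?” becomes a graph traversal:

```python
from collections import deque

# Toy dependency graph: each symbol maps to the symbols it depends on.
deps = {
    "main":          {"load_settings", "run_server"},
    "load_settings": {"parse_config"},
    "run_server":    {"parse_config", "open_socket"},
}

def dependents(graph, target):
    """Return every symbol that directly or indirectly depends on `target`."""
    reverse = {}
    for src, dsts in graph.items():
        for dst in dsts:
            reverse.setdefault(dst, set()).add(src)
    seen, queue = set(), deque([target])
    while queue:
        for caller in reverse.get(queue.popleft(), ()):
            if caller not in seen:
                seen.add(caller)
                queue.append(caller)
    return seen

print(dependents(deps, "parse_config"))   # {'load_settings', 'run_server', 'main'} (unordered)
```

A real code-search engine would build such a graph from parsed source across many repositories and keep it up to date as the code changes.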

Terabits-Per-Second Data Rates Achieved at Short Range

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/computing/networks/terabit-second

Using the same kind of techniques that allow DSL to transmit high-speed Internet over regular phone lines, scientists have transmitted signals at 10 terabits per second or more over short distances, significantly faster than other telecommunications technologies, a new study finds.

Digital subscriber line (DSL) modems delivered the first taste of high-speed Internet access to many users. They make use of the fact that existing regular telephone lines are capable of handling a much greater bandwidth than is needed just for voice. DSL systems leverage that extra bandwidth to send multiple signals in parallel across many frequencies.

Using megahertz frequencies, current DSL technologies can achieve downstream transmission rates of up to 100 megabits per second at a range of 500 meters, and more than 1 gigabit per second at shorter distances. (DSL signal quality often decreases over distance because of the limitations of phone lines; telephone companies can boost voice signals with small amplifiers called loading coils, but these do not work for DSL signals.)
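
A back-of-the-envelope model shows how splitting the line into many parallel frequency bins adds up. The bin width below is the 4.3125-kilohertz tone spacing commonly used by DSL standards, but the bin count and signal-to-noise figures are illustrative assumptions, not measurements:

```python
import math

BIN_HZ = 4312.5   # DMT subcarrier spacing used by ADSL/VDSL-family standards

def aggregate_rate_mbps(n_bins, snr_db):
    """Sum the Shannon capacity of every subcarrier, assuming a flat SNR."""
    snr = 10 ** (snr_db / 10)
    return n_bins * BIN_HZ * math.log2(1 + snr) / 1e6

print(round(aggregate_rate_mbps(4096, 30)))   # ~17.7 MHz of spectrum at 30 dB -> ~176 Mb/s
print(round(aggregate_rate_mbps(4096, 10)))   # same spectrum at long range (10 dB) -> ~61 Mb/s
```

Reaching terabits per second means pushing the same multicarrier idea across a far wider swath of spectrum, which is only practical over short links.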

Programmable Material Could Speed Production of Photonic Integrated Circuits

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/computing/hardware/programmable-pics

Reprogrammable photonic circuits based on a novel programmable material might speed the rate at which engineers can develop working photonic devices, researchers say.

Electronic integrated circuits (ICs) are nowadays key to many technologies, but their light-based counterparts, photonic integrated circuits (PICs), may offer many advantages, such as lower energy consumption and faster operation. However, current fabrication methods for PICs experience a great deal of variability, such that many of the resulting devices are slightly off base from the desired specifications, resulting in limited yields.

Show The World You Can Write A Cool Program Inside A Single Tweet

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/tech-talk/computing/software/show-the-world-you-can-write-a-cool-program-inside-a-single-tweet

Want to give your coding chops a public workout? Then prove what you can do with the BBC Micro Bot. Billed as the world’s first “8-bit cloud,” and launched on 11 February, the BBC Micro Bot is a Twitter account that waits for people to tweet at it. Then the bot takes the tweet, runs it through an emulator of the classic 1980s BBC Microcomputer running Basic, and tweets back an animated gif of three seconds of the emulator’s output. It might sound like that couldn’t amount to much, but folks have been using it to demonstrate some amazing feats of programming, most notably Eben Upton, creator of the Raspberry Pi.

“The Bot’s [output] posts received over 10 million impressions in the first few weeks, and it’s running around 1000 Basic programs per week,” said the account’s creator, Dominic Pajak, in an email interview with IEEE Spectrum.

Upton, for example, performed a coding tour de force with an implementation of Conway’s Game of Life, complete with a so-called Gosper Gun, all running fast enough to see the Gun spit out glider patterns in real time. (For those not familiar with Conway’s Game of Life, it’s a set of simple rules for cellular automata that exist on a flat grid. Cells are turned on and off based on the state of neighboring cells according to those rules. Particularly interesting patterns that emerge from the rules have been given all sorts of names.)

Upton did this by writing 150 bytes of data and machine code for the BBC Microcomputer’s original CPU, the 6502, which the emulator behind the BBC Micro Bot is comprehensive enough to handle. He then converted this binary data into tweetable text using Base64 encoding, and wrapped that data with a small Basic program that decoded it and launched the machine code. Since then, people have been using even more elaborate encoding schemes to pack even more in. 
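
The packing arithmetic works out neatly: Base64 turns every 3 bytes into 4 text characters, so 150 bytes of machine code become 200 characters, leaving roughly 80 characters of a 280-character tweet for the decoder stub. A minimal sketch of the round trip (the payload bytes here are placeholders, not Upton’s actual program):

```python
import base64

machine_code = bytes(range(150))              # placeholder payload, 150 bytes like Upton's

encoded = base64.b64encode(machine_code).decode("ascii")
print(len(encoded))                           # 200 characters: Base64 costs 4 chars per 3 bytes

# In the tweet, `encoded` is wrapped in a short BASIC stub that decodes it back
# into memory and jumps to it; the encoding itself is lossless.
assert base64.b64decode(encoded) == machine_code
```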

Pajak, who is the vice-president of business development for Arduino, created the BBC Micro Bot because he is a self-described fan of computer history and Twitter. “It struck me that I could combine the two.” He chose the BBC Micro as his target because “growing up in the United Kingdom during the 1980s, I learnt to program the BBC Micro at school. I certainly owe my career to that early start.” Pajak adds that, “There are technical reasons too. BBC Basic was largely developed by Sophie Wilson (who went on to create the Arm architecture) and was by far the best Basic implementation of the era, with some very nice features allowing for ‘minified’ code.”

Pajak explains that the bot is written in JavaScript for the Node.js runtime environment, and acts as a front end to Matt Godbolt’s JSbeeb emulation of the BBC Micro. When the bot spots a tweet intended for it, “it does a little bit of filtering and then injects the text into the emulated BBC Micro’s keyboard buffer. The bot uses ffmpeg to create a 3-second video after 30 seconds of emulation time.” Originally the bot was running on a Raspberry Pi 4, but he’s since moved it to Amazon Web Services.

Pajak has been pleasantly surprised by the response: “The fact that BBC BASIC is being used for the first time by people across the world via Twitter is definitely a surprise, and it’s great to see people discovering and having fun with it. There were quite a lot of slogans and memes being generated by users in Latin America recently. This I didn’t foresee for sure.”

The level of sophistication of the programs has risen sharply, from simple Basic programs through Upton’s Game of Life implementation and beyond. “The barriers keep getting pushed. Every now and then I have to do a double take: Can this really be done with 280 characters of code?” Pajak points to Katie Anderson’s tongue-in-cheek encoding of the Windows 3.1 logo, and the replication of a classic bouncing ball demo by Paul Malin—game giant Activision’s technical director—which, Pajak says, uses “a special encoding to squeeze 361 ASCII characters of code into a 280 Unicode character tweet.”

If you’re interested in trying to write a program for the Bot, there are a host of books and other coding advice about the BBC Micro available online, with Pajak hosting a starting set of pointers and a text encoder on his website, www.dompajak.com.

As for the future and other computers, Pajak says he’s given some tips to people who want to build similar bots for the Apple II and Commodore computers. For himself, he’s contemplating finding a way to execute the tweets on a physical BBC Micro, saying “I already have my BBC Micro connected to the Internet using an Arduino MKR1010…”

Here’s a Blueprint for a Practical Quantum Computer

Post Syndicated from Richard Versluis original https://spectrum.ieee.org/computing/hardware/heres-a-blueprint-for-a-practical-quantum-computer

The classic Rubik’s Cube has 43,252,003,274,489,856,000 different states. You might well wonder how people are able to take a scrambled cube and bring it back to its original configuration, with just one color showing on each side. Some people are even able to do this blindfolded after viewing the scrambled cube once. Such feats are possible because there’s a basic set of rules that always allow someone to restore the cube to its original state in 20 moves or less.

Controlling a quantum computer is a lot like solving a Rubik’s Cube blindfolded: The initial state is well known, and there is a limited set of basic elements (qubits) that can be manipulated by a simple set of rules—rotations of the vector that represents the quantum state. But observing the system during those manipulations comes with a severe penalty: If you take a look too soon, the computation will fail. That’s because you are allowed to view only the machine’s final state.

The power of a quantum computer lies in the fact that the system can be put in a combination of a very large number of states. Sometimes this fact is used to argue that it will be impossible to build or control a quantum computer: The gist of the argument is that the number of parameters needed to describe its state would simply be too high. Yes, it will be quite an engineering challenge to control a quantum computer and to make sure that its state will not be affected by various sources of error. However, the difficulty does not lie in its complex quantum state but in making sure that the basic set of control signals do what they should do and that the qubits behave as you expect them to.

If engineers can figure out how to do that, quantum computers could one day solve problems that are beyond the reach of classical computers. Quantum computers might be able to break codes that were thought to be unbreakable. And they could contribute to the discovery of new drugs, improve machine-learning systems, solve fiendishly complex logistics problems, and so on.

The expectations are indeed high, and tech companies and governments alike are betting on quantum computers to the tune of billions of dollars. But it’s still a gamble, because the same quantum-mechanical effects that promise so much power also cause these machines to be very sensitive and difficult to control.

Must it always be so? The main difference between a classical supercomputer and a quantum computer is that the latter makes use of certain quantum mechanical effects to manipulate data in a way that defies intuition. Here I will briefly touch on just some of these effects. But that description should be enough to help you understand the engineering hurdles—and some possible strategies for overcoming them.

Whereas ordinary classical computers manipulate bits (binary digits), each of which must be either 0 or 1, quantum computers operate on quantum bits, or qubits. Unlike classical bits, qubits can take advantage of a quantum mechanical effect called superposition, allowing a qubit to be in a state where it has a certain amount of zero-ness to it and a certain amount of one-ness to it. The coefficients that describe how much one-ness and how much zero-ness a qubit has are complex numbers, meaning that they have both real and imaginary parts.

In a machine with multiple qubits, you can create those qubits in a very special way, such that the state of one qubit cannot be described independently of the state of the others. This phenomenon is called entanglement. The states that are possible for multiple entangled qubits are more complicated than those for a single qubit.

While two classical bits can be set only to 00, 01, 10, or 11, two entangled qubits can be put into a superposition of these four fundamental states. That is, the entangled pair of qubits can have a certain amount of 00-ness, a certain amount of 01-ness, a certain amount of 10-ness, and a certain amount of 11-ness. Three entangled qubits can be in a superposition of eight fundamental states. And n qubits can be in a superposition of 2^n states. When you perform operations on these n entangled qubits, it’s as though you were operating on 2^n bits of information at the same time.
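
A small state-vector sketch makes the exponential growth concrete (this is a classical simulation for illustration, not how a quantum machine stores its state):

```python
import numpy as np

# One qubit: complex amplitudes (alpha, beta) with |alpha|^2 + |beta|^2 = 1.
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
qubit = np.array([alpha, beta])

# n qubits: a single joint state vector with 2**n complex amplitudes.
n = 3
state = np.zeros(2**n, dtype=complex)
state[0b000] = 1.0                            # start in |000>

# An entangled pair, (|00> + |11>)/sqrt(2): it cannot be factored into two
# independent single-qubit states, which is what entanglement means here.
bell = np.zeros(4, dtype=complex)
bell[0b00] = bell[0b11] = 1 / np.sqrt(2)

print(state.size)                             # 8 amplitudes for 3 qubits; 2**n grows exponentially
```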

The operations you do on a qubit are akin to the rotations done to a Rubik’s Cube. A big difference is that the quantum rotations are never perfect. Because of certain limitations in the quality of the control signals and the sensitivity of the qubits, an operation intended to rotate a qubit by 90 degrees may end up rotating it by 90.1 degrees or by 89.9 degrees, say. Such errors might seem small but they quickly add up, resulting in an output that is completely incorrect.

Another source of error is decoherence: Left by themselves, the qubits will gradually lose the information they contain and also lose their entanglement. This happens because the qubits interact with their environment to some degree, even though the physical substrate used to store them has been engineered to keep them isolated. You can compensate for the effects of control inaccuracy and decoherence using what’s known as quantum error correction, but doing so comes at great cost in terms of the number of physical qubits required and the amount of processing that needs to be done with them.

Once these technical challenges are overcome, quantum computers will be valuable for certain special kinds of calculations. After executing a quantum algorithm, the machine will measure its final state. This measurement, in theory, will yield with high probability the solution to a mathematical problem that a classical computer could not solve in a reasonable period of time.

So how do you begin designing a quantum computer? In engineering, it’s good practice to break down the main function of a machine into groups containing subfunctions that are similar in nature or required performance. These functional groups then can be more easily mapped onto hardware. My colleagues and I at QuTech in the Netherlands have found that the functions needed for a quantum computer can naturally be divided into five such groups, conceptually represented by five layers of control. Researchers at IBM, Google, Intel, and elsewhere are following a similar strategy, although other approaches to building a quantum computer are also possible.

Let me describe that five-layer cake, starting at the top, the highest level of abstraction from the nitty-gritty details of what’s going on deep inside the hardware.

At the top of the pile is the application layer, which is not part of the quantum computer itself but is nevertheless a key part of the overall system. It represents all that’s needed to compose the relevant algorithms: a programming environment, an operating system for the quantum computer, a user interface, and so forth. The algorithms composed using this layer can be fully quantum, but they may also involve a combination of classical and quantum parts. The application layer should not depend on the type of hardware used in the layers under it.

Directly below the application layer is the classical-processing layer, which has three basic functions. First, it optimizes the quantum algorithm being run and compiles it into microinstructions. That’s analogous to what goes on in a classical computer’s CPU, which processes many microinstructions for each machine-code instruction it must carry out. This layer also processes the quantum-state measurements returned by the hardware in the layers below, which may be fed back into a classical algorithm to produce final results. The classical-processing layer will also take care of the calibration and tuning needed for the layers below.

Underneath the classical layer are the digital-, analog-, and quantum-processing layers, which together make up a quantum processing unit (QPU). There is a tight connection between the three layers of the QPU, and the design of one will depend strongly on that of the other two. Let me describe more fully now the three layers that make up the QPU, moving from the top downward.

The digital-processing layer translates microinstructions into pulses, the kinds of signals needed to manipulate qubits, allowing them to act as quantum logic gates. More precisely, this layer provides digital definitions of what those analog pulses should be. The analog pulses themselves are generated in the QPU’s analog-processing layer. The digital layer also feeds back the measurement results of the quantum calculation to the classical-processing layer above it, so that the quantum solution can be combined with results computed classically.

Right now, personal computers or field-programmable gate arrays can handle these tasks. But when error correction is added to quantum computers, the digital-processing layer will have to become much more complicated.

The analog-processing layer creates the various kinds of signals sent to the qubits, one layer below. These are mainly voltage steps and sweeps and bursts of microwave pulses, which are phase and amplitude modulated so as to execute the required qubit operations. Those operations involve qubits connected together to form quantum logic gates, which are used in concert to carry out the overall computation according to the particular quantum algorithm that is being run.

Although it’s not technically difficult to generate such a signal, there are significant hurdles here when it comes to managing the many signals that would be needed for a practical quantum computer. For one, the signals sent to the different qubits would need to be synchronized at picosecond timescales. And you need some way to convey these different signals to the different qubits so as to be able to make them do different things. That’s a big stumbling block.

In today’s small-scale systems, with just a few dozen qubits, each qubit is tuned to a different frequency—think of it as a radio receiver locked to one channel. You can select which qubit to address on a shared signal line by transmitting at its special frequency. That works, but this strategy doesn’t scale. You see, the signals sent to a qubit must have a reasonable bandwidth, say, 10 megahertz. And if the computer contains a million qubits, such a signaling system would need a bandwidth of 10 terahertz, which of course isn’t feasible. Nor would it be possible to build in a million separate signal lines so that you could attach one to each qubit directly.

The solution will probably involve a combination of frequency and spatial multiplexing. Qubits would be fabricated in groups, with each qubit in the group being tuned to a different frequency. The computer would contain many such groups, all attached to an analog communications network that allows the signal generated in the analog layer to be connected only to a selected subset of groups. By arranging the frequency of the signal and the network connections correctly, you can then manipulate the targeted qubit or set of qubits without affecting the others.
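
In code, that addressing scheme is just a mapping from a qubit index to a group plus a frequency slot. The group size, frequencies, and function names below are illustrative assumptions, not a real control-stack interface:

```python
GROUP_SIZE = 8
FREQS_GHZ = [5.0 + 0.1 * k for k in range(GROUP_SIZE)]   # one carrier frequency per in-group qubit

def address(qubit_index):
    """Map a flat qubit index to (group to route the signal to, carrier frequency)."""
    group, slot = divmod(qubit_index, GROUP_SIZE)
    return group, FREQS_GHZ[slot]

def pulse(qubit_index, rotation_deg):
    group, freq = address(qubit_index)
    return {"route_to_group": group, "carrier_ghz": freq, "rotation_deg": rotation_deg}

print(pulse(1_000_003, 90))   # one shared generator can still single out qubit 1,000,003
```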

That approach should do the job, but such multiplexing comes with a cost: inaccuracies in control. It remains to be determined how such inaccuracies can be overcome.

In current systems, the digital- and analog-processing layers operate mainly at room temperature. Only the quantum-processing layer beneath them, the layer holding the qubits, is kept near absolute zero temperature. But as the number of qubits increases in future systems, the electronics making up all three of these layers will no doubt have to be integrated into one packaged cryogenic chip.

Some companies are currently building what you might call pre-prototype systems, based mainly on superconducting qubits. These machines contain a maximum of a few dozen qubits and are capable of executing tens to hundreds of coherent quantum operations. The companies pursuing this approach include tech giants Google, IBM, and Intel.

By extending the number of control lines, engineers could expand current architectures to a few hundred qubits, but that’s the very most. And the short time that these qubits remain coherent—today, roughly 50 microseconds—will limit the number of quantum instructions that can be executed before the calculation is consumed by errors.

Given these limitations, the main application I anticipate for systems with a few hundred qubits will be as an accelerator for conventional supercomputers. Specific tasks for which the quantum computer runs faster will be sent from a supercomputer to the quantum computer, with the results then returned to the supercomputer for further processing. The quantum computer will in a sense act like the GPU in your laptop, doing certain specific tasks, like matrix inversion or optimization of initial conditions, a lot faster than the CPU alone ever could.

During this next phase in the development of quantum computers, the application layer will be fairly straightforward to build. The digital-processing layer will also be relatively simple. But building the three layers that make up the QPU will be tricky.

Current fabrication techniques cannot produce completely uniform qubits. So different qubits have slightly different properties. That heterogeneity in turn requires the analog layer of the QPU to be tailored to the specific qubits it controls. The need for customization makes the process of building a QPU difficult to scale. Much greater uniformity in the fabrication of qubits would remove the need to customize what goes on in the analog layer and would allow for the multiplexing of control and measurement signals.

Multiplexing will be required for the large numbers of qubits that researchers will probably start introducing in 5 to 10 years so that they can add error correction to their machines. The basic idea behind such error correction is simple enough: Instead of storing the data in one physical qubit, multiple physical qubits are combined into one error-corrected, logical qubit.

Quantum error correction could solve the fundamental problem of decoherence, but it would require anywhere from 100 to 10,000 physical qubits per logical qubit. And that’s not the only hurdle. Implementing error correction will require a low-latency, high-throughput feedback loop that spans all three layers of the QPU.
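
The simplest example of the idea is the three-qubit bit-flip code. The toy simulation below treats it classically (it ignores phase errors and real quantum measurement entirely) but shows why redundancy plus majority voting suppresses errors:

```python
import random

def encode(bit):                  # one logical bit -> three physical copies
    return [bit] * 3

def add_noise(phys, p):           # each physical bit flips independently with probability p
    return [b ^ (random.random() < p) for b in phys]

def decode(phys):                 # majority vote corrects any single flip
    return int(sum(phys) >= 2)

p, trials = 0.05, 100_000
unprotected = sum(random.random() < p for _ in range(trials)) / trials
protected = sum(decode(add_noise(encode(0), p)) != 0 for _ in range(trials)) / trials
print(unprotected, protected)     # roughly 0.05 versus roughly 0.007 (about 3 * p**2)
```

Real quantum error correction needs far more machinery, including codes that handle phase errors and measurements that reveal only the error rather than the data, which is where the 100-to-10,000 physical-qubit overhead comes from.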

It remains to be seen which of the many types of qubits being experimented with now—superconducting circuits, spin qubits, photonic systems, ion traps, nitrogen-vacancy centers, and so forth—will prove to be the most suitable for creating the large numbers of qubits needed for error correction. Regardless of which one proves best, it’s clear that success will require packaging and controlling millions of qubits if not more.

Which brings us to the big question: Can that really be done? The millions of qubits would have to be controlled by continuous analog signals. That’s hard but by no means impossible. I and other researchers have calculated that if device quality could be improved by a few orders of magnitude, the control signals used to perform error correction could be multiplexed and the design of the analog layer would become straightforward, with the digital layer managing the multiplexing scheme. These future QPUs would not require millions of digital connections, just some hundreds or thousands, which could be built using current techniques for IC design and fabrication.

The bigger challenge could well prove to be the measurement side of things: Many thousands of measurements per second would need to be performed on the chip. These measurements would be designed so that they do not disturb the quantum information (which remains unknown until the end of the calculation) while at the same time revealing and correcting any errors that arise along the way. Measuring millions of qubits at this frequency will require a drastic change in measurement philosophy.

The current way of measuring qubits requires the demodulation and digitization of an analog signal. At the measurement rate of many kilohertz, and with millions of qubits in a machine, the total digital throughput would be petabytes per second. That’s far too much data to handle using today’s techniques, which involve room-temperature electronics connected to the chip holding the qubits at temperatures near absolute zero.

Clearly, the analog and digital layers of the QPU will have to be integrated with the quantum-processing layer on the same chip, with some clever schemes implemented there for preprocessing and multiplexing the measurements. Fortunately, for the processing that is done to correct errors, not all qubit measurements would have to be passed up to the digital layer. That only needs to be done when local circuitry detects an error, which drastically reduces the required digital bandwidth.

What goes on in the quantum layer will fundamentally determine how well the computer will operate. Imperfections in the qubits mean that you’ll need more of them for error correction, and as those imperfections get worse, the requirements for your quantum computer explode beyond what is feasible. But the converse is also true: Improvements in the quality of the qubits might be costly to engineer, but they would very quickly pay for themselves.

In the current pre-prototyping phase of quantum computing, individual qubit control is still unavoidable: It’s required to get the most out of the few qubits that we now have. Soon, though, as the number of qubits available increases, researchers will have to work out systems for multiplexing control signals and the measurements of the qubits.

The next significant step will be the introduction of rudimentary forms of error correction. Initially, there will be two parallel development paths, one with error correction and the other without, but error-corrected quantum computers will ultimately dominate. There’s simply no other route to a machine that can perform useful, real-world tasks.

To prepare for these developments, chip designers, chip-fabrication-process engineers, cryogenic-control specialists, experts in mass data handling, quantum-algorithm developers, and others will need to work together closely.

Such a complex collaboration would benefit from an international quantum-engineering road map. The various tasks required could then be assigned to the different sets of specialists involved, with the publishers of the road map managing communication between groups. By combining the efforts of academic institutions, research institutes, and commercial companies, we can and will succeed in building practical quantum computers, unleashing immense computing power for the future. 

This article appears in the April 2020 print issue as “Quantum Computers Scale Up.”

About the Author

Richard Versluis is the system architect at QuTech, a quantum-computing collaboration between Delft University of Technology and the Netherlands Organization for Applied Scientific Research.

Data Centers Are Plagued by Wasteful Computing. Game Theory Could Help

Post Syndicated from Seyed Majid Zahedi original https://spectrum.ieee.org/computing/hardware/data-centers-plagued-by-wasteful-computing-game-theory-could-help

When you hear the words “data center” and “games,” you probably think of massive multiplayer online games like World of Warcraft. But there’s another kind of game going on in data centers, one meant to hog resources from the shared mass of computers and storage systems.

Even employees of Google, the company with perhaps the most massive data footprint, once played these games. When asked to submit a job’s computing requirements, some employees inflated their requests for resources in order to reduce the amount of sharing they’d have to do with others. Interestingly, some other employees deflated their resource requests to pretend that their tasks could easily fit within any computer. Once their tasks were slipped into a machine, those operations would then use up all the resources available on it and squeeze out their colleagues’ tasks.

Such trickery might seem a little comical, but it actually points to a real problem—inefficiency.

Globally, data centers consumed 205 billion kilowatt-hours of electricity in 2018. That’s not much less than all of Australia used, and about 1 percent of the world total. A lot of that energy is wasted because servers are not used to their full capacity. An idle server dissipates as much as 50 percent of the power it consumes when running at its peak; as the server takes on work, its fixed power costs are amortized over that work. Because a user running a single task typically takes up only 20 to 30 percent of the server’s resources, multiple users must share the server to boost its utilization and consequently its energy efficiency. Sharing also reduces capital, operating, and infrastructure costs. Not everybody is rich enough to build their own data centers, after all.
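
The amortization argument is easy to see with a simple linear power model. The wattages below are illustrative assumptions (with idle set to 50 percent of peak, as noted above), not measurements of any particular server:

```python
def power_watts(utilization, p_idle=250.0, p_peak=500.0):
    """Toy linear model: a fixed idle floor plus a part that scales with utilization."""
    return p_idle + (p_peak - p_idle) * utilization

for u in (0.25, 0.75):
    print(u, round(power_watts(u) / u), "W per unit of work")
# 0.25 -> 1250 W per unit of work; 0.75 -> 583 W: sharing amortizes the idle cost
```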

To allocate shared resources, data centers deploy resource-management systems, which divide up available processor cores, memory capacity, and network resources according to users’ needs and the system’s own objectives. At first glance, this task should be straightforward because users often have complementary demands. But in truth, it’s not. Sharing creates competition among users, as we saw with those crafty Googlers, and that can distort the use of resources.

So we have pursued a series of projects using game theory, the mathematical models that describe strategic interactions among rational decision makers, to manage the allocation of resources among self-interested users while maximizing data-center efficiency. In this situation, playing the game makes all the difference.

Helping a group of rational and self-interested users share resources efficiently is not just a product of the big-data age. Economists have been doing it for decades. In economics, market mechanisms set prices for resources based on supply and demand. Indeed, many of these mechanisms are currently deployed in public data centers, such as Amazon EC2 and Microsoft Azure. There, the transfer of real money acts as a tool to align users’ incentives (performance) with the provider’s objectives (efficiency). However, there are many situations where the exchange of money is not useful.

Let’s consider a simple example. Suppose that you are given a ticket to an opera on the day of your best friend’s wedding, and you decide to give the ticket to someone who will best appreciate the event. So you run what’s called a second-price auction: You ask your friends to bid for the ticket, stipulating that the winner pay you the amount of the second-highest bid. It has been mathematically proven that your friends have no incentives to misrepresent how much they value the opera ticket in this kind of auction.
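
A minimal sketch of that second-price (Vickrey) auction, with made-up friends and bids:

```python
def second_price_auction(bids):
    """bids maps each friend to a bid; the highest bidder wins but pays only the
    second-highest bid, which is why truthful bidding is the best strategy."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    return winner, bids[runner_up]

print(second_price_auction({"Alice": 80, "Bob": 60, "Carol": 45}))   # ('Alice', 60)
```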

If you do not want money or cannot make your friends pay you any, your options become very limited. If you ask your friends how much they would love to go to the opera, nothing stops them from exaggerating their desire for the ticket. The opera ticket is just a simple example, but there are plenty of places—such as Google’s private data centers or an academic computer cluster—where money either can’t or shouldn’t change hands to decide who gets what.

Game theory provides practical solutions for just such a problem, and indeed it has been adapted for use in both computer networks and computer systems. We drew inspiration from those two fields, but we also had to address their limitations. In computer networks, there has been much work in designing mechanisms to manage self-interested and uncoordinated routers to avoid congestion. But these models consider contention over only a single resource—network bandwidth. In data-center computer clusters and servers, there is a wide range of resources to fight over.

In computer systems, there’s been a surge of interest in resource-allocation mechanisms that consider multiple resources, notably one called dominant resource fairness [PDF]. However, this and similar work is restricted to performance models and to ratios of processors and memory that don’t always reflect what goes on in a data center.

To come up with game theory models that would work in the data center, we delved into the details of hardware architecture, starting at the smallest level: the transistor.

Transistors were long made to dissipate ever less power as they scaled down in size, in part by lowering the operating voltage. By the mid-2000s, however, that trend, known as Dennard scaling, had broken down. As a result, for a fixed power budget, processors stopped getting faster at the rate to which we had become accustomed. A temporary solution was to put multiple processor cores on the same chip, so that the enormous number of transistors could still be cooled economically. However, it soon became apparent that you cannot turn on all the cores and run them at full speed for very long without melting the chip.

In 2012, computer architects proposed a workaround called computational sprinting. The concept was that processor cores could safely push past their power budget for short intervals called sprints. After a sprint, the processor has to cool down before the next sprint; otherwise the chip is destroyed. If done correctly, sprinting could make a system more responsive to changes in its workload. Computational sprinting was originally proposed for processors in mobile devices like smartphones, which must limit power usage both to conserve charge and to avoid burning the user. But sprinting soon found its way into data centers, which use the trick to cope with bursts of computational demand.

Here’s where the problem arises. Suppose that self-interested users own sprinting-enabled servers, and those servers all share a power supply in a data center. Users could sprint to increase the computational power of their processors, but if a large fraction of them sprint simultaneously, the power load will spike. The circuit breaker is then tripped. This forces the batteries in the uninterruptible power supply (UPS) to provide power while the system recovers. After such a power emergency, all the servers on that power supply are forced to operate on a nominal power budget—no sprinting allowed—while the batteries recharge.

This scenario is a version of the classic “tragedy of the commons,” first identified by British economist William Forster Lloyd in an 1833 essay. He described the following situation: Suppose that cattle herders share a common parcel of land to graze their cows. If an individual herder puts more than the allotted number of cattle on the common, that herder could achieve marginal benefits. But if many herders do that, the overgrazing will damage the land, hurting everyone.

Together with Songchun Fan, then a Duke University doctoral candidate, we studied sprinting strategies as a tragedy of the commons. We built a model of the system that focused on the two main physical constraints. First, for a server processor, a sprint restricts future action by requiring the processor to wait while the chip dissipates heat. Second, for a server cluster, if the circuit breaker trips, then all the server processors must wait while the UPS batteries recharge.

We formulated a sprinting game in which users, in each round, could be in one of three states: active, cooling after a sprint, or recovering after a power emergency. In each epoch, or round of the game, a user’s only decision is whether or not to sprint when their processor is active. Users want to optimize their sprinting to gain benefits, such as improved throughput or reduction in execution time. You should note that these benefits vary according to when the sprint happens. For instance, sprinting is more beneficial when demand is high.

Consider a simple example. You are at round 5, and you know that if you sprint, you will gain 10 units of benefit. However, you’d have to let your processor cool down for a couple of rounds before you can sprint again. But now, say you sprint, and then it turns out that if you had instead waited for round 6 to sprint, you could have gained 20 units. Alternatively, suppose that you save your sprint for a future round instead of using it in round 5. But it turns out that all the other users decided to sprint at round 5, causing a power emergency that prevents you from sprinting for several rounds. Worse, by then your gains won’t be nearly as high.

All users must make these kinds of decisions based on how much utility they gain and on other users’ sprinting strategies. While it might be fun to play against a few users, making these decisions becomes intractable as the number of competitors grows to data-center scale. Fortunately, we found a way to optimize each user’s strategy in large systems by using what’s called mean field game analysis. This method avoids the complexity of scrutinizing individual competitors’ strategies by instead describing their behavior as a population. Key to this statistical approach is the assumption that any individual user’s actions do not change the average system behavior significantly. Because of that assumption, we can approximate the effect of all the other users on any given user with a single averaged effect.

It’s kind of analogous to the way millions of commuters try to optimize their daily travel. An individual commuter, call her Alice, cannot possibly reason about every other person on the road. Instead she formulates some expectation about the population of commuters as a whole, their desired arrival times on a given day, and how their travel plans will contribute to congestion.

Mean field analysis allows us to find the “mean field equilibrium” of the sprinting game. Users optimize their responses to the population, and, in equilibrium, no user benefits by deviating from their best responses to the population.

In the traffic analogy, Alice optimizes her commute according to her understanding of the commuting population’s average behavior. If that optimized plan does not produce the expected traffic pattern, she revises her expectations and rethinks her plan. With every commuter optimizing at once, over a few days, traffic converges to some recurring pattern and commuters’ independent actions produce an equilibrium.

Using the mean field equilibrium, we formulated the optimal strategy for the sprinting game, which boils down to this: A user should sprint when the performance gains exceed a certain threshold, which varies depending on the user. We can compute this threshold using the data center’s workloads and its physical characteristics.
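
Stripped to its essentials, each user’s per-epoch decision looks like the sketch below. The threshold, gains, and cooling time are made-up numbers, and both the power-emergency state and the mean-field computation of the threshold itself are omitted:

```python
import random

ACTIVE, COOLING = "active", "cooling"     # the recovery state after a tripped breaker is omitted

threshold = 12.0                          # in the full game this comes from the mean-field equilibrium
state, cooldown = ACTIVE, 0

for epoch in range(10):
    if state == COOLING:
        cooldown -= 1
        if cooldown == 0:
            state = ACTIVE
        continue
    gain = random.uniform(0, 20)          # stand-in for the workload-dependent benefit of sprinting
    if gain > threshold:                  # the equilibrium policy: sprint iff the gain clears the threshold
        print(f"epoch {epoch}: sprint, gain {gain:.1f}")
        state, cooldown = COOLING, 2      # must cool for two epochs before the next sprint
```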

When everybody operates with their optimal threshold at the mean field equilibrium, the system gets a number of benefits. First, the data center’s power management can be distributed, as users implement their own strategies without having to request permission from a centralized manager to sprint. Such independence makes power control more responsive, saving energy. Users can modulate their processor’s power draw in microseconds or less. That wouldn’t be possible if they had to wait tens of milliseconds for permission requests and answers to wind their way across the data center’s network. Second, the equilibrium gets more computing done, because users optimize strategies for timely sprints that reflect their own workload demands. And finally, a user’s strategy becomes straightforward—sprinting whenever the gain exceeds a threshold. That’s extremely easy to implement and trivial to execute.

The sprinting power-management project is just one in a series of data-center management systems we’ve been working on over the past five years. In each, we use key details of the hardware architecture and system to formulate the games. The results have led to practical management mechanisms that provide guarantees of acceptable system behavior when participants act selfishly. Such guarantees, we believe, will only encourage participation in shared systems and establish solid foundations for energy-efficient and scalable data centers.

Although we’ve managed to address the resource-allocation problem at the levels of server multiprocessors, server racks, and server clusters, putting them to use in large data centers will require more work. For one thing, you have to be able to generate a profile of the data center’s performance. Data centers must therefore deploy the infrastructure necessary to monitor hardware activity, assess performance outcomes, and infer preferences for resources.

Most game theory solutions for such systems require the profiling stage to happen off-line. It might be less intrusive instead to construct online mechanisms that can start with some prior knowledge and then update their parameters during execution as characteristics become clearer. Online mechanisms might even improve the game as it’s being played, using reinforcement learning or another form of artificial intelligence.

There’s also the fact that in a data center, users may arrive and depart from the system at any time; jobs may enter and exit distinct phases of a computation; servers may fail and restart. All of these events require the reallocation of resources, yet these reallocations may disrupt computation throughout the system and require that data be shunted about, using up resources. Juggling all these changes while still keeping everyone playing fairly will surely require a lot more work, but we’re confident that game theory will play a part.

This article appears in the April 2020 print issue as “A Win for Game Theory in the Data Center.”

About the Authors

Benjamin C. Lee, an associate professor of electrical and computer engineering at Duke University, and Seyed Majid Zahedi, an assistant professor at the University of Waterloo, in Ont., Canada, describe a game they developed that can make data centers more efficient. While there’s a large volume of literature on game theory’s use in computer networking, Lee says, computing on the scale of a data center is a very different problem. “For every 10 papers we read, we got maybe half an idea,” he says.

China Launches National Blockchain Network in 100 Cities

Post Syndicated from Nick Stockton original https://spectrum.ieee.org/computing/software/china-launches-national-blockchain-network-100-cities

Next month, an alliance of Chinese government groups, banks, and technology companies will publicly launch the Blockchain-based Service Network (BSN). It will be among the first blockchain networks to be built and maintained by a central government. Think of it like an operating system, where participants can use existing blockchain programs, or build their own bespoke tools, without having to design a framework from the ground up.

The BSN’s proponents say it will reduce the costs of doing blockchain-based business by 80 percent. By the end of 2020, they hope to have nodes in 200 Chinese cities. Eventually, they believe it could become a global standard.

China leads the world in blockchain-related patents, according to the World Intellectual Property Organization. And blockchain goes far beyond Bitcoin; the technology can be used to verify all sorts of transactions.

For instance, JD.com—one of China’s largest online stores—uses blockchain tech to verify its supply chain to customers and business partners who had worried the retailer was selling knock-off versions of luxury brands. The company recently made its platform open source. China’s General Administration of Customs uses blockchain to monitor 26 international border crossings.

And, though China has effectively banned cryptocurrencies like Bitcoin, digital payments are wildly popular. “Most people prefer to use WeChat or Alipay,” says Hong Wan, a blockchain expert from North Carolina State University. She says the government may want BSN to become central to a digital currency and payment system that would rival those services.

The biggest roadblock for blockchain technology has been that setting up a platform is expensive and difficult, says Yang Xiang, the dean of the Digital Research Innovation Capability Platform at Swinburne University in Australia. “When we look back at the development of blockchain technology, the emergence of BSN or similar solutions is inevitable,” he says.

According to a white paper [PDF] published by the BSN’s founding members—which include the Chinese National Information Center, China UnionPay, China Mobile, and payroll services company Red Date—most companies can expect to spend at least US $14,000 to build, operate, and maintain a blockchain platform for one year.

The BSN will let programmers develop blockchain applications without requiring them to do so much heavy lifting. The white paper estimates it will cost businesses, on average, less than $300 to deploy an application on BSN.

Unlike Bitcoin and other so-called permissionless blockchains, where anybody can join and review the entire transaction record, applications running on the BSN will have closed membership by default. This ‘permissioned’ setup is much more amenable to businesses, which typically want to share transaction data only with trusted partners. Permissioned networks are also easier to scale, because all verifications happen in-house.
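
The difference is easy to see in a toy ledger. The sketch below is not BSN code; it just shows a hash-chained log whose membership check is the “permissioned” part (a permissionless chain would let anyone append):

```python
import hashlib, json

MEMBERS = {"factory", "warehouse", "retailer"}       # closed membership by default

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "prev": "0" * 64, "author": "factory", "data": "genesis"}]

def append(author, data):
    if author not in MEMBERS:                        # the permissioned check
        raise PermissionError(f"{author} is not a member of this network")
    chain.append({"index": len(chain), "prev": block_hash(chain[-1]),
                  "author": author, "data": data})

append("warehouse", {"shipment": "SKU-123", "status": "received"})
append("retailer", {"shipment": "SKU-123", "status": "sold"})
print(len(chain), chain[-1]["prev"][:12])            # each block is chained to the one before it
```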

The BSN’s founders announced the platform on 15 October 2019—about a week before Chinese President Xi Jinping declared blockchain a national tech priority. Since then, individual developers and enterprise-scale engineering groups have been building and beta testing the platform. By launch time, the BSN Development Alliance says it hopes to have 100 city nodes running the platform, each with thousands of users.

The plan isn’t without its critics, though. North Carolina State’s Wan, for one, fears that the platform will experience performance lag due to verifying so many diffuse transactions. She says the BSN hasn’t yet released detailed technical specifications on the platform, so she doesn’t know how its creators will overcome this problem. “I think we all have skepticisms about what is going on in the tech,” says Wan. “It is still in the testing phase.”

The BSN Alliance hopes the platform will someday become the global standard for blockchain operations. But, China’s international partners may hesitate to join due to privacy concerns: The Chinese government will hold the BSN’s root key, which would allow it to monitor all transactions made using the platform.

China isn’t the only country betting on blockchain. In 2016, Australia spearheaded an effort to create global blockchain standards through the International Organization for Standardization. The European Union has been trudging toward a blockchain platform for years. And IBM, Facebook, and other tech companies have already launched their own versions.

Jiangshan Yu, associate director of the Blockchain Technology Centre at Monash University in Australia, isn’t concerned. He takes the long view: “What I see happening with the global blockchain infrastructure is there will be many national, local, or business platforms that will all eventually come together.”

This article appears in the April 2020 print issue as “China Takes Blockchain National.”

New Photonics Engine Promises Low-Loss, Energy-Efficient Data Capacity for Hyperscale Data Centers

Post Syndicated from Lynne Peskoe-Yang original https://spectrum.ieee.org/tech-talk/computing/networks/new-photonics-engine-promises-lowloss-energyefficient-data-capacity-for-hyperscale-data-centers

At the Optical Networking and Communication Conference in San Francisco, which wrapped up last Thursday, a team of researchers from Intel described a possible solution to a computing problem that keeps data server engineers awake at night: how networks can keep up with our growing demand for data.

The amount of data used globally is growing exponentially. Reports from last year suggest that something like 2.5 quintillion bytes of data are produced each day. All that data has to be routed from its origins—in consumer hard drives, mobile phones, IoT devices, and other processors—through multiple servers as it finds its way to other machines.

“The challenge is to get data in and out of the chip,” without losing information or slowing down processing, said Robert Blum, director of marketing and new business at Intel. Optical links such as fiber-optic cables have been in widespread use for decades as an alternative to electrical interconnects, but loss still occurs at the inevitable boundaries between materials in a hybrid optoelectronic system.

The team at Intel has developed a photonic engine with the equivalent processing power of sixteen 100-gigabit transceivers, or 4 of the latest 12.8-terabit generation. The standout feature of the new chip is its co-packaging, a method of close physical integration of the necessary electrical components with faster, lossless optical ones.

The close integration of the optical components allows Intel’s engine to “break the wall” of the maximum density of pluggable port transceivers on a switch ASIC, according to Blum. More ports on a switch—the specialized processor that routes data traffic—mean higher processing power, but only so many connectors can fit together before overheating becomes a threat.

The photonic engine brings the optical elements right up to the switch. Optical fibers require less space to connect and improve air flow throughout the server without adding to its heat waste. “With this [co-packaging] innovation, higher levels of bit rates are possible because you are no longer limited by electrical data transfer,” said Blum. “Once you get to optical computing, distance is free—2 meters, 200 meters, it doesn’t matter.”

Driving huge amounts of high-speed data over the foot-long copper traces, as is necessary in standard server architectures, is also expensive—especially in terms of energy consumption. “With electrical [computation], as speed goes higher, you need more power; with optical, it is literally lossless at any speed,” said lead device integration engineer Saeed Fathololoumi.

“Power is really the currency on which data centers operate,” added Blum. “They are limited by the amount of power you can supply to them, and you want to use as much of that power as possible to compute.”

The co-packaged photonic engine currently exists as a functional demo back at Intel’s lab. The demonstration at the conference used a P4-programmable Barefoot Tofino 2 switch ASIC capable of speeds reaching 12.8 terabits per second, in combination with Intel’s 1.6-Tbps silicon photonics engines. “The optical interface is already the standard industry interface, but in the lab we’re using a switch that can talk to any other switch using optical protocols,” said Blum.

It’s the first step toward an all-optical input-output scheme, which may offer future data centers a way to cope with the rapidly expanding data demands of the Internet-connected public. For the Intel team, that means working with the rest of the computing industry to define the initial deployments of the new engines. “We’ve proven out the main technical building blocks, the technical hurdles,” said Fathololoumi. “The risk is low now to develop this into a product.”

Programming Without Code: The Rise of No-Code Software Development

Post Syndicated from Rina Diane Caballar original https://spectrum.ieee.org/tech-talk/computing/software/programming-without-code-no-code-software-development

Code is the backbone of most software programs and applications. Each line of code serves as an instruction—a logical, step-by-step mechanism for computers, servers, and other machines to perform an action. To create those instructions, one must know how to write code—a valuable skill that’s sometimes in short supply.

But what if you could build software without writing a single line of code? That’s the premise behind no-code development, a software development method that has been gathering momentum. With the help of no-code platforms, it’s possible to develop software without writing any underlying code.

“No-code allows people who don’t know how to write code to develop the same applications that a software engineer would,” says Vlad Magdalin, co-founder and CEO of Webflow, a no-code platform for building websites. “It’s the ability to do without code what has traditionally been done with code.”

Image Sensor Doubles as a Neural Net

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/computing/hardware/image-neural

A new ultra-fast machine-vision device can process images thousands of times faster than conventional techniques with an image sensor that is also an artificial neural network.

Machine vision technology often uses artificial neural networks to analyze images. In artificial neural networks, components dubbed “neurons” are fed data and cooperate to solve a problem, such as recognizing images. The neural net repeatedly adjusts the strength of the connections or “synapses” between its neurons and sees if the resulting patterns of behavior are better at solving the problem. Over time, the network discovers which patterns are best at computing solutions. It then adopts these as defaults, mimicking the process of learning in the human brain.
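
A single artificial neuron already shows the training loop described above: feed in data, compare the output with the desired answer, and nudge the connection weights to reduce the error. The data here is synthetic and two-dimensional, just to keep the sketch short:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                        # stand-ins for tiny two-feature "images"
y = (X[:, 0] + X[:, 1] > 0).astype(float)            # the labels the neuron should learn

w, b = rng.normal(size=2), 0.0                       # "synapse" weights and a bias
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))               # the neuron's output for every sample
    w -= 0.5 * X.T @ (p - y) / len(y)                # weaken connections that caused errors
    b -= 0.5 * np.mean(p - y)

print(((p > 0.5) == y).mean())                       # accuracy climbs as the weights settle
```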

Cloud Services Tool Lets You Pay for Data You Use—Not Data You Store

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/computing/networks/pay-cloud-services-data-tool-news

Cloud storage services usually charge clients for how much data they wish to store. But charging users only when they actually use that data may be a more cost-effective approach, a new study finds.

Internet-scale web applications—the kind that run on servers across the globe and may handle millions of users—are increasingly relying on services that store data in the cloud. This helps applications deal with huge amounts of data. Facebook, for example, generates 4 petabytes (4 million gigabytes) of data every day.

Election Security Experts Cautiously Optimistic About New Voting Machines in Los Angeles

Post Syndicated from Fahmida Y Rashid original https://spectrum.ieee.org/tech-talk/computing/hardware/election-security-experts-new-voting-machines-los-angeles-super-tuesday-news

Election security experts will be carefully watching the Democratic primaries and caucuses in 14 states and one U.S. territory on Super Tuesday for signs of irregularities which may prevent accurate and timely reporting of voting results. Of particular interest will be Los Angeles County, where election officials are debuting brand-new custom voting machines to improve how residents vote.

Los Angeles County officials spent US $300 million over the past 10 years to make it easier and more convenient for people to vote—by expanding voting schedules, redesigning ballots, and building 31,000 new ballot-marking machines. As the nation’s largest county in terms of the number of residents, the geographic area that it covers, and the number of languages that must be supported, county officials decided to commission a brand-new system built from scratch instead of trying to customize existing systems to meet their requirements.

Honeywell’s Ion Trap Quantum Computer Makes Big Leap

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/computing/hardware/honeywells-ion-trap-quantum-computer-makes-big-leap

Honeywell may be a giant industrial technology firm, but it’s definitely not synonymous with advanced computing. Yet the company has made a ten-year commitment to developing an in-house quantum computer, and that commitment is about to start paying off.

“We expect within next three months we will be releasing world’s most powerful quantum computer,” says Tony Uttley, president of Honeywell Quantum Solutions. It’s the kind of claim competitors like IBM and Google have made periodically, but with Honeywell there’s a difference. Those others, using superconducting components chilled to near absolute zero, have been racing to cram more and more qubits onto a chip; Google reached its “quantum supremacy” milestone with 53 qubits. Uttley says Honeywell can beat it with a handful of its ion qubits.

Uttley measures the company’s success using a relatively new metric, pushed by IBM, called quantum volume. It’s essentially a measure of the number of physical qubits, how connected they are, and how error prone they are. IBM claimed a leading quantum volume of 32 using a 28-qubit system in early January. Honeywell’s four-qubit system reached 16, and it will hit 64 in the coming months, says Uttley.
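
Quantum volume is usually reported as 2 to the power n, where n is the size of the largest “square” random circuit (n qubits, n layers deep) the machine can run reliably, so the headline figures map back to modest circuit sizes. The snippet below is just that arithmetic, not the full benchmarking protocol:

```python
import math

for qv in (16, 32, 64):
    print(f"quantum volume {qv} -> square-circuit size n = {int(math.log2(qv))}")
# 16 -> 4, 32 -> 5, 64 -> 6
```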

The company has an ambitious path toward rapid expansion after that. “We expect to be on a trajectory to increase quantum volume 10-fold every year for the next five years,” he says. IBM is planning to double its figure every year.

Honeywell’s computer uses ytterbium ions trapped in an electromagnetic field in a narrow groove built in a chip. The qubit relies on the spin state of the ion’s outermost electron and that of its nucleus. This can be manipulated by lasers and can hold its state—remain coherent—for a fairly long time compared with other types of qubits. Importantly, the qubits can be moved around on the trap chip, allowing them to interact in ways that carry out quantum algorithms.

“We chose trapped ions because we believe in these early days of quantum computing, quality of qubit is going to matter most,” says Uttley. 

Honeywell is claiming qubits so free from corruption that they’ve achieved a first: a “mid-circuit” measurement. That is, the system can interrogate the state of a qubit during a computation without damaging the states of the others, and, based on that observed qubit, it can change what the rest of the computation does. “It’s equivalent to an ‘if’ statement,” explains Uttley. Mid-circuit measurements are not currently possible with other technologies. “It’s theoretically possible,” he says. “But practically speaking, it will be a point of differentiation [for Honeywell] for a while.”

Ion-trap quantum systems were first developed at the U.S. National Institute of Standards and Technology in the 1990s. In 2015, a veteran of that group, Chris Monroe, cofounded the ion-trap quantum computer company IonQ. IonQ has already fit 160 ytterbium-based qubits in its system and performed operations on 79 of them. The startup has published several tests of its system, but not a quantum volume measure.

Do You Have the Right Complexion for Facial Recognition?

Post Syndicated from Willie D. Jones original https://spectrum.ieee.org/tech-talk/computing/software/do-you-have-the-right-complexion-for-facial-recognition

Back in my days as an undergraduate student, campus police relied on their “judgement” to decide who might pose a threat to the campus community. The fact that they would regularly pass by white students, under the presumption that they belonged there, in order to interrogate one of the few black students on campus was a strong indicator that officers’ judgement—individual and collective—was based on flawed application of a limited data set. Worse, it was an issue that never seemed to respond to the “officer training” that was promised in the wake of such incidents.

Nearly 30 years later, some colleges are looking to avoid accusations of prejudice by letting artificial intelligence exercise its judgement about who belongs on their campuses. But facial recognition systems offer no escape from bias. Why? Like campus police, their results are too often based on flawed application of a limited data set.

Ambitious Data Project Aims to Organize the World’s Geoscientific Records

Post Syndicated from Michael Dumiak original https://spectrum.ieee.org/computing/software/ambitious-data-project-aims-to-organize-the-worlds-geoscientific-records

Geoscience researchers are excited by a new big-data effort to connect millions of hard-won scientific records in databases around the world. When complete, the network will be a virtual portal into the ancient history of the planet.

The project is called Deep-time Digital Earth, and one of its leaders, Nanjing-based paleontologist Fan Junxuan, says it unites hundreds of researchers—geochemists, geologists, mineralogists, paleontologists—in an ambitious plan to link potentially hundreds of databases.

The Chinese government has lined up US $75 million for a planned complex near Shanghai that will house dedicated programming teams and academics supporting the project, and a supercomputer for related research. More support will come from other institutions and companies, with Fan estimating total costs to create the network at about $90 million.

Right now, a handful of independent databases with more than a million records each serve the geosciences. But there are hundreds more out there holding data related to Earth’s history. These smaller collections were built with assorted software and documentation formats. They’re kept on local hard drives or institutional servers, some decades old, and converted from one format into another as time, funding, and interest allow. The data might be in different languages and is often guided by informal or variably defined concepts. There is no standard for arranging the hundreds of tables or thousands of fields. This archipelago of information is potentially very useful but hard to access.

Fan saw an opportunity while building a database comprising the Chinese geological literature. Once it was complete, he and his colleagues were able to use parallel computing programs to examine data on 11,000 marine fossil species in 3,000 geological sections. The results dated patterns of paleobiodiversity—the appearance, flowering, and extinction of whole species—at a temporal resolution of 26,000 years. In geologic time, that’s pretty accurate.

The Deep-time project planners want to build a decentralized system that would bring these large and small data sources together. The main technical challenge is not to aggregate petabytes of data on centralized servers but rather to write code that links the individual databases through a common programming interface, so that any user can extract information from all of them through that single interface.
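
The project’s interface hasn’t been published in detail, but the “link, don’t centralize” idea can be sketched in a few lines of Python: a thin layer that knows each member database’s URL and local vocabulary, fans a query out to all of them, and merges the answers. The endpoints and field names below are hypothetical, not the Deep-time Digital Earth specification.

```python
# A minimal sketch of federating independent databases behind one interface.
# Each source keeps its own server; the interface layer only translates a
# query into each source's local field names and merges the results.
import requests

SOURCES = {
    "fossils_cn": {"url": "https://example.org/geobiodb/api", "taxon_field": "taxon_name"},
    "neptune":    {"url": "https://example.org/neptune/api",  "taxon_field": "species"},
}

def search_taxon(taxon):
    """Fan one taxon query out to every registered database and merge results."""
    records = []
    for name, source in SOURCES.items():
        resp = requests.get(source["url"],
                            params={source["taxon_field"]: taxon},
                            timeout=30)
        resp.raise_for_status()
        for row in resp.json():
            row["_source"] = name  # remember which database each record came from
            records.append(row)
    return records

# records = search_taxon("Triticites")  # e.g., a fusulinid genus
```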

Harmonizing these data fields requires human beings to talk to one another. Fan and his colleagues hope to kick off those discussions in New Delhi, which in March is hosting a big gathering of geoscientists. A linked network could be a gold mine for researchers scouring geologic data for clues.

In a 19th-century building behind Berlin’s Museum für Naturkunde, micropaleontology curator David Lazarus and paleobiologist postdoc Johan Renaudie run the group’s Neptune database, which is likely to be linked with Deep-time Digital Earth as it develops. Neptune holds a wealth of data on core samples from the world’s ocean floors. Lazarus started the database in the late 1980s, before the current SQL language standard was readily available—at that time it was mostly found only on mainframes. Renaudie explains that Neptune has been modified from its original incarnation as a relational database using 4th Dimension for Mac, and has been carefully patched over the years.

There are many such patched-up archives in the field, and some researchers start, develop, and care for data centers that drift into oblivion when funding runs out. “We call them whale fall,” Lazarus says, referring to dead whales that sink to the ocean floor.

Creating a database network could keep this information alive longer and distribute it further. It could also lead to new kinds of queries, says Mike Benton, a vertebrate paleontologist in Bristol, England, by making it possible to combine independent data sources with iterative algorithms that run through millions or billions of equations. Doing so can deliver more precise time resolutions, which have hitherto been difficult to achieve. “If you want to analyze the dynamics of ancient geography and climate and its influence on life, you need a high-resolution geological timeline,” Fan says. “Right now this analysis is not available.”

This article appears in the March 2020 print issue as “Data Project Aims to Organize Scientific Records.”

Google v. Oracle Explained: The Fight for Interoperable Software

Post Syndicated from Rina Diane Caballar original https://spectrum.ieee.org/tech-talk/computing/software/google-v-oracle-explained-supreme-court-news-apis-software

Application programming interfaces (APIs) are the building blocks of software interoperability. APIs provide the specifications for different software programs to communicate and interact with each other. For instance, when a travel aggregator website sends a request for flight information to an airline’s API, the API would send flight details back for the website to display.
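
To make the flight-search example concrete, here is a toy version of that exchange in Python. The endpoint URL, parameters, and response fields are invented for illustration and do not correspond to any real airline’s API; what matters is that the aggregator only needs the published API contract, not the airline’s internal code.

```python
# Toy version of the travel-aggregator example: the aggregator knows nothing
# about how the airline stores its data, only the API contract (URL,
# parameters, response fields). Everything below is hypothetical.
import requests

def get_flights(origin, destination, date):
    resp = requests.get(
        "https://api.example-airline.com/v1/flights",  # hypothetical endpoint
        params={"from": origin, "to": destination, "date": date},
        timeout=10,
    )
    resp.raise_for_status()
    # The API's published specification tells the aggregator what to expect
    # back: here, a list of flights with a number, departure time, and fare.
    return [(f["flight_number"], f["departure_time"], f["fare"])
            for f in resp.json()["flights"]]

# flights = get_flights("SFO", "JFK", "2020-03-15")
```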

Keeping APIs open, meaning they’re publicly listed and available or shared through a partnership, enables developers to freely build applications that work together. That practice is the basis of how software works today. But a decade-long fight between Google and Oracle over API copyright and fair use could upend the status quo.

What North Korea Really Wants From Its Blockchain Conference

Post Syndicated from Morgen Peck original https://spectrum.ieee.org/tech-talk/computing/software/north-korea-blockchain-conference

A blockchain conference slated to take place next week in Pyongyang, North Korea, now seems unlikely to go forward as law enforcement agencies in the United States and regulators at the United Nations send a clear message that the transfer of cryptocurrency and blockchain expertise to the DPRK will not go unpunished. The uncertainty surrounding the event comes as fallout from a similar conference in 2019 continues to spread into the new year.

The 2019 conference, which took place last April, resulted in the arrest of Virgil Griffith, a U.S. citizen and Ethereum developer who gave a presentation on blockchain technology in Pyongyang after receiving warnings from the FBI not to do so. The Southern District of New York charged Griffith in early January with one count of conspiracy to violate the International Emergency Economic Powers Act. He is now out on bail awaiting trial.

Plans for a 2020 repeat of the conference drew a quick response from the United Nations, which flagged the event as a likely sanctions violation in a confidential report, according to Reuters. The website for the conference has since been taken down, and organizers did not respond to emails asking about the status of the event. However, one of the organizers listed on the website of the 2019 conference, Chris Emms of Coinstreet Partners, responded on the messaging app Telegram to say he is no longer involved. “I am not involved nor am I organising it whatsoever [sic],” wrote Emms.

With the fate of the event in doubt, experts are now debating whether it would have indeed complicated international efforts to restrict North Korea’s ability to finance its nuclear program. Over the last three years, the regime has proven itself highly proficient at using cryptocurrencies, both for criminal and non-criminal activities. Some analysts argue there’s little a developer like Griffith could teach officials in the North Korean regime about money laundering and sanctions evasion that they don’t already know.

“I don’t think he was sharing any shocking insights,” says Kayla Izenman, a research analyst at the Royal United Services Institute in London. “It’s pretty obvious that North Korea knows what they’re doing with cryptocurrency.”

According to Izenman’s own research at RUSI’s Centre for Financial Crime and Security Studies, North Korea has successfully used cryptocurrencies as a revenue stream and money laundering tool since at least 2017. In May of that year, North Korea-affiliated hackers deployed the Wannacry ransomware attack that first hit hospitals in the United Kingdom, but went on to circle the globe within five days. The Wannacry worm took computer hard drives hostage but offered victims the chance to recover data in return for bitcoins.

North Korea has sought out other cryptocurrency-related revenue streams as well. One of the most lucrative has been a series of hacking attacks carried out against online exchanges that often hold large sums of cryptocurrency. Izenman’s research indicates that the regime has been especially successful preying on low-security exchanges in South Korea. 

“They’ve actually been wildly successful in what they’ve done and it’s been, I would say, relatively low effort,” says Izenman.

Research indicates that the North Korean regime has also been mining cryptocurrency, either to use in illicit transactions today or to hoard for future use. A report released last week by the cybersecurity firm Recorded Future found that North Korea has increased its mining of the cryptocurrency Monero tenfold since 2018.

Monero is a privacy coin that obscures the identity of users, making it difficult, if not impossible, to track transactions. After Wannacry, the hackers exchanged the bitcoin proceeds from that attack for Monero, at which point investigators lost track of the funds.

“Following the money is absolutely key to placing any leverage on the Kim regime,” says Priscilla Moriuchi, an author of the Recorded Future report and a senior fellow at Harvard’s Belfer Center for Science and International Affairs. “Where crypto enters and what exits that chain is absolutely critical.”

As the North Korean regime finds ways to bring in cryptocurrency, it also needs ways to cash out. To do so, it relies on regional cryptocurrency exchanges that operate below the radar. According to Izenman, there are plenty to choose from.

“It’s just a huge weak spot,” says Izenman. “In some places exchanges don’t have to do comprehensive due diligence because there’s no government regulation. In some places they don’t have to do it because the existing government regulation isn’t enforced. And some exchanges just aren’t compliant with regulation. There are so many gaps in the whole system.”

But Moriuchi stresses that there is a broader issue at play. “It’s not just cryptocurrency that has changed the game. It’s the entire weaponization of the internet,” says Moriuchi. “The things that the North Korean state are doing, engaging in the blockchain development, mining cryptocurrency, doing IT work, ripping off gamers, robbing banks. All of these are things that other countries are starting to emulate.”

Why then would a country that is itself host to some of the most expert cyber-criminals in the world need to host a conference about blockchain technology? Izenman suggests the event may serve more as a propaganda tool than as a vehicle for technology transfer.

“What they want is the attention from having the conference and being able to fly in Americans and say, ‘Look, we have a guy from Ethereum talking to us about crypto and how we can evade sanctions,’” says Izenman. “So I kind of get why, as a U.S. entity, you would be wanting to stop that idea from spreading.”

Algorithm Groups People More Fairly to Reduce AI Bias

Post Syndicated from Matthew Hutson original https://spectrum.ieee.org/tech-talk/computing/software/algorithm-groups-people-more-fairly-reduce-ai-bias

Say you work at a big company and you’re hiring for a new position. You receive thousands of resumes. To make a first pass, you may turn to artificial intelligence. A common AI task is called clustering, in which an algorithm sorts through a set of items (or people) and groups them into similar clusters.

In the hiring scenario, you might create clusters based on skills and experience and then hire from the top crop. But algorithms can be unfair. Even if you instruct them to ignore factors like gender and ethnicity, these attributes often correlate with factors you do count, leading to clusters that don’t represent the larger pool’s demographics. As a result, you could end up hiring only white men.

In recent years, computer scientists have constructed fair clustering algorithms to counteract such biases, and a new one offers several advantages over those that came before. It could improve fair clustering, whether the clusters contain job candidates, customers, sick patients, or potential criminals.
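
The researchers’ algorithm itself isn’t reproduced here, but the property fair clustering aims to guarantee—that each cluster roughly mirrors the demographic mix of the whole pool—can be illustrated with a toy balance audit of an ordinary k-means result. The Python sketch below uses scikit-learn and made-up data; it measures the imbalance that fair clustering algorithms are designed to prevent by construction.

```python
# Toy illustration: cluster candidates on skill features alone, then check
# whether each cluster mirrors the demographic mix of the whole pool.
# This is a balance audit, not the fair-clustering algorithm from the paper.
from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

def cluster_balance(features, groups, k=3):
    """features: numeric array (e.g., skill scores); groups: a protected
    attribute per candidate, used only to audit the result, never to cluster."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
    overall = Counter(groups)
    report = {}
    for c in range(k):
        members = [g for g, lab in zip(groups, labels) if lab == c]
        in_cluster = Counter(members)
        # Balance ratio: each group's share in the cluster vs. in the pool.
        # A perfectly representative cluster scores 1.0 for every group.
        report[c] = {g: (in_cluster[g] / len(members)) / (overall[g] / len(groups))
                     for g in overall}
    return labels, report

# Example: 2-D skill vectors and a binary protected attribute.
X = np.random.rand(100, 2)
g = np.random.choice(["A", "B"], size=100)
labels, report = cluster_balance(X, g)
print(report)
```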

Toshiba’s Optimization Algorithm Sets Speed Record for Solving Combinatorial Problems

Post Syndicated from John Boyd original https://spectrum.ieee.org/tech-talk/computing/software/toshiba--optimization-algorithm-speed-record-combinatorial-problems

Toshiba has come up with a new way of solving combinatorial optimization problems. A classic example of such problems is the traveling salesman dilemma, in which a salesman must find the shortest route between many cities.

Such problems are found aplenty in science, engineering, and business. For instance, how should a utility select the optimal route for electric transmission lines, considering construction costs, safety, time, and the impact on people and the environment? Even the brute force of supercomputers is impractical when new variables increase the complexity of a question exponentially. 

But it turns out that many of these problems can be mapped to ground-state searches made by Ising machines. These specialized computers use mathematical models to describe the up-and-down spins of magnetic materials interacting with each other. Those spins can be used to represent a combinatorial problem. The optimal solution, then, becomes the equivalent of finding the ground state of the model.
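
Toshiba’s own algorithm isn’t shown here, but the general Ising-machine recipe—encode the problem as couplings between binary spins, then search for the lowest-energy spin configuration—can be illustrated with plain simulated annealing on a tiny spin system. The couplings in this Python sketch are arbitrary, chosen only to make the ground state easy to verify by hand.

```python
# Generic illustration of the Ising-machine idea: a problem is encoded as
# couplings J between spins s_i in {-1, +1}, and the optimal solution
# corresponds to the configuration minimizing E = -sum_ij J_ij * s_i * s_j.
# Plain simulated annealing is used here for clarity; it is not Toshiba's method.
import math
import random

def anneal(J, n, steps=20000, t_start=5.0, t_end=0.01):
    spins = [random.choice([-1, 1]) for _ in range(n)]

    def local_field(i):
        return sum(J.get((min(i, j), max(i, j)), 0.0) * spins[j]
                   for j in range(n) if j != i)

    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling
        i = random.randrange(n)
        # Energy change from flipping spin i: dE = 2 * s_i * h_i
        dE = 2.0 * spins[i] * local_field(i)
        if dE <= 0 or random.random() < math.exp(-dE / t):
            spins[i] = -spins[i]
    return spins

# Tiny example: couplings that favor s0 == s1 and s1 != s2.
J = {(0, 1): 1.0, (1, 2): -1.0}
print(anneal(J, 3))  # ground state is [1, 1, -1] or its mirror image
```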

Battle of the Video Codecs: Coding-Efficient VVC vs. Royalty-Free AV1

Post Syndicated from Rina Diane Caballar original https://spectrum.ieee.org/tech-talk/computing/software/battle-video-codecs-hevc-coding-efficiency-vvc-royalty-free-av1

Video is taking over the world. It’s projected to account for 82 percent of Internet traffic by 2022. And what started as an analog electronic medium for moving visuals has transformed into a digital format viewed on social media platforms, video sharing websites, and streaming services.

As video evolves, so too does the video encoding process, which applies compression algorithms to raw video so the files take up less space, making them easier to transmit and reducing the bandwidth required. Part of this evolution involves developing new codecs—encoders to compress videos plus decoders to decompress them for playback—to support higher resolutions, modern formats, and new applications such as 360-degree videos and virtual reality.

Today’s dominant standard, HEVC (High Efficiency Video Coding), was finalized in 2013 as a joint effort between the Moving Picture Experts Group (MPEG) and the Video Coding Experts Group (VCEG). HEVC was designed to offer better coding efficiency than the existing Advanced Video Coding (AVC) standard, with tests showing bit rates averaging 53 percent lower than AVC’s at the same subjective video quality. (Fun fact: HEVC was recognized with an Engineering Emmy Award in 2017 for enabling “efficient delivery in Ultra High Definition (UHD) content over multiple distribution channels,” while AVC garnered the same award in 2008.)

HEVC may be the incumbent, but two emerging options—VVC and AV1—could upend it.