Tag Archives: computing

Descartes Labs Built a Top 500 Supercomputer From Amazon Cloud

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/computing/hardware/descartes-labs-built-a-top-500-supercomputer-from-amazon-cloud

Cofounder Mike Warren talks about the future of high-performance computing in a data-rich, cloud computing world

Descartes Labs cofounder Mike Warren has had some notable firsts in his career, and a surprising number have had lasting impact. Back in 1998 for instance, his was the first Linux-based computer fast enough to gain a spot in the coveted Top 500 list of supercomputers. Today, they all run Linux. Now his company, which crunches geospatial and location data to answer hard questions, has achieved something else that may be indicative of where high-performance computing is headed: It’s built the world’s 136th fastest supercomputer using just Amazon Web Services and Descartes Labs’ own software. In 2010, this would have been the most powerful computer on the planet.

Notably, Amazon didn’t do anything special for Descartes. Warren’s firm just plunked down US $5,000 on the company credit card for the use of a “high-network-throughput instance block” consisting of 41,472 processor cores and 157.8 terabytes of memory. It then worked out some software to make the collection act as a single machine. Running the standard supercomputer benchmark, called LINPACK, the system reached 1,926.4 teraFLOPS (trillion floating point operations per second). (Amazon itself made an appearance much lower down on the Top 500 list a few years back, but that’s thought to have been for its own dedicated system in which Amazon was the sole user rather than what’s available to the public.)
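To put the LINPACK result in context, peak performance and efficiency are simple arithmetic. The sketch below uses the core count and measured result from the article, but the clock speed and FLOPs-per-cycle figures are illustrative assumptions, not AWS specifics:

```python
# Back-of-the-envelope LINPACK arithmetic. Core count and measured Rmax
# come from the article; clock speed and FLOPs/cycle are assumed values.

cores = 41_472                 # cores in the AWS instance block (from the article)
rmax_tflops = 1_926.4          # measured LINPACK result (from the article)

clock_ghz = 3.0                # assumed clock speed
flops_per_cycle = 16           # assumed (e.g., wide fused multiply-add units)

# Theoretical peak: cores * GHz * FLOPs/cycle gives GFLOPS; divide for TFLOPS.
rpeak_tflops = cores * clock_ghz * flops_per_cycle / 1_000
efficiency = rmax_tflops / rpeak_tflops

print(f"Theoretical peak: {rpeak_tflops:.1f} TFLOPS")
print(f"LINPACK efficiency: {efficiency:.1%}")
```

Real cloud runs typically achieve a lower fraction of peak than dedicated supercomputers, since LINPACK stresses the interconnect between nodes.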

Specifying nonlinearity in torque sensors

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/specifying-nonlinearity-in-torque-sensors

Reliable torque data leads to efficient, cost-effective designs

Description: Download this application note to learn how nonlinearity is specified in torque sensors, and how reliable torque data leads to efficient, cost-effective designs.

Developing Critical Applications for Multicore Environments: The SCADE Advantage

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/developing-critical-applications-for-multicore-environments-the-scade-advantage

How to Handle Complex Multicore Environments with ANSYS SCADE

Multicore environments that feature clusters of microprocessors deliver a range of benefits, but present significant engineering challenges. ANSYS SCADE produces embedded software that enables the entire electronics architecture to perform reliably and all components to work together flawlessly, while accounting for the way essential tasks are distributed across multiple cores.


IBM and Linux Call on Developers to Make Natural Disasters Less Deadly

Post Syndicated from Lynne Peskoe-Yang original https://spectrum.ieee.org/tech-talk/computing/software/ibm-and-linux-ask-developers-to-build-tech-that-makes-natural-disasters-less-deadly

The international Call for Code competition encourages developers to invent new technologies that can assist people after a hurricane or flood

On a stormy Tuesday in July, a group of 30 young programmers gathered in New York City to take on natural disasters. The attendees—most of whom were current college students and alumnae of the nonprofit Girls Who Code—had signed up for a six-hour hackathon in the middle of summer break.

Flash floods broke out across the city, but the atmosphere in the conference room remained upbeat. The hackathon was hosted in the downtown office of IBM as one of the final events in this year’s Call for Code challenge, a global competition sponsored by IBM and Linux. The challenge focuses on using technology to assist survivors of catastrophes including tropical storms, fires, and earthquakes. 

Recent satellite hackathon events in the 2019 competition have recruited developers in Cairo to address Egypt’s national water shortage; in Paris to brainstorm AI solutions for rebuilding the Notre Dame cathedral; and in Bayamón, Puerto Rico, to improve resilience in the face of future hurricanes. 

Does the Repurposing of Sun Microsystems’ Slogan Honor History, or Step on It?

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/computing/networks/does-repurposing-of-sun-microsystems-slogan-honor-history

“The Network is the Computer” catchphrase has a proud new parent: Cloudflare

“The Network is the Computer.” That phrase, coined by John Gage in the mid-1980s, was the tagline of Sun Microsystems for decades. Ray Rothrock, Sun’s former director of CAD/CAM marketing, in an interview with Cloudflare CTO John Graham-Cumming, explained the concept of “The Network is the Computer.” According to Rothrock, “[It] essentially said that you had one window into the network through your desktop computer…. And if you had the appropriate software you could use other people’s computers (for CPU power). And so you could do very hard problems that that single computer could not do because you could offload some of that CPU to the other computers.”

In early 2010, Oracle acquired most of Sun’s assets, including both software (Java) and hardware (the SPARC line of processors). The famous tagline, it appears, was never discussed in the deal, and Oracle never used or defended it.

Maybe Oracle just didn’t like it. Maybe it got lost in the shuffle. Or maybe people at that time thought it was so tightly linked to Sun that it wouldn’t be worth anything to anyone else. Imagine another cereal besides Wheaties calling itself the Breakfast of Champions, or another airline besides United urging people to “Fly the Friendly Skies.”

Nevertheless, if you abandon a trademark, it’s up for grabs. And so this month, Cloudflare, a decade-old company that runs one of the most popular content delivery networks (second only to Amazon’s CloudFront), grabbed it, announcing that it has registered it for itself.

John Gage, in another interview with Graham-Cumming posted on Cloudflare’s blog, says he’s fine with the slogan being picked up. Cloudflare’s existence and efforts in networked computers, he indicated, is a sign that Sun’s efforts were a success. “The phrase, ‘The Network is the Computer,’ resides in your brain. And when you get up in the morning and decide what to do, a little bit nudges you toward making the network work.”

What do other former Sun employees think? Are they happy the slogan, at the very least, carries some of Sun’s energy forward? Or do they think it’s odd to associate it with another company? I reached out to a Facebook group of Sun alumni to find out.

Larry Wake, who spent more than 20 years at Sun in various positions, recalled that “When Sun originated that tag line in the early 1980s, it was actually quite audacious. It was a stake in the ground [stating] ‘Computers should be networked, or they’re… not computers. Well, at least, you’re missing their potential by a country mile. They’re “islands of automation,” and you can do better than that. Join us!’”

“Sun,” he continued, “put a network interface in every computer they built from day one. That was not even remotely the norm at the time. But the part people tend to overlook is that Sun didn’t just say ‘networks are good.’ They wanted it to be *open* networking.” Wake recalls that at that time, if you wanted to network your computers, you paid extra for proprietary, non-interoperable networks: “SNA for your IBM mainframes, DECnet for your DEC minis, Novell Netware for your PCs. But Sun said, ‘Nah. Let’s all use Ethernet and TCP/IP. Those are open standards.’” Sun, he says, “kept pushing the envelope throughout our history.”

“So,” Wake concluded, “all props to Cloudflare for recognizing a great tagline when they see one, but ‘The CDN is the computer’ is not quite as world-changing as what Sun did.”

To Larry Rutter, also a long-time Sun employee, The Network is the Computer “will always be a Sun slogan—trademark or no trademark. If I see someone else use it without attribution [to Sun], I’ll view it negatively.”

Christian Funke, a former Sun employee from Germany, is more forgiving. “They could have taken the slogan and never mentioned Sun,” he says, and “just some old nerds like us would have noticed. But they did mention [Sun] and by that appreciated the original genius, which is even more alive these days than it maybe was at that time. So I guess I am OK with it.”

So is Jonathan Lancaster, Sun employee #126, who was in the room when the phrase was coined.  “I am OK with it having a new owner,” he says. “Sun was a pioneer in networking, open systems, [and] open source; and the concept and the idea continues to become more apparent with each leap forward in networking. I started with ‘bleeding edge’ 3 megabits per second Ethernet on a Sun 100u; now 5G wireless deployment is happening, and the difference between network and computer will be a blur.”

Lancaster pointed out that “The Network is the Computer” wasn’t the only slogan coined by Sun. It was predated, he says, by “Open Systems for Open Minds.” Sun’s rights to that phrase have likely also lapsed; I wonder if we’ll see any takers.

A Two-Track Algorithm To Detect Deepfake Images

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/tech-talk/computing/software/a-twotrack-algorithm-to-detect-deepfake-images

A neural-network-based tool can spot image manipulation at the level of single-pixels


Researchers have demonstrated a new algorithm for detecting so-called deepfake images—those altered imperceptibly by AI systems, potentially for nefarious purposes. Initial tests of the algorithm picked out phony from undoctored images down to the individual pixel level with between 71 and 95 percent accuracy, depending on the sample data set used. The algorithm has not yet been expanded to include the detection of deepfake videos.

Deepfakes “are images or videos that have been doctored—either you insert something into it or remove something out of it—so it changes the meaning of the picture,” says Amit Roy-Chowdhury, professor of electrical and computer engineering at the University of California, Riverside. The challenge arises because it’s done “in a way so that to the human eye it’s not obvious immediately that it has been manipulated.”

In rapidly developing situations, such as a humanitarian crisis, a business’s product launch, or an election campaign, deepfake videos and images could alter how events play out. Imagine a doctored image in which a political candidate was supposedly committing a violent crime, or a doctored video in which a CEO supposedly confesses to concealing safety problems with her company’s signature product line.

Roy-Chowdhury is one of five authors of the deepfake-detecting algorithm, described in a recent paper in IEEE Transactions on Image Processing. He says such detection algorithms could be a powerful tool to fight this new menace of the social media age. But he warns that people also need to be careful not to become over-dependent on these algorithms. An overly trusted detection algorithm that can be tricked could be weaponized by those seeking to spread false information. A deepfake crafted to exploit a trusted algorithm’s particular weaknesses could effectively result in the algorithm blessing the fake with a certificate of authenticity in the minds of experts, journalists, and the public, rendering it even more damaging.

“I think we have to be careful in anything that has to do with AI and machine learning today,” Roy-Chowdhury says. “We need to understand that the results these systems give are probabilistic. And very often the probabilities are not in the range of 0.98 or 0.99. They’re much lower than that. We should not accept them on blind faith. These are hard problems.”

In that sense, he says, deepfakes are really just a new frontier in cybersecurity. And cybersecurity is a perpetual arms race with bad guys and good guys each making advances in often incremental steps.

Roy-Chowdhury says that with their latest work his group has harnessed a set of concepts that already exist separately in the literature, but which they have combined in a novel and potentially powerful way.

One component of the algorithm is a variant of a so-called “recurrent neural network,” which splits the image in question into small patches and looks at those patches pixel by pixel. The neural network has been trained by letting it examine thousands of both deepfake and genuine images, so it has learned some of the qualities that make fakes stand out at the single-pixel level.
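The patch-splitting step can be sketched in a few lines. The 8-pixel patch size and the NumPy layout below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def split_into_patches(image, patch=8):
    """Split an H x W image into non-overlapping patch x patch tiles.

    A per-pixel classifier (such as the recurrent network the article
    describes) would then score each tile's pixels for signs of tampering.
    """
    h, w = image.shape[:2]
    # Trim the edges so the image divides evenly into patches.
    h, w = h - h % patch, w - w % patch
    tiles = (image[:h, :w]
             .reshape(h // patch, patch, w // patch, patch)
             .swapaxes(1, 2))          # shape: (rows, cols, patch, patch)
    return tiles

img = np.zeros((67, 45))               # dummy grayscale image
tiles = split_into_patches(img, patch=8)
print(tiles.shape)                     # (8, 5, 8, 8)
```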

Roy-Chowdhury says the boundaries around the doctored portion of an image are often what contain telltale signs of manipulation. “If an object is inserted, it is often the boundary regions that have certain characteristics,” he says. “So the person who’s tampering with the image will probably try to do it so that the boundary is very smooth. What we found out is that [boundaries in] tampered images were often smoother than in natural images. Because the person who did the manipulation was going out of his way to make it very smooth.”
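The observation that tampered boundaries tend to be smoother than natural ones suggests a simple sanity check. The gradient statistic below is an illustrative proxy for that idea, not the learned feature the network actually uses:

```python
import numpy as np

def boundary_roughness(image, boundary_mask):
    """Mean gradient magnitude over a set of boundary pixels.

    Lower values mean a smoother boundary -- the kind of over-smoothed
    seam the article says often betrays an inserted object.
    """
    gy, gx = np.gradient(image.astype(float))
    grad_mag = np.hypot(gx, gy)
    return grad_mag[boundary_mask].mean()

rng = np.random.default_rng(0)
noisy = rng.normal(size=(32, 32))        # stand-in for natural texture
smooth = np.zeros((32, 32))              # stand-in for an over-smoothed seam
mask = np.zeros((32, 32), dtype=bool)
mask[10:20, 10:20] = True                # pretend this is the object boundary

# The over-smoothed region scores strictly lower than the natural one.
assert boundary_roughness(smooth, mask) < boundary_roughness(noisy, mask)
```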

Another portion of the algorithm, on a parallel track to the part looking at single pixels, passes the whole image through a series of encoding filters—almost as if it were performing an image compression, as when you click the “compress image” box when saving a TIFF or a JPEG. These filters, in a mathematical sense, enable the algorithm to consider the entire image at larger, more holistic levels.

The algorithm then compares the output of the pixel-by-pixel and higher-level encoding filter analyses. When these parallel analyses trigger red flags over the same region of an image, it is then tagged as a possible deepfake.

For example, say that a stock image of a songbird has been pasted onto a picture of an empty tree branch. The pixel-by-pixel algorithm in this case might flag the pixels around the bird’s claws as problematic, while the encoder algorithm might spot patterns in the larger image (noticing, perhaps, other boundary problems or anomalies at the larger-scale level). So long as both of these neural nets flagged the same region of the image around the bird, then Roy-Chowdhury’s group’s algorithm would categorize the bird-and-branch photo as a possible deepfake.
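The agreement rule in the bird-and-branch example can be sketched directly. The score maps, the [0, 1] scale, and the threshold below are hypothetical stand-ins for the two networks’ real outputs:

```python
import numpy as np

def fuse_tracks(pixel_scores, holistic_scores, threshold=0.5):
    """Flag pixels only where BOTH analysis tracks raise a red flag.

    pixel_scores    : per-pixel anomaly map from the patch-level network
    holistic_scores : anomaly map from the encoder (whole-image) track
    Both are assumed to lie in [0, 1]; the threshold is illustrative.
    """
    return (pixel_scores > threshold) & (holistic_scores > threshold)

# Toy example: both tracks flag the same 2 x 2 region.
pixel = np.zeros((4, 4)); pixel[1:3, 1:3] = 0.9
holistic = np.zeros((4, 4)); holistic[1:3, 1:3] = 0.8
holistic[0, 0] = 0.95                   # one track alone is not enough

suspect = fuse_tracks(pixel, holistic)
print(int(suspect.sum()))               # prints 4: only the shared region
```

Requiring both tracks to agree trades some sensitivity for fewer false alarms, which matches the article’s description of tagging a region only when the parallel analyses overlap.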

Roy-Chowdhury says that the algorithm now needs to be expanded to handle video. Such a next-level algorithm, he says, would potentially include how the image evolves frame-by-frame and whether any detectable patterns can be discerned from that evolution in time.

Given the urgency of deepfake detection, as hostile actors around the world increasingly seek to manipulate political events using false information, Roy-Chowdhury encourages researchers to contact his group for code or pointers toward further developing this algorithm for deepfake detection in the wild.