Cable assemblies play an important role in high-density electronic systems. Often dismissed as ‘a couple of connectors and some cable’, they are in fact an essential element of modern engineering design and should not be underestimated.
In this paper, we outline the process of creating cable assemblies for application sectors requiring the highest levels of reliability, such as aerospace, defense, space and motorsport.
There’s been a lot of intense and well-funded work developing chips that are specially designed to perform AI algorithms faster and more efficiently. The trouble is that it takes years to design a chip, and the universe of machine learning algorithms moves a lot faster than that. Ideally you want a chip that’s optimized to do today’s AI, not the AI of two to five years ago. Google’s solution: have an AI design the AI chip.
“We believe that it is AI itself that will provide the means to shorten the chip design cycle, creating a symbiotic relationship between hardware and AI, with each fueling advances in the other,” they write in a paper describing the work, posted today to arXiv.
“We have already seen that there are algorithms or neural network architectures that… don’t perform as well on existing generations of accelerators, because the accelerators were designed like two years ago, and back then these neural nets didn’t exist,” says Azalia Mirhoseini, a senior research scientist at Google. “If we reduce the design cycle, we can bridge the gap.”
Mirhoseini and senior software engineer Anna Goldie have come up with a neural network that learns to do a particularly time-consuming part of chip design called placement. After studying chip designs long enough, it can produce a design for a Google Tensor Processing Unit in less than 24 hours that beats several weeks’ worth of design effort by human experts in terms of power, performance, and area.
Placement is so complex and time-consuming because it involves placing blocks of logic and memory, or clusters of those blocks called macros, in such a way that performance is maximized while power consumption and chip area are minimized. Heightening the challenge is the requirement that all this happen while obeying rules about the density of interconnects. Goldie and Mirhoseini targeted chip placement because, even with today’s advanced tools, it takes a human expert weeks of iteration to produce an acceptable design.
Goldie and Mirhoseini modeled chip placement as a reinforcement learning problem. Reinforcement learning systems, unlike typical deep learning, do not train on a large set of labeled data. Instead, they learn by doing, adjusting the parameters in their networks according to a reward signal when they succeed. In this case, the reward was a proxy measure of a combination of power reduction, performance improvement, and area reduction. As a result, the placement-bot becomes better at its task the more designs it does.
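The reward-driven loop described above can be sketched in a few lines. This is a toy illustration only, not Google's system: the grid model, the wirelength proxy standing in for the power/performance/area reward, and random search standing in for a trained policy network are all assumptions made for the example.

```python
# Toy sketch of reward-driven placement -- illustrative only, NOT Google's
# actual method. All names and numbers here are invented for the example.
import random

GRID = 8  # place macros on an 8x8 canvas

def proxy_reward(placement):
    """Negative cost: shorter total wirelength stands in for the combined
    power/performance/area proxy described in the article."""
    wirelength = sum(
        abs(x1 - x2) + abs(y1 - y2)  # Manhattan distance per chained net
        for (x1, y1), (x2, y2) in zip(placement, placement[1:])
    )
    return -wirelength

def random_placement(n_macros):
    """Drop n_macros onto distinct grid cells."""
    cells = [(x, y) for x in range(GRID) for y in range(GRID)]
    return random.sample(cells, n_macros)

# "Learning by doing": keep the best of many sampled placements. A real
# reinforcement-learning agent would instead update network parameters
# from the reward signal after each attempt.
best = max((random_placement(6) for _ in range(2000)), key=proxy_reward)
print(proxy_reward(best))
```

The key structural point survives the simplification: there is no labeled "correct" placement anywhere, only a scalar reward that improves as the agent tries more designs.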
The team hopes AI systems like theirs will lead to the design of “more chips in the same time period, and also chips that run faster, use less power, cost less to build, and use less area,” says Goldie.
Are you interested in modelling piezoelectric sensors and actuators? If so, then join us for this webinar.
In this webinar, James Ransley from Veryst Engineering will work through a case study in which the properties of a piezoelectric bender actuator are optimized. The material orientation and dimensions are optimized to maximize the efficiency of the bender for the application. The blocking curve is computed and the dimensions are modified to achieve a given specification with minimal actuator volume.
The webinar will also include a live demo of piezoelectric device design using the COMSOL Multiphysics® simulation software. A Q&A session will conclude the webinar.
Optimal crystal orientation and displacement profile (color is proportional to relative displacement) for a piezoelectric bender made from lithium niobate (left) and PZT 5H (right).
People often think that Moore’s Law is all about making smaller and smaller transistors. But these days, a lot of the difficulty is squeezing in the tangle of interconnects needed to get signals and power to them. Those smaller, more dense interconnects are more resistive, leading to a potential waste of power. At the IEEE International Electron Devices Meeting in December, Arm engineers presented a processor design that demonstrates a way to reduce the density of interconnects and deliver power to chips with less waste.
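To see why denser interconnects waste power, it helps to run the numbers on a single wire. The Python sketch below uses invented, order-of-magnitude values for resistivity, geometry, and current, not figures from Arm's presentation:

```python
# Back-of-the-envelope illustration (all values assumed for scale, not
# taken from the article): halving a wire's cross-section doubles its
# resistance, so I^2*R loss doubles for the same delivered current.
def wire_resistance(rho, length, area):
    """R = rho * L / A for a uniform conductor (ohms)."""
    return rho * length / area

rho_cu = 1.7e-8    # ohm*m, bulk copper resistivity (thin wires are worse)
length = 1e-5      # a 10 um route
area_wide = 1e-14  # ~100 nm x 100 nm cross-section
area_narrow = area_wide / 2
current = 1e-3     # 1 mA

for area in (area_wide, area_narrow):
    r = wire_resistance(rho_cu, length, area)
    loss_uw = current**2 * r * 1e6
    print(f"A={area:.1e} m^2  R={r:.1f} ohm  I^2R={loss_uw:.2f} uW")
```

Multiplied across the billions of interconnect segments on a modern die, that scaling is why shrinking wires, unlike shrinking transistors, tends to make power delivery worse rather than better.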
The Application Note explains the main measurement concept, guides the user through the measurements, and covers the main topics in a practical manner. Wherever possible, hints point out where the user should pay particular attention.
Power consumption and heat generation: these hurdles impede progress toward faster, cheaper chips, and are worrying semiconductor industry veterans far more than the slowing of Moore’s Law. That was the takeaway from several discussions about current and future chip technologies held in Silicon Valley this week.
What seems like a simple task—building a useful form of augmented reality into comfortable, reasonably stylish eyeglasses—is going to need significant technology advances on many fronts, including displays, graphics, gesture tracking, and low-power processor design.
That was the message of Sha Rabii, Facebook’s head of silicon and technology engineering. Rabii, speaking at Arm TechCon 2019 in San Jose, Calif., on Tuesday, described a future with AR glasses that enable wearers to see at night, improve overall eyesight, translate signs on the fly, prompt wearers with the names of people they meet, create shared whiteboards, encourage healthy food choices, and allow selective hearing in crowded rooms. This type of AR will be, he said, “an assistant, connected to the Internet, sitting on your shoulders, and feeding you useful information to your ears and eyes when you need it.”
Scientists and engineers in Switzerland and California have come up with a technique that can reveal the 3D design of a modern microprocessor without destroying it.
Typically today, such reverse engineering is a time-consuming process that involves painstakingly removing each of a chip’s many nanometers-thick interconnect layers and mapping them using a hierarchy of different imaging techniques, from optical microscopy for the larger features to electron microscopy for the tiniest features.
The inventors of the new technique, called ptychographic X-ray laminography, say it could be used by integrated circuit designers to verify that manufactured chips match their designs, or by government agencies concerned about “kill switches” or hardware trojans that could have secretly been added to ICs they depend on.
“It’s the only approach to non-destructive reverse engineering of electronic chips—[and] not just reverse engineering but assurance that chips are manufactured according to design,” says Anthony F. J. Levi, professor of electrical and computer engineering at University of Southern California, who led the California side of the team. “You can identify the foundry, aspects of the design, who did the design. It’s like a fingerprint.”
The end of Moore’s Law will drive a renaissance in chip innovation, CEOs say. But the semiconductor industry must face the existential question of power consumption driving climate change.
We have entered a “Renaissance of Silicon.” That was the thesis of a panel that brought together semiconductor industry CEOs at Micron Technology’s San Jose campus last week. This renaissance, the executives indicated, will lead to exciting—but not predictable—innovation in chip technology, driven by applications that demand more computing power and by the demise of Moore’s Law.
“I’ve never seen a more exciting time in my 40 years in the industry,” said Sanjay Mehrotra, CEO of Micron Technology.
“I hadn’t heard semiconductor and Renaissance in the same sentence in 20 years,” said Tammy Kiely, Goldman Sachs global head of semiconductor investment banking. Kiely moderated the panel, which was organized by the Churchill Club.
The driving force behind this renaissance is “burning necessity,” said Xilinx CEO Victor Peng. Arm CEO Simon Segars agreed.
“For [the] last 15 years, the driver of growth was mobile,” Segars said. “Over the last five years, the industry was in a bit of a lull. Then all of a sudden there is this combination of effects.” He listed cloud computing, handheld computing, IoT devices, 5G, AI, and autonomous vehicles as contributing to the boom. “Lots of things are coming together at once,” he said, along with “fundamental algorithm development.”
All these things, Xilinx’s Peng said, mean that the industry will have to come up with a way to improve computing power and storage by a factor of 100—if not 1000—over the next 10 years. That will require new architectures, new packaging—and a new way of looking at the entire ecosystem. “The entire data center is a computer,” he said, pointing out that computation will have to happen all over it, in memory, in switches, even in the communications lines.
Getting a 100- to 1000-times improvement in processing power will also require innovation in software, Peng continued. “People got used to Moore’s law enabling them to throw cycles away to enable abstraction. It’s not that simple anymore…. When you need 100 times [improvement], you don’t evolve the same architecture, you start all over. When Moore’s law was chipping away every year, you didn’t rethink the entire problem. But now you have to look at hardware and software together.”
Concerned About Climate Change
The panelists reminded the audience that it’s also no longer just about making chips better, faster, and cheaper (or these days, as Peng points out, getting one or two of those things at best). The semiconductor industry also has to drive power consumption down.
“Power consumption is an existential question, [considering] climate change,” Peng said, noting that data centers now consume about 10 percent of the world’s electric power. “That cannot scale exponentially and be sustainable.” Getting power consumption down is, he said, “not only a huge business opportunity but a moral imperative.”
Climate change, Segars said, will be a big driver for semiconductor innovation over the next five years. The industry, he said, will have to “create different computing engines to solve things in a more efficient way… [and innovate] on microarchitectures to get power down. In the long term, [we have to] think about workloads. The ultimate architecture might be dedicated engines for different commonly executed tasks that do things in a more efficient way.”
Segars also suggested that we ought to consider the bigger picture when weighing the power costs of computing. “Smart cities,” he said, “may result in more energy getting burned in data centers to process the IoT data, but their net could be energy savings.”
Don’t Expect a Startup Surge
This boom in innovation is unlikely to lead to a boom in startups, the panelists agreed. That may be counter to what we expect from Silicon Valley, but it’s the reality, they indicated.
“The cost of taking a chip out at 5 nanometers is astronomical, so unless you can amortize costs over multiple designs, nobody can afford it,” Segars said. “So we will need the large semiconductor companies to keep progress aggressive. Of course, the more expensive it is, the fewer that can afford to do it. But unlike other industries, like steel, I don’t think innovation is going to dry up.”
“Larger companies have a greater ability to invest in future innovations, to make big bets,” Mehrotra agreed. However, he said, “there are startups that are forming that are working on silicon aspects. Certainly what Simon [Segars] said about increased complexity and the time and money [involved] compared to the past is true.” But, he said, at least some architecture innovation is happening outside the big companies—though “not the way it was when I joined the industry 40 years ago.”
“There has been a lot of VC money going into AI chip companies,” Segars concurred. But he predicted: “Unfortunately, I don’t think we are going back to the days where Sand Hill Road is going to hand out wheelbarrows of money to people to design chips.”
Simulink lets you optimize system behavior by simulating multiple design options from a single environment.
Learn how to:
- Use simulation to develop a digital controller for a DC-DC power converter
- Model passive circuit elements, power semiconductors, power sources, and loads
- Simulate continuous and discontinuous conduction modes
- Simulate power losses and thermal behavior
- Tune controller gains to meet design requirements
- Generate C code for a TI C2000 microcontroller
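The converter behavior covered here rests on standard textbook relations that are easy to sanity-check outside Simulink. Below is a minimal plain-Python sketch of the ideal buck (step-down) converter equations in continuous conduction mode (CCM); the component values are illustrative assumptions, not taken from the webinar.

```python
# Textbook ideal buck-converter relations in continuous conduction mode.
# Component values are illustrative assumptions for the example.
def buck_duty_cycle(v_in, v_out):
    """Ideal CCM duty cycle: Vout = D * Vin, so D = Vout / Vin."""
    return v_out / v_in

def inductor_ripple(v_in, v_out, L, f_sw):
    """Peak-to-peak inductor current ripple in CCM (amps)."""
    d = buck_duty_cycle(v_in, v_out)
    return (v_in - v_out) * d / (L * f_sw)

def ccm_boundary_load(v_in, v_out, L, f_sw):
    """Load current below which the converter enters discontinuous
    conduction mode (DCM): half the peak-to-peak ripple."""
    return inductor_ripple(v_in, v_out, L, f_sw) / 2

v_in, v_out = 12.0, 3.3   # volts (assumed design point)
L, f_sw = 22e-6, 500e3    # 22 uH inductor, 500 kHz switching

print(f"D = {buck_duty_cycle(v_in, v_out):.3f}")
print(f"ripple = {inductor_ripple(v_in, v_out, L, f_sw):.3f} A")
print(f"CCM/DCM boundary = {ccm_boundary_load(v_in, v_out, L, f_sw):.3f} A")
```

These closed-form checks are exactly the quantities a Simulink model of the same converter should reproduce at steady state, which makes them useful for validating the simulation setup before tuning the controller.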