Tag Archives: energy

Startup Aims to Tackle Grid Storage Problem With New Porous Silicon Battery

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/energywise/energy/the-smarter-grid/startup-aims-to-tackle-grid-storage-problem-with-serverracksized-battery

A Canadian company emerges from stealth mode to provide grid-scale energy storage with its high-density battery tech

A new Canadian company with roots in Vermont has emerged from stealth mode and has ambitious plans to roll out a new grid-scale battery in the year ahead. The longshot storage technology, targeted at utilities, offers four times the energy density and four times the lifetime of lithium-ion batteries, the company says, and will be available for half the price.

The new company’s CEO, a former Democratic nominee for governor of Vermont, founded Cross Border Power in the wake of her electoral loss last November. Within days of the election, she was at her computer writing a thesis (since posted on her campaign website) that she boldly calls “[The] North American Solution to Climate Change.”

One of Christine Hallquist’s planks as gubernatorial candidate was to set the Green Mountain State on a path to obtaining 90 percent of its energy from renewable sources by 2050. In the final weeks of the election, the Republican Governors Association attacked Hallquist’s campaign by claiming her vision would raise taxes on Vermonters and hike gasoline prices at the pump.

Today, she might agree that economics may indeed shape the future of renewable energy—but through low prices, not high ones. “I think we’re at the point, especially with our batteries, that renewables are going to be cheaper than any of the fossil fuels,” she says.

A (Very) Close Look at Carbon Capture and Storage

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/energywise/energy/environment/a-very-close-look-at-carbon-capture-and-storage

A material called ZIF-8 swells up when carbon dioxide molecules are trapped inside, new images reveal

A new kind of molecular-scale microscope has been trained for the first time on a promising wonder material for carbon capture and storage. The results, researchers say, suggest a few tweaks to this material could further enhance its ability to scrub greenhouse gases from emissions produced by traditional power plants.

The announcement comes in the wake of a separate study concerning carbon capture published in the journal Nature. The researchers involved in that study found that keeping the average global temperature change to below 1.5 degrees C (the goal of the Paris climate accords) may require more aggressive action than previously anticipated. It will not be enough, they calculated, to stop building new greenhouse-gas-emitting power stations and allow existing plants to age out of existence. Some existing plants will also need to be shuttered or retrofitted with carbon capture and sequestration technology.

An Oft-Struck Mountaintop Tower Gets a New Lightning Sensor

Post Syndicated from Amy Nordrum original https://spectrum.ieee.org/tech-talk/energy/environment/an-oftstruck-mountaintop-tower-gets-a-new-lightning-sensor

Säntis Tower in the Swiss Alps is struck by lightning more than 100 times a year

Atop a rocky peak in the Swiss Alps sits a telecommunications tower that gets struck by lightning more than 100 times a year, making it perhaps the world’s most frequently struck object. Taking note of the remarkable consistency with which lightning hits this 124-meter structure, researchers have adorned it with instruments for a front-row view of these violent electric discharges.

On Wednesday, a small team installed a new gadget near Säntis Tower in their years-long quest to better understand how lightning forms and why it behaves the way it does. About two kilometers from the tower, they set up a broadband interferometer that one member, Mark Stanley of New Mexico Tech, had built back in his lab near Jemez, New Mexico.

“You can’t really go to a company and find an instrument that’s built just for studying lightning,” says Bill Rison, Stanley’s collaborator who teaches electrical engineering at New Mexico Tech. “You have to build your own.”

The one Stanley built has three antennas with bandwidth from 20 to 80 megahertz (MHz) to record powerful electromagnetic pulses in the very high-frequency range that lightning is known to produce. The device also has a fourth antenna to measure sferics, which are low-frequency signals that result from the movement of charge that occurs with a strike or from storm activity within clouds. “Basically, lightning is a giant spark,” Rison explains. “Sparks give off radio waves and the interferometer detects the radio waves.”

To anyone who has witnessed a lightning strike, everything seems to happen all at once. But Stanley’s sensor captures several gigabytes of data about the many separate pulses that occur within each flash. Those data can be made into a video that replays, microsecond by microsecond, how “channels” of lightning form in the clouds.
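As a rough illustration of that replay step, here is a minimal sketch that accumulates located pulses into per-microsecond video frames. The file name, column layout, and one-microsecond frame spacing are illustrative assumptions on my part, not details of the Säntis pipeline:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical pulse list: one row per located VHF pulse, with columns
# time (s), azimuth (deg), elevation (deg). Real data formats differ.
t, az, el = np.loadtxt("flash_pulses.csv", delimiter=",", unpack=True)

frame_us = 1.0                                    # one frame per microsecond
frame = ((t - t.min()) * 1e6 // frame_us).astype(int)

for i in range(frame.max() + 1):
    seen = frame <= i                             # channels persist once formed
    plt.clf()
    plt.scatter(az[seen], el[seen], s=2, c=t[seen], cmap="viridis")
    plt.xlabel("azimuth (deg)")
    plt.ylabel("elevation (deg)")
    plt.title(f"flash replay, t = {i * frame_us:.0f} µs")
    plt.savefig(f"frame_{i:05d}.png")             # stitch the frames into video
```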

By mapping lightning in this way, the Säntis team, which hired Stanley and Rison to haul their interferometer to Switzerland, hopes to better understand what prompts lightning’s “initiation”—that mysterious moment when it cracks into existence.  

So far, measurements have raised more questions than they’ve answered. One sticking point: for a thunderstorm to produce a lightning strike, the electric field within it must build to an intensity on the order of several megavolts per meter. But while researchers have sent balloons into thunderstorms, no one has measured a field beyond 200 kilovolts per meter, about one-tenth of the required value, says Farhad Rachidi of the Swiss Federal Institute of Technology (EPFL), who co-leads the Säntis research team.

“The conditions required for lightning to be started within the clouds never seem to exist based on the measurements made in the clouds,” says Marcos Rubinstein, a telecommunications professor at Switzerland’s School of Business and Engineering Vaud and co-leader of the Säntis team with Rachidi. “This is a big, big question.”

In his own research at New Mexico Tech, Rison has laid some groundwork that could explain how small electric fields can produce such big sparks. In 2016, he and his colleagues published a paper in Nature Communications that described experimental evidence showing that a process known as fast positive breakdown can create a series of streamers, or tiny sparks, and may arise from much stronger local electric fields that occur in small pockets within a storm.

If enough streamers occur in quick succession and in close proximity to one another, they spawn more streamers, adding up to a streamer “avalanche” that turns into positive leaders, or mini-bolts that branch toward the clouds or the ground.

“We haven’t hit any roadblocks yet to say, this is something that isn’t the process for the initiation of lightning,” Rison says. With his evidence in hand, theorists are now trying to explain exactly how and why these fast positive breakdowns occur in the first place.

Meanwhile, the Säntis team wants to adapt a mathematical technique called time-reversal, which was originally pioneered for acoustics, to better understand lightning’s initiation. With this method, they intend to use data gathered by the tower’s many instruments (which include a collection of six antennas called a lightning mapping array, two Rogowski coils to measure current, two B-Dot sensors to measure the current time-derivative, broadband electric and magnetic field sensors, and a high-speed camera) to reconstruct the total path of strikes soon after they happen, tracing the electromagnetic radiation all the way back to its original source.
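In discrete form, the back-projection at the heart of such a method can be sketched as delay-and-sum stacking: scan candidate source points, undo each sensor’s travel-time delay, and look for the point where the records add up coherently. This toy version is my own illustration of the idea, not the team’s algorithm; the sensor geometry, sample rate, and simple stacking criterion are all assumptions:

```python
import numpy as np

C = 3.0e8  # propagation speed of the radiated signal (m/s)

def locate_source(sensors, records, fs, candidates):
    """Delay-and-sum back-projection. For each candidate point, undo the
    travel-time delay to every sensor and stack the records; the stack is
    most coherent (largest peak) when the candidate matches the true source.
    sensors: (M, 3) positions [m]; records: (M, N) sampled waveforms;
    fs: sample rate [Hz]; candidates: (K, 3) trial source points [m]."""
    scores = np.empty(len(candidates))
    for k, p in enumerate(candidates):
        delays = np.linalg.norm(sensors - p, axis=1) / C   # seconds
        shifts = np.round(delays * fs).astype(int)
        stacked = sum(np.roll(records[m], -shifts[m])      # align in time
                      for m in range(len(records)))
        scores[k] = np.max(np.abs(stacked))
    return candidates[np.argmax(scores)], scores
```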

As has been true of past lightning research, their findings may someday inform the design of airplanes or electric grids, and help protect people and equipment against lightning strikes and other sudden power surges. The Säntis team’s work has held particular relevance for wind farm operators. That’s because most strikes recorded at the tower are examples of upward lightning, which travels from ground to cloud instead of from cloud to ground.

Upward lightning originates from tall buildings and structures, which can themselves trigger a bolt that shoots skyward, and the process can damage wind turbines. In 2013, the team published one of the most extensive descriptions to date of this type of flash.

More recently, their work has raised questions about why industry safety certifications for aircraft are based on data about downward strikes rather than upward ones, which aircraft commonly trigger and which cause particular kinds of damage that more closely resemble the lightning damage reported by pilots and mechanics.

By the end of this year, the Säntis team expects to record its 1,000th lightning strike at the tower. And there’s one more elusive scientific matter with massive practical implications they hope to someday resolve. “If we understand how lightning is initiated, we could take a big step forward on one of the other questions we’ve been trying to solve for a long time, and that’s to be able to predict lightning before it happens,” says Rubinstein.  

Best Algorithms to Make Solar Power Storage Profitable

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/energywise/energy/the-smarter-grid/best-algorithms-to-make-solar-power-profitable

Which algorithms are best at integrating solar arrays with electrical grid storage?

By analyzing the algorithms that control the flow of electricity between solar cells and lithium-ion batteries, scientists have identified which types are best suited to governing grid storage of solar power.

A dizzying number of algorithms exist to help manage the flow of electricity between photovoltaic cells and lithium-ion batteries in the most profitable manner. These come in a variety of complexities and have diverse computational power requirements.

“Lithium-ion batteries are expensive components, and photovoltaic plant owners have to pay large amounts of money in order to install lithium-ion batteries on plant,” says study lead author Alberto Berrueta, an engineering researcher at the Public University of Navarre’s Institute of Smart Cities in Pamplona, Spain. “Management algorithms are of capital importance in order to preserve a long lifetime for the batteries to make the most out of the batteries.”

To see which of these algorithms work best at getting the most out of lithium-ion batteries, the researchers developed models based on the power generated over the course of a year by a medium-size solar array of roughly 100 kilowatts located in Navarre. They focused on factors such as computational requirements, the price of electricity, battery life, battery costs, and battery charging and discharging rates.

The researchers looked at three families of algorithms currently used in managing electricity from commercial solar cell arrays: dynamic, quadratic and linear. Dynamic algorithms tackle complex, sequential optimization problems by breaking them down into several simpler sub-problems. Quadratic algorithms each involve at least one squared variable and often find use in calculating areas, computing the profit of a product, and pinpointing the speed and position of an object. Linear algorithms each involve variables that are not squared and have the simplest computational requirements.
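To make the simplest of the three families concrete, here is a minimal sketch of a linear-programming dispatch problem of the kind such controllers solve: choose hourly charge and discharge schedules that maximize revenue against a known price curve. The price curve, solar profile, battery parameters, and single-day horizon are illustrative assumptions, not figures from the study:

```python
import numpy as np
from scipy.optimize import linprog

T = 24                                                   # one-day horizon, hourly
price = 0.10 + 0.08 * np.sin((np.arange(T) - 12) * np.pi / 12)       # EUR/kWh, toy
pv = np.clip(80 * np.sin((np.arange(T) - 6) * np.pi / 12), 0, None)  # kW solar

cap, p_max, eta = 200.0, 50.0, 0.9      # kWh capacity, kW power limit, efficiency

# Decision variables per hour: charge c_t and discharge d_t (both >= 0, kW).
# Revenue = sum_t price_t * (pv_t - c_t + eta * d_t); linprog minimizes,
# so we minimize the negative of the battery's contribution.
cost = np.concatenate([price, -eta * price])

# State of charge SOC_t = sum_{s<=t} (c_s - d_s) must stay within [0, cap]
# (efficiency losses are lumped into the discharge revenue for simplicity).
L = np.tril(np.ones((T, T)))
A_ub = np.block([[L, -L], [-L, L]])
b_ub = np.concatenate([np.full(T, cap), np.zeros(T)])

bounds = [(0, min(p_max, g)) for g in pv] + [(0, p_max)] * T  # charge from PV only

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(f"revenue: {price @ pv - res.fun:.2f} EUR "
      f"(gain from storage: {-res.fun:.2f} EUR)")
```

A quadratic formulation would add squared terms to the objective, for example to penalize high charge rates that shorten battery life, while keeping roughly this structure.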

The scientists found the dynamic algorithms required far more computational power than the other two families of algorithms; as the number of variables grew, they experienced an exponential increase in problem complexity. A commercial PC that would take about 10 seconds to compute the energy flow between the solar cells and lithium-ion batteries using the linear and quadratic algorithms would take 42 minutes with the dynamic algorithms.

Linear algorithms had the lowest computational requirements but suffered in terms of accuracy. For instance, their simplified models did not account for how electrical current can reduce battery lifetime. All in all, the linear algorithms provided an average of 20 percent lower profits than the maximum achievable.

The researchers concluded that quadratic algorithms provided the best trade-off between accuracy and computational simplicity for solar power applications. Quadratic algorithms had about the same low computational requirements as linear algorithms while achieving revenues similar to dynamic algorithms for all battery sizes.

In the future, scientists can investigate which management algorithms might work best with hybrid energy storage systems, Berrueta says. Future research can also investigate which computer models work best at calculating all the factors affecting the lifetime of lithium-ion batteries, including batteries discarded from electric vehicles that might find a second life working in renewable energy plants, he adds.

The researchers detailed their findings at the IEEE International Conference on Environmental and Electrical Engineering in Genoa, Italy, on June 11.

Cosmic Ray Failures of Power Semiconductor Devices

Post Syndicated from ABB Semiconductor original https://spectrum.ieee.org/energy/renewables/cosmic-ray-failures-of-power-semiconductor-devices

Cosmic ray failures are sudden events caused by cosmic particles in devices subjected to a high electric field strength

Introduction
The increased failure rate of traction propulsion converters in the early 1990s led to the recognition of a cosmic ray failure mode for power devices. The famous experiment, in which the failure rate of devices held in a blocking condition in a laboratory was compared to the blocking failure rate in a salt mine, was undertaken by Siemens engineers. The absence of failures in the salt mine supported the hypothesis that cosmic ray particles were the root cause [1]. Further tests executed by ABB on the Jungfraujoch, at 3,580 m above sea level (a.s.l.), confirmed this hypothesis, and additional tests with proton beams made the failure mode reproducible in the laboratory. This led to improved designs and to ruggedness specifications for power semiconductor devices.

Cosmic particles
Primary cosmic particles (typically protons) are generated in remote regions of the universe, e.g., in supernovae. Their energies can be extremely high, several orders of magnitude higher than those of artificially accelerated particles in the most powerful accelerators, such as those at the CERN research centre. But these are not the particles that directly cause device failures on Earth. On their way toward Earth, many particles are deflected by the sun’s and Earth’s magnetic fields, which is why the cosmic particle flux detected on Earth varies with the 11-year solar activity cycle. Particles that do approach Earth interact with the atmosphere, generating showers of new secondary, tertiary, and later-generation particles (protons, neutrons, electrons, and so on). Up to an altitude of 10,000 to 15,000 m, the generation of particles dominates, whereas at altitudes nearer Earth, absorption of particles dominates. At the surface, the x-th generation of the initial primary cosmic particles can be detected (terrestrial cosmic rays). A typical flux is 20 neutrons per cm² per hour at sea level in New York; see Figure 1.

From this description, one can conclude that the flux of terrestrial cosmic particles depends on altitude. Dependencies on latitude, due to the influence of Earth’s magnetic field, and on the actual solar activity can be neglected for a first-order estimation.

Because the atmosphere absorbs cosmic particles, other materials can also serve as shields. For example, a 45-cm layer of concrete reduces the intensity of cosmic neutrons by half [3]. But a significant shielding effect would require heavy shielding, which is not an option for many applications.
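That halving thickness makes the scale of the problem easy to check: a factor-of-100 reduction takes roughly three metres of concrete. A quick sketch of the arithmetic (the exponential form follows directly from the halving behaviour quoted above):

```python
# Attenuation of the cosmic neutron flux by concrete, using only the
# halving thickness quoted above (45 cm per factor of 2).
def concrete_attenuation(thickness_cm: float, halving_cm: float = 45.0) -> float:
    """Fraction of the neutron intensity remaining behind the shield."""
    return 0.5 ** (thickness_cm / halving_cm)

for t in (45, 90, 150, 300):
    print(f"{t:4d} cm of concrete -> {concrete_attenuation(t):.3f} of the flux")
```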

Failure mode

Most cosmic particles pass through a semiconductor device without any interaction. With a certain probability, however, a cosmic particle interacts with the nucleus of a silicon atom in the device. The particle’s energy then displaces the struck atom and may generate new particle species. Although this creates a microscopic defect in the silicon crystal, for a non-operating device the concentration of such defects accumulated even over the device’s lifetime would not lead to measurable degradation under typical irradiation conditions. The situation can be different for operating devices. Logic devices and memories typically store their information as a small charged capacitor; here the deposited energy of a cosmic particle may cause a bit flip and therefore a loss of information. In devices that support an electric field, the deposited energy may create a local charge cloud that is amplified by the field, and a short current pulse may be detected at the biased device; this effect is exploited in particle detectors for physics experiments to identify and count high-energy particles. In devices that support a high electric field, such as power semiconductor devices, the deposited energy may lead to the formation of a streamer, a conducting pipe through the blocking device (see Figure 2). In such a case the device may be destroyed, as shown in Figure 3.

Device failures due to cosmic rays are sudden events, without any precursor; they are therefore often called ‘Single Event Burnout’ (SEB). The probability of a failure depends on the intensity of the cosmic irradiation (and therefore on altitude and shielding, as explained above) and, strongly, on the electric field strength and distribution within the power semiconductor device (and therefore on the applied blocking voltage and the device design). Other influencing parameters are device temperature and beam direction.

Testing

To test semiconductor devices for cosmic ray ruggedness, data must be acquired within a reasonable testing time. This can be achieved by:

  • Testing many devices in parallel to get a higher probability of failure events during the test time.

  • Increasing the sensitivity of the tested devices by operating them during the test at higher blocking voltages than in typical applications; the failure rate for a typical application then needs to be extrapolated. This method was used at the beginning of cosmic ray ruggedness investigations.

  • Accelerating testing by increasing the cosmic ray flux. As noted above, the intensity of cosmic particles increases with altitude up to 10,000 to 15,000 m a.s.l. In the early days of cosmic ray ruggedness investigations of power semiconductor devices, this effect was used to reduce test time: ABB operated a test lab at the High Altitude Research Station on the Jungfraujoch, in the Swiss Alps at 3,580 m, see Figure 4. At this altitude the intensity of cosmic particles is approximately a factor of 10 higher than at sea level.

But even with such acceleration, testing times are still on the order of several months to years. This is too long, especially for verification testing during the development phase of semiconductors, which involves several learning cycles.
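A back-of-the-envelope sketch shows why. Failure rates of power devices are commonly quoted in FIT (failures per 10⁹ device-hours); the 100-FIT target and 50-device test population below are illustrative assumptions, while the factor of 10 is the Jungfraujoch altitude acceleration mentioned above:

```python
# Rough sketch of the expected waiting time per cosmic ray failure event.
# FIT = failures per 1e9 device-hours. The 100-FIT rate and 50-device
# population are illustrative assumptions; the 10x flux acceleration is
# the Jungfraujoch figure quoted above.
fit_rate = 100          # assumed failure rate of one device at sea level [FIT]
n_devices = 50          # devices tested in parallel
altitude_boost = 10     # flux acceleration at 3,580 m a.s.l.

failures_per_hour = n_devices * altitude_boost * fit_rate / 1e9
hours_per_failure = 1 / failures_per_hour
print(f"expected time per failure event: {hours_per_failure:.0f} h "
      f"= {hours_per_failure / 24 / 30:.1f} months")
```

Even under these accelerated conditions, a single failure event takes many months to appear, which is why a statistically meaningful sample stretches to years.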

A much faster way to obtain the relevant data is to use artificial particle beams, such as neutron or proton beams. Studies have shown that the failure rates generated by natural cosmic particles correspond very well to data generated in artificial particle beams, see Figure 5. The test time for such a setup shrinks to minutes.

Specification of device ruggedness

Some power semiconductor suppliers specify the cosmic ray ruggedness of their devices either in the data sheet or in an application note, e.g., [6]. The parameters governing the failure probability, such as applied bias, junction temperature, and altitude, are given for an unshielded device. This helps users estimate the failure rate of a power semiconductor under their individual application conditions and choose the right device.

References

[1] H. Kabza, H.-J. Schulze, Y. Gerstenmaier, P. Voss, J. Wilhelmi, W. Schmid, F. Pfirsch, and K. Platzöder, “Cosmic Radiation as a Cause for Power Device Failure and Possible Countermeasures,” in Proc. 6th International Symposium on Power Semiconductor Devices & ICs, Davos, Switzerland, 1994.

[2] O. C. Allkhofer and P. K. F. Grieder, “Cosmic Rays on Earth,” Physics Data, vol. 25, no. 1, 1984.

[3] J. F. Ziegler, “Terrestrial cosmic rays,” IBM J. Res. Develop., vol. 40, no. 1, pp. 19–39, 1996.

[4] C. Findeisen, E. Herr, M. Schenkel, R. Schlegel, and H. Zeller, “Extrapolation of cosmic ray induced failures from test to field conditions for IGBT modules,” Microelectronics Reliability, vol. 38, pp. 1335–1339, 1998.

[5] H. Zeller, “Cosmic ray induced breakdown in high voltage semiconductor devices, microscopic model and phenomenological lifetime prediction,” in Proc. 6th International Symposium on Power Devices & ICs, Davos, Switzerland, 1994.

[6] “Failure rates of IGCTs due to cosmic rays,” ABB Application Note 5SYA 2046-03, 2014.


Transmission Failure Causes Nationwide Blackout in Argentina

Post Syndicated from Amy Nordrum original https://spectrum.ieee.org/energywise/energy/the-smarter-grid/transmission-failure-causes-nationwide-blackout-in-argentina

Preliminary reports suggest problems with several 500-kilovolt transmission lines disrupted the flow of electricity from two dams to Argentina’s grid

A preliminary company memo suggests that problems with at least two 500-kilovolt transmission lines were the proximate cause of nationwide blackouts in Argentina on Sunday 16 June. The lines connect a pair of hydroelectric dams to Argentina’s grid. Parts of Brazil, Paraguay, and Uruguay also experienced power outages, though the total number of people affected is not yet clear. 

Government authorities have not yet determined what caused the disconnect and investigations are ongoing. Officials are expected to issue a more comprehensive report within 10 days.

In a statement on Sunday morning, the Secretariat of Energy attributed the blackouts, which began at 7:07 AM local time, to the “collapse of the Argentine Interconnection System (SADI).” The SADI is a high-voltage transmission network operated by Transener that transports electricity from generators, including power plants and dams, to distribution networks that serve tens of millions of customers.

According to a public statement by Edesur, one of Argentina’s largest electricity distributors, the failure occurred along a critical route of Argentina’s interconnection system that supplies the nation’s grid with power generated by the Yacyreta Dam in Paraguay and the Salto Grande Dam on the Uruguay River.

3D-Printed Semiconductor Cube Could Convert Waste Heat to Electricity

Post Syndicated from Tracy Staedter original https://spectrum.ieee.org/energywise/energy/renewables/3dprinted-semiconductor-efficiently-converts-heat-to-electricity

Here’s how these cuboids could harness waste heat from steel plants

From his office at Swansea University in the United Kingdom, associate professor Matthew Carnie has a good view of Tata Steel’s furnace stacks. To some, those chimneys rising over Port Talbot are unsightly. To Carnie, they’re an opportunity. They emit a good portion of the plant’s waste heat, which overall has the same power output as some nuclear plants, says Carnie—around 1,300 megawatts, according to his calculations.

With that much potential power waiting to be captured, Carnie and his research team have developed a hybrid, 3D-printed semiconductor material that converts waste heat into electricity. It’s 50 percent more efficient than screen-printed lead telluride, another inexpensive semiconductor material, and it could be assembled cheaply into a device that converts up to 10 percent of the heat it receives into electricity.
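Some rough arithmetic shows the scale of the opportunity. Using the roughly 1,300 megawatts of waste heat cited above and the 10 percent conversion ceiling, and assuming (my illustrative figure, not Carnie’s) that a fifth of the heat could actually be routed through such devices:

```python
# Back-of-the-envelope sketch using the figures quoted above: ~1,300 MW of
# waste heat, tapped by a generator that converts up to 10 percent of the
# heat it captures. The 20 percent capture fraction is an illustrative
# assumption, not a figure from the Swansea team.
waste_heat_mw = 1300
capture_fraction = 0.20      # assumed share of waste heat routed through devices
conversion_eff = 0.10        # upper bound quoted for the printed material

recovered_mw = waste_heat_mw * capture_fraction * conversion_eff
print(f"recovered electric power: {recovered_mw:.0f} MW")   # -> 26 MW
```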

“Ideally, they could be deployed in areas where there is high-grade waste heat and be used to generate power to help with energy efficiency,” says Carnie. With one-sixth of all energy used by industry in the United Kingdom pouring into the atmosphere as waste heat, the possibilities are big, he says.

Do You Know Your Oscilloscope’s Signal Integrity?

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/do-you-know-your-oscilloscopes-signal-integrity

Ebook: How to Determine Oscilloscope Signal Integrity

High oscilloscope signal integrity is critical but often misunderstood! Whether you are debugging your latest design, verifying compliance against an industry standard, or decoding a serial bus, it is important that your oscilloscope displays a true representation of your signal. Learn how to verify that your instrument has the high signal integrity you need with the “How to Determine Oscilloscope Signal Integrity” eBook.


A Glass Battery That Keeps Getting Better?

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/energywise/energy/environment/a-glass-battery-that-keeps-getting-better

A prototype solid-state battery based on lithium and glass faces criticism over claims that its capacity increases over time

Is there such a thing as a battery whose capacity to store energy increases with age? One respected team of researchers says it has developed just such a technology. Controversy surrounds the claim, however, in part because thermodynamics would seem to demand that a battery only deteriorate over many charge-discharge cycles.

The researchers have a response for that critique and continue to publish peer-reviewed papers about this work. If such claims came from almost any other lab, they might be ignored and shunned by the broader community of battery researchers, the same way physicists turn their noses up at anything that smacks of a perpetual motion machine.

But this lab belongs to one of the most celebrated battery pioneers today—and one of the inventors of the lithium-ion battery itself. John Goodenough, who at 96 continues to research and publish like scientists one-third his age, last year joined with three co-authors in publishing a paper that grabbed headlines. (Spectrum had profiled him and his battery technology the year before, following an initial announcement about his group’s new glass battery.)

Parts of the Navajo Nation Are Still Off the Grid—but That’s Changing

Post Syndicated from Maria Gallucci original https://spectrum.ieee.org/energy/fossil-fuels/parts-of-the-navajo-nation-are-still-off-the-gridbut-thats-changing

A six-week pilot program connected 200 homes, but more than 15,000 remain without electricity

David Hefner and his crew rumbled toward Arizona in bucket trucks, digger derricks, and vehicles full of materials. The Oklahoma linemen typically drive their fleet to storm-ravaged communities after hurricanes and tornadoes disrupt power for days. But when the team set off in April, it wasn’t to repair battered poles and wires. Instead, they helped bring light to homes left in the dark for generations.

About 60,000 people in the Navajo Nation—a vast swath of high plains and desert in Arizona, New Mexico, and Utah—still can’t access the electric grid from their homes. Thousands more lack running water. In recent years, the Navajo Tribal Utility Authority (NTUA) has doubled down on efforts to extend power lines, build substations, and provide residents with off-grid renewable energy units. Now, public utilities across the United States are pitching in to accelerate the country’s longest-running rural electrification campaign.

“We have our own American people right here in our backyard that don’t have what we consider the modern necessities,” said Hefner, the distribution power line superintendent at Grand River Dam Authority, a nonprofit utility in Vinita, Okla. “We wanted to be a part of helping build this infrastructure.”

For six weeks in April and May, about 125 volunteers from two dozen utilities partnered with Navajo crews and met with families through Light Up Navajo, an initiative by NTUA and the American Public Power Association (APPA). In the coming months, organizers will assess how to replicate the program in years to come.

Teams in the pilot session installed poles, transformers, lines, and meters to connect more than 200 houses to the grid—including the home of an elderly man who planned to buy his first refrigerator. NTUA itself has connected about 4,900 homes in the past 10 years, though the work remains costly and painstakingly slow, said Walter Haase, the general manager.

The utility spends about US $40,000 on average to hook one home up to the grid, including thousands of dollars in fees to use a federal right-of-way (since the reservation is on federal land). Homeowners must pay more than $3,000 to wire their houses and connect electric meters—a considerable expense. The average NTUA customer pays about $630 a year for electricity, which is not nearly enough for the utility to recoup its infrastructure costs.

At the current pace, NTUA says it will take 40 years to connect the remaining 15,000 off-grid homes, or about a third of the houses scattered throughout the 70,000-square-kilometer reservation. “That’s just too long to wait,” Haase said from his office in Fort Defiance, Ariz.

Under the Rural Electrification Act of 1936, the U.S. government provided financial incentives to help utilities and newly formed cooperatives bring electricity to far-flung farms and towns. Yet the movement largely bypassed Native American lands. Along with Navajo households, thousands of Hopi families in Arizona and numerous Alaska Native households still aren’t connected to the grid.

The Navajo Nation formed NTUA in 1959 to address this oversight. The utility’s first large solar plant, the 27.3-megawatt Kayenta facility, came on line in 2017. A 28-MW addition is slated for completion in June. Revenues from the solar electricity help fund the utility’s rural electrification efforts.

Haase said the idea to partner with outside utilities came after a rash of extreme weather blasted grids in Puerto Rico, Texas, and Florida. Workers arrived in droves to restore power through mutual-aid agreements. Haase recently chaired APPA’s board, and members frequently asked about using a similar approach for the Navajo effort. APPA awarded the utility a $125,000 grant to design and launch the pilot program, and NTUA and volunteers are raising more money through GoFundMe campaigns.

Through a multimillion-dollar project with the U.S. Department of Energy, the utility has also provided hundreds of off-grid units to Navajo families, including hybrid models that combine an 880-watt solar array, a 400-W wind turbine, and a small battery bank.

The units supply a few hours’ worth of electricity in the evening. For elderly couples or people living alone, this can be sufficient. But large families and younger residents, accustomed to round-the-clock power off the reservation, tend to use more electricity than the units are designed to support. In those cases, grid power makes more sense, said Sandra Begay, an engineer and Navajo Nation member who helped facilitate the project for Sandia National Laboratories.
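For a sense of scale, here is a rough sketch of the daily energy such a hybrid unit might deliver. The solar and wind resource figures are my assumptions for illustration, not NTUA or Sandia data:

```python
# Rough daily-energy sketch for a hybrid off-grid unit like the one
# described above. The rated outputs (880 W solar, 400 W wind) come from
# the text; the sun hours and wind capacity factor are illustrative
# assumptions for the high-desert Southwest.
solar_w, wind_w = 880, 400            # rated outputs quoted above
sun_hours = 5.5                       # assumed equivalent full-sun hours/day
wind_capacity_factor = 0.15           # assumed small-turbine capacity factor

daily_kwh = (solar_w * sun_hours + wind_w * 24 * wind_capacity_factor) / 1000
print(f"~{daily_kwh:.1f} kWh/day")    # ~6.3 kWh: a few evening hours of use
```

A few kilowatt-hours a day is consistent with the “few hours’ worth of electricity in the evening” described above, and well short of round-the-clock household demand.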

She said the rural electrification efforts aren’t intended to push modern infrastructure on Navajo families, but rather to give them the same access enjoyed by residents in the rest of the country. “It’s really about choice,” Begay said. “I don’t ever want to have it where somebody doesn’t have a choice.”

This article appears in the June 2019 print issue as “Plugging in the Navajo Nation.”

ABB & Siemens Test Subsea Power Grids for Underwater Factories

Post Syndicated from Amy Nordrum original https://spectrum.ieee.org/energy/fossil-fuels/abb-siemens-test-subsea-power-grids-for-underwater-factories

Putting a power-distribution station on the ocean floor could allow more raw materials to be processed down there

Slowly but surely, oil- and gas-drilling technology is migrating from floating platforms to the seafloor. Pumps moved down there decades ago. More recently, compressors (which boost pressure in a well to keep gas flowing) and separators (which isolate oil from water and silt) have relocated to the murky depths.

Putting this equipment closer to wells makes them more productive and energy efficient. Some oil and gas companies even aspire to build subsea factories that extract and process oil and natural gas directly on the seafloor. These factories would be safe from hazards such as icebergs and hurricanes. They would be controlled remotely, reducing labor costs. Eventually, some believe, offshore platforms could be phased out entirely.

However, all of this sunken gear requires electricity. Today, operators typically string power lines from power plants or diesel generators aboard nearby oil rigs to every piece of subsea equipment they install. That works for a few machines, but it’s impractical to string dozens of umbilicals, as they’re known, to the ocean floor.

Industry suppliers ABB and Siemens are now putting the finishing touches on competing versions of the world’s first subsea power-distribution stations. Once installed, these stations would connect via a single line to a “topside” (maritime parlance for above water) generator, wind turbine, or power plant, and redistribute electricity to underwater equipment. “Our technology is an enabling technology for the subsea factory,” says Bjørn Rasch, head of subsea power for Siemens.

Both projects have been in the works for more than five years. ABB will complete its final round of testing in June and expects to install its first subsea power system in 2020. Siemens tested its version in shallow water in Norway last November and is now talking with clients about putting its first unit in the field. “We’re getting close to where we’re actually deploying this technology in a real project,” Rasch says.

Siemens’s model, which the company calls its Subsea Power Grid, consists of a transformer, a medium-voltage switchgear, and a variable-speed drive. Its distribution voltage is around 30 kilovolts, while its variable-speed drive puts out 6.6 kV. The system can provide electricity to devices with power ratings between 1 and 15 megawatts. The umbilical that hooks it to a generation station also includes an embedded fiber-optic cable so operators can run everything from afar.

One of the hardest parts of building the station, Rasch says, was ensuring it could withstand the high water pressure of the seafloor. Instead of encasing all the equipment in a pressurized chamber, engineers flooded the electronics with a synthetic fluid called Midel. This biodegradable fluid inside the equipment maintains the same pressure as the seawater, which alleviates stress. The fluid also passively cools the device by transferring heat from equipment to the chilly seawater.

Chevron, Eni Norge, Equinor, and ExxonMobil have all worked with Siemens to get the company’s project this far. The next step for both ABB and Siemens will be to deliver the first model for installation at an active production site.

Brian Skeels, professor of subsea engineering at the University of Houston and director of emerging technology for the offshore design and consulting firm TechnipFMC, has seen many attempts to “marinize” technologies to work underwater. Dealing with heat is a common stumbling block. If water can’t flow freely around a device, the heat it generates prompts marine life to grow on the equipment, which shortens its life-span. And, Skeels cautions, “what may work in shallow water may not work at deeper depths.”

Both systems are expected to work at depths of up to 3,000 meters and operate for 30 years with minimal maintenance. At the end of their lives, the units can be removed from the seafloor.

A power-distribution center would be just one piece of any future subsea factory—a vision that has captivated the industry for more than a decade. Skeels says the future of subsea processing will depend largely on whether such projects can add more value to the industry than they drain in expense. Investment into subsea processing dried up when oil prices crashed in 2014. Looking ahead, Skeels thinks the technology holds the most potential for remote wells more than 160 kilometers from other facilities.

Hani Elshahawi, digitalization lead for deepwater technologies at Shell, says there are clear benefits to having power readily available on the seafloor. But he doesn’t think subsea factories will supplant all platform activities, or replace any of them in the near future. “It will require decades, in my view,” he says. “We foresee a more gradual and lengthy transition.”

To Rasch at Siemens, though, the industry’s vision of subsea factories does not seem as far out as it once did. “There are many technologies in many companies that are in place or close to being in place,” he says. “This can be realized in the close future, that’s for sure.”

This article appears in the June 2019 print issue as “ABB and Siemens Test Subsea Power Grids.”

Atom Power Is Launching the Era of Digital Circuit Breakers

Post Syndicated from Prachi Patel original https://spectrum.ieee.org/energywise/energy/the-smarter-grid/atom-power-is-launching-the-era-of-digital-circuit-breakers

New digital circuit breakers that combine computing power with wireless connectivity may finally replace a 140-year-old electromechanical technology

In the dark, dank depths of your home basement hangs a drab gray box that guards the building’s electrical circuits. The circuit breakers inside switch off current flow when there is risk of an overload or short circuit, keeping you safe from fires or electrocution. It’s a critical job, and one that breakers have been doing with a fairly simple, 140-year-old electromechanical technology.

But circuit breakers are about to get a digital overhaul. New semiconductor breakers that combine computing power and wireless connectivity could become the hub of smart, energy-efficient buildings of the future.

“It’s like going from a telephone that just makes calls to a smartphone with capabilities we’d never imagined before,” says Ryan Kennedy, CEO and co-founder of Atom Power in Charlotte, North Carolina. “This is a platform that changes everything in power systems.”

Digital circuit breakers have been a holy grail in power engineering circles. Atom Power has now become the first to earn certification from Underwriters Laboratories (UL) for its product, which combines breakers based on silicon carbide transistors with software. While UL approval isn’t legally required, it’s the industry safety standard for commercial use.

Ionic Materials Explores Plastic Electrolyte for Lithium-Ion Batteries

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/energywise/energy/renewables/nextgen-battery-tech-iteration-rather-than-disruption

Replacing a liquid electrolyte with a plastic one could lead to lithium-ion batteries that are safer and more energy dense

Better batteries for electric cars and grid energy storage may be just one revolution away—whether in fuel cells or flow batteries or supercapacitors. But there’s a company in Massachusetts that’s betting the evolution of existing technology, rather than a revolution, will determine how we power future EVs and store renewable energy.

“Lithium ion has this massive scale,” says Erik Terjesen, senior director of licensing and strategy for Ionic Materials, based in the city of Woburn. “The people who build lithium-ion factories—the LGs, the CATLs of the world—are building massive capacity for lithium-ion.” These billions of dollars already invested, Terjesen says, represent inertia that will resist revolutionary new battery technologies—especially if lithium technology can offer more energy storage, safer products, lower prices, and be made in existing factories.

As IEEE Spectrum profiled in 2018, Ionic Materials is developing a plastic, solid-state electrolyte to sit between a rechargeable battery’s anode and cathode. The electrolyte acts as the conduction medium through which lithium ions flow from anode to cathode and back again—providing the basis for many charge-discharge cycles in the battery’s lifetime.

The most effective and resilient electrolytes in lithium-ion batteries to date have been liquids, which conduct ions well but do nothing to keep the cathode and anode from ever touching. This has been the job of a thin plastic membrane with tiny micron-sized holes in it called the separator, which allows lithium ions to pass through.

The problems come when there are manufacturing defects, tears in the separator, a puncture in the battery, or a growth of stalactite-like “dendrites” bridging cathode and anode through the separators. In all those cases, a short circuit could result. That is where the other downside of the liquid electrolyte comes in: It’s highly flammable. Which is why there have been reports of the (rare) exploding laptop, smartphone, and EV.

Ionic Materials’ solid-state electrolyte is, of course, its own separator. And it’s non-flammable.

“From the dialogues we’re having with electric vehicle OEMs, it’s exciting for them to have an inherently safe battery in their cars that works in the same way,” Terjesen says. “We’re not talking about a new design. We’re not talking about a new cell format. It fits into their world today.”

Since 2018, Terjesen says, the company has been establishing partnerships and announcing investors, including the French oil and gas company Total, A123 Systems, Dyson, Samsung, Renault-Nissan-Mitsubishi, and Volta Energy Technologies.

“We are aware of the fact that there is obviously a lot of hype that comes with the battery industry in general,” Terjesen says. “So our CEO Mike Zimmerman says you really need to prove what you’re saying, rather than just making claims.”

The areas the company is now most carefully investigating around their polymer electrolyte, he says, are safety, energy density, and cost.

The first two, he says, go hand in hand. The greater a battery’s energy density, the more the electrolyte’s safety matters. “We think our polymer can work with more energy dense anode and cathode combinations,” he says. “As people try to squeeze all the energy they can out of these cells, by default, the cell will become more volatile. We think the safety question will only continue to increase as you look at these higher-energy chemistries.”

The question of price, Terjesen says, is also important. In 2010, the industry produced batteries costing some US $1,200 per kilowatt-hour. By 2014, that price had fallen to $600/kWh. As of last year, it was south of $200/kWh. And now, Terjesen says, many industry players are trying to get below $100/kWh. (Ionic Materials does not release data on its cost or ability to enable battery companies to drive their unit cost down.)
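Those milestones imply a remarkably steady decline. A quick sketch of the arithmetic, using only the prices quoted above:

```python
# Implied average annual price decline from the quoted milestones:
# $1,200/kWh in 2010, $600/kWh in 2014, about $200/kWh in 2018.
milestones = [(2010, 1200), (2014, 600), (2018, 200)]
for (y0, p0), (y1, p1) in zip(milestones, milestones[1:]):
    rate = (p1 / p0) ** (1 / (y1 - y0)) - 1
    print(f"{y0}-{y1}: {rate:+.1%} per year")
# 2010-2014: about -16% per year; 2014-2018: about -24% per year
```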

“Getting below ($100/kWh) will be challenging, because the fundamental materials themselves are commodities. And the raw materials themselves have a certain price,” he says.

For instance, cobalt is both expensive and controversial, with much of its global reserves found in the Democratic Republic of the Congo—where corruption and disputed labor practices have led Elon Musk to swear off the mineral in Tesla’s future-generation cars.

“We’ve learned that cobalt is often used in these cells as a stabilizing agent,” Terjesen says. “So if we can create greater safety with our material, it opens the door for the potential to reduce or eliminate the cobalt.”

However, Terjesen says Ionic Materials is ultimately chemistry-agnostic. They do not even build batteries. The company only provides the solid-state electrolyte for battery-makers to develop whatever next-generation solid-state batteries the market will bear.

“There isn’t a single chemistry that we’re betting on,” he says. “We’re not going to the market and saying—you have to do this chemistry or that chemistry. We have multiple chemistries that we’re working on with multiple partners with our polymer.”

In other words, Ionic Materials is trying not to disrupt an industry accustomed to disruption.

“Most people who look at solid-state [batteries] think, it’s not a disruptor of lithium ions,” Terjesen says. “It’s the next phase of lithium ions.”

AI, Drones Survey Great Barrier Reef in Last Ditch Effort to Avoid Catastrophe

Post Syndicated from John Boyd original https://spectrum.ieee.org/tech-talk/energy/environment/how-to-keep-a-close-eye-on-australias-great-barrier-reef

An Australian research team is using tech to monitor global climate change’s assault on the world’s largest living organism

The stats are daunting. The Great Barrier Reef is 2,300 kilometers long, comprises 2,900 individual coral reefs, and covers an area greater than 344,000 square kilometers, making it the world’s largest living organism and a UNESCO World Heritage Site. 

A team of researchers from Queensland University of Technology (QUT) in Brisbane is monitoring the reef, located off the coast of northeastern Australia, for signs of degradation such as the bleaching caused by a variety of environmental pressures, including industrial activity and global warming.

The team, led by Felipe Gonzalez, an associate professor at QUT, is collaborating with the Australian Institute of Marine Science (AIMS), an organization that has been monitoring the health of the reef for many years. AIMS employs aircraft, in-water surveys, and NASA satellite imagery to collect data on a particular reef’s condition. But these methods have drawbacks, including the relatively low resolution of satellite images and high cost of operating fixed-wing aircraft and helicopters.

So Gonzalez is using an off-the-shelf drone modified to carry both a high-resolution digital camera and a hyperspectral camera. The monitoring is conducted from a boat patrolling the waters 15 to 70 km from the coast. The drone flies 60 meters above the reef, and the hyperspectral camera captures reef data up to three meters below the water’s surface. This has greatly expanded the area of coverage and is helping to verify AIMS’s findings.

The digital camera is used to build up a conventional 3D model of an individual reef under study, explains Gonzalez. But a conventional camera captures light in only three spectral channels (red, green, and blue) within the 380-to-740-nanometer visible portion of the electromagnetic spectrum. The hyperspectral camera, by contrast, collects the reflected light of 270 spectral bands.

“Hyperspectral imaging greatly improves our ability to monitor the reef’s condition based on its spectral properties,” says Gonzalez. “That’s because each component making up a reef’s environment—water, sand, algae, etc.—has its own spectral signature, as do bleached and unbleached coral.”
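To illustrate the signature-matching idea, here is a minimal sketch that classifies each pixel of a hyperspectral cube by its spectral similarity to a set of reference spectra. The image size, class list, and random stand-in data are illustrative assumptions; the QUT team’s actual pipeline (and its AI components) will differ:

```python
import numpy as np

# Per-pixel classification by spectral signature, using cosine similarity
# (equivalently, the spectral angle) between each pixel's spectrum and a
# library of reference spectra. 270 bands matches the camera described
# above; everything else here is a stand-in.
bands = 270
cube = np.random.rand(512, 512, bands)            # stand-in hyperspectral image
refs = {"water": np.random.rand(bands),           # stand-in library signatures
        "sand": np.random.rand(bands),
        "algae": np.random.rand(bands),
        "bleached coral": np.random.rand(bands),
        "unbleached coral": np.random.rand(bands)}

names = list(refs)
R = np.stack([refs[n] for n in names])            # (classes, bands)
flat = cube.reshape(-1, bands)
cos = (flat @ R.T) / (np.linalg.norm(flat, axis=1, keepdims=True)
                      * np.linalg.norm(R, axis=1))
labels = np.array(names)[cos.argmax(axis=1)].reshape(512, 512)
print(dict(zip(*np.unique(labels, return_counts=True))))
```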

But this expansion in reef coverage and richness of gathered data presented the team with a new challenge. Whereas AIMS divers can gather information on 40 distinct points on a reef in an underwater session, just one hyperspectral image presents more than 4,000 data points. Consequently, a single drone flight can amass a thousand gigabytes of raw data that has to be processed and analyzed. 

In processing the data initially, the team used a PC, custom software tools, and QUT’s high-performance computer, a process that took weeks and drew heavily on the machine’s run time.

So the team applied for and received a Microsoft AI for Earth grant, which makes software tools, cloud computing services, and AI deep learning resources available to researchers working on global environmental challenges. 

“Now we can use Microsoft’s AI tools in the cloud to supplement our own tools and quickly label the different spectral signatures,” says Gonzalez. “So, where processing previous drone sweeps used to take three or four weeks, depending on the data, it now takes two or three days.”

This speedup in data processing is critical. If it took a year or more for the team to tell AIMS that a certain part of the reef was degrading rapidly, it might be too late to save it.

“And by being informed early, the government can then take quicker action to protect an endangered area of the reef,” Gonzalez adds.

He notes that the use of hyperspectral imaging is now a growing area of remote sensing in a variety of fields, including agriculture, mineral surveying, mapping, and location of water resources.

For example, he and colleagues at QUT are also using the technology to monitor forests, wheat crops, and vineyards that can be affected by pathogens, fungi, or aphids.

Meanwhile, over the next two months, Gonzalez will continue processing the spectral data collected from the reef so far; and then in September, he will start a second round of drone flights. 

“We aim to return to the four reefs AIMS has already studied to monitor any changes,” he says, “then extend the monitoring to new reefs.”

[$] Notes from the 2nd Operating-System-Directed Power-Management Summit

Post Syndicated from corbet original https://lwn.net/Articles/754923/rss

The second Operating-System-Directed Power-Management (OSPM18) Summit took place at the ReTiS Lab of the Scuola Superiore Sant’Anna in Pisa between April 16 and April 18, 2018. Like last year, the summit was organized as a collection of collaborative sessions focused on trying to improve how operating-system-directed power management and the kernel’s task scheduler work together to achieve the goal of reducing energy consumption while still meeting performance and latency requirements. Read on for an extensive set of notes collected by a number of the participants in the summit.

The Helium Factor and Hard Drive Failure Rates

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/helium-filled-hard-drive-failure-rates/

[Image: Seagate Enterprise Capacity 3.5 Helium HDD]

In November 2013, the first commercially available helium-filled hard drive was introduced by HGST, a Western Digital subsidiary. The 6 TB drive was not only unique in being helium-filled, it was for the moment, the highest capacity hard drive available. Fast forward a little over 4 years later and 12 TB helium-filled drives are readily available, 14 TB drives can be found, and 16 TB helium-filled drives are arriving soon.

Backblaze has been purchasing and deploying helium-filled hard drives over the past year and we thought it was time to start looking at their failure rates compared to traditional air-filled drives. This post will provide an overview, then we’ll continue the comparison on a regular basis over the coming months.

The Promise and Challenge of Helium Filled Drives

We all know that helium is lighter than air — that’s why helium-filled balloons float. Inside an air-filled hard drive, disk platters spin rapidly at a given speed, 7,200 rpm for example. The air inside exerts an appreciable amount of drag on the platters, which in turn requires additional energy to keep them spinning. Replacing the air inside a hard drive with helium reduces that drag, thereby reducing the energy needed to spin the platters, typically by 20%.

We also know that after a few days, a helium-filled balloon sinks to the ground. This was one of the key challenges in using helium inside of a hard drive: helium escapes from most containers, even if they are well sealed. It took years for hard drive manufacturers to create containers that could contain helium while still functioning as a hard drive. This container innovation allows helium-filled drives to function at spec over the course of their lifetime.

Checking for Leaks

Three years ago, we identified SMART 22 as the attribute assigned to recording the status of helium inside of a hard drive. We have both HGST and Seagate helium-filled hard drives, but only the HGST drives currently report the SMART 22 attribute. It appears the normalized and raw values for SMART 22 currently report the same value, which starts at 100 and goes down.

To date only one HGST drive has reported a value of less than 100, with multiple readings between 94 and 99. That drive continues to perform fine, with no other errors or any correlating changes in temperature, so we are not sure whether the change in value is trying to tell us something or if it is just a wonky sensor.
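For readers who want to check their own drives, here is a minimal sketch that pulls attribute 22 out of smartctl’s attribute table. It assumes smartmontools is installed and that the drive reports the attribute at all (as noted above, our Seagate helium drives don’t); the column positions follow smartctl’s standard ATA attribute layout:

```python
import subprocess

# Sketch: read the helium-level attribute (SMART 22) discussed above.
# Assumes smartmontools is installed and the drive actually reports
# attribute 22. Typically needs root privileges.
def helium_level(device="/dev/sda"):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0] == "22":           # attribute ID column
            return {"normalized": int(fields[3]),  # VALUE column
                    "raw": int(fields[9])}         # RAW_VALUE column
    return None                                    # attribute not reported

level = helium_level()
if level and level["normalized"] < 100:
    print("possible helium loss:", level)
```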

Helium versus Air-Filled Hard Drives

There are several different ways to compare these two types of drives. Below we decided to use just our 8, 10, and 12 TB drives in the comparison. We did this since we have helium-filled drives in those sizes. We left out of the comparison all of the drives that are 6 TB and smaller as none of the drive models we use are helium-filled. We are open to trying different comparisons. This just seemed to be the best place to start.

[Table: Lifetime Hard Drive Failure Rates, Helium vs. Air-Filled Hard Drives]

The most obvious observation is that there seems to be little difference in the Annualized Failure Rate (AFR) based on whether they contain helium or air. One conclusion, given this evidence, is that helium doesn’t affect the AFR of hard drives versus air-filled drives. My prediction is that the helium drives will eventually prove to have a lower AFR. Why? Drive Days.

Let’s go back in time to Q1 2017 when the air-filled drives listed in the table above had a similar number of Drive Days to the current number of Drive Days for the helium drives. We find that the failure rate for the air-filled drives at the time (Q1 2017) was 1.61%. In other words, when the drives were in use a similar number of hours, the helium drives had a failure rate of 1.06% while the failure rate of the air-filled drives was 1.61%.
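For reference, the AFR statistic used throughout these tables is failures per drive-year. A minimal sketch of the computation with stand-in counts (not the actual fleet numbers behind the 1.06 and 1.61 percent figures):

```python
# Annualized failure rate (AFR) as Backblaze computes it:
# failures per drive-year, expressed as a percentage. The failure and
# drive-day counts below are illustrative stand-ins.
def afr(failures: int, drive_days: int) -> float:
    """Annualized failure rate, in percent."""
    return 100 * failures / (drive_days / 365)

print(f"{afr(120, 4_100_000):.2f}%")   # ~1.07%, helium-like
print(f"{afr(180, 4_100_000):.2f}%")   # ~1.60%, air-filled-like
```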

Helium or Air?

My hypothesis is that after normalizing the data so that the helium and air-filled drives have the same (or similar) usage (Drive Days), the helium-filled drives we use will continue to have a lower Annualized Failure Rate than the air-filled drives we use. I expect this trend to continue for at least the next year. What side do you come down on? Will the Annualized Failure Rate for helium-filled drives be better than that of air-filled drives, or vice versa? Or do you think the two technologies will eventually produce the same AFR over time? Pick a side and we’ll document the results over the next year and see where the data takes us.


Continued: the answers to your questions for Eben Upton

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/eben-q-a-2/

Last week, we shared the first half of our Q&A with Raspberry Pi Trading CEO and Raspberry Pi creator Eben Upton. Today we follow up with all your other questions, including your expectations for a Raspberry Pi 4, Eben’s dream add-ons, and whether we really could go smaller than the Zero.

[Video: Live Q&A with Eben Upton, creator of the Raspberry Pi]

With internet security becoming more necessary, will there be automated versions of VPN on an SD card?

There are already third-party tools which turn your Raspberry Pi into a VPN endpoint. Would we do it ourselves? Like the power button, it’s one of those cases where there are a million things we could do and so it’s more efficient to let the community get on with it.

Just to give a counterexample, while we don’t generally invest in optimising for particular use cases, we did invest a bunch of money into optimising Kodi to run well on Raspberry Pi, because we found that very large numbers of people were using it. So, if we find that we get half a million people a year using a Raspberry Pi as a VPN endpoint, then we’ll probably invest money into optimising it and feature it on the website as we’ve done with Kodi. But I don’t think we’re there today.

Have you ever seen any Pis running and doing important jobs in the wild, and if so, how does it feel?

It’s amazing how often you see them driving displays, for example in radio and TV studios. Of course, it feels great. There’s something wonderful about the geographic spread as well. The Raspberry Pi desktop is quite distinctive, both in its previous incarnation with the grey background and logo, and the current one where we have Greg Annandale’s road picture.

[Image: The PIXEL desktop on Raspberry Pi]

And so it’s funny when you see it in places. Somebody sent me a video of them teaching in a classroom in rural Pakistan and in the background was Greg’s picture.

Raspberry Pi 4!?!

There will be a Raspberry Pi 4, obviously. We get asked about it a lot. I’m sticking to the guidance that I gave people that they shouldn’t expect to see a Raspberry Pi 4 this year. To some extent, the opportunity to do the 3B+ was a surprise: we were surprised that we’ve been able to get 200MHz more clock speed, triple the wireless and wired throughput, and better thermals, and still stick to the $35 price point.

We’re up against the wall from a silicon perspective; we’re at the end of what you can do with the 40nm process. It’s not that you couldn’t clock the processor faster, or put a larger processor which can execute more instructions per clock in there, it’s simply about the energy consumption and the fact that you can’t dissipate the heat. So we’ve got to go to a smaller process node and that’s an order of magnitude more challenging from an engineering perspective. There’s more effort, more risk, more cost, and all of those things are challenging.

With 3B+ out of the way, we’re going to start looking at this now. For the first six months or so we’re going to be figuring out exactly what people want from a Raspberry Pi 4. We’re listening to people’s comments about what they’d like to see in a new Raspberry Pi, and I’m hoping by early autumn we should have an idea of what we want to put in it and a strategy for how we might achieve that.

Could you go smaller than the Zero?

The challenge with Zero is that we’re periphery-limited. If you run your hand around the unit, there is no edge of that board that doesn’t have something on it. So the question is: “If you want to go smaller than Zero, what feature are you willing to throw out?”

It’s a single-sided board, so you could certainly halve the PCB area if you fold the circuitry and use both sides, though you’d have to lose something. You could give up some GPIO and go back to 26 pins like the first Raspberry Pi. You could give up the camera connector, you could go to micro HDMI from mini HDMI. You could remove the SD card and just do USB boot. I’m inventing a product live on air! But really, you could get down to two thirds and lose a bunch of GPIO – it’s hard to imagine you could get to half the size.

What’s the one feature that you wish you could outfit on the Raspberry Pi that isn’t cost effective at this time? Your dream feature.

Well, more memory. There are obviously technical reasons why we don’t have more memory on there, but there are also market reasons. People ask “why doesn’t the Raspberry Pi have more memory?”, and my response is typically “go and Google ‘DRAM price’”. We’re used to the price of memory going down. And currently, we’re going through a phase where this has turned around and memory is getting more expensive again.

Machine learning would be interesting. There are machine learning accelerators which would be interesting to put on a piece of hardware. But again, they are not going to be used by everyone, so according to our method of pricing what we might add to a board, machine learning gets treated like a $50 chip. But that would be lovely to do.

Which citizen science projects using the Pi have most caught your attention?

I like the wildlife camera projects. We live out in the countryside in a little village, and we’re conscious of being surrounded by nature but we don’t see a lot of it on a day-to-day basis. So I like the nature cam projects, though, to my everlasting shame, I haven’t set one up yet. There’s a range of them, from very professional products to people taking a Raspberry Pi and a camera and putting them in a plastic box. So those are good fun.

The Raspberry Shake seismometer

And there’s Meteor Pi from the Cambridge Science Centre, that’s a lot of fun. And the seismometer Raspberry Shake – that sort of thing is really nice. We missed the recent South Wales earthquake; perhaps we should set one up at our Californian office.

How does it feel to go to bed every day knowing you’ve changed the world for the better in such a massive way?

What feels really good is that when we started this in 2006 nobody else was talking about it, but now we’re part of a very broad movement.

We were in a really bad way: we’d seen a collapse in the number of applicants applying to study Computer Science at Cambridge and elsewhere. In our view, this reflected a move away from seeing technology as ‘a thing you do’ to seeing it as a ‘thing that you have done to you’. It is problematic from the point of view of the economy, industry, and academia, but most importantly it damages the life prospects of individual children, particularly those from disadvantaged backgrounds. The great thing about STEM subjects is that you can’t fake being good at them. There are a lot of industries where your dad can get you a job based on who he knows, and then you can kind of muddle along. But if your dad gets you a job building bridges and you suck at it, after the first or second bridge falls down, you probably aren’t going to be building bridges anymore. So access to STEM education can be a great driver of social mobility.

By the time we were launching the Raspberry Pi in 2012, there was this wonderful movement going on. Code Club, for example, and CoderDojo came along. Lots of different ways of trying to solve the same problem. What feels really, really good is that we’ve been able to do this as part of an enormous community. And some parts of that community became part of the Raspberry Pi Foundation – we merged with Code Club, we merged with CoderDojo, and we continue to work alongside a lot of these other organisations. So in the two seconds it takes me to fall asleep after my face hits the pillow, that’s what I think about.

We’re currently advertising a Programme Manager role in New Delhi, India. Did you ever think that Raspberry Pi would be advertising a role like this when you were bringing together the Foundation?

No, I didn’t.

But if you told me we were going to be hiring somewhere, India probably would have been top of my list because there’s a massive IT industry in India. When we think about our interaction with emerging markets, India, in a lot of ways, is the poster child for how we would like it to work. There have already been some wonderful deployments of Raspberry Pi, for example in Kerala, without our direct involvement. And we think we’ve got something that’s useful for the Indian market. We have a product, we have clubs, we have teacher training. And we have a body of experience in how to teach people, so we have a physical commercial product as well as a charitable offering that we think are a good fit.

It’s going to be massive.

What is your favourite BBC type-in listing?

There is a famous game called Codename: Droid, the sequel to Stryker’s Run, which was an awesome, awesome game. And there was a type-in game called Codename: Druid, which was at the bottom end of what you would consider a commercial game.

Codename: Druid

And I remember typing that in. What was really cool about it was that the next month, the guy who wrote it did another article talking about the memory map and which operating system functions used which bits of memory. So if you weren’t going to do disc access, you knew which bits of memory you could trample on and the operating system would survive.

See the full listing for Babbage versus Bugs in the Raspberry Pi 2018 Annual

I still like type-in listings. The Raspberry Pi 2018 Annual has a type-in listing that I wrote for a Babbage versus Bugs game. I will say that’s not the last type-in listing you will see from me in the next twelve months. And if you download the PDF, you could probably copy and paste it into your favourite text editor to save yourself some time.

The post Continued: the answers to your questions for Eben Upton appeared first on Raspberry Pi.

More power to your Pi

Post Syndicated from James Adams original https://www.raspberrypi.org/blog/pi-power-supply-chip/

It’s been just over three weeks since we launched the new Raspberry Pi 3 Model B+. Although the product is branded Raspberry Pi 3B+ and not Raspberry Pi 4, a serious amount of engineering was involved in creating it. The wireless networking, USB/Ethernet hub, on-board power supplies, and BCM2837 chip were all upgraded: together these represent almost all the circuitry on the board! Today, I’d like to tell you about the work that has gone into creating a custom power supply chip for our newest computer.

The new Raspberry Pi 3B+, sporting a new, custom power supply chip (bottom left-hand corner)

Successful launch

The Raspberry Pi 3B+ has been well received, and we’ve enjoyed hearing feedback from the community as well as reading the various reviews and articles highlighting the solid improvements in wireless networking, Ethernet, CPU, and thermal performance of the new board. Gareth Halfacree’s post here has some particularly nice graphs showing the increased performance as well as how the Pi 3B+ keeps cool under load due to the new CPU package that incorporates a metal heat spreader. The Raspberry Pi production lines at the Sony UK Technology Centre are running at full speed, and it seems most people who want to get hold of the new board are able to find one in stock.

Powering your Pi

One of the most critical but often under-appreciated elements of any electronic product, particularly one such as Raspberry Pi with lots of complex on-board silicon (processor, networking, high-speed memory), is the power supply. In fact, the Raspberry Pi 3B+ has no fewer than six different voltage rails: two at 3.3V — one special ‘quiet’ one for audio, and one for everything else; 1.8V; 1.2V for the LPDDR2 memory; and 1.2V nominal for the CPU core. Note that the CPU voltage is actually raised and lowered on the fly as the speed of the CPU is increased and decreased, depending on how hard it is working. The sixth rail is 5V, which is the master supply that all the others are created from, and the output voltage for the four downstream USB ports; this is what the mains power adaptor is supplying through the micro USB power connector.
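
Incidentally, you can watch that dynamic voltage scaling happen on any running Pi using the stock vcgencmd firmware tool. A minimal Python sketch (the example outputs in the comments are illustrative):

import subprocess

def vcgencmd(*args: str) -> str:
    # Query the Raspberry Pi firmware and return its one-line reply.
    return subprocess.check_output(["vcgencmd", *args], text=True).strip()

print(vcgencmd("measure_volts", "core"))  # e.g. volt=1.2000V
print(vcgencmd("measure_clock", "arm"))   # e.g. frequency(45)=1400000000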

Power supply primer

There are two common classes of power supply circuits: linear regulators and switching regulators. Linear regulators work by creating a lower, regulated voltage from a higher one. In simple terms, they monitor the output voltage against an internally generated reference and continually change their own resistance to keep the output voltage constant. Switching regulators work in a different way: they ‘pump’ energy by first storing the energy coming from the source supply in a reactive component (usually an inductor, sometimes a capacitor) and then releasing it to the regulated output supply. The switches in switching regulators effect this energy transfer by first connecting the inductor (or capacitor) to store the source energy, and then switching the circuit so the energy is released to its destination.

Linear regulators produce smoother, less noisy output voltages, but they can only convert to a lower voltage, and have to dissipate energy to do so: the higher the output current and the voltage difference across them, the more energy is lost as heat. On the other hand, switching supplies can, depending on their design, convert any voltage to any other voltage and can be much more efficient (efficiencies of 90% and above are not uncommon). However, they are more complex and generate noisier output voltages.

Designers use both types of regulators depending on the needs of the downstream circuit: for low-voltage drops, low current, or low noise, linear regulators are usually the right choice, while switching regulators are used for higher power or when efficiency of conversion is required. One of the simplest switching-mode power supply circuits is the buck converter, used to create a lower voltage from a higher one, and this is what we use on the Pi.
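
To make the trade-off concrete, here’s a back-of-the-envelope sketch with illustrative numbers, assuming ideal components; a real buck converter also loses a few percent in its switches and inductor.

V_IN, V_OUT, I_OUT = 5.0, 1.2, 2.0  # illustrative rail voltages and load current

# Linear regulator: input current equals output current, so the voltage
# dropped across the regulator is all burned off as heat.
p_delivered = V_OUT * I_OUT          # 2.4 W to the load
p_heat = (V_IN - V_OUT) * I_OUT      # 7.6 W wasted
linear_efficiency = V_OUT / V_IN     # 24%

# Ideal buck converter: the switch duty cycle sets the conversion ratio.
duty_cycle = V_OUT / V_IN            # switch is 'on' 24% of the time

print(f"linear: {linear_efficiency:.0%} efficient, {p_heat:.1f} W of heat")
print(f"ideal buck duty cycle: {duty_cycle:.0%}")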

A history lesson

The BCM2835 processor chip (found on the original Raspberry Pi Model B and B+, as well as on the Zero products) has on-chip power supplies: one switch-mode regulator for the core voltage, as well as a linear one for the LPDDR2 memory supply. This meant that in addition to 5V, we only had to provide 3.3V and 1.8V on the board, which was relatively simple to do using cheap, off-the-shelf parts.

Pi Zero sporting a BCM2835 processor which only needs 2 external switchers (the components clustered behind the camera port)

When we moved to the BCM2836 for Raspberry Pi Model 2 (and subsequently to the BCM2837A1 and B0 for Raspberry Pi 3B and 3B+), the core supply and the on-chip LPDDR2 memory supply were not up to the job of supplying the extra processor cores and larger memory, so we removed them. (We also used the recovered chip area to help fit in the new quad-core ARM processors.) The upshot of this was that we had to supply these power rails externally for the Raspberry Pi 2 and models thereafter. Moreover, we also had to provide circuitry to sequence them correctly in order to control exactly when they power up compared to the other supplies on the board.

Power supply design is tricky (but critical)

Raspberry Pi boards take in 5V from the micro USB socket and have to generate the other required supplies from this. When 5V is first connected, each of these other supplies must ‘start up’, meaning go from ‘off’, or 0V, to their correct voltage in some short period of time. The order of the supplies starting up is often important: commonly, there are structures inside a chip that form diodes between supply rails, and bringing supplies up in the wrong order can sometimes ‘turn on’ these diodes, causing them to conduct, with undesirable consequences. Silicon chips come with a data sheet specifying what supplies (voltages and currents) are needed and whether they need to be low-noise, in what order they must power up (and in some cases down), and sometimes even the rate at which the voltages must power up and down.
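
The bring-up logic itself is conceptually simple: enable each rail in turn and let it settle before starting the next. The Python sketch below is purely illustrative; the rail order matches the Pi’s (highest voltage first), but the delays and the enable mechanism are hypothetical stand-ins, and real values always come from the data sheets.

import time

# Hypothetical bring-up table: rail name and settling delay before the
# next rail is enabled. Real delays come from each chip's data sheet.
POWER_UP_ORDER = [
    ("3V3", 0.002),
    ("1V8", 0.002),
    ("1V2", 0.002),  # LPDDR2 and core rails come up last
]

def enable_rail(name: str) -> None:
    # Stand-in for driving a regulator's enable pin (e.g. via GPIO).
    print(f"enabling {name}")

# 5V is the master input and is already present when this runs.
for rail, settle in POWER_UP_ORDER:
    enable_rail(rail)
    time.sleep(settle)  # let the rail reach regulation before the next one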

A Pi 3. Power supply components are clustered bottom left next to the micro USB, in the middle (above the LPDDR2 chip, which is on the bottom of the PCB), and above the A/V jack.

In designing the power chain for the Pi 2 and 3, the sequencing was fairly straightforward: power rails power up in order of voltage (5V, 3.3V, 1.8V, 1.2V). However, the supplies were all generated with individual, discrete devices, so I spent quite a lot of time designing circuitry to control the sequencing — even with some design tricks to reduce component count, quite a few sequencing components are required. More complex systems generally use a Power Management Integrated Circuit (PMIC) with multiple supplies on a single chip, and many different PMIC variants are made by various manufacturers. Since Raspberry Pi 2 days, I had been looking for a suitable PMIC to simplify the Pi design, but invariably (and somewhat counter-intuitively) these were always too expensive compared to my discrete solution, usually because they came with more features than we needed.

One device to rule them all

It was way back in May 2015 when I first chatted to Peter Coyle of Exar (Exar were bought by MaxLinear in 2017) about power supply products for Raspberry Pi. We didn’t find a product match then, but in June 2016 Peter, along with Tuomas Hollman and Trevor Latham, visited to pitch the possibility of building a custom power management solution for us.

I was initially sceptical that it could be made cheap enough. However, our discussion indicated that if we could tailor the solution to just what we needed, it could be cost-effective. Over the coming weeks and months, we honed the initial sketches we’d made into a specification we agreed on, and Exar thought they could build it for us at the target price.

The chip we designed would contain all the key supplies required for the Pi on one small device in a cheap QFN package, and it would also perform the required sequencing and voltage monitoring. Moreover, the chip would be flexible to allow adjustment of supply voltages from their default values via I2C; the largest supply would be capable of being adjusted quickly to perform the dynamic core voltage changes needed in order to reduce voltage to the processor when it is idling (to save power), and to boost voltage to the processor when running at maximum speed (1.4 GHz). The supplies on the chip would all be generously specified and could deliver significantly more power than those used on the Raspberry Pi 3. All in all, the chip would contain four switching-mode converters and one low-current linear regulator, this last one being low-noise for the audio circuitry.
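
To give a flavour of what ‘adjustable via I2C’ means in practice, here is a sketch using the Python smbus2 library. Note that the device address, register number, and voltage encoding below are invented for illustration; they are not taken from the MXL7704 data sheet.

from smbus2 import SMBus

PMIC_ADDR = 0x1B       # hypothetical 7-bit I2C address
VOUT_CORE_REG = 0x10   # hypothetical register selecting the core rail
VOLTS_PER_LSB = 0.01   # hypothetical 10 mV-per-step encoding

def set_core_voltage(volts: float) -> None:
    # Write a new core-rail setpoint over I2C (illustrative encoding).
    code = round(volts / VOLTS_PER_LSB)
    with SMBus(1) as bus:
        bus.write_byte_data(PMIC_ADDR, VOUT_CORE_REG, code)

set_core_voltage(1.2)  # nominal, for idle
set_core_voltage(1.3)  # boosted, for full-speed operation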

The MXL7704 chip

The project was a great success: MaxLinear delivered working samples of first silicon at the end of May 2017 (almost exactly a year after we had kicked off the project), and followed through with production quantities in December 2017 in time for the Raspberry Pi 3B+ production ramp.

The team behind the power supply chip on the Raspberry Pi 3 Model B+. Front row: Roger with the very first Pi 3B+ prototypes and James with an MXL7704 development board hacked to power a Pi 3. Back row, left to right: Will Torgerson, Trevor Latham, Peter Coyle, Tuomas Hollman.

The MXL7704 device has been key to reducing Pi board complexity and therefore overall bill of materials cost. Furthermore, by being able to deliver more power when needed, it has also been essential to increasing the speed of the (newly packaged) BCM2837B0 processor on the 3B+ to 1.4GHz. The result is an improvement both in the continuous output current to the CPU (from 3A to 4A) and in the transient performance: the chip has helped to reduce the ‘transient response’, which is the change in supply voltage caused by the sudden current spike that occurs when the processor demands a large current in a few nanoseconds, as modern CPUs tend to do.

With the MXL7704, the power supply circuitry on the 3B+ is now a lot simpler than the Pi 3B design. This new supply also provides the LPDDR2 memory voltage directly from a switching regulator rather than using linear regulators like the Pi 3, thereby improving energy efficiency. This helps to somewhat offset the extra power that the faster Ethernet, wireless networking, and processor consume. A pleasing side effect of using the new chip is the symmetric board layout of the regulators — it’s easy to see the four switching-mode supplies, given away by four similar-looking blobs (three grey and one brownish), which are the inductors.

The Pi 3B+ PMIC MXL7704 — pleasingly symmetric

Kudos

It takes a lot of effort to design a new chip from scratch and get it all the way through to production — we are very grateful to the team at MaxLinear for their hard work, dedication, and enthusiasm. We’re also proud to have created something that will not only power Raspberry Pis, but will also be useful for other product designs: it turns out when you have a low-cost and flexible device, it can be used for many things — something we’re fairly familiar with here at Raspberry Pi! For the curious, the product page (including the data sheet) for the MXL7704 chip is here. Particular thanks go to Peter Coyle, Tuomas Hollman, and Trevor Latham, and also to Jon Cronk, who has been our contact in the US and has had to get up early to attend all our conference calls!

The MXL7704 design team celebrating on Pi Day — it takes a lot of people to design a chip!

I hope you liked reading about some of the effort that has gone into creating the new Pi. It’s nice to finally have a chance to tell people about some of the (increasingly complex) technical work that makes building a $35 computer possible — we’re very pleased with the Raspberry Pi 3B+, and we hope you enjoy using it as much as we’ve enjoyed creating it!

The post More power to your Pi appeared first on Raspberry Pi.

Build a solar-powered nature camera for your garden

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/solar-powered-nature-camera/

Spring has sprung, and with it, sleepy-eyed wildlife is beginning to roam our gardens and local woodlands. So why not follow hackster.io maker reichley’s tutorial and build your own solar-powered squirrelhouse nature cam?

Raspberry Pi- and solar-powered nature camera

Inspiration

“I live half a mile above sea level and am SURROUNDED by animals…bears, foxes, turkeys, deer, squirrels, birds”, reichley explains in his tutorial. “Spring has arrived, and there are LOADS of squirrels running around. I was in the building mood and, being a nerd, wished to combine a common woodworking project with the connectivity and observability provided by single-board computers (and their camera add-ons).”

Building a tiny home

reichley started by sketching out a design for the house to determine where the various components would fit.

Raspberry Pi- and solar-powered nature camera

Since he’s a fan of autonomy and renewable energy, he decided to run the project’s Raspberry Pi Zero W on solar power. To do so, he iterated on the design to include the necessary tech, scaling the roof to fit the panels.

Raspberry Pi- and solar-powered squirrel cam

To keep the project running 24/7, reichley had to figure out the overall power consumption of both the Zero W and the Raspberry Pi Camera Module, factoring in the constant WiFi connection and the sunshine hours in his garden.
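
As a rough illustration of that sizing exercise, here’s a Python sketch with assumed figures; reichley’s tutorial has the measured ones.

# All numbers below are assumptions for illustration, not measurements.
AVG_WATTS = 0.8     # assumed average draw: Zero W + camera, WiFi on
SUN_HOURS = 4.0     # assumed usable full-sun hours per day
DERATING = 0.7      # clouds, panel angle, charging losses
LIPO_VOLTS = 3.7    # nominal LiPo cell voltage

daily_wh = AVG_WATTS * 24                        # ~19.2 Wh consumed per day
panel_watts = daily_wh / (SUN_HOURS * DERATING)  # ~6.9 W of panel needed
battery_mah = daily_wh * 2 / LIPO_VOLTS * 1000   # two days of autonomy

print(f"panel: ~{panel_watts:.1f} W, battery: ~{battery_mah:.0f} mAh")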

Raspberry Pi- and solar-powered nature camera

He used a LiPo SHIM to bump up the power to the required 5V for the Zero. Moreover, he added a BH1750 lux sensor to shut off the LiPo SHIM, and thus the Pi, whenever it’s too dark for decent video.
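
For the curious, here’s a minimal sketch of that light-gating idea using the Python smbus2 library, assuming the BH1750 sits at its default address 0x23 on I2C bus 1. The lux threshold and the power-cut action are placeholders for whatever a particular build actually does.

import time
from smbus2 import SMBus, i2c_msg

BH1750_ADDR = 0x23        # the sensor's default I2C address
ONE_TIME_HIGH_RES = 0x20  # single high-resolution measurement command
DARK_LUX = 10.0           # placeholder threshold for 'too dark for video'

def read_lux(bus: SMBus) -> float:
    bus.write_byte(BH1750_ADDR, ONE_TIME_HIGH_RES)  # trigger a measurement
    time.sleep(0.18)                                # max conversion time
    msg = i2c_msg.read(BH1750_ADDR, 2)
    bus.i2c_rdwr(msg)
    high, low = list(msg)
    return ((high << 8) | low) / 1.2                # per the BH1750 data sheet

with SMBus(1) as bus:
    if read_lux(bus) < DARK_LUX:
        print("too dark: cut power to the SHIM")    # stand-in for real control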

Raspberry Pi- and solar-powered nature camera

To control the project, he used Calin Crisan’s motionEyeOS video surveillance operating system for single-board computers.

Build your own nature camera

To build your own version, follow reichley’s tutorial, in which you can also find links to all the necessary code and components. You can also check out our free tutorial for building an infrared bird box using the Raspberry Pi NoIR Camera Module. As Eben said in our YouTube live Q&A last week, we really like nature cameras here at Pi Towers, and we’d love to see yours. So if you have any live-stream links or photography from your Raspberry Pi–powered nature cam, please share them with us!

The post Build a solar-powered nature camera for your garden appeared first on Raspberry Pi.