Power semiconductor usage in data centers evolves to meet power efficiency needs

The number and size of data centers (DCs) are exploding around the world. Our always-on, always-connected society has driven a massive need for storage, compute and networking resources to facilitate the rapid, 24/7 delivery of information, entertainment and communications. New applications in machine-to-machine connectivity, artificial intelligence (AI) and virtual/augmented reality will accelerate growth for years to come.

DCs are consuming more and more electricity. IHS Markit estimates that, on average, 2-3% of developed countries’ electricity consumption is currently attributed to data centers. Over the past several years, growth in DC electricity demand has been moderated by better utilization of the installed server base through multi-tenant software such as server virtualization and containers, which run multiple applications on a single physical server while isolating application code and user data and sharing CPU, memory and network resources. The result was annual growth of new server shipments in the low single digits, as discussed in the latest IHS Markit Data Center Server Equipment Market Tracker – Regional. Adoption of multi-tenant server software is nearing saturation, however, and server unit shipments are expected to accelerate, driven by connected devices and AI. In addition, rack power density (a measure of the electric load consumed by an IT rack) is increasing, driven by new power-hungry servers: (1) those configured with specialized co-processors such as general-purpose GPUs and field-programmable gate arrays (FPGAs), capable of highly parallel workloads like AI; and (2) those configured with more memory and more powerful CPUs for data-intensive workloads like analytics.

For most data centers, the largest operating cost is electricity. In addition, as heavy energy users, many large operators are conscious of their environmental footprint. These two factors drive DCs to seek ways to improve their energy efficiency. One way to mitigate this growing need for power is to improve power conversion efficiency within the data center. This can take many forms, including overhauling the power distribution architecture: changing the point at which AC power is converted to DC, the DC bus voltages, the number of DC conversion stages and the efficiency of each stage. Efficiency is key – it is estimated that a 1% improvement in power conversion efficiency in all the world’s data centers would be equivalent to eliminating 4.6 nuclear power plants.
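A quick back-of-envelope calculation shows why even a one-point efficiency gain matters at data-center scale. The sketch below is illustrative only: the 10 MW IT load and the 94%/95% efficiency figures are assumptions, not numbers from the report.

```python
# Back-of-envelope sketch of why small efficiency gains matter at
# data-center scale. All figures here are assumed for illustration.

HOURS_PER_YEAR = 8760

def annual_loss_kwh(it_load_kw, efficiency):
    """Energy drawn beyond the IT load due to conversion losses, per year."""
    input_kw = it_load_kw / efficiency
    return (input_kw - it_load_kw) * HOURS_PER_YEAR

it_load_kw = 10_000                              # assumed 10 MW IT load
before = annual_loss_kwh(it_load_kw, 0.94)       # assumed baseline efficiency
after = annual_loss_kwh(it_load_kw, 0.95)        # a 1-point improvement
print(f"annual energy saved: {before - after:,.0f} kWh")
```

Scaled across every data center in the world, savings of this order are what underlie headline comparisons like the nuclear-plant equivalence above.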

Implications for power semiconductors

IHS Markit has researched the usage of power semiconductors in data centers and issued a report – the Power Semiconductors in Data Centers Database – 2019. Most power semiconductors in data centers are found in the AC-DC power supply units (PSUs) that convert line-voltage AC to DC (usually 12V), and on the compute and storage server boards where 12V is converted to the low-voltage power rails needed by the server components. The dominant electrical architecture for PSUs has been a boost power factor correction (PFC) circuit, followed by a converter that drops the bus voltage to less than 60V, followed by an isolated intermediate bus converter that generates the 12V output used by the server boards.
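Because these stages are cascaded, the PSU's end-to-end efficiency is the product of the stage efficiencies, which is why removing or improving any one stage pays off. The stage values in this sketch are assumptions chosen for illustration, not figures from the report.

```python
# Minimal sketch: end-to-end PSU efficiency is the product of the
# cascaded stage efficiencies. Stage values below are assumed.

from math import prod

stages = {
    "boost PFC": 0.975,
    "bus converter (<60V)": 0.97,
    "isolated IBC (12V out)": 0.96,
}

overall = prod(stages.values())
print(f"end-to-end efficiency: {overall:.1%}")
```

Note how three stages that each look respectable on their own compound into a noticeably lower overall figure.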

This architecture has provided a good balance of efficiency, load response and power density. However, its limits are being tested by demands for higher efficiency and faster dynamic load response, leading to new architecture proposals. The first is to convert the boost PFC from a full-bridge design to a totem-pole design. The totem-pole architecture uses only a half-bridge input stage, followed by synchronous rectification diodes or MOSFETs, instead of a full bridge. This eliminates one diode drop and provides power conversion efficiencies at high power of up to 99% (up from 97.5% for the full-bridge design). However, high power levels are only achievable with this architecture when the totem-pole switches are implemented with wide-bandgap semiconductors. These devices switch faster than silicon with almost zero reverse-recovery loss and provide higher power densities than full-bridge designs. We believe this design will quickly catch on, especially in high-power applications, and drive high growth rates for wide-bandgap devices such as silicon carbide (SiC) MOSFETs and gallium nitride (GaN) transistors.
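The conduction-loss side of the totem-pole argument can be sketched numerically: a rectifier diode dissipates roughly its forward drop times the current, while a synchronous MOSFET dissipates I²·R through its on-resistance. The component values below (0.7V forward drop, 10 mΩ on-resistance, 10A average current) are assumptions for illustration, not values from the report.

```python
# Rough conduction-loss comparison behind eliminating a diode drop.
# Component values are assumed for illustration only.

def diode_loss_w(i_amps, vf=0.7):
    """Conduction loss of a diode with forward drop vf (watts)."""
    return vf * i_amps

def mosfet_loss_w(i_amps, rds_on=0.010):
    """Conduction loss of a synchronous MOSFET: I^2 * Rds(on) (watts)."""
    return i_amps ** 2 * rds_on

i = 10.0  # assumed average rectifier current in amps
print(f"diode: {diode_loss_w(i):.1f} W, MOSFET: {mosfet_loss_w(i):.1f} W")
```

Switching losses matter too at these frequencies, which is where the near-zero reverse-recovery behavior of wide-bandgap devices comes in.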

The second proposal is to eliminate the intermediate bus converter stage by changing the LLC stage to an isolated converter and providing its output (nominally +54.5V, generally referred to as 48V) directly to the server boards. The concept was first proposed and implemented by Google and has since been taken up by the Open Compute Project. Eliminating a conversion stage saves a few percentage points of efficiency, but there is a secondary advantage. Large hyperscale data centers typically deploy rack units with busbars that the server units plug into to receive input power. At a nominal 12.5V output, the current carried in the busbars is several hundred amps. By using a 48V bus, the current needed to power the same load is reduced by a factor of 4. Power losses in the busbars, which scale with the square of the current, are thereby decreased by a factor of 16 – a large improvement in efficiency. The implication for power semiconductors is that the output stage will need higher-voltage MOSFETs, but at lower currents.
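The busbar arithmetic above can be sketched directly: for a fixed load power, quadrupling the distribution voltage cuts current by 4x and I²·R loss by 16x. The per-rack load and busbar resistance below are assumptions for illustration.

```python
# Sketch of the busbar-loss argument: for a fixed load power, a
# higher distribution voltage means less current and much less
# I^2 * R loss. Load and resistance values are assumed.

def busbar_loss_w(load_w, bus_v, r_ohms=0.001):
    """I^2 * R loss in a busbar of resistance r_ohms (watts)."""
    i = load_w / bus_v          # current drawn at this bus voltage
    return i ** 2 * r_ohms

load_w = 12_000  # assumed per-rack IT load in watts
loss_12v = busbar_loss_w(load_w, 12.0)
loss_48v = busbar_loss_w(load_w, 48.0)
print(f"12V: {loss_12v:.1f} W, 48V: {loss_48v:.1f} W, "
      f"ratio: {loss_12v / loss_48v:.0f}x")
```

The 16x ratio holds for any load power and busbar resistance, since it depends only on the voltage ratio squared.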

Server units that use 48V input power require novel solutions to step the voltage down to the low voltages needed by the server integrated circuits (ICs). High-current (>100A at 1V) multi-phase point-of-load (PoL) converters have been used to power server processors, memory and other high-current devices for many years. Raising the input voltage adds extra challenges in creating a stable output rail for a load that can change dramatically. We believe a range of solutions will be introduced to address this need, including multi-stage topologies, new control architectures and more integrated packaging.
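The reason these rails are built as multi-phase converters can be shown with simple arithmetic: interleaving N phases divides the rail current among them, keeping per-phase inductor and FET stress manageable. The 120A rail value below is an assumed example, not a figure from the report.

```python
# Simple sketch of why high-current rails use multi-phase PoL
# converters: each phase carries only its share of the total
# current. Rail current is an assumed example value.

def per_phase_current(total_amps, phases):
    """Average current each phase carries in an N-phase converter."""
    if phases < 1:
        raise ValueError("need at least one phase")
    return total_amps / phases

rail_amps = 120.0  # assumed CPU core rail at ~1V
for n in (1, 4, 8):
    print(f"{n} phase(s): {per_phase_current(rail_amps, n):.0f} A per phase")
```

Interleaving the phases also spreads heat and reduces output ripple, which is part of why phase counts keep rising with processor power.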

Servers shipped to hyperscale cloud service providers (CSPs) represent the fastest-growing segment of the server market and account for about a third of the total. For multi-tenant and colocation data centers, the standard 12V server input will likely remain. Even without the additional challenge of converting from 48V, new methods will be needed to supply power rails as the power requirements of the new generation of processors and co-processors increase. IHS Markit expects novel packaging solutions that combine the control, power stage and magnetics into a single package to meet power density needs. These density-optimized solutions will allow more computing power in the same space without breaking the thermal budget.

Power Semiconductors in Data Centers Database

Part of the Power Semiconductor Intelligence Service, this report provides a comprehensive look at the usage of power semiconductors in data center server and power supply unit applications. It segments the market into four server and three power supply categories, and covers twenty-nine power semiconductor device types. It provides two years of history and a five-year forecast.