Driving Next-Gen AI

High-performance computation with artificial intelligence demands an immense number of floating-point operations per second (FLOPS), driving GPUs in data centres to consume substantial power for optimal performance. Supplying power to these GPUs is a significant challenge. For instance, the latest GPUs, such as the NVIDIA H100 Tensor Core, require peak power of 700 W to 1000 W at voltages ranging from 1.1 V to 3.3 V. Delivering such high power at these low voltages not only increases power losses but also takes up more space. The primary challenge is to miniaturize the power supply by adopting direct conversion from 48 V, or even from 400 V, down to the required 1.1 V to 3.3 V output voltage. This approach reduces distribution losses and achieves miniaturization, enabled by high-frequency magnetics, GaN devices, and high-power-density capacitors.
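The benefit of a higher distribution voltage follows directly from Ohm's law: for a fixed power draw, bus current scales as I = P/V and conduction loss as I²R, so a 48 V bus carries a quarter of the current of a 12 V bus and dissipates roughly one sixteenth of the loss in the same path. A minimal sketch of that scaling is shown below; the 1 kW load and 2 mΩ path resistance are illustrative assumptions, not figures from this article.

```python
# Illustrative comparison of distribution losses when feeding a ~1 kW GPU
# from a 12 V bus versus a 48 V bus. The bus resistance is an assumed
# value chosen only to make the scaling visible; real board values differ.

P_GPU = 1000.0   # peak GPU power draw in watts (upper end of the 700-1000 W range)
R_BUS = 0.002    # assumed distribution-path resistance in ohms (hypothetical)

for v_bus in (12.0, 48.0):
    i_bus = P_GPU / v_bus        # bus current needed to deliver P_GPU
    p_loss = i_bus ** 2 * R_BUS  # conduction (I^2 * R) loss in the distribution path
    print(f"{v_bus:>4.0f} V bus: {i_bus:6.1f} A, {p_loss:5.2f} W lost in distribution")
```

With these assumed numbers the 12 V bus carries about 83 A and loses roughly 14 W in the path, while the 48 V bus carries about 21 A and loses under 1 W, which is the motivation for direct conversion from 48 V (or 400 V) close to the GPU.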
