
New DC-DC power technology aims to be the greenest yet

Technology News | By eeNews Europe



Power supply technology has very quantifiable goals. Systems designers are increasingly driven to extract the most from the available technology in terms of energy efficiency, performance, power density, and cost.

Energy efficiency goals are driven by thermal considerations in the equipment that the power supply is to serve, by overall system manufacturing and operating costs and, increasingly, by government regulation.

Performance goals for power supplies go well beyond providing the correct number of watts at a specified voltage; they span regulation over input and load range, dynamic response to load changes, and conducted and radiated noise. In compact systems, energy density is a very critical system design goal.

Cost per watt has always been an important metric applied to the bill of materials of power systems and energy costs per year are now commonly quoted for systems that are powered on a continual basis. In many systems, the idle (standby) or low power energy usage is also a significant consideration in total energy usage. Modern power supplies have evolved to meet these goals, with DC-DC converters providing the underlying technology.

The development of packaged DC-DC converters, starting in the 1980s, was a huge step forward in the evolution of DC power technology. The ability to generate local DC power busses with some degree of efficiency challenged the existing practice of a central power supply that generated all system voltage levels and distributed them throughout the system.

DC-DC converters enabled the distribution of power within a system in an entirely new mode. A central power supply generated a relatively high “distribution rail” that was routed around the system, and multiple DC-DC blocks converted the distribution rail to the locally required levels very near the point of load. This methodology has already evolved through several generations, but the DC-DC converter remains as the foundation technology.

Like most revolutionary technologies, the first DC-DC modules had some serious limitations that were inherent in the design and limited by components that were available to their designers.

Maximum achievable efficiency was limited; 70% was considered a very good figure. Even so, where the difference between the input and output voltage was considerable, this was much better than could be achieved with the alternative, a linear regulator, in which the excess voltage is dropped across a pass device that dissipates power equal to the product of the voltage drop and the output current.
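To put rough numbers on that comparison, the sketch below works through the dissipation for both approaches; the 12 V input, 5 V output and 2 A load are assumed example values, not figures from the text.

```python
# Illustrative comparison: linear regulator vs. an early (70% efficient) DC-DC converter.
# The input/output voltages and load current below are assumed example values.

v_in, v_out, i_load = 12.0, 5.0, 2.0    # volts, volts, amps (assumed)

# Linear regulator: the pass element drops (v_in - v_out) at the full load current.
p_out = v_out * i_load                   # power delivered to the load
p_loss_linear = (v_in - v_out) * i_load  # power burned in the pass device
eff_linear = p_out / (p_out + p_loss_linear)

# Early switching DC-DC converter at an assumed 70% efficiency.
eff_switcher = 0.70
p_loss_switcher = p_out / eff_switcher - p_out

print(f"Linear regulator: {p_loss_linear:.1f} W lost, efficiency {eff_linear:.0%}")
print(f"70% DC-DC module: {p_loss_switcher:.1f} W lost, efficiency {eff_switcher:.0%}")
```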

Some of the inefficiency in the first generation products was due to unavoidable ohmic losses in the semiconductor switches. Another large proportion was related to the less than optimal timing of the switching signals that sapped efficiency by wastefully discharging a sizeable portion of the reactive energy that was being delivered to the primary side of the circuit.

A third contributor to reduced efficiency was the limited frequency at which the converter could be switched. Frequency was limited by the power dissipation capability of the switch MOSFETs, which had to absorb large switching transients on every cycle. This low energy efficiency resulted in limited output power.

The other weakness of first generation DC-DC power supply modules was electrical switching noise. The switching of the inductive elements generated large spikes on the input rail, and careful design of filters was necessary to keep this noise from propagating throughout the system. The spikes also contributed to a third key weakness: reliability problems, especially with the switch MOSFETs.

Second generation products attacked the efficiency, noise and reliability issues with better components and better design.  Advances in semiconductor processes yielded MOSFET switches with much lower on-state resistance, resulting in decreased conduction losses.

Circuit designers developed techniques to modify the primary waveforms by adding capacitance to form a resonant circuit and timed the primary switch MOSFET transitions to coincide with “zero-crossings” of the voltage and current. For this reason these converters are known as resonant or “zero-crossing” (ZVS or ZCS) converters. This architecture eliminated much of the power dissipation in the switching devices and reduced the switching spikes. These changes resulted in a much-improved DC-DC converter capable of conversion efficiencies in the range of 90%, and also addressed the very high levels of switching noise produced by the previous generation of converters. These improvements led to widespread adoption of resonant DC-DC converters across the electronics industry.
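The principle behind zero-crossing switching can be sketched numerically: the instantaneous power a switch absorbs during a transition is roughly the product of the voltage across it and the current through it, so timing the transition to a zero crossing drives that product toward zero. The sketch below uses an idealized sinusoidal tank waveform with assumed amplitudes and frequency, purely for illustration.

```python
import numpy as np

# Idealized resonant tank waveforms (assumed amplitude and frequency, for illustration only).
f = 100e3                              # 100 kHz resonant frequency (assumed)
t = np.linspace(0, 1 / f, 1000)        # one resonant cycle
v = 48 * np.sin(2 * np.pi * f * t)     # voltage across the switch
i = 5 * np.sin(2 * np.pi * f * t)      # current through the switch

p = v * i  # instantaneous power the switch would dissipate if it transitioned at time t

# Compare a transition at a waveform peak (hard switching) with one at a zero crossing.
idx_peak = np.argmax(np.abs(v))
idx_zero = np.argmin(np.abs(v))
print(f"Transition at peak:          ~{p[idx_peak]:.0f} W instantaneous")
print(f"Transition at zero crossing: ~{p[idx_zero]:.2f} W instantaneous")
```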

Over time, incremental improvements to the “zero-crossing” design have increased its efficiency to the low to mid 90% range, and it has served as an industry workhorse, but the design suffers from inherent limitations. In operation, the energy to be transferred from the primary to the secondary is stored in the inductance of the transformer and is proportional to LI² (where L represents the primary inductance and I the primary current). This “packet” of energy is fixed for a given circuit implementation.

To increase the delivered power (the rate at which energy is transferred), the switching rate must be increased, increasing the number of packets transferred in a time period. In this configuration, output power is directly dependent on the switching frequency. Switching frequencies in ZCS/ZVS resonant converters are limited to about 125 kHz by the trade-off between stored energy per cycle and the necessary conditions to achieve zero-crossing switching.
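A back-of-the-envelope model of that frequency dependence, using assumed component values, is sketched below: each cycle transfers an energy packet of roughly ½LI², and the deliverable power is that packet multiplied by the switching frequency.

```python
# Back-of-the-envelope model of a resonant ("zero-crossing") converter's power limit.
# All component values are assumed, purely for illustration.

L = 10e-6      # primary (transformer) inductance in henries (assumed)
I = 4.0        # peak primary current in amps (assumed)
f_sw = 125e3   # switching frequency in hertz (the ~125 kHz ceiling cited above)

energy_per_packet = 0.5 * L * I**2   # joules transferred per switching cycle
power = energy_per_packet * f_sw     # watts = packets per second x energy per packet

print(f"Energy per packet: {energy_per_packet * 1e6:.1f} uJ")
print(f"Deliverable power at {f_sw / 1e3:.0f} kHz: {power:.1f} W")
# Raising the delivered power means raising f_sw, since the packet size is fixed by L and I.
```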

A new circuit technique, dubbed the Sine Amplitude Converter (SAC), looks superficially similar in topology to the resonant converter (see Figure 1), but its principles of operation are entirely different. Intermediate Bus Converters (IBCs) based on this new circuit technique achieve peak efficiencies of 98%. Unlike resonant converters, the Sine Amplitude Converter operates at a fixed frequency equal to the resonant frequency of the primary-side tank circuit.
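For a sense of scale, that fixed operating frequency is set by the primary tank's inductance and capacitance; the short sketch below computes it for assumed example component values, not actual design values.

```python
import math

# The SAC switches at the natural resonant frequency of the primary tank:
#   f = 1 / (2 * pi * sqrt(L * C))
# The component values below are assumed examples, not actual design values.
L_tank = 200e-9   # tank inductance in henries (assumed)
C_tank = 100e-9   # tank capacitance in farads (assumed)

f_res = 1.0 / (2 * math.pi * math.sqrt(L_tank * C_tank))
print(f"Tank resonant (and switching) frequency: {f_res / 1e6:.2f} MHz")
```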

Figure 1
Outline view of a sine-amplitude converter, showing the fundamental components involved in the power-transfer cycle.

The switching of the FETs in the primary is locked to the natural resonant frequency of the primary; the FETs are switching at zero crossing points, eliminating power dissipation in the switches (boosting efficiency) and greatly reducing the generation of high order noise harmonics (requiring less filtering of the output voltage). The current in the primary resonant tank is a pure sinusoid rather than a square wave or a partially sinusoidal waveform as seen in prior generations of converters. This purely sinusoidal current contributes to greatly reduced harmonic content and therefore much cleaner output.
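To see why a purely sinusoidal primary current demands less filtering, the sketch below compares the harmonic content of an ideal square wave with that of a sinusoid at the same fundamental; the waveforms are idealized and the comparison is purely illustrative.

```python
import numpy as np

# Idealized comparison of harmonic content: square-wave primary current (earlier converters)
# versus purely sinusoidal primary current (SAC). Waveforms and amplitudes are assumed.
n = 4096
t = np.linspace(0, 1, n, endpoint=False)    # one fundamental period, normalized
square = np.sign(np.sin(2 * np.pi * t))     # ideal square wave, amplitude 1
sine = np.sin(2 * np.pi * t)                # pure sinusoid, amplitude 1

for name, wave in (("square", square), ("sine", sine)):
    spectrum = np.abs(np.fft.rfft(wave)) / (n / 2)   # single-sided amplitude spectrum
    fundamental = spectrum[1]                        # bin 1 = one cycle per window
    third, fifth = spectrum[3], spectrum[5]
    print(f"{name:6s}: 3rd harmonic = {third / fundamental:.1%} of fundamental, "
          f"5th = {fifth / fundamental:.1%}")
```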

The leakage inductance of the primary is minimized in an SAC converter since it is not required to provide the critical energy storage. The Sine Amplitude Converter can, therefore, operate at a much higher frequency than ZCS/ZVS resonant converters, allowing for a much smaller transformer, and increasing both power density and efficiency.

Today’s SAC-based products operate at frequencies greater than 1 MHz. In contrast to resonant converters, this frequency is fixed regardless of load. In response to an increased load on the secondary, the Sine Amplitude Converter reacts by increasing the amplitude of the sinusoidal current in the primary resonant tank, increasing the amount of energy coupled into the secondary to meet the increased load. When the load is reduced, the amplitude of the sinusoid decreases, going to zero under “no load” conditions.
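That behavior can be pictured with a simple numerical sketch: the frequency stays fixed while the primary current tracks the load, falling to zero at no load. The step-down ratio, input voltage, efficiency and load currents below are assumed example values, and the reflected input current is only a rough proxy for the tank amplitude.

```python
# Conceptual sketch: fixed-frequency operation with load-dependent primary current.
# The transformation ratio, input voltage, efficiency and load values are assumed examples.

f_sw = 1.3e6          # fixed switching frequency in hertz (assumed, >1 MHz as described)
v_in, k = 48.0, 5.0   # input voltage and 5:1 step-down ratio (example values)
v_out = v_in / k
efficiency = 0.98

for i_load in (0.0, 10.0, 35.0, 70.0):           # output currents in amps (assumed)
    p_out = v_out * i_load                        # power demanded by the load
    p_in = p_out / efficiency if p_out else 0.0   # power drawn from the input rail
    # Rough proxy for the primary tank current: average input current reflected into the tank.
    i_primary_avg = p_in / v_in
    print(f"load {i_load:5.1f} A -> frequency {f_sw / 1e6:.1f} MHz (fixed), "
          f"avg primary current ~{i_primary_avg:5.2f} A")
```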

Modules based on the Sine Amplitude Converter are functional blocks that the design engineer can genuinely use as a “DC transformer”. A typical example might provide 5:1 voltage reduction from a nominal 48 V input (36-60 V range) with, as already stated, conversion efficiency of 98%.
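In “DC transformer” terms, the output simply tracks the input divided by the fixed ratio. The sketch below applies the stated 5:1 ratio, 36-60 V input range and 98% efficiency; the 20 A load current is an assumed example.

```python
# "DC transformer" behaviour of a fixed-ratio (5:1) bus converter.
# Input range, ratio and efficiency are as stated above; the load current is an assumed example.

k = 5.0             # fixed 5:1 step-down ratio
i_out = 20.0        # output current in amps (assumed)
efficiency = 0.98   # stated peak conversion efficiency

for v_in in (36.0, 48.0, 60.0):   # stated input range, including the nominal 48 V
    v_out = v_in / k              # the output tracks the input, scaled by the fixed ratio
    p_out = v_out * i_out
    p_in = p_out / efficiency
    i_in = p_in / v_in            # input current is roughly i_out / k, plus losses
    print(f"v_in {v_in:4.0f} V -> v_out {v_out:5.2f} V, "
          f"i_in {i_in:5.2f} A for {i_out:.0f} A out")
```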

The implications of that figure bear examination. Compared to a prior-generation device achieving perhaps 92%, the 98% figure might be thought of as just a few points better. The more telling perspective is to look at losses relative to the output power delivered, making the comparison 2% rather than 8%: one-quarter of the losses of the prior design. This can have a dramatic effect on system design. For the same system power, the designer has to deal with only 25% of the heat dissipated in the power conversion stage. Alternatively, that increased margin can be used to increase system power, run at a higher ambient temperature, or operate with reduced cooling airflow.
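A quick worked comparison makes the point concrete; the 1 kW output level below is an assumed example, while the two efficiency figures are those discussed above.

```python
# Dissipation comparison at the same delivered power: 92% vs 98% efficient conversion stages.
# The 1 kW output level is an assumed example; the efficiencies are those quoted above.

p_out = 1000.0   # watts delivered to the load (assumed)

for eff in (0.92, 0.98):
    p_in = p_out / eff
    p_loss = p_in - p_out
    print(f"{eff:.0%} efficient stage: {p_loss:6.1f} W dissipated as heat")

# ~87 W of heat at 92% versus ~20 W at 98% -- roughly one quarter of the losses,
# matching the 2%-versus-8% comparison when losses are expressed relative to output power.
```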

To translate those principles into typical real-world figures, a state-of-the-art converter might be rated to deliver 70 A at 25°C and 100 LFM (linear feet/minute) of cooling airflow. Designers know that they can make trade-offs with these figures; for example, the same unit would sustain the same output at an ambient temperature of 55°C and 200 LFM. Both performance points are a major step up from the previous best-in-class devices, which could deliver 54 A at 25°C/100 LFM, or just 49 A at 55°C/200 LFM. Due to the factors mentioned above, all of this is achieved from a smaller occupied volume and board footprint.

Figure 2 details the differences between a converter based on the SAC and a competitive 5:1 quarter-brick bus converter using a conventional resonant converter circuit.




Figure 2
Representative metrics of a Vicor IBC (intermediate bus converter) using sine amplitude conversion, compared to those of a nearest-equivalent, competitive “quarter-brick” converter employing conventional technology.

The first four metrics are all “less-is-better” parameters, and as the chart shows, the SAC-based device offers significant reductions in conversion loss, output voltage ripple, output impedance and response time. The last three metrics are “greater-is-better”, and here the SAC architecture shows meaningful gains in power density, switching frequency and output power. Taken together, these differences represent a giant step forward.

About the author

Bob Marchetti is Senior Manager of Product Marketing for Vicor’s Brick Business Unit.  Marchetti holds a BSEE and an MBA and has been in the power supply industry for over 17 years.
