
Solving Thermal Performance in Data Centers

As the demand on data centers to process more data grows every year, so does the need for computing power. That demand is driving higher-power data racks and a push for more efficient data centers.

The industry expects data center electricity usage to keep rising over the next decade. Even where large-scale data centers aim to use less power, the consumption of data center equipment continues to grow. Operators are constantly expected to balance the performance the public demands against the need to minimize power consumption.

In today’s facilities, a typical forced air data rack can consume up to 15 kilowatts of power, and this can rise to 30 to 50 kilowatts in higher-performance installations. The higher temperatures that accompany these power levels directly affect the longevity of components in the rack: sensitive components tend to fail sooner, and systems may shut down if the thermal design is not optimized.
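To put those power levels in perspective, a rough airflow estimate can be derived from the sensible-heat relation for air. The sketch below uses assumed values for air density, specific heat, and a 10-degree inlet-to-exhaust rise; the numbers are illustrative only, not specifications for any particular rack.

```python
# Rough airflow needed to carry heat out of a forced-air rack: V = P / (rho * cp * dT)
# Assumed values: sea-level air (rho ~ 1.2 kg/m^3, cp ~ 1005 J/(kg*K)) and a
# 10 K rise between cold-aisle inlet and hot-aisle exhaust. Illustrative only.
RHO_AIR = 1.2          # kg/m^3
CP_AIR = 1005.0        # J/(kg*K)
M3S_TO_CFM = 2118.88   # cubic feet per minute per (m^3/s)

def required_airflow_cfm(rack_power_w: float, delta_t_k: float = 10.0) -> float:
    """Volumetric airflow (CFM) needed to remove rack_power_w at a delta_t_k air rise."""
    m3_per_s = rack_power_w / (RHO_AIR * CP_AIR * delta_t_k)
    return m3_per_s * M3S_TO_CFM

for kw in (15, 30, 50):
    print(f"{kw} kW rack: ~{required_airflow_cfm(kw * 1000):,.0f} CFM at a 10 K rise")
# A 15 kW rack needs roughly 2,600 CFM; a 50 kW rack closer to 8,800 CFM,
# which is part of why the densest installations look beyond forced air.
```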

A more efficient way to manage these risks is to invest in data center architectures that balance power dissipation and high-speed operation under peak loads. This involves carefully selecting electronic and optical components that can operate at maximum speeds at rated thermal conditions.

Decades ago, the expected lifecycle of data center equipment was about 20 years. Today, performance pressures and shorter technology cycles have dropped typical operating requirements to below 10 years. With each technology improvement cycle, operators typically upgrade equipment to make the data center as efficient as possible. This shortened lifecycle trend is expected to continue in coming years.

To achieve long-term efficiency, the design typically must account for the thermal environment, the equipment design, and the rack configuration. All these aspects influence the net efficiency and energy utilization of the data center, and inefficiencies can quickly spiral into equipment failure.

Commonly, protecting data racks begins with determining how much heat each rack can remove effectively and how much heat the data center can manage overall. Most data racks are designed against a power budget, meaning each rack is allotted a share of the facility's thermal capacity as part of the overall thermal strategy.

Typically, this budget is based on the data center’s limitations as well as how much heat, or thermal dissipation power, must be removed from the system.
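As a simple illustration of how such a budget might be split across racks (the capacity figures and the even-split rule below are hypothetical, not a prescribed method):

```python
# Hypothetical per-rack thermal budget: split the facility's usable cooling
# capacity evenly across racks, holding back a margin for cooling
# inefficiency and future growth. Values are examples only.
def rack_thermal_budget_kw(facility_cooling_kw: float,
                           rack_count: int,
                           reserve_margin: float = 0.15) -> float:
    """Even split of usable cooling capacity across racks (illustrative rule)."""
    usable_kw = facility_cooling_kw * (1.0 - reserve_margin)
    return usable_kw / rack_count

# Example: 2 MW of cooling, 100 racks, 15% held in reserve -> 17 kW per rack.
print(f"{rack_thermal_budget_kw(2000.0, 100):.1f} kW per rack")
```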

To optimize for thermal performance, system designers should consider the work of chip and board designers as well as the data center architect and other teams. This helps them understand how to provide sufficient cooling, at every level, for the data rack and its constituent components.

System designers also need to define the optimization problem they are solving. For example, one solution could involve liquid cooling. It is commonly an efficient way to cool the data center, but it can be more expensive to implement because it may require more hardware and increases the potential for failure.

To optimize for reliability, a designer might increase airflow, add redundancy, and use higher-end components. To optimize for cost, one option is to pay for more cooling and accept components with shorter lifecycles.

One optimization concern is knowing which elements must be carefully planned and where compromises can be made. For example, in colocation data centers, the thermal management priority is often designing racks that can be cooled effectively in a consolidated space.

When retrofitting an existing building, the challenge is determining how the layout affects the thermal strategy. When retrofitting an urban data center, for example, designers typically must choose compact, densely arranged data racks and add a thermal management solution capable of handling high energy density.

At TE, we consider many aspects of the customer’s design when developing our solutions, which means manufacturing parts with the capacity to address both thermal performance requirements and the overall system design. Our portfolio offers a broad selection of solutions engineered to achieve the system reliability and transmission efficiency that most data center operators expect, including thermal, mechanical, and electrical performance.

Thermal Bridge I/O Connectors 

TE’s input/output (I/O) connectors are designed at the system level to help address cooling requirements in I/O transceivers. The thermal bridge helps extract heat from pluggable I/O modules and couples it with highly efficient cooling solutions, such as liquid cooling. A key feature of the thermal bridge is its compliance: it can absorb variation in dimensional tolerance without sacrificing thermal conductivity.

Compared to conventional interface materials, our thermal bridge is an order of magnitude more efficient at moving heat across the interface. Its flexibility, combined with the low thermal resistance of the interface, can make it a game changer in extending the life of hot, sensitive optical components.
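The effect on a pluggable module can be seen from the basic thermal-resistance relation, where the temperature rise across the interface is the module power multiplied by the interface resistance. The power and resistance values below are assumptions chosen for illustration, not published figures for our thermal bridge.

```python
# Temperature rise across a pluggable-module interface: dT = P * R_th.
# The power and resistance values are illustrative assumptions only.
def interface_rise_c(module_power_w: float, r_interface_c_per_w: float) -> float:
    """Temperature rise (deg C) from module case to heat sink across the interface."""
    return module_power_w * r_interface_c_per_w

MODULE_W = 20.0  # assumed power for a high-power pluggable optic
for name, r_th in [("conventional interface material", 1.0),
                   ("order-of-magnitude lower-resistance interface", 0.1)]:
    print(f"{name}: +{interface_rise_c(MODULE_W, r_th):.1f} C across the interface")
# Cutting the interface resistance by 10x cuts the interface temperature rise by 10x,
# which translates directly into a cooler, longer-lived optical module.
```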

ICCON Portfolio

Our ICCON portfolio consists of pin and socket connectors designed to pass increasing amounts of current through smaller and smaller spaces. With the pin and socket approach, resistive losses are typically lower because of the multiple contact points. With fewer losses, less heat is generated that the system must remove.
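The benefit of multiple contact points can be sketched with the basic Joule-heating relation, treating the contact points as roughly parallel resistances. The current and resistance values below are assumptions chosen for illustration, not ICCON ratings.

```python
# Joule heating in a power contact: P_loss = I^2 * R_effective.
# Multiple mated contact points behave roughly like resistances in parallel,
# lowering the effective contact resistance. Values are illustrative assumptions.
def contact_loss_w(current_a: float, single_point_mohm: float, points: int) -> float:
    """Heat dissipated in the contact, modeling points as equal parallel resistances."""
    r_effective_ohm = (single_point_mohm / 1000.0) / points
    return current_a ** 2 * r_effective_ohm

for points in (1, 4):
    print(f"{points} contact point(s): {contact_loss_w(100.0, 0.5, points):.2f} W at 100 A")
# 100 A through an assumed 0.5 milliohm point: 5.00 W with one contact point versus
# 1.25 W with four in parallel; the difference is heat the system no longer has to remove.
```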

As power levels keep increasing in data centers, operators must address not only how to dissipate heat but also how to deliver power to the equipment. The ICCON product line is designed to deliver power efficiently, with minimal losses, so that even as thermal loads rise, the data center is able to deliver more power to the equipment.

 

We also offer a variety of thermal products in the I/O space. Which one is right for your data center depends on the thermal strategy of the equipment. For traditional cooling systems, we offer a variety of heat sinks that can operate in switches or network interface card adapters. At TE, we have transitioned to more efficient heat sinks based on thin-film technologies, improving the thermal resistance of the connection between the heat sink and pluggable modules. We are also developing heat sink technologies that can address each customer’s thermal strategy.

At a macro level, there is a 10 to 15% loss between the point where power enters the data center and the actual point of use. That lost power comes out as heat, which must then be removed by the overall cooling system. At TE, our focus is on designing high-speed connectors and busbar connectors that are more efficient.
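As a back-of-the-envelope illustration of what that loss means for the cooling plant (the 1 MW facility size below is an assumption chosen for the example):

```python
# Distribution loss becomes a heat load the cooling system must remove
# on top of the IT load itself. The 1 MW feed is an assumed example size.
facility_power_kw = 1000.0  # assumed 1 MW at the utility entrance
for loss_fraction in (0.10, 0.15):
    heat_kw = facility_power_kw * loss_fraction
    print(f"{loss_fraction:.0%} distribution loss -> {heat_kw:.0f} kW of waste heat to cool")
```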

To develop data systems tailored to a data center’s thermal requirements, system designers should look for a partner who brings both the design expertise and a component portfolio engineered to address difficult thermal challenges, guided by the industry standards that establish the minimum set of requirements. In choosing TE, you gain a partner who can help you effectively address operating requirements and tailor your system design to achieve optimal performance.

Author

Dave Helster, TE Engineering Fellow

Throughout his career as a system-design engineer, he has partnered with customers to design high-speed communications systems for increased efficiency and business growth. As a TE Connectivity (TE) Fellow and the leader of the global System Architecture and Signal Integrity group, Dave provides the technical direction to help his team gain the technical competency needed to compete in our markets. Holding over 30 granted patents and the author of numerous technical articles, Dave is also responsible for developing technology drivers – such as standards, silicon technology, and data center design – for product innovation at TE.