Data Center Efficiency

While overall data center efficiency can be increased through server virtualization and consolidation efforts, the use of these technologies is also pushing many data centers’ power and cooling densities beyond their original infrastructure capacity. These trends, along with the EPA’s recent report on data center inefficiencies, have caused many data center owners to reevaluate the design of their facilities and to look for innovative ideas to optimize their next generation mission critical facilities.[1]

Data Center Cabling

An organized cabling system is critical for trouble-free data center operation and maintenance. The TIA/EIA-942 Telecommunications Infrastructure Standard for Data Centers establishes zoned cabling distribution to limit the amount of recabling needed during new equipment rollouts. Because of the added complexity of the cabling system, a network infrastructure tracking and management system is recommended to properly manage the physical layer in a mission critical data center. This tracking system can be integrated into a full asset management system to completely document a data center’s equipment locations and connections, as the sketch below illustrates in miniature.
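
A minimal sketch of such physical-layer documentation, assuming a simple model in which each permanent link between two patch panel ports is recorded; the Port and CablePlant names are hypothetical stand-ins for a commercial infrastructure management product, not a reference to any particular one.

    # Minimal sketch of physical-layer tracking: record each permanent
    # link between two ports so any port can be traced to its far end.
    # All names and panel labels here are hypothetical.
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Port:
        panel: str      # e.g., "ZDA-03-PP02" (a zone distribution area panel)
        number: int

    @dataclass
    class CablePlant:
        links: dict[Port, Port] = field(default_factory=dict)

        def connect(self, a: Port, b: Port) -> None:
            """Document a new permanent link between two ports."""
            self.links[a] = b
            self.links[b] = a

        def trace(self, port: Port) -> Port | None:
            """Return the documented far end of a port, if any."""
            return self.links.get(port)

    plant = CablePlant()
    plant.connect(Port("MDA-PP01", 24), Port("ZDA-03-PP02", 24))
    print(plant.trace(Port("MDA-PP01", 24)))  # Port(panel='ZDA-03-PP02', number=24)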

Integrated Controls

As control systems move to the Internet Protocol (IP) network, the existing network and cabling infrastructure can be used to add new monitoring points to the automation system. Today’s automation systems can monitor power and environmental conditions for each rack and activate an alarm during a critical event. These systems can also trend power and temperature to avoid disruption as heat and power loads increase.
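
The shape of this kind of per-rack monitoring is sketched below; the threshold values, the read_sensors() stub, and the rack names are assumptions standing in for a real building automation integration (for example, SNMP or BACnet/IP polling).

    # Minimal sketch of per-rack monitoring over IP: poll each rack's
    # sensors and raise an alarm when a reading exceeds its limit.
    # The limits and the stubbed readings are illustrative assumptions.
    RACK_LIMITS = {"inlet_temp_c": 27.0, "power_kw": 5.0}

    def read_sensors(rack_id: str) -> dict[str, float]:
        """Stub: a real system would poll the rack's networked sensors here."""
        return {"inlet_temp_c": 28.1, "power_kw": 4.2}

    def check_rack(rack_id: str) -> list[str]:
        """Compare live readings against limits; return alarm messages."""
        return [
            f"{rack_id}: {key} = {value} exceeds limit {RACK_LIMITS[key]}"
            for key, value in read_sensors(rack_id).items()
            if value > RACK_LIMITS[key]
        ]

    for rack_id in ("A01", "A02", "A03"):
        for alarm in check_rack(rack_id):
            print("ALARM:", alarm)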

Commissioning

Successful mission critical facility implementation requires comprehensive commissioning to verify that all systems perform as designed. All systems should be tested and their sequences of operation verified to ensure availability and reliability. Retrocommissioning is also an important part of ongoing operation and maintenance practice, verifying that all systems will still respond correctly in an emergency.
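
Commissioning sequences lend themselves to scripted checklists; the sketch below shows the shape of one, pairing each failure scenario with its designed response. The scenarios and responses are wholly hypothetical examples.

    # Minimal sketch of a scripted commissioning checklist: each entry
    # pairs a failure scenario with its designed response, and the
    # observed response is checked against it. All entries are
    # hypothetical examples, not results from any actual test.
    tests = [
        # (scenario, designed response, observed response)
        ("Utility failure", "generator online within 10 s", "generator online within 10 s"),
        ("UPS module fault", "redundant module carries load", "redundant module carries load"),
        ("CRAC unit failure", "standby unit starts", "standby unit did not start"),
    ]

    for scenario, designed, observed in tests:
        status = "PASS" if observed == designed else "FAIL"
        print(f"{status}: {scenario} -> designed '{designed}', observed '{observed}'")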

Cooling Strategies

A chronic issue for high performance computing facilities is heat build-up on the raised floor. The advent of newer equipment, such as blade servers, is pushing the heat output of racks beyond the traditional 4-5 kW/rack to levels approaching 20 kW/rack. There are practical limits to a typical hot aisle/cold aisle cooling arrangement, but these limits can be stretched with the aid of computational fluid dynamics (CFD) modeling. CFD analyses can uncover counterintuitive results that help identify the “hot spots” associated with higher rack densities.

[Figure: CFD analysis of the floor tile air velocities at the University of Miami Medical Research Data Center]
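
A rough sense of why 20 kW racks strain a raised floor comes from the standard sensible-heat relation for air, Q [BTU/hr] = 1.08 × CFM × ΔT [°F]. The sketch below applies it; the 20 °F server air temperature rise and the roughly 500 CFM delivered by one perforated tile are assumed, illustrative values.

    # Back-of-the-envelope airflow check using the sensible-heat
    # relation Q[BTU/hr] = 1.08 * CFM * dT[degF]. The temperature rise
    # and per-tile airflow below are assumed, illustrative values.
    BTU_PER_KW_HR = 3412    # 1 kW = 3,412 BTU/hr
    DELTA_T_F = 20          # assumed air temperature rise across the servers
    CFM_PER_TILE = 500      # assumed delivery of one perforated tile

    def required_cfm(rack_kw: float) -> float:
        """Airflow needed to remove a rack's heat at the assumed rise."""
        return rack_kw * BTU_PER_KW_HR / (1.08 * DELTA_T_F)

    for kw in (5, 20):
        cfm = required_cfm(kw)
        print(f"{kw} kW rack: {cfm:,.0f} CFM (~{cfm / CFM_PER_TILE:.1f} tiles)")
    # 5 kW rack: 790 CFM (~1.6 tiles)
    # 20 kW rack: 3,159 CFM (~6.3 tiles)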

Synergy Between Systems

Electrical and mechanical systems for data centers cannot be properly designed independently. Both systems must respond to the loads imposed by the computing equipment, space available to house the equipment, reliability demanded by applications, anticipated growth, and the owner’s budget.

To be cost effective, mechanical and electrical systems should be designed to similar standards of redundancy and reliability; otherwise, money spent overdesigning one system is wasted by the lesser design of the other. Mechanical and electrical engineers must collaborate early in the design process and communicate their recommendations to the owner. For maximum uptime, availability, fault tolerance, and concurrent maintainability, the electrical and mechanical systems must work together seamlessly.
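
The arithmetic behind this point: a data center is up only when both its electrical and mechanical systems are up, so overall availability is roughly the product of the two. The sketch below uses assumed, illustrative availability figures.

    # Series availability: both subsystems must work, so the weaker
    # one caps the result. The figures below are assumed examples.
    def overall_availability(a_electrical: float, a_mechanical: float) -> float:
        return a_electrical * a_mechanical

    balanced = overall_availability(0.9999, 0.9999)    # both at "four nines"
    mismatched = overall_availability(0.99999, 0.999)  # overdesigned electrical
    print(f"balanced:   {balanced:.6f}")    # 0.999800
    print(f"mismatched: {mismatched:.6f}")  # 0.998990 -- capped by the weaker system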

Recent Experience

Mississippi Power Company Operations Center
Lyman, Mississippi
MSTSD, Inc., Architects
A hurricane-resistant, one-story operations center with provisions for self-contained operation for three days and independent (limited outside support) operation for fourteen days. The facility includes limited living quarters for seventy-eight people during this period, with redundant engine-generator sets, water and sanitary storage tanks, and a 2,500 square foot data and communications equipment area.
75,500 square feet

BlueCross BlueShield of Tennessee
Chattanooga, Tennessee
S. L. King & Associates, Inc.
HKS, Inc., Associate Architects
Duda/Paine Architects, LLP, Associate Architects
A corporate headquarters complex including a 20,000 square foot Tier 2 data center.
950,000 square feet

University of Miami Medical Research Data Center
Miami, Florida
University of Miami School of Medicine, Client
A medical research data center with expansion capabilities for an additional 2,800 square feet. The UPS service for the facility is a 2(N+1) arrangement sized for 2.4 MW of power. The facility is a critical component in meeting the university’s growing high performance computing requirements for genetic research.
3,600 square feet

Reference

  1. U.S. Environmental Protection Agency, “Report to Congress on Server and Data Center Energy Efficiency, Public Law 109-431,” August 2007.