Data centers are consuming electricity at an astounding rate; they currently account for about 5% of total electrical use in the country. According to the report “Revolutionizing Data Center Energy Efficiency” from consultant McKinsey & Co., the energy consumed by data centers today will quadruple by 2020. Furthermore, the DOE reports that data centers can consume 100 to 200 times more electricity than a standard office space.
As the demand for the technology housed in a data center increases, owners are pressing their Building Teams to design data centers that operate more efficiently.
Recently constructed data centers for companies such as Google and Facebook have incorporated design strategies to keep energy costs in check. Beyond new construction, Building Teams are increasingly being tasked with retrofitting existing data centers, opening up a rapidly growing market for many architecture, engineering, and construction firms.
BD+C recently discussed this topic with a number of industry experts. The consensus emerging from the discussion led to seven factors Building Teams should consider when retrofitting an existing data center.
1. Understand the client’s budget
Building Teams must assure the owner that they understand the specifics of the budget designated for the data center. They must stay within the means of the client’s business budget by resisting the temptation to specify products and technology that are not critical to the data center. Overspecifying doesn’t benefit the owner, whose main concern is keeping costs to a minimum while delivering a facility that meets specific availability criteria.
“It’s critical the Building Team does not distinguish itself by applying design principles for a data center without understanding the client business model (i.e., budget) and the characteristics and limitations of the facility they are trying to develop,” says Bill Pirrone, president at Rubicon Professional Services, Middletown, N.J.
It is important for an owner to select a Building Team that has not only the technical capability to develop a data center but also the fiscal discipline to work within the client’s budget, so that the owner’s capital and performance expectations stay aligned.
“Ultimately, this could be the difference between a ‘go’ or ‘no-go’ decision relative to the business model, and what it could sustain in terms of capital investment. There has to be a balance and blend between capital performance and the business model,” says Pirrone.
2. Focus on technology and operational requirements
The cost of future data center expansions and improvements is always a function of what you start with. It is important to build a data center that can be easily upgraded down the road.
“Technology is changing rapidly; therefore, you should design the building to be ready for future additions of redundant power capacity and air- and water-based cooling systems so you are ready for the higher power densities to come with the servers and processors of the future,” says Bruce Myatt, director of mission-critical facilities, M+W U.S. Inc., San Francisco, Calif.
Retrofit projects carry their own distinct set of requirements. The key to successfully converting existing office space into an effective data center is for the Building Team to have an accurate understanding of the mechanical and electrical systems supporting the space.
“Detailed and step-by-step change management procedures are recommended for all operating data centers in order to avoid a loss of power or cooling that may result in unexpected and expensive downtime for the facility,” says Myatt.
In addition to knowing the technology incorporated into the data center, it is critical that the Building Team understand how the facility is operated. This includes detailed knowledge of how the data center is maintained, so that the engineering, design, and constructability aspects can be blended without disrupting the operational environment.
“A lot of design firms put blinders on and they become totally aloof to the technological and operational requirements necessary to retrofit the facility in a noninvasive, cost-effective fashion,” says Pirrone.
Building Teams should also consider the owner’s long-term goals with the data center in terms of both technological and structural expansion. Flexibility incorporated into the original design can help ensure the owner’s data center requirements will be met in the future.
3. Reduce energy consumption
The EPA reported that power consumption by data centers doubled between 2006 and 2011, making data centers one of the nation’s largest commercial users of electrical power. Despite DOE and EPA guiding principles designed to gauge energy use and identify opportunities for improvement, data center energy consumption is still expected to quadruple by 2020, according to the McKinsey & Co. report.
Modest operational changes incorporated during the retrofit process can provide an owner significant savings. For example, the Building Team should understand how the mechanical cooling system is operating and be able to balance the actual cooling demand against the cooling capacity being delivered. Often, excess cooling is unnecessarily delivered to the data center. Once the imbalance is identified, it can be corrected, reducing energy consumption.
“Mechanical cooling equipment could be running inefficiently or in excess because the original design process did not incorporate equipment that could modulate down to lower loads,” says Paul Mihm, executive vice president at Rubicon Professional Services.
In an existing data center, the actual IT load can be assessed in a short amount of time to identify tangible savings opportunities. For a retrofit, the design can incorporate infrastructure equipment that automatically modulates under partial-load conditions. This allows a facility to be populated in phases while yielding substantial utility savings.
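As a rough illustration of the savings at stake, the figures below (all hypothetical) compare a fixed-capacity cooling plant sized for the facility’s full build-out against equipment that can modulate down to the partial load actually present:

```python
HOURS_PER_YEAR = 8760

# Hypothetical figures: a cooling plant sized for the facility's
# final build-out, in a data center that is only 40% populated.
full_capacity_kw = 300        # cooling power draw at full output
current_load_fraction = 0.4   # share of the design load actually present

# Fixed-speed equipment runs at full output regardless of load;
# modulating equipment is assumed (simplistically) to scale linearly.
fixed_kwh = full_capacity_kw * HOURS_PER_YEAR
modulating_kwh = full_capacity_kw * current_load_fraction * HOURS_PER_YEAR

print(f"fixed-speed plant: {fixed_kwh:,.0f} kWh/yr")
print(f"modulating plant:  {modulating_kwh:,.0f} kWh/yr")
print(f"annual savings:    {fixed_kwh - modulating_kwh:,.0f} kWh")
```

Real equipment does not modulate perfectly linearly, but the sketch shows why equipment that cannot turn down to low loads wastes energy for as long as the facility remains partially populated.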
Of course, other means of reducing power consumption exist throughout a data center. According to Myatt, these include:
- Aisle containment for separating the cool supply air and the warm exhaust air
- Elevating supply air temperatures to take advantage of the expanding allowable temperature envelope
- Incorporating free-cooling methods using air- or water-side economizers
4. Establish a cable management strategy
Cables and other rack-mounted equipment can block the airflow through racks in a data center, preventing the necessary removal of heat from the space. Furthermore, cable congestion in raised floors can also reduce total airflow and impact airflow distribution through perforated tile floors. Both situations can promote the development of hot spots.
When retrofitting a data center, the Building Team should implement a cable management strategy to minimize the airflow obstructions caused by cable congestion. The strategy should involve the entire cooling airflow path, including rack-level IT equipment air intake and discharge areas, as well as underfloor areas.
Routing cables in an organized fashion either overhead or below the floor and along the rack corner posts will help avoid this interference. As a result, the cable management strategy can help maintain effective air management and avoid creating hot spots within the data center.
5. Pay close attention to data center density
From a power and cooling standpoint, specifying the number of watts per rack is critically important. In recent years, data center densities have risen sharply. The traditional data center constructed years ago is no longer capable of handling the densities demanded today, driving the need for retrofits.
Recently, an 800,000-sf data center was constructed inside the 70-year-old Macy’s building in Boston with infrastructure housed in the basement and in multiple areas on the roof. The power density exceeds 400 watts per square foot, which is nearly quadruple the power density of a typical data center built 10-15 years ago.
“When you get to that level of density, particularly on the cooling side, you need to start rethinking the methodology of your density in terms of the design,” says Kevin Brown, vice president of data centers at Schneider Electric.
Brown recommends Building Teams also pay close attention to containment—the separation of cool supply air and warmer exhaust air—when retrofitting data centers, regardless of the density. One of the major contributors to inefficiencies in data centers is the lack of isolation between the hot air and the cold air. This can be resolved by containing the hot air and forcing it directly back into the cooling system, says Brown.
In new construction, hot-aisle containment is recommended. For a retrofit, however, cold-aisle containment may be more practical to implement.
Brown suggests in-row cooling systems that fit right next to the rack, as opposed to larger perimeter units outside the data center. The in-row cooling systems are better-suited for a retrofit application as the installation can be done incrementally. In-row cooling racks are also beneficial in data centers that have both high- and low-density racks. The in-row cooling racks can be placed alongside the higher density racks to ensure supplemental cooling as close to the load as possible.
6. Effectively manage the PUE rating
Power usage effectiveness (PUE) is an industry-recognized metric developed by The Green Grid, a technology consortium dedicated to raising the energy efficiency of data centers. It reflects a data center’s power efficiency by dividing the total power entering the facility by the power consumed by the IT equipment alone.
A rating of 1.0 would be a perfect score: every watt entering the facility would reach the IT equipment. Best-in-class PUE ratings (1.08 to 1.20) belong to companies such as Google and Facebook, which incorporate very progressive cooling systems using outdoor air and advanced technologies. By comparison, for new construction using off-the-shelf technology and best practices, the Building Team and owner should be able to achieve a PUE in the range of 1.35 to 1.40.
Ideally, a retrofitted data center that is operating efficiently should have a PUE rating between 1.35 and 1.60. To get there, the IT load must be closely matched to the data center’s power and cooling capacity.
“Inefficiencies start being introduced if there is a 250 kW load and the data center has a megawatt of power and cooling equipment. By nature, it will be a very inefficient data center with a very high PUE rating,” says Brown. “If the 250 kW load is matched up with 250 kW of power and cooling equipment, the PUE rating should generally fall within the 1.35 to 1.60 range.”
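Brown’s arithmetic follows directly from the PUE definition. In the sketch below, the 150 kW overhead figure is hypothetical, chosen only to illustrate a right-sized facility landing in the quoted range:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """PUE = total power entering the facility / power used by IT equipment."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Hypothetical right-sized facility: a 250 kW IT load plus 150 kW of
# cooling, power-distribution losses, and lighting overhead.
it_kw = 250
overhead_kw = 150
print(round(pue(it_kw + overhead_kw, it_kw), 2))  # 1.6
```

The same 250 kW IT load served by infrastructure sized for a megawatt would carry proportionally more fixed overhead, which is exactly why the oversized facility Brown describes ends up with a much higher PUE.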
According to Brown, when retrofitting a data center, a PUE of 1.8 is acceptable and 1.6 is attainable. A PUE rating of 1.35 to 1.40 typically indicates a data center with a traditional chilled water cooling system.
The 1.08 to 1.20 PUE ratings are indeed possible. However, the entire building needs to be designed and engineered specifically to accomplish these lower PUE ratings.
7. Consider taking a modular approach to your design
Many owners are now opting for a modular approach to new and retrofit data centers alike. Modular data center construction can provide flexibility and scalability.
Instead of thinking in terms of one large data center of 10,000 sf, owners should think of the space sectioned off as four 2,500-sf data centers. “A section of data center may contain 16-18 racks, and it should be thought of as a miniature data center. When you think in those terms, it’s easier to address technology refreshes. Instead of refreshing an entire data center, the owner could do only one portion,” says Schneider Electric’s Brown.
“The modular approach frequently provides the most efficient systems due to the tighter control possible. Because of the possibility of incremental and rapid deployment, it can match more closely to your actual load growth,” says Terry Rennaker, vice president of Skanska USA Building, New York, N.Y. BD+C