
Data Centers Keeping Energy, Security in Check

Sept. 13, 2010

Power consumption for data centers doubled between 2000 and 2006, and it is anticipated to double again by 2011, making these mission-critical facilities, collectively, the nation's largest commercial consumer of electric power. Major technology companies, notably Hewlett-Packard, Cisco Systems, and International Business Machines, are investing heavily in new data centers. HP, which acquired technology services provider EDS in 2008, announced in June that it would be closing many of its older data centers and building new, more highly optimized centers around the world. It is no wonder that Building Teams see these mission-critical facilities as a golden opportunity, or that they are working hard to keep data center energy costs in check.

Although server equipment is the number one power gobbler in data centers, and server demand will no doubt continue to grow, engineers are making great strides in reducing power usage effectiveness (PUE), a key operational metric, as well as improving back-up power efficiencies and employing energy-efficient cooling strategies. In addition, industry initiatives such as the Green Grid and the U.S. Green Building Council's LEED for Data Centers standard, now under development, will help mitigate the environmental effects of data center growth.

MEETING POWER REQUIREMENTS

As server technology continues to advance, with new blade racks holding up to five chassis of 16 servers each and creating very high power densities of up to 30 kilowatts (kW) per rack, electrical distribution and cooling have become much more challenging. While numerous approaches can be taken, data center designers uniformly agree that the best strategy for accommodating higher power densities is better airflow management.
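
To put those rack densities in perspective, here is a minimal back-of-the-envelope sketch; it simply divides the cited 30-kW rack load across the cited 80 blade servers (the per-server result is an average for illustration, not a figure from the article):

```python
# Back-of-the-envelope rack power density, using the figures cited above.
CHASSIS_PER_RACK = 5          # blade chassis per rack (from the article)
SERVERS_PER_CHASSIS = 16      # blade servers per chassis (from the article)
RACK_POWER_KW = 30.0          # high-density rack load cited above

servers_per_rack = CHASSIS_PER_RACK * SERVERS_PER_CHASSIS    # 80 servers
watts_per_server = RACK_POWER_KW * 1000 / servers_per_rack   # 375 W average

print(f"{servers_per_rack} servers per rack at {watts_per_server:.0f} W each "
      f"averages out to {RACK_POWER_KW:.0f} kW per rack")
```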

"By ensuring adequate separation between hot and cold air streams, the heat rejection equipment works more efficiently," says Adil Attlassy, vice president of engineering at Digital Realty Trust (www.digitalrealtytrust.com), San Francisco, a global data center developer, owner, and operator. Preventing short-circuiting of hot air at the equipment intakes makes it possible to run higher loads within the standard cabinet while still using traditional forced-air cooling, he adds.

Dan Dickenson, PE, LEED AP, mechanical engineering principal with AECOM's Ellerbe Becket (www.ellerbebecket.com), Minneapolis, recommends more efficient uninterruptible power supply (UPS) and server rack distribution systems, as well as novel technologies such as hot aisle containment, server virtualization, and systems that take advantage of sleep mode when possible. Another building strategy, Dickenson says, is to look for ways to capture heat generated by the data center for recycling into a usable utility service.

Paul E. Schlattman, vice president of the mission-critical facilities group at Chicago's Environmental Systems Design, an MEP engineering firm, sees hot aisle/cold aisle configurations as an effective strategy. Furthermore, chilled water applied directly to the rack can reject heat more quickly than air, although not all racks on the market can accommodate this technique. And because today's servers can run at higher temperatures, Schlattman adds, raising chilled-water temperatures and temperature setpoints within the cold aisle can help reduce the PUE.

Building Teams have come up with no shortage of PUE-reducing ideas, as energy reduction is the holy grail of data facility design. Charles B. Kensky, PE, LEED AP, executive vice president of Bala Consulting Engineers (www.bala.com), Philadelphia, lists several of the more prominent strategies:

  • Scalable and modular system designs and equipment.
  • Water-cooled equipment with free cooling features.
  • 100% outside air for free cooling.
  • Hot aisle/cold aisle IT equipment arrangements with ceiling returns.
  • High-efficiency UPS modules.
  • Lights-out operations.
  • Water cooling of IT equipment.
  • Energy Star IT components.

Thomas E. Reed, PE, principal and senior director with KlingStubbins, in Philadelphia, points out that the temperate climates in many areas of the United States and most parts of Canada can support "free cooling" for 7,000-8,000 hours per year, meaning that unconditioned outdoor air is often sufficient to cool the data center interiors. "This can be achieved by injecting varying amounts of filtered outdoor air directly into the data center for cooling or using the evaporative effects of outdoor air for producing chilled water cooling," says Reed.
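
Reed's figure works out to roughly 80-90% of the 8,760 hours in a year. The sketch below shows, in schematic form, how a designer might estimate economizer hours by counting hours below a supply-air threshold; the 65°F threshold and the synthetic weather series are illustrative assumptions (a real study would use measured hourly weather data and also screen for humidity limits):

```python
# Rough estimate of air-side economizer ("free cooling") hours from hourly
# dry-bulb temperatures. The threshold and the synthetic weather series are
# illustrative assumptions, not data from the article.
import random

random.seed(0)
# Hypothetical hourly dry-bulb temperatures (degrees F) for one year.
hourly_temps_f = [random.gauss(mu=55, sigma=15) for _ in range(8760)]

ECONOMIZER_MAX_TEMP_F = 65  # assumed supply-air threshold for free cooling

free_hours = sum(1 for t in hourly_temps_f if t <= ECONOMIZER_MAX_TEMP_F)
print(f"Estimated free-cooling hours: {free_hours} "
      f"({free_hours / 8760:.0%} of the year)")
```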

And now that newer standards, such as ASHRAE TC 9.9, Thermal Guidelines for Data Processing Environments, allow wider temperature and humidity ranges inside typical data centers, designers can really take advantage of outdoor air, although they must still account for humidity and dust control.

All these power-saving strategies matter, and for good reason: Saving one watt at the server floor can ultimately translate to saving approximately 2.5 watts of overall electrical service once all power losses and equipment inefficiencies are accounted for, according to Robert Sty, PE, LEED AP, a mechanical engineer in the Phoenix office of A/E firm SmithGroup (www.smithgroup.com).
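
Sty's multiplier is easiest to see through the PUE metric itself, which is defined as total facility power divided by the power delivered to IT equipment. To a first approximation (assuming cooling and power-delivery overhead scale with the IT load), a facility running at a PUE of about 2.5 saves roughly 2.5 watts at the utility meter for every watt trimmed at the server. A minimal sketch with hypothetical loads:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical facility: 1,000 kW of IT load plus 1,500 kW of cooling,
# UPS/distribution losses, lighting, etc.
it_kw = 1_000.0
overhead_kw = 1_500.0
facility_pue = pue(it_kw + overhead_kw, it_kw)
print(f"PUE = {facility_pue:.2f}")  # 2.50

# To a first approximation (overhead scaling with IT load), trimming 1 kW at
# the server floor saves about PUE x 1 kW at the utility meter.
print(f"Approximate facility-level savings per kW of IT load removed: "
      f"{facility_pue:.1f} kW")
```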

At the same time, Peter M. Curtis, president of Power Management Concepts (www.powermanage.com), Bethpage, N.Y., and an associate professor at the New York Institute of Technology, points out that even though PUE is a valuable metric for measuring how well a data center is performing with regard to power usage, "It is an ongoing process which needs constant measurement and trending to ascertain the progress and overall impact on energy savings, especially because data centers are such dynamic environments with moves, adds, and changes occurring on almost a daily basis."

UNINTERRUPTIBLE UPTIME

Back-up power systems, and particularly uninterruptible power supply (UPS) systems, are an essential part of any mission-critical center's power reliability scheme. Yet they are notorious resource hogs. Fortunately, advances are being made to reduce waste and redundancy in UPS systems.

"UPS manufacturers are at various stages of producing econo-mode UPS systems," says Digital Realty Trust's Attlassy, who has worked on dozens of data center projects. "The econo-mode settings will essentially run the critical loads on static bypass, with controls to sense and quickly switch to full UPS protection in the event of an input power anomaly." However, says Attlassy, most enterprise users are not prepared to accept the additional risks associated with these types of systems, such as slow reaction times that might expose downstream equipment to potentially harmful surges.

Beyond this innovation, more conventional technology tweaks include transformerless UPS systems, active control of rectifiers and inverters, and delta transformers installed within the UPS, which convert three-phase power without a neutral wire into three-phase power with a neutral. These techniques are claimed to push backup power efficiencies as high as 98%. For example, a UPS with active control of its rectifiers and inverters engages true double conversion (the less efficient route, in which power is continuously converted from AC to DC and back) only in the event of a power abnormality or outage, according to Tom Faucette, PE, LEED AP, an electrical engineer in SmithGroup's Washington, D.C., office. He adds that the development of 400V systems and their acceptance within the industry have allowed designers to reduce some losses associated with transformers in power distribution units.
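
The stakes of those efficiency gains are easy to quantify. The sketch below compares annual UPS losses for a hypothetical 1,000-kW critical load at the 98% efficiency cited above versus an assumed 92% legacy double-conversion baseline; the baseline efficiency and the electricity rate are illustrative assumptions:

```python
# Annual UPS losses and energy cost for a hypothetical 1,000 kW critical load.
# The 92% baseline and the $0.10/kWh rate are illustrative assumptions; the
# 98% figure is the upper bound cited above.
IT_LOAD_KW = 1_000.0
HOURS_PER_YEAR = 8_760
RATE_PER_KWH = 0.10  # assumed electricity cost, $/kWh

def annual_ups_loss_kwh(load_kw: float, efficiency: float) -> float:
    """Energy lost in the UPS per year: input energy minus delivered energy."""
    input_kw = load_kw / efficiency
    return (input_kw - load_kw) * HOURS_PER_YEAR

for eff in (0.92, 0.98):
    loss = annual_ups_loss_kwh(IT_LOAD_KW, eff)
    print(f"{eff:.0%} efficient UPS: {loss:,.0f} kWh/yr lost "
          f"(about ${loss * RATE_PER_KWH:,.0f}/yr), before added cooling load")
```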

However, not everyone in the field is sold on these new technologies. "Only time will tell regarding certain technological changes," says Power Management Concepts' Curtis, a published author and industry lecturer in the data center field. "We will have to see how these systems actually respond in the field."

Other engineers and experts are more sanguine about some of these advances. "New modular, scalable UPS and battery system designs are allowing users greater flexibility and ease of future deployments and upgrades," says Daniel J. McGroary, a senior project manager and data center market sector leader with Bala Consulting Engineers. He adds that another form of physical energy storage, flywheels paired with traditional battery plants, is becoming more common.

When it comes to reliability and quality, however, Ellerbe Becket's Dickenson says, the best systems utilize 1) traditional double-conversion UPS systems for power quality, 2) flywheel energy storage to handle most power blips and protect the batteries, and 3) batteries (possibly) for additional backup time. "They also have paralleled generator sets for the longer duration outages," he adds.

MORE COOLING STRATEGIES

Taking a more analytical look at some of the prevalent cooling strategies for mission-critical facilities, SmithGroup's Sty says that the chilled-water approach, as opposed to more conventional air cooling, is catching on, although it remains controversial. Sty's advice? Don't be too quick to dismiss chilled water. "Proper coordination and implementation of the power, data, and chilled-water infrastructure can minimize risk while greatly improving cooling efficiency," he says. "Cooling with chilled water allows the system to respond to the specific needs of each equipment rack, reducing waste associated with conditioning a fixed volume of air that is not responsive to at-rack demands."

For example, one of SmithGroup's government clients in the Rocky Mountain region decided to specify servers housed in equipment cabinets with integrated direct water-cooled radiators. This approach successfully removed up to 90% of the heat generated at the rack.

For this same project, the design also taps into the cool, dry mountain climate to produce cooling water for more than 98% of the year, without using refrigerated chillers. "This, in combination with a very low static pressure air delivery design, resulted in a preliminary energy model producing a PUE of 1.06," says Sty. Such a low PUE is considered "very good," he adds.

Drawing from his 27 years of experience designing data centers, KlingStubbins's Reed emphasizes the importance of the hot air/cold air containment approach to achieve a well-managed airflow scheme. The basic strategy here is to separate the hot and cold air paths: At low loads, he explains, this may be accomplished with 1) continuous hot/cold aisle arrangements; 2) air-management features, such as cabinet blanking panels (which block unintended airflow through empty slots in equipment cabinets) and cable brush grommets (which seal cable openings in raised floors); and 3) a return-air ceiling plenum to reduce air recirculation.

To capture really big efficiencies, however, Building Teams need to adopt more aggressive strategies, such as hot- or cold-air containment at the row or cabinet level, each of which has its own advantages and limitations. According to Reed, hot-aisle containment maximizes efficiency and keeps most of the computer room at an acceptable temperature. He adds, however, that "there are practical limits considering that with high-density loads, the temperature in that area will be above 100°F and unsuitable for personnel."

On the other hand, cold aisle containment "exposes more of the room to higher temperatures, which may be undesirable, but can be a good way to regain 'stranded' cooling capacity in an existing data center with air-management deficiencies," says Reed. "Cold aisle containment may also make more sense when return ducting or a ceiling plenum is not possible."

To combine the best of both worlds, Reed recommends including features from both systems: in other words, partially contained cold aisles with aisle end-doors or cabinet extensions, as well as ducted returns or a ceiling plenum. Since every project is one of a kind, Reed says multiple cooling options must be considered to meet its unique parameters, including cost constraints, energy-efficiency goals, cabinet heat loads, space planning and layout, and the preferences of the owner and end users.
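
The above-100°F figure Reed cites for contained hot aisles falls out of a simple energy balance on the air moving through the racks: the temperature rise equals the heat load divided by the product of the air's mass flow and specific heat. A minimal sketch, in which the supply temperature and airflow are illustrative assumptions and the 30-kW load is the high-density rack figure cited earlier:

```python
# Temperature rise across a rack from an energy balance: Q = m_dot * cp * dT.
# The supply temperature and airflow are illustrative assumptions; the 30 kW
# load is the high-density rack figure cited earlier in the article.
RACK_LOAD_W = 30_000.0    # rack heat load, W
AIRFLOW_CFM = 3_000.0     # assumed airflow through the rack, cfm
SUPPLY_TEMP_F = 75.0      # assumed cold-aisle supply temperature, deg F

AIR_DENSITY = 1.2         # kg/m^3, near sea level
AIR_CP = 1_005.0          # J/(kg*K)
CFM_TO_M3S = 0.000471947  # 1 cfm in m^3/s

airflow_m3s = AIRFLOW_CFM * CFM_TO_M3S
delta_t_c = RACK_LOAD_W / (AIR_DENSITY * AIR_CP * airflow_m3s)  # ~17.6 C
exhaust_temp_f = SUPPLY_TEMP_F + delta_t_c * 9 / 5              # ~107 F

print(f"Temperature rise across the rack: {delta_t_c * 9 / 5:.0f} deg F")
print(f"Hot-aisle exhaust temperature: {exhaust_temp_f:.0f} deg F")
```

Under those assumptions the contained hot aisle sits at roughly 107°F, consistent with Reed's warning that such areas are unsuitable for personnel.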

ASHRAE'S ACTIVISM

New recommendations, such as the TC 9.9 thermal guidelines from the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), allow higher supply temperatures. While the updated guidance offers much more design flexibility, the data center industry is approaching the matter with some hesitation.

"Many owners are skeptical of the higher supply temperatures, and are concerned about cooling ride-through on a power failure, transient temperatures, and shorter reaction time available to facilities staff with these higher temperatures," says Bala Consulting Engineers' Kensky, a frequent industry presenter on data center HVAC and green design. Cooling ride-through refers to the time available for a data center to operate without power within an acceptable temperature range; transient temperatures include such effects as heat build-up in localized areas of a server room.

Digital Realty Trust's Attlassy points out that running at higher temperatures removes the operating temperature cushion, thus shortening the amount of time systems can survive during restart of cooling systems.

At the same time, Kensky recognizes the ASHRAE recommendations as a sound design strategy and viable option in data centers with the appropriate IT load densities and cooling systems. Likewise, Curtis's team at Power Management Concepts has also been encouraging clients to expand their operational environmental envelope, but only after a full assessment has been completed.

"We have also been utilizing ASHRAE recommendations for higher server temperatures where the equipment can handle the elevated temperatures and the owner/operator is comfortable with this type of approach," adds Ellerbe Becket's Dickenson.

NEXT UP: LEED FOR DATA CENTERS

Although the Leadership in Energy and Environmental Design rating system was created for application to conventional commercial facilities with standard occupancies, the U.S. Green Building Council is in the process of developing a LEED for Data Centers standard. In the meantime, more progressive Building Teams have been achieving LEED certification for their data center projects. Digital Realty Trust has eight LEED-certified data centers, two of them Platinum-certified, with 12 others currently going through the LEED certification process, according to Attlassy.

Environmental Systems Design was involved with three of the industry's 10 LEED Gold data center projects last year. One of them, Allstate's new greenfield data center in the Chicago metropolitan area, won the Data Center Executive of the Year's Green Data Center of the Year award in 2009, recording a commendable PUE of 1.34 during August 2009.

SmithGroup's data center portfolio includes the Gold-certified Lawrence Berkeley National Laboratory Molecular Foundry facility, which comprises laboratory, clean room, and data center components. The novel, modern-looking facility applies several sustainable strategies, such as high-efficiency air handling, air-side economizers, and wind tunnel modeling. Another LEED Gold project, for a Southern Arizona data center client, employs a nonchemical water treatment system, reclaims blowdown water from the cooling towers, and uses a high-efficiency central plant. "This captured water is used for landscaping/irrigation and nonpotable applications," Sty explains.

In addition to LEED, another valuable industry resource for the mission-critical crowd is the Green Grid (www.thegreengrid.org), an online library featuring useful design guides and tools created by a global consortium of IT companies and professionals seeking to improve energy efficiency in data centers. One of the website's most popular areas is a collection of PUE metrics compiled from data centers nationwide. "This allows facilities to benchmark and compare their own performance," relates SmithGroup's Sty. "Hopefully, there's a little friendly competition out there driving energy efficiency."
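
As a sketch of the kind of benchmarking Sty describes, the snippet below ranks one facility's measured PUE against a set of peer values; all of the numbers are hypothetical placeholders, not actual Green Grid data:

```python
# Simple PUE benchmarking: where does our facility fall among reported values?
# All figures are hypothetical placeholders, not actual Green Grid data.
peer_pues = [1.4, 1.6, 1.8, 2.0, 2.1, 2.3, 2.5, 2.8]  # assumed peer data set
our_pue = 1.7                                          # assumed measurement

better_than = sum(1 for p in peer_pues if our_pue < p)
print(f"A PUE of {our_pue} beats {better_than} of {len(peer_pues)} peers "
      f"({better_than / len(peer_pues):.0%}).")
```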

While things are definitely moving in a positive direction on the LEED front, Reed points out that there are significant discrepancies between how LEED and its referenced engineering standards treat data centers and how they treat general commercial building types, particularly regarding energy and water costs. He recommends that ASHRAE Standard 90.1 be updated to address buildings with high process loads, "since that is what the USGBC relies upon for energy cost-savings calculations."

In addition, while the current LEED system recognizes things like variable-speed cooling fans, high-efficiency chilled-water systems, and air- and water-side economizers, other popular data center systems–such as high-efficiency transformers and UPS systems, and reduced cooling-tower water consumption methods–are not adequately acknowledged at this point, say many designers.

In an attempt to address this shortcoming, KlingStubbins is taking part in a California Energy Commission-sponsored Environmental Performance Criteria industry group. The cross-disciplinary group is developing a data-center-specific format and point structure that it will ultimately recommend for adoption into the LEED for Data Centers standard now under development.

NEW APPROACHES TO FIRE PROTECTION

Fire protection is another critical issue for these high-heat, high-value buildings. Depending on the jurisdiction, building codes may require such safety measures as water-based fire suppression systems. Still, there's no shortage of opinions regarding optimally safe design.

On the one hand, some engineers, including Attlassy, maintain that products like double-interlock pre-action sprinklers offer sufficiently effective, localized fire suppression. He adds that there is no benefit to installing an additional suppression system, since coordinating the two systems is difficult and raises the risk of premature discharge. Other expert designers, however, prefer integrating a gaseous clean-agent system with a water system.

"Because the FM200 system heads are activated by smoke, the system can act quite a bit faster than the wet system, which is activated by heat," says SmithGroup's Sty, with regard to the clean-agent, non-water suppressant. "Then there's the whole issue of water damage due to wet sprinkler discharge. The safest approach is to have an FM200 system as the primary source, with the water system as a back-up. This will drive the design to a higher first cost, but the facility may be willing to make the investment to protect the operation." A less expensive alternative to FM200, the FE-25 system, is gaining acceptance in the industry, according to ESD's Schlattman.

Also gaining in appeal are VESDA (very early smoke detection apparatus) systems. "Detection systems should take advantage of the latest early-warning systems in all areas, including the data center proper, underfloor areas, and return-air plenums," recommends Reed.

While the VESDA systems are quite costly, the cost is justifiable, at least in the opinion of experts such as Doug Berry, owner and president of Texas Fire Protection Specialists (www.tfpsinc.com), Carrollton, Texas. "These units will give the facility personnel an early warning to quickly investigate a possible incident," he says–and time is critical in data center emergencies. However, Bala's Kensky believes that the VESDA technology is too sensitive for system activation and should be used for early warning in conjunction with standard smoke- and heat-detection systems.

With regard to wet-pipe systems, some Building Teams emphasize the importance of solid pipe design. "Code-minimum pipe wall thicknesses can be rapidly eroded and fail with repeated fill-and-drain cycles mandated by local officials to test the systems," Reed explains. "Piping systems should be carefully engineered in coordination with local officials to build in the demands of the test requirements. Frequently in data centers, alternate inert gases are used to pressurize the pre-action piping." He says that, where it is cost-effective, schedule-40 pipe can be considered.

BETTER STRUCTURES AND MATERIALS

While MEP and fire and life-safety systems represent the overriding design priorities for data centers, the kind of shell and building system used to house the mission-critical equipment is also crucial to successful operation. "The facility's physical resistance to natural disasters works in tandem with its mechanical and electrical redundancy, and down-time resulting from damage to the building may still equate to lost revenue," says KlingStubbins's Reed.

Consequently, building materials and construction methods must be carefully thought through. For example, "Steel construction can be quite cost-effective, but for many data centers, a typical office-building steel deck is often just not enough to resist the kinds of uplift forces generated by extreme high winds, and some institutions will look to poured concrete for greater stability," says Reed.

In certain regions, such as the East Coast or Midwest, a steel interior structure may work just fine, particularly if it's a noncritical or low-tier data center.

"Lightweight, open-web joist and metal deck-framing systems easily accommodate longer spans and work well with precast or tilt-up exterior wall panels," says Reed. On the other hand, "Supporting collateral or hung loads are restricted and need to be carefully coordinated, or extensive reinforcing of the joists may be required."

Concrete, particularly precast tilt-up construction, has emerged as a common choice for data centers on account of its durability, cost benefits, and speed of erection.

Meanwhile, cast-in-place concrete framing systems are well suited for construction markets that support economical cast-in-place structures and can absorb the cost premium associated with forming the higher floor-to-floor heights required by data centers. In general, says Reed, cast-in-place can be used on spans under 45 feet; if a bay can be limited to 30x30 feet, a flat slab or flat plate may be economical, minimizing forming costs. "With appropriate slab thicknesses and concrete cover over reinforcing steel, concrete construction will provide required fire ratings without the need for supplemental fireproofing," he says.

Another option, a composite concrete slab on metal deck with a steel frame, combines the benefits of construction speed and greater load-resisting capability. Ideal for multi-story data centers, this system can achieve fire-rated construction by increasing the thickness of the slab and applying spray fireproofing to the columns, according to Reed. "This system can be integrated with tilt-up or precast construction, although generally the gravity loads will be supported by steel columns and the exterior façade panels will only resist lateral wind loads on the façade," he says.

One drawback, however, is that if the data center needs to fit an existing aesthetic (on a college campus, for example), concrete may be harder to work with, says Bill Ash, AIA, LEED AP, a SmithGroup architect based in Raleigh-Durham, N.C.

While a number of roof systems can work for data centers, concrete slab deck is the preferred choice, say many building experts. "Roofing systems normally consist of a poured concrete substrate with a fully adhered fleece-backed membrane, two layers of mechanically fastened insulation topped by a loose laid, single-ply membrane system with ballast, and walkway pavers," explains Faye LeDoux, a principal and project director with Ellerbe Becket.

Additional beneficial roof features include an integrated leak-detection alarm system, as well as insulation and solar-reflective materials to help reduce external heat load. Also, if photovoltaics are to be installed on an existing roof, it's important to retrofit the roof first, a commonly overlooked step, adds Curtis.

Regarding insulation, Reed recommends installing wall and roof insulation in tandem with a high-quality vapor barrier, because the data center will be humidified in the winter. "Moisture migration out of the data center is both a risk and an energy waste," says Curtis.

Important: If possible, design the roof to drain fully to the exterior, with no internal roof drain leaders; this, the experts say, is the ideal configuration.

ADDING WINDOWS TO DATA CENTERS

For obvious reasons, windows are not very common in data centers. Security is the chief concern, but windows can also add cooling load and make the space more vulnerable to weather events.

"Weather events and other hazard conditions must be addressed when specifying any exterior glazing for a critical or raised-floor area," says EllerbeBecket's LeDoux. "On occasion we have designed curtain wall systems to withstand 200-mph winds, but have specified that the system must be tested and certified."

At the same time, daylighting is becoming more desirable, especially in data center spaces frequently occupied by people (see "Designing Data Centers for People, Too"). In some cases, owner companies actually want to display the inner workings of their data center for visitors and passersby, says SmithGroup's Ash. "Lobbies or visitor areas may include view windows into secured areas, but they need to utilize ballistic glass, security frames, or other electronic security measures that will likely affect the availability of options, finishes, and sizes," he notes.

As an alternative, secured area wells or planted courtyards can bring daylight into human-occupied spaces deeper in the interior. If windows are strongly desired along the exterior perimeter, one strategy is to turn that perimeter zone into a protected service corridor, so that visibility into the actual data center is minimized.

MODELING MISSION-CRITICAL

The learning curve for designers to use modeling tools, such as computational fluid dynamics (CFD) and building information modeling (BIM), is becoming much more manageable. This is good news, say data center specialists like Digital Realty Trust's Attlassy: "CFD is essential for designing and operating a data center," he says. "Operating a data center without this tool is like driving while blindfolded."

Now that these design tools have become mainstream, it is clear that they save money, time, and energy in mission-critical facilities, based on the experience of firms like Environmental Systems Design. The Chicago-based MEP firm commonly employs modeling tools for multi-trade drawing coordination, energy modeling, air distribution, short-circuit and arc-flash studies (commonly performed with SKM software), and reliability calculations.

KlingStubbins uses CFD modeling on virtually every design to assure performance at varying load levels and to assure proper containment of hot and cool air, says the firm's Tom Reed. "For existing facilities, we frequently use CFD to gain energy benefits and higher utilization of existing equipment, which, in turn, allows higher capacity to be deployed in the data center," he says.

Bala engineers commonly employ CFD to help identify hot spots, verify transient temperatures during power failures, and optimize energy utilization.

CFD can also be used to verify temperature profiles during normal operation, but SmithGroup's Sty particularly values its ability to let designers run various scenarios. "We have used it to predict profiles during a simulated failure of computer room AC units," he says. "This simulation is very important, especially as average densities go up, which reduces the amount of time facility maintenance staff have to act on an equipment failure."

As for BIM, one of the main benefits of building information modeling is its ability to support a higher level of coordination during the design phase. "This helps to expedite the front-end construction activities and cut down the overall time to deploy equipment in a new data center," notes Bala's McGroary. "We are also seeing BIM as a very valuable tool for the data center managers to manage the facility infrastructure and modifications beyond the initial design and build activities."

As KlingStubbins's Reed points out, however, to get the most out of BIM, the design team and all construction trades should be on board, fully using and collaborating on the model from the start.

While there have been significant advances in data center technology and design, as tools like BIM and CFD illustrate, green design and scaled-back budgets will surely continue to shape tomorrow's mission-critical facilities. There will be no shortage of opportunities for designers and builders to try out exciting new design and construction ideas in the data centers of tomorrow. BD+C

Caption: A corporate greenfield data center in Northern Illinois, designed by ESD, utilizes overhead modular power to enable greater power densities and flexibility.
