E-Guide

Best practices for data center cooling and power management

As power and heat densities continue to rise, cooling hardware becomes more difficult, and the importance of energy efficiency and air containment solutions becomes evident. In this expert e-guide, learn about the design issues of hot- and cold-aisle approaches and find out about the tradeoffs of containment. Also, discover why it is necessary to invest in energy-efficient power and cooling for the data center.

Sponsored By:

Table of Contents

Containment solutions for data center cooling
Invest in energy-efficient power and cooling for data center ROI
Resources from Legrand Ortronics

Containment solutions for data center cooling

By Robert McFarlane, Contributor

As power and heat densities continue to rise, cooling hardware becomes more difficult, and the importance of energy efficiency and air containment solutions becomes evident. This tip explains the design issues of typical hot- and cold-aisle approaches, and introduces the important concepts and tradeoffs of containment.

Overcoming hot- and cold-aisle design issues

Hot- and cold-aisle design was developed to lessen the occurrence of hot and cold air mixing. It was a big step forward, allowing air to be directed to and from thermal loads more efficiently and making it possible to cool higher-density loads. But as heat output grew, some of the previous air-mixing problems started creeping back in. The major cause was easy to identify and solve: open spaces.

If there are open spaces, hot air can't be kept in the hot aisle, and cold air can't be kept in the cold aisle. Therefore, all the openings in cabinets need to be blocked.
Blocking the holes keeps hot air from recirculating back through the rack spaces between computers and noncontiguous cabinets, and also keeps valuable cold air from bypassing the equipment through those openings. The use of blanking panels in unused rack spaces is still the most ignored principle in the industry, and it leads to a great deal of ineffective data center cooling and energy waste.

But there are two other factors to consider: heat wants to rise, and fans will take the air they want from wherever they can get it. As heat densities increase, hot air spills over the tops of cabinets and back into the servers. As equipment needs more air, the fans also pull it around the ends of rows and back into the cold aisles. The obvious solution is to put barriers in those pathways as well: block the ends of aisles with walls and doors, and put a ceiling over the aisles at the tops of cabinets.

Voila: the hot aisle or cold aisle is now contained! Hot air is now completely trapped in the hot aisle so it can't escape, and cold air is contained in the cold aisle so none of it is wasted. Seems simple enough, but let's examine further.

The two containment types

So why do we even need to decide between hot-aisle and cold-aisle containment? Why not just contain both aisles and run the rest of the room on building air? It has been done, but it creates a lot of unnecessary work and expense. Take the time to decide which method is right for you, understand the potential problems and consider the benefits.

Hot-aisle containment

Hot-aisle containment is generally accepted as easier to implement than cold-aisle containment, and it has a small advantage in energy efficiency.
Proponents note that the rest of the room has the same comfortable environment as the cold aisle, which actually doesn't need to be "cold" anymore, and may well be 75 degrees Fahrenheit (23.9 degrees Celsius) or as high as 80.6, according to ASHRAE's revised Thermal Guidelines for Data Processing Environments.

With so much of the room at a reasonable temperature, equipment fans can draw air from wherever they need it. Therefore, while we should always endeavor to deliver sufficient air to the hardware, it is not as critical to control as it is in cold-aisle containment.

The major drawback usually cited with hot-aisle containment is the working environment within the hot aisle, which can reach 95 degrees Fahrenheit or higher. This is not a comfortable working condition for extended periods of time but, contrary to popular belief, it does not exceed OSHA standards. To temper the heat, some designs actually introduce a little cold air into the contained area to keep the temperature within reasonable limits. Obviously, this offsets some of the efficiency gains of the contained solution, but it is certainly more "worker friendly."

The ability to reach higher return-air temperatures is actually what improves efficiency. Air conditioner coils deliver more cooling capacity when they are presented with higher-temperature air. Table 1 shows examples of this capacity increase for several common air conditioner sizes. If containment is complete, the hot air can only return to the air conditioners via the physical path that is provided, which maximizes its temperature. This can be accomplished with very large ductwork, but it's more common to just use the space above the ceiling, known as the "plenum." A few cautions: the above-ceiling area should be dirt-free, and the ceiling tiles should be sealed on the back so they don't flake off, or you'll be changing filters way too often.
A number of large grills are required in the hot-aisle ceiling for the air to easily pass through.

Table 1: Typical CRAH capacity ratings at different return-air temperatures.

Cold-aisle containment

One of the biggest advantages of cold-aisle containment is that with either under-floor or overhead data center cooling, the aisle tends to fill up with cool air and hot air is prevented from creeping in. This ensures that all available cold air is delivered to the equipment and also minimizes temperature differentials between the upper and lower parts of cabinets. Cold-aisle containment can be particularly advantageous with below-floor air delivery, because cold air falls, making the under-floor air supply fundamentally contrary to the laws of physics.

When cold air is pushed up through floor-tile openings, it will only rise to a certain height unless something propels it higher. The fans in the computer hardware usually pull that air up and in, but it still gets warmer as it rises. However, if we can fully contain the cold aisle, the air within that aisle tends to stabilize much closer to its "delivery" temperature, from floor to ceiling.

As previously noted, the cold aisle doesn't need to be the chilly 55 degrees Fahrenheit that we've seen it at for years. With the ASHRAE upper limit of 80.6 degrees Fahrenheit (27 degrees Celsius), you could pick a very comfortable cold-aisle temperature of around 75 degrees Fahrenheit and be very safe. This temperature allows you to increase the set points on your computer room air conditioning (CRAC) units, which saves a lot of air conditioner energy. Opponents point out that the rest of the room is now essentially a hot aisle, which could be 95 degrees Fahrenheit or more, making everything uncomfortable except the contained cold aisles.

The real challenge of cold-aisle containment is air balance and control.
Computer equipment requires a certain amount of air to keep cool. When the only air available is the air delivered to the contained cold aisles, you need to ensure that it's sufficient by adjusting perforated or grate floor tiles, or overhead grills. This assumes the air conditioners can actually deliver all the cool air the computers need. One could simply "open the flood gates" and push as much cold air as possible into each cold aisle, but that invites other problems. You may air-starve other areas of your data center by serving some aisles at the expense of others, or you may have to install more air conditioners than you really need to make up the air volume, which is an expensive task that wastes energy. You also might over-pressurize the cold aisle, forcing more air through the computers, as well as through any open spaces between panels and cabinets, which wastes cold air, reduces the hot-aisle temperature and degrades CRAC efficiency. In short, pushing too much of your precious air into the aisle can be counterproductive.
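Both sides of this balance, how much air the hardware needs and how much capacity a coil delivers at a given return-air temperature (the effect behind Table 1), follow the standard sensible-heat relation for air, Q[BTU/hr] ≈ 1.08 × CFM × ΔT(°F). The sketch below illustrates the arithmetic only; the airflow figures, temperatures and the 20-degree rise are illustrative assumptions, not vendor ratings.

```python
def required_cfm(load_kw, delta_t_f=20.0):
    """Airflow needed to carry away an IT load, from the standard-air
    relation Q[BTU/hr] = 1.08 * CFM * dT[F]. The 20 F temperature rise
    across the equipment is an illustrative assumption."""
    return load_kw * 1000.0 * 3.412 / (1.08 * delta_t_f)

def crah_sensible_capacity_kw(airflow_cfm, return_f, supply_f):
    """Approximate sensible capacity of a cooling coil at a given
    return-air temperature; real ratings come from vendor data."""
    return 1.08 * airflow_cfm * (return_f - supply_f) / 3412.0

# A contained cold aisle holding 10 cabinets at 5 kW each needs
# roughly 7,900 CFM of supply air:
aisle_cfm = required_cfm(10 * 5)

# The same hypothetical 12,000 CFM CRAH with 55 F supply air:
cap_75 = crah_sensible_capacity_kw(12000, 75, 55)  # cooler return air
cap_95 = crah_sensible_capacity_kw(12000, 95, 55)  # contained hot aisle
```

Under this relation, raising the return-air temperature from 75 to 95 degrees Fahrenheit at the same supply temperature doubles the coil's usable sensible capacity, which is exactly the efficiency effect containment is after.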

Invest in energy-efficient power and cooling for data center ROI

By Frank Ohlhorst, Contributor

Although budgets are still tight, many data center managers have found that the purse strings are starting to loosen. However, there is a catch -- data center managers must prove that a technology offers a measurable bang for the buck before the bean counters release any of those bucks for a technology purchase. Luckily, data center managers have found a host of technologies that enable them to modernize the data center while still meeting those cost-savings objectives -- with power and cooling enhancements leading the list.

However, determining the benefits of power-saving technologies and the related reduction in cooling needs is not an easy science. What's more, the associated cost savings depends upon a number of factors, including the history of the existing data center as well as anticipated future needs. Regardless of the driving factors, data center managers need to approach upgrade concepts with a firm foundation of knowledge. Therefore, the path to an upgrade starts with an audit.

The audit process will uncover critical nuggets of information that will determine the feasibility of any data center redesign as well as the background information needed to make decisions on product selection.
There are some critical details the audit should include -- namely, current loads (storage and CPU utilization), equipment in place, maintenance costs, minimum and maximum activity loads, and rack density.

Utilization proves to be one of the most important metrics for determining return on investment (ROI). For example, if multiple servers are spiking to high utilization rates almost continually, there is an argument to add capacity. On the other hand, if processor utilization is on the low side, it may be the ideal time to consolidate servers using virtualization technology. Although both cases are at different ends of the spectrum, both prove to be ideal foundations for deploying power-saving technologies while solving a problem. In the case of high utilization, new multi-core blade servers can be deployed, which reduce power and physical space requirements. That allows consolidation of racks, increases density and usually reduces power and cooling requirements.

In a low-utilization scenario, deploying virtualization accomplishes many of the same goals: servers can be consolidated, the number of racks reduced and the overall power and cooling footprint reduced. Either situation proves that auditing can lead to increased ROI and reduced total cost of ownership (TCO) just by solving a common issue. Although that may be a simplified example, the logic of assessing, addressing and improving data center capabilities rings true.

The real secret behind power reduction and cooling efficiency ROI comes from a single concept: density. Simply put, increasing data center density using newer, more efficient technologies accomplishes two primary goals: reduced square footage requirements and reduced power consumption.
That combination has a direct correlation with cooling needs.

However, there are a few catches to be aware of behind the concept of increased density, including specific rack power and cooling needs. For example, if multiple racks are consolidated into single-rack solutions by using server or storage blades, the power consumption of that single rack may increase beyond the original operational envelope, and the cooling needs for that individual rack may also increase. Although rack consolidation decreases the overall power and cooling needs of the data center, a single rack may have increased needs. With that in mind, it becomes critically important to baseline the original power consumption of the rack (as well as cooling demand) and then calculate the demand for the replacement equipment. That information will be used to size the reconfigured rack, making sure the rack does not place more demand on the external infrastructure (rack power and environmental demands) than originally designed for.

All of those elements contribute to ROI calculations, which, in the simplest form, amount to the costs of the upgrades versus the savings offered by the upgrades, both of which are measurable elements. However, it is critical to make sure that all of the representative data is collected to validate lowered TCO. Managers assembling the ROI argument will need to include equipment costs, downtime costs, personnel costs and other ancillary costs associated with an upgrade.

Calculating the anticipated savings proves to be a little more complex. Here, managers will need to measure the current electric loads of the equipment to be replaced and then assign a cost to those loads. To calculate the anticipated loads, managers will have to rely on information provided by the vendors.
However, a simple way to estimate the savings as a percentage is to compare vendor specifications for the old equipment's power demands against vendor specifications for the new equipment's power demands. Calculations based on those elements should result in a reasonably accurate savings percentage that can be applied to current costs to determine future savings.

Data center managers seeking to re-engineer for improved ROI on power and cooling will need to become familiar with the concept of power usage effectiveness (PUE). PUE is a ratio that measures the total power required for the facility divided by the power required for the IT equipment. A hypothetical value of 1.0 is perfect and unattainable, simply because that would mean that only the IT equipment, and nothing else in the facility (no lights, no environmental systems, etc.), would consume energy. Typically, most data centers see a PUE ranging from 2 to 3, meaning that the total power demand for a data center is 2 to 3 times what is needed solely for the IT equipment.

The reason that PUE has become so important is that it can indicate how little or how much a specific power-saving technology affects the bottom line. Ideally, data center managers will want to push the PUE ratio below 2. However, if a PUE exceeds 3, power-saving IT equipment may have only a marginal effect on cost savings, since equipment other than IT is consuming most of the power. In that case, it may be more appropriate to focus on reducing the operational costs of non-IT equipment before approaching any IT technology engineering. When PUE ratios are lower, IT equipment savings have a larger impact on operational budget costs.
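The arithmetic behind these two levers, simple payback and PUE, can be sketched as follows. All of the figures used here (loads, PUE values, electricity rate, upgrade costs) are hypothetical placeholders, not measurements from any real facility.

```python
def pue(total_facility_kw, it_kw):
    """Power usage effectiveness: total facility power / IT power."""
    return total_facility_kw / it_kw

def annual_energy_cost(it_kw, facility_pue, rate_per_kwh):
    """Yearly electricity cost of an IT load, scaled by PUE to include
    cooling, distribution and other facility overhead (8,760 h/yr)."""
    return it_kw * facility_pue * 8760 * rate_per_kwh

def simple_payback_years(upgrade_costs, old_annual_cost, new_annual_cost):
    """One-time upgrade costs (equipment, downtime, personnel, ...)
    divided by the yearly savings they produce."""
    savings = old_annual_cost - new_annual_cost
    return sum(upgrade_costs.values()) / savings if savings > 0 else float("inf")

# Hypothetical example: 300 kW of IT load at PUE 2.5, reduced to
# 240 kW at PUE 2.0 after consolidation, at $0.10 per kWh:
old = annual_energy_cost(300, 2.5, 0.10)
new = annual_energy_cost(240, 2.0, 0.10)
years = simple_payback_years(
    {"equipment": 500000, "downtime": 40000, "personnel": 60000}, old, new)
```

Run with these placeholder numbers, the upgrade pays for itself in roughly two and a half years; the same three functions also make the PUE point above concrete, since at a PUE of 3 only a third of every facility kilowatt reaches the IT equipment that power-saving gear can address.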
Most data center managers will find that determining real-world IT power consumption and cooling demand has to be balanced against PUE ratios to determine whether re-engineering can deliver the savings needed in a reasonable amount of time to justify the initial costs.

Resources from Legrand Ortronics

Mighty Mo Air Control Containment Solutions
Ortronics CFD Analysis Services – intelligent airflow management services
Prepare for the future with Ortronics Data Center Solutions

About Legrand Ortronics

Legrand Ortronics, headquartered in New London, Connecticut, USA, is a global leader in high-performance network infrastructure solutions, offering a complete range of Category 5e, 6 and 6a copper, fiber optic and Layer Zero physical support solutions, including Cablofil wire mesh cable tray and Wiremold pathways.
