Equilibrium requires the perfect balance between two elements. In the data centre, those elements are power and cooling. There is a fine line between ensuring a data centre has enough operational power and making sure it is adequately cooled, and it is crucial for data centre managers to be able to strike that balance.
Turning up the cooling is the quick fix for treating hotspots. But without the right technology in place, data centre managers will never know how best to cope with potential problems, or how to create a more energy-efficient, cost-effective environment.
Data centres, like the rest of the technology industry, have to change with the times and look for ways not only to make it easier for managers to assess where power is distributed, but also to help businesses reduce their carbon footprint and power consumption, pressures that will inevitably become more constraining as legislation develops.
Power Vs Cooling
When creating the optimum data centre, there are two elements of the process to consider. The power is the productive element (profit-generating) and the cooling bill is the unproductive element (profit-consuming). Clearly it makes business sense to limit the unproductive aspect and there are two new ways of thinking to achieve this.
The data centre industry mindset is often ‘keep cool at all costs’, but the latest advice from ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) is actually to INCREASE the temperature: raising it by as little as one degree can save power and reduce cooling costs by 4-5 per cent. Focusing solely on keeping data centres cool can prove problematic, because each type of cooling option brings its own difficulties. Most standard power distribution units (PDUs) are rated to only 45 degrees Celsius, and while server manufacturers are developing products to withstand higher temperatures, many PDU players have chosen not to follow this important trend.
The latest intelligent PDUs can tolerate temperatures as high as 60 degrees, allowing data centre managers to increase the temperature inside data centres. Less cooling means less energy is needed, so the data centre becomes a more energy-efficient, greener environment, in addition to providing a much-needed and easily identifiable cost reduction for the business.
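As a rough illustration of the ASHRAE figure, the sketch below estimates what raising the cooling setpoint might save. The 4.5 per cent per degree factor and the £100,000 baseline cooling bill are illustrative assumptions, not figures from a real facility.

```python
# Rough estimate of annual cooling savings from raising the setpoint.
# The ~4-5 per cent saving per degree is the ASHRAE rule of thumb quoted
# above; the baseline bill and the 4.5 per cent factor are assumptions.

def cooling_savings(baseline_cooling_cost, degrees_raised, saving_per_degree=0.045):
    """Return (new_cost, total_saving) after raising the setpoint.

    Savings compound per degree: each extra degree saves ~4.5 per cent
    of the remaining cooling bill.
    """
    new_cost = baseline_cooling_cost * (1 - saving_per_degree) ** degrees_raised
    return new_cost, baseline_cooling_cost - new_cost

if __name__ == "__main__":
    new_cost, saving = cooling_savings(baseline_cooling_cost=100_000, degrees_raised=3)
    print(f"New annual cooling cost: £{new_cost:,.0f} (saving £{saving:,.0f})")
```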
Intelligent PDUs can identify power usage and distribution, enabling them to detect potential problems before they occur and allowing data centre managers to be more proactive. Unfortunately, the problem cannot be solved simply by raising the temperature, and even if it could, this option is often met with reluctance.
The second route to consider, for data centre managers who don’t want to increase the temperature, is to focus on the power side of the business by turning off idle servers. A server that is merely switched on but doing no useful work can still generate around 60 per cent of the heat, and draw up to 40 per cent of the power, of a fully loaded machine, despite adding no value to the business.[1] This means people are paying a cooling bill for assets that are effectively dormant. Intelligent PDUs can monitor energy consumption, giving users the confidence to let data centre managers assess which servers need to stay switched on and identify the racks where power consumption can safely be reduced to increase efficiency.
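As a sketch of how that assessment might work in practice, the snippet below flags outlets whose measured draw sits at or below an assumed idle fraction of their rating. The outlet names, nameplate ratings and the 40 per cent threshold are hypothetical; real intelligent PDUs would supply the readings via SNMP or a vendor API.

```python
# Illustrative only: flag servers that look idle from per-outlet power
# readings. The readings, ratings and threshold below are made up.

IDLE_FRACTION = 0.40  # idle servers can still draw up to ~40% of full load

def find_idle_candidates(readings_watts, nameplate_watts):
    """Return outlets whose draw sits at or below the idle fraction."""
    candidates = []
    for outlet, watts in readings_watts.items():
        rated = nameplate_watts[outlet]
        if watts <= rated * IDLE_FRACTION:
            candidates.append((outlet, watts, rated))
    return candidates

if __name__ == "__main__":
    readings = {"rack3-outlet07": 180.0, "rack3-outlet08": 410.0}
    nameplate = {"rack3-outlet07": 500.0, "rack3-outlet08": 500.0}
    for outlet, watts, rated in find_idle_candidates(readings, nameplate):
        print(f"{outlet}: drawing {watts:.0f} W of {rated:.0f} W - candidate to power down")
```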
What have you got to lose?
It’s in our nature to be reluctant to change, but for an industry that has researched power and cooling extensively and believes it already understands how to be cost effective and energy efficient, the challenge is even bigger. While there isn’t a specific solution that will solve all power and cooling problems, a series of small and gradual changes will help. As the saying goes, if you search for something new, you might find something better. So what have you got to lose?
The issue of power and cooling starts in the rack. Power comes into the data centre from the grid, enters the rack through a PDU and powers the servers, which generate heat and need to be cooled. Picture the dynamic data centre. It is a busy environment with servers processing different amounts of information at different temperatures, meaning, from a cooling perspective, it can be difficult to keep the environment stable.
The variability of hotspots and power demands means that, at times, different parts of a data centre have varying needs, and it is important that managers don’t miss the signs pointing to an upcoming problem. In an industry where a minute of downtime can cost £100k, customers want to be sure their servers are being looked after and that any sort of failure is prevented.
The answer lies in finding the right balance between turning off idle servers and keeping enough productive servers running, ready to take the load off assets that begin to overwork.
Are you thinking proactively?
Many data centre managers increase the cooling to treat hotspots. This method makes every rack in the data centre a little cooler, but to do it effectively the overall temperature has to be decreased considerably, just to bring the one hotspot down to a safer temperature.
Some companies would argue that in-row cooling is needed, which controls the air contained in individual rows of the data centre and therefore allows the temperature of a single row to be adjusted.
This targets hotspots without having to reduce the temperature of every rack in the data centre; however, it is a very expensive solution. Surely it is easier to pre-empt than to treat, so thinking proactively and building user confidence with accurate, reliable technology is key.
Avoiding the pitfalls
By equipping data centres with environmental monitoring and measuring devices, data centre managers can receive email alerts, as often as needed, informing them when a rack reaches a certain temperature. Monitoring temperature changes from the start allows professionals to assess whether a hotspot is a problem or not. Hotspots may not always be problematic, but if they go unnoticed they can become dangerous.
The temperature could be gradually increasing over time, and if it were to exceed 45 degrees at the back of the rack, anyone relying on standard PDUs rated to that limit would see them fail. By monitoring properly, managers can clearly see whether a rack has been steadily increasing in temperature and make changes before it rises further. If conditions are not monitored, the impact begins to spread through different levels of the chain.
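A minimal sketch of that kind of monitoring is shown below: it raises an alert when a rack sensor crosses the 45 degree figure mentioned above, and warns when readings have been climbing steadily. The reading history, window sizes and alert delivery are assumptions; a real deployment would use whatever the monitoring appliance and email gateway actually provide.

```python
# Illustrative monitoring sketch: alert on a threshold breach, warn on a
# steady upward trend. The 45 degree figure comes from the PDU rating
# discussed above; everything else here is a stand-in.

from statistics import mean

ALERT_THRESHOLD_C = 45.0   # back-of-rack temperature at which standard PDUs are at risk
TREND_WINDOW = 6           # number of recent readings to average
TREND_RISE_C = 2.0         # warn if the recent average is this much above the older one

def check_rack(rack_id, history_c):
    """history_c is a list of temperature readings, oldest first."""
    latest = history_c[-1]
    if latest >= ALERT_THRESHOLD_C:
        return f"ALERT {rack_id}: {latest:.1f}C at the back of the rack"
    if len(history_c) >= 2 * TREND_WINDOW:
        older = mean(history_c[-2 * TREND_WINDOW:-TREND_WINDOW])
        recent = mean(history_c[-TREND_WINDOW:])
        if recent - older >= TREND_RISE_C:
            return f"WARN {rack_id}: temperature trending up ({older:.1f}C -> {recent:.1f}C)"
    return None

if __name__ == "__main__":
    message = check_rack("rack-12", [38.0, 38.5, 39.1, 39.8, 40.5, 41.3,
                                     42.0, 42.8, 43.5, 44.1, 44.7, 45.2])
    if message:
        print(message)  # in practice this would be sent out as an email alert
```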
The industry needs to realise that this issue will continue to spread and grow within the racks of the data centre. We must call upon the new technologies available to move the industry forward for the better, guaranteeing this much-needed balance of power and cooling and recognising that prevention really is the cure.