Data center owners and operators are increasingly looking for ways to minimize total cost of ownership (TCO), cost per kW of IT load, and downtime. In an industry where the average TCO overspend is around $27 million per MW, where cost per kW of IT load can spiral out of control within just a few years of entering operation, and where the average cost of downtime is $740,000 per incident, owner/operators want solutions. Using an integrated and continuous modelling process can help data center administrators save millions of dollars annually per data hall.
While the amount of operational information has grown, it has remained siloed, causing organizational and physical fragmentation. Poor planning and inefficient use of power, cooling, or space often threaten efforts to minimize costs. This can force managers into a corner: do you build a new facility to help alleviate the strain or invest in a major overhaul? This is a dilemma no owner or operator wants to face.
It’s clear that data centers have the potential to be financial black holes. To help avoid common pitfalls, we’ve identified five primary reasons why data center operations are a financially risky business:
1. Design chain coordination
The tendering process produces an environment where a single product (the facility) is being supplied by multiple vendors. Vendors typically don’t communicate or coordinate with each other. The resulting lack of common vision leads to problems when the data center is built and handed over.
2. Siloed operations
IT operations, corporate real estate, facilities engineering, etc., all plan and execute actions in their respective silos. These decisions are driven by multiple stakeholders, often with mutually exclusive interests. Such silo-based operations lead to fragmented operational processes, which in turn fragment and diminish physical capacity.
3. IT Operations vs. Conceptual Design
It’s not possible for conceptual design to guarantee performance in normal operation due to changing IT and business needs. The uneven buildout of the facility over time means that most data centers will only realize a capacity utilization of about 70%.
4. Variable IT in a fixed infrastructure
IT hardware must be refreshed every few months or years. Newer IT hardware can have completely different requirements for space, power and cooling resources, requiring an operational redesign.
5. Capacity tracking
Physical capacity is dictated by the resource that is least available — space, power, cooling, or networking. For example, when cooling is consumed faster than space and power, the data center reaches the end of its life far sooner than anticipated. Data center infrastructure management (DCIM) tools provide a powerful means to monitor & track space and power. However, there are limitations. DCIM cannot:
- Model and track cooling availability
- Relate the distributions of space, power, cooling, and IT to each other to show capacity
- Predict the impact of future IT plans on power & cooling collectively
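The "least available resource" rule above can be sketched in a few lines. This is a minimal illustration, not a DCIM implementation: the resource names and headroom figures below are hypothetical, and real values would come from monitoring data.

```python
# Minimal sketch of the "least available resource" capacity model.
# All figures are hypothetical examples, not measured data.

def remaining_capacity(resources):
    """Return the bottleneck resource and its remaining fraction.

    `resources` maps each resource name to its remaining headroom
    as a fraction (0.0-1.0). Usable capacity is capped by whichever
    resource runs out first.
    """
    bottleneck = min(resources, key=resources.get)
    return bottleneck, resources[bottleneck]

# Example: cooling is being consumed faster than space or power,
# so the hall is effectively 75% "full" even though rack space
# and power headroom remain.
headroom = {
    "space": 0.40,    # 40% of rack space still free
    "power": 0.35,    # 35% of provisioned power still free
    "cooling": 0.25,  # only 25% of cooling headroom left
    "network": 0.50,  # 50% of port/bandwidth capacity free
}

resource, fraction = remaining_capacity(headroom)
print(resource, fraction)  # -> cooling 0.25
```

A model like this also shows why tracking space and power alone misleads: here the space figure suggests 40% of the facility remains, while the cooling constraint means only 25% is actually usable.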