
THE EDGE

Ask any operations team if their data centre rooms are fully performance-optimised and the answer will invariably be yes. However, despite the best efforts of FM and IT teams, the reality is that even today's best-run data centres have cooling, power and space issues that should be resolved.

This is an industry-wide challenge, particularly as powering data centre cooling now accounts for around 30% of a data centre's overall operating cost. And with research suggesting that just a third of installed data centre cooling equipment actually delivers active cooling benefits, there's clearly a requirement for organisations to look again at their data centre's performance if they're to successfully remove thermal risk, achieve cooling energy savings and release further IT capacity.

Unfortunately, very few data centres have access to the kind of real-time core power and thermal metrics they need to make informed performance optimisation decisions about their critical facilities. Instead, many still rely on their original CAD floor layouts to calculate power and cooling requirements, while those that do regularly collect thermal data often consign it to inflexible spreadsheets that are rarely accessible or up to date when critical information is needed.

This challenge was brought home last month when I visited a data centre that was undergoing a significant upgrade. The team was updating around 15% of its existing racks and adding 30 new ones over the next few weeks. Given the scale of change, how could they hope to strike the right thermal, power and capacity balance with no access to current performance data?

Getting serious about sensing

If you're serious about optimising data centre performance then you really need to know what's happening right now – not what was going on yesterday or last week. It's only when data rooms are carefully mapped with all the appropriate thermal data fields – right down to an individual rack or server level – that operations teams can gain a true understanding of overall performance and unlock whole new levels of cooling efficiency.

To address this, organisations need to work out how to build detailed, rack-level maps of their data centre estate that display all their cooling, power and thermal performance in real time. It's only by combining this kind of granular cooling and thermal data with smart monitoring and analysis software that they can access the intelligence required to make informed performance optimisation decisions.

Gartner sees this digital twins model as a great way for organisations to disrupt traditional CAD model-based management.

Unfortunately, less than 5% of data centres currently gather this kind of precision data, as it requires a far denser networked mesh of sensors to accurately capture not just temperatures, but also energy usage, heat outputs and airflows.
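To make the idea of a rack-level map more concrete, here is a minimal, illustrative Python sketch. It is not drawn from any particular DCIM product: the sensor fields, rack identifiers and the 27°C inlet threshold (roughly the top of the ASHRAE-recommended envelope) are assumptions chosen purely for demonstration. It simply aggregates individual sensor readings into per-rack figures and flags racks that look thermally at risk.

from dataclasses import dataclass
from collections import defaultdict
from statistics import mean

@dataclass
class SensorReading:
    """One reading from a rack-mounted sensor (hypothetical schema)."""
    rack_id: str
    inlet_temp_c: float   # cold-aisle inlet temperature
    power_kw: float       # measured IT load for the monitored slot
    airflow_cmh: float    # airflow in cubic metres per hour

def build_rack_map(readings):
    """Aggregate raw readings into a per-rack view of thermal and power state."""
    by_rack = defaultdict(list)
    for r in readings:
        by_rack[r.rack_id].append(r)

    return {
        rack_id: {
            "avg_inlet_temp_c": round(mean(r.inlet_temp_c for r in rs), 1),
            "total_power_kw": round(sum(r.power_kw for r in rs), 2),
            "avg_airflow_cmh": round(mean(r.airflow_cmh for r in rs), 1),
        }
        for rack_id, rs in by_rack.items()
    }

def flag_thermal_risk(rack_map, max_inlet_c=27.0):
    """Return racks whose average inlet temperature exceeds the chosen threshold."""
    return [rack for rack, metrics in rack_map.items()
            if metrics["avg_inlet_temp_c"] > max_inlet_c]

if __name__ == "__main__":
    # Hypothetical readings; in practice these would stream in from a sensor mesh.
    readings = [
        SensorReading("rack-A01", 24.5, 3.2, 410.0),
        SensorReading("rack-A01", 25.1, 2.8, 395.0),
        SensorReading("rack-B07", 28.4, 6.1, 300.0),
    ]
    rack_map = build_rack_map(readings)
    print(rack_map)
    print("At-risk racks:", flag_thermal_risk(rack_map))

Feeding this from live sensor streams rather than hard-coded values is, of course, where the monitoring and analysis software earns its keep; the point is simply that once readings are tied to rack identities, thermal risk becomes a query rather than a guess.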
Turning critical data into meaningful intelligence

Given that so much data centre operator and facilities management time is taken…

This highlights a dilemma at the heart of data centre optimisation. Do you choose to lock down your initial data centre set-up, in the hope of maintaining reliability and performance? Or do you recognise that your IT estate will need to evolve alongside your operation, requiring frequent attention to keep pace with changing business requirements?

The answer should be clear, but delivering on it will require true, real-time visibility of your critical operational heartbeat data.