KNOW HOW
There is a common misconception that running low density racks instead of higher density ones is less costly when it comes to power, but the reverse is actually the case. Running fewer high density racks rather than many lower density ones yields a lower total cost of ownership, because they deliver far superior compute capability while consuming significantly fewer data centre resources: switchgear, UPS, power, cooling towers and pumps, chillers, lighting and so on.
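To illustrate the arithmetic behind that claim, here is a minimal Python sketch comparing how many racks, and roughly how much per-rack overhead, the same IT load requires at different densities. The load and cost figures are purely illustrative assumptions, not figures from this article.

```python
import math

# Illustrative comparison: the same total IT load housed in racks of
# different power densities. All figures are assumptions for the sake
# of the example.

IT_LOAD_KW = 600            # total IT load to house (assumed)
OVERHEAD_PER_RACK = 2_000   # assumed annual per-rack overhead (GBP):
                            # floor space, PDUs, cabling, maintenance, ...

def racks_needed(density_kw: float) -> int:
    """Racks required to house IT_LOAD_KW at the given per-rack density."""
    return math.ceil(IT_LOAD_KW / density_kw)

for density in (5, 10, 60):
    n = racks_needed(density)
    print(f"{density:>2} kW racks: {n:>3} racks, "
          f"~GBP {n * OVERHEAD_PER_RACK:,}/year in per-rack overhead")
```

Under these assumed numbers, 600kW of IT load needs 120 racks at 5kW but only ten at 60kW, with the per-rack supporting infrastructure shrinking in proportion.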
Therefore, it is increasingly important that a data centre provider designs its facility to accommodate high density racks and strikes the right balance between rack space and power.
Many racks installed in data
centres now consume more than
10kW, and some even 60kW!
Few facilities can supply this level of power per rack today, and this problem is only going to get worse.

‘As cloud services have developed over the past few years, so have the data centre infrastructures required to support their critical workloads.’
Out-of-town locations typically offer a more abundant supply of power. NGD’s mega data centre in South Wales, for example, has a total capacity of 180MW available.
Facilities that are less dependent on multiple pylon hops, where cables are particularly exposed to climatic wear and tear, or better still that connect directly to the national grid, are also likely to benefit from far greater reliability and smoother transmission. Mitigating the risk of outages by deploying diverse power feeds and ensuring adequate backup (battery and generator) is also a primary concern for enterprise and colocation operators.
Clearly, being able to generate some or all power from renewables is a major cost-saving benefit, and it also matters from a CSR perspective, given the increasingly rigorous environmental compliance demanded by governments and expected by customers.
Energy management
PUE (Power Usage Effectiveness) is a big driver in the data centre because the power required is so vast. As well as using ‘green power’, the processes and procedures for lowering PUE should focus particularly on reducing the power used for cooling. This will require a combination of close-coupled CRACs, hot and cold aisle containment, higher data hall temperatures and, where sufficiently low ambient temperatures are available, fresh air cooling.
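As a reminder of what PUE actually measures, the short sketch below shows the ratio of total facility power to IT power, and how reducing the cooling load pulls it down. The power figures are assumed, illustrative values only.

```python
def pue(it_power_kw: float, cooling_kw: float, other_overhead_kw: float) -> float:
    """PUE = total facility power / IT equipment power."""
    total_facility_kw = it_power_kw + cooling_kw + other_overhead_kw
    return total_facility_kw / it_power_kw

# Illustrative, assumed figures for a small data hall.
it_load = 1_000   # kW drawn by IT equipment
other = 100       # kW of lighting, UPS and switchgear losses, etc.

for cooling in (500, 300, 150):   # progressively more efficient cooling
    print(f"cooling {cooling:>3} kW -> PUE = {pue(it_load, cooling, other):.2f}")
```

With these assumptions, cutting cooling power from 500kW to 150kW takes the PUE from 1.60 down to 1.25 for the same IT load.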
Faced with these challenges, best practice dictates that data centre and facilities professionals will increasingly need to apply real-time analysis and monitoring techniques to the data centre itself, optimising cooling plant and maintaining appropriate operating temperatures for IT assets without fear of compromising performance and uptime. An advanced system will save thousands of pounds through reduced power costs, minimise environmental impact, and help ensure maximum uptime through predictive maintenance.
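As a simple illustration of that kind of real-time monitoring (a minimal sketch, not any particular vendor’s system), the code below checks rack inlet temperature readings against the ASHRAE-recommended 18–27°C envelope and flags excursions. The sensor feed, rack names and sample readings are assumptions for the example.

```python
from statistics import mean

# ASHRAE-recommended inlet temperature envelope for IT equipment.
T_LOW_C, T_HIGH_C = 18.0, 27.0

def check_inlet_temps(readings_by_rack: dict[str, list[float]]) -> list[str]:
    """Return alerts for racks whose average inlet temperature falls outside
    the recommended envelope. In practice the readings would arrive in real
    time from DCIM sensors; here they are passed in as a plain dict."""
    alerts = []
    for rack, temps in readings_by_rack.items():
        avg = mean(temps)
        if not (T_LOW_C <= avg <= T_HIGH_C):
            alerts.append(f"{rack}: average inlet {avg:.1f} C outside "
                          f"{T_LOW_C}-{T_HIGH_C} C envelope")
    return alerts

# Illustrative readings (degrees C) from three hypothetical racks.
sample = {
    "rack-A01": [22.5, 23.1, 22.8],
    "rack-A02": [27.9, 28.4, 28.1],   # running hot
    "rack-A03": [19.0, 18.7, 19.2],
}
for alert in check_inlet_temps(sample):
    print(alert)
```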
Connectivity
Last but not least, as well as offering sufficient renewably sourced power and efficient energy management, one should not overlook data centre connectivity, especially with hybrid cloud deployments in the ascendancy. Combining private and public models as well as legacy systems, hybrid clouds require data centres to provide diverse, low latency connectivity. This is critical for ensuring a seamless, secure interchange of data between the different environments, whether the hybrid cloud supports a few hundred users nationwide or several thousand spread across the globe. It must be able to scale without degradation, which means bypassing the public internet, with the data centre connecting directly into the hyperscale networks of the global cloud providers.