design & facilities management
employed, the type of IT deployed and local climate.
As multiple data centres become more diverse in engineering terms, and come to be distributed across a variety of geographies with varying climates, managing all of these variables to optimise the energy consumption and efficiency of each data centre will become a greater challenge.
The trend of data centres becoming more widely distributed geographically is a feature of edge computing: the evolution of data centres away from massive hubs at the centre of a global network towards smaller, regionally based installations ever closer to the users of the data and applications housed within them.
Among the trends driving this change is the emergence of the Internet of Things (IoT), in which embedded, sensor-driven network connectivity will connect all manner of physical devices, from buildings and automobiles to smart appliances, collecting information to enable better business decision making.
The growth of data-intensive multimedia applications such as video on demand is also a key driver, accompanied by the growth in high-definition TV services; together these require efficient management of high-bandwidth networks at a regional as well as a global level. Delivery of digital HDTV and video on demand requires service providers to locate their server farms close to their customers. Viewers in London or Glasgow downloading the latest movie blockbuster to their TVs are therefore likely to have a smoother, glitch-free experience if the server farm is located in the UK rather than California.
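The intuition behind locating server farms near viewers can be sketched with a back-of-the-envelope latency calculation. The distances and the two-thirds-of-light-speed fibre propagation figure below are illustrative assumptions, not measured values, and real round-trip times would be higher once routing and queuing are included:

```python
# Rough best-case round-trip propagation delay for a UK viewer
# fetching video from a nearby vs. a distant server farm.
# Distances and propagation speed are illustrative assumptions.

SPEED_OF_LIGHT_KM_S = 300_000                       # vacuum, approximate
FIBRE_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3      # refractive index ~1.5

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time over fibre, ignoring routing and queuing."""
    return 2 * distance_km / FIBRE_SPEED_KM_S * 1000

# London to a UK server farm (~300 km) vs. California (~8,600 km)
print(f"UK farm:         {min_rtt_ms(300):.1f} ms")    # ~3 ms
print(f"California farm: {min_rtt_ms(8600):.1f} ms")   # ~86 ms
```

Even this idealised lower bound shows an order-of-magnitude gap, which is what makes regional server placement attractive for interactive and streaming services.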
Lastly, the increasing demand for computing services in specialist applications such as mining and fossil fuel extraction, which typically take place in remote or hostile environments, is driving the need for ruggedised computing equipment.
Congestion
Edge computing is an inevitable consequence of vastly increased data traffic, which requires more sophisticated traffic management. With the Internet of Things expected to comprise 50 billion connected devices worldwide by 2020, network latency and speed of response will require data transactions to be contained, as far as possible, within regional networks, removing some of the congestion from global networks.
Much effort has already been directed towards the challenge of using energy more efficiently in data centres. Vendors of data centre infrastructure equipment such as cooling, air conditioning, power supply and containment products have produced reference designs that allow highly predictable installations to be constructed. They make widespread use of metrics such as Power Usage Effectiveness (PUE) to validate how efficiently a data centre's power can be delivered to the IT equipment it contains.
However, PUE is of limited use in managing the overall energy consumption of a data centre, since it measures only the ratio of the total power consumed by IT and infrastructure combined to the power consumed by the IT equipment alone. So although it is now easier to build data centres with confidence that a low PUE rating will be achieved, a low rating does not automatically mean that overall energy consumption will be reduced.
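The limitation is easy to see in a minimal sketch of the PUE calculation. The power figures below are purely illustrative, not drawn from any real facility:

```python
# Minimal PUE sketch: PUE = total facility power / IT power.
# All figures are illustrative assumptions.

def pue(it_kw: float, infrastructure_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power."""
    return (it_kw + infrastructure_kw) / it_kw

# Facility A: 1,000 kW of IT load, 200 kW of cooling/power overhead.
# Facility B: the same PUE, but double the IT load -- and double the
# total energy, showing a low PUE says nothing about absolute consumption.
print(pue(1000, 200))  # 1.2 (1,200 kW total)
print(pue(2000, 400))  # 1.2 (2,400 kW total)
```

Both facilities score an identical PUE of 1.2, yet one draws twice the power of the other, which is why PUE alone cannot drive down overall energy consumption.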
One strategy recommended by the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) to reduce overall energy consumption is to arrange a data centre so that the ambient temperature inside can be allowed to rise. If a data centre can operate effectively at higher temperatures, the thinking was that cooling equipment such as chillers could operate in economy mode and would not need to run as frequently, resulting in a lower energy requirement.
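The logic of this strategy can be sketched numerically: raising the supply-air setpoint increases the fraction of the year in which outside air is cool enough for economy (free-cooling) mode, so the chillers run less often. The toy temperature profile and setpoints below are illustrative assumptions, not ASHRAE figures:

```python
# Sketch: free-cooling hours gained by raising the temperature setpoint.
# The synthetic climate data and setpoints are illustrative assumptions.

def free_cooling_hours(setpoint_c: float, hourly_temps_c: list[float]) -> int:
    """Hours in which outside air alone is cool enough for the data hall."""
    return sum(1 for t in hourly_temps_c if t < setpoint_c)

# A toy 'year' of hourly outdoor temperatures for a temperate climate,
# cycling between 5 C and ~19 C each day (8,760 hours in total).
year = [5 + 15 * (h % 24) / 24 for h in range(8760)]

for setpoint in (18, 22, 27):
    hrs = free_cooling_hours(setpoint, year)
    print(f"setpoint {setpoint} C: {hrs} free-cooling hours")
```

In this toy climate, raising the setpoint from 18 C to 22 C converts the remaining chiller hours into free-cooling hours, which is the effect the ASHRAE recommendation aims to exploit, although, as the next paragraph notes, real-world results have been mixed.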
However, this technique has not been widely adopted, for a variety of reasons. Apart from a natural conservative reluctance among engineers to change an approach that has been seen to work effectively, the results of allowing temperatures to rise have been mixed.
One size will not fit all for the many and varied data centres that will be built in the future.