OPINION
By Alex Blake, ABM Critical Solutions
www.abm.co.uk
Exploring 70 years of evolution and the future of data centre management in the financial services sector
With every swipe of a bank card, contactless tap onto the
tube or a simple post on Facebook, a data centre is hard at
work behind the scenes. But this hasn’t always been the
case – the financial services sector was the first adopter of
the concept over 70 years ago, paving the way for the robust
critical frameworks we have today. However, with legacy
sometimes comes a hangover, and as new and old processes
collide, how should the industry navigate this dilemma?
A look back in time
In the 1950s and 60s, data centres, or mainframes as
they were known, were a different beast. Running these
facilities was labour intensive and enormously expensive.
Pitt Turner, Executive Director of the Uptime
Institute, summed this up nicely when recalling how the
process worked historically at a large regional bank: “In the
evening, all trucks would arrive carrying reams of paper.
Throughout the night the paper would be processed, the data
crunched and printouts created. These printouts would be
sent back to bank branches by truck in the morning.”
Once cutting-edge, mainframes are a far cry from
where we are today, and frankly, with pace and accuracy
at the heart of how all industries run, they wouldn’t cut
the mustard, especially in the financial sector, which has
grown exponentially and relentlessly, demanding speed
and efficiency. What’s come with this growth is a trend
towards mixed processes – the sector uses a combination of
outsourced data centres via colocation services, alongside
original sites to manage huge data footprints.
For financial institutions working under a cloud of
uncertainty and risk, these centres need constant investment
but often there’s an unwillingness for this to come from
CAPEX. So, building their own data centres or updating and
maintaining legacy systems isn’t a priority. Instead, data
centres expand with more racks and hardware, making
monitoring a constantly evolving job. At what risk though?
The risk of downtime
Downtime is the biggest risk factor in legacy data centres,
regularly driven by air particle contamination. Unlike new
facilities, which are designed to control airflow and limit the
ability of particles to contaminate equipment, legacy centres
are often more exposed to threat, requiring expert cleaning
teams and constant management from specialists.
It’s hard to equate cleaning to serious financial risk, but
in the financial sector there’s pressure for online banking,
payment processing and the protection of personal
information to work around the clock. Failure to deliver means
fines and reputational damage – which can be avoided with
the right technical cleaning and infrastructure management.
Preventative cleaning measures
Frequent air particle testing, carried out by specialist
engineers and cleaners, is fundamental to identifying
issues ahead of time, especially in legacy centres. Companies
shouldn’t wait for issues to occur – as the saying goes,
prevention is better than cure. A preventative cleaning
regime comes at a cost, but it will help manage issues before
they threaten service.
Some specialists can determine the cause of contamination
on surfaces, but often the real damage is done by airborne
particles invisible to the eye. The solution is to implement
an annual preventative technical cleaning programme that
ensures ISO Class 8 standards are maintained in critical
spaces. Carrying out particle tests on surfaces alongside zinc
whisker testing and remediation is one way to make that
testing and cleaning effective.
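To make the ISO Class 8 figure concrete, here is a minimal, purely illustrative sketch of checking particle-count readings against the ISO 14644-1 Class 8 airborne concentration limits (particles per cubic metre at 0.5, 1 and 5 µm). The function name and example readings are hypothetical, not part of any monitoring product.

```python
# Hypothetical sketch: compare measured airborne particle counts against
# ISO 14644-1 Class 8 concentration limits (particles per cubic metre).
# Function and variable names are illustrative, not a real product API.

ISO_CLASS_8_LIMITS = {
    0.5: 3_520_000,  # particles >= 0.5 um per m3
    1.0: 832_000,    # particles >= 1.0 um per m3
    5.0: 29_300,     # particles >= 5.0 um per m3
}

def check_iso_class_8(readings: dict) -> list:
    """Return a list of breaches for readings keyed by particle size (um)."""
    breaches = []
    for size_um, limit in ISO_CLASS_8_LIMITS.items():
        measured = readings.get(size_um)
        if measured is not None and measured > limit:
            breaches.append(
                f">= {size_um} um: {measured:,}/m3 exceeds the "
                f"Class 8 limit of {limit:,}/m3"
            )
    return breaches

if __name__ == "__main__":
    # Example readings from a critical space (illustrative numbers only).
    sample = {0.5: 4_100_000, 1.0: 500_000, 5.0: 12_000}
    for line in check_iso_class_8(sample) or ["Within ISO Class 8 limits"]:
        print(line)
```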
The right infrastructure
Downtime in a legacy data centre can also be avoided
by implementing data centre infrastructure
management (DCIM). Relying on older, outdated solutions
can be a gamble, given how susceptible legacy centres are to
building degradation and contamination.
DCIM can enable smart, real-time decision making, and it can
introduce fail-safes, meaning an issue doesn’t have to disrupt
services to catastrophic effect. For example, a custom alarm
can be developed and installed that alerts a specific team or
contact as soon as an error occurs, ensuring that technology
and people work together: a problem is always flagged
immediately and attended to by an expert who can assess
and remedy it quickly.
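As a rough sketch of that kind of custom alarm – not any specific DCIM platform’s API – the example below polls a reading and alerts a named contact when a threshold is breached. The metric, threshold and contact are assumptions made purely for illustration.

```python
# Illustrative sketch of a DCIM-style custom alarm, assuming a generic
# rack-inlet temperature as the monitored metric. The sensor read and
# the notification are stand-ins, not any real DCIM product's API.
import random
import time

TEMP_THRESHOLD_C = 27.0                            # assumed upper bound
ON_CALL_CONTACT = "facilities-team@example.com"    # hypothetical contact
POLL_INTERVAL_S = 5                                # shortened for the demo

def read_rack_inlet_temp() -> float:
    """Stand-in for a real sensor read (BMS, SNMP, Modbus, etc.);
    simulated here so the sketch runs end to end."""
    return random.uniform(24.0, 30.0)

def raise_alarm(message: str) -> None:
    """Stand-in notification: log to stdout. In production this would page
    a specific team via email, SMS or a webhook into a ticketing system."""
    print(f"[ALARM -> {ON_CALL_CONTACT}] {message}")

def monitor(cycles: int) -> None:
    """Poll the reading a fixed number of times and flag any breach."""
    for _ in range(cycles):
        temp = read_rack_inlet_temp()
        if temp > TEMP_THRESHOLD_C:
            raise_alarm(f"Rack inlet temperature {temp:.1f} C exceeds "
                        f"{TEMP_THRESHOLD_C:.1f} C – dispatch an engineer.")
        time.sleep(POLL_INTERVAL_S)

if __name__ == "__main__":
    monitor(cycles=3)
```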
The future: monitoring technology
Monitoring technology will continue to grow and expand
its remit, becoming more intelligent, precise and affordable.
This will benefit legacy sites and, with the right measures in
place, will limit vulnerabilities. There will come a time in the
not-too-distant future when advanced monitoring technology
will help to drive efficiencies that lead to more remote and
cost-effective off-site management models. Used correctly, it
will ultimately provide users with data that guides their
decisions and keeps them one step ahead.
New locations
Last year, we saw a data centre submerged in the sea off
the coast of Scotland. As technology increasingly helps us
identify and fix issues remotely, we’ll likely see more non-
traditional data centre locations come into play.
We’re at a very exciting inflexion point in the industry;
infrastructure, technology and artificial intelligence are
working together in ways we didn’t think possible. There are
more options than ever before to get it right, and while we’ll
continue to see a shift towards utilising colocation services,
legacy centres will be more protected than ever, owing to
advancements across the board.