Short of not being in business at all, operational risks are part of the game and managers have no choice about being exposed to them. They can reduce their exposure by a number of means described later, but they cannot avoid operational risks as they could the other risk categories: there are very few avoidance options that do not call the basic business model into question. As an example, a virus attack on a data centre can only be avoided to a certain degree, since the attack is malicious and uses historical data to succeed.
Operational risk losses also bear no relation to the value of the assets exposed to risk. If a server costing 50k$ is exposed, the cost of data recovery, downtime, clean-up, accounting and reputation can be one hundred times as great. And what if a generator breaks and the business continuity plan fails?
It is fairly common to try to apply the same
modelling techniques to operational risk as
those applied to credit and market risk. The
success ranges from none to limited. A model
could show that a certain sequence or grouping
of events leads to a risk appearing (this is event
correlation). This would be based on historical
data by definition and should lead to some
action to stop the risk occurring – such as
avoiding the events or changing them in some
way. As a one-time exercise to identify and
eliminate obvious risks, this makes sense. As a
way forward, it is similar to treating the symptoms and not the cause. The very nature and scope of operational risk make these approaches an endless, time-consuming and costly exercise with no guarantee of success, ever. In the case of malicious acts causing operational risks, it would be counterproductive, since a malicious act will always try something that has not been done before and this type of modelling would act as an excellent oracle. In short, it is a waste of time.
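To make the limitation concrete, the following is a minimal sketch (in Python, with invented event names and incidents) of what event correlation over a historical loss log amounts to: ranking groupings of events that have already preceded a loss. By construction, it can only ever describe what has happened before.

```python
from collections import Counter
from itertools import combinations

# Hypothetical incident log: each entry is the set of events observed
# in the run-up to a recorded operational loss (names are invented).
incident_log = [
    {"patch_missed", "av_signature_stale", "virus_outbreak"},
    {"patch_missed", "backup_failed"},
    {"av_signature_stale", "virus_outbreak"},
]

# Count how often pairs of events appear together before a loss.
pair_counts = Counter()
for events in incident_log:
    for pair in combinations(sorted(events), 2):
        pair_counts[pair] += 1

# The "model" simply ranks historical co-occurrences; by construction it
# can never point at a grouping of events that has not already happened.
for pair, count in pair_counts.most_common(3):
    print(pair, count)
```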
OPERATIONAL RISK – SOURCE, CONTROLS, ACCOUNTING AND HISTORY
Where do the risks come from?
Operational risk comes from people, processes
and technology as well as external events, and
it is no secret that financial institutions are
entirely dependent upon technology. Without
the IT infrastructure, all but a few private banks
would cease to be able to trade. As a simple
example, how many clerks could raise an
invoice (assuming they were allowed to do it
manually)? When asked to raise an invoice, the
natural reaction is to press a button –
technology use has led to a situation of complete dependency, and the actual content and legal use of an invoice in this example are no longer within the capability of operational people.
Whether we like it or not, we need to deal with this situation as it actually is.
The IT infrastructure is fundamental to business operations, and this is where the first search for operational risk should begin. It is a bit like picking mushrooms – just because you found one does not mean the next will be found nearby.
The second area is a combination of people
and processes – people run processes that are
defined by people. In general, processes have
been audited a hundred times and have
usually been defined by intelligent people.
Processes fail for two reasons (apart from malice): they are ill-defined for the objectives they are meant to achieve, or people do not obey them to the letter. It is rare that a process needs redefinition or re-engineering to reduce the operational risk.
Auditing and management controls
Auditing has a fairly clear objective – ensuring
that things are as they are intended to be. This
assumes that the intended state is well defined,
which may not be the case. The other constraint on auditing (in the search for operational risk at least) is practical: auditors will usually look at controls for evidence of deviations and will use some form of sampling on the data. There is simply too much data to look at every transaction.
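As a rough, purely illustrative calculation (the figures are assumed, not drawn from any audit standard): if one transaction in a thousand is deviant and the auditor samples five hundred transactions, there is a good chance the sample contains no deviation at all.

```python
# Assumed figures, purely for illustration: one deviant transaction per
# thousand, and an audit sample of 500 transactions.
deviation_rate = 0.001
sample_size = 500

# Probability that the sample contains no deviant transaction at all.
p_miss = (1 - deviation_rate) ** sample_size
print(f"Chance the sample shows nothing wrong: {p_miss:.0%}")  # roughly 61%
```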
So, why not have a set of simple management
controls that are audited and reported to
management? Suppose one of the controls were a trial balance on a database, executed and audited every week, and that it was always correct. What would this prove?
It would prove that at the end of the week, it
was correct. It would tell nothing about next
week and may even have misled you about the
previous week. It would tell management
nothing about the vulnerability of the processes
or whether there was some operational risk
exposure or whether an event would occur
tomorrow. It would not matter if this balance had been correct every week for the last twenty years. This is the problem of operational risk, and the same scenario holds true for any management control.
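For illustration, such a control might amount to little more than the following sketch (the ledger contents are invented): it confirms that debits equal credits at the moment it runs, and nothing else.

```python
from decimal import Decimal

# Hypothetical ledger rows: (account, debit, credit).
ledger = [
    ("cash",        Decimal("1000.00"), Decimal("0.00")),
    ("receivables", Decimal("0.00"),    Decimal("400.00")),
    ("revenue",     Decimal("0.00"),    Decimal("600.00")),
]

total_debits = sum(row[1] for row in ledger)
total_credits = sum(row[2] for row in ledger)

# The control passes if the books balance at this instant; it says nothing
# about how the data got there or what state it will be in next week.
assert total_debits == total_credits, "Trial balance failed"
print("Trial balance correct for this week")
```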
In short, auditing and management controls are not effective tools for the control of operational risk – at best they help.
Accounting for losses
When a loss occurs in the traditional areas of risk (credit and market), the accounting methods and standards are known and practised. When an operational loss occurs, there is no such prior art in accounting for it. The tools exist but are rarely used.
As an example, a lost credit of 50k$ is accounted for as a 50k$ loss. The loss of a server with an asset value of 50k$ could well cost the company 2m$ in losses associated with recovery, but these will most probably be accounted for as expenses, neglecting the original intent of tracking all financial and non-financial impacts.
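A sketch of what fuller accounting might look like (the cost categories and amounts below are assumed, not taken from any real incident) is to book the associated costs against the single loss event rather than scattering them across unrelated expense lines.

```python
# Illustrative only: cost categories and amounts are assumed.
loss_event = {
    "event": "server failure",
    "asset_value": 50_000,
    "impacts": {
        "data recovery":      300_000,
        "downtime":         1_200_000,
        "clean-up":           150_000,
        "accounting effort":   50_000,
        "reputation":         300_000,
    },
}

total_loss = sum(loss_event["impacts"].values())
print(f"Asset value of the server: {loss_event['asset_value']:>12,}$")
print(f"Full operational loss:     {total_loss:>12,}$")  # 2,000,000$ in this sketch
```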
This has some considerable impact since the
idea of Basel 2 is that operational losses will be
used to calculate capital adequacy. If there is
not a culture that fully accounts for operational
loss (including reputational loss incidentally),
then capital adequacy will be understated. This,
however, is a minor inconvenience in
comparison with the impact on the books or the
impact of having an actual loss of 2m$ which
may repeat itself.
Operational loss history
A provision in Basel 2 is that a historical database of losses be available with three years' data. Whilst it is true that a historical database tells the past with some accuracy (bearing in mind the accounting problems cited above), it tells little about the future. Just because you have had no car accidents for ten years does not mean you won't have one tomorrow. Worse, the current provisions are flexible, such that external data can be presented if internal data is not accurate or not available, and there seems to be no minimum quantity requirement – ten losses are as good as ten thousand.
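A back-of-envelope illustration of why that flexibility matters (the loss figures below are assumed): the uncertainty in an estimated average loss shrinks only with the square root of the number of observations, so ten losses are statistically nowhere near as informative as ten thousand.

```python
import math

# Assumed figures: an average loss of 100k$ with a standard deviation of 80k$.
mean_loss, std_dev = 100_000, 80_000

for n in (10, 10_000):
    # Standard error of the estimated average loss from n observations.
    std_error = std_dev / math.sqrt(n)
    print(f"{n:>6} losses: the {mean_loss:,}$ average is known to within about +/- {std_error:,.0f}$")
```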
All in all, the loss database approach is conceptually flawed for the search for operational risk. As a simple example, a virus attack last year caused a certain loss. Next week a different virus attack could occur – in this example, in a technical sense, the loss database tells us what won't happen in the future, since it is unlikely you would be caught twice with the same trick. This is a fundamental flaw of the loss database approach – it tells you about the ones that caught you out and tells nothing about control or future operational risk scenarios.