While this form of AI, used to surface data
correlations that fall outside normal
parameters, is still in its nascent stages, its
high success rate is stirring hopes that a critical
new weapon is at hand.
ANOTHER ARROW IN THE QUIVER
For security experts, machine learning is not
a replacement for current threat assessment
practices. Rather, it’s a valuable adjunct that
confronts the very real problem of too many
false alarms.
“Most intrusion detection systems are
rules-based—if a specific condition occurs,
you respond according to what the rules
state,” explains Samir Hans, a partner in
Deloitte’s risk and financial advisory practice
who focuses on vigilant cyber threat management
solutions. “But that’s a challenge
all data security specialists have to contend
with, since not every alert is an actual threat.
There’s so much noise, making it difficult to
confirm what is and isn’t a threat.”
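As a rough illustration of the rules-based approach Hans describes, a detection system can be reduced to a list of conditions, each of which fires an alert when it matches an event. The rule names and thresholds below are illustrative assumptions, not drawn from any real product, and the example shows how easily benign activity generates noise.

```python
# Minimal sketch of a rules-based intrusion detection check:
# "if a specific condition occurs, you respond according to what
# the rules state." All names and thresholds are made up.

RULES = [
    ("failed_logins", lambda e: e.get("failed_logins", 0) > 5),
    ("off_hours",     lambda e: e.get("hour", 12) < 6),
    ("large_upload",  lambda e: e.get("bytes_out", 0) > 10_000_000),
]

def evaluate(event):
    """Return the names of every rule the event trips."""
    return [name for name, condition in RULES if condition(event)]

# A night-shift admin running a backup trips two rules at once:
# noise, not a threat, yet each alert still demands attention.
alerts = evaluate({"failed_logins": 0, "hour": 3, "bytes_out": 50_000_000})
```

Every tripped rule lands in an analyst's queue, which is exactly the "decibel level" problem: the rules cannot tell the backup job from the exfiltration.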
Given this high decibel level, most companies
cannot hire enough information security
analysts to listen in on every possible intrusion.
“The threats just keep adding up,” says
Hans. “It begins to feel like a losing battle,
even with accuracy improvements in rules-based
systems.” For years, he explains, really
smart researchers have been asking what else
they can do to detect real fraud.
That’s where machine learning has come
into the picture. For Hans and his team at
Deloitte, machine learning algorithms achieve
two very clear goals—first, they surface
unique data behaviors so security staff can
better discern threats, and second, they help
experts learn from the experience.
“We’re not throwing away the rules,”
he says, “we’re just layering more advanced
techniques like machine learning to enhance
the speed and precision of our threat detection
capabilities.”
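One way to picture the "layering" Hans describes, without discarding the rules: keep the rule hits, then rank them by how far each one deviates from a learned baseline, so analysts see the outliers first. The sketch below uses a simple z-score over historical traffic as a stand-in for the more advanced models a real deployment would train; all figures are invented.

```python
import statistics

# Hedged sketch of layering a statistical score on top of rule hits.
# The rules still decide what becomes an alert; the score only decides
# which alerts an analyst should look at first. A z-score stands in
# for a learned model here; the baseline numbers are illustrative.

baseline_bytes = [1_200, 900, 1_500, 1_100, 1_000, 1_300, 950, 1_250]
mu = statistics.mean(baseline_bytes)
sigma = statistics.stdev(baseline_bytes)

def anomaly_score(bytes_out):
    """Standard deviations above the historical outbound-traffic baseline."""
    return (bytes_out - mu) / sigma

def triage(alerts):
    """Rank rule-triggered alerts so the biggest outliers come first."""
    return sorted(alerts, key=lambda a: anomaly_score(a["bytes_out"]),
                  reverse=True)

alerts = [{"host": "a", "bytes_out": 1_400},
          {"host": "b", "bytes_out": 90_000}]
ranked = triage(alerts)  # host "b" deviates far more from the baseline
```

The design point is that nothing is thrown away: the same alerts exist, but the scoring layer adds the speed and precision the rules alone lack.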
This is important given the serious shortage
of skilled cybersecurity employees that
Ramsey describes. “Consequently, we want
our security resources focused on the threats
that machines can’t determine are malicious
or have low confidence that they are malicious.”
He says organizations can think of this
as “tri-state logic: ‘no, it’s legitimate; yes,
it’s malicious; or I don’t know.’ In the cases
the machines don’t know, you get a human
involved.” For example, if three people look at
a threat suggested by the algorithmic calculations
and agree it looks like the real thing,
that’s considered an efficient use of resources.
Otherwise, he says, “everyone is looking at
every possible threat.”
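Ramsey's tri-state logic can be sketched as a small routing step: the machine answers "legitimate," "malicious," or "I don't know," and only the unknowns are escalated to people. The confidence thresholds below are illustrative assumptions, not values from any deployed system.

```python
# Sketch of tri-state triage: "no, it's legitimate; yes, it's
# malicious; or I don't know." Only the "I don't know" cases reach
# a human analyst. Thresholds (0.1 and 0.9) are made-up examples.

LEGIT_BELOW, MALICIOUS_ABOVE = 0.1, 0.9

def tri_state(score):
    """Map a model's maliciousness score to one of three outcomes."""
    if score <= LEGIT_BELOW:
        return "legitimate"
    if score >= MALICIOUS_ABOVE:
        return "malicious"
    return "unknown"  # escalate to a human analyst

def route(scored_events):
    """Return only the events the machine can't decide on."""
    return [event for event, score in scored_events
            if tri_state(score) == "unknown"]

queue = route([("login-anomaly", 0.55),
               ("known-scanner", 0.98),
               ("cron-job", 0.02)])
```

With this split, analysts spend their time on the middle band of ambiguous cases instead of, as Ramsey puts it, "looking at every possible threat."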
Despite the need for machine assistance to
offset the staffing shortage, Ramsey emphasizes
that machine learning is not a replacement for
people—it’s just another tool for security specialists
to sharpen their analyses. “We humans
are imperfect and mathematically inconsistent;
sometimes we’re right, sometimes we’re
wrong,” Ramsey says. “Machine learning can
be a great training tool to increase the odds of
being right.”
GROUND TRUTHS
To underscore the value of machine learning
technology to identify large-scale cyber
threats, Ramsey highlights a scenario of three
separate attacks against companies in three
different industries—oil and gas, copper and
gold mining, and agriculture. “Since the three
companies have little to do with each other,
the attack against one company would appear
to have no relationship to the attacks against
the other two,” he explains.