Volume
During the past few years, the deployment
of enhanced and verbose security sensors
and defences has resulted in a high
volume of data fed into threat intelligence
tools. Big data analytics and machine-
learning tools consume this data and
add their analyses to it. The net effect is improved capability to detect potential attacks and a marked increase in internal threat detection, but a massive signal-to-noise problem remains to be solved. Although the systems are
getting better at detection, we have not
yet seen a corresponding improvement in
the capability of human analysts to triage,
process and act on the intelligence.
Vendors are working on solutions
to address this problem, from access
monitors on sensitive data to sophisticated
sandboxes and traps that can resolve
contextual clues about a potential attack
or suspicious event. Further automation
and process orchestration are essential to augment human capacity.
Quality
Related to source validation, discussed below, is the quality
of the information we share. Legitimate
sources can send anything from definitive
indicators of attack or compromise to
their entire event feed, which may be
of little or no relevance to the receiver.
Although more threat intelligence is
generally better, much of it is duplicated,
and too much low-quality intelligence is
of little value. Many threat exchanges are
coming online, but they are only as good
as their inputs and sensors.
Vendors need to re-architect security
sensors to capture and communicate
richer trace data to help decision-support
systems identify key structural elements
of a persistent attack. Filtering, tagging and deduplication are critical intake tasks to automate in order to increase the value of threat intelligence and make it actionable.
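As a sketch of what automating those intake tasks might look like, the Python below deduplicates indicators it has already seen, tags each one with its provenance, and applies a crude relevance filter. The Indicator shape, the watch_suffixes parameter and the tag names are all invented for illustration, not drawn from any particular product.

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    ioc: str                 # e.g. an IP address, domain or file hash
    source: str              # where the indicator came from
    tags: set = field(default_factory=set)

def intake(raw_indicators, seen_iocs, watch_suffixes=(".example.com",)):
    """Filter, tag and deduplicate incoming indicators."""
    accepted = []
    for ind in raw_indicators:
        key = ind.ioc.lower()
        if key in seen_iocs:                  # deduplication
            continue
        seen_iocs.add(key)
        if key.endswith(watch_suffixes):      # crude relevance filter
            ind.tags.add("watchlist-domain")
        ind.tags.add("src:" + ind.source)     # provenance tag
        accepted.append(ind)
    return accepted
```

In practice the `seen_iocs` set would be a persistent store shared across feeds, so that the same indicator arriving from multiple exchanges is collapsed into one record rather than triaged twice.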
An early, promising effort to improve
threat intelligence quality will come
online in 2017 through the Cyber Threat
Alliance. Threat intelligence coming
from CTA members will be automatically
scored for its quality, and members will
be able to draw out threat intelligence only if they have provided input of sufficient quality.
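The CTA's actual scoring rules are not described here; the toy sketch below only illustrates the give-to-get idea: score each submission for completeness and declared confidence, and gate consumption on the submitter's average score. The field names, weights and threshold are all invented for illustration.

```python
REQUIRED_FIELDS = ("ioc", "first_seen", "attack_stage", "confidence")

def quality_score(record):
    """Score a submission 0..1 by completeness and declared confidence."""
    completeness = sum(f in record for f in REQUIRED_FIELDS) / len(REQUIRED_FIELDS)
    confidence = float(record.get("confidence", 0)) / 100.0  # 0-100 scale assumed
    return 0.7 * completeness + 0.3 * confidence

def may_draw_intelligence(member_scores, threshold=0.6):
    """Members draw intelligence only if their average input quality suffices."""
    return bool(member_scores) and sum(member_scores) / len(member_scores) >= threshold
```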
Validation
Disinformation and fake news are not new.
Adversaries may file false threat reports to
mislead or overwhelm threat intelligence
systems. As a result, it is essential to
validate the sources of shared threat
intelligence, from both inside and outside
the organisation.
Outside validation is perhaps the
more obvious requirement, ensuring that
incoming threat intelligence is being sent
by legitimate sources and has not been
tampered with in transit. This is typically accomplished with encryption, cryptographic hashes and digital signatures on the content.
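As one concrete example of the transit check, the sketch below authenticates a feed payload with an HMAC computed under a key shared with the source. A production exchange would more likely rely on TLS plus public-key signatures, but the recompute-and-compare-in-constant-time pattern is the same; the function and parameter names are illustrative.

```python
import hashlib
import hmac

def feed_is_authentic(payload: bytes, signature_hex: str, shared_key: bytes) -> bool:
    """Recompute the HMAC-SHA256 over the payload and compare in constant time."""
    expected = hmac.new(shared_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```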
Internal validation is a different problem, not
so much validating the sources as analysing
and evaluating the content to determine if it
is a legitimate attack or a noisy distraction
to draw attention and resources away from a
quieter, stealthier threat.
Speed
The speed of transmission, or more accurately the latency between detecting a threat and receiving the critical intelligence, is also an important attribute.
Intelligence received too late to prevent
an attack is still valuable, but only for the
clean-up process. This is one reason why
open and standardised communication
protocols, designed and optimised for
sharing threat intelligence, are essential to
successful threat intelligence operations.
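STIX, carried over the TAXII transport protocol, is one such open standard. As a minimal sketch, the Python below builds a single indicator in STIX 2.1 JSON form (using the field names from that revision of the standard); the IP address is a documentation placeholder, not real intelligence.

```python
import json
import uuid
from datetime import datetime, timezone

now = datetime.now(timezone.utc).isoformat(timespec="milliseconds").replace("+00:00", "Z")
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--" + str(uuid.uuid4()),
    "created": now,
    "modified": now,
    "pattern": "[ipv4-addr:value = '203.0.113.5']",  # STIX pattern for a suspect IP
    "pattern_type": "stix",
    "valid_from": now,
}
print(json.dumps(indicator, indent=2))
```

Because every producer and consumer agrees on this structure, an indicator can move from sensor to exchange to a receiving organisation's tools without manual re-keying, which is what keeps the latency low enough to matter.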
The propagation of attacks between
systems happens within a minute or two
of a machine being compromised, so
communications between sensors and
systems within the enterprise have to
operate in near real time. Meanwhile,
advanced persistent threats and
sophisticated, targeted campaigns often
go after multiple organisations in the
same vertical market, so communications
from one organisation to another, usually
involving an intermediary or exchange,
have to take place within a few hours of the
first indication of an attack.
Correlation
Finally, as threat intelligence is received,
correlating the information — while
looking for patterns and key data points
relevant to the organisation — is the most
critical step. Although some organisations treat the raw data as proprietary or a source of competitive advantage, the ability to collect data is not a critical factor. It is
the processing that turns data first into
intelligence and then into knowledge
that can inform and direct the security
operations teams.
The ability to validate data in near
real time, correlate it across multiple
operating systems, devices and networks,
use it to triage the event, prioritise the
investigation and scope the response is critical to effective detection and correction. The goal is to leverage
technologies and machine capabilities to
triage event data, distil it into high-quality
events, and scope and prioritise incidents
so that security analysts can focus their
attention on the highest-risk items.
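A minimal sketch of that correlation-and-triage step is below: received indicators are indexed by IOC value, internal events are joined against them, and the matched IOCs are ranked by declared severity and by how many events they touched, so that analysts see the highest-risk items first. The event and indicator field names are invented for illustration.

```python
from collections import defaultdict

def correlate(events, indicators):
    """Join internal events against received indicators, keyed by IOC value."""
    by_ioc = {ind["ioc"]: ind for ind in indicators}
    hits = defaultdict(list)
    for event in events:
        for value in (event.get("src_ip"), event.get("dst_ip"), event.get("file_hash")):
            if value in by_ioc:
                hits[value].append(event)
    return hits, by_ioc

def prioritise(hits, by_ioc):
    """Rank matched IOCs by severity, then by the number of events touched."""
    return sorted(hits,
                  key=lambda ioc: (by_ioc[ioc].get("severity", 0), len(hits[ioc])),
                  reverse=True)
```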
Together, these issues describe threat
intelligence sharing’s ‘last mile’ problem:
taking this information and converting
it to controlled action. To cover this last
mile, we need to find better ways to share
threat intelligence between a vendor’s
products and with other vendors; improve methods to automatically identify relationships within the collected intelligence; and employ machine assistance to simplify triage.