AI might be the hottest technology today, but sustained excitement can't be taken for granted. As we approach the peak of another AI hype cycle, it's important to remember that previous waves of AI enthusiasm have been followed by seasons of decline, commonly referred to as "AI Winters." Can we prevent the next AI Winter? If so, it won't be through hype but by separating hype from reality. AI is often portrayed as a magic wand that will solve everything. In reality, many AI projects fail, and according to research, a vast majority of them will disappoint. While there's a lot of excitement now, making AI succeed is hard, and history has shown that sustained progress in AI is not a given. The technology community must therefore proactively address the issues that have caused AI to fail in the past. In previous seasons, overpromising and underperforming translated into failures so dramatic that funding largely dried up. Given the successes we've already achieved with current-generation AI, that's not likely to happen in exactly the same way today. But it's very possible that today's optimism about AI ultimately results in widespread disappointment and decreased funding for new research. By learning from the past, we can better address the challenges ahead and help prevent the widespread disillusionment that might cause the next AI Winter.
TECHNOLOGICAL HYPE VS. REALITY
Renowned computer scientist John Launchbury considers AI to be in its second wave, in which techniques like neural networks, once considered impossible, can now process huge amounts of data with little human intervention. Current AI methods have led to compelling results. AI is at the core of many of the world's largest technology companies, and governments around the world are also investing heavily. But within the AI community, excitement has been tempered by inconsistent successes across applications. Today's AI can solve certain problems really well, but there are many it cannot, and it's important to have a clear understanding of its limitations. Second-wave AI tends to operate effectively under narrow conditions in which quality training data is available, while third-wave AI is expected to interact with humans in more intuitive ways through a greater understanding of context.
Meanwhile, "artificial general intelligence," which so often captures the popular imagination because it blurs the line between humans and machines, remains elusive. Most computer scientists agree that artificial general intelligence is at least a decade away, which is another way of saying they have no idea when it will be possible. That's why it's so critical to understand the strengths and limitations of current AI and to choose projects that can deliver real value based on the art of the possible.
GREAT POWER, GREAT RESPONSIBILITY
With rapid AI adoption comes the need for policies and practices that address key ethical, legal, and security implications. We want AI systems to be fair, robust, and secure, but our current tools for building AI systems deliver "black boxes" that don't offer the transparency we need. For example, current-generation AI algorithms can easily reflect and even amplify human bias. And from a security perspective, AI creates new attack surfaces, leaving algorithms vulnerable to a growing set of attacks that can cause them to fail or reveal sensitive data.
There's a significant risk that many early AI efforts across government will ultimately fail if they are launched without the technology solutions needed to meet the necessary security and ethical standards. Creating clear and effective policy to address these concerns, and pairing that policy with effective and mature technology solutions, will be critical to the long-term acceptance of AI, particularly within government. The goal is to ensure AI is fair, explainable, and assured. That requires the right policy and the right technology to realize it.
PLANNING FOR THE LONG HAUL
AI systems are operating expenses, not capital investments. AI can generate value by boosting revenue and cutting costs, but leaders must budget resources to ensure it functions properly over time and is adjusted as factors change and new sources of data emerge. This is called "hidden technical debt": the necessary expenses beyond data collection and model building. It is not enough to build a model in the lab that solves a particular problem. The real challenge is putting it into production, securing it, testing it, and maintaining it over time. That can be a huge cost, and it typically isn't planned for.
Finally, AI projects need to include investment plans and execution metrics for the broader organization, including legal, human resources, procurement and purchasing, and secure IT capabilities. The challenge for leaders is to optimize large-scale operations with new ways of doing business that integrate AI into a holistic organizational improvement process.
Delivering value over the long term is the best way to avoid the next AI Winter. AI can do great things. If we set realistic expectations, write good policy, and make smart investments, we'll be able to deliver on the promise of AI while avoiding the disillusionment of the past.
MEET RON KEESING
As leader of the Leidos AI/ML Accelerator, Ron Keesing is responsible for developing and implementing all aspects of AI and ML strategy, including the evaluation of emerging technology and the selection and execution of investments in R&D. He leads a team of 30+ Ph.D.-level AI and ML researchers and data scientists within the Leidos Innovations Center who develop research-based solutions for customers and internal partners, as well as for the community of AI/ML practitioners and data scientists across Leidos.
CESGovernment.com • 27