86.2% accuracy. Similar algorithms can be used to detect nuanced differences in electrocardiograms, CT scan images, and even in oncology to look for otherwise invisible patterns of disease onset and progression. As artificial intelligence algorithms improve with each iteration, routine investigations such as X-rays, CT scans, MRI scans, and ECGs would fall into the domain of artificial intelligence, yielding quicker and more reliable results.

Volume 4 | Issue 1 | January-March 2019

INVESTMENT IN AI-CENTERED
HEALTHCARE
Beyond research laboratories and hospitals, the emergence of AI has spurred rapid growth in policies regarding AI and in AI investment around the world. AI-based start-ups have grown rapidly. StartUp Health, a US-based incubator, recently reported that there were 7,600 healthcare start-ups around the world working on digital health innovation, a major portion of which involves AI-based innovation. An Accenture report published in late 2017 states, “Growth in the AI health market is expected to reach $6.6 billion by 2021 - that’s a compound annual growth rate of 40%”. Another report by CIS India
published this year states that AI could
add a whopping $957 billion to the
Indian economy by 2035. Even state
governments are pushing for growth in
AI-based sectors. The Government of India
aims to increase healthcare spending
to 2.5% of the Gross Domestic Product
(GDP) by the end of its 12th five-year
plan, and to 3% by 2022. Such high
rates of adoption are driven by numerous
AI start-ups and the involvement of major
players like Microsoft and IBM.
Given the skewed ratio of doctors to
patients in India, AI-based healthcare
techniques would provide much-needed
help in delivering healthcare amenities
to the masses. Globally, the US government
has made heavy investments in two of
its AI-centered healthcare initiatives:
a proposed budget of $1 billion for
its Cancer Moonshot Program and
another $215 million for its Precision
Medicine Initiative.
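Such projections can be sanity-checked with the compound annual growth rate (CAGR) formula, which relates a start value, an end value, and a period in years. In the sketch below, only the $6.6 billion 2021 endpoint and the 40% rate come from the Accenture report quoted above; the implied base year and base value are back-calculated purely for illustration.

```python
# Compound annual growth rate (CAGR): the constant yearly growth rate
# that takes a start value to an end value over n years.
def cagr(start_value, end_value, years):
    return (end_value / start_value) ** (1 / years) - 1

# Hypothetical illustration: back out the market base five years before
# 2021 that a 40% CAGR would imply (values in $ billions).
start = 6.6 / (1.40 ** 5)
print(f"implied base: ${start:.2f}bn")        # roughly $1.23bn
print(f"check CAGR: {cagr(start, 6.6, 5):.0%}")  # recovers 40%
```

The same `cagr` helper works for any pair of figures, which makes it easy to test whether two reported numbers and a quoted growth rate are mutually consistent.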
ETHICS AND ISSUES WITH AI IN
HEALTHCARE
As rapidly as AI has been embraced by
the medical and healthcare community,
its benefits cannot be realized without
understanding its ethical pitfalls.
Several concerns arise when applying
these algorithms at a large scale to
make real clinical decisions.
Algorithms, albeit self-learning, are
products designed by humans and may
reflect human biases in the results they
produce: the biases of their designers,
or biases present in the dataset on which
the algorithm was trained. For example,
algorithms developed by private-sector
entities can be biased to favor outcomes
of commercial interest, and healthcare
institutes may apply AI systems selectively
based on, say, a patient's insurance plan,
economic status, or any other parameter.
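Dataset-driven bias of the kind described above can be illustrated with a small simulation. The scenario, groups, and numbers below are entirely hypothetical: two patient groups have the same true disease prevalence, but one group was historically under-tested, so its positive cases are under-recorded; any model fit to those labels inherits the sampling bias rather than any biological difference.

```python
import random

random.seed(0)

TRUE_PREVALENCE = 0.30  # ground truth: identical for both groups

def sample_training_data(n):
    """Simulate a label-collection process with a hypothetical bias:
    sick patients in group B are recorded as positive only half as often."""
    data = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        sick = random.random() < TRUE_PREVALENCE
        if group == "B" and sick and random.random() < 0.5:
            sick = False  # missed diagnosis recorded as a negative label
        data.append((group, sick))
    return data

data = sample_training_data(20_000)

def positive_rate(group):
    labels = [sick for g, sick in data if g == group]
    return sum(labels) / len(labels)

# A model trained on these labels would "learn" that group B is at lower
# risk, even though the true prevalence is identical in both groups.
print(f"observed rate, group A: {positive_rate('A'):.2f}")  # near 0.30
print(f"observed rate, group B: {positive_rate('B'):.2f}")  # near 0.15
```

The point of the sketch is that the bias enters before any learning happens: the algorithm faithfully reproduces the skewed labels it was given.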
Even though deep learning algorithms
can perform sophisticated predictions
on imaging data, they are not fed an
explicit code of information but are
self-taught systems. Even though the
prediction scores they give, for example
on whether a lesion is malignant or
benign, are surprisingly accurate when
corroborated with a doctor's diagnostic
report, there is no way to determine how
exactly the system came to that conclusion,
thus rendering AI systems a black
box, with little clarity