providing responses to prior authorisations and insurance claim denials, which no clinical provider enjoys doing. It will also be able to enhance clinical decision-making by summarising patient records and extracting the pertinent information needed to meet the patient’s care needs in the moment.
Still, AI is only as good as the data it employs, so that information will have to be accurate, up to date, and unbiased for the technology to be used in a field as important as healthcare. As it stands, ChatGPT has limited knowledge of anything after 2021, so some of its ideas may be outdated or incorrect.
AI chatbots will be ‘the future of medicine delivery’
The ROI is crystal clear on the sheer power of AI chatbots. AI chatbots are the future of medicine delivery. Just as automation has made flying airplanes significantly safer, so too will AI chatbots make the delivery of care safer.
Other possible ChatGPT uses for hospitals and health systems include writing rough drafts of patient education content; quickly summarising lengthy medical records (with a HIPAA-compliant version of ChatGPT); and near- or real-time translation services. The key, at least for now, will be to ensure that humans still review the work. ChatGPT is still learning, and like any learner, it still needs some oversight.
Some risks while the technology is still novel
The tool’s “newness” should warrant caution, as ChatGPT could pose security risks.
This is new technology, and in most cases a ‘free’ technology. It therefore stands to reason that we need to proceed cautiously: understand exactly how this technology works, determine how accurate it truly is, and verify, from both a privacy and a security perspective, what the implications of this ‘free’ technology could be for patient and organisational data, and what privacy protections and recourse should exist for healthcare organisations.
When tested, ChatGPT could offer only limited support for languages other than English and could not identify political material, spam, deception, or malware. ChatGPT also warns its users that it “may occasionally produce harmful instructions or biased content.”
Healthcare needs to remain cautious about the tool’s potential to generate false or inaccurate information. Because that risk can be significant, its use in clinical medicine will require greater caution, with extensive clinical collaboration and input into the specific clinical use cases.
This technology could potentially be used to answer patient-related administrative, “decision-tree”, or general health-education questions.
However, it’s important to remember that healthcare is very personal, and generative AI technologies are only as good as the data they access. Data sources within healthcare are rightfully secure and protected, so we must be careful not to overhype the promise, especially when it comes to areas such as symptom detection, triage, and clinical treatment suggestions. ChatGPT takes artificial intelligence into a new realm, one that can create real value and also palpable harm. But we don’t believe in artificial intelligence; we believe in augmented intelligence. That is to say, as humans, being given a statistical analysis of past data to help us make decisions in the future is wonderful.
(Excerpts extracted from Becker’s Hospital Review, 2023)
Dr Timothy Low is CEO and Board Director of Farrer Park Hospital in Singapore .
GlobalHealthAsiaPacific.com ISSUE 2 | 2023