Driving Healthcare Transformation Through Generative AI
Figure 3-1: Comparing the medical domain to medical LLMs.
Medical professionals in the US typically complete regular schooling, then four years of college, before entering a medical degree program. In the education leading up to an undergraduate degree, they acquire broad subject knowledge, language, and communication skills. Foundational LLMs simulate similar knowledge by training on a large corpus scraped from the web. Massive computational power and large volumes of training data produce models that can behave like humans in a limited sense, using natural language for interaction. In the next stage, medical professionals pursue specialized education: physicians complete an MD, followed by residency and optionally a fellowship, with comparable paths for nurses, therapists, and physician assistants, to learn the principles of healthcare.
Similarly, foundational LLMs can be trained and fine-tuned on medical data to create medical LLMs. Such training data is scarce and subject to data privacy and security requirements, making this a challenging task. Finally, when trained healthcare professionals begin working in a medical setting, this is comparable to a deployed healthcare GenAI solution that has access to specific patient data and can create patient summaries and similar drafts for review by trained healthcare professionals, augmenting rather than replacing them. Retrieval-Augmented Generation (RAG) also applies at this stage of the solution. RAG allows enterprise information to enhance the results of the LLM without exposing internal data to public LLMs or outside the organization. RAG is further described in 3.1.2.
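The RAG pattern described above can be sketched in a few lines: retrieve the most relevant internal documents for a question, then pass only that retrieved context to the LLM in the prompt. This is a minimal illustration, not a production design; the function names, the sample notes, and the word-overlap retriever are all hypothetical stand-ins (a real system would use embedding-based retrieval and a secured LLM endpoint).

```python
import re

# Hypothetical in-house clinical notes that must not leave the organization.
INTERNAL_NOTES = [
    "Patient A: type 2 diabetes, metformin 500 mg twice daily.",
    "Patient B: hypertension, lisinopril 10 mg daily.",
    "Patient C: asthma, albuterol inhaler as needed.",
]

def _words(text: str) -> set:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, docs: list, top_k: int = 1) -> list:
    """Rank documents by word overlap with the query (toy retriever)."""
    q = _words(query)
    ranked = sorted(docs, key=lambda d: len(q & _words(d)), reverse=True)
    return ranked[:top_k]

def build_prompt(query: str, context: list) -> str:
    """Combine retrieved context with the question; only this prompt
    (not the whole document store) is sent to the LLM."""
    ctx = "\n".join(context)
    return f"Context:\n{ctx}\n\nQuestion: {query}\nAnswer using only the context above."

question = "What medication is Patient B taking?"
prompt = build_prompt(question, retrieve(question, INTERNAL_NOTES))
print(prompt)
```

The key design point for healthcare settings is that the enterprise data stays behind the retrieval step: the external model only ever sees the few passages selected for a specific question, and its draft output still goes to a clinician for review.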
Next, let's look at each of the four use cases that we will deep-dive into.
Journal of Innovation 93