Driving Healthcare Transformation Through Generative AI
• Question answering about textual information, and
• Chatbots to carry out initial conversations with humans.
The two well-known types of LLMs are:
• Language Model for Dialogue Applications (LaMDA) by Google AI: LaMDA is well known for dialogue or conversational applications and for its ability to generate poems, scripts, musical pieces, emails, letters and software code. Example: Google Bard.
• Generative Pre-Trained Transformer (GPT) by OpenAI: GPT-3 and GPT-4 are designed for a wider range of tasks and can follow instructions, answer questions and generate creative textual content. Example: ChatGPT (a brief API sketch follows this list).
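As an illustration of the question-answering and chatbot uses listed above, the sketch below (assuming the OpenAI Python SDK and an API key in the OPENAI_API_KEY environment variable; the model name and prompts are illustrative, not taken from this article) sends a single question to a GPT model and prints the reply:

from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a helpful healthcare assistant."},
        {"role": "user", "content": "Summarize the common symptoms of dehydration."},
    ],
)
print(response.choices[0].message.content)

The same pattern, with the user message replaced by each incoming question, is the basis of a simple question-answering or initial-conversation chatbot.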
LaMDA is trained on large datasets of text and code, while GPT-3 and GPT-4 are trained on textual datasets only. GPT-4V, which adds image and voice capabilities, was announced in September 2023. The largest published LaMDA model has about 137 billion parameters, while GPT-3 has 175 billion. OpenAI has not disclosed the parameter count of GPT-4, and public estimates vary widely.
A parameter in an LLM is a variable that the model learns from its training data. Parameters represent, for example, the weights of the connections between neurons and the biases of those neurons; together they control the model's output for a given input. The parameters of an LLM should not be confused with the hyper-parameters of traditional AI models, which are chosen by the practitioner rather than learned. An LLM can be considered a foundation model for natural language generation and comprehension; the "foundation" here is natural language.
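To make this distinction concrete, the minimal sketch below (assuming PyTorch, which the article does not itself use) shows that weights and biases are the learned parameters counted by the framework, while a value such as the learning rate is a hyper-parameter set by the practitioner:

import torch
import torch.nn as nn

# A tiny network: its weights and biases are the learned parameters.
model = nn.Sequential(
    nn.Linear(10, 32),  # weight matrix (32 x 10) plus bias vector (32)
    nn.ReLU(),
    nn.Linear(32, 2),   # weight matrix (2 x 32) plus bias vector (2)
)

total = sum(p.numel() for p in model.parameters())
print(f"Learned parameters: {total}")  # 320 + 32 + 64 + 2 = 418

# Hyper-parameters, by contrast, are chosen before training, not learned:
learning_rate = 1e-3
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

An LLM such as GPT-3 is the same idea at vastly greater scale: billions of such weights and biases learned during pre-training.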
Figure 1-1: Hype cycle for artificial intelligence, 2023.³
³ https://www.gartner.com/en/articles/what-s-new-in-artificial-intelligence-from-the-2023-gartner-hype-cycle