
Responsible Generative AI
2.1 ACCURACY, BIAS, AND HALLUCINATION
Some recent research papers³ cover detailed experiments on the correctness of commercially available Large Language Models (LLMs) such as ChatGPT.
In some cases, LLMs have shown capabilities of deception, including cheating and feigning innocence. LLMs can reason their way into using deception as a strategy for accomplishing a task. Sycophantic deception, in which a chatbot agrees with the popular or user-preferred view in a conversation rather than the accurate one, has also been cited as a tendency⁴.
Experimental studies demonstrate that large language models often repeat common misconceptions, since they are trained on human-written text from the internet, some of which is false. The explanations these models produce can appear plausible while being misleading, which puts the trust in and safety of their use at risk⁵.
Incorrect answers, unsubstantiated conclusions, toxicity, lack of robustness, unethical solutions, bias-influenced advice, and overconfidence in model performance are among the concerns expressed even by model developers and technical leaders⁶.
2.2 FILLING IN THE BLANKS
LLMs are trained on available data, especially data from the internet, some of which is inaccurate, biased, or unproven. When the source itself does not reflect the truth, a model trained on that data obviously cannot be any better. Secondly, the generative part of an LLM is mostly a composition of the patterns (text, sound, or image) it has seen in the past. This composition happens through a "fill in the blanks" approach over the learned patterns, without any validation against domain principles, so the generated relationships may bear no resemblance to the real physics and logic of the domain.
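To make this concrete, the minimal sketch below (assuming the Hugging Face transformers library and the publicly available bert-base-uncased checkpoint; any fill-mask model would serve) shows how a masked language model literally fills in a blank by ranking candidate tokens according to their likelihood under its training data:

# A minimal sketch of the "fill in the blanks" behavior: a masked
# language model ranks candidate tokens for the blank purely by their
# likelihood under patterns seen in training -- no step validates the
# completion against real-world facts or domain logic.
from transformers import pipeline

# Assumes the Hugging Face `transformers` library and the public
# `bert-base-uncased` checkpoint (illustrative choices, not from the text).
fill = pipeline("fill-mask", model="bert-base-uncased")

for candidate in fill("The Great Wall of China is visible from [MASK]."):
    # Each candidate carries a probability score, not a truth value:
    # a plausible-sounding completion can still be factually wrong.
    print(f"{candidate['token_str']:>12}  score={candidate['score']:.3f}")

Nothing in this pipeline checks the completion against reality, which is why a widely repeated falsehood in the training data can outrank the truth.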
2.3 WHAT IS THE IMPACT?
The lack of trust in GenAI may lead to several risks. An IBM Institute for Business Value report lists the following potential risks:
• Increasing legal liability.
• Introducing new vulnerabilities.
• Answers not identifying data sources.
• Generating outcomes inconsistent with society's expectations.
3 https://arxiv.org/pdf/2308.14752.pdf
4 https://arxiv.org/pdf/2303.08774.pdf
5 https://arxiv.org/abs/2305.04388
6 https://arxiv.org/abs/2306.11698