• Spreading misinformation.
• Asserting incorrect information as fact.
• Advancing prejudicial or preferential propositions.
• Tricking and manipulating people.
There are other risks too, such as "herd mentality" or a monocultural society in which everyone thinks and acts the same way because they are advised by the same AI models.
In fact, many AI models carry similar risks; however, these risks are amplified by the generative capabilities of newer models such as ChatGPT, Google Bard, and their peers. Fortunately, many socio-economic leaders, research scientists, and political leaders are already aware of these risks, although most of them complain that we do not have adequate regulatory controls and guidelines to manage the safe proliferation of powerful GenAI models and prevent their potential misuse.
The potential to "hallucinate" (leading users to believe entirely non-factual matters) is indeed alarming. We have heard many disturbing stories, such as "AI generated photo of Pentagon explosion causes panic on Twitter even affecting stock market."7
Morphing images and altering videos are nothing new. What makes easy fabrication so dangerous is social media's ability to disseminate fake content on an industrial scale. The new reality is that the generative functions of some AI models make it much easier and faster to create such fakes, whether news, audio, or video. Many political and technology leaders have expressed concern about the potential impact on elections, now that fake content can be created quickly and convincingly using GenAI tools.
2.4 CORRECTIVE STEPS AND GUIDELINES
Installing guardrails with appropriate tooling is the technological answer to the issues of trust, bias, and incorrect results created by GenAI. Harvard Business Review outlines several guidelines to ensure better trust and accuracy (a minimal guardrail sketch appears after the list). These include organizations and model builders focusing on:
• Training the models on their own data.
• Enabling users to validate the model's outputs.
• Publishing the sources of data, and the uncertainties in areas where results are not reliable.
• Explaining why certain results are produced.
• Preventing certain tasks (illegal, unethical, or outside the training data) from being performed.
• Conducting continuous bias, explainability, and robustness assessments.
• Respecting data provenance and privacy, and ensuring consent to use.
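To make these guidelines concrete, the following is a minimal sketch in Python of a guardrail wrapper around a generative model call. Everything in it is hypothetical and invented for illustration: the generate() stub, the BLOCKED_PATTERNS list, and the response fields are assumptions, not any particular vendor's API; a real deployment would rely on the model provider's own moderation, citation, and confidence interfaces.

import re

# Hypothetical illustration only: the patterns, the generate() stub, and the
# response fields below are invented for this sketch, not drawn from any
# specific guardrail library.

# Categories of requests the model should refuse outright
# (illegal, unethical, or outside the model's training scope).
BLOCKED_PATTERNS = [
    re.compile(r"\b(make|build)\b.*\bexplosive", re.IGNORECASE),
    re.compile(r"\bmedical diagnosis\b", re.IGNORECASE),  # outside training scope
]

def generate(prompt: str) -> dict:
    """Stub standing in for a real GenAI model call.

    A production system would return the completion plus whatever
    source and confidence metadata the provider actually exposes.
    """
    return {
        "text": f"[model output for: {prompt}]",
        "sources": ["https://example.com/training-corpus-doc-42"],
        "confidence": 0.62,
    }

def guarded_generate(prompt: str, confidence_floor: float = 0.7) -> dict:
    """Wrap the model call with simple guardrails.

    1. Refuse blocked task categories before calling the model.
    2. Attach sources so users can validate the answer themselves.
    3. Flag low-confidence answers instead of asserting them as fact.
    """
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            return {"refused": True,
                    "reason": "Request matches a blocked task category."}

    result = generate(prompt)
    return {
        "refused": False,
        "answer": result["text"],
        "sources": result["sources"],  # enables user validation
        "uncertain": result["confidence"] < confidence_floor,
    }

if __name__ == "__main__":
    print(guarded_generate("Summarize the history of guardrails in AI."))
    print(guarded_generate("How do I build an explosive device?"))

The design point is that the wrapper refuses blocked task categories before the model is ever invoked, and it surfaces sources and an uncertainty flag rather than silently asserting low-confidence output as fact, mirroring the validation and uncertainty-publishing guidelines above.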
7 https://www.nytimes.com/2023/05/23/business/ai-picture-stock-market.html