
Responsible Generative AI
8 NEED FOR REGULATING GENAI
8.1 WHY REGULATIONS ARE NECESSARY
Malicious uses of GenAI pose major risks: promoting harmful information and beliefs, enforcing authoritarian control, supporting terrorist activities, weaponizing the military domain, and concentrating power unfairly in ways that lead to exploitation. Lethal autonomous weapons guided by AI technology can identify and execute targets without human intervention. Such robot-driven warfare can trigger irresponsible and ruthless actions, leading to rapid and massive destruction of human lives and property. There are reports that even military tactical planners34 are launching safety practices and guidelines for AI weaponry.
There are several currently visible examples, and projected future scenarios, of the harmful effects of GenAI. Without proper legal and ethical guidelines, the generative power of AI could wreak havoc on people's lives in many ways.
8.2 CURRENT EFFORTS
Government agencies in many countries and professional organizations have already initiated steps to regulate the responsible use and managed proliferation of GenAI technology.
Seven US companies met with the White House team and agreed to self-regulate their AI systems35. The points of commitment included:
• Ensuring products are safe before introducing them to the public.
• Building systems that put security first.
• Earning the public's trust.
Also noteworthy is the announcement of the “Blueprint for an AI Bill of Rights”36.
The European Commission has proposed the first EU regulatory framework for AI37. AI applications that pose an “unacceptable risk” would be banned; high-risk applications in fields such as finance, the justice system, and medicine would be subject to strict oversight.
GenAI systems, such as ChatGPT, would have to comply with transparency requirements:
• Disclosing that the content was generated by AI (see the sketch below for one way an application might surface such a disclosure).
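As an illustration only, and not a mechanism prescribed by the regulation or described in this report, the following sketch shows one way an application could implement this kind of disclosure: every model output is packaged with a human-readable notice and basic provenance metadata before it is shown to users. The generate_text() function is a hypothetical placeholder for whatever GenAI API the application actually calls, and the field names are assumptions made for the example.

# Minimal sketch (assumptions noted above): attach an AI-origin disclosure
# and provenance metadata to every generated output.

from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class DisclosedOutput:
    text: str            # the generated content itself
    disclosure: str      # human-readable notice shown alongside the content
    model_name: str      # provenance: which model produced the content
    generated_at: str    # provenance: when the content was produced (UTC)


def generate_text(prompt: str) -> str:
    """Hypothetical stand-in for a real GenAI model call."""
    return f"[model response to: {prompt}]"


def generate_with_disclosure(prompt: str, model_name: str = "example-genai-model") -> DisclosedOutput:
    """Generate content and package it with an AI-generated-content disclosure."""
    text = generate_text(prompt)
    return DisclosedOutput(
        text=text,
        disclosure="This content was generated by an AI system.",
        model_name=model_name,
        generated_at=datetime.now(timezone.utc).isoformat(),
    )


if __name__ == "__main__":
    result = generate_with_disclosure("Summarize the EU transparency requirements.")
    print(result.disclosure)
    print(result.text)

Keeping the disclosure attached to the content object itself, rather than displayed separately, makes it harder for downstream consumers to drop the AI-origin label as the content is passed along.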
34 https://defensescoop.com/2023/08/29/pentagon-to-launch-pilot-focused-on-calibrated-trust-in-ai/
35 https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/
36 https://www.whitehouse.gov/ostp/ai-bill-of-rights/
37 https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence