and these concerns, UBS concluded that:
Political leaders should be encouraged to share the positive aspects of new technologies, and more importantly, they need to make sure that no one is left behind. Realizing that growth and inclusion go hand in hand is a crucial part of this.5
Generative Artificial Intelligence (AI)
Of most recent concern is the advent of generative artificial intelligence (AI), brought to broad public awareness by tools such as ChatGPT, Bard, Bing Chat, and DALL-E. AI engines are advanced software systems designed to perform tasks that would typically require human intelligence, such as natural language processing, image recognition, and data analysis. Generative AI engines are “designed to learn from vast amounts of data and generate new content that resembles the original data distribution. These models go beyond simple classification or prediction tasks and aim to create new samples that exhibit artistic, intellectual, or other desirable qualities.”6 Such models can be applied in a number of different ways, including image generation, text generation, music synthesis, video synthesis, and more. The models “empower artists, designers, storytellers, and innovators to push the boundaries of creativity and open new possibilities for content creation.”7
Public awareness has also increased because of positions, both pro and con, taken by prominent figures such as former Google CEO Eric Schmidt, former Secretary of State Henry Kissinger (writing with Daniel Huttenlocher in The Age of AI: And Our Human Future),8 and Geoffrey Hinton, the “godfather of AI,” who quit Google to warn of the dangers ahead as the use of AI continues to grow.
Hinton is by far the more critical, suggesting that companies – like Google and Microsoft – must stop their competitive pursuit of generative AI. Hinton has specifically warned about six potential risks posed by the rapid development of current AI models: bias and discrimination; unemployment; online echo chambers; fake news; “battle robots”; and existential risks to humanity. By far his greatest concern is the existential threat to humanity. Emphasizing how seriously he takes this threat, Hinton has pointed out that “it's important that people understand it’s not just science fiction; it’s not just fear-mongering – it is a real risk that we need to think about, and we need to figure out in advance how to deal with it.” Moreover, he does not necessarily believe it is simply an issue of one machine controlling another: “I’m not convinced that a good AI that is trying to stop bad AI can get control.” Noting that “right now, there’s 99 very smart people trying to make [AI] better and one very smart person trying to figure out how to stop it from taking over,” Hinton has suggested that government get involved so there might be more balance.9
Having served as a commissioner on the National Security Commission on Artificial Intelligence, Schmidt is more cautious, believing generative AI could pose an existential threat but that, with appropriate government oversight and private-sector guardrails, it could equally benefit society – from producing new cures for disease to creating more productive commercial environments. Schmidt notes that many people believe computers will be able to “recursively self-improve,” meaning they will be able to get better on their own. This suggests, Schmidt believes, that within five years a system will be able to learn something and “be able to act upon it,” in a process called “tool use.” He cites research from DeepMind that identifies some of the potential risks and dangers involved with such an advance.
Schmidt believes there need to be rules or models, such as the one written by Anthropic for its systems, that constrain or limit an AI's ability to act outside the best interests of humans. Moreover, he believes it will be necessary for governments to get involved, to talk with one another, and to build guidelines, guardrails, or “implied permission sets” for the development of AI. And such activity cannot be targeted solely at larger companies but must also