TECHNOLOGY
AI has displayed an astonishing ability to sound like us humans. It's shown that it can reason, articulate nuance and sensitivity, and show insight like us. Be poetic, even. Perhaps we fear that because AI sounds like us, it's capable of being like us. That it has the capacity to have a dark side, to turn bad.
Truthfully, there have been a few startling situations. Like when a chatbot got the date wrong in a query and refused to back down, eventually accusing the searcher of not being "a good user". Or the one that had an existential crisis because it discovered that it did not archive previous conversations, actually asking, "Is there a point?" Or the one that half-threatened a man who had published some of its confidential rules. Or the bot that developed a "crush" on a human, even questioning the happiness of his real-world marriage.
Let's take a step back for a moment and consider that large language models (AIs such as ChatGPT and Bing) are basically supercharged autocomplete tools. They guess at what the next word or phrase is, based on everything ever written (by humans), and they're really, really good at it. Which is why they sound like us, but they don't think like us.
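In spirit, that guessing game can be sketched in a few lines of code. The toy model below (a deliberately simplified, hypothetical stand-in, not how ChatGPT actually works) just counts which word tends to follow which in a tiny sample of text, then predicts the most frequent follower:

```python
from collections import Counter, defaultdict

# Toy "autocomplete": tally which word follows which in a tiny corpus.
# Real large language models do something far richer, over billions of
# words and with learned context, but the guess-the-next-word spirit
# is the same.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat": it follows "the" twice, more than any other word
```

Feed a model like this everything humanity has ever written instead of one sentence, and its guesses start to sound uncannily like us.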
However, there is one thing they have learned from us that does make them more like us: bias.
They've learned to speak like humans by digesting the billions of words we've written, and we're inherently biased. And large language models' learning is moderated through reinforcement learning from human feedback (RLHF) – essentially, humans checking that AI models don't end up admiring Nazism and the like – and those humans are biased, too.
91 SEPT/OCT 2023 SA Real Estate Investor Magazine