
REGIONAL ROUND-UP

AFRICA

DEEPFAKES: 74% OF AFRICANS TRICKED INTO BELIEVING THEY'RE REAL

The Top Risks Report 2023 by the Eurasia Group described advances in deepfakes and the rapid rise of misinformation as 'weapons of mass disruption', and it is not far wrong. Advances in Artificial Intelligence (AI) and powerful facial recognition and voice synthesis technologies have shifted the boundaries of reality, while the recent explosion of AI-powered tools like ChatGPT and Stable Diffusion has made it harder than ever to distinguish the work of a human from that of a machine. These technologies are extraordinary and have immense positive potential but, as Anna Collard, SVP Content Strategy & Evangelist at KnowBe4 Africa, points out, they also carry significant risks for businesses and individuals.

"Apart from abusing these platforms for online bullying, shaming or sexual harassment, such as fake revenge porn, these tools can be used to increase the effectiveness of phishing and business email compromise (BEC) attacks," she said.
"These deepfake platforms are capable of creating civil and societal unrest when used to spread mis- or disinformation in political and election campaigns, and they remain a dangerous element in modern digital society. This is cause for concern and calls for greater awareness and understanding among the public and policymakers."
In a recent survey undertaken by KnowBe4 across 800 employees aged 18–54 in Mauritius, Egypt, Botswana, South Africa and Kenya, 74% said they had believed a communication via email or direct message, or a photo or video, was true when, in fact, it was a deepfake. Considering how deepfake technology uses both Machine Learning and AI to manipulate data and imagery drawn from real-world images and information, it is easy to see how they were tricked. The problem is that awareness of deepfakes and how they work is very low in Africa, and this puts users at risk.
Just over 50% of respondents said they were aware of deepfakes, while 48% were unsure or had little understanding of what they were. Although a significant percentage of respondents were not clear on what a deepfake was, most (72%) said they did not believe that every photo or video they saw was genuine, a step in the right direction, even though nearly 30% believed that the camera never lies.
"It is also important to note that nearly 67% of respondents would trust a message from a friend or legitimate contact on WhatsApp or via direct message, while 43% would trust a video, 42% an email and 39% a voice note. Any one of these could be a fake that the trusted contact did not recognise, or could come from an account that had been hacked," said Collard.
Interestingly, when asked if they would believe a video showing an acquaintance in a compromising position, even if this was out of character, most were hesitant to do so, and nearly half (49%) said they would speak to the acquaintance to get to the bottom of it. However, nearly 21% said they would believe it, and 17% believed that such a video would be impossible to fake. The response was similar when respondents were asked the same question about a video showing a high-profile person in a compromising situation, with 50% saying they would give the person the benefit of the doubt and 36% saying they would believe it.