iGB issue 138_iGB L!VE 2025 | Page 12

igamingbusiness.com
TECH AND INNOVATION

HOW MUCH OF A THREAT ARE DEEPFAKES TO THE GAMBLING SECTOR?

AI-enabled fraud and the creation of identification deepfakes pose a growing threat, but there are ways operators can better protect themselves and players, writes Conor Reynolds

As technology evolves, so do the risk factors for operators and consumers alike. The use of AI for the creation of identification deepfakes and generation of synthetic identities is a growing risk for operators, one that may necessitate greater security standardisation or expensive software to mitigate.

AI is already being used to create promotional content for illegal operators. Earlier this year, Sky News reported it had discovered an AI-generated video of some of its presenters touting gambling apps. Footage of news presenter Matt Barbet was used to make a video that purported to have him talking to another Sky News correspondent about an iPhone game they had won £500,000 on. The fake adverts were spread through social media and supported the marketing of illegal gambling sites contained within gaming applications on the Apple App Store.
AML RISK OF AI

In April, the UK’s Gambling Commission issued an update warning of the prevalence of AI deepfakes connected to emerging money laundering and terrorist financing risks.
Last year, the UK’s Joint Money Laundering Intelligence Taskforce published an amber alert on the use of AI to bypass customer due diligence checks. The UK’s National Crime Agency (NCA) took down a website last year that was offering AI-generated identity documents, such as passports or driving licences, for just $15.
The Gambling Commission has advised all operators of the need to train staff in the assessment of customer documentation for AI-generated documents.
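Alongside staff training, operators can apply simple automated integrity checks to submitted documents. One widely used example is validating the check digits in a passport's machine-readable zone (MRZ), as defined in ICAO Doc 9303; crudely fabricated documents sometimes fail these. A minimal sketch in Python (the sample field below is the ICAO specimen document number, used here purely for illustration):

```python
def mrz_check_digit(field: str) -> int:
    """ICAO Doc 9303 check digit: weights 7, 3, 1 repeat across the field.

    Digits keep their value, letters map A=10 .. Z=35, and the filler
    character '<' counts as 0. The check digit is the weighted sum mod 10.
    """
    weights = (7, 3, 1)
    total = 0
    for i, ch in enumerate(field):
        if ch.isdigit():
            value = int(ch)
        elif ch.isalpha():
            value = ord(ch.upper()) - ord("A") + 10
        elif ch == "<":
            value = 0
        else:
            raise ValueError(f"invalid MRZ character: {ch!r}")
        total += value * weights[i % 3]
    return total % 10


def mrz_field_valid(field: str, check_digit: str) -> bool:
    """True if the stated check digit matches the computed one."""
    return str(mrz_check_digit(field)) == check_digit


# ICAO 9303 specimen document number "L898902C3" carries check digit 6.
assert mrz_field_valid("L898902C3", "6")
```

A caveat worth noting: an AI-generated fake that copies a genuine document's MRZ verbatim will pass this test, so check-digit validation complements, rather than replaces, the manual document assessment the Commission is asking staff to perform.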
Threat actors and fraudsters are well versed in emerging technologies. With the prevalence of digital mediums for public and private services, synthetic identity theft has become an increasing challenge for law enforcement.
“Synthetic identity theft is a type of fraud in which genuine and fabricated personal information are blended to generate a completely new, fake identity,” Dr Michaela MacDonald, senior lecturer in law and technology at Queen Mary University of London, tells iGB.
“Alongside voice cloning, behavioural mimicry and deepfake technologies, AI-generated synthetic identities can bypass traditional Know Your Customer (KYC) systems by defeating facial recognition, exploiting support chats or spoofing voice-activated authentication.”
Research on deepfake technology from the Alan Turing Institute, published in March, said AI-enabled crime is being driven by the technology’s ability to automate, augment and vastly scale up criminal activity volumes.
That report stated: “UK law enforcement is not adequately