Acceleration of automated decision-making
The advancements in handling big data through the use of AI have changed the AML architecture. As SAS states, “Financial institutions [FIs] can replace their rules-based engines with machine learning models or use machine learning as a support system to develop models that feed into and out of the rules-based engine, bringing new intelligence to such activities as risk ranking, rule tuning and alert prioritization.”1 AI can also change the way FIs perform know your customer (KYC) activities. AI can take customer data (e.g., identity, account use and financial history) as input to calculate a risk profile, allowing FIs to identify entities that require additional scrutiny and to make better decisions regarding them.2
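The pattern SAS describes, a machine learning score feeding into a rules-based engine, can be sketched as follows. This is a minimal illustration only; the field names, weights and thresholds are hypothetical and not drawn from any vendor's product:

```python
# Hypothetical sketch: a model-derived risk score feeding a rules-based
# engine for alert prioritization. All fields, weights and thresholds
# are illustrative assumptions.

def model_score(customer):
    """Toy stand-in for a trained model: a weighted sum of risk factors."""
    weights = {"cash_intensity": 0.5, "cross_border_ratio": 0.3,
               "account_age_years": -0.1}
    return sum(weights[k] * customer.get(k, 0.0) for k in weights)

def rules_engine(customer, score):
    """Rules-based engine that consumes the model score to rank alerts."""
    alerts = []
    if customer.get("sanctions_list_match"):
        alerts.append(("sanctions_match", "high"))
    if score > 0.4:  # model output used to tune/prioritize rule firing
        alerts.append(("elevated_model_score",
                       "high" if score > 0.7 else "medium"))
    return alerts

customer = {"cash_intensity": 0.9, "cross_border_ratio": 0.6,
            "account_age_years": 1, "sanctions_list_match": False}
score = model_score(customer)
print(round(score, 2), rules_engine(customer, score))
```

In this arrangement the rules stay auditable while the model supplies the "new intelligence": analysts can still trace why an alert fired, but the ordering of the queue reflects the learned score.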
AI amplification of bias
Even with all the benefits AI brings, risks must also be considered. AI systems trained on biased data may learn historical patterns of discrimination. If biases in AI systems are not identified and mitigated before deployment into a process like KYC, social biases will be perpetuated, causing unintended consequences at scale by inaccurately assigning risk to women, people of color or other vulnerable populations. Even in AML conversations, it is important to consider the humans behind the algorithm and ensure that activities are not causing unexpected harm.
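One simple pre-deployment check, offered here as a hypothetical sketch rather than a complete fairness audit, is to compare the rate at which a model flags each demographic group and compute a disparate-impact ratio. The data and the 0.8 threshold (the widely cited "four-fifths rule") are illustrative assumptions:

```python
# Hypothetical sketch: comparing flag rates across groups before deploying
# a KYC risk model. The sample data and threshold are illustrative.
from collections import defaultdict

def flag_rates(decisions):
    """decisions: iterable of (group, flagged) pairs -> flag rate per group."""
    flagged, total = defaultdict(int), defaultdict(int)
    for group, is_flagged in decisions:
        total[group] += 1
        flagged[group] += int(is_flagged)
    return {g: flagged[g] / total[g] for g in total}

def disparate_impact(rates):
    """Ratio of lowest to highest group flag rate; < 0.8 suggests review."""
    return min(rates.values()) / max(rates.values())

decisions = [("A", True), ("A", False), ("A", False), ("A", False),
             ("B", True), ("B", True), ("B", False), ("B", False)]
rates = flag_rates(decisions)
print(rates, round(disparate_impact(rates), 2))
```

A ratio well below 0.8, as in this toy sample, would prompt investigation into whether the model is assigning risk inaccurately to one group before it is allowed to feed KYC decisions.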
Importance of accountability for AI decisions
Organizations that demonstrate responsible and ethical use of AI in a proactive manner are likely to be more commercially successful. Consumers and employees expect businesses to stand up for them and act ethically and competently: 63%3 of consumers buy or advocate for brands based on their beliefs and values. This need for consumer trust in organizations also applies to the organizational use of AI. Sixty-two percent4 of consumers place higher trust in a company whose AI interactions they perceive as ethical. Of those customers, 59%5 have higher loyalty to the company and 55%6 purchase more products and provide high ratings and positive feedback on social media.
In addition, governments worldwide are trying to balance the benefits of AI with the need to protect citizens from the potential negative consequences of AI-driven decisions. The global regulatory landscape around AI is changing fast as countries continue to publish best practices, laws and nonbinding recommendations. Model risk management (MRM) is standard practice for FIs to assess models' risk, but as models become more advanced and utilize AI/machine learning techniques, MRM policies will need to be updated to account for any additional risk or regulatory burden. For example, the European Union's (EU) proposed Artificial Intelligence Act (AI Act) is widely recognized as the first comprehensive AI framework; it would classify AI systems into multiple levels of risk and may set prescriptive requirements for each level based on the category of use.7 AI systems categorized as high-risk by the proposed EU AI Act may face strict obligations before they can be deployed.8
Across the globe , other jurisdictions are also considering AI regulation . If your organization is considering leveraging AI solutions , you will need to familiarize yourself with the pending regulations relevant to your markets and align your organizational strategy and MRM policies to the direction of the law . This proactive work will ensure that your organization is not surprised or scrambling to adhere to the regulations once they are enacted .
Trustworthy AI for automated decision-making in AML
This is where implementing trustworthy AI can help. Trustworthy AI is an AI system designed and developed with human-centricity in mind and with guardrails for safety, privacy, reliability and inclusivity, while reflecting the ethics and values of the organization employing it. Though varying from organization to organization, these principles serve as the foundation for how each organization will define its trustworthy AI system. These systems also incorporate accountability, transparency and robustness to mitigate harm.
“While AI tools can present benefits, we must also be mindful of the risks if the banks' use of AI is not properly managed and controlled. Potential adverse outcomes can be caused by poorly designed models, faulty data, inadequate testing, or limited human oversight. Banks need effective risk management and controls for model validation and explainability, data management, privacy, and security regardless of whether a bank develops AI tools internally or purchases through a third party.”
Kevin Greenfield, deputy comptroller for Operational Risk Policy, Office of the Comptroller of the Currency9
ACAMS Today September–November 2023