AI Risk Mitigation and Legal Strategies Series No. 1: Financial Services Industry

Are you a financial institution concerned about the impact of AI tools on your compliance? Or are you an AI developer looking to attract clients from financial institutions? In this article, I examine the challenges AI presents to legal compliance in the financial services industry, including compliance with key regulations such as AML, GLBA, FCRA, and ECOA. I will also provide practical solutions to help you navigate these challenges and avoid potential government investigations and consumer class action lawsuits.

A. AML Compliance

The Bank Secrecy Act ("BSA") mandates financial institutions to assist government agencies in identifying and preventing money laundering. Since 1970, the BSA has been amended several times by separate acts, including the USA PATRIOT Act of 2001. On January 1, 2021, Congress passed the Anti-Money Laundering Act of 2020 ("AMLA 2020"), which requires the Treasury to issue a rule specifying standards for testing the technology and internal processes for BSA compliance (for example, transaction monitoring systems) and perform a financial technology assessment. AMLA also increased sanctions and penalties for BSA violations. The responsibility for enforcing Anti-Money Laundering (AML) regulations is shared among multiple federal agencies, including the Financial Crimes Enforcement Network (FinCEN), the Office of the Comptroller of the Currency (OCC), the Federal Reserve, the Federal Deposit Insurance Corporation (FDIC), the Securities and Exchange Commission (SEC), and the Commodity Futures Trading Commission (CFTC).

Utilization of the AI system, particularly those that rely on machine learning, can undermine the AML compliance program because AI can produce inaccurate or unrepresentative data if the training data is biased, incomplete, or unrepresentative of real-world scenarios. Inaccurate data may lead to false positives or negatives while conducting customer due diligence, ongoing transaction monitoring, or reporting suspicious activities (SAR), potentially jeopardizing the AML compliance program.

In addition, using AI in AML compliance may prevent financial institutions from explaining and justifying their actions to regulatory authorities. On the one hand, regulatory authorities often require financial institutions to provide clear and transparent explanations for their actions and decisions related to AML compliance. On the other hand, complex AI models, such as deep learning models, may achieve higher accuracy in various tasks, but they often operate as "black boxes," making it challenging to understand and explain how they arrive at their decisions.

B. Data Security

The Gramm-Leach-Bliley Act requires financial institutions to comply with the statute's Financial Privacy and Safeguards rules.In October 2021, the Federal Trade Commission (FTC) revised the Safeguards Rule, broadening its applicability to include institutions involved in activities incidental to financial activities, as determined by the Federal Reserve Board. Most significantly, these amendments adopted detailed requirements that govern the information security programs of subject financial institutions, including encryption and multifactor authentication.

AI imposes significant challenges to compliance with data security requirements. Generative AI (GenAI) could be exploited to enable advanced phishing attacks, identity theft, fraud, and convincing deepfake content. Furthermore, GenAI models face vulnerabilities from data poisoning and input attacks. Data poisoning could result in compromised training accuracy or hidden malicious action at the training stage, while input attacks can manipulate the GenAI data environment for malicious purposes at the operating stage. In addition, current GenAI models are increasingly subject to successful "jailbreaking" attacks, which utilize sets of carefully designed prompts (word sequences or sentences) to bypass rules and filters or insert malicious data or instructions —referred to as a "prompt injection attack." These attacks could lead to corrupt GenAI operations or data breaches.

C. Fairness

Federal laws governing the utilization of data in the credit decision-making process, such as the Fair Credit Reporting Act (FCRA) and the Equal Credit Opportunity Act (ECOA), require financial institutions to make lending decisions with fairness and equality. These laws are designed to prevent discrimination in lending and ensure that individuals are treated fairly when applying for credit. Enforcement of these laws falls under the jurisdiction of the Consumer Financial Protection Bureau (CFPB), the Federal Trade Commission (FTC), and through private legal actions. Furthermore, both the CFPB and FTC possess broad authority to oversee and prevent unfair and deceptive practices (UDAAP/UDAP).

Implementation of AI may inadvertently violate those laws and expose the companies to potential government investigations. Due to incomplete or biased training data or human-influenced AI algorithm design, AI can exhibit embedded bias because of systematic discrimination favoring certain groups. GenAI exacerbates the problem of embedded bias by generating textual responses from its training data, with each part's accuracy influenced by prompts that could potentially contain human biases.

In May 2022, in response to the challenges posed by AI, the CFPB published a circular stating that the adverse action notice requirements apply to credit decisions involving “complex algorithms, including artificial intelligence or machine learning,” and therefore, a creditor cannot use AI as a defense for noncompliance with the law. CFPB further emphasized in its most recent annual report that creditors need to reevaluate their AI systems to ensure the protection of consumers’ rights under the law.

D. Countermeasures

Whether your company is a financial institution utilizing AI tools or an AI developer for financial institutions, the effective countermeasure to address the above issues is to establish a comprehensive AI governance program with the following components:

a. Conduct an initial risk and ongoing impact assessment to identify and rectify training data defects, potential biases, over/underrepresentation in output, and the effectiveness of mitigation efforts.

b. Develop comprehensive ethical guidelines and policies for detecting and rectifying bias and unfairness within AI training models to ensure strict compliance with anti-discrimination laws. Establish reporting and mitigation mechanisms to address bias in both the design and operation of AI systems.

c. Prepare an AI policy that mandates transparency in AI systems, emphasizing interpretability for decisions. The policy should also require transparency testing and open communication with board members and key stakeholders regarding the AI data handling process, as well as its associated limitations.

d. Generate a data provenance record to document data sources, evaluate prompt quality, and record the production methodologies. Maintain thorough records related to the AI system's development, testing, and performance. Develop comprehensive documentation that explains the decision-making processes of AI models, ensuring it is readily accessible for regulatory reviews.

e. Develop an AI provider due diligence, contracting, and oversight processes. Negotiate a contract that clearly defines the AI provider's responsibilities and data security obligations and safeguards the company's interests in the event of government investigations or consumer class action lawsuits.

f. Reevaluate the data privacy notice in the context of AI, specifically examining data generation, acquisition, collection, storage, security, and distribution, and make revisions where necessary to align with AI data practices.

g. Obtain explicit user consent for AI data collection and processing while maintaining transparency regarding the purposes of data use. De-identify sensitive data within AI training to minimize privacy risks and potential legal complications.

h. Revise the company’s record retention policy to address the retention period of data collected by GenAI systems.

i. Ensure your AI system complies with cybersecurity regulations and standards enforced by relevant authorities. Develop a robust incident response plan that outlines procedures for addressing and mitigating AI-related security incidents and breaches.

REFERENCES

Jania Okwechime, 2023. How Artificial Intelligence is Transforming the Financial Services Industry. https://www.deloitte.com/ng/en/services/risk-advisory/services/how-artificial-intelligence-is-transforming-the-financial-services-industry.html

Ibitola, Joseph. 2023. Why Data Quality Is The Bedrock of Effective AML Compliance. https://blog.flagright.com/post/why-data-quality-is-the-bedrock-of-effective-aml-compliance.

Shabsigh, Ghiath and Boukherouaa, El Bachir. 2023. Generative Artificial Intelligence in Finance: Risk Considerations. Fintech Notes, Note/2023/006. International Monetary Fund.

If you enjoyed my AI series and would like to receive exclusive future articles on AI, please share your email address HERE. Your privacy is important to us, and we promise to only send you our articles.

Click here for the Author’s Profile

Click here for other articles in the AI Series.

 
Previous
Previous

AI Risk Mitigation and Legal Strategies Series No. 2: Executive Order