AI Risk Mitigation and Legal Strategies Series No. 5: Explainable AI


Apple and Goldman Sachs survived a government investigation because of explainable AI. Is your company developing or using explainable AI, or a black box? Why is explainable AI a solution that helps prevent potential class action lawsuits and government actions? In this article, I discuss the legal benefits of explainable AI and recommend legal strategies for ensuring it.

1. What is Explainable AI?

According to IBM, Explainable AI (XAI) is "a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms." AI systems often rely on intricate algorithms that are not easily understandable to humans, and this difficulty in tracing how a model processes data and arrives at a prediction or decision is what creates a "black box." Explainable AI serves as a key to unlocking the black box and preventing potential legal risks.
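To make the idea concrete for technical readers, here is a minimal sketch of one basic explainability technique: using an inherently interpretable model whose individual decisions can be decomposed feature by feature. The feature names, data, and model choice below are all hypothetical and are not drawn from any case or source discussed in this article.

```python
# A minimal, hypothetical sketch of per-decision explainability:
# fit an interpretable model, then report how much each input
# feature contributed to a single decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
feature_names = ["income", "debt_ratio", "payment_history", "credit_age"]

# Hypothetical training data: 500 applicants, 4 standardized features,
# with approval driven by a known linear rule plus noise.
X = rng.normal(size=(500, 4))
y = (X @ np.array([1.5, -2.0, 1.0, 0.5]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For a linear model, coefficient * feature value is that feature's
# contribution to the log-odds of approval for this one applicant.
applicant = X[0]
contributions = model.coef_[0] * applicant
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:16s} contribution to log-odds: {c:+.3f}")
```

An output like this, retained for each decision, is the kind of record that lets a company explain a specific outcome rather than pointing at an opaque model.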

 

2. What are the Legal Benefits of Explainable AI?

a. XAI Helps Prevent Discrimination Claims

Several class action lawsuits and government actions have arisen from allegations or concerns about biased or unfair treatment by AI systems. Companies can use explainable AI to head off discrimination claims by clarifying the input features and decision-making criteria their models rely on, thereby demonstrating the models' fairness across different groups.
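As an illustration, here is a hedged sketch of one way a company might document fairness across groups: comparing approval rates by group and computing the "four-fifths" disparate-impact ratio, a rule of thumb from employment discrimination analysis that is often borrowed in other contexts. The decisions and group labels are made up.

```python
# Hypothetical sketch: compare approval rates across two groups and
# compute the disparate-impact ("four-fifths") ratio.
import numpy as np

# 1 = approved, 0 = denied, with each applicant's group label.
decisions = np.array([1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1])
groups    = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"])

rates = {g: decisions[groups == g].mean() for g in np.unique(groups)}
print("Approval rate by group:", rates)

# Lowest group rate divided by highest; a value below 0.8 is a common
# red flag under the four-fifths rule.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate-impact ratio: {ratio:.2f}")
```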

The Apple Card case highlights the importance of explainable AI in discrimination cases. In 2019, consumers filed complaints with the New York State Department of Financial Services (NYDFS), alleging that Apple Card violated the Equal Credit Opportunity Act because of gender disparities in credit limits and its reliance on a complex algorithm, often called a "black box." Apple and Goldman Sachs, the issuer of Apple Card, successfully defended their AI model by producing their creditworthiness-determination policies and the underwriting data for each consumer who complained. As a result, the NYDFS found no unlawful discrimination.

 

b. XAI Helps Prevent AI Washing Claims

Both the SEC and the FTC have recently taken a strong stand against AI washing. "AI washing" is a deceptive marketing practice that misrepresents the extent of artificial intelligence (AI) capabilities in products or services. (For an in-depth analysis of AI washing, please see my article, AI Risk Mitigation and Legal Strategies Series No. 4: AI Washing, at https://www.lklawfirm.net/blog/avoid-regulatory-risk-ai-washing-greenwashing-artificial-intelligence-ftc-sec-scrutiny .)

Explainable AI allows a company to clearly communicate how its AI systems work, including the data used, the decision-making process, and any limitations, and to validate claims about the system's AI capabilities, thereby avoiding AI washing claims.
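One practical way to support that communication is a "model card": a structured, versioned record of what the system does, the data behind it, and its limitations. The sketch below is purely illustrative; every field name and value is hypothetical.

```python
# Hypothetical "model card" sketch: a structured record a company could
# keep on file (or publish) to substantiate claims about its AI system.
import json

model_card = {
    "model_name": "credit_limit_recommender",  # illustrative name
    "version": "1.2.0",
    "task": "Recommend an initial credit limit for approved applicants",
    "training_data": "Anonymized underwriting records, 2018-2022",
    "inputs": ["income", "debt_ratio", "payment_history", "credit_age"],
    "excluded_inputs": ["gender", "race", "marital_status"],
    "limitations": [
        "Not validated for applicants with no credit history",
        "Reviewed quarterly for drift and bias",
    ],
}

print(json.dumps(model_card, indent=2))
```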

c. XAI Helps Satisfy Government Audits

Regulatory authorities often require companies to provide clear and transparent explanations of their AI-related actions and decisions to meet compliance requirements, especially in the financial services industry. Failure to provide such explanations can lead to hefty fines and reputational damage. (I discuss this requirement in more detail in my article, AI Risk Mitigation and Legal Strategies Series No. 1: Financial Services Industry, at https://www.lklawfirm.net/blog/financial-services-aml-glba-fcra-ecoa-regulatory-compliance-artificial-intelligence .) It is therefore crucial for a company to implement an explainable AI system: it ensures that the company can provide transparent explanations for AI-driven decisions and present compelling evidence when regulators require it.

 

3. Legal Strategies

To ensure the explainability of AI, a company must prepare an AI policy with the following components:

a. Mandating transparency in AI systems, emphasizing the interpretability of decisions.

b. Requiring transparency testing and open communication with board members and key stakeholders regarding the AI data handling process and its associated limitations.

c. Requiring vendors to comply with the company's explainable AI requirements.

In addition, a company should develop comprehensive ethical guidelines and policies for detecting and rectifying bias and unfairness in AI training models, ensuring strict compliance with anti-discrimination laws, and should establish reporting and mitigation mechanisms to address bias in the design and operation of AI systems. One common rectification technique is sketched below.
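On the "rectifying" side, one standard technique from the fairness literature is reweighing (Kamiran and Calders): giving each combination of group and outcome the weight it would carry if group and outcome were statistically independent, then retraining on those weights. A minimal sketch with hypothetical data:

```python
# Hypothetical reweighing sketch: weight each (group, label) combination
# by its expected count under independence divided by its observed count,
# then retrain the model with these sample weights.
import numpy as np

groups = np.array(["A", "A", "A", "A", "B", "B"])
labels = np.array([1, 1, 1, 0, 1, 0])

weights = np.empty(len(labels))
for g in np.unique(groups):
    for y in np.unique(labels):
        mask = (groups == g) & (labels == y)
        expected = len(labels) * (groups == g).mean() * (labels == y).mean()
        weights[mask] = expected / mask.sum()

print(weights)
# Most training APIs accept these directly, e.g.
# model.fit(X, labels, sample_weight=weights)
```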

 

References:

IBM. What is explainable AI? https://www.ibm.com/topics/explainable-ai

Jagati, Shiraz. May 5, 2023. AI’s black box problem: Challenges and solutions for a transparent future. https://cointelegraph.com/news/ai-s-black-box-problem-challenges-and-solutions-for-a-transparent-future

New York State Department of Financial Services. March 2021. Report on Apple Card Investigation. https://www.dfs.ny.gov/system/files/documents/2021/03/rpt_202103_apple_card_investigation.pdf

 


Please email the author at lkempe@lklawfirm.net if you have any questions.

 

 