On June 30, 2022, the California Department of Insurance released a Bulletin that places several limitations on the use of Artificial Intelligence (“AI”) and alternative data sets (“Big Data”) by the insurance industry. The Bulletin states that the Department is aware of recent allegations of racial discrimination in marketing, rating, underwriting, and claims practices by insurance companies and reminds all insurance companies of their obligation to conduct their businesses “in a manner that treats all similarly-situated persons alike.” The Bulletin goes on to describe recent examples of alleged unfair discrimination being investigated by the Department, including (1) subjecting claims from certain inner-city ZIP Codes to special scrutiny, (2) using facial recognition in claims decisions, and (3) collecting personal information that is unrelated to the risk being underwritten.
The six most significant aspects of the Bulletin are:
1. Restrictions on Both AI and Big Data: The Bulletin provides that insurance companies must avoid discrimination that can result from the use of AI, as well as Big Data, described in the Bulletin as “extremely large data sets analyzed to reveal patterns and trends.”
2. Restrictions Beyond Underwriting: Insurance companies must avoid discrimination when using AI or Big Data for underwriting, as well as marketing, rating, processing claims, and investigating suspected fraud relating to any insurance transaction that impacts California residents.
3. A Focus on Proxy Discrimination: The Department notes growing concern over the use of purportedly neutral individual characteristics as proxies for prohibited characteristics, which include sex, race, color, religion, ancestry, national origin, disability, medical condition, genetic information, marital status, sexual orientation, citizenship, primary language, and immigration status. Potential proxies listed in the Bulletin include ZIP Codes, biometrics, facial recognition, geographic data, homeownership data, credit information, education level, civil judgments, court records, consumer retail purchase history, social media, internet use, the condition or type of an applicant’s electronic device, and how a consumer appears in a photograph. This list includes many of the same inputs that the New York Department of Financial Services identified as potentially suspect in life insurance underwriting in its Circular Letter No. 1 (2019).
4. Concerns Over Lack of Actuarial Nexus: The Department likely views the above list of potentially suspect inputs as non-exhaustive, because the Bulletin goes on to state that any input used in AI models or Big Data that lacks a sufficient actuarial nexus to the risk of loss has the potential to unfairly discriminate. (A sketch of one way such a nexus screen might work appears after this list.)
5. Transparency and Explainability Requirements: The Bulletin provides that when an insurer uses complex algorithms to support a declination, limitation, premium increase, or other adverse action, it must provide the specific reason or reasons for that decision to the consumer.
6. Due Diligence Requirements: The Bulletin states that before utilizing any data collection method, fraud algorithm, or rating, underwriting, or marketing tool, insurers “must conduct their own due diligence to ensure full compliance with all applicable laws.”
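The Bulletin does not prescribe how an actuarial nexus should be demonstrated. Purely as an illustration of the kind of first-pass screen a compliance team might run, the sketch below flags a candidate rating input that correlates weakly with actual losses but strongly with a protected characteristic, the combination that suggests a potential proxy. The column names, thresholds, and `screen_input` helper are all hypothetical assumptions, not anything drawn from the Bulletin.

```python
# Hypothetical proxy/actuarial-nexus screen. Thresholds, column names, and
# data layout are illustrative assumptions, not a Department-prescribed method.
import pandas as pd

LOSS_CORR_FLOOR = 0.10   # below this, the input weakly predicts loss (assumed cutoff)
PROXY_CORR_CEIL = 0.30   # above this, the input tracks a protected class (assumed cutoff)

def screen_input(df: pd.DataFrame, feature: str,
                 loss_col: str = "incurred_loss",
                 protected_col: str = "protected_class_indicator") -> dict:
    """Flag an input that lacks actuarial nexus to loss yet correlates
    with a protected characteristic, i.e., a potential proxy."""
    loss_corr = abs(df[feature].corr(df[loss_col]))
    proxy_corr = abs(df[feature].corr(df[protected_col]))
    return {
        "feature": feature,
        "loss_correlation": round(loss_corr, 3),
        "protected_correlation": round(proxy_corr, 3),
        "potential_proxy": loss_corr < LOSS_CORR_FLOOR and proxy_corr > PROXY_CORR_CEIL,
    }
```

Pairwise correlations are only a starting point; proxy effects often emerge from combinations of inputs, so a flag here would ordinarily trigger more rigorous multivariate testing.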
Takeaways. The Bulletin is part of an emerging patchwork of state laws and regulatory pronouncements placing significant obligations on insurers’ use of AI applications and Big Data, which includes recent developments in Connecticut, Colorado, and New York, as well as guidance from the NAIC and NCOIL. Insurance companies seeking to comply with these new developments should consider taking some of the following steps:
AI/Big Data Inventory: Assembling a list of AI models and Big Data uses that could be subject to these regulations will help insurers prioritize which applications may require immediate attention.
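As a minimal sketch of what an inventory entry might capture (the fields and example values below are our assumptions, not a format the Bulletin requires):

```python
# Minimal sketch of an AI/Big Data inventory record; fields are illustrative
# assumptions, not a structure mandated by the Bulletin.
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    name: str                                    # e.g., "claims-triage-model"
    function: str                                # marketing, rating, underwriting, claims, fraud
    data_inputs: list[str] = field(default_factory=list)  # inputs the model consumes
    third_party_vendor: str | None = None        # set when externally developed
    affects_california_residents: bool = True    # Bulletin reaches CA-impacting transactions

registry: list[AIInventoryEntry] = [
    AIInventoryEntry(
        name="claims-triage-model",
        function="claims",
        data_inputs=["ZIP Code", "claim amount", "photo metadata"],
    ),
]
```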
Risk Rating: Creating an AI risk-management framework that includes a list of high-risk factors for AI and Big Data uses (e.g., use of potentially suspect inputs in underwriting algorithms). Those criteria can then be used to identify the highest risk AI and Big Data applications for review and possible mitigation.
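Building on the hypothetical inventory sketch above, a crude additive score can turn those criteria into a review queue. The factors and weights here are illustrative assumptions; the suspect-input list is drawn from the Bulletin's proxy examples.

```python
# Illustrative risk scoring over the AIInventoryEntry records sketched above;
# factors and weights are assumptions, not criteria taken from the Bulletin.
SUSPECT_INPUTS = {"ZIP Code", "credit information", "education level",
                  "social media", "facial recognition"}  # examples from the Bulletin's proxy list
HIGH_RISK_FUNCTIONS = {"rating", "underwriting", "claims"}  # adverse-action exposure

def risk_score(entry: AIInventoryEntry) -> int:
    """Higher score = earlier review. Purely a prioritization heuristic."""
    score = 3 * len(SUSPECT_INPUTS & set(entry.data_inputs))  # suspect inputs weigh most
    if entry.function in HIGH_RISK_FUNCTIONS:
        score += 2   # decisions that can trigger adverse-action duties
    if entry.third_party_vendor:
        score += 1   # less visibility into externally built models
    return score

for entry in sorted(registry, key=risk_score, reverse=True):
    print(entry.name, risk_score(entry))
```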
Mitigation: Identifying mitigation options for high-risk AI applications, including testing suspect inputs, additional transparency, human oversight of decisions, ensuring data quality, and increasing the diversity of the teams designing and operating the AI/Big Data applications.
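For testing suspect inputs, one widely used first-pass screen, borrowed from US employment-discrimination practice rather than from the Bulletin itself, is the four-fifths rule: compare favorable-outcome rates across groups. The column names below are hypothetical.

```python
# Illustrative disparate-impact screen (the "four-fifths rule"); applying it here
# is our assumption, not a test the Bulletin mandates.
import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group's favorable-outcome rate to the highest group's.
    outcome_col should be 1 for a favorable decision (e.g., policy issued), else 0."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# By convention, a ratio below 0.8 prompts closer review of the model and its inputs:
# ratio = adverse_impact_ratio(decisions, "group", "approved")
```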
Training: Conducting trainings on AI and data compliance, governance, and risk management for employees and contractors involved in designing and operating the AI/Big Data applications, as well as certain members of senior management, legal, compliance, risk, and business functions.
Governance: Creating a cross-functional AI Oversight Committee to review certain high-risk AI/Big Data applications and recommend mitigation. The Committee can also implement AI/Big Data Policies, including Guiding Principles and Codes of Conduct.
Vendor Risk Management: Many AI/Big Data applications are at least partially developed by third parties. Insurers should consider whether their diligence and contractual procedures are sufficient for vendors that are providing services that may be covered by the Bulletin.
Documentation: As the risk of regulatory exams and civil litigation over insurers’ use of AI and Big Data increases, so does the need for robust documentation of efforts to meet regulatory compliance obligations. This may include reviews of data inputs, results of model testing, assessments of actuarial risk, implementation of mitigations, records of who received training, and records of how concerns about models or particular decisions were resolved.
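One lightweight approach, sketched here with a hypothetical schema, is an append-only log of each model review so that the items above can be produced on demand during an exam:

```python
# Hypothetical review log; field names and file format are illustrative assumptions.
import datetime
import json

def log_review(model_name: str, inputs_reviewed: list[str], test_results: dict,
               mitigations: list[str], reviewer: str,
               path: str = "ai_compliance_log.jsonl") -> None:
    """Append one timestamped entry per model assessment (JSON Lines format)."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model": model_name,
        "inputs_reviewed": inputs_reviewed,
        "test_results": test_results,   # e.g., adverse-impact ratios, nexus screens
        "mitigations": mitigations,
        "reviewer": reviewer,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
```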
Examinations of Business Practices, Algorithms, and Models: The Bulletin also states that the California Department of Insurance may use market conduct examinations or Special Investigative Unit examinations to audit and examine all insurer business practices, including their marketing, rating, claim, and underwriting criteria, programs, algorithms, and models.