Artificial intelligence (AI) is in vogue. As it rapidly reshapes industries, companies are racing to integrate and market AI-driven solutions and products. But how much is too much? Some companies are finding out the hard way.
The legal risks associated with AI, especially those facing corporate leadership, are growing as quickly as the technology itself. As we explained in a recent post, directors and officers risk personal liability both for disclosing and for failing to disclose how their businesses are using AI. Two recent securities class action lawsuits illustrate the risks associated with AI-related misrepresentations, underscoring the need for management to have a clear and accurate understanding of how the business is using AI, and the importance of ensuring adequate insurance coverage for AI-related liabilities.
AI Washing: A Growing Legal Risk
Built on the same premise as "greenwashing," AI washing is on the rise. In its simplest terms, AI washing refers to the practice of exaggerating or misrepresenting the role AI plays in a company's products or services. Just last week, two more securities lawsuits were filed against corporate executives based on alleged misstatements about how their companies were using AI technologies. These latest lawsuits, much like the Innodata and Telus lawsuits we previously wrote about, serve as early warnings for companies navigating AI-related disclosure issues.
Cesar Nunez v. Skyworks Solutions, Inc.
On March 4, 2025, a plaintiff shareholder filed a putative securities class action lawsuit against semiconductor products manufacturer Skyworks Solutions and certain of its directors and officers in the US District Court for the Central District of California. See Cesar Nunez v. Skyworks Solutions, Inc. et al., Docket No. 8:25-cv-00411 (C.D. Cal. Mar. 4, 2025).
Among other things, the lawsuit alleges that Skyworks misrepresented its position and ability to capitalize on AI in the smartphone upgrade cycle, leading investors to purchase the company's securities at "artificially inflated prices."
Quiero v. AppLovin Corp.
A similar lawsuit was filed the next day against mobile technology company AppLovin and certain of its executives. See Quiero v. AppLovin Corp. et al., Docket No. 4:25-cv-02294 (N.D. Cal. Mar. 5, 2025).
The AppLovin complaint alleges, among other things, that AppLovin misled investors by touting its use of "cutting-edge AI technologies" "to more efficiently match advertisements to mobile games, in addition to expanding into web-based marketing and e-commerce." According to the complaint, these misleading statements coincided with the reporting of "impressive financial results, outlooks, and guidance to investors, all while using dishonest advertising practices."
Risk Mitigation and the Role of D&O Insurance
Our recent posts have shown how AI can implicate coverage under all lines of commercial insurance. The Skyworks and AppLovin lawsuits underscore the particular importance of comprehensive D&O liability insurance as part of any corporate risk management solution.
As we discussed in a previous post, companies may want to assess their D&O programs from several angles to maximize protection against AI-washing lawsuits. Key considerations include:
- Policy Review: Ensuring that AI-related losses are covered and not barred by exclusions such as cyber or technology exclusions.
- Regulatory Coverage: Confirming that policies provide coverage not only for shareholder claims but also for regulator claims and government investigations.
- Coordinating Coverages: Evaluating liability coverages, especially D&O and cyber insurance, holistically to avoid or eliminate gaps in coverage.
- AI-Specific Policies: Considering the purchase of AI-focused endorsements or standalone policies for additional protection.
- Executive Protection: Verifying adequate coverage and limits, including "Side A" only or difference-in-conditions coverage, to protect individual officers and directors, particularly if corporate indemnification is unavailable.
- New "Chief AI Officer" Positions: Chief information security officers (CISOs) remain vital in monitoring cyber-related risks, but they are not the only emerging positions that must fit into existing insurance programs. Though not yet a standard C-suite position, more and more companies are creating "chief AI officer" roles to manage the multi-faceted and evolving use of AI technologies. Ensuring that these positions are included within the scope of D&O and management liability coverage is essential to affording protection against AI-related claims.
In sum, a proactive approach, especially when placing or renewing policies, can help mitigate the risk of coverage denials and enhance protection against AI-related legal challenges. Engaging experienced insurance brokers and coverage counsel can further strengthen policy terms, close potential gaps, and facilitate comprehensive risk coverage in the evolving AI landscape.