Claims relating to and arising from misleading statements made by or on behalf of companies about their artificial intelligence (AI) capabilities are on the rise.
Over the past 12 months, numerous class actions have been commenced in the US by shareholders or investors against companies and their directors over misleading statements, made to entice investors, about the sophistication and effectiveness of their AI or related technologies. The companies concerned have generally concealed their reliance on manual labour, third-party tools or non-AI solutions, presumably to appear unique, fashionable and “cutting edge”. The concealment, more commonly referred to as AI washing, takes different forms but with one thing in common: a decline in share value when the masquerade is lifted, exposing the directors to the risk of claims.
Recent examples include AppLovin Corporation and Skyworks Solution Inc. AppLovin Corporation, a software-based platform that advertisers use to enhance marketing, made misleading statements to investors concerning the launch of its digital ad platform, known as AXON 2.0. The platform was described as using ‘cutting-edge AI technologies’ to match advertisements to mobile games, allowing its customer base to expand its e-commerce footprint. A report was subsequently released which concluded that the company had inflated its installation numbers through “clicks” when, in fact, a large number of installations had occurred through unwanted apps placed on customers’ devices via back-door practices. A securities lawsuit summarising these allegations was launched, following which the company’s share price plummeted. Skyworks Solution Inc. projected growth based on improvements to its AI-based smartphone capabilities; those capabilities are said to have been overstated, resulting in overinflated share values and a securities action by investors.
It is apposite to point out that, in each example, further claims could ensue from creditors and shareholders should the performance of the company continue to decline, whether arising directly or indirectly from the securities actions (including the impact on its balance sheet). The decisions made by the directors of those companies, should such a situation unfold, will invariably be brought under further scrutiny.
The news that Elon Musk’s Grok AI chatbot is facing international regulatory scrutiny has taken centre stage this month. The intervention arises from Grok’s potential to create or distribute offensive material (including the proliferation of deepfakes), and the failure of Musk’s “X” platform to protect users, including children. Regulators now have an opportunity to make an example of Grok in an effort to crack down on online safety failings, to lift the corporate veil and to hold corporates to account for unethical and illegal practices. Whilst this is an extreme, high-profile example of regulatory intervention in the AI space, it signals more generally regulators’ growing interest in automated technology and could result in earlier or more frequent regulatory intervention, leading to an increased risk of fines and greater expenditure on investigations.
Although it is fair to say that our international litigious friends are probably ahead of the curve, it is only a matter of time before such actions become commonplace in England and Wales. This emerging risk is likely to be of keen interest to insurers, given that the use of automation and technology is being welcomed and encouraged across various sectors.
Insurers will also want to take note of the greater regulatory intervention (and associated costs) that we predict is on the horizon, and the likely uptick in claims against directors as insolvency events linked to or arising from digitalisation are on the rise.
For further information on the issues discussed, please contact Amy Nesbitt.