Who is liable when AI goes wrong?

As artificial intelligence blurs the line between human and machine, it also blurs the line between who is ultimately responsible for the technology's mistakes.

Generative AI tools like ChatGPT have already faced lawsuits and legal challenges. Picture: Pixabay

Since bursting onto the scene two years ago, generative AI software has faced a steady stream of legal challenges. Media companies (including the New York Times), famous authors and music publishers have lined up to file lawsuits against large language model (LLM)-based AI tools such as OpenAI’s ChatGPT, mostly for copyright infringement.

But the copyright complaints are just the tip of the iceberg when it comes to the legal minefield that AI is ushering in, says Dan Jasnow, a partner and co-leader of the AI, metaverse and blockchain group at the US law firm ArentFox Schiff.

What happens, for example, when generative AI software gives erroneous information that harms the end user? And even where it can be established legally that the AI is to blame, who ultimately bears the responsibility: the AI developer or the smartphone maker on whose device the AI is installed?

“If I were talking to a client right now, I would say that all we can really be sure of is that we will see litigation that tries to hold the developer and hardware provider responsible for harmful responses or inaccurate responses that the LLM produces,” says Jasnow.

AI legal expert Dan Jasnow. Picture: ArentFox Schiff

He adds: “I don't think any of that means that the developers shouldn’t move forward with the technology. These are all going to be questions that are just inherent to a large-scale deployment of a new and complicated suite of technologies.”

One variable likely to affect stakeholders’ liability is the relationship between the hardware and software developers, says Jasnow. Where the AI developer and hardware maker are one and the same entity – as with the generative AI features built into Google’s recently launched Pixel 8 – the question of who bears liability is moot.

Contractual protections

But where this is not the case, tech companies can protect themselves from future litigation by ensuring that liability is allocated contractually ahead of time, says Jasnow.

“If we were representing a hardware provider that was negotiating with a third-party LLM developer, then you have a chance to negotiate contractual protections for the hardware developer,” he says. “You'd say, ‘We will agree to offer your LLM as a local option on our devices, but in exchange you have to indemnify us for any claims related to the LLM.’”

Another potential legal stumbling block arises when generative AI is used in the development of consumer tech software or hardware. A developer using generative AI to build a product could, for example, inadvertently incorporate AI-generated output that infringes another company’s intellectual property (IP), says Jasnow.

For this reason, many tech companies are proceeding cautiously, says Jasnow.

“We've seen some very sophisticated technology companies that have said to their employees that they are not allowed to use Gen AI when developing their most valuable proprietary software because they just don't think it's trustworthy enough, and there are too many risks related to harmful content, and third-party IP.”

He adds: “At this point there's no guarantee that anything you're using from these tools is going to be non-infringing.”