The New York Times has filed a lawsuit against OpenAI and Microsoft, alleging copyright infringement. The lawsuit claims that these companies used the newspaper’s content to train their artificial intelligence technology, including ChatGPT, without proper authorization or compensation.
This technology, according to the lawsuit, now directly competes with The New York Times.
The New York Times’ lawsuit is part of a broader wave of legal actions challenging the large-scale scraping of internet content to train large language models.
Creative professionals, including actors, writers, and journalists, have expressed concerns that AI could leverage their online content to create competitive chatbots and information sources without fair compensation.
This lawsuit is notable as the first major action by a news publisher against the leading AI companies, OpenAI and Microsoft. The New York Times argues that using its content to build AI products that compete with its own services threatens its ability to continue providing them.
The dispute centers on the alleged “widescale copying” of content by OpenAI and Microsoft. The New York Times contends that while other sources were also used, its content was given particular emphasis.
The newspaper seeks fair compensation and licensing terms, which it says have not been reached despite negotiations dating back to April.
OpenAI and Microsoft reportedly argue that their use of The New York Times’ content falls under “fair use,” a legal doctrine permitting limited use of copyrighted material for transformative purposes. The New York Times disputes this, arguing that because the AI models’ outputs mimic and compete with the very content used to train them, the use is not transformative and does not qualify as fair use.
The lawsuit highlights the growing tension between the development of generative AI technologies and the rights of content creators. The New York Times has taken steps to block OpenAI’s web crawler, GPTBot, from scanning its platform.
The newspaper asserts that AI tools trained on its content can generate outputs that closely mimic or even directly copy its content, sometimes attributing false information to The New York Times.
Diane Brayton, Executive Vice President and General Counsel of The New York Times, emphasized the potential of generative AI for the public and journalism. However, she insists that the development of these technologies should not undermine journalistic institutions.
The New York Times demands permission and fair valuation for the use of its work in creating AI tools.
The New York Times is pursuing unspecified monetary damages and a permanent injunction barring further alleged infringement by Microsoft and OpenAI. The newspaper also seeks the destruction of GPT models and any other AI models or training sets that incorporate its journalism.
This lawsuit represents a critical juncture in the evolving relationship between AI technology and copyright law, with potential far-reaching implications for the future of content creation and AI development.