The New York Times and OpenAI have been in negotiations for a licensing deal that would involve OpenAI paying The Times for using its stories in AI tools like ChatGPT. However, the discussions have become contentious, leading The Times to consider legal action to protect its intellectual property rights.
The New York Times is concerned that ChatGPT’s capabilities, such as answering questions using content derived from The Times’ reporting, could make it a direct competitor to the newspaper. This concern is amplified by the integration of generative AI tools into search engines like Microsoft’s Bing, which is powered by a variant of ChatGPT.
AI models gather training data from the internet without explicit permission, raising questions about copyright infringement. If violations are found, a court could order the destruction of the datasets containing the infringing articles.
What could be the potential legal outcomes?
If OpenAI is found to have violated copyrights, a court could order the destruction of ChatGPT’s dataset, OpenAI could face statutory fines of up to $150,000 for each act of infringement, and, if ordered to remove the infringing content, OpenAI might need to rebuild ChatGPT’s dataset using only content for which it has the necessary authorization.
In such a scenario, OpenAI and other AI companies would likely invoke the “fair use” doctrine as a defense.
What is fair use doctrine?
‘Fair use’ is a legal doctrine in the United States that allows limited use of copyrighted material without permission from the copyright holder, for purposes such as research, criticism, and news reporting. However, it remains uncertain whether fair use applies to the training of AI models.
Similar lawsuits include comedian Sarah Silverman’s case against OpenAI and Getty Images’ case against Stability AI (which we discussed here), both over the use of content without permission.