OpenAI disputes New York Times suit, claims it ‘manipulated’ case evidence

OpenAI, the company behind ChatGPT, said the New York Times's lawsuit over its use of the newspaper's content will not hold up in court and claimed the newspaper manipulated the chatbot to produce incriminating evidence.

The company released a blog post on Monday detailing its arguments against the lawsuit the New York Times filed in December, which seeks to hold the artificial intelligence developer accountable for including the newspaper's content in the training data used to power ChatGPT. While the New York Times lawsuit is not the first to allege copyright infringement, it is the case with the clearest evidence to date, and many copyright lawyers regard it as the best chance to set some rules for the technology.

OpenAI noted that it has met with several news outlets to discuss how ChatGPT can “support a healthy news ecosystem, be a good partner, and create mutually beneficial opportunities” for all. It cited its partnerships with Business Insider publisher Axel Springer and the Associated Press, and several other news companies have reported being in negotiations with OpenAI.

The company also argued that its use of the New York Times’s content falls under “fair use,” a legal doctrine that allows limited use of copyrighted material without permission depending on the context. It also noted that OpenAI offers a process that lets publishers opt out of having their content used to train its models.

Finally, the company argued that the New York Times’s legal team was deceptive in how it presented its evidence. For example, a key exhibit in the suit was a list of 100 examples in which part of a New York Times article was pasted into ChatGPT and the chatbot was asked to complete the story. The bot reproduced the newspaper's text in all 100 cases.


OpenAI described these reproductions as a “rare failure of the learning process” for the chatbot and said the prompts were “intentionally manipulating” the model to produce the desired result. It also argued that this approach does not reflect typical user behavior and that the New York Times was unwilling to share the examples with the company so it could fix the problem.

OpenAI faces a range of legal challenges. A literary group representing authors including Jonathan Franzen and John Grisham filed a suit against OpenAI in September over alleged copyright infringement. Sarah Silverman filed a similar suit against OpenAI and Meta over allegations that they used her memoir to train their bots.
