
> In copyright cases, typically you need to show some kind of harm.

NYT is suing for statutory copyright infringement. That means you only need to demonstrate the infringement itself, since the infringement alone is considered harm; actual harm only matters if you're suing for actual damages.

This case really comes down to the very unsolved question of whether or not AI training and regurgitation are copyright infringement, and if so, whether they're fair use. The actual ways the AI is being used are thus very relevant to the case, and totally within the bounds of discovery. Of course, OpenAI has also been engaging in this lawsuit with unclean hands in the first place (see some of their earlier discovery dispute fuckery), and they're one of the companies with the strongest "the law doesn't apply to us because we're AI and big tech" swagger.



NYT doesn't care about regurgitation. When it was doable, it was spotty enough that no one would rely on it. And now the "trick" (pasting the start of an article and having ChatGPT continue it) doesn't even work anymore.

What they want is to kill training, and moreover, prevent the loss of being the middle-man between events and users.


> What they want is to kill training, and moreover, prevent the loss of being the middle-man between events and users.

So... they want to continue reporting news, and they don't want their news reports to be presented to users in a place where those users are paying someone else and not them. How horrible of them?

If NYT is not reporting news, then NYT news reports will not be available for AIs to ingest. They can perhaps still get some of that data from elsewhere, perhaps from places that don't worry about the accuracy of the news (or that intentionally produce inaccurate news). You have to get signal from somewhere, noise alone isn't enough, and killing off the few remaining sources of signal is going to make that a lot harder.

The question is, does journalism have a place in a world with AIs, and should OpenAI be the one deciding the answer to that question?


It's easy to see a future where primary sources post their information directly online (already largely the case) and AI agents make tailored, interactive news for their users.

Sure, there may still be investigative journalism and long-form pieces, but those are hardly the money makers.

Also, just like SWEs, writers have that same "do I have a place in the future?" anxiety in the back of their heads.

The media is very hostile towards AI, and the threat is on multiple levels.


The problem is that the publishing industry seems to think their job is to print ink on paper, and they reluctantly admit that this probably also involves putting pixels on a screen.

They're hideously anti-tech and they completely ignore technological advancement when thinking about the scope of their product. Instead of investing millions of dollars in developing their own AI solutions, a New York Times answer machine, they pay those millions of dollars to lawyers and sue the people building the answer machines. It's entirely the wrong strategy, it's regressive, and yes, they are to blame for it.

The biggest bug I've observed in my life is that people think technology is its own sector when really it's a cross-cutting concern that everybody needs to be thinking about.


> prevent the loss of being the middle-man between events and users

I'm confused by this phrase. I may be misreading, but it sounds like you're frustrated, or at least cynical, about NYT wanting to preserve their business model of writing about things that happen and selling the publication. To me it seems reasonable they'd want to keep doing that, and to protect their content from being stolen.

They certainly aren't the sole publication of written content about current events, so calling them "the middle-man between events and users" feels a bit strange.

If your concern is that they're trying to prevent OpenAI from getting a foot in the door of journalism, that confuses me even more. There are so, so many sources of news: other news agencies, independent journalists, randos spreading word-of-mouth information.

It is impossible for ChatGPT to take over any aspect of being a "middle-man between events and users" because it can't tell you the news. It can only resynthesize journalism that it's stolen from somewhere else, and without stealing from others, it would be worse than the least reliable of the above sources. How could it ever be anything else?

This right here feels like probably a good understanding of why NYT wants OpenAI to keep their gross little paws off their content. If I stole a newspaper off the back of a truck, and then turned around and charged $200 a month for the service of plagiarizing it to my customers, I would not be surprised if the Times's finest lawyers knocked on my door either.

Then again, I may be misinterpreting what you said. I tend to side with people who sue LLM companies for gobbling up all their work and regurgitating it, and spend zero effort trying to avoid that bias.


> preserve their business model of writing about things that happen and selling the publication. To me it seems reasonable they'd want to keep doing that

Be very wary of companies that look to change the landscape to preserve their business model. They are almost always regressive in trying to prevent the emergence of something useful and new because it challenges their revenue stream. The New York Times should be developing their own AI and should not be ignoring the march of technological progress, but instead they are choosing to lawyer up and use the legal system to try to prevent progress. I don't have any sympathy for them; there is no right to a business model.


This feels less like changing the landscape and more like trying to stop a new neighbor from building a four-level shopping complex in front of your beach-front property while also strip-mining the forest behind.

As for whether the Times should be developing their own LLM bot, why on earth would they want that?


> prevent the loss of being the middle-man between events and users.

OpenAI is free to do its own reporting. The NY Times is nowhere near trying to prevent others from competing as a middle-man.


It’s more than middle-man, right? Like, if visits to NYT drop, then they get less ad revenue and their ability to do business goes away. On the other hand, if they demand licensing fees, then they’ll just be marginalized by other news sources anyway.


Notably absent from their complaint is any suggestion that they've been harmed by a reduction in readership as a result of OpenAI's emergence.


It sounds like the defendant would much prefer middle-men who do not have the resources to enforce copyright.



