
You don't need to predict whether it was written by an LLM; whether it came from a human or a machine makes no difference to the validity of a text. You just need to be able to extract the actual information out of it and cross-check it against other sources.

The summary that an LLM can provide is not just of one text, but of all the texts about the topic it has access to. Thus you never need to access the actual texts themselves, just whatever the LLM condenses out of them.



"just" need to "extract the actual information out of it and cross check it against other sources".

How do you determine the trustworthiness of those other sources when an ever-increasing portion of them are also LLM-generated?

All the "you just need to" responses are predicted on being able to police the LLM output based upon your own expertise (e.g., much talk about code generation being like working with junior devs, and so being able to replace all your juniors and just have super productive seniors).

Question: how does one become an expert? Yep, it's right there: experts are made through experience.

So if LLMs replace all the low-experience roles, how exactly do new experts emerge?


You're trusting the LLM a lot more than you should. It's entirely possible to skew LLMs too. (Even ignoring the philosophical question of what an "unskewed" LLM would even be.) I'm actually impressed by OpenAI's efforts to do so. I also deplore them and think it's an atrocity, but I'm still impressed. The "As an AI language model" bit is just the obvious way they're skewed. I wouldn't trust an LLM any farther than I could throw it to accurately summarize anything important.


>cross check it against other sources.

The problem comes in when 99.999999% of other sources are also bullshit.


If LLMs start writing a majority of HN comments, we won't be able to tell what is true and what isn't. HN will be noise and worthless then.


For HN and forums in general, I think this will mean disabling APIs and having strict captchas for posting.

Beyond HN, I think this will translate into video content and reviews becoming more trustworthy, even if it's just a person reading an LLM-produced script. You will at least know they cared enough to put a human in the loop. That, and reputation. More and more credibility will be assigned based on reputation, number of followers, etc. And that'll hold until each of these systems gets cracked somehow (fake followers, plausible generated videos, etc.).



