justinator | 3 months ago | on: A small number of samples can poison LLMs of any s...
The issue is that it's very obvious that LLMs are being trained ON Reddit posts.
mrweasel | 3 months ago
That's really the issue, isn't it? Many of the LLMs are trained uncritically on everything. All data is viewed as viable training data, but it's not. Reddit clearly has some good data, but most of it is probably garbage.