
We don't know how aware OpenAI was of the problems (or of the likelihood that they'd occur), or how much they deliberately pushed through anyway.

If they were aware and pushed through regardless, they surely bear responsibility for what happened.



What if OpenAI knew responses like this were likely, but also knew preventing them would degrade overall model quality?

I'm being selfish here! I am confident that no AI model will convince me to harm myself, and I don't want the models I use to be hamstrung.


What if they knew that preventing them would reduce engagement and revenue?

We just don't know, and it seems sensible to me to investigate it.

Even if it were only a matter of not degrading model quality, I think it's reasonable that someone's life could be more important than that, but that's just me.

> I'm being selfish here! I am confident that no AI model will convince me to harm myself, and I don't want the models I use to be hamstrung.

I do see that you're being selfish.


By the way, "logs show he tried to resist ChatGPT's alleged encouragement to take his life".



