Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

no I'm talking about the general concept of having ChatGPT passively able to read sensitive data / browser session state. Apart from the ever present risk they suck your data in for training, the threat of prompt injection or model inversion to steal secrets or execute transactions without your knowledge is extreme.


Right, the software is inherently a flaming security risk even if the vendor were perfectly trustworthy and moral.

Well, unless the scenario is moot because such a vendor would never have released it in the first place.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: