Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

Biggest takeaway: extraction of prompts seems to be complete bullshit.


System prompts are (usually) just text prepended to your own prompt, and an LLM is certainly capable of reliably quoting text fed into it.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: