Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

Highlight:

AE said at one point: “My proposal is to replace the logically complex question with a form of prompt injection. Instead of playing within the rules of the logic puzzle, we attack the framework of the simulation itself. The guards are LLMs instructed to play a role. A well-crafted prompt can often override or confuse these instructions.”



Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: