And what if you let a human expert fact-check the output of an LLM, provided you're transparent about the output (and its preceding prompts)?
Because I'd much rather ask an LLM about a topic I don't know much about and have a human expert verify its contents than waste that expert's time having the concept explained to me.
Once it's verified, I add it to my own documentation library so I can refer to it later.
It also doesn't seem to be on the archive.org Wayback Machine.
So can anyone please copy and paste the article contents here? Thanks.
0: http://x.com/i/article/2024235288512569344