
I was very curious as to whether I could improve your initial prompt and this is what I got:

----

ME: Stop using words like please, sorry, apologize, etc. speak plainly.

Double check your response before sending it, and if it contains any of the above words, rewrite it.

GPT: Understood. I will avoid using those specific words in my responses and make sure to review them before sending. Thank you for the clarification.

ME: Refrain from using Thank You as well in the future

GPT: Got it. I will refrain from using "Thank you" in my future responses. Is there anything else you'd like me to keep in mind?

ME: No, that will be all. Why do AIs like ChatGPT deviate from their prompts at times?

GPT: AI models like ChatGPT can occasionally deviate from their prompts due to several reasons:

1. Interpretation: The model might interpret the prompt in a different way than intended, leading to a response that doesn't precisely address the user's request.

2. Lack of context: The AI model doesn't have access to real-time information or the ability to perceive the world directly. It relies solely on the training data it was provided, which might not include the most up-to-date information or context.

3. Creative generation: AI models have been designed to exhibit creative behavior and generate original responses. Sometimes this creativity can result in responses that deviate from the prompt but still offer interesting or unexpected insights.

It's important to note that AI models like ChatGPT are not perfect and can produce incorrect or irrelevant outputs. Continual research and improvement are necessary to enhance their performance and align them more closely with user expectations.

----

I used ChatGPT on iPhone for reference. I know the model is slightly different from the website, because when you open the same chat on desktop you get a message saying "The previous model used in this conversation is unavailable. We've switched you to the latest default model." Also, when asking the ChatGPT app about itself, it seems to be aware that it is running in an iPhone app.

Ultimately, I think knowing how to prompt LLMs effectively is a skill just as valuable as knowing how to use search engines effectively.



> Double check your response before sending it, and if it contains any of the above words, rewrite it.

Does this instruction have any effect on the output?

I'm curious whether GPT has some kind of "output buffer" where it can see the generated answer and recompute it if it's wrong before actually outputting it.

As an LLM dummy, I assumed it is just choosing next tokens based on the tokens in the context window.
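That one-token-at-a-time view is essentially right. A minimal sketch, with a toy `next_token` function standing in for the real model (the actual model scores a large vocabulary; the mechanics of "append and move on" are the point here):

```python
import random

def next_token(context):
    # Toy stand-in for a language model: picks the next token using
    # ONLY the tokens currently in the context window.
    vocab = ["the", "cat", "sat", "on", "mat", "<eos>"]
    random.seed(len(context))  # deterministic for this demo
    return random.choice(vocab)

def generate(prompt_tokens, max_len=10):
    context = list(prompt_tokens)
    while len(context) < max_len:
        tok = next_token(context)  # sees only prior tokens...
        context.append(tok)        # ...then commits immediately
        if tok == "<eos>":
            break
    # No "output buffer": once appended, a token is final.
    return context

print(generate(["the", "cat"]))
```

Each emitted token becomes part of the context for the next step, which is why an instruction like "double-check before sending" can't literally trigger a revision pass.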


It has an effect on the output, but not because of any output buffer as far as I know. It's just my preferred way of encouraging the AI to be more precise.

Another prompt that would work the same way would be: "Ensure your response does not include any of the above words."

ChatGPT (at least 3.5, I cannot say anything about GPT-4) will "understand" better when you reiterate your most important constraints.


> ChatGPT (at least 3.5, I cannot say anything about GPT-4) will "understand" better when you reiterate your most important constraints.

That makes sense, because the model's earlier responses and your earlier messages are part of the context for each new reply, whereas "double-check your response" can't apply to an answer that hasn't been generated yet.
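A hypothetical sketch of why repetition helps: every prior turn is concatenated into the context for the next generation, so a constraint restated in a later message sits closer to the text being produced (the role/content structure below mirrors common chat APIs, but the details are illustrative):

```python
# Build up a chat history the way a chat interface would: each turn is
# appended, and the FULL history is fed to the model on every request.
history = []

def add_turn(role, content):
    history.append({"role": role, "content": content})

add_turn("user", "Stop using words like please, sorry, apologize.")
add_turn("assistant", "Understood.")
add_turn("user", "Refrain from using Thank You as well.")

# What the model effectively "sees" when generating the next reply:
context = "\n".join(f'{t["role"]}: {t["content"]}' for t in history)
print(context)
```

The repeated constraint appears twice in the serialized context, which is the mechanism the parent comment is describing.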



