I cleared my chat history recently, so I don't have all the prompts, but here is a recent use case I used it for:
Prompt: "Generate a react mui component that fetches a Seller's shipping credits and shows his current credit balance in the top portion. There should be a recharge button next to the credit balance. The bottom portion should show recent transactions. Use below api for fetching credit balance: <api req/resp>. Use below api for fetching the transactions: <api req/resp>
Recharge should be done with the below api: <api req/resp>
Make the component beautiful with good spacing, elevation etc. Use Grid/Card components."
Thank you for your reply. The impression I'm getting is that it is producing concrete solutions to specific problems, but not generic solutions to abstract problems - does that sound about right?
Kind of, I guess. If it's a single step, that is, modifying or generating text (could be code or anything) in a specific way, it works great, even on obscure things.
However, if the problem is stated in a way that requires it to think through a derivative of its solution, that is, generating some code that generates other code which behaves in a certain way, it fails miserably. I'm not sure why. For the problem I stated in the current thread, which it failed on, I tried multiple prompts to make it understand the problem, but unfortunately nothing worked. It's as if it can do first-level but not second-level abstraction.
I'm assuming something like tree of thought needs to be used on problems like this. GPT 'mostly' thinks in a single step.
Also, in general we try to make these models as economically efficient as possible at this point, because there is so little excess computing power to run them. You can't have one spending the next 30 minutes thinking in loops about the problem at hand.
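For what it's worth, tree of thought is essentially a search over partial "thoughts": propose several continuations of each partial solution, score them, and keep only the most promising branches. Here's a minimal runnable sketch of that control flow on a toy problem (pick digits that sum to a target), with simple heuristics standing in for the LLM calls; every name here is made up for illustration, not from any real ToT library:

```python
# Toy illustration of tree-of-thought-style search, NOT a real LLM pipeline.
# In an actual setup, an LLM would implement both propose() and score();
# here they are cheap stand-in heuristics so the control flow is runnable.

def propose(state):
    """Propose candidate next steps (an LLM would generate these)."""
    return [state + [d] for d in range(1, 10)]

def score(state, target):
    """Heuristic value of a partial solution (an LLM would judge this)."""
    s = sum(state)
    # Closer to the target is better; overshooting is pruned outright.
    return -abs(target - s) if s <= target else float("-inf")

def tree_of_thought(target, beam_width=3, depth=4):
    """Beam search over partial thoughts: expand, score, keep the best few."""
    beam = [[]]  # start from a single empty partial solution
    for _ in range(depth):
        candidates = [c for state in beam for c in propose(state)]
        # Keep only the beam_width most promising branches alive.
        candidates.sort(key=lambda s: score(s, target), reverse=True)
        beam = candidates[:beam_width]
        for state in beam:
            if sum(state) == target:
                return state
    return None

print(tree_of_thought(12))  # finds a digit list summing to 12
```

The point about compute cost above is exactly why this is expensive in practice: every propose() and score() call would be a separate model invocation, multiplied by the beam width at every depth, instead of the single forward pass a plain completion takes.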