
People waste too much time building around things that AI is really bad at right now.

Of course, nearly everything is, but instead of taking on the task of patching that up, the better approach is to assume there will be something a lot better than GPT-4 in the near future (because there will be) and design a differentiated product under that premise.



Can I assume AIs continue their training as they interact with people once deployed? Are ChatGPT and Claude learning from my interactions with them? I do, BTW, correct them when they unknowingly (I assume) steer me wrong.

One wonders, if that's the case, how quickly an AI might improve if it has something close to Google's search site throughput. I mean fielding several billion queries a day, for a year — that would be some pretty stellar training right there I would think.


They don't train as they go. Training is incredibly expensive.

They do take your feedback and presumably do something with it. Your actual queries are only indirectly useful since they might have private info in them.


> Can I assume AI are continuing their training as they interact with people when deployed?

Yes, you can. Some of the big providers are fairly clear about where in their products this happens, and all offer a way out (mostly when paying for API access).

> One wonders, if that's the case, how quickly an AI might improve if it has something close to Google's search site throughput

Indeed. Another possibility is that user input will turn out to be less and less important for upcoming state-of-the-art models.


That’s an interesting idea. What do you mean?

I understand that prompts-as-a-service is short-term… but what long-term product do you see?


I think a general way to answer this is to ask, for any domain you know: what would you pay a human to do right now that LLMs frustratingly can't, but in theory should be able to, if only they were a bit better and more consistent?

This could mean: instead of diving into langchain and trying to program your way out of a bad model, or resorting to weird prompts, just write a super clear set of instructions and wait for a model that is capable of understanding clear instructions, because that is an obvious goal of everyone working on models right now, and they are going to solve it better than your custom workaround can.

This is not a rigid rule, just a matter of proportions. For example, you should probably be willing to try a few weird interim prompt hacks if you want to get going with AI dev right now. But if most of what most people are building will probably be solved by a somewhat better model, that's cause for pause.
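
To make that concrete, here is a minimal sketch of the "just write clear instructions" approach, assuming the OpenAI Python client (openai >= 1.0); the model name, prompt, and helper function are illustrative, not a recommendation:

    from openai import OpenAI

    client = OpenAI()

    # One clear, explicit set of instructions instead of a chain of workarounds.
    INSTRUCTIONS = (
        "You are a meeting-notes assistant. Given raw notes, return a one-paragraph "
        "summary followed by a bulleted list of action items, each with an owner and "
        "a due date if one is mentioned. Do not invent facts that are not in the notes."
    )

    def summarize_notes(raw_notes: str) -> str:
        # Single call: one system prompt plus the user's text, nothing chained.
        response = client.chat.completions.create(
            model="gpt-4",  # placeholder; swap in whatever better model ships next
            messages=[
                {"role": "system", "content": INSTRUCTIONS},
                {"role": "user", "content": raw_notes},
            ],
        )
        return response.choices[0].message.content

    print(summarize_notes("Kickoff call: Alice to draft the spec by Friday, Bob to review infra costs."))

The point is that the only asset worth keeping here is the clear instruction text itself; everything around it should stay thin enough to throw away when a better model arrives.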


I suppose with an eye on open-source, an interesting 'rule' would be to set a cut-off point for models that can run locally, and/or are considered to be feasible locally soon.


I may be misunderstanding your meaning, but I'm not convinced that "prompts as a service" is short-term. I think we'll see a number of apps pop up that are essentially that, i.e. powered by a generative AI but with a great UX. Not everyone is good at prompting, and although it is a skill many will develop, packaging up great prompts in niche problem areas still looks like an area of opportunity to me. I'm not necessarily talking about chat experiences, but apps that can, for example, maintain task lists for you after consuming your incoming communications.
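
As a rough sketch of what packaging a prompt behind an app could look like (function and model names here are hypothetical, and it again assumes the OpenAI Python client): the user never writes a prompt; the app runs incoming messages through a fixed one and surfaces only the task list.

    import json
    from openai import OpenAI

    client = OpenAI()

    # The "great prompt" lives inside the app; users only ever see their task list.
    EXTRACTION_PROMPT = (
        "Extract actionable tasks from the message below. Reply with a JSON array of "
        'objects of the form {"task": ..., "due": ...}, using null when no due date '
        "is given. Reply with JSON only."
    )

    def extract_tasks(message: str) -> list:
        response = client.chat.completions.create(
            model="gpt-4",  # placeholder model name
            messages=[
                {"role": "system", "content": EXTRACTION_PROMPT},
                {"role": "user", "content": message},
            ],
        )
        # A real app would validate this output far more defensively.
        return json.loads(response.choices[0].message.content)

    # An email arrives; the app quietly turns it into tasks.
    for item in extract_tasks("Can you send the Q3 numbers to Dana by Wednesday and book the offsite venue?"):
        print(item["task"], "| due:", item["due"])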


How many times have you had trouble communicating with your spouse because of prompting issues? And how did you resolve it?

Why would you need an extra layer here?


I don't understand your comment. I was talking about apps built on LLMs where the prompts aren't given by the users, but the LLM is still an important part of the functionality.


I am trying to say that in the very near future LLMs will be smart enough not to need a middleman for prompt experimentation.


Why do PR firms and copy editors exist?



