Hacker News

Wouldn't a vector database just get you nearest-neighbors on the embeddings? How would that answer a generative or extractive question? I can see it might get you sentiment, but would it help with "tell me all the places that are mentioned in this review"?


I think the point is that you use the vector database to locate the relevant context to pass to the LLM for question answering. Here's an end-to-end example:

https://www.dbdemos.ai/demo.html?demoName=llm-dolly-chatbot


Right. You feed the text chunks (from the matched embeddings) to a generative LLM to do the extractive/summarization part.
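The flow described above (nearest-neighbor retrieval, then handing the matched chunks to an LLM) can be sketched roughly like this. This is a toy illustration, not a real vector database: the "embeddings" are bag-of-words vectors so it runs with no dependencies, and all names (`embed`, `retrieve`, the sample chunks) are made up for the example. A real system would use a learned sentence embedding and an ANN index.

```python
# Toy sketch of the retrieval step in retrieval-augmented QA.
# Real systems use learned embeddings and a vector database; here we fake
# embeddings with bag-of-words vectors so the flow is runnable standalone.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Stand-in 'embedding': a sparse bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Nearest-neighbor search: what the vector database does at query time."""
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "The hotel is near the Eiffel Tower and a short walk from the Louvre.",
    "Breakfast was cold and the coffee tasted burnt.",
    "Check-in took over an hour because the lobby was crowded.",
]
context = retrieve("tell me all the places mentioned in this review", chunks)

# The retrieved chunks are then pasted into the prompt for a generative LLM,
# which does the extractive/summarization part:
prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: list the places mentioned."
```

So the vector database never answers the question itself; it just narrows the corpus down to the few chunks worth showing the model.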



