I built semanticcache, a Go library that caches content under vector embeddings and retrieves it by semantic similarity rather than exact key matches. It’s designed for LLM applications, search systems, and other use cases where semantic similarity matters.
Features include multiple backends (in-memory LRU, LFU, FIFO, Redis), OpenAI embedding integration, vector similarity search with pluggable comparators, extensibility for custom backends and providers, type-safe generics, and batch operations.
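To make the pluggable-comparator idea concrete, here is the kind of similarity function such a comparator amounts to. This is an illustrative sketch in plain Go, not the library's own signature; check the repo for the actual comparator interface.

```go
package main

import (
	"fmt"
	"math"
)

// CosineSimilarity scores two embedding vectors in [-1, 1]; higher means
// more similar. A pluggable comparator is, roughly, a function of this shape.
// (Name and signature are illustrative, not semanticcache's API.)
func CosineSimilarity(a, b []float32) float64 {
	if len(a) != len(b) || len(a) == 0 {
		return 0
	}
	var dot, normA, normB float64
	for i := range a {
		dot += float64(a[i]) * float64(b[i])
		normA += float64(a[i]) * float64(a[i])
		normB += float64(b[i]) * float64(b[i])
	}
	if normA == 0 || normB == 0 {
		return 0
	}
	return dot / (math.Sqrt(normA) * math.Sqrt(normB))
}

func main() {
	fmt.Println(CosineSimilarity([]float32{1, 0}, []float32{1, 1})) // ~0.707
}
```

Cosine similarity is a common default for embedding vectors because it compares direction and ignores magnitude.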
You can cache responses or documents and look them up semantically, plug in custom similarity functions, and choose between in-memory storage and Redis persistence.
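To show the lookup pattern this targets, here is a toy threshold-based semantic lookup over cached entries. All names here (Comparator, entry, lookup) are hypothetical placeholders for illustration; they are not semanticcache's API, whose real types and methods are in the repo.

```go
package main

import "fmt"

// Comparator scores how similar two embedding vectors are; higher is more
// similar. Hypothetical type for illustration only.
type Comparator func(a, b []float32) float64

// entry pairs a cached value with the embedding of its key.
type entry struct {
	embedding []float32
	value     string
}

// lookup scans the cache and returns the value whose embedding scores highest
// against the query, provided the score clears the threshold. A toy linear
// scan for illustration, not the library's internals.
func lookup(entries []entry, query []float32, cmp Comparator, threshold float64) (string, bool) {
	bestScore, bestValue, found := threshold, "", false
	for _, e := range entries {
		if s := cmp(e.embedding, query); s >= bestScore {
			bestScore, bestValue, found = s, e.value, true
		}
	}
	return bestValue, found
}

func main() {
	// Dot product works as a comparator when embeddings are unit-normalized,
	// as OpenAI embeddings are.
	dot := func(a, b []float32) float64 {
		var s float64
		for i := range a {
			s += float64(a[i]) * float64(b[i])
		}
		return s
	}

	// Pretend these vectors came from an embedding provider.
	cache := []entry{
		{embedding: []float32{0.98, 0.20}, value: "cached LLM response"},
	}
	if v, ok := lookup(cache, []float32{0.95, 0.31}, dot, 0.9); ok {
		fmt.Println("semantic hit:", v)
	}
}
```

The library layers eviction policies (LRU/LFU/FIFO), a Redis backend, and embedding-provider integration on top of this basic idea.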
Repo: https://github.com/botirk38/semanticcache