This DevOps friction is exactly why I'm building an open-source "Firebase for LLMs."
The moment you want to add AI to an app, you're forced to build a backend just to securely proxy API calls—you can't expose LLM API keys client-side.
So developers who could previously build entire apps backend-free suddenly need servers, key management, rate limiting, logging, deployment... all just to make a single OpenAI call.
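Concretely, here's the sort of proxy I mean, before you've even added auth, rate limiting, or logging. A minimal sketch, assuming Express and the official openai npm package; the /api/chat route and the model name are just placeholders:

    // server-side proxy: the API key stays out of the client bundle
    import express from "express";
    import OpenAI from "openai";

    const app = express();
    app.use(express.json());

    // key is read from the server environment, never shipped client-side
    const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

    app.post("/api/chat", async (req, res) => {
      try {
        const completion = await openai.chat.completions.create({
          model: "gpt-4o-mini",          // placeholder model
          messages: req.body.messages,   // forwarded from the client
        });
        res.json(completion.choices[0].message);
      } catch {
        res.status(500).json({ error: "upstream call failed" });
      }
    });

    app.listen(3000);

And that's the trivial version; it still needs per-user auth and rate limiting before you'd dare deploy it.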
Anyone else hit this wall? The gap between "AI-first" and "backend-free" development feels very solvable.
Yeah, hit this exact wall building a small AI tool. Ended up spinning up a whole backend just to keep the keys safe. Feels like there should be a simpler way, but haven’t seen anything that’s truly plug-and-play yet. Curious to see what you’re working on.
I don't even have a product yet, though I'd love for people to work on something open source together. Also, I'm not nearly cool enough to earn a green username.
I think the friction could be reduced to almost zero through OpenAI "custom GPTs" (https://help.openai.com/en/articles/8554397-creating-a-gpt) or "Alexa skills". How much easier can it get than the user using their own OpenAI account? Of course I'd rather have them on my own website, but if we're talking complete ease of use, then I think that's a contender.
Fair point. I'm no expert in custom GPTs, but I wonder what limitations there would be beyond the obvious loss of branding and UI/UX control. Like, how far can someone customize a custom GPT (ha)? I imagine any multi-step/agentic flows might be difficult or impossible as they currently exist. It also seems like custom GPTs have been mostly forgotten, though I could well be wrong, and OpenAI could announce a big investment in them and new features tomorrow.
Probably some sort. In the meantime it doesn't exist yet, and I want it for myself. I also feel like something open source that lets you bring your own LLM provider might still be useful.
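To make the bring-your-own-provider part concrete: since many providers expose OpenAI-compatible endpoints, swapping providers can mostly come down to a base URL, a key, and a model name. A rough sketch, again with the openai npm package; the provider URL and model are just examples of what a user would supply:

    import OpenAI from "openai";

    // hypothetical user-supplied config; nothing here belongs to the app author
    const provider = {
      baseURL: "https://api.together.xyz/v1", // any OpenAI-compatible endpoint
      apiKey: process.env.LLM_API_KEY!,
      model: "meta-llama/Llama-3-70b-chat-hf",
    };

    const client = new OpenAI({ baseURL: provider.baseURL, apiKey: provider.apiKey });

    const reply = await client.chat.completions.create({
      model: provider.model,
      messages: [{ role: "user", content: "hello" }],
    });
    console.log(reply.choices[0].message.content);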