Moving an internal ML project from "a quick demo on localhost" to "deployed in production" is hard, and we think latency is one of the biggest problems. We built OneContext to solve it, and we launched today. We'd love your feedback and feature requests!


Looks super interesting! This could be really helpful for us. Will drop your team a note :)



