Hacker News

One way I've seen this done in practice is to construct an offline model that produces an initial set of posterior samples, then construct a second, online model that takes posterior samples and new observations as input and constructs a new posterior. This probably wouldn't make sense computationally in a high-frequency streaming context, but (micro)batching works fine.
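As a toy sketch of that two-stage idea: assume a one-dimensional parameter and accept a normal approximation to the offline posterior, so the "online model" reduces to a conjugate normal-normal update on each new batch. The function name and the moment-matching step are illustrative, not from the comment above.

```python
import statistics

def online_update(posterior_samples, new_obs, obs_sigma):
    """Take posterior samples from the offline model plus a new batch of
    observations, and return an updated (approximate) posterior."""
    # Moment-match a normal to the offline posterior samples
    # (this approximation is the price of not re-running the full model).
    mu0 = statistics.mean(posterior_samples)
    sigma0 = statistics.stdev(posterior_samples)

    # Conjugate update for a normal mean with known observation noise:
    # combine prior precision with the data precision of the new batch.
    n = len(new_obs)
    ybar = statistics.mean(new_obs)
    precision = 1 / sigma0**2 + n / obs_sigma**2
    mu_new = (mu0 / sigma0**2 + n * ybar / obs_sigma**2) / precision
    sigma_new = precision ** -0.5
    return mu_new, sigma_new

# Offline posterior samples centered near 1.0; two new observations near 2.1
mu_new, sigma_new = online_update([0.9, 1.1, 1.0, 0.8, 1.2],
                                  [2.0, 2.2], obs_sigma=1.0)
```

The updated mean lands between the old posterior mean and the new batch mean, and the posterior narrows, which is the batched-streaming behavior described above. In practice the approximation step would be richer (e.g. a mixture or a kernel density over the samples), since a single Gaussian can badly misrepresent a multimodal posterior.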

I've seen lots of other approaches proposed for this over the years; here's a recent Stan forum thread with some links: https://discourse.mc-stan.org/t/updating-model-based-on-new-...



