
Google, at one point, had component versioning that was not just "build everything from the latest commit". Libraries within the tree would get tagged releases, and everything else would build from the latest tag of those libraries.
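
For illustration, the difference between the two policies comes down to which revision dependents resolve a library to. A rough sketch of that resolution choice (the data model and names here are hypothetical, not Google's actual build tooling):

    # Hypothetical sketch of the two resolution policies described above;
    # this does not reflect Google's actual build tooling.

    def resolve_revision(library, policy):
        """Pick the revision of `library` that dependents build against."""
        if policy == "head":
            # Build-from-HEAD: consumers pick up the latest commit immediately.
            return library["head"]
        if policy == "latest_tag":
            # Tagged releases: consumers only see the most recent tagged
            # release, so commits can land on HEAD without breaking them.
            return max(library["tags"], key=lambda t: t["number"])["commit"]
        raise ValueError(f"unknown policy: {policy}")

    libfs = {
        "head": "commit_1234",
        "tags": [
            {"number": 41, "commit": "commit_1190"},
            {"number": 42, "commit": "commit_1207"},
        ],
    }

    print(resolve_revision(libfs, "latest_tag"))  # commit_1207
    print(resolve_revision(libfs, "head"))        # commit_1234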

This practice was abandoned, but I don't know the reasoning behind that decision.



People hated that they couldn't make atomic changes across components. Google's monorepo means everybody has to move in lock-step, which is bad for everybody:

* Library maintainers must make sure they don't introduce regressions for any user at all. There's no major version number you can increment to signal that something has changed. Development necessarily slows.

* Library users must deal with breakage in any library they use, and breakage can happen at any time because everybody effectively builds from HEAD. There are complicated systems in place for tracking ranges of bad CL numbers (a rough sketch of that kind of check is below).

The monorepo isn't entirely to blame for this, but it certainly doesn't help. I've been at Google for 15 years and I'm tired of it.
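
To make the bad-CL-range point concrete: the check itself is just interval containment over CL numbers, though the real tracking systems around it are far more involved. A minimal sketch with made-up data:

    # Minimal sketch of a bad-CL-range check: is the CL a binary was built
    # at inside any known-bad range? The ranges below are made up.

    BAD_CL_RANGES = [
        (1_000_200, 1_000_450),  # regression landed, later rolled back
        (1_002_731, 1_002_731),  # a single bad CL
    ]

    def is_affected(build_cl, bad_ranges=BAD_CL_RANGES):
        """Return True if build_cl falls inside any known-bad CL range."""
        return any(lo <= build_cl <= hi for lo, hi in bad_ranges)

    print(is_affected(1_000_300))  # True
    print(is_affected(1_005_000))  # False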


Question: Doesn't the same thing apply to managed services?

Let's say you want to make a change to the filesystem. You can change the client libraries today, but old client libraries are going to be in production for weeks, or longer. Your filesystem service has to stay backward compatible with weeks or months of old client libraries.
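
For illustration (hypothetical field names, not any real filesystem API): the server keeps accepting whatever the oldest deployed client still sends, typically by giving new request fields defaults that preserve the old behavior.

    # Hypothetical illustration of server-side backward compatibility: the
    # service still accepts requests from old client libraries that predate
    # a newly added field. All field names here are made up.

    import hashlib

    def handle_read(request: dict, data: bytes) -> dict:
        # Old clients never send "checksum_mode", so default to the
        # pre-existing behavior (no checksum) instead of rejecting them.
        response = {"data": data}
        if request.get("checksum_mode", "none") == "sha256":
            response["checksum"] = hashlib.sha256(data).hexdigest()
        return response

    # A request from a weeks-old client (no "checksum_mode") still works:
    print(handle_read({"path": "/data/log"}, b"hello"))
    # A new client opts in to the new field:
    print(handle_read({"path": "/data/log", "checksum_mode": "sha256"}, b"hello"))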


Yes, and this is a good thing! If you're building infrastructure that people are going to rely on, it shouldn't change very much, and its interface certainly shouldn't change without a very good reason.

Another problem Google has is that people feel the need to change things in order to get promoted.


Regarding tracking bad CL ranges: ecosystems outside Google that use versioned packages have the same requirement. If some version of a package you depend on has a bug, you might catch it yourself if you're lucky, but more likely you won't, so you need tools that centrally track known-bad versions and check whether your systems are affected. Package repositories support removing versions that are known to be bad for the same reason. Most of the attention in this area is on security-related bugs right now, but those are really just a sub-category of the overall problem.

I don't think the bad-versions tracking outside Google is any less complicated than the bad-CL-ranges tracking inside Google.
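
A minimal sketch of that kind of check on the versioned-package side, using Python's packaging library against a hand-written advisory list (the package names and bad ranges are made up):

    # Minimal sketch of checking installed dependencies against known-bad
    # version ranges, in the style of an advisory feed. Requires the
    # third-party "packaging" library; the advisories below are made up.

    from packaging.specifiers import SpecifierSet
    from packaging.version import Version

    KNOWN_BAD = {
        "somelib": SpecifierSet(">=2.3.0,<2.3.4"),  # data-corruption bug
        "otherlib": SpecifierSet("==1.9.1"),        # bad release, since yanked
    }

    def affected(installed):
        """Return names of installed packages whose version is known-bad."""
        return [
            name
            for name, version in installed.items()
            if name in KNOWN_BAD and Version(version) in KNOWN_BAD[name]
        ]

    print(affected({"somelib": "2.3.2", "otherlib": "2.0.0"}))  # ['somelib']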


Your frustration has already been addressed in danluu's article, under the heading "Cross-project changes".

I believe he said he wrote the article to avoid repeating the same conversation over and over.



