
It's 'easy' to do with rsync and Make as long as you don't have many dependencies. Once you start pulling in dependencies, you have to figure out how to get them installed, hopefully before you roll out code that uses them. If the dependencies are small, you can pull them into your source tree (more or less) and deploy them that way, but that may make tracking upstream harder. (On the other hand, a lot of things I personally used that way were more "pull in this probably abandoned thing that's close to what we need and modify it quite a bit to fit our needs"; there was no need or desire to follow upstream after that.)


rsync+make is the deploy-and-run step.

But this assumes the tree that rsync pushes is already static and complete.

Handling dependencies is done in a build process, upstream.

Fetching dependencies at deployment time is a direct path to deployment failures (especially if the dependency management relies on external resources or non-locked versioning).


Just curious, would your deployment method be any different today? If so, how?


If I were going to do it again, I would probably work on a user management script a lot sooner (you don't really need it, but it would have been nice, and the earlier you do it, the fewer crufty differences need to be tolerated). I'd also make it build on one host and deploy to many, or at least push from the workstation to the colo once and fan out from there; that would have been nice to have around 100 hosts instead of much later.

Of course, today, my fleet is down to less than ten hosts, depending on exactly how you count, and I'm usually the only user, so I can do whatever.



