
I disagree. Prefer integration tests to unit tests wherever possible. If your tests run fast - which most integration tests should be able to do - and you run them often, there is no downside: your tests run fast, and because you run them often you always know what broke - the last thing you changed.

Fixtures done right ensure that everyone starts with a good standard setup. The question is WHAT state the fixture sets up. I have a fixture that sets up a temporary data directory with nothing in it - you can set up your own state, but everything will read from that temporary data directory.
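A minimal sketch of that kind of fixture in pytest - assuming the code under test reads its data directory from an environment variable (the `APP_DATA_DIR` name here is hypothetical, not from the original comment):

```python
import pytest


@pytest.fixture
def data_dir(tmp_path, monkeypatch):
    # Point the (hypothetical) APP_DATA_DIR variable at an empty temp
    # directory; every test starts clean and adds only the state it needs.
    monkeypatch.setenv("APP_DATA_DIR", str(tmp_path))
    return tmp_path


def test_starts_empty(data_dir):
    # Each test gets a fresh, empty directory to read from and write to.
    assert list(data_dir.iterdir()) == []
```

Because `tmp_path` is unique per test, tests can't leak state into each other through the data directory.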

Unit tests do have a place, but most of us are not writing code with a strong, well-defined interface that we can't change. As such they don't add much value, since changes to the code also imply changes to the code that uses it. When some algorithm is used in a lot of places, unit test it well - you wouldn't dare change it anyway - but when the algorithm is specific to the one place that calls it, there is no point in a separate test for it, even though you could. (There is a lot of grey area in the middle where you may write a few unit tests but trust the comprehensive integration tests.)

> Worst case verify that it called some mocks with x,y,z.

That is the worst case, to be avoided if at all possible (sometimes it isn't): that a function is called is an implementation detail. Nobody cares. I've seen too many tests fail because I decided to change a function signature and now there is a new parameter that every test needs to be updated to expect. Sometimes this is your only choice, but mock-heavy tests are a smell in general, and that is really what I'm against. Don't test implementation details; test what the customers care about is my point, and everything else follows from that (and where you have a different approach that follows from that, it may be a good thing I want to know about!)
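A small illustration of the difference - the `build_report`/`repo` names are hypothetical, just to show the two styles of assertion side by side:

```python
from unittest.mock import Mock


def build_report(repo):
    # Hypothetical code under test: fetches rows, returns a summary.
    rows = repo.fetch()
    return {"total": sum(rows)}


def test_implementation_detail():
    repo = Mock()
    repo.fetch.return_value = [1, 2, 3]
    build_report(repo)
    # Brittle: pins the exact call signature. The moment fetch() grows a
    # parameter, this test fails even though the behavior is unchanged.
    repo.fetch.assert_called_once_with()


def test_observable_behavior():
    repo = Mock()
    repo.fetch.return_value = [1, 2, 3]
    # Robust: asserts what callers actually see, not how it was computed.
    assert build_report(repo) == {"total": 6}
```

The second test survives refactors of `build_report`'s internals; the first breaks on any signature change.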



I guess it depends a bit on what you work on. Lately I'm working on algorithm-heavy stuff, where testing input => output is much more valuable than testing whether things run - it's easier to cover edge cases when calling the function directly, etc. But if you're building a CRUD app it's often more useful to test a flow. So it depends.
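For algorithm-heavy code, the input => output style looks something like this - a hypothetical moving-average function, where edge cases (empty input, window of 1) are trivial to enumerate when you call it directly:

```python
def moving_average(xs, window):
    # Average over a trailing window, clamped at the start of the list.
    if window <= 0:
        raise ValueError("window must be positive")
    return [sum(xs[max(0, i - window + 1):i + 1]) / min(i + 1, window)
            for i in range(len(xs))]


# Direct input => output assertions, including the edge cases:
assert moving_average([], 3) == []
assert moving_average([5], 1) == [5.0]
assert moving_average([1, 2, 3, 4], 2) == [1.0, 1.5, 2.5, 3.5]
```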

As for mocks I don't disagree, hence calling it worst case.

What often works for me is separating the code. For instance, if I have a function that first queries the db and then marshals that data into something, it's often easier to test by splitting it: one function that queries, which can be tested with some db fixtures or other setup; another that takes a model in, does only the pure logic, and returns the result - that one can be tested separately; and a third function, the new one, that just calls the first and passes the result into the second. It can be boilerplaty, so again, it depends.
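A sketch of that three-way split - the `orders` schema and function names are hypothetical, just to show the shape:

```python
def fetch_orders(db, user_id):
    # Thin query layer: test this with db fixtures or a stub connection.
    return db.execute("SELECT amount FROM orders WHERE user_id = ?", (user_id,))


def summarize_orders(rows):
    # Pure logic: testable with plain data, no db needed.
    amounts = [amount for (amount,) in rows]
    return {"count": len(amounts), "total": sum(amounts)}


def order_summary(db, user_id):
    # The boilerplate glue: just composes the two.
    return summarize_orders(fetch_orders(db, user_id))


# The pure part gets direct input => output tests:
assert summarize_orders([]) == {"count": 0, "total": 0}
assert summarize_orders([(10,), (25,)]) == {"count": 2, "total": 35}
```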



