To me, fixtures are a code smell. If you need that much common setup to test your application, the code under test is doing too much. It's unfortunately quite common in Rails or Django projects: you need to pass the Foo model to your function, but it will look up foo.bar.baz, so you need to wire those up as well, which in turn need further models. And of course everything also talks to the database.
Instead, if you're able to decouple the ORM from your application with a separate layer, and pass plain objects around (not fat DB-backed models), you're much freer to write code that's "pure": this input gives that output. For tests like these you only need to create whatever data structure the function wants, then verify the output. Worst case, verify that it called some mocks with x, y, z.
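For instance, a minimal sketch of that style in Python. The `Invoice` model and the discount rule are made up for illustration, not from any real codebase; the point is just that the function takes plain data and the test never touches a database:

```python
from dataclasses import dataclass

# Hypothetical plain-data object standing in for a fat DB-backed model.
@dataclass
class Invoice:
    subtotal: float
    loyalty_years: int

def apply_discount(invoice: Invoice) -> float:
    """Pure logic: the same input always gives the same output, no DB needed."""
    rate = 0.05 if invoice.loyalty_years >= 3 else 0.0
    return round(invoice.subtotal * (1 - rate), 2)

# The test just builds the data structure and checks the output:
assert apply_discount(Invoice(subtotal=100.0, loyalty_years=5)) == 95.0
assert apply_discount(Invoice(subtotal=100.0, loyalty_years=1)) == 100.0
```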
In theory you're 100% right: a true unit test is completely isolated from the rest of the system, and then a lot of these problems disappear.
In reality, that isn't free either. It imposes some restrictions on the code. Sometimes being pragmatic and backing off from the ideal leads to faster development and quicker delivery of value to the users. Rails is big on these pragmatic tradeoffs. The important thing is that we know when and why we're making the tradeoff.
Usually I go with the Rails defaults, and usually it's not a problem. Sometimes, when the code is especially complex and perhaps on the critical path, I turn up the purity dial and go down the road you describe, exactly for the benefits you describe.
But when I decide that sticking to the defaults is the right tradeoff, I want to get the most out of it and use fixtures (or factories) in the optimal way.
Yup, so I'm not against fixtures per se; they have their uses and can be a pragmatic choice. I just often don't like it when I have to use them, because it's often to patch over something else. But things are never perfect.
Right, let's add more layers; that always solves everything.
No language or abstraction is perfect, but if someone prefers pure functional coding, Rails and Django are just not it; don't try to make them be. Others like 'em just as they are.
If you have to pass data up and down, back and forth along the chain, and rely on mutation, then sure, yes. That's ball-of-yarn in a nutshell. But there is such a thing as reductio ad absurdum in software.
Nevertheless, I've found far more God classes that could be refactored into clean layers than the other way around, specifically in the context of Rails-style web apps, as GP is specifically discussing. Batteries-included doesn't necessarily require large, tangled God classes. One can just as well compose a series of layers into a strong default implementation that wraps complex behavior while allowing one to bail out and recompose with the necessary overrides, for example reasonable mocks in a test context.
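A hedged sketch of what that "strong default, recomposable in tests" shape might look like in Python. All the names here (`Signup`, `SmtpMailer`, `RecordingMailer`) are hypothetical:

```python
class SmtpMailer:
    """Default production layer; would do a real SMTP call."""
    def send(self, to: str, body: str) -> None:
        raise NotImplementedError("network I/O, not wanted in tests")

class RecordingMailer:
    """Test-context override: records messages instead of sending them."""
    def __init__(self):
        self.sent = []
    def send(self, to: str, body: str) -> None:
        self.sent.append((to, body))

class Signup:
    # The default composition wraps the complex behavior...
    def __init__(self, mailer=None):
        self.mailer = mailer or SmtpMailer()
    def register(self, email: str) -> None:
        self.mailer.send(email, "Welcome!")

# ...but a test can bail out and recompose with a reasonable mock:
mailer = RecordingMailer()
Signup(mailer=mailer).register("a@example.com")
assert mailer.sent == [("a@example.com", "Welcome!")]
```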
Of course this could then allow one to isolate and test individual units easily, and circle back with an integration test of the overall component.
I disagree. Prefer integration tests to unit tests wherever possible. If your tests run fast (which most integration tests should be able to do) and you run them often, there is no downside: your tests run fast, and since you run them often you always know what broke, namely the last thing you changed.
Fixtures done right ensure that everyone starts with a good standard setup. The question is WHAT state the fixture sets up. I have a fixture that sets up a temporary data directory with nothing in it: you can set up your own state, but everything will read from that temporary data directory.
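Roughly what that could look like, sketched with only the stdlib (in pytest it would more likely be a fixture built on `tmp_path` and `monkeypatch`; the `DATA_DIR` env var and file names are assumptions for illustration):

```python
import json
import os
import tempfile

def make_data_dir() -> str:
    """Standard setup: an empty temp dir that all code under test reads from."""
    path = tempfile.mkdtemp()
    os.environ["DATA_DIR"] = path  # code under test resolves paths via this
    return path

# A test sets up its own state inside that directory, then exercises the code:
d = make_data_dir()
with open(os.path.join(d, "record.json"), "w") as f:
    json.dump({"id": 1}, f)
with open(os.path.join(os.environ["DATA_DIR"], "record.json")) as f:
    assert json.load(f) == {"id": 1}
```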
Unit tests do have a place, but most of us are not writing code with a strong, well-defined interface that we can't change. As such, unit tests don't add much value, since changes to the code also imply changes to the code that uses it. When some algorithm is used in a lot of places, unit test it well; you wouldn't dare change it anyway. But when the algorithm is specific to the one place that calls it, there is no point in a separate test for it, even though you could write one. (There is a lot of grey area in the middle, where you may write a few unit tests but trust the comprehensive integration tests.)
> Worst case verify that it called some mocks with x,y,z.
That is the worst case, to be avoided if at all possible (sometimes it isn't possible). That a function is called is an implementation detail; nobody cares. I've seen too many tests fail because I decided to change a function signature, and now there is a new parameter A that every test needs to be updated to expect. Sometimes this is your only choice, but mock-heavy tests are a smell in general, and that is really what I'm against. Don't test implementation details; test what the customers care about, is my point, and everything else follows from that (and where you have a different approach that follows from that, it may be a good thing I want to know about!).
I guess it depends a bit on what you work on. Lately I'm working on algorithm-heavy stuff, where testing input => output is much more valuable than testing whether things run: it's easier to cover edge cases when calling the function directly, etc. But if you're building a CRUD app, it's often more useful to test a flow. So it depends.
As for mocks, I don't disagree, hence calling it the worst case.
What often works for me is separating the code. For instance, if I call a function that first queries the DB and then marshals that data into something, it's often easier to test by splitting it: one function that does the query, which can be tested with some DB fixtures or other setup; another that takes a model in, does only the pure logic, and returns the result, which can be tested separately; and a third function, the new one, that just calls the first and passes the result into the second. This can be boilerplatey, so again, it depends.
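That three-function split, sketched in Python (the names and the dict standing in for a DB are hypothetical):

```python
def fetch_user(db, user_id):
    """Impure half: only does the query. Test with fixtures or a test DB."""
    return db[user_id]

def summarize(user):
    """Pure half: model in, result out. Testable with plain dicts, no DB."""
    return f"{user['name']} ({user['age']})"

def user_summary(db, user_id):
    """Thin composer: just calls the first and passes the result to the second."""
    return summarize(fetch_user(db, user_id))

# The pure part is trivially testable:
assert summarize({"name": "Ada", "age": 36}) == "Ada (36)"
# And the composer, with a dict standing in for the DB:
fake_db = {1: {"name": "Ada", "age": 36}}
assert user_summary(fake_db, 1) == "Ada (36)"
```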
People really use fixtures to simulate internal code? I thought they were overwhelmingly used to simulate external API responses, or weird libraries that need some context switching (and in that case, a piece of advice: NIH syndrome is sometimes _very_ valid, and the library you use isn't always worth the time you put into "fixing" it: just rewrite the damn thing).
[edit] Though in my case we do have one fixture that loads a JSON representation of our dev DynamoDB into moto, so we mock internal data, but that data is still read through our data models; it doesn't really replace internal code, only internal "mechanics".
Simulating an external API is the responsibility of a test double of some sort, not a fixture. Fixtures often set up the test doubles with test data, but they are not the test double itself. Fixtures can set up other things as well (the line between factories and fixtures is blurry).