I don't think an algorithm that relies on this is particularly well designed. Anything that trusts that a float is non-zero is probably doing some kind of division where division by zero must be avoided. In that case you should explicitly check for the zero condition instead of relying on the semantics of real numbers, since floats are not real numbers.
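A minimal sketch of that explicit check in C (the function name and fallback value here are mine, purely illustrative):

```c
#include <stdio.h>

/* Illustrative only: guard the zero case explicitly instead of
   trusting that x != y implies (x - y) != 0.0. */
double safe_ratio(double z, double x, double y) {
    double d = x - y;
    if (d == 0.0) {
        /* Underflow (or x == y) landed us here; decide explicitly. */
        return 0.0; /* or signal an error, whatever the application needs */
    }
    return z / d;
}

int main(void) {
    printf("%g\n", safe_ratio(1.0, 3.0, 2.0)); /* 1 */
    printf("%g\n", safe_ratio(1.0, 2.0, 2.0)); /* 0: guarded path */
    return 0;
}
```

Whether the guarded path returns a sentinel or signals an error is an application decision; the point is only that the zero case is handled by an explicit branch rather than by trusting `x != y`.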
That’s precisely the misapprehension that makes folks think that -ffast-math is fine. Each and every floating point number is an exact quantity: a real mathematical object that identifies an exact value. Floats just have limited precision, so the result rounds (think: snaps) to the nearest representable number after each computation. You might not use them that way, but that doesn’t mean others shouldn’t.
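To make the "exact value, then snap" behavior concrete, a small C sketch:

```c
#include <stdio.h>

int main(void) {
    /* The literal 0.1 snaps to the nearest representable double at
       parse time; that double is an exact rational value. */
    printf("%.20f\n", 0.1);       /* 0.10000000000000000555... */

    /* Each operation computes the exact result, then rounds it to
       the nearest representable double. */
    printf("%.20f\n", 0.1 + 0.2); /* 0.30000000000000004441... */
    printf("%d\n", 0.1 + 0.2 == 0.3); /* 0: they snapped differently */
    return 0;
}
```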
For example, floating point without subnormals can represent two distinct numbers whose difference is smaller than the smallest number it can represent, so the subtraction rounds to zero. Subnormals fix that specific case (with gradual underflow, x - y rounds to zero only when x == y), but they don't eliminate underflow; they just move its impact to smaller magnitudes that are less likely to be seen.
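A quick C illustration under default IEEE semantics, where gradual underflow keeps the difference nonzero (FTZ, below, would flush it):

```c
#include <stdio.h>
#include <float.h>

int main(void) {
    double x = DBL_MIN;       /* smallest normal double, ~2.2e-308 */
    double y = 1.5 * DBL_MIN; /* a nearby but distinct double */
    double d = y - x;         /* exact difference is 2^-1023: subnormal */

    printf("x != y: %d\n", x != y);      /* 1 */
    printf("d = %g\n", d);               /* ~1.11e-308 with gradual underflow */
    printf("d == 0.0: %d\n", d == 0.0);  /* 0 under IEEE; 1 if FTZ is on */
    return 0;
}
```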
Look, -ffast-math isn't always fine, but specifically I'm talking about DAZ/FTZ being enabled, possibly non-deterministically and process-wide (which is bad, don't get me wrong!). Part of why this doesn't faze people is that in practice, most programs don't care and the observable effects don't lead to bugs.
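For concreteness, a minimal x86-64 C sketch that flips those MXCSR bits by hand (the same bits that GCC's crtfastmath.o startup code sets process-wide when a binary is linked with -ffast-math):

```c
#include <stdio.h>
#include <float.h>
#include <xmmintrin.h>   /* _MM_SET_FLUSH_ZERO_MODE (SSE) */
#include <pmmintrin.h>   /* _MM_SET_DENORMALS_ZERO_MODE (SSE3) */

int main(void) {
    /* Enable FTZ (flush subnormal results to zero) and DAZ
       (treat subnormal inputs as zero) for this thread's SSE math. */
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
    _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);

    volatile double x = DBL_MIN;       /* volatile keeps the compiler from */
    volatile double y = 1.5 * DBL_MIN; /* folding the subtraction away     */
    double d = y - x;                  /* subnormal result, flushed to 0   */

    printf("x != y: %d, x - y == 0.0: %d\n", x != y, d == 0.0);
    return 0;
}
```

Run on x86-64, the two unequal doubles now subtract to exactly zero, which is the whole hazard with `z/(x-y)`.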
Perhaps a better example than `z/(x-y)` is `log(x-y)`. Unlike division by 0.0, `log(0.0)` often raises an immediate error (Python's `math.log(0.0)` throws, and C's `log(0.0)` returns -inf and flags a pole error), whereas `log(5e-324)` is a finite — and meaningful! — result.
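A small C demonstration; with FTZ enabled as in the previous sketch, an underflowed `x - y` would hand `log` exactly the 0.0 case:

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    double tiny = 5e-324;  /* parses to the smallest subnormal, 2^-1074 */

    printf("log(5e-324) = %f\n", log(tiny)); /* about -744.44: finite, meaningful */
    printf("log(0.0)    = %f\n", log(0.0));  /* -inf, with the pole error flagged */
    return 0;
}
```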