Possibly only a partial sin, since I use the Moore-Penrose pseudoinverse and L2 ridge regularization. This sin gets committed as a consequence of committing the next one.
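A minimal sketch of what I mean (the data, seed, and ridge strength `lam` are made up for illustration): the pseudoinverse-of-the-normal-equations route versus an equivalent augmented least-squares solve that never forms `A.T @ A`.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))
b = rng.standard_normal(100)
lam = 0.1  # hypothetical ridge strength

# Ridge via the Moore-Penrose pseudoinverse of the regularized
# normal-equations matrix: x = (A^T A + lam*I)^+ A^T b
x_pinv = np.linalg.pinv(A.T @ A + lam * np.eye(A.shape[1])) @ (A.T @ b)

# Equivalent, better-conditioned route: solve the augmented
# least-squares problem [A; sqrt(lam)*I] x ~= [b; 0] without forming A^T A
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(A.shape[1])])
b_aug = np.concatenate([b, np.zeros(A.shape[1])])
x_lstsq, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)

print(np.allclose(x_pinv, x_lstsq))  # the two agree for a well-scaled A
```

For well-scaled problems the two solutions coincide; the augmented form is the one that avoids the squared conditioning discussed under the next sin.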
> 2. Forming the Cross-Product Matrix A^TA
Yes, yes, this is potentially numerically bad. However, in practice, as long as you are careful with scaling it's perfectly fine and enables better parallel computation.
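For what "numerically bad" means concretely: forming the cross-product matrix squares the condition number. A quick demonstration (the matrix is synthetic, built from a chosen singular spectrum):

```python
import numpy as np

rng = np.random.default_rng(1)
# Build a 50x5 matrix with singular values from 1 down to 1e-6
U, _ = np.linalg.qr(rng.standard_normal((50, 50)))
V, _ = np.linalg.qr(rng.standard_normal((5, 5)))
s = np.logspace(0, -6, 5)
A = (U[:, :5] * s) @ V.T

cond_A = np.linalg.cond(A)          # ~1e6
cond_AtA = np.linalg.cond(A.T @ A)  # ~1e12: cond(A^T A) = cond(A)**2
print(cond_A, cond_AtA)
```

With singular values down at `1e-8`, the squared condition number would already exhaust double precision, which is why careful scaling matters if you take this route.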
> 7. Using Eigenvalues to Estimate Conditioning
This sin is the only way I actually know, so after reading this I realized I need to read more on the topic.
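For anyone else in the same boat, a sketch of the non-sinful alternative: the 2-norm condition number is the ratio of extreme singular values, which is what `np.linalg.cond` computes. For symmetric positive definite matrices the eigenvalue ratio happens to give the same answer, which is presumably why the habit forms (matrix below is random, seed arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((6, 6))
S = M @ M.T + np.eye(6)  # symmetric positive definite

eigs = np.linalg.eigvalsh(S)
eig_ratio = eigs.max() / eigs.min()  # eigenvalue-based estimate
cond = np.linalg.cond(S)             # SVD-based 2-norm condition number

print(np.isclose(eig_ratio, cond))   # True: for SPD matrices they coincide
```

The two agree here only because the matrix is SPD; for general matrices the singular-value version is the one that's always right.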
The last of the four, which is obviously an avoidable sin.
> 3. Evaluating Matrix Products in an Inefficient Order
I definitely have code that should be changed. It's also on my todo list now to audit a specific routine that I suspect can be fixed. This one is just stupid; it can easily be avoided.
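The fix is usually one pair of parentheses. A sketch with made-up sizes: multiplying onto a vector from the right turns an O(n^3) matrix-matrix product into two O(n^2) matrix-vector products.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((1000, 1000))
B = rng.standard_normal((1000, 1000))
v = rng.standard_normal(1000)

# (A @ B) @ v: forms the full 1000x1000 product first — O(n^3) flops.
slow = (A @ B) @ v
# A @ (B @ v): two matrix-vector products — O(n^2) flops.
fast = A @ (B @ v)

print(np.allclose(slow, fast))  # same result up to rounding
```

For longer chains, `np.linalg.multi_dot` picks the cheapest association automatically, so you don't have to reason it out by hand.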
Putting this out to invoke Cunningham's Law... my intuition says that while the article may be right about matrices over the real numbers, using the eigenvalues to check for closeness to singularity may be more valid over the floats, because what you're probably testing for isn't "closeness to singularity" but how close you are to a floating-point failure, and the two seem at least likely to be heavily correlated.
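One way to probe that intuition numerically (a sketch, not a verdict): for non-normal matrices the eigenvalues can look perfectly harmless while the matrix is, in floating point, right at the edge of unsolvable.

```python
import numpy as np

# Non-normal matrix: both eigenvalues are exactly 1,
# yet the matrix is nearly singular in the 2-norm sense.
A = np.array([[1.0, 1e8],
              [0.0, 1.0]])

print(np.linalg.eigvals(A))  # [1., 1.] — looks perfectly conditioned
print(np.linalg.cond(A))     # ~1e16 — at the limit of double precision
```

The determinant is 1 and the eigenvalue ratio is 1, but the singular values are roughly 1e8 and 1e-8, so floating-point failure is exactly what the eigenvalues fail to predict here.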
I now sit back and wait for someone to explain why this is wrong while I act like this was an entirely unanticipated result of my post.