I'm curious how this affects Traveling Salesman. I was under the impression that all NP-Complete problems take O(n!). Does this method improve it at all?
Depending on the problem it can also be O(2^n), but that is the worst-case scenario. Modern ILP solvers employ a variety of heuristics that in many cases significantly reduce the time needed to find a solution.
Anecdotally, some years back I was developing MILPs with millions of variables and constraints, and most of them could be solved within minutes to hours. But some of them could not be cracked after weeks, all depending on the inputs.
Often, the concrete problems we are interested in have some internal structure that makes them easier to solve in practice. Solving Boolean formulas is NP-complete, but we routinely solve instances with millions of variables.
ILP (and SAT) solvers are interesting because they are great at ruling out large areas that cannot contain solutions. It's also easy to translate many problems into ILP or SAT problems.
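To make the "translate into SAT" point concrete, here's a toy sketch (my own helper names, not from any particular solver library): graph coloring encoded as CNF clauses, checked with the naive try-every-assignment algorithm. A real SAT solver would replace the brute-force step with clause learning and propagation.

```python
from itertools import product

def graph_coloring_cnf(edges, n_vertices, n_colors):
    """Encode 'vertex v has color c' as a Boolean variable; return CNF
    clauses as lists of signed ints (positive = true literal)."""
    var = lambda v, c: v * n_colors + c + 1
    clauses = []
    for v in range(n_vertices):
        # each vertex gets at least one color...
        clauses.append([var(v, c) for c in range(n_colors)])
        # ...and at most one
        for c1 in range(n_colors):
            for c2 in range(c1 + 1, n_colors):
                clauses.append([-var(v, c1), -var(v, c2)])
    # adjacent vertices must get different colors
    for (u, v) in edges:
        for c in range(n_colors):
            clauses.append([-var(u, c), -var(v, c)])
    return clauses

def brute_force_sat(clauses, n_vars):
    """The naive O(2^n) algorithm: try every assignment."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits
    return None

# A triangle needs 3 colors: 2 is unsatisfiable, 3 is satisfiable.
triangle = [(0, 1), (1, 2), (0, 2)]
print(brute_force_sat(graph_coloring_cnf(triangle, 3, 2), 6))  # None
```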
> I was under the impression that all NP-Complete problems take O(n!).
SAT is NP-complete and the naive algorithm ("just try every combination") is O(2^n). Even for TSP there is a dynamic programming approach that takes O(n^2*2^n) instead of O(n!).
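That dynamic programming approach is Held–Karp. A minimal Python sketch (assuming a dense symmetric distance matrix with city 0 as the start) might look like:

```python
from itertools import combinations

def held_karp(dist):
    """Held-Karp DP for TSP: O(n^2 * 2^n) time instead of O(n!).
    dist is an n x n matrix; returns the cost of the optimal tour
    that starts and ends at city 0."""
    n = len(dist)
    # C[(S, j)]: cheapest path from 0 that visits exactly the set S
    # (a bitmask over cities 1..n-1) and ends at city j.
    C = {}
    for j in range(1, n):
        C[(1 << j, j)] = dist[0][j]
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = sum(1 << j for j in subset)
            for j in subset:
                prev = S & ~(1 << j)
                C[(S, j)] = min(C[(prev, k)] + dist[k][j]
                                for k in subset if k != j)
    full = (1 << n) - 2  # every city except 0
    return min(C[(full, j)] + dist[j][0] for j in range(1, n))

dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(held_karp(dist))  # 80, via the tour 0-1-3-2-0
```

Still exponential, but for n = 20, 2^n is about 10^6 while n! is about 10^18, so the gap matters enormously in practice.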
We actually don't know how long NP-complete problems take to solve. We conjecture that it's superpolynomial, but that can be exponentially faster than O(n!).
So what? It takes time for the community to digest the result, grasp its significance, and then write popularization articles about it. If you want to know what's being discovered right this second, read arXiv preprints. If you want to know what was discovered semi-recently and you want an explanation in layman terms that puts the results in perspective, read popularization pieces a while later.