Update: Just released an open-source Numba heuristic (~320 lines) hitting 0.3674–0.3677 on the G81 benchmark (20k nodes, 40k edges):
- 99% convergence in ~1200 iterations
- Early stopping cuts hours to minutes
- <80MB RAM, no GPU, no external solvers
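The GravOpt internals live in the repo, but the early-stopping idea itself is simple: stop once no improving move has been seen for a while. A minimal sketch of that pattern in a single-flip local-search MAX-CUT loop (the function name and `patience` parameter are illustrative assumptions, not the GravOpt code):

```python
import random

def maxcut_local_search(edges, n, max_iters=1200, patience=100, seed=0):
    """Greedy single-flip local search for MAX-CUT with early stopping.

    edges: list of (u, v, w) tuples; n: number of nodes.
    Illustrative sketch only -- not the GravOpt algorithm.
    """
    rng = random.Random(seed)
    side = [rng.randint(0, 1) for _ in range(n)]  # random initial partition

    # adjacency list so a flip's gain is computed from local edges only
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    cut = sum(w for u, v, w in edges if side[u] != side[v])
    stale = 0
    for _ in range(max_iters):
        v = rng.randrange(n)
        # gain from flipping v: same-side edges become cut, cut edges uncut
        gain = sum(w if side[v] == side[u] else -w for u, w in adj[v])
        if gain > 0:
            side[v] ^= 1
            cut += gain
            stale = 0
        else:
            stale += 1
            if stale >= patience:  # early stop: no recent improving flip
                break
    return cut, side
```

On a toy triangle, `maxcut_local_search([(0, 1, 1), (1, 2, 1), (0, 2, 1)], 3)` returns a cut of 2, the optimum; the `patience` counter is what turns "hours" of wasted iterations into an early exit once progress stalls.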
Quick comparison on same graph:
- Random: 0.258
- Greedy (10 restarts): 0.324
- Simulated Annealing: 0.349–0.356
- Tabu Search: 0.362–0.365
- Goemans-Williamson: 0.878 approximation guarantee in theory, but the SDP is unusable at this scale
GravOpt with 1200 steps beats most of these classics and is 50–200x faster.
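For anyone reproducing the table: the scores above are presumably cut weight divided by total absolute edge weight (my assumption about the metric). A hedged sketch of that ratio plus a reader for the usual G-set rudy format (`n m` header, then 1-indexed `u v w` lines, also an assumption about G81's layout):

```python
def read_gset(path):
    """Read a G-set graph in rudy format: 'n m' header, then 'u v w'
    edge lines with 1-indexed endpoints (assumed layout)."""
    with open(path) as f:
        n, m = map(int, f.readline().split())
        edges = [tuple(map(int, line.split())) for line in f if line.strip()]
    # convert endpoints to 0-indexed
    return n, [(u - 1, v - 1, w) for u, v, w in edges]

def cut_ratio(edges, side):
    """Cut weight over total |weight| -- the assumed metric behind the table."""
    cut = sum(w for u, v, w in edges if side[u] != side[v])
    total = sum(abs(w) for _, _, w in edges)
    return cut / total
```

With this, comparing any two solvers is just `cut_ratio(edges, solver_partition)` on the same parsed graph.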
Code + official G81 file (auto-downloads if missing): https://github.com/Kretski/GravOpt-MAXCUT
Run `python gravopt.py` and watch it work!
Is this just a rediscovered '90s metaheuristic with better convergence and early stopping, or is it genuinely useful for 20k–200k-variable QUBO instances in 2025? Feedback welcome! Pro version (€200, first 100): https://kretski.lemonsqueezy.com/buy/9d7aac36-dc13-4d7f-b61a...