I think it depends; it could come out either worse or better.
Some code is compiled more often than it is run, and some code is run more often than it's compiled.
If you can spend 100k operations per compilation to save an average of 50k operations per run, that'll probably be a net positive for Chromium, glibc functions, or Linux syscalls, all of which end up being run by users far more often than they are built by developers.
If it's 100k operations at build-time to remove 50k operations from a test function only hit by CI, then yeah, you'll be in the hole 50k operations per CI run.
All of this ignores the human cost; I don't really want to try (and fail) to approximate the CO2 emissions of converting coffee to performance optimizations.
Not all optimizations are more energy consuming. As an analogy: does using a car consume more energy than a bicycle? Yes. But using a bicycle does not consume more energy than running on foot.
And on a slightly ranty note, Apple's A12Z and A14 are still apparently "too weak" to run multiple windows simultaneously :)