> "While AlphaEvolve is currently being applied across math and computing, its *general* nature means it can be applied to any problem whose solution can be described as an algorithm, and automatically verified. We believe AlphaEvolve could be transformative across many more areas such as material science, drug discovery, sustainability and wider technological and business applications."
Is that not general enough for you? Or not intelligent enough?
Do you imagine AGI as a robot and not as a datacenter solving all kinds of problems?
> Do you imagine AGI as a robot and not as a datacenter solving all kinds of problems?
AGI means it can replace basically all human white-collar work. AlphaEvolve can't do that, while average humans can. White-collar work is mostly done by average humans, after all; if average humans can learn it, then so should an AGI.
An easier test: an AGI must be able to beat most computer games without being trained on those games. Average humans can beat most computer games without anyone telling them how; they play and learn until they beat the game 40 hours later.
AGI was always defined as an AI that can do what typical humans can do, like learn a new domain to become a professional, or play and beat most video games. If the AI can't study its way into a profession, then it's not as smart or general as an average human. So unless it can replace most professionals, it's not an AGI, because you can train a human of average intelligence to become a professional in most domains.
AlphaEvolve demonstrates that Google can build a system that can be trained to do very challenging intellectual tasks (e.g. research-level math).
Isn't it just an optimization problem from this point? E.g. training currently takes a lot of hardware and time. If they make it so efficient that training can happen in a matter of minutes and cost only a few dollars, won't that satisfy your criterion?
I'm not saying AlphaEvolve is "AGI", but it looks odd to deny it's a step towards AGI.
I think most people would agree that AlphaEvolve is not AGI, but any AGI system must be a bit like AlphaEvolve, in the sense that it must be able to iteratively interact with an external system towards a goal that is stated both abstractly and via concrete metrics.
I like to think that the fundamental difference between AlphaEvolve and your typical genetic / optimization algorithms is its ability to work with the context of its goal in an abstract manner, instead of just the derivatives of the cost function with respect to the inputs, which lets it tackle problems of mind-boggling dimensionality.
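To make that contrast concrete, here is a minimal toy sketch of the kind of loop being described: candidates are whole programs scored by an automated evaluator, and "mutation" is a proposer that sees the goal as text rather than gradients. This is not AlphaEvolve's actual API or architecture; `GOAL`, `evaluate`, `propose_rewrite`, `solve`, and `evolve` are all hypothetical names, and `propose_rewrite` is a canned stand-in for what would really be an LLM call with the goal, the current program, and evaluator feedback in the prompt.

```python
# Toy sketch (not the real system): evolutionary search over program text,
# guided by an abstract goal description instead of cost-function derivatives.

GOAL = "Return a correctly sorted copy of a list of integers."

def evaluate(source: str) -> float:
    """Automated verifier: execute the candidate program and score it (higher is better)."""
    namespace: dict = {}
    try:
        exec(source, namespace)
        candidate = namespace["solve"]
        data = [5, 3, 1, 4, 2]
        return 1.0 if candidate(data) == sorted(data) else 0.0
    except Exception:
        return 0.0  # broken programs get the worst score

def propose_rewrite(source: str, goal: str, score: float) -> str:
    """Hypothetical stand-in for an LLM proposer. In the loop being described,
    this would be a prompt containing the goal, the current program, and the
    evaluator's feedback; here it just swaps between two canned candidates so
    the example runs end to end."""
    buggy = "def solve(xs):\n    return xs\n"
    fixed = "def solve(xs):\n    return sorted(xs)\n"
    return fixed if source == buggy else buggy

def evolve(initial: str, generations: int = 10) -> str:
    """Keep whichever program scores better: selection without any gradients."""
    best, best_score = initial, evaluate(initial)
    for _ in range(generations):
        child = propose_rewrite(best, GOAL, best_score)
        child_score = evaluate(child)
        if child_score > best_score:
            best, best_score = child, child_score
    return best

if __name__ == "__main__":
    print(evolve("def solve(xs):\n    return xs\n"))
```

The point of the sketch is only the shape of the loop: the search signal comes from an external verifier plus a proposer that reasons over the goal in the abstract, not from differentiating a cost function with respect to the inputs.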
The "context window" seems to be a fundamental blocker preventing LLMs from replacing a white collar worker without some fundamental break through to solve it.