Okay, but (1) we don't need to simulate physics faster than physics to make accurate-enough predictions to fly a plane in our heads, design a plane on paper, or model flight in code. (2) If that's only because we've cleared out the trees and the Canada Geese and whatnot from our simplified model and "built the road" for the wheels, then necessity is also the mother of invention. "Hey, I want to fly but I keep crashing into trees" could lead an AI agent to keep crashing, or to model flying chainsaws, or eventually to model something that would flatten the ground in the shape of a runway. In other words, why are we assuming that agents cannot shape the world (virtual, for now) to facilitate their simplified mechanical and physical models of "flight" or "rolling" in the same way that we do?
Also, isn't that what's actually scary about AI, in a nutshell? The fact that it may radically simplify our world to facilitate e.g. paper clip production?
> we don't need to simulate physics faster than physics to make accurate-enough predictions to fly a plane
No, but that's only a small part of what you need to model. It won't help you negotiate a plane-saturated airspace, or avoid missiles being shot at you, for example, but even that is still a small part. Navigation models won't help you with supply chains and acquiring the necessary energy and materials for maintenance. Many things can -- and will -- go wrong there.
> In other words, why are we assuming that agents cannot shape the world
I'm not assuming anything, sorry if I'm giving the wrong impression. They could. But the "shapability" of the world is an environment constraint; it isn't fully under the agent's control. To take the paper clipper example, it's not operating under the same constraints we are. For one, unlike us (notwithstanding our best efforts to do just that), it needs to "simplify" humanity. But humanity is a fast, powerful, reactive, unpredictable monster. We are harder to cut than trees. Could it cull us with a supervirus, or by destroying all oxygen, something like that? Maybe. But it's a big maybe. Such brute force requires a lot of resources, the acquisition of which is something else it has to do, and it has to maintain supply chains without accidentally sabotaging them by destroying too much.
So: yes. It's possible that it could do that. But it's not easy, especially if it has to "simplify" humans. And when we simplify, we use our animal intelligence quite a bit to create just the right shapes. An entity that doesn't have that has a handicap.
> Also, isn't that what's actually scary about AI, in a nutshell? The fact that it may radically simplify our world to facilitate e.g. paper clip production?
No, it's more about massive job losses and people left to float alone, mass increase in state control and surveillance, mass brain rot due to AI slop, and full deterioration of responsibility and services through automation and AI as a "responsibility shield".