Well then, isn’t the whole case about just denying the inevitable?
If OpenAI can do it, it's not very unlikely that someone else will do the same, open or not. Our best chance is still to prepare with the best available information.
Yep, it absolutely is about denying the inevitable, or rather, "playing for time." The longer we manage to delay, the more likely somebody comes up with some clever approach for actually controlling the things. Also humanity stays alive in the meantime, which is no small thing in itself.
... Eh? You as an end-user can't contribute to this anyway. If you really want to work on safety, either use a smaller network or join the safety team at a big org.
> The best information we have now is if we create AGI/ASI at this time, we all die.
We can still unplug or turn off the things. We are still very far away from a situation where an AI controls factories and a full supply chain and can take physical control of the world.
Meanwhile, every giant AI company: "yeah we're looking at robotics, obviously if we could embody these things and give them agency in the physical world that would be a great achievement"
Our rush into AI and embodiment reminds me of the lily pad exponential growth parable.
> Imagine a large pond that is completely empty except for 1 lily pad. The lily pad will grow exponentially and cover the entire pond in 3 years. In other words, after 1 month there will be 2 lily pads, after 2 months there will be 4, etc. The pond is covered in 36 months.
We're all going to be sitting around at 34 months saying "Look, it's been years and AI hasn't taken over that much of the market" -- while the pond goes from a quarter covered to fully covered in the last two months.
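To make that concrete, here's a minimal sketch (plain Python, month values chosen for illustration) of the parable's arithmetic: with monthly doubling and full coverage at month 36, coverage halves for each month you step back from the end.

    # Fraction of the pond covered at a given month, assuming monthly
    # doubling and full coverage at month 36 (as in the parable above).
    def coverage(month: int, total_months: int = 36) -> float:
        return 2.0 ** (month - total_months)

    for m in (30, 33, 34, 35, 36):
        print(f"month {m}: {coverage(m):.1%} of the pond covered")
    # month 30:  1.6%
    # month 34: 25.0%  <- "hasn't taken over that much" territory
    # month 36: 100.0%

At month 34, a casual observer sees three quarters of the pond still open water; two doublings later it's gone.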