In RL that's generally a single goal, and it's human-programmed. You'd also need to assume AGI won't have a programmed goal or motivation the way RL agents do, and will instead pick its own goals and discover its own motivation.
And the agents are still only interacting with envs programmed by humans.
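To make that concrete, here is a minimal sketch (assuming the `gymnasium` package; the env name and numbers are hypothetical) of how both the environment and the agent's "goal" are authored by a human before any training happens: the reward is just a formula someone typed in.

```python
import gymnasium as gym
import numpy as np


class ReachTargetEnv(gym.Env):
    """Hypothetical toy env: move a point along a line toward a fixed target."""

    def __init__(self):
        self.observation_space = gym.spaces.Box(low=-10.0, high=10.0, shape=(1,))
        self.action_space = gym.spaces.Discrete(2)  # 0: step left, 1: step right
        self.target = 5.0   # the "goal" is a constant chosen by the programmer
        self.pos = 0.0

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.pos = 0.0
        return np.array([self.pos], dtype=np.float32), {}

    def step(self, action):
        self.pos += 1.0 if action == 1 else -1.0
        # The agent's entire "motivation" is this human-written scalar reward.
        reward = -abs(self.target - self.pos)
        terminated = abs(self.target - self.pos) < 0.5
        return np.array([self.pos], dtype=np.float32), reward, terminated, False, {}
```

The agent never chooses what to want; it only maximizes the reward expression above, inside dynamics that were also hand-coded.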
I find pretty much all of Bostrom's arguments outlandish and lacking a grounding in reality.