To be clear, nobody thinks GPT itself is capable of doing anything really bad. (They actually tried to coach GPT-4 into escaping onto the internet, and it failed.) It's more that 1) they think we're within 5-10 years of creating something that could become SkyNet, 2) we don't actually know how to ensure that such an AI wouldn't decide to just kill us, 3) the nature of competition means everyone is going to race to get there first in spite of #2, and therefore 4) we're all doomed.
I'm not as pessimistic as Yudkowsky, but I do think his fears are worth considering. It looks like OpenAI is in a similar place.