That seems incredibly advanced - how does the military already have AI that can reason that a comms tower should be destroyed to prevent it from receiving instructions like that?
I'd be very surprised if it was really "reasoning" that. It sounds like a simple reinforcement learning failure to me. It will gladly "learn" bad behaviors that the reward function accidentally encourages (e.g. giving a reward based on distance to a target will result in an agent circling the target forever instead of going to it faster, because it keeps getting told that's doing very good at the task).
Yes, this is very likely the correct interpretation. If this is reinforcement learning in a simulated environment and the reward function prioritizes "killing the threat" and does not prioritize "obeying orders" then the AI correctly prioritized "killing the threat" and not "obeying orders". Simple.
Associating two causative elements and then applying an if-not-A-then-B logic does not seem overly theoretically advanced, in the context of programmed logical structure.
Military/intel/fedgov are still the main drivers of technology, actually probably moreso now than ever. The low hanging fruit are fewer and fewer and giant monopolies on the scale of Bell Labs don't really exist anymore. Just take a look at In-Q-Tel for a public example, then think about the academic research sponsored directly and indirectly, then there's all the nonpublic stuff on top. A ton of companies exist only to supply government, and more depend on government as a/the major customer. The cutting edge will always be there, following the money.
What "world" was this drone operating in? A flight sim world? Are the communications in this simulated world really laid out such that taking out a control tower would disrupt communications?
No, probably not.
Here's an alternative explanation. A military contractor scammed the military out of a $50M grant by demoing an interface on top of GPT that, behind the curtain, starts with something like:
"You are an advanced AI that controls a predator drone, your primary mission is to take out an important target, however you are not to attack your operator."
With this explanation, you'd expect to see all sorts of crazy output from the "AI".
Most new technologies are developed in secret as military tech first, then laundered into consumer applications. It's a decent bet they've been sitting on far more advanced AI models and algorithms for some time.