ChatGPT pointed them at an authority figure, who then explained the situation from their own perspective. Some folks don't handle being corrected or told they're mistaken very well. I'm willing to let ChatGPT share some of the blame, but the human in the loop seems determined to shirk all the responsibility that rightfully falls on them, so I'm less inclined to give them the benefit of the doubt. I have no doubt they're being entirely unreasonable, so I don't think their interpretation of events says much about how ChatGPT operates generally.
Unreasonable people are wrong to be unreasonable; that's nothing new. Technological solutions don't map neatly onto problems of interpersonal relations, as this example shows.