I think a big part of the switching cost is learning a different model's nuances: building good intuition for what works and what doesn't, how to write effective prompts, and so on.
Maybe someday future models will all behave similarly given the same prompt, but we're not quite there yet