Why even look at the LLM? A human deployed it and defers to its advice when making decisions; the problem lies in the decision-making process, which is fully under the human's control. It should be obvious how to regulate that: scrutinize what the human behind the LLM does.
I'm giving the benefit of the doubt here that the machine can in theory work well, and that the human uses it well (and that the flaw was very unlikely to be caught by their inspection): so in this idealized setting, only the machine's manufacturer screwed up (and potentially that human's boss, who decided to buy the machine).
Rather, why would LLMs be treated any differently from other machines? Mass recalls of flawed machines (when dangerous enough) are common, after all.