This model is not learning, but the transcripts being farmed at scale are surely being used (with human assistance and oversight) to build the next models. I don't think the kind of exploit that hit the Microsoft bot is possible here; however, the next model might give a superficial appearance of being safer, since the transcripts we are giving OpenAI of our attempts to outsmart the model will be used to train it further.