I'm not sure why PULSE is being called out here. This failure mode is common to any super-resolution model trained on a heavily biased dataset like FFHQ; in fact, this discussion is part of what led to the dataset being changed. The authors extended their research to discuss the bias [0], and I'll also link the Reddit discussion [1]. IMO the authors responded to this correctly. It's important to remember that models are only reliable on in-distribution data, and kudos to the authors for running more experiments and including work on a dataset that is less biased with respect to race and gender.
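As a toy sketch of the in-distribution point (nothing to do with PULSE's actual method; all data, means, and numbers below are made up for illustration), a classifier fit on samples from a single mode does fine on held-out data from that mode and falls to roughly chance on a shifted one:

    # Toy sketch, not PULSE: a model fit on a skewed training
    # distribution holds up in-distribution and collapses out of it.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def sample(n, center):
        # 2-D features around `center`; label = which side of the
        # center the first feature falls on.
        X = rng.normal(loc=center, scale=1.0, size=(n, 2))
        y = (X[:, 0] > center[0]).astype(int)
        return X, y

    # "Biased" training set: every example comes from one mode.
    X_tr, y_tr = sample(5000, np.array([0.0, 0.0]))
    model = LogisticRegression().fit(X_tr, y_tr)

    # Held-out data from the same mode vs. a mode never seen in training.
    X_in, y_in = sample(1000, np.array([0.0, 0.0]))
    X_out, y_out = sample(1000, np.array([4.0, 4.0]))

    print("in-distribution accuracy:    ", model.score(X_in, y_in))    # ~0.99
    print("out-of-distribution accuracy:", model.score(X_out, y_out))  # ~0.5

The learned boundary sits where the training data lives, so on the shifted mode the model's predictions are essentially uncorrelated with the true labels. Same story with a face dataset dominated by one demographic.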
[0] Section 6 of https://arxiv.org/pdf/2003.03808.pdf
[1] https://www.reddit.com/r/MachineLearning/comments/hk2ryn/d_h...