>>101877350
The safety check is applied after images are generated. The source code was posted up above. Basically, it takes every generated image and computes a CLIP embedding for it. There are some fixed vectors in that embedding space representing "special care concepts" and "bad concepts"; if the embedding of your image is too similar to one of those vectors, the image gets overwritten with black before it's returned from the pipeline object.
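Roughly this, in sketch form. The names here (concept_embeds, threshold, check) are illustrative, not the real StableDiffusionSafetyChecker attributes, but the logic is the same: cosine similarity against each concept vector, black out on a match.

```python
import math

def cosine(a, b):
    # plain cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def check(image, image_embedding, concept_embeds, threshold=0.3):
    # If the image's embedding is too close to any "bad concept"
    # vector, replace the image with black and flag it.
    flagged = any(cosine(image_embedding, c) > threshold
                  for c in concept_embeds)
    if flagged:
        image = [[0, 0, 0]] * len(image)  # blacked-out placeholder
    return image, flagged
```

The real checker also has per-concept thresholds and an "adjustment" that lowers them in NSFW-leaning images, but the shape of it is this.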
The function itself returns two things: the list of images, and a list of booleans saying whether wrongthink was detected in each one. So you bypass it by replacing the built-in safety checker with one that always returns the original list of images and a list of Falses.
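As a named function instead of the one-liner lambda, that's just this. The keyword names mirror how the pipeline calls its safety_checker (images=..., clip_input=...); clip_input is accepted and ignored.

```python
def null_safety_checker(images, clip_input=None, **kwargs):
    # Return every image untouched, and "nothing detected" for each.
    return images, [False] * len(images)
```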
Or as >>101877437 says, you can just set it to None. I don't do that because I'm sure that something, somewhere will expect safety_checker to be callable at some point, and using None will cause a TypeError.
Using this bypass, you'd have something like
model_id = "runwayml/stable-diffusion-v1-5"
pipeline = StableDiffusionPipeline.from_pretrained(model_id).to("cuda")
pipeline.safety_checker = lambda images, **kwargs: (images, [False] * len(images))