I always get the "Potential NSFW content was detected in one or more images. A black image will be returned instead. Try again with a different prompt and/or seed." error when using Stable Diffusion, even with the example code from Hugging Face:
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline

model_id = "CompVis/stable-diffusion-v1-4"
device = "cuda"
token = 'MY TOKEN'

pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, revision="fp16", use_auth_token=token)
pipe = pipe.to(device)

prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
    image = pipe(prompt, guidance_scale=7.5).images[0]

image.save("astronaut_rides_horse.png")
The pipeline has a single argument that removes it: safety_checker.

StableDiffusionPipeline.from_pretrained(
    ...
    safety_checker=None,
)
However, depending on the pipeline you use, you can get a warning message if safety_checker is set to None while requires_safety_checker is True.
From pipeline_stable_diffusion_inpaint_legacy.py:

if safety_checker is None and requires_safety_checker:
    logger.warning(f"...")
So you can do this:
StableDiffusionPipeline.from_pretrained(
    ...
    safety_checker=None,
    requires_safety_checker=False,
)
This also works with from_single_file:

StableDiffusionPipeline.from_single_file(
    ...
    safety_checker=None,
    requires_safety_checker=False,
)
You can also change both later, after the pipeline has been created:

pipeline.safety_checker = None
pipeline.requires_safety_checker = False
This post covers a bit of what the checker actually does: https://vickiboykis.com/2022/11/18/some-notes-on-the-stable-diffusion-safety-filter/
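As that post explains, the checker screens each generated image and flags suspected NSFW outputs, which the pipeline then blacks out. Another workaround sometimes used instead of setting the attribute to None is to swap in a no-op callable that keeps the checker's return shape (the images plus a per-image flag) but never flags anything. This is only a minimal sketch: the function name and the simplified (images, clip_input) signature are illustrative assumptions, not the exact diffusers API.

```python
# Hypothetical no-op stand-in for the safety checker. The real checker
# returns the images together with a per-image "has NSFW concept" flag;
# this version passes every image through and marks it as safe.
def noop_safety_checker(images, clip_input=None):
    # False means "no NSFW content detected" for that image
    return images, [False] * len(images)

# It could then be assigned in place of the original checker, e.g.:
# pipe.safety_checker = noop_safety_checker
```

Because the images are returned unchanged and every flag is False, nothing gets replaced with a black image.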
If you simply want to disable it, you can now set the safety_checker argument to None (you no longer have to modify the source Python):

StableDiffusionPipeline.from_pretrained(
    ...
    safety_checker=None,
)