>>102140880
The default is like 30 steps iirc.
I just checked, DPM++ 2M Karras 50 steps.
Launching Web UI with arguments: --opt-sub-quad-attention --disable-nan-check
ONNX: version=1.19.0.dev20240514001+rocm60 provider=ROCMExecutionProvider, available=['MIGraphXExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider']
Loading weights [d5f8393fea] from /home/user/Downloads/amdgpu/models/Stable-diffusion/anything-v4.0-pruned-fp16.safetensors
Creating model from config: /home/user/Downloads/amdgpu/configs/v1-inference.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 7.0s (prepare environment: 9.3s, initialize shared: 0.6s, load scripts: 0.3s, create ui: 0.4s, gradio launch: 0.3s).
Loading VAE weights specified in settings: /home/user/Downloads/amdgpu/models/VAE/anything-v4.0.vae.pt
Applying attention optimization: sub-quadratic... done.
Model loaded in 3.8s (load weights from disk: 0.5s, create model: 0.3s, apply weights to model: 1.7s, load VAE: 0.9s, calculate empty prompt: 0.2s).
Total progress: 100%|| 50/50 [00:08<00:00, 4.32it/s]
Total progress: 100%|| 50/50 [00:08<00:00, 4.32it/s]
Total progress: 100%|| 50/50 [00:09<00:00, 4.26it/s]
Total progress: 100%|| 50/50 [00:09<00:00, 4.26it/s]
Total progress: 100%|| 50/50 [00:08<00:00, 4.31it/s]
Total progress: 100%|| 50/50 [00:09<00:00, 4.25it/s]
Total progress: 100%|| 50/50 [00:09<00:00, 4.26it/s]
About 7.5 img/min. This was also using a LoRA; I don't know whether that makes it slower.
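For what it's worth, the ~7.5 img/min figure lines up with the log: each 50-step run finishes in roughly 8-9 seconds per the tqdm timestamps, and 60/8 = 7.5. A quick back-of-the-envelope sketch (the per-image seconds are read off the log above, not measured independently):

```python
def images_per_minute(seconds_per_image: float) -> float:
    """Convert per-image wall time into an images/minute rate."""
    return 60.0 / seconds_per_image

# Seven runs at roughly 8-9 s each, per the pasted log:
print(round(images_per_minute(8.0), 1))  # best case from the log
print(round(images_per_minute(9.0), 1))  # worst case from the log
```

Note the displayed it/s (~4.3) doesn't quite match 50 steps in 8 s; tqdm's rate field is a running average, so the elapsed time is the more trustworthy number here.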