>>101706202
I followed that too, and I'm also using the fp16 version (unless the .sft one is actually bf16). Driver 546.
Total VRAM 12287 MB, total RAM 65451 MB
pytorch version: 2.4.0+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
[...]
loading in lowvram mode 9981.07
100%|██████████| 20/20 [01:44<00:00, 5.24s/it]
Using pytorch attention in VAE
Using pytorch attention in VAE
Requested to load AutoencodingEngine
Loading 1 new model
Prompt executed in 159.24 seconds
I guess I'll try updating my drivers... but I'm not feeling confident it's a driver thing. Sampling alone is 20 steps × 5.24s/it ≈ 105s, so the other ~54s of the 159.24s is just model/VAE loading in lowvram mode.
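If anyone wants to check whether the .sft is fp16 or bf16: it should just be a renamed safetensors file, so something like this sketch ought to print the dtype (filename is a placeholder, point it at your actual file):

from safetensors import safe_open

# "flux1-dev.sft" is a placeholder; substitute the file you downloaded.
# .sft is assumed to be a plain safetensors file, so safe_open reads it directly.
with safe_open("flux1-dev.sft", framework="pt", device="cpu") as f:
    name = next(iter(f.keys()))       # inspect the first tensor
    tensor = f.get_tensor(name)
    print(name, tensor.dtype)         # torch.float16 vs torch.bfloat16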