ggml_backend_cuda_buffer_type_alloc_buffer: allocating 27263.30 MiB on device 0: cudaMalloc failed: out of memory
alloc_tensor_range: failed to allocate CUDA0 buffer of size 28587643136
[ERROR]: ggml_extend.hpp:1593 - Wan2.2-I2V-14B alloc params backend buffer failed, num_tensors = 1095
[DEBUG]: ggml_extend.hpp:1601 - wan_vae params backend buffer size = 242.10 MB(VRAM) (194 tensors)
[DEBUG]: stable-diffusion.cpp:558 - loading weights
>tfw can't fit the 14B Q8 goof
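Quick sanity check (illustrative only, just unit conversion on the numbers from the log above): the raw byte count in the `alloc_tensor_range` line converts exactly to the MiB figure in the `cudaMalloc` error, so the request itself is consistent — the card simply doesn't have ~27 GiB of free VRAM for the params buffer.

```python
# Byte count copied from the log line:
# "failed to allocate CUDA0 buffer of size 28587643136"
buf_bytes = 28_587_643_136

# ggml reports allocations in MiB (1 MiB = 1024**2 bytes)
mib = buf_bytes / 1024**2
print(f"{mib:.2f} MiB")  # matches the 27263.30 MiB in the cudaMalloc error
```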