>>107986896
have you tried using the flash attention version built into torch? this is what i have in my comfy env launch script:
# enable the experimental AOTriton flash attention kernels in ROCm torch
export TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1
# let TunableOp benchmark and cache the fastest GEMM kernels
export PYTORCH_TUNABLEOP_ENABLED=1
# speed up MIOpen kernel selection
export MIOPEN_FIND_MODE=FAST
# RDNA3 (7900 XTX / XT)
export GPU_ARCHS=gfx1100
# route the flash-attention package through its Triton backend on AMD
export FLASH_ATTENTION_TRITON_AMD_ENABLE=TRUE
python dlbackend/ComfyUI/main.py --use-flash-attention --reserve-vram 1.2
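if you want to sanity check that torch is actually picking the flash kernel, something like this works (a minimal sketch, assumes torch 2.3+ for the torch.nn.attention API; the shapes are just placeholders):

import torch
import torch.nn.functional as F
from torch.nn.attention import sdpa_kernel, SDPBackend

# tiny fp16 attention call forced onto the flash backend,
# raises if no flash kernel is available for your gpu
# (device="cuda" is also correct on ROCm builds)
q = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.float16)
with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
    out = F.scaled_dot_product_attention(q, q, q)
print("flash kernel ran:", out.shape)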
i don't think i ever got sage attention to work though; teacache worked and the flash attention definitely did
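if you ever want to retry sage attention, recent comfy builds have a --use-sage-attention launch flag (assuming you've pip installed the sageattention package; not sure how well it plays with ROCm):

python dlbackend/ComfyUI/main.py --use-sage-attention --reserve-vram 1.2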