use case for rocm?
>>108769457
it's fine, though vulkan does better in llama.cpp
>>108769457
So you can take potshots at it to feel superior despite being one of us /g/tards who accomplish nothing.
>>108769467
i don't know about linux, but hardware support is a bad joke on windows. outclassed by vulkan both for llama.cpp and stable-diffusion.cpp
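if you want to check the vulkan-vs-rocm claim on your own card, llama.cpp can be built once per backend and compared with its bundled llama-bench tool. a sketch, assuming a recent llama.cpp checkout: the cmake flag names have changed across versions (older trees used LLAMA_HIPBLAS / GGML_HIPBLAS for the rocm path), and `model.gguf` is a placeholder for whatever model you test with.

```shell
# vulkan backend (needs vulkan headers/loader installed)
cmake -B build-vulkan -DGGML_VULKAN=ON
cmake --build build-vulkan --config Release

# rocm/hip backend (needs a rocm install; flag was GGML_HIPBLAS on older trees)
cmake -B build-rocm -DGGML_HIP=ON
cmake --build build-rocm --config Release

# same model, same machine: compare prompt-processing and generation speed
./build-vulkan/bin/llama-bench -m model.gguf
./build-rocm/bin/llama-bench -m model.gguf
```

llama-bench reports tokens/second for prompt processing and generation separately, which matters here since the two backends can win different halves of that comparison.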
>>108769457
it's good for programming supercomputers to break world records nobody gives a shit about
>>108769586
does vulkan outperform cuda on nvidia cards?
>>108769586
with the right extension it can, and it's really competitive in general: https://www.phoronix.com/news/NVIDIA-Vulkan-AI-ML-Success
though I'd be surprised if the gap doesn't significantly widen in cuda's favor for bigger models and much larger context
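the "right extension" here is presumably VK_KHR_cooperative_matrix (the matrix-multiply extension the linked article discusses; an assumption from the link title). a quick way to see whether your driver advertises it, assuming the Vulkan SDK's vulkaninfo tool is installed:

```shell
# prints any cooperative-matrix extensions the installed vulkan driver exposes;
# no output means the driver doesn't support them and the fast path won't be used
vulkaninfo 2>/dev/null | grep -i cooperative_matrix
```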
>>108769489
it is known to exist. maybe someone knows how to do opencl, opengl, and video hardware encode/decode with it
>>108769586
if used correctly, yes. cuda is a high-level wrapper, while vulkan is really low level. the difficult part is that cuda has a lot of data processing libraries, while vulkan is more tailored toward graphics. but since vulkan is lower level, it's entirely possible to efficiently port cuda libraries/programs to it.
>>108769457
I thought it was what AMD needs for running AI shit locally?
>>108769467
>>108769586
vulkan might be less buggy, but rocm is much faster on amd cards
>>108771191
>but rocm is much faster on amd cards
source: it came to me in a dream
https://www.phoronix.com/review/rocm-71-llama-cpp-vulkan
>>108769457
for me, it's opencl. it's super easy and works everywhere.
>>108771180
cuda and rocm predate machine learning. they're gpu compute libraries that happen to be good at training and running models, but they're not mandatory for it.
>>108771191
not on my 7900xtx
>>108769457
Training using AMD
Also it's open source, so it's better on principle than cuda.
>>108771356
i'd rather it be closed source and not trash