/g/ - Technology


Thread archived.
You cannot reply anymore.




File: 1773868904045271.jpg (1 MB, 1280x1024)
use case for rocm?
>>
>>108769457
it's fine. though vulkan does better in llama.cpp
>>
>>108769457
So you can take potshots at it to feel superior despite being one of us /g/tards that accomplish nothing.
>>
>>108769467
i don't know about linux but hardware support is a bad joke on windows. outclassed by vulkan both for llama.cpp and stable-diffusion.cpp
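For what it's worth, llama.cpp builds against either backend, so the two are easy to benchmark side by side on the same card. A build sketch, assuming a recent llama.cpp checkout — note the HIP flag has been renamed across versions (LLAMA_HIPBLAS, then GGML_HIPBLAS, then GGML_HIP), so check the current build docs:

```shell
# Vulkan backend (needs the Vulkan SDK / loader installed)
cmake -B build-vulkan -DGGML_VULKAN=ON
cmake --build build-vulkan --config Release

# ROCm/HIP backend (needs a ROCm install)
cmake -B build-rocm -DGGML_HIP=ON
cmake --build build-rocm --config Release

# compare token throughput with the bundled benchmark tool
./build-vulkan/bin/llama-bench -m model.gguf
./build-rocm/bin/llama-bench -m model.gguf
```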
>>
>>108769457
it's good for programming supercomputers to break world records nobody gives a shit about
>>
>>108769467
does vulkan outperform cuda on nvidia cards?
>>
>>108769586
with the right extension it can, and it's really competitive in general: https://www.phoronix.com/news/NVIDIA-Vulkan-AI-ML-Success
though I'd be surprised if the gap doesn't significantly widen in cuda's favor for bigger models and much larger context
>>
>>108769489

it is known to exist; maybe someone knows how it handles opencl, opengl, and video hardware encode/decode
>>
>>108769586
if used correctly, yes. cuda is a high-level wrapper, while vulkan is really low level. the difficult part is that cuda ships a lot of data-processing libraries, while vulkan is more tailored toward graphics. but since vulkan sits lower in the stack, it's entirely possible to efficiently port cuda libraries/programs to it.
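To illustrate that the gap is mostly libraries rather than capability: the kernel itself looks nearly the same in both APIs. A hypothetical CUDA sketch of an element-wise kernel — the Vulkan equivalent is the same dozen lines as a GLSL compute shader, but you supply the descriptor sets, pipeline, and dispatch boilerplate that CUDA's `<<<...>>>` launch syntax (plus cuBLAS/cuDNN on top) handles for you:

```cuda
// one thread per element; identical logic would live in a
// Vulkan compute shader dispatched with vkCmdDispatch
__global__ void scale(float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

// host-side launch: 256 threads per block, enough blocks to cover n
// scale<<<(n + 255) / 256, 256>>>(x, a, n);
```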
>>
>>108769457
I thought it was what AMD needs for running AI shit locally?
>>
>>108769467
>>108769586
vulkan might be less buggy, but rocm is much faster on amd cards
>>
>>108771191
>but rocm is much faster on amd cards
source: it came to me in a dream
https://www.phoronix.com/review/rocm-71-llama-cpp-vulkan
>>
>>108769457
for me, it's opencl. it's super easy and works everywhere.
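On the "works everywhere" point: the same OpenCL host code runs on any device whose vendor ships an ICD (NVIDIA, AMD, Intel, even CPUs). A minimal sketch, error checking omitted; assumes OpenCL 2.0+ headers and linking with -lOpenCL:

```c
#include <CL/cl.h>
#include <stdio.h>

/* kernel source is compiled at runtime for whatever device is found */
static const char *src =
    "__kernel void scale(__global float *x, float a) {"
    "    x[get_global_id(0)] *= a;"
    "}";

int main(void) {
    float data[4] = {1, 2, 3, 4};
    float a = 2.0f;
    size_t n = 4;

    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueueWithProperties(ctx, dev, NULL, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", NULL);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof data, data, NULL);
    clSetKernelArg(k, 0, sizeof buf, &buf);
    clSetKernelArg(k, 1, sizeof a, &a);
    clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof data, data, 0, NULL, NULL);

    printf("%g %g %g %g\n", data[0], data[1], data[2], data[3]);
    return 0;
}
```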
>>
>>108771180
cuda and rocm predate machine learning; they're gpu compute libraries that happen to be good at training and running models, but they're not mandatory for it.
>>
>>108771191
not on my 7900xtx
>>
>>108769457
Training using AMD
Also it's open source so it's better on principle than cuda.
>>
>>108771356
i'd rather it be closed source and not trash


