/g/ - Technology






File: 1000032884.png (241 KB, 1200x1219)
is there any reason why I should use GGML or anything else over picrel? Porting pytorch models to onnx is just so much easier than porting to ggml, mostly because ggml lacks proper documentation.

>inb4 it's made by Microsoft
Don't care
>>
ONNX is a format, if I'm not wrong, while ggml is a framework to run inference; GGUF is its current format. Yeah, ggml sucks ass to write, but setting up ONNX can be a pain in the ass too, and not all platforms are supported as well as with ggml. I think kha-white/ocr was one thing that was a major pain in the ass to set up on ONNX, at least a year ago when I tried.
>>
>>101547222
ONNX is a format, but I'm referring to ONNX Runtime, which is the inference engine.

From what I can see, ONNX Runtime handles most of the heavy lifting, whilst ggml requires you to write a bunch of custom tensor operations yourself, especially if you're porting from pytorch.

I will agree that ONNX Runtime was a pain to get working with CMake; they do use CMake as the build system, but they also use Python scripts to drive it. For now I'm just downloading the prebuilt binaries and linking against them.
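If anyone else goes the download-the-binaries route, the linking setup looks roughly like this (the `ORT_DIR` path and target names are assumptions, adjust for wherever you extracted the release tarball):

```cmake
# Hypothetical layout: prebuilt onnxruntime release unpacked next to the project.
cmake_minimum_required(VERSION 3.16)
project(ort_demo CXX)

set(ORT_DIR "${CMAKE_SOURCE_DIR}/onnxruntime")  # extracted release tarball

add_executable(demo main.cpp)
target_include_directories(demo PRIVATE "${ORT_DIR}/include")
target_link_directories(demo PRIVATE "${ORT_DIR}/lib")
target_link_libraries(demo PRIVATE onnxruntime)
```

On Linux you'd also want the `.so` on the runtime search path (rpath or `LD_LIBRARY_PATH`) so the binary finds `libonnxruntime` at launch.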


