>>107182594
Okay, obviously I'm fucking something up here. Specs in pic related. Here's how I'm running the model:
llama-server -m models/gpt-oss-120b-mxfp4-00001-of-00003.gguf -c 0 -fa on --jinja --chat-template-file models/templates/openai-gpt-oss-120b.jinja --reasoning-format none -t 8 -ngl 10
The jinja template I'm using has reasoning set to high. Where am I fucking this up?
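For reference, here's the variant I'm considering trying next. This is just a sketch: it assumes my llama.cpp build has the --n-cpu-moe flag, and the idea is to offload all layers instead of only 10, while parking the MoE expert tensors on CPU so the rest fits in VRAM.

```shell
# Sketch, not tested on my setup. Assumes a recent llama.cpp build
# that has the --n-cpu-moe flag.
# -ngl 99: offload all layers to the GPU (instead of only 10).
# --n-cpu-moe 36: keep the MoE expert tensors for 36 layers on the CPU,
# so only attention/shared weights need to fit in VRAM.
llama-server \
  -m models/gpt-oss-120b-mxfp4-00001-of-00003.gguf \
  -c 0 -fa on --jinja \
  --chat-template-file models/templates/openai-gpt-oss-120b.jinja \
  --reasoning-format none -t 8 \
  -ngl 99 \
  --n-cpu-moe 36
```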