>>108550319
>I'm confused about jinja
you get to talk to the model without having to reimplement the template in every program you write. That's the whole point. It may not matter to the SillyTavern crowd, who would rather hand-write a template for every model under the sun than send a structured JSON object, but those of us writing scripts that interact with LLMs are grateful we don't have to care which chat template an LLM uses. We just send
{
  "messages": [{"role": "user", "content": "test"}],
  "model": "gemma",
  "temperature": 1,
  "top_p": 0.95,
  "top_k": 64,
  "chat_template_kwargs": {"enable_thinking": false},
  "stream": true
}
and it works. I don't have to know what it looks like to the model; the backend applies the chat template and formats the prompt.
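To make the split concrete, here's a minimal sketch of both sides. The client builds the same model-agnostic JSON body no matter which model is loaded; the template-rendering function is a hypothetical simplification of what the backend's jinja template does (real templates vary per model; this one mimics the ChatML style some models use). `build_chat_request` and `apply_chatml_template` are illustrative names, not any real library's API.

```python
import json

def build_chat_request(messages, model, **options):
    """Client side: build an OpenAI-style request body.
    The client never sees or writes a chat template; it only
    ships structured JSON and lets the backend format it."""
    body = {"messages": messages, "model": model}
    body.update(options)          # temperature, top_p, stream, etc.
    return json.dumps(body)

def apply_chatml_template(messages):
    """Backend side (simplified stand-in for the jinja template):
    flatten the message list into the model's expected prompt,
    here using ChatML-style <|im_start|>/<|im_end|> markers."""
    out = ""
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    return out + "<|im_start|>assistant\n"   # cue the model to reply

if __name__ == "__main__":
    msgs = [{"role": "user", "content": "test"}]
    print(build_chat_request(msgs, "gemma", temperature=1, stream=True))
    print(apply_chatml_template(msgs))
```

Swap in a different model and only the `"model"` field changes on the client; the backend picks up that model's own template and renders a completely different prompt string, which is exactly why the script doesn't need to care.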