Can I set the temperature individually for each request, overriding the default that was set on the llama-server command line?
from openai import OpenAI

client = OpenAI(api_key=api_key, base_url=base_url)
response = client.chat.completions.create(
    model=model,
    messages=messages,
    extra_body=extra_body,
    temperature=temperature,  # does this per-request value override the server-side default?
    tools=tools,
    tool_choice="auto",
)
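For context, a minimal sketch of why a per-request value can differ from the server default: in the OpenAI-compatible chat completions API, `temperature` is a field of each request's JSON body, so every call carries (or omits) its own value. The helper below is hypothetical, not part of any library; it just assembles such a body the way the client does before sending it:

```python
def build_chat_request(model, messages, temperature=None, tools=None):
    """Assemble an OpenAI-style /v1/chat/completions request body.

    `temperature` is a per-request field: when given, it is sent in the
    JSON body of that call only; when omitted, the server falls back to
    its own default (for llama-server, whatever the CLI was started with).
    """
    body = {"model": model, "messages": messages}
    if temperature is not None:
        body["temperature"] = temperature
    if tools is not None:
        body["tools"] = tools
        body["tool_choice"] = "auto"
    return body


messages = [{"role": "user", "content": "Hello"}]

# Two calls, two different temperatures: each request carries its own value.
greedy = build_chat_request("my-model", messages, temperature=0.0)
creative = build_chat_request("my-model", messages, temperature=1.2)

print(greedy["temperature"], creative["temperature"])
```

Whether llama-server honors the field for every sampler setting is a separate question, but the transport layer sends it per request either way.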