>>108317644
In my case, the tool() decorator attaches the metadata to the function, and that information is added to the request every time. This way, an LLM trained for agentic tool use can generate a correctly JSON-formatted response to execute a given tool.
from functools import wraps
from typing import Callable

REGISTERED_TOOLS = []  # global registry, filled as tools are defined

def tool(description: str, parameters: dict = None):
    """
    Decorator to mark a function as an agent tool.

    Args:
        description: Human-readable description of what the tool does
        parameters: JSON schema for parameters (auto-generated if not provided)

    Returns:
        Decorated function with tool metadata attached
    """
    def decorator(func: Callable):
        @wraps(func)
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)
        # Attach metadata to the function
        wrapper.__tool_description__ = description
        wrapper.__tool_parameters__ = parameters or _auto_generate_schema(func)
        wrapper.__tool_name__ = func.__name__
        wrapper.__tool_func__ = func
        # Register globally
        REGISTERED_TOOLS.append(wrapper)
        return wrapper
    return decorator
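To show what "add it to the request" might look like in practice, here is a minimal sketch of turning the registered functions into the "tools" array that most chat-completion APIs accept. The `__tool_*__` attributes match the decorator above; the exact payload shape (OpenAI-style function calling) is my assumption, not something from the post.

```python
def build_tools_payload(tools):
    """Serialize decorated tool functions into a JSON-ready tool list.

    Each entry reads the metadata the @tool decorator attached
    (__tool_name__, __tool_description__, __tool_parameters__).
    The OpenAI-style envelope is an assumption; adjust to your API.
    """
    return [
        {
            "type": "function",
            "function": {
                "name": t.__tool_name__,
                "description": t.__tool_description__,
                "parameters": t.__tool_parameters__,
            },
        }
        for t in tools
    ]
```

You would call `build_tools_payload(REGISTERED_TOOLS)` once per request and pass the result alongside the prompt.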
Here is an actual tool function; its purpose and parameters are described in the decorator.
# ---------- Basic arithmetic ----------
@tool(
    description="Add two numbers (a + b)",
    parameters={
        "type": "object",
        "properties": {
            "a": {"type": "number", "description": "First number"},
            "b": {"type": "number", "description": "Second number"}
        },
        "required": ["a", "b"]
    }
)
def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b
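The other half of the loop is executing what the model sends back. A hedged sketch, assuming the model emits a JSON object like {"name": "add", "arguments": {"a": 2, "b": 3}} (that wire format is my assumption; real APIs vary):

```python
import json

def dispatch_tool_call(raw_json, registry):
    """Parse a model-emitted tool call and run the matching function.

    Expects JSON of the form {"name": ..., "arguments": {...}};
    this shape is assumed, not taken from the post above.
    """
    call = json.loads(raw_json)
    for t in registry:
        if t.__tool_name__ == call["name"]:
            return t(**call["arguments"])
    raise KeyError(f"unknown tool: {call['name']}")
```

Looking the name up in the registry means the model never touches arbitrary Python; it can only invoke functions you explicitly decorated.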
I confess I have no idea, but it werks somehow for me.
Big LLMs are capable of processing long prompts containing many steps.
Small LLMs (like 9B, that's why I was wondering) mostly suck at multi-step execution.