Function calling enables dynamic workflows by allowing the model to select and suggest function calls based on user input, which helps in building agentic workflows. By defining a set of functions, or tools, you provide context that lets the model recommend and fill in function arguments as needed.
How function calling works
Function calling enables adaptive workflows that leverage real-time data and structured outputs, creating more dynamic and responsive model interactions.
- Submit a query with tools: Start by submitting a user query along with available tools defined in JSON Schema. This schema specifies parameters for each function.
- The model processes and suggests: The model interprets the query, assesses intent, and decides if it will respond conversationally or suggest function calls. If a function is called, it fills in the arguments based on the schema.
- Receive a model response: You’ll get a response from the model, which may include a function call suggestion. Execute the function with the provided arguments and return the result to the model for further interaction.
Supported models
- Meta-Llama-3.3-70B-Instruct
- Llama-4-Maverick-17B-128E-Instruct
- Qwen3-235B
- gpt-oss-120b
- DeepSeek-V3.1
- DeepSeek-V3.2
- MiniMax-M2.5
To get better quality in tool calling requests with gpt-oss-120b, set the reasoning_effort to high.
Example usage
The examples below describe each step of using function calling, followed by an end-to-end example after the last step.
Step 1: Define the function schema
Define a JSON schema for your function. You will need to specify:
- The name of the function.
- A description of what it does.
- The parameters, their data types, and descriptions.
Example schema for getting the weather
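A sketch of such a schema, written as a Python dictionary in the format accepted by the OpenAI-compatible tools parameter; the get_weather name, its description, and the city parameter are illustrative choices:

```python
# Tool definition for a hypothetical get_weather function.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a given city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "The name of the city, e.g. Paris",
                    }
                },
                "required": ["city"],
            },
        },
    }
]
```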
Step 2: Configure function calling in your request
When sending a request, include the function definition in the tools parameter and set tool_choice to one of the following:
- auto: Allows the model to choose between generating a message or calling a function. This is the default tool choice when the field is not specified.
- required: Forces the model to generate a function call. The model will then always select one or more functions to call.
- none: Prevents the model from calling any functions, forcing a text response.
- To enforce a specific function call, set tool_choice = {"type": "function", "function": {"name": "get_weather"}}. This ensures the model will only use the specified function.
For gpt-oss-120b, when forcing a specific function call, use the Chat Completions API format: {"type": "function", "function": {"name": "function_name"}}. The OpenAI Responses API format, {"type": "function", "name": "function_name"} (no inner "function" key), is not supported. The allowed_tools parameter is also not supported.
The following code block shows a fake weather lookup that returns a random temperature between 20°C and 50°C. For accurate and real-time weather data, use a proper weather API.
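A minimal sketch of such a request, assuming the OpenAI-compatible Python client and a SambaNova Cloud endpoint; the base URL, model name, and the fake get_weather helper are illustrative assumptions, and tools is the schema from Step 1:

```python
import os
import random

from openai import OpenAI

# Assumed SambaNova Cloud endpoint and API key environment variable.
client = OpenAI(
    base_url="https://api.sambanova.ai/v1",
    api_key=os.environ.get("SAMBANOVA_API_KEY"),
)

def get_weather(city: str) -> str:
    # Fake lookup: returns a random temperature between 20°C and 50°C.
    return f"The temperature in {city} is {random.randint(20, 50)}°C."

messages = [{"role": "user", "content": "What is the weather like in Paris?"}]

response = client.chat.completions.create(
    model="Meta-Llama-3.3-70B-Instruct",
    messages=messages,
    tools=tools,          # the schema defined in Step 1
    tool_choice="auto",   # let the model decide whether to call a function
)
```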
Step 3: Handle tool calls
If the model chooses to call a function, you will find tool_calls in the response. Extract the function call details and execute the corresponding function with the provided parameters.
Example code
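A sketch of handling the tool call, assuming the response, get_weather helper, and messages list from the previous step:

```python
import json

assistant_message = response.choices[0].message

if assistant_message.tool_calls:
    for tool_call in assistant_message.tool_calls:
        # The model fills in the arguments as a JSON string matching the schema.
        arguments = json.loads(tool_call.function.arguments)
        if tool_call.function.name == "get_weather":
            result = get_weather(**arguments)
else:
    # The model chose to answer conversationally instead of calling a tool.
    print(assistant_message.content)
```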
Step 4: Provide function results back to the model
Once you have computed the result, pass it back to the model to continue the conversation or confirm the output.
Example code
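A sketch of returning the result, continuing from the variables above; the function output goes back as a tool message that references the original tool call ID:

```python
# Append the assistant's tool call message and the tool result to the conversation.
messages.append(assistant_message)
messages.append(
    {
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": result,
    }
)

# Ask the model for a final answer that incorporates the function result.
final_response = client.chat.completions.create(
    model="Meta-Llama-3.3-70B-Instruct",
    messages=messages,
    tools=tools,
)
print(final_response.choices[0].message.content)
```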
Step 5: Example output
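Output varies between runs because the fake lookup returns a random temperature; a run of the steps above might print something like:

```
The current temperature in Paris is 37°C.
```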
End-to-end example
The following code block shows a fake weather lookup that returns a random temperature between 20°C and 50°C. For accurate and real-time weather data, use a proper weather API.
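A sketch of the complete flow under the same assumptions as the step-by-step code above (OpenAI-compatible Python client, assumed base URL and model name, illustrative get_weather helper):

```python
import json
import os
import random

from openai import OpenAI

# Assumed SambaNova Cloud endpoint and API key environment variable.
client = OpenAI(
    base_url="https://api.sambanova.ai/v1",
    api_key=os.environ.get("SAMBANOVA_API_KEY"),
)

MODEL = "Meta-Llama-3.3-70B-Instruct"

def get_weather(city: str) -> str:
    """Fake weather lookup: returns a random temperature between 20°C and 50°C."""
    return f"The temperature in {city} is {random.randint(20, 50)}°C."

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a given city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "The name of the city, e.g. Paris",
                    }
                },
                "required": ["city"],
            },
        },
    }
]

messages = [{"role": "user", "content": "What is the weather like in Paris?"}]

# Submit the query together with the tool definitions.
response = client.chat.completions.create(
    model=MODEL,
    messages=messages,
    tools=tools,
    tool_choice="auto",
)
assistant_message = response.choices[0].message

if assistant_message.tool_calls:
    # The model suggested one or more tool calls: execute them locally.
    messages.append(assistant_message)
    for tool_call in assistant_message.tool_calls:
        arguments = json.loads(tool_call.function.arguments)
        result = get_weather(**arguments)
        messages.append(
            {"role": "tool", "tool_call_id": tool_call.id, "content": result}
        )

    # Return the tool results so the model can produce the final answer.
    final_response = client.chat.completions.create(
        model=MODEL,
        messages=messages,
        tools=tools,
    )
    print(final_response.choices[0].message.content)
else:
    # The model answered conversationally without calling a tool.
    print(assistant_message.content)
```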
JSON schema
You can set the response_format parameter to your defined schema to ensure the model produces a JSON object that matches your specified structure.
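A sketch assuming the OpenAI-compatible json_schema response format and the client from the examples above; the weather_report schema and its fields are illustrative:

```python
response = client.chat.completions.create(
    model="Meta-Llama-3.3-70B-Instruct",
    messages=[{"role": "user", "content": "Give me a weather report for Paris."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "weather_report",
            "strict": False,  # true is not supported yet; see the note below
            "schema": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "temperature_celsius": {"type": "number"},
                },
                "required": ["city", "temperature_celsius"],
            },
        },
    },
)
print(response.choices[0].message.content)  # a JSON object matching the schema
```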
Ensure you set the "strict" parameter to false, as true isn't supported yet. When it is available, it will ensure the model strictly follows your function schema instead of making a best-effort attempt.
JSON mode
You can set the response_format parameter to json_object in your request to ensure that the model outputs valid JSON. If the model is unable to generate valid JSON, the error message Model did not output valid JSON is returned.
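A sketch of a JSON mode request, assuming the client and model from the examples above:

```python
response = client.chat.completions.create(
    model="Meta-Llama-3.3-70B-Instruct",
    messages=[
        {"role": "system", "content": "Respond in JSON."},
        {"role": "user", "content": "Give me a weather report for Paris."},
    ],
    response_format={"type": "json_object"},
)
print(response.choices[0].message.content)
```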
Example response
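An illustrative response for the JSON mode request above; the actual fields and values depend on the prompt and will vary between runs:

```
{"city": "Paris", "temperature_celsius": 37, "conditions": "sunny"}
```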

