Function Calling Schema Builder
Build OpenAI and Anthropic function/tool schemas visually — live JSON preview, runs entirely in your browser
Generates the tools array format for the OpenAI Chat Completions API, including tool_choice support.
{
  "type": "function",
  "function": {
    "name": "my_function",
    "parameters": {
      "type": "object",
      "properties": {},
      "required": []
    }
  }
}

client.chat.completions.create({
  model: "gpt-4o",
  tools: [<paste here>],
  tool_choice: "auto",
  messages: [...]
})

Function calling (also called tool use) lets large language models interact with external systems — databases, APIs, calculators, and more — by outputting structured JSON that your application can execute. Instead of generating freeform text, the model signals “call this function with these arguments,” and your code handles the actual execution.
This builder lets you define functions visually and instantly preview the exact JSON your API call expects. No trial-and-error with the docs, no copy-paste mistakes.
What Is Function Calling?
Without function calling, you’d rely on prompt engineering to extract structured data from a model response and parse it yourself — fragile and error-prone. With function calling, you declare what functions exist and what parameters they accept. The model decides when to call them and fills in the arguments from the conversation context.
Example flow:
- User asks: “What’s the weather in Tokyo?”
- Model sees a get_weather function defined with a location parameter.
- Model responds with a tool call: get_weather({ "location": "Tokyo, Japan" }).
- Your app calls the real weather API and feeds the result back to the model.
- Model synthesizes the final answer: “It’s currently 18°C and cloudy in Tokyo.”
The model never directly calls APIs — it only outputs structured arguments. Your code remains in control of execution.
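That separation can be sketched as a small dispatcher: the model emits a tool call (a name plus JSON-string arguments, as in the OpenAI response shape), and your application code looks up and runs a local implementation. The get_weather stub and its canned result below are hypothetical, not a real weather API.

```typescript
// Hypothetical local implementations, keyed by function name.
const toolImplementations: Record<string, (args: any) => unknown> = {
  get_weather: ({ location }: { location: string }) =>
    ({ location, temp_c: 18, conditions: "cloudy" }), // stubbed result for illustration
};

// The shape of a tool call as it appears in an OpenAI response:
// note that `arguments` arrives as a JSON *string*, not a parsed object.
interface ToolCall {
  id: string;
  function: { name: string; arguments: string };
}

// Route the model's structured output to real code. The model never executes
// anything itself; this function is where your application stays in control.
function executeToolCall(
  call: ToolCall
): { role: "tool"; tool_call_id: string; content: string } {
  const impl = toolImplementations[call.function.name];
  if (!impl) throw new Error(`Unknown tool: ${call.function.name}`);
  const args = JSON.parse(call.function.arguments);
  return {
    role: "tool",
    tool_call_id: call.id,
    content: JSON.stringify(impl(args)), // fed back to the model as the tool result
  };
}
```

The returned message (role "tool", matching tool_call_id) is what you append to the conversation so the model can synthesize its final answer.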
OpenAI vs Anthropic: Format Differences
Both APIs implement function calling, but with different JSON structures.
OpenAI format wraps each tool in a type: "function" envelope with a nested function object:
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get current weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "City and country, e.g. Tokyo, Japan"
}
},
"required": ["location"]
}
}
}
Pass this to the tools array in chat.completions.create().
Anthropic format uses a flatter structure with input_schema instead of parameters:
{
"name": "get_weather",
"description": "Get current weather for a location",
"input_schema": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "City and country, e.g. Tokyo, Japan"
}
},
"required": ["location"]
}
}
Pass this to the tools array in messages.create().
The key differences: OpenAI uses parameters, Anthropic uses input_schema. OpenAI wraps in { type: "function", function: {...} }, Anthropic does not. This tool handles both — switch the tab and copy the correct format.
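Because the inner JSON Schema body is identical in both formats, converting between them is mechanical. A minimal sketch of an OpenAI-to-Anthropic converter (the function name here is mine, not part of either SDK):

```typescript
// Unwrap an OpenAI tool definition into Anthropic's flatter shape:
// drop the { type: "function", function: {...} } envelope and
// rename `parameters` to `input_schema`. The schema body is unchanged.
function openAIToolToAnthropic(tool: {
  type: "function";
  function: { name: string; description?: string; parameters: object };
}) {
  return {
    name: tool.function.name,
    description: tool.function.description,
    input_schema: tool.function.parameters, // same JSON Schema, different key
  };
}
```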
Best Practices for Schema Design
Write descriptions that guide the model, not just document the parameter. A description like "City name" tells the model what type of value to pass. A description like "City and country in the format 'City, Country', e.g. 'Paris, France' — required for disambiguation" tells it how to pass it and why. Models use descriptions to decide how to fill in arguments.
Mark only truly required parameters as required. If a parameter has a sensible default that the model can omit, make it optional. Overly strict required lists cause the model to hallucinate values when it doesn’t have enough context.
Use enum to constrain categorical parameters. Instead of a free-text unit parameter that might receive “celsius,” “Celsius,” “C,” or “metric,” define "enum": ["celsius", "fahrenheit"]. This eliminates an entire class of validation errors downstream.
Keep function names short and unambiguous. The model uses the function name as a major signal for which tool to call. search_products is clearer than do_search or perform_product_search_operation. Use snake_case, the convention in both OpenAI's and Anthropic's documentation.
One function, one responsibility. Avoid do_everything(action: string, ...) patterns. Define granular functions: search_products, get_product_details, add_to_cart. The model is better at choosing the right tool when each tool has a clear, narrow purpose.
Keep descriptions concise — they consume tokens. Every character in your function schema counts against your context window. A good description fits in one sentence. If you need a paragraph, the function probably does too much.
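Pulling these practices together, a schema for a hypothetical search_products tool might look like this (OpenAI format; the tool name, enum values, and defaults are illustrative):

```typescript
// One narrow responsibility, short snake_case name, guiding descriptions,
// an enum-constrained categorical parameter, and a minimal required list.
const searchProductsTool = {
  type: "function",
  function: {
    name: "search_products",
    description: "Search the product catalog by keyword.",
    parameters: {
      type: "object",
      properties: {
        query: {
          type: "string",
          description: "Keywords to match against product names, e.g. 'wireless headphones'",
        },
        category: {
          type: "string",
          enum: ["electronics", "clothing", "home"], // constrained, not free text
          description: "Restrict results to a single category",
        },
        max_results: {
          type: "number",
          description: "Maximum results to return; defaults to 10 if omitted",
        },
      },
      required: ["query"], // category and max_results have sensible defaults
    },
  },
};
```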
Token Costs
Function schemas are included in every API request. A schema with 5 parameters typically costs 100–300 tokens. A large schema with 20 functions and detailed descriptions can cost 1,000–3,000 tokens per call.
The token estimate shown above is a rough approximation (~4 chars per token). Use the LLM Token Counter for exact counts by model.
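That heuristic is easy to reproduce in code. A sketch, assuming the same ~4 characters-per-token approximation applied to the serialized schema:

```typescript
// Rough token estimate for a serialized tool schema (~4 chars per token).
// Real tokenizers vary by model; treat this as a ballpark, not an exact count.
function estimateSchemaTokens(schema: object): number {
  return Math.ceil(JSON.stringify(schema).length / 4);
}
```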
To minimize token usage:
- Remove parameters the model doesn’t need to fill in (compute them server-side instead)
- Trim redundant descriptions
- Remove optional enum values if the model reliably infers the correct values from context
Frequently Asked Questions
Does the model actually call my function? No. The model outputs a JSON object indicating which function to call and with what arguments. Your application code is responsible for routing that to the actual function and returning the result. The model never has direct access to your systems.
What’s the difference between tools and functions in the OpenAI API? The functions parameter was the original API (deprecated). tools is the current format and supports additional tool types beyond functions (like code interpreter and file search in the Assistants API). Always use tools for new integrations.
Can I define multiple functions? Yes. Add as many functions as needed using the “Add Another Function” button above. For OpenAI/Anthropic, the output becomes an array of tool definitions. Best practice: include only functions relevant to the current task — the model performs better with a focused set of 5–10 tools rather than 50.
What happens if the model calls a function with invalid arguments? Validate the arguments in your application code before execution. For critical operations (database writes, payment processing, sending emails), never trust model-generated arguments without schema validation. Libraries like Zod (TypeScript) or Pydantic (Python) make this straightforward.
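As a minimal sketch of that validation step, hand-rolled for illustration (a real integration would lean on Zod or Pydantic as mentioned above), using a tiny parameter-spec type of my own:

```typescript
// Assumed minimal spec: each parameter has a primitive type and a required flag.
type ParamSpec = { type: "string" | "number" | "boolean"; required: boolean };

// Check model-generated arguments against the spec BEFORE executing anything.
// Returns a list of problems; an empty list means the arguments passed.
function validateArgs(
  args: Record<string, unknown>,
  spec: Record<string, ParamSpec>
): string[] {
  const errors: string[] = [];
  for (const [key, { type, required }] of Object.entries(spec)) {
    if (!(key in args)) {
      if (required) errors.push(`missing required parameter: ${key}`);
      continue; // optional parameter omitted: fine
    }
    if (typeof args[key] !== type) {
      errors.push(`${key}: expected ${type}, got ${typeof args[key]}`);
    }
  }
  return errors;
}
```

If the list is non-empty, return the errors to the model as the tool result instead of executing; models will usually retry with corrected arguments.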