Tool Calling
AI Model Hub for Free: From December 1, 2024, to June 30, 2025, IONOS is offering all foundation models in the AI Model Hub for free. Create your contract today and kickstart your AI journey!
In typical conversational use cases, users often want a Large Language Model (LLM) to enrich responses with information from external tools or APIs. Imagine a chatbot that, among other features, answers questions about the current weather. Since LLMs are trained once and not updated daily, they cannot answer such real-time queries independently.
One possible workaround is to use vector databases, which store documents that can be retrieved during inference. But this is not always ideal:
Rapidly changing data like weather — Consider a chatbot that should tell users the current weather. Since weather conditions can change multiple times a day, constantly uploading fresh data to a vector database is inefficient and impractical. You need a way to fetch this information in real time, based on the user's question.
Structured tasks like meeting scheduling — Imagine a chatbot that helps users schedule meetings. You would not upload everyone's calendar into a vector database. Instead, you would want the model to collect the necessary information (such as time, participants, and topic) and pass it to a scheduling tool. This is a perfect use case for tool calling, where the LLM provides structured input to trigger an external action.
These use cases are better handled through tool calling.
Tool calling extends the capabilities of LLMs by enabling them to:
Recognize when external data or actions are required,
Ask the user for parameters needed to perform those actions, and
Respond in a structured, machine-readable format to initiate the tool execution.
This creates a seamless user experience where the LLM can fetch real-time data or perform actions outside its built-in capabilities.
Running example
Let us walk through an example where a tool retrieves the current weather. The tool requires two inputs:
The city name
The temperature unit (Celsius or Fahrenheit)
Tool definition
To make this tool available to the model, you write a tool definition as a JSON schema. It specifies the tool name, a description of what the tool does, the relevant parameters, and their types:
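A definition for the weather tool of the running example might look like the following sketch. It uses the widely adopted OpenAI-compatible `tools` format expressed as a Python dictionary; the exact wrapper keys (`type`, `function`) are an assumption, so adapt them if your API expects a different layout:

```python
import json

# Tool definition for the running weather example, written as a JSON schema.
# The wrapper structure follows the common OpenAI-compatible "tools" format.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a given city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": "Name of the city, e.g. Berlin",
                },
                "unit": {
                    "type": "string",
                    "enum": ["celsius", "fahrenheit"],
                    "description": "Temperature unit to use in the answer",
                },
            },
            "required": ["city", "unit"],
        },
    },
}

# Serialize to JSON when sending the definition alongside the user prompt.
print(json.dumps(get_weather_tool, indent=2))
```

The `enum` on `unit` constrains the model to the two allowed temperature units, and `required` tells it which parameters it must fill in (or ask the user for) before calling the tool.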
This definition is sent to the LLM with the user prompt. It tells the model:
Which tools are available
What parameters each tool expects
When to consider using a tool instead of generating a static response
Tool calls
The LLM may respond with a tool call when tool definitions are included in the prompt. A tool call appears as follows:
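A tool call for the running example might take the following shape, again assuming the OpenAI-compatible response format (the `call_0` identifier is hypothetical). Note that in this format `arguments` arrives as a JSON-encoded string rather than a parsed object, so the application must decode it:

```python
import json

# Example tool call as it might appear in the model's response.
tool_call = {
    "id": "call_0",  # hypothetical identifier assigned by the model/API
    "type": "function",
    "function": {
        "name": "get_weather",
        "arguments": '{"city": "Berlin", "unit": "celsius"}',
    },
}

# The arguments field is a JSON string and must be decoded before use.
args = json.loads(tool_call["function"]["arguments"])
print(args["city"], args["unit"])  # Berlin celsius
```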
The critical part is the function field, which includes:
The name of the function (get_weather)
The arguments with values the function should receive
In this example:
city is "Berlin" (a string)
unit is "celsius" (one of the allowed enum values)
This process is powerful because:
The tool call matches the tool definition
The values are either inferred by the model or collected from the user
Tool execution
At this point, the LLM’s job is done. It does not execute the tool itself. That responsibility falls to your application, which must:
Parse the model’s tool call response
Trigger the corresponding tool/API
Integrate the result into the ongoing conversation
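The three steps above can be sketched as a small application-side handler. The dispatch table, the dummy weather lookup, and the `"role": "tool"` result message are illustrative assumptions modeled on the common OpenAI-compatible conversation format, not a specific API:

```python
import json

def get_weather(city: str, unit: str) -> str:
    # Stand-in for a real weather API call.
    return f"18 degrees {unit} and sunny in {city}"

# Map tool names from the model's response to local functions.
TOOLS = {"get_weather": get_weather}

def handle_tool_call(tool_call: dict) -> dict:
    """Parse one tool call, run the matching function, and wrap the
    result as a message to append to the conversation history."""
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    result = TOOLS[name](**args)
    return {
        "role": "tool",
        "tool_call_id": tool_call.get("id"),
        "content": result,
    }

tool_call = {
    "id": "call_0",
    "type": "function",
    "function": {
        "name": "get_weather",
        "arguments": '{"city": "Berlin", "unit": "celsius"}',
    },
}
result_message = handle_tool_call(tool_call)
print(result_message["content"])  # 18 degrees celsius and sunny in Berlin
```

After appending the result message to the conversation, you send the history back to the model so it can phrase a natural-language answer for the user.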
Explore further
To see tool calling in action, visit our Tool Calling tutorial.