Tool Calling
AI Model Hub for Free: From December 1, 2024, to June 30, 2025, IONOS is offering all foundation models in the AI Model Hub for free. Create your contract today and kickstart your AI journey!
The IONOS AI Model Hub supports tool calling, a feature of our OpenAI-compatible text generation API. Tool calling allows Large Language Models (LLMs) to trigger external functions and APIs during conversation, making the models interactive and dynamic. It significantly extends the LLM’s capabilities beyond pre-trained data or vector database retrieval, enabling real-time actions such as retrieving live weather data or interacting with your business systems.
Text generation models supporting tool calling
Only the following LLMs offer tool calling:
Meta (License)
Llama 3.3 Instruct (70B)
Best for conversational tasks and natural language processing. Delivers quality comparable to Llama 3.1 405B from a much smaller model.
Meta (License)
Llama 3.1 Instruct (8B and 405B)
Ideal for chatbots, assistants, and dialogue-driven applications.
Mistral AI (License)
Mistral Instruct v0.3 (7B)
Strong in European languages; supports longer contexts; great for assistant-style use cases.
Overview
In this tutorial, you will learn how to integrate tool calling with an LLM through the OpenAI-compatible API to generate a mock weather forecast.
This tutorial is intended for developers with basic knowledge of:
REST APIs
A programming language capable of making HTTP requests (Python and Bash examples included)
The IONOS AI Model Hub's OpenAI-compatible text generation API
The concept of tool calling
By the end, you will be able to:
Define tool definitions that describe available external tools
Trigger tool calls based on user queries
Parse tool call responses to deliver final answers
Get started with tool calling
First, set up your environment and authenticate using the IONOS OpenAI-compatible API endpoint.
Download the accompanying code files for ready access to the tool calling scripts and examples and to generate the intended output:
Download the Python Notebook to explore tool calling with ready-to-use examples.
Step 1: Retrieve available models
To begin, retrieve a list of available models that support tool calling. This helps you confirm which models are available for your use case.
The output will be a JSON document listing each model by name. You will reference one of these model names in later steps to perform text generation and tool calling operations.
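A minimal Python sketch of this request, assuming the `requests` library, an API token stored in an `IONOS_API_TOKEN` environment variable, and the endpoint URL shown below (substitute the base URL from your own setup if it differs):

```python
import os
import requests

# Assumed endpoint; substitute the base URL from your IONOS AI Model Hub setup.
API_BASE = "https://openai.inference.de-txl.ionos.com/v1"

def list_models(api_key: str, base_url: str = API_BASE) -> list[str]:
    """Return the names of all models exposed by the OpenAI-compatible API."""
    response = requests.get(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    response.raise_for_status()
    return [model["id"] for model in response.json()["data"]]

# Usage (requires a valid token):
#   print(list_models(os.environ["IONOS_API_TOKEN"]))
```

The returned list corresponds to the JSON document described above; pick one of these model names for the later steps.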
Step 2: Define your functions
Tool calling requires you to define external functions that the model can invoke. These functions might fetch external data, trigger workflows, or perform custom computations.
In this example, we define a mock get_weather function that simulates retrieving current weather data.
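A minimal sketch of such a mock function; the random values stand in for what a real weather API would return:

```python
import random

def get_weather(location: str, unit: str = "celsius") -> dict:
    """Mock weather lookup returning a plausible random forecast.

    A real implementation would call an external weather API here.
    """
    temperature = random.randint(-5, 35)
    if unit == "fahrenheit":
        temperature = temperature * 9 // 5 + 32
    return {
        "location": location,
        "temperature": temperature,
        "unit": unit,
        "condition": random.choice(["sunny", "cloudy", "rainy"]),
    }
```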
These functions form the core logic your AI model will invoke when processing relevant queries.
Step 3: Derive tool definition
Once you've created your function, define a tool definition that describes it in a format the model can understand. This includes the tool name, description, expected parameters, and parameter types.
Each tool definition must use clear, human-readable names and descriptions. It helps the model determine the most appropriate tool to call and which parameters to request or infer.
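For the mock get_weather function, a tool definition in the OpenAI function-calling schema might look like the following sketch (the descriptions are illustrative):

```python
# Tool definition in the OpenAI function-calling format.
# The "name" must match the Python function you intend to execute.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a given location.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name, e.g. 'Berlin' or 'New York'.",
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "Temperature unit to report.",
                    },
                },
                "required": ["location"],
            },
        },
    }
]
```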
In the above example, the model may infer that a query about "Berlin" defaults to Celsius and "New York" to Fahrenheit. If it cannot infer the unit, it may ask the user for clarification.
To use multiple tools, simply add more definitions to the tools list.
Step 4: Enrich user query with available tool definitions
Now, send a user query with your defined tools using the OpenAI-compatible API.
The model will decide whether to call the tool based on the user query. For example, if the user asks about Berlin's weather, the model might infer the unit as Celsius even when the query does not state it explicitly.
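A sketch of such a request, again assuming the `requests` library and the endpoint from Step 1; the model name below is a placeholder for one of the names returned in Step 1:

```python
import requests

API_BASE = "https://openai.inference.de-txl.ionos.com/v1"  # assumed endpoint

def build_payload(model: str, user_query: str, tools: list) -> dict:
    """Assemble an OpenAI-compatible chat completion request with tool definitions."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_query}],
        "tools": tools,
        "tool_choice": "auto",  # let the model decide whether to call a tool
    }

def chat(api_key: str, payload: dict) -> dict:
    """Send the request and return the parsed JSON response."""
    response = requests.post(
        f"{API_BASE}/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json=payload,
        timeout=60,
    )
    response.raise_for_status()
    return response.json()

# Usage (requires a valid token and a model name from Step 1):
#   payload = build_payload("<model-name>", "What's the weather in Berlin?", tools)
#   result = chat(api_key, payload)
```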
Step 5: Process tool call response and execute tool
Once the LLM responds with a tool call, your application must handle the tool call by executing the corresponding function and returning the result to the user.
Tool call responses follow a predictable structure, making it easy to extract arguments and determine the function to execute. However, response behavior can vary based on the model, prompt structure, and tool definition quality.
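The extraction step can be sketched as follows; `mock_response` imitates the shape of an OpenAI-compatible tool call response, and the simplified `get_weather` stands in for the function from Step 2:

```python
import json

def handle_tool_calls(response: dict, available_tools: dict) -> list:
    """Execute every tool call in a chat completion response and collect results."""
    message = response["choices"][0]["message"]
    results = []
    for call in message.get("tool_calls", []):
        name = call["function"]["name"]
        # Arguments arrive as a JSON-encoded string and must be parsed.
        args = json.loads(call["function"]["arguments"])
        results.append(available_tools[name](**args))
    return results

# Illustrative response shaped like an OpenAI-compatible tool call.
mock_response = {
    "choices": [{
        "message": {
            "tool_calls": [{
                "id": "call_0",
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "arguments": '{"location": "Berlin", "unit": "celsius"}',
                },
            }]
        }
    }]
}

def get_weather(location, unit="celsius"):
    return {"location": location, "unit": unit, "temperature": 21}

print(handle_tool_calls(mock_response, {"get_weather": get_weather}))
# → [{'location': 'Berlin', 'unit': 'celsius', 'temperature': 21}]
```

Mapping tool names to functions in a dictionary, as shown, keeps the dispatch logic generic when you add more tools.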
You can also guide model behavior more precisely by:
Using detailed system prompts
Providing complete and clear tool descriptions
Adjusting the temperature or other generation parameters
In real-world use cases, always validate parameters and add safety checks before executing external actions based on user prompts.
Summary
In this tutorial, you learned how to:
Create a mock weather function simulating a real API call
Define a tool definition based on the function
Prompt the LLM with a user query and tool definition
Parse the model’s response to extract tool call details
Execute the tool and return structured output
Tool calling enables your application to integrate LLMs with external data sources, APIs, or internal logic. This makes the LLM more powerful, responsive, and capable of handling real-time or dynamic user scenarios.
For more information about other AI capabilities, see our documentation on text generation and image generation.