Tool Calling

The IONOS AI Model Hub supports tool calling, a feature of our OpenAI-compatible text generation API. Tool calling allows Large Language Models (LLMs) to trigger external functions and APIs during conversation, making the models interactive and dynamic. It significantly extends the LLM’s capabilities beyond pre-trained data or vector database retrieval, enabling real-time actions such as retrieving live weather data or interacting with your business systems.

Note: In modern AI APIs, the term "tool calling" has replaced "function calling." This guide uses the updated terminology in accordance with current standards.

Text generation models supporting tool calling

Not all Large Language Models listed in the AI Model Hub support tool calling. To find out which models are compatible, check the individual model cards.

Overview

In this guide, you will learn how to integrate tool calling with an LLM through the OpenAI-compatible API to generate a mock weather forecast.

This guide is intended for developers with basic knowledge of:

  • REST APIs

  • A programming language capable of making HTTP requests (Python and Bash examples included)

  • IONOS AI Model Hub's OpenAI-compatible text generation API

  • The concept of tool calling

By the end, you will be able to:

  1. Write tool definitions that describe the available external tools

  2. Trigger tool calls based on user queries

  3. Parse tool call responses to deliver final answers

Get started with tool calling

First, set up your environment and authenticate using the IONOS OpenAI-compatible API endpoint.

Download the accompanying code files to access the tool calling scripts and examples and reproduce the intended output:

Download the Python Notebook to explore tool calling with ready-to-use examples.

Step 1: Retrieve available models

To begin, retrieve the list of available models. This helps you confirm which of the offered models support tool calling and fit your use case.
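A minimal sketch in Python using the requests library. The base URL and the IONOS_API_TOKEN environment variable below are assumptions; substitute the endpoint and credentials from your own IONOS setup:

```python
import os
import requests

# Assumed base URL of the IONOS OpenAI-compatible API; replace it
# with the endpoint from your own setup if it differs.
IONOS_API_URL = "https://openai.inference.de-txl.ionos.com/v1"
API_TOKEN = os.environ["IONOS_API_TOKEN"]  # your IONOS API token

# GET /models lists the models exposed by the endpoint; consult the
# model cards to see which of them support tool calling.
response = requests.get(
    f"{IONOS_API_URL}/models",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)
print(response.json())
```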

The output will be a JSON document listing each model by name. You will reference one of these model names in later steps to perform text generation and tool calling operations.

Step 2: Define your functions

Tool calling requires you to define external functions that the model can invoke. These functions might fetch external data, trigger workflows, or perform custom computations.

In this example, we define a mock get_weather function that simulates retrieving current weather data.
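A minimal sketch of such a mock function; the hard-coded values stand in for a real weather API call:

```python
def get_weather(location: str, unit: str = "celsius") -> dict:
    """Mock weather lookup; a real implementation would call a weather API."""
    return {
        "location": location,
        "unit": unit,
        "temperature": 22 if unit == "celsius" else 72,
        "condition": "sunny",
    }
```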

These functions form the core logic your AI model will invoke when processing relevant queries.

Step 3: Derive tool definition

Once you've created your function, write a tool definition that describes it in a format the model can understand. This includes the tool name, description, expected parameters, and parameter types.

Each tool definition should use clear, human-readable names and descriptions. This helps the model determine the most appropriate tool to call and which parameters to request or infer.
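One possible definition for the mock get_weather function, following the OpenAI tool schema; the description strings are illustrative:

```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a given location.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name, e.g. Berlin or New York",
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "Temperature unit; may be inferred from the location if omitted",
                    },
                },
                "required": ["location"],
            },
        },
    }
]
```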

In the above example, the model may infer that a query about "Berlin" defaults to Celsius and "New York" to Fahrenheit. If it cannot infer the unit, it may ask the user for clarification.

To use multiple tools, simply add more definitions to the tools list.

Step 4: Enrich user query with available tool definitions

Now, send a user query with your defined tools using the OpenAI-compatible API.
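A sketch using the openai Python client, continuing with the IONOS_API_URL, API_TOKEN, and tools values from the previous steps; the model name is a placeholder for one of the tool-capable models you retrieved in Step 1:

```python
from openai import OpenAI

client = OpenAI(
    base_url=IONOS_API_URL,  # IONOS OpenAI-compatible endpoint from Step 1
    api_key=API_TOKEN,
)

messages = [{"role": "user", "content": "What is the weather like in Berlin?"}]

response = client.chat.completions.create(
    model="<model-name-from-step-1>",  # placeholder: use a tool-capable model
    messages=messages,
    tools=tools,  # tool definitions from Step 3
)

# If the model decides a tool is needed, the reply carries tool_calls
# instead of plain text content.
print(response.choices[0].message.tool_calls)
```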

The model will decide whether to call the tool based on the user query. For example, if the user asks about Berlin’s weather, the model might infer the unit as Celsius without the user specifying it explicitly.

Step 5: Process tool call response and execute tool

Once the LLM responds with a tool call, your application must execute the corresponding function and return the result to the model, which then formulates the final answer for the user.

Tool call responses follow a predictable structure, making it easy to extract arguments and determine the function to execute. However, response behavior can vary based on the model, prompt structure, and tool definition quality.
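A sketch of the handling step, continuing from the previous steps: it extracts the function name and arguments, runs the mock get_weather, and sends the result back so the model can phrase the final answer:

```python
import json

message = response.choices[0].message

if message.tool_calls:
    # Keep the assistant's tool-call message in the history, then append
    # one tool result per call.
    messages.append(message)
    for tool_call in message.tool_calls:
        args = json.loads(tool_call.function.arguments)
        if tool_call.function.name == "get_weather":
            result = get_weather(**args)
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": json.dumps(result),
            })

    # Second request: the model turns the tool output into a user-facing answer.
    final = client.chat.completions.create(
        model="<model-name-from-step-1>",  # same placeholder model as before
        messages=messages,
    )
    print(final.choices[0].message.content)
```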

You can also guide model behavior more precisely by:

  • Using detailed system prompts

  • Providing complete and clear tool descriptions

  • Adjusting the temperature or other generation parameters

In real-world use cases, always validate parameters and add safety checks before executing external actions based on user prompts.
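As an illustration, a hypothetical validation wrapper around the mock function might look like this:

```python
ALLOWED_UNITS = {"celsius", "fahrenheit"}

def safe_get_weather(args: dict) -> dict:
    # Reject missing or malformed arguments before executing anything.
    if not isinstance(args.get("location"), str) or not args["location"].strip():
        raise ValueError("Missing or invalid 'location' argument")
    unit = args.get("unit", "celsius")
    if unit not in ALLOWED_UNITS:
        raise ValueError(f"Unsupported unit: {unit!r}")
    return get_weather(location=args["location"], unit=unit)
```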

Summary

In this guide, you learned how to:

  1. Create a mock weather function simulating a real API call

  2. Derive a tool definition based on the function

  3. Prompt the LLM with a user query and tool definition

  4. Parse the model’s response to extract tool call details

  5. Execute the tool and return structured output

Tool calling enables your application to integrate LLMs with external data sources, APIs, or internal logic. This makes the LLM more powerful, responsive, and capable of handling real-time or dynamic user scenarios.

For more information about other AI capabilities, see our documentation on text generation and image generation.
