Tool Integration
The IONOS AI Model Hub provides an OpenAI-compatible API, allowing seamless integration with various frontend tools that use Large Language Models (LLMs). This guide walks you through the setup process, using AnythingLLM as an example tool.
By the end of this tutorial, you will be able to configure AnythingLLM to use the IONOS AI Model Hub as its backend for AI-powered responses.
Step 1: Get an Authentication Token
You will need an authentication token to access the IONOS AI Model Hub. For more information about how to generate your token in the IONOS DCD, see Generate authentication token.
Save this token in a secure place, as you’ll need to enter it into AnythingLLM during setup.
Step 2: Select a Language Model
The IONOS AI Model Hub offers a variety of Large Language Models to suit different needs. Choose the model that best fits your use case from the table below:
Foundation Model | Model Name | Purpose |
---|---|---|
Llama 3.1 Instruct, 8B | meta-llama/Meta-Llama-3.1-8B-Instruct | Suitable for general-purpose dialogue and language tasks. |
Llama 3.1 Instruct, 70B | | Ideal for more complex conversational agents and virtual assistants. |
Llama 3.1 Instruct, 405B | | Optimized for extensive dialogue tasks, supporting large context windows. |
Mistral Instruct v0.3, 7B | | Designed for conversational agents, with enhanced European language support. |
Mixtral, 8x7B | | Supports multilingual interactions and is optimized for diverse contexts. |
During setup, you’ll enter the model’s "Model Name" value into AnythingLLM’s configuration. You can also list the exact model identifiers available to your account directly from the API, as shown in the sketch after Step 3.
Step 3: Obtain the Base URL
To connect to the IONOS AI Model Hub, use the following Base URL for the OpenAI-compatible API:
https://openai.inference.de-txl.ionos.com/v1
You will enter this URL in the configuration settings of AnythingLLM.
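If you want to confirm that your token and the Base URL work together before configuring AnythingLLM, you can list the available models directly. The following is a minimal sketch, assuming the endpoint follows the standard OpenAI /v1/models convention; the environment variable name IONOS_API_TOKEN is only a placeholder for the token from Step 1:

```python
import os

import requests

# Base URL of the IONOS OpenAI-compatible API (see above).
BASE_URL = "https://openai.inference.de-txl.ionos.com/v1"

# The authentication token from Step 1; IONOS_API_TOKEN is a placeholder name.
token = os.environ["IONOS_API_TOKEN"]

# OpenAI-style model listing: GET /v1/models with a Bearer token.
response = requests.get(
    f"{BASE_URL}/models",
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()

# Print the model identifiers; these are the values used as "Model Name".
for model in response.json().get("data", []):
    print(model["id"])
```

If the call succeeds and prints a list of model identifiers, both your token and the Base URL are valid.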
Step 4: Configure AnythingLLM
With your authentication token, selected model name, and base URL in hand, you’re ready to set up AnythingLLM:
Open AnythingLLM and go to the configuration page for the Large Language Model (LLM) settings.
In AnythingLLM, this can be accessed by clicking the wrench icon in the lower left corner, then navigating to AI Providers -> LLM.
Choose Generic OpenAI as the provider.
Enter the following information in the respective fields:
- API Key: Your IONOS authentication token.
- Model Name: The name of the model you selected from the table (e.g., meta-llama/Meta-Llama-3.1-8B-Instruct).
- Base URL: https://openai.inference.de-txl.ionos.com/v1
Click Save Changes to apply the settings.
From now on, AnythingLLM will use the IONOS AI Model Hub as its backend, enabling AI-powered functionality based on your chosen Large Language Model.
Summary
This guide provides a straightforward path for integrating the IONOS AI Model Hub into third-party frontend tools using the OpenAI-compatible API. For other tools and more advanced configurations, the steps will be similar: generate an API key, select a model, and configure the tool’s API settings.
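The same three values also work with any OpenAI-compatible client library. As an illustration (not part of the AnythingLLM setup), here is a minimal sketch using the official openai Python package; the example prompt and the IONOS_API_TOKEN variable name are placeholders:

```python
import os

from openai import OpenAI

# The same three settings entered in AnythingLLM:
# Base URL, API key (authentication token), and model name.
client = OpenAI(
    base_url="https://openai.inference.de-txl.ionos.com/v1",
    api_key=os.environ["IONOS_API_TOKEN"],  # placeholder variable name
)

# Standard OpenAI chat-completions request against the IONOS AI Model Hub.
completion = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Hello! What can you do?"}],
)

print(completion.choices[0].message.content)
```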