AI Model Hub for free: From December 1, 2024 until March 31, 2025, IONOS offers all foundation models of the AI Model Hub for free. Create your contract now and get your AI journey started today!
The IONOS AI Model Hub offers powerful AI capabilities to meet various needs. Here are the key use cases you can implement with this service:
The IONOS AI Model Hub uses authentication tokens to ensure that only users with corresponding permissions can make use of it.
Key Features:
Authentication tokens are bound to IONOS Public Cloud users.
Usage is billed via the IONOS Public Cloud contract owner responsible for these users.
The Access Management tutorial includes step-by-step instructions for generating an IONOS Public Cloud contract, users, and authentication tokens.
Text generation models enable advanced language processing tasks, such as content creation, summarization, conversational responses, and question-answering. These models are pre-trained on extensive datasets, enabling high-quality text generation with minimal setup.
Key Features:
Access open-source Large Language Models (LLMs) via an OpenAI-compatible API.
Ensure data privacy with processing confined within Germany.
For step-by-step instructions on text generation, see the Text Generation tutorial.
Image generation models allow you to create high-quality, detailed images from descriptive text prompts. These models can be used for applications in creative design, marketing visuals, and more.
Key Features:
Generate photorealistic or stylized images based on specific prompts.
Choose from models optimized for authenticity or creative and artistic outputs.
To learn how to implement image generation, see the Image Generation tutorial.
Embedding models allow you to create numerical representations of texts, which are similar if the texts are semantically similar. These models are ideal for applications like text retrieval, comparison, ranking, etc.
Key Features:
Identify texts that answer a query based on semantic similarity between query and potential answer.
Compare texts to determine their semantic closeness or difference.
To learn how to derive embeddings and calculate similarity of texts, see the Text Embeddings tutorial.
Vector databases enable you to store and query large collections of documents based on semantic similarity. Converting documents into embeddings allows you to perform effective similarity searches, making it ideal for applications like document retrieval and recommendation systems.
Key Features:
Persist documents and search for semantically similar content.
Manage document collections through simple API endpoints.
For detailed instructions, see Document Collections tutorial.
RAG combines the strengths of foundation models and vector databases. It retrieves the most relevant documents from the database and uses them to augment the output of a foundation model. This approach enriches the responses, making them more accurate and context-aware.
Key Features:
Combine foundation models with additional context retrieved from document collections.
Enhance response accuracy and relevance for user queries.
To learn how to implement Retrieval Augmented Generation, see the Retrieval Augmented Generation tutorial.
The IONOS AI Model Hub can be seamlessly integrated into various frontend tools that use Large Language Models or text-to-image models through its OpenAI-compatible API. This integration allows you to leverage foundation models in applications without complex setups. For example, using the tool AnythingLLM, you can configure and connect to the IONOS AI Model Hub to serve as the backend for Large Language Model functionalities.
Key Features:
Easily connect to third-party tools with the OpenAI-compatible API.
Enable custom applications with IONOS-hosted foundation models.
For detailed guidance on integrating with tools, see the Tool Integration tutorial.
These tutorials will guide you through each use case, providing clear and actionable steps to integrate advanced AI capabilities into your applications using the IONOS AI Model Hub.
The IONOS AI Model Hub API allows you to access vector databases to persist your document collections and find semantically similar documents.
The vector database is used to persist documents in document collections. Each document is any form of pure text. In the document collection not only the input text is persisted, but also a transformation of the input text into an embedding. Each embedding is a vector of numbers. Input texts which are semantically similar have similar embeddings. A similarity search on a document collection finds the most similar embeddings for a given input text. These embeddings and the corresponding input text are returned to the user.
This tutorial is intended for developers. It assumes you have basic knowledge of:
REST APIs and how to call them
A programming language to handle REST API endpoints (for illustration purposes, the tutorials use Python and Bash scripting)
By the end of this tutorial, you'll be able to:
Create, delete and query a document collection in the IONOS vector database
Save, delete and modify documents in the document collection and
Answer customer queries using the document collection.
The IONOS AI Model Hub API offers a vector database that you can use to persist text in document collections without having to manage corresponding hardware yourself.
Our AI Model Hub API provides all required functionality without your data being transferred out of Germany.
To get started, you should open your IDE to enter Python code.
Next generate a header document to authenticate yourself against the endpoints of our REST API:
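For example, a minimal sketch of such a header in Python, assuming your authentication token is stored in an environment variable named IONOS_API_TOKEN (the variable name is illustrative, not prescribed by the API):

```python
import os

# Authentication header for all subsequent requests.
# IONOS_API_TOKEN is an illustrative environment variable name.
header = {
    "Authorization": f"Bearer {os.environ['IONOS_API_TOKEN']}",
    "Content-Type": "application/json",
}
```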
After this step, you have one variable header you can use to access our vector database.
To get started, you should open a terminal and ensure that curl and jq are installed. While curl is essential for communicating with our API service, we use jq throughout our examples to improve the readability of the API results.
In this section you learn how to create a document collection. We will use this document collection to fill it with the data from your knowledge base in the next step.
To verify that nothing went wrong, this section also shows how to:
List existing document collections
Remove document collections
Get meta data of a document collection
To create a document collection, you have to specify the name of the collection and a description and invoke the endpoint to generate document collections:
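The following sketch shows what such a request could look like in Python. The base URL and the body shape (a properties object with name and description) are assumptions for illustration, inferred from the response format described below; consult the API reference for the authoritative details:

```python
import requests

# Assumed base URL of the AI Model Hub API; verify it against the API reference.
API_BASE = "https://inference.de-txl.ionos.com"

body = {
    "properties": {
        "name": "my-knowledge-base",                       # illustrative collection name
        "description": "Documents for my knowledge base",  # illustrative description
    }
}
response = requests.post(f"{API_BASE}/collections", json=body, headers=header)
print(response.status_code)              # 201 on successful creation
collection_id = response.json()["id"]    # identifier used in the following steps
```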
If the creation of the document collection was successful, the request returns status code 201 and a JSON document with all relevant information concerning the document collection.
To modify the document collection, you need its identifier. You can extract it from the field id of the returned JSON document.
To ensure that the previous step went as expected, you can list the existing document collections.
To retrieve a list of all document collections saved by you:
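Under the same assumptions as above, such a request might look like this:

```python
response = requests.get(f"{API_BASE}/collections", headers=header)
# The field items is an empty list if you have not created a collection yet.
for collection in response.json()["items"]:
    print(collection["id"],
          collection["properties"]["description"],
          collection["properties"]["documentsCount"])
```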
This query returns a JSON document consisting of your document collections and corresponding meta information.
The result consists of 8 attributes per collection of which 3 are relevant for you:
id: The identifier of the document collection
properties.description: The textual description of the document collection
properties.documentsCount: The number of documents persisted in the document collection
If you have not created a collection yet, the field items is an empty list.
If the list of document collections consists of document collections you do not need anymore, you can remove a document collection by invoking:
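For example (same assumptions as above):

```python
response = requests.delete(f"{API_BASE}/collections/{collection_id}", headers=header)
print(response.status_code)   # 204 if the deletion was successful, 404 if the collection did not exist
```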
This query returns a status code which indicates whether the deletion was successful:
204: Status code for successful deletion
404: Status code if the collection did not exist
If you are interested in the meta data of a collection, you can extract it by invoking:
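A possible sketch, under the same assumptions as above:

```python
response = requests.get(f"{API_BASE}/collections/{collection_id}", headers=header)
print(response.status_code)   # 200 if the collection exists, 404 otherwise
print(response.json())        # metadata of the document collection
```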
This query returns a status code which indicates whether the collection exists:
200: Status code if the collection exists
404: Status code if the collection does not exist
The body of the response contains all metadata of the document collection.
In this section, you learn how to add documents to the newly created document collection. To validate your insertion, this section also shows how to
List the documents in the document collection,
Get meta data for a document,
Update an existing document and
Prune a document collection.
To add an entry to the document collection, you need to at least specify the content, the name of the content and the contentType:
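A possible sketch in Python is shown below; the HTTP method and the body shape (items/properties) are assumptions inferred from the response format described in this tutorial, so verify them against the API reference:

```python
import base64

text = "The IONOS AI Model Hub processes all data within Germany."
encoded_content = base64.b64encode(text.encode("utf-8")).decode("utf-8")  # base64-encode the content

body = {
    "items": [
        {
            "properties": {
                "name": "example-document",     # name of the content
                "contentType": "text/plain",    # type of the content
                "content": encoded_content,     # base64-encoded content
            }
        }
    ]
}
response = requests.put(f"{API_BASE}/collections/{collection_id}/documents",
                        json=body, headers=header)
print(response.status_code)   # 200 if the document was added successfully
```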
Note:
You need to encode your content using base64 before adding it to the document collection (see the base64 encoding step in the example above).
This request returns a status code 200 if adding the document to the document collection was successful.
To ensure that the previous step went as expected, you can list the existing documents of your document collection.
To retrieve a list of all documents in the document collection saved by you:
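Under the same assumptions as above:

```python
response = requests.get(f"{API_BASE}/collections/{collection_id}/documents", headers=header)
for document in response.json()["items"]:
    print(document["id"],
          document["properties"]["name"],
          document["properties"]["labels"]["number_of_tokens"])
```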
This query returns a JSON document consisting of your documents in the document collection and corresponding meta information.
The result has a field items with all documents in the collection. This field consists of 10 attributes per entry of which 5 are relevant for you:
id: The identifier of the document
properties.content: The base64 encoded content of the document
properties.name: The name of the document
properties.description: The description of the document
properties.labels.number_of_tokens: The number of tokens in the document
If you have not created the collection yet, the request will return a status code 404. It will return a JSON document with the field items set to an empty list if no documents were added yet.
If you are interested in the metadata of a document, you can extract it by invoking:
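For example, taking the identifier of one document from the listing above (same assumptions as before):

```python
document_id = response.json()["items"][0]["id"]   # illustrative: first document from the listing

response = requests.get(f"{API_BASE}/collections/{collection_id}/documents/{document_id}",
                        headers=header)
print(response.status_code)   # 200 if the document exists, 404 otherwise
print(response.json())        # metadata of the document
```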
This query returns a status code which indicates whether the document exists:
200: Status code if the document exists
404: Status code if the document does not exist
The body of the response contains all metadata of the document.
If you want to update a document, invoke:
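A possible sketch, reusing collection_id and document_id from the previous steps (body shape assumed as before):

```python
body = {
    "properties": {
        "name": "example-document",
        "contentType": "text/plain",
        "content": base64.b64encode(b"Updated document content.").decode("utf-8"),
    }
}
response = requests.put(f"{API_BASE}/collections/{collection_id}/documents/{document_id}",
                        json=body, headers=header)
print(response.status_code)
```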
This will replace the existing entry with the given id in the document collection with the payload of this request.
If you want to remove all documents from a document collection invoke:
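For example (same assumptions as above):

```python
response = requests.delete(f"{API_BASE}/collections/{collection_id}/documents", headers=header)
print(response.status_code)   # 204 if pruning was successful
```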
This query returns the status code 204 if pruning the document collection was successful.
Finally, this section shows how to use the document collection and the contained documents to answer a user query.
To retrieve the documents relevant for answering the user query, invoke the query endpoint as follows:
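A possible sketch; the field names in the request body (query, limit) are assumptions for illustration and should be checked against the API reference:

```python
NUM_OF_DOCUMENTS = 3   # number of most relevant documents to return

body = {
    "query": "Where is my data processed?",   # the user query
    "limit": NUM_OF_DOCUMENTS,
}
response = requests.post(f"{API_BASE}/collections/{collection_id}/query",
                         json=body, headers=header)
print(response.json())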
This will return a list of the NUM_OF_DOCUMENTS most relevant documents in your document collection for answering the user query.
In this tutorial you learned how to use the IONOS AI Model Hub API to conduct semantic similarity searches using our vector database.
Namely, you learned how to:
Create a document collection in the vector database and modify it
Insert your documents into the document collection and modify the documents
Conduct semantic similarity searches using your document collection.
The IONOS AI Model Hub grants access to users with authentication tokens. An authentication token is a unique string assigned to a specific user. Do not share your authentication tokens with others; each authentication token grants access to modifying or using the corresponding IONOS solutions.
The IONOS authentication tokens used for the AI Model Hub are bound to users of our public cloud offering. The central advantage of this approach is that existing users can use their authentication tokens to access the AI Model Hub with no or only minor changes. However, new users interested solely in the AI Model Hub must first create a public cloud contract.
This tutorial helps both new and existing users to get access to the AI Model Hub.
This tutorial is intended for users without prior programming knowledge.
By the end of this tutorial, you will be able to:
Create a new contract for the IONOS Public Cloud offering.
Add and edit users to gain access to the AI Model Hub.
Generate an authentication token for an existing user.
If you have an existing contract with IONOS Public Cloud, you can directly log into the DCD and proceed with adding and editing users to gain access to the AI Model Hub.
To create a public cloud contract, proceed as follows:
Open the IONOS signup page.
Select the Country you are living in.
Enter your Email address and a Password you want to use.
Accept the Pricing and Terms and Conditions by marking the corresponding checkbox.
Click Test now.
The page with the details will look similar to this:
You are now informed that you will receive an email from IONOS. Remember to click on the hyperlink specified in this email to validate your email address.
After confirming the email address is valid, enter the following details:
Enter your First Name and Last Name.
Enter your Phone Number.
Click Test now.
Your IONOS Public Cloud contract is now created and can be used to log in.
After logging in with the new contract for the first time, you can neither create new users nor generate authentication tokens. To do this, you have to "activate unlimited access" by entering your contact data:
Click Get full access on the top of the screen.
Enter your Street address, ZIP and the City you live in.
Click Save and Continue.
After entering the contact data, please specify your payment details:
Select whether to specify your Credit Card data or SEPA Direct Debit data.
Enter the relevant information.
Click Save and Continue.
We now manually check whether the data you provided is correct. After this evaluation, we inform you and activate unlimited access. This process can take up to 24 hours.
Every contract owner has sufficient rights to access the AI Model Hub. In addition, the contract owner can use every API endpoint of the IONOS Public Cloud offering. That is, the contract owner can create new users, set up infrastructure in the IONOS cloud, and configure existing setups.
We therefore suggest that you create a new user to be specifically used for the AI Model Hub and grant this user only the rights they will need.
You can create a corresponding user in a few simple steps.
You first need to log into the Data Center Designer (DCD), our frontend for the Public Cloud offering:
Open the URL https://dcd.ionos.com in a browser.
Enter the Email address and the Password you specified when creating your Public Cloud contract.
Click Sign in.
The filled screen is similar to:
Next open the IONOS User Manager by clicking "Management" -> "Users & Groups".
In the User Manager, create a new user group:
Click Groups -> + Create.
Enter a name for your user group in the field Group Name.
Click Create.
You have now created a new user group without any rights.
To grant your new user group the necessary access rights:
Select your user group in the left part of the screen.
Your Group Name is now displayed in the upper right area of the screen.
Scroll to the end of the list in the lower right part of the screen.
Select the checkbox for Access and manage AI Model Hub.
Your user group is now granted access to the AI Model Hub.
Next, create a user and add this user to the newly created user group:
Click Users -> + Create.
Enter First Name, Last Name, Email and Password for this user.
Click Create.
The new user is now created.
To add the new user to the user group with access rights to the AI Model Hub:
Click Users in the IONOS User Manager.
Select the newly generated User in the left part of the screen.
Select Groups in the right area of the screen.
Click + Add to Group.
Click on the name of the newly created group.
Your user is now in the user group of the AI Model Hub and can access the corresponding service.
To generate an authentication token, log into the Data Center Designer (DCD), our frontend for the Public Cloud offering, with the user for whom you want to create the authentication token.
In our example, log out of the contract owner’s account and log in with the newly created user account to proceed with token generation.
After logging in, create a token as follows:
Click Management -> Token Management.
Select the Time to Live (TTL). This is the duration for which the authentication token is valid.
Click Generate Token.
On the next screen you are shown the authentication token. Click "Download" to save this token locally or mark and copy it.
Note:
You need to copy your authentication token in this step. It will not be displayed again in any dialog, and there is no way to recover the authentication token if you miss copying it here.
In this tutorial, you learned how to create an authentication token for the IONOS AI Model Hub.
Namely, you learned how to:
Create a new contract for the IONOS Public Cloud.
Create a new user with the access rights to use the AI Model Hub.
Create an authentication token to use with the API of the AI Model Hub.
The IONOS AI Model Hub offers an OpenAI-compatible API that enables powerful text generation capabilities through foundation models. These Large Language Models (LLMs) can perform a wide variety of tasks, such as generating conversational responses, summaries, and contextual answers, without requiring you to manage hardware or extensive infrastructure.
The following models are currently available for text generation, each suited to different applications:
Llama 3.1 Instruct (8B, 70B and 405B)
Ideal for dialogue use cases and natural language tasks: conversational agents, virtual assistants, and chatbots.
Code Llama Instruct HF (13B)
Focuses on generating and understanding computer code across different programming languages.
Mistral Instruct v0.3 (7B), Mixtral (8x7B)
Ideal for conversational agents, virtual assistants, and chatbots; compared to Llama 3, better with European languages and supports a longer context length.
In this tutorial, you will learn how to generate text using foundation models via the IONOS API. This tutorial is intended for developers with basic knowledge of:
REST APIs
A programming language for handling REST API endpoints (Python and Bash examples are provided)
By the end, you will be able to:
Retrieve a list of text generation models available in the IONOS AI Model Hub.
Apply prompts to these models to generate text responses, supporting applications like virtual assistants and content creation.
To use text generation models, first set up your environment and authenticate using the OpenAI-compatible API endpoints.
Fetch a list of models to see which are available for your use case:
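One possible sketch in Python using the requests library; the environment variable name is illustrative, and the base URL is the OpenAI-compatible endpoint also used in the tool integration section:

```python
import os
import requests

OPENAI_BASE = "https://openai.inference.de-txl.ionos.com/v1"
header = {
    "Authorization": f"Bearer {os.environ['IONOS_API_TOKEN']}",  # illustrative variable name
    "Content-Type": "application/json",
}

response = requests.get(f"{OPENAI_BASE}/models", headers=header)
for model in response.json()["data"]:
    print(model["id"])
```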
This query returns a JSON document listing each model's name, which you’ll use to specify a model for text generation in later steps.
To generate text, send a prompt to the chat/completions endpoint.
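For example, a sketch using the setup from the previous step; the model name shown is one of the names from the table in the tool integration section, and any model returned by the listing above works:

```python
body = {
    "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",   # any model name from the previous step
    "messages": [
        {"role": "user", "content": "Write a two-sentence summary of what a vector database is."}
    ],
}
response = requests.post(f"{OPENAI_BASE}/chat/completions", json=body, headers=header)
result = response.json()

print(result["choices"][0]["message"]["content"])   # the generated text
print(result["usage"]["prompt_tokens"], result["usage"]["completion_tokens"])
```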
The returned JSON includes several key fields, most importantly:
choices.[].message.content: The generated text based on your prompt.
usage.prompt_tokens: Token count for the input prompt.
usage.completion_tokens: Token count for the generated output.
In this tutorial, you learned how to:
Access available text generation models.
Use prompts to generate text responses, ideal for applications such as conversational agents, content creation, and more.
For information on image generation, refer to our dedicated tutorial on text-to-image models.
The IONOS AI Model Hub provides an OpenAI-compatible API that enables embedding generation for text input using state-of-the-art embedding models. Embeddings are multi-dimensional vectors, i.e. lists of numerical values. The more semantically similar the text input, the more similar the embeddings.
The following models are currently available for embedding generation in the IONOS AI Model Hub, each suited for different use cases:
Paraphrase Multilingual MPNet base v2
Transformer model supporting several different languages with high performance and short input length (128 tokens).
BAAI Large EN V1.5
Embedding model specifically for English, with medium-sized inputs (512 tokens).
BAAI M3
Multipurpose embedding model for multilingual text (100 working languages) and large documents (8192 tokens).
Identifying semantically similar concepts is a trivial task for humans. Think, for example, of the following three texts:
"AI Model Hub"
"Micheal Jackson"
"best selling music artists"
For a human, it would be trivial to decide that "Michael Jackson" and "best selling music artists" are semantically similar, while "AI Model Hub" is not.
Embeddings, a central concept in modern foundation models, offer an effective solution for identifying semantically similar texts. An embedding is a numerical vector of the text with one central property: Semantically similar texts have similar vectors.
In this sense, the three texts above could be transformed into the following embeddings:
"AI Model Hub": (0.10; 0.10)
"Michael Jackson": (0.95; 0.90)
"best selling music artists": (0.96; 0.87)
Embedding vectors typically have dozens to thousands of dimensions, but for simplicity, we use 2D vectors in this example. One could illustrate these embeddings in a chart as follows:
As you can see, the texts "Michael Jackson" and "best selling music artists" are close to each other, while "AI Model Hub" is not.
Embedding models transform texts into such embeddings. They are available for texts in a single language, multiple languages, images, spoken language, and more. The IONOS AI Model Hub currently supports embedding models for texts in English as well as models for multiple languages.
In this tutorial, you will learn how to generate embeddings via the OpenAI-compatible API. This tutorial is intended for developers with basic knowledge of:
REST APIs
A programming language for handling REST API endpoints (Python and Bash examples are provided)
By the end, you will be able to:
Retrieve a list of available embedding models in the IONOS AI Model Hub.
Use the API to generate embeddings with these models.
Use the generated embeddings as input to calculate similarity scores.
To use embedding models, first set up your environment and authenticate using the OpenAI-compatible API endpoints.
Fetch a list of embedding models to see which models are available for your use case:
This query returns a JSON document listing each model's name, which you’ll use to specify a model for embedding generation in later steps.
To generate an embedding, send the text to the /embeddings endpoint.
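A possible sketch in Python; the environment variable name and the model name are illustrative, so use a model name returned by the listing above:

```python
import os
import requests

OPENAI_BASE = "https://openai.inference.de-txl.ionos.com/v1"
header = {
    "Authorization": f"Bearer {os.environ['IONOS_API_TOKEN']}",  # illustrative variable name
    "Content-Type": "application/json",
}

body = {
    "model": "BAAI/bge-m3",                    # illustrative; use a name from the model listing
    "input": "best selling music artists",
}
response = requests.post(f"{OPENAI_BASE}/embeddings", json=body, headers=header)
embedding = response.json()["data"][0]["embedding"]
print(len(embedding))   # dimensionality of the embedding vector
```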
The returned JSON includes several key fields, most importantly:
data.[..].embedding: The generated embedding as a vector of numeric values.
usage.prompt_tokens: Token count for the input prompt.
usage.total_tokens: Token count for the entire process.
Using Python, you can calculate the similarity of two results:
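A common choice is the cosine similarity, sketched here with numpy; embedding_1 and embedding_2 stand for the results of two /embeddings calls like the one above:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity of two embedding vectors (1.0 means identical direction)."""
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# embedding_1 and embedding_2: embeddings obtained from two calls as shown above
similarity = cosine_similarity(embedding_1, embedding_2)
print(similarity)
```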
The Embeddings API uses standard HTTP error codes to indicate the outcome of a request. The error codes and their descriptions are listed below:
200 OK: The request was successful.
401 Unauthorized: The request was unauthorized.
404 Not Found: The requested resource was not found.
500 Internal Server Error: An internal server error occurred.
In this tutorial, you learned how to:
Access available embedding models.
Generate embeddings with these models.
Calculate similarity scores using the numpy library.
For information on how to use embeddings in document collections, refer to our dedicated tutorial on Document Collections.
The IONOS AI Model Hub provides an OpenAI-compatible API that enables high-quality image generation using state-of-the-art foundation models. By inputting descriptive prompts, users can create detailed images directly through the API, without the need for managing underlying hardware or infrastructure.
The following models are currently available for image generation, each suited to different types of visual outputs:
Stable Diffusion XL
Generates photorealistic images, ideal for marketing visuals, product mockups, and natural scenes.
FLUX.1-schnell
Generates artistic, stylized images, well-suited for creative projects, digital art, and unique concept designs.
In this tutorial, you will learn how to generate images using foundation models via the IONOS API. This tutorial is intended for developers with basic knowledge of:
REST APIs
A programming language for handling REST API endpoints (Python and Bash examples are provided)
By the end, you will be able to:
Retrieve a list of available image generation models in the IONOS AI Model Hub.
Use prompts to generate images with these models.
To use image generation models, first set up your environment and authenticate using the OpenAI-compatible API endpoints.
Fetch a list of models to see which are available for your use case:
This query returns a JSON document listing each model's name, which you’ll use to specify a model for image generation in later steps.
To generate an image, send a prompt to the /images/generations endpoint. Customize parameters like size for the resolution of the output image.
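A possible sketch in Python; the environment variable name and the model name are illustrative, so use a model name returned by the listing above:

```python
import base64
import os
import requests

OPENAI_BASE = "https://openai.inference.de-txl.ionos.com/v1"
header = {
    "Authorization": f"Bearer {os.environ['IONOS_API_TOKEN']}",  # illustrative variable name
    "Content-Type": "application/json",
}

body = {
    "model": "black-forest-labs/FLUX.1-schnell",   # illustrative; use a name from the model listing
    "prompt": "A photorealistic lighthouse on a rocky coast at sunset",
    "size": "1024x1024",                           # resolution of the output image
}
response = requests.post(f"{OPENAI_BASE}/images/generations", json=body, headers=header)
image_b64 = response.json()["data"][0]["b64_json"]

with open("generated_image.png", "wb") as f:
    f.write(base64.b64decode(image_b64))
```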
The returned JSON includes several key fields, most importantly:
data.[].b64_json: The generated image in base64 format.
usage.prompt_tokens: Token count for the input prompt.
usage.total_tokens: Token count for the entire process (usually zero for image generation, as billing is per image).
In this tutorial, you learned how to:
Access available image generation models.
Use descriptive prompts to generate high-quality images, ideal for applications in design, creative work, and more.
For information on text generation, refer to our dedicated tutorial on text generation models.
The models above are provided by Meta, Mistral AI, stability.ai, and Black Forest Labs (provider license links omitted).
The IONOS AI Model Hub allows you to combine Large Language Models and a vector database to implement Retrieval Augmented Generation use cases.
Retrieval Augmented Generation is an approach that allows you to teach an existing Large Language Model, such as LLama or Mistral, to answer not only based on the knowledge the model learned during training, but also based on the knowledge you specified yourself.
Retrieval Augmented Generation uses two components:
a Large Language Model (we offer corresponding models for text generation) and
a document collection in our vector database that holds your own knowledge (see Document Collections).
If one of your users queries your Retrieval Augmented Generation system, you first get the most similar documents from the corresponding document collection. Second, you ask the Large Language Model to answer the query by using both the knowledge it was trained on and the most similar documents from your document collection.
This tutorial is intended for developers. It assumes you have basic knowledge of:
REST APIs and how to call them
A programming language to handle REST API endpoints (for illustration purposes, the tutorials use Python and Bash scripting)
You should also be familiar with the IONOS AI Model Hub document collections and text generation endpoints.
By the end of this tutorial, you'll be able to: Answer customer queries using a Large Language Model which adds data from your document collections to the answers.
The IONOS AI Model Hub API offers both document collections and Large Language Models that you can use to implement retrieval augmented generation without having to manage corresponding hardware yourself.
Our AI Model Hub API provides all required functionality without your data being transferred out of Germany.
To get started, set up a document collection using Document Collections and get the identifier of this document collection.
You will need this identifier in the subsequent steps.
To get started, you should open your IDE to enter Python code.
Next generate a header document to authenticate yourself against the endpoints of our REST API:
After this step, you have one variable header you can use to access our vector database.
To get started, you should open a terminal and ensure that curl and jq are installed. While curl is essential for communicating with our API service, we use jq throughout our examples to improve the readability of the API results.
To retrieve a list of Large Language Models supported by the IONOS AI Model Hub API, enter:
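A possible sketch, reusing the header variable from the authentication step above; the base URL is an assumption for illustration and should be checked against the API reference:

```python
import requests

# Assumed base URL of the AI Model Hub API; verify it against the API reference.
API_BASE = "https://inference.de-txl.ionos.com"

# header: the authentication header created above
response = requests.get(f"{API_BASE}/models", headers=header)
for model in response.json()["items"]:
    print(model["id"], model["properties"]["name"])
```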
This query returns a JSON document consisting of all foundation models and corresponding meta information.
The JSON document contains an entry items. This is a list of all available foundation models. Of the 7 attributes per foundation model, 3 are relevant for you:
id: The identifier of the foundation model
properties.description: The textual description of the model
properties.name: The name of the model
Note:
The identifiers for the foundation models differ between our API for Retrieval Augmented Generation and the image generation and text generation endpoints compatible with OpenAI.
From the list you generated in the previous step, choose the model you want to use and note its id. You will use this id in the next step to invoke the foundation model.
This section shows how to use the document collection and the contained documents to answer a user query.
To retrieve the documents relevant to answering the user query, invoke the query endpoint as follows:
This will return a list of the NUM_OF_DOCUMENTS most relevant documents in your document collection for answering the user query.
Now, combine the user query and the result from the document collection in one prompt:
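One way to do this is sketched below. It assumes retrieved_texts already holds the decoded contents of the most similar documents returned by the query endpoint, MODEL_ID is the identifier chosen in the previous section, and the /predictions path and body shape are assumptions for illustration:

```python
# retrieved_texts: decoded contents of the documents returned by the query endpoint (assumed)
context = "\n".join(retrieved_texts)
user_query = "Where is my data processed?"

prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {user_query}"
)

# MODEL_ID: identifier of the foundation model chosen in the previous section (assumed)
body = {"properties": {"input": prompt}}   # body shape is an assumption
response = requests.post(f"{API_BASE}/models/{MODEL_ID}/predictions",
                         json=body, headers=header)
print(response.json()["properties"]["output"])   # the answer to the customer
```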
The result is a JSON document consisting of the answer to the customer and some meta information. You can access the answer in the field properties.output.
Note:
The best prompt strongly depends on the Large Language Model used. You might need to adapt your prompt to improve results.
The IONOS AI Model Hub allows you to automate the process described above: by specifying the collection ID and the collection query directly in the request to our foundation model endpoint, the endpoint first queries the document collection and returns the result in a variable which you can then use directly in your prompt. This section describes how to do this.
To implement a Retrieval Augmented Generation use case with only one prompt, you have to invoke the /predictions endpoint of the Large Language Model you want to use and send the prompt as part of the body of this query:
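A possible sketch under the same assumptions as above; the exact placeholder syntax for .context and .collection_query inside the prompt is an assumption and may differ in the actual API:

```python
body = {
    "properties": {
        # Placeholder syntax for .context and .collection_query is assumed for illustration.
        "input": (
            "Answer the question using only the context below.\n\n"
            "Context: {{.context}}\n\n"
            "Question: {{.collection_query}}"
        ),
        "collectionId": collection_id,                     # identifier of your document collection
        "collectionQuery": "Where is my data processed?",  # the user query
    }
}
response = requests.post(f"{API_BASE}/models/{MODEL_ID}/predictions",
                         json=body, headers=header)
print(response.json()["properties"]["output"])
```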
This query conducts all steps necessary to answer a user query using Retrieval Augmented Generation:
The user query (saved at collectionQuery) is sent to the collection (specified at collectionId).
The results of this query are saved in a variable .context, while the user query is saved in a variable .collection_query. You can use both variables in your prompt.
The example prompt uses the variables .context and .collection_query to answer the customer query.
Note:
The best prompt strongly depends on the Large Language Model used. You might need to adapt your prompt to improve results.
In this tutorial, you learned how to use the IONOS AI Model Hub API to implement Retrieval Augmented Generation use cases.
Namely, you learned how to: Derive answers to user queries using the content of your document collection and one of the IONOS foundation models.
The IONOS AI Model Hub provides an OpenAI-compatible API, allowing seamless integration with various frontend tools that use Large Language Models (LLMs). This guide walks you through the setup process, using AnythingLLM as an example tool.
By the end of this tutorial, you will be able to configure AnythingLLM to use the IONOS AI Model Hub as its backend for AI-powered responses.
You will need an authentication token to access the IONOS AI Model Hub. For more information about how to generate your token in the IONOS DCD, see Generate authentication token.
Save this token in a secure place, as you’ll need to enter it into AnythingLLM during setup.
The IONOS AI Model Hub offers a variety of Large Language Models to suit different needs. Choose the model that best fits your use case from the table below:
Llama 3.1 Instruct, 8B
meta-llama/Meta-Llama-3.1-8B-Instruct
Suitable for general-purpose dialogue and language tasks.
Llama 3.1 Instruct, 70B
meta-llama/Meta-Llama-3.1-70B-Instruct
Ideal for more complex conversational agents and virtual assistants.
Llama 3.1 Instruct, 405B
meta-llama/Meta-Llama-3.1-405B-Instruct-FP8
Optimized for extensive dialogue tasks, supporting large context windows.
Mistral Instruct v0.3, 7B
mistralai/Mistral-7B-Instruct-v0.3
Designed for conversational agents, with enhanced European language support.
Mixtral, 8x7B
mistralai/Mixtral-8x7B-Instruct-v0.1
Supports multilingual interactions and is optimized for diverse contexts.
During setup, you’ll enter the model’s "Model Name" value into AnythingLLM’s configuration.
For connecting to the IONOS AI Model Hub, use the following Base URL for the OpenAI-compatible API:
You will enter this URL in the configuration settings of AnythingLLM.
With your authentication token, selected model name, and base URL in hand, you’re ready to set up AnythingLLM:
Open AnythingLLM and go to the configuration page for the Large Language Model (LLM) settings.
In AnythingLLM, this can be accessed by clicking the wrench icon in the lower left corner, then navigating to AI Providers -> LLM.
Choose Generic OpenAI as the provider.
Enter the following information in the respective fields:
API Key: Your IONOS authentication token.
Model Name: The name of the model you selected from the table (e.g., meta-llama/Meta-Llama-3.1-8B-Instruct).
Base URL: https://openai.inference.de-txl.ionos.com/v1
Your screen should look similar to the image below:
Click Save Changes to apply the settings.
From now on, AnythingLLM will use the IONOS AI Model Hub as its backend, enabling AI-powered functionality based on your chosen Large Language Model.
This guide provides a straightforward path for integrating the IONOS AI Model Hub into third-party frontend tools using the OpenAI-compatible API. For other tools and more advanced configurations, the steps will be similar: generate an API key, select a model, and configure the tool’s API settings.