Foundation Models

The IONOS AI Model Hub API allows you to access foundation models, namely Large Language Models and text-to-image models. Currently, we offer the following foundation models:

| From | Foundation Model | Purpose |
| --- | --- | --- |
| Meta (Licence) | Llama 3.1 Instruct (8B and 70B) | Ideal for dialogue use cases and natural language tasks: conversational agents, virtual assistants, and chatbots. |
| Meta (Licence) | Code Llama Instruct HF (13B) | Generates different kinds of computer code and understands programming languages. |
| Mistral AI (Licence) | Mistral Instruct v0.3 (7B), Mixtral (8x7B) | Ideal for conversational agents, virtual assistants, and chatbots; compared to Llama 3, better with European languages and supports a longer context length. |
| stability.ai (Licence) | Stable Diffusion XL | Generates high-quality images from text. |

Overview

In this tutorial, you will learn how to access all foundation models hosted by IONOS. This tutorial is intended for developers. It assumes you have basic knowledge of:

  • REST APIs and how to call them

  • A programming language to handle REST API endpoints (for illustration purposes, this tutorial uses Python and Bash scripting)

By the end of this tutorial, you will be able to:

  • Get a list of all foundation models IONOS currently offers

  • Apply your prompt to one of the offered foundation models

Background

  • The IONOS AI Model Hub API is an inference service that you can use to apply deep learning foundation models without having to manage the necessary hardware yourself.

  • Our foundation model offering provides many state-of-the-art open source models that you can use without your data being transferred out of Germany.

  • The foundation models enable you to use Generative Artificial Intelligence out of the box.

Before you begin

To get started, you should open your IDE to enter Python code.

  1. Install required libraries

You need to install the module requests in your Python environment. Optionally, install pandas to format results:

!pip install requests
!pip install pandas

  2. Import required libraries

You need to import the modules requests and pandas:

import requests
import pandas as pd

After this step, you have installed and imported all Python modules needed to use the foundation model API endpoints.

Access list of foundation models

  1. Invoke endpoint to get all models

To retrieve a list of foundation models supported by the IONOS AI Model Hub API, enter:

API_TOKEN = "[YOUR API TOKEN HERE]"  # your IONOS API token as a string
header = {
    "Authorization": f"Bearer {API_TOKEN}",
    "Content-Type": "application/json"
}
result = requests.get("https://inference.de-txl.ionos.com/models", headers=header)

This query returns a JSON document consisting of all foundation models and their corresponding metadata.
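Before processing the response, it can help to verify that the call succeeded. A minimal check using the built-in helper of requests:

result.raise_for_status()  # raises an HTTPError for 4xx/5xx responses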

  2. Convert list of endpoints to a human-readable form

You can convert this JSON document to a pandas DataFrame using:

pd.json_normalize(result.json()['items'])

The JSON document consists of seven attributes per foundation model, of which three are relevant to you (extracted in the sketch after this list):

  • id: The identifier of the foundation model

  • properties.description (IONOS API only): The textual description of the model

  • properties.name (IONOS API only): The name of the model
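As an illustration, the following sketch pulls just these three fields out of the response; it assumes result still holds the response of the /models call above:

models = [
    {
        "id": item["id"],
        "name": item["properties"].get("name"),
        "description": item["properties"].get("description"),
    }
    for item in result.json()["items"]
]
for model in models:
    print(model["id"], model["name"])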

Note:

The identifiers of the foundation models differ between the IONOS API and the OpenAI API.

  3. Select the model to use

From the list you generated in the previous step, choose the model you want to use and note its id. You will use this id in the next step to query the foundation model.
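If you prefer to look up the id programmatically, a sketch like the following works; the model name used here is purely illustrative and may not match an entry in your list:

df = pd.json_normalize(result.json()["items"])
# Hypothetical model name; replace it with a name from your own list.
row = df[df["properties.name"] == "Llama 3.1 Instruct (8B)"]
MODEL_ID = row["id"].iloc[0]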

Use foundation model

  1. Apply prompt to foundation model

To use a foundation model with a prompt you wrote, you have to invoke the /predictions endpoint of the model and send the prompt as part of the body of the request:

MODEL_ID = "[YOUR MODEL ID HERE]"  # the id you selected in the previous step
QUERY = "[YOUR PROMPT HERE]"  # the prompt to send to the model

endpoint = f"https://inference.de-txl.ionos.com/models/{MODEL_ID}/predictions"
body = { 
    "properties": {
        "input": QUERY,
        "parameters": {  
            "max_length": 500,  
            "temperature": 0.1  
        }  
    }
}
result = requests.post(endpoint, json=body, headers=header)

The endpoint will return the result after applying the prompt to the foundation model.

Our Large Language Models support two parameters when querying (see the example after this list):

  • max_length (max_tokens for OpenAI compatibility) specifies the maximum length of the output generated by the Large Language Model in tokens.

  • temperature specifies the degree of creativity of the Large Language Model. It can vary between 0 and 1; lower values stand for less, higher values for more creativity.
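For illustration, here is a variant of the request above with a tighter token budget and near-deterministic sampling; the prompt text is just a hypothetical example:

body = {
    "properties": {
        "input": "Summarize what an inference service does in two sentences.",
        "parameters": {
            "max_length": 100,   # cap the answer at 100 tokens
            "temperature": 0.0   # minimal creativity, nearly deterministic
        }
    }
}
result = requests.post(endpoint, json=body, headers=header)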

  2. Extract result

The response of the endpoint consists of metadata and the output of the foundation model in one JSON object. The relevant data is stored in the field properties. You can access it using:

result.json()['properties']

The field properties in turn consists of several key-value pairs (see the sketch after this list). The most relevant are:

  • input: The prompt you specified

  • output: The output of the foundation model after applying your prompt

  • inputLengthInTokens: The length of your input in tokens

  • outputLengthInTokens: The length of your output in tokens
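A minimal sketch that prints the generated text together with the token counts:

properties = result.json()["properties"]
print(properties["output"])
print("Input tokens: ", properties["inputLengthInTokens"])
print("Output tokens:", properties["outputLengthInTokens"])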

Note:

You are billed based on the length of your input and output in tokens. That is, you can calculate the cost of each query based on the fields inputLengthInTokens and outputLengthInTokens when using the IONOS API and usage.prompt_tokens and usage.completion_tokens when using the OpenAI API.
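If you want to estimate the cost of a query, a sketch along these lines works; the per-token prices below are placeholders, not actual IONOS rates:

PRICE_PER_INPUT_TOKEN = 0.0   # placeholder: take the value from your IONOS price list
PRICE_PER_OUTPUT_TOKEN = 0.0  # placeholder: take the value from your IONOS price list

properties = result.json()["properties"]
cost = (properties["inputLengthInTokens"] * PRICE_PER_INPUT_TOKEN
        + properties["outputLengthInTokens"] * PRICE_PER_OUTPUT_TOKEN)
print(f"Estimated cost of this query: {cost}")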

Summary

In this tutorial you learned how to use the IONOS AI Model Hub API to apply your prompts to the hosted foundation models.

Namely, you learned how to:

  • Get the list of supported foundation models

  • Make predictions by sending your prompt to one of the foundation models.
