Text Generation

AI Model Hub for free: From December 1, 2024 until March 31, 2025, IONOS offers all foundation models of the AI Model Hub for free. Create your contract now and get your AI journey started today!

The IONOS AI Model Hub offers an OpenAI-compatible API that enables powerful text generation capabilities through foundation models. These Large Language Models (LLMs) can perform a wide variety of tasks, such as generating conversational responses, summaries, and contextual answers, without requiring you to manage hardware or extensive infrastructure.
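
Because the API is OpenAI-compatible, existing OpenAI tooling can often be reused by pointing it at the IONOS endpoint. The following is a minimal sketch using the official openai Python client; passing your IONOS API token as the api_key and the base_url shown below are assumptions based on the endpoints used later in this tutorial, not officially documented client settings.

# Python sketch (assumption): reuse the official openai client with the
# IONOS OpenAI-compatible endpoint
from openai import OpenAI

client = OpenAI(
    base_url="https://openai.inference.de-txl.ionos.com/v1",
    api_key="[YOUR API TOKEN HERE]",  # IONOS API token, assumed to work as api_key
)

response = client.chat.completions.create(
    model="[MODEL NAME HERE]",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)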

Supported Text Generation Models

The following models are currently available for text generation, each suited to different applications:

  • Llama 3.1 Instruct (8B, 70B, and 405B), provided by Meta (License): ideal for dialogue use cases and natural language tasks such as conversational agents, virtual assistants, and chatbots.

  • Code Llama Instruct HF (13B), provided by Meta (License): focuses on generating different kinds of computer code and understands programming languages.

  • Mistral Instruct v0.3 (7B) and Mixtral (8x7B), provided by Mistral AI (License): ideal for conversational agents, virtual assistants, and chatbots; compared to Llama 3, these models handle European languages better and support a longer context length.

Overview

In this tutorial, you will learn how to generate text using foundation models via the IONOS API. This tutorial is intended for developers with basic knowledge of:

  • REST APIs

  • A programming language for handling REST API endpoints (Python examples are provided)

By the end, you will be able to:

  1. Retrieve a list of text generation models available in the IONOS AI Model Hub.

  2. Apply prompts to these models to generate text responses, supporting applications like virtual assistants and content creation.

Getting Started with Text Generation

To use text generation models, first set up your environment and authenticate using the OpenAI-compatible API endpoints.
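
A common pattern, though not an IONOS requirement, is to keep the API token out of your source code by exporting it as an environment variable and reading it at runtime. The variable name IONOS_API_TOKEN below is simply a convention matching the examples in the following steps.

# Suggested pattern (assumption, not required by the API): read the token
# from an environment variable instead of hard-coding it
import os

IONOS_API_TOKEN = os.environ["IONOS_API_TOKEN"]  # raises KeyError if unset
header = {
    "Authorization": f"Bearer {IONOS_API_TOKEN}",
    "Content-Type": "application/json"
}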

Step 1: Retrieve Available Models

Fetch a list of models to see which are available for your use case:

# Python example to retrieve available models
import requests

IONOS_API_TOKEN = "[YOUR API TOKEN HERE]"

endpoint = "https://openai.inference.de-txl.ionos.com/v1/models"

header = {
    "Authorization": f"Bearer {IONOS_API_TOKEN}", 
    "Content-Type": "application/json"
}
# Fetch the model list and print the raw JSON response
models = requests.get(endpoint, headers=header).json()
print(models)

This query returns a JSON document listing each model's name, which you'll use to specify a model for text generation in later steps.
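
As a minimal sketch, assuming the response follows the usual OpenAI-compatible list format (a data array whose entries carry an id field), you can print just the model names from the models variable above:

# Sketch: print only the model identifiers from the response above
# (assumes the OpenAI-compatible format {"data": [{"id": "..."}, ...]})
for model in models.get("data", []):
    print(model["id"])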

Step 2: Generate Text with Your Prompt

To generate text, send a prompt to the chat/completions endpoint.

# Python example for text generation
import requests

IONOS_API_TOKEN = "[YOUR API TOKEN HERE]"
MODEL_NAME = "[MODEL NAME HERE]"
PROMPT = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ]

endpoint = "https://openai.inference.de-txl.ionos.com/v1/chat/completions"

header = {
    "Authorization": f"Bearer {IONOS_API_TOKEN}", 
    "Content-Type": "application/json"
}
body = {
    "model": MODEL_NAME,
    "messages": PROMPT,
}
# Send the prompt and print the raw JSON response
result = requests.post(endpoint, json=body, headers=header).json()
print(result)

Step 3: Extract and Interpret the Result

The returned JSON includes several key fields, most importantly:

  • choices[].message.content: The generated text based on your prompt.

  • usage.prompt_tokens: Token count for the input prompt.

  • usage.completion_tokens: Token count for the generated output.
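
As a minimal sketch, continuing from the result variable returned in Step 2 and assuming the OpenAI-compatible response layout, you can extract these fields like this:

# Sketch: extract the generated text and token usage from the Step 2 response
generated_text = result["choices"][0]["message"]["content"]
prompt_tokens = result["usage"]["prompt_tokens"]
completion_tokens = result["usage"]["completion_tokens"]

print(generated_text)
print(f"Prompt tokens: {prompt_tokens}, completion tokens: {completion_tokens}")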

Summary

In this tutorial, you learned how to:

  1. Access available text generation models.

  2. Use prompts to generate text responses, ideal for applications such as conversational agents, content creation, and more.

For information on image generation, refer to our dedicated tutorial on text-to-image models.
