# Text Generation

The IONOS AI Model Hub offers an OpenAI-compatible API that enables powerful text generation capabilities through foundation models. These Large Language Models (LLMs) can perform a wide variety of tasks, such as generating conversational responses, summaries, and contextual answers, without requiring you to manage hardware or extensive infrastructure.

## Supported Text Generation Models

All Large Language Models shown on the AI Model Hub [<mark style="color:blue;">Models</mark>](https://docs.ionos.com/sections-test/guides/ai/ai-model-hub/models) page can be used for text generation. Review the individual model cards to find the best solution for your specific application.

## Overview

In this guide, you will learn how to generate text using foundation models through the IONOS API. This guide is intended for developers with basic knowledge of:

* REST APIs
* A programming language for handling REST API endpoints (Python and Bash examples are provided)

By the end, you will be able to:

1. Retrieve a list of text generation models available in the IONOS AI Model Hub.
2. Apply prompts to these models to generate text responses, supporting applications like virtual assistants and content creation.

## Getting Started with Text Generation

To use text generation models, first set up your environment and authenticate using the OpenAI-compatible API endpoints.
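Every request in this guide authenticates with a bearer token in the `Authorization` header. As a minimal sketch (the `auth_headers` helper is illustrative, not part of the API), the shared headers can be built once and reused across all later steps:

```python
# Build the headers shared by every request in this guide.
# The token value is a placeholder; substitute your own API token.
IONOS_API_TOKEN = "[YOUR API TOKEN HERE]"

def auth_headers(token: str) -> dict:
    """Return the Authorization and Content-Type headers for the API."""
    return {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }

headers = auth_headers(IONOS_API_TOKEN)
```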

Download the code files below for ready-to-run text generation scripts and examples:

{% tabs %}
{% tab title="Python Notebook" %}
Download this Python Notebook for the complete text generation examples in this guide.

{% file src="https://1737632334-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-MifAzdGvKLDTtvJP8sm%2Fuploads%2Fgit-blob-be20853e737bf085afa6243752bbe47e4f9eecfd%2Fai-model-hub-text-generation.ipynb?alt=media" %}
{% endtab %}

{% tab title="Python Code" %}
Download this Python script for the complete text generation examples in this guide.

{% file src="https://1737632334-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-MifAzdGvKLDTtvJP8sm%2Fuploads%2Fgit-blob-41ebecce1323e3c40f03588ad2c194ab6f9b5207%2Fai-model-hub-text-generation.py?alt=media&token=eeba76eb-ba06-496b-a1e9-561347fddedb" %}
{% endtab %}

{% tab title="Bash Code" %}
Download this Bash script for the complete text generation examples in this guide.

{% file src="https://1737632334-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-MifAzdGvKLDTtvJP8sm%2Fuploads%2Fgit-blob-717c72c53d8482a8bd1d6b620a8ad91c7a3b769f%2Fai-model-hub-text-generation.sh?alt=media&token=b047635a-b5c3-4f60-b29b-a6b2f981811e" %}
{% endtab %}
{% endtabs %}

### Step 1: Retrieve Available Models

Fetch a list of models to see which are available for your use case:

{% tabs %}
{% tab title="Python" %}

```python
# Python example to retrieve available models
import requests

IONOS_API_TOKEN = "[YOUR API TOKEN HERE]"

endpoint = "https://openai.inference.de-txl.ionos.com/v1/models"

header = {
    "Authorization": f"Bearer {IONOS_API_TOKEN}", 
    "Content-Type": "application/json"
}
response = requests.get(endpoint, headers=header)
print(response.json())
```

{% endtab %}

{% tab title="Bash" %}

```bash
#!/bin/bash

IONOS_API_TOKEN=[YOUR API TOKEN HERE]

curl -H "Authorization: Bearer ${IONOS_API_TOKEN}" \
     --get https://openai.inference.de-txl.ionos.com/v1/models
```

{% endtab %}
{% endtabs %}

This query returns a JSON document listing each model's name, which you'll use to specify a model for text generation in later steps.
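The response follows the OpenAI list format, with model identifiers under `data[].id`. A sketch of extracting them, using a truncated, purely illustrative response payload:

```python
# Illustrative response in the OpenAI-compatible list format;
# a real response contains the models currently available on the hub.
models_response = {
    "object": "list",
    "data": [
        {"id": "meta-llama/Meta-Llama-3.1-8B-Instruct", "object": "model"},
        {"id": "example-provider/example-model", "object": "model"},
    ],
}

# Collect the model names to choose from in the next step.
model_names = [model["id"] for model in models_response["data"]]
print(model_names)
```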

### Step 2: Generate Text with Your Prompt

To generate text, send your prompt as a list of chat messages to the `/v1/chat/completions` endpoint.

{% tabs %}
{% tab title="Python" %}

```python
# Python example for text generation
import requests

IONOS_API_TOKEN = "[YOUR API TOKEN HERE]"
MODEL_NAME = "[MODEL NAME HERE]"
PROMPT = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ]

endpoint = "https://openai.inference.de-txl.ionos.com/v1/chat/completions"

header = {
    "Authorization": f"Bearer {IONOS_API_TOKEN}", 
    "Content-Type": "application/json"
}
body = {
    "model": MODEL_NAME,
    "messages": PROMPT,
}
response = requests.post(endpoint, json=body, headers=header)
print(response.json())
```

{% endtab %}

{% tab title="Bash" %}

```bash
#!/bin/bash

IONOS_API_TOKEN=[YOUR API TOKEN HERE]
MODEL_NAME=meta-llama/Meta-Llama-3.1-8B-Instruct
PROMPT='[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
]'

BODY="{ 
    \"model\": \"$MODEL_NAME\",
    \"messages\": $PROMPT
}"
echo "$BODY"

curl -H "Authorization: Bearer ${IONOS_API_TOKEN}" \
     -H "Content-Type: application/json" \
     -d "$BODY" \
     https://openai.inference.de-txl.ionos.com/v1/chat/completions
```

{% endtab %}
{% endtabs %}

### Step 3: Extract and Interpret the Result

The returned JSON includes several key fields, most importantly:

* **`choices[].message.content`**: The generated text based on your prompt.
* **`usage.prompt_tokens`**: Token count for the input prompt.
* **`usage.completion_tokens`**: Token count for the generated output.
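Assuming a response in the standard OpenAI chat-completions shape, the fields above can be read out like this (the sample payload and its token counts are illustrative):

```python
# Illustrative response in the OpenAI-compatible chat/completions format.
response = {
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello! How can I help you today?"},
        }
    ],
    "usage": {"prompt_tokens": 21, "completion_tokens": 9, "total_tokens": 30},
}

# Extract the generated text and the token accounting.
generated_text = response["choices"][0]["message"]["content"]
prompt_tokens = response["usage"]["prompt_tokens"]
completion_tokens = response["usage"]["completion_tokens"]

print(generated_text)
print(f"Tokens used: {prompt_tokens} in, {completion_tokens} out")
```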

## Summary

In this guide, you learned how to:

1. Access available text generation models.
2. Use prompts to generate text responses, ideal for applications such as conversational agents, content creation, and more.

For information on image generation, see the [<mark style="color:blue;">Text-to-Image</mark>](https://docs.ionos.com/sections-test/guides/ai/ai-model-hub/how-tos/image-generation) guide.
