# Text Generation

The IONOS AI Model Hub offers an OpenAI-compatible API that enables powerful text generation capabilities through foundation models. These Large Language Models (LLMs) can perform a wide variety of tasks, such as generating conversational responses, summaries, and contextual answers, without requiring you to manage hardware or extensive infrastructure.

## Supported Text Generation Models

All Large Language Models shown on the AI Model Hub [<mark style="color:blue;">Models</mark>](/cloud/ai/ai-model-hub/models.md) page can be used for text generation. Review the individual model cards to find the best solution for your specific application.

## Overview

In this guide, you will learn how to generate text using foundation models through the IONOS API. This guide is intended for developers with basic knowledge of:

* REST APIs
* A programming language for handling REST API endpoints (Python and Bash examples are provided)

By the end, you will be able to:

1. Retrieve a list of text generation models available in the IONOS AI Model Hub.
2. Apply prompts to these models to generate text responses, supporting applications like virtual assistants and content creation.

## Getting Started with Text Generation

To use text generation models, first set up your environment and authenticate using the OpenAI-compatible API endpoints.

Download the code files below for ready-to-run text generation scripts and examples:

{% tabs %}
{% tab title="Python Notebook" %}
Download this Python Notebook file to easily access text generation-specific scripts and examples and generate the intended output.

{% file src="/files/usRdBrtqkdu26ubqbqW0" %}
{% endtab %}

{% tab title="Python Code" %}
Download this Python code file to easily access text generation-specific scripts and examples and generate the intended output.

{% file src="/files/dTXIfWHpXcV4asEEGSXz" %}
{% endtab %}

{% tab title="Bash Code" %}
Download this Bash code file to easily access text generation-specific scripts and examples and generate the intended output.

{% file src="/files/9as0MIwYkn1uNE0371a0" %}
{% endtab %}
{% endtabs %}

### Step 1: Retrieve Available Models

Fetch a list of models to see which are available for your use case:

{% tabs %}
{% tab title="Python" %}

```python
# Python example to retrieve available models
import requests

IONOS_API_TOKEN = "[YOUR API TOKEN HERE]"

endpoint = "https://openai.inference.de-txl.ionos.com/v1/models"

headers = {
    "Authorization": f"Bearer {IONOS_API_TOKEN}",
    "Content-Type": "application/json"
}

response = requests.get(endpoint, headers=headers)
response.raise_for_status()  # Fail early on authentication or connection errors
print(response.json())
```

{% endtab %}

{% tab title="Bash" %}

```bash
#!/bin/bash

IONOS_API_TOKEN="[YOUR API TOKEN HERE]"

curl -H "Authorization: Bearer ${IONOS_API_TOKEN}" \
     --get https://openai.inference.de-txl.ionos.com/v1/models
```

{% endtab %}
{% endtabs %}

This query returns a JSON document listing each model's name, which you'll use to specify a model for text generation in later steps.
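Assuming the standard OpenAI list shape, the model names appear under the `data[].id` field of the parsed response. A minimal sketch of extracting them (the sample payload below is illustrative, not an actual API response):

```python
# Extract model names from a /v1/models response.
# The sample payload is illustrative; real responses include
# additional metadata per model.
sample_response = {
    "object": "list",
    "data": [
        {"id": "meta-llama/Meta-Llama-3.1-8B-Instruct", "object": "model"},
        {"id": "mistralai/Mixtral-8x7B-Instruct-v0.1", "object": "model"},
    ],
}

model_names = [model["id"] for model in sample_response["data"]]
print(model_names)
```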

### Step 2: Generate Text with Your Prompt

To generate text, send a prompt to the `chat/completions` endpoint.

{% tabs %}
{% tab title="Python" %}

```python
# Python example for text generation
import requests

IONOS_API_TOKEN = "[YOUR API TOKEN HERE]"
MODEL_NAME = "[MODEL NAME HERE]"
PROMPT = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
]

endpoint = "https://openai.inference.de-txl.ionos.com/v1/chat/completions"

headers = {
    "Authorization": f"Bearer {IONOS_API_TOKEN}",
    "Content-Type": "application/json"
}
body = {
    "model": MODEL_NAME,
    "messages": PROMPT,
}

response = requests.post(endpoint, json=body, headers=headers)
response.raise_for_status()  # Fail early on authentication or request errors
print(response.json())
```

{% endtab %}

{% tab title="Bash" %}

```bash
#!/bin/bash

IONOS_API_TOKEN="[YOUR API TOKEN HERE]"
MODEL_NAME="meta-llama/Meta-Llama-3.1-8B-Instruct"
PROMPT='[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
]'

BODY="{
    \"model\": \"$MODEL_NAME\",
    \"messages\": $PROMPT
}"
echo "$BODY"  # Optional: inspect the request body before sending

curl -H "Authorization: Bearer ${IONOS_API_TOKEN}" \
     -H "Content-Type: application/json" \
     -d "$BODY" \
     https://openai.inference.de-txl.ionos.com/v1/chat/completions
```

{% endtab %}
{% endtabs %}

### Step 3: Extract and Interpret the Result

The returned JSON includes several key fields, most importantly:

* **`choices[].message.content`**: The generated text based on your prompt.
* **`usage.prompt_tokens`**: Token count for the input prompt.
* **`usage.completion_tokens`**: Token count for the generated output.
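The fields above can be pulled out of the parsed response like this (a sketch using a hypothetical sample response; the field layout follows the OpenAI chat completion schema):

```python
# Extract the generated text and token usage from a chat/completions response.
# This sample response is illustrative; a real one comes from the API call in Step 2.
sample_response = {
    "choices": [
        {"index": 0, "message": {"role": "assistant",
                                 "content": "Hello! How can I help you today?"}}
    ],
    "usage": {"prompt_tokens": 19, "completion_tokens": 9, "total_tokens": 28},
}

generated_text = sample_response["choices"][0]["message"]["content"]
prompt_tokens = sample_response["usage"]["prompt_tokens"]
completion_tokens = sample_response["usage"]["completion_tokens"]

print(generated_text)
print(f"Tokens used: {prompt_tokens} in, {completion_tokens} out")
```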

## Summary

In this guide, you learned how to:

1. Access available text generation models.
2. Use prompts to generate text responses, ideal for applications such as conversational agents, content creation, and more.

For information on image generation, see [<mark style="color:blue;">text-to-image</mark>](/cloud/ai/ai-model-hub/how-tos/image-generation.md) models.

