# OpenCode Integration

[<mark style="color:blue;">OpenCode</mark>](https://opencode.ai) is a terminal-based AI coding assistant. Because the AI Model Hub exposes an OpenAI-compatible API, you can connect OpenCode to it using the `@ai-sdk/openai-compatible` SDK package—no custom adapter required.

By the end of this guide, you will have OpenCode configured to use the AI Model Hub as its model provider.

## Prerequisites

* An IONOS Cloud account with access to the AI Model Hub
* [<mark style="color:blue;">OpenCode</mark>](https://opencode.ai) installed on your system
* An IONOS Cloud API authentication token

## Step 1: Get an Authentication Token

You need an authentication token to access the AI Model Hub. For instructions on how to generate a token in the Data Center Designer (DCD), see [<mark style="color:blue;">Generate authentication token</mark>](/cloud/set-up-ionos-cloud/management/identity-access-management/token-manager.md).

{% hint style="info" %}
IONOS Cloud tokens are JSON Web Tokens (JWTs) with an expiration date. If your token stops working, check the `exp` claim and generate a new token if needed.
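
To inspect the `exp` claim yourself, decode the token's payload segment on the command line. A minimal sketch, assuming `base64` and `jq` are available (JWT segments are base64url-encoded, so the alphabet must be translated back and the padding restored before decoding):

```bash
# Extract the second (payload) segment and map the base64url alphabet to base64
seg=$(cut -d '.' -f2 <<< "$IONOS_API_TOKEN" | tr '_-' '/+')
# Restore the padding that base64url encoding strips
case $(( ${#seg} % 4 )) in
  2) seg="${seg}==" ;;
  3) seg="${seg}=" ;;
esac
# Decode the payload and print the expiry as a human-readable date
base64 -d <<< "$seg" | jq -r '.exp | todate'
```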
{% endhint %}

## Step 2: Set the Token as an Environment Variable

Add the token to your shell profile so it persists across terminal sessions.

{% tabs %}
{% tab title="Zsh" %}
Add the following line to your `~/.zshrc` file:

```bash
export IONOS_API_TOKEN="your-token-here"
```

Then reload the profile:

```bash
source ~/.zshrc
```

{% endtab %}

{% tab title="Bash" %}
Add the following line to your `~/.bashrc` file:

```bash
export IONOS_API_TOKEN="your-token-here"
```

Then reload the profile:

```bash
source ~/.bashrc
```

{% endtab %}
{% endtabs %}

Replace `your-token-here` with the token you generated in Step 1.

To verify that the variable is set, run:

```bash
echo $IONOS_API_TOKEN
```

You should see your token printed in the terminal.

## Step 3: Select a Language Model

The AI Model Hub offers a variety of large language models. Choose the model that best fits your use case from the AI Model Hub [<mark style="color:blue;">Models</mark>](/cloud/ai/ai-model-hub/models.md) overview.

{% hint style="info" %}
IONOS Cloud periodically adds and retires models. For the latest list, see the [<mark style="color:blue;">LLMs</mark>](/cloud/ai/ai-model-hub/models/llms.md) page or query the API directly:

```bash
curl -s -H "Authorization: Bearer $IONOS_API_TOKEN" \
  https://openai.inference.de-txl.ionos.com/v1/models | jq '.data[].id'
```

{% endhint %}

## Step 4: Configure OpenCode

Open your OpenCode configuration file at `~/.config/opencode/opencode.json` (create it if it does not exist) and add the `provider` block. If the file already exists with other configuration, merge the `provider` section into it.

### Single model

To configure a single model, use the following example:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ionos": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "IONOS Cloud AI Model Hub",
      "options": {
        "baseURL": "https://openai.inference.de-txl.ionos.com/v1",
        "apiKey": "{env:IONOS_API_TOKEN}"
      },
      "models": {
        "meta-llama/Meta-Llama-3.1-8B-Instruct": {
          "name": "Llama 3.1 8B Instruct",
          "limit": {
            "context": 131072,
            "output": 8192
          }
        }
      }
    }
  }
}
```

### All available models

To make all AI Model Hub models available in OpenCode, use the following configuration:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ionos": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "IONOS Cloud AI Model Hub",
      "options": {
        "baseURL": "https://openai.inference.de-txl.ionos.com/v1",
        "apiKey": "{env:IONOS_API_TOKEN}"
      },
      "models": {
        "openGPT-X/Teuken-7B-instruct-commercial": {
          "name": "Teuken 7B",
          "limit": {
            "context": 8192,
            "output": 2048
          }
        },
        "meta-llama/Meta-Llama-3.1-8B-Instruct": {
          "name": "Llama 3.1 8B Instruct",
          "limit": {
            "context": 131072,
            "output": 8192
          }
        },
        "mistralai/Mistral-Nemo-Instruct-2407": {
          "name": "Mistral Nemo 12B",
          "limit": {
            "context": 131072,
            "output": 8192
          }
        },
        "meta-llama/CodeLlama-13b-Instruct-hf": {
          "name": "Code Llama 13B",
          "limit": {
            "context": 16384,
            "output": 4096
          }
        },
        "mistralai/Mistral-Small-24B-Instruct": {
          "name": "Mistral Small 24B",
          "limit": {
            "context": 131072,
            "output": 8192
          }
        },
        "meta-llama/Llama-3.3-70B-Instruct": {
          "name": "Llama 3.3 70B Instruct",
          "limit": {
            "context": 131072,
            "output": 8192
          }
        },
        "openai/gpt-oss-120b": {
          "name": "GPT-OSS 120B",
          "limit": {
            "context": 131072,
            "output": 8192
          }
        },
        "meta-llama/Meta-Llama-3.1-405B-Instruct-FP8": {
          "name": "Llama 3.1 405B Instruct",
          "limit": {
            "context": 131072,
            "output": 8192
          }
        }
      }
    }
  }
}
```

### Configuration fields

| **Field**                   | **Description**                                                                    |
| --------------------------- | ---------------------------------------------------------------------------------- |
| `npm`                       | The AI SDK package. Use `@ai-sdk/openai-compatible` for any OpenAI-compatible API. |
| `name`                      | Display name shown in the OpenCode model picker.                                   |
| `options.baseURL`           | The IONOS Cloud OpenAI-compatible inference endpoint. Must end with `/v1`.         |
| `options.apiKey`            | References the `IONOS_API_TOKEN` environment variable using `{env:...}` syntax.    |
| `models.<id>`               | The model identifier. Must match the exact ID from the AI Model Hub API.           |
| `models.<id>.name`          | A human-readable display name for the model.                                       |
| `models.<id>.limit.context` | Maximum input context window in tokens.                                            |
| `models.<id>.limit.output`  | Maximum output tokens the model can generate per response.                         |
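
A malformed `opencode.json` is a common reason the provider does not appear in OpenCode. As a quick sanity check (assuming `jq` is installed), validate the file's JSON syntax before launching:

```bash
# jq exits non-zero and prints a parse error if the JSON is invalid
jq empty ~/.config/opencode/opencode.json && echo "Valid JSON"
```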

## Step 5: Select a Model in OpenCode

1. Launch OpenCode.
2. Use the `/models` command to open the model picker.
3. Your models appear under the **IONOS Cloud AI Model Hub** provider.
4. Select the model you want to use. To verify the connection end to end, see the smoke test below.
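
The smoke test sends a minimal chat completion with `curl`, independently of OpenCode, which confirms that the endpoint, token, and model ID all work together (the model ID here is taken from the single-model example in Step 4):

```bash
curl -s https://openai.inference.de-txl.ionos.com/v1/chat/completions \
  -H "Authorization: Bearer $IONOS_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
    "messages": [{"role": "user", "content": "Reply with the single word: pong"}]
  }' | jq -r '.choices[0].message.content'
```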

## Troubleshooting

### Unauthorized or 401 error

* Verify your token is set: `echo $IONOS_API_TOKEN`
* Ensure the token has not expired — IONOS Cloud JWTs have an `exp` claim.
* Generate a new token from the DCD if needed. The check below tests the token directly against the API.
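
A minimal check (assuming `curl` is installed) prints only the HTTP status code: `200` means the token is accepted, while `401` indicates it is invalid or expired:

```bash
curl -s -o /dev/null -w '%{http_code}\n' \
  -H "Authorization: Bearer $IONOS_API_TOKEN" \
  https://openai.inference.de-txl.ionos.com/v1/models
```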

### Model not found

* Model identifiers are case-sensitive and must match exactly; the search shown below helps catch capitalization mismatches.
* Run the model list query in Step 3 to confirm the identifier.
* Review the [<mark style="color:blue;">LLMs</mark>](/cloud/ai/ai-model-hub/models/llms.md) page for retirement notices.
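
For example, to search the live model list case-insensitively for a name fragment (the `llama` fragment is illustrative):

```bash
curl -s -H "Authorization: Bearer $IONOS_API_TOKEN" \
  https://openai.inference.de-txl.ionos.com/v1/models \
  | jq -r '.data[].id' | grep -i llama
```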

### Connection timeout

* Confirm the base URL is `https://openai.inference.de-txl.ionos.com/v1`.
* The URL must end with `/v1` — not `/v1/chat/completions`.
* Verify that your network allows outbound HTTPS to `openai.inference.de-txl.ionos.com`.

### Token not picked up by OpenCode

* Ensure your shell profile has been sourced: `source ~/.zshrc` or `source ~/.bashrc`. The check below confirms this.
* Restart OpenCode after changing environment variables.
* Verify the configuration uses `{env:IONOS_API_TOKEN}` and not a hardcoded token value.
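
To confirm that your profile exports the variable to new shells (rather than it being set only in your current session), echo it from a fresh interactive shell:

```bash
# An interactive shell (-i) sources ~/.zshrc or ~/.bashrc before running the command
zsh -ic 'echo $IONOS_API_TOKEN'    # for Zsh
bash -ic 'echo $IONOS_API_TOKEN'   # for Bash
```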


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.ionos.com/cloud/ai/ai-model-hub/how-tos/opencode-integration.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question, along with relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
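
For example, you can let `curl` handle the URL encoding of the question (the question text is illustrative):

```bash
# -G sends the --data-urlencode payload as a GET query string
curl -sG "https://docs.ionos.com/cloud/ai/ai-model-hub/how-tos/opencode-integration.md" \
  --data-urlencode "ask=Which models does the AI Model Hub currently offer?"
```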
