# Models Comparison

These comparison tables list all currently active foundation models in the IONOS AI Model Hub, summarizing their capabilities, modalities, and features at a glance. Click a model name to open its detailed summary and specifications.

## Features overview

### Language and Coding Models

| **Model Name**                                                                                                                   | **Type**       | **Input Modality** | **Output Modality** | **Streaming** | **Tool Calling** | **Context Window** | **Model Size** |
| -------------------------------------------------------------------------------------------------------------------------------- | -------------- | :----------------: | :-----------------: | :-----------: | :--------------: | :----------------: | :------------: |
| [<mark style="color:blue;">**Mistral Nemo 12B**</mark>](/cloud/ai/ai-model-hub/models/llms/mistral-nemo.md)                      | Language Model |        Text        |         Text        |       ✅       |         ✅        |        128k        |      Small     |
| [<mark style="color:blue;">**Llama 3.1 8B**</mark>](/cloud/ai/ai-model-hub/models/llms/meta-llama-3-1-8b.md)                     | Language Model |        Text        |         Text        |       ✅       |         ✅        |        128k        |      Small     |
| [<mark style="color:blue;">**Mistral Small 24B**</mark>](/cloud/ai/ai-model-hub/models/llms/mistral-small-24b.md)                | Language Model |     Text, Image    |         Text        |       ✅       |         ✅        |        128k        |     Medium     |
| [<mark style="color:blue;">**Llama 3.3 70B**</mark>](/cloud/ai/ai-model-hub/models/llms/meta-llama-3-3-70b.md)                   | Language Model |        Text        |         Text        |       ✅       |         ✅        |        128k        |     Medium     |
| [<mark style="color:blue;">**GPT-OSS 120B**</mark>](/cloud/ai/ai-model-hub/models/llms/openai-gpt-oss-120b.md)                   | Language Model |        Text        |         Text        |       ✅       |         ✅        |        128k        |     Medium     |
| [<mark style="color:blue;">**Llama 3.1 405B**</mark>](/cloud/ai/ai-model-hub/models/llms/meta-llama-3-1-405b.md)                 | Language Model |        Text        |         Text        |       ✅       |         ✅        |        128k        |      Large     |
| [<mark style="color:blue;">**Code Llama 13B**</mark>](/cloud/ai/ai-model-hub/models/coding-models/meta-code-llama-13b.md)        | Coding Model   |        Text        |         Text        |       ✅       |         ❌        |         16k        |     Medium     |
| [<mark style="color:blue;">**Qwen3 Coder Next 80B**</mark>](/cloud/ai/ai-model-hub/models/coding-models/qwen3-coder-next-80b.md) | Coding Model   |        Text        |         Text        |       ✅       |         ✅        |        128k        |     Medium     |

### Embedding, Image Generation, and OCR Models

| **Model Name**                                                                                                                                              | **Type**         | **Input Modality** | **Output Modality** | **Max Input Length** |
| ----------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------- | :----------------: | :-----------------: | :------------------: |
| [<mark style="color:blue;">**BGE Large v1.5**</mark>](/cloud/ai/ai-model-hub/models/embedding-models/bge-large-1-5.md)                                      | Embedding Model  |        Text        |        Vector       |      8192 tokens     |
| [<mark style="color:blue;">**BGE m3**</mark>](/cloud/ai/ai-model-hub/models/embedding-models/bge-m3.md)                                                     | Embedding Model  |        Text        |        Vector       |      8192 tokens     |
| [<mark style="color:blue;">**Paraphrase Multilingual MPNet v2**</mark>](/cloud/ai/ai-model-hub/models/embedding-models/paraphrase-multilingual-mpnet-v2.md) | Embedding Model  |        Text        |        Vector       |      128 tokens      |
| [<mark style="color:blue;">**FLUX.1-schnell**</mark>](/cloud/ai/ai-model-hub/models/image-generation-models/flux-1-schnell.md)                              | Image Generation |        Text        |        Image        |      256 tokens      |
| [<mark style="color:blue;">**LightOnOCR-2-1B**</mark>](/cloud/ai/ai-model-hub/models/ocr-models/lightonocr-2-1b.md)                                         | OCR              |        Image       |         Text        |      16k tokens      |

## Understanding model categories

The IONOS AI Model Hub offers four distinct categories of foundation models, each optimized for specific use cases:

### Large language models

#### Small models (Less than 15B parameters)

Small language models are optimized for fast inference and low resource consumption. They are ideal for real-time applications and scenarios where latency and cost are critical.

#### Medium models (15B to 150B parameters)

Medium-sized models provide a strong balance between response quality and inference speed. They are suitable for applications that demand higher accuracy and more nuanced language understanding.

#### Large models (More than 150B parameters)

Large models are designed for maximum language understanding, deep reasoning, and high-quality responses. They are best suited for advanced applications where accuracy and depth of knowledge are paramount.

### Embedding models

Embedding models convert text into dense vector representations, enabling semantic search, clustering, and similarity comparison. They are essential for building search engines, recommendation systems, and knowledge retrieval applications.
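The similarity comparison mentioned above is typically done with cosine similarity between embedding vectors. The sketch below uses toy 4-dimensional vectors purely for illustration; real embedding models return much higher-dimensional vectors (for example, on the order of 1024 dimensions), and the vector values shown here are invented, not actual model output.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot(a, b) / (|a| * |b|); closer to 1 means more similar.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" standing in for real model output.
query = [0.1, 0.9, 0.2, 0.0]
docs = {
    "doc_a": [0.1, 0.8, 0.3, 0.1],  # semantically close to the query
    "doc_b": [0.9, 0.0, 0.1, 0.4],  # unrelated content
}

# Semantic search = rank documents by similarity to the query embedding.
best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
```

In a real retrieval pipeline, you would embed both the query and each document with the same embedding model and rank candidates by this score.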

### Text-to-image models

Text-to-image models generate images from textual descriptions, supporting creative and generative AI use cases such as content creation, design, and prototyping.

### OCR models

Optical Character Recognition (OCR) models convert documents such as PDFs, scans, and images into clean, structured text. These vision-language models process visual content end-to-end, making them ideal for document digitization, data extraction, and content accessibility workflows.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.ionos.com/cloud/ai/ai-model-hub/models/models-comparison.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
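The request above can be sketched in Python using only the standard library. The documentation URL and the `ask` parameter come from this page; the helper names (`build_ask_url`, `ask_docs`) and the default timeout are illustrative choices, not part of the documented interface.

```python
import urllib.request
from urllib.parse import urlencode

# URL taken from this page; only the `ask` query parameter is documented.
DOC_URL = "https://docs.ionos.com/cloud/ai/ai-model-hub/models/models-comparison.md"

def build_ask_url(question: str) -> str:
    # URL-encode the natural-language question into the `ask` query parameter.
    return f"{DOC_URL}?{urlencode({'ask': question})}"

def ask_docs(question: str, timeout: float = 30.0) -> str:
    # Perform the HTTP GET request and return the response body as text.
    with urllib.request.urlopen(build_ask_url(question), timeout=timeout) as resp:
        return resp.read().decode("utf-8")
```

For example, `ask_docs("Which models support tool calling?")` would return an answer with relevant excerpts and sources from the documentation.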
