# BGE Large v1.5

**Summary:** BGE Large v1.5 is a high-performance English-language embedding model that maps text to 1024-dimensional vector representations, scoring 64.23 on the MTEB benchmark. It performs well in semantic search, document retrieval, similarity comparison, and clustering, making it a strong choice for search engines, recommendation systems, and knowledge management platforms where accurate English text understanding and retrieval are critical.

|                                                                       **Intelligence**                                                                      |                                         **Speed**                                         |                                                                      **Sovereignty**                                                                     |                                                                 **Input**                                                                 |                                                                 **Output**                                                                |
| :---------------------------------------------------------------------------------------------------------------------------------------------------------: | :---------------------------------------------------------------------------------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------: | :---------------------------------------------------------------------------------------------------------------------------------------: | :---------------------------------------------------------------------------------------------------------------------------------------: |
| ![Intelligence active](/files/dnDi7yuqXqkBFqwaxdnm) ![Intelligence active](/files/dnDi7yuqXqkBFqwaxdnm) ![Intelligence active](/files/dnDi7yuqXqkBFqwaxdnm) | ![Speed active](/files/evfYW3bq4dTBLlZH3dQf) ![Speed active](/files/evfYW3bq4dTBLlZH3dQf) | ![Sovereignty active](/files/bNpzGRJfez9SidEjNCoy) ![Sovereignty active](/files/bNpzGRJfez9SidEjNCoy) ![Sovereignty active](/files/bNpzGRJfez9SidEjNCoy) | ![Text active](/files/45qlqURbT8c2Ekr8HJfK) ![Image inactive](/files/0mPVwOtrYhZrpz9clC3D) ![Audio inactive](/files/PRglWWEC5Zoc5fgynNLM) | ![Text active](/files/45qlqURbT8c2Ekr8HJfK) ![Image inactive](/files/0mPVwOtrYhZrpz9clC3D) ![Audio inactive](/files/PRglWWEC5Zoc5fgynNLM) |
|                                                                            *High*                                                                           |                                          *Medium*                                         |                                                                         *Medium*                                                                         |                                                                   *Text*                                                                  |                                                              *Number Vector*                                                              |
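Embedding vectors such as this model's 1024-dimensional output are typically compared with cosine similarity. A minimal pure-Python sketch (the short toy vectors below stand in for real 1024-dimensional embeddings):

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Toy 4-dimensional stand-ins for real 1024-dimensional embeddings.
query_vec = [0.1, 0.3, 0.5, 0.1]
doc_vec = [0.1, 0.3, 0.5, 0.1]   # near-duplicate of the query
other_vec = [0.9, -0.2, 0.0, 0.1]

# The more similar document scores higher against the query.
assert cosine_similarity(query_vec, doc_vec) > cosine_similarity(query_vec, other_vec)
```

In a retrieval setup, documents are ranked by their cosine similarity to the query embedding.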

## Central parameters

**Description:** Embedding model developed by Beijing Academy of Artificial Intelligence, producing 1024-dimensional vectors from English text.

**Model identifier:** `BAAI/bge-large-en-v1.5`

## IONOS AI Model Hub Lifecycle and Alternatives

|  **IONOS Launch** | **End of Life** |                                             **Alternative**                                             | **Successor** |
| :---------------: | :-------------: | :-----------------------------------------------------------------------------------------------------: | :-----------: |
| *January 1, 2025* |       N/A       | [<mark style="color:blue;">**BGE M3**</mark>](/cloud/ai/ai-model-hub/models/embedding-models/bge-m3.md) |               |

## Origin

|                               **Provider**                              | **Country** |                                                  **License**                                                  | **Flavor** |     **Release**    |
| :---------------------------------------------------------------------: | :---------: | :-----------------------------------------------------------------------------------------------------------: | :--------: | :----------------: |
| [<mark style="color:blue;">**BAAI**</mark>](https://www.baai.ac.cn/en/) |    China    | [<mark style="color:blue;">**License**</mark>](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE) |      -     | *January 28, 2024* |

## Technology

| **Input Length** | **Parameters** | **Tensor Type** | **Multilingual** |                                         **Further details**                                        |
| :--------------: | :------------: | :-------------: | :--------------: | :------------------------------------------------------------------------------------------------: |
|       *512*      |     *335M*     |     *float*     |        *No*      | [<mark style="color:blue;">**Hugging Face**</mark>](https://huggingface.co/BAAI/bge-large-en-v1.5) |

## Modalities

|     **Text**     |   **Image**   |   **Audio**   |
| :--------------: | :-----------: | :-----------: |
| Input and output | Not supported | Not supported |

## Endpoints

| **Chat Completions** | **Embeddings** | **Image generation** |
| :------------------: | :------------: | :------------------: |
|     Not supported    | `v1/embeddings` |     Not supported    |
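A minimal sketch of calling the embeddings endpoint in the OpenAI-compatible request format. The base URL and the `IONOS_API_TOKEN` environment variable name are assumptions for illustration, not taken from this page; see the AI Model Hub how-tos for the actual endpoint host and authentication details.

```python
import json
import urllib.request

# Assumed endpoint host -- replace with the actual AI Model Hub base URL.
BASE_URL = "https://example-inference.ionos.com"
MODEL_ID = "BAAI/bge-large-en-v1.5"


def build_embeddings_request(texts, model=MODEL_ID):
    """Build the JSON payload for the OpenAI-compatible v1/embeddings endpoint."""
    return {"model": model, "input": texts}


def embed(texts, token):
    """POST the payload to v1/embeddings and return one vector per input text."""
    payload = build_embeddings_request(texts)
    req = urllib.request.Request(
        f"{BASE_URL}/v1/embeddings",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Each entry in `data` carries one 1024-dimensional embedding vector.
    return [item["embedding"] for item in body["data"]]


# Usage (requires a valid API token):
#   import os
#   vectors = embed(["hello world"], token=os.environ["IONOS_API_TOKEN"])
#   len(vectors[0])  # 1024
```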

## Features

| **Streaming** | **Reasoning** | **Tool calling** |
| :-----------: | :-----------: | :--------------: |
| Not supported | Not supported |   Not supported  |

## Rate limits

Rate limits ensure fair usage and reliable access to the AI Model Hub. In addition to the [<mark style="color:blue;">contract-wide rate limits</mark>](/cloud/ai/ai-model-hub/how-tos/rate-limits.md), no model-specific limits apply.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.ionos.com/cloud/ai/ai-model-hub/models/embedding-models/bge-large-1-5.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
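For example, the request URL can be built with the question properly URL-encoded (a Python sketch; the question text is illustrative):

```python
from urllib.parse import urlencode

PAGE_URL = (
    "https://docs.ionos.com/cloud/ai/ai-model-hub/models/"
    "embedding-models/bge-large-1-5.md"
)


def ask_url(question: str) -> str:
    """Build the documentation-query URL with the question URL-encoded."""
    return f"{PAGE_URL}?{urlencode({'ask': question})}"


# ask_url("What is the maximum input length?")
# yields PAGE_URL followed by "?ask=What+is+the+maximum+input+length%3F"
```

The resulting URL can then be fetched with a plain HTTP GET.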
