Fine-Tuning
AI Model Studio for Free: AI Model Studio is available as a free beta offering in the German market from October 1 through December 30, 2025. Register today to fine-tune your first AI model.
Attention: The AI Model Studio is a free beta. Breaking changes and limited feature availability may occur at any time. The service is developed in cooperation with our partners at manufactAI and is based on their fine-tuning solutions. While it runs entirely on IONOS Cloud infrastructure, it is managed by manufactAI, and support is provided only by contacting manufactAI. The service does not accept IONOS Cloud API tokens or log-in credentials.
Fine-tuning, which means continuing to train AI models on use-case-specific data, is the core functionality of the AI Model Studio. To achieve this, an efficient method known as Low-Rank Adaptation (LoRA) is employed. Efficiency is achieved by freezing the pre-trained model weights and introducing trainable rank decomposition matrices, which greatly reduce the number of trainable parameters while maintaining model quality.
The resulting adapters can then be easily loaded and used together with the base model for inference tasks.
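To make the efficiency gain concrete, here is a minimal arithmetic sketch (not the AI Model Studio implementation): instead of updating a frozen weight matrix W of shape (d, k), LoRA trains two small matrices B of shape (d, r) and A of shape (r, k), whose product forms the low-rank update.

```python
# Parameter-count comparison between full fine-tuning and a rank-r
# LoRA adapter for a single weight matrix of shape (d, k).

def full_finetune_params(d: int, k: int) -> int:
    """Trainable parameters when updating the full matrix W directly."""
    return d * k

def lora_params(d: int, k: int, r: int) -> int:
    """Trainable parameters of the rank-r decomposition B (d x r) and A (r x k)."""
    return r * (d + k)

# Example: one 4096 x 4096 projection matrix with rank r = 8.
d = k = 4096
r = 8
print(full_finetune_params(d, k))  # -> 16777216
print(lora_params(d, k, r))        # -> 65536
```

In this example, the adapter trains roughly 0.4% of the parameters that full fine-tuning would touch, which is why the resulting adapters are small enough to store and load separately from the base model.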
Configuration
The AI Model Studio allows the user to set specific configurations and training settings when creating a model. The following list gives an overview of these parameters and their meaning.
Model Configuration: Name of the model under which it is displayed later in the model selection drop-down list. This name should be descriptive and help identify the purpose or version of your fine-tuned model.
Training Dataset: Dataset to be used for fine-tuning. The full dataset is used for training and needs to be created beforehand. The dataset should contain input-output pairs that represent the specific task or domain you want the model to learn. For more information, see Creating a Dataset in the AI Model Studio.
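As an illustration of such input-output pairs, the sketch below writes a small dataset in JSON Lines layout. The field names `input` and `output` are hypothetical; the actual schema and file format are described in Creating a Dataset in the AI Model Studio.

```python
import json

# Hypothetical input-output pairs for a translation use case; the exact
# field names are defined by the AI Model Studio dataset documentation.
examples = [
    {"input": "Translate to German: Good morning.", "output": "Guten Morgen."},
    {"input": "Translate to German: Thank you.", "output": "Danke."},
]

# One JSON object per line (JSONL) is a common layout for such datasets.
with open("dataset.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```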
Base Model: The pre-trained model to be used for fine-tuning. This serves as the starting point for your custom model. Select a base model that aligns with your use case; for example, choose a language model for text tasks or a vision model for image processing. The available models are listed in the model overview.
Training Settings: Training settings can either be set manually or configured automatically using auto mode, which is recommended. With auto enabled, the following values are optimized for your dataset.
Learning Rate: Controls how quickly the model adapts during training. This parameter determines the size of steps taken during optimization. A higher learning rate (e.g., 1e-3) enables faster learning but may cause instability, while a lower learning rate (e.g., 1e-5) provides more stable but slower convergence. Defaults to 2e-4.
Batch Size: Number of training examples processed simultaneously in each training step. Larger batch sizes (e.g., 32-128) provide more stable gradients and faster training on powerful hardware, but require more memory. Smaller batch sizes (e.g., 4-16) use less memory but may result in noisier training. Choose based on your available computational resources and dataset size. Defaults to 16.
Epochs: Number of complete passes through the entire training dataset. One epoch means the model has seen every example in your dataset once. More epochs allow the model to learn better but can lead to overfitting if set too high. Typical ranges are 3-10 epochs for fine-tuning, though this depends on dataset size and complexity. Monitor validation metrics to determine the optimal number of epochs. Defaults to 3.
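Batch size and epochs together determine how many optimizer steps a run performs. A quick sketch of this relationship, using the default values above:

```python
import math

def total_training_steps(num_examples: int, batch_size: int, epochs: int) -> int:
    """Optimizer steps for one run: each epoch processes the whole dataset
    in batches, with a smaller final batch if the sizes do not divide evenly."""
    steps_per_epoch = math.ceil(num_examples / batch_size)
    return steps_per_epoch * epochs

# With the defaults (batch size 16, 3 epochs) and a 1,000-example dataset:
print(total_training_steps(1000, 16, 3))  # -> 189
```

Doubling the batch size roughly halves the number of steps per epoch, which is why larger batches train faster on hardware that can hold them in memory.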
Fine-tuning jobs
Fine-tuning jobs that have already started are listed in the job overview together with their status. Additionally, you are informed via email whenever a job starts, finishes successfully, or fails.
When you open a specific job, you receive detailed information on the status of each training step, the fingerprint of the dataset used, the worker that computes the job, and the job timeline.
Access and test your model
Your trained models can be accessed either directly in the AI Model Studio web app using the Playground feature or called using the API.
Playground
To access a model using the Playground feature, go to the Playground subpage of the AI Model Studio and choose the model you want to test from the drop-down menu at the top of the page. The instruction used for training the model is then pre-selected and is added to all model inputs.
Input Limitations: You can add up to 16 input examples, which may not exceed 10 MB in total. Press Process All to generate the inference outputs for the uploaded input examples.
File Processing: File inputs may be processed through additional pipeline elements to meet the specific input requirements of various models. For example, PDF documents are run through an OCR solution if a pure text model is chosen. These additional pipeline elements are not part of the model itself and are not available through the API. In this case, you must implement the required processing steps.
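Because these pipeline elements are internal to the Playground, API callers need to handle such conversion themselves. A minimal sketch of dispatching by file type, with the PDF step left as a placeholder since the choice of OCR or text-extraction tool is up to you:

```python
from pathlib import Path

def extract_pdf_text(path: Path) -> str:
    # Placeholder: plug in your own OCR / PDF text-extraction tool here.
    # The Playground's internal pipeline is not available through the API.
    raise NotImplementedError("bring your own OCR solution")

def prepare_input(path: str) -> str:
    """Route a file to the preprocessing a pure text model would need."""
    p = Path(path)
    if p.suffix.lower() == ".pdf":
        return extract_pdf_text(p)
    # Plain-text files can be passed through as-is.
    return p.read_text(encoding="utf-8")
```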
Accessing the Model through the API
To access the model through the API, see API.