Prerequisite: For the installation instructions, see Install or update to the latest version of the AWS CLI.
Run the aws configure command in a terminal.
AWS Access Key ID [None]: Insert the Access Key. In the DCD, go to Menu > Storage > IONOS Object Storage > Key management and check the Access keys section to find the essential details.
AWS Secret Access Key [None]: Paste the Secret Key. It is available in the same Key management section of the DCD.
Default region name [None]: de.
Default output format [None]: json.
Test if you set up the AWS CLI correctly by running a command to list buckets; any of the endpoints can be used for testing purposes.
If the setup works correctly, you may proceed with the other commands.
For each command, be sure to include one of the endpoints in the endpoint-url parameter:
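For example, the test could look like this (the endpoint below is one of the supported endpoints; any of them works for listing buckets):

```shell
# Placeholder endpoint; replace with any supported IONOS Object Storage endpoint
aws s3 ls --endpoint-url https://s3-eu-central-2.ionoscloud.com
```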
For information on the supported IONOS Object Storage Service endpoints, see Endpoints.
There are two sets of commands:
S3: Offers high-level commands for managing buckets and moving, copying, and synchronizing objects.
S3api: Allows you to work with specific features such as ACL, CORS, and Versioning.
For additional information, see the official AWS CLI Command Reference.
IONOS Object Storage is compatible with the S3 protocol, which means that it can be used to manage buckets and objects with existing S3 clients once properly configured.
Amazon Web Services (AWS) Command-line Interface (CLI) is unique in offering a wide range of commands for comprehensive management of buckets and objects which is ideal for scripting and automation. IONOS Object Storage supports using AWS CLI for Windows, macOS, and Linux.
This document provides instructions to manage Replication using the CLI. Additionally, these tasks can also be performed using the DCD and IONOS Object Storage API.
Prerequisites:
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported Endpoints.
Versioning must be enabled for source and destination buckets.
Create the file replication_configuration.json with the following content:
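A minimal sketch of such a file, assuming a single rule that replicates all objects; the Role ARN and rule ID are placeholders to adjust to your setup:

```shell
cat > replication_configuration.json <<'EOF'
{
  "Role": "arn:aws:iam:::role/replication-role",
  "Rules": [
    {
      "ID": "replication-rule-1",
      "Status": "Enabled",
      "Prefix": "",
      "Destination": {
        "Bucket": "arn:aws:s3:::my-destination-bucket"
      }
    }
  ]
}
EOF
```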
Enable replication from my-source-bucket to my-destination-bucket (use the endpoint of the source bucket):
Retrieve the replication configuration:
Delete the replication configuration:
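Assuming the configuration file above and a source bucket in the eu-central-2 region (the endpoint is a placeholder), the three operations could be sketched as:

```shell
ENDPOINT=https://s3-eu-central-2.ionoscloud.com  # placeholder; use the source bucket's endpoint

# Enable replication
aws s3api put-bucket-replication --bucket my-source-bucket \
  --replication-configuration file://replication_configuration.json \
  --endpoint-url "$ENDPOINT"

# Retrieve the replication configuration
aws s3api get-bucket-replication --bucket my-source-bucket --endpoint-url "$ENDPOINT"

# Delete the replication configuration
aws s3api delete-bucket-replication --bucket my-source-bucket --endpoint-url "$ENDPOINT"
```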
Info: It takes up to a few minutes for the deletion of a replication rule to propagate fully.
This document provides instructions for managing IONOS Object Storage using the AWS CLI. Additionally, these tasks can also be performed through the DCD and the IONOS Object Storage API.
Prerequisites:
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported Endpoints.
Option 1: Using s3 set of commands:
Option 2: Using s3api set of commands:
Create a bucket in the eu-central-2 region (Berlin, Germany):
Option 1: Using s3 set of commands:
Option 2: Using s3api set of commands:
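Both options could be sketched as follows; the bucket name and endpoint are placeholders:

```shell
ENDPOINT=https://s3-eu-central-2.ionoscloud.com  # placeholder endpoint

# Option 1: high-level s3 command
aws s3 mb s3://my-bucket --endpoint-url "$ENDPOINT"

# Option 2: low-level s3api command
aws s3api create-bucket --bucket my-bucket --endpoint-url "$ENDPOINT"
```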
Create a bucket in the de region (Frankfurt, Germany) with Object Lock enabled:
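A sketch of the Object Lock variant; the endpoint is a placeholder to replace with the de (Frankfurt) endpoint from the Endpoints list:

```shell
ENDPOINT=https://s3-eu-central-2.ionoscloud.com  # placeholder; use the de (Frankfurt) endpoint

aws s3api create-bucket --bucket my-bucket \
  --object-lock-enabled-for-bucket \
  --endpoint-url "$ENDPOINT"
```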
Upload an object from the current directory to a bucket:
Download all the objects from the my-bucket bucket to the local directory my-dir:
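The upload and download steps above could be sketched together as follows (names and endpoint are placeholders):

```shell
ENDPOINT=https://s3-eu-central-2.ionoscloud.com  # placeholder endpoint

# Upload a single object from the current directory
aws s3 cp filename.txt s3://my-bucket --endpoint-url "$ENDPOINT"

# Download all objects from my-bucket into the local directory my-dir
aws s3 cp s3://my-bucket my-dir --recursive --endpoint-url "$ENDPOINT"
```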
Copy the object to the bucket:
Copy the contents of the local directory my-dir to the bucket my-bucket:
For more information, see the cp command reference.
Copy all objects from my-source-bucket to my-dest-bucket, excluding .zip files. The command does not support cross-region copying for IONOS Object Storage:
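A sketch of the bucket-to-bucket copy (bucket names and endpoint are placeholders):

```shell
ENDPOINT=https://s3-eu-central-2.ionoscloud.com  # placeholder endpoint

# Recursive bucket-to-bucket copy, skipping .zip objects
aws s3 cp s3://my-source-bucket s3://my-dest-bucket \
  --recursive --exclude "*.zip" --endpoint-url "$ENDPOINT"
```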
Sync the bucket my-bucket with the contents of the local directory my-dir:
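The sync step could look like this (names and endpoint are placeholders):

```shell
ENDPOINT=https://s3-eu-central-2.ionoscloud.com  # placeholder endpoint

# Sync the local directory my-dir to the bucket; only changed files are transferred
aws s3 sync my-dir s3://my-bucket --endpoint-url "$ENDPOINT"
```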
For more information, see sync command reference.
This document provides instructions to manage Versioning using the CLI. Additionally, these tasks can also be performed using the DCD and IONOS Object Storage API.
Prerequisites:
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported Endpoints.
Get the versioning state of the bucket:
Enable versioning for the bucket:
List object versions for the bucket:
List object versions for the object my-object.txt:
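The versioning operations above could be sketched as follows (names and endpoint are placeholders):

```shell
ENDPOINT=https://s3-eu-central-2.ionoscloud.com  # placeholder endpoint

# Get the versioning state of the bucket
aws s3api get-bucket-versioning --bucket my-bucket --endpoint-url "$ENDPOINT"

# Enable versioning for the bucket
aws s3api put-bucket-versioning --bucket my-bucket \
  --versioning-configuration Status=Enabled --endpoint-url "$ENDPOINT"

# List all object versions in the bucket
aws s3api list-object-versions --bucket my-bucket --endpoint-url "$ENDPOINT"

# List versions of a single object
aws s3api list-object-versions --bucket my-bucket \
  --prefix my-object.txt --endpoint-url "$ENDPOINT"
```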
This document provides instructions to manage Bucket Policy using the CLI. Additionally, these tasks can also be performed using the DCD and IONOS Object Storage API.
Prerequisites:
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported Endpoints.
Create a file policy.json with the JSON policy. For sample policies, see Examples.
Apply a bucket policy to a bucket:
Save a bucket policy to file:
Delete the bucket policy:
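The three policy operations could be sketched as follows (bucket name and endpoint are placeholders):

```shell
ENDPOINT=https://s3-eu-central-2.ionoscloud.com  # placeholder endpoint

# Apply a bucket policy from policy.json
aws s3api put-bucket-policy --bucket my-bucket \
  --policy file://policy.json --endpoint-url "$ENDPOINT"

# Save the current bucket policy to a file
aws s3api get-bucket-policy --bucket my-bucket \
  --query Policy --output text --endpoint-url "$ENDPOINT" > policy.json

# Delete the bucket policy
aws s3api delete-bucket-policy --bucket my-bucket --endpoint-url "$ENDPOINT"
```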
This document provides instructions to Manage ACL for Buckets using the AWS CLI. Additionally, these tasks can also be performed using the DCD and IONOS Object Storage API.
Prerequisites:
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported Endpoints.
Use the following keys to define access permissions:
--grant-read: Grants read-only access.
--grant-write: Grants write-only access.
--grant-read-acp: Grants permission to read the Access Control List.
--grant-write-acp: Grants permission to modify the Access Control List.
--grant-full-control: Grants full access, encompassing the permissions listed above (read, write, read ACL, and write ACL).
Note: Granting access to a bucket for another IONOS user does not make the bucket appear in the user's Object Storage in the DCD due to the S3 protocol's architecture. To access the bucket, the user must utilize other S3 Tools, as the granted access does not translate to interface visibility.
Grant full control of my-bucket to a user with a specific Canonical user ID:
Grant full control of my-bucket to multiple users by specifying their Canonical user IDs, separated by commas:
Grant full control of my-bucket by using an email address instead of a Canonical user ID:
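The three grant variants could be sketched as follows; the canonical user IDs, email address, and endpoint are placeholders:

```shell
ENDPOINT=https://s3-eu-central-2.ionoscloud.com  # placeholder endpoint

# Single grantee (placeholder canonical user ID)
aws s3api put-bucket-acl --bucket my-bucket \
  --grant-full-control id=CANONICAL_USER_ID --endpoint-url "$ENDPOINT"

# Multiple grantees, separated by commas
aws s3api put-bucket-acl --bucket my-bucket \
  --grant-full-control id=CANONICAL_USER_ID_1,id=CANONICAL_USER_ID_2 \
  --endpoint-url "$ENDPOINT"

# Grant by email address instead of a canonical user ID
aws s3api put-bucket-acl --bucket my-bucket \
  --grant-full-control emailaddress=user@example.com --endpoint-url "$ENDPOINT"
```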
Retrieve the ACL of a bucket and save it to the file acl.json:
Edit the file. For example, remove or add some grants and apply the updated ACL to the bucket:
Use the following values for the --acl key:
private: removes public access.
public-read: allows public read-only access.
public-read-write: allows public read/write access.
authenticated-read: allows read-only access to all authenticated users of IONOS Object Storage (including those outside of your contract).
Allow public read-only access to the bucket:
Remove public access to the bucket:
Set WRITE and READ_ACP permissions for the Log Delivery Group, which is required before enabling the Logging feature for a bucket:
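The canned-ACL and Log Delivery Group steps above could be sketched as follows (bucket name and endpoint are placeholders):

```shell
ENDPOINT=https://s3-eu-central-2.ionoscloud.com  # placeholder endpoint

# Allow public read-only access
aws s3api put-bucket-acl --bucket my-bucket --acl public-read --endpoint-url "$ENDPOINT"

# Remove public access
aws s3api put-bucket-acl --bucket my-bucket --acl private --endpoint-url "$ENDPOINT"

# Grant WRITE and READ_ACP to the Log Delivery Group (required for the Logging feature)
aws s3api put-bucket-acl --bucket my-bucket \
  --grant-write uri=http://acs.amazonaws.com/groups/s3/LogDelivery \
  --grant-read-acp uri=http://acs.amazonaws.com/groups/s3/LogDelivery \
  --endpoint-url "$ENDPOINT"
```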
This document provides instructions to Manage ACL for Objects using the AWS CLI. Additionally, these tasks can also be performed using the DCD and IONOS Object Storage API.
Prerequisites:
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported Endpoints for object upload.
Use the following keys to define access permissions:
--grant-read: Grants read-only access.
--grant-write: Grants write-only access.
--grant-read-acp: Grants permission to read the Access Control List.
--grant-write-acp: Grants permission to modify the Access Control List.
--grant-full-control: Grants full access, encompassing the permissions listed above (read, write, read ACL, and write ACL).
Use --key to specify the object for granting access:
Use the following values for the --acl key:
private: removes public access.
public-read: allows public read-only access.
public-read-write: allows public read/write access.
authenticated-read: allows read-only access to all authenticated users of IONOS Object Storage (including those outside of your contract).
Allow public read-only access to the object:
Remove public access from the object:
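The object-level ACL operations above could be sketched as follows (names and endpoint are placeholders):

```shell
ENDPOINT=https://s3-eu-central-2.ionoscloud.com  # placeholder endpoint

# Grant read access on one object to a specific user (placeholder canonical user ID)
aws s3api put-object-acl --bucket my-bucket --key my-object.txt \
  --grant-read id=CANONICAL_USER_ID --endpoint-url "$ENDPOINT"

# Allow public read-only access to the object
aws s3api put-object-acl --bucket my-bucket --key my-object.txt \
  --acl public-read --endpoint-url "$ENDPOINT"

# Remove public access from the object
aws s3api put-object-acl --bucket my-bucket --key my-object.txt \
  --acl private --endpoint-url "$ENDPOINT"
```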
S3cmd is a free command line tool and client for loading, retrieving, and managing data in S3. It has over 60 command line options, including multipart uploads, encryption, incremental backup, S3 sync, ACL and metadata management, bucket size, and bucket policies (Linux, macOS).
Install s3cmd for your distribution:
on CentOS/RHEL and Fedora: sudo dnf install s3cmd
on Ubuntu/Debian: sudo apt-get install s3cmd
on macOS using Brew: brew install s3cmd
You can also install the latest version from SourceForge.
Run the following command in a terminal: s3cmd --configure. This will guide you through the interactive configuration process:
Enter your Access Key and Secret key. To get them, log in to the DCD, go to Menu > Storage > IONOS Object Storage > Key management.
Note: Your credentials are not tied to a specific region or bucket.
Specify the region of your bucket for Default Region. Example: eu-central-2. Please refer to the list of available regions.
Specify the endpoint for the selected region for Endpoint from the same list. For example, s3-eu-central-2.ionoscloud.com.
Insert the same endpoint again for DNS-style bucket+hostname:port template.
Specify or skip the password (press Enter) for Encryption password.
Press Enter for Path to GPG program.
Press Enter for Use HTTPS protocol.
Press Enter for HTTP Proxy server name.
Press Enter for Test access with supplied credentials? [Y/n].
S3cmd will try to test the connection. If everything went well, save the configuration by typing y and pressing Enter. The configuration will be saved in the .s3cfg file.
If you need to work with more than one region or with different providers, you can set up multiple configurations. Use s3cmd --configure --config=ionos-fra to save the configuration for a specific location or provider. Run s3cmd with the -c option to override the default configuration file. For example, list the objects in the bucket:
You can also specify an endpoint directly on the command line to override the default setting. The Access Key and Secret key are region-independent, so s3cmd can take them from the default configuration:
Or even specify it with an Access Key and the Secret Key:
Please refer to the list of available endpoints for the --host option. You can skip this option if you are only using the region from the configuration file.
List buckets (even buckets from other regions will be listed):
Create a bucket (the name must be unique across the whole IONOS Object Storage). You need to explicitly use the --region option; otherwise, the bucket will be created in the default de region:
Create the bucket my-bucket in the region de (Frankfurt, Germany):
Create the bucket my-bucket in the region eu-central-2 (Berlin, Germany):
Create the bucket my-bucket in the region eu-south-2 (Logrono, Spain):
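The three region variants could be sketched as follows; each bucket name must be globally unique, so the repeated name is for illustration only:

```shell
# Frankfurt (de), Berlin (eu-central-2), and Logrono (eu-south-2)
s3cmd mb s3://my-bucket --region=de
s3cmd mb s3://my-bucket --region=eu-central-2
s3cmd mb s3://my-bucket --region=eu-south-2
```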
List objects of the bucket my-bucket:
Upload filename.txt from the current directory to the bucket my-bucket:
Copy the contents of the local directory my-dir to the bucket my-bucket with the prefix my-dir:
Copy all objects from my-source-bucket to my-dest-bucket, excluding .zip files (or use mv to move objects). The command does not support cross-region copying for IONOS Object Storage; use the rclone utility for cross-region copying:
Download all the objects from the my-bucket bucket to the local directory my-dir (the directory must exist):
Synchronize a directory to S3 (checks files using size and md5 checksum):
Get Cross-Origin Resource Sharing (CORS) configuration:
Set up Cross-Origin Resource Sharing (CORS) configuration:
cors_rules.xml:
Delete CORS from the bucket:
Get information about buckets or objects:
s3cmd info s3://my-bucket
s3cmd info s3://my-bucket/my-object
Generate a public URL for download that will be available for 10 minutes (600 seconds):
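A sketch of the signed-URL command; the object key is a placeholder:

```shell
# URL expires after 600 seconds (10 minutes)
s3cmd signurl s3://my-bucket/my-object.pdf +600
```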
Set up a lifecycle policy for a bucket (delete objects older than 1 day):
delete-after-one-day.xml:
Encrypt and upload files. This option allows you to encrypt files before uploading, but in order to use it, you have to run s3cmd --configure and fill out the path to the GPG utility and the encryption password. There is no need to use special parameters to decrypt the file on download with the get option, as this is done automatically using the data from the configuration file.
Add or modify user-defined metadata. Use headers starting with x-amz-meta-
and store data in the set of key-value pairs. The user-defined metadata is limited to 2 KB in size. The size of the user-defined metadata is measured by taking the sum of the number of bytes in the UTF-8 encoding of each key and value.
s3cmd modify --add-header x-amz-meta-my_key:my_value s3://my-bucket/prefix/filename.txt
Check the changes:
Delete metadata:
For more information, visit S3cmd usage and S3cmd FAQ.
This document provides instructions for managing Static Website Hosting using the CLI. Additionally, these tasks can also be performed using the DCD and IONOS Object Storage API.
Prerequisites:
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported Endpoints.
Make the bucket public for static website hosting using Bucket Policy:
Contents of policy.json:
Enable static website hosting for my-bucket:
Info: The website URLs differ from the endpoint URLs. The command sets up the static website here – http://my-bucket.s3-website-eu-central-2.ionoscloud.com.
Disable static website hosting for my-bucket:
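The enable and disable steps could be sketched as follows; the index and error document names and the endpoint are placeholders:

```shell
ENDPOINT=https://s3-eu-central-2.ionoscloud.com  # placeholder endpoint

# Enable static website hosting
aws s3 website s3://my-bucket \
  --index-document index.html --error-document error.html \
  --endpoint-url "$ENDPOINT"

# Disable static website hosting
aws s3api delete-bucket-website --bucket my-bucket --endpoint-url "$ENDPOINT"
```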
S3 Browser is a free, feature-rich Windows client for IONOS Object Storage.
Download and install the S3 Browser.
Add a new account and select:
Display name: Enter a name for the connection.
Account type: Select S3 Compatible Storage from the drop-down list.
REST Endpoint: If you already have a bucket, select the endpoint URL from the list. Otherwise, you can select s3-eu-central-2.ionoscloud.com, which corresponds to the location in Berlin, Germany.
To get the Access Key and Secret Key, log in to the DCD, go to Menu > Storage > IONOS Object Storage > Key management.
Click Advanced S3-compatible storage settings in the lower-left corner of the form.
Storage settings:
Signature version: Select Signature V4 from the drop-down list.
Addressing model: Leave Path style.
Override storage regions: Paste the following text into the text area:
Region-specific endpoint: Insert the following text: s3-{region-code}.ionoscloud.com
Save the details.
Try creating your first bucket. The bucket name must be unique across the entire IONOS Object Storage, which is why S3 Browser offers to append random text to the bucket name; you can also choose your own unique name.
IONOS Object Storage supports using Cyberduck, a Cloud Storage browser with SFTP, WebDAV, and S3 support for Windows, macOS, and Linux.
For the installation instructions, see Cyberduck.
Once inside Cyberduck, select Cyberduck > Preferences… from the menu.
Select Profiles to open the Connection Profiles page.
Select the IONOS Cloud Object Storage (Berlin) connection profile or IONOS Cloud Object Storage (Frankfurt), or IONOS Cloud Object Storage (Logrono) from the list of available connection profiles, or best use the search option to search for it.
Close the Preferences window and restart Cyberduck to install the selected connection profiles.
Open Cyberduck and select File > Open Connection… You will see the connection dialog.
At the top, click the dropdown menu and select the IONOS Cloud Object Storage (Berlin) profile that corresponds with the data center you want to use.
Enter key values in the Access Key and Secret Key fields.
To access the Object Storage keys:
Log in to the DCD, go to Menu > Storage > IONOS Object Storage > Key management.
Choose "Generate a key" and confirm the action by clicking Generate. The object storage key will be generated automatically.
Click Connect.
-c FILE, --config=FILE: Config file name. Defaults to $HOME/.s3cfg.
-e, --encrypt: Encrypt files before uploading to S3.
--upload-id=UPLOAD_ID: UploadId for multipart upload, in case you want to continue an existing upload (equivalent to --continue-put) and there are multiple partial uploads. Use s3cmd multipart [URI] to see which UploadIds are associated with the given URI.
This document provides instructions to manage Logging using the CLI. Additionally, these tasks can also be performed using the DCD and IONOS Object Storage API.
Prerequisites:
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported Endpoints.
Prerequisite: Grant permissions to the Log Delivery Group to the bucket where logs will be stored. We recommend using a separate bucket for logs, but it must be in the same region. Log Delivery Group must be able to write objects and read ACL.
After that, you can enable Logging for a bucket:
Contents of logs-acl.json:
Retrieve bucket logging settings:
Disable logging for a bucket:
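The logging operations above could be sketched as follows; the target bucket, prefix, and endpoint are placeholders:

```shell
ENDPOINT=https://s3-eu-central-2.ionoscloud.com  # placeholder endpoint

# Hypothetical target configuration: logs go to my-logs-bucket under the logs/ prefix
cat > logging.json <<'EOF'
{
  "LoggingEnabled": {
    "TargetBucket": "my-logs-bucket",
    "TargetPrefix": "logs/"
  }
}
EOF

# Enable logging
aws s3api put-bucket-logging --bucket my-bucket \
  --bucket-logging-status file://logging.json --endpoint-url "$ENDPOINT"

# Retrieve the bucket logging settings
aws s3api get-bucket-logging --bucket my-bucket --endpoint-url "$ENDPOINT"

# Disable logging by applying an empty status
aws s3api put-bucket-logging --bucket my-bucket \
  --bucket-logging-status '{}' --endpoint-url "$ENDPOINT"
```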
Postman is a free tool for conveniently working with APIs in a graphical interface. It is available for Windows, macOS, and Linux.
You can follow the installation instructions described on Postman.
In the Authorization tab for a request, select AWS Signature from the Type dropdown list. Specify where Postman should append your authorization data using the Add authorization data to drop-down menu.
If you select Request Headers, Postman populates the Headers tab with Authorization and X-Amz- prefixed fields.
If you select the Request URL, Postman populates the Params tab with authentication details prefixed with X-Amz-.
Note: The parameters listed below contain confidential information. We recommend using variables to keep this data secure while working in a collaborative environment.
To get the Access Key and Secret Key, log in to the DCD, go to Menu > Storage > IONOS Object Storage > Key management.
Advanced fields are optional, but Postman will attempt to generate them automatically if necessary.
For AWS Region, leave the field blank as the region from the endpoint will be used.
For Service Name, enter s3. This is the name of the service that receives the requests.
For Session Token, leave the field blank. This is only required when temporary security credentials are used.
Setup is complete. Now check the API description to get the right endpoint to call.
Note: You need to use the correct endpoint URL for each region (see the list of available endpoints).
IONOS Object Storage is fully compatible with S3, which means that it can be used to manage buckets and objects with existing S3 clients once properly configured. We suggest a list of popular tools for working with IONOS Object Storage, as well as instructions for configuring them:
Postman: Tool for API development and testing. Its unique feature is a graphical interface for sending API requests to object storage endpoints, facilitating testing and development.
Cyberduck: An open-source GUI client supporting object storage among other protocols, presenting storage objects as local files for easy browsing, upload, and download.
S3 Browser: Freeware Windows client for object storage, providing an easy way to manage buckets and objects, including file permissions and access control lists, through a visual interface.
AWS CLI: Unique in offering a wide range of commands for comprehensive management of buckets and objects. Ideal for scripting and automation.
S3cmd: Offers direct, scriptable control over object storage buckets and objects. However, it lacks certain features like versioning and replication management.
Rclone: A CLI program for syncing files between local and cloud storage, distinguishing itself with powerful synchronization capabilities, specifically functional when handling large data quantities and complex sync setups.
Boto3: Provides a high-level object-oriented API as well as low-level direct service access.
Veeam: Comprehensive backup and disaster recovery solution for virtual, physical, and cloud-based workloads. Supports creating an Object Storage repository for backing up to one or multiple buckets.
Boto3 is the official AWS SDK for Python. It allows you to create, update, and configure IONOS Object Storage objects from within your Python scripts.
Install the latest Boto3 release via pip: pip install boto3
There are several ways to provide credentials to Boto3, e.g., passing credentials as parameters to the boto3.client() method, via environment variables, or with a generic credential file (~/.aws/credentials).
An example of passing credentials as parameters when creating a Session object:
To get the Access Key and Secret Key, log in to the DCD and go to Menu > Storage > IONOS Object Storage > Key management.
NOTE: Your credentials are not tied to a specific region or bucket.
For information on the supported IONOS Object Storage Service endpoints, see Endpoints.
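The basic operations in this section could be sketched as follows; the credentials, endpoint, bucket name, and file names are placeholders:

```python
import boto3

# Placeholder credentials and endpoint; substitute your own values from the DCD
client = boto3.client(
    "s3",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
    endpoint_url="https://s3-eu-central-2.ionoscloud.com",
)

# List buckets
for bucket in client.list_buckets()["Buckets"]:
    print(bucket["Name"])

# Create a bucket, then upload and download a file
client.create_bucket(Bucket="my-bucket")
client.upload_file("filename.txt", "my-bucket", "filename.txt")
client.download_file("my-bucket", "filename.txt", "filename.txt")

# List objects of the bucket
for obj in client.list_objects_v2(Bucket="my-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
```

Running the snippet requires valid credentials and network access to the endpoint.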
List buckets:
Create the bucket my-bucket in the region eu-central-1:
Upload filename.txt to the bucket my-bucket:
Download the file filename.txt from my-bucket:
List objects of the bucket my-bucket:
Copy filename.txt from the bucket my-source-bucket to the bucket my-dest-bucket and add the prefix uploaded/. Instead of the client() method, we use the resource() method here. It provides a higher level of abstraction than the low-level calls made by service clients.
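The resource-based copy could be sketched as follows; the credentials, endpoint, and names are placeholders:

```python
import boto3

# Placeholder credentials and endpoint
s3 = boto3.resource(
    "s3",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
    endpoint_url="https://s3-eu-central-2.ionoscloud.com",
)

copy_source = {"Bucket": "my-source-bucket", "Key": "filename.txt"}
# The managed copy() transfer handles large (multipart) objects transparently
s3.Object("my-dest-bucket", "uploaded/filename.txt").copy(copy_source)
```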
Terraform: The most used Infrastructure as Code (IaC) tool, which allows you to manage infrastructure with configuration files rather than through a GUI. Terraform allows you to build, change, and manage your infrastructure safely, consistently, and repeatedly by defining resource configurations that you can version, reuse, and share.
For more information, see AWS SDK documentation on .
For more examples, see , such as:
For more information on Boto3 and Python, see .
This document provides instructions to manage Lifecycle using the CLI. Additionally, these tasks can also be performed using the DCD and IONOS Object Storage API.
Prerequisites:
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported Endpoints.
Create a file lifecycle.json with the JSON policy:
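A minimal sketch of such a file; the rule (expire all objects after 30 days) is a hypothetical example to adjust to your needs:

```shell
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-after-30-days",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Expiration": {"Days": 30}
    }
  ]
}
EOF
```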
Apply the lifecycle configuration to a bucket:
Save the bucket’s lifecycle configuration to a file:
Delete the Lifecycle configuration:
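The apply, save, and delete steps could be sketched as follows (bucket name and endpoint are placeholders):

```shell
ENDPOINT=https://s3-eu-central-2.ionoscloud.com  # placeholder endpoint

# Apply the lifecycle configuration to the bucket
aws s3api put-bucket-lifecycle-configuration --bucket my-bucket \
  --lifecycle-configuration file://lifecycle.json --endpoint-url "$ENDPOINT"

# Save the bucket's lifecycle configuration to a file
aws s3api get-bucket-lifecycle-configuration --bucket my-bucket \
  --endpoint-url "$ENDPOINT" > lifecycle-saved.json

# Delete the lifecycle configuration
aws s3api delete-bucket-lifecycle --bucket my-bucket --endpoint-url "$ENDPOINT"
```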
Rclone is a command line tool for managing files in the cloud. It is available for Windows, macOS, and Linux. Rclone also has a built-in HTTP server that can be used to remotely control rclone using its API and a web GUI (graphical user interface).
rclone helps:
backing up (and encrypting) files to cloud storage
restoring (and decrypting) files from cloud storage
mirroring cloud data to other cloud services or locally
transferring data to the cloud or between cloud storage providers
mounting multiple encrypted, cached, or diverse cloud storages in the form of a disk
analyzing and accounting for data stored in cloud storage using lsf, ljson, size, and ncdu
Download the latest version of rclone from rclone.org. The official Ubuntu, Debian, Fedora, Brew, and Chocolatey repositories include rclone.
You can find the configuration example here.
Configurations created with the rclone config command are called remotes. If you already have or plan to use buckets in different IONOS Object Storage regions, you will need to set up a separate remote for each region you use.
Please refer to the list of commands at the rclone website.
List remotes:
List buckets of "ionos1" remote:
Create the bucket my-bucket at the remote ionos1:
Upload filename.txt from the current directory to the bucket my-bucket:
Copy the contents of the local directory my-dir to the bucket my-bucket:
Copy all objects with the prefix my-dir from the bucket my-source-bucket to my-dest-bucket:
The buckets can be located in different regions and even at different providers. Unless the buckets are located within the same region, the data is not copied directly from the source to the destination. For cross-region copying, the data is downloaded from the source bucket to your machine and then uploaded to the destination.
Download all the objects from the my-bucket bucket to the local directory my-dir:
Sync the bucket my-bucket with the contents of the local directory my-dir. The destination is updated to match the source, including deleting files if necessary:
Get the total size and number of objects in remote:path:
Check if the files in the local directory and destination match:
Produce an md5sum file for all the objects in the path:
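The rclone operations above could be sketched together as follows; "ionos1" is a placeholder remote name created with rclone config:

```shell
rclone listremotes                       # list configured remotes
rclone lsd ionos1:                       # list buckets of the "ionos1" remote
rclone mkdir ionos1:my-bucket            # create a bucket
rclone copy filename.txt ionos1:my-bucket           # upload a file
rclone copy my-dir ionos1:my-bucket                 # upload a directory
rclone copy ionos1:my-source-bucket/my-dir ionos1:my-dest-bucket/my-dir  # bucket-to-bucket copy
rclone copy ionos1:my-bucket my-dir      # download all objects
rclone sync my-dir ionos1:my-bucket      # make the destination match the source
rclone size ionos1:my-bucket             # total size and number of objects
rclone check my-dir ionos1:my-bucket     # verify local files match the destination
rclone md5sum ionos1:my-bucket           # md5 checksums for all objects in the path
```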
This document provides instructions to manage CORS using the CLI. Additionally, these tasks can also be performed using the DCD and API.
Prerequisites:
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported Endpoints.
Get the CORS configuration for the bucket my-bucket:
Set up the CORS configuration for the bucket my-bucket:
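Both operations could be sketched as follows; the CORS rule (allowing GET and PUT from one origin) is a hypothetical example:

```shell
ENDPOINT=https://s3-eu-central-2.ionoscloud.com  # placeholder endpoint

# Retrieve the current CORS configuration
aws s3api get-bucket-cors --bucket my-bucket --endpoint-url "$ENDPOINT"

# Hypothetical rule allowing GET/PUT from a single origin
cat > cors.json <<'EOF'
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://example.com"],
      "AllowedMethods": ["GET", "PUT"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3000
    }
  ]
}
EOF

# Apply the CORS configuration
aws s3api put-bucket-cors --bucket my-bucket \
  --cors-configuration file://cors.json --endpoint-url "$ENDPOINT"
```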
For more information, see put-bucket-cors command reference.
This document provides instructions to manage Object Lock using the CLI. Additionally, these tasks can also be performed using the DCD and IONOS Object Storage API.
Prerequisites:
Object Lock configuration is only feasible when enabled at the time of bucket creation. It cannot be activated for an existing bucket.
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported Endpoints.
Create a bucket my-bucket in the de region (Frankfurt, Germany) with Object Lock:
An Object Lock with Governance mode on a bucket provides the bucket owner with better flexibility compared to the Compliance mode. It permits the removal of the Object Lock before the designated retention period has expired, allowing for subsequent replacements or deletions of the object.
Apply the Governance mode configuration to the bucket my-bucket-with-object-lock with a default retention period equal to 15 days (or use the PutObjectLockConfiguration API call):
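A sketch of the configuration call; the endpoint is a placeholder, and swapping GOVERNANCE for COMPLIANCE applies the stricter mode described below:

```shell
ENDPOINT=https://s3-eu-central-2.ionoscloud.com  # placeholder endpoint

# Default retention: Governance mode, 15 days
aws s3api put-object-lock-configuration --bucket my-bucket-with-object-lock \
  --object-lock-configuration '{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"GOVERNANCE","Days":15}}}' \
  --endpoint-url "$ENDPOINT"

# Retrieve the current Object Lock configuration
aws s3api get-object-lock-configuration --bucket my-bucket-with-object-lock \
  --endpoint-url "$ENDPOINT"
```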
On applying this configuration, the newly uploaded objects adhere to this retention setting.
An Object Lock with Compliance mode on a bucket ensures strict control by enforcing a stringent retention policy on objects. Once this mode is set, the retention period for an object cannot be shortened or modified. It provides immutable protection by preventing objects from being deleted or overwritten during their retention period.
This mode is particularly suited for meeting regulatory requirements as it guarantees that objects remain unaltered. It does not allow locks to be removed before the retention period concludes, ensuring consistent data protection.
Apply the Compliance mode configuration to the bucket my-bucket-with-object-lock with a default retention period equal to 15 days:
On applying this configuration, the newly uploaded objects adhere to this retention setting.
Retrieve the Object Lock configuration of a bucket (the same can be achieved with the GetObjectLockConfiguration API call):
Upload my-object.pdf to the bucket my-bucket-with-object-lock:
This task could also be achieved by using the PutObject API call.
Note: The Object Lock retention is not specified, so the bucket's default retention configuration will be applied.
Upload my-object.pdf to the bucket my-bucket-with-object-lock and override the bucket's default Object Lock configuration:
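A sketch of the upload with an explicit per-object retention; the retain-until date is a placeholder:

```shell
ENDPOINT=https://s3-eu-central-2.ionoscloud.com  # placeholder endpoint

# Override the bucket default: Governance mode until a fixed (placeholder) date
aws s3api put-object --bucket my-bucket-with-object-lock \
  --key my-object.pdf --body my-object.pdf \
  --object-lock-mode GOVERNANCE \
  --object-lock-retain-until-date 2026-01-01T00:00:00Z \
  --endpoint-url "$ENDPOINT"
```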
Note: You can overwrite objects protected with Object Lock. Since Versioning is enabled for the bucket, multiple versions of an object can be kept. Deleting an object is also allowed because this operation only adds a deletion marker to the object's version.
The permanent deletion of an object's version is prohibited; the system only creates a deletion marker for the object. This makes IONOS Object Storage behave in most ways as though the object has been deleted. You can list the delete markers and the other versions of an object by using the ListObjectVersions API call.
Note: Delete markers are not WORM-protected, regardless of any retention period or legal hold in place on the underlying object.
Apply the legal-hold status to my-object.pdf in the bucket my-bucket-with-object-lock:
Use Status=OFF to turn off the legal-hold status.
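Both operations could be sketched as follows (endpoint is a placeholder):

```shell
ENDPOINT=https://s3-eu-central-2.ionoscloud.com  # placeholder endpoint

# Turn the legal hold on
aws s3api put-object-legal-hold --bucket my-bucket-with-object-lock \
  --key my-object.pdf --legal-hold Status=ON --endpoint-url "$ENDPOINT"

# Turn the legal hold off again
aws s3api put-object-legal-hold --bucket my-bucket-with-object-lock \
  --key my-object.pdf --legal-hold Status=OFF --endpoint-url "$ENDPOINT"
```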
To check the Object Lock status for a particular version of an object, you can use either the GET Object or the HEAD Object command. Both commands provide information about the retention mode, the designated Retain Until Date, and the legal hold status for the chosen object version.
When multiple users have permission to upload objects to your bucket, there is a risk of overly extended retention periods being set. This can lead to increased storage costs and data management challenges. While the system allows for up to 100 years using the s3:object-lock-remaining-retention-days condition key, implementing limitations can be particularly beneficial in multi-user environments.
Establish a 10-day maximum retention limit:
Save it to the policy.json file and apply it using the following command:
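A sketch of such a policy, denying uploads whose remaining retention exceeds 10 days (the statement ID is a placeholder):

```shell
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "limit-retention-to-10-days",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket-with-object-lock/*",
      "Condition": {
        "NumericGreaterThan": {
          "s3:object-lock-remaining-retention-days": "10"
        }
      }
    }
  ]
}
EOF
```

It could then be applied with: aws s3api put-bucket-policy --bucket my-bucket-with-object-lock --policy file://policy.json --endpoint-url <your-endpoint>.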
IONOS Object Storage is fully compatible with the S3 protocol, and once properly configured, it can be used to manage buckets and objects with existing S3 clients. Terraform is HashiCorp's infrastructure-as-code tool. It lets you define resources and infrastructure in human-readable, declarative configuration files and manages your infrastructure's lifecycle. Using Terraform has several advantages over manually managing your infrastructure.
To use IONOS Object Storage with the AWS Terraform Provider, configure the AWS Terraform Provider. For more information, see Examples.
Info:
— The access_key and secret_key can be retrieved in the DCD: go to Menu > Storage > IONOS Object Storage > Key management.
— For the list of IONOS Object Storage Service endpoints, see Endpoints.
Prerequisite: For installing or updating the latest versions, refer to the instructions in the AWS Terraform Provider documentation.
The following information must be specified in your Terraform provider configuration (HCL):
AWS Access Key ID: Insert the Access Key. In the DCD, go to Menu > Storage > IONOS Object Storage > Key management and check the Access keys section to find the essential details.
AWS Secret Access Key: Paste the Secret Key. It is available in the same Key management section of the DCD.
skip_credentials_validation: When set to true, it skips Security Token Service (STS) validation.
skip_requesting_account_id: The account ID is not requested when set to true. It is useful for AWS API implementations that do not have the IAM, STS API, or metadata API.
skip_region_validation: The region name is not validated when set to true. It is useful for AWS-like implementations that use their own region names.
endpoints: For the list of IONOS Object Storage Service endpoints, see Endpoints.
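Putting the settings above together, a minimal provider block could look like this; the credentials and endpoint are placeholders to replace with your own values:

```hcl
provider "aws" {
  access_key = "ACCESS_KEY"   # placeholder: Object Storage Access Key from the DCD
  secret_key = "SECRET_KEY"   # placeholder: Object Storage Secret Key from the DCD
  region     = "de"

  skip_credentials_validation = true
  skip_requesting_account_id  = true
  skip_region_validation      = true

  endpoints {
    s3 = "https://s3-eu-central-2.ionoscloud.com"  # placeholder; see Endpoints
  }
}
```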
Veeam Backup & Replication offers two options for backup to the Object Storage:
1. Direct backup to Object Storage
2. Backup to Object Storage as a capacity tier in the Scale-out Backup Repository (SOBR).
Follow these steps to configure direct backup:
1. Add Object Storage Repository: In Veeam Backup & Replication, navigate to the backup infrastructure and add a new object storage repository, selecting S3 compatible and providing your IONOS Object Storage credentials. We recommend using the eu-central-3 region for Veeam backups.
2. Configure Backup Jobs: Set up or modify backup jobs to target the newly added IONOS Object Storage repository.
For more information, see Add Backup Repository.
SOBR in Veeam enables you to create a highly flexible and scalable backup storage solution by combining multiple storage resources into a single repository. It allows moving data from performance to capacity tiers automatically to optimize storage usage and cost.
Follow these steps to configure SOBR with Object Storage:
1. Create Performance Tier: Add your primary, fast storage devices (local or network) as the performance tier in Veeam Backup & Replication.
2. Add Capacity Tier: Integrate IONOS Object Storage as the capacity tier, allowing Veeam to offload older backup files to cost-effective object storage.
3. Policy Configuration: Configure policies to define how and when data should be moved between the performance and capacity tiers.
In Veeam 12, both Performance and Capacity Tiers support multiple buckets without any limit. This applies to users as follows:
Existing users upgrading to v12 should consider setting up a new SOBR with multiple buckets, as Veeam does not redistribute existing VM backups across newly added buckets.
For new users planning to integrate IONOS Object Storage in the Capacity Tier with a traditional SOBR configuration, it is advisable to start with multiple buckets.
Veeam automatically allocates VM backups across these buckets. While adding new buckets is possible later as well, Veeam does not reallocate existing VM backups to these new buckets; only new VM backups will utilize them. Utilizing multiple buckets in the Capacity Tier removes the necessity for multiple SOBRs.
Minimal Veeam Version
Recommended Object Storage regions
Backup Job Storage Optimization
Immutability Retention Period
The immutability retention period must be less than or equal to the backup retention period.
Object Storage Repository Task Limit
Create a bucket for every 100 VMs or 200 TB of data to be backed up (Performance or Capacity Tier, depending on the use case).
For more information, see Recommended Settings.
The minimum version is Veeam 11a CP4 (11.0.1.126120220302), which brings an important fix for using Object Lock.
In May 2024, we introduced the eu-central-3 Object Storage region in Berlin, Germany. This region features contract-owned buckets, which provide enhanced performance and resilience. This bucket type also simplifies user management: the bucket list is visible to all users of the contract, and the contract owner or administrators can assign permissions to view bucket contents. For more information, see Bucket Types and Endpoints.
This setting is available during the addition of the new Object Storage repository. For more information, see Configure repository details in Create Object Storage as an object repository in Veeam.
You can also update the concurrent tasks limit for an existing Object Storage repository by following these steps:
Go to the Backup Infrastructure tab.
Click Backup Repositories.
Right-click on your Object Storage repository and choose Properties.
Select the checkbox Limit concurrent tasks to and set the value as follows:
Up to 10—for the eu-south-2 region.
Up to 20—for all other regions.
Veeam Backup & Replication allows you to configure block sizes for each backup job, which can significantly impact deduplication efficiency and incremental backup size. By default, blocks are compressed, typically achieving a compression ratio of about 50%.
Smaller blocks can enhance deduplication, but they increase the number of calls to the object storage, potentially affecting performance.
Larger blocks reduce the number of storage calls and can increase throughput to and from IONOS Object Storage, improving overall backup performance.
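To get a feel for this tradeoff, the following sketch estimates how many objects one full backup writes for a given block size (the function name is illustrative, and the ~50% compression ratio is the typical figure mentioned above):

```python
import math

def objects_per_full_backup(source_gib: float, block_mib: int, compression: float = 0.5) -> int:
    """Approximate number of objects written to object storage for one full backup."""
    stored_mib = source_gib * 1024 * compression  # data remaining after compression
    return math.ceil(stored_mib / block_mib)

# A 1 TiB source: 1 MiB blocks write ~524,288 objects; 8 MiB blocks write ~65,536
print(objects_per_full_backup(1024, 1))
print(objects_per_full_backup(1024, 8))
```

The eightfold reduction in object count is why larger blocks improve throughput against object storage.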
For optimal performance with IONOS Object Storage, we recommend using 8 MB blocks. These blocks are unavailable in the interface by default and must be enabled via a registry key by following these steps:
1. Open the Registry Editor.
2. Navigate to Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication.
3. Add a new DWORD (32-bit) value named UIShowLegacyBlockSize.
4. Set the value of UIShowLegacyBlockSize to 1.
Result: The 8 MB blocks are now available in the Veeam interface.
To change the storage optimization setting, follow these steps:
1. Click the Backup section under Jobs.
2. Select the required backup job, right-click on it, and choose Edit in the menu.
3. Click Storage, open Advanced, and switch to the Storage tab.
4. In the Storage optimization drop-down list, choose 8 MB (marked as "not recommended"). Disregard this mark; it is the correct choice. This option is available only if you modified the registry setting as described in the previous section. Do not use a block size smaller than 4 MB.
5. Click Save as default at the bottom of the Advanced Settings screen.
Result: The new storage configuration is saved as the default setting. This will ensure the settings are automatically used for any new backup jobs created.
The immutability retention period of the Object Storage Repository must be less than or equal to the backup retention period of the backup job.
To check the immutability retention period, follow these steps:
1. Go to the Backup Infrastructure tab.
2. Click Backup Repositories.
3. Right-click Object Storage Repository and choose Properties.
4. Click Next on the Name and Account screens.
5. Set the Make recent backups immutable for setting to 30 days.
Result: The immutability retention period is successfully set.
To check the backup retention period, follow these steps:
1. Click the Backup section under Jobs.
2. Select the required backup job, right-click on it, and choose Edit in the menu. You will see the Job Mode screen.
3. Click Storage in the left menu and check the number in the Retention policy setting.
Note: The immutability retention period from the previous check (30) is lower than the retention policy listed here (31), and this is the correct setting.
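The relationship the two checks above verify can be stated as a simple condition (an illustrative sketch, not a Veeam API):

```python
def immutability_setting_is_valid(immutability_days: int, retention_days: int) -> bool:
    """The immutability retention period must not exceed the backup retention period."""
    return immutability_days <= retention_days

# The example above: 30-day immutability against a 31-day retention policy is valid
print(immutability_setting_is_valid(30, 31))  # True
```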
Create a bucket for every 100 virtual machines or 200 TB of data to be backed up (Performance or Capacity Tier, depending on the use case).
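This sizing rule can be expressed as a quick calculation (a sketch; the function name is illustrative, and the 100-VM / 200-TB thresholds are the ones recommended above):

```python
import math

def recommended_bucket_count(num_vms: int, data_tb: float) -> int:
    """One bucket per 100 VMs or per 200 TB of backup data, whichever needs more."""
    by_vms = math.ceil(num_vms / 100)
    by_size = math.ceil(data_tb / 200)
    return max(by_vms, by_size, 1)

# 250 VMs totalling 300 TB: 3 buckets by VM count, 2 by size -> 3 buckets
print(recommended_bucket_count(250, 300))
```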
In May 2024, we introduced the eu-central-3 Object Storage region in Berlin, Germany. This region features contract-owned buckets, providing enhanced performance and resilience. A new 92-character access key is required for this region.
To add a backup repository, follow these steps:
1. Create a new access key: Check your access key length in the Key Management. If it is 20 characters long, you need to create a new 92-character key compatible with all regions. For more information, see Generate a Key.
2. Create a new contract-owned bucket with an Object Lock: Create a new contract-owned bucket in the eu-central-3 region with Object Lock enabled. For more information, see Create a Bucket and refer to the "contract-owned buckets" section.
Note: Select the No default retention option as the mode for the Object Lock.
3. Create backup repository: Create an Object Storage as an object repository in Veeam. To do so, follow the steps in Create Object Storage as an object repository in Veeam.
Result: An object repository is successfully created. In the Backup Infrastructure tab, you can view the repository listed under Backup Repositories.
To add a new object storage repository on Veeam Backup & Replication, follow these steps:
1. Navigate to the backup repositories. To do so, go to the Backup Infrastructure tab, and click Backup Repositories > Add Repository to open the wizard for adding new backup repositories.
2. Select repository type by following these steps:
Enter a name and an optional description for the object storage repository.
Select the Limit concurrent tasks to checkbox and set the values as follows:
Up to 10 – for the eu-south-2 region
Up to 20 – for all other regions
Click Next.
3. Input the endpoint details as follows:
Service endpoint: https://s3.eu-central-3.ionoscloud.com
Region: eu-central-3
Click Add to input the access and secret keys.
Info: Only the 92-character access key supports all the Object Storage regions.
Click Next.
4. Configure bucket and folders by following these steps:
Enter the bucket name or browse and select from the list.
Click Browse to create a new folder where backups will be stored.
(Optional) Set the limit for used storage and enable backup immutability. The immutability retention period of the Object Storage Repository must be less than or equal to the backup retention period of the backup job. For more information, see Set the immutability retention period.
Click Next.
5. In the Mount server tab, keep the default values and click Next.
6. In the Review tab, click Apply and then click Next to continue.
7. Finalize the repository creation by reviewing the Summary and clicking Finish.
Result: The Object Storage repository is successfully created in Veeam. In the Backup Infrastructure tab, you can view the repository listed under Backup Repositories.
Continue to Create a Backup Job.
In May 2024, we introduced the eu-central-3 Object Storage region in Berlin, Germany. This region features contract-owned buckets, providing enhanced performance and resilience. Note that a new 92-character access key is required for this region.
To move a backup repository to the eu-central-3 region, follow these steps:
1. Create a new access key: Check your access key length in the Key Management. If it is 20 characters long, you need to create a new 92-character key compatible with all regions. For more information, see Generate a Key.
2. Create a new contract-owned bucket with an Object Lock: Create a new contract-owned bucket in the eu-central-3
region with Object Lock enabled. For more information, see Create a Bucket and refer to the "contract-owned buckets" section.
Note: Select the No default retention option as the mode for the Object Lock.
3. Set up Veeam storage optimization: To do so, follow the steps in Set up Veeam storage optimization.
4. Create backup repository: Create an Object Storage as an object repository in Veeam. To do so, follow the steps in Create Object Storage as an object repository in Veeam.
5. Move the backup repository: You need to move the backup repository to the new Object Storage region. To do so, follow these steps:
Right-click on the Backup Jobs name in the Object Storage section of the Backup tab and select Move backup.
Specify the target repository to move backups to and click OK. The migration of your backup repository will start. The data will be transferred directly from one region to another.
Result: The Veeam backup repository is successfully moved to the eu-central-3 Object Storage region.
1. Click the Backup section under Jobs.
2. Select the required backup job, right-click on it, and choose Edit in the menu.
3. Click Storage, open Advanced Settings, and switch to the Storage tab.
Note: We recommend using at least 4 MB in Veeam to benefit from better performance, and 8 MB storage blocks are a preferred choice, but these blocks are unavailable in the interface by default and must be enabled via a registry key.
4. In the Storage optimization dropdown list, choose 4 MB or 8 MB (marked as "not recommended"). Disregard this "not recommended" mark that you see in the web interface.
5. (Optional) If you want to benefit from better performance, follow the instructions to enable 8 MB blocks:
Open the Registry Editor.
Navigate to Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication.
Add a new DWORD (32-bit) value named UIShowLegacyBlockSize.
Set the value of UIShowLegacyBlockSize to 1. Now, the 8 MB block option will be available in the interface.
To apply the new storage configuration as the default for all future backup jobs, click Save as Default at the bottom of the Advanced Settings screen. This will ensure the settings are automatically used for any new backup jobs created.
Result: The Veeam storage optimization is successfully set up.
To create a backup job, follow these steps:
1. Click the Backup Job button and choose your workload type. For example, Windows computer.
2. Choose the Job Mode type as Workstation or Server and click Next.
3. Name the backup job and add an optional description, for example, back up to Object Storage eu-central-3, and click Next.
4. Choose computers for backing up. Click Add and choose Individual computer.
5. Enter the computer’s IP address. Click the Add button to enter the Username and password. Click OK and Next.
6. Choose backup mode from the following options: Entire computer, Volume level backup, or File level backup and click Next.
7. Choose a backup repository from the drop-down menu. Update the Retention policy setting and click Next.
8. In Guest Processing, keep the default settings and click Next.
9. Choose a Schedule for your backup and click Apply.
10. Check the Summary and click Finish.
Result: A backup job is successfully created.
Note: Instead of waiting for the next scheduled run, you can right-click on the job name in the Jobs list and choose Start to run the initial backup immediately.
Veeam Backup & Replication provides a comprehensive solution for data management, encompassing backup, recovery, and replication capabilities.
Learn about the two options for backing up to Object Storage from Veeam Backup & Replication.
Learn how to create a new Object Storage repository for your backups.
Learn how to create backup jobs with performance-optimized settings.
Learn how to migrate your backup repository to the eu-central-3 region.
Learn more about the recommended settings to apply while setting up Veeam Backup & Replication to Object Storage.
Minimal Veeam version: Veeam 11a CP4 (11.0.1.126120220302), which brings an important fix for using Object Lock.
Recommended Object Storage region: eu-central-3.
Backup job storage optimization: minimum 4 MB, recommended 8 MB; the 8 MB option must be enabled via a registry key.