S3 Compatible: Object Storage adheres to the industry-standard S3 protocol, ensuring seamless integration with S3 Tools and applications designed for S3-compatible platforms. For more information, see S3 API Compatibility.
Data Management: The storage pool, comprising objects and buckets in a flat data environment, is easy to manage with the following data management functions:
Replication: Safeguards your data by duplicating it across multiple locations, providing redundancy and ensuring high availability. You can replicate data between user-owned buckets of the same user, as well as to contract-owned buckets in the eu-central-3 region.
Versioning: Tracks and manages multiple versions of an object, enabling easy rollback of objects and buckets to previous states and preserving their historical versions.
Lifecycle: Archives or deletes objects based on predefined criteria, optimizing costs and managing data efficiently.
Object Lock: Secures your data by implementing retention policies or legal holds, ensuring that data objects remain immutable for a specified duration or indefinitely. This way, the data conforms to the Write Once Read Many (WORM) storage model and is prevented from being erased or modified.
Access Management: The following functions allow users to set access permissions to other Object Storage users, defining who can access their objects and buckets.
Access Control List (ACL): Grants granular permissions on objects and buckets, controlling who can access and modify your data.
Bucket Policy: Sets overarching access policies for a bucket, providing additional security and control over how data is accessed and used.
Logging: Monitors and records access requests to your objects and buckets, providing a clear audit trail and helping identify suspicious activities. This feature is currently not supported for contract-owned buckets.
Cross-Origin Resource Sharing (CORS): Defines rules for client web applications from different domains to access the data resources stored in your buckets.
Public Access: You can allow or block public access to the data in IONOS Object Storage wherever needed with the following functions:
Block Public Access: Ensures data privacy by blocking all public access at the bucket or account level. Currently, this feature is available only via the IONOS Object Storage API.
Static Website Hosting: Using Object Storage, you can host static websites directly, eliminating the need for additional web servers and thus simplifying deployment. You can grant public read access to the objects in these buckets, allowing users to view all the content on these static websites.
Security: Data object protection is achieved through the following:
Encryption in Transit: Secures data as it travels to and from the Object Storage using robust TLS 1.2/1.3 encryption protocol.
Server-Side Encryption: Protects stored data by encrypting it on the server side with IONOS Object Storage managed keys (SSE-S3) or customer-managed keys (SSE-C) using the AES-256 encryption algorithm. Stored objects are decrypted automatically when downloaded.
Features: IONOS Object Storage secures your data in the storage pool through Versioning, Block Public Access, Object Lock, and Replication features.
Security Certification: The solution adheres to the ISO 27001 certificate based on IT-Grundschutz and complies with the European Union's (EU's) General Data Protection Regulation (GDPR).
Large Data Volume: Data in Object Storage is stored as objects, which include metadata and a unique identifier, making object retrieval easier for large volumes of unstructured data.
Cost-Effective Billing: A straightforward pay-as-you-go Pricing Model, eliminating upfront costs. You are charged solely based on storage utilization and outbound data transfer per gigabyte. Additionally, we do not impose charges for requests.
Highly Scalable: With Object Storage, you can start with small data storage and expand data storage as your application needs at any time, offering the utmost flexibility with data storage.
Georedundant Hosting: Using Replication, you can replicate objects and buckets in the Object Storage to multiple data centers in different geographical locations, guaranteeing high availability and data durability even during primary site failures or outages. For Replication support based on the bucket types, see Feature Comparison.
Compliance Standards: IONOS Object Storage infrastructure and processes comply with IT-Grundschutz, GDPR, and ISO-27001 standards, offering peak data protection and robust privacy policies.
Write Once Read Many (WORM): With Object Lock, data stored in the Object Storage is protected and prevented from being erased or modified.
Data Protection: With the Access Control List and Object Lock features, you can enforce multiple layers of protection on data objects and define who can access the data in the Object Storage. Advanced data encryption algorithms ensure secure data storage.
Lifecycle Management: With Object Storage Lifecycle rules, you can enforce the data deletion process for historical data and save the storage cost.
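Several of the data-management features above can be configured with S3-compatible tooling. As a sketch, assuming the AWS CLI is set up for your endpoint (the bucket name, prefix, retention period, and endpoint below are illustrative, not taken from this document), a Lifecycle expiration rule could look like this:

```shell
# Write a Lifecycle rule that expires objects under the logs/ prefix after
# 30 days (rule ID, prefix, and duration are illustrative values).
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-old-logs",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Expiration": { "Days": 30 }
    }
  ]
}
EOF

# Sanity check: the rule file contains an expiration action.
grep -q '"Expiration"' lifecycle.json && echo "rule written"

# Applying it requires configured credentials and a real endpoint URL
# (see Endpoints), so the call is shown here without being executed:
# aws s3api put-bucket-lifecycle-configuration --bucket my-bucket \
#     --lifecycle-configuration file://lifecycle.json \
#     --endpoint-url https://<endpoint>
```

The `put-bucket-lifecycle-configuration` operation is the standard S3 API call for this; any S3-compatible tool that supports Lifecycle rules can apply an equivalent configuration.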
Based on IONOS Object Storage features and benefits, the following use cases are derived that meet your business demands:
Data Backup and Restore: IONOS Object Storage backs up critical databases and data with ease. With replication, resilience, and bucket versioning features, data security and access are enhanced.
Website Asset Storage: You can store specific website assets like images or downloadable files on Object Storage even if you do not host the whole site, helping in cost-saving and server space optimization.
Static Website Hosting: Utilize Object Storage for hosting static websites that load quickly as Object Storage does not require server-side processing.
Multimedia Asset Hosting: Storing static multimedia files like images, videos, audio, and documents, which seldom change, is easier in IONOS Object Storage and does not need block storage volumes. With a dedicated URL to each object, you can easily embed or host these assets on a Static Website without needing a server.
Private File Storage: Safely store private data with default settings, making objects inaccessible through regular HTTP. You get the flexibility to modify object access permissions whenever needed.
Storing Unstructured Data: A flat data structure is ideal for storing and managing large datasets outside of traditional databases. You can customize the metadata of objects to classify and retrieve data such as images, videos, audio, documents, and Big Data more efficiently.
Artifact Storage: Storing and sharing development artifacts such as log data via Object Storage URL is an ideal solution for developers. Using access keys, you can safely share artifacts with intended users only. Developers can also store software applications as objects in the Object Storage.
Software Hosting and Distribution: Developers can upload software applications as objects in the buckets and easily provide access to their software via unique URLs, making it a go-to solution for hosting and distributing software.
Periodic Data Retention: For periodic logs that need to be accessed only for a certain period, Object Storage Lifecycle Rules make it possible to retain data and delete objects on a specified expiration date, making Object Storage ideal for periodic data storage.
IONOS Object Storage is a secure, scalable storage solution that offers high data availability and performance. The product adheres to the S3 API standards, enabling the storage of vast amounts of unstructured data and seamless integration into S3-compatible applications and infrastructures.
Unlike traditional hierarchical systems like block storage volumes or disk file systems, Object Storage utilizes a flat structure that is ideal for storing large chunks of unstructured, static data that you want to keep ‘as is’ for later access. Businesses of all sizes can use IONOS Object Storage to store files (objects) for varied Use Cases.
The IONOS Object Storage service is available in the following locations:
Data Center | Region | Bucket Type |
---|---|---|
Frankfurt, Germany | de | User-owned buckets |
Berlin, Germany | eu-central-2 | User-owned buckets |
Logroño, Spain | eu-south-2 | User-owned buckets |
Berlin, Germany | eu-central-3 | Contract-owned buckets |
For the list of available points of access, see Endpoints.
In IONOS Object Storage, the data that you want to store in the Object Storage is called Objects. The data types could be archives, backups, log files, documents, images, and media assets. Each object is allocated a unique URL for direct access. Further, you can group these objects within a folder to help organize and manage these objects within a bucket. For more information, see Objects and Folders.
To begin with Object Storage, you need to generate a key, which is a unique identifier that allows you access to buckets and objects. This key is a combination of Access Key and Secret Key, listed in the Key Management section. For more information, see Key Management.
To upload objects into the Object Storage, you need to create containers known as Buckets by choosing the region and a unique bucket name. The objects are stored in these buckets which are accompanied by rich metadata. For more information, see Buckets and Bucket Types.
Based on access permissions, buckets, and objects can be publicly accessible or kept private and shared with only intended users. Use the Access Control List (ACL) or Bucket Policy settings to manage access.
When you log on to the Object Storage using the DCD, it manages authentication and authorization so that you can access the Object Storage with just one click.
Prerequisite: Make sure you have the corresponding privilege to enable IONOS Object Storage. Only contract owners and administrators can enable Object Storage.
1. In the DCD, go to Menu > Management, and click Users & Groups.
2. Create a new group or open an existing group from the Groups drop-down list.
3. In the Privileges tab, select the Use Object Storage checkbox to enable the Object Storage permission.
4. In the Members tab, add users to the group that you wish to authorize for the use of the object storage.
Result: All members of the authorized group get the required permission to manage IONOS Object Storage using the DCD.
You need to create a bucket or an Object Storage access key to start using the IONOS Object Storage.
Prerequisite: The Use Object Storage permission must be enabled for the user account. The Object Storage is not enabled for an IONOS account by default. For more information, see .
To set up Object Storage through the DCD, follow these steps:
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
The Buckets and Key Management sections are shown on the IONOS Object Storage Home page.
2. Click Create a bucket. Creating a new bucket also creates an access key if it does not exist already. For more information, see .
3. In the Key management, click Generate a key to create a new access key.
Result: The Object Storage is successfully set up through the DCD. On setting up Object Storage, the billing starts only after you upload an object.
Contract owners and administrators may use this functionality to access buckets and objects stored in the IONOS Object Storage accounts of users who are no longer active members of their contracts.
Warning: Before you delete a user or all of their Object Storage access keys from your account, ensure that the content in their IONOS Object Storage is accessible so that you can continue to use it or delete it by adjusting the access rights accordingly.
To access other user's Object Storage, follow these steps:
1. In the DCD, go to Menu > Management, and click Users & Groups.
2. Select the required user from the Users drop-down list.
3. In the Object Storage Keys, click Manage.
Result: You are now logged on as the bucket owner of the selected user's IONOS Object Storage. You can now access the user's buckets, objects, and access keys. You can also update the Object Storage of this user account.
Note:
— The contract is charged for data stored even if all the Object Storage keys are deleted. However, you can create a new key and continue to work with Object Storage.
— You need to delete all the objects from the user-owned bucket before you delete a user or all of their Object Storage Keys from your account; otherwise, the contract continues to be charged for the stored data.
IONOS Object Storage is a service offered by IONOS for storing and accessing unstructured data. The Object Storage is fully S3-compliant, which means that it can be used to manage buckets and objects using existing S3 clients.
To get answers to the most commonly encountered questions about IONOS Object Storage, see .
When creating a bucket, you must carefully consider the following settings:
The .
Supported regions.
Whether to enable Object Lock for the bucket.
The Object Storage bucket can be created through one of the following methods:
The easiest way to create a bucket is by using the DCD. You must create a bucket before you can start uploading objects to the Object Storage.
You can create either a contract-owned bucket or a user-owned bucket but the shift towards a contract-owned bucket model will be the primary focus for future Object Storage updates.
To create a contract-owned bucket, follow these steps:
Prerequisites:
Make sure you have the corresponding permission to create the Object Storage. Only a contract owner or an administrator can create contract-owned buckets.
You must have at least one active access key; otherwise, generate a new key or activate an existing access key.
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
2. In the Buckets tab, click Create a bucket.
3. Choose the Bucket region, which determines the geographical location where the data inside the buckets will be stored. Currently, you can create contract-owned buckets only in the eu-central-3 region.
4. Enter a unique Bucket name that adheres to the naming rules for a bucket.
Note: A bucket will not be created if a bucket with the same name already exists in the IONOS Object Storage.
5. (Optional) Choose whether you want to enable Object Lock for the bucket. If yes, select the Enable Object Lock checkbox.
6. If Object Lock is enabled, select the mode of Object Lock to apply to the objects uploaded to the bucket. Choose from the Governance mode or Compliance mode and enter the Retention period in days or years. For more information, see .
7. Click Create bucket.
Result: A contract-owned bucket is created in the eu-central-3 region.
To create a user-owned bucket, follow these steps:
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
2. In the Buckets tab, click Create a bucket.
3. Choose the Bucket region, which determines the geographical location where the data inside the buckets will be stored. You can create user-owned buckets in any of these Object Storage regions: de, eu-central-2, and eu-south-2.
Note: A bucket will not be created if a bucket with the same name already exists in the IONOS Object Storage.
4. (Optional) Choose whether you want to enable Object Lock for the bucket. If yes, select the Enable Object Lock checkbox.
5. Click Create bucket.
Result: A user-owned bucket is created in the selected region.
In the de region (Frankfurt, Germany):
In the eu-central-2 region (Berlin, Germany):
In the eu-south-2 region (Logroño, Spain):
Your Object Storage credentials consist of an Access Key and a Secret Key. The DCD automatically uses these credentials to set up Object Storage. These credentials are also required to set up access to IONOS Object Storage using . For more information, see .
For more billing information, see or contact the .
Note: When enabling Object Lock, Versioning is enabled for the bucket by default.
Using the API operation, you can create a bucket with or without the object lock.
For details on configuring the AWS CLI tool, see .
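As a minimal sketch of that setup, assuming you already generated an Access Key and Secret Key (the placeholder values and the endpoint below are illustrative, not real credentials or URLs):

```shell
# Store the Object Storage credentials in the AWS CLI's standard profile
# settings (replace the placeholders with your own key pair):
aws configure set aws_access_key_id <your-access-key>
aws configure set aws_secret_access_key <your-secret-key>

# The IONOS endpoint is passed per command; see Endpoints for the real URLs.
# Standard S3 API calls then work as usual, e.g. creating a bucket:
aws s3api create-bucket --bucket my-bucket --endpoint-url https://<endpoint>

# Creating a bucket with Object Lock enabled uses the standard flag:
aws s3api create-bucket --bucket my-locked-bucket \
    --object-lock-enabled-for-bucket --endpoint-url https://<endpoint>
```

The `--object-lock-enabled-for-bucket` flag is the standard S3 way to enable Object Lock at bucket creation; it cannot be enabled on an existing bucket afterwards through this call.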
Data Center | Region | Bucket Type |
---|---|---|
Frankfurt, Germany | de | User-owned buckets |
Berlin, Germany | eu-central-2 | User-owned buckets |
Logroño, Spain | eu-south-2 | User-owned buckets |
Berlin, Germany | eu-central-3 | Contract-owned buckets |
IONOS Object Storage provides a range of access options, including DCD, desktop applications, CLI tools, and an option to develop your application using API and SDKs.
In the DCD, go to Menu > Storage > IONOS Object Storage to access IONOS Object Storage via the DCD. Here you can manage buckets and objects, set access controls, and much more. To set up Object Storage, see Enable Object Storage access.
The Object Storage is fully S3 compatible, enabling seamless integration with existing S3-compatible tools. A few popular GUI tools are Postman, Cyberduck, and S3 Browser; CLI tools include AWS CLI, S3cmd, and rclone. For more information, see S3 Tools.
Being S3 compatible means you can use standard S3 API calls and SDKs with our storage solution. For more information, see IONOS Object Storage API Reference.
Explore the use cases to use IONOS Object Storage.
Tasks that guide you to quickly set up and get started with using Object Storage.
Explanations of core components in the Object Storage and their functions.
A detailed help guide to accomplish tasks such as creation, update, deletion, configuration, and management.
Detailed guide on data management, access management, and public access of Object Storage.
The service availability endpoints to use IONOS Object Storage.
Guide to working with Object Storage compatible Graphical User Interface (GUI) tools, Command Line Interface (CLI) tools, and Software Development Kits (SDKs).
When you upload a file to IONOS Object Storage, it is stored as an Object and can be organized in buckets and folders in the Object Storage.
You can upload objects to buckets through one of the following methods:
Prerequisites:
— Make sure a bucket already exists to which you want to upload objects (files).
— If you want to use object lock, then make sure the object lock is enabled for the bucket as well. For more information, see Object Lock.
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
2. From the drop-down list in the Buckets tab, choose either Show user-owned buckets or Show contract-owned buckets depending on the bucket type you want to view.
3. From the Buckets list, choose the bucket to which objects must be uploaded.
4. Click Upload objects which opens an overlay screen.
5. Click Browse files or drag and drop the files to be uploaded. You can choose to upload multiple files.
Info: During object upload, you can turn on or off Server-Side Encryption with Object Storage Managed Keys (SSE-S3) for user-owned buckets. The object upload via DCD does not support encryption for contract-owned buckets.
6. Review the selected files to be uploaded. Use the Remove and Remove all options to exclude any files from the upload.
7. Click Start upload to confirm the files to be uploaded.
Result: The objects are successfully uploaded to the selected bucket.
A few of the limitations to consider while using object upload through the DCD are:
Multi-part upload is not supported.
The Server-side Encryption with Customer Provided Keys (SSE-C) is not supported.
A maximum upload size of 4.65 GiB per object applies.
Other applications, SDKs, and APIs are not subject to these limitations.
Using the Object Storage API, you can perform object upload and manage objects in a bucket.
Note: Only a single storage class is currently available: STANDARD. It is designed for general-purpose storage of frequently accessed data.
Prerequisites:
— Set up the AWS CLI by following the installation instructions.
— Make sure to consider the supported Endpoints for object upload.
To upload an object from the current directory to a bucket:
To copy the contents of the local directory my-dir to the bucket my-bucket:
To copy all objects from my-source-bucket to my-dest-bucket, excluding .zip files:
Info: This command does not support cross-region copying for IONOS Object Storage.
To sync the bucket my-bucket with the contents of the local directory my-dir:
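The operations above map to standard AWS CLI `s3` commands. A sketch, assuming configured credentials (the bucket names are the document's examples; the endpoint placeholder must be replaced with a URL from the Endpoints list):

```shell
# Upload an object from the current directory to a bucket:
aws s3 cp my-object.txt s3://my-bucket/ --endpoint-url https://<endpoint>

# Copy the contents of the local directory my-dir to the bucket my-bucket:
aws s3 cp my-dir s3://my-bucket/ --recursive --endpoint-url https://<endpoint>

# Copy all objects from my-source-bucket to my-dest-bucket, excluding .zip files:
aws s3 cp s3://my-source-bucket s3://my-dest-bucket --recursive \
    --exclude "*.zip" --endpoint-url https://<endpoint>

# Sync the bucket my-bucket with the contents of the local directory my-dir
# (only new or changed files are transferred):
aws s3 sync my-dir s3://my-bucket/ --endpoint-url https://<endpoint>
```

`cp --recursive` always copies everything it matches, while `sync` compares timestamps and sizes first, which makes it the cheaper choice for repeated backups.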
You can upload and copy objects using the multi-part upload feature, which breaks a single object into smaller parts and uploads these parts in parallel, maximizing the upload speed. While the DCD does not support multi-part upload due to the upload size limit of 4.65 GiB per object, the Object Storage API and many S3 Tools offer this functionality, allowing users to benefit from parallel uploads.
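With the AWS CLI, multi-part upload is applied automatically once a file exceeds a size threshold; both knobs are standard AWS CLI configuration settings (the values below are illustrative, not recommendations from this document):

```shell
# Start multi-part uploads for files larger than 64 MB:
aws configure set default.s3.multipart_threshold 64MB

# Upload in 16 MB parts, which the CLI transfers in parallel:
aws configure set default.s3.multipart_chunksize 16MB
```

After this, a plain `aws s3 cp` of a large file is split into parts transparently; no separate multi-part commands are needed.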
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
2. From the drop-down list in the Buckets tab, choose either Show user-owned buckets or Show contract-owned buckets depending on the bucket type you want to view.
3. From the Buckets list, choose the bucket in which the folder must be created.
4. Click Create a folder which opens an overlay screen.
5. Enter a name in the Prefix field. The prefix must contain only alphanumeric characters, slashes, and hyphens.
6. Click Create to continue with folder creation.
Result: The folder is successfully created in the selected bucket.
Note:
— A folder, once created, cannot be renamed.
— Objects that have already been uploaded cannot be moved to a different folder.
Create subfolders within a folder by following the steps in Create a folder.
Upload objects to a folder and subfolders.
Search for folders and objects within folders using the Search by Prefix option within a bucket.
An object in the Object Storage can be viewed and downloaded to your local computer. Objects protected with SSE-S3 encryption are decrypted automatically before the download process begins. In the case of SSE-C, you need to provide the encryption keys for the download; this is unavailable in the DCD, so use CLI tools, SDKs, or API methods to download objects protected with SSE-C encryption.
You can download objects through one of the following methods:
For large objects, you may not need to download the entire file. You can perform a partial download of objects using the Object Storage API. The API allows you to specify a byte range in your request, enabling you to download only a portion of the object data.
Note: An object's metadata can be viewed directly from the properties page in the DCD or through the API call, providing a quick way to inspect an object's properties without incurring data transfer fees. Data transfer fees apply when you download objects from your Object Storage bucket. For more information, see Pricing Model.
Using the DCD, you can download one object at a time. For downloading multiple objects, consider using CLI tools, SDKs, or REST API.
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
2. From the drop-down list in the Buckets tab, choose either Show user-owned buckets or Show contract-owned buckets depending on the bucket type you want to view.
3. From the Buckets list, choose the bucket from which you want to download the object. The objects in the bucket are listed.
4. Choose the object to download and click on the respective object's action menu (three dots). The Download option is also available from the respective object's properties page.
5. Click Download. If an object has been shared through a public URL, click the URL to download the object.
Result: The object is successfully downloaded.
Using the Object Storage API, you can download objects from a bucket.
To download my-object.txt to a specified file locally:
To download a specified version of my-object.txt to a specified file locally:
To download all the objects from the my-bucket bucket to the local directory my-dir:
To recursively copy all objects with the /my-dir/ prefix from my-bucket-1 to my-bucket-2:
To get the object’s metadata without downloading an object:
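As with uploads, these operations map to standard AWS CLI commands. A sketch, assuming configured credentials (bucket and object names follow the document's examples; the endpoint placeholder and version ID are illustrative):

```shell
# Download my-object.txt to a specified local file:
aws s3api get-object --bucket my-bucket --key my-object.txt \
    my-object.txt --endpoint-url https://<endpoint>

# Download a specific version of the object (requires Versioning):
aws s3api get-object --bucket my-bucket --key my-object.txt \
    --version-id <version-id> my-object.txt --endpoint-url https://<endpoint>

# Download all objects from my-bucket to the local directory my-dir:
aws s3 cp s3://my-bucket my-dir --recursive --endpoint-url https://<endpoint>

# Recursively copy all objects under the /my-dir/ prefix between buckets:
aws s3 cp s3://my-bucket-1/my-dir/ s3://my-bucket-2/my-dir/ \
    --recursive --endpoint-url https://<endpoint>

# Partial download: fetch only the first 1024 bytes of the object:
aws s3api get-object --bucket my-bucket --key my-object.txt \
    --range bytes=0-1023 part.bin --endpoint-url https://<endpoint>

# Inspect metadata without downloading (no data transfer fees):
aws s3api head-object --bucket my-bucket --key my-object.txt \
    --endpoint-url https://<endpoint>
```

The `--range` header is how the byte-range partial download described above is expressed in the S3 API.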
For more information, see the cp, get-object, and head-object command references.
An Object Storage key must be generated manually through the DCD or the Object Storage Management API. The Canonical User ID is displayed in the Object Storage Credentials and Users & Groups > Users > Object Storage Keys > IDs sections only after the first key is generated.
Prerequisite: Make sure you have the corresponding permission to manage the Object Storage. If you are not the contract owner or the administrator, you must be added to a group with Use Object Storage privilege.
To create an access key, follow these steps:
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
2. In the Key management tab, go to the Access keys section and click Generate key.
3. Confirm key generation by clicking Generate. The keys generated since April 2024 will have access to both contract-owned buckets and user-owned buckets. It works with all Endpoints.
Result: A new access key for IONOS Object Storage is successfully generated and is in the active status by default.
Info: It is recommended to download the Access Key and Secret Key as a backup copy by using the Download key pair option. Using the key details, you can sign in to other Object Storage applications. A maximum of five unique Object Storage Keys can be created per user for different S3 applications.
To deactivate or delete keys, see Manage Keys.
Prerequisite: Make sure you have the corresponding permission to manage the Object Storage. If you are not the contract owner or the administrator, you must be added to a group with Use Object Storage privilege.
1. In the DCD, go to Menu > Management > Users & Groups.
2. Select the user from the Users list and click the Object Storage Keys tab.
3. Click + Generate Key and confirm key generation by clicking OK.
Result: A new access key for IONOS Object Storage is successfully generated for the user and listed in the user's Object Storage Keys tab. By default, the key is in the active status.
To deactivate or delete keys, see Manage Keys.
The pricing model for IONOS Object Storage is as follows:
1 Gigabyte (GB) is equal to 1024 Megabytes (MB).
Storage space is charged per GB per hour.
Data transfer is charged in GB. Inbound data transfer is free, but it will be counted as outbound data transfer for your Virtual Machine (VM) if you upload data from it. Outbound data transfer can be paid or free, depending on the conditions outlined in Outbound data transfer from IONOS Object Storage.
Using the IONOS Object Storage API is free of charge.
Prices are listed in the respective price lists:
IONOS Ltd. – United Kingdom.
IONOS Inc. – United States and Canada.
All outbound data transfer from IONOS Object Storage is billable as public traffic, except for data transfer to the VMs in the same data center.
The cost per GB for outbound data transfer is contingent upon the cumulative data consumption of the account. A tiered pricing structure is implemented for all outbound traffic, including data transfer from VMs and IONOS Object Storage.
All outbound data transfer from IONOS Object Storage is billed as public traffic. The local and national traffic definitions do not apply. This includes outgoing data transfer to IONOS VMs or dedicated servers regardless of their geographical location.
While inter-bucket data transfer is subject to charges, replication traffic both within the same region and across different regions is cost-free.
No charges are imposed on inbound data transfer to IONOS Object Storage. It is essential to know that when uploading data to IONOS Object Storage, the same data transfer may be billed as an outbound data transfer for your VM.
While calculating network costs for data transfer from a VM to IONOS Object Storage, the following distinctions are made in the billing based on the traffic type:
Data Transfer from a VM to IONOS Object Storage | Billing |
---|---|
IONOS Object Storage organizes data as objects. The data can include documents, pictures, videos, backups, and other types of content. You can store these objects within buckets, and each object can be a maximum of 5 TB in size. An object has a key, which is the name given to the object. This key acts as a unique identifier that you can use to retrieve the object.
Every object uploaded to a bucket includes Object properties and Object metadata.
The properties refer to object details and the metadata are key-value pairs that store additional information about the object. The maximum size of metadata is 2 KB (keys+values). For instance, an object of type 'image' can include metadata such as its photographer, capture date, or camera used. Properly defined metadata aids in filtering and pinpointing objects using specific criteria.
Note: Currently, it is not possible to add metadata using the DCD. You can add metadata using the or (in case of multipart upload) API calls for uploading objects.
In the Object Settings, you can retrieve the object's properties and metadata alongside the object.
From the Object Properties page, you can also perform the following actions:
Download an object.
Copy the object URL to the clipboard.
Generate a Pre-Signed URL.
Delete an object.
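The pre-signed URL action above is also available from the command line. A sketch, assuming configured credentials (object name and endpoint are illustrative placeholders):

```shell
# Generate a pre-signed URL for my-object.txt that stays valid for one
# hour (3600 seconds); anyone holding the URL can download the object
# until it expires, without needing their own credentials:
aws s3 presign s3://my-bucket/my-object.txt \
    --expires-in 3600 --endpoint-url https://<endpoint>
```

The URL is signed locally with your Secret Key, so no request is sent to the storage service when generating it.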
Folders, also known as Prefixes, are containers that help organize the objects within a bucket. Although Object Storage offers a flat data structure instead of a hierarchy such as a file system, the creation of folders within a bucket is allowed to support well-structured organization of data. You can create folders within a bucket, create subfolders within a folder, and upload objects to either.
Unlike traditional file systems with nested folders, IONOS Object Storage maintains a flat environment without a hierarchy of folders or directories; hence you can emulate folders using key naming conventions with slashes (/).
You can use prefix names that contain alphanumeric characters, slashes, and hyphens only.
Example: Instead of saving a report as Annual_Report_2023.pdf, using a key such as reports/2023/Annual_Report.pdf gives the semblance of a folder structure. These virtual folders through prefixes aid in logically grouping related objects.
Following are a few examples of using prefixes for objects to emulate folder structure:
user_profiles/john_doe/avatar.jpg
data/backups/June/backup.zip
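The folder emulation above can be demonstrated offline with a few sample keys (the key list below is illustrative data, not a real bucket listing): cutting each key at its first slash yields the top-level "folders" exactly as a file browser would display them.

```shell
# Sample flat listing of object keys, as a bucket would return them:
keys="user_profiles/john_doe/avatar.jpg
data/backups/June/backup.zip
data/reports/2023/Annual_Report.pdf
readme.txt"

# Keys without a slash are plain objects at the bucket root; the rest
# group under their first path segment, forming the virtual folders.
top_level=$(printf '%s\n' "$keys" | awk -F/ 'NF>1 {print $1}' | sort -u)
printf '%s\n' "$top_level"   # prints: data, then user_profiles
```

S3-compatible APIs expose the same grouping natively: a list request with `--delimiter /` returns these first segments as "common prefixes" instead of individual keys.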
In IONOS Object Storage, a bucket is the primary container for data. Think of it like a directory in a file system where you can store files known as objects. Each object is stored in a bucket and is identified by a unique key, allowing easy retrieval. You can store any number of objects in a bucket and can create up to 500 buckets in a user account.
A region corresponds to a geographical location where the data inside the buckets is stored. Different regions have different endpoints, which are URLs used to access the Object Storage.
IONOS Object Storage is currently available in four regions:
Berlin, Germany in eu-central-2 and eu-central-3
Frankfurt, Germany in de
Logroño, Spain in eu-south-2
Choosing the right bucket region is crucial for optimizing your Cloud storage. Consider the following:
Proximity: Select a region that is close to your application or user base to reduce latency and costs.
Redundancy: For backups, consider a region geographically separate from your primary location to ensure data safety during local outages or disasters.
For information on supported regions based on the , see .
When naming buckets and folders, the name must adhere to the following rules:
Be unique throughout the entire .
Consist of 3 to 63 characters.
Start with a letter or a number.
Consist of lowercase letters (a-z) and numbers (0-9).
The use of hyphens (-), periods (.), and underscores (_) is conditional.
Note: The bucket name must not:
End with a period, hyphen, or underscore.
Include multiple periods in a row (...).
Contain hyphens next to periods.
Have the format of an IPv4 address (Example: 192.168.1.4).
Contain underscores if the bucket is to be used for auto-tiering later.
Following are a few examples of correct bucket naming:
data-storage-2023
userphotos123
backup-archive
1234
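The naming rules above can be expressed as a small client-side validator. This sketch checks syntax only; global uniqueness can be verified only by the service, and the "conditional" rules for hyphens, periods, and underscores are approximated with the restrictions listed in the note (the underscore restriction for auto-tiering is not modeled).

```python
import re

# Client-side sketch of the bucket naming rules listed above (syntax only).

IPV4 = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")

def is_valid_bucket_name(name: str) -> bool:
    if not 3 <= len(name) <= 63:
        return False                  # 3 to 63 characters
    if not re.fullmatch(r"[a-z0-9._-]+", name):
        return False                  # lowercase letters, digits, . _ - only
    if not name[0].isalnum():
        return False                  # must start with a letter or number
    if name[-1] in ".-_":
        return False                  # must not end with . - or _
    if ".." in name or ".-" in name or "-." in name:
        return False                  # no periods in a row, no hyphens next to periods
    if IPV4.match(name):
        return False                  # must not look like an IPv4 address
    return True

for candidate in ["data-storage-2023", "userphotos123", "1234",
                  "MyBucket", "192.168.1.4", "ab", "ends-"]:
    print(candidate, is_valid_bucket_name(candidate))
```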
Following are a few examples of incorrect bucket naming:
IONOS Object Storage authenticates users by using a pair of keys — Access Key and Secret Key.
An Object Storage key must be generated manually using or . Only upon generating the first key is the Canonical User ID displayed, in the Users & Groups > Users > Object Storage Keys > IDs section.
You will need the keys to work with Object Storage through supported applications or to develop your own using . Using the Key management, you can view and share your Object Storage keys and manage them.
There are two forms of user identification: Contract User ID and Canonical User ID. Depending on the bucket type you want to access, use the appropriate user ID as follows:
Share your Contract User ID with other users to get access to the contract-owned buckets and objects.
Share your Canonical User ID with other users to get access to the user-owned buckets and objects. This is the ID assigned to a user by the IONOS Object Storage.
For more information, see .
Note: Starting May 30, 2024, a new endpoint eu-central-3 is added in Berlin, Germany to support contract-owned bucket types.
In the Access keys list, each key shows whether it is valid for all buckets (contract-owned buckets and user-owned buckets) or valid only for user-owned buckets. The ADMIN KEY refers to a key that is valid for all buckets and provides the same access permissions as the contract owner or administrator.
Access Key and Secret Key Length: To prepare for new functionality in IONOS Object Storage, effective April 25, 2024, the key character lengths are modified as follows:
Access Key: The key length is increased from 20 to 92 characters.
Previous format example: 23cbca2790edd9f62100
New format example: EAAAAAFaSZEvg5hC2IoZ0EuXHRB4UNMpLkvzWdKvecNpEUF-YgAAAAEB41A3AAAAAAHnUDl-h_Lwot1NVP6F_MARJv_o
Secret Key: The key length is increased from 40 to 64 characters.
Previous format example: 0Q1YOGKz3z6Nwv8KkkWiButqx4sVmSJW4bTGwbzO
New format example: Opdxr7mG09tK4wX4s6J3nrl1Z4EJgYRui/rldkgiPmrI5bavWHuThswRqPwgbeLP
Note: The keys generated before April 25, 2024, continue to exist in the previous key length format and remain functional. However, these keys may not enable you to use the new functionalities in the Object Storage.
Generate object storage keys: A user can have multiple Object Storage keys, which can be given to other users or automated scripts. Anyone using such an additional Object Storage key to access the IONOS Object Storage automatically inherits the credentials and access rights of the key's owner.
Delete: If a key is no longer needed or if it should no longer be possible to gain access to the IONOS Object Storage with this key, it can be deleted. This cannot be undone.
Note: Deleting all the Object Storage keys does not affect the stored objects; however, the contract continues to be charged for the stored data. You can create a new key and continue working with Object Storage.
The following are a few limitations to consider while using IONOS Object Storage:
Access keys: A user can have up to five access keys.
Storage size: The minimum storage size available is 1 Byte of data and is extendable to a maximum storage of petabytes.
Bucket naming conventions: Only buckets for static website hosting can use dots (.) in the bucket names. For more information, see bucket .
Bucket count: A user can create up to 1000 contract-owned buckets and 500 user-owned buckets. For more information, see .
Bucket Policy size: The maximum allowed Bucket Policy size for a contract-owned bucket is 1 MiB, and for a user-owned bucket is 20 KiB.
Object size: The maximum allowed object size is 5 TiB.
Object name length: The maximum allowed length of the folder path, including the file name, is 1024 characters.
File upload size: A single upload request cannot exceed 5 GiB (5368709120 bytes) for contract-owned buckets and 4.65 GiB (5000000000 bytes) for user-owned buckets. If you have a single file exceeding this limit, you can bypass it using multipart uploads. CLI tools such as and graphical tools such as automatically handle larger files by breaking them into parts during uploading.
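The single-request cap above means larger files must go through multipart upload. This sketch computes how many parts a file needs for a chosen part size; the 100 MiB part size is an illustrative choice, not a service requirement, and part-size selection is up to the client tool.

```python
# How many multipart parts does a file need, given a per-part size?
# Constants mirror the limits stated above; the part size is illustrative.

GIB = 1024 ** 3
USER_OWNED_LIMIT = 5_000_000_000      # single-request cap for user-owned buckets (bytes)
PART_SIZE = 100 * 1024 ** 2           # 100 MiB parts (illustrative choice)

def parts_needed(total_bytes: int, part_size: int = PART_SIZE) -> int:
    """Number of multipart parts, i.e. ceil(total / part_size)."""
    return -(-total_bytes // part_size)

file_size = 12 * GIB                  # a 12 GiB file exceeds the single-request cap
assert file_size > USER_OWNED_LIMIT
print(parts_needed(file_size))        # 123 parts of 100 MiB (the last one partial)
```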
Bandwidth: Each connection is theoretically capped at approximately 10 Gbit/s per region. However, remember that this is a shared environment. Based on our operational data, achieving peak loads of up to 2x7 Gbit/s is feasible by leveraging parallel connections. However, this is on a best-effort basis and without any guaranteed Service Level Agreement (SLA).
Learn about the key components of IONOS Object Storage, its functions, and capabilities to manage your Object Storage.
To manage your buckets, objects, and keys in your Object Storage, refer to the following How-Tos that guide you with step-by-step instructions to complete the tasks.
The S3 (Simple Storage Service) API has been the global standard for object storage for many years. It provides interoperability and compatibility of various object storage systems that adhere to this standard. IONOS Object Storage has one of the highest levels of S3 API support.
IONOS Object Storage lets users create the following two types of buckets:
1. Contract-owned buckets
2. User-owned buckets
Each of these bucket types offers a different feature set. For more information, see .
Starting May 30, 2024, all newly launched regions have a contract owner as the bucket owner, and the administrator also holds the same set of permissions as a contract owner. You can continue creating user-owned buckets using specific endpoints, but the shift towards a contract-owned bucket model will be our primary focus for future features.
For more information, see documentation.
For each user, an Object Storage key must be generated manually using .
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
2. In the Key management tab, go to the Access keys section that lists all keys present in your account.
3. Select the key and toggle on or off the Active option to activate or deactivate the key respectively.
Result: The access key status is set as active when toggled on and deactivated when toggled off.
Warning:
— When you have only one access key, disabling this key causes you to lose access to all existing buckets. However, the objects remain, and usage costs continue to apply.
— To avoid losing access to your Object Storage buckets, keep at least one active access key. You can either generate a new access key or set an existing deactivated key as active.
Prerequisite: Only contract administrators and owners can set up the object storage and manage keys for other users. Make sure you have the corresponding permission.
1. In the DCD, go to Menu > Management > Users & Groups.
2. Select the user from the Users list and click the Object Storage Keys tab.
3. Select the Active checkbox next to the key you want to set as active. Clear the checkbox if you want to deactivate the key.
Result: The access key status is successfully updated.
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
2. In the Key management tab, go to the Access keys section that lists all keys present in your account.
3. Select the key to be deleted and click Delete.
Warning: Any access associated with this key will be revoked and cannot be restored.
4. To confirm the deletion of the key, click Delete.
Result: The access key is successfully deleted.
Prerequisite: Only contract administrators and owners can delete keys for other users. Make sure you have the corresponding permission.
1. In the DCD, go to Menu > Management > Users & Groups.
2. Select the user from the Users list and click the Object Storage Keys tab.
3. Select the key to be deleted from the list of keys and click Delete.
Warning: Any access associated with this key will be revoked and cannot be restored.
4. To confirm the deletion of the key, click OK.
Result: The access key is successfully deleted.
Using the DCD, you can search for objects in buckets if the prefix or full name is known. For technical reasons, it is not possible to search for objects across buckets or folders.
To search for an object, follow these steps:
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
2. From the drop-down list in the Buckets tab, choose either Show user-owned buckets or Show contract-owned buckets depending on the bucket type you want to view.
3. From the Buckets list, choose the bucket in which you want to search for objects.
4. In the Search by Prefix field, enter the prefix or file name to search for.
Result: The objects matching your search criteria are displayed.
If you have enabled versioning for your Object Storage bucket, you have the flexibility to download non-current versions of objects. Toggle the Show versions option to view objects that are versioned. Objects that were uploaded to the object storage before versioning was activated are identified by the version ID null. If versioning is deactivated, existing object versions are retained.
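The behavior described above can be sketched as a toy model: objects uploaded before versioning is enabled carry the version ID null, and once versioning is on, each upload of the same key gets a fresh, unique version ID. The Bucket class here is purely illustrative; real version IDs are opaque strings issued by the service.

```python
import uuid

# Toy model of bucket versioning: pre-versioning objects get the ID "null";
# with versioning enabled, every upload of a key adds a unique new version.

class Bucket:
    def __init__(self):
        self.versioning = False
        self.versions = {}            # key -> list of (version_id, data)

    def put(self, key, data):
        vid = uuid.uuid4().hex if self.versioning else "null"
        if not self.versioning:
            self.versions[key] = []   # unversioned: keep only the latest copy
        self.versions.setdefault(key, []).append((vid, data))
        return vid

bucket = Bucket()
bucket.put("report.pdf", b"v0")       # uploaded before versioning: ID "null"
bucket.versioning = True
bucket.put("report.pdf", b"v1")       # now gets a unique version ID
bucket.put("report.pdf", b"v2")
ids = [vid for vid, _ in bucket.versions["report.pdf"]]
print(ids[0])                         # the "null" version is retained
print(len(ids))                       # 3
```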
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
2. From the drop-down list in the Buckets tab, choose either Show user-owned buckets or Show contract-owned buckets depending on the bucket type you want to view.
3. From the Buckets list, choose the bucket or folder from which you want to download an object.
4. (Optional) To view the object's versions, toggle on Show versions. This option is available only if Versioning is enabled for the bucket.
Info: You may toggle off Show versions to view only the newest version of the objects.
5. Choose the object or object's version to download and click on the respective object's action menu (three dots).
6. Click Download.
7. (Optional) Use the Copy URL option to copy the object's URL to the clipboard.
Result: The object is successfully downloaded.
If you no longer want to keep the objects in the IONOS Object Storage, these objects can be deleted. Deleted objects are not physically removed from the Object Storage, but receive a 'delete marker' and then have a size of 0 KB. These markers are deleted at an interval specified by the user or by the system.
There are two ways to delete objects from the IONOS Object Storage using the DCD - manually and automatically.
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
2. From the drop-down list in the Buckets tab, choose either Show user-owned buckets or Show contract-owned buckets depending on the bucket type you want to view.
3. From the Buckets list, choose the bucket from which you want to delete an object.
4. Choose the object to delete and click on the respective object's action menu (three dots). Alternatively, you can also select the object to delete and click Delete selected objects.
5. Click Delete.
6. Confirm the deletion of the object by choosing Delete.
Result: The object is successfully deleted from the bucket.
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
2. From the drop-down list in the Buckets tab, choose either Show user-owned buckets or Show contract-owned buckets depending on the bucket type you want to view.
3. From the Buckets list, choose the bucket from which you want to delete a folder.
4. Choose the folder to delete and click on the respective folder's action menu (three dots). Alternatively, you can also select the folder to delete and click Delete selected objects.
5. Click Delete.
6. Confirm the deletion of the folder by choosing Delete. If the folder contains objects, you see an option Empty and delete, which deletes all the objects within the folder and then deletes the folder itself.
Result: The folder is successfully deleted from the bucket.
You can delete multiple objects and folders in a bucket at a time by following these steps:
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
2. From the drop-down list in the Buckets tab, choose either Show user-owned buckets or Show contract-owned buckets depending on the bucket type you want to view.
3. From the Buckets list, choose the bucket from which you want to delete objects and folders.
4. Select the checkboxes next to the names of the objects and folders to be deleted.
5. (Optional) To delete all objects and folders, select the checkbox next to the names of the objects and folders.
6. Click Delete selected objects.
7. Confirm the deletion of selected objects and folders by choosing Delete. If a folder contains objects, you see an option Empty and delete, which deletes all the objects within the folder and then deletes the folder itself.
Result: The objects and folders selected for deletion are successfully deleted from the bucket.
IONOS Object Storage allows users to create the following two types of buckets:
1. Contract-owned buckets
2. User-owned buckets
Each bucket type has a different feature set to cater to your business needs. For more information, see .
Note: Starting from May 30, 2024, all newly launched regions use a contract owner as the bucket owner. You can still create user-owned buckets using specific endpoints, but this shift toward a contract-owned bucket model will be the primary focus for future Object Storage updates.
This bucket type is recommended for users within a single organization. Contract-owned buckets are the new bucket types supported in the Object Storage starting May 30, 2024.
Following are the key highlights of this bucket type:
The contract owner is the bucket owner of all the contract buckets. The contract owner or an administrator can create and manage buckets by default and define permissions in the Bucket Policy settings for other users to manage the buckets.
Every user in the contract can view the list of all buckets within their contract, even if they do not have permission to access the content.
Only the contract owner or an administrator can grant users access to view the bucket objects or manage these buckets.
You can create contract-owned buckets only in the following region:
Data Center | Region |
---|---|
Currently, cross-regional bucket replication is not possible for contract-owned buckets, since this bucket type is supported only in the eu-central-3 region. However, you can replicate user-owned buckets to contract-owned buckets in the eu-central-3 region, and this function is supported both via the DCD and the API.
Logging bucket setting is not supported at the moment.
This bucket type is recommended if the users of the contract are separate entities and do not require viewing or accessing buckets of other users in the contract. The bucket type supported before the launch of contract-owned buckets is now termed user-owned buckets.
Every contract user independently owns their buckets and has the autonomy to create and manage them without seeking approval from the contract owner. A combined list of all user-owned buckets under the contract is not available, and the contract owner must individually check the bucket list of the users to view the buckets a user owns.
Users under the contract have visibility only into their own buckets, with no access to other users' bucket lists in the contract.
You can create user-owned buckets only in the following regions:
User-owned and contract-owned buckets offer a wide range of operations, with the following differences in their feature sets:
IONOS Object Storage provides multiple features to manage access to your buckets and objects effectively. This allows you to define precisely who may access what.
By default, newly created user-owned buckets and objects are private, and only the bucket owner can access them. In the case of the newly created contract-owned buckets, the buckets and objects are private, and both the contract owner and administrators can access and manage them.
Use the following options to share access to a bucket and to all or specific objects in a bucket:
Bucket Policy: This policy is applied at the bucket level and offers a robust framework for setting fine-grained access controls on your Object Storage buckets and objects. It is useful for restricting access based on certain conditions like IP addresses or time of access. With Bucket Policy, you can manage access to specific objects or prefixes within a bucket. However, the size of the policy is limited, which could be a consideration if you have extensive access control requirements. You can use Bucket Policy to make a bucket or object public, or to share access with specific authorized users by defining the necessary permissions within the policy.
Access Control List (ACL): Provides a simpler mechanism for controlling access and can be specified for every object if needed, making ACLs more flexible on a per-object basis. You can use ACLs to make a bucket or object public or to share access with certain authorized users by setting the right permissions. ACLs do not offer the ability to restrict access based on conditions like IP address.
There are two roles involved in granting access: Owner and Grantee. Their definitions depend on the bucket type.
Owner: The contract owner owns all the buckets. Administrators have the same permissions as the contract owner but must use the key that is created after they have become administrators.
Grantee: Refers to users or Object Storage-defined user groups to whom permissions are granted, specifying which buckets and objects they may access. A grantee could be any of the following:
A user of the same contract according to the Bucket Policy defined by the contract owner or administrator.
Another contract using ACL. If you share contract access, all contract users are granted access.
Specific users of another contract according to the Bucket Policy defined by the contract owner or administrator.
Predefined groups: All users and authenticated users of IONOS Object Storage (users from any contract). Both ACL and Bucket Policy support this function.
Owner: The user who creates the bucket is called the owner. Each user owns buckets of their account.
Grantee: Refers to users or Object Storage-defined user groups to whom permissions are granted, specifying which buckets and objects they may access. A grantee could be any of the following:
A user from the same contract.
A user from another contract.
Predefined groups: All users, authenticated users of IONOS Object Storage (users from any contract), and Log Delivery Group.
Note: Granting access to a bucket for another IONOS user does not make the bucket appear in the DCD due to the Object Storage protocol's architecture. To access the bucket, the user will need to utilize other S3 Tools, as the granted access does not translate to interface visibility.
Manage your Object Storage buckets, objects, and their access permissions effectively using the data management, access management, and public access settings.
From the Object Storage Keys list, select the respective key you want to use. Copy the Key value and Secret value from the respective fields to sign in to other Object Storage applications.
On the Object Properties page, the following properties are displayed for any object under a bucket:
Properties | Description |
---|---|
Versions: Versioning objects enables the preservation, retrieval, and restoration of all versions of objects in your bucket. When versioning is enabled for a bucket, every time an object is uploaded to it, a new version of that object is created, and each version has a unique version ID. For more information, see .
Access Control List (ACL): The object Access Control List (ACL) contains access control for each object, defining which user accounts can read, write, or modify objects within a bucket. You can share access to a bucket and to all or specific objects in a bucket. The access permissions defined at the bucket level also influence the object access in a bucket. For more information, see .
Note: For contract-owned buckets, you cannot share access to a specific object with users from the same contract using ACL. Instead, use .
Object Lock: Prevents objects from being deleted or overwritten for a specified amount of time or indefinitely. It is beneficial for compliance or regulatory reasons. Currently, enabling Object Lock is possible only during bucket creation. For more information, see .
Multipart upload: This breaks down a single large object into smaller parts and uploads these parts to the bucket, maximizing the upload speed. For more information, see .
Example | Reason for Incorrectness |
---|---|
Logging on to IONOS Object Storage requires an access key as part of the authentication process. Your Object Storage credentials consist of an Access Key and a Secret Key. The DCD automatically uses these credentials to set up Object Storage. Hence, deactivating an access key restricts your access through the web interface. These credentials are also required to set up access to IONOS Object Storage using .
— All the newly generated keys from April 25, 2024, are valid for both bucket types by default and are usable at all the endpoints.
— The keys generated before April 25, 2024, will only have access to the user-owned buckets and be usable only in the endpoints that support user-owned buckets. For more information, see .
This can be useful for allowing users automated (scripted) or temporary access to object storage. For more information, see .
Note: A maximum of five object storage keys per user is possible. You can create technical users to assign a different set of permissions and share access to the bucket with them. For more information, see .
Activate or deactivate keys: A key, when generated, is in an active state by default. You can change the key status between active and inactive. Deactivating an Object Storage key blocks its access to the IONOS Object Storage. You can reactivate the key and restore access to manage buckets and objects. For more information, see .
— You need to delete all the objects from the user-owned bucket before you delete a user or all of their Object Storage Keys from your account; otherwise, the contract continues to be charged for the stored data. In this case, contact .
Warning: When you have only one access key with existing buckets, you cannot delete this key. Create a new access key before deleting the selected key, or delete the existing buckets and then continue with deleting the last access key.
Use the or to manage Object Storage access keys.
Versioning is not enabled by default. For more information, see .
You can also automate the deletion of objects using .
Data Center | Region |
---|---|
Features | Contract-owned buckets | User-owned buckets |
---|---|---|
For information on supported API functions for these bucket types, see .
Share Objects with Pre-Signed URLs: An excellent choice for securely providing temporary access to your objects. Essential for sharing files with someone without requiring them to have an IONOS account, and for granting temporary access to authorized users for a specified period, after which the URL expires.
Cross-Origin Resource Sharing (CORS): If you allow public access to your bucket, you can specify which domains can make cross-origin requests to your Object Storage using this function. It is useful when you need to serve resources from your bucket to web applications hosted on different domains.
Block Public Access: Overrides any other permissions applicable on buckets and objects. Maintaining your data's privacy is essential. Using Block Public Access, you can ensure your buckets and objects are not accidentally made public and are accessible only to authorized individuals or systems. Currently, this feature is available only via the API.
Within the confines of the same data center | Local traffic |
Located in the same country but at a different data center | National traffic |
Located in a data center in a different country | Public traffic |
Type | Defines the object (file) type such as image, pdf, zip, and so on. |
Size | The file size, shown in units such as MB, KB, and so on. |
Modified on | The date and time when the object was last modified is displayed here. |
Version ID | Represents a unique object version. If versioning is enabled for the bucket, every object in that bucket is assigned a unique version ID. If versioning is not enabled for a bucket, no version ID is available for the object. |
| Contains uppercase letters. |
| Contains periods which might cause SSL issues. |
| Too short, less than 3 characters. |
| Exceeds the 63 character limit. |
| Ends with a hyphen. |
| Allowed but not a recommended naming convention. |
Frankfurt, Germany | de |
Berlin, Germany | eu-central-2 |
Logroño, Spain | eu-south-2 |
Feature | Supported by contract-owned buckets | Supported by user-owned buckets |
---|---|---|
Bucket Create, Read, Update, Delete (CRUD) | Yes | Yes |
Object CRUD | Yes | Yes |
Object Copy | Yes, only for buckets without encryption. | Yes, cross-regional copying is not supported. |
Multipart Uploads | Yes | Yes |
Pre-Signed URLs | Yes | Yes |
Bucket ACLs | Yes, but without the Logging Group. | Yes |
Object ACLs | Yes | Yes |
Block Public Access | Yes, only via the API. | Yes, only via the API. |
Bucket Policy | Yes | Yes |
CORS Configuration | Yes | Yes |
Bucket Versioning | Yes | Yes |
Bucket Replication | Not supported, as contract-owned buckets are currently available only in the eu-central-3 region. | Yes, intraregional and cross-regional replication are supported. |
Bucket Tagging | Yes, only via the API. | Yes, only via the API. |
Object Tagging | Yes, only via the API. | Yes, only via the API. |
Bucket Lifecycle | Yes | Yes |
Bucket Access Logging | No | Yes |
Bucket Encryption Configuration | Yes, only via the API. | Yes, only via the API. |
Object Encryption | Yes, server-side encryption is used by default in the web interface. The encryption with customer-managed encryption keys is available via the API. | Yes, server-side encryption is used by default in the web interface. The encryption with customer-managed encryption keys is available via the API. |
Bucket Website | Yes, including support for redirects through the API reference. | Yes |
Bucket Inventory | No | Yes, only via the API. |
Object Lock | Yes | Yes |
Legal Hold | Yes | Yes |
Object Ownership | No | Yes |
Identity and Access Management (IAM) | No, available in the near future. | No |
Security Token Service (STS) | No | No |
Multi-factor Authentication | No | No |
Bucket Notifications | No | No |
Request Payment | Yes | No |
Bucket Metrics | No | No |
Bucket Analytics | No | No |
Bucket Accelerate | No | No |
Object Query | Yes | No |
Berlin, Germany | eu-central-3 |
An Access Control List (ACL) is a mechanism that defines who can access or modify specific resources, such as buckets and objects. ACLs allow resource owners to grant varying levels of permissions such as read, write, or full control to different users or groups.
Note: ACL is supported for both contract-owned buckets and user-owned buckets. For contract-owned buckets, sharing access with users is available only for grantees from other contracts. For more information, see Bucket Types.
Note: Due to the granularity limitations and the complexity of managing permissions across a large scale of resources and users, we recommend using Bucket Policy instead of ACLs.
You can use ACLs to make a bucket or object public or to share access with certain authorized users by setting the right permissions. IONOS Object Storage offers the following ACL management methods:
The feature is available in the IONOS Object Storage Service Availability regions and supports both contract-owned buckets and user-owned buckets.
Use Bucket Policy instead of ACLs; it offers the following additional capabilities:
Manage access to prefixes like /folder/* or *.jpg.
Use conditions to grant access, for example, IP address.
Allow or deny certain actions like listing the object list.
We recommend using Share Objects with Pre-Signed URLs instead of ACL for granting temporary access to authorized users for a specified period, after which the URL expires.
If you have defined ACLs granting public access, activating the Block Public Access revokes these permissions, ensuring your data remains private. This feature is invaluable in scenarios where ensuring data privacy is paramount, or when you want to enforce a blanket no-public-access rule, irrespective of ACL settings. Currently, Block Public Access is available only via the IONOS Object Storage API.
Bucket Policy is a JSON-based access policy language that allows you to create fine-grained permissions for your Object Storage buckets. With Bucket Policy, you can specify which users or services can access specific objects and what actions users can perform.
Note: Bucket Policy is supported for both contract-owned buckets and user-owned buckets. The maximum allowed Bucket Policy size for a contract-owned bucket is 1 MiB, and for a user-owned bucket, 20 KiB. For more information, see Bucket Types.
Note: Granting access of a user-owned bucket to another IONOS user does not make the bucket appear in the user's Object Storage in the DCD as the granted access does not translate to interface visibility due to the S3 protocol's architecture. To access the bucket, the user must utilize other S3 Tools.
Use this feature to grant access to a specific user or group to only a subset of the objects in your bucket.
Restrict access to certain operations on your bucket, for example, list objects or remove object lock.
Using Bucket Policy, you can grant access based on conditions, such as the IP address of the user.
Create fine-grained access control rules to allow a user to put objects to a specific prefix in your bucket, but not to get objects from that prefix.
Use Bucket ACL and Object ACL instead of Bucket Policy if you need to define different sets of permissions such as READ, WRITE, or FULL_CONTROL to many objects.
Use Share Objects with Pre-Signed URLs to grant temporary access to authorized users for a specified period, after which the URL and the access to the object expire.
A JSON-formatted bucket policy contains one or more policy statements. Within a policy's statement blocks, IONOS Object Storage supports the following policy statement elements and their values:
Id (optional): A unique identifier for the policy. Example: SamplePolicyID.
Version (required): Specifies the policy language version. The current version is 2012-10-17.
Statement (required): An array of individual statements, each specifying a permission.
Sid (optional): Custom string identifying the statement. For example, Delegate certain actions to another user.
Action (required): Specifies the action(s) that are allowed or denied by the statement. See the Action section in the Request for the supported values. Example: s3:GetObject for allowing read access to objects.
Effect (required): Specifies the effect of the statement. Possible values: Allow, Deny.
Resource (required): Must be one of the following:
arn:aws:s3:::<bucketName> – For bucket actions (such as s3:ListBucket) and bucket subresource actions (such as s3:GetBucketAcl).
arn:aws:s3:::<bucketName>/* or arn:aws:s3:::<bucketName>/<objectName> – For object actions (such as s3:PutObject).
Condition (optional): Specifies conditions for when the statement is in effect. See the Condition section in the Request for the supported values. Example: {"aws:SourceIp": "123.123.123.0/24"} restricts access to the specified IP range. For the list of supported bucket and object actions and condition values, see Supported Action Values.
Principal (required): Specifies the user, account, service, or other entity to which the statement applies. The supported forms depend on the bucket type:
For contract-owned buckets:
"AWS": "*" – Statement applies to all users (also known as 'anonymous access').
"AWS": "arn:aws:iam:::user/<contractNumber>" – Statement applies to the specified contract number.
"AWS": ["arn:aws:iam:::user/<contractNumber>:<UUID1>", "arn:aws:iam:::user/<contractNumber>:<UUID2>", …] – Statement applies to the specified IONOS Object Storage users.
For user-owned buckets:
{"CanonicalUser": "*"} – Statement applies to all users (also known as 'anonymous access').
"CanonicalUser": ["<canonicalUserId>", "<canonicalUserId>", ...] – Statement applies to the specified IONOS Object Storage users.
For more information, see Bucket Policy Examples and Supported Action Values.
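Putting these elements together, a hypothetical policy might look like the following sketch. The Sid, bucket name, user ARN, and IP range are placeholders; substitute your own values:

```json
{
  "Id": "SamplePolicyID",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadFromOfficeNetwork",
      "Effect": "Allow",
      "Principal": { "AWS": ["arn:aws:iam:::user/<contractNumber>:<UUID1>"] },
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::<bucketName>/*"],
      "Condition": { "IpAddress": { "aws:SourceIp": "123.123.123.0/24" } }
    }
  ]
}
```

This single statement grants one contract user read access to all objects in the bucket, but only from the specified IP range.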
You can apply Bucket Policy using the DCD by following these steps:
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
2. From the drop-down list in the Buckets tab, choose either Show user-owned buckets or Show contract-owned buckets depending on the bucket type you want to view.
3. From the Buckets list, choose the required bucket and click the Bucket settings.
4. Go to the Bucket Policy setting under the Access management section and click Edit.
5. Copy and paste the provided JSON policy by replacing BUCKET_NAME and USER_ID with the actual values. Depending on the Bucket Types, replace the USER_ID as follows:
Use Contract user ID for contract-owned buckets.
Use Canonical user ID for user-owned buckets.
Info: You can retrieve your user ID from the Key management section. For more information, see Retrieve User ID.
6. Click Save.
Result: This action grants the specified user full access to your bucket.
Info: You have the option to restrict actions, define the scope of access, or incorporate conditions into the Bucket Policy for more tailored control. For more information, see Examples.
You can delete a Bucket Policy at any time: open the Bucket Policy section in the Bucket settings and click Delete.
Info: Removing a bucket policy is irreversible; it is advised to keep a backup copy of the policy before deleting it.
Use the API to manage the Bucket Policy configuration.
Use the CLI to manage Bucket Policy.
If you have defined a bucket policy to grant public access, activating the Block Public Access feature will revoke these permissions, ensuring your data remains private. This feature is invaluable in scenarios where ensuring data privacy is paramount, or when you want to enforce a blanket no-public-access rule, irrespective of Bucket Policy settings. Currently, Block Public Access is available only via the IONOS Object Storage API.
The following are a few examples of common use cases and their corresponding bucket policy configurations.
Prerequisite: You can retrieve the Contract User ID and Canonical User ID from the Key Management section by following the steps in Retrieve User ID.
To grant full control over a contract-owned bucket or a user-owned bucket and its objects to other IONOS Object Storage users:
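For instance, a sketch of such a full-control policy for a contract-owned bucket; the bucket name, contract number, and user UUID are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": ["arn:aws:iam:::user/<contractNumber>:<UUID>"] },
      "Action": ["s3:*"],
      "Resource": [
        "arn:aws:s3:::<bucketName>",
        "arn:aws:s3:::<bucketName>/*"
      ]
    }
  ]
}
```

For a user-owned bucket, the Principal would instead use the CanonicalUser form shown in the Principal element description.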
To grant read-only access to objects within a specific prefix of a contract-owned bucket to other IONOS Object Storage users:
To grant read-only access to objects within a specific prefix of a user-owned bucket to other IONOS Object Storage users:
To allow read access to certain objects within a contract-owned bucket or a user-owned bucket while keeping other objects private:
To restrict all users from performing any S3 operations within the designated bucket type, unless the request is initiated from the specified range of IP addresses:
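A sketch of such an IP-based restriction, using a Deny statement with the NotIpAddress condition; the bucket name and IP range are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": { "AWS": "*" },
      "Action": ["s3:*"],
      "Resource": [
        "arn:aws:s3:::<bucketName>",
        "arn:aws:s3:::<bucketName>/*"
      ],
      "Condition": { "NotIpAddress": { "aws:SourceIp": "123.123.123.0/24" } }
    }
  ]
}
```

Requests from outside the listed range are denied regardless of any other Allow statements, since Deny takes precedence.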
For more information on bucket policy configurations, see Bucket Policy, supported bucket and object actions and condition values, and Retrieve user ID.
Feature | Contract-owned buckets | User-owned buckets |
Bucket ACL | Use to share access between users of the contract and to other contracts. |
Bucket Access Logging | Not supported | Supported |
Bucket Replication | Cannot replicate contract-owned buckets as this bucket type is currently supported only in the eu-central-3 region. | Supports replication within user-owned buckets of the same user as well as replication to contract-owned buckets. |
Identity and Access Management (IAM) | Not supported; available in the near future. | Not supported |
Object Query | Supported | Not supported |
Redirects for Static Website Hosting | Supported | Not supported |
Free data transfer to VMs within the same region | Supported | Not supported |
Using an Object Storage endpoint with a Managed Network Load Balancer (NLB) creates a secure connection to use IONOS Object Storage within your work environment.
To access Object Storage from a private LAN using NLB, follow these steps:
Prerequisites:
— Set up an NLB by following the steps in Create an NLB. If a load balancer already exists, it already has a private IP address.
— Use the public IP addresses of the desired Endpoints as the Target IP address.
1. In the DCD, select the NLB element to open its properties in the Inspector pane on the right.
2. In the Settings, provide the information such as Name, Primary IPv4, and Add IP settings. Adding one or more additional Listener IPs is optional. For more information, see Settings.
Note: Public IPs must be reserved first. You can reserve public IPs by following the steps in Reserve an IPv4 address.
3. In the Private IPs, add the private IP. To do so, follow the steps in Add and delete IPs.
4. In the Forwarding rules, add a forwarding rule as follows:
Select the Private IP as the Listener IP of the forwarding rule.
Choose any algorithm.
Use TCP as the protocol, which is the default value.
For more information, see Create a rule.
5. Add a target by using these values:
Target IP: Select a corresponding Target IP value that is the public IP address of the desired endpoint.
The following is an example of the IP address values obtained for the endpoints:
Target Port: Use the value 443. This is the specific port on which a service or application is running on a server.
Weight: Enter a target weight from 1 to 256.
Proxy Protocol: Choose none to disable the proxy protocol.
For more information, see the steps in Create a target.
6. Click PROVISION CHANGES to save the configurations and apply them.
7. Configure /etc/hosts on the backend server. For example, run the following command to open the file with sudo privileges: sudo nano /etc/hosts.
Edit the file by adding a new line with the private listener IP address followed by the endpoint. This maps the endpoint's domain name to the private IP address of your NLB.
Example:
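A minimal sketch of such an entry; the listener IP and the endpoint hostname are illustrative placeholders, so substitute the private listener IP of your NLB and the endpoint for your region:

```
# /etc/hosts on the backend server
# <private listener IP>   <Object Storage endpoint>
10.7.228.50   s3.eu-central-2.ionoscloud.com
```

After saving the file, requests from the backend server to the endpoint hostname are routed to the NLB's private IP.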
Result: The private LAN using NLB is successfully set up to access Object Storage.
By default, objects in the IONOS Object Storage are private and only the bucket owner has permission to access them. Only the bucket owner can generate a pre-signed URL for objects and grant time-bound permission to other users to access these objects. It is a secure and user-friendly way to share private objects stored in your Object Storage with other users.
Note: For a contract-owned bucket, in addition to the bucket owner, the administrator has permission to generate a pre-signed URL for objects and grant time-bound permission to other users to access these objects.
This way, the objects are made publicly available to users with the object's pre-signed URL; however, you can limit the period of access to the object.
Pre-signed URLs are ideal for providing temporary access to a specific object without needing to change the object's permissions or share your credentials with other users.
Pre-signed URLs also allow other users to upload objects directly to your Object Storage bucket without you needing to share access and secret keys with them.
You can generate a pre-signed URL to share objects through one of the following methods:
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
2. From the drop-down list in the Buckets tab, choose either to Show user-owned buckets or Show contract-owned buckets depending on the bucket type you want to view.
3. From the Buckets list, choose the bucket from which you want to share the objects. The list of objects in the bucket is listed.
4. Select the object to share and click Generate Pre-Signed URL.
5. Enter the expiration time for the URL and choose whether the specified time refers to seconds, minutes, hours, or days.
6. Click Generate.
7. Copy the generated pre-signed URL and share it with users who require access to this object.
Result: The pre-signed URL for the selected object is generated successfully and copied to the clipboard. The URL is valid for the period defined during URL generation.
Prerequisites: — Set up the AWS CLI by following the installation instructions.
— Make sure to consider the supported Endpoints.
Generate a pre-signed URL for my-object.txt in the my-bucket bucket which will expire in 3600 seconds:
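Assuming the AWS CLI is already configured with your Object Storage credentials, the command looks like this; the endpoint URL is a placeholder for your region's endpoint:

```shell
aws s3 presign s3://my-bucket/my-object.txt \
    --expires-in 3600 \
    --endpoint-url https://<your-object-storage-endpoint>
```

The command prints the signed URL to standard output; share that URL with the intended recipient.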
The creation of pre-signed URLs does not involve a dedicated API by design. These URLs are generated locally via a signing algorithm using your credentials without relying on the S3 API. To create these URLs, use the appropriate SDK for your programming language.
IONOS Object Storage is S3-compatible, allowing seamless integration with any SDK supporting the S3 protocol for tasks like generating pre-signed URLs. For generating pre-signed URLs using SDKs, see the following AWS methods: Python, Go, Java 2.x., JavaScript 2.x., JavaScript v3, and PHP Version 3.
Replication allows you to create and manage replicas of your data across Endpoints.
Note:
— Replication is currently supported only for user-owned buckets and is available in the de, eu-central-2, and eu-south-2 regions.
— You can also replicate user-owned buckets to contract-owned buckets in the eu-central-3 region. This function is supported both via the DCD and the API.
Note: Replication is not supported for contract-owned buckets since this bucket type is available only in the eu-central-3 region.
Disaster Recovery: In the event of a regional outage, your data remains accessible from another region.
Compliance Requirements: Meet legal and compliance mandates by storing copies of data in different geographical locations.
Latency Reduction: Serve data from the nearest region to your users, minimizing latency and improving performance.
Data Aggregation: Aggregate logs or other data from multiple buckets to a central bucket, where they can be analyzed.
Only objects directly uploaded into a bucket by a client application are replicated.
Replication traffic, including cross-region replication, does not count towards data usage; Object Storage thus offers free data transfer for replication.
Objects are not replicated if they are themselves replicas from another source bucket.
In the case of an object deletion request specifying the object version, the object version is deleted from the source bucket but not from the destination bucket.
If an object deletion request does not specify the object version, the deletion marker added to the source bucket is replicated to the destination bucket.
With bi-directional replication, you can configure two buckets to replicate to each other. For example, objects directly uploaded into bucket1 are copied to bucket2, and objects directly uploaded into bucket2 are replicated to bucket1. Replicated objects, however, are never copied back into the source bucket, which avoids an endless replication loop.
You can manage Replication using the DCD, API, and CLI.
Prerequisite: Versioning must be enabled for source and destination buckets.
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
2. From the Buckets list, choose the bucket for which the Replication rule must be added and click Bucket settings.
3. Go to the Replication setting under the Data management section and click Add a rule.
4. Enter a Rule name.
5. Choose the Replication scope. You can either apply the replication rule to all objects in the bucket or limit to objects filtered by prefix.
Info: Use a prefix that is unique and does not include the source bucket name.
6. Browse Object Storage to choose the Destination bucket.
7. Click Add a rule.
Result: The Replication rule is successfully added and automates the replication of objects between the source and destination bucket.
Info: Using the same Replication bucket settings, you can enable, disable, modify, and delete an existing rule. It takes up to a few minutes for the deletion of a Replication rule to propagate fully.
Use the API to manage the replication of objects.
Use the CLI to manage Replication.
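As a sketch, a version-1 replication configuration as passed to aws s3api put-bucket-replication might look like the following. The rule ID, prefix, and destination bucket are placeholders, and the empty Role field is an assumption carried over from the AWS schema; note that Prefix is used rather than the unsupported Filter option:

```json
{
  "Role": "",
  "Rules": [
    {
      "ID": "replicate-logs",
      "Prefix": "logs/",
      "Status": "Enabled",
      "Destination": { "Bucket": "arn:aws:s3:::my-destination-bucket" }
    }
  ]
}
```

Both buckets must already have Versioning enabled before this configuration is applied.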
Replication configuration is possible only if Versioning is enabled for both source and destination buckets participating in Replication.
Each Replication rule serves to identify a specific prefix for replication, and it must be unique.
Version 2 of the AWS S3 specification for the Replication configuration XML is not supported; only version 1 is currently supported.
The following options are not supported: DeleteMarkerReplication, EncryptionConfiguration, ReplicationTime, ExistingObjectReplication, Filter (use Prefix instead), Priority, SourceSelectionCriteria, AccessControlTranslation, Account, and Metrics.
Replication is not possible in the following cases:
A source bucket that has Object Lock enabled. However, an Object Lock enabled bucket can be a destination bucket for Replication.
A source bucket that has Lifecycle for auto-tiering enabled.
Objects uploaded before enabling Replication.
Objects encrypted by the SSE-C method.
Objects that are themselves replicas from other source buckets. For example, if you configure bucket1 to replicate to bucket2, and you configure bucket2 to replicate to bucket3, then an object that you upload to bucket1 is replicated to bucket2 but is not replicated from there on to bucket3. Only objects you directly upload into bucket2 are copied to bucket3.
Versioning allows you to keep multiple versions of the same object, and it must be enabled for both the source and the target bucket before enabling Replication.
Versioning allows you to keep multiple versions of the same object. Upon enabling Versioning for your bucket, each version of an object is considered a separate entity contributing to your storage space usage. Every version represents the full object, not just the differences from its predecessor. This aspect will be evident in your usage reports and will influence your usage-based billing.
Note: Versioning is supported for both contract-owned buckets and user-owned buckets. For more information, see Bucket Types.
Data Recovery: Versioning can be used as a backup solution for your data. If you accidentally overwrite or delete an object, you can restore it to a previous version.
Tracking Changes: Versioning can be used to track changes to your data over time. This can be useful for debugging purposes or auditing changes to your data.
Buckets can exist in one of three states:
Unversioned: Represents the default state. No versioning is applied to objects in a bucket.
Versioning - enabled: In this state, each object version is preserved.
Versioning - suspended: No new versions are created, but existing versions are retained.
Objects residing in your bucket before the activation of versioning possess a version ID of null. Once versioning is enabled, it cannot be disabled but can be suspended. During suspension:
New object versions are not created.
Existing object versions are retained.
You can resume Versioning anytime, with new versions being created henceforth.
Upon enabling Versioning for a bucket, every object version is assigned a unique, immutable Version ID, serving as a reliable reference for different object versions. New object versions are generated exclusively through PUT operations; actions such as COPY entail a PUT operation and thus spawn a new version.
Notably, a new Version ID is allocated for each version, even if the object content remains unaltered. Objects residing in the bucket before the activation of versioning bear a Version ID of null.
When an object is deleted, all its versions persist in the bucket, while Object Storage introduces a delete marker, which is also assigned its Version ID.
You can manage Versioning using the DCD, API, and CLI.
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
2. From the drop-down list in the Buckets tab, choose either Show user-owned buckets or Show contract-owned buckets depending on the bucket type you want to view.
3. From the Buckets list, choose the bucket for which you want to manage Versioning.
4. Click Bucket settings and go to the Versioning setting under the Data management section.
5. In the Versioning setting, click Enable to create object versions. Choosing Disable suspends object versioning but preserves existing object versions.
Result: Based on the selection, Versioning is either enabled or disabled for objects in the bucket.
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
2. From the drop-down list in the Buckets tab, choose either to Show user-owned buckets or Show contract-owned buckets depending on the bucket type you want to view.
3. From the Buckets list, choose the bucket in which the desired object exists.
4. Click the object name within the bucket listing.
5. Navigate to the object's Versions tab by clicking the object name or the three dots next to the object name.
6. Copy Version IDs or download non-current versions of the object. You can also select and delete non-current object versions.
Result: Based on the selection, Version IDs and non-current object versions are successfully managed.
Use the API to configure and manage Versioning for a bucket.
Use the CLI to manage Versioning.
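For instance, a sketch of enabling and checking Versioning with the AWS CLI; the bucket name and endpoint URL are placeholders:

```shell
aws s3api put-bucket-versioning --bucket my-bucket \
    --versioning-configuration Status=Enabled \
    --endpoint-url https://<your-object-storage-endpoint>

aws s3api get-bucket-versioning --bucket my-bucket \
    --endpoint-url https://<your-object-storage-endpoint>
```

To suspend Versioning instead, pass Status=Suspended in the same configuration.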
For a bucket with Object Lock enabled, Versioning is automatically enabled and cannot be suspended.
For Bucket Replication to function correctly, Versioning must be enabled.
IONOS Object Storage allows the setup of lifecycle rules for managing both current and non-current versions of objects in versioning-enabled buckets. For instance, you can automate the deletion of non-current object versions after a specified number of days after they transition to a non-current status. For more information, see Lifecycle.
Object Lock is a feature that enables you to apply WORM protection to objects, preventing them from being deleted or modified for a specified duration. It provides robust, programmable safeguards for storing critical data that must remain immutable. Enabling Object Lock automatically enables bucket Versioning.
Warning: Once a bucket is created without Object Lock, you cannot enable it later.
Note: Object Lock is supported for both contract-owned buckets and user-owned buckets. For more information, see Bucket Types.
Data Preservation: Protects critical data from accidental or malicious alteration and deletion, ensuring integrity and consistency.
Regulatory Compliance: Aligns with European regulations such as GDPR, Markets in Financial Instruments Directive (MiFID) II, and the Electronic ID and Trust Services (eIDAS) regulation, maintaining records in an unalterable state.
Legal Holds and Audits: Facilitates legal holds and audits that offer immutable data preservation, providing a transparent data record. It also offers an auditable trail of when and why the data is placed on hold, which is essential for legal and regulatory audits.
Object lock can be applied in two different modes:
Governance: Allows specific users with special permissions to override the lock settings. Ideal for flexible control.
Compliance: Enforces a strict lock without any possibility of an override. Suited for regulatory and legal mandates.
These two lock modes require configuring the duration for which the object will remain locked. The period can range from days to years, depending on the object's compliance needs.
The Retention period refers to the duration for which the objects stored in a particular Object Storage bucket are protected from deletion or modification. You can set the retention period to a maximum of 365 days via the DCD. To set a longer retention period, use the API.
The retention configuration can be modified or removed for the objects under Governance mode by including a specific header variable in the API request. However, for objects in Compliance mode, reducing the retention period or removing the retention configuration is not possible.
Note: Under Object Lock or Object Hold, permanent deletion of an object's version is not permissible. Instead, a deletion marker is generated for the object, causing IONOS Object Storage to consider that the object has been deleted.
However, delete markers themselves are not protected from deletion, irrespective of any retention period or legal hold on the underlying object. Deleting a delete marker restores the previous version of the object.
An additional setting called Legal Hold can place a hold on an object without specifying a retention period. It can be applied to objects with or without Object Lock. A Legal Hold remains in effect until it is manually removed, even if the object's retention period for Governance or Compliance mode is over.
Note: Object Lock configuration can only be enabled during the initial creation of a bucket and cannot be applied to an existing bucket.
When a bucket is created with Object Lock enabled, you can set up Object Lock configurations. These configurations determine the default mode and retention period for newly uploaded objects. Alternatively, Object Lock settings can be explicitly defined for each object during its creation, overriding the bucket's default settings.
Prerequisite: Ensure you create a new bucket to enable Object Lock.
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
2. Depending on the Bucket Types you want to create, follow the steps in Create a bucket and enable Object Lock.
3. From the drop-down list in the Buckets tab, choose either Show user-owned buckets or Show contract-owned buckets depending on the bucket type you created.
4. From the Buckets list, choose the bucket for which the Object Lock is enabled.
5. Click Bucket settings and go to the Object Lock setting under the Data management section.
6. Modify the Object Lock mode applied on the bucket and the Retention period as needed.
7. Click SAVE.
Note: The modified Object Lock settings apply to the newly uploaded objects to the bucket. The existing objects adhere to the Object Lock settings applied during the bucket creation.
Result: The Object Lock settings are successfully updated and applied to the bucket.
Use the API to manage the Object Lock configuration on the specified buckets.
Use the CLI to manage Object Lock.
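As a sketch, setting a default Governance-mode retention of 30 days with the AWS CLI; the bucket name, retention values, and endpoint URL are placeholders:

```shell
aws s3api put-object-lock-configuration --bucket my-bucket \
    --object-lock-configuration '{"ObjectLockEnabled": "Enabled", "Rule": {"DefaultRetention": {"Mode": "GOVERNANCE", "Days": 30}}}' \
    --endpoint-url https://<your-object-storage-endpoint>
```

The default retention applies to newly uploaded objects; individual objects can still override it with their own Object Lock settings at upload time.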
The following are a few limitations to consider while using Object Lock:
Once Object Lock is enabled during bucket creation, neither Object Lock nor Versioning can be disabled afterward.
When you place or modify an Object Lock, updating the object version's metadata does not overwrite the object version or change its Last-Modified timestamp.
A bucket with Object Lock enabled cannot be chosen as a source for replication or tiering, but it can be a destination for replication or tiering.
In the Compliance mode, an object is immutable until its retention date has passed. It is not possible to disable this mode for the object or shorten the retention period. This setting cannot be changed by either the bucket owner or IONOS.
Logging in IONOS Object Storage enables the tracking and storage of requests made to your bucket. When you enable Logging, Object Storage automatically records access requests, such as the requester, bucket name, request time, request action, response status, and error codes, if any. By default, Logging is disabled for a bucket.
Note: Logging is currently supported only for user-owned buckets and is available in the de, eu-central-2, and eu-south-2 regions.
Note: Logging is not supported for contract-owned buckets.
Security Monitoring: Tracks access patterns and identifies unauthorized or suspicious access to your data. In the event of a security breach, logs provide vital information for investigating the incident, such as IP addresses, request times, and the actions that were performed.
Auditing: Many industries require compliance with specific regulatory standards that mandate the monitoring and logging of access to data. Logging facilitates compliance with regulations like HIPAA, GDPR, or SOX by providing a detailed record of who accessed what data and when.
Troubleshooting: If there are issues with how applications are accessing your Object Storage data, logs can provide detailed information to help diagnose and resolve these issues. Logs show errors and the context in which they occurred, aiding in quick troubleshooting.
You can manage Logging using the DCD, API, and CLI.
Prerequisite: Make sure you have provided access permissions for the Log Delivery Group. For more information, see Grant access permission for Logging.
To activate Logging, follow these steps:
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
2. From the Buckets list, choose the bucket and click Bucket settings.
3. Go to the Logging setting under the Access management section and click Browse Object Storage to select the destination bucket in the same region to store logs.
Note: Although it is possible to store logs in the same bucket being logged, it is recommended to use a different bucket to avoid potential complications with managing log data and user data together.
4. (Optional) Specify the prefix for log storage, providing flexibility in organizing and accessing your log data. If no prefix is entered, the log file name is derived from its time stamp alone.
5. Click Save.
Result: Logging is enabled for the selected bucket.
You can modify or deactivate Logging at any time with no effect on existing log files; log files are handled like any other object. To stop collecting log data for a bucket, go to the Logging section in the Bucket settings and click Disable Logging.
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
2. From the Buckets list, choose the bucket for which the logging must be enabled.
3. Click Bucket settings and go to the Access Control List (ACL).
4. For Logging, select the OBJECTS:WRITE and BUCKET ACL:READ checkboxes.
5. Click Save.
Result: The required access permissions to enable Logging for a bucket are granted.
Use the API to configure and manage Logging for a bucket.
Use the CLI to manage Logging for buckets.
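For example, a sketch of enabling Logging with the AWS CLI; the source bucket, target bucket, prefix, and endpoint URL are placeholders:

```shell
aws s3api put-bucket-logging --bucket my-bucket \
    --bucket-logging-status '{"LoggingEnabled": {"TargetBucket": "my-log-bucket", "TargetPrefix": "logs/"}}' \
    --endpoint-url https://<your-object-storage-endpoint>
```

The target bucket must be in the same region as the bucket being logged, and the Log Delivery Group needs write access to it.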
Logs can only be stored in buckets in the same region.
Warning: Although it is possible to store logs in the same bucket being logged, it is not recommended due to potential complications with managing log data and user data together.
Use Lifecycle Management in conjunction with Logging to manage and automate the lifecycle of log files. For instance, you can set up a lifecycle policy to permanently delete logs that are no longer needed after a certain period.
Lifecycle management allows you to automate the deletion of objects and their versions to optimize costs and adhere to compliance requirements.
The Lifecycle comprises rules with actions applied to objects within a bucket. These policies help automate processes that manage the lifecycle of your data.
Note: Lifecycle setting is supported for both contract-owned buckets and user-owned buckets. For more information, see Bucket Types.
Object Expiration: Automatically deletes objects no longer needed after a certain period, such as temporary files, logs, or other transient data. It helps to declutter the storage and reduce costs.
Regulatory Compliance: Assists in meeting legal and compliance requirements by deleting objects according to the defined Lifecycle rules.
Version Control: Manages multiple versions of objects by automatically deleting non-current object versions, saving storage costs.
Temporary Storage: Stores data generated from batch processing or other workloads and deletes these provisional data when no longer needed using the object expiration Lifecycle actions.
A Lifecycle rule supports the following actions:
Expire current versions.
Permanently delete noncurrent versions of objects.
Delete expired object delete markers.
Delete incomplete multipart uploads.
With this action, you can specify a period after which the object's current version must expire. Depending on whether Versioning is enabled for the bucket, the Expire current versions action behaves in the following ways:
If the Versioning is enabled for the bucket, then the expiration of the current version of an object does not result in the deletion of the object data from the storage. Instead, when the object's current version reaches its expiration date, a "delete marker" is created for this object and retained as its "current version"; the object data transitions to a non-current object version.
If the Versioning is not enabled for the bucket, then the current versions are the only versions of objects in your bucket. When the object reaches its expiration date, it is permanently deleted from the storage.
When the Expire current versions action is set for a bucket that uses Versioning, the system automatically deletes the expired delete markers as part of the lifecycle processing. An expired delete marker is a delete marker for which there is no corresponding object data because all non-current versions of the object have been deleted. This functionality aids in maintaining a clean and organized bucket and retains only necessary data.
This action is applicable only if the bucket uses Versioning. Permanently deleting non-current versions of objects takes place after the specified retention period, and it helps to ensure the removal of outdated versions of objects from the storage.
A non-current object version refers to those that are superseded by a newer object version or a delete marker. When a non-current version of an object reaches its scheduled expiration, it is permanently deleted from the storage. The expiration scheduling for non-current versions of objects is based on the number of days since the objects became non-current, which is the number of days since being superseded by a newer version or a "delete marker."
If the bucket has Object Lock enabled, then the non-current object versions are not deleted before their defined retention period is completed. Suppose the expiration date of a non-current object version (based on your configured expiration schedule) comes before the end of the object version's lock period, then the Object Lock setting overpowers. The system retains the non-current object version until the end of its lock period. Shortly after the lock period concludes, the system automatically deletes the non-current object version, ensuring adherence to expiration and retention policies.
This action is applicable only if the bucket uses Versioning and the Expire current versions schedule has been set. In a versioning-enabled bucket, when you delete the current version of an object, a "delete marker" replaces that object version and becomes the new current object version. All the older versions of the object are retained in the system and remain retrievable.
However, if all older versions subsequently expire (through the execution of the expiration rule for non-current versions), an orphaned delete marker remains. With the Delete expired object delete markers action, the system automatically removes a delete marker within a few hours after all the older object versions have expired or been deleted.
This action aborts incomplete multipart uploads and automatically deletes their parts, freeing up storage space and keeping the bucket clean and organized. Multipart upload facilitates uploading large objects in parts; however, an incomplete upload still consumes storage space.
You can manage the Lifecycle using the DCD, API, and CLI.
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
2. From the drop-down list in the Buckets tab, choose either Show user-owned buckets or Show contract-owned buckets depending on the bucket type you want to view.
3. From the Buckets list, choose the bucket for which you want to configure Lifecycle Management.
4. Click Bucket settings, go to the Lifecycle setting under the Data management section and click Add a rule.
5. Enter the following details to configure the Lifecycle rule:
Lifecycle Rule name: Enter a name to identify the rule uniquely.
Set Rule Scope: Choose whether to apply the Lifecycle rule to all objects in the bucket or limit it to objects filtered by a prefix. Each prefix can be covered by only one Lifecycle rule.
Select an action: Choose one or more from the following Lifecycle actions to apply to the objects in the bucket:
Expire current versions: Enter the number of days after object creation at which the current object version should expire, or select a date from the calendar shown, after which the current object version must expire. How the rule is applied varies depending on whether the bucket is versioned.
Permanently delete noncurrent versions of objects: Enter the number of days after an object version becomes non-current at which it should be permanently deleted.
Delete expired object delete markers: Select this action to remove all object delete markers and improve performance. You cannot apply this action if the Expire current versions action is selected.
Delete incomplete multipart uploads: Enter the number of days after upload initiation at which incomplete multipart uploads should be deleted.
For more information, see Lifecycle actions.
6. Click Save.
Result: The Lifecycle rule is successfully added.
Info: Using the same Lifecycle bucket settings, you can enable, disable, modify, or delete an existing rule. It takes up to a few minutes for the deletion of a Lifecycle rule to propagate fully.
Use the API to manage the Lifecycle rules.
Use the CLI to manage Lifecycle configuration.
Currently, IONOS Object Storage supports only the Standard storage class. You cannot use Lifecycle rules to transition objects to another storage class.
A maximum of 1,000 rules can be set in the Lifecycle configuration.
Multiple Lifecycle rules can be created for a bucket, each applying to a different object prefix. However, more than one Lifecycle rule cannot be set for the same object prefix.
If the bucket uses Object Lock, non-current object versions cannot be deleted before the completion of their defined retention period.
The NewerNoncurrentVersions setting is not supported for the NoncurrentVersionExpiration option.
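As an illustrative sketch, a Lifecycle rule that permanently deletes non-current versions 30 days after they become non-current could be applied with the AWS CLI; the bucket name, prefix, and endpoint below are placeholders, not values from this document:

```shell
# Hypothetical example: write a Lifecycle configuration to a local file.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-noncurrent-versions",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "NoncurrentVersionExpiration": { "NoncurrentDays": 30 }
    }
  ]
}
EOF
# Apply it to a versioned bucket (requires valid credentials):
# aws s3api put-bucket-lifecycle-configuration \
#   --bucket my-bucket \
#   --lifecycle-configuration file://lifecycle.json \
#   --endpoint-url https://s3-eu-central-2.ionoscloud.com
```

Note that the NewerNoncurrentVersions field must be omitted, since it is not supported.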
Versioning allows you to keep multiple versions of the same object. For more information, see Versioning.
Static Website Hosting enables you to host static content, including HTML, CSS, JavaScript, and images, directly from a bucket, eliminating the need for an external web server. You can specify both an index page and an error page. Additionally, there is an option to link a custom domain.
Note: Static Website Hosting setting is supported for both contract-owned buckets and user-owned buckets. For more information, see Bucket Types.
Depending on the region of your Object Storage bucket, the static website URL varies. For more information on the static website endpoint, see Endpoints.
Static Content Hosting: Directly serve HTML, CSS, JavaScript, and media files statically on a website.
Publish Landing Pages: Host promotional or event-specific landing pages with high availability.
Documentation Sites: Host product documentation or manuals with easy access for users.
You can manage Static Website Hosting using the DCD, API, and CLI.
Note: Static Website Hosting is disabled by default for a bucket. Enabling this setting will make all objects in the bucket publicly readable.
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
2. From the drop-down list in the Buckets tab, choose either Show user-owned buckets or Show contract-owned buckets depending on the bucket type you want to view.
3. From the Buckets list, choose the bucket for which you want to manage Static Website Hosting.
4. Click Bucket settings and go to the Static Website Hosting setting under the Public access section.
5. Click Edit and add the following details:
Index document: Enter the file name that serves as an index document. Example: index.html. An index document is the default webpage that IONOS Object Storage returns upon receiving a request to the root of a website or a subfolder.
Error document: Enter the file name of the HTML error document that is uploaded to the Object Storage bucket. An error document is a default HTML file with details you want the user to view when an error occurs.
6. Click Enable.
Result: Static Website Hosting is successfully enabled for a bucket.
Info: In the Static Website Hosting setting, choose Edit and click Disable to remove Static Website Hosting for a bucket.
Use the API to configure and manage Static Website Hosting for a bucket.
Use the CLI to manage Static Website Hosting.
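As a sketch of the CLI route, a website configuration naming an index and an error document could be applied as follows; the bucket name and endpoint are placeholders:

```shell
# Hypothetical example: define the website configuration in a local file.
cat > website.json <<'EOF'
{
  "IndexDocument": { "Suffix": "index.html" },
  "ErrorDocument": { "Key": "error.html" }
}
EOF
# Apply it to the bucket (requires valid credentials):
# aws s3api put-bucket-website \
#   --bucket my-bucket \
#   --website-configuration file://website.json \
#   --endpoint-url https://s3-eu-central-2.ionoscloud.com
```

Both documents must already exist as objects in the bucket for the site to render correctly.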
Static Website Hosting is unsuitable for hosting websites that require server-side processing, such as PHP and Python.
To store your data in IONOS Object Storage, learn about buckets that serve as data containers. |
Learn about the contract-owned bucket and user-owned bucket types' feature sets and limitations. |
Organize data in the Object Storage by learning about the objects, object functions, metadata, folders, and prefixes. |
To authenticate with your Object Storage credentials for using Object Storage and to manage keys, learn about key management. |
To control access permissions to your buckets and objects, learn about access management. |
Learn about the features compatible with S3 API. |
View the bucket types, navigate to the bucket settings, copy the endpoint URL, or delete a bucket. |
Use the search, versioning, prefixes, and delete options to manage objects and folders effectively. |
Generate Object Storage keys to log in securely, and activate or deactivate keys to manage access to buckets and objects. |
Retrieve Canonical User ID for sharing buckets, objects, and object versions with other Object Storage users. |
Generate pre-signed URLs to share time-bound access to objects with other Object Storage users. |
Create a secure connection using NLB and access Object Storage from a private LAN. |
Use Object Lock to protect critical objects in a bucket for an immutable period. |
Use Replication to create and manage data replicas across multiple Object Storage regions. |
Manage multiple versions of the same object using Versioning. |
Manage the deletion of objects and their versions efficiently using the Lifecycle rules. |
Use Bucket Policy to define granular access permissions and actions users can perform on buckets and objects. |
Use ACL to define access permissions on buckets and objects to control who can access them. |
With Logging, track and record storage requests for your buckets. |
Define permissions to specific domains that can access bucket content. |
Host static website content by configuring the index and error document. |
Depending on the Bucket Types access you want to share with the user, learn how to retrieve the required user ID.
For another user to share the content of their IONOS Object Storage with you, they need your user ID, which you will find in the Object Storage Key Management section.
Prerequisite:
— Make sure you have the corresponding permissions to work with the Object Storage. If you are not the contract owner or administrator, you must be added to a group with Use Object Storage privilege.
— You must have generated the first Object Storage key using Generate a Key. The Canonical User ID of the user is displayed in the Credentials and Users & Groups > Users > Object Storage Keys > IDs sections only after the first key has been generated.
1. In the DCD, go to Menu > Storage and click IONOS Object Storage.
2. Select the Key management tab.
3. In the Object Storage Credentials, click Copy next to the respective user ID as follows:
Copy the Contract User ID to grant access to contract-owned buckets.
Copy the Canonical User ID to grant access to user-owned buckets.
Result: Your user ID is successfully copied to the clipboard.
The grantee may be a user under the same IONOS contract or a user under another contract. You need the user ID to share access to a bucket or object using the Share access methods.
Prerequisites:
— Make sure the grantee Object Storage account already exists. If not, create the grantee by following the steps in Retrieve the user ID of a new user.
— Make sure you have the corresponding permission to work with the Object Storage. You must be added to a group with Use Object Storage privilege. Only contract owners and administrators can retrieve the IONOS Object Storage IDs of their account users.
— The Canonical User ID of a user is displayed in the Users & Groups > Users > Object Storage Keys > IDs section only after their first Object Storage key has been generated.
1. In the DCD, go to Menu > Management > Users & Groups.
2. Select the user from the Users list and click the Object Storage Keys tab.
3. Click the Object Storage link and retrieve the user's ID as follows:
Copy the Contract User ID to grant access to contract-owned buckets.
Copy the Canonical User ID to grant access to user-owned buckets.
Result: The user ID for the grantee is successfully retrieved.
If the grantee's user account does not already exist or you want to assign a different set of permissions, then the root user of the contract needs to create the user account and then retrieve the user ID to grant access to buckets and objects.
1. In the DCD, go to Menu > Management > Users & Groups.
2. In the Users tab, click + Create.
3. Enter the user details such as First Name, Last Name, Email, Password, and click Create.
Result: The new user is created and shown in the Users list.
4. Add the user to a group with Use Object Storage privilege enabled.
5. The user must log in to the DCD with their credentials and manually generate the Object Storage key by using Generate a Key.
Info: Only upon generating the first key, the Canonical User ID of the user is displayed in the Object Storage Credentials and Users & Groups > Users > Object Storage Keys > IDs section.
6. In the Users list, select the user and click the Object Storage Keys tab.
7. Select the Active checkbox to activate the key.
8. Click the Object Storage link and retrieve the user's ID as follows:
Copy the Contract User ID to grant access to contract-owned buckets.
Copy the Canonical User ID to grant access to user-owned buckets.
Result: The new user is successfully created and the user ID is retrieved. You can now share access to the bucket with the new user using Share access.
IONOS Object Storage allows users to create the following two types of buckets:
1. Contract-owned buckets
2. User-owned buckets
For more information, see Bucket Types.
Note: Starting from May 30, 2024, all the newly launched endpoints (regions) will use a contract owner as a bucket owner and support contract administrators having the same set of permissions as the bucket owner. You can still create user-owned buckets using specific endpoints, but this shift toward a contract-owned bucket model will be the primary focus for future Object Storage updates.
The IONOS Object Storage Service endpoints for the bucket types are as follows:
Data Center | Region | Endpoint | Static Website Endpoint |
---|---|---|---|
Note: The BSI IT Grundschutz certification for the eu-central-3 region is currently pending.
Data Center | Region | Endpoint | Static Website Endpoint |
---|---|---|---|
Note: — The endpoints are available through both HTTP and HTTPS URLs.
— The Object Storage service does not support HTTPS for hosting static websites unless the full domain path is used.
You can manage ACL permission for buckets through the DCD, IONOS Object Storage API, or the CLI.
Note: Due to the granularity limitations and the complexity of managing permissions across a large scale of resources and users, we recommend using Bucket Policy instead of ACLs.
The following table shows the ACL permissions that you can configure for buckets in the IONOS Object Storage:
Note: For security reasons, granting some access permissions, such as Public access WRITE, Public access WRITE_ACP, Authenticated users WRITE, and Authenticated users WRITE_ACP, is possible only through an API call.
To manage ACL for buckets using the DCD, follow these steps:
Prerequisites:
— Make sure the user ID of the grantee is known. For more information, see Retrieve User ID.
— The grantee should already exist. If not, create a user and retrieve the Canonical User ID by following the steps in Retrieve the user ID of a new user.
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
2. From the drop-down list in the Buckets tab, choose either Show user-owned buckets or Show contract-owned buckets depending on the bucket type you want to view.
3. From the Buckets list, choose the bucket for which you want to manage the ACL.
4. Click Bucket settings and choose the Access Control List (ACL) under the Access management section.
5. Depending on the Bucket Types, manage the access permissions as follows:
Select the checkboxes against the access permissions to grant at each user level such as specific or all users of another contract, all users of a group, and authenticated users of a group. For more information, see ACL permission for buckets.
Add grantees to provide additional users with access permission to the contract-owned bucket.
In the Additional Grantees section, enter the retrieved Contract Number of the grantee.
Select the checkboxes on the bucket ACL permissions to grant, and click Add.
Select the checkboxes against the access permissions to grant at each user level such as users, all users of a group, authenticated users of a group, and Log Delivery Group. For more information, see ACL permission for buckets.
Add grantees to provide additional users with access permission to the user-owned bucket.
In the Additional Grantees section, enter the retrieved Canonical user ID of the grantee.
Select the checkboxes on the bucket ACL permissions to grant, and click Add.
6. Click Save to apply ACL permissions and add the grantee to the bucket.
Result: The ACL permissions are successfully applied on the bucket.
Note: Granting access to a bucket for another IONOS user does not make the bucket appear in the user's Object Storage in the DCD due to the S3 protocol's architecture. To access the bucket, the user must utilize other S3 Tools as the granted access does not translate to interface visibility.
Use the API to manage bucket ACL permissions.
Use CLI to manage ACL permission for buckets.
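For instance, a full access-control policy granting READ on a user-owned bucket to another user's Canonical User ID might look as follows; all IDs, the bucket name, and the endpoint are placeholders:

```shell
# Hypothetical example: write the complete ACL policy to a local file.
# Because put-bucket-acl replaces the entire ACL, the owner's
# FULL_CONTROL grant is re-stated explicitly alongside the new grant.
cat > bucket-acl.json <<'EOF'
{
  "Owner": { "ID": "OWNER_CANONICAL_USER_ID" },
  "Grants": [
    {
      "Grantee": { "Type": "CanonicalUser", "ID": "OWNER_CANONICAL_USER_ID" },
      "Permission": "FULL_CONTROL"
    },
    {
      "Grantee": { "Type": "CanonicalUser", "ID": "GRANTEE_CANONICAL_USER_ID" },
      "Permission": "READ"
    }
  ]
}
EOF
# Apply it (requires valid credentials):
# aws s3api put-bucket-acl \
#   --bucket my-bucket \
#   --access-control-policy file://bucket-acl.json \
#   --endpoint-url https://s3-eu-central-2.ionoscloud.com
```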
Cross-origin Resource Sharing (CORS) allows you to specify which domains can make cross-origin requests to your Object Storage. CORS is beneficial when you need to serve resources from your bucket to web applications hosted on different domains.
Note: CORS is supported for both contract-owned buckets and user-owned buckets. For more information, see Bucket Types.
Cross-Domain Image Serving: Suitable for websites that need to display images stored in Object Storage buckets on various domains without encountering cross-domain restrictions.
Multi-Domain: Supports complex web applications that operate across multiple domains to access and use data stored in the buckets seamlessly.
Development and Testing Environment: Facilitates the access of development and staging versions of your web applications hosted on different domains to the same Object Storage resources. You can configure the CORS headers on the staging servers to allow requests from the development or testing domains, ensuring seamless testing without running into cross-origin restrictions.
You can manage CORS using the DCD, API, and CLI.
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
2. From the drop-down list in the Buckets tab, choose either Show user-owned buckets or Show contract-owned buckets depending on the bucket type you want to view.
3. From the Buckets list, choose the bucket for which the CORS rule must be configured and click Bucket settings.
4. Go to the CORS setting under the Access management section and click Add a rule.
5. Enter the following details to configure the CORS rule:
Rule name: Enter a name to identify the rule uniquely.
Allowed origins: Enter the complete domain of the client from which you want to access your bucket's content and click Add. The domain should start with a protocol identifier, such as HTTP, and end with a hostname, for example, https://*.example.com. You can add one or more origins.
Allowed headers (Optional): Specify the non-default headers that your Object Storage bucket must accept from your client and click Add. CORS automatically accepts default headers such as Content-Length and Content-Type.
Allowed methods: Select the API method checkbox to allow interaction with your Object Storage bucket. You can enable or restrict the following API methods:
GET: Fetch the CORS configuration of the bucket.
POST: Create a new bucket.
PUT: Update the bucket's property or content.
HEAD: Retrieve the bucket's metadata.
DELETE: Delete a bucket.
Expose headers (Optional): Specify the headers in the response that you want users to be able to access from their applications and click Add.
Max age (Optional): Specify the time in seconds for how long a request’s verification is cached. The Object Storage bucket can accept more requests from the same origin while the verification is cached.
6. Click Add a rule.
Result: The CORS rule is successfully added.
Info: Using the same CORS bucket settings, you can enable, disable, modify, or delete an existing rule. It takes up to a few minutes for the deletion of a CORS rule to propagate fully.
Use the API to manage the CORS rules.
Use the CLI to manage CORS configuration.
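As an illustration of the CLI path, a CORS rule permitting GET and HEAD requests from a single origin could be written and applied like this; the origin, bucket name, and endpoint are placeholders:

```shell
# Hypothetical example: define the CORS configuration in a local file.
cat > cors.json <<'EOF'
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://www.example.com"],
      "AllowedMethods": ["GET", "HEAD"],
      "AllowedHeaders": ["*"],
      "ExposeHeaders": ["ETag"],
      "MaxAgeSeconds": 3000
    }
  ]
}
EOF
# Apply it to the bucket (requires valid credentials):
# aws s3api put-bucket-cors \
#   --bucket my-bucket \
#   --cors-configuration file://cors.json \
#   --endpoint-url https://s3-eu-central-2.ionoscloud.com
```

MaxAgeSeconds corresponds to the Max age field in the DCD: while cached, repeated preflight checks from the same origin are avoided.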
You can manage ACL permission for objects through the DCD, IONOS Object Storage API, or the CLI.
Note: Due to the granularity limitations and the complexity of managing permissions across a large scale of resources and users, we recommend using Bucket Policy instead of ACLs.
The following table shows the ACL permissions that you can configure for objects in a bucket in the IONOS Object Storage:
These permissions are applied at individual object levels offering a high granularity in access control.
Note: For security reasons, granting some access permissions, such as Public access WRITE_ACP and Authenticated users WRITE_ACP, is possible only through an API call.
To manage ACL for objects using the DCD, follow these steps:
Prerequisites:
— Make sure the user ID of the grantee is known. For more information, see Retrieve User ID.
— The grantee should already exist. If not, create a user and retrieve the Canonical User ID by following the steps in Retrieve the user ID of a new user.
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
2. From the drop-down list in the Buckets tab, choose either Show user-owned buckets or Show contract-owned buckets, depending on the bucket type you want to view.
3. From the Buckets list, choose the bucket containing the object whose ACL you want to modify.
4. From the Objects list, choose the object for which ACL permissions must be modified.
5. From the Object Settings, go to the Access Control List (ACL).
6. Depending on the Bucket Types, manage the object access permissions as follows:
Select the checkboxes against the access permissions to grant at each user level such as specific or all users of another contract, all users of a group, and authenticated users of a group. For more information, see ACL permission for objects.
Add grantees to provide additional users with access permission to the contract-owned bucket's objects.
In the Additional Grantees section, enter the retrieved Contract Number of the grantee.
Select the checkboxes on the object ACL permissions to grant, and click Add.
Select the checkboxes against the access permissions to grant at each user level such as users, all users of a group, authenticated users of a group, and Log Delivery Group. For more information, see ACL permission for objects.
Add grantees to provide additional users with access permission to the user-owned bucket's objects.
In the Additional Grantees section, enter the retrieved Canonical user ID of the grantee.
Select the checkboxes on the object ACL permissions to grant, and click Add.
7. Click Save to apply ACL permissions and add the grantee to the object.
Result: The object ACL permissions are successfully applied to the object.
Use the API to manage object ACL permissions.
Use CLI to manage ACL permission for objects.
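A sketch of the CLI route for a single object, analogous to the bucket case: grant READ on one object to another user's Canonical User ID. All IDs, the bucket name, the object key, and the endpoint are placeholders:

```shell
# Hypothetical example: write the object's full ACL policy to a file.
# put-object-acl replaces the object's entire ACL, so the owner's
# FULL_CONTROL grant is included explicitly.
cat > object-acl.json <<'EOF'
{
  "Owner": { "ID": "OWNER_CANONICAL_USER_ID" },
  "Grants": [
    {
      "Grantee": { "Type": "CanonicalUser", "ID": "OWNER_CANONICAL_USER_ID" },
      "Permission": "FULL_CONTROL"
    },
    {
      "Grantee": { "Type": "CanonicalUser", "ID": "GRANTEE_CANONICAL_USER_ID" },
      "Permission": "READ"
    }
  ]
}
EOF
# Apply it to one object (requires valid credentials):
# aws s3api put-object-acl \
#   --bucket my-bucket --key path/to/object.txt \
#   --access-control-policy file://object-acl.json \
#   --endpoint-url https://s3-eu-central-2.ionoscloud.com
```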
IONOS Object Storage is compatible with the S3 protocol, which means that it can be used to manage buckets and objects with existing S3 clients once properly configured.
Amazon Web Services (AWS) Command-line Interface (CLI) is unique in offering a wide range of commands for comprehensive management of buckets and objects, which is ideal for scripting and automation. IONOS Object Storage supports using the AWS CLI on Windows, macOS, and Linux.
Postman is a free tool for conveniently working with APIs in a graphical interface. It is available for Windows, macOS, and Linux.
You can follow the installation instructions described on .
In the Authorization tab for a request, select AWS Signature from the Type dropdown list. Specify where Postman should append your authorization data using the Add authorization data to drop-down menu.
If you select Request Headers, Postman populates the Headers tab with Authorization and X-Amz- prefixed fields.
If you select the Request URL, Postman populates the Params tab with authentication details prefixed with X-Amz-.
Note: The parameters listed below contain confidential information. We recommend using variables to keep this data secure while working in a collaborative environment.
Advanced fields are optional, but Postman will attempt to generate them automatically if necessary.
For AWS Region, leave the field blank as the region from the endpoint will be used.
For Service Name, enter s3, the name of the service that receives the requests.
For Session Token, leave the field blank. This is only required when temporary security credentials are used.
S3cmd is a free command line tool and client for loading, retrieving, and managing data in S3. It has over 60 command line options, including multipart uploads, encryption, incremental backup, S3 sync, ACL and metadata management, bucket size, and bucket policies (Linux, macOS).
Install s3cmd for your distribution:
on CentOS/RHEL and Fedora: sudo dnf install s3cmd
on Ubuntu/Debian: sudo apt-get install s3cmd
on macOS using : brew install s3cmd
You can also install the latest version from .
Run the following command in a terminal: s3cmd --configure. This guides you through the interactive configuration process:
Enter your Access Key and Secret Key. To get them, in the DCD, go to Menu > Storage > IONOS Object Storage > Key management.
Note: Your credentials are not tied to a specific region or bucket.
Specify the region of your bucket for Default Region. Example: eu-central-2. Please refer to the .
Specify the endpoint for the selected region for Endpoint from the same list. For example, s3-eu-central-2.ionoscloud.com.
Insert the same endpoint again for DNS-style bucket+hostname:port template.
Specify or skip the password (press Enter) for Encryption password.
Press Enter for Path to GPG program.
Press Enter for Use HTTPS protocol.
Press Enter for HTTP Proxy server name.
Press Enter for Test access with supplied credentials? [Y/n].
S3cmd will try to test the connection. If everything went well, save the configuration by typing y and pressing Enter. The configuration will be saved in the .s3cfg file.
If you need to work with more than one region or with different providers, you can set up multiple configurations. Use s3cmd --configure --config=ionos-fra to save the configuration for a specific location or provider. Run s3cmd with the -c option to override the default configuration file. For example, list the objects in the bucket:
You can also specify an endpoint directly on the command line to override the default setting. The Access Key and Secret key are region-independent, so s3cmd can take them from the default configuration:
Or even specify it with an Access Key and the Secret Key:
List buckets (even buckets from other regions will be listed):
Create a bucket (the name must be unique across the whole IONOS Object Storage). You need to explicitly use the --region option; otherwise, the bucket will be created in the default de region:
Create the bucket my-bucket in the region de (Frankfurt, Germany):
Create the bucket my-bucket in the region eu-central-2 (Berlin, Germany):
Create the bucket my-bucket in the region eu-south-2 (Logrono, Spain):
List objects of the bucket my-bucket:
Upload filename.txt from the current directory to the bucket my-bucket:
Copy the contents of the local directory my-dir to the bucket my-bucket with the prefix my-dir:
Download all the objects from the my-bucket bucket to the local directory my-dir (the directory should exist):
Synchronize a directory to S3 (checks files using size and md5 checksum):
Get Cross-Origin Resource Sharing (CORS) configuration:
Set up Cross-Origin Resource Sharing (CORS) configuration:
cors_rules.xml:
Delete CORS from the bucket:
Get information about buckets or objects:
s3cmd info s3://my-bucket
s3cmd info s3://my-bucket/my-object
Generate a public URL for download that will be available for 10 minutes (600 seconds):
Set up a lifecycle policy for a bucket (delete objects older than 1 day):
delete-after-one-day.xml:
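A minimal sketch of what such a policy file could contain, generated here via a heredoc; the rule ID is arbitrary and the bucket name in the commented command is a placeholder:

```shell
# Hypothetical content for the lifecycle policy file:
cat > delete-after-one-day.xml <<'EOF'
<LifecycleConfiguration>
  <Rule>
    <ID>delete-after-one-day</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <Expiration>
      <Days>1</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
EOF
# Apply it with s3cmd (requires configured credentials):
# s3cmd setlifecycle delete-after-one-day.xml s3://my-bucket
```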
Encrypt and upload files. This option allows you to encrypt files before uploading, but to use it, you have to run s3cmd --configure and fill out the path to the GPG utility and the encryption password. There is no need to use special parameters to decrypt the file on download with the get option, as this is done automatically using the data from the configuration file.
Add or modify user-defined metadata. Use headers starting with x-amz-meta- and store data as a set of key-value pairs. The user-defined metadata is limited to 2 KB in size, measured as the sum of the number of bytes in the UTF-8 encoding of each key and value.
s3cmd modify --add-header x-amz-meta-my_key:my_value s3://my-bucket/prefix/filename.txt
Check the changes:
Delete metadata:
This document provides instructions for managing IONOS Object Storage using the AWS CLI. Additionally, this task can also be performed through the DCD and the API.
Prerequisites:
Set up the AWS CLI by following the .
Make sure to consider the supported .
Option 1: Using s3 set of commands:
Option 2: Using s3api set of commands:
Create a bucket in the eu-central-2 region (Berlin, Germany):
Option 1: Using s3 set of commands:
Option 2: Using s3api set of commands:
Create a bucket in the de region (Frankfurt, Germany) with Object Lock enabled:
Upload an object from the current directory to a bucket:
Download all the objects from the my-bucket bucket to the local directory my-dir:
Copy the object to the bucket:
Copy the contents of the local directory my-dir to the bucket my-bucket:
Copy all objects from my-source-bucket to my-dest-bucket, excluding .zip files. The command does not support cross-region copying for IONOS Object Storage:
Sync the bucket my-bucket with the contents of the local directory my-dir:
Prerequisite: For the installation instructions, see .
Run the aws configure command in a terminal.
AWS Access Key ID [None]: Insert the Access Key.
AWS Secret Access Key [None]: Paste the Secret Key.
You can find both in the DCD under Menu > Storage > IONOS Object Storage > Key management, in the Access keys section.
Default region name [None]: de.
Default output format [None]: json.
Test if you set up AWS CLI correctly by running a command to list buckets; use any endpoints for testing purposes.
If the setup works correctly, you may proceed with the other commands.
For each command, be sure to include one of the endpoints in the endpoint-url parameter:
For information on the supported IONOS Object Storage Service endpoints, see .
There are two sets of commands:
s3: Offers high-level commands for managing buckets and moving, copying, and synchronizing objects.
s3api: Allows you to work with specific features such as ACL, CORS, and Versioning.
IONOS Object Storage supports using Cyberduck, a Cloud Storage browser with SFTP, WebDAV, and S3 support for Windows, macOS, and Linux.
For the installation instructions, see .
Once inside Cyberduck, select Cyberduck > Preferences… from the menu.
Select Profiles to open the Connection Profiles page.
Select the IONOS Cloud Object Storage (Berlin), IONOS Cloud Object Storage (Frankfurt), or IONOS Cloud Object Storage (Logrono) connection profile from the list of available connection profiles, or use the search option to find it.
Close the Preferences window and restart Cyberduck to install the selected connection profiles.
Open Cyberduck and select File > Open Connection… You will see the connection dialog.
At the top, click the dropdown menu and select the connection profile, such as IONOS Cloud Object Storage (Berlin), that corresponds to the data center you want to use.
Enter key values in the Access Key and Secret Key fields.
To access the Object Storage keys:
Choose "Generate a key" and confirm the action by clicking Generate. The object storage key will be generated automatically.
Click Connect.
-c FILE, --config=FILE - Config file name. Defaults to $HOME/.s3cfg.
-e, --encrypt - Encrypt files before uploading to S3.
--upload-id=UPLOAD_ID - UploadId for Multipart Upload, in case you want to continue an existing upload (equivalent to --continue-put) and there are multiple partial uploads. Use s3cmd multipart [URI] to see which UploadIds are associated with the given URI.
IONOS Object Storage is fully compatible with S3, which means that it can be used to manage buckets and objects with existing S3 clients once properly configured. We suggest a list of popular tools for working with IONOS Object Storage, as well as instructions for configuring them:
: Tool for API development and testing. Its unique feature is a graphical interface for sending API requests to object storage endpoints, facilitating testing and development.
: An open-source GUI client supporting object storage among other protocols, presenting storage objects as local files for easy browsing, upload, and download.
S3 Browser: Freeware Windows client for object storage, providing an easy way to manage buckets and objects, including file permissions and access control lists, through a visual interface.
AWS CLI: Unique in offering a wide range of commands for comprehensive management of buckets and objects. Ideal for scripting and automation.
S3cmd: Offers direct, scriptable control over object storage buckets and objects. However, it lacks certain features such as versioning and replication management.
Rclone: A CLI program for syncing files between local and cloud storage, distinguished by powerful synchronization capabilities, especially when handling large data quantities and complex sync setups.
Boto3: Provides a high-level object-oriented API as well as low-level direct service access.
Veeam: Comprehensive backup and disaster recovery solution for virtual, physical, and cloud-based workloads. Supports creating an Object Storage repository for backing up to one or multiple buckets.
This document provides instructions to manage Object Lock using the CLI. Additionally, these tasks can also be performed using the DCD and IONOS Object Storage API.
Prerequisites:
Object Lock configuration is only feasible when enabled at the time of bucket creation. It cannot be activated for an existing bucket.
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported Endpoints.
Create a bucket my-bucket
in the de
region (Frankfurt, Germany) with Object Lock:
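With the AWS CLI, this call might look as follows (a sketch, assuming the CLI is already configured with your IONOS keys; the endpoint shown corresponds to the de region):

```shell
aws s3api create-bucket \
  --bucket my-bucket \
  --object-lock-enabled-for-bucket \
  --endpoint-url https://s3.eu-central-1.ionoscloud.com
```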
An Object Lock with Governance mode on a bucket provides the bucket owner with better flexibility compared to the Compliance mode. It permits the removal of the Object Lock before the designated retention period has expired, allowing for subsequent replacements or deletions of the object.
Apply Governance mode configuration to the bucket my-bucket-with-object-lock
with a default retention period equal to 15 days (or use the PutObjectLockConfiguration
API Call):
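A sketch of the corresponding AWS CLI call (endpoint and bucket name are examples):

```shell
aws s3api put-object-lock-configuration \
  --bucket my-bucket-with-object-lock \
  --object-lock-configuration '{"ObjectLockEnabled": "Enabled", "Rule": {"DefaultRetention": {"Mode": "GOVERNANCE", "Days": 15}}}' \
  --endpoint-url https://s3.eu-central-1.ionoscloud.com
```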
On applying this configuration, the newly uploaded objects adhere to this retention setting.
An Object Lock with Compliance mode on a bucket ensures strict control by enforcing a stringent retention policy on objects. Once this mode is set, the retention period for an object cannot be shortened or modified. It provides immutable protection by preventing objects from being deleted or overwritten during their retention period.
This mode is particularly suited for meeting regulatory requirements as it guarantees that objects remain unaltered. It does not allow locks to be removed before the retention period concludes, ensuring consistent data protection.
Apply Compliance mode configuration to the bucket my-bucket-with-object-lock
with a default retention period equal to 15 days:
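A sketch of the corresponding AWS CLI call (endpoint and bucket name are examples):

```shell
aws s3api put-object-lock-configuration \
  --bucket my-bucket-with-object-lock \
  --object-lock-configuration '{"ObjectLockEnabled": "Enabled", "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 15}}}' \
  --endpoint-url https://s3.eu-central-1.ionoscloud.com
```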
On applying this configuration, the newly uploaded objects adhere to this retention setting.
Retrieve Object Lock configuration of a bucket (the same could be achieved with the GetObjectLockConfiguration
API Call):
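For example (endpoint and bucket name are placeholders):

```shell
aws s3api get-object-lock-configuration \
  --bucket my-bucket-with-object-lock \
  --endpoint-url https://s3.eu-central-1.ionoscloud.com
```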
Upload my-object.pdf
to the bucket my-bucket-with-object-lock
:
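A sketch of the upload (file, bucket, and endpoint are placeholders):

```shell
aws s3api put-object \
  --bucket my-bucket-with-object-lock \
  --key my-object.pdf \
  --body my-object.pdf \
  --endpoint-url https://s3.eu-central-1.ionoscloud.com
```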
Note: The Object Lock retention is not specified, so the bucket’s default retention configuration will be applied.
Upload my-object.pdf
to the bucket my-bucket-with-object-lock
and override the bucket’s default Object Lock configuration:
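A sketch with an explicit retention mode and date (the retain-until date is a placeholder; choose your own):

```shell
aws s3api put-object \
  --bucket my-bucket-with-object-lock \
  --key my-object.pdf \
  --body my-object.pdf \
  --object-lock-mode GOVERNANCE \
  --object-lock-retain-until-date 2026-01-01T00:00:00Z \
  --endpoint-url https://s3.eu-central-1.ionoscloud.com
```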
Note: You can overwrite objects protected with Object Lock. Because Versioning is enabled on the bucket, multiple versions of the object are kept. Deleting an object is also allowed, because this operation only adds a delete marker to the object’s current version.
Note: Delete markers are not WORM-protected, regardless of any retention period or legal hold in place on the underlying object.
Apply legal-hold
status to my-object.pdf
in the bucket my-bucket-with-object-lock
:
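For example (bucket, key, and endpoint are placeholders):

```shell
aws s3api put-object-legal-hold \
  --bucket my-bucket-with-object-lock \
  --key my-object.pdf \
  --legal-hold Status=ON \
  --endpoint-url https://s3.eu-central-1.ionoscloud.com
```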
Use Status=OFF
to turn off the legal-hold
status.
To check the Object Lock status for a particular version of an object, you can utilize either the GET Object
or the HEAD Object
commands. Both commands will provide information about the retention mode, the designated 'Retain Until Date' and the status of the legal hold for the chosen object version.
When multiple users have permission to upload objects to your bucket, there is a risk of overly extended retention periods being set. This can lead to increased storage costs and data management challenges. While the system allows for up to 100 years using the s3:object-lock-remaining-retention-days
condition key, implementing limitations can be particularly beneficial in multi-user environments.
Establish a 10-day maximum retention limit:
Save it to the policy.json
file and apply using the following command:
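A sketch of such a policy and how it could be applied (statement Sid, bucket name, and endpoint are assumptions; the condition key is the one named above):

```shell
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "LimitRetentionTo10Days",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket-with-object-lock/*",
      "Condition": {
        "NumericGreaterThan": {"s3:object-lock-remaining-retention-days": "10"}
      }
    }
  ]
}
EOF

aws s3api put-bucket-policy \
  --bucket my-bucket-with-object-lock \
  --policy file://policy.json \
  --endpoint-url https://s3.eu-central-1.ionoscloud.com
```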
S3 Browser is a free, feature-rich Windows client for IONOS Object Storage.
Download and install the S3 Browser.
Add a new account and select:
Display name: Enter a name for the connection.
Account type: Select S3 Compatible Storage from the drop-down list.
REST Endpoint: If you already have a bucket, select the . Otherwise, you can select s3-eu-central-2.ionoscloud.com, which corresponds to the location in Berlin, Germany.
To get the Access Key and Secret Key, log in to the DCD and go to Menu > Storage > IONOS Object Storage > Key management.
Click Advanced S3-compatible storage settings in the lower-left corner of the form.
Storage settings:
Signature version: Select Signature V4 from the drop-down list.
Addressing model: Leave Path style.
Override storage regions: Paste the following text into the text area:
Region-specific endpoint: Insert the following text: s3-{region-code}.ionoscloud.com
Save the details.
Try creating your first bucket. The bucket name must be unique across the entire IONOS Object Storage, which is why S3 Browser offers to append random text to the bucket name. You can still try to come up with your own unique name.
Use to share access to other contracts. To share access with users from the same contract, use .
To get the Access Key and Secret Key, log in to the DCD and go to Menu > Storage > IONOS Object Storage > Key management.
Setup completed. Now check the list of Endpoints to get the right endpoint to call.
Note: You need to use the correct endpoint URL for each region (see the Endpoints list).
Please refer to the Endpoints list for the --host option. You can skip this option if you are only using the region from the configuration file.
Copy all objects from my-source-bucket to my-dest-bucket excluding .zip files (or use mv to move objects). The command does not support cross-region copying for IONOS Object Storage; use Rclone for cross-region copying:
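A sketch with the AWS CLI (bucket names are placeholders; the endpoint must match the buckets’ region):

```shell
aws s3 cp s3://my-source-bucket s3://my-dest-bucket \
  --recursive \
  --exclude "*.zip" \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com
```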
For more information, visit and .
For more information, see the .
For more information, see .
For additional information, see the official .
Log in to the DCD and go to Menu > Storage > IONOS Object Storage > Key management.
: The most used Infrastructure as Code (IAC) tool, which allows you to manage infrastructure with configuration files rather than through a GUI. Terraform allows you to build, change, and manage your infrastructure safely, consistently, and repeatedly by defining resource configurations that you can version, reuse, and share.
This task could also be achieved by using the API call.
The permanent deletion of the object’s version is prohibited; the system only creates a delete marker for the object. This makes IONOS Object Storage behave in most ways as though the object has been deleted. You can list the delete markers and other versions of an object only by using the API call.
| Endpoint | Target IP Address |
|---|---|
| s3.eu-central-3.ionoscloud.com | 85.215.142.30 |
| s3.eu-central-1.ionoscloud.com | 81.173.115.249 |
| s3.eu-central-2.ionoscloud.com | 85.215.240.253 |
| s3.eu-south-2.ionoscloud.com | 93.93.114.231 |
| Location | Region | Endpoint | Static Website Endpoint |
|---|---|---|---|
| Berlin, Germany | eu-central-3 | s3.eu-central-3.ionoscloud.com | s3-website.eu-central-3.ionoscloud.com |
| Frankfurt, Germany | de | s3.eu-central-1.ionoscloud.com | s3-website-de-central.profitbricks.com |
| Berlin, Germany | eu-central-2 | s3.eu-central-2.ionoscloud.com | s3-website-eu-central-2.ionoscloud.com |
| Logroño, Spain | eu-south-2 | s3.eu-south-2.ionoscloud.com | s3-website-eu-south-2.ionoscloud.com |
| Grantee | Console permission | ACL permission | Access granted |
|---|---|---|---|
| Specific or all users of another contract | Objects - Read | READ | Allows the grantee to list the objects in the bucket. This permission does not allow reading the object data and its metadata. |
| Specific or all users of another contract | Objects - Write | WRITE | Allows the grantee to create new objects in the bucket. For the bucket and object owners of existing objects, it also allows deletions and overwrites of those objects. |
| Specific or all users of another contract | Bucket ACL - Read | READ_ACP | Grants the ability to read the ACL of the bucket. |
| Specific or all users of another contract | Bucket ACL - Write | WRITE_ACP | Allows the grantee to write the ACL of the bucket. |
| Group: All users | Objects - Read | READ | Allows anyone to list the objects in the bucket. This permission does not allow reading the object data and metadata. |
| Group: All users | Bucket ACL - Read | READ_ACP | Grants public read access for the bucket ACL. Anyone can access the bucket ACL. |
| Group: Authenticated users | Objects - Read | READ | Allows anyone with an IONOS account to list the objects in the bucket. This permission does not allow reading the object data and its metadata. |
| Group: Authenticated users | Bucket ACL - Read | READ_ACP | Grants read access to the bucket ACL to anyone with an IONOS account. |
| Grantee | Console permission | ACL permission | Access granted |
|---|---|---|---|
| User | Objects - Read | READ | Allows the grantee to list the objects in the bucket. This permission does not allow reading the object data and its metadata. |
| User | Objects - Write | WRITE | Allows the grantee to create new objects in the bucket. For the bucket and object owners of existing objects, it also allows deletions and overwrites of those objects. |
| User | Bucket ACL - Read | READ_ACP | Grants the ability to read the ACL of the bucket. |
| User | Bucket ACL - Write | WRITE_ACP | Allows the grantee to write the ACL of the bucket. |
| Group: All users | Objects - Read | READ | Allows anyone to list the objects in the bucket. This permission does not allow reading the object data and metadata. |
| Group: All users | Bucket ACL - Read | READ_ACP | Grants public read access for the bucket ACL. Anyone can access the bucket ACL. |
| Group: Authenticated users | Objects - Read | READ | Allows anyone with an IONOS account to list the objects in the bucket. This permission does not allow reading the object data and its metadata. |
| Group: Authenticated users | Bucket ACL - Read | READ_ACP | Grants read access to the bucket ACL to anyone with an IONOS account. |
| Log Delivery Group | Objects - Write | WRITE | Enables the group to write server access logs to the bucket. |
| Grantee | Console permission | ACL permission | Access granted |
|---|---|---|---|
| Specific or all users of another contract | Objects - Read | READ | Allows the grantee to read the object data and its metadata. |
| Specific or all users of another contract | Object ACL - Read | READ_ACP | Grants the ability to read the object ACL. |
| Specific or all users of another contract | Object ACL - Write | WRITE_ACP | Allows the grantee to write the ACL of the applicable object. |
| Group: All users | Objects - Read | READ | Allows anyone to read the object data and its metadata. |
| Group: All users | Object ACL - Read | READ_ACP | Allows anyone to read the object ACL. |
| Group: Authenticated users | Objects - Read | READ | Allows anyone with an IONOS account to read the object data and its metadata. |
| Group: Authenticated users | Object ACL - Read | READ_ACP | Grants read access to the object ACL to anyone with an IONOS account. |
| Grantee | Console permission | ACL permission | Access granted |
|---|---|---|---|
| User | Objects - Read | READ | Allows the grantee to read the object data and its metadata. |
| User | Object ACL - Read | READ_ACP | Grants the ability to read the object ACL. |
| User | Object ACL - Write | WRITE_ACP | Allows the grantee to write the ACL of the applicable object. |
| Group: All users | Objects - Read | READ | Allows anyone to read the object data and its metadata. |
| Group: All users | Object ACL - Read | READ_ACP | Allows anyone to read the object ACL. |
| Group: Authenticated users | Objects - Read | READ | Allows anyone with an IONOS account to read the object data and its metadata. |
| Group: Authenticated users | Object ACL - Read | READ_ACP | Grants read access to the object ACL to anyone with an IONOS account. |
This document provides instructions to manage Versioning using the CLI. Additionally, these tasks can also be performed using the DCD and IONOS Object Storage API.
Prerequisites:
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported Endpoints.
Get the versioning state of the bucket:
Enable versioning for the bucket:
List object versions for the bucket:
List object versions for the object my-object.txt
:
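The four operations above might be sketched as follows (bucket, object key, and endpoint are placeholders):

```shell
# Get the versioning state of the bucket
aws s3api get-bucket-versioning --bucket my-bucket \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com

# Enable versioning for the bucket
aws s3api put-bucket-versioning --bucket my-bucket \
  --versioning-configuration Status=Enabled \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com

# List all object versions in the bucket
aws s3api list-object-versions --bucket my-bucket \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com

# List versions of a single object
aws s3api list-object-versions --bucket my-bucket --prefix my-object.txt \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com
```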
This document provides instructions to manage Replication using the CLI. Additionally, these tasks can also be performed using the DCD and IONOS Object Storage API.
Prerequisites:
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported Endpoints.
Versioning must be enabled for source and destination buckets.
Create the file replication_configuration.json
with the following content:
Enable replication from my-source-bucket
to my-destination-bucket
(use the endpoint of the source bucket):
Retrieve the replication configuration:
Delete the replication configuration:
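A sketch of the three operations (bucket names and the Role ARN are placeholders; the rule shape follows the standard S3 replication schema and may need adjusting for your setup):

```shell
cat > replication_configuration.json <<'EOF'
{
  "Role": "arn:aws:iam:::role/replication",
  "Rules": [
    {
      "Status": "Enabled",
      "Prefix": "",
      "Destination": { "Bucket": "arn:aws:s3:::my-destination-bucket" }
    }
  ]
}
EOF

# Enable replication (use the endpoint of the source bucket)
aws s3api put-bucket-replication --bucket my-source-bucket \
  --replication-configuration file://replication_configuration.json \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com

# Retrieve the replication configuration
aws s3api get-bucket-replication --bucket my-source-bucket \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com

# Delete the replication configuration
aws s3api delete-bucket-replication --bucket my-source-bucket \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com
```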
Info: It takes up to a few minutes for the deletion of a replication rule to propagate fully.
This document provides instructions to Manage ACL for Buckets using the AWS CLI. Additionally, these tasks can also be performed using the DCD and IONOS Object Storage API.
Prerequisites:
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported Endpoints.
Use the following keys to define access permissions:
--grant-read
: Grants read-only access.
--grant-write
: Grants write-only access.
--grant-read-acp
: Grants permission to read the Access Control List.
--grant-write-acp
: Grants permission to modify the Access Control List.
--grant-full-control
: Grants full access, encompassing the permissions listed above (read, write, read ACL, and write ACL).
Note: Granting access to a bucket for another IONOS user does not make the bucket appear in the user's Object Storage in the DCD due to the S3 protocol's architecture. To access the bucket, the user must utilize other S3 Tools, as the granted access does not translate to interface visibility.
Grant full control of my-bucket
to a user with a specific Canonical user ID:
Separate grants with a comma if you want to specify multiple Canonical user IDs:
Grant full control of my-bucket
to multiple users using their Canonical user IDs:
Grant full control of my-bucket
by using an email address
instead of a Canonical User ID:
Retrieve the ACL of a bucket and save it to the file acl.json
:
Edit the file. For example, remove or add some grants and apply the updated ACL to the bucket:
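These grant operations might look as follows (Canonical user IDs, the email address, and the endpoint are placeholders):

```shell
# Grant full control to one user by Canonical user ID
aws s3api put-bucket-acl --bucket my-bucket \
  --grant-full-control 'id="1234567890abcdef"' \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com

# Multiple grantees: separate the grants with a comma
aws s3api put-bucket-acl --bucket my-bucket \
  --grant-full-control 'id="1234567890abcdef",id="fedcba0987654321"' \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com

# Grant by email address instead of a Canonical user ID
aws s3api put-bucket-acl --bucket my-bucket \
  --grant-full-control 'emailaddress="user@example.com"' \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com

# Retrieve the current ACL, edit acl.json, then apply it again
aws s3api get-bucket-acl --bucket my-bucket \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com > acl.json
aws s3api put-bucket-acl --bucket my-bucket \
  --access-control-policy file://acl.json \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com
```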
Use the following values for the --acl
key:
private
removes public access.
public-read
allows public read-only access.
public-read-write
allows public read/write access.
authenticated-read
allows read-only access to all authenticated users of IONOS Object Storage (including those outside your contract).
Allow public read-only access to the bucket:
Remove public access to the bucket:
Set WRITE
and READ_ACP
permissions for the Log Delivery Group, which is required before enabling the Logging feature for a bucket:
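A sketch of the canned-ACL and Log Delivery Group commands (bucket names and the endpoint are placeholders; the group URI is the standard S3 LogDelivery URI, assumed to apply here):

```shell
# Allow public read-only access to the bucket
aws s3api put-bucket-acl --bucket my-bucket --acl public-read \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com

# Remove public access from the bucket
aws s3api put-bucket-acl --bucket my-bucket --acl private \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com

# Grant WRITE and READ_ACP to the Log Delivery Group (required before enabling Logging)
aws s3api put-bucket-acl --bucket my-logs-bucket \
  --grant-write 'uri="http://acs.amazonaws.com/groups/s3/LogDelivery"' \
  --grant-read-acp 'uri="http://acs.amazonaws.com/groups/s3/LogDelivery"' \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com
```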
This document provides instructions to manage Lifecycle using the CLI. Additionally, these tasks can also be performed using the DCD and IONOS Object Storage API.
Prerequisites:
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported Endpoints.
Create a file lifecycle.json
with the JSON policy:
Apply the lifecycle configuration to a bucket:
Save the bucket’s lifecycle configuration to a file:
Delete the Lifecycle configuration:
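A sketch of the three operations (the rule in lifecycle.json is an example expiring objects under the logs/ prefix after 90 days; bucket name and endpoint are placeholders):

```shell
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-old-logs",
      "Filter": { "Prefix": "logs/" },
      "Status": "Enabled",
      "Expiration": { "Days": 90 }
    }
  ]
}
EOF

# Apply the lifecycle configuration to a bucket
aws s3api put-bucket-lifecycle-configuration --bucket my-bucket \
  --lifecycle-configuration file://lifecycle.json \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com

# Save the bucket's lifecycle configuration to a file
aws s3api get-bucket-lifecycle-configuration --bucket my-bucket \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com > lifecycle-saved.json

# Delete the lifecycle configuration
aws s3api delete-bucket-lifecycle --bucket my-bucket \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com
```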
This document provides instructions to manage Bucket Policy using the CLI. Additionally, these tasks can also be performed using the DCD and IONOS Object Storage API.
Prerequisites:
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported Endpoints.
To create a file policy.json
with the JSON policy, see Examples.
Apply a bucket policy to a bucket:
Save a bucket policy to file:
Delete the bucket policy:
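A sketch of the three operations (bucket name and endpoint are placeholders):

```shell
# Apply a bucket policy to a bucket
aws s3api put-bucket-policy --bucket my-bucket \
  --policy file://policy.json \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com

# Save the bucket policy to a file
aws s3api get-bucket-policy --bucket my-bucket \
  --query Policy --output text \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com > policy-saved.json

# Delete the bucket policy
aws s3api delete-bucket-policy --bucket my-bucket \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com
```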
This document provides instructions to manage CORS using the CLI. Additionally, these tasks can also be performed using the DCD and API.
Prerequisites:
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported Endpoints.
Get the CORS configuration for the bucket my-bucket
:
Set up CORS configuration for the bucket my-bucket
:
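Both operations might be sketched as follows (the CORS rule, bucket name, and endpoint are examples):

```shell
# Get the current CORS configuration
aws s3api get-bucket-cors --bucket my-bucket \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com

# Set up a CORS configuration
cat > cors.json <<'EOF'
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://www.example.com"],
      "AllowedMethods": ["GET", "PUT"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3000
    }
  ]
}
EOF
aws s3api put-bucket-cors --bucket my-bucket \
  --cors-configuration file://cors.json \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com
```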
For more information, see put-bucket-cors command reference.
This document provides instructions for managing Static Website Hosting using the CLI. Additionally, these tasks can also be performed using the DCD and IONOS Object Storage API.
Prerequisites:
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported Endpoints.
Make the bucket public for static website hosting using Bucket Policy:
Contents of policy.json
:
Enable static website hosting for my-bucket
:
Info: The website URLs differ from the endpoint URLs. The command sets up the static website at http://my-bucket.s3-website-eu-central-2.ionoscloud.com.
Disable static website hosting for my-bucket
:
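The full sequence might be sketched as follows (policy contents, index/error documents, bucket name, and endpoint are examples):

```shell
# policy.json: allow public read of all objects in the bucket
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
EOF
aws s3api put-bucket-policy --bucket my-bucket \
  --policy file://policy.json \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com

# Enable static website hosting
aws s3api put-bucket-website --bucket my-bucket \
  --website-configuration '{"IndexDocument": {"Suffix": "index.html"}, "ErrorDocument": {"Key": "error.html"}}' \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com

# Disable static website hosting
aws s3api delete-bucket-website --bucket my-bucket \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com
```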
Boto3 is the official AWS SDK for Python. It allows you to create, update, and configure IONOS Object Storage objects from within your Python scripts.
Install the latest Boto3 release via pip: pip install boto3
There are several ways to provide credentials, e.g., passing credentials as parameters to the boto3.client() method, via environment variables, or with a generic credentials file (~/.aws/credentials).
An example of passing credentials as parameters when creating a Session object:
To get the Access Key and Secret Key, log in to the DCD, and go to Menu > Storage > IONOS Object Storage > Key management.
NOTE: Your credentials are not tied to a specific region or bucket.
For information on the supported IONOS Object Storage Service endpoints, see Endpoints.
List buckets:
Create bucket my-bucket
at the region eu-central-1
:
Upload filename.txt to the bucket my-bucket
:
For more information, see AWS SDK documentation on Uploading files.
Download the file filename.txt
from the my-bucket
:
List objects of the bucket my-bucket
Copy the filename.txt from the bucket my-source-bucket
to the bucket my-dest-bucket
and add the prefix uploaded/
. Instead of the client()
method, we use the resource()
method here. It provides a higher level of abstraction than the low-level calls made by service clients.
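The copy with the resource() method might be sketched as follows (credentials, bucket names, and the endpoint are placeholders):

```python
import boto3

# Higher-level resource interface instead of the low-level client
s3 = boto3.resource(
    "s3",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
    region_name="eu-central-2",
    endpoint_url="https://s3.eu-central-2.ionoscloud.com",
)

# Copy filename.txt from the source bucket to the destination
# bucket under the uploaded/ prefix
copy_source = {"Bucket": "my-source-bucket", "Key": "filename.txt"}
s3.Object("my-dest-bucket", "uploaded/filename.txt").copy(copy_source)
```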
For more examples, see Boto3 documentation, such as:
For more information on Boto3 and Python, see Realpython.com – Python, Boto3, and AWS S3: Demystified.
This document provides instructions to Manage ACL for Objects using the AWS CLI. Additionally, these tasks can also be performed using the DCD and IONOS Object Storage API.
Prerequisites:
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported Endpoints for object upload.
Use the following keys to define access permissions:
--grant-read
: Grants read-only access.
--grant-write
: Grants write-only access.
--grant-read-acp
: Grants permission to read the Access Control List.
--grant-write-acp
: Grants permission to modify the Access Control List.
--grant-full-control
: Grants full access, encompassing the permissions listed above (read, write, read ACL, and write ACL).
Use --key
to specify the object for granting access:
Use the following values for the --acl
key:
private
removes public access.
public-read
allows public read-only access.
public-read-write
allows public read/write access.
authenticated-read
allows read-only access to all authenticated users of IONOS Object Storage (including those outside your contract).
Allow public read-only access to the object:
Remove public access from the object:
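A sketch of these object-level ACL operations (Canonical user ID, bucket, key, and endpoint are placeholders):

```shell
# Grant read access to an object for a specific user
aws s3api put-object-acl --bucket my-bucket --key my-object.pdf \
  --grant-read 'id="1234567890abcdef"' \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com

# Allow public read-only access to the object
aws s3api put-object-acl --bucket my-bucket --key my-object.pdf \
  --acl public-read \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com

# Remove public access from the object
aws s3api put-object-acl --bucket my-bucket --key my-object.pdf \
  --acl private \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com
```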
Rclone is a command line tool for managing files in the cloud. It is available for Windows, macOS, and Linux. Rclone also has a built-in HTTP server that can be used to remotely control rclone using its API and a web GUI (graphical user interface).
rclone helps:
backing up (and encrypting) files to cloud storage
restoring (and decrypting) files from cloud storage
mirroring cloud data to other cloud services or locally
transferring data to the cloud or between cloud storage providers
mounting multiple encrypted, cached, or diverse cloud storages in the form of a disk
analyzing data stored in cloud storage using lsf, lsjson, size, and ncdu
Download the latest version of rclone from rclone.org. The official Ubuntu, Debian, Fedora, Brew, and Chocolatey repositories include rclone.
You can find the configuration example here.
Configurations created with the rclone config
command are called remotes. If you already have or plan to use buckets in different IONOS Object Storage regions, you will need to set up a separate remote for each region you use.
Please refer to the list of commands at the rclone website.
List remotes:
List buckets of "ionos1" remote:
Create bucket my-bucket
at the remote ionos1
:
Upload filename.txt from the current directory to the bucket my-bucket
:
Copy the contents of local directory my-dir
to the bucket my-bucket
:
Copy all objects with the prefix my-dir
from the bucket my-source-bucket
to my-dest-bucket
:
The buckets can be located in different regions and even at different providers. Unless the buckets are located within the same region, the data is not copied directly from the source to the destination: for cross-regional copying, the data is first downloaded from the source bucket to your machine and then uploaded to the destination.
Download all the objects from the my-bucket
bucket to the local directory my-dir
:
Sync the bucket my-bucket
with the contents of the local directory my-dir
. The destination is updated to match the source, including deleting files if necessary:
Get the total size and number of objects in remote:path:
Check if the files in the local directory and destination match:
Produce an md5sum file for all the objects in the path:
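The commands above might be sketched as follows (the remote name ionos1 follows the example in this section; bucket and directory names are placeholders):

```shell
# List configured remotes
rclone listremotes

# List buckets of the "ionos1" remote
rclone lsd ionos1:

# Create a bucket
rclone mkdir ionos1:my-bucket

# Upload a single file
rclone copy filename.txt ionos1:my-bucket

# Copy the contents of a local directory into the bucket
rclone copy my-dir ionos1:my-bucket/my-dir

# Copy objects with a prefix between buckets
rclone copy ionos1:my-source-bucket/my-dir ionos1:my-dest-bucket/my-dir

# Download all objects from the bucket to a local directory
rclone copy ionos1:my-bucket my-dir

# Sync: make the bucket match the local directory (deletes extraneous files)
rclone sync my-dir ionos1:my-bucket

# Size, integrity check, and checksums
rclone size ionos1:my-bucket
rclone check my-dir ionos1:my-bucket
rclone md5sum ionos1:my-bucket
```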
Veeam Backup & Replication offers two options for backup to the Object Storage:
1. Direct backup to Object Storage
2. Backup to Object Storage as a capacity tier in the Scale-out Backup Repository (SOBR).
Follow these steps to configure direct backup:
1. Add Object Storage Repository: In Veeam Backup & Replication, navigate to the backup infrastructure and add a new object storage repository, selecting S3 compatible and providing your IONOS Object Storage credentials. We recommend using the eu-central-3
region for Veeam backups.
2. Configure Backup Jobs: Set up or modify backup jobs to target the newly added IONOS Object Storage repository.
For more information, see Add Backup Repository.
SOBR in Veeam enables you to create a highly flexible and scalable backup storage solution by combining multiple storage resources into a single repository. It allows moving data from performance to capacity tiers automatically to optimize storage usage and cost.
Follow these steps to configure SOBR with Object Storage:
1. Create Performance Tier: Add your primary, fast storage devices (local or network) as the performance tier in Veeam Backup & Replication.
2. Add Capacity Tier: Integrate IONOS Object Storage as the capacity tier, allowing Veeam to offload older backup files to cost-effective object storage.
3. Policy Configuration: Configure policies to define how and when data should be moved between the performance and capacity tiers.
In Veeam 12, both Performance and Capacity Tiers support multiple buckets without any limit and apply to users as follows:
Existing users upgrading to v12 should consider setting up a new SOBR with multiple buckets, as Veeam does not redistribute existing VM backups across newly added buckets.
For new users planning to integrate IONOS Object Storage in the Capacity Tier with a traditional SOBR configuration, it is advisable to start with multiple buckets.
Veeam automatically allocates VM backups across these buckets. While adding new buckets is possible later as well, Veeam does not reallocate existing VM backups to these new buckets; only new VM backups will utilize them. Utilizing multiple buckets in the Capacity Tier removes the necessity for multiple SOBRs.
For more information, see Recommended Settings.
This document provides instructions to manage Logging using the CLI. Additionally, these tasks can also be performed using the DCD and IONOS Object Storage API.
Prerequisites:
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported Endpoints.
Prerequisite: Grant permissions on the bucket where logs will be stored to the Log Delivery Group. We recommend using a separate bucket for logs, but it must be in the same region. The Log Delivery Group must be able to write objects and read the ACL.
After that, you can enable Logging for a bucket:
Contents of logs-acl.json
:
Retrieve bucket logging settings:
Disable logging for a bucket:
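A sketch of these operations, assuming the Log Delivery Group grants are already in place (bucket names, the target prefix, and the file name logging.json are placeholders):

```shell
# Enable logging: write access logs for my-bucket into my-logs-bucket
cat > logging.json <<'EOF'
{
  "LoggingEnabled": {
    "TargetBucket": "my-logs-bucket",
    "TargetPrefix": "my-bucket-logs/"
  }
}
EOF
aws s3api put-bucket-logging --bucket my-bucket \
  --bucket-logging-status file://logging.json \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com

# Retrieve bucket logging settings
aws s3api get-bucket-logging --bucket my-bucket \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com

# Disable logging by applying an empty configuration
aws s3api put-bucket-logging --bucket my-bucket \
  --bucket-logging-status '{}' \
  --endpoint-url https://s3.eu-central-2.ionoscloud.com
```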
In May 2024, we introduced the eu-central-3
Object Storage region in Berlin, Germany. This region features contract-owned buckets, providing enhanced performance and resilience. Note that a new 92-character access key is required for this region.
To move a backup repository to the eu-central-3
region, follow these steps:
1. Create a new access key: Check your access key length in the . If it is 20 characters long, you need to create a new 92-character key compatible with all regions. For more information, see .
2. Create a new contract-owned bucket with an Object Lock: Create a new contract-owned bucket in the eu-central-3
region with Object Lock enabled. For more information, see and refer to the "contract-owned buckets" section.
Note: Select the No default retention option as the mode for the Object Lock.
3. Set up Veeam storage optimization: To do so, follow the steps in .
4. Create backup repository: Create an Object Storage repository in Veeam. To do so, follow the steps in .
5. Move the backup repository: You need to move the backup repository to the new Object Storage region. To do so, follow these steps:
Right-click on the Backup Jobs name in the Object Storage section of the Backup tab and select Move backup.
Specify the target repository to move backups to and click OK. The migration of your backup repository will start. The data will be transferred directly from one region to another.
Result: The Veeam backup repository is successfully moved to the eu-central-3
Object Storage region.
1. Click the Backup section under Jobs.
2. Select the required backup job, right-click on it, and choose Edit in the menu.
3. Click Storage, go to Advanced Settings and change the tab to Storage.
Note: We recommend using at least 4 MB blocks in Veeam to benefit from better performance; 8 MB storage blocks are the preferred choice, but they are unavailable in the interface by default and must be enabled via a registry key.
4. In the Storage optimization drop-down list, choose 4 MB or 8 MB (marked as "not recommended"). Disregard the "not recommended" mark shown in the interface.
5. (Optional) If you want to benefit from better performance, follow the instructions to enable 8 MB blocks:
Open the Registry Editor.
Navigate to Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication
Add a new DWORD
(32-bit) value named UIShowLegacyBlockSize
.
Set the value of UIShowLegacyBlockSize
to 1. Now, the 8 MB block option will be available in the interface.
To apply the new storage configuration as the default for all future backup jobs, click Save as Default at the bottom of the Advanced Settings screen. This will ensure the settings are automatically used for any new backup jobs created.
Result: The Veeam storage optimization is successfully set up.
IONOS Object Storage is fully compatible with the S3 protocol, and once properly configured, it can be used to manage buckets and objects with existing S3 clients. Terraform is HashiCorp's infrastructure-as-code tool. It lets you define resources and infrastructure in human-readable format, declarative configuration files and manages your infrastructure's lifecycle. Using Terraform has several advantages over manually managing your infrastructure.
To use IONOS Object Storage with AWS Terraform Provider, . For more information, see .
The minimum version is Veeam 11a CP4 (11.0.1.1261, 20220302), which brings an important fix when using Object Lock.
In May 2024, we introduced the eu-central-3
Object Storage region in Berlin, Germany. This region features contract-owned buckets, which provide enhanced performance and resilience. This new bucket type creates an easier opportunity for user management, since the bucket list is visible for all users of the contract, and the contract owner or administrators can assign permissions to view bucket contents. For more information, see and .
This setting is available during the addition of the new Object Storage repository. For more information, see Configure repository details in .
You can also update the concurrent tasks limit for an existing Object Storage repository by following these steps:
Go to the Backup Infrastructure tab.
Click Backup Repositories.
Right-click on your Object Storage repository and choose Properties.
Select the checkbox Limit concurrent tasks to and set the value as follows:
Up to 10—for eu-south-2
region.
Up to 20—for all other regions.
Veeam Backup & Replication allows you to configure block sizes for each backup job, which can significantly impact deduplication efficiency and incremental backup size. By default, blocks are compressed, typically achieving a compression ratio of about 50%.
Smaller blocks can enhance deduplication, but they increase the number of calls to the object storage, potentially affecting performance.
Larger blocks reduce the number of storage calls and can increase throughput to and from IONOS Object Storage, improving overall backup performance.
For optimal performance with IONOS Object Storage, we recommend using 8 MB blocks. These blocks are unavailable in the interface by default and must be enabled via a registry key by following these steps:
1. Open the Registry Editor.
2. Navigate to Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication
.
3. Add a new DWORD (32-bit) value named UIShowLegacyBlockSize
.
4. Set the value of UIShowLegacyBlockSize
to 1.
Result: The 8 MB blocks are now available in the Veeam interface.
To change the storage optimization setting, follow these steps:
1. Click the Backup section under Jobs.
2. Select the required backup job, right-click on it, and choose Edit in the menu.
3. Click Storage > Advanced and change the tab to Storage.
4. In the Storage optimization drop-down list, choose 8 MB (marked as "not recommended"). Disregard this mark, as it is the correct choice. This option only appears if you modify the registry setting as described in the previous section. Do not use a block size smaller than 4 MB.
5. Click Save as default at the bottom of the Advanced Settings screen.
Result: The new storage configuration is saved as the default setting. This will ensure the settings are automatically used for any new backup jobs created.
The immutability retention period of the Object Storage Repository must be less than or equal to the backup retention period of the backup job.
To check the immutability retention period, follow these steps:
1. Go to the Backup Infrastructure tab.
2. Click Backup Repositories.
3. Right-click Object Storage Repository and choose Properties.
4. Click Next through the Name and Account settings.
5. In the Make recent backups immutable for setting, set the number of days to 30.
Result: The immutable retention period is successfully set.
To check the backup retention period, follow these steps:
1. Click the Backup section under Jobs.
2. Select the required backup job, right-click on it, and choose Edit in the menu. You will see the Job Mode screen.
3. Click Storage in the left menu and check the number in the Retention policy setting.
Note: The immutability retention period from the previous check (30) is lower than the retention policy listed here (31), and this is the correct setting.
Create a bucket for every 100 virtual machines or 200 TB of data to be backed up, whether for the Performance or Capacity Tier, depending on the use case.
Info:
— The access_key
and secret_key
can be retrieved in the DCD, go to Menu > Storage > IONOS Object Storage > Key management.
— For the list of IONOS Object Storage Service endpoints, see .
Prerequisite: For installing or updating the latest versions, refer to the instructions in the documentation.
The following information must be specified in your Terraform provider configuration (HCL):
AWS Access Key ID: Insert the Access Key. In the DCD, go to Menu > Storage > IONOS Object Storage > Key management and check the Access keys section to find it.
AWS Secret Access Key: Paste the Secret Key, found in the same Access keys section.
skip_credentials_validation: When set to true, it skips Security Token Service (STS) validation.
skip_requesting_account_id: The account ID is not requested when set to true. It is useful for AWS API implementations that do not have the IAM, STS API, or metadata API.
skip_region_validation: The region name is not validated when set to true. It is useful for AWS-like implementations that use their own region names.
endpoints: For the list of IONOS Object Storage Service endpoints, see .
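Putting the settings above together, a minimal provider block might look like the following sketch. The access and secret keys are placeholders to be replaced with your own values from Key management, and the endpoint shown is the eu-central-3 endpoint used elsewhere in this guide:

```hcl
# Sketch of an AWS provider block pointed at IONOS Object Storage.
# Replace the placeholder keys with the values from
# DCD > Menu > Storage > IONOS Object Storage > Key management.
provider "aws" {
  access_key = "INSERT_ACCESS_KEY"
  secret_key = "INSERT_SECRET_KEY"
  region     = "eu-central-3"

  skip_credentials_validation = true
  skip_requesting_account_id  = true
  skip_region_validation      = true

  endpoints {
    s3 = "https://s3.eu-central-3.ionoscloud.com"
  }
}
```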
In May 2024, we introduced the eu-central-3 Object Storage region in Berlin, Germany. This region features contract-owned buckets, providing enhanced performance and resilience. A new 92-character access key is required for this region.
To add a backup repository, follow these steps:
1. Create a new access key: Check your access key length in the . If it is 20 characters long, you need to create a new 92-character key compatible with all regions. For more information, see .
2. Create a new contract-owned bucket with an Object Lock: Create a new contract-owned bucket in the eu-central-3 region with Object Lock enabled. For more information, see and refer to the "contract-owned buckets" section.
Note: Select the No default retention option as the mode for the Object Lock.
3. Create backup repository: Create an Object Storage as an object repository in Veeam. To do so, follow the steps in .
Result: An object repository is successfully created. In the Backup Infrastructure tab, you can view the repository listed under Backup Repositories.
To add a new object storage repository on Veeam Backup & Replication, follow these steps:
1. Navigate to the backup repositories. To do so, go to the Backup Infrastructure tab, and click Backup Repositories > Add Repository to open the wizard for adding new backup repositories.
2. Select repository type by following these steps:
Enter a name and an optional description for the object storage repository.
Select the Limit concurrent tasks to checkbox and set the values as follows:
Up to 10 – for the eu-south-2 region
Up to 20 – for all other regions
Click Next.
3. Input the endpoint details as follows:
Service endpoint: https://s3.eu-central-3.ionoscloud.com
Region: eu-central-3
Click Add to input the access and secret keys.
Info: Only the 92-character access key supports all the Object Storage regions.
Click Next.
4. Configure bucket and folders by following these steps:
Enter the bucket name or browse and select from the list.
Click Browse to create a new folder where backups will be stored.
Click Next.
5. In the Mount server tab, keep the default values and click Next.
6. In the Review tab, click Apply and then click Next to continue.
7. Finalize the repository creation by reviewing the Summary and clicking Finish.
Result: The object storage repository is successfully created in Veeam. In the Backup Infrastructure tab, you can view the repository listed under Backup Repositories.
To create a backup job, follow these steps:
1. Click the Backup Job button and choose your workload type. For example, Windows computer.
2. Choose the Job Mode type as Workstation or Server and click Next.
3. Name the backup job and add an optional description, for example, "Back up to Object Storage eu-central-3", and click Next.
4. Choose computers for backing up. Click Add and choose Individual computer.
5. Enter the computer’s IP address. Click the Add button to enter the Username and password. Click OK and Next.
6. Choose backup mode from the following options: Entire computer, Volume level backup, or File level backup and click Next.
7. Choose a backup repository from the drop-down menu. Update the Retention policy setting and click Next.
8. In the Guest Processing step, keep the default settings and click Next.
9. Choose a Schedule for your backup and click Apply.
10. Check the Summary and click Finish.
Result: A backup job is successfully created.
Note: Instead of waiting for the next scheduled run, you can right-click the job name in the Jobs list and choose Start to run the initial backup immediately.
(Optional) Set the limit for used storage and enable backup immutability. The immutability retention period of the Object Storage Repository must be less than or equal to the backup retention period of the backup job. For more information, see .
Continue to .
Minimal Veeam Version
Veeam 11a CP4 (build 11.0.1.1261 P20220302) includes an important fix for using Object Lock.
Recommended Object Storage regions
eu-central-3. For more information, see Endpoints.
Backup Job Storage Optimization
Minimum: 4 MB; recommended: 8 MB. The 8 MB option must be enabled via a registry key. For more information, see Apply 8 MB blocks in the storage optimization setting.
Immutability Retention Period
The immutability retention period must be less than or equal to the backup retention period.
Object Storage Repository Task Limit
Create a bucket for every 100 VMs or 200 TB of data to be backed up (Performance or Capacity Tier, depending on the use case).
Learn about the two options for backing up to Object Storage from Veeam Backup & Replication.
Learn how to create a new Object Storage repository for your backups.
Learn how to create backup jobs with performance-optimized settings.
Learn how to migrate your backup repository to the eu-central-3 region.
Learn more about the recommended settings to apply while setting up Veeam Backup & Replication to Object Storage.
The following are a few FAQs to provide insight into the IONOS Object Storage application.
In the DCD, go to Menu > Storage > IONOS Object Storage. The feature is generally available to all existing and new users. Alternatively, you can also use the GUI tools, CLI tools, API and SDKs to access the Object Storage.
Contract-owned buckets are recommended for users within a single organization and are currently supported in the eu-central-3 region only. User-owned buckets are recommended when you do not need to view or access buckets of other users in the contract. This bucket type is supported in the de, eu-central-2, and eu-south-2 regions. For a detailed feature comparison, see Bucket Types.
In the DCD, go to Storage > IONOS Object Storage > Key management to view the access keys. You can generate a new key in the Access keys section by using the Generate a key function.
To prepare for new functionalities of IONOS Object Storage, we made important changes to our Access and Secret Key specifications. Effective April 25, 2024, the character length of all newly generated keys has increased significantly:
Access Key: The key length has been increased from 20 to 92 characters.
Previous format example: 23cbca2790edd9f62100
New format example: EAAAAAFaSZEvg5hC2IoZ0EuXHRB4UNMpLkvzWdKvecNpEUF-YgAAAAEB41A3AAAAAAHnUDl-h_Lwot1NVP6F_MARJv_o
Secret Key: The key length has been increased from 40 to 64 characters.
Previous format example: 0Q1YOGKz3z6Nwv8KkkWiButqx4sVmSJW4bTGwbzO
New format example: Opdxr7mG09tK4wX4s6J3nrl1Z4EJgYRui/rldkgiPmrI5bavWHuThswRqPwgbeLP
We recommend replacing the existing keys with new access keys. Existing keys will remain working but may not enable you to use the upcoming functionalities in the Object Storage.
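Since the two key generations differ only in length, a quick length check tells you which one you have. The helper below is an illustrative sketch, not part of the IONOS tooling:

```python
# Hedged helper (not part of the IONOS tooling): tell the two access key
# generations apart by length, per the formats described above.

def key_generation(access_key: str) -> str:
    """Classify an IONOS Object Storage access key by its length."""
    if len(access_key) == 92:
        return "new (supported in all regions, including eu-central-3)"
    if len(access_key) == 20:
        return "legacy (generate a new 92-character key for eu-central-3)"
    return "unknown"

print(key_generation("23cbca2790edd9f62100"))  # the legacy example key above
```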
You can store any type of data, including documents, photos, videos, backups, and large data sets for analytics and big data applications. Each object can be a maximum of 5,120 GB (5 TB) in size. For more information, see Objects and Folders.
The pricing is based on the actual amount of storage used and outgoing data transfer. There are no minimum storage charges, allowing you to start using the Object Storage by uploading as little as one byte of data. For more information, see Pricing Model.
Endpoints are URLs used to access storage services for operations like uploading and downloading data. Each bucket resides in a specific region; each region has a particular endpoint. For the list of available access points, see Endpoints.
With our ongoing efforts to continuously improve our product functions, the IONOS Object Storage will offer a Bucket Inventory feature shortly. It generates regular reports listing the objects in a storage bucket, including details like metadata, size, and storage class. It helps in tracking and managing stored content efficiently. The Object Storage would also offer a User Policy to enable users to create and manage buckets independently.
Objects (files) of any format can be uploaded to and stored in the Object Storage. Objects may not exceed 4.65 GiB (5,000,000,000 bytes) if uploaded using the DCD. Other applications are not subject to this limit. Use the MultipartUpload set of functions in the API or SDKs to upload large files.
Each object can be a maximum of 5,120 GB (5 TB) in size. For more information, see Limitations.
To speed up the upload of large files, you can use the Multi-part upload that breaks the large files into smaller, manageable parts and upload them in parallel. The feature is not available via the DCD but can be implemented in your application via the API or SDKs.
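The core of a multipart upload is splitting the payload into fixed-size parts that can be sent in parallel. The sketch below shows only that chunking step; the actual transfer would go through an S3 SDK (CreateMultipartUpload, UploadPart, CompleteMultipartUpload), and the 12 MiB payload is illustrative:

```python
# Minimal sketch of the splitting logic behind a multipart upload.
# A real upload would send each part via an S3 SDK; here we only show
# how a payload is divided into parts and later reassembled.

PART_SIZE = 5 * 1024 * 1024  # S3 requires at least 5 MiB per part (except the last)

def split_into_parts(data: bytes, part_size: int = PART_SIZE) -> list:
    """Split a payload into consecutive parts of at most part_size bytes."""
    return [data[i:i + part_size] for i in range(0, len(data), part_size)]

payload = b"x" * (12 * 1024 * 1024)   # 12 MiB of sample data
parts = split_into_parts(payload)
assert len(parts) == 3                # 5 MiB + 5 MiB + 2 MiB
assert b"".join(parts) == payload     # parts reassemble to the original
```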
For contract-owned buckets, retrieve your Contract User ID and provide it to the bucket owner or administrator; they can then grant you access by using the Bucket Policy.
For user-owned buckets, retrieve your Canonical User ID and provide it to the bucket owner who can then grant you access by using the Bucket Policy.
For more information, see Retrieve User ID and Bucket Policy.
Using a Bucket Policy, objects can be shared with other Object Storage users as follows:
The contract owner or administrator of the contract-owned bucket can share access with other users by using their Contract User ID.
The bucket owner of the user-owned bucket can share access with other users by using their Canonical User ID.
You can also temporarily share objects with other users without additional authorization using Share Objects with Pre-Signed URLs.
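A pre-signed URL embeds a time-limited Signature Version 4 signature in the query string, so the recipient needs no credentials of their own. In practice an SDK generates these URLs for you; the stdlib-only sketch below shows the mechanics under stated assumptions (the bucket, key, and credentials are placeholders):

```python
# Hedged sketch: building an S3 pre-signed GET URL with only the standard
# library (SigV4 query-string signing). An SDK such as boto3 normally does
# this for you; bucket, key, and credentials below are placeholders.
import hashlib
import hmac
from datetime import datetime, timezone
from urllib.parse import quote

def presign_get(host, bucket, key, access_key, secret_key,
                region, expires=3600, now=None):
    now = now or datetime.now(timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    scope = f"{datestamp}/{region}/s3/aws4_request"
    path = f"/{bucket}/{quote(key)}"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    query = "&".join(f"{k}={quote(v, safe='')}" for k, v in sorted(params.items()))
    # Canonical request: method, path, query, headers, signed headers, payload hash.
    canonical = "\n".join(["GET", path, query, f"host:{host}\n", "host",
                           "UNSIGNED-PAYLOAD"])
    string_to_sign = "\n".join(["AWS4-HMAC-SHA256", amz_date, scope,
                                hashlib.sha256(canonical.encode()).hexdigest()])
    def _hmac(k, msg):
        return hmac.new(k, msg.encode(), hashlib.sha256).digest()
    # Derive the signing key: date -> region -> service -> "aws4_request".
    key_bytes = _hmac(_hmac(_hmac(_hmac(b"AWS4" + secret_key.encode(),
                      datestamp), region), "s3"), "aws4_request")
    signature = hmac.new(key_bytes, string_to_sign.encode(),
                         hashlib.sha256).hexdigest()
    return f"https://{host}{path}?{query}&X-Amz-Signature={signature}"

url = presign_get("s3.eu-central-3.ionoscloud.com", "my-bucket", "report.pdf",
                  "EXAMPLE_ACCESS_KEY", "EXAMPLE_SECRET_KEY", "eu-central-3")
```

Anyone holding the resulting URL can download the object until the expiry window lapses, after which the signature no longer validates.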
Yes, by setting appropriate permissions on your buckets or individual objects, you can make data accessible over the internet. Static Website Hosting lets you host static websites directly from your buckets, serving your content (HTML files, images, and videos) to users over the web.
Yes, you can back up your bucket using the Replication feature, which allows automatic replication of your bucket's objects to another bucket, which can be in a different geographic location. Replication within user-owned buckets of the same user, as well as replication to contract-owned buckets in the eu-central-3 region, is supported. Additionally, you can apply Object Lock to the destination bucket for enhanced security, preventing the replicated data from being deleted or modified.
If you wish to sync your bucket with local storage, tools such as the AWS CLI or rclone can be used for seamless synchronization.
You can use the Lifecycle setting to automatically delete outdated objects such as logs. This feature enables you to create rules that specify when objects should be deleted. For example, you can set a policy to automatically remove log files after they reach a certain age.
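As a sketch, such a rule can be expressed in the standard S3 lifecycle configuration format; the logs/ prefix and the 30-day window below are illustrative:

```xml
<LifecycleConfiguration>
  <Rule>
    <ID>expire-old-logs</ID>
    <Filter>
      <Prefix>logs/</Prefix>
    </Filter>
    <Status>Enabled</Status>
    <Expiration>
      <Days>30</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
```

Objects whose keys start with logs/ are then deleted automatically 30 days after creation.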
To safeguard your data against ransomware, you can use the Object Lock. With this feature, you can set the Write Once Read Many (WORM) model on your objects, preventing them from being deleted or modified for a fixed amount of time or indefinitely.
During transit, TLS 1.3 is used for encryption. For data at rest, two options are available: AES256 server-side encryption (SSE-S3) and encryption with a customer-provided key (SSE-C). SSE-S3 is the default for uploads via the DCD. SSE-C, on the other hand, is not available in the DCD but can be utilized through API and SDKs.
Data redundancy is achieved through erasure coding. This process divides data into fragments, expands them, and encodes them with redundant data pieces, which are then stored across a set of different servers. In the event of a hardware failure or data corruption, the system can reconstruct the data using these fragments and the redundancy information, ensuring data durability and reliability.
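The reconstruction idea can be illustrated with the simplest possible code: a single XOR parity fragment that can rebuild any one lost data fragment. This is only a toy sketch; production erasure codes (e.g. Reed-Solomon) tolerate the loss of several fragments at once:

```python
# Toy illustration of erasure-coding-style redundancy using one XOR parity
# fragment. Real systems use stronger codes (e.g. Reed-Solomon) that
# survive the loss of multiple fragments simultaneously.

def add_parity(fragments: list) -> bytes:
    """Compute a parity fragment as the byte-wise XOR of equal-length fragments."""
    parity = bytearray(len(fragments[0]))
    for frag in fragments:
        for i, b in enumerate(frag):
            parity[i] ^= b
    return bytes(parity)

def reconstruct(remaining: list, parity: bytes) -> bytes:
    """Recover a single lost fragment from the survivors and the parity."""
    return add_parity(remaining + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]   # data fragments stored on separate servers
parity = add_parity(data)

# Simulate losing the middle fragment and rebuilding it:
recovered = reconstruct([data[0], data[2]], parity)
assert recovered == b"BBBB"
```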
To improve the durability and availability of your data, use the Replication feature. This functionality allows you to replicate your data to another geographic location, offering enhanced protection and resilience against natural disasters or other regional disruptions. It also offers additional security for your data and faster data access from the region where the replica resides. Replication within user-owned buckets of the same user, as well as replication to contract-owned buckets in the eu-central-3 region, is supported.
Yes, you can access Object Storage from a private LAN connection by using the public IP addresses of the desired Endpoints as the Target IP address in the Managed Network Load Balancer (NLB). To do so, see Access Object Storage from a Private LAN.
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
2. From the drop-down list in the Buckets tab, choose either to Show user-owned buckets or Show contract-owned buckets depending on the bucket type you want to view.
Result: All the buckets present under the selected bucket type are listed.
— On choosing Show user-owned buckets, only buckets owned by the user are listed.
— On choosing Show contract-owned buckets, all the buckets created by all the users under this contract are listed.
— Each bucket displays the bucket name, bucket type, Region, and the date of bucket creation.
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
2. From the drop-down list in the Buckets tab, choose either Show user-owned buckets or Show contract-owned buckets depending on the bucket type you want to view.
3. From the Buckets list, choose the bucket on which you want to perform the actions and click on the respective bucket's action menu (three dots).
4. You can perform the following actions:
Bucket Settings: Manage your bucket and its objects by applying the bucket settings related to data management, access management, and public access settings.
Delete: Use this option to Delete a bucket.
Result: The action chosen to perform on the bucket is successfully applied.
1. In the DCD, go to Menu > Storage > IONOS Object Storage.
2. From the drop-down list in the Buckets tab, choose either Show user-owned buckets or Show contract-owned buckets depending on the bucket type you want to view.
3. From the Buckets list, choose the bucket to delete and click on the respective bucket's action menu (three dots).
4. Click Delete.
5. Confirm the deletion of the bucket by choosing Delete. If the bucket contains objects or folders, you see an Empty and delete option, which deletes all the content within the bucket and then deletes the bucket itself.
Result: The bucket is successfully deleted and removed from the Buckets list.
You can get started with IONOS Object Storage by completing the initial setup and working with buckets, objects, and access keys.
Start with setting up Object Storage access from the DCD.
Create your first Object Storage bucket to serve as a container to hold data and select whether or not an Object Lock is needed.
Add data as objects in the bucket by uploading them.
View and download the objects to your local device.
Create folders or prefixes in a bucket to organize and manage objects.
Generate access keys to log in securely to the Object Storage.