The following FAQs provide insight into the IONOS Object Storage application.
In the DCD, go to Menu > Storage > IONOS Object Storage. The feature is generally available to all existing and new users. Alternatively, you can use S3-compatible tools, the API, and SDKs to access Object Storage.
Contract-owned buckets are recommended for users within a single organization and are currently supported in the eu-central-3 and us-central-1 regions. User-owned buckets are recommended when you do not need to view or access buckets of other users in the contract. This bucket type is supported in the de, eu-central-2, and eu-south-2 regions. For a detailed feature comparison, see Bucket Types.
In the DCD, go to Storage > IONOS Object Storage > Key management to view the access keys. You can generate a new key in the Access keys section by using the Generate a key function.
To prepare for new IONOS Object Storage functionality, we have made important changes to our Access Key and Secret Key specifications. Effective April 25, 2024, the character length of all newly generated keys has increased significantly:
Access Key: The key length has been increased from 20 to 92 characters.
Previous format example: 23cbca2790edd9f62100
New format example: EAAAAAFaSZEvg5hC2IoZ0EuXHRB4UNMpLkvzWdKvecNpEUF-YgAAAAEB41A3AAAAAAHnUDl-h_Lwot1NVP6F_MARJv_o
Secret Key: The key length has been increased from 40 to 64 characters.
Previous format example: 0Q1YOGKz3z6Nwv8KkkWiButqx4sVmSJW4bTGwbzO
New format example: Opdxr7mG09tK4wX4s6J3nrl1Z4EJgYRui/rldkgiPmrI5bavWHuThswRqPwgbeLP
We recommend replacing your existing keys with new access keys. Existing keys will continue to work but may not give you access to upcoming Object Storage functionality.
You can store any type of data, including documents, photos, videos, backups, and large data sets for analytics and big data applications. Each object can be a maximum of 5,120 GB (5 TB) in size. For more information, see Objects and Folders.
The pricing is based on the actual amount of storage used and outgoing data transfer. There are no minimum storage charges, allowing you to start using the Object Storage by uploading as little as one byte of data. For more information, see Pricing Model.
Endpoints are URLs used to access storage services for operations like uploading and downloading data. Each bucket resides in a specific region, and each region has its own endpoint. For the list of available endpoints, see Endpoints.
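For example, with an S3-compatible SDK such as boto3, you select the region by pointing the client at the matching endpoint. This is a minimal sketch; the endpoint URL and credentials below are placeholders and should be replaced with the values listed under Endpoints and your own access keys.

```python
import boto3

# Minimal sketch: connect an S3-compatible client to a specific
# IONOS Object Storage endpoint. The endpoint URL below is a placeholder;
# substitute the endpoint listed for your bucket's region.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.eu-central-3.ionoscloud.com",  # placeholder endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# List the buckets reachable through this endpoint.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```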
With our ongoing efforts to continuously improve our product functions, IONOS Object Storage will shortly offer a Bucket Inventory feature. It generates regular reports listing the objects in a storage bucket, including details such as metadata, size, and storage class, which helps in tracking and managing stored content efficiently. Object Storage will also offer a User Policy to enable users to create and manage buckets independently.
Objects (files) of any format can be uploaded to and stored in the Object Storage. Objects may not exceed 4.65 GiB (5,000,000,000 bytes) if uploaded using the DCD. Other applications are not subject to this limit. Use the MultipartUpload set of functions in the API or SDKs to upload large files.
Each object can be a maximum of 5,120 GB (5 TB) in size. For more information, see Limitations.
To speed up the upload of large files, you can use multipart upload, which breaks large files into smaller, manageable parts and uploads them in parallel. The feature is not available via the DCD but can be implemented in your application via the API or SDKs.
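As a sketch, S3-compatible SDKs such as boto3 can handle the multipart mechanics for you: the transfer configuration below splits a large file into parts and uploads them concurrently. The endpoint, bucket name, and file names are placeholders.

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Placeholder endpoint; credentials are read from your environment or configuration.
s3 = boto3.client("s3", endpoint_url="https://s3.eu-central-3.ionoscloud.com")

# Upload in 100 MiB parts, up to 8 parts in parallel.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=100 * 1024 * 1024,
    max_concurrency=8,
)

# upload_file switches to multipart upload automatically above the threshold.
s3.upload_file("backup.tar.gz", "my-bucket", "backups/backup.tar.gz", Config=config)
```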
For contract-owned buckets, retrieve your Contract User ID and provide it to the bucket owner or administrator; they can then grant you access by using the Bucket Policy.
For user-owned buckets, retrieve your Canonical User ID and provide it to the bucket owner who can then grant you access by using the Bucket Policy.
For more information, see Retrieve User ID and Bucket Policy.
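As an illustrative sketch only, the bucket owner or administrator could attach a policy such as the one below using an S3-compatible SDK (here boto3). The endpoint, bucket name, user ID, and in particular the Principal format are placeholder assumptions; use the exact syntax described under Bucket Policy.

```python
import json
import boto3

# Placeholder endpoint; credentials are read from your environment or configuration.
s3 = boto3.client("s3", endpoint_url="https://s3.eu-central-3.ionoscloud.com")

# Hypothetical policy granting read access to another user. The Principal value
# is a placeholder; replace it with the ID format described under Bucket Policy.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": ["arn:aws:iam:::user/EXAMPLE-USER-ID"]},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::my-bucket",
                "arn:aws:s3:::my-bucket/*",
            ],
        }
    ],
}

s3.put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(policy))
```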
Using a Bucket Policy, objects can be shared with other Object Storage users as follows:
The contract owner or administrator of the contract-owned bucket can share access with other users by using their Contract User ID.
The bucket owner of the user-owned bucket can share access with other users by using their Canonical User ID.
You can also temporarily share objects with other users without additional authorization using Share Objects with Pre-Signed URLs.
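For instance, with boto3 a time-limited download link can be generated as sketched below; the endpoint, bucket, and object key are placeholders.

```python
import boto3

# Placeholder endpoint; credentials are read from your environment or configuration.
s3 = boto3.client("s3", endpoint_url="https://s3.eu-central-3.ionoscloud.com")

# Create a download link for a single object that expires after one hour.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "reports/report.pdf"},
    ExpiresIn=3600,
)
print(url)  # Anyone with this URL can download the object until it expires.
```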
Yes, by setting appropriate permissions on your buckets or individual objects, you can make data accessible over the internet. Static Website Hosting lets you host static websites directly from your buckets, serving your content (HTML files, images, and videos) to users over the web.
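As a minimal sketch using boto3, a bucket could be configured for static website hosting roughly as follows; the endpoint, bucket name, and document keys are placeholders, and the available options are described under Static Website Hosting.

```python
import boto3

# Placeholder endpoint; credentials are read from your environment or configuration.
s3 = boto3.client("s3", endpoint_url="https://s3.eu-central-3.ionoscloud.com")

# Serve index.html as the default page and error.html for error responses.
s3.put_bucket_website(
    Bucket="my-website-bucket",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
```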
Yes, you can back up your bucket using the Replication feature, which automatically replicates your bucket's objects to another bucket, optionally in a different geographic location. Replication between user-owned buckets of the same user, as well as replication to contract-owned buckets, is supported. Additionally, you can apply Object Lock to the destination bucket for enhanced security, preventing the replicated data from being deleted or modified.
If you wish to sync your bucket with local storage, tools like AWS CLI or rclone can be utilized for seamless synchronization.
You can use the Lifecycle setting to automatically delete outdated objects such as logs. This feature enables you to create rules that specify when objects should be deleted. For example, you can set a policy to automatically remove log files after they reach a certain age.
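For example, assuming log files are stored under a logs/ prefix, a lifecycle rule like the following boto3 sketch would expire them after 30 days; the endpoint, bucket name, prefix, and retention period are placeholders.

```python
import boto3

# Placeholder endpoint; credentials are read from your environment or configuration.
s3 = boto3.client("s3", endpoint_url="https://s3.eu-central-3.ionoscloud.com")

# Expire objects under the logs/ prefix 30 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-old-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Expiration": {"Days": 30},
            }
        ]
    },
)
```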
To safeguard your data against ransomware, you can use Object Lock. With this feature, you can apply the Write Once Read Many (WORM) model to your objects, preventing them from being deleted or modified for a fixed amount of time or indefinitely.
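As a hedged sketch, a default WORM retention could be applied with boto3 roughly as below, assuming the bucket was created with Object Lock enabled; the endpoint, bucket name, retention mode, and period are placeholders, and the supported options are described under Object Lock.

```python
import boto3

# Placeholder endpoint; credentials are read from your environment or configuration.
s3 = boto3.client("s3", endpoint_url="https://s3.eu-central-3.ionoscloud.com")

# Default retention: objects cannot be deleted or overwritten for 30 days.
# The bucket must have been created with Object Lock enabled.
s3.put_object_lock_configuration(
    Bucket="my-locked-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)
```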
During transit, TLS 1.3 is used for encryption. For data at rest, two options are available: AES256 server-side encryption (SSE-S3) and encryption with a customer-provided key (SSE-C). SSE-S3 is the default for uploads via the DCD. SSE-C, on the other hand, is not available in the DCD but can be utilized through API and SDKs.
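As a sketch of SSE-C through an SDK (here boto3), the customer-provided key is passed with each request; the endpoint, bucket, and object key are placeholders, and the key below is generated locally purely for illustration and must be retained by you, since the object cannot be read without it.

```python
import os
import boto3

# Placeholder endpoint; credentials are read from your environment or configuration.
s3 = boto3.client("s3", endpoint_url="https://s3.eu-central-3.ionoscloud.com")

# Customer-provided 256-bit key; you must keep it to read the object later.
sse_key = os.urandom(32)

# Upload with SSE-C; the SDK sends the key (and its checksum) with the request.
s3.put_object(
    Bucket="my-bucket",
    Key="secret.txt",
    Body=b"encrypted at rest with a customer-provided key",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=sse_key,
)

# The same key must be supplied to download the object again.
obj = s3.get_object(
    Bucket="my-bucket",
    Key="secret.txt",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=sse_key,
)
print(obj["Body"].read())
```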
Data redundancy is achieved through erasure coding. This process divides data into fragments, which are expanded and encoded with redundant data pieces and then stored across a set of different servers. In the event of a hardware failure or data corruption, the system can reconstruct the data from these fragments and the redundancy information, ensuring data durability and reliability.
To improve the durability and availability of your data, use the Replication feature. This functionality allows you to replicate your data to another geographic location, offering enhanced protection and resilience against natural disasters or other regional disruptions. It also provides additional security for your data and faster access from the region where the replica resides. Replication between user-owned buckets of the same user, as well as replication to contract-owned buckets, is supported.
Yes, you can access Object Storage from a private LAN connection by using the public IP addresses of the desired Endpoints as the Target IP address in the Managed Network Load Balancer (NLB). To do so, see Access Object Storage from a Private LAN.