FAQs

The following FAQs provide insight into the IONOS S3 Object Storage application.

Getting started

What is the IONOS S3 Object Storage new web console?

The new web console for IONOS S3 Object Storage is an enhanced version of the previously available S3 web console in the DCD. It offers an improved user experience and guidance, with a design that caters to multiple target groups. With this revamp, the S3 Web Console in the DCD is renamed to IONOS S3 Object Storage.

How does the IONOS S3 Object Storage new web console differ from the old web console?

The new web console for IONOS S3 Object Storage offers an improved user interface (UI). You will notice the following capabilities:

  • Visually enhanced UI of the web console that caters to multiple target audiences, without needing an external browser window to use the application.

  • Natively integrated front-end in the DCD using the IONOS standard design system. Instead of an external browser window, you can access the S3 web console directly within the DCD.

  • Offers context-sensitive Learn more help links from within the UI to support users with relevant and detailed product documentation about the workflows.

  • The application offers faster responsiveness and improved performance.

How can I get started with using IONOS S3 Object Storage?

In the DCD, go to Menu > Storage > IONOS S3 Object Storage. The feature is generally available to all existing and new users. Alternatively, you can use GUI tools, CLI tools, the API, and SDKs to access the Object Storage.
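If you access the Object Storage programmatically, the following is a minimal sketch of connecting with an S3-compatible SDK such as boto3. The endpoint URL, access key, and secret key are placeholders; use the endpoint of your chosen region (see S3 Endpoints) and the keys from Key management.

```python
import boto3

# Placeholder endpoint and credentials; substitute the endpoint of your chosen
# region (see S3 Endpoints) and your own access keys.
s3 = boto3.client(
    "s3",
    endpoint_url="https://<your-s3-endpoint>",
    aws_access_key_id="<your-access-key>",
    aws_secret_access_key="<your-secret-key>",
)

# List your buckets to verify that the connection works.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```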

Where can I find the access keys in the web console?

In the DCD, go to Storage > IONOS S3 Object Storage > Key management to view the access keys. You can generate a new key in the Access keys section by using the Generate a key function.

What are the new Access and Secret Key format changes for IONOS S3 Object Storage?

To prepare for new functionalities of IONOS S3 Object Storage, we have made important changes to our Access and Secret Key specifications. Effective from 25.04.2024, the character length of all newly generated keys has been increased significantly:

  • Access Key: The key length has been increased from 20 to 92 characters.

    • Previous format example: 23cbca2790edd9f62100

    • New format example: EAAAAAFaSZEvg5hC2IoZ0EuXHRB4UNMpLkvzWdKvecNpEUF-YgAAAAEB41A3AAAAAAHnUDl-h_Lwot1NVP6F_MARJv_o

  • Secret Key: The key length has been increased from 40 to 64 characters.

    • Previous format example: 0Q1YOGKz3z6Nwv8KkkWiButqx4sVmSJW4bTGwbzO

    • New format example: Opdxr7mG09tK4wX4s6J3nrl1Z4EJgYRui/rldkgiPmrI5bavWHuThswRqPwgbeLP

We recommend replacing your existing keys with new access keys. Existing keys will continue to work but may not enable you to use upcoming functionalities of the Object Storage.

Can I use both the new IONOS S3 Object Storage and the old web console?

No. With the new web console generally available to all users, the old console has been phased out and removed from the DCD. All the capabilities of the old console have been improved and are now available in the new web console.

What kind of data can I store in the Object Storage?

You can store any type of data, including documents, photos, videos, backups, and large data sets for analytics and big data applications. Each object can be a maximum of 5.120 GB (5 TB) in size. For more information, see Objects and Folders.

Are there any changes to the bucket settings?

Functionally, the bucket settings remain unchanged. To offer an improved user experience and simplify the UI design, you will notice the following changes:

  • The Bucket Permissions setting in the old web console is now available as an Access Control List.

  • Bucket Canned ACL (a predefined set of permissions per grantee) and Storage Policy (a setting automatically applied when a user creates a bucket) are deprecated.

What is the price model for using IONOS S3 Object Storage?

The pricing is based on the actual amount of storage used and outgoing data transfer. There are no minimum storage charges, allowing you to start using the Object Storage by uploading as little as one byte of data. For more information, see Pricing Model.

What are the endpoints?

The IONOS S3 Object Storage service is available in the following S3 regions: de, eu-central-2, and eu-south-2. For the list of available points of access, see S3 Endpoints.

What are the upcoming IONOS S3 Object Storage features?

As part of our ongoing efforts to continuously improve our product functions, IONOS S3 Object Storage will shortly offer a Bucket Inventory feature. It generates regular reports listing the objects in a storage bucket, including details such as metadata, size, and storage class, helping you track and manage stored content efficiently.

Uploading

Are there any data limits on the object upload?

Objects (files) of any format can be uploaded to and stored in the Object Storage. Objects may not exceed 4,65 GB (5.000.000.000 bytes) if uploaded using the web console; other applications are not subject to this limit. Use the MultipartUpload set of functions in the API or SDKs to upload large files.

What is the maximum size of an object that I can upload?

Each object can only be a maximum of 5.120 GB (5 TB) in size. For more information, see Limitations.

How can I speed up the upload of large files?

To speed up the upload of large files, you can use Multipart Upload, which breaks large files into smaller, manageable parts and uploads them in parallel. The feature is not available via the web console but can be implemented in your application via the API or SDKs.
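As a sketch, assuming a boto3 client configured as in the getting-started example, the SDK's transfer layer can perform the multipart upload automatically; the part size, bucket, and file names below are illustrative.

```python
from boto3.s3.transfer import TransferConfig

# Upload in 100 MB parts, sending up to 8 parts in parallel.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=100 * 1024 * 1024,
    max_concurrency=8,
    use_threads=True,
)

s3.upload_file("large-backup.tar", "my-bucket", "backups/large-backup.tar", Config=config)
```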

Access management

How can I get access to a bucket owned by another S3 user?

Retrieve your Canonical user ID and provide it to the bucket owner; the owner can then grant you access using a Bucket ACL or Bucket Policies.
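For illustration, the following sketch grants another user read access to a bucket through the bucket ACL with boto3. The canonical user IDs and bucket name are placeholders; note that setting grants this way replaces the existing ACL, so the owner should also re-grant full control to themselves.

```python
# Grant read access to another S3 user identified by their canonical user ID.
s3.put_bucket_acl(
    Bucket="my-bucket",
    GrantRead='id="<canonical-user-id-of-the-other-user>"',
    GrantFullControl='id="<your-own-canonical-user-id>"',  # keep owner access
)
```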

How can I share objects with other S3 users?

By default, objects in the IONOS S3 Object Storage are private, and only the bucket owner has permission to access them. The bucket owner can use object ACL permissions to share objects with other S3 users.

You can also temporarily share objects with other users without additional authorization by using Pre-Signed URLs.
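A minimal sketch of generating a Pre-Signed URL with boto3, assuming the client from the getting-started example; the bucket and object names are placeholders.

```python
# Anyone with this URL can download the object until the link expires.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "reports/summary.pdf"},
    ExpiresIn=3600,  # one hour
)
print(url)
```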

Can I make my Object Storage data public?

Yes, by setting appropriate permissions on your buckets or individual objects, you can make data accessible over the internet. The Static Website Hosting feature also enables you to host static websites directly from your buckets, serving your content (HTML files, images, and videos) to users over the web.
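As an illustration, a single object can be made publicly readable through its ACL with boto3, assuming public access is permitted on the bucket; the names below are placeholders.

```python
# Make one object readable by anyone on the internet.
s3.put_object_acl(
    Bucket="my-bucket",
    Key="website/index.html",
    ACL="public-read",
)
```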

Data management

Can I back up my S3 bucket?

Yes, you can back up your bucket using the Replication feature that allows automatic replication of your bucket's objects to another bucket, which can be in a different geographic location. Additionally, you can apply Object Lock to the destination bucket for enhanced security, preventing the replicated data from being deleted or modified.

If you wish to sync your bucket with local storage, tools such as the AWS CLI or rclone can be used for seamless synchronization.
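If you prefer the SDK, the following is a simple sketch that copies every object of a bucket to a local folder with boto3 as a basic backup; the bucket name and target directory are placeholders.

```python
import os

# Download all objects of the bucket into the local "backup" directory.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-bucket"):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if key.endswith("/"):  # skip folder placeholder objects
            continue
        local_path = os.path.join("backup", key)
        os.makedirs(os.path.dirname(local_path) or ".", exist_ok=True)
        s3.download_file("my-bucket", key, local_path)
```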

How can I automatically delete objects that become outdated?

You can use the Lifecycle Policy feature to automatically delete outdated objects such as logs. This feature enables you to create rules that specify when objects should be deleted. For example, you can set a policy to automatically remove log files after they reach a certain age.
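A minimal sketch of such a rule via the API with boto3: objects under the logs/ prefix are deleted 30 days after creation. The bucket name, prefix, and retention period are placeholders.

```python
# Delete objects under "logs/" 30 days after they were created.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-old-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Expiration": {"Days": 30},
            }
        ]
    },
)
```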

Security

How can I protect my data against ransomware?

To safeguard your data against ransomware, you can use Object Lock. With this feature, you can apply the Write Once Read Many (WORM) model to your objects, preventing them from being deleted or modified for a fixed amount of time or indefinitely.
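For illustration, the sketch below sets a default retention rule on a bucket with boto3, assuming Object Lock was enabled when the bucket was created; the bucket name, mode, and retention period are placeholders.

```python
# Apply a default WORM retention of 30 days to all new objects in the bucket.
s3.put_object_lock_configuration(
    Bucket="my-locked-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)
```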

How is data encryption managed during transit and at rest?

During transit, TLS 1.3 is used for encryption. For data at rest, two options are available: AES256 server-side encryption (SSE-S3) and encryption with a customer-provided key (SSE-C). SSE-S3 is the default for uploads via the web console. SSE-C, on the other hand, is not available in the web console but can be used through the API and SDKs.
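As a sketch with boto3, the two options look as follows on upload; the bucket, keys, and file names are placeholders, and for SSE-C you must keep your customer key to be able to read the object again.

```python
import os

# SSE-S3: the service encrypts the object with AES256 using keys it manages.
with open("report.pdf", "rb") as f:
    s3.put_object(
        Bucket="my-bucket",
        Key="docs/report.pdf",
        Body=f,
        ServerSideEncryption="AES256",
    )

# SSE-C: you provide a 256-bit key of your own; the same key is required to
# download the object later.
customer_key = os.urandom(32)
with open("confidential.pdf", "rb") as f:
    s3.put_object(
        Bucket="my-bucket",
        Key="docs/confidential.pdf",
        Body=f,
        SSECustomerAlgorithm="AES256",
        SSECustomerKey=customer_key,
    )
```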

How is data redundancy achieved?

Data redundancy is achieved through erasure coding. This process involves dividing data into fragments, which are expanded and encoded with redundant data pieces and then stored across a set of different servers. In the event of a hardware failure or data corruption, the system can reconstruct the data using these fragments and the redundancy information, ensuring data durability and reliability.

How can I increase the durability and availability of my data?

To improve the durability and availability of your data, use the Data Replication feature. This functionality allows you to replicate your data to another geographic location, offering enhanced protection and resilience against natural disasters or other regional disruptions. It also provides additional security for your data and faster data access from the region where the replica resides.
