The following are a few FAQs to provide insight into the IONOS S3 Object Storage application.
The new web console for IONOS S3 Object Storage is an enhanced version of the previously available S3 web console in the DCD, offering improved user experience and guidance with a design catering to multiple target groups. With this revamp, the S3 Web Console in the DCD is renamed to IONOS S3 Object Storage.
The new web console for IONOS S3 Object Storage offers an improved user interface (UI) with the following capabilities:
Visually enhanced UI of the web console catering to multiple target audiences.
Natively integrated front end in the DCD using the IONOS standard design system; instead of an external browser window, you can access the S3 web console directly within the DCD.
Offers context-sensitive help as Learn more links within the UI, pointing users to relevant documentation about the workflows.
The application offers faster responsiveness and improved performance.
In the DCD, go to Menu > Storage > IONOS S3 Object Storage. The feature is generally available to all existing and new users. Alternatively, you can use graphical user interface tools, command-line tools, the API, or SDKs to access the Object Storage.
In the DCD, go to Storage > IONOS S3 Object Storage > Key management to view your access keys. To create a new key, use the Generate a key function in the Access keys section.
No. With the new web console generally available to all users, the old console has been phased out and removed from the DCD. All capabilities of the old console are now improved and available in the new web console.
You can store any type of data, including documents, photos, videos, backups, and large data sets for analytics and big data applications. Each object can be a maximum of 46,566 GB (~42 TB) in size. For more information, see Objects and Folders.
Functionally, the bucket settings remain unchanged. To offer an improved user experience and simplify the UI design, you will notice the following changes:
The Bucket Permissions setting in the old web console is now available as Access Control List.
Bucket Canned ACL (a predefined set of permissions per grantee) and Storage Policy (a setting automatically applied when a user creates a bucket) are deprecated.
The pricing is based on the actual amount of storage used and outgoing data transfer. There are no minimum storage charges, allowing you to start using the Object Storage by uploading as little as one byte of data. For more information, see Pricing Model.
The IONOS S3 Object Storage service is available in the following S3 regions: de, eu-central-2, and eu-south-2. For the list of available points of access, see S3 Endpoints.
As part of our ongoing efforts to continuously improve our product functions, IONOS S3 Object Storage will shortly offer the Bucket Inventory feature. It generates regular reports listing the objects in a storage bucket, including details such as metadata, size, and storage class, helping you track and manage stored content efficiently.
Objects (files) of any format can be uploaded to and stored in the Object Storage. Objects uploaded via the web console may not exceed 4.65 GB (5,000,000,000 bytes); other applications are not subject to this limit. Use the MultipartUpload set of functions in the API or SDKs to upload large files.
Each object can be a maximum of 46,566 GB (~42 TB) in size. For more information, see Limitations.
To speed up the upload of large files, use multipart upload, which breaks a large file into smaller, manageable parts and uploads them in parallel. The feature is not available via the web console but can be implemented in your application via the API or SDKs.
Retrieve your Canonical user ID and provide it to the bucket owner; the owner can then grant you access by using Bucket ACL or Bucket Policies.
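As an illustration, a Bucket ACL grant for a canonical user can be expressed in the AWS-style JSON accepted by `aws s3api put-bucket-acl --access-control-policy`. The IDs below are placeholders, and the exact shape is an assumption based on the standard S3 API:

```json
{
  "Owner": { "ID": "owner-canonical-user-id" },
  "Grants": [
    {
      "Grantee": {
        "Type": "CanonicalUser",
        "ID": "grantee-canonical-user-id"
      },
      "Permission": "READ"
    }
  ]
}
```

The `Permission` value can also be, for example, `WRITE` or `FULL_CONTROL`, depending on the access the owner wants to grant.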
By default, objects in the IONOS S3 Object Storage are private and only the bucket owner has permission to access them. The bucket owner can use the ACL permission for objects to share objects with other S3 users.
You can also temporarily share objects with other users without additional authorization using the Pre-Signed URLs.
Yes, by setting appropriate permissions on your buckets or individual objects, you can make data accessible over the internet. The Static Website Hosting feature also enables you to host static websites directly from your buckets, serving your content (HTML files, images, and videos) to users over the web.
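For illustration, a minimal static-website configuration in the AWS-style JSON accepted by `aws s3api put-bucket-website --website-configuration` might look as follows; the document names are placeholders:

```json
{
  "IndexDocument": { "Suffix": "index.html" },
  "ErrorDocument": { "Key": "error.html" }
}
```

The `IndexDocument` is served for directory-style requests, and the `ErrorDocument` is returned for missing keys.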
Yes, you can back up your bucket using the Replication feature that allows automatic replication of your bucket's objects to another bucket, which can be in a different geographic location. Additionally, you can apply Object Lock to the destination bucket for enhanced security, preventing the replicated data from being deleted or modified.
If you wish to sync your bucket with local storage, you can use tools such as the AWS CLI or rclone for seamless synchronization.
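The replication setup described above can be sketched in the AWS-style JSON used by `aws s3api put-bucket-replication`. The rule ID and destination bucket are placeholders, and the exact fields accepted may differ from this AWS-derived shape:

```json
{
  "Role": "replication-role-placeholder",
  "Rules": [
    {
      "ID": "backup-rule",
      "Status": "Enabled",
      "Prefix": "",
      "Destination": { "Bucket": "arn:aws:s3:::my-backup-bucket" }
    }
  ]
}
```

An empty `Prefix` replicates every object in the source bucket; set a prefix to replicate only a subset.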
You can use the Lifecycle Policy feature to automatically delete outdated objects such as logs. This feature enables you to create rules that specify when objects should be deleted. For example, you can set a policy to automatically remove log files after they reach a specified age.
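The log-expiry example above can be written as a lifecycle configuration in the AWS-style JSON accepted by `aws s3api put-bucket-lifecycle-configuration`; the prefix and the 30-day window are placeholder assumptions:

```json
{
  "Rules": [
    {
      "ID": "expire-logs",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Expiration": { "Days": 30 }
    }
  ]
}
```

Objects whose keys start with `logs/` are then deleted automatically 30 days after creation.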
To safeguard your data against ransomware, you can use Object Lock. With this feature, you can apply the Write Once Read Many (WORM) model to your objects, preventing them from being deleted or modified for a fixed amount of time or indefinitely.
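As an illustration, a default retention rule in the AWS-style JSON accepted by `aws s3api put-object-lock-configuration` could look like this; the mode and retention period are placeholder assumptions:

```json
{
  "ObjectLockEnabled": "Enabled",
  "Rule": {
    "DefaultRetention": { "Mode": "COMPLIANCE", "Days": 30 }
  }
}
```

In the standard S3 model, `COMPLIANCE` mode prevents any user from shortening or removing the retention, while `GOVERNANCE` mode allows specially privileged users to override it.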
During transit, TLS 1.3 is used for encryption. For data at rest, two options are available: AES256 server-side encryption (SSE-S3) and encryption with a customer-provided key (SSE-C). SSE-S3 is the default for uploads via the web console. SSE-C, on the other hand, is not available in the web console but can be utilized through API and SDKs.
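With SSE-C, the client supplies the encryption key on every request via a set of headers. The stdlib-only sketch below derives those headers for a 256-bit key, following the standard S3 SSE-C header convention; treat the header names as an assumption to verify against the API documentation.

```python
# Building the three request headers that the standard S3 SSE-C
# convention expects for a customer-provided AES-256 key.
import base64
import hashlib
import os

def sse_c_headers(key: bytes) -> dict:
    """Build SSE-C headers for a 32-byte (AES-256) customer key."""
    assert len(key) == 32, "SSE-C keys must be 256 bits"
    return {
        "x-amz-server-side-encryption-customer-algorithm": "AES256",
        "x-amz-server-side-encryption-customer-key": base64.b64encode(key).decode(),
        "x-amz-server-side-encryption-customer-key-MD5": base64.b64encode(
            hashlib.md5(key).digest()
        ).decode(),
    }

headers = sse_c_headers(os.urandom(32))
print(sorted(headers))
```

The same key (and headers) must be supplied again on every later read of the object, because the service does not store the customer-provided key.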
Data redundancy is achieved through erasure coding. This process involves dividing data into fragments, expanding, and encoding it with redundant data pieces, which are then stored across a set of different servers. During a hardware failure or data corruption, the system can reconstruct the data using these fragments and redundancy information, ensuring data durability and reliability.
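A toy, stdlib-only illustration of the redundancy idea: a single XOR parity fragment lets the system rebuild any one lost data fragment. Real erasure codes (e.g. Reed–Solomon) tolerate multiple simultaneous failures, but the reconstruction principle is the same.

```python
def make_parity(fragments: list[bytes]) -> bytes:
    """XOR all equal-size fragments together into one parity fragment."""
    parity = bytearray(len(fragments[0]))
    for frag in fragments:
        for i, b in enumerate(frag):
            parity[i] ^= b
    return bytes(parity)

def rebuild(surviving: list[bytes], parity: bytes) -> bytes:
    """Recover the single missing fragment from the survivors and parity."""
    return make_parity(surviving + [parity])

data = [b"obj-", b"ect ", b"data"]  # three equal-size data fragments
parity = make_parity(data)

# Suppose the server holding data[1] fails; XOR recovers it:
recovered = rebuild([data[0], data[2]], parity)
print(recovered)  # → b'ect '
```

Storing the fragments and the parity on different servers is what makes the loss of any one server survivable.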
To improve the durability and availability of your data, use the Data Replication feature. This functionality allows you to replicate your data to another geographic location, offering enhanced protection and resilience against natural disasters or other regional disruptions. It also offers additional security for your data and faster data access from the region where the replica resides.