IONOS Simple Storage Service (S3) Object Storage is a secure, scalable storage solution that offers high data availability and performance. The product adheres to the S3 API standards, enabling the storage of vast amounts of unstructured data and seamless integration into S3-compatible applications and infrastructures.
Unlike traditional hierarchical systems like block storage volumes or disk file systems, Object Storage utilizes a flat structure that is ideal for storing large chunks of unstructured, static data that you want to keep ‘as is’ for later access. Businesses of all sizes can use IONOS S3 Object Storage to store files (objects) for varied Use Cases.
The IONOS S3 Object Storage service is available in the following locations:
Data Center | S3 Region |
---|---|
Frankfurt, Germany | de |
Berlin, Germany | eu-central-2 |
Logroño, Spain | eu-south-2 |
For the list of available points of access, see S3 Endpoints.
In IONOS S3 Object Storage, the data that you store is called an Object. Data types include archives, backups, log files, documents, images, and media assets. Each object is allocated a unique URL for direct access. Further, you can group objects within folders to help organize and manage them within a bucket. For more information, see Objects and Folders.
To upload objects into the Object Storage, you need to first create containers known as Buckets by choosing the S3 region and a unique bucket name. The objects are stored in these buckets which are accompanied by rich metadata. For more information, see Buckets.
When you create your first bucket, a key is generated if one does not already exist. A key is a unique identifier used to access buckets and objects. This key is a combination of an Access Key and a Secret Key, which are listed in the Key Management section. Each object in a bucket has exactly one object key. For more information, see Key Management.
Based on access permissions, buckets and objects can be publicly accessible or kept private and shared only with intended users. Use the bucket and object Access Control List (ACL) settings and S3 Endpoints to manage access.
The illustration summarizes the core components of Object Storage and the functional benefits businesses can attain with Object Storage. Important use cases where Object Storage is of benefit to enterprises are highlighted here. For more information, see Features and Benefits, Use Cases.
The following are a few limitations to consider while using IONOS S3 Object Storage:
Access Keys: A user can have up to 5 Access Keys.
Storage size: The minimum storage size is 1 Byte of data, and the storage can be extended to multiple petabytes.
Bucket count: A user can create a maximum of 500 buckets.
Object size: The maximum allowed object size is 46.566 GB.
File upload size: A single file upload cannot exceed 4.65 GB. If you have a single file exceeding this limit, you can bypass it using multi-part uploads. Command-line tools such as AWS CLI and graphical tools such as Cyberduck automatically handle larger files by breaking them into parts during uploading.
Bucket naming conventions: Only buckets for static website hosting can use dots (.) in the bucket names. For more information, see Bucket naming conventions.
Object name length: The maximum allowed length of the folder path, including the file name, is 1024 characters.
Bandwidth: Each connection is theoretically capped at approximately 10 Gbit/s per region. However, remember that this is a shared environment. Based on our operational data, achieving peak loads of up to 2x7 Gbit/s is feasible by leveraging parallel connections, although this is on a best-effort basis and without any guaranteed Service Level Agreement (SLA).
IONOS S3 Object Storage provides a range of access options, including a web console, desktop applications, command-line tools, and an option to develop your application using API and SDKs.
In the DCD, go to Menu > Storage > IONOS S3 Object Storage to access IONOS S3 Object Storage via the web console. Here you can manage buckets and objects, set access controls, and much more. To set up Object Storage, see Set up Object Storage.
Cyberduck is an open-source GUI client supporting object storage among other protocols, presenting storage objects as local files for easy browsing, upload, and download.
S3 Browser is a freeware Windows client for Object Storage, providing an easy way to manage buckets and objects, including file permissions and access control lists, through a visual interface.
You can access the IONOS S3 Object Storage via the following command-line tools:
The AWS CLI is unique in offering a wide range of commands for the comprehensive management of buckets and objects, which makes it ideal for scripting and automation.
S3cmd is a command-line tool offering direct, scriptable control over object storage buckets and objects. However, it lacks certain features like versioning and replication management.
rclone is a command-line program for syncing files between local and cloud storage, distinguishing itself with powerful synchronization capabilities, specifically useful when handling large data quantities and complex sync setups.
Being S3 compatible means you can use standard S3 API calls and SDKs with our storage solution. For more information, see S3 API Compatibility.
Based on IONOS S3 Object Storage features and benefits, the following use cases are derived to meet your business demands:
Data Backup and Restore: IONOS S3 Object Storage backs up critical databases and data with ease. With replication and resilience features along with versioning of buckets, the data security and access are enhanced.
Website Asset Storage: You can store specific website assets like images or downloadable files on Object Storage even if you do not host the whole site, helping in cost-saving and server space optimization.
Static Website Hosting: Utilize Object Storage for hosting static websites that load quickly as Object Storage does not require server-side processing.
Multimedia Asset Hosting: Storing static multimedia files like images, videos, audio, and documents, which seldom change, is easier in IONOS S3 Object Storage and does not need block storage volumes. With a dedicated URL to each of these objects, you can easily embed or host these assets on a Static Website without the need for a server.
Private File Storage: Safely store private data with default settings, making objects inaccessible through regular HTTP. You get the flexibility to modify object access permissions whenever needed.
Storing Unstructured Data: With a flat data structure, it is ideal for storing and managing large datasets outside of traditional databases. You can customize the metadata of objects to classify and retrieve data such as images, videos, audio, documents, and Big Data more efficiently.
Artifact Storage: Storing and sharing development artifacts such as log data via Object Storage URL is an ideal solution for developers. Using access keys, you can safely share artifacts with intended users only. Developers can also store software applications as objects in the Object Storage.
Software Hosting and Distribution: Developers can upload software applications as objects in the buckets and easily provide access to their software via unique URLs, making it a go-to solution for hosting and distributing software.
Periodic Data Retention: For periodic logs that need to be accessed only for a certain period and be removed after a specific duration, Object Storage Lifecycle Rules make it possible to retain data and delete data objects on the specified data expiration date; thus ideal for periodic data storage.
S3 Compatible: Object Storage adheres to the industry-standard S3 protocol, ensuring seamless integration with S3 Tools and applications designed for S3-compatible platforms. For more information, see S3 API Compatibility.
Data Management: The data storage pool, comprising objects and buckets in a flat data environment, is managed with the following data management functions:
Replication: Safeguards your data by duplicating it across multiple locations, providing redundancy and ensuring high availability.
Versioning: Tracks and manages multiple versions of an object, enabling easy rollback of objects and buckets to the previous states and preserving historical versions of objects and buckets.
Lifecycle: Archives or deletes objects based on predefined criteria, optimizing costs and managing data efficiently.
Object Lock: Secures your data by implementing retention policies or legal holds, ensuring that data objects remain immutable for a specified duration or indefinitely. This way, the data meets the Write Once Read Many (WORM) data storage technology and prevents the data from being erased or modified.
Access Management: The following functions allow users to set access permissions to other Object Storage users, defining who can access their objects and buckets.
Access Control List (ACL): Granular permissions for objects and buckets, controlling who can access and modify your data.
Bucket Policy: You can set overarching access policies for a bucket that provides additional security and control over how data is accessed and used.
Logging: Monitors and records access requests to your objects and buckets, providing a clear audit trail and helping identify suspicious activities.
Cross-Origin Resource Sharing (CORS): Defines rules for client web applications from different domains to access the data resources stored in your buckets.
Public Access: The data in the IONOS S3 Object Storage is managed by allowing or blocking public access wherever needed with the following functions:
Block Public Access: Ensures data privacy by blocking all public access at the bucket or account level.
Static Website Hosting: Using Object Storage, you can host static websites directly, eliminating the need for additional web servers, thus simplifying deployment. You can enable the objects in these buckets with public read access, allowing users to view all the content on these static websites.
Security: Data object protection is achieved through the following:
Encryption in Transit: Secures data as it travels to and from the Object Storage using robust TLS 1.2/1.3 encryption protocol.
Server-Side Encryption: Protects stored data by encrypting it on the server side with IONOS S3 Object Storage managed keys (SSE-S3) and customer-managed keys (SSE-C) using AES256 encryption algorithm. The storage objects are decrypted automatically when downloaded.
S3 Features: IONOS S3 Object Storage secures your data in the storage pool through Versioning, Block Public Access, Object Lock, and Replication features.
Security Certification: The solution adheres to the ISO 27001 certificate based on IT-Grundschutz and complies with the EU's GDPR.
Large Data Volume: Data in the Object Storage are stored as objects, which include metadata and a unique identifier, making object retrieval easier for large volumes of unstructured data.
Cost-Effective Billing: A straightforward pay-as-you-go model, eliminating upfront costs. You are charged solely based on storage utilization and outbound data transfer per gigabyte. Additionally, we do not impose charges for requests.
Highly Scalable: With Object Storage, you can start with small data storage and expand data storage as your application needs at any time, offering the utmost flexibility with data storage.
Georedundant Hosting: The objects and buckets in Object Storage are hosted on multiple data centers in different geographical locations, guaranteeing high availability and data durability even during primary site failures or outages.
Compliance Standards: IONOS S3 Object Storage infrastructure and processes comply with IT-Grundschutz, GDPR, and ISO-27001 standards, offering peak data protection and robust privacy policies.
Write Once Read Many (WORM): With Object Lock, data stored in the Object Storage is protected and cannot be erased or modified.
Data Protection: With the access control list and object lock features, you can enforce multiple layers of data protection on data objects and define who can access the data in the Object Storage. With advanced data encryption algorithms, secure data storage is achieved.
Lifecycle Management: With Object Storage Lifecycle rules, you can enforce the data deletion process for historical data and save the storage cost.
The pricing model for IONOS S3 Object Storage is as follows:
1 Gigabyte (GB) is equal to 1024 Megabytes (MB).
Storage space is charged per GB per hour.
Data transfer is charged in GB. Outbound data transfer is paid, except for replication traffic. Inbound data transfer is free, but it will be counted as outbound data transfer for your virtual machine if you upload data from it.
Using the IONOS S3 Object Storage API is free of charge.
Prices are listed in the respective price lists:
IONOS Ltd. – United Kingdom.
IONOS Inc. – United States and Canada.
All outbound data transfer from IONOS S3 Object Storage is billed as public traffic. The local and national traffic definitions do not apply. This includes outgoing data transfer to IONOS Virtual Machines (VMs) or dedicated servers regardless of their geographical location.
While inter-bucket data transfer is subject to charges, replication traffic both within the same region and across different regions is cost-free.
The cost per GB for outbound data transfer is contingent upon the cumulative data consumption of the account. A tiered pricing structure is implemented for all outbound traffic, including data transfer from VMs and IONOS S3 Object Storage.
No charges are imposed on inbound data transfer to IONOS S3 Object Storage. It is essential to know that when uploading data to IONOS S3 Object Storage, the same data transfer may be billed as an outbound data transfer for your VM. While calculating network costs for data transfer from a VM to IONOS S3 Object Storage, the following distinctions are made between local, national, and public traffic:
Data transfer from a VM to IONOS S3 Object Storage within the confines of the same data center is billed as local traffic.
Data transfer from a VM to IONOS S3 Object Storage located in the same country but at a different data center is billed as national traffic.
Data transfer from a VM to IONOS S3 Object Storage in a data center in a different country is billed as public traffic.
When creating a bucket, you must choose two essential settings:
The bucket region.
Whether or not to enable Object Lock for the bucket.
The Object Storage bucket can be created through one of the following methods:
The easiest way to create a bucket is by using the S3 Web Console. You must create a bucket before you can start uploading objects to the Object Storage.
To create an Object Storage bucket, follow these steps:
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. In the Buckets tab, choose the Bucket region which determines the geographical location where the data inside the buckets will be stored. For more information on choosing the right bucket region, see Bucket region.
3. Enter a unique Bucket name that adheres to the naming conventions for a bucket.
Note: A bucket will not be created if a bucket with the same name already exists in the IONOS S3 Object Storage.
4. Click Create.
Result: A bucket is created in the selected S3 region.
Using the Create Bucket API, you can create a bucket with or without the object lock.
For details on configuring the AWS CLI tool, see AWS CLI.
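As a minimal sketch, the AWS CLI can be pointed at your Object Storage credentials as follows; the key values are placeholders that you copy from the Key Management section:

```bash
# Store the credentials from Key Management in the default AWS CLI profile
aws configure set aws_access_key_id <your-access-key>
aws configure set aws_secret_access_key <your-secret-key>
```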
You can create a bucket, with or without Object Lock, in any of the available regions:
In the de region (Frankfurt, Germany)
In the eu-central-2 region (Berlin, Germany)
In the eu-south-2 region (Logroño, Spain)
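For illustration, the following sketch creates a bucket with the AWS CLI; the bucket names are hypothetical, and the endpoint URL must be the one for your chosen region (see S3 Endpoints):

```bash
# S3_ENDPOINT: the endpoint URL of the chosen region (see S3 Endpoints)
S3_ENDPOINT="https://<region-endpoint>"

# Create a bucket without Object Lock
aws s3api create-bucket --bucket my-bucket --endpoint-url "$S3_ENDPOINT"

# Create a bucket with Object Lock enabled (possible only at creation time)
aws s3api create-bucket --bucket my-locked-bucket \
  --object-lock-enabled-for-bucket --endpoint-url "$S3_ENDPOINT"
```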
Note: Creating a bucket with object lock enabled is currently possible only using command-line tools or Create Bucket API.
You can get started with IONOS S3 Object Storage by completing the initial setup and working with buckets, objects, and access keys.
When you upload a file to IONOS S3 Object Storage, it is stored as an Object and can be stored in buckets and folders in the Object Storage.
You can upload objects to buckets through one of the following methods:
Prerequisites:
Make sure a bucket already exists to which you want to upload objects (files).
If you want to use object lock, then make sure the object lock is enabled for the bucket as well. For more information, see Object Lock.
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the bucket to which objects must be uploaded.
3. Click Upload objects which opens an overlay screen.
4. Click Browse files or drag and drop the files to be uploaded. You can choose to upload multiple files.
Info: File encryption for objects being uploaded supports Server-Side Encryption with S3 Managed Keys (SSE-S3) and is enabled by default. You can use the toggle to turn this option off.
5. Review the selected files to be uploaded. Use the Remove and Remove all options to remove any files from being uploaded.
6. Click Start upload to confirm the files to be uploaded.
Result: The objects are successfully uploaded to the selected bucket.
A few of the limitations to consider while using object upload through a web console are:
Multi-part upload is not supported.
The Server-side Encryption with Customer Provided Keys (SSE-C) is not supported.
A maximum upload size of 5 GB applies to a single object.
Note: Only a single storage class is currently available: STANDARD. It is designed for general-purpose storage of frequently accessed data.
Prerequisites:
To upload an object from the current directory to a bucket:
To copy the contents of the local directory my-dir to the bucket my-bucket:
To copy all objects from my-source-bucket to my-dest-bucket, excluding .zip files:
This command does not support cross-region copying for IONOS S3 Object Storage.
To sync the bucket my-bucket with the contents of the local directory my-dir:
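The corresponding AWS CLI calls could look like the following sketch; the bucket and directory names are placeholders, and the endpoint URL must match your bucket's region:

```bash
# S3_ENDPOINT: the endpoint URL of the bucket's region (see S3 Endpoints)
# Upload a single file from the current directory to a bucket
aws s3 cp my-file.txt s3://my-bucket/ --endpoint-url "$S3_ENDPOINT"

# Copy the contents of the local directory my-dir to the bucket my-bucket
aws s3 cp my-dir s3://my-bucket/ --recursive --endpoint-url "$S3_ENDPOINT"

# Copy all objects from my-source-bucket to my-dest-bucket, excluding .zip files
aws s3 cp s3://my-source-bucket s3://my-dest-bucket --recursive \
  --exclude "*.zip" --endpoint-url "$S3_ENDPOINT"

# Sync the bucket my-bucket with the contents of the local directory my-dir
aws s3 sync my-dir s3://my-bucket --endpoint-url "$S3_ENDPOINT"
```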
Other applications or the API are not subject to these limitations.
Using the AWS CLI, you can perform object uploads and manage objects in a bucket.
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported S3 Endpoints for object upload.
You can upload and copy objects using the multi-part upload feature, which allows you to break down a single object into smaller parts and upload these object parts in parallel, maximizing the upload speed. While the web console does not support multi-part upload due to the upload size limit of 4.65 GB per object, the AWS CLI and many S3 Tools offer this functionality, allowing users to take advantage of efficiency through parallel uploads.
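As a sketch of how a multi-part upload looks at the API level with the AWS CLI (high-level commands such as aws s3 cp split large files into parts automatically), all names and the endpoint are placeholders:

```bash
# S3_ENDPOINT: the endpoint URL of the bucket's region (see S3 Endpoints)
# 1. Start the multi-part upload and note the returned UploadId
aws s3api create-multipart-upload --bucket my-bucket --key big-file.bin \
  --endpoint-url "$S3_ENDPOINT"

# 2. Upload each part, incrementing --part-number and noting the returned ETag
aws s3api upload-part --bucket my-bucket --key big-file.bin --part-number 1 \
  --body big-file.part1 --upload-id <UploadId> --endpoint-url "$S3_ENDPOINT"

# 3. Complete the upload with a JSON listing of all part numbers and ETags
aws s3api complete-multipart-upload --bucket my-bucket --key big-file.bin \
  --upload-id <UploadId> --multipart-upload file://parts.json \
  --endpoint-url "$S3_ENDPOINT"
```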
Tasks that guide you to quickly set up and get started with using Object Storage.
Explanations of core components in the Object Storage and its functions.
Detailed how-to guides to accomplish tasks such as creation, update, deletion, configuration, and management.
Detailed guide on data management, access management, and public access of Object Storage.
The service availability endpoints to use IONOS S3 Object Storage.
Guide to working with Object Storage compatible GUI tools, command-line tools, and SDKs.
When you log on to the Object Storage using the DCD, the DCD manages authentication and authorization so that you can access the object storage with just one click.
Prerequisite: Make sure you have the corresponding privilege to enable IONOS S3 Object Storage. Only contract owners and administrators can enable Object Storage.
1. In the DCD, go to Menu > Management, and click Users & Groups.
2. Create a Group or open an existing group from the Groups drop-down list.
3. In the Privileges tab, select the Use Object Storage checkbox to enable the Object Storage permission.
4. In the Members tab, add users to the group that you wish to authorize for the use of the object storage.
Result: The Object Storage access keys of each user are activated together with the authorization. All members of the authorized group can now access the IONOS S3 Object Storage using the web console on the DCD.
You need to create a bucket or an Object Storage access key to start using the IONOS S3 Object Storage.
Prerequisite: The Use Object Storage permission must be enabled for the user account. The Object Storage is not enabled for an IONOS account by default. For more information, see Enable Object Storage access.
To set up Object Storage through the web console, follow these steps:
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage. The Buckets and Key Management section are shown on the IONOS S3 Object Storage Home page.
2. Click Create bucket to create a new bucket. Creating a new bucket also creates an access key if it does not exist already. For more information, see Create a Bucket.
3. In the Key management, click Generate a key to create a new access key. Your S3 credentials consist of an Access Key and a Secret Key. The web console automatically uses these credentials to set up Object Storage. These credentials are also required to set up access to IONOS S3 Object Storage using S3 Tools. For more information, see Generate a Key.
Result: The Object Storage is successfully set up through the web console. Billing starts only after you upload an object.
Contract owners and administrators may use this functionality to access buckets and objects stored in the IONOS S3 Object Storage accounts of users who are no longer active members of their contracts.
Warning: Before you delete a user or all of their Object Storage access keys from your account, ensure that the content in their IONOS S3 Object Storage is accessible so that you can continue to use it or delete it by adjusting the access rights accordingly.
To access other user's Object Storage, follow these steps:
1. In the DCD, go to Menu > Management, and click Users & Groups.
2. Select the required user from the Users drop-down list.
3. In the Object Storage Keys section, click Manage.
Result: You are now logged on as the bucket owner of the selected user's IONOS S3 Object Storage. You can now access the user's buckets, objects, and access keys. You can also update the Object Storage of this user account.
Note: If objects set to 'private' were not removed before the user was deleted, or if all of the user's Object Storage access keys have been deleted, those objects can no longer be modified; in both cases, billing continues to be charged. Contact IONOS Cloud Support. For more billing information, see Pricing Model.
Start with setting up Object Storage access from the web console. |
Create your first Object Storage bucket to serve as containers to hold data and select whether or not Object Lock is needed. |
Add data as objects in the bucket by uploading them. |
View and download the objects to your local device. |
Create folders or prefixes in a bucket to organize and manage objects. |
Generate access keys to log in securely to the Object Storage. |
An object in the Object Storage can be viewed and downloaded to your local computer. Objects encrypted with SSE-S3 are automatically decrypted before the download begins. In the case of SSE-C, you need to provide the encryption keys for the download. This feature is not available in the web console, but you can use command-line tools, SDKs, or the API to download objects protected with SSE-C encryption.
You can download objects through one of the following methods:
For large objects, you may not need to download the entire file. You can perform a partial download of objects using the Object Storage API. The API allows you to specify a byte range in your request, enabling you to download only a portion of the object data.
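A minimal sketch of a partial download with the AWS CLI, assuming a hypothetical bucket, object name, and endpoint:

```bash
# Download only the first 1024 bytes of the object into part.bin
aws s3api get-object --bucket my-bucket --key my-object.txt \
  --range "bytes=0-1023" --endpoint-url "$S3_ENDPOINT" part.bin
```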
Note: An object's metadata can be viewed directly from the properties page in the web console or through the API call, providing a quick way to inspect an object's properties without incurring data transfer fees. Data transfer fees apply when you download objects from your S3 bucket. For more information, see Pricing Model.
Using the web console, you can download one object at a time. For downloading multiple objects, consider using command line tools, SDKs, or REST API.
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the bucket from which you want to download the object. The list of objects in the bucket is listed.
3. Choose the object to download and click on the respective object's action menu (three dots). The Download option is also available from the respective object's properties page.
4. Click Download. If an object has been shared through a public URL, open the URL and download the object from there.
Result: The object is successfully downloaded.
Using the Object Storage API, you can download objects from a bucket.
To download my-object.txt to a specified file locally:
To download a specified version of my-object.txt to a specified file locally:
To download all the objects from the my-bucket bucket to the local directory my-dir:
To recursively copy all objects with the /my-dir/ prefix from my-bucket-1 to my-bucket-2:
To get the object’s metadata without downloading an object:
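The commands referenced above could look like the following sketch; the bucket, object, and directory names are placeholders, and the endpoint must match your bucket's region:

```bash
# S3_ENDPOINT: the endpoint URL of the bucket's region (see S3 Endpoints)
# Download my-object.txt to a local file of the same name
aws s3api get-object --bucket my-bucket --key my-object.txt my-object.txt \
  --endpoint-url "$S3_ENDPOINT"

# Download a specific version of my-object.txt
aws s3api get-object --bucket my-bucket --key my-object.txt \
  --version-id <VersionId> my-object.txt --endpoint-url "$S3_ENDPOINT"

# Download all objects from my-bucket to the local directory my-dir
aws s3 cp s3://my-bucket my-dir --recursive --endpoint-url "$S3_ENDPOINT"

# Recursively copy all objects with the /my-dir/ prefix from my-bucket-1 to my-bucket-2
aws s3 cp s3://my-bucket-1/my-dir/ s3://my-bucket-2/my-dir/ --recursive \
  --endpoint-url "$S3_ENDPOINT"

# Get an object's metadata without downloading it
aws s3api head-object --bucket my-bucket --key my-object.txt \
  --endpoint-url "$S3_ENDPOINT"
```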
For more information, see cp, get-object, and head-object command reference.
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the bucket in which the folder must be created.
3. Click Create a folder which opens an overlay screen.
4. Enter a name in the Prefix field. The prefix must contain only alphanumerical characters, dashes and hyphens.
5. Click Create to continue with folder creation.
Result: The folder is successfully created in the selected bucket.
Note:
A folder, once created, cannot be renamed.
Objects that have already been uploaded cannot be moved to a different folder.
Create subfolders within a folder by following the steps in Create a folder.
Upload objects to a folder and subfolders.
Search for folders and objects within folders using the Search by Prefix option within a bucket.
For each user, an Object Storage Key is generated automatically, which is activated when the user is granted permission to use the IONOS S3 Object Storage. A maximum of five unique Object Storage Keys can be created per user for different S3 applications. For more information, see Key Management.
Prerequisite: Make sure you have the corresponding permission to create the Object Storage. Only contract owners and administrators with the Object-Storage-Key can set up the object storage.
To create Object Storage keys, follow these steps:
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. In the Key management tab, go to the Access keys section and click Generate a key.
3. Confirm key generation by clicking Generate.
Result: A new access key for IONOS S3 Object Storage is successfully generated and is in active status by default.
You can copy the Access Key and Secret Key from the respective fields to sign in to other Object Storage applications.
Prerequisite: Make sure you have the corresponding permission to create the Object Storage. Only contract owners and administrators with the Object-Storage-Key can set up the object storage and manage keys for other users.
1. In the DCD, go to Menu > Management > Users & Groups.
2. Select the user from the Users list and click the Object Storage Keys tab.
3. Click + Generate Key and confirm key generation by clicking OK.
4. Confirm the action by clicking OK.
Result: A new access key for IONOS S3 Object Storage is successfully generated for the user and is in active status by default.
You can copy the Access Key and Secret Key from the respective fields to sign in to other object storage applications.
To deactivate or delete keys, see Manage Keys.
In IONOS S3 Object Storage, a bucket is the primary container for data. Think of it like a directory in a file system where you can store files (known as objects). Each object is stored in a bucket and is identified by a unique key, allowing easy retrieval. You can store any number of objects in a bucket and can create up to 500 buckets in a user account.
A region corresponds to a geographical location where the data inside the buckets will be stored. Different regions have different S3 Endpoints, which are URLs to access the Object Storage.
IONOS S3 Object Storage is currently available in 3 regions:
Berlin, Germany
Frankfurt, Germany
Logroño, Spain
Choosing the right bucket region is crucial for optimizing your Cloud storage. Consider the following:
Proximity: Select a region that is close to your application or user base to reduce latency and costs.
Redundancy: For backups, consider a region geographically separate from your primary location to ensure data safety during local outages or disasters.
When naming buckets and folders, the name must adhere to the following rules:
Be unique throughout the entire IONOS S3 Object Storage.
Consist of 3 to 63 characters.
Start with a letter or a number.
Consist of lowercase letters (a-z) and numbers (0-9).
The use of hyphens (-), periods (.), and underscores (_) is conditional. The name must not:
End with a period, hyphen, or underscore.
Include multiple periods in a row (...).
Contain hyphens next to periods.
Have the format of an IPv4 address (Example: 192.168.1.4).
Contain underscores if the bucket is to be used for auto-tiering later.
Following are a few examples of correct bucket naming:
data-storage-2023
userphotos123
backup-archive
1234
Following are a few examples of incorrect bucket naming:
IONOS S3 Object Storage authenticates users by using a pair of keys – Access Key and Secret Key. For each user, an Object Storage Key is generated automatically on user creation which is activated when the user is granted permission to use the IONOS S3 Object Storage. You will need the keys to work with Object Storage through supported applications or develop your own using API.
Using the key management section in the IONOS S3 Object Storage, you can view and share your S3 Credentials and manage Access keys.
Depending on the selected S3 client, you have various options for sharing buckets, objects, or object versions with users of the IONOS S3 Object Storage. In addition to roles and predefined profiles, you can share the content of your buckets with selected users by using their IONOS S3 Object Storage ID known as ACL Sharing or S3 Sharing.
There are two forms of user identification - Canonical User ID, and Email address. The Canonical User ID is the ID assigned to a user by the IONOS S3 Object Storage. You can Retrieve Canonical User ID and share it with other S3 users to get access to their buckets and objects.
Some S3 clients only require the e-mail address of a registered S3 user for sharing objects as they are capable of converting the e-mail address to the ID required by the object storage.
S3 clients that support the 'Display Name' feature will display the e-mail address instead of the ID of a user for better readability.
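As an illustrative sketch, your own account's canonical user ID can be read from the bucket listing with the AWS CLI; the endpoint is a placeholder:

```bash
# The owner's canonical user ID is returned in the Owner.ID field
aws s3api list-buckets --query "Owner.ID" --output text \
  --endpoint-url "$S3_ENDPOINT"
```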
Logging on to IONOS S3 Object Storage requires an access key as part of the authentication process. Your S3 credentials consist of an Access Key and a Secret Key. The web console automatically uses these credentials to set up Object Storage. Hence, deactivating an access key restricts your access through the web interface. These credentials are also required to set up access to IONOS S3 Object Storage using S3 Tools.
Generate object storage keys: A bucket owner can have multiple Object Storage Keys, which can be given to other users or automated scripts. Users using such an additional Object Storage Key to access the IONOS S3 Object Storage automatically inherit credentials and access rights of the bucket owner. This can be useful for allowing users automated (scripted) or temporary access to object storage. For more information, see Generate a Key.
Note: A maximum of five object storage keys per user is possible. You can create technical users to assign a different set of permissions and share access to the bucket with them. For more information, see Retrieve the Canonical User ID of a new user.
Activate or deactivate keys: A key, when generated, is in an active state by default. You can change the key status between active and deactivated. When the automated or temporary use of the key is over, the additional Object Storage Key can be deactivated. Deactivating an Object Storage Key will block access to the IONOS S3 Object Storage. You can reactivate the key and restore access to buckets and objects. For more information, see Manage Keys.
Delete: If a key is no longer needed or if it should no longer be possible to gain access to the IONOS S3 Object Storage with this key, it can be deleted. This cannot be undone.
Note: Before you delete a user or all of their Object Storage Keys from your account, ensure that the content in their IONOS S3 Object Storage is accessible so that you can continue to use it or delete it by adjusting the access rights accordingly.
The content set to 'private' that has not been removed before the user or all of their Object Storage Keys have been deleted is no longer accessible, but will continue to be charged. In this case, contact IONOS Cloud Support.
IONOS S3 Object Storage organizes data as objects. The data can range from documents, pictures, videos, and backups to other types of content. You can store these objects within buckets, and each object can be a maximum of 5 TB in size. An object has a key, which represents the name given to the object. This key acts as a unique identifier, and you can use it to retrieve the object.
Every object uploaded to a bucket includes Object properties and Object metadata.
The properties refer to object details and the metadata are key-value pairs that store additional information about the object. The maximum size of metadata is 2 KB (keys+values). For instance, an object of type 'image' can include metadata such as its photographer, capture date, or camera used. Properly defined metadata aids in filtering and pinpointing objects using specific criteria.
Note: Currently, it is not possible to add metadata using the web console. You can add metadata using the PutObject or CreateMultipartUpload (in case of multipart upload) API calls for uploading objects.
During object storage, its properties and metadata are retrievable alongside the object.
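For example, user-defined metadata could be attached at upload time via the PutObject call; in this AWS CLI sketch the bucket, key, and metadata values are hypothetical:

```bash
# Upload an image and attach custom metadata key-value pairs
aws s3api put-object --bucket my-bucket --key photos/sunset.jpg \
  --body sunset.jpg --metadata photographer=JaneDoe,capture-date=2023-06-01 \
  --endpoint-url "$S3_ENDPOINT"
```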
In the web console, for any object under a bucket, the following object properties are displayed:
From the Object Properties page, you can also perform the following actions:
Download an object.
Copy the object URL to the clipboard.
Generate a Pre-Signed URL.
Delete an object.
Versions: Versioning objects enables the preservation, retrieval, and restoration of all versions of objects in your bucket. When versioning is enabled for a bucket, it also influences the object upload. For more information, see Versioning.
Access Control List (ACL): The object Access Control List (ACL) contains access control for each object defining which user accounts can read, write, or modify objects within a bucket. The access permissions defined at the bucket level also influence the object access in a bucket. For more information, see Access Control List.
Object lock: Object lock allows you to prevent objects from being deleted or overwritten for a specified amount of time or indefinitely. It is beneficial for compliance or regulatory reasons. Currently, enabling Object lock is possible only during the bucket creation. For more information, see Object Lock.
Multi-part upload: Breaks down a single large object into smaller parts and uploads these objects to the bucket maximizing the upload speed. For more information, see Multi-part upload.
Folders, also known as Prefixes, are containers that help to organize the objects within a bucket. You can create folders within a bucket and upload objects to folders. Object Storage offers a flat data structure instead of a hierarchy such as a file system. Hence, to support the organization of data in a well-structured way, the creation of Folders is allowed within a bucket. You can also create subfolders within a folder and upload objects to subfolders.
Unlike traditional file systems with nested folders, IONOS S3 Object Storage maintains a flat environment. There is no hierarchy of folders or directories. While the structure is flat, you can emulate folders using key naming conventions with slashes (/).
You can use prefix names that contain alphanumeric characters, dashes, and hyphens only.
Example: Instead of saving a report as Annual_Report_2023.pdf, using a key such as reports/2023/Annual_Report.pdf gives the semblance of a folder structure. These virtual folders created through prefixes aid in logically grouping related objects.
Following are a few examples of using prefixes for objects to emulate folder structure:
user_profiles/john_doe/avatar.jpg
data/backups/June/backup.zip
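For illustration, uploading with a slash-separated key is enough to create such a virtual folder; the names below are placeholders:

```bash
# The reports/2023/ prefix appears as a folder in S3 clients
aws s3 cp Annual_Report.pdf s3://my-bucket/reports/2023/Annual_Report.pdf \
  --endpoint-url "$S3_ENDPOINT"
```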
Learn about the key components of IONOS S3 Object Storage, its functions, and capabilities to manage your Object Storage.
To manage your buckets, objects, and keys in your Object Storage, refer to the following How-Tos that guide you with step-by-step instructions to complete the tasks.
The S3 (Simple Storage Service) API has been the global standard for object storage for many years. It provides interoperability and compatibility across various object storage systems that adhere to this standard. IONOS S3 Object Storage has one of the highest levels of S3 API support.
For more information, see documentation.
Example | Reason for Incorrectness |
---|---|
Data-Storage | Contains uppercase letters. |
user.photos | Contains periods which might cause SSL issues. |
a2 | Too short, less than 3 characters. |
a-very-very-long-bucket-name-that-exceeds-sixty-three-characters-in-total | Exceeds the 63 character limit. |
bucket- | Ends with a hyphen. |
bucket_with_underscore | Allowed but not a recommended naming convention. |

Properties | Description |
---|---|
Type | Defines the object (file) type such as image, pdf, zip, and so on. |
Size | The file size, shown in units such as KB, MB, and so on. |
Modified on | The date and time when the object was last modified. |
Version ID | Represents a unique object version. If versioning is enabled for the bucket, every object in that bucket is assigned a unique version ID. If versioning is not enabled, the version ID is not available for the object. |
Feature | Supported | Notes |
---|---|---|
Bucket CRUD | Yes |
Object CRUD | Yes |
Object Copy | Yes | Cross-regional copying is not supported |
Multipart Uploads | Yes |
Pre-Signed URLs | Yes | Signature types v2 and v4 are supported |
Bucket ACLs | Yes |
Object ACLs | Yes |
Block Public Access | Yes | Only via the API |
Bucket Policies | Yes |
Object Policies | Yes |
CORS Configuration | Yes |
Bucket Versioning | Yes |
Bucket Replication | Yes | Intraregional and cross-regional replication are supported |
Bucket Tagging | Yes | Only via the API |
Object Tagging | Yes | Only via the API |
Bucket Lifecycle | Yes |
Bucket Access Logging | Yes |
Bucket Encryption Configuration | Yes | Server-side encryption is used by default in the web interface. The encryption with customer-managed encryption keys is available via the API. |
Object Encryption | Yes | Only via the API |
Bucket Websites | Yes |
Bucket Inventory | Yes | Only via the API |
Object Lock | Yes |
Legal Hold | Yes |
Identity and Access Management (IAM) | No |
Security Token Service (STS) | No |
Multi-factor Authentication | No |
Bucket Notifications | No |
Request Payment | No |
Bucket Metrics | No |
Bucket Analytics | No |
Bucket Accelerate | No |
S3 Select | No |
To store your data in IONOS S3 Object Storage, learn about buckets that serve as data containers. |
To organize data in the Object Storage, learn about objects, folders, its metadata, functions, and prefixes. |
To authenticate with your S3 credentials for using Object Storage and to manage S3 keys, learn about key management. |
To control access permissions to your buckets and objects, learn about access management. |
Learn about the features compatible with S3 API. |
Use the search, versioning, prefixes, and delete options to manage objects and folders effectively. |
Generate Object Storage keys to log in securely, and activate or deactivate keys to manage access to buckets and objects. |
Retrieve the Canonical User ID for sharing buckets, objects, and object versions with other S3 users. |
Generate a pre-signed URL to share time-bound object access with other S3 users. |
Using the S3 Web Console, you can search for objects in buckets if the prefix or full name is known. For technical reasons, it is not possible to search for objects across buckets or folders.
To search for an object in the web console, follow these steps:
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the bucket in which you want to search for objects.
3. In the Search by Prefix dialog box, enter the prefix or file name to search for.
Result: The objects matching your search criteria are displayed.
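Outside the web console, the same prefix search can be expressed as a listing call, as in this AWS CLI sketch with placeholder names:

```bash
# List all objects in my-bucket whose keys start with reports/
aws s3api list-objects-v2 --bucket my-bucket --prefix "reports/" \
  --endpoint-url "$S3_ENDPOINT"
```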
If you no longer want to keep the objects in the IONOS S3 Object Storage, these objects can be deleted. Deleted objects are not physically removed from the Object Storage, but receive a 'delete marker' and then have a size of 0 KB. These markers are deleted at an interval specified by the user or by the system.
There are two ways to delete objects from the IONOS S3 Object Storage using the S3 Web Console - manually and automatically.
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the bucket or folder from which you want to delete an object.
3. Choose the object to delete and click on the respective object's action menu (three dots).
4. Click Delete.
5. Confirm the deletion of the object by choosing Delete.
Result: The object is successfully deleted from the bucket.
You can also permanently delete non-current versions of objects, delete objects with expired delete markers, and delete incomplete multipart uploads.
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the bucket from which you want to delete a folder.
3. Choose the folder to delete and click on the respective folder's action menu (three dots).
4. Click Delete.
5. Confirm the deletion of the folder by choosing Delete.
Result: The folder is successfully deleted from the bucket.
You can delete multiple objects and folders in a bucket at a time by following these steps:
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the bucket from which you want to delete objects and folders.
3. Select the checkboxes against the names of the objects and folders to be deleted.
4. (Optional) To delete all objects and folders, select the checkbox against the title NAME which lists all the objects and folder names.
5. Click Delete selected objects.
6. Confirm the deletion of selected objects and folders by choosing Delete.
Result: The objects and folders selected for deletion are successfully deleted from the bucket.
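If you prefer the command line, the same deletions can be performed with the AWS CLI, as in this sketch with placeholder names and endpoint:

```bash
# Delete a single object
aws s3 rm s3://my-bucket/my-object.txt --endpoint-url "$S3_ENDPOINT"

# Delete all objects under the reports/ prefix (folder)
aws s3 rm s3://my-bucket/reports/ --recursive --endpoint-url "$S3_ENDPOINT"
```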
If you have enabled versioning for your S3 bucket, you have the flexibility to download non-current versions of objects. Versioning is not enabled by default. Objects that were already uploaded to the object storage before versioning was activated are identified by the version ID null. If versioning is deactivated, existing object versions are retained. For more information, see Versioning.
Object Lock is a feature that enables you to apply Write-Once-Read-Many (WORM) protection to objects, preventing them from being deleted or modified for a specified duration. It provides robust, programmable safeguards for storing critical data that must remain immutable.
Note: Once a bucket is created without Object Lock, you cannot enable it later.
Data Preservation: Protects critical data from accidental or malicious alteration and deletion, ensuring integrity and consistency.
Regulatory Compliance: Aligns with European regulations such as GDPR, MiFID II, and the Electronic ID and Trust Services (eIDAS) regulation, maintaining records in an unalterable state.
Legal Holds and Audits: Facilitates legal holds and audits, meeting requirements for transparency and accountability.
Object lock can be applied in two different modes:
Governance: Allows specific users with special permissions to override the lock settings. Ideal for flexible control.
Compliance: Enforces a strict lock without any possibility of an override. Suited for regulatory and legal mandates.
These two lock modes require configuring the duration for which the object will remain locked. The period can range from days to years, depending on the object's compliance needs.
For the objects under Governance mode, the retention configuration can be modified or removed by including a specific header variable in the API request. However, for objects in Compliance mode, reducing the retention period or removing the retention configuration is not possible.
Note: Under Object Lock or Object Hold, permanent deletion of an object's version is not permissible. Instead, a deletion marker is generated for the object, causing IONOS S3 Object Storage to consider that the object has been deleted.
However, the delete markers on the objects are not subject to protection from deletion, irrespective of any retention period or legal hold on the underlying object. Deleting the delete markers restores the previous version of the objects.
An additional setting called Legal Hold can place a hold on an object, enforceable without specifying a retention period. It can be applied to objects with or without Object Lock. A Legal Hold remains in effect until it is manually removed, even if the object's retention period for Governance or Compliance mode has expired.
Note: Object Lock configuration can only be enabled during the initial creation of a bucket and cannot be applied to an existing bucket.
When a bucket is created with Object Lock enabled, you can set up Object Lock configurations. These configurations determine the default mode and retention period for newly uploaded objects. Alternatively, Object Lock settings can be explicitly defined for each object during its creation, overriding the bucket's default settings.
Prerequisite: Make sure you are creating a new bucket for which you want to enable Object Lock.
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. Create a new bucket with Object Lock enabled.
3. From the Buckets list, choose the bucket for which Object Lock is enabled and click Bucket settings.
4. Click Object Lock to manage these settings on the bucket.
Result: The object lock is successfully applied on the bucket upon creation.
Use the Object Lock API to manage the Object Lock configuration on the specified buckets.
Use the CLI to manage Object Lock.
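A minimal AWS CLI sketch for reading and setting the default lock configuration on a bucket that was created with Object Lock enabled; the bucket name, mode, and retention period are placeholders:

```bash
# Read the current Object Lock configuration of the bucket
aws s3api get-object-lock-configuration --bucket my-locked-bucket \
  --endpoint-url "$S3_ENDPOINT"

# Set a default retention of 30 days in Governance mode for new objects
aws s3api put-object-lock-configuration --bucket my-locked-bucket \
  --object-lock-configuration '{"ObjectLockEnabled": "Enabled", "Rule": {"DefaultRetention": {"Mode": "GOVERNANCE", "Days": 30}}}' \
  --endpoint-url "$S3_ENDPOINT"

# Place a legal hold on a single object
aws s3api put-object-legal-hold --bucket my-locked-bucket --key my-object.txt \
  --legal-hold Status=ON --endpoint-url "$S3_ENDPOINT"
```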
The following are a few limitations to consider while using Object Lock:
Versioning will be automatically enabled in addition to Object Lock.
Once the Object Lock is enabled during bucket creation, both Object Lock and Versioning cannot be disabled afterward.
When you place or modify an Object Lock, updating the object version's metadata does not overwrite the object version or change its Last-Modified timestamp.
A bucket with Object Lock enabled cannot be chosen as a source for replication or tiering, but it could be a destination for replication or tiering.
In the Compliance mode, an object is immutable until its retention date has passed. It is not possible to disable this mode for the object or shorten the retention period. This setting cannot be changed by either the bucket owner or IONOS.
IONOS S3 Object Storage provides multiple features to manage access to your buckets and objects effectively. This allows you to define precisely who may access what. By default, newly created buckets and objects are 'private'. Only the bucket owner can access them.
Use the following options to share access to a bucket and to all or specific objects in a bucket:
Bucket and Object Access Control Lists (ACLs): Provides a simpler mechanism for controlling access and can be specified for every object if needed, making them more flexible on a per-object basis. You can use ACLs to make a bucket or object public or to share access with certain authorized users by setting the right permissions. ACLs do not offer the ability to restrict access based on conditions like IP address. For more information, see Access Control List.
Bucket Policy: This policy is applied at the bucket level and it offers a robust framework for setting fine-grained access controls to your Object Storage buckets and objects. It is useful for restricting access based on certain conditions like IP addresses or time of access.
With Bucket Policy, you can manage access to specific objects or prefixes within a bucket. However, the size of the policy is limited, which could be a consideration if you have extensive access control requirements. You can use Bucket Policy to make a bucket or object public, or to share access with specific authorized users by defining the necessary permissions within the policy. For more information, see Bucket Policy.
Pre-Signed URLs: An excellent choice for securely providing temporary access to your objects. Essential for sharing files with someone without requiring them to have an IONOS account, and for granting temporary access to authorized users for a specified period, after which the URL expires. For more information, see Share Objects with Pre-Signed URLs.
Cross-Origin Resource Sharing (CORS): If you allow public access to your bucket, you can specify which domains can make cross-origin requests to your Object Storage using this function. It is useful when you need to serve resources from your bucket to web applications hosted on different domains.
Block Public Access: Overrides any other permissions applicable on buckets and objects. Essential for maintaining your data’s privacy by ensuring your buckets and objects are not accidentally made public and accessible only to authorized individuals or systems. Currently, this feature is available only via the IONOS S3 Object Storage API.
IONOS S3 Object Storage allows for comprehensive access management at the bucket and object levels. This allows you to define precisely who may access what.
There are two roles involved in granting access:
Owner: The user who creates the bucket is referred to as the owner.
Grantee: Users or Object Storage-defined user groups to whom permissions are granted, specifying which buckets and objects they may access. Usually, the grantee is a user under the same contract at IONOS, but it can also be a user under another contract. You need the Grantee’s Canonical User ID to share access to the bucket or object. For more information, see Retrieve Canonical User ID.
Note: Granting access to a bucket for another IONOS user does not make the bucket appear in the user's S3 web interface due to the S3 protocol's architecture. To access the bucket, the user will need to utilize other S3 Tools, as the granted access does not translate to interface visibility.
Permissions: These are the access rights that can be assigned to Grantees. By default, buckets and objects are private and only the bucket owner can access them. The content of a bucket is always accessible (as a list) as soon as the bucket is public, even if the objects it contains are private and can therefore neither be displayed nor downloaded!
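As an illustrative sketch, ACLs can be inspected and granted with the AWS CLI; the bucket name, object key, and canonical user ID are placeholders:

```bash
# Show the current ACL of the bucket
aws s3api get-bucket-acl --bucket my-bucket --endpoint-url "$S3_ENDPOINT"

# Make the bucket publicly readable via a canned ACL
aws s3api put-bucket-acl --bucket my-bucket --acl public-read \
  --endpoint-url "$S3_ENDPOINT"

# Grant read access on a single object to a grantee identified by canonical user ID
aws s3api put-object-acl --bucket my-bucket --key my-object.txt \
  --grant-read id=<grantee-canonical-user-id> --endpoint-url "$S3_ENDPOINT"
```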
By default, objects in the IONOS S3 Object Storage are private and only the bucket owner has permission to access them. Only the bucket owner can generate a pre-signed URL for objects and grant time-bound permission to other users to access these objects. It is a secure and user-friendly way to share private objects stored in your Object Storage with other users.
This way, the objects are made publicly available for users with the object's pre-signed URL; however, you can limit the period of access to the object.
Pre-signed URLs are ideal for providing temporary access to a specific object without needing to change the object's permissions or share your credentials with other users.
Pre-signed URLs also allow other users to upload objects directly to your S3 bucket without needing to provide them with access and secret keys.
You can generate a pre-signed URL to share objects through one of the following methods:
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the bucket from which you want to share the objects. The list of objects in the bucket is listed.
3. Select the object to share and click Generate Pre-Signed URL.
4. Enter the expiration time for the URL and choose whether the specified time refers to seconds, minutes, hours, or days.
5. Click Generate.
6. Copy the generated pre-signed URL and share it with users that require this object access.
Result: The pre-signed URL for the selected object is generated successfully and copied to the clipboard. The URL is valid for the period defined during URL generation.
Prerequisites:
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported Endpoints.
To generate a pre-signed URL for my-object.txt in the my-bucket bucket, which will expire in 3600 seconds:
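A minimal AWS CLI sketch; the bucket, object name, and endpoint are placeholders:

```bash
# Generate a pre-signed download URL that expires in 3600 seconds
aws s3 presign s3://my-bucket/my-object.txt --expires-in 3600 \
  --endpoint-url "$S3_ENDPOINT"
```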
The creation of pre-signed URLs does not involve a dedicated API by design. These URLs are generated locally via a signing algorithm using your credentials without relying on the S3 API. To create these URLs, use the appropriate SDK for your programming language.
IONOS S3 Object Storage is S3-compatible, allowing seamless integration with any SDK supporting the S3 protocol for tasks like generating pre-signed URLs. For generating pre-signed URLs using SDKs, see the following AWS methods:
Python, Go, Java 2.x., JavaScript 2.x., JavaScript v3, and PHP Version 3.
Manage your Object Storage buckets, objects and their access permissions effectively by using the data management, access management, and public access settings.
Use the Object Lock to protect critical objects in a bucket for an immutable period.
Manage multiple versions of the same object using Versioning.
Use the Bucket Policy to define granular access permissions and actions users can perform on buckets and objects.
Use the ACL to define access permissions on buckets and objects to control who can access them.
With Logging, track and record storage requests for your buckets.
Grantee | Bucket | Object |
---|---|---|
Public | Everyone | Everyone |
Authenticated Users | All users of the IONOS S3 Object Storage (not limited to a contract). | All users of the IONOS S3 Object Storage (not limited to a contract). |
Log Delivery Group | Group required for logging (in combination with the "Log Delivery Write" ACL) | n/a |
Individual users | Selected users of the IONOS S3 Object Storage (not limited to a contract). Sharing buckets with individual users requires their IONOS S3 Object Storage ID. | Selected users of the IONOS S3 Object Storage (not limited to a contract). Sharing buckets with individual users requires their IONOS S3 Object Storage ID. |

Permission | Bucket | Object |
---|---|---|
Read access (Readable) | View the contents of a bucket as a list. Opening and downloading objects is not possible. | Open and download objects |
Write access (Writable) | Upload and delete objects | n/a |
Read access to permissions (ACP Readable) | View the access rights of the bucket | View the access rights of the object |
Write access to permissions (ACP Writable) | View and edit the access rights of the bucket | View and edit the access rights of the object |
For each user, an Object Storage Key is generated automatically, which is activated when the user is granted permission to use the IONOS S3 Object Storage.
You can manage the keys via the Web console or IONOS Cloud API.
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. In the Key management tab, go to the Access keys section that lists all keys present in your account.
3. Select the key and toggle on or off the Key active option to activate or deactivate the key.
Result: The access key status is set as active when toggled on and deactivated when toggled off.
Warning: When you have only one access key, disabling it will cause you to lose access to all existing buckets. However, the objects remain, and usage costs continue to apply. To avoid losing access to your S3 buckets, you need to have at least one active access key.
Prerequisite: Only contract owners and administrators can set up the object storage and manage keys for other users. Make sure you have the corresponding permission.
1. In the DCD, go to Menu > Management > Users & Groups.
2. Select the user from the Users list and click the Object Storage Keys tab.
3. Select the checkbox Active against the Key you want to set as active. Uncheck the checkbox if you want to deactivate the key.
Result: The access key status is successfully updated.
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. In the Key management tab, go to the Access keys section that lists all keys present in your account.
3. Select the key to be deleted and click Delete.
Warning: Any access associated with this key will be revoked and cannot be restored.
4. To confirm the deletion of the key, click Delete.
Result: The access key is successfully deleted.
Important: When you have only one access key with existing buckets, you cannot delete this key. You can either choose to Deactivate a key or create a new access key before deleting the selected key. You can also delete existing buckets and continue with deleting the last access key.
Prerequisite: Only contract owners and administrators can delete keys for other users. Make sure you have the corresponding permission.
1. In the DCD, go to Menu > Management > Users & Groups.
2. Select the user from the Users list and click the Object Storage Keys tab.
3. Select the key to be deleted from the list of keys and click Delete.
Warning: Any access associated with this key will be revoked and cannot be restored.
4. To confirm the deletion of the key, click OK.
Result: The access key is successfully deleted.
Using the User S3 keys set of API calls, you can list, create, retrieve, modify, and delete keys.
Following are a few examples of common use cases and their corresponding bucket policy configurations.
To grant full control over a bucket and its objects to other IONOS S3 Object Storage users:
To grant read-only access to objects within a specific prefix of a bucket to other IONOS S3 Object Storage users:
To allow read access to certain objects within a bucket while keeping other objects private:
To restrict all users from performing any S3 operations within the designated bucket, unless the request is initiated from the specified range of IP addresses:
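As an illustration of the last case, the following is a hedged sketch of such a policy; the bucket name and the IP range 203.0.113.0/24 are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Only allow access from specific source IPs",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ],
      "Condition": {
        "NotIpAddress": {"aws:SourceIp": "203.0.113.0/24"}
      }
    }
  ]
}
```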
For more information on bucket policy configurations, see Bucket Policy, supported bucket and object actions and condition values, and Retrieve Canonical User ID.
Versioning allows you to keep multiple versions of the same object. Upon enabling Versioning for your bucket, each version of an object is considered a separate entity contributing to your storage space usage. Every version represents the full object, not just the differences from its predecessor. This aspect will be evident in your usage reports and will influence your usage-based billing.
Data Recovery: Versioning can be used as a backup solution for your data. If you accidentally overwrite or delete an object, you can restore it to a previous version.
Tracking Changes: Versioning can be used to track changes to your data over time. This can be useful for debugging purposes or auditing changes to your data.
Buckets can exist in one of three states:
Unversioned: Represents the default state. No versioning is applied to objects in a bucket.
Versioning - enabled: In this state, each object version is preserved.
Versioning - suspended: No new versions are created, but existing versions are retained.
Objects residing in your bucket before the activation of versioning possess a version ID of null. Once versioning is enabled, it cannot be disabled but can be suspended. During suspension:
New object versions are not created.
Existing object versions are retained.
You can resume versioning anytime, with new versions being created henceforth.
Upon enabling versioning for a bucket, every object version is assigned a unique, immutable Version ID, serving as a reliable reference for different object versions. New object versions are generated exclusively through PUT operations; actions such as COPY entail a PUT operation and thus create a new version.
Notably, a new Version ID is allocated for each version, even if the object content remains unaltered. Objects residing in the bucket before versioning activation bear a Version ID of null.
When an object is deleted, all its versions persist in the bucket, while Object Storage introduces a delete marker, which is also assigned its Version ID.
You can manage Versioning using the web console, APIs, and command-line tool.
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the bucket for which versioning must be enabled and click Bucket settings.
3. In the Versioning section, click Enable to turn on versioning of objects. Choosing the Disable option suspends object versioning but preserves existing object versions.
Result: Based on the selection, versioning is either enabled or disabled for objects in the bucket.
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the bucket in which the desired object exists.
3. Click the object name within the bucket listing.
4. Navigate to the object's Versions tab by clicking the object name or clicking the three dots against the object name.
5. Copy Version IDs or download non-current versions of the object. You can also select and delete non-current object versions.
Result: Based on the selection, Version IDs and non-current object versions are successfully managed.
Use the Versioning API to configure and manage Versioning for a bucket.
Use the CLI to manage Versioning.
IONOS S3 Object Storage allows the setup of lifecycle rules for managing both current and non-current versions of objects in versioning-enabled buckets. For instance, you can automate the deletion of non-current object versions after a specified number of days post their transition to a non-current status.
For a bucket with Object Lock enabled, Versioning is automatically enabled and cannot be suspended.
For Bucket Replication to function correctly, Versioning must be enabled.
With the help of a detailed authorization system, based on the S3 Access Control List (ACL), you can control precisely who accesses and edits your content. By assigning ACLs to a group of users as per S3-compliant ACL, you can manage who may access the buckets and objects of your IONOS S3 Object Storage.
Use Bucket Policy instead of ACLs if you need to:
Manage access to prefixes like /folder/* or *.jpg.
Use conditions to grant access, for example, based on IP address.
Allow or deny certain actions like listing the objects in a bucket.
Use pre-signed URLs instead of ACLs for granting temporary access to authorized users for a specified period, after which the URL expires.
You can use ACLs to make a bucket or object public or to share access with certain authorized users by setting the right permissions. IONOS S3 Object Storage offers the following ACL management methods:
If you have defined ACLs granting public access, activating the Block Public Access revokes these permissions, ensuring your data remains private. This feature is invaluable in scenarios where ensuring data privacy is paramount, or when you want to enforce a blanket no-public-access rule, irrespective of ACL settings.
Logging in IONOS S3 Object Storage enables the tracking and storage of requests made to your bucket. When you enable logging, S3 automatically records access requests, such as the requester, bucket name, request time, request action, response status, and error codes, if any. By default, Logging is disabled for a bucket.
Security Monitoring: Tracks access patterns and identifies unauthorized or suspicious access to your data. In the event of a security breach, logs provide vital information for investigating the incident, such as IP addresses, request times, and the actions that were performed.
Auditing: Many industries require compliance with specific regulatory standards that mandate the monitoring and logging of access to data. S3 logging facilitates compliance with regulations like HIPAA, GDPR, or SOX by providing a detailed record of who accessed what data and when.
Troubleshooting: If there are issues with how applications are accessing your S3 data, logs can provide detailed information to help diagnose and resolve these issues. Logs show errors and the context in which they occurred, aiding in quick troubleshooting.
You can manage Logging using the web console, APIs, and command-line tool.
Prerequisite: Make sure you have provided access permissions for Log Delivery Group. For more information, see .
To activate logging, follow these steps:
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the bucket and click Bucket settings.
3. Go to Logging and click Browse S3 to select the destination bucket in the same region to store logs.
Note: Although it is possible to store logs in the same bucket being logged, it is recommended to use a different bucket to avoid potential complications with managing log data and user data together.
4. (Optional) Specify the prefix for log storage, providing flexibility in organizing and accessing your log data. If no prefix is entered, the log file name is derived from its time stamp alone.
5. Click Save.
Result: Logging is enabled for the selected bucket.
You can modify or deactivate logging at any time with no effect on existing log files. Log files are handled like any other object. Using the Logging section in the Bucket settings, you can click Disable Logging to stop collecting log data for a bucket.
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the bucket for which the logging must be enabled.
3. Click Bucket settings and go to Access Control List (ACL).
4. For Logging, select the OBJECTS:WRITE and BUCKET ACL:READ checkboxes.
5. Click Save.
Result: The access permissions required to enable Logging for the bucket are granted.
Depending on the selected S3 client, you have various options for sharing buckets, objects, or object versions with users of the Object Storage. In addition to roles and predefined profiles, you can share the content of your buckets with selected users by using their IONOS S3 Object Storage ID, known as the Canonical user ID, together with ACLs and Bucket Policies.
You can also share buckets and objects with other users by using their user IDs. User identification is possible through the Canonical user ID and email address. For more information, see .
Retrieving the Canonical user ID includes the following:
For another user to share the content of their IONOS S3 Object Storage with you, they need your IONOS S3 Object Storage ID, which you will find in the Object Storage Key Manager.
Prerequisite: Make sure you have the corresponding permission to create the Object Storage. Only contract owners and administrators with the Object-Storage-Key can set up the object storage.
1. In the DCD, go to Menu > Storage and click the IONOS S3 Object Storage.
2. Select the Key management tab.
3. In the S3 Credentials, click Copy against the Canonical User ID. You can also copy the required user IDs and use them to get access to other buckets and objects.
Result: Your Canonical user ID is successfully copied to the clipboard.
To retrieve the Canonical user ID of a grantee, follow these steps:
Prerequisites:
Make sure you have the corresponding permission to create the IONOS S3 Object Storage. Only contract owners and administrators can retrieve the IONOS S3 Object Storage IDs of their IONOS account users.
1. In the DCD, go to Menu > Management > Users & Groups.
2. Select the user from the Users list and click the Object Storage Keys tab.
3. Click the S3 link and copy the user's S3 Canonical User ID. This ID is used to share access to this user. You can also copy the required other user IDs and use them for sharing your objects with this user.
Result: The Canonical user ID for the grantee is successfully retrieved.
If the grantee’s user account does not already exist or if you want to assign a different set of permissions, then, the root user of the contract needs to create the user account and then retrieve the Canonical user ID to grant access to buckets and objects.
1. In the DCD, go to Menu > Management > Users & Groups.
2. In the Users tab, click + Create.
3. Enter the user details such as First Name, Last Name, Email, Password, and click Create.
Info: The new user is created and shown in the Users list and their S3 access keys are automatically created but are disabled. When the user is added to a group with Use Object Storage privilege enabled, the access key is set to active
.
4. In the Users list, select the user and click the Object Storage Keys tab.
5. Select the checkbox Active to activate the Key.
6. Click the S3 link and copy the user's S3 Canonical User ID. This ID is used to share access to this user. You can also copy the required other user IDs and use them for sharing your objects with this user.
Bucket Policy is a JSON-based access policy language that allows you to create fine-grained permissions for your S3 buckets. With Bucket Policy, you can specify which users or services can access specific objects and what actions users can perform.
Note: Granting access to a bucket for another IONOS user does not make the bucket appear in the user's S3 web console due to the S3 protocol's architecture. To access the bucket, the user must use other S3 tools, as the granted access does not translate to interface visibility.
Use this feature to grant access to a specific user or group to only a subset of the objects in your bucket.
Restrict access to certain operations on your bucket, for example, list objects or remove object lock.
Using Bucket Policy, you can grant access based on conditions, such as the IP address of the user.
Create fine-grained access control rules to allow a user to put objects to a specific prefix in your bucket, but not to get objects from that prefix.
Use ACLs instead of Bucket Policy if you need to:
Define permissions in a simple way, such as READ, WRITE, or FULL CONTROL.
Apply different sets of permissions to many objects.
Use pre-signed URLs for granting temporary access to authorized users for a specified period, after which the URL and the access to the object expire.
A JSON-formatted bucket policy contains one or more policy statements. Within a policy's statement blocks, IONOS S3 Object Storage support for policy statement elements and their values is as follows:
Id (optional): A unique identifier for the policy. Example: SamplePolicyID.
Version (required): Specifies the policy language version. The current version is 2012-10-17.
Statement (required): An array of individual statements, each specifying a permission.
Sid (optional): Custom string identifying the statement. For example, Statement1 or Only allow access from specific source IPs.
Effect (required): Specifies the effect of the statement. Possible values: Allow, Deny.
Principal (required): Specifies the user, account, service, or other entity to which the statement applies. Possible values:
* – Statement applies to all users (also known as 'anonymous access').
{"CanonicalUser": "<canonicalUserId>"} – Statement applies to the specified IONOS S3 Object Storage user.
{"CanonicalUser": ["<canonicalUserId>", "<canonicalUserId>",...]} – Statement applies to the specified IONOS S3 Object Storage users.
Action (required): Specifies the action(s) that are allowed or denied by the statement. See section 'Supported Action Values'. Example: s3:GetObject for allowing read access to objects.
Resource (required): Must be one of the following:
arn:aws:s3:::<bucketName> – For bucket actions (such as s3:ListBucket) and bucket subresource actions (such as s3:GetBucketAcl).
arn:aws:s3:::<bucketName>/* or arn:aws:s3:::<bucketName>/<objectName> – For object actions (such as s3:PutObject).
Condition (optional): Specifies conditions for when the statement is in effect. See section 'Supported Condition Values'. Example: {"aws:SourceIp": "123.123.123.0/24"} restricts access to the specified IP range.
You can apply Bucket Policy using the web console by following these steps:
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the required S3 bucket and click the Bucket settings.
4. Click Save.
This action grants the specified user full access to your bucket.
You have the option to restrict actions, define the scope of access, or incorporate conditions into the Bucket Policy for more tailored control.
You can delete a Bucket Policy at any time by going to the Bucket Policy section in the Bucket settings and clicking Delete.
Info: Removing a bucket policy is irreversible, so it is advised to keep a backup of the policy before deleting it.
If you have defined a bucket policy to grant public access, activating the Block Public Access feature will revoke these permissions, ensuring your data remains private. This feature is invaluable in scenarios where ensuring data privacy is paramount, or when you want to enforce a blanket no-public-access rule, irrespective of Bucket Policy settings.
You can manage ACL permission for buckets through the web console, IONOS S3 Object Storage API, or the command-line tool.
The following table shows the ACL permissions that you can configure for buckets in the IONOS S3 Object Storage.
Note: For security, granting some of the access permissions such as Public access WRITE
, Public access WRITE_ACP
, Authenticated users WRITE
, Authenticated users WRITE_ACP
is possible only through an API Call.
To manage ACL for buckets using the web console, follow these steps:
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the bucket whose ACL you want to manage.
3. Click Bucket settings and choose the Access Control List (ACL) under the Access management section.
6. Click Save to apply the ACL settings to the bucket.
Result: The bucket ACL permissions are successfully applied on the bucket.
Prerequisites:
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the bucket to which you want to add the grantee.
3. Click Bucket settings and choose the Access Control List (ACL) under the Access management section.
5. Add any number of grantees to the bucket by following step 4.
6. Click Save to add the additional grantees with corresponding ACL permissions to the bucket.
Result: The grantees are successfully added to the bucket.
You can manage ACL permission for objects through the web console, IONOS S3 Object Storage API, or the command-line tool.
The following table shows the ACL permissions that you can configure for objects in a bucket in the IONOS S3 Object Storage.
These permissions are applied at individual object levels within a bucket, offering a high level of granularity in access control.
Note: For security, granting some of the access permissions such as Public access WRITE_ACP
and Authenticated users WRITE_ACP
is possible only through an API Call.
To manage ACL for objects using the web console, follow these steps:
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the bucket that contains the object whose ACL you want to modify.
3. From the Objects list, choose the object for which ACL permissions are to be modified.
4. From the Object Settings, click Access Control List (ACL).
7. Click Save to apply the ACL settings to the object.
Result: The object ACL permissions are successfully applied to the object.
Prerequisites:
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the bucket that contains the object whose ACL you want to modify.
3. From the Objects list, choose the object for which you want to add the grantee.
5. Add any number of grantees to the object by following step 4.
6. Click Save to add the additional grantees with corresponding ACL permissions to the object.
Result: The grantees are successfully added to the object.
Use the to configure and manage Logging for a bucket.
The grantee is the user under the same contract at IONOS, but it also could be the user under another contract. You need the user's Canonical user ID to share access to the bucket or object using ACL. For more information, see .
Make sure the grantee Object Storage account already exists. If not, then, begin with creating the grantee by following the steps in .
Result: The new user is successfully created and the Canonical user ID is retrieved. You can now share access to the bucket with the new user using .
3. In the Bucket Policy, click Edit, copy and paste the provided JSON policy, replacing BUCKET_NAME
and CANONICAL_USER_ID
with the actual values. You can retrieve your Canonical user ID from the Key management section. For more information, see .
Use the to manage the Bucket Policy configuration.
Use the to manage Bucket Policy.
4. Select the checkboxes against the access permissions to grant at each user level such as bucket owner, public access, authenticated users, and logging. For more information, see .
5. Add grantees to provide additional users with access permission to the bucket. For more information, see .
Make sure the canonical user ID of the grantee is known. To retrieve the ID, see .
The grantee should already exist. If not, create a user and retrieve the Canonical user ID by following the steps in .
4. In the Additional Grantees section, enter the retrieved Canonical user ID of the grantee, select the checkboxes on the ACL permissions to grant, and click Add. For ACL permissions, see .
Note: Granting access to a bucket for another IONOS user does not make the bucket appear in the user's S3 web console due to the S3 protocol's architecture. To access the bucket, the user must use other S3 tools, as the granted access does not translate to interface visibility.
Use the Object Storage API to manage bucket ACL permissions.
Use to manage ACL permission for buckets.
5. Select the checkboxes against the access permissions to grant at each user level such as bucket owner, public access, and authenticated users. For more information, see .
6. Add grantees to provide additional users with access permission to the object. For more information, see .
Make sure the canonical user ID of the grantee is known. To retrieve the ID, see .
The grantee should already exist. If not, create a user and retrieve the Canonical user ID by following the steps in .
4. In the Additional Grantees section, enter the retrieved Canonical user ID of the grantee, select the checkboxes on the ACL permissions to grant, and click Add. For ACL permissions, see .
Use the Object Storage API to manage object ACL permissions.
Use to manage ACL permission for objects.
User | Console permission | ACL permission | Access granted |
---|---|---|---|
Bucket Owner | Objects - Read | READ | Allows grantee to read the object data and its metadata. |
Bucket Owner | Objects - Write | WRITE | Enables the grantee to write object data and its metadata, including deleting the object. |
Bucket Owner | Bucket ACL - Read | READ_ACP | Grants the ability to read the ACL of the bucket. |
Bucket Owner | Bucket ACL - Write | WRITE_ACP | Allows the grantee to write the ACL of the bucket. |
Public access | Objects - Read | READ | Grants public read access for the objects in the bucket. Anyone can access the objects in the bucket. |
Public access | Bucket ACL - Read | READ_ACP | Grants public read access for the bucket ACL. Anyone can access the bucket ACL. |
Authenticated users | Objects - Read | READ | Grants read access to objects in the bucket to anyone with an IONOS account using which they can access the objects in the bucket. |
Authenticated users | Bucket ACL - Read | READ_ACP | Grants read access to bucket ACL to anyone with an IONOS account. |
Logging | Objects - Read | READ | Allows grantee to read the object log data. |
Logging | Objects - Write | WRITE | Enables the grantee to write object data and its metadata, including deleting the object. |
Logging | Bucket ACL - Read | READ_ACP | Grants the ability to read the log data of the bucket. |
Logging | Bucket ACL - Write | WRITE_ACP | Allows the grantee to write the ACL of the bucket. |
User | Console permission | ACL permission | Access granted |
---|---|---|---|
Bucket Owner | Objects - Read | READ | Allows grantee to read the object data and its metadata. |
Bucket Owner | Object ACL - Read | READ_ACP | Grants the ability to read the object ACL. |
Bucket Owner | Object ACL - Write | WRITE_ACP | Allows the grantee to write the ACL of the applicable object. |
Public access | Objects - Read | READ | Grants public read access for the objects in the bucket. Anyone can access the objects in the bucket. |
Public access | Object ACL - Read | READ_ACP | Grants public read access for the object ACL. Anyone can access the object ACL. |
Authenticated users | Objects - Read | READ | Grants read access to objects in the bucket to anyone with an IONOS account using which they can access the objects in the bucket. |
Authenticated users | Object ACL - Read | READ_ACP | Grants read access to object ACL to anyone with an IONOS account. |
The IONOS S3 Object Storage is fully compatible with S3, which means that it can be used to manage buckets and objects with existing S3 clients once properly configured.
The following is a list of popular tools for working with IONOS S3 Object Storage, along with instructions for configuring them:
Postman – a tool for API development and testing. Its unique feature is a graphical interface for sending API requests to object storage endpoints, facilitating testing and development.
Cyberduck – an open-source GUI client supporting object storage among other protocols, presenting storage objects as local files for easy browsing, upload, and download.
S3 Browser – a freeware Windows client for object storage, providing an easy way to manage buckets and objects, including file permissions and access control lists, through a visual interface.
AWS CLI is unique in offering a wide range of commands for comprehensive management of buckets and objects, ideal for scripting and automation.
S3cmd – a command-line tool offering direct, scriptable control over object storage buckets and objects. However, it lacks certain features like versioning and replication management.
rclone – a command-line program for syncing files between local and cloud storage, distinguishing itself with powerful synchronization capabilities, specifically useful when handling large data quantities and complex sync setups.
Boto3 Python SDK provides high-level object-oriented API as well as low-level direct service access.
Veeam Backup and Replication is a comprehensive backup and disaster recovery solution for virtual, physical, and cloud-based workloads.
S3cmd is a free command line tool and client for loading, retrieving, and managing data in S3. It has over 60 command line options, including multipart uploads, encryption, incremental backup, S3 sync, ACL and metadata management, bucket size, and bucket policies (Linux, macOS).
Install s3cmd for your distribution:
on CentOS/RHEL and Fedora: sudo dnf install s3cmd
on Ubuntu/Debian: sudo apt-get install s3cmd
on macOS using Brew: brew install s3cmd
You can also install the latest version from SourceForge.
Run the following command in a terminal: s3cmd --configure
. This will guide you through the interactive installation process:
Enter your Access Key and Secret key. To get them, log in to the DCD, go to Menu > Storage > IONOS S3 Object Storage > Key management.
Note: Your credentials are not tied to a specific region or bucket.
Specify the region of your bucket for Default Region
. Example: eu-central-2
. Please refer to the list of available regions.
Specify the endpoint for the selected region for S3 Endpoint
from the same list. For example, s3-eu-central-2.ionoscloud.com
.
Insert the same endpoint again for DNS-style bucket+hostname:port template
.
Specify or skip password (press Enter) for Encryption password
.
Press Enter for Path to GPG program
.
Press Enter for Use HTTPS protocol
.
Press Enter for HTTP Proxy server name
.
Press Enter for Test access with supplied credentials? [Y/n]
.
S3cmd will try to test the connection. If everything went well, save the configuration by typing y
and pressing Enter. The configuration will be saved in the .s3cfg
file.
If you need to work with more than one region or with different providers, there is a way to set up multiple configurations. Use s3cmd --configure --config=ionos-fra to save the configuration for a specific location or provider. Run s3cmd with the -c option to override the default configuration file. For example, list the objects in the bucket:
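A sketch, assuming a configuration saved as ionos-fra and a bucket named my-bucket:

```bash
s3cmd -c ionos-fra ls s3://my-bucket
```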
You can also specify an endpoint directly on the command line to override the default setting. The Access Key and Secret key are region-independent, so s3cmd can take them from the default configuration:
Or even specify it with an Access Key and the Secret Key:
Please refer to the list of available endpoints for the --host
option. You can skip this option if you are only using the region from the configuration file.
List buckets (even buckets from other regions will be listed):
Create a bucket (the name must be unique for the whole IONOS S3 Object Storage). You need to explicitly use the --region
option, otherwise a bucket will be created in the default de
region:
Create the bucket my-bucket in the region de (Frankfurt, Germany):
Create the bucket my-bucket in the region eu-central-2 (Berlin, Germany):
Create the bucket my-bucket in the region eu-south-2 (Logroño, Spain):
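As a hedged sketch, the three commands could look as follows; the bucket name is a placeholder and must be globally unique, and depending on your default configuration the --host option may also be required:

```bash
s3cmd mb s3://my-bucket --region=de
s3cmd mb s3://my-bucket --region=eu-central-2
s3cmd mb s3://my-bucket --region=eu-south-2
```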
List objects of the bucket my-bucket
:
Upload filename.txt from the current directory to the bucket my-bucket
:
Copy the contents of local directory my-dir
to the bucket my-bucket
with prefix my-dir
:
Copy all objects from my-source-bucket to my-dest-bucket, excluding .zip files (or use mv to move objects). The command does not support cross-region copying for IONOS S3 Object Storage; use the rclone utility for cross-region copying:
Download all the objects from the my-bucket
bucket to the local directory my-dir
(the directory should exist):
Synchronize a directory to S3 (checks files using size and md5 checksum):
Get Cross-Origin Resource Sharing (CORS) configuration:
Set up Cross-Origin Resource Sharing (CORS) configuration:
cors_rules.xml:
Delete CORS from the bucket:
Get information about buckets or objects:
s3cmd info s3://my-bucket
s3cmd info s3://my-bucket/my-object
Generate a public URL for download that will be available for 10 minutes (600 seconds):
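A sketch using the s3cmd signurl command, where +600 is the validity in seconds and the object path is a placeholder:

```bash
s3cmd signurl s3://my-bucket/filename.txt +600
```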
Set up a lifecycle policy for a bucket (delete objects older than 1 day):
delete-after-one-day.xml:
Encrypt and upload files. This option allows you to encrypt files before uploading. To use it, run s3cmd --configure and fill out the path to the GPG utility and the encryption password. There is no need to use special parameters to decrypt the file when downloading with the get option, as this is done automatically using the data from the configuration file.
Add or modify user-defined metadata. Use headers starting with x-amz-meta-
and store data in the set of key-value pairs. The user-defined metadata is limited to 2 KB in size. The size of the user-defined metadata is measured by taking the sum of the number of bytes in the UTF-8 encoding of each key and value.
s3cmd modify --add-header x-amz-meta-my_key:my_value s3://my-bucket/prefix/filename.txt
Check the changes:
Delete metadata:
For more information, visit S3cmd usage and S3cmd FAQ.
The IONOS S3 Object Storage Service endpoints are listed below.
S3 region (global default): de
S3 endpoint: s3-eu-central-1.ionoscloud.com
Legacy endpoint: s3-de-central.profitbricks.com
S3 static website endpoint: s3-website-de-central.profitbricks.com
(Please note that only this region uses the profitbricks.com domain for static website endpoints.)
S3 region (LocationConstraint): eu-central-2
S3 endpoint: s3-eu-central-2.ionoscloud.com
Legacy endpoint s3-eu-central-2.profitbricks.com
S3 static website endpoint: s3-website-eu-central-2.ionoscloud.com
S3 region (LocationConstraint): eu-south-2
S3 endpoint: s3-eu-south-2.ionoscloud.com
Legacy endpoint: s3-eu-south-2.profitbricks.com
S3 static website endpoint: s3-website-eu-south-2.ionoscloud.com
Note: The IONOS S3 Object Storage Service does not support HTTPS
for hosting static websites unless the full domain path is used.
Postman is a free tool for conveniently working with APIs in a graphical interface. It is available for Windows, macOS, and Linux.
You can follow the installation instructions described on Postman.
In the Authorization tab for a request, select AWS Signature from the Type dropdown list. Specify where Postman should append your authorization data using the Add authorization data to drop-down menu.
If you select Request Headers, Postman populates the Headers tab with Authorization and X-Amz- prefixed fields.
If you select the Request URL, Postman populates the Params tab with authentication details prefixed with X-Amz-.
Note: The parameters listed below contain confidential information. We recommend using variables to keep this data secure while working in a collaborative environment.
To get the Access Key and Secret Key, log in to the DCD, go to Menu > Storage > IONOS S3 Object Storage > Key management.
Advanced fields are optional, but Postman will attempt to generate them automatically if necessary.
For AWS Region, leave the field blank as the region from the endpoint will be used.
For Service Name, enter s3
. The name of the service that receives the requests.
For Session Token, leave the field blank. This is only required when temporary security credentials are used.
Setup completed. Now check the S3 API description to get the right endpoint to call.
Note: You need to use the correct endpoint URL for each region (see the list of available endpoints).
IONOS S3 Object Storage supports using Cyberduck, a Cloud Storage browser with SFTP, WebDAV, and S3 support for Windows, macOS, and Linux.
For the installation instructions, see Cyberduck.
Once inside Cyberduck, select Cyberduck > Preferences… from the menu.
Select Profiles to open the Connection Profiles page.
Select the IONOS Cloud Object Storage (Berlin) connection profile or IONOS Cloud Object Storage (Frankfurt), or IONOS Cloud Object Storage (Logrono) from the list of available connection profiles, or best use the search option to search for it.
Close the Preferences window and restart Cyberduck to install the selected connection profiles.
Open Cyberduck and select File > Open Connection… You will see the connection dialog.
At the top, click the dropdown menu and select the IONOS Cloud Object Storage (Berlin) profile that corresponds with the data center you want to use.
Enter key values in the Access Key and Secret Key fields.
To access the Object Storage keys:
Log in to the DCD, go to Menu > Storage > IONOS S3 Object Storage > Key management.
Choose "Generate a key" and confirm the action by clicking Generate. The object storage key will be generated automatically.
Click Connect.
-c FILE, --config=FILE
- Config file name. Defaults to $HOME/.s3cfg
.
-e, --encrypt
- Encrypt files before uploading to S3.
--upload-id=UPLOAD_ID
- UploadId for Multipart Upload, in case you want to continue an existing upload (equivalent to --continue-put
) and there are multiple partial uploads. Use s3cmd multipart [URI]
to see what UploadIds are associated with the given URI.
S3 Browser is a free, feature-rich Windows client for IONOS S3 Object Storage.
Download and install the S3 Browser.
Add a new account and select:
Display name: Enter a name for the connection.
Account type: Select S3 Compatible Storage from the drop-down list.
REST Endpoint: If you already have a bucket, select the endpoint URL from the list. Otherwise, you can select s3-eu-central-2.ionoscloud.com, which corresponds to the location in Berlin, Germany.
To get the Access Key and Secret Key, log in to the DCD, go to Menu > Storage > IONOS S3 Object Storage > Key management.
Click Advanced S3-compatible storage settings in the lower-left corner of the form.
Storage settings:
Signature version: Select Signature V4 from the drop-down list.
Addressing model: Leave Path style.
Override storage regions: Paste the following text into the text area:
Region-specific endpoint: Insert the following text: s3-{region-code}.ionoscloud.com
Save the details.
Try creating your first bucket. The bucket name must be unique across the entire IONOS S3 Object store. That's why S3 Browser will offer to add random text to the bucket name. But you can still try to come up with your unique name.
This document provides instructions to manage Versioning using the AWS CLI command-line tool. Additionally, these tasks can also be performed using the web console and the IONOS S3 Object Storage API.
Prerequisites:
Set up the AWS CLI by following the .
Make sure to consider the supported .
To get the versioning state of the bucket:
To enable versioning for the bucket:
To list object versions for the bucket:
To list object versions for the object my-object.txt
:
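The following is a hedged sketch of the four operations above; the bucket name, object key, and endpoint are placeholders:

```bash
# Get the versioning state of the bucket
aws s3api get-bucket-versioning --bucket my-bucket \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com

# Enable versioning for the bucket
aws s3api put-bucket-versioning --bucket my-bucket \
  --versioning-configuration Status=Enabled \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com

# List object versions for the bucket
aws s3api list-object-versions --bucket my-bucket \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com

# List object versions for my-object.txt
aws s3api list-object-versions --bucket my-bucket --prefix my-object.txt \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com
```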
This document provides instructions to manage the Bucket Policy using the AWS CLI command-line tool. Additionally, these tasks can also be performed using the web console and the IONOS S3 Object Storage API.
Prerequisites:
Set up the AWS CLI by following the .
Make sure to consider the supported .
Create a file policy.json containing the JSON policy. For more information, see .
To apply a bucket policy to a bucket:
To save a bucket policy to file:
To delete the bucket policy:
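A hedged sketch of these operations; the bucket name, file name, and endpoint are placeholders:

```bash
# Apply the bucket policy from policy.json
aws s3api put-bucket-policy --bucket my-bucket --policy file://policy.json \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com

# Save the current bucket policy to a file
aws s3api get-bucket-policy --bucket my-bucket --query Policy --output text \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com > policy.json

# Delete the bucket policy
aws s3api delete-bucket-policy --bucket my-bucket \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com
```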
This document provides instructions to manage ACL permissions for objects using the AWS CLI command-line tool. Additionally, these tasks can also be performed using the web console and the IONOS S3 Object Storage API.
Prerequisites:
Set up the AWS CLI by following the .
Make sure to consider the supported for object upload.
Use the following keys to define access permissions:
--grant-read
: Grants read-only access.
--grant-write
: Grants write-only access.
--grant-read-acp
: Grants permission to read the Access Control List.
--grant-write-acp
: Grants permission to modify the Access Control List.
--grant-full-control
: Grants full access, encompassing the permissions listed above (read, write, read ACL, and write ACL).
Use --key
to specify the object for granting access:
Use the following values for the --acl
key:
private
removes public access.
public-read
allows public read-only access.
public-read-write
allows public read/write access.
authenticated-read
allows read-only access to all authenticated users of IONOS S3 Object storage (including ones out of your contract).
To allow public read-only access to the object:
To remove public access to the object:
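For example, as a sketch (bucket name, object key, and endpoint are placeholders):

```bash
# Allow public read-only access to the object
aws s3api put-object-acl --bucket my-bucket --key my-object.txt --acl public-read \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com

# Remove public access to the object
aws s3api put-object-acl --bucket my-bucket --key my-object.txt --acl private \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com
```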
This document provides instructions to manage ACL permissions for buckets using the AWS CLI command-line tool. Additionally, these tasks can also be performed using the web console and the IONOS S3 Object Storage API.
Prerequisites:
Set up the AWS CLI by following the .
Make sure to consider the supported .
Use the following keys to define access permissions:
--grant-read
: Grants read-only access.
--grant-write
: Grants write-only access.
--grant-read-acp
: Grants permission to read the Access Control List.
--grant-write-acp
: Grants permission to modify the Access Control List.
--grant-full-control
: Grants full access, encompassing the permissions listed above (read, write, read ACL, and write ACL).
Note: Granting access to a bucket for another IONOS user does not make the bucket appear in the user's S3 web console due to the S3 protocol's architecture. To access the bucket, the user must use other S3 tools, as the granted access does not translate to interface visibility.
To grant full control of my-bucket
to a user with a specific Canonical user ID:
To separate grants with a comma if you want to specify multiple IDs:
To grant full control of my-bucket
to multiple users using Canonical user ID:
To grant full control of my-bucket
by using an email address
instead of Canonical User ID:
Retrieve the ACL of a bucket and save it to the file acl.json
:
To edit the file, for example, remove or add some grants and apply updated ACL to the bucket:
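A sketch of this round trip (placeholders as above):

```bash
# Save the current bucket ACL to acl.json
aws s3api get-bucket-acl --bucket my-bucket \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com > acl.json

# After editing acl.json, apply the updated ACL to the bucket
aws s3api put-bucket-acl --bucket my-bucket --access-control-policy file://acl.json \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com
```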
Use the following values for the --acl
key:
private
removes public access.
public-read
allows public read-only access.
public-read-write
allows public read/write access.
authenticated-read
allows read-only access to all authenticated users of IONOS S3 Object storage (including ones out of your contract).
To allow public read-only access to the bucket:
To remove public access to the bucket:
To set WRITE
and READ_ACP
permissions for the Log Delivery Group which is required before enabling the Logging feature for a bucket:
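A hedged sketch using the standard S3 Log Delivery group URI; note that put-bucket-acl replaces the existing ACL, so the owner's grant is included as well, and the Canonical user ID, bucket name, and endpoint are placeholders:

```bash
aws s3api put-bucket-acl --bucket my-bucket \
  --grant-full-control id=CANONICAL_USER_ID_OF_OWNER \
  --grant-write uri=http://acs.amazonaws.com/groups/s3/LogDelivery \
  --grant-read-acp uri=http://acs.amazonaws.com/groups/s3/LogDelivery \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com
```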
This document provides instructions to manage Object Lock using the AWS CLI command-line tool. Additionally, these tasks can also be performed using the web console and the IONOS S3 Object Storage API.
Prerequisites:
Object Lock configuration is only feasible when enabled at the time of bucket creation. It cannot be activated for an existing bucket.
Set up the AWS CLI by following the .
Make sure to consider the supported .
To create a bucket my-bucket
in the de
region (Frankfurt, Germany) with Object Lock:
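A sketch, assuming the de endpoint and a free bucket name:

```bash
aws s3api create-bucket --bucket my-bucket \
  --object-lock-enabled-for-bucket \
  --endpoint-url https://s3-eu-central-1.ionoscloud.com
```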
An Object Lock with Governance mode on a bucket provides the bucket owner with more flexibility compared to the Compliance mode. It permits the removal of the Object Lock before the designated retention period has expired, allowing for subsequent replacements or deletions of the object.
To apply Governance mode configuration to the bucket my-bucket-with-object-lock
with a default retention period equal to 15 days (or use the PutObjectLockConfiguration API Call):
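A hedged equivalent with the AWS CLI; the bucket name and endpoint are placeholders:

```bash
aws s3api put-object-lock-configuration --bucket my-bucket-with-object-lock \
  --object-lock-configuration '{"ObjectLockEnabled": "Enabled", "Rule": {"DefaultRetention": {"Mode": "GOVERNANCE", "Days": 15}}}' \
  --endpoint-url https://s3-eu-central-1.ionoscloud.com
```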
On applying this configuration, the newly uploaded objects adhere to this retention setting.
An Object Lock with Compliance mode on a bucket ensures strict control by enforcing a stringent retention policy on objects. Once this mode is set, the retention period for an object cannot be shortened or modified. It provides immutable protection by preventing objects from being deleted or overwritten during their retention period.
This mode is particularly suited for meeting regulatory requirements as it guarantees that objects remain unaltered. It does not allow locks to be removed before the retention period concludes, ensuring consistent data protection.
To apply Compliance mode configuration to the bucket my-bucket-with-object-lock
with a default retention period equal to 15 days:
On applying this configuration, the newly uploaded objects adhere to this retention setting.
To retrieve Object Lock configuration for a bucket (the same could be achieved with the GetObjectLockConfiguration API Call):
To upload my-object.pdf
to the bucket my-bucket-with-object-lock
:
Note: The Object Lock retention is not specified so a bucket’s default retention configuration will be applied.
To upload my-object.pdf
to the bucket my-bucket-with-object-lock
and override the bucket’s default Object Lock configuration:
Note: You can overwrite objects protected with Object Lock. Since Versioning is used for the bucket, multiple versions of the object are kept. Deleting objects is also allowed because this operation only adds a delete marker to the object's version.
Note: Delete markers are not WORM-protected, regardless of any retention period or legal hold in place on the underlying object.
To apply LegalHold status to my-object.pdf
in the bucket my-bucket-with-object-lock
(use OFF
to switch it off):
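For example, as a sketch (placeholders as above):

```bash
aws s3api put-object-legal-hold --bucket my-bucket-with-object-lock \
  --key my-object.pdf \
  --legal-hold Status=ON \
  --endpoint-url https://s3-eu-central-1.ionoscloud.com
```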
To check the Object Lock status for a particular version of an object, you can utilize either the GET Object
or the HEAD Object
commands. Both commands will provide information about the retention mode, the designated 'Retain Until Date' and the status of the legal hold for the chosen object version.
When multiple users have permission to upload objects to your bucket, there is a risk of overly extended retention periods being set. This can lead to increased storage costs and data management challenges. While the system allows for up to 100 years using the s3:object-lock-remaining-retention-days
condition key, implementing limitations can be particularly beneficial in multi-user environments.
To establish a 10-day maximum retention limit:
Save it to the policy.json
and apply using the following command:
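The following is a hedged sketch of such a policy and of applying it; the bucket name, endpoint, and Sid are illustrative:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "LimitRetentionToTenDays",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket-with-object-lock/*",
      "Condition": {
        "NumericGreaterThan": {"s3:object-lock-remaining-retention-days": "10"}
      }
    }
  ]
}
```

```bash
aws s3api put-bucket-policy --bucket my-bucket-with-object-lock --policy file://policy.json \
  --endpoint-url https://s3-eu-central-1.ionoscloud.com
```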
IONOS S3 Object Storage supports using Amazon's AWS Command Line Interface (AWS CLI) for Windows, macOS, and Linux.
For the installation instructions, see .
Run the following command in a terminal: aws configure
.
AWS Access Key ID [None]: Insert the Access Key. To get it, log in to the DCD and go to Menu > Storage > IONOS S3 Object Storage > Key management.
AWS Secret Access Key [None]: Paste the Secret Key. It can be found in the Data Center Designer by selecting Storage > S3 Key Management.
Default region name [None]: de
.
Default output format [None]: json
.
For each command, be sure to include one of the endpoints in the endpoint-url
parameter:
For information on the supported IONOS S3 Object Storage Service endpoints, see .
There are two sets of commands:
s3: Offers high-level commands for managing S3 buckets and for moving, copying, and synchronizing objects.
s3api: Allows you to work with specific features such as ACL, CORS, and Versioning.
List buckets:
Option 1: Using s3 set of commands
Option 2: Using s3api set of commands
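Hedged examples of both options; the endpoint is a placeholder for your region's endpoint:

```bash
# Option 1: s3 commands
aws s3 ls --endpoint-url https://s3-eu-central-2.ionoscloud.com

# Option 2: s3api commands
aws s3api list-buckets --endpoint-url https://s3-eu-central-2.ionoscloud.com
```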
Create a bucket in the eu-central-2
region (Berlin, Germany):
Option 1: Using s3 set of commands
Option 2: Using s3api set of commands
Create a bucket in the de
region (Frankfurt, Germany) with Object Lock enabled:
Upload an object from the current directory to a bucket:
Copy the object to the bucket:
Copy the contents of the local directory my-dir
to the bucket my-bucket
:
Copy all objects from my-source-bucket
to my-dest-bucket
excluding .zip files. The command doesn’t support cross-region copying for IONOS S3 Object Storage:
Download all the objects from the my-bucket
bucket to the local directory my-dir
:
Sync the bucket my-bucket
with the contents of the local directory my-dir
:
Get Cross-Origin Resource Sharing (CORS) configuration:
Set up Cross-Origin Resource Sharing (CORS) configuration:
cors.json:
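A hedged sketch of a possible cors.json and of the two commands; the allowed origin, bucket name, and endpoint are placeholders:

```json
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://example.com"],
      "AllowedMethods": ["GET", "PUT", "POST"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3000
    }
  ]
}
```

```bash
# Get the CORS configuration
aws s3api get-bucket-cors --bucket my-bucket \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com

# Apply the CORS configuration from cors.json
aws s3api put-bucket-cors --bucket my-bucket --cors-configuration file://cors.json \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com
```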
Enable versioning for the bucket:
Get versioning state of the bucket:
Set up a lifecycle policy for a bucket (delete objects starting with "my/prefix/" older than 5 days):
delete-after-5-days.json:
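A hedged sketch of the file contents and of applying them; the bucket name and endpoint are placeholders:

```json
{
  "Rules": [
    {
      "ID": "delete-after-5-days",
      "Filter": {"Prefix": "my/prefix/"},
      "Status": "Enabled",
      "Expiration": {"Days": 5}
    }
  ]
}
```

```bash
aws s3api put-bucket-lifecycle-configuration --bucket my-bucket \
  --lifecycle-configuration file://delete-after-5-days.json \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com
```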
This task could also be achieved by using the API call.
The permanent deletion of the object’s version is prohibited, and the system only creates a deletion marker for the object. But it makes IONOS S3 Object Storage behave in most ways as though the object has been deleted. You can only list the delete markers and other versions of an object by using the API call.
For more information, see .
For more information, see .
For more information, see .
Boto3 is the official AWS SDK for Python. It allows you to create, update, and configure IONOS S3 Object Storage objects from within your Python scripts.
Install the latest Boto3 release via pip: pip install boto3
There are several ways to provide credentials, for example, passing credentials as parameters to the boto3.client() method, via environment variables, or with a generic credential file (~/.aws/credentials).
An example of passing credentials as parameters when creating a Session object:
To get the Access Key and Secret Key, log in to the DCD, go to Menu > Storage > IONOS S3 Object Storage > Key management.
NOTE: Your credentials are not tied to a specific region or bucket.
For information on the supported IONOS S3 Object Storage Service endpoints, see S3 Endpoints.
List buckets:
Create bucket my-bucket
at the region eu-central-1
:
Upload filename.txt to the bucket my-bucket
:
For more information, see AWS SDK documentation on Uploading files.
Download the file filename.txt
from the my-bucket
:
List objects of the bucket my-bucket
Copy the filename.txt from the bucket my-source-bucket
to the bucket my-dest-bucket
and add the prefix uploaded/
. Instead of the client()
method, we use the resource()
method here. It provides a higher level of abstraction than the low-level calls made by service clients.
For more examples, see Boto3 documentation, such as:
For more information on Boto3 and Python, see Realpython.com – Python, Boto3, and AWS S3: Demystified.
Rclone is a command line tool for managing files in the cloud. It is available for Windows, macOS, and Linux. Rclone also has a built-in HTTP server that can be used to remotely control rclone using its API and a web GUI (graphical user interface).
rclone helps:
backing up (and encrypting) files to cloud storage
restoring (and decrypting) files from cloud storage
mirroring cloud data to other cloud services or locally
transferring data to the cloud or between cloud storage providers
mounting multiple encrypted, cached, or diverse cloud storages in the form of a disk
analyzing and accounting for data stored in cloud storage using lsf, lsjson, size, and ncdu
Download the latest version of rclone from rclone.org. The official Ubuntu, Debian, Fedora, Brew, and Chocolatey repositories include rclone.
You can find the configuration example here.
Configurations configured with the rclone config
command are called remotes. If you already have or plan to use buckets in different IONOS S3 Object Storage regions, you will need to set up a separate remote for each region you use.
Please refer to the list of commands at the rclone website.
List remotes:
List buckets of "ionos1" remote:
Create bucket my-bucket
at the remote ionos1
:
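Hedged examples, assuming a remote named ionos1 has been configured with rclone config:

```bash
# List configured remotes
rclone listremotes

# List buckets of the ionos1 remote
rclone lsd ionos1:

# Create the bucket my-bucket at the ionos1 remote
rclone mkdir ionos1:my-bucket
```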
Upload filename.txt from the current directory to the bucket my-bucket
:
Copy the contents of local directory my-dir
to the bucket my-bucket
:
Copy all objects with the prefix my-dir
from the bucket my-source-bucket
to my-dest-bucket
:
The buckets could be located in different regions and even at different providers. Unless buckets are located within the same region, the data is not copied directly from the source to the destination. For cross-regional copying, the data is downloaded to you from the source bucket and then uploaded to the destination.
Download all the objects from the my-bucket
bucket to the local directory my-dir
:
Sync the bucket my-bucket
with the contents of the local directory my-dir
. The destination is updated to match the source, including deleting files if necessary:
Get the total size and number of objects in remote:path:
Check if the files in the local directory and destination match:
Produce an md5sum file for all the objects in the path:
This document provides instructions to manage Logging using the command-line tool. Additionally, these tasks can also be performed using the web console and IONOS S3 Object Storage API.
Prerequisites:
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported S3 Endpoints.
Prerequisite: Grant permissions to the Log Delivery Group to the bucket where logs will be stored. We recommend using a separate bucket for logs, but it must be in the same S3 region. Log Delivery Group must be able to write objects and read ACL.
After that, you can enable Logging for a bucket:
Contents of logs-acl.json:
To retrieve bucket logging settings:
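A hedged sketch of enabling and retrieving logging with the AWS CLI; the logs-acl.json mentioned above would hold the ACL grants for the Log Delivery Group on the target bucket, and the bucket names, prefix, and endpoint below are placeholders:

```bash
# Enable logging: store access logs for my-bucket in my-logs-bucket under the logs/ prefix
aws s3api put-bucket-logging --bucket my-bucket \
  --bucket-logging-status '{"LoggingEnabled": {"TargetBucket": "my-logs-bucket", "TargetPrefix": "logs/"}}' \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com

# Retrieve the bucket logging settings
aws s3api get-bucket-logging --bucket my-bucket \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com
```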
This information refers to Veeam versions older than 11.0.1.1261 20220302. No action is required for newer versions.
When using IONOS S3 Object Storage to offload or archive backup data, old versions of Veeam Backup and Replication use a file structure that is significantly different than network or block storage.
The hierarchy and granularity of the stored metadata also affect the database structure of the backend systems used by IONOS to provide IONOS S3 Object Storage.
This leads to increased performance requirements for the storage system and longer response times for queries from our customers. This can therefore also affect the recovery times when retrieving data from the S3 storage.
We will need to implement custom policies if your Veeam version is older than 11.0.1.1261 20220302
to optimize your new and existing S3 repositories. Please contact the IONOS Cloud Customer Support at support@cloud.ionos.com and provide the following information:
IONOS contract number and support PIN
Names of buckets used with Veeam
A maintenance time window, during which we can implement the policy. Please keep your time window within business hours; Monday to Friday 08:00 - 17:00 CET.
Caution: Your buckets will be unavailable for a short period within the specified time window. The duration of the adjustment depends on the amount of data and the number of saved objects. However, expect no more than 90 minutes of downtime.
The data will not be changed or viewed during maintenance. There is therefore no risk to the integrity of the contents of the bucket.
With the Custom Policies, we will also add a Bucket Lifecycle Policy to the Veeam Bucket, which will automatically remove the expired Deletion Marker. This is done by us and can be reviewed by you, as shown in the screenshot below.
This can also be viewed using the API
The following are a few FAQs to provide insight into the IONOS S3 Object Storage application.
The new web console for IONOS S3 Object Storage is an enhanced version of the previously available S3 web console in the DCD, offering improved user experience and guidance with a design catering to multiple target groups. With this revamp, the S3 Web Console in the DCD is renamed to IONOS S3 Object Storage.
The new web console for the IONOS S3 Object Storage offers an improved user interface (UI) of the application and the following are the capabilities that you will notice:
Visually enhanced UI of the web console catering to multiple target audiences without needing external browser windows to use the application.
Natively integrated front-end in the DCD using IONOS standard design system. Instead of an external browser window, you can access the S3 web console within the DCD.
Offers context-sensitive help as learn more links from within the UI to support users with relative documentation about the workflows.
The application offers faster responsiveness and improved performance.
In the DCD, go to menu > Storage > IONOS S3 Object Storage. The feature is generally available to all existing and new users. Alternatively, you can also use the , , to access the Object Storage.
In the DCD, go to Storage > IONOS S3 Object Storage > Key management to view the access keys. You can generate a new key in the Access keys by using the Generate a key function.
No, with the new web console generally available to all users, the old console is phased out and removed from the DCD. All the capabilities of the old console are now improved and made available in the new web console.
Functionally, the bucket settings remain unchanged. To offer an improved user experience and simplify the UI design, you will notice the following changes:
Bucket Canned ACL (a predefined set of permissions per grantee) and Storage Policy (a setting automatically applied when a user creates a bucket) are deprecated.
With our ongoing efforts to continuously improve our product functions, the IONOS S3 Object Storage will offer the Bucket Inventory feature shortly. It generates regular reports listing the objects in a storage bucket, including details like metadata, size, and storage class. It helps in tracking and managing stored content efficiently.
Objects (files) of any format can be uploaded to and stored in the Object Storage. Objects may not exceed 4,65 GB (5.000.000.000 bytes) if uploaded using the web console. Other applications are not subject to this limit. Use the MultipartUpload set of functions in API or SDKs to upload large files.
Yes, by setting appropriate permissions on your buckets or individual objects, you can make data accessible over the internet. The Static Website Hosting feature also enables you to host static websites directly from your buckets, serving your content (HTML files, images, and videos) to users over the web.
You can use the Lifecycle Policy feature to automatically delete outdated objects such as logs. This feature enables you to create rules that specify when objects should be deleted. For example, you can set a policy to automatically remove log files after they have reached a certain time.
Data redundancy is achieved through erasure coding. This process involves dividing data into fragments, expanding, and encoding it with redundant data pieces, which are then stored across a set of different servers. During a hardware failure or data corruption, the system can reconstruct the data using these fragments and redundancy information, ensuring data durability and reliability.
To improve the durability and availability of your data, use the Data Replication feature. This functionality allows you to replicate your data to another geographic location, offering enhanced protection and resilience against natural disasters or other regional disruptions. It also offers additional security for your data and faster data access from the region where the replica resides.
You can store any type of data, including documents, photos, videos, backups, and large data sets for analytics and big data applications. Each object can only be a maximum of 46.566 GB (~42 TB) in size. For more information, see .
The Bucket Permissions setting in the old web console is now available as .
The pricing is based on the actual amount of storage used and outgoing data transfer. There are no minimum storage charges, allowing you to start using the Object Storage by uploading as little as one byte of data. For more information, see .
The IONOS S3 Object Storage service is available in the following S3 regions: de
, eu-central-2
, and eu-south-2
. For the list of available points of access, see .
Each object can only be a maximum of 46.566 GB (~42 TB) in size. For more information, see .
To speed up the upload of large files, you can use Multipart Upload, which breaks large files into smaller, manageable parts and uploads them in parallel. The feature is not available via the web console but can be implemented in your application via the API or SDKs.
and provide it to the bucket owner; the owner can then grant you access by using or .
By default, objects in the IONOS S3 Object Storage are private and only the bucket owner has permission to access them. The bucket owner can use the to share objects with other S3 users.
You can also temporarily share objects with other users without additional authorization using the .
Yes, you can back up your bucket using the Replication feature that allows automatic replication of your bucket's objects to another bucket, which can be in a different geographic location. Additionally, you can apply to the destination bucket for enhanced security, preventing the replicated data from being deleted or modified.
If you wish to sync your bucket with local storage, tools like or can be utilized for seamless synchronization.
To safeguard your data against ransomware, you can use the . With this feature, you can set the Write Once Read Many (WORM) model on your objects, preventing them from being deleted or modified for a fixed amount of time or indefinitely.
During transit, TLS 1.3 is used for encryption. For data at rest, two options are available: AES256 server-side encryption (SSE-S3) and encryption with a customer-provided key (SSE-C). SSE-S3 is the default for uploads via the web console. SSE-C, on the other hand, is not available in the web console but can be utilized through and SDKs.