IONOS S3 Object Storage is a service offered by IONOS for storing and accessing unstructured data. The Object Storage is fully S3-compliant, which means that it can be used to manage buckets and objects using existing S3 clients. For assistance with your hosting requirements, consult our user guides, reference documentation, and FAQs.
Storage: IONOS S3 Object Storage is a modern storage technology that can be found in private and public cloud storage applications. In object storage architecture, files are not stored and managed in hierarchies or blocks as is the case with file or block storage, but as objects. An object consists of the file itself, customizable metadata, and a unique identifier through which it is addressed.
Compatibility: Object storage is almost infinitely scalable. It can be accessed through standard internet protocols, which makes it well-suited for cloud-based solutions. The S3 API (Simple Storage Service) has been established as a global standard for object storage for years. It allows for interoperability and compatibility across different object storage systems that adhere to this standard.
IONOS S3 Object Storage is currently available in Germany and Spain (see the list of available endpoints).
Applications: Object storage is best used for storing large chunks of unstructured, static data, such as videos, images, music, and other files not intended for manipulation by frequent transactions. This includes archives, backups, log files, documents, and any file type that you want to keep “as is” for later access.
Automation: IONOS S3 Object Storage is based on S3. The object storage solution offers the industry’s best compatibility with the S3 API. This guarantees a high level of interoperability with other object storage systems adhering to S3.
Furthermore, you can use any client application that supports S3 to access it. A GUI is available to make the management and use of IONOS Cloud Object Storage as comfortable as possible. The GUI is called the Object Storage Management Console.
Objects: The IONOS S3 Object Storage can store objects, i.e. files of any format. Neither format nor content is checked during upload. Objects can be stored in buckets and folders. The number of objects you can save is unlimited.
Buckets: logical containers in which the objects of the object storage are stored. Before files can be uploaded to the object storage, a bucket must first be created. The name of a bucket must be unique throughout the IONOS S3 Object Storage and must adhere to the naming convention. The user can define how the objects contained in a bucket are versioned and whether access to them is logged. Access to a bucket is managed by authorizations.
Folders: logical containers in which objects can be stored in a structured way, similar to a file system. A bucket can contain folders at multiple levels, meaning a folder can contain other folders. You cannot define properties or permissions for folders; this is done using buckets and objects. The same naming rules apply to folders as to objects. Once a folder is created, it cannot be renamed, and objects already uploaded cannot be moved to a newly created folder.
Security
IONOS S3 Object Storage protects user data on several levels. The storage policy chosen for the object storage provides the highest possible level of data protection. Technical failures of any kind will not result in data loss.
Connection to the object storage is SSL-encrypted. Moreover, uploaded objects can be stored using server-side encryption, so objects are kept in the IONOS object storage in encrypted form and are decrypted automatically when downloaded.
S3 allows for comprehensive access management at the bucket and object levels. This allows you to define precisely who may access what. By default, newly created buckets and objects are “private”. Only the bucket owner can access them. In order to protect content from unauthorized access, it is recommended that you make only those buckets or objects public that are to be shared publicly.
Grantees: S3-defined user groups to whom permissions are granted that specify which buckets and objects they may access in which way.
Permissions: These are the access rights that can be assigned to Grantees. By default, buckets and objects are "private", i.e. only the bucket owner can access them. The content of a bucket is always accessible (as a list) as soon as the bucket is "public", even if the objects it contains are private and can therefore neither be displayed nor downloaded!
Access Control Lists (ACLs): With the help of a detailed authorization system, based on S3 ACLs (Access Control Lists), you can control precisely who accesses and edits your content. By assigning ACLs to a group of users in accordance with an S3-compliant access control list, you can manage who may access the buckets and objects of your IONOS S3 Object Storage.
Canned ACLs: Pre-defined access profiles so that you don't have to enter the combination of permissions per grantee manually. By default, buckets and objects are "private", i.e. only the bucket owner can access them.
Object size: Please note that objects may not exceed 5 GB in size if they are uploaded using the Object Storage Management Console. Other applications or the IONOS S3 Object Storage API are not subject to this limit.
Bucket limits: Each user may create up to 500 buckets.
When naming buckets and folders, the name must:
be unique throughout the entire IONOS S3 Object Storage
consist of 3 to 63 characters
start with a letter or a number
consist of lower case letters (a-z) and numbers (0-9)
The use of hyphens (-), periods (.), and underscores (_) is conditional. The name must not:
end with a period, hyphen, or underscore.
contain multiple periods in a row (..)
contain hyphens next to periods.
have the format of an IPv4 address (e.g. 192.168.1.4).
contain underscores if the bucket is to be used for auto-tiering later.
Prerequisites: Make sure you are logged on to the IONOS S3 Object Storage using the Object Storage Management Console.
1. In the Buckets tab, click + ADD NEW BUCKET.
2. In Bucket Name, enter a name that adheres to the naming convention for buckets and objects.
3. Leave the default values set for Region.
4. Click Create.
The bucket is created unless a bucket with the same name already exists in the IONOS S3 Object Storage.
Storage Policy and Region cannot be changed after a bucket has been created.
Prerequisites: Make sure you are logged on to the IONOS S3 Object Storage using the Object Storage Management Console. Only bucket owners can create a folder.
1. Open the bucket to which you want to add a folder. The Objects tab opens.
2. Click + CREATE FOLDER. A popup window will open.
3. In the Folder Name field, enter a name that adheres to the naming convention.
4. Click OK to save the settings.
The folder is created and displayed in the bucket to which it belongs. Please keep in mind that this folder cannot be renamed. Objects that have already been uploaded cannot be moved to a different folder.
Buckets must be empty before they can be deleted.
Prerequisites: Make sure that the bucket does not contain any objects. You must be logged in to the IONOS S3 Object Storage using the Object Storage Management Console and be the bucket owner.
1. Open the Buckets tab.
2. Click Delete next to the selected item.
3. In the dialog, confirm the action by clicking OK.
The bucket is deleted and cannot be restored.
A feature allows contract owners and administrators to log on to the object storage accounts of their contract members as bucket owners with full access rights.
Access the DCD Console and enable user access.
Generate Object Storage Keys to login securely.
Retrieve user IDs for sharing purposes.
Learn the basics of IONOS S3 Object Storage Buckets and Folders.
Work with Objects inside of IONOS S3 Object Storage Buckets.
Discover different ways to share Objects publicly.
Record access to buckets and save log files.
Public: Everyone.
Authenticated Users: All users of the IONOS S3 Object Storage (not limited to a contract).
Log Delivery Group: The group required for logging (in combination with the "Log Delivery Write" ACL). Not applicable to objects.
Individual users: Selected users of the IONOS S3 Object Storage (not limited to a contract). Sharing buckets with individual users requires their IONOS S3 Object Storage ID.
Read access (Readable): For a bucket: view its contents as a list; opening and downloading objects is not possible. For an object: open and download it.
Write access (Writable): For a bucket: upload and delete objects. Not applicable to objects.
Read access to permissions (ACP Readable): View the access rights of the bucket or object.
Write access to permissions (ACP Writable): View and edit the access rights of the bucket or object.
Private (default): Full access for bucket owners.
Public Read: Full access for bucket owners; read access to the bucket for all users of the IONOS S3 Object Storage (not limited to a contract). Please note that the content of a bucket is always displayed as a list as soon as it is made "public", even if the objects it contains are private and can therefore neither be displayed nor downloaded.
Public Read Write: Full access for bucket owners; read and write access for everyone. Everyone may view the bucket contents and upload and delete files. Not applicable to objects.
Authenticated Read: Full access for bucket owners; read access for all users of the IONOS S3 Object Storage (not limited to a contract).
Log Delivery Write: Full access for bucket owners; write access for the Log Delivery Group, which can also view the access permissions of a bucket. This access profile is required for saving the log files generated when logging is activated for a bucket. Not applicable to objects.
Bucket Owner Read: Not applicable to buckets. Full access for object owners; read access for bucket owners.
Bucket Owner Full: Not applicable to buckets. Full access for object and bucket owners.
The IONOS S3 Object Storage Service endpoints are listed below.
S3 region (global default): de
S3 endpoint: s3-eu-central-1.ionoscloud.com
Legacy endpoint: s3-de-central.profitbricks.com
S3 static website endpoint: s3-website-de-central.profitbricks.com
(Please note that only this region uses the profitbricks.com domain for static website endpoints.)
S3 region (LocationConstraint): eu-central-2
S3 endpoint: s3-eu-central-2.ionoscloud.com
Legacy endpoint: s3-eu-central-2.profitbricks.com
S3 static website endpoint: s3-website-eu-central-2.ionoscloud.com
S3 region (LocationConstraint): eu-south-2
S3 endpoint: s3-eu-south-2.ionoscloud.com
Legacy endpoint: s3-eu-south-2.profitbricks.com
S3 static website endpoint: s3-website-eu-south-2.ionoscloud.com
Note: The IONOS S3 Object Storage Service does not support HTTPS for hosting static websites unless the full domain path is used.
Logging on requires a key (Object Storage Key) as part of the authentication process. This Object Storage Key consists of a key and a secret.
For each user, an Object Storage Key is generated automatically, which is activated when the user is granted permission to use the IONOS S3 Object Storage.
A maximum of five Object Storage Keys may be created per user.
Generate object storage keys: A bucket owner can have multiple Object Storage Keys, which can be given to other users or automated scripts. Users using such an additional Object Storage Key to access the IONOS S3 Object Storage automatically inherit credentials and access rights of the bucket owner. This can be useful for allowing users automated (scripted) or temporary access to object storage. When the automated or temporary use is over, the additional Object Storage Key can be deactivated.
Activate/deactivate: Deactivating an Object Storage Key will block access to the IONOS S3 Object Storage. A deactivated key can be reactivated and access restored.
Delete: If a key is no longer needed or if it should no longer be possible to gain access to the IONOS S3 Object Storage with this key, it can be deleted. This cannot be undone.
Before you delete a user or all of their Object Storage Keys from your account, please ensure that the content in their IONOS S3 Object Storage is accessible so that you can continue to use it or delete it by adjusting the access rights accordingly.
Content set to "private" that was not removed before the user or all of their Object Storage Keys were deleted is no longer accessible, but will continue to be charged. In this case, please contact the IONOS enterprise support team.
Prerequisites: Make sure you have the corresponding permission to create the Object Storage. Only contract owners and administrators with the Object-Storage-Key can set up the object storage.
Only contract owners and administrators can manage the Object Storage Keys of other users.
1. Go to Menu > Management > Users & Groups
2. Select a User and click the Object Storage Keys tab on the right.
3. Choose + Generate Key
4. Confirm the action by clicking OK.
An active Object Storage Key is generated, which can be used to connect to the IONOS S3 Object Storage of the user.
You can copy the Key and Secret from the respective fields to sign in to other object storage applications.
1. Select the required key.
2. In the Object Storage Keys tab, click Delete.
The selected key is deleted and can no longer be used to connect to the IONOS S3 Object Storage. The key cannot be restored.
1. Select the required Object Storage Key.
2. Activate: Select the Active check box. Deactivate: Clear the Active check box.
3. Click Save.
The key, and with it access to the IONOS S3 Object Storage, is activated or deactivated. If there is no active Object Storage Key, the Object Storage menu item is not displayed in the Menu Bar of the DCD.
Depending on the selected S3 client, you have various options for sharing buckets, objects, or object versions with users of the IONOS S3 Object Storage. In addition to roles and predefined profiles, you can share the content of your buckets with selected users by using their IONOS S3 Object Storage ID (so-called "ACL Sharing" or "S3 Sharing"). There are three separate forms of identification:
Contract-user-ID: The contract-user-ID consists of the contract number and user ID (contract number|User UUID). This ID can be used for sharing objects with selected users of the entire IONOS S3 Object Storage (not limited to users of your own account).
S3 Canonical-user-ID: The Canonical User ID is the ID assigned to a user by the IONOS S3 Object Storage.
Email address: Some S3 clients only require the email address of a registered S3 user for sharing objects, as they are capable of converting the email address to the ID required by the object storage.
S3 clients that support the "Display Name" feature will display the email address instead of the ID of a user for better readability.
In order for another user to share the content of their IONOS S3 Object Storage with you, they need your IONOS S3 Object Storage ID, which you will find in the Object Storage Key Manager.
Prerequisites: Make sure you have permission to use the IONOS S3 Object Storage and an active Object Storage Key.
Open Menu Bar > Storage > S3 Key Management
The IONOS S3 Object Storage IDs are displayed. You can now copy the required ID and tell the user who wants to share the content of their object storage with you.
If you want to share the content of your IONOS S3 Object Storage with other users, you need their IONOS S3 Object Storage ID.
Prerequisites: Make sure you have the corresponding permission to create the IONOS S3 Object Storage. Only contract owners and administrators can retrieve the IONOS S3 Object Storage IDs of their IONOS account users in the User Manager.
1. Open Menu > Management > Users & Groups.
2. In the Users tab, select the required user.
3. In the Object Storage Keys tab, open the S3 drop-down menu.
The IONOS S3 Object Storage IDs are displayed. You can now copy the required ID and use it for sharing your objects with this user.
You can record all access to a bucket in a log file from the time logging is activated. When the bucket is accessed, the log file is created in a bucket of your choice at a system-defined interval. Note: Bucket logging is not activated by default.
You can use the same bucket as source and target, but this is not recommended: additional log entries are created for the logs written to the bucket, which makes it difficult to find the logs you need.
Prerequisites: Make sure you have the appropriate permissions. Only contract owners, administrators, or users with access rights to the data center can configure bucket logging. Other user types have read-only access and cannot provision changes.
1. Access individual bucket properties by going to Buckets > Name > Properties.
There are two ways to assign write access. The first method is via Bucket Permissions.
2. From the Bucket Permissions tab, check off the Writable box under Log Delivery Group.
3. Save your preferences.
Alternatively, you may set write access from the Bucket Canned ACL tab.
4. From the Bucket Canned ACL tab, select Log Delivery Write from the SET CANNED ACL dropdown.
5. Save your preferences.
Prerequisites: Write access for Log Delivery needs to be enabled on the target bucket. You have to be the owner of the target bucket.
1. Under a Bucket's properties, open the Logging tab.
2. To activate logging, click the Enable Logging check box.
3. In the Destination Bucket field, enter the name of the bucket in which to save the log files. Source and target bucket may be identical.
Note: The destination bucket for logs can only be in the same region, and you must be the owner of the destination bucket.
Optional: In the Target Prefix field, enter a prefix for log files so that you can sort them more easily later (e.g. log_). If you enter no prefix, the log file name is derived from its time stamp alone. The prefix can also be used as a folder, such as logs/log_.
Once logging is enabled, you can use the Bucket Lifecycle feature. This feature adjusts the processing of logs as they become outdated.
1. To deactivate logging, clear the Enable Logging check box.
2. Save your settings by clicking Save.
Log files are generated and saved to the target bucket in the format (<prefix>)<time stamp>. You can modify or deactivate logging at any time with no effect on existing log files. Log files are handled like any other object in the bucket.
The S3 API (Simple Storage Service) has been the global standard for object storage for many years. It provides interoperability and compatibility across object storage systems that adhere to this standard. IONOS S3 Object Storage has one of the highest levels of S3 API support. Please also refer to the documentation for detailed information.
| Feature | Supported | Notes |
|---|---|---|
| Bucket CRUD | Yes | |
| Object CRUD | Yes | |
| Object Copy | Yes | Cross-regional copying is not supported |
| Multipart Uploads | Yes | |
| Pre-Signed URLs | Yes | Signature types v2 and v4 are supported |
| Bucket ACLs | Yes | |
| Object ACLs | Yes | |
| Block Public Access | Yes | Only via the API |
| Bucket Policies | Yes | |
| Object Policies | Yes | |
| CORS Configuration | Yes | Only via the API |
| Bucket Versioning | Yes | |
| Bucket Replication | Yes | Intraregional and cross-regional replication are supported |
| Bucket Tagging | Yes | Only via the API |
| Object Tagging | Yes | Only via the API |
| Bucket Lifecycle | Yes | |
| Bucket Access Logging | Yes | |
| Bucket Encryption Configuration | Yes | Only via the API |
| Object Encryption | Yes | Only via the API |
| Bucket Websites | Yes | |
| Bucket Inventory | Yes | Only via the API |
| Object Lock | Yes | Only via the API |
| Legal Hold | Yes | Only via the API |
| Object Torrent | Yes | Only via the API |
| Identity and Access Management (IAM) | No | |
| Security Token Service (STS) | No | |
| Multi-factor Authentication | No | |
| Bucket Notifications | No | |
| Request Payment | No | |
| Bucket Metrics | No | |
| Bucket Analytics | No | |
| Bucket Accelerate | No | |
| S3 Select | No | |
The pricing model for IONOS S3 Object Storage is as follows:
1 Gigabyte (GB) is equal to 1024 Megabytes (MB).
Storage space is charged per GB per hour.
Data transfer is charged per GB. Outbound data transfer is charged, except for replication traffic. Inbound data transfer is free, but if you upload data from your virtual machine, the same transfer is counted as outbound data transfer for that machine.
Using the IONOS S3 Object Storage API is free of charge.
Prices are listed in the respective price lists:
IONOS Ltd. – United Kingdom.
IONOS Inc. – United States and Canada.
All outbound data transfer from IONOS S3 Object Storage is billed as public traffic. The local and national traffic definitions do not apply. This includes outgoing data transfer to IONOS Virtual Machines (VMs) or dedicated servers regardless of their geographical location.
While inter-bucket data transfer is subject to charges, replication traffic both within the same region and across different regions is cost-free.
The cost per GB for outbound data transfer is contingent upon the cumulative data consumption of the account. A tiered pricing structure is implemented for all outbound traffic, including data transfer from VMs and IONOS S3 Object Storage.
No charges are imposed on inbound data transfer to IONOS S3 Object Storage. It is essential to know that when uploading data to IONOS S3 Object Storage, the same data transfer may be billed as outbound data transfer for your VM. When calculating network costs for data transfer from a VM to IONOS S3 Object Storage, the following distinctions are made between local, national, and public traffic:
Data transfer from a VM to IONOS S3 Object Storage within the confines of the same data center is billed as local traffic.
Data transfer from a VM to IONOS S3 Object Storage located in the same country but at a different data center is billed as national traffic.
Data transfer from a VM to IONOS S3 Object Storage in a data center in a different country is billed as public traffic.
The IONOS S3 Object Storage is fully compatible with S3, which means that it can be used to manage buckets and objects with existing S3 clients once properly configured.
Below is a list of popular tools for working with IONOS S3 Object Storage, along with instructions for configuring them:
Postman – a tool for API development and testing. Its unique feature is a graphical interface for sending API requests to object storage endpoints, facilitating testing and development.
Cyberduck – an open-source GUI client supporting object storage among other protocols, presenting storage objects as local files for easy browsing, upload, and download.
S3 Browser – a freeware Windows client for object storage, providing an easy way to manage buckets and objects, including file permissions and access control lists, through a visual interface.
AWS CLI – a command-line tool offering a wide range of commands for comprehensive management of buckets and objects, ideal for scripting and automation.
S3cmd – a command-line tool offering direct, scriptable control over object storage buckets and objects. However, it lacks certain features like versioning and replication management.
rclone – a command-line program for syncing files between local and cloud storage, distinguishing itself with powerful synchronization capabilities, specifically useful when handling large data quantities and complex sync setups.
Boto3 – a Python SDK providing a high-level object-oriented API as well as low-level direct service access.
Rclone is a command line tool for managing files in the cloud. It is available for Windows, macOS, and Linux. Rclone also has a built-in HTTP server that can be used to remotely control rclone using its API and a web GUI (graphical user interface).
rclone helps with:
backing up (and encrypting) files to cloud storage
restoring (and decrypting) files from cloud storage
mirroring cloud data to other cloud services or locally
transferring data to the cloud or between cloud storage providers
mounting multiple encrypted, cached, or diverse cloud storages in the form of a disk
analyzing data stored in cloud storage using lsf, lsjson, size, and ncdu
Download the latest version of rclone from rclone.org. The official Ubuntu, Debian, Fedora, Brew, and Chocolatey repositories include rclone.
You can find the configuration example here.
Configurations created with the rclone config command are called remotes. If you already have or plan to use buckets in different IONOS S3 Object Storage regions, you will need to set up a separate remote for each region you use.
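For reference, a remote for the eu-central-2 region might look like this in rclone.conf. This is a sketch: the remote name ionos1 is an example, and the keys are placeholders you must replace with your own.

```ini
[ionos1]
type = s3
provider = IONOS
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
region = eu-central-2
endpoint = s3-eu-central-2.ionoscloud.com
```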
Please refer to the list of commands at the rclone website.
List remotes:
List buckets of "ionos1" remote:
Create bucket my-bucket at the remote ionos1:
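Sketches of these commands, assuming a remote named ionos1 has been configured (bucket and remote names are placeholders):

```shell
# List all configured remotes
rclone listremotes
# List the buckets of the "ionos1" remote
rclone lsd ionos1:
# Create the bucket "my-bucket" at the remote "ionos1"
rclone mkdir ionos1:my-bucket
```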
Upload filename.txt from the current directory to the bucket my-bucket:
Copy the contents of local directory my-dir to the bucket my-bucket:
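Both uploads as a sketch (file, directory, and bucket names are placeholders):

```shell
# Upload a single file to the bucket
rclone copy filename.txt ionos1:my-bucket
# Copy a local directory into the bucket under the my-dir prefix
rclone copy my-dir ionos1:my-bucket/my-dir
```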
Copy all objects with the prefix my-dir from the bucket my-source-bucket to my-dest-bucket:
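A sketch of the bucket-to-bucket copy (names are placeholders):

```shell
# Copy objects with the my-dir prefix between buckets
rclone copy ionos1:my-source-bucket/my-dir ionos1:my-dest-bucket/my-dir
```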
The buckets can be located in different regions and even at different providers. Unless the buckets are located within the same region, the data is not copied directly from the source to the destination. For cross-regional copying, the data is downloaded from the source bucket to your machine and then uploaded to the destination.
Download all the objects from the my-bucket bucket to the local directory my-dir:
Sync the bucket my-bucket with the contents of the local directory my-dir. The destination is updated to match the source, including deleting files if necessary:
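Both operations might look like this (names are placeholders; note that sync can delete objects in the destination):

```shell
# Download the whole bucket into a local directory
rclone copy ionos1:my-bucket my-dir
# Make the bucket match the local directory (may delete objects in the bucket)
rclone sync my-dir ionos1:my-bucket
```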
Get the total size and number of objects in remote:path:
Check if the files in the local directory and destination match:
Produce an md5sum file for all the objects in the path:
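Sketches of these three inspection commands against the example remote:

```shell
# Total size and number of objects
rclone size ionos1:my-bucket
# Compare local files against the bucket contents
rclone check my-dir ionos1:my-bucket
# Produce MD5 checksums for all objects under the path
rclone md5sum ionos1:my-bucket
```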
S3cmd is a free command line tool and client for loading, retrieving, and managing data in S3. It has over 60 command line options, including multipart uploads, encryption, incremental backup, S3 sync, ACL and metadata management, bucket size, and bucket policies (Linux, macOS).
Install s3cmd for your distribution:
on CentOS/RHEL and Fedora: sudo dnf install s3cmd
on Ubuntu/Debian: sudo apt-get install s3cmd
on macOS using Brew: brew install s3cmd
You can also install the latest version from SourceForge.
Run the following command in a terminal: s3cmd --configure. This will guide you through the interactive configuration process:
Enter your Access Key and Secret key. To get them, log in to the DCD, click Storage > S3 Key Management.
Note: Your credentials are not tied to a specific region or bucket.
Specify the region of your bucket for Default Region. Example: eu-central-2. Please refer to the list of available regions.
Specify the endpoint for the selected region for S3 Endpoint from the same list. For example, s3-eu-central-2.ionoscloud.com.
Insert the same endpoint again for DNS-style bucket+hostname:port template.
Specify or skip the password (press Enter) for Encryption password.
Press Enter for Path to GPG program.
Press Enter for Use HTTPS protocol.
Press Enter for HTTP Proxy server name.
Press Enter for Test access with supplied credentials? [Y/n].
S3cmd will try to test the connection. If everything went well, save the configuration by typing y and pressing Enter. The configuration will be saved in the .s3cfg file.
If you need to work with more than one region or with different providers, there is a way to set up multiple configurations. Use s3cmd --configure --config=ionos-fra to save the configuration for a specific location or provider. Run s3cmd with the -c option to override the default configuration file. For example, list the objects in the bucket:
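A sketch of such a call (ionos-fra and my-bucket are placeholders from the text above):

```shell
# Read credentials and endpoint from the alternate configuration file
s3cmd -c ionos-fra ls s3://my-bucket
```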
You can also specify an endpoint directly on the command line to override the default setting. The Access Key and Secret key are region-independent, so s3cmd can take them from the default configuration:
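For example, with the eu-central-2 endpoint from the endpoint list (the bucket name is a placeholder):

```shell
# Override the default endpoint from the command line
s3cmd --host=s3-eu-central-2.ionoscloud.com \
      --host-bucket='%(bucket)s.s3-eu-central-2.ionoscloud.com' \
      ls s3://my-bucket
```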
Or even specify it with an Access Key and the Secret Key:
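A sketch with explicit credentials (replace ACCESS_KEY and SECRET_KEY with your own values):

```shell
# Pass the endpoint and the credentials explicitly
s3cmd --host=s3-eu-central-2.ionoscloud.com \
      --host-bucket='%(bucket)s.s3-eu-central-2.ionoscloud.com' \
      --access_key=ACCESS_KEY --secret_key=SECRET_KEY \
      ls s3://my-bucket
```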
Please refer to the list of available endpoints for the --host
option. You can skip this option if you are only using the region from the configuration file.
List buckets (even buckets from other regions will be listed):
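The listing takes no further arguments:

```shell
# List all buckets visible to the configured credentials
s3cmd ls
```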
Create a bucket (the name must be unique for the whole IONOS S3 Object Storage). You need to explicitly use the --region option, otherwise the bucket will be created in the default de region:
Create the bucket my-bucket in the region de (Frankfurt, Germany):
Create the bucket my-bucket in the region eu-central-2 (Berlin, Germany):
Create the bucket my-bucket in the region eu-south-2 (Logrono, Spain):
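The three variants side by side (my-bucket is a placeholder; the name must be globally unique):

```shell
# Frankfurt, Germany (default region)
s3cmd mb --region=de s3://my-bucket
# Berlin, Germany
s3cmd mb --region=eu-central-2 s3://my-bucket
# Logrono, Spain
s3cmd mb --region=eu-south-2 s3://my-bucket
```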
List objects of the bucket my-bucket:
Upload filename.txt from the current directory to the bucket my-bucket:
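Both commands as a sketch (file and bucket names are placeholders):

```shell
# List the objects in the bucket
s3cmd ls s3://my-bucket
# Upload a single file from the current directory
s3cmd put filename.txt s3://my-bucket
```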
Copy the contents of local directory my-dir to the bucket my-bucket with prefix my-dir:
Copy all objects from my-source-bucket to my-dest-bucket excluding .zip files (or use mv to move objects). The command does not support cross-region copying for IONOS S3 Object Storage; use the rclone utility for cross-region copying:
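Sketches of the two copy operations (directory and bucket names are placeholders):

```shell
# Upload a local directory recursively under the my-dir/ prefix
s3cmd put --recursive my-dir s3://my-bucket/
# Copy between buckets in the same region, skipping .zip files (use mv to move)
s3cmd cp --recursive --exclude='*.zip' s3://my-source-bucket s3://my-dest-bucket
```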
Download all the objects from the my-bucket bucket to the local directory my-dir (the directory should exist):
Synchronize a directory to S3 (checks files using size and md5 checksum):
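Both operations might look like this (names are placeholders):

```shell
# Download the whole bucket into an existing local directory
s3cmd get --recursive s3://my-bucket my-dir
# Upload new or changed files from the local directory to the bucket
s3cmd sync my-dir s3://my-bucket/
```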
Get Cross-Origin Resource Sharing (CORS) configuration:
Set up Cross-Origin Resource Sharing (CORS) configuration:
cors_rules.xml:
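A minimal example rule set for the file named above. This is an illustration; adjust the allowed origins, methods, and headers to your application:

```xml
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>
```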
Delete CORS from the bucket:
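The corresponding commands might look like this (viewing the configuration via s3cmd info is an assumption that depends on your s3cmd version):

```shell
# Show bucket information; recent s3cmd versions include the CORS rule here
s3cmd info s3://my-bucket
# Apply the CORS rules from cors_rules.xml to the bucket
s3cmd setcors cors_rules.xml s3://my-bucket
# Remove the CORS configuration again
s3cmd delcors s3://my-bucket
```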
Get information about buckets or objects:
s3cmd info s3://my-bucket
s3cmd info s3://my-bucket/my-object
Generate a public URL for download that will be available for 10 minutes (600 seconds):
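A sketch, assuming the object filename.txt exists in my-bucket:

```shell
# The +600 suffix sets the expiry 600 seconds from now
s3cmd signurl s3://my-bucket/filename.txt +600
```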
Set up a lifetime policy for a bucket (delete objects older than 1 day):
detete-after-one-day.xml:
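A sketch that creates the rule file named above and applies it (file and bucket names are placeholders from the text):

```shell
# Write the lifecycle rule: expire all objects after one day
cat > detete-after-one-day.xml <<'EOF'
<LifecycleConfiguration>
  <Rule>
    <ID>delete-after-one-day</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <Expiration>
      <Days>1</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
EOF
# Apply the policy to the bucket
s3cmd setlifecycle detete-after-one-day.xml s3://my-bucket
```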
Encrypt and upload files. This option allows you to encrypt files before uploading. To use it, run s3cmd --configure and fill out the path to the GPG utility and the encryption password. There is no need to use special parameters to decrypt the file on download with the get command, as this is done automatically using the data from the configuration file.
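With GPG configured in .s3cfg, an encrypted upload is a single flag (a sketch; names are placeholders):

```shell
# -e encrypts the file locally with GPG before uploading
s3cmd put -e filename.txt s3://my-bucket
# Decryption happens automatically on download
s3cmd get s3://my-bucket/filename.txt
```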
Add or modify user-defined metadata. Use headers starting with x-amz-meta- and store data as key-value pairs. The user-defined metadata is limited to 2 KB in size; the size is measured by taking the sum of the number of bytes in the UTF-8 encoding of each key and value.
s3cmd modify --add-header x-amz-meta-my_key:my_value s3://my-bucket/prefix/filename.txt
Check the changes:
Delete metadata:
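Both follow-up commands as a sketch (paths are placeholders from the example above):

```shell
# Show the object's metadata, including x-amz-meta-* headers
s3cmd info s3://my-bucket/prefix/filename.txt
# Remove the user-defined header again
s3cmd modify --remove-header=x-amz-meta-my_key s3://my-bucket/prefix/filename.txt
```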
For more information, visit S3cmd usage and S3cmd FAQ.
You can use access rights and predefined profiles (Canned ACLs) to control access to buckets and objects in your IONOS S3 Object Storage for different user groups. Users can then use a suitable S3 client to access the objects whose authorization profiles they match.
By default, buckets and objects are private, i.e. only the bucket owner can access them.
The content of a bucket is always accessible (as a list) as soon as the bucket is public, even if the objects it contains are private and can therefore neither be displayed nor downloaded.
Prerequisites: When working with objects, make sure you are logged in to the Object Storage Management Console and have full control of the bucket. You should be able to set the bucket to ACP Writable.
1. Click the Properties tab of the object you wish to share.
2. To grant access in the Object Canned ACL tab, select one of the ACL profiles.
Example: Public Read means the item is available to everyone but cannot be modified.
3. Confirm your entries by clicking on Save.
The item is available according to your settings.
If you want to share your buckets and their content with users of the IONOS S3 Object Storage outside your own contract, you can use ACL sharing. All you need is the user's contract user id in the format contract number|User UUID.
1. Open the properties of the bucket that you would like to share by clicking on Properties in the respective tab.
2. In the Bucket Permissions tab, click + ADD NEW.
3. In the Grantee column, enter the contract user id of the user.
If you want to share the bucket or object with all users of a particular contract, you only have to enter the contract number as follows: contract number| (e.g. 1701441|).
Please note: IDs entered this way are not validated. An invalid ID has no effect.
4. Set the permission for the user by selecting the appropriate check box.
5. Confirm your entries by clicking on Save.
The item is shared according to your settings. If you ever wish to remove access, return to this view and click Delete.
URLs can only be generated for objects owned by the bucket owner.
If you want to share content with users who do not have access to an S3 client application, you can share an object by making it publicly available through a URL. This URL can be generated by the Object Storage Management Console and can optionally be provided as an SSL-encrypted URL (using HTTPS).
Objects shared this way are always visible to everyone as they are public - regardless of their access permissions. You can, however, limit the number of downloads and the period of availability.
1. Open the Properties tab of the object that you would like to share.
2. Open the Public URL Access tab.
3. Check the Enable Public URL Access box. Further input fields are opened:
Maximum Downloads: specifies the maximum number of downloads for this URL. Enter -1 for unlimited access.
Current Downloads: shows the current number of downloads. To update this field, reload the dialog in the Object Storage Management Console with a hard refresh.
Secure URL: (Optional) Activate the check box to generate an SSL-encrypted URL and increase the security of the file.
Expiration Date: (Required) Change the time at which the URL stops being valid. After the URL expires, or after the validity period or the number of downloads is changed, a new URL is generated and the old link is no longer accessible. An error message will appear instead.
4. To generate the URL, click Apply.
The URL is generated and displayed. You can now copy the URL and share it with others or send it by email using the MAIL TO button. For the email to be sent out, your browser needs to be configured so that it can open your default web-based email program.
The file can be accessed using a browser. No S3 application is required.
If you want to share content with users who do not have access to an S3 client application, you can configure a bucket as a website, which can be accessed using a standard web browser (instead of an S3 web client). This website needs to be static; it cannot deliver personalised content or run server-side scripts. This feature is useful for sharing a collection of objects.
A bucket-hosted website can be accessed as follows: http://<Bucketname>.<S3WebsiteEndpoint>/<IndexDocument> (e.g. http://mywebsite.s3-website-de-central.profitbricks.com/index.html).
Shared objects contained in this bucket are available as follows: http://<Bucketname>.<S3WebsiteEndpoint>/<Objectname>. <Objectname> may also contain folders.
A website configured with this feature can be accessed via HTTP and HTTPS (SSL).
The bucket mywebsite is used as a container for your website. “Static Website Hosting” has been enabled in its properties. It contains the start page (index.htm) and a page that is displayed on error (404.htm).
The bucket contains the img folder in which images are stored. Among others, it contains the file test0.png:
The file is available at: http://mywebsite.s3-website-de-central.profitbricks.com/img/test0.png
Files can be linked with each other through relative paths. If you want to link from index.htm at the bucket (root) level to test0.png, which is located in the img folder of the mywebsite bucket, you can refer to it with href="img/test0.png".
1. Create a bucket. Please mind the naming rules that apply! The bucket name is part of the URL of the static website.
2. Upload the website content to the bucket, which also includes the start page (usually index.htm) and an error page (usually 404.htm).
3. In the Properties of the bucket, open the Static Website Hosting tab.
4. Activate the Enable Website Hosting check box.
5. In the dialog box that appears, confirm that you want all objects to be public by clicking OK.
If you upload other objects to the bucket, please make sure that they are set to Public Read, otherwise, they are not accessible to others.
Index Document: Enter the start page that you uploaded (e.g. index.htm).
Error Document (Optional): Enter the error page that you uploaded (e.g. 404.htm).
6. Save your entries by clicking Save.
The website is now available at http://<Bucketname>.<S3WebsiteEndpoint>/<IndexDocument> (e.g. http://mywebsite.s3-website-de-central.profitbricks.com/index.html).
IONOS S3 Object Storage supports using Cyberduck, a Cloud Storage browser with SFTP, WebDAV, and S3 support for Windows, macOS, and Linux.
For the installation instructions, see Cyberduck.
Once inside Cyberduck, select Cyberduck > Preferences… from the menu.
Select Profiles to open the Connection Profiles page.
Select the IONOS Cloud Object Storage (Berlin), IONOS Cloud Object Storage (Frankfurt), or IONOS Cloud Object Storage (Logrono) connection profile from the list of available connection profiles, or use the search option to find it.
Close the Preferences window and restart Cyberduck to install the selected connection profiles.
Open Cyberduck and select File > Open Connection… You will see the connection dialog.
At the top, click the drop-down menu and select the IONOS Cloud Object Storage connection profile that corresponds to the data center you want to use (e.g. Berlin).
Enter key values in the Access Key and Secret Key fields.
To access the Object Storage keys:
Log in to your IONOS DCD, click Storage > S3 Key Management.
Choose "Generate Key" and confirm the action with OK. The object storage key will be generated automatically.
Click Connect.
-c FILE, --config=FILE - Config file name. Defaults to $HOME/.s3cfg.
-e, --encrypt - Encrypt files before uploading to S3.
--upload-id=UPLOAD_ID - UploadId for Multipart Upload, in case you want to continue an existing upload (equivalent to --continue-put) and there are multiple partial uploads. Use s3cmd multipart [URI] to see which UploadIds are associated with the given URI.
Postman is a free tool for conveniently working with APIs in a graphical interface. It is available for Windows, macOS, and Linux.
You can follow the installation instructions described on Postman.
In the Authorization tab for a request, select AWS Signature from the Type dropdown list. Specify where Postman should append your authorization data using the Add authorization data to drop-down menu.
If you select Request Headers, Postman populates the Headers tab with Authorization and X-Amz- prefixed fields.
If you select Request URL, Postman populates the Params tab with authentication details prefixed with X-Amz-.
Note: The parameters listed below contain confidential information. We recommend using variables to keep this data secure while working in a collaborative environment.
To get Access Key and Secret Key, log in to the DCD, click Storage > S3 Key Management.
Advanced fields are optional, but Postman will attempt to generate them automatically if necessary.
For AWS Region, leave the field blank as the region from the endpoint will be used.
For Service Name, enter s3. This is the name of the service that receives the requests.
For Session Token, leave the field blank. This is only required when temporary security credentials are used.
Setup completed. Now check the S3 API description to get the right endpoint to call.
Note: You need to use the correct endpoint URL for each region (see the list of available endpoints).
In order to log on to your IONOS S3 Object Storage by means of a GUI, you can use the Object Storage Management Console, which allows you to manage your objects and buckets.
When you log on to the IONOS S3 Object Storage using the DCD, the DCD manages authentication and authorization so that you can access the object storage with just one click.
Every user is the bucket owner of their own IONOS S3 Object Storage and has full access to its content.
It is not possible to use the Object Storage Management Console to access public buckets or content shared with you by users of other S3 systems. We recommend using suitable S3 clients not only for accessing this type of content but also for uploading very large files, as the size of individual files that can be uploaded to the IONOS S3 Object Storage is limited to 5 GB. The Object Storage Management Console can only be opened using the DCD and is available in English only.
You can access the IONOS S3 Object Storage with just one click on the corresponding item in the Menu Bar of the DCD. This opens the Object Storage Management Console, a graphical user interface with which you can manage your S3 objects.
The bucket overview of the Object Storage Management Console is displayed in a new window so that you can continue to work on your VDC in the DCD.
Contract owners and administrators can use this functionality to access content stored in the IONOS S3 Object Storage accounts of users who are no longer active members of their contracts.
Before you delete a user or all of their Object Storage Keys from your account, ensure that the content in their IONOS S3 Object Storage is accessible so that you can continue to use it or delete it by adjusting the access rights accordingly.
Content set to "private" that was not removed before the user or all of their Object Storage Keys were deleted is no longer accessible but will continue to be charged. In this case, contact the IONOS enterprise support team.
1. Open the User Manager. Go to Menu Bar > Management > Users & Groups.
2. Select the required user.
3. In the Object Storage Keys, click Manage.
You are now logged on as the bucket owner of the selected IONOS S3 Object Storage.
Prerequisites: Make sure you have the corresponding privilege to enable IONOS S3 Object Storage. Only contract owners and administrators can enable access.
1. Go to User Manager. Menu Bar > Management > Users & Groups.
2. Create a new Group or open an existing Group.
3. In the Privileges, activate the Use Object Storage check box.
4. In the Members, add users to the group that you wish to authorize for the use of the object storage.
The Object Storage Keys of each user are activated together with the authorization. All members of the authorized group can now access the IONOS S3 Object Storage using the corresponding button in the Menu Bar of the DCD.
This information refers to Veeam versions older than 11.0.1.1261 20220302. No action required for newer versions.
When using IONOS S3 Object Storage to offload or archive backup data, old versions of Veeam Backup and Replication use a file structure that differs significantly from that of network or block storage.
The hierarchy and granularity of the stored metadata also affect the database structure of the backend systems used by IONOS to provide IONOS S3 Object Storage.
This leads to increased performance requirements for the storage system and longer response times for queries from our customers. This can therefore also affect the recovery times when retrieving data from the S3 storage.
If your Veeam version is older than 11.0.1.1261 20220302, we will need to implement custom policies in order to optimize your new and existing S3 repositories. Please contact the IONOS Cloud Customer Support at support@cloud.ionos.com and provide the following information:
IONOS contract number and support PIN
Names of buckets used with Veeam
A maintenance time window, during which we can implement the policy. Please keep your time window within business hours; Monday to Friday 08:00 - 17:00 CET.
Caution: Your buckets will be unavailable for a short period of time within the specified time window. The duration of the adjustment depends on the amount of data and the number of saved objects. However, expect no more than 90 minutes of downtime.
The data will not be changed or viewed during maintenance. There is therefore no risk to the integrity of the contents of the bucket.
With the custom policies, we will also add a Bucket Lifecycle Policy to the Veeam bucket, which will automatically remove expired delete markers. This is done by us and can be reviewed by you. It can also be viewed using the API.
S3 Browser is a free, feature-rich Windows client for IONOS S3 Object Storage.
Download and install the S3 Browser.
Add a new account and select:
Display name: Enter a name for the connection.
Account type: Select S3 Compatible Storage from the drop-down list.
REST Endpoint: If you already have a bucket, select the endpoint URL from the list. Otherwise, you can select s3-eu-central-2.ionoscloud.com, which corresponds to the location in Berlin, Germany.
To get Access Key ID and Secret Access Key, log in to the DCD, click Storage > S3 Key Management.
Click Advanced S3-compatible storage settings in the lower-left corner of the form.
Storage settings:
Signature version: Select Signature V4 from the drop-down list.
Addressing model: Leave Path style.
Override storage regions: Paste the following text into the text area:
Region-specific endpoint: Insert the following text: s3-{region-code}.ionoscloud.com
Save the details.
Try creating your first bucket. The bucket name must be unique across the entire IONOS S3 Object Storage, which is why S3 Browser offers to add random text to the bucket name. You can, however, still come up with your own unique name.
Boto3 is the official AWS SDK for Python. It allows you to create, update, and configure IONOS S3 Object Storage objects from within your Python scripts.
Install the latest Boto3 release via pip: pip install boto3
There are several ways to provide credentials, e.g. passing them as parameters to the boto3.client() method, via environment variables, or with a generic credential file (~/.aws/credentials).
An example of passing credentials as parameters when creating a Session object:
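A minimal sketch of such a Session (the keys are placeholders; the endpoint shown is the Berlin endpoint mentioned elsewhere in this guide — substitute your region's endpoint):

```python
import boto3

# Placeholder credentials; obtain real keys via Storage > S3 Key Management
session = boto3.Session(
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# The endpoint URL determines which region the client talks to
client = session.client(
    "s3",
    endpoint_url="https://s3-eu-central-2.ionoscloud.com",
)
```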
Your Access and Secret keys can be obtained from the DCD: click Storage > S3 Key Management to get the Object Storage Keys.
NOTE: Your credentials are not tied to a specific region or bucket.
For information on the supported IONOS S3 Object Storage Service endpoints, see the list of available endpoints.
List buckets:
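For example (a sketch; credentials and endpoint are placeholders):

```python
import boto3

client = boto3.client(
    "s3",
    endpoint_url="https://s3-eu-central-2.ionoscloud.com",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",                         # placeholder
    aws_secret_access_key="SECRET_KEY",                     # placeholder
)

# Print the name of every bucket in the account
for bucket in client.list_buckets()["Buckets"]:
    print(bucket["Name"])
```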
Create bucket my-bucket at the region eu-central-1:
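A sketch (credentials and endpoint are placeholders):

```python
import boto3

client = boto3.client(
    "s3",
    endpoint_url="https://s3-eu-central-2.ionoscloud.com",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# The LocationConstraint selects the region the bucket is created in
client.create_bucket(
    Bucket="my-bucket",
    CreateBucketConfiguration={"LocationConstraint": "eu-central-1"},
)
```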
Upload filename.txt to the bucket my-bucket:
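A sketch (credentials and endpoint are placeholders):

```python
import boto3

client = boto3.client(
    "s3",
    endpoint_url="https://s3-eu-central-2.ionoscloud.com",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Upload the local file filename.txt under the same key
client.upload_file("filename.txt", "my-bucket", "filename.txt")
```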
Download the file filename.txt from the bucket my-bucket:
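A sketch (credentials and endpoint are placeholders):

```python
import boto3

client = boto3.client(
    "s3",
    endpoint_url="https://s3-eu-central-2.ionoscloud.com",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Download the object into a local file of the same name
client.download_file("my-bucket", "filename.txt", "filename.txt")
```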
List objects of the bucket my-bucket:
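A sketch (credentials and endpoint are placeholders):

```python
import boto3

client = boto3.client(
    "s3",
    endpoint_url="https://s3-eu-central-2.ionoscloud.com",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# List the keys and sizes of the objects in the bucket
response = client.list_objects_v2(Bucket="my-bucket")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```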
Copy filename.txt from the bucket my-source-bucket to the bucket my-dest-bucket and add the prefix uploaded/. Instead of the client() method, we use the resource() method here. It provides a higher level of abstraction than the low-level calls made by service clients.
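A sketch using the resource() method (credentials and endpoint are placeholders):

```python
import boto3

s3 = boto3.resource(
    "s3",
    endpoint_url="https://s3-eu-central-2.ionoscloud.com",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

copy_source = {"Bucket": "my-source-bucket", "Key": "filename.txt"}
# Copy within the object storage, adding the uploaded/ prefix to the key
s3.Bucket("my-dest-bucket").copy(copy_source, "uploaded/filename.txt")
```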
IONOS S3 Object Storage supports using Amazon's AWS Command Line Interface (AWS CLI) for Windows, macOS, and Linux.
For the installation instructions, see the AWS CLI documentation.
Run the following command in a terminal: aws configure.
AWS Access Key ID [None]: Insert the Access Key. It can be found in the Data Center Designer by selecting Storage > S3 Key Management.
AWS Secret Access Key [None]: Paste the Secret Key. It can be found in the Data Center Designer by selecting Storage > S3 Key Management.
Default region name [None]: Enter de.
Default output format [None]: Enter json.
For each command, be sure to include one of the endpoints in the --endpoint-url parameter:
For information on the supported IONOS S3 Object Storage Service endpoints, see the list of available endpoints.
There are two sets of commands:
s3: Offers high-level commands for managing S3 buckets and for moving, copying, and synchronizing objects.
s3api: Allows you to work with specific features such as ACL, CORS, and Versioning.
List buckets:
Option 1: Using s3 set of commands
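For example (the endpoint shown is the Berlin endpoint used elsewhere in this guide; substitute your region's endpoint):

```shell
aws s3 ls --endpoint-url https://s3-eu-central-2.ionoscloud.com
```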
Option 2: Using s3api set of commands
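For example (endpoint as above, a placeholder for your region):

```shell
aws s3api list-buckets --endpoint-url https://s3-eu-central-2.ionoscloud.com
```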
Create a bucket in the eu-central-2 region (Berlin, Germany):
Option 1: Using s3 set of commands
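A sketch (bucket name is a placeholder):

```shell
aws s3 mb s3://my-bucket --region eu-central-2 \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com
```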
Option 2: Using s3api set of commands
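A sketch (bucket name is a placeholder):

```shell
aws s3api create-bucket --bucket my-bucket \
  --create-bucket-configuration LocationConstraint=eu-central-2 \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com
```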
Create a bucket in the de region (Frankfurt, Germany) with Object Lock enabled:
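A sketch; the de-region endpoint below is a placeholder — take the actual URL from the list of available endpoints:

```shell
# <de-region-endpoint> is a placeholder for the Frankfurt endpoint
aws s3api create-bucket --bucket my-bucket \
  --create-bucket-configuration LocationConstraint=de \
  --object-lock-enabled-for-bucket \
  --endpoint-url https://<de-region-endpoint>
```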
Upload an object from the current directory to a bucket:
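A sketch (file and bucket names are placeholders; endpoint as elsewhere in this guide):

```shell
aws s3 cp filename.txt s3://my-bucket \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com
```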
Copy the object to the bucket:
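A sketch (source and destination names are placeholders):

```shell
aws s3 cp s3://my-source-bucket/filename.txt s3://my-dest-bucket/ \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com
```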
Copy the contents of the local directory my-dir to the bucket my-bucket:
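A sketch (directory and bucket names are placeholders):

```shell
aws s3 cp my-dir s3://my-bucket/ --recursive \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com
```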
Copy all objects from my-source-bucket to my-dest-bucket, excluding .zip files. The command does not support cross-region copying for IONOS S3 Object Storage:
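A sketch (bucket names are placeholders; both buckets must be in the same region):

```shell
aws s3 cp s3://my-source-bucket s3://my-dest-bucket --recursive \
  --exclude "*.zip" \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com
```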
Download all the objects from the my-bucket bucket to the local directory my-dir:
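A sketch (bucket and directory names are placeholders):

```shell
aws s3 cp s3://my-bucket my-dir --recursive \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com
```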
Sync the bucket my-bucket with the contents of the local directory my-dir:
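A sketch (only changed files are transferred; names are placeholders):

```shell
aws s3 sync my-dir s3://my-bucket \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com
```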
Get Cross-Origin Resource Sharing (CORS) configuration:
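A sketch (bucket name is a placeholder):

```shell
aws s3api get-bucket-cors --bucket my-bucket \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com
```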
Set up Cross-Origin Resource Sharing (CORS) configuration:
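A sketch that reads the rules from a local cors.json file (bucket name is a placeholder):

```shell
aws s3api put-bucket-cors --bucket my-bucket \
  --cors-configuration file://cors.json \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com
```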
cors.json:
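A possible cors.json (the allowed origin is a placeholder for your own domain):

```json
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://example.com"],
      "AllowedMethods": ["GET", "HEAD"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3000
    }
  ]
}
```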
Enable versioning for the bucket:
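A sketch (bucket name is a placeholder):

```shell
aws s3api put-bucket-versioning --bucket my-bucket \
  --versioning-configuration Status=Enabled \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com
```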
Get versioning state of the bucket:
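A sketch (bucket name is a placeholder):

```shell
aws s3api get-bucket-versioning --bucket my-bucket \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com
```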
Set up a lifecycle policy for a bucket (delete objects starting with "my/prefix/" that are older than 5 days):
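A sketch that reads the policy from a local delete-after-5-days.json file (bucket name is a placeholder):

```shell
aws s3api put-bucket-lifecycle-configuration --bucket my-bucket \
  --lifecycle-configuration file://delete-after-5-days.json \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com
```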
delete-after-5-days.json:
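A possible delete-after-5-days.json matching the rule described above:

```json
{
  "Rules": [
    {
      "ID": "delete-after-5-days",
      "Filter": { "Prefix": "my/prefix/" },
      "Status": "Enabled",
      "Expiration": { "Days": 5 }
    }
  ]
}
```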
Prerequisites: Make sure you are logged on to the IONOS S3 Object Storage using the Object Storage Management Console. Only bucket owners can create a folder.
1. Create a bucket, or open the bucket or folder to which objects should be added.
2. Click Upload File.
3. Click + Add files to select all files to be uploaded.
4. (Optional) Activate the Store encrypted check box to encrypt your files and increase the security of your data. Files are automatically decrypted during download.
5. To upload all files, click Start Upload; to upload individual files, click the Start button of the respective item.
6. The upload status is displayed:
(Optional) To stop the upload, click Cancel. Otherwise, close the dialog box.
The files are uploaded and displayed in the bucket to which they were added.
You can use the Object Storage Management Console to search for files in your object storage if the prefix or full name is known. For technical reasons, it is not possible to search for objects across buckets or folders.
Prerequisites: Make sure you are logged on to the IONOS S3 Object Storage using the Object Storage Management Console.
1. Open the bucket or folder you wish to search.
2. Click on Search by Prefix.
3. In the dialog box, enter the prefix or file name and click Ok.
Files matching your search criteria should be displayed.
Prerequisites: Make sure you have access to the required object. You must be logged in to the IONOS S3 Object Storage using the Object Storage Management Console.
1. Open the bucket containing the required object.
2. (Optional) If versioning is active, all available versions of an object can be viewed by clicking on Show Versions in the Objects tab.
3. Click on the item to download, or if an object has been shared through a public URL, open the URL and download the object from there.
If no other version of a file has been selected, the latest version will be downloaded.
When versioning is enabled for a bucket, versions are saved for each of its objects. When the user uploads an object with the same name more than once, to the same bucket or folder, all of its versions - current and previous - are stored.
Versioning is not activated by default. Objects that were uploaded to the object storage before versioning was activated are identified by the ID null. If versioning is deactivated, existing object versions are retained.
Versioning of objects increases object storage volume and will be charged accordingly.
Prerequisites: Make sure you have access to the required object. You must be logged in to the IONOS S3 Object Storage using the Object Storage Management Console.
In the Buckets tab, open Properties.
Open the Versioning tab.
Enable versioning: click Enable.
Disable versioning: click Suspend.
Versioning is activated or deactivated for the selected bucket.
To show the versions of an object, click Show Versions in the Objects tab. Object versions can be deleted and managed like normal objects.
If the user no longer wants to keep objects in the IONOS S3 Object Storage, these objects can be deleted. Deleted objects are not physically removed from the object storage, but receive a so-called "delete marker" and then have a size of 0 KB. These markers are deleted at an interval specified by the user or by the system.
There are two ways to delete objects from the IONOS S3 Object Storage using the Object Storage Management Console: manually and automatically.
Prerequisites: Make sure you are the bucket owner. You must be logged in to the IONOS S3 Object Storage using the Object Storage Management Console.
1. Open the bucket or folder containing the required objects.
2. To delete individual versions of an object, click Show Versions.
3. To delete one object or object version, click Delete at the end of the respective entry.
4. To delete several objects or object versions, activate their checkboxes.
5. To delete a folder and its contents, activate its checkbox.
Unlike buckets, folders do not have to be empty to be deleted.
6. Click Delete.
7. In the dialog box, confirm the action by clicking Ok.
The selected objects are deleted. Folders containing other objects are deleted with their entire contents without notice.
It is possible to quickly delete the contents of buckets without having to select individual objects or object versions. This is useful if the user wants to delete files, such as log files, regularly or automate the deletion of the contents of a bucket using the Object Storage Management Console. This requires the definition of rules and schedules.
Objects are deleted within several hours. Short-term deletion is not possible in an automated mode.
Prerequisites: Make sure you are the bucket owner. You must be logged in to the IONOS S3 Object Storage using the Object Storage Management Console.
1. Open Properties in the Buckets tab.
2. Open the Lifecycle Policy tab.
3. Click + Add new rule.
4. (Optional) In the Rule Name, enter a name that describes the rule (e. g. “delete all”).
5. In the Object Prefix, enter the complete path to the objects. Folders are separated by /.
Example: 2015/ affects all objects contained in the 2015 folder, including other subfolders.
Leave the field empty to delete all objects in the bucket.
6. Activate the Expire Objects checkbox.
Actions in further input fields
Choose which object versions to delete:
If versioning is activated for a bucket:
Current Version: The last or current version of an object.
Previous Version: All existing versions of an object with the exception of the current version.
If versioning is not activated: Select Current Version.
2. Define when the objects are to be deleted:
Current Version:
Fixed date: In the After Date field, select the date and time. Further entries have no influence on this setting.
Number of days after the creation date: Select the Use Creation Date/Time field below, then define the number of days in the Days After Creation Date field.
Number of days after the last access: Select the Use Last Access Time field below, then define the number of days in the Days After Last Access Date/Time field.
If several rules are created for different objects of a bucket, all rules must use the same date type (1., 2., or 3.).
Previous Version:
(Optional) To completely remove objects marked as deleted, choose Clean Up Expired Object Delete Markers.
For technical reasons, it is not possible to automatically delete the current version of an object and remove all its deleted previous versions at the same time. Deleted object versions are marked with a "delete marker" and are recognizable by a corresponding icon.
It is not possible to apply this clean-up if a rule has been defined for automatically deleting the current object versions. The checkbox cannot be activated in this case, and the message "You cannot enable clean up expired object delete markers if you enable Expiration" displays.
(Optional) To remove incomplete parts of a multi-part upload, choose Clean Up Incomplete Multipart Uploads.
Confirm entries by clicking Save.
The rule is saved and the selected objects are deleted at the defined time.
For more information, see AWS SDK documentation on .
For more examples, see , such as:
For more information on Boto3 and Python, see .
For more information, see .
For more information, see .
For more information, see .
Objects (files) of any format can be uploaded to and stored in the IONOS S3 Object Storage. Objects may not exceed 5 GB in size if uploaded using the Object Storage Management Console. Other applications or the API are not subject to this limit.