IONOS S3 Object Storage supports using Cyberduck, a Cloud Storage browser with SFTP, WebDAV, and S3 support for Windows, macOS, and Linux.
For the installation instructions, see Cyberduck.
Once inside Cyberduck, select Cyberduck > Preferences… from the menu.
Select Profiles to open the Connection Profiles page.
Select the IONOS Cloud Object Storage (Berlin), IONOS Cloud Object Storage (Frankfurt), or IONOS Cloud Object Storage (Logrono) connection profile from the list of available connection profiles, or use the search option to find it.
Close the Preferences window and restart Cyberduck to install the selected connection profiles.
Open Cyberduck and select File > Open Connection… You will see the connection dialog.
At the top, click the dropdown menu and select the IONOS Cloud Object Storage connection profile that corresponds to the data center you want to use, for example, IONOS Cloud Object Storage (Berlin).
Enter your key values in the Access Key and Secret Key fields.
To access the Object Storage keys:
Log in to the DCD, go to Menu > Storage > IONOS S3 Object Storage > Key management.
Choose "Generate a key" and confirm the action by clicking Generate. The object storage key will be generated automatically.
Click Connect.
-c FILE, --config=FILE – Config file name. Defaults to $HOME/.s3cfg.
-e, --encrypt – Encrypt files before uploading to S3.
--upload-id=UPLOAD_ID – UploadId for Multipart Upload, in case you want to continue an existing upload (equivalent to --continue-put) and there are multiple partial uploads. Use s3cmd multipart [URI] to see which UploadIds are associated with the given URI.
IONOS S3 Object Storage supports using Amazon's AWS Command Line Interface (AWS CLI) for Windows, macOS, and Linux.
For the installation instructions, see Installing or updating the latest version of the AWS CLI.
Run the following command in a terminal: aws configure.
AWS Access Key ID [None]: Insert the Access Key. To get it, log in to the DCD, go to Menu > Storage > IONOS S3 Object Storage > Key management.
AWS Secret Access Key [None]: Paste the Secret Key. It is available in the same Key management view of the DCD.
Default region name [None]: de.
Default output format [None]: json.
For each command, be sure to include one of the endpoints in the --endpoint-url parameter:
For information on the supported IONOS S3 Object Storage Service endpoints, see Endpoints.
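For example, listing buckets against the Berlin endpoint could look like this (a minimal sketch; the endpoint shown is an example, substitute the one for your region):
aws s3 ls --endpoint-url https://s3-eu-central-2.ionoscloud.com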
There are 2 sets of commands:
s3: Offers high-level commands for managing S3 buckets and for moving, copying, and synchronizing objects.
s3api: Allows you to work with specific features such as ACL, CORS, and Versioning.
List buckets:
Option 1: Using s3 set of commands
Option 2: Using s3api set of commands
Create a bucket in the eu-central-2 region (Berlin, Germany):
Option 1: Using s3 set of commands
Option 2: Using s3api set of commands
Create a bucket in the de region (Frankfurt, Germany) with Object Lock enabled:
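Minimal sketches for the listing and creation steps above, using the Berlin endpoint as an example and a placeholder for the Frankfurt endpoint (take the real values from the Endpoints list):
aws s3 ls --endpoint-url https://s3-eu-central-2.ionoscloud.com
aws s3api list-buckets --endpoint-url https://s3-eu-central-2.ionoscloud.com
aws s3 mb s3://my-bucket --endpoint-url https://s3-eu-central-2.ionoscloud.com
aws s3api create-bucket --bucket my-bucket --endpoint-url https://s3-eu-central-2.ionoscloud.com
# Object Lock can only be enabled at creation time, via s3api:
aws s3api create-bucket --bucket my-bucket --object-lock-enabled-for-bucket --endpoint-url https://<de-endpoint>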
Upload an object from the current directory to a bucket:
Copy the object to the bucket:
Copy the contents of the local directory my-dir to the bucket my-bucket:
For more information, see cp command reference.
Copy all objects from my-source-bucket to my-dest-bucket, excluding .zip files. The command doesn't support cross-region copying for IONOS S3 Object Storage:
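Possible shapes of the upload and copy commands above (bucket and file names, including my-other-bucket, are placeholders; append the --endpoint-url of your region to each command):
aws s3 cp filename.txt s3://my-bucket --endpoint-url https://s3-eu-central-2.ionoscloud.com
aws s3 cp s3://my-bucket/filename.txt s3://my-other-bucket --endpoint-url https://s3-eu-central-2.ionoscloud.com
aws s3 cp my-dir s3://my-bucket/my-dir --recursive --endpoint-url https://s3-eu-central-2.ionoscloud.com
aws s3 cp s3://my-source-bucket s3://my-dest-bucket --recursive --exclude "*.zip" --endpoint-url https://s3-eu-central-2.ionoscloud.com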
Download all the objects from the my-bucket bucket to the local directory my-dir:
Sync the bucket my-bucket with the contents of the local directory my-dir:
For more information, see sync command reference.
Get Cross-Origin Resource Sharing (CORS) configuration:
Set up Cross-Origin Resource Sharing (CORS) configuration:
cors.json:
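A minimal sketch of both commands and a possible cors.json (the allowed origin and methods are assumptions):
aws s3api get-bucket-cors --bucket my-bucket --endpoint-url https://s3-eu-central-2.ionoscloud.com
aws s3api put-bucket-cors --bucket my-bucket --cors-configuration file://cors.json --endpoint-url https://s3-eu-central-2.ionoscloud.com

cors.json could contain:
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://example.com"],
      "AllowedMethods": ["GET", "PUT"],
      "AllowedHeaders": ["*"]
    }
  ]
}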
For more information, see put-bucket-cors command reference.
Enable versioning for the bucket:
Get versioning state of the bucket:
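Both operations as minimal sketches:
aws s3api put-bucket-versioning --bucket my-bucket --versioning-configuration Status=Enabled --endpoint-url https://s3-eu-central-2.ionoscloud.com
aws s3api get-bucket-versioning --bucket my-bucket --endpoint-url https://s3-eu-central-2.ionoscloud.com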
Set up a lifecycle policy for a bucket (delete objects starting with "my/prefix/" that are older than 5 days):
delete-after-5-days.json:
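A possible shape of the rule file and the command to apply it (a sketch, not the original file contents):
{
  "Rules": [
    {
      "ID": "delete-after-5-days",
      "Filter": {"Prefix": "my/prefix/"},
      "Status": "Enabled",
      "Expiration": {"Days": 5}
    }
  ]
}
aws s3api put-bucket-lifecycle-configuration --bucket my-bucket --lifecycle-configuration file://delete-after-5-days.json --endpoint-url https://s3-eu-central-2.ionoscloud.com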
This document provides instructions to manage Object Lock using the command-line tool. Additionally, these tasks can also be performed using the web console and IONOS S3 Object Storage API.
Prerequisites:
Object Lock can only be enabled at the time of bucket creation. It cannot be activated for an existing bucket.
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported S3 Endpoints.
To create a bucket my-bucket in the de region (Frankfurt, Germany) with Object Lock:
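A minimal sketch (the Frankfurt endpoint is a placeholder; take the real value from the Endpoints list):
aws s3api create-bucket --bucket my-bucket --object-lock-enabled-for-bucket --endpoint-url https://<de-endpoint>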
An Object Lock with Governance mode on a bucket provides the bucket owner with better flexibility compared to the Compliance mode. It permits the removal of the Object Lock before the designated retention period has expired, allowing for subsequent replacements or deletions of the object.
To apply Governance mode configuration to the bucket my-bucket-with-object-lock with a default retention period equal to 15 days (or use the PutObjectLockConfiguration API Call):
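A sketch of the call (the endpoint is a placeholder):
aws s3api put-object-lock-configuration --bucket my-bucket-with-object-lock --object-lock-configuration '{"ObjectLockEnabled": "Enabled", "Rule": {"DefaultRetention": {"Mode": "GOVERNANCE", "Days": 15}}}' --endpoint-url https://<de-endpoint>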
On applying this configuration, the newly uploaded objects adhere to this retention setting.
An Object Lock with Compliance mode on a bucket ensures strict control by enforcing a stringent retention policy on objects. Once this mode is set, the retention period for an object cannot be shortened or modified. It provides immutable protection by preventing objects from being deleted or overwritten during their retention period.
This mode is particularly suited for meeting regulatory requirements as it guarantees that objects remain unaltered. It does not allow locks to be removed before the retention period concludes, ensuring consistent data protection.
To apply Compliance mode configuration to the bucket my-bucket-with-object-lock with a default retention period equal to 15 days:
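The same call with Compliance mode (a sketch):
aws s3api put-object-lock-configuration --bucket my-bucket-with-object-lock --object-lock-configuration '{"ObjectLockEnabled": "Enabled", "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 15}}}' --endpoint-url https://<de-endpoint>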
On applying this configuration, the newly uploaded objects adhere to this retention setting.
To retrieve Object Lock configuration for a bucket (the same could be achieved with the GetObjectLockConfiguration API Call):
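For example:
aws s3api get-object-lock-configuration --bucket my-bucket-with-object-lock --endpoint-url https://<de-endpoint>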
To upload my-object.pdf to the bucket my-bucket-with-object-lock:
This task could also be achieved by using the PutObject API call.
Note: The Object Lock retention is not specified, so the bucket's default retention configuration will be applied.
To upload my-object.pdf to the bucket my-bucket-with-object-lock and override the bucket's default Object Lock configuration:
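Sketches for both uploads (the retain-until date is an arbitrary example):
aws s3api put-object --bucket my-bucket-with-object-lock --key my-object.pdf --body my-object.pdf --endpoint-url https://<de-endpoint>
aws s3api put-object --bucket my-bucket-with-object-lock --key my-object.pdf --body my-object.pdf --object-lock-mode GOVERNANCE --object-lock-retain-until-date 2030-01-01T00:00:00Z --endpoint-url https://<de-endpoint>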
Note: You can overwrite objects protected with Object Lock. Since Versioning is enabled for the bucket, multiple versions of an object are kept. Deleting an object is also allowed, because this operation only adds a deletion marker to the object's version.
The permanent deletion of an object's version is prohibited; the system only creates a deletion marker for the object. This makes IONOS S3 Object Storage behave in most ways as though the object has been deleted. You can list the delete markers and other versions of an object by using the ListObjectVersions API call.
Note: Delete markers are not WORM-protected, regardless of any retention period or legal hold in place on the underlying object.
To apply LegalHold status to my-object.pdf in the bucket my-bucket-with-object-lock (use OFF to switch it off):
To check the Object Lock status for a particular version of an object, you can use either the GET Object or the HEAD Object commands. Both commands provide information about the retention mode, the designated 'Retain Until Date', and the legal hold status of the chosen object version.
When multiple users have permission to upload objects to your bucket, there is a risk of overly extended retention periods being set. This can lead to increased storage costs and data management challenges. While the system allows retention of up to 100 years, using the s3:object-lock-remaining-retention-days condition key to implement limitations can be particularly beneficial in multi-user environments.
To establish a 10-day maximum retention limit:
Save it to policy.json and apply it using the following command:
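A sketch of what policy.json could look like, denying uploads whose remaining retention exceeds 10 days, and the command to apply it:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket-with-object-lock/*",
      "Condition": {
        "NumericGreaterThan": {"s3:object-lock-remaining-retention-days": "10"}
      }
    }
  ]
}
aws s3api put-bucket-policy --bucket my-bucket-with-object-lock --policy file://policy.json --endpoint-url https://<de-endpoint>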
This document provides instructions to manage Versioning using the command-line tool. Additionally, these tasks can also be performed using the web console and IONOS S3 Object Storage API.
Prerequisites:
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported S3 Endpoints.
To get the versioning state of the bucket:
To enable versioning for the bucket:
To list object versions for the bucket:
To list object versions for the object my-object.txt:
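Minimal sketches for the four steps above (bucket and object names are placeholders; append the --endpoint-url of your region):
aws s3api get-bucket-versioning --bucket my-bucket --endpoint-url https://s3-eu-central-2.ionoscloud.com
aws s3api put-bucket-versioning --bucket my-bucket --versioning-configuration Status=Enabled --endpoint-url https://s3-eu-central-2.ionoscloud.com
aws s3api list-object-versions --bucket my-bucket --endpoint-url https://s3-eu-central-2.ionoscloud.com
aws s3api list-object-versions --bucket my-bucket --prefix my-object.txt --endpoint-url https://s3-eu-central-2.ionoscloud.com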
The IONOS S3 Object Storage is fully compatible with S3, which means that it can be used to manage buckets and objects with existing S3 clients once properly configured.
Below is a list of popular tools for working with IONOS S3 Object Storage, along with instructions for configuring them:
Postman – a tool for API development and testing. Its unique feature is a graphical interface for sending API requests to object storage endpoints, facilitating testing and development.
Cyberduck – an open-source GUI client supporting object storage among other protocols, presenting storage objects as local files for easy browsing, upload, and download.
S3 Browser – a freeware Windows client for object storage, providing an easy way to manage buckets and objects, including file permissions and access control lists, through a visual interface.
AWS CLI – a command-line interface offering a wide range of commands for comprehensive management of buckets and objects, ideal for scripting and automation.
S3cmd – a command-line tool offering direct, scriptable control over object storage buckets and objects. However, it lacks certain features like versioning and replication management.
rclone – a command-line program for syncing files between local and cloud storage, distinguishing itself with powerful synchronization capabilities, specifically useful when handling large data quantities and complex sync setups.
Boto3 – the official AWS SDK for Python, providing a high-level object-oriented API as well as low-level direct service access.
Veeam Backup and Replication – a comprehensive backup and disaster recovery solution for virtual, physical, and cloud-based workloads.
This document provides instructions to manage Logging using the command-line tool. Additionally, these tasks can also be performed using the web console and IONOS S3 Object Storage API.
Prerequisites:
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported S3 Endpoints.
Prerequisite: Grant permissions to the Log Delivery Group to the bucket where logs will be stored. We recommend using a separate bucket for logs, but it must be in the same S3 region. The Log Delivery Group must be able to write objects and read the ACL.
After that, you can enable Logging for a bucket:
Contents of logs-acl.json:
To retrieve bucket logging settings:
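A possible sequence, sketched with grant flags instead of the logs-acl.json file (my-logs-bucket and the logs/ prefix are assumptions):
aws s3api put-bucket-acl --bucket my-logs-bucket --grant-write URI=http://acs.amazonaws.com/groups/s3/LogDelivery --grant-read-acp URI=http://acs.amazonaws.com/groups/s3/LogDelivery --endpoint-url https://s3-eu-central-2.ionoscloud.com
aws s3api put-bucket-logging --bucket my-bucket --bucket-logging-status '{"LoggingEnabled": {"TargetBucket": "my-logs-bucket", "TargetPrefix": "logs/"}}' --endpoint-url https://s3-eu-central-2.ionoscloud.com
aws s3api get-bucket-logging --bucket my-bucket --endpoint-url https://s3-eu-central-2.ionoscloud.com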
S3cmd is a free command line tool and client for loading, retrieving, and managing data in S3. It has over 60 command line options, including multipart uploads, encryption, incremental backup, S3 sync, ACL and metadata management, bucket size, and bucket policies (Linux, macOS).
Install s3cmd for your distribution:
on CentOS/RHEL and Fedora: sudo dnf install s3cmd
on Ubuntu/Debian: sudo apt-get install s3cmd
on macOS using Homebrew: brew install s3cmd
You can also install the latest version from the s3cmd project website.
Run the following command in a terminal: s3cmd --configure. This will guide you through the interactive setup process:
Enter your Access Key and Secret Key. To get them, log in to the DCD, go to Menu > Storage > IONOS S3 Object Storage > Key management.
Note: Your credentials are not tied to a specific region or bucket.
Specify the region of your bucket for Default Region. Example: eu-central-2. Please refer to the list of supported S3 Endpoints.
Specify the endpoint for the selected region for S3 Endpoint from the same list. For example, s3-eu-central-2.ionoscloud.com.
Insert the same endpoint again for DNS-style bucket+hostname:port template.
Specify or skip the password (press Enter) for Encryption password.
Press Enter for Path to GPG program.
Press Enter for Use HTTPS protocol.
Press Enter for HTTP Proxy server name.
Press Enter for Test access with supplied credentials? [Y/n].
S3cmd will try to test the connection. If everything went well, save the configuration by typing y and pressing Enter. The configuration will be saved in the .s3cfg file.
If you need to work with more than one region or with different providers, you can set up multiple configurations. Use s3cmd --configure --config=ionos-fra to save the configuration for a specific location or provider. Run s3cmd with the -c option to override the default configuration file. For example, list the objects in the bucket:
You can also specify an endpoint directly on the command line to override the default setting. The Access Key and Secret key are region-independent, so s3cmd can take them from the default configuration:
Or even specify it with an Access Key and the Secret Key:
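For example (keys and bucket names are placeholders):
s3cmd -c ionos-fra ls s3://my-bucket
s3cmd --host=s3-eu-central-2.ionoscloud.com --host-bucket='%(bucket)s.s3-eu-central-2.ionoscloud.com' ls s3://my-bucket
s3cmd --access_key=ACCESS_KEY --secret_key=SECRET_KEY --host=s3-eu-central-2.ionoscloud.com --host-bucket='%(bucket)s.s3-eu-central-2.ionoscloud.com' ls s3://my-bucket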
List buckets (even buckets from other regions will be listed):
Create a bucket (the name must be unique within the whole IONOS S3 Object Storage). You need to explicitly use the --region option, otherwise the bucket will be created in the default de region:
Create the bucket my-bucket in the region de (Frankfurt, Germany):
Create the bucket my-bucket in the region eu-central-2 (Berlin, Germany):
Create the bucket my-bucket in the region eu-south-2 (Logrono, Spain):
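Sketches for listing and the three creation variants:
s3cmd ls
s3cmd mb s3://my-bucket --region=de
s3cmd mb s3://my-bucket --region=eu-central-2
s3cmd mb s3://my-bucket --region=eu-south-2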
List objects of the bucket my-bucket:
Upload filename.txt from the current directory to the bucket my-bucket:
Copy the contents of the local directory my-dir to the bucket my-bucket with the prefix my-dir:
Download all the objects from the my-bucket bucket to the local directory my-dir (the directory should exist):
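Possible shapes of these commands:
s3cmd ls s3://my-bucket
s3cmd put filename.txt s3://my-bucket
s3cmd put --recursive my-dir s3://my-bucket/my-dir/
s3cmd get --recursive s3://my-bucket my-dir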
Synchronize a directory to S3 (checks files using size and md5 checksum):
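For example:
s3cmd sync my-dir s3://my-bucket/my-dir/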
Get Cross-Origin Resource Sharing (CORS) configuration:
Set up Cross-Origin Resource Sharing (CORS) configuration:
cors_rules.xml:
Delete CORS from the bucket:
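A sketch of the three operations and a possible cors_rules.xml (the origin is an assumption; the current CORS rules are shown by s3cmd info):
s3cmd info s3://my-bucket
s3cmd setcors cors_rules.xml s3://my-bucket
s3cmd delcors s3://my-bucket

cors_rules.xml could contain:
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>https://example.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>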
Get information about buckets or objects:
s3cmd info s3://my-bucket
s3cmd info s3://my-bucket/my-object
Generate a public URL for download that will be available for 10 minutes (600 seconds):
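For example:
s3cmd signurl s3://my-bucket/filename.txt +600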
Set up a lifecycle policy for a bucket (delete objects older than 1 day):
delete-after-one-day.xml:
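A sketch of the rule file and the command to apply it (the XML shape is an assumption):
<LifecycleConfiguration>
  <Rule>
    <ID>delete-after-one-day</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <Expiration>
      <Days>1</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
s3cmd setlifecycle delete-after-one-day.xml s3://my-bucket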
Encrypt and upload files. This option allows you to encrypt files before uploading. To use it, run s3cmd --configure and fill out the path to the GPG utility and the encryption password. There is no need to use special parameters to decrypt the file on download with the get command, as this is done automatically using the data from the configuration file.
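For example:
s3cmd put -e filename.txt s3://my-bucket
# the downloaded file is decrypted automatically using the configured GPG settings
s3cmd get s3://my-bucket/filename.txt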
Add or modify user-defined metadata. Use headers starting with x-amz-meta- and store data in a set of key-value pairs. The user-defined metadata is limited to 2 KB in size; the size is measured as the sum of the number of bytes in the UTF-8 encoding of each key and value.
s3cmd modify --add-header x-amz-meta-my_key:my_value s3://my-bucket/prefix/filename.txt
Check the changes:
Delete metadata:
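Sketches for checking and deleting the metadata set above:
s3cmd info s3://my-bucket/prefix/filename.txt
s3cmd modify --remove-header x-amz-meta-my_key s3://my-bucket/prefix/filename.txt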
This document provides instructions to manage access to objects using the AWS CLI command-line tool. Additionally, these tasks can also be performed using the web console and IONOS S3 Object Storage API.
Prerequisites:
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported S3 Endpoints for object upload.
Use the following keys to define access permissions:
--grant-read: Grants read-only access.
--grant-write: Grants write-only access.
--grant-read-acp: Grants permission to read the Access Control List.
--grant-write-acp: Grants permission to modify the Access Control List.
--grant-full-control: Grants full access, encompassing the permissions listed above (read, write, read ACL, and write ACL).
Use --key to specify the object for granting access:
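For example, granting read access on a single object to a grantee identified by canonical user ID (the ID, names, and endpoint are placeholders):
aws s3api put-object-acl --bucket my-bucket --key my-object.pdf --grant-read id=CANONICAL_USER_ID --endpoint-url https://s3-eu-central-2.ionoscloud.com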
Use the following values for the --acl key:
private – removes public access.
public-read – allows public read-only access.
public-read-write – allows public read/write access.
authenticated-read – allows read-only access to all authenticated users of IONOS S3 Object Storage (including ones outside of your contract).
To allow public read-only access to the object:
To remove public access to the object:
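Both calls as minimal sketches:
aws s3api put-object-acl --bucket my-bucket --key my-object.pdf --acl public-read --endpoint-url https://s3-eu-central-2.ionoscloud.com
aws s3api put-object-acl --bucket my-bucket --key my-object.pdf --acl private --endpoint-url https://s3-eu-central-2.ionoscloud.com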
Postman is a free tool for conveniently working with APIs in a graphical interface. It is available for Windows, macOS, and Linux.
You can follow the installation instructions described on the Postman website.
In the Authorization tab for a request, select AWS Signature from the Type dropdown list. Specify where Postman should append your authorization data using the Add authorization data to drop-down menu.
If you select Request Headers, Postman populates the Headers tab with Authorization and X-Amz- prefixed fields.
If you select the Request URL, Postman populates the Params tab with authentication details prefixed with X-Amz-.
Note: The parameters listed below contain confidential information. We recommend using variables to keep this data secure while working in a collaborative environment.
Advanced fields are optional, but Postman will attempt to generate them automatically if necessary.
For AWS Region, leave the field blank as the region from the endpoint will be used.
For Service Name, enter s3
. The name of the service that receives the requests.
For Session Token, leave the field blank. This is only required when temporary security credentials are used.
This document provides instructions to manage Bucket Policy using the command-line tool. Additionally, these tasks can also be performed using the web console and IONOS S3 Object Storage API.
Prerequisites:
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported S3 Endpoints.
Create a file policy.json with the JSON policy. For more information on policy syntax, see the IONOS S3 Object Storage documentation.
To apply a bucket policy to a bucket:
To save a bucket policy to file:
To delete the bucket policy:
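The three operations as minimal sketches (bucket name and endpoint are placeholders):
aws s3api put-bucket-policy --bucket my-bucket --policy file://policy.json --endpoint-url https://s3-eu-central-2.ionoscloud.com
aws s3api get-bucket-policy --bucket my-bucket --query Policy --output text --endpoint-url https://s3-eu-central-2.ionoscloud.com > policy.json
aws s3api delete-bucket-policy --bucket my-bucket --endpoint-url https://s3-eu-central-2.ionoscloud.com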
This document provides instructions to manage access to buckets using the AWS CLI command-line tool. Additionally, these tasks can also be performed using the web console and IONOS S3 Object Storage API.
Prerequisites:
Set up the AWS CLI by following the installation instructions.
Make sure to consider the supported S3 Endpoints.
Use the following keys to define access permissions:
--grant-read: Grants read-only access.
--grant-write: Grants write-only access.
--grant-read-acp: Grants permission to read the Access Control List.
--grant-write-acp: Grants permission to modify the Access Control List.
--grant-full-control: Grants full access, encompassing the permissions listed above (read, write, read ACL, and write ACL).
Note: Granting access to a bucket for another IONOS user does not make the bucket appear in the user's S3 web console due to the S3 protocol's architecture. To access the bucket, the user must utilize other S3 tools, as the granted access does not translate to interface visibility.
To grant full control of my-bucket to a user with a specific Canonical user ID:
Separate grants with a comma if you want to specify multiple IDs. To grant full control of my-bucket to multiple users using their Canonical user IDs:
To grant full control of my-bucket by using an email address instead of the Canonical user ID:
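Possible shapes of these grants (IDs and the email address are placeholders):
aws s3api put-bucket-acl --bucket my-bucket --grant-full-control id=CANONICAL_USER_ID --endpoint-url https://s3-eu-central-2.ionoscloud.com
aws s3api put-bucket-acl --bucket my-bucket --grant-full-control id=CANONICAL_USER_ID_1,id=CANONICAL_USER_ID_2 --endpoint-url https://s3-eu-central-2.ionoscloud.com
aws s3api put-bucket-acl --bucket my-bucket --grant-full-control emailaddress=user@example.com --endpoint-url https://s3-eu-central-2.ionoscloud.com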
Retrieve the ACL of a bucket and save it to the file acl.json:
Edit the file, for example, to remove or add some grants, and apply the updated ACL to the bucket:
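For example:
aws s3api get-bucket-acl --bucket my-bucket --endpoint-url https://s3-eu-central-2.ionoscloud.com > acl.json
aws s3api put-bucket-acl --bucket my-bucket --access-control-policy file://acl.json --endpoint-url https://s3-eu-central-2.ionoscloud.com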
Use the following values for the --acl key:
private – removes public access.
public-read – allows public read-only access.
public-read-write – allows public read/write access.
authenticated-read – allows read-only access to all authenticated users of IONOS S3 Object Storage (including ones outside of your contract).
To allow public read-only access to the bucket:
To remove public access to the bucket:
To set WRITE and READ_ACP permissions for the Log Delivery Group, which is required before enabling the Logging feature for a bucket:
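Sketches for the canned ACLs and the Log Delivery Group grant:
aws s3api put-bucket-acl --bucket my-bucket --acl public-read --endpoint-url https://s3-eu-central-2.ionoscloud.com
aws s3api put-bucket-acl --bucket my-bucket --acl private --endpoint-url https://s3-eu-central-2.ionoscloud.com
aws s3api put-bucket-acl --bucket my-bucket --grant-write URI=http://acs.amazonaws.com/groups/s3/LogDelivery --grant-read-acp URI=http://acs.amazonaws.com/groups/s3/LogDelivery --endpoint-url https://s3-eu-central-2.ionoscloud.com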
Please refer to the list of supported S3 Endpoints for the --host option. You can skip this option if you are only using the region from the configuration file.
Copy all objects from my-source-bucket to my-dest-bucket, excluding .zip files (or use mv to move objects). The command doesn't support cross-region copying for IONOS S3 Object Storage; use a tool that supports it, such as rclone, for cross-region copying:
To get the Access Key and Secret Key, log in to the DCD, go to Menu > Storage > IONOS S3 Object Storage > Key management.
Setup completed. Now check the list of supported S3 Endpoints to get the right endpoint to call.
Note: You need to use the correct endpoint URL for each region (see the list of supported S3 Endpoints).
Boto3 is the official AWS SDK for Python. It allows you to create, update, and configure IONOS S3 Object Storage objects from within your Python scripts.
Install the latest Boto3 release via pip: pip install boto3
There are several ways to provide credentials, e.g., passing credentials as parameters to the boto3.client() method, via environment variables, or with a generic credential file (~/.aws/credentials).
An example of passing credentials as parameters when creating a Session object:
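A minimal sketch, assuming the Berlin endpoint and region (substitute your own values; the keys come from the DCD as described below):

import boto3

session = boto3.session.Session(
    aws_access_key_id="ACCESS_KEY",        # placeholder
    aws_secret_access_key="SECRET_KEY",    # placeholder
)
client = session.client(
    "s3",
    endpoint_url="https://s3-eu-central-2.ionoscloud.com",
    region_name="eu-central-2",
)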
To get the Access Key and Secret Key, log in to the DCD, go to Menu > Storage > IONOS S3 Object Storage > Key management.
NOTE: Your credentials are not tied to a specific region or bucket.
For information on the supported IONOS S3 Object Storage Service endpoints, see S3 Endpoints.
List buckets:
Create the bucket my-bucket in the region eu-central-2:
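Sketches for both steps, using the client created above:

response = client.list_buckets()
for bucket in response["Buckets"]:
    print(bucket["Name"])

client.create_bucket(
    Bucket="my-bucket",
    CreateBucketConfiguration={"LocationConstraint": "eu-central-2"},
)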
Upload filename.txt to the bucket my-bucket:
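For example:

client.upload_file("filename.txt", "my-bucket", "filename.txt")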
For more information, see AWS SDK documentation on Uploading files.
Download the file filename.txt from my-bucket:
List objects of the bucket my-bucket:
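Sketches for the download and listing steps:

client.download_file("my-bucket", "filename.txt", "filename.txt")

response = client.list_objects_v2(Bucket="my-bucket")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])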
Copy filename.txt from the bucket my-source-bucket to the bucket my-dest-bucket and add the prefix uploaded/. Instead of the client() method, we use the resource() method here. It provides a higher level of abstraction than the low-level calls made by service clients.
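A sketch using the resource() API (session as created earlier):

s3 = session.resource(
    "s3",
    endpoint_url="https://s3-eu-central-2.ionoscloud.com",
    region_name="eu-central-2",
)
copy_source = {"Bucket": "my-source-bucket", "Key": "filename.txt"}
s3.Object("my-dest-bucket", "uploaded/filename.txt").copy(copy_source)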
For more examples, see Boto3 documentation, such as:
For more information on Boto3 and Python, see Realpython.com – Python, Boto3, and AWS S3: Demystified.
Rclone is a command line tool for managing files in the cloud. It is available for Windows, macOS, and Linux. Rclone also has a built-in HTTP server that can be used to remotely control rclone using its API and a web GUI (graphical user interface).
rclone helps:
backing up (and encrypting) files to cloud storage
restoring (and decrypting) files from cloud storage
mirroring cloud data to other cloud services or locally
transferring data to the cloud or between cloud storage providers
mounting multiple encrypted, cached, or diverse cloud storages in the form of a disk
analyzing data stored in cloud storage using lsf, lsjson, size, and ncdu
Download the latest version of rclone from rclone.org. The official Ubuntu, Debian, Fedora, Brew, and Chocolatey repositories include rclone.
You can find the configuration example here.
Configurations created with the rclone config command are called remotes. If you already have or plan to use buckets in different IONOS S3 Object Storage regions, you will need to set up a separate remote for each region you use.
Please refer to the list of commands at the rclone website.
List remotes:
List buckets of "ionos1" remote:
Create the bucket my-bucket at the remote ionos1:
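The first three operations as minimal sketches (ionos1 is the remote name from your configuration):
rclone listremotes
rclone lsd ionos1:
rclone mkdir ionos1:my-bucket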
Upload filename.txt from the current directory to the bucket my-bucket:
Copy the contents of the local directory my-dir to the bucket my-bucket:
Copy all objects with the prefix my-dir from the bucket my-source-bucket to my-dest-bucket:
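Possible shapes of the copy commands above:
rclone copy filename.txt ionos1:my-bucket
rclone copy my-dir ionos1:my-bucket
rclone copy ionos1:my-source-bucket/my-dir ionos1:my-dest-bucket/my-dir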
The buckets can be located in different regions and even at different providers. Unless the buckets are located within the same region, the data is not copied directly from the source to the destination. For cross-regional copying, the data is first downloaded from the source bucket and then uploaded to the destination.
Download all the objects from the my-bucket bucket to the local directory my-dir:
Sync the bucket my-bucket with the contents of the local directory my-dir. The destination is updated to match the source, including deleting files if necessary:
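For example:
rclone copy ionos1:my-bucket my-dir
rclone sync my-dir ionos1:my-bucket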
Get the total size and number of objects in remote:path:
Check if the files in the local directory and destination match:
Produce an md5sum file for all the objects in the path:
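Sketches for these utilities:
rclone size ionos1:my-bucket
rclone check my-dir ionos1:my-bucket
rclone md5sum ionos1:my-bucket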
S3 Browser is a free, feature-rich Windows client for IONOS S3 Object Storage.
Download and install the S3 Browser.
Add a new account and select:
Display name: Enter a name for the connection.
Account type: Select S3 Compatible Storage from the drop-down list.
REST Endpoint: If you already have a bucket, select the endpoint URL from the list. Otherwise, you can select s3-eu-central-2.ionoscloud.com, which corresponds to the location in Berlin, Germany.
To get the Access Key and Secret Key, log in to the DCD, go to Menu > Storage > IONOS S3 Object Storage > Key management.
Click Advanced S3-compatible storage settings in the lower-left corner of the form.
Storage settings:
Signature version: Select Signature V4 from the drop-down list.
Addressing model: Leave Path style.
Override storage regions: Paste the following text into the text area:
Region-specific endpoint: Insert the following text: s3-{region-code}.ionoscloud.com
Save the details.
Try creating your first bucket. The bucket name must be unique across the entire IONOS S3 Object Storage; that is why S3 Browser offers to add random text to the bucket name. You can still try to come up with your own unique name.
This information refers to Veeam versions older than 11.0.1.1261 20220302. No action is required for newer versions.
When using IONOS S3 Object Storage to offload or archive backup data, old versions of Veeam Backup and Replication use a file structure that differs significantly from the one used for network or block storage.
The hierarchy and granularity of the stored metadata also affect the database structure of the backend systems used by IONOS to provide IONOS S3 Object Storage.
This leads to increased performance requirements for the storage system and longer response times for queries from our customers. This can therefore also affect the recovery times when retrieving data from the S3 storage.
We will need to implement custom policies if your Veeam version is older than 11.0.1.1261 20220302 to optimize your new and existing S3 repositories. Please contact IONOS Cloud Customer Support at support@cloud.ionos.com and provide the following information:
IONOS contract number and support PIN
Names of buckets used with Veeam
A maintenance time window, during which we can implement the policy. Please keep your time window within business hours: Monday to Friday, 08:00 to 17:00 CET.
Caution: Your buckets will be unavailable for a short period within the specified time window. The duration of the adjustment depends on the amount of data and the number of saved objects. However, expect no more than 90 minutes of downtime.
The data will not be changed or viewed during maintenance. There is therefore no risk to the integrity of the contents of the bucket.
With the Custom Policies, we will also add a Bucket Lifecycle Policy to the Veeam bucket, which automatically removes expired deletion markers. This change is made by us and can be reviewed by you; it can also be viewed using the API.