S3cmd is a free command-line tool and client for uploading, retrieving, and managing data in S3. It offers over 60 command-line options, including multipart uploads, encryption, incremental backup, S3 sync, ACL and metadata management, bucket size, and bucket policies. It is available for Linux and macOS.
Install s3cmd for your distribution:
On CentOS/RHEL and Fedora: `sudo dnf install s3cmd`
On Ubuntu/Debian: `sudo apt-get install s3cmd`
On macOS using Brew: `brew install s3cmd`
You can also install the latest version from SourceForge.
Run the following command in a terminal: `s3cmd --configure`. This will guide you through the interactive setup process:
Enter your Access Key and Secret Key. To get them, log in to the DCD and click Storage > S3 Key Management.
Note: Your credentials are not tied to a specific region or bucket.
Specify the region of your bucket for `Default Region`. Example: `eu-central-2`. Please refer to the list of available regions.
Specify the endpoint for the selected region for `S3 Endpoint` from the same list. For example, `s3-eu-central-2.ionoscloud.com`.
Insert the same endpoint again for `DNS-style bucket+hostname:port template`.
Specify or skip the password (press Enter) for `Encryption password`.
Press Enter for `Path to GPG program`.
Press Enter for `Use HTTPS protocol`.
Press Enter for `HTTP Proxy server name`.
Press Enter for `Test access with supplied credentials? [Y/n]`.
S3cmd will try to test the connection. If everything went well, save the configuration by typing `y` and pressing Enter. The configuration will be saved in the `.s3cfg` file.
If you need to work with more than one region or with different providers, you can set up multiple configurations. Use `s3cmd --configure --config=ionos-fra` to save the configuration for a specific location or provider. Run s3cmd with the `-c` option to override the default configuration file. For example, list the objects in the bucket:
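A sketch of such a call, assuming the configuration was saved to a file named `ionos-fra` and the bucket name is illustrative:

```shell
# Use the ionos-fra configuration file instead of the default ~/.s3cfg
s3cmd -c ionos-fra ls s3://my-bucket
```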
You can also specify an endpoint directly on the command line to override the default setting. The Access Key and Secret key are region-independent, so s3cmd can take them from the default configuration:
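For example (the Berlin endpoint and bucket name are illustrative; `--host-bucket` supplies the matching DNS-style template):

```shell
# Override the endpoint from the default configuration
s3cmd --host=s3-eu-central-2.ionoscloud.com \
      --host-bucket='%(bucket)s.s3-eu-central-2.ionoscloud.com' \
      ls s3://my-bucket
```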
Or even specify it with an Access Key and the Secret Key:
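A sketch with explicit credentials (the key values are placeholders; take the real ones from the DCD under Storage > S3 Key Management):

```shell
s3cmd --host=s3-eu-central-2.ionoscloud.com \
      --host-bucket='%(bucket)s.s3-eu-central-2.ionoscloud.com' \
      --access_key=ACCESS_KEY --secret_key=SECRET_KEY \
      ls s3://my-bucket
```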
Please refer to the list of available endpoints for the `--host` option. You can skip this option if you are only using the region from the configuration file.
List buckets (even buckets from other regions will be listed):
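```shell
s3cmd ls
```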
Create a bucket (the name must be unique across the whole IONOS S3 Object Storage). You need to explicitly use the `--region` option; otherwise, the bucket will be created in the default `de` region:
Create the bucket `my-bucket` in the region `de` (Frankfurt, Germany):
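```shell
s3cmd mb --region=de s3://my-bucket
```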
Create the bucket `my-bucket` in the region `eu-central-2` (Berlin, Germany):
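```shell
s3cmd mb --region=eu-central-2 s3://my-bucket
```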
Create the bucket `my-bucket` in the region `eu-south-2` (Logrono, Spain):
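```shell
s3cmd mb --region=eu-south-2 s3://my-bucket
```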
List objects of the bucket `my-bucket`:
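```shell
s3cmd ls s3://my-bucket
```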
Upload `filename.txt` from the current directory to the bucket `my-bucket`:
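```shell
s3cmd put filename.txt s3://my-bucket
```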
Copy the contents of the local directory `my-dir` to the bucket `my-bucket` with the prefix `my-dir`:
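```shell
# Uploading the directory itself prefixes the object keys with my-dir/
s3cmd put --recursive my-dir s3://my-bucket
```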
Copy all objects from `my-source-bucket` to `my-dest-bucket`, excluding .zip files (or use `mv` to move objects). The command doesn’t support cross-region copying for IONOS S3 Object Storage; use the rclone utility for cross-region copying:
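```shell
s3cmd cp --recursive --exclude='*.zip' s3://my-source-bucket/ s3://my-dest-bucket/
```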
Download all the objects from the `my-bucket` bucket to the local directory `my-dir` (the directory must exist):
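```shell
s3cmd get --recursive s3://my-bucket my-dir
```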
Synchronize a directory to S3 (checks files using size and md5 checksum):
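A sketch, syncing the local directory `my-dir` into the bucket under the same prefix:

```shell
s3cmd sync my-dir/ s3://my-bucket/my-dir/
```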
Get Cross-Origin Resource Sharing (CORS) configuration:
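s3cmd has no dedicated get-CORS command; recent versions print the CORS configuration as part of the `info` output (a sketch, behavior may vary by s3cmd version):

```shell
s3cmd info s3://my-bucket
```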
Set up Cross-Origin Resource Sharing (CORS) configuration:
cors_rules.xml:
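The file contents were not preserved on this page; a minimal illustrative `cors_rules.xml` (the origin, methods, and age are placeholders):

```xml
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>https://example.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>
```

Apply it with `s3cmd setcors cors_rules.xml s3://my-bucket`.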
Delete CORS from the bucket:
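```shell
s3cmd delcors s3://my-bucket
```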
Get information about buckets or objects:
`s3cmd info s3://my-bucket`
`s3cmd info s3://my-bucket/my-object`
Generate a public URL for download that will be available for 10 minutes (600 seconds):
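```shell
# +600 means the URL expires 600 seconds from now
s3cmd signurl s3://my-bucket/filename.txt +600
```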
Set up a lifecycle policy for a bucket (delete objects older than 1 day):
delete-after-one-day.xml:
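The XML body was not preserved here; a minimal sketch of `delete-after-one-day.xml` (the rule ID is illustrative, and an empty prefix applies the rule to all objects):

```xml
<LifecycleConfiguration>
  <Rule>
    <ID>delete-after-one-day</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <Expiration>
      <Days>1</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
```

Apply it with `s3cmd setlifecycle delete-after-one-day.xml s3://my-bucket`.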
Encrypt and upload files. This option allows you to encrypt files before uploading, but in order to use it, you have to run `s3cmd --configure` and fill in the path to the GPG utility and the encryption password. There is no need to use special parameters to decrypt the file on download with the `get` command, as this is done automatically using the data from the configuration file.
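```shell
# -e (--encrypt) encrypts the file with GPG before uploading
s3cmd put -e filename.txt s3://my-bucket
```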
Add or modify user-defined metadata. Use headers starting with `x-amz-meta-` and store data in the set of key-value pairs. The user-defined metadata is limited to 2 KB in size. The size of the user-defined metadata is measured by taking the sum of the number of bytes in the UTF-8 encoding of each key and value.
`s3cmd modify --add-header x-amz-meta-my_key:my_value s3://my-bucket/prefix/filename.txt`
Check the changes:
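```shell
# info prints the object's metadata, including x-amz-meta-* headers
s3cmd info s3://my-bucket/prefix/filename.txt
```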
Delete metadata:
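```shell
s3cmd modify --remove-header=x-amz-meta-my_key s3://my-bucket/prefix/filename.txt
```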
For more information, visit S3cmd usage and S3cmd FAQ.
Rclone is a command line tool for managing files in the cloud. It is available for Windows, macOS, and Linux. Rclone also has a built-in HTTP server that can be used to remotely control rclone using its API and a web GUI (graphical user interface).
rclone helps with:
backing up (and encrypting) files to cloud storage
restoring (and decrypting) files from cloud storage
mirroring cloud data to other cloud services or locally
transferring data to the cloud or between cloud storage providers
mounting multiple encrypted, cached, or diverse cloud storage systems as a disk
analyzing data stored in cloud storage using lsf, ljson, size, and ncdu
Download the latest version of rclone from rclone.org. The official Ubuntu, Debian, Fedora, Brew, and Chocolatey repositories include rclone.
You can find the configuration example here.
Configurations created with the `rclone config` command are called remotes. If you already have or plan to use buckets in different IONOS S3 Object Storage regions, you will need to set up a separate remote for each region you use.
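A sketch of the resulting `rclone.conf` entry for a Berlin-region remote (the remote name and keys are placeholders; recent rclone versions ship an IONOS provider preset, otherwise use `provider = Other`):

```
[ionos1]
type = s3
provider = IONOS
access_key_id = ACCESS_KEY
secret_access_key = SECRET_KEY
region = eu-central-2
endpoint = s3-eu-central-2.ionoscloud.com
```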
Please refer to the list of commands at the rclone website.
List remotes:
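```shell
rclone listremotes
```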
List buckets of "ionos1" remote:
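```shell
# lsd lists top-level "directories", i.e. buckets, of the remote
rclone lsd ionos1:
```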
Create the bucket `my-bucket` at the remote `ionos1`:
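```shell
rclone mkdir ionos1:my-bucket
```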
Upload `filename.txt` from the current directory to the bucket `my-bucket`:
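```shell
rclone copy filename.txt ionos1:my-bucket
```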
Copy the contents of the local directory `my-dir` to the bucket `my-bucket`:
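```shell
rclone copy my-dir ionos1:my-bucket
```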
Copy all objects with the prefix `my-dir` from the bucket `my-source-bucket` to `my-dest-bucket`:
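A sketch; both buckets are reachable through the same remote here, but the destination could equally be a second remote (for example, one configured for another region or provider):

```shell
rclone copy ionos1:my-source-bucket/my-dir ionos1:my-dest-bucket/my-dir
```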
The buckets can be located in different regions and even at different providers. Unless the buckets are in the same region, the data is not copied directly from the source to the destination: for cross-region copying, the data is first downloaded from the source bucket and then uploaded to the destination.
Download all the objects from the `my-bucket` bucket to the local directory `my-dir`:
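```shell
rclone copy ionos1:my-bucket my-dir
```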
Sync the bucket `my-bucket` with the contents of the local directory `my-dir`. The destination is updated to match the source, including deleting files if necessary:
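```shell
# Caution: sync deletes destination files that are absent from the source
rclone sync my-dir ionos1:my-bucket
```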
Get the total size and number of objects in remote:path:
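```shell
rclone size ionos1:my-bucket
```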
Check if the files in the local directory and destination match:
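```shell
rclone check my-dir ionos1:my-bucket
```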
Produce an md5sum file for all the objects in the path:
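```shell
rclone md5sum ionos1:my-bucket
```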
The IONOS S3 Object Storage is fully compatible with S3, which means that it can be used to manage buckets and objects with existing S3 clients once properly configured.
Below is a list of popular tools for working with IONOS S3 Object Storage, along with instructions for configuring them:
Postman – a tool for API development and testing. Its unique feature is a graphical interface for sending API requests to object storage endpoints, facilitating testing and development.
Cyberduck – an open-source GUI client supporting object storage among other protocols, presenting storage objects as local files for easy browsing, upload, and download.
S3 Browser – a freeware Windows client for object storage, providing an easy way to manage buckets and objects, including file permissions and access control lists, through a visual interface.
AWS CLI – Amazon's command-line interface, unique in offering a wide range of commands for comprehensive management of buckets and objects, ideal for scripting and automation.
S3cmd – a command-line tool offering direct, scriptable control over object storage buckets and objects. However, it lacks certain features like versioning and replication management.
rclone – a command-line program for syncing files between local and cloud storage, distinguishing itself with powerful synchronization capabilities, specifically useful when handling large data quantities and complex sync setups.
Boto3 – the official AWS SDK for Python, providing a high-level object-oriented API as well as low-level direct service access.
IONOS S3 Object Storage supports using Cyberduck, a Cloud Storage browser with SFTP, WebDAV, and S3 support for Windows, macOS, and Linux.
For the installation instructions, see Cyberduck.
Once inside Cyberduck, select Cyberduck > Preferences… from the menu.
Select Profiles to open the Connection Profiles page.
Select the IONOS Cloud Object Storage (Berlin), IONOS Cloud Object Storage (Frankfurt), or IONOS Cloud Object Storage (Logrono) connection profile from the list of available connection profiles, or use the search option to find it.
Close the Preferences window and restart Cyberduck to install the selected connection profiles.
Open Cyberduck and select File > Open Connection… You will see the connection dialog.
At the top, click the dropdown menu and select the IONOS Cloud Object Storage connection profile that corresponds to the data center you want to use (for example, Berlin).
Enter key values in the Access Key and Secret Key fields.
To access the Object Storage keys:
Log in to your IONOS DCD, click Storage > S3 Key Management.
Choose "Generate Key" and confirm the action with OK. The object storage key will be generated automatically.
Click Connect.
`-c FILE, --config=FILE` – Config file name. Defaults to `$HOME/.s3cfg`.
`-e, --encrypt` – Encrypt files before uploading to S3.
`--upload-id=UPLOAD_ID` – UploadId for Multipart Upload, in case you want to continue an existing upload (equivalent to `--continue-put`) and there are multiple partial uploads. Use `s3cmd multipart [URI]` to see which UploadIds are associated with the given URI.
S3 Browser is a free, feature-rich Windows client for IONOS S3 Object Storage.
Download and install the S3 Browser.
Add a new account and select:
Display name: Enter a name for the connection.
Account type: Select S3 Compatible Storage from the drop-down list.
REST Endpoint: If you already have a bucket, select the endpoint URL from the list. Otherwise, you can select s3-eu-central-2.ionoscloud.com, which corresponds to the location in Berlin, Germany.
To get Access Key ID and Secret Access Key, log in to the DCD, click Storage > S3 Key Management.
Click Advanced S3-compatible storage settings in the lower-left corner of the form.
Storage settings:
Signature version: Select Signature V4 from the drop-down list.
Addressing model: Leave Path style.
Override storage regions: Paste the following text into the text area:
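The exact text for this field was not preserved on this page. Based on the regions documented above, it is presumably a mapping of region codes to display names, one per line; the format below is an assumption to be checked against the S3 Browser documentation:

```
de=Frankfurt, Germany
eu-central-2=Berlin, Germany
eu-south-2=Logrono, Spain
```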
Region-specific endpoint: Insert the following text: s3-{region-code}.ionoscloud.com
Save the details.
Try creating your first bucket. The bucket name must be unique across the entire IONOS S3 Object Storage, which is why S3 Browser offers to append random text to the bucket name. You can still try to come up with your own unique name.
Postman is a free tool for conveniently working with APIs in a graphical interface. It is available for Windows, macOS, and Linux.
You can follow the installation instructions described on the Postman website.
In the Authorization tab for a request, select AWS Signature from the Type dropdown list. Specify where Postman should append your authorization data using the Add authorization data to drop-down menu.
If you select Request Headers, Postman populates the Headers tab with Authorization and X-Amz- prefixed fields.
If you select Request URL, Postman populates the Params tab with authentication details prefixed with X-Amz-.
Note: The parameters listed below contain confidential information. We recommend using variables to keep this data secure while working in a collaborative environment.
To get Access Key and Secret Key, log in to the DCD, click Storage > S3 Key Management.
Advanced fields are optional, but Postman will attempt to generate them automatically if necessary.
For AWS Region, leave the field blank as the region from the endpoint will be used.
For Service Name, enter `s3`. This is the name of the service that receives the requests.
For Session Token, leave the field blank. This is only required when temporary security credentials are used.
The setup is complete. Now check the S3 API description to find the right endpoint to call.
Note: You need to use the correct endpoint URL for each region (see the list of available endpoints).
IONOS S3 Object Storage supports using Amazon's AWS Command Line Interface (AWS CLI) for Windows, macOS, and Linux.
For the installation instructions, see the official AWS CLI documentation.
Run the following command in a terminal: `aws configure`.
AWS Access Key ID [None]: Insert the Access Key. It can be found in the DCD by selecting Storage > S3 Key Management.
AWS Secret Access Key [None]: Paste the Secret Key. It can be found in the Data Center Designer by selecting Storage > S3 Key Management.
Default region name [None]: Enter `de`.
Default output format [None]: Enter `json`.
For each command, be sure to include one of the endpoints in the `--endpoint-url` parameter:
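For example (the Berlin endpoint is illustrative):

```shell
aws s3 ls --endpoint-url https://s3-eu-central-2.ionoscloud.com
```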
For information on the supported IONOS S3 Object Storage Service endpoints, see the list of available endpoints.
There are two sets of commands:
`s3`: Offers high-level commands for managing S3 buckets and for moving, copying, and synchronizing objects.
`s3api`: Allows you to work with specific features such as ACL, CORS, and Versioning.
List buckets:
Option 1: Using s3 set of commands
Option 2: Using s3api set of commands
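A sketch of both variants (the Berlin endpoint is illustrative):

```shell
# Option 1: s3 command set
aws s3 ls --endpoint-url https://s3-eu-central-2.ionoscloud.com

# Option 2: s3api command set
aws s3api list-buckets --endpoint-url https://s3-eu-central-2.ionoscloud.com
```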
Create a bucket in the `eu-central-2` region (Berlin, Germany):
Option 1: Using s3 set of commands
Option 2: Using s3api set of commands
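A sketch of both variants (the bucket name is illustrative):

```shell
# Option 1: s3 command set (region follows from the endpoint)
aws s3 mb s3://my-bucket --endpoint-url https://s3-eu-central-2.ionoscloud.com

# Option 2: s3api command set
aws s3api create-bucket --bucket my-bucket \
  --create-bucket-configuration LocationConstraint=eu-central-2 \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com
```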
Create a bucket in the `de` region (Frankfurt, Germany) with Object Lock enabled:
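A sketch; the placeholder endpoint must be replaced with the `de` region endpoint from the endpoint list:

```shell
aws s3api create-bucket --bucket my-bucket \
  --object-lock-enabled-for-bucket \
  --endpoint-url https://<endpoint-for-de-region>
```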
Upload an object from the current directory to a bucket:
Copy the object to the bucket:
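Sketches for both steps (file, bucket names, and endpoint are illustrative):

```shell
# Upload a local file to the bucket
aws s3 cp filename.txt s3://my-bucket \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com

# Copy an existing object into the bucket
aws s3 cp s3://my-source-bucket/filename.txt s3://my-bucket/ \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com
```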
Copy the contents of the local directory `my-dir` to the bucket `my-bucket`:
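```shell
aws s3 cp my-dir s3://my-bucket/my-dir --recursive \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com
```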
Copy all objects from `my-source-bucket` to `my-dest-bucket`, excluding .zip files. The command doesn’t support cross-region copying for IONOS S3 Object Storage:
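```shell
aws s3 cp s3://my-source-bucket s3://my-dest-bucket --recursive \
  --exclude '*.zip' \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com
```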
Download all the objects from the `my-bucket` bucket to the local directory `my-dir`:
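```shell
aws s3 cp s3://my-bucket my-dir --recursive \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com
```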
Sync the bucket `my-bucket` with the contents of the local directory `my-dir`:
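```shell
aws s3 sync my-dir s3://my-bucket \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com
```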
Get Cross-Origin Resource Sharing (CORS) configuration:
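```shell
aws s3api get-bucket-cors --bucket my-bucket \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com
```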
Set up Cross-Origin Resource Sharing (CORS) configuration:
cors.json:
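The file contents were not preserved on this page; a minimal illustrative `cors.json` (origin, methods, and age are placeholders):

```json
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://example.com"],
      "AllowedMethods": ["GET", "PUT"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3000
    }
  ]
}
```

Apply it with `aws s3api put-bucket-cors --bucket my-bucket --cors-configuration file://cors.json --endpoint-url https://s3-eu-central-2.ionoscloud.com`.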
Enable versioning for the bucket:
Get versioning state of the bucket:
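Sketches for both versioning steps:

```shell
# Enable versioning on the bucket
aws s3api put-bucket-versioning --bucket my-bucket \
  --versioning-configuration Status=Enabled \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com

# Get the current versioning state
aws s3api get-bucket-versioning --bucket my-bucket \
  --endpoint-url https://s3-eu-central-2.ionoscloud.com
```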
Set up a lifecycle policy for a bucket (delete objects starting with "my/prefix/" that are older than 5 days):
delete-after-5-days.json:
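The file contents were not preserved on this page; a minimal illustrative `delete-after-5-days.json` (the rule ID is a placeholder):

```json
{
  "Rules": [
    {
      "ID": "delete-after-5-days",
      "Filter": { "Prefix": "my/prefix/" },
      "Status": "Enabled",
      "Expiration": { "Days": 5 }
    }
  ]
}
```

Apply it with `aws s3api put-bucket-lifecycle-configuration --bucket my-bucket --lifecycle-configuration file://delete-after-5-days.json --endpoint-url https://s3-eu-central-2.ionoscloud.com`.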
Boto3 is the official AWS SDK for Python. It allows you to create, update, and configure IONOS S3 Object Storage objects from within your Python scripts.
Install the latest Boto3 release via pip: pip install boto3
There are several ways to provide credentials, e.g., passing credentials as parameters to the `boto3.client()` method, via environment variables, or with a generic credential file (`~/.aws/credentials`).
An example of passing credentials as parameters when creating a Session object:
Your Access and Secret keys can be obtained from the DCD: log in, then click Storage > S3 Key Management to get the Object Storage keys.
NOTE: Your credentials are not tied to a specific region or bucket.
For information on the supported IONOS S3 Object Storage Service endpoints, see the list of available endpoints.
List buckets:
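A sketch, with the client construction repeated so the snippet is self-contained (keys, region, and endpoint are placeholders):

```python
import boto3

client = boto3.client(
    "s3",
    aws_access_key_id="ACCESS_KEY",      # placeholder
    aws_secret_access_key="SECRET_KEY",  # placeholder
    region_name="eu-central-2",
    endpoint_url="https://s3-eu-central-2.ionoscloud.com",
)

# Print the name of every bucket visible to these credentials
response = client.list_buckets()
for bucket in response["Buckets"]:
    print(bucket["Name"])
```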
Create the bucket `my-bucket` at the region `eu-central-1`:
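A sketch; the endpoint below is assumed from the `s3-{region-code}.ionoscloud.com` pattern documented above and should be verified against the endpoint list:

```python
import boto3

client = boto3.client(
    "s3",
    aws_access_key_id="ACCESS_KEY",      # placeholder
    aws_secret_access_key="SECRET_KEY",  # placeholder
    region_name="eu-central-1",
    endpoint_url="https://s3-eu-central-1.ionoscloud.com",  # assumed endpoint
)

client.create_bucket(
    Bucket="my-bucket",
    CreateBucketConfiguration={"LocationConstraint": "eu-central-1"},
)
```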
Upload `filename.txt` to the bucket `my-bucket`:
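A sketch (keys and endpoint are placeholders):

```python
import boto3

client = boto3.client(
    "s3",
    aws_access_key_id="ACCESS_KEY",      # placeholder
    aws_secret_access_key="SECRET_KEY",  # placeholder
    region_name="eu-central-2",
    endpoint_url="https://s3-eu-central-2.ionoscloud.com",
)

# Arguments: local file name, bucket, object key
client.upload_file("filename.txt", "my-bucket", "filename.txt")
```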
Download the file `filename.txt` from the bucket `my-bucket`:
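A sketch (keys and endpoint are placeholders):

```python
import boto3

client = boto3.client(
    "s3",
    aws_access_key_id="ACCESS_KEY",      # placeholder
    aws_secret_access_key="SECRET_KEY",  # placeholder
    region_name="eu-central-2",
    endpoint_url="https://s3-eu-central-2.ionoscloud.com",
)

# Arguments: bucket, object key, local file name
client.download_file("my-bucket", "filename.txt", "filename.txt")
```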
List objects of the bucket `my-bucket`:
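A sketch (keys and endpoint are placeholders):

```python
import boto3

client = boto3.client(
    "s3",
    aws_access_key_id="ACCESS_KEY",      # placeholder
    aws_secret_access_key="SECRET_KEY",  # placeholder
    region_name="eu-central-2",
    endpoint_url="https://s3-eu-central-2.ionoscloud.com",
)

# list_objects_v2 returns up to 1000 objects per call
response = client.list_objects_v2(Bucket="my-bucket")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```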
Copy `filename.txt` from the bucket `my-source-bucket` to the bucket `my-dest-bucket` and add the prefix `uploaded/`. Instead of the `client()` method, we use the `resource()` method here. It provides a higher level of abstraction than the low-level calls made by service clients.
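A sketch using the resource API (keys and endpoint are placeholders):

```python
import boto3

s3 = boto3.resource(
    "s3",
    aws_access_key_id="ACCESS_KEY",      # placeholder
    aws_secret_access_key="SECRET_KEY",  # placeholder
    region_name="eu-central-2",
    endpoint_url="https://s3-eu-central-2.ionoscloud.com",
)

copy_source = {"Bucket": "my-source-bucket", "Key": "filename.txt"}
# Copy the object into my-dest-bucket under the uploaded/ prefix
s3.Bucket("my-dest-bucket").copy(copy_source, "uploaded/filename.txt")
```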
For more information on Boto3 and Python, see the official Boto3 documentation.