S3cmd
S3cmd is a free command line tool and client for loading, retrieving, and managing data in S3. It runs on Linux and macOS and provides more than 60 command line options, including multipart uploads, encryption, incremental backup, S3 sync, ACL and metadata management, bucket size, and bucket policies.
Configuration
Install s3cmd for your distribution:
on CentOS/RHEL and Fedora:
sudo dnf install s3cmd
on Ubuntu/Debian:
sudo apt-get install s3cmd
on macOS using Homebrew:
brew install s3cmd
You can also install the latest version from SourceForge.
Configuration steps
1. Run the following command in a terminal: s3cmd --configure. This will guide you through the interactive configuration process.
2. Enter your Access Key and Secret Key. To get them, log in to the DCD and go to Menu > Storage > IONOS S3 Object Storage > Key management.
3. Specify the region of your bucket for Default Region, for example eu-central-2. Please refer to the list of available regions.
4. Specify the endpoint for the selected region for S3 Endpoint from the same list, for example s3-eu-central-2.ionoscloud.com.
5. Insert the same endpoint again for DNS-style bucket+hostname:port template.
6. Specify a password for Encryption password, or press Enter to skip it.
7. Press Enter for Path to GPG program.
8. Press Enter for Use HTTPS protocol.
9. Press Enter for HTTP Proxy server name.
10. Press Enter for Test access with supplied credentials? [Y/n]. S3cmd will try to test the connection. If everything went well, save the configuration by typing y and pressing Enter. The configuration will be saved in the .s3cfg file.
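For reference, the relevant part of the generated .s3cfg might look like the following sketch (the keys shown are standard s3cmd settings; the credential values and region are placeholders):
[default]
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY
bucket_location = eu-central-2
host_base = s3-eu-central-2.ionoscloud.com
host_bucket = s3-eu-central-2.ionoscloud.com
use_https = True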
If you need to work with more than one region or with different providers, you can set up multiple configurations. Use s3cmd --configure --config=ionos-fra
to save the configuration for a specific location or provider. Run s3cmd with the -c
option to override the default configuration file. For example, list the objects in the bucket:
s3cmd -c ionos-fra ls s3://my-bucket
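For instance, to keep a second configuration for the Berlin region alongside the first one (the file name ionos-ber and the bucket name are placeholders), you could run:
s3cmd --configure --config=ionos-ber
s3cmd -c ionos-ber ls s3://my-berlin-bucket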
You can also specify an endpoint directly on the command line to override the default setting. The Access Key and Secret key are region-independent, so s3cmd can take them from the default configuration:
s3cmd --host s3-eu-south-2.ionoscloud.com ls s3://my-bucket
Or specify it together with the Access Key and Secret Key:
s3cmd --access_key=YOUR_ACCESS_KEY --secret_key=SECRET_KEY --host s3-eu-south-2.ionoscloud.com ls s3://my-bucket
Using s3cmd with IONOS S3 Object Storage
Please refer to the list of available endpoints for the --host option. You can skip this option if you are only using the region from the configuration file.
Sample usage
List buckets (even buckets from other regions will be listed):
s3cmd ls
Create a bucket (the name must be unique across the whole of IONOS S3 Object Storage). You need to explicitly use the --region option, otherwise the bucket will be created in the default de region.
Create the bucket my-bucket in the region de (Frankfurt, Germany):
s3cmd --host s3-eu-central-1.ionoscloud.com --region=de mb s3://my-bucket
Create the bucket my-bucket in the region eu-central-2 (Berlin, Germany):
s3cmd --host s3-eu-central-2.ionoscloud.com --region=eu-central-2 mb s3://my-bucket
Create the bucket my-bucket in the region eu-south-2 (Logrono, Spain):
s3cmd --host s3-eu-south-2.ionoscloud.com --region=eu-south-2 mb s3://my-bucket
List objects of the bucket my-bucket:
s3cmd ls s3://my-bucket
Upload filename.txt from the current directory to the bucket my-bucket:
s3cmd put filename.txt s3://my-bucket
Copy the contents of the local directory my-dir to the bucket my-bucket with the prefix my-dir:
s3cmd put my-dir s3://my-bucket --recursive
Copy all objects from my-source-bucket to my-dest-bucket, excluding .zip files (or use mv to move objects). The command does not support cross-region copying for IONOS S3 Object Storage; use the rclone utility for that:
s3cmd cp s3://my-source-bucket s3://my-dest-bucket --recursive --exclude '*.zip'
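As a rough sketch of the cross-region case with rclone, assuming you have defined two remotes named ionos-fra and ionos-ber via rclone config (both remote names and bucket names are placeholders):
rclone copy ionos-fra:my-source-bucket ionos-ber:my-dest-bucket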
Download all the objects from the my-bucket bucket to the local directory my-dir (the directory should exist):
s3cmd get s3://my-bucket my-dir --recursive
Synchronize a directory to S3 (files are compared by size and MD5 checksum):
s3cmd sync my-dir s3://my-bucket/
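If deletions in my-dir should also be reflected in the bucket, s3cmd additionally supports --delete-removed; running with --dry-run first shows what would be transferred or removed without changing anything:
s3cmd sync --dry-run --delete-removed my-dir s3://my-bucket/
s3cmd sync --delete-removed my-dir s3://my-bucket/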
Get Cross-Origin Resource Sharing (CORS) configuration:
s3cmd info s3://my-bucket
Set up Cross-Origin Resource Sharing (CORS) configuration:
s3cmd setcors cors_rules.xml s3://my-bucket
cors_rules.xml:
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>http://www.MY-DOMAIN.com</AllowedOrigin>
    <AllowedOrigin>http://MY-DOMAIN.COM</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <ExposeHeader>ETag</ExposeHeader>
  </CORSRule>
</CORSConfiguration>
Delete CORS from the bucket:
s3cmd delcors s3://my-bucket
Get information about buckets or objects:
s3cmd info s3://my-bucket
s3cmd info s3://my-bucket/my-object
Generate a public URL for download that will be available for 10 minutes (600 seconds):
s3cmd signurl s3://my-bucket/filename.txt +600
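The command prints a presigned URL that anyone can use to download the object until it expires, for example with curl (replace PRESIGNED_URL with the URL returned by signurl):
curl -o filename.txt 'PRESIGNED_URL'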
Set up a lifecycle policy for a bucket (delete objects older than 1 day):
s3cmd setlifecycle delete-after-one-day.xml s3://my-bucket
delete-after-one-day.xml:
<?xml version="1.0" ?>
<LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Rule>
    <ID>Delete older than 1 day</ID>
    <Prefix/>
    <Status>Enabled</Status>
    <Expiration>
      <Days>1</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
Encrypt and upload files. This option allows you to encrypt files before uploading. To use it, run s3cmd --configure and fill in the path to the GPG utility and the encryption password. No special parameters are needed to decrypt the file when downloading with the get command, as this is done automatically using the data from the configuration file.
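For example, once GPG is configured as described above, the -e (--encrypt) option encrypts a file locally before it is uploaded (bucket and file names are placeholders):
s3cmd put -e filename.txt s3://my-bucket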
Add or modify user-defined metadata. Use headers starting with x-amz-meta- and store data as a set of key-value pairs. The user-defined metadata is limited to 2 KB in size; the size is measured by taking the sum of the number of bytes in the UTF-8 encoding of each key and value.
s3cmd modify --add-header x-amz-meta-my_key:my_value s3://my-bucket/prefix/filename.txt
Check the changes:
s3cmd info s3://my-bucket/prefix/filename.txt
Delete metadata:
s3cmd modify --remove-header x-amz-meta-my_key s3://my-bucket/prefix/filename.txt
For more information, visit S3cmd usage and S3cmd FAQ.