S3cmd

S3cmd is a free command-line tool and client for uploading, retrieving, and managing data in S3. It provides more than 60 command-line options and supports multipart uploads, encryption, incremental backup, S3 sync, ACL and metadata management, bucket size reporting, and bucket policies. It is available for Linux and macOS.

Configuration

Install S3cmd:

  • on CentOS/RHEL and Fedora: sudo dnf install s3cmd

  • on Ubuntu/Debian: sudo apt-get install s3cmd

  • on macOS using Homebrew: brew install s3cmd

You can also install the latest version from SourceForge.
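
To confirm that the installation succeeded, you can print the installed version:

s3cmd --version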

Steps

1. Execute the following command in a terminal: s3cmd --configure. This command guides you through the interactive configuration process.

2. Enter your Access Key and Secret key. To get them, see Generate a Key.

Note: Your credentials are not tied to a specific region or bucket.

3. Specify the region of your bucket for Default Region. Example: eu-central-2. For information on the supported IONOS Object Storage service endpoints, see Endpoints.

4. Specify the Endpoint for the selected region from the same list, such as s3-eu-central-2.ionoscloud.com.

5. Insert the same Endpoint again for DNS-style bucket+hostname:port template.

6. Specify a password for Encryption password or press Enter to skip it.

7. Press Enter for the Path to GPG program.

8. Press Enter for the Use HTTPS protocol.

9. Press Enter for the HTTP Proxy server name.

10. Press Enter for the Test access with supplied credentials? [Y/n]. S3cmd will try to test the connection.

11. Once the test passes, save the configuration by typing y and pressing Enter. The configuration is saved to the .s3cfg file in your home directory.

Result: The S3cmd configuration is complete.
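
For reference, the saved .s3cfg is a plain INI file. A minimal sketch of the relevant entries looks like the following; the actual file contains many more settings, and the endpoint shown here is only an example:

	[default]
	access_key = YOUR_ACCESS_KEY
	secret_key = YOUR_SECRET_KEY
	host_base = s3-eu-central-2.ionoscloud.com
	host_bucket = s3-eu-central-2.ionoscloud.com
	use_https = True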

Set up multiple configurations

If you need to work with more than one region or with different providers, you can set up multiple configurations.

Use s3cmd --configure --config=ionos-fra to save the configuration for a specific location or provider. Execute s3cmd with the -c option to override the default configuration file. For example, list the objects in the bucket:

s3cmd -c ionos-fra ls s3://my-bucket
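
In the same way, a further configuration can be created for another region or provider and selected per command; the configuration name ionos-ber and the bucket name below are only illustrative:

	s3cmd --configure --config=ionos-ber
	s3cmd -c ionos-ber ls s3://my-other-bucket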

You can also specify an endpoint directly on the command line to override the default setting. The Access Key and Secret Key are region-independent, so s3cmd can take them from the default configuration:

s3cmd --host s3-eu-south-2.ionoscloud.com ls s3://my-bucket

Alternatively, specify the Access Key and Secret Key explicitly:

s3cmd --access_key=YOUR_ACCESS_KEY --secret_key=YOUR_SECRET_KEY --host s3-eu-south-2.ionoscloud.com ls s3://my-bucket

Use s3cmd with IONOS Object Storage

For the --host option, see the list of supported IONOS Object Storage service Endpoints. If you are only using the region from the configuration file, you can skip this option.

Examples

List buckets (buckets from all regions are listed): s3cmd ls

Create a bucket:

Note:

— The bucket name must be unique across IONOS Object Storage.

— You need to use the --region option explicitly; otherwise, the bucket is created in the default de region.

Create a bucket my-bucket in the region de (Frankfurt, Germany): s3cmd --host s3-eu-central-1.ionoscloud.com --region=de mb s3://my-bucket

Create a bucket my-bucket in the region eu-central-2 (Berlin, Germany): s3cmd --host s3-eu-central-2.ionoscloud.com --region=eu-central-2 mb s3://my-bucket

Create a bucket my-bucket in the region eu-south-2 (Logrono, Spain): s3cmd --host s3-eu-south-2.ionoscloud.com --region=eu-south-2 mb s3://my-bucket

List objects of the bucket my-bucket: s3cmd ls s3://my-bucket

Upload filename.txt from the current directory to the bucket my-bucket: s3cmd put filename.txt s3://my-bucket
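
If the uploaded file should be publicly readable, the ACL can be set during the upload with the standard --acl-public option; this is only a sketch, reusing the file and bucket names from above:

s3cmd put --acl-public filename.txt s3://my-bucket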

Copy the contents of the local directory my-dir to the bucket my-bucket with the prefix my-dir: s3cmd put my-dir s3://my-bucket --recursive

Copy all objects from my-source-bucket to my-dest-bucket, excluding .zip files (or use mv to move objects): s3cmd cp s3://my-source-bucket s3://my-dest-bucket --recursive --exclude="*.zip"

Note: The command does not support cross-region copying for IONOS Object Storage. Use the rclone utility for cross-region copying.

Download all the objects from the my-bucket bucket to the local directory my-dir (the directory should exist): s3cmd get s3://my-bucket my-dir --recursive

Synchronize a directory to S3 (checks files using size and md5 checksum): s3cmd sync my-dir s3://my-bucket/
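
To preview what a sync would transfer before running it, or to also delete remote objects that no longer exist locally, sync can be combined with the standard --dry-run and --delete-removed options, for example:

	s3cmd sync --dry-run my-dir s3://my-bucket/
	s3cmd sync --delete-removed my-dir s3://my-bucket/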

Get Cross-Origin Resource Sharing (CORS) configuration: s3cmd info s3://my-bucket

Set up a Cross-Origin Resource Sharing (CORS) configuration: s3cmd setcors cors_rules.xml s3://my-bucket

cors_rules.xml:

	<CORSConfiguration>
	<CORSRule>
	    <AllowedOrigin>http://www.MY-DOMAIN.com</AllowedOrigin>
	    <AllowedOrigin>http://MY-DOMAIN.COM</AllowedOrigin>
	    <AllowedMethod>PUT</AllowedMethod>
	    <AllowedMethod>POST</AllowedMethod>
	    <AllowedMethod>DELETE</AllowedMethod>
	    <AllowedHeader>*</AllowedHeader>
	    <MaxAgeSeconds>3000</MaxAgeSeconds>
	    <ExposeHeader>ETag</ExposeHeader>
	</CORSRule>
	</CORSConfiguration>

Delete CORS from the bucket: s3cmd delcors s3://my-bucket

Get information about buckets or objects:

  • s3cmd info s3://my-bucket

  • s3cmd info s3://my-bucket/my-object

Generate a public URL for download that will be available for 10 minutes (600 seconds): s3cmd signurl s3://my-bucket/filename.txt +600

Set up a lifecycle policy for a bucket (delete objects older than 1 day): s3cmd setlifecycle delete-after-one-day.xml s3://my-bucket

delete-after-one-day.xml:

	<?xml version="1.0" ?>
	<LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
		<Rule>
			<ID>Delete older than 1 day</ID>
			<Prefix/>
			<Status>Enabled</Status>
			<Expiration>
				<Days>1</Days>
			</Expiration>
		</Rule>
	</LifecycleConfiguration>
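
Depending on the s3cmd version, the applied policy can be checked with getlifecycle and removed again with dellifecycle:

	s3cmd getlifecycle s3://my-bucket
	s3cmd dellifecycle s3://my-bucket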

Encrypt and upload files: This option allows you to encrypt files before uploading them. To use it, run s3cmd --configure and fill in the path to the GNU Privacy Guard (GPG) utility and the encryption password. No special parameters are needed to decrypt a file when downloading it with the get command, as this is done automatically using the data from the configuration file.
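
For example, assuming the GPG path and encryption password are set in the configuration, the -e (--encrypt) flag encrypts the file during upload, and a plain get decrypts it transparently:

	s3cmd put -e filename.txt s3://my-bucket
	s3cmd get s3://my-bucket/filename.txt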

Add or modify user-defined metadata: Use headers starting with x-amz-meta- and store data in a set of key-value pairs. The user-defined metadata is limited to 2 KB in size. Its size is measured by taking the sum of the number of bytes in the UTF-8 encoding of each key and value.

  • Add metadata:

    s3cmd modify --add-header x-amz-meta-my_key:my_value s3://my-bucket/prefix/filename.txt

  • Check the changes:

    s3cmd info s3://my-bucket/prefix/filename.txt

  • Delete metadata:

    s3cmd modify --remove-header x-amz-meta-my_key s3://my-bucket/prefix/filename.txt

For more information, see S3cmd Usage and S3cmd FAQs.
