Manage your Object Storage buckets, objects, and their access permissions effectively using the data management, access management, and public access settings.
Replication allows you to create and manage replicas of your data across S3 endpoints. Only objects directly uploaded into a bucket by a client application are replicated. Replication traffic, including cross-region replication, is not counted towards data usage; replication data transfer is therefore free. A Replication configuration can contain up to 1,000 rules.
Note: Objects are not replicated if they are themselves replicas from another source bucket.
If an object deletion request specifies the object version, that version is deleted from the source bucket but not from the destination bucket. If the request does not specify a version, the delete marker added to the source bucket is replicated to the destination bucket.
Disaster Recovery: In the event of a regional outage, your data remains accessible from another region.
Compliance Requirements: Meet legal and compliance mandates by storing copies of data in different geographical locations.
Latency Reduction: Serve data from the nearest region to your users, minimizing latency and improving performance.
Data Aggregation: Aggregate logs or other data from multiple buckets to a central bucket, where they can be analyzed.
With bi-directional replication, you can configure two buckets to replicate each other. For example, objects directly uploaded into bucket1 are copied to bucket2, and objects directly uploaded into bucket2 are replicated to bucket1. Replicated objects are never copied back into their source bucket, which avoids an endless replication loop.
You can manage Replication using the web console, API, and CLI.
Prerequisite: Versioning must be enabled for source and destination buckets.
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the bucket for which the Replication rule must be added and click Bucket settings.
3. Go to the Replication setting and click Add a rule.
4. Enter a Rule name.
5. Choose the Replication scope. You can either apply the replication rule to all objects in the bucket or limit to objects filtered by prefix.
Info: Use a prefix that is unique and does not include the source bucket name.
6. Browse S3 to choose the Destination bucket.
7. Click Add a rule.
Result: The Replication rule is successfully added and automates the replication of objects between the source and destination bucket.
Info: Using the same Replication bucket settings, you can enable, disable, modify, and delete an existing rule. It can take up to a few minutes for the deletion of a Replication rule to propagate fully.
Use the Replication API to manage the replication of objects.
Use the CLI to manage Replication.
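For API- or CLI-driven setups, the rule created in the steps above corresponds to a version-1 replication configuration payload. The following sketch builds that payload in the dict shape used by S3-compatible SDKs (for example, boto3's put_bucket_replication); the rule name, prefix, and destination bucket are placeholder values.

```python
# Minimal sketch of a version-1 replication configuration, in the dict
# shape used by S3-compatible clients. Unsupported options such as
# Filter, Priority, and DeleteMarkerReplication are deliberately omitted
# in favor of a plain Prefix.
replication_config = {
    "Rules": [
        {
            "ID": "replicate-logs",            # unique rule name (placeholder)
            "Status": "Enabled",
            "Prefix": "logs/",                 # version 1 uses Prefix, not Filter
            "Destination": {
                "Bucket": "arn:aws:s3:::my-destination-bucket",  # placeholder
            },
        }
    ]
}

# The configuration may hold at most 1,000 rules, each with a unique prefix.
assert len(replication_config["Rules"]) <= 1000
prefixes = [r["Prefix"] for r in replication_config["Rules"]]
assert len(prefixes) == len(set(prefixes))
```

Remember that Versioning must already be enabled on both buckets before this configuration is accepted.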
Replication configuration is possible if Versioning is enabled for both source and destination buckets participating in Replication.
Each Replication rule identifies a specific prefix for replication; each prefix must be unique across rules.
Version 2 of the AWS S3 specification for the Replication configuration XML is not supported; only version 1 is currently supported.
The following options are not supported: DeleteMarkerReplication, EncryptionConfiguration, ReplicationTime, ExistingObjectReplication, Filter (use Prefix instead), Priority, SourceSelectionCriteria, AccessControlTranslation, Account, and Metrics.
Replication is not possible in the following cases:
A source bucket that has Object Lock enabled. However, an Object Lock enabled bucket can be a destination bucket for Replication.
A source bucket that has Lifecycle for auto-tiering enabled.
Objects uploaded before enabling Replication.
Objects encrypted by the SSE-C method.
Objects that are themselves replicas from other source buckets. For example, if you configure bucket1 to replicate to bucket2, and you configure bucket2 to replicate to bucket3, then an object that you upload to bucket1 is replicated to bucket2 but is not replicated from there on to bucket3. Only objects you directly upload into bucket2 are copied to bucket3.
Versioning allows you to keep multiple versions of the same object and it must be set up for both the source and the target bucket before enabling the replication. For more information, see Versioning.
Object Lock is a feature that enables you to apply WORM protection to objects, preventing them from being deleted or modified for a specified duration. It provides robust, programmable safeguards for storing critical data that must remain immutable.
Warning: Once a bucket is created without Object Lock, you cannot enable it later.
Data Preservation: Protects critical data from accidental or malicious alteration and deletion, ensuring integrity and consistency.
Regulatory Compliance: Aligns with European regulations such as GDPR, Markets in Financial Instruments Directive (MiFID) II, and the Electronic ID and Trust Services (eIDAS) regulation, maintaining records in an unalterable state.
Legal Holds and Audits: Facilitates legal holds and audits, meeting requirements for transparency and accountability.
Object lock can be applied in two different modes:
Governance: Allows specific users with special permissions to override the lock settings. Ideal for flexible control.
Compliance: Enforces a strict lock without any possibility of an override. Suited for regulatory and legal mandates.
These two lock modes require configuring the duration for which the object will remain locked. The period can range from days to years, depending on the object's compliance needs.
The Retention period refers to the duration for which the objects stored in a particular Object Storage bucket are protected from deletion or modification. You can set the retention period to a maximum of 365 days. To set a longer retention period, use the API.
The retention configuration can be modified or removed for the objects under Governance mode by including a specific header variable in the API request. However, for objects in Compliance mode, reducing the retention period or removing the retention configuration is not possible.
Note: Under Object Lock or Object Hold, permanent deletion of an object's version is not permissible. Instead, a deletion marker is generated for the object, causing IONOS S3 Object Storage to consider that the object has been deleted.
However, the delete markers on the objects are not subject to protection from deletion, irrespective of any retention period or legal hold on the underlying object. Deleting the delete markers restores the previous version of the objects.
An additional setting called Legal Hold places a hold on an object without requiring a retention period. It can be applied to objects with or without Object Lock retention. A Legal Hold remains in effect until it is manually removed, even after the object's Governance or Compliance mode retention period has expired.
Note: Object Lock configuration can only be enabled during the initial creation of a bucket and cannot be applied to an existing bucket.
When a bucket is created with Object Lock enabled, you can set up Object Lock configurations. These configurations determine the default mode and retention period for newly uploaded objects. Alternatively, Object Lock settings can be explicitly defined for each object during its creation, overriding the bucket's default settings.
Prerequisite: Ensure you create a new bucket to enable Object Lock.
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. Create a bucket with Object Lock enabled.
3. From the Buckets list, choose the bucket for which the Object Lock is enabled and click Bucket settings.
4. Click Object Lock to manage these settings on the bucket.
5. Modify the Object Lock mode applied on the bucket and the Retention period as needed.
6. Click SAVE.
Note: The modified Object Lock settings apply to the newly uploaded objects to the bucket. The existing objects adhere to the Object Lock settings applied during the bucket creation.
Result: The Object Lock settings are successfully updated and applied to the bucket.
Use the Object Lock API to manage the Object Lock configuration on the specified buckets.
Use the CLI to manage Object Lock.
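As a sketch of what a retention setting looks like at the API level, the following builds the payload an S3-compatible SDK would send (for example, via boto3's put_object_retention). The 30-day period and Governance mode are illustrative values, not defaults.

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention period; real values depend on your compliance needs.
retention_days = 30
retain_until = datetime.now(timezone.utc) + timedelta(days=retention_days)

# Dict shape used by S3-compatible clients for object retention.
retention = {
    "Mode": "GOVERNANCE",            # or "COMPLIANCE" for a non-overridable lock
    "RetainUntilDate": retain_until,
}

# Governance mode can be overridden by users with special permissions;
# Compliance mode cannot be shortened or removed by anyone.
assert retention["Mode"] in ("GOVERNANCE", "COMPLIANCE")
assert retention["RetainUntilDate"] > datetime.now(timezone.utc)
```

The web console caps the retention period at 365 days; longer periods, as noted above, must be set through the API.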
The following are a few limitations to consider while using Object Lock:
Versioning will be automatically enabled in addition to Object Lock.
Once the Object Lock is enabled during bucket creation, both Object Lock and Versioning cannot be disabled afterward.
When you place or modify an Object Lock, updating the object version's metadata does not overwrite the object version or change its Last-Modified timestamp.
A bucket with Object Lock enabled cannot be chosen as a source for replication or tiering, but it could be a destination for replication or tiering.
In the Compliance mode, an object is immutable until its retention date has passed. It is not possible to disable this mode for the object or to shorten the retention period. This setting cannot be changed by the bucket owner or by IONOS.
Versioning allows you to keep multiple versions of the same object. Upon enabling Versioning for your bucket, each version of an object is considered a separate entity contributing to your storage space usage. Every version represents the full object, not just the differences from its predecessor. This aspect will be evident in your usage reports and will influence your usage-based billing.
Data Recovery: Versioning can be used as a backup solution for your data. If you accidentally overwrite or delete an object, you can restore it to a previous version.
Tracking Changes: Versioning can be used to track changes to your data over time. This can be useful for debugging purposes or auditing changes to your data.
Buckets can exist in one of three states:
Unversioned: Represents the default state. No versioning is applied to objects in a bucket.
Versioning - enabled: In this state, each object version is preserved.
Versioning - suspended: No new versions are created, but existing versions are retained.
Objects residing in your bucket before the activation of versioning possess a version ID of null. Once versioning is enabled, it cannot be disabled but can be suspended. During suspension:
New object versions are not created.
Existing object versions are retained.
You can resume Versioning anytime, with new versions being created henceforth.
Upon enabling Versioning for a bucket, every object version is assigned a unique, immutable Version ID, serving as a reliable reference for different object versions. New object versions are generated exclusively through PUT operations; actions such as COPY entail a PUT operation and therefore create a new version. Notably, a new Version ID is allocated for each version even if the object content remains unaltered. Objects residing in the bucket before the activation of versioning bear a Version ID of null.
When an object is deleted, all its versions persist in the bucket, while Object Storage introduces a delete marker, which is also assigned its Version ID.
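The delete-marker behaviour described above can be illustrated with a small in-memory model. This is a toy sketch, not the storage implementation: real Version IDs are opaque server-generated strings, and all names here are invented for illustration.

```python
from itertools import count

class VersionedBucket:
    """Toy model of versioned-bucket delete semantics."""

    def __init__(self):
        self.versions = {}          # key -> list of (version_id, payload)
        self._ids = count(1)

    def put(self, key, data):
        vid = f"v{next(self._ids)}"
        self.versions.setdefault(key, []).append((vid, data))
        return vid

    def delete(self, key, version_id=None):
        stack = self.versions[key]
        if version_id is None:
            # No version specified: add a delete marker as the current version.
            vid = f"v{next(self._ids)}"
            stack.append((vid, "DELETE_MARKER"))
            return vid
        # Version specified: permanently remove that one version.
        stack[:] = [(v, d) for v, d in stack if v != version_id]

    def current(self, key):
        vid, data = self.versions[key][-1]
        return None if data == "DELETE_MARKER" else data

bucket = VersionedBucket()
bucket.put("report.txt", "draft")
bucket.put("report.txt", "final")       # old version is retained, not overwritten
marker = bucket.delete("report.txt")    # versionless delete -> delete marker
assert bucket.current("report.txt") is None        # object appears deleted
bucket.delete("report.txt", version_id=marker)     # remove the delete marker
assert bucket.current("report.txt") == "final"     # previous version restored
```

Note how deleting the marker itself restores the previous version, matching the recovery behaviour described for delete markers in the Object Lock section.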
You can manage Versioning using the web console, API, and CLI.
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the bucket for which object versions must be enabled and click Bucket settings.
3. In the Versioning section, click Enable to keep object versions. Choosing the Disable option suspends object versioning but preserves existing object versions.
Result: Based on the selection, Versioning is either enabled or disabled for objects in the bucket.
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the bucket in which the desired object exists.
3. Click the object name within the bucket listing.
4. Navigate to the object's Versions tab by clicking the object name or clicking the three dots against the object name.
5. Copy Version IDs or download non-current versions of the object. You can also select and delete non-current object versions.
Result: Based on the selection, Version IDs and non-current object versions are successfully managed.
For Bucket Replication to function correctly, Versioning must be enabled.
IONOS S3 Object Storage allows the setup of lifecycle rules for managing both current and non-current versions of objects in versioning-enabled buckets. For instance, you can automate the deletion of non-current object versions a specified number of days after their transition to non-current status.
Use the Versioning API to configure and manage Versioning for a bucket.
Use the CLI to manage Versioning.
For a bucket with Object Lock enabled, Versioning is automatically enabled and cannot be suspended.
Use Object Lock to protect critical objects in a bucket for an immutable period.
Use Replication to create and manage data replicas across multiple S3 regions.
Manage multiple versions of the same object using Versioning.
Manage the deletion of objects and their versions efficiently using the Lifecycle rules.
Use Bucket Policy to define granular access permissions and actions users can perform on buckets and objects.
Use ACL to define access permissions on buckets and objects to control who can access them.
With Logging, track and record storage requests for your buckets.
Define permissions to specific domains that can access bucket content.
Host static website content by configuring the index and error document.
Bucket Policy is a JSON-based access policy language that allows you to create fine-grained permissions for your S3 buckets. With Bucket Policy, you can specify which users or services can access specific objects and what actions users can perform.
Note: Granting access to a bucket for another IONOS user does not make the bucket appear in the user's S3 web console due to the S3 protocol's architecture. To access the bucket, the user must utilize other S3 Tools as the granted access does not translate to interface visibility.
Use this feature to grant access to a specific user or group to only a subset of the objects in your bucket.
Restrict access to certain operations on your bucket, for example, list objects or remove object lock.
Using Bucket Policy, you can grant access based on conditions, such as the IP address of the user.
Create fine-grained access control rules to allow a user to put objects to a specific prefix in your bucket, but not to get objects from that prefix.
Use Bucket ACL instead of Bucket Policy if you need to:
Define permissions in a simple way such as READ, WRITE, or FULL_CONTROL.
Apply different sets of permissions to many objects.
Use Share Objects with Pre-Signed URLs to grant temporary access to authorized users for a specified period, after which the URL and the access to the object expire.
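Pre-signed URLs are produced with the standard AWS Signature Version 4 query-string signing scheme, which any S3-compatible SDK performs for you. The sketch below reimplements it with the Python standard library only to show the moving parts; the host name, keys, and region are placeholders, and a real deployment would normally call an SDK helper such as boto3's generate_presigned_url instead.

```python
import hashlib
import hmac
from datetime import datetime, timezone
from urllib.parse import quote

def presign_get(host, key, access_key, secret_key, region, expires=3600):
    """Build a SigV4 pre-signed GET URL (query-string auth, UNSIGNED-PAYLOAD)."""
    now = datetime.now(timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    scope = f"{datestamp}/{region}/s3/aws4_request"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    # Canonical query string: URI-encoded key/value pairs, sorted by key.
    query = "&".join(
        f"{quote(k, safe='')}={quote(v, safe='')}" for k, v in sorted(params.items())
    )
    canonical = "\n".join(
        ["GET", "/" + quote(key), query, f"host:{host}", "", "host", "UNSIGNED-PAYLOAD"]
    )
    to_sign = "\n".join(
        ["AWS4-HMAC-SHA256", amz_date, scope,
         hashlib.sha256(canonical.encode()).hexdigest()]
    )
    # Derive the signing key: HMAC chain over date, region, service, terminator.
    sig_key = hmac.new(f"AWS4{secret_key}".encode(), datestamp.encode(), hashlib.sha256).digest()
    for part in (region, "s3", "aws4_request"):
        sig_key = hmac.new(sig_key, part.encode(), hashlib.sha256).digest()
    signature = hmac.new(sig_key, to_sign.encode(), hashlib.sha256).hexdigest()
    return f"https://{host}/{quote(key)}?{query}&X-Amz-Signature={signature}"

url = presign_get("my-bucket.s3.example.com", "docs/report.pdf",
                  "ACCESS_KEY", "SECRET_KEY", "eu-central-1")
```

Anyone holding this URL can fetch the object until X-Amz-Expires seconds have elapsed, after which the storage endpoint rejects the request.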
A JSON-formatted bucket policy contains one or more policy statements. Within a policy's statement blocks, IONOS S3 Object Storage supports the following policy statement elements and values:
Id (optional): A unique identifier for the policy. Example: SamplePolicyID.
Version (required): Specifies the policy language version. The current version is 2012-10-17.
Statement (required): An array of individual statements, each specifying a permission.
Sid (optional): Custom string identifying the statement. For example, Delegate certain actions to another user.
Action (required): Specifies the action(s) that are allowed or denied by the statement. See the Action section in the Request for supported values. Example: s3:GetObject for allowing read access to objects.
Effect (required): Specifies the effect of the statement. Possible values: Allow, Deny.
Resource (required): Must be one of the following:
arn:aws:s3:::<bucketName> – For bucket actions (such as s3:ListBucket) and bucket subresource actions (such as s3:GetBucketAcl).
arn:aws:s3:::<bucketName>/* or arn:aws:s3:::<bucketName>/<objectName> – For object actions (such as s3:PutObject).
Condition (optional): Specifies conditions for when the statement is in effect. See the Condition section in the Request for supported values. Example: {"aws:SourceIp": "123.123.123.0/24"} restricts access to the specified IP range.
Principal (required): Specifies the user, account, service, or other entity to which the statement applies.
* – Statement applies to all users (also known as 'anonymous access').
{"CanonicalUser": "<canonicalUserId>"} – Statement applies to the specified IONOS S3 Object Storage user.
{"CanonicalUser": ["<canonicalUserId>", "<canonicalUserId>",...]} – Statement applies to the specified IONOS S3 Object Storage users.
You can apply Bucket Policy using the web console by following these steps:
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the required S3 bucket and click the Bucket settings.
3. In the Bucket Policy, click Edit, copy and paste the provided JSON policy, replacing BUCKET_NAME and CANONICAL_USER_ID with the actual values. You can retrieve your Canonical user ID from the Key management section. For more information, see Retrieve User IDs.
4. Click Save.
This action grants the specified user full access to your bucket.
You have the option to restrict actions, define the scope of access, or incorporate conditions into the Bucket Policy for more tailored control.
You can delete a Bucket Policy at any time: in the Bucket Policy section of the Bucket settings, click Delete.
Info: Removing a bucket policy is irreversible; it is advised to create a backup of the policy before deleting it.
Use the Bucket Policy API to manage the Bucket Policy configuration.
Use the CLI to manage Bucket Policy.
If you have defined a bucket policy to grant public access, activating the Block Public Access feature will revoke these permissions, ensuring your data remains private. This feature is invaluable in scenarios where ensuring data privacy is paramount, or when you want to enforce a blanket no-public-access rule, irrespective of Bucket Policy settings.
Following are a few examples of common use cases and their corresponding bucket policy configurations.
To grant full control over a bucket and its objects to other IONOS S3 Object Storage users:
To grant read-only access to objects within a specific prefix of a bucket to other IONOS S3 Object Storage users:
To allow read access to certain objects within a bucket while keeping other objects private:
To restrict all users from performing any S3 operations within the designated bucket, unless the request is initiated from the specified range of IP addresses:
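As an illustration of this last use case, the statement combines a Deny effect with a NotIpAddress condition. The following is a sketch only; the bucket name and CIDR range are placeholders.

```python
import json

# Sketch: deny every S3 action on the bucket unless the request comes
# from the given IP range. BUCKET_NAME and the CIDR block are placeholders.
ip_restriction_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Allow requests only from a trusted network",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::BUCKET_NAME",
                "arn:aws:s3:::BUCKET_NAME/*",
            ],
            "Condition": {
                "NotIpAddress": {"aws:SourceIp": "123.123.123.0/24"}
            },
        }
    ],
}

ip_policy_json = json.dumps(ip_restriction_policy, indent=2)
```

Because Deny statements take precedence, requests from outside the specified range are rejected even if another statement would otherwise allow them.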
Lifecycle management allows you to automate the deletion of objects and their versions to optimize costs and adhere to compliance requirements.
The Lifecycle comprises rules with actions applied to objects within a bucket. These policies help automate processes that manage the lifecycle of your data.
Object Expiration: Automatically deletes objects no longer needed after a certain period, such as temporary files, logs, or other transient data. It helps to declutter the storage and reduce costs.
Regulatory Compliance: Assists in meeting legal and compliance requirements by deleting objects according to the defined Lifecycle rules.
Version Control: Manages multiple versions of objects by automatically deleting non-current object versions, saving storage costs.
Temporary Storage: Stores data generated from batch processing or other workloads and deletes this provisional data when it is no longer needed, using the object expiration Lifecycle actions.
A Lifecycle rule supports the following actions:
Expire current versions.
Permanently delete noncurrent versions of objects.
Delete expired object delete markers.
Delete incomplete multipart uploads.
With this action, you can specify a period after which the object's current version must expire. Depending on whether Versioning is enabled for the bucket, the Expire current versions action behaves in the following ways:
If the Versioning is enabled for the bucket, then the expiration of the current version of an object does not result in the deletion of the object data from the storage. Instead, when the object's current version reaches its expiration date, a "delete marker" is created for this object and retained as its "current version"; the object data transitions to a non-current object version.
If the Versioning is not enabled for the bucket, then the current versions are the only versions of objects in your bucket. When the object reaches its expiration date, it is permanently deleted from the storage.
When the Expire current versions action is set for a bucket that uses Versioning, the system automatically deletes the expired delete markers as part of the lifecycle processing. An expired delete marker is a delete marker for which there is no corresponding object data because all non-current versions of the object have been deleted. This functionality aids in maintaining a clean and organized bucket and retains only necessary data.
This action is applicable only if the bucket uses Versioning. Permanently deleting non-current versions of objects takes place after the specified retention period, and it helps to ensure the removal of outdated versions of objects from the storage.
A non-current object version refers to those that are superseded by a newer object version or a delete marker. When a non-current version of an object reaches its scheduled expiration, it is permanently deleted from the storage. The expiration scheduling for non-current versions of objects is based on the number of days since the objects became non-current, which is the number of days since being superseded by a newer version or a "delete marker."
If the bucket has Object Lock enabled, then non-current object versions are not deleted before their defined retention period is complete. If the expiration date of a non-current object version (based on your configured expiration schedule) falls before the end of the object version's lock period, the Object Lock setting takes precedence: the system retains the non-current object version until the end of its lock period. Shortly after the lock period concludes, the system automatically deletes the non-current object version, ensuring adherence to both expiration and retention policies.
However, if all older versions subsequently expire (through the execution of the expiration rule for non-current versions), an orphaned delete marker remains. With the Delete expired object delete markers action, you enable the system to automatically remove a delete marker 48 hours after all the older versions of the object have been expired or deleted.
You can manage Lifecycle using the web console, API, and CLI.
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the bucket for which the Lifecycle rule must be added and click Bucket settings.
3. Go to the Lifecycle setting listed under Data management and click Add a rule.
4. Enter the following details to configure the Lifecycle rule:
Lifecycle Rule name: Enter a name to identify the rule uniquely.
Set Rule Scope: Choose whether to apply the Lifecycle rule to all objects in the bucket or limit to objects filtered by the prefix. The prefix is subject to a single Lifecycle rule.
Select an action: Choose one or more from the following Lifecycle actions to apply to the objects in the bucket:
Expire current versions: Enter the number of days after object creation at which the current object version should expire, or select a date from the calendar after which the current object version must expire. How the rule applies depends on whether the bucket is versioned.
Permanently delete noncurrent versions of objects: Specify the number of days after an object version becomes non-current at which it must be permanently deleted.
Delete expired object delete markers: Select this action to remove all expired object delete markers and improve performance. You cannot apply this action if the Expire current versions action is selected.
Delete incomplete multipart uploads: Specify the number of days after upload initiation at which incomplete multipart uploads must be deleted.
5. Click Save.
Result: The Lifecycle rule is successfully added.
Info: Using the same Lifecycle bucket settings, you can enable, disable, modify, and delete an existing rule. It can take up to a few minutes for the deletion of a Lifecycle rule to propagate fully.
Currently, IONOS S3 Object Storage only supports Standard storage class. You cannot use lifecycle rules to transition objects to another storage class.
A maximum of 1,000 rules can be set in the Lifecycle configuration.
Multiple Lifecycle rules can be created for a bucket, each applying to a different object prefix. However, more than one Lifecycle rule cannot be set for the same object prefix.
If the bucket uses Object Lock, non-current object versions cannot be deleted before the completion of their defined retention period.
The NewerNoncurrentVersions setting is not supported for the NoncurrentVersionExpiration option.
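At the API level, the actions and limitations above map onto a Lifecycle configuration payload. The following sketch uses the dict shape accepted by S3-compatible SDKs (for example, boto3's put_bucket_lifecycle_configuration); the prefix and day counts are illustrative values.

```python
# Minimal sketch of a Lifecycle configuration. The unsupported
# NewerNoncurrentVersions setting is deliberately omitted, and
# ExpiredObjectDeleteMarker is not combined with Expiration, since
# the two actions are mutually exclusive.
lifecycle = {
    "Rules": [
        {
            "ID": "expire-temp-files",
            "Status": "Enabled",
            "Prefix": "tmp/",                    # one rule per object prefix
            "Expiration": {"Days": 30},           # expire current versions
            "NoncurrentVersionExpiration": {"NoncurrentDays": 7},
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 3},
        }
    ]
}

# A Lifecycle configuration may hold at most 1,000 rules,
# each applying to a different prefix.
assert len(lifecycle["Rules"]) <= 1000
```

Because only the Standard storage class is supported, no Transition action appears in the payload.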
With the help of a detailed authorization system, based on the S3 Access Control List (ACL), you can control precisely who accesses and edits your content. By assigning ACLs to a group of users as per S3-compliant ACL, you can manage who may access the buckets and objects of your IONOS S3 Object Storage.
Manage access to prefixes like /folder/* or *.jpg.
Use conditions to grant access, for example, IP address.
Allow or deny certain actions like listing the object list.
You can use ACLs to make a bucket or object public or to share access with certain authorized users by setting the right permissions. IONOS S3 Object Storage offers the following ACL management methods:
If you have defined ACLs granting public access, activating the Block Public Access revokes these permissions, ensuring your data remains private. This feature is invaluable in scenarios where ensuring data privacy is paramount, or when you want to enforce a blanket no-public-access rule, irrespective of ACL settings.
You can manage ACL permission for buckets through the web console, IONOS S3 Object Storage API, or the CLI.
The following table shows the ACL permissions that you can configure for buckets in the IONOS S3 Object Storage.
Note: For security, granting some of the access permissions, such as Public access WRITE, Public access WRITE_ACP, Authenticated users WRITE, and Authenticated users WRITE_ACP, is possible only through an API call.
To manage ACL for buckets using the web console, follow these steps:
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the bucket whose ACL you want to manage.
3. Click Bucket settings and choose the Access Control List (ACL) under the Access management section.
6. Click Save to apply the ACL settings to the bucket.
Result: The bucket ACL permissions are successfully applied on the bucket.
Prerequisites:
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the bucket to which you want to add the grantee.
3. Click Bucket settings and choose the Access Control List (ACL) under the Access management section.
5. Add any number of grantees to the bucket by following step 4.
6. Click Save to add the additional grantees with corresponding ACL permissions to the bucket.
Result: The grantees are successfully added to the bucket.
You can manage ACL permission for objects through the web console, IONOS S3 Object Storage API, or the CLI.
The following table shows the ACL permissions that you can configure for objects in a bucket in the IONOS S3 Object Storage.
These permissions are applied at individual object levels within a bucket, offering a high level of granularity in access control.
Note: For security, granting some of the access permissions, such as Public access WRITE_ACP and Authenticated users WRITE_ACP, is possible only through an API call.
To manage ACL for objects using the web console, follow these steps:
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the bucket under which the object ACL to be modified exists.
3. From the Objects list, choose the object for which ACL permissions are to be modified.
4. From the Object Settings, click Access Control List (ACL).
7. Click Save to apply the ACL settings to the object.
Result: The object ACL permissions are successfully applied to the object.
Prerequisites:
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the bucket under which the object ACL to be modified exists.
3. From the Objects list, choose the object for which you want to add the grantee.
5. Add any number of grantees to the object by following step 4.
6. Click Save to add the additional grantees with corresponding ACL permissions to the object.
Result: The grantees are successfully added to the object.
For more information on bucket policy configurations, see , supported actions and condition values, and .
This action is applicable only if the bucket uses Versioning and the schedule has been set. In a versioning-enabled bucket, when you delete the current version of an object, a "delete marker" replaces that object version and becomes the new current object version. All the older versions of the object are retained in the system and remain retrievable.
This action stops all incomplete multipart uploads and allows their automatic deletion, freeing up storage space and ensuring the bucket remains clean and organized. Multipart upload facilitates the uploading of large objects in parts; however, if an upload is left incomplete, its parts continue to consume storage space.
For more information, see .
Use the Lifecycle API to manage the Lifecycle rules.
Use the CLI to manage Lifecycle configuration.
Versioning allows you to keep multiple versions of the same object. For more information, see Versioning.
Use Bucket Policy instead of ACLs if you need to:
Use Share Objects with Pre-Signed URLs instead of ACL for granting temporary access to authorized users for a specified period, after which the URL expires.
4. Select the checkboxes against the access permissions to grant at each user level such as bucket owner, public access, authenticated users, and logging. For more information, see .
5. Add grantees to provide additional users with access permission to the bucket. For more information, see .
Make sure the canonical user ID of the grantee is known. To retrieve the ID, see .
The grantee should already exist. If not, create a user and retrieve the Canonical user ID by following the steps in .
4. In the Additional Grantees section, enter the retrieved Canonical user ID of the grantee, select the checkboxes on the ACL permissions to grant, and click Add. For ACL permissions, see .
Note: Granting access to a bucket for another IONOS user does not make the bucket appear in the user's S3 web console due to the S3 protocol's architecture. To access the bucket, the user must utilize other S3 Tools, as the granted access does not translate to interface visibility.
Use the Object Storage API to manage bucket ACL permissions.
Use the CLI to manage ACL permission for buckets.
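As an illustration of the API-only grants mentioned above, a PutBucketAcl request expresses grants through x-amz-grant-* headers. The sketch below assembles such headers; the canonical user ID is a placeholder, and the group URI is the standard S3 identifier for authenticated users.

```python
# Sketch of the grant headers a PutBucketAcl API call would carry to set
# permissions that the web console cannot grant. <ownerCanonicalUserId>
# is a placeholder for a real canonical user ID.
grants = {
    # Full control for the bucket owner.
    "x-amz-grant-full-control": 'id="<ownerCanonicalUserId>"',
    # WRITE for all authenticated users -- only grantable via the API.
    "x-amz-grant-write": 'uri="http://acs.amazonaws.com/groups/global/AuthenticatedUsers"',
}

# Every grant travels as an x-amz-grant-* request header.
assert all(header.startswith("x-amz-grant-") for header in grants)
```

An S3-compatible SDK accepts the same information as parameters (for example, GrantWrite in boto3) and translates them into these headers for you.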
5. Select the checkboxes against the access permissions to grant at each user level such as bucket owner, public access, and authenticated users. For more information, see .
6. Add grantees to provide additional users with access permission to the object. For more information, see .
Make sure the canonical user ID of the grantee is known. To retrieve the ID, see .
The grantee should already exist. If not, create a user and retrieve the Canonical user ID by following the steps in .
4. In the Additional Grantees section, enter the retrieved Canonical user ID of the grantee, select the checkboxes on the ACL permissions to grant, and click Add. For ACL permissions, see .
Use the Object Storage API to manage object ACL permissions.
Use to manage ACL permission for objects.
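Besides explicit grants, the S3 protocol also defines canned ACLs, which are shorthand names for common permission sets that can be passed to `PutObjectAcl` instead of a grant list. The sketch below validates and assembles such a call's arguments; whether every canned ACL listed is honored by a given S3-compatible endpoint is an assumption to verify.

```python
# Canned ACLs defined by the S3 protocol (endpoint support assumed).
CANNED_ACLS = {"private", "public-read", "public-read-write",
               "authenticated-read", "bucket-owner-read",
               "bucket-owner-full-control"}

def put_object_acl_args(bucket, key, canned_acl):
    """Keyword arguments for an S3 PutObjectAcl call using a canned ACL
    instead of an explicit grant list."""
    if canned_acl not in CANNED_ACLS:
        raise ValueError(f"unknown canned ACL: {canned_acl}")
    return {"Bucket": bucket, "Key": key, "ACL": canned_acl}
```

For example, `put_object_acl_args("my-bucket", "report.pdf", "public-read")` produces the arguments to make a single object publicly readable while leaving the rest of the bucket private.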
| User | Console permission | ACL permission | Access granted |
| --- | --- | --- | --- |
| Bucket Owner | Objects - Read | READ | Allows the grantee to read the object data and its metadata. |
| Bucket Owner | Objects - Write | WRITE | Allows the grantee to write object data and its metadata, including deleting objects. |
| Bucket Owner | Bucket ACL - Read | READ_ACP | Allows the grantee to read the ACL of the bucket. |
| Bucket Owner | Bucket ACL - Write | WRITE_ACP | Allows the grantee to write the ACL of the bucket. |
| Public access | Objects - Read | READ | Grants public read access to the objects in the bucket; anyone can access them. |
| Public access | Bucket ACL - Read | READ_ACP | Grants public read access to the bucket ACL; anyone can read it. |
| Authenticated users | Objects - Read | READ | Grants read access to the objects in the bucket to anyone with an IONOS account. |
| Authenticated users | Bucket ACL - Read | READ_ACP | Grants read access to the bucket ACL to anyone with an IONOS account. |
| Logging | Objects - Read | READ | Allows the grantee to read the log objects. |
| Logging | Objects - Write | WRITE | Allows the Log Delivery group to write log objects and their metadata to the bucket. |
| Logging | Bucket ACL - Read | READ_ACP | Allows the grantee to read the ACL of the bucket. |
| Logging | Bucket ACL - Write | WRITE_ACP | Allows the grantee to write the ACL of the bucket. |
| User | Console permission | ACL permission | Access granted |
| --- | --- | --- | --- |
| Bucket Owner | Objects - Read | READ | Allows the grantee to read the object data and its metadata. |
| Bucket Owner | Object ACL - Read | READ_ACP | Allows the grantee to read the object ACL. |
| Bucket Owner | Object ACL - Write | WRITE_ACP | Allows the grantee to write the ACL of the applicable object. |
| Public access | Objects - Read | READ | Grants public read access to the objects in the bucket; anyone can access them. |
| Public access | Object ACL - Read | READ_ACP | Grants public read access to the object ACL; anyone can read it. |
| Authenticated users | Objects - Read | READ | Grants read access to the objects in the bucket to anyone with an IONOS account. |
| Authenticated users | Object ACL - Read | READ_ACP | Grants read access to the object ACL to anyone with an IONOS account. |
Logging in IONOS S3 Object Storage enables the tracking and storage of requests made to your bucket. When you enable Logging, S3 automatically records access requests, such as the requester, bucket name, request time, request action, response status, and error codes, if any. By default, Logging is disabled for a bucket.
Security Monitoring: Tracks access patterns and identifies unauthorized or suspicious access to your data. In the event of a security breach, logs provide vital information for investigating the incident, such as IP addresses, request times, and the actions that were performed.
Auditing: Many industries require compliance with specific regulatory standards that mandate the monitoring and logging of access to data. S3 logging facilitates compliance with regulations like HIPAA, GDPR, or SOX by providing a detailed record of who accessed what data and when.
Troubleshooting: If there are issues with how applications are accessing your S3 data, logs can provide detailed information to help diagnose and resolve these issues. Logs show errors and the context in which they occurred, aiding in quick troubleshooting.
You can manage Logging using the web console, API, and CLI.
Prerequisite: Make sure you have provided access permissions for the Log Delivery Group. For more information, see Grant access permission for Logging.
To activate Logging, follow these steps:
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the bucket and click Bucket settings.
3. Go to Logging and click Browse S3 to select the destination bucket in the same region to store logs.
Note: Although it is possible to store logs in the same bucket being logged, it is recommended to use a different bucket to avoid potential complications with managing log data and user data together.
4. (Optional) Specify the prefix for log storage, providing flexibility in organizing and accessing your log data. If no prefix is entered, the log file name is derived from its time stamp alone.
5. Click Save.
Result: Logging is enabled for the selected bucket.
You can modify or deactivate Logging at any time with no effect on existing log files. Log files are handled like any other object. Using the Logging section in the Bucket settings, you can click Disable Logging to stop collecting log data for a bucket.
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the bucket for which Logging must be enabled.
3. Click Bucket settings and go to Access Control List (ACL).
4. For Logging, select the OBJECTS:WRITE and BUCKET ACL:READ checkboxes.
5. Click Save.
Result: The required access permissions to enable Logging for a bucket are granted.
Use the Logging API to configure and manage Logging for a bucket.
Use CLI to manage Logging for buckets.
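The Logging settings described above map to the standard S3 `BucketLoggingStatus` structure. Below is a minimal sketch assembling that payload, as accepted, for example, by boto3's `put_bucket_logging`; the bucket names are placeholders.

```python
import warnings

def build_logging_config(source_bucket, target_bucket, prefix=""):
    """Assemble a BucketLoggingStatus payload for S3 PutBucketLogging."""
    if source_bucket == target_bucket:
        # Permitted, but the docs recommend a separate bucket for logs.
        warnings.warn("storing logs in the logged bucket is not recommended")
    return {"LoggingEnabled": {"TargetBucket": target_bucket,
                               "TargetPrefix": prefix}}

def disable_logging_config():
    # An empty BucketLoggingStatus disables Logging; existing log
    # objects are left untouched.
    return {}
```

For example, `build_logging_config("data-bucket", "log-bucket", "access/")` directs access logs for `data-bucket` into `log-bucket` under keys starting with `access/`.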
Cross-Origin Resource Sharing (CORS) allows you to specify which domains can make cross-origin requests to your Object Storage. CORS is beneficial when you need to serve resources from your bucket to web applications hosted on different domains.
Cross-Domain Image Serving: Lets websites on other domains display images stored in S3 buckets without encountering cross-domain restrictions.
Multi-Domain: Supports complex web applications that operate across multiple domains, letting them seamlessly access and use data stored in S3 buckets.
Development and Testing Environment: Lets development and staging versions of your web applications, hosted on different domains, access the same S3 resources. You can configure the CORS headers to allow requests from the development or testing domains, ensuring seamless testing without running into cross-origin restrictions.
You can manage CORS using the web console, API, and CLI.
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the bucket for which the CORS rule must be configured and click Bucket settings.
3. Go to the CORS setting under Access management and click Add a rule.
4. Enter the following details to configure the CORS rule:
Rule name: Enter a name to identify the rule uniquely.
Allowed origins: Enter the complete domain of the client that you want to allow to access your bucket's content and click Add. The domain should start with a protocol identifier, such as HTTP, and end with a hostname; for example, https://*.example.com. You can add one or more origins.
Allowed headers (Optional): Specify the non-default headers that your Object Storage bucket must accept from your client and click Add. Default headers such as Content-Length and Content-Type are accepted automatically.
Allowed methods: Select the API method checkbox to allow interaction with your S3 bucket. You can enable or restrict the following API methods:
GET: Fetch the CORS configuration of the bucket.
POST: Create a new bucket.
PUT: Update the bucket's property or content.
HEAD: Retrieve the bucket's metadata.
DELETE: Delete a bucket.
Expose headers (Optional): Specify the headers in the response that you want users to be able to access from their applications and click Add.
Max age (Optional): Specify the time in seconds for how long a request’s verification is cached. The Object Storage bucket can accept more requests from the same origin while the verification is cached.
5. Click Add a rule.
Result: The CORS rule is successfully added.
Info: Using the same CORS bucket settings, you can enable, disable, modify, or delete an existing rule. It can take up to a few minutes for the deletion of a CORS rule to propagate fully.
Use the CORS API to manage the CORS rules.
Use the CLI to manage CORS configuration.
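Each rule created in the console corresponds to one `CORSRule` entry in the bucket's CORS configuration, with the fields described above. The sketch below assembles that payload as accepted, for example, by boto3's `put_bucket_cors`; the origin shown is a placeholder.

```python
def build_cors_rule(origins, methods, allowed_headers=None,
                    expose_headers=None, max_age=None):
    """Build one CORSRule entry for an S3 PutBucketCors payload."""
    valid_methods = {"GET", "PUT", "POST", "DELETE", "HEAD"}
    bad = set(methods) - valid_methods
    if bad:
        raise ValueError(f"unsupported CORS methods: {sorted(bad)}")
    rule = {"AllowedOrigins": list(origins), "AllowedMethods": list(methods)}
    if allowed_headers:
        rule["AllowedHeaders"] = list(allowed_headers)
    if expose_headers:
        rule["ExposeHeaders"] = list(expose_headers)
    if max_age is not None:
        # Seconds the browser may cache the preflight response ("Max age").
        rule["MaxAgeSeconds"] = max_age
    return rule

# A configuration may hold several rules (the console's "Add a rule").
cors_config = {"CORSRules": [
    build_cors_rule(["https://*.example.com"], ["GET", "HEAD"], max_age=3600),
]}
```

With this configuration, browsers on any `example.com` subdomain may read objects cross-origin, and preflight results are cached for an hour.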
Static Website Hosting enables hosting static content, including HTML, CSS, JavaScript, and images, directly from an Object Storage bucket, eliminating the need for an external web server. You can specify both an index page and an error page. Additionally, there is an option to link a custom domain.
Static Content Hosting: Directly serve HTML, CSS, JavaScript, and media files statically on a website.
Publish Landing Pages: Host promotional or event-specific landing pages with high availability.
Documentation Sites: Host product documentation or manuals with easy access to the users.
You can manage Static Website Hosting using the web console, API, and CLI.
Note: Static Website Hosting is disabled by default for a bucket. Enabling this setting will make all objects in the bucket publicly readable.
1. In the DCD, go to Menu > Storage > IONOS S3 Object Storage.
2. From the Buckets list, choose the bucket for which you want to manage Static Website Hosting.
3. Click Bucket settings and go to the Static Website Hosting setting under the Public access section.
4. Click Edit and add the following details:
Index document: Enter the file name that serves as an index document, for example, index.html. An index document is a default webpage that IONOS S3 Object Storage returns upon receiving a request to the root of a website or a subfolder.
Error document: Enter the file name of the HTML error document that is uploaded to the Object Storage bucket. An error document is a default HTML file with details you want the user to view when an error occurs.
5. Click Enable.
Result: Static Website Hosting is successfully enabled for a bucket.
Info: In the Static Website Hosting setting, choose Edit and click Disable to remove Static Website Hosting for a bucket.
Use the API to configure and manage Static Website Hosting for a bucket.
Use the CLI to manage Static Website Hosting.
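The index and error documents chosen in the steps above correspond to the standard S3 `WebsiteConfiguration` structure. The following sketch assembles that payload as accepted, for example, by boto3's `put_bucket_website`; the file names are placeholders.

```python
def build_website_config(index_doc="index.html", error_doc=None):
    """Assemble a WebsiteConfiguration payload for S3 PutBucketWebsite."""
    # IndexDocument.Suffix is appended to directory-style requests,
    # so a request for /docs/ serves /docs/index.html.
    config = {"IndexDocument": {"Suffix": index_doc}}
    if error_doc:
        config["ErrorDocument"] = {"Key": error_doc}
    return config
```

For example, `build_website_config("index.html", "404.html")` serves `index.html` at the site root and `404.html` when a request fails.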
Static Website Hosting is unsuitable for websites that require server-side processing, such as PHP or Python.