User access management is crucial for a secure cloud environment. It prevents unauthorized access, mitigates data breaches, and ensures regulatory compliance. This topic explores practical examples and best practices for securing user access in a public cloud environment.
For enhanced user access security, it is vital to use strong authentication mechanisms that apply to every user, regardless of whether they are the account owner or users added to the account with administrator or read-only roles. Keep the following in mind:
1. The service login credentials are confidential information and must be handled accordingly. They usually consist of a user identifier, such as an email address, and a password.
2. The service login credentials require a strong password. Common and weak passwords, such as the following, are frequently found in breach lists:
123456
password
qwerty
abc123
letmein
Warning: Using any of the above-mentioned passwords or easily guessable patterns is strongly discouraged, as they are highly vulnerable to brute-force attacks.
This information is based on analyses of various data breaches and password dumps that have occurred over the years. Websites and services such as Have I Been Pwned, as well as data security reports, compile and analyze this data to raise awareness of the importance of strong passwords and security practices. A minimal breach-list check is sketched after this list.
3. Complex passwords are difficult for humans to remember, so use a secure password vault to store credentials for multiple services safely. A vault also makes it practical to use a different complex password for each service and to introduce password retention periods. This protects you if one service gets compromised, because other services are not affected. In any case, passwords do not belong in easy-to-access places like post-its stuck to your monitor.
4. The essential step toward strong authentication is using unique, complex, and lengthy passwords. To strengthen authentication further, it is recommended that all user accounts use Multi-Factor Authentication (MFA). MFA requires multiple forms of verification, such as a password plus a one-time code, to reduce the risk of unauthorized access.
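To complement the password recommendations above, the following minimal Python sketch checks whether a password appears in known breach data using the Have I Been Pwned range API mentioned earlier. Only the first five characters of the SHA-1 hash are sent (k-anonymity), so the password itself never leaves your machine. Treat this as an illustrative check, not a complete password policy.

```python
import hashlib
import requests

def pwned_count(password: str) -> int:
    """Return how often a password appears in the Have I Been Pwned corpus."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # Only the 5-character hash prefix is transmitted (k-anonymity).
    response = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    response.raise_for_status()
    for line in response.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    for pw in ("123456", "correct horse battery staple"):
        hits = pwned_count(pw)
        print(f"{pw!r}: {'found ' + str(hits) + ' times in breach data' if hits else 'not found in breach data'}")
```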
The IONOS Cloud platform supports Multi-Factor Authentication, which users can enable for their accounts. Contract owners and users with administrative privileges can manage users within the User Management module, and enforce MFA on additional users, requiring them to complete the activation process before using granted services or resources.
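As an illustration of how one common MFA factor, a time-based one-time password (TOTP), works behind the scenes, the following Python sketch uses the third-party pyotp library. It is not the IONOS MFA implementation; enable and enforce MFA through the User Management module as described above. The account name and issuer are hypothetical.

```python
import pyotp

# Enrollment: generate a shared secret and a provisioning URI the user can
# scan as a QR code with an authenticator app (hypothetical account names).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
uri = totp.provisioning_uri(name="jane.doe@example.com", issuer_name="Example Cloud Account")
print("Provisioning URI:", uri)

# Login: the user submits the 6-digit code shown by the authenticator app.
submitted_code = totp.now()  # in real life this comes from the user
print("Code accepted:", totp.verify(submitted_code, valid_window=1))
```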
Adhering to the principle of least privilege is crucial for minimizing the risk of unauthorized or unintended actions by users. In the IONOS Cloud platform, the owner of a contract and users with administrative privileges will receive maximum access to the platform, which also includes the right to manage the privileges and permissions of other users.
It is recommended to adhere to the following practices to mitigate the impact of compromised accounts or insider threats:
assess which users must have administrative privileges.
evaluate granting users the minimum level of access necessary to perform their job functions.
regularly review access permissions and update them as roles and responsibilities change.
Within the IONOS Cloud platform, you can create custom group profiles with fine-grained privileges, limiting access to only the necessary resources and actions. By assigning roles based on job responsibilities and regularly reviewing access permissions, you can ensure users have the appropriate level of access without unnecessary privileges.
The IONOS role and permission concept is explained in detail below:
The first user who creates an account at IONOS becomes the account owner. This user:
receives all privileges and permissions.
cannot be revoked from this role.
will be the recipient of all legal communications. For example, changes in contract conditions like updates of terms and conditions, as well as invoices.
will always be allowed to access all resources within the account—independent of who created the resource/is the owner of the resource.
has the right to add or remove additional users to or from the account, respectively.
Perform regular access reviews and audits to identify and address security vulnerabilities or excessive user privileges:
Periodically review user accounts, permissions, and activity logs to ensure compliance with security policies.
Promptly revoke access for users who no longer require it, for instance, when a user's role within your organization has changed.
Establish routines to revoke access in a timely manner from users who have resigned.
The user can be promoted to the "Administrator" role, which automatically grants the user all privileges and permissions to all resources. The user in an administrator role:
has the same privileges as the account owner, except for changing the payment method.
is authorized to add or remove users from the account except the owner's user account.
A "User" can be given explicit privileges and permissions.
Regular access reviews and audits are essential for identifying security vulnerabilities and excessive user privileges. Use the IONOS Activity Log Service API to monitor user activity and generate access reports. Review the reports to identify anomalies, such as unusual login patterns, and take appropriate action to mitigate risks. The reports contain the following:
User login data indicates when the user logged in.
Device data indicates the device information and the IP address used.
Resource actions indicate which action was executed, for example, reading, creating, updating, or deleting resources.
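As a sketch of how such a review could be automated, the following Python snippet scans access-report records for simple anomalies (logins outside business hours and logins from previously unseen IP addresses). The record fields (user, timestamp, source_ip, action) are assumptions for illustration and do not reflect the exact schema of the IONOS Activity Log Service API.

```python
from datetime import datetime

# Hypothetical access-report records; the real Activity Log schema may differ.
records = [
    {"user": "alice", "timestamp": "2024-05-02T03:12:44Z", "source_ip": "203.0.113.7", "action": "login"},
    {"user": "alice", "timestamp": "2024-05-02T09:30:02Z", "source_ip": "198.51.100.23", "action": "volume.delete"},
]

known_ips = {"alice": {"198.51.100.23"}}  # IPs previously seen per user
BUSINESS_HOURS = range(7, 20)             # 07:00-19:59 considered normal

for rec in records:
    ts = datetime.fromisoformat(rec["timestamp"].replace("Z", "+00:00"))
    findings = []
    if ts.hour not in BUSINESS_HOURS:
        findings.append("outside business hours")
    if rec["source_ip"] not in known_ips.get(rec["user"], set()):
        findings.append("unknown source IP")
    if findings:
        print(f"Review {rec['user']} {rec['action']} at {ts.isoformat()}: " + ", ".join(findings))
```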
User awareness is crucial for securing cloud environments. Educate users about best practices for password management, phishing awareness, and recognizing social engineering attempts. Encourage the use of strong, unique passwords and regular password updates.
Secure user access management is crucial for maintaining resource integrity and confidentiality in public cloud environments. Implementing strong authentication, PoLP, regular access review, and user education enhances security posture. In the next topic, we will explore best practices for securing Virtual Machines (VMs) within the public cloud.
Virtual Machines (VMs) form the backbone of Infrastructure-as-a-Service (IaaS) products, providing flexible and scalable computing resources in the cloud. Ensuring the security of VMs is paramount to protecting your applications, data, and overall cloud infrastructure.
When discussing VMs, it is important to understand that the term refers not only to the server instance consisting of CPU and memory but also to the attached devices that provide network access through Network Interface Cards (NICs), the block storage volumes that host your applications or data, and the image the VM boots from. This topic applies to IONOS Cloud server products—Compute Engine and Cubes—including all attached devices that access networks through NICs, storage volumes, and boot images.
The service provider's responsibility in computing instances lies in maintaining the underlying infrastructure, including the physical servers, virtualization layer, and hypervisor. IONOS Cloud is responsible for ensuring the availability, reliability, and performance of these components.
The service provider is also responsible for offering a secure and compliant platform. It includes implementing security controls at the infrastructure level, such as network security, host-based firewalls, and intrusion detection systems. They should also ensure that the hypervisor and VM management systems are appropriately secured.
However, it's important to note that the service user is responsible for securing the actual compute instances, which include the following:
configuring and managing access controls,
securing operating systems and applications, and
implementing proper security measures within instances.
By following best practices, such as regularly updating and patching compute instances, service users can mitigate security risks and maintain a secure computing environment within the public cloud infrastructure. The shared responsibility model ensures collaboration between the service provider and the service user, where each party contributes to the overall security of the compute instances and infrastructure.
One crucial best practice for computing instances in a public cloud environment is regularly updating and patching your instances. This practice ensures that your instances have the latest security fixes and updates, minimizing the risk of exploited vulnerabilities. By staying up to date with patches, you enhance the overall security posture of your compute instances and reduce the potential for security breaches.
This routine applies regardless of whether you use public images offered by the service provider or private images uploaded by the service user. It also does not matter whether the instance boots from a block storage volume, an ISO image, or a network boot profile.
When provisioning VMs, start with secure base images provided by the IaaS provider or use trusted images from reputable sources. These images are typically preconfigured with security-hardened settings, reducing the attack surface. Regularly update the VM images to include the latest security patches and updates.
Stay vigilant about applying security updates and patches to your VMs. If VMs are not kept up to date, vulnerabilities can be exploited. Establish a patch management process to ensure timely updates, or consider utilizing automation tools for patch deployment.
Notably, this best practice applies not only to the operating system you install on your VM but also to all applications you later run on such an instance. There is no difference between open source applications, commercial applications, and applications developed by your organization. It is always advisable to address any security threat, regardless of its magnitude, immediately after a patch becomes available to minimize the risk of exploitation. If you are aware of a vulnerability within your image or application for which no fix is available yet, you may need to consider pausing or even decommissioning the service if the security risk is too high.
The principle of least privilege access also applies to VM operating systems and to applications deployed on these systems, such as databases. Your VM boots from an operating system that requires setting up user accounts, which enable users to log in to the VM operating system for further operation and configuration.
Grant administrative access only to trusted individuals who require it, and use separate non-administrative accounts for day-to-day operations. Operating systems allow user accounts with basic credentials, such as a username and password for remote desktop connections or terminal logins, as well as Secure Shell (SSH) keys.
IONOS Cloud offers a variety of public images. Each image requires at least one type of login: a password for a root/administrator account and/or a public SSH key. The root/administrator account password is not persisted at IONOS Cloud, nor is it written to any log files. It gets injected into the image and allows login to the VM as the root user. Since this uses the default root/administrator user of the respective image, it is recommended to create individual, personal accounts after the initial setup and disable the default user account.
While logging in via SSH key is the recommended way of accessing the VM, it is advisable to configure at least one user with a username and password, because your VM might become unreachable via the network, in which case you need to access it through a remote console.
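For example, an SSH key pair for such logins can be generated locally, and only the public key is handed to the provisioning process. The following Python sketch uses the cryptography library to create an Ed25519 key pair; generating the key with the ssh-keygen command line tool is equally valid. The file names are illustrative.

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

# Generate the key pair locally; the private key never leaves your machine.
private_key = ed25519.Ed25519PrivateKey.generate()

private_openssh = private_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.OpenSSH,
    encryption_algorithm=serialization.NoEncryption(),  # protect with a passphrase in practice
)
public_openssh = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.OpenSSH,
    format=serialization.PublicFormat.OpenSSH,
)

with open("id_ed25519", "wb") as f:
    f.write(private_openssh)
with open("id_ed25519.pub", "wb") as f:
    f.write(public_openssh)

# The content of id_ed25519.pub is what you provide during VM provisioning.
print(public_openssh.decode())
```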
Define strict rules to allow only necessary connections and protocols, blocking all others by default.
Restrict inbound network traffic to only essential ports and protocols, minimizing exposure to potential threats (a small audit sketch follows this list).
IONOS Cloud allows the configuration of firewall rules on each NIC individually.
Not every VM requires an interface to the public network.
Increase security by segmenting VMs into separate LANs so that only required applications can exchange data.
Secure access through restrictive roles and firewall rules.
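The following Python sketch illustrates the "allow only what is necessary" idea from the list above by auditing a set of firewall rules against a small allow-list. The rule fields (protocol, port range, source) are simplified assumptions for illustration, not the exact IONOS Cloud API schema.

```python
# Simplified rule model for illustration; real NIC firewall rules may carry more fields.
ALLOWED = {("TCP", 443), ("TCP", 22)}          # HTTPS plus SSH from a bastion
TRUSTED_SSH_SOURCE = "203.0.113.0/24"          # hypothetical admin network

rules = [
    {"protocol": "TCP", "port_start": 443, "port_end": 443, "source": "0.0.0.0/0"},
    {"protocol": "TCP", "port_start": 22, "port_end": 22, "source": "0.0.0.0/0"},
    {"protocol": "TCP", "port_start": 1024, "port_end": 65535, "source": "0.0.0.0/0"},
]

for rule in rules:
    span = rule["port_end"] - rule["port_start"]
    if span > 0:
        print(f"Too broad: {rule['protocol']} {rule['port_start']}-{rule['port_end']} open to {rule['source']}")
        continue
    key = (rule["protocol"], rule["port_start"])
    if key not in ALLOWED:
        print(f"Not on allow-list: {key} from {rule['source']}")
    elif rule["port_start"] == 22 and rule["source"] != TRUSTED_SSH_SOURCE:
        print("SSH should only be reachable from the trusted admin network")
```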
In today's digital landscape, where businesses rely heavily on the cloud for their computing needs, ensuring the resilience of VMs has become paramount. VM resilience refers to a VM's ability to withstand and recover from disruptions, failures, and unexpected events while maintaining its critical functionalities. This topic explores the importance of VM resilience and provides valuable insights into enhancing the resilience of your VMs in a public cloud environment. When building and deploying VMs, adopting a "design for failure" mindset is crucial: acknowledge that failures are inevitable and plan your VM architecture accordingly.
You must make an important design decision when assessing the criticality or impact of your application downtime. The higher the impact, the more you will need to create redundancy in your application by distributing VMs across multiple availability zones or even in different data center regions to reduce the impact of failures. You can take advantage of the high availability features offered by your cloud provider. These features maximize uptime and resilience. They ensure that VM instances are distributed across fault domains, power sources, or data centers, minimizing the impact of localized failures on your applications.
Each data center operates as a distinct entity without dependencies on other physical data centers, for example, regarding power supply, emergency power, cooling, network connectivity, and 24/7 operations. This isolates each data center from localized events, whether limited or extensive, such as fire, flooding, a regional power outage, or a regional network outage. For security reasons, physical addresses are not published.
Each physical data center has a security building block architecture, which means that sets of physical hardware devices are built in blocks (clusters) to limit the risk of total failure. Distributing your workloads across building blocks is not possible; the architecture mainly serves the cloud service provider's operational security. For example, if a gateway fails in one building block, it does not create network issues in other building blocks. Each building block has redundant service equipment to buffer failures of service components, as well as spare parts installed and configured in hot standby to immediately replace failing servers or storage components. The cloud service provider is entirely accountable for the frequent maintenance of the data centers, their building blocks, and service components.
As a cloud service user, you may need to assess whether your workloads should be deployed across multiple physical data center sites. Select data centers that fulfill your needs regarding security (closer or further distance) and that are suitable for your business. Create individual virtual data centers per location and set up the required infrastructure (VMs, storage, network) individually. The user must establish connectivity across the data center locations through a public internet connection and secure the traffic through a virtual private network (VPN) configuration.
Each data center allows the provisioning of VMs in different availability zones, ensuring that the VMs are placed on separate host servers. The VMs remain within the same region but are separated from each other, so that failures of a single piece of hardware (for example, a server kernel issue or PSU breakdown) or of a server rack (for example, a rack switch failure) can be mitigated by switching traffic to another instance running on different hardware within the same data center.
The other VMs can be either in hot mode (up and running) or cold mode (shut down and de-provisioned):
In hot mode, you can announce the new route to the gateway to switch traffic to another target, and operations can continue within seconds. Furthermore, hot mode allows you to sync data from your primary instance to your secondary instance.
In cold mode, you need to start the VM before you can switch traffic by announcing the new route to the gateway. In this mode, data synchronization is impossible because the VM is shut down and, therefore, cannot receive data updates. A synchronization after the start may not be possible if the primary instance is not accessible. This mode is best suited for stateless applications and services.
Creating redundancy is not the only way to achieve resilience. Often, a VM becomes unresponsive because it handles too much workload and cannot process requests fast enough. In such cases, you can utilize auto-scaling capabilities to dynamically adjust resources based on demand, ensuring high availability even during peak loads.
There are two ways of auto-scaling: scaling vertically by adding resources such as CPU and memory to a running instance, and scaling horizontally by adding further instances with the same configuration.
Load balancers are critical in distributing traffic across multiple VM instances, optimizing performance, and increasing availability. By evenly distributing workloads, load balancers improve resource utilization and enhance resilience by redirecting traffic away from unhealthy or underperforming VMs.
Enable logging for VMs to capture system and application logs, including security-related events. Centralize and analyze these logs to identify anomalies, detect potential security incidents, and respond proactively. Utilize security information and event management (SIEM) tools or log analysis services to gain valuable insights.
Deploy intrusion detection and prevention systems (IDPS) to monitor and protect VMs against malicious activities. IDPS solutions can detect and block unauthorized access attempts, malware, and other potential threats. Regularly update and configure IDPS rules to adapt to evolving security risks.
Perform regular vulnerability assessments and penetration testing on your VMs to identify and address potential security weaknesses. Utilize automated scanning tools or engage third-party security experts to assess the security posture of your VMs and applications.
Enhancing the resilience of your VMs is crucial to maintaining the availability, performance, and continuity of your applications in the cloud. By designing for failure, implementing automated monitoring, utilizing load balancing and fault-tolerant architectures, leveraging high availability features, and implementing robust security practices, you can ensure your VMs are resilient in the face of disruptions.
Remember, VM resilience is not a one-time task but an ongoing effort. Regularly review and update your resilience strategies to align with evolving business needs and emerging technologies. Investing in VM resilience builds a strong foundation for your cloud infrastructure, enabling your applications to thrive in the face of adversity.
Implementing these security best practices for VMs in your IaaS environment can bolster the protection and resilience of your cloud infrastructure. Secure base images, least privilege access, firewall protection, disk encryption, monitoring logs, patch management, intrusion detection, and vulnerability assessments are essential components of a robust VM security strategy. The next topic will delve into securing data storage in the public cloud.
Effective error code handling is crucial for building robust and reliable applications in the public cloud. Error codes, whether generated by public cloud services or HTTP protocols, provide valuable feedback on the status and outcome of operations. This topic explores best practices for handling error codes when using public cloud services, focusing on both cloud service-specific errors and HTTP protocol responses.
IONOS Cloud provides multiple interfaces to its products—Data Center Designer (DCD), APIs, SDKs, and Configuration Management tools.
The DCD includes functions that validate configurations of virtual data centers and will return errors or warnings before provisioning.
Errors appear in red, and they block the provisioning of a resource. Hence, we recommend resolving them before provisioning. For example, when a new block storage with a public image gets created, you must define either a root password to the image or apply an SSH key. If you do not configure either, the DCD returns an error and requests the user to resolve it.
Warnings are marked in yellow and indicate useful improvements but do not block provisioning. These warnings may indicate that the configuration is incomplete and that you have to complete it after the current provisioning run. For example, you can create an instance without a network interface, which means it cannot communicate with other instances or the public internet. As this is an uncommon use case, the DCD returns a notification that the configuration should be improved. The notification gets displayed before provisioning. You can fix the errors and click Provision Now to continue.
Apart from client-side validations in the DCD, there are other types of errors that require respective handling.
When dealing with HTTP-based cloud services, adhere to standard HTTP status codes to process the result of a request. IONOS APIs use well-known status codes, such as 200 for successful responses and 4xx and 5xx codes for different error scenarios. A few examples follow; however, as with the list of HTTP status codes itself, an exhaustive list of the potential use cases resulting in each code cannot be provided.
An official list can be retrieved from the HTTP Status Code Registry, maintained by the Internet Assigned Numbers Authority (IANA).
IONOS applications have two types of errors—fail-fast validation errors and provisioning errors.
Whenever you send a request to create, update, or delete a resource, the IONOS backend first applies a set of checks. Here are a few examples:
Is the user authorized to execute this change?
Does the contract contain enough resources to request a specific resource?
Does the API call request a resource with a configuration out of product range or a resource configuration not supported in the respective location?
The API will respond synchronously to such errors in the request call. The response contains an explicit and readable description of the error so it can be correlated with the initial API call.
You need to change your API call accordingly to meet the application criteria. For example, change the configuration to fit the product specifications and retry.
Note: The IONOS API only returns the first fail-fast validation error it encounters. It does not validate the entire request and return all errors but fails immediately after encountering the first error. Consequently, a retry may reveal the next fail-fast error that requires the user's attention.
Once the request passes all fail-fast validations, it will be queued for processing within the customer contract API queue.
The API call will return an HTTP Status 200 to signal the successful submission of the API call.
In the next step, IONOS processes the API call and may detect other issues, resulting in an incomplete order. As this happens asynchronously, the error is reported within the request's status.
In the response header of the initial API call, you will find the "Location" header, which contains the URI for the respective request status call, including its identifier. You can use this resource to retrieve information about the order's status. Remember to poll it frequently, as progress is reflected during the request's runtime.
If the request fails to complete successfully, the request status returns an error, including an error code in the format "VDC-x-xxx" (where 'x' represents a 1- to 4-digit numeric code).
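A minimal Python sketch of this flow is shown below: submit a request, surface a synchronous fail-fast error if one is returned, otherwise poll the URI from the Location header until the request reaches a final state. The endpoint path, payload, credentials, and status values are placeholders and assumptions for illustration; consult the IONOS Cloud API reference for the exact contract.

```python
import time
import requests

API = "https://api.ionos.com/cloudapi/v6"           # assumed base URL for illustration
AUTH = ("user@example.com", "not-a-real-password")   # placeholder credentials

payload = {"properties": {"name": "demo-datacenter", "location": "de/txl"}}  # hypothetical body
resp = requests.post(f"{API}/datacenters", json=payload, auth=AUTH, timeout=30)

if resp.status_code >= 400:
    # Fail-fast validation error: the body contains a readable description.
    print("Request rejected:", resp.status_code, resp.text)
else:
    status_url = resp.headers["Location"]  # URI of the request status resource
    while True:
        status = requests.get(status_url, auth=AUTH, timeout=30).json()
        # Assumed status values for illustration, e.g. QUEUED, RUNNING, DONE, FAILED.
        state = status.get("metadata", {}).get("status", "UNKNOWN")
        print("Provisioning state:", state)
        if state in ("DONE", "FAILED"):
            if state == "FAILED":
                print("Error details:", status.get("metadata", {}).get("message"))
            break
        time.sleep(5)  # poll periodically; progress is reflected at runtime
```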
Note: The details in this section apply to all IONOS Infrastructure services—servers, block storage, and network.
IONOS Object Storage explicitly has application error descriptions containing specific action recommendations for users. Here are a few examples:
When the format of the bucket name is wrong.
The bucket name already exists.
IONOS operates its own installation of Object Storage from a third party, which follows the standard S3 Object Storage REST error responses. Usually, the error response contains the content type and an HTTP status code in the 3xx, 4xx, or 5xx range. Furthermore, the error message contains information about the error in a message tag, which helps to identify and resolve the issue. Depending on the error message, you should retry or change the S3 Object Storage request to resolve the issue returned in the error message.
At this point, IONOS will not publish a list of potential error codes. It is planned for a later period, and once a list of error codes—including measures to handle these errors—is available, it will also be linked here.
When implementing automatic routines via API, SDKs, or Configuration Management tools, it is best to consider a few elements to help deal with potential errors returned from the cloud service provider application.
Comply with the HTTP status code standard: Following the standard ensures that your implementation and the interfaces it consumes interpret results consistently and deliver the most value.
Implement retry mechanism: Consider implementing retry mechanisms for transient errors. Public cloud services occasionally experience temporary issues, and retrying the operation after a short delay can often lead to successful outcomes. Implementing exponential backoff algorithms can help prevent overwhelming cloud services with retries during high error periods.
Implement circuit breakers: In addition to retry mechanisms, which suit errors that can safely be retried directly or after a short wait, you may need to use circuit breaker patterns to manage service failures gracefully. Circuit breakers act as safety mechanisms that detect recurring errors and temporarily halt requests to the failing service. This prevents cascading failures and helps the system recover from transient errors more effectively. A combined retry and circuit-breaker sketch follows this list.
Log errors and exceptions: Log errors and exceptions at appropriate levels in your implementation's logging system. Detailed error logs aid in post-mortem analysis, troubleshooting, and monitoring the health of your implementation. They can also be a helpful source for analysis and investigation on the cloud service provider's side, as they provide details of your use case and the data needed to reproduce the error, which is necessary to arrive at a proper solution or workaround. It may also turn out that an unintended error has been discovered and requires a fix by the cloud service provider.
Plan for unknown errors: Prepare your application to handle unknown or unexpected errors gracefully, in addition to logging errors and exceptions. Implement fallback mechanisms and default behavior for scenarios with undocumented or unrecognized error codes.
Continuously review and improve error handling: Regularly review your error handling mechanisms and identify areas for improvement. Embrace a culture of continuous improvement to ensure your application evolves along with changing cloud service conditions and requirements.
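The following Python sketch combines several of the recommendations above: retries with exponential backoff and jitter for transient (5xx) errors, no retries for client (4xx) errors, and a simple circuit breaker that pauses calls after repeated failures. It is a generic pattern, not an IONOS-specific client.

```python
import random
import time
import requests

class CircuitBreaker:
    """Open the circuit after `threshold` consecutive failures for `cooldown` seconds."""

    def __init__(self, threshold: int = 5, cooldown: float = 30.0):
        self.threshold, self.cooldown = threshold, cooldown
        self.failures, self.opened_at = 0, None

    def allow(self) -> bool:
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.cooldown:
            self.opened_at, self.failures = None, 0  # half-open: try again
            return True
        return False

    def record(self, success: bool) -> None:
        self.failures = 0 if success else self.failures + 1
        if self.failures >= self.threshold:
            self.opened_at = time.monotonic()

breaker = CircuitBreaker()

def call_with_retries(url: str, max_attempts: int = 5) -> requests.Response:
    for attempt in range(max_attempts):
        if not breaker.allow():
            raise RuntimeError("Circuit open: service considered unhealthy, try again later")
        resp = requests.get(url, timeout=30)
        if resp.status_code < 500:
            breaker.record(resp.ok)
            return resp                        # success, or a 4xx the caller must fix
        breaker.record(False)
        delay = min(2 ** attempt, 30) + random.uniform(0, 1)  # exponential backoff with jitter
        time.sleep(delay)
    raise RuntimeError(f"Giving up after {max_attempts} attempts")
```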
IONOS logs errors and uses them for failure analysis. As outlined earlier, errors are categorized into two segments. First, the application analyzes errors caused by invalid use of the interfaces, which might be caused by misleading documentation, product descriptions, or customer expectations. The second category collects errors caused by the IONOS application itself. In both cases, IONOS will analyze and review the errors to apply improvements to eliminate them. The data collected for this action aligns with data privacy regulations and will only be used to reduce errors and improve the product and its associated services, such as documentation. As mentioned earlier, it will not be used for other purposes, especially commercial purposes.
In almost all cases, data is a company's highest asset. Whether it is customer data, intellectual property, or research data, it may constitute a competitive advantage over other market participants and requires protection from unauthorized access and data loss.
It is essential to implement robust security measures to protect sensitive information stored in Network Block Storage. The previous topic covered securing access to the data through the VM instances. This topic explores security best practices for Network Block Storage and IONOS Object Storage in a public cloud environment, outlining the responsibilities of both the service provider and the service user.
Both Network Block Storage and IONOS Object Storage provide scalable and reliable storage solutions in the public cloud, empowering organizations to store and access their data efficiently.
Depending on the application requirements and the service provider's offering, Network Block Storage is based on different storage technologies, such as Hard Disk Drive (HDD) or Solid State Drive (SSD) network storage, which is installed in hardware servers separate from the compute resources' Central Processing Unit (CPU) and memory. In addition, the compute server hardware usually has Nonvolatile Memory Express (NVMe)-based SSDs installed.
The service provider is responsible for ensuring that no data gets lost at any time. Usually, service providers duplicate data within a storage server via RAID, so that if some of the storage disks fail, the entire data set can still be recovered from the data stored across the remaining disks within the storage server. Additionally, you can create a replication to a second volume.
IONOS Cloud goes even a step further. By default, every HDD and SSD block storage volume is provisioned double-redundantly. Firstly, the volume is created redundantly via RAID on one physical storage server, which creates resilience if a number of disks fail. In addition, the data of each volume is constantly synchronized with a volume on a second storage server within the same region. This is called a "two-leg" setup. On the second server, the data also persists in a RAID configuration. Even when an entire storage server has an outage and disks of the second storage server are failing, it is still possible to provide the service and to recreate the double-redundant setup in the background after fixing the disks and servers, restoring maximum protection.
Although IONOS already provisions Network Block Storage double-redundantly, it also allows users to configure zoning for HDD and SSD storage. We recommend using this feature to create placement groups, ensuring that certain volumes do not share the same physical storage pair. Configuring zoning allows you to separate data, preventing it from residing on the same physical storage server or even on the same disk.
Note: IONOS will not create redundancy across regions. It is within the responsibility of the cloud service user to:
distribute their workloads across different physical data center locations.
create redundancy by synchronizing data between these locations themselves.
Establish a comprehensive data backup and disaster recovery strategy for Network Block Storage.
Backups secure your data against multiple risk scenarios, data loss being one of them. They protect your data from external threats like exploits and ransomware attacks, as well as from erroneous operations by employees.
Regularly back up critical data and test the restoration process. You can use replication or snapshot features provided by the cloud service provider to ensure redundancy and data availability.
Backup solutions are the recommended choice for disaster recovery. Data backup solutions are highly effective and offer various options to meet your needs. Most of them let you control backup policies at a granular level, such as the frequency of backups, the type of backup (full or incremental), and the retention period of backups. They may also include features to encrypt backup data to protect sensitive information from unauthorized users.
Backups should be stored in a separate location from your infrastructure to ensure data is not lost in catastrophic events like fires or natural disasters. In such cases, you could recreate your infrastructure from this backup at a different location and continue your business after recovery.
In any case, it is the service user's responsibility to implement and manage regular data backups, test restoration processes, and leverage the provided backup and disaster recovery features to safeguard their storage data.
Backup data management solutions require secure user access management to ensure that the data is accessible only to authorized and qualified users. This is because the data could contain confidential or sensitive information.
Be aware that backups can be restored on different virtual instances in various locations, making them accessible to users who did not have access to the original instance from which the backup was retrieved.
Securing Network Block Storage in a public cloud environment requires a collaborative effort between the service provider and the user. By adhering to these security best practices, including access control as mentioned in the paragraphs above, network security, data backup and recovery, and security monitoring, organizations can enhance the protection of their sensitive data stored in Network Block Storage. By understanding the respective responsibilities, the service provider and the service user can work together to ensure the security and integrity of Network Block Storage in the public cloud.
Simple Storage Service (S3) is a widely used object storage service that provides scalable and durable storage for various data types in the cloud. To ensure the security of your data stored in S3 Object Storage, it is crucial to implement robust security practices. S3 Object Storage is a stand-alone service and can be used independently of any other service offered by a public cloud service provider. Usually, S3 Object Storage is accessible from the public internet, which makes it a sensitive data store and requires attention to essential security best practices to protect your data and maintain a secure storage environment. Therefore, this particular service requires its own assessment of best practices.
As with any other service, it is essential to start by implementing strong access controls to restrict unauthorized access. This needs to be separated into multiple disciplines.
IONOS Object Storage is based on a structure of data (objects) organized in a customer-defined structure (buckets). A bucket is owned by the user who created it. You cannot transfer ownership of buckets; hence, we recommend that you decide early in your planning who will own buckets and what your strategy will be when objects or entire buckets are migrated to another S3 user account of your organization.
Again, follow the principle of least privilege by granting only necessary permissions to users and roles. Review and configure bucket policies and ACLs carefully to prevent unintended public access or unauthorized permissions.
IONOS Object Storage allows buckets and objects to be made publicly available, meaning that even anonymous users can access objects within the bucket. This can also include permissions that allow anonymous users to read and write objects in buckets. It is highly recommended to implement regular security assessments and to monitor access policies to ensure that only explicitly approved objects and buckets get published and that access control lists restrict all other data to explicitly authorized users. Ensure that these users have access to objects and buckets according to their needs, such as read or write/delete permissions.
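Because IONOS Object Storage exposes an S3-compatible API, such a review can be scripted with standard S3 tooling. The following boto3 sketch lists buckets and flags ACL grants to the anonymous "AllUsers" or "AuthenticatedUsers" groups. The endpoint URL and credentials are placeholders; replace them with the values from your Object Storage key management.

```python
import boto3

# Placeholder endpoint and credentials for an S3-compatible Object Storage.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-objectstorage.invalid",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

PUBLIC_GROUPS = (
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
)

for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    for grant in acl["Grants"]:
        grantee = grant["Grantee"]
        if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GROUPS:
            print(f"Bucket {bucket['Name']}: {grant['Permission']} granted to {grantee['URI']}")
```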
Object Lock—also called WORM (Write once, read many)—is a bucket policy that allows you to lock objects for a period of time once written. If you implement an object lock policy for a bucket, users cannot alter or delete objects through S3 interfaces until the object age exceeds a specified retention period.
Object lock must be combined with versioning, as updating a locked object requires creating a new version of the respective object. Object lock is highly recommended to ensure that sensitive data is not deleted but also not changed, such as compliance-relevant data, financial information for yearly accounting audits, or legal requirements or regulations.
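A hedged sketch of configuring this with S3-compatible tooling is shown below: the bucket is created with Object Lock enabled (which implies versioning) and a default retention period is set. Verify against the current product documentation whether every parameter is supported by IONOS Object Storage; the endpoint, credentials, bucket name, and retention values are placeholders.

```python
import boto3

# Placeholder endpoint and credentials for an S3-compatible Object Storage.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-objectstorage.invalid",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Object Lock must be enabled at bucket creation time; it implies versioning.
s3.create_bucket(Bucket="audit-records", ObjectLockEnabledForBucket=True)

# Default retention: objects cannot be altered or deleted for 365 days (COMPLIANCE mode).
s3.put_object_lock_configuration(
    Bucket="audit-records",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}},
    },
)

print(s3.get_object_lock_configuration(Bucket="audit-records"))
```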
Data stored in S3 object storage must be protected from any loss by proper data replication. IONOS Cloud runs its S3 object storage clusters in an erasure coding setup, sharding each object across multiple nodes. An object's data is stored on different physical storage nodes within the storage cluster. Depending on the erasure coding setup, multiple storage nodes can fail while the object remains accessible from the remaining storage nodes, and high availability can be recovered once the broken node is fixed or replaced. Data is then rebalanced to the new storage node.
While erasure coding is a local replication feature, IONOS S3 Object Storage also offers cross-region replication at the bucket level. This feature replicates any object added to a bucket to a different bucket, which can be configured in a different S3 Object Storage region. In case of major outages at the primary S3 Object Storage location, you can switch to your secondary site, which contains a similar bucket and objects. Cross-region replication is also useful when you need to interact with sensitive data frequently and with low latency, so that an S3 Object Storage location close to your infrastructure is required, while new objects are still stored at a remote location in case of major disasters within your primary location.
By following these security best practices for IONOS Object Storage, you can enhance the security of your data and protect against unauthorized access, data breaches, and accidental deletions. These practices include secure access controls, secure data transfers, object versioning, monitoring, auditing, and data resilience. Careful management of bucket policies and ACLs is essential to maintaining a secure S3 environment. By incorporating these practices into your S3 implementation, you can ensure the confidentiality, integrity, and availability of your data stored in S3.
Activate and configure the IONOS Cloud platform's firewall rules to control incoming and outgoing network traffic to VMs. Remember the following:
For VMs that need to access the internet but shall not be accessible from the internet, it is recommended to set up a Source NAT Gateway that masquerades the private network and its connected VMs from the public internet while still allowing VMs to access services outside the cloud. IONOS Cloud offers a managed NAT Gateway that allows individual and granular configuration of NAT rules, including selective enabling of IP endpoints and ports.
Regularly review and update firewall rules to align with your security policies. IONOS Cloud offers each VM NIC the option to configure Flow Logs. This service records network traffic and stores it in a configurable S3 Object Storage bucket. The service allows you to configure whether incoming or outgoing (or both) network packets are recorded and whether accepted or rejected (or both) packets are included. Based on this information, you can analyze whether your firewall rules are correct and efficient or whether changes to the existing configuration are required to ensure verified access only.
In combination with the aspects above, remember to carefully design the network between your VMs:
IONOS Cloud gives you access to all available data center locations, which you can use to set up your infrastructure according to your requirements. Each data center listed in the document is a separate physical location within the metro region mentioned and, therefore, several hundred kilometers away from the others.
The IONOS status page publishes the uptime status and availability of all data centers. We recommend that you subscribe to this page to receive updates. You can retrieve the status of every service available in the respective location, such as Compute Engine or IONOS Object Storage. The website also includes information on scheduled maintenance and current incidents, including an expected resolution time frame.
For this scenario, IONOS Cloud allows you to apply IP Failover Configurations that announce the IP to multiple nodes and let you define the primary VM. For more information, see .
You can scale vertically by adding resources like CPU and memory, which gives more power to your already running instance. IONOS Cloud allows you to add CPU and memory resources to almost all public and private images while the VM continues to run and does not have to be rebooted, which ensures your VM remains operational.
You can also scale horizontally, adding further nodes with the same configuration to your infrastructure. IONOS Cloud provides a VM Auto Scaling capability that monitors the workload of your VMs. It lets you define threshold limits that trigger events to scale your infrastructure and add or remove instances from your setup.
IONOS Cloud offers two Managed Load Balancers: and . VM Auto Scaling also features a Managed Application Load Balancer as part of its services. For more information about associating an Application Load Balancer with VM Auto Scaling, see .
IONOS Cloud offers a logging service that allows the collection of logs from your VMs so that you can analyze data from multiple instances in a centralized repository and validate your security routines.
IONOS Cloud operates a Distributed Denial-of-Service (DDoS) protection service that is applied to all networks by default. It analyzes traffic and routes suspicious activity into a scrubbing service that filters malicious packets before they reach your virtual data center and its provisioned components, such as virtual servers. This service can be expanded with DDoS advanced protection, allowing you to permanently route traffic through the scrubbing platform and providing access to network security resources for further consulting, proactive security monitoring, and threat mitigation. For more information, see .
IONOS maintains a dedicated page to publish known vulnerabilities in its platform, with links to external vulnerability registers from the third parties it uses to provide the cloud service. Visit the page frequently and check for the latest news and updates on mitigations and fixes.
In some cases, the response directly returns the reason for the error. In other cases, the error code advises you to contact IONOS Cloud support for further assistance. At this point, IONOS will not publish a list of potential error codes. It is planned for a later period, and once a list of error codes—including measures to handle these errors—is available, it will also be linked here.
IONOS Object Storage is available via the web interface and the API. The web interface is based on the same API, so this documentation covers the API only. IONOS Object Storage APIs are based on HTTP and follow the same standards mentioned in the section on HTTP status codes; hence, we recommend reading it beforehand for further details.
Integrate error handling with the cloud provider's Service Level Agreements (SLA) and service catalog: Understand the SLA and the service catalog of the cloud services you are using. Integrate error-handling practices with the defined SLA to manage response times and escalations during prolonged service disruptions. The service catalog provides details about the service range boundaries, which helps you understand the limits of the product.
Note: Snapshots of your block storages usually contain a copy of your volume stored within the same region or availability zone as your infrastructure. Snapshots are recommended for temporary and short-term recovery points. For example, running an application update may require a rollback in case it does not succeed. For more information, see .
IONOS Cloud offers a backup service that gives full access to a series of the backup features mentioned above and many more. Alternatively, service users can use solutions that create backups from volumes and persist the data on IONOS Object Storage, thus enabling the combination of this storage type with several additional security features. For more information, see .
First, grant access to the IONOS Object Storage. IONOS has integrated its S3 Object Storage into its user management. IONOS Cloud Contract Owners and IONOS users with the role "Administrator" have access to IONOS Object Storage by default. Other users need to be granted access by receiving the respective privilege within the user management. As S3 Object Storage has its own permission management, IONOS will enable or revoke access for users that have the respective role or privilege assigned to their account. This concept helps you grant access following the least-privilege principle, as mentioned multiple times throughout this guideline.
Second, configure the access controls of buckets and objects. IONOS Object Storage allows defining bucket policies and Access Control Lists (ACLs).
Protect your data during transit to and from any S3 Object Storage using secure protocols and mechanisms. Utilize SSL/TLS encryption (HTTPS) to secure data transfer to and from IONOS S3 Object Storage.
As IONOS Object Storage also offers publishing of URLs for particular objects, it is possible to enable HTTPS for the shared object URL, which you can share with users who are supposed to access the document via the respective link. In addition to enabling public URL access to objects, you can add further security by limiting the maximum number of downloads of the object and setting an expiry date for the public URL. Access to the object automatically terminates when the number of downloads is exceeded or the access time expires.
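As an S3-compatible illustration of sharing an object over HTTPS for a limited time, the following boto3 sketch generates a pre-signed download URL with an expiry. Note that the download-count limit mentioned above is a feature of the IONOS sharing mechanism rather than of pre-signed URLs; the endpoint, credentials, bucket, and key are placeholders.

```python
import boto3

# Placeholder endpoint and credentials for an S3-compatible Object Storage.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-objectstorage.invalid",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# HTTPS download link for a single object, valid for one hour.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "shared-reports", "Key": "q2-report.pdf"},
    ExpiresIn=3600,  # seconds until the link expires
)
print("Share this link:", url)
```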
Enable object versioning on IONOS Object Storage to protect against accidental deletions or modifications. Versioning allows you to maintain multiple versions of an object and recover from unintended changes or deletions. Regularly test object versioning to ensure proper functionality and recovery.
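Enabling and verifying versioning can also be done through the S3-compatible API, as in the following hedged boto3 sketch. The endpoint, credentials, bucket, and object key are placeholders.

```python
import boto3

# Placeholder endpoint and credentials for an S3-compatible Object Storage.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-objectstorage.invalid",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

s3.put_bucket_versioning(
    Bucket="application-data",
    VersioningConfiguration={"Status": "Enabled"},
)

# Upload twice, then list versions to confirm both revisions are retained.
s3.put_object(Bucket="application-data", Key="config.json", Body=b'{"feature": false}')
s3.put_object(Bucket="application-data", Key="config.json", Body=b'{"feature": true}')

versions = s3.list_object_versions(Bucket="application-data", Prefix="config.json")
for v in versions.get("Versions", []):
    print(v["Key"], v["VersionId"], "latest" if v["IsLatest"] else "")
```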
IONOS Object Storage can record activities within a bucket and store the data in an explicit destination bucket. This can be a useful audit trail to ensure that only authorized users have access to buckets and objects and to track which users have changed objects. In combination with versioning, it helps to create transparency about activities within your bucket and to recover objects if needed.
IONOS Object Storage supports Object Lock via its S3 API, so it can be used directly or through third-party clients that support object lock. Configuring Object Lock via the IONOS S3 Object Storage console will be provided soon.
S3 is a managed data storage service operated by a public cloud service provider. The provider is responsible for maintaining the S3 object storage and installing updates and patches whenever required. They are also responsible for operating interfaces.
The IONOS status page publishes the uptime status and availability of all data centers. You can retrieve the status of every service available in the respective location, such as Compute Engine or S3 Object Storage. The website also includes information on scheduled maintenance and current incidents, including an expected resolution time frame. We recommend that you subscribe to the page to receive updates.
HTTP Status Codes and Descriptions:
200: The status code is returned when the respective API call is accepted. It may still return an error from the backend application (see the next chapter), but the call construct itself was valid and consistent.
401: The status code is returned when the credentials used for an API call are incorrect, for example, when there is a typo in the username or password, or when the user does not exist within the IONOS user management. IONOS will not return details on whether the username or the password is incorrect, so as not to reveal whether a user exists within its database.
404: The status code is returned when a resource was supposed to be retrieved but does not exist (resource not found). This can happen when an incomplete resource Universal Unique Identifier (UUID) was used or when the retrieved resource was deleted before and, therefore, no longer exists.
500: The status code is returned when the API backend encounters an unexpected error. The user cannot resolve this error. If the issue persists after retries, it can be helpful to contact IONOS Cloud support.
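To show how these codes translate into client behavior, the following Python sketch maps each of the codes above to a handling strategy consistent with the list; it is a generic illustration rather than an official IONOS client, and the URL and credentials are placeholders.

```python
import requests

def handle_response(resp: requests.Response) -> None:
    """Branch on the status codes described above."""
    if resp.status_code == 200:
        print("Accepted; check the asynchronous request status for backend errors.")
    elif resp.status_code == 401:
        print("Authentication failed; verify username and password, do not retry blindly.")
    elif resp.status_code == 404:
        print("Resource not found; check the UUID or whether the resource was deleted.")
    elif resp.status_code >= 500:
        print("Backend error; retry with backoff and contact IONOS Cloud support if it persists.")
    else:
        print(f"Unhandled status {resp.status_code}; log it for later analysis.")

# Example usage with placeholder URL and credentials.
resp = requests.get(
    "https://api.ionos.com/cloudapi/v6/datacenters",
    auth=("user@example.com", "not-a-real-password"),
    timeout=30,
)
handle_response(resp)
```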
The Best Practices Guideline is the central source of information about recommendations for using public cloud services securely.
Welcome to the guide on IONOS Cloud security best practices!
As a leading public cloud service provider, IONOS understands the importance of ensuring a secure and efficient cloud environment for our valued customers. This topic highlights the shared responsibility model, outlining the roles and responsibilities of both the service provider and the service user.
In today's dynamic digital landscape, organizations embrace public cloud services for flexibility, scalability, and cost-effectiveness, sharing security and operational duties. IONOS is responsible for maintaining the infrastructure, network, and hypervisor layers, ensuring availability, reliability, performance, and data privacy.
However, it is important to note that the service user is responsible for securing and managing cloud platform workloads, applications, and data. It involves understanding security best practices, configuring access controls, and updating software. Adhering to a shared responsibility model maximizes benefits while maintaining a secure environment.
This guide covers securing public cloud services, including user access management, network security, data protection, monitoring, logging, and incident response.
We are committed to providing the necessary resources, support, and expertise to help you navigate this journey and leverage the full potential of public cloud services.
Disclaimer: This site contains references to external links which are labeled accordingly. IONOS does not have control over the content or availability of the linked websites, nor does IONOS endorse or guarantee their accuracy, relevance, or completeness. IONOS is not responsible for any issues that may arise from accessing or using these external websites, and IONOS recommends reviewing the terms and privacy policies of each respective external website.