Best Practice Guideline

The Best Practice Guideline is the central source of recommendations for using public cloud services securely.

Introduction

Welcome to the guide on IONOS Cloud security best practices!

IONOS, as a leading public cloud service provider, understands the importance of ensuring a secure and efficient cloud environment for our valued customers. This guide aims to shed light on the shared responsibility model, outlining the roles and responsibilities of both the service provider and the service user.

In today's dynamic digital landscape, organizations embrace public cloud services for flexibility, scalability, and cost-effectiveness, sharing security and operational duties. IONOS is responsible for maintaining infrastructure, network, and hypervisor layers, ensuring availability, reliability, performance, and data privacy.

However, it is important to note that the service user bears the responsibility for securing and managing cloud platform workloads, applications, and data. This involves understanding security best practices, configuring access controls, and updating software. Adhering to a shared responsibility model maximizes benefits while maintaining a secure environment.

Best practices for IONOS Cloud

This guide covers secure public cloud services, including user access management, network security, data protection, monitoring, logging, and incident response.

We are committed to providing the necessary resources, support, and expertise to help you navigate this journey and leverage the full potential of public cloud services.

Disclaimer: This site contains references to external links which are labeled accordingly. IONOS does not have control over the content or availability of the linked websites, nor does IONOS endorse or guarantee their accuracy, relevance, or completeness. IONOS is not responsible for any issues from accessing or using these external websites, and IONOS recommends reviewing the terms and privacy policies of each respective external website.

Best practices for cloud security products

Secure user access management

User access management is crucial for a secure cloud environment, preventing unauthorized access, mitigating data breaches, and ensuring regulatory compliance. This chapter explores practical examples and best practices for securing user access in a public cloud environment.

Implementing strong authentication mechanisms

To enhance the security of user access, it is vital to use strong authentication mechanisms for every user, whether it is the owner of the account or a user added to the account with an administrator or read-only role. Keep the following in mind:

  1. Service login credentials are confidential information and must be handled as such. They typically consist of a user identifier, for example an email address, and a strong password.

  2. Avoid common and weak passwords; many of them appear in public password breach lists. Here are some commonly used weak passwords frequently found in such lists:

  • 123456

  • password

  • qwerty

  • abc123

  • letmein

Note: Using any of the above-mentioned passwords or easily guessable patterns is strongly discouraged, as they are highly vulnerable to brute-force attacks.

This information is based on analyses of data breaches and password dumps that have occurred over the years. Websites and services such as Have I Been Pwned, along with data security reports, compile and analyze this data to raise awareness of the importance of strong passwords and security practices.
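As a sketch, a password policy check can combine a minimum length with a denylist of known-breached passwords. The short list below is illustrative only; a real check should consult a full breach corpus such as the Have I Been Pwned dataset:

```python
# Sketch: reject passwords that are too short or appear on a known-breach
# denylist. The list below is illustrative; in practice, check against a
# full breach corpus such as the Have I Been Pwned dataset.
COMMON_PASSWORDS = {"123456", "password", "qwerty", "abc123", "letmein"}

def is_acceptable_password(password: str, min_length: int = 12) -> bool:
    """Return True only if the password is long enough and not a known weak one."""
    if len(password) < min_length:
        return False
    if password.lower() in COMMON_PASSWORDS:
        return False
    return True

print(is_acceptable_password("letmein"))         # False: on the denylist and too short
print(is_acceptable_password("x7!Lq9#vTz2&pW"))  # True: long and not a known weak password
```

A length check alone is not sufficient, which is why the denylist lookup is applied even to passwords that meet the minimum length.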

  1. Complex passwords are difficult for humans to remember, so use a trusted password vault to store credentials for multiple services securely. A vault also makes it easy to use a different complex password for each service and to introduce password retention periods. If one service gets compromised, your other services remain unaffected. In any case, passwords do not belong in easy-to-access places such as sticky notes on your monitor.

  2. Using unique, complex, and lengthy passwords is the basic step of strong authentication. To strengthen it further, consider enabling Multi-Factor Authentication (MFA) for all user accounts. MFA requires multiple forms of verification, such as a password and a unique code, reducing the risk of unauthorized access.

The IONOS Cloud platform supports Multi-Factor Authentication, which users can enable for their accounts. Contract owners and users with administrative privileges can manage users within the User Management module and enforce MFA for other users, requiring them to complete the activation process before using granted services or resources.

Applying the Principle of Least Privilege (PoLP)

Adhering to the principle of least privilege is crucial for minimizing the risk of unauthorized or unintended actions by users. In the IONOS cloud platform, the owner of a contract and users with administrative privileges will receive maximum access to the platform, which also includes the right to manage the privileges and permissions of other users.

It is recommended to assess which users truly need administrative privileges, to grant users the minimum level of access necessary to perform their job functions, and to regularly review and update access permissions as roles and responsibilities change. This practice helps mitigate the impact of compromised accounts or insider threats.

Within the IONOS Cloud platform, you can create custom group profiles with fine-grained privileges, limiting access to only the necessary resources and actions. By assigning roles based on job responsibilities and regularly reviewing access permissions, you can ensure users have the appropriate level of access without unnecessary privileges. In more detail, the IONOS role and permission concept works as follows:

The first user to create an account at IONOS becomes the owner of this account, with all privileges and permissions. This role cannot be revoked from this user. This user is also the recipient of all legal communication, for example, changes to contract conditions such as updates of the terms and conditions, as well as invoices. The owner's user account is always allowed to access all resources within the account, independent of who created or owns a resource. This user has the right to add further users to the account as well as remove them.

First, a user can be promoted to the "Administrator" role, which automatically grants all privileges and permissions to all resources. An Administrator is not allowed to change the payment method; otherwise, this role has the same privileges as the account owner. The "Administrator" is also authorized to add or remove users from the account, except the owner's user account.

Second, a "User" can be given explicit privileges as well as permissions. A "privilege" is a grant for certain actions, for example, creating a new data center, creating snapshots, using S3 Object Storage, or accessing the Activity Log. "Privileges" are associated with actions that either create additional costs for the account (the user receives the privilege to create resources or use services on behalf of the account owner, which adds charges to the account) or allow access to services with sensitive data (for example, a user with Activity Log access can retrieve usage profiles of their organization). The list of privileges grows as new services are reflected in privilege management. Newly added services are not granted to users by default; the "owner" or "administrators" must grant them explicitly. When a user has the "privilege" to create a resource, for example, a new virtual data center, the user becomes the owner of this resource. Even when the privilege is later revoked, the user still has access to the resources created in the meantime, since the user owns them.

Apart from that, a "User" can receive permissions to access certain resources, such as a virtual data center or backups, that have been created by other users. In contrast to "privileges", the user cannot create a new resource but can access already existing ones. The user may receive explicit "read-only" permissions (the user can open and look into the resource) as well as "write" permissions. In "read-only" mode, the user can open or retrieve resources and read their configuration. This can include sensitive information such as the IP addresses of a VM; as IONOS does not persist VM root passwords, that information cannot be retrieved. The user cannot change resources, their configuration, or any other parameter.

"Write" permissions allow users to add, change, and delete elements, for example: add VMs to a virtual data center, start or stop a VM within a virtual data center, and delete a VM from a virtual data center. IONOS does not offer granular access management for individual devices within an infrastructure setup. In other words, a "write" permission allows all operations on the resource itself as well as on elements within the resource, for example, servers within a shared virtual data center. Excluding delete operations is not supported; they are included in "write" permissions, as deletion is considered an intended change to the shared resource. In addition, users can receive the "share" permission, which allows them to share a resource with other groups they are members of. When sharing with another group, they can only grant permissions they themselves have on the resource.

Sensitive resources in this context are images, snapshots, and backups, since these resources can contain confidential data. All of them are explicitly shareable resources. Again, "Administrators" always have access to these resources and can create, use, update, and delete them. They can also share these resources with other members of the account in the "User" role. "Users" must receive explicit share permission to access these resources. For example, when a "User" has access to backups, they are allowed to use a backup to restore the data on a new instance and thereby gain access to the data. You must be aware of whether the user has the qualifications as well as the permission to do so. You may need to decide per individual user and per sensitivity of the data included in the data source, for example, customer data or confidential data. The same applies to images and snapshots.

IONOS Cloud allows creating custom group profiles, limiting access to resources and actions, assigning roles based on job responsibilities, and reviewing access permissions to ensure appropriate user access without unnecessary privileges. For more information, see Assigning privileges to Groups.
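The privilege and permission concept described above can be sketched as a simple model. All class, method, and attribute names here are illustrative and do not mirror the IONOS API; the sketch only captures the rules that privileges govern creation, creators become owners, and owners and administrators retain access:

```python
# Simplified model of the role concept: privileges allow creating resources;
# permissions grant access to existing ones. Names are illustrative, not the
# IONOS API.
class User:
    def __init__(self, name: str, is_admin: bool = False):
        self.name = name
        self.is_admin = is_admin
        self.privileges = set()      # e.g. {"create_datacenter"}
        self.permissions = {}        # resource -> {"read"} or {"read", "write"}
        self.owned_resources = set()

    def can_create(self, action: str) -> bool:
        return self.is_admin or action in self.privileges

    def create_resource(self, action: str, resource: str) -> None:
        if not self.can_create(action):
            raise PermissionError(f"{self.name} lacks privilege {action!r}")
        self.owned_resources.add(resource)   # the creator becomes the owner

    def can_access(self, resource: str, mode: str = "read") -> bool:
        # Admins and owners always have access, even after a privilege is revoked.
        if self.is_admin or resource in self.owned_resources:
            return True
        return mode in self.permissions.get(resource, set())

alice = User("alice")
alice.privileges.add("create_datacenter")
alice.create_resource("create_datacenter", "vdc-1")
alice.privileges.discard("create_datacenter")   # privilege revoked ...
print(alice.can_access("vdc-1", "write"))       # ... but ownership keeps access: True
```

Note how revoking the privilege does not remove access to already-created resources, which is exactly why regular access reviews matter.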

Performing regular access reviews and audits is crucial for identifying and addressing any security vulnerabilities or excessive user privileges. Periodically review user accounts, permissions, and activity logs to ensure compliance with security policies and promptly revoke access for users who no longer require it, for instance, when the role of the user within your organization has changed and does not require access anymore. Establish routines to revoke access in a timely manner from users that have resigned.

Regularly review and audit user access

Regular access reviews and audits are essential for identifying security vulnerabilities and excessive user privileges. Use the IONOS Activity Log Service API to monitor user activity and generate access reports: it records when a user logged in, from which device and IP address, and which action was executed, for instance, reading, creating, updating, or deleting resources. Review the reports to identify anomalies, such as unusual login patterns, and take appropriate action to mitigate risks.
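This kind of review can be partially automated. The following sketch assumes login records have already been exported as (user, IP) pairs in chronological order; the record format is illustrative and not the Activity Log API schema:

```python
# Sketch: flag logins from IP addresses a user has not been seen on before.
# The record format is illustrative; adapt it to the fields your activity
# log actually exports (login time, device, IP, action).
from collections import defaultdict

def flag_new_ip_logins(records):
    """records: iterable of (user, ip) pairs in chronological order.
    Returns (user, ip) pairs seen for the first time after the user's
    initial login - candidates for manual review."""
    seen = defaultdict(set)
    flagged = []
    for user, ip in records:
        if seen[user] and ip not in seen[user]:
            flagged.append((user, ip))
        seen[user].add(ip)
    return flagged

logins = [("alice", "10.0.0.5"), ("alice", "10.0.0.5"),
          ("alice", "203.0.113.9"),           # new IP for alice -> flagged
          ("bob", "10.0.0.7")]
print(flag_new_ip_logins(logins))             # [('alice', '203.0.113.9')]
```

A flagged entry is not proof of compromise, but it is exactly the kind of anomaly an access review should follow up on.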

User awareness is crucial for securing cloud environments. Educate users about best practices for password management, phishing awareness, and recognizing social engineering attempts. Encourage the use of strong, unique passwords and regular password updates.

Conclusion

Secure user access management is crucial for maintaining resource integrity and confidentiality in public cloud environments. Implementing strong authentication, PoLP, regular access review, and user education enhances security posture. In the next chapter, we will explore best practices for securing Virtual Machines (VM) within the public cloud.

Best practices for cloud server security products

Virtual machines and network

VMs form the backbone of Infrastructure-as-a-Service (IaaS) products, providing flexible and scalable computing resources in the cloud. Ensuring the security of VMs is paramount to protecting your applications, data, and overall cloud infrastructure.

When talking about virtual machines, we do not mean only the server instance consisting of CPU and memory, but also the attached devices: the Network Interface Cards (NICs) that give access to networks, the block storage volumes that host your applications and data, and the image the virtual machine boots from. This chapter applies to the IONOS Cloud server products Compute Engine and Cubes, including all attached devices.

Service provider's responsibility

The responsibility of the service provider in relation to compute instances lies in maintaining the underlying infrastructure, including the physical servers, virtualization layer, and hypervisor. IONOS Cloud is responsible for ensuring the availability, reliability, and performance of these components.

Additionally, the service provider is responsible for offering a secure and compliant platform. This includes implementing security controls at the infrastructure level, such as network security, host-based firewalls, and intrusion detection systems. They should also ensure that the hypervisor and virtual machine management systems are appropriately secured.

However, it's important to note that the responsibility for securing the actual compute instances lies with the service user. This includes configuring and managing access controls, securing operating systems and applications, and implementing proper security measures within the instances themselves.

By following best practices, such as regularly updating and patching compute instances, service users can mitigate security risks and maintain a secure computing environment within the public cloud infrastructure. The shared responsibility model ensures collaboration between the service provider and the service user, where each party contributes to the overall security of the compute instances and infrastructure.

Use secure and updated images

One crucial best practice for compute instances in a public cloud environment is to update and patch your instances regularly. This practice ensures that your instances have the latest security fixes and updates, minimizing the risk of exploited vulnerabilities. By staying up to date with patches, you enhance the overall security posture of your compute instances and reduce the potential for security breaches.

This routine always applies independently of using public images offered by the service provider or private images uploaded by the service user. It also does not matter if the instance is booting from a block storage instance, ISO image, or network boot profile.

When provisioning VMs, start with secure base images provided by the IaaS provider or use trusted images from reputable sources. These images are typically preconfigured with security-hardened settings, reducing the attack surface. Regularly update the VM images to include the latest security patches and updates.

Stay vigilant about applying security updates and patches to your VMs. Vulnerabilities can be exploited if VMs are not kept up to date. Establish a patch management process to ensure timely updates, or consider utilizing automation tools for patch deployment.

It is noteworthy that this best practice applies not only to the operating system you install on your VM but also to all applications you run on such an instance later. This applies equally to open-source and commercial applications, as well as applications developed by your organization. Any security threat must be fixed as soon as possible once a patch becomes available to reduce the risk of exploits. Even when you are aware of a vulnerability within your image or application that does not yet have a fix, you may need to consider pausing or even decommissioning a service if the security risk is too high.

Apply least privilege access

The principle of least privilege access also applies to VM operating systems as well as to applications deployed on these systems, for example, databases. Your VM boots from an operating system that requires setting up user accounts, which enable users to log in to the operating system and carry out further operation and configuration.

  1. Grant administrative access only to trusted individuals who require it, and use separate non-administrative accounts for day-to-day operations. Operating systems support user accounts with basic credentials such as user name and password for remote desktop connections and terminal logins, as well as Secure Shell (SSH) keys.

  2. IONOS Cloud offers a variety of public images. Each image requires at least one type of login: a password for a root/administrator account and/or a public SSH key. The password for the root/administrator account is neither persisted at IONOS Cloud nor logged in any logfiles. It is injected into the image and allows login to the VM as the root user. Since it uses the default root/administrator user of the respective image, it is recommended to create individual, personal accounts after the initial setup and disable the default user account.

  3. While login via SSH key is the recommended way of accessing the VM, it is advisable to also configure at least one user with username and password, because your VM might become unreachable over the network, in which case you need to access it through a remote console.

Enable firewall protection

Activate and configure the built-in firewall capabilities of the IONOS Cloud platform to control incoming and outgoing network traffic to VMs. Define strict rules to allow only necessary connections and protocols, blocking all others by default. Restrict inbound network traffic to only essential ports and protocols, minimizing exposure to potential threats. IONOS Cloud allows the configuration of firewall rules on each NIC individually.
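The deny-by-default evaluation described above can be sketched as follows. The rule fields are illustrative and do not reflect the IONOS firewall rule schema; the point is that traffic passes only when an explicit allow rule matches:

```python
# Sketch of deny-by-default firewall evaluation: traffic passes only if an
# explicit allow rule matches; everything else is blocked. Rule fields are
# illustrative, not the IONOS firewall rule schema.
def is_allowed(packet: dict, rules: list) -> bool:
    """packet: dict with 'protocol' and 'port'.
    rules: list of dicts with 'protocol' and a (start, end) 'port_range'."""
    for rule in rules:
        lo, hi = rule["port_range"]
        if packet["protocol"] == rule["protocol"] and lo <= packet["port"] <= hi:
            return True
    return False  # default deny

allow_rules = [
    {"protocol": "TCP", "port_range": (443, 443)},   # HTTPS
    {"protocol": "TCP", "port_range": (22, 22)},     # SSH
]

print(is_allowed({"protocol": "TCP", "port": 443}, allow_rules))   # True
print(is_allowed({"protocol": "TCP", "port": 8080}, allow_rules))  # False
```

Keeping the allow list short and explicit makes it easy to audit which connections a NIC actually accepts.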

For virtual machines that need to access the internet but shall not be accessible from the internet, it is recommended to set up a Source NAT gateway that masquerades the private network and its connected VMs from the public internet while still allowing the VMs to access services outside the cloud. IONOS Cloud offers such a NAT Gateway, which allows individual and granular configuration of NAT rules, including selective enabling of IP endpoints and ports.

Regularly review and update firewall rules to align with your security policies. IONOS Cloud offers each VM NIC the option to configure Flow Logs. This service records network traffic and stores it in a configurable S3 Object Storage bucket. You can configure whether incoming or outgoing (or both) network packets are captured, and whether accepted or rejected (or both) packets are recorded. Based on this information, you can analyze whether your firewall rules are correct and efficient or whether changes to the existing configuration are required to ensure verified access only.
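A sketch of how such recorded flow data can be analyzed, assuming records have been exported as (source IP, action) pairs; the layout is illustrative and not the exact Flow Logs format:

```python
# Sketch: summarize flow-log-style records to spot sources with many rejected
# packets, which can indicate either an attack or an overly strict rule.
# The record layout is illustrative, not the exact Flow Logs format.
from collections import Counter

def rejected_by_source(flow_records):
    """flow_records: iterable of (source_ip, action), action ACCEPT/REJECT.
    Returns (source_ip, reject_count) pairs, most frequent first."""
    counts = Counter(src for src, action in flow_records if action == "REJECT")
    return counts.most_common()

records = [("198.51.100.4", "REJECT"), ("198.51.100.4", "REJECT"),
           ("10.0.0.5", "ACCEPT"), ("203.0.113.9", "REJECT")]
print(rejected_by_source(records))  # [('198.51.100.4', 2), ('203.0.113.9', 1)]
```

A source that accumulates rejects deserves a look: either the firewall is doing its job against a scanner, or a legitimate client is being blocked by a rule that needs adjustment.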

Network segmentation

In combination with the aspects mentioned before, you should carefully design the network between your virtual machines in the form of private networks. Not every virtual machine requires an interface to the public network. Increase security by moving virtual machines into separate LANs so that only the required applications can exchange data, and secure access through restrictive roles and firewall rules.

VM Resilience

In today's digital landscape, where businesses rely heavily on the cloud for their computing needs, ensuring the resilience of virtual machines (VMs) has become paramount. VM resilience refers to the ability of VMs to withstand and recover from disruptions, failures, and unexpected events while maintaining their critical functionalities. This section explores the importance of VM resilience and provides valuable insights into enhancing the resilience of your VMs in a public cloud environment. When building and deploying VMs, adopting a "design for failure" mindset is crucial. Acknowledge that failures are inevitable, and plan your VM architecture accordingly.

An important design decision has to be taken when assessing the criticality or impact of downtime of your application. The higher the impact, the more you will need to create redundancy in your application by distributing VMs across multiple availability zones or even in different data center regions to reduce the impact of failures. You can take advantage of the high availability features offered by your cloud provider. These features are designed to maximize uptime and resilience. They ensure that VM instances are distributed across fault domains, power sources, or data centers, minimizing the impact of localized failures on your applications.

IONOS Cloud gives you access to all its data center locations (list of data centers) in which you can set up your infrastructure according to your requirements. Each data center listed in the document is a separate physical location within the metro region mentioned and is therefore several hundred kilometers away from the others. Each data center is operated as a distinct entity without dependencies on other physical data centers for power supply, emergency power, cooling, network connectivity, 24/7 operations, etc. This ensures the isolation of each data center from local or extensive events such as fire, flooding, a regional power outage, or a regional network outage. For security reasons, no physical addresses are published.

Each physical data center has a security building block architecture, which means that sets of physical hardware devices are built in blocks (clusters) to limit the risk of total failure. Distributing your workloads across building blocks is not possible; the architecture mainly serves the operational security of the cloud service provider. For example, if a gateway fails in one building block, it does not create network issues in other building blocks. Each building block has redundant service equipment to buffer failures of service components, and spare parts are installed and configured in hot standby so that failing server or storage components can be replaced immediately. Frequent maintenance of the data centers, their building blocks, and service components is fully the accountability of the cloud service provider.

The uptime status and availability of all data centers are published on the IONOS Cloud Status page. You can retrieve the status of every service available in that respective location, such as Compute Engine or S3 Object Storage. The website also includes information on scheduled maintenance and current incidents, including an expected resolution time frame. We recommend that you subscribe to this page to receive any updates.

As a cloud service user, you may need to assess whether your workloads shall be deployed across multiple physical data center sites. Select data centers that fulfill your needs regarding security (closer or further distance) and that are suitable for your business. Create individual virtual data centers per location and set up the required infrastructure (VMs, storage, network) individually. Connectivity across the data center locations must be established by the customer through a public internet connection, combined with securing traffic through a virtual private network (VPN) configuration.

Each data center allows you to provision VMs in different availability zones, ensuring that the VMs run on separate host servers. The VMs remain within the same region but are separated from each other, so that failures of single pieces of hardware, for example, a server kernel issue or PSU breakdown, or of a server rack, for example, a rack switch failure, can be mitigated by switching traffic to another instance running on different hardware within the same data center.

  • For this scenario, IONOS Cloud allows you to apply IP Failover Configurations that announce the IP to multiple nodes and let you define the primary VM. For more information, see IP Failover configurations.

  • The other VMs can be either in hot mode (up and running) or cold mode (shut down and deprovisioned). In hot mode, you can announce to the gateway to switch traffic to another target, and operations can continue within seconds; furthermore, you can sync data from your primary instance to your secondary instance. In cold mode, you need to start the VM before you can switch traffic by announcing the new route to the gateway. Cold mode does not allow data synchronization, as the VM was shut down and could not receive data, and a synchronization after the start may not be possible if the primary instance is not accessible. This mode is suited for stateless applications and services.

Resilience cannot be achieved by redundancy alone. Often, a VM becomes unresponsive because it handles too much workload and cannot process requests fast enough. In such cases, you can utilize auto-scaling capabilities to dynamically adjust resources based on demand, ensuring high availability even during peak loads.

There are two ways of auto-scaling.

  1. You can scale vertically by adding resources like CPU and memory, which gives more power to your already running instance. IONOS Cloud allows you to add CPU and memory resources to almost all public as well as private images while the VM continues to run, so it does not have to be rebooted and remains operational.

  2. You can scale horizontally which will add further nodes with the same configuration to your infrastructure. IONOS Cloud provides a VM Auto Scaling capability that monitors the workload of your VMs. It lets you define threshold limits that trigger events to scale your infrastructure and add or remove instances from your setup.
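The threshold-based trigger described for horizontal scaling can be sketched as a simple decision function. The threshold values and function name are illustrative assumptions, not VM Auto Scaling defaults:

```python
# Sketch of a horizontal-scaling decision: compare average utilization against
# thresholds and decide whether to add or remove instances. Threshold values
# are illustrative assumptions, not VM Auto Scaling defaults.
def scaling_decision(cpu_utilizations, scale_out_above=0.80, scale_in_below=0.20,
                     min_instances=1):
    """cpu_utilizations: per-instance CPU load in [0, 1].
    Returns +1 (add an instance), -1 (remove one), or 0 (stay as-is)."""
    avg = sum(cpu_utilizations) / len(cpu_utilizations)
    if avg > scale_out_above:
        return +1
    if avg < scale_in_below and len(cpu_utilizations) > min_instances:
        return -1
    return 0

print(scaling_decision([0.92, 0.88, 0.85]))  # 1  (overloaded -> scale out)
print(scaling_decision([0.05, 0.10]))        # -1 (idle -> scale in)
print(scaling_decision([0.50]))              # 0  (within thresholds)
```

The `min_instances` floor prevents scale-in from removing the last instance, which keeps the service reachable even when it is idle.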

Utilize load balancing

Load balancers play a critical role in distributing traffic across multiple VM instances, optimizing performance, and increasing availability. By evenly distributing workloads, load balancers not only improve resource utilization but also enhance resilience by redirecting traffic away from unhealthy or underperforming VMs.

IONOS Cloud offers two Managed Load Balancers - for Network Load Balancing as well as Application Load Balancing. VM Auto Scaling also includes a Managed Application Load Balancer. For more information about associating an Application Load Balancer (ALB) with VM Auto Scaling, see Associate an Application Load Balancer.

Monitor and analyze VM logs

Enable logging for VMs to capture system and application logs, including security-related events. Centralize and analyze these logs to identify anomalies, detect potential security incidents, and respond proactively. Utilize security information and event management (SIEM) tools or log analysis services to gain valuable insights.

IONOS Cloud offers a Logging service that allows the collection of logs from your virtual machine so that you can analyze all data running across multiple instances in a centralized repository and validate your security routines.

Enable intrusion detection and prevention

Deploy intrusion detection and prevention systems (IDPS) to monitor and protect VMs against malicious activities. IDPS solutions can detect and block unauthorized access attempts, malware, and other potential threats. Regularly update and configure IDPS rules to adapt to evolving security risks.

IONOS Cloud operates a Distributed Denial-of-Service (DDoS) protection service applied to all networks by default. It analyzes traffic and routes suspicious activity into a scrubbing service that filters malicious packets before they reach your virtual data center and its provisioned components, such as virtual servers. This service can be expanded with DDoS advanced protection, which allows you to permanently route traffic through the scrubbing platform and provides access to network security resources for further consulting, proactive security monitoring, and threat mitigation. For more information, see DDoS Protect.

Regularly conduct vulnerability assessments

Perform regular vulnerability assessments and penetration testing on your VMs to identify and address potential security weaknesses. Utilize automated scanning tools or engage third-party security experts to assess the security posture of your VMs and applications.

IONOS maintains a vulnerability register to publish known vulnerabilities of its platform, as well as links to external vulnerability registers from third parties it uses to provide the cloud service. Visit the page frequently and check for the latest news and updates on mitigations and fixes.

Conclusion

Enhancing the resilience of your virtual machines is crucial to maintaining the availability, performance, and continuity of your applications in the cloud. By designing for failure, implementing automated monitoring, utilizing load balancing and fault-tolerant architectures, leveraging high availability features, and implementing robust security practices, you can ensure your VMs are resilient in the face of disruptions.

Remember, VM resilience is not a one-time task but an ongoing effort. Regularly review and update your resilience strategies to align with evolving business needs and emerging technologies. By investing in VM resilience, you build a strong foundation for your cloud infrastructure, enabling your applications to thrive in the face of adversity.

By implementing these security best practices for virtual machines in your IaaS environment, you can bolster the protection and resilience of your cloud infrastructure. Secure base images, least privilege access, firewall protection, disk encryption, monitoring logs, patch management, intrusion detection, and vulnerability assessments are essential components of a robust VM security strategy. The next chapter will delve into securing data storage in the public cloud.

Best practices for cloud storage products

Data storage

In almost all cases, data is a company's most valuable asset. Whether it is customer data, intellectual property, or research data, it can be a competitive advantage over other market participants and requires protection from unauthorized access and loss.

It is essential to implement robust security measures to protect sensitive information stored in network block storage. Securing access to the data through the VM instances was already covered in the previous section. This chapter explores security best practices for network block storage and S3 Object Storage in a public cloud environment, outlining the responsibilities of both the service provider and the service user.

Both network block storage and S3 Object Storage provide scalable and reliable storage solutions in the public cloud, empowering organizations to store and access their data efficiently.

Network block storage

Depending on the application requirements and the service provider's offering, network block storage is based on different storage technologies, such as Hard Disk Drive (HDD) or Solid State Drive (SSD) network storage, which is installed in hardware servers separate from the compute resources, namely Central Processing Unit (CPU) and memory. There is also Nonvolatile Memory Express (NVMe) based SSD storage, which is usually installed within the compute server hardware.

The service provider is responsible for ensuring that no data gets lost at any time. Usually, service providers duplicate data within a storage server via RAID so that some of the storage disks can fail while the entire data set can still be recovered from the remaining data stored across the other disks in the storage server. Additionally, you may be able to create a replication to a second volume.

IONOS Cloud goes even a step further. Every HDD and SSD block storage volume is provisioned with double redundancy by default. First, the volume is created redundantly on one physical storage server via RAID, which provides resilience if a number of disks fail. In addition, the data of each volume is constantly synchronized with a volume on a second storage server within the same region. This is called a “two-leg” setup. On the second server, the data is also persisted in a RAID configuration. Even when an entire storage server has an outage and disks of the second storage server are failing, it is still possible to provide the service and recreate the double-redundant setup in the background after the disks and servers are fixed, restoring maximum protection.

Network block storage resilience

Although IONOS already provisions network block storage with double redundancy, it also allows users to configure availability zones for HDD as well as SSD storage. Make use of this feature to create placement groups so that certain volumes do not end up on the same physical storage pair. Configuring zones allows you to prevent data from residing on the same physical storage server or even on the same disk.

Please note: IONOS will not create redundancy across regions. This is the responsibility of the cloud service user. As mentioned before, cloud service users can distribute their workloads across different physical data center locations and create redundancy by synchronizing data between these locations themselves.

Regular data backup and disaster recovery of network block storage devices

Establish a comprehensive data backup and disaster recovery strategy for network block storage. Regularly back up critical data and test the restoration process. You can use replication or snapshot features provided by the cloud service provider to ensure redundancy and data availability. Backups secure your data against multiple risk scenarios. Losing data is just one of them. Backups are saving your data from external threats like exploits, ransomware attacks, or erroneous operations by employees.

Note:

Snapshots of your block storages usually contain a copy of your volume that is stored within the same region or availability zone as your infrastructure. It is recommended to use snapshots for temporary and short-term recovery points. For example, when running an update of your application, you may require a rollback in case it does not succeed. For more information, see Snapshots.

Backup solutions are the recommended choice for disaster recovery. Data backup solutions are more powerful and provide you with different options. Most of them have in common that you can control backup policies more granularly: the frequency of backups, the backup type (full or incremental backups), and the retention period of backups. They may also include features to encrypt backup data, since it may contain sensitive data that shall not be accessible to unauthorized users. Backups can be stored at a different location than your infrastructure to ensure data does not get lost in catastrophic events like large-scale fires or natural disasters. In such cases, you could recreate your infrastructure from this backup at a different location and continue your business after recovery.
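A retention policy of the kind described above can be sketched as a small pruning helper. The policy shape (a flat retention window in days) is an assumption for illustration; real backup products offer richer schedules such as grandfather-father-son rotation:

```python
from datetime import datetime, timedelta

def backups_to_prune(backup_times, retention_days, now=None):
    """Return the backup timestamps that have outlived the retention period.

    backup_times: list of datetime objects, one per backup.
    retention_days: how long each backup must be kept.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=retention_days)
    return [t for t in backup_times if t < cutoff]
```

With a 14-day retention window, a backup from four weeks ago would be returned for pruning while a recent one would be kept.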

IONOS Cloud offers a direct backup solution that gives full access to a series of backup features mentioned above and many more. Alternatively, service users can use third-party solutions like Veeam that create backups from volumes and persist data on an IONOS Cloud S3 Object Storage, thus enabling the combination of this storage type with several additional security features. For more information, see S3 Object Storage.

In any case, it is the responsibility of the service user to implement and manage regular data backups, test restoration processes, and leverage the provided backup and disaster recovery features to safeguard their storage data.

Backup data management solutions require secure user access management, as backup data must be accessible only to authorized and qualified users since it could contain confidential or sensitive data. Keep in mind that backups could be restored on other virtual instances in other locations and are therefore accessible to users who did not have access to the instance from which the backup was taken.

Conclusion

Securing network block storage in a public cloud environment requires a collaborative effort between the service provider and the service user. By adhering to these security best practices, including access control as mentioned in the paragraphs above, network security, data backup and recovery, and security monitoring, organizations can enhance the protection of their sensitive data stored in network block storage. By understanding the respective responsibilities, both the service provider and the service user can work together to ensure the security and integrity of network block storage in the public cloud.

S3 object storage

S3 (Simple Storage Service) is a widely used object storage service that provides scalable and durable storage for various data types in the cloud. To ensure the security of your data stored in S3 Object Storage, it is crucial to implement robust security practices. S3 Object Storage is a stand-alone service and can be used independently of any other service offered by a public cloud service provider. S3 Object Storage is usually accessible from the public internet, which makes it a sensitive data store and requires attention to essential security best practices to protect your data and maintain a secure storage environment. It therefore warrants a dedicated assessment of best practices for this particular service.

Secure access control

As with any other service, it is essential to start by implementing strong access controls to restrict unauthorized access. This needs to be separated into multiple disciplines.

First, access must be granted to IONOS S3 Object Storage itself. IONOS has integrated its S3 Object Storage into its user management. IONOS Cloud Contract Owners as well as IONOS users with the role "Administrator" have access to IONOS S3 Object Storage by default. Other users need to be granted the respective privilege through the group management inside the user management. Since S3 Object Storage has its own permission management, IONOS takes care of enabling or revoking access for users that have the respective role or privilege assigned to their account. This concept helps you grant access according to the least-privilege principle mentioned throughout this guideline.

IONOS S3 Object Storage organizes data (objects) in customer-defined containers (buckets). A bucket is owned by the user who created it. Transferring ownership of buckets is not possible. So, early in your planning, you may need to decide who shall own which buckets and what your strategy is when objects or entire buckets are migrated to another S3 user account of your organization.

Second, there are the access controls on buckets and objects themselves. IONOS S3 Object Storage allows you to define fine-grained access policies. Again, follow the principle of least privilege by granting only necessary permissions to users and roles. Review and configure bucket policies and ACLs carefully to prevent unintended public access or unauthorized permissions.

IONOS S3 Object Storage allows making buckets and objects publicly available, which means that even anonymous users can access objects within the bucket; this can also include permissions for anonymous users not only to read but also to write objects to buckets. It is highly recommended to implement regular security assessments and monitor access policies to ensure compliance, so that only explicitly approved objects and buckets get published and all other data is secured by access control lists limited to explicit users. Make sure that these users have access to objects and buckets according to their needs, such as read or write/delete permissions.
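Part of such a periodic review can be automated. The sketch below assumes grants in the shape returned by the standard S3 GetBucketAcl API and flags permissions granted to the anonymous "AllUsers" group; it is an illustration of the check, not a complete audit tool:

```python
# URI that the standard S3 ACL model uses for the anonymous "everyone" group.
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def public_grants(grants):
    """Return the permissions in an ACL grant list that apply to anonymous users.

    `grants` follows the shape of the S3 GetBucketAcl response:
    [{"Grantee": {"Type": "Group", "URI": ...}, "Permission": "READ"}, ...]
    """
    return [
        g["Permission"]
        for g in grants
        if g["Grantee"].get("URI") == ALL_USERS
    ]
```

Running this over each bucket's ACL and alerting on any non-empty result surfaces unintended public access early.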

Secure data transfers

Protect your data during transit to and from any S3 Object Storage by using secure protocols and mechanisms. IONOS S3 Object Storage endpoints use SSL/TLS encryption (HTTPS) to secure data transfers to and from IONOS S3 Object Storage.

As IONOS S3 Object Storage also offers publishing URLs for particular objects, you can enable HTTPS on the static download link that you share with users who are supposed to access the document. In addition to enabling public URL access to objects, you may want to add further security by limiting the maximum number of downloads of the object and setting an expiry date for the public URL. Once the number of downloads has been exceeded or the access time has expired, access to the object terminates automatically.
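The two limits described above, a download cap and an expiry date, can be modeled with a small helper. This is a client-side illustration of the rule only; the actual enforcement happens on the IONOS side:

```python
from datetime import datetime

def link_is_valid(downloads_so_far, max_downloads, expires_at, now=None):
    """Check whether a shared object link may still be used.

    Mirrors the two limits: a maximum download count and an expiry date.
    """
    now = now or datetime.utcnow()
    return downloads_so_far < max_downloads and now < expires_at
```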

Implement object versioning and logging

Enable object versioning on IONOS S3 Object Storage to protect against accidental deletions or modifications. Versioning allows you to maintain multiple versions of an object and recover from unintended changes or deletions. Regularly test object versioning to ensure proper functionality and recovery.

IONOS S3 Object Storage contains the option to record logs of all activities within a bucket and store the data in an explicit destination bucket. This can be a useful audit trail to verify that only authorized users access buckets and objects and to see which users have changed objects. In combination with versioning, it helps to create transparency on activities within your bucket and recover objects if needed.
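The audit-trail idea can be sketched as a small filter over parsed log records. The record shape and the operation names below (in the style of standard S3 server access logs) are illustrative assumptions, not the documented IONOS log format:

```python
# Operations that modify objects; a representative subset in the naming
# style of standard S3 server access logs (assumed for illustration).
WRITE_OPS = {"REST.PUT.OBJECT", "REST.POST.OBJECT", "REST.DELETE.OBJECT"}

def users_who_modified(records):
    """Given parsed access-log records ({"requester": ..., "operation": ...}),
    return the set of requesters that performed write operations."""
    return {r["requester"] for r in records if r["operation"] in WRITE_OPS}
```

Comparing this set against the list of users who are supposed to have write access is a quick consistency check for your bucket permissions.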

Utilize object lock

Object Lock - also called WORM (Write Once, Read Many) - is a bucket policy that allows you to lock objects for a period of time once they are written. When an object lock policy is implemented for a bucket, no user can alter or delete objects through S3 interfaces until the object's age exceeds the specified retention period.

Object Lock must be combined with versioning, since an update of a locked object requires creating a new version of the respective object. Object Lock is highly recommended to ensure that sensitive data is neither deleted nor changed - for instance, compliance-relevant data, financial information for yearly accounting audits, or data subject to other legal requirements or regulations.
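The retention rule behind Object Lock can be illustrated with a small helper. The date handling below is a simplified sketch; the actual retention semantics are enforced server-side by the S3 API:

```python
from datetime import datetime, timedelta

def is_locked(written_at, retention_days, now=None):
    """WORM-style check: an object version stays immutable until its
    age exceeds the retention period configured for the bucket."""
    now = now or datetime.utcnow()
    return now < written_at + timedelta(days=retention_days)
```

While `is_locked` would return True, delete or overwrite requests against that object version are rejected; updates instead create a new version.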

IONOS S3 Object Storage supports Object Lock via the S3 API, so it can be used directly or through third-party clients that support Object Lock. Configuring Object Lock via the IONOS S3 Object Storage console will be available soon.

Data resilience

S3 is a managed data storage service operated by the public cloud service provider. The provider is responsible for maintaining the S3 Object Storage and installing updates and patches whenever required. The provider is also responsible for operating secure data transfer interfaces, as mentioned above.

Data stored on an S3 Object Storage must be protected from loss through proper data replication. IONOS Cloud runs its S3 Object Storage clusters in an erasure coding setup that shards each object across multiple nodes. The data of an object is stored on different physical storage nodes within the storage cluster. Depending on the erasure coding setup, multiple storage nodes can fail while the object remains accessible from the remaining storage nodes, and high availability can be restored once the broken node is fixed or replaced and the data is rebalanced to the new storage node.

While erasure coding replicates data locally, IONOS S3 Object Storage also offers cross-region replication at the bucket level. This feature causes any object added to a bucket to be replicated to a different bucket, which can be configured in a different S3 Object Storage region. In case of a major outage at the primary S3 Object Storage location, you can simply switch to your secondary site, which contains an equivalent bucket and objects. Cross-region replication is also useful when you need to interact with sensitive data often and fast, so that an S3 Object Storage location close to your infrastructure is required, for example for low latency, while new objects shall still be stored at a remote location in case of major disasters at your primary location.

The uptime status and availability of all data centers are published on the IONOS Cloud Status page. You can retrieve the status of every service available in that respective location, such as Compute Engine or S3 Object Storage. The website also includes information on scheduled maintenance and current incidents, including an expected resolution time frame. We recommend that you subscribe to this page to receive any updates.

Conclusion

By following these security best practices for IONOS S3, you can enhance the security of your data and protect against unauthorized access, data breaches, and accidental deletions. Secure access controls, secure data transfers, object versioning, monitoring and auditing, data resilience, and careful management of bucket policies and ACLs are essential aspects of maintaining a secure S3 environment. By incorporating these practices into your S3 implementation, you can ensure the confidentiality, integrity, and availability of your data stored in S3.

Error code handling in public cloud services

Effective error code handling is a crucial aspect of building robust and reliable applications in the public cloud. Error codes, whether generated by public cloud services or HTTP protocols, provide valuable feedback on the status and outcome of operations. This chapter explores best practices for handling error codes when using public cloud services, focusing on both cloud service-specific errors and HTTP protocol responses.

IONOS Cloud provides multiple interfaces to its products - Data Center Designer, APIs, SDKs as well as Configuration Management tools.

The Data Center Designer includes functions that validate the configuration of virtual data centers and returns errors or warnings before provisioning.

Errors are marked red and must be resolved, as they block the provisioning of a resource. For example, when a new block storage with a public image is created, you must either define a root password for the image or apply an SSH key. If neither has been configured, the Data Center Designer returns an error and asks the user to resolve it.

Warnings are marked yellow and indicate useful improvements, but do not block provisioning. These warnings may indicate that the configuration is incomplete and you have to complete it after the current provisioning run. For example, you could create an instance without a network interface, which means it could not communicate with other instances or the public internet. As this is an uncommon use case, the DCD returns a notification that this should be improved. The validation dialog is displayed before provisioning.

Apart from client-side validations in the DCD, there are other types of errors that require respective handling.

HTTP error handling

When dealing with HTTP-based cloud services, adhere to standard HTTP status codes for processing the result of a request. IONOS APIs use well-known status codes, such as 200 for successful responses and various 4xx and 5xx codes for different error scenarios. A few examples follow; given the length of the list of HTTP status codes, an exhaustive list of the use cases that can result in each code cannot be provided.

  • HTTP Status 200 is returned when the respective API call is accepted. It may still result in an error from the backend application (see next chapter), but the call construct itself was valid and consistent.

  • HTTP Status 401 is returned when the credentials used for an API call are incorrect. This happens when there is a typo in the username or password, or the user does not exist within the IONOS user management. IONOS will not return details on whether the username or the password is incorrect, so as not to reveal whether a user exists within its database.

  • HTTP Status 404 is returned when a resource that does not exist was supposed to be retrieved (resource not found). This could be the case when an incomplete resource Universally Unique Identifier (UUID) was used, or when the requested resource was deleted earlier and therefore no longer exists.

  • HTTP Status 500 is returned when the API backend has an unexpected error. This error cannot be resolved by the user. It can be useful to contact IONOS Cloud support if the issue persists after retries.

An official list can be retrieved from the Hypertext Transfer Protocol (HTTP) Status Code Registry, maintained by the Internet Assigned Numbers Authority (IANA).
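The handling strategies suggested by these examples can be condensed into a small dispatch helper. The category names below are made up for illustration; map them onto whatever actions your application's error handling needs:

```python
def classify_status(code):
    """Map an HTTP status code to a coarse handling strategy,
    following the categories described above."""
    if code == 200:
        return "accepted"            # request valid; still check the async request status
    if code == 401:
        return "fix-credentials"     # re-check username/password; do not retry blindly
    if code == 404:
        return "check-resource"      # verify the UUID, or the resource may be deleted
    if 500 <= code <= 599:
        return "retry-then-support"  # transient backend error; retry, then contact support
    return "inspect"                 # consult the IANA status code registry
```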

IONOS application errors

IONOS applications have two types of errors - fail-fast validation errors and provisioning errors.

Whenever you send a request to create, update, or delete a resource, the IONOS backend first applies a set of checks: Is the user authorized to execute this change? Does the contract contain enough resources to request the resource? Does the API call request a resource configuration outside the product range or not supported in the respective location? Such errors are returned synchronously in response to the API call. The response contains an explicit, readable description of the error so it can be correlated with the initial API call. You need to change your API call accordingly to meet the criteria of the application, for example, adjust the configuration to fit the product specifications, and retry. The IONOS API only returns the first fail-fast validation error it runs into. It does not validate the entire request and return all errors; it fails immediately once the first error is identified. Consequently, a retry may reveal the next fail-fast error that requires the user's attention.

Once all fail-fast validations have passed, the request is queued within the customer contract's API queue and processed. The API call returns HTTP Status 200 to signal its successful submission. In the next step, IONOS processes the API call and may detect other issues that prevent the order from being completed. As this happens asynchronously, the error is reported within the status of the request. The response header of the initial API call contains a "Location" header with the URI of the respective request status call, including its identifier. This resource must be consulted to retrieve information about the status of the order, and it needs to be polled frequently as the progress is reflected at the runtime of the request. If the request was not completed successfully, the request status will return an error, including an error code in the format "VDC-x-xxx" (where 'x' may represent a 1 to 4 digit numeric code). In some cases, the error reason is returned directly. In other cases, the error code asks you to contact the IONOS Support Team for further assistance. At this point, IONOS does not publish a list of potential error codes. This is planned for a later date, and once a list of error codes is available, including measures to handle these errors, it will be linked here.
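The asynchronous flow described above (submit, read the "Location" header, then poll the request status) can be sketched as follows. The status values and the injected `fetch_status` function are simplifying assumptions; consult the API reference for the exact response shape:

```python
import time

def wait_for_request(fetch_status, poll_seconds=2, max_polls=30):
    """Poll a request-status URI (taken from the 'Location' response
    header) until the request reaches a terminal state.

    `fetch_status` is a caller-supplied function that performs the HTTP
    GET and returns a dict such as {"status": "QUEUED" | "RUNNING" |
    "DONE" | "FAILED", "message": ...} (status names assumed here).
    """
    for _ in range(max_polls):
        state = fetch_status()
        if state["status"] in ("DONE", "FAILED"):
            return state
        time.sleep(poll_seconds)
    raise TimeoutError("request did not finish within the polling budget")
```

On "FAILED", the returned state would carry the "VDC-x-xxx" error code mentioned above for logging or escalation.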

The routine described before applies to all IONOS Infrastructure services (server, block storages, network). S3 Object Storage is described below.

IONOS S3 object storage error handling

IONOS S3 Object Storage is available via a web interface as well as an API. The web interface is based on the same API, so this documentation covers the API only. First of all, IONOS S3 Object Storage APIs are based on HTTP and follow the same standards mentioned above. Please consult the previous section for further details.

S3 itself has its own application error descriptions that contain explicit action recommendations for users, for example, when the format of the bucket name is wrong or the bucket name already exists. IONOS operates its own installation of S3 Object Storage from a third party, which follows the standard S3 Object Storage REST error responses. Usually, the error response contains the content type as well as an HTTP status code in the 3xx, 4xx, or 5xx range. Furthermore, the error message contains information about the error in a message tag, which helps to identify and resolve the issue. Depending on the error message, you should retry or change the S3 Object Storage request to resolve the issue returned in the error message.
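A standard S3 REST error body is XML with Code and Message tags, so extracting the actionable parts is straightforward. A minimal sketch (the sample body is illustrative):

```python
import xml.etree.ElementTree as ET

def parse_s3_error(body):
    """Extract the Code and Message tags from a standard S3 REST
    error response body."""
    root = ET.fromstring(body)
    return root.findtext("Code"), root.findtext("Message")

# Illustrative error body in the standard S3 REST error shape.
sample = """<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>BucketAlreadyExists</Code>
  <Message>The requested bucket name is not available.</Message>
</Error>"""
```

The extracted code can then drive the retry-or-change decision described above.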

At this point, IONOS does not publish a list of potential error codes. This is planned for a later date, and once a list of error codes is available, including measures to handle these errors, it will be linked here.

Implementation of best practice advisor

When implementing automatic routines via API, SDKs, or Configuration Management tools, it is best to consider a few elements that help deal with potential errors returned by the cloud service provider's application.

  • Be compliant with the HTTP status code standard: Following the standard ensures that your implementation and the interfaces it consumes interact predictably and deliver the most value.

  • Implement retry mechanisms: Consider implementing retry mechanisms for transient errors. Public cloud services occasionally experience temporary issues, and retrying the operation after a short delay can often lead to a successful outcome. Implementing exponential backoff algorithms helps prevent overwhelming cloud services with retries during high error periods.

  • Implement circuit breakers: While some errors can safely be retried directly or after a short wait, you may also need circuit breaker patterns to manage service failures gracefully. Circuit breakers act as safety mechanisms that detect recurring errors and temporarily halt requests to the failing service. This prevents cascading failures and helps the system recover from transient errors more effectively.

  • Log Errors and Exceptions: Log errors and exceptions at appropriate levels in the logging system of your implementation. Detailed error logs aid in post-mortem analysis, troubleshooting, and monitoring the health of your implementation. They can also be a helpful source for analysis and investigation on the cloud service provider's side, since they provide details of your use case and the data needed to reproduce the error. Based on that, either a proper solution or a workaround can be provided. It may also turn out that an unintended error has been discovered and requires a fix by the cloud service provider.

  • Plan for unknown errors: Prepare your application to handle unknown or unexpected errors gracefully. Implement fallback mechanisms and default behavior for scenarios where error codes are undocumented or not recognized.

  • Integrate Error Handling with the Cloud Provider's Service Level Agreements (SLAs) and Service Catalog: Understand the SLAs as well as the service catalog specifications of the cloud services you are using. Integrate error-handling practices with the defined SLAs to manage response times and escalations during prolonged service disruptions. The service catalog provides details about the service range boundaries, which helps you understand the limits of the product.

  • Continuously Review and Improve Error Handling: Regularly review your error handling mechanisms and identify areas for improvement. Embrace a culture of continuous improvement to ensure your application evolves along with changing cloud service conditions and requirements.
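The retry and circuit-breaker recommendations above can be sketched together in a few dozen lines. This is a minimal illustration with arbitrary thresholds and delays, not a production implementation:

```python
import random
import time

def retry_with_backoff(call, retries=5, base_delay=0.5):
    """Retry a transient-failure-prone call with exponential backoff
    plus a little jitter."""
    for attempt in range(retries):
        try:
            return call()
        except Exception:
            if attempt == retries - 1:
                raise  # budget exhausted; surface the last error
            # 0.5s, 1s, 2s, ... plus up to 100ms of jitter
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))

class CircuitBreaker:
    """Minimal circuit breaker: open after `threshold` consecutive
    failures, refuse calls while open, and allow a probe ("half-open")
    after `reset_seconds`."""

    def __init__(self, threshold=3, reset_seconds=30):
        self.threshold = threshold
        self.reset_seconds = reset_seconds
        self.failures = 0
        self.opened_at = None

    def call(self, fn):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_seconds:
                raise RuntimeError("circuit open - skipping call")
            self.opened_at = None  # half-open: allow one probe call
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success resets the failure counter
        return result
```

Wrapping the retried call inside the breaker (`breaker.call(lambda: retry_with_backoff(api_call))`) combines both patterns: transient errors are retried, while persistent failures trip the breaker and stop hammering the service.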

IONOS Responsibility

IONOS logs errors and uses them to analyze failures. As outlined earlier, errors are categorized into two segments: errors caused by invalid use of the interfaces, which might result from misleading documentation, product descriptions, or customer expectations; and errors caused by the IONOS application itself. In both cases, IONOS analyzes and reviews the error to apply improvements that eliminate it. The data collected for this purpose is handled in alignment with data privacy regulations and is only used to reduce errors and improve the product and its associated services, such as documentation. As mentioned earlier, it will not be used for other purposes, especially not commercial purposes.

Last updated