
DCD How-Tos

Prerequisites: Prior to setting up a virtual machine, please make sure you have the appropriate privileges. Only contract owners, administrators, or users with the Create Data Center privilege can set up a VDC. Other user types have read-only access and can't provision changes.

Learn how to create and configure a Server inside of the DCD.

Learn how to create and configure a Cloud Cube inside of the DCD.

Use the Remote Console to connect to Server instances without SSH.

Use Putty or OpenSSH to connect to Server instances.

Automate the creation of virtual instances with the cloud-init package.

Enable IPv6 support for Virtual Servers and Cloud Cubes.

Data Center Designer

The Data Center Designer (DCD) is a unique tool for creating and managing your Virtual Data Centers (VDCs). DCD's graphical user interface makes data center configuration intuitive and straightforward. You can drag and drop virtual elements to set up and configure data center infrastructure components.

As is the case with a physical data center, you can use the DCD to connect various virtual elements to create a complete hosting infrastructure. For more information on DCD features, see the corresponding documentation page.

The same visual design approach is used to make any adjustments at a later time. You can log in to the DCD and scale your infrastructure capacity on the go. Alternatively, you can set defaults and create new resources when needed.

Core Functionality

The DCD allows the customer to both control and manage the following services provided by IONOS Cloud:

  • Data Centers: Create, configure, and delete entire data centers. Cross-connect between VDCs and tailor user access across your organization.

  • Servers: Set up, pause, and restart virtual instances with customizable storage, CPU, and RAM capacity. Instances can be scaled based on usage.

  • Images: Upload, edit, and delete your own private images or use images provided by IONOS Cloud. Create or save snapshots for use with future instances.

  • Networking: Reserve and manage static public IP addresses. Create and manage private and public LANs, including firewall setups.

  • Basic Features: Save and manage SSH keys; connect via the Remote Console; launch instances via cloud-init; record networking via flow logs; and monitor your instance use with monitoring software.

Compatibility

As a web application, the DCD is supported by the following browsers:

  • Google Chrome™: Version 30+

  • Mozilla® Firefox®: Version 28+

  • Apple® Safari®: Version 5+

  • Opera™: Version 12+

  • Microsoft® Internet Explorer®: Version 11 & Edge

We recommend using Google Chrome™ and Mozilla® Firefox®.

Next Steps

If you are ready to get started with the Data Center Designer, consult our beginner Tutorials. These step-by-step instruction sets will teach you how to create a basic Virtual Data Center and configure initial user settings.

Setup a Virtual Server
Setup a Cloud Cube
Connect via Remote Console
Connect via SSH
Boot with cloud-init
Enable IPv6

Connect via Remote Console

The Remote Console is used to connect to a server when, for example, no SSH connection is available. You must have the root or administrator password to log in to the server this way.

Prerequisites: Make sure you have the appropriate permissions. Only contract owners, administrators, or users with access rights to the data center can connect to a server. Other user types have read-only access and can't provision changes.

Procedure

Option 1: Start the Remote Console from the server.

  1. Open the data center containing the required server.

  2. In the Workspace, select the server.

  3. In the Inspector, choose Remote Console, or select Remote Console from the context menu of the server.

Option 2: Start the Remote Console from the Start Center (contract owners and administrators only).

  1. Open the Start Center: Menu Bar > Data Center Designer > Open Start Center.

  2. Open the Details of the required data center. A list of servers in this data center is displayed.

  3. Select the server and click Open Remote Console.

A Remote Console version matching your browser opens; you can now log on to the server with the root or administrator password.

Use the Send Key Combo button on the top right of the Remote Console window to send shortcut key combinations (such as CTRL+ALT+DEL).

Launch this Remote Console window again with one click by bookmarking its URL address in your browser.

For security reasons, once your session is over, always close the browser window used to connect to the VM via this bookmark.


Virtual Servers

Virtual servers that you create in the DCD are provisioned and hosted in one of IONOS' physical data centers. Virtual servers behave exactly like physical servers, and they can be configured and managed with your choice of operating system. Information on creating a server can be found here.

Boot options: For each server, you can select to boot from a virtual CD-ROM/DVD drive or from a storage device (HDD/SSD). Any operating system can be used on the platform. The only requirement is the use of KVM VirtIO drivers. IONOS provides a number of ready-to-boot images with current versions of Linux operating systems.

We do not offer Windows images during the PoC. However, you can upload your own images with our FTP access, including your own licensed Windows images.

Availability Zones

Secure your data, enhance reliability, and set up high-availability scenarios by deploying your virtual servers and storage devices across multiple Availability Zones.

Assigning different Availability Zones ensures that servers or storage devices reside on separate physical resources at IONOS.

For example, a server or a storage device assigned to Availability Zone 1 resides on a different resource than a server or storage device assigned to Availability Zone 2.

You have the following Availability Zone options:

  • Zone 1

  • Zone 2

  • A - Auto (default; our system automatically assigns an Availability Zone upon provisioning)

Live Vertical Scaling (LVS)

If the capacity of your Virtual Data Center no longer matches your requirements, you can still increase or decrease your resources after provisioning. With Live Vertical Scaling, you can change the resources of a virtual server during operation without restarting it. This means you can add RAM or CPU cores while the server is running ("hot plug"). This allows you to react to peak loads quickly without compromising performance.

After uploading, you can define the properties for your own images before applying them to new storage volumes. The settings must be supported by the image, otherwise, they will not work as expected. After provisioning, you can change the settings directly on the storage device, which will require a restart of the server.

The types of resources that you can scale without rebooting will depend on the operating system of your VMs. Since kernel 2.6.25, Linux has had the required VirtIO modules installed by default, but they may have to be activated manually depending on the derivative. For more information, see the Linux VirtIO page.

For IONOS images, the supported properties are already preset. Without restarting the server, its resources can be scaled as follows:

  • Upscaling: CPU, RAM, NICs, storage volumes

  • Downscaling: NICs, storage volumes

Scaling up means increasing the capacity or speed of a component to handle a larger load. The goal is to increase the number of resources that support an application in order to achieve or maintain adequate performance. Scaling down means reducing system resources, whether or not you have used the scaling-up approach. Without restarting the server, only upscaling can be done.

Limitations

CPU Types: Virtual server configurations are subject to the following limitations, by CPU type:

AMD CPU

Components          Minimum             Maximum
Cores               1 core              62 cores
RAM                 0.25 GB             230 GB
NICs and storage    0 PCI connectors    24 PCI connectors
CD-ROM              0 CD-ROMs           2 CD-ROMs

Intel® CPU

Components          Minimum             Maximum
Cores               1 core              51 cores
RAM                 0.25 GB             230 GB
NICs and storage    0 PCI connectors    24 PCI connectors
CD-ROM              0 CD-ROMs           2 CD-ROMs

A single Intel® physical core with Hyper-Threading Technology is exposed to the operating system of your virtual server as two distinct “logical cores”, which process separate threads.

RAM Sizes: Because the working memory (RAM) size cannot be processed during the initial configuration, newly provisioned servers with more than 8 GB of RAM may not start successfully when created from IONOS Windows images.

Live Vertical Scaling: Linux supports the entire scope of IONOS Live Vertical Scaling, whereas Windows is limited to CPU scaling. Furthermore, it is not possible to use LVS to reduce storage size after provisioning.

Basic Tutorials

Recommended for beginners:

Selected user guides:

This section lists the most commonly referenced topics in the User Guides.

Setup a Cloud Cube

Creating a Cube

1. Drag the Cube element from the Palette into the Workspace.

2. Click the Cube element to highlight it. The Inspector will appear on the right.

3. In the Inspector, configure your Cube from the Settings tab.

  • Name: Your choice is recommended to be unique to this Virtual Data Center (VDC).

  • Template: choose the appropriate configuration template.

  • vCPUs: set automatically when a Template is chosen.

  • RAM in GB: set automatically when a Template is chosen.

  • Storage in GB: set automatically when a Template is chosen.

4. You will also notice that the Cube comes with an Unnamed Direct Attached Storage. Click on the device and rename it in the Inspector.

  • Name: Your choice is recommended to be unique to this Virtual Data Center (VDC).

  • Size in GB: Specify the required storage capacity.

  • Image: You can select one of IONOS' images or use your own.

  • Password: The password must be between 8 and 50 characters in length, using only Latin letters and numbers.

  • Backup Unit: Backs up all data with version history to local storage or your private cloud storage.
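The image-password rule above (8 to 50 characters, Latin letters and numbers only) can be checked before provisioning. The following is a minimal sketch, not an official IONOS validator; the function name is illustrative:

```python
import re

# Documented rule: 8-50 characters, Latin letters and digits only.
PASSWORD_RE = re.compile(r"[A-Za-z0-9]{8,50}")

def is_valid_image_password(pw: str) -> bool:
    # fullmatch ensures the whole string obeys the rule, not just a prefix.
    return PASSWORD_RE.fullmatch(pw) is not None

print(is_valid_image_password("abc12345"))    # True: 8 alphanumeric characters
print(is_valid_image_password("short1"))      # False: fewer than 8 characters
print(is_valid_image_password("pass word9"))  # False: contains a space
```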

Adding more Storage

1. Drop a Storage element from the Palette onto a Cube in the Workspace to connect both.

2. In the Inspector, configure your Storage device in the Settings tab.

  • Name: Your choice is recommended to be unique to this Virtual Data Center (VDC).

  • Availability Zone: Choose the Zone where you wish to host the Storage device.

  • Size in GB: Specify the required storage capacity for the SSD.

  • Performance: Depends on the size of the SSD.

  • Image: You can select one of IONOS' images or use your own.

  • Password: The password must be between 8 and 50 characters in length, using only Latin letters and numbers.

  • Backup Unit: Backs up all data with version history to local storage or your private cloud storage.

Connecting to the internet

1. Each virtual machine has a NIC, which is activated via the Autoport symbol. Connect the Cube to the Internet by dragging a line from the Cube's Autoport to the Internet's NIC.

2. In the Inspector, configure your LAN device in the Network tab.

  • Name: Your choice is recommended to be unique to this Virtual Data Center (VDC).

  • MAC: The MAC address will be assigned automatically upon provisioning.

  • Primary IP: The primary IP address is automatically assigned by the IONOS DHCP server. You can, however, enter an IP address for manual assignment by selecting one of the reserved IPs from the drop-down menu. Private IP addresses should be entered manually. The NIC has to be connected to the Internet.

  • Failover: If you have an HA setup including a failover configuration on your VMs, you can create and manage IP failover groups that support your HA setup.

  • Firewall: Configure a firewall.

  • DHCP: It is often necessary to run a DHCP server in your virtual data center (e.g. PXE boot for fast rollout of VMs). If you use your own DHCP server, clear this checkbox so that your IPs are not reassigned by the IONOS DHCP server.

  • Additional IPs: In order to use "floating" or virtual IPs, you can assign additional IPs to a NIC by selecting them from the drop-down menu.

Provisioning Changes

1. Start the provisioning process by clicking PROVISION CHANGES in the Inspector.

2. The Provision Data Center dialog opens. Review your changes in the Validation tab.

3. Confirm changes with your password. Resolve outstanding errors without a password.

4. Once ready, click Provision Now to start provisioning resources.

The data center will now be provisioned with the new Cube. DCD will display a Provisioning Complete notification once your cloud infrastructure is ready.

Log in to the Data Center Designer

  1. Open the DCD in your web browser by going to https://dcd.ionos.com.

  2. Select your preferred language (DE | EN) in the top right corner of the Log in window.

  3. Enter the Email and Password created during registration.

  4. Click the Log in button.

Verification code: By default, no code is required. You may activate this option at a later time. You will need the Google Authenticator app to generate the code.

Dashboard

Once you have logged in, you will see the Dashboard. The Dashboard shows a concise overview of your data centers, resource allocation, and options for help and support. You may click on the IONOS logo, in the Menu bar, at any time, to return to the Dashboard.

Inside the Dashboard, you can see the My Data Centers list and the Resource Allocation view. The Resource Allocation view shows the current usage of resources across your infrastructure.

Usually, clicking on a data center in the My Data Centers list opens the data center. However, if this is your first time using DCD, you will need to create your first Virtual Data Center (VDC). Learn how to set VDC defaults in the Setup a Data Center guide.

Menu bar

The Menu bar, at the top of every DCD screen, has buttons that allow you to access the DCD features, notifications, and help. These buttons also allow you to manage your account.

Tutorial — Targeted Use

  • Log in to the Data Center Designer: Log in to the Data Center Designer (DCD) and explore the dashboard and menu options.

  • Data Center Basics: Create a data center and learn about individual user interface (UI) elements.

  • Configure a Data Center: Create a Server, add storage and a network. Provision changes.

  • Manage User Access: Set user privileges; limit or extend access to chosen roles.

  • Account Settings: Manage general settings, payment and contract details.

Set Up a Virtual Server

Set Up a Network Load Balancer

Set Up a Database Cluster

Set Up Storage

Set Up a NAT Gateway

Set Up a Kubernetes Cluster

Configure a Network

Set Up S3 Object Storage Access

Create Alarms and Actions

Connect via SSH

Set Up Backup Units

Configure Flow Logs


Menu option — Description

  1. IONOS logo: Return link to the Dashboard.

  2. Data Center Designer: List existing VDCs and/or create new ones.

  3. Storage: List storage buckets and/or create new ones.

  4. Containers: Manage Kubernetes and Container Registries.

  5. Databases: Manage Databases.

  6. Management: User, Group, and Resource settings and management.

  7. Notification icon: Shows active notifications.

  8. Help icon: Customer Support and FTP Image Upload info.

  9. Account Management: Account settings, resource usage, and billing methods.


Virtual Data Center LANs

A Virtual Data Center (VDC) is a collection of cloud resources for creating an enterprise-grade IT infrastructure. A Local Area Network (LAN) in a VDC refers to the interconnected network of Virtual Machines (VMs) within a single physical server or cluster of servers. The LAN in a VDC is a critical component of cloud computing infrastructure that enables efficient and secure communication between VMs and other resources within the data center.

A VDC operates in dual-stack mode; that is, the Network Interface Cards (NICs) can communicate over IPv4, IPv6, or both. In the Data Center Designer (DCD), IPv6 can be enabled for both Private and Public LANs, but on provisioning, only Public IPv6 addresses are allocated to all LANs.

Support

IONOS IPv6 is operated by a team of experts and qualified network administrators. Contact the IONOS support team for all your queries.

Add Custom CoreDNS Configuration

It may be desirable to enhance the configuration of CoreDNS by incorporating additional settings. To ensure the persistence of these changes during control plane maintenance, it is necessary to create a ConfigMap within the kube-system namespace. The ConfigMap should be named coredns-additional-conf and should include a data entry with the key extra.conf. The value associated with this entry must be a string that encompasses the supplementary configuration.

Below is an illustrative example that demonstrates the process of adding a custom DNS entry for example.abc:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-additional-conf
  namespace: kube-system
data:
  extra.conf: |
    example.abc:53 {
      hosts {
        1.2.3.4 example.abc
        2.3.4.5 server.example.abc
        fallthrough
      }
    }
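The same ConfigMap can also be generated programmatically. The sketch below emits it as JSON, which kubectl accepts as a manifest format alongside YAML; the example.abc host entries are illustrative, matching the example above:

```python
import json

# Illustrative CoreDNS server block, as in the YAML example above.
extra_conf = (
    "example.abc:53 {\n"
    "  hosts {\n"
    "    1.2.3.4 example.abc\n"
    "    2.3.4.5 server.example.abc\n"
    "    fallthrough\n"
    "  }\n"
    "}\n"
)

# The required shape: name coredns-additional-conf, namespace kube-system,
# and a data entry under the key extra.conf.
configmap = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "coredns-additional-conf", "namespace": "kube-system"},
    "data": {"extra.conf": extra_conf},
}

print(json.dumps(configmap, indent=2))
```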

IONOS Cloud Documentation

Explore our guides and reference documents to integrate IONOS Cloud products and services.

Latest Release: DBaaS MongoDB User Guide

Getting Started

With Data Center Designer (DCD), IONOS Cloud's visual user interface, you can create a fully functioning Virtual Data Center (VDC). Learn more about DCD with our code-free guide:

APIs, SDKs & Tools

Product User Guides

Set up and manage your products and services via examples and troubleshooting cases below:

Compute Engine

  • Scalable instances with a dedicated resource functionality.

  • Add more SSD or HDD storage to your existing instances.

  • Internal, external and core network configurations.

Managed Services

  • Facilitate a fully automated setup of Kubernetes clusters.

  • Configure and connect private VMs to public repositories.

  • Automatically distribute your workloads over several servers.

  • Improve application responsiveness and availability.

  • Manage PostgreSQL cluster operations, database scaling, patching, backup creation, and security.

  • Manage MongoDB clusters, along with scaling, security, and creating snapshots for backups.

  • Gather metrics on Virtual Server and Cube resource utilization.

Storage & Backup

  • Create buckets and store objects with this S3 Object Storage compliant service.

  • Secure your data with high-performance cloud backups.

Virtual Machines

With IONOS Cloud, you can quickly provision and manage Virtual Machines. Leverage our user guides, reference documentation, and FAQs to support your hosting needs.

Product Overview

Developer Tools

Recommended How-Tos

Learn how to create and configure a Server inside of the DCD.

Learn how to create and configure a Cloud Cube inside of the DCD.

Use the Remote Console to connect to Server instances without SSH.

Use Putty or OpenSSH to connect to Server instances.

Automate the creation of virtual instances with the cloud-init package.

Frequently Asked Questions

Change Log

10 October 2021 | Live Vertical Scaling Overview section updated

Manage User Access

Access inside of the Virtual Data Center can be controlled for Users, user Groups, and Resources. Please consult the following instructions for all three.

User Access Control

When you create a new Virtual Data Center (VDC), only you, as the Contract Owner, have access to the space and the infrastructure contained within. However, you might want to assign maintenance duties to another member of your organization. This tutorial teaches you how to create new users and how to assign privileges to them. This will allow them to work with the resources that you created inside of the VDC.

Prerequisites: Make sure you have the appropriate privileges. Only Contract Managers or Administrators can manage users within a VDC.

Creating a new User

The User Manager lets you create new users, add them to user groups, and assign privileges to each Group. Privileges either limit or expand access, based on the user role in your organization. This panel lets you control user access to specific areas of your VDC.

To access the User Manager, go to Menu > Management > Users & Groups.


1. In the User Manager, click + Create in the Users tab.

2. Enter the user's First Name, Last Name, Email, and Password. Click Create.

Creating a new user is done in the User Manager panel within the DCD

Creating a Group

The creation of Groups is useful when you need to assign specific duties to a specific set of team members. For example, you may want to assign certain duties to a team of Quality Assurance experts. To do so, you create a Group called "Quality Assurance" in the User Manager. Later, you can add members to this group and then assign privileges to the entire Group.

To create a group, use the following procedure:

1. In the Groups tab, click + Create and enter a Group Name in the drop-down.

2. Click Create to confirm. The group is now created and visible in the list.

You can now assign permissions, users, and resources to your Quality Assurance department.

User groups can be created directly from the Groups tab in the User Manager

Adding Users to a Group

Users are added to your Quality Assurance group on an individual basis. Once you have created a new member, you must assign them to the Group.

1. In the Groups tab, select the required group.

2. In the Members tab, add Users from the + Add User drop-down.

3. To remove users, select the required user and click Remove User.

Choose a User from the dropdown and add them to the highlighted Group as Member

Users assigned to the group now have privileges and access rights to resources corresponding to their group membership.

By assigning a user to a group, a contract owner or administrator not only defines which actions a user is authorized to perform in the DCD but also which resources (VDCs, images, snapshots, IP blocks) members of this group can access.

Administrators do not need to be managed in groups as they automatically have access to all resources associated with the contract.

Assigning privileges to Groups

1. In the Groups tab, select the required group.

2. In the Privileges tab, check or uncheck boxes, as appropriate, next to the privilege name.

You do not need to save your selections. This action automatically grants/removes privileges.

If you want the members of a Group to have certain privileges, please check them off in the appropriate boxes

Adding Groups to a Resource

1. Select a resource from the Resources tab in the User Manager panel.

2. Under the Visible to Groups tab, click + Add Group.

3. Click on a Group which was previously created.

4. The selected group is now able to see the resource.

Select the resource you wish to make available to a user Group. The Group's members can now exercise chosen privileges

Assigning Resources from the Groups tab

1. In the Groups tab, select the required group.

2. Open the Resources of Group tab.

3. Click on + Grant Access and select the resource from the dropdown.

Select a resource from the Resources of a Group tab to assign a particular set of rules to a group of users

This enables read access to the selected resource.

Check each box to enable editing or sharing of a resource. To disable access, select the required resource and uncheck either the permission box or click Revoke Access.

Enable IPv6

You can enable IPv6 on Virtual Servers and Cloud Cubes when you create them or after you create them.

Set up IPv6

You can set up IPv6 to improve the network connectivity for your virtualized environment. By setting up IPv6 for your virtual servers and cubes, you can ensure that they are accessible to IPv6-enabled networks and clients.

Prerequisites: Prior to enabling IPv6, make sure you have the appropriate privileges. A new VDC can be created by contract owners, administrators, or users with the Create VDC privilege. The number of bits in the fixed address is the prefix length. For the Data Center IPv6 CIDR, the prefix length is /56.

To enable IPv6 for Virtual Servers, connect the server to an IPv6-enabled Local Area Network (LAN). Select the Network option on the right pane and fill in the following fields:

  • Name: It is recommended to enter a unique name for this Network Interface Controller (NIC).

  • MAC: This field is automatically populated.

  • LAN: Select an IPv6 enabled Local Area Network (LAN).

  • Firewall: Specify whether you want to enable or disable the firewall. For enabling the firewall, choose Ingress to create flow logs for incoming traffic, Egress for outgoing traffic, or Bidirectional to create flow logs for all traffic.

  • Flow Log: Select + to add a new flow log. Enter name, direction, action, target S3 bucket, and select + Flow Log to complete the configuration of the flow log. It becomes applied once you provision your changes.

  • IPv4 Configuration: This field is automatically populated. If Dynamic Host Configuration Protocol (DHCP) is enabled, the Internet Protocol version 4 (IPv4) address is dynamic, meaning it can change while the virtual server is operational or in the case of a restart. Add additional public or private IP addresses in Add IP. It is an optional field.

  • IPv6 Configuration: You can populate a NIC IPv6 CIDR block with prefix length /80 or allow it to be automatically assigned from the VDC's allocated range, as seen in the screenshot below. Add additional public or private IP addresses in Add IP. It is an optional field.

Setting up IPv6 for Virtual Servers

To enable IPv6 for Cloud Cubes, connect the Cube to an IPv6-enabled LAN. Select the Network option on the right pane and fill in the following fields:

  • Name: It is recommended to enter a unique name for this Network Interface Controller (NIC).

  • MAC: This field is automatically populated.

  • LAN: Select an IPv6 enabled Local Area Network (LAN).

  • Firewall: Specify whether you want to enable or disable the firewall. For enabling the firewall, choose Ingress to create flow logs for incoming traffic, Egress for outgoing traffic, or Bidirectional to create flow logs for all traffic.

  • Flow Log: Select + to add a new flow log. Enter name, direction, action, target S3 bucket, and select + Flow Log to complete the configuration of the flow log. It becomes applied once you provision your changes.

  • IPv4 Configuration: This field is automatically populated. If Dynamic Host Configuration Protocol (DHCP) is enabled, the Internet Protocol version 4 (IPv4) address is dynamic, meaning it can change while the virtual server is operational or in the case of a restart. Add additional public or private IP addresses in Add IP. It is an optional field.

  • IPv6 Configuration: You can populate a NIC IPv6 CIDR block with prefix length /80 or allow it to be automatically assigned from the VDC's allocated range, as seen in the screenshot below. In order to use "floating" or virtual IPs, you can assign additional IPs to a NIC by selecting them from the drop-down menu in Add IP.

Setting up IPv6 for Cloud Cubes

Note:

  • IPv6 CIDRs assigned to LANs (/64) and NICs (/80 and /128) must be unique.

  • You can create a maximum of 256 IPv6-enabled LANs per VDC.
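The prefix lengths above fit together arithmetically: a /56 VDC allocation subdivides into exactly 256 possible /64 LAN prefixes, matching the per-VDC LAN limit, and each /64 LAN holds 2^16 possible /80 NIC blocks. A short sketch with Python's standard ipaddress module, using an illustrative documentation prefix (2001:db8::/32 range), not a real IONOS assignment:

```python
import ipaddress

# Hypothetical Data Center IPv6 CIDR with the documented /56 prefix length.
vdc = ipaddress.IPv6Network("2001:db8:aaaa:aa00::/56")

# A /56 contains 2**(64 - 56) = 256 possible /64 LAN prefixes,
# which matches the limit of 256 IPv6-enabled LANs per VDC.
lan_prefixes = list(vdc.subnets(new_prefix=64))
print(len(lan_prefixes))  # 256

# Each /64 LAN in turn contains 2**(80 - 64) = 65536 possible /80 NIC blocks.
nic_blocks = 2 ** (80 - 64)
print(nic_blocks)  # 65536
```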

Configure a Data Center

This tutorial contains a detailed description of how to manually configure your IONOS Cloud infrastructure for each server via the Virtual Data Center (VDC). It comprises all the building blocks and the necessary resources required to configure, operate, and manage your products and services. You can configure and manage multiple VDCs.

Prerequisites: You will need appropriate permissions to create a VDC.

It is also possible to configure settings for each server automatically.

Create a Server

1. Drag the Server element from the Palette into the Workspace.

2. To configure your Server, enter the following details in the Settings tab of the Inspector pane:

Drag a Server element from the Palette onto the Workspace and configure its properties on the right
  • Name: Enter a server name unique to the VDC.

  • Availability Zone: Select a zone from the drop-down list to host the server on the chosen zone.

  • CPU Architecture: Choose between AMD or Intel cores.

  • Cores: Choose the number of CPU cores.

  • RAM: Choose any size starting from 0.25 GB to the maximum limit allotted to you. The size can be increased or reduced in steps of 0.25 GB. The maximum limit varies based on your contract resource limits and the chosen data center. For more information about creating a full-fledged server, see Configure a Virtual Server.
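The RAM sizing rule above (start at 0.25 GB, change in 0.25 GB steps, contract-dependent maximum) can be sketched as a small check. This is illustrative only; the function name and the 230 GB default ceiling (the documented limit for both CPU types) are assumptions, since your actual maximum depends on your contract resource limits:

```python
def is_valid_ram_gb(size_gb: float, max_gb: float = 230.0) -> bool:
    """Check a RAM size against the documented 0.25 GB step rule."""
    if not (0.25 <= size_gb <= max_gb):
        return False
    # Valid sizes are exact multiples of 0.25 GB.
    steps = round(size_gb / 0.25)
    return abs(steps * 0.25 - size_gb) < 1e-9

print(is_valid_ram_gb(0.25))  # True: the minimum size
print(is_valid_ram_gb(8.0))   # True: a whole multiple of 0.25 GB
print(is_valid_ram_gb(0.3))   # False: not a 0.25 GB step
```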

Add storage to the server

1. Drag a Storage element from the Palette onto a Server in the Workspace.

2. To configure your Storage element, enter the following details in the Inspector pane:

Combining a Storage element with the Server in the Workspace joins two elements
  • Name: Enter a storage name unique to the VDC.

  • Availability Zone: Select a zone from the drop-down list to host the storage element associated with the server.

  • Size in GB: Choose the required storage capacity.

  • Performance: Select a value from the drop-down list based on the requirement. You can either select Premium or Standard, and the performance of your storage element varies accordingly.

  • Image: Select an image from the drop-down list. You can select one of IONOS images or choose your own.

  • Password: Enter a password for the chosen image on the server—a root or an administrator password.

  • Backup Unit: Select a backup unit from the drop-down list. Click Create Backup Unit to instantly create a new backup unit if unavailable. For more information about adding storage to the server, see Configure a Virtual Server.

Connect to the internet

1. Drag a Network Interface Card (NIC) element from the Palette into the Workspace to connect the elements.

2. To configure your NIC element, enter the following details in the Network tab of the Inspector pane:

  • Name: Enter a NIC name unique to this VDC.

  • Media Access Control Address (MAC) and Primary IPv4 addresses are added automatically.

  • LAN: The name of the configured LAN is displayed. To select another network, select a value from the drop-down list.

  • Firewall: It is Disabled by default. Select a value from the drop-down list to configure your firewall settings. For more information, see Configure a Firewall.

For more information about network configuration, see Configure a Network.

Connect your server to the internet by configuring the Network settings in the Inspector pane

Provision Changes

1. Start the provisioning process by clicking PROVISION CHANGES in the Inspector pane.

2. Review your changes in the Validation tab of the Provision Data Center dialog.

3. Confirm your changes by entering your password. Resolving conflicts does not require a password.

4. When you are ready, click Provision Now to start provisioning resources.

Once ready, finalize the setup by requesting that the resources are provisioned
Confirm by clicking Provision Now

The data center will now be provisioned. DCD will display a Provisioning Complete notification when your cloud infrastructure is ready.

You may configure the MAC and IP addresses once the resource is provisioned.

Next steps

After configuring data centers, you can specify a preferred default data center location, IP settings, and resource capacity for future VDCs. For more information about configuring VDC defaults, see My Settings.

Data Center Basics

Your IONOS Cloud infrastructure is set up in Virtual Data Centers (VDCs). Here you will find all the building blocks and the resources required to configure and manage your products and services.

Prerequisites: Make sure you have the appropriate permissions. Only Administrators or Users with the Create Data Center permission can create a VDC.

Creating a new VDC

1. On the Menu bar, click Data Center Designer. A drop-down panel will open.

You can create a VDC from the menu.

You can create a VDC from the menu

Or alternatively,

You can create a VDC right on the Dashboard

2. In the Create Data Center dialog, enter the following details:
  • Name: Enter an appropriate name for your VDC.

  • Description: Describe your new VDC (optional).

  • Region: Choose the physical IONOS data center location that will host your infrastructure.

3. Confirm your actions by clicking Create Data Center.

4. The data center is created and opened in the Workspace. You will find the VDC has been added to the My Data Centers list in the Dashboard.

Inside the Data Center Designer

You can set up your data center infrastructure by using a drag-and-drop visual interface. The DCD User Interface (UI) contains the following elements:

  • Menu bar: Provides access to the DCD functions via drop-down menus.

  • Palette: Contains movable element icons to be combined in the Workspace.

  • Element: An icon that represents a component of the virtual data center.

  • Workspace: The space in which you arrange element icons via drag-and-drop.

  • Inspector pane: Lets you view and configure the properties of the selected element.

  • Context menu: Right-click an element to display additional options.

The VDC consists of several key sections: the Palette, the Workspace and the Inspector

Element icons

Square Element icons serve as building blocks for your VDC. Each Element represents an IONOS Cloud product or service. Keep in mind that some Elements are compatible, while others aren't. For example, a Server icon can be combined with the Storage (HDD or SSD) icon. In practice, this would represent the physical act of connecting a hard drive to a server machine. For more information, see Setup Storage.

Individual element icons
Once joined, the elements create a new combined icon in the Workspace

The Palette and the Workspace

The Palette is a dynamic sidebar that contains VDC Elements. You can click and drag each Element from the Palette into your Workspace and position it there as required.

Not all cloud products and services are compatible with each other. You may, for example, create a Server and add Storage to it. A LAN Network will interconnect your Servers.

Some Elements may connect automatically via drag-and-drop. The DCD will then join the two if able; otherwise, it will open a configuration dialog for approval.

Selecting an element and pressing Delete or Backspace removes it from the Workspace.

The Context Menu

Right-clicking an element inside of the Workspace reveals additional functions. For example, you may right-click a Cube or a Server to Power it up or down. You may also Pause it, or Delete it to remove it from your data center infrastructure.

The Context Menu always offers different options, depending on the Element.

Context menu

The Inspector pane

When an Element is selected, the Inspector pane will appear on the right. You can configure Element properties, such as Name and Availability Zone.

This pane allows you to finalize the creation of your data center. Once your VDC is set up, click PROVISION CHANGES. This makes your infrastructure available for use.

Inspector Pane

Using the Start Center

The Start Center is an alternative way to create and manage VDCs. You may access existing VDCs or create a new one from the Start Center view.

1. Inside the Dashboard Menu bar, select Data Center Designer > Open Start Center.

Using the Start Center in DCD

2. The left section of the Start Center lists all your data centers in alphabetical order. The Create Data Center section, on the right, can also be used to create new VDCs.

3. The region and version number are shown for each VDC. Version numbers begin at 0 and are incremented by 1 each time the data center is provisioned.

4. The Details button, to the right of each VDC, displays all associated VMs, resources, and status. The different status indications are on, off, or de-allocated.

5. You can click on each VDC name on the list to open it.

Connect via SSH

When creating virtual machines based on IONOS Linux images, you can inject SSH keys into your instances. This lets you access your VM safely and allows for secure communication. SSH keys that you intend to use more often can be saved in the SSH Key Manager.

Types of SSH keys

Default SSH keys: SSH keys that you intend to use often and have marked as such in the SSH Key Manager. Default SSH keys are preselected when you configure storage devices. Before provisioning, you can specify which SSH keys are actually to be used by deselecting the preselected default keys in favor of another SSH key.

Ad-hoc SSH keys: SSH keys that you only use once and don't intend to save in the SSH Key Manager for later re-use.

Generating an SSH key

SSH keys can be generated and used on macOS or Linux if OpenSSH is installed, along with its ssh-keygen command-line tool. OpenSSH is a collection of tools for establishing SSH connections to remote servers, while ssh-keygen is a utility for generating SSH keys.

Manually generate SSH keys when working with OpenSSH via the Terminal application by following the steps below.

1. Enter the following command into the Terminal window and press ENTER.

ssh-keygen

The key generation process is initiated by the command above. When you run this command, the ssh-keygen utility prompts you for a location to save the key.

2. Accept the default location by pressing the ENTER key, or enter a different path to the file where you want to save the key, for example /home/username/.ssh/id_rsa.

Enter file in which to save the key (/home/username/.ssh/id_rsa):

If you have previously generated a key pair, you may see the following prompt. If you choose to overwrite the key, you will no longer be able to authenticate with the previously generated key.

/home/username/.ssh/id_rsa already exists.
Overwrite (y/n)?

3. Enter the passphrase that will be used to encrypt the private key file on the disk. You can also press ENTER to accept the default (no passphrase). However, we recommend that you use a passphrase.

4. Enter your passphrase once more.

After you confirm the passphrase, the public and private keys are generated and saved in the specified location. The confirmation will look similar to this:

Your identification has been saved in /home/username/.ssh/id_rsa.
Your public key has been saved in /home/username/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:AcP/ieiAOoD7MjrKepXks/qHAhrRasGOysiaIR94Quk username@hostname
The key's randomart image is:
+---[RSA 3072]----+
|     .o          |
|      .o         |
|..     ..        |
|.oo .   ..       |
|+=.+ . .So .     |
|X+. * . . o      |
|&Eo. *           |
|&Oo.o o          |
|@O++..           |
+----[SHA256]-----+

The public key is saved to the file id_rsa.pub; this is the key you upload to your DCD account. Your private key is saved to the id_rsa file in the .ssh directory and is used during authentication to prove that you own the corresponding public key.
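For scripting, the whole flow can also run non-interactively. This is a minimal sketch that generates a throwaway key pair in /tmp and prints its fingerprint; the path, key size, and comment are illustrative, not values required by the DCD:

```shell
# Remove any leftover demo key, then generate a 3072-bit RSA key pair with no
# passphrase (-N '') at a fixed path (-f), quietly (-q).
rm -f /tmp/demo_id_rsa /tmp/demo_id_rsa.pub
ssh-keygen -t rsa -b 3072 -N '' -f /tmp/demo_id_rsa -C "demo@example" -q

# Print the public key's fingerprint, as shown in the interactive output above.
ssh-keygen -lf /tmp/demo_id_rsa.pub
```

The contents of /tmp/demo_id_rsa.pub are what you would paste into the SSH Key Manager.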

On macOS, you can copy the public key to your clipboard by running the following command:

 pbcopy < ~/.ssh/id_rsa.pub

Storing SSH keys

In the SSH Key Manager of the DCD, you can save and manage up to 100 public SSH keys for the setup of SSH accesses. This saves you from having to repeatedly copy and paste the public part of an SSH key from an external source.

1. To open the SSH Key Manager, go to Menu > MANAGER Resources > SSH Key Manager.

2. In the SSH Key Manager, select + Add Key.

3. Enter a Name and click Add.

4. Copy and paste the public key into the SSH key field. Alternatively, you may upload it via Select key file. Please ensure the SSH keys you enter are valid; the DCD does not validate syntax or format.

5. (Optional) Activate the Default checkbox to have the SSH key automatically pre-selected when SSH access is configured.

6. Click Save to store the key.

Either copy/paste or upload the key file via SSH Key Manager.

The SSH key is stored in the SSH Key Manager and can be used for the configuration of SSH accesses.

Deleting an SSH key in the SSH Key Manager

To delete an existing SSH key, select the SSH key from the list and click Delete Key.

The SSH key is removed from the SSH Key Manager.

Connecting via OpenSSH

You can connect to your virtual instance via OpenSSH. You will need a terminal application, which varies depending on your operating system. For:

  • Linux: Search Terminal or press CTRL+ALT+T

  • macOS: Search Terminal

  • Windows: Search Bash. If you don’t have Bash installed, use PuTTY instead.

The steps below will show you how to connect to your VM.

1. Open the Terminal application and enter the SSH connection command below. After the @, add the IP address of your VM instance. Then press ENTER.

ssh root@<your_vm_ip>

When you log in for the first time, the server isn't recognized on your local machine, so you'll be asked if you're sure you want to keep connecting. You can type yes and then press ENTER.

2. Authentication is the next step in the connection process. If you've added the SSH keys, you'll be able to connect to the VM immediately or after entering your key pair's passphrase.

If you haven't already added SSH keys, you'll be asked for your password:

root@<your_vm_ip>'s password:

3. Once you’ve entered the password, press ENTER.

If the SSH key is configured correctly, this will log you in to the VM.
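To avoid retyping the address and key path on every connection, you can store them in your SSH client configuration. This is a hypothetical sketch; the host alias, IP address, user, and key path are placeholders you would replace with your own values:

```
# ~/.ssh/config — hypothetical entry for an IONOS VM
Host ionos-vm
    HostName 203.0.113.10      # replace with your VM's public IP
    User root
    IdentityFile ~/.ssh/id_rsa
```

With this in place, `ssh ionos-vm` opens the connection.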

Update IPv6

To update IPv6 configurations for LANs in the Data Center Designer (DCD), follow these steps:

  1. Select the LAN you want to update IPv6 for.

  2. You can update your IPv6 CIDR block with prefix length /64 from the VDC's allocated range.

  3. Start the provisioning by clicking PROVISION CHANGES in the Inspector pane.

The Virtual Data Center (VDC) will now be provisioned with the new network settings. In this case, the existing configuration gets reprovisioned accordingly.

Note: IPv6 traffic and IPv6-enabled LANs are now supported for the Flow Logs feature. For more information about how to enable flow logs in DCD, see Enable Flow Logs.

Limitations

  • One limitation is that a /56 block is typically assigned to a data center, with a /64 block assigned inside this /56 block to each Local Area Network (LAN). The difference between a /56 and a /64 block is 8 bits, resulting in 2^8 (2 to the power of 8) blocks, or a total of 256 blocks. This limitation can impact the scalability and flexibility of IPv6 addressing in large networks. Therefore, it is important to carefully consider the allocation of IPv6 blocks to ensure efficient utilization of available resources.

  • You will get a new /56 prefix every time you create a new data center. If your services depend on static IPv6 addresses and you want to rebuild your data center, you must not delete the data center itself, but only its components, such as LANs and NICs.

  • Currently, only Ubuntu and Windows images are supported. If you want to use other images, you need to tweak the OS initialization process of your image. For example, on Debian the Dynamic Host Configuration Protocol version 6 (DHCPv6) client may need to be run manually after restarting the system. We are currently working on supporting IPv6 on all IONOS images selectable in the DCD. Generally, if the interfaces have not received an IPv6 address from the IONOS DHCP server, try running the DHCPv6 client manually.

  • AlmaLinux operates seamlessly when the hostname aligns with the requirements of the Network Manager DHCPv6 client. To ensure smooth functionality, it is crucial to have a valid hostname. For more information, see Hostname.

  • In previous versions of Rocky Linux, the IPv6 protocol may not be readily available after the initial boot. As of Rocky Linux 9.0, IPv6 support is available from the first boot.

  • Currently, IPv6 is not available for Managed Services such as Application Load Balancer (ALB), Network Load Balancer (NLB), Network Address Translation (NAT) Gateway, IP Failover and Managed Kubernetes (MK8s).
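The /56-to-/64 subnet arithmetic in the first limitation above can be checked with plain shell arithmetic:

```shell
# A /56 per data center, a /64 per LAN: 64 - 56 = 8 free bits,
# so each data center can hold 2^8 = 256 distinct /64 LAN prefixes.
echo $(( 1 << (64 - 56) ))   # prints 256
```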

Disable IPv6

To disable IPv6 for LANs in the Data Center Designer (DCD), follow these steps:

  1. Select the LAN you want to disable IPv6 for, and clear the Activate IPV6 for this LAN checkbox.

  2. Start the provisioning by clicking PROVISION CHANGES in the Inspector pane.

The Virtual Data Center (VDC) is provisioned with the new network settings. On disabling IPv6 on a LAN, the existing IPv6 configuration on the Network Interface Cards (NICs) will be removed.

FAQs

  1. How do I configure IPv6 on my network?

    IPv6 can be configured via the Data Center Designer (DCD) or the Cloud API using an IPv6-enabled LAN. You can get IPv6 support by configuring the network. For more information about how to enable IPv6 on Virtual Data Center LANs in DCD, see Enable IPv6.

  2. Why do we need IPv6 configuration in DCD?

    The main reason for the transition to IPv6 is the exhaustion of available IPv4 addresses due to the exponential growth of the internet and the increasing number of devices connected to it.

  3. If I use private images, do I need to adapt them in any way so that they support IONOS IPv6?

    IONOS IPv6 implementation currently supports Ubuntu and Windows images. If you want to use other images, you may need to tweak the OS initialization process of your image. For example, the Dynamic Host Configuration Protocol version 6 (DHCPv6) client may need to be run manually after the system boot. We are currently working on supporting IPv6 on all IONOS public images selectable in the DCD. Generally, if the interfaces have not received an IPv6 address from the IONOS Dynamic Host Configuration Protocol (DHCP) server, try running the DHCPv6 client manually.

    For other operating systems, the DHCPv6 client may require a manual restart to apply the new configuration received from the DHCPv6 server. This is because the client device may have cached the previous configuration information and needs to clear it before applying the new one. However, not all DHCPv6 implementations require a manual restart, as some may be able to automatically apply the new configuration without any intervention.
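For Ubuntu guests, one common way to request addresses over DHCPv6 is a netplan stanza. The following is a minimal sketch under stated assumptions: the file path and the interface name `ens6` are placeholders, and IONOS images may already ship their own network configuration:

```
# /etc/netplan/01-ipv6.yaml — hypothetical example, not an IONOS default
network:
  version: 2
  ethernets:
    ens6:
      dhcp4: true
      dhcp6: true    # request the static IPv6 address(es) from the DHCPv6 server
```

After editing the file, `sudo netplan apply` activates the configuration.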

DCD How-Tos

Prerequisites: Make sure you have the appropriate privileges. Only Contract Owners, Administrators or Users with the Create Kubernetes Clusters permission can create a cluster. Other user types have read-only access.

Download Kubeconfig

You can download the generated Kubeconfig for your cluster to be used with the kubectl command. The file will be generated on the fly when you request it.

You can download the file from the Kubernetes Manager or from the Inspector pane, in case the viewed data center contains active node pools.

Using the Kubernetes Manager

  1. Go to MANAGER Resources > Kubernetes Manager

  2. Select a cluster from the list

  3. Click kubeconfig.yaml or kubeconfig.json to download the file

Using the Inspector pane

1. Open a data center containing node pools

2. Select a Server that is managed by Kubernetes

3. On the right, select Open Kubernetes Manager

4. Choose between kubeconfig.yaml and kubeconfig.json for download

Database as a Service

IONOS's Database as a Service (DBaaS) consists of fully managed databases, with high availability, performance, and reliability hosted in IONOS Cloud and integrated with other IONOS Cloud services.

Database Engines

We currently offer the following database engines:

MongoDB

IONOS DBaaS lets you quickly set up and manage database clusters. Using IONOS DBaaS, you can manage MongoDB clusters, including their scaling and security, and create snapshots for backups.

PostgreSQL

IONOS DBaaS gives you access to the capabilities of the PostgreSQL database engine. Using IONOS DBaaS, you can manage cluster operations, scale your database, apply patches, create backups, and manage security.

DCD How-Tos

Quick Links:

PostgreSQL FAQ

Will DBaaS ever be offered on Cloud Cubes?

As of now, DBaaS is only offered on Virtual Servers. Cloud Cubes may be supported in the future as well.

Is there a connection pooling option for PostgreSQL?

IONOS DBaaS doesn't provide connection pooling. However, you may use a connection pooler (such as pgbouncer) between your application and the database.
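A minimal pgbouncer.ini sketch for such a setup might look like the following; the host, database name, auth file path, and pool sizes are placeholders, not IONOS-provided values:

```
[databases]
; forward connections for "mydb" to the managed cluster (placeholder hostname)
mydb = host=my-cluster.example.ionos.com port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling keeps few server connections while serving many clients
pool_mode = transaction
max_client_conn = 200
default_pool_size = 20
```

Your application then connects to port 6432 instead of 5432, and pgbouncer keeps the number of actual database connections small.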

After the connection limit has been reached, will there be an error?

Depending on the library you are using, it should be something like:

failed to create DB connection: addr x.x.x.x:5432: connection refused.

How do we prevent reaching the connection limit?

The best way to manage connections is to have your application maintain a pool of at least 10-20 connections. Opening a very large number of DB connections is considered bad practice. However, letting users configure max_connections themselves is an option for the future.

Can I scale the deployment to increase its connection limit?

Yes, see Connection Limits for more information.

What are the supported backup methods?

We provide an automated backup within our cloud. If you want to back up to somewhere else, you may use a client-side tool such as pg_dump.

What are the main considerations for latency and performance?

The number of standby nodes (in addition to the primary node) has little effect on latency; whether you have one or ten makes no difference. Synchronous modes are slower in write performance due to the added latency of communication between the primary node and a standby node.

Why can't I restore to the time I specified?

If you receive the error message Parameter out of bounds: The recovery target time is before the newest basebackup., check the earliestRecoveryTargetTime of your backup. Your restore target time needs to be after this timestamp. You may have another backup for your cluster with an earlier earliestRecoveryTargetTime that you can use instead.

If the earliestRecoveryTargetTime is missing in your backup, the cluster from which you want to restore wasn't able to complete a base backup. This can happen if, for example, you quickly delete a newly created cluster, since the base backup is only triggered up to a minute after the cluster becomes available.

Sizing

Note: Currently, DBaaS - MongoDB does not support scaling existing clusters.

RAM

The WiredTiger cache uses only a part of the RAM; the remainder is available for other system services and MongoDB calculations. The size of the RAM used for caching is calculated as 50% of (RAM size - 1 GB) with a minimum of 256 MB. For example, a 1 GB RAM instance uses 256 MB, a 2 GB RAM instance uses 512 MB, a 4 GB instance uses 1.5 GB, and so on.
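The sizing rule above can be tabulated with a quick shell sketch; this is pure arithmetic matching the examples in the text, and does not require MongoDB:

```shell
# cache = max(50% of (RAM - 1 GB), 256 MB)
for ram in 1 2 4 8; do
  awk -v r="$ram" 'BEGIN {
    c = 0.5 * (r - 1) * 1024          # megabytes
    if (c < 256) c = 256              # minimum of 256 MB
    printf "%d GB RAM -> %.0f MB cache\n", r, c
  }'
done
# prints:
# 1 GB RAM -> 256 MB cache
# 2 GB RAM -> 512 MB cache
# 4 GB RAM -> 1536 MB cache
# 8 GB RAM -> 3584 MB cache
```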

You get the best performance if your working set fits into the cache, but it is not required. The working set is the size of all indexes plus the size of all frequently accessed documents.

To view the size of all databases' indexes and documents, you can use a script similar to the one described in the MongoDB Atlas documentation. You must estimate what percentage of all documents is accessed at the same time based on your knowledge of the workload.

Additionally, each connection can use up to 1 MB of RAM that is not used by WiredTiger.

Disk

The disk contains:

  • Logs written by MongoD. They can take up to 2% of the disk space before MongoD deletes them.

  • OpLogs for the last 24 hours. Their size depends on your workload. There is no upper limit, so they can grow quite large, but they are removed on their own after 24 hours.

  • The data itself.

Operating system and applications are kept separately, outside of the configured storage, and are managed by IONOS.

Limitations

Connection Limits: As each connection requires a separate thread, the number of connections is limited to 51200 connections.

CPU: The total upper limit for CPU cores depends on your quota. A single instance can't exceed 16 cores.

RAM: The total upper limit for RAM depends on your quota. A single instance can't exceed 64 GB.

Storage: The upper limit for storage size is 1280 GB.

Backups: Storing cluster backups is limited to the last 7 days. Deleting a cluster also immediately removes all backups of it.

Backup and Recovery

Backups

MongoDB Backups: A cluster can have multiple snapshots. They are created:

  1. when a cluster is created, known as the initial sync, which usually completes in less than 24 hours.

  2. after a restore.

A snapshot is a copy of the data in the cluster at a certain time. Every 24 hours, a base snapshot is taken, and every Sunday, a full snapshot is taken. We keep snapshots for the last 7 days, so recovery is possible from up to a week ago.

You can restore from any snapshot as long as it was created with the same or older MongoDB patch version.

Snapshots are stored in an IONOS S3 Object Storage bucket in the same region as your database. Databases in regions where IONOS S3 Object Storage is not available are backed up to eu-central-2.

Warning: If you destroy a MongoDB cluster, all of its snapshots are also deleted.

Recovery

Recovery is achieved via restore jobs. A restore job is responsible for creating and cataloging a cluster restoration process. A valid snapshot reference is required for a restore job to recover the database from. The API exposes the available snapshots of a cluster.

There must be no other active restore job for the cluster in order to create a new one.

Warning: When restoring a database, it is advised to avoid connections to it until its restore job is complete and the cluster reaches AVAILABLE state.

View Cluster Metrics

To view cluster metrics in DCD, select the cluster of interest from the available Databases. The chosen database will open up. In Properties, select the database name next to the Monitor Databases option. The cluster metrics will open up:

It is possible to choose a time frame for metrics and instances of interest.

Setup a Cluster

Learn how to create and configure a Kubernetes Cluster using the DCD.

Manage Node Pools

Create, update and rebuild node pools.

Upgrade Node Pools

Upgrade and manage node pools.

Download Kubeconfig

Generate and download the yaml file.


IPv6 Configuration in Virtual Environments

Overview

Machines use IP addresses to communicate over a network, and IONOS has introduced Internet Protocol version 6 (IPv6) to its compute instances, offering a significantly larger pool of unique addresses. This upgrade enables support for the ever-growing number of connected devices.

At IONOS, we recognize the significance of IPv6 configuration in virtual environments and offer a flexible and scalable infrastructure that accommodates IPv6 configuration, allowing our customers to take advantage of the latest features.

One of the primary requirements is to ensure that VMs in the VDC can access services on the internet over IPv6. IONOS allows you to make the necessary provisions to provide seamless service access.

In addition to being a client to an IPv6 service, a Virtual Machine (VM) in the IONOS Virtual Data Center (VDC) can provide a service, such as a simple REST API, over IPv6. In this case, it is essential to ensure that the IPv6 address assigned to the VM is static. If DHCPv6 is enabled, the NICs can receive their static IPv6 address(es) via DHCPv6, so you do not need to log in to every server and hardcode the IPv6 address. A Network Interface Card (NIC) has a Media Access Control (MAC) address and sends a DHCPv6 Solicit request asking for a configuration for its MAC address. The DHCPv6 server then shares configuration information with the NIC, containing the IPv6 address(es); our DHCPv6 server knows which MAC address gets which IPv6 address(es). This is a critical requirement to allow you to access the service continuously, without any interruptions.

Concepts

IONOS supports the internet standard IPv6. Following are a few concepts associated with it:

  • IPv6, or Internet Protocol version 6, is the most recent version of the Internet Protocol (IP), providing a new generation of addressing and routing capabilities. IPv6 is designed to replace the older IPv4 protocol, which is limited in its available address space.

  • IPv6 uses 128-bit addresses, providing an almost limitless number of unique addresses. This allows for a much larger number of devices to be connected to the Internet.

  • IPv6 defines several types of addresses, including unicast, multicast, and anycast addresses. Unicast addresses identify a single interface on a device, multicast addresses identify a group of devices, and anycast addresses identify a group of interfaces that can respond to a packet.

  • IPv6 addresses are divided into two parts: a prefix and an interface identifier. The prefix is used for routing and can be assigned by an Internet Service Provider (ISP) or network administrator, while the interface identifier is typically generated by the device.

  • As IPv6 adoption continues, transition mechanisms are used to ensure compatibility between IPv6 and IPv4 networks. These mechanisms include dual-stack, tunneling, and translation methods. For more information about IPv6, see our blog post IPv6: Everything about the New Internet Standard.

High Availability

Cluster options

To guarantee partition tolerance, only odd numbers of cluster members are allowed. For the playground edition you can only have 1 replica. All other editions allow the use of 3 replicas. Soon we will allow more than 3 replicas per cluster.

All of these sizes are automatically highly available and replicate your data between instances. One instance is the primary, which accepts write transactions, and the others are secondary, which can optionally be used for read operations.

Replication

All instances in an IONOS MongoDB cluster are members of the same replica set, so all secondary instances replicate the data from the primary instance.

By default, the write concern for acknowledging write operations is set to "majority" with j: true. The term "majority" means the write operation must be propagated to a majority of instances. In a three-instance cluster, for example, at least two instances must have the operation before the primary acknowledges it. The j option additionally requires that the change has already been persisted to the on-disk journal of these instances.

If data is not replicated to the majority, it may be lost in the event of a primary instance failure and subsequent rollback.

Changing the commit guarantees per transaction

You can change the write concern for single operations, for example due to performance reasons. See the official documentation on write concerns.
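As a sketch, a per-operation override in a MongoDB shell session could look like this; the database collection (`events`) and the chosen values are illustrative, and the snippet assumes a connection to your own cluster:

```
// Hypothetical example: trade durability for speed on one insert.
// w: 1 waits only for the primary; j: false skips the journal flush.
db.events.insertOne(
  { type: "ping", at: new Date() },
  { writeConcern: { w: 1, j: false } }
)
```

Note that writes acknowledged with w: 1 can be lost if the primary fails before replication, as described above.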

Read preferences

You can determine which instance to use for read operations by setting the read preference on the client side, if your client supports it.

If you read from the primary instance, you always get the most up-to-date data. You can spread the load by reading from secondary instances, but you might get stale data. However, you can obtain consistency guarantees by using a read concern.
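If your driver supports it, the read preference can often be set directly in the connection string. A hedged example, in which hosts, credentials, database, and replica-set name are all placeholders:

```
mongodb://user:password@host1:27017,host2:27017,host3:27017/mydb?replicaSet=rs0&readPreference=secondaryPreferred
```

With secondaryPreferred, reads go to a secondary when one is available and fall back to the primary otherwise.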


Set Up a Virtual Server

The user who creates the Server has full root or administrator access rights. A server, once provisioned, retains all its settings (resources, drive allocation, password, etc.), even after a restart at the operating system level. The server will only be removed from your virtual data center once you delete it in the DCD. For more information, see the Virtual Servers page.

Creating a Server

Prerequisites: Make sure you have the appropriate privileges. Only contract owners, administrators, or users with the Create Data Center privilege can set up a VDC. Other user types have read-only access and can't provision changes.

1. Drag the Server element from the Palette onto the Workspace.

The created server is automatically highlighted in turquoise. The Inspector pane allows you to configure the properties of this individual server instance.

Create a Server by dragging it onto the Workspace. Fill out the Inspector pane on the right with Server properties

2. In the Inspector pane on the right, configure your server in the Settings tab.

  • Name: Choose a name unique to this VDC.

  • Availability Zone: The zone where you wish to physically host the server. Choosing A - Auto selects a zone automatically. This setting can be changed after provisioning.

  • CPU Architecture: Choose between AMD or Intel cores. You can later change the CPU type for a virtual server that is already running, though you will have to restart it first.

  • Cores: Specify the number of CPU cores. You may change these after provisioning. Please note that there are configuration limits.

  • RAM: Specify the RAM size; you may choose any size between 0.25 GB and 240 GB, in steps of 0.25 GB. This setting can be increased after provisioning.

  • SSH Keys: Select a premade SSH key. You must first have a key stored in the SSH Key Manager. Learn how to create and add SSH Keys.

  • Ad-hoc Key: Copy and paste the public part of your SSH key into this field.

Adding a bootable drive

  1. Drag a storage element (HDD or SSD) from the Palette onto a Server in the Workspace to connect them together. The highlighted VM will expand with a storage section.

  2. Click the Unnamed HDD Storage to highlight the storage section. Now you can see new options in the Inspector on the right.

Storage type cannot be changed after provisioning.

Configuring Storage

  1. Enter a name that is unique within your VDC.

  2. Select a zone in which you want the storage device to be maintained. When you select A (Auto), our system assigns the optimal Zone. The Availability Zone cannot be changed after provisioning.

  3. Specify the required storage capacity; the size can be increased after provisioning, even while the server is running, as long as this is supported by its operating system. It is not possible to reduce the storage size after provisioning.

You can select one of the IONOS images or snapshots, or use your own. Only images and snapshots that you have access to are available for selection. Since provisioning does not require you to specify an image, you can also create empty storage volumes.

Authentication

  1. Set the root or administrator password for your server according to the guidelines. This is recommended for both operating system types.

  2. Select an SSH key stored in the SSH Key Manager.

  3. Copy and paste the public part of your SSH key into this field.

  4. Select the storage volume from which the server is to boot by clicking on BOOT or Make Boot Device.

  5. Provision your changes. The storage device is now provisioned and configured according to your settings.

Alternative Mode

  • When adding a storage element using the Inspector, select the appropriate check box in the Add Storage dialog box. If you wish to boot from the network, set this on the server: Server in the Workspace > Inspector > Storage.

  • It is recommended to always use VirtIO to benefit from the full performance of InfiniBand. IDE is intended for troubleshooting if, for instance, the operating system has no VirtIO drivers installed. In this case, Windows usually displays a "blue screen" when booting.

  • After provisioning, the Live Vertical Scaling properties of the selected image are displayed. You can make changes to these properties later, which will require a reboot. You can set the properties of your uploaded images before you apply them to storage volumes in the Image Manager.

  • (Optional) Add and configure further storage elements.

  • (Optional) Make further changes to your data center.

  • Provision your changes. The storage device is now provisioned and configured according to your settings.

Adding a CD-ROM drive

To assign an image and specify a boot device, you need to add and configure a storage element.

  • Click on CD-ROM to add a CD-ROM drive so that you can use ISO images to install and configure an operating system from scratch.

  • Set up a network by connecting the server to other elements, such as an internet access element or other servers through their NICs.

  • Provision your changes.

The server is available according to your settings.

Start, Suspend or Reset a Server

We maintain dedicated resources available for each customer. You do not share your physical CPUs with other IONOS clients. For this reason, servers that are switched off at the operating system level still incur costs.

You should use the DCD to shut down virtual machines so that resources are completely deallocated and no costs are incurred. Servers deallocated this way remain in your infrastructure, while the resources are released and can then be redistributed.

This can only be done in the DCD. Shutting down a VM at the operating system level alone does not deallocate the resources or suspend the billing. Regardless of how the VM is shut down, it can only be restarted using the DCD.

A reset forces the server to shut down and reboot, but data loss may result.

Suspending a server

1. Choose a server. From the Settings tab in the Inspector, select Power > Suspend. The server is not deleted. This only suspends it and deallocates resources.

2. (Optional) In the dialog that appears, connect using Remote Console and shut down the VM at the operating system level to prevent data loss.

3. Confirm your action by checking the appropriate box and clicking Apply Suspend.

4. Provision your changes. Confirm the action by entering your password.

Inspector pane
Power State alert

The server is switched off. CPU, RAM, and IP addresses are released and billing is suspended. Connected storage devices will still be billed. Reserved IP addresses are not removed from the server. The deallocated virtual machine is marked by a red cross in DCD.

Starting a server

1. Choose a server. From the Settings tab in the Inspector, select Power > Start.

2. In the dialog box that appears, confirm your action by checking the appropriate box and clicking Apply Start.

3. Provision your changes. Confirm the action by entering your password.

The server is booted. A new public IP address is assigned depending on the configuration. Billing is resumed.

Resetting a server

1. Choose a server. From the Settings tab in the Inspector, select Power > Reset.

2. (Optional) In the dialog that appears, connect using the Remote Console and shut down the VM at the operating system level to prevent data loss.

3. Confirm your action by checking the appropriate box and clicking Apply Reset.

4. Provision your changes. Confirm the action by entering your password.

Result: The server will shut down and reboot.

Scaling a Server

1. In the Workspace, select the required server and use the Inspector pane on the right.

If you want to make changes to multiple VMs, select the data center and change the properties in the Settings tab.

In this tab, you will find an overview of all assets belonging to the selected VDC. You can change cores, RAM, server status, and storage size without having to switch from VM to VM in the Workspace.

2. Modify storage:

  • (Optional) Create a snapshot of the system for recovery in the event of problems.

3. In the Workspace, select the required server and increase the CPU size.

4. Provision your changes.

The CPU size is adjusted in the DCD. You must set the new size at the operating system level of your VM.

Deleting a Server

When you no longer need a particular server in your cloud infrastructure, you can remove it with a click of a mouse or keyboard, with or without the associated storage devices.

To ensure that no processes are interrupted and no data is lost, you should turn off the server before you delete it.

1. Select the Server in the Workspace.

2. Right-click the element to open its context menu and select Delete. Alternatively, select the element icon and press the DEL key.

3. In the dialog that appears, choose whether you also want to delete storage devices that belong to the server.

4. Provision your changes.

The server and its storage devices are deleted.

When you delete a server and its storage devices, or the entire data center, their backups are not deleted automatically. Only when you delete a Backup Unit will the backups it contains actually be deleted.

When you no longer need the backups of deleted VMs, you should delete them manually in the Backup Unit Manager to avoid unnecessary costs.

Block Storage

With IONOS Cloud Block Storage, you can quickly provision virtual servers and other Infrastructure-as-a-Service (IaaS) offerings. Consult our user guides, reference documentation, and FAQs to support your hosting needs.

Developer Tools

Recommended How-Tos

Learn how to set up additional block storage for your virtual instances.

Upload your own images or use those supplied by IONOS Cloud.

Manage User Access to various storage elements.

Learn More

Change Log

10 October 2021 | Live Vertical Scaling Overview section updated

DCD How-Tos

Prerequisites: Make sure you have the appropriate privileges. Only contract owners, administrators, or users with the Create Data Center privilege can set up a VDC. Other user types have read-only access and can't provision changes.

How-Tos

Learn how to set up additional block storage for your virtual instances.

Upload your own images or use those supplied by IONOS Cloud.

Manage User Access to various storage elements.

DCD How-Tos

Prerequisites: Make sure you have the appropriate privileges. Only contract owners, administrators, or users with the Create Data Center privilege can set up a VDC. Other user types have read-only access and can't provision changes.

How-Tos

Reserve and return IP addresses for network use.

Create a private network and add internet access.

Activate a multidirectional firewall and add rules.

Ensure that HA setups are available on your VMs.

Capture data related to IPv4 network traffic flows.

Connect VDCs with each other using a LAN.

Configure a Firewall

Activate and configure a Firewall for each Network Interface Card (NIC) to better protect your servers from attacks. IONOS Cloud Firewalls can filter incoming (ingress), outgoing (egress), or bidirectional traffic. When configuring firewalls, define appropriate rules to filter traffic accordingly.

Note: A Firewall without set rules blocks all traffic.

Activating a Firewall

1. In the Workspace, select a Virtual Machine (VM) with a NIC.

2. From the Inspector, open the Network tab.

3. Open the properties of the NIC for which you wish to set up a Firewall.

4. To activate the Firewall, choose between Ingress, Egress, or Bidirectional.

Activate the Firewall from the Inspector pane by choosing Ingress/Egress/Bidirectional. Make sure to set rules for the Firewall

Activating the Firewall without additional rules will block all incoming traffic. You can now add exceptions for ports and protocols by clicking Manage Rules.

Managing Firewall Rules

To create rules, define a new rule by clicking Create Firewall Rule. Additionally, you may add an existing set of rules by clicking Rules from Template. As a third option, you may import an existing rule set by clicking Clone Rules from other NIC.

You may enter a new rule or clone/template existing rules for Firewalls.

Modify the values of the Firewall rule:

  • Name: Enter a name for the rule.

  • Source MAC: Enter the MAC address that the firewall should allow.

  • Source IP: Enter the source IP address that the firewall should allow.

  • Target IP: If you use virtual IP addresses on the same network interface, you can enter them here to allow access.

  • Port Range Start: Set the first port of an entire port range.

  • Port Range End: Set the last port of a port range, or enter the port from Port Range Start if you only want this port to be allowed.

  • ICMP Type: Enter the ICMP type to be allowed. Example: 8 for echo request and 0 for echo reply (ping), or 30 for traceroute.

  • ICMP Code: Enter the ICMP Code to be allowed. Example: 0 for echo requests.

  • IP Version: Select a version from the drop-down list. By default, it is Auto.

Enter values for Firewalls

Click Save to confirm your Firewall setup.

DCD How-Tos

Prerequisites:

Prior to enabling IPv6, make sure you have the appropriate privileges. A new VDC can be created by contract owners, administrators, or users with the create VDC privilege. The prefix length is the number of leading bits that are fixed in the address. For a Data Center IPv6 CIDR, the prefix length is /56.

You can enable IPv6 for a LAN and configure the network to support it. On an IPv6 LAN, devices can communicate with each other using standard IPv6 protocols. IONOS LANs forward packets between devices and networks, ensuring that the network operates smoothly and efficiently.

Quicklinks

Learn how to enable IPv6 for LANs in VDC using the DCD.

Learn how to update IPv6 for LANs in VDC using the DCD.

Learn how to disable IPv6 for LANs in VDC using the DCD.

Learn all about the limitations associated with IPv6.

Learn all about the FAQs associated with IPv6.

Learn all about the IPv6 Support.

Managed Kubernetes

With IONOS Cloud Managed Kubernetes, you can quickly set up Kubernetes clusters and manage Node Pools. Consult our user guides, reference documentation, and FAQs to support your hosting needs.

Developer Tools

Recommended How-Tos

Learn how to create and configure a Server inside of the DCD.

Use the Remote Console to connect to instances without SSH.

Leverage Live Vertical Scaling to manage the use of resources.

Learn how to extend CoreDNS with additional configuration.

Learn More

Set Up a Kubernetes Cluster

Creating a cluster in the Kubernetes Manager

Prerequisites: Make sure you have the appropriate permissions. Only contract owners, administrators, or users with the Create Kubernetes Clusters permission can create a cluster. Other user types have read-only access.

1. Go to Containers > Managed Kubernetes.

Accessing the create cluster modal

2. Select Create Cluster.

3. Give the cluster a unique Name.

Naming conventions for Kubernetes:

  • Maximum of 63 characters in length.

  • Begins and ends with an alphanumeric character ([a-z0-9A-Z]).

  • Must not contain spaces or any other white-space characters.

  • Can contain dashes (-), underscores (_), and dots (.) in between.
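A quick local check of the naming rules above can be sketched in shell (the regex is an illustration of the stated convention, not the exact server-side validation):

```shell
# Validate a proposed cluster name: max 63 chars, alphanumeric at both
# ends, dashes/underscores/dots allowed in between, no whitespace.
is_valid_name() {
  [ "${#1}" -le 63 ] &&
    printf '%s' "$1" | grep -Eq '^[a-zA-Z0-9]([a-zA-Z0-9._-]*[a-zA-Z0-9])?$'
}

is_valid_name "my-cluster.01" && echo "valid"
is_valid_name "-bad-name" || echo "invalid"
```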

4. Select the Kubernetes Version you want to run in the cluster.

5. Click Create Cluster.

Result: The cluster will now be created and can be further modified and populated with node pools once its status is active.

To access the Kubernetes API provided by the cluster simply download the kubeconfig file and use it with tools like kubectl.

Configuring a cluster in Cluster Settings

1. Select Cluster name from the list and type a new name.

2. Select the Version of Kubernetes you want to run on the node pool.

3. Select the Maintenance time of your preferred maintenance window. Necessary maintenance for Managed Kubernetes will be performed accordingly.

4. Click Update Cluster to save your changes.

Configuring a cluster in cluster settings

The maintenance window starts at the time of your choosing and remains open for four hours. All planned maintenance work is performed within this window, though not necessarily at the beginning.

Deleting a cluster

Prerequisites: Make sure you have the appropriate permissions and access to the chosen cluster. The cluster should be active, evacuated, and no longer contain any node pools.

  1. Open the Kubernetes Manager.

  2. Select the cluster from the list.

  3. Click Delete in the menu.

  4. Confirm the deletion when prompted.

Deleting a cluster

API How-Tos

All Kubernetes API instructions can be found in the main Cloud API specification file.

To access the Kubernetes API provided by the cluster, simply download the kubeconfig for the cluster and use it with tools like kubectl.

Retrieve Kubernetes configuration files

GET https://api.ionos.com/cloudapi/v6/k8s/{k8sClusterId}/kubeconfig

Retrieve a configuration file for the specified Kubernetes cluster, in YAML or JSON format as defined in the Accept header; the default Accept header is application/yaml.
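One way to call the endpoint is with curl. This is a sketch: the bearer token, cluster ID, and output file name are placeholders, and the authentication scheme for your account may differ.

```shell
# Download the kubeconfig for a cluster as YAML (the default Accept
# header). CLUSTER_ID and $IONOS_TOKEN are placeholders.
CLUSTER_ID="00000000-0000-0000-0000-000000000000"

curl -s \
  -H "Authorization: Bearer $IONOS_TOKEN" \
  -H "Accept: application/yaml" \
  -o kubeconfig.yaml \
  "https://api.ionos.com/cloudapi/v6/k8s/$CLUSTER_ID/kubeconfig"
```

The downloaded file can then be used with standard tooling, for example kubectl --kubeconfig ./kubeconfig.yaml get nodes.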

Path Parameters

Name
Type
Description

k8sClusterId*

String

The unique ID of the Kubernetes cluster.

Query Parameters

Name
Type
Description

depth

String

Controls the detail depth of the response objects.

Headers

Name
Type
Description

X-Contract-Number

Integer

Users with multiple contracts must provide the contract number, for which all API requests are to be executed.

{
    "httpStatus": 400,
    "messages": [
        {
            "errorCode": "123",
            "message": "Error message example."
        }
    ]
}

Account Settings

Introduction

The Account Management panel is accessed by clicking on your name and email address. Here you can perform key administrative tasks related to your account and contract. Only Contract Owners have complete access. Consult access levels by user Role:

Menu item
Contract Owner
Administrator
User

My Settings

You can set default values for future VDCs. Each time you open a new VDC, the DCD will place your resources in the preset location, assigning them the same number of cores, memory size, capacity, and reserved IPs. For example, you can specify that all new VDCs must be located in Karlsruhe, or that all processors will use the Intel architecture.

1. Go to Account Management > My Settings.

2. In the My Settings panel, set default values for Session, Data Center, Server, and Storage.

Your new values are valid immediately. You may undo your changes by clicking on Reset or the Reset All button.

Password & Security

Your IONOS Cloud account comes with a number of security features to protect you from unauthorized access:

Changing your password

You define the password for your IONOS account yourself during the registration process. Your password must contain at least eight characters and a mixture of upper and lowercase letters and special characters.
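The stated rules can be checked locally as follows; this is an illustrative sketch of the rules above, not IONOS's exact validation logic:

```shell
# Check: at least 8 characters, upper- and lowercase letters, and at
# least one special (non-alphanumeric) character.
valid_password() {
  p=$1
  [ "${#p}" -ge 8 ] &&
    printf '%s' "$p" | grep -q '[A-Z]' &&
    printf '%s' "$p" | grep -q '[a-z]' &&
    printf '%s' "$p" | grep -q '[^A-Za-z0-9]'
}

valid_password 'Example#2024' && echo "acceptable"
```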

1. Go to Account Management > Password & Security > Change Password.

2. Enter your Current Password and the New Password twice. Click Change Password.

The password is changed and becomes effective with the next login.

Forgot your password? Click to reset it.

2-Factor Authentication

In addition to log-in credentials, this authentication method also requires an app-generated security code. Once 2-Factor Authentication has been activated, you can only access your account by entering the code generated by the Google Authenticator app. This method can be extended to hide specific data centers and snapshots from users, even if they belong to an authorized group. This feature is only available in the DCD.

Prerequisites: The Google Authenticator app is compatible with all Android or iOS mobile devices. You can install it on your device, free of charge, from the Google Play Store or the App Store. The app must be able to access your camera, and the time on the mobile device needs to be set automatically.

Users can turn on 2-Factor Authentication for their own accounts. Make sure it is not already activated by a Contract Owner or Administrator.

1. Go to Account Management > Password & Security.

2. Check the box: Enable 2-Factor Authentication. The Setup Assistant will open.

3. Proceed through each step by clicking Next.

  • Install the Google Authenticator app;

  • Scan the QR code using the app;

  • Enter the Security Token;

  • Confirm.

2-Factor Authentication is now on. You will need to provide a security code next time you log in.

4. To deactivate, return to Account Management > Password & Security.

5. Uncheck the box: Enable 2-Factor Authentication. The setting is effective upon the next login.

Contract Owners or Administrators can turn on 2-Factor Authentication for other user accounts in order to maintain heightened security.

1. Go to Menu Bar > MANAGER Resources > User Manager.

2. Select the required user.

3. In Meta Data, check the box: Force 2-Factor Auth. Click Save.

The setting will be effective upon the next login. The user will be guided through the Setup Assistant to complete the activation. For details on how to complete the Setup Assistant, consult the previous tab.

The user may not circumvent this step, nor are they able to deactivate 2-Factor Authentication.

To deactivate, in the Meta Data, uncheck the box: Force 2-Factor Auth.

The setting will be effective upon the next login.

Setting Support PIN

To ensure support calls are made by authorized users, we usually ask for the support PIN to verify the account. You can set your support PIN in the DCD and change it at any time. To set or change your support PIN, use the following procedure:

1. Go to Account Management > Password & Security > Set Support PIN.

2. Enter your support PIN in the PIN field. Click Set Support PIN.

The support PIN is now saved. You can use it to verify your account when contacting IONOS support.

Resource Overview

In this tab, you can track the global usage of resources available in your account.

Furthermore, this page provides an overview of usage limits per virtual instance.

Cost and Usage

In this tab, you can view the breakdown of estimated costs for the next invoice. The costs displayed in the DCD are a non-binding extrapolation based on your resource allocation since the last invoice. Please refer to your invoice for the actual costs. For more pricing information, please visit our pricing page.

1. Go to Account Management > Cost and Usage. The list breaks down your Snapshot, IP address, and Data Center usage.

2. You may click the down arrow to expand each section to view individual item charges.

The Total amount displayed excludes VAT.

Payment Method

As a contract owner, you can choose between two payment methods: direct debit or a credit card.

1. Open the Account Management > Payment Method.

2. Choose either method, enter your information, and Submit.

Credit card data are safely stored with our payment service provider. If you choose to pay by direct debit, you will receive a form from us with which we ask you to give us a direct debit authorisation in writing.

Contract Owners

Custom settings: If you wish to change your e-mail address or username, please contact your sales representative or our support team.

Removing a user account: As a contract owner or administrator, you can cancel a user account by removing the user from the User Manager. Resources created by the user are not deleted.

Canceling your account: If you wish to cancel your Enterprise Cloud (IaaS) contract and delete your account including all VDCs completely, please contact your IONOS account manager or the support team.

If you are a 1&1 IONOS hosting customer, please refer to the following help page: .

Networks

With IONOS Cloud Networks, you can quickly provision networks and other Infrastructure-as-a-Service (IaaS) offerings. Consult our user guides, reference documentation, and FAQs to support your hosting needs.

Developer Tools

Recommended How-Tos

Learn More

Change Log

10 October 2021 | Live Vertical Scaling Overview section updated

Reserve an IP Address

If you want to build a network using static IP addresses, IONOS Cloud offers you the option to reserve IP addresses for a fee. You can register one or more addresses in an IP block using the IP Manager. It is not possible to reserve a specific IP address; you are assigned a random address by IONOS Cloud.

An IP address can only be used in the data center from the region where it was reserved. Therefore, if you need an IP address for your virtual data center in Karlsruhe, you should reserve the IP address there. Each IP address can only be used once, but different IP addresses from a block can be used in different networks, provided these networks are provisioned in the same region where the IP block is located.

Reserving and using IP addresses is restricted to authorized users only. Contract owners and administrators may grant privileges to reserve IP addresses.

Reserving an IP address

Prerequisites: Make sure you have the appropriate permissions. Only contract owners, administrators, or users with the Reserve IP privilege can reserve IP addresses. Other user types have read-only access and can't provision changes.

1. Open the IP Manager: Menu > MANAGER Resources > IP Manager.

2. Click on + Reserve IPs.

3. Enter the following IP block information:

  • Name: Enter a name for the IP block.

  • Number of IPs: Enter the number of IPs you want to reserve.

  • Region: Enter the location of the IONOS data center where you want your IPs to be available.

4. Confirm your entries by clicking Reserve IPs.

The number of IPs you have reserved are available as an IP block. The IP block details should now be visible on the right.

Returning an IP address

IP addresses cannot be returned individually, but only as a block and only when they are not in use. Please keep in mind that if you return a static IP address, you can't reserve it again afterwards.

  1. Open the IP Manager: Menu > MANAGER Resources > IP Manager.

  2. Ensure the IPs you want to release are not in use.

  3. Select the required IP block.

  4. Click Delete to return the IP block to the pool.

  5. In the dialog that appears, confirm your action by clicking OK.

The IP block and all IP addresses contained are released and removed from your IONOS Cloud account.

Upgrade Node Pools

A node pool upgrade generally happens automatically during weekly maintenance. You can also trigger it manually, e.g. when upgrading to a higher version of Kubernetes. In any case, the node pool upgrade will result in rebuilding all nodes belonging to the node pool.

During the upgrade, an "old" node in a node pool is replaced by a new node. This may be necessary for several reasons:

  • Software updates: Since the nodes are considered immutable, IONOS Cloud does not install software updates on the running nodes, but replaces them with new ones.

  • Configuration changes: Some configuration changes require replacing all included nodes.

Considerations: Multiple node pools of the same cluster can be upgraded at the same time. A node pool upgrade locks the affected node pool and you cannot make any changes until the upgrade is complete. During a node pool upgrade, all of its nodes are replaced one by one, starting with the oldest one. Depending on the number of nodes and your workload, the upgrade can take several hours.

If the upgrade was initiated as part of weekly maintenance, some nodes may not be replaced to avoid exceeding the maintenance window.

Rebuilding a node

Please make sure that you have not exceeded your contract quota for servers, otherwise, you will not be able to provision a new node to replace an existing one.

The rebuilding process consists of the following steps:

  1. Provision a new node to replace the "old" one and wait for it to register in the control plane.

  2. Exclude the "old" node from scheduling to avoid deploying additional pods to it.

  3. Drain all existing workload from the "old" node.

  • First, IONOS Cloud tries to gracefully drain the node.

- PodDisruptionBudgets are enforced for up to 1 hour.

- The termination grace period for pods is respected for up to 1 hour.

  • If the process takes more than 1 hour, all remaining pods are deleted.

4. Delete the "old" node from the node pool.
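The steps above mirror the cordon/drain/delete sequence an operator would run by hand with kubectl. IONOS performs this automatically; the node name and flags below are illustrative:

```shell
NODE="pool-xyz-node-1"   # placeholder node name

# 2. Exclude the old node from scheduling.
kubectl cordon "$NODE"

# 3. Drain existing workloads; grace periods and PDBs apply.
kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data --timeout=1h

# 4. Remove the node once it is empty.
kubectl delete node "$NODE"
```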

Draining nodes

Please consider the following node drain updates and their impact on the maintenance procedure:

Under the current platform setup, a node drain considers PodDisruptionBudgets (PDBs). If a concrete eviction of a pod violates an existing PDB, the drain would fail. If the drain of a node fails, the attempt to delete this node would also fail.

In the past, we observed problems with unprepared workloads or misconfigured PDBs, which often led to failed drains and node deletions, and thus to failed node pool maintenance.

To prevent this, the node drain is split into two stages. In the first stage, the system continues to try to gracefully evict the pods from the node. If this fails, the second stage forcefully drains the node by deleting all remaining pods. This deletion bypasses PDB checks, which prevents the drain from failing.

How does this affect node pool maintenance?

As a result of the two-stage procedure, the process will stop failing due to unprepared workloads or misconfigured PDBs. However, please note that this change may still cause interruptions to workloads that are not prepared for maintenance. During maintenance, nodes are replaced one-by-one. For each node in a node pool, a new node is created. After that, the old node is drained and then deleted.

At times, a pod would not return to READY after having been evicted from a node during maintenance. In such cases, a PDB was in place for a pod’s workload. This led to failed maintenance and the rest of the workload left untouched. With the force drain behavior, the maintenance process will proceed and all parts of the workload will be evicted and potentially end up in a non-READY state. This might lead to an interruption of the workload. To prevent this, please ensure that your workload’s pods are prepared for eviction at any time.
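To rely on graceful eviction rather than the force-drain fallback, give your workloads a PodDisruptionBudget they can actually satisfy. A minimal sketch (names and labels are placeholders):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb
spec:
  minAvailable: 1          # keep at least one replica running during evictions
  selector:
    matchLabels:
      app: example         # must match a workload with more than one replica,
                           # otherwise evictions will block
```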

Failover and Upgrade

Failover procedures

Planned failover: During a failure or planned failover, the client must reconnect to the database. A planned failover is signaled to the client by the closing of the TCP connection on the server. The client must also close the connection and reconnect.

In the event of a failure, the connection might not be closed correctly. The new leader will send a gratuitous ARP packet to update the MAC address in the client's ARP table. Open TCP connections will be reset once the client sends a TCP packet. We recommend re-establishing a connection to the database by using an exponential back-off retry with an initial immediate retry.
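The recommended retry strategy can be sketched as follows; try_connect stands in for your client's actual connection attempt, and the retry count is an arbitrary example:

```shell
# Placeholder connection check; replace with your client's own attempt.
try_connect() {
  psql -h "$HOST" -d "$DBNAME" -c 'SELECT 1' >/dev/null 2>&1
}

# Reconnect with an immediate first retry, then exponential back-off.
reconnect() {
  delay=1
  for attempt in 1 2 3 4 5; do
    if [ "$attempt" -gt 2 ]; then
      sleep "$delay"        # back off before later retries: 1s, 2s, 4s, ...
      delay=$((delay * 2))
    fi
    if try_connect; then
      echo "connected"
      return 0
    fi
  done
  echo "gave up"
  return 1
}
```

Calling reconnect tries once, retries immediately, and then backs off exponentially before giving up.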

Uncontrolled disconnection: Since we do not allow read connections to standby nodes, only primary disconnections are possible. However, uncontrolled disconnections can happen during maintenance windows, a cluster change, and during unexpected situations such as loss of storage disk space. Such disconnections abort ongoing transactions, and clients should reconnect.

If a node is disconnected from the cluster, a new node will be created and provisioned. Losing a primary node leads to the same situation, in which clients should reconnect. Losing a replica is not noticeable to the customer.

PostgreSQL upgrades

IONOS Cloud updates and patches your database cluster to achieve high standards of functionality and security. This includes minor patches for PostgreSQL, as well as patches for the underlying OS. We try to make these updates unnoticeable to your operation. However, occasionally, we might have to restart your PostgreSQL instance to allow the changes to take effect. These interruptions will only occur during the maintenance window for your database, which is a weekly four-hour window.

When your cluster only contains one replica, you might experience a short downtime during this maintenance window while your database instance is being updated. In a replicated cluster, we only update standbys, but we might perform a switchover in order to change the leader node.

Considerations: Updates to a new minor version are always backward compatible. Such updates are done during the maintenance window with no additional actions from the user side.

Major Version Upgrades

Caution: Major changes of the PostgreSQL version are irreversible and can fail. You should read the official migration guide and test major version upgrades with an appropriate development cluster first.

Prerequisites:

  • Read the PostgreSQL migration guide and make sure your database cluster can be upgraded.

  • Test the upgrade on a development cluster with similar or the same data (you can create a new database cluster as a clone of your existing cluster).

  • Prepare for downtime during the major version upgrade.

  • Ensure the database cluster has enough available storage. While the upgrade is space-efficient (i.e. it does not copy the data directory), some temporary data is written to disk.

Before upgrading PostgreSQL major versions, customers should be aware that IONOS Cloud is not responsible for customer data or any utilized PostgreSQL functionality. Hence, it is the responsibility of the customer to ensure that the migration to a new PostgreSQL major version does not impact their operations.

As per the PostgreSQL versioning policy: "New major versions also typically introduce some user-visible incompatibilities, so application programming changes might be required."

Supported Versions

Starting with version 10, PostgreSQL moved to a yearly release schedule, where each major version is supported for 5 years after its initial release. You can find more details in the PostgreSQL versioning policy. We strive to support new versions as soon as possible.

When a major version approaches its end of life (EOL), we will announce the deprecation and removal of the version at least 3 months in advance. About 1 month before the EOL, no new database can be created with the deprecated version (the exact date will be part of the first announcement). When the EOL is reached, databases that have not yet been upgraded will be upgraded during their next maintenance window.

Database Migration

You can migrate your existing databases over to DBaaS using the pg_dump, pg_restore and psql tools.

Using the SQL script format

To dump a database, use the following command:

pg_dump -U <username> -h <host> -p <port> -t <tablename> <databasename> -f dump.sql

The -t <tablename> flag is optional and can be added if you only want to dump a single table.

This command creates a script file containing all instructions needed to recreate your database. It is the most portable format available. To restore it, simply feed it into psql; the database to restore to has to already exist:

psql -U <username> -h <host> -p <port> -d <databasename> -f dump.sql

Using the custom format

pg_dump can also be used to dump a database in an archived format. Archived formats cannot be restored using psql; pg_restore is used instead. This has some advantages, such as smaller file size and bandwidth savings, and it also speeds up the restore process by restoring concurrently.

In this guide we use the "custom" format. It is compressed by default and provides the most flexible restore options. The flag -F c selects the custom archive format. If you are interested in the other available formats, please refer to the official documentation.

pg_dump -U <username> -h <host> -p <port> -F c <databasename> -f dump

To restore from a custom format archive, you have to use pg_restore. The following command assumes that the database to restore to already exists:

pg_restore -U <username> -h <host> -p <port> -F c -d <databasename> dump

When specifying the -C parameter, pg_restore can be instructed to recreate the database for you. For this to work, you need to specify a database that already exists; it is used for the initial connection and for creating the new database. In this example we use the database "postgres", which is the default database that should exist in every PostgreSQL cluster. The name of the database to restore to is read from the archive:

pg_restore -U <username> -h <host> -p <port> -F c -C -d postgres dump

Large databases can be restored concurrently by adding the -j <number> parameter and specifying the number of jobs to run concurrently.
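For scripted migrations, the flags above can be assembled into command lines before being passed to a process runner. A minimal Python sketch; all connection values are placeholders, not real credentials:

```python
# Sketch: assemble the pg_dump / pg_restore command lines shown above.
# All connection values are placeholders.

def pg_dump_cmd(user, host, port, dbname, outfile, fmt="c", table=None):
    """Build a pg_dump command; -t is only added when a table is given."""
    cmd = ["pg_dump", "-U", user, "-h", host, "-p", str(port), "-F", fmt]
    if table:
        cmd += ["-t", table]
    cmd += [dbname, "-f", outfile]
    return cmd

def pg_restore_cmd(user, host, port, dbname, archive, jobs=None):
    """Build a pg_restore command; -j enables concurrent restore jobs."""
    cmd = ["pg_restore", "-U", user, "-h", host, "-p", str(port),
           "-F", "c", "-d", dbname]
    if jobs:
        cmd += ["-j", str(jobs)]
    cmd.append(archive)
    return cmd

print(" ".join(pg_dump_cmd("alice", "db.example.com", 5432, "mydb", "dump")))
```

Such a helper only mirrors the flags documented above; consult the pg_dump and pg_restore reference pages for the full option set.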

For more information on pg_restore see the official documentation.

Note: The use of pg_dumpall is not possible because it requires a superuser role to work correctly. Superuser roles are not obtainable on managed databases.

Change Log

5 January 2022 | Product documentation updates:

  • Upgrades added to Features

  • WAL and Logs added to Resource Usage

  • Logs changed in Limitations

  • explained rotation of log files

  • new Performance section (Estimates)

  • further details regarding max_connections

21 December 2021 | Product documentation updates:

  • new sections: High Availability and Scaling; Failover and Upgrade; Backup and Recovery

  • new section: Activate Extensions

  • new section: PostgreSQL FAQ

10 October 2021 | API documentation updates:

  • the API swagger doc URL has changed: it is now https://api.ionos.com/docs/postgresql/v1/ (cutting out the "cloudapi" part)

  • Renamed all parameters from snake_case to camelCase

  • lastModifiedBy and createdBy now contain the unique username instead of the display name

  • createdByUserId and lastModifiedByUserId now contain the UUID of the user, which is also used in other Cloud APIs.

  • ram_size is now ram and changes to a plain integer (in megabytes)

  • cpu_core_count is now cores

  • replicas is now instances

  • vdc_connection is now connection

  • vdc_id is now datacenterId

  • ip_address is now cidr

  • in maintenance window: weekday is now dayOfTheWeek

  • metadata is no longer part of the properties. metadata and id move up to sit beside properties

  • lifecycle_status is now state and moves to metadata

  • For cloning/restoring the backup: all parameters are now in the request body

  • error messages no longer contain a title field

  • backup_enabled field is now removed. Backup is always enabled and the option was deprecated for some time.

  • the swagger spec was migrated to v3

Overview

IONOS DBaaS lets you set up a replicated MongoDB cluster in minutes.

DBaaS is fully integrated into the Data Center Designer. You may also manage it via automation tools like Terraform and Ansible.

Compatibility: DBaaS currently supports MongoDB version 5.0.

Locations: For technical reasons, MongoDB is offered only in select locations. Currently available are de/fra, de/txl, gb/lhr, es/vit, us/ewr, and fr/par.

Features

High availability: Multi-instance clusters across different physical hosts with automatic data replication and failure handling.

Security: Communication between instances and between the client and cluster are protected with TLS using Let's Encrypt.

Backup: Daily snapshots are kept for up to seven days.

Restore: Databases can be restored in-place.

Resources: Cluster instances are dedicated Cubes, with a dedicated CPU, storage, and RAM. All data is stored on high-performance directly attached NVMe devices and encrypted at rest.

Network: Clusters can be accessed via private LANs.

Platform Tasks

Note: IONOS Cloud doesn’t allow full access to the MongoDB cluster. For example, due to security reasons, you can't use all roles and have to create users via the IONOS API.

DBaaS services offered by IONOS Cloud:

Our platform is responsible for all back-end operations required to maintain your database in optimal operational health.

  • Database management via the DCD or the DBaaS API

  • Configuring default values, for example for data replication and security related settings

  • Automated backups for a period of 7 days

  • Regular patches and upgrades during maintenance

  • Disaster recovery via automated backup

  • Service monitoring: both for the database and the underlying infrastructure

Customer database administration duties:

Tasks related to the optimal health of the database remain the responsibility of the customer. These include:

  • Choosing an adequate sizing

  • Data organisation

  • Creation of indexes

  • Updating statistics

  • Consultation of access plans to optimise queries

Terminology

  • Cluster: The whole MongoDB cluster, currently equal to the replica set.

  • Instance: A single server or replica set member inside a MongoDB cluster.

API How-Tos

Quick Links:

DBaaS API - OpenAPI Specification

Endpoint: https://api.ionos.com/databases/mongodb

To make authenticated requests to the API, you must include a few fields in the request headers. Please find relevant descriptions below:

Request parameter headers

| Header | Required | Type | Description |
| --- | --- | --- | --- |
| Authorization | yes | string | HTTP Basic authorization: a base64-encoded string of a username and password separated by a colon (username:password). |
| X-Contract-Number | no | integer | Users with more than one contract may apply this header to indicate the applicable contract. |
| Content-Type | yes | string | Set this to application/json. |

Examples

We use curl in our examples, as this tool is available on Windows 10, Linux and macOS. Please refer to our blog post about curl on Windows if you encounter any problems.
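As an illustration of the Authorization header described in the table above, here is a small sketch using Python's standard library; the credentials are placeholders:

```python
# Sketch: build the HTTP Basic Authorization header value.
# The credentials below are placeholders, not real account data.
import base64

def authorization_header(username: str, password: str) -> str:
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

print(authorization_header("username", "password"))
```

The resulting string goes into the Authorization request header; curl performs the same encoding automatically when called with -u username:password.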

Connect from Kubernetes

This guide shows you how to connect to a database from your managed Kubernetes cluster.

We assume the following prerequisites:

  • A datacenter with id xyz-my-datacenter.

  • A private LAN with id 3 using the network 10.1.1.0/24.

  • A database connected to LAN 3 with IP 10.1.1.5/24.

  • A Kubernetes cluster with id xyz-my-cluster.

In this guide we use DHCP to assign IPs to node pools. Therefore, it is important that the database is in the same subnet that is used by the DHCP server.
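You can sanity-check this same-subnet requirement with Python's ipaddress module, using the example addresses above:

```python
# Check that the database IP lies inside the LAN subnet used by DHCP.
# Addresses are taken from the example prerequisites above.
import ipaddress

lan = ipaddress.ip_network("10.1.1.0/24")
db_ip = ipaddress.ip_address("10.1.1.5")

print(db_ip in lan)  # True: the database shares the node pools' subnet
```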

To enable connectivity, you must connect the node pools to the private LAN in which the database is exposed:

ionosctl k8s nodepool create --cluster-id xyz-my-cluster --datacenter-id xyz-my-datacenter --lan-ids 3 --dhcp=true --name=my_nodepool

Wait for the node pool to become available. To test the connectivity, let's create a pod that contains the Postgres tool pg_isready. If you have multiple node pools, make sure to schedule the pod only on the node pools that are attached to the additional LAN.

# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: connectivity-test
  labels:
    role: connectivity-test
spec:
  containers:
  - name: postgres
    image: postgres
    stdin: true
    tty: true
    command:
      - "/bin/bash"

Let's create the pod...

kubectl apply -f pod.yaml

... and attach to it.

kubectl attach -it connectivity-test
If you don't see a command prompt, try pressing enter.
root@connectivity-test:/# pg_isready -h 10.1.1.5
10.1.1.5:5432 - accepting connections

If everything works, we should see that the database is accepting connections. If you see connection issues, make sure that the node is properly connected to the LAN. To debug the node, start a debugging container ...

kubectl debug node/$(kubectl get po connectivity-test -o jsonpath="{.spec.nodeName}") -it --image=busybox

... and follow the network troubleshooting guide.

MongoDB

With IONOS Cloud Database as a service, you can quickly setup and manage MongoDB database clusters. Leverage our user guides, reference documentation, and FAQs to support your hosting needs.

Product Overview

API How-Tos

Monitoring as a Service

With IONOS Cloud Monitoring as a Service (MaaS), you can track any virtual instance in your data center and set triggers when usage limits are reached. Leverage our user guides, reference documentation, and FAQs to support your hosting needs.

Developer Tools

Recommended How-To's

Change Log

10 October 2021 | Live Vertical Scaling Overview section updated

Database Migration

This How-To shows you how to migrate your MongoDB data from one cluster to another; for example, from your existing on-premises MongoDB cluster to an IONOS MongoDB cluster.

This migration requires some downtime, as you need to stop write operations to your old cluster while dumping the data.

High level plan

The steps for a migration are as follows:

  1. Preparations

  2. Stop writes to old cluster - start of downtime

  3. Dump data from old cluster using mongodump

  4. Optionally: copy dump

  5. Restore data into new cluster using mongorestore

  6. Use the new cluster - end of downtime

Preparations

A place to run mongodump and mongorestore

You need a machine that can access the old cluster and run mongodump, and a machine that can access the new cluster and run mongorestore.

Both steps can happen on the same machine. If they are two different machines, you need a way to copy the data dump from one to the other.

Check MongoDB version

Both clusters need to be running the same major version. You can see the version in the greeting if you connect with mongosh or you can query it with db.version().

Access to old cluster

If your old cluster has access control enabled, you need a user with find permissions on all databases. The easiest way is to grant the backup role to the user that you want to use for dumping the data.

You can then verify that you can connect by running the mongodump command from the section "Dump data from old cluster" below and aborting the dump with Ctrl-C.

User in new cluster

You need one user with write permissions to all databases that your dump contains.

Additionally, you can't restore users and roles via mongorestore, so you have to create all users with their credentials and roles via the IONOS API. You can list all users and roles in your old cluster with db.system.users.find() in the admin database and then create them in the new cluster according to the documentation on User Management. You can't see the plain-text passwords on your old cluster, so you need to collect them from wherever you stored them.

Dump data from old cluster

Caution: The use of --oplog isn't possible because it requires elevated privileges on restore. Therefore, you need to make sure that no write operations happen during the dump; otherwise you get an inconsistent dump.

You can dump the data from the old cluster with this command:

mongodump --host="hostname:port" \
  --username="username" --password="password" \
  --authenticationDatabase "admin" \
  --gzip --archive=mongodb.dump

Optionally, you can limit the data using the --db, --collection, or --query flags to dump only specific databases, collections, or documents.
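As an illustration, a small Python sketch of how these optional flags extend the mongodump invocation; all values are placeholders, and the exact flag semantics are documented in the mongodump reference:

```python
# Sketch: assemble the mongodump command with optional scope filters.
# All values are placeholders.
def mongodump_cmd(host, user, password, db=None, collection=None, query=None):
    cmd = ["mongodump", f"--host={host}",
           f"--username={user}", f"--password={password}",
           "--authenticationDatabase", "admin",
           "--gzip", "--archive=mongodb.dump"]
    if db:
        cmd.append(f"--db={db}")
    if collection:
        cmd.append(f"--collection={collection}")
    if query:  # a JSON document, e.g. '{"status": "active"}'
        cmd.append(f"--query={query}")
    return cmd
```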

Restore data in new cluster

You can restore the dumped data in the new cluster with this command:

mongorestore --uri "mongodb+srv://username:password@m-xyz-example.mongodb.de-txl.ionos.com" \
  --gzip --archive=mongodb.dump \
  --nsExclude "admin.system.*"

admin.system.* resources are excluded, since you can't modify users and roles from inside MongoDB in an IONOS MongoDB cluster. You need to use the IONOS API for them.


Cloud API

Compute SDKs

Config Management Tools

Reserve an IP Address

Reserve and return IP addresses for network use.

Configure a Network

Create a private network and add internet access.

Configure a Firewall

Activate a multidirectional firewall and add rules.

Manage IP Failover

Ensure that HA setups are available on your VMs.

Enable Flow Logs

Capture data related to IPv4 network traffic flows.

Cross Connect VDCs

Connect VDCs with each other using a LAN.

IONOS Cloud Networks - Frequently Asked Questions



Configure a Network

Setting up a private network

DCD helps you connect the elements of your infrastructure and build a network to set up a functional virtual data center. Without a connected internet access element, your network is private.

The quickest way to connect elements is to drag them from the Palette directly onto elements that are already in the Workspace. The DCD will then show you whether and how the elements can be connected automatically.

1. Drag the elements from the Palette into the Workspace and connect them through their NICs.

2. In the Workspace, select the required VM; the Inspector will show its properties on the right.

3. From the Inspector, open the Network tab. Now you can access NIC properties.

4. Set NIC properties according to the following rules:

  • MAC: The MAC address will be assigned automatically upon provisioning.

  • Primary IP: The primary IP address is automatically assigned by the IONOS DHCP server. You can, however, enter an IP address for manual assignment by selecting one of the reserved IPs from the drop-down menu. Private IP addresses (according to RFC 1918) must be entered manually. The NIC has to be connected to the Internet.

  • Failover: If you have an HA setup including a failover configuration on your VMs, you can create and manage IP failover groups that support your HA setup.

  • Firewall: Configure a firewall.

  • DHCP: It is often necessary to run a DHCP server in your virtual data center (e.g. PXE boot for fast rollout of VMs). If you use your own DHCP server, clear this check box so that your IPs are not reassigned by the IONOS DHCP server.

  • Additional IPs: In order to use "floating" or virtual IPs, you can assign additional IPs to a NIC by selecting them from the drop-down menu.

When ready, provision your changes. The VDC will create a private network according to set properties.
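The RFC 1918 check mentioned for the Primary IP setting can be done programmatically; a small sketch with Python's standard ipaddress module:

```python
# RFC 1918 private ranges: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16.
import ipaddress

def is_rfc1918(ip: str) -> bool:
    """Return True when the address falls in an RFC 1918 private range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in (
        ipaddress.ip_network("10.0.0.0/8"),
        ipaddress.ip_network("172.16.0.0/12"),
        ipaddress.ip_network("192.168.0.0/16"),
    ))

print(is_rfc1918("10.1.1.5"), is_rfc1918("8.8.8.8"))  # True False
```

Addresses in these ranges must be entered manually in the DCD, since the IONOS DHCP server only assigns public and reserved IPs automatically.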

Splitting LANs

1. To split a LAN, select the required LAN in the Workspace.

2. In the Inspector, open the Actions menu and select Split LAN.

3. Confirm by clicking Split LAN.

4. Make further changes to your data center and provision your changes when ready.

The selected LAN is split and new IPs are assigned to the NICs in the new LAN.

Merging LANs

1. To merge a LAN into another LAN, select the required LAN in the Workspace.

2. In the Inspector, open the Actions menu and select Merge LAN with another LAN.

3. In the dialog that appears, select the LANs to be merged with the selected LAN.

4. Select the checkboxes of the LANs you wish to keep separate.

5. Confirm by clicking Merge LANs.

6. (Optional) Make further changes to your data center.

7. Provision your changes.

The selected LANs are merged and new IPs are assigned to the NICs in the newly integrated LAN.

A private LAN that is integrated into a public LAN also becomes a public LAN.

Adding Internet access

Servers with internet access are assigned an IP automatically by the IONOS DHCP server. Please note that multiple servers sharing the same internet interface also share the same subnet. With required permissions, you can add as many internet access elements as you wish.

Users who do not have the permissions to add a new internet access element, can connect to an existing element in their VDC, provided they have the permissions to edit it.

1. To add internet access, drag the Internet element from the Palette onto the Workspace.

2. Connect this element with Servers.

3. Set further properties of the connection at the respective NIC.

Connect from Kubernetes

This guide shows you how to connect to a MongoDB cluster from your managed Kubernetes cluster.

We assume the following prerequisites:

  • A data center with id xyz-my-datacenter.

  • A private LAN with id 3.

  • A MongoDB cluster connected to LAN 3, with the connection string mongodb+srv://m-xyz-example.mongodb.de-txl.ionos.com.

  • A Kubernetes cluster with id xyz-my-k8s-cluster.

  • ionosctl set up with your IONOS credentials.

In this guide, we use DHCP to assign IP addresses to node pools. Therefore, it is important that the database is in the same subnet that's used by the DHCP server.

To enable connectivity, you must connect the node pools to the private LAN with the MongoDB cluster:

ionosctl k8s nodepool create --cluster-id xyz-my-k8s-cluster --datacenter-id xyz-my-datacenter --lan-ids 3 --dhcp=true --name=my_nodepool

Wait for the node pool to become available. To test the connectivity, you can create a pod that contains the MongoDB tool mongosh. If you have multiple node pools, make sure to schedule the pod on one of the nodes of the node pools that are attached to the private LAN.

# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: connectivity-test
  labels:
    role: connectivity-test
spec:
  containers:
  - name: mongo
    image: mongo
    stdin: true
    tty: true
    command:
      - "/bin/bash"

Let's create the pod...

kubectl apply -f pod.yaml

... and attach to it.

kubectl attach -it connectivity-test
If you don't see a command prompt, try pressing enter.
root@connectivity-test:/# mongosh "mongodb+srv://m-xyz-example.mongodb.de-txl.ionos.com"
Current Mongosh Log ID:	631063ca901a9459bab0b4d4
Connecting to:		mongodb+srv://m-xyz-example.mongodb.de-txl.ionos.com/?appName=mongosh+1.5.4
Using MongoDB:		5.0.10
Using Mongosh:		1.5.4

For mongosh info see: https://docs.mongodb.com/mongodb-shell/


To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy).
You can opt-out by running the disableTelemetry() command.

Enterprise a0def940-2455-11ed-a564-7a0f508690ac [primary] test> 

If everything works, we should see that the database is accepting connections. If you see connection issues, make sure that the node is properly connected to the LAN. To debug the node start a debugging container ...

kubectl debug node/$(kubectl get po connectivity-test -o jsonpath="{.spec.nodeName}") -it --image=busybox

... and follow the network troubleshooting guide.

Cloud Cubes

A Cloud Cube (or just Cube) is a virtual machine with an attached NVMe volume. Each Cube you create is a new virtual machine that you can use either standalone or in combination with other IONOS Cloud products. For more information, see the Cloud Cubes documentation.

You can create and configure your Cubes visually using the DCD interface. See the Setup a Cloud Cube page for more information. However, the creation and management of Cubes is easily automated via the Cloud API, as well as our custom-made tools and SDKs.

Configuration templates

You may choose between eight template sizes. Each template varies by processor, memory, and storage capacity. The breakdown of resources is as follows:

| Size | vCPUs | RAM | NVMe storage |
| --- | --- | --- | --- |
| XS | 1 | 1 GB | 30 GB |
| S | 1 | 2 GB | 50 GB |
| M | 2 | 4 GB | 80 GB |
| L | 4 | 8 GB | 160 GB |
| XL | 6 | 16 GB | 320 GB |
| XXL | 8 | 32 GB | 640 GB |
| 3XL | 12 | 48 GB | 960 GB |
| 4XL | 16 | 64 GB | 1280 GB |

Configuration templates are set upon provisioning and cannot subsequently be changed.
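Since templates cannot be changed after provisioning, it pays to pick the right size up front. For reference, the template table can be expressed as a data structure; the selection helper below is illustrative, not part of the product:

```python
# Cube templates from the table above: (vCPUs, RAM in GB, NVMe in GB).
TEMPLATES = {
    "XS": (1, 1, 30),    "S": (1, 2, 50),     "M": (2, 4, 80),
    "L": (4, 8, 160),    "XL": (6, 16, 320),  "XXL": (8, 32, 640),
    "3XL": (12, 48, 960), "4XL": (16, 64, 1280),
}

def smallest_template(vcpus, ram_gb, nvme_gb):
    """Return the first (smallest) template satisfying all requirements."""
    for name, (c, r, s) in TEMPLATES.items():
        if c >= vcpus and r >= ram_gb and s >= nvme_gb:
            return name
    return None  # no template is large enough

print(smallest_template(2, 4, 100))  # "L": template M has only 80 GB NVMe
```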

Resource usage

Counters: The use of Cubes' vCPU, RAM, and NVMe storage resources counts toward existing VDC resource usage. However, dedicated resource usage counters are enabled for Cloud Cubes: they permit granular monitoring of vCPUs and NVMe storage, which are counted separately from the dedicated cores of enterprise VM instances and from SSD block storage.

Billing: Please note that suspended Cubes continue to incur costs; if you do not delete unused instances, you will continue to be charged for usage. To save on costs, create snapshots of NVMe volumes that you do not immediately need and delete the unused instances. At a later time, use these snapshots to recreate identical Cubes as needed. Please note that recreated instances may be assigned a different IP address.

Storage options

Included direct-attached storage: A default Cube comes ready with a high-speed direct-attached NVMe storage volume. Please check Configuration Templates for NVMe Storage sizes.

Add-on network block storage: You may attach more HDD or SSD (Standard or Premium) block storage. Each Cube supports up to 23 block storage devices in addition to the existing NVMe volume. Added HDD and SSD devices, as well as CD-ROMs, can be unmounted and deleted any time after the Cube is provisioned for use.

Boot options: Any storage device, including the CD-ROM, can be selected as the boot volume. You may also boot from the network.

Images and snapshots: Images and snapshots can be created from and copied to direct-attached storage, block storage devices, and CD-ROM drives. Also, direct-attached storage volume snapshots and block storage volumes can be used interchangeably.

Disaster recovery

A recovery point is generated daily for each Cube NVMe storage volume. This recovery point can be used to recreate the instance again with the same contents, except for those stored in added volumes.

IONOS Cloud network block storage devices are already protected by a double-redundant setup, so they are not included in the recovery points. Instead, recovered block storage devices will automatically be mounted to the new Cube instances.

Limitations

Cloud Cubes are limited to a maximum of 24 devices. The NVMe volume already occupies one of these slots.

You may not change the properties of a configuration template (vCPU, RAM, and direct-attached storage size) after the Cube is provisioned.

The direct-attached NVMe storage volume is set upon provisioning and cannot be unmounted or deleted from the instance.

If available account resources are not sufficient for your tasks, please contact our support team to increase resource limits for your account.



Monitoring APIs

SDKs

Config Management Tools

Access MaaS

Access MaaS inside of the DCD using the Monitoring Manager or via the Inspector.

Set User Privileges

Learn how to grant, modify and revoke user privileges.

Create Alarms and Actions

Use the Monitoring Manager to set Alarms and modify resulting Actions


Resource Access Control

Users who are not contract owners or administrators need access rights to view, use, or edit resources in a Virtual Data Center. These access rights are assigned to groups and are inherited by group members.

Setting access rights and ownership

Users can access a resource with the following access rights:

  • Read: Users can see and use the resource, but they cannot modify it. Read access is automatically granted as soon as a user is assigned to a group that has this access right.

  • Edit: Users can modify and delete the resource.

  • Share: Users can share a resource, including their access rights, with the groups to which they belong.

A user who created a resource is the owner of that resource and can specify its access rights.

The owner is shown in the Security tab of a resource.
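A user's effective rights are the union of the rights of all groups they belong to. A minimal sketch of this inheritance model (the group and user data are illustrative, not real DCD entities):

```python
# Effective access rights are inherited from group membership.
GROUP_RIGHTS = {                      # illustrative groups
    "developers": {"read", "edit"},
    "auditors": {"read"},
}
USER_GROUPS = {                       # illustrative users
    "alice": ["developers", "auditors"],
    "bob": ["auditors"],
}

def effective_rights(user):
    """Union of the rights of every group the user belongs to."""
    rights = set()
    for group in USER_GROUPS.get(user, []):
        rights |= GROUP_RIGHTS[group]
    return rights

print(sorted(effective_rights("alice")))  # ['edit', 'read']
```

Note that in the DCD, read access is granted automatically with group membership, matching the behavior sketched here.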

Setting restrictions using Two-Factor Authentication

In addition to enabling access to a resource for users of authorized groups only, data centers and snapshots can be protected even further by restricting access to users who have 2-factor authentication activated. Other users cannot see or select these resources, even if they belong to an authorized group.

Depending on their role, users can set access rights at the resource level and in the User Manager.

Setting access rights at the resource level

Prerequisites: Make sure that you have the appropriate permissions. Only contract owners, administrators, or users with access rights permission can share the required resource. Other user types have read-only access and cannot provision changes.

  1. Open the location of the required resource:

  • Data centers: open the data center.

  • Images: Menu Bar > Resource Manager > Image Manager > Image.

  • Snapshots: Menu Bar > Resource Manager > Image Manager > Snapshot.

  • IP addresses: Menu Bar > Resource Manager > IP Manager.

  • Kubernetes Cluster: Menu Bar > Resource Manager > Kubernetes Manager.

2. Select the required resource.

3. Open Security > Visible to Groups.

4. Enable access:

  • From the + Add Group menu, select the required groups. Read access is granted; users can see and use, but not modify, the resource.

  • (Optional) Select further permissions (Edit, Share). You may only share permissions that you have yourself.

5. Restrict or disable access:

  • Select the required group.

  • Deactivate the checkbox of the permission.

Read access is retained.

Alternatively, you can click Remove Group. Access will be disabled for all members of the selected group.

Optional: To protect the resource (data center, snapshots) more thoroughly by only allowing access to users whose login is secured with 2-factor authentication, activate the 2-Factor Protected check box.

Setting access rights in the User Manager

Contract owners and administrators can also define in the User Manager who may access a resource to what extent.

Prerequisites: Make sure you have the appropriate permissions. Only contract owners and administrators can set the access rights.

Set the access rights in the User Manager

  1. Open the User Manager: Menu Bar > Resource Manager > User Manager.

  2. In the Resources section, select the required resource.

  3. Open Visible to Groups.

  4. Enable access:

  • From the + Add Group list, add the required groups.

  • (Optional) To enable write access or sharing of a resource, activate the relevant check box.

5. Disable access: deactivate the checkbox of the permission or click Remove Group.

Optional: To protect the resource (data center, snapshots) more thoroughly by only allowing access to users whose login is secured with 2-factor authentication, activate the 2-Factor Protected check box.

Assigning resources to a group

  1. In the Groups, select the required group.

  2. Open the Resources of Group.

  3. To enable access:

  • Select the required resource by clicking on + Grant Access. This enables read access to the selected resource.

  • (Optional) To enable write access or sharing of a resource, activate the respective check box.

4. To disable access:

  • Select the required resource.

  • Deactivate the check box of the appropriate permission or click on Revoke Access.

You can find more information about managing the Groups here.

PostgreSQL

With IONOS Cloud Database as a Service, you can quickly setup and manage a PostgreSQL database. Leverage our user guides, reference documentation, and FAQs to support your hosting needs.

Product Overview

DCD How-Tos

API How-Tos

API Reference

SDK Reference

Frequently Asked Questions

API How-Tos

Quick Links:

DBaaS API - OpenAPI Specification

Endpoint: https://api.ionos.com/databases/postgresql

To make authenticated requests to the API, you must include a few fields in the request headers. Please find relevant descriptions below:

Request parameter headers

| Header | Required | Type | Description |
| --- | --- | --- | --- |
| Authorization | yes | string | HTTP Basic authorization: a base64-encoded string of a username and password separated by a colon (username:password). |
| X-Contract-Number | no | integer | Users with more than one contract may apply this header to indicate the applicable contract. |
| Content-Type | yes | string | Set this to application/json. |

Examples

We use curl in our examples, as this tool is available on Windows 10, Linux and macOS. Please refer to our blog post about curl on Windows if you encounter any problems.

OpenSSH Instructions

Generating an SSH key

SSH keys can be generated and used on macOS or Linux if both OpenSSH and the ssh-keygen command-line tools are installed. OpenSSH is a collection of tools for establishing SSH connections to remote servers, while ssh-keygen is a utility for generating SSH keys.

An SSH key is composed of two files. The first is the private key, which should never be shared. The other is a public key that enables you to access your provisioned Cubes. When you generate the keys, you will use ssh-keygen to store them in a secure location so that you can connect to your instances without encountering the login prompt.

Manually generate SSH keys when working with OpenSSH via the Terminal application by following the steps below.

Enter the following command into the Terminal window and press ENTER.

ssh-keygen

The key generation process is initiated by the command above. When you run this command, the ssh-keygen utility prompts you for a location to save the key.

Accept the default location by pressing the ENTER key, or enter the path to the file where you want to save the key, e.g. /home/username/.ssh/id_rsa.

Enter file in which to save the key (/home/username/.ssh/id_rsa):

If you have previously generated a key pair, you may see the following prompt. If you choose to overwrite the key, you will no longer be able to authenticate with the previously generated key.

/home/username/.ssh/id_rsa already exists.
Overwrite (y/n)?

Enter the passphrase that will be used to encrypt the private key file on the disk. You can also press ENTER to accept the default (no passphrase). However, we recommend that you use a passphrase.

Enter your passphrase once more.

After you confirm the passphrase, the public and private keys are generated and saved in the specified location. The confirmation will look like this:

Your identification has been saved in /home/username/.ssh/id_rsa.
Your public key has been saved in /home/username/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:AcP/ieiAOoD7MjrKepXks/qHAhrRasGOysiaIR94Quk [email protected]
The key's randomart image is:
+---[RSA 3072]----+
|     .o          |
|      .o         |
|..     ..        |
|.oo .   ..       |
|+=.+ . .So .     |
|X+. * . . o      |
|&Eo. *           |
|&Oo.o o          |
|@O++..           |
+----[SHA256]-----+

The public key is saved to the file id_rsa.pub, which will be the key you upload to your DCD account. Your private key is saved to the id_rsa file in the .ssh directory and is used to verify that the public key you use belongs to the same DCD account.

On macOS, you can copy the public key to your clipboard by running the following command:

pbcopy < ~/.ssh/id_rsa.pub
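The interactive steps above can also be scripted. The following is a minimal non-interactive sketch; the temporary directory and the key comment are illustrative only, and the empty passphrase (-N '') is for demonstration — use a passphrase in practice, as recommended above.

```shell
# Generate an RSA key pair non-interactively into a temporary directory.
tmpdir=$(mktemp -d)
ssh-keygen -q -t rsa -b 3072 -N '' -f "$tmpdir/id_rsa" -C "user@example.com"
# The public half is the part you upload to the DCD SSH Key Manager.
grep -c '^ssh-rsa ' "$tmpdir/id_rsa.pub"   # → 1
```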

Adding the SSH key to the DCD Resource Manager

In addition to the SSH Keys stored in the SSH Key Manager, the IONOS Cloud Cubes SSH key concept includes:

  • Default keys

  • Ad-hoc SSH Keys.

Default keys are SSH keys that you intend to use frequently and have marked as such in the SSH Key Manager. When you configure storage devices, the default SSH keys are pre-selected. You can, however, specify which SSH keys are to be used before provisioning and deselect the preselected standard keys in favor of another SSH key.

Ad-hoc SSH keys, on the other hand, are SSH keys that you only use once and do not intend to save in the SSH Key Manager for future use.

The DCD's SSH Key Manager allows you to save and manage up to 100 public SSH keys for SSH access setup. This saves you from having to copy and paste the public part of an SSH key from an external source multiple times.

After copying the SSH key to the clipboard, log in to your DCD account.

1. Open the SSH Key Manager: Menu > Management > SSH Keys

Accessing the SSH Key Manager from the Data Center Designer

2. Select + Add Key in the top left corner.

Adding an SSH Key to the Manager

3. Paste the SSH key from the clipboard into the SSH Key field. If you have saved your SSH Key in a file, you can upload it by selecting the Choose file button in the Select Key file field.

Copy and paste key info in the SSH Key field.

Make sure the SSH keys you enter are valid. The DCD does not validate the syntax or format of the keys.

Optional: Select the Default checkbox to have the SSH key pre-selected when configuring SSH access.

4. Click Save to save the key. The SSH key has now been saved in the SSH Key Manager and is visible in the SSH Key Manager's table of keys.

Connecting via OpenSSH

You can connect to your Cubes instance via OpenSSH. To do so, you will need a terminal application, which varies depending on your operating system. For:

  • Linux: Search Terminal or press CTRL+ALT+T

  • macOS: Search Terminal

  • Windows: Search Bash. If you don’t have Bash installed, use PuTTY instead.

The steps below will show you how to connect to your Cubes.

Open the Terminal application and enter the SSH connection command below. After the @, add the IP address of your Cubes instance. Then press ENTER.

ssh [email protected]

When you log in for the first time, the server isn't recognized on your local machine, so you'll be asked if you're sure you want to keep connecting. You can type yes and then press ENTER.

Authentication is the next step in the connection process. If you've added the SSH keys, you'll be able to connect to the Cubes immediately or after entering your key pair's passphrase.

If you haven't already added SSH keys, you'll be asked for your password:

[email protected]'s password:

Nothing is displayed in the terminal when you enter your password, so take extra care when pasting in the initial password. Pasting into text-based terminals is different from other desktop applications. It is also different from one window manager to another:

  • For Linux Gnome Terminal, use CTRL+SHIFT+V.

  • For macOS, use SHIFT+CMD+V or the middle mouse button.

  • For Bash on Windows, right-click on the window bar, choose Edit, then Paste. You can also right-click to paste if you enable QuickEdit mode.

Once you’ve entered the password, press ENTER.

If the SSH key is configured correctly, this will log you into the Cubes.
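If you are unsure which settings OpenSSH will apply for the connection, ssh -G prints the resolved client configuration without actually connecting. A small sketch; 192.0.2.10 is a documentation placeholder for your instance's IP:

```shell
# Print the client configuration ssh would use for this connection,
# without opening a connection to the host.
ssh -G -o IdentitiesOnly=yes -i ~/.ssh/id_rsa root@192.0.2.10 | grep -E '^(user|hostname) ' | sort
# → hostname 192.0.2.10
# → user root
```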

Manage IP Failover

Managing IP Failover groups

To make sure that high-availability (HA) or failover setups on your VMs are effective in the event of a physical server failure, you should set up "IP failover groups".

They are essential to all HA or fail-over setups irrespective of the mechanism or protocol used.

Please ensure that the high-availability setup is fully installed on your VMs. Creating an IP failover group in the DCD alone is not enough to set up a failover scenario.

A failover group is characterized by the following components:

  • Members: The same (reserved, public) IP address is assigned to all members of an IP failover group so that communication within this group can continue in the event of a failure. You can set up multiple IP failover groups. A NIC can be a member of multiple IP failover groups. Virtual servers should be spread over different availability zones. The rules for managing the traffic between your VMs in the event of a failure are specified at the operating system level using the options and features for setting up high-availability or failover configurations. Users must have access rights for the IPs they wish to use.

  • Master: During the initial provisioning, the master of an IP failover group in the DCD represents the master of the HA setup on your virtual machines. If you change the master later, you won't have to change the master of the IP failover group in the DCD.

  • Primary IP address: The IP address of the IP failover group can be provisioned as the primary or additional IP address. We recommend that you provide the IP address used for the IP failover group as the primary IP address, as it is used to calculate the gateway IP, which is advantageous for some backup solutions. Please note that this will replace the previously provisioned primary IP address. When there are multiple IP failover groups in a LAN, a NIC involved in multiple of these groups can only be used once for the primary IP address. The DCD will alert you accordingly.

Limitations and restrictions

For technical reasons this feature can only be used subject to the following limitations:

  • In public LANs that do not contain load balancers.

  • With reserved public IP addresses only - DHCP-generated IP addresses cannot be used.

  • Virtual MAC addresses are not supported.

  • IP failover must be configured for all HA setups.

Creating an IP Failover Group

Prerequisites: Please make sure that you have the privileges to Reserve IPs. You should have access to the required IP address. The LAN for which you wish to create an IP failover group should be public (connected to the Internet), and should not contain a load balancer.

1. In the Workspace, select the required LAN.

2. In the Inspector, open the IP Failover tab.

3. Click Create Group. In the dialog box that appears, select the IP address from the IP drop-down menu.

Select the NICs that you wish to include in the IP failover group by selecting their respective checkboxes.

Select the Primary IP checkboxes for all NICs for which the selected address is to be the primary IP address.

The primary IP address previously assigned to a NIC in another IP failover group is replaced.

Select the master of the group by clicking the respective radio button.

4. Click Create.

5. Provision your changes.

The IP failover group is now available.
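The group created in the DCD can also be expressed programmatically. This is a hypothetical sketch of an IP failover payload following the Cloud API's "ipFailover" property on a LAN; every UUID, ID, and address below is a placeholder, and the commented request is only illustrative:

```shell
# Build a hypothetical ipFailover payload: the same reserved IP is assigned
# to every member NIC of the group, as described above.
cat > failover.json <<'EOF'
{
  "ipFailover": [
    { "ip": "203.0.113.10", "nicUuid": "00000000-0000-0000-0000-000000000000" },
    { "ip": "203.0.113.10", "nicUuid": "11111111-1111-1111-1111-111111111111" }
  ]
}
EOF
# Validate the payload locally before sending it anywhere.
python3 -m json.tool failover.json >/dev/null && echo "payload OK"
# The payload would then be sent with something like:
# curl --request PATCH --user "user@example.com:password" \
#      --header "Content-Type: application/json" --data-binary @failover.json \
#      https://api.ionos.com/cloudapi/v6/datacenters/<dc-uuid>/lans/<lan-id>
```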

Editing an IP failover group

1. Click the IP address of the required IP failover group.

2. The properties of the selected group are displayed.

3. To change the IP address, click Change.

4. In the dialog box that appears, select a new IP address.

(Optional) If no IP address is available, reserve a new one by clicking +.

5. Specify the primary IP address by selecting the respective check box.

6. Confirm your changes by clicking Change IP.

7. To Change Master, select the new Master by clicking the respective radio button.

8. To add or remove members, click Manage.

9. Select or clear the checkboxes of the required NICs.

10. Confirm your changes by clicking Update Group.

Deleting an IP Failover Group

1. Click the IP address of the required failover group.

2. The properties of the selected IP failover group are displayed.

3. Click Remove. Confirm your action by clicking OK.

4. Provision your changes.

The IP failover group is no longer available. The DCD no longer maps your HA setup.

Enable Flow Logs

Overview

Use the Flow logs feature to capture data that is related to IPv4 and IPv6 network traffic flows. Flow logs can be enabled for each network interface of a Virtual Machine (VM) instance, as well as the public interfaces of the Network Load Balancer (NLB) and the Network Address Translation (NAT) Gateway.

Flow logs can help you with a number of tasks such as:

  • Debugging connectivity and security issues

  • Monitoring network throughput and performance

  • Logging data to ensure that firewall rules are working as expected

Flow logs are stored in a customer’s IONOS S3 Object Storage bucket, which you configure when you create a flow log collector.

Network traffic flows

A network traffic flow is a sequence of packets sent from a specific source to a specific unicast, anycast, or multicast destination. A flow could be made up of all packets in a specific transport connection or a media stream. However, a flow is not always mapped to a transport connection one-to-one.

A flow consists of the following network information:

  • Source IP address

  • Destination IP address

  • Source port

  • Destination port

  • Internet protocol

  • Number of packets

  • Bytes

  • Capture start time

  • Capture end time
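A record containing the fields above can be unpacked with standard shell tools. This is a hypothetical sketch assuming one space-separated record per line in the listed field order; the actual on-disk record format is defined by the flow log record documentation, and the addresses are placeholders:

```shell
# Hypothetical flow record: src dst sport dport proto packets bytes start end
record="198.51.100.7 203.0.113.20 44321 443 6 12 3640 1629810035 1629810635"
set -- $record
# Derive the flow duration from the capture start and end timestamps.
echo "src=$1 dst=$2 packets=$6 bytes=$7 duration=$(( $9 - $8 ))s"
# → src=198.51.100.7 dst=203.0.113.20 packets=12 bytes=3640 duration=600s
```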

Flow log basics

Core concepts

  • Flow log data for a monitored network interface is stored as flow log records, which are log events containing fields that describe the traffic flow. For more information, see Flow Log Record.

  • Flow log records are written to flow logs, which are then stored in a user-defined IONOS S3 Object Storage bucket from where they can be accessed.

  • You can export, process, analyze, and visualize flow logs using tools such as Security Information and Event Management (SIEM) systems, Intrusion Detection Systems (IDS), Cyberduck, Logstash, etc.

  • Traffic flows in your network are automatically captured in accordance with the defined rules.

  • Flow logs are collected at a 10-minute rotation interval and have no impact on customer resources or network performance. Statistics about a traffic flow are collected and aggregated during this time period to create a flow log record.

No flow log file will be created if no flows for a particular bucket are received during the log rotation interval. This prevents empty objects from being uploaded to the IONOS S3 Object Storage.

  • The flow log file's name is prefixed with an optional object prefix, followed by a Unix timestamp and the file extension .log.gz, for example, flowlogs/webserver01-1629810635.log.gz.

  • Flow logs are retained in the IONOS S3 Object Storage bucket until they are manually deleted. Alternatively, you can configure objects to be deleted after a predefined time period using a Lifecycle Policy for an object in the IONOS S3 Object Storage.

  • The IONOS S3 Object Storage owner of the object is an IONOS internal technical user named [email protected] (Canonical ID 31721881|65b95d54-8b1b-459c-9d46-364296d9beaf).

Never delete the IONOS Cloud internal technical user from your bucket, as this disables the flow log service. The bucket owner also receives full permissions to the flow log objects by default.
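The object-naming convention described above (optional prefix, Unix timestamp, .log.gz) can be unpacked with standard shell tools, for example to recover the rotation time in UTC:

```shell
# Extract the Unix timestamp from the example object name and render it in UTC.
name="flowlogs/webserver01-1629810635.log.gz"
ts=${name##*-}
ts=${ts%.log.gz}
date -u -d "@$ts" '+%Y-%m-%d %H:%M:%S'
# → 2021-08-24 13:10:35
```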

Limitations

To use flow logs, you need to be aware of the following limitations:

  • You can't change the configuration of a flow log or the flow log record format after it's been created. In the flow log record, for example, you can't add or remove fields. Instead, delete the flow log and create a new one with the necessary settings.

  • There is a limit of one flow log created per NIC, NAT Gateway, and Network Load Balancer.

Cross Connect VDCs

Cross Connect is a feature that allows you to connect virtual data centers (VDCs) with each other using a LAN. The VDCs to be connected need to belong to the same IONOS Cloud contract and region. You can only use private LANs for a Cross Connect connection. A LAN can only be a part of one Cross Connect.

The IP addresses of the NICs used for the Cross Connect connection may not be used in more than one instance. They need to belong to the same IP range. For the time being, this needs to be checked manually. An automatic check will be available in the future.

Creating a Cross Connect

Prerequisites: Make sure you have the appropriate permissions. Only contract owners, administrators, or users with the Create Private Cross Connects privilege can work with Cross Connects. Other user types have read-only access and can't provision changes.

If you want to connect your virtual data centers with each other, you need to create a Cross Connect first.

1. Open the Cross Connect Manager: Menu > Resource Manager > Cross Connect Manager.

2. Select + Create. (Optional) Enter a name and a description for this Cross Connect.

3. Finish your entries by clicking Create Cross Connect.

4. (Optional) Make further changes to your data center.

5. Provision your changes.

The Cross Connect will be created.
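The same Cross Connect can be sketched programmatically. The following is a hypothetical example following the Cloud API's /pccs endpoint shape; the name and description are placeholders and the commented request is only illustrative:

```shell
# Build a minimal Cross Connect payload: only a name and a description,
# matching the optional fields from the Cross Connect Manager dialog above.
cat > pcc.json <<'EOF'
{ "properties": { "name": "my-cross-connect", "description": "Links VDC A and VDC B" } }
EOF
# Validate the payload locally before sending it anywhere.
python3 -m json.tool pcc.json >/dev/null && echo "payload OK"
# curl --request POST --user "user@example.com:password" \
#      --header "Content-Type: application/json" --data-binary @pcc.json \
#      https://api.ionos.com/cloudapi/v6/pccs
```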

Connecting data centers

When you want to connect your data centers, you need a Cross Connect which serves as a "hub" or "container" for the connection. This is created in the Cross Connect Manager. You can then add a VDC to the connection by setting up a Cross Connect element in the VDC.

Prerequisites: Make sure you have the appropriate permissions. Only contract owners, administrators, or users with the Create Private Cross Connects privilege can work with Cross Connects. Other user types have read-only access and can't provision changes.

The data centers should be under the same contract. Prior to connecting, they should be provisioned and part of the same location. The LANs to be used for the connection should be private LANs. The NICs to be connected should have unique IP addresses that belong to the same IP range.

How to connect data centers

  1. Open the VDC that you wish to connect with other VDCs by means of a Cross Connect.

  2. Drag a Cross Connect element from the Palette to the Workspace.

  3. Connect the Cross Connect element to the LAN with which the connection is to be established.

  4. Select the Cross Connect element in the Workspace.

  5. From the drop-down menu in the Inspector, select the connection to which you wish to add your VDC.

  6. Ensure the IP addresses in use meet the requirements. (Optional) Make further changes to your data center.

  7. Provision your changes.

The selected VDC was added to the Cross Connect and is now connected with all VDCs that belong to this connection.

Removing a data center from a Cross Connect

When you don't want a virtual data center to be connected to other data centers, you can remove it from a Cross Connect. If you want to delete a Cross Connect, you need to remove all data centers from it.

  1. Open the required data center.

  2. In the Workspace, select the required Cross Connect.

  3. Set it to Not connected. Inspector > Private Cross Connect

  4. (Optional) Make further changes to your data center.

  5. Provision your changes.

The data center connection to the selected Cross Connect is deleted and the data center is removed from it.

Deleting a cross connect

If you no longer need a Cross Connect, you can easily remove it from the Cross Connect Manager. A Cross Connect can only be deleted when it does not contain any data centers.

  1. Open the Cross Connect Manager: Menu > Resource Manager > Cross Connect Manager

  2. In the Workspace, select the required Cross Connect.

  3. In the Connected LANs tab, ensure that the Cross Connect does not contain any virtual data centers.

  4. Remove existing data centers from the Cross Connect.

  5. Confirm your action by clicking Delete.

The selected Cross Connect will be deleted.

Setup a Cluster

Prerequisites: Prior to setting up a database, please make sure you are working within a provisioned VDC that contains at least one virtual machine (VM) from which to access the database. The VM you create is counted against the quota allocated in your contract.

Note: Database Manager is available for contract admins, owners and users with Access and manage DBaaS privilege only. You can set the privilege via DCD group privileges.

Creating a Cluster

1. To create a Postgres cluster, go to Menu > Databases.

2. In the Databases tab, click + Add in the Postgres Clusters section to start creating a new Postgres Cluster.

3. Provide an appropriate Display Name.

4. To create a Postgres Cluster from the available backups directly, you can go to the Create from Backup section and follow these steps:

  • Select a Backup from the available list of cluster backups in the dropdown.

  • Select the Recovery Target Time field. A modal will open up.

    • Select the recovery date from the calendar.

    • Then, select the recovery time using the clock.

5. Choose a Location where the data for your database cluster will be stored. You can select any available data center location to create your cluster.

6. Select a Backup Location, i.e., the region where your backups are stored. You can keep off-site backups by choosing a region that is not included in your database region.

7. In the Cluster to Datacenter Connection section, provide the following information:

  • Data Center: Select a datacenter from the available list.

  • LAN: Select a LAN for your datacenter.

  • Private IP/Subnet: Enter the private IP or subnet using the available Private IPs.

  • Once done, click on the Add Connection option to establish your cluster to datacenter connection.

Note: To know your private IP address/Subnet, you need to:

  • Create a single server connected to an empty private LAN and check the IP assigned to that NIC in that LAN. The DHCP in that LAN always uses a /24 subnet, so you have to reuse the first 3 IP blocks to reach your database.

  • To prevent a collision with the DHCP IP range, it is recommended to use IP addresses ending between x.x.x.3/24 and x.x.x.10/24 (which are never assigned by DHCP).

  • If you have disabled DHCP on your private LAN, then you need to discover the IP address on your own.
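The note above can be worked through in the shell. Here 10.7.222.5 stands in for the IP that DHCP assigned to the test NIC; keep the first three octets and pick a host part between .3 and .10 for the database:

```shell
# Derive a database IP/subnet from a placeholder DHCP-assigned NIC IP.
nic_ip=10.7.222.5
prefix=${nic_ip%.*}          # keep the first three octets: 10.7.222
echo "${prefix}.4/24"        # choose a host part that DHCP never assigns
# → 10.7.222.4/24
```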

8. Select the appropriate Postgres Version. IONOS Database Manager supports versions 11, 12, 13, 14, and 15.

9. Enter the number of Postgres Instances in the cluster. One Postgres instance always manages the data of exactly one database cluster.

Note: Here, you will have one primary node and one or more standby nodes that run a copy of the active database; with n instances, you have n-1 standby instances in the cluster.

10. Select the mode of replication in the Synchronization Mode field; asynchronous mode is selected by default. The following replication modes are available:

  • Asynchronous mode: In asynchronous mode, the primary PostgreSQL instance does not wait for a replica to indicate that it wrote the data. The cluster can lose some committed transactions to ensure availability.

  • Synchronous mode: The primary PostgreSQL instance waits for one or more replicas to confirm each transaction, so no transactions are lost during failover. Synchronous replication still allows the primary node to run standalone.

  • Strictly Synchronous: It is similar to the synchronous mode but requires two nodes to operate.

11. Provide the initial values for the following:

  • CPU Cores: Select the number of CPU cores using the slider or choose from the available shortcut values.

  • RAM Size: Select the RAM size using the slider or choose from the available shortcut values.

  • Storage Type: SSD Premium is set by default.

  • Storage Size: Enter the size value in Gigabytes.

    The estimated price will be displayed based on the input. The estimate excludes certain variables, such as traffic and backup costs, which are not considered.

12. Provide the Database User Credentials, i.e., a suitable username and an associated password.

Note: The credentials will be overwritten if the user already exists in the backup.

13. In the Maintenance Window section, you can set a Maintenance time using the pre-defined format (hh:mm:ss), or you can use the clock. Select a Maintenance day from the dropdown menu. Maintenance occurs in a four-hour window, so adjust the time accordingly.

14. Click Save to create the Postgres Cluster.

Your Postgres Cluster is now created.
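Steps 5 through 13 can also be sketched as a single API payload. This is a hypothetical example following the DBaaS PostgreSQL API shape; every ID, address, and credential below is a placeholder, and the commented request is only illustrative:

```shell
# Build a hypothetical cluster-creation payload mirroring the DCD form above.
cat > cluster.json <<'EOF'
{
  "properties": {
    "displayName": "my-postgres-cluster",
    "postgresVersion": "15",
    "location": "de/fra",
    "backupLocation": "eu-central-2",
    "instances": 2,
    "synchronizationMode": "ASYNCHRONOUS",
    "cores": 4,
    "ram": 4096,
    "storageSize": 20480,
    "storageType": "SSD Premium",
    "connections": [
      {
        "datacenterId": "00000000-0000-0000-0000-000000000000",
        "lanId": "1",
        "cidr": "10.7.222.4/24"
      }
    ],
    "credentials": { "username": "dbadmin", "password": "change-me" },
    "maintenanceWindow": { "dayOfTheWeek": "Sunday", "time": "02:00:00" }
  }
}
EOF
# Validate the payload locally before sending it anywhere.
python3 -m json.tool cluster.json >/dev/null && echo "payload OK"
# curl --user "user@example.com:password" --header "Content-Type: application/json" \
#      --data-binary @cluster.json https://api.ionos.com/databases/postgresql/clusters
```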

Access MaaS

There are two ways to access the metrics of an instance: from the Monitoring Manager view or via the Inspector pane.

Inside of the data center view of the DCD, right-click the graphical representation of your server. The context menu will show an item called Monitor, which will open a window with the server's metrics. This window can be minimized to the bottom bar.

Using the Monitoring Manager

The Monitoring Manager allows quick access to all running virtual instances within the contract. It also provides access to the configuration of alarms and actions.

1. In the DCD, open the Management option. A drop-down box displays.

2. Select Monitoring.

3. In the left panel, select the target Server. Metrics for the target server display in the right-hand panel.

Using the Inspector pane

You can access MaaS via the properties panel of a virtual instance.

1. In the DCD, open your virtual data center.

2. Select your Server. The Inspector pane for this virtual instance displays on the right side of the window.

3. Click on the Metrics tab. The CPU utilization graph displays first, followed by other graphs showing the basic metrics for the last hour.

4. In the Inspector, you may select Refresh. The basic metrics are refreshed.

5. You can choose to monitor separately. In the Inspector, select Monitor in Background. A separate window displays. The graphs display an enlarged view. Additional information is available for each graph.

In the upper right corner of the pop-up window there are three buttons: minimize (down arrow), maximize (up arrow), and close (x). You can either enlarge the view to the entire screen or reduce the view to a monitoring bar, which will remain active in the background.

The monitoring bar

The monitoring bar remains visible even when switching between VDCs. You can add multiple monitoring views of different virtual instances from different VDCs to this bar. Even if you close a virtual data center, the monitoring bar option will remain active.

The monitoring bar will disappear when you log out of the DCD.

If the VDC is closed, you can reopen the Monitoring Manager and also use the Focus Server option to load the VDC that contains the server instance. The server will be selected automatically. The relevant property panel will become available immediately.

Time range limits

In the Monitor in Background view, you may select the refresh interval as well as the time frame for data retrieval.

You can only choose a time frame of up to two weeks. If you select a wider time frame, MaaS will limit the data reported or it may return an error if no data is available. If you change one or both of these values, the view is refreshed. MaaS will display your latest chosen data in the graph.

If you want to create a long-term history of metrics, we advise you to retrieve the raw metric data and store it. Any data older than two weeks will be purged by IONOS Cloud.

Restore a Database

You can restore a database cluster in-place from a previous snapshot.

Listing available snapshots of a cluster

To restore from a snapshot you will need to provide a snapshot ID. You can request a list of all available snapshots:

Our chosen clusterId is: cc54e0f2-5e49-42bf-97e8-089c2eff0264

Request

curl --include \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    https://api.ionos.com/databases/mongodb/clusters/cc54e0f2-5e49-42bf-97e8-089c2eff0264/snapshots

Response

{
  "type": "collection",
  "id": "cc54e0f2-5e49-42bf-97e8-089c2eff0264",
  "items": [
    {
      "type": "snapshot",
      "id": "e2044962-294a-4c99-b076-414b2a387c58",
      "properties": {
        "mongoDBVersion": "5.0",
        "size": 150,
        "creationTime": "2020-12-10T13:37:50+01:00"
      }
    }
  ],
  "offset": 0,
  "limit": 10,
  "_links": {}
}

Restoring from backup in-place

You can now create a restore job for the chosen cluster. Your database will not be available during the restore operation. In order to successfully create a restore job, no other active restore job must exist.

Note: To restore a cluster in-place you can only use snapshots from that cluster.

Note: The cluster will have a BUSY state and must not receive connections.

Request

curl --include \
    --request POST \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    --data-binary '{
      "snapshotId": "e2044962-294a-4c99-b076-414b2a387c58"
    }' \
    https://api.ionos.com/databases/mongodb/clusters/cc54e0f2-5e49-42bf-97e8-089c2eff0264/restore

Response

The API will respond with a 202 Accepted status code if the request is successful.

Note: Check the cluster details in order to see the progress of the restoration.

Block Storage FAQ

Which virtualization technology does IONOS use?

IONOS systems are built on the Kernel-based Virtual Machine (KVM) hypervisor and libvirt virtualization management. We have adapted both of these components to our requirements and optimized them for the delivery of diverse cloud services, with a special focus on security and guest isolation.

Which images can I import?

Some software images are only designed for certain virtualization systems. Without VirtIO drivers, a VM will not work properly with the hypervisor. You can set the storage bus type to IDE temporarily to install the VirtIO drivers.

How do I install VirtIO drivers for Windows?

For a Windows VM to work properly with our hypervisor, VirtIO drivers are required.

  • Install Windows using the original IDE driver

    You can now install the VirtIO drivers from the ISO provided by IONOS.

  • Add a CD-ROM drive to your server

  • Select the windows-virtio-driver.iso ISO

  • Boot from the selected ISO to start the automatic installation tool

    You can now switch to VirtIO.

See also: Installing Windows VirtIO drivers

Can I run a virtualization platform on a virtual server?

Our hypervisor informs the guest operating system that it is located in a virtualized environment. Some virtualized systems do not support virtualized environments and cannot be executed on an IONOS virtual server. We generally do not recommend using your own virtualization technology in virtual hosts.

How do I upload my own images with FTP?

You can upload your own images to the FTP server in your region. The available regions are:

Frankfurt am Main (DE):

ftp://ftp-fra.ionos.com

Karlsruhe (DE):

ftp://ftp-fkb.ionos.com

Berlin (DE):

ftp://ftp-txl.ionos.com

London (GB):

ftp://ftp-lhr.ionos.com

Las Vegas (US):

ftp://ftp-las.ionos.com

Newark (US):

ftp://ftp-ewr.ionos.com

Logroño (ES):

ftp://ftp-vit.ionos.com

FTP addresses are listed in the DCD:

Menu Bar > ? (Help) > FTP Upload Image

or

Menu Bar > Image Manager > FTP Upload Image

See also: Uploading an image

Why can I not select the images I uploaded?

Your own images are only available in the region where you uploaded them. Accordingly, only images located in the same region as the virtual data center are available for selection in a virtual data center. For example, if you upload an image to the FTP server in Frankfurt, you can only use that image in a virtual data center in Frankfurt.

Can I use an encrypted connection for FTP uploads?

We strongly recommend that you select FTPS (File Transfer Protocol with Transport Layer Security) as the transfer protocol. This can easily be done using "FileZilla", for example. Simple FTP works as well, but your access data is transmitted in plain text.

Why are image files in the FTP account 0 bytes in size?

After a file has been uploaded to the FTP server, it is protected from deletion, converted, and then made available as an image. When this process is finished, the file size is reduced to 0 bytes to save space but left on the FTP server. This is to prevent a file with the same name from being uploaded again and interfering with the processing of existing images. If an image is no longer needed, please contact the IONOS enterprise support team.

How do I delete snapshots?

Snapshots that you no longer need can be deleted in the Image Manager.

See also: Deleting a snapshot

Which images support Live Vertical Scaling (LVS)?

Live Vertical Scaling is supported by all our images. Please note that the Windows OS only allows CPU core scaling.

Can I connect one storage device to multiple servers?

It is not possible to connect multiple servers to one storage device, but you can connect multiple servers in a network without performance loss.

Which image types are supported?

IONOS Cloud allows the customer to upload their own images to the infrastructure via upload servers. This procedure is to be completed individually for each data center location. IONOS Cloud optionally offers transmission with secure transport (TLS). The uploading of HDD and CD-ROM/DVD-ROM images is supported. Specifically, the uploading of images in the following formats is supported:

CD-ROM / DVD-ROM:

  • *.iso ISO 9660 image file

HDD Images:

  • *.vmdk VMware HDD images

  • *.vhd, *.vhdx Hyper-V HDD images

  • *.cow, *.qcow, *.qcow2 QEMU HDD images

  • *.raw binary HDD image

  • *.vpc VirtualPC HDD image

  • *.vdi VirtualBox HDD image

Note: Images created from UEFI boot machines cannot be uploaded. Only MBR boot images are supported.

How do I change the Availability Zone of a storage device?

Once a storage device is provisioned, it is not possible to change its Availability Zone. You could, however, create a snapshot and then use it to provision a storage device with a new Availability Zone.

See also: Availability Zones

High Availability and Scaling

Cluster options

Single-node cluster: A single-node cluster only has one node which is called the primary node. This node accepts customer connections and performs read/write operations. This is a single point of truth as well as a single point of failure.

Multi-node cluster: In addition to the primary node, this cluster contains standby nodes that can be promoted to primary if the current primary fails. The nodes are spread across availability zones. Currently, we use warm standby nodes, which means they don't serve read requests. Hot standby functionality (when the nodes can serve read requests) might be added in the future.

Database scaling

Existing clusters can be scaled in two ways: horizontal and vertical.

Horizontal scaling is defined as configuring the number of instances that run in parallel. The number of nodes can be increased or decreased in a cluster.

Scaling up the number of instances does not cause a disruption. However, decreasing it may cause a switchover if the current primary node is removed.

Note: This method of scaling is used to provide high availability. It will not increase performance.

Vertical scaling refers to configuring the size of the individual instances. This is used if you want to process more data and queries. You can change the number of cores and the size of memory to have the configuration that you need. Each instance is maintained on a dedicated node. In the event of scaling up or down, a new node will be created for each instance.

Once the new node becomes available, the server will switch from the old node to the new node. The old node is then removed. This process is executed sequentially if you have multiple nodes. We will always replace the standby first and then the primary. This means that there is only one switchover.

During the switch, if you are connected to the DB with an application, the connection will be terminated. All ongoing queries will be aborted. Inevitably, there will be some disruption. It is therefore recommended that the scaling is performed outside of peak times.

You can also increase the size of storage. However, it is not possible to reduce the size of the storage, nor can you change the type of storage. Increasing the size is done on-the-fly and causes no disruption.

Replication modes

The synchronization_mode determines how transactions are replicated between multiple nodes before a transaction is confirmed to the client. IONOS DBaaS supports three modes of replication: Asynchronous (default), Synchronous, and Strict Synchronous. In all modes, the transaction is first committed on the leader and then replicated to the standby node(s).

Asynchronous replication does not wait for the standby before confirming a transaction back to the user. Transactions are confirmed to the client after being written to disk on the primary node. Replication takes place in the background. In asynchronous mode the cluster is allowed to lose some committed (not yet replicated) transactions during a failover to ensure availability.

The benefit of asynchronous replication is the lower latency. The downside is that recent transactions might be lost if a standby is promoted to leader. The lag between the leader and standby tends to be a few milliseconds.

Caution: Data loss might happen if the server crashes and the data has not been replicated yet.

Synchronous replication ensures that a transaction is committed to at least one standby before confirming the transaction back to the client. This standby is known as synchronous standby. If the primary node experiences a failure then only a synchronous standby can take over as primary. This ensures that committed transactions are not lost during a failover. If the synchronous standby fails and there is another standby available then the role of the synchronous standby changes to the latter. If no standby is available then the primary can continue in standalone mode. In standalone mode the primary role cannot change until at least one standby has caught up (regained the role of synchronous standby). Latency is generally higher than with asynchronous replication, but no data is lost during a failover.

At any time there will be at most one synchronous standby. If the synchronous standby fails then another healthy standby is automatically selected as the synchronous standby.

Caution: Turning on non-strict synchronous replication does not guarantee multi-node durability of commits under all circumstances. When no suitable standby is available, the primary node will still accept writes but does not guarantee their replication.

Strict synchronous replication is the same as synchronous replication with the exception that standalone mode is not permitted. This mode will prevent PostgreSQL from switching off the synchronous replication on the primary when no synchronous standby candidates are available. If no standby is available, no writes will be accepted anymore, so this mode sacrifices availability for replicated durability.

If replication mode is set to synchronous (either strict or non-strict) then data loss cannot occur during failovers (e.g. node failures). The benefit of strict replication is that data is not lost in case of a storage failure of the primary node and a simultaneous failure of all standby nodes.

Synchronization mode considerations

Please note that synchronization modes can impact DBaaS in several ways:

Aspect
Asynchronous
Synchronous

primary failure

A healthy standby will be promoted if the primary node becomes unavailable.

Only standby nodes that contain all confirmed transactions can be promoted.

Standby failure

No effect on primary. Standby catches up once it is back online.

In strict mode at least one standby must be available to accept write requests. In non-strict mode the primary continues as standalone. There is a short delay in transaction processing if the synchronous standby changes.

Consistency model

Strongly consistent (except for lost data).

Strongly consistent (except for lost data).

Data loss during failover

Non-replicated data is lost.

Not possible.

Data loss during primary storage failure

Non-replicated data is lost.

Non-replicated data is lost in standalone mode.

Latency

Limited by the performance of the primary.

Limited by the performance of the primary, the synchronous standby and the latency between them (usually below 1ms).

The performance penalty of synchronous over asynchronous replication depends on the workload. The primary handles transactions the same way in all replication modes, with the exception of COMMIT statements (incl. implicit transactions). When synchronous replication is enabled, the commit can only be confirmed to the client once it is replicated. Thus, there is a constant latency overhead for each transaction, independent of the transaction's size and duration.

Changing the commit guarantees per transaction

By default, the replication mode of the database cluster determines the guarantees of a committed transaction. However, some workloads might have very diverse requirements regarding accepted data loss vs performance. To address this need, commit guarantees can be changed per transaction. See synchronous_commit (PostgreSQL documentation) for details.

Caution: You cannot enforce a synchronous commit when the cluster is configured to use asynchronous replication. Without a synchronous standby any setting higher than local is equivalent to local, which doesn't wait for replication to complete. Instead, you can configure your cluster to use synchronous replication and choose synchronous_commit=local whenever data loss is acceptable.

Access Logs

Every running MongoDB instance collects logs from MongoD. Currently, there is no option to change this configuration. The log messages are shipped to a central storage location with a retention policy of 30 days.

Using the cluster ID, you can access the logs for that cluster via the API.

Requesting logs

The endpoint for fetching logs has four optional query parameters:

Parameter
Description
Default value
Possible values

start

Retrieve log lines after this timestamp (format: RFC3339)

30 days ago

between 30 days ago and now (before end)

end

Retrieve log lines before this timestamp (format: RFC3339)

now

between 30 days ago and now (after start)

direction

Direction in which the logs are sorted and limited

BACKWARD

BACKWARD or FORWARD

limit

Maximum number of log lines to retrieve. Which log lines are cut depends on direction

100

between 1 and 5000

If you omit all parameters, you will get the most recent 100 log messages.

The following example shows how to receive only the first two messages after 12:00 a.m. on January 5, 2022:

curl --include \
    --request GET \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    "https://api.ionos.com/databases/mongodb/clusters/498ae72f-411f-11eb-9d07-046c59cc737e/logs?start=2022-01-05T12:00:00Z&end=2022-01-05T13:00:00Z&direction=FORWARD&limit=2"
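For scripted access, the same request URL can be assembled in Python. This is a minimal sketch that only builds the URL (authentication and the actual HTTP call are omitted); the cluster ID is the example ID used above.

```python
from urllib.parse import urlencode

def build_logs_url(cluster_id, start=None, end=None, direction=None, limit=None):
    """Assemble the logs endpoint URL; all four query parameters are optional."""
    base = f"https://api.ionos.com/databases/mongodb/clusters/{cluster_id}/logs"
    params = {k: v for k, v in {
        "start": start, "end": end, "direction": direction, "limit": limit,
    }.items() if v is not None}
    return f"{base}?{urlencode(params)}" if params else base

# Same request as the curl example above.
url = build_logs_url(
    "498ae72f-411f-11eb-9d07-046c59cc737e",
    start="2022-01-05T12:00:00Z",
    end="2022-01-05T13:00:00Z",
    direction="FORWARD",
    limit=2,
)
```

Omitting all keyword arguments yields the bare endpoint, which returns the most recent 100 log messages by default.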

Response

The response contains the log messages separated per instance and looks similar to this:

{
  "instances": [
    {
      "name": "ionos-498ae72f-411f-11eb-9d07-046c59cc737e-by6qu3m-1",
      "messages": [
        {
          "time": "2022-01-05T12:00:29.793Z",
          "message": "{\"t\":{\"$date\":\"2022-01-05T12:00:29.793+00:00\"},\"s\":\"I\",  \"c\":\"NETWORK\",  \"id\":22943,   \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:42562\",\"uuid\":\"4ff7c691-8aaa-4ef6-a37d-21386adcbdb0\",\"connectionId\":10751,\"connectionCount\":40}}"
        },
        {
          "time": "2022-01-05T12:00:29.794Z",
          "message": "{\"t\":{\"$date\":\"2022-01-05T12:00:29.794+00:00\"},\"s\":\"I\",  \"c\":\"NETWORK\",  \"id\":22944,   \"ctx\":\"conn10751\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:42562\",\"uuid\":\"318a7948-4860-4420-ad64-5cd6178f990f\",\"connectionId\":10751,\"connectionCount\":40}}"
        }
      ]
    }
  ]
}
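Note that each `message` field holds the raw MongoD log line, which is itself JSON-encoded, so reading the payload takes two decoding steps. A minimal sketch, using a trimmed payload in the shape shown above:

```python
import json

# Trimmed example payload in the shape of the response above.
response_body = r'''
{
  "instances": [
    {
      "name": "ionos-498ae72f-411f-11eb-9d07-046c59cc737e-by6qu3m-1",
      "messages": [
        {"time": "2022-01-05T12:00:29.793Z",
         "message": "{\"s\":\"I\",\"c\":\"NETWORK\",\"msg\":\"Connection accepted\"}"},
        {"time": "2022-01-05T12:00:29.794Z",
         "message": "{\"s\":\"I\",\"c\":\"NETWORK\",\"msg\":\"Connection ended\"}"}
      ]
    }
  ]
}
'''

data = json.loads(response_body)
for instance in data["instances"]:
    for entry in instance["messages"]:
        inner = json.loads(entry["message"])  # second decode: the MongoD log line
        print(entry["time"], inner["c"], inner["msg"])
```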
(Screenshots: the Monitoring Manager, accessible from the Menu, offers a view of all active resources, with each server from the list on the left displaying monitoring statistics on the right; the Inspector pane gives direct access to server metrics and a Monitor in Background option that opens a separate monitoring window; the monitoring bar is shown when logging out of the DCD; a time range can be defined inside the Monitoring Manager.)

Overview

Monitoring as a Service (MaaS) gathers metrics on Server and Cube resource utilization. You can track any virtual instance in your data center and set triggers when usage limits are reached. MaaS is completely free and can be accessed immediately after a virtual instance is provisioned.

Compatibility: Monitoring is available anytime from the DCD or via the API for all instances in the VDC. This service is compatible with all boot options and is independent of the operating system used or the instance type.

Actions and Alarms: With MaaS, you may set triggers based on resource usage. The system automatically executes predetermined Actions using set threshold Alarms. The application currently only sends email notifications.

Automation: MaaS lets you access up to two weeks' worth of runtime performance metrics. Raw Prometheus metrics can be gathered using TelemetryAPI (PromQL) and imported into your own monitoring system.

Collected metrics

MaaS collects basic metrics from the virtualization layer on which the virtual instance is running. The application makes it possible to run any configuration of virtual instances, as no client installation is required. Virtual instances can boot from any device using public or private images as well as snapshots, or boot from ISO or network. Since no client installation or specific image is required, the feature is also enabled for virtual instances that were provisioned previously. The metrics covered by MaaS are as follows:

Category

Metrics

CPU

  • Average utilization of all cores of a virtual instance

Network

  • Ingress Network bytes/packets of all NICs of a virtual instance

  • Egress Network bytes/packets of all NICs of a virtual instance

Storage

  • Disk bytes read by the block device

  • Disk bytes written by the block device

  • Disk read IOs performed by the device

  • Disk write IOs performed by the device.

Availability of metrics

The collection of monitoring metrics starts when a new instance is provisioned. However, the first metrics will not be available immediately after provisioning. It takes about 10 minutes before the first metrics are collected and can be retrieved in DCD or via API. Once this initial 10 minute period has elapsed, you can poll for metrics at any time.

Limitations

MaaS provides metric data for the two previous weeks. If you want to create a long-term history of metrics we advise you to retrieve the raw metric data and store it. Any data older than two weeks will be purged by IONOS Cloud.

IONOS currently offers CPU, Network, and Storage metrics. Memory metrics are currently not available.

Manage Node Pools

Create a node pool

Prerequisites: Make sure you have the appropriate permissions and that you are working within an active cluster. Only Contract Owners, Administrators, or Users with the Create Kubernetes Clusters permission can create node pools within the cluster. You should already have a data center in which the nodes can be provisioned.

1. Open MANAGER Resources > Kubernetes Manager

2. Select the cluster you want to add a node pool to from the list

3. Click + Create node pool

Create a node pool

4. Give the node pool a Name

Note the naming conventions for Kubernetes:

  • Maximum of 63 characters in length

  • Begins and ends with an alphanumeric character ([a-z0-9A-Z])

  • Must not contain spaces or any other white-space characters

  • Can contain dashes (-), underscores (_), dots (.) in between
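As a sketch, the naming rules above can be checked client-side before submitting the form (the server-side validation remains authoritative):

```python
import re

# Up to 63 characters; alphanumeric at both ends; '-', '_', '.' allowed in
# between; no whitespace anywhere.
NAME_RE = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9._-]{0,61}[A-Za-z0-9])?$")

def is_valid_node_pool_name(name: str) -> bool:
    return bool(NAME_RE.match(name))
```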

5. Choose a Data Center. Your node pool will be included in the selected data center. If you don't have a data center, you must first create one.

6. Select CPU and Storage Type. Then proceed with the other properties of the nodes. The nodes of the pool will all have the properties you select here.

7. Select the Node pool version that you wish to run

8. Select the number of nodes in the node pool

9. Select Storage Type

10. Click Create node pool

Create Kubernetes node pool modal

Result: Managed Kubernetes will start to provision the resources into your data center. This will take a while and the node pool can be used once it reaches the active state.
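If you drive provisioning through automation, you can wait for the pool to become usable by polling its state. The sketch below assumes a `fetch_state` callable that wraps your actual API call for the node pool's state; the `"ACTIVE"` state string and the polling intervals are assumptions for illustration.

```python
import time

def wait_until_active(fetch_state, poll_seconds=30, max_polls=120):
    """Poll until fetch_state() returns 'ACTIVE'; raise on timeout."""
    for _ in range(max_polls):
        state = fetch_state()
        if state == "ACTIVE":
            return state
        time.sleep(poll_seconds)
    raise TimeoutError("node pool did not become active in time")
```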

Try to avoid accessing the target data center while Managed Kubernetes is provisioning nodes, as concurrent manual interaction can cause undesirable results.

Update a node pool

For further management of node pools, select the cluster you want to manage a node pool for from the list

1. Select Node pools in Cluster

2. Set the Node Count, this is the number of nodes in the pool

3. Select Version, this is a version of Kubernetes you want to run on the node pool

4. Select Attached private LANs from the dropdown list

5. Select the day and time of your preferred maintenance window, necessary maintenance for Managed Kubernetes will be performed accordingly

6. Click Update node pool

Updating a node pool modal

Managed Kubernetes will start to align the resources in the target data center. In case you have selected a new version for Kubernetes the operation may take a while and the node pool will be available for further changes once it reaches the active state.

Try to avoid accessing the target data center while Managed Kubernetes is provisioning nodes, as concurrent manual interaction can cause undesirable results.

The maintenance window starts at the time of your choosing and remains open for another four hours. All planned maintenance work will be performed within this window, however, not necessarily at the beginning.

Delete a node pool

1. Open Containers > Managed Kubernetes

2. Select the cluster from which you want to remove the node pool.

3. Select Node pools in Cluster to delete the node pool.

4. Click Delete.

Managed Kubernetes will start to remove the resources from the target data center and eventually delete the node pool.

Try to avoid accessing the target data center while Managed Kubernetes is provisioning nodes, as concurrent manual interaction can cause undesirable results.

Rebuild a node

When a node fails or becomes unresponsive you can rebuild that node. This will create a new node with an identical configuration that will replace the failed node. Make sure your node is active.

1. Open Containers > Managed Kubernetes

2. Select the cluster and node pool that contains the failed node

3. Click the rebuild button of the node

4. Confirm the operation

Result: Managed Kubernetes starts a process based on the node pool template. The template creates and configures a new node and waits for its status to display as ACTIVE. Once active, it migrates all the pods from the faulty node and deletes the faulty node once it is empty. While this operation occurs, the node pool will have an extra billable active node.

Try to avoid accessing the target data center while Managed Kubernetes is provisioning nodes, as concurrent manual interaction can cause undesirable results.

Images & Snapshots

IONOS provides you with a number of ready-made images that you can use immediately. You can also use your own images by uploading them via our FTP access. Your IONOS account supports many types of images as well as ISO images from which you can install an operating system or software directly, using an emulated CD-ROM drive.

Image types

The following image types can be uploaded:

Snapshots

Snapshots are images generated from storages that have already been provisioned. You can use these images for other storages. This feature is useful, for example, if you need to quickly roll out more storages that have the same or similar configuration. You can use snapshots on HDD and SSD storage, regardless of the storage type for which the snapshot was created. To create snapshots, users who are not contract owners or administrators need to have the appropriate privileges.

Creating a snapshot

You can create snapshots from provisioned SSD and HDD storages. Regardless of the underlying storage type (HDD or SSD), snapshots use up HDD storage space assigned to an IONOS account. Therefore, if you want to create a snapshot, you must have enough HDD storage available.

The VM can be switched on or off when creating a snapshot. To ensure that data still in the RAM of the VM is included in the snapshot, it is recommended that you synchronize the data (with sync under Linux) or shut down the guest operating system (with shutdown -h now under Linux) before creating the snapshot.

Prerequisites: Make sure you have the appropriate permissions. Only contract owners, administrators, or users with the Create Snapshot permission can create a snapshot. Beforehand, ensure that you have sufficient HDD storage available.

  1. Open the required data center.

  2. (Optional) Shut down the server. Creating a snapshot while the server is running takes longer.

  3. Open the context menu of the storage element and select Create Snapshot.

  4. (Optional) Change the name and the description of the snapshot.

  5. Click on Create Snapshot to start the process.

The snapshot is being created. It will be available in the Image Manager and in My own Images > Snapshots.

Uploading an image

We offer FTP access for each of our data center locations so that you can upload your own images. An image is only available at the location where it was uploaded.

You can manage your uploaded images and the snapshots you created with the DCD's Image Manager. You can specify who can access and use them. Only images and snapshots to which you have access are displayed.

To open the Image Manager, go to Menu Bar > Resource Manager > Image Manager

If you want to upload an image, you must first set up a connection from your computer to the IONOS FTP server. This can be done using an FTP client such as FileZilla or tools from your operating system. Then copy the image to the FTP upload of the IONOS data center location where you wish to use the image. After uploading, the image will be converted to a RAW format. As a result, dynamic HDD images are always used at their maximum size. A dynamic image, for example, whose file size is 3 GB, but which comes from a 50 GB hard disk, will be a 50 GB image again after conversion to the IONOS format. The disk space required for an uploaded image will not affect the resources of your IONOS account and you will not be charged.

FTP addresses:

Frankfurt am Main (DE): ftp://ftp-fra.ionos.com; Karlsruhe (DE): ftp://ftp-fkb.ionos.com; Berlin (DE): ftp://ftp-txl.ionos.com; London (GB): ftp://ftp-lhr.ionos.com; Las Vegas (US): ftp://ftp-las.ionos.com; Newark (US): ftp://ftp-ewr.ionos.com; Logroño (ES): ftp://ftp-vit.ionos.com

In the DCD, FTP addresses are listed here: Menu Bar > Image Manager > FTP Image Upload

Characters allowed for file names of images: a-z A-Z 0-9 - . / _ ( ) # ~ + = blanks.
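The allowed-character rule can be checked locally before starting an upload; this is a sketch only, and the server-side check remains authoritative:

```python
import re

# Allowed: a-z A-Z 0-9 - . / _ ( ) # ~ + = and blanks, per the rule above.
ALLOWED = re.compile(r"^[A-Za-z0-9\-./_()#~+= ]+$")

def image_name_ok(filename: str) -> bool:
    return bool(ALLOWED.match(filename))
```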

Note: Images created from UEFI boot machines cannot be uploaded. Only MBR boot images are supported.

Example: Windows 10

In Windows 10, you can upload an image, without additional software, as follows.

How to set up FTP access

  1. Open Windows Explorer.

  2. Select Add a network location from the context menu.

  3. Enter the IONOS FTP address as the location of the website, e.g. ftp://ftp-fkb.ionos.com. An image is only available at the location where it was uploaded.

  4. In the next dialog box, leave the Log on anonymously check box activated.

  5. In the next dialog box, enter a name for the connection which will later be visible in Windows Explorer, e.g. upload_fkb.

  6. Confirm your entries by clicking Finish.

The FTP connection is available in Windows Explorer.

How to copy an image to the FTP upload.

  1. Open the FTP access on your PC.

  2. In the login dialog box, enter the credentials of your IONOS account.

  3. Copy the image you wish to upload to the folder matching the image type (HDD or iso).

As soon as the upload begins, you will receive a confirmation e-mail from IONOS.

After the upload has been completed, the image will be available in the Image Manager and in Own Images.

How to delete an image or snapshot

If you no longer need a snapshot or image and want to save resources, you can delete it.

  1. Open the Image Manager: Menu Bar > Resource Manager > Image Manager.

  2. To delete a snapshot, open the Snapshots tab and select the snapshot you would like to delete.

  3. To delete an image, open the Images tab and select the image you would like to delete.

  4. Click Delete.

In the dialog that appears, confirm your action by entering your password and clicking OK. The selected item is deleted and cannot be restored.

Flow log record

A flow log record is a record of a network flow in your virtual data center (VDC). By default, each record captures a network internet protocol (IP) traffic flow, groups it, and enhances it with the following information:

  • Account ID of the resource

  • Unique identifier of the network interface

  • The flow's status, indicating whether it was accepted or rejected by the software-defined networking (SDN) layer

The flow log record is in the following format:

Available Fields

The following table describes all of the available fields for a flow log record.

Field
Type
Description
Example Value

Flow log record example

The following are examples of flow log records that capture specific traffic flows. For information on how to create flow logs, see

Accepted record

In this example, traffic to the network interface 7ffd6527-ce80-4e57-a949-f9a45824ebe2 for the account 12345678 was accepted.

Rejected record

In this example, traffic to the network interface 7ffd6527-ce80-4e57-a949-f9a45824ebe2 for the account 12345678 was rejected.

Backup and Recovery

Backup and Recovery

Backups

PostgreSQL Backups: A cluster can have multiple backups. They are created

  1. When a cluster is created

  2. When the PostgreSQL version is changed to a higher major version

  3. When a Point-In-Time-Recovery operation is conducted.

At any time, Postgres ships to only one backup. We use base backups combined with continuous WAL archiving. A base backup is done via pg_basebackup regularly, and then WAL records are continuously added to the backup. Thus, a backup doesn't represent a point in time but a time range. We keep backups for the last 7 days, so recovery is possible for up to one week in the past.

Data is added to the backup in chunks of 16MB or after 30 minutes, whichever comes first. Failures and delays in archiving do not prevent writes to the cluster. If you restore from a backup then only the data that is present in the backup will be restored. This means that you may lose up to the last 30 minutes or 16MB of data if all replicas lose their data at the same time.

You can restore from any backup of any PostgreSQL cluster as long as the backup was created with the same or an older PostgreSQL major version.

Backups are stored encrypted in an IONOS S3 Object Storage bucket in the same region your database is in. Databases in regions without IONOS S3 Object Storage will be backed up to eu-central-2.

Warning: When a database is stopped all transactions since the last WAL segment are written to a (partial) WAL file and shipped to the IONOS S3 Object Storage. This also happens when you delete a database. We provide an additional security timeout of 5 minutes to stop and delete the database gracefully. However, under rare circumstances it could happen that this last WAL Segment is not written to the IONOS S3 Object Storage (e.g. due to errors in the communication with the IONOS S3 Object Storage) and these transactions get lost.

As an additional security mechanism, you can check which data has been backed up before deleting the database. To verify the last archived WAL segment and the time it was written, connect to the database and query the pg_stat_archiver view, for example with `SELECT last_archived_wal, last_archived_time FROM pg_stat_archiver;`.

The `last_archived_time` might be older than 30 minutes (WAL files are created with a specific timeout, see above), which is normal if no new data has been added.

Recovery

We provide Point-in-Time-Recovery (PITR). When recovering from a backup, the user chooses a specific backup and optionally provides a time, so that the new cluster will have all the data from the old cluster up until that time (exclusively). If no time is provided, the current time is used.

It is possible to set the recoveryTargetTime to a time in the future. If the end of the backup is reached before the recovery target time is met then the recovery will complete with the latest data available.

Note: WAL records shipping is a continuous process and the backup is continuously catching up with the workload. Should you require that all the data from the old cluster is completely available in the new cluster, stop the workload before recovery.

Failover and Upgrade

Failover procedures

Planned failover: During a failure or planned failover, the client must reconnect to the database. A planned failover is signaled to the client by the closing of the TCP connection on the server. The client must also close the connection and reconnect.

In the event of a failure, the connection might not be closed correctly. The new leader will send a gratuitous ARP packet to update the MAC address in the client's ARP table. Open TCP connections will be reset once the client sends a TCP packet. We recommend re-establishing a connection to the database by using an exponential back-off retry with an initial immediate retry.
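The recommended reconnect behaviour, an immediate first retry followed by exponentially growing delays, can be sketched as follows; `connect` stands in for your database driver's connect call and the delay values are illustrative:

```python
import time

def reconnect(connect, max_attempts=8, base_delay=0.5, max_delay=30.0):
    """Retry connect() with an immediate first retry, then exponential back-off."""
    delay = 0.0  # first retry happens immediately
    for attempt in range(max_attempts):
        try:
            return connect()
        except ConnectionError:
            time.sleep(delay)
            delay = min(max_delay, base_delay * (2 ** attempt))
    raise ConnectionError("could not re-establish the database connection")
```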

Uncontrolled disconnection: Since we do not allow read connections to standby nodes, only primary disconnections are possible. Uncontrolled disconnections can happen during maintenance windows, a cluster change, or unexpected situations such as loss of storage disk space. Such disconnections abort ongoing transactions, and clients should reconnect.

If a node is disconnected from the cluster, then a new node will be created and provisioned. Losing a primary node leads to the same situation when a client should reconnect. Losing a replica is not noticeable to the customer.

PostgreSQL upgrades

IONOS Cloud updates and patches your database cluster to achieve high standards of functionality and security. This includes minor patches for PostgreSQL, as well as patches for the underlying OS. We try to make these updates unnoticeable to your operation. However, occasionally, we might have to restart your PostgreSQL instance to allow the changes to take effect. These interruptions will only occur during the maintenance window for your database, which is a weekly four-hour window.

When your cluster only contains one replica you might experience a short down-time during this maintenance window, while your database instance is being updated. In a replicated cluster, we only update standbys, but we might perform a switchover in order to change the leader node.

Considerations: Updates to a new minor version are always backward compatible. Such updates are done during the maintenance window with no additional actions from the user side.

Major Version Upgrades

Caution: Major changes of the PostgreSQL version are irreversible and can fail. You should read the official migration guide and test major version upgrades with an appropriate development cluster first.

Prerequisites:

  • Read the migration guide from Postgres (e.g. ) and make sure your database cluster can be upgraded

  • Test the upgrade on a development cluster with similar or the same data (you can create a new database cluster as a clone of your existing cluster)

  • Prepare for a downtime during the major version upgrade

  • Ensure the database cluster has enough available storage. While the upgrade is space-efficient (i.e. it does not copy the data directory), some temporary data is written to disk.

Before upgrading PostgreSQL major versions, customers should be aware that IONOS Cloud is not responsible for customer data or any utilized PostgreSQL functionality. Hence, it is the customer's responsibility to ensure that the migration to a new PostgreSQL major version does not impact their operations.

As per : "New major versions also typically introduce some user-visible incompatibilities, so application programming changes might be required."

Supported Versions

Starting with version 10, PostgreSQL moved to a yearly release schedule, where each major version is supported for 5 years after initial release. You can find more details at . We strive to support new versions as soon as possible.

When a major version approaches its end of life (EOL), we will announce the deprecation and removal of the version at least 3 months in advance. About 1 month before the EOL, no new databases can be created with the deprecated version (the exact date will be part of the first announcement). When the EOL is reached, databases that have not yet been upgraded will be upgraded during their next maintenance window.

Activate Extensions

Available PostgreSQL extensions

There are several preinstalled PostgreSQL extensions that you can enable for your cluster. You can enable an extension by logging into your cluster and executing `CREATE EXTENSION <extension_name>;`.

The following table shows which extensions are enabled by default and which can be enabled (PostgreSQL version 12):

Extension
Enabled
Version
Description

Note: With select * from pg_available_extensions; you will see more available extensions, but many of them can't be enabled or used without superuser rights and thus aren't listed here.

View Cluster Metrics

Get Metrics via Telemetry API

Metrics can be retrieved via the Telemetry API as described below:

Request

Response

Follow for more information on how to authenticate and available endpoints.

Metrics Overview

Name
Labels
Description

Access Logs

The logs that are generated by a database are stored temporarily on the same disk as the database. We provide logs for connections, disconnections, waiting for locks, DDL statements, any statement that ran for at least 500 ms, and any statement that caused an error (see the PostgreSQL documentation). Currently, we do not provide an option to change this configuration.

In order to conserve disk space, log files are rotated according to size. Logs should not consume more than 175 MB of disk storage. The files are continuously monitored and log messages are shipped to a central storage location with a retention policy of 30 days.

By using your cluster ID, you can fetch the logs for that cluster via our API.

Requesting logs

The endpoint for fetching logs has four optional query parameters:

Parameter
Description
Default value
Possible values

So if you omit all parameters, you get the latest 100 log lines.
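Building the query string programmatically avoids shell-quoting mistakes. The following is a sketch using only the Python standard library; the endpoint path matches the curl example in this section, and the parameter validation reflects the table above:

```python
from urllib.parse import urlencode

BASE = "https://api.ionos.com/databases/postgresql/clusters"

def logs_url(cluster_id: str, **params) -> str:
    """Build the logs endpoint URL; all four query parameters are optional."""
    allowed = {"start", "end", "direction", "limit"}
    unknown = set(params) - allowed
    if unknown:
        raise ValueError(f"unsupported parameters: {unknown}")
    query = urlencode(params)  # also percent-encodes the RFC3339 timestamps
    url = f"{BASE}/{cluster_id}/logs"
    return f"{url}?{query}" if query else url

# Sample cluster UUID from this page
print(logs_url("498ae72f-411f-11eb-9d07-046c59cc737e",
               direction="FORWARD", limit=2))
```

The returned URL can then be passed to curl or any HTTP client together with your credentials.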

Response

The response contains the logs separated per instance and looks similar to the following (with different timestamps, log contents, etc.):

Troubleshooting

Permission problems

If you're receiving errors like ERROR: permission denied for table x, please check that the permissions and owners are as you expect them.

PostgreSQL has separate permissions and owners for each object (e.g., database, schema, table). Being the owner of a database only implies permission to create objects in it; it does not grant any permissions on objects in the database that were created by other users. For example, the owner of a database can only select data from a table in that database if it owns the table or has been granted read privileges separately.

To show the owners and access privileges, you can use the command below. What each letter in the access privileges stands for is documented at https://www.postgresql.org/docs/13/ddl-priv.html#PRIVILEGE-ABBREVS-TABLE.

Please also include the output of this command if you open a support ticket related to permission problems.

Network issues

If you see error messages like psql: error: could not connect to server: ..., you can try to find the specific problem by executing these commands (on the client machine having the problems, assuming Linux):

To show local IP addresses:

Make sure that the IP address of the database cluster is NOT listed here. If it is, the IP address of the cluster collides with your local machine's IP address. Make sure to select a non-DHCP IP address for the database cluster (between x.x.x.2/24 and x.x.x.10/24).
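The collision check can also be scripted. A minimal sketch using Python's ipaddress module; the local address list here is hypothetical and would normally come from the ip address output:

```python
import ipaddress

def collides(cluster_ip: str, local_ips: list[str]) -> bool:
    """Return True if the cluster IP is already bound to a local interface."""
    target = ipaddress.ip_address(cluster_ip)
    return any(ipaddress.ip_address(ip) == target for ip in local_ips)

# Hypothetical addresses taken from `ip address` on the client machine
local = ["127.0.0.1", "10.7.222.50"]

print(collides("10.7.222.50", local))  # True: pick another cluster IP
print(collides("10.7.222.5", local))   # False: no collision
```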

To list the known network neighbors:

Make sure that the IP address of the database cluster shows up here and is not marked FAILED. If it is missing, make sure that the database cluster is connected to the correct LAN in the correct data center.

Test that the database cluster IP is reachable:

This should show no packet loss, and RTT times should be on the order of a few milliseconds (depending on your network setup).

To finally test the connection using the PostgreSQL protocol:

Some possible error messages are:

  • No route to host: Can't connect on layer 3 (IP). Maybe incorrect LAN connection.

  • Connection refused: Can reach the target, but it refuses to answer on this port. Could be that IP address is also used by another machine that has no PostgreSQL running.

  • password authentication failed for user "x": The password is incorrect

If you're opening a support ticket, please attach the output of the check-net-config.sh script, the output of psql -h $ip -U $user -d postgres, and the command showing your problem.

Issues with backup restore

Under some circumstances, in-place restore might fail. This is because some SQL statements are not transactional (most notably DROP DATABASE). A typical use case for in-place restore arises after the deletion of a database.

If a database is dropped, the data is first removed from disk and then the database is removed from pg_database. These two changes are not transactional. In this event, you will want to revert the change by restoring to a time before the drop was issued. Internally, Postgres replays all transactions until a transaction commits after the specified recovery target time. At that point, all uncommitted transactions are aborted. However, the deletion of the database from disk cannot be reverted. As a result, the database is still listed in pg_database, but trying to connect to it results in the following:

DBaaS performs some initialization on start-up. At this point, the database will go into an error loop. To restore the database to a working state, you can request another in-place restore with an earlier target time, such that at least one transaction commits between the recovery target time and the drop statement. This problem was previously discussed on the Postgres mailing list.

create extension <EXTENSION> CASCADE;

Extension | Enabled | Version | Description
--- | --- | --- | ---
plpython3u | X | 1.0 | PL/Python3U untrusted procedural language
pg_stat_statements | X | 1.7 | track execution statistics of all SQL statements executed
intarray | | 1.2 | functions, operators, and index support for 1-D arrays of integers
pg_trgm | | 1.4 | text similarity measurement and index searching based on trigrams
pg_cron | | 1.3 | Job scheduler for PostgreSQL
set_user | | 3.0 | similar to SET ROLE but with added logging
timescaledb | | 2.4.2 | Enables scalable inserts and complex queries for time-series data
tablefunc | | 1.0 | functions that manipulate whole tables, including crosstab
pg_auth_mon | X | 1.1 | monitor connection attempts per user
plpgsql | X | 1.0 | PL/pgSQL procedural language
pg_partman | | 4.5.1 | Extension to manage partitioned tables by time or ID
hypopg | | 1.1.4 | Hypothetical indexes for PostgreSQL
postgres_fdw | X | 1.0 | foreign-data wrapper for remote PostgreSQL servers
btree_gin | | 1.3 | support for indexing common datatypes in GIN
pg_stat_kcache | X | 2.2.0 | Kernel statistics gathering
citext | | 1.6 | data type for case-insensitive character strings
pgcrypto | | 1.3 | cryptographic functions
earthdistance | | 1.1 | calculate great-circle distances on the surface of the Earth
postgis | | 3.2.1 | PostGIS geometry and geography spatial types and functions
cube | | 1.4 | data type for multidimensional cubes

SELECT now(); -- verify server time
SELECT * FROM pg_stat_archiver; -- information about the last archived WAL and time

Tutorial - Learn how to Restore from Backup

Tutorial - Learn how to set maintenance windows

curl --get https://dcd.ionos.com/telemetry/api/v1/query_range \
--data-urlencode "query=ionos_dbaas_postgres_memory_available_bytes{postgres_cluster=\"498ae72f-411f-11eb-9d07-046c59cc737e\"}" \
--data-urlencode "start=2022-12-06T12:32:37.076Z" \
--data-urlencode "end=2022-12-06T12:47:37.076Z" \
--data-urlencode "step=60" \
--header 'Authorization: Bearer your_JWT_token'
{
  "status": "success",
  "data": {
    "resultType": "matrix",
    "result": [
      {
        "metric": {
          "__name__": "ionos_dbaas_postgres_memory_available_bytes",
          "cloud_service": "default",
          "contract_number": "123456",
          "instance": "ionos-498ae72f-411f-11eb-9d07-046c59cc737e-4oymiqu-0",
          "postgres_cluster": "498ae72f-411f-11eb-9d07-046c59cc737e",
          "role": "master"
        },
        "values": [
          [
            1670329957.076,
            "1071443968"
          ],
          ...
        ]
      },
      {
        "metric": {
          "__name__": "ionos_dbaas_postgres_memory_available_bytes",
          "cloud_service": "default",
          "contract_number": "123456",
          "instance": "ionos-498ae72f-411f-11eb-9d07-046c59cc737e-4oymiqu-1",
          "postgres_cluster": "498ae72f-411f-11eb-9d07-046c59cc737e",
          "role": "replica"
        },
        "values": [
          [
            1670329957.076,
            "1086152704"
          ],
          ...
        ]
      }
    ]
  }
}
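Once decoded, a query_range response of this shape can be reduced to the latest sample per instance with a few lines of Python. This is a sketch; the abbreviated payload below mirrors the structure of the response above, with instance names shortened:

```python
def latest_per_instance(response: dict) -> dict:
    """Map each instance to its most recent (timestamp, value) sample."""
    out = {}
    for series in response["data"]["result"]:
        instance = series["metric"]["instance"]
        # values are [timestamp, value-as-string] pairs, ordered by time
        ts, value = series["values"][-1]
        out[instance] = (ts, int(value))
    return out

# Abbreviated example payload, shaped like the response above
payload = {
    "status": "success",
    "data": {"resultType": "matrix", "result": [
        {"metric": {"instance": "ionos-...-0", "role": "master"},
         "values": [[1670329957.076, "1071443968"]]},
        {"metric": {"instance": "ionos-...-1", "role": "replica"},
         "values": [[1670329957.076, "1086152704"]]},
    ]},
}

print(latest_per_instance(payload))
```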

ionos_dbaas_postgres_connections_count

contract_number, instance, postgres_cluster, role, state

Number of connections per instance and state. The state is one of the following: active, disabled, fastpath function call, idle, idle in transaction, idle in transaction (aborted).

ionos_dbaas_postgres_cpu_rate5m

contract_number, instance, postgres_cluster, role

The average CPU utilization over the past 5 minutes.

ionos_dbaas_postgres_disk_io_time_weighted_seconds_rate5m

contract_number, instance, postgres_cluster, role

The rate of disk I/O time, in seconds, over a five-minute period. Provides insight into performance of a disk, as high values may indicate that the disk is being overused or is experiencing performance issues.

ionos_dbaas_postgres_instance_count

contract_number, instance, postgres_cluster, role

Desired number of instances. The number of currently ready and running instances may be different. ionos_dbaas_postgres_role provides information about running instances split by role.

ionos_dbaas_postgres_load5

contract_number, instance, postgres_cluster, role

Linux load average for the last 5 minutes. This metric is represented as a number between 0 and 1 (can be greater than 1 on multicore machines), where 0 indicates that the CPU core is idle and 1 indicates that the CPU core is fully utilized. Higher values may indicate that the system is experiencing performance issues or is approaching capacity.

ionos_dbaas_postgres_memory_available_bytes

contract_number, instance, postgres_cluster, role

Available memory in bytes.

ionos_dbaas_postgres_memory_total_bytes

contract_number, instance, postgres_cluster, role

Total memory of the underlying machine in bytes. Some of it is used for our management and monitoring tools and not available to PostgreSQL. During horizontal scaling you might see different values for each instance.

ionos_dbaas_postgres_role

contract_number, instance, postgres_cluster, role

Current role of the instance. Indicates whether an instance is currently "master" or "replica".

ionos_dbaas_postgres_storage_available_bytes

contract_number, instance, postgres_cluster, role

Free available disk space per instance in bytes.

ionos_dbaas_postgres_storage_total_bytes

contract_number, instance, postgres_cluster, role

Total disk space per instance in bytes. During horizontal scaling you might see different values for each instance.

ionos_dbaas_postgres_transactions:rate2m

contract_number, datid, datname, instance, postgres_cluster, role

Per-second average rate of SQL transactions (that have been committed), calculated over the last 2 minutes.

ionos_dbaas_postgres_user_tables_idx_scan

contract_number, datname, instance, postgres_cluster, relname, role, schemaname

Number of index scans per table/schema.

ionos_dbaas_postgres_user_tables_seq_scan

contract_number, datname, instance, postgres_cluster, relname, role, schemaname

Number of sequential scans per table/schema. A high number of sequential scans may indicate that an index should be added to improve performance.


Parameter | Description | Default value | Possible values
--- | --- | --- | ---
start | Retrieve log lines after this timestamp (format: RFC3339) | 30 days ago | between 30 days ago and now (before end)
end | Retrieve log lines before this timestamp (format: RFC3339) | now | between 30 days ago and now (after start)
direction | Direction in which the logs are sorted and limited | BACKWARD | BACKWARD or FORWARD
limit | Maximum number of log lines to retrieve. Which log lines are cut depends on direction | 100 | between 1 and 5000

curl --include \
    --request GET \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    "https://api.ionos.com/databases/postgresql/clusters/498ae72f-411f-11eb-9d07-046c59cc737e/logs?start=2022-01-05T12:00:00Z&end=2022-01-05T13:00:00Z&direction=FORWARD&limit=2"
{
  "instances": [
    {
      "name": "ionos-498ae72f-411f-11eb-9d07-046c59cc737e-by6qu3m-1",
      "messages": [
        {
          "time": "2022-01-05T12:00:29.793Z",
          "message": "2022-01-05 12:00:29.793 UTC,,,107077,\"[local]\",61d5885d.1a245,1,\"\",2022-01-05 12:00:29 UTC,,0,LOG,00000,\"connection received: host=[local]\",,,,,,,,,\"\",\"not initialized\""
        },
        {
          "time": "2022-01-05T12:00:29.794Z",
          "message": "2022-01-05 12:00:29.794 UTC,\"postgres\",\"test\",107078,\"[local]\",61d5885d.1a246,2,\"authentication\",2022-01-05 12:00:29 UTC,9/73319,0,LOG,00000,\"connection authorized: user=postgres database=test application_name=pgq ticker\",,,,,,,,,\"\",\"client backend\""
        }
      ]
    }
  ]
}
psql -h $ip_address -U $user -d $database << EOC
\l
\d
\dn
\dp
EOC
ip address
ip neigh
ping $ip -c 5
psql -h $ip -U $user -d postgres
postgres=# \c test
FATAL:  database "test" does not exist
DETAIL:  The database subdirectory "base/16421" is missing.

HDD images:

VMWare disk image

Microsoft disk image

RAW disk image

QEMU QCOW image

UDF file system

Parallels disk image

ISO images:

ISO 9660 CD-ROM

<version> <account-id> <interface-id> <srcaddr> <dstaddr> <srcport> <dstport> <protocol> <packets> <bytes> <start> <end> <action> <log-status>

Field | Type | Description | Example
--- | --- | --- | ---
version | string | The flow log version. Version 2 is the default. | 2
account-id | string | The IONOS Cloud account ID of the owner of the resource containing the interface for which flow logs are collected. | 12345678
interface-id | string | The interface unique identifier (UUID) for which flow logs are collected. | 7ffd6527-ce80-4e57-a949-f9a45824ebe2
srcaddr | string | The source address for incoming traffic, or the IPv4 address of the network interface for outgoing traffic. | 172.17.1.100
dstaddr | string | The destination address for outgoing traffic, or the IPv4 address of the network interface for incoming traffic. | 172.17.1.101
srcport | uint16 | The source port from which the network flow originated. | 59113
dstport | uint16 | The destination port for the network flow. | 20756
protocol | uint8 | The Internet Assigned Numbers Authority (IANA) protocol number of the traffic. For more information, see Assigned Internet Protocol Numbers. | 6
packets | uint64 | The number of packets transferred during the network flow capture window. | 17
bytes | uint64 | The number of bytes transferred during the network flow capture window. | 1325
start | string | The timestamp, in UNIX EPOCH format, of when the first packet of the flow was received within the grouping interval. | 1587983051
end | string | The timestamp, in UNIX EPOCH format, of when the last packet of the flow was received within the grouping interval. | 1587983052
action | string | The action associated with the traffic: ACCEPT (traffic accepted by the firewall) or REJECT (traffic rejected by the firewall). | ACCEPT
log-status | string | The flow log logging status: OK (normal flow logging) or SKIPDATA (some flow log records were skipped during the grouping interval). | OK

2 12345678 7ffd6527-ce80-4e57-a949-f9a45824ebe2 172.17.1.100 172.17.1.101 59113 20756 6 17 1325 1587983051 1587983052 ACCEPT OK
2 12345678 7ffd6527-ce80-4e57-a949-f9a45824ebe2 172.17.1.100 172.17.1.101 59113 20756 6 17 1325 1587983051 1587983052 REJECT OK
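Given the space-separated record format above, a log line can be parsed into named fields. A sketch (field names follow the table in this section):

```python
from typing import NamedTuple

class FlowLogRecord(NamedTuple):
    version: str
    account_id: str
    interface_id: str
    srcaddr: str
    dstaddr: str
    srcport: int
    dstport: int
    protocol: int
    packets: int
    bytes: int
    start: int
    end: int
    action: str
    log_status: str

def parse_record(line: str) -> FlowLogRecord:
    """Split a flow log line into its 14 fields, converting numeric ones."""
    f = line.split()
    if len(f) != 14:
        raise ValueError(f"expected 14 fields, got {len(f)}")
    return FlowLogRecord(f[0], f[1], f[2], f[3], f[4],
                         int(f[5]), int(f[6]), int(f[7]), int(f[8]),
                         int(f[9]), int(f[10]), int(f[11]), f[12], f[13])

record = parse_record(
    "2 12345678 7ffd6527-ce80-4e57-a949-f9a45824ebe2 172.17.1.100 "
    "172.17.1.101 59113 20756 6 17 1325 1587983051 1587983052 ACCEPT OK")
print(record.action, record.packets)
```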

Configure Flow Logs

The information and assistance available in this category make it easier for you to work with flow logs using the Data Center Designer (DCD). For the time being, you have the option of doing either of the following.

Creating a flow log

You can create flow logs for your network interfaces as well as the public interfaces of the Network Load Balancer and Network Address Translation (NAT) Gateway. Flow logs can publish data to your buckets in the IONOS S3 Object Storage.

After you have created and configured your bucket in the IONOS S3 Object Storage, you can create flow logs for your network interfaces.

Prerequisites

Before you create a flow log, make sure that you meet the following prerequisites:

  • You are logged on to the DCD.

  • You are the contract owner or an administrator.

  • You have permissions to edit the required data center.

  • You have the create and manage Flow logs privilege.

  • The VDC is open.

  • You are the owner or have write access to permissions of an IONOS S3 Object Storage bucket.

  • You have an IONOS S3 Object Storage instance with a bucket that exists for your flow logs. To create an IONOS S3 Object Storage bucket, see the IONOS S3 Object Storage page.

Procedure

In the Workspace, select the appropriate tab for the instance or interface for which you want to activate flow logs.

  1. In the Inspector pane, open the Network tab

  2. Open the properties of the network interface controller (NIC).

Accessing flow logs

Activate flow logs

Configure flow logs

Open the Flow Log drop-down and fill in the following fields:

  1. For Name, enter a name for the flow log rule. The name will also be the first part of the objects’ name prefix.

  2. For Direction, choose Ingress to create flow logs for incoming traffic, Egress for outgoing traffic, or Bidirectional to create flow logs for all traffic.

  3. For Action, choose Rejected to capture only traffic blocked by the firewall, Accepted to capture only traffic allowed by the firewall, or Any for all traffic.

  4. For Target S3 bucket, enter a valid existing IONOS S3 Object Storage bucket name and an optional object name prefix where flow log records should be written.

  5. Select Add flow log to complete the configuration of the flow log. It is applied once you provision your changes.

  • Characters ‘/’ (slash) and ‘%2F’ are not supported as object prefix characters.

  • You cannot modify the fields of a flow log rule after activating it.

  • There is a limit of one flow log created per NIC, NAT Gateway, and Network Load Balancer.

Result: An activated flow log rule is visualized by a green light on the NIC properties. The green light indicates that the configuration has been validated and is valid for provisioning.

valid flow log rule

A summary of the flow log rule can be seen by opening the drop-down of the flow log and selecting the name of the flow log rule.

flow log summary

At this point, you may make further changes to your data center (optional).

When ready, select Provision changes. After provisioning is complete, the network interface's flow logs are activated.

Flow logs can be provisioned on both new and previously provisioned instances.

Deleting a flow log

Prerequisites

Before you delete a flow log, make sure that you meet the following prerequisites:

  • You are logged on to the DCD

  • You are the contract owner or an administrator

  • You have permissions to edit the required data center

  • You have the Create and manage Flow logs privilege

  • The VDC is open

  • You are the owner or have write access to permissions of an IONOS S3 Object Storage bucket

Procedure

  1. Select the relevant VM of the interface for which you want to delete the flow logs in the Workspace.

  2. In the Inspector pane, open the Network tab

  3. Open the properties of the network interface controller (NIC)

  4. Open the Flow Log drop-down

  5. Select the trash bin icon to delete the flow log

delete a flow log

6. In the confirmation message, select OK

7. Select Provision changes. After provisioning is complete, the network interface's flow logs are deleted and no longer captured.

Deleting a flow log does not delete the existing log streams from your bucket. Existing flow log data must be deleted using the respective service's console. In addition, deleting a flow log that publishes to IONOS S3 Object Storage does not remove the bucket policies and log file access control lists (ACLs).

  1. In the Inspector pane, open the Settings tab

To activate flow logs, open the Flow Log drop-down and fill in the fields as described above.


Overview

Block storage is a type of IT architecture in which data is stored in fixed-size blocks. Block storage provides endless possibilities for storing large amounts of information. It guarantees the safety of resource planning systems and provides instant access to the required amount of data without delay.

Storage types and options

The virtual storage devices you create in the DCD are provisioned and hosted in one of the IONOS physical data centers. Virtual storage devices are used in the same way as physical devices and can be configured and managed within the server's operating system.

A virtual storage device is equivalent to an iSCSI block device and behaves exactly like direct-attached storage. IONOS block storage is managed independently of servers and is therefore easily scalable. You can assign a hard disk image to each storage device via the DCD (or the API). You can use one of the IONOS images, your own image, or a snapshot created with the DCD (or API). You have a choice of hard disk drive (HDD) and solid-state drive (SSD) storage technologies, and SSD is available in two different performance classes. Information on setting up storage can be found in the DCD How-Tos.

Up to 24 storage volumes can be connected to a virtual server or a Cloud Cube (a Cloud Cube already has one virtual storage device attached by default). You can use any mix of volume types if necessary.

IONOS Cloud provides HDD as well as SSD block storage in a double-redundant setup. Each virtual storage volume is replicated four times and stored on distributed physical devices within the selected data center location.

HDD storage

The following performance and configuration limits apply per HDD volume. The performance of HDD storage is static and independent of its volume size.

Performance HDD storage:

  • Read/write speed, sequential: 200 Mb/s at 1 MB block size

  • Read/write speed, full random:

    • Regular: 1,100 IOPS at 4 kB block size

    • Burst: 2,500 IOPS at 4 kB block size

Limits HDD storage:

  • Minimum Size per Volume: 1 GB

  • Maximum Size per Volume: 4 TB

Larger volumes can be made available on request. Please contact us.

SSD storage

SSD storage volumes are available in two performance classes - SSD Premium and SSD Standard. The performance of SSD storage depends on the volume size. Please find the respective performance and configuration limits listed below.

Performance SSD Premium storage:

  • Read/write speed, sequential: 1 Mb/s per GB at 1 MB block size

  • Read speed, full random: 75 IOPS per GB at 4 KB block size

  • Write speed, full random: 50 IOPS per GB at 4 KB block size

Limits SSD Premium storage:

  • Minimum Size per Volume: 1 GB

  • Maximum Size per Volume: 4 TB

    • Maximum Read/write speed, sequential: 600 Mb/s per volume at 1 MB block size

    • Maximum Read speed, full random: 45,000 IOPS at 4 KB block size and min. 4 Cores, 4 GB RAM per volume

    • Maximum Write speed, full random: 30,000 IOPS at 4 KB block size and min. 4 Cores, 4 GB RAM per volume

Larger volumes can be made available on request. Please contact us.

Performance SSD Standard storage:

  • Read/write speed, sequential: 0.5 Mb/s per GB at 1 MB block size

  • Read speed, full random: 40 IOPS per GB at 4 KB block size

  • Write speed, full random: 30 IOPS per GB at 4 KB block size

Limits SSD Standard storage:

  • Minimum Size per Volume: 1 GB

  • Maximum Size per Volume: 4 TB

    • Maximum Read/write speed, sequential: 300 Mb/s per volume at 1 MB block size

    • Maximum Read speed, full random: 24,000 IOPS at 4 KB block size and min. 2 Cores, 2 GB RAM per volume

    • Maximum Write speed, full random: 18,000 IOPS at 4 KB block size and min. 2 Cores, 2 GB RAM per volume

Larger volumes can be made available on request. Please contact us.

SSD performance: The performance of SSD storage is directly related to the volume size. To get the full benefits of high-speed SSDs, we recommend that you book SSD storage units of at least 100 GB. You can use smaller volumes for your virtual machines, but performance will be suboptimal compared to that of larger units. When storage units are configured in the DCD, expected performance is predicted based on the volume size (Inspector > Settings). For storage volumes of more than 600 GB, the performance is capped at the maximum specified in the documentation above.
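The volume-size scaling described above can be turned into a quick estimator. A sketch encoding the SSD Premium numbers from this page (75/50 IOPS per GB, 1 Mb/s per GB sequential, caps of 45,000/30,000 IOPS and 600 Mb/s):

```python
def ssd_premium_performance(size_gb: int) -> dict:
    """Estimate SSD Premium limits for a volume, applying the documented caps."""
    return {
        "seq_mbps": min(1 * size_gb, 600),        # sequential read/write, Mb/s
        "read_iops": min(75 * size_gb, 45_000),   # full random read
        "write_iops": min(50 * size_gb, 30_000),  # full random write
    }

print(ssd_premium_performance(100))  # 100 GB: 100 Mb/s, 7500 / 5000 IOPS
print(ssd_premium_performance(800))  # large volumes hit the caps
```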

Availability Zones

Secure your data, enhance reliability, and set up high-availability scenarios by deploying your virtual servers and storage devices across multiple Availability Zones.

Assigning different Availability Zones ensures that redundant modules reside on separate physical resources at IONOS. For example, a server or a storage device assigned to Availability Zone 1 resides on a different resource than a server or storage device assigned to Availability Zone 2.

For HDD and SSD Storages you have the following Availability Zone options:

  • Zone 1

  • Zone 2

  • Zone 3

  • A - Auto (default; the system automatically assigns an Availability Zone upon provisioning)

The server's Availability Zone can also be changed after provisioning. The storage device's Availability Zone is set on first provisioning and cannot be changed subsequently. However, you can take a snapshot and then use it to provide a storage device with a new Availability Zone.

Authentication

The first time you create a storage unit based on a public image, you must select at least one authentication method. Without authentication, the image on the storage unit cannot be provisioned. The authentication methods available depend on the IONOS operating system image you select.

Authentication methods depend on the operating system.

Authentication methods
SSH key
Password

We recommend using both an SSH key and a password with IONOS Linux images. This will allow you to log in with the Remote Console as well. It is not possible to provision a storage unit with a Linux image without specifying a password or an SSH key.

Passwords: Provisioning a storage device with a Windows image is not possible without specifying a password. It must be between 8 and 50 characters long and may only consist of numbers (0-9) and letters (a-z, A-Z). For IONOS Linux images, you can specify a password along with SSH keys, so that you can also log in without SSH, such as with the Remote Console. The password is set as the root or administrator password with corresponding permissions.
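The stated password rules (8 to 50 characters, letters and digits only) can be checked before provisioning. A minimal sketch:

```python
import re

# 8-50 characters, only a-z, A-Z, 0-9 (per the rules above)
PASSWORD_RE = re.compile(r"^[A-Za-z0-9]{8,50}$")

def is_valid_image_password(password: str) -> bool:
    return PASSWORD_RE.fullmatch(password) is not None

print(is_valid_image_password("Passw0rd42"))    # True
print(is_valid_image_password("short1"))        # False: fewer than 8 chars
print(is_valid_image_password("with-dash-88"))  # False: '-' not allowed
```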

SSH (Secure Shell): To use SSH, you must have an SSH key pair consisting of public and private keys. The private key is installed on the client (the computer you use to access the server), and the public key is installed on the (virtual) instance (the server you wish to access). The IONOS SSH feature requires that you have a valid SSH public/private key pair and that the private key is installed as appropriate for your local operating system.

If you set an invalid or incorrect SSH key, it must be corrected on the side of the virtual machine.

Limitations

IONOS is focused on ensuring the uninterrupted and cost-efficient operation of your services. This is why we offer a selection of tested operating systems for immediate use in your virtual cloud instances. To ensure uninterrupted, secure, and stable performance, all operating systems, regardless of their source, should meet the following requirements:

  • VirtIO drivers are essential for the operation of virtual network cards

  • The following are the recommended drivers for the operation of virtual storage:

    • VirtIO (maximum performance)

    • IDE (an alternative connection via IDE is available for vStorage, but it will not deliver the potential performance offered by IONOS).

    • QXL drivers are required to use the Remote Console.

  • We guarantee operation for the selected operating system as long as vendor or upstream support is available.

  • In general, all current Linux distributions and their derivatives are supported.

  • Microsoft Windows Server versions are also supported as long as vendor support is available.

The older an OS version, the greater the risk of performance and stability losses. It is recommended that you always switch to the current versions well before the manufacturer's support for your old version expires. This will greatly improve your operating system's security and functionality.

When operating software appliances, it is recommended that you use the images that have been specially prepared for the KVM hypervisor.

If you are using special software appliances or operating systems that are not listed here, please contact us. We would be happy to explore the possibility of using such systems within the IONOS Enterprise Cloud and advise you on the best possible implementation.

Overview

Managed Kubernetes facilitates the fully automated setup of Kubernetes clusters. Several clusters can be quickly and easily deployed, for example to set up staging environments, and then deleted again if necessary.

Managed Kubernetes also simplifies and carefully supports the automation of CI/CD pipelines in terms of testing and deployment.

Our solution offers the following:

  • Automatic updates and security fixes.

  • Version and upgrade provisioning.

  • Highly available and geo-redundant control plane.

  • Full cluster admin-level access to Kubernetes API.

The Kubernetes Manager

Everything related to Managed Kubernetes can be controlled in the DCD via the dedicated Kubernetes Manager, which can be found under Menu Bar > Containers > Managed Kubernetes.

The Kubernetes Manager provides a complete overview of your provisioned Kubernetes clusters and node pools, including their status. Furthermore, all operations concerning clusters and node pools can be performed from there.

Clusters and node pools

Kubernetes is organized in clusters and node pools. Node pools are created in the context of a cluster. The nodes belonging to a node pool are provisioned into virtual data centers. All servers within a node pool are identical in their configuration.

When viewing a data center that contains resources created by Kubernetes, these resources are represented as read-only. This is because they are managed by Kubernetes, and manual interactions would cause interference.

Clusters

Clusters can span multiple node pools that may be provisioned in different virtual data centers and across locations. For example, you can create a cluster consisting of multiple node pools where each pool is in a different location, and achieve geo-redundancy. For an in-depth description of how clusters work, see the official Kubernetes documentation.

All operations concerning the infrastructure of clusters can be performed using the Kubernetes Manager including cluster and node creation, and scaling of node pools.

The status of a cluster is indicated by a LED.

LED
Description

Node pools

All Kubernetes worker nodes are organized in node pools. All nodes within a node pool are identical in setup. The nodes of a pool are provisioned into virtual data centers at a location of your choice and you can freely specify the properties of all the nodes at once before creation.

All operations concerning the infrastructure of node pools can be performed using the Kubernetes Manager.

The status of a node pool is indicated by a LED.

LED
Description

Nodes and managed resources

Nodes or worker nodes are the servers in your data center that are managed by Kubernetes and constitute your node pools. Resources managed by Kubernetes in your data centers will be displayed by the DCD as read-only.

You can still see, inspect, and position the managed resources to your liking. However, the specifications of the resources are locked for manual interactions to avoid undesirable results. To modify the managed resources, use the Kubernetes Manager. You can manage the following resource types: servers, storages, NICs, and LANs (depending on your deployed pods and configurations).

The Inspector for managed resources permits no direct modifications to the resources themselves. It does allow easy navigation between the data center view and the cluster and node pools views in the Kubernetes Manager, as well as the following:

  • Switching to the Kubernetes Manager and showing the respective node pool

  • Downloading the kubeconfig for access to the cluster

  • Listing all nodes in the data center belonging to the same node pool

Modify Cluster Attributes

Once the PostgreSQL cluster is up and running, you can customize several attributes. For the first public release, you can alter the displayName attribute. You can also adjust the maintenanceWindow and change network connections.

Note: The sample UUID is 498ae72f-411f-11eb-9d07-046c59cc737e

Quick Links:

Renaming the database cluster

With the PATCH request, you can change the name of your database cluster.

Response

Upgrading the cluster version in-place

DBaaS supports upgrading Postgres to a higher major version in-place. To do so, simply issue a PATCH request containing the target Postgres version:

The upgrade procedure is efficient and should only take a few minutes (even for large databases). The database will be unavailable (potentially multiple times) until the upgrade is complete. Once the upgrade is done, the creation of a new backup is triggered.

Once the upgrade is triggered, it cannot be undone. If the upgrade fails or causes unexpected behavior in your application, the old state can be restored by creating a new database from the previous backup. An in-place restore will only apply the old data and not roll back to the older Postgres version.

Caution: Executing in-place upgrades drops objects and extensions from the database that could be incompatible with the new version. If you are unsure whether your application is affected by these changes, try the upgrade on a clone first.

Increasing cluster storage size in-place

DBaaS supports increasing the storage size of your cluster in-place. To do so, simply issue a PATCH request containing the new storage size:

The resizing happens online without interruptions.

Caution: Decreasing the storage size is not supported with this method.

Scaling a cluster vertically (RAM, CPU)

DBaaS supports increasing and decreasing the size of your database instances. To do so, simply issue a PATCH request containing the new size (you can also specify only one of cores or ram, if you don't want to change both):

Caution: This change requires the underlying nodes to be replaced and will therefore cause one failover.

Scaling a cluster horizontally (replicas)

DBaaS supports increasing and decreasing the number of your database replicas. To do so, simply issue a PATCH request containing the new replica count (between 1 and 5):

Caution: Scaling down may cause one or more failovers and interrupt open connections.

Setting maintenance windows

If you do not provide a window during the creation of your database, a random window will be assigned for the database. You can update the window at any time, as shown in the example below.

When your cluster only contains one replica, you might experience a short downtime during this maintenance window while your database instance is being updated. In a replicated cluster, we only update standbys, but we might perform a switchover in order to change the leader node.

Changing the connection

After creating your database you can change the connection to your private LAN or temporarily remove it completely. You can change it to either be connected to a different LAN, or simply update the IP. However, you always have to include all properties of the connections list for the request, even if you only want to update the database IP address. The newly provided LAN has to be in the same location as the database cluster. Updating the IP address also updates the record of the DNS name of the database.

Note: When you change the connection to a new LAN, the database will stop being reachable in the old network almost immediately. However, the new connection will only be established after your dedicated VMs are updated, which can take a couple of minutes depending on the number of instances you specified.

In order to remove the connection, you have to specify an empty list in the request body:

Restore from Backup

You can restore a database from a previous backup either in-place or to a different cluster.

Note: Please choose resources carefully for your new database cluster. The operation may fail if there is insufficient disk space or RAM. We recommend at least 4 GB of RAM for the new database, which can be scaled down after the restore operation.

Listing all available backups

To restore from a backup you will need to provide its ID. You can request a list of all available backups:

Request

Response

Listing backups per cluster

You can also list backups belonging to a specific cluster. For this, you need a clusterId.

Our chosen clusterId is: 498ae72f-411f-11eb-9d07-046c59cc737e

Request

Response

Restoring from backup in-place

You can now trigger a restore of the chosen cluster. Your database will not be available during the restore operation.

The recoveryTargetTime is an ISO-8601 timestamp that specifies the point in time up to which data will be restored. It is non-inclusive, meaning the recovery will stop right before this timestamp.

You should choose the backup with the most recent earliestRecoveryTargetTime that is still strictly less than the desired recoveryTargetTime. For example, suppose you have three backups with earliestRecoveryTargetTime of 1, 2, and 3 January 2022 at 00:00:00 respectively. If you want to restore to the recoveryTargetTime 2022-01-02T20:00:00Z, you should choose the backup from 2 January.
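Because ISO-8601 UTC timestamps sort lexicographically, the selection rule above can be sketched with plain string comparison. This is an illustrative sketch using the timestamps from the example, not an API call:

```shell
#!/usr/bin/env bash
# Pick the newest backup whose earliestRecoveryTargetTime lies strictly
# before the desired recoveryTargetTime. ISO-8601 UTC timestamps sort
# correctly as plain strings, so lexicographic comparison is enough.
TARGET="2022-01-02T20:00:00Z"
BEST=""
for T in "2022-01-01T00:00:00Z" "2022-01-02T00:00:00Z" "2022-01-03T00:00:00Z"; do
  if [[ "$T" < "$TARGET" ]] && [[ -z "$BEST" || "$T" > "$BEST" ]]; then
    BEST="$T"
  fi
done
echo "$BEST"    # the backup from 2 January
```

In practice you would loop over the earliestRecoveryTargetTime values returned by the backups endpoint instead of a hard-coded list.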

Note: To restore a cluster in-place you can only use backups from that cluster. If that backup is from an older Postgres version (after a major version upgrade), only the data is applied. The database will continue running the updated version.

Request

Response

The API will respond with a 202 Accepted status code if the request is successful.

Restoring to a different target

You can also create a new cluster as a copy from a backup by adding the fromBackup field in your POST request. You can use any backup from any cluster as long as the target cluster has the same or a more recent version of PostgreSQL.

The field takes the same arguments as shown above, backupId and recoveryTargetTime.

Note: Backing up is a continuous process, so if you have any ongoing workload in your current cluster, do not expect the old data to appear instantly. If you wish to avoid a slight delay, stop the workload prior to backing up.

If you want a new database to have all the data from the old one (clone database) use a backup with the most recent earliestRecoveryTargetTime and omit recoveryTargetTime from the POST request.

Note: You can use the POST and fromBackup functionality to move a database to another region since the new database cluster doesn't need to be in the same region as the original one.

Request

Modify Cluster Attributes

Once the MongoDB cluster is up and running, you can customize several attributes. Attributes such as displayName and templateID are updated immediately after the request. Other changes are applied on a schedule, such as during the maintenance window.

Note: The sample UUID is d02de413-d5af-4104-a6f9-3a3c2766ee61

Quick Links:

Renaming the database cluster

With the PATCH request, you can change the name of your database cluster.

Response

Upgrading the MongoDB version

DBaaS supports MongoDB versions 5.0 and 6.0. We keep the MongoDB minor version up to date with MongoDB's latest release for that major version.

Changing the version of a running MongoDB cluster via the API is not allowed.

Scaling a cluster vertically (RAM, CPU, Storage)

DBaaS allows you to scale up and down the size of your database instances. Database instance sizes are managed similarly to Cubes; each template exposes a combination of resource sizes that can be assigned to any of your clusters.

Note: You can retrieve a list of the available MongoDB Templates via the API.

To do so, issue a PATCH request containing the MongoDB Template ID:

The provisioning engine will delete a secondary replica and create a new server of the desired size. Once ready, it will replicate the existing data to the new server via the MongoDB replication mechanism. During this procedure, the cluster should operate normally. When the provisioning engine identifies that the new server is complete and healthy, it continues the rollout for every remaining secondary replica, up to the primary.

Note: Be prepared for a possible downtime in case of a cluster with only one replica.

Scaling a cluster horizontally (Replicas)

DBaaS allows you to scale up and down the number of database replicas. An increased replica count may result in a cluster with improved availability, performance (capacity to handle more data reads), and fault tolerance during upgrades. A new IP address must be provided for each new instance. To do so, send a PATCH request with the new instance count (supported values: 1, 3):

The patch example takes a previous cluster with one replica and adds two more replicas to it. As a result, two new IP addresses are added to the connection properties cidrList, and the instances property receives the total number of replicas.

New servers are provisioned immediately and added one at a time to the pool of replicas, implying an incremental rollout. Downgrading a cluster follows the same pattern, with replicas being removed one at a time until the desired replica count is reached.

NOTE: Scaling down may result in one or more failovers and the disruption of open connections.

More about MongoDB Replica Set Elections.

Setting maintenance windows

If you do not specify a maintenance window when creating your database, a random window will be assigned. You can update the window at any time, as shown in the example below.

If your cluster only has one replica, you may experience a brief outage while your database instance is being updated during this maintenance window. In a replicated cluster, secondaries are updated first and rolled out one by one; each rollout must be complete and healthy before the next one can begin.

Maintenance windows are used to update the cluster's MongoDB minor version. Other maintenance updates related to the cluster infrastructure and security might take place during a maintenance window.

Updating users and roles

User management for IONOS MongoDB clusters happens entirely via the IONOS API. Creating or modifying users from within MongoDB itself is not allowed. More information can be found in the User Management documentation.

Enable IPv6

To enable IPv6 for Local Area Network (LAN) in the Data Center Designer (DCD), follow these steps:

  1. Drag the Server element from the Palette onto the Workspace. The created server is automatically highlighted in turquoise. The Inspector pane allows you to configure the properties of this individual server instance.

  2. Drop the internet element onto the Workspace, and connect it to a LAN to provide internet access. First, connect the server or cube to the internet and then to the Local Area Network (LAN).

  3. By default, every new LAN has IPv6 addressing disabled. Select the checkbox Activate IPv6 for this LAN in LAN view.

    Note: On selecting PROVISION CHANGES, you can populate the LAN IPv6 CIDR block with prefix length /64 or allow it to be assigned automatically from the VDC's allocated /56 range. /64 indicates that the first 64 bits of the 128-bit IPv6 address are fixed. The remaining 64 bits are flexible, and you can use all of them.

  4. In the Inspector, configure your LAN device in the Network tab. Provide the following details:

    • Name: Choose a name; it is recommended that the name be unique within this Virtual Data Center (VDC).

    • MAC: The Media Access Control (MAC) address will be assigned automatically upon provisioning.

    • LAN: Select a LAN for which you want to configure the network.

    • Firewall: To activate the firewall, choose between Ingress / Egress / Bidirectional.

    • IPv4 Configuration: Provide the following details:

      • Primary IP: The primary IP address is automatically assigned by the IONOS DHCP server. You can, however, enter an IP address for manual assignment by selecting one of the reserved IPs from the drop-down list. Private IP addresses should be entered manually. The Network Interface Controller (NIC) has to be connected to the Internet.

      • Failover: If you have an HA setup including a failover configuration on your VMs, you can create and manage IP failover groups that support your High Availability (HA) setup.

      • Firewall: Configure the firewall.

      • DHCP: It is often necessary to run a Dynamic Host Configuration Protocol (DHCP) server in your virtual data center (e.g. Preboot Execution Environment (PXE) boot for fast rollout of VMs). If you use your own DHCP server, clear this checkbox so that your IPs are not reassigned by the IONOS DHCP server.

      • Add IP: In order to use "floating" or virtual IPs, you can assign additional IPs to a NIC by selecting them from the drop-down menu.

    • IPv6 Configuration: Provide the following details:

      • NIC IPv6 CIDR: You can populate an IPv6 CIDR block with prefix length /80, or allow it to be assigned automatically from the VDC's allocated range by selecting PROVISION CHANGES. You can also choose one or more individual /128 IPs. Only the first IP is allocated automatically; the remaining IPs can be assigned as per your requirement. The maximum number of IPv6 IPs that can be allocated per NIC is 50.

      • DHCPv6: It is often necessary to run your own DHCPv6 server in your virtual data center (e.g. PXE boot for fast rollout of VMs). If you use your own DHCPv6 server, clear this checkbox so that your IPs are not reassigned by the IONOS DHCPv6 server.

      • Add IP: In order to use "floating" or virtual IPs, you can assign additional IPs to a NIC by selecting them from the drop-down menu.

  5. Start the provisioning process by clicking PROVISION CHANGES in the Inspector.

The Virtual Data Center (VDC) is provisioned with the new network settings.

Note:

  • IPv6 CIDRs assigned to LANs (/64) and NICs (/80 and /128) must be unique.

  • You can create a maximum of 256 IPv6-enabled LANs per VDC.
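These limits follow from the prefix lengths: each step from the /56 VDC allocation to /64 LANs to /80 NIC blocks fixes additional bits. A quick sanity check of that arithmetic:

```shell
# A /56 VDC allocation contains 2^(64-56) = 256 distinct /64 LAN prefixes;
# each /64 LAN contains 2^(80-64) = 65536 distinct /80 NIC blocks.
LANS_PER_VDC=$(( 1 << (64 - 56) ))
NICS_PER_LAN=$(( 1 << (80 - 64) ))
echo "$LANS_PER_VDC /64 LANs per VDC, $NICS_PER_LAN /80 blocks per LAN"
```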


curl --include \
    --request PATCH \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    --data-binary '{
      "metadata": {},
      "properties": {
        "displayName": "an even better name!"
      }
    }' \
    https://api.ionos.com/databases/postgresql/clusters/498ae72f-411f-11eb-9d07-046c59cc737e
{
  "type": "cluster",
  "id": "498ae72f-411f-11eb-9d07-046c59cc737e",
  "metadata": {
    "state": "AVAILABLE",
    "createdDate": "2020-12-10T12:37:50.000Z",
    "createdBy": "[email protected]",
    "createdByUserId": "012342f-411f-1eeb-9d07-046c59cc737e",
    "lastModifiedDate": "2020-12-18T21:37:50.000Z",
    "lastModifiedBy": "[email protected]",
    "lastModifiedByUserId": "012342f-411f-1eeb-9d07-046c59cc737e"
  },
  "properties": {
  "displayName": "an even better name!",
  "location": "DE/FRA",
  ...
  }
}
curl --include \
    --request PATCH \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    --data-binary '{
      "metadata": {},
      "properties": {
        "postgresVersion": "15"
      }
    }' \
    https://api.ionos.com/databases/postgresql/clusters/498ae72f-411f-11eb-9d07-046c59cc737e
curl --include \
    --request PATCH \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    --data-binary '{
      "metadata": {},
      "properties": {
        "storageSize": 50000
      }
    }' \
    https://api.ionos.com/databases/postgresql/clusters/498ae72f-411f-11eb-9d07-046c59cc737e
curl --include \
    --request PATCH \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    --data-binary '{
      "metadata": {},
      "properties": {
        "cores": 4,
        "ram": 4096
      }
    }' \
    https://api.ionos.com/databases/postgresql/clusters/498ae72f-411f-11eb-9d07-046c59cc737e
curl --include \
    --request PATCH \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    --data-binary '{
      "metadata": {},
      "properties": {
        "instances": 3
      }
    }' \
    https://api.ionos.com/databases/postgresql/clusters/498ae72f-411f-11eb-9d07-046c59cc737e
curl --include \
    --request PATCH \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    --data-binary '{
      "metadata": {},
      "properties": {
        "maintenanceWindow": {
          "dayOfTheWeek": "Sunday",
          "time": "03:30:00"
        }
      }
    }' \
    https://api.ionos.com/databases/postgresql/clusters/498ae72f-411f-11eb-9d07-046c59cc737e
curl --include \
    --request PATCH \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    --data-binary '{
      "properties": {
        "connections": [
          {
            "datacenterId": "b4f86015-9918-443d-be14-aa2eb7529f40",
            "lanId": "2",
            "cidr": "192.168.1.100/24"
          }
        ]
      }
    }' \
    https://api.ionos.com/databases/postgresql/clusters/498ae72f-411f-11eb-9d07-046c59cc737e
{
  "properties": {
    "connections": []
  }
}
curl --include \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    https://api.ionos.com/databases/postgresql/clusters/backups
{
  "id": "backups",
  "type": "collection",
  "items": [
    {
      "type": "backup",
      "id": "dcd31531-3ac8-11eb-9feb-046c59cc737e",
      "properties": {
        "clusterId": "498ae72f-411f-11eb-9d07-046c59cc737e",
        "version": "14",
        "earliestRecoveryTargetTime": "2021-12-08T14:02:59Z"
      },
      "metadata": {...}
    },
    {
      "type": "backup",
      "id": "2b0cd7f8-5924-11ec-b621-da289d52ded8",
      "properties": {
        "clusterId": "2c12ad79-5cba-b2fc-b621-da289d52ded8",
        "version": "15",
        "earliestRecoveryTargetTime": "2021-12-09T14:02:59Z"
      },
      "metadata": {...}
    }
  ],
  "offset": 0,
  "limit": 2,
  "_links": {}
}
curl --include \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    https://api.ionos.com/databases/postgresql/clusters/498ae72f-411f-11eb-9d07-046c59cc737e/backups
{
  "type": "collection",
  "id": "498ae72f-411f-11eb-9d07-046c59cc737e/backups",
  "items": [
    {
      "type": "backup",
      "id": "dcd31531-3ac8-11eb-9feb-046c59cc737e",
      "metadata": {...},
      "properties": {
        "clusterId": "498ae72f-411f-11eb-9d07-046c59cc737e",
        "version": "14",
        "isActive": true,
        "earliestRecoveryTargetTime": "2020-12-08T20:13:49.000Z"
      }
    }
  ],
  "offset": 0,
  "limit": 1,
  "links": {}
}
curl --include \
    --request POST \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    --data-binary '{
      "backupId": "dcd31531-3ac8-11eb-9feb-046c59cc737e",
      "recoveryTargetTime": "2020-12-10T12:37:50.000Z"
    }' \
    https://api.ionos.com/databases/postgresql/clusters/498ae72f-411f-11eb-9d07-046c59cc737e/restore
curl --include \
    --request POST \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    --data-binary '{
      "metadata": {},
      "properties": {
        "postgresVersion": "14",
        "instances": 2,
        "cores": 4,
        "ram": 2048,
        "location": "DE/TXL",
        "storageSize": 20000,
        "storageType": "HDD",
        "displayName": "a good name for a database",
        "synchronizationMode": "ASYNCHRONOUS",
        "credentials": {
          "username": "dsertionos",
          "password": "knight-errant"
        },
        "connections": [
          {
            "datacenterId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
            "lanId": "x",
            "cidr": "x.x.x.x/24"
          }
        ],
        "fromBackup": {
          "backupId": "dcd31531-3ac8-11eb-9feb-046c59cc737e",
          "recoveryTargetTime": "2020-12-23T09:37:50.000Z"
        }
      }
    }' \
    https://api.ionos.com/databases/postgresql/clusters



curl --include \
    --request PATCH \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    --data-binary '{
      "properties": {
        "displayName": "an even better name!"
      }
    }' \
    https://api.ionos.com/databases/mongodb/clusters/d02de413-d5af-4104-a6f9-3a3c2766ee61
{
  "type": "cluster",
  "id": "d02de413-d5af-4104-a6f9-3a3c2766ee61",
  "metadata": {
    "state": "AVAILABLE",
    "createdDate": "2020-12-10T12:37:50.000Z",
    "createdBy": "[email protected]",
    "createdByUserId": "012342f-411f-1eeb-9d07-046c59cc737e",
    "lastModifiedDate": "2020-12-18T21:37:50.000Z",
    "lastModifiedBy": "[email protected]",
    "lastModifiedByUserId": "012342f-411f-1eeb-9d07-046c59cc737e"
  },
  "properties": {
    "displayName": "an even better name!",
    "location": "de/fra"
  },
  ...
}
curl --include \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    https://api.ionos.com/databases/mongodb/templates
curl --include \
    --request PATCH \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    --data-binary '{
      "properties": {
        "templateID": "6b78ea06-ee0e-4689-998c-fc9c46e781f6"
      }
    }' \
    https://api.ionos.com/databases/mongodb/clusters/d02de413-d5af-4104-a6f9-3a3c2766ee61
curl --include \
    --request PATCH \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    --data-binary '{
      "properties": {
        "instances": 3,
        "connections": [
          {
            "datacenterId": "b72ebcf5-6158-4ddb-8f65-31b20d82b6c7",
            "lanId": "2",
            "cidrList": [
              "10.7.222.3/24",
              "10.7.222.4/24",
              "10.7.222.5/24"
            ]
          }
       ]
      }
    }' \
    https://api.ionos.com/databases/mongodb/clusters/d02de413-d5af-4104-a6f9-3a3c2766ee61
curl --include \
    --request PATCH \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    --data-binary '{
      "properties": {
        "maintenanceWindow": {
          "dayOfTheWeek": "Sunday",
          "time": "03:30:00"
        }
      }
    }' \
    https://api.ionos.com/databases/mongodb/clusters/d02de413-d5af-4104-a6f9-3a3c2766ee61



Boot with cloud-init

Overview

Cloud-init is a software package that automates the initialization of servers during system boot. When you deploy a new Linux server from an image, cloud-init gives you the option to set default user data. User data must be written in shell scripts or cloud-config directives using YAML syntax. This method is widely compatible across platforms.

Compatibility: This service is supported on all public IONOS Cloud Linux distributions (Debian, CentOS, and Ubuntu). You may submit user data through the DCD or via Cloud API. Existing cloud-init configurations from other providers are compatible with IONOS Cloud.

Limitations: Cloud-init is available on all public images supplied by IONOS Cloud. If you wish to use your own Linux image, please make sure that it is cloud-init supported first. Otherwise, there is no guarantee that the package will function as intended. Windows images are currently out of scope; adding them may be considered at a later stage.

Provisioning: Cloud-init can only be set at initial provisioning. It cannot be applied to instances that have already been provisioned, and the settings cannot be changed after provisioning.

Laptops: When using a laptop, please scroll down the properties panel, as additional fields are not immediately visible on a small screen.

Supported user data formats

This tutorial demonstrates the use of cloud-config and user-data scripts. However, the cloud-init package supports a variety of formats.

Data Format
Description

Base64

If user-data is base64-encoded, cloud-init determines if it can understand the decoded data as one of the supported types. If it understands the decoded data, it decodes the data and handles it appropriately. If not, it returns the base64 data intact.

User-Data Script

Begins with #! or Content-Type: text/x-shellscript.

The script is run by /etc/init.d/cloud-init-user-scripts during the first boot cycle. This occurs late in the boot process (after the initial configuration actions are performed).

Include File

Begins with #include or Content-Type: text/x-include-url.

The file contains a list of URLs, one per line. Each of the URLs is read, and their content is passed through this same set of rules. The content read from the URL can be MIME-multi-part or plaintext.

Cloud Config data

Begins with #cloud-config or Content-Type: text/cloud-config.

For a commented example of supported configuration formats, see the examples.

Upstart Job

Begins with #upstart-job or Content-Type: text/upstart-job.

This content is stored in a file in /etc/init, and upstart consumes the content as per other upstart jobs.

Cloud Boothook

Begins with #cloud-boothook or Content-Type: text/cloud-boothook.

This content is boothook data. It is stored in a file under /var/lib/cloud and then runs immediately.

This is the earliest hook available. There is no mechanism provided for running it only one time. The boothook must take care of this itself. It is provided with the instance ID in the environment variable INSTANCE_ID. Use this variable to provide a once-per-instance set of boothook data.
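The header-based dispatch in the table above can be approximated by checking the first line of the user data. The detect_format helper below is illustrative only and is not part of cloud-init:

```shell
# Classify a user-data file by its first line, mirroring the table above.
detect_format() {
  case "$(head -n 1 "$1")" in
    '#!'*)              echo "user-data script" ;;
    '#include'*)        echo "include file" ;;
    '#cloud-config'*)   echo "cloud-config" ;;
    '#upstart-job'*)    echo "upstart job" ;;
    '#cloud-boothook'*) echo "cloud boothook" ;;
    *)                  echo "unknown" ;;
  esac
}

printf '#cloud-config\npackages:\n  - httpd\n' > /tmp/user-data.yml
detect_format /tmp/user-data.yml    # prints "cloud-config"
```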

Configuring user data via DCD

1. In the DCD, create a new virtual instance and attach any storage device to it.

2. Ensure the storage device is selected. Its Inspector pane should be visible on the right.

3. When choosing the Image, you may either use your own or pick one that is supplied by IONOS.

For IONOS supplied images, select No image selected > IONOS Images.

Alternatively, for private images select No image selected > Own Images.

You may either use an IONOS image or upload your own via FTP and select it instead.

4. Once you choose an image, additional fields will appear in the Inspector pane.

5. A Root password is required for Remote Console access. You may change it later.

6. SSH keys are optional. You may upload a new key or use an existing file. SSH keys can also be injected as user data utilizing cloud-init.

7. You may add a specific key to the Ad-hoc SSH Key field.

8. Under Cloud-init user data, select No configuration and a window will appear.

9. Input your cloud-init data. Either use a bash script or a cloud-config file with YAML syntax. Sample scripts are provided below.

Configuration of Cloud-init User Data

10. To complete setup, return to the Inspector and click Provision Changes. Cloud-init automatically runs at boot, applying the changes requested.

When the DCD returns the message that provisioning has been successfully completed, the infrastructure is virtually set up. However, bootstrapping, which includes the execution of cloud-init data, may take additional time; this execution time is not included in the success message. Please allow extra time for the tasks to complete before testing.

Using shell scripts

Using shell scripts is an easy way to bootstrap a server. The example script below installs and configures a web server on CentOS.

#!/bin/bash
# Use this for your user data (script from top to bottom)
# install httpd (Linux 2 version)
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
echo "<h1>Hello World from $(hostname -f)</h1>" > /var/www/html/index.html

Allow enough time for the instance to launch and run the commands in your script, and then check to see that your script has completed the tasks that you intended.

The above example will install a web server and rewrite the default index.html file. To test if cloud-init bootstrapped your VM successfully, you can open the corresponding IP address in your browser. You should be greeted with a “Hello World” message from your web server.

Using cloud-config directives

Cloud-init images can also be bootstrapped using cloud-config directives. The cloud-init website outlines all supported modules and gives examples of basic directives.

Modules

Examples

Example 1: Creating a swap partition

The following script is an example of how to create a swap partition with second block storage, using a YAML script:

#cloud-config
fs_setup:
  - label: mylabel
    device: /dev/vda
    filesystem: ext4
  - label: swap
    device: /dev/vdb
    filesystem: swap
mounts:
- [ /dev/vda, /, ext4, defaults, 0, 0 ]
- [ /dev/vdb, none, swap, sw, 0, 0 ]

Example 2: Resizing the file system

The following script is an example of how to resize your file system according to the chosen size of the block storage. It will also create a user with an SSH key, using a cloud-config YAML script:

#cloud-config
resize_rootfs: True
users:
  - name: pb-user
    gecos: Demo User
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: users, admin
    ssh_import_id: None
    lock_passwd: true
    ssh_authorized_keys:
      - ssh-rsa AAAA...

Output log files

The cloud-init output log file (/var/log/cloud-init-output.log) captures console output. Depending on the default logging configuration, a second log file exists under /var/log/cloud-init.log. This provides a comprehensive record based on user data.

Configuring user data via API

Cloud API provides enhanced convenience if you want to automate the provisioning and configuration of cloud instances. Cloud-init is configured on the volume resource in Cloud API V6 (or later). The relevant property is described below:

Name: userData

Type: string

Description: The cloud-init configuration for the volume, as a base64-encoded string. The property is immutable and may only be set when a new volume is created. It is mandatory to provide either a public image or an imageAlias with cloud-init compatibility in conjunction with this property.
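For example, the required base64 encoding can be produced with a few lines of Python (the cloud-config content below is illustrative):

```python
import base64

# Illustrative cloud-config to run at first boot.
cloud_config = """#cloud-config
packages:
  - httpd
runcmd:
  - systemctl enable --now httpd
"""

# The userData property expects the configuration as a base64-encoded string.
user_data = base64.b64encode(cloud_config.encode("utf-8")).decode("ascii")

# Decoding restores the original cloud-config byte-for-byte.
assert base64.b64decode(user_data).decode("utf-8") == cloud_config
```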

Curl example

curl --include \
     --request POST \
     --user '<user:password>' \
     --header "Content-Type: application/json" \
     --data-binary '{
         "properties": {
             "name": "Server-01",
             "ram": 2048,
             "cores": 1,
             "availabilityZone": "ZONE_1",
             "cpuFamily": "INTEL_SKYLAKE"
         },
         "entities": {
             "volumes": {
                 "items": [ {
                    "properties": {
                      "size": 10,
                      "type": "HDD",
                      "name": "Server-01_HDD",
                      "image": "bf4d1400-b48d-11eb-b9b3-d2869b2d44d9",
                      "imagePassword": "<pAsSW0rD>",
                      "sshKeys": ["<ssh_key>"],
                      "userData": "I2Nsb3VkLWNvbmZpZwoKcGFja2FnZXM6CiAgLSBodHRwZAogIC0gZmlyZXdhbGxkCgpydW5jbWQ6CiAgLSAvYmluL3N5c3RlbWN0bCBlbmFibGUgaHR0cGQKICAtIC9iaW4vc3lzdGVtY3RsIHN0YXJ0IGh0dHBkCiAgLSAvYmluL2ZpcmV3YWxsLW9mZmxpbmUtY21kIC0tYWRkLXBvcnQ9ODAvdGNwCiAgLSAvYmluL3N5c3RlbWN0bCBlbmFibGUgZmlyZXdhbGxkCiAgLSAvYmluL3N5c3RlbWN0bCBzdGFydCBmaXJld2FsbGQKICAtIGxvYWRrZXlzIGRlCgp3cml0ZV9maWxlczoKLSBjb250ZW50OiB8CiAgICA8IURPQ1RZUEUgaHRtbD4KICAgIDxodG1sPgogICAgICA8aGVhZD4KICAgICAgPC9oZWFkPgogICAgICA8Ym9keT4KICAgICAgICA8cD5XZWxjb21lIHRvIHlvdXIgbmV3IHdlYiBzZXJ2ZXIuPC9wPgogICAgICA8L2JvZHk+CiAgICA8L2h0bWw+CiAgcGF0aDogL3Zhci93d3cvaHRtbC9pbmRleC5odG1sCgpmaW5hbF9tZXNzYWdlOiAiVGhlIHN5c3RlbSBpcyBmaW5hbGx5IHVwLCBhZnRlciAkVVBUSU1FIHNlY29uZHMiCg=="
                    }
                 } ]
             },
             "nics": {
                 "items": [ {
                    "properties": {
                      "name": "NIC001",
                      "dhcp": true,
                      "lan": 1
                      }
                    } ]
             }
        }
        }' \
 https://api.ionos.com/cloudapi/v6/datacenters/<datacenter_id>/servers

Virtual Machines FAQ

Virtual Servers

What are the maximum resources available for a Server?

Cores

Virtual server configurations are subject to the following limits, according to the CPU type:

  • AMD CPU: Up to 62 cores and 230 GB RAM

  • Intel® CPU: Up to 51 Intel® cores and 230 GB RAM

A single Intel® physical core with Hyper-Threading Technology is exposed to the operating system of your virtual server as two distinct “logical cores”, which process separate threads.

Because the size of the working memory (RAM) cannot be processed during the initial configuration, newly provisioned servers with more than 8 GB of RAM may not start successfully when created from IONOS Windows images.

We recommend initially setting the RAM size to 8 GB; RAM size can then be scaled as needed after the initial provisioning and configuration.

HDD storage

  • Minimum: 1 GB

  • Maximum: 4 TB

SSD storage

  • Minimum: 1 GB

  • Maximum: 4 TB

You can scale up HDD and SSD storage volumes as needed.

What are the Availability Zones?

IONOS data centers are divided into separate areas called Availability Zones.

You can enhance reliability and set up high-availability scenarios by deploying redundant virtual servers and storage devices across multiple Availability Zones.

See also: Availability Zones

How do I change the Availability Zone?

  • Select the server in the DCD Workspace

  • Use Inspector > Properties > Availability Zone menu to change the Availability Zone

What is Live Vertical Scaling?

Live Vertical Scaling (LVS) technology permits you to scale the number of CPU cores and amount of RAM while the server is running, without having to restart it. Please note that Windows only allows scaling the number of CPU cores, but not the amount of RAM. For scaling to more than eight CPU cores, Windows requires a reboot.

See also: Scale resources

How do I reboot a server?

Servers can be restarted at the operating system level (using the reboot command, for instance). You can also use the DCD reset function, which functions similarly to a physical server's reset button.

See also: Starting, stopping, rebooting a server

How do I shut down a server?

You should use DCD to shut down your server completely. Your VM will then be marked as "shut down" in the DCD. Shutting down a VM at the operating system level alone does not deallocate its resources or suspend the billing.

See also: Starting, stopping, rebooting a server

How do I delete a server?

You can delete a server in the DCD Workspace by right-clicking it and selecting Delete, or by selecting (clicking) the server and pressing the Del key.

See also: Deleting a server

What do I do when my VM isn't accessible?

Try to connect to your VM using the Remote Console to see if it is up and running. If you have trouble logging on to your VM, please provide our support team with screenshots of error messages and prompts from the Remote Console.

  • Windows users: Please send us a screenshot of the Task Manager.

  • Linux users: Please send us the output of uptime and top.

How do I get the root/admin passwords with IONOS images?

When using IONOS-provided images, you set the passwords yourself prior to provisioning.

Why does my newly provisioned server not start?

Newly provisioned servers with more than 8 GB of RAM may not start successfully when created from IONOS Windows images, because the RAM size cannot be processed during the initial configuration.

An error is displayed depending on the server version; for example, Windows Server 2012 R2 displays the following message:

"Windows could not finish configuring the system. To attempt to resume configuration, restart the computer."

We recommend initially setting the RAM size to 8 GB, and rescaling it as needed after the initial provisioning and configuration is complete.

Which CPU architecture should I choose?

The choice of CPU architecture primarily depends on your workload and performance requirements. Intel® processors are often more powerful than AMD processors and are designed for compute-intensive applications and workloads where the benefits of Hyper-Threading and multitasking can be fully exploited. However, Intel® cores cost twice as much as AMD cores. It is therefore recommended that you measure and compare the actual performance of both CPU architectures against your own workload. You can change the CPU type in the DCD or via the API, and see for yourself whether Intel® processors deliver significant performance gains or whether the more economical AMD cores still meet your requirements.

With our unique "Core Technology Choice" feature, we are the only cloud computing provider that makes it possible to flexibly change the processor architecture per virtual instance.

What do I do if the cursor in the Remote Console disappears?

When the cursor disappears after logging on to the Remote Console, you can reconnect to the server using the appropriate menu entry.

Cloud Cubes

What are Cloud Cubes?

For a long time, the duopoly of virtual private servers (VPS) and dedicated cloud servers dominated virtualized computing environments.

Enter Cloud Cubes — virtual private server instances — the next generation of IaaS. Developed by IONOS Cloud, Cubes are ideal for specific workloads that do not require high compute performance from all resources at all times — development and testing environments, website hosting, simple web applications, and so on.

While based on shared resources, the Cubes can rival physical servers through a platform design that can redistribute available performance capacities among individual instances. At the same time, reduced operational complexity and highly optimized resource utilization translate into lower operating costs.

Cubes instances come complete with vCPUs, RAM, and direct-attached NVMe storage volumes; choose among standard configurations by selecting one of several templates for your Cubes. Storage capacities can be expanded further by adding network block storage units to your Cubes.

Cubes instances can be used together with all enterprise-grade features, resources, and services offered by IONOS Cloud.

Affordable, quickly available, and with everything you need — have your Cubes up and running in minutes in the IONOS Cloud.

PVPanic Device

What is the PVPanic device for?

The device monitors VM/OS crashes. PVPanic is a simulated device, through which a guest panic event is sent to the hypervisor, and a QMP event is generated.

Do I need to restart the VM to get PVPanic?

No, the PVPanic device is plug-and-play. However, installing drivers may require a restart.

What happens if Windows VMs complain about an unknown device?

This is no cause for concern. First of all, you do not need to reboot the VM. However, you will need to reinstall appropriate drivers (which are provided by IONOS Cloud).

Are there any risks when enabling the use of PVPanic?

No issues have been found when enabling PVPanic. However, users cannot choose whether to enable the device; it is always available for use.

Something else to consider: PVPanic does not offer bidirectional communication between the VM and the hypervisor. Communication only goes from the VM towards the hypervisor.

Are there any compatibility issues with AMD or Intel processors?

There are no special requirements or limitations to any components of a virtualized server. Therefore, PVPanic is completely compatible with AMD and Intel processors.

Do we support hardware solutions?

The PVPanic device is implemented as an ISA device (using IOPORT).

Does my Linux image support the PVPanic device?

Check the CONFIG_PVPANIC kernel configuration parameter. For example:

root@debian:~# grep --color CONFIG_PVPANIC /boot/config-$(uname -r)
CONFIG_PVPANIC=m

The possible values are: m = the PVPanic device is available as a module; y = the PVPanic device is built into the kernel; n = the PVPanic device is not available.

When the device is not available (CONFIG_PVPANIC=n), use another kernel or image.

How do I install the device driver for the pvpanic device on Windows?

For your virtual machines running Microsoft Windows, we provide an ISO image that includes all the relevant drivers for your instance. Just log in to the DCD, open your chosen virtual data center, add a CD-ROM drive, and insert the driver ISO (this can also be done via the Cloud API).

Please note that a reboot is required to add the CD drive.

Once provisioning is complete, you can log in to your OS and add drivers for the unknown device through the Device Manager. To open it, enter devmgmt.msc in the Windows search bar, console, or PowerShell.

Since this is a Plug & Play driver, there is no need to reboot the machine.


Users Management

For MongoDB clusters, you have to manage users via the IONOS API; you cannot create users inside the database. This How-To shows you in detail how to create, view, modify, and delete users.

Roles

In MongoDB, most roles are scoped to a database. For example, you grant readWrite permissions on the database mydb. The exceptions are roles that grant permissions across all databases, such as readAnyDatabase.

Assignable roles have several restrictions to prevent customers from breaking out of their database or interfering with internal components:

  • Currently, you can only assign predefined roles. Of those, only read, readWrite, readAnyDatabase, readWriteAnyDatabase, dbAdmin, dbAdminAnyDatabase, and clusterMonitor are currently supported.

  • Roles with the suffix *AnyDatabase are granted only on the admin database, which is the main user management database.

  • Roles read, readWrite and dbAdmin cannot be granted on config and local databases.
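These restrictions can also be checked client-side before calling the API; a minimal Python sketch (the validate_role helper is illustrative, not part of the IONOS API):

```python
# Roles currently assignable via the IONOS API (from the list above).
SUPPORTED_ROLES = {
    "read", "readWrite", "readAnyDatabase", "readWriteAnyDatabase",
    "dbAdmin", "dbAdminAnyDatabase", "clusterMonitor",
}

def validate_role(role: str, database: str) -> None:
    """Raise ValueError if a role assignment violates the documented restrictions."""
    if role not in SUPPORTED_ROLES:
        raise ValueError(f"unsupported role: {role}")
    if role.endswith("AnyDatabase") and database != "admin":
        raise ValueError(f"{role} can only be granted on the admin database")
    if role in {"read", "readWrite", "dbAdmin"} and database in {"config", "local"}:
        raise ValueError(f"{role} cannot be granted on the {database} database")

validate_role("readWrite", "mydb")           # allowed
validate_role("readAnyDatabase", "admin")    # allowed
```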

Adding a user

When creating a user you need to consider the following:

  • All users are created in the admin database.

  • The combination of username and database must be unique within the MongoDB cluster.

  • You can only change the assigned roles and the password of a user.

  • You can't have more than 100 users in a cluster.

To add users to a MongoDB cluster, simply issue a POST request for each user.

curl --include \
    --request POST \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    --data-binary '{
      "metadata": {},
      "properties": {
        "username": "username",
        "password": "password",
        "roles": [
          {
            "role": "readWrite",
            "database": "mydb"
          }
        ]
      }
    }' \
    https://api.ionos.com/databases/mongodb/clusters/498ae72f-411f-11eb-9d07-046c59cc737e/users

Deleting a user

To delete a user from a MongoDB cluster, simply issue a DELETE request as follows:

curl --include \
    --request DELETE \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    https://api.ionos.com/databases/mongodb/clusters/498ae72f-411f-11eb-9d07-046c59cc737e/users/username
{
  "type": "user",
  "metadata": {
    "createdDate": "2020-12-10T12:37:50.000Z",
    "createdBy": "[email protected]",
    "createdByUserId": "012342f-411f-1eeb-9d07-046c59cc737e"
  },
  "properties": {
    "username": "username",
    "roles": [
      {
        "role": "readWrite",
        "database": "mydb"
      }
    ]
  }
}

Getting all users

To get a list of all users defined in a MongoDB cluster, simply issue a GET request as follows:

curl --include \
    --request GET \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    https://api.ionos.com/databases/mongodb/clusters/498ae72f-411f-11eb-9d07-046c59cc737e/users
{
  "id": "498ae72f-411f-11eb-9d07-046c59cc737e",
  "type": "collection",
  "items": [
    {
      "type": "user",
      "metadata": {
        "createdDate": "2020-12-10T12:37:50.000Z",
        "createdBy": "[email protected]",
        "createdByUserId": "012342f-411f-1eeb-9d07-046c59cc737e"
      },
      "properties": {
        "username": "username",
        "roles": [
          {
            "role": "readWrite",
            "database": "mydb"
          }
        ]
      }
    }
  ]
}

Getting a single user

To get a specific user in a MongoDB cluster, simply issue a GET request as follows:

curl --include \
    --request GET \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    https://api.ionos.com/databases/mongodb/clusters/498ae72f-411f-11eb-9d07-046c59cc737e/users/admin/username
{
  "type": "user",
  "metadata": {
    "createdDate": "2020-12-10T12:37:50.000Z",
    "createdBy": "[email protected]",
    "createdByUserId": "012342f-411f-1eeb-9d07-046c59cc737e"
  },
  "properties": {
    "username": "username",
    "roles": [
      {
        "role": "readWrite",
        "database": "mydb"
      }
    ]
  }
}

Modifying a single user

Changing the password

To update the password of a specific user in a MongoDB cluster, simply issue a PATCH request as follows:

curl --include \
    --request PATCH \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    --data-binary '{
      "properties": {
        "password": "new super secure password",
      }
    }' \
    https://api.ionos.com/databases/mongodb/clusters/498ae72f-411f-11eb-9d07-046c59cc737e/users/username

Changing the roles

To update the assigned roles of a specific user in a MongoDB cluster, simply issue a PATCH request with the new list of assigned roles. Note that the request replaces the old role list, meaning that any existing roles missing from the patch will be deleted.

curl --include \
    --request PATCH \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    --data-binary '{
      "properties": {
        "roles": [
          {"database": "mydb", "role": "read"}
        ]
      }
    }' \
    https://api.ionos.com/databases/mongodb/clusters/498ae72f-411f-11eb-9d07-046c59cc737e/users/username
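Because the PATCH body replaces the entire role list, a client that only wants to add a role should merge the user's current roles with the new one before sending the request. A minimal sketch (merge_roles is an illustrative helper, not part of the API):

```python
def merge_roles(current, new):
    """Return the current roles plus the new ones, de-duplicated by (database, role)."""
    seen = {(r["database"], r["role"]) for r in current}
    merged = list(current)
    for r in new:
        key = (r["database"], r["role"])
        if key not in seen:
            merged.append(r)
            seen.add(key)
    return merged

# Roles as returned by a GET on the user, plus the role to add:
current = [{"database": "mydb", "role": "readWrite"}]
payload = {"properties": {"roles": merge_roles(current, [{"database": "mydb", "role": "read"}])}}
assert len(payload["properties"]["roles"]) == 2  # send this as the PATCH body
```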

Set Up Storage

Storage space is added to your virtual machines by using storage elements in your VDC. Storage name, availability zone, size, OS image, and boot options are configurable for each element.

How to add Storage to a Server or a Cube

  1. Drag a storage element (HDD or SSD) from the Palette onto a Server or a Cube in the Workspace to connect them together. The highlighted VM will expand with a storage section.

  2. Click the Unnamed HDD Storage to highlight the storage section. You can now see new options in the Inspector on the right.

A Cube comes with an SSD, but you may also add an HDD to the setup. Drag and drop the storage element directly onto the Cube.

Storage type cannot be changed after provisioning.

How to Configure Storage

  1. Enter a name that is unique within your VDC.

  2. Select a zone in which you want the storage device to be maintained. When you select A (Auto), our system assigns the optimal Zone. The Availability Zone cannot be changed after provisioning.

  3. Specify the required storage capacity. The size can be increased after provisioning, even while the server is running, as long as this is supported by its operating system. It is not possible to reduce the storage size after provisioning.

You can select one of the IONOS images or snapshots, or use your own. Only images and snapshots that you have access to are available for selection. Since provisioning does not require you to specify an image, you can also create empty storage volumes.

Authentication

  1. Set the root or administrator password for your server according to the guidelines. This is recommended for both operating system types.

  2. Select an SSH key stored in the SSH Key Manager.

  3. Copy and paste the public part of your SSH key into this field.

  4. Select the storage volume from which the server is to boot by clicking on BOOT or Make Boot Device.

Alternative Mode

  • When adding a storage element using the Inspector, select the appropriate check box in the Add Storage dialog box. If you wish to boot from the network, set this on the server: Server in the Workspace > Inspector > Storage.

  • It is recommended to always use VirtIO to benefit from the full performance of InfiniBand. IDE is intended for troubleshooting if, for instance, the operating system has no VirtIO drivers installed. In this case, Windows usually displays a "blue screen" when booting.

  • After provisioning, the Live Vertical Scaling properties of the selected image are displayed. You can make changes to these properties later, which will require a reboot. You can set the properties of your uploaded images before you apply them to storage volumes in the Image Manager.

  • (Optional) Add and configure further storage elements.

  • (Optional) Make further changes to your data center.

  • Provision your changes. The storage device is now provisioned and configured according to your settings.

How to add a CD-ROM drive

To assign an image and specify a boot device, you need to add and configure a storage element.

  • Click on CD-ROM to add a CD-ROM drive so that you can use ISO images to install and configure an operating system from scratch.

  • Set up a network by connecting the server to other elements, such as an internet access element or other servers through their NICs.

  • Provision your changes.

The server is available according to your settings.

How to Delete Storage

When you no longer need storage devices, snapshots, or images, you should remove them from your cloud infrastructure to avoid unnecessary costs. For backup purposes, you can create a snapshot of a storage device before deleting it.

  1. In the Workspace, select the storage device you wish to delete.

  2. Open the context menu of the element and select Delete.

  3. Alternatively, select the element and press the DEL key.

  4. Provision your changes. The storage device is deleted and will no longer be available.

  • If you delete a server and its storage devices, or the entire data center, their backups are not deleted automatically. Only when you delete a Backup Unit will the backups it contains actually be deleted.

  • If you no longer need the backups of deleted VMs, you should delete them manually in the Backup Unit Manager to avoid unnecessary costs.

Installing Windows VirtIO Drivers

VirtIO provides an efficient abstraction for hypervisors and a common set of IO virtualization drivers. It was chosen to be the main platform for IO virtualization in KVM. There are four drivers available:

  • Balloon - The balloon driver affects the memory management of the guest OS.

  • VIOSERIAL - The serial driver addresses the single serial device limitation within KVM.

  • NetKVM - The network driver affects Ethernet network adapters.

  • VIOSTOR - The block driver affects SCSI-based controllers.

Windows-based systems require VirtIO drivers primarily to recognize the VirtIO (SCSI) controller and network adapter presented by the IONOS KVM-based hypervisor. This can be accomplished in a variety of ways depending on the state of the virtual machine.

IONOS provides pre-configured Windows Server images that already contain the required VirtIO drivers and the optimal network adapter configuration. We also offer a VirtIO ISO that simplifies the driver installation process for Windows 2008 R2, Windows 2012, and Windows 2012 R2 systems. This ISO can be found in the CD-ROM drop-down menu under IONOS Images. It can be used for new Windows installations (only required for customer-provided images), as well as for Windows images that have been migrated from other environments (e.g., via VMDK upload).

Always use the latest Windows VirtIO driver from IONOS.

How to install Windows VirtIO drivers

  1. Add a CD-ROM drive and open the installation menu:

  • In the Workspace, select the required server.

  • In the Inspector, open the Storage.

  • Click on CD-ROM to add a CD-ROM drive.

  • In the dialog box, choose an IONOS image with drivers (windows-VirtIO-driver-<version>.iso) and select the Boot from Device check box.

  • Confirm the action by clicking the Create CD-ROM Drive.

  • Provision your changes.

  • Connect to the server using Remote Console. The installation menu opens.

  • Follow the options provided by the installation menu.

  • Remove the CD-ROM drive as soon as the menu asks you to do so, and shut down the VM.

  • In DCD, specify from which storage to boot.

  • Restart the server using the DCD.

  • Provision your changes.

  • Connect to the server again using the Remote Console to make further changes.

2. Set optimal values: For an optimal configuration, apply the following settings:

  • MTU:

    • Internal network interface: 1500 MTU

    • External network interface: 1500 MTU

  • Offloading for Receive (RX) and Transmit (TX):

  • Offload Tx IP checksum: Enabled

  • Offload Tx LSO: Enabled

  • Offload Tx TCP checksum: Enabled

  • Fix IP checksum on LSO: Enabled

  • Hardware checksum: Enabled

3. Disable TCP Offloading/Chimney:

  • Default:

    netsh int tcp set global chimney=disabled

  • Everything:

netsh int tcp set global rss=disabled
netsh int tcp set global chimney=disabled
netsh int tcp set global congestionprovider=none
netsh int tcp set global netdma=disabled
netsh int tcp set global dca=disabled
netsh int tcp set global ecncapability=disabled
netsh int tcp set global timestamps=enabled
  • Alternatively, modify the Windows registry:

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"EnableTCPA"=dword:00000000
"EnableRSS"=dword:00000000
"EnableTCPChimney"=dword:00000000

The installation will be active after a restart. The following command can be used to verify the status of the configuration above.

netsh interface tcp show global

4. Set correct values for any network adapter automatically: You can apply the correct settings for any network adapter automatically by executing the following commands in PowerShell:

  • Request network adapter information by running Get-NetAdapter. The output lists your network adapters.

  • In the Name field, use the output value instead of "Ethernet".

  • Create a new file using PowerShell ISE (File > New).

  • Copy and paste the following code and make sure to change $name ="Ethernet" properly:

Clear-Host
$name ="Ethernet"
Set-NetAdapterAdvancedProperty -name $name -RegistryKeyword "MTU" -Registryvalue 1500
Set-NetAdapterAdvancedProperty -name $name -RegistryKeyword "*rss" -Registryvalue 0
Set-NetAdapterAdvancedProperty -name $name -RegistryKeyword "*TCPChecksumOffloadIPv4" -Registryvalue 0
Set-NetAdapterAdvancedProperty -name $name -RegistryKeyword "*UDPChecksumOffloadIPv4" -Registryvalue 0
netsh interface tcp set global chimney=disabled
netsh interface tcp set global autotuninglevel=normal
netsh interface tcp set global netdma=disabled
netsh interface tcp set global dca=disabled
netsh interface tcp set global ecncapability=disabled
netsh interface tcp set global timestamps=enabled
Get-NetAdapterAdvancedProperty
netsh int tcp show global
  • Click File > Run (F5).

  • Check the settings.

  • Restart the VM. The correct settings are applied automatically.

5. Activate TCP/IP auto-tuning:

TCP/IP auto-tuning ensures optimal data transfer between client and server by monitoring network traffic and automatically adjusting the "Receive Window Size". You should always activate this option to ensure the best performance.

Activate:

netsh interface tcp set global autotuninglevel=normal

Check:

netsh interface tcp show global

Set Up a Database Cluster

Preparing the network

To set up a database inside an existing datacenter, you should have at least one server in a private LAN.

You need to choose an IP address under which the database leader should be made available.

There is currently no IP address management for databases. If you use your own subnet, you may use any IP address in that subnet. If you rely on DHCP for your servers, you must pick an IP address from the subnet that IONOS assigned to you.

To find the subnet, you can look at the NIC configuration. To prevent a collision with the DHCP IP range, pick an address between x.x.x.3 and x.x.x.10 in your /24 subnet (these addresses are never assigned by DHCP).
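The recommended range can be enumerated with Python's ipaddress module, for example (the subnet below is a placeholder for the one shown in your NIC configuration):

```python
import ipaddress

# Subnet as shown in the NIC configuration (placeholder value).
subnet = ipaddress.ip_network("192.0.2.0/24")

# Addresses .3 through .10 are documented as never assigned by DHCP.
candidates = [subnet.network_address + i for i in range(3, 11)]

assert str(candidates[0]) == "192.0.2.3"
assert str(candidates[-1]) == "192.0.2.10"
```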

Caution: Deleting a LAN with an attached database is forbidden. A special label, deleteprotected, will be attached to the LAN. If you want to delete the LAN, either attach the database to a different LAN (via a PATCH request to update the database) or delete the database.

Alternatively, you can detach the database from the LAN and then delete the LAN. The database will remain disconnected.

Resource considerations

CPU, RAM, storage, and the number of database clusters are counted against quotas. Contact IONOS support to determine your RAM requirements.

Database performance depends on the storage type. Choose the type that is suitable for your workload.

The WAL (write-ahead log) files are stored alongside the database. The number of WAL files can grow and shrink depending on your workload. Plan your storage size accordingly.

Database backups

All database clusters are backed up automatically. You can choose the location where cluster backups are stored by providing the backupLocation parameter as part of the cluster properties during cluster creation. If no backup location is provided, it defaults to the closest available location to your cluster's location. As of now, the backup location cannot be changed after creation.

Note: Having the backup in the same location as your database increases the risk of data loss in case an entire location experiences a disaster. On the other hand, choosing a remote location may impact performance during node recreation.

Creating the cluster

This request will create a database cluster with two instances of PostgreSQL version 15.

Note: Only contract admins, owners, and users with "Access and manage DBaaS" privilege are allowed to create and manage databases. When a database is created it can be accessed in the specified LAN by using username and password specified during creation.

Note: This is the only opportunity to set the username and password via the API. The API does not provide a way to change the credentials yet. However, you can change them later by using raw SQL.
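For example, once connected, the initial user's password could be changed with a statement like the following (role name and password are placeholders):

```sql
-- Change the password of the initial database user.
ALTER ROLE <username> WITH PASSWORD '<new-password>';
```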

The datacenter must be provided as a UUID. The easiest way to retrieve the UUID is through the DCD or the Cloud API.

Note: The sample UUID is 498ae72f-411f-11eb-9d07-046c59cc737e

Request
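A sketch of what the create request might look like; the property names and values below are assumptions based on the IONOS DBaaS for PostgreSQL API and should be verified against the API reference:

```shell
curl --include \
    --request POST \
    --user '<user:password>' \
    --header "Content-Type: application/json" \
    --data-binary '{
      "metadata": {},
      "properties": {
        "postgresVersion": "15",
        "instances": 2,
        "cores": 2,
        "ram": 2048,
        "storageSize": 20480,
        "storageType": "SSD",
        "location": "de/txl",
        "displayName": "my-cluster",
        "credentials": {
          "username": "<username>",
          "password": "<password>"
        },
        "connections": [
          {
            "datacenterId": "498ae72f-411f-11eb-9d07-046c59cc737e",
            "lanId": "1",
            "cidr": "<ip>/24"
          }
        ]
      }
    }' \
    https://api.ionos.com/databases/postgresql/clusters
```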

Response

Your values will differ from those in the sample code. Your response will have different IDs, timestamps etc.

At this point, you have created your first PostgreSQL cluster. The deployment of the database will take 20 to 30 minutes. You can check if the request was correctly executed.

Note that the state will initially show as BUSY.

Querying database status

Note: The sample UUID is 498ae72f-411f-11eb-9d07-046c59cc737e

You may have noticed that the state is BUSY and that the database is not yet reachable. This is because the cloud will create a completely new cluster and needs to provision new nodes for all the requested replicas. This process runs asynchronously in the background and might take up to 30 minutes.

The notification mechanism is not available yet. However, you can poll the API to see when the state switches to AVAILABLE.
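Polling can be sketched as a small loop that re-fetches the cluster state until it changes; the fetch_state callable below is a stand-in for the GET request shown next:

```python
import time

def wait_until_available(fetch_state, timeout=1800, interval=30):
    """Poll fetch_state() until it returns 'AVAILABLE' or the timeout (seconds) expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        # fetch_state() stands in for a GET on the cluster and reading its state.
        if fetch_state() == "AVAILABLE":
            return True
        time.sleep(interval)
    return False

# Stubbed fetch function that reports AVAILABLE on the third call:
states = iter(["BUSY", "BUSY", "AVAILABLE"])
assert wait_until_available(lambda: next(states), timeout=5, interval=0) is True
```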

Request

To query a single cluster, you will require the id from your "create" response.

If you don't know your PostgreSQL cluster ID, you can also list all clusters and look for the one for which to query the status.
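The status queries might look like the following; the endpoint path follows the pattern of the MongoDB examples in this document and uses the sample UUID, so verify it against the API reference:

```shell
# Fetch a single cluster by ID (from the "create" response):
curl --include \
    --user '<user:password>' \
    https://api.ionos.com/databases/postgresql/clusters/498ae72f-411f-11eb-9d07-046c59cc737e

# Or list all clusters and look for yours:
curl --include \
    --user '<user:password>' \
    https://api.ionos.com/databases/postgresql/clusters
```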

Response

Note: You cannot configure the port. Your cluster runs on the default port 5432.

Connect to cluster

Now that everything is set up and successfully created, you can connect to your PostgreSQL cluster. Initially, the cluster contains only one database, called postgres, to which you can connect. For example, using psql and the credentials that you set in the POST request above, you can connect as follows:

Alternatively, you can also use the DNS Name returned in the response instead of the IP address. This record will also be updated when you change the IP address in the future:
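The connection commands might look like the following (host, DNS name, and username are placeholders for the values from your cluster):

```shell
# Connect using the cluster IP address:
psql -h <cluster-ip> -U <username> -d postgres

# Or connect using the DNS name returned in the response:
psql -h <dns-name> -U <username> -d postgres
```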

For security reasons, and to prevent conflicts with our management tooling, the initial user is not a superuser. It only has the CREATEDB and CREATEROLE permissions, but not SUPERUSER, REPLICATION, or BYPASSRLS (row-level security).

The following roles are available to grant: cron_admin, pg_monitor, pg_read_all_stats, and pg_stat_scan_tables; see the PostgreSQL list of predefined roles.

Initial setup of databases, users, tables, etc.

Creating additional users, roles, databases, schemas, and other objects must be done by you from inside SQL. Since this depends heavily on your architecture, here are just some pointers:

Database

The PUBLIC role is a special role, in the sense that all database users inherit these permissions. This is also important if you want to have a user without write permissions, since by default PUBLIC is allowed to write to the public schema.

The official PostgreSQL documentation provides a detailed walkthrough.

Roles

If you want multiple users with the same permissions, you can group them in a role and GRANT the role to the users later.

For improved security, you should grant only the required permissions. If, for example, you want to grant permissions on a specific table, you also need to grant permissions on the containing schema:
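For example (role, user, schema, and table names are placeholders):

```sql
-- Group role without login; member users inherit its permissions.
CREATE ROLE reporting;
GRANT reporting TO alice, bob;

-- Granting access to a table also requires USAGE on its schema.
GRANT USAGE ON SCHEMA public TO reporting;
GRANT SELECT ON TABLE public.mytable TO reporting;
```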

To set the default privileges for new objects in the future, see ALTER DEFAULT PRIVILEGES in the PostgreSQL documentation.

Users

Users are basically just roles with the LOGIN permission, so everything from above also applies.

Also see the PostgreSQL documentation on database roles.

Congratulations: You now have a ready to use PostgreSQL cluster!

Kubernetes FAQ

Get answers to the most frequently asked questions about Kubernetes in IONOS DCD.

What is the function of Managed Kubernetes?

Managed Kubernetes facilitates the fully automated setup of Kubernetes clusters. It also simplifies and supports the automation of CI/CD pipelines for testing and deployment.

What does Kubernetes offer for providing transparency and control?

Our managed solution offers automatic updates and security fixes, versioning and upgrade provisioning, a highly available and geo-redundant control plane, and full cluster-admin-level access to the Kubernetes API.

How does the Kubernetes Manager work?

Everything related to Managed Kubernetes can be controlled in the DCD via the dedicated Kubernetes Manager. The manager provides a complete overview of your provisioned Kubernetes clusters and node pools including their status. The Manager allows you to create and manage clusters, create and manage node pools, and download the Kubeconfig file.

See also: The Kubernetes Manager.

What is the control plane for?

The control plane manages the worker nodes and the pods in the cluster. In production environments, the control plane usually runs across multiple computers. A cluster usually runs multiple nodes, providing fault tolerance and high availability.

How does a kube-controller-manager work?

Kube-controller-manager runs the controllers that provide functionality such as deployments, services, etc.

See the kube-controller-manager documentation.

What is the function of a kube-apiserver?

The API server is a component of the Kubernetes control plane that exposes the Kubernetes API. The API server is the front end for the Kubernetes control plane. Kube-apiserver is designed to scale horizontally. It scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between those instances.

See the kube-apiserver documentation.

What is the function of a kube-scheduler?

Kube-scheduler distributes pods to nodes. Pods must be created with resource limits so that nodes are not overcommitted.

See the kube-scheduler documentation.

What is CSI?

The CSI (Container Storage Interface) driver runs as a deployment in the control plane to manage volumes for PVCs (Persistent Volume Claims) in the IONOS Cloud and to attach them to nodes.

How to provision NFS volumes?

The "soft" mount option is required when creating a PersistentVolume with an NFS source in Kubernetes. It can be set either in the mount options list of the PersistentVolume specification (spec.mountOptions) or via the annotation key volume.beta.kubernetes.io/mount-options, whose value is expected to contain a comma-separated list of mount options. If neither contains the "soft" mount option, the creation of the PersistentVolume will fail.

Note that the use of the annotation is still supported but will be deprecated in the future. See also: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#mount-options

Mount options in the PersistentVolume specification can also be set using the StorageClass.

Example for PV spec:

Example for annotation:

Example for using StorageClass:
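The rule described above — the mount options must include "soft", whether given via spec.mountOptions or the comma-separated annotation value — can be sketched as a small validation helper. This is illustrative only; the function below is ours and not part of any Kubernetes client library:

```python
# Illustrative check for the "soft" NFS mount option rule described above.
# A PersistentVolume manifest passes if "soft" appears either in
# spec.mountOptions or in the volume.beta.kubernetes.io/mount-options
# annotation (a comma-separated list).

ANNOTATION_KEY = "volume.beta.kubernetes.io/mount-options"

def has_soft_mount_option(pv_manifest: dict) -> bool:
    spec_opts = pv_manifest.get("spec", {}).get("mountOptions") or []
    annotation = (
        pv_manifest.get("metadata", {})
        .get("annotations", {})
        .get(ANNOTATION_KEY, "")
    )
    annotation_opts = [o.strip() for o in annotation.split(",") if o.strip()]
    return "soft" in spec_opts or "soft" in annotation_opts
```

This mirrors the failure condition above: if neither source contains "soft", the PersistentVolume creation is rejected.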

What is a cluster autoscaler?

A cluster autoscaler is a tool that automatically adjusts the size of the Kubernetes cluster when one of the following conditions is true:

  • there are pods that failed to run in the cluster due to insufficient resources;

  • there are nodes in the cluster that have been underutilized for an extended period of time and their pods can be placed on other existing nodes.

When does the cluster autoscaler increase and reduce the node pool?

The cluster autoscaler increases a node pool if pods cannot be scheduled due to a lack of resources, and adding a node from that pool would remedy the situation. If no node pool would provide enough resources to schedule the pod, the autoscaler does not scale up. The cluster autoscaler reduces a node pool if a node is not fully utilized for an extended period of time. A node is underutilized when it has a light load and all of its pods can be moved to other nodes.

Is it possible to mix node pools with and without active autoscaling within a cluster?

Yes; however, only node pools with active autoscaling are managed by the autoscaler.

Can the autoscaler enlarge/reduce node pools as required?

No. The autoscaler cannot increase the number of nodes in a node pool above the maximum specified by the user, or decrease it below the specified minimum. In addition, the quota for a specific contract cannot be exceeded by the autoscaler. The autoscaler also cannot reduce the number of nodes in a node pool to 0.
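The bounds described above can be illustrated with a tiny clamp function. This is an illustrative sketch, not the actual autoscaler logic; the real autoscaler additionally respects the contract quota, which is omitted here:

```python
def clamp_node_count(desired: int, min_nodes: int, max_nodes: int) -> int:
    """Clamp a desired node count to the user-configured pool bounds:
    the autoscaler never exceeds the maximum, never goes below the
    minimum, and never scales a pool to 0 nodes."""
    # the effective lower bound is at least 1 node (no scale-to-zero)
    lower = max(min_nodes, 1)
    return max(lower, min(desired, max_nodes))
```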

Is it possible to enable and configure encryption of secret data?

Yes, it is possible.

See Encrypting Secret Data at Rest.

Are values and components of the cluster changed during maintenance?

All components installed in the cluster are updated. This includes the Kubernetes control plane itself, CSI, CCM, Calico, and CoreDNS. During cluster maintenance, several components that are visible to customers are updated and reset to our values. For example, changes to CoreDNS are not permanent and will be removed at the next maintenance. It is currently not possible to set your own DNS records in the CoreDNS configuration, but this will become possible later. Managed components that are regularly updated:

  • coredns

  • csi-ionoscloud (DaemonSet)

  • calico (typha)

  • metrics-server

  • ionos-policy-validator

  • snapshot-validation-webhook

Is there a limit on the node pool?

The maintenance time window is limited to four hours. If not all nodes can be rebuilt within this time, the remaining nodes will be replaced at the next maintenance. To avoid late updates, it is recommended to create node pools with no more than 20 nodes.

How to preset IP addresses on new nodes?

If old nodes are replaced with new ones during maintenance, the new nodes will have different (new) public IP addresses. You can pre-specify a list of public IP addresses from which the addresses for new nodes are taken. This way, the list of possible host addresses is limited and predictable (for example, so that you can allow them through a whitelist).

Nodes in private and public clusters

Managed Kubernetes nodes usually have a public IP address, so they can be accessed from the Internet. This is not the case in a private cluster: all nodes are "hidden" behind a NAT gateway, and although they can open Internet connections themselves, they cannot be reached directly from the outside. Private clusters have various limitations: they can only consist of one node pool and are therefore also limited to one region. In addition, Internet bandwidth is limited by the maximum bandwidth of the NAT gateway (typically 700 Mbps). With private clusters, you can determine the external public IP address of the NAT gateway using CRIP. Outbound traffic will then use this IP address as its source IP address.

Can a cluster and respective node pools have different Kubernetes versions?

The Kubernetes control plane and the corresponding node pools can run different versions of Kubernetes. Node pools can use older versions than the control plane, but not vice versa, and the difference between the minor versions must not be more than 1. There is a distinction between patch version updates and minor version updates. All version updates must be initiated by the customer; once initiated, version updates are performed immediately. However, forced updates will occur if the version used by the customer is so old that we can no longer support it. Typically, affected customers receive a support notification about two weeks prior to a forced update.
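The version-skew rule above can be sketched as a small check. This is illustrative only: version strings are assumed to be "major.minor[.patch]", and treating a major-version mismatch as unsupported is our assumption, not a documented guarantee:

```python
def node_pool_version_allowed(control_plane: str, node_pool: str) -> bool:
    """Check the version-skew rule: a node pool may run the same or an
    older version than the control plane, and the minor versions may
    differ by at most 1."""
    cp_major, cp_minor = (int(x) for x in control_plane.split(".")[:2])
    np_major, np_minor = (int(x) for x in node_pool.split(".")[:2])
    if np_major != cp_major:
        return False  # major-version gap treated as unsupported (assumption)
    # node pool must be the same version or at most one minor version older
    return 0 <= cp_minor - np_minor <= 1
```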

Is there traffic protection between nodes and the control plane?

The Kubernetes API is secured with TLS. Traffic between the nodes and the control plane is secured by mutual TLS, which means that both sides check whether they are talking to the expected remote station.

Why is the status "Failed" displayed for clusters or node pools?

If clusters or node pools are created or modified, the operation may fail, and the cluster or node pool goes into a FAILED state. In this case, our staff are already informed through monitoring, but sometimes it may be difficult or impossible for them to correct the error, since the cause may conflict with the client's configuration: for example, a LAN is specified that does not exist (or no longer exists), or a quota violation makes a service update impossible. If a node is NotReady, the reason is usually a lack of RAM. If a node runs out of RAM, it enters a loop in which it tries to free memory, which forces executables to be reloaded from disk; the node then makes no progress and is busy only with disk IO. We recommend setting Resource Requests and Limits to prevent such scenarios.

Can I publish my own CAs (Certificate Authorities) to the cluster?

Currently, customers cannot publish their own CAs in the Kubernetes cluster or use their own TLS certificates.

Is geo-redundancy implemented in Kubernetes?

You can provision node pools in multiple locations within the same cluster. This allows simple geo-redundancy to be configured and implemented. The control plane is geo-redundant (within Germany): several replicas run in different locations.

How to access unresponsive nodes via the API?

If a node is unavailable, for example, because there are too many pods running on it without resource limits, it can simply be replaced. To do this, you can use the following API endpoint: POST /k8s/{k8sClusterId}/nodepools/{nodePoolId}/nodes/{nodeId}/replace
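As a sketch, the replacement endpoint above can be assembled like this. The base path assumes the Cloud API v6, the IDs are placeholders, and the actual request requires valid IONOS credentials, so it is left commented out:

```python
# Sketch: build the node-replacement URL described above.
API_BASE = "https://api.ionos.com/cloudapi/v6"

def replace_node_url(cluster_id: str, pool_id: str, node_id: str) -> str:
    return (f"{API_BASE}/k8s/{cluster_id}"
            f"/nodepools/{pool_id}/nodes/{node_id}/replace")

# Sending it would look roughly like this (requires credentials):
# import requests
# requests.post(replace_node_url("<cluster>", "<pool>", "<node>"),
#               auth=("user@example.com", "password"))
```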

curl --include \
    --request POST \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    --data-binary '{
      "metadata": {},
      "properties":
        {
        "postgresVersion": "15",
        "instances": 2,
        "cores": 4,
        "ram": 2048,
        "location": "DE/FRA",
        "storageSize": 20000,
        "storageType": "HDD",
        "displayName": "a good name for a database",
        "synchronizationMode": "ASYNCHRONOUS",
        "credentials": {
          "username": "dsertionos",
          "password": "knight-errant"
        },
        "connections": [
          {
            "datacenterId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
            "lanId": "x",
            "cidr": "x.x.x.x/24"
          }
        ]
      }
    }' \
    https://api.ionos.com/databases/postgresql/clusters
{
  "type": "cluster",
  "id": "498ae72f-411f-11eb-9d07-046c59cc737e",
  "metadata": {
    "state": "BUSY",
    "createdDate": "2020-12-10T12:37:50.000Z",
    "createdBy": "[email protected]",
    "createdByUserId": "012342f-411f-1eeb-9d07-046c59cc737e"
  },
  "properties": {
    "displayName": "a good name for a database",
    "dnsName": "pg-3euh45am6idkppu3.postgresql.de-fra.ionos.com",
    "location": "DE/FRA",
    "connections": [
      {
        "datacenterId": "3",
        "lanId": "28",
        "cidr": "192.168.1.100/24"
      }
    ],
    "maintenanceWindow": {
      "time": "15:39:01",
      "dayOfTheWeek": "Friday"
    },
    "cores": 4,
    "ram": 2048,
    "instances": 2,
    "storageSize": 20000,
    "storageType": "HDD",
    "synchronization_mode": "ASYNCHRONOUS"
  }
}
curl --include \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    https://api.ionos.com/databases/postgresql/clusters/498ae72f-411f-11eb-9d07-046c59cc737e
curl --include \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    https://api.ionos.com/databases/postgresql/clusters
{
  "type": "collection",
  "id": "clusters",
  "items": [
    {
      "type": "cluster",
      "id": "498ae72f-411f-11eb-9d07-046c59cc737e",
      "metadata": {
        "state": "AVAILABLE",
        "createdDate": "2020-12-10T12:37:50.000Z",
        "createdBy": "[email protected]",
        "createdByUserId": "012342f-411f-1eeb-9d07-046c59cc737e"
      },
      "properties": {
        "displayName": "a good name for a database",
        "dnsName": "pg-3euh45am6idkppu3.postgresql.de-fra.ionos.com",
        "location": "DE/FRA",
      ...
      }
    }
  ],
  "offset": 0,
  "limit": 1,
  "links": {}
}
psql -h 192.168.1.100 -d postgres -U dsertionos
psql -h pg-3euh45am6idkppu3.postgresql.de-fra.ionos.com -d postgres -U dsertionos
CREATE DATABASE example;
REVOKE ALL ON DATABASE example FROM PUBLIC;
CREATE ROLE example_role;
GRANT CONNECT ON DATABASE example TO example_role;
-- to grant all permissions in that database:
GRANT ALL ON DATABASE example TO example_role;
GRANT USAGE ON SCHEMA example TO example_role;
GRANT SELECT ON TABLE example TO example_role;
CREATE USER example_user WITH PASSWORD 'some_secret_passwd';
GRANT example_role TO example_user;
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-share
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs
  mountOptions:
    - soft
  nfs:
    path: /tmp
    server: 172.17.0.2
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-share
  annotations:
    volume.beta.kubernetes.io/mount-options: "soft"
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs
  nfs:
    path: /tmp
    server: 172.17.0.2
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-on-ionos
mountOptions:
  - soft
provisioner: my.nfs.provisioner
persistentVolumeReclaimPolicy: Delete

Create a Cluster

This How-To shows you how to create a MongoDB cluster using the IONOS API.

Prerequisites

Before creating a cluster, you already need to have:

  • a data center

  • a private LAN in that data center

  • at least one client server connected to the private LAN. In order to connect to your cluster, clients must be able to resolve public DNS entries.

Retrieve the data center UUID

You need to provide the UUID of the data center. The easiest way to retrieve the UUID is through the Cloud API:

Request

curl --include \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    "https://api.ionos.com/cloudapi/v6/datacenters?depth=1"

Response

{
  "id" : "datacenters",
  "type" : "collection",
  "href" : "https://api.ionos.com/cloudapi/v6/datacenters",
  "items" : [ {
    "id" : "b1432c51-c20a-4f83-876f-c3a8a9e1fbec",
    "type" : "datacenter",
    "href" : "https://api.ionos.com/cloudapi/v6/datacenters/b1432c51-c20a-4f83-876f-c3a8a9e1fbec",
    "metadata" : {
      ...
    },
    "properties" : {
      "name" : "example-datacenter",
      "description" : "",
      "location" : "de/txl",
      ...
    },
    "entities" : {
      ...
    }
  }, {
    "id" : "ad6eb12c-f297-4b91-bf65-7e1af7aebd8d",
    "type" : "datacenter",
    ...
  } ]
}

In the following examples, we assume that you want to use the data center UUID b1432c51-c20a-4f83-876f-c3a8a9e1fbec.

Preparing the network

Currently, there is no automatic IP address management for database clusters. To create the MongoDB cluster, you need to specify as many free IP addresses as there are instances in your cluster. Currently, the only supported subnet size is /24.

You need to make sure that the IP addresses are not used by other servers in your LAN:

  • If you rely on IONOS-provided DHCP, you must pick IP addresses of the subnet that is assigned to you by IONOS. To find the subnet, you can look at the network interface (NIC) configuration. To prevent a collision with the DHCP IP range, pick an IP between x.x.x.3/24 and x.x.x.10/24. IP addresses in that range are never assigned by the IONOS DHCP.

  • If you run your own DHCP, select IP addresses that cannot be assigned by your DHCP server.

  • If you assign IP addresses manually, you may use any IP addresses in that subnet, but you need to make sure yourself that they are not used anywhere else.

For example, assume your private LAN uses the IP range 10.1.1.0/24 and you want a cluster with three instances. Then you could provide the IP addresses 10.1.1.3/24, 10.1.1.4/24, and 10.1.1.5/24. Please note that the prefix length (/24) is still the same as for the whole subnet. We only use the single IPs that you specify, but we need the prefix length to configure routing correctly.
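The address selection above can be sketched with the standard ipaddress module. This helper (the name is ours, purely for illustration) picks addresses from the DHCP-safe range x.x.x.3 to x.x.x.10 and keeps the /24 prefix length:

```python
import ipaddress

def pick_cluster_ips(lan_cidr: str, instances: int) -> list:
    """Pick host addresses from the DHCP-safe range x.x.x.3 - x.x.x.10
    of a /24 LAN, keeping the /24 prefix length as required."""
    net = ipaddress.ip_network(lan_cidr, strict=False)
    if net.prefixlen != 24:
        raise ValueError("only /24 subnets are currently supported")
    base = int(net.network_address)
    # .3 through .10 are never assigned by the IONOS DHCP
    safe = [ipaddress.ip_address(base + i) for i in range(3, 11)]
    if instances > len(safe):
        raise ValueError("not enough DHCP-safe addresses")
    return [f"{ip}/24" for ip in safe[:instances]]
```

This applies only when relying on IONOS-provided DHCP; with your own DHCP or manual assignment, any free addresses in the subnet work.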

Caution: You cannot delete a LAN while a database cluster is still attached. If you want to delete the LAN, please delete the database cluster first.

Sizing considerations

To list the available sizing templates using the API:

Request

curl --include \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    https://api.ionos.com/databases/mongodb/templates

Response

{
  "id": "templates",
  "type": "collection",
  "items": [
    {
      "id": "15c6dd2f-02d2-4987-b439-9a58dd59ecc3",
      "cores": 1,
      "ram": 1024,
      "storageSize": 30
    },
    {
      "id": "56ce4e71-b03a-42b2-85be-9a4520aa40be",
      ...
    },
    ...
  ],
  "offset": 0,
  "limit": 8,
  "_links": {}
}

Choose the sizing that fits your use case and specify it on cluster creation. For more information about how to calculate resource usage, please refer to our documentation on sizing.

Currently, changing the instance size of single-instance clusters is not supported. You would need to create a new cluster for this.

For the following examples, we assume that you want to create a small test cluster and therefore chose the template 15c6dd2f-02d2-4987-b439-9a58dd59ecc3.
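Choosing a template from the response above can be sketched as a small helper. This is illustrative only (the function is ours, not part of any SDK); it picks the smallest template, by RAM, that satisfies the requirements:

```python
def choose_template(templates: list, min_cores: int, min_ram_mb: int) -> dict:
    """Pick the smallest fitting sizing template from a list of items
    as returned by GET /databases/mongodb/templates (each item has
    "id", "cores", "ram" in MB, and "storageSize")."""
    fitting = [t for t in templates
               if t["cores"] >= min_cores and t["ram"] >= min_ram_mb]
    if not fitting:
        raise ValueError("no template fits the requested sizing")
    return min(fitting, key=lambda t: (t["ram"], t["cores"]))
```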

Creating the cluster

This request will create a database cluster with three instances of MongoDB version 5.0.

Note: Only contract admins and owners can create and change database clusters. In contrast, the running database cluster can be accessed from all servers in the specified LAN.

Request

curl --include \
    --request POST \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    --data-binary '{
      "metadata": {},
      "properties":
        {
        "mongoDBVersion": "5.0",
        "instances": 3,
        "templateID": "15c6dd2f-02d2-4987-b439-9a58dd59ecc3",
        "location": "DE/FRA",
        "displayName": "a good name for a database",
        "connections": [
          {
            "datacenterId": "b1432c51-c20a-4f83-876f-c3a8a9e1fbec",
            "lanId": "28",
            "cidrList": ["10.1.1.3/24", "10.1.1.4/24", "10.1.1.5/24"]
          }
        ]
      }
    }' \
    https://api.ionos.com/databases/mongodb/clusters

Response

Note: We will use 498ae72f-411f-11eb-9d07-046c59cc737e as the cluster ID in the following examples.

{
  "type": "cluster",
  "id": "498ae72f-411f-11eb-9d07-046c59cc737e",
  "metadata": {
    "state": "BUSY",
    "health": "UNKNOWN",
    "createdDate": "2020-12-10T12:37:50.000Z",
    "createdBy": "[email protected]",
    "createdByUserId": "012342f-411f-1eeb-9d07-046c59cc737e"
  },
  "properties": {
    "displayName": "a good name for a database",
    "location": "DE/FRA",
    "connections": [
      {
        "datacenterId": "b1432c51-c20a-4f83-876f-c3a8a9e1fbec",
        "lanId": "28",
        "cidr": [
          "10.1.1.3/24",
          "10.1.1.4/24",
          "10.1.1.5/24"
        ]
      }
    ],
    "templateID": "15c6dd2f-02d2-4987-b439-9a58dd59ecc3",
    "instances": 3,
    "maintenanceWindow": {
      "time": "15:39:01",
      "dayOfTheWeek": "Friday"
    },
    "connectionString": "mongodb+srv://m-65f4a879f126e3c4.mongodb.de-fra.ionos.com"
  }
}

At this point, you have created your first MongoDB cluster. The deployment of the database takes several minutes. See the next section on how to determine when your cluster is ready.

Querying database status

You may have noticed that the state is BUSY and that the database cluster is not yet reachable. This is because the cloud creates a completely new cluster and needs to provision new instances. This process runs asynchronously in the background and might take several minutes.

You can poll the API to see when the state switches to AVAILABLE.
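The polling described above can be sketched as a loop. This is illustrative: `get_state` is a placeholder for any callable that fetches the cluster resource via the API and returns its metadata.state string, so the sketch itself makes no network calls:

```python
import time

def wait_until_available(get_state, timeout_s: int = 1800, interval_s: int = 30) -> str:
    """Poll the cluster state until it leaves BUSY, or time out.
    `get_state` is a caller-supplied function returning the current
    metadata.state string (e.g. by GETting the cluster resource)."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        state = get_state()
        if state != "BUSY":
            return state  # e.g. AVAILABLE once deployment has finished
        time.sleep(interval_s)
    raise TimeoutError("cluster did not become available in time")
```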

Request

To query a single cluster, you require the id from your "create" response.

curl --include \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    https://api.ionos.com/databases/mongodb/clusters/498ae72f-411f-11eb-9d07-046c59cc737e

If you don't know your MongoDB cluster ID, you can also list all clusters and look for the one for which to query the status.

curl --include \
    --user "[email protected]:Mb2.r5oHf-0t" \
    --header "Content-Type: application/json" \
    https://api.ionos.com/databases/mongodb/clusters

Response

{
  "type": "collection",
  "id": "clusters",
  "items": [
    {
      "type": "cluster",
      "id": "498ae72f-411f-11eb-9d07-046c59cc737e",
      "metadata": {
        "state": "AVAILABLE",
        "health": "HEALTHY",
        "createdDate": "2020-12-10T12:37:50.000Z",
        "createdBy": "[email protected]",
        "createdByUserId": "012342f-411f-1eeb-9d07-046c59cc737e"
      },
      "properties": {
        "displayName": "a good name for a database",
        "location": "DE/FRA",
        "connectionString": "mongodb+srv://m-65f4a879f126e3c4.mongodb.de-fra.ionos.com"
      ...
      }
    }
  ],
  "offset": 0,
  "limit": 1,
  "_links": {}
}

Connect to cluster

Now that the cluster is available, you can connect to your MongoDB cluster with the mongosh shell, using the connection string from the API response; you should then see the mongosh prompt:

$ mongosh "mongodb+srv://m-65f4a879f126e3c4.mongodb.de-fra.ionos.com"
Current Mongosh Log ID:	6304db914f263ebcb3ddaaf2
Connecting to:		mongodb+srv://m-65f4a879f126e3c4.mongodb.de-txl.ionos.com/?appName=mongosh+1.5.4
Using MongoDB:		5.0.10
Using Mongosh:		1.5.4

For mongosh info see: https://docs.mongodb.com/mongodb-shell/


To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy).
You can opt-out by running the disableTelemetry() command.

Enterprise 5b8acdcc-22c6-11ed-bc22-c65c3b0754bf [primary] test>

Authentication

Connecting to a MongoDB cluster doesn't require authentication. However, nearly all other operations with a cluster require authentication. For example, when you try to list all databases:

> show dbs
MongoServerError: command listDatabases requires authentication

You can't create or modify users from within MongoDB itself. The user management for IONOS MongoDB clusters happens completely via the IONOS API. You can read more about this in the documentation on User Management.

Let's assume you've created a user jdoeionos on database example with the role readWrite on example. Then you can authenticate like this:

... test> use example
switched to db example
... example> db.auth("jdoeionos", passwordPrompt())
{ ok: 1 }

and can afterwards write and read data:

... example> db.exampleCollection.insertOne( { x: 1 } );
{
  acknowledged: true,
  insertedId: ObjectId("6304deee5c8e10bbd2c099a9")
}
... example> show dbs
example  40.00 KiB
... example> db.exampleCollection.find()
[ { _id: ObjectId("6304deee5c8e10bbd2c099a9"), x: 1 } ]

Congratulations: you now have a ready-to-use MongoDB cluster!

API How-Tos

The Cloud API lets you manage Cloud Cubes resources programmatically using conventional HTTP requests. All the functionality available in the IONOS Cloud Data Center Designer is also available through the API.

You can use the API to create, destroy, and retrieve information about your Cubes. You can also use the API to suspend or resume your Cubes.

However, not all actions are shared between Virtual Servers and Cloud Cubes. Since Cubes come with direct-attached storage, a composite call is required for setup.

Furthermore, Templates must be used when provisioning Cubes. Templates do not apply to Virtual Servers, which still support fully flexible configuration.

APIs & SDKs

Cloud API outlines all required actions.

SDKs are available for GO, Python, Java, Ruby, and NodeJS.

Retrieving available Templates and Template details

Retrieve Template list

GET https://api.ionos.com/cloudapi/v6/templates

This method retrieves a list of configuration templates that are currently available. Instances have a fixed configuration of vCPU, RAM and direct-attached storage size.

Path Parameters

Name
Type
Description

v6

string

The API version

templates

string

Template attributes: ID, metadata, properties.

Retrieve Template details

GET https://api.ionos.com/cloudapi/v6/templates?depth=1

Retrieves Template information. Refine your request by adding the optional query parameter depth. The response will show a template's ID, number of cores, RAM, and storage size.

Path Parameters

Name
Type
Description

v6

string

The API version.

templates

string

Template attributes: ID, metadata, properties.

Query Parameters

Name
Type
Description

depth

integer

Template detail depth. Default value = 0.

{
    "id": "templates",
    "type": "collection",
    "href": "https://api.ionos.com/cloudapi/v6/templates",
    "items": [
        {
            "id": "5e98b425-1887-44e4-b782-a654bfbe7eaa",
            "type": "template",
            "href": "https://api.ionos.com/cloudapi/v6/templates/5e98b425-1887-44e4-b782-a654bfbe7eaa",
            "metadata": {
                "etag": "106988fd270d48ffd1734a210801a33d",
                "createdDate": "2021-02-13T17:07:39Z",
                "createdBy": "[UNKNOWN]",
                "createdByUserId": "[UNKNOWN]",
                "lastModifiedDate": "2021-08-10T10:11:45Z",
                "lastModifiedBy": "[UNKNOWN]",
                "lastModifiedByUserId": "[UNKNOWN]",
                "state": "AVAILABLE"
            },
            "properties": {
                "name": "CUBES XL",
                "cores": 6,
                "ram": 16384,
                "storageSize": 320
            }
        },
        {
            "id": "99d022bd-55ea-4af1-9ba7-6d4174d9fc22",
            "type": "template",
            "href": "https://api.ionos.com/cloudapi/v6/templates/99d022bd-55ea-4af1-9ba7-6d4174d9fc22",
            "metadata": {
                "etag": "2fd7e4e39bbbb7b33920bf4d7b5509a6",
                "createdDate": "2021-02-13T17:06:25Z",
                "createdBy": "[UNKNOWN]",
                "createdByUserId": "[UNKNOWN]",
                "lastModifiedDate": "2021-08-10T10:11:35Z",
                "lastModifiedBy": "[UNKNOWN]",
                "lastModifiedByUserId": "[UNKNOWN]",
                "state": "AVAILABLE"
            },
            "properties": {
                "name": "CUBES L",
                "cores": 4,
                "ram": 8192,
                "storageSize": 160
            }
        },
        {
            "id": "5ae1bfbd-05f2-47f5-a736-eaca3dcce41b",
            "type": "template",
            "href": "https://api.ionos.com/cloudapi/v6/templates/5ae1bfbd-05f2-47f5-a736-eaca3dcce41b",
            "metadata": {
                "etag": "6e68d67158a63d6d644a7c680342b26f",
                "createdDate": "2021-02-13T17:03:51Z",
                "createdBy": "[UNKNOWN]",
                "createdByUserId": "[UNKNOWN]",
                "lastModifiedDate": "2021-08-10T10:10:49Z",
                "lastModifiedBy": "[UNKNOWN]",
                "lastModifiedByUserId": "[UNKNOWN]",
                "state": "AVAILABLE"
            },
            "properties": {
                "name": "CUBES S",
                "cores": 1,
                "ram": 2048,
                "storageSize": 50
            }
        },
        {
            "id": "15c6dd2f-02d2-4987-b439-9a58dd59ecc3",
            "type": "template",
            "href": "https://api.ionos.com/cloudapi/v6/templates/15c6dd2f-02d2-4987-b439-9a58dd59ecc3",
            "metadata": {
                "etag": "4ff2f8ebb363005b447edb38563405a6",
                "createdDate": "2021-02-13T17:02:13Z",
                "createdBy": "[UNKNOWN]",
                "createdByUserId": "[UNKNOWN]",
                "lastModifiedDate": "2021-08-10T10:11:03Z",
                "lastModifiedBy": "[UNKNOWN]",
                "lastModifiedByUserId": "[UNKNOWN]",
                "state": "AVAILABLE"
            },
            "properties": {
                "name": "CUBES XS",
                "cores": 1,
                "ram": 1024,
                "storageSize": 30
            }
        },
        {
            "id": "56ce4e71-b03a-42b2-85be-9a4520aa40be",
            "type": "template",
            "href": "https://api.ionos.com/cloudapi/v6/templates/56ce4e71-b03a-42b2-85be-9a4520aa40be",
            "metadata": {
                "etag": "f528ce3bcba9ff1332d7c181f221984c",
                "createdDate": "2021-02-13T17:08:50Z",
                "createdBy": "[UNKNOWN]",
                "createdByUserId": "[UNKNOWN]",
                "lastModifiedDate": "2021-08-10T10:11:57Z",
                "lastModifiedBy": "[UNKNOWN]",
                "lastModifiedByUserId": "[UNKNOWN]",
                "state": "AVAILABLE"
            },
            "properties": {
                "name": "CUBES XXL",
                "cores": 8,
                "ram": 32768,
                "storageSize": 640
            }
        },
        {
            "id": "7f8dfdb3-594b-4ae2-ae2e-a9dfcbf05f74",
            "type": "template",
            "href": "https://api.ionos.com/cloudapi/v6/templates/7f8dfdb3-594b-4ae2-ae2e-a9dfcbf05f74",
            "metadata": {
                "etag": "fbb4194b718ce3e456437dbc55405273",
                "createdDate": "2021-02-13T17:05:17Z",
                "createdBy": "[UNKNOWN]",
                "createdByUserId": "[UNKNOWN]",
                "lastModifiedDate": "2021-08-10T10:11:22Z",
                "lastModifiedBy": "[UNKNOWN]",
                "lastModifiedByUserId": "[UNKNOWN]",
                "state": "AVAILABLE"
            },
            "properties": {
                "name": "CUBES M",
                "cores": 2,
                "ram": 4096,
                "storageSize": 80
            }
        }
    ]
}

Creating instances with composite calls

A composite call not only configures a single instance but also defines additional devices. This is required because a Cloud Cube must include a direct-attached storage device; an instance cannot be provisioned first and then mounted with a direct-attached storage volume. Composite calls combine a series of REST API requests into a single API call, and you can use the output of one request as the input for a subsequent request.

The payload of a composite call to configure a Cubes instance is different from that of a POST request to create an enterprise server. In a single request you can create a new instance, as well as its direct-attached storage device and image (public image, private image, or snapshot). When the request is processed, a Cubes instance is created and the direct-attached storage is mounted automatically.
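A composite payload for creating a Cubes instance together with its direct-attached storage can be sketched as below. The field names (the "CUBE" server type, templateUuid, the DAS volume type, and the image/password placeholders) follow Cloud API v6 conventions as far as we know, but should be verified against the API reference; the template UUID is the CUBES XS template from the list above:

```python
# Sketch of a composite request body for POST .../datacenters/{id}/servers
# that creates a Cubes instance and its direct-attached storage in one call.
composite_payload = {
    "properties": {
        "name": "my-cube",
        "type": "CUBE",                      # Cubes instance, not enterprise server
        "templateUuid": "15c6dd2f-02d2-4987-b439-9a58dd59ecc3",  # CUBES XS
    },
    "entities": {
        "volumes": {
            "items": [
                {
                    "properties": {
                        "name": "my-cube-storage",
                        "type": "DAS",                      # direct-attached storage
                        "image": "<image-or-snapshot-uuid>",  # placeholder
                        "imagePassword": "<root-password>",   # placeholder
                    }
                }
            ]
        }
    },
}
```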

Create an instance

POST https://api.ionos.com/cloudapi/v6/datacenters/{datacenterId}/servers

This method creates an instance in a specific data center.

Replace {datacenterId} with the unique ID of your data center. Your Cloud Cube will be provisioned in this location.

Path Parameters

Name
Type
Description

v6

string

datacenter

string

The API version.

datacenterId

string

The unique ID of the data center.

servers

string

Suspend an instance

POST https://api.ionos.com/cloudapi/v6/datacenters/{datacenterId}/servers/{serverId}/suspend

This method suspends an instance.

Path Parameters

Name
Type
Description

v6

string

The API version.

datacenterId

string

The unique ID of the data center.

serverId

string

The unique ID of the instance.

This does not destroy the instance. Used resources will be billed.

Setup, Resume and Delete

Resume instance

POST https://api.ionos.com/cloudapi/v6/datacenters/{datacenterId}/servers/{serverId}/resume

This method resumes a suspended instance.

Path Parameters

Name          Type    Description
v6            string  The API version.
datacenterId  string  The unique ID of the data center.
serverId      string  The unique ID of the instance.

Delete instance

DELETE https://api.ionos.com/cloudapi/v6/datacenters/{datacenterId}/servers/{serverId}

This method deletes an instance.

Path Parameters

Name          Type    Description
v6            string  The API version.
datacenterId  string  The unique ID of the data center.
serverId      string  The unique ID of the instance.

Deleting an instance also deletes the direct-attached storage NVMe volume. You should make a snapshot first in case you need to recreate the instance with the appropriate data device later.
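The lifecycle endpoints above can be driven from any HTTP client. Below is a minimal sketch using only Python's standard library; the data center and server IDs are placeholders, and authentication is HTTP Basic as used elsewhere in these docs.

```python
# Minimal sketch: building suspend/resume/delete requests for the Cloud API v6.
# IDs and credentials below are placeholders, not real values.
import base64
import urllib.request

API = "https://api.ionos.com/cloudapi/v6"

def build_request(method: str, path: str, user: str, password: str) -> urllib.request.Request:
    """Return a prepared Request with HTTP Basic authentication."""
    req = urllib.request.Request(API + path, method=method)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    return req

# Suspend and delete an instance (send with urllib.request.urlopen):
suspend = build_request("POST", "/datacenters/{dcId}/servers/{srvId}/suspend", "user", "pw")
delete  = build_request("DELETE", "/datacenters/{dcId}/servers/{srvId}", "user", "pw")
```

The same helper covers resume by POSTing to the /resume path instead.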

Overview

DBaaS for PostgreSQL is fully integrated into the Data Center Designer and has a dedicated API. You may also launch it via automation tools like Terraform and Ansible.

Compatibility: DBaaS gives you access to the capabilities of the PostgreSQL database engine. This means that the code, applications, and tools you already use today with your existing databases can be used with DBaaS. IONOS Cloud currently supports PostgreSQL versions 11, 12, 13, 14, and 15.

Locations: As of December 2022, DBaaS is offered in all IONOS Cloud Locations.

Features

  • Scalable: Fully managed clusters that can be scaled on demand.

  • High availability: Multi-node clusters with automatic node failure handling.

  • Security: Communication between clients and the cluster is encrypted using TLS certificates from Let's Encrypt.

  • Upgrades: Customer-defined maintenance windows, with minimal disruption due to planned failover (approximately a few seconds for multi-node clusters).

  • Backup: Base backups are carried out daily, with Point-in-Time recovery for one week.

  • Cloning: Customers also have the option to clone clusters via backups.

  • Restore: Databases can be restored in place or to a different target cluster.

  • Resources: Offered on Enterprise VM, with a dedicated CPU, storage, and RAM. Storage options are SSD or HDD, with SSD now including encryption-at-rest.

  • Network: DBaaS supports private LANs.

  • Extensions: DBaaS supports several PostgreSQL Extensions.

Platform Tasks

Note: IONOS Cloud doesn’t allow superuser access for PostgreSQL services. However, most DBA-type actions are still available through other methods.

DBaaS services offered by IONOS Cloud:

Our platform is responsible for all back-end operations required to maintain your database in optimal operational health.

  • Database installation via the DCD or the DBaaS API.

  • Pre-set database configuration and configuration management options.

  • Automation of backups for a period of 7 days.

  • Regular patches and upgrades during maintenance.

  • Disaster recovery via automated backup.

  • Service monitoring: both for the database and the underlying infrastructure.

Customer database administration duties:

Tasks related to the optimal health of the database remain the responsibility of the customer. These include:

  • Optimisation.

  • Data organisation.

  • Creation of indexes.

  • Updating statistics.

  • Consultation of access plans to optimize queries.

Logs: The logs that are generated by a database are stored on the same disk as the database. We provide logs for connections, disconnections, waiting for locks, DDL statements, any statement that ran for at least 500 ms, and any statement that caused an error (see PostgreSQL documentation). Currently, we do not provide an option to change this configuration.

To conserve disk space, log files are rotated according to size. Logs should not consume more than 175 MB of disk storage. The files are continuously monitored and log messages are shipped to a central storage location with a retention policy of 30 days.

Write-Ahead Logs: PostgreSQL uses Write Ahead Logs (WAL) for continuous archiving and point-in-time recovery. These logs are created in addition to the regular logs.

Every change to the database is recorded in the WAL record. WALs are generated along with daily base backups and offer a consistent snapshot of the database as it was at that time. WALs and backups are automatically deleted after 7 days, which is the earliest point in time you can recover from. Please consult PostgreSQL WAL documentation for more information.

Password encryption: Client libraries must support SCRAM-SHA-256 authentication. Make sure to use an up-to-date client library.

Connection encryption: All client connections are encrypted using TLS; the default SSL mode is prefer and clients cannot disable it. Server certificates are issued by Let's Encrypt and the root certificate is ISRG Root X1. This needs to be made available to the client for verify-ca and verify-full to function.

Certificates are issued for the DNS name of the cluster which is assigned automatically during creation and will look similar to pg-abc123.postgresql.de-txl.ionos.com. It is available via the IONOS API as the dnsName property of the cluster resource.

Here is how to verify the certificate using the psql command line tool:

curl https://crt.sh/?d=9314791 > ca.crt
export PGSSLROOTCERT=$(pwd)/ca.crt
export PGSSLMODE=verify-full
psql -h pg-abc123.postgresql.de-txl.ionos.com -U dbadmin postgres

Resource Usage

Resource quotas: Each customer contract is allotted a resource quota. The following CPU, RAM, storage, and database cluster resources are available in addition to the default limitations of a VDC contract:

  • 16 CPU Cores

  • 32 GB RAM

  • 1500 GB Disk Space

  • 10 database clusters

  • 5 nodes within a cluster

Additionally, a single instance of your database cluster cannot exceed 16 cores and 32 GB RAM.

Calculating RAM Requirements: The RAM size must be chosen carefully. 1 GB of RAM is reserved for operating system daemons. Additionally, internal services and tools use up to 500 MB of RAM. To choose a suitable RAM size, use the following formula:

ram_size = base_consumption + X * work_mem + shared_buffers

  • The base_consumption and reservation of internal services is approximately 1500 MB.

  • X is the number of parallel connections. The value work_mem is set to 8 MB by default.

  • The shared_buffers is set to about 15% of the total RAM.
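Because shared_buffers itself depends on the total RAM, the formula can be solved for the required size. A small sketch under the stated assumptions (1500 MB base consumption, 8 MB work_mem per connection, 15% shared_buffers):

```python
# Sketch: estimate total RAM (MB) for a given number of parallel connections.
# ram = base + X * work_mem + 0.15 * ram  =>  ram = (base + X * work_mem) / 0.85
def estimate_ram_mb(connections: int, work_mem_mb: int = 8,
                    base_mb: int = 1500, shared_buffers_ratio: float = 0.15) -> float:
    fixed = base_mb + connections * work_mem_mb
    return fixed / (1 - shared_buffers_ratio)

# e.g. 100 parallel connections:
print(round(estimate_ram_mb(100)))  # ~2706 MB
```

This is only a sizing aid; round up generously when ordering the cluster.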

Calculating Disk Requirements:

The requested disk space is used to store all the data that Postgres works with, including database logs and WAL segments. Each Postgres instance has its own storage (of the configured size). The operating system and applications are kept separately (outside the configured storage) and are managed by IONOS.

If the disk runs full, Postgres will reject write requests. Make sure to order enough margin to keep the Postgres cluster operational. You can monitor storage utilization in the DCD.

WAL segments: In normal operation mode, older WAL files will be deleted once they have been replicated to the other instances and backed up to archive. If either of the two shipments is slow or failing then WAL files will be kept until the replicas and archive catch up again. Account for enough margin, especially for databases with high write load.

Log files: Database log files (175 MB) and auxiliary service log files (~100 MB) are stored on the same disk as the database.

Limitations

Connection Limits: The value for max_connections is calculated based on RAM size.

RAM size  max_connections
2 GB      128
3 GB      256
4 GB      384
5 GB      512
6 GB      640
7 GB      768
8 GB      896
> 8 GB    1000

The superuser needs to maintain the state and integrity of the database, which is why the platform reserves 11 connections for internal use: for superusers (see superuser_reserved_connections) and for replication.
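The table follows a simple pattern: 128 connections per GB above the first, capped at 1000. This is an observed pattern rather than a documented formula, so treat the sketch below as a reproduction of the table, not a guarantee:

```python
# Reproduces the max_connections table above; the platform's actual rule may differ.
def max_connections(ram_gb: int) -> int:
    # 128 connections per GB above 1 GB, capped at 1000 for > 8 GB.
    return min((ram_gb - 1) * 128, 1000)

assert max_connections(2) == 128
assert max_connections(8) == 896
assert max_connections(16) == 1000
```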

CPU: The total upper limit for CPU cores depends on your quota. A single instance cannot exceed 16 cores.

RAM: The total upper limit for RAM depends on your quota. A single instance cannot exceed 32 GB.

Storage: The upper limit for storage size is 2 TB.

Backups: Storing cluster backups in an IONOS S3 Object Storage is limited to the last 7 days.

Performance Considerations

Database instances are placed in the same location as your specified LAN, so network performance should be comparable to other machines in your LAN.

Estimates: A test with pgbench (scaling factor 1000, 20 connections, duration 300 seconds, not showing detailed logs) on a single small instance (2 cores, 3 GB RAM, 20 GB HDD) resulted in around 830 transactions per second (mixed read/write) and 1100 transactions per second (read-only). For a larger instance (4 cores, 8 GB RAM, 600 GB Premium SSD), the results were around 3400 (read/write) and 19000 (read-only) transactions per second.

The database was initialized using pgbench -i -s 1000 -h <ip> -U <username> <dbname>. For benchmarking, the command line was pgbench -c 20 -T 300 -h <ip> -U <username> <dbname> for the read/write tests and pgbench -c 20 -T 300 -S -h <ip> -U <username> <dbname> for the read-only tests.

Note: To cite the pgbench docs: "It is very easy to use pgbench to produce completely meaningless numbers". The numbers shown here are only ballpark figures and there are no performance guarantees. The real performance will vary depending on your workload, the IONOS location, and several other factors.

Activate Extensions

Available PostgreSQL extensions

Several PostgreSQL extensions are preinstalled and can be enabled for your cluster. Enable an extension by logging into your cluster and executing:

create extension <EXTENSION> CASCADE;

The following table shows which extensions are enabled by default and which can be enabled (PostgreSQL version 12):

Extension           Enabled  Version  Description
plpython3u          X        1.0      PL/Python3U untrusted procedural language
pg_stat_statements  X        1.7      track execution statistics of all SQL statements executed
intarray                     1.2      functions, operators, and index support for 1-D arrays of integers
pg_trgm                      1.4      text similarity measurement and index searching based on trigrams
pg_cron                      1.3      Job scheduler for PostgreSQL
set_user                     3.0      similar to SET ROLE but with added logging
timescaledb                  2.4.2    Enables scalable inserts and complex queries for time-series data
tablefunc                    1.0      functions that manipulate whole tables, including crosstab
pg_auth_mon         X        1.1      monitor connection attempts per user
plpgsql             X        1.0      PL/pgSQL procedural language
pg_partman                   4.5.1    Extension to manage partitioned tables by time or ID
hypopg                       1.1.4    Hypothetical indexes for PostgreSQL
postgres_fdw        X        1.0      foreign-data wrapper for remote PostgreSQL servers
btree_gin                    1.3      support for indexing common datatypes in GIN
pg_stat_kcache      X        2.2.0    Kernel statistics gathering
citext                       1.6      data type for case-insensitive character strings
pgcrypto                     1.3      cryptographic functions
earthdistance                1.1      calculate great-circle distances on the surface of the Earth
postgis                      3.2.1    PostGIS geometry and geography spatial types and functions
cube                         1.4      data type for multidimensional cubes

Note: With select * from pg_available_extensions; you will see more available extensions, but many of them can't be enabled or used without superuser rights and thus aren't listed here.


Overview

The DCD helps you interconnect the elements of your infrastructure and build a network to set up a functional VDC. Virtual networks work just like normal physical networks. Transmitted data is completely isolated from other subnets and cannot be intercepted by other users.

You won't find any switches in the DCD by design. Switching, routing, and forwarding functionality is deeply integrated into our network stack, which means we are responsible for distributing your traffic. If you wish to route from one of your private networks to the next by means of a virtual machine, the virtual machine must be configured accordingly, and the routing table adjusted.

IP settings: By default, IP addresses are assigned by our DHCP server. You can also assign IP addresses yourself. Any Ethernet-based protocol can be used; TCP/IP and DHCP are supported. MAC addresses cannot be modified.

Firewall: To protect your network against unauthorized access or attacks from the Internet, you can activate the firewall for each NIC. By default, this blocks all traffic, and you must configure rules to specify which data can pass through. For the TCP, UDP, and ICMP protocols, you can specify rules for individual source or target IPs.

Network Interface Cards

IONOS Cloud allows virtual entities to be equipped with network interface cards (NICs). Only by using these virtual NICs is it possible to connect multiple virtual entities to each other and/or to the Internet.

Parameter                         Size       Performance
Throughput, internal              MTU 1,500  3 Gbps
Throughput, external              MTU 1,500  700 Mbps
Maximum number of packets per VM             100,000 packets/s

The maximum external throughput may only be achieved with a corresponding upstream of the provider.

Compatibility

  • The use of virtual MAC addresses and/or the changing of the MAC address of a network adapter is not supported. Among others, this limitation also applies to the use of CARP (Common Address Redundancy Protocol).

  • Gratuitous ARP (RFC 826) is supported.

  • Virtual Router Redundancy Protocol (VRRP) is supported based on gratuitous ARP. For VRRP to work IP failover groups must be configured.

External Network

Depending on the location, different capacities for transmitting data to or from the Internet are available for operating the IONOS Cloud service. Due to the direct connection between the data centers at the German locations, the upstream can be used across locations.

The total capacities of the respective locations are described below:

Location                Connection                   Redundancy level  AS
Berlin (DE)             2 x 100 Gbps                 N+1               AS-6724
Frankfurt am Main (DE)  2 x 100 Gbps, 4 x 10 Gbps *  N+5               AS-51862
Karlsruhe (DE)          3 x 10 Gbps **               N+2               AS-51862
London (UK)             2 x 10 Gbps                  N+1               AS-8560
Logroño (ES)            2 x 100 Gbps                 N+1               AS-8560
Las Vegas (US)          3 x 10 Gbps                  N+2               AS-54548
Newark (US)             2 x 10 Gbps                  N+1               AS-54548

* - 2 x 10 Gbps toward Karlsruhe; 2 x 10 Gbps toward the Internet

** - 2 x 10 Gbps toward Frankfurt am Main; 1 x 10 Gbps toward the Internet

IONOS backbone AS-8560, to which IONOS Cloud is redundantly connected, has a high-quality edge capacity of 1,100 Gbps with 2,800 IPv4/IPv6 peering sessions, available at the following Internet and peering exchange points: AMS-IX, BW-IX, DE-CIX, ECIX, Equinix, FranceIX, KCIX, LINX.

Internal Network

IONOS Cloud operates redundant networks at each location. All networks are operated using the latest components from brand manufacturers with connections up to 100 Gbps.

IONOS Cloud uses high-speed networks based on InfiniBand technology both for connecting the central storage systems and for handling internal data connections between customer servers.

Core Network

IONOS Cloud operates a high availability core network at each location for the redundant connection of the product platform. All services provided by IONOS Cloud are connected to the Internet via this core network.

The core network consists exclusively of devices from brand manufacturers. The network connections are completed via an optical transmission network, which, by use of advanced technologies, can provide transmission capacities of several hundred gigabits per second. Connection to important Internet locations in Europe and America guarantees the customer an optimal connection at all times.

Data is not forwarded to third countries. At the customer’s explicit request, the customer can opt for support in a data center in a third country. In the interests of guaranteeing a suitable data protection level, this requires a separate agreement (within the meaning of article 44-50 DSGVO and §§ 78 ff. BDSG 2018).

IP Address Management

IONOS Cloud provides the customer with public IP addresses that, depending on the intended use, can be booked either permanently or for the duration for which a virtual server exists. These IP addresses provided by IONOS Cloud are only needed if connections are to be established over the Internet. Internally, virtual machines can be freely networked. For this, IONOS Cloud offers a DHCP server that allows and/or simplifies the assignment of IP addresses. However, one can establish one’s own addressing scheme.

See also: Reserve an IP Address

Public IPv4 Addresses

Every virtual network interface card that is connected to the Internet is automatically assigned a public IPv4 address by DHCP. This IPv4 address is dynamic, meaning it can change while the virtual server is operational or in the case of a restart.

Customers can reserve static public IPv4 addresses for a fee. These reserved IPv4 addresses can be assigned to a virtual network interface card, which is connected to the Internet, as primary or additional IP addresses.

Private IPv4 Addresses

In networks that are not connected to the Internet, each virtual network interface card is automatically assigned a private IPv4 address. This is assigned by the DHCP service. These IPv4 addresses are assigned statically to the MAC addresses of the virtual network interface cards.

The use of the IP address assignment can be enabled or disabled for each network interface card. Any private IPv4 addresses pursuant to RFC 1918 can be used in private networks.

Network address range           CIDR notation   Abbreviated CIDR  Number of addresses  Number of networks as per network class (historical)
10.0.0.0 to 10.255.255.255      10.0.0.0/8      10/8              2^24 = 16,777,216    Class A: 1 private network with 16,777,216 addresses; 10.0.0.0/8
172.16.0.0 to 172.31.255.255    172.16.0.0/12   172.16/12         2^20 = 1,048,576     Class B: 16 private networks with 65,536 addresses; 172.16.0.0/16 to 172.31.0.0/16
192.168.0.0 to 192.168.255.255  192.168.0.0/16  192.168/16        2^16 = 65,536        Class C: 256 private networks with 256 addresses; 192.168.0.0/24 to 192.168.255.0/24
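These RFC 1918 ranges are also encoded in Python's standard library, which offers a quick way to check whether an address falls into a private range:

```python
import ipaddress

# is_private is True for the RFC 1918 ranges (and a few other special-use
# blocks such as loopback and link-local addresses).
for ip in ("10.1.2.3", "172.16.0.5", "192.168.1.10", "8.8.8.8"):
    print(ip, ipaddress.ip_address(ip).is_private)
```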

DDoS Protect

IONOS DDoS Protect is a managed Distributed Denial of Service defense mechanism which ensures that every customer resource hosted on IONOS Cloud is secure and resilient against Layer 3 and Layer 4 DDoS attacks. This is facilitated by filtering and scrubbing technology which, upon detection of an attack, filters out the malicious DDoS traffic and lets through only the genuine traffic to its original destination. This enables the applications and services of our customers to remain available under a DDoS attack.

Known attack vectors regularly evolve and new attack methods are added. IONOS Cloud monitors this evolution and dedicates resources to adapt and enhance DDoS Protect as much as possible to capture and mitigate the threat.

The service is currently available in the following data centers: Berlin, Frankfurt, and Karlsruhe, and will be available in the remaining data centers soon.

The service is available in two packages:

DDoS Protect Basic: This package is enabled by default for all customers and does not require any configuration. It provides basic DDoS Protection for every resource on IONOS Cloud from common volumetric and protocol attacks and has the following features:

  • DDoS traffic filtering - All suspicious traffic is redirected to the filtering platform where the DDoS traffic is filtered and the genuine traffic is allowed to the original destination.

  • Always-On attack detection - The service is always on by default for all customers and does not require any added configuration or subscription.

  • Automatic Containment - Each time an attack is identified, the system automatically triggers containment of the DDoS attack by filtering the DDoS traffic and letting through only genuine traffic.

  • Protect against common Layer 3 and 4 attacks - This service protects every resource on IONOS Cloud from common volumetric and protocol attacks in the Network and Transport Layer such as UDP, SYN floods, etc.

DDoS Protect Advanced: This package offers everything that's part of the DDoS Protect Basic package plus advanced security measures and support.

  • 24/7 DDoS Expert Support - Customers have 24/7 access to IONOS Cloud DDoS expert support. The team is available to assist customers with their concerns regarding ongoing DDoS attacks or any related issues.

  • Proactive Support - The IONOS Cloud DDoS support team, equipped with alarms, will proactively respond to a DDoS attack directed towards a customer's resources and also notify the customer in such an event.

  • On-demand IP-specific DDoS filtering - If a customer suspects or anticipates a DDoS attack at any point in time, they can request DDoS filtering for a specific IP or server they own. Once enabled, all traffic directed to that IP is redirected to the IONOS Cloud filtering platform, where DDoS traffic is filtered and genuine traffic is passed to the original destination.

  • On-demand Attack Diagnosis - At the customer's request, a detailed report of a DDoS attack is sent to the customer, explaining the attack and other relevant details.

Note! IONOS Cloud sets forth security as a shared responsibility between IONOS Cloud and the customer. We strive to offer a state-of-the-art DDoS defense mechanism, but successful DDoS defense can only be achieved by a collective effort on all fronts, including optimal use of firewalls and other settings in the customer environment.

Networks FAQ

How do I set up a firewall?

For every network interface, you can activate a firewall, which blocks all incoming traffic by default. You must specify the rules that define which protocols pass through the firewall and which ports are enabled. For instructions, see Activating a Firewall.

How does the IONOS firewall work?

The IONOS firewall offered in the DCD can be used as simple protection for the hosts behind it. Once activated, all incoming traffic is blocked; traffic can only pass through ports that are explicitly enabled. Outgoing traffic is generally permitted. We recommend that you set up your own firewall VM, even for small networks. There are many cost-free options, including iptables for Linux, pfSense on FreeBSD, and various solutions for Windows.

See also: Activating a Firewall

Do you have a DNS resolver?

Yes, there are DNS resolvers. The IP addresses of the 1&1 resolvers, valid at all locations, are:

212.227.123.16

212.227.123.17

2001:8d8:fe:53:72ec::1

2001:8d8:fe:53:72ec::2

By adding a public DNS resolver you will provide a certain level of redundancy for your systems.

How do I create reverse DNS entries?

Please contact IONOS enterprise support team for personal assistance and more information on how to enable reverse DNS entries.

What is the IP address of my new server?

Once a server has been provisioned, you can find its IP address by following the procedure below:

  • Open VDC

  • Select the server, for which you wish to know the IP

  • Select the Network tab in the Inspector

  • Open the properties of the NIC

    The IP address is listed in the Primary IP field.

See also: Reserve an IP Address

How can I connect multiple servers to the Internet?

The internet access element can connect to more than one server. Simply add multiple virtual machines to provide them all with internet access.

How can I get additional IPs for my server?

Users with the appropriate privileges can reserve and release additional IP addresses. Additional addresses are made available as part of a reserved consecutive IP block.

See also: Reserve an IP address

How are IPs assigned by DHCP?

The public IP address assigned by DHCP will remain with your server. The IP address, however, may change when you deallocate your VM (power stop) or remove the network interface. We, therefore, recommend assigning reserved IPs when static IPs are required, such as for web servers.

Can I use my own DHCP server?

Yes, you can. To make sure that a network interface will be addressed from your own DHCP server, perform the following steps:

  • Open your data center

  • Select the NIC

  • Open the properties of the NIC in the Inspector

  • Clear the DHCP check box

    This will disable the allocation of IPs to this NIC by IONOS DHCP, and then you can use your own DHCP server to allocate information for this interface.

How do I set up DHCP during a Linux installation?

We preset the subnet mask 255.255.255.255 for the DHCP allocation of public IPs. Unfortunately, this is not supported by all DHCP clients. You can perform network configuration at the operating system level or specify the netmask 255.255.255.0 using a configuration file.

How do I assign an IP address to a Linux server manually if DHCP fails?

DHCP configurations may fail during the installation of Linux distributions that do not support /32 subnet mask configurations. If this happens, the IP address can be assigned manually using the Remote Console.

Example

Network interface "eth0" is assigned IP address "46.16.73.50" and subnet mask "/24" ("255.255.255.0"). For internet access to work, the IP address of the gateway ("46.16.73.1" in this example) must also be specified.

  • Command-line:

    ifconfig eth0 46.16.73.50 netmask 255.255.255.0

    route add default gw 46.16.73.1

  • Config file:

    Modify the "interfaces" file in the "/etc/network/" folder as follows:

    # This file describes the network interfaces available on your system

    # and how to activate them. For more information, see interfaces(5).

    # The loopback network interface

    auto lo

    iface lo inet loopback

    # The primary network interface

    allow-hotplug eth0

    iface eth0 inet static

    address 46.16.73.50

    netmask 255.255.255.0

    gateway 46.16.73.1

  • Restart the interfaces:

    ifdown eth0

    ifup eth0

Which IP versions are supported?

We support both IPv4 and IPv6 versions.

What is the maximum bandwidth of your data centers?

Our data centers are connected as follows:

Data center Bandwidth

Location        Bandwidth in Gbit/s
Karlsruhe (DE)  4 x 10
Frankfurt (DE)  2 x 40 & 3 x 10
Berlin (DE)     2 x 10
London (UK)     2 x 10
Las Vegas (US)  3 x 10
Newark (US)     2 x 10
Logroño (ES)    2 x 10

What should I do if the network does not work?

First, attempt to log on to the VM with the Remote Console. If this is successful, please collect the information we will need to help you resolve the issue as described below.

We will need to know the following:

  • VM name

  • IP address

  • URLs of web applications running on your VM

We will need the output of the following commands:

Windows

  • ping Hostname

  • date /t

  • time /t

  • route print

  • ipconfig /all

  • netstat

  • netstat -e

  • route print or netstat -r

  • tracert and ping in/out

  • nslookup hostname DNS-Server

Linux

  • date

  • traceroute

  • ping Hostname

The output of the following commands can also give important clues:

  • arp -n

  • ip address list

  • ip route show

  • ip neighbour show

  • iptables --list --numeric --verbose

  • cat /etc/sysconfig/network-scripts/ifcfg-eth*

  • cat /etc/network/interfaces

  • cat /etc/resolv.conf

  • netstat --tcp --udp --numeric -a

Script (for Linux-based VMs)

We have prepared a ready-made script that helps gather the relevant information. The script provides both screen output and a log file which you can forward to us.

Use the script with the additional parameter -p

You will be able to observe the commands as they are being executed, and take screenshots as needed.

What should I do if the Remote Console does not work?

If you are using the Java-based edition of the Remote Console, please ensure that you have the latest Java version installed and the following ports released:

  • 80 (HTTP),

  • 443 (HTTPS),

  • 5900 (VNC).

The Remote Console becomes available immediately once the server is provisioned.

Is there a detailed traffic overview available?

There is no traffic overview screen in the user interface currently.

Customers can use either Traffic or Utilization endpoints of the Billing API to get details about their traffic usage.

Traffic

curl -X GET "https://api.ionos.com/billing/:contract/traffic/?output=all" -H "accept: application/json" -H "Authorization: Basic ******"

Utilization

curl -X GET "https://api.ionos.com/billing/:contract/utilization/?type=TRAFFIC" -H "accept: application/json" -H "Authorization: Basic ******"

More information in Swagger: https://api.ionos.com/billing/doc/

How can I configure VirtIO settings in Windows?

Please use the configuration below to ensure the stability and performance of the network connections on the operating system side. We suggest that you first check the current settings to see if any adjustments are necessary.

How to verify the current network configuration

  1. Open Device Manager

  2. Open the network adapter section where you can see all your connected virtual network cards named “Red Hat VirtIO Ethernet Adapter”. Now open the Properties dialog and go to the “Advanced” tab.

  3. Verify that your settings match those listed below; if not, follow the guidelines later in this guide to update them accordingly.

    • "Init.MTUSize"="1500"

    • "IPv4 Checksum Offload"="Rx & Tx Enabled"

    • "Large Send Offload V2 (IPv4)"="Enabled"

    • "Large Send Offload V2 (IPv6)"="Enabled"

    • "Offload.Rx.Checksum"="All"

    • "Offload.Tx.Checksum"="All"

    • "Offload.Tx.LSO"="Maximal"

    • "TCP Checksum Offload (IPv4)"="Rx & Tx Enabled"

    • "TCP Checksum Offload (IPv6)"="Rx & Tx Enabled"

    • "UDP Checksum Offload (IPv4)"="Rx & Tx Enabled"

    • "UDP Checksum Offload (IPv6)"="Rx & Tx Enabled"

Manual adjustments in the Properties dialog are not saved to the registry. To make any persistent changes, follow the guidelines in the following section.

How to update the network configuration

Once you determine that your system needs an update (see "How to verify the current network configuration" above), one of the following actions must be taken to adjust the settings:

Online update using IONOS VirtIO Network Driver Settings Update Scripts (recommended)

The best way to update network configuration is by using IONOS VirtIO Network Driver Settings Update Scripts.

The scripts are distributed in the following versions:

  • Installer, available for download here: https://github.com/ionos-enterprise/ionos-network-helper/blob/master/WinNet-v0.1.171.0001.exe

    The installer extracts the scripts to a user-specified folder and can optionally run them.

  • ZIP archive, available for download here: https://github.com/ionos-enterprise/ionos-network-helper/blob/master/WinNet-v0.1.171.0001.zip

When using the ZIP archive, or not selecting script execution in the installer, scripts can be started manually by launching the update.cmd file in the root folder of the extracted scripts.

If Windows does not allow you to start the installer or update.cmd from the File Explorer window, please launch it directly from the command line.

Offline update using IONOS Windows VirtIO Drivers ISO Image (alternative)

Alternatively, use the VirtIO drivers ISO for Microsoft operating systems provided by IONOS.

  1. Use DCD or API to add an ISO image to the virtual server you’d like to update (In DCD select the VM -> Inspector -> Storage -> CD-ROM -> IONOS-Images -> Windows-VirtIO-Drivers).

  2. Set the boot flag to the virtual CD/DVD drive with the ISO image.

  3. Boot your virtual server from the Windows VirtIO drivers ISO.

  4. Open the remote console of the virtual machine.

  5. Select an operating system from the list of supported versions. Driver installation or update will be performed automatically.

  6. Remove the ISO and restart the VM through the DCD. Make sure that the boot flag is set correctly again.

Manual update

Updating drivers

  1. Make sure you have the latest “VirtIO Ethernet Adapter” driver package. The driver package is available in the “Drivers” folder of IONOS VirtIO Network Driver Settings Update Scripts as described above.

  2. Open Device Manager.

    In the File Explorer window, right-click “This PC”, select “Properties”, and then “Device Manager”.

  3. Under Network Adapters, for each "Red Hat VirtIO Ethernet Adapter":

    1. Right-click the adapter and select “Update driver”

    2. Select “Browse my computer for driver software”

    3. Click “Browse” and select the folder with the driver package suitable for your OS version

    4. Click OK and follow the instructions to install the driver.

Updating existing VirtIO network devices

  1. Open Device Manager

    In the File Explorer window, right-click This PC, select Properties, and then Device Manager

  2. Under Network adapters, for each "Red Hat VirtIO Ethernet Adapter":

    1. Open Properties (double-click usually works)

    2. Go to the Advanced tab

    3. Set the following values there:

      • "Init.MTUSize"="1500"

      • "IPv4 Checksum Offload"="Rx & Tx Enabled"

      • "Large Send Offload V2 (IPv4)"="Enabled"

      • "Large Send Offload V2 (IPv6)"="Enabled"

      • "Offload.Rx.Checksum"="All"

      • "Offload.Tx.Checksum"="All"

      • "Offload.Tx.LSO"="Maximal"

      • "TCP Checksum Offload (IPv4)"="Rx & Tx Enabled"

      • "TCP Checksum Offload (IPv6)"="Rx & Tx Enabled"

      • "UDP Checksum Offload (IPv4)"="Rx & Tx Enabled"

      • "UDP Checksum Offload (IPv6)"="Rx & Tx Enabled"

Please be aware that these settings will revert to the old registry values unless the full update procedure described above is executed.

How can I configure VirtIO settings in Linux?

Please use the configuration below to ensure the stability and performance of the network connections on the operating system side.

Verifying current network configuration

Make sure the MTU is set to 1500 for all network interfaces:

root@debian:~# ip link
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
	link/ether 02:01:26:cb:fc:d5 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
	link/ether 02:01:fb:20:f1:65 brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
	link/ether 02:01:2a:a0:e1:53 brd ff:ff:ff:ff:ff:ff
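The MTU check above can be scripted. The sketch below parses `ip -o link`-style output and flags any interface whose MTU is not 1500; the helper name check_mtu is illustrative, not part of any IONOS tooling:

```shell
#!/bin/sh
# Flag interfaces whose MTU differs from 1500.
# Reads `ip -o link`-style output on stdin and prints any offender.
check_mtu() {
    awk '$2 != "lo:" {
        for (i = 1; i <= NF; i++)
            if ($i == "mtu" && $(i + 1) != 1500)
                print substr($2, 1, length($2) - 1) " has mtu " $(i + 1)
    }'
}

# Typical use on a live system:
#   ip -o link | check_mtu
```

An empty result means all interfaces already use the recommended MTU.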

Make sure that all of your network interfaces have hardware offloads enabled. This can be done with the ethtool utility; to install ethtool:

  • For .deb-based distributions: apt-get install ethtool -y

  • For .rpm-based distributions: yum install ethtool.x86_64 -y

Once installed, please do the following for each of your VirtIO-net devices:

Replace [device_name] with the name of your device, e.g. eth0 or ens0, and check that the offloads listed below are in the on state:

root@debian:~# ethtool -k [device_name]
...
tx-checksumming: on
...
tx-checksum-ip-generic: on
...
...
tcp-segmentation-offload: on
tx-tcp-segmentation: on
tx-tcp-ecn-segmentation: on
...
tx-tcp6-segmentation: on

If you changed any configuration parameters, such as increasing the MTU or disabling offloads for network adapters, please make sure to roll back these changes.

Fixing persistent network interface configuration

Fixing the persistent network interface configuration may include removing entries such as the following from these files:

/etc/network/interfaces:

iface eth0 inet static
...
pre-up /sbin/ifconfig $IFACE mtu 64000    <-- WRONG

/etc/dhcp/dhclient.conf (for .deb-based distributions):

interface "eth0" {
    supersede interface-mtu 64000;    <-- WRONG
}

/etc/dhcp/dhclient-*.conf (for .rpm-based distributions):

supersede interface-mtu 1500;    <-- correct

and then restarting the affected network interfaces, e.g. with ifdown eth0; ifup eth0.
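To locate such overrides in the first place, filtering the relevant configuration files for MTU settings other than 1500 is usually enough. The helper below is a sketch; its name is illustrative:

```shell
#!/bin/sh
# Print configuration lines that set an MTU other than 1500.
# Reads file contents on stdin; helper name is illustrative.
find_mtu_overrides() {
    grep -iE 'mtu' | grep -ivE 'mtu[ =-]*1500([^0-9]|$)'
}

# Typical use (paths as listed above):
#   cat /etc/network/interfaces /etc/dhcp/dhclient*.conf | find_mtu_overrides
```

Any line it prints is a candidate for removal before restarting the interfaces.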

Dynamic adjustment of the MTU and offload configuration

In all examples below, please replace the [device_name] with the name of the network device being adjusted, e.g. “eth0” or “ens6”.

Dynamically adjust network device MTU configuration:

ip link set mtu 1500 dev [device_name]

Dynamically enable hardware offloads for VirtIO-net devices. This can be done with the ethtool utility; to install ethtool:

  • For .deb-based distributions: apt-get install ethtool -y

  • For .rpm-based distributions: yum install ethtool.x86_64 -y

Once installed, please do the following for each of your VirtIO-net devices: ethtool -K [device_name] tx on tso on
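Putting both adjustments together, a small loop can apply the MTU and offload settings to each device in turn. This is a sketch; the function name and the DRY_RUN guard (which only prints the commands instead of running them) are illustrative:

```shell
#!/bin/sh
# Apply the recommended MTU and offload settings to each named device.
# With DRY_RUN=1 the commands are only printed, not executed.
apply_net_settings() {
    for dev in "$@"; do
        if [ "${DRY_RUN:-0}" = 1 ]; then
            echo "ip link set mtu 1500 dev $dev"
            echo "ethtool -K $dev tx on tso on"
        else
            ip link set mtu 1500 dev "$dev"
            ethtool -K "$dev" tx on tso on
        fi
    done
}

# Typical use on a live system (as root):
#   apply_net_settings eth0 eth1 eth2
```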