Prerequisites: Make sure you have the appropriate permissions and that you are working within an active cluster. Only Contract Owners, Administrators, or Users with the Create Kubernetes Clusters permission can create node pools within the cluster. You should already have a data center in which the nodes can be provisioned.
1. Open MANAGER Resources > Kubernetes Manager
2. Select the cluster you want to add a node pool to from the list
3. Click + Create node pool
4. Give the node pool a Name
Note the naming conventions for Kubernetes:
Maximum of 63 characters in length.
Begins and ends with an alphanumeric character ([a-z0-9A-Z]).
Must not contain spaces or any other white-space characters.
Can contain dashes (-), underscores (_), dots (.) in between.
5. Choose a Data Center. Your node pool will be included in the selected data center. If you don't have a data center, you must first create one.
6. Select the CPU type, then proceed with the other properties of the nodes. The nodes of the pool will all have the properties you select here.
7. Select the Node pool version that you wish to run
8. Select the number of nodes in the node pool
9. Select Storage Type
10. Click Create node pool
Result: Managed Kubernetes will start to provision the resources into your data center. This will take a while and the node pool can be used once it reaches the active state.
Try to avoid accessing the target data center while Managed Kubernetes is provisioning nodes, as concurrent manual interaction can cause undesirable results.
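Once the node pool is active, you can confirm that its worker nodes have joined the cluster. A minimal check with kubectl, assuming you have already downloaded the cluster's kubeconfig (see the kubeconfig section below) and saved it as kubeconfig.yaml:

```bash
# List all worker nodes; the nodes of the new pool should appear with status Ready
kubectl --kubeconfig kubeconfig.yaml get nodes -o wide
```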
For further management of node pools, select the cluster whose node pool you want to manage from the list.
1. Select Node pools in Cluster
2. Set the Node Count, that is, the number of nodes in the pool.
3. Select the Version of Kubernetes you want to run on the node pool.
4. Select Attached private LANs from the dropdown list.
5. Select the day and time of your preferred maintenance window. Necessary maintenance for Managed Kubernetes will be performed accordingly.
6. Click Update node pool
Managed Kubernetes will start to align the resources in the target data center. If you have selected a new Kubernetes version, the operation may take a while, and the node pool will be available for further changes once it reaches the active state.
Try to avoid accessing the target data center while Managed Kubernetes is provisioning nodes, as concurrent manual interaction can cause undesirable results.
The maintenance window starts at the time of your choosing and remains open for another four hours. All planned maintenance work will be performed within this window, however, not necessarily at the beginning.
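After an update that changes the Kubernetes version, every node in the pool is rebuilt. One way to verify the result once the pool is active again, sketched with kubectl and a kubeconfig saved as kubeconfig.yaml:

```bash
# Show the kubelet version reported by each node after the update
kubectl --kubeconfig kubeconfig.yaml get nodes \
  -o custom-columns=NAME:.metadata.name,VERSION:.status.nodeInfo.kubeletVersion
```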
1. Open Containers > Managed Kubernetes
2. Select the cluster from which you want to remove the node pool.
3. Select Node pools in Cluster to delete the node pool.
4. Click Delete.
Managed Kubernetes will start to remove the resources from the target data center and eventually delete the node pool.
Try to avoid accessing the target data center while Managed Kubernetes is provisioning nodes, as concurrent manual interaction can cause undesirable results.
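Before removing a node pool, it can be helpful to check which pods are still scheduled on its nodes so you can move important workloads elsewhere first. A sketch with kubectl, where <node-name> stands for one of the pool's nodes and kubeconfig.yaml is the cluster's kubeconfig:

```bash
# List all pods currently scheduled on a given node of the pool
kubectl --kubeconfig kubeconfig.yaml get pods --all-namespaces -o wide \
  --field-selector spec.nodeName=<node-name>
```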
When a node fails or becomes unresponsive, you can rebuild that node. This creates a new node with an identical configuration to replace the failed one. Make sure the node pool is active.
1. Open Containers > Managed Kubernetes
2. Select the cluster and node pool that contains the failed node
3. Click the Rebuild button for the node
4. Confirm the operation
Result: Managed Kubernetes starts a process based on the node pool template. The process creates and configures a new node, then waits for its status to display as ACTIVE. Once the new node is active, all pods are migrated from the faulty node, which is deleted once it is empty. While this operation occurs, the node pool will have an extra billable active node.
Try to avoid accessing the target data center while Managed Kubernetes is provisioning nodes, as concurrent manual interaction can cause undesirable results.
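To identify the failed node before triggering a rebuild, you can also inspect node status from the Kubernetes side. A minimal sketch with kubectl, where <node-name> is a placeholder for the suspect node:

```bash
# A failed or unresponsive node typically shows NotReady in the STATUS column
kubectl --kubeconfig kubeconfig.yaml get nodes

# Inspect the node's conditions and recent events for more detail
kubectl --kubeconfig kubeconfig.yaml describe node <node-name>
```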
Prerequisites: Make sure you have the appropriate permissions. Only Contract Owners, Administrators, or Users with the Create Kubernetes Clusters permission can create a cluster. Other user types have read-only access.
1. Go to Containers > Managed Kubernetes.
2. Select Create Cluster.
3. Give the cluster a unique Name.
Naming conventions for Kubernetes:
Maximum of 63 characters in length.
Begins and ends with an alphanumeric character ([a-z0-9A-Z]).
Must not contain spaces or any other white-space characters.
Can contain dashes (-), underscores (_), dots (.) in between.
4. Select the Kubernetes Version you want to run in the cluster.
5. Click Create Cluster.
Result: The cluster will now be created and can be further modified and populated with node pools once its status is active.
To access the Kubernetes API provided by the cluster, simply download the kubeconfig file and use it with tools like kubectl.
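For example, assuming the downloaded file is saved as kubeconfig.yaml in the current directory, a quick connectivity check could look like this:

```bash
# Point kubectl at the downloaded kubeconfig for this shell session
export KUBECONFIG=$PWD/kubeconfig.yaml

# Verify access to the cluster's API server and list its nodes
kubectl cluster-info
kubectl get nodes
```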
To update an existing cluster, select it in the Kubernetes Manager, then:
1. Select the Cluster name and type a new name.
2. Select the Version of Kubernetes you want to run in the cluster.
3. Select the day and time of your preferred maintenance window. Necessary maintenance for Managed Kubernetes will be performed accordingly.
4. Click Update Cluster to save your changes.
The maintenance window starts at the time of your choosing and remains open for another four hours. All planned maintenance work will be performed within this window, however not necessarily at the beginning.
Prerequisites: Make sure you have the appropriate permissions and access to the chosen cluster. The cluster should be active, evacuated, and no longer contain any node pools.
1. Open the Kubernetes Manager.
2. Select the cluster from the list.
3. Click Delete in the menu.
4. Confirm the deletion when prompted.
A node pool upgrade generally happens automatically during weekly maintenance. You can also trigger it manually, for example, when upgrading to a higher version of Kubernetes. In any case, the node pool upgrade will result in rebuilding all nodes belonging to the node pool.
During the upgrade, an "old" node in a node pool is replaced by a new node. This may be necessary for several reasons:
Software updates: Since the nodes are considered immutable, IONOS Cloud does not install software updates on the running nodes, but replaces them with new ones.
Configuration changes: Some configuration changes require replacing all included nodes.
Considerations: Multiple node pools of the same cluster can be upgraded at the same time. A node pool upgrade locks the affected node pool and you cannot make any changes until the upgrade is complete. During a node pool upgrade, all of its nodes are replaced one by one, starting with the oldest one. Depending on the number of nodes and your workload, the upgrade can take several hours.
If the upgrade was initiated as part of weekly maintenance, some nodes may not be replaced to avoid exceeding the maintenance window.
Please make sure that you have not exceeded your contract quota for servers; otherwise, you will not be able to provision a new node to replace an existing one.
The rebuilding process consists of the following steps:
1. Provision a new node to replace the "old" one and wait for it to register in the control plane.
2. Exclude the "old" node from scheduling to avoid deploying additional pods to it.
3. Drain all existing workload from the "old" node.
First, IONOS Cloud tries to gracefully drain the node.
- PodDisruptionBudgets (PDBs) are enforced for up to 1 hour.
- The termination grace period of pods is respected for up to 1 hour.
If the process takes more than 1 hour, all remaining pods are deleted.
4. Delete the "old" node from the node pool.
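For illustration only, the cordon-and-drain part of this process is roughly what you could do manually with kubectl; the exact commands and flags used by Managed Kubernetes are not exposed, and <old-node-name> is a placeholder:

```bash
# Exclude the "old" node from scheduling so no additional pods land on it
kubectl cordon <old-node-name>

# Evict the existing workload; DaemonSet-managed pods are skipped, and pods
# using emptyDir volumes lose that local data when they are evicted
kubectl drain <old-node-name> --ignore-daemonsets --delete-emptydir-data --timeout=1h
```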
Please consider the following node drain updates and their impact on the maintenance procedure:
Under the current platform setup, a node drain considers PodDisruptionBudgets (PDBs). If a concrete eviction of a pod violates an existing PDB, the drain would fail. If the drain of a node fails, the attempt to delete this node would also fail.
In the past, we observed problems with unprepared workloads or misconfigured PDBs, which often led to failing drains, node deletions and resulting failure in node pool maintenance.
To prevent this, the node drain is split into two stages. In the first stage, the system continues to try to gracefully evict the pods from the node. If this fails, the second stage forcefully drains the node by deleting all remaining pods. This deletion bypasses the PDB checks, which prevents the drain, and thus the node deletion, from failing.
As a result of the two-stage procedure, the process will stop failing due to unprepared workloads or misconfigured PDBs. However, please note that this change may still cause interruptions to workloads that are not prepared for maintenance. During maintenance, nodes are replaced one-by-one. For each node in a node pool, a new node is created. After that, the old node is drained and then deleted.
In the past, a pod would at times not return to READY after having been evicted from a node during maintenance. If a PDB was in place for that pod's workload, this led to failed maintenance, with the rest of the workload left untouched. With the force drain behavior, the maintenance process will proceed, and all parts of the workload will be evicted and may end up in a non-READY state. This might lead to an interruption of the workload. To prevent this, please ensure that your workload's pods are prepared for eviction at any time.
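To make a workload tolerate this node-by-node replacement, run it with more than one replica and give it a PDB that always leaves room for one eviction at a time. A minimal sketch, assuming a hypothetical Deployment labelled app: web with at least two replicas; a PDB with maxUnavailable: 1 permits one pod to be evicted at a time, whereas, for example, a minAvailable equal to the replica count would block every eviction:

```bash
# Create a PDB that still allows evictions during node replacement
kubectl apply -f - <<'EOF'
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: web
EOF
```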
Learn how to:
Create and configure a Kubernetes cluster using the DCD.
Create, update and rebuild node pools.
Upgrade and manage node pools.
Generate and download the kubeconfig.yaml file.
You can download the generated Kubeconfig for your cluster to be used with the kubectl command. The file will be generated on the fly when you request it.
You can download the file from the Kubernetes Manager or from the Inspector pane, in case the viewed data center contains active node pools.
1. Go to MANAGER Resources > Kubernetes Manager
2. Select a cluster from the list
3. Click kubeconfig.yaml or kubeconfig.json to download the file in the corresponding format
1. Open a data center containing node pools
2. Select a Server that is managed by Kubernetes
3. On the right, select Open Kubernetes Manager
4. Choose between kubeconfig.yaml and kubeconfig.json for download
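If you want to keep your existing contexts, one common approach is to merge the downloaded file into your local configuration. A sketch assuming the file was saved to ~/Downloads/kubeconfig.yaml (a hypothetical path):

```bash
# Back up the current configuration before merging
cp ~/.kube/config ~/.kube/config.bak

# Merge the downloaded kubeconfig into the existing one
KUBECONFIG=~/.kube/config:~/Downloads/kubeconfig.yaml \
  kubectl config view --flatten > ~/.kube/config.merged
mv ~/.kube/config.merged ~/.kube/config

# List contexts and switch to the new cluster
kubectl config get-contexts
kubectl config use-context <context-name>
```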