Prerequisites: Make sure you have the appropriate permissions and that you are working within an active cluster. Only Contract Owners, Administrators, or Users with the Create Kubernetes Clusters permission can create node pools within the cluster. You should already have a data center in which the nodes can be provisioned.
1. Open MANAGER Resources > Kubernetes Manager
2. Select the cluster you want to add a node pool to from the list
3. Click + Create node pool
4. Give the node pool a Name
Note the naming conventions for Kubernetes:
Maximum of 63 characters in length
Begins and ends with an alphanumeric character ([a-z0-9A-Z])
Must not contain spaces or any other white-space characters
Can contain dashes (-), underscores (_), dots (.) in between
5. Choose a Data Center. Your node pool will be included in the selected data center. If you don't have a data center, you must first create one.
6. Select CPU and Storage Type. Then proceed with the other properties of the nodes. The nodes of the pool will all have the properties you select here.
7. Select the Node pool version that you wish to run
8. Select the number of nodes in the node pool
9. Select Storage Type
10. Click Create node pool
Result: Managed Kubernetes will start to provision the resources into your data center. This will take a while and the node pool can be used once it reaches the active state.
Try to avoid accessing the target data center while Managed Kubernetes is provisioning nodes, as concurrent manual interaction can cause undesirable results.
For further management of node pools, select the cluster whose node pools you want to manage from the list
1. Select Node pools in Cluster
2. Set the Node Count; this is the number of nodes in the pool
3. Select the Version; this is the version of Kubernetes you want to run on the node pool
4. Select Attached private LANs from the dropdown list
5. Select the day and time of your preferred maintenance window; necessary maintenance for Managed Kubernetes will be performed accordingly
6. Click Update node pool
Managed Kubernetes will start to align the resources in the target data center. If you have selected a new Kubernetes version, the operation may take a while, and the node pool will be available for further changes once it reaches the active state.
Try to avoid accessing the target data center while Managed Kubernetes is provisioning nodes, as concurrent manual interaction can cause undesirable results.
The maintenance window starts at the time of your choosing and remains open for another four hours. All planned maintenance work will be performed within this window, however, not necessarily at the beginning.
1. Open Containers > Managed Kubernetes
2. Select the cluster from which you want to remove the node pool.
3. Select Node pools in Cluster to delete the node pool.
4. Click Delete.
Managed Kubernetes will start to remove the resources from the target data center and eventually delete the node pool.
Try to avoid accessing the target data center while Managed Kubernetes is removing resources, as concurrent manual interaction can cause undesirable results.
When a node fails or becomes unresponsive you can rebuild that node. This will create a new node with an identical configuration that will replace the failed node. Make sure your node is active.
1. Open Containers > Managed Kubernetes
2. Select the cluster and node pool that contains the failed node
3. Click the rebuild button of the node
4. Confirm the operation
Result: Managed Kubernetes starts a process based on the node pool template. The template creates and configures a new node and waits for its status to display as ACTIVE. Once active, all pods are migrated from the faulty node, which is deleted once it is empty. While this operation is in progress, the node pool has an extra billable active node.
Try to avoid accessing the target data center while Managed Kubernetes is provisioning nodes, as concurrent manual interaction can cause undesirable results.
Prerequisites: Make sure you have the appropriate permissions. Only Contract Owners, Administrators, or Users with the Create Kubernetes Clusters permission can create a cluster. Other user types have read-only access.
1. Go to Containers > Managed Kubernetes.
2. Select Create Cluster.
3. Give the cluster a unique Name.
Naming conventions for Kubernetes:
Maximum of 63 characters in length.
Begins and ends with an alphanumeric character ([a-z0-9A-Z]).
Must not contain spaces or any other white-space characters.
Can contain dashes (-), underscores (_), dots (.) in between.
4. Select the Kubernetes Version you want to run in the cluster.
5. Click Create Cluster.
Result: The cluster will now be created and can be further modified and populated with node pools once its status is active.
To access the Kubernetes API provided by the cluster, simply download the kubeconfig file and use it with tools like kubectl.
1. Select the Cluster name from the list and type a new name.
2. Select the Version of Kubernetes you want to run in the cluster.
3. Select the Maintenance time of your preferred maintenance window. Necessary maintenance for Managed Kubernetes will be performed accordingly.
4. Click Update Cluster to save your changes.
The maintenance window starts at the time of your choosing and remains open for another four hours. All planned maintenance work will be performed within this window, however not necessarily at the beginning.
Prerequisites: Make sure you have the appropriate permissions and access to the chosen cluster. The cluster should be active, evacuated, and no longer contain any node pools.
1. Open the Kubernetes Manager.
2. Select the cluster from the list.
3. Click Delete in the menu.
4. Confirm the deletion when prompted.
A node pool upgrade generally happens automatically during weekly maintenance. You can also trigger it manually, e.g. when upgrading to a higher version of Kubernetes. In any case the node pool upgrade will result in rebuilding all nodes belonging to the node pool.
During the upgrade, an "old" node in a node pool is replaced by a new node. This may be necessary for several reasons:
Software updates: Since the nodes are considered immutable, IONOS Cloud does not install software updates on the running nodes, but replaces them with new ones.
Configuration changes: Some configuration changes require replacing all included nodes.
Considerations: Multiple node pools of the same cluster can be upgraded at the same time. A node pool upgrade locks the affected node pool and you cannot make any changes until the upgrade is complete. During a node pool upgrade, all of its nodes are replaced one by one, starting with the oldest one. Depending on the number of nodes and your workload, the upgrade can take several hours.
If the upgrade was initiated as part of weekly maintenance, some nodes may not be replaced to avoid exceeding the maintenance window.
Please make sure that you have not exceeded your contract quota for servers; otherwise, you will not be able to provision a new node to replace an existing one.
The rebuilding process consists of the following steps:
1. Provision a new node to replace the "old" one and wait for it to register in the control plane.
2. Exclude the "old" node from scheduling to avoid deploying additional pods to it.
3. Drain all existing workload from the "old" node.
First, IONOS Cloud tries to gracefully drain the node.
- PodDisruptionBudgets are enforced for up to 1 hour.
- GracefulTerminationPeriod for pods is respected for up to 1 hour.
If the process takes more than 1 hour, all remaining pods are deleted.
4. Delete the "old" node from the node pool.
Please consider the following node drain updates and their impact on the maintenance procedure:
Under the current platform setup, a node drain considers PodDisruptionBudgets (PDBs). If the eviction of a pod would violate an existing PDB, the drain fails; if the drain of a node fails, the attempt to delete this node also fails.
In the past, we observed problems with unprepared workloads or misconfigured PDBs, which often led to failing drains and node deletions, and thus to failed node pool maintenance.
To prevent this, the node drain will split into two stages. In the first stage, the system will continue to try to gracefully evict the pods from the node. If this fails, the second stage will forcefully drain the node by deleting all remaining pods. This deletion will bypass checking PDBs. This prevents nodes from failing during the drain.
As a result of the two-stage procedure, the process will stop failing due to unprepared workloads or misconfigured PDBs. However, please note that this change may still cause interruptions to workloads that are not prepared for maintenance. During maintenance, nodes are replaced one-by-one. For each node in a node pool, a new node is created. After that, the old node is drained and then deleted.
At times, a pod would not return to READY after having been evicted from a node during maintenance. In such cases, a PDB was in place for a pod’s workload. This led to failed maintenance and the rest of the workload left untouched. With the force drain behavior, the maintenance process will proceed and all parts of the workload will be evicted and potentially end up in a non-READY state. This might lead to an interruption of the workload. To prevent this, please ensure that your workload’s pods are prepared for eviction at any time.
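For example, a PodDisruptionBudget that always leaves room for evictions helps a workload survive maintenance gracefully. The following is a minimal sketch only; the app label, name, and replica assumptions are illustrative:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  # Allow evictions as long as at least one replica stays available.
  # The owning workload should run more than one replica, otherwise
  # every eviction necessarily violates this budget.
  minAvailable: 1
  selector:
    matchLabels:
      app: web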
Learn how to create and configure a Kubernetes Cluster using the DCD.
Create, update and rebuild node pools.
Upgrade and manage node pools.
Generate and download the yaml file.
All Kubernetes API instructions can be found in the main Cloud API specification file.
To access the Kubernetes API provided by the cluster, simply download the kubeconfig
for the cluster and use it with tools like kubectl.
GET
https://api.ionos.com/cloudapi/v6/k8s/{k8sClusterId}/kubeconfig
Retrieve a configuration file for the specified Kubernetes cluster, in YAML or JSON format as defined in the Accept header; the default Accept header is application/yaml.
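A minimal sketch of retrieving the kubeconfig with curl, assuming Basic authentication and placeholder credentials and cluster ID:

```bash
# Download the kubeconfig in YAML format (the default Accept header)
curl -u "user@example.com:password" \
  -H "Accept: application/yaml" \
  "https://api.ionos.com/cloudapi/v6/k8s/<k8sClusterId>/kubeconfig" \
  -o kubeconfig.yaml

# Use it with kubectl
kubectl --kubeconfig kubeconfig.yaml get nodes
```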
It may be desirable to enhance the configuration of CoreDNS by incorporating additional settings. To ensure the persistence of these changes during control plane maintenance, it is necessary to create a ConfigMap within the kube-system namespace. The ConfigMap should be named coredns-additional-conf and should include a data entry with the key extra.conf. The value associated with this entry must be a string that encompasses the supplementary configuration.
Below is an illustrative example that demonstrates the process of adding a custom DNS entry for example.abc:
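A minimal sketch of such a ConfigMap; the address 1.2.3.4 is a placeholder:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-additional-conf
  namespace: kube-system
data:
  extra.conf: |
    # Resolve example.abc to a fixed address (1.2.3.4 is a placeholder)
    example.abc:53 {
        hosts {
            1.2.3.4 example.abc
            fallthrough
        }
    }
```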
Learn how to create and configure a Server inside the DCD.
Use the Remote Console to connect to instances without SSH.
Leverage Live Vertical Scaling (LVS) to manage resource usage.
Learn how to extend CoreDNS with additional configuration.
Learn how to configure Ingress to preserve client source IPs.
All Managed Kubernetes resources, such as clusters and node pools, are subject to an automated (weekly) maintenance process. All changes to a cluster or node pools that may cause service interruption (Example: upgrade) are executed during maintenance. During the maintenance window, you may encounter uncontrolled disconnections and an inability to connect to the cluster.
The upgrade process during maintenance respects the selected Kubernetes version of the cluster. The upgrade process does not upgrade to another Kubernetes major, minor, or patch version unless the current cluster or node pool version reaches its end of life. In such instances, the cluster or node pool will be updated to the next minor version that is active.
The maintenance window consists of two parts. The first part specifies the day of the week, while the second part specifies the expected start time. The following example shows a maintenance window configuration:
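A sketch in the Cloud API v6 representation, assuming the dayOfTheWeek and time fields; the values are placeholders:

```json
"maintenanceWindow": {
  "dayOfTheWeek": "Sunday",
  "time": "02:00:00Z"
}
```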
For more information, see cluster creation or node pool creation.
During cluster maintenance, control plane components are upgraded to the newest version available.
During the maintenance of a certain node pool, all nodes of this pool are replaced by new nodes. Nodes are replaced one after the other, starting with the oldest node. During node replacement, a new node is first created and added to the cluster; then the old node is drained and finally removed from the cluster. The node pool upgrade respects the four-hour maintenance window: if not all nodes of the node pool can be upgraded within this window, the upgrade continues during the next node pool maintenance.
The maintenance process first tries to drain a node gracefully, respecting given PDBs for one hour. If this fails, the node is drained ignoring PDBs.
Some applications require a Kubernetes service of type LoadBalancer which preserves the source IPs of incoming packets. Example: Ingress controllers. As the Network Load Balancer (NLB) integration is not yet available in Managed Kubernetes, a service is exposed by attaching a public IP to a viable Kubernetes node. This node serves as a load balancer using kube-proxy.
Note: This works fine with services that use externalTrafficPolicy: Cluster, but in this case, the client's source IP is lost.
To preserve the client source IP address, Kubernetes services with externalTrafficPolicy: Local need to be used. This configuration ensures that packets reaching a node are only forwarded to pods that run on the same node, preserving the client source IP. Therefore, the load balancer IP address of the service needs to be attached to the same node running the ingress controller pod.
This can be achieved with different strategies. One approach is to use a DaemonSet to ensure that a pod is running on each node. However, this approach is feasible only in some cases, and if a cluster has a lot of nodes, then, using DaemonSet could lead to a waste of resources.
For an efficient setup, you can schedule pods to be run only on nodes of a specific node pool using NodeSelectors. The node pool needs to have labels that can be used in the node selector. To ensure that the service's load balancer IP is also attached to one of these nodes, annotate the service with cloud.ionos.com/node-selector: key=value, where key and value are the labels of the node pool.
The following example shows how to install the ingress-nginx helm chart as DaemonSet with node selector and to configure the controller service with the required annotation.
Create a node pool with a label nodepool=ingress:
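One way to do this is via the Cloud API. The following is only a sketch: Basic authentication is assumed, and the IDs, sizing values, and CPU family are placeholders you should adapt to your contract and data center.

```bash
curl -u "user@example.com:password" \
  -H "Content-Type: application/json" \
  -X POST "https://api.ionos.com/cloudapi/v6/k8s/<k8sClusterId>/nodepools" \
  -d '{
    "properties": {
      "name": "ingress-pool",
      "datacenterId": "<datacenterId>",
      "nodeCount": 1,
      "cpuFamily": "INTEL_SKYLAKE",
      "coresCount": 2,
      "ramSize": 4096,
      "availabilityZone": "AUTO",
      "storageType": "SSD",
      "storageSize": 100,
      "labels": { "nodepool": "ingress" }
    }
  }'
```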
Create a values.yaml file for later use in the helm command with the following content:
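A minimal sketch of such a values.yaml, using the nodepool=ingress label from above:

```yaml
controller:
  # Run the controller as a DaemonSet restricted to the ingress node pool
  kind: DaemonSet
  nodeSelector:
    nodepool: ingress
  service:
    # Preserve the client source IP
    externalTrafficPolicy: Local
    # Attach the service's load balancer IP to a node of the ingress node pool
    annotations:
      cloud.ionos.com/node-selector: "nodepool=ingress"
```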
Install ingress-nginx via helm with the following command:
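For example (the release name and namespace are placeholders):

```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  -f values.yaml
```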
Get answers to the most frequently asked questions about Kubernetes in IONOS DCD.
Managed Kubernetes facilitates the fully automated setup of Kubernetes clusters. Managed Kubernetes also simplifies and carefully supports the automation of CI/CD pipelines in terms of testing and deployment.
Our managed solution offers automatic updates and security fixes, versioning and upgrade provisioning, a highly available and geo-redundant control plane, and full cluster admin-level access to the Kubernetes API.
Everything related to Managed Kubernetes can be controlled in the DCD via the dedicated Kubernetes Manager. The manager provides a complete overview of your provisioned Kubernetes clusters and node pools including their status. The Manager allows you to create and manage clusters, create and manage node pools, and download the Kubeconfig file.
See also: The Kubernetes Manager
The control plane manages the worker nodes and the pods in the cluster. In production environments, the control plane usually runs across multiple computers. A cluster usually runs multiple nodes, providing fault tolerance and high availability.
Kube-controller-manager manages controllers that provide functionalities such as deployments, services, etc.
See the kube-controller-manager documentation.
The API server is a component of the Kubernetes control plane that exposes the Kubernetes API. The API server is the front end for the Kubernetes control plane. Kube-apiserver is designed to scale horizontally. It scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between those instances.
See the kube-apiserver documentation.
Kube-scheduler distributes pods to nodes. Pods must be created with resource limits so that nodes are not overcommitted.
See the kube-scheduler documentation.
Managed Kubernetes offers a hidden control plane. Control plane components like kube-apiserver, kube-scheduler, and kube-controller-manager are not visible to the customer and cannot be modified directly. The kube-apiserver can only be interacted with by using its REST API.
The hidden control plane is deployed on Virtual Machines that are running in a geo-redundant cluster in the area of Frankfurt am Main, Germany.
The Managed Kubernetes clusters have a built-in Calico Container Network Interface (CNI) plugin. Its primary function is to provide Pod-to-Pod communication. Calico also allows the use of network policies in the Kubernetes cluster.
Managed Kubernetes does not currently offer an option to choose a different CNI plugin nor does it support customers that do so on their own. CNI affects the whole cluster network, so if changes are made to calico or a different plugin is installed it can cause cluster-wide issues and failed resources.
The CSI (Container Storage Interface) driver runs as a deployment in the control plane to manage volumes for PVCs (Persistent Volume Claims) in the IONOS Cloud and to attach them to nodes.
The "soft" mount option is required when creating PersistentVolume
with an NFS source in Kubernetes. It can be set either in the mount options list in the PersistentVolume
specification (spec.mountOptions
), or using the annotation key volume.beta.kubernetes.io/mount-options. This value is expected to contain a comma-separated list of mount options. If none of them contains the "soft" mount option, the creation of the PersistentVolume
will fail.
Note that the use of the annotation is still supported but will be deprecated in the future. See also: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#mount-options
Mount options in the PersistentVolume specification can also be set using the StorageClass.
Example for PV spec:
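A minimal sketch; the server address and export path are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - soft                  # required for NFS-backed PersistentVolumes
  nfs:
    server: 10.0.0.10       # placeholder NFS server address
    path: /exports/data     # placeholder export path
```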
Example for annotation:
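The same volume using the annotation instead (placeholders as above):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
  annotations:
    # comma-separated list of mount options; must include "soft"
    volume.beta.kubernetes.io/mount-options: "soft"
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.10
    path: /exports/data
```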
Example for using StorageClass:
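A sketch of a StorageClass that passes the mount option on to the volumes it provisions; the provisioner name is a placeholder for an NFS provisioner deployed in your cluster:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-soft
provisioner: example.com/nfs   # placeholder for your NFS provisioner
mountOptions:
  - soft
```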
A cluster autoscaler is a tool that automatically adjusts the size of the Kubernetes cluster when one of the following conditions is true:
there are pods that failed to run in the cluster due to insufficient resources;
there are nodes in the cluster that have been underutilized for an extended period of time and their pods can be placed on other existing nodes.
The cluster autoscaler increases the node pool if pods cannot be scheduled due to a lack of resources and adding a node from that pool would remedy the situation. If no node pool could provide a node that allows the pod to be scheduled, the autoscaler does not enlarge it. The cluster autoscaler reduces the node pool if a node is not fully utilized for an extended period of time. A node is underutilized when it has a light load and all of its pods can be moved to other nodes.
Yes; however, only node pools with active autoscaling are managed by the autoscaler.
No. The autoscaler cannot increase the number of nodes in the node pool above the maximum specified by the user, or decrease it below the specified minimum. In addition, the quota for a specific contract cannot be exceeded using the autoscaler. The autoscaler also cannot reduce the number of nodes in the node pool to 0.
Yes, it is possible.
See Encrypting Secret Data at Rest
All components installed in the cluster are updated. This includes the K8s control plane itself, CSI, CCM, Calico, and CoreDNS. With cluster maintenance, several components that are visible to customers are updated and reset to our values. For example, changes to coredns are not permanent and will be removed at the next maintenance. It is currently not possible to set your own DNS records in the coredns configuration, but this will be possible later. Managed components that are regularly updated:
coredns
csi-ionoscloud (DaemonSet)
calico (typha)
metrics-server
ionos-policy-validator
snapshot-validation-webhook
The maintenance time window is limited to four hours. If not all nodes can be rebuilt within this time, the remaining nodes will be replaced at the next maintenance. To avoid late updates, it is recommended to create node pools with no more than 20 nodes.
If old nodes are replaced with new ones during maintenance, the new nodes will subsequently have different (new) public IP addresses. You can pre-specify a list of public IP addresses from which entries for new nodes are taken. In such a way, the list of possible host addresses is limited and predictable (for example, to activate them differently through a whitelist).
Managed Kubernetes nodes usually have a public IP address, so they can be accessed from the Internet. This is not the case in the private cluster. Here all the nodes are "hidden" behind a NAT gateway, and although they can open Internet connections themselves, they cannot be reached directly from the outside. Private clusters have various limitations: they can only consist of one node pool and are therefore also limited to one region. In addition, the bandwidth of the Internet is limited by the maximum bandwidth of the NAT gateway (typically 700 Mbps). With private clusters, you can determine the external public IP address of the NAT gateway using CRIP. Thus, outbound traffic will use this IP address as the "source IP address".
The Kubernetes cluster (control plane) and the corresponding node pools can have different versions of Kubernetes. Node pools can use older versions than the control plane, but not vice versa. The difference between the minor versions must not be more than 1. There is a distinction between Patch Version Updates and Minor Version Updates. All version updates must be initiated by the customer. Once initiated, version updates are performed immediately. However, forced updates will also occur if the version used by the customer is so old that we can no longer support it. Typically, affected customers receive a support notification about two weeks prior to a forced update.
The Kubernetes API is secured with TLS. Traffic between the nodes and the control plane is secured by mutual TLS, which means that both sides check whether they are talking to the expected remote station.
If clusters or node pools are created or modified, the operation may fail, and the cluster or node pool will go into a FAILED status. In this case, our staff are already informed through monitoring, but sometimes it may be difficult or impossible for them to correct the error, since the cause may lie, for example, in a conflict with the client's requirements: a LAN is specified that does not exist (or no longer exists), or a service update becomes impossible due to a module budget violation. If a node is NotReady, the reason is usually that it does not have enough RAM. If the node runs out of RAM, it enters an endless loop of trying to free RAM, which means that executables must be reloaded from disk; the node then makes no progress and is busy only with disk I/O. We recommend setting Resource Requests and Limits to prevent such scenarios.
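A minimal sketch of such requests and limits in a container spec; the names, image, and values are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          resources:
            requests:           # what the scheduler reserves for the pod
              cpu: 100m
              memory: 128Mi
            limits:             # hard caps enforced on the container
              cpu: 500m
              memory: 256Mi
```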
Currently, customers cannot publish their own CAs in the Kubernetes cluster or use their own TLS certificates.
You can reserve node pools in multiple locations in the same cluster. This allows simple geo-redundancy to be configured and implemented. The control plane is geo-redundant (within Germany); several replicas run in different locations.
If a node is unavailable, for example, because there are too many pods running on it without resource limits, it can simply be replaced. To do this, you can use the following API endpoint:
POST /k8s/{k8sClusterId}/nodepools/{nodePoolId}/nodes/{nodeId}/replace
Name | Type | Description |
---|---|---|
k8sClusterId* | String | The unique ID of the Kubernetes cluster. |
depth | String | Controls the detail depth of the response objects. |
X-Contract-Number | Integer | Users with multiple contracts must provide the contract number, for which all API requests are to be executed. |
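A sketch of calling this endpoint with curl, assuming Basic authentication; the IDs are placeholders:

```bash
# Trigger replacement of a failed node in a node pool
curl -X POST \
  -u "user@example.com:password" \
  "https://api.ionos.com/cloudapi/v6/k8s/<k8sClusterId>/nodepools/<nodePoolId>/nodes/<nodeId>/replace"
```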
Managed Kubernetes facilitates the fully automated setup of Kubernetes clusters. Several clusters can also be quickly and easily deployed, for example, to set up staging environments, and then deleted again if necessary.
Managed Kubernetes also simplifies and carefully supports the automation of CI/CD pipelines in terms of testing and deployment.
Our solution offers the following:
Automatic updates and security fixes.
Version and upgrade provisioning.
Highly available and geo-redundant control plane.
Full cluster admin-level access to Kubernetes API.
Everything related to Managed Kubernetes can be controlled in the DCD via the dedicated Kubernetes Manager, which can be found in Menu Bar > Containers > Managed Kubernetes.
Kubernetes Manager provides a complete overview of your provisioned Kubernetes clusters and node pools including their status. Furthermore, you can:
Create and manage clusters.
Create and manage node pools.
Download the kubeconfig file.
When viewing a data center that contains resources created by Kubernetes, they will be represented as read-only. This is because they are managed by Kubernetes, and manual interactions would cause interference.
Clusters
Clusters can span multiple node pools that may be provisioned in different virtual data centers and across locations. For example, you can create a cluster consisting of multiple node pools where each pool is in a different location and achieve geo-redundancy. For an in-depth description of how the clusters work, read the Setup of a Cluster.
All operations concerning the infrastructure of clusters can be performed using the Kubernetes Manager including cluster and node creation, and scaling of node pools.
The status of a cluster is indicated by a LED.
Node pools
All Kubernetes worker nodes are organized in node pools. All nodes within a node pool are identical in setup. The nodes of a pool are provisioned into virtual data centers at a location of your choice and you can freely specify the properties of all the nodes at once before creation.
All operations concerning the infrastructure of node pools can be performed using the Kubernetes Manager.
The status of a node pool is indicated by a LED.
Nodes and managed resources
Nodes or worker nodes are the servers in your data center that are managed by Kubernetes and constitute your node pools. Resources managed by Kubernetes in your data centers will be displayed by the DCD as read-only.
The Inspector for managed resources permits no direct modifications to the resources themselves. It does allow easy navigation between the data center view and the cluster and node pools views in the Kubernetes Manager, as well as the following:
Switching to the Kubernetes Manager and showing the respective node pool
Downloading the kubeconfig for access to the cluster
Listing all nodes in the data center belonging to the same node pool
This page provides information about Kubernetes versions and their availability in the Managed Kubernetes product. It also provides an estimated release and end-of-life schedule.
Available date: This is an estimate of when the version becomes available in Managed Kubernetes.
Kubernetes end of life: The listed Kubernetes versions will no longer receive new features, bug fixes, or security updates. These versions may still be available in the Managed Kubernetes product but will soon be removed from the available versions.
You can download the generated Kubeconfig for your cluster to be used with the kubectl command (a usage sketch follows the steps below). The file will be generated on the fly when you request it.
You can download the file from the Kubernetes Manager or from the Inspector pane, in case the viewed data center contains active node pools.
1. Go to MANAGER Resources > Kubernetes Manager
2. Select a cluster from the list
3. Click on kubeconfig.yaml or kubeconfig.json to download the file
1. Open a data center containing node pools
2. Select a Server that is managed by Kubernetes
3. On the right, select Open Kubernetes Manager
4. Choose between kubeconfig.yaml or kubeconfig.json for download
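Once downloaded, the file can be used like this; the path is a placeholder:

```bash
# Point kubectl at the downloaded kubeconfig
export KUBECONFIG=~/Downloads/kubeconfig.yaml
kubectl get nodes
kubectl get pods --all-namespaces
```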
Kubernetes is organized in clusters and node pools. The node pools are created in the context of a cluster. The nodes belonging to the node pool are provisioned into virtual data centers. All servers within a node pool are identical in their configuration.
You can still see, inspect, and position the managed resources to your liking. However, the specifications of the resources are locked for manual interactions to avoid undesirable results. To modify the managed resources, use the Kubernetes Manager. Managed resource types include servers, LANs, and other resources, depending on your deployed pods and configurations.
Kubernetes Version | Kubernetes release date | Available | Kubernetes end of life | End of life |
---|---|---|---|---|
1.23 | | | 28 February 2023 | 31 July 2023 |
1.24 | 05 March 2022 | 26 September 2022 | 28 July 2023 | 31 October 2023 |
1.25 | 23 August 2022 | 06 February 2023 | 08 August 2023 | TBD |
1.26 | 09 December 2022 | 31 May 2023 | 28 February 2024 | TBD |
1.27 | 11 April 2023 | 08 August 2023 | 28 June 2024 | TBD |
1.28 | 11 August 2023 | TBD | TBD | TBD |
The status is transitional. The cluster is in a transitional state and temporarily locked for modifications.
The status is unavailable. The cluster is unavailable and locked for modifications.
The status is in progress. Modifications to the cluster are in progress, the cluster is temporarily locked for modifications.
The status is active. The cluster is available and running.
The status is transitional. The node pool is in a transitional state and temporarily locked for modifications.
The status is unavailable. The node pool is unavailable and locked for modifications.
The status is in progress. Modifications to the node pool are in progress. The node pool is locked for modifications.
The status is active. The node pool is available and running.