Managed Kubernetes provides a platform to automate the deployment, scaling, and management of containerized applications. With IONOS Cloud Managed Kubernetes, you can quickly set up Kubernetes clusters and manage Node Pools.
It offers a wide range of features for containerized applications without requiring you to handle the underlying infrastructure details. It is a convenient solution for users who want to leverage the power of Kubernetes without dealing with the operational challenges of managing the cluster themselves.
Note: Starting December 4, 2024, Kubernetes version 1.28 will reach its End of Life (EOL). All clusters on version 1.28 will be automatically updated to 1.29 (the latest available patch version).
Note: Starting July 17, 2024, you may encounter an error when running terraform plan. This error will indicate a change in the cpu_family attribute from AMD_OPTERON to a “new CPU type”. For more information, see Changes to Opteron Node Pools.
Learn how to set up a cluster.
Learn how to create a node pool using the DCD.
Learn how to manage user groups for node pools.
To get answers to the most commonly encountered questions about Managed Kubernetes, see FAQs.
Managed Kubernetes facilitates the fully automated setup of Kubernetes clusters. Using Managed Kubernetes, several clusters can be quickly and easily deployed. For example, you can use it on the go to set up staging environments and then delete them if required. Managed Kubernetes simplifies and supports the automation of Continuous Integration and Continuous Delivery/Continuous Deployment (CI/CD) pipelines that help in testing and deployment.
IONOS Managed Kubernetes offers the following:
Automatic updates and security fixes.
Version and upgrade provisioning.
Highly available and geo-redundant control plane.
Full cluster administrator level access to Kubernetes API.
Both Public and Private Node Pools support the same Kubernetes versions.
Note:
You can explore the available releases for Kubernetes. For more information, see Release History.
You can visit the changelog to explore the information related to your Kubernetes version. For more information, see Changelog.
The architecture of Managed Kubernetes includes the following main components that collectively provide a streamlined and efficient environment for deploying, managing, and scaling containerized applications.
Control Plane: The control plane runs several key components, including the API server, scheduler, and controller manager. It is responsible for managing the cluster and its components, coordinates the scheduling and deployment of applications, monitors the health of the cluster, and enforces desired state management.
Cluster: A cluster is a group of computing resources that are connected and managed as a single entity. It is the foundation of the Kubernetes platform and provides the environment for deploying, running, and managing containerized applications. Clusters can span multiple node pools that may be provisioned in different virtual data centers and across locations. For example, you can create a cluster consisting of multiple node pools where each pool is in a different location and achieve geo-redundancy. Each cluster consists of a control plane and a set of worker nodes.
Node: A single (physical or virtual) machine in a cluster is part of the larger Kubernetes ecosystem. Each node is responsible for running containers, which are the encapsulated application units in Kubernetes. These nodes work together to manage and run containerized applications.
Node Pool: A node pool is a group of nodes within a cluster with the same configuration. Nodes are the compute resources where applications run. All Kubernetes worker nodes are organized in node pools. All nodes within a node pool are identical in setup. The nodes of a pool are provisioned into virtual data centers at a location of your choice, and you can freely specify the properties of all the nodes at once before creation.
kubectl: The command-line tool for interacting with Kubernetes clusters that serves as a powerful and versatile interface for managing and deploying applications on Kubernetes. With kubectl, you can perform various operations such as creating, updating, and deleting resources in a Kubernetes cluster.
Kubeconfig: The kubeconfig file is a configuration file used by the Kubernetes command-line tool (kubectl) to authenticate and access a Kubernetes cluster. It contains information about the cluster, user credentials, and other settings.
etcd: etcd is a distributed key-value store that is used as the primary data store for Kubernetes. It is responsible for storing the configuration data that represents the state of the cluster. This includes information about nodes in the cluster, configurations, and the current status of various resources.
The illustration shows the key components of Managed Kubernetes.
In IONOS Managed Kubernetes, a Private Node Pool is a dedicated set of nodes within a Kubernetes cluster that is isolated for the exclusive use of a specific user, application, or organization. Private node pools of a cluster are deployed in a private network behind a NAT Gateway to enable connectivity from the nodes to the public internet but not vice-versa.
You can create Kubernetes clusters for Private Node Pools using the Configuration Management Tools or directly using the IONOS Cloud API. By using IONOS Kubernetes clusters for Private Node Pools, you can ensure the network traffic between your nodes and Kubernetes service stays on your private network only.
The key features related to Private Node Pools include:
Customized Configurations: The ability to customize networking configurations and define subnets provides flexibility to align the infrastructure with user-specific requirements.
Isolation of Resources: Private Node Pools isolate resources within a dedicated, private network environment, which improves performance and reduces the risk of interference from external entities.
Security: The additional layer of security added by Private Node Pools ensures that nodes are only accessible within a private network. This helps in protecting sensitive data and applications from external threats.
Scalability: The Private Node Pools are designed to be flexible and scalable based on your needs. This ensures that the resources are utilized efficiently, and you can adapt to varying levels of demand.
In IONOS Managed Kubernetes, a Public Node Pool provides a foundation for hosting applications and services that require external accessibility over the internet. These node pools consist of worker nodes that are exposed to the public network, enabling them to interact with external clients and services.
You can create Kubernetes clusters for Public Node Pools using the Configuration Management Tools or directly using the IONOS Cloud API.
The key features related to Public Node Pools include:
External Accessibility: Public Node Pools are designed to host workloads that need to be accessed from outside the Kubernetes cluster. This can include web applications, APIs, and other services that require internet connectivity.
Load Balancing: Load balancers are used with IONOS Public Node Pools to distribute incoming traffic across multiple nodes. This helps to achieve high availability, scalability, and efficient resource utilization.
Security: Implementing proper network policies, firewall rules, and user groups helps IONOS Public Node Pools mitigate potential risks and protect sensitive data.
Scaling: The ability to dynamically scale the number of nodes in a Public Node Pool is crucial for handling varying levels of incoming traffic. This scalability ensures optimal performance during peak usage periods.
Public Cloud Integration: Public Node Pools seamlessly integrate with IONOS Cloud services.
Monitoring and Logging: Robust monitoring and logging solutions are essential for tracking the performance and health of applications hosted in Public Node Pools. This includes metrics related to traffic, resource utilization, and potential security incidents.
You can extend CoreDNS with additional configuration to make changes that survive control plane maintenance. To do so, create a ConfigMap in the kube-system namespace. The ConfigMap must be named coredns-additional-conf and contain a data entry with the key extra.conf. The value of the entry must be a string containing the additional configuration.
The following example shows how to add a custom DNS entry for example.abc:
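A minimal sketch of such a ConfigMap, assuming the standard CoreDNS hosts plugin syntax and a placeholder IP address (1.2.3.4); adjust the zone and address to your environment:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-additional-conf
  namespace: kube-system
data:
  extra.conf: |
    # Resolve example.abc to a fixed address (placeholder IP)
    example.abc:53 {
        hosts {
            1.2.3.4 example.abc
            fallthrough
        }
    }
```

After applying the ConfigMap with kubectl, the additional server block is picked up by the managed CoreDNS configuration.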
You can use the Load Balancer to provide a stable and reliable IP address for your Kubernetes cluster. It will expose your application, such as Nginx deployment, to the internet. This IP address should remain stable as long as the service exists.
Define type as LoadBalancer to create a service of type Load Balancer. When this service is created, most cloud providers will automatically provision a Load Balancer with a stable external IP address. Configure the ports that the service will listen on and forward traffic to. Define the selector field to set the Pods to which the traffic will be forwarded.
Note:
Ensure that your Cloud provider supports the automatic creation of external Load Balancers for Kubernetes services.
You need at least two remaining free CRIPs for regular maintenance.
You need to replace the Nginx-related labels and selectors with those relevant to your application.
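A minimal sketch of such a service, assuming an existing Nginx deployment whose Pods carry the label app: nginx; replace the labels and ports to match your application:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer        # requests an external load balancer with a stable IP
  selector:
    app: nginx              # forwards traffic to Pods with this label
  ports:
    - protocol: TCP
      port: 80              # port the service listens on
      targetPort: 80        # container port the traffic is forwarded to
```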
The release schedule outlines the timeline for Kubernetes versions, updates, availability, and the deployment of new features within the Managed Kubernetes environment. It also provides an estimated release and End of Life (EOL) schedule.
The Managed Kubernetes release schedule provides the following information:
Kubernetes Version: This refers to a specific release of Kubernetes, which includes updates, enhancements, and bug fixes.
Kubernetes Release Date: The date when a specific version of the Kubernetes software is released, making it available for users to download and deploy.
Availability Date: The estimated date on which the version becomes available in Managed Kubernetes.
Kubernetes End of Life (EOL): The date when a specific version or release of Kubernetes reaches the end of its official support and maintenance period, after which it no longer receives updates, security patches, or bug fixes from the Kubernetes community or its maintainers. These versions may still be available in the Managed Kubernetes product but will soon be removed from the available versions.
End of Life (EOL): The point in time when the Managed Kubernetes product reaches the end of its official support period, after which it will no longer receive updates, patches, or technical assistance.
Some applications require a Kubernetes service of type LoadBalancer, which preserves the source IP address of incoming packets. You can manually integrate a Network Load Balancer (NLB) by exposing and attaching a public IP address to a viable Kubernetes node. This node serves as a load balancer using kube-proxy.
Note:
This works fine with services that use externalTrafficPolicy: Cluster, but in this case, the client's source IP address is lost.
The public IP address that is used as the Load Balancer IP address also needs to be bound to those nodes on which the ingress controller is running.
To preserve the client source IP address, Kubernetes services with externalTrafficPolicy: Local need to be used. This configuration ensures that packets reaching a node are only forwarded to Pods that run on the same node, preserving the client source IP address. Therefore, the load balancer IP address of the service needs to be attached to the same node running the ingress controller pod.
This can be achieved with different strategies. One approach is to use a DaemonSet to ensure that a pod is running on each node. However, this approach is feasible only in some cases, and if a cluster has a lot of nodes, using a DaemonSet could lead to a waste of resources.
For an efficient setup, you can schedule Pods to run only on nodes of a specific node pool using a node selector. The node pool needs to have labels that can be used in the node selector. To ensure that the service's load balancer IP is also attached to one of these nodes, annotate the service with cloud.ionos.com/node-selector: key=value, where the key and value are the labels of the node pool.
The following example shows how to install the ingress-nginx controller as a DaemonSet with a node selector and how to configure the controller service with the required annotation.
Create a node pool with a label nodepool=ingress:
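One way to create such a node pool is via the Cloud API. The sketch below is an assumption based on the v6 node pool endpoint, with placeholder IDs and sizing values, and assumes the properties object accepts a labels map; verify the exact field names and the CPU families available in your location:

```bash
curl -u "$IONOS_USER:$IONOS_PASSWORD" \
  -H "Content-Type: application/json" \
  -X POST "https://api.ionos.com/cloudapi/v6/k8s/<k8sClusterId>/nodepools" \
  -d '{
    "properties": {
      "name": "ingress-pool",
      "datacenterId": "<datacenter-uuid>",
      "nodeCount": 1,
      "cpuFamily": "INTEL_SKYLAKE",
      "coresCount": 2,
      "ramSize": 4096,
      "availabilityZone": "AUTO",
      "storageType": "SSD",
      "storageSize": 100,
      "labels": { "nodepool": "ingress" }
    }
  }'
```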
Create a values.yaml file for later use in the helm command with the following content:
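A minimal sketch of such a values.yaml, assuming the upstream ingress-nginx Helm chart keys controller.kind, controller.nodeSelector, and controller.service.annotations:

```yaml
controller:
  kind: DaemonSet                    # run one controller pod per matching node
  nodeSelector:
    nodepool: ingress                # schedule only on nodes of the labeled node pool
  service:
    annotations:
      cloud.ionos.com/node-selector: nodepool=ingress   # attach the LB IP to the same nodes
```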
Install ingress-nginx via helm using the following command:
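The install command then references that file; this mirrors the helm invocation shown later in this document:

```bash
helm install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  -f values.yaml
```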
Managed Kubernetes can be utilized to address the specific needs of its users. Here, you can find a list of common use cases and scenarios. Each use case is described in detail to highlight its relevance and benefits.
You can optimize compute resources, such as CPU and RAM, along with storage volumes in Kubernetes through the strategic use of zones. To enhance the performance of your Kubernetes environment, consider a strategic approach to resource allocation: by intelligently distributing workloads across different zones, you can improve performance and enhance fault tolerance and resilience.
Define a storage class named ionos-enterprise-ssd-zone-1, which specifies the provisioning of SSD-type storage with ext4 file system format, located in availability zone ZONE_2. Configure the volumeBindingMode and allowVolumeExpansion fields.
Note: Supported values for fstype are ext2, ext3, or ext4.
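A sketch of such a storage class; the provisioner name cloud.ionos.com and the parameter keys type, fstype, and availabilityZone are assumptions — compare them with the default ionos-enterprise-* classes in your cluster (kubectl get storageclass -o yaml):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ionos-enterprise-ssd-zone-1
provisioner: cloud.ionos.com          # assumed IONOS CSI provisioner name
parameters:
  type: SSD                           # SSD-type storage
  fstype: ext4                        # supported: ext2, ext3, ext4
  availabilityZone: ZONE_2            # pin volumes to this availability zone
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```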
This implementation provides a robust and reliable Kubernetes infrastructure for your applications.
Kubernetes is organized in clusters and node pools. The node pools are created in the context of a cluster. The servers belonging to the node pool are provisioned into the selected virtual data center. All servers within a node pool are identical in their configuration.
Nodes, also known as worker nodes, are the servers in your data center that are managed by Kubernetes and constitute your node pools. All resources managed by Kubernetes in your data centers will be displayed by the DCD as read-only.
You can see, inspect, and position the managed resources as per your requirements. However, the specifications of the resources are locked for manual interactions to avoid undesirable results. To modify the managed resources, use the Kubernetes Manager. You can manage the following resource types based on your deployed pods and configurations:
Servers
The Inspector for Managed Resources allows easy navigation between the data centers, clusters, and node pools in the Kubernetes Manager. Here, you can:
Switch to the Kubernetes Manager and show the respective node pool.
Download the kubeconfig to access the cluster.
List all nodes in the data center belonging to the same node pool.
All operations related to the infrastructure of clusters can be performed using the Kubernetes Manager, including cluster and node creation and the scaling of node pools. The current state of a cluster is indicated by its status.
All operations related to the infrastructure of node pools can be performed using the Kubernetes Manager. The current state of a node pool is indicated by its status.
Prerequisite: Only contract administrators, owners, and users with Create Kubernetes Clusters permission can create a cluster. Other user types have read-only access.
| Kubernetes Version | Kubernetes Release Date | Availability Date | Kubernetes End of Life (EOL) | End of Life (EOL) |
| --- | --- | --- | --- | --- |
| 1.31 | August 13, 2024 | November 4, 2024 | October 28, 2025 | November 28, 2025 |
| 1.30 | April 17, 2024 | July 8, 2024 | June 28, 2025 | July 28, 2025 |
| 1.29 | December 13, 2023 | April 23, 2024 | February 28, 2025 | March 28, 2025 |
| 1.28 | August 11, 2023 | October 18, 2023 | October 28, 2024 | December 4, 2024 |
Learn how to set user privileges using the DCD.
Learn how to set up and create a cluster.
Learn how to generate and download the yaml file.
Learn how to update a cluster for node pools using the DCD.
Learn how to delete a cluster from the node pools using the DCD.
Learn how to create a node pool using the DCD.
Learn how to update a node pool using the DCD.
Learn how to delete a node pool using the DCD.
Learn how to manage user groups for node pools.
Learn how to mount a Network File Storage (NFS) volume in your cluster.
Prerequisite: You need administrative privileges to create and assign user privileges by using the Cloud API.
To set user privileges using the Cloud API for creating clusters, follow these steps:
Authenticate to the Cloud API using your API credentials.
Create a user using the POST /cloudapi/v6/um/users endpoint.
Set the following required parameters for the user: name, email address, and password.
Create a group using the POST /cloudapi/v6/um/groups endpoint.
Set the createK8sCluster privilege to true.
Assign the user to the created group using the POST /cloudapi/v6/um/groups/{groupId}/users endpoint and provide the user ID in the request body.
Result: The Create Kubernetes Clusters privilege is granted to the user.
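A sketch of the group creation and user assignment calls using cURL; the group property name createK8sCluster and the request bodies are assumptions based on the Cloud API v6 User Management endpoints, and the names and IDs shown are placeholders:

```bash
# Create a group that carries the Create Kubernetes Clusters privilege
curl -u "$IONOS_USER:$IONOS_PASSWORD" -H "Content-Type: application/json" \
  -X POST "https://api.ionos.com/cloudapi/v6/um/groups" \
  -d '{ "properties": { "name": "k8s-admins", "createK8sCluster": true } }'

# Assign an existing user to the group (user ID in the request body)
curl -u "$IONOS_USER:$IONOS_PASSWORD" -H "Content-Type: application/json" \
  -X POST "https://api.ionos.com/cloudapi/v6/um/groups/<groupId>/users" \
  -d '{ "id": "<userId>" }'
```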
The status is transitional, and the cluster is temporarily locked for modifications.
The status is unavailable, and the cluster is locked for modifications.
The status is in progress. Modifications to the cluster are in progress, and the cluster is temporarily locked for modifications.
The status is active, and the cluster is available and running.
The status is transitional, and the node pool is temporarily locked for modifications.
The status is unavailable. The node pool is unavailable and locked for modifications.
The status is in progress. Modifications to the node pool are in progress. The node pool is locked for modifications.
The status is active. The node pool is available and running.
The horizontal scaling of ingress network traffic over multiple Kubernetes nodes involves adjusting the number of running instances of your application to handle varying levels of load. This helps preserve the original client IP address forwarded by the Kubernetes ingress controller in the X-Forwarded-For HTTP header.
The Ingress NGINX Controller will be installed via Helm using a separate configuration file.
The following example contains a complete configuration file, including parameters and values to customize the installation:
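A condensed sketch of such a values.yaml, collecting the settings discussed in the rest of this section (replica count, node selector, external traffic policy, load balancer IP, and the node-selector annotation); the IP address and service name are placeholders:

```yaml
controller:
  replicaCount: 3                                  # one controller replica per ingress node
  nodeSelector:
    node-role.kubernetes.io/ingress-node: nginx    # run only on the designated ingress nodes
  service:
    externalTrafficPolicy: Local                   # preserve the client source IP address
    loadBalancerIP: <LB_IP_address>                # static IP reserved in the IP Manager
    annotations:
      cloud.ionos.com/node-selector: node-role.kubernetes.io/ingress=<service_name>
```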
The illustration shows the high-level architecture built using IONOS Managed Kubernetes.
The current implementation of the service of type LoadBalancer does not deploy a true load balancer in front of the Kubernetes cluster. Instead, it allocates a static IP address and assigns it to one of the Kubernetes nodes as an additional IP address. This node is, therefore, acting as an ingress node and takes over the role of a load balancer. If the pod of the service is not running on the ingress node, kube-proxy will NAT the traffic to the correct node.
Problem: The NAT operation will replace the original client IP address with an internal node IP address.
Any individual Kubernetes node provides a throughput of up to 2 Gbit/s on the public interface. Scaling beyond that can be achieved by scaling the number of nodes horizontally. Additionally, the service LB IP address must also be distributed horizontally across those nodes. This type of architecture relies on Domain Name System (DNS) load balancing, as all LB IP addresses are added to the DNS record. During name resolution, the client will decide which IP address to connect to.
When using an ingress controller inside a Kubernetes cluster, web services will usually not be exposed as type LoadBalancer, but as type NodePort instead. The ingress controller is the component that will accept client traffic and distribute it inside the cluster. Therefore, usually only the ingress controller service is exposed as type LoadBalancer.
To scale traffic across multiple nodes, multiple LB IP addresses are required, which are then distributed across the available ingress nodes. This can be achieved by creating as many (dummy) services as nodes and IP addresses are required. It is best practice to reserve these IP addresses outside of Kubernetes in the IP Manager so that they are not released when the service is deleted.
Let’s assume that our web service demands a throughput of close to 5 Gbit/s. Distributing this across 2 Gbit/s interfaces would require 3 nodes. Each of these nodes requires its own LB IP address, so in addition to the ingress controller service, one needs to deploy 2 additional (dummy) services.
To spread each IP address to a dedicated node, use a node label to assign the LB IP address to: node-role.kubernetes.io/ingress=<service_name>
Note: You can always set labels and annotations via the DCD, API, Terraform, or other DevOps tools.
To pin a LB IP address to a dedicated node, follow these steps:
Reserve an IP address in the IP Manager.
Create a node pool of only one node.
Apply the following label to the node:
node-role.kubernetes.io/ingress=<service_name>
Add the following node selector annotation to the service:
annotations.cloud.ionos.com/node-selector: node-role.kubernetes.io/ingress=<service_name>
In the case of our example, reserve 3 IP addresses in the IP Manager. Add these 3 IP addresses to the DNS A-record of your fully qualified domain name. Then, create 3 node pools, each containing only one node, and apply a different ingress node-role label to each node pool. We will refer to these 3 nodes as ingress nodes.
The first service will be the ingress NGINX controller service. Add the above-mentioned service annotation to it:
controller.service.annotations.cloud.ionos.com/node-selector: node-role.kubernetes.io/ingress=<service_name>
Also, add the static IP address (provided by the IP Manager) to the configuration:
controller.service.loadBalancerIP: <LB_IP_address>
Similarly, 2 additional (dummy) services of type LoadBalancer must be added to spread traffic across 3 nodes. These 2 services must point to the same ingress-nginx deployment; therefore, the same ports and selectors as the standard ingress-nginx service are used, as sketched below.
Note:
Make sure to add your specific LB IP address to the manifest.
Notice that the service uses the service-specific node selector label as an annotation.
This spreads 3 IP addresses across 3 different nodes.
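A sketch of one such dummy service; the selector follows the standard ingress-nginx controller labels, and the IP address and names are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-dummy-1
  namespace: ingress-nginx
  annotations:
    cloud.ionos.com/node-selector: node-role.kubernetes.io/ingress=<service_name>
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local          # only forward to controller pods on the local node
  loadBalancerIP: <LB_IP_address>       # static IP reserved in the IP Manager
  selector:                             # same selector as the standard ingress-nginx service
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
    - name: http
      port: 80
      targetPort: http
      protocol: TCP
    - name: https
      port: 443
      targetPort: https
      protocol: TCP
```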
To avoid packets being forwarded using Network Address Translation (NAT) to different nodes (thereby lowering performance and losing the original client IP address), each node containing the LB IP address must also run an ingress controller pod. (This could be implemented by using a daemonSet, but this would waste resources on nodes that are not actually acting as ingress nodes.) First of all, as many replicas of the ingress controller as ingress nodes must be created (in our case 3): controller.replicaCount: 3
Then, the Pods must be deployed only on those ingress nodes. This is accomplished by using another node label, for example, node-role.kubernetes.io/ingress-node=nginx. The name and value can be set to any desired string. All 3 nodes must have the same label. The ingress controller must now be configured to use this nodeSelector:
controller.nodeSelector.node-role.kubernetes.io/ingress-node: nginx
This limits the nodes on which the Ingress Controller Pods are placed.
For the Ingress Controller Pods to spread across all nodes equally (one pod on each node), a pod antiAffinity must be configured:
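A sketch of the corresponding Helm values, assuming the standard ingress-nginx pod labels; it keeps two controller pods off the same node:

```yaml
controller:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - topologyKey: kubernetes.io/hostname      # at most one controller pod per node
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: ingress-nginx
              app.kubernetes.io/component: controller
```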
To force Kubernetes to forward traffic only to Pods running on the local node, the externalTrafficPolicy needs to be set to local. This will also guarantee the preservation of the original client IP address. This needs to be configured for the Ingress-NGINX service (controller.service.externalTrafficPolicy: Local) and for the 2 dummy services (see full-service example above).
The actual helm command via which the Ingress-NGINX Controller is deployed is as follows:
helm install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace -f values.yaml
To verify the setup, ensure that:
DNS load balancing works correctly.
Fully Qualified Domain Name (FQDN) DNS lookup yields three IP addresses.
The Whoami web application can be deployed using the following manifests:
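A condensed sketch of such manifests, assuming the publicly available traefik/whoami image, the nginx ingress class, and a placeholder host name:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
spec:
  replicas: 2
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami:latest   # echoes request headers, including X-Forwarded-For
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  selector:
    app: whoami
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami
spec:
  ingressClassName: nginx
  rules:
    - host: <your-fqdn>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  number: 80
```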
Note: Ensure that both Whoami Pods are running, the service is created, and the Ingress returns an external IP address and a hostname.
A curl with the below-mentioned flags to the hostname will show which Load Balancer IP address is used. You need to use the same curl command multiple times to verify connection to all 3 LB IP addresses is possible.
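One way to test each LB IP address explicitly is curl with --resolve, which pins the host name to a chosen IP, and -v, which prints the address actually used; the flags and placeholders here are illustrative:

```bash
# Repeat once per LB IP address to confirm all three are reachable
curl -v --resolve <your-fqdn>:80:<LB_IP_address> http://<your-fqdn>/
```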
The response from the whoami application will also return the client IP address in the X-Forwarded-For HTTP header. Verify that it is your local public IP address.
Existing Kubernetes node pools using AMD Opteron CPUs will be migrated to a “new CPU type” during scheduled maintenance windows.
Note: Starting July 17, 2024, you may encounter an error when running terraform plan. This error will indicate a change in the cpu_family attribute from AMD_OPTERON to a “new CPU type”.
Run terraform plan -refresh-only to identify any remote changes related to your Kubernetes node pool configuration.
Check the plan output to confirm the change in the cpu_family attribute.
Update your Terraform configuration file (*.tf) to reflect the new CPU type. Ensure the cpu_family parameter aligns with the migrated node pool configuration.
Note:
If allow_replace=true, the existing node pool will be deleted and recreated with the new CPU type. This action may result in data loss.
To prevent data loss, set allow_replace=false in your Terraform configuration and manually update the Terraform configuration file (*.tf) with the new cpu_family parameter. You can set allow_replace back to true after this operation.
When creating new Kubernetes node pools, ensure that the cpu_family parameter does not specify AMD_OPTERON. Instead, set the value to the available cpuFamily in your desired location.
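A sketch of a node pool definition with a non-Opteron CPU family; the attribute names follow the ionoscloud Terraform provider's k8s node pool resource, and the version, family, and sizing values are placeholders to adapt to your location and provider version:

```hcl
resource "ionoscloud_k8s_node_pool" "example" {
  name              = "pool-1"
  k8s_cluster_id    = ionoscloud_k8s_cluster.example.id
  datacenter_id     = ionoscloud_datacenter.example.id
  k8s_version       = "1.30.5"         # placeholder; use an available version
  node_count        = 2
  cpu_family        = "INTEL_SKYLAKE"  # any family offered in your location, not AMD_OPTERON
  cores_count       = 2
  ram_size          = 4096
  availability_zone = "AUTO"
  storage_type      = "SSD"
  storage_size      = 100
}
```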
Crossplane ignores the cpuFamily parameter during reconciliation for existing Kubernetes node pools. Clients can modify this parameter in the specification file without impacting the operational state of the node pool.
When defining new Kubernetes node pools, ensure that the cpuFamily parameter in the specification file does not specify AMD_OPTERON. Failure to update this parameter may result in configuration errors during provisioning.
If you need to update the cpuFamily parameter for existing node pools, make the necessary changes in your Crossplane specification files as required.
Ensure the cpuFamily parameter is set appropriately to align with the current CPU offerings during the creation of new Kubernetes node pools.
Prerequisite: Only contract administrators, owners, and users with Create Kubernetes Clusters permission can create a cluster for Public and Private Node Pools. Other user types have read-only access.
You can create a cluster using the Kubernetes Manager in DCD for Public Node Pools.
Note:
A total of 500 node pools per cluster are supported.
It is not possible to switch the Node pool type from public to private and vice versa.
In the DCD, go to Containers > Managed Kubernetes.
Select + Create Cluster.
Enter a Name for the cluster.
Note: Make sure to use the following naming convention for the Kubernetes cluster:
Can be a maximum of 63 characters in length.
Begins and ends with an alphanumeric character ([a-z0-9A-Z]).
Must not contain spaces or any other white-space characters.
Can contain dashes (-), underscores (_), and dots (.) in between.
Select the Kubernetes Version you want to run in the cluster from the drop-down list.
Select a Region from the drop-down list.
In the Node pool type field, choose Public from the drop-down list.
Click + Create Cluster.
Result: A cluster is successfully created and listed in the clusters list for Public Node Pools. The cluster can be modified and populated with node pools once its status is active.
You can create a cluster using the Kubernetes Manager in DCD for Private Node Pools. For this cluster, you have to provide a Gateway IP, which is the IP address assigned to the deployed Network Address Translation (NAT) Gateway. The IP address must be reserved in Management > IP Management.
Note:
When defining a private node pool, you need to provide a data center in the same location as the cluster for which you create the node pool.
A total of 500 node pools per cluster are supported.
It is not possible to switch the Node pool type from private to public and vice versa.
To create a cluster for Private Node Pools in Kubernetes Manager, follow these steps:
In the DCD, go to Menu > Containers > Managed Kubernetes.
Select + Create Cluster.
Enter a Name for the cluster.
Note: Make sure to use the following naming convention for the Kubernetes cluster:
Can be a maximum of 63 characters in length.
Begins and ends with an alphanumeric character ([a-z0-9A-Z]).
Must not contain spaces or any other white-space characters.
Can contain dashes (-), underscores (_), and dots (.) in between.
Select the Kubernetes Version you want to run in the cluster from the drop-down list.
In the Node pool type field, choose Private from the drop-down list.
Select a Region from the drop-down list.
Note: You can only create the cluster for Private Node Pools in the Virtual Data Centers (VDCs) in the same region as the cluster.
Select a reserved IP address from the drop-down list in Gateway IP. To do this, you need to reserve an IPv4 address assigned by IONOS Cloud. For more information, see Reserve an IPv4 Address.
(Optional) Define a Subnet for the private LAN. This has to be an address with a prefix length of /16 in Classless Inter-Domain Routing (CIDR) notation.
Note:
The subnet value cannot intersect with the cluster's networks for pods and services. For clusters created with:
Kubernetes version 1.30 and above, the networks are 100.96.0.0/12 and 100.64.0.0/18.
Older Kubernetes versions, the networks are 10.208.0.0/12 and 10.233.0.0/18.
Once provisioned, the Region, Gateway IP, and Subnet values cannot be changed.
Click + Create Cluster.
Result: A cluster is successfully created and listed in the clusters list for Private Node Pools.
Note:
To access the Kubernetes API provided by the cluster, download the kubeconfig file and use it with tools such as kubectl.
The maintenance window starts at the time of your choosing and remains open for the following four hours. All planned maintenance work will be performed within this window, however, not necessarily at the beginning.
You can delete a cluster for Public and Private Node Pools with the Kubernetes Manager in DCD.
Prerequisites:
Make sure you have the appropriate permissions and access to the chosen cluster.
The chosen cluster should be active.
Delete all the existing node pools associated with the chosen cluster.
To delete a cluster for Public Node Pools, follow these steps:
In the DCD, go to Menu > Containers > Managed Kubernetes.
Select a cluster you want to delete from the clusters list.
Click Delete.
Confirm your action by clicking OK.
Result: The cluster is successfully deleted from your clusters list for Public Node Pools.
To delete a cluster for Private Node Pools, follow these steps:
In the DCD, go to Menu > Containers > Managed Kubernetes.
Select a cluster you want to delete from the clusters list.
Click Delete.
Confirm your action by clicking OK.
Result: The cluster is successfully deleted from your clusters list for Private Node Pools.
Managed Kubernetes has a group privilege called Create Kubernetes Clusters. The privilege must be enabled for a group so that the group members inherit this privilege through group privilege settings.
Once the privilege is granted, contract users can create, update, and delete Kubernetes clusters using Managed Kubernetes.
Prerequisite: Make sure you have one or more Groups in the User Manager. To create one, see Create a group.
To set user privileges to create Kubernetes clusters, follow these steps:
In the DCD, open Management > Users & Groups under Users.
Select the Groups tab in the User Manager window.
Select the target group name from the Groups list.
Select the Create Kubernetes Clusters checkbox in the Privileges tab.
Result: The Create Kubernetes Clusters privilege is granted to all the members in the selected group.
You can revoke a user's Create Kubernetes Clusters privilege by removing the user from all the groups that have this privilege enabled.
Warning: You can revoke this privilege from a user by disabling Create Kubernetes Clusters for every group the user belongs to. In this case, all other members of those groups are also revoked from this privilege.
To revoke this privilege from a contract administrator, disable the administrator option on the user account. On performing this action, the contract administrator gets the role of a contract user and the privileges that were set up for the user before being an administrator will then be in effect.
You can update a cluster for Public and Private Node Pools with the Kubernetes Manager in DCD.
To update a cluster, follow these steps:
In the DCD, go to Menu > Containers > Managed Kubernetes.
Select a cluster from the list and go to the Cluster Settings tab.
(Optional) Update the Cluster name, or you can continue with the existing cluster name.
Note: Make sure to use the following naming convention for the Kubernetes cluster:
Can be a maximum of 63 characters in length.
Begins and ends with an alphanumeric character ([a-z0-9A-Z]).
Must not contain spaces or any other white-space characters.
Can contain dashes (-), underscores (_), and dots (.) in between.
Select the Version number of Kubernetes you want to run on the cluster from the drop-down list.
Select a preferred Maintenance day for maintenance from the drop-down list.
Select a preferred Maintenance time (UTC) for your maintenance window from the menu. Necessary maintenance for Managed Kubernetes will be performed accordingly.
Click Update Cluster to save your changes.
Result: The cluster for your Public Node Pools is successfully updated.
To update a cluster, follow these steps:
In the DCD, go to Menu > Containers > Managed Kubernetes.
Select a cluster from the list and go to the Cluster Settings tab.
(Optional) Update the Cluster name, or you can continue with the existing cluster name.
Note: Make sure to use the following naming convention for the Kubernetes cluster:
Can be a maximum of 63 characters in length.
Begins and ends with an alphanumeric character ([a-z0-9A-Z]).
Must not contain spaces or any other white-space characters.
Can contain dashes (-), underscores (_), and dots (.) in between.
Select the Version number of Kubernetes you want to run on the cluster from the drop-down list.
Select a preferred Maintenance day for maintenance from the drop-down list.
Select a preferred Maintenance time (UTC) for your maintenance window from the menu. Necessary maintenance for Managed Kubernetes will be performed accordingly.
(Optional) Add an S3 bucket in the Logging to S3 drop-down list to enable logging to the bucket. You can also disable logging to S3 for your Kubernetes cluster.
(Optional) Add the individual IP addresses or CIDRs that need access to the control plane in the Restrict Access by IP field using the + Add IP drop-down menu. Select Allow IP to control access to the KubeAPI server of your cluster. Only requests from the defined IPs or networks are allowed.
Click Update Cluster to save your changes.
Note: Once provisioned, you cannot update the Subnet and Gateway IP values.
Result: The cluster for your Private Node Pools is successfully updated.
A kubeconfig file is used to configure access to Kubernetes.
You can download the kubeconfig file:
You can download the kubeconfig file using configuration management tools such as IonosCTL CLI, Ansible, and Terraform. Following are a few options to retrieve the kubeconfig files.
K8s Cluster Id
k8s_cluster config_file
filename
Note: If you do not want to use any tools like IonosCTL CLI, Ansible, or Terraform, you can retrieve the kubeconfig file directly from the Get Kubernetes Configuration File API using tools like cURL or Wget.
In the DCD, go to Menu > Containers > Managed Kubernetes.
In Kubernetes Manager, select a cluster from the cluster list.
In the Cluster Settings tab, select either kubeconfig.yaml or kubeconfig.json from the drop-down list to download the kubeconfig file.
Alternatively, you can also select the Kubernetes element in the Workspace and download the kubeconfig file in the Inspector pane.
Result: The kubeconfig file is successfully downloaded.
You can download the kubeconfig file:
You can download the kubeconfig file using configuration management tools such as IonosCTL CLI, Ansible, and Terraform. Following are a few options to retrieve the kubeconfig files.
K8s Cluster Id
k8s_cluster config_file
filename
Note: If you do not want to use any tools like IonosCTL CLI, Ansible, or Terraform, you can retrieve the kubeconfig file directly from the Get Kubernetes Configuration File API using tools like cURL or Wget.
To download the kubeconfig file using Kubernetes Manager, follow these steps:
In the DCD, go to Menu > Containers > Managed Kubernetes.
In Kubernetes Manager, select a cluster from the cluster list.
In the Cluster Settings tab, select either kubeconfig.yaml or kubeconfig.json from the drop-down list to download the kubeconfig file.
Alternatively, you can also select the Kubernetes element in the Workspace and download the kubeconfig file in the Inspector pane.
Result: The kubeconfig file is successfully downloaded.
Note: Only administrators can retrieve the kubeconfig file without a node pool. All other users need to create a node pool first.
You can delete node pools with the Kubernetes Manager in DCD.
To delete a node pool, follow these steps:
Select a cluster from the list and go to the Node pools in Cluster tab.
Select a node pool from the list you want to delete.
Select Delete.
Result: Managed Kubernetes will remove the resources from the target data center and the node pool is successfully deleted.
To delete a node pool, follow these steps:
Select a cluster from the list and go to the Node pools in Cluster tab.
Select a node pool from the list you want to delete.
Select Delete.
Result: Managed Kubernetes will remove the resources from the target data center and the node pool is successfully deleted.
Note: Avoid accessing the target data center while Managed Kubernetes is provisioning nodes, as concurrent manual interaction can cause undesirable results.
Prerequisite: Only contract owners, administrators, and users having Create Kubernetes Clusters permission can create node pools. Other user types have read-only access.
You can create a node pool using the Kubernetes Manager in DCD for Public Node Pools.
In the DCD, go to Containers > Managed Kubernetes.
Select a cluster from the list and go to the Node pools in Cluster tab.
Select + Create node pool.
In Create Kubernetes node pool, configure your node pools.
In Pool Settings, provide the following information:
Pool Name: Enter a name that aligns with the Kubernetes naming convention.
Data Center: Select an option from the drop-down list. Your node pool will be included in the selected data center. If you do not have a data center, you must first create one.
Node pool version: Select an appropriate version from the drop-down list.
Node count: Select the number of nodes in the node count.
Autoscale: Select the checkbox to enable autoscale and provide a minimum and maximum number of the total nodes.
Attached private LANs: Select + and choose a private LAN from the drop-down list.
Reserved IPs: Select + and choose a reserved IP address from the drop-down list.
In the Node Pool Template, provide the following information:
CPU: Select an option from the drop-down list.
Cores: Select the number of cores.
RAM: Select the size of your RAM.
Availability Zone: Select a zone from the drop-down list.
Storage Type: Select a type of storage from the drop-down list.
Storage Size: Select the storage size for your storage.
Note: Make sure to use the following naming convention for the Kubernetes cluster:
Can be a maximum of 63 characters in length.
Begins and ends with an alphanumeric character ([a-z0-9A-Z]).
Must not contain spaces or any other white-space characters.
Can contain dashes (-), underscores (_), and dots (.) in between.
Select Create node pool.
Result: A node pool is successfully created and can be used once it reaches the active state.
When a node fails or becomes unresponsive you can rebuild that node. This will create a new node with an identical configuration that will replace the failed node.
Prerequisite: Make sure your node is active.
Select a cluster from the list and go to the Node pools in Cluster tab.
Select the node pool that contains the failed node.
Select Rebuild.
Confirm your selection by selecting OK.
Result:
Managed Kubernetes starts a process that is based on the Node Template. The template creates and configures a new node. Once the status is updated to ACTIVE, Managed Kubernetes migrates all the pods from the faulty node to the new node.
The faulty node is deleted once it is empty.
While this operation occurs, the node pool will have an extra billable active node.
The node pool is successfully rebuilt.
You can create a node pool using the Kubernetes Manager in DCD for Private Node Pools.
In the DCD, go to Containers > Managed Kubernetes.
Select a cluster from the list and go to the Node pools in Cluster tab.
Select + Create node pool.
In Create Kubernetes node pool, configure your node pools.
In Pool Settings, provide the following information:
Pool Name: Enter a name that aligns with the Kubernetes naming convention.
Data Center: Select an option from the drop-down list. Your node pool will be included in the selected data center. If you do not have a data center, you must first create one.
Node pool version: Select an appropriate version from the drop-down list.
Node count: Select the number of nodes in the node count.
Autoscale: Select the checkbox to enable autoscale and provide a minimum and maximum number of the total nodes.
Attached private LANs: Select + and choose a private LAN from the drop-down list.
Reserved IPs: Select + and choose a reserved IP address from the drop-down list.
In the Node Pool Template, provide the following information:
CPU: Select an option from the drop-down list.
Cores: Select the number of cores.
RAM: Select the size of your RAM.
Availability Zone: Select a zone from the drop-down list.
Storage Type: Select a type of storage from the drop-down list.
Storage Size: Select the storage size for your storage.
Note: Make sure to use the following naming convention for the Kubernetes cluster:
Can be a maximum of 63 characters in length.
Begins and ends with an alphanumeric character ([a-z0-9A-Z]).
Must not contain spaces or any other white-space characters.
Can contain dashes (-), underscores (_), and dots (.) in between.
Select Create node pool.
Result: A node pool is successfully created and can be used once it reaches the Active state.
When a node fails or becomes unresponsive, you can rebuild that node. This will create a new node with an identical configuration that will replace the failed node.
Prerequisite: Make sure your node is active.
Select a cluster from the list and go to the Node pools in Cluster tab.
Select the node pool that contains the failed node.
Select Rebuild.
Confirm your selection by selecting OK.
Result:
Managed Kubernetes starts a process that is based on the Node Template. The template creates and configures a new node. Once the status is updated to ACTIVE, Managed Kubernetes migrates all the pods from the faulty node to the new node.
The faulty node is deleted once it is empty.
While this operation occurs, the node pool will have an extra billable active node.
The node pool is successfully rebuilt.
Avoid accessing the target data center while Managed Kubernetes is provisioning nodes, as concurrent manual interaction can cause undesirable results.
You can update Public and Private Node Pools with the Kubernetes Manager in DCD.
To update a node pool, follow these steps:
Select a cluster from the list and go to the Node pools in Cluster tab.
Select the Kubernetes Version you want to run in the cluster from the drop-down list.
Select the number of nodes in the Node Count.
Select the checkbox to enable Autoscale and provide a minimum and maximum number of the total nodes.
Select + next to the Labels field. Provide a Name and Value for your label.
Select + next to the Annotations field. Provide a Name and Value for your annotation.
Select + next to the Reserved IPs field and choose an IP address from the drop-down list.
Select + next to the Attached private LANs field and choose a private LAN from the drop-down list.
Select the Maintenance day and Maintenance time (UTC) for your maintenance window. The necessary maintenance for Managed Kubernetes will be performed accordingly.
Select Update node pool.
Managed Kubernetes will start to align the resources in the target data center. If you have selected a new version of Kubernetes, the operation may take a while, and the node pool will be available for further changes once it reaches the Active state.
Result: A node pool is successfully updated.
To update a node pool, follow these steps:
Select a cluster from the list and go to the Node pools in Cluster tab.
Select the Kubernetes Version you want to run in the cluster from the drop-down list.
Select the number of nodes in the Node Count.
Select the checkbox to enable Autoscale and provide a minimum and maximum number of the total nodes.
Select + next to the Labels field. Provide a Name and Value for your label.
Select + next to the Annotations field. Provide a Name and Value for your annotation.
Select + next to the Reserved IPs field and choose an IP address from the drop-down list.
Select + next to the Attached private LANs field and choose a private LAN from the drop-down list.
Select the Maintenance day and Maintenance time (UTC) for your maintenance window. The necessary maintenance for Managed Kubernetes will be performed accordingly.
Select Update node pool.
Managed Kubernetes will start to align the resources in the target data center. If you have selected a new version of Kubernetes, the operation may take a while, and the node pool will be available for further changes once it reaches the Active state.
Result: A node pool is successfully updated.
Note:
Avoid accessing the target data center while Managed Kubernetes is provisioning nodes, as concurrent manual interaction can cause undesirable results.
The maintenance window starts at the time of your choice and remains open for the next four hours. All planned maintenance work will be performed within this window, but not necessarily at the beginning.
You can add user groups and assign permissions for Public and Private Node Pools with the Kubernetes Manager in DCD.
In the clusters for Public Node Pools, nodes only have external IP addresses, which means that the nodes and pods are exposed to the internet.
To set up the security settings, follow these steps:
In the DCD, go to Menu > Containers > Managed Kubernetes.
In Kubernetes Manager, select a cluster.
Go to the Security tab and click Visible to Groups.
To enable access, select the Edit or Share checkbox for a group.
Note: To disable access, select the group for which you want to disable access. Clear either the Edit or Share checkboxes. You can also directly click Remove Group.
Result: The cluster for Public Node Pools now has the newly assigned permissions.
In the clusters for Private Node Pools, nodes only have internal IP addresses, which means that the nodes and pods are isolated from the internet. Internal IP addresses for nodes come from the primary IP address range of the subnet you choose for the cluster.
To set up the security settings, follow these steps:
In the DCD, go to Menu > Containers > Managed Kubernetes.
In Kubernetes Manager, select a cluster.
Go to the Security tab and click Visible to Groups.
To enable access, select the Edit or Share checkbox for a group.
Note: To disable access, select the group you want to disable the access for. Clear either the Edit or Share checkboxes. You can also directly click Remove Group.
Result: The cluster for Private Node Pools now has the newly assigned permissions.
All Kubernetes API instructions can be found in the main Cloud API specification file.
To access the Kubernetes API, which the cluster provides, you can download the kubeconfig file and use it with tools such as kubectl.
GET https://api.ionos.com/cloudapi/v6/k8s/{k8sClusterId}/kubeconfig
Retrieve a configuration file for the specified Kubernetes cluster, in YAML or JSON format, as defined in the Accept header; the default Accept header is application/yaml.
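A sketch of retrieving the file with cURL; token authentication is assumed here (HTTP Basic authentication with -u also works), and the cluster ID is a placeholder:

```bash
curl -H "Authorization: Bearer $IONOS_TOKEN" \
     -H "Accept: application/yaml" \
     "https://api.ionos.com/cloudapi/v6/k8s/<k8sClusterId>/kubeconfig" \
     -o kubeconfig.yaml
```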
All Managed Kubernetes resources, such as clusters and node pools, are subject to an automated weekly maintenance process. All changes to a cluster or node pools that may cause service interruption, such as upgrades, are executed during maintenance. During the maintenance window, you may encounter uncontrolled disconnections and an inability to connect to the cluster.
The upgrade process during maintenance respects the selected Kubernetes version of the cluster. The upgrade process does not upgrade to another Kubernetes major, minor, or patch version unless the current cluster or node pool version reaches its end of life. In such instances, the cluster or node pool will be updated to the next minor version that is active.
The maintenance window consists of two parts. The first part specifies the day of the week, while the second part specifies the expected time. The following example shows a maintenance window configuration:
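A sketch of the corresponding Cloud API property; the field names dayOfTheWeek and time are assumptions based on the v6 cluster and node pool schemas, and the values are placeholders:

```json
{
  "maintenanceWindow": {
    "dayOfTheWeek": "Sunday",
    "time": "02:00:00Z"
  }
}
```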
For more information, see .
During cluster maintenance, control plane components are upgraded to the newest version available.
During the maintenance of a particular node pool, all nodes of this pool will be replaced by new nodes. Nodes are replaced one after the other, starting with the oldest node. During the node replacement, first, a new node is created and added to the cluster. Then, the old node is drained and removed from the cluster. Node pool upgrade considers the four-hour maintenance window. The upgrade process will be continued in the next node pool maintenance if it fails to upgrade all nodes of the node pool within the four-hour maintenance window.
The maintenance process first tries to drain a node gracefully, considering the given PDBs for one hour. If this fails, the node is drained, ignoring pod disruption budgets.
The following steps guide you through the process of connecting with Persistent Volume Claims (PVCs) in a Managed Kubernetes cluster.
Note: Network File Storage (NFS) Kubernetes integration is currently available on a request basis. To access this product, please contact your sales representative or .
Prerequisites:
Ensure that the NFS volume and the Managed Kubernetes node pool are connected to the same private LAN.
Node pools can only retrieve their IPs in the private LAN via Dynamic Host Configuration Protocol (DHCP). Each private LAN has its own subnet distributed by the DHCP server.
The subnet of a private LAN becomes visible via the API when attaching a server to the LAN with a NIC, or by opening a node shell on the Kubernetes Cluster and inspecting the network interfaces.
To connect NFS with PVCs in a Managed Kubernetes cluster via the DCD (Data Center Designer), follow these steps:
Drag a vCPU Server into the workspace to add a new server in the DCD.
Click Add NIC. This action creates a new Network Interface Controller (NIC) with a new private LAN. Note the LAN number.
Click PROVISION CHANGES.
Once the changes are provisioned, inspect the server's NIC to see its primary IP in the private LAN, for example, 10.7.228.11. This reveals the private LAN's DHCP subnet, for example, 10.7.228.0/24.
Provision a Kubernetes Cluster.
Provision a node pool for the Kubernetes Cluster and attach the previously created private LAN. Ensure that DHCP is enabled.
Provision an NFS Cluster with the same private LAN attached.
Assign a static IP to the NFS cluster within the same subnet identified earlier. For example, 10.7.228.5/24.
Provision an NFS Share. For more information, see .
Add the 10.7.228.0/24 subnet and a client group to establish the necessary permissions for Kubernetes and any other hosts within that subnet to access NFS.
Delete the vCPU Server created in the first step.
Choose one of the following NFS provisioners:
Both provisioners create a custom StorageClass configured with an NFS server. Use the static IP assigned earlier, for example, 10.7.228.5, as the server and /SHARE_UUID/SHARE_NAME as the share path.
Managed Kubernetes requires a specific setting for the StorageClass because PersistentVolumes need a specific mount setting. Apply the following command:
When creating PVCs, specify the custom StorageClass name. The PVCs should then get provisioned using the specified provisioner.
Install the helm chart for csi-driver-nfs:
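A sketch of the Helm installation, using the chart repository published by the kubernetes-csi project; pin a chart --version for reproducible installs:

```bash
helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
helm repo update
helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs --namespace kube-system
```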
Create a StorageClass with the necessary parameters:
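A sketch of such a StorageClass, using the static NFS IP and share path from the steps above; the mount options shown are an example assumption — replace them with the mount setting required in your environment:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io            # csi-driver-nfs provisioner
parameters:
  server: 10.7.228.5                   # static IP assigned to the NFS cluster
  share: /SHARE_UUID/SHARE_NAME        # share path of the provisioned NFS share
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4.2                        # example assumption; adjust to the required mount setting
```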
Create a PVC:
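A matching PVC sketch that references the StorageClass above; the size and access mode are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany          # NFS supports shared read-write access
  storageClassName: nfs-csi
  resources:
    requests:
      storage: 10Gi
```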
Result: The Managed Kubernetes cluster can now mount NFS volumes as PVCs.
For further assistance or questions, contact .
You can retrieve the kubeconfig file and save it using a single command from IonosCTL CLI. For more information, see .
ionosctl k8s kubeconfig get --cluster-id CLUSTER_ID
You can retrieve the kubeconfig by specifying the kubeconfig parameter in the Ansible YAML file.
For more information, see .
You can interact with the kubeconfig resources by providing proper configurations.
For more information, see .
You can retrieve the kubeconfig file and save it using a single command from IonosCTL CLI. For more information, see .
ionosctl k8s kubeconfig get --cluster-id CLUSTER_ID
You can retrieve the kubeconfig by specifying the kubeconfig parameter in the Ansible YAML file.
For more information, see .
You can interact with the kubeconfig resources by providing proper configurations.
For more information, see .
Refer to the for detailed instructions.
| Parameter | Type | Description |
| --- | --- | --- |
| k8sClusterId* | String | The unique ID of the Kubernetes cluster. |
| depth | String | Controls the detail depth of the response objects. |
| X-Contract-Number | Integer | Users with multiple contracts must provide the contract number, for which all API requests are to be executed. |
A node pool upgrade generally happens automatically during weekly maintenance. You can also trigger it manually, for example, when upgrading to a higher version of Kubernetes. In any case, the node pool upgrade will result in rebuilding all nodes belonging to the node pool.
During the upgrade, an old node in a node pool is replaced by a new node. This may be necessary for several reasons:
Software updates: Since the nodes are considered immutable, IONOS Cloud does not install software updates on the running nodes but replaces them with new ones.
Configuration changes: Some configuration changes require replacing all included nodes.
Considerations: Multiple node pools of the same cluster can be upgraded at the same time. A node pool upgrade locks the affected node pool, and you cannot make any changes until the upgrade is complete. During a node pool upgrade, all of its nodes are replaced one by one, starting with the oldest one. Depending on the number of nodes and your workload, the upgrade can take several hours.
If the upgrade is initiated as a part of weekly maintenance, some nodes may not be replaced to avoid exceeding the maintenance window.
Make sure that you have not exceeded your contract quota for servers; otherwise, you will not be able to provision a new node to replace an existing one.
The rebuilding process consists of the following steps:
Provision a new node to replace the old one and wait for it to register in the control plane.
Exclude the old node from scheduling to avoid deploying additional pods to it.
Drain all existing workload from the old node.
At first, IONOS Cloud tries to drain the node gracefully.
- PodDisruptionBudgets (PDBs) are enforced for up to 1 hour. For more information, see Specifying a Disruption Budget for your Application.
- GracefulTerminationPeriod for pods is respected for up to 1 hour. For more information, see Termination of Pods.
If the process takes more than 1 hour, all remaining pods are deleted.
Delete the old node from the node pool.
You need to consider the following node drain updates and their impact on the maintenance procedure:
Under the current platform setup, a node drain considers PDBs. If evicting a pod would violate an existing PDB, the drain fails. If the drain of a node fails, the attempt to delete this node also fails.
The problems with unprepared workloads or misconfigured PDBs can lead to failing drains, node deletions, and resulting failure in node pool maintenance. To prevent this issue, the node drain will split into two stages. In the first stage, the system will continue to try to gracefully evict the pods from the node. If this fails, the second stage will forcefully drain the node by deleting all remaining pods. This deletion will bypass checking PDBs. This prevents nodes from failing during the drain.
As a result of the two-stage procedure, the process will stop failing due to unprepared workloads or misconfigured PDBs. However, this change may cause interruptions to workloads that are not prepared for maintenance. During maintenance, nodes are replaced one by one. For each node in a node pool, a new node is created. After that, the old node is drained and then deleted.
At times, a pod cannot return to its READY state after being evicted from a node during maintenance. Previously, if a PDB was in place for that pod's workload, this led to failed maintenance, and the rest of the workload was left untouched. With the force drain behavior, the maintenance process proceeds, and all parts of the workload are evicted and may end up in a non-READY state. This can interrupt the workload. To prevent this, make sure that your workload pods are prepared for eviction at any time.
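One way to prepare a workload for eviction is to run several replicas and define a PodDisruptionBudget for them. The following is a minimal sketch with illustrative names and values:
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 1          # keep at least one pod running during voluntary disruptions
  selector:
    matchLabels:
      app: my-app
Keep in mind that an overly strict budget, for example, a minAvailable equal to the replica count, can still block graceful eviction and trigger the forceful second stage described above.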
Following are a few limitations that you can encounter while using both Public and Private Node Pools:
The maximum number of pods supported per node is 110. It is a Kubernetes default value.
The recommended maximum number of nodes per node pool is 20.
The maximum number of nodes in a node pool is 100.
A total of 500 node pools per cluster is supported.
The recommended maximum number of node pools per cluster is 50.
The maximum number of supported nodes per cluster is 5000.
Following are a few limitations that you can encounter while using Private Node Pools:
Managed Kubernetes clusters for Private Node Pools are bound to one region. When you create a node pool, you need to provide a data center, which has to be in the same location as defined in the cluster. You can create Private Node Pools only in a Virtual Data Center (VDC) that shares the region with the Managed Kubernetes cluster.
Kubernetes services of type LoadBalancer are currently not supported.
Static node Internet Protocol (IP) addresses are not supported.
Managed Kubernetes facilitates the fully automated setup of Kubernetes clusters. It also simplifies and supports the automation of Continuous Integration and Continuous Delivery/Continuous Deployment (CI/CD) pipelines for testing and deployment.
The IONOS Managed Kubernetes solution offers automatic updates and security fixes, version and upgrade provisioning, a highly available and geo-redundant control plane, and full cluster administrator level access to the Kubernetes API.
Everything related to Managed Kubernetes can be controlled in the DCD via the dedicated Kubernetes Manager. The Manager provides a complete overview of your provisioned Kubernetes clusters and node pools, including their statuses. The Manager allows you to create and manage clusters, create and manage node pools, and download the kubeconfig file.
The control plane manages the worker nodes and the pods in the cluster. In production environments, the control plane usually runs across multiple computers. A cluster usually runs multiple nodes, providing fault tolerance and high availability.
Kube-controller-manager manages controllers that provide functionalities such as deployments, services, etc. For more information, see kube-controller-manager.
The API server is a component of the Kubernetes control plane that exposes the Kubernetes API. The API server is the front end for the Kubernetes control plane. Kube-apiserver is designed to scale horizontally. It scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between those instances. For more information, see kube-apiserver.
Kube-scheduler distributes pods to nodes. Pods must be created with resource limits so that nodes are not overcommitted. For more information, see kube-scheduler.
Managed Kubernetes supports regional control planes that provide a distributed and highly available management infrastructure within a chosen region. It also offers a hidden control plane for both Public and Private Node Pools. Control plane components like kube-apiserver, kube-scheduler, and kube-controller-manager are not visible to the users and cannot be modified directly.
You can interact with the kube-apiserver only through its REST API.
The hidden control plane is deployed on Virtual Machines (VMs) running in a geo-redundant cluster in the chosen region. For more information about the control plane components, refer to the Kubernetes documentation and the Kubernetes API.
The Managed Kubernetes clusters have a Calico CNI plugin. Its primary function is to automatically assign IP addresses, set up network interfaces, and establish connectivity between the pods. Calico also allows the use of network policies in the Kubernetes cluster. For more information, see Kubernetes CNI and Network Policies.
Managed Kubernetes does not currently offer an option to choose a different CNI plugin, nor does it support customers who install one on their own. The CNI affects the entire cluster network, so changes to Calico or the installation of a different plugin can cause cluster-wide issues and failed resources.
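Because Calico enforces standard Kubernetes network policies, you can restrict pod traffic declaratively. The following is a minimal sketch with illustrative names that allows ingress to a backend only from pods labeled as its frontend:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
  namespace: my-namespace
spec:
  podSelector:
    matchLabels:
      app: my-backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: my-frontend
      ports:
        - protocol: TCP
          port: 8080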
The CSI driver runs as a deployment in the control plane to manage volumes for Persistent Volume Claims (PVCs) in the IONOS Cloud and to attach them to nodes.
Note: Network File Storage (NFS) Kubernetes integration is currently available on a request basis. To access this product, please contact your sales representative or IONOS Cloud Support.
Please refer to the Network File Storage (NFS) Kubernetes integration documentation for more information.
IONOS Cloud volumes are represented as Persistent Volume (PV) resources in Kubernetes. The PV reclaim policy determines what happens to the volume when the PV is deleted. The Retain reclaim policy skips deletion of the volume and is meant for manual reclamation of resources. In the case of dynamically provisioned volumes, the CSI driver manages the PV; the user cannot delete the volume even after the PV is deleted.
The PV has resource finalizers that ensure that Cloud resources are deleted. The finalizers are removed by the system after Cloud resources are cleaned up, so removing them prematurely is likely to leave resources behind.
Importing a volume as a statically managed PV may be desired when a dynamically provisioned volume is left over or when an external volume should be exposed to Kubernetes workloads.
Warning: Do not import your Managed Kubernetes node's root volumes. They are fully managed outside the Kubernetes cluster, and importing them will cause conflicts that may lead to service disruptions and data loss.
Dynamically provisioned PVs are created by the CSI driver, which populates the resource ownership annotations and information gathered from the IONOS Cloud API. For statically managed PVs, this data must be provided by the user.
The following fields should be modified according to the volume that is imported:
spec.capacity.storage: Should contain the size of the volume with the suffix G (Gigabyte).
spec.csi.volumeHandle: The volume path in the IONOS Cloud API. Omit the leading slash (/).
spec.nodeAffinity.required.nodeSelectorTerms.matchExpressions.values: Must contain the Virtual Data Center ID from the volume path.
Creating this PV allows it to be used in a Pod by referencing it in a PVC's spec.volumeName.
Note: Be aware that the imported volume will only be deleted if it is Bound.
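A hedged sketch of such a statically managed PV is shown below. The CSI driver name and the node affinity key are assumptions for illustration; copy the exact values from an existing dynamically provisioned PV in your cluster, and replace the placeholder IDs with the values from your volume path:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: imported-volume
spec:
  capacity:
    storage: 10G                   # size of the imported volume, with suffix G
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: cloud.ionos.com        # assumption: copy the exact driver name from an existing PV
    volumeHandle: datacenters/DATACENTER_ID/volumes/VOLUME_ID   # volume path without the leading slash
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone   # assumption: copy the exact key from an existing PV
              operator: In
              values:
                - DATACENTER_ID                  # Virtual Data Center ID from the volume path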
A cluster autoscaler is a tool that automatically adjusts the size of the Kubernetes cluster when one of the following conditions is true:
There are pods that failed to run in the cluster due to insufficient resources.
There are nodes in the cluster that have been underutilized for an extended period of time, and their pods can be placed on other existing nodes.
For more information, see Cluster Autoscaler and its FAQ.
The cluster autoscaler increases the node pool if pods cannot be scheduled due to a lack of resources and if adding a node from that node pool would remedy the situation. If no node pool can provide nodes suitable for scheduling the pod, the autoscaler does not scale up.
The cluster autoscaler reduces the node pool if a node is not fully utilized for an extended period of time. A node is underutilized when it has a light load, and all of its pods can be moved to other nodes.
Yes, only node pools with active autoscaling are managed by the autoscaler.
No, the autoscaler cannot increase the number of nodes in the node pool above the maximum specified by the user or decrease it below the specified minimum. In addition, the quota for a specific contract cannot be exceeded using the autoscaler. The autoscaler cannot reduce the number of nodes in the node pool to 0.
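As a sketch, the minimum and maximum node counts used by the autoscaler are configured per node pool, for example via the IonosCTL CLI; the flag names below are assumptions, so check ionosctl k8s nodepool update --help for the exact syntax:
ionosctl k8s nodepool update --cluster-id CLUSTER_ID --nodepool-id NODEPOOL_ID --min-node-count 2 --max-node-count 6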
Yes, it is possible to enable and configure encryption of secret data. For more information, see Encrypting Confidential Data at Rest.
All components installed in the cluster are updated. This includes the K8s control plane itself, CSI, CCM, Calico, and CoreDNS. With cluster maintenance, several components that are visible to customers are updated and reset to our values. For example, changes to CoreDNS are not permanent and will be removed at the next maintenance. It is currently not possible to set your own DNS records in the CoreDNS configuration, but this will be possible later. Managed components such as the CSI driver, CCM, Calico, and CoreDNS are updated regularly.
The maintenance time window is limited to four hours for Public and Private Node Pools. If not all of the nodes are rebuilt within this time, the remaining nodes will be replaced at the next scheduled maintenance. To keep updates within the window, it is recommended to create node pools with no more than 20 nodes.
If old nodes are replaced with new ones during maintenance, the new nodes will subsequently have different or new public IP addresses. You can pre-specify a list of public IP addresses from which the addresses for new nodes are taken. This keeps the list of possible host addresses limited and predictable, for example, so that they can be allowed through a whitelist.
The Kubernetes cluster control plane and the corresponding node pools can have different versions of Kubernetes. Node pools can use older versions than the control plane, but not vice versa. The difference between the minor versions must not be more than 1.
There is a distinction between patch version updates and minor version updates. You must initiate all version updates. Once initiated, the version updates are performed immediately. However, forced updates will also occur if the version used by you is so old that we can no longer support it. Typically, affected users receive a support notification two weeks before a forced update.
The Kubernetes API is secured with Transport Layer Security (TLS). Traffic between the nodes and the control plane is secured by mutual TLS, which means that both sides check whether they are talking to the expected remote station.
If clusters or node pools are created or modified, the operation may fail, and the cluster or node pool will go into a Failed status. In this case, our team is already informed because we monitor it. However, sometimes it can also be difficult for us to rectify the error since the reason can be a conflict with the client's requirements. For example, if a LAN is specified that does not exist at all or no longer exists, a service update becomes impossible.
If a node is in a NotReady state and runs out of RAM, an infinite loop can occur in which the system attempts to free memory. The node cannot be used because the executables must be reloaded from the disk, and the node is busy with disk Input/Output (I/O). To prevent such scenarios, we recommend configuring resource management. For more information, see Requests and limits.
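For example, setting requests and limits on your containers helps the scheduler avoid overcommitting node memory; the values below are illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: resource-limited-app
spec:
  containers:
    - name: app
      image: nginx:1.25        # illustrative image
      resources:
        requests:
          memory: "256Mi"
          cpu: "250m"
        limits:
          memory: "512Mi"
          cpu: "500m"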
IONOS Managed Kubernetes currently does not support the usage of your own Certificate Authorities (CAs) or your own TLS certificates in the Kubernetes cluster.
You can reserve Public Node Pools in multiple locations in the same cluster, which allows simple geo-redundancy to be configured and implemented. The control plane is geo-redundant (within Germany), with several replicas running in different locations.
If a node is unavailable, for example, because too many pods without resource limits are running on it, it can be replaced. To do this, you can use the following API endpoint:
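A sketch of such a node replacement call against the IONOS Cloud API v6 is shown below; the path should be verified in the API reference, and the IDs are placeholders:
curl --request POST \
     --header "Authorization: Bearer $IONOS_TOKEN" \
     "https://api.ionos.com/cloudapi/v6/k8s/K8S_CLUSTER_ID/nodepools/NODEPOOL_ID/nodes/NODE_ID/replace"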
A Private Node Pool ensures that the nodes are not connected directly to the internet; hence, the inter-node network traffic stays inside the private network. However, the control plane is still exposed to the internet and can be protected by restricting IP access.
Clusters and node pools turn yellow when a user or an automated maintenance process initiates an action on the resources. This locks the cluster and node pool resources against updates until the process is finished, so they do not respond to changes during this time.
A NAT is required to enable outbound traffic between the cluster nodes and the control plane. For example, to be able to retrieve container images.
Kubernetes clusters support public networks only for VMs, but not for LAN networks.
Yes, if your node pool is configured to have a network interface in the same network as the VMs that you want to access, then you can add nodes.
Public Node Pools within a Kubernetes cluster are configured by defining a public dedicated node pool. Networking settings are specified to include public IP addresses for external access.
Private Node Pools within a Kubernetes cluster are configured by ensuring that each node pool has a distinct private network, while nodes within the same pool share a common private network.
It is crucial to set up these node pools with a network interface aligned with the network of the intended VMs when adding nodes to Kubernetes clusters.
The Private Cross Connect is required to enable node-to-node communication across all node pools belonging to the same Kubernetes cluster. This ensures that node pools in different VDCs can communicate.
No, the private NAT Gateway is not intended to be used for arbitrary nodes.
The Public Node Pools support the LoadBalancer service type. However, the Private Node Pools currently do not support the LoadBalancer service type.