Managed Kubernetes FAQs

Get answers to the most frequently asked questions about Kubernetes in IONOS DCD.

What is the function of Managed Kubernetes?

Managed Kubernetes facilitates the fully automated setup of Kubernetes clusters. It also simplifies the automation of CI/CD pipelines for testing and deployment.

What does Kubernetes offer for providing transparency and control?

Our managed solution offers automatic updates and security fixes, versioning and upgrade provisioning, a highly available and geo-redundant control plane, and full cluster-admin-level access to the Kubernetes API.

How does the Kubernetes Manager work?

Everything related to Managed Kubernetes can be controlled in the DCD via the dedicated Kubernetes Manager. It provides a complete overview of your provisioned Kubernetes clusters and node pools, including their status. The Kubernetes Manager allows you to create and manage clusters, create and manage node pools, and download the Kubeconfig file.
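Once downloaded, the Kubeconfig file can be used with standard kubectl commands, for example (the file path is illustrative):

```shell
# Point kubectl at the downloaded Kubeconfig (path is an example)
export KUBECONFIG=~/Downloads/kubeconfig.yaml

# Verify access to the cluster and list its worker nodes
kubectl cluster-info
kubectl get nodes
```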

See also: The Kubernetes Manager

What is the control plane for?

The control plane manages the worker nodes and the pods in the cluster. In production environments, the control plane usually runs across multiple computers. A cluster usually runs multiple nodes, providing fault tolerance and high availability.

How does a kube-controller-manager work?

The kube-controller-manager runs the controllers that provide functionality such as Deployments, Services, and so on.

See also: kube-controller-manager

What is the function of a kube-apiserver?

The API server is a component of the Kubernetes control plane that exposes the Kubernetes API. The API server is the front end for the Kubernetes control plane. Kube-apiserver is designed to scale horizontally. It scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between those instances.

See also: kube-apiserver

What is the function of a kube-scheduler?

The kube-scheduler assigns pods to nodes. Pods should be created with resource requests and limits so that nodes are not over-committed.
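Resource requests and limits are set per container in the Pod specification; the scheduler uses the requests when placing the pod. A minimal example (names and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx:1.25
      resources:
        requests:        # resources the scheduler reserves on a node
          cpu: 250m
          memory: 256Mi
        limits:          # hard caps enforced at runtime
          cpu: 500m
          memory: 512Mi
```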

See also: kube-scheduler

Where are the control plane nodes?

Managed Kubernetes offers a hidden control plane. Control plane components such as kube-apiserver, kube-scheduler, and kube-controller-manager are not visible to the customer and cannot be modified directly. The kube-apiserver can only be interacted with through its REST API.

The hidden control plane is deployed on Virtual Machines that are running in a geo-redundant cluster in the area of Frankfurt am Main, Germany.


What CNI plugin is installed in Managed Kubernetes clusters?

The Managed Kubernetes clusters have a built-in Calico Container Network Interface (CNI) plugin. Its primary function is to provide Pod-to-Pod communication. Calico also allows the use of network policies in the Kubernetes cluster.
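Network policies are expressed with the standard Kubernetes NetworkPolicy resource, which Calico enforces. For example (namespace, names, labels, and port are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend        # the policy applies to backend pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080
```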


Can I choose or install a different CNI plugin?

Managed Kubernetes does not currently offer an option to choose a different CNI plugin, nor does it support customers who install one on their own. The CNI affects the entire cluster network, so modifying Calico or installing a different plugin can cause cluster-wide issues and failed resources.

What is CSI?

The CSI (Container Storage Interface) driver runs as a Deployment in the control plane. It manages volumes for PVCs (PersistentVolumeClaims) in the IONOS Cloud and attaches them to nodes.

How to provision NFS volumes?

The "soft" mount option is required when creating a PersistentVolume with an NFS source in Kubernetes. It can be set either in the mount options list of the PersistentVolume specification (spec.mountOptions) or via the annotation key volume.beta.kubernetes.io/mount-options, whose value is expected to be a comma-separated list of mount options. If neither contains the "soft" mount option, creation of the PersistentVolume will fail.

Note that the use of the annotation is still supported but will be deprecated in the future. See also: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#mount-options

Mount options in the PersistentVolume specification can also be set using the StorageClass.

Example for PV spec:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-share
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs
  mountOptions:
    - soft
  nfs:
    path: /tmp
    server: 172.17.0.2

Example for annotation:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-share
  annotations:
    volume.beta.kubernetes.io/mount-options: "soft"
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs
  nfs:
    path: /tmp
    server: 172.17.0.2

Example for using StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-on-ionos
mountOptions:
  - soft
provisioner: my.nfs.provisioner
persistentVolumeReclaimPolicy: Delete

What is a cluster autoscaler?

A cluster autoscaler is a tool that automatically adjusts the size of the Kubernetes cluster when one of the following conditions is true:

  • there are pods that failed to run in the cluster due to insufficient resources;

  • there are nodes in the cluster that have been underutilized for an extended period of time and their pods can be placed on other existing nodes.

When does the cluster autoscaler increase and reduce the node pool?

The cluster autoscaler increases a node pool if pods cannot be scheduled due to a lack of resources and adding a node from that pool would remedy the situation. If no node pool could provide nodes that would allow the pod to be scheduled, the autoscaler does not enlarge the cluster. The cluster autoscaler reduces a node pool if a node is underutilized for an extended period of time, that is, it has a light load and all of its pods can be moved to other nodes.

Is it possible to mix node pools with and without active autoscaling within a cluster?

Yes. Only node pools with active autoscaling are managed by the autoscaler.

Can the autoscaler enlarge/reduce node pools as required?

No. The autoscaler cannot increase the number of nodes in the node pool above the maximum specified by the user, or decrease it below the specified minimum. The quota for a specific contract also cannot be exceeded using the autoscaler. In addition, the autoscaler cannot reduce the number of nodes in the node pool to 0.
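As a sketch, the user-specified minimum and maximum could be supplied in the node pool properties when creating the pool via the IONOS Cloud API; the autoScaling field shown here is an assumption based on the Cloud API v6 node pool schema, and all values are illustrative:

```json
{
  "properties": {
    "name": "autoscaled-pool",
    "nodeCount": 2,
    "autoScaling": {
      "minNodeCount": 2,
      "maxNodeCount": 6
    }
  }
}
```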

Is it possible to enable and configure encryption of secret data?

Yes, it is possible.

See Encrypting Secret Data at Rest

Are values and components of the cluster changed during maintenance?

All components installed in the cluster are updated. This includes the Kubernetes control plane itself, CSI, CCM, Calico, and CoreDNS. During cluster maintenance, several customer-visible components are updated and reset to our default values. For example, changes to the CoreDNS configuration are not permanent and will be reverted at the next maintenance. It is currently not possible to set your own DNS records in the CoreDNS configuration, but this will be possible later. Managed components that are regularly updated:

  • coredns

  • csi-ionoscloud (DaemonSet)

  • calico (typha)

  • metrics-server

  • ionos-policy-validator

  • snapshot-validation-webhook

Is there a limit on the node pool?

The maintenance window is limited to four hours. If not all nodes can be rebuilt within this time, the remaining nodes are replaced at the next maintenance. To avoid delayed updates, we recommend creating node pools with no more than 20 nodes.

How to preset IP addresses on new nodes?

If old nodes are replaced with new ones during maintenance, the new nodes will subsequently have different (new) public IP addresses. You can pre-specify a list of public IP addresses from which the addresses for new nodes are taken. This keeps the set of possible node addresses limited and predictable (so that, for example, they can be allowed through a whitelist).
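As a sketch, such a list could be supplied in the node pool properties via the IONOS Cloud API; the publicIps field shown here is an assumption based on the Cloud API v6 node pool schema, and the addresses are illustrative:

```json
{
  "properties": {
    "publicIps": [
      "203.0.113.10",
      "203.0.113.11",
      "203.0.113.12"
    ]
  }
}
```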

Nodes in private and public clusters

Managed Kubernetes nodes usually have a public IP address and can therefore be reached from the Internet. This is not the case in a private cluster: all nodes are hidden behind a NAT gateway, and although they can open outbound Internet connections themselves, they cannot be reached directly from outside. Private clusters have several limitations: they can only consist of one node pool and are therefore also limited to one region. In addition, Internet bandwidth is limited by the maximum bandwidth of the NAT gateway (typically 700 Mbps). With private clusters, you can determine the external public IP address of the NAT gateway using CRIP; outbound traffic will then use this IP address as its source IP address.

Can a cluster and respective node pools have different Kubernetes versions?

The Kubernetes cluster (control plane) and the corresponding node pools can have different versions of Kubernetes. Node pools can use older versions than the control plane, but not vice versa, and the difference between the minor versions must not be more than 1. There is a distinction between patch version updates and minor version updates. All version updates must be initiated by the customer. Once initiated, version updates are performed immediately. However, forced updates will also occur if the version used by the customer is so old that we can no longer support it. Typically, affected customers receive a support notification about two weeks prior to a forced update.

Is there traffic protection between nodes and the control plane?

The Kubernetes API is secured with TLS. Traffic between the nodes and the control plane is secured by mutual TLS, which means that both sides check whether they are talking to the expected remote station.

Why is the status "Failed" displayed for clusters or node pools?

If clusters or node pools are created or modified, the operation may fail, and the cluster or node pool will go into a FAILED status. In this case, our staff are already informed through monitoring, but sometimes it may be difficult or impossible for them to correct the error, since the cause may conflict with the client's configuration: for example, a LAN is specified that does not exist (or no longer exists), or a service update becomes impossible due to a module budget violation. If a node is NotReady, the reason is usually insufficient RAM. When a node runs out of RAM, it enters a loop in which it tries to free memory while executables must be reloaded from disk, so the node is busy only with disk I/O and makes no further progress. We recommend setting resource requests and limits to prevent such scenarios.
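To spot memory pressure before a node becomes NotReady, the bundled metrics-server can be queried with standard kubectl commands (the node name is a placeholder):

```shell
# Current CPU/memory usage per node (served by the metrics-server add-on)
kubectl top nodes

# Conditions (e.g. MemoryPressure) and allocated resources for one node
kubectl describe node <node-name>
```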

Can I publish my own CAs (Certificate Authorities) to the cluster?

Currently, customers cannot publish their own CAs in the Kubernetes cluster or use their own TLS certificates.

Is geo-redundancy implemented in Kubernetes?

You can reserve node pools in multiple locations within the same cluster, which allows simple geo-redundancy to be configured and implemented. The control plane itself is geo-redundant (within Germany): several replicas run in different locations.

How to access unresponsive nodes via the API?

If a node is unavailable, for example, because there are too many pods running on it without resource limits, it can simply be replaced. To do this, you can use the following API endpoint: POST /k8s/{k8sClusterId}/nodepools/{nodePoolId}/nodes/{nodeId}/replace
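A minimal sketch of calling this endpoint from Python, assuming the Cloud API v6 base URL https://api.ionos.com/cloudapi/v6 and HTTP Basic authentication (all IDs and credentials are placeholders):

```python
import base64
import urllib.request

# Assumed base URL of the IONOS Cloud API v6
API_BASE = "https://api.ionos.com/cloudapi/v6"

def replace_node_request(cluster_id: str, pool_id: str, node_id: str,
                         user: str, password: str) -> urllib.request.Request:
    """Build the POST request that replaces an unresponsive node."""
    url = (f"{API_BASE}/k8s/{cluster_id}/nodepools/{pool_id}"
           f"/nodes/{node_id}/replace")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        url, method="POST", headers={"Authorization": f"Basic {token}"}
    )

req = replace_node_request("cluster-uuid", "pool-uuid", "node-uuid",
                           "user", "password")
# urllib.request.urlopen(req) would send the request (not executed here)
```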

Last updated

Revision created on 9/7/2023