FAQs

What is the function of Managed Kubernetes?

Managed Kubernetes enables the fully automated setup of Kubernetes clusters. It also simplifies and supports the automation of Continuous Integration and Continuous Delivery/Continuous Deployment (CI/CD) pipelines for testing and deployment.

What does Kubernetes offer for providing transparency and control?

The IONOS Managed Kubernetes solution offers automatic updates, security fixes, versioning, upgrade provisioning, high availability, a geo-redundant control plane, and full cluster-administrator-level access to the Kubernetes API.

How does the Kubernetes Manager work?

Everything related to Managed Kubernetes can be controlled in the DCD via the dedicated Kubernetes Manager. The Manager provides a complete overview of your provisioned Kubernetes clusters and node pools, including their statuses. The Manager allows you to create and manage clusters, create and manage node pools, and download the kubeconfig file.

What is the control plane for?

The control plane manages the worker nodes and the pods in the cluster. In production environments, the control plane usually runs across multiple computers. A cluster usually runs multiple nodes, providing fault tolerance and high availability.

How does a kube-controller-manager work?

The kube-controller-manager runs controllers that provide functionality such as Deployments, Services, and more. For more information, see kube-controller-manager.

What is the function of a kube-apiserver?

The API server is a component of the Kubernetes control plane that exposes the Kubernetes API. The API server is the front end for the Kubernetes control plane. Kube-apiserver is designed to scale horizontally. It scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between those instances. For more information, see kube-apiserver.

What is the function of a kube-scheduler?

Kube-scheduler assigns pods to nodes. Pods must be created with resource requests and limits so that nodes are not overcommitted. For more information, see kube-scheduler.
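
As an illustrative sketch only (the pod name, image, and values below are placeholders, not IONOS defaults), resource requests and limits are set per container in the pod specification:

Example for Pod resource limits

apiVersion: v1
kind: Pod
metadata:
  name: resource-limited-pod
spec:
  containers:
    - name: app
      # Placeholder image; replace with your workload.
      image: nginx:1.25
      resources:
        # Requests are used by the kube-scheduler for placement decisions.
        requests:
          cpu: 250m
          memory: 256Mi
        # Limits cap the container's resource usage on the node.
        limits:
          cpu: 500m
          memory: 512Mi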

Where are the control plane nodes?

Managed Kubernetes supports regional control planes that provide a distributed and highly available management infrastructure within a chosen region. It also offers a hidden control plane for both Public and Private Node Pools. Control plane components like kube-apiserver, kube-scheduler, and kube-controller-manager are not visible to users and cannot be modified directly.

You can only interact with the kube-apiserver through its REST API.

The hidden control plane is deployed on Virtual Machines (VMs) running in a geo-redundant cluster in the area of Frankfurt am Main, Germany. For more information, see Control Plane Components and Kubernetes API.

Which Calico Container Network Interface (CNI) plugin is installed in Managed Kubernetes clusters?

Managed Kubernetes clusters have the Calico CNI plugin installed. Its primary function is to automatically assign IP addresses, set up network interfaces, and establish connectivity between pods. Calico also allows the use of network policies in the Kubernetes cluster. For more information, see Kubernetes CNI and Network Policies.
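
As a hedged illustration of what network policy support enables (the names and labels below are placeholders), a standard Kubernetes NetworkPolicy can restrict ingress to pods labeled app: backend to traffic from pods labeled app: frontend:

Example for a NetworkPolicy

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  # Applies to all pods labeled app: backend in this namespace.
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    # Only traffic from pods labeled app: frontend is allowed.
    - from:
        - podSelector:
            matchLabels:
              app: frontend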

Can I choose or install a different CNI plugin?

Managed Kubernetes does not currently offer an option to choose a different CNI plugin, nor does it support customers who install one on their own.

The CNI affects the entire cluster network, so modifying Calico or installing a different plugin can cause cluster-wide issues and failed resources.

What is a Container Storage Interface (CSI)?

The CSI driver runs as a deployment in the control plane to manage volumes for Persistent Volume Claims (PVCs) in the IONOS Cloud and to attach them to nodes.
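
For illustration, a PersistentVolumeClaim like the following asks the CSI driver to dynamically provision an IONOS Cloud volume. The claim name is a placeholder, and the storage class name is taken from the static PV example later in this section; verify which storage classes are actually available in your cluster:

Example for a dynamically provisioned PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  accessModes:
    - ReadWriteOnce
  # Storage class name as used in the static PV example below; verify with `kubectl get storageclass`.
  storageClassName: ionos-enterprise-hdd
  resources:
    requests:
      storage: 10Gi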

How to provision Network File System (NFS) volumes?

The soft mount option is required when creating a PersistentVolume with an NFS source in Kubernetes. It can be set in the mount options list of the PersistentVolume specification (spec.mountOptions) or via the annotation key volume.beta.kubernetes.io/mount-options, whose value is expected to contain a comma-separated list of mount options. If neither contains the soft mount option, creation of the PersistentVolume will fail.

The use of the annotation is still supported but will be deprecated in the future. For more information, see Mount Options.

Mount options in the PersistentVolume specification can also be set using the StorageClass.

Example for PV spec

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-share
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs
  mountOptions:
    - soft
  nfs:
    path: /tmp
    server: 172.17.0.2

Example for annotation

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-share
  annotations:
    volume.beta.kubernetes.io/mount-options: "soft"
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs
  nfs:
    path: /tmp
    server: 172.17.0.2

Example for using StorageClass

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-on-ionos
mountOptions:
  - soft
provisioner: my.nfs.provisioner
reclaimPolicy: Delete

Why do I have leftover IONOS Cloud volumes in my Virtual Data Center (VDC)?

IONOS Cloud volumes are represented as Persistent Volume (PV) resources in Kubernetes. The PV reclaim policy determines what happens to the volume when the PV is deleted. The Retain reclaim policy skips deletion of the volume and is meant for manual reclamation of resources. In the case of dynamically provisioned volumes, the CSI driver manages the PV; the user cannot delete the volume even after the PV is deleted.

The PV has resource finalizers that ensure that Cloud resources are deleted. The finalizers are removed by the system after Cloud resources are cleaned up, so removing them prematurely is likely to leave resources behind.

How can I import unmanaged IONOS Cloud volumes into Kubernetes?

This may be desired when a dynamically provisioned volume is left over or when an external volume should be exposed to a Kubernetes workload.

Warning: Do not import your Managed Kubernetes nodes' root volumes. They are fully managed outside the Kubernetes cluster, and importing them will cause conflicts that may lead to service disruptions and data loss.

Dynamically provisioned PVs are created by the CSI driver, which populates the resource ownership annotations and information gathered from the IONOS Cloud API. For statically managed PVs, this data must be provided by the user.

Example for static PV spec

kind: PersistentVolume
apiVersion: v1
metadata:
  name: my-import
  annotations:
    pv.kubernetes.io/provisioned-by: cloud.ionos.com
spec:
  storageClassName: ionos-enterprise-hdd
  persistentVolumeReclaimPolicy: Delete
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 100G
  csi:
    driver: cloud.ionos.com
    fsType: ext4
    volumeHandle: datacenters/27871515-1527-443d-be94-f91b72fd557e/volumes/927c23f4-b7d1-4dd2-b29c-75b1e4b6b166
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: enterprise.cloud.ionos.com/datacenter-id
          operator: In
          values:
          - 27871515-1527-443d-be94-f91b72fd557e

The following fields should be modified according to the volume that is imported:

  • spec.capacity.storage: Should contain the size of the volume with suffix G (Gigabyte).

  • spec.csi.volumeHandle: The volume path in the IONOS Cloud API. Omit the leading slash (/).

  • spec.nodeAffinity.required.nodeSelectorTerms.matchExpressions.values: Must contain the Virtual Data Center ID from the Volume path.

Creating this PV allows it to be used in a Pod by referencing it in a PVC's spec.volumeName.
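
As a minimal sketch based on the static PV example above, a claim can bind to the imported volume by setting spec.volumeName (the claim name is a placeholder; the storage class and size match the PV above):

Example for a PVC referencing the imported PV

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-import-claim
spec:
  accessModes:
    - ReadWriteOnce
  # Must match the storageClassName of the imported PV.
  storageClassName: ionos-enterprise-hdd
  # Binds this claim directly to the PV named my-import.
  volumeName: my-import
  resources:
    requests:
      # Must not exceed the capacity of the imported PV.
      storage: 100G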

Note: Be aware that the imported volume will only be deleted if it is Bound.

What is a cluster autoscaler?

A cluster autoscaler is a tool that automatically adjusts the size of the Kubernetes cluster when one of the following conditions is true:

  • There are pods that failed to run in the cluster due to insufficient resources.

  • There are nodes in the cluster that have been underutilized for an extended period of time, and their pods can be placed on other existing nodes.

For more information, see Cluster Autoscaler and its FAQ.

When does the cluster autoscaler increase and reduce the node pool?

The cluster autoscaler increases a node pool if pods cannot be scheduled due to a lack of resources and adding a node from that node pool would remedy the situation. If no node pool provides enough resources to schedule the pod, the autoscaler will not enlarge anything.

The cluster autoscaler reduces a node pool if a node is not fully utilized for an extended period of time. A node is underutilized when it has a light load and all of its pods can be moved to other nodes.

Is it possible to mix node pools with and without active autoscaling within a cluster?

Yes, only node pools with active autoscaling are managed by the autoscaler.

Can the autoscaler enlarge/reduce node pools as required?

No, the autoscaler cannot increase the number of nodes in the node pool above the maximum specified by the user or decrease it below the specified minimum. In addition, the quota for a specific contract cannot be exceeded using the autoscaler. The autoscaler cannot reduce the number of nodes in the node pool to 0.

Is it possible to enable and configure encryption of secret data?

Yes, it is possible to enable and configure encryption of secret data. For more information, see Encrypting Confidential Data at Rest.

Are values and components of the cluster changed during maintenance?

All components installed in the cluster are updated. This includes the Kubernetes control plane itself, the CSI driver, the CCM, Calico, and CoreDNS. During cluster maintenance, several components that are visible to customers are updated and reset to our default values. For example, changes to CoreDNS are not permanent and will be removed at the next maintenance. It is currently not possible to set your own DNS records in the CoreDNS configuration, but this will be possible later.

Managed components that are regularly updated:

* `coredns`
* `csi-ionoscloud` (DaemonSet)
* `calico` (typha)
* `ionos-policy-validator`
* `snapshot-validation-webhook`

Is there a limit on the node pool maintenance window?

The maintenance time window is limited to four hours for Public and Private Node Pools. If not all nodes are rebuilt within this time, the remaining nodes will be replaced at the next scheduled maintenance. To prevent updates from taking longer than the window, it is recommended to create node pools with no more than 20 nodes.

How to preset Internet Protocol (IP) addresses on new nodes for Public Node Pools?

If old nodes are replaced with new ones during maintenance, the new nodes will receive different public IP addresses. You can pre-specify a list of public IP addresses from which the addresses for new nodes are taken. This keeps the list of possible host addresses limited and predictable, for example, so that they can be allowed through a whitelist.

Can a cluster and respective node pools have different Kubernetes versions?

The Kubernetes cluster control plane and the corresponding node pools can have different versions of Kubernetes. Node pools can use older versions than the control plane, but not vice versa. The difference between the minor versions must not be more than 1.

There is a distinction between patch version updates and minor version updates. You must initiate all version updates; once initiated, they are performed immediately. However, forced updates will also occur if the version you are using is so old that we can no longer support it. Typically, affected users receive a support notification two weeks before a forced update.

Is there traffic protection between nodes and the control plane?

The Kubernetes API is secured with Transport Layer Security (TLS). Traffic between the nodes and the control plane is secured by mutual TLS, which means that both sides check whether they are talking to the expected remote station.

Why is the status Failed displayed for clusters or node pools?

If clusters or node pools are created or modified, the operation may fail, and the cluster or node pool will go into a Failed status. In this case, our team is already informed because we monitor these resources. However, sometimes it is difficult for us to rectify the error, since the cause can be a conflict with the client's configuration. For example, if a LAN is specified that does not exist or no longer exists, a service update becomes impossible.

If a node is in a NotReady state because it does not have enough RAM or has run out of RAM, it ends up in a loop in which it constantly tries to free memory: executables must be reloaded from disk, and the node becomes busy with disk Input/Output (I/O), so it cannot be used. In such a situation, we recommend configuring resource management to prevent such scenarios. For more information, see Requests and limits.

Can I publish my own Certificate Authorities (CAs) to the cluster?

IONOS Managed Kubernetes currently does not support the use of your own CAs or your own TLS certificates in the Kubernetes cluster.

Is geo-redundancy implemented in Kubernetes for Public Node Pools?

You can provision Public Node Pools in multiple locations within the same cluster, which allows simple geo-redundancy to be configured and implemented. The control plane is geo-redundant (within Germany), with several replicas running in different locations.

How to access unresponsive nodes via the API?

If a node is unresponsive, for example, because too many pods without resource limits are running on it, it can be replaced. To do this, you can use the following API endpoint:

POST /k8s/{k8sClusterId}/nodepools/{nodePoolId}/nodes/{nodeId}/replace

Why do I need Private Node Pools?

A Private Node Pool ensures that the nodes are not connected directly to the internet; hence, the inter-node network traffic stays inside the private network. However, the control plane is still exposed to the internet and can be protected by restricting IP access.

When do the cluster status or node pools status turn yellow in the DCD?

Clusters and node pools turn yellow when a user or an automated maintenance process initiates an action on these resources. This locks the cluster and node pool resources from being updated until the process is finished, and they do not respond to changes during this time.

Why is a Network Address Translation (NAT) Gateway deployed inside my cluster?

A NAT Gateway is required to enable outbound traffic between the cluster nodes and the control plane, for example, to retrieve container images.

Can I attach public networks to my Kubernetes cluster?

Kubernetes clusters support public networks only on the node VMs themselves, not as additional LAN networks.

Can I add nodes to Kubernetes clusters that consist of Private Node Pools?

Yes, if your node pool is configured to have a network interface in the same network as the VMs that you want to access, then you can add nodes.

How is a Public Node Pool configured within a Kubernetes cluster?

Public Node Pools within a Kubernetes cluster are configured by defining a dedicated public node pool. Networking settings are specified to include public IP addresses for external access.

How is a Private Node Pool configured within a Kubernetes cluster?

Private Node Pools within a Kubernetes cluster are configured so that each node pool has its own distinct private network, while nodes within the same pool share a common private network.

When adding nodes to such clusters, it is crucial to set up the node pools with a network interface in the same network as the intended VMs.

Why is a Private Cross Connect deployed inside my VDC?

The Private Cross Connect is required to enable node-to-node communication across all node pools belonging to the same Kubernetes cluster. This ensures that node pools in different VDCs can communicate.

Can I attach an additional private network to the NAT Gateway?

No, the private NAT Gateway is not intended to be used for arbitrary nodes.

Is a service of type LoadBalancer supported, and how do I deploy a service of type LoadBalancer?

Yes, Public Node Pools support the LoadBalancer service type. However, Private Node Pools currently do not support it.
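
A minimal sketch of such a Service on a Public Node Pool (the service name, selector, and ports are placeholders):

Example for a LoadBalancer Service

apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer
  # Routes traffic to pods labeled app: my-app.
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080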
