# FAQ

## General overview and management

### What is the function of Managed Kubernetes?

Managed Kubernetes provides fully automated setup of Kubernetes clusters. It also simplifies the automation of Continuous Integration and Continuous Delivery/Continuous Deployment (CI/CD) pipelines for testing and deployment.

### What does Kubernetes offer for providing transparency and control?

The IONOS Cloud Managed Kubernetes solution offers automatic updates, security fixes, versioning, upgrade provisioning, high availability, a geo-redundant control plane, and full cluster administrator-level access to the Kubernetes API.

### How does the Kubernetes Manager work?

Everything related to Managed Kubernetes can be controlled in the DCD via the dedicated Kubernetes Manager. The Manager provides a complete overview of your provisioned Kubernetes clusters and node pools, including their statuses. The Manager allows you to create and manage clusters, create and manage node pools, and download the `kubeconfig` file.

## Cluster architecture and control plane

### What is the control plane for?

The control plane manages the worker nodes and the pods in the cluster. In production environments, the control plane usually runs across multiple computers. A cluster usually runs multiple nodes, providing fault tolerance and high availability.

### Where are the control plane nodes?

Managed Kubernetes supports regional control planes that provide a distributed and highly available management infrastructure within a chosen region. It also offers a hidden control plane for both Public and Private Node Pools. Control plane components such as `kube-apiserver`, `kube-scheduler`, and `kube-controller-manager` are not visible to users and cannot be modified directly.

You can interact with the `kube-apiserver` only through its REST API.

The hidden control plane is deployed on [<mark style="color:blue;">Virtual Machines (VMs)</mark>](https://docs.ionos.com/cloud/support/general-information/glossary-of-terms#virtual-machine-vm) running in a geo-redundant cluster in the chosen region. For more information about the control plane components, refer to the [<mark style="color:blue;">Kubernetes Documentation</mark>](https://kubernetes.io/docs/concepts/overview/components/#control-plane-components) and [<mark style="color:blue;">Kubernetes API</mark>](https://kubernetes.io/docs/concepts/overview/kubernetes-api/).

### What is the function of a kube-apiserver?

The API server is a component of the Kubernetes control plane that exposes the Kubernetes API. The API server is the front end for the Kubernetes control plane. Kube-apiserver is designed to scale horizontally. It scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between those instances. For more information, see [<mark style="color:blue;">kube-apiserver</mark>](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/).

### What is the function of a kube-scheduler?

Kube-scheduler distributes pods to nodes. Pods should be created with resource limits so that nodes are not overcommitted. For more information, see [<mark style="color:blue;">kube-scheduler</mark>](https://kubernetes.io/docs/concepts/architecture/#kube-scheduler).
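
As a minimal sketch, a pod spec with resource requests and limits looks like the following. The names, image, and values are illustrative; they give the scheduler the information it needs to place the pod without overcommitting a node.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx:1.27
      resources:
        # Requests inform scheduling decisions; limits cap actual usage.
        requests:
          cpu: "250m"
          memory: "256Mi"
        limits:
          cpu: "500m"
          memory: "512Mi"
```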

### How does a kube-controller-manager work?

Kube-controller-manager manages controllers that provide functionalities such as deployments, services, etc. For more information, see [<mark style="color:blue;">kube-controller-manager</mark>](https://kubernetes.io/docs/concepts/architecture/#kube-controller-manager).

### Is geo-redundancy implemented in Kubernetes for Public Node Pools?

You can provision Public Node Pools in multiple locations within the same cluster, which allows simple geo-redundancy to be configured and implemented. The control plane itself is geo-redundant (within Germany), with several replicas running in different locations.

## Networking and connectivity

### Which Container Network Interface (CNI) plugin is installed in Managed Kubernetes clusters?

Managed Kubernetes clusters come with the Calico CNI plugin. Its primary function is to automatically assign IP addresses, set up network interfaces, and establish connectivity between the pods. Calico also allows the use of network policies in the Kubernetes cluster. For more information, see [<mark style="color:blue;">Kubernetes CNI</mark>](https://www.tigera.io/learn/guides/kubernetes-networking/kubernetes-cni/) and [<mark style="color:blue;">Network Policies</mark>](https://kubernetes.io/docs/concepts/services-networking/network-policies/).
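
For illustration, a minimal NetworkPolicy that Calico can enforce might look like this. All names and labels (`allow-frontend`, `app: backend`, `app: frontend`, port 8080) are hypothetical placeholders.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  # Applies to pods labeled app=backend.
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    # Allow ingress only from pods labeled app=frontend, on TCP 8080.
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```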

### Can I choose or install a different CNI plugin?

Managed Kubernetes does not currently offer an option to choose a different CNI plugin, nor does it support users that do so on their own.

The CNI affects the entire cluster network, so changes to Calico or the installation of a different plugin can cause cluster-wide issues and failed resources.

### Why is a Network Address Translation (NAT) Gateway deployed inside my cluster?

A [<mark style="color:blue;">NAT</mark>](https://docs.ionos.com/cloud/support/general-information/glossary-of-terms#network-address-translation-nat-gateway) is required to enable outbound traffic between the cluster nodes and the control plane, for example, to retrieve container images.

### Why is a Private Cross Connect deployed within my VDC?

The Private Cross Connect is required to enable node-to-node communication across all node pools belonging to the same Kubernetes cluster. This ensures that node pools in different [<mark style="color:blue;">VDCs</mark>](https://docs.ionos.com/cloud/support/general-information/glossary-of-terms#virtual-data-center-vdc) can communicate.

### Can I attach an additional private network to the NAT Gateway?

No, the private NAT Gateway is not intended to be used for arbitrary nodes.

### Is a service of type LoadBalancer supported, and how do I deploy a service of type LoadBalancer?

The Public Node Pools support the LoadBalancer service type. However, the Private Node Pools currently do not support the LoadBalancer service type.
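
On Public Node Pools, a LoadBalancer service is declared in the usual Kubernetes way; a minimal sketch follows, with illustrative names and ports.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  # Routes traffic to pods labeled app=web.
  selector:
    app: web
  ports:
    - port: 80        # externally exposed port
      targetPort: 8080  # container port
```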

### How to preset Internet Protocol (IP) addresses on new nodes for Public Node Pools?

If old nodes are replaced with new ones during maintenance, the new nodes receive different public IP addresses. You can pre-specify a list of public IP addresses from which addresses for new nodes are taken. This keeps the set of possible host addresses limited and predictable, for example, so they can be allowlisted in a firewall.

### Can I attach public networks to my Kubernetes cluster?

Kubernetes clusters support public networks only as interfaces attached to the node VMs; attaching public LAN networks to the cluster is not supported.

## Node pools and compute resources

### What are the server types you can choose for a node pool?

Starting April 1, 2025, IONOS Cloud provides an option to choose between the **Dedicated Core** and **vCPU** server types, so you can select the compute resources that best fit your node pool and run your Kubernetes-based applications cost-effectively.

* When creating a node pool in the DCD, the **Server type** field in the **Node Template** section lets you choose the server type. For more information, see [<mark style="color:blue;">Manage Node Pools</mark>](https://docs.ionos.com/sections-test/guides/containers/managed-kubernetes/how-tos/management-of-node-pools).
* To choose server types for the node pools via the API, see [<mark style="color:blue;">Managed Kubernetes API</mark>](https://api.ionos.com/docs/cloud/v6/#tag/Kubernetes).

### What are the differences between the Dedicated Core and vCPU server types?

The server types you can choose from for Kubernetes node pools are **Dedicated Core** and **vCPU**.

{% tabs %}
{% tab title="Dedicated Core" %}

* Creates nodes with dedicated CPU cores.
* Offers higher performance with best compute resources based on the availability in the data center.
  {% endtab %}

{% tab title="vCPU" %}

* Creates nodes with shared CPU cores.
* Cost-effective server type.
  {% endtab %}
  {% endtabs %}

### Can I switch between the server types after the node pool is created?

Yes, you can switch existing Kubernetes node pools between **Dedicated Core** and **vCPU** servers. You can do this while updating a node pool. For more information, see [<mark style="color:blue;">Update a Node Pool</mark>](https://docs.ionos.com/sections-test/guides/containers/managed-kubernetes/how-tos/update-node-pool).

### How does IONOS Cloud ensure consistent CPU threads across different server types?

IONOS Cloud Managed Kubernetes ensures a consistent ratio of two logical threads per CPU unit. Regardless of whether you select **vCPU Servers** or **Dedicated Core Servers**, CPU resource allocation follows a fixed logic when provisioning resources:

**1 provisioned core (of vCPU or Dedicated Core servers) equals 2 Managed Kubernetes CPUs**.

### Why do I need Private Node Pools?

A Private Node Pool ensures that the nodes are not connected directly to the internet; hence, the inter-node network traffic stays inside the private network. However, the control plane is still exposed to the internet and can be protected by restricting IP access.

### Can I add nodes to Kubernetes clusters that consist of Private Node Pools?

Yes, if your node pool is configured to have a network interface in the same network as the VMs that you want to access, then you can add nodes.

### How is a Public Node Pool configured within a Kubernetes cluster?

Public Node Pools within a Kubernetes cluster are configured by defining a dedicated public node pool. Networking settings are specified to include public IP addresses for external access.

### How is a Private Node Pool configured within a Kubernetes cluster?

Private Node Pools within a Kubernetes cluster are configured so that each node pool has a distinct private network, while nodes within the same pool share a common private network.

It is crucial to set up these node pools with a network interface aligned with the network of the intended VMs when adding nodes to Kubernetes clusters.

## Storage and volumes

### What is a Container Storage Interface (CSI)?

The [<mark style="color:blue;">CSI</mark>](https://docs.ionos.com/cloud/support/general-information/glossary-of-terms#container-storage-interface-csi) driver runs as a deployment in the control plane to manage volumes for Persistent Volume Claims (PVCs) in the IONOS Cloud and to attach them to nodes.

### How to access Network File Storage (NFS) volumes?

{% hint style="warning" %}
**Important:** Network File Storage (NFS) Kubernetes integration is currently available on a request basis. To access this product, please contact your sales representative or [<mark style="color:blue;">IONOS Cloud Support</mark>](https://docs.ionos.com/cloud/support/general-information/contact-information).
{% endhint %}

Please refer to the [<mark style="color:blue;">Network File Storage (NFS) Kubernetes integration</mark>](https://docs.ionos.com/sections-test/guides/containers/managed-kubernetes/how-tos/mount-an-nfs-volume) documentation for more information.

### Why do I have leftover IONOS Cloud volumes in my Virtual Data Center (VDC)?

IONOS Cloud volumes are represented as [<mark style="color:blue;">Persistent Volume (PV)</mark>](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) resources in Kubernetes. The PV [<mark style="color:blue;">reclaim policy</mark>](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming) determines what happens to the volume when the PV is deleted. The `Retain` reclaim policy skips deletion of the volume and is meant for manual reclamation of resources. For dynamically provisioned volumes, the CSI driver manages the PV; with the `Retain` policy, the Cloud volume remains in your VDC even after the PV is deleted and must be cleaned up manually.

The PV has [<mark style="color:blue;">resource finalizers</mark>](https://kubernetes.io/docs/concepts/overview/working-with-objects/finalizers/) that ensure that Cloud resources are deleted. The finalizers are removed by the system after Cloud resources are cleaned up, so removing them prematurely is likely to leave resources behind.

### How can I import unmanaged IONOS Cloud volumes into Kubernetes?

This may be desired when a dynamically provisioned volume is left over or if an external volume should be exposed to a Kubernetes workload.

{% hint style="warning" %}
**Warning:** Do not import your Managed Kubernetes node's root volumes. They are fully managed outside the Kubernetes cluster, and importing them will cause conflicts that may lead to service disruptions and data loss.
{% endhint %}

Dynamically provisioned PVs are created by the CSI driver, which populates the resource ownership annotations and information gathered from the IONOS Cloud API. For statically managed PVs, this data must be provided by the user.

**Example for static PV spec:**

```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: my-import
  annotations:
    pv.kubernetes.io/provisioned-by: cloud.ionos.com
spec:
  storageClassName: ionos-enterprise-hdd
  persistentVolumeReclaimPolicy: Delete
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 100G
  csi:
    driver: cloud.ionos.com
    fsType: ext4
    volumeHandle: datacenters/27871515-1527-443d-be94-f91b72fd557e/volumes/927c23f4-b7d1-4dd2-b29c-75b1e4b6b166
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: enterprise.cloud.ionos.com/datacenter-id
              operator: In
              values:
                - 27871515-1527-443d-be94-f91b72fd557e
```

The following fields should be modified according to the volume that is imported:

* `spec.capacity.storage`: Should contain the size of the volume with suffix G (Gigabyte).
* `spec.csi.volumeHandle`: Volume path in the [<mark style="color:blue;">IONOS Cloud API</mark>](https://api.ionos.com/docs/cloud/v6/#tag/Volumes/operation/datacentersVolumesFindById). Omit the leading slash (`/`).
* `spec.nodeAffinity.required.nodeSelectorTerms.matchExpressions.values`: Must contain the Virtual Data Center ID from the Volume path.

Creating this PV will allow it to be used in a Pod by referencing it in a PVC's `spec.volumeName`.
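
As an illustrative sketch, a PVC that binds to the `my-import` PV from the example above might look like this; the claim name is an assumption, and the storage class, access mode, and capacity must match the PV.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-import-claim
spec:
  storageClassName: ionos-enterprise-hdd
  # Bind directly to the statically created PV.
  volumeName: my-import
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100G
```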

{% hint style="info" %}
**Note:** Be aware that the imported volume will only be deleted if it is [<mark style="color:blue;">`Bound`</mark>](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#binding).
{% endhint %}

## Scaling and autoscaling

### What is a cluster autoscaler?

A cluster autoscaler is a tool that automatically adjusts the size of the Kubernetes cluster when one of the following conditions is true:

* There are pods that failed to run in the cluster due to insufficient resources.
* There are nodes in the cluster that have been underutilized for an extended period of time, and their pods can be placed on other existing nodes.

For more information, see [<mark style="color:blue;">Cluster Autoscaler</mark>](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/README.md) and its [<mark style="color:blue;">FAQ</mark>](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md).

### How does the cluster autoscaler decide to scale-up or scale-down?

The cluster autoscaler decides to scale up when there are pods that cannot be scheduled due to a lack of available resources, such as CPU or memory, and adding a new node would resolve the scheduling issue. It triggers a scale-down when nodes are underutilized for an extended period, and all their pods can be scheduled onto other nodes without disruption. The autoscaler ensures workload stability and will not remove nodes if essential workloads would be disrupted.

Examples of situations that affect scaling include:

* A pod requests more resources than any current node can offer, triggering a scale-up.
* Nodes running only `DaemonSet` pods or system pods may not be removed during scale-down.
* Pods with local storage (e.g., using `emptyDir`) can block scale down, as their data would be lost upon eviction.
* Pods with strict node affinity, taints, or tolerations might prevent node removal.
* Scale down won't occur if the node pool is already at its minimum size.

To work around the limitation with pods using local storage such as `emptyDir`, you can use the annotation `"cluster-autoscaler.kubernetes.io/safe-to-evict": "true"`. Adding this annotation to a pod signals the autoscaler that it is safe to evict the pod, even if it uses local storage, enabling better scale down behavior when possible.
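
A minimal sketch of such a pod, with an illustrative name and image, annotated as safe to evict despite its `emptyDir` volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cache-worker
  annotations:
    # Signals the cluster autoscaler that eviction is acceptable
    # even though the pod uses local (emptyDir) storage.
    cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
spec:
  containers:
    - name: worker
      image: busybox:1.36
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      emptyDir: {}
```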

### When does the cluster autoscaler increase and reduce the node pool?

The cluster autoscaler increases the node pool if pods cannot be scheduled due to a lack of resources and adding a node from that node pool would remedy the situation. If no node pool can provide nodes suitable for scheduling the pod, the autoscaler will not scale up.

The cluster autoscaler reduces the node pool if a node is not fully utilized for an extended period of time. A node is underutilized when it has a light load, and all of its pods can be moved to other nodes.

### Is it possible to mix node pools with and without active autoscaling within a cluster?

Yes, only node pools with active autoscaling are managed by the autoscaler.

### Can the autoscaler enlarge/reduce node pools as required?

No, the autoscaler cannot increase the number of nodes in the node pool above the maximum specified by the user or decrease it below the specified minimum. In addition, the quota for a specific contract cannot be exceeded using the autoscaler. The autoscaler cannot reduce the number of nodes in the node pool to **0**.

## Security and encryption

### Is it possible to enable and configure encryption of secret data?

Yes, it is possible to enable and configure encryption of secret data. For more information, see [<mark style="color:blue;">Encrypting Confidential Data at Rest</mark>](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/).

### Can I publish my own Certificate Authorities (CAs) to the cluster?

The IONOS Cloud Managed Kubernetes currently does not support the usage of your own CAs or your own TLS certificates in the Kubernetes clusters.

### Is there traffic protection between nodes and the control plane?

The Kubernetes API is secured with Transport Layer Security (TLS). Traffic between the nodes and the control plane is secured by mutual TLS, which means that both sides check whether they are talking to the expected remote station.

## Maintenance, updates and versions

### Are values and components of the cluster changed during maintenance?

All components installed in the cluster are updated during maintenance. This includes the Kubernetes control plane itself, the CSI driver, the CCM, Calico, and CoreDNS. During cluster maintenance, several components that are visible to users are updated and reset to our default values. For example, changes to CoreDNS are not permanent and will be reverted at the next maintenance. It is currently not possible to set your own DNS records in the CoreDNS configuration, but this capability is planned.

Managed components that are regularly updated:

* `coredns`
* `csi-ionoscloud` (DaemonSet)
* `calico` (typha)
* `ionos-policy-validator`
* `snapshot-validation-webhook`

### Can a cluster and respective node pools have different Kubernetes versions?

The Kubernetes cluster control plane and the corresponding node pools can have different versions of Kubernetes. Node pools can use older versions than the control plane, but not vice versa. The difference between the minor versions **must not be more than 1**.

There is a distinction between **patch version updates** and **minor version updates**. You must initiate all version updates; once initiated, they are performed immediately. However, forced updates will occur if the version you use is so old that we can no longer support it. Typically, affected users receive a support notification **two weeks** before a forced update.

### Is there a limit on the node pool maintenance window?

The **maintenance time** window is limited to **four hours** for Public and Private Node Pools. If not all nodes are rebuilt within this time, the remaining nodes are replaced during the next scheduled maintenance. To ensure updates complete within a single window, we recommend creating node pools with **no more than 20 nodes**.

## Troubleshooting

### Why is the status Failed displayed for clusters or node pools?

If clusters or node pools are created or modified, the operation may fail, and the cluster or node pool will go into a **Failed** status. In this case, our team is already informed, as we monitor these events. However, sometimes the error cannot be fixed on our side because it is caused by a conflict with the client's configuration. For example, if a specified LAN does not exist or no longer exists, a service update becomes impossible.

If a node is in a **NotReady** state and does not have enough RAM, or runs out of RAM, an infinite loop can occur in which the system repeatedly attempts to free memory. The node becomes unusable because executables must be reloaded from disk, keeping the node busy with disk Input/Output (I/O). To prevent such scenarios, we recommend configuring resource management. For more information, see [<mark style="color:blue;">Requests and limits</mark>](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits).

### When do the cluster status or node pools status turn yellow in the DCD?

Clusters and node pools turn yellow when a user or an automated maintenance process initiates an action on the resources. This locks the clusters and node pool resources from being updated until the process is finished, and they do not respond during this time.

### How to access unresponsive nodes via the API?

If a node is unresponsive, for example, because too many pods without resource limits are running on it, the node can be replaced. To do so, use the following API endpoint:

`POST /k8s/{k8sClusterId}/nodepools/{nodePoolId}/nodes/{nodeId}/replace`
