To extend CoreDNS with additional configuration that survives control plane maintenance, create a ConfigMap in the kube-system namespace. The ConfigMap must be named coredns-additional-conf and contain a data entry with the key extra.conf. The value of the entry must be a string containing the additional configuration. The following example shows how to add a custom DNS entry for example.abc:
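A minimal sketch of such a ConfigMap is shown below; the IP address 203.0.113.10 is a placeholder for the record you actually want to serve, and the server block uses the CoreDNS hosts plugin:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-additional-conf
  namespace: kube-system
data:
  extra.conf: |
    # Serve a static A record for example.abc via the CoreDNS hosts plugin
    example.abc:53 {
        hosts {
            203.0.113.10 example.abc
            fallthrough
        }
    }
```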
Managed Kubernetes can be utilized to address the specific needs of its users. Here, you can find a list of common use cases and scenarios. Each use case is described in detail to highlight its relevance and benefits.
You can use the Load Balancer to provide a stable and reliable IP address for your Kubernetes cluster. It exposes your application, such as an Nginx deployment, to the internet. This IP address remains stable for as long as the service exists.
Define type as LoadBalancer to create a service of type Load Balancer. When this service is created, most cloud providers will automatically provision a Load Balancer with a stable external IP address. Configure the ports that the service will listen on and forward the traffic to. Define the selector field to set the Pods to which the traffic will be forwarded.
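A minimal sketch of such a service, assuming an Nginx deployment whose Pods carry the label app: nginx, might look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
spec:
  type: LoadBalancer        # triggers provisioning of a stable external IP
  ports:
    - port: 80              # port the service listens on
      targetPort: 80        # container port the traffic is forwarded to
      protocol: TCP
  selector:
    app: nginx              # Pods that receive the traffic
```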
Note:
Ensure that your Cloud provider supports the automatic creation of external Load Balancers for Kubernetes services.
You need at least two remaining free CRIPs for regular maintenance.
You need to replace the Nginx-related labels and selectors with those relevant to your application.
You can optimize compute resources, such as CPU and RAM, along with storage volumes in Kubernetes through the strategic use of availability zones. To enhance the performance of your Kubernetes environment, distribute workloads intelligently across different zones; this improves performance and enhances fault tolerance and resilience.
Define a storage class named ionos-enterprise-ssd-zone-1, which specifies the provisioning of SSD-type storage with ext4 file system format, located in availability zone ZONE_2. Configure the volumeBindingMode and allowVolumeExpansion fields.
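A sketch of such a storage class is shown below; the provisioner name is an assumption (verify the CSI driver installed in your cluster), while the parameter keys follow the description above:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ionos-enterprise-ssd-zone-1
provisioner: cloud.ionos.com        # assumed IONOS CSI provisioner name; verify in your cluster
parameters:
  type: SSD                         # SSD-type storage
  fstype: ext4                      # file system format
  availabilityZone: ZONE_2          # availability zone for the volume
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```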
Note: Supported values for fstype are ext2, ext3, or ext4.
This implementation provides a robust and reliable Kubernetes infrastructure for your applications.
The horizontal scaling of ingress network traffic over multiple Kubernetes nodes involves adjusting the number of running instances of your application to handle varying levels of load. The setup described below also preserves the original client IP address, which the Kubernetes ingress controller forwards in the X-Forwarded-For HTTP header.
The Ingress NGINX Controller will be installed via Helm using a separate configuration file.
The following example contains a complete configuration file, including parameters and values to customize the installation:
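A sketch of such a values.yaml, assembled from the parameters discussed in the rest of this section, could look like the following; the label values, the LB IP address, and the antiAffinity matchLabels selector are placeholders or assumptions you need to adapt:

```yaml
controller:
  replicaCount: 3                                  # one ingress controller pod per ingress node
  nodeSelector:
    node-role.kubernetes.io/ingress-node: nginx    # schedule pods only on the ingress nodes
  affinity:
    podAntiAffinity:                               # spread the pods, one per node
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/name: ingress-nginx
          topologyKey: kubernetes.io/hostname
  service:
    externalTrafficPolicy: Local                   # forward only to local pods, preserving the client IP
    loadBalancerIP: <LB_IP_address>                # static IP reserved in the IP Manager
    annotations:
      cloud.ionos.com/node-selector: node-role.kubernetes.io/ingress=<service_name>
```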
The illustration shows the high-level architecture built using IONOS Managed Kubernetes.
The current implementation of the service of type LoadBalancer does not deploy a true load balancer in front of the Kubernetes cluster. Instead, it allocates a static IP address and assigns it to one of the Kubernetes nodes as an additional IP address. This node is, therefore, acting as an ingress node and takes over the role of a load balancer. If the pod of the service is not running on the ingress node, kube-proxy will NAT the traffic to the correct node.
Problem: The NAT operation will replace the original client IP address with an internal node IP address.
Any individual Kubernetes node provides a throughput of up to 2 Gbit/s on the public interface. Scaling beyond that can be achieved by scaling the number of nodes horizontally. Additionally, the service LB IP address must also be distributed horizontally across those nodes. This type of architecture relies on Domain Name System (DNS) load balancing, as all LB IP addresses are added to the DNS record. During name resolution, the client will decide which IP address to connect to.
When using an ingress controller inside a Kubernetes cluster, web services will usually not be exposed as type LoadBalancer, but as type NodePort instead. The ingress controller is the component that will accept client traffic and distribute it inside the cluster. Therefore, usually only the ingress controller service is exposed as type LoadBalancer.
To scale traffic across multiple nodes, multiple LB IP addresses are required, which are then distributed across the available ingress nodes. This can be achieved by creating as many (dummy) services as nodes and IP addresses are required. It is best practice to reserve these IP addresses outside of Kubernetes in the IP Manager so that they are not unassigned when the service is deleted.
Let’s assume that our web service demands a throughput of close to 5 Gbit/s. Distributing this across 2 Gbit/s interfaces would require 3 nodes. Each of these nodes requires its own LB IP address, so in addition to the ingress controller service, one needs to deploy 2 additional (dummy) services.
To spread the IP addresses across dedicated nodes, assign each LB IP address to a node via a node label: node-role.kubernetes.io/ingress=<service_name>
Note: You can always set labels and annotations via the DCD, API, Terraform, or other DevOps tools.
To pin a LB IP address to a dedicated node, follow these steps:
Reserve an IP address in the IP Manager.
Create a node pool of only one node.
Apply the following label to the node:
node-role.kubernetes.io/ingress=<service_name>
Add the following node selector annotation to the service:
annotations.cloud.ionos.com/node-selector: node-role.kubernetes.io/ingress=<service_name>
In the case of our example, reserve 3 IP addresses in the IP Manager. Add these 3 IP addresses to the DNS A-record of your fully qualified domain name. Then, create 3 node pools, each containing only one node, and apply a different ingress node-role label to each node pool. We will refer to these 3 nodes as ingress nodes.
The first service will be the ingress NGINX controller service. Add the above-mentioned service annotation to it:
controller.service.annotations.cloud.ionos.com/node-selector: node-role.kubernetes.io/ingress=<service_name>
Also, add the static IP address (provided by the IP Manager) to the configuration:
controller.service.loadBalancerIP: <LB_IP_address>
Similarly, 2 additional (dummy) services of type LoadBalancer must be added to spread traffic across 3 nodes. These 2 services must point to the same ingress-nginx deployment, therefore the same ports and selectors of the standard ingress-nginx service are used.
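A sketch of one such dummy service is shown below; the service name and namespace are examples, and the ports and selector mirror the defaults of the standard ingress-nginx controller service (verify them in your cluster):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-dummy-1                     # example name for the additional service
  namespace: ingress-nginx
  annotations:
    # service-specific node selector label, matching the label on the dedicated ingress node
    cloud.ionos.com/node-selector: node-role.kubernetes.io/ingress=ingress-nginx-dummy-1
spec:
  type: LoadBalancer
  loadBalancerIP: <LB_IP_address>                 # static IP reserved in the IP Manager
  externalTrafficPolicy: Local                    # keep traffic on the local node, preserving the client IP
  ports:
    - name: http
      port: 80
      targetPort: http
      protocol: TCP
    - name: https
      port: 443
      targetPort: https
      protocol: TCP
  selector:                                       # same selector as the standard ingress-nginx service
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
```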
Note:
Make sure to add your specific LB IP address to the manifest.
Notice that the service uses the service-specific node selector label as an annotation.
This spreads 3 IP addresses across 3 different nodes.
To avoid packets being forwarded using Network Address Translation (NAT) to different nodes (thereby lowering performance and losing the original client IP address), each node containing the LB IP address must also run an ingress controller pod. (This could be implemented by using a daemonSet, but this would waste resources on nodes that are not actually acting as ingress nodes.) First of all, as many replicas of the ingress controller as ingress nodes must be created (in our case 3): controller.replicaCount: 3
Then, the Pods must be deployed only on those ingress nodes. This is accomplished by using another node label, for example, node-role.kubernetes.io/ingress-node=nginx. The name and value can be set to any desired string. All 3 nodes must have the same label. The ingress controller must now be configured to use this nodeSelector:
controller.nodeSelector.node-role.kubernetes.io/ingress-node: nginx
This limits the nodes on which the Ingress Controller Pods are placed.
For the Ingress Controller Pods to spread across all nodes equally (one pod on each node), a pod antiAffinity must be configured:
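The following sketch shows this in helm values form; the matchLabels selector assumes the chart's default pod label app.kubernetes.io/name: ingress-nginx:

```yaml
controller:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/name: ingress-nginx
          topologyKey: kubernetes.io/hostname      # at most one controller pod per node
```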
To force Kubernetes to forward traffic only to Pods running on the local node, the externalTrafficPolicy needs to be set to local. This will also guarantee the preservation of the original client IP address. This needs to be configured for the Ingress-NGINX service (controller.service.externalTrafficPolicy: Local) and for the 2 dummy services (see full-service example above).
The actual helm command via which the Ingress-NGINX Controller is deployed is as follows:
helm install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace -f values.yaml
To verify the setup, ensure that:
DNS load balancing works correctly.
Fully Qualified Domain Name (FQDN) DNS lookup yields three IP addresses.
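For example, a DNS lookup with dig should return all three LB IP addresses; the FQDN is a placeholder:

```bash
# Should print the three LB IP addresses added to the A-record
dig +short <your_FQDN> A
```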
The Whoami web application can be deployed using the following manifests:
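A sketch of such manifests is shown below, assuming the publicly available traefik/whoami image and an ingress host matching your FQDN:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  labels:
    app: whoami
spec:
  replicas: 2                        # two Pods, as checked in the note below
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami      # echoes request headers, including X-Forwarded-For
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  selector:
    app: whoami
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami
spec:
  ingressClassName: nginx
  rules:
    - host: <your_FQDN>              # the FQDN whose A-record holds the three LB IPs
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  number: 80
```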
Note: Ensure that both Whoami Pods are running, the service is created, and the Ingress returns an external IP address and a hostname.
A curl to the hostname will show which Load Balancer IP address is used. Run the same curl command multiple times to verify that connections to all 3 LB IP addresses are possible.
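For example, curl's verbose output shows which IP address the connection was made to; the FQDN is a placeholder:

```bash
# -v prints the resolved IP address the connection uses; repeat the command to hit the other LB IPs
curl -v http://<your_FQDN>/
```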
The response from the whoami application will also return the client IP address in the X-Forwarded-For HTTP header. Verify that it is your local public IP address.
Some applications require a Kubernetes service of type LoadBalancer that preserves the source IP address of incoming packets. You can manually integrate a Network Load Balancer (NLB) by exposing and attaching a public IP address to a viable Kubernetes node. This node then serves as a load balancer using kube-proxy.
Note:
This works fine with services that use externalTrafficPolicy: Cluster, but in this case, the client's source IP address is lost.
The public IP address that is used as the Load Balancer IP address also needs to be bound to those nodes on which the ingress controller is running.
To preserve the client source IP address, Kubernetes services with externalTrafficPolicy: Local need to be used. This configuration ensures that packets reaching a node are only forwarded to Pods that run on the same node. Therefore, the load balancer IP address of the service needs to be attached to the same node that runs the ingress controller pod.
This can be achieved with different strategies. One approach is to use a DaemonSet to ensure that a pod is running on each node. However, this approach is feasible only in some cases, and if a cluster has many nodes, a DaemonSet can waste resources.
For a more efficient setup, you can schedule Pods to run only on nodes of a specific node pool by using a node selector. The node pool needs to have labels that can be used in the node selector. To ensure that the service's load balancer IP is also attached to one of these nodes, annotate the service with cloud.ionos.com/node-selector: key=value, where key and value are the labels of the node pool.
The following example shows how to install the Ingress NGINX Controller as a DaemonSet with a node selector and how to configure the controller service with the required annotation.
Create a node pool with the label nodepool=ingress, for example via the DCD, API, or Terraform.
Create a values.yaml file for later use in the helm command with the following content:
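The following is a sketch of such a values.yaml, assuming the ingress-nginx helm chart and the nodepool=ingress label from the previous step:

```yaml
controller:
  kind: DaemonSet                      # run one controller pod on every selected node
  nodeSelector:
    nodepool: ingress                  # schedule only on nodes of the labelled node pool
  service:
    externalTrafficPolicy: Local       # forward only to local pods, preserving the client source IP
    annotations:
      cloud.ionos.com/node-selector: nodepool=ingress   # attach the LB IP to one of the labelled nodes
```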
Install ingress-nginx via helm using the following command:
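This is the same command as in the previous section:

```bash
helm install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace -f values.yaml
```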