Use Cases

Scenario 1: Preserve Source Internet Protocol (IP) address when using Ingress

Some applications, such as ingress controllers, require a Kubernetes service of type LoadBalancer that preserves the source IP address of incoming packets. You can manually integrate a Network Load Balancer (NLB) by exposing a suitable Kubernetes node and attaching a public IP address to it. This node then serves as a load balancer using kube-proxy.
Note:
  • This works fine with services that use externalTrafficPolicy: Cluster, but in this case, the client's source IP address is lost.
  • The public IP address that is used as the Load Balancer IP address also needs to be bound to those nodes on which the ingress controller is running.
To preserve the client source IP address, Kubernetes services with externalTrafficPolicy: Local need to be used. This configuration ensures that packets reaching a node are only forwarded to pods that run on the same node, preserving the client source IP address. Therefore, the load balancer IP address of the service needs to be attached to the same node running the ingress controller pod.
This can be achieved with different strategies. One approach is to use a DaemonSet to ensure that a pod runs on each node. However, this approach is not always feasible; in a cluster with many nodes, a DaemonSet wastes resources.
For an efficient setup, you can schedule pods to be run only on nodes of a specific node pool using nodeSelector. The node pool needs to have labels that can be used in the node selector. To ensure that the service's load balancer IP is also attached to one of these nodes, annotate the service with cloud.ionos.com/node-selector: key=value, where the key and value are the labels of the node pool.
The following example shows how to install the ingress-nginx Helm chart as a DaemonSet with a node selector and configure the controller service with the required annotation.
  1. Create a node pool with a label nodepool=ingress:
    ionosctl k8s nodepool create --cluster-id <cluster-id> \
    --name ingress --node-count 1 --datacenter-id <datacenter-id> --labels nodepool=ingress
  2. Create a values.yaml file for later use in the helm command with the following content:
    controller:
      nodeSelector:
        nodepool: ingress
      service:
        annotations:
          cloud.ionos.com/node-selector: nodepool=ingress
      kind: DaemonSet
  3. Install ingress-nginx via helm using the following command:
    helm upgrade --install ingress-nginx ingress-nginx \
    --repo https://kubernetes.github.io/ingress-nginx \
    --namespace ingress-nginx --create-namespace -f values.yaml
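After the installation, a quick check confirms the setup. The following is a hedged sketch, assuming kubectl is configured for the cluster and the chart used its default release and namespace names from the command above:

```shell
# List the controller pods together with the nodes they were scheduled on;
# all of them should be running on nodes from the labeled node pool.
kubectl get pods -n ingress-nginx -o wide

# The EXTERNAL-IP column should show the public IP address attached to one
# of the labeled nodes.
kubectl get svc -n ingress-nginx ingress-nginx-controller

# Confirm that the nodes carry the expected node pool label.
kubectl get nodes -l nodepool=ingress
```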

Scenario 2: Add Custom CoreDNS Configuration

You can extend CoreDNS with additional configuration that survives control plane maintenance by creating a ConfigMap in the kube-system namespace. The ConfigMap must be named coredns-additional-conf and contain a data entry with the key extra.conf. The value of the entry must be a string containing the additional configuration.
The following example shows how to add a custom DNS entry for example.abc:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-additional-conf
  namespace: kube-system
data:
  extra.conf: |
    example.abc:53 {
        hosts {
            1.2.3.4 example.abc
            2.3.4.5 server.example.abc
            fallthrough
        }
    }
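The ConfigMap can be applied and verified from inside the cluster. This is a sketch, assuming kubectl access and that the manifest above was saved as coredns-additional-conf.yaml (the file name and the test pod name are illustrative):

```shell
# Apply the ConfigMap; the control plane merges extra.conf into CoreDNS.
kubectl apply -f coredns-additional-conf.yaml

# Resolve the custom entry from a throwaway pod to confirm the new zone
# is served; the answer should contain 1.2.3.4.
kubectl run -it --rm dns-test --image=busybox --restart=Never -- \
  nslookup example.abc
```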

Scenario 3: Horizontal scaling of network traffic

Horizontal scaling of ingress network traffic over multiple Kubernetes nodes means adjusting the number of running instances of your application to handle varying levels of load, while still preserving the original client IP address, which the Kubernetes ingress controller forwards in the X-Forwarded-For HTTP header.

Ingress NGINX controller configuration

The Ingress NGINX Controller will be installed via Helm using a separate configuration file.
The following example contains a complete configuration file, including parameters and values to customize the installation:
controller:
  nodeSelector:
    node-role.kubernetes.io/ingress-node: nginx
  replicaCount: 3
  service:
    loadBalancerIP: <static_IP_address>
    annotations:
      cloud.ionos.com/node-selector: node-role.kubernetes.io/ingress=<service_name>
    externalTrafficPolicy: Local
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app.kubernetes.io/name
            operator: In
            values:
            - ingress-nginx
        topologyKey: "kubernetes.io/hostname"
The following illustration shows the high-level architecture built using IONOS Managed Kubernetes.
[Image: High-Level Architecture Diagram]

Load balancing

The current implementation of the service of type LoadBalancer does not deploy a true load balancer in front of the Kubernetes cluster. Instead, it allocates a static IP address and assigns it to one of the Kubernetes nodes as an additional IP address. This node is, therefore, acting as an ingress node and takes over the role of a load balancer. If the pod of the service is not running on the ingress node, kube-proxy will NAT the traffic to the correct node.
Problem: The NAT operation will replace the original client IP address with an internal node IP address.
Any individual Kubernetes node provides a throughput of up to 2 Gbit/s on the public interface. Scaling beyond that can be achieved by adding nodes horizontally. Additionally, the service LB IP addresses must be distributed across those nodes. This architecture relies on Domain Name System (DNS) load balancing: all LB IP addresses are added to the DNS record, and during name resolution the client decides which IP address to connect to.
When using an ingress controller inside a Kubernetes cluster, web services will usually not be exposed as type LoadBalancer, but as type NodePort instead. The ingress controller is the component that will accept client traffic and distribute it inside the cluster. Therefore, usually only the ingress controller service is exposed as type LoadBalancer.
To scale traffic across multiple nodes, multiple LB IP addresses are required, which are then distributed across the available ingress nodes. This can be achieved by creating as many (dummy) services as nodes and IP addresses are required. It is best practice to reserve these IP addresses outside of Kubernetes in the IP Manager so that they are not released when the service is deleted.

5 Gbit/s traffic demand

Let’s assume that our web service demands a throughput of close to 5 Gbit/s. Distributing this across 2 Gbit/s interfaces would require 3 nodes. Each of these nodes requires its own LB IP address, so in addition to the ingress controller service, one needs to deploy 2 additional (dummy) services.
To spread each IP address to a dedicated node, use a node label to assign the LB IP address to: node-role.kubernetes.io/ingress=<service_name>
Note: You can always set labels and annotations via the DCD, API, Terraform, or other DevOps tools.

Pin a Load Balancer IP address

To pin a LB IP address to a dedicated node, follow these steps:
  1. Reserve an IP address in the IP Manager.
  2. Create a node pool with only one node.
  3. Apply the following label to the node:
    node-role.kubernetes.io/ingress=<service_name>
  4. Add the following node selector annotation to the service:
    annotations.cloud.ionos.com/node-selector: node-role.kubernetes.io/ingress=<service_name>
In our example, reserve 3 IP addresses in the IP Manager and add them to the DNS A record of your fully qualified domain name. Then, create 3 node pools, each containing only one node, and apply a different ingress node-role label to each node pool. We will refer to these 3 nodes as ingress nodes.
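Creating the three single-node pools can be sketched with ionosctl, reusing the flags shown in Scenario 1. This is an illustrative sketch: the label values service-1 to service-3 are hypothetical stand-ins for the per-service <service_name> values, and the cluster and datacenter IDs must be replaced with your own:

```shell
# Create 3 node pools of one node each, every pool carrying a distinct
# ingress node-role label (label values here are illustrative).
for i in 1 2 3; do
  ionosctl k8s nodepool create --cluster-id <cluster-id> \
    --datacenter-id <datacenter-id> \
    --name ingress-$i --node-count 1 \
    --labels node-role.kubernetes.io/ingress=service-$i
done
```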
The first service will be the ingress NGINX controller service. Add the above-mentioned service annotation to it:
controller.service.annotations.cloud.ionos.com/node-selector: node-role.kubernetes.io/ingress=<service_name>
Also, add the static IP address (provided by the IP Manager) to the configuration:
controller.service.loadBalancerIP: <LB_IP_address>
Similarly, 2 additional (dummy) services of type LoadBalancer must be added to spread traffic across 3 nodes. These 2 services must point to the same ingress-nginx deployment; therefore, the same ports and selectors as the standard ingress-nginx service are used.
Example
apiVersion: v1
kind: Service
metadata:
  name: ingress-dummy-service-01
  namespace: ingress-nginx
  annotations:
    cloud.ionos.com/node-selector: node-role.kubernetes.io/ingress=dummy-service-01
spec:
  ports:
  - appProtocol: http
    name: http
    nodePort: xyzxy
    port: 80
    protocol: TCP
    targetPort: http
  - appProtocol: https
    name: https
    nodePort: abcdef
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: LoadBalancer
  loadBalancerIP: <LB_IP_address>
  externalTrafficPolicy: Local
Note:
  • Make sure to add your specific LB IP address to the manifest.
  • Notice that the service uses its own service-specific node selector label in the annotation.
  • This spreads 3 IP addresses across 3 different nodes.
To avoid packets being forwarded via Network Address Translation (NAT) to different nodes, which lowers performance and loses the original client IP address, each node holding an LB IP address must also run an ingress controller pod. (This could be implemented with a DaemonSet, but that would waste resources on nodes that are not acting as ingress nodes.) First, as many replicas of the ingress controller as there are ingress nodes must be created (in our case 3): controller.replicaCount: 3
Then, the pods must be deployed only on those ingress nodes. This is accomplished by using another node label, for example, node-role.kubernetes.io/ingress-node=nginx. The name and value can be set to any desired string, but all 3 nodes must carry the same label. The ingress controller must now be configured to use this nodeSelector:
controller.nodeSelector.node-role.kubernetes.io/ingress-node: nginx
This limits the nodes on which the ingress controller pods are placed.
For the ingress controller pods to spread across all nodes equally (one pod on each node), a pod antiAffinity must be configured:
controller.affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app.kubernetes.io/name
          operator: In
          values:
          - ingress-nginx
      topologyKey: "kubernetes.io/hostname"
To force Kubernetes to forward traffic only to pods running on the local node, externalTrafficPolicy needs to be set to Local. This also guarantees the preservation of the original client IP address. It needs to be configured for the Ingress-NGINX service (controller.service.externalTrafficPolicy: Local) and for the 2 dummy services (see the full service example above).
The Ingress-NGINX Controller is then deployed via the following helm command:
helm install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace -f values.yaml

Verification of the Architecture

To verify the setup, ensure that:
  • DNS load balancing works correctly.
  • Fully Qualified Domain Name (FQDN) DNS lookup yields three IP addresses.
nslookup whoami.example.com
Non-authoritative answer:
Name: whoami.example.com
Address: xx.xxx.xxx.xxx
Name: whoami.example.com
Address: xx.xxx.xxx.xxx
Name: whoami.example.com
Address: xx.xxx.xxx.xxx
The Whoami web application can be deployed using the following manifests:
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: whoami
  labels:
    app: whoami
spec:
  replicas: 2
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami
        image: containous/whoami
        ports:
        - name: web
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
  namespace: default
spec:
  ports:
  - protocol: TCP
    name: web
    port: 80
  selector:
    app: whoami
  type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-whoami
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: whoami.<your_domain>
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: whoami
            port:
              number: 80
Note: Ensure that both Whoami pods are running, the service is created, and the Ingress returns an external IP address and a hostname.
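These checks can be sketched with kubectl, assuming access to the cluster:

```shell
# Both whoami pods should be in the Running state.
kubectl get pods -n default -l app=whoami

# The NodePort service should exist.
kubectl get svc whoami -n default

# The Ingress should show the host and an external address.
kubectl get ingress ingress-whoami -n default
```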
Running curl with the flags shown below against the hostname shows which load balancer IP address is used. Run the same curl command multiple times to verify that connections to all 3 LB IP addresses are possible.
The response from the whoami application will also return the client IP address in the X-Forwarded-For HTTP header. Verify that it is your local public IP address.
curl -svk http://whoami.example.com
* Trying xx.xxx.xxx.xxx:xx...
* Connected to whoami.example.com (xx.xxx.xxx.xxx) port 80 (#0)
> GET / HTTP/1.1
> Host: whoami.example.com
> User-Agent: curl/8.1.2
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Tue, 07 Nov 2023 11:19:35 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 450
< Connection: keep-alive
<
Hostname: whoami-xxxxxxxxx-swn4z
IP: xxx.0.0.x
IP: ::1
IP: xx.xxx.xxx.xxx
IP: xxx0::xxxx:xxxx:xxxx:xxxx
RemoteAddr: xx.xxx.xxx.xx:xxxx
GET / HTTP/1.1
Host: whoami.example.com
User-Agent: curl/8.1.2
Accept: */*
X-Forwarded-For: xx.xxx.xxx.xx
X-Forwarded-Host: whoami.example.com
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Scheme: http
X-Real-Ip: xx.xxx.xxx.xx
X-Request-Id: xxxx00xxxxxaa00000e00d0040adxxd00
X-Scheme: http

Scenario 4: Optimize Kubernetes deployments for compute resources and storage volumes

You can optimize compute resources, such as CPU and RAM, along with storage volumes in Kubernetes through the strategic use of zones. To enhance the performance of your Kubernetes environment, distribute workloads intelligently across different zones; this improves performance and enhances fault tolerance and resilience.

Example

Define a storage class named ionos-enterprise-ssd-zone-1, which provisions SSD-type storage with the ext4 file system, located in availability zone ZONE_2. Set volumeBindingMode to WaitForFirstConsumer and enable allowVolumeExpansion.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
  name: ionos-enterprise-ssd-zone-1
provisioner: cloud.ionos.com
parameters:
  type: SSD
  fstype: ext4
  availabilityZone: ZONE_2
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
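A workload can then request storage from this class through a PersistentVolumeClaim. The following is a minimal sketch; the claim name and requested size are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-zone-2               # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ionos-enterprise-ssd-zone-1
  resources:
    requests:
      storage: 10Gi               # illustrative size
```

Because the storage class uses volumeBindingMode: WaitForFirstConsumer, the volume is not provisioned until a pod that uses the claim is scheduled.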
This implementation provides a robust and reliable Kubernetes infrastructure for your applications.