Overview
Network Load Balancer (NLB) is a pre-configured Virtual Data Center (VDC) element that provides connection-based layer 4 load balancing features and functionality. It is fully managed by IONOS, deeply integrated into our Software-Defined Networking (SDN) stack, deployed in a highly available setup, and offers the robust security features required for fault-tolerant applications.
NLB serves as a single entry and exit point for all client traffic. Connection requests are accepted by the listener, and according to the defined forwarding rules, the sessions are distributed for parallel processing across multiple compute resources (targets). NLB keeps active sessions mapped to the same targets (sticky sessions), performs health checks, and routes traffic only to healthy targets.
NLB is a proxy load balancer: client connections are terminated at the balancer and mapped 1:1 to connections that the balancer initiates to the targets. This is called two-arm load balancing because the load balancer has two arms (interfaces): one facing the clients and the other facing the targets.
NLB provides the following features:
Performance
Scalability
Redundancy and fault tolerance
Deployment flexibility
Reduced or zero downtime
Fully managed service
High throughput and low latency
Health monitoring
Sticky sessions
High availability
Key Concepts
Network Address Translation (NAT)
Network Address Translation modifies IP header network address information to direct traffic as it moves from public to private address space. In the context of the Managed Network Load Balancer, this means that client connections are terminated on the load balancer, and the load balancer initiates a dedicated connection with the backend target servers.
NLB performs destination NAT (DNAT) to map (connect) the clients to the targets. Source NAT (SNAT) is not supported; targets cannot initiate network connections through the load balancer.
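The minimal sketch below illustrates the idea only: an accepted client connection is mapped 1:1 to a new connection toward a selected target, with the destination rewritten (the DNAT direction) and the balancer's private backend IP appearing as the source the target sees. All addresses, names, and data structures are hypothetical, not the IONOS implementation.

```python
# Conceptual sketch only; IPs and names are hypothetical, not IONOS internals.
from dataclasses import dataclass

@dataclass(frozen=True)
class Connection:
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int

def map_to_target(client_conn: Connection, backend_ip: str, ephemeral_port: int,
                  target_ip: str, target_port: int) -> Connection:
    """Build the balancer-to-target connection for an accepted client connection.

    The destination is rewritten to the chosen target, and the target sees
    the balancer's private backend IP as the source of the traffic.
    """
    return Connection(src_ip=backend_ip, src_port=ephemeral_port,
                      dst_ip=target_ip, dst_port=target_port)

# A client reaching the listener IP on port 80 ...
client = Connection("203.0.113.10", 51514, "198.51.100.5", 80)
# ... is mapped 1:1 to a connection the balancer opens toward a target.
upstream = map_to_target(client, backend_ip="10.7.228.225", ephemeral_port=40001,
                         target_ip="10.7.228.11", target_port=8080)
print(upstream)
```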
Forwarding Rules
Forwarding rules are configuration settings that determine how incoming network traffic is forwarded from a source to a destination. For NLB, each forwarding rule maps a listener IP and port to a set of targets and defines how incoming connections are distributed among them.
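As an illustration only, a forwarding rule can be thought of as the kind of record sketched below; the field names and all addresses are assumptions made for the sketch, not verbatim properties of the IONOS Cloud API.

```python
# Illustrative shape of a forwarding rule; field names and values are assumptions,
# not verbatim IONOS Cloud API properties.
forwarding_rule = {
    "name": "web-tcp-80",
    "protocol": "TCP",               # NLB rules and health checks are TCP-based
    "listener_ip": "198.51.100.5",   # IP exposed by the listener
    "listener_port": 80,             # port the listener accepts connections on
    "targets": [
        {"ip": "10.7.228.11", "port": 8080, "weight": 1},
        {"ip": "10.7.228.12", "port": 8080, "weight": 2},  # gets twice the share of traffic
    ],
}
```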
Sticky Sessions
Sticky sessions (source IP affinity) maintain client sessions mapped to the same targets for as long as the TCP sessions stay active.
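A minimal sketch of the idea, assuming a simple in-memory table keyed by the client's source IP; this is an illustration of source IP affinity, not the NLB scheduler, and the target IPs are examples.

```python
# Source-IP affinity sketch; target IPs and the selection policy are placeholders.
import random

targets = ["10.7.228.11", "10.7.228.12", "10.7.228.13"]
affinity: dict[str, str] = {}  # client source IP -> pinned target

def pick_target(src_ip: str) -> str:
    """Return the target pinned to this source IP, choosing one on first contact."""
    if src_ip not in affinity:
        affinity[src_ip] = random.choice(targets)  # placeholder for the real scheduling decision
    return affinity[src_ip]

def sessions_closed(src_ip: str) -> None:
    """Release the pinning once the client's TCP sessions are no longer active."""
    affinity.pop(src_ip, None)

print(pick_target("203.0.113.10"))  # repeated calls return the same target
print(pick_target("203.0.113.10"))  # ... until sessions_closed() is called
```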
Listener
The listener is the client-facing arm of the load balancer: it accepts connections from clients on an exposed IP address and a configured listener port. NLB has a single listener interface that can support multiple IPs with different forwarding rules.
The listener of a public load balancer is exposed to and accepts client connections directly from the Internet. Public load balancers serve as edge devices that handle "north-south" traffic flowing in and out of the data center.
The listener of a private load balancer is exposed to a private network. Private load balancers handle "east-west" traffic flowing internally within the data center.
Listener IPs are configured in the Settings tab of the Inspector.
NLB comes with basic firewall rules that are applied automatically based on the forwarding rules and cannot be changed. However, additional firewall rules can be configured for the NICs of the targets.
Backend
NLB backend exposes a private IP to targets as the source of client traffic.
The backend private IP is derived from the network mask of the connected target network; if no LAN is connected to the southern (target-facing) interface, no default IP can be set.
Once a target network is connected and the changes are provisioned, the backend identifies the network mask and automatically reserves the recommended IP x.x.x.225 (illustrated in the sketch after this list).
The target network can also be configured manually; any potential IP conflicts must be resolved at the provisioning stage.
Multiple backend private IPs can be configured with different rules on the same NLB.
Backend IPs are configured in the Private IP tab of the Inspector.
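The short sketch below illustrates the derivation described above, assuming the recommended backend IP is simply the address ending in .225 inside the connected target network; the network used here is an example.

```python
# Sketch of deriving the recommended x.x.x.225 backend IP from the target network.
import ipaddress

def recommended_backend_ip(target_network: str) -> str:
    net = ipaddress.ip_network(target_network, strict=False)
    # Set the final octet of the network address to 225.
    candidate = ipaddress.ip_address(int(net.network_address) | 225)
    if candidate not in net:
        # Some networks contain no .225 host; configure the backend IP manually.
        raise ValueError(f"{candidate} is outside {net}; set the backend IP manually")
    return f"{candidate}/{net.prefixlen}"

print(recommended_backend_ip("10.7.228.0/24"))  # -> 10.7.228.225/24
```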
Targets
Targets are the compute resources, such as VM instances, containers, microservices, or appliances, to which the traffic is distributed for processing. The NLB backend addresses each registered target by its IP address and TCP port.
Targets can be added or removed and capacities scaled without disrupting the overall flow of connection requests. Targets are configured per Forwarding rule.
The traffic is distributed in proportion to the target "weight" relative to the combined weight of all targets. A target with a higher weight receives a greater share of traffic. The default target weight is 1, and the maximum is 256. Target weight is configured for each target.
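The sketch below illustrates the proportionality only and is not the NLB scheduler; with the example weights, the second target should receive roughly three quarters of new connections.

```python
# Weight-proportional target selection sketch; IPs and weights are examples.
import random

targets = [("10.7.228.11", 1), ("10.7.228.12", 3)]  # (target IP, weight), weight 1..256

def pick_weighted(targets: list[tuple[str, int]]) -> str:
    """Pick a target with probability weight / sum(weights)."""
    ips, weights = zip(*targets)
    return random.choices(ips, weights=weights, k=1)[0]

counts = {ip: 0 for ip, _ in targets}
for _ in range(10_000):
    counts[pick_weighted(targets)] += 1
print(counts)  # roughly {'10.7.228.11': 2500, '10.7.228.12': 7500}
```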
NLB performs Health checks to ensure that traffic is forwarded only to active targets. All health check-related metrics can be customized. Learn more about Health checks.
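As a rough illustration of a TCP-level check (the actual NLB health checks and their configurable metrics are covered on the Health checks page), a target could be considered healthy when a TCP connection can be established within a timeout; the host, port, and timeout below are example values.

```python
# Minimal TCP health check sketch; host, port, and timeout are example values.
import socket

def tcp_healthy(ip: str, port: int, timeout_s: float = 2.0) -> bool:
    """Treat a target as healthy if a TCP connection succeeds within the timeout."""
    try:
        with socket.create_connection((ip, port), timeout=timeout_s):
            return True
    except OSError:
        return False

print(tcp_healthy("10.7.228.11", 8080))
```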
Maintenance Window
The Managed Network Load Balancer is regularly maintained by IONOS and updated with the latest software versions and new features. IONOS reserves a weekly maintenance window for these updates, scheduled every Monday between 02:00 and 04:00 local time of the data center in which the Managed Network Load Balancer service is deployed. During maintenance, a service interruption of up to 5 seconds may occur. Beyond that brief interruption, no further service impact is anticipated, and the Managed Network Load Balancer will continue to operate within its service description and configuration.
Additional update deployments may be carried out outside the maintenance window, for example, in the case of urgent security patches.
Limitations
NLB operates at layer 4, the transport layer of the Open Systems Interconnection (OSI) model. NLB will distribute any TCP-based network traffic, including traffic of upper-layer application protocols such as HTTP and HTTPS. However, rules and health checks are strictly TCP-based, which means that HTTP rules (e.g., routing decisions based on the URL) are not supported.
SNAT Support: Managed NLB is not configured to support Source NAT (SNAT); targets cannot initiate network connections through the load balancer.