# Enable NIC Multi-Queue for Virtual Machines (VMs)

NIC Multi-Queue is off by default and must be explicitly turned on using the [<mark style="color:blue;">DCD</mark>](https://docs.ionos.com/sections-test/guides/network-services/vdc-networking/nic-multi-queue/how-tos/create-nic-multi-queue) or [<mark style="color:blue;">IONOS Cloud API</mark>](https://docs.ionos.com/sections-test/guides/network-services/vdc-networking/nic-multi-queue/api-how-tos/create-nic-multi-queue) for each VM.

{% hint style="info" %}
**Note:** The following table shows the Compute Services in which the feature is available.
{% endhint %}

| **Compute Services**                                                                                                                                  | **Available?** |
| ----------------------------------------------------------------------------------------------------------------------------------------------------- | -------------- |
| [<mark style="color:blue;">Dedicated Core Servers</mark>](https://docs.ionos.com/sections-test/guides/compute-services/compute-engine/dedicated-core) | ✅              |
| [<mark style="color:blue;">vCPU Servers</mark>](https://docs.ionos.com/sections-test/guides/compute-services/compute-engine/vcpu-server)              | ✅              |
| [<mark style="color:blue;">Cubes</mark>](https://docs.ionos.com/sections-test/guides/compute-services/cubes)                                          | ❌              |

{% stepper %}
{% step %}

### Verify the number of queues before enabling NIC Multi-Queue

Use the `ethtool -l` command to query the maximum and current settings for the receive (RX), transmit (TX), and combined queue channels on a specified network device. The following example uses `ens6` as the NIC.

To verify the number of queues configured on each NIC, use the following command:

```shell
root@ubuntu:~# ethtool -l ens6
Channel parameters for ens6:
Pre-set maximums:
RX:  n/a
TX:  n/a
Other:  n/a
Combined: 1
Current hardware settings:
RX:  n/a
TX:  n/a
Other:  n/a
Combined: 1
```

The output shows that the NIC `ens6` is configured for:

* Maximum combined queues: 1
* Current hardware settings (combined): 1

The output indicates that, at the hardware level, `ens6` is not currently configured to use multiple queues. It is limited to a single combined queue for both transmit (`TX`) and receive (`RX`) traffic. This baseline check confirms the queue capacity before you enable NIC Multi-Queue.
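The baseline check can also be scripted, for example, to gate an automation step on the current queue count. The sketch below extracts the `Combined` value from the `Current hardware settings` section; for illustration it parses the sample output from above via a heredoc, but on a live VM you would replace the heredoc with the real `ethtool -l ens6` call:

```shell
# Sample `ethtool -l ens6` output embedded for illustration.
# On a live VM, use instead: sample_output=$(ethtool -l ens6)
sample_output=$(cat <<'EOF'
Channel parameters for ens6:
Pre-set maximums:
RX:  n/a
TX:  n/a
Other:  n/a
Combined: 1
Current hardware settings:
RX:  n/a
TX:  n/a
Other:  n/a
Combined: 1
EOF
)

# Print the "Combined" value that appears after "Current hardware settings",
# skipping the identical-looking line in the "Pre-set maximums" section.
current=$(printf '%s\n' "$sample_output" \
  | awk '/Current hardware settings/{cur=1} cur && /^Combined:/{print $2; exit}')
echo "Current combined queues: $current"
```

With the sample output above, the script prints `Current combined queues: 1`, confirming the single-queue baseline.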
{% endstep %}

{% step %}

### Verify the number of queues after enabling NIC Multi-Queue

To verify the number of queues configured on each NIC, use the following command:

{% hint style="info" %}
**Note:** The example below is for a VM configured with four threads.
{% endhint %}

```shell
root@ubuntu:~# ethtool -l ens6
Channel parameters for ens6:
Pre-set maximums:
RX:		n/a
TX:		n/a
Other:		n/a
Combined:	4
Current hardware settings:
RX:		n/a
TX:		n/a
Other:		n/a
Combined:	4
```

```shell
root@ubuntu:~# ls /sys/class/net/ens6/queues/
rx-0  rx-1  rx-2  rx-3  tx-0  tx-1  tx-2  tx-3
```

The output shows that the NIC `ens6` is configured for:

* Maximum combined queues: 4
* Current hardware settings (combined): 4

At the hardware level, NIC Multi-Queue is now enabled on the `ens6` interface with four combined queues, matching the VM's four threads for maximum parallelism. The output shows that NIC Multi-Queue can now distribute traffic across four CPU cores simultaneously for both transmit (`TX`) and receive (`RX`) traffic.
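The sysfs listing above can likewise be checked programmatically by counting the `rx-*` and `tx-*` queue directories. This sketch builds a mock queues directory in a temp location so it is self-contained; on a live VM, you would instead set `qdir=/sys/class/net/ens6/queues` (the variable name `qdir` is just an illustration):

```shell
# Mock sysfs queues tree for illustration; on a live VM use:
#   qdir=/sys/class/net/ens6/queues
qdir=$(mktemp -d)
mkdir -p "$qdir"/rx-0 "$qdir"/rx-1 "$qdir"/rx-2 "$qdir"/rx-3 \
         "$qdir"/tx-0 "$qdir"/tx-1 "$qdir"/tx-2 "$qdir"/tx-3

# Count receive and transmit queue directories separately.
rx_count=$(ls -d "$qdir"/rx-* | wc -l)
tx_count=$(ls -d "$qdir"/tx-* | wc -l)
echo "RX queues: $rx_count, TX queues: $tx_count"
```

A count of four for both directions corresponds to the `Combined: 4` value reported by `ethtool -l`.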
{% endstep %}

{% step %}

### Improve your network performance

Depending on your use case, we recommend unsetting CPU affinity and tuning TCP to further improve network performance.

{% tabs %}
{% tab title="Unset CPU affinity" %}
Use the following command to disable Transmit Packet Steering (XPS) by setting all CPU affinity masks to `00000000`. It loops through all transmit queues `tx-*` of the `ens6` network interface and writes `00000000` to each queue's `xps_cpus` file. This lets the kernel handle queue distribution automatically instead of pinning specific transmit queues to specific CPUs.

```shell
for f in /sys/class/net/ens6/queues/tx-*/xps_cpus; do echo 00000000 | tee $f; done
```
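To confirm that XPS is fully disabled, you can read the masks back and count any queue whose mask is not all zeros. The sketch below builds a mock queues directory so it runs anywhere; on a live VM, you would set `qdir=/sys/class/net/ens6/queues` instead (`qdir` is an illustration variable):

```shell
# Mock queues tree for illustration; on a live VM use:
#   qdir=/sys/class/net/ens6/queues
qdir=$(mktemp -d)
for q in tx-0 tx-1 tx-2 tx-3; do
  mkdir -p "$qdir/$q"
  echo 00000000 > "$qdir/$q/xps_cpus"   # what the tee loop above writes
done

# Count transmit queues whose mask is not all zeros.
pinned=0
for f in "$qdir"/tx-*/xps_cpus; do
  grep -q '^00000000$' "$f" || pinned=$((pinned + 1))
done
echo "Queues still pinned: $pinned"
```

An output of `Queues still pinned: 0` means no transmit queue remains bound to a specific CPU.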

{% endtab %}

{% tab title="TCP Tuning" %}
To optimize network performance for multi-queue NICs under high traffic loads, increase the TCP receive and transmit buffer sizes:

```shell
echo 'net.core.wmem_max=16777216' >> /etc/sysctl.conf
echo 'net.core.rmem_max=12582912' >> /etc/sysctl.conf
echo 'net.ipv4.tcp_rmem=4096 87380 16777216' >> /etc/sysctl.conf
echo 'net.ipv4.tcp_wmem=4096 87380 16777216' >> /etc/sysctl.conf
sysctl -p
```

TCP tuning increases throughput on high-bandwidth connections by allowing more data in transit and reduces packet drops by preventing buffer exhaustion under heavy load. Running `sysctl -p` reloads `/etc/sysctl.conf` and applies the changes immediately.
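To put the buffer sizes in perspective, the maximum values in the settings above convert from bytes to MiB as follows (a quick arithmetic check, not an IONOS-specific command):

```shell
# 1 MiB = 1048576 bytes; convert the two maximum buffer sizes used above.
for bytes in 16777216 12582912; do
  echo "$bytes bytes = $((bytes / 1048576)) MiB"
done
```

That is, the settings allow up to 16 MiB per TCP send buffer and a 12 MiB global receive cap, which is why they matter mainly for high-bandwidth, high-latency links.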

{% hint style="info" %}
**Note:** These buffer size increases are particularly significant for multi-queue NICs that handle high traffic volumes. Adjust the values according to your specific network requirements and the available system memory.
{% endhint %}
{% endtab %}
{% endtabs %}
{% endstep %}
{% endstepper %}
