MTU

Last change on 2025-04-25 • Created on 2025-04-25 • ID: CL-F987E

This article assumes you are familiar with packet sizes, MTU, and MSS, and with how routing across several networks works.

To check if your issue is in fact related to the MTU, you can test if Path MTU Discovery works.

ℹ   Path MTU Discovery (PMTUD)
If a packet gets dropped because it is too big and an ICMP error message with the correct MTU is returned, PMTUD learns that MTU for this destination and uses it for future packets.

If the system doesn't receive an ICMP message or the message doesn't contain the MTU, PMTUD fails and large packets will continue to get dropped.

With ip addr, you can view the system's interfaces and their MTU.

Sending a request

Source —> Intermediary —> Intermediary —> Destination

When you send a request, the system uses the MTU of its local interface. If any interface between the source and the destination has a smaller MTU, the packet will get dropped. Ideally, the intermediary with the small MTU requirement should provide a proper ICMP error message that includes the MTU.

You can use the ping command to probe the network for the maximum path MTU and to check if you receive a proper ICMP error message for packets exceeding it. We suggest probing for the maximum value you expect to work, as well as for a slightly larger value.

Within a private network with an MTU of 1450, the maximum transferable ICMPv4 payload size is 1422 bytes (the MTU minus 20 bytes for the IPv4 header and 8 bytes for the ICMP header).
Over the public interface, MTUs up to 1500 should work, so the maximum transferable ICMPv4 payload size would be 1472 bytes.
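The maximum ping payload follows directly from the packet headers: an ICMPv4 echo payload is the MTU minus 28 bytes (a 20-byte IPv4 header plus an 8-byte ICMP header). A quick shell sketch of that arithmetic:

```shell
# ICMPv4 echo payload = MTU - IPv4 header (20 bytes) - ICMP header (8 bytes)
for mtu in 1450 1500; do
  echo "MTU $mtu -> max ICMPv4 ping payload $((mtu - 20 - 8)) bytes"
done
```

For ICMPv6 the IP header is 40 bytes instead of 20, so the maximum payload would be 12 bytes smaller.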

# Testing with ping and the `Don't fragment` bit set.
# First with the maximum payload expected to work, e.g. 1422 bytes for a private network
ping -c 1 -M do -s 1422 <ip_address>
# Additionally, ping with a slightly too big payload of 1423 bytes
ping -c 1 -M do -s 1423 <ip_address>
# Optionally, watch for incoming ICMP error messages in a second terminal
sudo tcpdump -i <interface_name> icmp

Analyzing the response:

  • If the request succeeds — meaning you receive an echo reply — MTU is not the cause of your problem because the packet goes through the network without any issues.

    # ping -c1 -M do -s 1422 203.0.113.1
    PING 203.0.113.1 (203.0.113.1) 1422(1450) bytes of data.
    1430 bytes from 203.0.113.1: icmp_seq=1 ttl=64 time=0.122 ms
    
    --- 203.0.113.1 ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms

  • If you receive an error message that includes the MTU (e.g. ping: local error: message too long, mtu=1450), Path MTU Discovery should learn the correct MTU for that destination and adjust it accordingly. Future packets should adhere to the new MTU and succeed.

    # ping -c1 -M do -s 1423 203.0.113.1
    PING 203.0.113.1 (203.0.113.1) 1423(1451) bytes of data.
    ping: local error: message too long, mtu=1450
    
    --- 203.0.113.1 ping statistics ---
    1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

    Note that this only fixes the problem in one direction. Ideally, you should run the command in both directions to ensure the correct MTU is set on both sides of the connection. In the example above, you should ping the destination from the source, and you should also ping the source from the destination.


  • If you receive an error message that doesn't contain the MTU, Path MTU Discovery cannot correct the MTU for that destination and future packets will continue to use the wrong MTU.

  • If you don't receive a response at all, some component is most likely filtering ICMP (error) packets. If this occurs within a private network, it might be worth checking iptables or nftables rulesets, or any firewall you may have configured on your system.

    # ping -c1 -M do -s 1450 203.0.113.1
    PING 203.0.113.1 (203.0.113.1) 1450(1478) bytes of data.
    
    --- 203.0.113.1 ping statistics ---
    1 packets transmitted, 0 received, 100% packet loss, time 0ms
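If checking your ruleset reveals that your own firewall is dropping the ICMP errors, the commands below sketch how you might explicitly allow the messages PMTUD depends on with nftables. The table and chain names (`inet filter input`) are assumptions and must match your actual ruleset:

```shell
# Allow the ICMP error messages PMTUD relies on
# (table/chain names below are examples, adjust to your ruleset)
sudo nft add rule inet filter input icmp type destination-unreachable accept
sudo nft add rule inet filter input icmpv6 type packet-too-big accept
```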

Receiving a response

Source <— Intermediary <— Intermediary <— Destination

If the destination sends a response that is bigger than the smallest MTU along the path between the source and the destination, the response will get dropped.

Ideally, the intermediary with the small MTU requirement should provide a proper ICMP error message that includes the MTU, and the destination should re-send a smaller response. However, some destinations may be misconfigured to drop all ICMP traffic and the ICMP error message with the correct MTU is ignored.

If you have access to the destination, connect to the system and use the ping command as explained in the section "Sending a request".

If you don't have access to the destination, you can use this command to check if you receive a proper ICMP error message after you send your packet:

holu@source:~$ sudo tcpdump -i <interface_name> icmp
13:49:25.292748 IP static.1.113.0.203.clients.your-server.de > 1.100.51.198: ICMP static.1.113.0.203.clients.your-server.de unreachable - need to frag (mtu 1450), length 556

Analyzing the response:

  • If you see a proper ICMP error message but the problem persists, the destination is likely misconfigured, meaning it blocks the ICMP error message and doesn't correct the MTU.

  • If you don't see an error message or the message doesn't contain the MTU, Path MTU Discovery of the destination cannot correct the MTU and future packets will continue to use the wrong MTU.

Solution

  • If you have access to both the source and the destination

    Set the MTU value on both the source and the destination down to the smallest MTU required for the entire path. For Hetzner Cloud Networks, the maximum MTU value is 1450 bytes.

    sudo ip link set <interface_name> mtu 1450  

    If your source is a network-isolated environment (e.g. Docker container or LXC), you have to set the MTU value within the network-isolated environment itself rather than on the host.

    For issues with Docker, see their official documentation on the MTU setting for the default bridge network and the default MTU setting for new networks.
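As a sketch of the two Docker settings referenced above: the default bridge network takes its MTU from `/etc/docker/daemon.json`, and user-defined networks accept a driver option at creation time. The value 1450 assumes a Hetzner Cloud Network; adjust it to your path MTU:

```shell
# /etc/docker/daemon.json — MTU for the default bridge network (restart Docker afterwards):
#   { "mtu": 1450 }

# MTU for a new user-defined network:
docker network create -o com.docker.network.driver.mtu=1450 <network_name>
```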


  • If you only have access to the source or an intermediary

    ℹ   MSS Clamping
    When you enable MSS Clamping, you can define a custom MSS value that should be sent to the server (destination) when the TCP connection is initialized.

    Only for TCP: Enable MSS Clamping and set the MSS value to the smallest MSS required for the entire path between the source and the destination (see this example nftables configuration on Linux). You can enable MSS Clamping either on the source itself or on a router. For Hetzner Cloud Networks, the maximum MSS value is 1410 bytes (the MTU of 1450 minus 20 bytes for the IPv4 header and 20 bytes for the TCP header).


    If you have several sources that share the same MSS limit and the same router, e.g. several Docker containers on a single host:

    Container A / Container B —> Host (Gateway) —> Router —> Internet
    You can enable MSS Clamping at the router level. The router will change the MSS value of the TCP SYN packet before it forwards the TCP SYN packet to the destination. This way, you don't have to set up MSS Clamping on every source individually.
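The rules below sketch what such MSS Clamping could look like with nftables, both with a fixed value on the source and derived from the path MTU on a router. The table and chain names (`inet filter output` / `inet filter forward`) are assumptions and must match your actual ruleset:

```shell
# On the source: clamp the MSS of outgoing TCP SYN packets to a fixed 1410 bytes
# (table/chain names below are examples, adjust to your ruleset)
sudo nft add rule inet filter output tcp flags syn tcp option maxseg size set 1410

# On the router: clamp forwarded TCP SYN packets to the MSS
# matching the path MTU of the route instead of a fixed value
sudo nft add rule inet filter forward tcp flags syn tcp option maxseg size set rt mtu
```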