
A simple measurement of iptables performance

It is a reasonably well-understood problem that iptables does not scale well when many rules are present. The rough reason is that iptables rules are applied sequentially: under the hood there is a list of rules, and each packet is matched against them in order. Upon encountering a matching ACCEPT rule, the packet is allowed to flow through.
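You can see this ordered list directly on any machine running iptables. The command below is standard iptables, nothing specific to this test; it prints the INPUT chain with each rule's position, and a packet is checked against position 1, then 2, and so on until a rule matches.

# List the INPUT chain as an ordered, numbered list of rules.
# Each incoming packet is checked against rule 1 first, then 2, and so on.
iptables -L INPUT -n --line-numbers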

What does that slowdown really look like?

Throughput impact

Here are the results of a simple test. The graph below shows the results of an iperf3 test in bits per second. The high sections come from running the test against a machine with 2 rules installed; the low sections come from running it against a machine with 5002 rules installed.
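For anyone reproducing the test, a plain iperf3 server/client run like the sketch below is sufficient; 192.0.2.10 is just a placeholder for the receiving machine's address, and the defaults are used otherwise.

# On the receiving machine (the one with the iptables rules): start a server.
iperf3 -s

# On the sending machine: run the throughput test against it.
iperf3 -c 192.0.2.10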

The 2 rule tests have a mean of 5.54766e+08 bits per second, while the 5002 rule tests have a mean of 4.58647e+08 bits per second. Adding the 5000 rules therefore reduces mean throughput by roughly 17% (4.58647/5.54766 ≈ 0.83).

For reference, I used the following snippet to generate the rules.

for address in 135.1.{0..20}.{0..255}
do
    # Append one ACCEPT rule to the INPUT chain per source address.
    iptables -A INPUT -p tcp -s "$address" -j ACCEPT
done
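It is worth sanity-checking how many rules actually ended up installed. Listing the chain in iptables-save format and counting the lines is one quick way to do that.

# Count the entries in the INPUT chain; the first line printed by -S
# is the chain policy, so the number of rules is this count minus one.
iptables -S INPUT | wc -l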

NOTE

If you’re actually adding rules like this, you probably want to use an ipset instead. For example:

ipset create allow-set hash:ip
ipset add allow-set 10.0.1.1
ipset add allow-set 10.0.1.2
iptables -A INPUT -m set --match-set allow-set src -j ACCEPT
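With that single rule in place, the per-address entries live in the set rather than in the chain, so the generation loop above would populate the set instead. A minimal sketch using the same address range:

# Fill the set instead of appending thousands of individual rules.
for address in 135.1.{0..20}.{0..255}
do
    ipset add allow-set "$address"
done

Because membership in a hash:ip set is a hash lookup, the matching cost stays essentially flat no matter how many addresses the set contains.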

CPU impact

The other interesting question is how much these extra rules increase CPU usage on the receiving machine. Here are two graphs showing CPU usage on the receiving machine over time during the tests. No other programs were running on the machine for the duration of the tests.
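One convenient way to record CPU usage at one-second resolution during a run is mpstat from the sysstat package (any sampling tool would do; this is just one option).

# Sample overall CPU usage once per second for 60 seconds
# on the receiving machine while the iperf3 test is running.
mpstat 1 60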

Two rules

5002 rules

The difference in CPU usage appears to be marginal. This suggests that you can add many iptables rules to a system with little risk of them consuming the CPU, though you might still avoid doing so if you are very performance conscious, given the throughput penalty shown above.