
This is my first attempt at writing something up about traffic shaping. I don't really understand much of how this works, but I'm going to document a bit in the hope that people can improve it.

This is a script I use to throttle one machine down to half our ADSL rate. This machine is used for downloading large files (for example, ISOs of the latest Linux distribution), but we don't want it impacting the rest of our machines. This example was stolen from the Advanced Router HOWTO, and cleaned up a bit by me.

You run this script on your gateway; it rate-limits data to your internal client machine(s).

#!/bin/sh

# The device data is going out of. You can't stop a machine receiving data
# (short of firewalling), but you can limit how fast it sends data.
DEV=eth0

# The IP you want to throttle.
IP=10.10.10.13

# How fast your internal network is. This is used to estimate the rates more
# accurately. If this is wrong, the rates will be slightly off.
LINERATE=100mbit

# How fast you want them to be able to download. "kbps" means "kilobytes per
# second"; don't get it confused with "kbit", which is 8 times slower :)
THROTTLERATE=15kbps
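# (Untested aside, not in the original: 15 kilobytes/s is 15 * 8 = 120 kbit/s,
# so if your ADSL downlink were, say, 256 kbit/s, this would be roughly half
# of it. Adjust to your own line speed.)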

# Where the tc executable is.
TC=/sbin/tc

if [ ! -x $TC ]; then
    echo "Can't find $TC, aborting"
    exit 1
fi

# Remove any previous queuing. This probably removes any other policies you
# have on this interface too. Oh well.
$TC qdisc del dev $DEV root 2>/dev/null

# Add a queueing discipline. A queueing discipline is what manages the queues.
# We're using the "cbq" discipline, and we're saying that the average packet
# size is 1000 bytes (probably completely wrong :). Once again this is just a
# parameter to make the rate estimation more accurate.
$TC qdisc add dev $DEV root handle 1: cbq avpkt 1000 bandwidth $LINERATE
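# (Untested aside, not from the original: on newer kernels the "htb" qdisc is
# an alternative to cbq that needs less tuning. A rough sketch of an
# equivalent; the u32 filter further down would still be needed to direct
# traffic into class 1:1:
#   $TC qdisc add dev $DEV root handle 1: htb
#   $TC class add dev $DEV parent 1: classid 1:1 htb rate $THROTTLERATE
# )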

# Create a class, also(?) a cbq, rate limited to $THROTTLERATE. "allot" is, I
# think, how much data they get before they are rate limited. It must be at
# least the MTU, since you can't send partial packets: you must be able to
# send at least one entire packet.
# I don't know what "prio" is about.
# "bounded" means the class cannot exceed this rate. If it were left off, I
# think it means the class can't use more than $THROTTLERATE only while the
# link is saturated, and everyone else shares the rest. An example use of
# that might be to set $THROTTLERATE to 0 and remove "bounded", meaning that
# everything else gets to use the link in preference. I think.
# "isolated" I'm not sure about; I think it means this class doesn't interact
# with any other rules.
$TC class add dev $DEV parent 1: classid 1:1 cbq rate $THROTTLERATE \
    allot 1500 prio 5 bounded isolated
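# (Untested sketch, not in the original: the non-bounded variant described
# above, which would let the class borrow idle bandwidth, might look like:
#   $TC class add dev $DEV parent 1: classid 1:1 cbq rate $THROTTLERATE \
#       allot 1500 prio 5 isolated
# )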

# Add a filter to direct traffic into this class.
# This uses the "u32" filter, which matches on header fields in the IP packet.
# If you want to match on multiple fields you can chain matches, e.g.
# "match ip dst $IP match ip src $IP". I don't know how you negate a match.
# I think if you want to do anything interesting with tc you probably want to
# use fwmark from iptables(8).
$TC filter add dev $DEV parent 1: protocol ip prio 16 u32 \
    match ip dst $IP flowid 1:1
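# (Untested sketch, not in the original: the fwmark approach would mark
# packets with iptables and match them with tc's "fw" classifier. The mark
# value 6 is arbitrary:
#   iptables -t mangle -A POSTROUTING -o $DEV -d $IP -j MARK --set-mark 6
#   $TC filter add dev $DEV parent 1: protocol ip prio 16 handle 6 fw flowid 1:1
# )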

# Perturb the random hash every 10s.
# OK, what sfq does (I think) is put everything into buckets based on a hash
# function. However, sometimes you end up with hash collisions, meaning that
# data will occasionally be lumped together with other data and both will be
# rate limited as if they were one connection. The way around this is to
# change the hash function frequently so that this effect is reduced.
# However, doing this too often makes your rate limiting less accurate, and
# doing it too rarely means data is incorrectly classified as above, so we
# tell the kernel to change the hash every 10s.
$TC qdisc add dev $DEV parent 1:1 sfq perturb 10
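To verify the setup, you can list the qdisc, classes and filters; the -s flag adds byte and packet counters. (These are standard tc commands, though not part of the original script.)

$TC -s qdisc show dev $DEV

$TC -s class show dev $DEV

$TC filter show dev $DEV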

Hopefully this is enough to get people started. Please, if you know anything more, add it to this page. I found the Advanced Router HOWTO very oblique in its information.


Some points to remember:

Outgoing interface

The interface you use must be your outgoing interface, not your incoming interface. Getting this confused will make the shaping useless.
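If you're unsure which device that is, iproute2 can show you which interface routes to the client (a quick check of my own, not from the original page):

/sbin/ip route get 10.10.10.13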

Tag data in the right direction

I use "dst $IP" for 'traffic destined to $IP', if you want traffic from an IP use 'src $IP' instead.


After a bit of fiddling I've managed to get TrafficShaping working on a per-protocol (read: per-port) basis, as per below.

I wanted to limit my personal machine at work to only use 5kbps of bandwidth, but ran into the quandary that my machine also runs nagios for monitoring.

I started with the above, but found that when the 5kbps limit was reached, all the nagios ping tests started to go critical because of the latency introduced. So we needed to differentiate between different ports and protocols, and I ended up with this script:

#!/bin/sh

DEV=eth0

IP=203.97.10.61

LINERATE=2mbit

THROTTLERATE=5kbps
ICMPRATE=40kbps
HTTP=80

# Where the tc executable is.
TC=/sbin/tc

if ! test -x $TC; then
    echo "Can't find $TC, aborting"
    exit 1
fi

$TC qdisc del dev $DEV root

$TC qdisc add dev $DEV root handle 1: cbq avpkt 1000 bandwidth $LINERATE

$TC class add dev $DEV parent 1: classid 1:1 cbq rate $THROTTLERATE \
    allot 1500 prio 5 bounded isolated

$TC class add dev $DEV parent 1: classid 1:2 cbq rate $ICMPRATE \
    allot 1500 prio 5 bounded isolated

# Filter ICMP traffic to class 1:2.
$TC filter add dev $DEV parent 1: protocol ip prio 16 u32 \
    match ip src $IP match ip protocol 1 0xFF flowid 1:2

# Filter port 80 (tcp and udp) to class 1:1.
$TC filter add dev $DEV parent 1: protocol ip prio 16 u32 \
    match ip src $IP match ip sport $HTTP 0xFFFF flowid 1:1

$TC qdisc add dev $DEV parent 1:1 sfq perturb 60

# Display traffic shaping details.
echo "---- qdisc parameters Ingress ----------"
$TC qdisc ls dev $DEV
echo "---- Class parameters Ingress ----------"
$TC class ls dev $DEV
echo "---- filter parameters Ingress ----------"
$TC filter ls dev $DEV

Note that sport and protocol matches require two operands: the port/protocol number and a mask.
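For example (a sketch of my own, not from the original script), a match on TCP by protocol number, and a match on a destination port rather than a source port:

$TC filter add dev $DEV parent 1: protocol ip prio 16 u32 match ip protocol 6 0xFF flowid 1:1

$TC filter add dev $DEV parent 1: protocol ip prio 16 u32 match ip dport 22 0xFFFF flowid 1:1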


Ingress shaping

It is possible to perform ingress shaping using a similar process. Your version of tc has to have ingress support compiled in - it appears that some RedHat versions may not have this.
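One quick way to check (my own suggestion, not from the original) is to try attaching and then removing an ingress qdisc on the loopback device; if your tc lacks ingress support, the first command will fail:

/sbin/tc qdisc add dev lo ingress && /sbin/tc qdisc del dev lo ingress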

The following script will limit traffic from source port 80 (i.e. the return path of a web connection) to 100kbit. It applies these rules on ppp0, which is my external interface.

#!/bin/sh

TC=/sbin/tc
IPTABLES=/sbin/iptables

DEV=ppp0

MAXRATE=128kbit
THROTTLE=100kbit
BURST=5000
MTU=1492

# Mark traffic with a source port of 80 with the mark 1.

$IPTABLES -A PREROUTING -i $DEV -t mangle -p tcp --sport 80 -j MARK --set-mark 1
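# (Aside, not in the original: to check the rule is matching, watch its
# packet counters:
#   $IPTABLES -t mangle -L PREROUTING -v -n
# )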

# Delete the old ingress rule.
$TC qdisc del dev $DEV ingress

# Then add the queueing discipline.
$TC qdisc add dev $DEV handle FFFF: ingress

# Apply the actual filter.
$TC filter add dev $DEV parent ffff: protocol ip prio 50 handle 1 fw \
    police rate $THROTTLE burst $BURST mtu $MTU drop flowid :1
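As an aside, a sketch of my own rather than part of the original: the same policing can be done without iptables by matching the source port directly with u32:

$TC filter add dev $DEV parent ffff: protocol ip prio 50 u32 match ip sport 80 0xffff police rate $THROTTLE burst $BURST mtu $MTU drop flowid :1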

If I look at the output of wget, it's reasonably good at limiting a port 80 download to 10-12k/second, which is about right for what we asked. If I look at my ppp0 usage meter in gkrellm, it seems to be using more bandwidth than it should: it spends a lot of time at 16 or 17K/s incoming. Running iptraf on ppp0 in detailed statistics mode shows that my incoming rate seems to be about 100kbit/sec, although it tends to be a bit higher than this normally. I also tested, and verified, that traffic not caught by the above script (e.g. FTP traffic) still obtained the full rate.

In comparison with a by-port filter such as the one prior to the ingress script, I see a high level of fluctuation in the download rate in all three test cases. Whether this is due to some misconfiguration on my part, I don't know.