This page describes traffic shaping under Linux, using the tc (traffic control) program from the IpRoute package.
Make sure you have the correct kernel support for QoS. In your LinuxKernel .config file, you will probably need support for the following:
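At minimum, the examples on this page use the CBQ and SFQ queueing disciplines, the u32 and fw classifiers, policing, and the ingress qdisc, so you will probably want something like the options below (exact option names vary between kernel versions):

CONFIG_NET_SCHED
CONFIG_NET_SCH_CBQ
CONFIG_NET_SCH_SFQ
CONFIG_NET_SCH_INGRESS
CONFIG_NET_CLS_U32
CONFIG_NET_CLS_FW
CONFIG_NET_CLS_POLICE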
See RouteBasedTrafficShaping for a somewhat more complicated example of shaping based on a BGP feed of domestic routes.
This is a script I use to throttle one machine down to half our ADSL rate. This machine is used for downloading large files (for example, ISO images of the latest LinuxDistribution), but we don't want it impacting the rest of our machines. This example was stolen from the Advanced Router HOWTO and cleaned up a bit by me.
You run this script on your gateway; it rate-limits data going to your internal client machine(s).
#!/bin/sh

# The device data is going out of. You can't stop a machine receiving data
# (short of firewalling), but you can limit how fast the gateway sends data
# to it. This is the device that goes to the clients on our LAN.
DEV=eth0

# The IP you want to throttle. Any IP that matches this destination
# will have its traffic added to our (single) child class.
# If you want to rate-limit all your clients, set it to something like:
# IP=0.0.0.0/0
# Note that this script will also rate-limit traffic generated by this
# router to any matching IP addresses!
IP=10.10.10.13

# How fast your internal network is. This is used to estimate the rates
# more accurately; if it is wrong, the rates will be slightly off.
LINERATE=100mbit

# How fast you want them to be able to download. "kbps" means "kilobytes
# per second"; don't confuse it with "kbit", which is 8 times slower.
THROTTLERATE=15kbps

# Where the tc executable is.
TC=/sbin/tc

if [ ! -x $TC ]; then
        echo "Can't find $TC, aborting"
        exit 1
fi

# Remove any previous queueing discipline. This also removes any other
# policies you have on this interface. Oh well.
$TC qdisc del dev $DEV root 2>/dev/null

# Add a queueing discipline. A queueing discipline is what manages the queues.
# We're using the "cbq" discipline, and we're saying that the average packet
# size is 1000 bytes (probably completely wrong :). Once again this is just a
# parameter to make the rate estimation more accurate.
$TC qdisc add dev $DEV root handle 1: cbq avpkt 1000 bandwidth $LINERATE

# Create a child class, also a cbq, rate limited to $THROTTLERATE.
# "allot" is how much data the parent class gets from each child class in
# turn. This must be at least the MTU (since you can't send partial packets,
# you must be able to send at least one entire packet).
# "prio" is used when there is more than one child class and both have data
# in their queue.
# "bounded" means the class can never exceed $THROTTLERATE; without it, the
# class may borrow spare bandwidth from its parent when the link isn't busy.
# "isolated" means the class won't lend its own unused bandwidth to other
# classes.
$TC class add dev $DEV parent 1: classid 1:1 cbq rate $THROTTLERATE \
        allot 1500 prio 5 bounded isolated

# Add a filter to the parent to direct traffic into the child classes
# (in this case, only a single child). Traffic that doesn't fall into the
# child class will not be rate limited.
#
# This uses the "u32" filter, which matches on header fields in the IP
# packet. If you want to match on multiple fields you can chain matches,
# e.g. "match ip dst ... match ip src ...". If you want to do anything more
# interesting with tc you probably want to use fwmark from iptables(8).
$TC filter add dev $DEV parent 1: protocol ip prio 16 u32 \
        match ip dst $IP flowid 1:1

# Perturb the random hash every 10 seconds.
# sfq puts flows into buckets based on a hash function, but sometimes you get
# hash collisions, meaning that unrelated flows are occasionally lumped
# together and rate limited as if they were one connection. The way around
# this is to change the hash function periodically so the effect is reduced.
# Doing this too often makes your rate limiting less accurate; doing it too
# rarely means flows stay incorrectly lumped together, so we tell the kernel
# to change the hash every 10 seconds.
$TC qdisc add dev $DEV parent 1:1 sfq perturb 10
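To check that the shaping is actually taking effect, you can look at the statistics tc keeps; a couple of read-only commands (the output format varies between iproute2 versions):

# byte/packet counters for each qdisc on the interface
tc -s qdisc show dev eth0
# per-class counters; class 1:1 should accumulate traffic as the
# throttled machine downloads
tc -s class show dev eth0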
Hopefully this is enough to get people started. Please, if you know anything more, add it to this page. I found the Advanced Router HOWTO very oblique in its information.
Here is a similar script for the case where your router is also a fileserver (and you obviously don't want to rate-limit traffic from it to the LAN clients).
My server has eth1 going to the ISP, and eth0 going to the LAN clients. This doesn't rate-limit upstream connections, but few users on our LAN use much upstream bandwidth. This script rate-limits to just below our ADSL bandwidth, so that packets get dropped (and TCP adjusts its sending rate) rather than getting queued in TelecomNZ's equipment.
#!/bin/sh

DEV=eth0
# the IP address of the above device (so it isn't rate-limited)
SERVERIP=10.21.1.2
# LAN clients to rate-limit
LIMITIPS=10.21.1.0/24

# remove any existing queue discipline (might say "no such file or directory")
tc qdisc del dev $DEV root 2> /dev/null

# quit this script immediately if any command returns an error
set -e

# create a root queueing discipline for our interface
tc qdisc add dev $DEV root handle 1:0 cbq bandwidth 100Mbit avpkt 1000 cell 8

# create a class called 1:1
tc class add dev $DEV parent 1:0 classid 1:1 cbq bandwidth 100Mbit \
        prio 8 allot 1514 cell 8 rate 100Mbit maxburst 20 avpkt 1000

# create a sub-class of 1:1 called 1:10 that is rate-limited to 105kbit
tc class add dev $DEV parent 1:1 classid 1:10 cbq bandwidth 100Mbit \
        rate 105Kbit prio 1 allot 1514 cell 8 maxburst 20 \
        avpkt 1000 bounded

# create a sub-class called 1:20 that isn't limited, for locally generated
# traffic
tc class add dev $DEV parent 1:1 classid 1:20 cbq allot 1514 avpkt 1000 \
        rate 100Mbit bandwidth 100Mbit prio 2

# locally generated traffic should go to the appropriate sub-class
tc filter add dev $DEV parent 1:0 protocol ip prio 1 u32 \
        match ip src $SERVERIP/32 flowid 1:20

# not sure if this is really needed... traffic from one interface to another?
tc filter add dev $DEV parent 1:0 protocol ip prio 1 u32 \
        match ip dst $SERVERIP/32 flowid 1:20

# traffic to our LAN (that didn't match an earlier rule) should go to the
# appropriate sub-class
tc filter add dev $DEV parent 1:0 protocol ip prio 2 u32 \
        match ip dst $LIMITIPS flowid 1:10
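If you also want the rate-limited clients to share the 105kbit fairly among themselves, you could hang an sfq qdisc off the bounded class, as the first script does. A sketch only (the handle 10: is an arbitrary choice), appended to the end of the script above:

# give each flow a fair share of the limited 1:10 class
tc qdisc add dev $DEV parent 1:10 handle 10: sfq perturb 10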
Some points to remember:
The interface you use must be your outgoing interface, not your incoming interface. Getting this confused will make the shaping useless.
I use "dst $IP" for 'traffic destined to $IP'; if you want traffic from an IP, use 'src $IP' instead.
I made some adjustments to the above script to split up the ADSL upload between the machines in my flat. This was to ensure that no one person can hog the upload, which makes everything laggy for everyone else. (I found wondershaper disappointing.)
Notes:
This will not work with masquerading (but see the fwmark workaround sketched after the script below).
The network connection can still easily become saturated.
#!/bin/sh

# List of IPs to have upload throttled
IPS=`seq 114 117 | awk '{print "1.2.3."$1}'`
LINERATE=2mbit
THROTTLERATE=14kbps

tc qdisc del dev ppp0 root 2>/dev/null
tc qdisc add dev ppp0 root handle 1: cbq avpkt 1000 bandwidth $LINERATE

for IP in $IPS; do
        echo throttling $IP
        # use the last octet of the IP as the class id
        LASTTHING=`echo $IP | cut -d . -f 4`
        tc class add dev ppp0 parent 1: classid 1:$LASTTHING cbq rate \
                $THROTTLERATE allot 1500 prio 5 bounded isolated
        tc filter add dev ppp0 parent 1: protocol ip prio 16 u32 \
                match ip src $IP flowid 1:$LASTTHING
        tc qdisc add dev ppp0 parent 1:$LASTTHING sfq perturb 10
done
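The masquerading limitation exists because SNAT rewrites the source address before the packet reaches the egress queue on ppp0, so the u32 "match ip src" filter never sees the internal IPs. A possible workaround is to mark packets in the mangle table before NAT happens and classify on the mark with the fw classifier, instead of the u32 src filter in the loop. This is a sketch only; the address, mark value and classid below are illustrative:

# mark one client's upload traffic before NAT rewrites its source address
iptables -t mangle -A PREROUTING -s 1.2.3.114 -j MARK --set-mark 114
# classify on the mark instead of on the (already rewritten) source IP
tc filter add dev ppp0 parent 1: protocol ip prio 16 \
        handle 114 fw flowid 1:114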
After a bit of fiddling I've managed to get TrafficShaping working on a per-protocol (read: per-port) basis, as per below.
I wanted to limit my personal machine at work to using only 5kbps of bandwidth, but ran into the quandary that my machine also runs Nagios for monitoring.
I started with the above, but found that when the 5kbps limit was reached all the Nagios ping tests started to go critical because of the latency introduced, so we needed to differentiate between different ports and protocols.
So I ended up with this script:
#!/bin/sh

DEV=eth0
IP=203.97.10.61
LINERATE=2mbit
THROTTLERATE=5kbps
ICMPRATE=40kbps
HTTP=80

# Where the tc executable is.
TC=/sbin/tc

if ! test -x $TC; then
        echo "Can't find $TC, aborting"
        exit 1
fi

$TC qdisc del dev $DEV root
$TC qdisc add dev $DEV root handle 1: cbq avpkt 1000 bandwidth $LINERATE

# class 1:1 for HTTP traffic, class 1:2 for ICMP
$TC class add dev $DEV parent 1: classid 1:1 cbq rate $THROTTLERATE \
        allot 1500 prio 5 bounded isolated
$TC class add dev $DEV parent 1: classid 1:2 cbq rate $ICMPRATE \
        allot 1500 prio 5 bounded isolated

# Filter ICMP traffic to class 1:2
$TC filter add dev $DEV parent 1: protocol ip prio 16 u32 \
        match ip src $IP match ip protocol 1 0xFF flowid 1:2

# Filter port 80 (tcp and udp) to class 1:1
$TC filter add dev $DEV parent 1: protocol ip prio 16 u32 \
        match ip src $IP match ip sport $HTTP 0xFFFF flowid 1:1

$TC qdisc add dev $DEV parent 1:1 sfq perturb 60

# Display traffic shaping details
echo "---- qdisc parameters ----------"
$TC qdisc ls dev $DEV
echo "---- class parameters ----------"
$TC class ls dev $DEV
echo "---- filter parameters ----------"
$TC filter ls dev $DEV
Note that the sport and protocol matches require two operands: the port or protocol number, and a mask. A mask of 0xFFFF (for ports) or 0xFF (for the protocol field) means "match exactly".
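For example, to classify traffic going to a given port rather than coming from it, you would match on dport instead. A sketch, reusing the variables and class 1:1 from the script above:

# classify traffic from $IP destined to port 80 into class 1:1
$TC filter add dev $DEV parent 1: protocol ip prio 16 u32 \
        match ip src $IP match ip dport $HTTP 0xFFFF flowid 1:1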
Ingress shaping
It is possible to perform ingress shaping (strictly speaking, policing) using a similar process. Your kernel and your version of tc have to have ingress support compiled in; it appears that some RedHat versions may not have this.
The following script will limit traffic from source port 80 (i.e. the return path of a web connection) to 100kbit. It applies these rules on ppp0, which is my external interface.
#!/bin/sh

TC=/sbin/tc
IPTABLES=/sbin/iptables
DEV=ppp0
MAXRATE=128kbit
THROTTLE=100kbit
BURST=5000
MTU=1492

# Mark traffic with a source port of 80 with the mark 1
$IPTABLES -A PREROUTING -i $DEV -t mangle -p tcp --sport 80 -j MARK --set-mark 1

# Delete the old ingress qdisc
$TC qdisc del dev $DEV ingress

# then add the queueing discipline
$TC qdisc add dev $DEV handle FFFF: ingress

# apply the actual filter
$TC filter add dev $DEV parent ffff: protocol ip prio 50 handle 1 fw \
        police rate $THROTTLE burst $BURST mtu $MTU drop flowid :1
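If you'd rather not involve iptables, the same policing can in principle be done by matching the source port directly with a u32 filter instead of the fw classifier. A sketch, reusing the variables from the script above:

# police port-80 replies without the iptables MARK rule
$TC filter add dev $DEV parent ffff: protocol ip prio 50 u32 \
        match ip sport 80 0xffff \
        police rate $THROTTLE burst $BURST mtu $MTU drop flowid :1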
If I look at the output of wget, it's reasonably good at limiting a port 80 download to 10-12 kB/s, which is about right for what we asked for. If I look at my ppp0 usage meter in gkrellm, it seems to be using more bandwidth than it should: it spends a lot of time at 16 or 17 kB/s incoming. Running iptraf on ppp0 in detailed statistics mode shows that my incoming rate seems to be about 100kbit/sec, although it tends to be a bit higher than this normally. I also tested, and verified, that traffic not caught by the above script (e.g. FTP traffic) still obtained the full rate.
In comparison with a by-port filter such as the one prior to the ingress script, I see a high level of fluctuation in the download rate in all three test cases. Whether this is due to some misconfiguration on my part, I don't know.
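You can also ask tc itself how much traffic the policer is seeing and dropping; a suggestion only, and the exact output depends on your iproute2 version:

# counters for the ingress qdisc as a whole
tc -s qdisc show dev ppp0
# per-filter statistics, including police drops
tc -s filter show dev ppp0 parent ffff: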
See also NetEm for information on simulating loss and delay.
See also LinuxQualityOfService