MrBlc n00b

Joined: 16 Mar 2004 Posts: 30
Posted: Thu Sep 22, 2005 10:25 pm Post subject: network challenge not easily solved... |
Here's the layout:
I've got a Gentoo Linux server running as the firewall, with Shorewall as its iptables script. Several clients are placed behind it, on a gigabit switch.
Here's the issue:
Two of the computers behind the wall use too many port allocations, and one user is critically dependent on low latency but uses minimal bandwidth.
I have installed a traffic shaper and configured it using a modified version of Wondershaper. The shaper is configured to throttle the standard P2P ports, but from what I can tell, that's not happening:
Code: |
>tc -s class show dev eth1
class htb 1:1 root rate 1900Kbit ceil 1900Kbit burst 4Kb/8 mpu 0b overhead 0b cburst 1836b/8 mpu 0b overhead 0b level 7
Sent 1372492550 bytes 3810931 pkt (dropped 0, overlimits 0 requeues 0)
rate 101080bit 125pps backlog 0b 0p requeues 0
lended: 1581240 borrowed: 0 giants: 0
tokens: 17155 ctokens: 7248
class htb 1:10 parent 1:1 leaf 10: prio 1 rate 1710Kbit ceil 1900Kbit burst 2Kb/8 mpu 0b overhead 0b cburst 1836b/8 mpu 0b overhead 0b level 0
Sent 14247902 bytes 151908 pkt (dropped 0, overlimits 0 requeues 0)
rate 0bit 0pps backlog 0b 0p requeues 0
lended: 152045 borrowed: 0 giants: 0
tokens: 9629 ctokens: 7739
class htb 1:20 parent 1:1 leaf 20: prio 2 rate 190000bit ceil 1900Kbit burst 2Kb/8 mpu 0b overhead 0b cburst 1836b/8 mpu 0b overhead 0b level 0
Sent 119206029 bytes 2073284 pkt (dropped 0, overlimits 0 requeues 0)
rate 41376bit 89pps backlog 0b 0p requeues 0
lended: 2073454 borrowed: 0 giants: 0
tokens: 87705 ctokens: 7844
class htb 1:30 parent 1:1 leaf 30: prio 3 rate 1000bit ceil 1710Kbit burst 2Kb/8 mpu 0b overhead 0b cburst 1812b/8 mpu 0b overhead 0b level 0
Sent 1238860948 bytes 1585237 pkt (dropped 0, overlimits 0 requeues 0)
rate 58344bit 35pps backlog 0b 0p requeues 0
lended: 4192 borrowed: 1581240 giants: 0
tokens: -6504824 ctokens: 7937
class htb 1:40 parent 1:1 leaf 40: prio 4 rate 1000bit ceil 1710Kbit burst 2Kb/8 mpu 0b overhead 0b cburst 1812b/8 mpu 0b overhead 0b level 0
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
rate 0bit 0pps backlog 0b 0p requeues 0
lended: 0 borrowed: 0 giants: 0
tokens: 17063998 ctokens: 8832
|
As you might be able to see, class 1:30 is the intended throttled class, and it does get a lot of hits... but the "normal" traffic class is equally loaded, which means one of the two computers is NOT landing in 1:30.
What I'm looking for is a way to limit and enforce shaping rules on computers based on their internal IP. The Gentoo box also runs the DHCP server, so I can easily control who gets which IP...
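One rough sketch of per-host classification, for what it's worth: if eth1 is the external interface, NAT has already rewritten the internal source addresses by the time packets reach the egress qdisc, so a u32 `match ip src` on a LAN address won't fire there. Marking in the mangle table before NAT and picking the mark up with a tc `fw` filter sidesteps that. The address 192.168.0.50 and the mark value 30 below are made-up placeholders, and this assumes eth1 is the upload-side interface:

```shell
#!/bin/bash
# Sketch: steer one internal host into the bulk class 1:30 by fwmark.
# 192.168.0.50 and mark value 30 are placeholders, not from the thread.
DEV=eth1
P2PBOX=192.168.0.50

# Mark packets from that host in mangle PREROUTING, i.e. before NAT
# rewrites the source address; the mark stays on the packet afterwards.
iptables -t mangle -A PREROUTING -s $P2PBOX -j MARK --set-mark 30

# fw filters classify on the netfilter mark, so they still work after
# SNAT. prio 5 makes this run before the existing u32 filters.
tc filter add dev $DEV parent 1: protocol ip prio 5 \
    handle 30 fw flowid 1:30
```

The same pattern with a different mark per host scales to as many LAN addresses as the DHCP server hands out.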
Here's my wondershaper script:
Code: |
#!/bin/bash
# Wonder Shaper
# please read the README before filling out these values
#
# Set the following values to somewhat less than your actual download
# and uplink speed. In kilobits. Also set the device that is to be shaped.
DOWNLINK=1900
UPLINK=1900
DEV=eth1
# normal priority OUTGOING traffic - you can leave this blank if you want
# normal priority source netmasks
PRIOHOSTSRC=
# normal priority destination netmasks
PRIOHOSTDST=
# normal priority source ports
PRIOPORTSRC="8080"
# normal priority destination ports
PRIOPORTDST="8080"
# high priority source ports
HIPRIOPORTSRC="9898"
# high priority destination ports
HIPRIOPORTDST="9898 27015 27016 27017"
# high priority source ips
HIPRIOIPSRC=""
# high priority destination ips
HIPRIOIPDST=""
# low priority source ports
LOWPRIOPORTSRC="6881"
# low priority destination ports
LOWPRIOPORTDST="6881"
# Now remove the following two lines :-)
#echo Please read the documentation in 'README' first
#exit
if [ "$1" = "status" ]
then
tc -s qdisc ls dev $DEV
tc -s class ls dev $DEV
exit
fi
# clean existing down- and uplink qdiscs, hide errors
tc qdisc del dev $DEV root 2> /dev/null > /dev/null
tc qdisc del dev $DEV ingress 2> /dev/null > /dev/null
if [ "$1" = "stop" ]
then
exit
fi
###### uplink
# install root HTB, point default traffic to 1:30:
tc qdisc add dev $DEV root handle 1: htb default 30
# shape everything at $UPLINK speed - this prevents huge queues in your
# DSL modem which destroy latency:
tc class add dev $DEV parent 1: classid 1:1 htb rate ${UPLINK}kbit \
ceil ${UPLINK}kbit burst 4k
# high prio class 1:10:
tc class add dev $DEV parent 1:1 classid 1:10 htb rate $[9*UPLINK/10]kbit \
ceil ${UPLINK}kbit burst 2k prio 1
# bulk & default class 1:20 - gets slightly less traffic,
# and a lower priority:
tc class add dev $DEV parent 1:1 classid 1:20 htb rate $[1*UPLINK/10]kbit \
ceil ${UPLINK}kbit burst 2k prio 2
tc class add dev $DEV parent 1:1 classid 1:30 htb rate 1kbit \
ceil $[9*$UPLINK/10]kbit burst 2k prio 3
tc class add dev $DEV parent 1:1 classid 1:40 htb rate 1kbit \
ceil $[9*$UPLINK/10]kbit burst 2k prio 4
# all get Stochastic Fairness:
#tc qdisc add dev $DEV parent 1:10 handle 10: sfq perturb 10
tc qdisc add dev $DEV parent 1:10 handle 10: pfifo limit 10
tc qdisc add dev $DEV parent 1:20 handle 20: sfq perturb 10
tc qdisc add dev $DEV parent 1:30 handle 30: sfq perturb 10
tc qdisc add dev $DEV parent 1:40 handle 40: sfq perturb 10
# TOS Minimum Delay (ssh, NOT scp) in 1:10:
tc filter add dev $DEV parent 1:0 protocol ip prio 10 u32 \
match ip tos 0x10 0xff flowid 1:10
# ICMP (ip protocol 1) in the interactive class 1:10 so we
# can do measurements & impress our friends:
tc filter add dev $DEV parent 1:0 protocol ip prio 10 u32 \
match ip protocol 1 0xff flowid 1:10
# To speed up downloads while an upload is going on, put bare ACK
# packets in class 1:20:
tc filter add dev $DEV parent 1: protocol ip prio 10 u32 \
match ip protocol 6 0xff \
match u8 0x05 0x0f at 0 \
match u16 0x0000 0xffc0 at 2 \
match u8 0x10 0xff at 33 \
flowid 1:20
# rest is 'non-interactive' ie 'bulk' and ends up in 1:30
# some traffic gets special deal
for a in $HIPRIOPORTDST
do
tc filter add dev $DEV parent 1: protocol ip prio 14 u32 \
match ip dport $a 0xffff flowid 1:10
done
for a in $HIPRIOPORTSRC
do
tc filter add dev $DEV parent 1: protocol ip prio 15 u32 \
match ip sport $a 0xffff flowid 1:10
done
for a in $HIPRIOIPDST
do
tc filter add dev $DEV parent 1: protocol ip prio 14 u32 \
match ip dst $a flowid 1:10
done
for a in $HIPRIOIPSRC
do
tc filter add dev $DEV parent 1: protocol ip prio 15 u32 \
match ip src $a flowid 1:10
done
for a in $PRIOPORTDST
do
tc filter add dev $DEV parent 1: protocol ip prio 16 u32 \
match ip dport $a 0xffff flowid 1:20
done
for a in $PRIOPORTSRC
do
tc filter add dev $DEV parent 1: protocol ip prio 17 u32 \
match ip sport $a 0xffff flowid 1:20
done
for a in $PRIOHOSTSRC
do
tc filter add dev $DEV parent 1: protocol ip prio 18 u32 \
match ip src $a flowid 1:20
done
for a in $PRIOHOSTDST
do
tc filter add dev $DEV parent 1: protocol ip prio 19 u32 \
match ip dst $a flowid 1:20
done
# low priority ports end up in 1:40 - note: sport/dport matches, not
# src/dst, and the prio must sort before the 1:30 catch-all at prio 20,
# otherwise these filters are never reached
for a in $LOWPRIOPORTSRC
do
tc filter add dev $DEV parent 1: protocol ip prio 12 u32 \
match ip sport $a 0xffff flowid 1:40
done
for a in $LOWPRIOPORTDST
do
tc filter add dev $DEV parent 1: protocol ip prio 13 u32 \
match ip dport $a 0xffff flowid 1:40
done
# rest is 'non-interactive' ie 'bulk' and ends up in 1:30
tc filter add dev $DEV parent 1: protocol ip prio 20 u32 \
match ip dst 0.0.0.0/0 flowid 1:30
########## downlink #############
# slow downloads down to somewhat less than the real speed to prevent
# queuing at our ISP. Tune to see how high you can set it.
# ISPs tend to have *huge* queues to make sure big downloads are fast
#
# attach ingress policer:
tc qdisc add dev $DEV handle ffff: ingress
# filter *everything* to it (0.0.0.0/0), drop everything that's
# coming in too fast:
tc filter add dev $DEV parent ffff: protocol ip prio 49 u32 \
match ip protocol 1 0xff flowid :1
tc filter add dev $DEV parent ffff: protocol ip prio 50 u32 match ip src \
0.0.0.0/0 police rate ${UPLINK}kbit burst 4k drop flowid :1
tc filter add dev $DEV parent ffff: protocol ip prio 51 u32 match ip src \
0.0.0.0/0 police rate $[9*UPLINK/10]kbit burst 4k drop flowid :1
for a in $HIPRIOPORTSRC
do
tc filter add dev $DEV parent ffff: protocol ip prio 11 u32 \
match ip sport $a 0xffff flowid :1
done
for a in $HIPRIOPORTDST
do
tc filter add dev $DEV parent ffff: protocol ip prio 11 u32 \
match ip dport $a 0xffff flowid :1
done
|
Yes, I've tried shaping several things, and yes, there are unused objects in here, but it's a work in progress...
Anyway, is there anything I can do to regulate the number of port allocations here, as well as apply shaping rules based on internal IP ranges?
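On the port-allocation side: if the real problem is one host holding thousands of simultaneous connections (exhausting NAT/conntrack entries) rather than bandwidth, shaping alone won't fix it. A cap on concurrent TCP connections per internal host in iptables is the usual approach. A hedged sketch, assuming the connlimit match is available in your kernel (around this time it lived in patch-o-matic, so it may need a patched netfilter); the threshold 100 and the 192.168.0.0/24 LAN range are arbitrary assumptions:

```shell
# Sketch: refuse new TCP connections from any single LAN host once it
# already holds 100 established ones. Requires the connlimit match
# (groups by source IP by default); numbers and range are assumptions.
iptables -A FORWARD -s 192.168.0.0/24 -p tcp --syn \
    -m connlimit --connlimit-above 100 -j REJECT --reject-with tcp-reset
```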
-blc
frostschutz Advocate


Joined: 22 Feb 2005 Posts: 2977 Location: Germany
Posted: Sun Sep 25, 2005 6:11 pm Post subject: Re: network challenge not easily solved... |
MrBlc wrote: | here's the issue:
Two of the computers behind the wall use too many port allocations, and one user is critically dependent on low latency but uses minimal bandwidth. |
What does 'too many port allocations' mean exactly? Blocking ports is more like a firewall issue...
MrBlc wrote: | i have installed a traffic shaper, and configured it using a modified version of wondershaper. |
I do not recommend using Wondershaper. It's an old unmaintained script, and it's buggy. Not a good base to build on. How about using a general purpose script for shaping (like HTB.init), if the specialized ones (digriz, fairnat, ...) are not for you?
MrBlc wrote: | the shaper is configured to shape down standard p2p ports, but from what i can tell, that's not happening |
Are your clients trustworthy? Shaping P2P based on port numbers will only work if the clients don't change their ports (modern P2P applications can use any port...). Otherwise you'll need to use IPP2P or l7filter, two projects which detect P2P based on packet contents. |
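For the record, the usual way to combine IPP2P with an existing HTB setup is to mark matching packets in the mangle table and pick the mark up with a tc `fw` filter. A sketch, assuming the IPP2P netfilter patch is installed and that `eth1` and class 1:30 match the script above; the mark value 3 is arbitrary:

```shell
# Sketch: classify detected P2P traffic into the bulk class 1:30.
# Requires the IPP2P kernel patch providing "-m ipp2p"; mark 3 is an
# arbitrary choice, not something mandated by the module.
iptables -t mangle -A PREROUTING -m ipp2p --ipp2p -j MARK --set-mark 3

# prio 4 so the fw filter is consulted before the existing u32 filters
tc filter add dev eth1 parent 1: protocol ip prio 4 \
    handle 3 fw flowid 1:30
```

l7filter works the same way in principle, but matches on layer-7 patterns instead of the fixed checks IPP2P uses.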