jcpunk n00b
Joined: 22 Jul 2003 Posts: 37
Posted: Fri Nov 05, 2004 4:52 pm Post subject: How do I slam a DHCP server as hard as possible? |
|
|
We have an old server (a Sparc5) with a 10BaseT network card that is currently our DHCP server, and I have been assigned to prove that this 'new' system (a P3 400 with a 100BaseT network card) running Gentoo can hand out addresses under greater load.
It is obvious to me that the 'new' system is so far superior to the old one (processor, RAM, network card) that there should be no contest, but I cannot seem to write a test program that actually puts load on the system.
My test lab is only 4 computers, and my current script cannot send enough requests at a time to put any dent in either machine's processing power. A total of 12 per second is really not enough to have any impact. The problem is that I cannot use an existing DHCP client unmodified, as those programs are deliberately written not to create these kinds of conditions.
My attempt looks like:
Code: |
#!/bin/bash
for i in `seq 1 100`
do
    echo -----
    echo COUNT: $i
    echo -----
    # throw away the old lease so dhclient has to ask the server again
    rm -f /var/lib/dhcp/dhclient.leases*
    killall -KILL dhclient
    dhclient >/dev/null
done
|
This script just doesn't generate enough load. Can anyone point me towards a better way of writing this, or towards something already written? I have Perl and C++ available on my test boxes for compiling, but I really don't know a whole lot about writing in either of them... shell scripting usually doesn't fail me. |
georwell Guru
Joined: 25 Jun 2003 Posts: 430 Location: Uppsala, Sweden
Posted: Fri Nov 05, 2004 9:36 pm Post subject: |
|
|
Try adding hundreds or thousands of virtual interfaces (aliases) to your network card, then keep releasing, removing, and bringing each interface up and down at random.
However, I have never seen more than 20 IP addresses assigned to one card, so it might take a bit of work.
Just a thought! |
|
NewBlackDak Guru
Joined: 02 Nov 2003 Posts: 512 Location: Utah County, UT
Posted: Sat Nov 06, 2004 10:07 am Post subject: |
|
|
That's a good idea.
Set up NTP so the time is synchronized on all of them.
Set up ~20 interfaces on each machine. Write a script to take them all down at the same time and bring them all back up at the same time. Cron it to run on all of your machines at the same time. That would be 80 requests at once. If you have extra NICs lying around, drop those into the test machines and do the same thing with those too (a rough sketch of such a script is below). _________________ Gentoo systems.
X2 4200+@2.6 - Athy
X2 3600+ - Myth
UltraSparc5 440 - sparcy |
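A rough sketch of that kind of cycling script, assuming dhcpcd as the client and a hand-edited list of real interfaces (the interface names, paths and timings here are only placeholders, not a tested tool):
Code: |
#!/bin/bash
# drop and re-acquire a lease on every listed interface at (roughly) the same moment
IFACES="eth0 eth1 eth2 eth3"    # adjust to whatever NICs the test box actually has

for nic in $IFACES; do
    dhcpcd -k $nic 2>/dev/null  # release the current lease, if one is held
    ifconfig $nic down
done

sleep 2                         # brief pause so the whole lab goes quiet together

for nic in $IFACES; do
    ifconfig $nic up
    dhcpcd $nic &               # ask for a fresh lease in the background
done
wait
|
With NTP keeping the clocks together, a cron entry such as */5 * * * * /usr/local/bin/dhcp-cycle.sh (the path is just an example) on every test machine makes all the requests land on the server within a second or two of each other.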
jklmnop n00b
Joined: 18 Jun 2003 Posts: 42
Posted: Sat Nov 06, 2004 11:46 am Post subject: |
|
|
Look at the man page for dhcpcd. There is a -T testing option which makes no changes to your running config. There is also a client-ID option (-I) that you can set to made-up MAC addresses to force the server to give you a new lease each time.
Code: |
# fire off one test-mode request per fake MAC address
for i in $( seq 0 255 ); do
    hex=$( printf "%02x" $i )   # two hex digits so the MAC is always well formed
    dhcpcd -T -d -t 5 -I 00:de:ad:be:ef:$hex eth0 &
done
sleep 5            # let the 5-second tests finish before cleaning up
killall dhcpcd
|
That's 256 separate requests in a couple of seconds. My poor DSL modem ran out of addresses way before that...
Check /var/log/messages for the results.
Let us know how it turns out... |
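On the server side, a quick way to count what actually arrived (assuming a stock ISC dhcpd logging to /var/log/messages; adjust the log path for your syslog setup):
Code: |
# how many DISCOVERs the server saw, and how many OFFERs it sent back
grep -c DHCPDISCOVER /var/log/messages
grep -c DHCPOFFER /var/log/messages
|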
nobspangle Veteran
Joined: 23 Mar 2004 Posts: 1318 Location: Manchester, UK
Posted: Sat Nov 06, 2004 2:38 pm Post subject: |
|
|
georwell wrote: | Try adding hundreds or thousands of virtual interfaces (aliases) to your network card, then keep releasing, removing, and bringing each interface up and down at random.
However, I have never seen more than 20 IP addresses assigned to one card, so it might take a bit of work.
Just a thought! |
This method won't work, as only one alias can get its IP from DHCP; the others have to have static addresses. |
speed_bump Tux's lil' helper
Joined: 10 Jan 2004 Posts: 92 Location: Wisconsin, USA
Posted: Sat Nov 06, 2004 3:21 pm Post subject: |
|
|
Let's back the train up for a second. Is your network generating a significant load on the DHCP server at this time? DHCP server performance could possibly be an issue for very large ISPs, but it's typically not an issue even for healthy-sized networks (1000-2000 hosts). Realistically, either of those two systems should be able to provide more than adequate performance as a DHCP server (unless you're using them for other tasks as well).
There have been several discussions of this sort of thing on the DHCP discussion list which you can find over at ISC's web site (www.isc.org), so you may want to check the archives of that list for ideas. |
NewBlackDak Guru
Joined: 02 Nov 2003 Posts: 512 Location: Utah County, UT
Posted: Sat Nov 06, 2004 9:38 pm Post subject: |
|
|
Our old DHCP (and DNS) server was running on a Sparc5, now that I think about it. It was an 85MHz machine with 256MB of RAM. ~600 machines on 3 subnets, and the system monitor barely ever moved on it. _________________ Gentoo systems.
X2 4200+@2.6 - Athy
X2 3600+ - Myth
UltraSparc5 440 - sparcy |
speed_bump Tux's lil' helper
Joined: 10 Jan 2004 Posts: 92 Location: Wisconsin, USA
|
Posted: Sat Nov 06, 2004 11:51 pm Post subject: |
|
|
I've seen DNS run on some pretty lightweight machines - a P90 at one point (showing my age).
As you point out, there's no significant CPU load for this sort of thing.
In any case, for either DHCP or DNS you're more likely to run into memory issues long before network or CPU become an issue. In fact, several discussions on the ISC DHCP list indicate that you are more likely to see performance problems from a large number of leases than from being bombarded by a large number of simultaneous requests. This has to do with the data structures dhcpd uses to store the in-memory lease database.
There are other issues that may enter into it as well. In particular, the ICMP "ping before offer" check can put a significant upper bound on performance. Obviously, it doesn't matter what architecture you're using if that's the case.
For my money I'd say either of those systems should be able to handle DHCP for all but the very most demanding environments. Unless there are actual specific performance problems that can be demonstrated, I think spending a lot of time benchmarking this would be largely pointless. |
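For what it's worth, that ping-before-offer behaviour is controlled by the ping-check parameter in ISC dhcpd's dhcpd.conf, so if it ever does show up as the bottleneck it can be switched off for a benchmark run (untested sketch; check the man page for your dhcpd version):
Code: |
# dhcpd.conf fragment - skip the ICMP echo test before each offer
# (only sensible for benchmarking, not for a production server)
ping-check false;
|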
jcpunk n00b
Joined: 22 Jul 2003 Posts: 37
Posted: Wed Nov 17, 2004 5:07 pm Post subject: |
|
|
Here is the script I ended up with
Code: | #!/bin/bash
path=/sbin
dhcp=dhcpcd
confdir=/etc/dhcpc
rm_file="/var/run/dhcpcd* /tmp/dhcpcd-eth*"   # quoted so both globs stay in one variable
rm -f $rm_file # clean up all the old files
for i in `seq 0 5` # this is how many hundred requests to send
do
    for j in `seq 0 99` # leave this as 0-99 and it will do 100 per outer loop
    do
        ja=$( printf "%02d" $j ) # zero-pad the counter so the output lines up
        echo -------------------
        echo COUNT $i$ja
        # build a MAC address for this iteration; the addresses are sequential and
        # close together, which should give the DHCP server's hash tables some grief
        shex=$( printf "%x" $i )
        ehex=$( printf "%02x" $j )
        mac=3c:ab:dd:12:3$shex:$ehex
        echo for mac address $mac
        echo -------------------
        $path/$dhcp -TRYSBNn eth0 -L /tmp/ -l 1 -c $confdir -I $mac &
        rm -f $rm_file # remove the cache files
    done
done
killall -KILL $dhcp
sleep 1
rm -f $rm_file |
Since this is a school with around 7000 people on DHCP, our strongest need is the 100BaseT card for the faster connections. This pounds the snot out of it...
Thanks for all the help |
screwloose Tux's lil' helper
Joined: 07 Feb 2004 Posts: 94 Location: Toon Town, Canada
Posted: Wed Nov 17, 2004 9:14 pm Post subject: |
|
|
What was the load difference between your two machines with that script? _________________ If something can go wrong it probably already has. You just don't know it yet. ~Henry's Modified version of Murphy's Law |