SStreet n00b
Joined: 06 Aug 2004 Posts: 11
Posted: Fri Feb 04, 2011 1:18 pm    Post subject: Disk Performance on nVidia SATA [Server, not Desktop]
Hi Folks,
I've recently rebuilt the file server for my home office. It has been a file server for a long time, but I've never stressed it the way I am now.
That said, I'm seeing some odd performance patterns. The most significant one greatly affects my client machines' ability to back up at reasonable speeds.
Server specs:
CPU: Intel Core 2 Duo 3GHz (E6850)
Motherboard: nForce 9400 based (basically an nForce i730 with a GPU bolted on); SATA ports are in AHCI mode.
Mem: 4G of DDR2-800
Drives: 4 * 250G WD2500JS; 2 * Samsung 1TB (HD103UJ)
NIC: nForce GigE
Gentoo amd64 [emerge --sync / emerge --update world system as of Feb 1]
Kernel 2.6.36-gentoo-r5
The four 250 GB drives are in a 3+1 RAID-5 md set, generally used for long-term storage.
The two 1 TB drives are in a RAID-1 md set; this is where I see the biggest effect, since it is the target of the backups from client machines on my network.
So I started doing some testing.
dd if=/dev/zero of=/Filestore/bigfile bs=64k
I would expect this to run the drives at close to their full sequential write rate until the filesystem fills up. That isn't the case. Observe my results below (and see the direct-I/O variant of the test sketched after the configuration dump):
Code: |
#> mount | grep Filestore
/dev/mapper/vg01-filestore on /Filestore type xfs (rw,noatime)
#> vgdisplay -v vg01
Using volume group(s) on command line
Finding volume group "vg01"
--- Volume group ---
VG Name vg01
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 931.51 GiB
PE Size 4.00 MiB
Total PE 238466
Alloc PE / Size 238466 / 931.51 GiB
Free PE / Size 0 / 0
VG UUID nGOTz8-aWKQ-xscZ-JNDt-YWfF-zg8N-Md1a0d
--- Logical volume ---
LV Name /dev/vg01/filestore
VG Name vg01
LV UUID 1M9YZZ-39iu-c2i3-wlZU-2N3o-3DVT-m3S2Th
LV Write Access read/write
LV Status available
# open 1
LV Size 931.51 GiB
Current LE 238466
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3
--- Physical volumes ---
PV Name /dev/md125
PV UUID NnlPrv-2aub-5NR3-vEwR-v1q9-LSo3-C0C3SK
PV Status allocatable
Total PE / Free PE 238466 / 0
#>mdadm --detail /dev/md125
/dev/md125:
Version : 0.90
Creation Time : Mon Sep 27 16:25:05 2010
Raid Level : raid1
Array Size : 976760768 (931.51 GiB 1000.20 GB)
Used Dev Size : 976760768 (931.51 GiB 1000.20 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 125
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Fri Feb 4 04:49:23 2011
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 6d1d37b8:b6bf53a5:ab7bb3a9:117444ae
Events : 0.20963
Number Major Minor RaidDevice State
0 8 33 0 active sync /dev/sdc1
1 8 81 1 active sync /dev/sdf1
#> cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: SAMSUNG SP2504C Rev: VT10
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi1 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: WDC WD2500JS-00N Rev: 10.0
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi2 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: SAMSUNG HD103UJ Rev: 1AA0
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: WDC WD2500JS-00N Rev: 10.0
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi4 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: WDC WD2500JS-40N Rev: 10.0
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi5 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: SAMSUNG HD103UJ Rev: 1AA0
Type: Direct-Access ANSI SCSI revision: 05
#>
|
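One thing worth noting before the monitoring output: the dd above writes through the page cache, so some of the stalling could be periodic writeback rather than the drives themselves. A variant like the following would separate the two (a rough sketch; the block sizes and counts are only example values):
Code: |
# Same target, but bypassing the page cache with O_DIRECT, so the reported
# rate is the sustained device/array write rate.
dd if=/dev/zero of=/Filestore/bigfile bs=1M count=4096 oflag=direct

# Or keep the cache in play but force a flush before dd exits, so the
# reported rate includes the writeback time instead of hiding it.
dd if=/dev/zero of=/Filestore/bigfile bs=64k count=65536 conv=fdatasync
|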
This is a snippet of system performance while the dd has been running for at least 3 minutes, captured with 'dstat -D sda,sdb,sdc,sdd,sde,sdf'.
Code: |
0 4 0 96 0 0| 0 40k: 0 32k: 0 69M: 0 28k: 0 44k: 0 58M| 268B 518B| 0 0 | 824 1137
0 8 25 68 0 0| 0 0 : 0 0 : 0 67M: 0 0 : 0 0 : 0 70M| 66B 550B| 0 0 | 887 850
0 1 50 48 0 0| 0 0 : 0 0 : 0 14M: 0 0 : 0 0 : 0 41M| 66B 518B| 0 0 | 610 438
0 4 23 73 0 1| 0 0 : 0 0 : 0 84M: 0 0 : 0 0 : 0 69M| 66B 518B| 0 0 |1054 1014
----total-cpu-usage---- --dsk/sda-----dsk/sdb-----dsk/sdc-----dsk/sdd-----dsk/sde-----dsk/sdf-- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read writ: read writ: read writ: read writ: read writ: read writ| recv send| in out | int csw
1 3 0 97 0 0| 0 0 : 0 0 : 0 75M: 0 0 : 0 0 : 0 59M| 148B 588B| 0 0 | 707 722
0 3 0 97 0 0| 0 40k: 0 28k: 0 73M: 0 28k: 0 40k: 0 62M| 66B 1398B| 0 0 | 873 933
1 3 0 97 0 0| 0 0 : 0 0 : 0 78M: 0 0 : 0 0 : 0 54M| 66B 550B| 0 0 | 702 623
0 3 20 77 0 0| 0 0 : 0 0 : 0 51M: 0 0 : 0 0 : 0 65M| 66B 518B| 0 0 | 784 812
0 4 38 57 0 0| 0 0 : 0 0 : 0 66M: 0 0 : 0 0 : 0 55M| 66B 518B| 0 0 | 718 1137
0 4 0 96 0 0| 0 0 : 0 0 : 0 57M: 0 0 : 0 0 : 0 57M| 66B 518B| 0 0 | 672 653
0 3 0 97 0 0| 0 28k: 0 36k: 0 46M: 0 28k: 0 36k: 0 55M| 66B 518B| 0 0 | 777 592
0 9 14 77 0 0| 0 0 : 0 0 : 0 59M: 0 0 : 0 0 : 0 64M| 66B 550B| 0 0 | 848 1434
0 2 46 52 0 0| 0 0 : 0 0 : 0 0 : 0 0 : 0 0 : 0 48M| 66B 700B| 0 0 | 459 391
0 4 0 96 0 0| 0 0 : 0 0 : 0 66M: 0 0 : 0 0 : 0 60M| 66B 502B| 0 0 | 753 839
0 2 0 97 0 0| 0 0 : 0 0 : 0 76M: 0 0 : 0 0 : 0 55M| 66B 518B| 0 0 | 690 541
----total-cpu-usage---- --dsk/sda-----dsk/sdb-----dsk/sdc-----dsk/sdd-----dsk/sde-----dsk/sdf-- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read writ: read writ: read writ: read writ: read writ: read writ| recv send| in out | int csw
0 3 0 97 0 0| 0 4096B: 0 4096B: 0 74M: 0 4096B: 0 4096B: 0 63M| 66B 518B| 0 0 | 873 814
0 2 0 98 0 0| 0 36k:4096B 24k: 0 4096B:4096B 24k: 0 36k: 0 39M| 66B 1430B| 0 0 | 644 700
0 3 0 96 0 0| 0 0 : 0 0 : 0 85M: 0 0 : 0 0 : 0 74M| 66B 550B| 0 0 | 988 941
0 3 0 97 0 0| 0 0 : 0 0 : 0 71M: 0 0 : 0 0 : 0 52M| 66B 518B| 0 0 | 757 726
0 4 9 87 0 0| 0 0 : 0 0 : 0 67M: 0 0 : 0 0 : 0 68M| 66B 518B| 0 0 | 873 880
0 4 0 97 0 0| 0 0 : 0 0 : 0 63M: 0 0 : 0 0 : 0 66M| 66B 518B| 0 0 | 930 942
0 7 0 93 0 1| 0 4096B: 0 4096B: 0 68M: 0 4096B: 0 4096B: 0 70M| 336B 690B| 0 0 |1106 1063
0 5 1 94 0 0| 0 0 : 0 0 : 0 39M: 0 0 : 0 0 : 0 61M| 66B 550B| 0 0 | 768 1544
0 3 0 97 0 0| 0 0 : 0 0 : 0 63M: 0 0 : 0 0 : 0 63M| 66B 518B| 0 0 | 611 394
0 3 20 78 0 0| 0 0 : 0 0 : 0 64M: 0 0 : 0 0 : 0 68M| 438B 1016B| 0 0 | 534 311
0 0 51 49 0 0| 0 0 : 0 0 : 0 75M: 0 0 : 0 0 : 0 66M| 66B 518B| 0 0 | 407 208
|
Why does the system stall so frequently during disk I/O (much less on streaming reads, but a lot on streaming writes)? Is this something that can be tuned?
Nothing shows up in the logs or dmesg.
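For reference, these are the knobs I would expect to matter if it is tunable (a sketch of where to look; the commands only read the current settings and change nothing, and sdc/sdf are the two RAID-1 members per the mdadm output above):
Code: |
# Writeback thresholds: how much dirty data the kernel accumulates before it
# starts flushing in the background, and before writers get throttled.
sysctl vm.dirty_background_ratio vm.dirty_ratio vm.dirty_expire_centisecs

# Per-device I/O scheduler and queue depth for the two RAID-1 members.
cat /sys/block/sdc/queue/scheduler /sys/block/sdf/queue/scheduler
cat /sys/block/sdc/queue/nr_requests /sys/block/sdf/queue/nr_requests

# Whether the drives' own write caches are enabled (reports only, no change).
hdparm -W /dev/sdc /dev/sdf
|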