Gentoo Forums :: Kernel & Hardware
Tweaking TCQ settings
busfahrer
n00b


Joined: 18 Sep 2004
Posts: 57
Location: Germany

Posted: Mon Feb 14, 2005 10:45 pm    Post subject: Tweaking TCQ settings

Hi,

my secondary box here is this:
2 x 933 MHz P3 Coppermine (933/256/133)
896 MB PC133 reg. ECC RAM
Tyan S2510 (ThunderLE) Mainboard

For that, I recently bought a Seagate Cheetah 15k.3 U320 HDD (ST318453LW).
I am running Linux kernel 2.6.10 on the machine, and it has an on-board U160 controller (LSI Logic / Symbios Logic 53c1010 66MHz), so I'm using the sym53c8xx kernel module.
That module has the following tunable settings, which I'm listing here with their stock values:

Code:

(1)   DMA addressing mode
(16)  default tagged command queue depth
(64)  maximum number of queued commands
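
As far as I can tell, those three settings correspond to the following kernel config symbols for this driver; this is a guess on my part, so please double-check against your own .config:

Code:

# assumed 2.6 config symbols matching the defaults listed above
CONFIG_SCSI_SYM53C8XX_DMA_ADDRESSING_MODE=1
CONFIG_SCSI_SYM53C8XX_DEFAULT_TAGS=16
CONFIG_SCSI_SYM53C8XX_MAX_TAGS=64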


Now I'm quite new to the topic and am not sure what "multi-user" means in the hard-disk context. Correct me if I'm wrong, but I think "multi-user" only means several programs accessing the hard disk for non-trivial amounts of data concurrently, not just having multiple programs running at once.
If that is the case, I guess it would be best in my case to tune the hdd/kernel module for single-user scenarios.
In the WD740GD review I saw that enabling TCQ led to substantial performance losses in single-user scenarios, and I figure that still holds true for SCSI (if not, please correct me).
So I was wondering what you guys would recommend as settings for that kernel module. The typical usage of the box is having several programs open at once, both interactive ones and daemons, and 95% of the time at most one of them requires heavy hdd access.

I'm looking forward to your replies. :)

Greetings, Chris.

P.S.: For the sake of completeness, the default value in kernel 2.4.28 for TCQ depth is 4.
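
P.P.S.: If anyone wants to experiment without rebuilding the kernel, I believe the per-device queue depth shows up under sysfs on 2.6, assuming the driver lets you change it at runtime; the path below is what I'd expect for the first SCSI disk, so adjust as needed:

Code:

# read the current tag queue depth (path is an assumption, check your system)
cat /sys/block/sda/device/queue_depth

# try a shallower queue, e.g. the old 2.4 default of 4 (as root)
echo 4 > /sys/block/sda/device/queue_depth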
_________________
HOWTO: Removing disks from an LVM volume
thoughtform
l33t


Joined: 24 May 2004
Posts: 600

Posted: Sun Nov 13, 2005 3:20 am

i have:
0000:01:09.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1010 66MHz Ultra3 SCSI Adapter (rev 01)
pci card.
i'm getting 25 MB/s xfers with hdparm, on an atlas 10k 36gb drive.
i would like to compare this speed with your setup.
thanks.
carpman
Advocate


Joined: 20 Jun 2002
Posts: 2202
Location: London - UK

Posted: Tue Nov 22, 2005 2:09 pm

Scorpaen wrote:
i have:
0000:01:09.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1010 66MHz Ultra3 SCSI Adapter (rev 01)
pci card.
i'm getting 25 MB/s xfers with hdparm, on an atlas 10k 36gb drive.
i would like to compare this speed with your setup.
thanks.


I have said controller onboard an Asus CUR-DLS with an Atlas 10K III 18GB, and get the following:

Code:

hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   888 MB in  2.00 seconds = 443.97 MB/sec
 Timing buffered disk reads:  158 MB in  3.03 seconds =  52.11 MB/sec


Also have a 160GB WD RE SATA drive, which gives:
Code:

hdparm -tT /dev/sdc

/dev/sdc:
 Timing cached reads:   892 MB in  2.00 seconds = 445.08 MB/sec
HDIO_DRIVE_CMD(null) (wait for flush complete) failed: Inappropriate ioctl for device
 Timing buffered disk reads:  172 MB in  3.00 seconds =  57.25 MB/sec
HDIO_DRIVE_CMD(null) (wait for flush complete) failed: Inappropriate ioctl for device


As to the original question, I don't know, but I would like to, so I will do some research and try different settings.
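
When I get round to it, the plan is probably just to loop over a few queue depths and re-run hdparm each time; a rough, untested sketch (it assumes the queue_depth sysfs file exists and is writable here, and a plain sequential hdparm read may not react much to the depth anyway):

Code:

#!/bin/sh
# sketch: try a few tag queue depths and time sequential reads each time
# (a test with several readers at once would show the queueing effect better)
for depth in 4 16 64; do
    echo $depth > /sys/block/sda/device/queue_depth
    echo "queue_depth = $depth"
    hdparm -t /dev/sda
done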
_________________
Work Station - 64bit
Gigabyte GA X48-DQ6 Core2duo E8400
8GB GSkill DDR2-1066
SATA Areca 1210 Raid
BFG OC2 8800 GTS 640mb
--------------------------------
Notebook
Samsung Q45 7100 4gb
VinceLe
n00b


Joined: 24 Nov 2005
Posts: 7
Location: Paris, France

Posted: Fri Nov 25, 2005 12:28 am

I remember hearing about higher disk latencies with excessive TCQ depth; perhaps that is the rationale behind the 2.4 default value.
But that may no longer be true with recent 2.6 kernels.

I think single-application throughput (best case, approximated by hdparm transfer testing) is not affected much by TCQ depth.
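
If someone wants to see where the queue depth actually matters, I would try two readers hitting different parts of the disk at the same time instead of a single hdparm run; a rough sketch (device name and offsets are just examples, and GNU dd prints its own throughput figures when it finishes):

Code:

#!/bin/sh
# two concurrent sequential readers at different offsets; with a deeper
# tag queue the drive itself can reorder the interleaved requests
dd if=/dev/sda of=/dev/null bs=1M count=512 &
dd if=/dev/sda of=/dev/null bs=1M count=512 skip=4096 &
wait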