Gentoo Forums :: Kernel & Hardware
SATA - hdparm - performance
Page 2 of 3

BlinkEye
Veteran


Joined: 21 Oct 2003
Posts: 1046
Location: Gentoo Forums

PostPosted: Sat Jun 26, 2004 7:37 am    Post subject: Reply with quote

serendipity wrote:
I liked this iozone thing. Here are the results of 45 non-stop minutes of my two maxtor 120GBs being thrashed by iozone, max file size specified as 2GB. I'm not too sure how often I'd like to run it, because the disks really do take a beating....

http://perso.wanadoo.fr/ic/iozoneresults.html

gnah, i did the test with a file size of 1GB. i'd like to compare, but to get useful results i really suggest not doing anything else while IOZone is running. if you want to compare and are willing to do another test, please mail me your results (not just the graphs) together with the command you executed (please use a file size of 1GB - that way i'd have test results from two other users to compare against). i'll pm you my email address
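
for reference, a 1GB run along those lines might look like this (just a sketch - flags vary a bit between iozone versions; -a runs the automatic test set, -g caps the file size, and -R/-b produce the spreadsheet the graphs are made from; the output filename is only an example):
Code:

# full automatic iozone run, capped at a 1GB file,
# with an Excel-style report written to iozone_results.xls
iozone -a -g 1g -R -b iozone_results.xls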
_________________
Easily back up your system? klick
Get rid of SSH Brute Force Attempts / Script Kiddies klick
BlinkEye
Veteran


Joined: 21 Oct 2003
Posts: 1046
Location: Gentoo Forums

PostPosted: Sat Jun 26, 2004 7:44 am    Post subject: Reply with quote

lbrtuk wrote:
The problem with iozone is it's not filesystem independent. It works on top of the filesystem. Therefore someone using reiserfs will get totally different results from someone using ext3, and it will have little to do with the hardware.

i don't see a problem there, because i want to know how fast my drives are as they are configured. i'm not interested in how fast they could be if everything were perfect - what would be the use of that? that's in fact another reason why one SHOULD use IOZone to benchmark his system.
lbrtuk
l33t


Joined: 08 May 2003
Posts: 910

PostPosted: Sat Jun 26, 2004 2:20 pm    Post subject: Reply with quote

BlinkEye wrote:
i don't see a problem there, because i do want to know how fast my drives are configured as they are. i'm not interested how fast they could be if everything was perfect - what's the use of it? that's in fact another reason why one SHOULD use IOZone to benchmark his system.


It's because if you say "Hey, I'm getting 38 MB/s with my setup: xyz" and someone comes back and says "Hi, I've got a very similar setup: xyz, but I'm getting 53 MB/s. You must have configured something wrong", that can be very useful information. But if you use iozone and you're both using different filesystems, it's completely worthless information when it comes to setting up drivers and udma modes.
BlinkEye
Veteran


Joined: 21 Oct 2003
Posts: 1046
Location: Gentoo Forums

PostPosted: Sat Jun 26, 2004 3:15 pm    Post subject: Reply with quote

lbrtuk wrote:
It's because if you say "Hey, I'm getting 38 MB/s with my setup: xyz" and someone comes back and says "Hi, I've got a very similar setup: xyz, but I'm getting 53 MB/s. You must have configured something wrong", that can be very useful information. But if you use iozone and you're both using different filesystems, it's completely worthless information when it comes to setting up drivers and udma modes.

your first point would be right if hdparm -t -T produced useful and accurate results, so here's an example:

system specs #1: amd64 3200+, 3x 512MB DDR400, 3x 120GB Seagate SATA drives (7200 RPM) in a raid5
Code:
# hdparm -t -T /dev/md1
/dev/md1:
 Timing buffer-cache reads:   1756 MB in  2.00 seconds = 671.77 MB/sec
 Timing buffered disk reads:   56 MB in  3.02 seconds =  55.58 MB/sec

system specs #2: intel pentium M 1200 MHz, 1x 512MB SDRAM, 1x 40GB ATA drive (5400 RPM)
Code:
# hdparm -t -T /dev/hda
/dev/hda:
 Timing buffer-cache reads:   1756 MB in  2.00 seconds = 876.82 MB/sec
 Timing buffered disk reads:   56 MB in  3.02 seconds =  18.53 MB/sec

according to the manual of hdparm:
Quote:
-T Perform timings of cache reads for benchmark and comparison purposes. For meaningful results, this operation should be repeated 2-3 times on an otherwise inactive system (no other active processes) with at least a couple of megabytes of free memory. This displays the speed of reading directly from the Linux buffer cache without disk access. This measurement is essentially an indication of the throughput of the processor, cache, and memory of the system under test.

so one result doesn't really say anything about your drive, and what the other says is that my laptop is a lot faster than my server? that's not a useful result. the disk-read figure may be ok, but it doesn't say anything at all about your raid or your drives when you're actually working with them.

so, what would be configured wrong if someone gets different transfer rates? as we are talking about SATA drives, the misconfiguration of two ide drives on the same ide channel does not apply (that would only come into play if you were running a raid across one channel; otherwise it doesn't matter for the test). and since most of us get unsatisfying results from raid devices: you MUST have enabled the right kernel settings, or your raid wouldn't run at all. so, i want to know how fast my drives are, hence i do a test with IOZone and compare the result to yours. maybe my drives are slower, which would be down to the filesystem or the raid settings (i guess you know the drill) - then i'd be totally persuaded that i'm not getting the utmost out of my raid and would either change the filesystem or change some settings. because i ran some hard tests which reflect the daily use of my drives, i'd know it MUST be a settings problem.
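
btw, the man page's "repeat 2-3 times on an otherwise inactive system" advice is easy to script; a minimal sketch:
Code:

# take three samples and eyeball the spread between runs
for i in 1 2 3; do
    hdparm -t -T /dev/md1
done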
lbrtuk
l33t


Joined: 08 May 2003
Posts: 910

PostPosted: Sat Jun 26, 2004 7:48 pm    Post subject: Reply with quote

That's entirely what I'm talking about!

Count the number of threads on this forum that go "I'm not sure udma is working properly" or "My hard disk is making clicking sounds, and hdparm -tT says this...". When you're trying to troubleshoot problems like that, you want a tool that has nothing to do with the filesystem. The filesystem would just overcomplicate things.

Quote:
doesn't say anything at all about your raid or your drives when you're working with them.


I know, I'm not talking about that. I'm talking about troubleshooting hardware problems.
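
On that note, for hardware-level troubleshooting the negotiated transfer mode is often more telling than any throughput number. A quick check (a sketch; hdparm -i dumps the drive's identify data, and the starred entry marks the active mode):
Code:

# the line "UDMA modes: udma0 udma1 *udma2 ..." shows the mode in use
hdparm -i /dev/hda | grep -i modes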
BlinkEye
Veteran


Joined: 21 Oct 2003
Posts: 1046
Location: Gentoo Forums

PostPosted: Sat Jun 26, 2004 7:56 pm    Post subject: Reply with quote

lbrtuk wrote:
I'm talking about troubleshooting hardware problems

i agree! this is what i forgot to mention in my previous post: for quick and easy troubleshooting there's no better way than to fire up hdparm. but from this thread i thought it was all about REALLY benchmarking your drives ...
lbrtuk
l33t


Joined: 08 May 2003
Posts: 910

PostPosted: Sat Jun 26, 2004 8:07 pm    Post subject: Reply with quote

Well no, when you're asking about SATA performance, what you're asking is "Hi guys, I've got a SATA system and here are the numbers I'm getting. Do you think I've got it set up right?" and not "Hi, what real life performance should I expect to get with SATA?".

He's not asking a filesystem question.

Anyway, this has gotten gravely off topic.
BlinkEye
Veteran


Joined: 21 Oct 2003
Posts: 1046
Location: Gentoo Forums

PostPosted: Sat Jun 26, 2004 9:28 pm    Post subject: Reply with quote

now you've given me two reasons to back down :wink:
srs5694
Guru


Joined: 08 Mar 2004
Posts: 434
Location: Woonsocket, RI

PostPosted: Sat Jun 26, 2004 11:00 pm    Post subject: Reply with quote

lbrtuk wrote:
The problem with iozone is it's not filesystem independent. It works on top of the filesystem. Therefore someone using reiserfs will get totally different results from someone using ext3, and it will have little to do with the hardware.


That's the impression I get. Whatever its flaws, hdparm is a fairly direct test of hardware performance, and in particular, sustained (on computer timescales) raw read operations. IOzone, from what I've seen in the documentation, is a filesystem tester. As such, it's dependent on hardware, but it's also dependent on the filesystem implementation, data structures, and maybe even stuff like how full or fragmented a specific disk is. (I've not looked into it in enough depth to know what might influence its results.) IOzone's hardware dependency will also test somewhat different features than hdparm; for instance, I'd expect IOzone performance to be more influenced by head seeks.

In sum, my impression is that hdparm is the superior tool for testing whether your kernel parameters and drive DMA features are set reasonably; it's quick and directly tests the drive performance factors that'll be most influenced by kernel settings. IOzone might be a superior tool for comparing different brands or models of drives or even disk controllers if you perform sufficiently controlled tests. If you just compare your disk to your neighbor's using your existing installations, there are likely to be too many variables to draw valid conclusions about your hardware -- or your kernel settings, for that matter.

As to the buffer-cache readings in the hdparm output: that's mostly a measure of your computer's memory subsystem; it's the performance of the buffer cache that the kernel maintains. Disk hardware has little or no influence on this measure, as I understand it. Low values might result from a weak CPU, a poor motherboard memory subsystem, slower-than-optimal RAM, etc. This value can vary much more dramatically across systems than actual disk performance. For instance, my Athlon 64 3000+ system gets values of about 1180 MB/s for buffer-cache reads and 30 MB/s for disk throughput on an older IDE disk, whereas my 266 MHz iMac gets values of 71 MB/s and 13 MB/s. Clearly, the Athlon 64's memory performance blows away the iMac's, but the actual disk subsystem, although better, isn't nearly so dramatically better.
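
A quick way to sanity-check those kernel/driver settings before timing anything (a sketch; with no option flags, hdparm just prints the device's current settings):
Code:

# prints multcount, using_dma, readahead, etc. for the drive
hdparm /dev/hda

# using_dma = 0 is the classic cause of single-digit MB/s disk reads
hdparm -d1 /dev/hda    # try to enable DMA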
BlinkEye
Veteran


Joined: 21 Oct 2003
Posts: 1046
Location: Gentoo Forums

PostPosted: Sun Jun 27, 2004 7:32 am    Post subject: Reply with quote

how do you explain this result?

system specs #1: amd64 3200+, 3x 512MB DDR400, 3x 120GB Seagate SATA drives (7200 RPM) in a raid5
Code:
# hdparm -t -T /dev/md1
/dev/md1:
 Timing buffer-cache reads:   1756 MB in  2.00 seconds = 671.77 MB/sec
 Timing buffered disk reads:   56 MB in  3.02 seconds =  55.58 MB/sec

system specs #2: intel pentium M 1200 MHz, 1x 512MB SDRAM, 1x 40GB ATA drive (5400 RPM)
Code:
# hdparm -t -T /dev/hda
/dev/hda:
 Timing buffer-cache reads:   1756 MB in  2.00 seconds = 876.82 MB/sec
 Timing buffered disk reads:   56 MB in  3.02 seconds =  18.53 MB/sec

according to the manual of hdparm:
Quote:
-T Perform timings of cache reads for benchmark and comparison purposes. For meaningful results, this operation should be repeated 2-3 times on an otherwise inactive system (no other active processes) with at least a couple of megabytes of free memory. This displays the speed of reading directly from the Linux buffer cache without disk access. This measurement is essentially an indication of the throughput of the processor, cache, and memory of the system under test.

Gherald2
Guru


Joined: 02 Jul 2003
Posts: 326
Location: Madison, WI USA

PostPosted: Mon Jun 28, 2004 10:48 pm    Post subject: Reply with quote

In that particular system the md1 raid5 cannot keep up with >670 MB/sec cache speeds. Note, however, that it is plenty fast enough to keep up with the ~55 MB/s of actual drive throughput.

On System #1 do:

hdparm -tT /dev/hdX

You should run it 3 times on each of your sata drives (9 times total) and round your figures....
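
Spelled out as a loop, that could look like this (the device names are placeholders; substitute whatever the three drives are called on that box):
Code:

# three timing runs per drive, three drives = nine samples
for dev in /dev/hde /dev/hdg /dev/hdi; do
    for run in 1 2 3; do
        hdparm -tT $dev
    done
done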
carpman
Advocate


Joined: 20 Jun 2002
Posts: 2202
Location: London - UK

PostPosted: Thu Jul 15, 2004 6:49 pm    Post subject: Reply with quote

just for comparison, i have 2 Maxtor DiamondMax Plus 9 40GB (not sata) drives on an ITE raid0 controller, kernel 2.6.7, and get

Code:

hdparm -t /dev/sda
/dev/sda:
 Timing buffered disk reads:  216 MB in  3.01 seconds =  71.80 MB/sec
_________________
Work Station - 64bit
Gigabyte GA X48-DQ6 Core2duo E8400
8GB GSkill DDR2-1066
SATA Areca 1210 Raid
BFG OC2 8800 GTS 640mb
--------------------------------
Notebook
Samsung Q45 7100 4gb
arsen
Bodhisattva


Joined: 10 Apr 2004
Posts: 1803
Location: Siemianowice Śląskie, Poland

PostPosted: Mon Jul 19, 2004 7:23 pm    Post subject: Reply with quote

my software raid0, 2x maxtor sata 80GB:

with /dev/md3 mounted:
Code:

hdparm -tT /dev/md3
/dev/md3:
 Timing buffer-cache reads:   1316 MB in  2.00 seconds = 657.44 MB/sec
 Timing buffered disk reads:  238 MB in  3.02 seconds =  78.87 MB/sec

with /dev/md3 unmounted:
Code:

hdparm -tT /dev/md3
/dev/md3:
 Timing buffer-cache reads:   1276 MB in  2.00 seconds = 637.14 MB/sec
 Timing buffered disk reads:  306 MB in  3.01 seconds = 101.71 MB/sec

hmmm, mounted slow, unmounted fast....
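
a back-to-back way to reproduce that comparison (a sketch; part of the gap may simply be other processes touching the mounted filesystem during the timing run):
Code:

# time the array mounted, then unmounted, with the box otherwise idle
hdparm -tT /dev/md3
umount /dev/md3
hdparm -tT /dev/md3
mount /dev/md3     # assumes an fstab entry for /dev/md3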
rkrenzis
Tux's lil' helper


Joined: 22 Jul 2004
Posts: 135
Location: USA

PostPosted: Mon Aug 02, 2004 2:37 am    Post subject: Single SATA drive Reply with quote

I have a Maxtor 6Y250M0 SATA drive on a SOYO CY-K8 Plus (nforce3-based) board.

Two questions:

1. Is it an error that the system recognizes it as an ATA drive rather than a SATA drive?
2. Can my performance be tuned? From hdparm -tT /dev/hdc:

Code:

/dev/hdc:
 Timing buffer-cache reads:   1960 MB in  2.00 seconds = 979.17 MB/sec
 Timing buffered disk reads:   46 MB in  3.11 seconds =  14.80 MB/sec

I'm quite impressed to see individuals running raid-0 with reads in the mid-200s to low-300s.

Entries in /etc/conf.d/hdparm:

disc0_args="-d1 -A1 -m16 -u1 -a256 -X69"

TIA.
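
(For anyone else reading, those flags decode roughly as follows - my annotation, not from the original post; -X takes 64 plus the UDMA mode number, so -X69 requests UDMA5:)
Code:

# /etc/conf.d/hdparm
disc0_args="-d1 -A1 -m16 -u1 -a256 -X69"
# -d1    enable DMA
# -A1    enable the drive's read-lookahead
# -m16   multiple-sector (block mode) I/O, 16 sectors at a time
# -u1    unmask other interrupts during disk I/O
# -a256  filesystem read-ahead of 256 sectors
# -X69   set transfer mode to UDMA5 (64 + 5)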
Dr_b_
n00b


Joined: 18 Jan 2004
Posts: 33

PostPosted: Mon Aug 02, 2004 5:04 am    Post subject: Reply with quote

My SATA Drive:

Code:
ENROL-V2 ~ # hdparm -tT /dev/sda

/dev/sda:
 Timing buffer-cache reads:   3724 MB in  2.00 seconds = 1862.28 MB/sec
 Timing buffered disk reads:  164 MB in  3.01 seconds =  54.53 MB/sec


It wouldn't work for me without the SCSI layer on top; I couldn't figure out how to get it to work with the PIIX driver only. Still, not bad performance.

Drive is a WD Raptor, 36GB; board is an Asus P4C800-E
rkrenzis
Tux's lil' helper


Joined: 22 Jul 2004
Posts: 135
Location: USA

PostPosted: Mon Aug 02, 2004 10:49 am    Post subject: nvidia sata driver in 2.6.8... Reply with quote

Short answer: wait for 2.6.8. I'm going to grab a prepatch and see if it improves the overall speed.
c0balt
Guru


Joined: 04 Jul 2004
Posts: 441
Location: Germany

PostPosted: Mon Aug 02, 2004 2:55 pm    Post subject: Reply with quote

hi,
is there any way to improve performance on sata with hdparm,
i.e. applying settings like on ide?

Code:
[mybox ~]# hdparm -Tt /dev/sda /dev/hda

/dev/sda:
 Timing buffer-cache reads:   3736 MB in  2.00 seconds = 1867.35 MB/sec
 Timing buffered disk reads:  206 MB in  3.02 seconds =  68.18 MB/sec

/dev/hda:
 Timing buffer-cache reads:   3784 MB in  2.00 seconds = 1890.40 MB/sec
 Timing buffered disk reads:   64 MB in  3.01 seconds =  21.28 MB/sec


not bad, but maybe it can get better with improved settings?
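
worth noting (my addition, hedged): most of hdparm's tuning flags (-d, -X, -m, ...) are IDE-driver ioctls and will generally fail on libata's SCSI-emulated /dev/sd* devices. read-ahead is one of the few generic knobs, and it can also be set with blockdev; the value below is only an example to experiment with:
Code:

blockdev --getra /dev/sda        # current read-ahead, in 512-byte sectors
blockdev --setra 1024 /dev/sda   # raise it, then re-run hdparm -t to compare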
rkrenzis
Tux's lil' helper


Joined: 22 Jul 2004
Posts: 135
Location: USA

PostPosted: Mon Aug 02, 2004 6:48 pm    Post subject: sata performance... Reply with quote

Grüß Gott, c0balt!

My understanding from the newsgroups is that linux can identify the optimum settings for most new drives, so unless you are having problems, tweaking isn't necessary. You are getting much better disk reads than I am. I'm getting worse benchmarks than your IDE drive (mine is a SATA drive) because the SATA driver for the nvidia chipset isn't incorporated in the linux kernel yet. :(

I'm going to have to kiss bootsplash goodbye until the final 2.6.8 comes out, and then I should get the desired speed (I'm almost sure of this).

I'll send a update later this evening regarding 2.6.8-rc2 and nvidia sata drivers.

...btw c0balt what part of Germany are you in? I have family in Augsburg.


Last edited by rkrenzis on Tue Aug 03, 2004 2:41 am; edited 1 time in total
rkrenzis
Tux's lil' helper


Joined: 22 Jul 2004
Posts: 135
Location: USA

PostPosted: Tue Aug 03, 2004 12:42 am    Post subject: nvidia sata in 2.6.8-rc2 for nforce150 doesn't work Reply with quote

Okay, I've taken a 2.6.7 vanilla kernel and patched it to 2.6.8-rc2. My drive still shows up as an ATA drive.

This is very annoying. I even get this lovely message:

Quote:
hdc: Speed warnings UDMA 3/4/5 is not functional.


I have SCSI, SCSI disk, SATA, and NVIDIA SATA driver support statically compiled into the kernel. Any thoughts or ideas?
rkrenzis
Tux's lil' helper


Joined: 22 Jul 2004
Posts: 135
Location: USA

PostPosted: Tue Aug 03, 2004 2:17 am    Post subject: 2.6.8-rc2-bk12 no go on sata nvidia driver Reply with quote

I tried 2.6.8-rc2-bk12 and still no go with the nvidia sata driver. I tried disabling all ide controllers in the bios and disabling all ide disks, but the drive still shows up as an ide disk. I also tried the sata driver under the "ide" menu, still to no avail. The disk in my Pentium 200 connected to an UltraDMA133 controller is faster than this heap of junk. :cry:

Any ideas?
rkrenzis
Tux's lil' helper


Joined: 22 Jul 2004
Posts: 135
Location: USA

PostPosted: Tue Aug 03, 2004 2:30 am    Post subject: 2.6.8-rc2-bk12 dmesg/hdparm/uname output Reply with quote

dmesg output
Code:

Bootdata ok (command line is root=/dev/hde3 vga=795)
Linux version 2.6.8-rc2-bk12 (root@clawhammer) (gcc version 3.3.4 20040623 (Gentoo Linux 3.3.4-r1, ssp-3.3.2-2, pie-8.7.6)) #3 Mon Aug 2 22:26:21 GMT 2004
BIOS-provided physical RAM map:
 BIOS-e820: 0000000000000000 - 000000000009f800 (usable)
 BIOS-e820: 000000000009f800 - 00000000000a0000 (reserved)
 BIOS-e820: 00000000000f0000 - 0000000000100000 (reserved)
 BIOS-e820: 0000000000100000 - 000000001fff0000 (usable)
 BIOS-e820: 000000001fff0000 - 000000001fff3000 (ACPI NVS)
 BIOS-e820: 000000001fff3000 - 0000000020000000 (ACPI data)
 BIOS-e820: 00000000fec00000 - 00000000fec01000 (reserved)
 BIOS-e820: 00000000fee00000 - 00000000fef00000 (reserved)
 BIOS-e820: 00000000fefffc00 - 00000000ff000000 (reserved)
 BIOS-e820: 00000000ffff0000 - 0000000100000000 (reserved)
No mptable found.
On node 0 totalpages: 131056
  DMA zone: 4096 pages, LIFO batch:1
  Normal zone: 126960 pages, LIFO batch:16
  HighMem zone: 0 pages, LIFO batch:1
PCI bridge 00:0a from 10de found. Setting "noapic". Overwrite with "apic"
ACPI: RSDP (v000 Nvidia                                    ) @ 0x00000000000f62c0
ACPI: RSDT (v001 Nvidia AWRDACPI 0x42302e31 AWRD 0x00000000) @ 0x000000001fff3000
ACPI: FADT (v001 Nvidia AWRDACPI 0x42302e31 AWRD 0x00000000) @ 0x000000001fff3040
ACPI: MADT (v001 Nvidia AWRDACPI 0x42302e31 AWRD 0x00000000) @ 0x000000001fff8000
ACPI: DSDT (v001 NVIDIA AWRDACPI 0x00001000 MSFT 0x0100000e) @ 0x0000000000000000
ACPI: Local APIC address 0xfee00000
ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
Processor #0 15:4 APIC version 16
ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
ACPI: Skipping IOAPIC probe due to 'noapic' option.
Using ACPI for processor (LAPIC) configuration information
Intel MultiProcessor Specification v1.1
    Virtual Wire compatibility mode.
OEM ID: OEM00000 <6>Product ID: PROD00000000 <6>APIC at: 0xFEE00000
I/O APIC #2 Version 17 at 0xFEC00000.
Processors: 1
Checking aperture...
CPU 0: aperture @ c0000000 size 256 MB
Built 1 zonelists
Kernel command line: root=/dev/hde3 vga=795 console=tty0
Initializing CPU#0
PID hash table entries: 16 (order 4: 256 bytes)
time.c: Using 1.193182 MHz PIT timer.
time.c: Detected 2000.025 MHz processor.
Console: colour dummy device 80x25
Dentry cache hash table entries: 131072 (order: 8, 1048576 bytes)
Inode-cache hash table entries: 65536 (order: 7, 524288 bytes)
Memory: 510932k/524224k available (1864k kernel code, 12536k reserved, 990k data, 432k init)
Calibrating delay loop... 3964.92 BogoMIPS
Mount-cache hash table entries: 256 (order: 0, 4096 bytes)
CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line)
CPU: L2 Cache: 1024K (64 bytes/line)
CPU: AMD Athlon(tm) 64 Processor 3200+ stepping 08
Using local APIC NMI watchdog using perfctr0
Using local APIC timer interrupts.
Detected 12.500 MHz APIC timer.
NET: Registered protocol family 16
PCI: Using configuration type 1
mtrr: v2.0 (20020519)
ACPI: Subsystem revision 20040326
ACPI: IRQ9 SCI: Level Trigger.
ACPI: Interpreter enabled
ACPI: Using PIC for interrupt routing
ACPI: PCI Root Bridge [PCI0] (00:00)
PCI: Probing PCI hardware (bus 00)
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0._PRT]
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.HUB0._PRT]
ACPI: Power Resource [ISAV] (on)
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.AGPB._PRT]
ACPI: PCI Interrupt Link [LNK1] (IRQs 3 4 5 6 7 9 10 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LNK2] (IRQs 3 4 5 6 7 9 10 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LNK3] (IRQs 3 4 5 6 7 *9 10 11 12 14 15)
ACPI: PCI Interrupt Link [LNK4] (IRQs 3 4 5 6 7 9 10 *11 12 14 15)
ACPI: PCI Interrupt Link [LNK5] (IRQs 3 4 5 6 7 *9 10 11 12 14 15)
ACPI: PCI Interrupt Link [LUBA] (IRQs 3 4 5 6 7 9 10 *11 12 14 15)
ACPI: PCI Interrupt Link [LUBB] (IRQs 3 4 5 6 7 9 10 *11 12 14 15)
ACPI: PCI Interrupt Link [LMAC] (IRQs 3 4 *5 6 7 9 10 11 12 14 15)
ACPI: PCI Interrupt Link [LAPU] (IRQs 3 4 5 6 7 9 10 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LACI] (IRQs 3 4 *5 6 7 9 10 11 12 14 15)
ACPI: PCI Interrupt Link [LMCI] (IRQs 3 4 5 6 7 9 10 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LSMB] (IRQs 3 4 *5 6 7 9 10 11 12 14 15)
ACPI: PCI Interrupt Link [LUB2] (IRQs 3 4 5 6 7 9 10 *11 12 14 15)
ACPI: PCI Interrupt Link [LFIR] (IRQs 3 4 5 6 7 9 10 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [L3CM] (IRQs 3 4 5 6 7 9 10 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LIDE] (IRQs 3 4 5 6 7 9 10 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LSID] (IRQs 3 4 5 6 7 9 10 *11 12 14 15)
ACPI: PCI Interrupt Link [APC1] (IRQs *16), disabled.
ACPI: PCI Interrupt Link [APC2] (IRQs *17), disabled.
ACPI: PCI Interrupt Link [APC3] (IRQs *18), disabled.
ACPI: PCI Interrupt Link [APC4] (IRQs *19), disabled.
ACPI: PCI Interrupt Link [APC5] (IRQs *16), disabled.
ACPI: PCI Interrupt Link [APCF] (IRQs 20 21 22) *0, disabled.
ACPI: PCI Interrupt Link [APCG] (IRQs 20 21 22) *0, disabled.
ACPI: PCI Interrupt Link [APCH] (IRQs 20 21 22) *0, disabled.
ACPI: PCI Interrupt Link [APCI] (IRQs 20 21 22) *0, disabled.
ACPI: PCI Interrupt Link [APCJ] (IRQs 20 21 22) *0, disabled.
ACPI: PCI Interrupt Link [APCK] (IRQs 20 21 22) *0, disabled.
ACPI: PCI Interrupt Link [APCS] (IRQs *23), disabled.
ACPI: PCI Interrupt Link [APCL] (IRQs 20 21 22) *0, disabled.
ACPI: PCI Interrupt Link [APCM] (IRQs 20 21 22) *0, disabled.
ACPI: PCI Interrupt Link [AP3C] (IRQs 20 21 22) *0, disabled.
ACPI: PCI Interrupt Link [APCZ] (IRQs 20 21 22) *0, disabled.
ACPI: PCI Interrupt Link [APSI] (IRQs 20 21 22) *0, disabled.
SCSI subsystem initialized
usbcore: registered new driver usbfs
usbcore: registered new driver hub
PCI: Using ACPI for IRQ routing
ACPI: PCI Interrupt Link [LSMB] enabled at IRQ 5
ACPI: PCI interrupt 0000:00:01.1[A] -> GSI 5 (level, low) -> IRQ 5
ACPI: PCI Interrupt Link [LUBA] enabled at IRQ 11
ACPI: PCI interrupt 0000:00:02.0[A] -> GSI 11 (level, low) -> IRQ 11
ACPI: PCI Interrupt Link [LUBB] enabled at IRQ 11
ACPI: PCI interrupt 0000:00:02.1[B] -> GSI 11 (level, low) -> IRQ 11
ACPI: PCI Interrupt Link [LUB2] enabled at IRQ 11
ACPI: PCI interrupt 0000:00:02.2[C] -> GSI 11 (level, low) -> IRQ 11
ACPI: PCI Interrupt Link [LMAC] enabled at IRQ 5
ACPI: PCI interrupt 0000:00:05.0[A] -> GSI 5 (level, low) -> IRQ 5
ACPI: PCI Interrupt Link [LACI] enabled at IRQ 5
ACPI: PCI interrupt 0000:00:06.0[A] -> GSI 5 (level, low) -> IRQ 5
ACPI: PCI Interrupt Link [LSID] enabled at IRQ 11
ACPI: PCI interrupt 0000:00:09.0[A] -> GSI 11 (level, low) -> IRQ 11
ACPI: PCI Interrupt Link [LNK3] enabled at IRQ 9
ACPI: PCI interrupt 0000:02:06.0[A] -> GSI 9 (level, low) -> IRQ 9
ACPI: PCI Interrupt Link [LNK4] enabled at IRQ 11
ACPI: PCI interrupt 0000:02:07.0[A] -> GSI 11 (level, low) -> IRQ 11
ACPI: PCI Interrupt Link [LNK5] enabled at IRQ 9
ACPI: PCI interrupt 0000:01:00.0[A] -> GSI 9 (level, low) -> IRQ 9
agpgart: Detected AGP bridge 0
agpgart: Setting up Nforce3 AGP.
agpgart: Maximum main memory to use for agp memory: 439M
agpgart: AGP aperture is 256M @ 0xc0000000
PCI-DMA: Disabling IOMMU.
vesafb: framebuffer at 0xb0000000, mapped to 0xffffff000008e000, size 10240k
vesafb: mode is 1280x1024x32, linelength=5120, pages=0
vesafb: scrolling: redraw
vesafb: directcolor: size=8:8:8:8, shift=24:16:8:0
fb0: VESA VGA frame buffer device
IA32 emulation $Id: sys_ia32.c,v 1.32 2002/03/24 13:02:28 ak Exp $
Total HugeTLB memory allocated, 0
devfs: 2004-01-31 Richard Gooch (rgooch@atnf.csiro.au)
devfs: boot_options: 0x1
Console: switching to colour frame buffer device 160x64
Real Time Clock Driver v1.12
Linux agpgart interface v0.100 (c) Dave Jones
Hangcheck: starting hangcheck timer 0.5.0 (tick is 180 seconds, margin is 60 seconds).
Serial: 8250/16550 driver $Revision: 1.90 $ 8 ports, IRQ sharing disabled
ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
Using anticipatory io scheduler
floppy0: no floppy controllers found
RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize
loop: loaded (max 8 devices)
forcedeth.c: Reverse Engineered nForce ethernet driver. Version 0.28.
ACPI: PCI interrupt 0000:00:05.0[A] -> GSI 5 (level, low) -> IRQ 5
PCI: Setting latency timer of device 0000:00:05.0 to 64
eth0: forcedeth.c: subsystem: 010de:0c11 bound to 0000:00:05.0
Uniform Multi-Platform E-IDE driver Revision: 7.00alpha2
ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx
NFORCE3-150: IDE controller at PCI slot 0000:00:08.0
NFORCE3-150: chipset revision 165
NFORCE3-150: not 100% native mode: will probe irqs later
NFORCE3-150: 0000:00:08.0 (rev a5) UDMA133 controller
    ide0: BM-DMA at 0xf000-0xf007, BIOS settings: hda:DMA, hdb:DMA
    ide1: BM-DMA at 0xf008-0xf00f, BIOS settings: hdc:DMA, hdd:DMA
hda: Hewlett-Packard DVD Writer 100, ATAPI CD/DVD-ROM drive
ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
NFORCE3-150: IDE controller at PCI slot 0000:00:09.0
ACPI: PCI interrupt 0000:00:09.0[A] -> GSI 11 (level, low) -> IRQ 11
NFORCE3-150: chipset revision 245
NFORCE3-150: 0000:00:09.0 (rev f5) UDMA133 controller
NFORCE3-150: 100% native mode on irq 11
    ide2: BM-DMA at 0xd000-0xd007, BIOS settings: hde:DMA, hdf:pio
hde: Maxtor 6Y250M0, ATA DISK drive
ide2 at 0x9f0-0x9f7,0xbf2 on irq 11
hde: max request size: 1024KiB
hde: 490234752 sectors (251000 MB) w/7936KiB Cache, CHS=30515/255/63, UDMA(33)
 /dev/ide/host2/bus0/target0/lun0: p1 p2 p3 p4 < p5 p6 p7 p8 >
hda: ATAPI 32X DVD-ROM CD-R/RW drive, 2048kB Cache, UDMA(33)
Uniform CD-ROM driver Revision: 3.20
libata version 1.02 loaded.
ACPI: PCI interrupt 0000:00:02.2[C] -> GSI 11 (level, low) -> IRQ 11
ehci_hcd 0000:00:02.2: nVidia Corporation nForce3 USB 2.0
PCI: Setting latency timer of device 0000:00:02.2 to 64
ehci_hcd 0000:00:02.2: irq 11, pci mem ffffff0000af1000
ehci_hcd 0000:00:02.2: new USB bus registered, assigned bus number 1
PCI: cache line size of 64 is not supported by device 0000:00:02.2
ehci_hcd 0000:00:02.2: USB 2.0 enabled, EHCI 1.00, driver 2004-May-10
hub 1-0:1.0: USB hub found
hub 1-0:1.0: 6 ports detected
ohci_hcd: 2004 Feb 02 USB 1.1 'Open' Host Controller (OHCI) Driver (PCI)
ohci_hcd: block sizes: ed 80 td 96
ACPI: PCI interrupt 0000:00:02.0[A] -> GSI 11 (level, low) -> IRQ 11
ohci_hcd 0000:00:02.0: nVidia Corporation nForce3 USB 1.1
PCI: Setting latency timer of device 0000:00:02.0 to 64
ohci_hcd 0000:00:02.0: irq 11, pci mem ffffff0000af3000
ohci_hcd 0000:00:02.0: new USB bus registered, assigned bus number 2
hub 2-0:1.0: USB hub found
hub 2-0:1.0: 3 ports detected
ACPI: PCI interrupt 0000:00:02.1[B] -> GSI 11 (level, low) -> IRQ 11
ohci_hcd 0000:00:02.1: nVidia Corporation nForce3 USB 1.1 (#2)
PCI: Setting latency timer of device 0000:00:02.1 to 64
ohci_hcd 0000:00:02.1: irq 11, pci mem ffffff0000af5000
ohci_hcd 0000:00:02.1: new USB bus registered, assigned bus number 3
hub 3-0:1.0: USB hub found
hub 3-0:1.0: 3 ports detected
usbcore: registered new driver usblp
drivers/usb/class/usblp.c: v0.13: USB Printer Device Class driver
Initializing USB Mass Storage driver...
usbcore: registered new driver usb-storage
USB Mass Storage support registered.
usbcore: registered new driver usbhid
drivers/usb/input/hid-core.c: v2.0:USB HID core driver
mice: PS/2 mouse device common for all mice
serio: i8042 AUX port at 0x60,0x64 irq 12
input: ImPS/2 Generic Wheel Mouse on isa0060/serio1
serio: i8042 KBD port at 0x60,0x64 irq 1
input: AT Translated Set 2 keyboard on isa0060/serio0
NET: Registered protocol family 2
IP: routing cache hash table of 4096 buckets, 32Kbytes
TCP: Hash tables configured (established 32768 bind 32768)
NET: Registered protocol family 1
NET: Registered protocol family 17
VFS: Mounted root (jfs filesystem) readonly.
Mounted devfs on /dev
Freeing unused kernel memory: 432k freed
Adding 2008116k swap on /dev/hde2.  Priority:-1 extents:1
ACPI: PCI interrupt 0000:00:06.0[A] -> GSI 5 (level, low) -> IRQ 5
PCI: Setting latency timer of device 0000:00:06.0 to 64
intel8x0_measure_ac97_clock: measured 49553 usecs
intel8x0: clocking to 47413
Linux video capture interface: v1.00
bttv: driver version 0.9.15 loaded
bttv: using 8 buffers with 2080k (520 pages) each for capture
i2c /dev entries driver
tvaudio: TV audio decoder + audio/video mux driver
tvaudio: known chips: tda9840,tda9873h,tda9874h/a,tda9850,tda9855,tea6300,tea6420,tda8425,pic16c54 (PV951),ta8874z
ohci1394: $Rev: 1223 $ Ben Collins <bcollins@debian.org>
ACPI: PCI interrupt 0000:02:06.0[A] -> GSI 9 (level, low) -> IRQ 9
ohci1394: fw-host0: OHCI-1394 1.0 (PCI): IRQ=[9]  MMIO=[d6004000-d60047ff]  Max Packet=[2048]
ieee1394: raw1394: /dev/raw1394 device initialized
video1394: Installed video1394 module
ieee1394: Host added: ID:BUS[0-00:1023]  GUID[00308d012000038e]
hde: Speed warnings UDMA 3/4/5 is not functional.


uname -a output
Code:
Linux clawhammer 2.6.8-rc2-bk12 #3 Mon Aug 2 22:26:21 GMT 2004 x86_64 4  GNU/Linux


hdparm -tT /dev/hde output
Code:
/dev/hde:
 Timing buffer-cache reads:   2288 MB in  2.00 seconds = 1142.46 MB/sec
 Timing buffered disk reads:   46 MB in  3.03 seconds =  15.20 MB/sec
c0balt
Guru


Joined: 04 Jul 2004
Posts: 441
Location: Germany

PostPosted: Tue Aug 03, 2004 7:26 am    Post subject: Reply with quote

hi,
are you sure you've disabled "Support for SATA" in the ATA/ATAPI submenu?!
if that is active, every SCSI SATA driver will be deactivated!

Code:

#
# Please see Documentation/ide.txt for help/info on IDE drives
#
# CONFIG_BLK_DEV_IDE_SATA is not set
# CONFIG_BLK_DEV_HD_IDE is not set
CONFIG_BLK_DEV_IDEDISK=y
# CONFIG_IDEDISK_MULTI_MODE is not set
CONFIG_BLK_DEV_IDECD=y
# CONFIG_BLK_DEV_IDETAPE is not set
# CONFIG_BLK_DEV_IDEFLOPPY is not set
# CONFIG_BLK_DEV_IDESCSI is not set
# CONFIG_IDE_TASK_IOCTL is not set
# CONFIG_IDE_TASKFILE_IO is not set


edit: just to be sure, you've got this too?

Code:

#
# SCSI low-level drivers
#
# CONFIG_BLK_DEV_3W_XXXX_RAID is not set
# CONFIG_SCSI_3W_9XXX is not set
# CONFIG_SCSI_ACARD is not set
# CONFIG_SCSI_AACRAID is not set
# CONFIG_SCSI_AIC7XXX is not set
# CONFIG_SCSI_AIC7XXX_OLD is not set
# CONFIG_SCSI_AIC79XX is not set
# CONFIG_SCSI_DPT_I2O is not set
# CONFIG_SCSI_MEGARAID is not set
CONFIG_SCSI_SATA=y
# CONFIG_SCSI_SATA_SVW is not set
# CONFIG_SCSI_ATA_PIIX is not set
CONFIG_SCSI_SATA_NV=y
# CONFIG_SCSI_SATA_PROMISE is not set
# CONFIG_SCSI_SATA_SX4 is not set
...


if there is no ide driver in the kernel, then it's rather impossible for the drive to be recognized as /dev/hd*
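
one way to double-check what the running kernel was actually built with (assuming CONFIG_IKCONFIG_PROC is enabled; otherwise grep the .config in your kernel source tree):
Code:

zgrep -E 'CONFIG_BLK_DEV_IDE_SATA|CONFIG_SCSI_SATA' /proc/config.gz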
c0balt
Guru


Joined: 04 Jul 2004
Posts: 441
Location: Germany

PostPosted: Tue Aug 03, 2004 7:32 am    Post subject: Reply with quote

i've just checked my dmesg; somehow this doesn't sound good:

Code:

libata version 1.02 loaded.
ata_piix version 1.02
ata1: SATA max UDMA/133 cmd 0xEFE0 ctl 0xEFAE bmdma 0xEF90 irq 18
ata2: SATA max UDMA/133 cmd 0xEFA0 ctl 0xEFAA bmdma 0xEF98 irq 18
ata1: dev 0 cfg 49:2f00 82:74eb 83:7f63 84:4003 85:74e9 86:3c43 87:4003 88:207f
ata1: dev 0 ATA, max UDMA/133, 145226112 sectors: lba48
ata1: dev 0 configured for UDMA/133
scsi0 : ata_piix
ata2: SATA port has no device.
scsi1 : ata_piix


configured for UDMA/133?! wth?

edit: i'm on 2.6.8-rc2-mm1-reiser4, maybe you should try rc2-mm2
Dr_b_
n00b


Joined: 18 Jan 2004
Posts: 33

PostPosted: Tue Aug 03, 2004 8:02 am    Post subject: Reply with quote

I get the same thing...

Code:
ata1: dev 0 configured for UDMA/133

Linux enrolv2 2.6.7-gentoo-r11 #9 SMP Mon Jul 26 04:14:14 UTC 2004 i686 Intel(R) Pentium(R) 4 CPU 3.20GHz GenuineIntel GNU/Linux
rkrenzis
Tux's lil' helper


Joined: 22 Jul 2004
Posts: 135
Location: USA

PostPosted: Tue Aug 03, 2004 12:13 pm    Post subject: SATA enabled only in SCSI menu... Reply with quote

I did check that. SATA is only enabled in the SCSI submenus, i.e. all of these are statically compiled into the kernel:

1. SCSI *
2. SCSI disk *
3. SATA *
4. NVIDIA SATA *

At least your system recognizes that your drive is connected to SATA. I'm getting piss-poor performance. I'm ready to go back to SCSI disks after this bout with IDE disks.