rkrenzis Tux's lil' helper
Joined: 22 Jul 2004 Posts: 135 Location: USA
Posted: Tue Aug 03, 2004 5:31 pm Post subject: SATA only on nforce3-250 |
So far, my findings indicate that native SATA was only incorporated into the nForce3-250 chipset.
I have the nForce3-150 chipset.
Thus, I will be reduced to using a SATA interface that is transparently mapped to a third IDE interface. I'm going to purchase a 3ware SATA HBA and hope that it addresses my issues if I can't make any further progress with this. Go figure. Leave it to Soyo to ship some half-a$$ed, bastardized *wannabe* SATA implementation.
I've also had problems with the nforce audio drivers. I can't control the volume. Apparently I paid extra for this feature.
This is the last time I buy Soyo. I have always bought ASUS and never had such problems.
Caveat Emptor!: Stay away from the SY-CK8 Plus. It is nothing but a hacked attempt to try and make something decent. |
Dr_b_ n00b
Joined: 18 Jan 2004 Posts: 33
Posted: Tue Aug 03, 2004 5:53 pm Post subject: |
I believe Soyo went belly up; there's a news item that they were bought out by a chip consortium.
Either way, should we disable SATA support everywhere else in the kernel, except under the SCSI section? |
rkrenzis Tux's lil' helper
Joined: 22 Jul 2004 Posts: 135 Location: USA
Posted: Tue Aug 03, 2004 7:36 pm Post subject: SATA Support |
Yes, you should only have SATA support enabled under the SCSI subsystem menu. SATA support under the ATA/ATAPI menu should be disabled, since the two conflict with each other. The SATA option under ATA/ATAPI is only there for compatibility purposes. |
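If it helps, here is a minimal sketch of what that looks like in a 2.6-series .config. The option and driver names are examples from that kernel era and may differ in your version; pick the low-level driver that matches your controller.
Code: |
# Device Drivers -> SCSI device support
CONFIG_SCSI=y
CONFIG_SCSI_SATA=y                    # libata core: the "SATA support" under SCSI
CONFIG_SCSI_SATA_NV=y                 # nVidia nForce SATA (nForce3-250 and later)
CONFIG_SCSI_SATA_PROMISE=y            # Promise controllers, if applicable
# Device Drivers -> ATA/ATAPI/MFM/RLL support
# CONFIG_BLK_DEV_IDE_SATA is not set  # leave the legacy IDE-layer SATA option off
|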
Stolz Moderator
Joined: 19 Oct 2003 Posts: 3028 Location: Hong Kong
Posted: Tue Oct 19, 2004 8:50 pm Post subject: |
c0balt wrote: | I've just checked my dmesg; somehow this doesn't sound good:
Code: |
ata1: SATA max UDMA/133 cmd 0xEFE0 ctl 0xEFAE bmdma 0xEF90 irq 18
ata2: SATA max UDMA/133 cmd 0xEFA0 ctl 0xEFAA bmdma 0xEF98 irq 18
ata1: dev 0 cfg 49:2f00 82:74eb 83:7f63 84:4003 85:74e9 86:3c43 87:4003 88:207f
ata1: dev 0 ATA, max UDMA/133, 145226112 sectors: lba48
ata1: dev 0 configured for UDMA/133
|
configured for UDMA/133 ?! why? |
I'm getting the same statement. Can someone explain it?
Thanks. |
rkrenzis Tux's lil' helper
Joined: 22 Jul 2004 Posts: 135 Location: USA
Posted: Wed Oct 20, 2004 3:45 am Post subject: Bridged SATA interface and IDE Controller |
What chipset are you using? You need to verify whether you have a true SATA controller. Many drives and motherboards claim to have "SATA" support, but this pseudo-"SATA" support is accomplished by adding an additional IDE controller and then bridging it to the SATA connector. The hard drive you have may also not be a true "SATA" drive. Look near the connectors of the drive: if you see an "M" logo, your interface is bridged to an IDE interface.
I think there are only a handful of drives that actually have native SATA support. You should verify this.
Also, can you share your kernel configuration with us (the obvious sections regarding the SCSI configuration; yes, SATA is under SCSI)?
Also, a dead giveaway as to whether you are actually using SATA: in your fstab, are your raw disk devices /dev/hd* or /dev/sd*?
/dev/hd* = ide
/dev/sd* = sata or scsi
What about hdparm -iI /dev/hd* or hdparm -iI /dev/sd*? |
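A rough sketch of the checks described above (device names are placeholders; substitute whatever your system actually shows):
Code: |
# Which controller is the disk attached to?
lspci | grep -i -e ide -e sata -e raid
# How did the kernel detect the drive?
dmesg | grep -i ata
# If libata (the SCSI subsystem) is driving it, the disk shows up here:
cat /proc/scsi/scsi
# Identify data straight from the drive:
hdparm -iI /dev/sda
|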
elvisthedj Guru
Joined: 21 Jun 2004 Posts: 483 Location: Nampa, ID
Posted: Mon Nov 08, 2004 3:33 am Post subject: My SATA and IDE |
I don't have onboard SATA. I'm using an Adaptec (Silicon Image chipset) SATA Connect PCI card. Per some threads I've read here and elsewhere, I did the following:
Quote: |
Edit /usr/src/linux/drivers/ide/ide-io.c and change the lines as indicated:
- if (hwif->irq != masked_irq)
+ if (masked_irq != IDE_NO_IRQ && hwif->irq != masked_irq)
- if (hwif->irq != masked_irq)
+ if (masked_irq != IDE_NO_IRQ && hwif->irq != masked_irq)
But don't change the following lines:
if (startstop == ide_stopped)
hwgroup->busy = 0;
Recompile the kernel, and reboot.
|
Here are my before and after stats (both tests were done while the system was idle):
Code: |
bash-2.05b# hdparm -tT /dev/sda
/dev/sda:
Timing buffer-cache reads: 764 MB in 2.00 seconds = 381.11 MB/sec
Timing buffered disk reads: 164 MB in 3.03 seconds = 54.06 MB/sec
|
new kernel:
Code: |
bash-2.05b# hdparm -tT /dev/sda
/dev/sda:
Timing buffer-cache reads: 996 MB in 2.00 seconds = 497.58 MB/sec
Timing buffered disk reads: 170 MB in 3.03 seconds = 56.06 MB/sec
|
Anybody else running this patch? (Now I wish I hadn't skipped 4 pages of the thread.) Guess I'll go read it.
OK, I tested my IDE drive and... yuck. Practically like a floppy...
Code: |
/dev/hdb:
Timing buffer-cache reads: 816 MB in 2.00 seconds = 407.86 MB/sec
Timing buffered disk reads: 12 MB in 3.45 seconds = 3.47 MB/sec
|
yottabit Guru
Joined: 11 Nov 2002 Posts: 313 Location: Columbus, Ohio, US
Posted: Wed Mar 09, 2005 4:55 am Post subject: |
Time to wake this thread up, I guess.
Running 2.6.11-mm2 with the default anticipatory I/O scheduler.
Config is two Hitachi 80 GB SATA drives in Linux RAID-1 and two Hitachi 250 GB SATA drives in Linux RAID-0. All four drives are on a Promise FastTrak S150 TX4 (not the motherboard's SiI controller), using the kernel's Promise driver.
Other pertinent info: ASUS A7N8X-Deluxe, AMD Athlon XP 2100+, 1024 MB RAM (3 DIMMs), nVidia nForce2.
Code: | hal linux # hdparm -t /dev/sda # Hitachi 80 GB native drive
/dev/sda:
Timing buffered disk reads: 174 MB in 3.03 seconds = 57.47 MB/sec
hal linux # hdparm -t /dev/md1 # Linux RAID-1 (mirror) array of two Hitachi 80 GB drives
/dev/md1:
Timing buffered disk reads: 166 MB in 3.02 seconds = 54.94 MB/sec
hal linux # hdparm -t /dev/sdc # Hitachi 250 GB native drive
/dev/sdc:
Timing buffered disk reads: 172 MB in 3.02 seconds = 56.96 MB/sec
hal linux # hdparm -t /dev/md3 # Linux RAID-0 (striped) array of two Hitachi 250 GB drives
/dev/md3:
Timing buffered disk reads: 250 MB in 3.02 seconds = 82.85 MB/sec |
I'm quite happy with it. I have a D-Link DGE-530T Gigabit Ethernet adapter installed (and set to jumbo frame 9000-byte MTU) and Samba set to a 64 KB window size, and I can actually max out the I/O to the disk array! Crazy.
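For reference, the 64 KB Samba window size mentioned above is normally set via socket options in smb.conf. This is only a sketch of the idea, with the buffer values assumed rather than copied from the original post:
Code: |
# /etc/samba/smb.conf, [global] section
socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
|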
I'm using the GigE card as a second NIC in the server to store all of the DVR & DVD video data from the media computer attached to the TV. Quite excellent. _________________ Play The Hitchhiker's Guide to the Galaxy! |
Dr_b_ n00b
Joined: 18 Jan 2004 Posts: 33
Posted: Thu Mar 10, 2005 12:02 am Post subject: |
Can you tell us a little bit about your kernel config, or how you got your RAID working?
Thanks,
-Dr_b_ |
yottabit Guru
Joined: 11 Nov 2002 Posts: 313 Location: Columbus, Ohio, US
Posted: Thu Mar 10, 2005 12:24 am Post subject: |
Dr_b_ wrote: | Can you tell us a little bit about your kernel config, or how you got your RAID working? |
Sure, no problem. I'm pretty much running the stock 2.6.11-mm kernel available in Gentoo (~x86 keyword). I am not using a preemptible kernel (this is a server, not a workstation). I was using the default anticipatory I/O scheduler (more on this later). My RAID setup is pretty simple, using a mirror (RAID-1) for the system and striping (RAID-0) for the big video array. All disks are Hitachi SATA and I'm using a Promise S150 TX4 SATA controller with the 2.6 kernel's promise driver. I'm using the Reiser 3.6 filesystem on both arrays. Here's my /etc/raidtab:
Code: | # /boot (RAID 1)
raiddev /dev/md0
raid-level 1
nr-raid-disks 2
chunk-size 32
persistent-superblock 1
device /dev/sda1
raid-disk 0
device /dev/sdb1
raid-disk 1
# / (RAID 1)
raiddev /dev/md1
raid-level 1
nr-raid-disks 2
chunk-size 32
persistent-superblock 1
device /dev/sda3
raid-disk 0
device /dev/sdb3
raid-disk 1
# swap (RAID 1)
raiddev /dev/md2
raid-level 1
nr-raid-disks 2
chunk-size 32
persistent-superblock 1
device /dev/sda2
raid-disk 0
device /dev/sdb2
raid-disk 1
# big disk (RAID 0 striping)
raiddev /dev/md3
raid-level 0
nr-raid-disks 2
chunk-size 32
persistent-superblock 1
device /dev/sdc1
raid-disk 0
device /dev/sdd1
raid-disk 1 |
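(For anyone following along: with the old raidtools, the arrays described by that raidtab are created and started roughly as sketched below; mdadm is the more modern equivalent. The device names and parameters here simply mirror the raidtab above.)
Code: |
# raidtools style: create each array defined in /etc/raidtab, then start it
mkraid /dev/md3
raidstart /dev/md3
# mdadm equivalent for the RAID-0 video array
mdadm --create /dev/md3 --level=0 --raid-devices=2 --chunk=32 /dev/sdc1 /dev/sdd1
|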
I actually have two NICs in the system. The primary NIC is the nForce2 onboard using the 2.6 kernel's reverse-engineered driver (forcedeth) and is on the primary network space. The secondary NIC is the D-Link DGE-530T Gigabit Ethernet, using the 2.6 kernel's sk98lin driver, and on a separate network space. I enabled Jumbo Frames on the second NIC by setting the MTU to 9000 (put in /etc/conf.d/local.start for change on boot since it defaults to the standard MTU of 1500). The gigabit link is directly connected to the HTPC upstairs with crossover Cat5 UTP cable. The HTPC unfortunately uses Windows XP since there are no Linux drivers available for my ATSC digital tuner. Jumbo Frame support was enabled in Windows XP through the network driver, and I changed the MTU to 9000 with the Dr. TCP utility.
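In case it's useful, the local.start entry is just a one-liner along these lines (the interface name is an assumption; use whatever the D-Link card shows up as on your system):
Code: |
# /etc/conf.d/local.start
ifconfig eth1 mtu 9000
|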
So, now that I've bored you with my details, it must be said that I'm having some performance degradation. I've spent quite a lot of time diagnosing this, and more time is going to be spent as soon as the data finishes transferring off the array so I can try destructive testing.
I have already changed a few kernel parameters; namely, I've enabled the new deadline I/O scheduler, though it may not actually be active since I haven't booted with the "elevator=deadline" kernel option yet. At present it seems that the striped RAID-0 array is suffering from performance problems under heavy read conditions. Yes, I said read conditions, not write conditions.
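For anyone who wants to try the same thing, the scheduler can be forced at boot or, on kernels that support runtime switching, flipped per device through sysfs; the device name below is only an example:
Code: |
# At boot: append elevator=deadline to the kernel line in grub.conf
# At runtime, if your kernel supports it:
echo deadline > /sys/block/sda/queue/scheduler
cat /sys/block/sda/queue/scheduler   # lists available schedulers, current one in brackets
|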
My on-going struggle with bizarro performance is being discussed in this thread.
As soon as my data finishes transferring off the array (slowly, I might add) I'll start some more tests, including using the deadline and anticipatory schedulers, changing the stripe-size between 4k and 512k, and trying the JFS, XFS, and Reiser filesystems. I'll try to be methodical in my procedures and document the tests well. I'm going to use iozone for the performance tests, and I've thought mostly about using -aMop -i 0 -i 1 -g 2g -+u as my iozone parameters.
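For completeness, that invocation would look roughly like this; the target file path is a placeholder (-a is auto mode, -i 0 and -i 1 select the write and read tests, -g caps the file size, and -+u reports CPU utilization):
Code: |
iozone -aMop -i 0 -i 1 -g 2g -+u -f /mnt/bigarray/iozone.tmp > iozone-results.txt
|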
If you want to continue the discussion of parameters and tests and such, please do so in the above-referenced thread since this one is pretty much dedicated to hdparm statistics which are useless for my problem.
Cheers!
J _________________ Play The Hitchhiker's Guide to the Galaxy! |