Gentoo Forums
Install problem with 2.6.3 and RAID0

 
shrub
n00b


Joined: 04 Mar 2004
Posts: 8

PostPosted: Thu Mar 04, 2004 10:46 am    Post subject: Install problem with 2.6.3 and RAID0

Hi guys,

I'm having a bit of a problem installing gentoo onto a setup including software RAID0.

After the installation via Knoppix, GRUB loads fine and then starts to boot the system until I get this error message:

md: Autodetecting RAID arrays.
md: autorun...
md: ...autorun DONE.
EXT3-fs: unable to read superblock
EXT2-fs: unable to read superblock
FAT: unable to read boot sector
VFS: Cannot open root device "md0" or unknown-block(0,0)
Please append a correct "root=" boot option
Kernel panic: VFS: Unable to mount root fs on unknown-block(0,0)


I have setup my system as follows:

raidtab:

raiddev /dev/md0
raid-level 0
nr-raid-disks 2
chunk-size 32
persistent-superblock 1
device /dev/hda2
raid-disk 0
device /dev/hdb2
raid-disk 1

fstab:

/dev/hda1 /boot ext2 noauto,noatime 1 1
/dev/md0 / ext2 noatime 0 0
/dev/hdb1 none swap defaults,pri=1 0 0
/dev/cdroms/cdrom0 /mnt/cdrom iso9660 noauto,ro 0 0
none /proc proc defaults 0 0
none /dev/shm tmpfs defaults 0 0

grub.conf:

timeout 10
default 0
fallback 1
splashimage=(hd0,0)/grub/splash.xpm.gz

# Gentoo 2.6.3-gentoo-r1
title Gentoo Linux 2.6.3-gentoo-r1
root (hd0,0)
kernel (hd0,0)/linux-2.6.3-gentoo-r1 root=/dev/md0


HELP!!!!
I am at work at the mo, but when I get home I will try changing root= to /dev/md/0 (and the same in fstab). Other users of this forum have suggested that, but not always with the desired results, which is why I am fishing for more ideas in case it doesn't work :)

I noticed another post describing exactly the same problem; the poster solved it by compiling his off-board IDE controller drivers into the kernel (which he says let the kernel see his hard drives). But I can't understand how my system could boot GRUB and then start loading the kernel if it can't access my harddrives - any ideas?

BTW, I am using an Intel ICH5R S-ATA chipset to control my drives, but have RAID functionality disabled in the BIOS. My drives are S-ATA WD Raptors (not that this should really make any difference).

Thanks in advance and sorry for the long post! :)
cyrillic
Watchman


Joined: 19 Feb 2003
Posts: 7313
Location: Groton, Massachusetts USA

PostPosted: Fri Mar 05, 2004 3:01 am    Post subject: Re: Install problem with 2.6.3 and RAID0

shrub wrote:
I can't understand how my system could boot GRUB and then start loading the kernel if it can't access my harddrives

GRUB does not depend on anything in your kernel; it uses the BIOS to read data from the harddrive.

In order for your kernel to mount the root filesystem, you do need the appropriate Intel SATA, SCSI, filesystem, and RAID drivers compiled into the kernel (not as modules), or else the kernel will panic.
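For a 2.6.3-era kernel, the relevant options would look roughly like this (a sketch based on 2.6.x option names; double-check in menuconfig, since names shift between releases):
Code:
CONFIG_SCSI=y            # SCSI support
CONFIG_BLK_DEV_SD=y      # SCSI disk support
CONFIG_SCSI_SATA=y       # Serial ATA (libata)
CONFIG_SCSI_ATA_PIIX=y   # Intel PIIX/ICH SATA low-level driver
CONFIG_BLK_DEV_MD=y      # Multiple devices driver (software RAID)
CONFIG_MD_RAID0=y        # RAID-0 personality
CONFIG_EXT2_FS=y         # the root filesystem from your fstab

One thing to watch: with the libata driver the drives show up as /dev/sdX, while the older IDE driver for the same chipset (CONFIG_BLK_DEV_PIIX) keeps them as /dev/hdX, so the device names in your raidtab have to match whichever driver you build.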
shrub
n00b


Joined: 04 Mar 2004
Posts: 8

PostPosted: Fri Mar 05, 2004 8:24 am    Post subject:

I have just been finding it hard to get my head around how GRUB can be loaded from my boot sector (and from the harddrive, for the config file, splash image, etc.) and my kernel executed from my /boot partition when there are no SATA drivers loaded for my motherboard chipset and ALL of my harddrives are connected to it.

I can happily report, though, that compiling the kernel with SCSI support (including low-level SATA ICH5 support) solved the problem of not being able to access the root partition. Would the drivers have been required if the root partition was not RAID0 (like the /boot partition), and if so, why?

The boot process now fails just after the Gentoo coloured text whizzes by, while the RAID devices are being initialised: device 0 starts initialising and the process dies. Could this be because I compiled support for the Intel PIIX RAID device into the kernel (thinking it might support the ICH5R, even though I had read that this software RAID solution would not be supported)?

Thanks again in advance :)


Just had another thought actually:

Maybe the problem lies in the fact that when the kernel autodetects my RAID0 array during boot it assigns it to /dev/md0, but in my fstab and raidtab files I refer to the array as /dev/md/0. When I get back to my PC on Monday (the wait is killing me) I will try editing these files and see what the result is. If it is still a no-go, I will chroot into my install from Knoppix and recompile the kernel without PIIX RAID support and give that a whirl.
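For the record, the chroot from Knoppix goes roughly like the install docs (assuming /dev/md0 is the root array and /dev/hda1 is /boot; you may need to modprobe the md/raid0 modules and run raidstart /dev/md0 first):
Code:
# mount /dev/md0 /mnt/gentoo
# mount /dev/hda1 /mnt/gentoo/boot
# mount -t proc none /mnt/gentoo/proc
# chroot /mnt/gentoo /bin/bash
# env-update && source /etc/profile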

Any more suggestions are more than welcome (hint, hint).
BlinkEye
Veteran


Joined: 21 Oct 2003
Posts: 1046
Location: Gentoo Forums

PostPosted: Sat Mar 06, 2004 6:50 pm    Post subject:

hmm. i'm struggling with the same (or a similar) problem: the kernel can't mount my boot device "md1", hence a kernel panic. please keep us up to date if you get any further
shrub
n00b


Joined: 04 Mar 2004
Posts: 8

PostPosted: Tue Mar 09, 2004 4:07 pm    Post subject:

Problem solved!

I simply changed the device name of my RAID array from /dev/md/0 to /dev/md0 in fstab and raidtab and everything now works fine.
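For anyone else who hits this: a quick way to see which name the kernel is actually using is to compare the running array against the device nodes (devfs-style setups create /dev/md/0, while a static /dev has /dev/md0):
Code:
# cat /proc/mdstat           # lists active arrays by md name
# ls -l /dev/md0 /dev/md/0   # shows which node actually exists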

Just need to get my NIC working properly now... ;)
BlinkEye
Veteran


Joined: 21 Oct 2003
Posts: 1046
Location: Gentoo Forums

PostPosted: Tue Mar 09, 2004 5:26 pm    Post subject:

so did i https://forums.gentoo.org/viewtopic.php?t=145852&highlight=
DancesWithWords
Guru


Joined: 29 Jun 2002
Posts: 347
Location: ottawa, canada

PostPosted: Tue Apr 13, 2004 9:04 pm    Post subject:

Just an addendum: after about a week of reading this forum and googling, I've managed to get my IDE Highpoint RAID 370/372 working, and then tried to extend my experience to getting Linux software RAID working.

What I learnt from the experience was:

1. The IDE Highpoint RAID 370/372 would not work with 2.6.5-rc1. This was stated in many posts, and if it does work, I did not find the solution.

2. I did get the IDE Highpoint RAID 370/372 to work with ck-sources 2.4.23, but only if the option was built into the kernel. I never could get it to pick up the RAID if the option was enabled as a module.

3. In order for Linux software RAID to work, reading the HOWTOs was very useful, except for one fact: to get RAID to work I had to use fdisk and change every partition in my RAID to type "fd" ("Linux raid autodetect"). I did not find this spelt out clearly in anyone's docs or HOWTOs. (Example below.)
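For reference, the fdisk session for that looks something like this (partition numbers are just an example; repeat for each member partition on each disk):
Code:
# fdisk /dev/hda
Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): fd
Command (m for help): w

Type fd is "Linux raid autodetect", which is what lets the kernel's md autodetection assemble the array at boot.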

So my configuration looks like this:
==============
RAIDTAB

raiddev /dev/md0
raid-level 0
nr-raid-disks 4
chunk-size 32
device /dev/hda3
raid-disk 0
device /dev/hdc3
raid-disk 1
device /dev/hdi3
raid-disk 2
device /dev/hdk3
raid-disk 3

================
FSTAB

/dev/hda1 /boot reiserfs noauto,notail 1 2
/dev/md0 / reiserfs notail 0 1
/dev/hda2 none swap defaults,pri=1 0 0
/dev/hdc2 none swap defaults,pri=1 0 0
/dev/hdi2 none swap defaults,pri=1 0 0
/dev/hdk2 none swap defaults,pri=1 0 0

etc...............

================

The hardware stuff....

Motherboard Epox 8K7+ w/onboard hpt raid
AMD 2200+ cpu
768Mb DDR ram
4 x 80GB Western Digital JB 8MB-cache hard drives

There are 2 drives per IDE controller, each on Master. I moved my Plextor burner and CD-ROM to a Promise 66 PCI controller card. The only issue with this setup is that there are a lot of cables in the case. Fortunately for me, I had the foresight to buy an AOpen server tower.


Oh, and my kernel is 2.6.4-rc Love-Sources.


My final statement is I know nothing about this sort of stuff; I just wanted to see if I could do it. Does this arrangement run faster than the old way, with one drive for this partition and one drive for that partition? The answer is a clear YES.

Does it provide the most performance for my hardware configuration... I don't know. Is this a dangerous arrangement without RAID1... PROBABLY. But it was fun and frustrating getting this working, and I learnt some new Linux stuff.
cyrillic
Watchman


Joined: 19 Feb 2003
Posts: 7313
Location: Groton, Massachusetts USA

PostPosted: Wed Apr 14, 2004 12:43 am    Post subject:

DancesWithWords wrote:
Does it provide the most performance for my hardware configuration... I don't know.

First of all, nice job. :D
It is good to hear people's success stories.

I'm also curious about how much performance you are getting from this setup. I have several machines with 2 harddrive RAID0 arrays, but none with 4 harddrives.

I have not had the chance to test the speed difference between:
1 harddrive per IDE channel (like your setup)
4 harddrives set up master/slave on the southbridge IDE controller /dev/hd[abcd]

I think that the faster the harddrives are, the more the PCI bus becomes a bottleneck. Removing the harddrives from the Highpoint controller (and the PCI bus) might actually be faster, even though setting up harddrives as master/slave is not ideal either...

Code:
# hdparm -tT /dev/md0
 
/dev/md0:
 Timing buffer-cache reads:   1068 MB in  2.00 seconds = 532.75 MB/sec
 Timing buffered disk reads:  342 MB in  3.00 seconds = 113.87 MB/sec
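As a rough sanity check on the bottleneck idea: plain 32-bit/33MHz PCI tops out around 133MB/sec theoretical, so four drives doing ~40MB/sec each (~160MB/sec combined) through a single PCI RAID controller would saturate the bus before the drives themselves run out of steam.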
DancesWithWords
Guru


Joined: 29 Jun 2002
Posts: 347
Location: ottawa, canada

PostPosted: Wed Apr 14, 2004 1:34 am    Post subject:

well here is what I got from hdparm

hdparm -tT /dev/md0

Timing buffer-cache reads: 1112 MB in 2.00 seconds = 555.81 MB/sec
Timing buffered disk reads: 166 MB in 3.01 seconds = 55.16 MB/sec

So what do these figures mean?

These are ATA133 drives, but this motherboard only supports ATA100, so for sure this could pose performance problems. Since I'm in an experimenting mood and I know what to do, I'll try them in the master/slave configuration to see what happens.

Now, I noticed that 2.6 supports reverse ordering if you have an extra PCI IDE controller, so I will also pick up a Promise ATA133 FastTrak controller to see what that adds to performance.

Funny, this all started because I wanted faster access times to the star catalogues I use with XEphem and Sky Charts.
cyrillic
Watchman


Joined: 19 Feb 2003
Posts: 7313
Location: Groton, Massachusetts USA

PostPosted: Wed Apr 14, 2004 1:50 am    Post subject:

DancesWithWords wrote:
So what do these figures mean?

The first line tests the kernel's cache performance (depends mainly on your CPU and RAM).
The second line tests the drive (or array) performance.

You can also test the individual drives. I think your JB drives should get close to 45MB/sec each, so the speed of the controller should not be the limiting factor.
Code:
# hdparm -tT /dev/hda /dev/hdc /dev/hdi /dev/hdk
DancesWithWords
Guru


Joined: 29 Jun 2002
Posts: 347
Location: ottawa, canada

PostPosted: Wed Apr 14, 2004 2:08 am    Post subject:

Figures for the individual drives:

bash-2.05b# hdparm -tT /dev/hda /dev/hdc /dev/hdi /dev/hdk

/dev/hda:
Timing buffer-cache reads: 1092 MB in 2.01 seconds = 543.91 MB/sec
Timing buffered disk reads: 114 MB in 3.03 seconds = 37.58 MB/sec

/dev/hdc:
Timing buffer-cache reads: 1120 MB in 2.01 seconds = 558.41 MB/sec
Timing buffered disk reads: 104 MB in 3.06 seconds = 34.04 MB/sec

/dev/hdi:
Timing buffer-cache reads: 1112 MB in 2.01 seconds = 554.14 MB/sec
Timing buffered disk reads: 134 MB in 3.03 seconds = 44.25 MB/sec

/dev/hdk:
Timing buffer-cache reads: 1108 MB in 2.00 seconds = 553.25 MB/sec
Timing buffered disk reads: 144 MB in 3.03 seconds = 47.56 MB/sec
cyrillic
Watchman


Joined: 19 Feb 2003
Posts: 7313
Location: Groton, Massachusetts USA

PostPosted: Wed Apr 14, 2004 2:11 am    Post subject:

That's interesting ...

I thought for sure /dev/hda and /dev/hdc would be the faster ones.
DancesWithWords
Guru


Joined: 29 Jun 2002
Posts: 347
Location: ottawa, canada

PostPosted: Wed Apr 14, 2004 2:38 am    Post subject:

Uhmm... you do realize that there are two IDE controllers, each running two drives as Master, and that both IDE controllers are ATA100-capable? Shouldn't they therefore have the same speed?
cyrillic
Watchman


Joined: 19 Feb 2003
Posts: 7313
Location: Groton, Massachusetts USA

PostPosted: Wed Apr 14, 2004 2:53 am    Post subject:

ATA100 is only the burst speed for the interface; I don't know of any drives that are actually that fast.

There are other settings that can affect performance. You can view and change these with hdparm (see "man hdparm" for more info).
Code:
# hdparm /dev/hda
 
/dev/hda:
 multcount    = 16 (on)
 IO_support   =  0 (default 16-bit)
 unmaskirq    =  0 (off)
 using_dma    =  1 (on)
 keepsettings =  0 (off)
 readonly     =  0 (off)
 readahead    = 256 (on)
 geometry     = 65535/16/63, sectors = 120103200, start = 0
DancesWithWords
Guru


Joined: 29 Jun 2002
Posts: 347
Location: ottawa, canada

PostPosted: Wed Apr 14, 2004 2:59 am    Post subject:

/dev/hda:
multcount = 16 (on)
IO_support = 1 (32-bit)
unmaskirq = 1 (on)
using_dma = 1 (on)
keepsettings = 0 (off)
readonly = 0 (off)
readahead = 256 (on)
geometry = 16383/255/63, sectors = 156301488, start = 0
cyrillic
Watchman


Joined: 19 Feb 2003
Posts: 7313
Location: Groton, Massachusetts USA

PostPosted: Wed Apr 14, 2004 3:17 am    Post subject:

/dev/hda and /dev/hdc may have different settings than /dev/hdi and /dev/hdk.

The most important one is "using_dma"; without that, performance will really suck. You may be able to gain some performance by playing with those settings.
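For example, to enable DMA, 32-bit I/O, IRQ unmasking, and a multcount of 16 on one drive (read "man hdparm" first; -u1 in particular can be risky on some chipsets):
Code:
# hdparm -d1 -c1 -u1 -m16 /dev/hdi

These settings do not survive a reboot, so they need to go in a startup script to stick.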
BlinkEye
Veteran


Joined: 21 Oct 2003
Posts: 1046
Location: Gentoo Forums

PostPosted: Wed Apr 14, 2004 10:52 pm    Post subject:

cyrillic wrote:
I'm also curious about how much performance you are getting from this setup. [...] I think that the faster the harddrives are, the more the PCI bus becomes a bottleneck.

well, i know this is a bit off-topic but i have a problem/question anyway: why does hdparm on a raid5 array not show significantly better reads than hdparm on a single device? my setup: 3 120GB Seagate HDs connected via SATA to RAID controllers in a software raid5 setup.

Code:
 pts/5 hdparm -tT /dev/md1 /dev/md2 /dev/md3 /dev/sda /dev/sdb /dev/sdc

/dev/md1:
 Timing buffer-cache reads:   128 MB in  0.36 seconds =354.62 MB/sec
 Timing buffered disk reads:  64 MB in  1.16 seconds = 55.13 MB/sec

/dev/md2:
 Timing buffer-cache reads:   128 MB in  0.36 seconds =351.70 MB/sec
 Timing buffered disk reads:  64 MB in  1.18 seconds = 54.43 MB/sec

/dev/md3:
 Timing buffer-cache reads:   128 MB in  0.38 seconds =340.48 MB/sec
 Timing buffered disk reads:  64 MB in  1.14 seconds = 56.00 MB/sec

/dev/sda:
 Timing buffer-cache reads:   128 MB in  0.35 seconds =362.66 MB/sec
 Timing buffered disk reads:  64 MB in  1.17 seconds = 54.71 MB/sec

/dev/sdb:
 Timing buffer-cache reads:   128 MB in  0.37 seconds =341.39 MB/sec
 Timing buffered disk reads:  64 MB in  1.18 seconds = 54.11 MB/sec

/dev/sdc:
 Timing buffer-cache reads:   128 MB in  0.36 seconds =359.61 MB/sec
 Timing buffered disk reads:  64 MB in  1.21 seconds = 52.94 MB/sec

Code:
pts/5 hdparm /dev/md1 /dev/md2 /dev/md3 /dev/sda /dev/sdb /dev/sdc

/dev/md1:
 readonly     =  0 (off)
 geometry     = 6624/2/4, sectors = 78171904, start = 0

/dev/md2:
 readonly     =  0 (off)
 geometry     = 39360/2/4, sectors = 195350016, start = 0

/dev/md3:
 readonly     =  0 (off)
 geometry     = 3712/2/4, sectors = 188249088, start = 0

/dev/sda:
 readonly     =  0 (off)
 geometry     = 14593/255/63, sectors = 234441648, start = 0

/dev/sdb:
 readonly     =  0 (off)
 geometry     = 14593/255/63, sectors = 234441648, start = 0

/dev/sdc:
 readonly     =  0 (off)
 geometry     = 14593/255/63, sectors = 234441648, start = 0
cyrillic
Watchman


Joined: 19 Feb 2003
Posts: 7313
Location: Groton, Massachusetts USA

PostPosted: Thu Apr 15, 2004 6:59 pm    Post subject:

BlinkEye wrote:
well, i know this is a bit off-topic but i have a problem/question anyway: why does hdparm on a raid5 array not show significantly better reads than hdparm on a single device? my setup: 3 120GB Seagate HDs connected via SATA to RAID controllers in a software raid5 setup.

I think the SATA drivers are still under heavy development. The 113MB/s I posted above was with 2x80GB Maxtor RAID0 on VT8237 SATA and 2.6.5-mm4. With some other 2.6 kernels I have been getting 26MB/s, and the 2.6.5_rc2 kernel doesn't work at all (the 2nd harddrive is undetected).

The low performance could also be due to the overhead (parity calculations) of RAID5. I have only played around with RAID0, so I am not sure what kind of performance to expect from RAID5.
BlinkEye
Veteran


Joined: 21 Oct 2003
Posts: 1046
Location: Gentoo Forums

PostPosted: Thu Apr 15, 2004 8:46 pm    Post subject:

hmm, thanks for the reply. i just installed mm-sources linux-2.6.5-mm6 and got the same performance as with gentoo-dev-sources. i'm not sure i should trust the hdparm results, though: an
Code:
emerge sync
or
Code:
locate -u
is so much faster than on any other machine i've set up that it must be the raid (i don't think these two commands stress the cpu enough for it to be my amd64 processor that's responsible for the speed gain).
Quote:
The low performance could also be due to the overhead (parity calculations) of RAID5. I have only played around with RAID0, so I am not sure what kind of performance to expect from RAID5.
i don't know what this is supposed to mean, but i really think a raid5 is much faster than a system without raid. unfortunately i still don't know how to prove it ... :cry:
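One crude way to measure it (a generic test, nothing specific to this setup; adjust the path): time a large sequential write and read with dd, using a file bigger than your RAM so the buffer cache doesn't flatter the numbers, then run the same test on a single-disk filesystem for comparison:
Code:
# time dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=2048
# time dd if=/mnt/raid/testfile of=/dev/null bs=1M

Unlike hdparm -t, which only reads from the start of the raw device for a few seconds, this goes through the filesystem and gives a somewhat more realistic picture.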
DancesWithWords
Guru


Joined: 29 Jun 2002
Posts: 347
Location: ottawa, canada

PostPosted: Sat Apr 17, 2004 12:37 am    Post subject:

cyrillic wrote:
/dev/hda and /dev/hdc may have different settings than /dev/hdi and /dev/hdk.

The most important one is "using_dma"; without that, performance will really suck. You may be able to gain some performance by playing with those settings.


how do I do that?
DancesWithWords
Guru


Joined: 29 Jun 2002
Posts: 347
Location: ottawa, canada

PostPosted: Sat Apr 17, 2004 12:41 am    Post subject:

Further along in my experiment.

New configuration:

IDE Controller 1, Channel 1:
Master
Slave

IDE Controller 2:
Master
Slave

Results:

/dev/md0:
Timing buffer-cache reads: 1144 MB in 2.01 seconds = 570.38 MB/sec
Timing buffered disk reads: 192 MB in 3.02 seconds = 63.52 MB/sec
DancesWithWords
Guru


Joined: 29 Jun 2002
Posts: 347
Location: ottawa, canada

PostPosted: Sat Apr 17, 2004 1:03 am    Post subject:

Last experiment: 4 drives on one controller.

master
slave
master
slave

bash-2.05b# hdparm -tT /dev/md0

/dev/md0:
Timing buffer-cache reads: 1096 MB in 2.00 seconds = 547.54 MB/sec
Timing buffered disk reads: 148 MB in 3.03 seconds = 48.80 MB/sec