Gentoo Forums
Forum Index » Kernel & Hardware

problems booting from raid [solved]
Adel Ahmed
Veteran


Joined: 21 Sep 2012
Posts: 1604

Posted: Sat Aug 17, 2013 12:38 pm    Post subject: problems booting from raid [solved]

I installed /boot on /dev/sda1, then created a level 0 RAID array from sda2 and sdb1, and I'm trying to mount that array as /. I get a kernel panic when trying to boot from the RAID partition.
here's my kernel config:
http://pastebin.com/nT3Uunwk

here's my grub.conf:

default 0
timeout 3
#splashimage=(hd0,0)/boot/grub/splash.xpm.gz

title new system /dev/sda1 grub
root (hd0,0)
kernel /boot/kernel5 root=/dev/sdb2

title raid
root (hd0,0)
#kernel /boot/kernel4 root=/dev/md127
kernel /boot/kernel5 root=/dev/disk/by-uuid/b0a27c76-c36e-4997-a0d1-077019981325
_________________________________________________________________________________________________

here's my fstab:
/dev/md127 / ext4 noatime 0 1


I've tried mounting by UUID and by device name in GRUB.
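A side note on the UUID attempt above: there are two different UUIDs in play here, and they are easy to confuse. A quick sketch of how to see both (device name taken from this thread; the output shown in comments is illustrative, not from the poster's machine):

```shell
# Filesystem UUID: what root=UUID=... or /dev/disk/by-uuid/... refers to.
# Without an initramfs the kernel cannot resolve this at boot time.
blkid /dev/md127

# Array UUID: a different value in mdadm's colon-separated format,
# used in /etc/mdadm.conf ARRAY lines, not on the kernel command line.
mdadm -D /dev/md127 | grep -i uuid
```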

here's my kernel panic:
http://www.4shared.com/photo/nyqDaXAx/IMG_20130817_142907.html

thanks

edit:
I added /dev/md127 to the fstab of my other system (where / is on a regular partition) and it mounts just fine. I suspect the kernel cannot see the RAID partition during the boot process. I'll go through my kernel config and check whether a modularized driver is causing the problem.


solved:
changed the partition type ID to fd (Linux raid autodetect) in fdisk
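The fix above can be sketched non-interactively. Partition numbers follow this thread's layout, and the printf strings feed fdisk the same keystrokes one would type by hand (t = change type, then the partition number, fd = Linux raid autodetect, w = write). A sketch only; run against the correct devices at your own risk:

```shell
# Set sda2 and sdb1 (the RAID members in this thread) to type fd
printf 't\n2\nfd\nw\n' | fdisk /dev/sda
printf 't\n1\nfd\nw\n' | fdisk /dev/sdb

# Verify: both partitions should now show "Linux raid autodetect"
fdisk -l /dev/sda /dev/sdb | grep 'raid autodetect'
```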


Last edited by Adel Ahmed on Sat Aug 17, 2013 3:32 pm; edited 1 time in total
christophe_y2k
n00b


Joined: 07 Jan 2008
Posts: 46
Location: FRANCE

Posted: Sat Aug 17, 2013 1:02 pm    Post subject: I have the same problem booting mdadm RAID 1

Hi, I get a kernel panic too when I try to boot a newer system with kernel 3.8.14 on software mdadm RAID 1 SSDs.



Did you set rootfstype= ?

# nano -w /boot/grub/grub.conf
Code:

default 0
timeout 10
splashimage=(hd0,0)/boot/grub/splash.xpm.gz

title=GENTOO By Christophe_Y2k
root (hd0,0)
kernel /boot/kernel-3.8.13 root=/dev/md3 rootfstype=ext4
Adel Ahmed
Veteran


Joined: 21 Sep 2012
Posts: 1604

Posted: Sat Aug 17, 2013 1:10 pm    Post subject:

No, it's the same error with and without that option.
umka69
Tux's lil' helper


Joined: 31 Mar 2013
Posts: 124

Posted: Sat Aug 17, 2013 2:37 pm    Post subject:

Show us your

# cat /proc/mdstat
# cat /etc/mdadm.conf
# cat /boot/grub/device.map
# fdisk -l
Adel Ahmed
Veteran


Joined: 21 Sep 2012
Posts: 1604

Posted: Sat Aug 17, 2013 2:42 pm    Post subject:

# cat /proc/mdstat
Personalities : [raid0]
md127 : active raid0 sdb1[1] sda2[0]
20970496 blocks 512k chunks

unused devices: <none>
__________________________________________________________________
# cat /etc/mdadm.conf
# mdadm configuration file
#
# mdadm will function properly without the use of a configuration file,
# but this file is useful for keeping track of arrays and member disks.
# In general, a mdadm.conf file is created, and updated, after arrays
# are created. This is the opposite behavior of /etc/raidtab which is
# created prior to array construction.
#
#
# the config file takes two types of lines:
#
# DEVICE lines specify a list of devices of where to look for
# potential member disks
#
# ARRAY lines specify information about how to identify arrays so
# that they can be activated
#
# You can have more than one DEVICE line and use wild cards. The first
# example includes the first partition of SCSI disks /dev/sdb,
# /dev/sdc, /dev/sdd, /dev/sdj, /dev/sdk, and /dev/sdl. The second
# line looks for array slices on IDE disks.
#
#DEVICE /dev/sd[bcdjkl]1
#DEVICE /dev/hda1 /dev/hdb1
#
# If you mount devfs on /dev, then a suitable way to list all devices is:
#DEVICE /dev/discs/*/*
#
#
# The AUTO line can control which arrays get assembled by auto-assembly,
# meaning either "mdadm -As" when there are no 'ARRAY' lines in this file,
# or "mdadm --incremental" when the array found is not listed in this file.
# By default, all arrays that are found are assembled.
# If you want to ignore all DDF arrays (maybe they are managed by dmraid),
# and only assemble 1.x arrays which are marked for 'this' homehost,
# but assemble all others, then use
#AUTO -ddf homehost -1.x +all
#
# ARRAY lines specify an array to assemble and a method of identification.
# Arrays can currently be identified by using a UUID, superblock minor number,
# or a listing of devices.
#
# super-minor is usually the minor number of the metadevice
# UUID is the Universally Unique Identifier for the array
# Each can be obtained using
#
# mdadm -D <md>
#
#ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371
#ARRAY /dev/md1 super-minor=1
#ARRAY /dev/md2 devices=/dev/hda1,/dev/hdb1
#
# ARRAY lines can also specify a "spare-group" for each array. mdadm --monitor
# will then move a spare between arrays in a spare-group if one array has a failed
# drive but no spare
#ARRAY /dev/md4 uuid=b23f3c6d:aec43a9f:fd65db85:369432df spare-group=group1
#ARRAY /dev/md5 uuid=19464854:03f71b1b:e0df2edd:246cc977 spare-group=group1
#
# When used in --follow (aka --monitor) mode, mdadm needs a
# mail address and/or a program. This can be given with "mailaddr"
# and "program" lines to that monitoring can be started using
# mdadm --follow --scan & echo $! > /var/run/mdadm
# If the lines are not found, mdadm will exit quietly
#MAILADDR root@mydomain.tld
#PROGRAM /usr/sbin/handle-mdadm-events
__________________________________________________________________
# cat /boot/grub/device.map
(fd0) /dev/fd0
(hd0) /dev/sda
(hd2) /dev/sdb
(hd3) /dev/sdc
__________________________________________________________________

# fdisk -l
Disk /dev/sda: 320.1 GB, 320072933376 bytes, 625142448 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x77777777

Device Boot Start End Blocks Id System
/dev/sda1 * 2048 133119 65536 83 Linux
/dev/sda2 133120 21104639 10485760 83 Linux
/dev/sda4 131781195 625153409 246686107+ 7 HPFS/NTFS/exFAT

Disk /dev/sdb: 160.0 GB, 160041885696 bytes, 312581808 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0a9ee45e

Device Boot Start End Blocks Id System
/dev/sdb1 2048 20973567 10485760 83 Linux
/dev/sdb2 20973568 60034652 19530542+ 83 Linux
/dev/sdb3 60035072 101978111 20971520 83 Linux

Disk /dev/md127: 21.5 GB, 21473787904 bytes, 41940992 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
umka69
Tux's lil' helper


Joined: 31 Mar 2013
Posts: 124

Posted: Sat Aug 17, 2013 2:59 pm    Post subject:

I have a perfect howto for you, but it is in Russian: http://xgu.ru/wiki/Программный_RAID_в_Linux/

Show me your /etc/fstab too, and I'll give you some advice.


Last edited by umka69 on Sat Aug 17, 2013 2:59 pm; edited 1 time in total
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 54809
Location: 56N 3W

Posted: Sat Aug 17, 2013 2:59 pm    Post subject:

blakdeath,

Lots of things to poke at ...

To use kernel RAID auto-assembly, your RAID set must use metadata version 0.90. That's set in the mdadm --create command; the default is version 1.2.
Changing this destroys your data.

Further, auto-assembly requires that the partitions involved be marked as type fd, not 83.
Fixing this is harmless.

The syntax for mounting root by filesystem UUID is kernel /boot/3.9.7-gentoo-ssd root=UUID=ba840a47-ca9a-4a8f-a867-9ab816c4537f
However, the kernel cannot read filesystem UUIDs without an initrd. Partition UUIDs may work, but does a RAID set have a partition ID?
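The metadata requirement described above can be sketched with mdadm. Device names and the array name follow this thread, and re-creating an existing array this way destroys its data (as warned above), so treat this as an illustration of the creation-time flag rather than a command to run on a live system:

```shell
# Create a RAID-0 array with the old 0.90 superblock, which the kernel's
# in-kernel auto-assembly (type-fd partitions, no initramfs) can detect.
mdadm --create /dev/md127 --level=0 --raid-devices=2 \
      --metadata=0.90 /dev/sda2 /dev/sdb1

# Verify the superblock version afterwards; it should report 0.90
mdadm -D /dev/md127 | grep Version
```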
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
Adel Ahmed
Veteran


Joined: 21 Sep 2012
Posts: 1604

Posted: Sat Aug 17, 2013 3:27 pm    Post subject:

here's my fstab:
/dev/md127 / ext4 noatime 0 1

I'm using metadata version 0.90:
localhost linux # mdadm -D /dev/md127
/dev/md127:
Version : 0.90

I've changed the IDs as follows:
/dev/sda1 * 2048 133119 65536 83 Linux
/dev/sda2 133120 21104639 10485760 fd Linux raid autodetect
/dev/sda4 131781195 625153409 246686107+ 7 HPFS/NTFS/exFAT


Device Boot Start End Blocks Id System
/dev/sdb1 2048 20973567 10485760 fd Linux raid autodetect
/dev/sdb2 20973568 60034652 19530542+ 83 Linux
/dev/sdb3 60035072 101978111 20971520 83 Linux


What do you mean by partition ID? Is that the volume name or label?

thanks

edit:
It's being detected now that I've changed the ID.
Thanks everyone :D