DingbatCA Guru


Posted: Wed Feb 08, 2006 6:23 am
Post subject: New kernel and the raid breaks 2.6.11-r9 --> 2.6.15-r1
This one is a bit fun: a 7-disk RAID 5 array, set up in a 6+1 config.
The current kernel is 2.6.11-gentoo-r9, and I have compiled 2.6.15-gentoo-r1 with the same settings.
The only setting I added was support for RAID 6.
When I boot under the 2.6.11 kernel everything is happy and my RAID is running great. When I boot under 2.6.15 the RAID does not come up. Trying a manual start gives:
Code: |
whitequeen ~ # mdadm -R /dev/md0
mdadm: failed to run array /dev/md0: Input/output error
|
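When `mdadm -R` fails with an I/O error like this, it usually means too few members are bound to the array to start it. A first step is to check what superblock each disk carries under the new kernel with `mdadm --examine`. A minimal sketch, printed as a dry run; the partition names here are assumptions, not my actual devices:

```shell
# Hedged sketch: print an --examine command per suspected member partition.
# The device names under the new kernel are assumptions; substitute the
# partitions that actually hold the superblocks.
for dev in /dev/hde1 /dev/hdf1 /dev/hdg1 /dev/hdi1 /dev/hdk1 /dev/hdm1 /dev/hdo1; do
    echo "mdadm --examine $dev"   # would print the superblock this kernel sees
done
```

Comparing the `--examine` output from both kernels would show whether the disks themselves are visible and whether their superblocks still agree.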
Strange, so let's take a look at the details:
Code: |
whitequeen ~ # mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Sat Mar 26 04:14:46 2005
Raid Level : raid5
Device Size : 195318144 (186.27 GiB 200.01 GB)
Raid Devices : 6
Total Devices : 1
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Tue Feb 7 21:56:36 2006
State : active, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 32K
UUID : 318a7e8c:9d6c4695:b52f4f9e:931569a9
Events : 0.4045747
Number Major Minor RaidDevice State
0 0 0 - removed
1 0 0 - removed
2 0 0 - removed
3 0 0 - removed
4 34 1 4 active sync /dev/hdg1
5 0 0 - removed
|
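With only /dev/hdg1 bound, autodetect under the new kernel found just one of the six members. One thing worth trying is stopping the half-assembled array and assembling it with an explicit device list. A hedged sketch, echoed as a dry run since the member names under 2.6.15 are assumptions here:

```shell
# Hedged dry run: echo the commands rather than running them, since the
# member device names under the new kernel are assumptions.
MD=/dev/md0
MEMBERS="/dev/hde1 /dev/hdf1 /dev/hdg1 /dev/hdi1 /dev/hdk1 /dev/hdm1 /dev/hdo1"
echo "mdadm --stop $MD"              # clear the half-assembled array first
echo "mdadm --assemble $MD $MEMBERS" # assemble from an explicit list
```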
So something is VERY wrong... Reboot into the 2.6.11 kernel, and all is good and happy. What the heck!
The good and happy world:
Code: |
whitequeen ~ # df -h
Filesystem Size Used Avail Use% Mounted on
/dev/hda3 114G 6.8G 107G 6% /
/dev/md0 932G 396G 536G 43% /data
whitequeen ~ # mdadm -D /dev/md0
/dev/md0:
Version : 00.90.01
Creation Time : Sat Mar 26 04:14:46 2005
Raid Level : raid5
Array Size : 976590720 (931.35 GiB 1000.03 GB)
Device Size : 195318144 (186.27 GiB 200.01 GB)
Raid Devices : 6
Total Devices : 7
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Tue Feb 7 22:04:19 2006
State : clean
Active Devices : 6
Working Devices : 7
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 32K
UUID : 318a7e8c:9d6c4695:b52f4f9e:931569a9
Events : 0.4045751
Number Major Minor RaidDevice State
0 88 1 0 active sync /dev/ide/host6/bus0/target0/lun0/part1
1 57 1 1 active sync /dev/ide/host4/bus1/target0/lun0/part1
2 91 1 2 active sync /dev/ide/host8/bus1/target0/lun0/part1
3 89 1 3 active sync /dev/ide/host6/bus1/target0/lun0/part1
4 34 1 4 active sync /dev/ide/host2/bus1/target0/lun0/part1
5 90 1 5 active sync /dev/ide/host8/bus0/target0/lun0/part1
6 56 1 - spare /dev/ide/host4/bus0/target0/lun0/part1
|
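The working output shows devfs-style names (/dev/ide/host6/...), so one thing worth ruling out is that the two kernels simply name the disks differently. Assembling by the array UUID from the `--detail` output sidesteps device naming entirely. A minimal sketch of an /etc/mdadm.conf entry (the filename here is an example, not my live config):

```shell
# Hedged sketch: pin the array by UUID so assembly does not depend on how
# the kernel names the member disks. UUID copied from the --detail output.
cat > mdadm.conf.example <<'EOF'
DEVICE partitions
ARRAY /dev/md0 UUID=318a7e8c:9d6c4695:b52f4f9e:931569a9
EOF
cat mdadm.conf.example
```

With an entry like that in /etc/mdadm.conf, `mdadm --assemble --scan` would scan every partition for a superblock with the matching UUID, whatever the device happens to be called.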
Under 2.6.15 I can see that the kernel is loading RAID support...
Code: |
md: raid5 personality registered as nr 4
raid5: automatically using best checksumming function: pIII_sse
pIII_sse : 2014.000 MB/sec
raid5: using function: pIII_sse (2014.000 MB/sec)
raid6: int32x1 333 MB/s
raid6: int32x2 396 MB/s
raid6: int32x4 280 MB/s
raid6: int32x8 276 MB/s
raid6: mmxx1 1002 MB/s
raid6: mmxx2 1201 MB/s
raid6: sse1x1 973 MB/s
raid6: sse1x2 1209 MB/s
raid6: using algorithm sse1x2 (1209 MB/s)
md: raid6 personality registered as nr 8
md: md driver 0.90.3 MAX_MD_DEVS=256, MD_SB_DISKS=27
md: bitmap version 4.39
|
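Those registration lines only show that the raid5/raid6 personalities loaded; kernel autodetect is a separate, later step, and it only considers partitions of type 0xfd (Linux raid autodetect). Grepping the 2.6.15 boot log for the md autodetect phase would show which partitions that kernel actually considered. A sketch, run here against stand-in data since the real log isn't in the post:

```shell
# Hedged sketch: grep a saved boot log for the md autodetect phase.
# The log file and its two lines are stand-in data for illustration only.
LOG=dmesg-2.6.15.example
printf '%s\n' \
    "md: Autodetecting RAID arrays." \
    "md: considering hdg1 ..." > "$LOG"
grep '^md:' "$LOG"   # real usage: dmesg | grep '^md:'
```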
So what am I missing? Any ideas?