ipic Guru
Joined: 29 Dec 2003    Posts: 448    Location: UK
Posted: Tue Feb 25, 2025 2:53 pm    Post subject: mdadm 4.3: Flags devname as 'Not POSIX compatible'
I have just noticed this:
Code: |
~ # mdadm --version
mdadm - v4.3 - 2024-02-15
~ # mdadm --examine --scan
mdadm: Value "/dev/md202" cannot be set as devname. Reason: Not POSIX compatible. Value ignored.
mdadm: Value "/dev/md203" cannot be set as devname. Reason: Not POSIX compatible. Value ignored.
mdadm: Value "/dev/md205" cannot be set as devname. Reason: Not POSIX compatible. Value ignored.
mdadm: Value "/dev/md206" cannot be set as devname. Reason: Not POSIX compatible. Value ignored.
mdadm: Value "/dev/md207" cannot be set as devname. Reason: Not POSIX compatible. Value ignored.
ARRAY /dev/md/202 metadata=1.2 UUID=b451683b:0d6ad0d0:6e5f8704:e007ea15
ARRAY /dev/md/203 metadata=1.2 UUID=1f3a0f0a:624e5baf:baf5c8dc:107c674d
ARRAY /dev/md/205 metadata=1.2 UUID=76eff651:7ae187c3:e31a3ba1:7cdcd3d5
ARRAY /dev/md/206 metadata=1.2 UUID=d0c923fd:d2afe75a:32683a68:a46b124a
ARRAY /dev/md/207 metadata=1.2 UUID=3b957d86:eb1ac37d:de6cbf14:08378022
ARRAY /dev/md1 UUID=bc860a4a:ae7fc481:4f6c8a54:b8f64bb7
ARRAY /dev/md/102 metadata=1.2 UUID=8abcd7aa:273adce2:02593b74:4b05a3f1
ARRAY /dev/md/103 metadata=1.2 UUID=b2272173:6474c530:843b2d61:5ec1bb18
ARRAY /dev/md/105 metadata=1.2 UUID=80b433a0:0801d37b:7264dcc4:cc5767d7
ARRAY /dev/md/106 metadata=1.2 UUID=49be6ae8:0184108b:09827fd0:3d4a1feb
|
After digging a bit I found this short thread: https://lore.kernel.org/all/ZeXKYbxagk7SD0UH@metamorpher.de/T/
In the thread, it is stated that device numbers greater than 127 are flagged in this way.
The suggestion is to use named arrays in /etc/mdadm.conf instead of numbered device names.
I changed my /etc/mdadm.conf as follows:
Code: |
Added this line:
CREATE names=yes
Changed all the ARRAY entries from this style:
ARRAY /dev/md202 UUID=b451683b:0d6ad0d0:6e5f8704:e007ea15
To this style:
ARRAY /dev/md/md_202 UUID=b451683b:0d6ad0d0:6e5f8704:e007ea15
|
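For clarity, the resulting file looks roughly like this (a minimal sketch: the DEVICE line is an assumption, only the first two arrays are shown, and the UUIDs come from the scan output above):
Code: |
# /etc/mdadm.conf (sketch)
DEVICE partitions
CREATE names=yes
ARRAY /dev/md/md_202 UUID=b451683b:0d6ad0d0:6e5f8704:e007ea15
ARRAY /dev/md/md_203 UUID=1f3a0f0a:624e5baf:baf5c8dc:107c674d
# ... remaining ARRAY entries follow the same pattern
|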
Rebuilt the initrd (I use dracut, which assembles the RAID arrays at boot), and rebooted.
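For reference, the rebuild step was along these lines (a sketch; the exact dracut options and initramfs path depend on the setup):
Code: |
# regenerate the initramfs for the running kernel (output path is an assumption)
dracut --force /boot/initramfs-$(uname -r).img $(uname -r)
|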
After that, I see the following:
Code: |
~ # mdadm --examine --scan
ARRAY /dev/md/202 metadata=1.2 UUID=b451683b:0d6ad0d0:6e5f8704:e007ea15
ARRAY /dev/md/203 metadata=1.2 UUID=1f3a0f0a:624e5baf:baf5c8dc:107c674d
ARRAY /dev/md/205 metadata=1.2 UUID=76eff651:7ae187c3:e31a3ba1:7cdcd3d5
ARRAY /dev/md/206 metadata=1.2 UUID=d0c923fd:d2afe75a:32683a68:a46b124a
ARRAY /dev/md/207 metadata=1.2 UUID=3b957d86:eb1ac37d:de6cbf14:08378022
ARRAY /dev/md1 UUID=bc860a4a:ae7fc481:4f6c8a54:b8f64bb7
ARRAY /dev/md/102 metadata=1.2 UUID=8abcd7aa:273adce2:02593b74:4b05a3f1
ARRAY /dev/md/103 metadata=1.2 UUID=b2272173:6474c530:843b2d61:5ec1bb18
ARRAY /dev/md/105 metadata=1.2 UUID=80b433a0:0801d37b:7264dcc4:cc5767d7
ARRAY /dev/md/106 metadata=1.2 UUID=49be6ae8:0184108b:09827fd0:3d4a1feb
~ # ls -lh /dev/md*
brw-rw---- 1 root disk 9, 1 Feb 25 14:09 /dev/md1
brw-rw---- 1 root disk 9, 102 Feb 25 14:08 /dev/md102
brw-rw---- 1 root disk 9, 103 Feb 25 14:08 /dev/md103
brw-rw---- 1 root disk 9, 105 Feb 25 14:08 /dev/md105
brw-rw---- 1 root disk 9, 106 Feb 25 14:08 /dev/md106
brw-rw---- 1 root disk 9, 202 Feb 25 14:08 /dev/md202
brw-rw---- 1 root disk 9, 203 Feb 25 14:08 /dev/md203
brw-rw---- 1 root disk 9, 205 Feb 25 14:08 /dev/md205
brw-rw---- 1 root disk 9, 206 Feb 25 14:08 /dev/md206
brw-rw---- 1 root disk 9, 207 Feb 25 14:08 /dev/md207
/dev/md:
total 0
lrwxrwxrwx 1 root root 8 Feb 25 14:08 md_102 -> ../md102
lrwxrwxrwx 1 root root 8 Feb 25 14:08 md_103 -> ../md103
lrwxrwxrwx 1 root root 8 Feb 25 14:08 md_105 -> ../md105
lrwxrwxrwx 1 root root 8 Feb 25 14:08 md_106 -> ../md106
lrwxrwxrwx 1 root root 8 Feb 25 14:08 md_202 -> ../md202
lrwxrwxrwx 1 root root 8 Feb 25 14:08 md_203 -> ../md203
lrwxrwxrwx 1 root root 8 Feb 25 14:08 md_205 -> ../md205
lrwxrwxrwx 1 root root 8 Feb 25 14:08 md_206 -> ../md206
lrwxrwxrwx 1 root root 8 Feb 25 14:08 md_207 -> ../md207
~ # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md106 : active raid1 sdc6[2] sdd6[5]
470910976 blocks super 1.2 [2/2] [UU]
bitmap: 0/4 pages [0KB], 65536KB chunk
md102 : active raid1 sdd2[5] sdc2[2]
543287296 blocks super 1.2 [2/2] [UU]
bitmap: 1/5 pages [4KB], 65536KB chunk
md103 : active raid1 sdd3[5] sdc3[2]
481959936 blocks super 1.2 [2/2] [UU]
bitmap: 0/4 pages [0KB], 65536KB chunk
md206 : active raid1 sdb6[4] sda6[3]
528643072 blocks super 1.2 [2/2] [UU]
bitmap: 0/4 pages [0KB], 65536KB chunk
md105 : active raid1 sdd5[5] sdc5[2]
450427904 blocks super 1.2 [2/2] [UU]
bitmap: 0/4 pages [0KB], 65536KB chunk
md205 : active raid1 sdb5[4] sda5[3]
490923008 blocks super 1.2 [2/2] [UU]
bitmap: 0/4 pages [0KB], 65536KB chunk
md207 : active raid1 sdb7[0] sda7[1]
516250624 blocks super 1.2 [2/2] [UU]
bitmap: 0/4 pages [0KB], 65536KB chunk
md203 : active raid1 sdb3[4] sda3[3]
479281152 blocks super 1.2 [2/2] [UU]
bitmap: 2/4 pages [8KB], 65536KB chunk
md202 : active raid1 sdb2[4] sda2[3]
447740928 blocks super 1.2 [2/2] [UU]
bitmap: 2/4 pages [8KB], 65536KB chunk
md1 : active raid1 sdb1[2] sdd1[0] sdc1[3] sda1[1]
6101952 blocks [4/4] [UUUU]
unused devices: <none>
|
So the 'problem' is gone ('problem' in quotes, since everything was working before; only the error message was annoying).
I remain puzzled as to why the devices reported by `mdadm --examine --scan` remain unchanged.
It's almost as if something is just removing the underscore to form the /dev name:
Code: |
~ # ls -lh /dev/md/md_102
lrwxrwxrwx 1 root root 8 Feb 25 14:08 /dev/md/md_102 -> ../md102
~ # mdadm -D /dev/md/md_102
/dev/md/md_102:
Version : 1.2
Creation Time : Sat Apr 11 18:40:35 2020
Raid Level : raid1
Array Size : 543287296 (518.12 GiB 556.33 GB)
Used Dev Size : 543287296 (518.12 GiB 556.33 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Tue Feb 25 14:51:26 2025
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : bitmap
Name : ian2:102 (local to host ian2)
UUID : 8abcd7aa:273adce2:02593b74:4b05a3f1
Events : 127540
Number Major Minor RaidDevice State
5 8 50 0 active sync /dev/sdd2
2 8 34 1 active sync /dev/sdc2
|
ipic Guru
Joined: 29 Dec 2003    Posts: 448    Location: UK
Posted: Tue Feb 25, 2025 3:04 pm    Post subject:
OK. I'm a bit slow.
When the RAID device was created, I gave it a name of the form /dev/md102.
I'm assuming that number was recorded in the metadata (the Name field in the --detail output above shows ian2:102), and hence the array name reported by the scan stays the same.
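The name stored in the superblock can be checked directly on a member device, and mdadm can also rewrite it at assembly time if the on-disk name should match the config (a sketch; the array and member devices are taken from the output above, and the array must be stopped and not in use first):
Code: |
# show the name recorded in the superblock of a member device
mdadm --examine /dev/sdc2 | grep -i name

# optionally rewrite the stored name (version-1 superblocks only)
mdadm --stop /dev/md102
mdadm --assemble /dev/md102 --update=name --name=md_102 /dev/sdc2 /dev/sdd2
|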
grknight Retired Dev
Joined: 20 Feb 2015    Posts: 2033
Posted: Tue Feb 25, 2025 4:24 pm    Post subject:
They changed this limit from 127 to 1024 in mdadm-4.4 for better compatibility.