Gentoo Forums :: Kernel & Hardware

kernel >6.11.x device-mapper alignment inconsistency
sdauth
l33t


Joined: 19 Sep 2018
Posts: 650
Location: Ásgarðr

PostPosted: Thu Oct 17, 2024 3:19 am    Post subject: kernel >6.11.x device-mapper alignment inconsistency

Hello,
I upgraded my kernel from 6.6.52 to 6.11.3. Everything is fine, but this shows up in dmesg:

Code:
[   71.761899] device-mapper: table: 253:1: adding target device sde1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[   71.761914] device-mapper: table: 253:1: adding target device sde1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[   71.762139] device-mapper: table: 253:1: adding target device sde1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[   71.762145] device-mapper: table: 253:1: adding target device sde1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  106.904936] device-mapper: table: 253:2: adding target device sdb1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  106.904951] device-mapper: table: 253:2: adding target device sdb1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  106.905203] device-mapper: table: 253:2: adding target device sdb1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  106.905208] device-mapper: table: 253:2: adding target device sdb1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  131.785762] device-mapper: table: 253:3: adding target device sdc1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  131.785777] device-mapper: table: 253:3: adding target device sdc1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  131.786028] device-mapper: table: 253:3: adding target device sdc1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  131.786034] device-mapper: table: 253:3: adding target device sdc1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  165.961267] device-mapper: table: 253:4: adding target device sdd1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  165.961281] device-mapper: table: 253:4: adding target device sdd1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  165.961549] device-mapper: table: 253:4: adding target device sdd1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  165.961555] device-mapper: table: 253:4: adding target device sdd1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216


The disks are mounted correctly (btrfs on all of them), but there was no such warning with kernel 6.6.

Code:
NAME         ALIGNMENT MIN-IO   OPT-IO PHY-SEC LOG-SEC ROTA SCHED RQ-SIZE    RA WSAME
sdb                  0   4096 16776704    4096     512    1 bfq       256 32764    0B
└─sdb1               0   4096 16776704    4096     512    1 bfq       256 32764    0B
  └─hdd2            -1   4096        0    4096     512    1               32764    0B
sdc                  0   4096 16776704    4096     512    1 bfq       256 32764    0B
└─sdc1               0   4096 16776704    4096     512    1 bfq       256 32764    0B
  └─hdd3            -1   4096        0    4096     512    1               32764    0B
sdd                  0   4096 16776704    4096     512    1 bfq       256 32764    0B
└─sdd1               0   4096 16776704    4096     512    1 bfq       256 32764    0B
  └─hdd4            -1   4096        0    4096     512    1               32764    0B
sde                  0   4096 16776704    4096     512    1 bfq       256 32764    0B
└─sde1               0   4096 16776704    4096     512    1 bfq       256 32764    0B
  └─hdd             -1   4096        0    4096     512    1               32764    0B


The verify option of gdisk says:
Code:
Warning: There is a gap between the main partition table (ending sector 33)
and the first usable sector (2048). This is helpful in some exotic configurations,
but is unusual. The util-linux fdisk program often creates disks like this.
Using 'j' on the experts' menu can adjust this gap.

Caution: Partition 1 doesn't end on a 2048-sector boundary. This may
result in problems with some disk encryption tools.


In order to fix that, I would have to reformat them... wouldn't I?
Or maybe I can just ignore it?
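
For reference, the values lsblk shows here (and the alignment offset) can also be read directly with blockdev; a quick check, using sdb1 only as an example device:

Code:
# blockdev --getalignoff --getiomin --getioopt --getpbsz --getss /dev/sdb1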

Thanks


Last edited by sdauth on Sun Nov 10, 2024 4:35 pm; edited 1 time in total
sdauth
l33t


Joined: 19 Sep 2018
Posts: 650
Location: Ásgarðr

PostPosted: Fri Oct 18, 2024 3:54 pm    Post subject:

I think I found the "issue"...
Initially, the four disks (sdb, sdc, sdd, sde) were each in a USB enclosure (and were partitioned over USB with fdisk).
I then recently moved them into my server (SATA), with no modification to the partition tables or anything; I only added the UUIDs & mountpoints to my fstab.

I think partitioning them over USB caused the issue, although I can't explain why (I remember a similar issue with a FireWire HDD dock years ago).
I made a test with the same kernel (6.11.3) on another machine, with one HDD showing a similar gdisk warning:

Code:
Warning: There is a gap between the main partition table (ending sector 33)
and the first usable sector (2048). This is helpful in some exotic configurations,
but is unusual. The util-linux fdisk program often creates disks like this.
Using 'j' on the experts' menu can adjust this gap.

Caution: Partition 1 doesn't end on a 2048-sector boundary. This may
result in problems with some disk encryption tools.

No problems found. 0 free sectors (0 bytes) available in 0
segments, the largest of which is 0 (0 bytes) in size.


In this case I also used fdisk (a long time ago), but directly over SATA (it is a laptop).
With kernel 6.11.3, there is no "device-mapper alignment inconsistency" warning.
Code:
[    5.345888] device-mapper: uevent: version 1.0.3
[    5.346197] device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev


So even though I could just ignore it, I guess that if I recreate the partition table on each disk, it should clear the warning... stay tuned (I will update this thread once I've moved the data) :o

EDIT: Unfortunately, it didn't help. I'm still getting the same warning.

I noticed that the lsblk -t output differs between kernels 6.6.52 & 6.11.3.
With 6.11.3 I get 16776704 in the OPT-IO column for each disk.
With 6.6.52, it is 0.

The fdisk output is different as well; each disk reports:
Code:
I/O size (minimum/optimal): 4096 bytes / 16776704 bytes


instead of (with kernel 6.6.52)
Code:
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


I have no idea how to solve this...
Anyway, it looks like recreating the partition table was totally useless :? Well, I will revert to the 6.6.x series for now.
eccerr0r
Watchman


Joined: 01 Jul 2004
Posts: 9824
Location: almost Mile High in the USA

PostPosted: Sat Oct 19, 2024 8:52 pm    Post subject:

I don't know much about btrfs and I don't think this is relevant, but I recently learned that the RPi5 can use a 16K page size instead of the typical 4K, which incidentally is the perfect size for x86 and these AF hard drives. However, that has nothing to do with the 16776704, which is exactly 2^24 - 512 bytes, or (2^15 - 1) 512-byte blocks... which is indeed a weird number. I think it may be a regression in the 6.11 kernel, but I'm unsure; this size is weird.
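
For what it's worth, the arithmetic checks out (plain shell arithmetic, nothing device-specific):
Code:
$ echo $(( 2**24 - 512 ))
16776704
$ echo $(( 16776704 / 512 ))
32767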

The first time I read this post I was kind of surprised at the gdisk warning: on MBR disks there is normally a gap from the start of the disk to the first usable data sector - mainly for the MBR, and also to hide boot code like grub or another bootloader on classic-boot machines - and why gdisk would complain about that I don't know. Perhaps it only looks at GPT disks. This warning is also perhaps a red herring with respect to the kernel issue.

As for the 2048-sector boundary for encryption tools, I'd call that an encryption tool bug, but it's reasonable if the block cipher handles 2048 sectors (1MB, assuming 512-byte sectors) at a time. I think LVM also has a similar minimum chunk size, but it will automatically round down.

Hope you have backups (done under 6.6) ... just in case. Is the array slower under 6.11? 32767 as an optimal read count is both immense and weird.
_________________
Intel Core i7 2700K/Radeon R7 250/24GB DDR3/256GB SSD
What am I supposed watching?
sdauth
l33t


Joined: 19 Sep 2018
Posts: 650
Location: Ásgarðr

PostPosted: Sun Oct 20, 2024 4:02 am    Post subject:

eccerr0r, with kernel 6.11.3, just before trying to recreate the partition tables (disks fully blank), this was the "lsblk -t" output:
Code:
NAME         ALIGNMENT MIN-IO   OPT-IO PHY-SEC LOG-SEC ROTA SCHED RQ-SIZE    RA WSAME
sdb                  0   4096 16776704    4096     512    1 bfq       256 32764    0B
sdc                  0   4096 16776704    4096     512    1 bfq       256 32764    0B
sdd                  0   4096 16776704    4096     512    1 bfq       256 32764    0B
sde                  0   4096 16776704    4096     512    1 bfq       256 32764    0B


So no partition table, no file system yet. Funny values indeed.
I then rebooted to 6.6.52:

Code:
NAME         ALIGNMENT MIN-IO OPT-IO PHY-SEC LOG-SEC ROTA SCHED RQ-SIZE  RA WSAME
sdb                  0   4096      0    4096     512    1 bfq       256 128    0B
sdc                  0   4096      0    4096     512    1 bfq       256 128    0B
sdd                  0   4096      0    4096     512    1 bfq       256 128    0B
sde                  0   4096      0    4096     512    1 bfq       256 128    0B


And as you can see, no problem here. So I recreated the partition tables, crypto and filesystems, moved the data back from my backup, and everything works as usual.

By the way, those disks are connected to a RAID controller (a PERC H310 in IT mode, using mpt3sas), but there is no array involved here.
One other thing I noticed with kernel 6.11.3 is that if I plug sdb (for example) into a SATA port on the motherboard instead, then the lsblk -t values are correct.
Code:
NAME         ALIGNMENT MIN-IO OPT-IO PHY-SEC LOG-SEC ROTA SCHED RQ-SIZE  RA WSAME
sdb                  0   4096      0    4096     512    1 bfq       256 128    0B


So with 6.11.x kernels and disks connected to the RAID card, something changes optimal_io_size (& RA), with or without a partition table or filesystem.
I can also reproduce the issue on another machine (different motherboard and CPU but the same RAID controller, with 8 disks connected): as soon as I boot a 6.11.x kernel, it shows the same funny values in "lsblk -t". If I reboot to 6.6.52, everything is back to normal.
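
In case it is useful for a report, the limits the kernel derives can also be read straight from sysfs; a quick check, assuming sdb is one of the HBA-attached disks (the values should match what lsblk -t prints above):
Code:
# cat /sys/block/sdb/queue/minimum_io_size
4096
# cat /sys/block/sdb/queue/optimal_io_size
16776704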

I will try again when the next LTS kernel (6.12) is out.
sdauth
l33t


Joined: 19 Sep 2018
Posts: 650
Location: Ásgarðr

PostPosted: Sun Nov 10, 2024 4:43 pm    Post subject:

New try with 6.12-rc6, same issue!
dmesg:
Code:
[   78.592421] device-mapper: table: 254:1: adding target device sde1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[   78.592446] device-mapper: table: 254:1: adding target device sde1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[   78.592672] device-mapper: table: 254:1: adding target device sde1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[   78.592678] device-mapper: table: 254:1: adding target device sde1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  105.756961] device-mapper: table: 254:2: adding target device sdd1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  105.756975] device-mapper: table: 254:2: adding target device sdd1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  105.757200] device-mapper: table: 254:2: adding target device sdd1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  105.757205] device-mapper: table: 254:2: adding target device sdd1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  132.900466] device-mapper: table: 254:3: adding target device sdc1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  132.900481] device-mapper: table: 254:3: adding target device sdc1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  132.900707] device-mapper: table: 254:3: adding target device sdc1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  132.900713] device-mapper: table: 254:3: adding target device sdc1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  160.034203] device-mapper: table: 254:4: adding target device sdb1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  160.034217] device-mapper: table: 254:4: adding target device sdb1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  160.034494] device-mapper: table: 254:4: adding target device sdb1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  160.034499] device-mapper: table: 254:4: adding target device sdb1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216


lsblk -t (here using mq-deadline, but the output is the same with bfq)
Code:
NAME         ALIGNMENT MIN-IO   OPT-IO PHY-SEC LOG-SEC ROTA SCHED       RQ-SIZE    RA WSAME
sda                  0    512 16776704     512     512    1 mq-deadline     256 32764    0B
└─sda1               0    512 16776704     512     512    1 mq-deadline     256 32764    0B
  └─data2            0    512 16776704     512     512    1                     32764    0B
sdb                  0   4096 16776704    4096     512    1 mq-deadline     256 32764    0B
└─sdb1               0   4096 16776704    4096     512    1 mq-deadline     256 32764    0B
  └─hdd4            -1   4096        0    4096    4096    1                     32764    0B
sdc                  0   4096 16776704    4096     512    1 mq-deadline     256 32764    0B
└─sdc1               0   4096 16776704    4096     512    1 mq-deadline     256 32764    0B
  └─hdd3            -1   4096        0    4096    4096    1                     32764    0B
sdd                  0   4096 16776704    4096     512    1 mq-deadline     256 32764    0B
└─sdd1               0   4096 16776704    4096     512    1 mq-deadline     256 32764    0B
  └─hdd2            -1   4096        0    4096    4096    1                     32764    0B
sde                  0   4096 16776704    4096     512    1 mq-deadline     256 32764    0B
└─sde1               0   4096 16776704    4096     512    1 mq-deadline     256 32764    0B
  └─hdd             -1   4096        0    4096    4096    1                     32764    0B
sdf                  0    512        0     512     512    0 mq-deadline       2   128    0B
└─sdf1               0    512        0     512     512    0 mq-deadline       2   128    0B
  └─enc_root         0    512        0     512     512    0                       128    0B


With the exception of sdf (root), which is connected to a SATA port on the motherboard and doesn't report funny numbers, all the other drives (sda, sdb, sdc, sdd, sde) are connected to the SAS HBA and report funny numbers.

And then again, if I reboot to 6.6.58-r1 there is no issue, as you can see below, and no device-mapper warning. So what's going on?

lsblk -t
Code:
NAME         ALIGNMENT MIN-IO OPT-IO PHY-SEC LOG-SEC ROTA SCHED RQ-SIZE  RA WSAME
sda                  0    512      0     512     512    1 bfq       256 128    0B
└─sda1               0    512      0     512     512    1 bfq       256 128    0B
  └─data2            0    512      0     512     512    1               128    0B
sdb                  0   4096      0    4096     512    1 bfq       256 128    0B
└─sdb1               0   4096      0    4096     512    1 bfq       256 128    0B
  └─hdd4             0   4096      0    4096    4096    1               128    0B
sdc                  0   4096      0    4096     512    1 bfq       256 128    0B
└─sdc1               0   4096      0    4096     512    1 bfq       256 128    0B
  └─hdd2             0   4096      0    4096    4096    1               128    0B
sdd                  0   4096      0    4096     512    1 bfq       256 128    0B
└─sdd1               0   4096      0    4096     512    1 bfq       256 128    0B
  └─hdd              0   4096      0    4096    4096    1               128    0B
sde                  0   4096      0    4096     512    1 bfq       256 128    0B
└─sde1               0   4096      0    4096     512    1 bfq       256 128    0B
  └─hdd3             0   4096      0    4096    4096    1               128    0B
sdf                  0    512      0     512     512    0 bfq         2 128    0B
└─sdf1               0    512      0     512     512    0 bfq         2 128    0B
  └─enc_root         0    512      0     512     512    0               128    0B


Any idea? It looks like I'm going to be stuck with 6.6; I have no idea how to solve this.

Thanks for your help.
Hu
Administrator


Joined: 06 Mar 2007
Posts: 22642

PostPosted: Sun Nov 10, 2024 5:41 pm    Post subject:

If this changes behavior based on kernel version, and the newer version is broken, then this may be a kernel regression. The question is whether it is a necessary and intentional regression caused by fixing some much more serious problem elsewhere, or a negligent regression. If the latter, you could argue fairly readily for it to be reverted. If the former, there may be more pushback, since the developers would be reluctant to reintroduce the more serious problem.

Can you narrow this down more precisely than v6.10 vs v6.11.3? I see you wrote that OPT-IO is 0 in the good kernel and non-zero in the bad kernel. That suggests to me that someone improved the ability to report OPT-IO, but that this improvement may have exposed a problem elsewhere.
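
If you have a kernel git tree around, git bisect should be able to narrow it down to a single commit; roughly, assuming you can build and boot each candidate:
Code:
$ git bisect start
$ git bisect bad v6.11
$ git bisect good v6.10
# build & boot the checked-out kernel, then record the result:
$ git bisect good    # or: git bisect bad
# repeat until git names the first bad commit, then:
$ git bisect reset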
sdauth
l33t


Joined: 19 Sep 2018
Posts: 650
Location: Ásgarðr

PostPosted: Sun Nov 10, 2024 5:58 pm    Post subject:

Hu wrote:
Can you narrow this down more precisely than v6.10 vs v6.11.3?

Yes, I've thought about that. I can try 6.10 to see whether the issue was already present or not (and then the latest of the EOL 6.7.x, 6.8.x and 6.9.x series).
It's going to take a while on this slow CPU, so I'll update this thread later with my findings.
sdauth
l33t


Joined: 19 Sep 2018
Posts: 650
Location: Ásgarðr

PostPosted: Mon Nov 11, 2024 12:14 am    Post subject:

So, with kernels:
6.7.12
6.8.12
6.9.9
6.10.14

All good, no issue, no device-mapper warning, same output as with 6.6.58.

Then I tried again with 6.11.7 (gentoo-sources) and the issue shows up.
Well, at least now I know it really starts with the 6.11.x series.
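
To see what changed in that area between the two series, something like this lists candidate commits (assuming a kernel git checkout; the two paths are just my guess at the relevant files):
Code:
$ git log --oneline v6.10..v6.11 -- block/blk-settings.c drivers/md/dm-table.c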

I found this while looking at the kernel log; maybe related, maybe not.

Quote:
commit a23634644afc2f7c1bac98776440a1f3b161819e
Author: Christoph Hellwig <hch@lst.de>
Date: Fri May 31 09:47:59 2024 +0200

block: take io_opt and io_min into account for max_sectors

The soft max_sectors limit is normally capped by the hardware limits and
an arbitrary upper limit enforced by the kernel, but can be modified by
the user. A few drivers want to increase this limit (nbd, rbd) or
adjust it up or down based on hardware capabilities (sd).

Change blk_validate_limits to default max_sectors to the optimal I/O
size, or upgrade it to the preferred minimal I/O size if that is
larger than the kernel default if no optimal I/O size is provided based
on the logic in the SD driver.

This keeps the existing kernel default for drivers that do not provide
an io_opt or very big io_min value, but picks a much more useful
default for those who provide these hints, and allows to remove the
hacks to set the user max_sectors limit in nbd, rbd and sd.

----

commit 0a94a469a4f02bdcc223517fd578810ffc21c548
Author: Christoph Hellwig <hch@lst.de>
Date: Wed Jul 3 15:12:08 2024 +0200

dm: stop using blk_limits_io_{min,opt}

Remove use of the blk_limits_io_{min,opt} and assign the values directly
to the queue_limits structure. For the io_opt this is a completely
mechanical change, for io_min it removes flooring the limit to the
physical and logical block size in the particular caller. But as
blk_validate_limits will do the same later when actually applying the
limits, there still is no change in overall behavior.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>


Maybe I can now try 5c1f50ab7fcb4e77a0b4ce102cfb890eef1ed8f1 (just before a23634644afc2f7c1bac98776440a1f3b161819e).
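
Roughly, something like this (assuming an existing .config in the tree):
Code:
$ git checkout 5c1f50ab7fcb4e77a0b4ce102cfb890eef1ed8f1
$ make olddefconfig
$ make -j$(nproc)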
sdauth
l33t


Joined: 19 Sep 2018
Posts: 650
Location: Ásgarðr

PostPosted: Mon Nov 11, 2024 3:38 am    Post subject:

Well, I got a bit lucky here; after a few compiles...
the issue indeed starts with commit a23634644afc2f7c1bac98776440a1f3b161819e (block: take io_opt and io_min into account for max_sectors).
With that commit, the device-mapper warning appears and the lsblk output changes...
Hu
Administrator


Joined: 06 Mar 2007
Posts: 22642

PostPosted: Mon Nov 11, 2024 3:00 pm    Post subject:

That is unfortunate, in my opinion. The commit you cited as bad has a commit log that I interpret as being that this was intentional and serves a useful purpose, so it would be disappointing to need to revert it. Nonetheless, I think reporting this to the relevant maintainers, and providing a summary of what it breaks, is likely the next step. Perhaps they can find a way to mitigate the regression without removing the improvement.
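
To find whom to contact, the kernel tree's get_maintainer script should give the right addresses; assuming the offending commit touches block/blk-settings.c (adjust the path if not):
Code:
$ ./scripts/get_maintainer.pl -f block/blk-settings.c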
sdauth
l33t


Joined: 19 Sep 2018
Posts: 650
Location: Ásgarðr

PostPosted: Mon Nov 11, 2024 3:40 pm    Post subject:

I think you're right.
I made another test with the dmcrypt service disabled at boot.
After rebooting, I have:

Code:
NAME         ALIGNMENT MIN-IO   OPT-IO PHY-SEC LOG-SEC ROTA SCHED RQ-SIZE    RA WSAME
sda                  0    512 16776704     512     512    1 bfq       256 32764    0B
└─sda1               0    512 16776704     512     512    1 bfq       256 32764    0B
sdb                  0   4096 16776704    4096     512    1 bfq       256 32764    0B
└─sdb1               0   4096 16776704    4096     512    1 bfq       256 32764    0B
sdc                  0   4096 16776704    4096     512    1 bfq       256 32764    0B
└─sdc1               0   4096 16776704    4096     512    1 bfq       256 32764    0B
sdd                  0   4096 16776704    4096     512    1 bfq       256 32764    0B
└─sdd1               0   4096 16776704    4096     512    1 bfq       256 32764    0B
sde                  0   4096 16776704    4096     512    1 bfq       256 32764    0B
└─sde1               0   4096 16776704    4096     512    1 bfq       256 32764    0B
sdf                  0    512        0     512     512    0 bfq         2   128    0B
└─sdf1               0    512        0     512     512    0 bfq         2   128    0B
  └─enc_root         0    512        0     512     512    0                 128    0B


So except for root, of course, nothing else is unlocked.
But as soon as I start the dmcrypt service, here is the dmesg output:
Code:

[   49.805072] r8169 0000:03:00.0 eth0: Link is Down
[   51.684256] r8169 0000:03:00.0 eth0: Link is Up - 1Gbps/Full - flow control rx/tx
[  322.837569] device-mapper: table: 254:2: adding target device sde1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  322.837583] device-mapper: table: 254:2: adding target device sde1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  322.837769] device-mapper: table: 254:2: adding target device sde1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  322.837774] device-mapper: table: 254:2: adding target device sde1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  322.959952] BTRFS: device label hdd-raid devid 1 transid 827 /dev/dm-2 (254:2) scanned by (udev-worker) (3106)
[  349.901218] device-mapper: table: 254:3: adding target device sdd1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  349.901231] device-mapper: table: 254:3: adding target device sdd1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  349.901406] device-mapper: table: 254:3: adding target device sdd1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  349.901411] device-mapper: table: 254:3: adding target device sdd1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  350.120502] BTRFS: device label hdd-raid devid 2 transid 827 /dev/dm-3 (254:3) scanned by (udev-worker) (3481)
[  377.060425] device-mapper: table: 254:4: adding target device sdc1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  377.060454] device-mapper: table: 254:4: adding target device sdc1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  377.060632] device-mapper: table: 254:4: adding target device sdc1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  377.060637] device-mapper: table: 254:4: adding target device sdc1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  377.273772] BTRFS: device label hdd-raid devid 3 transid 827 /dev/dm-4 (254:4) scanned by (udev-worker) (3854)
[  404.230997] device-mapper: table: 254:5: adding target device sdb1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  404.231010] device-mapper: table: 254:5: adding target device sdb1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  404.231202] device-mapper: table: 254:5: adding target device sdb1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  404.231207] device-mapper: table: 254:5: adding target device sdb1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=16777216
[  404.446158] BTRFS: device label hdd-raid devid 4 transid 827 /dev/dm-5 (254:5) scanned by (udev-worker) (4228)


As you can see, apart from the alignment inconsistency messages, each disk is unlocked and btrfs is correctly detected. I can also mount the btrfs raid10 (I just created it today) without issue.
Here is the output of lsblk -t after that:

Code:
NAME          ALIGNMENT MIN-IO   OPT-IO PHY-SEC LOG-SEC ROTA SCHED RQ-SIZE    RA WSAME
sda                   0    512 16776704     512     512    1 bfq       256 32764    0B
└─sda1                0    512 16776704     512     512    1 bfq       256 32764    0B
  └─data2             0    512 16776704     512     512    1               32764    0B
sdb                   0   4096 16776704    4096     512    1 bfq       256 32764    0B
└─sdb1                0   4096 16776704    4096     512    1 bfq       256 32764    0B
  └─raid-hdd4        -1   4096        0    4096    4096    1               32764    0B
sdc                   0   4096 16776704    4096     512    1 bfq       256 32764    0B
└─sdc1                0   4096 16776704    4096     512    1 bfq       256 32764    0B
  └─raid-hdd3        -1   4096        0    4096    4096    1               32764    0B
sdd                   0   4096 16776704    4096     512    1 bfq       256 32764    0B
└─sdd1                0   4096 16776704    4096     512    1 bfq       256 32764    0B
  └─raid-hdd2        -1   4096        0    4096    4096    1               32764    0B
sde                   0   4096 16776704    4096     512    1 bfq       256 32764    0B
└─sde1                0   4096 16776704    4096     512    1 bfq       256 32764    0B
  └─raid-hdd1        -1   4096        0    4096    4096    1               32764    0B
sdf                   0    512        0     512     512    0 bfq         2   128    0B
└─sdf1                0    512        0     512     512    0 bfq         2   128    0B
  └─enc_root          0    512        0     512     512    0                 128    0B


So after all, maybe it is a cryptsetup issue triggered by this kernel change?
Either way, I'll eventually open a kernel and/or cryptsetup bug report later. :o
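
For such a report it would probably help to include the mapping geometry cryptsetup itself reports for one of the affected devices; for example (raid-hdd4 being one of the mappings from the lsblk output above):
Code:
# cryptsetup status raid-hdd4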


Last edited by sdauth on Tue Nov 12, 2024 7:33 pm; edited 1 time in total
Hu
Administrator


Joined: 06 Mar 2007
Posts: 22642

PostPosted: Mon Nov 11, 2024 3:59 pm    Post subject:

That could be, and that would explain how it was not caught earlier, if it requires an interaction between the new kernel, cryptsetup, and possibly even options on the encrypted device. I still think it merits a report somewhere, since the kernel message looks to me like a warning that something is at least suboptimal if not outright wrong. However, I am unsure whether this is a cryptsetup bug that was exposed by the kernel change, or is a kernel bug.