Gentoo Forums
md file system slow: kworker/u16:0+flush-252:4

 
pegu
n00b


Joined: 19 Sep 2003
Posts: 52

Posted: Sun Sep 22, 2024 7:53 am    Post subject: md file system slow: kworker/u16:0+flush-252:4

I'm installing Altera Quartus, which I have done several times before on this system, but what used to take 15-20 minutes now takes a week.
It happens with both the text-based installer and the GUI installer. The only process I see running at 100% during this time is:

Code:

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                                                                           
29359 root      20   0       0      0      0 R 100.0   0.0 433:13.54 kworker/u16:0+flush-252:4                                                                                         


The load is not very high either, and interactive tasks remain perfectly responsive.
Code:

 05:10:40 up 109 days,  7:39,  2 users,  load average: 2.85, 2.77, 2.81


There are no errors in the kernel log or in /proc/mdstat. I have installed the same package on other distros without problems, so I don't think the problem is the Quartus installer.
The kernel version is 6.6.13-gentoo.
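
A sketch of how one might see where that worker spends its time (the PID is the one from the top output above and changes whenever the worker respawns; /proc/<pid>/stack and sysrq need root, and sysrq must be enabled):

Code:

# kernel stack of the busy flush worker
cat /proc/29359/stack
# dump all blocked tasks to the kernel log
echo w > /proc/sysrq-trigger
dmesg | tail -n 50
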

Update: It's not related to the Quartus installer, as it takes more than two minutes to copy a 2.4M PDF file:

Code:

$ time cp /tmp/time-series-forcasting-10324313.pdf ./documentation/math

real    2m7.088s
user    0m0.000s
sys     0m0.032s
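
A quick way to tell whether only the buffered-writeback path is slow would be to time the same amount of data written through the page cache and then with O_DIRECT (a sketch; file name and size are illustrative):

Code:

# buffered write, synced at the end -- exercises the kworker flush path
dd if=/dev/zero of=./ddtest bs=1M count=100 conv=fsync
# the same amount of data, bypassing the page cache
dd if=/dev/zero of=./ddtest bs=1M count=100 oflag=direct
rm ./ddtest

If the O_DIRECT run is fast while the buffered one crawls, the drives themselves are probably fine and the problem sits in the writeback path.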


Any ideas as to what is causing this behavior and how to fix it?
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 54550
Location: 56N 3W

Posted: Sun Sep 22, 2024 12:22 pm

Tell us how the raid set is constructed and on what drives.

It sounds like a drive issue that the kernel is managing to hide for now, rather than kicking a member out of the raid set.
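
Checking the SMART data of every member would be a start, something like this (a sketch; assumes sys-apps/smartmontools is installed and that the members are /dev/sd? devices):

Code:

# overall health plus the attributes that usually betray a failing drive
for d in /dev/sd?; do
    echo "=== $d ==="
    smartctl -H $d
    smartctl -A $d | grep -Ei 'realloc|pending|uncorrect'
done
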
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
pegu
n00b


Joined: 19 Sep 2003
Posts: 52

Posted: Fri Sep 27, 2024 5:19 am

Code:

# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid1 sdg2[3](S) sdh2[1] sdd2[5] sdf2[4] sda2[2](S) sdb2[0] sde2[6] sdc2[7]
      524224 blocks super 1.0 [6/6] [UUUUUU]
     
md2 : active raid6 sdg4[3] sdh4[1] sdd4[5] sdf4[4] sda4[2] sdb4[0] sde4[6] sdc4[7]
      23035502592 blocks super 1.2 level 6, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]
      bitmap: 5/29 pages [20KB], 65536KB chunk

md3 : active raid6 sdh3[1] sdg3[3] sdd3[5] sdf3[4] sda3[2] sdb3[0] sde3[6] sdc3[7]
      402259968 blocks super 1.2 level 6, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]


It's only on md2 that I observe these issues.
md3 is working fine.
They are all on the same physical drives.
I don't see any disk-related errors in the kernel logs.
It has been working fine for many years.
Checking md5/sha256 checksums of large file sets shows no errors.
There appears to be no slowdown on read operations, only writes.
Other hosts mount this filesystem read-only over NFS.
md2 holds an ext4 filesystem.
I have many 100GB images which, when installed, produce thousands of smaller files. Both the image files and the installed files are then deleted as new versions become available.

This problem, which seems to be related to fragmentation/journaling, appears to be similar:

https://unix.stackexchange.com/questions/620804/kworker-flush-99-i-o-hangs-file-write-on-machine
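
If it is fragmentation or the journal, one way to check (a sketch; e2freefrag, dumpe2fs and filefrag all ship with e2fsprogs, and the assumption here is that the ext4 filesystem sits directly on /dev/md2):

Code:

# free-space fragmentation summary of the filesystem on md2
e2freefrag /dev/md2
# journal size and features
dumpe2fs -h /dev/md2 | grep -i journal
# extent layout of the file that was slow to copy
filefrag -v ./documentation/math/time-series-forcasting-10324313.pdf
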
dmpogo
Advocate


Joined: 02 Sep 2004
Posts: 3390
Location: Canada

Posted: Fri Sep 27, 2024 5:30 am

pegu wrote:
It's only on md2 that I observe these issues.
md3 is working fine.
They are all on the same physical drives.

So it is on raid6. Can you take one disk down at a time from md2 and see whether anything improves?
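
For example, something along these lines for each member in turn (device name illustrative; since md2 has a write-intent bitmap, a --re-add should only resync the blocks written while the disk was out):

Code:

# mark one member failed and pull it from the array
mdadm /dev/md2 --fail /dev/sdg4
mdadm /dev/md2 --remove /dev/sdg4
# ...repeat the slow write test...
# put the member back; the bitmap keeps the resync short
mdadm /dev/md2 --re-add /dev/sdg4
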