Gentoo Forums
Trouble removing internal bitmap from software RAID5
johntash
n00b

Joined: 30 Apr 2005
Posts: 20
Location: KS

Posted: Sun Aug 30, 2009 4:15 am    Post subject: Trouble removing internal bitmap from software RAID5

I think this is the right board, but if not feel free to move it :)

I recently started building a RAID5 array with three drives:
2 x Seagate 1.5 TB
1 x WD 2 TB (partitioned the same as the Seagates, so 500 GB unused)
All three are connected through a HighPoint RocketRAID 2300 SATA controller.
After plenty of different problems, I think I finally have it set up to where I can use it. I chose not to use the card's software or BIOS features, since it isn't an actual hardware RAID card anyway; I'm just using it for an extra four SATA ports (my motherboard's ports are less than reliable).

Now, my main problem is the speed of the initial resync. The array has never fully synced before, since I've rebooted a few times in between, recreated the array, etc. Right now it's hovering anywhere from 2 MB/s to 17 MB/s, with ~1500 minutes to finish.
Code:
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc2[0] sdb2[3] sdd2[1]
      2928311936 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
      [==>..................]  recovery = 10.1% (148520280/1464155968) finish=1254.8min speed=17471K/sec
      bitmap: 0/175 pages [0KB], 4096KB chunk


I have no trouble assembling the array with mdadm anymore, but I thought it would be a good idea to enable the write-intent bitmap for this RAID with:
Code:
mdadm /dev/md0 -Gb internal

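(For reference, if I'm reading the mdadm man page right, the long form of that is --grow --bitmap=internal, and there's also a --bitmap-chunk option that takes a size in KB, so a bigger chunk would mean fewer bitmap updates; the value below is just a guess on my part, untested:)
Code:
mdadm --grow --bitmap=internal --bitmap-chunk=131072 /dev/md0
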
Now, after I enabled this, I read more about it and found out that a bitmap isn't supposed to be added to an array until it has finished syncing. The array was definitely performing a sync when I added the bitmap, but the command didn't throw any errors, and mdadm -D even shows the bitmap as enabled.

Code:

# mdadm -D /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Thu Aug 27 18:02:37 2009
     Raid Level : raid5
     Array Size : 2928311936 (2792.66 GiB 2998.59 GB)
  Used Dev Size : 1464155968 (1396.33 GiB 1499.30 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Aug 29 22:31:59 2009
          State : active, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

 Rebuild Status : 10% complete

           UUID : 34a1320e:d83aa2d5:a524cd1b:7b24cd10
         Events : 0.17744

    Number   Major   Minor   RaidDevice State
       0       8       34        0      active sync   /dev/sdc2
       1       8       50        1      active sync   /dev/sdd2
       3       8       18        2      spare rebuilding   /dev/sdb2


I'm not sure whether I caused any problems by adding the bitmap while the array was still syncing, but I think it may be part of my performance problem. When I examine the bitmap on any of the RAID disks, it always shows up as 100% dirty, like this:


Code:

 # mdadm -X /dev/sdb2
        Filename : /dev/sdb2
           Magic : 6d746962
         Version : 4
            UUID : 34a1320e:d83aa2d5:a524cd1b:7b24cd10
          Events : 15691
  Events Cleared : 0
           State : OK
       Chunksize : 4 MB
          Daemon : 5s flush period
      Write Mode : Normal
       Sync Size : 1464155968 (1396.33 GiB 1499.30 GB)
          Bitmap : 357460 bits (chunks), 357460 dirty (100.0%)

I've also done the following, hoping it would speed things up:
Code:

# blockdev --setra 8192 /dev/md0
# blockdev --setra 2048 /dev/sdb /dev/sdc /dev/sdd
# echo 8192 > /sys/block/md0/md/stripe_cache_size
# echo 50000 >/proc/sys/dev/raid/speed_limit_min

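From what I've read, there's also a speed_limit_max sysctl worth checking, and watch makes it easy to keep an eye on the resync (the value below is just what I'd try, not gospel):
Code:

# cat /proc/sys/dev/raid/speed_limit_max
# echo 500000 > /proc/sys/dev/raid/speed_limit_max
# watch -n 5 cat /proc/mdstat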

All I want to do right now is completely disable the write-intent bitmap, but I can't seem to do that. I've tried the following, thinking it would work:
Code:

# mdadm /dev/md0 -Gb none
mdadm: failed to remove internal bitmap.

dmesg also logs an error: md: couldn't update array info. -16

I also tried completely stopping the array and removing the bitmap with the same command, but that didn't work since the array wasn't active.
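
My working theory is that the -16 in dmesg is EBUSY, i.e. the bitmap can't be removed in the middle of a recovery, so maybe I just have to let the rebuild finish and then retry. If I'm reading the man page right, something like this should do it (untested):
Code:

# mdadm --wait /dev/md0                 # blocks until the recovery finishes
# mdadm --grow --bitmap=none /dev/md0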

Does anyone have any other ideas on how to remove the bitmap from my RAID? Or any other ideas that might help speed it up? My CPU is barely at 5%, so I don't think there's a bottleneck there. The RAID card is also going through a PCI Express x1 slot, which should have enough bandwidth. If anyone wants more information, just let me know what to provide. Thanks! :D
NeddySeagoon
Administrator

Joined: 05 Jul 2003
Posts: 54797
Location: 56N 3W

Posted: Sun Aug 30, 2009 1:16 pm

johntash,

You haven't told us the speed you get from the array.
Syncing deliberately does not use all of the read/write bandwidth, so you can still usefully use the RAID while it's syncing.

What read speed do you get with
Code:
dd if=/dev/md0 of=/dev/null

Press Ctrl-C to kill it, and run kill -USR1 <pid_of_dd> to see how it's doing.
This will give you some idea of the sequential read speed.
Try bonnie for more detailed information.
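
For example, something along these lines; the mount point for bonnie++ is only a placeholder, point it at wherever the RAID is mounted:
Code:

# dd if=/dev/md0 of=/dev/null bs=1M &
# kill -USR1 $!       # GNU dd prints its transfer statistics on SIGUSR1
# bonnie++ -d /mnt/raid -u root       # app-benchmarks/bonnie++
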
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
johntash
n00b

Posted: Sun Aug 30, 2009 6:10 pm

Thanks for the reply. I guess I haven't really benchmarked the array yet since I haven't used it a whole lot. I'm about to head out the door to work, so I'll try Bonnie in a couple of hours to get more details. For now, here are a few runs of dd:
Code:

# dd if=/dev/md0 of=/dev/null
925697+0 records in
925696+0 records out
473956352 bytes (474 MB) copied, 22.4079 s, 21.2 MB/s

# dd if=/dev/md0 of=/dev/null
1107713+0 records in
1107712+0 records out
567148544 bytes (567 MB) copied, 29.4186 s, 19.3 MB/s

# dd if=/dev/md0 of=/dev/null
1173241+0 records in
1173240+0 records out
600698880 bytes (601 MB) copied, 7.60021 s, 79.0 MB/s

# dd if=/dev/md0 of=/dev/null
1197465+0 records in
1197464+0 records out
613101568 bytes (613 MB) copied, 9.68879 s, 63.3 MB/s

# dd if=/dev/md0 of=/dev/null
1541529+0 records in
1541528+0 records out
789262336 bytes (789 MB) copied, 11.1974 s, 70.5 MB/s

# dd if=/dev/md0 of=/dev/null
2098433+0 records in
2098432+0 records out
1074397184 bytes (1.1 GB) copied, 33.0696 s, 32.5 MB/s
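
Looking at the record counts, those runs used dd's default 512-byte block size, which probably understates the sequential rate; when I benchmark properly I'll redo them with something bigger, e.g.:
Code:

# dd if=/dev/md0 of=/dev/null bs=1M count=2048
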
NeddySeagoon
Administrator

Posted: Sun Aug 30, 2009 10:01 pm

johntash,

That doesn't look too bad, as n-1 drives must be read and decoded. There is not really any speed increase to be gained with RAID5.
The big speed differences will be accounted for by dd not having exclusive use of the RAID set.

It looks OK for reads to me.

Writes will be slower, as you have to write to all of the drives.
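
Once the array has finished syncing, you can get a rough write figure the same way; the test file path below is only an example, point it at a file on the mounted RAID:
Code:

# dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=2048 conv=fdatasync
# rm /mnt/raid/testfile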