Nossie Apprentice
Joined: 19 Apr 2002 Posts: 181
Posted: Wed Jun 23, 2004 10:07 am Post subject: LSI MegaRaid 150-6 SATA performance
I just installed an LSI MegaRAID 150-6 SATA controller and was (negatively) surprised by the performance. I set it up with two Maxtor 160 GB SATA drives in a RAID1 array. Read performance was 30 MB/s, but writes were about 4 MB/s, with a very high system load.
After a couple of hours messing with the card's BIOSes and going through the completely incomprehensible and almost totally useless manual, I found a BIOS setting to enable the write cache of each individual drive.
For some reason the default setting for the drive cache is 'disabled'.
With the drives' write cache enabled, write performance went up to 20 MB/s.
The card has two BIOS interfaces, a text-based BIOS and a 'web'-based BIOS. The drive write cache can only be enabled or disabled in the text BIOS.
Gentoo installation on the array was easy: just insert the LiveCD and boot the 2.4.x kernel with the doscsi parameter (the 2.6.x kernel didn't work for me).
Now I have a RAID10 array of 6 drives. Write performance is a little lower than I expected (20 MB/s), but read performance is very good (90 MB/s).
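For reference, a back-of-the-envelope RAID10 scaling estimate. This is a rough sketch: the ~30 MB/s per-drive streaming rate is an assumption taken from the RAID1 read number, not a measured single-drive figure.

```shell
# Ideal RAID10 streaming rates for n drives at a per-drive rate "per":
# reads can be serviced by every drive in the array, while writes
# must be committed to both halves of each mirror.
awk 'BEGIN {
    n = 6; per = 30                      # assumed ~30 MB/s per drive
    printf "ideal seq read:  %d MB/s\n", n * per
    printf "ideal seq write: %d MB/s\n", (n / 2) * per
}'
```

The measured 90 MB/s read and 20 MB/s write are well below these idealized numbers, which suggests the bottleneck is elsewhere (controller firmware, or the bus the card sits on).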
I'll post some benchmarks later...
gr,
Nossie
Nossie Apprentice
Joined: 19 Apr 2002 Posts: 181
Posted: Wed Jun 30, 2004 3:15 pm Post subject: Some benchmark results
In the above post I wrote that I would post some benchmarks later... well, here they are.
This is a RAID10 array of 6 Maxtor 160 GB SATA disks.
The filesystem is ReiserFS, and the write policy on the RAID controller is set to 'write back'.
Code: |
home test # tiobench.pl -threads 1
No size specified, using 1792 MB
Unit information
================
File size = megabytes
Blk Size = bytes
Rate = megabytes per second
CPU% = percentage of CPU used during the test
Latency = milliseconds
Lat% = percent of requests that took longer than X seconds
CPU Eff = Rate divided by CPU% - throughput per cpu load
Sequential Reads
File Blk Num Avg Maximum Lat% Lat% CPU
Identifier Size Size Thr Rate (CPU%) Latency Latency >2s >10s Eff
---------------------------- ------ ----- --- ------ ------ --------- ----------- -------- -------- -----
2.4.26-gentoo-r3 1792 4096 1 72.02 16.47% 0.054 98.60 0.00000 0.00000 437
Random Reads
File Blk Num Avg Maximum Lat% Lat% CPU
Identifier Size Size Thr Rate (CPU%) Latency Latency >2s >10s Eff
---------------------------- ------ ----- --- ------ ------ --------- ----------- -------- -------- -----
2.4.26-gentoo-r3 1792 4096 1 0.93 0.239% 4.180 14.28 0.00000 0.00000 391
Sequential Writes
File Blk Num Avg Maximum Lat% Lat% CPU
Identifier Size Size Thr Rate (CPU%) Latency Latency >2s >10s Eff
---------------------------- ------ ----- --- ------ ------ --------- ----------- -------- -------- -----
2.4.26-gentoo-r3 1792 4096 1 26.88 14.46% 0.116 4006.49 0.00044 0.00000 186
Random Writes
File Blk Num Avg Maximum Lat% Lat% CPU
Identifier Size Size Thr Rate (CPU%) Latency Latency >2s >10s Eff
---------------------------- ------ ----- --- ------ ------ --------- ----------- -------- -------- -----
2.4.26-gentoo-r3 1792 4096 1 3.96 1.520% 0.012 0.46 0.00000 0.00000 260
|
Code: | home test # hdparm -tT /dev/sda
/dev/sda:
Timing buffer-cache reads: 1872 MB in 2.00 seconds = 936.00 MB/sec
Timing buffered disk reads: 218 MB in 3.00 seconds = 72.67 MB/sec |
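As a quick sanity check of the tiobench table above: the 'CPU Eff' column is just Rate divided by CPU%, as the unit legend says. For the sequential-read row:

```shell
# CPU Eff = throughput per unit of CPU load (Rate / CPU%):
# sequential reads ran at 72.02 MB/s using 16.47% of the CPU.
awk 'BEGIN { printf "CPU Eff: %d\n", 72.02 / (16.47 / 100) }'
```

This reproduces the 437 in the sequential-read row, so the column is computed the way the legend claims.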
the_highhat n00b
Joined: 04 Jan 2003 Posts: 23 Location: Urf
Posted: Sat Jul 10, 2004 3:41 pm Post subject:
Customer service at LSI recommended "write thru" instead of "write back" for RAID 10 arrays. I benchmarked this with h2bench and it made a significant difference (+30%) on writes alone. I'm not sure of the effect of caching at the drive level with these cards, though. Also, use direct I/O and normal reads (no readahead or adaptive readahead) for optimal performance.
BTW, I too use RAID10, but many posts seem to think RAID5 + hotspare is the way to go. I don't really get it, since RAID10 can survive up to 2 disk failures without performance loss and is generally faster than RAID5. Sure, you lose some capacity, but it seems worth it to me... why did you go RAID10?
Nossie Apprentice
Joined: 19 Apr 2002 Posts: 181
Posted: Sat Jul 10, 2004 4:55 pm Post subject:
I chose RAID10 because it can survive multiple disk failures. With a RAID10 array of 6 drives, you have a stripe of 3 mirror sets; if 1 drive of each mirror fails, you are still up and running without any data loss.
The second reason I chose RAID10 is that when a drive gets flaky (i.e. the drive fails partially and returns corrupted data), you can (if you find the responsible drive) replace it, and it gets rebuilt without problems.
If this happens with RAID5, the parity data will also be corrupted (the checksum is created from corrupted data). So if you replace the flaky drive, the data will be rebuilt from the corrupted parity data, and you will have a rebuilt drive that's still corrupted.
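To make the multiple-failure point concrete, here is a small sketch (pure shell arithmetic; the drive numbering is hypothetical) that enumerates every two-disk failure combination in a 6-drive RAID10 built as 3 mirror pairs (0+1, 2+3, 4+5). The array only dies when both halves of the same mirror fail:

```shell
# Count which 2-disk failure combinations a 3x2 RAID10 survives.
fatal=0
survivable=0
for a in 0 1 2 3 4 5; do
  for b in 0 1 2 3 4 5; do
    [ "$a" -lt "$b" ] || continue          # visit each unordered pair once
    if [ $((a / 2)) -eq $((b / 2)) ]; then # both halves of one mirror set
      fatal=$((fatal + 1))
    else
      survivable=$((survivable + 1))
    fi
  done
done
echo "$survivable of $((fatal + survivable)) two-disk failures are survivable"
```

So 12 of the 15 possible two-disk failures leave the array running, whereas RAID5 (without a rebuilt hotspare) survives none of them.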
I have tried both 'write-through' and 'write-back', and there is a performance difference between the two modes. For me write-back was 30% faster than write-through (26 MB/s vs 20 MB/s on sequential writes).
cat /proc/megaraid/hba0/raiddrives-0-9
Code: |
Logical drive: 0:, state: optimal
Span depth: 3, RAID level: 1, Stripe size: 64, Row size: 2
Read Policy: No read ahead, Write Policy: Write back, Cache Policy: Direct IO |
cat /proc/megaraid/hba0/config
Code: | v2.00.3 (Release Date: Wed Feb 19 08:51:30 EST 2003)
MegaRAID SATA 150-6D
Controller Type: 438/466/467/471/493/518/520/531/532
Controller Supports 40 Logical Drives
Controller capable of 64-bit memory addressing
Controller is not using 64-bit memory addressing
Base = f883a000, Irq = 11, Logical Drives = 1, Channels = 1
Version =713G:G117, DRAM = 64Mb
Controller Queue Depth = 254, Driver Queue Depth = 126
support_ext_cdb = 1
support_random_del = 1
boot_ldrv_enabled = 1
boot_ldrv = 0
boot_pdrv_enabled = 0
boot_pdrv_ch = 0
boot_pdrv_tgt = 0
quiescent = 0
has_cluster = 0
Module Parameters:
max_cmd_per_lun = 63
max_sectors_per_io = 128 |
The performance is adequate, but not spectacular. I hope this will improve with future driver/firmware updates. Maybe it is just because the card is in a 32-bit PCI slot, I don't know...
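The 32-bit slot theory is plausible: plain 32-bit/33 MHz PCI tops out around 133 MB/s in theory, shared with everything else on the bus, so a sustained 90 MB/s read is already a large chunk of it. A quick calculation (bus width in bytes times clock, an idealized figure that ignores arbitration and overhead):

```shell
# Theoretical peak PCI bandwidth = (bus width in bits / 8) * clock in MHz.
# Nominal 33/66 MHz clocks used; the real clock is 33.33/66.66 MHz.
awk 'BEGIN {
    printf "32-bit/33 MHz PCI: %d MB/s\n", (32 / 8) * 33
    printf "64-bit/66 MHz PCI: %d MB/s\n", (64 / 8) * 66
}'
```

A 64-bit/66 MHz slot raises the ceiling roughly fourfold, which may explain why the same family of cards benchmarks better there.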
I would appreciate it if someone with an LSI SATA card could post some benchmarks (tiobench) here, so I can compare the performance.
Nossie
the_highhat n00b
Joined: 04 Jan 2003 Posts: 23 Location: Urf
Posted: Sun Jul 11, 2004 7:35 pm Post subject: some benchmarking stats...
Setup:
LSI MegaRAID 150-4 on a 64-bit 66 MHz slot
2xopteron 240 on K8W mobo w/2GB NUMA
4x36GB raptors in RAID 10
32K chunk, normal read, direct I/O
gentoo 2004.1, gcc 3.4 + multilib, glibc 2.3.4 + nptl,
kernel 2.6.7-gentoo-r9
....................................
tiobench is not available for amd64, so I'm using bonnie++ to test.
The command issued was "bonnie -u root".
....................................
With elevator=cfq and caching set to "write-back":
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
programatron 4G 40734 98 43337 17 17510 6 29664 75 50559 10 595.8 1
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 14795 98 +++++ +++ 14923 99 15774 98 +++++ +++ 14564 99
programatron,4G,40734,98,43337,17,17510,6,29664,75,50559,10,595.8,1,16,14795,98,+++++,+++,14923,99,15774,98,+++++,+++,14564,99
....................................
With elevator=cfq and caching set to "write-thru":
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
programatron 4G 38440 93 62204 31 29313 11 31496 80 83220 18 497.6 1
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 15060 99 +++++ +++ 12539 99 14986 99 +++++ +++ 12250 99
programatron,4G,38440,93,62204,31,29313,11,31496,80,83220,18,497.6,1,16,15060,99,+++++,+++,12539,99,14986,99,+++++,+++,12250,99
....................................
With elevator=as (default) and caching set to "write-thru":
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
programatron 4G 40333 98 78780 38 28052 10 31082 79 82366 18 398.2 0
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 15365 98 +++++ +++ 14644 99 15922 99 +++++ +++ 12977 99
programatron,4G,40333,98,78780,38,28052,10,31082,79,82366,18,398.2,0,16,15365,98,+++++,+++,14644,99,15922,99,+++++,+++,12977,99
........................................
Can you post a bonnie++ output for reference? Thanks!
Nossie Apprentice
Joined: 19 Apr 2002 Posts: 181
Posted: Tue Jul 13, 2004 3:13 pm Post subject:
My bonnie++ results
LSI MegaRAID 150-6 in a 32-bit PCI slot
kernel 2.6.7-mm4
FS = reiserfs (with tails)
Code: |
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
home 2G 20046 83 29509 13 18712 6 19312 80 75446 17 344.7 0
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 25566 97 +++++ +++ 20223 97 24876 98 +++++ +++ 19035 97
home,2G,20046,83,29509,13,18712,6,19312,80,75446,17,344.7,0,16,25566,97,+++++,+++,20223,97,24876,98,+++++,+++,19035,97 |
The system wasn't completely idle when I ran the test; I was uploading something to an FTP server at 50 kB/s, but that shouldn't have much impact.