awalp n00b
Joined: 29 May 2003 Posts: 73
Posted: Tue Sep 21, 2010 9:26 pm Post subject: raid0 vs single drive transfer speeds, no improvement |
I'm wondering why there is no transfer-speed improvement with raid0 over single drives on one of my arrays,
and whether the CPU/memory/motherboard is limiting it.
-- I have two raid0 arrays; the first uses older, seemingly slower hard drives and shows a nice improvement with raid0 over the single drives
-- the second uses newer hard drives which are seemingly faster, yet shows no improvement, or sometimes even a minor loss, with raid0 over the single drives
Detailed specs: Athlon XP 2500+, 512MB RAM, 4-port SATA PCI add-on card.
MD0: sda,sdb (250GB x2 = 500GB), XFS, default chunk size of 64k (matching drives, 2 years old), 7200rpm ST3250823AS
MD1: sdc,sdd (750GB x2 = 1.5TB), XFS, default chunk size of 64k (matching drives, 2 weeks old), 7200rpm WD7500AVDS-63U8B0
Code: | hdparm -tT /dev/hda; hdparm -tT /dev/sda; hdparm -tT /dev/sdb; hdparm -tT /dev/md0; hdparm -tT /dev/sdc; hdparm -tT /dev/sdd; hdparm -tT /dev/md1; hdparm -tT /dev/sde |
System IDE Drive (ext3, for comparison)
/dev/hda:
Timing cached reads: 478 MB in 2.00 seconds = 238.84 MB/sec
Timing buffered disk reads: 82 MB in 3.01 seconds = 27.22 MB/sec
---------------------- MD0 -------------------------------
MD0 raid0 SATA drive 1 (using SATA PCI Card)
/dev/sda:
Timing cached reads: 492 MB in 2.01 seconds = 245.24 MB/sec
Timing buffered disk reads: 196 MB in 3.01 seconds = 65.05 MB/sec
MD0 raid0 SATA drive 2 (using SATA PCI Card)
/dev/sdb:
Timing cached reads: 484 MB in 2.00 seconds = 241.79 MB/sec
Timing buffered disk reads: 200 MB in 3.00 seconds = 66.63 MB/sec
MD0 (raid0 sda + sdb, using mdadm software raid through SATA PCI add-on card)
/dev/md0:
Timing cached reads: 494 MB in 2.01 seconds = 246.07 MB/sec
Timing buffered disk reads: 264 MB in 3.02 seconds = 87.45 MB/sec --------- Nice Gain
----------------------------- the first raid0 array has an improvement from ~65MB/sec to ~87MB/sec
---------------------- MD1 -------------------------------
MD1 raid0 SATA drive 1(3) (using SATA PCI Card)
/dev/sdc:
Timing cached reads: 486 MB in 2.00 seconds = 242.92 MB/sec
Timing buffered disk reads: 244 MB in 3.01 seconds = 81.09 MB/sec
MD1 raid0 SATA drive 2(4) (using SATA PCI Card)
/dev/sdd:
Timing cached reads: 492 MB in 2.00 seconds = 245.41 MB/sec
Timing buffered disk reads: 246 MB in 3.02 seconds = 81.57 MB/sec
MD1 (raid0 sdc + sdd, using mdadm software raid through SATA PCI add-on card)
/dev/md1:
Timing cached reads: 492 MB in 2.00 seconds = 245.60 MB/sec
Timing buffered disk reads: 236 MB in 3.02 seconds = 78.09 MB/sec --------- No Gain/Slight Loss
----------------------------- the newer second raid0 array actually has a loss from ~81MB/sec to ~78MB/sec
USB SATA adapter drive (for comparison)
/dev/sde:
Timing cached reads: 466 MB in 2.00 seconds = 232.53 MB/sec
Timing buffered disk reads: 94 MB in 3.04 seconds = 30.96 MB/sec
--
--
--
I'm wondering whether the PCI interface for the SATA card cannot supply the bandwidth, whether it is CPU/memory limited, or whether there is another reason, such as chunk size. |
frostschutz Advocate
Joined: 22 Feb 2005 Posts: 2977 Location: Germany
Posted: Tue Sep 21, 2010 10:08 pm Post subject: |
PCI is the slowest interface by far nowadays... that's why graphics cards used to use AGP, and why we have PCI-E now |
krinn Watchman
Joined: 02 May 2003 Posts: 7470
Posted: Tue Sep 21, 2010 11:05 pm Post subject: |
- Check that your SATA controller is plugged into a bus-master-capable PCI slot (you can see that with lspci -vv; see the snippet below). Some motherboards also had one "burst" PCI slot (this can only be checked in the manual; it generally has lower latency and no IRQ sharing).
- Your md1 uses green WD disks, look here -> https://forums.gentoo.org/viewtopic-t-836411-start-0-postdays-0-postorder-asc-highlight-green.html
- PCI gives 133MB/s (266MB/s for PCI 2.1) max bandwidth, while a single SATA1 drive can reach 150MB/s, so don't expect a drive to work at full speed on early PCI, and even on newer PCI the raid cannot run at full speed.
- It's not always a good idea to pick a complex filesystem: it should deliver high performance, but it also puts more pressure on the CPU, which is not a great choice for a weak CPU that is already handling a software raid array.
Complexity nearly always means higher CPU usage, even with nice code.
- USB is by far the slowest interface: USB1 runs at 12Mb/s = 1.5MB/s (lol), USB2 is better at 480Mb/s, but that's still weak at 60MB/s. It's easy to mislead users with Mb vs MB; for comparison, PCI in Mb/s = 32*33 = 1056Mb/s. |
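For reference, a quick way to check the bus-master flag krinn mentions might look like this (a sketch: the 00:0a.0 address is only an example, yours will differ):
Code: | # find the SATA controller's PCI address first
lspci | grep -i -E 'sata|sil'
# then dump its flags; "BusMaster+" on the Control line means bus mastering is enabled
lspci -vv -s 00:0a.0 | grep -E 'BusMaster|Latency'
|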
awalp n00b
Joined: 29 May 2003 Posts: 73
Posted: Wed Sep 22, 2010 12:17 am Post subject: |
krinn wrote: | - Check that your SATA controller is plugged into a bus-master-capable PCI slot |
With lspci -vv it displays
BusMaster+
It uses the sata_sil kernel module.
krinn wrote: | - It's not always a good idea to pick a complex filesystem: it should deliver high performance, but it also puts more pressure on the CPU, which is not a great choice for a weak CPU that is already handling a software raid array. |
How is mdadm raid0 formatted XFS a complex filesystem? This isn't raid5/6 or anything very demanding.
As far as a "green" drive being slower than a regular drive, that surely cannot be the case. The drive is still a 7200rpm drive with 32 or 64MB of cache and is much faster than the 250GB Seagate drives (7200rpm, 32MB cache) that are 2 years old.
The question is why the drives perform slower in raid0 than as single drives and do not give a performance increase like the other array does.
One array, running the same filesystem, using the same PCI slot and the same mdadm software raid, achieved an improvement from 65MB/sec to 87MB/sec.
The second array, whose only difference is its size (1.5TB vs 500GB) and whose single-drive performance is 82MB/sec, should show an improvement as well.
Unless having a 1.5TB filesystem causes raid0 to become slower than having a 500GB one.
This is a dedicated fileserver. A 2500+ is a decent processor and 512MB is a decent amount of RAM.
The computer has only one PCI slot in use, and that holds the SATA expansion card.
The system is text-only, with samba as the only service (no X11 or anything else).
I would think that even if I don't get much improvement, I should at least see no loss and some gain.
--- Unless the size of the filesystem (1.5TB) is a problem. OR
Then again, 65+65 = 130, and 130 < 133MB/sec if that is the PCI limit, while 82+82 = 164, and 164 > 133MB/sec if that is the PCI limit.
--- That could be the cause of the problem: combined, the drives are fighting for PCI bus bandwidth.
Since 65+65 = 130 is not as high, it leaves room for the performance to increase to 87MB/sec.
---- Based on MD0 achieving 87MB/sec, unless having too much bandwidth per drive is a problem, I would think that MD1 should be able to reach at least that speed or higher.
EDIT::
Or possibly the chunk size should be different with such a large filesystem (1.5TB): 64k chunks are quite small, and if I used something like 512k chunks, the data wouldn't be switched back and forth between the drives as often. That is why I'm wondering if the chunk size could be the issue.
If that is the case, can the chunk size even be changed without entirely reformatting and resyncing all of the data? The 1.5TB array is already half full. |
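As a side note, before changing anything it is easy to confirm what the arrays are currently using; a sketch with the device names from the posts above:
Code: | # report the chunk size and layout of each array
mdadm --detail /dev/md0 | grep -i chunk
mdadm --detail /dev/md1 | grep -i chunk
# /proc/mdstat also shows the chunk size of striped arrays
cat /proc/mdstat
|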
dmpogo Advocate
Joined: 02 Sep 2004 Posts: 3468 Location: Canada
Posted: Wed Sep 22, 2010 12:48 am Post subject: |
It does seem that you hit a limit of ~80-85 MB/sec, which is perhaps your PCI card. |
cyrillic Watchman
Joined: 19 Feb 2003 Posts: 7313 Location: Groton, Massachusetts USA
Posted: Wed Sep 22, 2010 12:59 am Post subject: |
Don't forget, the PCI bandwidth is shared with other PCI devices such as your network card(s), and not just the harddrive controller(s). |
awalp n00b
Joined: 29 May 2003 Posts: 73
Posted: Wed Sep 22, 2010 2:05 am Post subject: |
cyrillic wrote: | Don't forget, the PCI bandwidth is shared with other PCI devices such as your network card(s), and not just the harddrive controller(s). |
The motherboard has VGA, Network, and everything else integrated and usually only a power connector and network cable are attached to the computer.
The PCI SATA adapter is the only PCI card in the computer.
So by having two faster hard drives, the raid0 speed actually ends up lower because each drive individually takes more bandwidth?? Does this have to do with software raid using 2x the bandwidth vs hardware raid? |
Mousee Apprentice
Joined: 29 Mar 2004 Posts: 291 Location: Illinois, USA
Posted: Wed Sep 22, 2010 2:40 am Post subject: |
awalp wrote: | cyrillic wrote: | Don't forget, the PCI bandwidth is shared with other PCI devices such as your network card(s), and not just the harddrive controller(s). |
The motherboard has VGA, Network, and everything else integrated and usually only a power connector and network cable are attached to the computer.
The PCI SATA adapter is the only PCI card in the computer. |
Such integrated devices almost always share the same PCI bus as PCI expansion devices. So you aren't saving PCI bandwidth by having a motherboard with 2 integrated NIC's, etc. |
jathlon Tux's lil' helper
Joined: 26 Sep 2006 Posts: 89 Location: Canada
Posted: Wed Sep 22, 2010 6:37 pm Post subject: |
The last time I upgraded a couple of hard drives I took a look at the Caviar Green line of disks. After reading a couple of reviews I spent the extra money and went with the Caviar Black. From an article on The Tech Report about the 1TB Caviar Green:
Quote: | When it first launched the GreenPower Caviar, WD refused to disclose the drive's actual spindle speed, saying only that it was somewhere between 5,400 and 7,200RPM. The company later admitted that the drive ran at closer to the former than the latter, but we haven't been able to coax out an exact spindle speed.
Numerous sites have speculated that the Caviar Green essentially runs at 5,400RPM, and now even Western Digital has changed its tune. Sort of. The drive's latest spec sheet lists the Green's rotational speed as "IntelliPower," which WD defines as "A fine-tuned balance of spin speed, transfer rate and caching algorithms designed to deliver both significant power savings and solid performance." So much for clarification.
Western Digital obviously doesn't want customers making assumptions about the Caviar Green's performance based on rotational speed alone, but the decision to obfuscate it behind blatant marketingspeak is entirely unnecessary and evasive. |
And sure enough the web page for your drive never makes mention of rotational speed.
http://www.wdc.com/en/products/products.asp?driveid=608
That Advanced Format bit makes me wonder if you don't have 4k sectors instead of the old 512. That would be more of a formatting issue than chunk size, but that can have an effect on throughput.
Ahh, from this thread in the forums...
https://forums.gentoo.org/viewtopic-t-836411-highlight-sector+size.html
Check out the link in the first reply.
Later,
joe |
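If it helps, the sector size can usually be checked directly; a sketch (the sysfs path assumes a reasonably recent kernel, and sdc/sdd are the WD drives from the earlier posts):
Code: | # hdparm -I prints the logical/physical sector sizes on drives that report them
hdparm -I /dev/sdc | grep -i sector
# newer kernels also expose the physical sector size via sysfs
cat /sys/block/sdc/queue/physical_block_size
cat /sys/block/sdd/queue/physical_block_size
|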
krinn Watchman
Joined: 02 May 2003 Posts: 7470
Posted: Wed Sep 22, 2010 7:36 pm Post subject: |
You really think XFS is less complex than ext3?
I have a Pentium 4 (I think it's a C, anyway one with 1 core + HT); it runs so slowly compared to my system that I sometimes wish to throw it out of the window (fortunately it's in the garage, and there's no window there).
And because it's software raid, you can easily check that:
awalp wrote: | Unless the size of the filesystem (1.5TB) is a problem |
Nothing prevents you from creating a 500GB array just for testing (see the sketch below).
Thank you, I did! And wow, that guy is so wise, I can even feel he is smart & everyone loves him, must be a God in his country. :D |
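A minimal sketch of such a test, assuming two spare, data-free partitions (/dev/sdX1 and /dev/sdY1 are placeholders) and a throwaway array /dev/md9:
Code: | # WARNING: this destroys anything on the two partitions used for the test
for chunk in 64 256 512 1024; do
    mdadm --create /dev/md9 --run --level=0 --chunk=$chunk --raid-devices=2 /dev/sdX1 /dev/sdY1
    echo "chunk = ${chunk}k"
    hdparm -t /dev/md9
    mdadm --stop /dev/md9
done
|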
awalp n00b
Joined: 29 May 2003 Posts: 73
Posted: Thu Sep 23, 2010 1:46 am Post subject: |
Mousee wrote: | awalp wrote: | cyrillic wrote: | Don't forget, the PCI bandwidth is shared with other PCI devices such as your network card(s), and not just the harddrive controller(s). |
The motherboard has VGA, Network, and everything else integrated and usually only a power connector and network cable are attached to the computer.
The PCI SATA adapter is the only PCI card in the computer. |
Such integrated devices almost always share the same PCI bus as PCI expansion devices. So you aren't saving PCI bandwidth by having a motherboard with 2 integrated NIC's, etc. |
--------- You're not understanding. All I stated was the features I utilize on the computer. Is that hard to understand?
Do you really think an ethernet card is going to take a noticeable amount of bandwidth even if it were located on the PCI bus?
Ethernet 10/100 = 12MB/sec max
Usually integrated features are part of the chipset and connected directly to a bridge (northbridge or southbridge), therefore bypassing PCI, or are at least designed to be more efficient and may or may not interfere with the PCI slot's bus performance.
All integrated features are probably using a dedicated bus directly connected to the southbridge or northbridge.
If you have SATA on your motherboard it connects straight to the southbridge, and the northbridge and southbridge have a dedicated bus between them.
Where do you get the idea that I think I'm saving PCI bandwidth by having 2 integrated NICs? I have a monitorless fileserver, and the best way to achieve that hardware-wise is a motherboard with integrated VGA and an ethernet port. If I were purposely building a computer that would be accessed only by ssh, why would I do any differently?
Either way, a 2500+ uses 333MHz (PC2700) RAM, so it should have a higher bus speed than plain old PCI 100/133 (even if it is only 166x2, or a 5:4 FSB/memory sync of 333MHz RAM to a 266MHz bus).
So unless the PCI slot itself is limited to 133, it should be faster (at least 166, maybe 266), and there is no possibility of anything else slowing it down.
krinn wrote: | You really think XFS is less complex than ext3?
I have a Pentium 4 (I think it's a C, anyway one with 1 core + HT); it runs so slowly compared to my system that I sometimes wish to throw it out of the window (fortunately it's in the garage, and there's no window there).
|
I never said that XFS was less or more complex than ext3. I just said that XFS on a raid0 setup, alternating between the drives every chunk, isn't really a complex filesystem setup. The fact that the array is XFS should not hurt performance; it is actually a rather high-performing filesystem compared to ext3 and should be faster even on much less powerful hardware. As a filesystem, XFS was a no-brainer choice: I wanted stability, reliability, simplicity, and speed, with no worries about lots of tiny files vs big files, so scalability too. ext4 has been out for what, 3 years now? and was made to compete with XFS and the other newer filesystems; ext3 is ancient and not a great fit for a fileserver.
Why would a faster filesystem be slower? Or why would I use a slower filesystem to be faster?
----
Either way, from benchmarks and research I have done on raid0 with XFS: on larger drives the speed increases significantly as the chunk size increases, up to an optimal 512k; at 1024k you start to notice a decline. So my chunk size should be 512k instead of 64k. That would be one area where improvement would occur.
The chunk size is the amount of data written to each drive before the other drive is used.
I can see where this could be the issue, as there are a lot more 64k chunks on a 1.5TB array than on a 500GB one.
For a 1024k read, that is the difference between reading 2 chunks vs 16 chunks: far more segmented, scattered, and more requests to process.
Does anyone have any solid numbers comparing a large 2-drive raid0 XFS setup at the default chunk size to the same raid0 at a 512k chunk size?
I'm trying to find numbers that match mine: no performance change with raid0 at the default chunk size vs a performance increase with larger chunk sizes.
I know that performance increases with larger chunk sizes; I'm just trying to make sure that is the issue before making any major change. |
dmpogo Advocate
Joined: 02 Sep 2004 Posts: 3468 Location: Canada
Posted: Thu Sep 23, 2010 4:14 am Post subject: |
awalp wrote: | Ethernet 10/100 = 12MB/sec max
Usually integrated features are part of the chipset and connected directly to a bridge (northbridge or southbridge), therefore bypassing PCI, or are at least designed to be more efficient and may or may not interfere with the PCI slot's bus performance.
All integrated features are probably using a dedicated bus directly connected to the southbridge or northbridge.
|
Have you ever looked at a motherboard block diagram? Look at this one, for example (page 8)
http://www.gigabyte.com/products/product-page.aspx?pid=2958#manual
and tell us where the integrated network chips are. |
dmpogo Advocate
Joined: 02 Sep 2004 Posts: 3468 Location: Canada
Posted: Thu Sep 23, 2010 4:19 am Post subject: |
awalp wrote: | Either way, a 2500+ uses 333MHz (PC2700) RAM, so it should have a higher bus speed than plain old PCI 100/133 (even if it is only 166x2, or a 5:4 FSB/memory sync of 333MHz RAM to a 266MHz bus).
So unless the PCI slot itself is limited to 133, it should be faster (at least 166, maybe 266), and there is no possibility of anything else slowing it down.
|
The theoretical peak transfer rate of a standard PCI bus (32-bit, 33 MHz) is 133 MB/s:
http://en.wikipedia.org/wiki/Conventional_PCI |
Mousee Apprentice
Joined: 29 Mar 2004 Posts: 291 Location: Illinois, USA
Posted: Thu Sep 23, 2010 4:34 am Post subject: |
awalp wrote: | Mousee wrote: | awalp wrote: | cyrillic wrote: | Don't forget, the PCI bandwidth is shared with other PCI devices such as your network card(s), and not just the harddrive controller(s). |
The motherboard has VGA, Network, and everything else integrated and usually only a power connector and network cable are attached to the computer.
The PCI SATA adapter is the only PCI card in the computer. |
Such integrated devices almost always share the same PCI bus as PCI expansion devices. So you aren't saving PCI bandwidth by having a motherboard with 2 integrated NIC's, etc. |
--------- You're not understanding. All I stated was the features I utilize on the computer. Is that hard to understand?
|
I was simply stating and backing the reason cyrillic made the comment he/she did. It makes sense, as a single integrated Gigabit NIC along with a SATA controller, both attached to the PCI bus, could easily saturate it. I doubt, however, that your previously posted drive speeds are a result of said saturation. But I wasn't really interested in expanding on that, as I think dmpogo's comment was correct in suggesting that your PCI card has hit its limit. That's all. |
awalp n00b
Joined: 29 May 2003 Posts: 73
Posted: Fri Sep 24, 2010 2:07 am Post subject: |
The problem was the chunk size.
I will post some numbers when I'm done transferring all the files back onto the array.
The solution was
recreating the mdadm array with a proper chunk size & formatting XFS with specific options so it aligns to it properly.
Create the array with a chunk size of 1024k instead of the default 64k chunk:
Code: | mdadm --create /dev/md1 --level=0 --chunk=1024 --raid-devices=2 /dev/sdc1 /dev/sdd1 |
Format /dev/md1 as XFS with the options required to match the chunk size, -d sunit=2048,swidth=4096 (I also threw in a couple of extra tweaks):
Code: | mkfs.xfs -f -b size=4096 -d sunit=2048,swidth=4096 -i attr=2 -l lazy-count=1,version=2,sunit=256 /dev/md1 |
Format XFS using sunit=2048 (the 1024k chunk size expressed in 512-byte sectors) and swidth=4096 (sunit x number of drives, here 2).
Also, XFS cannot use a log stripe unit that large and falls back to 32k, so -l sunit=256 (tune the log stripe unit).
--- -b size=4096 (the default block size, specified explicitly); the block size cannot be larger than the kernel page size (4096 bytes)
--- -l lazy-count=1 (speeds things up by updating logged counters much less often; riskier in unstable power situations)
--- -l version=2 (the default, but just making sure)
--- -i attr=2 (the default, but just making sure)
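To double-check that the filesystem really picked up the intended alignment, something like this should do (a sketch: /mnt/md1 is only a guess at the mount point, and xfs_info reports sunit/swidth in filesystem blocks, so with 4k blocks the expected values would be 256 and 512):
Code: | # confirm stripe unit/width as seen by XFS (values shown in 4096-byte blocks)
xfs_info /mnt/md1 | grep -E 'sunit|swidth'
# confirm the chunk size mdadm actually used
mdadm --detail /dev/md1 | grep -i chunk
|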
The code above brought the speed up to (with /dev/md1 mounted):
* single drive speed, ~82MB/sec
* raid0 array speed, ~95MB/sec
Interestingly enough, with /dev/md1 unmounted, hdparm -tT shows array speeds of ~115MB/sec:
* single drive speed, ~82MB/sec (same speed as when mounted)
* raid0 array speed, ~115MB/sec!
Whether this means I could further optimize the setup to obtain that 115MB/sec while mounted, who knows...
Either way, the chunk size was the limiting factor, and using a larger chunk size with a matching filesystem layout was the solution.
So any raid0 setup with a large filesystem (mine is 1.5TB) needs a large chunk size to make optimal use of raid0.
Otherwise, if there had been no noticeable speed improvement from raid0, it would have been an unnecessary risk. |
zeek Guru
Joined: 16 Nov 2002 Posts: 480 Location: Bantayan Island
Posted: Fri Sep 24, 2010 3:55 am Post subject: Re: raid0 vs single drive transfer speeds, no improvement |
hdparm is a pretty useless benchmark; it only does a short raw sequential read and never exercises the filesystem. Try bonnie++ or similar. |
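A bonnie++ run on this box might look something like this (a sketch: the mount point and user are guesses, and -s should be at least twice the RAM size, so 1024MB here, to defeat caching):
Code: | # -d is the directory to test in, -s the test size in MB, -u the user to run as when invoked as root
bonnie++ -d /mnt/md1 -s 1024 -u nobody
|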
awalp n00b
Joined: 29 May 2003 Posts: 73
Posted: Mon Sep 27, 2010 7:10 pm Post subject: |
hdparm -tT benchmarks with arrays mounted (I get much higher numbers unmounted)
fileserver:~# hdparm -tT /dev/hda /dev/sda /dev/sdb /dev/md0 /dev/sdc /dev/sdd /dev/md1
/dev/hda:
Timing cached reads: 472 MB in 2.00 seconds = 235.79 MB/sec
Timing buffered disk reads: 80 MB in 3.01 seconds = 26.57 MB/sec
------------------------ MD0, ~65MB to 85MB/sec gain,-------------------
/dev/sda:
Timing cached reads: 484 MB in 2.01 seconds = 241.07 MB/sec
Timing buffered disk reads: 196 MB in 3.03 seconds = 64.74 MB/sec
/dev/sdb:
Timing cached reads: 474 MB in 2.00 seconds = 236.67 MB/sec
Timing buffered disk reads: 202 MB in 3.02 seconds = 66.82 MB/sec
/dev/md0:
Timing cached reads: 474 MB in 2.00 seconds = 236.99 MB/sec
Timing buffered disk reads: 256 MB in 3.01 seconds = 84.91 MB/sec
-------------------------------------------------------------------------------------
------------------------ MD1, ~78MB to 85MB/sec gain,-------------------
/dev/sdc:
Timing cached reads: 466 MB in 2.00 seconds = 232.89 MB/sec
Timing buffered disk reads: 234 MB in 3.01 seconds = 77.87 MB/sec
/dev/sdd:
Timing cached reads: 470 MB in 2.01 seconds = 234.32 MB/sec
Timing buffered disk reads: 236 MB in 3.02 seconds = 78.26 MB/sec
/dev/md1:
Timing cached reads: 472 MB in 2.00 seconds = 235.51 MB/sec
Timing buffered disk reads: 258 MB in 3.03 seconds = 85.17 MB/sec
---------------------------------------------------------------------------------
that particular run was on the low end, usually it averages about 5MB/sec higher on MD1 (~82MB/sec to 95MB/sec)
there may be something with 85MB/sec being a limit
both arrays seem to limit themselves to that when they are mounted.
unmounted md1 array reaches 115MB/sec
I'm gonna guess that mounted vs unmounted difference is a bandwidth issue somewhere.
-------------- With Filesystems NOT mounted / unmounted ---------------
------------------------ MD0, ~65MB to 106MB/sec gain,-------------------
/dev/sda:
Timing cached reads: 452 MB in 2.00 seconds = 225.50 MB/sec
Timing buffered disk reads: 198 MB in 3.03 seconds = 65.40 MB/sec
/dev/sdb:
Timing cached reads: 472 MB in 2.01 seconds = 235.36 MB/sec
Timing buffered disk reads: 196 MB in 3.00 seconds = 65.31 MB/sec
/dev/md0:
Timing cached reads: 450 MB in 2.01 seconds = 224.37 MB/sec
Timing buffered disk reads: 320 MB in 3.00 seconds = 106.56 MB/sec
------------------------ MD1, ~78MB to 110MB/sec gain,-------------------
/dev/sdc:
Timing cached reads: 464 MB in 2.01 seconds = 231.07 MB/sec
Timing buffered disk reads: 236 MB in 3.01 seconds = 78.49 MB/sec
/dev/sdd:
Timing cached reads: 454 MB in 2.00 seconds = 226.92 MB/sec
Timing buffered disk reads: 234 MB in 3.01 seconds = 77.71 MB/sec
/dev/md1:
Timing cached reads: 468 MB in 2.00 seconds = 233.79 MB/sec
Timing buffered disk reads: 332 MB in 3.01 seconds = 110.31 MB/sec
------------------------------------------------------------------------------------
There is a huge difference between the mounted and unmounted numbers; whether that is just a bandwidth limitation of the hardware or whether I can reach those numbers by tweaking the system further, I have no idea.
I'll look into bonnie++ for some better numbers soon. |
awalp n00b
Joined: 29 May 2003 Posts: 73
Posted: Mon Sep 27, 2010 7:40 pm Post subject: |
also, could this error be causing a performance loss?
[ 1632.756007] Filesystem "md0": Disabling barriers, not supported by the underlying device
[ 1632.835622] XFS mounting filesystem md0
[ 1632.952003] Ending clean XFS mount for filesystem: md0
Would being able to enable barriers (rather than having them disabled, as in the message above) change performance? |
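Barriers are normally a safety feature rather than a speed feature; having them disabled usually makes writes faster, at some risk on sudden power loss. A quick way to see what the mounted filesystems ended up with, as a sketch:
Code: | # show the options md0/md1 were actually mounted with
grep 'md[01]' /proc/mounts
# the log line above just means md0 can't pass barriers down, so XFS turned them off
dmesg | grep -i barrier
|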
krinn Watchman
Joined: 02 May 2003 Posts: 7470
Posted: Mon Sep 27, 2010 8:49 pm Post subject: |
Or just use dd.
Write 2GB:
dd if=/dev/zero of=/tmp/ddtest bs=8k count=250k
256000+0 records in
256000+0 records out
2097152000 bytes (2.1 GB) copied, 12.3672 s, 170 MB/s
Read:
dd if=/tmp/ddtest of=/dev/null bs=8k count=250k
256000+0 records in
256000+0 records out
2097152000 bytes (2.1 GB) copied, 3.87711 s, 541 MB/s |
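Worth noting that the 541 MB/s read above is almost certainly coming from the page cache rather than the disks. A variation that tries to avoid that could look like this (a sketch: the /mnt/md1 path is only an example):
Code: | # write test: conv=fdatasync makes dd include the time needed to flush data to disk
dd if=/dev/zero of=/mnt/md1/ddtest bs=8k count=250k conv=fdatasync
# drop the page cache, then read the file back from the disks
sync; echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/md1/ddtest of=/dev/null bs=8k count=250k
rm /mnt/md1/ddtest
|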
shamen n00b
Joined: 28 Sep 2010 Posts: 1
Posted: Tue Sep 28, 2010 10:00 am Post subject: |
I registered just to reply to this as I saw exactly the same thing on my NAS; specs:
P4 1.8GHz
Sil3512 SATA card (2 port)
512MB RAM
Asus P4S333 mobo
Software raid 1
Disk 1: Maxtor 1TB 7200rpm SATA
Disk 2: WD Caviar Green 1TB SATA
After thinking the issue could be the WD, I removed the WD Caviar from the array so I was running on one disk. Just wanted to make sure I was not being pulled down by the 'greenness'.
The first thing I looked into was PCI bus transfer - that's a max of 133MB/s (~1Gbps) on the whole bus. I have disabled every internal device - sound, NIC, LAN, second IDE etc. - in the BIOS, so there's just this SATA card on the PCI bus and my AGP graphics. I was seeing timed cached reads of about 400MB/sec (though strangely, only about 200MB/sec for the first 12 hours after a reboot?!) and timed buffered disk reads of about 20MB/sec - very low for SATA buffered reads. O_DIRECT cached reads were about 65MB/sec and O_DIRECT disk reads were about 95MB/sec:
Quote: | xxx@hfw1:~$ sudo hdparm --direct -Tt /dev/md0
/dev/md0:
Timing O_DIRECT cached reads: 134 MB in 2.02 seconds = 66.42 MB/sec
Timing O_DIRECT disk reads: 288 MB in 3.01 seconds = 95.75 MB/sec
xxx@hfw1:~$ sudo hdparm -Tt /dev/md0
/dev/md0:
Timing cached reads: 814 MB in 2.00 seconds = 406.39 MB/sec
Timing buffered disk reads: 62 MB in 3.09 seconds = 20.03 MB/sec
|
So my direct disk reads (non-cached) seemed very close to the 'actual theoretical maximum PCI speed', but the buffered disk reads seemed way out of whack. I checked CPU use, IO wait and load - all happy, low and clear skies.
Quick fix
Quote: |
blockdev --setra 4096 /dev/md0
|
That helped me no end, though different readahead values may suit you better. I suggest a looping shell script which sets the readahead to a number, tests it, prints the result and moves on to the next number (see the sketch below). Do it in round sizes: 128, 256, 1024, 2048, 4096, etc.
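Something along these lines for the loop, as a rough sketch (readahead is set in 512-byte sectors; adjust the device and the list of values to taste):
Code: | # try a range of readahead values and time a raw read for each
for ra in 128 256 512 1024 2048 4096 8192; do
    blockdev --setra $ra /dev/md0
    echo "readahead = $ra sectors"
    hdparm -t /dev/md0
done
|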
My disks now:
Quote: | xxx@hfw1:~$ sudo hdparm -Tt /dev/md0
/dev/md0:
Timing cached reads: 842 MB in 2.00 seconds = 420.43 MB/sec
Timing buffered disk reads: 272 MB in 3.00 seconds = 90.57 MB/sec
|
Just FYI, here are the actual speeds returned from the physical Maxtor SATA drive:
Quote: |
xxx@hfw1:~$ sudo hdparm --direct -Tt /dev/sdb
/dev/sdb:
Timing O_DIRECT cached reads: 146 MB in 2.01 seconds = 72.55 MB/sec
Timing O_DIRECT disk reads: 306 MB in 3.01 seconds = 101.73 MB/sec
xxx@hfw1:~$ sudo hdparm -Tt /dev/sdb
/dev/sdb:
Timing cached reads: 842 MB in 2.00 seconds = 420.30 MB/sec
Timing buffered disk reads: 284 MB in 3.01 seconds = 94.47 MB/sec
|
And the speeds from my internal PATA drive. Cached reads are actually quicker on the PATA! Although the real direct disk transfer rate is half of what the SATA manages (though don't forget that on the PCI bus SATA is limited to about 2/3 of its max speed anyway - and that's a theoretical maximum).
Quote: |
xxx@hfw1:~$ sudo hdparm --direct -Tt /dev/sda
/dev/sda:
Timing O_DIRECT cached reads: 80 MB in 2.01 seconds = 39.85 MB/sec
Timing O_DIRECT disk reads: 124 MB in 3.01 seconds = 41.26 MB/sec
xxx@hfw1:~$ sudo hdparm -Tt /dev/sda
/dev/sda:
Timing cached reads: 850 MB in 2.00 seconds = 424.76 MB/sec
Timing buffered disk reads: 136 MB in 3.03 seconds = 44.87 MB/sec
|
I'm going to upgrade the Sil3512 BIOS from the FakeRaid BIOS to the 'Base' BIOS, as I hear that boosts performance a bit when using software RAID, but I don't think there's much more juice to be had - I'm not far off the PCI bus theoretical maximum for this machine, and it hasn't got a huge amount of RAM. |