screwloose Tux's lil' helper
Joined: 07 Feb 2004 Posts: 94 Location: Toon Town, Canada
Posted: Thu Jun 02, 2005 9:26 pm Post subject: Help selecting a RAID5 SATA controller
I am trying to pick a SATA controller to build a data storage server at work. The plan is to hook up four 400GB drives and set them up in a RAID5 array. There is no restriction on bus type (PCI, PCI-E, PCI-X, etc.) on this server, as we are planning to build the machine around whatever controller we select.
I would appreciate any suggestions for cards you have used and have had a good experience with. Suggestions of cards to avoid at all costs would also be welcome. From what I have researched, some cards support array monitoring under a 2.4 kernel, but 2.6 support seems uncertain. I would use a card that supports either kernel, so any known issues would be nice to hear about in advance.
Here are the guidelines we are looking at:
Being able to hotswap drives is not important.
Being able to build the array in the OS vs the card BIOS is not important.
The ability to have hotspares in the array would be handy but is not necessary.
Being able to monitor the health of the array from the OS is a must.
Thanks in advance for any suggestions.
_________________
If something can go wrong it probably already has. You just don't know it yet. ~Henry's Modified version of Murphy's Law
adaptr Watchman
Joined: 06 Oct 2002 Posts: 6730 Location: Rotterdam, Netherlands
Posted: Thu Jun 02, 2005 10:17 pm Post subject:
Quote: | Being able to hotswap drives is not important. |
No, but you got it anyway - SATA is hot-swappable by design.
Quote: | The ability to have hotspares in the array would be handy but is not necessary. |
If you are going to the lengths of monitoring the array health in realtime, then hotspares are essential.
If losing one drive is not important enough to you to rebuild the array immediately, then monitoring the array is also pointless.
I think you need to re-think your goals here.
Quote: | Being able to build the array in the OS vs the card BIOS is not important. |
This essentially means you don't prefer hardware RAID over Linux kernel softraid.
If so, then the following:
Quote: | Being able to monitor the health of the array from the OS is a must. |
pretty much determines that you want to use software RAID - since you can obviously monitor a kernel softraid array from the kernel.
For software RAID, you can use any card that has decent drivers in Linux.
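That monitoring can be sketched in a few lines of shell against /proc/mdstat; the helper name and file paths here are illustrative, not from any particular tool. A failed member of an md array shows up as an underscore in the status field (e.g. [UU_U] for a four-disk array with one dead drive):

```shell
#!/bin/sh
# Minimal sketch of softraid monitoring via /proc/mdstat. A healthy 4-disk
# array reports [UUUU]; a failed member shows as an underscore, e.g. [UU_U].
mdstat_health() {
    # $1: path to an mdstat-format file; prints DEGRADED or OK
    if grep -q '\[[U_]*_[U_]*\]' "$1" 2>/dev/null; then
        echo DEGRADED
    else
        echo OK
    fi
}

[ -r /proc/mdstat ] && mdstat_health /proc/mdstat || :
```

Dropped into a cron job, a check like this gives exactly the polling the original poster asked about.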
EDIT: alright, I'll add a real recommendation: if you want to use a real RAID controller and be able to forget about it, buy an Areca 8-port SATA board.
They cost around $700 and will give you 400 MB/s sustained (!) with the 8 slots filled with 10K Raptors.
If you intend to use 400GB SATA drives (ergo 7200rpm) you will still get 250 MB/s easily.
_________________
>>> emerge (3 of 7) mcse/70-293 to /
Essential tools: gentoolkit eix profuse screen
overkll Veteran
Joined: 21 Sep 2004 Posts: 1249 Location: Austin, Texas
Posted: Thu Jun 02, 2005 10:25 pm Post subject:
Off the top of my head I'd say 3ware, or a standard Promise 4-port card using Linux md and mdadm.
3ware - Hardware Raid Controller
Although their "official" support may be lacking, the driver source for the controller is in the kernel - and has been for a long time. The monitoring tools are a web GUI or a CLI. Spares and hot swapping are possible. They even sell drive cages. Array building is done via the BIOS, and the array is presented to the OS as one disk. The monitoring app can send notifications via email. Check out 3ware.com for more details.
Software RAID with a standard (non-RAID) Promise controller using the md/raid kernel features and mdadm
2.6 kernels support the Promise cards. Array creation/maintenance is done via the OS. RAID arrays consist of partitions, not drives. Not sure about physical hotswap, but devices can be added to/removed from an array while the array is active via the CLI. Arrays are available while building or repairing. Has monitoring capability via the command line or scripts. Can send notifications via email. Has spare capabilities. Less expensive than the 3ware option.
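For reference, the md/mdadm route described here boils down to a couple of commands. This is a sketch only: the device names (four partitions sda1..sdd1 plus a spare on sde1) are invented examples, and the `run` wrapper just echoes instead of executing:

```shell
#!/bin/sh
# Sketch of the mdadm workflow: create a RAID5 from partitions, keep a hot
# spare, and run the monitor daemon. Device names are examples only.
run() { echo "would run: $*"; }   # drop the echo to execute for real

# 4 active members plus 1 hot spare, built from partitions (not whole drives)
run mdadm --create /dev/md0 --level=5 --raid-devices=4 --spare-devices=1 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
# watch the array and mail root on failure events
run mdadm --monitor --scan --daemonise --mail=root@localhost
```

With a spare present, mdadm's monitor mode will pull it in automatically on a drive failure, which covers the hotspare point raised earlier in the thread.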
I use both - one box has a 4-port PATA controller, and in the other box I use the Promise card to extend the number of SATA ports for software RAID. I've had the 3ware card the longest and I have to say it is very dependable. The only problems I have had with the 3ware array were a faulty IDE ribbon and a failed drive. I'd also like to commend 3ware for its firmware upgrades: I originally bought the card as a UDMA100-compatible controller, and over the life of the card, firmware upgrades added UDMA133 and SATA capabilities. Of course, for SATA I need connector adapters.
I am happy with the software RAID setup as well. It's less expensive and so far just as reliable.
Hope this helps.
screwloose Tux's lil' helper
Joined: 07 Feb 2004 Posts: 94 Location: Toon Town, Canada
Posted: Thu Jun 02, 2005 11:20 pm Post subject:
adaptr wrote: |
No, but you got it anyway - SATA is hot-swappable by design. |
SATA may be hot-swappable by design, but it's not a feature all controllers support. Again, this is not a feature that overly concerns me.
adaptr wrote: |
If you want to go to the lengths to be able to monitor the array health in realtime then you are very much mistaken: hotspares are essential in that case.
If the fact that the array loses one drive is not important enough to you to be able to rebuild it instantly, then monitoring the array is also useless.
I think you need to re-think your goals here. |
Hotspares are something I'm currently pushing for, but of course there is always the issue of what you can get management to understand and pay for.
A controller capable of hotspares should be able to integrate the hotspare without user intervention, quickly restoring redundancy. A RAID5 array with no hotspares is still functional with one dead drive, but you no longer have any redundancy until the dead drive has been replaced. In either case I would like to be able to poll the controller (cron job) or have it trigger some event that lets me know a drive has failed. Once the system notifies me, I can schedule downtime to deal with the dead drive.
I'm not aiming for the magical grail of zero downtime, as I don't have an unlimited budget and this will not be a high-availability public server. I'm just trying to balance these factors.
adaptr wrote: |
Quote: | Being able to build the array in the OS vs the card BIOS is not important. |
This essentially means you don't prefer hardware RAID over Linux kernel softraid.
If so, then the following:
Quote: | Being able to monitor the health of the array from the OS is a must. |
pretty much determines that you want to use software RAID - since you can obviously monitor a kernel softraid array from the kernel.
|
I would prefer a hardware RAID solution; I just haven't heard many people talking about what tools are available from vendors or the community to manage the hardware solutions under Linux. This topic is frequently covered in Windows reviews of cards, but you don't often hear about it in the Linux world - hence this thread, to see what other people have had success with.
Again, I'm not looking for anything fancy. I can write a simple cron task to email me the array's status if necessary, but in that case there needs to be something I can poll to get that status.
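A cron check of that kind needs very little: remember the last status and report only when it changes (cron mails any stdout to MAILTO by default). A sketch, assuming some command yields the status text - /proc/mdstat is used here, but a vendor CLI would serve the same role; the function name is invented:

```shell
#!/bin/sh
# Sketch of a cron-driven notifier: compare the current status against the
# last run's and print (i.e. get cron to email) only on change.
notify_if_changed() {
    # $1: current status text, $2: state file; prints the status when changed
    if [ "$1" != "$(cat "$2" 2>/dev/null)" ]; then
        printf '%s\n' "$1" > "$2"
        printf 'RAID status changed:\n%s\n' "$1"
    fi
}

status=$(cat /proc/mdstat 2>/dev/null)
notify_if_changed "$status" "${TMPDIR:-/tmp}/raid-status.last"
```

Scheduled as e.g. `*/15 * * * *` in a crontab with MAILTO set, this stays silent until a drive drops out.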
adaptr wrote: |
For software RAID, you can use any card that has decent drivers in Linux.
|
I am aware that what I am asking is possible using software RAID, but I would like to make sure I have considered all the options before resorting to that. There is a good chance that if this machine works out, I may be tasked with building a database server with higher specs. In that case I would prefer not to be using software RAID, as the processor will be otherwise busy.
adaptr wrote: |
EDIT: alright, I'll add a real recommendation: if you want to use a real RAID controller and be able to forget about it, buy an Areca 8-port SATA board.
They cost around $700 and will give you 400 MB sustained (!) with the 8 slots filled with 10K Raptors.
If you intend to use 400MB SATA drives (ergo 7200rpm) you will still get 250MB/second easy. |
Thanks for the suggestion.
augury l33t
Joined: 22 May 2004 Posts: 722 Location: philadelphia
Posted: Fri Jun 03, 2005 7:21 am Post subject:
Seagate drives have a reputation for reliability. 400GB is a capacity that maxes out the present technology, which makes it expensive and unpredictable; 250GB or 200GB drives might give you the $/GB sweet spot. The nature of RAID5 is that you have some number of spindles, some of which will expire more quickly than the rest; few arrays fail outright, so only a few spares are necessary. RAID cards have battery-backed RAM so they can complete a write action if the power goes down. Most filesystems use a journal for the same purpose, which is more closely tied to the kernel filesystem, so in practice recovery from failures is more certain and automated; you can use a single disk to journal for many arrays. Software RAID will put any two devices in an array. The 2.4 kernel may not even build any more, so try a vanilla 2.6.
nic01 Tux's lil' helper
Joined: 17 Mar 2004 Posts: 87 Location: Copenhagen
Posted: Fri Jun 03, 2005 6:11 pm Post subject:
http://www.hwb.no/artikkel/15307 is a Norwegian test of different HW/SW SATA/SCSI RAID controllers under Linux. You might find it interesting.
Other than that, 3ware's controllers work pretty well in Linux.
/Nic
Sparohok n00b
Joined: 29 Aug 2004 Posts: 13
Posted: Wed Jun 08, 2005 1:56 am Post subject:
Basically everything adaptr wrote is incorrect or misleading.
adaptr wrote: | No, but you got it anyway - SATA is hot-swappable by design. |
libata does not yet support hotswap, so for all practical purposes, SATA is not hotswappable today under Linux unless you buy a hardware RAID controller with its own drivers.
http://linux.yyz.us/sata/software-status.html#hotplug
adaptr wrote: | If you want to go to the lengths to be able to monitor the array health in realtime then you are very much mistaken: hotspares are essential in that case. |
Wrong. You are assuming that high availability is a requirement. If you can bring the array down in the case of hard drive failure, monitoring is still critical but hot spares are not.
If availability is such a critical requirement that you can't bring the array down in the case of drive failure, you should not be running Gentoo, IMHO.
adaptr wrote: | Quote: | Being able to build the array in the OS vs the card BIOS is not important. |
This essentially means you don't prefer hardware RAID over Linux kernel softraid. |
There are all sorts of reasons to prefer hardware RAID that have nothing to do with building the array in BIOS.
1) hardware RAID makes much more efficient use of the system bus
2) hardware RAID may support more advanced features (level migration, distributed sparing, etc.)
3) hardware RAID avoids using CPU for XORs
4) hardware RAID may be more reliable in power failures
Personally, I don't find any of these compelling for my needs. However, how the array is built is perhaps the least important distinction between hardware and software RAID.
One option nobody has mentioned is Broadcom/Raidcore. They have Linux drivers with proprietary RAID software. They do support hotplug under Linux. (I know this isn't a priority for the original poster, but others may care.) They are cheaper per port than comparable hardware RAID and provide many of its advanced features, but they are really a software solution. I'm very curious whether anyone here has firsthand experience with this controller; I don't.
http://www.broadcom.com/products/brand.php?brand_id=37
Martin
cummings66 n00b
Joined: 22 Feb 2004 Posts: 42 Location: Moberly, MO
Posted: Wed Jun 08, 2005 9:21 pm Post subject:
I think he was correct more than he was not. You have to consider that the original post was not in agreement with itself. I mean, he said he wanted some things, but that others were not important. When he said he didn't care whether it was built in the OS or the BIOS, that says he is looking for cheap solutions. You can use software RAID in that case.
Then he says he wants to monitor it. That's nice, but most of us who monitor RAID arrays do so because the data is important, and cheap does not agree with important, so now we have a problem defining exactly what he wants, because it contradicts his requirements.
I think what he wants is a cheap hardware RAID card that he can monitor with some software. All the rest doesn't matter to him. Not a bad way to go if money is hard to come by from management. So this means he wants a SATA hardware RAID solution. Exactly which card he would need is hard to say, because he didn't state his budget; if he posts that, it will be easier to suggest a solution. Personally, if I were to do RAID I'd do it in hardware, not software. I would want XOR in hardware as well, because no driver should be needed that takes CPU power to run.
I prefer hardware RAID solutions because they're more robust. I personally like the MegaRAID stuff from LSI, but I'd be leading you astray in suggesting them, because I only use SCSI and I'm not familiar with their SATA RAID controllers. But if they're like the SCSI stuff, I'd go that route myself. By the way, when their arrays have a failure, if you're in the same building you'll notice it; no email is necessary unless you disable the alarms.
Here's what I'd get myself if I used SATA.
http://www.lsilogic.com/products/megaraid/sata_150_4.html
I say that because their SCSI cards are bulletproof and rock solid under Linux. I run a news server on one and it's solid; many gigs pass through it and not a single problem to date. I trust them to have a good SATA product but, as I said, I don't know that from personal experience. They do have a megamgr program for it, but I don't know whether it has command-line options to check status.
If cost is an issue, maybe eBay is an answer; I see the card there sometimes. You'll still need an enclosure for it, by the way.
Sparohok n00b
Joined: 29 Aug 2004 Posts: 13
Posted: Wed Jun 08, 2005 11:25 pm Post subject:
cummings66 wrote: | You have to consider the original post was not in agreement with itself. |
The original post is internally consistent. He is simply stating his requirements. None of them are contradictory.
Even if the original poster were not consistent, he was asking a question, so I would tend to cut him some slack. Adaptr was providing answers, but they were largely false or misleading, which is unacceptable.
cummings66 wrote: | Then he says he wants to monitor it, well, that's nice but most of us that monitor raid arrays do so because it's important and cheap does not agree with important, so now we have a problem defining exactly what he wants because it's contradicting his requirements. |
Wait. You're running Gentoo Linux. Yet, you believe that cheap does not agree with important.
Either your work is unimportant, or you paid a hell of a lot more for Gentoo than I did. My work is important, and I choose solutions based on how good they are, not how much they cost.
Hardware RAID is not inherently more reliable than software RAID.
Martin |
cummings66 n00b
Joined: 22 Feb 2004 Posts: 42 Location: Moberly, MO
Posted: Thu Jun 09, 2005 4:53 am Post subject:
We're talking hardware here, per the subject title and also his later post.
It's obvious he wants a RAID card that doesn't cost an arm and a leg and works well with Linux. I posted what I think I would choose if I were to go with a non-SCSI system. But, as I said, we need to know his budget if we're to give him a better answer. There are good cheap cards and bad expensive ones, depending on their intended usage. What are we using RAID for? Is it for speed, space, security, or all of the above? What mode are we going to be using, in other words: 0, 1, 5, 10, 50? Why? Those things also define what card we need to look for. For example, if you're not using 5 or its ilk, then maybe you don't need a total hardware solution and can look to a cheaper card. If all you're doing is spanning drives, you don't need a lot of bells and whistles. We know it's for storage, but for what purpose? The purpose more than anything else determines the card needed. Personally speaking, what I have on my own system is more important to me than what I have on the public systems I have access to. I will protect my data at home at the expense of speed, for safety's sake.
Even without the fancy programs you can often look in /proc or /sys, and if it's like the SCSI driver, you'll be able to cat a file and grep it for status info to email, or to sound an alert.
For what it's worth, many times under Linux you need to be able to write your own software to do what you're looking for. It's always been that way with Unix in general, and probably always will be. It's not a solution for the masses, IMO. That's why I said you can cat the files in /proc or /sys, grep them for the words you need, then execute an action based on what you find there. I don't know how he intended to notify himself of a failure, but I know that some drivers will post drive status for you to look at.
For an example, here's a cat of one of my RAID drives. I could look for the word Online or Offline if I wanted, and then email myself the results as needed, or page a number, etc. In other words, no software beyond what I would write is needed for monitoring. Here are some heavily edited (read: clipped to one drive) examples of the data that may be available, depending on the card. If people running the cards they suggest could post back something similar, it might help him decide the level of support a card has under Linux.
```
Channel: 0 Id: 0 State: Online.
Vendor: COMPAQ Model: BD03664545 Rev: B20B
Type: Direct-Access ANSI SCSI revision: 02
```
Or take a very simplistic example:
```shell
#!/bin/bash
grep Online /proc/megaraid/hba0/diskdrives-ch0
# End of script
```
and the results that follow could be emailed:
```
Channel: 0 Id: 0 State: Online.
```
adaptr Watchman
Joined: 06 Oct 2002 Posts: 6730 Location: Rotterdam, Netherlands
Posted: Thu Jun 09, 2005 8:32 am Post subject:
Sparohok wrote: | Adaptr was providing answers, but they were largely false or misleading, which is unacceptable |
I'm not asking you to accept them.
I'm providing them, much in the spirit of the Gnu GPL, "without warranty, implied or otherwise".
If you expected anything else on a forum such as this, you should probably go elsewhere with your highhanded opinions.
For what it's worth, cummings66 did pretty much sum up both my reaction to your post and my reading of the original thread starter: his requests are, to a significant degree, not easily attainable simultaneously.
If you disagree with that, then please provide arguments to support your position; slagging off my responses because you deem them "unacceptable" is both easy and opinionated.
My responses are as valid as anyone else's, given the terms he stated.
cummings66 n00b
Joined: 22 Feb 2004 Posts: 42 Location: Moberly, MO
Posted: Thu Jun 09, 2005 3:06 pm Post subject:
Here's an interesting link that might be useful. It's Windows-oriented, but still telling in that they discuss the cards we've been talking about.
http://www.tweakers.net/reviews/557
pksings Tux's lil' helper
Joined: 26 Oct 2003 Posts: 110 Location: Southern California
Posted: Thu Jun 09, 2005 4:41 pm Post subject: 3ware SATA RAID
I would like to post a warning about 3ware SATA RAID.
I have not been able to make mine boot from a SATA drive: grub fails to install and lilo just hangs.
I boot from a regular IDE drive just fine, and it then mounts the SATA drives and runs just fine. But it's really dumb to have to keep an IDE drive in just to boot.
And 3ware would not help, so their support is non-existent. I will never buy another, nor recommend them.
_________________
PK
Sparohok n00b
Joined: 29 Aug 2004 Posts: 13
Posted: Thu Jun 09, 2005 5:05 pm Post subject:
adaptr wrote: | his requests are, to a significant degree, not easily attainable simultaneously. |
OK, now I am really baffled. Any Linux supported SATA controller, with or without an XOR engine, would attain all of his stated goals.
Linux software RAID definitely meets his goals. That is what I use, and what I would recommend for any Gentoo user. And although I've never used hardware RAID under Linux, so I can't specifically recommend any cards, the two that I have researched, 3ware and Broadcom, seem to meet his needs as well, for considerably less money than the Areca.
If you read his post his only actual requirements are:
1) Supports 2.4 or 2.6 kernel
2) Has array monitoring
3) Supports RAID 5
These are not sophisticated requirements. In fact they are some of the most fundamental checkbox features for any RAID solution. They certainly aren't internally contradictory. I have the same requirements, and I suspect lots of other RAID users do too.
adaptr, I try not to get high-and-mighty with people, but it was really striking to me that I came to this forum, did a search on "RAID5", saw two posts full of errors and inaccuracies stated with authority and confidence, and realized that they were by the same person. Should I have kept my mouth shut?
Martin
cummings66 n00b
Joined: 22 Feb 2004 Posts: 42 Location: Moberly, MO
Posted: Fri Jun 10, 2005 1:39 pm Post subject:
Something not said, but it should be: emerge smartmontools if you're going to be using the software method of RAID control. You need to monitor the SMART levels to keep the array intact.
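For what it's worth, smartmontools can do the watching itself via the smartd daemon. A sketch of /etc/smartd.conf; the device names are examples and must be adjusted to the actual box:

```
# /etc/smartd.conf -- sketch; adjust device names to your system.
# -a       monitor all SMART attributes
# -o on    enable automatic offline testing
# -S on    enable attribute autosave
# -s ...   run a short self-test every night at 02:00
# -m root  mail root when SMART reports trouble
/dev/sda -a -o on -S on -s S/../.././02 -m root
/dev/sdb -a -o on -S on -s S/../.././02 -m root
```

With that in place, `rc-update add smartd default` keeps the daemon running across reboots on a Gentoo box.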
HackingM2 Apprentice
Joined: 26 Jul 2004 Posts: 245 Location: Cambridge, England
Posted: Fri May 05, 2006 5:53 pm Post subject: Re: 3ware SATA RAID
pksings wrote: | I would like to post a warning about the 3ware SATA RAID.
I have not been able to make mine boot from a SATA drive, grub fails to install and lilo just hangs. |
I know this thread is kind of old, but I just wanted to add my experiences with 3ware. I have had no problem getting either a 9000 or a 9500 to boot using grub.
The trick, if you can call it that, is to ensure that both /proc and /dev are mounted in the chroot from the CD.
If you have a PATA hard disk in the machine and can see the array, then grub should work fine. If not, you can always check out the grub documentation and try using a device.map file too.
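Spelled out, that chroot preparation looks roughly like this. A dry-run sketch only: the paths assume the Gentoo handbook's /mnt/gentoo layout, the target device is an invented example, and DRYRUN just echoes so nothing is touched:

```shell
#!/bin/sh
# Dry-run sketch of preparing the chroot so grub can see the array.
# Set DRYRUN= (empty) to execute for real, as root, from the LiveCD.
DRYRUN=echo

$DRYRUN mount -t proc proc /mnt/gentoo/proc
$DRYRUN mount -o bind /dev /mnt/gentoo/dev
$DRYRUN chroot /mnt/gentoo grub-install /dev/sda
```

Without the /dev bind mount, grub-install inside the chroot cannot see the controller's device nodes, which matches the failure described above.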
HackingM2 Apprentice
Joined: 26 Jul 2004 Posts: 245 Location: Cambridge, England
Posted: Fri May 05, 2006 6:04 pm Post subject:
On the original subject of RAID and the choice between hardware and software controllers I would make the following observations:
1) Hardware RAID controllers (should) always have a SATA interface controller per port.
2) Multi-channel non-RAID SATA controllers almost always multiplex one SATA interface controller across two ports.
3) Hardware RAID controllers use less CPU than software, as XOR is done on the card.
4) Hardware RAID controllers make better use of the system bus.
...however...
1) Most modern motherboards have multiple busses.
2) A multi-core CPU is cheaper than a single-core CPU and a hardware RAID controller.
3) If you use channel 0 and 2 on a pair of cheap SATA controllers you still have one SATA IC per port.
...so I would observe that these days it may well be better to get a decent motherboard and a dual-core CPU with plenty of RAM and use two cheap non-RAID SATA controllers.
As someone with setups such as these (one a P4 with a 3ware 9000, one a single-core AMD64 with a 3ware 9500, and one using two Promise SATA X4 controllers and a dual-core AMD64), I can honestly say that the dual-core AMD64 with the cheapo controllers out-performs both of the other two in drive throughput. So much so, in fact, that I never got round to moving the controllers around.
One day I may try putting the 9500 in the dual-core AMD64 and test the performance but I doubt it'll make a great deal of difference. If the CPU load was still too high it would still be cheaper to get another dual-core AMD64 than another 3ware 9500.
EDIT: I thought I should add that software RAID, especially if used in conjunction with LVM2 or EVMS, can provide much greater flexibility. You can, for example, use a combination of RAID levels on the same set of drives. A friend of mine uses a RAID5 partition (on an array of four drives) for his system and stored projects, with a RAID0 partition on the same drives to hold the video he is currently editing. This provides the best of both safety and speed: if you lose the current edit, as long as the sources and the project files survive, who cares?
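That mixed-level layout can be sketched with md on partitions: two partitions per drive, RAID5 across the first set and RAID0 across the second. Everything here is illustrative - the device names are invented and the `run` wrapper only echoes:

```shell
#!/bin/sh
# Sketch: a RAID5 across sd[a-d]1 for safety and a RAID0 across sd[a-d]2
# for scratch/edit space, on the same four drives. Examples only.
run() { echo "would run: $*"; }   # remove the echo to execute for real

run mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
run mdadm --create /dev/md1 --level=0 --raid-devices=4 \
    /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
```

Because md arrays are built from partitions rather than whole drives, losing one disk degrades the RAID5 but destroys only the expendable RAID0 scratch space.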
stuartguthrie n00b
Joined: 19 Jun 2005 Posts: 58
Posted: Wed May 31, 2006 7:41 am Post subject: Experiences with raid cards
We have done the rounds and are up to Areca.
We started with the Adaptec 2420SA. Gentoo support was terrible; the patch level in the kernel is (or appears to be) way behind the Red Hat level.
Also, their website sucks for getting source.
So no Adaptec.
Next we tried the 3ware 9550SX. Nice card, but make sure you MEASURE the space, as it is loooooong: the connectors are on the end and bump up nicely against the CPUs, leaving no room to fit them. The Adaptec at least had the connectors on the side. So no joy there. Shame, they look like GREAT cards.
Finally (hopefully) we are trying the Areca SATA II card tomorrow... If that fails, we will drop back to software RAID, I think!
ATB
Stuart
stuartguthrie n00b
Joined: 19 Jun 2005 Posts: 58
Posted: Thu Jun 08, 2006 11:48 pm Post subject:
OK, we've been thrashing the Areca card for a week now. It seems stable and has great throughput. I would recommend it. Currently.
HackingM2 Apprentice
Joined: 26 Jul 2004 Posts: 245 Location: Cambridge, England
Posted: Fri Jun 09, 2006 2:55 pm Post subject: Re: Experiences with raid cards
stuartguthrie wrote: | Next we tried 3ware 9550SX. Nice card. But make sure you MEASURE the space as it is loooooong. the connectors are on the end and bump up nicely against the CPUs leaving no room to fit the connectors. |
I know we're drifting slightly OT here, but can you tell us which motherboard you had those issues with, please?
IMNSHO one should avoid any manufacturer that positions the CPU(s) so as to foul the PCI slots when using full-length cards. In fact, I would be tempted to argue that such a board is not fit for purpose and demand a refund, had I purchased one.
stuartguthrie n00b
Joined: 19 Jun 2005 Posts: 58
Posted: Sat Jun 10, 2006 2:37 am Post subject:
I will try to find this one; it's with the documentation. Unless there is a Linux command? lspci shows this:
00:06.0 PCI bridge: Advanced Micro Devices [AMD] AMD-8111 PCI (rev 07)
00:07.0 ISA bridge: Advanced Micro Devices [AMD] AMD-8111 LPC (rev 05)
00:07.1 IDE interface: Advanced Micro Devices [AMD] AMD-8111 IDE (rev 03)
00:07.2 SMBus: Advanced Micro Devices [AMD] AMD-8111 SMBus 2.0 (rev 02)
00:07.3 Bridge: Advanced Micro Devices [AMD] AMD-8111 ACPI (rev 05)
00:0a.0 PCI bridge: Advanced Micro Devices [AMD] AMD-8131 PCI-X Bridge (rev 12)
00:0a.1 PIC: Advanced Micro Devices [AMD] AMD-8131 PCI-X IOAPIC (rev 01)
00:0b.0 PCI bridge: Advanced Micro Devices [AMD] AMD-8131 PCI-X Bridge (rev 12)
00:0b.1 PIC: Advanced Micro Devices [AMD] AMD-8131 PCI-X IOAPIC (rev 01)
00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
00:19.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
00:19.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
00:19.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
00:19.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
01:03.0 PCI bridge: Intel Corporation 80331 [Lindsay] I/O processor (PCI-X Bridge) (rev 0a)
02:0e.0 RAID bus controller: Areca Technology Corp. ARC-1110 4-Port PCI-X to SATA RAID Controller
03:09.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5704 Gigabit Ethernet (rev 03)
03:09.1 Ethernet controller: Broadcom Corporation NetXtreme BCM5704 Gigabit Ethernet (rev 03)
04:00.0 USB Controller: Advanced Micro Devices [AMD] AMD-8111 USB (rev 0b)
04:00.1 USB Controller: Advanced Micro Devices [AMD] AMD-8111 USB (rev 0b)
04:06.0 VGA compatible controller: ATI Technologies Inc Rage XL (rev 27)
dmesg:
Bootdata ok (command line is root=/dev/sda3 udev nousb)
Linux version 2.6.16-xen (root@livecd) (gcc version 3.4.5 (Gentoo 3.4.5, ssp-3.$
On node 0 totalpages: 67584
DMA zone: 67584 pages, LIFO batch:15
DMA32 zone: 0 pages, LIFO batch:0
Normal zone: 0 pages, LIFO batch:0
HighMem zone: 0 pages, LIFO batch:0
ACPI: RSDP (v002 ACPIAM ) @ 0x00000000000f6dd0
ACPI: XSDT (v001 A M I OEMXSDT 0x12000527 MSFT 0x00000097) @ 0x00000000f9ff01$
ACPI: FADT (v001 A M I OEMFACP 0x12000527 MSFT 0x00000097) @ 0x00000000f9ff02$
ACPI: MADT (v001 A M I OEMAPIC 0x12000527 MSFT 0x00000097) @ 0x00000000f9ff03$
ACPI: OEMB (v001 A M I OEMBIOS 0x12000527 MSFT 0x00000097) @ 0x00000000f9fff0$
ACPI: SRAT (v001 A M I OEMSRAT 0x12000527 MSFT 0x00000097) @ 0x00000000f9ff39$
ACPI: HPET (v001 A M I OEMHPET 0x12000527 MSFT 0x00000097) @ 0x00000000f9ff3a$
ACPI: ASF! (v001 AMIASF AMDSTRET 0x00000001 INTL 0x02002026) @ 0x00000000f9ff3b$
ACPI: DSDT (v001 0AAAA 0AAAA001 0x00000001 INTL 0x02002026) @ 0x00000000000000$
ACPI: Local APIC address 0xfee00000
ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
ACPI: LAPIC (acpi_id[0x02] lapic_id[0x01] enabled)
ACPI: LAPIC (acpi_id[0x03] lapic_id[0x02] enabled)
ACPI: LAPIC (acpi_id[0x04] lapic_id[0x03] enabled)
ACPI: IOAPIC (id[0x04] address[0xfec00000] gsi_base[0])
IOAPIC[0]: apic_id 4, version 17, address 0xfec00000, GSI 0-23
ACPI: IOAPIC (id[0x05] address[0xfebff000] gsi_base[24])
IOAPIC[1]: apic_id 5, version 17, address 0xfebff000, GSI 24-27
ACPI: IOAPIC (id[0x06] address[0xfebfe000] gsi_base[28])
IOAPIC[2]: apic_id 6, version 17, address 0xfebfe000, GSI 28-31
ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
ACPI: IRQ0 used by override.
ACPI: IRQ2 used by override.
ACPI: IRQ9 used by override.
Setting APIC routing to xen
Using ACPI (MADT) for SMP configuration information
Allocating PCI resources starting at fa800000 (gap: fa000000:5780000)
Built 1 zonelists
Kernel command line: root=/dev/sda3 udev nousb
Initializing CPU#0
PID hash table entries: 2048 (order: 11, 65536 bytes)
Xen reported: 1792.739 MHz processor.
Console: colour VGA+ 80x25
Dentry cache hash table entries: 65536 (order: 7, 524288 bytes)
Inode-cache hash table entries: 32768 (order: 6, 262144 bytes)
Software IO TLB enabled:
Aperture: 64 megabytes
Bus range: 0x000000000c000000 - 0x0000000010000000
Kernel range: 0xffff88000151b000 - 0xffff88000551b000
PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Memory: 185388k/270336k available (2722k kernel code, 84600k reserved, 987k data, 156k init)
Calibrating delay using timer specific routine.. 3582.28 BogoMIPS (lpj=17911405)
Security Framework v1.0.0 initialized
Capability LSM initialized
Mount-cache hash table entries: 256
CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line)
CPU: L2 Cache: 1024K (64 bytes/line)
Initializing CPU#1
Initializing CPU#2
Brought up 4 CPUs
Initializing CPU#3
migration_cost=345
DMI 2.3 present.
Grant table initialized
NET: Registered protocol family 16
ACPI: bus type pci registered
PCI: Using configuration type 1
ACPI: Subsystem revision 20060127
ACPI: Interpreter enabled
ACPI: Using IOAPIC for interrupt routing
ACPI: PCI Root Bridge [PCI0] (0000:00)
PCI: Probing PCI hardware (bus 00)
Boot video device is 0000:04:06.0
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0._PRT]
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.PCI1._PRT]
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.GOLA._PRT]
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.GOLB._PRT]
ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 *5 6 7 9 10 11 12 14 15)
ACPI: PCI Interrupt Link [LNKB] (IRQs 3 4 5 6 7 *9 10 11 12 14 15)
ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 5 6 7 9 10 *11 12 14 15)
ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 5 6 7 9 *10 11 12 14 15)
xen_mem: Initialising balloon driver.
SCSI subsystem initialized
usbcore: USB support disabled
PCI: Using ACPI for IRQ routing
PCI: If a device doesn't work, try "pci=routeirq". If it helps, post a report
PCI: Bridge: 0000:00:06.0
IO window: b000-bfff
MEM window: fca00000-feafffff
PREFETCH window: disabled.
PCI: Bridge: 0000:00:0a.0
IO window: disabled.
MEM window: fc900000-fc9fffff
PREFETCH window: fc600000-fc6fffff
PCI: Bridge: 0000:01:03.0
IO window: disabled.
MEM window: fc800000-fc8fffff
PREFETCH window: fbe00000-fc5fffff
PCI: Bridge: 0000:00:0b.0
IO window: disabled.
MEM window: fc800000-fc8fffff
PREFETCH window: fbe00000-fc5fffff
IA-32 Microcode Update Driver: v1.14-xen <tigran@veritas.com>
IA32 emulation $Id: sys_ia32.c,v 1.32 2002/03/24 13:02:28 ak Exp $
VFS: Disk quotas dquot_6.5.1
Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Initializing Cryptographic API
io scheduler noop registered
io scheduler anticipatory registered (default)
io scheduler deadline registered
io scheduler cfq registered
PCI: MSI quirk detected. pci_msi_quirk set.
PCI: MSI quirk detected. pci_msi_quirk set.
Real Time Clock Driver v1.12ac
serio: i8042 AUX port at 0x60,0x64 irq 12
serio: i8042 KBD port at 0x60,0x64 irq 1
isa bounce pool size: 16 pages
RAMDISK driver initialized: 16 RAM disks of 16384K size 1024 blocksize
Intel(R) PRO/1000 Network Driver - version 6.3.9-k4-NAPI
Copyright (c) 1999-2005 Intel Corporation.
e100: Intel(R) PRO/100 Network Driver, 3.5.10-k2-NAPI
e100: Copyright(c) 1999-2005 Intel Corporation
tg3.c:v3.49 (Feb 2, 2006)
GSI 16 sharing vector 0xA9 and IRQ 16
ACPI: PCI Interrupt 0000:03:09.0[A] -> GSI 24 (level, low) -> IRQ 16
eth0: Tigon3 [partno(BCM95704A7) rev 2003 PHY(5704)] (PCIX:100MHz:64-bit) 10/100/1000BaseT Ethernet 00:e0:81:$
eth0: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[0] Split[0] WireSpeed[1] TSOcap[1]
eth0: dma_rwctrl[769f4000] dma_mask[64-bit]
GSI 17 sharing vector 0xB1 and IRQ 17
ACPI: PCI Interrupt 0000:03:09.1[B] -> GSI 25 (level, low) -> IRQ 17
eth1: Tigon3 [partno(BCM95704A7) rev 2003 PHY(5704)] (PCIX:100MHz:64-bit) 10/100/1000BaseT Ethernet 00:e0:81:$
eth1: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[0] Split[0] WireSpeed[1] TSOcap[1]
eth1: dma_rwctrl[769f4000] dma_mask[64-bit]
Xen virtual console successfully installed as ttyS0
Event-channel device installed.
blkif_init: reqs=64, pages=704, mmap_vstart=0xffff88000fc00000
Uniform Multi-Platform E-IDE driver Revision: 7.00alpha2
ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx
AMD8111: IDE controller at PCI slot 0000:00:07.1
AMD8111: chipset revision 3
AMD8111: not 100% native mode: will probe irqs later
AMD8111: 0000:00:07.1 (rev 03) UDMA133 controller
ide0: BM-DMA at 0xffa0-0xffa7, BIOS settings: hda:pio, hdb:pio
ide1: BM-DMA at 0xffa8-0xffaf, BIOS settings: hdc:DMA, hdd:pio
Probing IDE interface ide0...
Probing IDE interface ide1...
hdc: QSI CD-ROM SCR-242, ATAPI CD/DVD-ROM drive
ide1 at 0x170-0x177,0x376 on irq 15
Probing IDE interface ide0...
hdc: ATAPI 24X CD-ROM drive, 128kB Cache, UDMA(33)
Uniform CD-ROM driver Revision: 3.20
ide-floppy driver 0.99.newide
libata version 1.20 loaded.
GSI 18 sharing vector 0xB9 and IRQ 18
ACPI: PCI Interrupt 0000:02:0e.0[A] -> GSI 30 (level, low) -> IRQ 18
ARECA RAID ADAPTER0: 64BITS PCI BUS DMA ADDRESSING SUPPORTED
ARECA RAID ADAPTER0: 64BITS PCI BUS DMA ADDRESSING SUPPORTED
ARECA RAID ADAPTER0: FIRMWARE VERSION V1.39 2006-2-9
scsi0 : ARECA SATA HOST ADAPTER RAID CONTROLLER
Driver Version 1.20.0X.13
Vendor: Areca Model: ARC-1110-VOL#00 Rev: R001
Type: Direct-Access ANSI SCSI revision: 03
Vendor: Areca Model: RAID controller Rev: R001
Type: Processor ANSI SCSI revision: 00
arcmsr device major number 254
st: Version 20050830, fixed bufsize 32768, s/g segs 256
SCSI device sda: 1249999872 512-byte hdwr sectors (640000 MB)
sda: Write Protect is off
sda: Mode Sense: cb 00 00 08
SCSI device sda: drive cache: write back
SCSI device sda: 1249999872 512-byte hdwr sectors (640000 MB)
sda: Write Protect is off
sda: Mode Sense: cb 00 00 08
SCSI device sda: drive cache: write back
sda: sda1 sda2 sda3 sda4 < sda5 sda6 sda7 sda8 >
sd 0:0:0:0: Attached scsi disk sda
sd 0:0:0:0: Attached scsi generic sg0 type 0
0:0:16:0: Attached scsi generic sg1 type 3
usbmon: debugfs is not available
mice: PS/2 mouse device common for all mice
device-mapper: 4.5.0-ioctl (2005-10-04) initialised: dm-devel@redhat.com
device-mapper: dm-multipath version 1.0.4 loaded
device-mapper: dm-round-robin version 1.0.0 loaded
device-mapper: dm-emc version 0.0.3 loaded
NET: Registered protocol family 2
NET: Registered protocol family 2
IP route cache hash table entries: 4096 (order: 3, 32768 bytes)
TCP established hash table entries: 16384 (order: 6, 262144 bytes)
TCP bind hash table entries: 16384 (order: 6, 262144 bytes)
TCP: Hash tables configured (established 16384 bind 16384)
TCP reno registered
Initializing IPsec netlink socket
NET: Registered protocol family 1
NET: Registered protocol family 17
Bridge firewalling registered
kjournald starting. Commit interval 5 seconds
EXT3 FS on sda3, internal journal
EXT3-fs: mounted filesystem with ordered data mode.
VFS: Mounted root (ext3 filesystem).
Adding 506036k swap on /dev/sda2. Priority:-1 extents:1 across:506036k
EXT3 FS on sda3, internal journal
kjournald starting. Commit interval 5 seconds
EXT3 FS on sda1, internal journal
EXT3-fs: mounted filesystem with ordered data mode.
kjournald starting. Commit interval 5 seconds
EXT3 FS on dm-24, internal journal
EXT3-fs: mounted filesystem with ordered data mode.
kjournald starting. Commit interval 5 seconds
EXT3 FS on dm-25, internal journal
EXT3-fs: mounted filesystem with ordered data mode.
kjournald starting. Commit interval 5 seconds
EXT3 FS on dm-27, internal journal
EXT3-fs: mounted filesystem with ordered data mode.
kjournald starting. Commit interval 5 seconds
kjournald starting. Commit interval 5 seconds
EXT3 FS on dm-26, internal journal
EXT3-fs: mounted filesystem with ordered data mode.
kjournald starting. Commit interval 5 seconds
EXT3 FS on dm-28, internal journal
EXT3-fs: mounted filesystem with ordered data mode.
device vif0.0 entered promiscuous mode
xenbr0: port 1(vif0.0) entering learning state
xenbr0: topology change detected, propagating
xenbr0: port 1(vif0.0) entering forwarding state
tg3: peth0: Link is up at 100 Mbps, full duplex.
tg3: peth0: Flow control is off for TX and off for RX.
device peth0 entered promiscuous mode
xenbr0: port 2(peth0) entering learning state
xenbr0: topology change detected, propagating
xenbr0: port 2(peth0) entering forwarding state
device vif1.0 entered promiscuous mode
xenbr0: port 3(vif1.0) entering learning state
xenbr0: topology change detected, propagating
xenbr0: port 3(vif1.0) entering forwarding state
device vif2.0 entered promiscuous mode
xenbr0: port 4(vif2.0) entering learning state
xenbr0: topology change detected, propagating
xenbr0: port 4(vif2.0) entering forwarding state
xenbr0: port 4(vif2.0) entering disabled state
device vif2.0 left promiscuous mode
xenbr0: port 4(vif2.0) entering disabled state
device vif3.0 entered promiscuous mode
xenbr0: port 4(vif3.0) entering learning state
xenbr0: topology change detected, propagating
xenbr0: port 4(vif3.0) entering forwarding state
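Since monitoring array health from the OS was the must-have requirement: with the arcmsr driver loaded as in the dmesg above, smartmontools can query the member disks sitting behind the Areca controller. A minimal sketch (assumptions: your smartctl build has `-d areca,N` support, the controller's generic node is `/dev/sg0`, and the four drives are numbered 1-4 — all of which you'd want to confirm on your own box):

```shell
# Sketch: poll SMART health of each member disk behind the Areca
# ARC-1110 with smartmontools. Assumptions: smartctl supports
# '-d areca,N'; /dev/sg0 and disk numbers 1-4 are guesses for this box.
check_array_disks() {
    for n in 1 2 3 4; do
        echo "=== member disk $n ==="
        smartctl -d areca,"$n" -H /dev/sg0
    done
}

# Only probe if smartctl is actually installed.
command -v smartctl >/dev/null 2>&1 && check_array_disks
```

Something like this could be dropped into a cron job that mails you when a disk stops reporting healthy. Areca also ships its own CLI utility that reports volume and rebuild status, which may be worth a look for the same purpose.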