vaxbrat l33t
Joined: 05 Oct 2005 Posts: 731 Location: DC Burbs
Posted: Mon Feb 27, 2012 11:12 pm  Post subject: [solved] mdadm and using more than 26 disk drives
We just picked up one of those Backblaze storage pods and have populated it with 45 3TB SATA drives. Now I'm stuck trying to figure out how to get mdadm to understand a range like
Code: | /dev/sdn - /dev/sdad |
when it only seems to take single-letter ranges up through /dev/sdz. I may end up looking at the source code anyway, but the pod is currently running an old Debian Lenny. I'm debating whether to replace that with current Gentoo stable.
Can any of you datacenter wonks chime in? It seems like all of the Google results are for under 26 devices.
Last edited by vaxbrat on Wed Feb 29, 2012 6:30 am; edited 1 time in total
wildbug n00b
Joined: 07 Oct 2007 Posts: 73
Posted: Tue Feb 28, 2012 12:06 am
Is that typed on the command line? If so, mdadm isn't interpreting the ranges, your shell is (probably bash).
This
Code: | /dev/sd[a-e]1 |
expands to
Code: | /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 |
and that happens before mdadm (or any other command) sees it.
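A quick way to see exactly what the shell will hand to mdadm (echo just prints its expanded arguments, so this touches nothing):
Code: | echo /dev/sd[a-e]1 |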
You just have to split the range into two expansions:
Code: | /dev/sd[a-z]1 /dev/sda[a-d]1 |
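So a create command could look something like this (a sketch only; adjust the globs, level, and --raid-devices to match your actual layout):
Code: | mdadm --create /dev/md0 --level=6 --raid-devices=30 /dev/sd[a-z]1 /dev/sda[a-d]1 |
The two globs expand to 26 + 4 = 30 device names, hence --raid-devices=30 there.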
(I'm curious; did you have the Backblaze pod custom-made? If so, why didn't you go with something like the SuperMicro 847?)
EDIT:
I neglected to mention that the reason you probably don't see mdadm configurations with more than 26 devices is that the rule of thumb for RAID is not to make an array (much) larger than 9 devices. For a similar setup I used 9-disk RAID6 devices and assembled them into an LVM volume.
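A sketch of that kind of layout (hypothetical device names; repeat the mdadm line for as many 9-disk sets as you have):
Code: |
# two 9-disk RAID6 arrays
mdadm --create /dev/md0 --level=6 --raid-devices=9 /dev/sd[a-i]1
mdadm --create /dev/md1 --level=6 --raid-devices=9 /dev/sd[j-r]1
# pool them into a single LVM logical volume
pvcreate /dev/md0 /dev/md1
vgcreate poolvg /dev/md0 /dev/md1
lvcreate -l 100%FREE -n archive poolvg
|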
vaxbrat l33t
Joined: 05 Oct 2005 Posts: 731 Location: DC Burbs
Posted: Wed Feb 29, 2012 6:29 am  Post subject: bash expansion it is
Looks like your suggestion that bash expansion is at work is correct.
The Backblaze was already on order when they brought me into the program. It looks like their standard config, with a Supermicro-based Core i3 mobo and three Silicon Image 3Gb/s SATA controllers, with 1-to-5 expanders on three of the four ports on each. I didn't see the PO, but I suspect the pod guts were about $3k, with the 3TB Seagate Barracuda 7200s running about $230 each. Total pod cost for 135TB of raw storage is about $15k.
The whole idea of this pod is to act more as a NAS for cheap archiving than as a JBOD. We have about 150TB on a Lustre-based cluster with 4 storage servers and about 30 compute nodes. Most of that is Supermicro-based, with JBODs on fibre to the storage servers and InfiniBand on the main network. I wouldn't be surprised if those RAIDs were at least an order of magnitude more expensive than the "little red pod".
It's in the middle of striping now. I estimate about 2.5 days, with 3 15-disk RAID6 md devices doing about 12-13MB/s on average. I just built a Thuban at home with a Vertex 3 240GB SSD and 4 3TB Seagates in a RAID5 md array sharing the 6Gb/s SATA on an MSI motherboard with 16GB of DDR3-1600 (versus the Core i3's 8GB of DDR3-1333). That striping was getting about 75MB/s. It would be interesting to see what a config like that would do in the pod with three 6Gb/s SATA controllers.
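If you want to watch or nudge the resync along, the standard md knobs apply (values here are just examples; the stripe cache costs RAM per array):
Code: |
# watch rebuild progress
cat /proc/mdstat
# raise the minimum resync rate (KB/s per device)
echo 50000 > /proc/sys/dev/raid/speed_limit_min
# a larger stripe cache can help RAID5/6 throughput
echo 8192 > /sys/block/md0/md/stripe_cache_size
|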