FizzyWidget Veteran
Joined: 21 Nov 2008 Posts: 1133 Location: 127.0.0.1
Posted: Mon Jun 02, 2014 11:15 am    Post subject: Which RAID should I use?
Here is my problem: I have 3TB (2.72TiB) of space in my main system, and I use a spare PC as storage; it has four 500GB hard drives in it. I'm aware I can't copy everything over, but I'd like to be able to store the majority of my stuff on the spare PC. I know that RAID 5 would give me about 1.36TiB of usable space (as we all know, a drive's advertised capacity and the space you can actually use never match), whereas RAID 6 or RAID 10 would only give me about 931GiB. Out of the choices I have, other than running 4 separate drives, which RAID would people advise?
_________________
I know 43 ways to kill with a SKITTLE, so taste my rainbow bitch.
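For reference, the usable-capacity arithmetic with four 500GB drives (about 465GiB each once the decimal/binary difference is accounted for) works out roughly as follows; a sketch that ignores metadata overhead:
Code: | # RAID 0 : 4 x 465 GiB       ~= 1862 GiB  (no redundancy)
# RAID 5 : (4-1) x 465 GiB   ~= 1397 GiB  (one disk of parity)
# RAID 6 : (4-2) x 465 GiB   ~=  931 GiB  (two disks of parity)
# RAID 10: (4/2) x 465 GiB   ~=  931 GiB  (striped mirrors) |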
krinn Watchman
Joined: 02 May 2003 Posts: 7470
Posted: Mon Jun 02, 2014 11:35 am
If I take your per-drive space loss ((500*3 - 1300) / 3, about 66GB per drive), then why not use RAID 0? 4 * (500 - 66) gives you an array of about 1.7TB.
It depends on your level of paranoia, but 1.7TB to back up 2.72TB should be fine, and making sure your backup array is healthy shouldn't really be a problem. Backing up the backup is certainly safer, but you can live well with the original plus one backup; you don't need to back up the backup.
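A minimal sketch of that with mdadm (the device names are assumptions):
Code: | mdadm --create --verbose /dev/md0 --level=0 --raid-devices=4 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mkfs.ext4 /dev/md0 |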
FizzyWidget Veteran
Joined: 21 Nov 2008 Posts: 1133 Location: 127.0.0.1
Posted: Mon Jun 02, 2014 12:44 pm
krinn wrote: | If I take your per-drive space loss ((500*3 - 1300) / 3, about 66GB per drive), then why not use RAID 0? 4 * (500 - 66) gives you an array of about 1.7TB.
It depends on your level of paranoia, but 1.7TB to back up 2.72TB should be fine, and making sure your backup array is healthy shouldn't really be a problem. Backing up the backup is certainly safer, but you can live well with the original plus one backup; you don't need to back up the backup. |
RAID 0 is the devil's child!!!!!!
Well, with the 500GB drives I only get about 465GiB each, but meh.
I've had a bad experience with RAID 0 before, and although I do have backups of backups (a 1TB external drive, plus stuff on a 1TB laptop), I just can't bring myself to use RAID 0. Yes, I know it would get me 1.7TB of space and fast reads and writes, but as you say, my paranoia level after the last time I used RAID 0 makes me unwilling to give it another go.
_________________
I know 43 ways to kill with a SKITTLE, so taste my rainbow bitch.
krinn Watchman
Joined: 02 May 2003 Posts: 7470
Posted: Mon Jun 02, 2014 1:37 pm
A RAID 0 array is more or less the same as any single disk, except that the chance of failure increases a bit with the number of disks; in practice it doesn't change your time-to-failure much, because the array fails as soon as the first disk fails.
But whether you use the RAID 0 as primary or as backup, it doesn't change anything: if the original fails, you have the backup; if the backup fails, you have the original.
If you follow that logic, it would even be smarter to invert the roles: use the RAID 0 of 4x500GB disks as the primary for the extra speed, and use the bigger, slower disk as the backup (it can hold everything that's on the RAID 0), whereas any other arrangement implies filtering what you copy and losing some data.
FizzyWidget Veteran
Joined: 21 Nov 2008 Posts: 1133 Location: 127.0.0.1
Posted: Mon Jun 02, 2014 1:58 pm
I know what you mean, and I've just found a few more 500GB HDDs, so if the RAID 0 fails I have spares and can copy things back over. I suppose I have to weigh the space issue and the rebuild time of RAID 5 or 6 against how long it would take to copy everything back over from the main PC.
Now I just have to figure out whether I should use the BIOS RAID or mdadm.
_________________
I know 43 ways to kill with a SKITTLE, so taste my rainbow bitch.
Pearlseattle Apprentice
Joined: 04 Oct 2007 Posts: 165 Location: Switzerland
Posted: Mon Jun 02, 2014 7:37 pm
Hi
I'm a big opponent of RAID 0.
Consider that if the power is cut while you're writing to it, you've lost the array, no matter what.
That kind of thing has happened to me several times with a RAID 5: three times I plugged too much stuff into the same outlet, twice I switched off the wrong power button, and once I shut everything down in a hurry after throwing 1.5l of coke over the power plugs, etc.
Keep in mind that this kind of accident usually happens while you're synchronizing your backup with your master copy, so both arrays might end up with problems => I would never use RAID 0 except to host temp data.
And I'm an even bigger opponent of HW RAID.
I have an mdadm RAID 5 with four 7200rpm 4TB disks which writes at a minimum of 250MB/s with spikes of 450MB/s (it depends quite a lot on the physical location on the HDD being written, and on whether the area is completely free), and CPU usage is quite low => I don't know why anybody would want a HW card that might slow down throughput (no inexpensive card will be faster than the CPU) and creates a dependency on the manufacturer: if the card breaks down, you won't be able to use those HDDs with a different brand of card.
I therefore recommend you to:
1) decide which RAID levels you consider candidates;
2) test them with your HW and different parameters, and weigh their pros/cons & risks (see the quick sketch at the end of this post).
I wrote some notes here a while ago (not finalized and not reviewed, so they might be silly, but they might give you some inspiration for potential tests).
Have fun!
p.s.: using the BIOS RAID instead of a RAID card is the same kind of thing - you'll have to stick to that motherboard without ever being able to upgrade the MB/CPU.
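As a rough idea of the kind of quick throughput test meant above (a sketch; the mount point is an assumption, and dd only gives a crude first-order number):
Code: | # sequential write, bypassing the page cache
dd if=/dev/zero of=/mnt/testraid/bigfile bs=1M count=4096 oflag=direct
# sequential read back
dd if=/mnt/testraid/bigfile of=/dev/null bs=1M iflag=direct |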
szatox Advocate
Joined: 27 Aug 2013 Posts: 3477
Posted: Mon Jun 02, 2014 8:24 pm
Quote: | Now I just have to figure out whether I should use the BIOS RAID or mdadm. |
From what I learned when I was looking for the right RAID for myself, BIOS-based fake RAID is the worst possible thing you can have. If anything breaks down, you're screwed: it gives you no performance boost, it's not a card you can easily move to a new mobo, and it's not a card you can easily replace. You need a spare mobo, perhaps even with the same settings, and you have no performance advantage over software RAID, as it's still your CPU doing all the hard work.
So either go with generic software RAID like LVM or md, or go with true hardware RAID and keep a spare controller at hand just in case you ever need it. Definitely don't go with software RAID pretending to be hardware.
FizzyWidget Veteran
Joined: 21 Nov 2008 Posts: 1133 Location: 127.0.0.1
Posted: Mon Jun 02, 2014 9:13 pm
Pearlseattle wrote: | Hi
I therefore recommend you to:
1) decide which RAID levels you consider candidates;
2) test them with your HW and different parameters, and weigh their pros/cons & risks.
I wrote some notes here a while ago (not finalized and not reviewed, so they might be silly, but they might give you some inspiration for potential tests).
Have fun!
p.s.: using the BIOS RAID instead of a RAID card is the same kind of thing - you'll have to stick to that motherboard without ever being able to upgrade the MB/CPU. |
Using your PHP script, it tells me to use:
Use: "-E stride=16,stripe-width=48"
given that I'm creating the array with:
mdadm --create --verbose /dev/md0 --level=5 --chunk=64 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
Guessing ext4 will use 4096-byte blocks unless I tell it otherwise? Does the above look right?
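For what it's worth, those numbers follow directly from the chunk size; a sketch of the resulting format command (the explicit -b 4096 is an assumption, matching ext4's usual default):
Code: | # stride       = chunk / block size   = 64KiB / 4KiB = 16
# stripe-width = stride * data disks  = 16 * (4 - 1) = 48
mkfs.ext4 -b 4096 -E stride=16,stripe-width=48 /dev/md0 |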
szatox wrote: | Quote: | Now I just have to figure out whether I should use the BIOS RAID or mdadm. |
From what I learned when I was looking for the right RAID for myself, BIOS-based fake RAID is the worst possible thing you can have. ... Definitely don't go with software RAID pretending to be hardware. |
Yes, after a bit of reading I decided to go with mdadm.
_________________
I know 43 ways to kill with a SKITTLE, so taste my rainbow bitch.
Pearlseattle Apprentice
Joined: 04 Oct 2007 Posts: 165 Location: Switzerland
Posted: Mon Jun 02, 2014 10:36 pm
Quote: | Guessing ext4 will use 4096-byte blocks unless I tell it otherwise? |
Yes.
Quote: | Does the above look right? |
Please test, test, test the settings: the recommendations might look good on paper but could be bad for the particular constellation of HW + SW properties of your system/drives.
Just create small partitions on your drives (e.g. 20GB partitions, so that creating a RAID for test purposes is fast) and build a SW RAID from them multiple times with different settings.
Worst case: you'll end up feeling confident about how to do it, and you'll know which settings definitely don't work for you.
Best case: you'll identify THE setting which works best for you.
p.s.:
Don't forget "-E lazy_itable_init=0,lazy_journal_init=0" when formatting the RAID with ext4; in my opinion it's stupid that these options aren't the default, rather than the opposite.
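A minimal sketch of that test cycle (device names, partition numbers, and the mount point are assumptions):
Code: | # throwaway array on small test partitions
mdadm --create /dev/md1 --level=5 --chunk=64 --raid-devices=4 \
    /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2
mkfs.ext4 -E stride=16,stripe-width=48,lazy_itable_init=0,lazy_journal_init=0 /dev/md1
mount /dev/md1 /mnt/test
# ... benchmark here, then tear down and repeat with other settings ...
umount /mnt/test
mdadm --stop /dev/md1
mdadm --zero-superblock /dev/sd[bcde]2 |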
vaxbrat l33t
Joined: 05 Oct 2005 Posts: 731 Location: DC Burbs
Posted: Mon Jun 02, 2014 11:36 pm    Post subject: How compressible is your stuff?
You could do a mirror set with btrfs and enable lzo compression. Depending on the nature of your files, you might be able to squeeze everything onto the smaller volume set. Your data will be a damn sight safer than on a RAID 0 or even a RAID 5.
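A minimal sketch of such a setup (device names and mount point are assumptions):
Code: | mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
mount -o compress=lzo /dev/sdb /mnt/backup |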
Goverp Advocate
Joined: 07 Mar 2007 Posts: 2190
Posted: Tue Jun 03, 2014 8:20 am
FizzyWidget wrote: | ...
Yes, after a bit of reading I decided to go with mdadm. |
...and note that auto-assembly of RAID arrays is deprecated. The "approved" way to assemble your array is to use an initramfs and mdadm. There are many discussions in the forums on how to do that. IMHO the best route is the Early Userspace Mounting approach.
_________________
Greybeard
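The initramfs side of it is small; a sketch (the UUID is a placeholder you would take from "mdadm --detail --scan"):
Code: | # /etc/mdadm.conf baked into the initramfs
ARRAY /dev/md0 metadata=1.2 UUID=<your-array-uuid>
# early in the init script, before mounting root:
mdadm --assemble --scan |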
krinn Watchman
Joined: 02 May 2003 Posts: 7470
Posted: Tue Jun 03, 2014 12:43 pm
Pearlseattle wrote: | And I'm an even bigger opponent of HW RAID.
I have an mdadm RAID 5 with four 7200rpm 4TB disks which writes at a minimum of 250MB/s with spikes of 450MB/s ... if the card breaks down, you won't be able to use those HDDs with a different brand of card.
...
p.s.: using the BIOS RAID instead of a RAID card is the same kind of thing - you'll have to stick to that motherboard without ever being able to upgrade the MB/CPU. |
You seem to be mixing things up there: using a HW card doesn't imply the card is fakeraid.
And using a card that does fakeraid will still let you change the m/b, as long as you keep the card with the new one.
Try not to confuse hardware RAID cards, fakeraid, and software RAID.
To sum up: fakeraid shares all the disadvantages of the other two solutions without any of the advantages they could have, so in that case it is simply better to use software RAID.
Pearlseattle Apprentice
Joined: 04 Oct 2007 Posts: 165 Location: Switzerland
Posted: Wed Jun 04, 2014 9:31 pm
krinn wrote: |
You seem to be mixing things up there: using a HW card doesn't imply the card is fakeraid.
And using a card that does fakeraid will still let you change the m/b, as long as you keep the card with the new one.
Try not to confuse hardware RAID cards, fakeraid, and software RAID.
To sum up: fakeraid shares all the disadvantages of the other two solutions without any of the advantages they could have, so in that case it is simply better to use software RAID. |
You're right.
Quote: | You could do a mirror set with btrfs and enable lzo compression. |
I admit that having btrfs handle the whole RAID+FS stack would be very handy (especially because of its resizing capabilities), but it's still too new to use seriously. I remember trying to bring a btrfs RAID back to a healthy state: impossible, and I had to reformat the partition and restore the backup.
Involving compression would make it even more complicated to test, but depending on the nature of the files it could bring a lot of additional storage AND even more throughput if your CPU is strong and your storage is slow (stuff gets compressed quickly + fewer bytes to write = faster).
Quote: | note that auto-assembly of RAID arrays is deprecated. The "approved" way to assemble your array is to use an initramfs and mdadm |
Why would an initramfs be required (meaning: why is doing it in the kernel itself not OK)?
Cheers
Goverp Advocate
Joined: 07 Mar 2007 Posts: 2190
Posted: Thu Jun 05, 2014 9:08 am
Pearlseattle wrote: | ...
Quote: | note that auto-assembly of RAID arrays is deprecated. The "approved" way to assemble your array is to use an initramfs and mdadm |
Why would an initramfs be required (meaning: why is doing it in the kernel itself not OK)?
... |
My bad. I was assuming you would be building everything on RAID, including your rootfs. In that case you need to assemble the array before your init process starts, and hence need either auto-assembly or an initramfs. But rereading the first post, I guess you're booting from a different disk than the RAID array, so you don't need the array before the init process starts.
The reason auto-assembly is bad is that the kernel code for it is deprecated and only supports v0.90 superblocks; I guess it was extracted from an old version of mdadm. A few years ago I built my RAID array using mdadm and v1.0 superblocks, and then used kernel assembly to boot the system. All appeared OK until one drive produced I/O errors, and while fixing that it became apparent that the RAID members carried both v0.90 and v1.0 superblocks (which live in different places on the disk, so the v1.0 code could potentially overwrite the v0.90 metadata with data, and vice versa).
_________________
Greybeard
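If you want to check what's actually on your member disks, a sketch (the device name is an assumption):
Code: | mdadm --examine /dev/sdb1
# the "Version" line shows which superblock generation mdadm found |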
eccerr0r Watchman
Joined: 01 Jul 2004 Posts: 9847 Location: almost Mile High in the USA
Posted: Thu Jun 12, 2014 9:26 pm
Speaking of RAID 5: I have a 4x500GB RAID 5 on SATA 3Gbps, and I find that disk I/O performance to it leaves much to be desired.
My current system has root on LVM on MD RAID 5 (set up via initramfs, not autoconfigure). The boot partition is also on those disks (4-way MD RAID 1), in the hope that if any one of the 4 disks goes belly up the machine is still bootable, yet I only have 4 spindles going (versus a 6-disk system with a dedicated RAID 1 for root/boot). The member disks are "consumer"-quality SATA disks. Swap, which is not commonly used, is a swapfile on root (6GB RAM).
However, the amount of I/O that hits the RAID 5 root partition (reads and writes) seems to really choke RAID 5 performance. hdparm numbers look OK, but on real read/write loads the effective IOPS is really low. Disk I/O for the virtual machines on the box is miserably slow; I suppose I get no more than 10MB/sec, possibly worse.
Not sure how other people are configuring this, or whether it's a localized issue, but it's pretty pitiful. I have two machines with SATA 6Gbps SSDs, and they make working on the RAID 5 feel like pouring molasses (if even one fast HDD feels slow next to an SSD, the RAID 5 is worse still).
I'm thinking about eating the space inefficiency and going to a 2x2TB RAID 1 in the near future because of how bad the disks are for the virtual machines...
Ugh... I just looked:
Code: | [    3.670484] ata5.00: 976771055 sectors, multi 16: LBA48 NCQ (depth 0/32) |
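That "depth 0/32" means NCQ is currently disabled. One way to inspect and, if the controller allows it, raise the queue depth (a sketch; the device name is an assumption):
Code: | cat /sys/block/sda/device/queue_depth
echo 31 > /sys/block/sda/device/queue_depth |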
No wonder this is behaving so poorly... I need to get NCQ working again.
_________________
Intel Core i7 2700K/Radeon R7 250/24GB DDR3/256GB SSD
What am I supposed to be watching?
depontius Advocate
Joined: 05 May 2004 Posts: 3523
Posted: Fri Jun 13, 2014 3:38 pm
While we're talking RAID: I've been running ext4 on top of mdadm RAID 1 for years now, serving it as /home over NFSv4 on my LAN.
I'm getting set to rebuild, and was thinking of moving to btrfs with its internal RAID 1, still serving over NFSv4.
Any comments / pros / cons?
_________________
.sigs waste space and bandwidth
Pearlseattle Apprentice
Joined: 04 Oct 2007 Posts: 165 Location: Switzerland
Posted: Fri Jun 13, 2014 8:51 pm
depontius wrote: | While we're talking RAID: I've been running ext4 on top of mdadm RAID 1 for years now, serving it as /home over NFSv4 on my LAN.
I'm getting set to rebuild, and was thinking of moving to btrfs with its internal RAID 1, still serving over NFSv4.
Any comments / pros / cons? |
Any reason for you to move from your current ext4+RAID 1, apart from getting rid of the mdadm layer?
In any case: is the fsck now fully functional? I ask because (at least a year ago, maybe two) it could hardly fix anything, and I ended up with an unusable HDD.
Well, if the data you host on the RAID 1 is not too important, and you take regular backups, then you could give it a try; sooner or later somebody has to try using it.
depontius Advocate
Joined: 05 May 2004 Posts: 3523
Posted: Fri Jun 13, 2014 9:12 pm
Pearlseattle wrote: | depontius wrote: | While we're talking RAID: I've been running ext4 on top of mdadm RAID 1 for years now, serving it as /home over NFSv4 on my LAN.
I'm getting set to rebuild, and was thinking of moving to btrfs with its internal RAID 1, still serving over NFSv4.
Any comments / pros / cons? |
Any reason for you to move from your current ext4+RAID 1, apart from getting rid of the mdadm layer?
In any case: is the fsck now fully functional? I ask because (at least a year ago, maybe two) it could hardly fix anything, and I ended up with an unusable HDD.
Well, if the data you host on the RAID 1 is not too important, and you take regular backups, then you could give it a try; sooner or later somebody has to try using it. |
I'd like to try using snapshots or something similar to do something Time-Machine-like.
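A minimal sketch of that snapshot idea with btrfs (the paths are assumptions, and /home must be a btrfs subvolume):
Code: | # instant read-only snapshot of the home subvolume
btrfs subvolume snapshot -r /home /home/.snapshots/2014-06-13
# old snapshots can later be dropped with:
btrfs subvolume delete /home/.snapshots/2014-06-13 |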
There is also the extra integrity checking that btrfs does in the background, where mdadm only does it on demand.
I've seen that fsck has been around for a while, though I don't know how complete it really is.
Plus, it's shiny.
_________________
.sigs waste space and bandwidth
Pearlseattle Apprentice
Joined: 04 Oct 2007 Posts: 165 Location: Switzerland
Posted: Sat Jun 14, 2014 12:53 am
Hehehe... yes, it's shiny.
Quote: | I'd like to try using snapshots or something similar to do something Time-Machine-like. |
I don't have any experience with the Time Machine of OSX, but I'm using a continuous time-machine-like mechanism on all my notebooks via NILFS2 (a kind of continuous snapshotting of the current fs state, where the oldest snapshots are reclaimed when space runs low). It runs great on SSDs, but it would most probably be terrible on HDDs. Explicit snapshots are probably offered by more common filesystems (does anybody know which filesystems offer snapshot functionality?).
Quote: | There is also the extra integrity checking that btrfs does in the background, where mdadm only does it on demand. |
All other filesystems & mdadm work perfectly fine by performing the "normal" integrity checks at their own layer => if btrfs features "extra" checks, it makes me worry that they were added to cover flaws in more basic areas.
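For reference, the on-demand mdadm check mentioned above is a one-liner; a sketch, assuming the array is md0:
Code: | echo check > /sys/block/md0/md/sync_action
# afterwards, inspect the result:
cat /sys/block/md0/md/mismatch_cnt |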
vaxbrat l33t
Joined: 05 Oct 2005 Posts: 731 Location: DC Burbs
Posted: Sat Jun 14, 2014 5:41 am    Post subject: been on btrfs here for a while
Pro tip: make sure your memory is good. The two times I lost btrfs arrays were due to bad memory, as later shown by memtest.
"cp --reflink=always" is a switch to keep in mind. It's wonderful to be able to snapshot a VM image file in a couple of seconds, and then roll those changes back just as quickly. Great for playing with test software inside a VM, but not so hot if your corporate DC decides to throw you out of the domain on a whim because your machine account password got out of whack.
I have a 2x4TB mirror set that has nicely handled a bad-block replacement or two. It's a wonderful thing to see btrfs messages in your /var/log/messages announcing that it has found a bad checksum and repaired the nasty thing.
I've been running a 4x3TB RAID 5 (9TB usable) for about 6 months now, and it's about 6TB full. One of the drives has decided to pend-uncorrect 16 sectors. I've been keeping everything rsync'd to another btrfs RAID 5 on another box. I first saw some grief trying to do a rebalance after going from kernel 3.10.7 to 3.12.21-r1. The whoopsie appears to be isolated to a single CentOS 6.2 VM that I'm going to try to "recover" just for giggles. I'm in the middle of a scrub on the drive, and sure enough, it's whining about the VM image:
Code: | thufir ~ # btrfs scrub status /thufirraid
scrub status for 2fe75a96-44c3-4d99-a952-f913bdc063cf
scrub started at Thu Jun 12 00:44:59 2014, running for 175503 seconds
total bytes scrubbed: 4.31TiB with 4 errors
error details: csum=4
corrected errors: 0, uncorrectable errors: 4, unverified errors: 0 |
Relevant /var/log/messages entries:
Code: | 7866672, root 256, inode 4502179, offset 3467575296, length 4096, links 1 (path: vm/centos62_ref.img)
Jun 14 00:38:17 thufir kernel: [192275.822299] btrfs: bdev /dev/sdc errs: wr 0, rd 0, flush 0, corrupt 1, gen 0
Jun 14 00:38:17 thufir kernel: [192275.822301] btrfs: unable to fixup (regular) error at logical 7345050050560 on dev /dev/sdc
Jun 14 00:38:52 thufir kernel: [192310.114430] btrfs: checksum error at logical 7345296588800 on dev /dev/sdc, sector 2758348192, root 256, inode 4502179, offset 3467583488, length 4096, links 1 (path: vm/centos62_ref.img)
Jun 14 00:38:52 thufir kernel: [192310.114441] btrfs: bdev /dev/sdc errs: wr 0, rd 0, flush 0, corrupt 2, gen 0
Jun 14 00:38:52 thufir kernel: [192310.114443] btrfs: unable to fixup (regular) error at logical 7345296588800 on dev /dev/sdc
Jun 14 01:14:32 thufir kernel: [194453.449886] btrfs: checksum error at logical 7345050050560 on dev /dev/sdd, sector 2757866672, root 256, inode 4502179, offset 3467575296, length 4096, links 1 (path: vm/centos62_ref.img)
Jun 14 01:14:32 thufir kernel: [194453.449893] btrfs: bdev /dev/sdd errs: wr 0, rd 0, flush 0, corrupt 1, gen 0
Jun 14 01:14:32 thufir kernel: [194453.449895] btrfs: unable to fixup (regular) error at logical 7345050050560 on dev /dev/sdd
Jun 14 01:14:56 thufir kernel: [194477.418961] btrfs: checksum error at logical 7345296588800 on dev /dev/sdd, sector 2758348192, root 256, inode 4502179, offset 3467583488, length 4096, links 1 (path: vm/centos62_ref.img)
Jun 14 01:14:56 thufir kernel: [194477.418968] btrfs: bdev /dev/sdd errs: wr 0, rd 0, flush 0, corrupt 2, gen 0
Jun 14 01:14:56 thufir kernel: [194477.418971] btrfs: unable to fixup (regular) error at logical 7345296588800 on dev /dev/sdd |
The scrub should be done within the next day. I think my next step will be to take the array offline, have smartctl tell the /dev/sdc drive to run a self-test (effectively remapping those uncorrectable sectors), bring the array back online, and then try to copy the 20GB VM image off with scp. That may or may not be enough to kick it in the head. My next step after that is either to try the dreaded btrfs fsck, or maybe to just delete the VM image file and then try the rebalance again.
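The smartctl step would look roughly like this (a sketch; the device name is taken from the log above):
Code: | smartctl -t long /dev/sdc   # start an extended offline self-test
smartctl -a /dev/sdc        # later: check the result and the pending-sector counts |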
eccerr0r Watchman
Joined: 01 Jul 2004 Posts: 9847 Location: almost Mile High in the USA
Posted: Sun Jun 15, 2014 12:38 am    Post subject: Re: been on btrfs here for a while
vaxbrat wrote: | Pro tip: make sure your memory is good. The two times I lost btrfs arrays were due to bad memory, as later shown by memtest. |
++
And it doesn't even have to be btrfs. I've lost ext2 partitions due to bad RAM, though that RAM was really egregiously bad... I was surprised the BIOS RAM test didn't catch a problem that severe; it goes to show how crappy the memory tester in the BIOS has become.
_________________
Intel Core i7 2700K/Radeon R7 250/24GB DDR3/256GB SSD
What am I supposed to be watching?
frostschutz Advocate
Joined: 22 Feb 2005 Posts: 2977 Location: Germany
Posted: Sun Jun 15, 2014 10:31 pm
I'm happy with a 7x2TB HDD RAID 5.
Proper disk monitoring is crucial: test for read errors, and replace failing disks immediately. If you don't, then the first time a disk fails and you rebuild the RAID, the rebuild will be your first full read test, and the chance of hitting previously undiscovered read errors is high. At that point you have data loss, and your RAID may not be recoverable at all.
Of course, you also need a backup.
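A minimal monitoring sketch (the mail address is a placeholder):
Code: | # daemon that mails on Fail / DegradedArray events
mdadm --monitor --scan --mail=root@localhost --daemonise
# plus regular SMART self-tests on every member disk, e.g. from cron:
smartctl -t short /dev/sda |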
vaxbrat l33t
Joined: 05 Oct 2005 Posts: 731 Location: DC Burbs
Posted: Wed Jun 18, 2014 2:44 pm    Post subject: Update on my disk situation
I ran the dreaded btrfs fsck, and it didn't find anything wrong with the filesystem. However, the data blocks in the VM were still bad, so I ended up blasting the image file over with a backup copy. Although I've run this array as btrfs for a good half year, I had previously run it as an ext4 mdadm array for a couple of years. As an indicator of its age, the mobo is an AMD hexacore Thuban. It's probably time to consider the drives as being close to their expected lifetime (what's the warranty now, 2 years? They don't make 'em like they used to).
I'll probably build an AMD Piledriver-based system this weekend with a fresh set of 4TB drives, and then have the old box nuke itself with a low-level format after everything is copied over. I'm starting to do a manual daily rsync over to my other box now that I know things are getting a little flaky.
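The daily rsync itself is a one-liner; a sketch (the remote host name and target path are assumptions):
Code: | rsync -aHAX --delete /thufirraid/ otherbox:/thufirraid-backup/ |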