mruireis n00b

Joined: 20 Mar 2018 Posts: 4
Posted: Tue Mar 20, 2018 3:25 pm Post subject: Shared filesystem
Hi all,
I've created a KVM cluster on Gentoo hosts, and for the last 8 years I've been using OCFS2 over SAN disks. I've run into some issues on recent kernels, and with the removal of ocfs2-tools from Portage I decided to give GFS2 a try.
It seems that gfs2-utils has some problems building and is scheduled for removal as well.
Has anyone had experience with other shared filesystems? Any alternative suggestions to OCFS2 or GFS2?
Thanks.
szatox Advocate

Joined: 27 Aug 2013 Posts: 3605
Posted: Tue Mar 20, 2018 8:08 pm
CEPH is an option, though you'd better get some SSDs for that. Journals basically must reside on SSDs; the OSDs themselves depend on your IO requirements. Sharding does a lot of random IO, so HDDs take quite a performance hit from it.
MooseFS works in a similar way, too.
Depending on your particular needs and limitations, there are also things like clustered LVM, DRBD and the like.
On the other side of that spectrum you can find NFS.
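To give you an idea of the journal-on-SSD part: the OSD gets pointed at an SSD partition when it is prepared, roughly like this (hostname and device names are made up, and the exact spelling depends on your ceph-deploy version):
Code:
# data on an HDD, journal on an SSD partition
# old-style ceph-deploy (1.5.x):
ceph-deploy osd create osd1:/dev/sdb:/dev/nvme0n1p1
# newer ceph-deploy (2.x) uses flags instead:
ceph-deploy osd create --filestore --data /dev/sdb --journal /dev/nvme0n1p2 osd1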
mike155 Advocate

Joined: 17 Sep 2010 Posts: 4438 Location: Frankfurt, Germany
Posted: Tue Mar 20, 2018 8:28 pm
I sometimes work with active/passive KVM clusters which use DRBD and Pacemaker/Heartbeat.
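The DRBD part of such a setup is just a small resource definition, something along these lines (host names, disks and addresses are placeholders):
Code:
# /etc/drbd.d/r0.res -- minimal two-node resource for an active/passive pair
resource r0 {
    on kvm-a {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7789;
        meta-disk internal;
    }
    on kvm-b {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7789;
        meta-disk internal;
    }
}
Pacemaker then decides which node is Primary and where the guests run.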
mruireis n00b

Joined: 20 Mar 2018 Posts: 4
Posted: Wed Mar 21, 2018 5:10 pm
mike155 wrote: | I sometimes work with active/passive KVM clusters which use DRBD and Pacemaker/Heartbeat. |
I'm using three active hosts with SAN FC disks, so there's no need for DRBD here. My question is more about a filesystem alternative to OCFS2 or GFS2.
Thanks for the reply.
Last edited by mruireis on Wed Mar 21, 2018 5:19 pm; edited 1 time in total
mruireis n00b

Joined: 20 Mar 2018 Posts: 4
Posted: Wed Mar 21, 2018 5:19 pm
szatox wrote: | CEPH is an option, though you'd better get some SSDs for that. Journals basically must reside on SSDs; the OSDs themselves depend on your IO requirements. Sharding does a lot of random IO, so HDDs take quite a performance hit from it. |
I will take a look at CEPH, but I only need a filesystem. I will check whether it's an option to use only CephFS with my current storage.
szatox wrote: | Depending on your particular needs and limitations, there are also things like clustered LVM, DRBD and the like.
On the other side of that spectrum you can find NFS. |
The servers are connected to the SAN over FC, so there's no use for DRBD here. The storage array provides NFS shares, but that would mean switching the current FC connections to Ethernet. Not an option.
Thanks for the reply.
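From what I've read so far, the CephFS part itself seems to boil down to something like this (pool names, the monitor address and the keyring path are placeholders, and I still have to verify it against my setup):
Code:
# two pools plus the filesystem (an MDS daemon has to be running somewhere)
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64
ceph fs new cephfs cephfs_metadata cephfs_data

# kernel client mount on the KVM hosts
mount -t ceph 10.0.0.10:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret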
szatox Advocate

Joined: 27 Aug 2013 Posts: 3605
Posted: Wed Mar 21, 2018 11:56 pm
OK, that clarifies your needs.
Honestly, I don't know how well CephFS fits your "no Ethernet" requirement.
However, if it does, here are a few tips that will save you some headaches (a rough command-level sketch follows at the end of this post):
Make sure to enable sharding for your CephFS, even though it means more random IO. Ceph is bad at handling large objects; it can't balance them properly.
Keep nodes close to each other. High latency kills your performance.
You may consider striping to save some space (something along the lines of a RAID 6 mode instead of the traditional "make me 3 full copies" approach).
Finally, make sure you have at least one monitor on a machine that _won't_ freeze or get overloaded, no matter what. I've watched a whole cluster get rebooted by its built-in watchdogs, one node after another, when load went through the roof due to positive feedback between IO, IO, IO, and IO. Now you know why the people behind Ceph recommend dedicated servers for OSDs and monitors.
Bonus: ceph-deploy makes things pretty manageable. You won't get away from defining your stuff in config files, but it helps a lot with preparing OSDs and keeping things in sync. Do not use anything smarter than that for managing Ceph configs: each node must have its local copy stored on its local disk. It actively uses those configs all the time, and if they vanish for whatever reason, that node gets kicked out of the cluster, no excuses.
That doesn't mean Ceph is bad, and in fact it may be a good fit for your use case (hey, I suggested you have a look at it), but it does have some rough edges and is not very forgiving when you mess things up. So I decided to point out some pitfalls you'd be likely to encounter if you didn't know about them before starting out.
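To make the ceph-deploy and "RAID 6 mode" remarks a bit more concrete, the rough shape is something like this (hostnames and the k/m numbers are only an example, check them against the docs for your Ceph version):
Code:
# bootstrap the monitors and push the same ceph.conf to every node's local disk
ceph-deploy new mon1 mon2 mon3
ceph-deploy mon create-initial
ceph-deploy config push mon1 mon2 mon3 osd1 osd2 osd3

# erasure-coded pool instead of 3 full replicas: 4 data chunks + 2 coding chunks
ceph osd erasure-code-profile set ec42 k=4 m=2
ceph osd pool create ecpool 128 128 erasure ec42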
mruireis n00b

Joined: 20 Mar 2018 Posts: 4
Posted: Fri Mar 23, 2018 10:09 am
Thank you for your reply, it really gives me some good insight into what I will run into when I start exploring Ceph.