Gentoo Forums
One NFS file system, multiple servers with multiple disks
wildhorse
Apprentice


Joined: 16 Mar 2006
Posts: 150
Location: Estados Unidos De América

PostPosted: Fri Feb 01, 2008 1:59 am    Post subject: One NFS file system, multiple servers with multiple disks

I am looking for a solution that allows me to serve one file system via one NFS mount point. The disks (IDE) are located on multiple Gentoo machines. Each machine is serving multiple disks.

It seems that Multicast NFS (MNFS) can do what I need, but I am not sure if MNFS is available on Gentoo.

As an alternative solution I could use one physical server, let this server import multiple NFS file systems from the other machines, and then re-export them all-in-one as one NFS file system. But I am sure the performance of this solution would be terrible.
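Roughly, that re-export variant would look something like this on the middle server (hostnames, paths and the client network below are made up, and re-exporting an NFS mount through the kernel NFS server is not officially supported, so it may simply refuse to do it):

Code:

# /etc/fstab on the middle server: import the disk servers' exports
diskserver1:/data   /export/d1   nfs   rw,hard,intr   0 0
diskserver2:/data   /export/d2   nfs   rw,hard,intr   0 0

# /etc/exports on the middle server: re-export everything under one tree;
# each re-exported mount needs an explicit fsid
/export      192.168.0.0/24(rw,sync,no_subtree_check,crossmnt,fsid=0)
/export/d1   192.168.0.0/24(rw,sync,no_subtree_check,fsid=1)
/export/d2   192.168.0.0/24(rw,sync,no_subtree_check,fsid=2)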

I can also imagine setting up a connection from one central NFS server to the disk servers via iSCSI, but I do not know if that is fully supported (on Gentoo and with respect to my IDE disks) and what the performance might be. The problem with any server-in-the-middle solution is latency.
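The iSCSI variant might look roughly like this on the central server, assuming open-iscsi and targets already configured on the disk servers (all IPs and device names are made up, and this is untested):

Code:

# Discover and log in to the targets exported by the disk servers
iscsiadm -m discovery -t sendtargets -p 192.168.1.11
iscsiadm -m discovery -t sendtargets -p 192.168.1.12
iscsiadm -m node --loginall=all

# Glue the imported block devices together with LVM, make one filesystem,
# and export it once via NFS
pvcreate /dev/sdb /dev/sdc
vgcreate vg_nfs /dev/sdb /dev/sdc
lvcreate -L 500G -n export vg_nfs      # size is just an example
mke2fs -j /dev/vg_nfs/export
mount /dev/vg_nfs/export /export
# /etc/exports:   /export   192.168.0.0/24(rw,sync,no_subtree_check)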

I can do anything on the servers. I can add another machine as a central server. I can detach the networks and put a bridge in between. I also have my own BIND DNS server (which does not run Gentoo Linux). Anything on my servers and my local network is possible. But I cannot change the clients. Only mounting a single NFS file system is allowed on the clients.

Any suggestions?
SnEptUne
l33t


Joined: 23 Aug 2004
Posts: 656

PostPosted: Fri Feb 01, 2008 6:51 am

How about using the nohide option in NFS?

Code:

$ man exports
       nohide This option is based on the option of the same name provided in
              IRIX NFS.  Normally, if a server exports two filesystems one of
              which is mounted on the other, then the client will have to
              mount both filesystems explicitly to get access to them.  If it
              just mounts the parent, it will see an empty directory at the
              place where the other filesystem is mounted.  That filesystem is
              "hidden".

              Setting the nohide option on a filesystem causes it not to be
              hidden, and an appropriately authorised client will be able to
              move from the parent to that filesystem without noticing the
              change.

              However, some NFS clients do not cope well with this situation
              as, for instance, it is then possible for two files in the one
              apparent filesystem to have the same inode number.

              The nohide option is currently only effective on single host
              exports.  It does not work reliably with netgroup, subnet, or
              wildcard exports.

              This option can be very useful in some situations, but it should
              be used with due care, and only after confirming that the client
              system copes with the situation effectively.

              The option can be explicitly disabled with hide.


And from Gentoo Wiki:

Quote:

If there is some other filesystem mounted under /export/FOLDER then it will also have to be exported, but if it has the 'nohide' option it will be visible when its parent FOLDER is mounted. The important thing is that nohide does not automatically make all mounted filesystems visible; each filesystem still has to be exported separately, but they can all be seen through one mount.
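As a rough example of what that could look like in /etc/exports on the server (hostname and paths are made up; note the single-host restriction mentioned in the man page above):

Code:

# /export is one filesystem; /export/disk1 and /export/disk2 are separate
# filesystems mounted underneath it
/export         client1(rw,sync,no_subtree_check)
/export/disk1   client1(rw,sync,nohide,no_subtree_check)
/export/disk2   client1(rw,sync,nohide,no_subtree_check)

# The client then only needs:  mount server:/export /mnt/export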

_________________
"There will be more joy in heaven over the tear-bathed face of a repentant sinner than over the white robes of a hundred just men." (LM, 114)
wildhorse
Apprentice


Joined: 16 Mar 2006
Posts: 150
Location: Estados Unidos De América

PostPosted: Fri Feb 01, 2008 7:06 pm

But exports' nohide option does not hide the fact that the clients would still be dealing with multiple servers. Distribution of the data over all disk servers is not addressed with this option, or at least not with this option alone.
SnEptUne
l33t


Joined: 23 Aug 2004
Posts: 656

PostPosted: Fri Feb 01, 2008 8:22 pm

wildhorse wrote:
But exports' nohide option does not hide the fact that the clients would still be dealing with multiple servers. Distribution of the data over all disk servers is not addressed with this option, or at least not with this option alone.


I thought the requirement was for the client to mount only once? Maybe I have misunderstood your post; what are you trying to do? Could you give me an example?
_________________
"There will be more joy in heaven over the tear-bathed face of a repentant sinner than over the white robes of a hundred just men." (LM, 114)
wildhorse
Apprentice


Joined: 16 Mar 2006
Posts: 150
Location: Estados Unidos De América

PostPosted: Fri Feb 01, 2008 10:15 pm

N clients
1 common NFS file system seen by each of the N clients
M disk servers, each with multiple disks

I can easily combine all disks on one disk server into one file system (e.g. RAID, LVM). That is fine if I have only one NFS server. But I have too many servers, and I do not want the users to manage the allocation of the disks. The file system should distribute the data onto the disk servers and disks, and the users should see only one huge NFS file system.
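(As a rough sketch, the LVM part of that would be something like this; device names and sizes are only examples:)

Code:

# Combine three IDE disks into one volume group and one big logical volume
pvcreate /dev/hda1 /dev/hdb1 /dev/hdc1
vgcreate vg_data /dev/hda1 /dev/hdb1 /dev/hdc1
lvcreate -L 500G -n big vg_data       # or size it to use the whole VG
mke2fs -j /dev/vg_data/big            # ext3
mount /dev/vg_data/big /export        # then export /export via NFS
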
SnEptUne
l33t


Joined: 23 Aug 2004
Posts: 656

PostPosted: Fri Feb 01, 2008 11:03 pm

Why do clients need to manage allocation of disks? I still don't quite understand what you are trying to do.

From your description, you want clients to access files from multiple disk servers, but you can only install an NFS server on one system? Or is it the case that the clients cannot access the disk servers directly and can only talk to the NFS server? Is NFS a requirement? How will the clients access the remote file system? Why is nohide not a solution? Could you be more specific instead of giving a vague description? I can't read your mind.
_________________
"There will be more joy in heaven over the tear-bathed face of a repentant sinner than over the white robes of a hundred just men." (LM, 114)
wildhorse
Apprentice


Joined: 16 Mar 2006
Posts: 150
Location: Estados Unidos De América

PostPosted: Fri Feb 01, 2008 11:15 pm

The problem is writing data. The exports nohide option only works well for reading data.
If you write data from any of the N clients onto the NFS file system, then someone needs to decide onto which of the M file systems the data goes. And that decision should be made on the server side, not the client side, and in particular not by the users on the clients.
robdd
Tux's lil' helper


Joined: 02 Jan 2005
Posts: 142
Location: Sydney Australia

PostPosted: Sat Feb 02, 2008 2:57 am

Hi wildhorse - I'm trying to narrow down the possible solutions by thinking about your problem from first principles...

Quote:

But I cannot change the clients. Only mounting a single NFS file system is allowed on the clients.


First, I know almost zip about MNFS, but from the descriptions I've seen it requires the clients to be listening to multicasts from a server, and that implies some smarts in the clients. But you can't change the client software - so MNFS doesn't seem viable. Have I got that right, or missed something ??

When you mount an NFS file system on a client you have to specify the IP address of the NFS server as part of the mount information. Only one computer can have a particular IP address, and your clients can only have one server address specified, so it doesn't seem possible to me to route NFS requests from your clients to multiple server boxes. (You *could* build a super-fast and super-clever network appliance that inspects NFS packets and routes requests to different servers based on their contents - but that doesn't sound practical.)

So if the above logic is correct then you are stuck with a single NFS server box (from your clients' viewpoint), which at least limits the options you have to consider.

At first glance it seems like doing two-hop NFS might be too slow, but the slowest part of any disk access is the disk seek/transfer time. Your "server in the middle" is not doing any physical I/O, so you're only looking at network latency. If the "server in the middle" had plenty of grunt and Gigabit LAN connections to the other servers holding the disks, it may not be so bad - the best way would be to measure it. (Also, I'm not 100% sure that NFS-exporting a filesystem that itself contains NFS mounts even works - I remember having some kind of problem a looooooooooong time ago, but that could have been with Samba ?? I *could* try it out here, but I'm too lazy :D ).

Hope these ramblings help you sort out your ideas..

Regards, Rob
_________________
Rob Diamond
Gentoo Hack, hack, hacker
Sydney, Australia
SnEptUne
l33t


Joined: 23 Aug 2004
Posts: 656

PostPosted: Sat Feb 02, 2008 3:35 am

I see. I am not sure how nohide works, but NFS wasn't designed to be re-exported recursively anyway, for performance reasons.

How about OpenAFS? It also caches data on clients, and only updates through Venus and Vice when necessary.
_________________
"There will be more joy in heaven over the tear-bathed face of a repentant sinner than over the white robes of a hundred just men." (LM, 114)
misterbob05
Tux's lil' helper


Joined: 28 Apr 2007
Posts: 90

PostPosted: Sat Feb 02, 2008 4:20 am

One idea I had was making a parent directory, mounting all the shares in that directory, and then mounting that parent directory on all the client machines???

I haven't tried it, and I don't know the theory behind NFS or anything, as I use Samba.

Sorry if it has already been thought of and shot down.
robdd
Tux's lil' helper


Joined: 02 Jan 2005
Posts: 142
Location: Sydney Australia

PostPosted: Sat Feb 02, 2008 4:53 am

Sorry SnEptUne - I didn't read your earlier post carefully :oops:
Quote:

However, some NFS clients do not cope well with this situation as, for instance, it is then possible for two files in the one apparent filesystem to have the same inode number.


So depending on what wildhorse's client NFS implementation does, the NFS mounts under an NFS mount may not work. Sounds like one big server coming up. At least big IDE and SATA disks are very cheap now. Another option may be putting his existing IDE disks into external enclosures with USB interfaces - I've got a couple I use for backup, file transfer etc. and they work well. Most new boxes have heaps of free USB ports, and you can always get PCI cards with more ports. It's a shame when the single server goes down, though.

Regards, Rob
_________________
Rob Diamond
Gentoo Hack, hack, hacker
Sydney, Australia
wildhorse
Apprentice


Joined: 16 Mar 2006
Posts: 150
Location: Estados Unidos De América

PostPosted: Mon Feb 04, 2008 9:03 pm

Hi Everybody, thanks for your comments!
It seems the idea of redistributing the files via an NFS server in the middle may not work. There is the inode problem, latency, and most importantly the matter of distributing data onto the actual disk servers for write access. AFS might be able to cope with this configuration, but I cannot use AFS. And the USB solution would require additional investment in new hardware for a lot of disks, and it would be too slow anyway.

How about Network Block Devices (NBD)? Has anybody ever set up a big NFS server with lots of NBDs? I could imagine that with read-ahead and write-through caching the performance may not be that bad. It seems that RAID over NBD works. What I would like to know is whether anyone has actual experience with such a configuration.
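In case it helps, a rough and untested sketch of what I have in mind (hostnames, ports and device names are made up):

Code:

# On each disk server: export a disk over NBD (classic nbd-server syntax: port, device)
nbd-server 2000 /dev/hda1

# On the central NFS server: import the remote disks and build RAID5 across them
modprobe nbd
nbd-client diskserver1 2000 /dev/nbd0
nbd-client diskserver2 2000 /dev/nbd1
nbd-client diskserver3 2000 /dev/nbd2
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/nbd0 /dev/nbd1 /dev/nbd2
mke2fs -j /dev/md0
mount /dev/md0 /export      # then export /export via NFS as usual
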
your_WooDness
Tux's lil' helper


Joined: 25 Oct 2007
Posts: 77

PostPosted: Mon Feb 04, 2008 11:17 pm

Hi there,

this sounds adventurous... =0) But please keep in mind that if you just "stripe" several filesystems together over the network to make them available as one filesystem, you will run into big trouble when something goes wrong. So I hope you won't store important data on this.
What you are trying to do is so-called "storage virtualization": abstracting the storage devices away from the clients. There are a lot of software solutions out there for this...
But I think with NBD you will also have the bottleneck of one server that manages all the traffic, and on http://nbd.sourceforge.net/ it says "...But (also unlike NFS), if someone has mounted NBD read/write, you must assure that no one else will have it mounted...". This means the NBD device will end up with corrupt data if that situation ever occurs.

What about CFS or the Lustre filesystem? --> http://wiki.lustre.org/index.php?title=Main_Page
I think this would meet your needs, but it is a bit tricky to set up. I wouldn't even store my mp3 collection on a shaky setup of LVM over NFS mounts or NBD.
An easy way would be to set up iSCSI targets on the servers that should provide the storage. The clients would have to have the iSCSI initiator installed. But then you still have no centralized storage, just a lot of iSCSI targets, which means the logical configuration would be the same as with a lot of NFS or Samba shares. The performance should be a bit better with iSCSI, though.

WooD
wildhorse
Apprentice


Joined: 16 Mar 2006
Posts: 150
Location: Estados Unidos De América

PostPosted: Mon Feb 04, 2008 11:43 pm

your_WooDness, I have enough disks (and thus plenty of statistics) to confirm that hard disks sometimes do fail. :wink:

NBD supports RAID, in particular RAID5. Data corruption with NBD is not a problem, since the disks are only being used on my disk servers and I am in charge. :)

CFS is not an option. Neither is modifying the clients (except for adding an NFS file system entry to fstab). Lustre looks interesting. I also looked into other "cluster" or global file systems. But I have to use NFS, at least as the front end.

Has anyone ever compared iSCSI vs. NBD?
wildhorse
Apprentice


Joined: 16 Mar 2006
Posts: 150
Location: Estados Unidos De América

PostPosted: Tue Feb 05, 2008 12:41 am

Perhaps I should add ATA over Ethernet (AoE) as another option, since I have a dedicated LAN between the disk servers anyway (AoE runs directly over Ethernet rather than IP).
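(A minimal AoE sketch would be something like the following, with vblade on the disk servers and the aoe module on the central box; interface, shelf/slot and device names are made up:)

Code:

# On each disk server: export a disk with vblade (shelf 0, slot 0, on eth1)
vblade 0 0 eth1 /dev/hda1 &

# On the central server: load the aoe driver; the disks show up as /dev/etherd/e<shelf>.<slot>
modprobe aoe
ls /dev/etherd/            # e.g. e0.0, e1.0, ...
# From there they can be combined with md/LVM and exported via NFS as before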

So, if there is nothing like MNFS, has anyone tested iSCSI vs. NBD vs. AoE?