Gentoo Forums
emerge over chrooted nfs share?
Gentoo Forums Forum Index » Networking & Security
Chewi
Developer


Joined: 01 Sep 2003
Posts: 886
Location: Edinburgh, Scotland

Posted: Mon Apr 02, 2018 7:09 pm

PS1 disappears because it gets stripped out by sudo for security reasons. Try changing the chroot line to this.

Code:
exec env PS1=eden chroot '${ROOT}' /bin/bash -l
Joseph_sys
Advocate


Joined: 08 Jun 2004
Posts: 2716
Location: Edmonton, AB

Posted: Mon Apr 02, 2018 7:23 pm

Chewi wrote:
PS1 disappears because it gets stripped out by sudo for security reasons. Try changing the chroot line to this.

Code:
exec env PS1=eden chroot '${ROOT}' /bin/bash -l


No, it didn't work:
Code:
syscon3 /home/thelma # sh chroot-eden
+ HOST=chroot-eden
+ HOST=eden
+ ROOT=/mnt/eden
+ PS1=eden
+ mkdir -p --mode=0755 /mnt/eden
+ exec sudo unshare -m /bin/sh -c '
set -e

mount -t nfs -o rw,noatime,nocto,actimeo=60,lookupcache=positive,vers=4,fsc '\''eden:/'\'' '\''/mnt/eden'\''
mount --bind {,'\''/mnt/eden'\''}/dev
mount --bind {,'\''/mnt/eden'\''}/dev/pts
mount --bind {,'\''/mnt/eden'\''}/dev/shm
mount --bind {,'\''/mnt/eden'\''}/proc
mount --bind {,'\''/mnt/eden'\''}/sys
mount --bind {,'\''/mnt/eden'\''}/usr/local/portage
mount --bind {,'\''/mnt/eden'\''}/usr/portage
mount --bind {,'\''/mnt/eden'\''}/var/cache/edb/dep
mount --bind {,'\''/mnt/eden'\''}/var/tmp/portage

#exec chroot '\''/mnt/eden'\'' /bin/bash -i
exec env PS1=eden chroot '\''/mnt/eden'\'' /bin/bash -i
'
syscon3 / #
Chewi
Developer


Joined: 01 Sep 2003
Posts: 886
Location: Edinburgh, Scotland

Posted: Mon Apr 02, 2018 7:30 pm

Oh of course, it'll be reset by ${ROOT}/etc/bash/bashrc. I guess you'll have to do something clever in there.
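One possible shape for that cleverness (a sketch, untested; CHROOT_NAME is a made-up marker variable, not something the script already uses): have the chroot's bashrc build its prompt from a marker passed in from outside, since bashrc runs after env's PS1 assignment and would otherwise clobber it.

```shell
# Hypothetical fragment for the chroot's /etc/bash/bashrc. CHROOT_NAME is a
# made-up marker; normally it would arrive via the environment (see below),
# the default here just keeps the sketch self-contained.
CHROOT_NAME="${CHROOT_NAME:-eden}"
if [ -n "${CHROOT_NAME}" ]; then
    # Prefix the prompt so shells inside the chroot are unmistakable.
    PS1="(${CHROOT_NAME}) "'\u@\h \w \$ '
fi
printf 'prompt: %s\n' "${PS1}"
```

The chroot script's last line would then pass the marker instead of PS1 itself, e.g. exec env CHROOT_NAME=eden chroot '${ROOT}' /bin/bash -i.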
guitou
Guru


Joined: 02 Oct 2003
Posts: 534
Location: France

Posted: Tue Apr 03, 2018 1:35 pm

Hello.

I suppose PS1 is set... but outside the chrooted env.

++
Gi)

Edit: replied too late!
Joseph_sys
Advocate


Joined: 08 Jun 2004
Posts: 2716
Location: Edmonton, AB

Posted: Tue Apr 03, 2018 6:17 pm

Chewi wrote:
Oh of course, it'll be reset by ${ROOT}/etc/bash/bashrc. I guess you'll have to do something clever in there.


I was trying to run your script on another remote network but I get:
"Illegal instruction"
It worked fine on one of my networks, but not the remote one.

Code:
#!/bin/sh

set -x

HOST=${0##*/}
HOST=${HOST#*-}
ROOT=/mnt/${HOST}

PS1="${HOST}"

mkdir -p --mode=0755 "${ROOT}"

#env -i - HOME="/root" TERM="${TERM}" exec sudo unshare -m /bin/sh -c "
exec sudo unshare -m /bin/sh -c "
set -e

mount -t nfs -o rw,noatime,nocto,actimeo=60,lookupcache=positive,vers=4,fsc '${HOST}:/' '${ROOT}'
mount --bind {,'${ROOT}'}/dev
mount --bind {,'${ROOT}'}/dev/pts
mount --bind {,'${ROOT}'}/dev/shm
mount --bind {,'${ROOT}'}/proc
mount --bind {,'${ROOT}'}/sys
mount --bind {,'${ROOT}'}/usr/local/portage
mount --bind {,'${ROOT}'}/usr/portage
mount --bind {,'${ROOT}'}/var/cache/edb/dep
mount --bind {,'${ROOT}'}/var/tmp/portage

exec chroot '${ROOT}' /bin/bash -i
"


Code:
+ HOST=chroot-i5
+ HOST=i5
+ ROOT=/mnt/i5
+ PS1=i5
+ mkdir -p --mode=0755 /mnt/i5
+ exec sudo unshare -m /bin/sh -c '
set -e

mount -t nfs -o rw,noatime,nocto,actimeo=60,lookupcache=positive,vers=4,fsc '\''i5:/'\'' '\''/mnt/i5'\''
mount --bind {,'\''/mnt/i5'\''}/dev
mount --bind {,'\''/mnt/i5'\''}/dev/pts
mount --bind {,'\''/mnt/i5'\''}/dev/shm
mount --bind {,'\''/mnt/i5'\''}/proc
mount --bind {,'\''/mnt/i5'\''}/sys
mount --bind {,'\''/mnt/i5'\''}/usr/local/portage
mount --bind {,'\''/mnt/i5'\''}/usr/portage
mount --bind {,'\''/mnt/i5'\''}/var/cache/edb/dep
mount --bind {,'\''/mnt/i5'\''}/var/tmp/portage

exec chroot '\''/mnt/i5'\'' /bin/bash -i
'
Illegal instruction
Chewi
Developer


Joined: 01 Sep 2003
Posts: 886
Location: Edinburgh, Scotland

Posted: Tue Apr 03, 2018 6:27 pm

Judging by the "i5" name, this is a Core i5 that has had its software built with CFLAGS that are not compatible with the processor you are now trying to run it on. Also be careful not to use -march=native or you might end up breaking the remote system.
Joseph_sys
Advocate


Joined: 08 Jun 2004
Posts: 2716
Location: Edmonton, AB

Posted: Tue Apr 03, 2018 7:49 pm

Chewi wrote:
Judging by the "i5" name, this is a Core i5 that has had its software built with CFLAGS that are not compatible with the processor you are now trying to run it on. Also be careful not to use -march=native or you might end up breaking the remote system.


The computer that would be doing the compiling is:
AMD FX(tm)-8350 Eight-Core Processor
CFLAGS="-march=native -O2 -pipe"

What should I use on the above computer?


The i5 (you are correct) is the one chroot failed on:
Intel(R) Core(TM) i5-4200U CPU @ 1.60GHz
CFLAGS="-march=native -O2 -pipe"

Though, I was able to run the chroot-script OK on another remote box (same network):
Intel(R) Atom(TM) CPU 330 @ 1.60GHz
CFLAGS="-march=core2 -O2 -pipe"

--------------
On my local network I recompiled/upgraded my VIA box via chroot:
VIA Eden Processor 1200MHz
CFLAGS="-O2 -march=i686 -pipe"

The computer that was doing the compiling was:
AMD Ryzen 5 1400 Quad-Core Processor
CFLAGS="-march=native -O2 -pipe"
Chewi
Developer


Joined: 01 Sep 2003
Posts: 886
Location: Edinburgh, Scotland

Posted: Tue Apr 03, 2018 8:10 pm

Your Eden/Ryzen combo didn't break anything because only the Ryzen system had -march=native. If the Eden system had had that too, you would have found it broken following your upgrade. To be absolutely safe, stop using -march=native everywhere.

Your i5 system is a Haswell and your FX system is a Piledriver (bdver2). The gcc man page shows some slight differences between these and it only takes one instruction to break things. I think the most likely culprit is AVX2. For this to work, you would need to rebuild the i5 system or maybe even both with the lowest common denominator but it's hard to say what that would be. Usually this kind of problem is avoided because it is a much newer system doing the building. In your situation, you may want to consider distcc instead. In theory, the new stuff arriving in EAPI 7 will allow you to mount your remote system and build without chrooting but it is likely to break in other ways because this approach is mainly intended for cross-compiling.
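One way to see which instruction-set extensions differ (a sketch; the grep line in the comment is what you would run on each real machine, while the printf lines fake two flag lists so the example is self-contained):

```shell
# Sketch: find ISA extensions the build machine has that the target lacks.
# On each real machine you would collect the flags with something like:
#   grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort > /tmp/flags.$(hostname)
# Canned data stands in for the two machines here: a Haswell-style builder
# (has avx2) and a Piledriver-style target (does not).
printf '%s\n' aes avx avx2 sse4_2 | sort > /tmp/flags.builder
printf '%s\n' aes avx sse4_2      | sort > /tmp/flags.target
# Flags only the builder has; code using any of these gets SIGILL on the target.
comm -23 /tmp/flags.builder /tmp/flags.target   # prints: avx2
```

Any flag in that output marks an extension the builder may emit code for that the target cannot execute.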
pablocool
n00b


Joined: 27 Jul 2017
Posts: 58

Posted: Tue Jul 17, 2018 9:59 am

Hello Guys

I also wanted to use this cool method but failed. Please help with troubleshooting.

Code:
pablocool@wloski ~ $ cat chroot-10.0.0.100
#!/bin/sh

set -x

HOST=${0##*/}
HOST=${HOST#*-}
ROOT=/mnt/${HOST}

PS1="${HOST}"

mkdir -p --mode=0755 "${ROOT}"

#env -i - HOME="/root" TERM="${TERM}" exec sudo unshare -m /bin/sh -c "
exec sudo unshare -m /bin/sh -c "
set -e

mount -t nfs -o rw,noatime,nocto,actimeo=60,lookupcache=positive,vers=4,fsc '${HOST}:/' '${ROOT}'
mount --bind {,'${ROOT}'}/dev
mount --bind {,'${ROOT}'}/dev/pts
mount --bind {,'${ROOT}'}/dev/shm
mount --bind {,'${ROOT}'}/proc
mount --bind {,'${ROOT}'}/sys
mount --bind {,'${ROOT}'}/usr/local/portage
mount --bind {,'${ROOT}'}/usr/portage
mount --bind {,'${ROOT}'}/var/cache/edb/dep
mount --bind {,'${ROOT}'}/var/tmp/portage

exec chroot '${ROOT}' /bin/bash -i
"



Code:
pablocool@wloski ~ $ sh chroot-10.0.0.100
+ HOST=chroot-10.0.0.100
+ HOST=10.0.0.100
+ ROOT=/mnt/10.0.0.100
+ PS1=10.0.0.100
+ mkdir -p --mode=0755 /mnt/10.0.0.100
+ exec sudo unshare -m /bin/sh -c
set -e

mount -t nfs -o rw,noatime,nocto,actimeo=60,lookupcache=positive,vers=4,fsc '10.0.0.100:/' '/mnt/10.0.0.100'
mount --bind {,'/mnt/10.0.0.100'}/dev
mount --bind {,'/mnt/10.0.0.100'}/dev/pts
mount --bind {,'/mnt/10.0.0.100'}/dev/shm
mount --bind {,'/mnt/10.0.0.100'}/proc
mount --bind {,'/mnt/10.0.0.100'}/sys
mount --bind {,'/mnt/10.0.0.100'}/usr/local/portage
mount --bind {,'/mnt/10.0.0.100'}/usr/portage
mount --bind {,'/mnt/10.0.0.100'}/var/cache/edb/dep
mount --bind {,'/mnt/10.0.0.100'}/var/tmp/portage

exec chroot '/mnt/10.0.0.100' /bin/bash -i

mount: mount point {,/mnt/10.0.0.100}/dev does not exist


I cannot understand this {,' ... '} construction. I'd appreciate any explanation.
Chewi
Developer


Joined: 01 Sep 2003
Posts: 886
Location: Edinburgh, Scotland

Posted: Tue Jul 17, 2018 10:07 am

Turns out brace expansion is a Bashism. You learn something every day. Replace #!/bin/sh with #!/bin/bash.
pablocool
n00b


Joined: 27 Jul 2017
Posts: 58

Posted: Tue Jul 17, 2018 11:07 am

A point for you, though it is still not working.
The old PC is a Gentoo system; it has:
lrwxrwxrwx 1 root root 4 07-14 21:04 /bin/sh -> bash
However, for test purposes I used a Debian VPS as the strong machine. It has:
lrwxrwxrwx 1 root root 4 lip 20 2016 /bin/sh -> dash

I am closer but it is still not working:

Code:
+ HOST=chroot-10.0.0.100
+ HOST=10.0.0.100
+ ROOT=/mnt/10.0.0.100
+ PS1=10.0.0.100
+ mkdir -p --mode=0755 /mnt/10.0.0.100
+ exec sudo unshare -m /bin/sh -c '
set -e

mount -t nfs -o rw,noatime,nocto,actimeo=60,lookupcache=positive,vers=4,fsc '\''10.0.0.100:/'\'' '\''/mnt/10.0.0.100'\''
mount --bind {,'\''/mnt/10.0.0.100'\''}/dev
mount --bind {,'\''/mnt/10.0.0.100'\''}/dev/pts
mount --bind {,'\''/mnt/10.0.0.100'\''}/dev/shm
mount --bind {,'\''/mnt/10.0.0.100'\''}/proc
mount --bind {,'\''/mnt/10.0.0.100'\''}/sys
mount --bind {,'\''/mnt/10.0.0.100'\''}/usr/local/portage
mount --bind {,'\''/mnt/10.0.0.100'\''}/usr/portage
mount --bind {,'\''/mnt/10.0.0.100'\''}/var/cache/edb/dep
mount --bind {,'\''/mnt/10.0.0.100'\''}/var/tmp/portage

exec chroot '\''/mnt/10.0.0.100'\'' /bin/bash -i
'
mount: mount point {,/mnt/10.0.0.100}/dev does not exist


Why do we need these braces { }?

EDIT:
This line also needed updating:
+ exec sudo unshare -m /bin/sh -c '
Chewi
Developer


Joined: 01 Sep 2003
Posts: 886
Location: Edinburgh, Scotland

Posted: Tue Jul 17, 2018 12:08 pm

Oh, I missed the /bin/sh in the middle of the script. Replace that too.

It's just a short way of saying:

Code:
mount --bind /dev /mnt/10.0.0.100/dev
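The Bashism can be seen without any mounting (a demo assuming bash is installed; echo stands in for mount):

```shell
# Brace expansion is a bash feature, not POSIX. bash rewrites the one-word
# pattern into two words before the command ever runs:
bash -c 'ROOT=/mnt/10.0.0.100; echo mount --bind {,"${ROOT}"}/dev'
# prints: mount --bind /dev /mnt/10.0.0.100/dev
# A strict POSIX sh such as dash leaves the braces literal, which is why mount
# complained about the nonexistent path "{,/mnt/10.0.0.100}/dev".
```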
pablocool
n00b


Joined: 27 Jul 2017
Posts: 58

Posted: Wed Jul 18, 2018 7:40 am

Thank you for the help! It is working.

Just for information: it's overkill of course, but it even works over the internet via OpenVPN.
MickKi
Veteran


Joined: 08 Feb 2004
Posts: 1179

Posted: Tue Dec 10, 2024 1:55 pm

Thank you for your script.

I have come across two problems with it.

1. I have it working by exporting a slow laptop's / (ext4) fs to a fast client. All works well, except that if I shut down the laptop I can no longer shut down the client. Restarting the laptop allows the client to shut down. Stopping the nfsclient service on the client doesn't make a difference. Shouldn't the chrooted fs be unmounted first, as per the Handbook?

2. A second slow system, with its / on a btrfs subvolume and two of its directories (/usr/portage and /var/log) hosted on a different drive on separate btrfs partitions (both are top volumes in their respective partitions), fails to be exported. The /etc/exports and the chroot.sh script are the same as in 1. above. I tried different things, like mount --rbind for the two separate fs, but the script fails with:

Code:
 $ ./chroot-10.10.10.2
+ HOST=chroot-10.10.10.2
+ HOST=10.10.10.2
+ ROOT=/mnt/10.10.10.2
+ PS1=10.10.10.2
+ sudo mkdir -p --mode=0755 /mnt/10.10.10.2
Password:
+ exec sudo unshare -m /bin/bash -c '
set -e

mount -t nfs -o rw,noatime,nocto,actimeo=60,lookupcache=positive,nfsvers=4.2,fsc '\''10.10.10.2:/'\'' '\''/mnt/10.10.10.2'\''
mount --bind {,'\''/mnt/10.10.10.2'\''}/dev
mount --bind {,'\''/mnt/10.10.10.2'\''}/dev/pts
mount --bind {,'\''/mnt/10.10.10.2'\''}/dev/shm
mount --bind {,'\''/mnt/10.10.10.2'\''}/proc
mount --bind {,'\''/mnt/10.10.10.2'\''}/sys
#mount --bind {,'\''/mnt/10.10.10.2'\''}/usr/local/portage
#mount --bind {,'\''/mnt/10.10.10.2'\''}/usr/portage
mount --bind {,'\''/mnt/10.10.10.2'\''}/var/cache/edb/dep
mount --bind {,'\''/mnt/10.10.10.2'\''}/var/tmp/portage

exec chroot '\''/mnt/10.10.10.2'\'' /bin/bash -i
'
Illegal instruction


The server log shows:
Code:
rpc.mountd[2744]: Cannot export /proc, possibly unsupported filesystem or fsid= required
rpc.mountd[2744]: Cannot export /sys, possibly unsupported filesystem or fsid= required


Code:
cat /etc/exports
# /etc/exports: NFS file systems being exported.  See exports(5).
/         10.10.10.12/32(insecure,rw,sync,no_root_squash,no_subtree_check,crossmnt,fsid=0)


What am I doing wrong?
_________________
Regards,
Mick
Hu
Administrator


Joined: 06 Mar 2007
Posts: 23066

Posted: Tue Dec 10, 2024 3:12 pm

MickKi wrote:
1. I have it working by exporting a slow laptop's / ext4 fs to a fast client. All works well, except if I shutdown the laptop I can no longer shutdown the client. Restarting the laptop allows the client to shut down. Stopping the nfsclient service on the client doesn't make a difference.
This seems like standard NFS behavior. If you mount an export of an NFS server, and then the NFS server is unavailable when the client tries to access the network-mounted filesystem, the client will hang until either the NFS server is available or the client kernel eventually gives up. Unmount the share on the client before stopping the server.
MickKi wrote:
Shouldn't the chrooted fs be unmounted first as per the Handbook?
The use of unshare avoids the need to explicitly unmount every mount, as discussed earlier in the thread. You only need to exit all processes in the unshared mount namespace, and the kernel will handle the unmounting on its own.
MickKi wrote:
2. A second slow system with its / on btrfs subvolume and two of its directories /usr/portage and /var/log hosted on a different drive and separate btrfs partitions (both are top volumes in their respective partitions) fail to be exported. The /etc/exports and the chroot.sh script are the same with 1. above. I tried different things, like mount --rbind the two separate fs, but the script fails with:
Code:
Illegal instruction
Your local system tried to execute a program which cannot be run locally. The program was likely built on the server with a -march that cannot be used on the client. On the server, rebuild the affected program(s) with a compatible -march.
MickKi wrote:
The server log shows:
Code:
rpc.mountd[2744]: Cannot export /proc, possibly unsupported filesystem or fsid= required
rpc.mountd[2744]: Cannot export /sys, possibly unsupported filesystem or fsid= required


Code:
cat /etc/exports
# /etc/exports: NFS file systems being exported.  See exports(5).
/         10.10.10.12/32(insecure,rw,sync,no_root_squash,no_subtree_check,crossmnt,fsid=0)


What am I doing wrong?
You are using crossmnt, which per the manual page, will automatically export subordinate mounts:
man 5 exports:

       crossmnt
              This option is similar to nohide but it makes  it  possible  for
              clients to access all filesystems mounted on a filesystem marked
              with crossmnt.  Thus when a child filesystem "B" is mounted on a
              parent "A", setting crossmnt on "A" has a similar effect to set‐
              ting "nohide" on B.

              With  nohide  the  child  filesystem  needs to be explicitly ex‐
              ported.  With crossmnt it need not.  If a child  of  a  crossmnt
              file  is not explicitly exported, then it will be implicitly ex‐
              ported with the same export options as the  parent,  except  for
              fsid=.   This  makes  it  impossible  to not export a child of a
              crossmnt filesystem.  If some but not all  subordinate  filesys‐
              tems  of  a parent are to be exported, then they must be explic‐
              itly exported and the parent should not have crossmnt set.
Do not set crossmnt on an export that has child mounts you do not intend to export.
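A sketch of that arrangement, reusing the export options already quoted in this thread (check exports(5) and your NFSv4 pseudo-root setup before copying): drop crossmnt from / and explicitly export only the children you want, marked nohide so they remain visible under the root export:

```
/             10.10.10.12/32(insecure,rw,sync,no_root_squash,no_subtree_check,fsid=0)
/usr/portage  10.10.10.12/32(insecure,rw,sync,no_root_squash,no_subtree_check,nohide)
/var/log      10.10.10.12/32(insecure,rw,sync,no_root_squash,no_subtree_check,nohide)
```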
MickKi
Veteran


Joined: 08 Feb 2004
Posts: 1179

Posted: Tue Dec 10, 2024 3:51 pm

Thanks Hu,

Hu wrote:
MickKi wrote:
1. I have it working by exporting a slow laptop's / ext4 fs to a fast client. All works well, except if I shutdown the laptop I can no longer shutdown the client. Restarting the laptop allows the client to shut down. Stopping the nfsclient service on the client doesn't make a difference.
This seems like standard NFS behavior. If you mount an export of an NFS server, and then the NFS server is unavailable when the client tries to access the network-mounted filesystem, the client will hang until either the NFS server is available or the client kernel eventually gives up. Unmount the share on the client before stopping the server.
MickKi wrote:
Shouldn't the chrooted fs be unmounted first as per the Handbook?
The use of unshare avoids the need to explicitly unmount every mount, as discussed earlier in the thread. You only need to exit all processes in the unshared mount namespace, and the kernel will handle the unmounting on its own.
I read the whole thread and understood that by exiting the chroot all mounts will be released. Are you saying there are some processes which are not exited by exiting the chroot alone? How should I identify and deal with those?

Hu wrote:
MickKi wrote:

What am I doing wrong?
You are using crossmnt, which per the manual page, will automatically export subordinate mounts:Do not set crossmnt on an export that has child mounts you do not intend to export.
Again, my lack of understanding of NFS is confusing me ... I want to be able to access the different fs exported under / on the server.

In any case, I removed the crossmnt and the script complains:
Code:
mount: /mnt/10.10.10.2/usr/portage: special device /usr/portage does not exist.
       dmesg(1) may have more information after failed mount system call.

The server reports:
Code:
rpc.mountd[7289]: v4.0 client detached: (null) from (null)

Adding exports for /usr/portage and /var/log causes the script to fail in the same manner.
Hu
Administrator


Joined: 06 Mar 2007
Posts: 23066

Posted: Tue Dec 10, 2024 4:50 pm

If a process placed itself in the background, then yes, it could survive you exiting the bash that was started in the namespace, and that survival would keep the mounts active. I was only reacting to your specific question, where you seemed to think an explicit unmount would be needed.

You must export from the server, and mount on the client, each filesystem that you wish to use remotely. If you need more specific help, then please show exactly what you did, and how it failed. Show the output of showmount -e server, and the latest iteration of the client script.
MickKi
Veteran


Joined: 08 Feb 2004
Posts: 1179

Posted: Tue Dec 10, 2024 6:01 pm

Thank you for persevering with me. :-)

Hu wrote:
If a process placed itself in the background, then yes, it could survive you exiting the bash that was started in the namespace, and that survival would keep the mounts active. I was only reacting to your specific question, where you seemed to think an explicit unmount would be needed.
Well, I run emerge within the script, the emerge process finishes, so I assume there's nothing else left hanging when I exit the chroot. :-/

Hu wrote:
You must export from the server, and mount on the client, each filesystem that you wish to use remotely. If you need more specific help, then please show exactly what you did, and how it failed. Show the output of showmount -e server, and the latest iteration of the client script.

This is the server's exports:
Code:
/         10.10.10.12/32(insecure,rw,sync,no_root_squash,no_subtree_check,fsid=0)
/usr/portage 10.10.10.12/32(insecure,rw,sync,no_root_squash,no_subtree_check)
/var/log 10.10.10.12/32(insecure,rw,sync,no_root_squash,no_subtree_check)


The current script incantation:
Code:
 $ ./chroot-10.10.10.2
+ HOST=chroot-10.10.10.2
+ HOST=10.10.10.2
+ ROOT=/mnt/10.10.10.2
+ PS1=10.10.10.2
+ sudo mkdir -p --mode=0755 /mnt/10.10.10.2
Password:
+ exec sudo unshare -m /bin/bash -c '
set -e

mount -t nfs -o rw,noatime,nocto,actimeo=60,lookupcache=positive,nfsvers=4.2,fsc '\''10.10.10.2:/'\'' '\''/mnt/10.10.10.2'\''
mount -t nfs -o rw,noatime,nocto,actimeo=60,lookupcache=positive,nfsvers=4.2,fsc '\''10.10.10.2:/usr/portage'\'' '\''/mnt/10.10.10.2/usr/portage'\''
mount -t nfs -o rw,noatime,nocto,actimeo=60,lookupcache=positive,nfsvers=4.2,fsc '\''10.10.10.2:/var/log'\'' '\''/mnt/10.10.10.2/var/log'\''

mount --bind {,'\''/mnt/10.10.10.2'\''}/dev
mount --bind {,'\''/mnt/10.10.10.2'\''}/dev/pts
mount --bind {,'\''/mnt/10.10.10.2'\''}/dev/shm
mount --bind {,'\''/mnt/10.10.10.2'\''}/proc
mount --bind {,'\''/mnt/10.10.10.2'\''}/sys
#mount --bind {,'\''/mnt/10.10.10.2'\''}/usr/local/portage
mount --bind {,'\''/mnt/10.10.10.2'\''}/usr/portage
mount --bind {,'\''/mnt/10.10.10.2'\''}/var/cache/edb/dep
mount --bind {,'\''/mnt/10.10.10.2'\''}/var/tmp/portage

exec chroot '\''/mnt/10.10.10.2'\'' /bin/bash -i
'
mount: /mnt/10.10.10.2/usr/portage: special device /usr/portage does not exist.
       dmesg(1) may have more information after failed mount system call.


The log on the server:
Code:
rpc.mountd[24560]: v4.2 client attached: 0x74a8327567587f7b from "10.10.10.12:767"
rpc.mountd[24560]: v4.2 client detached: 0x74a8327567587f7b from "10.10.10.12:767"


EDIT: if I comment out
Code:
mount --bind {,'\''/mnt/10.10.10.2'\''}/usr/portage
from the script I end up with:
Code:
Illegal instruction

Hu
Administrator


Joined: 06 Mar 2007
Posts: 23066

Posted: Tue Dec 10, 2024 7:15 pm

MickKi wrote:
Well, I run emerge within the script, the emerge process finishes, so I assume there's nothing else left hanging when I exit the chroot. :-/
That is probably, but not necessarily, true. You mentioned earlier some problem around the client failing to shut down if the server is gone. Is that problem still present, even when you exit the chroot shell before the server halts, and the reason for this line of discussion? Or are you just seeking a resolution between the Handbook and the statements in this thread?
MickKi wrote:
Code:
mount: /mnt/10.10.10.2/usr/portage: special device /usr/portage does not exist.
       dmesg(1) may have more information after failed mount system call.
Does /usr/portage in fact exist on the client system? You are trying to bind mount the client's Portage into the area that will become the chroot. On recent Gentoo systems, the standard location has changed to /var/db/repos/gentoo. If you are running the script on such a system, and using this script from more than 10 years ago, then the script may be incompatible. If unsure, show the output of namei -l /usr/portage ; mountpoint /usr/portage.
MickKi wrote:
EDIT: if I comment out
Code:
mount --bind {,'\''/mnt/10.10.10.2'\''}/usr/portage
from the script I end up with:
Code:
Illegal instruction
Yes, the script has set -e, so when that mount --bind is present and fails, the script stops there. When you comment it out, you avoid that error, and reach the chroot line, where you then run a program that cannot run on the client machine, and it terminates due to the illegal instruction. As I wrote above, you need to identify that program on the server and rebuild it with a -march that works on both the server and the client.
MickKi
Veteran


Joined: 08 Feb 2004
Posts: 1179

Posted: Tue Dec 10, 2024 9:40 pm

Hu wrote:
MickKi wrote:
Well, I run emerge within the script, the emerge process finishes, so I assume there's nothing else left hanging when I exit the chroot. :-/
That is probably, but not necessarily, true. You mentioned earlier some problem around the client failing to shutdown if the server is gone. Is that problem still present, even when you exit the chroot shell before the server halts, and the reason for this line of discussion? Or are you just seeking a resolution between the handbook and the statements in this thread?
The former. I connect & chroot to a server by running the script, run emerge within the chroot, complete it, exit the chroot and close the terminal on the client. Then I shut down the server. At some point later on I try to shut down the client. The client hangs and keeps hanging until I restart the server, upon which the client continues and promptly shuts down. I'm trying to understand what's causing the client to hang if the exported fs is no longer there. Am I missing something in this workflow?

Hu wrote:
MickKi wrote:
Code:
mount: /mnt/10.10.10.2/usr/portage: special device /usr/portage does not exist.
       dmesg(1) may have more information after failed mount system call.
Does /usr/portage in fact exist on the client system? You are trying to bind mount the client's Portage into the area that will become the chroot. On recent Gentoo systems, the standard location has changed to /var/db/repos/gentoo. If you are running the script on such a system, and using this script from more than 10 years ago, then the script may be incompatible.
The portage on the client is under /var/db, but on the server is under /usr/portage.

Hu wrote:
If unsure, show the output of namei -l /usr/portage ; mountpoint /usr/portage.

The server path for /usr/portage is:
Code:
~ # namei -l /usr/portage ; mountpoint /usr/portage
f: /usr/portage
drwxr-xr-x root    root    /
drwxr-xr-x root    root    usr
drwxr-xr-x portage portage portage
/usr/portage is a mountpoint

~ # findmnt | grep portage
├─/usr/portage                /dev/sdb3         btrfs       rw,noatime,compress=lzo,space_cache,subvolid=5,subvol=/


Hu wrote:
MickKi wrote:
I end up with:
Code:
Illegal instruction
Yes, the script has set -e, so when that mount --bind is present and fails, the script stops there. When you comment it out, you avoid that error, and reach the chroot line, where you then run a program that cannot run on the client machine, and it terminates due to the illegal instruction. As I wrote above, you need to identify that program on the server and rebuild it with a -march that works on both the server and the client.
I do not understand this. The illegal instruction arrives as a result of the script alone, I do not get to run any other commands on the client, because it fails to chroot with the illegal instruction message and drops me back into the client shell.
Hu
Administrator


Joined: 06 Mar 2007
Posts: 23066

Posted: Tue Dec 10, 2024 10:25 pm

MickKi wrote:
At some point later on I try to shutdown the client. The client hangs and keeps hanging, until I restart the server, upon which the client continues and promptly shuts down. I'm trying to understand what's causing the client to hang, if the exported fs is no longer there. Am I missing something in this workflow?
That sounds like it ought to work. After you finish using the chroot, but before you halt the server, what is the output of cat /proc/self/mountinfo ; grep -nF 10.10.10.2 /proc/*/mountinfo on the client?
MickKi wrote:
The portage on the client is under /var/db, but on the server is under /usr/portage.
You are trying to bind the client's Portage directory, so you need to use the directory path that applies on the client. If you want to use the server's Portage directory, then do not bind mount the client's /usr/portage into the chroot. You already mounted the server's Portage directory earlier in the script, so the only reason for the bind mount of the client is if the client's Portage tree is somehow "better" (newer, faster to access, etc.).
MickKi wrote:
I do not understand this. The illegal instruction arrives as a result of the script alone, I do not get to run any other commands on the client, because it fails to chroot with the illegal instruction message and drops me back into the client shell.
The script is just a script, and cannot incur an illegal instruction. The script can run other programs which can fault. It may well be that the server's /bin/bash is the broken program. Think about the workflow of that last line. exec chroot runs chroot from the client, on the client. That program changes the root directory to the root of the NFS mount, then executes /bin/bash which, due to the chroot, is the bash provided by the NFS mount from the server. If that bash cannot be run on the client, you get an illegal instruction. It's also possible that bash itself is fine, but that it depends at startup on a library that is bad.
MickKi
Veteran


Joined: 08 Feb 2004
Posts: 1179

Posted: Wed Dec 11, 2024 2:39 pm

Hu wrote:
After you finish using the chroot, but before you halt the server, what is the output of cat /proc/self/mountinfo ; grep -nF 10.10.10.2 /proc/*/mountinfo on the client?
The output is the same once emerge has finished; after the nfs service is stopped on the server; and after the nfsclient service is finally stopped on the client. The only non-local mountinfo line for each stage is:
Code:
39 21 0:33 / /var/lib/nfs/rpc_pipefs rw,relatime - rpc_pipefs rpc_pipefs rw

Hu wrote:
You are trying to bind the client's Portage directory, so you need to use the directory path that applies on the client. If you want to use the server's Portage directory, then do not bind mount the client's /usr/portage into the chroot. You already mounted the server's Portage directory earlier in the script, so the only reason for the bind mount of the client is if the client's Portage tree is somehow "better" (newer, faster to access, etc.).
Understood. Thank you.

Hu wrote:
The script is just a script, and cannot incur an illegal instruction. The script can run other programs which can fault. It may well be that the server's /bin/bash is the broken program. Think about the workflow of that last line. exec chroot runs chroot from the client, on the client. That program changes the root directory to the root of the NFS mount, then executes /bin/bash which, due to the chroot, is the bash provided by the NFS mount from the server. If that bash cannot be run on the client, you get an illegal instruction. It's also possible that bash itself is fine, but that it depends at startup on a library that is bad.
Here is my conundrum: on the working server which had its packages built with (-march=core2) the client (Zen 3) has no problem chrooting and running all emerge commands. On the problematic server (bdver3) I get the above illegal instruction output. I rebuilt bash and for good measure binutils on the server, but the Zen 3 client still considers /bin/bash to contain an illegal instruction and bails out. I remain befuddled. :-/
Hu
Administrator


Joined: 06 Mar 2007
Posts: 23066

Posted: Wed Dec 11, 2024 2:54 pm

I think you need to find specifically which file is contributing the illegal instruction. Running the chroot under gdb might work, since that should let you trap on the SIGILL and disassemble to see what instruction faulted, and in what library.
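A rough session sketch (paths taken from this thread; the gdb commands are standard, but the exact output will differ on your machines):

```
# on the client, after the script's NFS and bind mounts are in place:
sudo gdb -q --args chroot /mnt/10.10.10.2 /bin/bash -i
(gdb) run                  # continues until the SIGILL is raised
(gdb) x/i $pc              # disassemble the faulting instruction
(gdb) info sharedlibrary   # find which mapped library contains $pc
```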
pingtoo
Veteran


Joined: 10 Sep 2021
Posts: 1481
Location: Richmond Hill, Canada

Posted: Wed Dec 11, 2024 3:14 pm

MickKi wrote:
Here is my conundrum: on the working server which had its packages built with (-march=core2) the client (Zen 3) has no problem chrooting and running all emerge commands. On the problematic server (bdver3) I get the above illegal instruction output. I rebuilt bash and for good measure binutils on the server, but the Zen 3 client still considers /bin/bash to contain an illegal instruction and bails out. I remain befuddled. :-/


Have you considered sys-libs/readline or sys-libs/ncurses, and possibly virtual/libintl? I think bash is linked with them and loads them at start.
MickKi
Veteran


Joined: 08 Feb 2004
Posts: 1179

Posted: Sat Dec 14, 2024 1:37 pm

MickKi wrote:
Hu wrote:
MickKi wrote:
Well, I run emerge within the script, the emerge process finishes, so I assume there's nothing else left hanging when I exit the chroot. :-/
That is probably, but not necessarily, true. You mentioned earlier some problem around the client failing to shutdown if the server is gone. Is that problem still present, even when you exit the chroot shell before the server halts, and the reason for this line of discussion? Or are you just seeking a resolution between the handbook and the statements in this thread?
The former. I connect & chroot on a server by running the script, run emerge within the chroot, complete it, exit chroot and close the terminal on the client. Then I shut down the server. At some point later on I try to shutdown the client. The client hangs and keeps hanging, until I restart the server, upon which the client continues and promptly shuts down. I'm trying to understand what's causing the client to hang, if the exported fs is no longer there. Am I missing something in this workflow?
Yep, while trying different scripts I had forgotten to comment out the legacy /usr/portage bind mount path, when the working server and client use /var/ for portage. Once it was commented out, the client no longer hangs.

Regarding the illegal instruction problem (thank you pingtoo) I will need to find some time to debug this in more depth.

Thank you both for your help. :)