I have two network drives on one subnet (NFS and Samba). I would like to access them only when the computer is connected to a particular SSID (subnet).
I’m using GNOME primarily, and Files stops responding if the mount points can’t be accessed. There is no real way to recover from this apart from reconnecting to the network, unmounting, and then changing networks.
I would like those drives to be mounted by the system only when they are reachable.
Maybe look into autofs, which will mount only when you actually access the drives and then unmount on idle. Could be simpler than trying to react to network status.
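For reference, an on-demand NFS mount with autofs looks roughly like this. The map file name, mount point, server name, and export path below are all placeholders, so adjust them to your setup:

```shell
# /etc/auto.master.d/netdrives.autofs (file name is an example)
# Mount under /mnt/net on access, unmount after 60s idle.
/mnt/net  /etc/auto.netdrives  --timeout=60

# /etc/auto.netdrives — the map file referenced above
# "media" becomes /mnt/net/media; fileserver.lan:/export/media is a placeholder.
media  -fstype=nfs4,soft  fileserver.lan:/export/media
```

After editing, restart the autofs service; the mount is then triggered the first time anything touches /mnt/net/media.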
Thanks, will have a look at autofs.
I second this. autofs is what I’d recommend.
Systemd will do this. IIRC you just need to put the mount info in fstab with the right options.
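For example, something along these lines in fstab uses systemd's automount support. The server, export, and mount point are placeholders; the x-systemd options are the relevant part:

```shell
# /etc/fstab — server and paths are examples, adjust to your shares
# noauto + x-systemd.automount: mount on first access instead of at boot
# x-systemd.idle-timeout: unmount after 60s of inactivity
# x-systemd.mount-timeout: give up quickly when the server is unreachable
fileserver.lan:/export/media  /mnt/media  nfs4  noauto,x-systemd.automount,x-systemd.idle-timeout=60,x-systemd.mount-timeout=10,soft  0  0
```

Run `systemctl daemon-reload` after editing fstab so systemd regenerates the mount and automount units.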
I currently have a systemd solution, but it only manages to mount when needed.

Let’s avoid systemd solutions. They’ll just lock you deeper into Lennart’s cancer.
If you have systemd installed then using it is fine.
Genuine question: what init do you use, and how would you do this? Of course, it doesn’t need to involve init.

They just reach into their computer case and tickle the pins on the CPU when they want to initialize PID 1.
Imagine using a well-designed set of tools instead of parts stuck in the 90s.
I know autofs will work with NFS; I’ve never used it with SMB. I’ve used it on a share of /home to mount /home/user as needed (e.g., at login).
I’ve used smb with autofs. Works a treat.
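An SMB entry in an autofs map uses the cifs filesystem type with the `://server/share` form. The server name, share, and credentials file below are placeholders:

```shell
# autofs map entry for a CIFS/SMB share — names are examples
# "shared" becomes a directory under the map's mount point.
# The credentials file holds username=/password= lines, readable only by root.
shared  -fstype=cifs,credentials=/etc/samba/creds,uid=1000  ://fileserver.lan/shared
```

The uid option maps file ownership to your local user, which is usually what you want for a desktop machine.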
Be sure you have nothing running from or constantly accessing the mount, of course – I forgot that with a homedir – or it’ll never unmount.
This should sidestep that and time out pretty quickly: https://jshtab.github.io/guide/gvfs-autmount/