The title says the nobootwait option stops SSH access to your instance, right? Well, here is what is actually going on.
In Ubuntu 14.04 we had an option called nobootwait to ignore mount errors in fstab for block devices that may not be reachable at boot time, or that fail to mount for some other reason.
The typical fstab entry for such a block device would look like this:
/dev/xvdf /mount-point auto defaults,nobootwait,comment=cloudconfig 0 2
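For reference, an fstab line has six whitespace-separated fields (the device name and mount point here are just examples); the fourth field is where options like nobootwait go:

```shell
# <device>  <mount point>  <fs type>  <options>                                <dump>  <fsck pass>
/dev/xvdf   /mount-point   auto       defaults,nobootwait,comment=cloudconfig  0       2
```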
But when we add a block device with nobootwait, exactly as above, in Ubuntu 16.04:
- The boot process will fail
- If it's an AWS instance, you will certainly have no SSH access to it
The reason: in Ubuntu 16.04 the nobootwait option is not supported anymore! It was specific to mountall(8), an Upstart tool, and 16.04 boots with systemd, which does not recognise it.
It's really irritating, right? On a physical server you can interrupt the boot process, fix the fstab entries, and reboot. With an AWS instance, there's no option other than to kill the instance.
You can check the Ubuntu 14.04 man page for fstab; it carries the following description of these options, which is absent from the Ubuntu 16.04 man page.
$ man fstab
The mountall(8) program that mounts filesystems during boot also
recognises additional options that the ordinary mount(8) tool does not.
These are: ``bootwait'' which can be applied to remote filesystems
mounted outside of /usr or /var, without which mountall(8) would not
hold up the boot for these; ``nobootwait'' which can be applied to non-
remote filesystems to explicitly instruct mountall(8) not to hold up
the boot for them; ``optional'' which causes the entry to be ignored if
the filesystem type is not known at boot time; and ``showthrough''
which permits a mountpoint to be mounted before its parent mountpoint
(this latter should be used carefully, as it can cause boot hangs).
So the solution is to use nofail instead of nobootwait!
You can mount the same block device with an fstab entry like this:
/dev/xvdf /mount-point auto defaults,nofail,comment=cloudconfig 0 2
Note: after adding the entry to fstab, run mount -a to check for errors before rebooting. If you skip this check on an AWS instance you reach over SSH, a wrong fstab entry would hold up the boot process and leave the instance unreachable.
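As an extra sanity check before rebooting, you can verify that an fstab line actually carries the nofail option. This is a hypothetical helper, not part of any standard tool; it simply inspects the fourth (options) field:

```shell
# check_nofail: warn if an fstab line is missing the nofail option,
# since a missing device without nofail can hang the boot on 16.04.
check_nofail() {
  # $1 is a full fstab line; field 4 holds the comma-separated options
  opts=$(echo "$1" | awk '{print $4}')
  case ",$opts," in
    *,nofail,*) echo "ok: nofail present" ;;
    *)          echo "warning: nofail missing" ;;
  esac
}

check_nofail '/dev/xvdf /mount-point auto defaults,nofail,comment=cloudconfig 0 2'
# prints: ok: nofail present
check_nofail '/dev/xvdf /mount-point auto defaults,comment=cloudconfig 0 2'
# prints: warning: nofail missing
```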