Mount volume permanently

List volumes

root@ip-host:/home/ubuntu# blkid
/dev/xvda1: LABEL="cloudimg-rootfs" UUID="ef263917-4ffc-4c36-880c-ae41d52b0d8e" TYPE="ext4"
/dev/xvdf: UUID="2c21a384-9e0e-4b44-b8d1-ceb452e8cc5c" TYPE="ext4"
root@ip-host:/home/ubuntu# lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  32G  0 disk
└─xvda1 202:1    0  32G  0 part /
xvdf    202:80   0  32G  0 disk
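A single lsblk call can also show the filesystem type and UUID together with the device tree, which saves cross-referencing the blkid output above (just an alternative way to collect the same information):

root@ip-host:/home/ubuntu# lsblk -f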

Mount the volume

Mount volume xvdf to /var/lib/mysql

root@ip-host:/# mount /dev/xvdf /var/lib/mysql

Recheck after mount

root@ip-host:/# lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  32G  0 disk
└─xvda1 202:1    0  32G  0 part /
xvdf    202:80   0  32G  0 disk /var/lib/mysql
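df gives another quick confirmation that the filesystem is mounted where we expect (a sanity check, not part of the original session):

root@ip-host:/# df -h /var/lib/mysql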

Mount the volume permanently, so it stays mounted even after rebooting

root@ip-host:/# vim /etc/fstab
UUID="2c21a384-9e0e-4b44-b8d1-ceb452e8cc5c" /var/lib/mysql ext4 defaults 0 0
root@ip-host:/# mount -fav
/ : ignored
/var/lib/mysql : already mounted
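mount -fav is a dry run: -a processes every fstab entry, -f fakes the actual mount, and -v prints what would happen, so typos in /etc/fstab show up before the next reboot. On recent util-linux there is also findmnt --verify, which lints fstab directly (that second command is my suggestion, not from the original notes):

root@ip-host:/# mount -fav
root@ip-host:/# findmnt --verify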

We must configure fstab (the permanent mounts) based on UUID or LABEL, like this

LABEL=cloudimg-rootfs / ext4 defaults,discard 0 0
UUID="c46cf311-d31b-41ce-bce5-5d8ad0a6b109" /var/lib/elasticsearch ext4 defaults,nofail 0 2
UUID="2c21a384-9e0e-4b44-b8d1-ceb452e8cc5c" /data ext4 defaults 0 0

DON'T configure entries based on device names like this

LABEL=cloudimg-rootfs / ext4 defaults,discard 0 0
/dev/nvme1n1p1 /var/lib/elasticsearch ext4 defaults,nofail 0 2
/dev/xvdf /data ext4 defaults 0 0
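udev already keeps stable symlinks from UUIDs and labels to whatever device name a disk currently has, which is exactly why the UUID/LABEL form survives renaming. Listing them shows the current mapping (by-label only exists once at least one filesystem carries a label):

root@ip-host:/# ls -l /dev/disk/by-uuid/
root@ip-host:/# ls -l /dev/disk/by-label/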

Mount mix-up

After a reboot, a completely different disk ended up mounted at /var/lib/elasticsearch, taking the entire directory with it.

Root cause: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/nvme-ebs-volumes.html

EBS uses single-root I/O virtualization (SR-IOV) to provide volume attachments on Nitro-based instances using the NVMe specification. These devices rely on standard NVMe drivers on the operating system. These drivers typically discover attached devices by scanning the PCI bus during instance boot, and create device nodes based on the order in which the devices respond, not on how the devices are specified in the block device mapping. In Linux, NVMe device names follow the pattern /dev/nvme<x>n<y>, where <x> is the enumeration order, and, for EBS, <y> is 1. Occasionally, devices can respond to discovery in a different order in subsequent instance starts, which causes the device name to change.

So, if we use NVMe disks on the newer EC2 instance types, note that the device names are effectively randomized after each reboot. If one EC2 VM has two NVMe disks, we cannot tell from the names alone which device belongs to which real disk (one way to map them back to their EBS volumes is shown after the example names below), for example:

/dev/nvme1n1p1
/dev/nvme0n1p1
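On a Nitro instance the EBS volume ID is embedded in the NVMe device's serial number (shown without the hyphen, e.g. vol0123...), so the mapping can be recovered at runtime. The sketch below assumes the nvme-cli package is installed for the second command, and /dev/nvme1n1 is just an example device:

root@ip-host:/# lsblk -o NAME,SERIAL,SIZE,MOUNTPOINT
root@ip-host:/# nvme id-ctrl -v /dev/nvme1n1 | grep -i '^sn'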

Reference