LVM snapshots. While investigating some backup issues I decided to give them a try and see if they could help. This was done on CentOS 7. First, an important detail: an LVM snapshot lives in the same volume group as the logical volume it is taken from. So, if the logical volume is in volume group X, its snapshot will be in X as well, which means that volume group needs some free space before a snapshot can be created.
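A quick way to check whether there is room (the volume group name below is just the one used in this walkthrough):

# Show total and unallocated space in the volume group that will hold the snapshot
vgs -o vg_name,vg_size,vg_free VolGroup00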

This test was modeled after a production machine that had some unused space in the volume group holding the volume I needed to snapshot. Here I was going to snapshot lv_data, so I needed some more space in VolGroup00; sdb was added for that purpose. First, create the physical volume:

[root@dbtest23 ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.
[root@dbtest23 ~]# pvs
  PV         VG         Fmt  Attr PSize   PFree
  /dev/sda2  VolGroup00 lvm2 a--  <29,51g     0
  /dev/sdb              lvm2 ---   16,00g 16,00g
[root@dbtest23 ~]# vgs
  VG         #PV #LV #SN Attr   VSize   VFree
  VolGroup00   1   4   0 wz--n- <29,51g    0

That looks good. Let’s extend VolGroup00:

[root@dbtest23 ~]# vgextend VolGroup00 /dev/sdb
  Volume group "VolGroup00" successfully extended
[root@dbtest23 ~]# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  VolGroup00   2   4   0 wz--n- 45,50g <16,00g

There is 16GB to play around with, so the snapshot can be created. The next command allocates 1GB in VolGroup00 for a snapshot called snap whose parent logical volume is VolGroup00-lv_data:

[root@dbtest23 data]# lvcreate --size 1G --snapshot --name snap /dev/mapper/VolGroup00-lv_data
  Logical volume "snap" created.
[root@dbtest23 data]# lvdisplay /dev/mapper/VolGroup00-snap
  --- Logical volume ---
  LV Path                /dev/VolGroup00/snap
  LV Name                snap
  VG Name                VolGroup00
  LV UUID                XfnO6Q-k7Vo-WENP-JPfy-58wN-Jfwe-Spxqd8
  LV Write Access        read/write
  LV Creation host, time dbtest23, 2020-10-08 12:46:29 +0000
  LV snapshot status     active destination for lv_data
  LV Status              available
  # open                 0
  LV Size                10,88 GiB
  Current LE             2786
  COW-table size         1,00 GiB
  COW-table LE           256
  Allocated to snapshot  0,00%
  Snapshot chunk size    4,00 KiB
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:6

As seen above, Allocated to snapshot is at 0,00%, meaning nothing has changed on the source filesystem, lv_data, since the snapshot was taken. Also note the COW-table size, which matches the 1GB passed to lvcreate. Next, let's make some changes on lv_data:

[root@dbtest23 data]# cp -r /root/DELETEME /data/DELETEME
[root@dbtest23 data]# lvdisplay /dev/mapper/VolGroup00-snap
  --- Logical volume ---
  LV Path                /dev/VolGroup00/snap
  LV Name                snap
  VG Name                VolGroup00
  LV UUID                XfnO6Q-k7Vo-WENP-JPfy-58wN-Jfwe-Spxqd8
  LV Write Access        read/write
  LV Creation host, time dbtest23, 2020-10-08 12:46:29 +0000
  LV snapshot status     active destination for lv_data
  LV Status              available
  # open                 0
  LV Size                10,88 GiB
  Current LE             2786
  COW-table size         1,00 GiB
  COW-table LE           256
  Allocated to snapshot  55,85%
  Snapshot chunk size    4,00 KiB
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:6

Now the space allocated to the snapshot has grown, because the original contents of the blocks changed on lv_data had to be copied into the snapshot's COW table. Note LV snapshot status… more on that later.
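For keeping an eye on this over time, the same allocation figure can be read more compactly with lvs (the Data% column is standard lvs output; the volume group name is just the one from this walkthrough):

# Data% for a snapshot is the figure lvdisplay reports as "Allocated to snapshot"
lvs -o lv_name,origin,lv_size,data_percent VolGroup00

If that percentage ever reaches 100% the snapshot becomes invalid, so the COW table should be sized generously. Now, in order to back up the snapshot it has to be mounted first. Let's do that: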

[root@dbtest23 data]# mount -o /dev/mapper/VolGroup00-snap /mnt
mount: can't find /mnt in /etc/fstab
[root@dbtest23 data]# mount  /dev/mapper/VolGroup00-snap /mnt
mount: wrong fs type, bad option, bad superblock on /dev/mapper/VolGroup00-snap,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
[root@dbtest23 data]#

That did not go so well. Let’s see what the problem might be:

[root@dbtest23 data]# dmesg|tail -1
[ 4471.570329] XFS (dm-6): Filesystem has duplicate UUID a4494ea9-5d07-4781-9853-d78106f02c49 - can't mount
[root@dbtest23 data]#

Ah, so the snapshot carries the same filesystem UUID as lv_data, which makes sense: it is a block-for-block copy of the same XFS filesystem, and XFS refuses to mount two filesystems with the same UUID.
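The duplicate is easy to confirm with blkid before doing anything about it (the device paths are simply the ones from this machine):

# Both devices report the same filesystem UUID until a new one is generated for the snapshot
blkid /dev/mapper/VolGroup00-lv_data /dev/mapper/VolGroup00-snap

Consulting Google, the suggested fix is XFS's nouuid mount option. Let's try that then…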

[root@dbtest23 data]# mount -o nouuid /dev/mapper/VolGroup00-snap /mnt
[root@dbtest23 data]# umount /mnt && xfs_admin -U generate /dev/mapper/VolGroup00-snap
Clearing log and setting UUID
writing all SBs
new UUID = 42d4e1ec-9780-4c88-9a8f-125616714a93
[root@dbtest23 data]# mount /dev/mapper/VolGroup00-snap /mnt
[root@dbtest23 data]# df
Filesystem                     1K-blocks    Used Available Use% Mounted on
devtmpfs                         1436124       0   1436124   0% /dev
tmpfs                            1447600       0   1447600   0% /dev/shm
tmpfs                            1447600   35884   1411716   3% /run
tmpfs                            1447600       0   1447600   0% /sys/fs/cgroup
/dev/mapper/VolGroup00-lv_root   8181760 6573464   1608296  81% /
/dev/sda1                         508580  205020    303560  41% /boot
/dev/mapper/VolGroup00-lv_data  11401216 1993668   9407548  18% /data
/dev/mapper/VolGroup00-lv_var    8181760 7029528   1152232  86% /var
tmpfs                             289520       0    289520   0% /run/user/0
/dev/mapper/VolGroup00-snap     11401216 1993668   9407548  18% /mnt
[root@dbtest23 data]#

That looks much better. Now, the snapshot can be backed up… and then removed…
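A minimal sketch of that backup step, assuming a plain tar archive is good enough and that a /backup directory exists (both the destination path and the archive name are made up for this example):

# Archive the frozen view of the filesystem from the mounted snapshot
tar -czf /backup/lv_data-snapshot.tar.gz -C /mnt .
# Unmount before removing the snapshot volume
umount /mnt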

[root@dbtest23 data]# lvremove /dev/mapper/VolGroup00-snap
Do you really want to remove active logical volume VolGroup00/snap? [y/n]: y
  Logical volume "snap" successfully removed
[root@dbtest23 data]#
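Putting the whole cycle together, here is a rough sketch of how the steps above could be strung into a single script. It takes the nouuid route instead of regenerating the UUID with xfs_admin, and the snapshot size, mount point, and backup destination are assumptions carried over from this walkthrough rather than anything LVM prescribes:

#!/bin/bash
# Sketch of the snapshot-backup cycle: create, mount, archive, clean up.
set -euo pipefail

VG=VolGroup00
LV=lv_data
SNAP=snap
MNT=/mnt
DEST=/backup/${LV}-$(date +%F).tar.gz   # assumed destination

# 1. Create a 1G copy-on-write snapshot of the origin volume.
lvcreate --size 1G --snapshot --name "$SNAP" "/dev/$VG/$LV"

# 2. Mount it read-only; nouuid avoids the duplicate-UUID refusal seen above.
mount -o ro,nouuid "/dev/$VG/$SNAP" "$MNT"

# 3. Back up the snapshot's view of the data.
tar -czf "$DEST" -C "$MNT" .

# 4. Unmount and drop the snapshot so its COW table stops growing.
umount "$MNT"
lvremove -y "/dev/$VG/$SNAP"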

Remember LV snapshot status above? It reads "active destination for lv_data" because the snapshot is where the original copies of changed lv_data blocks get written, courtesy of copy-on-write. There is a pretty good explanation of snapshots here.