wiki:lvm-cache

Logical Volume Caching

An LVM cache logical volume uses a small logical volume built from fast block devices (such as SSDs) to improve the performance of a larger, slower logical volume (such as spinning disks) by storing frequently used blocks on the smaller, faster volume.

lvmcache uses LVM as a frontend to dm-cache, which is part of the Linux kernel: https://en.wikipedia.org/wiki/Dm-cache

More documentation:

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/lvm_cache_volume_creation

https://manpages.ubuntu.com/manpages/eoan/man7/lvmcache.7.html

Setup lvm cache

To set up an LVM cache for a logical volume on one of our virtual servers, follow the steps below.

In the example below our new block device is /dev/sdb, but it will not always be; make sure you have identified the correct empty block device and adjust the commands accordingly. In this example we plan on caching the logical volume "home", which belongs to the volume group vg_erica0, so we'll add our new block device to vg_erica0:

# vgextend vg_erica0 /dev/sdb
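Before extending the volume group, a quick sanity check helps confirm the new device really is empty and unused. A sketch (device names will vary per host; the device should show no children in lsblk and should not appear in pvs yet):

# lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
# pvs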

Enabling the cache

Creating the cache is a multistep process, but we can automate most of it with one command. In the example below, vg_erica0/home is the logical volume we want to cache (we call this our origin disk) and /dev/sdb is our new SSD block device.

# lvcreate --type cache --cachemode writeback -l 100%FREE --name home_cachepool vg_erica0/home /dev/sdb
  Using 96.00 KiB chunk size instead of default 64.00 KiB, so cache pool has less than 1000000 chunks.
  Logical volume vg_erica0/home is now cached.

What just happened?

LVM automatically created both a cache data LV and a cache metadata LV from the new block device and combined the two into a cachepool. It then created a cached logical volume by linking the cachepool to the origin disk. The resulting cache volume takes the name of the origin disk.
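For reference, the manual equivalent of that single command looks roughly like this (an untested sketch; the metadata LV size of 1G is illustrative, and lvcreate will pick a sensible size on its own when you let it):

# lvcreate -n home_cachepool_meta -L 1G vg_erica0 /dev/sdb
# lvcreate -n home_cachepool -l 100%FREE vg_erica0 /dev/sdb
# lvconvert --type cache-pool --poolmetadata vg_erica0/home_cachepool_meta vg_erica0/home_cachepool
# lvconvert --type cache --cachemode writeback --cachepool vg_erica0/home_cachepool vg_erica0/home

The one-shot lvcreate form in the example above is preferable; this breakdown is only to show what the tooling does on our behalf.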

Disabling the cache

# lvconvert --uncache vg_erica0/home
  Flushing 6 blocks for cache vg_erica0/home.
  Flushing 4 blocks for cache vg_erica0/home.
  Logical volume "home_cachepool" successfully removed
  Logical volume vg_erica0/home is not cached.

Examining the cache

A cache logical volume is built on the combination of the origin logical volume and cache-pool logical volume.

# lvs vg_erica0/home
  LV   VG        Attr       LSize   Pool             Origin       Data%  Meta%  Move Log Cpy%Sync Convert
  home vg_erica0 Cwi-aoC--- 380.00g [home_cachepool] [home_corig] 0.25   14.54           0.11

We can use the 'lvs -a' option to see all of the parts used to create the cache. Below, home is the new cached logical volume, home_cachepool is the cachepool LV, and home_corig is the origin LV. home_cachepool_cdata and home_cachepool_cmeta are the cache data LV and cache metadata LV combined to create the cachepool, and lvol0_pmspare is the spare metadata logical volume.

# lvs -a 
  LV                     VG        Attr       LSize   Pool             Origin       Data%  Meta%  Move Log Cpy%Sync Convert                                                                    
  home                   vg_erica0 Cwi-aoC--- 380.00g [home_cachepool] [home_corig] 0.31   14.54           1.14            
  [home_cachepool]       vg_erica0 Cwi---C--- <89.92g                               0.31   14.54           1.14            
  [home_cachepool_cdata] vg_erica0 Cwi-ao---- <89.92g                                                                      
  [home_cachepool_cmeta] vg_erica0 ewi-ao----  40.00m                                                                      
  [home_corig]           vg_erica0 owi-aoC--- 380.00g                                                                      
  [lvol0_pmspare]        vg_erica0 ewi-------  40.00m  

Add the "-o +devices" option to show which devices the extents for each logical volume are based on.

# lvs -a -o +devices
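lvs can also report cache-specific counters such as hit/miss rates and dirty blocks, which are useful for judging whether the cache is actually helping. A sketch (field names are from the lvs report fields; availability may vary with lvm2 version):

# lvs -o name,cache_total_blocks,cache_used_blocks,cache_dirty_blocks,cache_read_hits,cache_read_misses vg_erica0/home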

More Details

LVM cache logical volume types

LVM caching uses the following LVM logical volume types. All of these associated logical volumes must be in the same volume group.

  • Origin logical volume: the large, slow logical volume.
  • Cache pool logical volume: the small, fast logical volume, which is composed of two devices: the cache data logical volume and the cache metadata logical volume.
    • Cache data logical volume: the logical volume containing the data blocks for the cache pool logical volume.
    • Cache metadata logical volume: the logical volume containing the metadata for the cache pool logical volume, which holds the accounting information that specifies where data blocks are stored (for example, on the origin logical volume or on the cache data logical volume).
  • Cache logical volume: the logical volume combining the origin logical volume and the cache pool logical volume. This is the resultant usable device which encapsulates the various cache volume components.
  • Spare metadata logical volume: related to a metadata failure-recovery feature (https://manpages.debian.org/stretch/lvm2/lvmthin.7.en.html#Spare_metadata_LV). "If thin pool metadata is damaged, it may be repairable. Checking and repairing thin pool metadata is analogous to running fsck on a file system." Another explanation: https://rwmj.wordpress.com/2014/05/22/using-lvms-new-cache-feature/#comment-10152

Cachemode

When creating the cache, cachemode has two possible values:

  • writethrough ensures that any data written will be stored both in the cache pool LV and on the origin LV. The loss of a device associated with the cache pool LV in this case does not mean the loss of any data.
  • writeback offers better performance, but at the cost of a higher risk of data loss if the drive used for the cache fails.

In the examples above we've opted for writeback because, on the physical host, the logical volume created for the cache is backed by a RAID1 array of two SSD drives, so we already have a lower level of protection against drive failure in place. Choose wisely.
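If requirements change, the cachemode of an existing cached LV can be switched in place without rebuilding the cache. A sketch (see lvmcache(7) for your lvm2 version; switching from writeback flushes dirty blocks first):

# lvchange --cachemode writethrough vg_erica0/home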

Do not exhaust space on the metadata logical volume

We should not allow the metadata volume of an LVM cachepool to exhaust its available space. The man pages document what happens when an LVM thinpool reaches metadata space exhaustion, and none of it is good: https://www.systutorials.com/docs/linux/man/7-lvmthin/#lbAY
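Metadata usage is easy to keep an eye on with the Meta% column we already saw in the lvs output above. A sketch for checking it explicitly:

# lvs -a -o name,data_percent,metadata_percent vg_erica0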

Last modified on Sep 9, 2020, 10:37:45 PM