Changes between Version 2 and Version 3 of lvm-cache


Timestamp: Sep 9, 2020, 10:22:53 PM
Author: JaimeV

== Enabling the cache ==

Creating the cache is a multi-step process, but we can automate most of it with one command. In the example below, vg_erica0/home is the logical volume we want to cache (we call this our origin disk) and /dev/sdb is our new SSD block device.

{{{
  Logical volume vg_erica0/home is now cached.
}}}

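The exact invocation isn't shown above; the sketch below is one plausible single-command form (the preparatory pvcreate/vgextend steps, the 90G cache size, and the use of lvcreate --cache are assumptions, not necessarily what was run here):

{{{
# Make the SSD part of the volume group that holds the origin LV
pvcreate /dev/sdb
vgextend vg_erica0 /dev/sdb

# Create a cache pool on the SSD and attach it to the origin LV in one step
lvcreate --cache --cachemode writeback -L 90G vg_erica0/home /dev/sdb
}}}
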
=== What just happened? ===

LVM automatically created both a data cache LV and a metadata cache LV and combined the two into a cachepool. It then created a cached logical volume by linking the cachepool to the origin disk. The resulting cache volume assumes the name of the origin disk.

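Roughly the same result can be produced manually; a sketch of the equivalent steps (the sizes and the intermediate LV names here are assumptions):

{{{
# 1. Create the data cache LV and the metadata cache LV on the SSD
lvcreate -L 90G -n home_cachepool vg_erica0 /dev/sdb
lvcreate -L 40M -n home_cachepool_meta vg_erica0 /dev/sdb

# 2. Combine the two into a cachepool
lvconvert --type cache-pool --poolmetadata vg_erica0/home_cachepool_meta vg_erica0/home_cachepool

# 3. Link the cachepool to the origin LV; the cached volume keeps the name "home"
lvconvert --type cache --cachepool vg_erica0/home_cachepool vg_erica0/home
}}}
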
== Disabling the cache ==
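
A cached LV can be detached again with lvconvert; a minimal sketch, assuming the example volume from above:

{{{
# Detach the cache but keep the cachepool around for later reuse
lvconvert --splitcache vg_erica0/home

# Or detach the cache and delete the cachepool entirely
lvconvert --uncache vg_erica0/home
}}}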
     
== Examining the cache ==

A cache logical volume is built on the combination of the origin logical volume and cache-pool logical volume.

{{{
# lvs vg_erica0/home
  LV   VG        Attr       LSize   Pool             Origin       Data%  Meta%  Move Log Cpy%Sync Convert
  home vg_erica0 Cwi-aoC--- 380.00g [home_cachepool] [home_corig] 0.25   14.54           0.11
}}}

We can use the 'lvs -a' option to see all of the parts used to create the cache. Below, home is the new cached logical volume, home_cachepool is the cachepool LV, and home_corig is the origin LV. home_cachepool_cdata and home_cachepool_cmeta are the data cache LV and the metadata cache LV that were combined to create the cachepool, and lvol0_pmspare is the spare metadata logical volume.

{{{
# lvs -a
  LV                     VG        Attr       LSize   Pool             Origin       Data%  Meta%  Move Log Cpy%Sync Convert
  home                   vg_erica0 Cwi-aoC--- 380.00g [home_cachepool] [home_corig] 0.31   14.54           1.14
  [home_cachepool]       vg_erica0 Cwi---C--- <89.92g                               0.31   14.54           1.14
  [home_cachepool_cdata] vg_erica0 Cwi-ao---- <89.92g
  [home_cachepool_cmeta] vg_erica0 ewi-ao----  40.00m
  [home_corig]           vg_erica0 owi-aoC--- 380.00g
  [lvol0_pmspare]        vg_erica0 ewi-------  40.00m
}}}

Add the "-o +devices" option to show which devices the extents for each logical volume are based on.

{{{
# lvs -a -o +devices
}}}

== More Details ==

LVM caching uses the following LVM logical volume types. All of these associated logical volumes must be in the same volume group.

* Origin logical volume: the large, slow logical volume
* Cache pool logical volume: the logical volume made up of the cache data logical volume and the cache metadata logical volume
    * Cache data logical volume: the logical volume holding the data blocks for the cache pool logical volume
    * Cache metadata logical volume: the logical volume containing the metadata for the cache pool logical volume, which holds the accounting information that specifies where data blocks are stored (for example, on the origin logical volume or the cache data logical volume).
* Cache logical volume: the logical volume containing the origin logical volume and the cache pool logical volume. This is the resultant usable device, which encapsulates the various cache volume components.
* Spare metadata logical volume: related to a metadata failure-recovery feature. See https://manpages.debian.org/stretch/lvm2/lvmthin.7.en.html#Spare_metadata_LV: "If thin pool metadata is damaged, it may be repairable. Checking and repairing thin pool metadata is analagous to running fsck on a file system." Another explanation: https://rwmj.wordpress.com/2014/05/22/using-lvms-new-cache-feature/#comment-10152

When creating the cache, the cachemode setting has two possible options (a short example of changing the mode follows this list):

* **writethrough** ensures that any data written will be stored both in the cache pool LV and on the origin LV. The loss of a device associated with the cache pool LV in this case would not mean the loss of any data.
* **writeback** ensures better performance, but at the cost of a higher risk of data loss if the drive used for the cache fails.
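
The mode is normally chosen when the cache is created (for example via --cachemode, as sketched in the first section), and on reasonably recent LVM releases it can also be switched on an existing cached LV; a minimal sketch using the example volume:

{{{
# Switch the existing cached LV to the safer writethrough mode
lvconvert --cachemode writethrough vg_erica0/home
}}}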

In the examples above we've opted for writeback because the logical volume created for the cache on the physical host is backed by a RAID1 array created from two SSD drives, so we already have a lower level of protection against drive failure in place. Choose wisely.

More details:

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/lvm_cache_volume_creation

https://manpages.ubuntu.com/manpages/eoan/man7/lvmcache.7.html