= lvm-cache =
== Enabling the cache ==

Creating the cache is a multistep process, but we can automate most of it with one command. In the example below, vg_erica0/home is the logical volume we want to cache (we call this our origin disk) and /dev/sdb is our new SSD block device.

{{{
...
  Logical volume vg_erica0/home is now cached.
}}}
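The single command is typically an lvcreate call along the following lines. This is only a sketch: the pool name and size (home_cachepool, 90G) are illustrative, and the SSD has to be added to the volume group before it can hold the cache.

{{{
# pvcreate /dev/sdb                 # initialise the SSD as a physical volume
# vgextend vg_erica0 /dev/sdb       # add it to the origin's volume group
# lvcreate --type cache --cachemode writeback -L 90G \
      -n home_cachepool vg_erica0/home /dev/sdb
}}}

The lvcreate call builds the cache pool on /dev/sdb and attaches it to vg_erica0/home in one step, which is what produces the "is now cached" message above.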
=== What just happened? ===

LVM automatically created both a data cache lv and a metadata cache lv and combined those two into a cachepool. It then created a cached logical volume by linking the cachepool to the origin disk. The resulting cache volume assumes the name of the origin disk.

== Disabling the cache ==

...

== Examining the cache ==

A cache logical volume is built on the combination of the origin logical volume and the cache-pool logical volume.

{{{
# lvs vg_erica0/home
  LV   VG        Attr       LSize   Pool             Origin       Data%  Meta%  Move Log Cpy%Sync Convert
  home vg_erica0 Cwi-aoC--- 380.00g [home_cachepool] [home_corig] 0.25   14.54                    0.11
}}}

We can use the 'lvs -a' option to see all of the parts used to create the cache. Below, home is the new cached logical volume, home_cachepool is the cachepool lv, and home_corig is the origin lv. home_cachepool_cdata and home_cachepool_cmeta are the data cache lv and metadata cache lv that were combined to create the cachepool. lvol0_pmspare is the spare metadata logical volume.

{{{
# lvs -a
  LV                     VG        Attr       LSize   Pool             Origin       Data%  Meta%  Move Log Cpy%Sync Convert
  home                   vg_erica0 Cwi-aoC--- 380.00g [home_cachepool] [home_corig] 0.31   14.54                    1.14
  [home_cachepool]       vg_erica0 Cwi---C--- <89.92g                               0.31   14.54                    1.14
  [home_cachepool_cdata] vg_erica0 Cwi-ao---- <89.92g
  [home_cachepool_cmeta] vg_erica0 ewi-ao----  40.00m
  [home_corig]           vg_erica0 owi-aoC--- 380.00g
  [lvol0_pmspare]        vg_erica0 ewi-------  40.00m
}}}

Add the "-o +devices" option to show which devices the extents for each logical volume are based on.

{{{
# lvs -a -o +devices
}}}

== More Details ==

LVM caching uses the following LVM logical volume types. All of these associated logical volumes must be in the same volume group.

 * Origin logical volume: the large, slow logical volume.
 * ...
 * Cache metadata logical volume: the logical volume containing the metadata for the cache pool logical volume, which holds the accounting information that specifies where data blocks are stored (for example, on the origin logical volume or on the cache data logical volume).
 * Cache logical volume: the logical volume containing the origin logical volume and the cache pool logical volume. This is the resultant usable device which encapsulates the various cache volume components.
 * Spare metadata logical volume: related to a metadata failure-recovery feature. See https://manpages.debian.org/stretch/lvm2/lvmthin.7.en.html#Spare_metadata_LV ("If thin pool metadata is damaged, it may be repairable. Checking and repairing thin pool metadata is analogous to running fsck on a file system.") and another explanation at https://rwmj.wordpress.com/2014/05/22/using-lvms-new-cache-feature/#comment-10152

When creating the cache, cachemode has two possible options:

 * **writethrough** ensures that any data written will be stored both in the cache pool LV and on the origin LV. The loss of a device associated with the cache pool LV in this case would not mean the loss of any data.
 * **writeback** ensures better performance, but at the cost of a higher risk of data loss in case the drive used for the cache fails.

In the examples above we've opted for writeback because the logical volume created for the cache on the physical host is backed by a RAID1 array created from two SSD drives, so we already have a lower level of protection against drive failure in place. Choose wisely.

More details:

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/lvm_cache_volume_creation

https://manpages.ubuntu.com/manpages/eoan/man7/lvmcache.7.html
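Finally, whichever mode is chosen, it is not set in stone: the cache mode of an existing cached LV can be displayed and changed without rebuilding the cache. A short sketch, assuming a reasonably recent lvm2 (the reporting field and option are described in lvmcache(7) and lvchange(8)):

{{{
## show which cache mode vg_erica0/home is currently using
# lvs -o +cache_mode vg_erica0/home

## switch an existing cached LV to writethrough (or back to writeback)
# lvchange --cachemode writethrough vg_erica0/home
}}}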