#15613 closed Bug/Something is broken (fixed)
claudette won't start
| Reported by: | Jamie McClelland | Owned by: | JaimeV |
|---|---|---|---|
| Priority: | Medium | Component: | Tech |
| Keywords: | | Cc: | |
| Sensitive: | no | | |
Description
[ TIME ] Timed out waiting for device dev-mapper-vg_claudette0\x2dhome.device.
[DEPEND] Dependency failed for /home.
[DEPEND] Dependency failed for Local File Systems.
[DEPEND] Dependency failed for File System C…on /dev/mapper/vg_claudette0-home.

I was then dropped into a maintenance shell prompt. I entered the password. Then:
Change History (14)
comment:1 Changed 9 months ago by
comment:2 Changed 9 months ago by
I tried to uncache:
0 claudette:~# lvconvert --uncache vg_claudette0/home
  /usr/sbin/cache_check: execvp failed: No such file or directory
  Check of pool vg_claudette0/home_cachepool failed (status:2). Manual repair required!
  Failed to active cache locally vg_claudette0/home.
5 claudette:~#
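The `execvp failed: No such file or directory` line suggests the `cache_check` helper itself is missing rather than the cache pool being damaged. A hedged sketch of how one might confirm that on a Debian system (the path and package checks are assumptions, not something recorded in the ticket):

```sh
# Is the helper that lvconvert tried to exec actually on disk?
ls -l /usr/sbin/cache_check || echo "cache_check is missing"

# On Debian, cache_check is shipped by thin-provisioning-tools.
dpkg -s thin-provisioning-tools || echo "thin-provisioning-tools is not installed"
```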
comment:3 Changed 9 months ago by
This didn't work either:
0 claudette:~# lvconvert --repair vg_claudette0/home_cachepool
  Using default stripesize 64.00 KiB.
  Operation not permitted on cache pool LV vg_claudette0/home_cachepool.
  Operations permitted on a cache pool LV are:
  --splitcache    (operates on cache LV)
5 claudette:~#
comment:4 Changed 9 months ago by
5 claudette:~# lvconvert --repair vg_claudette0/home
  Using default stripesize 64.00 KiB.
  Operation not permitted on cache LV vg_claudette0/home.
  Operations permitted on a cache LV are:
  --splitcache
  --uncache
  --splitmirrors  (operates on mirror or raid sub LV)
  --type thin-pool
5 claudette:~#
comment:5 Changed 9 months ago by
This thread seems to describe the problem, but there is no resolution.
comment:6 Changed 9 months ago by
And here's a thread describing a workaround.
It's possible that the --repair option doesn't work because we don't have a new enough version of lvm.
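A hedged sketch of how one could check the installed lvm version before deciding whether an upgrade is needed (standard lvm2/Debian commands, not taken from the ticket):

```sh
# Report the LVM tool, library, and driver versions in use.
lvm version

# On Debian, show the installed lvm2 package version.
dpkg -l lvm2
```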
comment:7 Changed 9 months ago by
I think the answer is to boot with a more modern version of lvm and re-run the repair command.
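A sketch of what that could look like from a rescue environment; the idea of pulling lvm2 from buster comes from the following comments, but the sources.list entry and exact commands are assumptions:

```sh
# From the rescue system, point apt at a newer Debian release (assumption:
# a plain buster entry is enough) and upgrade the LVM userland.
echo "deb http://deb.debian.org/debian buster main" >> /etc/apt/sources.list
apt-get update
apt-get install lvm2

# Then retry the repair on the cache pool.
lvconvert --repair vg_claudette0/home_cachepool
```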
comment:8 Changed 9 months ago by
I booted into debirf (stretch) and upgraded lvm2 and...
0 debirf-rescue:~# lvconvert --repair vg_claudette0/home_cachepool
  /dev/vg_claudette0/lvol1: not found: device not cleared
  Aborting. Failed to wipe start of new LV.
  WARNING: If everything works, remove vg_claudette0/home_cachepool_meta0 volume.
  WARNING: Use pvmove command to move vg_claudette0/home_cachepool_cmeta on the best fitting PV.
0 debirf-rescue:~#
comment:9 Changed 9 months ago by
Hm... it seems the problem is that /dev/sdc is not available in claudette.
comment:10 Changed 9 months ago by
Nope - scratch that - both SSD disks are there: /dev/sda and /dev/sdb.
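For reference, a hedged sketch of how to confirm which block devices and physical volumes the system actually sees (the device names are just the ones mentioned above; the commands are standard):

```sh
# List the block devices the kernel sees.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

# Show which physical volumes belong to which volume group.
pvs -o pv_name,vg_name,pv_size
```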
comment:11 Changed 9 months ago by
Weirdly, /dev/mapper/vg_claudette0-home was available in debirf, so I rebooted into claudette.
But it's not available in claudette... maybe we need to upgrade lvm?
comment:12 Changed 9 months ago by
| Owner: | set to JaimeV |
|---|---|
| Status: | new → assigned |
OK, first of all, the real issue here was that I had failed to install the thin-provisioning-tools package on claudette.
Strangely, you can set up and begin using an lvmcache without it, but you will not be able to boot or perform key maintenance tasks on the lvmcache without it.
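A hedged sketch of a pre-flight check one could run before enabling an lvmcache, based on the package named in this comment (the exact checks are an assumption, not something from the ticket):

```sh
# Make sure the cache/thin metadata tools are present before creating a cache.
dpkg -s thin-provisioning-tools >/dev/null 2>&1 || apt-get install thin-provisioning-tools

# lvconvert and boot-time activation call these helpers; confirm they resolve.
command -v cache_check
command -v thin_check
```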
Booting into debirf and upgrading lvm from buster packages pulled in that package automatically, which is why Jamie was able to mount home that way.
I think your first instinct, to turn off the cache, was the right one, but you also need the thin-provisioning-tools package for that to work. After installing the missing package in debirf I was able to turn off the cache and boot into claudette. Once claudette was back online I installed the missing package there and was easily able to turn the cache back on again.
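A hedged sketch of that recovery sequence, reconstructed from this comment; the cache-pool size and the SSD physical volume are hypothetical placeholders, and the exact lvcreate/lvconvert invocations used to re-enable the cache are not recorded in the ticket:

```sh
# In the rescue environment: install the missing tools, then drop the cache
# so /home becomes a plain logical volume again.
apt-get install thin-provisioning-tools
lvconvert --uncache vg_claudette0/home
# ... reboot into claudette ...

# Back on claudette: install the tools, recreate a cache pool on the SSD
# (size and PV here are placeholders), and attach it to /home again.
apt-get install thin-provisioning-tools
lvcreate --type cache-pool -L 50G -n home_cachepool vg_claudette0 /dev/sda
lvconvert --type cache --cachepool vg_claudette0/home_cachepool vg_claudette0/home
```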
The lesson here is to never, never, never enable an lvmcache without first ensuring that the thin-provisioning-tools package is installed. I may just go ahead and add it to all VMs via puppet so that we don't have to worry about this happening to us again.
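If that route is taken, a minimal sketch of the idea (a generic Puppet package resource applied as a one-off; module layout and node targeting are left out, and nothing here is from our actual manifests):

```sh
# One-off equivalent of the proposed Puppet change, runnable on a single VM.
puppet apply -e "package { 'thin-provisioning-tools': ensure => installed }"
```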
comment:13 Changed 9 months ago by
| Resolution: | → fixed |
|---|---|
| Status: | assigned → closed |
comment:14 Changed 9 months ago by
| Sensitive: | unset |
|---|---|