Description of problem:
Booting the latest netboot ISO in rescue mode and trying to reinstall the boot loader does not work, because the rescue mode does not analyse the PVs / VGs / LVs under LVM. The installer reports an exit code of 2.

Version-Release number of selected component (if applicable):
2_RC

How reproducible:
Install a machine with the root FS under LVM (my personal configuration is a VG on top of a RAID 1 of sda2 and sdb2, with 3 LVs: home, root & swap).

Steps to Reproduce:
1. Boot on the CD and type rescue (return).
2. Choose "Re install boot loader" in the menu, wait a few seconds and see the error message: can't locate your root FS.
3. Type (return); the exit code is equal to 2.
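For reference, a layout like the one described (a VG on top of a RAID 1 of sda2 and sdb2, with home, root and swap LVs) could be recreated by hand roughly as follows. This is only a minimal sketch: the sizes and filesystem types are assumptions for illustration, and the vg0 / lv_* names simply follow those shown later in this report.
- mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
- pvcreate /dev/md0
- vgcreate vg0 /dev/md0
- lvcreate -L 20G -n lv_root vg0
- lvcreate -L 4G -n lv_swap vg0
- lvcreate -l 100%FREE -n lv_home vg0
- mkfs.ext4 /dev/vg0/lv_root && mkfs.ext4 /dev/vg0/lv_home && mkswap /dev/vg0/lv_swap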
Created attachment 2216 [details] can't find rootfs
Created attachment 2217 [details] exit code
Another duplicate?
CC: (none) => mageia, pterjan, thierry.vignaud
rescue's guessmounts initializes RAID first, then LVs. I have tested rescuing RAID & LVs separately, but not one on top of the other.
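In command terms, that ordering is roughly equivalent to the sequence below (a sketch only, using the lvm2 wrapper as it is invoked elsewhere in this report; the actual guessmounts code may differ):
- mdadm --assemble --scan      (bring up the RAID array first)
- lvm2 vgscan                  (then scan for VGs sitting on top of it)
- lvm2 vgchange -a y           (and activate the LVs so the root FS can be found)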
Source RPM: (none) => drakx-installer-rescue
So is this somewhat expected then, Thierry? Is it basically just up to the user to activate their RAID and LVM in these scenarios? Note: I've also got a machine with a similar setup (tho' I'm changing it as it's never proved useful - just adds overhead!), but I've never used the rescue mode to fiddle with it; I've just always used a generic rescue CD and assembled all the bits manually when needed.
Hmm, not easy to reproduce, as the installer doesn't allow creating LVs on top of RAID (even in expert mode)... Since we don't support creating such a scenario, this is not much of a priority. Unless I've misunderstood the reporter's description, or the reporter can provide a way to reproduce such an install.
Priority: Normal => Low
It doesn't allow that? Hmm, I thought it did... Maybe it did years ago? Certainly my system is one big raid1 + lvm on top but to be honest it's been mangled and changed so much over the years since the "original install" (likely a decade ago!) that I have no idea how I installed it now :D
(In reply to comment #6)
> Hmm, not easy to reproduce, as the installer doesn't allow creating LVs on top
> of RAID (even in expert mode)...
> Since we don't support creating such a scenario, this is not much of a priority.

Morning... I do not agree with you: the installer does permit this kind of installation. Everything I did was done with the Mageia netboot installer. I will test the partitioning again, but I am pretty sure about my usage.
Hi,
This bug was filed against cauldron, but we do not have cauldron at the moment.
Please report whether this bug is still valid for Mageia 2.
Thanks :)
Cheers,
marja
Keywords: (none) => NEEDINFO
Please report whether this bug is still valid for Mageia 2.
Thanks :)
Cheers,
Roelof
CC: (none) => r.wobben
Testing on Mageia 3 beta 4, it seems I may have something related to this. When I ask it to find my root filesystem, it just hangs at: "Please wait, trying to find your root device". It's an encrypted root device on LVM, with an XFS filesystem. As my /boot might not be 100% right now (which is why I am testing the rescue mode), I am not surprised it can't find it, but it should not just hang; Ctrl-C does not get out of it either.
CC: (none) => nelg
It seems the cause of the hang might well be lvm2's vgchange -a y, as I tried running that by hand and it hung in the same way. Nothing much of note in dmesg as far as I can tell; the last message is:
bio: create slab <bio-1> at 1
This is in a VM, so I can reproduce as needed.
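A minimal sketch of how to confirm this by hand from the rescue consoles (only standard commands are assumed to be present in the rescue image):
- lvm2 vgscan
- lvm2 vgchange -a y           (this is the command that hangs)
Then, on another console (tty2 for example):
- ps | grep vgchange           (the process is still there, blocked)
- dmesg | tail                 (nothing of note gets logged)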
Tested again with the Mageia 2 DVD: rescue mode works. It can see the LVM partitions and make them active.
Confirmed by building two VMs, one with the live installer and one with the standard installer. Even though both of these VMs boot fine, the rescue mode on the Mageia-3-beta4-x86_64-DVD fails. It fails at the command:
lvm2 vgchange -a y
which just hangs. I believe this must be considered release critical for Mageia 3, as a working rescue system is important.
Keywords: NEEDINFO => (none)
Priority: Low => release_blocker
Hardware: i586 => All
Target Milestone: --- => Mageia 3
Whiteboard: (none) => 3beta4
Morning,
I guess the trouble comes from needed DM kernel modules not being loaded. If you first load dm-log before trying to activate the volume, it seems to work, but you have to create symlinks in the /dev directory using lvm2 vgmknodes.

To summarize:
- boot in rescue mode
- go to the console
- load your keyboard layout if needed: loadkeys fr
- load the module: modprobe dm-log
- scan your hard drives for LVM: lvm2 vgscan
- activate the detected VGs: lvm2 vgchange -ay
- create symlinks under /dev: lvm2 vgmknodes

--> normally you then have your directory and symlinks under /dev, for example:
- ls /dev/vg0
lv_home@ lv_root@ lv_swap@ ...

Hope this comment can help.
mna.
(In reply to mnaud mnaud from comment #15)
[...]

I reply to myself and give here the correct behaviour: I guess the trouble comes from the interaction between udev and lvm. You can find all your VGs if you activate them with an option telling lvm not to wait for the udev answer.

To summarize:
- boot in rescue mode
- go to the console
- load your keyboard layout if needed: loadkeys fr
- scan your hard drives for LVM: lvm2 vgscan
- activate the detected VGs: lvm2 vgchange -ay --noudevsync

--> normally you then have your directory and symlinks under /dev and you can use them, for example:
- ls /dev/vg0
lv_home@ lv_root@ lv_swap@ ...

Hope this comment can help.
mna.
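Once the VGs are active, the original goal (reinstalling the boot loader) can be finished by hand. This is a sketch only: the mount points, the assumption of a separate /boot on sda1, and the choice between grub-install and grub2-install all depend on the actual installation:
- mount /dev/vg0/lv_root /mnt
- mount /dev/sda1 /mnt/boot          (only if /boot is a separate partition)
- mount --bind /dev /mnt/dev
- mount --bind /proc /mnt/proc
- mount --bind /sys /mnt/sys
- chroot /mnt
- grub-install /dev/sda              (or grub2-install, depending on the boot loader used)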
CC: (none) => tmb
Indeed, lvm2 vgchange -a y is freezing. Reproduced with rescue -> "go to console" -> guessmounts. Going to tty2 shows that it's lvm2 that blocks; strace shows that it blocks on semop(). Interestingly, Ctrl-C-ing it and then rerunning it enables it to go on...
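For anyone wanting to capture the same trace, a sketch of how the blocked call can be observed (assuming strace is available in the rescue image, as the attached traces suggest):
- strace -f -tt -o /tmp/vgchange.trace lvm2 vgchange -a y
- tail -f /tmp/vgchange.trace        (on another console; the trace stops at a semop() call, i.e. lvm2 waiting on its udev synchronisation semaphore)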
CC: (none) => bluca
Created attachment 3700 [details] guessmounts strace trace in rescue
Attachment 2216 description: san't find rootfs => can't find rootfs
Created attachment 3702 [details] strace diff between first & second run
Indeed, there's obviously a bad interaction between lvm2 & udev.
See also https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/802626

I was going to use the --noudevsync fix, since it does work around the issue in rescue. But while comparing the udev rules between rescue & installer, I saw we were missing one rule (60-persistent-storage.rules) added by Colin in drakx only: http://svnweb.mageia.org/soft?view=revision&revision=4017

This didn't fix it, but it made me see that only drakx was really adapted to the /usr move: the udev rules are in /lib/udev/ in rescue but in /usr/lib/udev in drakx. Fixing that solves the issue...
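To make the mismatch concrete, here is a sketch of what can be checked from a rescue shell (the paths come from the comment above; the real fix was made in the drakx-installer-rescue build, not at runtime):
- ls -d /lib/udev/rules.d /usr/lib/udev/rules.d    (see which directory actually exists in rescue, versus where a post-/usr-move udevd looks)
- ls /lib/udev/rules.d | grep -i dm                (the device-mapper rules udev needs in order to answer vgchange's synchronisation request)
If udevd only reads /usr/lib/udev/rules.d while rescue ships its rules in /lib/udev/rules.d, the dm rules never run, udev never signals completion, and vgchange -a y blocks on its semaphore, which is consistent with the semop() trace above.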
URL: (none) => https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/802626
Fixed
Status: NEW => RESOLVED
Resolution: (none) => FIXED
*** Bug 5910 has been marked as a duplicate of this bug. ***
CC: (none) => simplew8