Bug 16899 - Installer rescue can't create a new bootloader if it doesn't find existing configuration
Status: RESOLVED OLD
Alias: None
Product: Mageia
Classification: Unclassified
Component: Installer
Version: Cauldron
Hardware: All
OS: Linux
Priority: Normal
Severity: normal
Target Milestone: ---
Assignee: Thierry Vignaud
QA Contact:
URL:
Whiteboard:
Keywords: NEEDINFO
Depends on:
Blocks:
 
Reported: 2015-10-04 19:26 CEST by Rémi Verschelde
Modified: 2016-04-01 16:46 CEST

See Also:
Source RPM: drakx-installer-rescue
CVE:
Status comment:


Attachments
Screenshot of the output of the repair bootloader option (77.55 KB, image/jpeg)
2015-10-04 19:26 CEST, Rémi Verschelde
journalctl output from last working boot (173.84 KB, text/plain)
2015-10-06 19:36 CEST, Rémi Verschelde

Description Rémi Verschelde 2015-10-04 19:26:18 CEST
Context: I did a VM install of Cauldron this morning using boot-nonfree.iso. The process was a bit chaotic, with several conflicts and failed scripts, but I managed to install nevertheless. The system was working fine for a couple of reboots, then I installed the latest vboxadditions to be in sync with the kernel, and removed the orphans (there shouldn't have been orphans at all after a clean install IMO, but whatever).

Now on reboot I get directly to the grub prompt, my grub configuration seems to have been lost.

Running the drakx rescue from Mageia 5 (or from boot-nonfree.iso directly), I tried to let it repair/regenerate a bootloader, but it errors with:

  => found a Mageia release 6 (Cauldron) for x86_64 root partition on /dev/sda1
  => type ext4, version '
  find_root_parts found sda1: Mageia (Cauldron) for x86_64
  => Selecting /dev/sda1 as root fs
  [...]
  Cannot find a configured boot loader

(See attached screenshot).


Is it the expected behaviour, or should the rescue system be able to regenerate a new grub/grub2 configuration if none is found? IMO it would be a very useful feature if it did.

Reproducible: 

Steps to Reproduce:
Comment 1 Rémi Verschelde 2015-10-04 19:26:50 CEST
Created attachment 7087 [details]
Screenshot of the output of the repair bootloader option
Comment 2 Rémi Verschelde 2015-10-04 19:28:03 CEST
When pressing enter in the above screenshot, I just get:

  Error
  Program exited abnormally (return code 2)


If there is a way to retrieve more debug info about this, please tell me how; I'm a bit lost with the various methods in the various stages of the live/classical flavours :)
Comment 3 Thierry Vignaud 2015-10-05 11:13:52 CEST
The purpose is to reinstall the bootloader if it was overwritten, e.g. by Windows.
We don't offer to create a new bootloader configuration.
(Basically we run lilo or /boot/grub{,2}/install.sh.)

For further fixing, you need to rerun the drakx installer, choose update, then your bootloader will be reconfigured at summary time.
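For reference, the same repair can also be done by hand from any rescue shell by chrooting into the installed system and reinstalling grub2. This is a hedged sketch, not the rescue's actual code path: the device names /dev/sda and /dev/sda1 are taken from this report and must be adjusted to your layout, and it assumes a grub2 install with the usual Mageia paths:

```shell
# Mount the root partition the rescue found (here /dev/sda1, as in this report)
mount /dev/sda1 /mnt
# Bind the virtual filesystems grub2 needs inside the chroot
for fs in dev proc sys; do mount --bind /$fs /mnt/$fs; done
# Reinstall the boot sector, then regenerate the configuration from scratch
chroot /mnt grub2-install /dev/sda
chroot /mnt grub2-mkconfig -o /boot/grub2/grub.cfg
```

This regenerates a configuration even when none exists, which is what the rescue's "re-install boot loader" option does not attempt.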

The question is why you lost your bootloader in the first place.

Once you've run drakx in order to fix it, you can look at the logs and see what you had removed with urpme --auto-orphans.
Thierry Vignaud 2015-10-05 11:14:18 CEST

Keywords: (none) => NEEDINFO

Comment 4 Rémi Verschelde 2015-10-05 11:25:25 CEST
OK, I had assumed that the rescue application would do basically the same thing as rerunning the drakx installer, without having to go through all the installer steps.

Is it by choice that it can't do the same as the post-install bootloader configuration step, or for technical reasons? It would be nice in such situations to have a rescue option that would generate a very basic grub config (i.e. just using the root partition it found, without trying to find other distros or Windows), so that one can boot again and fix the bootloader using drakboot.


Regarding my VM, I probably borked it when running auto-orphans indeed. I did not check the list thoroughly but there were some relatively important packages like tk that got removed (the list was not too long however, maybe 20 packages or so). I'll try to fix it to see what was actually uninstalled that might have killed grub.
Comment 5 Rémi Verschelde 2015-10-06 19:35:06 CEST
So I repaired the bootloader using the installer, and interestingly, there is no mention of any package removed by "urpme --auto-orphans" in "journalctl | grep RPM | grep erase".

I just checked by installing buildrequires and removing the orphans again, and this time they show up in journalctl. So I really wonder how I messed this all up :)
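The journal filtering used above can be sketched against sample data. This is a hedged illustration only: the journal lines below are made up for the example (the package names are hypothetical), and it assumes rpm logs its erase events to the journal in roughly this shape:

```shell
# Simulated journal output (sample data, not from the actual VM)
journal='Oct 06 10:00:01 vm rpm[1234]: erase tk-8.6.4-1.mga6.x86_64
Oct 06 10:00:02 vm rpm[1234]: install vim-8.0-1.mga6.x86_64'

# Same filter as in the comment: keep only the RPM erase events
printf '%s\n' "$journal" | grep -i rpm | grep erase
```

If the auto-orphans removal had gone through rpm normally, the erased packages would show up in this output; their absence here is what makes the breakage puzzling.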
Comment 6 Rémi Verschelde 2015-10-06 19:36:46 CEST
Created attachment 7093 [details]
journalctl output from last working boot

Nothing really interesting in there, but I attach it for the sake of consistency. It looks like it stops abruptly, so I guess I closed my VM with Host+Q instead of doing a proper shutdown; I wouldn't expect that to mess up the grub config, though.
Comment 7 Thierry Vignaud 2016-04-01 16:46:40 CEST
I think it's safe to close this one...

Status: NEW => RESOLVED
Resolution: (none) => OLD

