After a clean install of beta 2, I used diskdrake to add mountpoints for a previous install. The existing entries for the current install were removed from /etc/fstab. I'll attach a copy of fstab from immediately after the install, and a copy from after using diskdrake to add the entries for the previous install.
Created attachment 318 [details] fstab after install (before running diskdrake)
This is the fstab created by the clean installation of beta 2.
Created attachment 319 [details] fstab after running diskdrake
Here is the fstab after using diskdrake to add the entries for /var/mnt/91... Note that the entries for the currently in-use filesystems (/dev/bk...) have been incorrectly removed by diskdrake.
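To make the symptom concrete without reproducing the full attachments, here is a purely illustrative before/after (the device names match my setup, but the filesystem types and options shown are made up for illustration):

    # Before running diskdrake (attachment 318 style): bk entries present
    /dev/sdb1            /      ext4  defaults  1 1
    /dev/mapper/bk-home  /home  ext4  defaults  1 2
    /dev/mapper/bk-usr   /usr   ext4  defaults  1 2
    /dev/mapper/bk-var   /var   ext4  defaults  1 2

    # After adding the /var/mnt/91 mountpoints (attachment 319 style):
    # the new 91 entries are written, but the bk entries have vanished
    /dev/sdb1            /                 ext4  defaults  1 1
    /dev/mapper/91-home  /var/mnt/91/home  ext4  defaults  1 2
    /dev/mapper/91-usr   /var/mnt/91/usr   ext4  defaults  1 2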
I did the same thing with Mandriva 2011.0 beta 2: a clean install, then adding the entries for the existing install, and it did not incorrectly remove entries from fstab. So I've compared the source programs looking for non-cosmetic differences. At line 507 of /usr/lib/libDrakX/diskdrake/interactive.pm, Mageia has

    { label => N("Encrypt partition"), type => 'bool', val => \$use_dmcrypt },

while Mandriva has

    { label => N("Encrypt partition"), type => 'bool', val => \$use_dmcrypt,
      disabled => sub { member($part->{mntpoint}, qw(/ /usr /var /boot)); } },

I'm not clear on what this is doing, but it's the only change I've found that doesn't appear to be purely cosmetic.
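If it helps, here is my reading of that Mandriva-only fragment as a minimal standalone sketch (an assumption on my part, not the actual drakx code; member() below is a local stand-in for the MDK::Common helper of the same name): the disabled callback merely greys out the "Encrypt partition" checkbox for mountpoints the system needs unencrypted at boot, so it looks unrelated to fstab handling.

    use strict;
    use warnings;

    # Local stand-in for MDK::Common's member(): true if $e is in the list.
    sub member { my $e = shift; scalar grep { $_ eq $e } @_ }

    my $part = { mntpoint => '/usr' };

    # Equivalent of the Mandriva-only callback: the "Encrypt partition"
    # checkbox is disabled for mountpoints needed early in the boot.
    my $disabled = sub { member($part->{mntpoint}, qw(/ /usr /var /boot)) };

    print $disabled->() ? "checkbox disabled\n" : "checkbox enabled\n";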
FYI: I changed line 507 to match what Mandriva has and retested, but the problem is still there. I have no idea why the Mageia version of diskdrake is dropping the entries from /etc/fstab for the currently mounted /home, /opt, /usr, /var, and /var/mnt logical volumes. Note that this bug leaves the system unbootable until the fstab entries are restored. Any suggestions or further info needed to help debug this?
Priority: Normal => release_blocker
CC: (none) => pterjan
CC: (none) => ennael1
Assignee: bugsquad => pterjan
How were the /dev/bk mount points set up? And what is /dev/bk/? LVM?
Yes, I'm using LVM. During the install, I selected the existing partition sdb1 for /, plus various logical volumes within a single volume group called bk. All were formatted prior to the install. After booting into the new install, I used diskdrake to add mount points for a previous installation on sda13, using logical volumes within the volume group called 91. I've since manually added mount points for my third installation on sda8, which uses logical volumes within the volume group called a8. I think df shows it best:

/dev/sdb1             1.5G   364M  1.1G  26%  /
/dev/mapper/bk-home   1.5G   284M  1.2G  19%  /home
/dev/mapper/bk-opt    1.5G    81M  1.4G   6%  /opt
/dev/mapper/bk-tmp    5.0G   174M  4.6G   4%  /tmp
/dev/mapper/bk-usr     16G   6.6G  8.4G  44%  /usr
/dev/mapper/bk-var    1.5G   745M  691M  52%  /var
/dev/sdb2             494M    23M  447M   5%  /var/log
/dev/mapper/bk-mnt    3.9M    39K  3.7M   2%  /var/mnt
/dev/sda3             149G   111G   30G  79%  /var/mnt/hd
/dev/sda14           1004M   881M   73M  93%  /var/mnt/91
/dev/mapper/91-home  1008M   594M  364M  63%  /var/mnt/91/home
/dev/mapper/91-opt   1008M   557M  401M  59%  /var/mnt/91/opt
/dev/mapper/91-tmp    5.5G   305M  4.9G   6%  /var/mnt/91/tmp
/dev/mapper/91-usr     16G    14G  1.4G  91%  /var/mnt/91/usr
/dev/mapper/91-var    7.9G   1.6G  6.0G  21%  /var/mnt/91/var
/dev/sda15            494M   148M  321M  32%  /var/mnt/91/var/log
/dev/mapper/91-mnt    3.9M   1.9M  1.8M  52%  /var/mnt/91/var/mnt
/dev/sda8            1004M   247M  707M  26%  /var/mnt/a8
/dev/mapper/a8-home  1008M   360M  598M  38%  /var/mnt/a8/home
/dev/mapper/a8-opt   1008M    34M  924M   4%  /var/mnt/a8/opt
/dev/mapper/a8-tmp    5.5G   140M  5.1G   3%  /var/mnt/a8/tmp
/dev/mapper/a8-usr     16G   3.2G   12G  21%  /var/mnt/a8/usr
/dev/mapper/a8-var    7.9G   366M  7.2G   5%  /var/mnt/a8/var
/dev/sda11            494M    30M  439M   7%  /var/mnt/a8/var/log
/dev/mapper/a8-mnt    3.9M    31K  3.7M   1%  /var/mnt/a8/var/mnt
/dev/mapper/luks91     95G    38G   53G  42%  /var/mnt/data
/dev/sr0              4.1G   4.1G     0 100%  /media/cdrom
/dev/sda1             2.0G   1.2G  843M  59%  /var/mnt/win_c
/dev/sda5             2.0G   552M  1.4G  29%  /var/mnt/win_d
/dev/sda6             2.0G   562M  1.4G  29%  /var/mnt/win_e
/dev/sda7             2.0G   683M  1.3G  36%  /var/mnt/win_f
/dev/sda13             36G    22G   15G  61%  /var/mnt/win_m
Ping?
Created attachment 402 [details] Syslog output from diskdrake
I did another test. I restored the fstab shown in the first attachment, then used diskdrake to add an entry only for sda14 on /var/mnt/91. Again, diskdrake removed all of the /dev/mapper/bk entries. This test should rule out the possibility that the problem is caused by duplicate logical volume names in different volume groups, although those duplicates still exist on the drive. I'm attaching an extract of the syslog entries created by diskdrake.
Ping? I consider this bug a release blocker! If the user cannot reboot into Mageia, it's a major problem. Adding mountpoints for a previous Linux installation should not make the Mageia installation unbootable.
Well, it's hard to guess what's going on, as the logs don't show anything wrong; all partitions are listed... I will try creating several LVM volumes in a KVM guest, then do an installation and add them...
I could reproduce it. It seems that when adding a mountpoint for a logical volume, partitions from other volume groups get removed :(
Summary: diskdrake removes existing entries from fstab when adding additional entries. => diskdrake removes lvm entries from fstab when making any change
Note that it only removes LVM entries that previously existed in fstab. In the example shown in attachment 319 [details], the new LVM entries added for /dev/mapper/91-... were written to fstab, while the existing entries for /dev/mapper/bk-... were removed.
Yes, it drops them when loading the fstab, and adds the ones you add.
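To illustrate the failure mode as I understand it (a hypothetical sketch, not the real fs/get.pm code): if, while reloading fstab, each entry is kept only when its device matches a partition diskdrake already knows about, then any logical volume that fails to match (here, everything in the bk volume group) silently disappears when fstab is rewritten.

    use strict;
    use warnings;

    # Hypothetical: the devices diskdrake matched while loading fstab.
    # Suppose the LVs from the bk volume group failed to match.
    my %known = map { $_ => 1 } qw(/dev/sdb1 /dev/mapper/91-home);

    my @fstab = (
        { device => '/dev/sdb1',           mntpoint => '/' },
        { device => '/dev/mapper/bk-home', mntpoint => '/home' },
        { device => '/dev/mapper/91-home', mntpoint => '/var/mnt/91/home' },
    );

    # Unmatched entries are silently dropped, so the rewritten fstab
    # loses /dev/mapper/bk-home, which is exactly the symptom reported.
    my @kept = grep { $known{ $_->{device} } } @fstab;
    print "$_->{device} $_->{mntpoint}\n" for @kept;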
Can you try after patching /usr/lib/libDrakX/fs/get.pm with http://svnweb.mageia.org/soft/drakx/trunk/perl-install/fs/get.pm?r1=1318&r2=1317&pathrev=1318 ? I didn't think it would be enough, but I could no longer reproduce after fixing that bug.
With that patch applied, everything looks good. By the way, just before I saw this posting, I had compared /usr/lib/libDrakX/fsedit.pm to the Mandriva version and, based on that, tried commenting out the line that allows LVM on dm-crypt; that also fixed the problem, so it was the change to allow LVM on dm-crypt that triggered the bug. With the change to allow LVM on dm-crypt plus the comment 14 patch, it works, so this looks like the correct fix. Once that patch is pushed, this bug can be closed as fixed. Thank you very much!
Package uploaded
Status: NEW => RESOLVED
Resolution: (none) => FIXED