Bug 1041 - diskdrake removes lvm entries from fstab when making any change
Summary: diskdrake removes lvm entries from fstab when making any change
Status: RESOLVED FIXED
Alias: None
Product: Mageia
Classification: Unclassified
Component: RPM Packages
Version: Cauldron
Hardware: i586 Linux
Priority: release_blocker
Severity: critical
Target Milestone: ---
Assignee: Pascal Terjan
QA Contact:
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2011-04-28 22:13 CEST by Dave Hodgins
Modified: 2011-05-14 22:34 CEST
CC List: 2 users

See Also:
Source RPM: drakxtools-13.49-1.mga1.src.rpm
CVE:
Status comment:


Attachments
fstab after install (before running diskdrake) (1.10 KB, text/plain)
2011-04-28 22:14 CEST, Dave Hodgins
fstab after running diskdrake (1.34 KB, text/plain)
2011-04-28 22:18 CEST, Dave Hodgins
Syslog output from diskdrake (28.95 KB, text/plain)
2011-05-13 04:26 CEST, Dave Hodgins

Description Dave Hodgins 2011-04-28 22:13:02 CEST
After a clean install of beta 2, I used diskdrake to add mount points for
a previous install.  The existing entries for the current install were
removed from /etc/fstab.

I'll attach a copy of fstab from immediately after the install, and a
copy from after using diskdrake to add the entries for the existing
install.
Comment 1 Dave Hodgins 2011-04-28 22:14:49 CEST
Created attachment 318
fstab after install (before running diskdrake)

This is the fstab created by the clean installation of beta 2.
Comment 2 Dave Hodgins 2011-04-28 22:18:40 CEST
Created attachment 319
fstab after running diskdrake

Here is the fstab after using diskdrake to add the entries for
/var/mnt/91...

Note that the entries for the currently in-use logical volumes (/dev/bk...)
have been incorrectly removed by diskdrake.
Comment 3 Dave Hodgins 2011-04-29 01:11:36 CEST
I did the same thing with Mandriva 2011.0 beta 2: a clean install, then
adding the entries for the existing install, and it did not incorrectly
remove entries from fstab, so I've compared the source programs looking
for non-cosmetic differences.  At line 507 of
/usr/lib/libDrakX/diskdrake/interactive.pm, Mageia has

{ label => N("Encrypt partition"), type => 'bool', val => \$use_dmcrypt },

while Mandriva has

{ label => N("Encrypt partition"), type => 'bool', val => \$use_dmcrypt, disabled => sub { member($part->{mntpoint}, qw(/ /usr /var /boot)); } },

I'm not clear on what this is doing, but it's the only change I've found
that doesn't appear to be purely cosmetic.
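
My best guess (and it is only a guess) at what that extra option does:
member($value, @list) is true if $value appears in the list, so the
disabled callback would grey out the "Encrypt partition" checkbox whenever
the partition's mount point is /, /usr, /var or /boot.  A small standalone
sketch of that check, using a local member() stand-in instead of the real
MDK::Common import:

use strict;
use warnings;

# Stand-in for MDK::Common's member(): true if $value is in @list.
sub member {
    my ($value, @list) = @_;
    return grep { $_ eq $value } @list;
}

# Mimics the 'disabled' callback from the Mandriva version of line 507.
sub encrypt_option_disabled {
    my ($part) = @_;   # $part is a hash ref like the drakx partition hashes
    return member($part->{mntpoint}, qw(/ /usr /var /boot));
}

for my $mnt ('/usr', '/home') {
    my $part = { mntpoint => $mnt };
    print "$mnt: ", encrypt_option_disabled($part)
        ? "encryption option disabled\n" : "encryption option offered\n";
}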
Comment 4 Dave Hodgins 2011-05-03 22:32:21 CEST
FYI, I changed line 507 to match what Mandriva has and retested, but the
problem is still there.  I have no idea why the Mageia version of diskdrake
is dropping the entries from /etc/fstab for the currently mounted /home,
/opt, /usr, /var, and /var/mnt logical volumes.  Note that this bug leaves
the system unbootable until the fstab entries are restored.

Any suggestions or further info needed to help debug this?

Priority: Normal => release_blocker

Ahmad Samir 2011-05-03 23:24:19 CEST

CC: (none) => pterjan

Anne Nicolas 2011-05-04 15:08:48 CEST

CC: (none) => ennael1
Assignee: bugsquad => pterjan

Comment 5 Pascal Terjan 2011-05-06 01:26:30 CEST
How were the /dev/bk mount points set up? And what is /dev/bk/? LVM?
Comment 6 Dave Hodgins 2011-05-06 02:12:11 CEST
Yes, I'm using LVM.

During the install, I selected the existing partition sdb1 for /, and
various logical volumes within a single volume group called bk.
All were formatted prior to the install.

After booting into the new install, I used diskdrake to add mount points
for a previous installation on sda13, using logical volumes within the
volume group called 91.  I've since manually added mount points for my
third installation on sda8, which uses logical volumes within a volume
group called a8.  I think df shows it best ...

/dev/sdb1             1.5G  364M  1.1G  26% /
/dev/mapper/bk-home   1.5G  284M  1.2G  19% /home
/dev/mapper/bk-opt    1.5G   81M  1.4G   6% /opt
/dev/mapper/bk-tmp    5.0G  174M  4.6G   4% /tmp
/dev/mapper/bk-usr     16G  6.6G  8.4G  44% /usr
/dev/mapper/bk-var    1.5G  745M  691M  52% /var
/dev/sdb2             494M   23M  447M   5% /var/log
/dev/mapper/bk-mnt    3.9M   39K  3.7M   2% /var/mnt
/dev/sda3             149G  111G   30G  79% /var/mnt/hd
/dev/sda14           1004M  881M   73M  93% /var/mnt/91
/dev/mapper/91-home  1008M  594M  364M  63% /var/mnt/91/home
/dev/mapper/91-opt   1008M  557M  401M  59% /var/mnt/91/opt
/dev/mapper/91-tmp    5.5G  305M  4.9G   6% /var/mnt/91/tmp
/dev/mapper/91-usr     16G   14G  1.4G  91% /var/mnt/91/usr
/dev/mapper/91-var    7.9G  1.6G  6.0G  21% /var/mnt/91/var
/dev/sda15            494M  148M  321M  32% /var/mnt/91/var/log
/dev/mapper/91-mnt    3.9M  1.9M  1.8M  52% /var/mnt/91/var/mnt
/dev/sda8            1004M  247M  707M  26% /var/mnt/a8
/dev/mapper/a8-home  1008M  360M  598M  38% /var/mnt/a8/home
/dev/mapper/a8-opt   1008M   34M  924M   4% /var/mnt/a8/opt
/dev/mapper/a8-tmp    5.5G  140M  5.1G   3% /var/mnt/a8/tmp
/dev/mapper/a8-usr     16G  3.2G   12G  21% /var/mnt/a8/usr
/dev/mapper/a8-var    7.9G  366M  7.2G   5% /var/mnt/a8/var
/dev/sda11            494M   30M  439M   7% /var/mnt/a8/var/log
/dev/mapper/a8-mnt    3.9M   31K  3.7M   1% /var/mnt/a8/var/mnt
/dev/mapper/luks91     95G   38G   53G  42% /var/mnt/data
/dev/sr0              4.1G  4.1G     0 100% /media/cdrom
/dev/sda1             2.0G  1.2G  843M  59% /var/mnt/win_c
/dev/sda5             2.0G  552M  1.4G  29% /var/mnt/win_d
/dev/sda6             2.0G  562M  1.4G  29% /var/mnt/win_e
/dev/sda7             2.0G  683M  1.3G  36% /var/mnt/win_f
/dev/sda13             36G   22G   15G  61% /var/mnt/win_m
Comment 7 Anne Nicolas 2011-05-12 09:51:31 CEST
Ping?
Comment 8 Dave Hodgins 2011-05-13 04:26:03 CEST
Created attachment 402
Syslog output from diskdrake

I did another test.  I restored the fstab shown in the first attachment,
and then used diskdrake to add an entry only for sda14 on /var/mnt/91.
Again, diskdrake removed all of the /dev/mapper/bk entries.

This test should rule out the possibility of the problem being caused by
duplicate logical volume names in different volume groups, although they
still exist on the drive.

I'm attaching an extract of the syslog entries created by diskdrake.
Comment 9 Dave Hodgins 2011-05-14 09:01:29 CEST
Ping?  I consider this bug a release blocker!  If the user cannot reboot
into Mageia, it's a major problem.  Adding mount points for a previous Linux
installation should not make the Mageia installation unbootable.
Comment 10 Pascal Terjan 2011-05-14 13:37:00 CEST
Well, it's hard to guess what's going on, as the log doesn't show anything wrong; all partitions are listed...
I will try creating several LVM volume groups in KVM, then do an installation and add them...
Comment 11 Pascal Terjan 2011-05-14 15:03:43 CEST
I could reproduce it; it seems that when adding a mount point for a logical volume, partitions from other volume groups get removed :(
Pascal Terjan 2011-05-14 15:30:35 CEST

Summary: diskdrake removes existing entries from fstab when adding additional entries. => diskdrake removes lvm entries from fstab when making any change

Comment 12 Dave Hodgins 2011-05-14 20:28:06 CEST
Note that it only removes LVM entries that previously existed in fstab.

In the example shown in attachment 319, the new LVM entries added for
/dev/mapper/91/... were written to fstab, while the existing entries for
/dev/mapper/bk/... were removed.
Comment 13 Pascal Terjan 2011-05-14 20:38:37 CEST
Yes, it drops them when loading the fstab, and then adds the ones you add.
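
To illustrate the effect (this is only a sketch of the failure mode, not
the real fs/get.pm code, and the device names below are just examples): if
the fstab entries are matched against a list of detected devices and
unmatched entries are discarded, then a detection change that leaves the bk
logical volumes out of that list silently drops their lines while keeping
everything it does recognise:

use strict;
use warnings;

# Hypothetical list of detected devices; suppose the bk logical volumes
# went missing from it after a detection change.
my @detected = qw(/dev/sdb1 /dev/sdb2 /dev/mapper/91-home /dev/mapper/91-usr);
my %known    = map { $_ => 1 } @detected;

# fstab entries as [ device, mount point ] pairs.
my @fstab = (
    [ '/dev/sdb1',           '/'        ],
    [ '/dev/mapper/bk-home', '/home'    ],
    [ '/dev/mapper/bk-usr',  '/usr'     ],
    [ '/dev/sdb2',           '/var/log' ],
);

# Buggy behaviour: keep only entries whose device is in the detected list,
# silently discarding the rest (here, the bk-* volumes).
my @kept = grep { $known{ $_->[0] } } @fstab;

printf "%-22s %s\n", @$_ for @kept;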
Comment 14 Pascal Terjan 2011-05-14 21:27:18 CEST
Can you try after patching /usr/lib/libDrakX/fs/get.pm with http://svnweb.mageia.org/soft/drakx/trunk/perl-install/fs/get.pm?r1=1318&r2=1317&pathrev=1318 ?

I didn't think it would be enough, but I could no longer reproduce after fixing that bug.
Comment 15 Dave Hodgins 2011-05-14 21:58:02 CEST
With that patch applied, everything looks good.

By the way, just before I saw this posting, I'd compared /usr/lib/libDrakX/fsedit.pm
to the Mandriva version and, based on that, tried commenting out the line
that allows LVM on dmcrypt.  That also fixed the problem, so it was the
change to allow LVM on dmcrypt that triggered the bug.

With the change to allow LVM on dmcrypt plus the comment 14 patch, it works,
so this looks like the correct fix.

Once that patch is pushed, this bug can be closed as fixed.

Thank you very much!
Comment 16 Pascal Terjan 2011-05-14 22:34:36 CEST
Package uploaded

Status: NEW => RESOLVED
Resolution: (none) => FIXED

