Bug 5160 - Partition Manager crashes out when system has mirrored RAID
Summary: Partition Manager crashes out when system has mirrored RAID
Status: RESOLVED WONTFIX
Alias: None
Product: Mageia
Classification: Unclassified
Component: RPM Packages
Version: 1
Hardware: x86_64 Linux
Priority: Normal
Severity: major
Target Milestone: ---
Assignee: Thierry Vignaud
QA Contact:
URL:
Whiteboard:
Keywords:
Depends on:
Blocks: 14330
Reported: 2012-03-29 15:32 CEST by A B
Modified: 2014-10-19 11:55 CEST

See Also:
Source RPM: drakxtools-13.58-1.mga1
CVE:
Status comment:



Description A B 2012-03-29 15:32:08 CEST
I have a clean install of Mageia 1, fully updated, on an Asus P9 X79 WS motherboard with 32 GB RAM. The major partitions are on a 500 GB WD drive, and there is a hardware/BIOS RAID 1 of two 1.5 TB WD Caviar Green drives. During install, the mirrored RAID showed up in the diskdrake partition interface and was successfully partitioned and formatted as a single 1.5 TB ext4 partition.

After a few initial issues with fstab not recognizing it as a device, I determined there was a problem with the /dev/mapper/ line in fstab; I found the RAID's UUID and devlink and was able to mount it on the system successfully.
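
For reference, an fstab entry using the array's UUID (the one blkid reports for /dev/md126p1 further down) would look like the line below; this is illustrative only, since my working entry below ended up using the /dev/md126p1 devlink instead:

UUID=3c2ea997-d173-479f-8f74-46625156d829 /data ext4 acl,relatime 1 2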

However, while the 1.5 TB RAID is visible in Places in Dolphin and is properly accessible to my non-root admin account (to the point where, as a test, I have successfully shared folders on the RAID via the Samba server to Windows machines on the LAN), I cannot access the "Manage disk partitions", "CD/DVD Burner", or "Set up boot system" options in MCC. In all three cases I receive a pop-up window saying "This program has exited abnormally".

This error occurs every time the listed MCC configuration tools are started.

When I check the MCC logs, I see the following:

Attempting to start Manage Disk Partitions:

15:13:05 diskdrake[5162]: ### Program is starting ### 
15:13:05 diskdrake[5162]: running: dmraid -s -c -c 
15:13:06 diskdrake[5162]: running: dmraid -d -s -c -c 
15:13:06 diskdrake[5162]: running: dmraid -r -c -c 
15:13:06 diskdrake[5162]: running: blkid -o udev -p /dev/sda 
15:13:06 diskdrake[5162]: running: blkid -o udev -p /dev/sda 
15:13:06 diskdrake[5162]: running: blkid -o udev -p /dev/sda1 
15:13:06 diskdrake[5162]: running: blkid -o udev -p /dev/sda5 
15:13:06 diskdrake[5162]: running: blkid -o udev -p /dev/sda6 
15:13:06 diskdrake[5162]: running: blkid -o udev -p /dev/sda7 
15:13:06 diskdrake[5162]: running: blkid -o udev -p /dev/sda8 
15:13:06 diskdrake[5162]: running: blkid -o udev -p /dev/sdb 
15:13:06 diskdrake[5162]: running: blkid -o udev -p /dev/sdb 
15:13:06 diskdrake[5162]: ### Program is exiting ###


Attempting to start CD/DVD Burner Management:

15:14:18 diskdrake[5250]: ### Program is starting ### 
15:14:18 diskdrake[5250]: running: dmraid -s -c -c 
15:14:18 diskdrake[5250]: running: dmraid -d -s -c -c 
15:14:18 diskdrake[5250]: running: dmraid -r -c -c 
15:14:19 diskdrake[5250]: running: blkid -o udev -p /dev/sda 
15:14:19 diskdrake[5250]: running: blkid -o udev -p /dev/sda 
15:14:19 diskdrake[5250]: running: blkid -o udev -p /dev/sda1 
15:14:19 diskdrake[5250]: running: blkid -o udev -p /dev/sda5 
15:14:19 diskdrake[5250]: running: blkid -o udev -p /dev/sda6 
15:14:19 diskdrake[5250]: running: blkid -o udev -p /dev/sda7 
15:14:19 diskdrake[5250]: running: blkid -o udev -p /dev/sda8 
15:14:19 diskdrake[5250]: running: blkid -o udev -p /dev/sdb 
15:14:19 diskdrake[5250]: running: blkid -o udev -p /dev/sdb 
15:14:19 diskdrake[5250]: ### Program is exiting ### 


Attempting to start Set up boot system:

15:14:48 drakboot[5305]: ### Program is starting ### 
15:14:49 drakboot[5305]: running: dmraid -s -c -c 
15:14:49 drakboot[5305]: running: dmraid -d -s -c -c 
15:14:49 drakboot[5305]: running: dmraid -r -c -c 
15:14:49 drakboot[5305]: running: blkid -o udev -p /dev/sda 
15:14:49 drakboot[5305]: running: blkid -o udev -p /dev/sda 
15:14:49 drakboot[5305]: running: blkid -o udev -p /dev/sda1 
15:14:49 drakboot[5305]: running: blkid -o udev -p /dev/sda5 
15:14:49 drakboot[5305]: running: blkid -o udev -p /dev/sda6 
15:14:49 drakboot[5305]: running: blkid -o udev -p /dev/sda7 
15:14:49 drakboot[5305]: running: blkid -o udev -p /dev/sda8 
15:14:49 drakboot[5305]: running: blkid -o udev -p /dev/sdb 
15:14:49 drakboot[5305]: running: blkid -o udev -p /dev/sdb 
15:14:49 drakboot[5305]: ### Program is exiting ###


Seeing that in every instance the failure happens right after /dev/sdb is probed, I searched for ways to pin down this issue.

-
Using fdisk, I receive the following info:


Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *          63   211800959   105900448+  83  Linux
Partition 1 does not start on physical sector boundary.
/dev/sda2       211800960   976768064   382483552+   5  Extended
/dev/sda5       211801023   280992914    34595946   82  Linux swap / Solaris
Partition 5 does not start on physical sector boundary.
/dev/sda6       280992978   513774764   116390893+  83  Linux
Partition 6 does not start on physical sector boundary.
/dev/sda7       513774828   746556614   116390893+  83  Linux
Partition 7 does not start on physical sector boundary.
/dev/sda8       746556678   976768064   115105693+  83  Linux
Partition 8 does not start on physical sector boundary.

Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *          63  2783743199  1391871568+  83  Linux

Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *          63  2783743199  1391871568+  83  Linux

Disk /dev/md126: 1425.3 GB, 1425283219456 bytes
255 heads, 63 sectors/track, 173280 cylinders, total 2783756288 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

      Device Boot      Start         End      Blocks   Id  System
/dev/md126p1   *          63  2783743199  1391871568+  83  Linux


-
If I run blkid, I receive the following:


/dev/sda1: UUID="34e6188c-ab85-4090-b26d-c4e9f95e0540" TYPE="ext4"
/dev/sda5: UUID="dfbb0f85-ea5b-4c11-9adc-7a0a1f517430" TYPE="swap"
/dev/sda6: UUID="3e580035-89ff-4e79-8f39-aee636f81fde" TYPE="ext4"
/dev/sda7: UUID="adafe20e-345a-4801-a60a-aaabb9b3dae2" TYPE="ext4"
/dev/sda8: UUID="1063ea7d-6d1c-4982-ab01-9892afbb1375" TYPE="ext4"
/dev/md126p1: UUID="3c2ea997-d173-479f-8f74-46625156d829" TYPE="ext4"
/dev/sdb: TYPE="isw_raid_member"


-
If I run blkid -o udev -p /dev/sdb directly from the console (the command after which the MCC tools exit), I receive:


ID_FS_VERSION=1.1.00
ID_FS_TYPE=isw_raid_member
ID_FS_USAGE=raid
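
For what it's worth, drakxtools drives these probes from Perl, so I sketched how a wrapper along the lines of fs::type::call_blkid presumably parses this output (an illustrative sketch only, not the actual libDrakX source):

#!/usr/bin/perl
use strict;
use warnings;

# Parse `blkid -o udev -p <device>` output into a KEY => value hash,
# roughly the way a wrapper such as fs::type::call_blkid might
# (illustrative sketch, not the actual libDrakX code).
sub probe_device {
    my ($dev) = @_;
    my %info;
    open(my $fh, '-|', 'blkid', '-o', 'udev', '-p', $dev)
        or die "cannot run blkid: $!";
    while (my $line = <$fh>) {
        chomp $line;
        my ($key, $val) = split /=/, $line, 2;
        $info{$key} = $val if defined $val;
    }
    close $fh;
    return \%info;
}

# For /dev/sdb the hash holds only ID_FS_VERSION, ID_FS_TYPE and
# ID_FS_USAGE: no ID_FS_UUID, since sdb is a raw isw_raid_member.
my $sdb = probe_device('/dev/sdb');
print "$_=$sdb->{$_}\n" for sort keys %$sdb;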


-
My blkid.tab is:


<device DEVNO="0x0801" TIME="1333048723.623318" UUID="34e6188c-ab85-4090-b26d-c4e9f95e0540" TYPE="ext4">/dev/sda1</device>
<device DEVNO="0x0805" TIME="1333048723.623331" UUID="dfbb0f85-ea5b-4c11-9adc-7a0a1f517430" TYPE="swap">/dev/sda5</device>
<device DEVNO="0x0806" TIME="1333048723.623339" UUID="3e580035-89ff-4e79-8f39-aee636f81fde" TYPE="ext4">/dev/sda6</device>
<device DEVNO="0x0807" TIME="1333048723.623346" UUID="adafe20e-345a-4801-a60a-aaabb9b3dae2" TYPE="ext4">/dev/sda7</device>
<device DEVNO="0x0808" TIME="1333048723.623352" UUID="1063ea7d-6d1c-4982-ab01-9892afbb1375" TYPE="ext4">/dev/sda8</device>
<device DEVNO="0x10300" TIME="1333048723.796731" PRI="10" UUID="3c2ea997-d173-479f-8f74-46625156d829" TYPE="ext4">/dev/md126p1</device>
<device DEVNO="0x0810" TIME="1333048723.640353" TYPE="isw_raid_member">/dev/sdb</device>


-
My fstab is:


none /proc proc defaults 0 0
UUID=34e6188c-ab85-4090-b26d-c4e9f95e0540 / ext4 acl,relatime 1 1
UUID=dfbb0f85-ea5b-4c11-9adc-7a0a1f517430 swap swap defaults 0 0
UUID=3e580035-89ff-4e79-8f39-aee636f81fde /usr ext4 acl,relatime 1 2
UUID=adafe20e-345a-4801-a60a-aaabb9b3dae2 /var ext4 acl,relatime 1 2
UUID=1063ea7d-6d1c-4982-ab01-9892afbb1375 /home ext4 acl,relatime 1 2
/dev/md126p1 /data ext4 acl,relatime,users,exec,suid,dev,rw,umask=000 1 2
/dev/cdrom /media/cdrom auto umask=0,users,iocharset=utf8,noauto,ro,exec 0 0



I made sure my media sources were fully configured and checked software updates, but found no updates for dmraid, blkid, or diskdrake. My searches to date have turned up issues with the same error message relating to NTFS drives not being mounted correctly, but none specifically calling out a RAID environment as the problem.

Apparently there is a problem in the diskdrake scripts that causes them to bomb out, even though the underlying commands can be run directly in the console and return no errors.
Remco Rijnders 2012-03-30 07:44:57 CEST

Assignee: bugsquad => thierry.vignaud

Comment 1 Thierry Vignaud 2012-03-30 10:28:04 CEST
Have you tried running diskdrake from the command line?
Is there any message?

CC: (none) => pterjan
Source RPM: drakconf-12.21.9-2.mga1; drakxtools-13.58-1.mga1 => drakxtools-13.58-1.mga1

Comment 2 Pascal Terjan 2012-03-30 11:50:47 CEST
It may be related to the other dmraid bug (https://bugs.mageia.org/show_bug.cgi?id=4750#c24).
You are using /dev/md126p1, so not dmraid.
Comment 3 A B 2012-03-30 17:15:21 CEST
Running diskdrake from the command line as root produces
the following error message:
 
 
INTERNAL ERROR: unknown device sdb1
MDK::Common::Various::internal_error() called from /usr/lib/libDrakX/devices.pm:186
devices::entry() called from /usr/lib/libDrakX/devices.pm:201
devices::make() called from /usr/lib/libDrakX/fs/type.pm:279
fs::type::call_blkid() called from /usr/lib/libDrakX/fs/type.pm:287
fs::type::type_subpart_from_magic() called from /usr/lib/libDrakX/fsedit.pm:271
fsedit::get_hds() called from /usr/sbin/diskdrake:74


In addition, I am currently identifying the drive via /dev/md126p1.

I have also used the UUID assigned to it in /dev and received the same script crash and error messages.
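
Reading the trace, devices::entry() in /usr/lib/libDrakX/devices.pm apparently falls through its device-name handling and raises internal_error() for sdb1: fdisk sees a partition table on the raw member /dev/sdb, but that partition is only exposed through the assembled array as /dev/md126p1. A minimal Perl sketch of that failure mode (my illustrative reconstruction, not the actual devices.pm code):

#!/usr/bin/perl
use strict;
use warnings;

# Illustrative reconstruction of the failure in the trace above;
# the real logic lives in devices::entry()/devices::make().
sub entry {
    my ($dev) = @_;
    # Assumption for this sketch: only names with an existing node
    # under /dev can be resolved, which sdb1 does not have here.
    return "/dev/$dev" if -e "/dev/$dev";
    die "INTERNAL ERROR: unknown device $dev\n";
}

print entry('md126p1'), "\n";   # resolves fine
print entry('sdb1'), "\n";      # dies, matching the diskdrake crash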
Comment 4 Marja Van Waes 2012-07-06 15:03:35 CEST
Please look at the bottom of this mail to see whether you're the assignee of this bug, if you don't already know.


If you're the assignee:

We'd like to know for sure whether this bug was assigned correctly. Please change status to ASSIGNED if it is, or put OK on the whiteboard instead.

If you don't have a clue and don't see a way to find out, then please put NEEDHELP on the whiteboard.

Please assign back to Bug Squad or to the correct person to solve this bug if we were wrong to assign it to you, and explain why.

Thanks :)

**************************** 

@ the reporter and persons in the cc of this bug:

If you have any new information that wasn't given before (like this bug being valid for another version of Mageia, too, or it being solved) please tell us.

@ the reporter of this bug

If you didn't reply yet to a request for more information, please do so within two weeks from now.

Thanks all :-D
Comment 5 Thierry Vignaud 2012-09-20 14:47:52 CEST
Can you try on Mageia 2?

Summary: MCC - Partition Manager crashes out when system has mirrored RAID => Partition Manager crashes out when system has mirrored RAID

Comment 6 Manuel Hiebel 2012-11-05 16:51:15 CET
This message is a reminder that Mageia 1 is nearing its end of life. 
In approximately 25 days from now, Mageia will stop maintaining and issuing 
updates for Mageia 1. At that time this bug will be closed as WONTFIX (EOL) if it 
remains open with a Mageia 'version' of '1'.

Package Maintainer: If you wish for this bug to remain open because you plan to 
fix it in a currently maintained version, simply change the 'version' to a later 
Mageia version prior to Mageia 1's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that we may not 
be able to fix it before Mageia 1 is end of life.  If you would still like to see 
this bug fixed and are able to reproduce it against a later version of Mageia, 
you are encouraged to click on "Version" and change it to that version 
of Mageia.

Although we aim to fix as many bugs as possible during every release's lifetime, 
sometimes those efforts are overtaken by events. Often a more recent Mageia 
release includes newer upstream software that fixes bugs or makes them obsolete.

--
Mageia Bugsquad
Comment 7 Manuel Hiebel 2012-12-02 14:31:18 CET
Mageia 1 changed to end-of-life (EOL) status on 1st December. Mageia 1 is no 
longer maintained, which means that it will not receive any further security or 
bug fix updates. As a result we are closing this bug. 

If you can reproduce this bug against a currently maintained version of Mageia, 
please feel free to click on "Version", change it to that version of Mageia, and reopen this bug.

Thank you for reporting this bug and we are sorry it could not be fixed.

--
Mageia Bugsquad

Status: NEW => RESOLVED
Resolution: (none) => WONTFIX

Vladimir Zawalinski 2014-10-19 11:55:00 CEST

Blocks: (none) => 14330

