Description of problem:
I'm trying to install Mageia 3 beta 3 on a blank new PC. I create a RAID1 volume in the AHCI BIOS and boot the installer. The RAID is inactive and Mageia proposes installation on the independent disks.

If I go to console mode, to activate the RAID I have to run:
modprobe raid1
mdadm --stop /dev/md126
mdadm --stop /dev/md127
mdadm -A --scan

And even if I do this at the license stage, the installer still wants to install on the individual disks.

Reproducible:

Steps to Reproduce:
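For reference, the manual activation steps above as a console sketch (the md126/md127 device names are from my machine and may differ on yours; check /proc/mdstat first):

```
# Run from the installer's text console (Ctrl+Alt+F2), as root.
modprobe raid1                 # make sure the raid1 personality is loaded
cat /proc/mdstat               # see which md devices exist and their state
mdadm --stop /dev/md126        # stop the inactive/half-assembled arrays
mdadm --stop /dev/md127
mdadm --assemble --scan        # reassemble from the on-disk metadata
cat /proc/mdstat               # the array should now show as active
```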
CC: (none) => pterjan, thierry.vignaud, tmb
please attach your /root/drakx/report.bug.xz
Keywords: (none) => NEEDINFO
The bug is now old. I've installed Mageia on an external hard disk and copied the installation back onto the AHCI RAID disk. I have no report.bug.xz in /root/drakx (I have a ddebug.log and an install.log). Should I relaunch an installation from the DVD and see if I have a report.bug.xz in the ramdisk?
Created attachment 3693 [details] content of /root/drakx directory
You can start the mga3 beta4 install up to the point where it detects the hard drives. Once it offers to install on individual disks:
- plug in a USB key
- go to the second text console (Ctrl+Alt+F2)
- run the "bug" command, then attach the report.bug.xz file found on that key here

From the files you've attached, dmraid is run but we see no VGs (fsedit::handle_dmraid()). I thought we'd fixed something related to this, but fs/dmraid.pm hasn't been altered since the mga fork besides whitespace fixes.

The following shows that _raid_devices_raw() saw something:
* got: /dev/sdb:isw:isw_cgiadebdfb:GROUP:ok:976773166:0
* using isw_cgiadebdfb_Volume0 instead of isw_cgiadebdfb
* got: /dev/sda:isw:isw_cgiadebdfb:GROUP:ok:976773166:0
* using isw_cgiadebdfb_Volume0 instead of isw_cgiadebdfb

but there was nothing when we returned back from this chain: vgs() -> _sets() -> _raid_devices() -> _raid_devices_raw(), according to this:
* dmraid:
* using dmraid on

Which hints to me that maybe we have an issue with using udev:
#- device should exist, created by dmraid(8) using libdevmapper
#- if it doesn't, we suppose it's not in use
if_(-e "/dev/$dev", $vg);

Sadly, I've no HW to test on, so it would be better handled by Pascal, Colin or Thomas... (adding dumper + logs where appropriate)
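For anyone reading the logs: the records that _raid_devices_raw() consumes come from `dmraid -r -c -c` and are colon-separated. A quick shell sketch of splitting those fields (the sample line is taken verbatim from the log above; the field names are my reading of it, not official dmraid documentation):

```shell
# Apparent field layout of a `dmraid -r -c -c` record:
#   device : format : set_name : type : status : sectors : extra
line='/dev/sdb:isw:isw_cgiadebdfb:GROUP:ok:976773166:0'
dev=$(echo "$line" | cut -d: -f1)      # /dev/sdb
fmt=$(echo "$line" | cut -d: -f2)      # isw (Intel Software RAID metadata)
set=$(echo "$line" | cut -d: -f3)      # isw_cgiadebdfb
status=$(echo "$line" | cut -d: -f5)   # ok
echo "device=$dev format=$fmt set=$set status=$status"
```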
CC: (none) => mageia
Such as: maybe we should issue a call to "udevadm settle"? WDYT, Colin?
Created attachment 3736 [details] bug.log with beta 4

I've retried with beta 4. The bug is different: the md devices are started, but the installer sees only the individual disks.
Could you try a network install with the latest boot.iso? e.g. boot.iso from http://distrib-coffee.ipsl.jussieu.fr/pub/linux/Mageia/distrib/cauldron/x86_64/install/images/ We should get better logs.
Created attachment 3746 [details] report.bug with boot.iso from 13-Apr-2013 20:44

(The installer still wants to install on the separate hard disks)
BTW, we don't see any driver for it in the lspcidrake output (because ahci is now built-in).

As I supposed in comment #4, the RAID is ignored because the device doesn't exist:
* got: DEBUG: isw metadata found at 500107860992 from probe at 500107860992
* got: DEBUG: isw metadata found at 500107860992 from probe at 500107860992
(...)
* got: DEBUG: _find_set: searching isw_cgiadebdfb
* got: DEBUG: _find_set: found isw_cgiadebdfb
* got: DEBUG: _find_set: searching isw_cgiadebdfb_Volume0
* got: DEBUG: _find_set: searching isw_cgiadebdfb_Volume0
* got: DEBUG: _find_set: found isw_cgiadebdfb_Volume0
* got: DEBUG: _find_set: found isw_cgiadebdfb_Volume0
* got: DEBUG: set status of set "isw_cgiadebdfb_Volume0" to 16
* got: isw_cgiadebdfb_Volume0:927985920:128:mirror:ok:0:2:0
* got: DEBUG: freeing devices of RAID set "isw_cgiadebdfb_Volume0"
* got: DEBUG: freeing device "isw_cgiadebdfb_Volume0", path "/dev/sda"
* /dev/sda => isw_cgiadebdfb_Volume0
* got: DEBUG: freeing device "isw_cgiadebdfb_Volume0", path "/dev/sdb"
* /dev/sdb => isw_cgiadebdfb_Volume0
* got: DEBUG: freeing devices of RAID set "isw_cgiadebdfb"
* got: DEBUG: freeing device "isw_cgiadebdfb", path "/dev/sda"
* got: DEBUG: freeing device "isw_cgiadebdfb", path "/dev/sdb"
* running: dmraid -r -c -c
* got: /dev/sdb:isw:isw_cgiadebdfb:GROUP:ok:976773166:0
* got: /dev/sda:isw:isw_cgiadebdfb:GROUP:ok:976773166:0
* ignoring mapper/isw_cgiadebdfb_Volume0 as /dev/mapper/isw_cgiadebdfb_Volume0 doesn't exist

Colin, Thomas: do you think adding a call to "udevadm settle" just after running dmraid could help?

Fabrice: could you try again, booting with "linux rd.md=0" from boot.iso? If that doesn't work, can you try with "linux rd.dm=0"?
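The suspected race, sketched as the equivalent console sequence (the set name isw_cgiadebdfb_Volume0 is from this machine's logs; the idea is just to wait for udev to finish creating the /dev/mapper nodes before testing for them):

```
dmraid -ay                      # activate the RAID sets via libdevmapper
udevadm settle --timeout=30     # block until udev has processed pending events
                                # (i.e. created the /dev/mapper/* nodes)
if [ -e /dev/mapper/isw_cgiadebdfb_Volume0 ]; then
    echo "volume node present"  # without settle, this check can race udev and fail
fi
```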
Priority: Normal => release_blocker
Thomas: in bug #9440, you said "So I guess we should filter both isw and ddf and leave them to mdadm as dmraid has not really been maintained for a very long time". Have you had time to test? It's the same issue here...
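On the mdadm side, claiming the isw (IMSM) and ddf metadata formats can be expressed with an AUTO line in mdadm.conf; a sketch, assuming the installer ships such a file (the exact policy line is my suggestion, not what drakx currently does):

```
# /etc/mdadm.conf fragment (sketch):
# auto-assemble Intel IMSM and DDF BIOS-RAID containers plus native
# 1.x-metadata arrays, and refuse everything else.
AUTO +imsm +ddf +1.x -all
```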
I will try this evening. But why does the installer search for /dev/mapper/isw_cgiadebdfb_Volume0 on an Intel AHCI RAID? dmraid was replaced by mdadm a long time ago (I even filed some bug reports: https://bugs.mageia.org/show_bug.cgi?id=4750 )
OK, I've tried boot.iso with "linux rd.md=0". The installer proposes to activate the RAID and to upgrade the distribution. But it works by enabling dmraid instead of mdadm; I don't think that is a good solution.
I don't really get this one: is this like a fakeraid (AHCI RAID in the BIOS) and at the same time dmraid/mdadm software RAID? This sounds conflicting to me...
CC: (none) => alien
AHCI RAID is a pure software RAID, but with some BIOS support (initialisation, booting from the RAID...). On an Intel AHCI controller, dmraid is deprecated and mdadm should be used.
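What mdadm-based handling of an Intel BIOS RAID looks like in practice (a sketch; the md126/md127 names are the typical defaults, not guaranteed): mdadm sees the IMSM metadata as a container device, with the actual RAID1 volume nested inside it.

```
mdadm --examine --scan     # should report one metadata=imsm container
                           # plus the volume defined inside it
mdadm --assemble --scan    # assembles container (often md127) and volume (often md126)
cat /proc/mdstat           # the volume appears as raid1 with "external:imsm" metadata
```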
I guess this is just scary to me that you define a RAID in the BIOS AND as software RAID at the same time... I've always just disabled fakeraid and used the disks directly in software RAID. IMHO, there's rarely any advantage...
I really don't see where the problem is. You have a purely software RAID that is seen by the BIOS. There is no performance penalty. It allows you to have the bootloader and /boot replicated on each hard disk, and to boot even if the first hard disk is down. And it will work if you dual-boot with Windows.
Sorry, I didn't get around to testing this, will do so tomorrow
Status: NEW => ASSIGNED
Assignee: bugsquad => tmb
CC: (none) => fri
Priority: release_blocker => High
Version: Cauldron => 3
@tmb: please check if this needs to stay open or could be closed, thanks.
Mageia 3 changed to end-of-life (EOL) status 4 months ago.
http://blog.mageia.org/en/2014/11/26/lets-say-goodbye-to-mageia-3/

Mageia 3 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of Mageia, please feel free to click on "Version", change it to that version of Mageia, and reopen this bug.

Thank you for reporting this bug, and we are sorry it could not be fixed.

--
The Mageia Bugsquad
Status: ASSIGNED => RESOLVED
Resolution: (none) => OLD
I've tried with the latest Mageia 5 RC. Intel RAID AHCI, EFI boot. The installer mounts /dev/mapper/isw_..., which is a bad choice with an Intel chipset. The disk is seen, but when I try to make partitions, I get errors:

failed to add partition #1 on /dev/mapper/isw...

So it still does not work.
Status: RESOLVED => REOPENED
Version: 3 => Cauldron
Resolution: OLD => (none)
Created attachment 6612 [details] report.bug of latest installation
Keywords: NEEDINFO => (none)
Please don't reopen this bug. The original issue (presenting individual disks instead of the RAID) is fixed. We don't yet support adding partitions, which is tracked by bug #15665.

*** This bug has been marked as a duplicate of bug 11105 ***
Keywords: (none) => NEEDINFO
Status: REOPENED => RESOLVED
Resolution: (none) => DUPLICATE
OK. I made the partition outside the installer and asked Mageia to use it. Installation works. On reboot:

mount: /dev/sda2 is already mounted or sysroot busy
When I launch rescue mode, "mount your partition under /mnt" does not activate the RAID... In console mode, "dmraid -ay" activates the RAID, but I need partprobe to see the sub-partitions.

NB: blkid gives the same UUID for /dev/sda2, /dev/sdb2 and /dev/mapper/isw_..._Part2; only the PARTUUID differs, and in /boot/grub2/grub.cfg, root was filled with the UUID.
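The ambiguity can be illustrated on blkid-style output: all three nodes carry the same filesystem UUID (RAID1 mirrors share it), so a UUID-based root= in grub.cfg is ambiguous; only the PARTUUID distinguishes the devices. All sample values below are made up for illustration, including the device-mapper name:

```shell
# Fake blkid-style output: a RAID1 mirror where both legs and the
# assembled device all expose the same filesystem UUID.
blkid_out='/dev/sda2: UUID="1111-aaaa" PARTUUID="p-sda2"
/dev/sdb2: UUID="1111-aaaa" PARTUUID="p-sdb2"
/dev/mapper/isw_example_Part2: UUID="1111-aaaa" PARTUUID="p-dm2"'

# Three devices match the same filesystem UUID -> root=UUID=... is ambiguous.
echo "$blkid_out" | grep -c 'UUID="1111-aaaa"'

# But the PARTUUIDs are all distinct, so they can tell the devices apart.
echo "$blkid_out" | grep -o 'PARTUUID="[^"]*"' | sort -u
```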