Description of problem:
RAID (DDF/ISW) broken for new installs
Version-Release number of selected component (if applicable):
How reproducible:
On a real RAID machine or in a VM.
Here is how to create such a RAID (DDF):
1. Start a LiveCD in a VM with 2 disks.
2. mdadm -C /dev/md/ddf -e ddf -l container -n 2 /dev/sda /dev/sdb
   mdadm -C /dev/md/array1 -l raid1 -n 2 /dev/md/ddf
3. Reboot into the installer.
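The steps above can be sketched as one script. This is a minimal sketch assuming the two scratch devices /dev/sda and /dev/sdb from the VM in step 1; the ALLOW_WIPE guard is my addition so the destructive mdadm calls never run by accident.

```shell
#!/bin/sh
# Reproduction sketch for the DDF case. WARNING: the mdadm calls below
# overwrite /dev/sda and /dev/sdb; the guard keeps them from running
# unless you are root and have explicitly set ALLOW_WIPE=yes.
if [ "$(id -u)" -eq 0 ] && [ "$ALLOW_WIPE" = "yes" ]; then
    # Step 2a: create a DDF container spanning both scratch disks.
    mdadm --create /dev/md/ddf --metadata=ddf --level=container \
          --raid-devices=2 /dev/sda /dev/sdb
    # Step 2b: create a RAID1 array inside the container.
    mdadm --create /dev/md/array1 --level=raid1 --raid-devices=2 /dev/md/ddf
    # Confirm the container and array are assembled before rebooting
    # into the installer (step 3).
    mdadm --examine --scan
    status="created"
else
    status="skipped"
    echo "Refusing to touch disks: run as root with ALLOW_WIPE=yes inside a throwaway VM."
fi
```

After rebooting into the installer, the partitioning step should then show only the assembled array, not the raw member disks.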
I tested all the Mageia releases; here are my results:
Mageia 1: OK, installs on RAID and boots.
Mageia 2, 3, 4: Does not install (claims the partition table is corrupted).
On my physical machine with ISW RAID I have the same issue.
Steps to Reproduce:
*** Bug 11139 has been marked as a duplicate of this bug. ***
Patch fixing the bug (needs testing; in progress here).
Tested in my mga4 custom stage2 (stage2 + the patch):
no more problem, the RAID is correctly detected.
In mga1 the partitioning step shows only the RAID.
Without the patch, mga4a2 shows 3 disks at the partitioning step: sda, sdb, and the RAID.
With the patch, mga4a2 shows only the RAID at the partitioning step.
I simplified your patch. The issue was a simple typo.
Please test again.
I just saw your commit, thanks a lot.
I am starting tests now.
Install works as expected, but it doesn't boot.
I tested a fresh install on my RAID system this morning; it doesn't work. dmraid with Silicon Image.
What is "Silicon Image"?
On the motherboard I have two controllers:
Ping? How can we move forward on this one? Has anyone tested it with the latest ISOs?
I think other reports show it's fixed.
Though it would be nice if reporters of _this_ bug report could confirm it.
I will test this week.
It is not better at all: now the installer does not see the RAID but shows me 2 hard disks.
Please attach the corresponding report.bug.xz
I'm testing tonight. It works with NVIDIA and Silicon Image system RAID.
Possibly we can expect more information from Nicolas; he doesn't have the same hardware configuration.
*** Bug 9684 has been marked as a duplicate of this bug. ***
*** Bug 9910 has been marked as a duplicate of this bug. ***
*** Bug 7505 has been marked as a duplicate of this bug. ***
(In reply to Nicolas Lécureuil from comment #0)
> Description of problem:
> RAID ( ddf / isw ) broken for new install
> Version-Release number of selected component (if applicable):
> How reproducible:
> on a real RAID machine or in a VM
> here how to create such a RAID ( ddf )
> 1.Start a LiveCD on a VM with 2 disks
> 2.mdadm -C /dev/md/ddf -e ddf -l container -n 2 /dev/sda /dev/sdb
> mdadm -C /dev/md/array1 -l raid1 -n 2 /dev/md/ddf
> 3.reboot on the installer
> I tested all the mageia and here my result:
> Mageia 1: OK install on RAID and boot
> Mageia 2, 3, 4: Doesn't install ( tell the partition table is corrupted )
> on my physical machine with ISW RAID i have the same issue
> Steps to Reproduce:
I have seen the error message 'the partition table is corrupted', many times.
It is a bug, but you can continue anyway, select a partition for install, and the array eventually gets detected.
Quite scary to continue, but it hasn't destroyed my data.
Tested on my RAID0 physical machine: impossible to install, still with the same error.
Can you attach the report.bug.xz that tv requested, please?
Re comment 20: the text shown in comment 20 is the same as the text of comment 0. However, I did encounter similar issues in a UEFI context, which I reported in bug 14330 for MGA5-live alpha 2. Briefly, MGA5 seems to support recognition and mounting of isw_ arrays once live-booted, but because drakinstall-live <- diskdrake encounters errors, it is impossible to install, either through the classical installer with UEFI enabled (bug 13471) or through the live install (errors raised in bug 14330). Please refer there for test diagnostic outputs.
Afterthought to comment 25: dmraid handles the ISW RAID correctly, yet mdadm reports an error?
will look into this...
A RAID 1 with two drives has been created in the BIOS (Intel RSTe Oprom RAID). This RAID (/dev/mapper/isw_*) is correctly detected with gparted on a LiveDVD (KNOPPIX). Calling "mdadm --examine --scan" in a terminal on KNOPPIX shows the correct assembly of drives. However, this RAID is not detected during the hardware scanning procedure in the Mageia installer. And later - during partitioning - you cannot select this RAID. But the two drives of the RAID are visible.
Markus, I find that after installation is completed, typing "dmraid -ay" in the new MGA and rebooting shows md126p1... in /dev. Mounting that partition allows access to that RAID array. The only partitioner that shows no errors relating to RAID is GNOME Disks. GParted does some things and errors out on others.
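For anyone comparing the two layers discussed here (dmraid vs. mdadm), a minimal read-only diagnostic sketch follows; it assumes neither tool is necessarily installed and nothing in it writes to disk:

```shell
#!/bin/sh
# Read-only diagnostics: see which layer (kernel md, dmraid, mdadm)
# currently claims the array. None of these commands modify anything.

echo "== kernel md status =="
cat /proc/mdstat 2>/dev/null || echo "(md driver not loaded)"

echo "== dmraid sets =="
if command -v dmraid >/dev/null 2>&1; then
    dmraid -s 2>/dev/null || echo "(no dmraid sets found)"
else
    echo "(dmraid not installed)"
fi

echo "== mdadm scan =="
if command -v mdadm >/dev/null 2>&1; then
    mdadm --examine --scan 2>/dev/null || echo "(nothing found or insufficient permissions)"
else
    echo "(mdadm not installed)"
fi

checks_done=3
```

If dmraid lists a set (e.g. isw_*) while mdadm reports nothing, or vice versa, that mismatch is worth including in the bug report.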
The installation process where isw_ RAID pre-exists is a minefield. Naturally, answer no to the corrupted-partitions message, use only custom partitioning, and define your installation partitions manually. Auto-allocate can propose partitions on your RAID disks, while 'erase and use disk' can do this without warning, destroying the array and causing problems in fstab as well.
Please let me know how you get on.
Could someone affected attach report.bug after seeing the bug, by attaching a USB key and then typing the "bug" command on tty2?
Here is the report.bug, taken after aborting the installer at the partitioning stage. Please advise if you require the full version, but this one shows the offending "Invalid argument during seek for read on /dev/sdb", which crashes various partitioners. Note that only the RAID member disk appeared in the drakx partitioner, exactly as in comment 28 above.
Created attachment 6180 [details]
report.bug for drakx 16.75
After the message "Partition of sdb is too corrupted ..." and just before committing the partitioner.
Comment on attachment 6180 [details]
report.bug for drakx 16.75
The corrupted message is not logged in report.bug, contrary to what I said.
However, 'unknown partition table' for disks sdb and sdc occurs many times, especially around 15.698660 - and this is at the root of every problem in this area.
Please do not mix the issues.
The original bug was that the installer wasn't detecting RAID disks, which was fixed in 2013; then it was hijacked for the failure to boot (which was fixed in dmraid+dracut).
I didn't know that the original bug had been fixed in 2013. There was no intent to hijack the agenda, but I can see that the two situations are different.
*** Bug 9467 has been marked as a duplicate of this bug. ***
(In reply to Thierry Vignaud from comment #34)
> Please do not mix the issues.
> The original bug was installer wasn't detecting raid disks which was fixed
> in 2013 then it was highjacked for the faillure to boot (which was fixed in
Perhaps it was fixed in 2013. However, the bug *is* present in Mageia 4, which came out in 2014. The installer has been unable to detect my RAID 5 array many times.
(In reply to jeff deifik from comment #37)
> (In reply to Thierry Vignaud from comment #34)
> > Please do not mix the issues.
> > The original bug was installer wasn't detecting raid disks which was fixed
> > in 2013 then it was highjacked for the faillure to boot (which was fixed in
> > dmraid+dracut)
> Perhaps it was fixed in 2013. However, the bug *is* present in mageia 4,
> which came out in 2014. I have had the installer be unable to detect my raid
> 5 array many times.
Does it still happen with Mageia 5 RC?
I have no idea. I have not tried 5 RC.
This bug is open against Cauldron and is fixed in Cauldron (future Mga5).
Mga4 installer is frozen and cannot be updated.
This bug exists with Mageia 5 64-bit.
I did a fresh install.
My 7-disk RAID 5 array was not detected.
When GRUB was installed, it was "cleverly" installed on /dev/sda, which was part of my RAID array, destroying the array.
GRUB should have been installed on the disk that I did the installation on, which was /dev/sdd.
What's the status on this bug report? It's marked as a blocker for Mageia 6, though it was presumably fixed for Mageia 3, then reopened, then reclosed, then reopened...
Thierry, could you assess comment 41 and see if it warrants reopening this issue, or if it should be moved to a new bug report?
Decreasing priority until this is reviewed.
This is an old bug, fixed in mga3, that was reopened several times; the new issue(s) should probably be moved to new bug report(s).