Description of problem:
(To save space here, see the description late in closed bug 13592 for how I set up RAID 0 on this UEFI ASUS UX301 dual-SSD laptop.)

The problem during install was that, having partitioned the RAID0 drive, it then failed:
'failed to add partition #5 on /dev/mapper/isw_bbacjebecf_vol0'

I have an initial 300MB EFI partition, then 3 further partitions of around 80GB each. My attempted partitioning had been to next have a large encrypted partition, then swap at the end of the RAID0. I got it to install and reboot having just the EFI and the 3 further partitions of 80GB each, followed by empty disk space with no swap. From the working system, I am still unable to create a swap or any other type of partition in the considerable spare space in RAID 0, volume 0.

Version-Release number of selected component (if applicable):

How reproducible: always

Steps to Reproduce: See the process I went through in bug 13592 comment 27, as I recall.
Whiteboard: (none) => M5 RC x86_64
We don't really support partitionable RAIDs yet
CC: (none) => pterjan, thierry.vignaud
Depends on: (none) => 15665
Ah well, it's maybe surprising how far I got then. Since then, some indirect new info of a sort.

1. I tried to install Win8.1 from an OEM memory stick that would accept my hidden-in-UEFI serial number. It could see the existing partitions within RAID vol0, but reported it was unable to install as it had some issue with what was there. I tried again: from Win8.1 I deleted all the partitions, then let it install as on a new system. That worked. This was a step forward, as previously Win8.1 still saw separate sda and sdb despite the RAID flag being set in UEFI. Something must have been written to the 2 disks by my Mageia 5 RC installation which allowed Win8.1 to install and work.

I've since run an M5 x86_64 UEFI install. Deleted a little Win8.1 partition of 128MB next to the 100MB EFI, and combined them as a bigger EFI. Note that the installer didn't renumber the partitions - so there was one missing. Rebooted the Classic DVD in rescue mode. By default it didn't recognise the RAID (had only 'control' in /dev/mapper), but after running 'dmraid -ay' there was then an isw_xxxxx_vol0 in there as well, and gdisk could access the Win8.1-created partitions inside vol0. Running 'sort' in gdisk renumbered the partitions (and may have thereby killed the Win8.1, but hey, it's just a test...).

Re-ran the M5 installation, and curiously this time, building on what Win8.1 had done, there was no trouble adding all the extra partitions I wanted during install (added 2 ext4s, a big linux native and a swap, with no issues). The new system rebooted and ran fine.

So, it looks as if there is something buggy in the way we set up partitions inside the RAID0 volume using Classic M5 RC. I'll take your advice that it hasn't been addressed so far, but we should probably keep this bug open, as the 'use case' of installing M5 into spare disk space on a pre-existing MS-Windows dual-HD RAID system is not unheard of.
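The rescue-mode fix described above (activate the fake-RAID set with 'dmraid -ay', then renumber with gdisk's 'sort') can be rehearsed safely on a throw-away image file using sgdisk, gdisk's scriptable sibling. A minimal sketch - it assumes sgdisk is installed, and the file name and sizes are illustrative only:

```shell
# Build a GPT with a gap in the numbering (partitions 1, 2 and 4),
# mimicking the hole left when a partition is deleted but not renumbered.
truncate -s 64M /tmp/gapdemo.img
sgdisk --clear --new=1:0:+8M --new=2:0:+8M --new=4:0:+8M /tmp/gapdemo.img

# gdisk's interactive 'sort' corresponds to sgdisk --sort:
# entries 1, 2, 4 become 1, 2, 3.
sgdisk --sort /tmp/gapdemo.img
sgdisk --print /tmp/gapdemo.img
```

On the real laptop the same renumbering was done interactively in gdisk against the /dev/mapper/isw_xxxxx_vol0 node that 'dmraid -ay' had created.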
This is still valid immediately pre-release.

For Errata: Installation onto a new bare RAID 0 system may have problems if you try to create a 5th partition, although it works normally with fewer partitions. The work-around is that the classical installer can happily work with pre-existing MS-Windows partitions within a RAID 0 drive, allowing manual resizing of the main Windows partition, deleting un-needed Windows partitions (e.g. a drive 'D:') and replacing them with linux partitions, at and above a total of 5 partitions. Note that MS-Windows must be completely and cleanly shut down first.
Whiteboard: M5 RC x86_64 => FOR ERRATA
Whiteboard: FOR ERRATA => FOR_ERRATA
That has nothing to do with the 5th partition, IMHO. The issue is with adding partitions to a partitioned array. WDYT, Thomas?
CC: (none) => tmb
Clarification: it manifests as an inability to create more than 4 partitions in a de-novo Mageia 5 RAID 0 array. There was no problem adding more than 4 partitions in an existing (created by MS-Windows) array.
... but comment 4 is probably valid in that this bug suggests the whole of our RAID 0 partitioning is buggy.
Partitioning on dmraid-managed RAID devices should work... It at least used to, back when I used them... Since it seems a specific <=4 is OK but the 5th fails, I wonder if we are actually doing the partitioning (or checking) in MBR format with only primary partitions, meaning we hit MBR's maximum of 4 primaries... And as it works on a pre-partitioned disc, where Windows does switch to extended partitioning when it hits >3 partitions, we can easily add partitions in the extended section.
Could you attach the installer log for that error?
Keywords: (none) => NEEDINFO
I'm sorry, that log is lost since I've re-installed after letting Windows set up the initial disk partitioning. At the moment I'm overseas with the 'test' laptop as my only machine, so will have to wait until mid June before I can assist further with this. Just remind me how to get the bug report at that early point in the install, before anything is written to disk? With thanks, Tony
Just plug in a USB key, go to tty2 and type "bug", and voilà. You then have a nice "report.bug" file on your USB key (or floppy...). So you can just start the installer, try to add a new partition without touching anything else, then run the "bug" command once the install has failed.
Added in Errata: https://wiki.mageia.org/en/Mageia_5_Errata#Partitionning_in_Raid_volume Remove it if the bug is solved.
CC: (none) => yves.brungard_mageia
Whiteboard: FOR_ERRATA => IN_ERRATA
Tony, anything further?
CC: (none) => nic
I see the ball's in my court. I've been delaying testing in M6 as I would have to rely on the good graces of the manufacturer to re-image the disks to recover. Has anything changed to improve my chances of success? Any hints on the fine detail of how to dd my current setup to aid recovery? (Have 2 RAID0 SSDs in a very fast laptop.)
(In reply to Tony Blackwell from comment #13)
> I see the ball's in my court. I've been delaying testing in M6 as would
> have to rely on good graces of manufacturer to re-image disks to recover.
> Has anything changed to improve my chances of success?
> Any hints on the fine detail of how to dd my current setup to aid recovery?
> (Have 2 RAID0 SSD's in very fast laptop).

Please ask on the dev ml (I don't have the slightest idea :-/).

Closing this report as OLD, since the needed information is still missing. Please reopen if the issue is still valid for Mageia 6sta1 or 6dev1 installs, and attach the needed logs.
Status: NEW => RESOLVED
CC: (none) => marja11
Resolution: (none) => OLD
I think it's still valid, though we need someone to give us logs when the bug happens, aka:
- plug in a USB key
- go to tty2
- run the "bug" command
  => this makes the installer dump its logs to the report.bug file on the USB key
- attach this report.bug file here (do not paste)
Status: RESOLVED => REOPENED
Resolution: OLD => (none)
(In reply to Thierry Vignaud from comment #15)
> I think it's still valid though we need someone to give us logs when the
> bug happens, aka:
> - plug in a USB key
> - go to tty2
> - run the "bug" command
> => this makes the installer dump its logs to the report.bug file
> on the USB key
> - attach this report.bug file here (not paste)

@Tony, could you do that?
Sync'ing RC now, will do it today
Created attachment 8175 [details] RAID 0 problems; early attempts

I busted up my RAID to test this. The results speak for themselves. At step 41 it's late - enough for tonight. RC is not ready.
See the attachment for what I actually did - too unwieldy to paste in here.
Priority: Normal => release_blocker
Blocks: (none) => 15527
Whiteboard: IN_ERRATA => 6RC
Created attachment 8176 [details] report.bug at failed partitioning
Created attachment 8182 [details] further progress on RAID install

See the long attached text file. Managed to get a working RAID0 install by first booting Classic x86_64 in recovery mode and setting up partitions manually with gdisk. Didn't see the 5th-partition error. I'll detour to see whether I can re-install Win8.1 at this point, then tear all the partitions down and try it all via just the installer, which is where the '5th partition' problem arose with M5. More news shortly!
Priority: release_blocker => High
Blocks: 15527 => (none)
Created attachment 8218 [details] report2.bug dump at time of triggering this bug

Have now conclusively demonstrated this bug for M6 RC:

HW: ASUS UX301L laptop, EFI, dual SSD each 256GB
Software: M6 x86_64 RC USB
Laptop starting state: Win 8.1 and M6 installed on RAID 0 array.

Test procedure:
1. Boot into EFI, change SATA mode selection from RAID0 to AHCI, save.
2. Reboot into M6 RC in rescue mode -> Console.
3. (ls /dev/mapper: just had the 'control' file.)
4. Wipe any left-over metadata from the disks: mdadm --zero-superblock /dev/sda
5. Same for sdb; both completed silently (meaning there _was_ some metadata there).
6. Reboot into EFI, reset SATA to RAID.
7. Reboot into M6 installation.
8. When it pauses at 'choose a language', go to the # prompt at F2.
9. Create a RAID container with 'mdadm -C /dev/md/imsm /dev/sd[a-b] -n 2 -e imsm'. Response: 'mdadm: /dev/sdb appears to be part of a raid array: level=raid0 devices=0 ctime=Thu Jan 1 00:00:00 1970' 'mdadm: metadata will over-write last partition on /dev/sdb. Continue creating array?'
10. y
11. Response: 'mdadm: container /dev/md/imsm prepared.'
12. Create the RAID 0 volume /dev/md/vol0 within this imsm container: mdadm -C /dev/md/vol0 /dev/md/imsm -n 2 -l 0
13. Response: 'mdadm: array /dev/md/vol0 started.'
14. See where we are at: 'mdadm -E -s -config=mdadm.conf > /etc/mdadm.conf', resulting file of the form:
    ARRAY metadata=imsm UUID=(the string for that UUID)
    ARRAY /dev/md/vol0 container=(same string as UUID) member=0 UUID=(different UUID string)
15. Can't go on at this point; it thinks the partition tables are corrupt, so reboot.
16. On reboot, restart the install from USB. This time, after accepting the licence, get popup 'BIOS software RAID detected on disks /dev/sdb /dev/sda. Activate it?'
17. y
18. Choose install (rather than upgrade the old M6 it sees after the RAID0 was rebuilt).
19. Realise this isn't going to test anything, as all the old RAID0 partitions are there.
20. Delete them all.
For test realism, reboot, activate the RAID array when prompted, choose custom disk partitioning.
21. Select mapper/isw_ggceccijf_vol0 as the RAID array to be partitioned.
22. It thinks there is a 4MB EFI system partition at the start of the disk; delete it.
23. Try to re-create something like the partitions that existed previously:
    - 400MB, type needs to be 2700, basic data partition. M6 install graphics don't seem to allow this level of choice; pick FAT32 for now.
    - 100MB, type EFI system partition, give it a mount point of /boot/EFI
    - 128MB, of type Compaq Diagnostics
    - 29.34GB, of type NTFS-3G (main Windows partition)
    - 100GB, of type ext4
24. Proceeded from there, accepted write-to-disk.
25. Failure; bug demonstrated: 'An error occurred. INTERNAL ERROR: unknown device mapper/isw_ggceccijif_vol0p5'. See the attached report2.bug.
Keywords: NEEDINFO => (none)
Assignee: bugsquad => pterjan
Anything I can do to help with this? This bug fell off the back of the stack with M5. I'm keeping the laptop devoid of anything useful at the moment, ready for re-testing of installation.
Keywords: (none) => 6RC
Whiteboard: 6RC => (none)
Assignee: pterjan => mageiatools
Sorry all, this feels like bug 15791 - Raid0 can't add 5th partition - but it isn't...?

H/W: ASUS Prime Z270-AR motherboard, Core i7 7700K CPU, 32GB RAM, Samsung NVMe SSD: 960 EVO M.2
UEFI install from external USB DVD, Classical x86_64, custom disk partitioning:
- accepted the 300MB EFI partition
- created 3 partitions of around 90GB for M6 and other future OSes, the first one as /
- created a 488GB linux native partition for a future encrypted partition
- left 161GB (17% of disk) un-partitioned for Samsung over-provisioning
- clicked on 'done'; msg: partitions are going to be written to disk
- msg: 'failed to add partition #5 on /dev/nvme0n1'

Drat! This is exactly the message I was seeing on RAID0 and have talked about repeatedly, but this is on a single M.2 SSD. There are going to be lots of these hardware platforms around soon. Release-stopper? I've added this to bug 15791, although that bug's title is probably no longer appropriate. Happy to accept the community's judgement re release-blocker status at this late, late stage. Tony
Keywords: 6sta1.5 => 6final
Severity: major => critical
Priority: High => release_blocker
> Deleting partition #5, so now have just EFI and 3 linux partitions, plus
> un-partitioned space, still results in the same #5 error.
>
> Even going back to 3 partitions fails with the #5 error; looks like a bug in
> the installer as well, if it's not correctly re-reading the currently-proposed
> partition table, which now has only 3 partitions in it.
>
> Same #5 error with only EFI and a single partition; the installer is clearly
> not re-reading what it's supposed to be creating. Presumably a re-boot and
> starting from scratch will get past this...
> Tony

Yes, it works as expected on starting again: a normal install process, starting with just EFI and a single 90GB partition. I suppose at this 'about to get it out there' stage the work-around is just to have fewer partitions, but it looks now as if there is an important bug in the installer, if it's not re-reading the currently-proposed partitions before trying to create them - as well as the 5th-partition bug not being restricted to partitions within RAID. Tony
Priority: release_blocker => High
Severity: critical => major
Severity: major => normal
Priority: High => Normal
It would probably be good to add a note about this in the Errata then replace the FOR_ERRATA6 keyword with IN_ERRATA6.
Keywords: (none) => FOR_ERRATA6
@Tony, please capture and attach a report.bug for this latest problem (immediately after you get the 'failed to add partition #5' message).
Keywords: (none) => NEEDINFO
CC: (none) => mageia
Ha! Fixed/resolved as at M7.1 x86_64. Yesterday with M7 it failed, but today with M7.1 I've created 9 partitions in dual-SSD RAID0, and successfully installed a bootable M7.1 in the 9th partition.

The only procedural difference compared with yesterday's failure in M7 is that yesterday I created 9 partitions in one go. Got an error message that it couldn't create the 9th partition. Deleted them one by one, still in the same session, but it kept complaining it couldn't create the 9th partition, even when I was down to only 4 partitions. Today I took it more slowly, creating 2 or 3 at a time from my booted RAID0 dual-SSD M7.1. Out of MCC and back in, got to 9 partitions, then re-ran a successful M7.1 install on the 9th partition.

Marking fixed, resolved. Tony
Resolution: (none) => FIXED
Status: REOPENED => RESOLVED