Description of problem: The M7 installer thinks the RAID data is corrupted and gets the size wrong.

Context: Core i7, M7 x86_64 classical install. 2x "6TB" spinning disks. Set up RAID0 using the Intel Rapid Storage Technology module (Intel RST 12.0.0.1783 SATA driver) from UEFI. This utility creates the RAID0 and reports a total capacity of 10.9TB.

During the M7 classical install, it asks to activate the RAID, but when I reach custom disk partitioning I get the message that the partitioning "is too corrupted for me". I agree to lose the partitioning data (in fact there shouldn't be any, having only just created the RAID0, although I have been mucking around with the disks over recent days). The graphical partitioning tool then shows device isw_xxxx_vol0 with size 2.9TB.

Tried going into rescue mode and, from the prompt, ran

  dmraid -r -E /dev/sda

which wiped that disk, breaking the RAID. Did the same for sdb. Tried re-creating the RAID from UEFI, expecting that my "dmraid -r -E" had cleared the old RAID metadata, but ended up in exactly the same position.

Tried the same rescue-mode cleaning, then from the rescue # prompt ran

  dmraid -f isw -C Main0 --type 0 --disk "/dev/sda /dev/sdb"
  dmraid -ay

and got a RAID0 array reporting itself at around 11TB, as expected. Rebooted into install mode; the M7 installation asks to activate the RAID, but still reports corrupted partitioning and only finds 2.9TB.

Summary:
- UEFI configuration appears to succeed.
- dmraid from the command line appears to succeed.
- The installation can't see the array correctly, and fails.

By way of contrast, on the same box I had just successfully installed 2 virgin Samsung 860 EVO SSDs in RAID0, and installation proceeded uneventfully (within the limits of a different failed-boot-after-installation RAID bug just fixed). This installation attempt was with those SSDs disconnected, only the 2 target drives in place.
Guess I should wipe the disks with dd; proposing to use:

  dd if=/dev/zero of=/dev/sda bs=1M

to see if that helps, but I would have expected that the M7 installation wouldn't be confused by any old stuff on the drive...
When reproducing your previous bug, I used a pair of disks that had been used in a software (mdraid) array. I needed to zero the first 1MB on each disk to wipe out the old RAID/partition data:

  dd if=/dev/zero bs=1M count=1 of=...
CC: (none) => mageia
...then used the BIOS to create the fake RAID array.
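For anyone following along, the metadata wipe described above can be sketched as below. This is a hedged illustration only: it runs against a scratch file rather than a real disk, since zeroing the first 1 MiB of /dev/sda or /dev/sdb destroys data, and you should triple-check the target device name before running anything like it for real.

```shell
# Sketch of the "zero the first 1 MiB" wipe, demonstrated on a scratch
# file. On the real system the target would be /dev/sda, then /dev/sdb.
# WARNING: this destroys whatever metadata lives at the start of the target.
TARGET=/tmp/fake-disk.img

# Create a 4 MiB stand-in "disk" full of non-zero bytes.
dd if=/dev/urandom of="$TARGET" bs=1M count=4 2>/dev/null

# The actual wipe: zero the first 1 MiB, where mdraid/partition
# metadata typically lives. conv=notrunc keeps the rest of the file.
dd if=/dev/zero of="$TARGET" bs=1M count=1 conv=notrunc 2>/dev/null

# Confirm the first 1 MiB is now all zeros (counts non-NUL bytes).
head -c 1048576 "$TARGET" | tr -d '\0' | wc -c   # prints 0
```

Note that mdraid can also store a superblock near the *end* of the device (depending on metadata version), which is presumably the "stuff at the other end of the disk" mentioned in the next comment; a `dd` with `seek=` near the end, or `wipefs -a`, would target that without zeroing the whole drive.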
Well, that would have been easier. Because of some stuff reportedly put at the other end of the disk as well, I'm zeroing the lot - but the first 6TB disk took 6 hours. The 2nd is now underway - guess I'll let it run. Thanks, Tony
Real bug, I'm afraid. Despite 2x 6 hours writing zeros to the entirety of each drive, and despite dmraid (from the M7 rescue) and the system's UEFI previously seeing the correct 10.9TB for the combined drive, the Mageia 7 classical install still thinks it's 2.9TB. I'd just used the UEFI to set up the RAID0, same as noted above, then went straight to install.
Here's a _really_ way-out idea. Is it just coincidence that the reported capacity of the 2-drive array is apparently about 1/2 the capacity of each drive in the array, rather than twice it?
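The sizes involved can be sanity-checked with a little shell arithmetic. This is just a sketch: the per-disk sector count is taken from the dmraid -r output quoted in the next comment, and a 512-byte logical sector (as gdisk reports for this array) is assumed.

```shell
# Convert the dmraid-reported per-disk sector count to TiB, and compute
# the expected RAID0 total. Sector count from "dmraid -r" for /dev/sda;
# 512-byte logical sectors assumed.
PER_DISK_SECTORS=11721045166
SECTOR_BYTES=512

# Capacity of one disk in TiB (1 TiB = 2^40 bytes).
awk -v s="$PER_DISK_SECTORS" -v b="$SECTOR_BYTES" \
    'BEGIN { printf "one disk : %.1f TiB\n", s * b / 2^40 }'
# -> one disk : 5.5 TiB  (a "6TB" decimal drive)

# Expected RAID0 capacity = twice one disk.
awk -v s="$PER_DISK_SECTORS" -v b="$SECTOR_BYTES" \
    'BEGIN { printf "raid0    : %.1f TiB\n", 2 * s * b / 2^40 }'
# -> raid0    : 10.9 TiB  (matches what UEFI and dmraid report)
```

So 10.9 TiB is exactly what UEFI and dmraid agree on, while the installer's 2.9 TiB is indeed roughly half of a single drive, consistent with the observation above.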
From the M7 rescue prompt:

  # dmraid -r
  /dev/sda: isw, "isw_jcabajcac", GROUP, ok, 11721045166 sectors, data @ 0
  /dev/sdb: isw, "isw_jcabajcac", GROUP, ok, 11721045166 sectors, data @ 0

  # dmraid -ay
  RAID set "isw_jcabajcac_Volume1" was activated
  RAID set "isw_jcabajcac_Volume1" is now registered with dmeventd for monitoring

(the Volume1 here is the default name UEFI gave the RAID set on creation)

  # ls /dev/mapper
  Control  isw_jc....Volume1

  # gdisk /dev/mapper/isw*
  Creating new GPT entries in memory.
  Command: p
  Disk /dev/mapper/isw_jcabajcac_Volume1: 62622110 sectors, 2.9 TiB
  Sector size (logical/physical): 512/4096 bytes
  Partition table holds up to 128 entries
  Main partition table begins at sector 2 and ends at sector 33
  Partitions will be aligned on 2048-sector boundaries
  Total free space is 6262211005 sectors (2.9 TiB)

This is perhaps just an expanded rendition of the bug, but how come gdisk sees roughly half the number of sectors in the total RAID array that dmraid sees in one component of that array? Has that logical/physical sector size got anything to do with the bug? Way out of my depth here. Tony
Still present in M7.1.
Whiteboard: (none) => 7.1
Version: 7 => Cauldron
Given that gdisk reports the same size as the installer, this seems most likely to be a problem in the dmraid driver.
Initially I didn't follow what you were saying, but yes, given that I'd just pointed gdisk at /dev/mapper/isw..., I see your reasoning. Note however that both dmraid and UEFI agree on the true size of the RAID0; the installer and gdisk get it wrong. On this basis it's not immediately apparent why dmraid is at fault. Are you implying there is some fault in the way dmraid is presenting its info to gdisk and the installer? (Or are both of them asking the wrong question of dmraid?) Where to from here? Again, way out of my depth. Tony
I see the same fault on my test system with 2x3GB disks. If you run

  cat /proc/partitions

and look at the reported size (in KB) for sda, sdb, and dm-0, I expect you will see that sda and sdb show the size of your physical disks, and dm-0 shows the incorrect size for your RAID array. dm-0 is the virtual device created by dmraid, so if its size is wrong, it is a fault in dmraid. I booted from a Mageia 6.1 Live ISO on my test system, and it shows the same fault.
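That check can be scripted along these lines. This is a sketch only: it parses a /proc/partitions-style listing, fed here from a here-doc with made-up sizes so the example is self-contained; on the affected box you would feed it the real /proc/partitions instead.

```shell
# Compare the dm-0 size against the sum of sda and sdb, as read from a
# /proc/partitions-style listing (columns: major minor #blocks name).
check_raid0_size() {
    awk '
        $4 == "sda" || $4 == "sdb" { expected += $3 }
        $4 == "dm-0"               { actual = $3 }
        END {
            printf "expected (sda+sdb): %d KB\n", expected
            printf "dm-0 reports      : %d KB\n", actual
            # A real RAID0 is slightly smaller than the raw sum (metadata
            # overhead), but it should never be less than half of it.
            if (actual + 0 < expected / 2) print "dm-0 size is grossly wrong"
            else                           print "dm-0 size looks plausible"
        }'
}

# Example with fabricated numbers; on the real system run:
#   check_raid0_size < /proc/partitions
check_raid0_size <<'EOF'
major minor  #blocks  name
   8     0  3145728 sda
   8    16  3145728 sdb
 253     0  1572864 dm-0
EOF
```

With the fabricated numbers above (dm-0 at half of one disk, mimicking the reported fault), the script flags the size as grossly wrong.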
(In reply to Martin Whitaker from comment #11) > I see the same fault on my test system with 2x3GB disks. > > If you > > cat /proc/partitions > > and look at the reported size (in KB) for sda, sdb, and dm-0, I expect you > will see that sda and sdb show the size of your physical disks, and dm-0 > shows the incorrect size for your RAID array. dm-0 is the virtual device > created by dmraid, so if its size is wrong, it is a fault in dmraid. > > I booted from a Mageia 6.1 Live ISO on my test system, and it shows the same > fault. Thanks for the debugging, Martin :-) Assigning to tmb, our registered dmraid maintainer.
Assignee: bugsquad => tmbCC: (none) => marja11
Hmm... My M7 python modules are:

  /usr/lib64/python3.7/site-packages/gnucash
  /usr/lib64/python3.7/site-packages/gnucash/gnucash_core_c.py
  /usr/lib64/python3.7/site-packages/gnucash/gnucash_core.py
  /usr/lib64/python3.7/site-packages/gnucash/__pycache__/gnucash_core_c.cpython-37.pyc
  /usr/lib64/python3.7/site-packages/gnucash/__pycache__/gnucash_core.cpython-37.pyc
  /usr/lib64/python3.7/site-packages/gnucash/__pycache__/gnucash_core_c.cpython-37.opt-1.pyc
  /usr/lib64/python3.7/site-packages/gnucash/__pycache__/gnucash_business.cpython-37.opt-1.pyc
  /usr/lib64/python3.7/site-packages/gnucash/__pycache__/gnucash_core.cpython-37.opt-1.pyc
  /usr/lib64/python3.7/site-packages/gnucash/__pycache__/gnucash_business.cpython-37.pyc
  /usr/lib64/python3.7/site-packages/gnucash/gnucash_business.py

There is no corresponding python2.7/site-packages/gnucash directory, and no python2-gnucash in the M7 repository, which offers only python3-gnucash. M6 contained python2-gnucash-2.6.19-1.mga6.x86_64.rpm with its dependencies. Could we please have this or its successor and dependencies added to the M7 repositories? Tony
Oops, posted to the wrong bug. Sorry, Tony
Summary: incorrect RAID recognition prevents install => incorrect RAID recognition prevents install (bogus size in /proc/partitions)
CC: (none) => thierry.vignaud
Source RPM: (none) => kernel
This doesn't seem to be a problem with the M8 x86_64 RC. Closing for now. Tony
Resolution: (none) => FIXED
Whiteboard: 7.1 => MGA8_x86_64-rc
Status: NEW => RESOLVED