Description of problem:
Classic DVD 4.1 boots OK but cannot see the hard drives.

Hardware: ASUS UX301, their top-end UEFI ultrabook with 2x250Gb SSD in RAID0. The Classic DVD boots OK, gets to partitioning and cannot see the disk. Incidentally, I found that the DVD does not seem to have gdisk, parted or gparted, which makes diagnosis difficult. After loading gdisk, it thinks the GPT is hybrid or corrupted, and finds partitions extending beyond the disk (true of the individual disk, but not of the pair in RAID).

I need to preserve the RAID 0 and a smaller functioning MS Win8.1 (I've read there seems to be a Linux bug on this box resulting in the battery not charging; going into Windows fixes this). So installation seems impossible at present.

(Off topic: for good measure I made a recovery stick for Win8.1 and tried going from RAID 0 to AHCI. I couldn't manage to reinstall Win8.1 from recovery to the separate drives. There also seems to be a bug in the UEFI BIOS for this superb laptop: when set back to RAID it should bring up a 'pre-BIOS' window asking if you want to rebuild the RAID array, but it doesn't, so setting back to RAID in UEFI doesn't actually work (yet). Took it to ASUS here and confessed what I'd done to their shiny 3-day-old laptop. Friendly reception, but even they couldn't easily rebuild the RAID; left it with them overnight.)

Back on topic: the central problem is that the install disk cannot see the hard drives in RAID0, which is a critical failure. ASUS's recommended way of adding another OS is to shrink their unused data drive 'D' and put it there, but that can't be done if M4 can't see it.

Useful links:
https://wiki.archlinux.org/index.php/ASUS_UX301LA (Arch Linux make it work)
http://blogs.lmax.com/staff-blogs/2014/01/setting-asus-ux301la/

Version-Release number of selected component (if applicable):

How reproducible:
always

Steps to Reproduce:
1.
2.
3.
CC: (none) => thierry.vignaud, tmb
OK, got the machine back in original factory config. KDE-Live x86_64 M4.1 boots OK, graphics OK, but any attempt in MCC to manage hard disk partitions fails with an immediate crash of that module. journalctl shows:

dmraid::init failed
HDIO_GETGEO on /dev/sda succeeded: head=255 sectors=63 cylinders=31130 start=0 (and an identical string for sdb)
..
test_for_bad-drives(/dev/sda on sector #62)
..
found a dos partition table on /dev/sda at sector 0
guess_geometry_from_partition_table sda 31009/256/63
is_geometry_valid_for_the_partition_table failed for (sda1,4294967295): 1023,255,62 vs 1023,89,3 with geometry 31130/255/63

It looks like the primary problem is that it fails to recognise the existing RAID0 configuration (which is 2x256Gb SSDs in RAID0), and the subsequent errors are consequent on this.
A helpful discussion of the relevant issues in sharing a Windows RAID with a Linux dual-boot is at:
https://help.ubuntu.com/community/FakeRaidHowto

Of particular note is the comment down the page, referring to a much older Ubuntu version:

"Some Fakeraid card combinations trigger a bug introduced in 8.04 where on reboot, even with dmraid loaded, /dev/mapper only has a "control" file in it. The bug is not in dmraid, it is in how the initrd image is created, and it is fixed in 9.10. If you have a system that won't boot after installation, boot the Live CD and load dmraid then look at /dev/mapper, if it is empty then you have this bug."

I note that M4 KDE-Live x86_64 shows only a file 'control' in /dev/mapper, which sounds like what is being described here.
Sorry for not getting to this earlier... There were some issues with dmraid on live images, so it's not installed/active by default (but it is available in "Live Core" media). It's also planned to switch to mdadm for all Intel raids, as that is what Intel supports nowadays...

But for now try this:
- boot up in live mode
- open a terminal window
- switch to root user with: su -
- install dmraid with: urpmi dmraid
- try to activate dmraid with: dmraid -ay
- check the contents of /dev/mapper
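The last check in the list above can be scripted; a minimal sketch (the function name `raid_mapper_active` is my own, and the directory is a parameter purely so the check is testable) that treats a /dev/mapper holding only the bare "control" node as "nothing activated":

```shell
#!/bin/sh
# Return 0 if the given directory (default /dev/mapper) contains any
# entry besides the device-mapper "control" node, i.e. `dmraid -ay`
# actually created mapped devices; return 1 otherwise.
raid_mapper_active() {
    dir="${1:-/dev/mapper}"
    for entry in "$dir"/*; do
        name=$(basename "$entry")
        # "*" appears when the glob matched nothing (empty directory)
        [ "$name" != "control" ] && [ "$name" != "*" ] && return 0
    done
    return 1
}
```

Usage after the steps above would be something like: `dmraid -ay; raid_mapper_active && echo "raid set(s) mapped"`.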
Much appreciated. Life's been a bit frantic for us all recently and despite marking this as critical, I figure folk were waiting until I got the machine back from ASUS, who kindly re-imaged it back to original status, before giving it much attention. Rightly so. I'd just figured my way to what you suggest from an Arch-linux discussion and am booting KDE-Live as we speak. Will try the above and report back. From the arch-linux discussion (referenced in the ubuntu thread above) I gather when dmraid is sorted out, one installs to /dev/mapper/some_partition, but I could learn a lot more about this before doing anything!
So far so good.

dmraid -ay:
RAID set "isw_cjabdfefhe_ASUS_OS" was activated
device isw_cj... now registered with dmeventd for monitoring

ls /dev/mapper:
control  isw_cj....

Going into MCC, then Local disks, then Manage Disk Partitions still crashes.
To get a look at it, I then tried to mount /dev/mapper/isw_cj... on /mnt/disk, but this failed with the usual 'wrong fs type' message.
OK, can you capture the info in the crash and attach it to this report? Since it's a Haswell system I should be able to reproduce it somewhat by reconfiguring my local workstation and setting up a raid0 to install on... but it will probably take until next week.
Herewith the crash info, for next week. Really appreciate your interest - it's taking me longer than it might to try and take this all in. It's a bit lengthy, but perhaps just OK here rather than as an attachment:

[root@localhost live]# dmraid -ay
RAID set "isw_cjabdfefhe_ASUS_OS" was activated
device "isw_cjabdfefhe_ASUS_OS" is now registered with dmeventd for monitoring
[root@localhost live]# mcc
"/usr/sbin/drakmenustyle" is not executable [Menus] at /usr/libexec/drakconf line 831.
"/usr/sbin/drakbackup" is not executable [Backups] at /usr/libexec/drakconf line 831.
"/usr/sbin/tomoyo-gui" is not executable [Tomoyo Policy] at /usr/libexec/drakconf line 831.
Too late to run INIT block at /usr/lib/perl5/vendor_perl/5.18.1/x86_64-linux-thread-multi/Glib/Object/Introspection.pm line 258.
Subroutine Gtk3::main redefined at /usr/lib/perl5/vendor_perl/5.18.1/Gtk3.pm line 296.
Error: The backup GPT table is not at the end of the disk, as it should be. This might mean that another operating system believes the disk is smaller. Fix, by moving the backup to the end (and removing the old backup)?
Warning: Not all of the space available to /dev/mapper/isw_cjabdfefhe_ASUS_OS appears to be used, you can fix the GPT to use all of the space (an extra 512 blocks) or continue with the current setting?
(the Error and Warning pair above are printed a second time)
INTERNAL ERROR: unknown device mapper/isw_cjabdfefhe_ASUS_OSp1
MDK::Common::Various::internal_error() called from /usr/lib/libDrakX/devices.pm:186
devices::entry() called from /usr/lib/libDrakX/devices.pm:201
devices::make() called from /usr/lib/libDrakX/fs/type.pm:276
fs::type::call_blkid() called from /usr/lib/libDrakX/fs/type.pm:284
fs::type::type_subpart_from_magic() called from /usr/lib/libDrakX/fsedit.pm:271
fsedit::get_hds() called from /usr/libexec/drakdisk:74
and this was the Arch reference noted above: https://wiki.archlinux.org/index.php/Installing_with_Fake_RAID
... and there's something I don't understand about the /dev/mapper/isw_cj... entry referenced above. With the encrypted partitions I've used for years, each /dev/mapper entry refers to an individual encrypted partition. In this case, with only one entry, it must refer to the entire RAID 0 - or else something is not right and more entries should have been generated?
Update: M5 alpha1 Live-KDE x86_64 boots and runs. Initially /dev/mapper contains only the 'control' file. Manually running urpmi will add dmraid and parted. Calling 'dmraid -ay' will then create the file 'isw_cjabdfefhe_ASUS_OS' in /dev/mapper. This refers to the whole RAID 0, but does not show any partitions within it at this stage. Subsequently running 'partprobe' without arguments sets up the 5 existing partitions in the RAID0, named with derivatives of the isw_cj... raid disk name above. Some of these can be mounted and their contents read, e.g. the EFI partition can be read.

Trying the same from the M5 alpha1 Classical x86_64 DVD install fails. dmraid is there, but calling 'dmraid -ay' gives the message: RAID set "isw_cjabdfefhe_ASUS_OS" could not be found (yet it finds a name for it). 'partprobe' is not available on the DVD (I think it is installed as part of parted, also not there).

I suspect there is something related to or needed by dmraid which is present on KDE-Live but absent from the Classical DVD? In addition, we really need parted on the Classical DVD.

Appreciate any ideas.
Correction: getting tired - I merged 2 error lines in the 2nd paragraph above. Calling 'dmraid -ay' results in:

RAID set "isw_cjabdfefhe_ASUS_OS" was not activated
ERROR: device "isw_cjabdfefhe_ASUS_OS" could not be found
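For scripting around this, the success and failure outputs can be told apart mechanically; a small sketch (the helper name `dmraid_activated_set` is mine) that prints the set name only when the `RAID set "NAME" was activated` line is present, and fails on the `was not activated` / `ERROR: device ... could not be found` pair quoted above:

```shell
#!/bin/sh
# Given captured `dmraid -ay` output, print the name of an activated
# RAID set and return 0; return 1 (printing nothing) if activation
# failed.  The `was activated$` anchor deliberately does not match
# the "was not activated" failure line.
dmraid_activated_set() {
    name=$(printf '%s\n' "$1" \
        | sed -n 's/^RAID set "\([^"]*\)" was activated$/\1/p')
    [ -n "$name" ] && printf '%s\n' "$name"
}
```

Usage would be something like: `out=$(dmraid -ay 2>&1); dmraid_activated_set "$out" || echo "activation failed"`.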
Current status: using Live-KDE x86_64 I've urpmi'd dmraid, run it, run partprobe, and created fakeraid devices for the existing MS Win8.1 partitions. I have shrunk the 'drive D' MS Windows data partition in this RAID-0 system almost to nothing, and created ext4 and swap in the resulting free space. I couldn't get anywhere with the Classical DVD install, but from Live-KDE4 I've installed M5alpha1, and can subsequently mount and see the directories there, still from the Live DVD.

Current issues:
1. The Live-KDE DVD won't run properly in UEFI mode. It starts but then fails early. I had to run the DVD with CSM turned on, choosing the non-UEFI version of the DVD drive (as the UEFI sees it - there are always UEFI and non-UEFI versions shown in the UEFI BIOS), to get this far.
2. I've not yet set up /boot/EFI as part of the install.
3. The major issue for enabling this installation to boot may be setting up a way of calling 'dmraid -ay' and then 'partprobe' very early, so the installed system can find itself - on /dev/mapper/isw_cjabdfefhe_ASUS_OSp7, as it happens. I've read elsewhere of modifying dmraid hooks in initcpio-related files; note the late discussion by 'Hydranix' in the thread at https://bbs.archlinux.org/viewtopic.php?id=169099. I don't know how to address this in systemd.
4. The last part of the puzzle will be where to put some grub version (grub in the partition, or grub2 on the RAID 0 assembly; some folk on the net have had trouble with the latter, and I've read that grub2 doesn't recognize dmraid properly??). May need to choose which to boot by going into UEFI for the moment??

Really appreciate some ideas.
Further info: I see that the isw Intel 'fakeraid' chipset is specifically excluded in the script /lib/systemd/fedora-storage-init, which otherwise runs 'dmraid -ay' against other chipsets. I believe this is done because Intel reportedly prefer mdadm to assemble RAID on this chipset. The only problem is, mdadm doesn't seem to work on my ASUS UX301 laptop.

mdadm --detail-platform gives me:

Platform: Intel(R) Matrix Storage Manager
Version: 12.7.0.1936
RAID Levels: raid0 raid1
Chunk Sizes: 4k 8k 16k 32k 64k 128k
2Tb volumes: supported
2Tb disks: supported
Max Disks: 6
Max Volumes: 2 per array, 4 per controller
I/O Controller: /sys/devices/pci10000:00/0000:00:1f.2 (SATA)

..and then stops. I understand it should go on to list the disks present, but it doesn't.

Running 'mdadm --assemble --scan' fails - I don't recall the error message, but no devices are generated in /dev/mapper. Quite strangely (meaning I don't understand), if I've previously run 'dmraid -ay' (and subsequently 'partprobe'?) to generate a full set of /dev/mapper entries, subsequently running 'mdadm --assemble --scan' results in a second full set of entries in /dev/mapper, with slightly different names, but I think they were all symbolic links to the dmraid-created entries. I know one uses either dmraid or mdadm, not both!, but I mention this for any diagnostic use it may be.
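If it helps anyone compare platforms, the fields in that `mdadm --detail-platform` output are easy to pull out programmatically; a sketch (the helper name `platform_field` is mine, and it takes the captured text as a parameter so it can be exercised on canned output like that quoted above):

```shell
#!/bin/sh
# Extract one "Field: value" line from captured `mdadm
# --detail-platform` output, printing just the value, e.g.
#   platform_field 'Max Disks' "$captured"   ->  6
platform_field() {
    printf '%s\n' "$2" | sed -n "s/^[[:space:]]*$1: //p" | head -n 1
}
```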
Current status. (Note: I allowed a failed live-DVD install to generate bug 13772 - maybe I should have kept it all here.) I now have a UEFI mageia .efi which attempts to start my installed-from-LiveDVD M5alpha1, but fails and drops out to the dracut prompt, because it can't see the raid0 partition, which hasn't been assembled yet. I've tried editing the fedora-storage-init script in /lib/systemd and commenting out the 4 lines which exclude Intel isw from having 'dmraid -ay' run early, but this is not sufficient. It needs at least 'partprobe' to be run immediately after the 'dmraid -ay', but I don't know how to do this. Does 'partprobe' need to be added to the initramfs, and how does one do this with systemd?
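I don't know the Mageia-blessed way to do this, but dracut supports drop-in modules, so a sketch along these lines might get dmraid and partprobe into the initramfs and run before mounting. The module name `90fakeraid`, the file names, and the hook priority are all my assumptions; only the dracut hook functions (check/depends/install, inst_multiple, inst_hook) are standard:

```shell
# /usr/lib/dracut/modules.d/90fakeraid/module-setup.sh  (sketch)
check() { return 0; }
depends() { echo dm; return 0; }
install() {
    # copy the two binaries into the initramfs and register the hook
    inst_multiple dmraid partprobe
    inst_hook pre-mount 30 "$moddir/activate-fakeraid.sh"
}

# /usr/lib/dracut/modules.d/90fakeraid/activate-fakeraid.sh  (sketch)
#!/bin/sh
# runs inside the initramfs, before the root fs is mounted
dmraid -ay
partprobe
```

After creating the module (and marking activate-fakeraid.sh executable), the initramfs would be rebuilt with `dracut --force`. This is a configuration sketch, not something I've verified on this box.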
M5 alpha2 classic resbuild boots UEFI OK, but the install failed early due to raid0 issues. mdadm has correctly assembled the raid and made devices for the existing partitions in /dev/md. Unfortunately the install script cannot see these, and crashes saying the partition table of device sda is "too corrupted for me", offering to erase all partitions. The install script needs to be smarter and use /dev/md once it has called mdadm successfully. Bug 13592
Whiteboard: (none) => M5 alpha2 DVD x86_64
M5 alpha2 LiveDVD KDE4 x86_64 boots in UEFI mode. I manually mounted (on /mnt/7) the RAID 0 partition where I previously installed M5a1, mount --bind'ed /dev, /proc, /run and /sys to the appropriate places in /mnt/7, and chrooted there. Can read everything OK. grub2-efi is installed.

grub2-install --target=x86_64-efi --efi-directory=/boot/efi --boot-directory=/boot/efi

This fails: "disk 'md126,1' not found". It doesn't help to put /dev/md126 at the end of the grub2-install arguments.

After urpme dmraid, attempting to reinstall from Live to HD using the on-screen install icon fails instantly and silently. I know from M5a1 that it will not fail, and will install successfully, if dmraid is present and I run 'dmraid -ay', after which it can see the RAID 0 partitions. It seems to be unable to see the /dev/md126 RAID 0 created by mdadm, despite this being mountable manually.

Tonyb
*** Bug 12399 has been marked as a duplicate of this bug. ***
CC: (none) => lmenut
CC: lmenut => (none)
Version: 4 => Cauldron
Whiteboard: M5 alpha2 DVD x86_64 => 5alpha2
Bug 14330 reports a related issue.
CC: (none) => vzawalin1
commit 281d55f77a148cb8fafd4e73912c31e06d81acf4
Author: Thierry Vignaud <thierry.vignaud@...>
Date:   Fri Mar 27 08:45:06 2015 -0400

    fix failing to read partition table (mga#13592, mga#15272)

    this is making it more readable regarding:
    "I cannot read the partition table of device XXX, it is too corrupted"
    (mga#13592, mga#15272, mga#15472)
---
Commit Link: http://gitweb.mageia.org/software/drakx/commit/?id=281d55f77a148cb8fafd4e73912c31e06d81acf4
Bug links:
Mageia
  https://bugs.mageia.org/15472
  https://bugs.mageia.org/15272
  https://bugs.mageia.org/13592
Just tried MGA5 RC 29/3/2015. The "I cannot read partition table ..." message is still there. Did the fix linked in comment 20 make it into the 29/3/2015 round of ISOs?
Please try the installer again. When you see the error, please:
- plug in a USB key
- go to tty2 (alt+ctrl+F2)
- run the "bug" command
- attach here the report.bug file you will find on your USB key (but please compress it first)

We'll see the drakx version as well as the details of the issue.
Keywords: (none) => NEEDINFO
The version of drakx tested in comment 21 was .74. As in my reply re 15446: where would I get, and how would I merge in, drakx .75 (or the squashfs containing it)?
Just boot with "boot-nonfree.iso" from e.g. http://distrib-coffee.ipsl.jussieu.fr/pub/linux/Mageia//distrib/cauldron/x86_64/install/images/ Most mirrors should be up to date: http://mirrors.mageia.org/status
Tried the classical installer from RC round 5; report.bug for drakx 16.75 attached. The warning "Partition table for sdb is too corrupted for me" still appears. As that screen provides an option to bypass this step, it is adequate for now. Diskdrake fails and throws "Error: Invalid argument during seek for read on /dev/sdb" every time the disks are rescanned. This is also what causes the live-install to fail (14330) in this environment. The partitioner does not show the raid disk, but it can be mounted and used post-install. Installation on the raid disk is therefore not currently an option, but tmb's 'wip' message is reassuring.
Created attachment 6175 [details] drakx 16.75
Well, I might have just taken a step forward. Hardware is the UEFI ASUS UX301 laptop, 2x256Gb SSD, which originally came with Win8.1 in RAID0 config. I couldn't install mageia on this - see open bug 13592. I finally despaired, broke the RAID and installed on one of the two disks.

A side issue: although in UEFI one can switch the 2 disks between AHCI and RAID (no choice of RAID level, but it is 0), there is no UEFI way to activate this. M5 (and Windows, for that matter) still sees it as 2 separate disks despite being set to RAID. There 'should' be a popup on rebooting, perhaps accessible via Ctrl-I or some alternative, which reputedly brings up a dialog for configuring the RAID. Anything at all like this is conspicuously absent from the ASUS UX301; myriad complaints on the Web about this.

1. I booted the M5 RC9 Classic x86_64 DVD into rescue mode, # prompt. dmraid found nothing.
2. Found: http://www.intel.com.au/content/dam/www/public/us/en/documents/white-papers/rst-linux-paper.pdf
3. Tried (adapted from the above link): mdadm -C /dev/md/imsm /dev/sd[a-b] -n 2 -e imsm (delighted to find both dmraid and mdadm on the Classic disk). Initially it failed, saying /dev/sdb was busy, that it couldn't write to it, and that it was unusable in this array.
4. Tried to wipe any bad metadata on /dev/sdb with: mdadm --zero-superblock /dev/sdb - and it still failed, as it couldn't write to /dev/sdb.
5. Still from the # prompt in Classic rescue, used gdisk, which reported corrupted partition data on /dev/sdb. I wrote a blank partition table to it. Re-doing the zero-superblock step above still failed with no write permission.
6. Rebooted the M5 install disk into rescue again, whereupon the zero-superblock step ran without comment.
7. For good measure, used gdisk to delete all the partitions on /dev/sda, and ran the zero-superblock step on sda.
8. Tried RAID creation again with: mdadm -C /dev/md/imsm /dev/sd[a-b] -n 2 -e imsm and it popped up the dialog: Continue creating array?
y - then it reported: mdadm: container /dev/md/imsm prepared.
9. Ran the next step as per the Intel article, creating the RAID 0 volume /dev/md/vol0 within this imsm container: mdadm -C /dev/md/vol0 /dev/md/imsm -n 2 -l 0. Result was: mdadm: array /dev/md/vol0 started.
10. Just for the record, I then ran mdadm -E -s --config=mdadm.conf > /etc/mdadm.conf and had a look at the resulting file, photographing its text in case it's needed in future.
11. Rebooted the RC9 Classic x86_64 DVD, in install mode. Loaded the program, accepted the licence, then immediately a new popup message I'd never seen before: BIOS software RAID detected on disks /dev/sdb /dev/sda. Activate it? Continued with the default 'Y', and my single disk drive is now shown as 'mapper/isw_bbacjebecf_vo (476Gb)'; it looks as if I'll be able to partition this normally. Will proceed.
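One way to confirm the resulting array really came up as RAID 0 is to look at /proc/mdstat. A sketch of a small parser (the function name `md_levels` is mine, and the sample input in the usage note is typical mdstat output, not taken from this machine's log):

```shell
#!/bin/sh
# Print "name level" for each active md array found in
# /proc/mdstat-style text, e.g. "md126 raid0".  The text is passed
# as a parameter so the helper can be fed canned input.
md_levels() {
    printf '%s\n' "$1" | awk '/^md[0-9]+ :/ { print $1, $4 }'
}
```

Usage might look like `md_levels "$(cat /proc/mdstat)"`, which for a line such as `md126 : active raid0 sda[1] sdb[0]` prints `md126 raid0`.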
Whiteboard: 5alpha2 => M5 RC9
Attempting to install to my 'mapper/isw_bbacjebecf_vo' RAID drive failed, with messages about being unable to install volume 5. I left a bit of space at the end and tried again; then it couldn't install volume 1. I'll fiddle more with spare spaces.

Final: by having just the EFI partition, the mount partition and one more spare, and manually formatting each one in turn, I avoided the 'unable to install volume 5' error message and proceeded with a normal install onto the laptop raid, which rebooted OK. My earlier attempts to also have a swap partition always gave a failure related to 'volume 5' - even when I'd deleted some partitions so there was no volume 5. Got it up with no swap. The installer behaves as if there is some bug when installing too many partitions onto a RAID volume??

A not insignificant side problem, however, is that there is no mouse cursor - never seen this before on this laptop, including with the M5 alphas or betas. Someone else mentioned this recently. Ideas?
I guess the above bug report could now be considered resolved and closed, even if the work-around described was a bit convoluted. There is the residual problem of the fault installing the 5th partition into this RAID 0 array - should I split this off as a separate bug? (The mouse cursor suddenly appeared, 2 reboots later. Unexplained.)
CC: (none) => eeeemail
I'll take the liberty of marking this resolved.
Status: NEW => RESOLVED
Resolution: (none) => FIXED