I have a running cauldron system. I added a second drive, booted from the primary drive, and ran diskdrake to define several partitions on the new drive (/dev/sdb, as expected). This all went fine. I gave no mount point for any of the /dev/sdbX partitions.

Then I tried to format them. In every case the format failed. Running diskdrake from the command line, mke2fs complained that the partitions appeared to be in use by the system, and it would not create a filesystem. Figuring that something from the partition definition was lingering, I rebooted, but got the same error. The error also occurs if I run mkfs.ext4 from the command line. lsof /dev/sdb1 shows no users.

Apparently, whatever (dbus? systemd?) is responsible for detecting the drive and creating the device nodes for the drive and its partitions is doing something that blocks mkfs, but I have no idea what, nor how to find out. I've done this in the past with partitions on the primary drive without a problem. I'm guessing that is because the primary drive containing the root partition is recognized and set up earlier in the boot by something else.

Reproducible:

Steps to Reproduce:
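For the record, these are the generic checks I know of for finding out what is holding a block device open (standard util-linux/sysfs tools, nothing Mageia-specific; the device names are just the ones on my machine):

******************************************************************
fuser -v /dev/sdb1                         # any process with the device node open?
ls /sys/class/block/sdb1/holders/          # any kernel-level holder (device-mapper, md, ...)?
lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT /dev/sdb   # how the kernel currently sees the disk
******************************************************************

None of these showed me anything obvious so far, which is why I'm filing this.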
This has nothing to do with formatting. I booted to single-user mode and formatted the partitions without a problem. When I rebooted, mount refused to mount any of them, giving the same error described above.
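Next time it fails I'll capture the exact error and the kernel's view of it with something like this (/mnt/disk is just my mount point; these are ordinary util-linux/systemd commands, not anything special):

******************************************************************
mount /dev/sdb1 /mnt/disk; echo "mount exit status: $?"
dmesg | tail -n 20
journalctl -b --no-pager | tail -n 50
******************************************************************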
Priority: Normal => High
Summary: Something in cauldron blocks formatting of unmounted partitions => Something in cauldron blocks mountng of partitions on disks not containing the active root
Severity: normal => major
I noticed the following, but I don't know if it is significant. /dev/sda is my primary drive, formatted many moons ago. /dev/sdb is the new drive, 2 TB and recently acquired:

******************************************************************
[root@ftgme2 ~]# fdisk /dev/sda

Welcome to fdisk (util-linux 2.24).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

[root@ftgme2 ~]# fdisk /dev/sdb

Welcome to fdisk (util-linux 2.24).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x00000000
**********************************************************************

Note that the new disk has a physical sector size of 4096 while the older one has a sector size of 512. The mount point at which I'm trying to mount the /dev/sdb1 partition is a directory on /dev/sda1. Is the mount failing because the two disks have differing physical sector sizes?
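If the sector-size theory is worth chasing, the sizes can also be read directly, without going through fdisk (generic util-linux/sysfs queries):

******************************************************************
blockdev --getss --getpbsz /dev/sda      # logical, then physical sector size
blockdev --getss --getpbsz /dev/sdb
cat /sys/block/sdb/queue/logical_block_size
cat /sys/block/sdb/queue/physical_block_size
******************************************************************

As far as I know, the kernel only cares about the logical sector size, which is 512 on both disks here, so differing physical sizes alone shouldn't make mount fail. But I could be wrong about that.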
Which partition table are you using on that 2 TB disk? Disks that big should use GPT, AFAIK.
CC: (none) => mageia
I created the partitions with diskdrake, and it doesn't seem to have a problem seeing them. Likewise fdisk:

Device      Boot      Start         End      Blocks   Id System
/dev/sdb1   *          2048    83875364    41936658+  83 Linux
/dev/sdb2          83875840   167750729    41937445   83 Linux
/dev/sdb3         167751680   251626094    41937207+  83 Linux
/dev/sdb4         251629560  2466154214  1107262327+   5 Extended
/dev/sdb5         251629568   285169814    16770123+  82 Linux swap / Solaris
/dev/sdb6         285173760   318713534    16769887+  83 Linux
/dev/sdb7         318715904   855573704   268428900+  83 Linux
/dev/sdb8         855576576  1392433874   268428649+  83 Linux
/dev/sdb9        1392437248  1929294044   268428398+  83 Linux
/dev/sdb10       1929297920  2466154214   268428147+  83 Linux

I assume diskdrake is still using a DOS-type partition table?
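The label type can be confirmed directly rather than assumed; for example (plain parted/fdisk invocations, nothing diskdrake-specific):

******************************************************************
parted /dev/sdb print | grep "Partition Table"
fdisk -l /dev/sdb | grep "Disklabel type"
******************************************************************

The "Disklabel type: dos" line in the fdisk output I posted earlier says the same thing: it is an MS-DOS (MBR) table.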
Alright, this is just weird... I booted to single-user mode to see if the mount would work there. It does. I then exited single-user mode and allowed the full boot to continue. When KDM came up, I logged in, and "df" showed that the mount was still there.

In the previous boot I had defined a second mount point, /mnt/disk2, and had tried mounting a partition there (just to rule out something holding /mnt/disk); that didn't work. Now, with /dev/sdb1 still mounted on /mnt/disk, /dev/sdb2 mounts successfully on /mnt/disk2. I also umounted them both and was able to mount /dev/sdb1 on /mnt/disk again with no problem.

As long as I have them accessible, I'm going to do some work with them. Then I'll try to reboot normally and see what happens...
Summary: Something in cauldron blocks mountng of partitions on disks not containing the active root => Something in cauldron blocks mounting of partitions on disks not containing the active root
After rebooting normally, the partitions are all "busy" again.
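I haven't confirmed any of this on my machine, but the usual invisible "users" of a block device seem to be device-mapper (dmraid/LVM) or mdraid latching onto stale metadata left on the disk, so this is where I plan to look next:

******************************************************************
cat /proc/mdstat        # any md arrays assembled over sdb?
dmsetup ls              # any device-mapper targets?
pvs                     # any LVM physical volumes detected?
wipefs /dev/sdb1        # list (without erasing) old signatures on the partition
******************************************************************

If wipefs lists an old RAID or LVM signature, that might explain why something claims the partitions during a full boot but not in single-user mode, where those services haven't been started yet.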
Is this bug still happening in Mageia 5 RC?
Keywords: (none) => NEEDINFO
The /dev/sda disk started to go bad, and /dev/sdb replaced it, so I can no longer test this. I'd just be interested to hear if anyone knows why mount works differently in single-user mode and RL 3. I've seen this "in use by the system" off and on over the years in other situations, and I've never understood what that means. If no one knows, then I'm OK with WORKSFORME.
responding to previous comment
Status: NEW => RESOLVED
CC: (none) => nic
Resolution: (none) => WORKSFORME