Description of problem: Stale directory after unmounting a filesystem.

During Mageia boot, all filesystems declared in fstab are mounted automatically. For example, I have this line in fstab:

LABEL=MomentusXT /disk/MomentusXT ext4 noatime,commit=60,barrier=0,data=writeback 1 2

I'm at runlevel 5, logged in as a normal user in a KDE4 session. I can obviously see the mount point inside a konsole:

# grep MomentusXT /proc/mounts
/dev/sdb1 /disk/MomentusXT ext4 rw,noatime,nobarrier,commit=600,stripe=128,data=writeback 0 0
# umount /dev/sdb1
(no output)
# grep MomentusXT /proc/mounts
(no output)
# rmdir /disk/MomentusXT/
rmdir: failed to remove '/disk/MomentusXT/': Device or resource busy
# fuser /disk/MomentusXT
(no output)
# lsof /disk/MomentusXT
(no output)
# lsof /dev/sdb
(no output)
# lsof /dev/sdb1
(no output)

This becomes a major problem when using a RAID array: I need to reconfigure some MD devices, but I can't!

# grep md0 /proc/mounts
/dev/md0 /mnt/storage/raid ext4 rw,noatime,user_xattr,acl,barrier=1,stripe=1792,data=ordered 0 0
# umount /dev/md0
(no output)
# grep md0 /proc/mounts
(no output)
# rmdir /mnt/storage/raid
rmdir: failed to remove '/mnt/storage/raid': Device or resource busy
# mdadm -S /dev/md0
mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?
[...]

Again, nothing is found with either 'lsof' or 'fuser'. In order to correctly free my block devices, I have to either switch to single-user mode or kill running processes randomly (!) until I can remove the mount directory or stop the RAID array. The problem happens in runlevel 3 too, so I'm sure this is not caused by some graphical utility running in the user session. I found that the "colord" process was one of the culprits, but it's not the only one.

Version-Release number of selected component (if applicable):
Mageia 2, everything up to date.

How reproducible:
Happens constantly on my 2 computers. Just umount an ext4 filesystem that was mounted during boot, then try to delete the directory it was mounted on. It always results in "device or resource busy".
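For what it's worth, lsof and fuser only report open file descriptors; they do not show a mount that survives in another process's private mount namespace (systemd starts some services, colord among them, with their own namespace, which would match what I see above). A minimal diagnostic sketch, using the label from the transcript; <pid> is a placeholder, not a value from this report:

# grep -l MomentusXT /proc/[0-9]*/mountinfo    (each hit is a /proc/<pid>/mountinfo whose namespace still holds the mount)
# ps -o pid,comm -p <pid>                      (identify the holder; in my case this could be colord)

A recent enough util-linux also ships nsenter, which can enter that namespace with "nsenter -t <pid> -m" and unmount there, but I have not checked whether the Mageia 2 package includes it.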
I can confirm this bug with my USB stick. Since the upgrade to Mageia 2 it can never be unmounted again; I have to put the machine into hibernate mode to unplug it.

/dev/sdb1 on /run/media/franz/1D39-7202 type vfat (rw,nosuid,nodev,relatime,uid=500,gid=500,fmask=0022,dmask=0077,codepage=cp437,iocharset=iso8859-1,shortname=mixed,showexec,utf8,errors=remount-ro)
[root@localhost html]# umount /dev/sdb1
umount: /run/media/franz/1D39-7202: device is busy.
(In some cases useful info about processes that use the device is found by lsof(8) or fuser(1))
CC: (none) => flink
[root@localhost html]# lsof /dev/sdb1
lsof: WARNING: can't stat() fuse.gvfs-fuse-daemon file system /run/user/franz/gvfs
      Output information may be incomplete.
COMMAND     PID  USER  FD  TYPE DEVICE SIZE/OFF NODE NAME
evince-th 24508 franz  6u   REG   8,17   170504 1486 /run/media/franz/1D39-7202/Windows/USt-Voranmeldung-2009-Q2.xps

I had opened the USB stick's contents in Nautilus, so a process still has a file open there and the unmount did not release it as it should. In my opinion the unmount should automatically close all processes that were still using the folder and its files.
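This case is slightly different from the original report: here a process (the evince thumbnailer) really does hold a file open, so fuser can both find and, if you accept the consequences, terminate it. A workaround sketch using the mount point from the transcript above:

# fuser -vm /run/media/franz/1D39-7202    (verbose list of every process with files open below the mount point)
# fuser -km /run/media/franz/1D39-7202    (send SIGKILL to those processes; destructive, unsaved work is lost)
# umount /run/media/franz/1D39-7202

A gentler alternative is a lazy unmount, which detaches the mount point immediately and finishes the cleanup once the last user closes its files:

# umount -l /run/media/franz/1D39-7202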
I'm also experiencing this problem. I am unable to stop an md device. There are no volume groups involved, the filesystem on that device has been unmounted, and the device is not associated with any userland processes. However, lsof | grep shows several kernel processes associated with the device.

# mdadm --stop /dev/md99
mdadm: Cannot get exclusive access to /dev/md99:Perhaps a running process, mounted filesystem or active volume group?
# lsof /dev/md99
# lsof | grep md99
md99_raid  2561 root cwd DIR 8,1 4096 2 /
md99_raid  2561 root rtd DIR 8,1 4096 2 /
md99_raid  2561 root txt unknown        /proc/2561/exe
jbd2/md99 31622 root cwd DIR 8,1 4096 2 /
jbd2/md99 31622 root rtd DIR 8,1 4096 2 /
jbd2/md99 31622 root txt unknown        /proc/31622/exe
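One reading of that lsof output (my interpretation, not confirmed here): md99_raid and jbd2/md99 are kernel threads, so their mere presence is normal and is not by itself what keeps the array busy. However, a live jbd2/md99 journal thread usually means the ext4 filesystem on md99 is still mounted somewhere, possibly in another process's mount namespace, which lsof /dev/md99 would not reveal. Some checks that may narrow it down:

# grep md99 /proc/[0-9]*/mountinfo    (any hit: that process's namespace still has the filesystem mounted)
# ls /sys/block/md99/holders          (block devices, e.g. device-mapper targets, still claiming the array)
# cat /proc/mdstat                    (current state of all md arrays)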
CC: (none) => pfaff
More information about my environment:

# cat /etc/release
Mageia release 2 (Official) for x86_64
# uname -a
Linux localhost 3.3.8-server-2.mga2 #1 SMP Mon Jul 30 22:06:50 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
# lsb_release -a
LSB Version:    core-2.0-amd64:core-2.0-noarch:core-3.0-amd64:core-3.0-noarch:core-3.1-amd64:core-3.1-noarch:core-3.2-amd64:core-3.2-noarch:core-4.0-amd64:core-4.0-noarch:core-4.1-amd64:core-4.1-noarch:cxx-3.1-amd64:cxx-3.1-noarch:cxx-3.2-amd64:cxx-3.2-noarch:graphics-3.1-amd64:graphics-3.1-noarch:graphics-3.2-amd64:graphics-3.2-noarch:lsb-2.0-amd64:lsb-2.0-noarch:lsb-3.0-amd64:lsb-3.0-noarch:lsb-3.1-amd64:lsb-3.1-noarch:lsb-3.2-amd64:lsb-3.2-noarch:lsb-4.0-amd64:lsb-4.0-noarch:lsb-4.1-amd64:lsb-4.1-noarch
Distributor ID: Mageia
Description:    Mageia 2
Release:        2
Codename:       thornicroft
# mdadm --version
mdadm - v3.2.3 - 23rd December 2011
Thomas, something for you?
See Also: (none) => https://bugs.mageia.org/show_bug.cgi?id=6964
Assignee: bugsquad => tmb
Source RPM: base system and utilities => util-linux
I can confirm, although I don't have a transcript here to contribute. The other day I had this problem: nothing was going on with the share on either side, and I tried to umount it from both the console and MCC, getting the "device is busy" error each time. I ended up having to delete the share in MCC on the server side (and re-add it later); only then did it finally unmount on the client side. At the time it was an NFS share that was crashing Nautilus (a different bug) that I was working on; I had to umount it from my netbook and couldn't. Thanks.
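For the NFS variant specifically, and assuming the refusal is purely on the client side, two client-side escape hatches may be worth trying before changing anything on the server (the path below is a placeholder, not one from this report):

# umount -f /mnt/nfs-share    (force unmount; mainly intended for NFS servers that have become unreachable)
# umount -l /mnt/nfs-share    (lazy unmount: detach from the tree now, clean up when the last reference drops)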
CC: (none) => skeeter1029
*** Bug 6964 has been marked as a duplicate of this bug. ***
This message is a reminder that Mageia 2 is nearing its end of life. Approximately one month from now Mageia will stop maintaining and issuing updates for Mageia 2. At that time this bug will be closed as WONTFIX (EOL) if it remains open with a Mageia 'version' of '2'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Mageia version prior to Mageia 2's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that we may not be able to fix it before Mageia 2 is end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Mageia, you are encouraged to click on "Version" and change it against that version of Mageia.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Mageia release includes newer upstream software that fixes bugs or makes them obsolete.

-- The Mageia Bugsquad
Mageia 2 changed to end-of-life (EOL) status on 22 November. Mageia 2 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of Mageia, please feel free to click on "Version", change it to that version of Mageia, and reopen this bug.

Thank you for reporting this bug and we are sorry it could not be fixed.

-- The Mageia Bugsquad
Status: NEW => RESOLVED
Resolution: (none) => OLD