Mageia Bugzilla – Attachment 2485 Details for Bug 6541: systemd seems to have problems with NFS mounts in /etc/fstab
Description: syslog fragment
Filename: syslog
MIME Type: text/plain
Creator: Frank Griffin
Created: 2012-06-22 15:49:37 CEST
Size: 13.14 KB
Jun 21 17:31:52 localhost dhclient: DHCPREQUEST on eth0 to 255.255.255.255 port 67
Jun 21 17:31:52 localhost dhclient: DHCPACK from 192.168.3.100

*** Network is up at this point

Jun 21 17:31:52 localhost avahi-daemon[17449]: Registering new address record for fe80::20c:41ff:feee:e2e1 on eth0.*.
Jun 21 17:31:52 localhost avahi-daemon[17449]: Joining mDNS multicast group on interface eth0.IPv4 with address 192.168.3.102.
Jun 21 17:31:52 localhost avahi-daemon[17449]: New relevant interface eth0.IPv4 for mDNS.
Jun 21 17:31:52 localhost avahi-daemon[17449]: Registering new address record for 192.168.3.102 on eth0.IPv4.
Jun 21 17:31:52 localhost kernel: [ 54.769868] netlink: 33 bytes leftover after parsing attributes.
Jun 21 17:31:52 localhost kernel: [ 54.769871] netlink: 33 bytes leftover after parsing attributes.
Jun 21 17:31:52 localhost kernel: [ 54.769891] netlink: 33 bytes leftover after parsing attributes.
Jun 21 17:31:52 localhost NET[18371]: /sbin/dhclient-script : updated /etc/resolv.conf
Jun 21 17:31:52 localhost avahi-daemon[17449]: Withdrawing address record for fe80::20c:41ff:feee:e2e1 on eth0.
Jun 21 17:31:52 localhost avahi-daemon[17449]: Withdrawing address record for 192.168.3.102 on eth0.
Jun 21 17:31:52 localhost avahi-daemon[17449]: Withdrawing workstation service for eth0.
Jun 21 17:31:52 localhost avahi-daemon[17449]: Withdrawing workstation service for lo.
Jun 21 17:31:52 localhost avahi-daemon[17449]: Registering new address record for fe80::20c:41ff:feee:e2e1 on eth0.*.
Jun 21 17:31:52 localhost avahi-daemon[17449]: Registering new address record for 192.168.3.102 on eth0.IPv4.
Jun 21 17:31:52 localhost avahi-daemon[17449]: Registering HINFO record with values 'X86_64'/'LINUX'.
Jun 21 17:31:52 localhost avahi-daemon[17449]: Changing host name to 'ftgme2'.
Jun 21 17:31:52 localhost dhclient: bound to 192.168.3.102 -- renewal in 8373 seconds.
Jun 21 17:31:52 localhost ifplugd(eth0)[18026]: client: Determining IP information for eth0... done.
Jun 21 17:31:52 localhost kernel: [ 55.197786] netlink: 33 bytes leftover after parsing attributes.
Jun 21 17:31:52 localhost kernel: [ 55.197789] netlink: 33 bytes leftover after parsing attributes.
Jun 21 17:31:52 localhost kernel: [ 55.197810] netlink: 33 bytes leftover after parsing attributes.
Jun 21 17:31:52 localhost ifplugd(eth0)[18026]: client: 192.168.3.102
Jun 21 17:31:52 localhost ifplugd(eth0)[18026]: client: squid.service - LSB: Starts the squid daemon
Jun 21 17:31:52 localhost ifplugd(eth0)[18026]: client: #011 Loaded: loaded (/etc/rc.d/init.d/squid)
Jun 21 17:31:52 localhost ifplugd(eth0)[18026]: client: #011 Active: inactive (dead)
Jun 21 17:31:52 localhost ifplugd(eth0)[18026]: client: #011 CGroup: name=systemd:/system/squid.service
Jun 21 17:31:52 localhost ifplugd(eth0)[18026]: client: squid: ERROR: No running copy
Jun 21 17:31:53 localhost ifplugd(eth0)[18026]: Program executed successfully.
Jun 21 17:31:53 localhost network-up[18072]: Waiting for network to be up[ OK ]
Jun 21 17:31:53 localhost smb[18494]: Starting SMB services: [ OK ]
Jun 21 17:31:53 localhost dc_server[18536]: Starting dc_server: [ OK ]
Jun 21 17:31:53 localhost ct_sync[18543]: Starting ct_sync: [FAILED]
Jun 21 17:31:53 localhost smb[18494]: Starting NMB services: [ OK ]
Jun 21 17:31:53 localhost jetty[18531]: Starting jetty: [ OK ]
Jun 21 17:31:53 localhost xinetd[18667]: Reading included configuration file: /etc/xinetd.d/cups-lpd [file=/etc/xinetd.conf] [line=15]
Jun 21 17:31:53 localhost xinetd[18592]: Starting xinetd[ OK ]
Jun 21 17:31:53 localhost xinetd[18667]: Reading included configuration file: /etc/xinetd.d/cvs [file=/etc/xinetd.d/cvs] [line=12]
Jun 21 17:31:53 localhost (date-cfg)[18630]: Failed at step EXEC spawning /usr/bin/mailman-update-cfg: No such file or directory
Jun 21 17:31:53 localhost xinetd[18667]: Reading included configuration file: /etc/xinetd.d/proftpd-xinetd [file=/etc/xinetd.d/proftpd-xinetd] [line=13]
Jun 21 17:31:53 localhost xinetd[18667]: Reading included configuration file: /etc/xinetd.d/rexec [file=/etc/xinetd.d/rexec] [line=16]
Jun 21 17:31:53 localhost xinetd[18667]: Reading included configuration file: /etc/xinetd.d/rlogin [file=/etc/xinetd.d/rlogin] [line=14]
Jun 21 17:31:53 localhost xinetd[18667]: Reading included configuration file: /etc/xinetd.d/rsh [file=/etc/xinetd.d/rsh] [line=14]
Jun 21 17:31:53 localhost xinetd[18667]: Reading included configuration file: /etc/xinetd.d/rsync [file=/etc/xinetd.d/rsync] [line=15]
Jun 21 17:31:53 localhost xinetd[18667]: Reading included configuration file: /etc/xinetd.d/saned [file=/etc/xinetd.d/saned] [line=12]
Jun 21 17:31:53 localhost xinetd[18667]: Reading included configuration file: /etc/xinetd.d/sshd-xinetd [file=/etc/xinetd.d/sshd-xinetd] [line=16]
Jun 21 17:31:53 localhost xinetd[18667]: Reading included configuration file: /etc/xinetd.d/tftp [file=/etc/xinetd.d/tftp] [line=16]
Jun 21 17:31:53 localhost xinetd[18667]: removing printer
Jun 21 17:31:53 localhost xinetd[18667]: removing ftp
Jun 21 17:31:53 localhost xinetd[18667]: removing exec
Jun 21 17:31:53 localhost xinetd[18667]: removing login
Jun 21 17:31:53 localhost xinetd[18667]: removing shell
Jun 21 17:31:53 localhost xinetd[18667]: removing ssh
Jun 21 17:31:53 localhost xinetd[18667]: removing tftp
Jun 21 17:31:53 localhost xinetd[18667]: xinetd Version 2.3.15 started with libwrap options compiled in.
Jun 21 17:31:53 localhost xinetd[18667]: Started working: 3 available services
Jun 21 17:31:53 localhost systemd[1]: ct_sync.service: control process exited, code=exited status=1
Jun 21 17:31:53 localhost coherence[18550]: Starting coherence daemon: [ OK ]
Jun 21 17:31:53 localhost hddtemp[18502]: Starting hard disk temperature monitor daemon[ OK ]
Jun 21 17:31:53 localhost systemd[1]: Unit ct_sync.service entered failed state.
Jun 21 17:31:53 localhost systemd[1]: mailman.service: control process exited, code=exited status=203
Jun 21 17:31:53 localhost systemd[1]: Unit mailman.service entered failed state.
Jun 21 17:31:53 localhost mysqld_safe[18705]: 120621 17:31:53 mysqld_safe Logging to '/var/log/mysqld/mysqld.log'.
Jun 21 17:31:53 localhost mysqld_safe[18705]: 120621 17:31:53 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
Jun 21 17:31:53 localhost systemd[1]: PID file /var/run/memcached/11211.pid not readable (yet?) after start.
Jun 21 17:31:53 localhost systemd[1]: tor.service: main process exited, code=exited, status=255
Jun 21 17:31:53 localhost kill[19106]: usage: kill [ -s signal | -p ] [ -a ] pid ...
Jun 21 17:31:53 localhost kill[19106]: kill -l [ signal ]
Jun 21 17:31:53 localhost systemd[1]: tor.service: control process exited, code=exited status=1
Jun 21 17:31:53 localhost kernel: [ 56.095192] FS-Cache: Loaded
Jun 21 17:31:53 localhost kernel: [ 56.147330] RPC: Registered named UNIX socket transport module.
Jun 21 17:31:53 localhost kernel: [ 56.147333] RPC: Registered udp transport module.
Jun 21 17:31:53 localhost kernel: [ 56.147334] RPC: Registered tcp transport module.
Jun 21 17:31:53 localhost kernel: [ 56.147335] RPC: Registered tcp NFSv4.1 backchannel transport module.
Jun 21 17:31:53 localhost mount[18861]: mount.nfs: No such device
Jun 21 17:31:53 localhost mount[19005]: mount.nfs: No such device
Jun 21 17:31:53 localhost mount[18822]: mount.nfs: No such device
Jun 21 17:31:53 localhost mount[18951]: mount.nfs: No such device
Jun 21 17:31:53 localhost mount[18925]: mount.nfs: No such device
Jun 21 17:31:53 localhost mount[18889]: mount.nfs: No such device
Jun 21 17:31:53 localhost systemd[1]: mnt-ftglap.mount mount process exited, code=exited status=32
Jun 21 17:31:53 localhost systemd[1]: Job remote-fs-login.target/start failed with result 'dependency'.
Jun 21 17:31:53 localhost systemd[1]: Job remote-fs.target/start failed with result 'dependency'.
Jun 21 17:31:53 localhost systemd[1]: Unit mnt-ftglap.mount entered failed state.
Jun 21 17:31:53 localhost systemd[1]: mnt-ftglap.data.mount mount process exited, code=exited status=32
Jun 21 17:31:53 localhost systemd[1]: Unit mnt-ftglap.data.mount entered failed state.
Jun 21 17:31:53 localhost systemd[1]: mnt-cauldron.mount mount process exited, code=exited status=32
Jun 21 17:31:53 localhost systemd[1]: Unit mnt-cauldron.mount entered failed state.
Jun 21 17:31:53 localhost systemd[1]: mnt-cooker.mount mount process exited, code=exited status=32
Jun 21 17:31:53 localhost systemd[1]: Unit mnt-cooker.mount entered failed state.
Jun 21 17:31:53 localhost systemd[1]: mnt-plf.mount mount process exited, code=exited status=32
Jun 21 17:31:53 localhost systemd[1]: Unit mnt-plf.mount entered failed state.
Jun 21 17:31:53 localhost systemd[1]: mnt-backups.mount mount process exited, code=exited status=32
Jun 21 17:31:53 localhost systemd[1]: Unit mnt-backups.mount entered failed state.

*** All NFS mounts in fstab have just failed

Jun 21 17:31:53 localhost kernel: [ 56.320524] NFS: Registering the id_resolver key type
Jun 21 17:31:53 localhost kernel: [ 56.320552] FS-Cache: Netfs 'nfs' registered for caching

*** But maybe NFS itself hadn't fully come up yet ?

Jun 21 17:31:53 localhost systemd[1]: tor.service holdoff time over, scheduling restart.
Jun 21 17:31:53 localhost systemd[1]: Unit tor.service entered failed state.
Jun 21 17:31:53 localhost systemd[1]: tor.service: main process exited, code=exited, status=255

...

*** Looks like any NFS mounts attempted through systemd fail

Jun 21 17:38:08 localhost mgaapplet[21198]: Computing new updates...
Jun 21 17:38:08 localhost mgaapplet[21198]: running: urpmi.update -a
Jun 21 17:38:08 localhost urpmi.update: mount /mnt/cauldron
Jun 21 17:38:08 localhost urpmi.update: umount /mnt/cauldron
Jun 21 17:38:09 localhost systemd[1]: Unit mnt-cauldron.mount entered failed state.
Jun 21 17:38:09 localhost mgaapplet[21198]: updating inactive backport media Core Backports, Core Backports Testing, Nonfree Backports, Nonfree Backports Testing, Tainted Backports, Tainted Backports Testing, Core 32bit Backports, Core 32bit Backports Testing
Jun 21 17:38:09 localhost mgaapplet[21198]: running: urpmi.update Core Backports
Jun 21 17:38:09 localhost urpmi.update: mount /mnt/cauldron
Jun 21 17:38:09 localhost urpmi.update: umount /mnt/cauldron
Jun 21 17:38:09 localhost systemd[1]: Unit mnt-cauldron.mount entered failed state.
Jun 21 17:38:09 localhost mgaapplet[21198]: running: urpmi.update Core Backports Testing
Jun 21 17:38:09 localhost urpmi.update: mount /mnt/cauldron
Jun 21 17:38:09 localhost urpmi.update: umount /mnt/cauldron
Jun 21 17:38:09 localhost systemd[1]: Unit mnt-cauldron.mount entered failed state.
Jun 21 17:38:09 localhost mgaapplet[21198]: running: urpmi.update Nonfree Backports
Jun 21 17:38:10 localhost urpmi.update: mount /mnt/cauldron
Jun 21 17:38:10 localhost urpmi.update: umount /mnt/cauldron
Jun 21 17:38:10 localhost systemd[1]: Unit mnt-cauldron.mount entered failed state.
Jun 21 17:38:10 localhost mgaapplet[21198]: running: urpmi.update Nonfree Backports Testing
Jun 21 17:38:10 localhost urpmi.update: mount /mnt/cauldron
Jun 21 17:38:10 localhost urpmi.update: umount /mnt/cauldron
Jun 21 17:38:10 localhost systemd[1]: Unit mnt-cauldron.mount entered failed state.
Jun 21 17:38:10 localhost mgaapplet[21198]: running: urpmi.update Tainted Backports
Jun 21 17:38:11 localhost urpmi.update: mount /mnt/cauldron
Jun 21 17:38:11 localhost urpmi.update: umount /mnt/cauldron
Jun 21 17:38:11 localhost systemd[1]: Unit mnt-cauldron.mount entered failed state.
Jun 21 17:38:11 localhost mgaapplet[21198]: running: urpmi.update Tainted Backports Testing
Jun 21 17:38:11 localhost urpmi.update: mount /mnt/cauldron
Jun 21 17:38:11 localhost urpmi.update: umount /mnt/cauldron
Jun 21 17:38:11 localhost systemd[1]: Unit mnt-cauldron.mount entered failed state.
Jun 21 17:38:11 localhost mgaapplet[21198]: running: urpmi.update Core 32bit Backports
Jun 21 17:38:11 localhost urpmi.update: mount /mnt/cauldron
Jun 21 17:38:12 localhost urpmi.update: umount /mnt/cauldron
Jun 21 17:38:12 localhost systemd[1]: Unit mnt-cauldron.mount entered failed state.
Jun 21 17:38:12 localhost mgaapplet[21198]: running: urpmi.update Core 32bit Backports Testing
Jun 21 17:38:12 localhost urpmi.update: mount /mnt/cauldron
Jun 21 17:38:12 localhost urpmi.update: umount /mnt/cauldron
Jun 21 17:38:12 localhost systemd[1]: Unit mnt-cauldron.mount entered failed state.
Jun 21 17:38:32 localhost mgaapplet[21198]: Packages are up to date

*** Following appears to get logged when I manually umount all NFS and then mount all NFS

Jun 22 08:58:00 localhost kernel: [55424.646307] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
Jun 22 08:58:00 localhost mount[11037]: mount: sunrpc already mounted or /proc/fs/nfsd busy
Jun 22 08:58:00 localhost mount[11037]: mount: according to mtab, nfsd is already mounted on /proc/fs/nfsd
Jun 22 08:58:00 localhost systemd[1]: proc-fs-nfsd.mount mount process exited, code=exited status=32
Jun 22 08:58:00 localhost kernel: [55424.697099] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
Jun 22 08:58:00 localhost kernel: [55424.697112] NFSD: starting 90-second grace period
Jun 22 08:58:01 localhost rpc.mountd[11076]: Version 1.2.6 starting
Jun 22 09:12:18 localhost sensord: Chip: acpitz-virtual-0
Jun 22 09:12:18 localhost sensord: Adapter: Virtual device
Jun 22 09:12:18 localhost sensord: temp1: 40.0 C
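
For context on the mnt-*.mount unit names above: systemd's fstab generator creates one mount unit per /etc/fstab line, named after the escaped mount point, so /mnt/cauldron becomes mnt-cauldron.mount. Below is a minimal sketch of what such NFS fstab entries typically look like; the server name and export paths are placeholders, since the actual fstab is not included in this attachment:

# hypothetical illustration only - server and export paths are not taken from this machine
nfsserver:/export/cauldron   /mnt/cauldron   nfs   defaults,_netdev   0 0
nfsserver:/export/backups    /mnt/backups    nfs   defaults,_netdev   0 0

The _netdev option here only marks an entry as requiring the network so it is ordered after network setup; it does not change how the nfs filesystem modules themselves are brought up.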