Bug 16391 - at boot sequence splash displays errors "systemd-journald[953]: Failed to write entry..." numerous times
Status: RESOLVED WORKSFORME
Alias: None
Product: Mageia
Classification: Unclassified
Component: RPM Packages
Version: 5
Hardware: x86_64 Linux
Priority: Normal
Target Milestone: ---
Assignee: Base system maintainers
QA Contact:
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2015-07-16 00:25 CEST by igor ivanov
Modified: 2017-08-08 22:28 CEST
CC List: 4 users

See Also:
Source RPM: systemd
CVE:
Status comment:


Attachments
this is the result of the dmesg command (116.56 KB, application/octet-stream)
2015-07-16 00:29 CEST, igor ivanov
this is the result of the dmesg command (116.56 KB, text/plain)
2015-07-16 00:31 CEST, igor ivanov
output of "journalctl -ab > journal.txt" (410.06 KB, text/plain)
2016-10-31 00:22 CET, igor ivanov
output of "journalctl --verify" (9.99 KB, text/plain)
2016-10-31 00:23 CET, igor ivanov

Description igor ivanov 2015-07-16 00:25:09 CEST
Description of problem:
At the end of the boot sequence, the splash screen displays:

..............................................
[  119.444400] systemd-journald[953]: Failed to write entry (23 items, 514 bytes), ignoring: Bad address
[  119.486266] systemd-journald[953]: Failed to write entry (23 items, 514 bytes), ignoring: Bad address
[  120.464464] systemd-journald[953]: Failed to write entry (20 items, 934 bytes), ignoring: Bad address
[  120.464583] systemd-journald[953]: Failed to write entry (20 items, 983 bytes), ignoring: Bad address
[  133.315125] systemd-journald[953]: Failed to write entry (21 items, 534 bytes), ignoring: Bad address
[  133.315252] systemd-journald[953]: Failed to write entry (21 items, 489 bytes), ignoring: Bad address
[  133.316398] systemd-journald[953]: Failed to write entry (21 items, 528 bytes), ignoring: Bad address
[  142.622324] systemd-journald[953]: Failed to write entry (21 items, 509 bytes), ignoring: Bad address
...............................................


The full set of messages can be found in the dmesg output.

Version-Release number of selected component (if applicable):


How reproducible: I can't say


Steps to Reproduce: unknown


Comment 1 igor ivanov 2015-07-16 00:29:46 CEST
Created attachment 6848 [details]
this is the result of the dmesg command
Comment 2 igor ivanov 2015-07-16 00:31:27 CEST
Created attachment 6849 [details]
this is the result of the dmesg command
Comment 3 igor ivanov 2015-07-16 00:34:44 CEST
ignore attachment 6848 [details], and consider only attachment 6849 [details]
Barry Jackson 2015-07-16 00:53:40 CEST

CC: (none) => zen25000
Attachment 6848 is obsolete: 0 => 1

Comment 4 Marja Van Waes 2016-10-30 20:03:49 CET
Hi Igor,

Sorry for the very late reply. We are short on active BugSquad members.

Is this bug still valid in fully updated Mageia 5 or cauldron?

If so:

* Does this occur in an installed Mageia, or using a Live iso in Live mode, or...?
* How much free space is left on the partition where /var/log/journal/ resides?
* Please attach verify.txt that is the result of (as root):
       journalctl --verify > verify.txt
* If running "journalctl -ab" (as root) works, then please attach journal.txt
  that is the result of:
       journalctl -ab > journal.txt
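
For reference, the free-space check asked about above can be done like this (a sketch; the path assumes the default persistent journal location /var/log/journal):

```shell
# Show free space on the partition holding the persistent journal
df -h /var/log/journal

# journald's own view of how much disk space the journals occupy
journalctl --disk-usage
```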

(It is unclear to me why the Release component was chosen, changing it to RPM Packages)

Keywords: (none) => NEEDINFO
CC: sysadmin-bugs => marja11
Component: Release (media or process) => RPM Packages
Source RPM: (none) => systemd

Comment 5 igor ivanov 2016-10-31 00:22:00 CET
Created attachment 8611 [details]
output of "journalctl -ab > journal.txt"
Comment 6 igor ivanov 2016-10-31 00:23:03 CET
Created attachment 8612 [details]
output of "journalctl --verify"
Comment 7 igor ivanov 2016-10-31 00:32:43 CET
Hi Marja,
I usually back up my system incrementally with the "dump" command. Sometimes, for one reason or another, I end up wiping my whole system; in that case I restore it with the "restore" command. After restoring, this phenomenon occurs on the next 6 or 7 boots, then vanishes.
The command "journalctl -ab > journal.txt" worked, but "journalctl --verify > verify.txt" did not; for the latter I simply ran "journalctl --verify" in a terminal and pasted the output.
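
For context, a backup/restore cycle with dump/restore typically looks like this (a sketch; the device paths, dump level, and mount points are placeholders, not taken from this report):

```shell
# Level-0 (full) dump of the root filesystem; -u records it in /etc/dumpdates
# so later incremental levels know what changed
dump -0u -f /mnt/backup/root.0.dump /

# After recreating the filesystem, restore the dump from inside the new root
cd /mnt/newroot
restore -rf /mnt/backup/root.0.dump
```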
Comment 8 Marja Van Waes 2017-01-04 19:15:18 CET

(In reply to igor ivanov from comment #7)
> Hi Marja,
> I usually incrementally backup my system with the "dump" command; sometimes,
> for some reason, I am led to remove the totality of my system; in this case
> I restore it with the "restore" command; after restoring, at the 6 or 7
> following boots, this phenomenon occurs, and vanishes after.

I can't find it now, but I'm pretty sure I've seen another (non-Mageia) report that this happens every time after restoring the system from a backup.

Anyway, assigning to basesystem maintainer group and CC'ing the systemd maintainer.

Keywords: NEEDINFO => (none)
CC: (none) => mageia
Assignee: bugsquad => basesystem

Comment 9 Florian Hubold 2017-08-08 22:28:43 CEST
(In reply to Marja van Waes from comment #8)

> I can't find it now, but I'm pretty sure I've seen another (non-Mageia)
> report that this happens every time after restoring the system from a backup.

Well, there are a lot of such reports; basically, you need to restart systemd-journald after you restore from backup. I believe there's nothing we can fix downstream, and how would systemd magically know that it has been restored from a backup?

See
https://bugzilla.redhat.com/show_bug.cgi?id=1069828
https://access.redhat.com/discussions/2100681
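
A minimal sketch of the workaround described above, assuming it is run as root right after the restore completes:

```shell
# Restart journald so it reopens the journal files that were restored
# from backup instead of writing through stale file mappings
systemctl restart systemd-journald

# On newer systemd versions, the journal files can also be rotated explicitly
journalctl --rotate
```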

@Igor: As we have no way to reproduce this, I'm going to close this one. If it still happens on every boot, or if it happens with a fresh installation, then please feel free to reopen. But as "journalctl --verify" shows nearly every journal file as corrupted, I don't believe that's the case. For comparison, my journald logs go back to 2015 and I don't have a single such "Failed to write entry" error in the log.

On a related note, you should really take a look at the SMART data for both of your hard disks: there are some offline-uncorrectable and some pending sectors, which may affect the health of the filesystem and might well play into this bug ...
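
Checking those SMART attributes can be sketched as follows (smartmontools assumed installed; /dev/sda is a placeholder, repeat for each disk):

```shell
# Overall health self-assessment plus the vendor attribute table (as root)
smartctl -H -A /dev/sda

# Attributes worth watching here:
#   197 Current_Pending_Sector
#   198 Offline_Uncorrectable
```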

Resolution: (none) => WORKSFORME
Status: NEW => RESOLVED
CC: (none) => doktor5000

