Description of problem:
Backuppc-3.2.1-2 uses native systemd. After an update from backuppc-3.2.1-1.3 (pre-systemd), backuppc does not start automatically after a reboot. The workaround is to run, manually, after the update:
systemctl enable backuppc.service
A fresh install (rather than an update) of backuppc works as it should: the service is created "enabled" and starts automatically at boot.

Version-Release number of selected component (if applicable):
backuppc-3.2.1-1.3, resp. backuppc-3.2.1-2, and rpm-helper-0.24.8-1

How reproducible:
100% for me; not fully clarified for other users.

Steps to Reproduce:
1. Force-install backuppc-3.2.1-1.3
2. Update to backuppc-3.2.1-2
3. Verify with "systemctl show backuppc.service" (the output says "Loaded: loaded (/lib/systemd/system/backuppc.service; disabled)" rather than "... enabled)")

Note: the update from 3.2.1-1.3 to 3.2.1-2 produces the following warning (there is no warning when doing a clean install):
warning: %post(backuppc-3.2.1-2.mga2.i586) scriptlet failed, exit status 1

I am the maintainer of backuppc, but I need help on this (the update for systemd support was done by Guillomovitch). Including as an attachment the output of rpm -Uvvvvh <backuppc-package-file> (see the lines containing rpm-helper; it looks like the service is in fact enabled, but then gets disabled again).
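For scripted verification of step 3, the enablement state can be pulled out of the "Loaded:" line quoted above. A minimal sketch; the helper name is hypothetical and not part of any tool mentioned in this report:

```shell
# Hypothetical helper: extract the enablement state ("enabled"/"disabled")
# from a "Loaded:" line as quoted in the report.
parse_enablement() {
    printf '%s\n' "$1" | sed -n 's/.*; *\([a-z]*\))$/\1/p'
}

# Example with the line quoted in step 3:
parse_enablement "Loaded: loaded (/lib/systemd/system/backuppc.service; disabled)"
```

On a live system, `systemctl is-enabled backuppc.service` answers the same question directly.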
Created attachment 2098 [details] Output from rpm -Uvvvvh <backuppc-package-file>
CC: (none) => guillomovitch, mageia
Blocks: (none) => 2120
Attachment 2098 mime type: application/octet-stream => text/plain
OK, so I can reproduce the issue. It's quite limited in scope, so I don't think it's a practical problem. The scenario was:
1. Install the old version. Ensure it's enabled and reboot.
2. Confirm it's running on boot.
3. Do the upgrade.
4. The migration to systemd is successful, but when the service is restarted, systemd cannot detect the processes of the service and thus cannot stop the currently running service.
5. Reboot and confirm the service is started and runs fine.

Now, the reason systemd does not detect and track the processes when the service was started under sysvinit is that the sysvinit script uses su to drop user privileges. Anything started via su is tracked as a user session, so it escapes the service's cgroup and is accounted to that user session instead. Thus, when systemd tries to restart the service, it doesn't kill the currently running daemon first; it tries to start a new one, and that fails. When running under systemd, dropping user privileges is handled internally in a much nicer way, and the service's processes can be tracked fine.

So this will only happen when already running on systemd itself, i.e. it will not affect an mga1 -> mga2 upgrade. In the larger view of things it's likely not a big problem. If we wanted to be 100% robust, we'd have to do something in %pre to detect the current status of the service (running or stopped), record that in a state file, and stop the service; then, in %post, check the state file and start the service again. As this cannot happen automatically, I'm not sure it's worth the effort.
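The %pre/%post idea described above could look roughly like the following spec-file fragment. This is only a sketch: the state-file path and the scriptlet details are assumptions for illustration, not what the backuppc package or rpm-helper actually does.

```
%pre
# Sketch: on upgrade ($1 > 1), remember whether the service was running,
# then stop it so the old su-spawned processes go away cleanly.
# (State-file path is hypothetical.)
if [ $1 -gt 1 ]; then
    if /bin/systemctl is-active backuppc.service >/dev/null 2>&1; then
        touch /var/lib/rpm-state/backuppc.was-running
    fi
    /bin/systemctl stop backuppc.service >/dev/null 2>&1 || :
fi

%post
# Sketch: restart the service only if it was running before the upgrade.
if [ -e /var/lib/rpm-state/backuppc.was-running ]; then
    rm -f /var/lib/rpm-state/backuppc.was-running
    /bin/systemctl start backuppc.service >/dev/null 2>&1 || :
fi
```

The `$1` argument to RPM scriptlets is the count of installed package instances, which is how %pre distinguishes an upgrade from a first install.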
Thanks. I think it was important to clarify that this problem is harmless and that a workaround exists. If I understand correctly, the problem only happens during the phase when certain service packages are updated to a new version with systemd support: it should fade out once the majority of packages, resp. users, have done the update. And only users (cauldron users aside) who upgrade from Mageia 1, rather than doing a fresh install, are affected. That leaves those users who have Mageia 1 "canned" and will upgrade some time later, when this issue has been forgotten. Probably the best approach is to close the bug and possibly, as a complement to avoid duplicates of this bug report popping up, add a short note to the errata - is the errata the right place?
Just to clarify, the problem only happens in cauldron (because we're already running systemd). It will not affect an mga1 -> mga2 upgrade. Going forward, this could be a problem (i.e. when we do an mga2 -> mga3 upgrade), but that will be somewhat trickier anyway. Assuming everything works for you after a reboot, I think we can close.
OK due to the above, I consider this problem something that affects cauldron only and thus not a major concern for mga1->mga2 transition. We'll likely have to handle this better for mga2->mga3 upgrade, but that can wait :D
Status: NEW => RESOLVED
Resolution: (none) => FIXED