Debian has issued an advisory on July 8: https://lists.debian.org/debian-security-announce/2025/msg00125.html
Whiteboard: (none) => MGA9TOO
CVE: (none) => CVE-2025-43904
Source RPM: (none) => slurm-24.11.0-2.mga10.src.rpm, slurm-23.02.3-1.mga9.src.rpm
Status comment: (none) => Fixed upstream in 24.11.5, 24.05.8, and 23.11.11
ChrisD normally does version updates for slurm, so assigning to you.
Assignee: bugsquad => eatdirt
Thanks, I'll push fixes or updates!
New versions pushed for both Mageia 9 (23.11.11) and Cauldron (25.05.1). Here is an advisory for Mageia 9.

Note that despite the same leading version number (23.x.y), there is an API change in the library, so the library major number has been bumped from 39 to 40. We don't have any packages linking to libslurm.so, so that should not be a problem, but testers, please double-check.

Suggested advisory:
========================
Updated slurm packages fix a vulnerability in Slurm's accounting system that would have allowed a Coordinator to promote a user to Administrator (CVE-2025-43904).
========================

Updated packages in core/updates_testing:
========================
lib(64)slurm-devel-23.11.11-1.mga9
lib(64)slurm40-23.11.11-1.mga9
slurm-23.11.11-1.mga9
lib(64)slurm-static-devel-23.11.11-1.mga9

Source RPMs:
slurm-23.11.11-1.mga9.src.rpm
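For the double-check suggested above (making sure nothing on the system links against the old libslurm.so.39), a minimal sketch using standard ldd is below. The function name `links_soname` is illustrative, not part of the update; on a test box without slurm you can substitute any soname (e.g. libc.so) to see the mechanism work.

```shell
#!/bin/sh
# Sketch: report which binaries in a directory link against a given
# shared-library soname, e.g. libslurm.so from this advisory.
links_soname() {
    soname="$1"
    dir="$2"
    for bin in "$dir"/*; do
        [ -f "$bin" ] || continue
        # ldd lists a binary's dynamic dependencies; grep for the soname
        ldd "$bin" 2>/dev/null | grep -q "$soname" && echo "$bin"
    done
}

# Example on a library every system has: /bin holds binaries linking libc
links_soname libc.so /bin | head -n 3
```

Running it with `libslurm.so` over /usr/bin and /usr/sbin should print nothing except slurm's own tools, which is what the comment above expects.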
Assignee: eatdirt => qa-bugs
CC: (none) => eatdirt
Source RPM: slurm-24.11.0-2.mga10.src.rpm, slurm-23.02.3-1.mga9.src.rpm => slurm-24.11.0-2.mga10, slurm-23.02.3-1.mga9
Keywords: (none) => advisory
MGA9-64 server Plasma Wayland on Compaq H000SB
No installation issues.
Apparently no previous update on this, so unleash Google. First thing: slurm is an acronym, where "s" stands for simple. Seems like a bad joke to me. Hunting for a simple tutorial brings me to https://blogs.oracle.com/research/post/a-beginners-guide-to-slurm (dear ex-colleagues), but that is not dummy-proof, since at the CLI:
$ sinfo
slurm_load_partitions: Unable to contact slurm controller (connect failure)
Found on this error: "Unable to contact slurm controller (connect failure)" indicates that the sinfo command, which is used to query the Slurm workload manager, cannot reach the slurm controller process. This typically means the slurmctld daemon is not running.
So:
# systemctl start slurmctld
# systemctl -l status slurmctld
● slurmctld.service - Slurm controller daemon
     Loaded: loaded (/usr/lib/systemd/system/slurmctld.service; disabled; preset: disabled)
     Active: active (running) since Sat 2025-07-26 10:39:13 CEST; 5s ago
    Process: 38248 ExecStart=/usr/sbin/slurmctld $SLURMCTLD_OPTIONS (code=exited, status=0/SUCCESS)
   Main PID: 38250 (slurmctld)
      Tasks: 18
     Memory: 3.6M
        CPU: 136ms
     CGroup: /system.slice/slurmctld.service
             ├─38250 /usr/sbin/slurmctld
             └─38251 "slurmctld: slurmscriptd"

Jul 26 10:39:13 mach3.hviaene.thuis slurmctld[38250]: error: NOTE: Trying backup state save file. Reservations may be lost
Jul 26 10:39:13 mach3.hviaene.thuis slurmctld[38250]: No reservation state file (/var/spool/slurmctld/resv_state.old) to recover
Jul 26 10:39:13 mach3.hviaene.thuis slurmctld[38250]: error: Could not open trigger state file /var/spool/slurmctld/trigger_state: No such file or d>
Jul 26 10:39:13 mach3.hviaene.thuis slurmctld[38250]: error: NOTE: Trying backup state save file. Triggers may be lost!
Jul 26 10:39:13 mach3.hviaene.thuis slurmctld[38250]: No trigger state file (/var/spool/slurmctld/trigger_state.old) to recover
Jul 26 10:39:13 mach3.hviaene.thuis slurmctld[38250]: read_slurm_conf: backup_controller not specified
Jul 26 10:39:13 mach3.hviaene.thuis slurmctld[38250]: Reinitializing job accounting state
Jul 26 10:39:13 mach3.hviaene.thuis slurmctld[38250]: select/cons_tres: select_p_reconfigure: select/cons_tres: reconfigure
Jul 26 10:39:13 mach3.hviaene.thuis slurmctld[38250]: select/cons_tres: part_data_create_array: select/cons_tres: preparing for 1 partitions
Jul 26 10:39:13 mach3.hviaene.thuis slurmctld[38250]: Running as primary controller

Then I get:
$ sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
debug*       up   infinite      1   unk* localhost
and $ sview gives me a window where I find the same info (and a little more). I can click and see the various tabs.
If someone with more knowledge can do more tests, please do. But in the meantime I set the OK.
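The essential check in the transcript above is that, once slurmctld is running, sinfo reports the partition and a node state. A minimal sketch of what to eyeball (the sample output is copied from this comment; awk is standard) pulls out the partition and STATE columns:

```shell
#!/bin/sh
# Sample sinfo output, copied verbatim from the test report above
sinfo_out='PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
debug*       up   infinite      1   unk* localhost'

# Skip the header line, then print partition ($1) and state ($5)
printf '%s\n' "$sinfo_out" | awk 'NR > 1 { print $1, "state:", $5 }'
```

Here the state is unk* (unknown), which is plausible for a freshly started controller on a single untested host; a fully configured node would report something like idle or alloc.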
Whiteboard: MGA9TOO => MGA9TOO, MGA9-64-OK
CC: (none) => herman.viaene
Cauldron fixed in comment 3, changing this to a Mageia 9 bug, and validating.
CC: (none) => andrewsfarm, sysadmin-bugs
Keywords: (none) => validated_update
Whiteboard: MGA9TOO, MGA9-64-OK => MGA9-64-OK
Version: Cauldron => 9
An update for this issue has been pushed to the Mageia Updates repository. https://advisories.mageia.org/MGASA-2025-0215.html
Resolution: (none) => FIXED
Status: NEW => RESOLVED