Bug 34453 - slurm new security issue CVE-2025-43904
Summary: slurm new security issue CVE-2025-43904
Status: RESOLVED FIXED
Alias: None
Product: Mageia
Classification: Unclassified
Component: Security
Version: 9
Hardware: All Linux
Priority: Normal    Severity: normal
Target Milestone: ---
Assignee: QA Team
QA Contact: Sec team
URL:
Whiteboard: MGA9-64-OK
Keywords: advisory, validated_update
Depends on:
Blocks:
 
Reported: 2025-07-08 17:52 CEST by Nicolas Salguero
Modified: 2025-07-31 19:27 CEST (History)
4 users

See Also:
Source RPM: slurm-24.11.0-2.mga10, slurm-23.02.3-1.mga9
CVE: CVE-2025-43904
Status comment: Fixed upstream in 24.11.5, 24.05.8, and 23.11.11


Attachments

Description Nicolas Salguero 2025-07-08 17:52:24 CEST
Debian has issued an advisory on July 8:
https://lists.debian.org/debian-security-announce/2025/msg00125.html
Nicolas Salguero 2025-07-08 17:54:11 CEST

Whiteboard: (none) => MGA9TOO
CVE: (none) => CVE-2025-43904
Source RPM: (none) => slurm-24.11.0-2.mga10.src.rpm, slurm-23.02.3-1.mga9.src.rpm
Status comment: (none) => Fixed upstream in 24.11.5, 24.05.8, and 23.11.11

Comment 1 Lewis Smith 2025-07-16 20:52:38 CEST
ChrisD normally does version updates for slurm, so assigning to you.

Assignee: bugsquad => eatdirt

Comment 2 Chris Denice 2025-07-25 16:49:13 CEST
Thanks, I'll push fixes or updates!
Comment 3 Chris Denice 2025-07-25 17:30:04 CEST
New versions pushed for both Mageia 9 (23.11.11) and Cauldron (25.05.1)

Here is an advisory for Mageia 9. Note that despite the same leading version number (23.x.y), there is an API change in the library, so the soname major number has been bumped from 39 to 40. We don't have any packages linking to libslurm.so, so that should not be a problem, but, testers, please do a second check.
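As a hedged aid for that second check, one way is to ask rpm which installed packages still require the old soname capability. This is only a sketch: the capability string `libslurm.so.39()(64bit)` is my assumption for the pre-update x86_64 soname, and the script is guarded so it degrades gracefully on a machine without rpm.

```shell
#!/bin/sh
# Sketch: list installed packages that still require the old libslurm
# soname after the 39 -> 40 bump. The capability string is an assumed
# value for x86_64; adjust for 32-bit (drop "()(64bit)").
check_old_soname() {
    old_soname="$1"
    if command -v rpm >/dev/null 2>&1; then
        # --whatrequires matches packages whose Requires include the capability
        rpm -q --whatrequires "$old_soname" 2>/dev/null \
            || echo "nothing installed requires $old_soname"
    else
        echo "rpm not available; run this on the Mageia test system"
    fi
}
check_old_soname 'libslurm.so.39()(64bit)'
```

If nothing is reported, no installed package links the old library and the soname bump is safe to push.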

Suggested advisory:
========================

Updated slurm packages to fix a vulnerability in Slurm's accounting system that would have allowed a Coordinator to promote a user to Administrator (CVE-2025-43904).

========================

Updated packages in core/updates_testing:
========================
lib(64)slurm-devel-23.11.11-1.mga9
lib(64)slurm40-23.11.11-1.mga9
slurm-23.11.11-1.mga9
lib(64)slurm-static-devel-23.11.11-1.mga9

Source RPMs: 
slurm-23.11.11-1.mga9.src.rpm

Assignee: eatdirt => qa-bugs
CC: (none) => eatdirt

katnatek 2025-07-26 05:05:03 CEST

Source RPM: slurm-24.11.0-2.mga10.src.rpm, slurm-23.02.3-1.mga9.src.rpm => slurm-24.11.0-2.mga10, slurm-23.02.3-1.mga9

katnatek 2025-07-26 05:09:31 CEST

Keywords: (none) => advisory

Comment 4 Herman Viaene 2025-07-26 10:59:20 CEST
MGA9-64 server Plasma Wayland on Compaq H000SB
No installation issues.
Apparently no previous update on this, so unleash google.
First thing: slurm is an acronym, where the "s" stands for simple. Seems like a bad joke to me.
Hunting for a simple tutorial brings me to https://blogs.oracle.com/research/post/a-beginners-guide-to-slurm (dear ex-colleagues), but that is not dummy-proof, since at the CLI:
$ sinfo 
slurm_load_partitions: Unable to contact slurm controller (connect failure)
A search on this error finds: "Unable to contact slurm controller (connect failure)" indicates that the sinfo command, which is used to query the Slurm workload manager, cannot reach the slurm controller process. This typically means the slurmctld daemon is not running.
So
# systemctl start slurmctld
# systemctl -l status slurmctld
● slurmctld.service - Slurm controller daemon
     Loaded: loaded (/usr/lib/systemd/system/slurmctld.service; disabled; preset: disabled)
     Active: active (running) since Sat 2025-07-26 10:39:13 CEST; 5s ago
    Process: 38248 ExecStart=/usr/sbin/slurmctld $SLURMCTLD_OPTIONS (code=exited, status=0/SUCCESS)
   Main PID: 38250 (slurmctld)
      Tasks: 18
     Memory: 3.6M
        CPU: 136ms
     CGroup: /system.slice/slurmctld.service
             ├─38250 /usr/sbin/slurmctld
             └─38251 "slurmctld: slurmscriptd"

Jul 26 10:39:13 mach3.hviaene.thuis slurmctld[38250]: error: NOTE: Trying backup state save file. Reservations may be lost
Jul 26 10:39:13 mach3.hviaene.thuis slurmctld[38250]: No reservation state file (/var/spool/slurmctld/resv_state.old) to recover
Jul 26 10:39:13 mach3.hviaene.thuis slurmctld[38250]: error: Could not open trigger state file /var/spool/slurmctld/trigger_state: No such file or d>
Jul 26 10:39:13 mach3.hviaene.thuis slurmctld[38250]: error: NOTE: Trying backup state save file. Triggers may be lost!
Jul 26 10:39:13 mach3.hviaene.thuis slurmctld[38250]: No trigger state file (/var/spool/slurmctld/trigger_state.old) to recover
Jul 26 10:39:13 mach3.hviaene.thuis slurmctld[38250]: read_slurm_conf: backup_controller not specified
Jul 26 10:39:13 mach3.hviaene.thuis slurmctld[38250]: Reinitializing job accounting state
Jul 26 10:39:13 mach3.hviaene.thuis slurmctld[38250]: select/cons_tres: select_p_reconfigure: select/cons_tres: reconfigure
Jul 26 10:39:13 mach3.hviaene.thuis slurmctld[38250]: select/cons_tres: part_data_create_array: select/cons_tres: preparing for 1 partitions
Jul 26 10:39:13 mach3.hviaene.thuis slurmctld[38250]: Running as primary controller

Then I get
$ sinfo 
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
debug*       up   infinite      1   unk* localhost

and
$ sview
gives me a window where I find the same info (and a little more). I can click and see the various tabs.
If someone with more knowledge can do more tests, please do. But in the meantime I set the OK.
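For future testers, the steps above could be scripted into a minimal single-node smoke test. This is a sketch, not a definitive procedure: it assumes a default single-node slurm.conf, and the "unk*" node state seen above often just means the compute-node daemon (slurmd) was not started alongside slurmctld. The script is guarded so it only runs the Slurm commands where Slurm is installed.

```shell
#!/bin/sh
# Sketch of a single-node Slurm smoke test (assumes default slurm.conf).
slurm_smoke_test() {
    if command -v sinfo >/dev/null 2>&1; then
        # Start both daemons: the controller and the compute-node daemon.
        # Without slurmd, sinfo reports the node state as unknown ("unk*").
        systemctl start slurmctld slurmd 2>/dev/null
        sinfo                    # node should report idle rather than unk*
        srun -N1 hostname        # run a trivial job through the scheduler
    else
        echo "slurm not installed on this host"
    fi
}
slurm_smoke_test
```

If `srun -N1 hostname` prints the node's hostname, the scheduler path (controller, node daemon, job launch) works end to end.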

Whiteboard: MGA9TOO => MGA9TOO, MGA9-64-OK
CC: (none) => herman.viaene

Comment 5 Thomas Andrews 2025-07-31 02:44:53 CEST
Cauldron fixed in comment 3, changing this to a Mageia 9 bug, and validating.

CC: (none) => andrewsfarm, sysadmin-bugs
Keywords: (none) => validated_update
Whiteboard: MGA9TOO, MGA9-64-OK => MGA9-64-OK
Version: Cauldron => 9

Comment 6 Mageia Robot 2025-07-31 19:27:18 CEST
An update for this issue has been pushed to the Mageia Updates repository.

https://advisories.mageia.org/MGASA-2025-0215.html

Resolution: (none) => FIXED
Status: NEW => RESOLVED

