Reference: https://www.openwall.com/lists/oss-security/2026/01/21/6
Fix for 20.2.x: https://github.com/ceph/ceph/pull/66140
Fix for 18.2.x: https://github.com/ceph/ceph/pull/66142
Status comment: (none) => Fixed upstream in 20.2.1 and 18.2.9 and patches available from upstream
Flags: (none) => affects_mga9+
Source RPM: (none) => ceph-20.2.0-1.mga10.src.rpm, ceph-18.2.7-2.1.mga9.src.rpm
Whiteboard: (none) => MGA9TOO
CVE: (none) => CVE-2024-31884
Assignee: bugsquad => eatdirt
Thanks for the links!
ceph-18.2.7-2.2.mga9 landing on updates_testing for mga9, fixing CVE-2024-31884.

@QA-teams, see https://bugs.mageia.org/show_bug.cgi?id=28538 for minimal consistency tests (you're not expected to deploy a ceph cluster).

========================

Updated ceph packages fix a security issue that allowed an attacker to have Ceph accept any certificate.

References:
https://www.openwall.com/lists/oss-security/2026/01/21/6

========================
Updated packages in core/updates_testing:
========================
ceph-18.2.7-2.2.mga9
ceph-fuse-18.2.7-2.2.mga9
ceph-immutable-object-cache-18.2.7-2.2.mga9
ceph-mds-18.2.7-2.2.mga9
ceph-mgr-18.2.7-2.2.mga9
ceph-mirror-18.2.7-2.2.mga9
ceph-mon-18.2.7-2.2.mga9
ceph-osd-18.2.7-2.2.mga9
ceph-osd-debuginfo-18.2.7-2.2.mga9
ceph-radosgw-18.2.7-2.2.mga9
ceph-rbd-18.2.7-2.2.mga9
lib64ceph2-18.2.7-2.2.mga9
lib64ceph-devel-18.2.7-2.2.mga9
lib64rados2-18.2.7-2.2.mga9
lib64rados-devel-18.2.7-2.2.mga9
lib64radosstriper1-18.2.7-2.2.mga9
lib64radosstriper-devel-18.2.7-2.2.mga9
lib64rbd1-18.2.7-2.2.mga9
lib64rbd-devel-18.2.7-2.2.mga9
lib64rgw2-18.2.7-2.2.mga9
lib64rgw-devel-18.2.7-2.2.mga9
python3-ceph-18.2.7-2.2.mga9
python3-rados-18.2.7-2.2.mga9
python3-rbd-18.2.7-2.2.mga9
python3-rgw-18.2.7-2.2.mga9

from ceph-18.2.7-2.2.mga9.src.rpm
Assignee: eatdirt => qa-bugs
CC: (none) => eatdirt
Keywords: (none) => advisory
Source RPM: ceph-20.2.0-1.mga10.src.rpm, ceph-18.2.7-2.1.mga9.src.rpm => ceph-20.2.0-1.mga10, ceph-18.2.7-2.1.mga9
Installed everything except the debuginfo package without issues.

# ceph
Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)')

# ceph --help
Ends with the same message after printing the help text.

# ceph-volume -h
Produces the help, but still complains about the lack of configuration:
Log Path: /var/log/ceph
Ceph Conf: Unable to load expected Ceph config at: /etc/ceph/ceph.conf

Looks OK, based on previous rounds.
Just adding a few notes to this:

Went as far as I could before updating, then updated all but the debuginfo package. Documentation at /usr/share/doc/ceph/README.mageia. Cleared /etc/ceph/ after initial tests before updating.

Misquoting Red Hat documentation at https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/installation_guide/using-the-command-line-interface-to-install-the-ceph-software#monitor-bootstrapping

# touch /etc/ceph/ceph.conf
# echo "[global]" > /etc/ceph/ceph.conf
# echo "fsid = `uuidgen`" >> /etc/ceph/ceph.conf
# echo "mon initial members = this,that,other" >> /etc/ceph/ceph.conf
# cat /etc/ceph/ceph.conf
[global]
fsid = f19c92f9-08fa-4dac-9c15-153e8cba6784
mon initial members = this,that,other
# ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
creating /tmp/ceph.mon.keyring

No idea how to proceed beyond this point:

# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n mgr.lcl --set-uid=1000 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
ceph-authtool: unexpected '--set-uid=1000'
# ls /var/log/path
ls: cannot access '/var/log/path': No such file or directory

The tools seem to work OK, but without any real understanding of the system I would agree with katnatek in comment 4.
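For anyone repeating this test, here is a minimal sketch of the same bootstrap preamble. It assumes a throwaway directory instead of /etc/ceph (use the real path on an actual node), and it omits --set-uid, which newer ceph-authtool no longer accepts and which caused the "unexpected '--set-uid=1000'" error above. The entity name client.admin follows the Red Hat guide; the guarded ceph-authtool call only runs where the tool is installed.

```shell
#!/bin/sh
# Sketch under assumptions: CONF_DIR is a scratch path, not /etc/ceph.
set -e
CONF_DIR="${CONF_DIR:-/tmp/ceph-test}"
mkdir -p "$CONF_DIR"

# Build the minimal [global] section in one heredoc instead of repeated echo.
# Fall back to a zero fsid if uuidgen is unavailable on this host.
cat > "$CONF_DIR/ceph.conf" <<EOF
[global]
fsid = $(uuidgen 2>/dev/null || echo 00000000-0000-0000-0000-000000000000)
mon initial members = this,that,other
EOF

# Keyring step, without the removed --set-uid option; skipped when
# ceph-authtool is not installed so the sketch stays runnable anywhere.
if command -v ceph-authtool >/dev/null 2>&1; then
    ceph-authtool --create-keyring "$CONF_DIR/ceph.client.admin.keyring" \
        --gen-key -n client.admin \
        --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *'
fi

echo "wrote $CONF_DIR/ceph.conf"
```

With the config in place, plain `ceph` invocations should at least get past the conf_read_file error seen in comment 4 (though they will still need a reachable monitor).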
CC: (none) => tarazed25
Forgot test from comment 4:

# ceph-volume -h
usage: ceph-volume [-h] [--cluster CLUSTER]
                   [--log-level {debug,info,warning,error,critical}]
                   [--log-path LOG_PATH]

ceph-volume: Deploy Ceph OSDs using different device technologies like lvm or physical disks.

Log Path: /var/log/ceph
Ceph Conf: /etc/ceph/ceph.conf

Available subcommands:

lvm        Use LVM and LVM-based technologies to deploy OSDs
simple     Manage already deployed OSDs with ceph-volume
raw        Manage single-device OSDs on raw block devices
inventory  Get this nodes available disk inventory
........
Source RPM: ceph-20.2.0-1.mga10, ceph-18.2.7-2.1.mga9 => ceph-18.2.7-2.1.mga9
Version: Cauldron => 9
Status comment: Fixed upstream in 20.2.1 and 18.2.9 and patches available from upstream => (none)
Whiteboard: MGA9TOO => (none)
Flags: affects_mga9+ => (none)
Whiteboard: (none) => MGA9-64-OK
CC: (none) => andrewsfarm
Validating.
Keywords: (none) => validated_update
CC: (none) => sysadmin-bugs
An update for this issue has been pushed to the Mageia Updates repository. https://advisories.mageia.org/MGASA-2026-0025.html
Status: NEW => RESOLVED
Resolution: (none) => FIXED