CVE-2025-52555 was announced here: https://www.openwall.com/lists/oss-security/2025/06/26/1
The follow-up at https://www.openwall.com/lists/oss-security/2025/06/27/1 seems to indicate the proposed fix introduces problems.
Whiteboard: (none) => MGA9TOO
CVE: (none) => CVE-2025-52555
Source RPM: (none) => ceph-19.2.2-1.mga10.src.rpm, ceph-18.1.1-1.1.mga9.src.rpm
"It is patched via 17.2.8 <https://github.com/ceph/ceph/pull/60314> , 18.2.5, and 19.2.3 in upstream Ceph" but, well spotted Nicolas, the 2nd URL does question the change. That needs to be followed, it is new (26th June) and has not been answered yet - or I cannot see the answer. Assigning globally, CC'ing ChrisD who put up the latest version.
Assignee: bugsquad => pkg-bugs
CC: (none) => eatdirt
Thanks, I'll follow that!
A fix has been merged, see:
https://github.com/ceph/ceph/commit/64f0d786a078a79843c1c1da9cae5e2e603371af
I'll push a new version of ceph for mga9 with that commit included.
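Once the rebuilt package is available, a quick way to confirm the fix really is included should be to look for the CVE in the package changelog (a sketch only; the exact wording of the %changelog entry is an assumption on my part):

  rpm -q --changelog ceph | grep -i -B1 -A2 52555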
ceph-18.2.7 landing on updates_testing for mga9, fixing CVE-2025-52555.
@QA-teams, see https://bugs.mageia.org/show_bug.cgi?id=28538 for minimal consistency tests; you're not expected to deploy a ceph cluster.
NB: the Cauldron fix will come later with another update.

Advisory:
========================

Updated ceph packages fix a security regression (CVE-2025-52555) that would have allowed a user to read, write and execute in any directory owned by root, as long as they chmod it to 777.

References:
https://www.openwall.com/lists/oss-security/2025/06/26/1
https://github.com/ceph/ceph/commit/64f0d786a078a79843c1c1da9cae5e2e603371af
========================

Updated packages in core/updates_testing:
========================
ceph-18.2.7-1.mga9
ceph-fuse-18.2.7-1.mga9
ceph-immutable-object-cache-18.2.7-1.mga9
ceph-mds-18.2.7-1.mga9
ceph-mgr-18.2.7-1.mga9
ceph-mirror-18.2.7-1.mga9
ceph-mon-18.2.7-1.mga9
ceph-osd-18.2.7-1.mga9
ceph-radosgw-18.2.7-1.mga9
ceph-rbd-18.2.7-1.mga9
lib64ceph2-18.2.7-1.mga9
lib64ceph-devel-18.2.7-1.mga9
lib64rados2-18.2.7-1.mga9
lib64rados-devel-18.2.7-1.mga9
lib64radosstriper1-18.2.7-1.mga9
lib64radosstriper-devel-18.2.7-1.mga9
lib64rbd1-18.2.7-1.mga9
lib64rbd-devel-18.2.7-1.mga9
lib64rgw2-18.2.7-1.mga9
lib64rgw-devel-18.2.7-1.mga9
python3-ceph-18.2.7-1.mga9
python3-rados-18.2.7-1.mga9
python3-rbd-18.2.7-1.mga9
python3-rgw-18.2.7-1.mga9

from ceph-18.2.7-1.mga9.src.rpm
Assignee: pkg-bugs => qa-bugs
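For QA, a minimal install-and-smoke-test sequence could look like the sketch below (the package selection is just an example, and none of these commands need a running cluster). With the core/updates_testing medium enabled:

  urpmi ceph ceph-mon ceph-osd ceph-mds ceph-mgr
  rpm -q ceph          # should report ceph-18.2.7-1.mga9
  ceph --version       # prints the installed version, no cluster contact needed
  ceph-volume -h       # prints usage, as in bug 28538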
MGA9-64 server Plasma Wayland on Compaq H000SB. No installation issues.
Ref bug 33896 Comment 3 and bug 29871 Comment 3.

# ceph-create-keys
/usr/sbin/ceph-create-keys: This tool is obsolete; mons now create these keys on their own, and
/usr/sbin/ceph-create-keys: this tool does nothing except print this message.
/usr/sbin/ceph-create-keys: It will be removed in the next release. Please fix your script/tool.

[root@mach3 ~]# ceph-volume -h
usage: ceph-volume [-h] [--cluster CLUSTER] [--log-level {debug,info,warning,error,critical}] [--log-path LOG_PATH]

ceph-volume: Deploy Ceph OSDs using different device technologies like lvm or physical disks.

Log Path: /var/log/ceph
Ceph Conf: Unable to load expected Ceph config at: /etc/ceph/ceph.conf
and some more...

Checked /etc/ceph: the folder is there, but it is empty. I would expect a default conf file there.
CC: (none) => herman.viaene
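On the empty /etc/ceph: as far as I know the package does not ship a default configuration; ceph.conf is normally generated when a cluster is bootstrapped. For reference only, a minimal hand-written /etc/ceph/ceph.conf looks roughly like this (the fsid and monitor address below are placeholders, not values from this bug):

  [global]
      fsid = 00000000-0000-0000-0000-000000000000
      mon_host = 192.0.2.10

Without such a file (or a reachable monitor), most ceph commands will keep printing the ceph.conf / conf_read_file errors seen above.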
This error occurred also on previous updates, so continuing...

# ceph --cluster ceph
Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)')

# ceph-conf
did not load config file, using default settings.
2025-08-27T11:25:35.474+0200 7f03b3f42080 -1 Errors while parsing config file!
2025-08-27T11:25:35.474+0200 7f03b3f42080 -1 can't open ceph.conf: (2) No such file or directory
2025-08-27T11:25:35.474+0200 7f03b3f42080 -1 Errors while parsing config file!
2025-08-27T11:25:35.474+0200 7f03b3f42080 -1 can't open ceph.conf: (2) No such file or directory
You must give an action, such as --lookup or --list-all-sections.
Pass --help for more help.

# ceph-mgr --version
ceph version Development (no_version) reef (stable)

# ceph-mgr -i me -n client.me
did not load config file, using default settings.
2025-08-27T11:26:39.474+0200 7fca52525b40 -1 Errors while parsing config file!
2025-08-27T11:26:39.474+0200 7fca52525b40 -1 can't open ceph.conf: (2) No such file or directory
unable to get monitor info from DNS SRV with service name: ceph-mon
2025-08-27T11:27:09.514+0200 7fca52525b40 -1 failed for service _ceph-mon._tcp
2025-08-27T11:27:09.514+0200 7fca52525b40 -1 monclient: get_monmap_and_config cannot identify monitors to contact
failed to fetch mon config (--no-mon-config to skip)

# ceph-conf --name client.me -c /etc/ceph/ceph.conf 'client addr'
global_init: unable to open config file from search list /etc/ceph/ceph.conf

Same commands, with the same results as in the referred bugs; as no crashes occurred and the install is clean, good to go.
Whiteboard: MGA9TOO => MGA9TOO, MGA9-64-OK
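For what it's worth, a few sub-commands need neither a config file nor a running cluster and can serve as extra smoke tests on a standalone machine (a sketch; the exact output wording may differ between builds):

  ceph --version
  rados --version
  rbd --version
  ceph-conf --help

Each of these should print version or usage information without the conf_read_file errors.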
Hold the testing, guys! I've found an issue while deploying it on a real cluster. There is a subtle problem with librocksdb, another package required by ceph. I have a fix, but I need to push an update of rocksdb first, so this version 18.2.7-1 needs to be updated. Reassigning the bug to myself.
Assignee: qa-bugs => eatdirt
ceph-18.2.7-2.mga9 landing on updates_testing for mga9, fixing CVE-2025-52555.
@QA-teams, see https://bugs.mageia.org/show_bug.cgi?id=28538 for minimal consistency tests (you're not expected to deploy a ceph cluster).
However, during the install of this update, please check that it also triggers the install of the updated rocksdb libraries discussed in:
https://bugs.mageia.org/show_bug.cgi?id=34583
(you can actually validate the two updates at the same time).
The order does not matter: you can install lib64rocksdb first, or just install ceph, which will pull in lib64rocksdb.
NB: I've deployed my locally built package on a real cluster to check that it works, but packaging bugs might still remain in our official builds.

Advisory:
========================

Updated ceph packages fix a security regression (CVE-2025-52555) that would have allowed a user to read, write and execute in any directory owned by root, as long as they chmod it to 777.

References:
https://www.openwall.com/lists/oss-security/2025/06/26/1
https://github.com/ceph/ceph/commit/64f0d786a078a79843c1c1da9cae5e2e603371af
========================

Updated packages in core/updates_testing:
========================
ceph-18.2.7-2.mga9
ceph-fuse-18.2.7-2.mga9
ceph-immutable-object-cache-18.2.7-2.mga9
ceph-mds-18.2.7-2.mga9
ceph-mgr-18.2.7-2.mga9
ceph-mirror-18.2.7-2.mga9
ceph-mon-18.2.7-2.mga9
ceph-osd-18.2.7-2.mga9
ceph-radosgw-18.2.7-2.mga9
ceph-rbd-18.2.7-2.mga9
lib64ceph2-18.2.7-2.mga9
lib64ceph-devel-18.2.7-2.mga9
lib64rados2-18.2.7-2.mga9
lib64rados-devel-18.2.7-2.mga9
lib64radosstriper1-18.2.7-2.mga9
lib64radosstriper-devel-18.2.7-2.mga9
lib64rbd1-18.2.7-2.mga9
lib64rbd-devel-18.2.7-2.mga9
lib64rgw2-18.2.7-2.mga9
lib64rgw-devel-18.2.7-2.mga9
python3-ceph-18.2.7-2.mga9
python3-rados-18.2.7-2.mga9
python3-rbd-18.2.7-2.mga9
python3-rgw-18.2.7-2.mga9

from ceph-18.2.7-2.mga9.src.rpm
Assignee: eatdirt => qa-bugs
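To check that the rocksdb dependency is handled correctly while installing 18.2.7-2, something along these lines should be enough (a sketch; the exact lib64rocksdb package name comes from bug 34583, and I'm assuming ceph-osd is one of the binaries linked against it):

  urpmi ceph                                         # should pull in the updated lib64rocksdb automatically
  rpm -q --whatprovides "librocksdb.so.7()(64bit)"   # should answer with the rocksdb package from bug 34583
  ldd /usr/bin/ceph-osd | grep rocksdb               # librocksdb.so.7 should resolve to the installed library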
Removing the OK.
@Chris: "NB: Cauldron fix will come later with another update." Is that still valid? Does it mean this should now be an MGA9-only bug?
CC: (none) => andrewsfarm
Whiteboard: MGA9TOO, MGA9-64-OK => MGA9TOO,
Yes! I have just pushed 19.2.3 to Cauldron, which should no longer be affected.
Whiteboard: MGA9TOO, => (none)
Version: Cauldron => 9
Keywords: (none) => advisory
When I put the list from Comment 8 into QARepo and then select the ceph version, I get:

Sorry, the following package cannot be selected:
- ceph-18.2.7-2.mga9.x86_64 (due to unsatisfied librocksdb.so.7()(64bit))

So I added the files from bug 34583, and then I could proceed.
From bug 28538:

$ ceph
Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)')

[tester9@mach3 ~]$ ceph --help
General usage:
==============
usage: ceph [-h] [-c CEPHCONF] [-i INPUT_FILE] [-o OUTPUT_FILE] [--setuser SETUSER] [--setgroup SETGROUP]
            [--id CLIENT_ID] [--name CLIENT_NAME] [--cluster CLUSTER] [--admin-daemon ADMIN_SOCKET] [-s] [-w]
            [--watch-debug] [--watch-info] [--watch-sec] [--watch-warn] [--watch-error] [-W WATCH_CHANNEL]
            [--version] [--verbose] [--concise] [-f {json,json-pretty,xml,xml-pretty,plain,yaml}]
            [--connect-timeout CLUSTER_TIMEOUT] [--block] [--period PERIOD]

Ceph administration tool options:
etc...
Thus works OK.

$ ceph ping mon.*
Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)')
That is the same as in bug 28538.

Repeated the tests from Comment 6 above with the same results, so no regression, and a clean install. OK.
Whiteboard: (none) => MGA9-64-OK
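For completeness: the "Error initializing cluster client" message is expected here, since there is no /etc/ceph/ceph.conf and no monitor to contact. On a machine that did have a cluster, the same checks would be pointed at the configuration explicitly, roughly:

  ceph -c /etc/ceph/ceph.conf -s          # cluster status
  ceph -c /etc/ceph/ceph.conf ping mon.*  # ping the monitors, as attempted above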
Validating.
Keywords: (none) => validated_update
CC: (none) => sysadmin-bugs
An update for this issue has been pushed to the Mageia Updates repository.
https://advisories.mageia.org/MGASA-2025-0222.html
Status: NEW => RESOLVED
Resolution: (none) => FIXED