Bug 27472 - pacemaker new security issue CVE-2020-25654
Summary: pacemaker new security issue CVE-2020-25654
Status: RESOLVED FIXED
Alias: None
Product: Mageia
Classification: Unclassified
Component: Security
Version: 7
Hardware: All
OS: Linux
Priority: Normal
Severity: major
Target Milestone: ---
Assignee: QA Team
QA Contact: Sec team
URL:
Whiteboard: MGA7-64-OK
Keywords: advisory, validated_update
Duplicates: 27832
Depends on:
Blocks:
 
Reported: 2020-10-27 20:48 CET by David Walser
Modified: 2020-12-15 17:12 CET
CC List: 8 users

See Also:
Source RPM: pacemaker-1.1.19-2.1.mga7.src.rpm
CVE: CVE-2020-25654
Status comment:


Description David Walser 2020-10-27 20:48:10 CET
A security issue in pacemaker has been announced today (October 27):
https://www.openwall.com/lists/oss-security/2020/10/27/1

Details and patches at:
https://bugzilla.redhat.com/show_bug.cgi?id=1888191

Mageia 7 is also affected.
David Walser 2020-10-27 20:52:30 CET

Whiteboard: (none) => MGA7TOO

Comment 1 David Walser 2020-10-31 14:19:31 CET
openSUSE has issued an advisory for this today (October 31):
https://lists.opensuse.org/opensuse-security-announce/2020-10/msg00076.html
Comment 2 Aurelien Oudelet 2020-10-31 17:44:19 CET
Hi, thanks for reporting this bug.
Assigned to recent committers.

(Please set the status to 'assigned' if you are working on it)


Please note the package belongs to ennael who, sadly, is no longer involved, according to http://pkgsubmit.mageia.org/data/maintdb.txt

Keywords: (none) => Triaged
Assignee: bugsquad => pkg-bugs
CC: (none) => geiger.david68210, joequant, ouaurelien

Comment 3 Nicolas Salguero 2020-11-04 10:14:04 CET
Suggested advisory:
========================

The updated packages fix a security vulnerability:

ACL restrictions bypass. (CVE-2020-25654)

References:
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-25654
https://www.openwall.com/lists/oss-security/2020/10/27/1
https://bugzilla.redhat.com/show_bug.cgi?id=1888191
https://lists.opensuse.org/opensuse-security-announce/2020-10/msg00076.html
========================

Updated packages in core/updates_testing:
========================
pacemaker-1.1.19-2.2.mga7
lib(64)cib4-1.1.19-2.2.mga7
lib(64)crmcluster4-1.1.19-2.2.mga7
lib(64)crmcommon3-1.1.19-2.2.mga7
lib(64)crmservice3-1.1.19-2.2.mga7
lib(64)lrmd1-1.1.19-2.2.mga7
lib(64)pengine10-1.1.19-2.2.mga7
lib(64)pe_rules2-1.1.19-2.2.mga7
lib(64)pe_status10-1.1.19-2.2.mga7
lib(64)stonithd2-1.1.19-2.2.mga7
lib(64)transitioner2-1.1.19-2.2.mga7
lib(64)pacemaker-devel-1.1.19-2.2.mga7
pacemaker-cts-1.1.19-2.2.mga7
pacemaker-doc-1.1.19-2.2.mga7

from SRPM:
pacemaker-1.1.19-2.2.mga7.src.rpm
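(For testers: CVE-2020-25654 is an ACL bypass in pacemaker's IPC permission handling; a local user in the haclient group could talk to the cluster daemons directly and perform tasks that the configured ACLs would otherwise forbid. Whether ACLs are in use on a running cluster can be checked with standard pacemaker tooling, for example:

# crm_attribute --type crm_config --name enable-acl --query

The bypass is only exploitable when enable-acl has been turned on; the fix applies regardless.)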

CVE: (none) => CVE-2020-25654
Status: NEW => ASSIGNED
Whiteboard: MGA7TOO => (none)
CC: (none) => nicolas.salguero
Source RPM: pacemaker-1.1.19-7.mga8.src.rpm => pacemaker-1.1.19-2.1.mga7.src.rpm
Version: Cauldron => 7
Keywords: Triaged => (none)
Assignee: pkg-bugs => qa-bugs

Comment 4 Herman Viaene 2020-11-09 15:02:25 CET
MGA7-64 MATE  on Peaq C1011
No installation issues.
Followed bugs 11724 and 24691, noting at least one change in behavior.
After installing crmsh and its dependencies, and adding the network address to corosync.conf (sketched just below), I went on:
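For reference, the address goes into the totem interface section of /etc/corosync/corosync.conf; a minimal sketch (192.168.2.0 is the network of my LAN, adjust to your own):

totem {
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.2.0
    }
}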
# crm_report -vV
crm_report 1.1.19-c3c624ea3d
looks OK
# systemctl start corosync
# systemctl -l status corosync
● corosync.service - Corosync Cluster Engine
   Loaded: loaded (/usr/lib/systemd/system/corosync.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-11-09 14:31:49 CET; 33s ago
     Docs: man:corosync
           man:corosync.conf
           man:corosync_overview
  Process: 9200 ExecStart=/usr/share/corosync/corosync start (code=exited, status=0/SUCCESS)
 Main PID: 9215 (corosync)
    Tasks: 2 (limit: 2288)
   Memory: 24.7M
   CGroup: /system.slice/corosync.service
           └─9215 corosync

Nov 09 14:31:49 mach6.hviaene.thuis corosync[9215]:   [QB    ] server name: cfg
Nov 09 14:31:49 mach6.hviaene.thuis corosync[9215]:   [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
Nov 09 14:31:49 mach6.hviaene.thuis corosync[9215]:   [QB    ] server name: cpg
Nov 09 14:31:49 mach6.hviaene.thuis corosync[9215]:   [SERV  ] Service engine loaded: corosync profile loading service [4]
Nov 09 14:31:49 mach6.hviaene.thuis corosync[9215]:   [SERV  ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Nov 09 14:31:49 mach6.hviaene.thuis corosync[9215]:   [QB    ] server name: quorum
Nov 09 14:31:49 mach6.hviaene.thuis corosync[9215]:   [TOTEM ] A new membership (192.168.2.6:4) was formed. Members joined: 3232236038
Nov 09 14:31:49 mach6.hviaene.thuis corosync[9215]:   [MAIN  ] Completed service synchronization, ready to provide service.
Nov 09 14:31:49 mach6.hviaene.thuis corosync[9200]: Starting Corosync Cluster Engine (corosync): [  OK  ]
Nov 09 14:31:49 mach6.hviaene.thuis systemd[1]: Started Corosync Cluster Engine.
[root@mach6 ~]# systemctl start pacemakerd
Failed to start pacemakerd.service: Unit pacemakerd.service not found.
# systemctl start pacemaker
# systemctl -l status pacemaker
● pacemaker.service - Pacemaker High Availability Cluster Manager
   Loaded: loaded (/usr/lib/systemd/system/pacemaker.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-11-09 14:33:09 CET; 14s ago
     Docs: man:pacemakerd
           https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html-single/Pacemaker_Explained/index.html
 Main PID: 9282 (pacemakerd)
    Tasks: 7 (limit: 2288)
   Memory: 27.8M
   CGroup: /system.slice/pacemaker.service
           ├─9282 /usr/sbin/pacemakerd -f
           ├─9284 /usr/libexec/pacemaker/cib
           ├─9285 /usr/libexec/pacemaker/stonithd
           ├─9286 /usr/libexec/pacemaker/lrmd
           ├─9287 /usr/libexec/pacemaker/attrd
           ├─9288 /usr/libexec/pacemaker/pengine
           └─9289 /usr/libexec/pacemaker/crmd

Nov 09 14:33:10 mach6.hviaene.thuis stonith-ng[9285]:   notice: Defaulting to uname -n for the local corosync node name
Nov 09 14:33:10 mach6.hviaene.thuis cib[9284]:   notice: Defaulting to uname -n for the local corosync node name
Nov 09 14:33:10 mach6.hviaene.thuis stonith-ng[9285]:   notice: Defaulting to uname -n for the local corosync node name
Nov 09 14:33:10 mach6.hviaene.thuis cib[9290]:  warning: Could not verify cluster configuration file /var/lib/pacemaker/cib/cib.xml: No such file or directory>
Nov 09 14:33:11 mach6.hviaene.thuis crmd[9289]:   notice: Connecting to cluster infrastructure: corosync
Nov 09 14:33:11 mach6.hviaene.thuis crmd[9289]:   notice: Could not obtain a node name for corosync nodeid 3232236038
Nov 09 14:33:11 mach6.hviaene.thuis crmd[9289]:   notice: Defaulting to uname -n for the local corosync node name
Nov 09 14:33:11 mach6.hviaene.thuis attrd[9287]:   notice: Defaulting to uname -n for the local corosync node name
Nov 09 14:33:11 mach6.hviaene.thuis cib[9284]:   notice: Defaulting to uname -n for the local corosync node name
Nov 09 14:33:11 mach6.hviaene.thuis crmd[9289]:    error: Corosync quorum is not configured
This is not the same as in previous updates; I googled a bit and found
https://www.systutorials.com/docs/linux/man/5-votequorum/
So I added two lines to corosync.conf (inside its quorum section; full stanza sketched below):
    provider: corosync_votequorum
    expected_votes: 8
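The resulting stanza, for reference (expected_votes: 8 was just an arbitrary value for this single-machine test):

quorum {
    provider: corosync_votequorum
    expected_votes: 8
}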
And then I got going again:
# systemctl restart corosync
# systemctl -l status corosync
● corosync.service - Corosync Cluster Engine
   Loaded: loaded (/usr/lib/systemd/system/corosync.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-11-09 14:38:20 CET; 9s ago
     Docs: man:corosync
           man:corosync.conf
           man:corosync_overview
  Process: 10989 ExecStart=/usr/share/corosync/corosync start (code=exited, status=0/SUCCESS)
 Main PID: 11006 (corosync)
    Tasks: 2 (limit: 2288)
   Memory: 57.2M
   CGroup: /system.slice/corosync.service
           └─11006 corosync

Nov 09 14:38:20 mach6.hviaene.thuis corosync[11006]:   [QUORUM] Using quorum provider corosync_votequorum
Nov 09 14:38:20 mach6.hviaene.thuis corosync[11006]:   [SERV  ] Service engine loaded: corosync vote quorum service v1.0 [5]
Nov 09 14:38:20 mach6.hviaene.thuis corosync[11006]:   [QB    ] server name: votequorum
Nov 09 14:38:20 mach6.hviaene.thuis corosync[11006]:   [SERV  ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Nov 09 14:38:20 mach6.hviaene.thuis corosync[11006]:   [QB    ] server name: quorum
Nov 09 14:38:20 mach6.hviaene.thuis corosync[11006]:   [TOTEM ] A new membership (192.168.2.6:8) was formed. Members joined: 3232236038
Nov 09 14:38:20 mach6.hviaene.thuis corosync[11006]:   [QUORUM] Members[1]: 3232236038
Nov 09 14:38:20 mach6.hviaene.thuis corosync[11006]:   [MAIN  ] Completed service synchronization, ready to provide service.
Nov 09 14:38:20 mach6.hviaene.thuis corosync[10989]: Starting Corosync Cluster Engine (corosync): [  OK  ]
Nov 09 14:38:20 mach6.hviaene.thuis systemd[1]: Started Corosync Cluster Engine.
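(A quick way to double-check the vote and quorum state at this point, not run during this test, would be corosync-quorumtool:

# corosync-quorumtool -s

which prints the membership, the vote counts and whether the node is quorate.)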

# systemctl restart pacemaker
# systemctl -l status pacemaker
● pacemaker.service - Pacemaker High Availability Cluster Manager
   Loaded: loaded (/usr/lib/systemd/system/pacemaker.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-11-09 14:38:46 CET; 7s ago
     Docs: man:pacemakerd
           https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html-single/Pacemaker_Explained/index.html
 Main PID: 11056 (pacemakerd)
    Tasks: 7 (limit: 2288)
   Memory: 27.5M
   CGroup: /system.slice/pacemaker.service
           ├─11056 /usr/sbin/pacemakerd -f
           ├─11059 /usr/libexec/pacemaker/cib
           ├─11060 /usr/libexec/pacemaker/stonithd
           ├─11061 /usr/libexec/pacemaker/lrmd
           ├─11062 /usr/libexec/pacemaker/attrd
           ├─11063 /usr/libexec/pacemaker/pengine
           └─11064 /usr/libexec/pacemaker/crmd

Nov 09 14:38:48 mach6.hviaene.thuis crmd[11064]:   notice: Connecting to cluster infrastructure: corosync
Nov 09 14:38:48 mach6.hviaene.thuis crmd[11064]:   notice: Could not obtain a node name for corosync nodeid 3232236038
Nov 09 14:38:48 mach6.hviaene.thuis crmd[11064]:   notice: Defaulting to uname -n for the local corosync node name
Nov 09 14:38:48 mach6.hviaene.thuis crmd[11064]:  warning: Quorum lost
Nov 09 14:38:48 mach6.hviaene.thuis cib[11059]:   notice: Defaulting to uname -n for the local corosync node name
Nov 09 14:38:48 mach6.hviaene.thuis crmd[11064]:   notice: Node mach6.hviaene.thuis state is now member
Nov 09 14:38:48 mach6.hviaene.thuis attrd[11062]:   notice: Defaulting to uname -n for the local corosync node name
Nov 09 14:38:48 mach6.hviaene.thuis crmd[11064]:   notice: Defaulting to uname -n for the local corosync node name
Nov 09 14:38:48 mach6.hviaene.thuis crmd[11064]:   notice: The local CRM is operational
Nov 09 14:38:48 mach6.hviaene.thuis crmd[11064]:   notice: State transition S_STARTING -> S_PENDING
So no more errors here; getting on.

# crm
crm(live)# cib new test-conf
INFO: cib.new: test-conf shadow CIB created
crm(test-conf)# cib use test-conf
crm(test-conf)# configure
crm(test-conf)configure# show
node 3232236038: mach6.hviaene.thuis
property cib-bootstrap-options: \
	have-watchdog=false \
	dc-version=1.1.19-2.2.mga7-c3c624ea3d \
	cluster-infrastructure=corosync
crm(test-conf)configure# show xml
<?xml version="1.0" ?>
<cib num_updates="3" dc-uuid="3232236038" update-origin="mach6.hviaene.thuis" crm_feature_set="3.0.14" validate-with="pacemaker-2.10" update-client="crmd" epoch="3" admin_epoch="0" update-user="hacluster" cib-last-written="Mon Nov  9 14:38:43 2020" have-quorum="0">
  <configuration>
    <crm_config>
      <cluster_property_set id="cib-bootstrap-options">
        <nvpair id="cib-bootstrap-options-have-watchdog" name="have-watchdog" value="false"/>
        <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.19-2.2.mga7-c3c624ea3d"/>
        <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
      </cluster_property_set>
    </crm_config>
    <nodes>
      <node id="3232236038" uname="mach6.hviaene.thuis"/>
    </nodes>
    <resources/>
    <constraints/>
  </configuration>
</cib>

crm(test-conf)configure# verify
ERROR: error: unpack_resources:	Resource start-up disabled since no STONITH resources have been defined
   error: unpack_resources:	Either configure some or disable STONITH with the stonith-enabled option
   error: unpack_resources:	NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid
WARNING: cib-bootstrap-options: unknown attribute 'have-watchdog'

This is again different, but I gave up trying to study the config options.
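(For the record, these verify errors should be silenceable for testing purposes by disabling fencing in the shadow CIB, standard crmsh usage that I did not try here:

crm(test-conf)configure# property stonith-enabled=false

A real cluster with shared data should configure STONITH resources instead, as the error message says.)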

crm(test-conf)configure# end
crm(test-conf)# cib commit test-conf
INFO: cib.commit: committed 'test-conf' shadow CIB to the cluster
crm(test-conf)# quit
bye

The rest all looks OK; the services run without complaining.

Whiteboard: (none) => MGA7-64-OK
CC: (none) => herman.viaene

Comment 5 Thomas Andrews 2020-11-09 18:03:28 CET
Validating. Advisory in Comment 3.

CC: (none) => andrewsfarm, sysadmin-bugs
Keywords: (none) => validated_update

Comment 6 Aurelien Oudelet 2020-11-10 09:24:14 CET
Advisory pushed to SVN.

Keywords: (none) => advisory

Comment 7 Mageia Robot 2020-11-10 16:21:21 CET
An update for this issue has been pushed to the Mageia Updates repository.

https://advisories.mageia.org/MGASA-2020-0409.html

Status: ASSIGNED => RESOLVED
Resolution: (none) => FIXED

Comment 8 David Walser 2020-12-15 17:12:03 CET
*** Bug 27832 has been marked as a duplicate of this bug. ***

CC: (none) => zombie_ryushu

