Bug 24691 - pacemaker new security issues CVE-2018-1687[78] and CVE-2019-3885
Summary: pacemaker new security issues CVE-2018-1687[78] and CVE-2019-3885
Status: RESOLVED FIXED
Alias: None
Product: Mageia
Classification: Unclassified
Component: Security
Version: 7
Hardware: All Linux
Priority: Normal    Severity: normal
Target Milestone: ---
Assignee: QA Team
QA Contact: Sec team
URL:
Whiteboard: MGA7-64-OK
Keywords: advisory, validated_update
Depends on:
Blocks:
 
Reported: 2019-04-19 13:24 CEST by David Walser
Modified: 2019-12-19 14:45 CET
CC List: 8 users

See Also:
Source RPM: pacemaker-1.1.19-2.mga7.src.rpm
CVE:
Status comment:


Attachments

Description David Walser 2019-04-19 13:24:04 CEST
Security issues fixed upstream in Pacemaker have been announced:
https://www.openwall.com/lists/oss-security/2019/04/17/1

Patches are attached to the message above and linked from:
https://www.openwall.com/lists/oss-security/2019/04/18/2

Mageia 6 is also affected.
David Walser 2019-04-19 13:24:15 CEST

Whiteboard: (none) => MGA6TOO

Comment 1 Marja Van Waes 2019-04-20 07:08:21 CEST
Assigning to our registered pacemaker maintainer.
CC'ing two committers.

CC: (none) => geiger.david68210, marja11, smelror
Assignee: bugsquad => ennael1

Comment 2 David Walser 2019-04-29 21:32:59 CEST
SUSE has issued an advisory for this on April 26:
http://lists.suse.com/pipermail/sle-security-updates/2019-April/005369.html
Comment 3 David Walser 2019-05-31 19:26:21 CEST
Red Hat has issued an advisory for this on May 27:
https://access.redhat.com/errata/RHSA-2019:1278
David Walser 2019-06-23 19:17:40 CEST

Whiteboard: MGA6TOO => MGA7TOO, MGA6TOO

Comment 4 David Walser 2019-08-11 20:27:43 CEST
Ubuntu has issued an advisory for this on April 23:
https://usn.ubuntu.com/3952-1/
Comment 5 Nicolas Salguero 2019-12-17 14:18:10 CET
Suggested advisory:
========================

The updated packages fix security vulnerabilities:

A flaw was found in the way pacemaker's client-server authentication was implemented in versions up to and including 2.0.0. A local attacker could use this flaw, and combine it with other IPC weaknesses, to achieve local privilege escalation. (CVE-2018-16877)

A flaw was found in pacemaker up to and including version 2.0.1. Insufficient verification could result in preference being given to uncontrolled processes, which can lead to a denial of service (DoS). (CVE-2018-16878)

A use-after-free flaw was found in pacemaker up to and including version 2.0.1 which could result in certain sensitive information being leaked via the system logs. (CVE-2019-3885)

References:
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-16877
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-16878
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-3885
https://www.openwall.com/lists/oss-security/2019/04/17/1
https://www.openwall.com/lists/oss-security/2019/04/18/2
http://lists.suse.com/pipermail/sle-security-updates/2019-April/005369.html
https://access.redhat.com/errata/RHSA-2019:1278
https://usn.ubuntu.com/3952-1/
========================

Updated packages in core/updates_testing:
========================
pacemaker-1.1.19-2.1.mga7
lib(64)cib4-1.1.19-2.1.mga7
lib(64)crmcluster4-1.1.19-2.1.mga7
lib(64)crmcommon3-1.1.19-2.1.mga7
lib(64)crmservice3-1.1.19-2.1.mga7
lib(64)lrmd1-1.1.19-2.1.mga7
lib(64)pengine10-1.1.19-2.1.mga7
lib(64)pe_rules2-1.1.19-2.1.mga7
lib(64)pe_status10-1.1.19-2.1.mga7
lib(64)stonithd2-1.1.19-2.1.mga7
lib(64)transitioner2-1.1.19-2.1.mga7
lib(64)pacemaker-devel-1.1.19-2.1.mga7
pacemaker-cts-1.1.19-2.1.mga7
pacemaker-doc-1.1.19-2.1.mga7

from SRPMS:
pacemaker-1.1.19-2.1.mga7.src.rpm

Whiteboard: MGA7TOO, MGA6TOO => (none)
Version: Cauldron => 7
Assignee: ennael1 => qa-bugs
Status: NEW => ASSIGNED
CC: (none) => nicolas.salguero

Comment 6 Herman Viaene 2019-12-17 15:37:35 CET
MGA7-64 Plasma on Lenovo B50
No installation issues.
Referred to bug 11724 Comment 7 for testing (thanks Claire, good old Mrs. B; note: she is little more than half my age).
So I also installed crmsh, which brought in corosync.
Quote:
Copied /etc/corosync/corosync.conf.example to /etc/corosync/corosync.conf
Edited /etc/corosync/corosync.conf to add the network IP address, e.g. 192.168.1.0 if the computer is 192.168.1.64.
Endquote
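For reference, the block being edited is the totem/interface section of the stock example file, roughly like this (a minimal sketch; exact values and comments may differ from the shipped corosync.conf.example, and 192.168.1.0 is just the example subnet from the quote):

totem {
    version: 2
    crypto_cipher: none
    crypto_hash: none
    interface {
        ringnumber: 0
        # bindnetaddr is the network address of the subnet, not the host address,
        # e.g. 192.168.1.0 for a host at 192.168.1.64
        bindnetaddr: 192.168.1.0
        mcastaddr: 239.255.1.1
        mcastport: 5405
        ttl: 1
    }
}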
I presume the address Claire refers to is the bindnetaddr: line.
Changed that to my own setting and then:
# systemctl start corosync
# systemctl -l status corosync
● corosync.service - Corosync Cluster Engine
   Loaded: loaded (/usr/lib/systemd/system/corosync.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-12-17 15:15:14 CET; 19s ago
     Docs: man:corosync
           man:corosync.conf
           man:corosync_overview
  Process: 26093 ExecStart=/usr/share/corosync/corosync start (code=exited, status=0/SUCCESS)
 Main PID: 26108 (corosync)
   Memory: 21.8M
   CGroup: /system.slice/corosync.service
           └─26108 corosync

dec 17 15:15:13 mach5.hviaene.thuis corosync[26108]:   [TOTEM ] Initializing transport (UDP/IP Multicast).
dec 17 15:15:13 mach5.hviaene.thuis corosync[26108]:   [TOTEM ] Initializing transmit/receive security (NSS) crypto: none hash: none
dec 17 15:15:13 mach5.hviaene.thuis corosync[26108]:   [TOTEM ] The network interface [192.168.2.5] is now up.
dec 17 15:15:13 mach5.hviaene.thuis corosync[26108]:   [QB    ] server name: cmap
dec 17 15:15:13 mach5.hviaene.thuis corosync[26108]:   [QB    ] server name: cfg
dec 17 15:15:13 mach5.hviaene.thuis corosync[26108]:   [QB    ] server name: cpg
dec 17 15:15:13 mach5.hviaene.thuis corosync[26108]:   [QB    ] server name: quorum
dec 17 15:15:13 mach5.hviaene.thuis corosync[26108]:   [TOTEM ] A new membership (192.168.2.5:4) was formed. Members joined: 3232236037
dec 17 15:15:14 mach5.hviaene.thuis corosync[26093]: Starting Corosync Cluster Engine (corosync): [  OK  ]
dec 17 15:15:14 mach5.hviaene.thuis systemd[1]: Started Corosync Cluster Engine.

# systemctl start pacemaker
# systemctl -l status pacemaker
● pacemaker.service - Pacemaker High Availability Cluster Manager
   Loaded: loaded (/usr/lib/systemd/system/pacemaker.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-12-17 15:16:14 CET; 14s ago
     Docs: man:pacemakerd
           https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html-single/Pacemaker_Explained/index.html
 Main PID: 28508 (pacemakerd)
   Memory: 24.2M
   CGroup: /system.slice/pacemaker.service
           ├─28508 /usr/sbin/pacemakerd -f
           ├─28510 /usr/libexec/pacemaker/cib
           ├─28511 /usr/libexec/pacemaker/stonithd
           ├─28512 /usr/libexec/pacemaker/lrmd
           ├─28513 /usr/libexec/pacemaker/attrd
           ├─28514 /usr/libexec/pacemaker/pengine
           └─28515 /usr/libexec/pacemaker/crmd

dec 17 15:16:15 mach5.hviaene.thuis cib[28510]:   notice: Defaulting to uname -n for the local corosync node name
dec 17 15:16:15 mach5.hviaene.thuis stonith-ng[28511]:   notice: Defaulting to uname -n for the local corosync node name
dec 17 15:16:15 mach5.hviaene.thuis stonith-ng[28511]:   notice: Defaulting to uname -n for the local corosync node name
dec 17 15:16:15 mach5.hviaene.thuis cib[28516]:  warning: Could not verify cluster configuration file /var/lib/pacemaker/cib/cib.xml: No such file or directory (2)
dec 17 15:16:16 mach5.hviaene.thuis crmd[28515]:   notice: Connecting to cluster infrastructure: corosync
dec 17 15:16:16 mach5.hviaene.thuis crmd[28515]:   notice: Could not obtain a node name for corosync nodeid 3232236037
dec 17 15:16:16 mach5.hviaene.thuis crmd[28515]:   notice: Defaulting to uname -n for the local corosync node name
dec 17 15:16:16 mach5.hviaene.thuis crmd[28515]:    error: Corosync quorum is not configured
dec 17 15:16:16 mach5.hviaene.thuis cib[28510]:   notice: Defaulting to uname -n for the local corosync node name
dec 17 15:16:16 mach5.hviaene.thuis attrd[28513]:   notice: Defaulting to uname -n for the local corosync node name
Note the error about Corosync quorum, but the conf file specifically states that it is off by default, so I went on.
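For reference, the quorum subsystem is shipped disabled in the stock example file, roughly like this (sketch, comments paraphrased):

quorum {
    # Enable and configure quorum subsystem (default: off)
    # see also corosync.conf.5 and votequorum.5
    #provider: corosync_votequorum
}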
Then tested along the lines of https://wiki.clusterlabs.org/wiki/Example_configurations
and that went without apparent trouble:
# crm
crm(live)# cib new test-conf
INFO: cib.new: test-conf shadow CIB created
crm(test-conf)# cib use test-conf
crm(test-conf)# configure
crm(test-conf)configure# show
crm(test-conf)configure# show xml
<?xml version="1.0" ?>
<cib crm_feature_set="3.0.14" validate-with="pacemaker-2.10" num_updates="0" epoch="0" admin_epoch="0">
  <configuration>
    <crm_config/>
    <nodes/>
    <resources/>
    <constraints/>
  </configuration>
</cib>

crm(test-conf)configure# verify
crm(test-conf)configure# end
crm(test-conf)# cib commit test-conf
INFO: cib.commit: committed 'test-conf' shadow CIB to the cluster
crm(test-conf)# quit
bye
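(For anyone reproducing this: the wiki page continues from the empty shadow CIB by adding a test resource before committing; a minimal sketch of that extra step, assuming the ocf:pacemaker:Dummy agent shipped with pacemaker, would be:)

# crm
crm(live)# cib use test-conf
crm(test-conf)# configure
crm(test-conf)configure# primitive dummy-test ocf:pacemaker:Dummy op monitor interval=30s
crm(test-conf)configure# verify
crm(test-conf)configure# end
crm(test-conf)# cib commit test-conf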

I don't pretend to understand everything there, but it looks good, so with Claire's support I give the OK.

CC: (none) => herman.viaene
Whiteboard: (none) => MGA7-64-OK

Comment 7 Thomas Andrews 2019-12-17 18:24:15 CET
I wouldn't dare argue with Claire. (Grin.) 

Validating. Advisory in comment 5.

CC: (none) => andrewsfarm, sysadmin-bugs
Keywords: (none) => validated_update

Thomas Backlund 2019-12-19 14:01:38 CET

Keywords: (none) => advisory
CC: (none) => tmb

Comment 8 Mageia Robot 2019-12-19 14:45:45 CET
An update for this issue has been pushed to the Mageia Updates repository.

https://advisories.mageia.org/MGASA-2019-0394.html

Status: ASSIGNED => RESOLVED
Resolution: (none) => FIXED

