Bug 19879 - openafs new security issue OPENAFS-SA-2016-003 (CVE-2016-9772)
Summary: openafs new security issue OPENAFS-SA-2016-003 (CVE-2016-9772)
Status: RESOLVED FIXED
Alias: None
Product: Mageia
Classification: Unclassified
Component: Security
Version: 5
Hardware: All
OS: Linux
Priority: Normal
Severity: normal
Target Milestone: ---
Assignee: QA Team
QA Contact: Sec team
URL: https://lwn.net/Vulnerabilities/708140/
Whiteboard: has_procedure MGA5-64-OK advisory MGA...
Keywords: validated_update
Depends on:
Blocks:
 
Reported: 2016-12-01 20:28 CET by David Walser
Modified: 2017-02-02 20:18 CET (History)
3 users

See Also:
Source RPM: openafs-1.6.18-1.1.mga5.src.rpm
CVE:
Status comment:


Attachments

Description David Walser 2016-12-01 20:28:10 CET
Upstream has issued an advisory on November 30:
https://www.openafs.org/pages/security/OPENAFS-SA-2016-003.txt

The issue is fixed in 1.6.20:
https://dl.openafs.org/dl/1.6.20/RELNOTES-1.6.20

There were also some bug fixes in 1.6.19:
https://dl.openafs.org/dl/1.6.19/RELNOTES-1.6.19

Freeze push requested for Cauldron.  Update checked into Mageia 5 SVN.
Comment 1 David Walser 2016-12-02 00:20:14 CET
Updated packages uploaded for Mageia 5 and Cauldron.

Test procedure:
https://wiki.mageia.org/en/Installing_OpenAFS_Client

Advisory:
========================

Updated openafs packages fix security vulnerability:

Due to incomplete initialization or clearing of reused memory, OpenAFS
directory objects are likely to contain "dead" directory entry information.
This extraneous information is not active - that is, it is logically invisible
to the fileserver and client. However, the leaked information is physically
visible on the fileserver vice partition, on the wire in FetchData replies and
other RPCs, and on the client cache partition. This constitutes a leak of
directory information (OPENAFS-SA-2016-003).

The openafs package has been updated to version 1.6.20, to fix this issue and
other bugs.

References:
https://www.openafs.org/pages/security/OPENAFS-SA-2016-003.txt
http://openafs.org/dl/openafs/1.6.18.1/RELNOTES-1.6.18.1
http://openafs.org/dl/openafs/1.6.18.2/RELNOTES-1.6.18.2
http://openafs.org/dl/openafs/1.6.18.3/RELNOTES-1.6.18.3
https://dl.openafs.org/dl/1.6.19/RELNOTES-1.6.19
https://dl.openafs.org/dl/1.6.20/RELNOTES-1.6.20
========================

Updated packages in core/updates_testing:
========================
openafs-1.6.20-1.mga5
openafs-client-1.6.20-1.mga5
openafs-server-1.6.20-1.mga5
libopenafs1-1.6.20-1.mga5
libopenafs-devel-1.6.20-1.mga5
libopenafs-static-devel-1.6.20-1.mga5
dkms-libafs-1.6.20-1.mga5
openafs-doc-1.6.20-1.mga5

from openafs-1.6.20-1.mga5.src.rpm

Whiteboard: (none) => has_procedure
Assignee: bugsquad => qa-bugs

Comment 2 David Walser 2016-12-03 19:11:46 CET
CVE-2016-9772 has been assigned:
http://openwall.com/lists/oss-security/2016/12/02/9

Advisory:
========================

Updated openafs packages fix security vulnerability:

Due to incomplete initialization or clearing of reused memory, OpenAFS
directory objects are likely to contain "dead" directory entry information.
This extraneous information is not active - that is, it is logically invisible
to the fileserver and client. However, the leaked information is physically
visible on the fileserver vice partition, on the wire in FetchData replies and
other RPCs, and on the client cache partition. This constitutes a leak of
directory information (CVE-2016-9772).

The openafs package has been updated to version 1.6.20, to fix this issue and
other bugs.

References:
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-9772
https://www.openafs.org/pages/security/OPENAFS-SA-2016-003.txt
http://openafs.org/dl/openafs/1.6.18.1/RELNOTES-1.6.18.1
http://openafs.org/dl/openafs/1.6.18.2/RELNOTES-1.6.18.2
http://openafs.org/dl/openafs/1.6.18.3/RELNOTES-1.6.18.3
https://dl.openafs.org/dl/1.6.19/RELNOTES-1.6.19
https://dl.openafs.org/dl/1.6.20/RELNOTES-1.6.20
http://openwall.com/lists/oss-security/2016/12/02/9

Summary: openafs new security issue OPENAFS-SA-2016-003 => openafs new security issue OPENAFS-SA-2016-003 (CVE-2016-9772)

Comment 3 Len Lawrence 2016-12-04 02:25:49 CET
Been looking at this and trying to understand what is involved.  AFS is the Andrew File System, which needs support built into the kernel if it is to be used, so after updating, a reboot would be in order to trigger dkms.

I cannot think of any way to test this or derive a PoC for CVE-2016-9772.  It is not clear how the dead directory information could be exploited and a PoC would have to demonstrate that it could be read (before updates) and that after updating there would be nothing to read.

In summary, all we can do is look for a clean install and successful dkms rebuild on a reboot and start and stop the servers, maybe on two different machines.  Shall do this tomorrow, hopefully.

CC: (none) => tarazed25

Comment 4 Len Lawrence 2016-12-04 02:32:57 CET
Just noticed that a procedure is listed.  It is more of a full-blown tutorial on the installation and use of AFS.  Likely to take a few days to plough through that lot.
Comment 5 claire robinson 2016-12-05 12:23:40 CET
We normally just ensure the kernel module builds ok Len. It is known to take a long time to build (~15-20mins) so don't panic if it seems slow.
Comment 6 Len Lawrence 2016-12-05 16:14:39 CET
Thanks Claire.  Will get onto it.
Comment 7 Len Lawrence 2016-12-05 17:13:00 CET
Testing on x86_64 hardware under kernel-4.4.32-tmb-desktop-1.

The updates installed cleanly and the dkms module build went smoothly.  There are several kernels installed on this machine and dkms built the modules against each of them.

Rebooted to the running kernel and saw an error starting openafs-client service.

After login:
$ systemctl status openafs.service
● openafs.service
   Loaded: not-found (Reason: No such file or directory)
   Active: inactive (dead)
[lcl@vega ocaml]$ systemctl status openafs-client.service
● openafs-client.service - OpenAFS Client Service
   Loaded: loaded (/usr/lib/systemd/system/openafs-client.service; enabled)
   Active: failed (Result: exit-code) since Mon 2016-12-05 15:36:46 GMT; 3min 15s ago
  Process: 22544 ExecStart=/sbin/afsd $AFSD_ARGS (code=exited, status=1/FAILURE)
  Process: 22461 ExecStartPre=/sbin/modprobe libafs (code=exited, status=0/SUCCESS)
  Process: 22441 ExecStartPre=/bin/chmod 0644 /etc/openafs/CellServDB (code=exited, status=0/SUCCESS)
  Process: 22396 ExecStartPre=/bin/sed -n w/etc/openafs/CellServDB /etc/openafs/CellServDB.local /etc/openafs/CellServDB.dist (code=exited, status=0/SUCCESS)

Similar messages were posted on an attempt to start the client service from the command line, but the server started OK.
$ sudo systemctl status openafs-server.service
● openafs-server.service - OpenAFS Server Service
   Loaded: loaded (/usr/lib/systemd/system/openafs-server.service; enabled)
   Active: active (running) since Mon 2016-12-05 15:36:46 GMT; 13min ago
 Main PID: 22406 (bosserver)
   CGroup: /system.slice/openafs-server.service
           └─22406 /usr/sbin/bosserver -nofork

Without reading the documentation my suspicion is that some configuration needs to be done before the client server is invoked.  Shall have a look at that later.

Rebooted to the stock kernel, 4.4.32-desktop-1 and observed the openafs module being installed.  That took a bit of time and the same openafs-client failure message showed up afterwards.  After login the server started OK.

This looks OK but I need to confirm that the client service can be started.
Comment 8 Len Lawrence 2016-12-05 18:45:41 CET
It looks like I have been here before.  UDP port 7001 already available and /etc/openafs/CellServDB had already been backed up.
# cd /etc/openafs
# diff CellServDB CellServDB-
#
# wget http://dl.central.org/dl/cellservdb/CellServDB
--2016-12-05 16:24:44--  http://dl.central.org/dl/cellservdb/CellServDB
Resolving dl.central.org (dl.central.org)... 128.2.13.212
Connecting to dl.central.org (dl.central.org)|128.2.13.212|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 37058 (36K)
Saving to: 'CellServDB.1'

100%[======================================>] 37,058       156KB/s   in 0.2s   

2016-12-05 16:24:45 (156 KB/s) - 'CellServDB.1' saved [37058/37058]

# echo grand.central.org > /etc/openafs/ThisCell

/afs already exists.

$ df /var/cache/openafs
Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/sda3      482957888 75209496 383192464  17% /

Computed 84% of available space and allocated it;
CACHE=321912832 (kilobytes) in /etc/sysconfig/openafs.
# echo "/afs:/var/cache/openafs:321912832" > /etc/openafs/cacheinfo
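The "84% of available space" step above can be reproduced with plain shell arithmetic.  A sketch, using the Avail figure from the df output; note the value Len actually recorded (321912832) was rounded differently, so the exact result here is illustrative only:

```shell
# Derive a cache size of roughly 84% of the space available on the
# partition holding the cache directory.  avail_kb is the Avail
# column from the df output above, in kilobytes.
avail_kb=383192464
cache_kb=$((avail_kb * 84 / 100))
echo "/afs:/var/cache/openafs:${cache_kb}"
```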

Configured OpenAFS Cache manager:

# f=/etc/sysconfig/openafs
# sed < ${f} -e s/^AFSD_ARGS=/#AFSD_ARGS=/ -e s/^$/AFSD_ARGS="-dynroot -fakestat -afsdb -stat 2000 -dcache 800 -daemons 3 -volumes 70 -nosettime"/ > ${f}+
# mv -f ${f} /tmp/ && mv ${f}+ ${f}
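The same AFSD_ARGS edit can be sketched against a scratch copy, with the sed expressions quoted (the interactive transcript above elides the quoting), which makes it easy to verify the substitutions before touching the real /etc/sysconfig/openafs.  The file contents here are a minimal hypothetical stand-in:

```shell
# Scratch stand-in for /etc/sysconfig/openafs: an existing
# AFSD_ARGS line plus one empty line for sed to fill in.
f=$(mktemp)
printf 'AFSD_ARGS=\n\n' > "$f"
# Comment out the old assignment, then write the new one onto the
# first empty line.
sed -i -e 's/^AFSD_ARGS=/#AFSD_ARGS=/' \
       -e 's/^$/AFSD_ARGS="-dynroot -fakestat -afsdb"/' "$f"
cat "$f"
rm -f "$f"
```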

# lsmod | grep libafs
libafs                778240  0 


# systemctl start openafs-client.service
Job for openafs-client.service failed. See "systemctl status openafs-client.service" and "journalctl -xe" for details.
Dec 05 16:45:28 vega systemd[1]: Unit openafs-client.service entered failed state.
Dec 05 16:45:28 vega systemd[1]: openafs-client.service failed.
Dec 05 16:49:49 vega sudo[19474]: root : TTY=pts/2 ; PWD=/etc/openafs ; USER=root ; COMMAND=/bin/systemctl stop openafs-server.servi
Dec 05 16:49:49 vega bos[19477]: bos: could not find entry (configuring connection security)
Dec 05 16:49:49 vega systemd[1]: openafs-server.service: control process exited, code=exited status=1
Dec 05 16:49:49 vega systemd[1]: Unit openafs-server.service entered failed state.
Dec 05 16:49:49 vega systemd[1]: openafs-server.service failed.
Dec 05 16:50:01 vega crond[19508]: pam_tcb(crond:session): Session opened for lcl by (uid=0)
Dec 05 16:50:01 vega CROND[19509]: (lcl) CMD (/usr/bin/nice -n 19 /usr/bin/ionice -c2 -n7 /usr/bin/backintime --profile-id 2 --backu
Dec 05 16:50:01 vega CROND[19508]: pam_tcb(crond:session): Session closed for lcl
Dec 05 16:50:20 vega afsd[19540]: afsd: Error calling AFSOP_CACHEINODE: not configured
Dec 05 16:50:20 vega systemd[1]: openafs-client.service: control process exited, code=exited status=1
Dec 05 16:50:20 vega systemd[1]: Failed to start OpenAFS Client Service.
-- Subject: Unit openafs-client.service has failed
David Walser 2016-12-05 20:21:23 CET

URL: (none) => https://lwn.net/Vulnerabilities/708140/

Comment 9 Len Lawrence 2016-12-06 21:30:39 CET
# systemctl restart openafs-client.service
Job for openafs-client.service failed. See "systemctl status openafs-client.service" and "journalctl -xe" for details.

<journalctl -xe>
Dec 06 20:21:18 vega afsd[14939]: afsd: Error calling AFSOP_CACHEINODE: not configured
Dec 06 20:21:18 vega systemd[1]: openafs-client.service: control process exited, code=exited status=1
Dec 06 20:21:18 vega systemd[1]: Failed to start OpenAFS Client Service.
-- Subject: Unit openafs-client.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit openafs-client.service has failed.
-- 
-- The result is failed.
Dec 06 20:21:18 vega systemd[1]: Unit openafs-client.service entered failed state.
Dec 06 20:21:18 vega systemd[1]: openafs-client.service failed.


# systemctl status openafs-client
● openafs-client.service - OpenAFS Client Service
   Loaded: loaded (/usr/lib/systemd/system/openafs-client.service; enabled)
   Active: failed (Result: exit-code) since Tue 2016-12-06 20:21:18 GMT; 3min 3s ago
  Process: 14939 ExecStart=/sbin/afsd $AFSD_ARGS (code=exited, status=1/FAILURE)
  Process: 14936 ExecStartPre=/sbin/modprobe libafs (code=exited, status=0/SUCCESS)
  Process: 14933 ExecStartPre=/bin/chmod 0644 /etc/openafs/CellServDB (code=exited, status=0/SUCCESS)
  Process: 14930 ExecStartPre=/bin/sed -n w/etc/openafs/CellServDB /etc/openafs/CellServDB.local /etc/openafs/CellServDB.dist (code=exited, status=0/SUCCESS)

Dec 06 20:21:18 vega afsd[14939]: afsd: Error calling AFSOP_CACHEINODE: not configured
Dec 06 20:21:18 vega systemd[1]: openafs-client.service: control process exited, code=exited status=1
Dec 06 20:21:18 vega systemd[1]: Failed to start OpenAFS Client Service.
Dec 06 20:21:18 vega systemd[1]: Unit openafs-client.service entered failed state.
Dec 06 20:21:18 vega systemd[1]: openafs-client.service failed.

The question is - what is afsd and how and where does one configure AFSOP_CACHEINODE?
Comment 10 Len Lawrence 2016-12-28 12:43:48 CET
Tried updating afs on another machine and specifically started openafs-client and that succeeded.  Shall try rebooting to see if it comes up OK.
Comment 11 Len Lawrence 2016-12-28 12:48:56 CET
Yes, the client service is running automatically after a reboot.
Comment 12 Len Lawrence 2016-12-28 14:30:34 CET
Tried to continue the test procedure but gave up after setting up the /afs/ cache.  To go further requires knowledge of Kerberos realms and Tivoli environments.

So, going back to Claire's recommendation, the server and client services start fine and keep running.  This is OK for 64-bit.

Whiteboard: has_procedure => has_procedure MGA5-64-OK

Lewis Smith 2016-12-29 10:33:37 CET

Whiteboard: has_procedure MGA5-64-OK => has_procedure MGA5-64-OK advisory
CC: (none) => lewyssmith

Comment 13 Len Lawrence 2017-02-02 16:19:48 CET
This has been hanging about for five weeks.  About to run it on i586 vbox.
Comment 14 Len Lawrence 2017-02-02 19:42:33 CET
32-bit test on virtualbox

Placed UDP port 7001 on the Shorewall watch list.
Installed the older 1.6.18 versions of the packages and started the openafs-server.
Stopping openafs-server and starting openafs-client failed.
Stopped the openafs-server.
Updated the packages from updates testing.
Still no luck getting either service to start until kernel-desktop-latest was installed.
This triggered dkms-openafs to build the libafs module and install it.
On reboot openafs-client.service was running.
$ uname -r
4.4.39-desktop-1.mga5
# cat /etc/openafs/cacheinfo
/afs:/var/cache/openafs:100000
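The cacheinfo line shown above has three colon-separated fields: the AFS mount point, the cache directory, and the cache size in kilobytes.  A quick parse of that line as a sketch:

```shell
# Split a cacheinfo line into its three fields (mount point,
# cache directory, cache size in KB).
line='/afs:/var/cache/openafs:100000'
IFS=: read -r mount cachedir size_kb <<EOF
$line
EOF
echo "mount=$mount cachedir=$cachedir size_kb=$size_kb"
```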
# f=/etc/sysconfig/openafs
# sed < ${f} -e s/^AFSD_ARGS=/#AFSD_ARGS=/ -e s/^$/AFSD_ARGS="-dynroot -fakestat -afsdb -stat 2000 -dcache 800 -daemons 3 -volumes 70 -nosettime"/ > ${f}+
# mv -f ${f} /tmp/ && mv ${f}+ ${f}
# lsmod | grep libafs
libafs                696320  2 
# systemctl status openafs-client.service
● openafs-client.service - OpenAFS Client Service
   Loaded: loaded (/usr/lib/systemd/system/openafs-client.service; enabled)
   Active: active (running) since Thu 2017-02-02 17:39:49 GMT; 7min ago
  Process: 1671 ExecStart=/sbin/afsd $AFSD_ARGS (code=exited, status=0/SUCCESS)
  Process: 1664 ExecStartPre=/sbin/modprobe libafs (code=exited, status=0/SUCCESS)
  Process: 1635 ExecStartPre=/bin/chmod 0644 /etc/openafs/CellServDB (code=exited, status=0/SUCCESS)
  Process: 1625 ExecStartPre=/bin/sed -n w/etc/openafs/CellServDB /etc/openafs/CellServDB.local /etc/openafs/CellServDB.dist (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/openafs-client.service
           └─1689 /sbin/afsd -dynroot -fakestat -afsdb

Feb 02 17:39:49 localhost afsd[1671]: afsd: All AFS daemons started.
Feb 02 17:39:49 localhost afsd[1671]: afsd: All AFS daemons started.

Checking back on the procedure found that the kernel-devel rpm was needed.
# rpm -qa | grep "^kernel-desktop-devel-latest"
kernel-desktop-devel-latest-4.4.39-1.mga5

Otherwise the build would not have succeeded.
# systemctl start chronyd.service
[root@localhost lcl]# systemctl status chronyd
● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled)
   Active: active (running) since Thu 2017-02-02 18:01:47 GMT; 27s ago
  Process: 4534 ExecStartPost=/usr/libexec/chrony-helper add-dhclient-servers (code=exited, status=0/SUCCESS)
  Process: 4505 ExecStart=/usr/sbin/chronyd -u chrony $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 4526 (chronyd)
   CGroup: /system.slice/chronyd.service
           └─4526 /usr/sbin/chronyd -u chrony

Feb 02 18:01:47 localhost chronyd[4526]: chronyd version 1.31.2 starting
Feb 02 18:01:47 localhost chronyd[4526]: Generated key 1
Feb 02 18:01:52 localhost chronyd[4526]: Selected source 188.114.116.1
Feb 02 18:01:52 localhost chronyd[4526]: System clock wrong by 1.536048 seco...d
Feb 02 18:01:54 localhost chronyd[4526]: Selected source 85.199.214.98

CellServDB already downloaded.
/etc/openafs]# cat ThisCell
grand.central.org
# [ ! -d  /afs/ ] && mkdir /afs/ || echo "/afs/ already exists"
/afs/ already exists
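The one-liner above happens to work here, but `test && mkdir || echo` is not a true if/else: if mkdir itself failed (permissions, say), the "already exists" branch would also run.  A sketch of the explicit form, using a throwaway path in place of /afs:

```shell
# Explicit if/else avoids the && ... || short-circuit pitfall.
# The path is a temporary stand-in for /afs.
d="$(mktemp -d)/afs"
if [ ! -d "$d" ]; then
    mkdir "$d" && echo "created $d"
else
    echo "$d already exists"
fi
```

Running it a second time prints the "already exists" message instead.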

# df /afs/
Filesystem      Size  Used Avail Use% Mounted on
AFS             2.0T     0  2.0T   0% /afs
That was obviously due to a mistake made earlier - this is a virtual machine.

Apart from that all seems to be well.
Tried to reset the cache size by modifying cacheinfo, unmounting /afs and restarting openafs-client but systemctl hung at that point.

Rebooted and checked /afs.  It contained a set of directories which matched the names in the CellServDB file, in a different order, but:
$ df -h /afs
Filesystem      Size  Used Avail Use% Mounted on
AFS             2.0T     0  2.0T   0% /afs
[lcl@localhost afs]$ cat /etc/openafs/cacheinfo
/afs:/var/cache/openafs:9600
Chose a site at random:
# ls /afs/zcu.cz/
common/        i386_linux24@  metainfo/  project/  software/  tftpboot/  usr/
i386_linux23/  i386_nt35/     novell/    public/   sysadmin/  users/

That is as far as I can take this.
Len Lawrence 2017-02-02 19:43:37 CET

Whiteboard: has_procedure MGA5-64-OK advisory => has_procedure MGA5-64-OK advisory MGA5-32-OK
Keywords: (none) => validated_update
CC: (none) => sysadmin-bugs

Comment 15 Mageia Robot 2017-02-02 20:18:01 CET
An update for this issue has been pushed to the Mageia Updates repository.

http://advisories.mageia.org/MGASA-2017-0037.html

Status: NEW => RESOLVED
Resolution: (none) => FIXED

