Bug 15473 - glusterfs new security issue CVE-2014-3619
Summary: glusterfs new security issue CVE-2014-3619
Status: RESOLVED FIXED
Alias: None
Product: Mageia
Classification: Unclassified
Component: Security
Version: 4
Hardware: All
OS: Linux
Priority: Normal
Severity: major
Target Milestone: ---
Assignee: QA Team
QA Contact: Sec team
URL: http://lwn.net/Vulnerabilities/636272/
Whiteboard: has_procedure advisory mga4-32-ok mga4-64-ok
Keywords: validated_update
Depends on:
Blocks: 14049
Reported: 2015-03-11 19:25 CET by David Walser
Modified: 2015-04-15 11:02 CEST
CC List: 3 users

See Also:
Source RPM: glusterfs-3.5.2-7.mga5.src.rpm
CVE:
Status comment:



Description David Walser 2015-03-11 19:25:01 CET
OpenSuSE has issued an advisory today (March 11):
http://lists.opensuse.org/opensuse-updates/2015-03/msg00031.html

Mageia 4 is also affected.

David Walser 2015-03-11 19:25:17 CET

CC: (none) => thomas
Whiteboard: (none) => MGA5TOO, MGA4TOO

Comment 1 Thomas Spuhler 2015-03-12 19:19:17 CET
I'll take this since the maintainer isn't active anymore.
But I'm going skiing for a week, so it will take a little while.
mga4 isn't in danger as it's not usable; see bug #14049. (I will work on that bug after the release of mga5; I remember it just needs to be committed, pushed, and tested.)

Status: NEW => ASSIGNED
Assignee: bugsquad => thomas

Comment 2 David Walser 2015-03-19 15:01:32 CET
OpenSuSE has issued an advisory for a newer version on March 18:
http://lists.opensuse.org/opensuse-updates/2015-03/msg00056.html

We should be able to borrow patches from OpenSuSE 13.1 (Comment 0) for Mageia 4 and from OpenSuSE 13.2 (see above) for Mageia 5.
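
For illustration, carrying such a patch in the package would look roughly like this in the spec file (a sketch; the patch file name below is hypothetical, the real ones come from the OpenSuSE packages):

# in the spec preamble (hypothetical patch name)
Patch10:   glusterfs-CVE-2014-3619-fragment-header.patch

%prep
%setup -q
# apply the borrowed fix (classic rpm patch syntax)
%patch10 -p1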
Comment 3 Thomas Spuhler 2015-03-23 02:10:47 CET
This may take some time. There is our open bug #14049 that needs to be fixed together with this. For Cauldron/mga5, OpenSuSE has upgraded to version 3.6.1; we have 3.5.2.
Comment 4 David Walser 2015-03-23 03:02:58 CET
OpenSuSE has version 3.5.2 in 13.2, which is exactly what the advisory in Comment 2 is for, so they have the exact same version we have.  I would think that taking those patches would be trivial (and certainly we could get them into Cauldron and still address the other bug later).  Certainly for Mageia 4, we'd want to fix the other issue too before issuing an update.
Comment 5 Thomas Spuhler 2015-03-23 16:58:46 CET
This bug has been resolved for MGA4, and it resolves bug #14049 too.
The security issue has been resolved by adding the multifrag patch from OpenSuSE.
Bug #14049 has been resolved by removing the init script and leaving the init to systemd only.
I have tested the system for bug #14049 and it starts without errors.

The following packages are in updates_testing:

glusterfs-3.4.1-1.1.mga4.src.rpm
lib64glusterfs0-3.4.1-1.1.mga4.x86_64.rpm
lib64glusterfs-devel-3.4.1-1.1.mga4.x86_64.rpm
glusterfs-common-3.4.1-1.1.mga4.x86_64.rpm
glusterfs-client-3.4.1-1.1.mga4.x86_64.rpm
glusterfs-server-3.4.1-1.1.mga4.x86_64.rpm
glusterfs-geo-replication-3.4.1-1.1.mga4.x86_64.rpm
glusterfs-debuginfo-3.4.1-1.1.mga4.x86_64.rpm
and corresponding i586 packages
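
To grab the candidates for testing, something like the following should work (a sketch; 'Core Updates Testing' is the usual Mageia medium name, adjust to your setup):

# urpmi.update 'Core Updates Testing'
# urpmi --searchmedia 'Core Updates Testing' glusterfs-server glusterfs-client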

Assignee: thomas => qa-bugs

Comment 6 David Walser 2015-03-23 17:04:31 CET
I haven't even seen a freeze push request for Cauldron yet.

Let's please wait until this is fixed in Cauldron before assigning a Mageia 4 update to QA.  Thanks.

CC: (none) => qa-bugs
Assignee: qa-bugs => thomas

Comment 7 Thomas Spuhler 2015-03-25 16:36:49 CET
This has been fixed in cauldron SVN. A freeze push was requested yesterday.
Assigning back to QA.

Assignee: thomas => qa-bugs

Comment 8 David Walser 2015-03-26 14:09:31 CET
Thanks Thomas!

Package list in Comment 5.

Advisory:
========================

Updated glusterfs packages fix security vulnerability:

glusterfs was vulnerable to a fragment header infinite loop denial of service
attack (CVE-2014-3619).

Also, the glusterfsd SysV init script was failing to properly start the
service.  This was fixed by replacing it with systemd unit files for the
service that work properly (mga#14049).

References:
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-3619
http://lists.opensuse.org/opensuse-updates/2015-03/msg00031.html
https://bugs.mageia.org/show_bug.cgi?id=14049
https://bugs.mageia.org/show_bug.cgi?id=15473

CC: qa-bugs => (none)
Version: Cauldron => 4
Blocks: (none) => 14049
Whiteboard: MGA5TOO, MGA4TOO => (none)

Comment 9 olivier charles 2015-04-04 12:44:19 CEST
Testing on Mageia 4:
- Mageia4x64: first VFS
- Mageia4x32 (1): second VFS (in VirtualBox)
- Mageia4x32 (2): gluster client (in VirtualBox)

Following the tutorial found here:
https://www.digitalocean.com/community/tutorials/how-to-create-a-redundant-storage-pool-using-glusterfs-on-ubuntu-servers

With current packages:
---------------------
On mageia4x64

- glusterfs-client-3.4.1-1.mga4.x86_64
- glusterfs-common-3.4.1-1.mga4.x86_64
- glusterfs-geo-replication-3.4.1-1.mga4.x86_64
- glusterfs-server-3.4.1-1.mga4.x86_64
- lib64glusterfs-devel-3.4.1-1.mga4.x86_64
- lib64glusterfs0-3.4.1-1.mga4.x86_64
- lib64openssl-devel-1.0.1m-1.mga4.x86_64


On Mageia4x32

- glusterfs-client-3.4.1-1.mga4.i586
- glusterfs-common-3.4.1-1.mga4.i586
- glusterfs-geo-replication-3.4.1-1.mga4.i586
- glusterfs-server-3.4.1-1.mga4.i586
- libglusterfs-devel-3.4.1-1.mga4.i586
- libglusterfs0-3.4.1-1.mga4.i586
- libopenssl-devel-1.0.1m-1.mga4.i586

Could not start glusterfsd service (which is bug 14049)
Could start service glusterd on 2 VFS
# service glusterd start

From mageia4x64

# gluster peer probe gluster1.droplet.com
peer probe: success
# gluster peer status
Number of Peers: 1

Hostname: gluster1.droplet.com
Port: 24007
Uuid: 3868b1d7-87d8-4000-aa9c-0d2b8aa8d654
State: Peer in Cluster (Connected)

# gluster volume create volume1 replica 2 transport tcp gluster0.droplet.com:/gluster-storage gluster1.droplet.com:/gluster-storage
volume create: volume1: success: please start the volume to access data

Checked that it had created a storage volume on each machine: OK

# gluster volume start volume1
volume start: volume1: success

Installed glusterfs-client on 3rd machine (mageia4x32 2)
# mkdir /storage-pool
# mount -t glusterfs gluster0.droplet.com:/volume1 /storage-pool
# cd /storage-pool/
# touch file{1..20}
Verified that the 20 files created on the client appeared on both nodes: OK

# gluster volume info
 
Volume Name: volume1
Type: Replicate
Volume ID: 085d030f-98a8-45dc-8f93-9e71195acfaf
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster0.droplet.com:/gluster-storage
Brick2: gluster1.droplet.com:/gluster-storage

Updated to testing packages:
---------------------------

On mageia4x64
- glusterfs-client-3.4.1-1.1.mga4.x86_64
- glusterfs-common-3.4.1-1.1.mga4.x86_64
- glusterfs-geo-replication-3.4.1-1.1.mga4.x86_64
- glusterfs-server-3.4.1-1.1.mga4.x86_64
- lib64glusterfs-devel-3.4.1-1.1.mga4.x86_64
- lib64glusterfs0-3.4.1-1.1.mga4.x86_64

On mageia4x32
- glusterfs-client-3.4.1-1.1.mga4.i586
- glusterfs-common-3.4.1-1.1.mga4.i586
- glusterfs-debuginfo-3.4.1-1.1.mga4.i586
- glusterfs-geo-replication-3.4.1-1.1.mga4.i586
- glusterfs-server-3.4.1-1.1.mga4.i586
- libglusterfs-devel-3.4.1-1.1.mga4.i586
- libglusterfs0-3.4.1-1.1.mga4.i586

Could now start the glusterfsd daemon:
# systemctl status glusterfsd
glusterfsd.service - GlusterFS brick processes (stopping only)
   Loaded: loaded (/usr/lib/systemd/system/glusterfsd.service; disabled)
   Active: active (exited) since sam. 2015-04-04 12:20:06 CEST; 21min ago

But could not issue any gluster commands:
# gluster peer probe gluster1.droplet.com
Connection failed. Please check if gluster daemon is operational.

Could not find how to start the gluster daemon.
# systemctl start glusterd
Failed to issue method call: Unit glusterd.service failed to load: No such file or directory.
# service glusterd start
Cannot find glusterd service

What am I missing?

CC: (none) => olchal

Comment 10 David Walser 2015-04-04 17:28:49 CEST
The service is called glusterfsd.service in our package, not glusterd.service.
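
For reference, one way to check which unit names the installed packages actually provide (a sketch using standard systemd and rpm queries):

# systemctl list-unit-files | grep -i gluster
# rpm -ql glusterfs-server | grep '\.service$'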
Comment 11 olivier charles 2015-04-04 17:38:40 CEST
(In reply to David Walser from comment #10)
> The service is called glusterfsd.service in our package, not
> glusterd.service.

Yes, I figured as much, though we had glusterd in the previous version of this package.
What I have not figured out yet is how to configure glusterfsd.service to create nodes.
I'm trying to configure /etc/glusterfs/glusterd.vol at the moment and will report back if successful.
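
For reference, the stock glusterd.vol that upstream ships for the management daemon looks roughly like this (a sketch from 3.4-era upstream defaults; the Mageia package may differ):

# cat /etc/glusterfs/glusterd.vol
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket,rdma
end-volume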
Comment 12 claire robinson 2015-04-07 16:44:13 CEST
Well done Olivier. This is one we haven't had before.

Adding the feedback marker then, as this update currently changes the name of the service.

Whiteboard: (none) => has_procedure feedback

Comment 13 David Walser 2015-04-07 17:17:39 CEST
Removing the feedback marker as this update does not change the name of the service.  It simply changed from using a SysV init script to a systemd unit file to define the service.

Whiteboard: has_procedure feedback => has_procedure

Comment 14 claire robinson 2015-04-07 17:28:52 CEST
(In reply to olivier charles from comment #9)

> Could not start glusterfsd service (which is bug 14049)
> Could start service glusterd on 2 VFS
> # service glusterd start

Could you clarify please, Olivier? It appears glusterd has become glusterfsd.
Comment 15 David Walser 2015-04-07 17:31:55 CEST
I can clarify.  In the previous package, a systemd unit from upstream called glusterd was mistakenly included; this was unintended.  The intended method of starting the service was the SysV init script, which was called glusterfsd.  The new systemd service in the update is called glusterfsd.service.  The name of the intended method of starting the service has not changed.
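
A hedged way to see the difference between the two packages (rpm can list a package file's payload with -qlp; file names from comment 5 and comment 9):

# rpm -qlp glusterfs-server-3.4.1-1.mga4.x86_64.rpm | grep -E 'init\.d|\.service$'
# rpm -qlp glusterfs-server-3.4.1-1.1.mga4.x86_64.rpm | grep -E 'init\.d|\.service$'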
Comment 16 claire robinson 2015-04-07 17:39:46 CEST
So if I'm following along correctly, previously the intended glusterfsd did not work but glusterd did. This update removes glusterd, which was the only working one, and provides a working glusterfsd.

If so, this effectively changes the service name.
Comment 17 David Walser 2015-04-07 18:45:49 CEST
Like I said, my interpretation is that the inclusion of glusterd.service was unintentional, so I wouldn't say it changes the service name.  But if glusterd.service really did work correctly, it makes me wonder why we aren't using the upstream-provided glusterd.service instead of a separate-source glusterfsd.service in the package SOURCES.  I don't know enough to say why Thomas chose that, so I guess feedback is required.

Whiteboard: has_procedure => has_procedure feedback

Comment 18 Thomas Spuhler 2015-04-07 18:55:55 CEST
(In reply to claire robinson from comment #16)
> So if I'm following along correctly, previously the intended glusterfsd did
> not work but glusterd did. This update removes glusterd which was the only
> working one and provides an equally working glusterfsd.
> 
> If so, this effectively changes the service name.

Not really. Neither of them worked. See Bug 14049 - glusterfsd doesn't start
Comment 19 David Walser 2015-04-07 18:58:00 CEST
If neither of them worked, then it's no issue, since we had to get rid of one of the names.  Any comment on why we aren't using the upstream service file?
Comment 20 claire robinson 2015-04-07 18:58:15 CEST
Right, but glusterd seemed to work; see comment 14 and comment 9.
Comment 21 Thomas Spuhler 2015-04-07 19:08:18 CEST
Claire, this bug is so old, I don't remember everything. I need to dig into it and it will take some time.
Comment 22 claire robinson 2015-04-07 19:13:48 CEST
Well, let's confirm it's the case first. I did ask Olivier to clarify before everybody joined in :)

It's pretty easy to check though, so I'll do so tomorrow if he doesn't today.
Comment 23 olivier charles 2015-04-07 22:57:02 CEST
I've been testing these update packages again. I can make the glusterfsd service start on my two glusterfs servers, but I cannot mount the glusterfs client anymore.

I've tried to configure /etc/glusterfs/glusterd.vol on both servers and the client, following tutorials I found on the web, but to no avail.

Previous commands I used with the glusterd daemon are not operational with the glusterfsd daemon.

So, I'm at a loss...
Comment 24 Thomas Spuhler 2015-04-08 22:15:38 CEST
I am very sorry for the confusion.
It looks as if both services (glusterd.service and glusterfsd.service) need to be running. I have added glusterd.service back and bumped the subrel to 2.
It's in updates_testing. Could you please retest? I do not have the resources to do the testing. (BTW, a nice tutorial.)
After a successful test, I will need to fix mga5 as well.
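
For the retest, bringing both units up should amount to something like this (a sketch, assuming the unit names discussed above):

# systemctl enable glusterd.service glusterfsd.service
# systemctl start glusterd.service glusterfsd.service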
Comment 25 David Walser 2015-04-08 22:19:17 CEST
Thanks Thomas.  This is indeed a confusing package :o)

Whiteboard: has_procedure feedback => has_procedure

Comment 26 Thomas Spuhler 2015-04-08 22:24:22 CEST
(In reply to David Walser from comment #25)
> Thanks Thomas.  This is indeed a confusing package :o)

I finally had some time to read about it. It seems to be a nice package for critical applications, but there isn't much buzz about it on our web site and the maintainers seem to have disappeared.

Hardware: i586 => All

Comment 27 olivier charles 2015-04-08 23:19:29 CEST
(In reply to David Walser from comment #25)
> Thanks Thomas.  This is indeed a confusing package :o)

Confusing but potentially quite useful. If nobody has done it before then, I can test the latest update on Friday, using both services this time, if they happen to work :)
Comment 28 olivier charles 2015-04-10 17:06:45 CEST
Testing on Mageia4x64 (server 1) and two Mageia4x32 machines (server 2 + client)

With latest updated testing packages:
------------------------------------

- glusterfs-client-3.4.1-1.2.mga4.x86_64
- glusterfs-common-3.4.1-1.2.mga4.x86_64
- glusterfs-debuginfo-3.4.1-1.2.mga4.x86_64
- glusterfs-geo-replication-3.4.1-1.2.mga4.x86_64
- glusterfs-server-3.4.1-1.2.mga4.x86_64
- lib64glusterfs-devel-3.4.1-1.2.mga4.x86_64
- lib64glusterfs0-3.4.1-1.2.mga4.x86_64

- glusterfs-client-3.4.1-1.2.mga4.i586
- glusterfs-common-3.4.1-1.2.mga4.i586
- glusterfs-debuginfo-3.4.1-1.2.mga4.i586
- glusterfs-geo-replication-3.4.1-1.2.mga4.i586
- glusterfs-server-3.4.1-1.2.mga4.i586
- libglusterfs-devel-3.4.1-1.2.mga4.i586
- libglusterfs0-3.4.1-1.2.mga4.i586

Same procedure as comment 9.

All OK with the glusterd service.

I am still unable to fathom glusterfsd.service, which I can start, but which is not required for the procedure I tried (disabling and stopping it makes no difference).

???
Comment 29 Thomas Spuhler 2015-04-10 18:51:13 CEST
The unit file is quite strange. There is a little info in this bug that may help show where to look:
https://bugzilla.redhat.com/show_bug.cgi?id=1022542
It seems this is used to start/stop the "bricks".
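
One way to see the brick processes it is meant to manage (hedged; each brick runs as its own glusterfsd process once a volume is started):

# pgrep -af glusterfsd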
Comment 30 olivier charles 2015-04-11 02:15:52 CEST
Thanks Thomas,

From the link you provided and what I found here: http://blog.nixpanic.net/2013/12/gluster-and-not-restarting-brick.html

I understand that glusterfsd.service is used to force a restart of the bricks after an update in order to have the new libraries used.

So I guess it cannot be tested before a future update.
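
A hedged sketch of how that would be exercised on a future update (volume name from comment 9; 'start ... force' asks glusterd to respawn any bricks that are not running):

# systemctl restart glusterfsd
# gluster volume start volume1 force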

Whatever the case:
- the glusterd and glusterfsd services are running smoothly with these updated testing packages.
- glusterfs-server and glusterfs-client perform well on Mageia4x64 (real hardware) and Mageia4x32 (VMs).

I leave it to experienced QA testers to decide whether it is an OK or not :)
Comment 31 Thomas Spuhler 2015-04-11 06:36:50 CEST
Olivier, thanks a lot for all the testing. We have both learned from it.
This seems to be a good program; maybe we should advertise it on the wiki.
Comment 32 claire robinson 2015-04-11 14:22:34 CEST
I think you're right Olivier. Well done all!

Adding the OKs

Whiteboard: has_procedure => has_procedure mga4-32-ok mga4-64-ok

Comment 33 claire robinson 2015-04-11 15:04:20 CEST
Validating. Advisory uploaded.

Please push to 4 updates

Thanks

Keywords: (none) => validated_update
Whiteboard: has_procedure mga4-32-ok mga4-64-ok => has_procedure advisory mga4-32-ok mga4-64-ok
CC: (none) => sysadmin-bugs

Comment 34 Mageia Robot 2015-04-15 11:02:15 CEST
An update for this issue has been pushed to Mageia Updates repository.

http://advisories.mageia.org/MGASA-2015-0145.html

Status: ASSIGNED => RESOLVED
Resolution: (none) => FIXED

