OpenSuSE has issued an advisory today (March 11):
http://lists.opensuse.org/opensuse-updates/2015-03/msg00031.html

Mageia 4 is also affected.
CC: (none) => thomas
Whiteboard: (none) => MGA5TOO, MGA4TOO
I'll take this since the maintainer isn't active anymore. But I'm going skiing for a week, so it will take a little while. mga4 isn't in danger as it's not usable; see Bug #14049. (I will work on this bug after the release of mga5. I remember it just needs to be committed, pushed and tested.)
Status: NEW => ASSIGNED
Assignee: bugsquad => thomas
OpenSuSE has issued an advisory for a newer version on March 18:
http://lists.opensuse.org/opensuse-updates/2015-03/msg00056.html

We should be able to borrow their patches: from OpenSuSE 13.1 (Comment 0) for Mageia 4, and from OpenSuSE 13.2 (see above) for Mageia 5.
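Borrowing a patch from another distribution just means carrying their diff in our SOURCES and applying it against our source tree. As a generic illustration (all file names here are invented; the real patch in this bug was OpenSuSE's multifrag fix for CVE-2014-3619):

```shell
#!/bin/bash
# Generic sketch of applying a borrowed patch. Names are invented for
# illustration; this is not the actual glusterfs source or patch.
set -e
work=$(mktemp -d)
cd "$work"

# A stand-in "source file"
printf 'old line\n' > rpc.c

# A stand-in patch, as it might be carried in SOURCES/
cat > multifrag.patch <<'EOF'
--- a/rpc.c
+++ b/rpc.c
@@ -1 +1 @@
-old line
+new line
EOF

# Apply it the way rpmbuild's %patch stage typically would (with -p1,
# stripping the leading a/ and b/ path components)
patch -p1 < multifrag.patch
cat rpc.c
```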
This may take some time. There is our open bug #14049 that needs to be fixed together with this. For Cauldron/mga5, OpenSuSE has upgraded to version 3.6.1; we have 3.5.2.
OpenSuSE has version 3.5.2 in 13.2, which is exactly what the advisory in Comment 2 is for, so they have the exact same version we have. I would think that taking those patches would be trivial (and certainly we could get them into Cauldron and still address the other bug later). Certainly for Mageia 4, we'd want to fix the other issue too before issuing an update.
This bug has been resolved for MGA4, and it resolves bug #14049 too.

The security issue has been resolved by adding the multifrag patch from OpenSuSE. Bug #14049 has been resolved by removing the init script and leaving the init to systemd only. I have tested the system for bug #14049 and it starts without errors.

The following packages are in updates_testing:
glusterfs-3.4.1-1.1.mga4.src.rpm
lib64glusterfs0-3.4.1-1.1.mga4.x86_64.rpm
lib64glusterfs-devel-3.4.1-1.1.mga4.x86_64.rpm
glusterfs-common-3.4.1-1.1.mga4.x86_64.rpm
glusterfs-client-3.4.1-1.1.mga4.x86_64.rpm
glusterfs-server-3.4.1-1.1.mga4.x86_64.rpm
glusterfs-geo-replication-3.4.1-1.1.mga4.x86_64.rpm
glusterfs-debuginfo-3.4.1-1.1.mga4.x86_64.rpm
and the corresponding i586 packages
Assignee: thomas => qa-bugs
I haven't even seen a freeze push request for Cauldron yet. Let's please wait until this is fixed in Cauldron before assigning a Mageia 4 update to QA. Thanks.
CC: (none) => qa-bugs
Assignee: qa-bugs => thomas
This has been fixed in cauldron SVN. A freeze push was requested yesterday. Assigning back to QA.
Thanks Thomas! Package list in Comment 5.

Advisory:
========================

Updated glusterfs packages fix security vulnerability:

glusterfs was vulnerable to a fragment header infinite loop denial of service attack (CVE-2014-3619).

Also, the glusterfsd SysV init script was failing to properly start the service. This was fixed by replacing it with systemd unit files for the service that work properly (mga#14049).

References:
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-3619
http://lists.opensuse.org/opensuse-updates/2015-03/msg00031.html
https://bugs.mageia.org/show_bug.cgi?id=14049
https://bugs.mageia.org/show_bug.cgi?id=15473
CC: qa-bugs => (none)
Version: Cauldron => 4
Blocks: (none) => 14049
Whiteboard: MGA5TOO, MGA4TOO => (none)
Testing on Mageia4:
- Mageia4x64: first VFS
- Mageia4x32 1: second VFS (in VirtualBox)
- Mageia4x32 2: gluster client (in VirtualBox)

Following the tutorial found here:
https://www.digitalocean.com/community/tutorials/how-to-create-a-redundant-storage-pool-using-glusterfs-on-ubuntu-servers

With current packages:
---------------------
On Mageia4x64:
- glusterfs-client-3.4.1-1.mga4.x86_64
- glusterfs-common-3.4.1-1.mga4.x86_64
- glusterfs-geo-replication-3.4.1-1.mga4.x86_64
- glusterfs-server-3.4.1-1.mga4.x86_64
- lib64glusterfs-devel-3.4.1-1.mga4.x86_64
- lib64glusterfs0-3.4.1-1.mga4.x86_64
- lib64openssl-devel-1.0.1m-1.mga4.x86_64

On Mageia4x32:
- glusterfs-client-3.4.1-1.mga4.i586
- glusterfs-common-3.4.1-1.mga4.i586
- glusterfs-geo-replication-3.4.1-1.mga4.i586
- glusterfs-server-3.4.1-1.mga4.i586
- libglusterfs-devel-3.4.1-1.mga4.i586
- libglusterfs0-3.4.1-1.mga4.i586
- libopenssl-devel-1.0.1m-1.mga4.i586

Could not start the glusterfsd service (which is bug 14049).
Could start the glusterd service on the 2 VFS:
# service glusterd start

From Mageia4x64:
# gluster peer probe gluster1.droplet.com
peer probe: success
# gluster peer status
Number of Peers: 1
Hostname: gluster1.droplet.com
Port: 24007
Uuid: 3868b1d7-87d8-4000-aa9c-0d2b8aa8d654
State: Peer in Cluster (Connected)
# gluster volume create volume1 replica 2 transport tcp gluster0.droplet.com:/gluster-storage gluster1.droplet.com:/gluster-storage
volume create: volume1: success: please start the volume to access data

Checked it had created a volume storage on each machine: OK

# gluster volume start volume1
volume start: volume1: success

Installed glusterfs-client on the 3rd machine (Mageia4x32 2):
# mkdir /storage-pool
# mount -t glusterfs gluster0.droplet.com:/volume1 /storage-pool
# cd /storage-pool/
# touch file{1..20}

Verified the 20 files had been created from the client on both nodes: OK

# gluster volume info
Volume Name: volume1
Type: Replicate
Volume ID: 085d030f-98a8-45dc-8f93-9e71195acfaf
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster0.droplet.com:/gluster-storage
Brick2: gluster1.droplet.com:/gluster-storage

Updated to testing packages:
---------------------------
On Mageia4x64:
- glusterfs-client-3.4.1-1.1.mga4.x86_64
- glusterfs-common-3.4.1-1.1.mga4.x86_64
- glusterfs-geo-replication-3.4.1-1.1.mga4.x86_64
- glusterfs-server-3.4.1-1.1.mga4.x86_64
- lib64glusterfs-devel-3.4.1-1.1.mga4.x86_64
- lib64glusterfs0-3.4.1-1.1.mga4.x86_64

On Mageia4x32:
- glusterfs-client-3.4.1-1.1.mga4.i586
- glusterfs-common-3.4.1-1.1.mga4.i586
- glusterfs-debuginfo-3.4.1-1.1.mga4.i586
- glusterfs-geo-replication-3.4.1-1.1.mga4.i586
- glusterfs-server-3.4.1-1.1.mga4.i586
- libglusterfs-devel-3.4.1-1.1.mga4.i586
- libglusterfs0-3.4.1-1.1.mga4.i586

Could now start the glusterfs daemon:
# systemctl status glusterfsd
glusterfsd.service - GlusterFS brick processes (stopping only)
   Loaded: loaded (/usr/lib/systemd/system/glusterfsd.service; disabled)
   Active: active (exited) since sam. 2015-04-04 12:20:06 CEST; 21min ago

But could not issue any gluster commands:
# gluster peer probe gluster1.droplet.com
Connection failed. Please check if gluster daemon is operational.

Could not find how to start the gluster daemon:
# systemctl start glusterd
Failed to issue method call: Unit glusterd.service failed to load: No such file or directory.
# service glusterd start
Cannot find glusterd service

What am I missing?
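The file-creation step in that procedure relies on bash brace expansion; it can be sanity-checked locally without a gluster mount at all. A minimal sketch (the scratch directory stands in for the mounted volume; the bug report used /storage-pool on the client):

```shell
#!/bin/bash
# Create a scratch directory standing in for the mounted gluster volume
pool=$(mktemp -d)
cd "$pool" || exit 1

# Same brace expansion used in the test procedure: creates file1..file20
touch file{1..20}

# Count the files to confirm all 20 were created
ls -1 | wc -l
```

On a real deployment the interesting check is the one done above: that the same 20 files then appear in the brick directories on both replica nodes.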
CC: (none) => olchal
The service is called glusterfsd.service in our package, not glusterd.service.
(In reply to David Walser from comment #10)
> The service is called glusterfsd.service in our package, not
> glusterd.service.

Yes, I figured as much, though we had glusterd in the previous version of this package. What I have not figured out yet is how to configure glusterfsd.service to create nodes. I'm trying to configure /etc/glusterfs/glusterd.vol at the moment and will report if successful.
Well done Olivier. This is one we haven't had before. Adding feedback marker then as this update currently changes the name of the service.
Whiteboard: (none) => has_procedure feedback
Removing the feedback marker as this update does not change the name of the service. It simply changed from using a SysV init script to a systemd unit file to define the service.
Whiteboard: has_procedure feedback => has_procedure
(In reply to olivier charles from comment #9)
> Could not start glusterfsd service (which is bug 14049)
> Could start service glusterd on 2 VFS
> # service glusterd start

Could you clarify please, Olivier? It appears glusterd has become glusterfsd.
I can clarify. In the previous package, a systemd unit from upstream called glusterd was mistakenly included in the package, this was unintended. The intended method of starting the service was the SysV init script which was called glusterfsd. The new systemd service in the update is called glusterfsd.service. The name of the intended method of starting the service has not changed.
So if I'm following along correctly, previously the intended glusterfsd did not work but glusterd did. This update removes glusterd which was the only working one and provides an equally working glusterfsd. If so, this effectively changes the service name.
Like I said, my interpretation is that the inclusion of glusterd.service was unintentional, so I wouldn't say it changes the service name, but if glusterd.service really did work correctly, it makes me wonder why we aren't using the upstream provided glusterd.service instead of a separate-source glusterfsd.service in the package SOURCES. I don't know enough to say why Thomas chose that, so I guess feedback is required.
Whiteboard: has_procedure => has_procedure feedback
(In reply to claire robinson from comment #16)
> So if I'm following along correctly, previously the intended glusterfsd did
> not work but glusterd did. This update removes glusterd which was the only
> working one and provides an equally working glusterfsd.
>
> If so, this effectively changes the service name.

Not really. Neither of them worked. See Bug 14049 - glusterfsd doesn't start.
If neither of them worked, then it's no issue, since we had to get rid of one of the names. Any comment on why we aren't using the upstream service file?
Right, but glusterd seemed to work; see comment 14 and comment 9.
Claire, this bug is so old, I don't remember everything. I need to dig into it and it will take some time.
Well, let's confirm it's the case first. I did ask Olivier to clarify before everybody joined in :) It's pretty easy to check though, so I'll do so tomorrow if he doesn't today.
I've been testing these update packages again. I can make the glusterfsd service start on my two glusterfs servers but cannot mount the glusterfs client anymore. I've tried to configure /etc/glusterfs/glusterd.vol following tutorials I found on the web, on both servers and the client, but to no avail. The commands I previously used with the glusterd daemon are not operational with the glusterfsd daemon. So, I'm at a loss...
I am very sorry for the confusion. It looks as if both services (glusterd.service and glusterfsd.service) need to be running. I have added glusterd.service and bumped the subrel to 2. It's in updates_testing. Could you please retest? I do not have the resources to do the testing myself. (BTW, a nice tutorial.)

After a successful test, I would need to fix mga5 as well.
Thanks Thomas. This is indeed a confusing package :o)
(In reply to David Walser from comment #25)
> Thanks Thomas. This is indeed a confusing package :o)

I finally had some time to read about it. It seems to be a nice package for critical applications. But there isn't much fuss about it on our web site, and the maintainers seem to have disappeared.
Hardware: i586 => All
(In reply to David Walser from comment #25)
> Thanks Thomas. This is indeed a confusing package :o)

Confusing but potentially quite useful. If nobody has done it before then, I can test the latest update on Friday, using both services this time, if they should happen to work :)
Testing on Mageia4x64 (server 1) and 2 Mageia4x32 (server 2 + client)

With the latest updates_testing packages:
------------------------------------
- glusterfs-client-3.4.1-1.2.mga4.x86_64
- glusterfs-common-3.4.1-1.2.mga4.x86_64
- glusterfs-debuginfo-3.4.1-1.2.mga4.x86_64
- glusterfs-geo-replication-3.4.1-1.2.mga4.x86_64
- glusterfs-server-3.4.1-1.2.mga4.x86_64
- lib64glusterfs-devel-3.4.1-1.2.mga4.x86_64
- lib64glusterfs0-3.4.1-1.2.mga4.x86_64

- glusterfs-client-3.4.1-1.2.mga4.i586
- glusterfs-common-3.4.1-1.2.mga4.i586
- glusterfs-debuginfo-3.4.1-1.2.mga4.i586
- glusterfs-geo-replication-3.4.1-1.2.mga4.i586
- glusterfs-server-3.4.1-1.2.mga4.i586
- libglusterfs-devel-3.4.1-1.2.mga4.i586
- libglusterfs0-3.4.1-1.2.mga4.i586

Same procedure as comment 9. All OK with the glusterd service.

I am still unable to fathom glusterfsd.service, which I can start but which is not required for the procedure I tried (disabling and stopping it makes no difference). ???
The service file is quite strange. There is a little info in this bug that may point to where to look:
https://bugzilla.redhat.com/show_bug.cgi?id=1022542
It seems this is used to start/stop the "bricks".
Thanks Thomas,

From the link you provided and what I found here:
http://blog.nixpanic.net/2013/12/gluster-and-not-restarting-brick.html

I understand that glusterfsd.service is used to force a restart of the bricks after an update, in order to have the new libraries used. So I guess it cannot be tested before a future update.

Whatever the case:
- The glusterd and glusterfsd services are running smoothly with these updates_testing packages.
- glusterfs-server and glusterfs-client perform well on Mageia4x64 (real hardware) and Mageia4x32 (VMs).

I leave it to experienced QA testers to decide if it is an OK or not :)
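For illustration, a "stopping only" unit of the kind seen in the status output earlier in this bug (active (exited), with the real work in ExecStop) generally looks something like the sketch below. This is an assumption-laden sketch based on the Fedora/Red Hat approach referenced above, not the actual unit file shipped in the Mageia package:

```ini
# Hypothetical sketch of a "stopping only" unit such as glusterfsd.service.
# Starting it does nothing lasting (ExecStart succeeds and exits), but
# RemainAfterExit keeps the unit "active (exited)" so that on stop or
# shutdown systemd runs ExecStop, which terminates the brick processes.
[Unit]
Description=GlusterFS brick processes (stopping only)
After=network.target glusterd.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
# Send SIGTERM to any running brick processes (process name assumed)
ExecStop=/usr/bin/pkill --signal TERM glusterfsd

[Install]
WantedBy=multi-user.target
```

This matches the observed behaviour in comment 9: the unit can be "started" successfully yet has no effect until something actually stops it after the brick processes have been updated.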
Olivier, thanks a lot for all the testing. We have both learned from it. This seems to be a good program, and maybe we should advertise it on the wiki.
I think you're right, Olivier. Well done all! Adding the OKs.
Whiteboard: has_procedure => has_procedure mga4-32-ok mga4-64-ok
Validating. Advisory uploaded. Please push to 4 updates.

Thanks
Keywords: (none) => validated_update
Whiteboard: has_procedure mga4-32-ok mga4-64-ok => has_procedure advisory mga4-32-ok mga4-64-ok
CC: (none) => sysadmin-bugs
An update for this issue has been pushed to Mageia Updates repository. http://advisories.mageia.org/MGASA-2015-0145.html
Status: ASSIGNED => RESOLVED
Resolution: (none) => FIXED