| Summary: | varnish new security issue CVE-2013-4484 | | |
|---|---|---|---|
| Product: | Mageia | Reporter: | David Walser <luigiwalser> |
| Component: | Security | Assignee: | QA Team <qa-bugs> |
| Status: | RESOLVED FIXED | QA Contact: | Sec team <security> |
| Severity: | major | | |
| Priority: | Normal | CC: | davidwhodgins, fundawang, mageia, mageia, oe, rverschelde, stormi-mageia, sysadmin-bugs, tmb |
| Version: | 4 | Keywords: | validated_update |
| Target Milestone: | --- | | |
| Hardware: | i586 | | |
| OS: | Linux | | |
| URL: | http://lwn.net/Vulnerabilities/573942/ | | |
| Whiteboard: | MGA3TOO has_procedure advisory mga3-32-ok mga3-64-ok mga4-32-ok mga4-64-ok | | |
| Source RPM: | varnish-3.0.3-7.mga3.src.rpm | CVE: | |
| Status comment: | | | |
| Bug Depends on: | | | |
| Bug Blocks: | 11817 | | |
Description
David Walser
2013-11-15 19:14:30 CET

David Walser
2013-11-15 19:14:36 CET
Whiteboard: (none) => MGA2TOO

PoC: https://www.varnish-cache.org/trac/ticket/1367

Advisory uploaded. Please remove the 'advisory' whiteboard tag if anything changes.
Whiteboard: MGA2TOO => MGA2TOO has_procedure advisory

Testing mga3 32

tl;dr: This is still using init scripts, but they don't seem well configured. When starting, it thinks it fails but actually does start; when stopping, it thinks it succeeds but actually fails. It's not a regression though, and it does actually work.

Before
------
When started, the varnish service times out after ~5 mins and declares it has failed. It is actually running though, and has created a pid file at /run/varnish/varnish.pid owned by root. It seems to be starting 2 instances; not sure if that is normal, like apache.

varnishd[16606]: child (16607) Started
varnishd[16606]: Child (16607) said Child starts
varnishd[16606]: Child (16607) said SMF.s0 mmap'ed 1073741824 bytes of 1073741824
varnish[16584]: Starting varnish HTTP accelerator: [ OK ]
systemd[1]: Failed to start LSB: start and stop varnishd.
systemd[1]: Unit varnish.service entered failed state

# ps aux | grep varn
root 18514 0.0 0.0 87948 1276 ? Ss 13:45 0:00 /usr/sbin/varnishd -P /run/varnish/varnish.pid -a :6081 -T localhost:6082 -f /etc/varnish/default.vcl -u varnish -g varnish -s file,/var/lib/varnish/varnish_storage.bin,1G
varnish 18515 0.0 0.0 1204844 1232 ? Sl 13:45 0:00 /usr/sbin/varnishd -P /run/varnish/varnish.pid -a :6081 -T localhost:6082 -f /etc/varnish/default.vcl -u varnish -g varnish -s file,/var/lib/varnish/varnish_storage.bin,1G

# telnet localhost 6082
Trying 127.0.0.1...
Connected to localhost (127.0.0.1).
Escape character is '^]'.
200 211
-----------------------------
Varnish Cache CLI 1.0
-----------------------------
Linux,3.8.13.4-desktop586-1.mga3,i686,-sfile,-smalloc,-hcritbit

Type 'help' for command list.
Type 'quit' to close CLI session.

# service varnish stop
Stopping varnish (via systemctl): [ OK ]
# ll /var/lock/subsys/varnish
-rw-r--r-- 1 root root 0 Nov 21 13:45 /var/lock/subsys/varnish
# ll /run/varnish/varnish.pid
-rw-r--r-- 1 root root 5 Nov 21 13:45 /run/varnish/varnish.pid
# ps aux | grep varn
root 18514 0.0 0.0 87948 1276 ? Ss 13:45 0:00 /usr/sbin/varnishd -P /run/varnish/varnish.pid -a :6081 -T localhost:6082 -f /etc/varnish/default.vcl -u varnish -g varnish -s file,/var/lib/varnish/varnish_storage.bin,1G
varnish 18515 0.0 0.0 1204844 1232 ? Sl 13:45 0:00 /usr/sbin/varnishd -P /run/varnish/varnish.pid -a :6081 -T localhost:6082 -f /etc/varnish/default.vcl -u varnish -g varnish -s file,/var/lib/varnish/varnish_storage.bin,1G

Browsing http://localhost:6081 or http://localhost:6081/phpmyadmin/ shows the cache is working.

Killing the processes and cleaning up..
# kill 18514
# kill 18515
-bash: kill: (18515) - No such process
# rm /var/lock/subsys/varnish
rm: remove regular empty file '/var/lock/subsys/varnish'? y
# rm /run/varnish/varnish.pid
rm: remove regular file '/run/varnish/varnish.pid'? y

After
-----
Same issues with the init script: it does start, but causes a long timeout and doesn't stop.

# curl -I http://localhost:6081
HTTP/1.1 200 OK
Server: Apache/2.4.4 (Mageia) PHP/5.4.19
Last-Modified: Wed, 15 May 2013 20:46:15 GMT
ETag: "83-4dcc7d743f3c0"
Content-Type: text/html; charset=UTF-8
Content-Length: 131
Accept-Ranges: bytes
Date: Thu, 21 Nov 2013 14:08:20 GMT
X-Varnish: 466057771    <---------|
Age: 0                  <---------| Varnish stuff..
Via: 1.1 varnish        <---------|
Connection: keep-alive

# varnishstat
Shows an info page like top.

# varnishlog
0 CLI - Rd ping
0 CLI - Wr 200 19 PONG 1385043133 1.0
0 CLI - Rd ping
0 CLI - Wr 200 19 PONG 1385043136 1.0
0 CLI - Rd ping
0 CLI - Wr 200 19 PONG 1385043139 1.0
0 CLI - Rd ping
0 CLI - Wr 200 19 PONG 1385043142 1.0
14 BackendClose - default
14 BackendOpen b default 127.0.0.1 40238 127.0.0.1 80
..etc
Shows loads of info when browsing through it.

Testing mga2 64
Different from mga3 in that it instantly fails to start and really isn't started.
# systemctl status varnish.service
varnish.service - LSB: start and stop varnishd
Loaded: loaded (/etc/rc.d/init.d/varnish)
Active: failed (Result: exit-code) since Thu, 21 Nov 2013 14:35:13 +0000; 19s ago
Process: 2328 ExecStart=/etc/rc.d/init.d/varnish start (code=exited, status=127)
CGroup: name=systemd:/system/varnish.service
# tail /var/log/syslog
Nov 21 14:34:15 mga264 perl: [RPM] lib64varnish1-3.0.2-1.1.mga2.x86_64 installed
Nov 21 14:34:16 mga264 perl: [RPM] varnish-3.0.2-1.1.mga2.x86_64 installed
Nov 21 14:34:16 mga264 systemd[1]: Reloading.
Nov 21 14:34:17 mga264 systemd[1]: Reloading.
Nov 21 14:34:18 mga264 systemd[1]: Reloading.
Nov 21 14:34:18 mga264 perl: [RPM] varnish-3.0.2-1.mga2.x86_64 removed
Nov 21 14:34:18 mga264 perl: [RPM] lib64varnish1-3.0.2-1.mga2.x86_64 removed
Nov 21 14:35:13 mga264 varnish[2328]: Starting varnish HTTP accelerator: [FAILED]
Nov 21 14:35:13 mga264 systemd[1]: varnish.service: control process exited, code=exited status=127
Nov 21 14:35:13 mga264 systemd[1]: Unit varnish.service entered failed state.

Whiteboard: MGA2TOO has_procedure advisory => MGA2TOO has_procedure advisory feedback

CC'ing Damien, as he was the packager for the Mageia 2 version, and Funda, who worked on the Mageia 3 package. Any ideas on the failure in Comment 3?
CC: (none) => fundawang, mageia

Removing Mageia 2 from the whiteboard due to EOL: http://blog.mageia.org/en/2013/11/21/farewell-mageia-2/

Funda or Damien, we'd still like to fix the issues found in Comment 2.
Whiteboard: MGA2TOO has_procedure advisory feedback => has_procedure advisory feedback

No response from the packagers sadly, so bug 11817 has been created for the service not starting or stopping properly. Validating this one with the bug still present; we can't allow security updates to sit indefinitely.

Advisory updated to remove mga2.

Could sysadmin please push from 3 core/updates_testing to updates. Thanks!
Keywords: (none) => validated_update

Please try varnish-3.0.3-7.2.mga3 & varnish-3.0.3-10.mga4, which use systemd service files instead of sysv scripts.
CC: (none) => oe

Thanks Oden. Unvalidating.
Keywords: validated_update => (none)
claire robinson
2013-11-29 13:35:00 CET
Whiteboard: has_procedure advisory feedback => has_procedure

Advisory updated.
Whiteboard: has_procedure => has_procedure advisory

Errors in the post scriptlet:
1/2: libvarnish1 ##################################################################################################
2/2: varnish ##################################################################################################
error reading information on service varnish: No such file or directory
error reading information on service varnishlog: No such file or directory
error reading information on service varnishncsa: No such file or directory
warning: %post(varnish-3.0.3-7.2.mga3.i586) scriptlet failed, exit status 1
ERROR: 'script' failed for varnish-3.0.3-7.2.mga3.i586
CC: (none) => davidwhodgins
Dave Hodgins
2013-11-30 16:19:41 CET
Whiteboard: has_procedure advisory => has_procedure advisory feeback

Dave Hodgins
2013-11-30 18:28:14 CET
Whiteboard: has_procedure advisory feeback => has_procedure advisory feedback

David/Oden, what do you want to do with this one, please?

This should be fixable. It was converted to use systemd services, but the %post scriptlets that error out are still using the service and chkconfig commands. Those should be changed to use our standard service macros (see the sketch below).

I believe I've fixed it in varnish-3.0.3-11.mga4 and varnish-3.0.3-7.3.mga3.
Whiteboard: has_procedure advisory feedback => has_procedure advisory
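As a rough illustration of the macro change suggested above (a sketch only, using the Mageia rpm-helper macros; the exact macro arguments and service list in the real spec may differ):

%post
# register and conditionally restart each shipped service
%_post_service varnish
%_post_service varnishlog
%_post_service varnishncsa

%preun
# stop and deregister each service on package removal
%_preun_service varnish
%_preun_service varnishlog
%_preun_service varnishncsa

These macros are meant to take care of service registration and conditional restart on upgrade, instead of calling service and chkconfig by hand in the scriptlets.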
Thanks David. Advisory updated.
David Walser
2013-12-13 00:00:22 CET
Blocks: (none) => 11817

Silently fails to start now, unfortunately.
# systemctl -a status varnish.service
varnish.service - Varnish a high-perfomance HTTP accelerator
Loaded: loaded (/usr/lib/systemd/system/varnish.service; disabled)
Active: failed (Result: exit-code) since Mon, 2013-12-16 08:05:04 GMT; 1min 24s ago
Process: 4606 ExecStart=/usr/sbin/varnishd -P /var/run/varnish.pid -f $VARNISH_VCL_CONF -a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} -T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} -t $VARNISH_TTL -w ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} -u $VARNISH_USER -g $VARNISH_GROUP -S $VARNISH_SECRET_FILE -s $VARNISH_STORAGE $DAEMON_OPTS (code=exited, status=0/SUCCESS)
Main PID: 4616 (code=exited, status=2)
CGroup: name=systemd:/system/varnish.service
varnishd[4616]: Platform: Linux,3.10.16-desktop-1.mga3,x86_64,-sfile,-smalloc,-hcritbit
systemd[1]: Started Varnish a high-perfomance HTTP accelerator.
systemd[1]: varnish.service: main process exited, code=exited, status=2/INVALIDARGUMENT
systemd[1]: Unit varnish.service entered failed state

Whiteboard: has_procedure advisory => has_procedure advisory feedback

Confirmed it isn't actually running, as it was previously.

Also, on i586 it doesn't seem to have created varnish.service in systemd, or anything in /etc/init.d/:

# systemctl start varnish<TAB>
varnishlog.service  varnishncsa.service

Oh, never mind, my mistake. It has actually created the service file; it just isn't autocompleting.

On i586 it at least declares a failure; on x86_64 it fails silently, maybe due to a quicker computer. There is no mention of an invalid argument on i586, though.
# systemctl -a start varnish.service
Job for varnish.service failed. See 'systemctl status varnish.service' and 'journalctl -n' for details.
# systemctl -a status varnish.service
varnish.service - Varnish a high-perfomance HTTP accelerator
Loaded: loaded (/usr/lib/systemd/system/varnish.service; disabled)
Active: failed (Result: resources) since Mon, 2013-12-16 08:25:59 GMT; 20s ago
Process: 30124 ExecStart=/usr/sbin/varnishd -P /var/run/varnish.pid -f $VARNISH_VCL_CONF -a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} -T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} -t $VARNISH_TTL -w ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} -u $VARNISH_USER -g $VARNISH_GROUP -S $VARNISH_SECRET_FILE -s $VARNISH_STORAGE $DAEMON_OPTS (code=exited, status=0/SUCCESS)
CGroup: name=systemd:/system/varnish.service
varnishd[30137]: Platform: Linux,3.8.13.4-desktop586-1.mga3,i686,-sfile,-smalloc,-hcritbit
systemd[1]: Failed to start Varnish a high-perfomance HTTP accelerator.
systemd[1]: Unit varnish.service entered failed state
# ps aux | grep -v grep | grep varnish
#
Another red herring. It was failing due to stale stuff in /run, but once that is removed it now produces the invalid argument message on i586 too.

So, looking at 3.0.3-7.3.mga3, there appear to be a couple of things wrong:

1. The file /etc/varnish/secret does not exist. This is what primarily causes the daemon startup to fail (running it with the correct arguments and the -d (debug) option pointed me at this problem).

2. The PID file seems to be in the wrong location. tmpfiles are included to create /run/varnish, but the pid file is specified as [/var]/run/varnish.pid.

3. The PID file problem above isn't really a problem, as varnish is started as root before dropping privs, so it can write its pid file fine. Now, the latter issue is interesting, as it seems to be the initial process (i.e. the root one) that initialises the storage file, and it seems to be owned by root. This would lead me to believe that it's a bit broken generally, as the worker threads would not have access to it.

I would recommend the following (a rough sketch of a unit with these changes is included after this comment):

1. Add a ConditionPathExists=/etc/varnish/secret to the unit. This will prevent it trying to start up until that file exists. It's still not pretty, but perhaps instructions could be left in a README.urpmi or something stating that this file MUST be created. I don't know how the auth works generally, so I don't know if this is meant to be in a special format or not.

2. Change the unit to specify the PIDFile as /run/varnish/varnishd.pid (in both places).

3. Add User=varnish/Group=varnish to the systemd unit and remove it from the command line. This starts varnish as the restricted user, knowing that all files/paths needed are writable by that user. This should fix the storage file initialisation problem mentioned above.

With these changes varnish runs here OK, tho' no idea how to test the auth thingy. HTHs.
CC: (none) => mageia

Oh, and a second thing: varnish itself wasn't enabled when the package was installed, but the other two services were (varnishncsa and varnishlog). I think either each %_post_service should be specified separately, or you should specify %{name} twice (I'd say the former is clearer).
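For illustration only, here is a minimal sketch of a unit file incorporating the three suggestions above. The listen/admin addresses, VCL path and storage arguments are copied from the ExecStart lines quoted earlier in this bug; Type=forking and the overall layout are assumptions here, not the actual packaged unit.

# /usr/lib/systemd/system/varnish.service - illustrative sketch only
[Unit]
Description=Varnish HTTP accelerator
After=network.target
# (1) refuse to start until the shared admin secret has been created
ConditionPathExists=/etc/varnish/secret

[Service]
# varnishd daemonises by default, forking a management process and a child worker
Type=forking
# (3) drop privileges via systemd instead of passing -u/-g on the command line
User=varnish
Group=varnish
# (2) keep the pid file under the tmpfiles-created /run/varnish directory
PIDFile=/run/varnish/varnishd.pid
ExecStart=/usr/sbin/varnishd -P /run/varnish/varnishd.pid \
    -f /etc/varnish/default.vcl \
    -a :6081 -T localhost:6082 \
    -S /etc/varnish/secret \
    -s file,/var/lib/varnish/varnish_storage.bin,1G

[Install]
WantedBy=multi-user.target

Along the same lines, the tmpfiles.d snippet that creates /run/varnish (point 2 of the problems above) would look something like the following; the mode and ownership shown are assumptions:

# /usr/lib/tmpfiles.d/varnish.conf - illustrative sketch only
d /run/varnish 0755 varnish varnish -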
Advisory:
========================

Updated varnish packages fix security vulnerabilities:

Varnish before 3.0.5 allows remote attackers to cause a denial of service (child-process crash and temporary caching outage) via a GET request with trailing whitespace characters and no URI (CVE-2013-4484).

Also, the services have been converted from SysV init scripts to systemd-native services, which should allow for more consistent behavior.

References:
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-4484
http://lists.opensuse.org/opensuse-updates/2013-11/msg00029.html
========================

Updated packages in core/updates_testing:
========================
varnish-3.0.3-7.4.mga3
libvarnish1-3.0.3-7.4.mga3
libvarnish-devel-3.0.3-7.4.mga3

from varnish-3.0.3-7.4.mga3.src.rpm

Whiteboard: has_procedure advisory feedback => has_procedure

SVN advisory updated.
Whiteboard: has_procedure => has_procedure advisory

Installed varnish and rebooted. The service fails to start on both i586 and x86_64.
# systemctl -a status varnish.service
varnish.service - Varnish a high-perfomance HTTP accelerator
Loaded: loaded (/usr/lib/systemd/system/varnish.service; enabled)
Active: failed (Result: exit-code) since Sun, 2014-01-05 15:32:55 EST; 1min 16s ago
Process: 2091 ExecStart=/usr/sbin/varnishd -P /run/varnish/varnish.pid -f $VARNISH_VCL_CONF -a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} -T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} -t $VARNISH_TTL -w ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} -S $VARNISH_SECRET_FILE -s $VARNISH_STORAGE $DAEMON_OPTS (code=exited, status=217/USER)
CGroup: name=systemd:/system/varnish.service
Jan 05 15:32:55 i3v.hodgins.homeip.net systemd[1]: Starting Varnish a high-perfomance HTTP accelerator...
Jan 05 15:32:55 i3v.hodgins.homeip.net systemd[1]: Failed to start Varnish a high-perfomance HTTP accelerator.
Jan 05 15:32:55 i3v.hodgins.homeip.net systemd[1]: Unit varnish.service entered failed state

Whiteboard: has_procedure advisory => has_procedure advisory feedback

Note that the log service also fails to start ...

Jan 05 15:43:52 i3v.hodgins.homeip.net varnishlog[2064]: Cannot open /var/lib/varnish/i3v.hodgins.homeip.net/_.vsm: No such f...ctory
Jan 05 15:43:52 i3v.hodgins.homeip.net systemd[1]: Failed to start Varnish HTTP accelerator logging daemon.
Jan 05 15:43:52 i3v.hodgins.homeip.net systemd[1]: Unit varnishlog.service entered failed state

/var/lib/varnish/ does exist, but /var/lib/varnish/$(hostname)/ does not. Even after creating the directory/file /var/lib/varnish/i3v.hodgins.homeip.net/_.vsm, it still fails to start.

Testing on mga3 64:

The service installs fine and starts OK, but is killed due to the initscripts issue. Fixing that, I can reproduce the visual results but not the crash.

The version available in updates_testing is really broken due to the systemd unit conversion. Variables are used in places where they are not meant to be used (e.g. in the user and group bits), which definitely breaks things. Even correcting those, it still fails. Going back to the sysvinit approach (without all the file mangling) allows things to actually start OK, but I cannot reproduce the "fix", i.e. the behaviour is exactly the same as before. I'll commit some fixes to the unit files and try again later.

FWIW, the systemd unit and the newly introduced "params" file (which was /etc/sysconfig/varnish before) seem totally wrong. It's just moving the goalposts for no particular gain. We should simply drop the sysconfig file and hard-code everything in the systemd unit. If the user doesn't like it, then they copy the unit to /etc/ and edit it to taste. This "environment" file is just an unnecessary middle man which complicates things. I'll see what I can do to tidy it up. (And for the avoidance of doubt, the "params" file comes from Fedora - I think I know a couple of Fedora people who would say "Uggg" to that change there ;))

FWIW, I've tidied up the varnish units, dropped the customisation file (in favour of using a systemd-blessed way to tweak things), added some chowning for upgrades, and generated the secret file. I've done some builds and will retry tests.

OK, updated packages test better :)

Testing complete MGA3 64bit.

NB: The shipped package on MGA4 does not start, so the same startup fixes have been applied there.

SRPMS:
varnish-3.0.3-7.5.mga3.src.rpm
varnish-3.0.3-12.1.mga4.src.rpm

RPMS:
libvarnish-devel-3.0.3-7.5.mga3.i586.rpm
libvarnish1-3.0.3-7.5.mga3.i586.rpm
varnish-3.0.3-7.5.mga3.i586.rpm
lib64varnish1-3.0.3-7.5.mga3.x86_64.rpm
lib64varnish-devel-3.0.3-7.5.mga3.x86_64.rpm
varnish-3.0.3-7.5.mga3.x86_64.rpm
libvarnish-devel-3.0.3-12.1.mga4.i586.rpm
libvarnish1-3.0.3-12.1.mga4.i586.rpm
varnish-3.0.3-12.1.mga4.i586.rpm
lib64varnish1-3.0.3-12.1.mga4.x86_64.rpm
varnish-3.0.3-12.1.mga4.x86_64.rpm
lib64varnish-devel-3.0.3-12.1.mga4.x86_64.rpm

Procedure:
1. (If on an unclean test platform) rpm -e varnish lib[64]varnish1 && rm -rf /var/lib/varnish/*
2. Install MGA3 varnish.
3. Start it.
4. telnet localhost 6081
   Type: GET<return>
   Host: example.com<return>
   <return>
   You should get kicked out with no message.
   Type: GET <return>
   Host: example.com<return>
   <return>
   (note the space after GET)
   You should get a 417 Request too large error.
   I did not notice any actual crash, but the above behaviour is certainly wrong.
   (A scripted variant of this step is sketched after this comment.)
5. Ensure there is no /etc/varnish/secret file (it is not needed by the official script in mga3, so it shouldn't exist) and check the permissions on the files:
   ls -l /var/lib/varnish/
   They should be owned by root.
   Also observe "systemctl status varnish" output.
6. Install the new versions.
7. Check that varnish has restarted properly via the systemctl status output.
8. Check that /etc/varnish/secret was generated properly.
9. Verify that step 4 above now gives a 400 Bad Request error for both tests.
10. Done!

PS: Not sure if I should set version to 4 and add MGA3TOO, as the update for MGA4 is different to the CVE specifically... but likely easiest to track this way.
Whiteboard: has_procedure advisory feedback => has_procedure advisory feedback mga3-64-ok
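Step 4 of the procedure can also be driven non-interactively; this is an optional rough sketch, assuming an nc (netcat) binary is available on the test machine and varnish is listening on the default :6081 used above.

# scripted variant of step 4 (plain telnet as described above works just as well)
# GET with no URI and no trailing space
printf 'GET\r\nHost: example.com\r\n\r\n' | nc localhost 6081
# GET with a trailing space and no URI (the CVE-2013-4484 trigger)
printf 'GET \r\nHost: example.com\r\n\r\n' | nc localhost 6081
# before the update: dropped with no message / 417; after the update: 400 Bad Request for both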
We've handled updates with different advisories per release in the same bug before, although we can branch the Mageia 4 update into another bug if desired. The advisory in Comment 20 will still suffice (other than the package versions) for the Mageia 3 update. If we keep everything in this bug, it'll need to be renamed in SVN with a -mga3 suffix or something, IINM.

For the Mageia 4 update, the MGAA advisory can read (feel free to enhance it):

Issues with the varnish service configuration that prevented it from starting have been corrected.

Version: 3 => 4

Wording works for me. Adding back in my mga3-64-ok tag, which was just wiped! Also, testing is complete on mga4-64, so adding that in :)
Whiteboard: MGA3TOO has_procedure => MGA3TOO has_procedure mga3-64-ok mga4-64-ok
Typically the packager who did the packaging isn't allowed to add OK tags (except for backports). That's why I wiped the tags.

(In reply to David Walser from comment #31)
> Typically the packager who did the packaging isn't allowed to add OK tags
> (except for backports). That's why I wiped the tags.

For this update, there were several packagers involved, so I think we can count Colin's testing as valid :)
CC: (none) => stormi

Testing complete Mageia 4 i586.

Actually, with the core/release package I could not start varnishd at all.

I installed the update candidate and noticed that urpmi varnish doesn't pull libvarnish from core/updates_testing; is that intended? Either way, I installed libvarnish too. Now varnishd starts properly, and I followed the procedure in comment 27 to check that step 4 gives a 400 Bad Request error for both tests.
CC: (none) => remi

(In reply to Rémi Verschelde from comment #33)
> Testing complete Mageia 4 i586.
>
> Actually with the core/release package I could not start varnishd at all.

Yeah, I mentioned above, but probably wasn't clear enough, that both the Cauldron and MGA4 versions were totally broken, so the MGA4 testing should just be related to "does it start now". If it did start, the actual bug here should have been solved already, so the MGA4 version was [extra] secure ;)

I guess you were clear, but it was tl;dr ;)

(In reply to Rémi Verschelde from comment #33)
> Actually with the core/release package I could not start varnishd at all.
>
> I installed the update candidate and noticed that urpmi varnish doesn't pull
> libvarnish from core/updates_testing, is that intended?

Not "intended", but that's how it works currently for all libs: requires are not strictly versioned. This isn't an issue since users are meant to update all their packages from Updates, but it will be for backports, and we will have to be extra-careful about this. Otherwise people will install backports and it won't work.

Testing complete mga3 32

Followed the previous testing in comment 2 and ensured the service starts and stops OK.
Whiteboard: MGA3TOO has_procedure mga3-64-ok mga4-32-ok mga4-64-ok => MGA3TOO has_procedure mga3-32-ok mga3-64-ok mga4-32-ok mga4-64-ok

Sorry Colin, I also tested with your procedure from comment 27.

Not sure if it's a problem, but everything in /var/lib/varnish is owned by varnish rather than root. I suspect this is probably what you meant. It seems to be doing everything it should though, so validating.

Advisory uploaded. Could sysadmin please push to 3 & 4 updates. Thanks!
Keywords: (none) => validated_update

(In reply to claire robinson from comment #39)
> Not sure if it's a problem but everything in /var/lib/varnish is owned by
> varnish rather than root.

Just for clarity here, the reason for this is that under Mageia 3 originally, varnish starts a "master" process which spawns a child process and drops user privilege to the varnish user. It seems the master process controls the files in /var/lib/varnish, and thus they were owned by root. Under the new setup, the user is set to varnish immediately by systemd, so no varnish binary is run as root (which is "safer"), and thus all the files have to be owned by the varnish user.

Hope that explains things :)

Update pushed: http://advisories.mageia.org/MGASA-2014-0065.html

Status: NEW => RESOLVED