A security issue in zeromq was announced on July 8: https://www.openwall.com/lists/oss-security/2019/07/08/6 The issue is fixed upstream in 4.3.2. Mageia 6 and Mageia 7 are also affected.
Whiteboard: (none) => MGA7TOO, MGA6TOO
Status comment: (none) => Fixed upstream in 4.3.2
Debian and Ubuntu have issued advisories for this on July 8: https://www.debian.org/security/2019/dsa-4477 https://usn.ubuntu.com/4050-1/
Severity: normal => critical
Thanks David. Updated to 4.3.2 in Cauldron. Builds for 7 and 6 look OK in iurt, so I will do some testing with those installed before pushing to updates_testing.
Whiteboard: MGA7TOO, MGA6TOO => MGA6TOO
Version: Cauldron => 7
zeromq-4.3.2-1 has been submitted to 7/core/updates_testing

############################################
Advisory

A security vulnerability has been reported in libzmq/zeromq.

CVE-2019-13132: a remote, unauthenticated client connecting to a libzmq application, running with a socket listening with CURVE encryption/authentication enabled, may cause a stack overflow and overwrite the stack with arbitrary data, due to a buffer overflow in the library. Users running public servers with the above configuration are highly encouraged to upgrade as soon as possible, as there are no known mitigations. All versions from 4.0.0 and upwards are affected.

This update removes this vulnerability.

############################################
References

https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-13132
https://www.openwall.com/lists/oss-security/2019/07/08/6
https://www.debian.org/security/2019/dsa-4477
https://usn.ubuntu.com/4050-1/

############################################
Affected rpms

libzmq5-4.3.2-1.mga7.i586.rpm
libzmq-devel-4.3.2-1.mga7.i586.rpm
zeromq-utils-4.3.2-1.mga7.i586.rpm
zeromq-debugsource-4.3.2-1.mga7.i586.rpm
zeromq-debuginfo-4.3.2-1.mga7.i586.rpm
libzmq5-debuginfo-4.3.2-1.mga7.i586.rpm
zeromq-utils-debuginfo-4.3.2-1.mga7.i586.rpm
lib64zmq5-4.3.2-1.mga7.x86_64.rpm
lib64zmq-devel-4.3.2-1.mga7.x86_64.rpm
zeromq-utils-4.3.2-1.mga7.x86_64.rpm
zeromq-debugsource-4.3.2-1.mga7.x86_64.rpm
zeromq-debuginfo-4.3.2-1.mga7.x86_64.rpm
lib64zmq5-debuginfo-4.3.2-1.mga7.x86_64.rpm
zeromq-utils-debuginfo-4.3.2-1.mga7.x86_64.rpm

From source rpm:
zeromq-4.3.2-1.mga7.src.rpm
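For testers unfamiliar with zeromq, the configuration the advisory describes is simply a listening socket with CURVE enabled. Below is a minimal sketch of such a server; it is not taken from the advisory or the PoC attachments, the socket type, endpoint and file name (curve-server.cc) are arbitrary choices of mine, and it assumes the library was built with CURVE support (otherwise zmq_curve_keypair() fails and the asserts fire):

// curve-server.cc: minimal libzmq server listening with CURVE enabled,
// i.e. the configuration affected by CVE-2019-13132.
#include <zmq.h>
#include <cassert>

int main ()
{
    void *ctx = zmq_ctx_new ();
    void *server = zmq_socket (ctx, ZMQ_REP);

    // Generate a fresh Z85-encoded keypair; fails if libzmq lacks CURVE support.
    char server_public [41];
    char server_secret [41];
    int rc = zmq_curve_keypair (server_public, server_secret);
    assert (rc == 0);

    // Switch the socket into CURVE server mode and install the secret key.
    int as_server = 1;
    rc = zmq_setsockopt (server, ZMQ_CURVE_SERVER, &as_server, sizeof (as_server));
    assert (rc == 0);
    rc = zmq_setsockopt (server, ZMQ_CURVE_SECRETKEY, server_secret, 41);
    assert (rc == 0);

    rc = zmq_bind (server, "tcp://127.0.0.1:5555");
    assert (rc == 0);

    // Per the advisory, with libzmq < 4.3.2 a remote, unauthenticated client
    // connecting here can overwrite the stack with arbitrary data; 4.3.2 fixes
    // the overflow.
    char buf [256];
    zmq_recv (server, buf, sizeof (buf), 0);

    zmq_close (server);
    zmq_ctx_term (ctx);
    return 0;
}

Built like the PoCs below, e.g. g++ -o curve-server -lzmq curve-server.cc. A legitimate client would set ZMQ_CURVE_SERVERKEY to server_public plus its own ZMQ_CURVE_PUBLICKEY/ZMQ_CURVE_SECRETKEY before connecting.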
A fix for this has also been committed to Mageia 6 svn - could someone submit it? I will provide an advisory if it's not too late.
Assignee: zen25000 => qa-bugs
CC: (none) => tmb
Whiteboard: MGA6TOO => (none)
Mageia 7, x86_64

Installed the first three packages, ignoring debuginfo because QA does not normally enable debuginfo repositories. In fact I would have to add them if the debuginfo packages really do need to be tested.

CVE-2019-13132
https://github.com/zeromq/libzmq/issues/3558

A proof of concept is provided as three C++ files which need to be compiled. The first one combines the server and client tests, I think, but fails to find czmq.h when compilation is attempted. That file does not exist here; czmq.h is the header of the separate czmq high-level binding, which is not part of this update (and certainly not of *debuginfo). The reason I am pursuing this is that I know absolutely nothing about zeromq, with little chance of modifying that ignorance. Running the PoC should exercise the package as well as demonstrating that the vulnerability has been caught.

$ g++ -o repro2 -lzmq repro2.cc

That compiles without the help of pkgconfig. (No zmq.pc file anywhere.)

The other one fails to compile:

$ g++ -o repro3 -lzmq repro3.cc
repro3.cc: In function ‘int main(int, char**)’:
repro3.cc:26:38: error: ‘ZMQ_METADATA’ was not declared in this scope
   rc = zmq_setsockopt (client, ZMQ_METADATA, data, sizeof (data));
                                ^~~~~~~~~~~~
repro3.cc:26:38: note: suggested alternative: ‘ZMQ_MECHANISM’
   rc = zmq_setsockopt (client, ZMQ_METADATA, data, sizeof (data));
                                ^~~~~~~~~~~~
                                ZMQ_MECHANISM

ZMQ_METADATA is defined under DRAFT socket options in zmq.h, so it is not clear what is going on. Lifting the definition out of zmq.h and inserting it into the source seems to work.

$ g++ -o repro3 -lzmq repro3.cc
$ file repro3
repro3: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=22c3c29710de7d74749314558f857aea258178eb, for GNU/Linux 3.2.0, with debug_info, not stripped

$ ./repro3 &
[1] 2667
$ ./repro2 &
[2] 2692
$ *** buffer overflow detected ***: ./repro2 terminated
<pressed enter>
[2]+  Aborted                 (core dumped) ./repro2

That was before the update, version 4.3.1-2.

Updated the three packages. Recompiled the server (repro2) and client (repro3) and ran the same test as before.

$ ./repro2 &
[1] 3880
$ ./repro3 &
[2] 3899
$ ps aux | grep repro
lcl  3962  0.0  0.0  9044  816 pts/4  S+  19:20  0:00 grep --color repro
[1]-  Done                    ./repro2
[2]+  Done                    ./repro3

Looks like they terminated silently, which may be the intended outcome. I guess this is OK.

What about the debuginfo packages - do we test those?
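A side note on the ZMQ_METADATA hiccup above: the DRAFT socket options in zmq.h sit behind an #ifdef, so rather than copying the definition into the PoC source it should be enough to define ZMQ_BUILD_DRAFT_API before including the header (or pass -DZMQ_BUILD_DRAFT_API to g++). A minimal sketch, assuming the packaged zmq.h keeps that guard; the file name draft-check.cc and the property string are made up, and whether the installed library actually accepts the option at runtime depends on how it was built:

// draft-check.cc: expose DRAFT declarations such as ZMQ_METADATA by defining
// ZMQ_BUILD_DRAFT_API before including zmq.h (or compile with -DZMQ_BUILD_DRAFT_API).
#define ZMQ_BUILD_DRAFT_API
#include <zmq.h>
#include <cstdio>

int main ()
{
    void *ctx = zmq_ctx_new ();
    void *client = zmq_socket (ctx, ZMQ_REQ);

    // ZMQ_METADATA is a DRAFT option for attaching application metadata to
    // connections; a library built without draft support may reject it (rc == -1).
    const char data [] = "X-key:value";  // illustrative property only
    int rc = zmq_setsockopt (client, ZMQ_METADATA, data, sizeof (data));
    printf ("zmq_setsockopt(ZMQ_METADATA) returned %d\n", rc);

    zmq_close (client);
    zmq_ctx_term (ctx);
    return 0;
}

Compiled the same way as the PoCs, e.g. g++ -o draft-check -lzmq draft-check.cc.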
CC: (none) => tarazed25
Created attachment 11348 [details]
Combined server/client PoC file

Fails to compile. See earlier comment.
Created attachment 11349 [details]
Server PoC file

$ g++ -o repro2 -lzmq repro2.cc
Created attachment 11350 [details]
Client PoC file

$ g++ -o repro3 -lzmq repro3.cc
MGA7-64 Plasma on Lenovo B50

No installation issues.

Found info on testing in bug 24186 Comments 10 and 11 (tx Lewis). So at CLI:
$ local_lat --help
usage: local_lat <bind-to> <message-size> <roundtrip-count>
which is the same as in the previous version.

Followed the tests and installed redis and ntopng, then at CLI:
# systemctl start redis
# systemctl -l status redis
● redis.service - Redis persistent key-value database
   Loaded: loaded (/usr/lib/systemd/system/redis.service; disabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/redis.service.d
           └─limit.conf
   Active: active (running) since Sun 2019-11-10 10:55:23 CET; 23s ago
 Main PID: 11018 (redis-server)
   Memory: 1.6M
   CGroup: /system.slice/redis.service
           └─11018 /usr/bin/redis-server 127.0.0.1:6379

nov 10 10:55:23 mach5.hviaene.thuis systemd[1]: Started Redis persistent key-value database.

# ntopng -i 1 2>&1
10/Nov/2019 10:59:10 [Ntop.cpp:1902] Setting local networks to 127.0.0.0/8
10/Nov/2019 10:59:10 [Redis.cpp:127] Successfully connected to redis 127.0.0.1:6379@0
10/Nov/2019 10:59:10 [Redis.cpp:127] Successfully connected to redis 127.0.0.1:6379@0
10/Nov/2019 10:59:11 [PcapInterface.cpp:93] Reading packets from interface wlp9s0...
10/Nov/2019 10:59:11 [Ntop.cpp:1996] Registered interface wlp9s0 [id: 0]
10/Nov/2019 10:59:11 [main.cpp:308] PID stored in file /var/run/ntopng/ntopng.pid
10/Nov/2019 10:59:11 [Utils.cpp:592] User changed to ntopng
10/Nov/2019 10:59:11 [HTTPserver.cpp:1198] Web server dirs [/usr/share/ntopng/httpdocs][/usr/share/ntopng/scripts]
10/Nov/2019 10:59:11 [HTTPserver.cpp:1201] HTTP server listening on 3000
10/Nov/2019 10:59:11 [main.cpp:390] Working directory: /var/lib/ntopng
10/Nov/2019 10:59:11 [main.cpp:392] Scripts/HTML pages directory: /usr/share/ntopng
10/Nov/2019 10:59:11 [Ntop.cpp:403] Welcome to ntopng x86_64 v.3.8.190416 - (C) 1998-18 ntop.org
10/Nov/2019 10:59:11 [Ntop.cpp:717] Adding 192.168.2.5/32 as IPv4 interface address for wlp9s0
10/Nov/2019 10:59:11 [Ntop.cpp:725] Adding 192.168.2.0/24 as IPv4 local network for wlp9s0
10/Nov/2019 10:59:11 [Ntop.cpp:744] Adding fe80::b66d:83ff:fe0d:c14/128 as IPv6 interface address for wlp9s0
10/Nov/2019 10:59:11 [Ntop.cpp:753] Adding fe80::b66d:83ff:fe0d:c14/64 as IPv6 local network for wlp9s0
10/Nov/2019 10:59:15 [PeriodicActivities.cpp:72] Started periodic activities loop...
10/Nov/2019 10:59:15 [PeriodicActivities.cpp:113] Each periodic activity script will use 2 threads
10/Nov/2019 10:59:15 [NetworkInterface.cpp:2577] Started packet polling on interface wlp9s0 [id: 0]...

Then I could point firefox to http://localhost:3000/ with the default user/password of admin/admin. That prompted a change of the default password, and then the ntopng dashboard started. To provoke network activity, I played a .avi file from an NFS share located on my desktop machine. The different tabs on the ntopng dashboard all show info that seems reasonable to me.

I did not venture into using spyder, that's not in my league. If Len wants to do more tests, I'll be happy to agree to OK'ing if and when he sees fit.
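As an extra smoke test of the library itself (independent of redis/ntopng), the perf tools from zeromq-utils can be paired; a sketch, assuming remote_lat takes <connect-to> <message-size> <roundtrip-count> to match the local_lat usage shown above:
$ local_lat tcp://127.0.0.1:5555 1 1000 &
$ remote_lat tcp://127.0.0.1:5555 1 1000
If the updated library round-trips messages correctly, the test should finish and report the message size, roundtrip count and average latency.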
CC: (none) => herman.viaene
@Herman, wrt comment 9: yes, go ahead. Your tests are pretty thorough.
Whiteboard: (none) => MGA7-64-OK
Thanks, guys. Validating. Advisory in Comment 3.
Keywords: (none) => validated_update
CC: (none) => andrewsfarm, sysadmin-bugs
Keywords: (none) => advisory
An update for this issue has been pushed to the Mageia Updates repository. https://advisories.mageia.org/MGASA-2019-0323.html
Resolution: (none) => FIXED
Status: NEW => RESOLVED