xen-4.2.1-16.1.mga3: This update fixes the following security issues:

XSA-52/CVE-2013-2076: Information leak on XSAVE/XRSTOR capable AMD CPUs
XSA-53/CVE-2013-2077: Hypervisor crash due to missing exception recovery on XRSTOR
XSA-54/CVE-2013-2078: Hypervisor crash due to missing exception recovery on XSETBV
XSA-55/CVE-2013-2194: integer overflows
XSA-55/CVE-2013-2195: pointer dereferences
XSA-55/CVE-2013-2196: other problems
XSA-56/CVE-2013-2072: Buffer overflow in xencontrol Python bindings affecting xend
XSA-57/CVE-XXXX-XXXX: libxl allows guest write access to sensitive console related xenstore keys
Priority: Normal => High
Assignee: bugsquad => qa-bugs
(XSA-57 doesn't have a CVE yet)

This is not easy to test, considering the QA team likely hasn't tested this before... (and libvirt integration is difficult and not quite ready yet)

(To test HVM guests, you will need a processor with virtualisation support, enabled in the BIOS; this is not needed to test PV guests.)
(Also, this version doesn't have the UEFI stuff, so you should turn off UEFI if you have it and can turn it off.)

How to test:
1. install xen
2. edit grub to provide a correct entry (the separate hypervisor needs to be loaded before the kernel) (see below)
3. xend will be deprecated, so it's best to test out the xl toolset (not the xm toolset)
4. xend.service does not need to run
5. make sure xenstored.service and xenconsoled.service are started
6a. the xl toolset uses distro networking, so you'll have to create a bridge (see below)
6b. cd /etc/xen
7. make a config file for an HVM guest (see below)
8a. make a sparse disk image file: dd if=/dev/zero of=/opt/hvmtest.img count=1 bs=4M seek=4k (a 16GB image that takes only 4MB to start with; note seek, not skip)
8b. xl create hvmtest
9. you can reach the HVM guest with VNC, so you can do a Mageia installation on it (better do a minimal install; xen is mostly used on headless servers) (make sure you can log in with ssh)
10. when your installation is ready, test out xl pause, xl unpause, xl save (with and without -c) and xl restore (also see how xl list is doing) (if you have multiple xen hypervisors, you could try an xl migrate (live migration))
11. halt your guest
12. copy your image to make a PV image: rsync -aS --progress /opt/hvmtest.img /opt/pvtest.img
13. make a config file for a PV guest (see below)
14. xl create pvtest
15. PV guests are reachable via the "xl console" command; usually I do this in a screen session
16. see if the guest works (and if you can access it with ssh)
17. again, test out xl pause, unpause, save, restore (and possibly migrate)
18. halt the guest

This should be a good basic test... /o\
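A quick sketch of the sparse-image step: with GNU dd, seek (not skip) is what leaves the hole in the output file, and the block size should be written 4M. Writing to /tmp here purely for illustration:

```shell
# Seek over 4095 empty 4 MiB blocks, then write one 4 MiB block of zeros:
# the file's apparent size is 16 GiB, but only ~4 MiB is actually allocated.
dd if=/dev/zero of=/tmp/hvmtest.img bs=4M count=1 seek=4095 2>/dev/null
ls -l /tmp/hvmtest.img    # apparent size: 17179869184 bytes (16 GiB)
du -h /tmp/hvmtest.img    # actual usage: about 4 MiB
```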
NOTE: I forgot to mention that when you're doing the HVM guest, take a minute to scp the vmlinuz and initrd.img files that are actually used to your hypervisor (aka host machine aka dom0) in /opt/ .

How to create a bridge: (assuming normal internet goes via eth0)
-----------------------
(don't use network-manager; urpme lib64nm-utils2 lib64nm-glib4)

cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-xenbr

Modify the ifcfg-xenbr file to have xenbr as device and add:
TYPE=Bridge    # "Bridge" must be capitalised

Modify ifcfg-eth0 (this should be sufficient):
DEVICE=eth0
BRIDGE=xenbr
ONBOOT=yes

A bridge is a combination of interfaces into one interface; the idea is that xenbr will contain eth0 (xenbr will have all the IP settings and needs to be used instead of eth0), and xen can then add the virtual interfaces it creates for its guests into the bridge, so that those interfaces are all on the same network:
[]# brctl show

XEN grub entry:
---------------
Typically grub entries look like this (if you have a separate /boot as first partition):

title linux
  kernel (hd0,0)/vmlinuz BOOT_IMAGE=linux root=UUID=xxxxxxxxx vga=788
  initrd (hd0,0)/initrd.img

XEN grub entries look like this:

title XEN-linux
  kernel (hd0,0)/xen.gz dom0_mem=1024MB
  module (hd0,0)/vmlinuz BOOT_IMAGE=linux root=UUID=xxxxxxxxx vga=788
  module (hd0,0)/initrd.img

The xen hypervisor is now the kernel, and the kernel and initrd become modules instead. The dom0_mem parameter limits the amount of RAM the host has access to for itself.
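Going back to the bridge setup above, put concretely, the two ifcfg files might end up looking like this (a sketch; the BOOTPROTO/IP lines are placeholders for whatever your original ifcfg-eth0 contained):

```
# /etc/sysconfig/network-scripts/ifcfg-xenbr  (copied from ifcfg-eth0)
DEVICE=xenbr
TYPE=Bridge        # "Bridge" must be capitalised
ONBOOT=yes
BOOTPROTO=dhcp     # placeholder: keep whatever IP settings eth0 had

# /etc/sysconfig/network-scripts/ifcfg-eth0  (stripped down)
DEVICE=eth0
BRIDGE=xenbr
ONBOOT=yes
```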
(The rest of the RAM can be used for the guests.) (dom0 is the privileged host domain, i.e. the host system itself.)

HVM guest config:
-----------------
[ ]# cat /etc/xen/hvmtest
builder = "hvm"
name = "hvmtest"
vcpus=2
memory = "1024"
maxmem = 4096
disk = [ 'file:/opt/hvmtest.img,hda,w', 'file:/opt/mageia.iso,hdb:cdrom,r', ]
vif = [ 'type=ioemu, mac=00:1f:5d:51:ae:37, bridge=xenbr', ] # choose a random mac
boot="dc"

PV guest config:
----------------
[ ]# cat /etc/xen/pvtest
name = "pvtest"
vcpus=2
memory = "1024"
maxmem = 4096
disk = [ 'file:/opt/pvtest.img,sda,w', ]
vif = [ 'mac=00:1f:5d:51:ae:37, bridge=xenbr', ] # choose a random mac
kernel = "/opt/vmlinuz"
ramdisk = "/opt/initrd.img"
extra = "console=xvc0 xencons=tty1"
root = "/dev/sda1 ro"
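Both configs carry a "# choose a random mac" comment; one way to generate one (this sketch assumes the Xen project's 00:16:3e OUI prefix, which keeps the address unicast and avoids clashing with real hardware):

```shell
# Print a random MAC under the Xen OUI 00:16:3e; the last three
# octets come from /dev/urandom via od's hex-byte output.
printf '00:16:3e:%s:%s:%s\n' $(od -An -N3 -tx1 /dev/urandom)
```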
Blocks: (none) => 6931
Priority: High => Normal
Component: RPM Packages => Security
Severity: normal => major
I have now finished getting the hvm part working on x86_64, using the core release version. My plan is to get the pvtest working, then install the update, and test for regressions. I'll document the testing setup after that.
CC: (none) => davidwhodgins
Created attachment 4165 [details] xl console pvtest output

I'm not having any luck getting the pv version to find the hard drive, using the appended config file. I tried adding a second drive, like I did to get the hvm version to boot properly from the hard drive, but that is not working this time.

$ cat /etc/xen/pvtest
name = "pvtest"
vcpus=2
memory=2048
maxmem=4096
disk = [ 'file:/opt/pvtest.img,sda,w', ]
vif = [ 'mac=00:1f:5a:71:ae:37, bridge=xenbr', ] # choose a random mac
kernel = "/opt/vmlinuz"
ramdisk = "/opt/initrd.img"
extra = "console=xvc0 xencons=tty1"
root = "/dev/sda8 ro" ## Make sure partition is root of host system

With hvmtest, to get it to boot from the hard drive, I used the disk line:
disk = [ 'file:/opt/hvmtest.img,xvda,rw', 'file:/opt/hvmtest.img,sda,rw' ]

Note that I'm still working with the release version, before looking for regressions in the updates testing version.
Note that I also tried with root=/dev/sda1, with the same result.
Can you also show the xl-pvtest-X.log file in /var/log/xen? The timeout in the guest log tells me that something went wrong on the host side in attaching the guest disk image:

XENBUS: Timeout connecting to device: device/vbd/2048
Also, can you check with "lsinitrd /opt/initrd.img" whether the xen modules are in the initrd?
I submitted xen-4.2.1-16.2.mga3. This update fixes the following security issues:

XSA-52/CVE-2013-2076: Information leak on XSAVE/XRSTOR capable AMD CPUs
XSA-53/CVE-2013-2077: Hypervisor crash due to missing exception recovery on XRSTOR
XSA-54/CVE-2013-2078: Hypervisor crash due to missing exception recovery on XSETBV
XSA-55/CVE-2013-2194: integer overflows
XSA-55/CVE-2013-2195: pointer dereferences
XSA-55/CVE-2013-2196: other problems
XSA-56/CVE-2013-2072: Buffer overflow in xencontrol Python bindings affecting xend
XSA-57/CVE-2013-2211: libxl allows guest write access to sensitive console related xenstore keys
XSA-58/CVE-2013-1432: Page reference counting error due to XSA-45/CVE-2013-1918 fixes
Not much in the log files.

[root@x3a xen]# cat xl-pvtest.log
Waiting for domain pvtest (domid 5) to die [pid 4488]
Domain 5 has been destroyed.

[root@x3a xen]# cat qemu-dm-pvtest.log
domid: 5
Warning: vlan 0 is not connected to host network
-videoram option does not work with cirrus vga device model. Videoram set to 4M.
/home/iurt/rpmbuild/BUILD/xen-4.2.1/tools/qemu-xen-traditional/hw/xen_blktap.c:628: Init blktap pipes
/home/iurt/rpmbuild/BUILD/xen-4.2.1/tools/qemu-xen-traditional/hw/xen_blktap.c:603: Created /var/run/tap directory
Could not open /var/run/tap/qemu-read-5
xen be core: xen be core: can't open gnttab device
can't open gnttab device
xen be core: xen be core: can't open gnttab device
can't open gnttab device
xs_read(): target get error. /local/domain/5/target.

# cat xen-hotplug.log
RTNETLINK answers: Operation not supported
< above line repeated a few dozen times >
This means pvtest is actually running as HVM... PV shouldn't use qemu at all... Check the config file and make sure to remove any hvm mention.
# cat pvtest
name = "pvtest"
vcpus=2
memory=2048
maxmem=4096
pae=1
acpi=1
apic=1
nx=1
disk = [ 'file:/opt/pvtest.img,sda,w', ]
vif = [ 'mac=00:1f:5a:71:ae:37, bridge=xenbr', ] # choose a random mac
kernel = "/opt/vmlinuz"
ramdisk = "/opt/initrd.img"
extra = "console=xvc0 xencons=tty1"
root = "/dev/sda1 ro" ## Make sure partition is root of host system

As per comment 1, step 12, the pvtest.img is a copy of hvmtest.img.
If I'm not mistaken, pae=1, acpi=1, apic=1 and nx=1 are only for HVM... Perhaps you can remove these and retry "xl create pvtest". There should be NO qemu process, and the xl create log should have everything you need. You can also use "xl create -c pvtest" to connect to the console directly...
Created attachment 4173 [details] console output from xl create pvtest -c -d
# cat qemu-dm-pvtest.log
domid: 2
Warning: vlan 0 is not connected to host network
-videoram option does not work with cirrus vga device model. Videoram set to 4M.
/home/iurt/rpmbuild/BUILD/xen-4.2.1/tools/qemu-xen-traditional/hw/xen_blktap.c:628: Init blktap pipes
Could not open /var/run/tap/qemu-read-2
xen be core: xen be core: can't open gnttab device
can't open gnttab device
xen be core: xen be core: can't open gnttab device
can't open gnttab device
xs_read(): target get error. /local/domain/2/target.
Created attachment 4174 [details] xen console output for pvtest
# cat qemu-dm-pvtest.log
domid: 3
Warning: vlan 0 is not connected to host network
-videoram option does not work with cirrus vga device model. Videoram set to 4M.
/home/iurt/rpmbuild/BUILD/xen-4.2.1/tools/qemu-xen-traditional/hw/xen_blktap.c:628: Init blktap pipes
Could not open /var/run/tap/qemu-read-3
xen be core: xen be core: can't open gnttab device
can't open gnttab device
xen be core: xen be core: can't open gnttab device
can't open gnttab device
xs_read(): target get error. /local/domain/3/target.
# ps auxf | grep -e xl -e xen -e virt
root        27  0.0  0.0      0     0 ?        S    16:11   0:00  \_ [xenwatch]
root        28  0.0  0.0      0     0 ?        S    16:11   0:00  \_ [xenbus]
root       623  0.0  0.0  84556   924 ?        SLl  16:11   0:00 /usr/sbin/xenconsoled --log=none --log-dir=/var/log/xen/console
root       713  0.0  0.0  10880  1048 ?        SL   16:11   0:00 /usr/sbin/xenstored --pid-file /var/run/xenstored.pid
root      1050  0.0  0.0   6328   404 ?        S    16:11   0:00 /sbin/ifplugd -I -b -i xenbr
root      1637  0.0  0.2 301120  8036 ?        SLsl 16:11   0:00 /usr/sbin/libvirtd
root      4216  0.1  0.4 127148 18104 pts/0    SLl+ 16:30   0:00  | \_ xl create pvtest -c -d
root      4278  0.0  0.0  78292   712 pts/0    Sl+  16:30   0:00  |     \_ /usr/lib64/xen/bin/xenconsole 3 --num 0 --type pv
root      4514  0.0  0.0  12152   940 pts/2    S+   16:35   0:00  | \_ grep --color -e xl -e xen -e virt
dave      4362  0.2  0.7 560296 28728 pts/3    Sl+  16:32   0:00  \_ gedit xenconsole
root      4220  0.0  0.1 191020  5184 ?        Ssl  16:30   0:00 /usr/lib/xen/bin/qemu-dm -d 3 -domain-name pvtest -nographic -M xenpv
root      4280  0.0  0.4 127140 16984 ?        SLsl 16:30   0:00 xl create pvtest -c -d
From ls -ltr /var/log/xen:
-rw-r--r-- 1 root root  486 Jun 27 16:30 qemu-dm-pvtest.log
-rw-r--r-- 1 root root 2709 Jun 27 16:30 xen-hotplug.log
-rw-r--r-- 1 root root   54 Jun 27 16:30 xl-pvtest.log
Testing of the update on Mageia 3 x86_64 is complete. No regressions found. The same problems remain: pvtest is not able to see the hard drive, and with hvmtest, in KDE, the desktop is black instead of displaying the image. Also, I find it quite unstable, and have several times seen the mouse in the guest freeze, requiring "xl destroy hvmtest" before restarting it. On a Mageia 3 i586 install, in the host, I get a segfault in mandrake-everytime and cannot start the X server, so I can't try installing a guest. With the fglrx driver, I was getting a black screen, after which I could not switch to a terminal with alt+ctrl+f3; I had to use sysrq to reboot. After switching to the vesa driver, startx fails with "no screens found".
The problems are not regressions, so if you want, this update can be validated. Should I go ahead and validate, or do you want to try more troubleshooting?
Advisory added http://svnweb.mageia.org/advisories/10586.adv?revision=107&view=markup
Better validate; I'll look into those bugs you had separately and get better testing going...
Actually Dave, can you try without the hvc0 option, or try ttyS0 instead, or even tty0? See if the XENBUS timeout style stuff improves... (perhaps we're missing a console-like module for xen)
Created attachment 4177 [details] Output of "xl create pvtest -d -c"

The attached is using tty0. In a test using ttyS0, the output looks the same, but also goes into a loop generating lines with:

[  165.902302] dracut Warning: Could not boot.
[  165.904314] dracut Warning: "/dev/disk/by-uuid/23a8307b-adae-4116-972e-e0e1a438e045" does not exist
[  165.904470] dracut Warning: "/dev/sda1" does not exist

The kernel modules listed in the rd.loaddriver options of the cmdline are all now in the initrd. The config is:

[root@x3a xen]# cat pvtest
name = "pvtest"
vcpus=2
memory=2048
maxmem=4096
disk = [ 'file:/opt/hvmtest.img,xvda,rw', 'file:/opt/hvmtest.img,sda,rw', 'file:/opt/hvmtest.img,hda,rw' ]
vif = [ 'mac=00:1f:5a:71:ae:37, bridge=xenbr', ] # choose a random mac
kernel = "/opt/vmlinuz"
ramdisk = "/opt/initrd.img"
extra = "console=ttyS0 xencons=tty1"
root = "/dev/sda1 ro rd.loaddriver=xen-blkfront rd.loaddriver=xen-blkback rd.loaddriver=xen-pcifront rd.loaddriver=pci-blkback" ## Make sure partition is root of guest system

Note that the console output shows it still isn't finding any drives.
Only the -front drivers are for the guests, and xen-pcifront is for PCI passthrough, so those aren't required. I fear that the pv drivers are forcing 'tap:aio:' instead of 'file:', and we don't have the blktap kernel module for the host system. I'll try and see what I can find on this with upstream.
As per comment 22, validating the update. Could someone from the sysadmin team push 10586.adv
Keywords: (none) => validated_update
Whiteboard: (none) => MGA3-64-OK MGA3-32-OK
CC: (none) => sysadmin-bugs
http://advisories.mageia.org/MGASA-2013-0197.html
Status: NEW => RESOLVED
CC: (none) => boklm
Resolution: (none) => FIXED
CC: boklm => (none)