
Friday, June 02, 2017

virt-viewer on EL 6.8 + oVirt 4.1

After I updated oVirt to 4.1, clients launching a console from EL 6.8 would get:

"At least Remote Viewer version 99.0-1 is required to setup this connection"


This version of Remote Viewer does not exist, and may never exist, so something else was going on here. When I ran remote-viewer in debug mode (remote-viewer /path/to/console.vv), it became clear that rhel6 is being deliberately disabled by setting its minimum version to one that doesn't exist:

(remote-viewer:23829): remote-viewer-DEBUG: Minimum version '2.0-160' for OS id 'rhev-win64'
(remote-viewer:23829): remote-viewer-DEBUG: Minimum version '2.0-160' for OS id 'rhev-win32'
(remote-viewer:23829): remote-viewer-DEBUG: Minimum version '2.0-6' for OS id 'rhel7'
(remote-viewer:23829): remote-viewer-DEBUG: Minimum version '99.0-1' for OS id 'rhel6'
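
Those minimum versions come from the console.vv file the engine hands to the client. It's a plain INI file; stripped of the connection details, the relevant bit looked roughly like this (illustrative only, the host and port here are made up):

[virt-viewer]
type=spice
host=ovirt-node.example.com
port=5900
versions=rhev-win64:2.0-160;rhev-win32:2.0-160;rhel7:2.0-6;rhel6:99.0-1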

I'd seen a post on Red Hat's support site saying you should just use RHEL 7, but that wasn't an option here. Downgrading virt-viewer on EL 6.8 works around the issue, but I contacted the oVirt developers to ask what was going on. The answer is that certain new features in 4.1 are only supported in EL7, though EL6 clients still work with reduced functionality (such as a basic console). A setting in the engine properties controls this and can be adjusted: use 'engine-config' to change the value, then restart the ovirt-engine service. I found the relevant key in /etc/ovirt-engine/engine-config/engine-config.properties.
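
For reference, the key lives in that file alongside a short description; a quick grep shows it (the surrounding comment lines vary by release):

grep -B1 -A1 RemoteViewerSupportedVersions /etc/ovirt-engine/engine-config/engine-config.properties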

To get the current setting:

engine-config -g RemoteViewerSupportedVersions


RemoteViewerSupportedVersions: rhev-win64:2.0-160;rhev-win32:2.0-160;rhel7:2.0-6;rhel6:2.0-6 version: general

To set:

engine-config -s RemoteViewerSupportedVersions="rhev-win64:2.0-160;rhev-win32:2.0-160;rhel7:2.0-6;rhel6:2.0-6"

Then 'systemctl restart ovirt-engine'.
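
To confirm the change took effect, query the key again and have an affected EL6 client relaunch its console:

engine-config -g RemoteViewerSupportedVersions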

PS: The oVirt devel team told me that the reason it works in 6.7 is that the version of virt-viewer in 6.7 "doesn't support the mechanism to check for the minimum required version".

Friday, May 19, 2017

Recent oVirt errors when doing a minor update on a 4.1 engine host


I had two problems after doing an engine update on 4.1.0.4-1.el7.centos:

The first was that the engine service would not start due to a missing python-dateutil dependency (no idea why this package wasn't pulled in by the update if it was needed; the engine service was working fine before the update). ABRT and journalctl provided useful info as to what needed fixing:


   ovirt-engine-dwhd.service                                                                   loaded activating auto-restart oVirt Engine Data Warehouse
* ovirt-engine.service                                                                        loaded failed     failed       oVirt Engine
* ovirt-fence-kdump-listener.service                                                          loaded failed     failed       oVirt Engine fence_kdump listener
  ovirt-imageio-proxy.service                                                                 loaded active     running      oVirt ImageIO Proxy
  ovirt-vmconsole-proxy-sshd.service                                                          loaded active     running      oVirt VM Console SSH server daemon
* ovirt-websocket-proxy.service                                                               loaded failed     failed       oVirt Engine websockets proxy
  polkit.service                                                                              loaded active     running      Authorization Manager


ABRT has detected 4 problem(s). For more info run: abrt-cli list --since 1495130066
[root@ovirt-engine ~]# abrt-cli list --since 1495130066
id 07e7e054795adf76b6260aed93a81420fe16e4cc
reason:         service.py:35:<module>:ImportError: No module named dateutil
time:           Thu May 18 18:57:04 2017
cmdline:        /usr/bin/python /usr/share/ovirt-engine-dwh/services/ovirt-engine-dwhd/ovirt-engine-dwhd.py --redirect-output --systemd=notify start
package:        ovirt-engine-dwh-4.1.0-1.el7.centos
uid:            108 (ovirt)
count:          1
Directory:      /var/spool/abrt/Python-2017-05-18-18:57:04-2765

id 107b6035ce460c7c4cabdbb8b9b2c5cee02f7b2c
reason:         service.py:35:<module>:ImportError: No module named dateutil
time:           Thu May 18 18:57:04 2017
cmdline:        /usr/bin/python /usr/share/ovirt-engine/services/ovirt-fence-kdump-listener/ovirt-fence-kdump-listener.py --systemd=notify start
package:        ovirt-engine-tools-4.1.0.4-1.el7.centos
uid:            108 (ovirt)
count:          1
Directory:      /var/spool/abrt/Python-2017-05-18-18:57:04-2766

id 02916623c009e9277ec757c70ca1e1bc32020c62
reason:         service.py:35:<module>:ImportError: No module named dateutil
time:           Thu May 18 18:57:04 2017
cmdline:        /usr/bin/python /usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py --redirect-output --systemd=notify start
package:        ovirt-engine-backend-4.1.0.4-1.el7.centos
uid:            108 (ovirt)
count:          1
Directory:      /var/spool/abrt/Python-2017-05-18-18:57:04-2767

id 588e5eb1889bfde67787877bc8d1156953eeae1e
reason:         service.py:35:<module>:ImportError: No module named dateutil
time:           Thu May 18 18:57:04 2017
cmdline:        /usr/bin/python /usr/share/ovirt-engine/services/ovirt-websocket-proxy/ovirt-websocket-proxy.py --systemd=notify start
package:        ovirt-engine-websocket-proxy-4.1.0.4-1.el7.centos
uid:            108 (ovirt)
count:          1
Directory:      /var/spool/abrt/Python-2017-05-18-18:57:04-1391

The Autoreporting feature is disabled. Please consider enabling it by issuing
'abrt-auto-reporting enabled' as a user with root privileges
[root@ovirt-engine ~]#



 May 18 18:57:04 ovirt-engine systemd[1]: Starting oVirt Engine...
May 18 18:57:04 ovirt-engine python[2767]: detected unhandled Python exception in '/usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py'
May 18 18:57:06 ovirt-engine ovirt-engine.py[2767]: Traceback (most recent call last):
May 18 18:57:06 ovirt-engine ovirt-engine.py[2767]: File "/usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py", line 32, in <module>
May 18 18:57:06 ovirt-engine ovirt-engine.py[2767]: from ovirt_engine import service
May 18 18:57:06 ovirt-engine ovirt-engine.py[2767]: File "/usr/lib/python2.7/site-packages/ovirt_engine/service.py", line 35, in <module>
May 18 18:57:06 ovirt-engine ovirt-engine.py[2767]: from dateutil import tz
May 18 18:57:06 ovirt-engine ovirt-engine.py[2767]: ImportError: No module named dateutil
May 18 18:57:06 ovirt-engine systemd[1]: ovirt-engine.service: main process exited, code=exited, status=1/FAILURE
May 18 18:57:06 ovirt-engine systemd[1]: Failed to start oVirt Engine.
May 18 18:57:06 ovirt-engine systemd[1]: Unit ovirt-engine.service entered failed state.
May 18 18:57:06 ovirt-engine systemd[1]: ovirt-engine.service failed.





Installed python-dateutil:

yum install python-dateutil
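
With the module installed, the failed units can be restarted and checked (adjust the list to whichever units failed on your host):

systemctl restart ovirt-engine ovirt-engine-dwhd ovirt-fence-kdump-listener ovirt-websocket-proxy
systemctl status ovirt-engine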

The second issue was that vdsmd would not start. Once again, journalctl provided useful information on what to do (vdsmd's logging is very helpful):


* vdsmd.service                                                                                  loaded failed failed    Virtual Desktop Server Manager


May 18 19:02:42 ovirt-engine systemd[1]: Starting Virtual Desktop Server Manager...
May 18 19:02:42 ovirt-engine vdsmd_init_common.sh[2746]: vdsm: Running mkdirs
May 18 19:02:42 ovirt-engine vdsmd_init_common.sh[2746]: vdsm: Running configure_coredump
May 18 19:02:42 ovirt-engine vdsmd_init_common.sh[2746]: vdsm: Running configure_vdsm_logs
May 18 19:02:42 ovirt-engine vdsmd_init_common.sh[2746]: vdsm: Running wait_for_network
May 18 19:02:42 ovirt-engine vdsmd_init_common.sh[2746]: vdsm: Running run_init_hooks
May 18 19:02:42 ovirt-engine vdsmd_init_common.sh[2746]: vdsm: Running upgraded_version_check
May 18 19:02:42 ovirt-engine vdsmd_init_common.sh[2746]: vdsm: Running check_is_configured
May 18 19:02:43 ovirt-engine vdsmd_init_common.sh[2746]: Error:
May 18 19:02:43 ovirt-engine vdsmd_init_common.sh[2746]: One of the modules is not configured to work with VDSM.
May 18 19:02:43 ovirt-engine vdsmd_init_common.sh[2746]: To configure the module use the following:
May 18 19:02:43 ovirt-engine vdsmd_init_common.sh[2746]: 'vdsm-tool configure [--module module-name]'.
May 18 19:02:43 ovirt-engine vdsmd_init_common.sh[2746]: If all modules are not configured try to use:
May 18 19:02:43 ovirt-engine vdsmd_init_common.sh[2746]: 'vdsm-tool configure --force'
May 18 19:02:43 ovirt-engine vdsmd_init_common.sh[2746]: (The force flag will stop the module's service and start it
May 18 19:02:43 ovirt-engine vdsmd_init_common.sh[2746]: afterwards automatically to load the new configuration.)
May 18 19:02:43 ovirt-engine vdsmd_init_common.sh[2746]: multipath requires configuration
May 18 19:02:43 ovirt-engine vdsmd_init_common.sh[2746]: libvirt is not configured for vdsm yet
May 18 19:02:43 ovirt-engine vdsmd_init_common.sh[2746]: Modules certificates, sebool, multipath, passwd, sanlock, libvirt are not configured
May 18 19:02:43 ovirt-engine systemd[1]: vdsmd.service: control process exited, code=exited status=1
May 18 19:02:43 ovirt-engine systemd[1]: Failed to start Virtual Desktop Server Manager.
May 18 19:02:43 ovirt-engine systemd[1]: Unit vdsmd.service entered failed state.
May 18 19:02:43 ovirt-engine systemd[1]: vdsmd.service failed.


So I used vdsm-tool to configure each of the modules it listed as not configured, e.g.:

[root@ovirt-engine ~]# vdsm-tool configure --module certificates

Checking configuration status...


Running configure...
Reconfiguration of certificates is done.

Done configuring modules to VDSM.
[root@ovirt-engine ~]# vdsm-tool configure --module sebool    

Checking configuration status...


Running configure...
Reconfiguration of sebool is done.

Done configuring modules to VDSM.
[root@ovirt-engine ~]# vdsm-tool configure --module multipath

Checking configuration status...

multipath requires configuration

Running configure...
Reconfiguration of multipath is done.

Done configuring modules to VDSM.
[root@ovirt-engine ~]# vdsm-tool configure --module passwd 

Checking configuration status...


Running configure...
Reconfiguration of passwd is done.

Done configuring modules to VDSM.


Some modules needed their service stopped before they could be configured:

[root@ovirt-engine ~]# vdsm-tool configure --module sanlock

Checking configuration status...

Error:

Cannot configure while service 'sanlock' is running.
 Stop the service manually or use the --force flag.
 


[root@ovirt-engine ~]#



So, for example, stop sanlock first:

[root@ovirt-engine ~]# vdsm-tool service-stop sanlock
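
With sanlock stopped it configures cleanly, and libvirt (the last module on the list) can be handled the same way, stopping it or using --force if it complains as sanlock did; vdsmd should then start. Roughly:

vdsm-tool configure --module sanlock
vdsm-tool configure --module libvirt
systemctl start vdsmd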

Wednesday, February 01, 2017

virt-p2v error

A virt-p2v conversion of a machine with two disks got most of the way through, then failed while copying the second disk. In the debug log below, the actual error is at the end: qemu-img could not open the second disk's overlay because the connection to its NBD backing source was refused.

guestfsd: main_loop: proc 334 (fstrim) took 1.94 seconds
libguestfs: trace: v2v: fstrim = 0
libguestfs: trace: v2v: umount_all
guestfsd: main_loop: new request, len 0x28
umount-all: /proc/mounts: fsname=/dev/root dir=/ type=ext2 opts=rw,noatime,block_validity,barrier,user_xattr,acl freq=0 passno=0
umount-all: /proc/mounts: fsname=/proc dir=/proc type=proc opts=rw,relatime freq=0 passno=0
umount-all: /proc/mounts: fsname=/sys dir=/sys type=sysfs opts=rw,relatime freq=0 passno=0
umount-all: /proc/mounts: fsname=tmpfs dir=/run type=tmpfs opts=rw,nosuid,relatime,size=157896k,mode=755 freq=0 passno=0
umount-all: /proc/mounts: fsname=/dev dir=/dev type=devtmpfs opts=rw,relatime,size=393292k,nr_inodes=98323,mode=755 freq=0 passno=0
umount-all: /proc/mounts: fsname=/dev/sda2 dir=/sysroot type=ext3 opts=rw,relatime,data=ordered freq=0 passno=0
commandrvf: stdout=n stderr=y flags=0x0
commandrvf: umount /sysroot
libguestfs: trace: v2v: umount_all = 0
libguestfs: trace: v2v: mount "/dev/sda1" "/"
guestfsd: main_loop: proc 47 (umount_all) took 0.01 seconds
guestfsd: main_loop: new request, len 0x40
commandrvf: stdout=n stderr=y flags=0x0
commandrvf: mount -o  /dev/sda1 /sysroot/
[  330.601856] EXT4-fs (sda1): mounting ext3 file system using the ext4 subsystem
[  330.605430] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null)
guestfsd: main_loop: proc 1 (mount) took 0.01 seconds
libguestfs: trace: v2v: mount = 0
libguestfs: trace: v2v: fstrim "/"
guestfsd: main_loop: new request, len 0x48
fsync /dev/sda
fsync /dev/sdb
commandrvf: stdout=y stderr=y flags=0x0
commandrvf: fstrim -v /sysroot/
/sysroot/: 90.5 MiB (94919680 bytes) trimmed

guestfsd: main_loop: proc 334 (fstrim) took 0.26 seconds
libguestfs: trace: v2v: fstrim = 0
[ 335.4] Closing the overlay
libguestfs: trace: v2v: umount_all
guestfsd: main_loop: new request, len 0x28
umount-all: /proc/mounts: fsname=/dev/root dir=/ type=ext2 opts=rw,noatime,block_validity,barrier,user_xattr,acl freq=0 passno=0
umount-all: /proc/mounts: fsname=/proc dir=/proc type=proc opts=rw,relatime freq=0 passno=0
umount-all: /proc/mounts: fsname=/sys dir=/sys type=sysfs opts=rw,relatime freq=0 passno=0
umount-all: /proc/mounts: fsname=tmpfs dir=/run type=tmpfs opts=rw,nosuid,relatime,size=157896k,mode=755 freq=0 passno=0
umount-all: /proc/mounts: fsname=/dev dir=/dev type=devtmpfs opts=rw,relatime,size=393292k,nr_inodes=98323,mode=755 freq=0 passno=0
umount-all: /proc/mounts: fsname=/dev/sda1 dir=/sysroot type=ext3 opts=rw,relatime,data=ordered freq=0 passno=0
commandrvf: stdout=n stderr=y flags=0x0
commandrvf: umount /sysroot
libguestfs: trace: v2v: umount_all = 0
libguestfs: trace: v2v: shutdown
libguestfs: trace: v2v: internal_autosync
guestfsd: main_loop: proc 47 (umount_all) took 0.01 seconds
guestfsd: main_loop: new request, len 0x28
umount-all: /proc/mounts: fsname=/dev/root dir=/ type=ext2 opts=rw,noatime,block_validity,barrier,user_xattr,acl freq=0 passno=0
umount-all: /proc/mounts: fsname=/proc dir=/proc type=proc opts=rw,relatime freq=0 passno=0
umount-all: /proc/mounts: fsname=/sys dir=/sys type=sysfs opts=rw,relatime freq=0 passno=0
umount-all: /proc/mounts: fsname=tmpfs dir=/run type=tmpfs opts=rw,nosuid,relatime,size=157896k,mode=755 freq=0 passno=0
umount-all: /proc/mounts: fsname=/dev dir=/dev type=devtmpfs opts=rw,relatime,size=393292k,nr_inodes=98323,mode=755 freq=0 passno=0
fsync /dev/sda
fsync /dev/sdb
guestfsd: main_loop: proc 282 (internal_autosync) took 0.00 seconds
libguestfs: trace: v2v: internal_autosync = 0
libguestfs: sending SIGTERM to process 3438
libguestfs: qemu maxrss 460488K
libguestfs: trace: v2v: shutdown = 0
libguestfs: trace: v2v: close
libguestfs: closing guestfs handle 0x1a13dc0 (state 0)
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfso5eAFU
[ 335.4] Checking if the guest needs BIOS or UEFI to boot
[ 335.4] Assigning disks to buses
virtio-blk slot 0:
target_file = /var/tmp/rancid-sda
target_format = raw
target_estimated_size = None
target_overlay = /var/tmp/v2vovl599e6e.qcow2
target_overlay.ov_source = nbd:localhost:40113
virtio-blk slot 1:
target_file = /var/tmp/rancid-sdb
target_format = raw
target_estimated_size = None
target_overlay = /var/tmp/v2vovl811e67.qcow2
target_overlay.ov_source = nbd:localhost:36773

[ 335.4] Copying disk 1/2 to /var/tmp/rancid-sda (raw)
target_file = /var/tmp/rancid-sda
target_format = raw
target_estimated_size = None
target_overlay = /var/tmp/v2vovl599e6e.qcow2
target_overlay.ov_source = nbd:localhost:40113

libguestfs: trace: set_verbose true
libguestfs: trace: set_verbose = 0
libguestfs: trace: disk_has_backing_file "/var/tmp/v2vovl599e6e.qcow2"
libguestfs: command: run: qemu-img
libguestfs: command: run: \ --help
libguestfs: which_parser: g->qemu_img_info_parser = 1
libguestfs: command: run: qemu-img
libguestfs: command: run: \ info
libguestfs: command: run: \ --output json
libguestfs: command: run: \ /dev/fd/4
libguestfs: parse_json: qemu-img info JSON output:\n{\n    "backing-filename-format": "raw",\n    "virtual-size": 160005980160,\n    "filename": "/dev/fd/4",\n    "cluster-size": 65536,\n    "format": "qcow2",\n    "actual-size": 275845120,\n    "format-specific": {\n        "type": "qcow2",\n        "data": {\n            "compat": "1.1",\n            "lazy-refcounts": false,\n            "refcount-bits": 16,\n            "corrupt": false\n        }\n    },\n    "backing-filename": "nbd:localhost:40113",\n    "dirty-flag": false\n}\n\n
libguestfs: trace: disk_has_backing_file = 1
libguestfs: trace: set_verbose true
libguestfs: trace: set_verbose = 0
libguestfs: trace: disk_create "/var/tmp/rancid-sda" "raw" 160005980160 "preallocation:sparse"
libguestfs: trace: disk_create = 0
qemu-img 'convert' '-p' '-n' '-f' 'qcow2' '-O' 'raw' '/var/tmp/v2vovl599e6e.qcow2' '/var/tmp/rancid-sda'
    (100.00/100%)
du --block-size=1 '/var/tmp/rancid-sda' | awk '{print $1}'
virtual copying rate: 888.3 M bits/sec
real copying rate: 128.8 M bits/sec
[2053.8] Copying disk 2/2 to /var/tmp/rancid-sdb (raw)
target_file = /var/tmp/rancid-sdb
target_format = raw
target_estimated_size = None
target_overlay = /var/tmp/v2vovl811e67.qcow2
target_overlay.ov_source = nbd:localhost:36773

libguestfs: trace: set_verbose true
libguestfs: trace: set_verbose = 0
libguestfs: trace: disk_has_backing_file "/var/tmp/v2vovl811e67.qcow2"
libguestfs: command: run: qemu-img
libguestfs: command: run: \ --help
libguestfs: which_parser: g->qemu_img_info_parser = 1
libguestfs: command: run: qemu-img
libguestfs: command: run: \ info
libguestfs: command: run: \ --output json
libguestfs: command: run: \ /dev/fd/4
libguestfs: parse_json: qemu-img info JSON output:\n{\n    "backing-filename-format": "raw",\n    "virtual-size": 899899242496,\n    "filename": "/dev/fd/4",\n    "cluster-size": 65536,\n    "format": "qcow2",\n    "actual-size": 551632896,\n    "format-specific": {\n        "type": "qcow2",\n        "data": {\n            "compat": "1.1",\n            "lazy-refcounts": false,\n            "refcount-bits": 16,\n            "corrupt": false\n        }\n    },\n    "backing-filename": "nbd:localhost:36773",\n    "dirty-flag": false\n}\n\n
libguestfs: trace: disk_has_backing_file = 1
libguestfs: trace: set_verbose true
libguestfs: trace: set_verbose = 0
libguestfs: trace: disk_create "/var/tmp/rancid-sdb" "raw" 899899242496 "preallocation:sparse"
libguestfs: trace: disk_create = 0
qemu-img 'convert' '-p' '-n' '-f' 'qcow2' '-O' 'raw' '/var/tmp/v2vovl811e67.qcow2' '/var/tmp/rancid-sdb'
qemu-img: Could not open '/var/tmp/v2vovl811e67.qcow2': Could not open backing file: Failed to connect socket: Connection refused

virt-v2v: error: qemu-img command failed, see earlier errors
rm -rf '/var/tmp/null.KCJqyM'
libguestfs: trace: close
libguestfs: closing guestfs handle 0x1a08e70 (state 0)
libguestfs: trace: close
libguestfs: closing guestfs handle 0x1a20a80 (state 0)
libguestfs: trace: close
libguestfs: closing guestfs handle 0x19f7f20 (state 0)
libguestfs: trace: close
libguestfs: closing guestfs handle 0x1a13dc0 (state 0)