Monday, November 10, 2014

OpenStack Nova Compute Cannot Launch Instance after Upgrade to Fedora 21

I updated my system to the Fedora 21 beta and then found that I could not use OpenStack. Every attempt to launch an instance failed:
libvirtError: operation failed: filter 'nova-no-nd-reflection' already exists with uuid 728f13cf-f1a9-4e0e-add7-357b93d052ee
Removing the OpenStack packages, clearing the data, and reinstalling OpenStack did not help.
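Since the error complains about a filter that already exists, removing the stale nwfilter definition directly with virsh is another avenue worth trying; a sketch, untested here:

# List the nwfilters libvirt knows about, then drop the conflicting one.
$ sudo virsh nwfilter-list
$ sudo virsh nwfilter-undefine nova-no-nd-reflection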

The problem seems to be an incompatibility between nova-compute and the libvirt libraries.

Upgrading the libvirt libraries to Rawhide did not help.

Finally, I used rpm to downgrade libvirt with packages downloaded from the Fedora 20 repository, ignoring the dependency problems:
$ sudo rpm -U --oldpackage --nodeps libvirt*1.1.3*rpm
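For reference, a sketch of how the Fedora 20 packages can be pulled down and the downgrade verified; this assumes yum-utils is installed, and the exact libvirt subpackage set may differ:

# Fetch the libvirt 1.1.3 packages from the Fedora 20 repositories.
$ yumdownloader --releasever=20 'libvirt*'
# After the downgrade, confirm which libvirt versions are installed.
$ rpm -qa 'libvirt*'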
All seems well for now.



Monday, November 3, 2014

OpenStack Nova Compute Fails to Start after Updating to 2014.1.3-2.fc21

The OpenStack nova-compute service failed to start after the update to the openstack-icehouse version 2014.1.3-2.fc21. The reason appeared to be:

"Failed to add interface: can't add lo to bridge br100: Invalid argument"

This bug report, though marked as invalid, helped. The problem was a change in the file nova/network/linux_net.py.

The following check had previously been in the wrong place, so the error was effectively being ignored:
if (err and err != "device %s is already a member of a bridge; "
                   "can't enslave it to bridge %s.\n" % (interface, bridge)):
    msg = _('Failed to add interface: %s') % err
    raise exception.NovaException(msg)

The interface for the flat network ends up as 'lo' even if the nova.conf file does not define one: network['bridge_interface'] seems to be 'lo', and iface falls back to it in the following line:

iface = CONF.flat_interface or network['bridge_interface']
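As a minimal standalone illustration of that fallback (plain Python, with hypothetical stand-in values rather than the real CONF and network objects):

flat_interface = None            # nothing set in nova.conf
bridge_interface = 'lo'          # value carried by the network record
iface = flat_interface or bridge_interface
print(iface)                     # prints 'lo', which ensure_bridge then tries to enslave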

One workaround was to change the following line in the ensure_bridge method:
<         if interface:
---
>         if interface and interface != 'lo':

Subsequently, I came across another workaround in a bug report: creating a dummy interface.
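A sketch of that approach, assuming the kernel's dummy module is available; the interface name dummy0 is just an example:

# Create a dummy interface for nova to enslave instead of lo.
$ sudo ip link add dummy0 type dummy
$ sudo ip link set dummy0 up
# Then point the flat network at it in /etc/nova/nova.conf:
# flat_interface = dummy0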

The virtual machines were stuck in the 'powering on' state after the update. This time, solving it was easy!