path: root/nova
Commit message (Author, Age; Files changed, Lines -/+)
* enable blocked VDPA move operations (Sean Mooney, 2022-11-17; 2 files, -26/+281)

    This change adds functional tests for operations on servers with VDPA
    devices that are expected to work but are currently blocked due to lack
    of testing or qemu bugs. Cold-migrate, resize, evacuate, and shelve are
    enabled and tested by this patch.

    Conflicts:
        nova/tests/functional/libvirt/test_pci_sriov_servers.py

    Closes-Bug: #1970467
    Change-Id: I6e220cf3231670d156632e075fcf7701df744773
    (cherry picked from commit 95f96ed3aa201bc5b90e589b288fa4039bc9c0d2)
* Add compute restart capability for libvirt func tests (Balazs Gibizer, 2022-11-17; 7 files, -54/+127)

    The existing generic restart_compute_service() call in the nova test
    base class is not appropriate for the libvirt functional tests that
    need to reconfigure the libvirt connection, as it is not aware of the
    libvirt-specific mocking needed when a compute service is started. So
    this patch adds a specific restart_compute_service() call to
    nova.tests.functional.libvirt.base.ServersTestBase. This will be used
    by a later patch testing [pci]device_spec reconfiguration scenarios.

    This change showed that some of the existing libvirt functional tests
    used the incomplete restart_compute_service from the base class.
    Others used local mocking to inject new pci config into the restart.
    I moved all of these to the new function and removed the local
    mocking.

    Change-Id: Ic717dc42ac6b6cace59d344acaf12f9d1ee35564
    (cherry picked from commit 57c253a609e859fa21ba05b264f0ba4d0ade7b8b)
* Remove double mocking... again (Balazs Gibizer, 2022-11-17; 23 files, -198/+160)

    I thought we fixed all the double mocking issues with
    I3998d0d49583806ac1c3ae64f1b1fe343cefd20d but I was wrong. While we
    used both mock and unittest.mock, fixtures.MockPatch used the mock lib
    instead of the unittest.mock lib. The patch
    Ibf4f36136f2c65adad64f75d665c00cf2de4b400 (Remove the PowerVM driver)
    removed the last user of the mock lib from nova, so mock was also
    removed from test-requirements. As a result, fixtures.MockPatch
    started using unittest.mock too.

    Before Ibf4f36136f2c65adad64f75d665c00cf2de4b400 a function could be
    mocked twice: once with unittest.mock and once with fixtures.MockPatch
    (still using mock). However, after that patch both paths of such
    double mocking go through unittest.mock and the second one fails. So
    this patch fixes the double mocking so far hidden behind
    fixtures.MockPatch.

    This patch made the py310 and functional-py310 jobs voting on master;
    however, that has been dropped as part of the backport.

    Conflicts:
        .zuul.yaml
        nova/tests/unit/virt/libvirt/test_host.py

    Change-Id: Ic1352ec31996577a5d0ad18a057339df3e49de25
    (cherry picked from commit bf654e3a4a8f690ad0bec0955690bf4fadf98dba)
* Remove double mocking (Balazs Gibizer, 2022-11-17; 20 files, -669/+540)

    In py310 unittest.mock does not allow mocking the same function twice,
    as the second mocking will fail to autospec the Mock object created by
    the first mocking. This patch manually fixes the double mocking.

    Fixed cases:
    1) One of the mocks was totally unnecessary, so it was removed.
    2) The second mock specialized the behavior of the first generic mock.
       In this case the second mock is replaced with configuration of the
       first mock.
    3) A test case with two test steps mocked the same function for each
       step with overlapping mocks. Here the overlap was removed so the
       two mocks exist independently.

    The get_connection injection in the libvirt functional test needed a
    further tweak (yeah, I know it has many already) to act like a single
    mock (basically case #2) instead of a temporary re-mocking. Still, the
    globalness of the get_connection mocking warrants the special set /
    reset logic there.

    Conflicts:
        nova/tests/functional/regressions/test_bug_1781286.py
        nova/tests/unit/api/openstack/compute/test_shelve.py

    Change-Id: I3998d0d49583806ac1c3ae64f1b1fe343cefd20d
    (cherry picked from commit f8cf050a1380ae844e0184ed45f4a04fde3b07a9)
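Case #2 above (replacing a second, specialized mock with configuration of the first) can be sketched with plain unittest.mock. The Driver class and its get_info method are made-up stand-ins for illustration, not nova code:

```python
from unittest import mock

class Driver:
    def get_info(self):
        return "real"

def exercise_two_steps():
    # Instead of patching Driver.get_info twice (a generic mock in setUp
    # plus a specialized re-mock in the test body, which breaks under
    # py310), patch once and reconfigure the single mock between steps.
    with mock.patch.object(Driver, "get_info") as m:
        m.return_value = "generic"        # behavior for the first step
        step1 = Driver().get_info()
        m.return_value = "specialized"    # reconfigured for the second step
        step2 = Driver().get_info()
    return step1, step2
```

Reconfiguring one mock keeps the patch/unpatch lifecycle in a single place, which is why the patch prefers it over overlapping patches.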
* Record SRIOV PF MAC in the binding profile (Balazs Gibizer, 2022-11-17; 13 files, -102/+798)

    Today Nova updates the mac_address of a direct-physical port to
    reflect the MAC address of the physical device the port is bound to.
    But this can only be done before the port is bound. During migration,
    however, Nova does not update the MAC when the port is bound to a
    different physical device on the destination host.

    This patch extends the libvirt virt driver to provide the MAC address
    of the PF in the pci_info returned to the resource tracker. This
    information will then be persisted in the extra_info field of the
    PciDevice object. The port update logic during migration, resize,
    live migration, evacuation and unshelve is also extended to record
    the MAC of the physical device in the port binding profile according
    to the device on the destination host.

    The related neutron change Ib0638f5db69cb92daf6932890cb89e83cf84f295
    uses this info from the binding profile to update the mac_address
    field of the port when the binding is activated.

    Closes-Bug: #1942329

    Conflicts:
        nova/objects/pci_device.py
        nova/virt/libvirt/host.py

    Change-Id: Iad5e70b43a65c076134e1874cb8e75d1ba214fde
    (cherry picked from commit cd03bbc1c33e33872594cf002f0e7011ab8ea047)
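The recording step described above can be sketched as a small helper. The dict keys (`extra_info["mac_address"]`, `device_mac_address`) and the helper name are illustrative assumptions, not necessarily nova's exact field names:

```python
def record_pf_mac_in_profile(pci_dev, binding_profile):
    """Copy the parent PF's MAC (persisted by the resource tracker in the
    device's extra_info) into the port binding profile, so neutron can
    update the port's mac_address when the binding is activated.

    Field names here are illustrative, not nova's exact schema."""
    mac = pci_dev.get("extra_info", {}).get("mac_address")
    if mac:
        binding_profile["device_mac_address"] = mac
    return binding_profile
```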
* refactor: remove duplicated logic (Balazs Gibizer, 2022-11-17; 1 file, -21/+3)

    Remove _update_port_pci_binding_profile and replace its usage with
    _get_pci_device_profile.

    Change-Id: I34dae6fdb746205f0baa4997e69eec55634bec4d
    (cherry picked from commit 8d2776fb34339b311c713992a39507452c4ae42f)
* [compute] always set instance.host in post_livemigration (Sean Mooney, 2022-10-21; 3 files, -6/+63)

    This change adds a new _post_live_migration_update_host function that
    wraps _post_live_migration and ensures that if we exit due to an
    exception, instance.host is set to the destination host.

    When we are in _post_live_migration the guest has already started
    running on the destination host and we cannot revert. Sometimes
    admins or users will hard reboot the instance, expecting that to fix
    everything, when the VM enters the error state after the failed
    migration. Previously this would end up recreating the instance on
    the source node, leading to possible data corruption if the instance
    used shared storage.

    Change-Id: Ibc4bc7edf1c8d1e841c72c9188a0a62836e9f153
    Partial-Bug: #1628606
    (cherry picked from commit 8449b7caefa4a5c0728e11380a088525f15ad6f5)
    (cherry picked from commit 643b0c7d35752b214eee19b8d7298a19a8493f6b)
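The wrapper described above can be sketched as follows; FakeInstance and the injected callable are illustrative stand-ins, not nova's actual objects or signatures:

```python
class FakeInstance:
    """Toy stand-in for a nova Instance object."""
    def __init__(self, host):
        self.host = host
        self.saved = False

    def save(self):
        self.saved = True

def post_live_migration_update_host(instance, dest, post_live_migration):
    """Run the post-migration step, but guarantee instance.host points at
    the destination even when the step raises (sketch of the idea)."""
    try:
        post_live_migration(instance, dest)
    except Exception:
        # The guest already runs on dest and we cannot revert; never
        # leave instance.host on the source, or a later hard reboot
        # would recreate the VM there (risking data corruption on
        # shared storage).
        instance.host = dest
        instance.save()
        raise
```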
* Adds a reproducer for post live migration failure (Amit Uniyal, 2022-10-21; 2 files, -1/+63)

    Adds a regression test (reproducer) for post live migration failure
    at the destination; possible causes include failing to get instance
    network info or block device info.

    Changes: _live_migrate in _integrated_helpers now returns the server.

    Related-Bug: #1628606
    Change-Id: I48dbe0aae8a3943fdde69cda1bd663d70ea0eb19
    (cherry picked from commit a20baeca1f5ebb0dfe9607335a6986e9ed0e1725)
    (cherry picked from commit 74a618a8118642c9fd32c4e0d502d12ac826affe)
* For evacuation, ignore if task_state is not None (Amit Uniyal, 2022-08-05; 2 files, -17/+10)

    Ignore the instance task state and continue with VM evacuation.

    Closes-Bug: #1978983
    Change-Id: I5540df6c7497956219c06cff6f15b51c2c8bc29d
    (cherry picked from commit db919aa15f24c0d74f3c5c0e8341fad3f2392e57)
* add regression test case for bug 1978983 (Amit Uniyal, 2022-08-05; 2 files, -2/+82)

    This change adds a reproducer test for evacuating a VM in the
    powering-off state.

    Related-Bug: #1978983
    Change-Id: I5540df6c7497956219c06cff6f15b51c2c8bc299
    (cherry picked from commit 5904c7f993ac737d68456fc05adf0aaa7a6f3018)
* Merge "libvirt: Add a workaround to skip compareCPU() on destination" into stable/yoga (Zuul, 2022-08-01; 3 files, -9/+37)
| * libvirt: Add a workaround to skip compareCPU() on destination (Kashyap Chamarthy, 2022-06-08; 3 files, -9/+37)

    Nova's use of libvirt's compareCPU() API served its purpose over the
    years, but its design limitations break live migration in subtle
    ways. For example, the compareCPU() API compares against the host
    physical CPUID. Some of the features from this CPUID are not exposed
    by KVM, and there are some features that KVM emulates that are not in
    the host CPUID. The latter can cause bogus live migration failures.

    With QEMU >= 2.9 and libvirt >= 4.4.0, libvirt will do the right
    thing in terms of CPU compatibility checks on the destination host
    during live migration. Nova satisfies these minimum version
    requirements by a good margin. So, provide a workaround to skip the
    CPU comparison check on the destination host before migrating a
    guest, and let libvirt handle it correctly.

    This workaround will be removed once Nova replaces the older libvirt
    APIs with their newer and improved counterparts[1][2].

    Note that Nova's libvirt driver calls compareCPU() in another method,
    _check_cpu_compatibility(); I did not remove its usage yet, as it
    needs more careful combing of the code, and then:

      - where possible, remove the usage of compareCPU() altogether, and
        rely on libvirt doing the right thing under the hood; or
      - where Nova _must_ do the CPU comparison checks, switch to the
        better libvirt CPU APIs -- baselineHypervisorCPU() and
        compareHypervisorCPU() -- that are described here[1]. This is
        work in progress[2].

    [1] https://opendev.org/openstack/nova-specs/commit/70811da221035044e27
    [2] https://review.opendev.org/q/topic:bp%252Fcpu-selection-with-hypervisor-consideration

    Change-Id: I444991584118a969e9ea04d352821b07ec0ba88d
    Closes-Bug: #1913716
    Signed-off-by: Kashyap Chamarthy <kchamart@redhat.com>
    Signed-off-by: Balazs Gibizer <bgibizer@redhat.com>
    (cherry picked from commit 267a40663cd8d0b94bbc5ebda4ece55a45753b64)
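Operators opt into the workaround through nova.conf on the source compute. The option name below is an assumption from memory of this change; verify it against the yoga release note before relying on it:

```ini
# nova.conf (assumed option name for this workaround; check the release
# note for this change before use)
[workarounds]
skip_cpu_compare_on_dest = true
```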
* | Merge "neutron: Unbind remaining ports after PortNotFound" into stable/yoga (Zuul, 2022-07-30; 2 files, -23/+68)
| * | neutron: Unbind remaining ports after PortNotFound (Stephen Finucane, 2022-05-19; 2 files, -23/+68)

    Just because we encountered a PortNotFound error when unbinding a
    port doesn't mean we should stop unbinding the remaining ports. If
    this error is encountered, simply continue with the other ports.
    While we're here, we clean up some other tests related to
    '_unbind_port' since they're clearly duplicates.

    Change-Id: Id04e0df12829df4d8929e03a8b76b5cbe0549059
    Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
    Closes-Bug: #1974173
    (cherry picked from commit 9e0dcb52ab308a63c6a18e47d1850cc3ade4d807)
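The continue-on-missing-port behavior can be sketched like this; PortNotFound and FakeNeutron are local stand-ins for the real neutron client pieces, not nova's actual code:

```python
class PortNotFound(Exception):
    """Stand-in for the neutron client's port-not-found exception."""

class FakeNeutron:
    """Minimal fake neutron client for the sketch."""
    def __init__(self, existing_ports):
        self.existing = set(existing_ports)
        self.unbound = []

    def update_port(self, port_id, body):
        if port_id not in self.existing:
            raise PortNotFound(port_id)
        self.unbound.append(port_id)

def unbind_ports(neutron, port_ids):
    # A port that has already been deleted has nothing left to unbind;
    # skip it and keep unbinding the remaining ports instead of aborting
    # the whole loop on the first PortNotFound.
    for port_id in port_ids:
        try:
            neutron.update_port(port_id, {"port": {"binding:host_id": None}})
        except PortNotFound:
            continue
```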
* | | Fix typos in help messages (Rajesh Tailor, 2022-06-18; 8 files, -10/+10)

    This change fixes typos in conf parameter help messages and in an
    error log message.

    Change-Id: Iedc268072d77771b208603e663b0ce9b94215eb8
    (cherry picked from commit aa1e7a6933df221e72a1371d286a63a9a08ce90a)
* | | Merge "Allow claiming PCI PF if child VF is unavailable" into stable/yoga (tag: 25.0.1) (Zuul, 2022-06-15; 3 files, -24/+160)
| * | | Allow claiming PCI PF if child VF is unavailable (Balazs Gibizer, 2022-05-06; 3 files, -24/+160)

    As If9ab424cc7375a1f0d41b03f01c4a823216b3eb8 stated, there is a way
    for the pci_device table to become inconsistent: the parent PF can be
    in 'available' state while its children VFs are still in
    'unavailable' state. In this situation the PF is schedulable, but the
    PCI claim will fail when trying to mark the dependent VFs
    unavailable.

    This patch changes the PCI claim logic to allow claiming the parent
    PF in the inconsistent situation, as we assume that it is safe to do
    so. Such a claim also fixes the inconsistency, so that when the
    parent PF is freed the children VFs become available again.

    Closes-Bug: #1969496
    Change-Id: I575ce06bcc913add7db0849f85728371da2032fc
    (cherry picked from commit 3af2ecc13fa9334de8418accaed4fffefefb41da)
* | | | Merge "Simulate bug 1969496" into stable/yoga (Zuul, 2022-06-15; 1 file, -0/+56)
| * | | Simulate bug 1969496 (Balazs Gibizer, 2022-05-06; 1 file, -0/+56)

    As If9ab424cc7375a1f0d41b03f01c4a823216b3eb8 stated, there is a way
    for the pci_device table to become inconsistent: the parent PF can be
    in 'available' state while its children VFs are still in
    'unavailable' state. In this situation the PF is schedulable, but the
    PCI claim will fail when trying to mark the dependent VFs
    unavailable. This patch adds a test case that shows the error.

    Related-Bug: #1969496
    Change-Id: I7b432d7a32aeb1ab765d1f731691c7841a8f1440
    (cherry picked from commit 9ee5d2c66255f83cc8a66f1b5648fa13e1d73f47)
* | | | Merge "Remove unavailable but not reported PCI devices at startup" into stable/yoga (Zuul, 2022-06-14; 4 files, -7/+123)
| * | | Remove unavailable but not reported PCI devices at startup (Balazs Gibizer, 2022-05-06; 4 files, -7/+123)

    We saw in the field that the pci_devices table can end up in an
    inconsistent state after a compute node HW failure and re-deployment.
    There could be dependent devices where the parent PF is in
    'available' state while the children VFs are in 'unavailable' state.
    (Before the HW fault the PF was allocated, hence the VFs were marked
    unavailable.) In this state the PF is still schedulable, but during
    the PCI claim the handling of dependent devices in the PCI tracker
    will fail with the error: "Attempt to consume PCI device XXX from
    empty pool". The reason for the failure is that when the PF is
    claimed, all the children VFs are marked unavailable; but if a VF is
    already unavailable, that step fails.

    One way the deployer might try to recover from this state is to
    remove the VFs from the hypervisor and restart the compute agent. The
    compute startup already has logic to delete PCI devices that are
    unused and not reported by the hypervisor. However, this logic only
    removed devices in 'available' state and ignored devices in
    'unavailable' state. If a device is unused and the hypervisor is not
    reporting the device any more, then it is safe to delete that device
    from the PCI tracker. So this patch extends the logic to allow
    deleting 'unavailable' devices.

    There is a small window when a dependent PCI device is in
    'unclaimable' state. From a cleanup perspective this is an analogous
    state, so it is also added to the cleanup logic.

    Related-Bug: #1969496
    Change-Id: If9ab424cc7375a1f0d41b03f01c4a823216b3eb8
    (cherry picked from commit 284ea72e96604bdf16d1c5c4db47247334841b2f)
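The extended startup cleanup can be sketched as a filter over the tracked devices; the dict shape and state names mirror the description above but are simplified stand-ins for nova's PciDevice objects:

```python
def prune_stale_devices(tracked_devices, reported_addresses):
    """Keep a tracked device only if the hypervisor still reports it, or
    if it is genuinely in use. Unused devices in 'available',
    'unavailable' or the transient 'unclaimable' state that are no
    longer reported are safe to drop (simplified sketch)."""
    removable_states = {"available", "unavailable", "unclaimable"}
    return [
        dev for dev in tracked_devices
        if dev["address"] in reported_addresses
        or dev["status"] not in removable_states
    ]
```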
* | | | Merge "Isolate PCI tracker unit tests" into stable/yoga (Zuul, 2022-06-14; 1 file, -33/+30)
| * | | Isolate PCI tracker unit tests (Balazs Gibizer, 2022-05-06; 1 file, -33/+30)

    During the testing of If9ab424cc7375a1f0d41b03f01c4a823216b3eb8 we
    noticed that the unit test cases of PciTracker._set_hvdevs are
    changing and leaking global state, leading to unstable tests. To
    reproduce on master, duplicate the
    test_set_hvdev_remove_tree_maintained_with_allocations test case and
    run PciDevTrackerTestCase serially. The duplicated test case will
    fail with:

        File "/nova/nova/objects/pci_device.py", line 238, in _from_db_object
            setattr(pci_device, key, db_dev[key])
        KeyError: 'id'

    This is caused by the fact that the test data is defined at module
    level, both _create_tracker and _set_hvdevs modify the devices passed
    to them, and some tests mix passing db dicts to _set_hvdevs, which
    expects pci dicts from the hypervisor.

    This patch fixes multiple related issues:

    * always deepcopy what _create_tracker takes, as that list is later
      returned to the PciTracker via a mock and the tracker might modify
      what it got
    * ensure that _create_tracker takes db dicts (with an id field) while
      _set_hvdevs takes pci dicts in the hypervisor format (without an id
      field)
    * always deepcopy what is passed to _set_hvdevs, as the PciTracker
      modifies what it gets
    * normalize when the deepcopy happens, to give a safe pattern for
      future test cases

    Change-Id: I20fb4ea96d5dfabfc4be3b5ecec0e4e6c5b3a318
    (cherry picked from commit c58376db75917444831934963fa75b4b57f08818)
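The deepcopy pattern adopted by the patch can be demonstrated in isolation; fake_db_devs and create_tracker are toy stand-ins for the module-level test data and the mutating code under test:

```python
import copy

# Module-level test data, shared by every test case in the module.
fake_db_devs = [{"address": "0000:81:00.0", "status": "available"}]

def create_tracker(devs):
    # Stand-in for code under test that mutates the dicts it is given.
    for dev in devs:
        dev["id"] = 1
    return devs

def make_tracker():
    # Deep-copy the shared data before handing it over, so one test case
    # cannot leak its mutations into the next one.
    return create_tracker(copy.deepcopy(fake_db_devs))
```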
* | | | Merge "Fix segment-aware scheduling permissions error" into stable/yoga (Zuul, 2022-06-14; 2 files, -4/+18)
| * | | Fix segment-aware scheduling permissions error (Andrew Bonney, 2022-05-06; 2 files, -4/+18)

    Resolves a bug encountered when setting the Nova scheduler to be
    aware of Neutron routed provider network segments, by using
    'query_placement_for_routed_network_aggregates'.

    Non-admin users attempting to access the 'segment_id' attribute of a
    subnet caused a traceback, resulting in instance creation failure.
    This patch ensures the Neutron client is initialised with an
    administrative context no matter what the requesting user's
    permissions are.

    Change-Id: Ic0f25e4d2395560fc2b68f3b469e266ac59abaa2
    Closes-Bug: #1970383
    (cherry picked from commit ee32934f34afd8e6df467361e9d71788cd36f6ee)
* | | Add missing condition (Rajesh Tailor, 2022-05-30; 2 files, -2/+5)

    Change [1] added new fields 'src|dst_supports_numa_live_migration' to
    the LibvirtLiveMigrateData object, but missed the if condition for
    the dst_supports_numa_live_migration field in the
    obj_make_compatible method. This change adds the if condition and
    also fixes a typo in a unit test because of which this wasn't caught
    earlier.

    Closes-Bug: #1975891
    Change-Id: Ice5a2c7aca77f47ea6328a10d835854d9aff408e
    (cherry picked from commit 3aa77a3999a7dcabbd4c0141d4c56b07a4624128)
* | Merge "Fix pre_live_migration rollback" into stable/yoga (Zuul, 2022-05-18; 3 files, -13/+17)
| * | Fix pre_live_migration rollback (Erlon R. Cruz, 2022-04-20; 3 files, -13/+17)

    During the pre live migration process, Nova performs most of the
    tasks related to the creation and operation of the VM on the
    destination host. That is done without interrupting any of the
    hardware on the source host. If pre_live_migration fails, those same
    operations should be rolled back. Currently nova shares
    _rollback_live_migration for both live and pre-live migration
    rollbacks, and that causes the source host to try to re-attach
    network interfaces that weren't actually detached there. This patch
    fixes that by adding a conditional that lets nova take different
    paths for migration and pre_live_migration rollbacks.

    Closes-bug: #1944619
    Change-Id: I784190ac356695dd508e0ad8ec31d8eaa3ebee56
    (cherry picked from commit 63ffba7496182f6f6f49a380f3c639fc3ded9772)
* | | Merge "Adds regression test for bug LP#1944619" into stable/yoga (Zuul, 2022-05-18; 1 file, -0/+82)
| * | Adds regression test for bug LP#1944619 (Erlon R. Cruz, 2022-04-20; 1 file, -0/+82)

    Related-bug: #1944619
    Closes-bug: #1964472
    Change-Id: Ie7e5377aea23a4fbd7ad91f245d17def6d0fb927
    (cherry picked from commit 2ddb8bf53fdf9a17c09afc4987ab6efe8ba97696)
* | Retry in CellDatabases fixture when global DB state changes (melanie witt, 2022-05-06; 1 file, -20/+46)

    There is a NOTE in the CellDatabases code about an unlikely but
    possible race that can occur between taking the writer lock to set
    the last DB context manager and taking the reader lock to call
    target_cell(). When the race is detected, a RuntimeError is raised.

    We can handle the race by retrying the setting of the last DB context
    manager when the race is detected, as described in the NOTE.

    Closes-Bug: #1959677
    Change-Id: I5c0607ce5910dce581ab9360cc7fc69ba9673f35
    (cherry picked from commit 1c8122a25f50b40934af127d7717b55794ff38b5)
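The retry-on-detected-race idea can be sketched generically; the is_race predicate and the attempt bound are illustrative, not the fixture's real code:

```python
def run_with_retry(operation, is_race, max_attempts=3):
    """Retry an operation when it fails with the known benign race,
    instead of letting the RuntimeError escape to the caller."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except RuntimeError as exc:
            if not is_race(exc) or attempt == max_attempts - 1:
                raise
            # Race detected: global DB state changed underneath us,
            # so loop around and try setting the context manager again.
```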
* | Fix eventlet.tpool import (Balazs Gibizer, 2022-04-07; 1 file, -1/+2)

    Currently nova.utils.tpool_execute() only works by chance, and as the
    bug report shows there are environments where it fails. The
    nova.utils.tpool_execute() call tries to use eventlet.tpool.execute,
    but the utils module imports only eventlet, not the tpool module. In
    devstack it works by chance, as the wsgi init actually imports
    eventlet.tpool indirectly via:

        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "/opt/stack/nova/nova/api/openstack/compute/__init__.py", line 21, in <module>
            from nova.api.openstack.compute.routes import APIRouterV21  # noqa
          File "/opt/stack/nova/nova/api/openstack/compute/routes.py", line 20, in <module>
            from nova.api.openstack.compute import admin_actions
          File "/opt/stack/nova/nova/api/openstack/compute/admin_actions.py", line 17, in <module>
            from nova.api.openstack import common
          File "/opt/stack/nova/nova/api/openstack/common.py", line 27, in <module>
            from nova.compute import task_states
          File "/opt/stack/nova/nova/compute/task_states.py", line 26, in <module>
            from nova.objects import fields
          File "/opt/stack/nova/nova/objects/fields.py", line 24, in <module>
            from nova.network import model as network_model
          File "/opt/stack/nova/nova/network/model.py", line 23, in <module>
            from nova import utils
          File "/opt/stack/nova/nova/utils.py", line 39, in <module>
            from oslo_concurrency import processutils
          File "/usr/local/lib/python3.8/dist-packages/oslo_concurrency/processutils.py", line 57, in <module>
            from eventlet import tpool

    This was broken since I8dbc579e0037969aab4f2bb500fccfbde4190726. This
    patch adds the correct import statement.

    Change-Id: Ic46345ceeb445164aea6ae9b35c457c6150765f6
    Closes-Bug: #1915400
    (cherry picked from commit b2d28f890872747d099a262e4a208e146b882f3f)
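The root cause is a general Python rule: importing a package does not import its submodules. The demo below shows the rule with the stdlib concurrent package (eventlet itself may not be installed here); the fix in nova was simply to import the submodule explicitly, i.e. `from eventlet import tpool`.

```python
import importlib
import sys

import concurrent

# Simulate a fresh interpreter: forget any earlier concurrent.futures
# import, including the attribute the import system set on the parent.
sys.modules.pop("concurrent.futures", None)
if hasattr(concurrent, "futures"):
    delattr(concurrent, "futures")

# `import concurrent` alone does not give us concurrent.futures...
before = hasattr(concurrent, "futures")

# ...only importing the submodule binds it on the parent package, which
# is exactly what `from eventlet import tpool` guarantees for
# eventlet.tpool in nova/utils.py.
importlib.import_module("concurrent.futures")
after = hasattr(concurrent, "futures")
```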
* Merge "Clean up when queued live migration aborted" (Zuul, 2022-03-10; 3 files, -39/+69)
| * Clean up when queued live migration aborted (Alexey Stupnikov, 2022-03-09; 3 files, -39/+69)

    This patch solves bug #1949808 and bug #1960412 by tuning the
    live_migration_abort() function and adding calls to:

    - remove placement allocations for the live migration;
    - remove INACTIVE port bindings against the destination compute node;
    - restore the instance's state.

    The related unit test was adjusted and related functional tests were
    fixed.

    Closes-bug: #1949808
    Closes-bug: #1960412
    Change-Id: Ic97eff86f580bff67b1f02c8eeb60c4cf4181e6a
* | Merge "Revert "Adds regression test for bug LP#1944619"" (Zuul, 2022-03-10; 1 file, -87/+0)
| * | Revert "Adds regression test for bug LP#1944619" (Balazs Gibizer, 2022-03-10; 1 file, -87/+0)

    This reverts commit d43538712c034023bdb3e25cd7adfdee409ed596.

    Reason for revert: the functional test introduced here is unstable.
    See https://bugs.launchpad.net/nova/+bug/1964472

    Change-Id: I8c3a1655933300357aaf300650f91689d9a46bf5
* | | Merge "Fix migration with remote-managed ports & add FT" (Zuul, 2022-03-10; 7 files, -51/+739)
| * | | Fix migration with remote-managed ports & add FT (Dmitrii Shcherbakov, 2022-03-04; 7 files, -51/+739)

    `binding:profile` updates are handled differently for migration than
    for instance creation, which was not taken into account previously.
    Relevant fields (card_serial_number, pf_mac_address, vf_num) are now
    added to the `binding:profile` after a new remote-managed PCI device
    is determined at the destination node. Likewise, there is special
    handling for the unshelve operation, which is fixed too.

    Func testing:

    * Allow the generated device XML to contain the PCI VPD capability;
    * Add test cases for basic operations on instances with
      remote-managed ports (tunnel or physical);
    * Add a live migration test case similar to how it is done for
      non-remote-managed SR-IOV ports but taking remote-managed port
      related specifics into account;
    * Add evacuate, shelve/unshelve, cold migration test cases.

    Change-Id: I9a1532e9a98f89db69b9ae3b41b06318a43519b3
* | | | Merge "Add functional tests to reproduce bug #1960412" (Zuul, 2022-03-10; 3 files, -7/+118)
| * | | Add functional tests to reproduce bug #1960412 (Alexey Stupnikov, 2022-03-09; 3 files, -7/+118)

    An instance is affected by the problems described in bug #1949808 and
    bug #1960412 when a queued live migration is aborted. This change
    adds functional tests to reproduce the problems with placement
    allocations (the record for an aborted live migration is not removed
    when a queued live migration is aborted) and with Neutron port
    bindings (INACTIVE port binding records for the destination host are
    not removed when a queued live migration is aborted). It looks like
    there are no other modifications introduced by the Nova control plane
    which should be reverted when a queued live migration is aborted.

    This patch also changes the libvirt and neutron fixtures:

    - The libvirt fixture was changed to support live migrations of
      instances with regular ports: without this change
      _update_vif_xml() complains about the lack of an address element
      in the VIF's XML.
    - The neutron fixture was changed to improve active port binding
      tracking during live migration: without this change the port's
      binding:host_id is not updated when activate_port_binding() is
      called. As a result, the list_ports() function returns an empty
      list when constants.BINDING_HOST_ID is used in search_opts, which
      is the case for setup_networks_on_host() called with teardown=True.

    Related-bug: #1960412
    Related-bug: #1949808
    Change-Id: I152581deb6e659c551f78eed66e4b0b958b20c53
* | | | Merge "reenable greendns in nova." (Zuul, 2022-03-09; 2 files, -58/+2)
| * | | reenable greendns in nova. (Sean Mooney, 2022-03-08; 2 files, -58/+2)

    Back in the days of centos 6 and python 2.6, eventlet greendns
    monkeypatching broke ipv6. As a result nova has run without greendns
    monkey patching ever since. This removes that old workaround,
    allowing modern eventlet to use greendns for non-blocking DNS
    lookups.

    Closes-Bug: #1964149
    Change-Id: Ia511879d2f5f50a3f63d180258abccf046a7264e
* | | | Merge "Adds regression test for bug LP#1944619" (Zuul, 2022-03-09; 1 file, -0/+87)
| * | | Adds regression test for bug LP#1944619 (Erlon R. Cruz, 2022-02-10; 1 file, -0/+87)

    Related-bug: #1944619
    Change-Id: Ia208ef0dd0bab1c59d025234f4da7537c39cda61
* | | | Merge "Follow up for unified limits" (Zuul, 2022-03-09; 4 files, -13/+54)
| * | | Follow up for unified limits (melanie witt, 2022-03-04; 4 files, -13/+54)

    This addresses remaining comments from the unified limits series to
    add type hints to new code and add a docstring to the
    is_qfd_populated() method in nova/quota.py.

    Related to blueprint unified-limits-nova
    Change-Id: I948647b04b260e888a4c71c1fa3c2a7be5d140c5
* | | | | Merge "Nova resize don't extend disk in one specific case" (Zuul, 2022-03-02; 2 files, -3/+70)
| * | | | Nova resize don't extend disk in one specific case (Pierre LIBEAU, 2022-01-21; 2 files, -3/+70)

    Nova resize does not extend the virtual size of the instance if the
    image is not accessible to the customer (the public image used to
    build the instance is now a private image because it is deprecated)
    and the source compute of the resize no longer has the base image.

    Related-Bug: #1558880
    Change-Id: I4d6dfca1efe10caebb017b6ec96820979018203f
* | | | | Merge "Move file system freeze after end of mirroring" (Zuul, 2022-03-01; 2 files, -10/+25)
| * | | | | Move file system freeze after end of mirroring (Pierre Libeau, 2022-03-01; 2 files, -10/+25)

    Freezing the file system before the start of mirroring generates
    kernel "task blocked for more than 120 seconds" warnings. Freezing
    the file system after the start of mirroring, just before stopping
    the mirror between the original disk and the copy of the disk,
    reduces the duration of the freeze.

    Related-Bug: #1939116
    Change-Id: I067382ec676b65f8698772a262c5e4cea7a1216d