Commit message — Author — Age — Files — Lines
...
* Robustify attachment tracking in CinderFixtureNewAttachFlow
  Matt Riedemann, 2020-08-27 (1 file changed, -20/+37)

  Cinder allows more than one attachment record on a non-multiattach volume as long as only one attachment is "connected" (has a host connector). When you create an empty (no connector) attachment for an in-use volume, the volume status changes to "attaching". If you try to update the empty attachment before deleting the previous attachment, Cinder will return a 400 like:

    Invalid volume: Volume b2aba195-6570-40c4-82bb-46d3557fceeb status must be available or downloading to reserve, but the current status is attaching.

  This change refactors the attachment tracking in the CinderFixtureNewAttachFlow fixture to be able to track the connector per attachment, and if code tries to update an attachment on an already connected volume it will fail.

  Change-Id: I369f82245465e96fc15d4fc71a79a8a71f6f2c6d
  (cherry picked from commit 1d3ca5a3a07fdfff0d61ac11c390dfd4acab23d7)
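The connector-per-attachment bookkeeping described above can be modeled with a small sketch. The `VolumeAttachmentTracker` class below is hypothetical (not the actual Nova fixture): updating an attachment fails while another attachment on the same volume still holds a connector, mimicking Cinder's 400 response.

```python
import uuid


class VolumeAttachmentTracker:
    """Toy model of tracking a host connector per attachment record."""

    def __init__(self):
        # attachment_id -> {'volume_id': ..., 'connector': ...}
        self.attachments = {}

    def create(self, volume_id, connector=None):
        """Create an attachment; connector=None models an empty attachment."""
        attachment_id = str(uuid.uuid4())
        self.attachments[attachment_id] = {
            'volume_id': volume_id, 'connector': connector}
        return attachment_id

    def update(self, attachment_id, connector):
        """Connect an attachment; fail like Cinder if another attachment on
        the same (non-multiattach) volume is already connected."""
        volume_id = self.attachments[attachment_id]['volume_id']
        for aid, att in self.attachments.items():
            if (aid != attachment_id and att['volume_id'] == volume_id
                    and att['connector'] is not None):
                raise ValueError(
                    'Invalid volume: volume %s is already connected'
                    % volume_id)
        self.attachments[attachment_id]['connector'] = connector

    def delete(self, attachment_id):
        del self.attachments[attachment_id]
```

Usage mirrors the bug description: create an empty attachment on an in-use volume, then updating it only succeeds after the old attachment is deleted.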
* Merge "Improve CinderFixtureNewAttachFlow" into stable/stein
  Zuul, 2020-09-04 (1 file changed, -3/+19)
* Improve CinderFixtureNewAttachFlow
  Matt Riedemann, 2020-08-27 (1 file changed, -3/+19)

  This does three things:

  1. Implements the attachment_complete method in order to fail if code is trying to complete an attachment that does not exist.
  2. Adds logging in the various attachment CRUD methods to aid in debugging test failures.
  3. Mirrors the method signature for is_microversion_supported to make sure code is at least calling it properly.

  Change-Id: If6a36d23768877bfa820ed44b610bfb113df5464
  (cherry picked from commit 576a4e5260c4bc813474dead4eec19bd2a1cc680)
* libvirt: Do not reference VIR_ERR_DEVICE_MISSING when libvirt is < v4.1.0
  Lee Yarwood, 2020-08-28 (4 files changed, -21/+141)

  I7eb86edc130d186a66c04b229d46347ec5c0b625 introduced VIR_ERR_DEVICE_MISSING into the hot unplug libvirt error code list within detach_device_with_retry. While the change correctly referenced that the error code was introduced in v4.1.0, it made no attempt to handle versions prior to this. With MIN_LIBVIRT_VERSION currently pinned to v4.0.0 we need to handle libvirt < v4.1.0 to avoid referencing the non-existent error code within the libvirt module.

  NOTE(lyarwood): Conflict as I2830ccfc81cfa9654cfeac7ad5effc294f523552 and Idd49b0c70caedfcd42420ffa2ac926a6087d406e are not present in stable/stein.

  Conflicts:
    nova/virt/libvirt/driver.py

  Closes-Bug: #1891547
  Change-Id: I32908b77c18f8ec08211dd67be49bbf903611c34
  (cherry picked from commit bc96af565937072c04dea31781d86d2073b77ed4)
  (cherry picked from commit 3f3b889f4e7e204a140d32d71201c4f23dd54c24)
  (cherry picked from commit c61f4c8e20d712ba84a8965cbe0cba90c7d27d0b)
* libvirt: Handle VIR_ERR_DEVICE_MISSING when detaching devices
  Lee Yarwood, 2020-08-28 (3 files changed, -54/+67)

  Introduced in libvirt v4.1.0 [1], this error code replaces the previously raised VIR_ERR_OPERATION_FAILED, VIR_ERR_INTERNAL_ERROR and VIR_ERR_INVALID_ARG codes [2][3].

  VIR_ERR_OPERATION_FAILED was introduced and tested as an active/live/hot unplug config device detach error code in I131aaf28d2f5d5d964d4045e3d7d62207079cfb0.

  VIR_ERR_INTERNAL_ERROR was introduced and tested as an active/live/hot unplug config device detach error code in I3055cd7641de92ab188de73733ca9288a9ca730a.

  VIR_ERR_INVALID_ARG was introduced and tested as an inactive/persistent/cold unplug config device detach error code in I09230fc47b0950aa5a3db839a070613c9c817576.

  This change introduces support for the new VIR_ERR_DEVICE_MISSING error code while also retaining coverage for these codes until MIN_LIBVIRT_VERSION is bumped past v4.1.0.

  The majority of this change is test code motion, with the existing tests modified to run against either the active or inactive versions of the above error codes for the time being.

  test_detach_device_with_retry_operation_internal and test_detach_device_with_retry_invalid_argument_no_live have been removed as they duplicate the logic within the now refactored _test_detach_device_with_retry_second_detach_failure.

  [1] https://libvirt.org/git/?p=libvirt.git;a=commit;h=bb189c8e8c93f115c13fa3bfffdf64498f3f0ce1
  [2] https://libvirt.org/git/?p=libvirt.git;a=commit;h=126db34a81bc9f9f9710408f88cceaa1e34bbbd7
  [3] https://libvirt.org/git/?p=libvirt.git;a=commit;h=2f54eab7c7c618811de23c60a51e910274cf30de

  Closes-Bug: #1887946
  Change-Id: I7eb86edc130d186a66c04b229d46347ec5c0b625
  (cherry picked from commit 902f09af251d2b2e56fb2f2900a3510baf38a508)
  (cherry picked from commit 93058ae1b8bc1b1728f08b9e606b68318751fc3b)
  (cherry picked from commit 863d6ef7601302901fa3368ea8457b3564eeb501)
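The version gating from the two commits above amounts to only referencing the constant when the loaded libvirt module actually defines it. A minimal sketch, with hypothetical stub modules standing in for the real libvirt python binding (real error code values differ):

```python
import types

# Stand-ins for the libvirt python module at two versions; the numeric
# values here are invented for the sketch.
libvirt_v400 = types.SimpleNamespace(
    VIR_ERR_OPERATION_FAILED=9, VIR_ERR_INTERNAL_ERROR=1,
    VIR_ERR_INVALID_ARG=8)
libvirt_v410 = types.SimpleNamespace(
    VIR_ERR_OPERATION_FAILED=9, VIR_ERR_INTERNAL_ERROR=1,
    VIR_ERR_INVALID_ARG=8, VIR_ERR_DEVICE_MISSING=99)


def unplug_error_codes(libvirt):
    """Error codes treated as 'device already gone' during detach retry."""
    codes = [libvirt.VIR_ERR_OPERATION_FAILED,
             libvirt.VIR_ERR_INTERNAL_ERROR,
             libvirt.VIR_ERR_INVALID_ARG]
    # Only reference VIR_ERR_DEVICE_MISSING when the module defines it,
    # i.e. when running against libvirt >= v4.1.0.
    if hasattr(libvirt, 'VIR_ERR_DEVICE_MISSING'):
        codes.append(libvirt.VIR_ERR_DEVICE_MISSING)
    return tuple(codes)
```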
* Merge "libvirt: Provide VIR_MIGRATE_PARAM_PERSIST_XML during live migration" into stable/stein
  Zuul, 2020-08-28 (3 files changed, -1/+10)
* libvirt: Provide VIR_MIGRATE_PARAM_PERSIST_XML during live migration
  Lee Yarwood, 2020-08-25 (3 files changed, -1/+10)

  The VIR_MIGRATE_PARAM_PERSIST_XML parameter was introduced in libvirt v1.3.4 and is used to provide the new persistent configuration for the destination during a live migration:

    https://libvirt.org/html/libvirt-libvirt-domain.html#VIR_MIGRATE_PARAM_PERSIST_XML

  Without this parameter the persistent configuration on the destination will be the same as the original persistent configuration on the source when the VIR_MIGRATE_PERSIST_DEST flag is provided.

  As Nova does not currently provide the VIR_MIGRATE_PARAM_PERSIST_XML param but does provide the VIR_MIGRATE_PERSIST_DEST flag, a soft reboot by Nova of the instance after a live migration can revert the domain back to the original persistent configuration from the source. Note that this is only possible in Nova because a soft reboot actually results in the virDomainShutdown and virDomainLaunch libvirt APIs being called, which recreate the domain using the persistent configuration. virDomainReboot would not do this, but is not called at this time.

  The impact of this on the instance after the soft reboot is pretty severe: host devices referenced in the original persistent configuration on the source may not exist or could even be used by other users on the destination. CPU and NUMA affinity could also differ drastically between the two hosts, resulting in the instance being unable to start, etc.

  As MIN_LIBVIRT_VERSION is now > v1.3.4, this change simply includes the VIR_MIGRATE_PARAM_PERSIST_XML param using the same updated XML for the destination as is already provided to VIR_MIGRATE_PARAM_DEST_XML.

  Co-authored-by: Tadayoshi Hosoya <tad-hosoya@wr.jp.nec.com>
  Closes-Bug: #1890501
  Change-Id: Ia3f1d8e83cbc574ce5cb440032e12bbcb1e10e98
  (cherry picked from commit 1bb8ee95d4c3ddc3f607ac57526b75af1b7fbcff)
  (cherry picked from commit bbf9d1de06e9991acd968fceee899a8df3776d60)
  (cherry picked from commit 6a07edb4b29d8bfb5c86ed14263f7cd7525958c1)
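The fix described above boils down to passing the same updated XML as both the destination and the persistent config in the migration params dict. The builder function below is a hypothetical sketch; the string values mirror the documented libvirt macro values for VIR_MIGRATE_PARAM_DEST_XML and VIR_MIGRATE_PARAM_PERSIST_XML, which the real code would reference via the libvirt module rather than spell out.

```python
# Documented libvirt parameter-name values, hard-coded here only so the
# sketch runs without the libvirt binding installed.
VIR_MIGRATE_PARAM_DEST_XML = 'destination_xml'
VIR_MIGRATE_PARAM_PERSIST_XML = 'persistent_xml'


def build_migrate_params(updated_xml):
    """Provide the same updated XML for both the live config and the
    persistent config on the destination, so a later soft reboot
    (virDomainShutdown + virDomainLaunch) cannot revert the domain to the
    source host's stale persistent XML."""
    return {
        VIR_MIGRATE_PARAM_DEST_XML: updated_xml,
        VIR_MIGRATE_PARAM_PERSIST_XML: updated_xml,
    }
```

The real call site would then hand this dict to something like `dom.migrateToURI3(uri, params, flags)` with VIR_MIGRATE_PERSIST_DEST set in the flags.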
* Merge "Should not skip volume_size check for bdm.image_id == image_ref case" into stable/stein
  Zuul, 2020-08-26 (2 files changed, -6/+37)
* Should not skip volume_size check for bdm.image_id == image_ref case
  Kevin_Zheng, 2020-08-03 (2 files changed, -6/+37)

  The volume size should be checked in the bdm.source_type=image, dest_type=volume case no matter which image is used, but the check was skipped when bdm.image_id == image_ref. The volume_size check should not be skipped in that case.

  Change-Id: Ib10579280b63a4dd59ac76733aa0ff48fd2e024b
  Closes-Bug: #1818798
  (cherry picked from commit 24fe74d126f23bea56c87524b1005a2aaacb870c)
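The corrected check reduces to: an image-to-volume BDM always needs a volume_size, whether or not its image_id matches the instance's image_ref. A minimal sketch with a hypothetical `validate_bdm` helper operating on plain dicts instead of real BDM objects:

```python
def validate_bdm(bdm, image_ref):
    """Require volume_size for any image->volume mapping.

    image_ref is accepted but deliberately NOT consulted: the buggy code
    skipped this check when bdm['image_id'] == image_ref, which is exactly
    what the fix removes.
    """
    if (bdm.get('source_type') == 'image'
            and bdm.get('destination_type') == 'volume'
            and not bdm.get('volume_size')):
        raise ValueError('You need to specify the volume size')
```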
* objects: Update keypairs when saving an instance
  Takashi NATSUME, 2020-07-27 (4 files changed, -8/+117)

  The keypair of a server is updated when rebuilding the server with a keypair. This function has been available since API microversion 2.54. However, the 'keypairs' field of the instance object is currently not saved when saving the instance object. Make the instance object update the 'keypairs' field when saving.

  Conflicts:
    nova/tests/unit/fake_instance.py
    nova/tests/unit/objects/test_instance.py

  NOTE(stephenfin): Conflicts in 'fake_instance.py' are due to change If7f48933db10fcca3b9a05e1e978dfc51f6dabd0 ("Claim resources in resource tracker"), which is related to the vPMEM work and shouldn't be backported, while the conflicts in 'test_instance.py' are due to change Ic89352a9900515484bffe961475feb1cefc6b2a9 ("Remove 'instance_update_at_top', 'instance_destroy_at_top'"), which removed some cells v1 tests that shouldn't be removed here, where cells v1 is technically still a thing.

  Change-Id: I8a2726b39d0444de8c35480024078a97430f5d0c
  Closes-Bug: #1843708
  Co-authored-by: Stephen Finucane <stephenfin@redhat.com>
  (cherry picked from commit 086796021b189c3ac64805ed8f6bde833906d284)
  (cherry picked from commit aed86ee5d6289edf1baf9fe0b2a9e509031fdd25)
  (cherry picked from commit b971dc82cb524fe86284c95ec671e2bad1c2874f)
* Merge "compute: Allow snapshots to be created from PAUSED volume backed instances" into stable/stein  [tag: 19.3.0]
  Zuul, 2020-07-24 (2 files changed, -1/+6)
* compute: Allow snapshots to be created from PAUSED volume backed instances
  Lee Yarwood, 2020-07-21 (2 files changed, -1/+6)

  Iabeb44f843c3c04f767c4103038fcf6c52966ff3 allowed snapshots to be created from PAUSED non-volume backed instances but missed the volume backed use case. This change simply adds PAUSED to the list of acceptable vm_states when creating a snapshot from a volume backed instance, in addition to the already supported ACTIVE, STOPPED and SUSPENDED vm_states.

  Closes-Bug: #1878583
  Change-Id: I9f95a054de9d43ecaa50ff7ffc9343490e212d53
  (cherry picked from commit cfde53e4b402e71d7f53b6e0ab232854dba160dc)
  (cherry picked from commit a270eeeb9b1a65045c3a8bf3cfad5eee6415f63c)
  (cherry picked from commit c93ca609568bac73210f39207c821867620b2f0e)
* libvirt: Mark e1000e VIF as supported
  Stephen Finucane, 2020-07-22 (3 files changed, -0/+11)

  This is supported by the QEMU/KVM backends in libvirt. There's no reason not to support it in nova. This appears to have been an oversight.

  Change-Id: I12a5d28d75bc32a76a4f3765cb4db4cbc46c0c75
  Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
  Closes-Bug: #1882919
  (cherry picked from commit 644cb5cb8bf44184d9b3f046cc67746b13550dd6)
  (cherry picked from commit 840de3b8924612fa3fc47a4c9032a4723d536613)
  (cherry picked from commit ddbc262494cf0df3f3abd5b5139eb787bb085f8c)
* Merge "Create instance action when burying in cell0" into stable/stein
  Zuul, 2020-07-22 (3 files changed, -0/+120)
* Create instance action when burying in cell0
  Matt Riedemann, 2020-06-10 (3 files changed, -0/+120)

  Change I8742071b55f018f864f5a382de20075a5b444a79 in Ocata moved the creation of the instance record from the API to conductor. As a result, the "create" instance action was only being created in conductor when the instance is created in a non-cell0 database. This is a regression because before that change, when a server create failed during scheduling you could still list instance actions for the server and see the "create" action, but that was lost once we started burying those instances in cell0.

  This fixes the bug by creating the "create" action in the cell0 database when burying an instance there. It goes a step further and also creates and finishes an event so the overall action message shows up as "Error" with the details about where the failure occurred in the event traceback.

  A short release note is added since a new action event is added here (conductor_schedule_and_build_instances) rather than re-using some kind of event that we could generate from the compute service, e.g. compute__do_build_and_run_instance.

  Change-Id: I1e9431e739adfbcfc1ca34b87e826a516a4b18e2
  Closes-Bug: #1852458
  (cherry picked from commit f2608c91175411ec7c2604035adb39306d7e607e)
  (cherry picked from commit 6484d9ff5b03f7b7d8e9ba296f7f32d2e54fcc11)
* Merge "zuul: remove legacy-tempest-dsvm-neutron-dvr-multinode-full" into stable/stein
  Zuul, 2020-07-21 (1 file changed, -2/+0)
* zuul: remove legacy-tempest-dsvm-neutron-dvr-multinode-full
  Luigi Toscano, 2020-07-21 (1 file changed, -2/+0)

  The job was part of the neutron experimental queue but was removed during the Ussuri lifecycle. See https://review.opendev.org/#/c/693630/

  Change-Id: I04717b95dd44ae89f24bd74525d1c9607e3bc0fc
  (cherry picked from commit bce4a3ab97320bdc2a6a43e2a961a0aa0b8ffb63)
  (cherry picked from commit cf399a363ca530151895c4b7cf49ad7b2a79e01b)
  (cherry picked from commit b1ead1fb2adf25493e5cab472d529fde31f985f0)
* Merge "Remove stale nested backport from InstancePCIRequests" into stable/stein
  Zuul, 2020-07-21 (2 files changed, -22/+0)
* Remove stale nested backport from InstancePCIRequests
  Dan Smith, 2020-07-09 (2 files changed, -22/+0)

  Sometime in 2015, we removed the hard-coded obj_relationships mapping from parent objects which facilitated semi-automated child version backports. This was replaced by a manifest-of-versions mechanism where the client reports all the supported objects and versions during a backport request to conductor.

  The InstancePCIRequests object isn't technically an ObjectListBase, despite acting like one, and thus wasn't using the obj_relationships. Because of this, it was doing its own backporting of its child object, which was not removed in the culling of the static mechanism. Because we now no longer need to worry about sub-object backport chaining, when version 1.2 was added no backport rule was added, and since the object does not call the base class' generic routine, proper backporting of the child object was not happening.

  All we need to do is remove the override to allow the base infrastructure to do the work.

  Change-Id: Id610a24c066707de5ddc0507e7ef26c421ba366c
  Closes-Bug: #1868033
  (cherry picked from commit d3ca7356860d64555eef6f5138501cb38f50ecc8)
  (cherry picked from commit e61d0025303b33ef00aa95ebd934f6121d320cbb)
  (cherry picked from commit 38ee1f39423e3af12ddc04de6808ff86bfbca645)
* Merge "Check cherry-pick hashes in pep8 tox target" into stable/stein
  Zuul, 2020-07-14 (2 files changed, -0/+43)
* Check cherry-pick hashes in pep8 tox target
  Dan Smith, 2020-07-08 (2 files changed, -0/+43)

  NOTE(elod.illes): This is a combination of 2 commits: the cherry-pick hash checker script and a fix for the script.

  1. Check cherry-pick hashes in pep8 tox target

     This adds a tools/ script that checks any cherry-picked hashes on the current commit (or a provided commit) to make sure that all the hashes exist on at least master or stable/.* branches. This should help avoid accidentally merging stable backports where one of the hashes along the line has changed due to conflicts.

  2. Fix cherry-pick check for merge patch

     The cherry-pick check script validates the proposed patch's commit message. If a patch is not on top of the given branch then Zuul rebases it to the top and the patch becomes a merge patch. In this case the script validates the merge patch's commit message instead of the original patch's commit message and fails. This fix selects the parent of the patch if it is a merge patch.

  (cherry picked from commit c7c48c6f52c9159767b60a4576ba37726156a5f7)
  (cherry picked from commit 02f213b831d8e1d4a1d8ebb18d1260571fe20b84)
  (cherry picked from commit 7a5111ba2943014b6fd53a5fe7adcd9bc445315e)
  Change-Id: I4afaa0808b75cc31a8dd14663912c162281a1a42
  (cherry picked from commit aebc829c4e0d39a160eaaa5ad949c1256c8179e6)
  (cherry picked from commit 5cacfaab82853241022d3a2a0734f82dae59a34b)
  (cherry picked from commit d307b964ce380f2fa57debc6c4c8346ac8736afe)
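The first half of the checker's job, collecting the hashes to verify, can be sketched as below. The helper is hypothetical (not the actual tools/ script); the real tool would then confirm each extracted hash exists on master or a stable/.* branch, e.g. via something like `git branch -a --contains <sha>`.

```python
import re

# Matches the conventional trailer '(cherry picked from commit <40-hex-sha>)'.
CHERRY_PICK_RE = re.compile(r'cherry picked from commit ([0-9a-f]{40})')


def cherry_picked_hashes(commit_message):
    """Extract every cherry-picked commit hash from a commit message, in
    order of appearance, so each can be checked for existence."""
    return CHERRY_PICK_RE.findall(commit_message)
```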
* Guard against missing image cache directory
  Balazs Gibizer, 2020-07-10 (2 files changed, -34/+61)

  The original fix for bug 1878024 missed an edge case where on a fresh hypervisor the image cache directory hasn't been created yet; that directory is only created when the first image is downloaded. This patch makes sure that if the cache dir hasn't been created yet then 0 disk is reserved for the cache usage, instead of raising and logging an exception.

  Change-Id: Id1bbc955a9099de1abc11b9063fe177896646d03
  Related-Bug: #1878024
  Closes-Bug: #1884214
  (cherry picked from commit a85753778f710f03616a682d294f8f638dea6baf)
  (cherry picked from commit a6a48e876c9c4f9a218de882270ef098d46bf767)
  (cherry picked from commit f26823c7805c96d155de6837a08ec34271117515)
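The guard amounts to treating a not-yet-created cache directory as zero usage. A sketch under stated assumptions (hypothetical helper name, simple recursive size walk rather than the libvirt driver's actual accounting):

```python
import os


def image_cache_reserved_bytes(cache_dir):
    """Return the disk usage of the image cache in bytes.

    On a fresh hypervisor the cache directory is only created when the
    first image is downloaded, so a missing directory simply means zero
    usage rather than an error.
    """
    if not os.path.exists(cache_dir):
        return 0
    total = 0
    for root, _dirs, files in os.walk(cache_dir):
        for name in files:
            total += os.stat(os.path.join(root, name)).st_size
    return total
```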
* Merge "Add admin doc information about image cache resource accounting" into stable/stein
  Zuul, 2020-07-08 (2 files changed, -0/+92)
* Add admin doc information about image cache resource accounting
  Dan Smith, 2020-07-06 (2 files changed, -0/+92)

  This adds some details to the image cache page in the admin docs about how image cache disk usage is (not) considered in the scheduler disk space calculation. Workarounds and mitigation strategies are provided. Basically this patch is now a partial backport of 829ccbe2bbc4e80e92baf7553401d7925d883732 squashed into 57702ad4849a4ffc0c02df97269c8d7f44b57bae.

  Change-Id: I7f40f167cea073a73cf249a9adfd73e1187c031b
  Related-Bug: #1878024
  (cherry picked from commit ab3fab03222a5e82ca96ed2a4e959ac7faa3f4fe)
  (cherry picked from commit 57702ad4849a4ffc0c02df97269c8d7f44b57bae)
  (cherry picked from commit 147f1c753a225a09485ddd2d575d489ad67428ae)
* Merge "Reserve DISK_GB resource for the image cache" into stable/stein
  Zuul, 2020-07-08 (7 files changed, -1/+166)
* Reserve DISK_GB resource for the image cache
  Balazs Gibizer, 2020-07-06 (7 files changed, -1/+166)

  If the nova compute image cache is on the same disk as the instance directory then the images in the cache will consume the same disk resource as the nova instances, which can lead to disk overallocation.

  This patch adds a new config option, [workarounds]/reserve_disk_resource_for_image_cache. If it is set, the libvirt driver will reserve DISK_GB resources in placement for the actual size of the image cache. This is a low complexity solution with known limitations:

  * The reservation is updated long after the image is downloaded.
  * This code allows the reservation to push the provider into negative available resource if the reservation + allocations exceed the total inventory.

  Conflicts:
    nova/conf/workarounds.py due to Ie38aa625dff543b5980fd437ad2febeba3b50079 ("Add support for translating CPU policy extra specs, image meta") not being in stable/stein
    nova/tests/unit/virt/libvirt/test_driver.py due to I25d70aa09080b22d1bfa0aa097f0a114de8bf15a ("Add reshaper for PCPU") not being in stable/stein

  Change-Id: If874f018ea996587e178219569c2903c2ee923cf
  Closes-Bug: #1878024
  (cherry picked from commit 89fe504abfbd41a9c37f9b544c0ce98b23b45799)
  (cherry picked from commit 968981b5853724a8225cfc16b04ea83b4f485a9a)
  (cherry picked from commit 90c70f04d777e444eb8e2f0bccb8aa616e69dc66)
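The reservation arithmetic can be sketched as below. The helper name and the combination with a statically configured reservation are assumptions for illustration; the real driver reports the result through placement inventory.

```python
import math


def reserved_disk_gb(cache_bytes, static_reserved_gb=0):
    """DISK_GB to reserve: any statically configured reservation plus the
    image cache size rounded up to whole GiB (the cache consumes real disk,
    so partial GiBs must round up, not down)."""
    GiB = 1024 ** 3
    return static_reserved_gb + int(math.ceil(cache_bytes / GiB))
```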
* Merge "libvirt: avoid cpu check at s390x arch" into stable/stein
  Zuul, 2020-07-08 (2 files changed, -0/+32)
* libvirt: avoid cpu check at s390x arch
  jichenjc, 2020-06-10 (2 files changed, -0/+32)

  nova-compute calls check_can_live_migrate_destination when doing a live migration and compares the CPU models. However, the following error indicates that the CPU comparison is not supported on the s390x arch:

    URI qemu:///system does not support full set of host capabilities: this function is not supported by the connection driver: cannot compute baseline CPU of s390x architecture

  The improvements section of https://www.libvirt.org/news.html indicates that the comparison for s390x was only added in libvirt v5.9.0, so the workaround is to skip the check and let the migration proceed.

  Change-Id: I253f4f305ecf8b5331212be87caef41f2ebb747e
  Closes-Bug: #1854126
  (cherry picked from commit 011cce6adb30c50737b45ec02e161fd71ab5b3e3)
  (cherry picked from commit 7d7a3ba70a36f05b07ee471490f1b587663348f8)
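The workaround is a simple architecture gate before the comparison. A minimal sketch with a hypothetical helper name:

```python
def should_compare_cpu(guest_arch):
    """Skip the CPU model comparison on s390/s390x, where libvirt (before
    v5.9.0) cannot compute a baseline CPU, and let the migration proceed."""
    return guest_arch not in ('s390', 's390x')
```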
* Merge "Revert "Make greande jobs n-v for EM and oldest stable"" into stable/stein
  Zuul, 2020-07-06 (1 file changed, -4/+4)
* Revert "Make greande jobs n-v for EM and oldest stable"
  Elod Illes, 2020-07-06 (1 file changed, -4/+4)

  This reverts commit b05bf09b2d50bd77d33a31babe204ba7969f0221.

  NOTE(elod.illes): devstack in Rocky is fixed and grenade jobs started to pass. We can enable grenade jobs again to restore testing as it was before.

  Change-Id: I2d8bbffb3f58f5d0190e970ff860162c122a9b0e
* Merge "libvirt: Don't delete disks on shared storage during evacuate" into stable/stein
  Zuul, 2020-07-06 (5 files changed, -95/+195)
* libvirt: Don't delete disks on shared storage during evacuate
  Matthew Booth, 2020-06-10 (5 files changed, -95/+195)

  When evacuating an instance between compute hosts on shared storage, during the rebuild operation we call spawn() on the destination compute. spawn() currently assumes that it should clean up all resources on failure, which results in user data being deleted in the evacuate case.

  This change modifies spawn in the libvirt driver such that it only cleans up resources it created.

  Conflicts:
    nova/tests/functional/libvirt/base.py
    nova/tests/functional/regressions/test_bug_1595962.py
    nova/virt/libvirt/driver.py

  Changes:
    nova/tests/functional/libvirt/test_pci_sriov_servers.py

  NOTE(lyarwood): Conflicts on stable/stein due to the following:
  - nova/tests/functional/libvirt/base.py: I5895865751e8e1fb08b3515bc9f8119cfcb9f35e is not present on stable/stein.
  - nova/tests/functional/regressions/test_bug_1595962.py: I8e5a122cc547222249973cf595d90c2d8d5658d4 is not present on stable/stein.
  - nova/virt/libvirt/driver.py: I725deb0312c930087c9e60115abe68b4e06e6804 and I6929c588dd2e0e805f2e30b2e30d29967469d756 are not present on stable/stein.

  NOTE(lyarwood): Changes on stable/stein due to I5895865751e8e1fb08b3515bc9f8119cfcb9f35e ("libvirt: Mock libvirt'y things in setUp") not being present on stable/stein.

  Co-Authored-By: Lee Yarwood <lyarwood@redhat.com>
  Closes-Bug: #1550919
  Change-Id: I764481966c96a67d993da6e902dc9fc3ad29ee36
  (cherry picked from commit 497360b0ea970f1e68912be8229ef8c3f5454e9e)
  (cherry picked from commit 8b48ca672d9c0eb108c71b7f9f3f089d9ecf688a)
  (cherry picked from commit 1a320f2a0e0918de6afcce5cf23b7de178ec3a49)
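The "only clean up what you created" pattern behind this fix can be sketched generically. The `spawn` function and its arguments below are hypothetical, not the libvirt driver's real signature: each creation step is recorded, and on failure only the recorded resources are rolled back, so pre-existing disks on shared storage survive.

```python
def spawn(steps, cleanup):
    """Run creation steps in order, recording each created resource.

    On failure, roll back only the resources created by THIS invocation
    (in reverse order) and re-raise; anything that existed beforehand,
    e.g. instance disks on shared storage during an evacuate, is left
    untouched.
    """
    created = []
    try:
        for name, step in steps:
            step()
            created.append(name)
    except Exception:
        for name in reversed(created):
            cleanup(name)
        raise
    return created
```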
* Make greande jobs n-v for EM and oldest stable
  Ghanshyam Mann, 2020-06-24 (1 file changed, -4/+4)

  As discussed in the ML thread [1], we are going to make the grenade jobs non-voting for all Extended Maintenance stables and the oldest stable. The grenade jobs are failing now, and it might take time to fix them, if they can be fixed at all. Once the jobs are working it is up to each project team to bring them back to voting or keep them non-voting. If the jobs keep failing consistently and no one fixes them, removing the non-voting jobs in the future is also fine.

  [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015499.html

  StableOnly
  Depends-On: https://review.opendev.org/#/c/737384/
  Depends-On: https://review.opendev.org/#/c/737826/
  Change-Id: I4e03295548af5272a02ca81162fcd685c7986470
* Add functional test for bug 1550919
  Matthew Booth, 2020-06-02 (1 file changed, -0/+661)

  This adds a failing test, which we fix in change I76448196. An earlier version of this change was previously merged as change I5619728d. This was later reverted, as it was failing in the gate. However, on inspection these failures seem to have been simply timeouts due to load.

  Changes from the previous version:

  - Increase the timeouts which were previously triggering, and serialise server creation to reduce the chance of this recurring.
  - Add an LVM test, which highlights the requirement to flag the creation of ephemeral and swap disks.
  - Add a Qcow2 test, essentially the same as the Flat test, ensuring coverage of the most common backends.
  - Each test now uses a separate instances_path, allowing for cleanup without racing against other active tests.
  - Some nits addressed.

  For the time being this test does not make use of the recently improved nova.tests.functional.libvirt.base.ServersTestBase class, to ease backports. Future changes should be made to use this class, removing some of the common setUp logic from _LibvirtEvacuateTest.

  NOTE(lyarwood): The following changes are required for stable/stein:
  * [libvirt]/rbd_user is now set within LibvirtRbdEvacuateTest due to I361af845d6a733618ecd056aa7df973191184ae9 not being present.
  * CinderFixtureNewAttachFlow is used by all tests due to I6a777b4b7a5729488f939df8c40e49bd40aec3dd not being present.
  * _get_vcpu_total is used instead of _get_vcpu_available due to I98efdc61fd456fc7f9e1a85238c9ef9bc04a1252 not being present.

  Co-Authored-By: Lee Yarwood <lyarwood@redhat.com>
  Related-Bug: #1550919
  Change-Id: I1062b3e74382734edbb2142a09ff0073c66af8db
  (cherry picked from commit 90e0e874bde38937380d09ab27a7defbb5475cc2)
  (cherry picked from commit 6ccd13f8aeeb97c2139c1abc93cb976fd57d57dd)
  (cherry picked from commit 172eb21dee1d93b140c2b691cb8dfbc68b721bfe)
* fix scsi disk unit number of the attaching volume when cdrom bus is scsi
  Kevin Zhao, 2020-05-27 (2 files changed, -5/+51)

  With the image meta property hw_cdrom_bus=scsi in virtio-scsi mode, the cdrom also needs a disk address, so the disk address must be accounted for when calling the function that returns the next unit of the scsi controller.

  Closes-Bug: #1867075
  Change-Id: Ifd8b249de3e8f96fa13db252f0abe2b1bd950de0
  Signed-off-by: Kevin Zhao <kevin.zhao@linaro.org>
  (cherry picked from commit c8d6767cf8baaf3cc81496c83db10c8ae72fce06)
  (cherry picked from commit 11b2b7f0b3a8c09216cd8ebfea8b4cd059605290)
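The unit-number arithmetic behind the fix: every device with an address on the SCSI controller, including a hw_cdrom_bus=scsi cdrom, occupies a unit, so the next free unit must be computed over all of them. A hypothetical sketch:

```python
def next_scsi_unit(existing_units):
    """Return the lowest free unit number on a SCSI controller.

    existing_units must include EVERY device holding an address on the
    controller -- missing the cdrom's unit is exactly the kind of bug that
    hands out an already-occupied address to an attaching volume.
    """
    taken = set(existing_units)
    unit = 0
    while unit in taken:
        unit += 1
    return unit
```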
* Merge "Make quotas respect instance_list_per_project_cells" into stable/stein
  Zuul, 2020-05-27 (2 files changed, -4/+23)
* Make quotas respect instance_list_per_project_cells
  Mohammed Naser, 2020-05-21 (2 files changed, -4/+23)

  This option was introduced in order to limit queries to only the cells which belong to the project we're interacting with. However, the nova quota code was not respecting it, so it always tried to get the cells assigned to a project even with the option disabled.

  This patch makes the quota code respect that option and adds testing to ensure that enabling the option means the code doesn't search all cells but only the ones for this project.

  Closes-Bug: #1878979

  Conflicts:
    nova/tests/functional/db/test_quota.py

  NOTE(melwitt): The conflict and difference (import mock) from the cherry-picked change are because of the following changes not in Stein: Iaffb27bd8c562ba120047c04bb62619c0864f594, I3ff39d5ed99a68ad8678e5ff62b343f3018b4768

  Change-Id: I2e0d48e799e70d550f912ad8a424c86df3ade3a2
  (cherry picked from commit ab16946885e68ccdef7222a71cc0ad6f92b10de7)
  (cherry picked from commit fa2bfac862e9e1db5a98c64a56a933987f857903)
  (cherry picked from commit 91160c423839fc8a385dd28c96f94c2b2cdb02cd)
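The cell-selection logic being fixed can be sketched as below. The function and the `project_cell_map` lookup are hypothetical simplifications of Nova's instance-mapping query; the point is that the per-project path is only taken when the option is enabled.

```python
def cells_to_count(project_id, all_cells, project_cell_map,
                   per_project_cells=False):
    """Pick which cells quota counting should scan.

    With instance_list_per_project_cells enabled, only scan cells that
    actually contain mappings for this project; otherwise scan every
    cell (the buggy code did the per-project lookup unconditionally).
    """
    if per_project_cells:
        return project_cell_map.get(project_id, [])
    return all_cells
```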
* Merge "Update scheduler instance info at confirm resize" into stable/stein
  Zuul, 2020-05-22 (3 files changed, -19/+16)
* Update scheduler instance info at confirm resize
  Balazs Gibizer, 2020-05-20 (3 files changed, -19/+16)

  When a resize is confirmed, the instance does not belong to the source compute any more. In the past the scheduler instance info was only updated by the _sync_scheduler_instance_info periodic task, which meant server boots with anti-affinity did not consider the source host. Now, at the end of the confirm_resize call, the compute also updates the scheduler about the move.

  Conflicts:
    nova/tests/unit/compute/test_compute_mgr.py due to Ib50b6b02208f5bd2972de8a6f8f685c19745514c and Ia6d8a7909081b0b856bd7e290e234af7e42a2b38 missing from stable/stein

  Change-Id: Ic50e72e289b56ac54720ad0b719ceeb32487b8c8
  Closes-Bug: #1869050
  (cherry picked from commit 738110db7492b1360f5f197e8ecafd69a3b141b4)
  (cherry picked from commit e8b3927c92d29c74fd0c79b5a51b7a34e9d66236)
  (cherry picked from commit e34b375a6161b15d92beba64fa281f40634ffeab)
* Merge "Reproduce bug 1869050" into stable/stein
  Zuul, 2020-05-22 (1 file changed, -0/+47)
* Reproduce bug 1869050
  Balazs Gibizer, 2020-05-20 (1 file changed, -0/+47)

  This patch adds a functional test that reproduces the bug where stale scheduler instance info prevents booting a server with anti-affinity.

  Change-Id: If485330b48ae2671651aafabc93f92a8999f7ca2
  Related-Bug: #1869050
  (cherry picked from commit b52c483308f32f3744dd8a5df424b9f518c13155)
  (cherry picked from commit 016eeec9841116bbbbc6c3019850c18012e3781a)
  (cherry picked from commit 66e4d8218133a5e4a68f68a3017446cb585675c4)
* Revert "nova shared storage: rbd is always shared storage"
  hutianhao27, 2020-05-11 (2 files changed, -5/+0)

  This reverts commit 05b7f63a42e3512da6fe45d2e6639fb47ed8102b.

  The _is_storage_shared_with method is specifically checking if the instance directory is shared. It is not checking if the actual instance disks are shared, and as a result assumptions cannot be made based on the value of images_type.

  Closes-Bug: #1824858
  Change-Id: I52293b6ce3e1ce77fa31b382d0067fb3bc68b21f
  (cherry picked from commit 404932f82130445472837095b3ad9089c75e2660)
  (cherry picked from commit 890882ebbf74db14a7c1904cca96cd7f5907493b)
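For context, an instance-directory sharedness check of the kind described above typically works by probing with a uniquely named file. The sketch below is generic and hypothetical (the real driver coordinates the existence check across two hosts during migration); note it says nothing about where the disks themselves live (e.g. in RBD), which is why images_type cannot be used as a shortcut.

```python
import os
import uuid


def is_instance_dir_shared(local_dir, file_exists_on_other_host):
    """Probe whether local_dir is on storage shared with another host.

    Drops a uniquely named file into the local instance directory, asks
    the other host (via the supplied callable) whether it can see it,
    then removes the probe file.
    """
    token = os.path.join(local_dir, '.shared-check-%s' % uuid.uuid4())
    with open(token, 'w'):
        pass
    try:
        return bool(file_exists_on_other_host(token))
    finally:
        os.unlink(token)
```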
* Merge "Add retry to cinder API calls related to volume detach" into stable/stein
  Zuul, 2020-04-30 (2 files changed, -0/+108)
| * | Add retry to cinder API calls related to volume detachFrancois Palin2020-04-242-0/+108
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | When shutting down an instance for which a volume needs to be deleted, if the cinder RPC timeout expires before the cinder volume driver terminates the connection, an unknown cinder exception is received and the volume is not removed. This fix adds a retry mechanism directly to the cinder API calls attachment_delete, terminate_connection, and detach. Change-Id: I3c9ae47d0ceb64fa3082a01cb7df27faa4f5a00d Closes-Bug: #1834659 (cherry picked from commit 01c334cbdd859f4e486ac2c369a4bdb3ec7709cc) (cherry picked from commit 118ee682571a4bd41c8009dbe2e47fdd1f85a630)
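The retry mechanism described above can be sketched like this. The exception class, retry count, and delay are placeholders, not Nova's actual values (Nova wraps real cinderclient exceptions); the sketch only shows the shape of retrying a transient failure:

```python
import time


class ClientException(Exception):
    """Hypothetical stand-in for the cinder client exception being retried."""


def with_retries(func, retries=3, delay=0.0):
    """Wrap a cinder API call so transient failures are retried.

    Retry count and delay are illustrative; the real change applies this
    to attachment_delete, terminate_connection, and detach.
    """
    def wrapper(*args, **kwargs):
        last_exc = None
        for attempt in range(1, retries + 1):
            try:
                return func(*args, **kwargs)
            except ClientException as exc:
                last_exc = exc
                if attempt < retries:
                    time.sleep(delay)
        raise last_exc
    return wrapper


calls = {'n': 0}


def flaky_terminate_connection(volume_id):
    # Simulate two RPC timeouts before the call finally succeeds.
    calls['n'] += 1
    if calls['n'] < 3:
        raise ClientException('timed out')
    return 'terminated %s' % volume_id


result = with_retries(flaky_terminate_connection)('vol-1')
```

With the wrapper, the two simulated timeouts are absorbed and the third attempt completes the detach instead of leaving the volume behind.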
* | | Merge "Reject boot request for unsupported images" into stable/stein19.2.0Zuul2020-04-244-0/+52
|\ \ \
| * | | Reject boot request for unsupported imagesBrian Rosmaita2020-04-234-0/+52
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Nova has never supported direct booting of an image of an encrypted volume uploaded to Glance via the Cinder upload-volume-to-image process, but instead of rejecting such a request, an 'active' but unusable instance is created. This patch allows Nova to use image metadata to detect such an image and reject the boot request. Change-Id: Idf84ccff254d26fa13473fe9741ddac21cbcf321 Related-bug: #1852106 Closes-bug: #1863611 (cherry picked from commit 963fd8c0f956bdf5c6c8987aa5d9f836386fd5ed) (cherry picked from commit 240d0309023fcaf20df44f819e9b3e14af97f526)
* | | | Merge "Make RBD imagebackend flatten method idempotent" into stable/steinZuul2020-04-242-2/+29
|\ \ \ \
| * | | | Make RBD imagebackend flatten method idempotentVladyslav Drok2020-04-232-2/+29
| | |/ / | |/| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | If glance and nova are both configured with RBD backend, but glance does not return location information from the API, nova will fail to clone the image from glance pool and will download it from the API. In this case, image will be already flat, and subsequent flatten call will fail. This commit makes flatten call idempotent, so that it ignores already flat images by catching ImageUnacceptable when requesting parent info from ceph. NOTE(lyarwood): Conflict as I361af845d6a733618ecd056aa7df973191184ae9 is not present in stable/stein. Conflicts: nova/virt/libvirt/imagebackend.py Closes-Bug: 1860990 Change-Id: Ia6c184c31a980e4728b7309b2afaec4d9f494ac3 (cherry picked from commit 65825ebfbd58920adac5e8594891eec8e9cec41f) (cherry picked from commit 03d59e289369df4980bc1e7350e7f52a6f6aa828)
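The idempotent-flatten change above can be sketched as follows. The driver class and method names are toy stand-ins for Nova's RBD image backend; the real code catches `ImageUnacceptable` raised when requesting parent info from Ceph for an image that has no parent:

```python
class ImageUnacceptable(Exception):
    """Stand-in for nova.exception.ImageUnacceptable."""


class FakeRbdDriver:
    """Toy RBD driver: parent_info raises for already-flat images,
    mirroring the behaviour the fix relies on."""

    def __init__(self, flat):
        self.flat = flat
        self.flatten_calls = 0

    def parent_info(self, name):
        if self.flat:
            raise ImageUnacceptable('%s has no parent' % name)
        return ('pool', 'parent-image', 'snap')

    def do_flatten(self, name):
        self.flatten_calls += 1
        self.flat = True


def flatten(driver, name):
    """Idempotent flatten: silently skip images that are already flat."""
    try:
        driver.parent_info(name)
    except ImageUnacceptable:
        return  # already flat (e.g. downloaded via the API), nothing to do
    driver.do_flatten(name)


already_flat = FakeRbdDriver(flat=True)
flatten(already_flat, 'img')      # no-op instead of an error

cloned = FakeRbdDriver(flat=False)
flatten(cloned, 'img')            # actually flattens
flatten(cloned, 'img')            # second call is a no-op
```

This matches the commit's intent: a flat image downloaded through the Glance API no longer causes the subsequent flatten call to fail.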
* | | | Merge "Add config option for neutron client retries" into stable/steinZuul2020-04-244-1/+37
|\ \ \ \
| * | | | Add config option for neutron client retriesmelanie witt2020-04-224-1/+37
| |/ / / | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Nova can occasionally fail to carry out server actions which require calls to the neutron API if haproxy closes an idle connection just as an incoming request attempts to re-use it while it is being torn down. In order to be more resilient to this scenario, we can add a config option for the neutron client to retry requests, similar to our existing CONF.cinder.http_retries and CONF.glance.num_retries options. Because we create our neutron client [1] using a keystoneauth1 session [2], we can set the 'connect_retries' keyword argument to let keystoneauth1 handle connection retries. Closes-Bug: #1866937 [1] https://github.com/openstack/nova/blob/57459c3429ce62975cebd0cd70936785bdf2f3a4/nova/network/neutron.py#L226-L237 [2] https://docs.openstack.org/keystoneauth/latest/api/keystoneauth1.session.html#keystoneauth1.session.Session Change-Id: Ifb3afb13aff7e103c2e80ade817d0e63b624604a (cherry picked from commit 0e34ed9733e3f23d162e3e428795807386abf1cb) (cherry picked from commit 71971c206292232fff389dedbf412d780f0a557a)
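The effect of `connect_retries` described above can be sketched with a small simulation. This is not keystoneauth1's implementation (the real `Session` also applies a backoff between attempts); it only illustrates why retrying on connection failure absorbs the haproxy idle-close race:

```python
class ConnectFailure(Exception):
    """Stand-in for keystoneauth1.exceptions.connection.ConnectFailure."""


def request_with_retries(do_request, connect_retries=3):
    """Retry a request when the connection itself fails, up to the
    configured number of retries (simplified sketch of what
    keystoneauth1's Session does with connect_retries)."""
    attempts = connect_retries + 1  # the initial try plus the retries
    for i in range(attempts):
        try:
            return do_request()
        except ConnectFailure:
            if i == attempts - 1:
                raise  # retries exhausted, surface the failure


state = {'n': 0}


def flaky_request():
    # First attempt hits a connection torn down by haproxy; the
    # retry lands on a fresh connection and succeeds.
    state['n'] += 1
    if state['n'] == 1:
        raise ConnectFailure('connection reset by peer')
    return 200


status = request_with_retries(flaky_request, connect_retries=3)
```

One transient connection reset is retried transparently, so the server action completes instead of erroring out.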