Merge ... into stable/victoria
NOTE(sbauza): Stable policy allows us to proactively merge a backport without waiting for the parent patch to be merged (exception to rule #4 in [1]). Marking [stable-only] in order to silence nova-tox-validate-backport.
[1] https://docs.openstack.org/project-team-guide/stable-branches.html#appropriate-fixes
Conflicts vs wallaby in:
nova/conf/compute.py
nova/tests/unit/virt/test_images.py
Related-Bug: #1996188
Change-Id: I5a399f1d3d702bfb76c067893e9c924904c8c360
Merge ... into stable/victoria
When cinder-api runs behind a load balancer (e.g. haproxy), the load
balancer can return 504 Gateway Timeout when cinder-api does not
respond within timeout. This change ensures nova retries deleting
a volume attachment in that case.
Also this change makes nova ignore 404 in the API call. This is
required because cinder might continue deleting the attachment even if
the load balancer returns 504. This also helps us in the situation
where the volume attachment was accidentally removed by users.
Conflicts:
nova/tests/unit/volume/test_cinder.py
nova/volume/cinder.py
NOTE(melwitt): The conflicts are due to the following changes not in
Victoria:
* I23bb9e539d08f5c6202909054c2dd49b6c7a7a0e
(Remove six.text_type (1/2))
* I779bd1446dc1f070fa5100ccccda7881fa508d79
(Remove six.text_type (2/2))
Closes-Bug: #1978444
Change-Id: I593011d9f4c43cdae7a3d53b556c6e2a2b939989
(cherry picked from commit 8f4b740ca5292556f8e953a30f2a11ed4fbc2945)
(cherry picked from commit b94ffb1123b1a6cf0a8675e0d6f1072e9625f570)
(cherry picked from commit 14f9b7627e8a48f546e2f1c79d4336e1e4923501)
(cherry picked from commit 9b1c078112f11eafbd8e174efbd0e0f9d2c951ee)
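The retry-on-504 / ignore-404 behaviour described above can be sketched as follows. This is a minimal illustration, not nova's actual code: `GatewayTimeout` and `NotFound` are stand-ins for the cinderclient exception classes, and the retry count is an assumed parameter.

```python
# Hypothetical sketch: retry the attachment delete when the load
# balancer returns 504, and treat 404 as success because cinder may
# have finished the delete anyway (or a user removed the attachment).
import time


class GatewayTimeout(Exception):
    """Stand-in for a 504 raised by the client library."""


class NotFound(Exception):
    """Stand-in for a 404 raised by the client library."""


def delete_attachment_with_retry(delete, attachment_id, retries=3, delay=0):
    """Call delete(attachment_id), retrying on 504 and ignoring 404."""
    for attempt in range(retries + 1):
        try:
            delete(attachment_id)
            return
        except NotFound:
            # The attachment is already gone; nothing left to do.
            return
        except GatewayTimeout:
            if attempt == retries:
                raise
            time.sleep(delay)
```

The key design point is that 404 is swallowed rather than retried, since a 504 only means the response was lost, not that the delete failed.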
Merge ... into stable/victoria
When cinderclient receives an error response from the API, it raises one
of the exceptions from the cinderclient.exceptions module instead of
the cinderclient.apiclient.exceptions module.
This change fixes the wrong exception used to detect 500 error from
cinder API, so that API calls to detach volumes are properly retried.
Closes-Bug: #1944043
Change-Id: I741cb6b29a67da8c60708c6251c441d778ca74d0
(cherry picked from commit f6206269b71d9bcaf65aa0313c21cc6b75a54fb3)
(cherry picked from commit 513241a7e4f3f3246c75e66a910db6c864c3981e)
(cherry picked from commit 7168314dd125fca8d1cecac035477ea0314ab828)
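The pitfall fixed above can be shown with a generic sketch (not nova's code): two modules can each define an exception class with the same name, and an `except` clause naming the class from the wrong module catches nothing. The stand-in namespaces below play the roles of `cinderclient.exceptions` and `cinderclient.apiclient.exceptions`.

```python
# Two distinct exception classes that happen to share a name, as with
# cinderclient.exceptions vs cinderclient.apiclient.exceptions.
import types

real_exceptions = types.SimpleNamespace(
    ClientException=type("ClientException", (Exception,), {}))
wrong_exceptions = types.SimpleNamespace(
    ClientException=type("ClientException", (Exception,), {}))


def detach(raiser, catch_from):
    """Return True if the error was caught (and could be retried)."""
    try:
        raiser()
    except catch_from.ClientException:
        return True
    except Exception:
        return False
    return None
```

Catching `wrong_exceptions.ClientException` silently misses the real error, which is exactly why the 500 detection never triggered a retry.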
openstacksdk-functional-devstack is broken on stable/wallaby.
To fix this we need a fix on osc stable/wallaby, which is merged
- https://review.opendev.org/c/openstack/python-openstackclient/+/872341
but that is not enough to make this job green. We would need to release
osc for stable/wallaby, which we cannot do as stable/wallaby is in the
EM phase
- https://review.opendev.org/c/openstack/releases/+/872382
One option is to pin the sdk in this job instead of using master, but as
sdk stable branches are not a thing, it does not make sense to test that
way. Better to stop testing the sdk on stable/wallaby.
ref: https://lists.openstack.org/pipermail/openstack-discuss/2023-February/031996.html
Depends-On: https://review.opendev.org/c/openstack/tempest/+/872964
Change-Id: I36086bd185965b34ab00b711c5de2e54b5aeb1ef
(cherry picked from commit 03231450b62755d9dcc42f95bcacf57b23a72099)
In response to bug 1927677 we added a workaround to
NovaProxyRequestHandler to respond with a 400 Bad Request if an open
redirect is attempted:
Ie36401c782f023d1d5f2623732619105dc2cfa24
I95f68be76330ff09e5eabb5ef8dd9a18f5547866
Recently in python 3.10.6, a fix has landed in cpython to respond with
a 301 Moved Permanently to a sanitized URL that has had extra leading
'/' characters removed.
This breaks our existing unit tests which assume a 400 Bad Request as
the only expected response.
This adds handling of a 301 Moved Permanently response and asserts that
the redirect location is the expected sanitized URL. Doing this instead
of checking for a given python version will enable the tests to continue
to work if and when the cpython fix gets backported to older python
versions.
While updating the tests, the opportunity was taken to commonize the
code of two unit tests that were nearly identical.
Conflicts:
nova/tests/unit/console/test_websocketproxy.py
NOTE(melwitt): The conflict is because change
I58b0382c86d4ef798572edb63d311e0e3e6937bb
(Refactor and rename test_tcp_rst_no_compute_rpcapi) is not in
Victoria.
Related-Bug: #1927677
Closes-Bug: #1986545
Change-Id: I27441d15cc6fa2ff7715ba15aa900961aadbf54a
(cherry picked from commit 15769b883ed4a86d62b141ea30d3f1590565d8e0)
(cherry picked from commit 4a2b44c7cf55d1d79d5a2dd638bd0def3af0f5af)
(cherry picked from commit 0e4a257e8636a979605c614a35e79ba47b74d870)
(cherry picked from commit 3023e162e1a415ddaa70b4b8fbe24b1771dbe424)
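The version-agnostic assertion described above can be sketched as a small predicate. This is illustrative, not the actual test code; the status/location parameters are an assumed simplification of the proxy response.

```python
# Accept either outcome of an attempted open redirect: a 400 Bad Request
# (nova's workaround) or a 301 Moved Permanently to the sanitized URL
# (cpython >= 3.10.6 strips the extra leading '/' characters itself).
def check_open_redirect_response(status, location, sanitized_url):
    """Return True if the proxy rejected or sanitized the redirect."""
    if status == 400:
        return True
    if status == 301:
        # The fixed cpython redirects to the URL with extra '/' removed.
        return location == sanitized_url
    return False
```

Asserting on behaviour rather than on the Python version keeps the test valid if the cpython fix is backported to older releases.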
When migrating/resizing a VM to a destination host whose VIF type
differs from the source VIF type, the operation fails due to an
exception while unplugging the VIFs on the source host after the user
performs a confirmation action.
This change unplugs the vifs in resize_instance and
wraps the call to unplug in confirm with a try/except
block. The call to unplug_vifs in confirm is not removed,
to support rolling upgrades, but a TODO is added to remove
it after the Wallaby release.
Change-Id: I2c195df5fcf844c0587933b5b5995bdca1a3ebed
Closes-Bug: #1895220
(cherry picked from commit 66c7f00e1d9d7c0eebe46eb4b24b2b21f7413789)
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
This change adds a new _post_live_migration_update_host
function that wraps _post_live_migration and ensures
that if we exit due to an exception, instance.host is set
to the destination host.
When we are in _post_live_migration the guest has already
started running on the destination host and we cannot revert.
Sometimes admins or users will hard reboot the instance, expecting
that to fix everything, when the VM enters the error state after
the failed migration. Previously this would end up recreating the
instance on the source node, leading to possible data corruption if
the instance used shared storage.
Change-Id: Ibc4bc7edf1c8d1e841c72c9188a0a62836e9f153
Partial-Bug: #1628606
(cherry picked from commit 8449b7caefa4a5c0728e11380a088525f15ad6f5)
(cherry picked from commit 643b0c7d35752b214eee19b8d7298a19a8493f6b)
(cherry picked from commit 17ae907569e45cc0f5c7da9511bb668a877b7b2e)
(cherry picked from commit 15502ddedc23e6591ace4e73fa8ce5b18b5644b0)
(cherry picked from commit 43c0e40d288960760a6eaad05cb9670e01ef40d0)
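The wrapper's idea can be sketched as below. Names and the `Instance` shape are illustrative, not nova's actual objects: on any failure in the post-migration step, pin `instance.host` to the destination before re-raising, so a later hard reboot lands on the right node.

```python
# Minimal sketch: the guest is already running on the destination, so a
# failure in post-processing must never leave instance.host pointing at
# the source (that would recreate the guest there on hard reboot).
class Instance:
    def __init__(self, host):
        self.host = host

    def save(self):
        pass  # would persist to the database in real code


def post_live_migration_update_host(post_fn, instance, dest_host):
    try:
        post_fn(instance)
    except Exception:
        instance.host = dest_host
        instance.save()
        raise
```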
Adds a regression test (reproducer) for post live migration
failing at the destination; possible causes include failing to get
instance network info or block device info.
Changes:
- _live_migrate in _integrated_helpers now returns the server
Related-Bug: #1628606
Change-Id: I48dbe0aae8a3943fdde69cda1bd663d70ea0eb19
(cherry picked from commit a20baeca1f5ebb0dfe9607335a6986e9ed0e1725)
(cherry picked from commit 74a618a8118642c9fd32c4e0d502d12ac826affe)
(cherry picked from commit 71e5a1dbcc22aeaa798d3d06ce392cf73364b8db)
(cherry picked from commit 5efcc3f695e02d61cb8b881e009308c2fef3aa58)
(cherry picked from commit ed1ea71489b60c0f95d76ab05f554cd046c60bac)
Ignore the instance task state and continue with VM evacuation.
Closes-Bug: #1978983
Change-Id: I5540df6c7497956219c06cff6f15b51c2c8bc29d
(cherry picked from commit db919aa15f24c0d74f3c5c0e8341fad3f2392e57)
(cherry picked from commit 6d61fccb8455367aaa37ae7bddf3b8befd3c3d88)
(cherry picked from commit 8e9aa71e1a4d3074a94911db920cae44334ba2c3)
(cherry picked from commit 0b8124b99601e1aba492be8ed564f769438bd93d)
This change adds a reproducer test for evacuating
a VM in the powering-off state.
While backporting to stable/victoria, a conflict in
functional.integrated_helpers was fixed:
1 - Added placeholder NOT_SPECIFIED as an object;
it is the default parameter value for _evacuate_server
2 - Updated the _evacuate_server definition
to allow the function to wait until the expected
server state is reached
3 - Added _start_server and _stop_server
4 - Updated the _evacuate_server arguments for test_evacuate
as per the updated _evacuate_server signature
Related-Bug: #1978983
Change-Id: I5540df6c7497956219c06cff6f15b51c2c8bc299
(cherry picked from commit 5904c7f993ac737d68456fc05adf0aaa7a6f3018)
(cherry picked from commit 6bd0bf00fca6ac6460d70c855eded3898cfe2401)
(cherry picked from commit 1e0af92e17f878ce64bd16e428cb3c10904b0877)
(cherry picked from commit b57b0eef218fd7604658842c9277aad782d11b45)
When the nova-compute service starts, by default it attempts to
startup instance configuration states for aspects such as networking.
This is fine in most cases, and makes a lot of sense if the
nova-compute service is just managing virtual machines on a hypervisor.
This is done, one instance at a time.
However, when the compute driver is ironic, the networking is managed
as part of the physical machine lifecycle potentially all the way into
committed switch configurations. As such, there is no need to attempt
to call ``plug_vifs`` on every single instance managed by the
nova-compute process which is backed by Ironic.
Additionally, using ironic tends to mean far more physical machines
per nova-compute service instance than when operating co-installed
with a hypervisor. Often this means a cluster of a thousand machines,
with three controllers, will see thousands of unneeded API calls upon
service start, which elongates the entire process and negatively
impacts operations.
In essence, nova.virt.ironic's plug_vifs call now does nothing,
and merely issues a debug LOG entry when called.
Closes-Bug: #1777608
Change-Id: Iba87cef50238c5b02ab313f2311b826081d5b4ab
(cherry picked from commit 7f81cf28bf21ad2afa98accfde3087c83b8e269b)
(cherry picked from commit eb6d70f02daa14920a2522e5c734a3775ea2ea7c)
(cherry picked from commit f210115bcba3436b957a609cd388a13e6d77a638)
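The resulting driver method is essentially the following sketch (illustrative; the logger setup and signature are assumptions, not nova's exact code):

```python
# With the ironic driver, plug_vifs becomes a debug-logged no-op:
# networking is managed in the physical node lifecycle, so there is no
# per-instance API call to make at service start.
import logging

LOG = logging.getLogger(__name__)


def plug_vifs(instance, network_info):
    """No-op: ironic manages VIF attachment in the node lifecycle."""
    LOG.debug("plug_vifs called for instance %s; nothing to do for "
              "ironic", instance)
```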
Merge ... into stable/victoria
This patch is based upon a downstream patch which came up in discussion
amongst the ironic community when some operators began discussing a case
where resource providers had disappeared from a running deployment with
several thousand baremetal nodes.
Discussion amongst operators and developers ensued and we were able
to determine that this was still an issue in the current upstream code
and that time difference between collecting data and then reconciling
the records was a source of the issue. Per Arun, they have been running
this change downstream and have not seen any recurrence of the issue
since the patch was applied.
This patch was originally authored by Arun S A G, and below is his
original commit message.
An instance could be launched and scheduled to a compute node between
get_uuids_by_host() call and _get_node_list() call. If that happens
the ironic node.instance_uuid may not be None but the instance_uuid
will be missing from the instance list returned by get_uuids_by_host()
method. This is possible because _get_node_list() takes several minutes to return
in large baremetal clusters and a lot can happen in that time.
This causes the compute node to be orphaned and associated resource
provider to be deleted from placement. Once the resource provider is
deleted it is never created again until the service restarts. Since
resource provider is deleted subsequent boots/rebuilds to the same
host will fail.
This behaviour is visible in VMbooter nodes because they constantly
launch and delete instances, thereby increasing the likelihood
of this race condition happening in large ironic clusters.
To reduce the chance of this race condition we call _get_node_list()
first followed by get_uuids_by_host() method.
Change-Id: I55bde8dd33154e17bbdb3c4b0e7a83a20e8487e8
Co-Authored-By: Arun S A G <saga@yahoo-inc.com>
Related-Bug: #1841481
(cherry picked from commit f84d5917c6fb045f03645d9f80eafbc6e5f94bdd)
(cherry picked from commit 0c36bd28ebd05ec0b1dbae950a24a2ecf339be00)
Some GPU drivers like i915 don't provide a name attribute for mdev types.
As we don't use this attribute yet, let's just make sure we support the fact
it's optional.
Change-Id: Ia745ed7095c74e2bfba38379e623a3f81e7799eb
Closes-Bug: #1896741
(cherry picked from commit 416cd1ab18180fc09b915f4517aca03651f01eea)
We have a placement-nova-tox-functional-py38 job defined and run
on the placement gate[1] to run the nova functional tests excluding
api and notification _sample_tests, and db-related tests, but that
job skips those tests via tox_extra_args, which is not the right way
to do it, as we are currently facing an error when tox_extra_args is
included in the tox siblings task
- https://opendev.org/zuul/zuul-jobs/commit/c02c28a982da8d5a9e7b4ca38d30967f6cd1531d
- https://zuul.openstack.org/build/a8c186b2c7124856ae32477f10e2b9a4
Let's define a new tox env which can exclude the required tests
in the stestr command itself.
Conflicts:
tox.ini
NOTE(melwitt): The conflict is because change
I672904e9bfb45a66a82331063c7d49c4bc0439df (Add functional-py39 testing)
is not in Victoria.
[1] https://opendev.org/openstack/placement/src/commit/bd5b19c00e1ab293fc157f4699bc4f4719731c25/.zuul.yaml#L83
Change-Id: I20d6339a5203aed058f432f68e2ec1af57030401
(cherry picked from commit 7b063e4d0518af3e57872bc0288a94edcd33c19d)
(cherry picked from commit 64f5c1cfb0e7223603c06e22a204716919d05294)
(cherry picked from commit baf0d93e0fafcd992d37543aa9df3f6dc248a738)
When tox 'docs' target is called, first it installs the dependencies
(listed in 'deps') in 'installdeps' phase, then it installs nova (with
its requirements) in 'develop-inst' phase. In the latter case 'deps' is
not used so that the constraints defined in 'deps' are not used.
This could lead to failures on stable branches when new packages are
released that break the build. To avoid this, the simplest solution is
to pre-install requirements, i.e. add requirements.txt to 'docs' tox
target.
Conflicts:
tox.ini
NOTE(elod.illes): conflict is due to branch specific upper constraints
file link.
Change-Id: I4471d4488d336d5af0c23028724c4ce79d6a2031
(cherry picked from commit 494e8d7db6f8a3d1a952f657acab353787f57e04)
(cherry picked from commit 1ac0d6984a43cddbb5a2f1a2f7bc115fd83517c9)
(cherry picked from commit 64cc0848be9bf92d79e6fa7b424668d21321d593)
(cherry picked from commit f66a570e946d980162a1313aa5a7e2ce5856a128)
During the PTG the TC discussed the topic and decided to drop the job
completely. Since the latest job configuration broke all stable gates
for nova (older than yoga), this is needed there to unblock our gates.
For dropping the job on master let's wait for the resolution, as the
gate is not broken there; hence the patch is stable-only.
Conflicts:
.zuul.yaml
lower-constraints.txt
NOTE(elod.illes): conflict is due to branch specific settings (job
template names, lower constraints changes).
Change-Id: I514f6b337ffefef90a0ce9ab0b4afd083caa277e
(cherry picked from commit 15b72717f2f3bd79791b913f1b294a19ced47ca7)
(cherry picked from commit ba3c5b81abce49fb86981bdcc0013068b54d4f61)
(cherry picked from commit 327693af402e4dd0c03fe247c4cee7beaedd2852)
Merge ... into stable/victoria
There is a race condition between an incoming resize and an
update_available_resource periodic in the resource tracker. The race
window starts when the resize_instance RPC finishes and ends when the
finish_resize compute RPC finally applies the migration context on the
instance.
In the race window, if the update_available_resource periodic is run on
the destination node, then it will see the instance as being tracked on
this host as the instance.node is already pointing to the dest. But the
instance.numa_topology still points to the source host topology as the
migration context is not applied yet. This leads to a CPU pinning error
if the source topology does not fit the dest topology. It also stops the
periodic task and leaves the tracker in an inconsistent state. The
inconsistent state is only cleaned up after the periodic is run outside
of the race window.
This patch applies the migration context temporarily to the specific
instances during the periodic to keep resource accounting correct.
Change-Id: Icaad155e22c9e2d86e464a0deb741c73f0dfb28a
Closes-Bug: #1953359
Closes-Bug: #1952915
(cherry picked from commit 32c1044d86a8d02712c8e3abdf8b3e4cff234a9c)
(cherry picked from commit 1235dc324ebc1c6ac6dc94da0f45ffffcc546d2c)
(cherry picked from commit 5f2f283a75243d2e2629d3c5f7e5ef4b3994972d)
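"Applying the migration context temporarily" can be sketched with a context manager (illustrative names and attribute shapes, not nova's actual objects): swap in the destination topology while the periodic accounts for the instance, then restore the original.

```python
# Temporarily apply a migration context so resource accounting during
# the periodic sees the destination NUMA topology, then restore.
import contextlib


@contextlib.contextmanager
def temporary_migration_context(instance, migration_context):
    old = instance.numa_topology
    instance.numa_topology = migration_context.new_numa_topology
    try:
        yield instance
    finally:
        instance.numa_topology = old
```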
Merge ... into stable/victoria
The libvirt driver power on and hard reboot destroy the domain first
and unplug the vifs, then recreate the domain and replug the vifs.
However nova does not wait for the network-vif-plugged event before
unpausing the domain. This can cause the domain to start running and
requesting an IP via DHCP before the networking backend has finished
plugging the vifs.
So this patch adds a workaround config option to nova to wait for
network-vif-plugged events during hard reboot, the same way as nova
waits for this event during new instance spawn.
This logic cannot be enabled unconditionally as not all neutron
networking backends send plug time events to wait for. Also the logic
needs to be vnic_type dependent, as ml2/ovs and the in-tree sriov
backend are often deployed together on the same compute. While ml2/ovs
sends the plug time event, the sriov backend does not send it reliably.
So the configuration is not just a boolean flag but a list of
vnic_types instead. This way, waiting for the plug time event for a vif
that is handled by ml2/ovs is possible while the instance has other
vifs handled by the sriov backend where no event can be expected.
Conflicts:
nova/virt/libvirt/driver.py both
I73305e82da5d8da548961b801a8e75fb0e8c4cf1 and
I0b93bdc12cdce591c7e642ab8830e92445467b9a are not in
stable/victoria
The stable/victoria specific changes:
* The list of accepted vnic_types is adapted to what is supported by
neutron in this release, so vdpa, accelerator-direct, and
accelerator-direct-physical are removed as they were only added in
stable/wallaby
Change-Id: Ie904d1513b5cf76d6d5f6877545e8eb378dd5499
Closes-Bug: #1946729
(cherry picked from commit 68c970ea9915a95f9828239006559b84e4ba2581)
(cherry picked from commit 0c41bfb8c5c60f1cc930ae432e6be460ee2e97ac)
(cherry picked from commit 89c4ff5f7b45f1a5bed8b6b9b4586fceaa391bfb)
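The vnic_type-based decision can be sketched as below. The VIF dict shape and config value are assumptions for illustration: only VIFs whose vnic_type is in the configured list produce an event to wait for, so sriov VIFs on the same instance don't stall the reboot.

```python
# Select which VIFs to wait for a network-vif-plugged event on,
# based on a configured list of vnic_types (e.g. ["normal"] for
# ml2/ovs, while sriov vnic_types are left out).
def events_to_wait_for(vifs, wait_vnic_types):
    return [vif["id"] for vif in vifs
            if vif["vnic_type"] in wait_vnic_types]
```

An empty list (the default, presumably) disables the waiting entirely, matching the "workaround, off by default" framing.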
This patch extends the original reproduction
I4be429c56aaa15ee12f448978c38214e741eae63 to cover
bug 1952915 as well as they have a common root cause.
Change-Id: I57982131768d87e067d1413012b96f1baa68052b
Related-Bug: #1953359
Related-Bug: #1952915
(cherry picked from commit 9f296d775d8f58fcbd03393c81a023268c7071cb)
(cherry picked from commit 0411962938ae1de39f8dccb03efe4567f82ad671)
(cherry picked from commit 94f17be190cce060ba8afcafbade4247b27b86f0)
This patch adds a functional test that reproduces a race between an
incoming migration and the update_available_resource periodic.
Fixes:
- Added more memory to the mock 'host_info', since the default would
  not fit the instance. The default was changed in later releases.
Change-Id: I4be429c56aaa15ee12f448978c38214e741eae63
Related-Bug: #1953359
(cherry picked from commit c59224d715a21998f40f72cf4e37efdc990e4d7e)
(cherry picked from commit f0a6d946aaa6c30f826cfced75c2fb06fdb379a8)
(cherry picked from commit d8859e4f95f5abb20c844d914f2716cba047630e)
Currently neutron can report ports as having MAC addresses
in upper case when they're created like that, while the libvirt
configuration file always stores MACs in lower case. This
leads to a KeyError while trying to retrieve the migrate_vif.
Conflicts:
nova/tests/unit/virt/libvirt/test_migration.py
Note: conflict is caused by not having six.text_type removal patch
I779bd1446dc1f070fa5100ccccda7881fa508d79 in stable/victoria.
Original assertion method was preserved to resolve this conflict.
Closes-Bug: #1945646
Change-Id: Ie3129ee395427337e9abcef2f938012608f643e1
(cherry picked from commit 6a15169ed9f16672c2cde1d7f27178bb7868c41f)
(cherry picked from commit 63a6388f6a0265f84232731aba8aec1bff3c6d18)
(cherry picked from commit 6c3d5de659e558e8f6ee353475b54ff3ca7240ee)
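The fix's idea reduces to case-insensitive MAC matching, sketched below (the function name and vif shape are illustrative, not nova's actual code):

```python
# Normalize MAC addresses to lower case before comparing, since libvirt
# stores them lower-cased while neutron may report them upper-cased.
def find_migrate_vif(migrate_vifs, mac):
    wanted = mac.lower()
    for vif in migrate_vifs:
        if vif["mac"].lower() == wanted:
            return vif
    raise KeyError(mac)
```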
During resize, on the source host, in resize_instance(), the
instance.host and .node are updated to point to the destination host.
This indicates to the source host's resource tracker that the
allocation of this instance does not need to be tracked as an instance
but as an outbound migration instead. However, for the source host's
resource tracker to do that, it needs to use the instance.old_flavor.
Unfortunately the instance.old_flavor is only set during
finish_resize() on the destination host. (resize_instance casts to
finish_resize.) So it is possible that a running resize_instance() sets
instance.host to point to the destination and then, before
finish_resize could set the old_flavor, an update_available_resources
periodic runs on the source host. The result is that the allocation of
this instance is not tracked as an instance, as instance.host points to
the destination, but it is not tracked as a migration either, as
instance.old_flavor is not yet set. So the allocation on the source
host is simply dropped by the periodic job.
When such a migration is confirmed, confirm_resize() tries to drop the
same resource allocation again but fails as the pinned CPUs of the
instance have already been freed.
When such a migration is reverted instead, the revert succeeds but the
source host resource allocation will not contain the resource
allocation of the instance until the next update_available_resources
periodic runs and corrects it.
This does not affect resources tracked exclusively in placement (e.g.
VCPU, MEMORY_MB, DISK_GB) but it does affect NUMA related resources
that are still tracked in the resource tracker (e.g. huge pages, pinned
CPUs).
This patch moves the setting of instance.old_flavor on the source node
into the same transaction that sets instance.host to point to the
destination host, hence solving the race condition.
Change-Id: Ic0d6c59147abe5e094e8f13e0d3523b178daeba9
Closes-Bug: #1944759
(cherry picked from commit b841e553214be9a732703e2dfed6c97698ef9b71)
(cherry picked from commit d4edcd62bae44c01885268a6cf7b7fae92617060)
(cherry picked from commit c8b04d183f560a616a79577c4d4ae9465d03e541)
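The shape of the fix can be sketched as follows (a toy `Instance`, not nova's object model): old_flavor and the host switch are persisted in one save, so there is no window in which the host points at the destination while old_flavor is unset.

```python
# Persist old_flavor in the same save as the host switch, so the source
# resource tracker can always account the outbound migration.
class Instance:
    def __init__(self, host, flavor):
        self.host = host
        self.flavor = flavor
        self.old_flavor = None
        self.saves = 0

    def save(self):
        self.saves += 1  # stands in for one database transaction


def switch_to_dest(instance, dest_host, new_flavor):
    instance.old_flavor = instance.flavor
    instance.flavor = new_flavor
    instance.host = dest_host
    instance.save()  # one transaction: host and old_flavor together
```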
Add functional tests to reproduce the race between resize_instance()
and update_available_resources().
Related-Bug: #1944759
Change-Id: Icb7e3379248fe00f9a94f9860181b5de44902379
(cherry picked from commit 3e4e4489b7a6e9cdefcc6ff02ed99a0a70420fca)
(cherry picked from commit e6c6880465824f1e327a54143f32bb5a5816ff6c)
(cherry picked from commit 140ae45d98dabd30aef5c0ac075346de4eabcea1)
Ie36401c782f023d1d5f2623732619105dc2cfa24 was intended
to address OSSA-2021-002 (CVE-2021-3654), however after its
release it was discovered that the fix only worked
for URLs with 2 leading slashes or more than 4.
This change addresses the missing edge case of 3 leading slashes
and also maintains support for rejecting 2+.
Conflicts:
nova/tests/unit/console/test_websocketproxy.py
NOTE: the conflict is because I58b0382c86d4ef798572edb63d311e0e3e6937bb
is missing in Victoria, and the Ie36401c782f023d1d5f2623732619105dc2cfa24
backport contained conflicts so the method order was swapped.
Change-Id: I95f68be76330ff09e5eabb5ef8dd9a18f5547866
co-authored-by: Matteo Pozza
Closes-Bug: #1927677
(cherry picked from commit 6fbd0b758dcac71323f3be179b1a9d1c17a4acc5)
(cherry picked from commit 47dad4836a26292e9d34e516e1525ecf00be127c)
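The corrected condition is simply "two or more leading slashes", which can be sketched with a regex (illustrative; the real code lives in nova's websocket proxy):

```python
# Reject any request path with 2+ leading slashes: the earlier fix
# covered 2 and 5+, but missed exactly 3 (and 4).
import re


def is_open_redirect_path(path):
    return re.match(r"^/{2,}", path) is not None
```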
Merge ... into stable/victoria
Consider the following situation:
- Using the Ironic virt driver
- Replacing (so removing and re-adding) all baremetal nodes
associated with a single nova-compute service
The update resources periodic will have destroyed the compute node
records because they're no longer being reported by the virt driver.
If we then attempt to manually delete the compute service record, the
database layer will raise an exception, as there are no longer any
compute node records for the host. Previously, this exception would
get bubbled up as an error 500 in the API. This patch catches it and
allows service deletion to complete successfully.
Closes bug: 1860312
Change-Id: I2f9ad3df25306e070c8c3538bfed1212d6d8682f
(cherry picked from commit 880611df0b6b967adabd3f08886e385d0a100c5c)
(cherry picked from commit df5158bf3f80fd4362725dc280de67b88ece9952)
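The API-layer fix can be sketched as below. `ComputeHostNotFound` is a stand-in exception name and the function shape is illustrative: "no compute node records for this host" becomes an expected condition during service delete rather than a 500.

```python
# Treat missing compute node records as already-cleaned-up state: the
# update resources periodic destroyed them when the nodes went away.
class ComputeHostNotFound(Exception):
    pass


def delete_service(destroy_nodes, service):
    try:
        destroy_nodes(service)
    except ComputeHostNotFound:
        # Nothing to destroy; proceed with deleting the service record.
        pass
    return "deleted"
```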
By default, we don't currently allow the Nova microversion
header for CORS requests. It should be something that is
included out of the box because it's part of the core API.
Deployers can workaround this by overriding allow_headers,
but we should provide a better experience out of the box.
Closes-Bug: #1931908
Change-Id: Idf4650f36952331f02d7512580c21451f3ee3b63
(cherry picked from commit b02a95a18b5da37db6d4f30a5dea07e2a4187245)
Merge ... into stable/victoria
To query a resource provider by uuid, request path should look like
/resource_providers?uuid=<uuid>
instead of
/resource_providers&uuid=<uuid>
This change fixes the wrong path so that "nova-manage placement audit"
command can look up the target resource provider properly.
Closes-Bug: #1936278
Change-Id: I2ae7e9782316e3662e4e51e3f5ba0ef597bf281b
(cherry picked from commit 1d3373dcf0a05d4a2c5b51fc1b74d41ec1bb1175)
(cherry picked from commit 62a3fa4fff70a1d03998626406a71b7dc09da733)
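The fix amounts to starting the query string with `?` rather than `&`, which a helper like the following makes hard to get wrong (illustrative, not the actual nova-manage code):

```python
# Build the query string with '?' (via urlencode) so placement actually
# filters resource providers by uuid.
from urllib.parse import urlencode


def resource_providers_path(uuid):
    return "/resource_providers?" + urlencode({"uuid": uuid})
```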
Merge ... into stable/victoria
This is a followup for change Ie36401c782f023d1d5f2623732619105dc2cfa24
to reduce mocking in the unit test coverage for it.
While backporting the bug fix, it was found to be incompatible with
earlier versions of Python < 3.6 due to a difference in internal
implementation [1].
This reduces the mocking in the unit test to be more agnostic to the
internals of the StreamRequestHandler (ancestor of
SimpleHTTPRequestHandler) and work across Python versions >= 2.7.
Related-Bug: #1927677
[1] https://github.com/python/cpython/commit/34eeed42901666fce099947f93dfdfc05411f286
Change-Id: I546d376869a992601b443fb95acf1034da2a8f36
(cherry picked from commit 214cabe6848a1fdb4f5941d994c6cc11107fc4af)
(cherry picked from commit 9c2f29783734cb5f9cb05a08d328c10e1d16c4f1)
Consider the following situation:
- Using the Ironic virt driver
- Replacing (so removing and re-adding) all baremetal nodes
associated with a single nova-compute service
The update resources periodic will have destroyed the compute node
records because they're no longer being reported by the virt driver.
If we then attempt to manually delete the compute service record, the
database layer will raise an exception, as there are no longer any
compute node records for the host. This exception gets bubbled up as
an error 500 in the API. This patch adds a unit test to demonstrate
this.
Related bug: 1860312
Change-Id: I03eec634b25582ec9643cacf3e5868c101176983
(cherry picked from commit 32257a2a6d159406577c885a9d7e366cbf0c72b9)
(cherry picked from commit e6cd23c3b4928b421b8c706f9cc218020779e367)
Fix for bug #1893263 introduced a regression where
1-vCPU instances would fail to build when paired with
multiqueue-enabled images in the vif_type=tap scenario.
The solution is to not pass the multiqueue parameter when
instance.get_flavor().vcpus == 1.
Closes-bug: #1939604
Change-Id: Iaccf2eeeb6e8bb80c658f51ce9ab4e8eb4093a55
(cherry picked from commit 7fc6fe6fae891eae42b36ccb9d69cd0f6d6db21d)
(cherry picked from commit aa5b8d12bcacc01e5f9be45cc1eef24ac9efd2fc)
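The guard can be sketched as below (the dict shape is illustrative, not nova's libvirt config object): multiqueue is only meaningful, and only passed, when the flavor has more than one vCPU.

```python
# Only pass the multiqueue parameter to the tap device when the flavor
# has more than one vCPU; 1-vCPU instances get no queues argument.
def tap_device_args(vcpus, multiqueue_requested):
    args = {}
    if multiqueue_requested and vcpus > 1:
        args["queues"] = vcpus
    return args
```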
The rbd unit test defines a shutdown field on the Mock class
unintentionally. This causes a later test in the same
executor, which expects mock.Mock().shutdown to be a newly
auto-generated mock.Mock(), to instead find an already used
(called) object. That later test then fails when
asserting the number of calls on that mock.
The original intention of the rbd unit test was to catch the
instantiation of the Rados object in test so it set Rados = mock.Mock()
so when the code under test called Rados() it actually called Mock().
It worked but it has that huge side effect. Instead of this a proper
mocking of the constructor can be done in two steps:
rados_inst = mock.Mock()
Rados = mock.Mock(return_value=rados_inst)
This makes sure that every Rados() call will return rados_inst that is a
mock without causing Mock class level side effect.
Change-Id: If71620e808744736cb4fe3abda76d81a6335311b
Closes-Bug: #1936849
(cherry picked from commit 930b7c992156733fbb4f598488605825d62ebc0c)
(cherry picked from commit c958fab901be97999c0d117faa31ab53e52a3371)
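The two-step constructor mock from the message works as below, shown standalone (a plain demo function stands in for the code under test that would call `Rados()`):

```python
# Proper constructor mocking: patch the class with a Mock whose
# return_value is a dedicated instance mock. Every Rados() call then
# returns rados_inst, with no Mock-class-level side effects leaking
# into other tests.
from unittest import mock

rados_inst = mock.Mock()
Rados = mock.Mock(return_value=rados_inst)


def connect():
    client = Rados()      # returns rados_inst, not a fresh Mock
    client.shutdown()     # recorded on rados_inst only
    return client
```

Contrast with `Rados = mock.Mock()`: there, `Rados()` returns `Mock()`, i.e. calls land on the Mock class itself, which is exactly the cross-test contamination the fix removes.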