Commit log (each entry lists the commit message, author, date, files changed, and -removed/+added line counts)

* trivial: Remove remaining '_LI' instances (Stephen Finucane, 2020-05-18, 8 files, -34/+24)

    Once again, do what we did for '_LE' and '_LW' and remove the final
    remnants of the log translation effort.

    Change-Id: Id6cf7a9bfbe69d6d3e65303e62403d1db9188a84
    Signed-off-by: Stephen Finucane <stephenfin@redhat.com>

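    For illustration, a minimal sketch of the pattern this series removes
    (assuming the usual oslo.log/oslo.i18n marker usage; not the actual diff
    from these changes):

        from oslo_log import log as logging

        LOG = logging.getLogger(__name__)

        def log_started(instance_uuid):
            # Before: LOG.info(_LI('Instance %s started'), instance_uuid)
            # After: log messages are no longer translated, so the '_LI'
            # marker is dropped and a plain string is used instead.
            LOG.info('Instance %s started', instance_uuid)
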
* trivial: Remove remaining '_LW' instances (Stephen Finucane, 2020-05-18, 10 files, -45/+47)

    There are only a few of these remaining in the code base. Remove them.

    Change-Id: I33725e2439b0f39c1e9bec9e33a37bf3e24944fb
    Signed-off-by: Stephen Finucane <stephenfin@redhat.com>

* trivial: Remove remaining '_LE' instances (Stephen Finucane, 2020-05-18, 11 files, -52/+48)

    We've been slowly removing these as we go. Remove the final few '_LE'
    occurrences now.

    Change-Id: I75ebd2e95a0c77585d7b4329ca01e4bacc1dd7c4
    Signed-off-by: Stephen Finucane <stephenfin@redhat.com>

* Merge "Switch to newer openstackdocstheme and reno versions"Zuul2020-05-165-25/+22
|\
| * Switch to newer openstackdocstheme and reno versionsAndreas Jaeger2020-05-155-25/+22
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Switch to openstackdocstheme 2.1.2 and reno 3.1.0 versions. Using these versions will allow parallelizing building of documents. Update Sphinx version as well. openstackdocstheme renames some variables, so follow the renames. A couple of variables are also not needed anymore, remove them. Set openstackdocs_auto_version to not version the documents. Set openstackdocs_auto_name to use project as name. Depends-On: https://review.opendev.org/728432 Change-Id: I4e3ae3ceabe125ea459ed4baabf2e98686268e50
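    For illustration, a sketch of what the renamed openstackdocstheme 2.x
    settings look like in a Sphinx conf.py (the values are assumptions, not
    the exact ones from this change):

        # doc/source/conf.py (fragment)
        extensions = ['openstackdocstheme']
        html_theme = 'openstackdocs'

        # openstackdocstheme 2.x option names
        openstackdocs_repo_name = 'openstack/nova'
        openstackdocs_auto_version = False  # do not derive a docs version
        openstackdocs_auto_name = True      # use the Sphinx 'project' as name
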
* Merge "Update scheduler instance info at confirm resize" (Zuul, 2020-05-16, 3 files, -19/+16)

| * Update scheduler instance info at confirm resize (Balazs Gibizer, 2020-03-25, 3 files, -19/+16)

    When a resize is confirmed, the instance does not belong to the source
    compute any more. In the past, the scheduler instance info was only
    updated by the _sync_scheduler_instance_info periodic task. As a result,
    server boots with anti-affinity did not consider the source host. Now, at
    the end of the confirm_resize call, the compute also updates the scheduler
    about the move.

    Change-Id: Ic50e72e289b56ac54720ad0b719ceeb32487b8c8
    Closes-Bug: #1869050

* Merge "Reproduce bug 1869050" (Zuul, 2020-05-15, 1 file, -0/+47)

| * Reproduce bug 1869050 (Balazs Gibizer, 2020-03-25, 1 file, -0/+47)

    This patch adds a functional test that reproduces the bug where stale
    scheduler instance info prevents booting a server with anti-affinity.

    Change-Id: If485330b48ae2671651aafabc93f92a8999f7ca2
    Related-Bug: #1869050

* Merge "docs: Add evacuation pre-conditions around the src host" (Zuul, 2020-05-15, 1 file, -0/+6)

| * docs: Add evacuation pre-conditions around the src host (Lee Yarwood, 2020-05-06, 1 file, -0/+6)

    These were previously not listed but are important to reinforce in order
    to avoid corruption of instances during an evacuation.

    Change-Id: Ia80dba0ade98940a5c4f94a9d2095e3c9ef21f97

* Merge "Bump hacking min version to 3.0.1" (Zuul, 2020-05-15, 1 file, -1/+1)

| * Bump hacking min version to 3.0.1 (Ghanshyam Mann, 2020-05-15, 1 file, -1/+1)

    hacking 3.0.1 fixes the pinning of flake8 to avoid bringing in a new
    version with new checks. Bump the minimum version of hacking so that older
    hacking versions, which auto-adopt the new checks, are not used.

    Depends-On: https://review.opendev.org/#/c/728335/
    Change-Id: Ie00c10332bd7110169dbb150d601c157b6694d05

* Merge "Moving functional jobs to Victoria testing runtime" (Zuul, 2020-05-13, 3 files, -33/+16)

| * Moving functional jobs to Victoria testing runtime (Ghanshyam Mann, 2020-05-08, 3 files, -33/+16)

    As per the Victoria testing runtime [1], we need to test py3.6, py3.7 and
    py3.8:

    - py3.7 is being tested with integration jobs.
    - py3.6 and py3.8 are tested with unit test jobs.

    Nova functional tests currently test py3.6; they can be moved to the
    latest Python available in this cycle, which is 3.8. We do not need to run
    functional or integration tests on each supported Python version; testing
    them on the latest version is enough. Coverage of all supported Python
    versions is achieved via the unit test jobs.

    [1] https://governance.openstack.org/tc/reference/runtimes/victoria.html

    Change-Id: I1d6a2986fcb0435cfabdd104d202b65329909d2b

* Merge "Suppress remaining policy warnings in unit tests" (Zuul, 2020-05-11, 2 files, -6/+9)

| * Suppress remaining policy warnings in unit tests (Ghanshyam Mann, 2020-05-08, 2 files, -6/+9)

    There are a few places left in the unit tests where policy warnings are
    still logged:

    - test_policy, which contains the policy file tests and does policy
      initialization without suppressing the warnings.
    - test_serversV21. PolicyFixture takes care of policy setup without
      warnings [1] and is used by the test base class [2], but test_serversV21
      duplicates the policy initialization, which leads to warnings in the
      unit test logs. We do not need to initialize policy again and can rely
      on the PolicyFixture setup. From the git history, the duplicate
      initialization was added 7 years ago when no fixture was available, so
      there is no specific reason to re-initialize the policy in this test:
      https://review.opendev.org/#/c/16160/3

    [1] https://github.com/openstack/nova/blob/4b62c90063abc0c1b3e6b564a171e8c1d96cb735/nova/tests/unit/policy_fixture.py#L46
    [2] https://github.com/openstack/nova/blob/4b62c90063abc0c1b3e6b564a171e8c1d96cb735/nova/test.py#L269

    Change-Id: Ieb3f5510437d38bf2a4c8994d76c7f4001a6c9d8

* Merge "Support for --force flag for nova-manage placement heal_allocations command" (Zuul, 2020-05-11, 4 files, -9/+152)

| * Support for --force flag for nova-manage placement heal_allocations command (jay, 2020-05-06, 4 files, -9/+152)

    Use this flag to forcefully heal allocation for a specific instance.

    Change-Id: I54147d522c86d858f938df509b333b6af3189e52
    Closes-Bug: #1868997

* Merge "objects: Add migrate-on-load behavior for legacy NUMA objects" (Zuul, 2020-05-08, 6 files, -70/+145)

| * objects: Add migrate-on-load behavior for legacy NUMA objects (Stephen Finucane, 2020-05-06, 6 files, -70/+145)

    We started storing NUMA information as objects in the database in Kilo
    (commit bb3202f8594). Prior to this, we had stored this NUMA information
    as a plain old dict. To facilitate the transition, we provided some
    handlers based on these '_from_dict' functions. These were used to ensure
    we could load old-style entries, converting them to objects which we could
    later save back to the database.

    It's been over four years (three since Kilo went EOL) and nine (nearly
    ten) releases, meaning it's time to look at dropping this code. At this
    point, the only thing that could hurt us is attempting to do something
    with a NUMA-based instance that hasn't been touched since it was first
    booted on a Kilo or earlier host.

    Replace the '_to_dict' functionality and overrides of the
    'obj_from_primitive' method with a similar check in the DB loading
    functions. Crucially, inside these DB loading functions, save back when
    legacy objects are detected. This is acceptable because
    'update_available_resource' in the resource tracker pulls out both compute
    nodes and instances, with their embedded fields like 'numa_topology',
    ensuring this will be run as part of the periodic task.

    NOTE: We don't need to worry about migrations of 'numa_topology' fields in
    other objects: the 'RequestSpec' and 'MigrationContext' objects were added
    in Liberty [1][2] and used the 'InstanceNUMATopology' o.vo from the start,
    while the 'NUMATopology' object has only ever been used in the
    'ComputeNode' object.

    Change-Id: I6cd206542fdd28f3ef551dcc727f4cb35a53f6a3
    Signed-off-by: Stephen Finucane <stephenfin@redhat.com>

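    A simplified, self-contained sketch of the migrate-on-load idea described
    above (the names and the save-back callback are hypothetical, not Nova's
    actual loader):

        import json

        LEGACY_MARKER = 'nova_object.name'  # o.vo primitives carry this key

        def load_numa_topology(db_value, save_back):
            """Detect a legacy plain-dict row, convert it, and persist the
            converted form so periodic loads gradually rewrite old records."""
            primitive = json.loads(db_value)
            if LEGACY_MARKER not in primitive:
                # Legacy Kilo-era format: wrap the plain dict as an object
                # primitive and write it back immediately.
                primitive = {LEGACY_MARKER: 'InstanceNUMATopology',
                             'nova_object.data': primitive}
                save_back(json.dumps(primitive))
            return primitive
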
* Merge "config: Explicitly register 'remote_debug' CLI opts" (Zuul, 2020-05-08, 12 files, -23/+42)

| * config: Explicitly register 'remote_debug' CLI opts (Stephen Finucane, 2020-05-07, 12 files, -23/+42)

    We were registering the 'remote_debug' CLI opts as part of the 'nova.conf'
    init code. This meant not only every service we ship (bar 'nova-console')
    but also every CLI tool we provide was exposing these options, even though
    they make zero sense for the latter.

    Resolve this by explicitly registering the options against the services
    that we want them in. Specifically, these are the API services (nova-api,
    nova-api-metadata, nova-api-os-compute), the console proxy services
    (nova-novncproxy, nova-serialproxy, nova-spicehtml5proxy), nova-compute,
    nova-conductor and nova-scheduler.

    While we're here, we also clean up the documentation for these options
    since there are a few non-rST'y things in there.

    Change-Id: I7b7489e8412cc93d49904d4ef08a775b07b27d26
    Signed-off-by: Stephen Finucane <sfinucan@redhat.com>

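    A sketch of the per-service registration approach using oslo.config (the
    option names mirror the existing [remote_debug] host/port options; the
    helper name is an assumption):

        from oslo_config import cfg

        debug_opts = [
            cfg.HostAddressOpt('host',
                               help='Debugger host to connect back to.'),
            cfg.PortOpt('port',
                        help='Debugger port to connect back to.'),
        ]

        def register_cli_opts(conf=cfg.CONF):
            # Called from the entry points of the services that actually use
            # remote debugging, before command-line parsing, instead of being
            # registered unconditionally in the global config setup.
            conf.register_cli_opts(debug_opts, group='remote_debug')
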
* Merge "Wait for all servers to be active when testing vGPUs" (Zuul, 2020-05-08, 1 file, -2/+11)

| * Wait for all servers to be active when testing vGPUs (Sylvain Bauza, 2020-05-07, 1 file, -2/+11)

    We were only verifying the state of the first server before introspecting
    the number of created devices. If the second server isn't ACTIVE yet, the
    mdevs wouldn't be found and the functional test would fail with an
    exception.

    Change-Id: I02a9f26dea6378281f3968a2c5cf0f652f9e342b
    Closes-Bug: #1877281

* Merge "remove support of oslo.messaging 9.8.0 warning message" (Zuul, 2020-05-08, 1 file, -11/+2)

| * remove support of oslo.messaging 9.8.0 warning message (Sean Mooney, 2020-05-07, 1 file, -11/+2)

    This change removes support for suppressing the heartbeat warning message
    from oslo.messaging versions before 9.8.0.

    Change-Id: I2035d5df31e43b730cd472cc438ec863bb538d62
    Related-Bug: #1825584

* Merge "Silence amqp heartbeat warning" (Zuul, 2020-05-08, 1 file, -0/+22)

| * Silence amqp heartbeat warning (Sean Mooney, 2020-05-07, 1 file, -0/+22)

    When the nova api is executing under uWSGI or MOD_WSGI the lifetime of the
    amqp heartbeat thread is controlled by the wsgi server. As a result, when
    the nova api is run in this configuration we expect that the heartbeat
    thread will be suspended and heartbeats will be missed when the wsgi
    server suspends execution of the wsgi application.

    This change adds a python logging filter to suppress the reporting of
    heartbeat warnings, as this behavior is expected. Since the operator
    cannot do anything to address the issue, the warning is just noise, and
    many operators and customers find it to be off-putting.

    Change-Id: I642b1e3ed6de2be4dcc19fe214f84095d2e1d31a
    Closes-Bug: #1825584

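    A minimal sketch of such a logging filter (the logger name and the message
    check are assumptions for illustration, not the exact filter added by this
    change):

        import logging

        class HeartbeatWarningFilter(logging.Filter):
            """Drop heartbeat warnings that are expected when the wsgi server
            suspends the amqp heartbeat thread."""

            def filter(self, record):
                return 'heartbeat' not in record.getMessage()

        # Assumed logger name; attach the filter where the warnings originate.
        logging.getLogger('oslo.messaging._drivers.impl_rabbit').addFilter(
            HeartbeatWarningFilter())
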
* Merge "Remove stale nested backport from InstancePCIRequests" (Zuul, 2020-05-06, 2 files, -22/+0)

| * Remove stale nested backport from InstancePCIRequests (Dan Smith, 2020-04-21, 2 files, -22/+0)

    Sometime in 2015, we removed the hard-coded obj_relationships mapping from
    parent objects which facilitated semi-automated child version backports.
    This was replaced by a manifest-of-versions mechanism where the client
    reports all the supported objects and versions during a backport request
    to conductor.

    The InstancePCIRequests object isn't technically an ObjectListBase,
    despite acting like one, and thus wasn't using the obj_relationships.
    Because of this, it was doing its own backporting of its child object,
    which was not removed in the culling of the static mechanism. Because we
    now no longer need to worry about sub-object backport chaining, when
    version 1.2 was added, no backport rule was added, and since the object
    does not call the base class' generic routine, proper backporting of the
    child object was not happening.

    All we need to do is remove the override to allow the base infrastructure
    to do the work.

    Change-Id: Id610a24c066707de5ddc0507e7ef26c421ba366c
    Closes-Bug: #1868033

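    For context, a generic oslo.versionedobjects sketch of the per-object
    backport hook that the manifest-based mechanism relies on (an illustrative
    object, not InstancePCIRequests itself):

        from oslo_utils import versionutils
        from oslo_versionedobjects import base, fields

        @base.VersionedObjectRegistry.register
        class ExampleChild(base.VersionedObject):
            # Version 1.1: added 'extra'
            VERSION = '1.1'
            fields = {
                'name': fields.StringField(),
                'extra': fields.StringField(nullable=True),
            }

            def obj_make_compatible(self, primitive, target_version):
                # The base infrastructure walks child objects itself during a
                # manifest-based backport, so per-object rules like this are
                # enough; parents no longer hand-roll child backports.
                super().obj_make_compatible(primitive, target_version)
                target = versionutils.convert_version_to_tuple(target_version)
                if target < (1, 1):
                    primitive.pop('extra', None)
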
* Merge "doc: Fix list rendering in cli/nova-status.rst" (Zuul, 2020-05-06, 1 file, -3/+1)

| * doc: Fix list rendering in cli/nova-status.rst (Takashi Natsume, 2020-05-02, 1 file, -3/+1)

    Change-Id: I29ebb4c956ce979d60f74346d93510797f8de76b
    Signed-off-by: Takashi Natsume <takanattie@gmail.com>

* Merge "Follow-up for NUMA live migration functional tests" (Zuul, 2020-05-05, 2 files, -53/+77)

| * Follow-up for NUMA live migration functional tests (Artom Lifshitz, 2020-05-05, 2 files, -53/+77)

    This patch addresses outstanding feedback on
    Ia3d7351c1805d98bcb799ab0375673c7f1cb8848 and
    I78e79112a9c803fb45d828cfb4641456da66364a.

    Change-Id: I70c4715de05d64fabc498b02d5c757af9450fbe9

* Merge "Remove future imports" (Zuul, 2020-05-05, 18 files, -35/+0)

| * Remove future imports (Stephen Finucane, 2020-03-24, 18 files, -35/+0)

    These particular imports are no longer needed in a Python 3-only world.

    Change-Id: Ia1b60ce238713b86f126e2d404199d102fdbc5bc
    Signed-off-by: Stephen Finucane <stephenfin@redhat.com>

* Merge "Don't show upgr note for policy validation in V" (Zuul, 2020-05-05, 1 file, -0/+2)

| * Don't show upgr note for policy validation in V (Sylvain Bauza, 2020-05-04, 1 file, -0/+2)

    We merged Id9cd65877e53577bff22e408ca07bbeec4407f6e, which included a reno
    file, after we branched Ussuri, so we had to backport it to Ussuri. Since
    it was merged after RC1, the relnote would appear in an Upgrade section
    for the Victoria release, which is not something we wanted (it is rather
    for a T->U upgrade). Add the ignore-notes section in order to avoid it.

    Change-Id: If31933c5ff20167bf24fbad6f8746b1a9a6a36e5

* Merge "Add nested resource providers limit for multi create" (Zuul, 2020-05-05, 2 files, -2/+48)

| * Add nested resource providers limit for multi create (zhangbailin, 2020-05-02, 2 files, -2/+48)

    In 21.0.0 Ussuri we completed the nova-cyborg interaction feature, but
    there are some issues with multi-create of instances. When creating
    servers with accelerators provisioned by the Cyborg service, if a flavor
    asks for resources that are provided by nested Resource Provider
    inventories (e.g. VGPU) and the user wants multi-create (i.e. say
    --max 2), then the scheduler could return a NoValidHost exception even if
    each nested Resource Provider can support at least one specific instance,
    when the total wanted capacity is not supported by a single nested RP.

    For example, when creating servers with accelerators provisioned by the
    Cyborg service, if two child RPs have 4 VGPU inventories each:

    - you can ask for a flavor with 2 VGPU with --max 2,
    - but you can't ask for a flavor with 4 VGPU and --max 2.

    Related-Bug: #1874664
    Change-Id: I64647a6ba79c47c891134cedb49f03d3c61e8824

* Merge "Add nova-status upgrade check and reno for policy new defaults" (Zuul, 2020-05-02, 5 files, -0/+163)

| * Add nova-status upgrade check and reno for policy new defaults (Ghanshyam Mann, 2020-05-01, 5 files, -0/+163)

    There are cases where the policy file is freshly re-generated and ends up
    having only the new defaults, but the expectation is that the old
    deprecated rules keep working. If a rule is present in the policy file, it
    has priority over its defaults, so either the rules should not be present
    in the policy file or users need to update their tokens to match the
    overridden rule permissions.

    This issue was always present when any policy defaults were changed while
    the old defaults were still supported as deprecated. Now that we have
    changed all the policies to new defaults, it shows up as a broken case.

    Add a nova-status upgrade check as well to detect such a policy file.

    Related-Bug: #1875418
    Change-Id: Id9cd65877e53577bff22e408ca07bbeec4407f6e

* Merge "NUMA LM: Add func test for bug 1845146" (Zuul, 2020-05-01, 1 file, -0/+19)

| * NUMA LM: Add func test for bug 1845146 (Artom Lifshitz, 2020-03-24, 1 file, -0/+19)

    Bug 1845146 was caused by the update available resources periodic task
    running during a small window in which the migration was still in
    'accepted' even though resource claims had already been done. 'accepted'
    migrations were not considered in progress before the fix for 1845146
    merged as commit 6ec686c26b, which caused the periodic task to incorrectly
    free the migration's resources from the destination.

    This patch adds a test that triggers this race by wrapping around the
    compute manager's live_migration() (which sets the 'queued' migration
    status - this was actually wrong in 6ec686c26b, as it talks about
    'preparing') and running the update available resources periodic task
    while the migration is still in 'accepted'.

    Related-Bug: #1845146
    Change-Id: I78e79112a9c803fb45d828cfb4641456da66364a

* Merge "Feature matrix: update AArch64 information" (Zuul, 2020-04-29, 1 file, -6/+6)

| * Feature matrix: update AArch64 information (Marcin Juszkiewicz, 2020-03-30, 1 file, -6/+6)

    I tested missing entries on Linaro Developer Cloud.

    Change-Id: I63267f5a759af52eb9c052cc804e2650c3fe9ef3

* Merge "libvirt:driver:Disallow AIO=native when 'O_DIRECT' is not available" (Zuul, 2020-04-29, 3 files, -0/+75)

| * libvirt:driver:Disallow AIO=native when 'O_DIRECT' is not available (Arthur Dayne, 2020-04-14, 3 files, -0/+75)

    Because of the libvirt issue [1], there is a bug [2]: if we set a cache
    mode whose write semantics are not O_DIRECT (i.e. unsafe, writeback or
    writethrough), there will be a problem with the volume drivers (i.e.
    nova.virt.libvirt.volume.LibvirtISCSIVolumeDriver,
    nova.virt.libvirt.volume.LibvirtNFSVolumeDriver and so on) which designate
    native io explicitly.

    That problem will generate a libvirt xml for the instance whose content
    contains:

    ```
    ...
    <disk ... >
      <driver ... cache='unsafe/writeback/writethrough' io='native' />
    </disk>
    ...
    ```

    In turn, it will fail to start the instance or attach the disk.

    > When qemu is configured with a block device that has aio=native set, but
    > the cache mode doesn't use O_DIRECT (i.e. isn't cache=none/directsync or
    > any unnamed mode with explicit cache.direct=on), then the raw-posix
    > block driver for local files and block devices will silently fall back
    > to aio=threads.
    > The blockdev-add interface rejects such combinations, but qemu can't
    > change the existing legacy interfaces that libvirt uses today.

    [1]: https://github.com/libvirt/libvirt/commit/058384003db776c580d0e5a3016a6384e8eb7b92
    [2]: https://bugzilla.redhat.com/show_bug.cgi?id=1086704

    Closes-Bug: #1841363
    Change-Id: If9acc054100a6733f3659a15dd9fc2d462e84d64

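    A self-contained sketch of the guard described above (the function name
    and shape are illustrative, not the driver's actual code):

        def pick_disk_io_mode(cache_mode, wants_native=True):
            """'native' AIO is only safe when the cache mode implies O_DIRECT
            (none/directsync); otherwise fall back to 'threads'."""
            o_direct_modes = ('none', 'directsync')
            if wants_native and cache_mode in o_direct_modes:
                return 'native'
            return 'threads'

        # e.g. pick_disk_io_mode('writeback') -> 'threads'
        # e.g. pick_disk_io_mode('none') -> 'native'
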
* Merge "zuul: Switch to the Zuulv3 grenade job" (Zuul, 2020-04-28, 1 file, -2/+2)