path: root/nova/exception.py
Commit message    Author    Age    Files    Lines
* Merge "libvirt: Add vIOMMU device to guest"Zuul2022-09-011-0/+10
|\
| * libvirt: Add vIOMMU device to guestStephen Finucane2022-09-011-0/+10
| | | | | | | | | | | | | | | | | | | | | | | | | | | | Implementation for BP/libvirt-viommu-device. With provide `hw:viommu_model` property to extra_specs or `hw_viommu_model` to image property. will enable viommu to libvirt guest. [1] https://www.berrange.com/posts/2017/02/16/setting-up-a-nested-kvm-guest-for-developing-testing-pci-device-assignment-with-numa/ [2] https://review.opendev.org/c/openstack/nova-specs/+/840310 Implements: blueprint libvirt-viommu-device Change-Id: Ief9c550292788160433a28a7a1c36ba38a6bc849 Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
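For illustration only, the two request paths described above; the model
values ("intel", "smmuv3") are assumed examples of libvirt-supported vIOMMU
models, not values confirmed by this change:

    # A flavor-driven request via extra specs ...
    flavor_extra_specs = {"hw:viommu_model": "intel"}
    # ... or an image-driven request via an image property.
    image_properties = {"hw_viommu_model": "smmuv3"}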
* Add support for volume backed server rebuild    Rajat Dhasmana    2022-08-31    1    -0/+4

This patch adds the plumbing for rebuilding a volume backed instance in
compute code. This functionality will be enabled in a subsequent patch which
adds a new microversion and the external support for requesting it.

The flow of the operation is as follows:

1) Create an empty attachment
2) Detach the volume
3) Request cinder to reimage the volume
4) Wait for cinder to notify success to nova (via external events)
5) Update and complete the attachment

Related blueprint volume-backed-server-rebuild

Change-Id: I0d889691de1af6875603a9f0f174590229e7be18
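A rough pseudo-Python sketch of that five-step flow; the client objects and
method names here are hypothetical placeholders, not nova's or cinder's real
interfaces:

    def rebuild_volume_backed(instance, volume, image_id, cinder, wait_for_event):
        # 1) Create an empty attachment to hold our place on the volume.
        attachment = cinder.attachment_create(volume.id, instance.uuid)
        # 2) Detach the volume by removing the old attachment.
        cinder.attachment_delete(volume.attachment_id)
        # 3) Ask cinder to reimage the volume with the requested image.
        cinder.reimage(volume.id, image_id)
        # 4) Block until cinder signals completion via an external event.
        wait_for_event("volume-reimaged", volume.id)
        # 5) Update and complete the new attachment so the instance can use it.
        cinder.attachment_complete(attachment.id)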
* Merge "Retry /reshape at provider generation conflict"Zuul2022-08-261-0/+10
|\
| * Retry /reshape at provider generation conflictBalazs Gibizer2022-08-251-0/+10
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | During a normal update_available_resources run if the local provider tree caches is invalid (i.e. due to the scheduler made an allocation bumping the generation of the RPs) and the virt driver try to update the inventory of an RP based on the cache Placement will report conflict, the report client will invalidate the caches and the retry decorator on ResourceTracker._update_to_placement will re-drive the top of the fresh RP data. However the same thing can happen during reshape as well but the retry mechanism is missing in that code path so the stale caches can cause reshape failures. This patch adds specific error handling in the reshape code path to implement the same retry mechanism as exists for inventory update. blueprint: pci-device-tracking-in-placement Change-Id: Ieb954a04e6aba827611765f7f401124a1fe298f3
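The retry pattern being added is roughly the following sketch; the exception
and helper names here are illustrative, not the real report client symbols:

    class PlacementGenerationConflict(Exception):
        """Placement returned 409 due to a resource provider generation bump."""

    class ReshapeFailed(Exception):
        pass

    def reshape_with_retry(report_client, build_reshape_request, max_attempts=4):
        """Retry a /reshape when the cached RP generations turn out stale."""
        for _ in range(max_attempts):
            try:
                return report_client.reshape(build_reshape_request())
            except PlacementGenerationConflict:
                # Another actor (e.g. the scheduler) bumped an RP generation;
                # drop the stale provider tree cache so the next attempt is
                # built from fresh RP data.
                report_client.invalidate_provider_tree_cache()
        raise ReshapeFailed("still conflicting after %d attempts" % max_attempts)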
* | Merge "Add locked_memory extra spec and image property"Zuul2022-08-261-0/+11
|\ \ | |/ |/|
| * Add locked_memory extra spec and image propertySean Mooney2022-08-241-0/+11
| | | | | | | | | | | | | | | | | | | | | | | | This change adds a new hw:locked_memory extra spec and hw_locked_memory image property to contol preventing guest memory from swapping. This change adds docs and extend the flavor validators for the new extra spec. Also add new image property. Blueprint: libvirt-viommu-device Change-Id: Id3779594f0078a5045031aded2ed68ee4301abbd
* Reject mixed VF rc and trait config    Balazs Gibizer    2022-08-25    1    -0/+16

If two VFs from the same PF are configured by two separate [pci]device_spec
entries then it is possible to define contradicting resource classes or
traits. This patch detects and rejects such configuration.

blueprint: pci-device-tracking-in-placement
Change-Id: I623ab24940169991a400eba854c9619a11662a91
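An illustration of the kind of contradiction that is now rejected; the
addresses and resource class names below are made up, but the shape matches
the JSON values accepted by [pci]device_spec:

    # Two VFs of the same PF (0000:81:00.x) claiming different resource
    # classes.  The VF inventory of a PF is reported under a single resource
    # class, so this configuration is contradictory and is refused.
    device_spec_1 = {"address": "0000:81:00.2", "resource_class": "CUSTOM_FAST_VF"}
    device_spec_2 = {"address": "0000:81:00.3", "resource_class": "CUSTOM_SLOW_VF"}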
* Reject PCI dependent device config    Balazs Gibizer    2022-08-25    1    -0/+8

The PCI tracking in placement does not support the configuration where both
a PF and its children VFs are configured for nova usage. This patch adds
logic to detect and reject such configuration.

To be able to kill the service if started with such a config, special
exception handling is added for the update_available_resource code path,
similar to how a failed reshape is handled.

blueprint: pci-device-tracking-in-placement
Change-Id: I708724465d2afaa37a65c231c64da88fc8b458eb
* Basics for PCI Placement reporting    Balazs Gibizer    2022-08-25    1    -0/+5

A new PCI resource handler is added to the update_available_resources code
path to update the ProviderTree with PCI device RPs, inventories and traits.

It is a bit different than the other Placement inventory reporters. It does
not run at the virt driver level, as PCI is tracked in a generic way in the
PCI tracker in the resource tracker, so the virt-specific information is
already parsed and abstracted by the resource tracker.

Another difference is that, to support rolling upgrades, the PCI handler
code needs to be prepared for situations where the scheduler does not create
PCI allocations even after some of the computes have already started
reporting inventories and healing PCI allocations. So the code does not do a
single, one-shot reshape at startup, but instead does a continuous healing
of the allocations. We can remove this continuous healing after the PCI
prefilter is made mandatory in a future release.

The whole PCI placement reporting behavior is disabled by default while it
is incomplete. When it is functionally complete, a new
[pci]report_in_placement config option will be added to allow enabling the
feature. This config is intentionally not added by this patch as we don't
want to allow enabling this logic yet.

blueprint: pci-device-tracking-in-placement
Change-Id: If975c3ec09ffa95f647eb4419874aa8417a59721
* Rename exception.PciConfigInvalidWhitelist to PciConfigInvalidSpec    Balazs Gibizer    2022-08-10    1    -2/+2

This patch continues cleaning up the "whitelist" terminology by renaming
exception.PciConfigInvalidWhitelist to PciConfigInvalidSpec.

blueprint: pci-device-tracking-in-placement
Change-Id: I054e17090818f01f7e7621ad6144fd7908346da2
* Remove the PowerVM driver    Stephen Finucane    2022-08-02    1    -5/+0

The PowerVM driver was deprecated in November 2021 as part of change
Icdef0a03c3c6f56b08ec9685c6958d6917bc88cb. As noted there, all indications
suggest that this driver is no longer maintained and may be abandonware.
It's been some time and there's still no activity here so it's time to
abandon this for real.

This isn't as tied into the codebase as the old XenAPI driver was, so
removal is mostly a case of deleting large swathes of code. Lovely.

Change-Id: Ibf4f36136f2c65adad64f75d665c00cf2de4b400
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
* Allow unshelve to a specific host (Compute API part)    René Ribaud    2022-07-22    1    -3/+9

This patch introduces changes to the compute API that will allow
PROJECT_ADMIN to unshelve a shelved offloaded server to a specific host.
This patch also supports the ability to unpin the availability_zone of an
instance that is bound to it.

Implements: blueprint unshelve-to-host
Change-Id: Ieb4766fdd88c469574fad823e05fe401537cdc30
* Add logic to enforce local api and db limits    John Garbutt    2022-02-24    1    -0/+12

Local API and DB limits are limits on resources that are counted either as
an API request parameter (example: server metadata items) or as records in
the database (example: server key pairs).

Future patches will make use of this logic and actually enforce the limits.
This patch just adds the infrastructure to allow for the enforcement of the
limits.

We are moving all existing quotas to be managed via Keystone's unified
limits. To stop confusion between injected_file_path_length and
injected_file_path_bytes, the unified limit in Keystone will use the latter
name to match the name used in the API.

These local limits are all about preventing excessive load on the API and
database and have little to do with resource usage. Accordingly, these
limits are represented by keystone registered limits only.

Local limits include things that just limit items in an API request:

* metadata_items
* injected_files
* injected_file_content_bytes
* injected_file_path_bytes

Local limits also include things that are stored in the database:

* key_pairs
* server_groups
* server_group_members

Some resource names have been changed to prepend a prefix of "server_" in
order to disambiguate them from other potential unified limits in keystone:

* metadata_items => server_metadata_items
* injected_files => server_injected_files
* injected_file_content_bytes => server_injected_file_content_bytes
* injected_file_path_bytes => server_injected_file_path_bytes
* key_pairs => server_key_pairs

Note that each of the above is counted via a different scope. This new code
ensures that key_pairs are counted per user, server_groups are counted per
project and server_group_members are counted per server_group. Note:
previously server_group_members were counted per user inside each
server_group, which has proved very confusing, as adding more users into a
project increases the maximum size allowed for a server_group.

blueprint unified-limits-nova
Change-Id: I0b6f4d29aaee1d71541a95cbecfd0708aac325d2
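To see what enforcing a keystone registered limit looks like in practice,
here is a hedged sketch using oslo.limit; the usage callback and resource
name follow the naming above, while the [oslo_limit]/keystone configuration
needed to actually run this is omitted:

    from oslo_limit import limit

    def current_usage(project_id, resource_names):
        # Usage callback: return the current count for each resource name.
        # A real implementation would count rows in the nova API database
        # (or items in the incoming request, for the pure API limits).
        return {name: 0 for name in resource_names}

    enforcer = limit.Enforcer(current_usage)
    # Raises an over-limit error if adding five more metadata items would
    # exceed the registered limit for this project.
    enforcer.enforce("my-project-id", {"server_metadata_items": 5})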
* Move keypair quota error message into exception    Dan Smith    2022-02-24    1    -1/+1

The KeypairLimitExceeded exception has a message string which is never used.
We raise this exception and then return a different message to the API user.
For the unified limit work, we want to move to using oslo.limit's better
error messages when available, which means we need to honor the message in
the exception. This just moves the legacy string into the exception and
makes the API use that instead of overriding it.

Related to bp/unified-limits-nova
Change-Id: I217b3d0551291498191b556f62d78abf159778c2
* Merge "manage: Add image_property commands"Zuul2022-02-241-0/+4
|\
| * manage: Add image_property commandsLee Yarwood2022-02-241-0/+4
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This adds an image property show and image property set command to nova-manage to allow users to update image properties stored for an instance in system metadata without having to rebuild the instance. This is intended to ease migration to new machine types, as updating the machine type could potentially invalidate the existing image properties of an instance. Co-Authored-By: melanie witt <melwittt@gmail.com> Blueprint: libvirt-device-bus-model-update Change-Id: Ic8783053778cf4614742186e94059d5675121db1
* | Merge "[nova/libvirt] Support for checking and enabling SMM when needed"Zuul2022-02-171-0/+4
|\ \ | |/ |/|
| * [nova/libvirt] Support for checking and enabling SMM when neededImran Hussain2022-02-171-0/+4
| | | | | | | | | | | | | | | | | | | | | | | | Check the features list we get from the firmware descriptor file to see if we need SMM (requires-smm), if so then enable it as we aren't using the libvirt built in mechanism to enable it when grabbing the right firmware. Closes-Bug: 1958636 Change-Id: I890b3021a29fa546d9e36b21b1111e8537cd0020 Signed-off-by: Imran Hussain <ih@imranh.co.uk>
* | Merge "Move 'hw:pmu', 'hw_pmu' parsing to nova.virt.hardware"Zuul2022-02-151-5/+0
|\ \
| * | Move 'hw:pmu', 'hw_pmu' parsing to nova.virt.hardwareStephen Finucane2022-02-011-5/+0
| | | | | | | | | | | | | | | | | | | | | | | | | | | Virtually all of the code for parsing 'hw:'-prefixed extra specs and 'hw_'-prefix image metadata properties lives in the 'nova.virt.hardware' module. It makes sense for these to be included there. Do that. Change-Id: I1fabdf1827af597f9e5fdb40d5aef244024dd015 Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
* | | Merge "Join quota exception family trees"Zuul2022-02-101-16/+9
|\ \ \
| * | | Join quota exception family treesDan Smith2022-02-081-16/+9
| | |/ | |/| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | For some reason, we have two lineages of quota-related exceptions in Nova. We have QuotaError (which sounds like an actual error), from which all of our case-specific "over quota" exceptions inhert, such as KeypairLimitExceeded, etc. In contrast, we have OverQuota which lives outside that hierarchy and is unrelated. In a number of places, we raise one and translate to the other, or raise the generic QuotaError to signal an overquota situation, instead of OverQuota. This leads to places where we have to catch both, signaling the same over quota situation, but looking like there could be two different causes (i.e. an error and being over quota). This joins the two cases, by putting OverQuota at the top of the hierarchy of specific exceptions and removing QuotaError. The latter was only used in a few situations, so this isn't actually much change. Cleaning this up will help with the unified limits work, reducing the number of potential exceptions that mean the same thing. Related to blueprint bp/unified-limits-nova Change-Id: I17a3e20b8be98f9fb1a04b91fcf1237d67165871
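A minimal sketch of the resulting hierarchy (class bodies and messages are
simplified for illustration, not copied from nova/exception.py):

    class NovaException(Exception):
        msg_fmt = "An unknown exception occurred."

    class OverQuota(NovaException):
        # Now the single root for every over-quota condition.
        msg_fmt = "Quota exceeded for resources: %(overs)s"

    class KeypairLimitExceeded(OverQuota):
        # Case-specific exceptions inherit from OverQuota instead of the
        # removed QuotaError, so callers catch one family, not two.
        msg_fmt = "Maximum number of key pairs exceeded"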
* [yoga] Add support for VNIC_REMOTE_MANAGED    Dmitrii Shcherbakov    2022-02-09    1    -0/+5

Allow instances to be created with VNIC_TYPE_REMOTE_MANAGED ports. Those
ports are assumed to require remote-managed PCI devices which means that
operators need to tag those as "remote_managed" in the PCI whitelist if this
is the case (there is no meta information or standard means of querying this
information).

The following changes are introduced:

* Handling for VNIC_TYPE_REMOTE_MANAGED ports during allocation of
  resources for instance creation (remote_managed == true in
  InstancePciRequests);
* Usage of the noop os-vif plugin for VNIC_TYPE_REMOTE_MANAGED ports in
  order to avoid the invocation of the local representor plugging logic
  since a networking backend is responsible for that in this case;
* Expectation of bind time events for ports of VNIC_TYPE_REMOTE_MANAGED.
  Events for those arrive early from Neutron after a port update (before
  Nova begins to wait in the virt driver code), therefore Nova is set to
  avoid waiting for plug events for VNIC_TYPE_REMOTE_MANAGED ports;
* Making sure the service version is high enough on all compute services
  before creating instances with ports that have VNIC type
  VNIC_TYPE_REMOTE_MANAGED. Network requests are examined for the presence
  of port ids to determine the VNIC type via the Neutron API. If
  remote-managed ports are requested, a compute service version check is
  performed across all cells.

Change-Id: Ica09376951d49bc60ce6e33147477e4fa38b9482
Implements: blueprint integration-with-off-path-network-backends
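Requesting such a port from the API side looks roughly like the following
openstacksdk snippet; the cloud name, network name and the exact SDK
attribute spelling are assumptions on my part:

    import openstack

    conn = openstack.connect(cloud="mycloud")
    net = conn.network.find_network("private")

    # A port whose VNIC type asks for a remote-managed (e.g. DPU-backed) VF;
    # the PCI device is chosen from the pool tagged remote_managed="true".
    port = conn.network.create_port(
        network_id=net.id,
        binding_vnic_type="remote-managed",
    )
    # The port id can then be passed in the server create request.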
* Introduce remote_managed tag for PCI devs    Dmitrii Shcherbakov    2022-02-09    1    -0/+10

PCI devices may be managed remotely from the perspective of a hypervisor
host (e.g. by a SmartNIC DPU) which means that the VF control plane is not
available to the hypervisor. Depending on the presence of a remote_managed
device attribute in the InstancePCIRequest spec and available device types
in a pool, additional processing needs to be done:

* Filtering of devices marked as `remote_managed: "true"` in the whitelist
  configuration so that they are not used in legacy SR-IOV and hardware
  offload requests;
* Early error reporting if PFs marked as remote_managed="true" are present
  in the whitelist configuration. This is not supported explicitly since
  allocating such PFs would remove the associated VFs from the pool, and an
  instance with such a PF and its VFs will not have access to the control
  plane required for representor interface plugging at the SmartNIC DPU
  side. This configuration is not valid, which is enforced in the
  PCIDeviceStats code.
* Checking of the presence of a card serial number in the PCI VPD capability
  of a device if it was marked as `remote_managed: "true"` in the whitelist.
  The card serial number presence is mandatory because it is used for
  identification of a host in the networking backend that will handle the
  configuration of a given PCI device at the remote host side (i.e.
  representor plugging, flow programming).

For compatibility, all devices not explicitly marked as remote_managed in
the whitelist are assumed to have the remote_managed attribute set to False.

Implements: blueprint integration-with-off-path-network-backends
Change-Id: Ic44d5e206326827d00a751da3cea67afe3929a08
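For illustration, a whitelist/device_spec entry of the kind described above;
the vendor and product IDs are placeholders:

    # JSON value of a [pci] whitelist entry tagging VFs as remotely managed.
    # Tagging a PF this way is rejected, and the device must expose a serial
    # number in its PCI VPD capability.
    remote_managed_vf_spec = {
        "vendor_id": "15b3",
        "product_id": "101e",
        "remote_managed": "true",
    }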
* Fill the AcceleratorRequestBindingFailed exception msg info    songwenping    2021-12-14    1    -2/+3

Closes-Bug: #1950276
Change-Id: Iac1d74ebeefc8e4192896b10c76c16942dbe30fc
* [nova-manage]support extended resource request    Balazs Gibizer    2021-11-01    1    -22/+0

The nova-manage placement heal_allocations CLI is capable of healing missing
placement allocations due to port resource requests. To support the new
extended port resource request, this code needs to be adapted too.

When the heal_allocation command got the port resource request support in
Train, the only way to figure out the missing allocations was to dig into
the placement RP tree directly. Since then nova gained support for interface
attach with such ports, and to support that, placement gained support for
in_tree filtering in allocation candidate queries. So now the healing logic
can be generalized to the following. For a given instance:

1) Find the ports that have a resource request but no allocation key in the
   binding profile. These are the ports we need to heal.
2) Gather the RequestGroups from these ports and run an allocation_candidates
   query restricted to the current compute of the instance with in_tree
   filtering.
3) Extend the existing instance allocation with a returned allocation
   candidate and update the instance allocation in placement.
4) Update the binding profile of these ports in neutron.

The main change compared to the existing implementation is in step 2); the
rest is mostly the same.

Note that support for the old resource request format is kept alongside the
new resource request format until Neutron makes the new format mandatory.

blueprint: qos-minimum-guaranteed-packet-rate
Change-Id: I58869d2a5a4ed988fc786a6f1824be441dd48484
* Convert features not supported error to HTTPBadRequest    Ghanshyam Mann    2021-09-01    1    -13/+21

There is inconsistency in the return codes the nova API returns for "feature
not supported/implemented". The current return codes are 400, 409, and 403:

- 400 case: example: Multiattach Swap Volume Not Supported
- 403 case: Cyborg integration
- 409 case: example: Operation Not Supported For SEV, Operation Not
  Supported For VTPM

At the Xena PTG we agreed to fix this by returning 400 in all cases
- L446: https://etherpad.opendev.org/p/nova-xena-ptg

This commit converts all the "feature not supported" errors to
HTTPBadRequest (400). To avoid converting every NotSupported inherited
exception in the API controllers to HTTPBadRequest, a generic conversion is
added in the expected_errors() decorator.

Closes-Bug: #1938093
Change-Id: I410924668a73785f1bfe5c79827915d72e1d9e03
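The generic conversion reads roughly like the sketch below; it is simplified
(the real decorator in nova handles several other exception families), and
the local NotSupported class stands in for nova's base exception:

    import functools
    from webob import exc

    class NotSupported(Exception):
        """Stand-in for nova.exception.NotSupported and its children."""

    def expected_errors(*codes):
        def decorator(f):
            @functools.wraps(f)
            def wrapped(*args, **kwargs):
                try:
                    return f(*args, **kwargs)
                except NotSupported as e:
                    # Every "feature not supported" error now surfaces as a
                    # 400, instead of a mix of 400/403/409 per controller.
                    raise exc.HTTPBadRequest(explanation=str(e))
            return wrapped
        return decorator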
* Merge "Support boot with extended resource request"Zuul2021-08-311-3/+3
|\
| * Support boot with extended resource requestBalazs Gibizer2021-08-271-3/+3
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This adds the final missing pieces to support creating servers with ports having extended resource request. As the changes in the neutron interface code is called from nova-compute service during the port binding the compute service version is bumped. And a check is added to the compute-api to reject such server create requests if there are old computes in the cluster. Note that some of the negative and SRIOV related interface attach tests are also started to pass as they are not dependent on any of the interface attach specific implementation. Still interface attach is broken here as the failing of the positive tests show. blueprint: qos-minimum-guaranteed-packet-rate Change-Id: I9060cc9cb9e0d5de641ade78c5fd7e1cc77ade46
* api: Add support for 'hostname' parameter    Stephen Finucane    2021-01-14    1    -0/+4

Add microversion 2.90, which allows users to configure the hostname that
will be exposed via the nova metadata service when creating their instance.

Change-Id: I95047c1689ac14fa73eba48e19dc438988b78aad
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
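For illustration, the new field in a server create request body at
microversion 2.90 or later; everything apart from "hostname" is an arbitrary
example:

    create_server_body = {
        "server": {
            "name": "db01",
            # New in 2.90: the hostname exposed via the metadata service.
            "hostname": "db01-internal",
            "imageRef": "70a599e0-31e7-49b7-b260-868f441e862b",
            "flavorRef": "1",
            "networks": "auto",
        }
    }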
* Reject server operations with extended resource req    Balazs Gibizer    2021-08-21    1    -0/+7

As a precaution, reject all the server lifecycle operations that currently
do not support the port-resource-request-groups API extension. These are:

* resize
* migrate
* live migrate
* evacuate
* unshelve after shelve offload
* interface attach

This rejection will be removed in the patch that adds support for the given
operation.

blueprint: qos-minimum-guaranteed-packet-rate
Change-Id: I12c25550b08be6854b71ed3ad4c411a244a6c813
* Reject server create with extended resource req    Balazs Gibizer    2021-08-21    1    -0/+6

To prepare for the unlikely event that Neutron merges, and an operator
enables, the port-resource-request-groups neutron API extension before nova
adds support for it, this patch rejects server creation if such an extension
is enabled in Neutron. Enabling that extension has zero benefits without
nova support, hence the harsh but simple rejection.

A subsequent patch will reject server lifecycle operations in a more
sophisticated way, and as soon as we support some operations, like boot, the
deployer might rightfully choose to enable the Neutron extension.

Change-Id: I2c55d9da13a570efbc1c862116cea31aaa6aa02e
blueprint: qos-minimum-guaranteed-packet-rate
* Merge "Change the admin-visible logs for mdev support"Zuul2021-08-201-2/+2
|\
| * Change the admin-visible logs for mdev supportSylvain Bauza2021-08-051-2/+2
| | | | | | | | | | | | | | | | | | Now that we want to support generic stateless mdev devices, we can't really tell of pGPUs and vGPUs in our logs and we need to be more generic. Change-Id: I016773b6d6750db4f34ef6faac6d272c4177d883 Partially-Implements: blueprint generic-mdevs
* | Merge "smartnic support - reject server move and suspend"Zuul2021-08-191-0/+4
|\ \
| * | smartnic support - reject server move and suspendYongli He2021-08-051-0/+4
| |/ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Server with ARQ in the port does not support move and suspend, reject these operations in API stage: - resize - shelve - live_migrate - evacuate - suspend - attach/detach a smartnic port Reject create server with smartnic in port if minimal compute service version less than 57 Reject create server with port which have a malformed device profile that request multi devices, like: { "resources:CUSTOM_ACCELERATOR_FPGA": "2", "trait:CUSTOM_INTEL_PAC_ARRIA10": "required", } Implements: blueprint sriov-smartnic-support Change-Id: Ia705a0341fb067e746a3b91ec4fc6d149bcaffb8
* | Merge "Block servers with vGPU and device profile in heal_allocations"Zuul2021-08-181-0/+18
|\ \
| * | Block servers with vGPU and device profile in heal_allocationsBalazs Gibizer2021-08-061-0/+18
| |/ | | | | | | | | | | | | | | | | | | | | | | Nested allocations are only partially supported in nova-manage placement heal_allocations CLI. This patch documents the missing support and blocks healing instances with VGPU or Cyborg device profile request in the embedded flavor. Blocking is needed as if --forced is used with such instances then the tool could recreate an allocation ignoring some of these resources. Change-Id: I89ac90d2ea8bc268940869dbbc90352bfad5c0de Related-Bug: bug/1939020
* libvirt: Handle silent failures to extend volume within os-brick    Lee Yarwood    2021-08-04    1    -0/+5

As seen in bug #1849425, os-brick can silently fail to extend an underlying
volume device on the compute, returning a new_size of None to n-cpu. While
this should ultimately be addressed in os-brick, n-cpu can also handle this
before we eventually run into a type error when attempting floor division
later in the volume extend flow.

Change-Id: Ic8091537274a5ad27fb5af8939f81ed154b7ad7c
Closes-Bug: #1849425
* Use neutronclient's port binding APIs    Stephen Finucane    2021-07-14    1    -4/+4

Take advantage of the neutronclient bindings for the port binding APIs added
in neutronclient 7.1.0 to avoid having to vendor this stuff ourselves.

Change-Id: Icc284203fb53658abe304f24a62705217f90b22b
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
* db: Remove 'nova.db.sqlalchemy.utils'    Stephen Finucane    2021-06-16    1    -4/+0

This was missed in change I17db7cdaad2c6368092b4fb00d5959711ad249f9.

Change-Id: I9b964c8e68051a995635a3d5f5aa09af2b0dcb82
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
* api: Block unsupported actions with vDPA    Sean Mooney    2021-03-16    1    -0/+8

There are a number of operations that are known not to work with vDPA
interfaces and another few that may work but haven't been tested. Start
blocking these. In all cases where an operation is blocked, an HTTP 409
(Conflict) is returned. This will allow lifecycle operations to be enabled
as they are tested or bugs are addressed.

Change-Id: I7f3cbc57a374b2f271018a2f6ef33ef579798db8
Blueprint: libvirt-vdpa-support
* libvirt: Wire up 'os_secure_boot' property    Stephen Finucane    2021-03-09    1    -0/+4

The penultimate step in our journey from the secure boot wilderness. Start
configuring the relevant attribute of the guest if and when secure boot is
enabled.

Blueprint: allow-secure-boot-for-qemu-kvm-guests
Change-Id: Ic38ab840f59619bf921e5387cd7a11c88a77b2a5
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
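For reference, the image properties that typically go together for this to
take effect; the property names are the standard nova/glance ones, but
treating UEFI and the q35 machine type as prerequisites is my assumption
here:

    secure_boot_image_properties = {
        "hw_machine_type": "q35",      # secure boot needs the q35 machine type
        "hw_firmware_type": "uefi",    # and UEFI firmware
        "os_secure_boot": "required",  # "optional" and "disabled" also exist
    }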
* nova-manage: Add libvirt update_machine_type command    Lee Yarwood    2021-03-03    1    -0/+9

This change adds a second update command to the libvirt group within
nova-manage. This command will set or update the machine type of the
instance when the following criteria are met:

* The instance must have a ``vm_state`` of ``STOPPED``, ``SHELVED`` or
  ``SHELVED_OFFLOADED``.
* The machine type is supported. The supported list includes alias and
  versioned types of ``pc``, ``pc-i440fx``, ``pc-q35``, ``q35``, ``virt``
  or ``s390-ccw-virtio``.
* The update will not move the instance between underlying machine types.
  For example, ``pc`` to ``q35``.
* The update will not move the instance between an alias and versioned
  machine type or vice versa. For example, ``pc`` to ``pc-1.2.3`` or
  ``pc-1.2.3`` to ``pc``.

A --force flag is provided to skip the above checks but caution should be
taken as this could easily lead to the underlying ABI of the instance
changing when moving between machine types.

blueprint: libvirt-default-machine-type
Change-Id: I6b80021a2f90d3379c821dc8f02a72f350169eb3
* Merge "libvirt: Rationalize attachment of USB tablet"Zuul2021-02-241-5/+0
|\
| * libvirt: Rationalize attachment of USB tabletStephen Finucane2021-01-271-5/+0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Currently, nova will decide whether to attach a USB tablet device based on the below criteria: +-----------+-----------+-------------------+---------------------+ | Image | nova.conf | VNC/Spice Enabled | Result | +-----------+-----------+-------------------+---------------------+ | <unset> | <unset> | Yes | No device | | <unset> | <unset> | No | No device | | <unset> | usbtablet | Yes | USB tablet attached | | <unset> | usbtablet | No | **Warning** | | <unset> | ps2mouse | Yes | No device | | <unset> | ps2mouse | No | No device | | usbtablet | <unset> | Yes | USB tablet attached | | usbtablet | <unset> | No | **Exception** | | usbtablet | usbtablet | Yes | USB tablet attached | | usbtablet | usbtablet | No | **Warning** | | usbtablet | ps2mouse | Yes | USB tablet attached | | usbtablet | ps2mouse | No | **Warning** | +-----------+-----------+-------------------+---------------------+ (*) SPICE Enabled *and* agent disabled; if agent is enabled, it behaves like SPICE is disabled. (**) Note that there's an additional dimension missing here which is based on the 'os_type' of the instance, but that behaves exactly the same as the the VNC/SPICE enabled, only it's checking if the 'HVM' type is "enabled" or not. This behavior is unusual - if the image metadata property supersedes the host configuration, then why should we see different behavior based on the host configuration when the image is requesting a pointer model but no graphics are enabled? With this change, we rationalize this matrix. Instead of having this funky split, we completely ignore the request if a graphical console is not enabled on the host. Or, in graphical terms: +-----------+-----------+-------------------+---------------------+ | Image | nova.conf | VNC/Spice Enabled | Result | +-----------+-----------+-------------------+---------------------+ | <unset> | <unset> | Yes | No device | | <unset> | <unset> | No | No device | | <unset> | usbtablet | Yes | USB tablet attached | | <unset> | usbtablet | No | No device | | <unset> | ps2mouse | Yes | No device | | <unset> | ps2mouse | No | No device | | usbtablet | <unset> | Yes | USB tablet attached | | usbtablet | <unset> | No | No device | | usbtablet | usbtablet | Yes | USB tablet attached | | usbtablet | usbtablet | No | No device | | usbtablet | ps2mouse | Yes | USB tablet attached | | usbtablet | ps2mouse | No | No device | +-----------+-----------+-------------------+---------------------+ This will set us up quite nicely for a future 'hw_input_bus' image metadata property, which would allow us to choose between tablet and mouse independently of bus. Change-Id: I206109fdfdacf5eb67bd31338201548e429084ac Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
* | Merge "FUP: Catch and reraise routed nets exception"Zuul2021-02-231-1/+1
|\ \
| * | FUP: Catch and reraise routed nets exceptionSylvain Bauza2021-02-191-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | Based on review feedback, we prefer to have the exception for routed networks to not be prefilter-specific and just reraise with the right exception type in the prefilter. Change-Id: I9ccbbf3be8efc65fe7f480ad545fb5fc70767988
* | | Merge "Add net & utils methods for routed nets & segments"Zuul2021-02-231-0/+5