Implementation for blueprint libvirt-viommu-device.
Providing the `hw:viommu_model` property in flavor extra_specs, or the
`hw_viommu_model` property in image properties, enables a vIOMMU device
for the libvirt guest.
[1] https://www.berrange.com/posts/2017/02/16/setting-up-a-nested-kvm-guest-for-developing-testing-pci-device-assignment-with-numa/
[2] https://review.opendev.org/c/openstack/nova-specs/+/840310
Implements: blueprint libvirt-viommu-device
Change-Id: Ief9c550292788160433a28a7a1c36ba38a6bc849
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
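Illustrative usage with the standard client; the model value shown
("virtio") is an assumption taken from the referenced spec, not from this
commit message:
    openstack flavor set --property hw:viommu_model=virtio $FLAVOR
    openstack image set --property hw_viommu_model=virtio $IMAGE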
This patch adds the plumbing for rebuilding a volume-backed
instance in compute code. This functionality will be enabled
in a subsequent patch which adds a new microversion and the
external support for requesting it.
The flow of the operation is as follows:
1) Create an empty attachment
2) Detach the volume
3) Request cinder to reimage the volume
4) Wait for cinder to notify success to nova (via external events)
5) Update and complete the attachment
Related blueprint volume-backed-server-rebuild
Change-Id: I0d889691de1af6875603a9f0f174590229e7be18
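A rough Python sketch of the flow above; every name here is hypothetical
and stands in for Nova's real compute/Cinder plumbing rather than quoting
it:
    def rebuild_volume_backed(volume_api, events, instance_uuid, bdm, new_image_id):
        # 1) Create an empty attachment so the volume stays reserved for the instance.
        new_attachment_id = volume_api.attachment_create(bdm.volume_id, instance_uuid)
        # 2) Detach the volume by deleting the old, connected attachment.
        volume_api.attachment_delete(bdm.attachment_id)
        # 3) Ask Cinder to reimage the volume with the new image.
        volume_api.reimage(bdm.volume_id, new_image_id)
        # 4) Wait for Cinder to signal success to Nova via an external event.
        events.wait_for('volume-reimaged', bdm.volume_id)
        # 5) Complete the new attachment, then record it on the BDM.
        volume_api.attachment_complete(new_attachment_id)
        bdm.attachment_id = new_attachment_id
        bdm.save()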
During a normal update_available_resources run, if the local provider
tree cache is invalid (e.g. because the scheduler made an allocation that
bumped the generation of the RPs) and the virt driver tries to update the
inventory of an RP based on that cache, Placement will report a conflict,
the report client will invalidate the cache, and the retry decorator on
ResourceTracker._update_to_placement will re-drive the update on top of
the fresh RP data.
However, the same thing can happen during a reshape as well, but the
retry mechanism is missing from that code path, so the stale cache can
cause reshape failures.
This patch adds specific error handling in the reshape code path to
implement the same retry mechanism that exists for inventory updates.
blueprint: pci-device-tracking-in-placement
Change-Id: Ieb954a04e6aba827611765f7f401124a1fe298f3
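A minimal sketch of the retry-on-conflict pattern described above; the
helper names are hypothetical and do not match Nova's actual report
client API:
    class PlacementConflict(Exception):
        """Placement rejected the request due to an RP generation conflict."""

    def reshape_with_retry(report_client, node_uuid, build_reshape_request, attempts=4):
        for _ in range(attempts):
            # Build the reshape request from a freshly fetched provider tree so
            # the RP generations are current rather than taken from a stale cache.
            tree = report_client.get_provider_tree(node_uuid)
            try:
                report_client.reshape(build_reshape_request(tree))
                return
            except PlacementConflict:
                # The scheduler bumped an RP generation under us; drop the cache
                # and retry from fresh data, as the inventory update path does.
                report_client.invalidate_provider_cache(node_uuid)
        raise PlacementConflict('reshape still conflicting after %d attempts' % attempts)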
This change adds a new hw:locked_memory extra spec and hw_locked_memory
image property to control whether guest memory should be locked,
preventing it from being swapped out. This change adds docs, extends the
flavor validators for the new extra spec and adds the new image
property.
Blueprint: libvirt-viommu-device
Change-Id: Id3779594f0078a5045031aded2ed68ee4301abbd
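Illustrative usage; the boolean value shown is an assumption, not quoted
from this commit:
    openstack flavor set --property hw:locked_memory=true $FLAVOR
    openstack image set --property hw_locked_memory=true $IMAGE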
If two VFs from the same PF are configured by two separate
[pci]device_spec entries then it is possible to define contradicting
resource classes or traits. This patch detects and rejects such
configurations.
blueprint: pci-device-tracking-in-placement
Change-Id: I623ab24940169991a400eba854c9619a11662a91
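A hypothetical example of the kind of configuration that is now rejected;
the PCI addresses and custom resource class names are illustrative only:
    [pci]
    device_spec = { "address": "0000:81:00.1", "resource_class": "CUSTOM_VF_FAST" }
    device_spec = { "address": "0000:81:00.2", "resource_class": "CUSTOM_VF_SLOW" }
Here both VFs would belong to the same PF, so the two entries define
contradicting resource classes for it.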
The PCI tracking in Placement does not support a configuration where
both a PF and its children VFs are configured for nova usage. This patch
adds logic to detect and reject such a configuration. To be able to kill
the service if it is started with such a config, special exception
handling is added for the update_available_resource code path, similar
to how a failed reshape is handled.
blueprint: pci-device-tracking-in-placement
Change-Id: I708724465d2afaa37a65c231c64da88fc8b458eb
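Again for illustration, with hypothetical addresses, this is the
now-rejected shape where a PF and one of its VFs are both configured:
    [pci]
    device_spec = { "address": "0000:81:00.0" }  # the PF
    device_spec = { "address": "0000:81:00.1" }  # a VF of the same PF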
A new PCI resource handler is added to the update_available_resources
code path to update the ProviderTree with PCI device RPs, inventories and
traits.
It is a bit different from the other Placement inventory reporters. It
does not run at the virt driver level, as PCI is tracked in a generic way
by the PCI tracker in the resource tracker, so the virt-specific
information is already parsed and abstracted by the resource tracker.
Another difference is that, to support rolling upgrades, the PCI handler
code needs to be prepared for situations where the scheduler does not
create PCI allocations even after some of the computes have already
started reporting inventories and healing PCI allocations. So the code
is not designed to do a single, one-shot reshape at startup, but
instead to do a continuous healing of the allocations. We can remove
this continuous healing after the PCI prefilter is made mandatory
in a future release.
The whole PCI placement reporting behavior is disabled by default while
it is incomplete. When it is functionally complete, a new
[pci]report_in_placement config option will be added to allow enabling
the feature. This config option is intentionally not added by this patch
as we don't want to allow enabling this logic yet.
blueprint: pci-device-tracking-in-placement
Change-Id: If975c3ec09ffa95f647eb4419874aa8417a59721
This patch continues cleaning up the "whitelist" terminology by
renaming exception.PciConfigInvalidWhitelist.
blueprint: pci-device-tracking-in-placement
Change-Id: I054e17090818f01f7e7621ad6144fd7908346da2
The PowerVM driver was deprecated in November 2021 as part of change
Icdef0a03c3c6f56b08ec9685c6958d6917bc88cb. As noted there, all
indications suggest that this driver is no longer maintained and may be
abandonware. It's been some time and there's still no activity here so
it's time to abandon this for real.
This isn't as tied into the codebase as the old XenAPI driver was, so
removal is mostly a case of deleting large swathes of code. Lovely.
Change-Id: Ibf4f36136f2c65adad64f75d665c00cf2de4b400
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
This patch introduces changes to the compute API that allow a
PROJECT_ADMIN to unshelve a shelved-offloaded server to a specific host.
This patch also adds the ability to unpin the availability_zone of an
instance that is bound to one.
Implements: blueprint unshelve-to-host
Change-Id: Ieb4766fdd88c469574fad823e05fe401537cdc30
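The request is expected to look roughly like the following hypothetical
sketch; the exact microversion and body are defined by the follow-up API
patch:
    POST /servers/{server_id}/action
    {
        "unshelve": {
            "host": "compute-02"
        }
    }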
Local API and DB limits are limits on resources that are counted either
as an API request parameter (example: server metadata items) or as
records in the database (example: server key pairs).
Future patches will make use of this logic, and actually enforce the
limits. This patch just adds the infrastructure to allow for the
enforcement of the limits.
We are moving all existing quotas to be managed via Keystone's
unified limits.
To stop confusion between injected_file_path_length and
injected_file_path_bytes, the unified limit in Keystone will use the
latter name to match the name used in the API.
These local limits are all about preventing excessive load on the API
and database and have little to do with resource usage. Accordingly,
these limits are represented by Keystone registered limits only.
Local limits include things that just limit items in an API request:
* metadata_items
* injected_files
* injected_file_content_bytes
* injected_file_path_bytes
Local limits also include things that are stored in the database:
* key_pairs
* server_groups
* server_group_members
Some resource names have been changed to prepend a prefix of "server_"
in order to disambiguate them from other potential unified limits in
keystone:
* metadata_items => server_metadata_items
* injected_files => server_injected_files
* injected_file_content_bytes => server_injected_file_content_bytes
* injected_file_path_bytes => server_injected_file_path_bytes
* key_pairs => server_key_pairs
Note that each of the above is counted with a different scope. This new
code ensures that key_pairs are counted per user, server_groups are
counted per project and server_group_members are counted per
server_group.
Note: previously server_group_members were counted per user inside each
server_group, which has proved very confusing, as adding more users to
a project increased the maximum allowed size of a server_group.
blueprint unified-limits-nova
Change-Id: I0b6f4d29aaee1d71541a95cbecfd0708aac325d2
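As an illustration of where these limits would live, a Keystone
registered limit can be created with the standard client; the value 128
is only an example default, not taken from this commit:
    openstack registered limit create --service nova \
        --default-limit 128 server_metadata_items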
The KeypairLimitExceeded exception has a message string which is never
used. We raise this exception and then return a different message to
the API user. For the unified limit work, we want to move to using
oslo.limit's better error messages when available, which means we
need to honor the message in the exception. This just moves the
legacy string into the exception and makes the API use that instead
of overriding it.
Related to bp/unified-limits-nova
Change-Id: I217b3d0551291498191b556f62d78abf159778c2
This adds an image property show and image property set command to
nova-manage to allow users to update image properties stored for an
instance in system metadata without having to rebuild the instance.
This is intended to ease migration to new machine types, as updating
the machine type could potentially invalidate the existing image
properties of an instance.
Co-Authored-By: melanie witt <melwittt@gmail.com>
Blueprint: libvirt-device-bus-model-update
Change-Id: Ic8783053778cf4614742186e94059d5675121db1
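Hedged usage sketch; the exact argument syntax may differ, see the
nova-manage documentation added by this change:
    nova-manage image_property show <instance_uuid> hw_machine_type
    nova-manage image_property set <instance_uuid> --property hw_machine_type=q35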
Check the features list we get from the firmware descriptor file
to see if we need SMM (requires-smm); if so, enable it, as we aren't
using libvirt's built-in mechanism to enable it when grabbing the right
firmware.
Closes-Bug: 1958636
Change-Id: I890b3021a29fa546d9e36b21b1111e8537cd0020
Signed-off-by: Imran Hussain <ih@imranh.co.uk>
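For reference, enabling SMM corresponds to the following standard libvirt
domain XML feature element; shown only as an illustration of the setting
involved:
    <features>
      <smm state='on'/>
    </features>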
Virtually all of the code for parsing 'hw:'-prefixed extra specs and
'hw_'-prefixed image metadata properties lives in the 'nova.virt.hardware'
module. It makes sense for these to be included there. Do that.
Change-Id: I1fabdf1827af597f9e5fdb40d5aef244024dd015
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
For some reason, we have two lineages of quota-related exceptions in
Nova. We have QuotaError (which sounds like an actual error), from
which all of our case-specific "over quota" exceptions inherit, such
as KeypairLimitExceeded, etc. In contrast, we have OverQuota which
lives outside that hierarchy and is unrelated. In a number of places,
we raise one and translate to the other, or raise the generic
QuotaError to signal an overquota situation, instead of OverQuota.
This leads to places where we have to catch both, signaling the same
over quota situation, but looking like there could be two different
causes (i.e. an error and being over quota).
This joins the two cases, by putting OverQuota at the top of the
hierarchy of specific exceptions and removing QuotaError. The latter
was only used in a few situations, so this isn't actually much of a change.
Cleaning this up will help with the unified limits work, reducing the
number of potential exceptions that mean the same thing.
Related to blueprint bp/unified-limits-nova
Change-Id: I17a3e20b8be98f9fb1a04b91fcf1237d67165871
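A simplified sketch of the resulting hierarchy; illustrative only, the
real classes live in nova.exception and carry message formatting:
    class NovaException(Exception):
        pass

    class OverQuota(NovaException):
        """Single root for every "over quota" condition."""

    class KeypairLimitExceeded(OverQuota):
        """Case-specific exceptions now inherit from OverQuota, not QuotaError."""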
Allow instances to be created with VNIC_TYPE_REMOTE_MANAGED ports.
Those ports are assumed to require remote-managed PCI devices which
means that operators need to tag those as "remote_managed" in the PCI
whitelist if this is the case (there is no meta information or standard
means of querying this information).
The following changes are introduced:
* Handling for VNIC_TYPE_REMOTE_MANAGED ports during allocation of
resources for instance creation (remote_managed == true in
InstancePciRequests);
* Usage of the noop os-vif plugin for VNIC_TYPE_REMOTE_MANAGED ports
in order to avoid the invocation of the local representor plugging
logic since a networking backend is responsible for that in this
case;
* Expectation of bind-time events for ports of VNIC_TYPE_REMOTE_MANAGED.
Events for those arrive early from Neutron after a port update (before
Nova begins to wait in the virt driver code), therefore Nova is set
to avoid waiting for plug events for VNIC_TYPE_REMOTE_MANAGED ports;
* Making sure the service version is high enough on all compute services
before creating instances with ports that have VNIC type
VNIC_TYPE_REMOTE_MANAGED. Network requests are examined for the presence
of port ids to determine the VNIC type via Neutron API. If
remote-managed ports are requested, a compute service version check
is performed across all cells.
Change-Id: Ica09376951d49bc60ce6e33147477e4fa38b9482
Implements: blueprint integration-with-off-path-network-backends
PCI devices may be managed remotely from the perspective of a hypervisor
host (e.g. by a SmartNIC DPU) which means that the VF control plane is
not available to the hypervisor. Depending on the presence of a
remote_managed device attribute in the InstancePCIRequest spec and
available device types in a pool, additional processing needs to be
done:
* Filtering of devices marked as `remote_managed: "true"` in the
whitelist configuration so that they are not used in legacy SR-IOV
and hardware offload requests;
* Early error reporting if PFs marked as remote_managed="true" are
present in the whitelist configuration. This is not supported
explicitly since allocating such PFs would remove the associated
VFs from the pool, and an instance with such a PF and its VFs would
not have access to the control plane required for representor
interface plugging at the SmartNIC DPU side. This configuration
is not valid, which is enforced in the PCIDeviceStats code.
* Checking of the presence of a card serial number in the PCI VPD
capability of a device if it was marked as `remote_managed: "true"`
in the whitelist. The card serial number presence is mandatory
because it is used for identification of a host in the networking
backend that will handle the configuration of a given PCI device at
the remote host side (i.e. representor plugging, flow programming).
For compatibility, all devices not explicitly marked as remote_managed
in the whitelist are assumed to have the remote_managed attribute set to
False.
Implements: blueprint integration-with-off-path-network-backends
Change-Id: Ic44d5e206326827d00a751da3cea67afe3929a08
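A hypothetical whitelist entry tagging a VF as remote-managed; the
vendor/product IDs are placeholders:
    [pci]
    passthrough_whitelist = {"vendor_id": "<vf_vendor_id>", "product_id": "<vf_product_id>", "remote_managed": "true"}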
Closes-Bug: #1950276
Change-Id: Iac1d74ebeefc8e4192896b10c76c16942dbe30fc
The nova-manage placement heal_allocations CLI is capable of healing
missing placement allocations due to port resource requests. To support
the new extended port resource request this code needs to be adapted
too.
When the heal_allocations command got the port resource request
support in Train, the only way to figure out the missing allocations was
to dig into the placement RP tree directly. Since then nova has gained
support for interface attach with such ports and, to support that,
placement has gained support for in_tree filtering in allocation
candidate queries. So now the healing logic can be generalized to the
following, for a given instance:
1) Find the ports that have a resource request but no allocation key in
the binding profile. These are the ports we need to heal.
2) Gather the RequestGroups from these ports and run an
allocation_candidates query restricted to the current compute of the
instance with in_tree filtering.
3) Extend the existing instance allocation with a returned allocation
candidate and update the instance allocation in placement.
4) Update the binding profile of these ports in neutron
The main change compared to the existing implementation is in step 2);
the rest is mostly the same.
Note that support for the old resource request format is kept alongside
the new resource request format until Neutron makes the new format
mandatory.
blueprint: qos-minimum-guaranteed-packet-rate
Change-Id: I58869d2a5a4ed988fc786a6f1824be441dd48484
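Typical invocation for a single instance; hedged, consult the command's
help output for the authoritative flags:
    nova-manage placement heal_allocations --instance <instance_uuid> --dry-run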
There is inconsistency in the return codes the nova API returns
for 'Feature not supported/implemented'. The current return
codes are 400, 409, and 403.
- 400 case: Example: Multiattach Swap Volume Not Supported
- 403 case: Cyborg integration
- 409 case: Example: Operation Not Supported For SEV,
Operation Not Supported For VTPM
At the Xena PTG, we agreed to fix this by returning 400 in all cases
- L446: https://etherpad.opendev.org/p/nova-xena-ptg
This commit converts all of the 'feature not supported' errors to
HTTPBadRequest (400).
To avoid converting every NotSupported-inherited exception
in the API controllers to HTTPBadRequest individually, a generic
conversion is added in the expected_errors() decorator.
Closes-Bug: #1938093
Change-Id: I410924668a73785f1bfe5c79827915d72e1d9e03
This adds the final missing pieces to support creating servers with
ports having an extended resource request. As the changed neutron
interface code is called from the nova-compute service during port
binding, the compute service version is bumped, and a check is added to
the compute API to reject such server create requests if there are old
computes in the cluster.
Note that some of the negative and SRIOV-related interface attach
tests also start to pass, as they do not depend on any of the
interface attach specific implementation. Still, interface attach is
broken here, as the failing positive tests show.
blueprint: qos-minimum-guaranteed-packet-rate
Change-Id: I9060cc9cb9e0d5de641ade78c5fd7e1cc77ade46
Add microversion 2.90, which allows users to configure the
hostname that will be exposed via the nova metadata service when
creating their instance.
Change-Id: I95047c1689ac14fa73eba48e19dc438988b78aad
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
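A hedged example of a server create request using the new field; the
field placement is the only point being illustrated, other attributes are
placeholders:
    POST /servers   (OpenStack-API-Version: compute 2.90)
    {
        "server": {
            "name": "web-1",
            "hostname": "custom-host",
            "imageRef": "<image_uuid>",
            "flavorRef": "<flavor_id>",
            "networks": "auto"
        }
    }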
As a precaution, reject all the server lifecycle operations that currently
do not support the port-resource-request-groups API extension. These
are:
* resize
* migrate
* live migrate
* evacuate
* unshelve after shelve offload
* interface attach
This rejection will be removed in the patch that adds support for the
given operation.
blueprint: qos-minimum-guaranteed-packet-rate
Change-Id: I12c25550b08be6854b71ed3ad4c411a244a6c813
To prepare for the unlikely event that Neutron merges the
port-resource-request-groups neutron API extension and an operator
enables it before nova adds support for it, this patch rejects server
creation if that extension is enabled in Neutron. Enabling that extension
has zero benefit without nova support, hence the harsh but simple
rejection.
A subsequent patch will reject server lifecycle operations in a more
sophisticated way, since as soon as we support some operations, like
boot, the deployer might rightfully choose to enable the Neutron
extension.
Change-Id: I2c55d9da13a570efbc1c862116cea31aaa6aa02e
blueprint: qos-minimum-guaranteed-packet-rate
Now that we want to support generic stateless mdev devices, we can't
really speak only of pGPUs and vGPUs in our logs, and we need to be more
generic.
Change-Id: I016773b6d6750db4f34ef6faac6d272c4177d883
Partially-Implements: blueprint generic-mdevs
Servers with an ARQ in a port do not support move and suspend
operations; reject these operations at the API stage:
- resize
- shelve
- live_migrate
- evacuate
- suspend
- attach/detach a smartnic port
Reject creating a server with a smartnic in a port if the minimum
compute service version is less than 57.
Reject creating a server with a port which has a malformed device
profile that requests multiple devices, like:
    {
        "resources:CUSTOM_ACCELERATOR_FPGA": "2",
        "trait:CUSTOM_INTEL_PAC_ARRIA10": "required",
    }
Implements: blueprint sriov-smartnic-support
Change-Id: Ia705a0341fb067e746a3b91ec4fc6d149bcaffb8
Nested allocations are only partially supported by the nova-manage
placement heal_allocations CLI. This patch documents the missing support
and blocks healing of instances with a VGPU or Cyborg device profile
request in the embedded flavor. Blocking is needed because if --force is
used with such instances, the tool could recreate an allocation that
ignores some of these resources.
Change-Id: I89ac90d2ea8bc268940869dbbc90352bfad5c0de
Related-Bug: bug/1939020
As seen in bug #1849425, os-brick can silently fail to extend an
underlying volume device on the compute, returning a new_size of None to
n-cpu. While this should ultimately be addressed in os-brick, n-cpu can
also handle this before we eventually run into a TypeError when
attempting floor division later in the volume extend flow.
Change-Id: Ic8091537274a5ad27fb5af8939f81ed154b7ad7c
Closes-Bug: #1849425
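A minimal sketch of the kind of guard this implies; names are
illustrative rather than Nova's actual code:
    from oslo_utils import units

    def extended_volume_size_gb(connector, connection_info):
        new_size = connector.extend_volume(connection_info)
        if new_size is None:
            # os-brick silently failed to extend the device (bug #1849425);
            # fail loudly instead of hitting a TypeError on the division below.
            raise ValueError('os-brick did not report a new size for the volume')
        return new_size // units.Gi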
Take advantage of the neutronclient bindings for the port binding APIs
added in neutronclient 7.1.0 to avoid having to vendor this stuff
ourselves.
Change-Id: Icc284203fb53658abe304f24a62705217f90b22b
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
This was missed in change I17db7cdaad2c6368092b4fb00d5959711ad249f9.
Change-Id: I9b964c8e68051a995635a3d5f5aa09af2b0dcb82
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
There are a number of operations that are known not to work with vDPA
interfaces and another few that may work but haven't been tested. Start
blocking these. In all cases where an operation is blocked, an HTTP 409
(Conflict) is returned. This will allow lifecycle operations to be
enabled as they are tested or bugs are addressed.
Change-Id: I7f3cbc57a374b2f271018a2f6ef33ef579798db8
Blueprint: libvirt-vdpa-support
The penultimate step in our journey from the secure boot wilderness.
Start configuring the relevant attribute of the guest if and when secure
boot is enabled.
Blueprint: allow-secure-boot-for-qemu-kvm-guests
Change-Id: Ic38ab840f59619bf921e5387cd7a11c88a77b2a5
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
This change adds a second update command to the libvirt group
within nova-manage. This command will set or update the machine type of
the instance when the following criteria are met:
* The instance must have a ``vm_state`` of ``STOPPED``, ``SHELVED`` or
``SHELVED_OFFLOADED``.
* The machine type is supported. The supported list includes alias and
versioned types of ``pc``, ``pc-i440fx``, ``pc-q35``, ``q35``, ``virt``
or ``s390-ccw-virtio``.
* The update will not move the instance between underlying machine types.
For example, ``pc`` to ``q35``.
* The update will not move the instance between an alias and versioned
machine type or vice versa. For example, ``pc`` to ``pc-1.2.3`` or
``pc-1.2.3`` to ``pc``.
A --force flag is provided to skip the above checks but caution
should be taken as this could easily lead to the underlying ABI of the
instance changing when moving between machine types.
blueprint: libvirt-default-machine-type
Change-Id: I6b80021a2f90d3379c821dc8f02a72f350169eb3
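Hedged usage sketch of the new command; the machine type shown is just an
example of a versioned q35 type, and the exact syntax is described in the
nova-manage docs added by this change:
    nova-manage libvirt update_machine_type <instance_uuid> pc-q35-5.2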
Currently, nova will decide whether to attach a USB tablet device based
on the below criteria:
+-----------+-----------+-------------------+---------------------+
| Image | nova.conf | VNC/Spice Enabled | Result |
+-----------+-----------+-------------------+---------------------+
| <unset> | <unset> | Yes | No device |
| <unset> | <unset> | No | No device |
| <unset> | usbtablet | Yes | USB tablet attached |
| <unset> | usbtablet | No | **Warning** |
| <unset> | ps2mouse | Yes | No device |
| <unset> | ps2mouse | No | No device |
| usbtablet | <unset> | Yes | USB tablet attached |
| usbtablet | <unset> | No | **Exception** |
| usbtablet | usbtablet | Yes | USB tablet attached |
| usbtablet | usbtablet | No | **Warning** |
| usbtablet | ps2mouse | Yes | USB tablet attached |
| usbtablet | ps2mouse | No | **Warning** |
+-----------+-----------+-------------------+---------------------+
(*) SPICE Enabled *and* agent disabled; if agent is enabled, it behaves
like SPICE is disabled.
(**) Note that there's an additional dimension missing here which is
based on the 'os_type' of the instance, but that behaves exactly
the same as the VNC/SPICE enabled case, only it's checking whether
the 'HVM' type is "enabled" or not.
This behavior is unusual - if the image metadata property supersedes the
host configuration, then why should we see different behavior based on
the host configuration when the image is requesting a pointer model but
no graphics are enabled?
With this change, we rationalize this matrix. Instead of having this
funky split, we completely ignore the request if a graphical console is
not enabled on the host. Or, in graphical terms:
+-----------+-----------+-------------------+---------------------+
| Image | nova.conf | VNC/Spice Enabled | Result |
+-----------+-----------+-------------------+---------------------+
| <unset> | <unset> | Yes | No device |
| <unset> | <unset> | No | No device |
| <unset> | usbtablet | Yes | USB tablet attached |
| <unset> | usbtablet | No | No device |
| <unset> | ps2mouse | Yes | No device |
| <unset> | ps2mouse | No | No device |
| usbtablet | <unset> | Yes | USB tablet attached |
| usbtablet | <unset> | No | No device |
| usbtablet | usbtablet | Yes | USB tablet attached |
| usbtablet | usbtablet | No | No device |
| usbtablet | ps2mouse | Yes | USB tablet attached |
| usbtablet | ps2mouse | No | No device |
+-----------+-----------+-------------------+---------------------+
This will set us up quite nicely for a future 'hw_input_bus' image
metadata property, which would allow us to choose between tablet and
mouse independently of bus.
Change-Id: I206109fdfdacf5eb67bd31338201548e429084ac
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
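For context, the two inputs in the matrix map to configuration like the
following; option and property names are the standard ones, values shown
as examples:
    # nova.conf on the compute host
    [DEFAULT]
    pointer_model = usbtablet

    # per-image request
    openstack image set --property hw_pointer_model=usbtablet $IMAGE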
Based on review feedback, we prefer to have the exception for
routed networks not be prefilter-specific, and to just reraise
with the right exception type in the prefilter.
Change-Id: I9ccbbf3be8efc65fe7f480ad545fb5fc70767988