We only care about neutron security groups now, so a lot of nova-network-only
cruft can be removed. Do just that.
Change-Id: I2a360e766261a186f9edf6ceb47a786aea2957eb
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
Nova can occasionally fail to carry out server actions that require
calls to the neutron API if haproxy closes a connection after idle time
just as an incoming request attempts to re-use the connection while it
is being torn down.
In order to be more resilient to this scenario, we can add a config
option for neutron client to retry requests, similar to our existing
CONF.cinder.http_retries and CONF.glance.num_retries options.
Because we create our neutron client [1] using a keystoneauth1 session
[2], we can set the 'connect_retries' keyword argument to let
keystoneauth1 handle connection retries.
Closes-Bug: #1866937
[1] https://github.com/openstack/nova/blob/57459c3429ce62975cebd0cd70936785bdf2f3a4/nova/network/neutron.py#L226-L237
[2] https://docs.openstack.org/keystoneauth/latest/api/keystoneauth1.session.html#keystoneauth1.session.Session
Change-Id: Ifb3afb13aff7e103c2e80ade817d0e63b624604a
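A minimal sketch of the mechanism (not nova's actual code), assuming a
keystoneauth1 release whose Session accepts connect_retries; the credential
values are placeholders:

    from keystoneauth1 import session as ks_session
    from keystoneauth1.identity import v3

    # Placeholder credentials for illustration only.
    auth = v3.Password(auth_url='http://keystone.example.com:5000/v3',
                       username='nova', password='secret',
                       project_name='service',
                       user_domain_id='default',
                       project_domain_id='default')

    # connect_retries makes keystoneauth1 retry requests that fail at the
    # connection level, e.g. when haproxy tears down an idle connection
    # just as we try to re-use it.
    sess = ks_session.Session(auth=auth, connect_retries=3)

Any client built on top of such a session inherits the retry behaviour,
which is why a single config option suffices.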
This patch adds support for unshelving an offloaded server with qos ports.
To do that, this patch:
* collects the port resource requests from neutron before the scheduler
  is called to select the target of the unshelve.
* calculates the request group to resource provider mapping after the
  scheduler has selected the target host.
* updates the InstancePCIRequest to drive the pci_claim to allocate VFs
  from the same PF the scheduler allocated bandwidth from.
* updates the binding profile of the qos ports so that the allocation
  key of the binding profile points to the RPs the port is allocated
  from (see the sketch after this message).
As this was the last move operation to be supported, the compute service
version is bumped to indicate such support. This will be used in later
patches to implement a global service level check in the API.
Note that unshelve does not have a re-schedule loop and all the RPC
changes were committed in Queens.
Two error cases need special care by rolling back allocations before
putting the instance back to SHELVED_OFFLOADED state:
* if the InstancePCIRequest cannot be updated according to the new target
  host of the unshelve
* if updating the port binding fails in neutron during unshelve
Change-Id: I678722b3cf295c89110967d5ad8c0c964df4cb42
blueprint: support-move-ops-with-qos-ports-ussuri
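A rough sketch of the binding profile update from the last bullet, using
python-neutronclient; provider_mappings (request group id, which for port
resource requests is the port UUID, mapped to a list of provider UUIDs) is
assumed to come from the scheduling result:

    # Point each qos port's 'allocation' key at the resource provider
    # that was selected for the port's request group.
    for port in qos_ports:
        profile = port.get('binding:profile') or {}
        profile['allocation'] = provider_mappings[port['id']][0]
        neutron.update_port(
            port['id'], {'port': {'binding:profile': profile}})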
With the removal of the related quotas, the 'FixedIP', 'FloatingIP',
'SecurityGroupRule', 'Network' and 'NetworkList' objects can all be
deleted. As noted previously, the 'SecurityGroup' object must be
retained until we bump the 'Instance' object version to v3 and drop the
'security_group' field. In addition, we need to make a small change to
the object versioning test to ensure we're only testing objects in the
nova namespace. If we don't do this, we end up pulling in things like
os_vif objects. This was previously noted in change
Ica9f217d0318fc7c2db4bcdea12d00aad749c30c.
Change-Id: I6aba959eff1e50af4ac040148c7177f235a09a1f
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
Floating IPs don't have to have an associated port, so there's no reason
to error out when this is the case.
Change-Id: I50c79843bf819b731c597dbfe72090cdf02c7641
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
Closes-bug: #1861876
In the worst case scenario, we could list N floating IPs, each of which
has a different network. This would result in N additional calls to
neutron - one for each of the networks. Avoid this by calling neutron
once for all networks associated with the floating IPs.
Change-Id: If067a730b0fcbe3f59f4472b00c690cc43be4b3b
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
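With python-neutronclient, the batched lookup might look like this sketch
(variable names are illustrative):

    # One networks query for all the floating IPs instead of one per IP.
    net_ids = list({fip['floating_network_id'] for fip in floating_ips})
    networks = neutron.list_networks(id=net_ids)['networks']
    nets_by_id = {net['id']: net for net in networks}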
The 'fip-port-details' API extension was added to neutron in Rocky [1]
and is optional. As a result, we cannot rely on the 'port_details' field
being present in API responses. If it is not, we need to make a second
query for all ports and build 'port_details' using the 'port_id' field.
[1] https://docs.openstack.org/releasenotes/neutron-lib/rocky.html#relnotes-1-14-0-stable-rocky-new-features
Change-Id: Ifb96f31f471cc0a25c1dfce2161a669b97a384ae
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
Closes-bug: #1861876
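A sketch of the described fallback, assuming python-neutronclient; the
surrounding variables are illustrative:

    fips = neutron.list_floatingips()['floatingips']
    if fips and 'port_details' not in fips[0]:
        # 'fip-port-details' is unavailable: fetch the ports in a single
        # query and rebuild 'port_details' from each fip's 'port_id'.
        port_ids = [fip['port_id'] for fip in fips if fip['port_id']]
        ports = {p['id']: p
                 for p in neutron.list_ports(id=port_ids)['ports']}
        for fip in fips:
            fip['port_details'] = ports.get(fip['port_id'])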
Change I77b1cfeab3c00c6c3d7743ba51e12414806b71d2 reworked things such
that random methods in 'nova.network.api.API' weren't using the legacy
nova-network 'FloatingIP' object as a container. Now do the same for the
'Network' object, which is similarly unnecessary here.
Change-Id: I62fd27856a52adc65a90ad6301a6e47928347f52
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
We want to get rid of the 'FloatingIP' object. Alas, that's easier said
than done because there are still a few users of this. The
'get_floating_ip', 'get_floating_ip_by_address', and
'get_floating_ips_by_project' methods are examples. These are only
called by the legacy 'os-floating-ips' API and the 'FloatingIP' object
is simply used as an unnecessary container between the two. Convert
things so the aforementioned API can handle mostly intact responses from
neutron instead, eliminating this user of the 'FloatingIP' object.
Change-Id: I77b1cfeab3c00c6c3d7743ba51e12414806b71d2
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
We're wrestling with multiple imports for this thing and have introduced
a cache to avoid having to load the thing repeatedly. However, Python
already has a way to ensure this doesn't happen: the use of a module.
Given that we don't have any state, we can straight up drop the class
and just call functions directly. Along the way, we drop the
'ensure_default' function, which is a no-op for neutron, and switch all
the mocks over where necessary.
Change-Id: Ia8dbe8ba61ec6d1b8498918a53a103a6eff4d488
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
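The underlying pattern is plain Python: a module is imported and cached in
sys.modules exactly once, so module-level functions already behave like a
singleton. A generic sketch (names hypothetical):

    # security_group_api.py -- no class and no explicit cache needed;
    # Python caches the module itself on first import.

    def get_driver_name():
        return 'neutron'

    # Callers simply do:
    #   from nova.network import security_group_api
    #   security_group_api.get_driver_name()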
At some point in the past, there was only nova-network and its code
could be found in 'nova.network'. Neutron was added and eventually found
itself (mostly!) in the 'nova.network.neutronv2' submodule. With
nova-network now gone, we can remove one layer of indirection and move
the code from 'nova.network.neutronv2' back up to 'nova.network',
mirroring what we did with the old nova-volume code way back in 2012
[1]. To ensure people don't get nova-network and 'nova.network'
confused, 'neutron' is retained in filenames.
[1] https://review.opendev.org/#/c/14731/
Change-Id: I329f0fd589a4b2e0426485f09f6782f94275cc07
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
Finish the job by removing all the now-unused modules. This also allows
us to - wait for it - kill mox at long last. It's a great day in the
parish.
Partial-Implements: blueprint remove-nova-network-ussuri
Partial-Implements: blueprint mox-removal-ussuri
Change-Id: Ia33ec2604b2fc2d3b6830b596cac669cc3ad6c96
Strip out everything matching '(is|use)_neutron', except the tests for
nova-network code and two other places that these tests rely on. Along
the way, remove a whole load of apparently unnecessary mocking that
clearly wasn't caught when we switched over the bulk of testing to use
the neutron network driver.
Change-Id: Ifa9c5c468400261a5e1f66b72c575845173a4f8f
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
Firewall support is not needed with neutron, which supports both
security groups for per-port filtering and FWaaS for per-network
filtering. Remove both the generic firewalls and the hypervisor-specific
implementations.
This change focuses on removing the firewall-related API calls from the
various virt drivers. The firewall drivers themselves are removed
separately.
Change-Id: I5a9e5532c46a5f7064441ae644125d21efe5fda1
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
This implements the cleanup_instance_network_on_host method
in the neutron API, which will delete port bindings for the
given instance and the given host, similar to how
setup_networks_on_host works when teardown=True and the
instance.host does not match the host provided to that method.
This allows removing the hacks in the
_confirm_snapshot_based_resize_delete_port_bindings and
_revert_snapshot_based_resize_at_dest methods.
Change-Id: Iff8194c868580facb1cc81b5567d66d4093c5274
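Conceptually the cleanup is a per-host binding delete; a sketch assuming
python-neutronclient's port-bindings helpers (the method name and the
free variables are assumptions worth verifying against the client in use):

    # Delete the binding each of the instance's ports has on `host`.
    for vif in instance.get_network_info():
        neutron.delete_port_binding(vif['id'], host)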
This is another self-explanatory change. We remove the driver along with
related tests. Some additional API tests need to be fixed since these
were using the nova-network security group driver.
Change-Id: Ia05215b2e7168563c54b78263625125537b7234c
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
We'll always use the neutron security group driver going forward. A
future change will remove the nova-network-based security group driver
itself as well as the 'get_openstack_security_group_driver' function
from the same module.
Change-Id: I12a96ea659ed402cc4d1bd52a50e2e16042b6372
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
This patch enhances the live_migration task in the conductor to support
qos ports during live migration. The high-level sequence of events is
the following:
* when the request spec is gathered before the scheduler call, the
  resource requests are collected from the neutron ports and the request
  spec is updated
* after successful scheduling, the request group to resource provider
  mapping is calculated
* the instance pci requests are updated to drive the pci claim on the
  target host to allocate VFs from the same PCI PF the bandwidth is
  allocated from
* the inactive port binding on the target host is updated to have the RP
  UUID in the allocation key according to the resource allocation on the
  destination host.
As the port binding is already updated in the conductor, the late check
on the allocation key in the binding profile is turned off for live
migration in the neutronv2 api.
Note that this patch only handles live migration without a target host;
subsequent patches will add support for migration with a target host and
other edge cases like reschedule.
blueprint: support-move-ops-with-qos-ports-ussuri
Change-Id: Ibb84ea210795634f02997d4627e0beec5fea36ee
This addresses bug #1795920 by adding support for
defining a pci numa affinity policy via flavor
extra specs or image metadata properties, enabling
the policies to be applied to neutron sriov ports,
including hardware-offloaded ovs.
Closes-Bug: #1795920
Related-Bug: #1805891
Implements: blueprint vm-scoped-sriov-numa-affinity
Change-Id: Ibd62b24c2bd2dd208d0f804378d4e4f2bbfdaed6
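For illustration, the policy is requested per flavor (extra spec) or per
image (metadata property); the key names below follow the blueprint's
convention and should be checked against the current docs:

    # Valid values include 'required', 'legacy' and 'preferred'.
    flavor_extra_specs = {'hw:pci_numa_affinity_policy': 'preferred'}
    image_properties = {'hw_pci_numa_affinity_policy': 'required'}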
Live migration uses multiple port bindings but the NeutronFixture used
in the functional tests does not track such bindings. Later patches will
need to assert that such bindings are properly updated with allocation
keys during migration with ports that have a resource request.
To make this possible, this patch extends the NeutronFixture.
To allow easy stubbing, this patch refactors
neutronv2.api.API.activate_port_binding() and introduces
neutronv2.api.API._get_port_binding().
blueprint: support-move-ops-with-qos-ports-ussuri
Change-Id: Id349f2f355b89445139e58e6efc38a0daabe9e91
Change I0932c652fb455fe10239215a93e183ea947234e3 from Mitaka
was a performance improvement to cache the loaded security
group driver since the API calls get_openstack_security_group_driver
a lot. That performance fix was regressed with change
Ia4a8d9954bf456253101b936f8b4ff513aaa73b2 in Newton.
This caches the loaded security group driver once again. This
is pretty similar to the original change except simpler since
we don't have to account for the skip_policy_check flag.
Change-Id: Icacc763f19db6dc90e72af32e17d480775ad5edf
Closes-Bug: #1825018
These no longer have any callers and can be removed almost entirely, RPC
and DB methods aside.
Change-Id: Id6120adbee61682eb0f90bdac4080dc1e53ae978
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
With the API removed, nothing is using these anymore. Remove them.
Change-Id: Id303edc0e3b4af5647ce171b7763e094d1aa244c
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
Don't include the rules in the SG fetch in the metadata server, since
we don't need them there, and with >1000 rules, it starts to get
really slow, especially in Pike and later.
Closes-Bug: #1851430
Co-Authored-By: Doug Wiegley <dougwig@parkside.io>
Co-Authored-By: Matt Riedemann <mriedem.os@gmail.com>
Change-Id: I7de14456d04370c842b4c35597dca3a628a826a2
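Neutron supports server-side field selection, so the rules can simply be
left out of the query; a sketch with python-neutronclient (the field list
and variables are illustrative):

    # Fetch only what the metadata server needs; omitting the rules
    # avoids serializing >1000 security_group_rules per group.
    sgs = neutron.list_security_groups(
        id=sg_ids, fields=['id', 'name'])['security_groups']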
There is one place in the compute manager that is passing a dict
for the migration to the migrate_instance_finish method but a
Migration object should be passed. The dict works because the
Migration object uses the NovaObjectDictCompat mixin but that is
something we'd like to eventually remove since it's nice to know
if you're dealing with an object or a dict.
This converts the remaining uses of migrate_instance_finish that
were using a dict for the migration over to using an object.
Change-Id: Ice8b167d6fe026e7043a0899f399a62e25dcfcd1
The network_api get_requested_resource_for_instance() creates a neutron
client with the current request context and uses the client to query
neutron ports. Neutron does not return the resource_request of a
neutron port if it is queried by a non-admin. So if the request context
was a non-admin context, nova thought that none of the ports had
resource requests.
This patch ensures that an admin client is used to query the ports.
Change-Id: I1178fb77a74010c3b9f51eea22c7e7576b600015
Closes-Bug: #1849695
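The fix boils down to asking nova's client factory for an elevated
client; a simplified sketch (get_client is nova's real helper, the module
path predates the later rename to nova.network, and the surrounding code
is illustrative):

    from nova.network.neutronv2.api import get_client

    # An admin client sees each port's resource_request; a client built
    # from a plain user context does not.
    client = get_client(context, admin=True)
    ports = client.list_ports(device_id=instance.uuid)['ports']
    requests = {p['id']: p['resource_request']
                for p in ports if p.get('resource_request')}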
The compute service updates the binding:profile of the neutron port
during server create. If the port has a resource_request then the
'allocation' key needs to point to the resource provider the port is
allocating resources from. Unfortunately, this code used a non-admin
client to query the port data, so if the original server create request
was sent by a non-admin user the returned port does not have its
resource_request filled and, as a consequence, nova does not add the
allocation key to the binding profile.
This patch makes sure that the port is queried with an admin client.
There is a tempest test change that reproduces the issue:
https://review.opendev.org/#/c/690934
Change-Id: Icc631cf2e81a5c78cb7fb1d0b625d19bd8f5a274
Closes-Bug: #1849657
This patch adds support for evacuating a server with qos ports. To do
that, this patch:
* collects the port resource requests from neutron before the scheduler
  is called to select the target of the evacuation.
* calculates the request group to resource provider mapping after the
  scheduler has selected the target host.
* updates the InstancePCIRequest to drive the pci_claim to allocate VFs
  from the same PF the scheduler allocated bandwidth from.
* updates the binding profile of the qos ports so that the allocation
  key of the binding profile points to the RPs the port is allocated
  from.
Note that evacuate does not have a reschedule loop so we don't need any
extra logic for that.
The rebuild_instance RPC has passed the request spec to the compute
since Queens, so no RPC or service version change was needed. Therefore
no upgrade-related checks were introduced.
Change-Id: Id9ed7a82d42be8ffe760f03e6610b9e6f5a4287b
blueprint: support-move-ops-with-qos-ports-ussuri
One of the PortUpdateFailed exception usages introduced by the
bp support-move-ops-with-qos-ports was overly chatty about upgrades and
pinned RPC. This message can reach the end user during resize, so the
deployment-specific information is removed from the exception message
and logged instead. Also, a TODO is added noting that the whole check
can be removed once we bump the compute RPC to 6.0.
Change-Id: I37b02da02a42cab09d2efe6d1a4b88cbc8b9b0d0
* Enhanced exception message when provider mapping is missing during
_update_port_binding_for_instance
* Replaced dict with a real Migration object in the unit test
blueprint: support-move-ops-with-qos-ports
Change-Id: Ib033fee8b8464f51f10101a5da2dfc983bf76733
During revert_resize we can skip collecting port resource requests
from neutron if there is no port with an allocation key in its binding
profile in the instance's network info cache.
Note that we cannot do the same optimization in the MigrationTask as we
would like to use migration to heal missing port allocations. See more
in bug #1819923.
Change-Id: I72d0e28f3b9319e521243d7d0dc5dfa4feaf56f5
blueprint: support-move-ops-with-qos-ports
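The check can be as simple as scanning the instance's cached VIFs for an
allocation key; a sketch (function and key names per the qos-ports
convention, treat them as assumptions):

    def has_qos_port_allocation(instance):
        # True if any cached VIF's binding profile carries an
        # 'allocation' key, i.e. the port has an allocation to move.
        return any('allocation' in (vif.get('profile') or {})
                   for vif in instance.get_network_info())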
When associating a floating IP with instance A that was already
associated with instance B, we try to refresh the info cache on
instance B. The problem is that the context is targeted to the cell
of instance A, and instance B might be in another cell, so we'll
get an InstanceNotFound error trying to look up instance B.
This change tries to find the instance in another cell using its
instance mapping, and makes the code a bit more graceful if
instance B is deleted.
Change-Id: I71790afd0784d98050ccd7cc0e046321da249cbe
Closes-Bug: #1826472
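A sketch of the cross-cell lookup using nova's InstanceMapping object and
target_cell context manager (error handling elided):

    from nova import context as nova_context
    from nova import objects

    def find_instance_in_any_cell(ctxt, instance_uuid):
        # The mapping lives in the API DB and records which cell hosts
        # the instance; target the context there before the lookup.
        mapping = objects.InstanceMapping.get_by_instance_uuid(
            ctxt, instance_uuid)
        with nova_context.target_cell(ctxt, mapping.cell_mapping) as cctxt:
            return objects.Instance.get_by_uuid(cctxt, instance_uuid)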
If the server has a port with a resource allocation and the server is
migrated, then when the port is bound to the destination host the
allocation key in the binding:profile needs to be updated to point to
the resource provider that provides resources for this port on the
destination host.
This patch extends the migrate_instance_finish() network api method to
pass the updated resource providers of the ports during migration.
Change-Id: I220fa02ee916728e241503084b14984bab4b0c3b
blueprint: support-move-ops-with-qos-ports
When associating a floating IP with instance A while the floating IP is
already associated with instance B, the associate_floating_ip method
updates the floating IP to be associated with instance A then tries
to update the network info cache for instance B. That network info
cache update is best effort and could fail in different ways, e.g.
the original associated port or instance could be gone. Failing to
refresh the cache for instance B should not affect the association
operation for instance A, so this change traps any errors during the
refresh and just logs them as a warning.
Change-Id: Ib5a44e4fd2ec2bf43b761db29403810d8b730429
Related-Bug: #1826472
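The best-effort pattern, roughly (the helper name is hypothetical):

    try:
        _refresh_info_cache_for_original_instance(
            context, orig_instance, floating_ip)
    except Exception:
        LOG.warning('Refreshing the network info cache for instance %s '
                    'failed after reassociating floating IP %s; the '
                    'association itself succeeded.',
                    orig_instance.uuid, floating_ip['id'], exc_info=True)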