path: root/nova/network
Commit message  (Author, Date; Files, Lines -/+)

*   Merge "nova-net: Remove unused parameters"  (Zuul, 2020-03-26; 1 file, -25/+37)
|\
| * nova-net: Remove unused parameters  (Stephen Finucane, 2020-02-18; 1 file, -25/+37)
| |
| |   We only care about neutron security groups now, so a lot of nova-network-only cruft can be removed. Do just that.
| |
| |   Change-Id: I2a360e766261a186f9edf6ceb47a786aea2957eb
| |   Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
* | Merge "Add config option for neutron client retries"Zuul2020-03-211-1/+2
|\ \
| * | Add config option for neutron client retries  (melanie witt, 2020-03-19; 1 file, -1/+2)
| |/
| |
| |   Nova can occasionally fail to carry out server actions that require calls to the neutron API if haproxy happens to close an idle connection just as an incoming request attempts to re-use it while it is being torn down. To be more resilient to this scenario, we can add a config option for the neutron client to retry requests, similar to our existing CONF.cinder.http_retries and CONF.glance.num_retries options. Because we create our neutron client [1] using a keystoneauth1 session [2], we can set the 'connect_retries' keyword argument to let keystoneauth1 handle connection retries.
| |
| |   Closes-Bug: #1866937
| |
| |   [1] https://github.com/openstack/nova/blob/57459c3429ce62975cebd0cd70936785bdf2f3a4/nova/network/neutron.py#L226-L237
| |   [2] https://docs.openstack.org/keystoneauth/latest/api/keystoneauth1.session.html#keystoneauth1.session.Session
| |
| |   Change-Id: Ifb3afb13aff7e103c2e80ade817d0e63b624604a
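A minimal sketch of the approach this commit describes, assuming keystoneauth1's conf-loading helpers and python-neutronclient; the option name below mirrors the http_retries style mentioned above, and the surrounding wiring is simplified, not nova's exact code:

```python
# Sketch only: assumes the [neutron] session/auth options are already
# registered with keystoneauth1's loading helpers; 'http_retries' stands
# in for the new option this commit adds.
from keystoneauth1 import loading as ks_loading
from neutronclient.v2_0 import client as clientv20
from oslo_config import cfg

CONF = cfg.CONF

def get_neutron_client():
    auth = ks_loading.load_auth_from_conf_options(CONF, 'neutron')
    session = ks_loading.load_session_from_conf_options(
        CONF, 'neutron', auth=auth)
    # 'connect_retries' is handled by the underlying keystoneauth1
    # adapter, so connection-level failures (e.g. haproxy closing an
    # idle socket) are retried transparently.
    return clientv20.Client(session=session,
                            connect_retries=CONF.neutron.http_retries)
```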
* | Support unshelve with qos ports  (Balazs Gibizer, 2020-03-18; 1 file, -2/+5)
|/
|
|     This patch adds support for unshelving an offloaded server with qos ports. To do that, this patch:
|
|     * collects the port resource requests from neutron before the scheduler is called to select the target of the unshelve.
|     * calculates the request group to resource provider mapping after the scheduler has selected the target host.
|     * updates the InstancePCIRequest to drive the pci_claim to allocate VFs from the same PF as the bandwidth is allocated from by the scheduler.
|     * updates the binding profile of the qos ports so that the allocation key of the binding profile points to the RPs the port is allocated from.
|
|     As this was the last move operation to be supported, the compute service version is bumped to indicate such support. This will be used in later patches to implement a global service-level check in the API.
|
|     Note that unshelve does not have a re-schedule loop and all the RPC changes were committed in Queens.
|
|     Two error cases need special care by rolling back allocations before putting the instance back to SHELVED_OFFLOADED state:
|
|     * if the InstancePCIRequest cannot be updated according to the new target host of the unshelve
|     * if updating the port binding fails in neutron during the unshelve
|
|     Change-Id: I678722b3cf295c89110967d5ad8c0c964df4cb42
|     blueprint: support-move-ops-with-qos-ports-ussuri
* nova-net: Remove unused nova-network objects  (Stephen Finucane, 2020-02-18; 1 file, -1/+1)
|
|     With the removal of the related quotas, the 'FixedIP', 'FloatingIP', 'SecurityGroupRule', 'Network' and 'NetworkList' objects can all be deleted. As noted previously, the 'SecurityGroup' object must be retained until we bump the 'Instance' object version to v3 and drop the 'security_group' field.
|
|     In addition, we need to make a small change to the object versioning test to ensure we're only testing objects in the nova namespace. If we don't do this, we end up pulling in things like os_vif objects. This was previously noted in change Ica9f217d0318fc7c2db4bcdea12d00aad749c30c.
|
|     Change-Id: I6aba959eff1e50af4ac040148c7177f235a09a1f
|     Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
* Don't error out on floating IPs without associated ports  (Stephen Finucane, 2020-02-06; 1 file, -6/+9)
|
|     Floating IPs don't have to have an associated port, so there's no reason to error out when this is the case.
|
|     Change-Id: I50c79843bf819b731c597dbfe72090cdf02c7641
|     Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
|     Closes-Bug: #1861876
* Merge "Avoid calling neutron for N networks"Zuul2020-02-061-7/+4
|\
| * Avoid calling neutron for N networks  (Stephen Finucane, 2020-02-04; 1 file, -7/+4)
| |
| |   In the worst-case scenario, we could list N floating IPs, each of which has a different network. This would result in N additional calls to neutron - one for each of the networks. Avoid this by calling neutron once for all networks associated with the floating IPs.
| |
| |   Change-Id: If067a730b0fcbe3f59f4472b00c690cc43be4b3b
| |   Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
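The shape of the fix, as a hedged sketch against python-neutronclient (the function name is illustrative): collect the distinct network IDs first, then issue a single filtered list call.

```python
def get_networks_for_fips(neutron, fips):
    # one neutron call for all networks instead of one call per network
    net_ids = {fip['floating_network_id'] for fip in fips}
    nets = neutron.list_networks(id=list(net_ids))['networks']
    return {net['id']: net for net in nets}
```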
* | Merge "Handle neutron without the fip-port-details extension"Zuul2020-02-052-6/+48
|\ \ | |/
| * Handle neutron without the fip-port-details extensionStephen Finucane2020-02-042-6/+48
| | | | | | | | | | | | | | | | | | | | | | | | | | The 'fip-port-details' API extension was added to neutron in Rocky [1] and is optional. As a result, we cannot rely on the 'port_details' field being present in API responses. If it is not, we need to make a second query for all ports and build 'port_details' using the 'port_id' field. [1] https://docs.openstack.org/releasenotes/neutron-lib/rocky.html#relnotes-1-14-0-stable-rocky-new-features Change-Id: Ifb96f31f471cc0a25c1dfce2161a669b97a384ae Signed-off-by: Stephen Finucane <sfinucan@redhat.com> Closes-bug: #1861876
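A hedged sketch of the fallback described above (function name and field list are illustrative): when neutron lacks 'fip-port-details', fetch all referenced ports in one query and build the 'port_details' dict that newer neutrons embed for us.

```python
def ensure_port_details(neutron, fips):
    if all('port_details' in fip for fip in fips):
        return fips  # extension present; nothing to do
    port_ids = [fip['port_id'] for fip in fips if fip.get('port_id')]
    ports = {p['id']: p for p in neutron.list_ports(id=port_ids)['ports']}
    for fip in fips:
        port = ports.get(fip.get('port_id'))
        # rebuild what the extension would have returned from the port body
        fip['port_details'] = None if port is None else {
            'name': port['name'],
            'network_id': port['network_id'],
            'mac_address': port['mac_address'],
            'admin_state_up': port['admin_state_up'],
            'status': port['status'],
            'device_id': port['device_id'],
            'device_owner': port['device_owner'],
        }
    return fips
```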
* | nova-net: Remove use of legacy 'Network' object  (Stephen Finucane, 2019-12-03; 1 file, -15/+2)
|/
|
|     Change I77b1cfeab3c00c6c3d7743ba51e12414806b71d2 reworked things such that random methods in 'nova.network.api.API' weren't using the legacy nova-network 'FloatingIP' object as a container. Now do the same for the 'Network' object, which is similarly unnecessary here.
|
|     Change-Id: I62fd27856a52adc65a90ad6301a6e47928347f52
|     Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
* nova-net: Remove use of legacy 'FloatingIP' object  (Stephen Finucane, 2019-12-03; 1 file, -79/+53)
|
|     We want to get rid of the 'FloatingIP' object. Alas, that's easier said than done because there are still a few users of this. The 'get_floating_ip', 'get_floating_ip_by_address', and 'get_floating_ips_by_project' methods are examples. These are only called by the legacy 'os-floating-ips' API, and the 'FloatingIP' object is simply used as an unnecessary container between the two. Convert things so the aforementioned API can handle mostly intact responses from neutron instead, eliminating this user of the 'FloatingIP' object.
|
|     Change-Id: I77b1cfeab3c00c6c3d7743ba51e12414806b71d2
|     Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
* nova-net: Make the security group API a module  (Stephen Finucane, 2019-11-29; 5 files, -839/+717)
|
|     We're wrestling with multiple imports for this thing and have introduced a cache to avoid having to load the thing repeatedly. However, Python already has a way to ensure this doesn't happen: the use of a module. Given that we don't have any state, we can straight up drop the class and just call functions directly. Along the way, we drop the 'ensure_default' function, which is a no-op for neutron, and switch all the mocks over, where necessary.
|
|     Change-Id: Ia8dbe8ba61ec6d1b8498918a53a103a6eff4d488
|     Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
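A small sketch of the shape of this refactor, with illustrative names rather than nova's exact code: Python's module cache already guarantees single loading, so plain module-level functions replace the cached class instance.

```python
# before: every caller had to obtain (and cache) a driver instance
class SecurityGroupAPI(object):
    def ensure_default(self, context):
        pass  # no-op for neutron; dropped entirely in the refactor

# after: a stateless module-level function, e.g. in
# nova/network/security_group_api.py; 'import' loads the module once,
# so no extra caching layer is needed
def create_security_group(neutron, name, description):
    body = {'security_group': {'name': name, 'description': description}}
    return neutron.create_security_group(body)['security_group']
```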
* nova-net: Remove layer of indirection in 'nova.network'  (Stephen Finucane, 2020-01-15; 6 files, -420/+101)
|
|     At some point in the past, there was only nova-network and its code could be found in 'nova.network'. Neutron was added and eventually found itself (mostly!) in the 'nova.network.neutronv2' submodule. With nova-network now gone, we can remove one layer of indirection and move the code from 'nova.network.neutronv2' back up to 'nova.network', mirroring what we did with the old nova-volume code way back in 2012 [1]. To ensure people don't get nova-network and 'nova.network' confused, 'neutron' is retained in filenames.
|
|     [1] https://review.opendev.org/#/c/14731/
|
|     Change-Id: I329f0fd589a4b2e0426485f09f6782f94275cc07
|     Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
* nova-net: Kill it  (Stephen Finucane, 2020-01-14; 14 files, -6332/+3)
|
|     Finish the job by removing all the now-unused modules. This also allows us to - wait for it - kill mox at long last. It's a great day in the parish.
|
|     Partial-Implements: blueprint remove-nova-network-ussuri
|     Partial-Implements: blueprint mox-removal-ussuri
|     Change-Id: Ia33ec2604b2fc2d3b6830b596cac669cc3ad6c96
* nova-net: Remove final references to nova-network  (Stephen Finucane, 2020-01-08; 2 files, -6/+2)
|
|     Strip out everything matching '(is|use)_neutron', except the tests for nova-network code and two other places that these tests rely on. Along the way, remove a whole load of apparently unnecessary mocking that clearly wasn't caught when we switched over the bulk of testing to use the neutron network driver.
|
|     Change-Id: Ifa9c5c468400261a5e1f66b72c575845173a4f8f
|     Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
* nova-net: Remove firewall support (pt. 2)  (Stephen Finucane, 2020-01-06; 1 file, -18/+3)
|
|     Firewall support is not needed with neutron, which supports both security groups for per-port filtering and FWaaS for per-network filtering. Remove both the generic firewalls and the hypervisor-specific implementations. This change focuses on removing the firewall-related API calls from the various virt drivers. The firewall drivers themselves are removed separately.
|
|     Change-Id: I5a9e5532c46a5f7064441ae644125d21efe5fda1
|     Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
* Merge "Implement cleanup_instance_network_on_host for neutron API"Zuul2019-12-261-30/+54
|\
| * Implement cleanup_instance_network_on_host for neutron API  (Matt Riedemann, 2019-12-23; 1 file, -30/+54)
| |
| |   This implements the cleanup_instance_network_on_host method in the neutron API, which will delete port bindings for the given instance and the given host, similar to how setup_networks_on_host works when teardown=True and the instance.host does not match the host provided to that method. This allows removing the hacks in the _confirm_snapshot_based_resize_delete_port_bindings and _revert_snapshot_based_resize_at_dest methods.
| |
| |   Change-Id: Iff8194c868580facb1cc81b5567d66d4093c5274
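A rough sketch of the operation described above; the client helpers are hypothetical, and the binding URL follows the neutron API reference for port bindings:

```python
# Rough sketch (helper names hypothetical): drop this host's binding for
# each of the instance's ports via neutron's port bindings API.
def cleanup_instance_network_on_host(ksa_client, neutron, instance_uuid, host):
    ports = neutron.list_ports(device_id=instance_uuid)['ports']
    for port in ports:
        # DELETE /v2.0/ports/{port_id}/bindings/{host}; a 404 just means
        # no binding existed on that host, which is fine for cleanup
        resp = ksa_client.delete(
            '/v2.0/ports/%s/bindings/%s' % (port['id'], host),
            raise_exc=False)
        if resp.status_code not in (204, 404):
            resp.raise_for_status()
```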
* | Merge "nova-net: Remove nova-network security group driver"Zuul2019-12-231-0/+1
|\ \
| * | nova-net: Remove nova-network security group driver  (Stephen Finucane, 2019-12-16; 1 file, -0/+1)
| | |
| | |   This is another self-explanatory change. We remove the driver along with related tests. Some additional API tests need to be fixed since these were using the nova-network security group driver.
| | |
| | |   Change-Id: Ia05215b2e7168563c54b78263625125537b7234c
| | |   Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
* | | Merge "nova-net: Remove 'is_neutron_security_groups' function"Zuul2019-12-231-11/+3
|\ \ \ | |/ /
| * | nova-net: Remove 'is_neutron_security_groups' function  (Stephen Finucane, 2019-12-16; 1 file, -11/+3)
| |/
| |
| |   We'll always use the neutron security group driver going forward. A future change will remove the nova-network-based security group driver itself as well as the 'get_openstack_security_group_driver' function from the same module.
| |
| |   Change-Id: I12a96ea659ed402cc4d1bd52a50e2e16042b6372
| |   Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
* | Support live migration with qos ports  (Balazs Gibizer, 2019-12-19; 1 file, -5/+4)
|/
|
|     This patch enhances the live_migration task in the conductor to support qos ports during live migration. The high-level sequence of events is the following:
|
|     * when the request spec is gathered before the scheduler call, the resource requests are collected from neutron ports and the request spec is updated
|     * after a successful scheduling, the request group to resource provider mapping is calculated
|     * the instance pci requests are updated to drive the pci claim on the target host to allocate VFs from the same PCI PF the bandwidth is allocated from
|     * the inactive port binding on the target host is updated to have the RP UUID in the allocation key according to the resource allocation on the destination host
|
|     As the port binding is already updated in the conductor, the late check about the allocation key in the binding profile is turned off for live migration in the neutronv2 api.
|
|     Note that this patch only handles live migration without a target host; subsequent patches will add support for migration with a target host and other edge cases like reschedule.
|
|     blueprint: support-move-ops-with-qos-ports-ussuri
|     Change-Id: Ibb84ea210795634f02997d4627e0beec5fea36ee
* Merge "support pci numa affinity policies in flavor and image"Zuul2019-12-132-4/+12
|\
| * support pci numa affinity policies in flavor and image  (Sean Mooney, 2019-12-11; 2 files, -4/+12)
| |
| |   This addresses bug #1795920 by adding support for defining a pci numa affinity policy via the flavor extra specs or image metadata properties, enabling the policies to be applied to neutron SR-IOV ports, including hardware-offloaded OVS.
| |
| |   Closes-Bug: #1795920
| |   Related-Bug: #1805891
| |   Implements: blueprint vm-scoped-sriov-numa-affinity
| |   Change-Id: Ibd62b24c2bd2dd208d0f804378d4e4f2bbfdaed6
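For reference, a hedged illustration of where such a policy is expressed; the property names follow the blueprint's flavor/image naming conventions and should be checked against the released documentation:

```python
# Illustrative only: property names per the vm-scoped-sriov-numa-affinity
# blueprint; documented values are 'required', 'legacy' and 'preferred'.
flavor_extra_specs = {'hw:pci_numa_affinity_policy': 'preferred'}
image_properties = {'hw_pci_numa_affinity_policy': 'preferred'}
```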
* | Extend NeutronFixture to handle multiple bindings  (Balazs Gibizer, 2019-12-11; 1 file, -7/+35)
|/
|
|     Live migration uses multiple port bindings, but the NeutronFixture used in the functional tests does not track such bindings. Later patches will need to assert that such bindings are properly updated with allocation keys during migrations involving ports that have a resource request. To make that possible, this patch extends the NeutronFixture.
|
|     To allow easy stubbing, this patch refactors neutronv2.api.API.activate_port_binding() and introduces neutronv2.api.API._get_port_binding().
|
|     blueprint: support-move-ops-with-qos-ports-ussuri
|     Change-Id: Id349f2f355b89445139e58e6efc38a0daabe9e91
* Cache security group driver  (Matt Riedemann, 2019-12-03; 1 file, -4/+8)
|
|     Change I0932c652fb455fe10239215a93e183ea947234e3 from Mitaka was a performance improvement to cache the loaded security group driver, since the API calls get_openstack_security_group_driver a lot. That performance fix was regressed by change Ia4a8d9954bf456253101b936f8b4ff513aaa73b2 in Newton. This caches the loaded security group driver once again. It is pretty similar to the original change, except simpler, since we don't have to account for the skip_policy_check flag.
|
|     Change-Id: Icacc763f19db6dc90e72af32e17d480775ad5edf
|     Closes-Bug: #1825018
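A minimal sketch of the memoization being restored; the public function name matches the one mentioned above, while the loader body is a stand-in:

```python
_SECURITY_GROUP_API = None

def get_openstack_security_group_driver():
    # memoize at module scope: the API layer calls this a lot, and
    # repeatedly loading the driver class was the measured cost
    global _SECURITY_GROUP_API
    if _SECURITY_GROUP_API is None:
        _SECURITY_GROUP_API = _load_driver()
    return _SECURITY_GROUP_API

def _load_driver():
    # stand-in for the real config-driven import of the driver class
    from nova.network.security_group import neutron_driver
    return neutron_driver.SecurityGroupAPI()
```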
* nova-net: Remove associate, disassociate network APIs  (Stephen Finucane, 2019-11-28; 4 files, -60/+0)
|
|     These no longer have any callers and can be removed almost entirely, RPC and DB methods aside.
|
|     Change-Id: Id6120adbee61682eb0f90bdac4080dc1e53ae978
|     Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
* nova-net: Remove unused '*_default_rules' security group DB APIs  (Stephen Finucane, 2019-11-18; 1 file, -20/+0)
|
|     With the API removed, nothing is using these anymore. Remove them.
|
|     Change-Id: Id303edc0e3b4af5647ce171b7763e094d1aa244c
|     Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
* Merge "Improve metadata server performance with large security groups"Zuul2019-11-141-3/+13
|\
| * Improve metadata server performance with large security groups  (Doug Wiegley, 2019-11-05; 1 file, -3/+13)
| |
| |   Don't include the rules in the security group fetch in the metadata server, since we don't need them there, and with more than 1000 rules it starts to get really slow, especially in Pike and later.
| |
| |   Closes-Bug: #1851430
| |   Co-Authored-By: Doug Wiegley <dougwig@parkside.io>
| |   Co-Authored-By: Matt Riedemann <mriedem.os@gmail.com>
| |   Change-Id: I7de14456d04370c842b4c35597dca3a628a826a2
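A hedged sketch of the optimization, assuming python-neutronclient (helper name illustrative): ask neutron for only the fields the metadata service needs, so thousands of rule bodies are never serialized or transferred.

```python
def get_security_group_names(neutron, device_id):
    # fetch only the security group IDs attached to the instance's ports
    ports = neutron.list_ports(device_id=device_id,
                               fields=['security_groups'])['ports']
    sg_ids = {sg for port in ports for sg in port['security_groups']}
    # and only id/name for the groups themselves; no rules are returned
    groups = neutron.list_security_groups(
        id=list(sg_ids), fields=['id', 'name'])['security_groups']
    return [g['name'] for g in groups]
```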
* | Merge "Require Migration object arg to migrate_instance_finish method"Zuul2019-11-142-5/+14
|\ \
| * | Require Migration object arg to migrate_instance_finish method  (Matt Riedemann, 2019-10-23; 2 files, -5/+14)
| | |
| | |   There is one place in the compute manager that is passing a dict for the migration to the migrate_instance_finish method, but a Migration object should be passed. The dict works because the Migration object uses the NovaObjectDictCompat mixin, but that is something we'd like to eventually remove, since it's nice to know whether you're dealing with an object or a dict. This converts the remaining uses of migrate_instance_finish that were using a dict for the migration over to using an object.
| | |
| | |   Change-Id: Ice8b167d6fe026e7043a0899f399a62e25dcfcd1
* | | Use admin neutron client to gather port resource requests  (Balazs Gibizer, 2019-11-06; 1 file, -1/+5)
| | |
| | |   The network_api get_requested_resource_for_instance() creates a neutron client with the current request context and uses the client to query neutron ports. Neutron does not return the resource_request of a neutron port if it is queried by a non-admin. So if the request context was a non-admin context, nova thought that none of the ports had resource requests. This patch ensures that an admin client is used to query the ports.
| | |
| | |   Change-Id: I1178fb77a74010c3b9f51eea22c7e7576b600015
| | |   Closes-Bug: #1849695
* | | Use admin neutron client to query ports for binding  (Balazs Gibizer, 2019-11-06; 1 file, -5/+11)
| |/
|/|
| |
| |   The compute service updates the binding:profile of the neutron port during server create. If the port has a resource_request, then the 'allocation' key needs to point to the resource provider from which the port is allocating resources. Unfortunately, this code used a non-admin client to query the port data, and therefore, if the original server create request was sent by a non-admin user, the returned port does not have its resource_request filled and, as a consequence, nova does not add the allocation key to the binding profile. This patch makes sure that the port is queried with an admin client.
| |
| |   There is a tempest test change that reproduces the issue: https://review.opendev.org/#/c/690934
| |
| |   Change-Id: Icc631cf2e81a5c78cb7fb1d0b625d19bd8f5a274
| |   Closes-Bug: #1849657
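A minimal sketch of the fix's shape; the module path follows the later 'nova.network' rename in this log, and the wrapper function is illustrative:

```python
# nova's neutron API module exposes get_client(context, admin=...);
# querying with an admin client ensures admin-only fields such as
# 'resource_request' are present in the port response.
from nova.network import neutron as neutron_api

def show_port_as_admin(context, port_id):
    client = neutron_api.get_client(context, admin=True)
    return client.show_port(port_id)['port']
```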
* | Allow evacuating server with port resource request  (Balazs Gibizer, 2019-10-25; 3 files, -8/+13)
|/
|
|     This patch adds support for evacuating a server with qos ports. To do that, this patch:
|
|     * collects the port resource requests from neutron before the scheduler is called to select the target of the evacuation.
|     * calculates the request group to resource provider mapping after the scheduler has selected the target host.
|     * updates the InstancePCIRequest to drive the pci_claim to allocate VFs from the same PF as the bandwidth is allocated from by the scheduler.
|     * updates the binding profile of the qos ports so that the allocation key of the binding profile points to the RPs the port is allocated from.
|
|     Note that evacuate does not have a reschedule loop, so we don't need any extra logic for that. The rebuild_instance RPC has passed the request spec to the compute since Queens, so no RPC or service version change was needed and no upgrade-related checks were introduced.
|
|     Change-Id: Id9ed7a82d42be8ffe760f03e6610b9e6f5a4287b
|     blueprint: support-move-ops-with-qos-ports-ussuri
* Remove upgrade specific info from user facing exception text  (Balazs Gibizer, 2019-09-24; 1 file, -5/+11)
|
|     One of the PortUpdateFailed exception usages introduced by the bp support-move-ops-with-qos-ports is overly chatty about upgrades and pinned RPC. This message can reach the end user during resize, so the deployment-specific information is removed from the exception message and logged instead. Also, a TODO is added noting that the whole check can be removed once we bump the compute RPC to 6.0.
|
|     Change-Id: I37b02da02a42cab09d2efe6d1a4b88cbc8b9b0d0
* Merge "Follow up for I220fa02ee916728e241503084b14984bab4b0c3b"Zuul2019-09-141-2/+8
|\
| * Follow up for I220fa02ee916728e241503084b14984bab4b0c3b  (Balazs Gibizer, 2019-09-13; 1 file, -2/+8)
| |
| |   * Enhanced the exception message used when a provider mapping is missing during _update_port_binding_for_instance
| |   * Replaced a dict with a real Migration object in the unit test
| |
| |   blueprint: support-move-ops-with-qos-ports
| |   Change-Id: Ib033fee8b8464f51f10101a5da2dfc983bf76733
* | Merge "Skip querying resource request in revert_resize if no qos port"Zuul2019-09-141-0/+6
|\ \
| * | Skip querying resource request in revert_resize if no qos port  (Balazs Gibizer, 2019-09-12; 1 file, -0/+6)
| |/
| |
| |   During revert_resize, we can skip collecting port resource requests from neutron if there is no port with an allocation key in the binding profile in the network info caches of the instance.
| |
| |   Note that we cannot do the same optimization in the MigrationTask, as we would like to use migration to heal missing port allocations. See more in bug #1819923.
| |
| |   Change-Id: I72d0e28f3b9319e521243d7d0dc5dfa4feaf56f5
| |   blueprint: support-move-ops-with-qos-ports
* | Merge "Find instance in another cell during floating IP re-association"Zuul2019-09-131-5/+53
|\ \ | |/ |/|
| * Find instance in another cell during floating IP re-association  (Matt Riedemann, 2019-09-06; 1 file, -5/+53)
| |
| |   When associating a floating IP to instance A while it is already associated with instance B, we try to refresh the info cache on instance B. The problem is that the context is targeted to the cell for instance A, and instance B might be in another cell, so we'll get an InstanceNotFound error trying to look up instance B. This change tries to find the instance in another cell using its instance mapping, and makes the code a bit more graceful if instance B is deleted.
| |
| |   Change-Id: I71790afd0784d98050ccd7cc0e046321da249cbe
| |   Closes-Bug: #1826472
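A hedged sketch of the cross-cell lookup using nova's cells v2 helpers (InstanceMapping plus a cell-targeted context); the function name is illustrative and error handling is trimmed:

```python
from nova import context as nova_context
from nova import exception
from nova import objects

def find_instance_in_any_cell(ctxt, instance_uuid):
    try:
        # the API database records which cell the instance lives in
        im = objects.InstanceMapping.get_by_instance_uuid(ctxt, instance_uuid)
    except exception.InstanceMappingNotFound:
        return None
    # retarget the context at that cell before hitting its database
    with nova_context.target_cell(ctxt, im.cell_mapping) as cctxt:
        try:
            return objects.Instance.get_by_uuid(cctxt, instance_uuid)
        except exception.InstanceNotFound:
            return None  # e.g. instance B was deleted in the meantime
```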
* | Merge "update allocation in binding profile during migrate"Zuul2019-09-063-15/+46
|\ \
| * | update allocation in binding profile during migrate  (Balazs Gibizer, 2019-09-05; 3 files, -15/+46)
| | |
| | |   If the server has a port with a resource allocation and the server is migrated, then when the port is bound to the destination host the allocation key needs to be updated in the binding:profile to point to the resource provider that provides resources for this port on the destination host.
| | |
| | |   This patch extends the migrate_instance_finish() network api method to pass the updated resource providers of the ports during migration.
| | |
| | |   Change-Id: I220fa02ee916728e241503084b14984bab4b0c3b
| | |   blueprint: support-move-ops-with-qos-ports
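A sketch of the update described above (the helper name is hypothetical): merge the destination resource provider UUID into the port's binding:profile under the 'allocation' key.

```python
def update_port_allocation(neutron, port_id, dest_rp_uuid):
    port = neutron.show_port(port_id)['port']
    profile = port.get('binding:profile') or {}
    # the RP supplying this port's bandwidth on the destination host
    profile['allocation'] = dest_rp_uuid
    neutron.update_port(port_id, {'port': {'binding:profile': profile}})
```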
* | | Merge "Trap and log errors from _update_inst_info_cache_for_disassociated_fip"Zuul2019-09-051-3/+8
|\ \ \ | | |/ | |/|
| * | Trap and log errors from _update_inst_info_cache_for_disassociated_fip  (Matt Riedemann, 2019-08-23; 1 file, -3/+8)
| | |
| | |   When associating a floating IP to instance A while the floating IP is already associated to instance B, the associate_floating_ip method updates the floating IP to be associated with instance A and then tries to update the network info cache for instance B. That network info cache update is best effort and could fail in different ways; e.g. the originally associated port or instance could be gone. Failing to refresh the cache for instance B should not affect the association operation for instance A, so this change traps any errors during the refresh and just logs them as a warning.
| | |
| | |   Change-Id: Ib5a44e4fd2ec2bf43b761db29403810d8b730429
| | |   Related-Bug: #1826472
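A minimal sketch of the best-effort pattern described above; the wrapped method is the one named in the commit subject, and the wrapper shape is illustrative:

```python
from oslo_log import log as logging

LOG = logging.getLogger(__name__)

def _refresh_cache_best_effort(context, instance, fip):
    # any failure refreshing instance B's cache is logged, never raised,
    # so the association for instance A always proceeds
    try:
        _update_inst_info_cache_for_disassociated_fip(context, instance, fip)
    except Exception:
        LOG.warning('Failed to update the network info cache for the '
                    'previously associated instance %s; continuing with '
                    'the floating IP association.', instance.uuid,
                    exc_info=True)
```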
* | | Merge "neutron: refactor nw info cache refresh out of associate_floating_ip"Zuul2019-09-051-15/+35
|\ \ \ | |/ / | | / | |/ |/|