...
* claims: Do not assume image-meta is a dict  (Nikola Dipanov, 2016-09-30, 2 files, -5/+17)

  MoveClaim is expected to accept the image_meta argument to its __init__, which we assumed was always a dict. This is no longer the case, as we now pass in the ImageMeta object, which recently lost its dict-mimicking powers, so the assumption that it is safe to call from_dict on it no longer holds. This patch makes sure that we always convert to objects internally.

  Change-Id: I72081f656b02aa564af5461b24c5294b6bac59c8
  (cherry picked from commit b01187eede3881f72addd997c8fd763ddbc137fc)
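The defensive dict-or-object normalization this commit describes can be sketched roughly as below. The class and function names here are illustrative stand-ins, not Nova's actual ImageMeta API:

```python
class ImageMeta:
    """Minimal stand-in for an image-metadata object (not Nova's real class)."""

    def __init__(self, **attrs):
        self.__dict__.update(attrs)

    @classmethod
    def from_dict(cls, d):
        return cls(**d)


def normalize_image_meta(image_meta):
    """Return an ImageMeta object whether we were handed a dict or an object.

    Calling from_dict() on something that is already an object would fail,
    so only convert when we actually received a plain dict.
    """
    if isinstance(image_meta, dict):
        return ImageMeta.from_dict(image_meta)
    return image_meta  # already an object; pass through unchanged
```

With this pattern the rest of the claim code can assume it always holds an object, regardless of what the caller passed in.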
* Merge "compute: Skip driver detach calls for non local instances" into stable/mitaka  (Jenkins, 2016-09-30, 3 files, -37/+31)
* compute: Skip driver detach calls for non local instances  (Lee Yarwood, 2016-09-15, 3 files, -37/+31)

  Only call for a driver detach from a volume if the instance is currently associated with the local compute host. This avoids potential virt driver and volume backend issues when attempting to disconnect from volumes that have never been connected to from the current host.

  NOTE(lyarwood): Test conflict caused by the mox to mock migration in I8d4d8a. As a result test_rebuild_on_remote_host_with_volumes remains a mox based test.

  Conflicts: nova/tests/unit/compute/test_compute.py

  Closes-Bug: #1583284
  Change-Id: I36b8532554d75b24130f456a35acd0be838b62d6
  (cherry picked from commit fdf3328107e53f1c5578c2e4dfbad78d832b01c6)
* Merge "Fix resizing in imagebackend.cache()" into stable/mitaka  (Jenkins, 2016-09-30, 2 files, -1/+69)
* Fix resizing in imagebackend.cache()  (Jens Rosenboom, 2016-09-09, 2 files, -1/+69)

  The Raw and Lvm backends do not create a 'base image' (the file in the image cache) when creating an ephemeral or swap disk. However, cache() expects it to exist when checking if a resize is required. This change ignores the resize check if the backing file doesn't exist. This happens to be ok, because ephemeral and swap disks are always created with the correct target size anyway, and therefore never need to be resized.

  NOTE(mriedem): There is a slight change in the commit message and test since the Raw image backend was renamed to Flat in Newton. Since Flat didn't exist in Mitaka it's better to use Raw here.

  Closes-Bug: 1608934
  Co-Authored-By: Matthew Booth <mbooth@redhat.com>
  Change-Id: I46b5658efafe558dd6b28c9910fb8fde830adec0
  (cherry picked from commit d0775c50d0c2bd50a62ccd49ea7063948af6c3b3)
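The guard this commit adds — skip the resize check entirely when the backing file was never created — can be sketched like this. The function and callback names are hypothetical, not Nova's imagebackend API:

```python
import os


def maybe_resize(backing_path, target_size, resize_fn):
    """Resize only if the backing file exists.

    Ephemeral and swap disks are created at their target size and never
    get a cache 'base image', so a missing backing file means there is
    nothing to check or resize.
    Returns True if a resize check was performed, False if skipped.
    """
    if not os.path.exists(backing_path):
        return False
    resize_fn(backing_path, target_size)
    return True
```

Callers that previously assumed the base image always existed would now simply fall through for ephemeral/swap disks.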
* Merge "Set migration status to 'error' on live-migration failure" into stable/mitaka  (Jenkins, 2016-09-30, 3 files, -4/+4)
* Set migration status to 'error' on live-migration failure  (Rajesh Tailor, 2016-08-11, 3 files, -4/+4)

  (A) In the resize, confirm-resize and revert-resize operations, the migration status is marked as 'error' if the respective operation fails. Migration object support was added to the live-migration operation, which marks the migration status 'failed' if live-migration fails in-between. To make live-migration consistent with the resize, confirm-resize and revert-resize operations, it needs to mark the migration status 'error' instead of 'failed' on failure.

  (B) Apart from consistency, the proposed change fixes an issue (similar to [1]) which might occur on live-migration failure, as follows: if live-migration fails (setting the migration status to 'failed') after copying instance files from the source to the dest node, and the user then requests instance deletion, the delete API will only remove instance files from instance.host and not from the other host (which could be either the source or dest node, but not instance.host). Since the instance is already deleted, instance files will remain on that other host.

  Setting the migration status to 'error' on live-migration failure means the periodic task _cleanup_incomplete_migrations [2] will remove orphaned instance files from compute nodes after instance deletion in the above case.

  [1] https://bugs.launchpad.net/nova/+bug/1392527
  [2] https://review.openstack.org/#/c/219299/

  DocImpact: On live-migration failure, set migration status to 'error' instead of 'failed'.

  Change-Id: I7a0c5a32349b0d3604802d22e83a3c2dab4b1370
  Closes-Bug: 1470420
  (cherry picked from commit d61e15818c1d108275b3286a6665fa3e6540e7e7)
* Merge "Updated from global requirements" into stable/mitaka  (Jenkins, 2016-09-21, 1 file, -1/+1)
* Updated from global requirements  (OpenStack Proposal Bot, 2016-09-06, 1 file, -1/+1)

  Change-Id: I887aedf4823b7601be8fb0da5bfaad6ec3849612
* Merge "ironic: Cleanup instance information when spawn fails" into stable/mitaka  (Jenkins, 2016-09-20, 2 files, -1/+37)
* ironic: Cleanup instance information when spawn fails  (Hironori Shiina, 2016-09-01, 2 files, -1/+37)

  Instance information, such as the instance_uuid set on an ironic node by _add_driver_fields(), is not cleared when spawning is aborted by an exception raised before ironic starts deployment. The ironic node then stays in the AVAILABLE state with instance_uuid set. This information is not cleared even if the instance is deleted, and the ironic node can be neither removed nor deployed again because instance_uuid remains.

  This patch adds a method to remove the information. This method is called if ironic doesn't need unprovisioning when an instance is destroyed.

  Change-Id: Idf5191aa1c990552ca2340856d5d5b6ac03f7539
  Closes-Bug: 1596922
  (cherry picked from commit 0e24e9e2ec254364ffe029226b9ae5956002df54)
* Merge "Default image.size to 0 when extracting v1 image attributes" into stable/mitaka  (Jenkins, 2016-09-20, 2 files, -2/+13)
* Default image.size to 0 when extracting v1 image attributes  (Matt Riedemann, 2016-09-14, 2 files, -2/+13)

  When we snapshot a non-volume-backed instance, we create an image in nova.compute.api.API._create_image and set some values from the instance, but 'size' isn't one of them. Later, in the virt driver's snapshot method (at least for libvirt), it gets the snapshot image from the image API (glance), and when using glance v1 (use_glance_v1=True) the _extract_attributes method in nova.image.glance pulls the attributes out of the v1 image response into the form that nova expects. This code assumes that the 'size' attribute is set on the image, which for a snapshot image it might not be (yet, anyway). This results in an AttributeError.

  This change defaults the size attribute value to 0 if it's not set. If it is set, but to None, we still use 0 as before.

  Conflicts: nova/tests/unit/image/test_glance.py

  The original patch added a test case to the TestExtractAttributes class, but stable/mitaka doesn't have that class, and that class used the configurable value "use_glance_v1", which is not in stable/mitaka. So a test case covering the fixed code was added to another existing class.

  Change-Id: I14b0e44a7268231c2b19f013b563f0b8f09c2e88
  Closes-Bug: #1606707
  (cherry picked from commit 7a16c754fb49615b802e6b8f3d886847c168bd2a)
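The "missing or None both mean 0" rule is a small but easy-to-get-wrong pattern. A rough sketch of the attribute extraction described above (function name and image shape are illustrative, not Nova's actual code):

```python
def extract_size(image):
    """Return the image size, defaulting to 0.

    Covers both failure modes the commit describes:
    - the attribute is absent entirely (snapshot not uploaded yet),
      where a bare image.size would raise AttributeError;
    - the attribute is present but None.
    """
    return getattr(image, 'size', None) or 0
```

Note that `or 0` also maps a legitimate size of 0 to 0, so the behavior is unchanged for that edge case.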
* Merge "virt: handle unicode when logging LifecycleEvents" into stable/mitaka  (Jenkins, 2016-09-20, 2 files, -2/+15)
* virt: handle unicode when logging LifecycleEvents  (Matt Riedemann, 2016-09-10, 2 files, -2/+15)

  The repr on the LifecycleEvent object includes a translated name, which blows up with a UnicodeEncodeError in the emit_event method because of str(event) and non-English locales.

  This change uses six.text_type on the object rather than str and adds a test to recreate the bug and show the fix.

  Change-Id: I9b7b52739883043b7aae9759f500e5e21cfe8b30
  Closes-Bug: #1621392
  (cherry picked from commit 2b57b3d867f568a1539d4419b441ae8d666b77c4)
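The str-vs-text_type distinction only bites on Python 2, where str() implicitly encodes to ASCII. A minimal sketch of the pattern (the event class and message text are hypothetical; only the six.text_type usage reflects the commit):

```python
try:
    from six import text_type  # unicode on py2, str on py3
except ImportError:
    text_type = str  # pure py3: str is already unicode-safe


def describe_event(event):
    """Format an event for logging without tripping UnicodeEncodeError.

    On Python 2, str(event) would encode the (possibly translated,
    non-ASCII) repr to ASCII and raise; text_type(event) keeps it as
    unicode and lets the logging layer handle encoding.
    """
    return u"Emitting event %s" % text_type(event)
```

On Python 3 both spellings behave identically, which is why the bug only surfaced under py2 with non-English locales.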
* Merge "Refresh info_cache after deleting floating IP" into stable/mitaka  (Jenkins, 2016-09-19, 2 files, -2/+27)
* Refresh info_cache after deleting floating IP  (Michael Wurtz, 2016-09-07, 2 files, -2/+27)

  When deleting a floating IP associated with Neutron's info_cache, we don't refresh the info_cache after it is deleted. This patch makes it so the info_cache is refreshed when an associated floating IP is deleted. If there is no info_cache associated with the floating IP, then the info_cache is not refreshed.

  Change-Id: I8a8ae8cdbe2d9d77e7f1ae94ebdf6e4ad46eaf00
  Closes-Bug: #1614538
  (cherry picked from commit cdb9b6820dc17971bca24adfc0b56f030f0ae827)
* Merge "db: retry on deadlocks while adding an instance" into stable/mitaka  (Jenkins, 2016-09-19, 2 files, -0/+7)
* db: retry on deadlocks while adding an instance  (John Garbutt, 2016-09-08, 2 files, -0/+7)

  We are hitting deadlocks in the gate when inserting the new instance_extra row into the DB. We should follow up this fix and look at ways to avoid the deadlock happening rather than retrying it. It currently doesn't happen too often, so this should be enough to stop the problem while we work on a better fix.

  Closes-Bug: #1480305
  Change-Id: Iba218bf28c7d1e6040c551fe836d6fa5e5e45f4d
  (cherry picked from commit df15e467b61fee781e78b07bf910d6b411bafd44)
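Retry-on-deadlock is usually applied as a decorator around the DB API call (Nova gets this from oslo.db's retry machinery). A generic, self-contained sketch of the pattern — the exception class and backoff constants are illustrative, not oslo.db's API:

```python
import functools
import time


class DBDeadlock(Exception):
    """Stand-in for the deadlock error the DB layer would raise."""


def retry_on_deadlock(max_retries=3):
    """Retry a DB operation a few times if it hits a deadlock.

    Deadlocks are transient: one of the competing transactions is
    rolled back by the server, so simply re-running the statement
    after a short backoff usually succeeds.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries + 1):
                try:
                    return fn(*args, **kwargs)
                except DBDeadlock:
                    if attempt == max_retries:
                        raise  # give up after the final attempt
                    time.sleep(0.001 * (2 ** attempt))  # tiny backoff
        return wrapper
    return decorator
```

As the commit message notes, this papers over the symptom; the real fix is to stop the transactions from conflicting in the first place.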
* Merge "ironic_host_manager: fix population of instances info on start" into stable/mitaka  (Jenkins, 2016-09-19, 5 files, -15/+87)
* ironic_host_manager: fix population of instances info on start  (Roman Podoliaka, 2016-09-08, 5 files, -15/+87)

  IronicHostManager currently overrides the _init_instance_info() method of the base class and unconditionally skips population of instances information for all compute nodes, even if they are not Ironic ones. If there are compute nodes with a hypervisor_type other than Ironic in the same cloud, the instances info will be missing in nova-scheduler (if IronicHostManager is configured as the host manager impl in nova.conf), which will effectively break instance affinity filters like DifferentHostFilter or SameHostFilter, which check set intersections of instances running on a particular host and the ones passed as a hint to nova-scheduler in a boot request.

  IronicHostManager should use the method implementation of the base class for non-ironic compute nodes. Ib1ddb44d71f7b085512c1f3fc0544f7b00c754fe fixed the problem with scheduling; this change is needed to make sure we also populate the instances info on start of nova-scheduler.

  Closes-Bug: #1606496
  Co-Authored-By: Timofei Durakov <tdurakov@mirantis.com>
  (cherry picked from cc64a45d98d7576a78a853cc3da8109c31f4b75d)
  Change-Id: I9d8d2dc99773df4097c178d924d182a0d1971bcc
* Merge "ironic_host_manager: fix population of instances info on schedule" into stable/mitaka  (Jenkins, 2016-09-19, 2 files, -3/+48)
* ironic_host_manager: fix population of instances info on schedule  (Roman Podoliaka, 2016-09-08, 2 files, -3/+48)

  IronicHostManager currently overrides the _get_instance_info() method of the base class and unconditionally returns an empty dict of instances for a given compute node. The problem with that is that in a heterogeneous cloud with both libvirt and ironic compute nodes this will always return {} for the former too, which is incorrect and will effectively break instance affinity filters like DifferentHostFilter or SameHostFilter, which check set intersections of instances running on a particular host and the ones passed as a hint to nova-scheduler in a boot request.

  IronicHostManager should use the method implementation of the base class for non-ironic compute nodes. This is a partial fix which only modifies _get_instance_info() called down the select_destinations() stack. A following change will modify _init_instance_info(), which pre-populates node instances info on start of a nova-scheduler process.

  Partial-Bug: #1606496
  (cherry picked from af218caba4f532c7d182071ab4304d49d02de08f)
  Change-Id: Ib1ddb44d71f7b085512c1f3fc0544f7b00c754fe
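The shape of both ironic_host_manager fixes is the same: dispatch on the node's hypervisor type instead of short-circuiting unconditionally. A simplified sketch, assuming a minimal ComputeNode stand-in (not Nova's real host-manager classes):

```python
class ComputeNode:
    """Toy compute-node record for illustration."""

    def __init__(self, hypervisor_type, instances):
        self.hypervisor_type = hypervisor_type
        self.instances = instances  # {uuid: instance} mapping


def base_get_instance_info(node):
    """Stand-in for the base HostManager behavior: report real instances."""
    return node.instances


def ironic_get_instance_info(node):
    """The fixed IronicHostManager behavior.

    Only Ironic nodes get the empty-dict shortcut; any other
    hypervisor type (libvirt/QEMU, etc.) defers to the base class,
    so affinity filters see the instances actually on the host.
    """
    if node.hypervisor_type == 'ironic':
        return {}
    return base_get_instance_info(node)
```

Before the fix, the equivalent of `return {}` ran for every node, which is why DifferentHostFilter/SameHostFilter silently matched everything in mixed clouds.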
* Merge "VMware: Use Port Group and Key in binding details" into stable/mitaka  (Jenkins, 2016-09-19, 4 files, -12/+65)
* VMware: Use Port Group and Key in binding details  (Thomas Bachman, 2016-04-12, 4 files, -12/+65)

  This uses the port group and port key information passed via the binding:vif_details attribute, if available. This allows these parameters to be passed explicitly.

  Change-Id: I41949e8134c2ca860e7b7ad3a2679b9f2884a99a
  Closes-Bug: #1552786
  (cherry picked from commit e964b4778cfe6a1864718bdad4ab037ddf976766)
* Merge "libvirt: Prevent block live migration with tunnelled flag" into stable/mitaka  (Jenkins, 2016-09-19, 2 files, -5/+148)
* libvirt: Prevent block live migration with tunnelled flag  (Eli Qiao, 2016-06-13, 2 files, -5/+148)

  libvirt will report "Selecting disks to migrate is not implemented for tunneled migration" while doing block migration with the VIR_MIGRATE_TUNNELLED flag.

  This patch makes 2 changes:
  1. Raise exception.MigrationPreCheckError for block live migration with mapped volumes and the tunnelled flag on.
  2. Remove migrate_disks from the params of migrateToURI3 in the case of tunnelled block live migration w/o mapped volumes, since we want to copy all disks to the destination.

  Co-Authored-By: Pawel Koniszewski <pawel.koniszewski@intel.com>
  Closes-Bug: #1576093

  Conflicts:
  nova/tests/unit/virt/libvirt/test_driver.py
  nova/virt/libvirt/driver.py

  Change-Id: Id6e49f298133c53d21386ea619c83e413ef3117a
  (cherry picked from commit 1885a39083776605348523002f4a6aedace12cce)
* Merge "Properly quote IPv6 address in RsyncDriver" into stable/mitaka  (Jenkins, 2016-09-16, 4 files, -3/+74)
* Properly quote IPv6 address in RsyncDriver  (Alexey I. Froloff, 2016-08-18, 4 files, -3/+74)

  When an IPv6 address literal is used as the host in an rsync call, it should be enclosed in square brackets. This is already done for the copy_file method outside of the driver in changeset Ia5f28673e79158d948980f2b3ce496c6a56882af.

  Create a helper function format_remote_path(host, path) and use it where appropriate.

  Closes-Bug: 1601822
  Change-Id: Ifc386539f33684fb764f5f638a7ee0a10b1ef534
  (cherry picked from commit 270be6906c13bc621a7ad507b8ae729a940609d2)
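The helper named in the commit is easy to picture; a sketch of what a format_remote_path(host, path) could look like (this is a guess at the shape, not the exact implementation merged in Nova):

```python
def format_remote_path(host, path):
    """Build a HOST:PATH argument safe for rsync/scp.

    An IPv6 literal contains ':' characters, which rsync would
    otherwise misparse as the host/path separator, so it must be
    wrapped in square brackets: [fe80::1]:/srv/data.
    """
    if host is None:
        return path  # local path, no host prefix needed
    if ':' in host:  # cheap heuristic: only IPv6 literals contain ':'
        host = '[%s]' % host
    return '%s:%s' % (host, path)
```

Example: `format_remote_path('fe80::1', '/srv/data')` yields `[fe80::1]:/srv/data`, while an IPv4 host passes through unbracketed.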
* Merge "Ensures that progress_watermark and progress_time are updated" into stable/mitaka  (Jenkins, 2016-09-16, 2 files, -5/+41)
* Ensures that progress_watermark and progress_time are updated  (Gaudenz Steinlin, 2016-08-17, 2 files, -5/+41)

  Ensure that progress_watermark and progress_time are updated when progress is made, even if progress_watermark is set to 0 in the first iteration due to info.data_remaining being equal to 0.

  Closes-Bug: #1591240
  Change-Id: I36c38a4ef049c995f715f2a8274c77fd8504b546
  (cherry picked from commit 6283b16ceb7eb9f70e64846b3cefd258642a4c65)
* Merge "Imported Translations from Zanata" into stable/mitaka  (Jenkins, 2016-09-09, 11 files, -65/+38)
* Imported Translations from Zanata  (OpenStack Proposal Bot, 2016-09-08, 11 files, -65/+38)

  For more information about this automatic import see: https://wiki.openstack.org/wiki/Translations/Infrastructure

  Change-Id: I3a78c07daf280e063680b616eaf887645f60a236
* Merge "Add networks to quota's update json-schema when network quota enabled" into stable/mitaka  (Jenkins, 2016-09-08, 3 files, -0/+40)
* Add networks to quota's update json-schema when network quota enabled  (He Jie Xu, 2016-09-02, 3 files, -0/+40)

  Enabling the network quota is configurable, but the json-schema for quota update wasn't updated according to that configuration, which meant users couldn't update the network quota from the API. This patch adds the network quota to the schema when it's enabled.

  Change-Id: I922525bfe55c42d2bf187c3bcd30536c8adf1c4f
  Closes-Bug: #1606740
  (cherry picked from commit 767b3db4116e7fba8275c67c2a4d7caad0bf647b)
* Merge "Run shelve/shelve_offload_instance in a semaphore" into stable/mitaka  (Jenkins, 2016-09-08, 1 file, -2/+17)
* Run shelve/shelve_offload_instance in a semaphore  (Matt Riedemann, 2016-08-19, 1 file, -2/+17)

  When an instance is shelved, by default it is immediately offloaded because CONF.shelved_offload_time defaults to 0. When a shelved instance is offloaded, it's destroyed and its host/node values are nulled out.

  Unshelving an instance is basically the same flow as building an instance for the first time. The instance.host/node values are set in the resource tracker when claiming resources.

  Tempest has some tests which use a shared server resource and perform actions on that shared server. These tests are triggering a race when unshelve is called while the compute is offloading the shelved instance. The race hits a window where unshelve is running before shelve_offload_instance nulls out the instance host/node values. The resource claim during unshelve sets the host/node values (which were actually already set) and then shelve_offload_instance nulls them out. The unshelve operation sets the instance.vm_state to ACTIVE, however, so Tempest sees an instance that's ACTIVE and thinks it can run the next action test on it, for example 'suspend'. This fails because the instance.host isn't set (from shelve_offload_instance) and the test fails in the compute API.

  To close the race window, we add a lock to shelve_instance and shelve_offload_instance to match the lock that's in unshelve_instance. This way when unshelve is called it will wait until the shelve_offload_instance operation is complete and the instance.host value is nulled out.

  Closes-Bug: #1611008
  Change-Id: Id36b3b9516d72d28519c18c38d98b646b47d288d
  (cherry picked from commit e285eb1a382e6d3ce1cc596eeb5cecb3b165a228)
* Merge "HyperV: remove instance snapshot lock" into stable/mitaka  (Jenkins, 2016-09-08, 2 files, -33/+0)
* HyperV: remove instance snapshot lock  (Lucian Petrut, 2016-08-18, 2 files, -33/+0)

  At the moment, the instance snapshot operation is synchronized using the instance uuid. This was added some time ago, as the instance destroy operation was failing when an instance snapshot was in progress.

  This is now causing a deadlock, as a similar lock was recently introduced in the manager for the shelve operation by change Id36b3b9516d72d28519c18c38d98b646b47d288d.

  We can safely remove the lock from the HyperV driver as we now stop pending jobs when destroying instances.

  Closes-Bug: #1611321
  Change-Id: I1c2ca0d24c195ebaba442bbb7091dcecc0a7e781
  (cherry picked from commit c7af24ca8279226adc5cd8fa0984c6fd79e26d67)
* Merge "Return None in get_instance_id_by_floating_address" into stable/mitaka  (Jenkins, 2016-09-07, 2 files, -1/+26)
* Return None in get_instance_id_by_floating_address  (Ken'ichi Ohmichi, 2016-08-20, 2 files, -1/+26)

  _show_port() can raise a PortNotFound exception, but get_instance_id_by_floating_address() doesn't handle it. On the other hand, the method returns None if the fip doesn't contain a port_id, as a normal case. On the caller side, the "Delete a floating ip" API can pass the returned None to disassociate_and_release_floating_ip(), which handles None as a normal value. So this patch makes get_instance_id_by_floating_address return None if PortNotFound happens.

  Closes-Bug: #1586931
  Change-Id: I03be8100155d343eb6a4ea9eda3f1498ad3fb4cf
  (cherry picked from commit e72826123bfd7c1d962b615da3f028b315ba3943)
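The fix amounts to treating "port already gone" the same as "no port associated". A simplified sketch of that control flow (the exception class, fip dict shape and show_port callback are stand-ins for the Neutron client pieces, not Nova's exact code):

```python
class PortNotFound(Exception):
    """Stand-in for the Neutron PortNotFound exception."""


def get_instance_id_by_floating_address(fip, show_port):
    """Return the instance id for a floating IP, or None.

    None is returned both when the fip has no associated port and when
    the port has vanished (PortNotFound), so callers like the floating
    IP delete path treat an already-gone port as 'not associated'
    instead of blowing up with a 500.
    """
    if not fip.get('port_id'):
        return None  # normal case: fip was never associated
    try:
        port = show_port(fip['port_id'])
    except PortNotFound:
        return None  # port deleted out from under us: same as unassociated
    return port.get('device_id')
```

Usage: the caller can unconditionally pass the result onward, since None is already an accepted value downstream.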
* Merge "List instances for secgroup without joining on rules" into stable/mitaka  (Jenkins, 2016-09-01, 5 files, -12/+23)
* List instances for secgroup without joining on rules  (Paul Griffin, 2016-08-13, 5 files, -12/+23)

  Make db.security_group_get only join rules if specified in columns_to_join. This works around a performance issue with lots of instances and security groups.

  NOTE(mriedem): A legacy_v2 API test had to be updated which didn't exist in the original fix in Newton.

  Co-Authored-By: Dan Smith <dansmith@redhat.com>
  Change-Id: Ie3daed133419c41ed22646f9a790570ff47f0eec
  Closes-Bug: #1552971
  (cherry picked from commit e70468e87537965b5db61f32e72ececde84531f2)
* Merge "Use stashed volume connector in _local_cleanup_bdm_volumes" into stable/mitaka  (Jenkins, 2016-08-24, 2 files, -14/+105)
* Use stashed volume connector in _local_cleanup_bdm_volumes  (Andrea Rosa, 2016-08-06, 2 files, -14/+105)

  When we perform a local delete in the compute API during the volumes cleanup, Nova calls volume_api.terminate_connection passing a fake volume connector. That call is useless and has no real effect on the volume server side.

  With a686185fc02ec421fd27270a343c19f668b95da6 in mitaka we started stashing the connector in bdm.connection_info, so this change tries to use that if it's available, which it won't be for attachments made before that change -or- for volumes attached to an instance in shelved_offloaded state that were never unshelved (because the actual volume attach for those instances happens on the compute manager after you unshelve the instance).

  If we can't find a stashed connector, or we find one whose host does not match the instance.host, we punt and just don't call terminate_connection at all, since calling it with a fake connector can actually make some cinder volume backends fail and then the volume is orphaned.

  Closes-Bug: #1609984
  Co-Authored-By: Matt Riedemann <mriedem@us.ibm.com>
  Change-Id: I9f9ead51238e27fa45084c8e3d3edee76a8b0218
  (cherry picked from commit 33510d4be990417d2a3a428106f6f745db5af6ed)
* Merge "Return HTTP 200 on list for invalid status" into stable/mitaka  (Jenkins, 2016-08-24, 3 files, -11/+33)
* Return HTTP 200 on list for invalid status  (EdLeafe, 2016-07-01, 3 files, -11/+33)

  The server listing API raises a 500 error if you pass an invalid status filter as an admin user. In the case of a non-admin user, it simply returns an empty list. In the case of an admin user, it fetches extended server attributes, so a condition was added to get extended server attributes only when the servers list is not empty.

  This change simply removes the cause of the 500 exception. A subsequent patch with a microversion bump will modify the behavior so that a 400 Bad Request is raised for an invalid status, for both admin and non-admin alike.

  Co-Authored-By: Dinesh Bhor <dinesh.bhor@nttdata.com>
  Closes-Bug: #1579706
  Change-Id: I10bde78f0a9ac59b8646d58f62fa5056f989f54f
  (cherry picked from commit ee4d69e28dfb3d4764186d0c0212d53c99bda3ca)
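The behavior the fix converges on — an unknown status filter simply matches nothing — can be sketched as below. This is a toy illustration of the filtering semantics, not the Nova API controller code:

```python
def filter_servers_by_status(servers, status, valid_statuses):
    """List servers matching a status filter.

    An unrecognized status matches no server, so the API returns
    HTTP 200 with an empty list for admins and non-admins alike,
    instead of an admin-only 500.
    """
    if status not in valid_statuses:
        return []  # invalid filter: empty result, not an error
    return [s for s in servers if s['status'] == status]
```

A later microversion (per the commit message) would instead reject the invalid filter up front with a 400 before any listing happens.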
* Merge "libvirt: Fix ssh driver to prevent prompting" into stable/mitaka  (Jenkins, 2016-08-24, 2 files, -28/+28)
* libvirt: Fix ssh driver to prevent prompting  (Moshe Levi, 2016-05-05, 2 files, -28/+28)

  This patch fixes the bug which was resolved in patch Ib1e38f397afaac96a2e1a8717c87a4b6756419db but got lost after the merge of patch I586a9faa2a7afa3f195239df305898b6da4fb583.

  Change-Id: I75d09832980a88752b061309e80c3fcfce1f2fcc
  (cherry picked from commit b274a85e968242cfb9bf44925a7266ecd4ce2243)
* Merge "VMware: enable a resize of instance with no root disk" into stable/mitaka  (Jenkins, 2016-08-24, 2 files, -21/+84)