* Merge "Update rpc version aliases for kilo" into stable/kilo  (tags: 2015.1.0rc3, 2015.1.0) -- Jenkins, 2015-04-28 (7 files, -8/+30)

  * Update rpc version aliases for kilo -- He Jie Xu, 2015-04-28 (7 files, -8/+30)

    Update all of the rpc client API classes to include a version alias for
    the latest version implemented in Kilo. This alias is needed when doing
    rolling upgrades from Kilo to Liberty. With this in place, you can
    ensure all services only send messages that both Kilo and Liberty will
    understand.

    Closes-Bug: #1444745

    Conflicts:
        nova/conductor/rpcapi.py

    NOTE(alex_xu): The conflict is due to some log statements having
    already been added on master.

    Change-Id: I2952aec9aae747639aa519af55fb5fa25b8f3ab4
    (cherry picked from commit 78a8b5802ca148dcf37c5651f75f2126d261266e)
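The aliasing scheme this commit describes can be sketched as follows. This is a hedged illustration, not Nova's actual code: the `resolve_version_cap` helper and the Juno value are invented for the example; only the general pattern (a per-release alias mapped to a concrete RPC version cap) comes from the commit message.

```python
# Sketch: map a release name to the newest RPC message version that
# release understands, so a deployer can pin the compute upgrade level
# during a rolling upgrade and all clients keep sending messages the
# not-yet-upgraded services can decode.

VERSION_ALIASES = {
    'juno': '3.35',   # hypothetical cap for Juno
    'kilo': '4.0',    # newest version implemented in Kilo
}


def resolve_version_cap(upgrade_level, latest='4.0'):
    """Translate a configured upgrade level into a concrete version cap."""
    if upgrade_level is None:
        return latest  # no pin configured: speak the newest version
    # Accept either a release alias ('kilo') or an explicit version ('3.40').
    return VERSION_ALIASES.get(upgrade_level, upgrade_level)


print(resolve_version_cap('kilo'))   # 4.0
print(resolve_version_cap(None))     # 4.0
print(resolve_version_cap('3.40'))   # 3.40
```

With such a mapping in place, setting the upgrade level to "kilo" caps every outgoing message at a version both sides of the upgrade understand.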
* Merge "Fix migrate_flavor_data() to catch instances with no instance_extra rows" into stable/kilo -- Jenkins, 2015-04-28 (2 files, -1/+31)

  * Fix migrate_flavor_data() to catch instances with no instance_extra rows -- Dan Smith, 2015-04-23 (2 files, -1/+31)

    The way the query was being performed previously, we would not see any
    instances that didn't have a row in instance_extra. This could happen
    if an instance hasn't been touched for several releases, or if the data
    set is old. The fix is a simple change to use outerjoin instead of
    join.

    This patch includes a test that ensures that instances with no
    instance_extra rows are included in the migration. If we query an
    instance without such a row, we create it before doing a save on the
    instance.

    Closes-Bug: #1447132
    Change-Id: I2620a8a4338f5c493350f26cdba3e41f3cb28de7
    (cherry picked from commit 92714accc49e85579f406de10ef8b3b510277037)
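The join-vs-outerjoin difference the fix hinges on can be shown with a minimal, self-contained example. The table and column names below are illustrative stand-ins, not Nova's real schema:

```python
# Demonstrates why an inner JOIN silently drops instances that have no
# instance_extra row, while a LEFT OUTER JOIN still returns them.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE instances (uuid TEXT PRIMARY KEY);
    CREATE TABLE instance_extra (instance_uuid TEXT, flavor TEXT);
    INSERT INTO instances VALUES ('a'), ('b');
    -- Only instance 'a' has an instance_extra row.
    INSERT INTO instance_extra VALUES ('a', 'm1.small');
""")

inner = conn.execute("""
    SELECT i.uuid FROM instances i
    JOIN instance_extra e ON e.instance_uuid = i.uuid
""").fetchall()

outer = conn.execute("""
    SELECT i.uuid, e.flavor FROM instances i
    LEFT OUTER JOIN instance_extra e ON e.instance_uuid = i.uuid
""").fetchall()

print(len(inner))  # 1 -- instance 'b' is skipped by the inner join
print(len(outer))  # 2 -- the outer join returns it, with flavor NULL
```

A migration built on the inner-join query would never visit instance 'b', which is exactly the bug described above.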
* Add security group calls missing from latest compute rpc api version bump -- Hans Lindgren, 2015-04-27 (5 files, -44/+39)

    The recent compute rpc api version bump missed out on the security
    group related calls that are part of the api. One possible reason is
    that both the compute and security group client-side rpc APIs share a
    single target, which is of little value and only causes mistakes like
    this.

    This change eliminates future problems like this by combining them into
    one to get a 1:1 relationship between the client and server APIs.

    Change-Id: I9207592a87fab862c04d210450cbac47af6a3fd7
    Closes-Bug: #1448075
    (cherry picked from commit bebd00b117c68097203adc2e56e972d74254fc59)
* Merge "Control create/delete flavor api permissions using policy.json" into stable/kilo  (tag: 2015.1.0rc2) -- Jenkins, 2015-04-23 (2 files, -13/+2)

  * Control create/delete flavor api permissions using policy.json -- Divya, 2015-04-22 (2 files, -13/+2)

    The permissions of the create/delete flavor API are currently broken:
    the user is always expected to be an admin, instead of the permissions
    being controlled by the rules defined in the nova policy.json.

    Change-Id: Ide3c9ec2fa674b4fe3ea9d935cd4f7848914b82e
    Closes-Bug: #1445335
    (cherry picked from commit ced60b1d1b1608dc8229741b207a95498bc0b212)
* Merge "Release Import of Translations from Transifex" into stable/kilo -- Jenkins, 2015-04-23 (10 files, -9519/+641)

  * Release Import of Translations from Transifex -- Andreas Jaeger, 2015-04-20 (10 files, -9519/+641)

    Manual import of translations from Transifex. This change also removes
    all po files that are less than 66 per cent translated, since such
    partially translated files will not help users.

    This update also recreates all pot files (translation source files) to
    reflect the state of the repository.

    This change needs to be done manually since the automatic import does
    not handle the proposed branches and we need to sync with the latest
    translations.

    Change-Id: I0e9ef00182a2229602d23b8a67a02f0be62ee239
* Updated from global requirements -- OpenStack Proposal Bot, 2015-04-23 (2 files, -6/+6)

    Change-Id: I5d4acd36329fe2dccb5772fed3ec55b442597150
* Merge "Fix handling of pci_requests in consume_from_instance." into stable/kilo -- Jenkins, 2015-04-22 (4 files, -10/+85)

  * Fix handling of pci_requests in consume_from_instance. -- Przemyslaw Czesnowicz, 2015-04-21 (4 files, -10/+85)

    Properly retrieve requests from pci_requests in consume_from_instance.
    Without this, the call to numa_fit_instance_to_host will fail because
    it expects the request list.

    Also change the order in which apply_requests and
    numa_fit_instance_to_host are called: calling apply_requests first will
    remove devices from pools and may make numa_fit_instance_to_host fail.

    Change-Id: I41cf4e8e5c1dea5f91e5261a8f5e88f46c7994ef
    Closes-Bug: #1444021
    (cherry picked from commit 0913e799e9ce3138235f5ea6f80159f468ad2aaa)
* Merge "Use list of requests in InstancePCIRequests.obj_from_db." into stable/kilo -- Jenkins, 2015-04-22 (3 files, -3/+16)

  * Use list of requests in InstancePCIRequests.obj_from_db. -- Przemyslaw Czesnowicz, 2015-04-21 (3 files, -3/+16)

    InstancePCIRequests.obj_from_db assumes it is called with a dict of
    values from the instance_extra table, but in some cases it is called
    with just the value of the pci_requests column. This changes
    obj_from_db to be used with just the value of the pci_requests column.

    Change-Id: I7bed733c845c365081719a70b8a2f0cc9a58370c
    Closes-Bug: #1445040
    (cherry picked from commit a074d7b4465b45730a5171e024c5c39a66a9c927)
* Merge "Add numa_node field to PciDevicePool" into stable/kilo -- Jenkins, 2015-04-22 (7 files, -16/+45)

  * Add numa_node field to PciDevicePool -- Przemyslaw Czesnowicz, 2015-04-21 (7 files, -16/+45)

    Without this field, PciDevicePool.from_dict will treat the numa_node
    key in the dict as a tag, which in turn means that the scheduler client
    will drop it when converting stats to objects before reporting.
    Converting back to dicts on the scheduler side thus will not have
    access to the numa_node information, which would cause any requests
    that look for an exact match between the device and instance NUMA nodes
    in the NUMATopologyFilter to fail.

    Closes-Bug: #1441169
    (cherry picked from commit 7db1ebc66c59205f78829d1e9cd10dcc1201d798)

    Conflicts:
        nova/tests/unit/objects/test_objects.py

    Change-Id: I7381f909620e8e787178c0be9a362f8d3eb9ff7d
* Merge "scheduler: re-calculate NUMA on consume_from_instance" into stable/kilo -- Jenkins, 2015-04-22 (4 files, -44/+35)

  * scheduler: re-calculate NUMA on consume_from_instance -- Nikola Dipanov, 2015-04-21 (4 files, -44/+35)

    This patch narrows the race window between the filter running and the
    consumption of resources from the instance after the host has been
    chosen. It does so by re-calculating the fitted NUMA topology just
    before consuming it from the chosen host. Thus we avoid any locking,
    but also make sure that the host_state is kept as up to date as
    possible for concurrent requests, as there is no opportunity for
    switching threads inside a consume_from_instance.

    Several things worth noting:

    * The scheduler being lock-free (and thus racy) does not really affect
      resources other than PCI and NUMA topology this badly - this is due
      to the complexity of said resources. In order for scheduler decisions
      to not be based on basically guessing, in the case of those two we
      will likely need to introduce either locking or special heuristics.

    * There is a lot of repeated code between the 'consume_from_instance'
      method and the actual filters. This situation should really be fixed
      but is out of scope for this bug fix (which is about preventing valid
      requests failing because of races in the scheduler).

    Change-Id: If0c7ad20506c9dddf4dec1eb64c9d6dd4fb75633
    Closes-Bug: #1438238
    (cherry picked from commit d6b3156a6c89ddff9b149452df34c4b32c50b6c3)
* Merge "Fix kwargs['migration'] KeyError in @errors_out_migration decorator" into stable/kilo -- Jenkins, 2015-04-22 (2 files, -1/+30)

  * Fix kwargs['migration'] KeyError in @errors_out_migration decorator -- Rajesh Tailor, 2015-04-20 (2 files, -1/+30)

    The @errors_out_migration decorator is used in the compute manager on
    the resize_instance and finish_resize methods of the ComputeManager
    class. It is decorated via @utils.expects_func_args('migration') to
    check that 'migration' is a parameter of the decorated method; however,
    that only ensures there is a migration argument, not that it is in args
    or kwargs (either is fine for what expects_func_args checks).

    The errors_out_migration decorator can therefore get a KeyError when
    checking kwargs['migration'] and fail to set the migration status to
    'error'.

    This fixes the KeyError in the decorator by normalizing the args/kwargs
    list into a single dict that we can pull the migration from.

    Change-Id: I774ac9b749b21085f4fbcafa4965a78d68eec9c7
    Closes-Bug: #1444300
    (cherry picked from commit 3add7923fc16c050d4cfaef98a87886c6b6a589c)
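The normalization technique the fix describes, binding args and kwargs against the wrapped function's signature so the decorator can find 'migration' wherever the caller put it, can be sketched like this. The names below are illustrative, not the actual Nova implementation:

```python
# Sketch: a decorator that marks a migration as errored on failure,
# regardless of whether 'migration' was passed positionally or by keyword.
import functools
import inspect


def errors_out_migration(fn):
    @functools.wraps(fn)
    def wrapper(self, *args, **kwargs):
        try:
            return fn(self, *args, **kwargs)
        except Exception:
            # Bind positional and keyword arguments against fn's signature,
            # producing a single name->value dict.
            bound = inspect.signature(fn).bind(self, *args, **kwargs)
            migration = bound.arguments['migration']
            migration['status'] = 'error'  # stand-in for migration.save()
            raise
    return wrapper


class Manager:
    @errors_out_migration
    def resize_instance(self, context, instance, migration):
        raise RuntimeError('boom')


m = Manager()
mig = {'status': 'migrating'}
try:
    # 'migration' is passed positionally; kwargs['migration'] would KeyError.
    m.resize_instance('ctxt', 'inst', mig)
except RuntimeError:
    pass
print(mig['status'])  # error
```

Looking only at `kwargs['migration']` fails for exactly the call above; binding through the signature handles both calling styles.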
* Merge "Resource tracker: unable to restart nova compute" into stable/kilo -- Jenkins, 2015-04-22 (2 files, -3/+16)

  * Resource tracker: unable to restart nova compute -- Gary Kotton, 2015-04-20 (2 files, -3/+16)

    The resource tracker calculates its used resources. In certain cases of
    failed migrations combined with the instance being deleted, the
    resource tracker causes an exception in nova compute. If this situation
    arises, nova compute may not even be able to restart.

    Change-Id: I4a154e0cae3b8e22bd59ed05ba708e07eed8dea7
    Closes-Bug: #1444439
    (cherry picked from commit ee7a7446cc6947a6bacacb6cb514934cc22e5782)
* Merge "Add min/max of API microversions to version API" into stable/kilo -- Jenkins, 2015-04-21 (5 files, -0/+29)

  * Add min/max of API microversions to version API -- Ken'ichi Ohmichi, 2015-04-15 (5 files, -0/+29)

    As the nova-spec api-microversions describes, the versions API needs to
    expose the minimum and maximum microversions, because clients need to
    know the available microversions through the API. That is very
    important for interoperability. This patch adds these versions as the
    nova-spec mentions.

    Note: Under the v2 (not v2.1) API change rules, we have added new
    extensions when changing the API. However, this patch doesn't add a new
    extension even though it adds the new parameters "version" and
    "min_version", because the version API is independent from both the v2
    and v2.1 APIs.

    Change-Id: Id464a07d624d0e228fe0aa66a04c8e51f292ba0c
    Closes-Bug: #1443375
    (cherry picked from commit 1830870718fe7472b47037f3331cfe59b5bdda07)
    (cherry picked from commit 853671e912c6ad9a4605acad2575417911875cdd)
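To illustrate what clients gain from this, here is a hedged sketch of the negotiation the commit enables. The microversion numbers and the `supports` helper are invented for the example; only the "version" and "min_version" field names come from the commit message:

```python
# Sketch: with min/max microversions exposed in the version document, a
# client can check whether a feature's microversion is in range before
# making a request, instead of relying on trial and error.
version_doc = {
    "version": {
        "id": "v2.1",
        "status": "CURRENT",
        "version": "2.3",       # maximum supported microversion (example)
        "min_version": "2.1",   # minimum supported microversion (example)
    }
}


def supports(doc, requested):
    """Check that a requested microversion falls in the advertised range."""
    lo = tuple(map(int, doc["version"]["min_version"].split(".")))
    hi = tuple(map(int, doc["version"]["version"].split(".")))
    return lo <= tuple(map(int, requested.split("."))) <= hi


print(supports(version_doc, "2.2"))   # True
print(supports(version_doc, "2.10"))  # False
```

Note the tuple comparison: microversions order numerically per component, so "2.10" is newer than "2.3" even though it sorts earlier as a string.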
* Merge "Use kwargs from compute v4 proxy change_instance_metadata" into stable/kilo -- Jenkins, 2015-04-21 (1 file, -1/+2)

  * Use kwargs from compute v4 proxy change_instance_metadata -- Matt Riedemann, 2015-04-16 (1 file, -1/+2)

    The args were passed to the compute manager method in the wrong order.
    We noticed this in the gate with KeyError: 'uuid' in the logs because
    of the LOG.debug statement in change_instance_metadata. Just use kwargs
    like rpcapi would normally.

    There isn't a unit test for this since the v4 proxy code goes away in
    Liberty; this is for getting the fix into stable/kilo.

    Closes-Bug: #1444728
    Change-Id: Ic988f48d99e626ee5773c97904e09dbf00c5414a
    (cherry picked from commit e55f746ea8590cce7c2b07a023197f369251a7ef)
* Merge "compute: stop handling virt lifecycle events in cleanup_host()" into stable/kilo -- Jenkins, 2015-04-21 (2 files, -0/+5)

  * compute: stop handling virt lifecycle events in cleanup_host() -- Matt Riedemann, 2015-04-16 (2 files, -0/+5)

    When rebooting a compute host, guest VMs can be shut down automatically
    by the hypervisor, and the virt driver sends events to the compute
    manager to handle them. If the compute service is still up while this
    happens, it will try to call the stop API to power off the instance and
    update the database to show the instance as stopped. When the compute
    service comes back up and events come in from the virt driver saying
    the guest VMs are running, nova will see that the vm_state on the
    instance in the nova database is STOPPED and shut down the instance by
    calling the stop API (basically ignoring what the virt driver /
    hypervisor tells nova is the state of the guest VM).

    Alternatively, if the compute service shuts down after changing the
    instance task_state to 'powering-off' but before the stop API cast is
    complete, the instance can be left in a strange vm_state/task_state
    combination that requires the admin to manually reset the task_state to
    recover the instance.

    Let's just try to avoid some of this mess by disconnecting the event
    handling when the compute service is shutting down, like we do for
    neutron VIF plugging events. There could still be races here if the
    compute service is shutting down after the hypervisor (e.g. libvirtd),
    but this is at least a best attempt to mitigate the potential damage.

    Closes-Bug: #1444630
    Related-Bug: #1293480
    Related-Bug: #1408176
    Change-Id: I1a321371dff7933cdd11d31d9f9c2a2f850fd8d9
    (cherry picked from commit d1fb8d0fbdd6cb95c43b02f754409f1c728e8cd0)
* Merge "Forbid booting of QCOW2 images with virtual_size > root_gb" into stable/kilo -- Jenkins, 2015-04-21 (3 files, -9/+53)

  * Forbid booting of QCOW2 images with virtual_size > root_gb -- Roman Podoliaka, 2015-04-16 (3 files, -9/+53)

    Currently, it's possible to boot an instance from a QCOW2 image whose
    virtual_size is bigger than the one allowed by the given flavor
    (root_gb). The issue is caused by two different problems in the code:

    1) a typo in get_disk_size() made it always return None, effectively
       disabling the verify_base_size() checks

    2) the Rbd image backend skips the verify_base_size() step for 'cached'
       images (the ones with base files), so it is possible to boot an
       instance using a larger flavor once and then use smaller flavors to
       boot the same image, even if the allowed root_gb size is smaller
       than the image virtual size

    Closes-Bug: #1429093
    Change-Id: I383130e5f8cc288f4b428ed43fe4d3aba7169473
    (cherry picked from commit c1f9ed27af64e6893d9d0153a964df5aba99b8f0)
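The check this fix restores amounts to comparing the image's virtual size against the flavor's root disk before booting. Here is a minimal hedged sketch; the helper name matches the `verify_base_size` mentioned above, but the body is illustrative, not Nova's real implementation:

```python
# Sketch: refuse to boot when an image's virtual size would not fit the
# flavor's root disk.
GiB = 1024 ** 3


def verify_base_size(virtual_size_bytes, root_gb):
    """Raise if the image would not fit the flavor's root disk.

    A root_gb of 0 conventionally means 'no limit' (e.g. boot-from-volume),
    so the check is skipped in that case.
    """
    if root_gb and virtual_size_bytes > root_gb * GiB:
        raise ValueError(
            'image virtual size %d bytes exceeds flavor root disk %d GiB'
            % (virtual_size_bytes, root_gb))


verify_base_size(10 * GiB, 20)   # fits: no error
try:
    verify_base_size(40 * GiB, 20)
except ValueError as exc:
    print('rejected:', exc)
```

The bug report's point is that when the size lookup silently returns None, a comparison like the one above never fires, so oversized images boot and later fail or overrun their allocation.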
* Merge "Pass migrate_data to pre_live_migration" into stable/kilo -- Jenkins, 2015-04-21 (1 file, -1/+1)

  * Pass migrate_data to pre_live_migration -- Matt Riedemann, 2015-04-15 (1 file, -1/+1)

    Commit ebfa09fa197a1d88d1b3ab1f308232c3df7dc009 added an RPC proxy,
    but as part of that it passed migrate_data=None to pre_live_migration,
    which breaks live block migration when not using shared storage.

    Closes-Bug: #984996
    Change-Id: I2a83f1fb0e4468f9a6c67a188af725c3406139d1
    (cherry picked from commit 4e515ec2269a1c3187ee9ffad3a6be059ec74b0b)
* Merge "Fixed order of arguments during execution live_migrate()" into stable/kilo -- Jenkins, 2015-04-21 (1 file, -2/+2)

  * Fixed order of arguments during execution live_migrate() -- Timofey Durakov, 2015-04-15 (1 file, -2/+2)

    The order of arguments passed to ComputeManager.live_migration()
    differs between the ComputeManager and _ComputeV4Proxy classes.

    Change-Id: I23c25d219e9cdd0673ae6a12250219680fb7bda9
    Closes-Bug: #1442656
    (cherry picked from commit ba521fa53711774e0718808fe333aca676de57ae)
* Merge "VMware: Fix attribute error in resize" into stable/kilo -- Jenkins, 2015-04-21 (1 file, -1/+1)

  * VMware: Fix attribute error in resize -- Sabari Kumar Murugesan, 2015-04-15 (1 file, -1/+1)

    The class DatastorePath was recently removed from ds_util as it's
    available in oslo.vmware. One of the references was missed during the
    refactor.

    Change-Id: Idc5825c304a99e83cbf36e93751148d6f995131a
    Closes-Bug: #1440968
    (cherry picked from commit ab4a5a5300179a79f7a67688f0e9f3fc280c0efa)
* Merge "Release bdm constraint source and dest type" into stable/kilo -- Jenkins, 2015-04-21 (4 files, -10/+45)

  * Release bdm constraint source and dest type -- jichenjc, 2015-04-15 (4 files, -10/+45)

    https://bugs.launchpad.net/nova/+bug/1377958 fixed a problem where
    source_type: image, destination_type: local was not supported when
    booting an instance: an exception should be raised to reject the
    parameters, as they would otherwise leave the instance in the ERROR
    state. However, that fix introduced a problem in the nova client:
    https://bugs.launchpad.net/python-novaclient/+bug/1418484

    The fix for that bug made the following command invalid:

        nova boot test-vm --flavor m1.medium --image centos-vm-32
          --nic net-id=c3f40e33-d535-4217-916b-1450b8cd3987
          --block-device id=26b7b917-2794-452a-95e5-2efb2ca6e32d,bus=sata,source=volume,bootindex=1

    So we need to relax the original constraint to allow the above special
    case to pass the validation check, so that we can then revert the nova
    client exception (https://review.openstack.org/#/c/165932/).

    This patch checks the boot_index and whether the image parameter is
    given once a bdm with source_type: image, destination_type: local is
    found; if this is the special case, no exception is raised.

    Closes-Bug: #1433609
    Change-Id: If43faae95169bc3864449a8364975f5c887aac14
    (cherry picked from commit cadbcc440a2fcfb8532f38111999a06557fbafc2)
* Merge "Fix check_can_live_migrate_destination() in ComputeV4Proxy" into stable/kilo -- Jenkins, 2015-04-21 (1 file, -2/+2)

  * Fix check_can_live_migrate_destination() in ComputeV4Proxy -- Dan Smith, 2015-04-15 (1 file, -2/+2)

    There was a mismatch in the V4 proxy in the call signatures of this
    function. This was missed because the "destination" parameter is passed
    in the rpcapi as the host to contact, which is consumed by the rpc
    layer and not passed on. Since it was not given one of the standard
    names (either 'host' if not to be passed, or 'host_param' if to be
    passed), this was missed.

    Change-Id: Idf2160934dade650ed02b672f3b64cb26247f8e6
    Closes-Bug: #1442602
    (cherry picked from commit 0c08f7f2ef070f7c6172d7742f9789e0a8bda91a)
* update .gitreview for stable/kilo -- Doug Hellmann, 2015-04-15 (1 file, -0/+1)

    Change-Id: I6356513ac42b79402dbde8ee5e75cbbd1aee7eef
* Merge "Honor uuid parameter passed to nova-network create"  (refs: 2015.1.0rc1, proposed/kilo) -- Jenkins, 2015-04-10 (2 files, -0/+17)

  * Honor uuid parameter passed to nova-network create -- melanie witt, 2015-04-09 (2 files, -0/+17)

    The nova api for creating nova-network networks has an optional request
    parameter "id" which maps to the string uuid of the network to create.
    The nova-manage network create command represents it as the option
    --uuid. The parameter is currently being ignored by the nova-network
    manager. This change sets the uuid when creating the network if it has
    been specified.

    Closes-Bug: #1441931
    Change-Id: Ib29e632b09905f557a7a6910d58207ed91cdc047
* Merge "Refactor nova-net cidr validation in prep for bug fix" -- Jenkins, 2015-04-10 (1 file, -32/+35)

  * Refactor nova-net cidr validation in prep for bug fix -- melanie witt, 2015-04-09 (1 file, -32/+35)

    The network manager create_networks function is on the verge of the
    "too complex" pep8 check error, so this change refactors the cidr
    validation code out into a private function in preparation for fixing
    a bug.

    Change-Id: Id0603ddd642acccfa12ae53a52ecfb66dca53702
* Merge "Add compute RPC API v4.0" -- Jenkins, 2015-04-10 (3 files, -105/+821)

  * Add compute RPC API v4.0 -- Dan Smith, 2015-04-09 (3 files, -105/+821)

    This patch creates compute RPC API version 4.0, while retaining
    compatibility in rpcapi and manager for 3.x, allowing for continuous
    deployment scenarios.

    UpgradeImpact - Deployments doing continuous deployment should follow
    this process to upgrade without any downtime:

    1) Set [upgrade_levels] compute=kilo in your config.
    2) Upgrade to this commit.
    3) Once everything has been upgraded, remove the entry in
       [upgrade_levels] so that all rpc clients to the nova-compute
       service start sending the new 4.0 messages.

    Change-Id: Id96e77c739e7473774e110646204520d6163d8a5
* Merge "Update compute version alias for kilo" -- Jenkins, 2015-04-09 (1 file, -0/+1)

  * Update compute version alias for kilo -- Dan Smith, 2015-04-09 (1 file, -0/+1)

    This simply draws a line in the sand for the version of compute RPC
    that kilo supports, for use during upgrades by operators.

    Change-Id: I29103861e2e333cc877204cca62e2f26d3fafe66
* Merge "add ironic hypervisor type" -- Jenkins, 2015-04-09 (2 files, -1/+4)