Commit message
Update all of the rpc client API classes to include a version alias
for the latest version implemented in Kilo. This alias is needed when
doing rolling upgrades from Kilo to Liberty. With this in place, you can
ensure all services only send messages that both Kilo and Liberty will
understand.
Closes-Bug: #1444745
Conflicts:
nova/conductor/rpcapi.py
NOTE(alex_xu): The conflict is due to some log statements having
already been added on master.
Change-Id: I2952aec9aae747639aa519af55fb5fa25b8f3ab4
(cherry picked from commit 78a8b5802ca148dcf37c5651f75f2126d261266e)
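The alias mechanism described above can be sketched as a small lookup helper. This is a hedged model, not Nova's exact code: the alias name, version numbers, and the function itself are illustrative stand-ins for the VERSION_ALIASES handling in the client-side rpcapi classes.

```python
# Illustrative sketch of a release-name alias for an RPC version cap.
# The 'kilo' alias and the version strings are assumptions.

VERSION_ALIASES = {
    'kilo': '4.0',   # latest version implemented in Kilo (illustrative)
}

def resolve_version_cap(configured_cap, latest_version):
    """Translate a release-name alias (e.g. 'kilo') into a concrete RPC
    version cap; pass literal versions through, default to the latest."""
    if configured_cap is None:
        return latest_version
    return VERSION_ALIASES.get(configured_cap, configured_cap)
```

With this in place an operator can set the cap to a release name rather than tracking exact RPC version numbers.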
rows" into stable/kilo
|
| |/
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
With the way the query was previously being performed, we would not
see any instances that didn't have a row in instance_extra. This could
happen if an instance hasn't been touched for several releases, or if
the data set is old.
The fix is a simple change to use outerjoin instead of join. This patch
includes a test that ensures that instances with no instance_extra rows
are included in the migration. If we query an instance without such a
row, we create it before doing a save on the instance.
Closes-Bug: #1447132
Change-Id: I2620a8a4338f5c493350f26cdba3e41f3cb28de7
(cherry picked from commit 92714accc49e85579f406de10ef8b3b510277037)
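The inner-vs-outer join difference the fix relies on can be demonstrated with a minimal sqlite3 session; the table and column names below are simplified stand-ins for the real schema.

```python
# With a plain (inner) JOIN, instances lacking an instance_extra row
# silently disappear from the result set; LEFT OUTER JOIN keeps them.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE instances (uuid TEXT PRIMARY KEY);
    CREATE TABLE instance_extra (instance_uuid TEXT, flavor TEXT);
    INSERT INTO instances VALUES ('old-instance'), ('new-instance');
    INSERT INTO instance_extra VALUES ('new-instance', 'm1.small');
""")

inner = conn.execute(
    "SELECT i.uuid FROM instances i "
    "JOIN instance_extra e ON e.instance_uuid = i.uuid").fetchall()

outer = conn.execute(
    "SELECT i.uuid FROM instances i "
    "LEFT OUTER JOIN instance_extra e ON e.instance_uuid = i.uuid").fetchall()
```

Here `inner` misses the old instance entirely, while `outer` returns both rows, which is why the migration switched to outerjoin.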
The recent compute rpc api version bump missed out on the security group
related calls that are part of the api.
One possible reason is that the compute and security group client-side
RPC APIs share a single target, which is of little value and only
causes mistakes like this.
This change eliminates such future problems by combining them into
one, giving a 1:1 relationship between the client and server APIs.
Change-Id: I9207592a87fab862c04d210450cbac47af6a3fd7
Closes-Bug: #1448075
(cherry picked from commit bebd00b117c68097203adc2e56e972d74254fc59)
stable/kilo
The permissions of the create/delete flavor API are currently broken:
the user is always expected to be an admin, instead of the permissions
being controlled by the rules defined in nova's policy.json.
Change-Id: Ide3c9ec2fa674b4fe3ea9d935cd4f7848914b82e
Closes-Bug: 1445335
(cherry picked from commit ced60b1d1b1608dc8229741b207a95498bc0b212)
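The difference between a hard-coded admin check and a rule-driven check can be modelled with a toy policy table. This is not Nova's actual policy engine; the rule name mirrors the flavor-manage extension but both it and the helper are assumptions for illustration.

```python
# Toy policy check: the fix consults rules loaded from policy.json
# instead of unconditionally requiring an admin context.

POLICY_RULES = {
    "compute_extension:flavormanage": "rule:admin_api",
}

def is_authorized(context_roles, action, rules=POLICY_RULES):
    """'rule:admin_api' requires the admin role; an empty rule string
    allows everyone. Unknown actions default to admin-only."""
    rule = rules.get(action, "rule:admin_api")
    if rule == "":
        return True
    return "admin" in context_roles
```

The point of the fix is that an operator can now loosen the rule in policy.json (here, the empty-string rule) without patching the code.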
Manual import of Translations from Transifex. This change also removes
all po files that are less than 66 per cent translated since such
partially translated files will not help users.
This update also recreates all pot files (translation source files) to
reflect the state of the repository.
This change needs to be done manually since the automatic import does
not handle the proposed branches and we need to sync with latest
translations.
Change-Id: I0e9ef00182a2229602d23b8a67a02f0be62ee239
Change-Id: I5d4acd36329fe2dccb5772fed3ec55b442597150
Properly retrieve requests from pci_requests in consume_from_instance.
Without this, the call to numa_fit_instance_to_host will fail because
it expects the request list.
Also change the order in which apply_requests and
numa_fit_instance_to_host are called: calling apply_requests first will
remove devices from pools and may make numa_fit_instance_to_host fail.
Change-Id: I41cf4e8e5c1dea5f91e5261a8f5e88f46c7994ef
Closes-bug: #1444021
(cherry picked from commit 0913e799e9ce3138235f5ea6f80159f468ad2aaa)
stable/kilo
InstancePCIRequests.obj_from_db assumes it's called with a dict of
values from the instance_extra table, but in some cases it's called
with just the value of the pci_requests column.
This changes obj_from_db to be used with just the value of the
pci_requests column.
Change-Id: I7bed733c845c365081719a70b8a2f0cc9a58370c
Closes-bug: #1445040
(cherry picked from commit a074d7b4465b45730a5171e024c5c39a66a9c927)
Without this field, PciDevicePool.from_dict will treat the numa_node
key in the dict as a tag, which in turn means that the scheduler client
will drop it when converting stats to objects before reporting.
Converting the stats back to dicts on the scheduler side will thus not
have access to the numa_node information, which would cause any request
that looks for an exact match between the device and instance NUMA
nodes in the NUMATopologyFilter to fail.
Closes-Bug: #1441169
(cherry picked from commit 7db1ebc66c59205f78829d1e9cd10dcc1201d798)
Conflicts:
nova/tests/unit/objects/test_objects.py
Change-Id: I7381f909620e8e787178c0be9a362f8d3eb9ff7d
This patch narrows down the race window between the filter running and
the consumption of resources from the instance after the host has been
chosen.
It does so by re-calculating the fitted NUMA topology just before consuming it
from the chosen host. Thus we avoid any locking, but also make sure that
the host_state is kept as up to date as possible for concurrent
requests, as there is no opportunity for switching threads inside a
consume_from_instance.
Several things worth noting:
* The scheduler being lock-free (and thus racy) does not affect
resources other than PCI and NUMA topology this badly; this is due
to the complexity of those resources. For scheduler decisions about
those two not to be based on basically guessing, we will likely need
to introduce either locking or special heuristics.
* There is a lot of repeated code between the 'consume_from_instance'
method and the actual filters. This situation should really be fixed but
is out of scope for this bug fix (which is about preventing valid
requests failing because of races in the scheduler).
Change-Id: If0c7ad20506c9dddf4dec1eb64c9d6dd4fb75633
Closes-bug: #1438238
(cherry picked from commit d6b3156a6c89ddff9b149452df34c4b32c50b6c3)
into stable/kilo
The @errors_out_migration decorator is used on the resize_instance and
finish_resize methods of the ComputeManager class.
It is decorated via @utils.expects_func_args('migration') to check
that 'migration' is a parameter of the decorated method; however, that
only ensures there is a migration argument, not whether it arrives in
args or kwargs (either is fine as far as expects_func_args checks).
The errors_out_migration decorator can get a KeyError when checking
kwargs['migration'] and fails to set the migration status to 'error'.
This fixes the KeyError in the decorator by normalizing the args/kwargs
list into a single dict that we can pull the migration from.
Change-Id: I774ac9b749b21085f4fbcafa4965a78d68eec9c7
Closes-Bug: 1444300
(cherry picked from commit 3add7923fc16c050d4cfaef98a87886c6b6a589c)
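The normalization step described above can be sketched with `inspect.signature(...).bind(...)`, which folds positional and keyword arguments into one mapping. This is a simplified model of the decorator, not Nova's exact implementation; the fake classes exist only to exercise it.

```python
# Bind args and kwargs into a single mapping so 'migration' can be
# found however the wrapped method was called.
import functools
import inspect

def errors_out_migration(function):
    """Set migration.status = 'error' if the wrapped method raises."""
    @functools.wraps(function)
    def decorated(self, context, *args, **kwargs):
        try:
            return function(self, context, *args, **kwargs)
        except Exception:
            # No KeyError here even when migration arrived positionally:
            bound = inspect.signature(function).bind(
                self, context, *args, **kwargs)
            bound.arguments['migration'].status = 'error'
            raise
    return decorated

class FakeMigration:
    status = 'queued'

class Manager:
    @errors_out_migration
    def resize_instance(self, context, instance, migration):
        raise RuntimeError('boom')
```

Calling `Manager().resize_instance('ctxt', 'inst', migration)` with all-positional arguments now still flips the migration to 'error' before re-raising.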
The resource tracker calculates its used resources. In certain cases
involving failed migrations and a deleted instance, the resource
tracker raises an exception in nova-compute. If this situation arises,
nova-compute may not even be able to restart.
Change-Id: I4a154e0cae3b8e22bd59ed05ba708e07eed8dea7
Closes-bug: #1444439
(cherry picked from commit ee7a7446cc6947a6bacacb6cb514934cc22e5782)
As described in the api-microversions nova-spec, the versions API
needs to expose the minimum and maximum microversions, because clients
need to discover the available microversions through the API. That is
very important for interoperability.
This patch adds these versions as the nova-spec describes.
Note:
Under the v2 (not v2.1) API convention, we have added a new extension
whenever changing the API. However, this patch doesn't add a new
extension even though it adds the new parameters "version" and
"min_version", because the versions API is independent of both the v2
and v2.1 APIs.
Change-Id: Id464a07d624d0e228fe0aa66a04c8e51f292ba0c
Closes-Bug: #1443375
(cherry picked from commit 1830870718fe7472b47037f3331cfe59b5bdda07)
(cherry picked from commit 853671e912c6ad9a4605acad2575417911875cdd)
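The shape of the change can be sketched as a versions-document entry gaining the two new fields. Only "version" and "min_version" are taken from the commit text; the other field names, the helper, and all values are illustrative assumptions.

```python
# Sketch of a versions-API entry carrying microversion bounds.
import json

def build_version_entry(version_id, min_microversion, max_microversion):
    return {
        "id": version_id,
        "status": "CURRENT",           # illustrative field
        "min_version": min_microversion,
        "version": max_microversion,
    }

entry = build_version_entry("v2.1", "2.1", "2.3")
document = json.dumps({"versions": [entry]})
```

A client can now read both bounds from the document and pick a microversion it and the server both support.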
stable/kilo
The args were passed to the compute manager method in the wrong order.
We noticed this in the gate with KeyError: 'uuid' in the logs because of
the LOG.debug statement in change_instance_metadata. Just use kwargs
like rpcapi would normally.
There isn't a unit test for this since the v4 proxy code goes away in
Liberty; this is for getting the fix into stable/kilo.
Closes-Bug: #1444728
Change-Id: Ic988f48d99e626ee5773c97904e09dbf00c5414a
(cherry picked from commit e55f746ea8590cce7c2b07a023197f369251a7ef)
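The failure mode and the fix can be shown with a toy stand-in for the method: positional arguments in the wrong order swap parameters silently, while keyword arguments are immune to ordering. The signature below is a simplified model of change_instance_metadata, not Nova's code.

```python
def change_instance_metadata(context, instance, diff):
    # The real method logged instance['uuid']; with swapped positional
    # args that lookup is what raised KeyError: 'uuid' in the gate.
    return instance['uuid'], diff

instance = {'uuid': 'abc-123'}
diff = {'key': ['+', 'value']}

# Keyword arguments work regardless of the order they are written in:
result = change_instance_metadata(context='ctxt', diff=diff,
                                  instance=instance)

# The bug: the same values passed positionally in the wrong order.
swapped_failed = False
try:
    change_instance_metadata('ctxt', diff, instance)
except KeyError:
    swapped_failed = True
```

This is why the fix simply switched the proxy to kwargs, matching what rpcapi normally does.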
stable/kilo
|
| |/ /
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
When rebooting a compute host, guest VMs can be shut down
automatically by the hypervisor, and the virt driver sends events to
the compute manager to handle them. If the compute service is still up
while this happens it will try to call the stop API to power off the
instance and update the database to show the instance as stopped.
When the compute service comes back up and events come in from the virt
driver that the guest VMs are running, nova will see that the vm_state
on the instance in the nova database is STOPPED and shut down the
instance by calling the stop API (basically ignoring what the virt
driver / hypervisor tells nova is the state of the guest VM).
Alternatively, if the compute service shuts down after changing the
instance task_state to 'powering-off' but before the stop API cast is
complete, the instance can be in a strange vm_state/task_state
combination that requires the admin to manually reset the task_state to
recover the instance.
Let's just try to avoid some of this mess by disconnecting the event
handling when the compute service is shutting down like we do for
neutron VIF plugging events. There could still be races here if the
compute service is shutting down after the hypervisor (e.g. libvirtd),
but this is at least a best attempt to mitigate the potential
damage.
Closes-Bug: #1444630
Related-Bug: #1293480
Related-Bug: #1408176
Change-Id: I1a321371dff7933cdd11d31d9f9c2a2f850fd8d9
(cherry picked from commit d1fb8d0fbdd6cb95c43b02f754409f1c728e8cd0)
stable/kilo
Currently, it's possible to boot an instance from a QCOW2 image
whose virtual_size is bigger than the one allowed by the given flavor
(root_gb).
The issue is caused by two different problems in the code:
1) a typo in get_disk_size() made it always return None and
effectively disabled the verify_base_size() checks
2) the Rbd image backend skips the verify_base_size() step for
'cached' images (the ones with base files), so it is possible to
boot an instance using a larger flavor once and then use smaller
flavors to boot the same image, even if the allowed root_gb size is
smaller than the image virtual size
Closes-Bug: #1429093
Change-Id: I383130e5f8cc288f4b428ed43fe4d3aba7169473
(cherry picked from commit c1f9ed27af64e6893d9d0153a964df5aba99b8f0)
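The size check the typo had disabled amounts to a simple comparison between the image's virtual size and the flavor's root disk allowance. The helper below is a hedged sketch of that logic; the function name echoes verify_base_size but the signature and the root_gb=0 convention are illustrative assumptions.

```python
# Compare an image's virtual size against the flavor's root_gb limit.
GiB = 1024 ** 3

def verify_base_size(virtual_size_bytes, root_gb):
    """Raise if the image cannot fit in the flavor's root disk.
    root_gb of 0 is treated as 'no limit' (illustrative assumption)."""
    if root_gb and virtual_size_bytes > root_gb * GiB:
        raise ValueError(
            "image virtual size %d exceeds flavor root_gb %d"
            % (virtual_size_bytes, root_gb))
```

With get_disk_size() returning None, the comparison above was never reached, which is how oversized images slipped through.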
Commit ebfa09fa197a1d88d1b3ab1f308232c3df7dc009 added an RPC proxy,
but as part of that it was passing migrate_data=None for
pre_live_migration, which breaks live block migration when not using
shared storage.
Closes-Bug: #984996
Change-Id: I2a83f1fb0e4468f9a6c67a188af725c3406139d1
(cherry picked from commit 4e515ec2269a1c3187ee9ffad3a6be059ec74b0b)
stable/kilo
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
The order of arguments passed to ComputeManager.live_migration()
differs between the ComputeManager and _ComputeV4Proxy classes.
Change-Id: I23c25d219e9cdd0673ae6a12250219680fb7bda9
Closes-Bug: #1442656
(cherry picked from commit ba521fa53711774e0718808fe333aca676de57ae)
The class DatastorePath was recently removed from ds_util as it is
available in oslo.vmware. One of the references was missed during
the refactor.
Change-Id: Idc5825c304a99e83cbf36e93751148d6f995131a
Closes-Bug: #1440968
(cherry picked from commit ab4a5a5300179a79f7a67688f0e9f3fc280c0efa)
https://bugs.launchpad.net/nova/+bug/1377958 fixed a problem where
source_type: image, destination_type: local is not supported when
booting an instance; an exception should be raised to reject the
parameter, otherwise the instance will end up in the ERROR state.
However, that fix introduced a problem in the nova client:
https://bugs.launchpad.net/python-novaclient/+bug/1418484
The fix made the following command invalid:
nova boot test-vm --flavor m1.medium --image centos-vm-32
--nic net-id=c3f40e33-d535-4217-916b-1450b8cd3987 --block-device
id=26b7b917-2794-452a-95e5-2efb2ca6e32d,bus=sata,source=volume,bootindex=1
So we need to relax the original constraint to let the above special
case pass the validation check; then we can revert the nova client
exception (https://review.openstack.org/#/c/165932/).
This patch checks the boot_index and whether the image parameter is
given once it has found that the bdm has source_type: image and
destination_type: local; if this is the special case, no exception is
raised.
Closes-Bug: #1433609
Change-Id: If43faae95169bc3864449a8364975f5c887aac14
(cherry picked from commit cadbcc440a2fcfb8532f38111999a06557fbafc2)
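The relaxed check can be modelled as a small validator: reject image-to-local mappings except when the mapping is the boot image itself. The field names follow the block device mapping format, but the helper, its signature, and the exact special-case condition are illustrative assumptions rather than Nova's code.

```python
def validate_bdm(bdm, image_uuid):
    """Reject source_type=image / destination_type=local mappings,
    except the special case of the boot image at boot_index 0."""
    if (bdm.get('source_type') == 'image'
            and bdm.get('destination_type') == 'local'):
        if bdm.get('boot_index') == 0 and bdm.get('uuid') == image_uuid:
            return  # the special case: the image the instance boots from
        raise ValueError('source_type=image with destination_type=local '
                         'is only allowed for the boot image')
```

Under this model, the mapping that novaclient generates for `--image` passes, while other image-to-local mappings are still rejected.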
stable/kilo
There was a mismatch in the V4 proxy in the call signatures of this
function. This was missed because the "destination" parameter is passed
in the rpcapi as the host to contact, which is consumed by the rpc
layer and not passed on. Since the parameter was not given one of the
standard names (host if it is not to be passed on, or host_param if it
is), the mismatch went unnoticed.
Change-Id: Idf2160934dade650ed02b672f3b64cb26247f8e6
Closes-Bug: #1442602
(cherry picked from commit 0c08f7f2ef070f7c6172d7742f9789e0a8bda91a)
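The naming convention at issue can be illustrated with a toy client: a parameter named for routing is consumed by the RPC layer, while every other keyword is forwarded to the remote method. The class and method names here are assumptions for illustration, not oslo.messaging or Nova APIs.

```python
class FakeRPCClient:
    """Toy model: 'host' routes the message and is consumed by the RPC
    layer; only the remaining kwargs reach the server-side method."""

    def __init__(self):
        self.calls = []

    def call(self, method, host, **kwargs):
        self.calls.append((host, method, kwargs))
        return kwargs   # stand-in for what the server-side method sees

client = FakeRPCClient()
# 'destination' is forwarded because it is not the routing parameter:
payload = client.call('check_can_live_migrate', host='compute-1',
                      destination='compute-2')
```

Had the forwarded parameter been named `host`, it would have been swallowed by the routing layer, which is exactly why the non-standard name "destination" let the signature mismatch go unnoticed.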
Change-Id: I6356513ac42b79402dbde8ee5e75cbbd1aee7eef
The nova api for creating nova-network networks has an optional
request parameter "id" which maps to the string uuid for the
network to create. The nova-manage network create command represents
it as the option --uuid. The parameter is currently being ignored
by the nova-network manager. This change sets the uuid when creating
the network if it has been specified.
Closes-Bug: #1441931
Change-Id: Ib29e632b09905f557a7a6910d58207ed91cdc047
The network manager create_networks function is on the verge of the
"too complex" pep8 check error, so this change refactors the cidr
validation code out into a private function in preparation for
fixing a bug.
Change-Id: Id0603ddd642acccfa12ae53a52ecfb66dca53702
This patch creates compute RPC API version 4.0, while retaining
compatibility in rpcapi and manager for 3.x, allowing for
continuous deployment scenarios.
UpgradeImpact - Deployments doing continuous deployment should follow this
process to upgrade without any downtime:
1) Set [upgrade_levels] compute=kilo in your config.
2) Upgrade to this commit.
3) Once everything has been upgraded, remove the entry in
[upgrade_levels] so that all rpc clients to the nova-compute service
start sending the new 4.0 messages.
Change-Id: Id96e77c739e7473774e110646204520d6163d8a5
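Step 1 of the upgrade process above is a one-line configuration change; a minimal nova.conf fragment, using the section and option named in the commit, might look like this:

```ini
[upgrade_levels]
# Pin nova-compute RPC messages to the Kilo-compatible interface until
# every node runs the new code; remove this after the upgrade completes.
compute = kilo
```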
This simply draws a line in the sand for the version of compute RPC that
kilo supports for use during upgrades by operators.
Change-Id: I29103861e2e333cc877204cca62e2f26d3fafe66