Nova can occasionally fail to carry out server actions that require
calls to the neutron API if haproxy closes a connection after idle
time just as an incoming request attempts to re-use that connection
while it is being torn down.
In order to be more resilient to this scenario, we can add a config
option for neutron client to retry requests, similar to our existing
CONF.cinder.http_retries and CONF.glance.num_retries options.
Because we create our neutron client [1] using a keystoneauth1 session
[2], we can set the 'connect_retries' keyword argument to let
keystoneauth1 handle connection retries.
Conflicts:
nova/conf/neutron.py
NOTE(s10): conflict is due to Id7c2f0b53c8871ff47a836ec4c324c8cce430b79
not being in Queens.
Closes-Bug: #1866937
[1] https://github.com/openstack/nova/blob/57459c3429ce62975cebd0cd70936785bdf2f3a4/nova/network/neutron.py#L226-L237
[2] https://docs.openstack.org/keystoneauth/latest/api/keystoneauth1.session.html#keystoneauth1.session.Session
Change-Id: Ifb3afb13aff7e103c2e80ade817d0e63b624604a
(cherry picked from commit 0e34ed9733e3f23d162e3e428795807386abf1cb)
(cherry picked from commit 71971c206292232fff389dedbf412d780f0a557a)
(cherry picked from commit a96e7ab83066bee0a13a54070aab988396bae320)
(cherry picked from commit 6bed39cd6e77744ca4bbad9dba3f68c78cdc1ec0)
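As a sketch, the new retry option would be set in nova.conf roughly as
follows; the option name `http_retries` under `[neutron]` is assumed
here from the `CONF.cinder.http_retries` precedent mentioned above:

```ini
# Hypothetical nova.conf fragment; verify the option name in your release.
[neutron]
# Number of times keystoneauth1 should retry a failed connection to the
# neutron API (passed through as the session's connect_retries).
http_retries = 3
```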
Live migration is currently totally broken if a NUMA topology is
present. This affects everything that's been regrettably stuffed in with
NUMA topology including CPU pinning, hugepage support and emulator
thread support. Side effects can range from simple unexpected
performance hits (due to instances running on the same cores) to
complete failures (due to instance cores or huge pages being mapped to
CPUs/NUMA nodes that don't exist on the destination host).
Until such a time as we resolve these issues, we should alert users to
the fact that such issues exist. A workaround option is provided for
operators that _really_ need the broken behavior, but it's defaulted to
False to highlight the brokenness of this feature to unsuspecting
operators.
Conflicts:
nova/conf/workarounds.py
nova/tests/unit/api/openstack/compute/admin_only_action_common.py
nova/tests/unit/api/openstack/compute/test_migrate_server.py
nova/tests/unit/conductor/tasks/test_live_migrate.py
NOTE(stephenfin): stable/rocky conflicts due to removal of
'report_ironic_standard_resource_class_inventory' option and addition of
change Iaea1cb4ed93bb98f451de4f993106d7891ca3682 on master.
NOTE(stephenfin): stable/queens conflicts due to presence of
the 'enable_consoleauth' configuration option and change
I83b473e9ba557545b5c186f979e068e442de2424 (Mox to mock) in stable/rocky.
A hyperlink is removed from the config option help text as the version
of 'oslo.config' used here does not parse help text as rST (bug 1755783).
Change-Id: I217fba9138132b107e9d62895d699d238392e761
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
Related-bug: #1289064
(cherry picked from commit ae2e5650d14a2c81dd397727d67b60f9b8dd0dd7)
(cherry picked from commit 52b89734426253f64b6d4797ba4d849c3020fb52)
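As a sketch, the workaround described above would be enabled in
nova.conf roughly as follows; the option name
`enable_numa_live_migration` is assumed from the `[workarounds]` group
referenced in the conflicts:

```ini
# Hypothetical nova.conf fragment; verify the option name in your release.
[workarounds]
# Opt back in to the (broken) pre-existing behaviour of live migrating
# instances with a NUMA topology. Defaults to False.
enable_numa_live_migration = True
```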
This is a follow up to I43a23a3290db0c835fed01b8d6a38962dc61adce
which makes the cpu/disk/ram_allocation_ratio config "sticky" in
that once set to a non-default value, it is not possible to reset
back to the default behavior (when config is 0.0) on an existing
compute node record by unsetting the option from nova.conf. To
reset back to the defaults, the non-0.0 default would have to be
explicitly put into config, so cpu_allocation_ratio=16.0 for example.
Alternatively operators could delete the nova-compute service
record via the DELETE /os-services/{service_id} REST API and
restart the nova-compute service to get a new compute_nodes record,
but that workaround is messy and left undocumented in config.
Change-Id: I908615d82ead0f70f8e6d2d78d5dcaed8431084d
Related-Bug: #1789654
(cherry picked from commit c45adaca5dd241408f1e29b657fe6ed42c908b8b)
(cherry picked from commit a039f8397702d15718ebcec0fdb9cfeb6155f6a1)
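For example, to "un-stick" an existing compute_nodes record, the
non-0.0 defaults would have to be restated explicitly in nova.conf
(values below are the documented defaults of that era):

```ini
[DEFAULT]
cpu_allocation_ratio = 16.0
ram_allocation_ratio = 1.5
disk_allocation_ratio = 1.0
```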
At present all virt drivers provide a cleanup method that takes a single
destroy_disks boolean to indicate when the underlying storage of an
instance should be destroyed.
When cleaning up after an evacuation or revert resize the value of
destroy_disks is determined by the compute layer calling down both into
the check_instance_shared_storage_local method of the local virt driver
and remote check_instance_shared_storage method of the virt driver on
the host now running the instance.
For the Libvirt driver the initial local call will return None when
using the shared block RBD imagebackend as it is assumed all instance
storage is shared resulting in destroy_disks always being False when
cleaning up. This behaviour is wrong as the instance disks are stored
separately from the instance directory, which still needs to be cleaned
up on the host. Additionally, this directory could also be shared
independently of the disks, on an NFS share for example, and would need
to be checked before removal as well.
This change introduces a backportable workaround configurable for the
Libvirt driver with which operators can ensure that the instance
directory is always removed during cleanup when using the RBD
imagebackend. When enabling this workaround operators will need to
ensure that the instance directories are not shared between computes.
Future work will allow for the removal of this workaround by separating
the shared storage checks from the compute to virt layers between the
actual instance disks and any additional storage required by the
specific virt backend.
NOTE(lyarwood): Conflicts as If1b6e5f20d2ea82d94f5f0550f13189fc9bc16c4
only merged in Rocky and the backports of
Id3c74c019da29070811ffc368351e2238b3f6da5 and
I217fba9138132b107e9d62895d699d238392e761 have yet to land on
stable/queens from stable/rocky.
Conflicts:
nova/conf/workarounds.py
Related-Bug: #1761062
Partial-Bug: #1414895
Change-Id: I8fd6b9f857a1c4919c3365951e2652d2d477df77
(cherry picked from commit d6c1f6a1032ed2ea99f3d8b70ccf38065163d785)
(cherry picked from commit 8c678ae57299076a5013f0be985621e064acfee0)
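A sketch of enabling the workaround in nova.conf; the option name
`ensure_libvirt_rbd_instance_dir_cleanup` is assumed from this commit
message and should be verified against your release:

```ini
[workarounds]
# Always remove the local instance directory during cleanup when the
# RBD imagebackend is in use. Only safe when instance directories are
# NOT shared between compute hosts.
ensure_libvirt_rbd_instance_dir_cleanup = True
```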
Add secret=true to fixed_key configuration parameter as that value
shouldn't be logged.
Change-Id: Ie6da21e8680b2deb6b1da3add31cd725ba855c1c
Closes-Bug: #1806471
(cherry picked from commit 37a036672e459b0d83b7b91120c8ec40e3759190)
(cherry picked from commit 7ef3304b12e2a6a89cfedc99e09b214ae6de3d7a)
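With secret=true, oslo.config masks the value (rendering it as '***')
when option values are logged at service startup. The affected setting,
sketched below with a placeholder value (group name may vary by
release):

```ini
[key_manager]
# Hex-encoded key material; marked secret=true so it is masked in logs.
fixed_key = <64-character-hex-key>
```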
This adds a new config option which is read on the source host
during pre_live_migration which can be used to determine
if it should wait for a "network-vif-plugged" event due to VIFs
being plugged on the destination host. This helps us avoid the guest
transfer entirely if VIF plugging fails on the destination host, which
we otherwise would not find out until post live migration, at which
point we have to roll back.
The option is disabled by default for backward compatibility and
also because certain networking backends, like OpenDaylight, are
known to not send network-vif-plugged events unless the port host
binding information changes, which for live migration doesn't happen
until after the guest is transferred to the destination host.
Related to blueprint neutron-new-port-binding-api
Related-Bug: #1786346
NOTE(danms): Stable-only changes to this patch from master include
removing the RPC-related communication from the destination
to the source node. As such, the new option is read on the source
node so the conf option help and release note are updated. This is
OK before Rocky since we don't claim support to live migrate
between different networking backends (vif types), so operators
would need to set the option universally, or at least have host
aggregates in place if they are using different network types.
Conflicts:
nova/conf/compute.py
nova/tests/unit/objects/test_migrate_data.py
Change-Id: I0f3ab6604d8b79bdb75cf67571e359cfecc039d8
(cherry picked from commit 5aadff75c3ac4f2019838600df6580481a96db0f)
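A sketch of enabling this in nova.conf on the source host; the option
name `live_migration_wait_for_vif_plug` is assumed from the master
change this was backported from:

```ini
[compute]
# Wait for a "network-vif-plugged" event from neutron before starting
# the guest transfer during live migration. Disabled by default.
live_migration_wait_for_vif_plug = True
```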
Update the whitelist for the latest new CPU flags for mitigation
of recent security issues.
Change-Id: I8686a4755777c8c720c40d4111cc469676d2a5fd
Closes-Bug: #1777460
There is concern over the ability for compute nodes to reasonably
determine which events should count against its consecutive build
failures. Since the compute may erroneously disable itself in
response to mundane or otherwise intentional user-triggered events,
this patch adds a scheduler weigher that considers the build failure
counter and can negatively weigh hosts with recent failures. This
avoids taking computes fully out of rotation, rather treating them as
less likely to be picked for a subsequent scheduling
operation.
This introduces a new conf option to control this weight. The default
is set high to maintain the existing behavior of picking nodes that
are not experiencing high failure rates, and resetting the counter as
soon as a single successful build occurs. This results in minimal
visible change from the existing behavior with the default configuration.
The rationale behind the default value for this weigher comes from the
values likely to be generated by its peer weighers. The RAM and Disk
weighers will increase the score by number of available megabytes of
memory (range in thousands) and disk (range in millions). The default
value of 1000000 for the build failure weigher will cause competing
nodes with similar amounts of available disk and a small (less than ten)
number of failures to become less desirable than those without, even
with many terabytes of available disk.
Change-Id: I71c56fe770f8c3f66db97fa542fdfdf2b9865fb8
Related-Bug: #1742102
(cherry picked from commit 91e29079a0eac825c5f4fe793cf607cb1771467d)
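The arithmetic described above can be sketched as follows. This is an
illustrative model of how a very large multiplier interacts with the
RAM/disk weighers, not Nova's actual weigher code, and the function
name is invented:

```python
# Illustrative sketch of the build-failure weigher arithmetic described
# above; NOT Nova's actual implementation.
BUILD_FAILURE_MULTIPLIER = 1_000_000  # default value cited in the text


def host_score(free_disk_mb: int, free_ram_mb: int, recent_failures: int) -> float:
    # RAM and disk weighers add roughly their free megabytes; the build
    # failure weigher subtracts multiplier * recent failure count.
    return free_disk_mb + free_ram_mb - BUILD_FAILURE_MULTIPLIER * recent_failures


# Two hosts with similar free space (~10 TiB of disk): a handful of
# recent build failures makes one far less desirable than the other,
# without removing it from rotation entirely.
healthy = host_score(10 * 1024 * 1024, 256 * 1024, 0)
failing = host_score(10 * 1024 * 1024, 256 * 1024, 3)
assert healthy > failing
```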
This adds two other flags to the whitelist of available options to
the cpu_model_extra_flags variable related to further variants of
Meltdown/Spectre recently published.
Related-Bug: #1750829
Change-Id: I72085016c8756ff88a4da722368f62359bcd7080
When using ImagePropertiesFilter with multiple architectures inside the
same deployment, it is possible that images can be uploaded without the
hw_architecture property defined.
This results in behaviour where the instance could be scheduled on any
type of hypervisor, resulting in an instance that will successfully
transition to ACTIVE but never properly run because of the difference
in architecture.
This makes the ImagePropertiesFilter problematic, as most images are
uploaded without the architecture property set because most
documentation does not encourage setting it.
The addition of this flag makes using the filter practical, because it
allows the deployer to assume a default architecture when the user did
not supply one (presumably the most common architecture in their
deployment, such as x86_64), while a user who wants a more specific
architecture can still set it in their image properties.
In order to avoid a circular import loop, the references to the
architecture field have been moved to a separate module so that they
can be cleanly imported by the configuration code.
Change-Id: Ib52deb095028e93619b93ef9e5f70775df2a403a
Closes-Bug: #1769283
(cherry picked from commit aa5b1326c86c408ce9cc4546e1c7a310fbce3136)
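A sketch of the resulting nova.conf setting; the option name
`image_properties_default_architecture` is assumed from this commit
message:

```ini
[filter_scheduler]
# Architecture assumed for images uploaded without hw_architecture set.
image_properties_default_architecture = x86_64
```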
The provider-tree refresh in the SchedulerReportClient() instance
of each compute node happens every five minutes as it is hard coded.
This patch adds this update interval as a new config option which
can be set/changed on each compute node.
Conflicts:
nova/conf/compute.py - because 197539d7a050042463802f6ece98473bbbf9743b
is missing in Queens.
nova/scheduler/client/report.py - because f05e6279d092f9d53291ddf69c99a71bfe3989bf
is missing in Queens.
Change-Id: I00f92aac44d7b0169f94940ef389796c782b0cc1
Closes-Bug: #1767309
(cherry picked from commit 41d6b479fe8baf66578044b8773c3b892d64a2c4)
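A sketch of tuning this in nova.conf on a compute node; the option name
`resource_provider_association_refresh` is an assumption to be checked
against your release:

```ini
[compute]
# Interval, in seconds, between provider-tree refreshes in the compute
# node's SchedulerReportClient (previously hard coded to 300).
resource_provider_association_refresh = 600
```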
This adds information to the "notification_format" config
option help and notifications docs on how to disable notifications.
While updating the config option help text, a stale reference
to Pike is removed.
Change-Id: I736025a0a88fc969831558805687b642da8cd365
Closes-Bug: #1761405
(cherry picked from commit 11528966ba000addde5416f085332e0d39c46588)
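A sketch of the configuration the updated help text covers; disabling
notifications goes through the oslo.messaging noop driver:

```ini
[notifications]
notification_format = unversioned

[oslo_messaging_notifications]
# Stop emitting notifications entirely.
driver = noop
```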
The recent "Meltdown" CVE fixes have resulted in a critical performance
penalty[*] that will impact every Nova guest with certain CPU models.
I.e. assume you have applied all the "Meltdown" CVE fixes, and performed
a cold reboot (explicit stop & start) of all Nova guests, for the
updates to take effect. Now, if any guests that are booted with certain
named virtual CPU models (e.g. "IvyBridge", "Westmere", etc), then those
guests, will incur noticeable performance degradation[*], while being
protected from the CVE itself.
To alleviate this guest performance impact, it is now important to
specify an obscure Intel CPU feature flag, 'PCID' (Process-Context ID)
-- for the virtual CPU models that don't already include it (more on
this below). To that end, this change will allow Nova to explicitly
specify CPU feature flags via a new configuration attribute,
`cpu_model_extra_flags`, e.g. in `nova.conf`:
...
[libvirt]
cpu_mode = custom
cpu_model = IvyBridge
cpu_model_extra_flags = pcid
...
NB: In the first iteration, the choice for `cpu_model_extra_flags` is
restricted to only 'pcid' (the option is case-insensitive) -- to address
the earlier mentioned guest performance degradation. A future patch
will remove this restriction, allowing the addition / removal of
multiple CPU feature flags and thus making way for other useful features.
Some have asked: "Why not simply hardcode the 'PCID' CPU feature flag
into Nova?" That's not graceful, and more importantly, impractical:
(1) Not every Intel CPU model has 'PCID':
- The only Intel CPU models that include the 'PCID' capability
are: "Haswell", "Broadwell", and "Skylake" variants.
- The libvirt / QEMU Intel CPU models: "Nehalem", "Westmere",
"SandyBridge", and "IvyBridge" will *not* expose the 'PCID'
capability, even if the host CPUs by the same name include it.
I.e. 'PCID' needs to be explicitly specified when using the said
virtual CPU models.
(2) Magically adding new CPU feature flags under the user's feet
impacts live migration.
[*] https://groups.google.com/forum/m/#!topic/mechanical-sympathy/L9mHTbeQLNU
Closes-Bug: #1750829
Change-Id: I6bb956808aa3df58747c865c92e5b276e61aff44
(cherry picked from commit 6b601b7cf6e7f23077f428353a3a4e81084eb3a1)
The `enabled_vgpu_types` flag is set in nova.conf to select the
enabled vGPU type(s) for the current host. The example given was wrong;
modify it to avoid misleading users.
Change-Id: If7a150d4d28609cabeb5f513bbd134a969fca17c
Closes-Bug: #1746217
(cherry picked from commit 8e687ab4f85561e31e4da06d89e9505179abc127)
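A corrected example in the spirit of the fix might read as follows (the
type name is illustrative; the valid values come from the hypervisor):

```ini
[devices]
# A vGPU type exposed by this host, e.g. an NVIDIA GRID type.
enabled_vgpu_types = nvidia-35
```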
The help text for the scheduler's max_attempts config option is
corrected/clarified.
Change-Id: Ia0f38ac1e1b0385981c4171d553330d0419a4dc8
This makes us pass an upper limit to placement when doing scheduling
activities. Without this, we'll always receive every host in the
deployment (that has space for the request), which may be very large.
Closes-Bug: #1746294
Change-Id: I1c34964a74123b3d94ccae89d7cac0426b57b9b6
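A sketch of the resulting nova.conf knob; the option name
`max_placement_results` is an assumption based on this commit message:

```ini
[scheduler]
# Upper bound on the number of allocation candidates requested from
# placement per scheduling operation.
max_placement_results = 1000
```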
Only two options are currently supported. Let's hardcode this behavior.
Change-Id: I7609f408294ca151db9b7ff43d8c7bf6a4638a94
When reclaim_instance_interval > 0 and an instance booted from a
volume with `delete_on_termination` set to true is deleted, then after
reclaim_instance_interval passes, the boot volume is left in the
'attached' and 'in-use' state even though the attached instance has
been deleted.
This happens because the admin context from
`nova.compute.manager._reclaim_queued_deletes` does not carry any
token info, so calls to the cinder API fail.
So add user/project CONF options with the admin role to the cinder
group and, when the context is_admin but has no token, authenticate
with that user/project info in order to call the cinder API.
Change-Id: I3c35bba43fee81baebe8261f546c1424ce3a3383
Closes-Bug: #1733736
Closes-Bug: #1734025
Partial-Bug: #1736773
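A sketch of the credentials this adds to the cinder group; the option
names follow standard keystoneauth1 conventions and the values below
are purely illustrative:

```ini
[cinder]
# Admin-role credentials used when the token-less admin context from
# _reclaim_queued_deletes needs to call the cinder API.
auth_type = password
auth_url = http://keystone.example.com/identity
username = cinder_admin
password = secret
project_name = service
user_domain_name = Default
project_domain_name = Default
```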
The only in-tree implementation of the nova.image.download.modules
extension point was nova.image.download.file which was removed in
Pike: I7687cc89545a7a8b295dd6535b4ccebc913a2e0e
At the time of that removal, there was an operators ML thread
asking if anyone was using this code, or the extension point,
and the answer was no (or no answer at all).
Since we have no in-tree implementation of this extension point
and the extension point itself is not maintained or documented,
and even the TransferBase base class was removed in Pike, we
should deprecate the extension point and the configuration option
associated with its use so that we can simplify our internal
glance API client code.
Note that the libvirt Rbd image backend which does support
direct_url / image locations configuration for fast clones is
still supported and unrelated to this code.
Change-Id: I13162ebc9050dd2c468d0f8b969b96409f60afa8
This has been deprecated for a long time. It's time to go.
Change-Id: Ia3d88dd795b7049938265acc44465261df3fec36
Add multiattach support to libvirt driver, by updating the
xml configuration info if the multiattach support is turned
on for a volume and set the virt driver capability
'support_multiattach' to true. This capability is set to false
for all the other drivers.
Also the attach function in nova/virt/block_device.py is updated
to call out to Cinder in case of each attachment request for
multiattach volumes, which is needed for Cinder in order to track
all attachments for a volume and be able to detach properly.
Co-Authored-By: Matt Riedemann <mriedem.os@gmail.com>
Partially-implements: blueprint multi-attach-volume
Change-Id: I947bf0ad34a48e9182a3dc016f47f0c9f71c9d7b
Fix nits from the following reviews:
- https://review.openstack.org/#/c/345397/35
- https://review.openstack.org/#/c/345398/36
- https://review.openstack.org/#/c/345399/42
Change-Id: Iafa5b4432a85353a216da4d55e46e1eda30842e3
Blueprint: websocket-proxy-to-host-security
All image signature properties should not be inherited from the metadata
of the original image when creating a snapshot of an instance. Otherwise
Glance will attempt to verify the signature of the snapshot image and
fail as this has changed from that of the original.
Closes-bug: #1737513
Change-Id: Ia3d80bf2f81c7317fec117aecbc3c560d51a7d4e
Provide an implementation for the VeNCrypt RFB authentication
scheme, which uses TLS and x509 certificates to provide both
encryption and mutual authentication / authorization.
Based on earlier work by Solly Ross <sross@redhat.com>
Change-Id: I6a63d2535e86faf369ed1c0eeba6cb5a52252b80
Co-authored-by: Stephen Finucane <sfinucan@redhat.com>
Implements: bp websocket-proxy-to-host-security
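A sketch of enabling the scheme in nova.conf; the option names in the
`[vnc]` group are assumed from this blueprint and the certificate paths
are illustrative:

```ini
[vnc]
# Require VeNCrypt (TLS + x509) between the websocket proxy and the
# hypervisor's VNC server.
auth_schemes = vencrypt
vencrypt_client_key = /etc/pki/nova-novncproxy/client-key.pem
vencrypt_client_cert = /etc/pki/nova-novncproxy/client-cert.pem
vencrypt_ca_certs = /etc/pki/nova-novncproxy/ca-cert.pem
```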
Introduce a framework for providing RFB authentication scheme
implementations. This will be later used by the websocket RFB security
proxy code.
Change-Id: I98403ca922b83a460a4e7baa12bd5f596a79c940
Co-authored-by: Stephen Finucane <sfinucan@redhat.com>
Implements: bp websocket-proxy-to-host-security
The unit of 'token_ttl' is not clear in the help text
in nova/conf/consoleauth.py.
So add the unit (seconds) to the help text.
TrivialFix
Change-Id: Id6506b7462c303223bac8586e664e187cb52abd6
nova.network.neutronv2.api.get_client now uses the common
get_ksa_adapter utility to create an Adapter from common keystoneauth1
configuration options if the legacy [neutron] config option ``url`` is
not specified.
As part of blueprint use-ksa-adapter-for-endpoints, this provides a
consistent mechanism for endpoint communication from Nova.
Change-Id: I41724a612a5f3eabd504f3eaa9d2f9d141ca3f69
Partial-Implements: bp use-ksa-adapter-for-endpoints
Reviewing the Placement API, I noticed wrong OpenStack capitalization.
Fix the docs and some strings as well.
Change-Id: I14a2443687a0d517ece80e794e7ef0d4e165af6f
This adds a limit query parameter to GET
/allocation_candidates?limit=5&resources=VCPU:1
A 'limit' filter is added to the AllocationCandidates. If set, after
the database query has been run to create the allocation requests and
provider summaries, a slice or sample of the allocation requests is
taken to limit the results. The summaries are then filtered to only
include those in the allocation requests.
This method avoids needing to make changes to the generated SQL, the
creation of which is fairly complex, or the database tables. The amount
of data queried is still high in the extreme case, but the amount of
data sent over the wire (as JSON) is shrunk. This is a trade-off that
was discussed in the spec and the discussion surrounding its review.
If it turns out that memory use server-side is an issue we can
investigate changing the SQL.
A configuration setting, [placement]/randomize_allocation_candidates,
is added to allow deployers to declare whether they want the results
to be returned in whatever order the database chooses or a random
order. The default is "False" which is expected to preserve existing
behavior and impose a packing placement strategy.
When the config setting is combined with the limit parameter, if
"True" the limited results are a random sampling from the full
results. If "False", it is a slice from the front.
This is done as a new microversion, 1.16, with updates to docs, a reno
and adjustments to the api history doc.
Change-Id: I5f3d4f49c34fd3cd6b9d2e12b3c3c4cdcb409bec
Implements: bp allocation-candidates-limit
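The new setting described above, sketched as a nova.conf fragment:

```ini
[placement]
# When results are limited via GET /allocation_candidates?limit=N,
# return a random sample rather than a slice from the front. The
# default of False preserves a packing placement strategy.
randomize_allocation_candidates = True
```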
s/by retrieved/be retrieved
TrivialFix
Change-Id: I9a08940593dc5e74e62a749e3f7a4f628570ea4e
Signed-off-by: Chen Hanxiao <chenhx@certusnet.com.cn>
This patch adds new policies for PCI devices allocation. There are 3
policies:
- legacy - this is the default value and it describes the current nova
behavior. Nova will boot VMs with PCI device if the PCI device is
associated with at least one NUMA node on which the instance should
be booted or there is no information about PCI-NUMA association
- required - nova will boot VMs with PCI devices *only* if at least one
of the VM's NUMA node is associated with these PCI devices.
- preferred - nova will boot VMs using best effort NUMA affinity
bp share-pci-between-numa-nodes
Change-Id: I46d483f9de6776db1b025f925890624e5e682ada
Co-Authored-By: Stephen Finucane <stephenfin@redhat.com>
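A sketch of selecting a policy per PCI alias in nova.conf; the
`numa_policy` alias key is assumed from this blueprint, and the
vendor/product IDs are illustrative:

```ini
[pci]
# "numa_policy" accepts legacy | required | preferred.
alias = {"name": "a1", "vendor_id": "8086", "product_id": "1520", "numa_policy": "preferred"}
```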
This flag was deprecated in a previous cycle. The flag, the feature that
the flag enabled, and a related flag that was not deprecated but should
have been, can now be removed.
Change-Id: I6b38b4e69c0aace4b109f0d083ef665a5c032a15
s/for details/for details
TrivialFix
Change-Id: Ia9c419130f47834d52d6225f746a9216f830e931
Signed-off-by: Chen Hanxiao <chenhx@certusnet.com.cn>
This change does a couple of things:
(1) Describe what 'none' CPU model actually means. It means different
things based on different hypervisors; begin with by documenting
what it does for KVM / QEMU guests.
(2) Clarify related options `cpu_mode` and `cpu_model`. And avoid using
the confusing 'double negative' grammar construction.
Change-Id: I135db0fbe4a39096c1ed9bcc5faeb7a4cb6e2dac
s/sepcifies/specifies
TrivialFix
Change-Id: I08f01f2ebad2cace1e9aa2e707db8c9867614e79
Signed-off-by: Chen Hanxiao <chenhx@certusnet.com.cn>
This commit deprecates the config option and policy for hiding
server addresses. They are marked for removal.
Implement blueprint remove-configurable-hide-server-address-feature
Depends-On: I6aed4909b0e7efe9c95d1f7398db613eca05e5ce
Change-Id: I6040e8c2b3e132b0dfd09f82ae041b4786a63483
This adds the PowerVM driver to the possible options for
compute_driver.
Change-Id: I042d797954e0e7ab921f0528f9b898522458fe6d
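A sketch of selecting the new driver in nova.conf; the driver path is
assumed from the in-tree PowerVM driver naming:

```ini
[DEFAULT]
compute_driver = powervm.PowerVMDriver
```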