Diffstat (limited to 'releasenotes')
45 files changed, 582 insertions, 3 deletions
diff --git a/releasenotes/notes/add-instance-name-to-instance-create-notification-4c2f5eca9e574178.yaml b/releasenotes/notes/add-instance-name-to-instance-create-notification-4c2f5eca9e574178.yaml
new file mode 100644
index 0000000000..6fbd9abcae
--- /dev/null
+++ b/releasenotes/notes/add-instance-name-to-instance-create-notification-4c2f5eca9e574178.yaml
@@ -0,0 +1,9 @@
+---
+features:
+  - |
+    The field ``instance_name`` has been added to the
+    ``InstanceCreatePayload`` in the following versioned notifications:
+
+    * ``instance.create.start``
+    * ``instance.create.end``
+    * ``instance.create.error``
diff --git a/releasenotes/notes/add_initial_allocation_ratio-2d2666d62426a4bf.yaml b/releasenotes/notes/add_initial_allocation_ratio-2d2666d62426a4bf.yaml
new file mode 100644
index 0000000000..fdb30b7d57
--- /dev/null
+++ b/releasenotes/notes/add_initial_allocation_ratio-2d2666d62426a4bf.yaml
@@ -0,0 +1,27 @@
+---
+upgrade:
+  - |
+    The default values for the ``cpu_allocation_ratio``,
+    ``ram_allocation_ratio`` and ``disk_allocation_ratio`` configuration
+    options have been changed to ``None``.
+
+    The ``initial_cpu_allocation_ratio``, ``initial_ram_allocation_ratio``
+    and ``initial_disk_allocation_ratio`` configuration options have been
+    added to the ``DEFAULT`` group:
+
+    - ``initial_cpu_allocation_ratio`` with default value 16.0
+    - ``initial_ram_allocation_ratio`` with default value 1.5
+    - ``initial_disk_allocation_ratio`` with default value 1.0
+
+    These options allow operators to specify the initial virtual
+    CPU/RAM/disk to physical CPU/RAM/disk allocation ratios. They are only
+    used when initially creating the ``compute_nodes`` table record for a
+    given nova-compute service.
+
+    Existing ``compute_nodes`` table records with ``0.0`` or ``None`` values
+    for ``cpu_allocation_ratio``, ``ram_allocation_ratio`` or
+    ``disk_allocation_ratio`` will be migrated online when accessed or when
+    the ``nova-manage db online_data_migrations`` command is run.
+
+    For more details, refer to the `spec`__.
+
+    .. __: https://specs.openstack.org/openstack/nova-specs/specs/stein/approved/initial-allocation-ratios.html
diff --git a/releasenotes/notes/bp-handling-down-cell-10f76145d767300c.yaml b/releasenotes/notes/bp-handling-down-cell-10f76145d767300c.yaml
new file mode 100644
index 0000000000..81a78833c1
--- /dev/null
+++ b/releasenotes/notes/bp-handling-down-cell-10f76145d767300c.yaml
@@ -0,0 +1,12 @@
+---
+features:
+  - |
+    Starting with microversion 2.69, the responses of ``GET /servers``,
+    ``GET /servers/detail``, ``GET /servers/{server_id}`` and
+    ``GET /os-services`` may contain missing keys when a cell is down, as
+    these APIs now return minimal constructs based on the information
+    available in the API database for records in down cells. See
+    `Handling Down Cells`_ for more information on the missing keys.
+
+    .. _Handling Down Cells: https://developer.openstack.org/api-guide/compute/down_cells.html
\ No newline at end of file
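As a sketch of the ``initial_*_allocation_ratio`` options introduced in
``add_initial_allocation_ratio-2d2666d62426a4bf.yaml`` above; the values
shown are the documented defaults, so listing them explicitly is optional::

    [DEFAULT]
    # Only consulted when the compute_nodes record for this nova-compute
    # service is first created; later adjustments go through the
    # *_allocation_ratio options or the placement API.
    initial_cpu_allocation_ratio = 16.0
    initial_ram_allocation_ratio = 1.5
    initial_disk_allocation_ratio = 1.0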
diff --git a/releasenotes/notes/bug-1378904-disable-az-rename-b22a558a20b12706.yaml b/releasenotes/notes/bug-1378904-disable-az-rename-b22a558a20b12706.yaml
new file mode 100644
index 0000000000..788408ecc8
--- /dev/null
+++ b/releasenotes/notes/bug-1378904-disable-az-rename-b22a558a20b12706.yaml
@@ -0,0 +1,7 @@
+---
+fixes:
+  - |
+    ``PUT /os-aggregates/{aggregate_id}`` and
+    ``POST /os-aggregates/{aggregate_id}/action`` (for the ``set_metadata``
+    action) will now return HTTP 400 when renaming an availability zone if
+    any hosts in the aggregate have instances.
diff --git a/releasenotes/notes/bug-1414895-8f7d8da6499f8e94.yaml b/releasenotes/notes/bug-1414895-8f7d8da6499f8e94.yaml
new file mode 100644
index 0000000000..8fa6d12746
--- /dev/null
+++ b/releasenotes/notes/bug-1414895-8f7d8da6499f8e94.yaml
@@ -0,0 +1,24 @@
+---
+other:
+  - |
+    The ``[workarounds]/ensure_libvirt_rbd_instance_dir_cleanup``
+    configuration option has been introduced. This can be used by operators
+    to ensure that instance directories are always removed during cleanup
+    within the libvirt driver when using ``[libvirt]/images_type = rbd``.
+    This works around known issues such as `bug 1414895`_, when cleaning up
+    after an evacuation, and `bug 1761062`_, when reverting from an instance
+    resize.
+
+    Operators should be aware that this workaround only applies when using
+    the libvirt compute driver and the rbd images type, as enabled by the
+    following configuration options:
+
+    * ``[DEFAULT]/compute_driver = libvirt.LibvirtDriver``
+    * ``[libvirt]/images_type = rbd``
+
+    .. warning:: Operators will need to ensure that the instance directory
+       itself, specified by ``[DEFAULT]/instances_path``, is not shared
+       between computes before enabling this workaround, otherwise files
+       associated with running instances may be removed.
+
+    .. _bug 1414895: https://bugs.launchpad.net/nova/+bug/1414895
+    .. _bug 1761062: https://bugs.launchpad.net/nova/+bug/1761062
diff --git a/releasenotes/notes/bug-1773342-52b6a1460c7bee64.yaml b/releasenotes/notes/bug-1773342-52b6a1460c7bee64.yaml
new file mode 100644
index 0000000000..e53c24847f
--- /dev/null
+++ b/releasenotes/notes/bug-1773342-52b6a1460c7bee64.yaml
@@ -0,0 +1,8 @@
+---
+fixes:
+  - |
+    Fixes `bug 1773342`_ where the Hyper-V driver always deleted unused
+    images, ignoring the ``remove_unused_images`` config option. This change
+    allows deployers to disable the automatic removal of old images.
+
+    .. _bug 1773342: https://bugs.launchpad.net/nova/+bug/1773342
diff --git a/releasenotes/notes/bug-1803627-cinder-catalog-info-service-name-optional-fa673ad29fb762ea.yaml b/releasenotes/notes/bug-1803627-cinder-catalog-info-service-name-optional-fa673ad29fb762ea.yaml
new file mode 100644
index 0000000000..e050cd38da
--- /dev/null
+++ b/releasenotes/notes/bug-1803627-cinder-catalog-info-service-name-optional-fa673ad29fb762ea.yaml
@@ -0,0 +1,11 @@
+---
+other:
+  - |
+    The ``[cinder]/catalog_info`` default value has been changed so that the
+    ``service_name`` portion of the value is no longer set and is also
+    no longer required. Since looking up the cinder endpoint in the service
+    catalog should only need the endpoint type (``volumev3`` by default) and
+    interface (``publicURL`` by default), the service name is dropped and
+    only provided during endpoint lookup if configured.
+    See `bug 1803627 <https://bugs.launchpad.net/nova/+bug/1803627>`_ for
+    details.
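To make the ``[cinder]/catalog_info`` change above concrete, a sketch of the
value format ``<service_type>:<service_name>:<interface>``; the exact strings
are illustrative assumptions::

    [cinder]
    # Old style: a service name (e.g. cinderv3) sat in the middle position.
    #catalog_info = volumev3:cinderv3:publicURL
    # New default style: service name omitted; only used if configured.
    catalog_info = volumev3::publicURL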
diff --git a/releasenotes/notes/bug-1815791-f84a913eef9e3b21.yaml b/releasenotes/notes/bug-1815791-f84a913eef9e3b21.yaml
new file mode 100644
index 0000000000..f52352506b
--- /dev/null
+++ b/releasenotes/notes/bug-1815791-f84a913eef9e3b21.yaml
@@ -0,0 +1,11 @@
+---
+fixes:
+  - |
+    Fixes a race condition that could allow a newly created Ironic
+    instance to be powered off after deployment, without letting
+    the user power it back on.
+upgrade:
+  - |
+    Adds a ``use_cache`` parameter to the virt driver ``get_info``
+    method. Out-of-tree drivers should add support for this
+    parameter.
diff --git a/releasenotes/notes/conf-max-attach-disk-devices-82dc1e0825e00b35.yaml b/releasenotes/notes/conf-max-attach-disk-devices-82dc1e0825e00b35.yaml
new file mode 100644
index 0000000000..b1681af54b
--- /dev/null
+++ b/releasenotes/notes/conf-max-attach-disk-devices-82dc1e0825e00b35.yaml
@@ -0,0 +1,57 @@
+---
+features:
+  - |
+    A new configuration option, ``[compute]/max_disk_devices_to_attach``,
+    which defaults to ``-1`` (unlimited), has been added and can be used to
+    configure the maximum number of disk devices allowed to attach to a
+    single server, per compute host. Note that the number of disks supported
+    by a server depends on the bus used. For example, the ``ide`` disk bus
+    is limited to 4 attached devices.
+
+    Usually, the disk bus is determined automatically from the device type
+    or disk device and the virtualization type. However, the disk bus
+    can also be specified via a block device mapping or an image property.
+    See the ``disk_bus`` field in
+    https://docs.openstack.org/nova/latest/user/block-device-mapping.html
+    for more information about specifying disk bus in a block device
+    mapping, and see
+    https://docs.openstack.org/glance/latest/admin/useful-image-properties.html
+    for more information about the ``hw_disk_bus`` image property.
+
+    The configured maximum is enforced during server create, rebuild,
+    evacuate, unshelve, live migrate, and attach volume. When the maximum is
+    exceeded during server create, rebuild, evacuate, unshelve, or live
+    migrate, the server will go into ``ERROR`` state and the server fault
+    message will indicate the failure reason. When the maximum is exceeded
+    during a server attach volume API operation, the request will fail with
+    a ``403 HTTPForbidden`` error.
+issues:
+  - |
+    Operators changing ``[compute]/max_disk_devices_to_attach`` on a
+    compute service that is hosting servers should be aware that it could
+    cause rebuilds to fail, if the maximum is decreased below the number
+    of devices already attached to servers. For example, if server A has 26
+    devices attached and an operator changes
+    ``[compute]/max_disk_devices_to_attach`` to 20, a request to rebuild
+    server A will fail and go into ``ERROR`` state because 26 devices are
+    already attached and exceed the new configured maximum of 20.
+
+    Operators setting ``[compute]/max_disk_devices_to_attach`` should also
+    be aware that during a cold migration, the configured maximum is only
+    enforced in-place and the destination is not checked before the move.
+    This means if an operator has set a maximum of 26 on compute host A and
+    a maximum of 20 on compute host B, a cold migration of a server with 26
+    attached devices from compute host A to compute host B will succeed.
+    Then, once the server is on compute host B, a subsequent request to
+    rebuild the server will fail and go into ``ERROR`` state because 26
+    devices are already attached and exceed the configured maximum of 20 on
+    compute host B.
+
+    The configured maximum is not enforced on shelved offloaded servers, as
+    they have no compute host.
+upgrade:
+  - |
+    The new configuration option, ``[compute]/max_disk_devices_to_attach``,
+    defaults to ``-1`` (unlimited). Users of the libvirt driver should be
+    advised that the default limit for non-ide disk buses has changed from
+    26 to unlimited upon upgrade to Stein. The ``ide`` disk bus continues to
+    be limited to 4 attached devices per server.
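A minimal ``nova.conf`` sketch for the ``[compute]/max_disk_devices_to_attach``
option described above; the value 20 is purely illustrative::

    [compute]
    # -1 (the default) means unlimited. A positive value caps the number of
    # disk devices attachable to a single server on this compute host.
    max_disk_devices_to_attach = 20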
diff --git a/releasenotes/notes/default-zero-disk-flavor-to-admin-api-fd99e162812c2c7f.yaml b/releasenotes/notes/default-zero-disk-flavor-to-admin-api-fd99e162812c2c7f.yaml
new file mode 100644
index 0000000000..e76921ecbb
--- /dev/null
+++ b/releasenotes/notes/default-zero-disk-flavor-to-admin-api-fd99e162812c2c7f.yaml
@@ -0,0 +1,11 @@
+---
+upgrade:
+  - |
+    The default value for policy rule
+    ``os_compute_api:servers:create:zero_disk_flavor`` has changed from
+    ``rule:admin_or_owner`` to ``rule:admin_api``, which means that by
+    default, users without the admin role will not be allowed to create
+    servers using a flavor with ``disk=0`` *unless* they are creating a
+    volume-backed server. If you have these kinds of flavors, you may need
+    to take action or temporarily override the policy rule. Refer to
+    `bug 1739646 <https://launchpad.net/bugs/1739646>`_ for more details.
diff --git a/releasenotes/notes/deprecate-config_drive_format-62d481260c254187.yaml b/releasenotes/notes/deprecate-config_drive_format-62d481260c254187.yaml
new file mode 100644
index 0000000000..f6fee43446
--- /dev/null
+++ b/releasenotes/notes/deprecate-config_drive_format-62d481260c254187.yaml
@@ -0,0 +1,8 @@
+---
+deprecations:
+  - |
+    The ``config_drive_format`` config option has been deprecated. It was
+    necessary to work around an issue with libvirt that was later resolved
+    in libvirt v1.2.17. For more information refer to `bug #1246201`__.
+
+    __ https://bugs.launchpad.net/nova/+bug/1246201
diff --git a/releasenotes/notes/deprecate-disable_libvirt_livesnapshot-413c71b96f5e38d4.yaml b/releasenotes/notes/deprecate-disable_libvirt_livesnapshot-413c71b96f5e38d4.yaml
new file mode 100644
index 0000000000..e337537136
--- /dev/null
+++ b/releasenotes/notes/deprecate-disable_libvirt_livesnapshot-413c71b96f5e38d4.yaml
@@ -0,0 +1,9 @@
+---
+deprecations:
+  - |
+    The ``[workarounds] disable_libvirt_livesnapshot`` config option has
+    been deprecated. It was necessary to work around an issue with libvirt
+    v1.2.2, which we no longer support. For more information refer to `bug
+    #1334398`__.
+
+    __ https://bugs.launchpad.net/nova/+bug/1334398
diff --git a/releasenotes/notes/deprecate-nova-console-8247a1e2565dc326.yaml b/releasenotes/notes/deprecate-nova-console-8247a1e2565dc326.yaml
index 6b1012baf0..ed6ccc43c5 100644
--- a/releasenotes/notes/deprecate-nova-console-8247a1e2565dc326.yaml
+++ b/releasenotes/notes/deprecate-nova-console-8247a1e2565dc326.yaml
@@ -1,6 +1,7 @@
 ---
 deprecations:
   - |
-    The ``nova-console`` service is deprecated as it is Xen specific, does not
-    function properly in a multi-cell environment, and has effectively been
-    replaced by noVNC and the ``nova-novncproxy`` service.
+    The ``nova-console`` service is deprecated as it is XenAPI specific, does
+    not function properly in a multi-cell environment, and has effectively
+    been replaced by noVNC and the ``nova-novncproxy`` service. noVNC should
+    therefore be configured instead.
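For operators auditing their configuration against the two deprecation notes
above, a sketch of ``nova.conf`` entries that are now deprecation candidates;
the values shown are assumptions for illustration only::

    [DEFAULT]
    # Deprecated: only existed to work around a libvirt issue fixed in
    # libvirt v1.2.17; plan to drop this override.
    #config_drive_format = iso9660

    [workarounds]
    # Deprecated: worked around a libvirt v1.2.2 issue; that libvirt version
    # is no longer supported.
    #disable_libvirt_livesnapshot = false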
diff --git a/releasenotes/notes/deprecate-yet-another-nova-network-opt-b23b7bd9c31383eb.yaml b/releasenotes/notes/deprecate-yet-another-nova-network-opt-b23b7bd9c31383eb.yaml
new file mode 100644
index 0000000000..910b6a4238
--- /dev/null
+++ b/releasenotes/notes/deprecate-yet-another-nova-network-opt-b23b7bd9c31383eb.yaml
@@ -0,0 +1,7 @@
+---
+deprecations:
+  - |
+    The following option, found in ``DEFAULT``, was only used for
+    configuring nova-network and is, like nova-network itself, now
+    deprecated:
+
+    - ``defer_iptables_apply``
diff --git a/releasenotes/notes/disable-live-migration-with-numa-bc710a1bcde25957.yaml b/releasenotes/notes/disable-live-migration-with-numa-bc710a1bcde25957.yaml
new file mode 100644
index 0000000000..4297c186f7
--- /dev/null
+++ b/releasenotes/notes/disable-live-migration-with-numa-bc710a1bcde25957.yaml
@@ -0,0 +1,25 @@
+---
+upgrade:
+  - |
+    Live migration of instances with NUMA topologies is now disabled by
+    default when using the libvirt driver. This includes live migration of
+    instances with CPU pinning or hugepages. CPU pinning and hugepage
+    information for such instances is not currently re-calculated, as noted
+    in `bug #1289064`_. This means that if instances were already present on
+    the destination host, the migrated instance could be placed on the same
+    dedicated cores as these instances or use hugepages allocated for
+    another instance. Alternatively, if the host platforms were not
+    homogeneous, the instance could be assigned to non-existent cores or be
+    inadvertently split across host NUMA nodes.
+
+    The `long term solution`_ to these issues is to recalculate the XML on
+    the destination node. When this work is completed, the restriction on
+    live migration with NUMA topologies will be lifted.
+
+    For operators that are aware of the issues and are able to manually work
+    around them, the ``[workarounds] enable_numa_live_migration`` option can
+    be used to allow this broken behavior.
+
+    For more information, refer to `bug #1289064`_.
+
+    .. _bug #1289064: https://bugs.launchpad.net/nova/+bug/1289064
+    .. _long term solution: https://blueprints.launchpad.net/nova/+spec/numa-aware-live-migration
diff --git a/releasenotes/notes/driver-capabilities-to-traits-152eb851cd016f4d.yaml b/releasenotes/notes/driver-capabilities-to-traits-152eb851cd016f4d.yaml
new file mode 100644
index 0000000000..da60ed4a12
--- /dev/null
+++ b/releasenotes/notes/driver-capabilities-to-traits-152eb851cd016f4d.yaml
@@ -0,0 +1,26 @@
+---
+features:
+  - |
+    Compute drivers now expose capabilities via traits in the
+    Placement API. Capabilities must map to standard traits defined
+    in `the os-traits project
+    <https://docs.openstack.org/os-traits/latest/>`_; for now these
+    are:
+
+    * ``COMPUTE_NET_ATTACH_INTERFACE``
+    * ``COMPUTE_DEVICE_TAGGING``
+    * ``COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG``
+    * ``COMPUTE_VOLUME_ATTACH_WITH_TAG``
+    * ``COMPUTE_VOLUME_EXTEND``
+    * ``COMPUTE_VOLUME_MULTI_ATTACH``
+    * ``COMPUTE_TRUSTED_CERTS``
+
+    Any traits provided by the driver will be automatically added
+    during startup or a periodic update of a compute node. Similarly,
+    any traits later retracted by the driver will be automatically
+    removed.
+
+    However, any traits which are removed by the admin from the compute
+    node resource provider via the Placement API will not be
+    reinstated until the compute service's provider cache is reset.
+    This can be triggered via a ``SIGHUP``.
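A sketch of the opt-in described in
``disable-live-migration-with-numa-bc710a1bcde25957.yaml`` above, for
operators who understand the risks and can work around them manually::

    [workarounds]
    # Re-enables live migration of instances with NUMA topologies (CPU
    # pinning, hugepages) despite the issues tracked in bug #1289064.
    enable_numa_live_migration = True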
diff --git a/releasenotes/notes/drop-reqspec-migration-4d493450ce436f7e.yaml b/releasenotes/notes/drop-reqspec-migration-4d493450ce436f7e.yaml
new file mode 100644
index 0000000000..8baadfd2f9
--- /dev/null
+++ b/releasenotes/notes/drop-reqspec-migration-4d493450ce436f7e.yaml
@@ -0,0 +1,10 @@
+---
+upgrade:
+  - |
+    The online data migration ``migrate_instances_add_request_spec``, which
+    was added in the 14.0.0 Newton release, has now been removed.
+    Compatibility code in the controller services for old instances without
+    a matching ``request_specs`` entry in the ``nova_api`` database is also
+    gone. Ensure that the ``Request Spec Migration`` check in the
+    ``nova-status upgrade check`` command is successful before upgrading to
+    the 19.0.0 Stein release.
diff --git a/releasenotes/notes/fill_virtual_interface_list-1ec5bcccde2ebd22.yaml b/releasenotes/notes/fill_virtual_interface_list-1ec5bcccde2ebd22.yaml
new file mode 100644
index 0000000000..9dbe758db2
--- /dev/null
+++ b/releasenotes/notes/fill_virtual_interface_list-1ec5bcccde2ebd22.yaml
@@ -0,0 +1,10 @@
+---
+upgrade:
+  - The ``nova-manage db online_data_migrations`` command
+    will now fill missing ``virtual_interfaces`` records for instances
+    created before the Newton release. This is related to a fix for
+    https://launchpad.net/bugs/1751923, which makes the
+    ``_heal_instance_info_cache`` periodic task in the ``nova-compute``
+    service regenerate an instance's network info cache from the current
+    neutron port list; the VIFs from the database are needed to
+    maintain the port order for the instance.
diff --git a/releasenotes/notes/fix-image-utime-race-condition-3c404e272ea91b34.yaml b/releasenotes/notes/fix-image-utime-race-condition-3c404e272ea91b34.yaml
new file mode 100644
index 0000000000..08aa6e4860
--- /dev/null
+++ b/releasenotes/notes/fix-image-utime-race-condition-3c404e272ea91b34.yaml
@@ -0,0 +1,8 @@
+---
+fixes:
+  - |
+    Fixes a race condition when multiple instances are launched at the same
+    time, which led to a failure when updating the modification time of the
+    instance base image. This issue was noticed when using an NFS backend.
+    For more information see https://bugs.launchpad.net/nova/+bug/1809123
+
diff --git a/releasenotes/notes/flavor-extra-spec-image-property-validation-7310954ba3822477.yaml b/releasenotes/notes/flavor-extra-spec-image-property-validation-7310954ba3822477.yaml
new file mode 100644
index 0000000000..7e3532beec
--- /dev/null
+++ b/releasenotes/notes/flavor-extra-spec-image-property-validation-7310954ba3822477.yaml
@@ -0,0 +1,17 @@
+---
+upgrade:
+  - |
+    With added validations for flavor extra-specs and image properties, the
+    APIs for server create, resize and rebuild will now return 400
+    exceptions where they did not before, when the extra-specs or properties
+    are not properly formatted or are mutually incompatible.
+
+    For all three actions we will now check both the flavor and image to
+    validate the CPU policy, CPU thread policy, CPU topology, memory
+    topology, hugepages, serial ports, realtime CPU mask, NUMA topology
+    details, CPU pinning, and a few other things.
+
+    The main advantage of this is catching invalid configurations as early
+    as possible, so that a useful error can be returned to the user rather
+    than failing much further down the stack, where the operator would have
+    to get involved.
diff --git a/releasenotes/notes/ironic-partition-compute-nodes-fc60a6557fae9c5e.yaml b/releasenotes/notes/ironic-partition-compute-nodes-fc60a6557fae9c5e.yaml
new file mode 100644
index 0000000000..cbe7efdbb3
--- /dev/null
+++ b/releasenotes/notes/ironic-partition-compute-nodes-fc60a6557fae9c5e.yaml
@@ -0,0 +1,12 @@
+---
+features:
+  - |
+    In deployments with Ironic, adds the ability for compute services to
+    manage a subset of Ironic nodes. If the ``[ironic]/partition_key``
+    configuration option is set, the compute service will only consider
+    nodes with a matching ``conductor_group`` attribute for management.
+    Setting the ``[ironic]/peer_list`` configuration option allows this
+    subset of nodes to be distributed among the specified compute services
+    to further reduce the failure domain. This feature is useful for
+    co-locating nova-compute services with ironic-conductor services that
+    manage the same nodes, or for better controlling the failure domain of
+    a given compute service.
diff --git a/releasenotes/notes/libvirt-stein-vgpu-reshape-a1fa23b8ad8aa966.yaml b/releasenotes/notes/libvirt-stein-vgpu-reshape-a1fa23b8ad8aa966.yaml
new file mode 100644
index 0000000000..e0cce313c6
--- /dev/null
+++ b/releasenotes/notes/libvirt-stein-vgpu-reshape-a1fa23b8ad8aa966.yaml
@@ -0,0 +1,12 @@
+---
+upgrade:
+  - |
+    The libvirt compute driver will "reshape" VGPU inventories and
+    allocations when the ``nova-compute`` service starts. This will result
+    in moving VGPU inventory from the root compute node resource provider
+    to a nested (child) resource provider in the tree and moving any
+    associated VGPU allocations with it. This will be a one-time operation
+    on startup in Stein. There is no end-user visible impact; it is for
+    internal resource tracking purposes. See the `spec`__ for more details.
+
+    .. __: https://specs.openstack.org/openstack/nova-specs/specs/stein/approved/reshape-provider-tree.html
diff --git a/releasenotes/notes/live-migration-force-after-timeout-54f2a4b631d295bb.yaml b/releasenotes/notes/live-migration-force-after-timeout-54f2a4b631d295bb.yaml
new file mode 100644
index 0000000000..0e481793aa
--- /dev/null
+++ b/releasenotes/notes/live-migration-force-after-timeout-54f2a4b631d295bb.yaml
@@ -0,0 +1,29 @@
+---
+features:
+  - |
+    A new configuration option, ``[libvirt]/live_migration_timeout_action``,
+    has been added, with choices ``abort`` (default) or ``force_complete``.
+    This option determines what action is taken against a VM after
+    ``live_migration_completion_timeout`` expires. By default, nova keeps
+    the existing behavior of aborting the live migration after the
+    completion timeout expires. ``force_complete`` will either pause the VM
+    or trigger post-copy, depending on whether post-copy is enabled and
+    available.
+
+    The ``[libvirt]/live_migration_completion_timeout`` option now has a
+    minimum value of 0 and will raise a ValueError if the configured value
+    is lower.
+
+    Note that if you configure nova to have no timeout, post-copy will
+    never be automatically triggered. None of this affects triggering
+    post-copy via the force live-migration API, which continues to work in
+    the same way.
+upgrade:
+  - |
+    The config option ``[libvirt]/live_migration_progress_timeout`` was
+    deprecated in Ocata and has now been removed.
+
+    The logic in the libvirt driver that automatically triggered post-copy
+    based on progress information has been removed, as it `proved
+    impossible`__ to reliably detect when a live migration is making little
+    progress.
+
+    .. __: https://bugs.launchpad.net/nova/+bug/1644248
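A ``nova.conf`` sketch of the new timeout action just described; the timeout
value is illustrative (see the option help for how the completion timeout is
applied)::

    [libvirt]
    # When this timeout expires, take the action below instead of always
    # aborting; 0 disables the timeout (post-copy then never auto-triggers).
    live_migration_completion_timeout = 800
    # Pause the VM or switch to post-copy (if enabled and available).
    live_migration_timeout_action = force_complete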
diff --git a/releasenotes/notes/live_migration_wait_for_vif_plug-stein-default-true-12103b09b8ac686a.yaml b/releasenotes/notes/live_migration_wait_for_vif_plug-stein-default-true-12103b09b8ac686a.yaml
new file mode 100644
index 0000000000..40e77c50bf
--- /dev/null
+++ b/releasenotes/notes/live_migration_wait_for_vif_plug-stein-default-true-12103b09b8ac686a.yaml
@@ -0,0 +1,7 @@
+---
+upgrade:
+  - |
+    The default value for the ``[compute]/live_migration_wait_for_vif_plug``
+    configuration option has been changed to ``True``. As noted in the help
+    text for the option, some networking backends will not work with this
+    set to ``True``, although OVS and linuxbridge will.
diff --git a/releasenotes/notes/maximum_instance_delete_attempts-option-force-minimum-value-2ce74351650e7b21.yaml b/releasenotes/notes/maximum_instance_delete_attempts-option-force-minimum-value-2ce74351650e7b21.yaml
new file mode 100644
index 0000000000..e1e12570ef
--- /dev/null
+++ b/releasenotes/notes/maximum_instance_delete_attempts-option-force-minimum-value-2ce74351650e7b21.yaml
@@ -0,0 +1,5 @@
+---
+upgrade:
+  - The ``maximum_instance_delete_attempts`` configuration option now has
+    a minimum value of 1 and raises a ValueError
+    if the configured value is lower.
diff --git a/releasenotes/notes/microversion-2.70-expose-virtual-device-tags-ca82ba6ee6cf9272.yaml b/releasenotes/notes/microversion-2.70-expose-virtual-device-tags-ca82ba6ee6cf9272.yaml
new file mode 100644
index 0000000000..bd2274d742
--- /dev/null
+++ b/releasenotes/notes/microversion-2.70-expose-virtual-device-tags-ca82ba6ee6cf9272.yaml
@@ -0,0 +1,18 @@
+---
+features:
+  - |
+    The 2.70 compute API microversion exposes virtual device tags for volume
+    attachments and virtual interfaces (ports). A ``tag`` parameter is added
+    to the response body for the following APIs:
+
+    **Volumes**
+
+    * GET /servers/{server_id}/os-volume_attachments (list)
+    * GET /servers/{server_id}/os-volume_attachments/{volume_id} (show)
+    * POST /servers/{server_id}/os-volume_attachments (attach)
+
+    **Ports**
+
+    * GET /servers/{server_id}/os-interface (list)
+    * GET /servers/{server_id}/os-interface/{port_id} (show)
+    * POST /servers/{server_id}/os-interface (attach)
diff --git a/releasenotes/notes/move-vrouter-plug-unplug-to-separate-os-vif-plugin-5557c9cd6f926fd8.yaml b/releasenotes/notes/move-vrouter-plug-unplug-to-separate-os-vif-plugin-5557c9cd6f926fd8.yaml
new file mode 100644
index 0000000000..747f14a8b1
--- /dev/null
+++ b/releasenotes/notes/move-vrouter-plug-unplug-to-separate-os-vif-plugin-5557c9cd6f926fd8.yaml
@@ -0,0 +1,12 @@
+---
+upgrade:
+  - |
+    This release moves the ``vrouter`` VIF plug and unplug code to a
+    separate package called ``contrail-nova-vif-driver``. This package is a
+    requirement on compute nodes when using Contrail, OpenContrail or
+    Tungsten Fabric as a Neutron plugin.
+    At this time, the reference plugin is hosted on OpenContrail at
+    https://github.com/Juniper/contrail-nova-vif-driver but is expected to
+    transition to Tungsten Fabric in the future.
+    Release ``r5.1.alpha0`` or later of the plugin is required, which will
+    be included in Tungsten Fabric 5.1.
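A sketch relating to the ``live_migration_wait_for_vif_plug`` default change
noted above; reverting to ``False`` is only needed for backends that do not
send ``network-vif-plugged`` events::

    [compute]
    # New default is True; OVS and linuxbridge work with it. Set to False
    # only if your networking backend does not send network-vif-plugged
    # events.
    live_migration_wait_for_vif_plug = False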
diff --git a/releasenotes/notes/per-aggregate-scheduling-weight-7535fd6e8345034d.yaml b/releasenotes/notes/per-aggregate-scheduling-weight-7535fd6e8345034d.yaml
new file mode 100644
index 0000000000..1e1a0ab00c
--- /dev/null
+++ b/releasenotes/notes/per-aggregate-scheduling-weight-7535fd6e8345034d.yaml
@@ -0,0 +1,15 @@
+---
+features:
+  - |
+    Added the ability to use an aggregate's ``metadata`` to override the
+    global config options for weights, allowing more fine-grained control
+    over resource weights.
+
+    For example, the CPUWeigher weighs hosts based on the available vCPUs
+    on the compute node, multiplied by the CPU weight multiplier. If a
+    per-aggregate value with the key ``cpu_weight_multiplier`` is found,
+    that value is used as the CPU weight multiplier. Otherwise, it falls
+    back to ``[filter_scheduler]/cpu_weight_multiplier``. If more than one
+    value is found for a host in aggregate metadata, the minimum value is
+    used.
diff --git a/releasenotes/notes/per-instance-serial-f2e597cb05d1b09e.yaml b/releasenotes/notes/per-instance-serial-f2e597cb05d1b09e.yaml
new file mode 100644
index 0000000000..a70ae75cf7
--- /dev/null
+++ b/releasenotes/notes/per-instance-serial-f2e597cb05d1b09e.yaml
@@ -0,0 +1,9 @@
+---
+upgrade:
+  - |
+    Added a new ``unique`` choice to the ``[libvirt]/sysinfo_serial``
+    configuration option which, if set, results in the guest serial number
+    being set to ``instance.uuid``. This is now the default value
+    of the ``[libvirt]/sysinfo_serial`` config option and is the
+    recommended choice, since it ensures the guest serial stays the same
+    even if the instance is migrated between hosts.
diff --git a/releasenotes/notes/reject-interface-attach-with-port-resource-request-17473ddc5a989a2a.yaml b/releasenotes/notes/reject-interface-attach-with-port-resource-request-17473ddc5a989a2a.yaml
new file mode 100644
index 0000000000..5b4b8a2252
--- /dev/null
+++ b/releasenotes/notes/reject-interface-attach-with-port-resource-request-17473ddc5a989a2a.yaml
@@ -0,0 +1,10 @@
+---
+other:
+  - |
+    The ``POST /servers/{server_id}/os-interface`` request will be rejected
+    with HTTP 400 if the Neutron port referenced in the request body has a
+    resource request, as Nova currently cannot support such an operation.
+    For example, a Neutron port has a resource request if a `QoS minimum
+    bandwidth rule`_ is attached to it in Neutron.
+
+    .. _QoS minimum bandwidth rule: https://docs.openstack.org/neutron/latest/admin/config-qos.html
diff --git a/releasenotes/notes/reject-networks-with-qos-policy-2746c74fd1f3ff26.yaml b/releasenotes/notes/reject-networks-with-qos-policy-2746c74fd1f3ff26.yaml
new file mode 100644
index 0000000000..94c7ec2266
--- /dev/null
+++ b/releasenotes/notes/reject-networks-with-qos-policy-2746c74fd1f3ff26.yaml
@@ -0,0 +1,9 @@
+---
+other:
+  - |
+    The ``POST /servers/{server_id}/os-interface`` request and the
+    ``POST /servers`` request will be rejected with HTTP 400 if the Neutron
+    network referenced in the request body has a `QoS minimum bandwidth
+    rule`_ attached, as Nova currently cannot support such operations.
+
+    .. _QoS minimum bandwidth rule: https://docs.openstack.org/neutron/latest/admin/config-qos.html
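A sketch of the ``[libvirt]/sysinfo_serial`` choice from the
``per-instance-serial`` note above; since ``unique`` is now the default,
setting it explicitly is for clarity only::

    [libvirt]
    # 'unique' sets the guest serial number to instance.uuid, so it stays
    # stable even when the instance is migrated between hosts.
    sysinfo_serial = unique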
diff --git a/releasenotes/notes/remove-deprecated-flavors-policy-c03c5d227a7b0c87.yaml b/releasenotes/notes/remove-deprecated-flavors-policy-c03c5d227a7b0c87.yaml
new file mode 100644
index 0000000000..7afb350d8f
--- /dev/null
+++ b/releasenotes/notes/remove-deprecated-flavors-policy-c03c5d227a7b0c87.yaml
@@ -0,0 +1,4 @@
+---
+upgrade:
+  - The ``os_compute_api:flavors`` policy, deprecated in 16.0.0,
+    has been removed.
diff --git a/releasenotes/notes/remove-deprecated-os-flavor-manage-policy-138296853d957c5f.yaml b/releasenotes/notes/remove-deprecated-os-flavor-manage-policy-138296853d957c5f.yaml
new file mode 100644
index 0000000000..af0b48c722
--- /dev/null
+++ b/releasenotes/notes/remove-deprecated-os-flavor-manage-policy-138296853d957c5f.yaml
@@ -0,0 +1,9 @@
+---
+upgrade:
+  - |
+    The ``os_compute_api:os-flavor-manage`` policy has been removed
+    because it has been deprecated since 16.0.0.
+    Use the following policies instead:
+
+    * ``os_compute_api:os-flavor-manage:create``
+    * ``os_compute_api:os-flavor-manage:delete``
diff --git a/releasenotes/notes/remove-deprecated-os-server-groups-policy-de89d5d11d490338.yaml b/releasenotes/notes/remove-deprecated-os-server-groups-policy-de89d5d11d490338.yaml
new file mode 100644
index 0000000000..44dd3b7b40
--- /dev/null
+++ b/releasenotes/notes/remove-deprecated-os-server-groups-policy-de89d5d11d490338.yaml
@@ -0,0 +1,4 @@
+---
+upgrade:
+  - The ``os_compute_api:os-server-groups`` policy, deprecated in 16.0.0,
+    has been removed.
diff --git a/releasenotes/notes/remove-live-migrate-evacuate-force-flag-cb50608d5930585c.yaml b/releasenotes/notes/remove-live-migrate-evacuate-force-flag-cb50608d5930585c.yaml
new file mode 100644
index 0000000000..56152bff11
--- /dev/null
+++ b/releasenotes/notes/remove-live-migrate-evacuate-force-flag-cb50608d5930585c.yaml
@@ -0,0 +1,8 @@
+---
+upgrade:
+  - |
+    Starting with API microversion 2.68, it is no longer possible to force
+    server live migrations or evacuations to a specific destination host.
+    This is because it is not possible to support these requests for
+    servers with complex resource allocations. It is still possible to
+    request a destination host, but it will be validated by the scheduler.
diff --git a/releasenotes/notes/remove-quota-options-0e407c56ea993f5a.yaml b/releasenotes/notes/remove-quota-options-0e407c56ea993f5a.yaml
new file mode 100644
index 0000000000..5d6b23caac
--- /dev/null
+++ b/releasenotes/notes/remove-quota-options-0e407c56ea993f5a.yaml
@@ -0,0 +1,9 @@
+---
+upgrade:
+  - |
+    The following configuration options in the ``quota`` group have been
+    removed because they have not been used since 17.0.0:
+
+    - ``reservation_expire``
+    - ``until_refresh``
+    - ``max_age``
diff --git a/releasenotes/notes/run-meta-api-per-cell-69d74cdd70528085.yaml b/releasenotes/notes/run-meta-api-per-cell-69d74cdd70528085.yaml
new file mode 100644
index 0000000000..3cfcef0cfc
--- /dev/null
+++ b/releasenotes/notes/run-meta-api-per-cell-69d74cdd70528085.yaml
@@ -0,0 +1,10 @@
+---
+features:
+  - |
+    Added the configuration option ``[api]/local_metadata_per_cell`` to
+    allow operators to run the Nova metadata API service per cell. Doing
+    this can improve performance and provide data isolation in a multi-cell
+    deployment, but it has some caveats; see
+    `Metadata api service in cells v2 layout`_ for more details.
+
+    .. _Metadata api service in cells v2 layout: https://docs.openstack.org/nova/latest/user/cellsv2-layout.html#nova-metadata-api-service
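A sketch of the per-cell metadata option just described, as it might appear
in the ``nova.conf`` of a metadata service deployed inside a cell; see the
linked docs for the caveats::

    [api]
    # Serve metadata from this cell's database only; requires running a
    # metadata API service in every cell.
    local_metadata_per_cell = True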
diff --git a/releasenotes/notes/show-server-group-8d4bf609213a94de.yaml b/releasenotes/notes/show-server-group-8d4bf609213a94de.yaml
new file mode 100644
index 0000000000..cf234c2e9a
--- /dev/null
+++ b/releasenotes/notes/show-server-group-8d4bf609213a94de.yaml
@@ -0,0 +1,10 @@
+---
+features:
+  - |
+    Starting with the 2.71 microversion, the ``server_groups`` parameter
+    will be in the response body of the following APIs to list the server
+    groups to which the server belongs:
+
+    * ``GET /servers/{server_id}``
+    * ``PUT /servers/{server_id}``
+    * ``POST /servers/{server_id}/action (rebuild)``
diff --git a/releasenotes/notes/stein-affinity-weight-multiplier-cleanup-fed9ec25660befd3.yaml b/releasenotes/notes/stein-affinity-weight-multiplier-cleanup-fed9ec25660befd3.yaml
new file mode 100644
index 0000000000..2a7fb7da06
--- /dev/null
+++ b/releasenotes/notes/stein-affinity-weight-multiplier-cleanup-fed9ec25660befd3.yaml
@@ -0,0 +1,8 @@
+---
+upgrade:
+  - |
+    The ``[filter_scheduler]/soft_affinity_weight_multiplier`` and
+    ``[filter_scheduler]/soft_anti_affinity_weight_multiplier``
+    configuration options now have a hard minimum value of 0.0. Also, the
+    deprecated alias to the ``[DEFAULT]`` group has been removed, so the
+    options must appear in the ``[filter_scheduler]`` group.
diff --git a/releasenotes/notes/stein-nova-cells-v1-experimental-ci-de47b3c62e5fb675.yaml b/releasenotes/notes/stein-nova-cells-v1-experimental-ci-de47b3c62e5fb675.yaml
new file mode 100644
index 0000000000..6bac013cd4
--- /dev/null
+++ b/releasenotes/notes/stein-nova-cells-v1-experimental-ci-de47b3c62e5fb675.yaml
@@ -0,0 +1,9 @@
+---
+other:
+  - |
+    CI testing of Cells v1 has been moved to the ``experimental`` queue,
+    meaning changes proposed to nova will not be tested against a Cells v1
+    setup unless explicitly run through the ``experimental`` queue by
+    leaving a review comment of "check experimental" on the patch. Cells v1
+    has been deprecated since the 16.0.0 Pike release, and this is a
+    further step in its eventual removal.
diff --git a/releasenotes/notes/support-neutron-ports-with-resource-request-cb9ad5e9757792d0.yaml b/releasenotes/notes/support-neutron-ports-with-resource-request-cb9ad5e9757792d0.yaml
new file mode 100644
index 0000000000..815c3cc12c
--- /dev/null
+++ b/releasenotes/notes/support-neutron-ports-with-resource-request-cb9ad5e9757792d0.yaml
@@ -0,0 +1,22 @@
+---
+features:
+  - |
+    API microversion 2.72 adds support for creating servers with neutron
+    ports that have a resource request, e.g. neutron ports with a
+    `QoS minimum bandwidth rule`_. Deleting servers with such ports, as
+    well as detaching these types of ports, was already handled properly.
+
+    API limitations:
+
+    * Creating servers with Neutron networks having a QoS minimum bandwidth
+      rule is not supported.
+
+    * Attaching Neutron ports and networks having a QoS minimum bandwidth
+      rule is not supported.
+
+    * Moving (resizing, migrating, live-migrating, evacuating,
+      unshelving after shelve offload) servers with ports having a resource
+      request is not yet supported.
+
+    .. _QoS minimum bandwidth rule: https://docs.openstack.org/neutron/latest/admin/config-qos.html
+
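A sketch reflecting the soft (anti-)affinity multiplier cleanup above: the
options must now live in ``[filter_scheduler]`` and be at least 0.0; the
values shown are illustrative::

    [filter_scheduler]
    # The deprecated [DEFAULT] aliases are gone; negative values are now
    # rejected (hard minimum of 0.0).
    soft_affinity_weight_multiplier = 1.0
    soft_anti_affinity_weight_multiplier = 1.0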
diff --git a/releasenotes/notes/support-qemu-native-tls-for-migration-31d8b0ae9eb2c893.yaml b/releasenotes/notes/support-qemu-native-tls-for-migration-31d8b0ae9eb2c893.yaml
new file mode 100644
index 0000000000..54913d05c4
--- /dev/null
+++ b/releasenotes/notes/support-qemu-native-tls-for-migration-31d8b0ae9eb2c893.yaml
@@ -0,0 +1,15 @@
+---
+features:
+  - |
+    The libvirt driver now supports "QEMU-native TLS" transport for live
+    migration. This provides encryption for all migration streams,
+    namely: guest RAM, device state, and disks (on a non-shared setup)
+    transported over NBD (Network Block Device), also known as "block
+    migration".
+
+    This can be configured via a new configuration attribute
+    ``[libvirt]/live_migration_with_native_tls``. Refer to its
+    documentation in ``nova.conf`` for usage details. Note that this is
+    the preferred way to secure all migration streams in an
+    OpenStack network, rather than
+    ``[libvirt]/live_migration_tunnelled``.
diff --git a/releasenotes/notes/versioned-notification-interface-is-complete-06725d7d4d761849.yaml b/releasenotes/notes/versioned-notification-interface-is-complete-06725d7d4d761849.yaml
new file mode 100644
index 0000000000..2a4143bd22
--- /dev/null
+++ b/releasenotes/notes/versioned-notification-interface-is-complete-06725d7d4d761849.yaml
@@ -0,0 +1,10 @@
+---
+features:
+  - |
+    The versioned notification interface of nova is now complete and has
+    feature parity with the legacy interface. The emitted notifications
+    are documented in the `notification dev ref`_ with full sample files.
+    The deprecation of the legacy notification interface is under
+    discussion and will be handled separately.
+
+    .. _notification dev ref: https://docs.openstack.org/nova/latest/reference/notifications.html#existing-versioned-notifications
diff --git a/releasenotes/notes/vmware-add-max-ram-validation-f27f94d4a04aef3a.yaml b/releasenotes/notes/vmware-add-max-ram-validation-f27f94d4a04aef3a.yaml
new file mode 100644
index 0000000000..c0fcd39b30
--- /dev/null
+++ b/releasenotes/notes/vmware-add-max-ram-validation-f27f94d4a04aef3a.yaml
@@ -0,0 +1,11 @@
+---
+features:
+  - |
+    For the VMware vCenter driver, added support for the video RAM
+    configured via the image property ``hw_video_ram``, which is checked
+    against the maximum allowed video RAM ``hw_video:ram_max_mb`` from the
+    flavor. If the video RAM requested by the image is less than or equal
+    to the maximum allowed, ``videoRamSizeInKB`` is set in the VM. If it
+    is more than the maximum allowed, server creation fails for the given
+    image and flavor. If the maximum allowed video RAM is not set in the
+    flavor, ``videoRamSizeInKB`` is not set in the VM.
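A sketch of enabling the QEMU-native TLS transport described in
``support-qemu-native-tls-for-migration-31d8b0ae9eb2c893.yaml`` above;
provisioning TLS certificates on both source and destination hosts is a
prerequisite and is not shown::

    [libvirt]
    # Encrypt all migration streams (guest RAM, device state and NBD-based
    # block migration); preferred over live_migration_tunnelled.
    live_migration_with_native_tls = true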
diff --git a/releasenotes/notes/vrouter-hw-offloads-38257f49ac1d3a60.yaml b/releasenotes/notes/vrouter-hw-offloads-38257f49ac1d3a60.yaml
new file mode 100644
index 0000000000..417d82367d
--- /dev/null
+++ b/releasenotes/notes/vrouter-hw-offloads-38257f49ac1d3a60.yaml
@@ -0,0 +1,14 @@
+---
+features:
+  - |
+    This release adds support for the ``direct`` and ``virtio-forwarder``
+    VNIC types to the ``vrouter`` VIF type. Using these VNIC types requires
+    support from the installed version of OpenContrail, Contrail or
+    Tungsten Fabric, as well as the required hardware. At this time, the
+    reference os-vif plugin is hosted on OpenContrail at
+    https://github.com/Juniper/contrail-nova-vif-driver but is expected to
+    transition to Tungsten Fabric in the future. Version 5.1 or later of
+    the plugin is required to use these new VNIC types. Consult the
+    `Tungsten Fabric documentation <https://tungstenfabric.github.io/website/>`_
+    for release notes, when available, about hardware support. For
+    commercial support, consult the release notes from a downstream vendor.