| author | John Garbutt <john.garbutt@stackhpc.com> | 2022-11-16 17:12:40 +0000 |
|---|---|---|
| committer | Ruby Loo <opensrloo@gmail.com> | 2022-12-15 16:09:04 +0000 |
| commit | c9de185ea1ac1e8d4435c5863b2ad7cefdb28c76 (patch) | |
| tree | 3992eee7ed692b6e4a6e659a66d3a32cfa42b221 /nova/virt/images.py | |
| parent | d92d0934188a14741dd86949ddf98bd1208f3d96 (diff) | |
| download | nova-c9de185ea1ac1e8d4435c5863b2ad7cefdb28c76.tar.gz | |
Ironic nodes with instance reserved in placement
Currently, when you delete an ironic instance, we trigger
an undeploy in Ironic and we release our allocation in Placement.
We do this well before the ironic node is actually available again.
We have attempted to fix this by marking unavailable nodes
as reserved in Placement. This works well until you try
to re-image lots of nodes.
It turns out that ironic nodes which are waiting for their automatic
clean to finish are returned as valid allocation candidates
for quite some time. Eventually we mark them as reserved.
This patch takes a strange approach: if we mark all nodes as
reserved as soon as an instance lands, we close the race.
That is, when the allocation is removed, the node remains
unavailable until the next periodic update of Placement
notices that the node has become available again. That may or
may not be after automatic cleaning has finished. The trade-off
is that when you don't have automatic cleaning, we wait a bit
longer to notice the node is available again.
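The reservation rule described above can be sketched as a small predicate.
This is an illustrative sketch only, assuming a dict-shaped node record;
the function and field names are hypothetical and are not Nova's actual code:

```python
def reserved_count(node):
    """Return how many units of the node's single bare-metal resource
    class should be reported as reserved in Placement.

    Hypothetical sketch of the approach in this patch: reserve the node
    (reserved == total == 1) as soon as an instance lands on it, or when
    it is in maintenance or otherwise unavailable, so it never shows up
    as an allocation candidate while an undeploy or clean is pending.
    """
    if node.get("instance_uuid"):
        # An instance has landed: reserve immediately, do not wait
        # for the instance to be deleted.
        return 1
    if node.get("maintenance"):
        # Broken node marked as in maintenance.
        return 1
    if node.get("provision_state") != "available":
        # e.g. still cleaning or deploying after an undeploy.
        return 1
    return 0
```

On the next periodic update of Placement, a node that has finished
cleaning reports `provision_state == "available"` with no instance,
so its reservation drops back to zero and it becomes schedulable again.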
Note that this is also useful when a broken Ironic node is
marked as in maintenance while it is in use by a Nova
instance. In a similar way, we mark the node as reserved
immediately, rather than first waiting for the instance to be
deleted before reserving the resources in Placement.
Closes-Bug: #1974070
Change-Id: Iab92124b5776a799c7f90d07281d28fcf191c8fe
(cherry picked from commit 3c022e968375c1b2eadf3c2dd7190b9434c6d4c1)
Diffstat (limited to 'nova/virt/images.py')
0 files changed, 0 insertions, 0 deletions