author     Tom Lane <tgl@sss.pgh.pa.us>  2022-09-30 19:36:46 -0400
committer  Tom Lane <tgl@sss.pgh.pa.us>  2022-09-30 19:36:46 -0400
commit     92941f26435ce3910bb9320fd92bda1e3dd4c6b5 (patch)
tree       0ed2a6731304c9ff368eab5d0420fbeced75199f
parent     a3de685013e7f3c0bf624e037a5569c0d5f8f0c2 (diff)
download   postgresql-92941f26435ce3910bb9320fd92bda1e3dd4c6b5.tar.gz
Avoid improbable PANIC during heap_update, redux.
Commit 34f581c39 intended to ensure that RelationGetBufferForTuple would
acquire a visibility-map page pin in case the otherBuffer's all-visible bit
had become set since we last had lock on that page.  But I missed a case:
when we're extending the relation, VM concerns were dealt with only in the
relatively-less-likely case that we fail to conditionally lock the
otherBuffer.  I think I'd believed that we couldn't need to worry about it
if the conditional lock succeeds, which is true for the target buffer; but
the otherBuffer was unlocked for a while, so its bit might be set anyway.
So we need to do the GetVisibilityMapPins dance, and then also recheck the
page's free space, in both cases.

Per report from Jaime Casanova.  Back-patch to v12, as the previous patch
was (although there's still no evidence that the bug is reachable pre-v14).

Discussion: https://postgr.es/m/E1lWLjP-00006Y-Ml@gemulon.postgresql.org
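The locking dance the message describes is easier to see in isolation.  Below
is a minimal, self-contained C sketch of the pattern, using plain pthread
mutexes rather than PostgreSQL buffers; page_t, pin_vm_if_all_visible, and
lock_and_revalidate are invented names for illustration only, and the sketch
models just the control flow, not the real RelationGetBufferForTuple.

/*
 * Standalone sketch (not PostgreSQL code) of the re-validation pattern this
 * commit enforces.  Build with "cc -pthread sketch.c".
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct page_t
{
	pthread_mutex_t lock;
	int			free_space;		/* bytes still free on the page */
	bool		all_visible;	/* models the visibility-map bit */
} page_t;

/* Stand-in for the work GetVisibilityMapPins does in the real code. */
static void
pin_vm_if_all_visible(page_t *p)
{
	if (p->all_visible)
		printf("would pin the visibility-map page covering this heap page\n");
}

/*
 * Lock "target" and "other", then re-validate.  Returns once both pages are
 * locked and target still has room for len bytes.
 */
static void
lock_and_revalidate(page_t *target, page_t *other, int len)
{
	for (;;)
	{
		pthread_mutex_lock(&target->lock);

		if (pthread_mutex_trylock(&other->lock) != 0)
		{
			/* Trylock failed: release, then take both locks in a fixed order. */
			pthread_mutex_unlock(&target->lock);
			pthread_mutex_lock(&other->lock);
			pthread_mutex_lock(&target->lock);
		}

		/*
		 * Whether or not the trylock succeeded, "other" was unlocked for a
		 * while before this function ran, so its state may have changed; the
		 * same re-checks are needed on both paths -- doing them on only one
		 * path is the bug the commit fixes.
		 */
		pin_vm_if_all_visible(other);

		if (len <= target->free_space)
			return;				/* both locks held and enough room */

		/*
		 * Not enough room after all: drop both locks and start over.  The
		 * real code jumps back to its "loop:" label and picks another target
		 * page; this model simply retries.
		 */
		pthread_mutex_unlock(&other->lock);
		pthread_mutex_unlock(&target->lock);
	}
}

int
main(void)
{
	static page_t target = {PTHREAD_MUTEX_INITIALIZER, 128, false};
	static page_t other = {PTHREAD_MUTEX_INITIALIZER, 64, true};

	lock_and_revalidate(&target, &other, 100);
	printf("locked both pages; a 100-byte tuple fits\n");

	pthread_mutex_unlock(&other.lock);
	pthread_mutex_unlock(&target.lock);
	return 0;
}

The essential point, mirrored in the hunk below, is that the re-validation
sits outside the trylock-failure branch, so it runs on both paths.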
-rw-r--r--  src/backend/access/heap/hio.c | 41
 1 file changed, 23 insertions(+), 18 deletions(-)
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index 43a43a84be..69cadbcede 100644
--- a/src/backend/access/heap/hio.c
+++ b/src/backend/access/heap/hio.c
@@ -643,29 +643,34 @@ loop:
 			LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
 			LockBuffer(otherBuffer, BUFFER_LOCK_EXCLUSIVE);
 			LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
+		}
 
-			/*
-			 * Because the buffers were unlocked for a while, it's possible,
-			 * although unlikely, that an all-visible flag became set or that
-			 * somebody used up the available space in the new page.  We can
-			 * use GetVisibilityMapPins to deal with the first case.  In the
-			 * second case, just retry from start.
-			 */
-			GetVisibilityMapPins(relation, otherBuffer, buffer,
-								 otherBlock, targetBlock, vmbuffer_other,
-								 vmbuffer);
+		/*
+		 * Because the buffers were unlocked for a while, it's possible,
+		 * although unlikely, that an all-visible flag became set or that
+		 * somebody used up the available space in the new page.  We can use
+		 * GetVisibilityMapPins to deal with the first case.  In the second
+		 * case, just retry from start.
+		 */
+		GetVisibilityMapPins(relation, otherBuffer, buffer,
+							 otherBlock, targetBlock, vmbuffer_other,
+							 vmbuffer);
 
-			if (len > PageGetHeapFreeSpace(page))
-			{
-				LockBuffer(otherBuffer, BUFFER_LOCK_UNLOCK);
-				UnlockReleaseBuffer(buffer);
+		/*
+		 * Note that we have to check the available space even if our
+		 * conditional lock succeeded, because GetVisibilityMapPins might've
+		 * transiently released lock on the target buffer to acquire a VM pin
+		 * for the otherBuffer.
+		 */
+		if (len > PageGetHeapFreeSpace(page))
+		{
+			LockBuffer(otherBuffer, BUFFER_LOCK_UNLOCK);
+			UnlockReleaseBuffer(buffer);
 
-				goto loop;
-			}
+			goto loop;
 		}
 	}
-
-	if (len > PageGetHeapFreeSpace(page))
+	else if (len > PageGetHeapFreeSpace(page))
 	{
 		/* We should not get here given the test at the top */
 		elog(PANIC, "tuple is too big: size %zu", len);