| | | |
|---|---|---|
| author | Noah Misch <noah@leadboat.com> | 2024-10-25 06:51:02 -0700 |
| committer | Noah Misch <noah@leadboat.com> | 2024-10-25 06:51:07 -0700 |
| commit | 4eac5a1fa78edde0de53a89bd2b06e1fe5588913 (patch) | |
| tree | c0eda5bc9fbc69c2a79dcdf88741ea23bd04f62f /src/backend/access/heap/heapam.c | |
| parent | 3baf804b7295fff7551f93ed94550d8268804cc1 (diff) | |
For inplace update, send nontransactional invalidations.
The inplace update survives ROLLBACK. The inval didn't, so another
backend's DDL could then update the row without incorporating the
inplace update. In the test this fixes, a mix of CREATE INDEX and ALTER
TABLE resulted in a table with an index, yet relhasindex=f. That is a
source of index corruption. Back-patch to v12 (all supported versions).
The back branch versions don't change WAL, because those branches just
added end-of-recovery SIResetAll(). All branches change the ABI of
extern function PrepareToInvalidateCacheTuple(). No PGXN extension
calls that, and there's no apparent use case in extensions.
Reviewed by Nitin Motiani and (in earlier versions) Andres Freund.
Discussion: https://postgr.es/m/20240523000548.58.nmisch@google.com
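To make the failure mode concrete, here is a toy, self-contained C model of the race the message describes (all names are invented for illustration; none of this is PostgreSQL code). Backend A's inplace write survives ROLLBACK, but a transactional inval queued alongside it does not, so backend B later rewrites the row from a stale cache:

```c
/*
 * Toy model of the race (invented names; not PostgreSQL code).
 * Backend A's inplace write survives ROLLBACK; a transactional
 * invalidation queued with it does not.
 */
#include <stdbool.h>
#include <stdio.h>

struct pg_class_row { bool relhasindex; };

static struct pg_class_row shared = {false};   /* on-disk catalog row */
static struct pg_class_row cache_b = {false};  /* backend B's cached copy */
static bool inval_pending = false;             /* A's queued transactional inval */

/* Backend A: CREATE INDEX updates the row in place, then the xact aborts. */
static void
backend_a_inplace_update(bool transactional_inval)
{
    shared.relhasindex = true;      /* inplace write: survives ROLLBACK */
    if (transactional_inval)
        inval_pending = true;       /* would be sent only at COMMIT */
    else
        cache_b.relhasindex = true; /* nontransactional: delivered now */

    inval_pending = false;          /* ROLLBACK discards the queued inval */
}

/* Backend B: ALTER TABLE writes the whole row back from its cache. */
static void
backend_b_ddl(void)
{
    shared = cache_b;
}

int
main(void)
{
    backend_a_inplace_update(true);             /* old behavior */
    backend_b_ddl();
    printf("transactional inval:    relhasindex=%d (stale)\n",
           (int) shared.relhasindex);

    shared.relhasindex = cache_b.relhasindex = false;
    backend_a_inplace_update(false);            /* patched behavior */
    backend_b_ddl();
    printf("nontransactional inval: relhasindex=%d\n",
           (int) shared.relhasindex);
    return 0;
}
```

Compiled with `cc toy.c && ./a.out`, the first line shows the stale relhasindex=0 outcome the commit message describes (an index exists, yet relhasindex=f); delivering the inval immediately keeps backend B's cache current, so its write preserves the inplace update.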
Diffstat (limited to 'src/backend/access/heap/heapam.c')
-rw-r--r-- | src/backend/access/heap/heapam.c | 43 |
1 file changed, 36 insertions(+), 7 deletions(-)
```diff
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 33d69641269..48c241f0088 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -6302,6 +6302,24 @@ heap_inplace_update_and_unlock(Relation relation,
 	if (oldlen != newlen || htup->t_hoff != tuple->t_data->t_hoff)
 		elog(ERROR, "wrong tuple length");
 
+	/*
+	 * Construct shared cache inval if necessary.  Note that because we only
+	 * pass the new version of the tuple, this mustn't be used for any
+	 * operations that could change catcache lookup keys.  But we aren't
+	 * bothering with index updates either, so that's true a fortiori.
+	 */
+	CacheInvalidateHeapTupleInplace(relation, tuple, NULL);
+
+	/*
+	 * Unlink relcache init files as needed.  If unlinking, acquire
+	 * RelCacheInitLock until after associated invalidations.  By doing this
+	 * in advance, if we checkpoint and then crash between inplace
+	 * XLogInsert() and inval, we don't rely on StartupXLOG() ->
+	 * RelationCacheInitFileRemove().  That uses elevel==LOG, so replay would
+	 * neglect to PANIC on EIO.
+	 */
+	PreInplace_Inval();
+
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
@@ -6345,17 +6363,28 @@ heap_inplace_update_and_unlock(Relation relation,
 		PageSetLSN(BufferGetPage(buffer), recptr);
 	}
 
+	LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+
+	/*
+	 * Send invalidations to shared queue.  SearchSysCacheLocked1() assumes
+	 * we do this before UnlockTuple().
+	 *
+	 * If we're mutating a tuple visible only to this transaction, there's an
+	 * equivalent transactional inval from the action that created the tuple,
+	 * and this inval is superfluous.
+	 */
+	AtInplace_Inval();
+
 	END_CRIT_SECTION();
+	UnlockTuple(relation, &tuple->t_self, InplaceUpdateTupleLock);
 
-	heap_inplace_unlock(relation, oldtup, buffer);
+	AcceptInvalidationMessages();	/* local processing of just-sent inval */
 
 	/*
-	 * Send out shared cache inval if necessary.  Note that because we only
-	 * pass the new version of the tuple, this mustn't be used for any
-	 * operations that could change catcache lookup keys.  But we aren't
-	 * bothering with index updates either, so that's true a fortiori.
-	 *
-	 * XXX ROLLBACK discards the invalidation.  See test inplace-inval.spec.
+	 * Queue a transactional inval.  The immediate invalidation we just sent
+	 * is the only one known to be necessary.  To reduce risk from the
+	 * transition to immediate invalidation, continue sending a transactional
+	 * invalidation like we've long done.  Third-party code might rely on it.
 	 */
 	if (!IsBootstrapProcessingMode())
 		CacheInvalidateHeapTuple(relation, tuple, NULL);
```
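The second hunk also encodes an ordering contract: the shared-queue inval is sent before UnlockTuple(), so a reader that takes the tuple lock and then processes invalidations cannot miss it. Here is a minimal pthreads sketch of that contract under toy assumptions (the mutex, shared_row, and inval_queued stand in for the heavyweight tuple lock, the catalog row, and the sinval queue; this is a model, not PostgreSQL code):

```c
/*
 * Sketch of the ordering contract (a model, not PostgreSQL code):
 * the writer queues the inval BEFORE releasing the tuple lock, so a
 * reader that takes the lock and then processes invals sees the update.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t tuple_lock = PTHREAD_MUTEX_INITIALIZER;
static int shared_row = 0;   /* catalog row contents */
static int inval_queued = 0; /* shared invalidation queue (one slot) */
static int reader_cache = 0; /* reader's catcache copy */

static void *
writer(void *arg)
{
    (void) arg;
    pthread_mutex_lock(&tuple_lock);   /* LockTuple(InplaceUpdateTupleLock) */
    shared_row = 1;                    /* inplace update */
    inval_queued = 1;                  /* AtInplace_Inval(): before unlock */
    pthread_mutex_unlock(&tuple_lock); /* UnlockTuple() */
    return NULL;
}

static void *
reader(void *arg)
{
    (void) arg;
    pthread_mutex_lock(&tuple_lock);   /* SearchSysCacheLocked1() locks tuple */
    if (inval_queued)                  /* AcceptInvalidationMessages() */
        reader_cache = shared_row;     /* rebuild the cache entry */
    pthread_mutex_unlock(&tuple_lock);
    return NULL;
}

int
main(void)
{
    pthread_t w, r;

    pthread_create(&w, NULL, writer, NULL);
    pthread_create(&r, NULL, reader, NULL);
    pthread_join(w, NULL);
    pthread_join(r, NULL);
    printf("reader cache: %d\n", reader_cache); /* 1 iff writer locked first */
    return 0;
}
```

Build with `cc -pthread sketch.c`. Whichever thread wins the lock, the reader never caches a value older than the last inval it has processed, which is the property SearchSysCacheLocked1() needs.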