author     Jeff Davis <jdavis@postgresql.org>  2022-11-10 14:46:30 -0800
committer  Jeff Davis <jdavis@postgresql.org>  2022-11-11 12:46:34 -0800
commit     58a45bb1d82b90e33a45b646717bf6a035256ded
tree       fcf1102dc053c716439f66bd4c89a80ee964e008  /src/backend/access/heap/heapam.c
parent     294a2199a331ff719e0d0fe70fd2b6200689eb16
Fix theoretical torn page hazard.
The original report was concerned with a possible inconsistency
between the heap and the visibility map, which I was unable to
confirm. The concern has been retracted.
However, there did seem to be a torn page hazard when using
checksums. By not setting the heap page LSN during redo, the
protections of minRecoveryPoint were bypassed. Fixed, along with a
misleading comment.
It may have been impossible to hit this problem in practice, because
it would require a page tear between the checksum and the flags, so I
am marking this as a theoretical risk. But, as discussed, it did
violate expectations about the page LSN, so it may have other
consequences.
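The interaction between the page LSN and minRecoveryPoint can be sketched with a small, hypothetical C model. The names here (PageModel, redo_set_all_visible, flush_page, hint_bits_need_wal) are illustrative stand-ins, not PostgreSQL APIs; only XLogHintBitIsNeeded(), PageSetLSN(), and minRecoveryPoint correspond to real PostgreSQL concepts:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical, simplified model of the fix. PageModel, redo_set_all_visible
 * and flush_page are illustrative stand-ins, not PostgreSQL functions. */

typedef uint64_t XLogRecPtr;

static bool hint_bits_need_wal = true;  /* stands in for XLogHintBitIsNeeded() */

typedef struct
{
    XLogRecPtr  lsn;            /* what PageGetLSN()/PageSetLSN() track */
    bool        all_visible;    /* models the all-visible page flag */
} PageModel;

/* Redo side: with the fix, the page LSN is bumped whenever checksums or
 * wal_log_hints require torn-page protection. Without the PageSetLSN call,
 * the page keeps its stale LSN even though redo modified it. */
static void
redo_set_all_visible(PageModel *page, XLogRecPtr record_lsn)
{
    page->all_visible = true;
    if (hint_bits_need_wal)
        page->lsn = record_lsn;     /* models the added PageSetLSN(page, lsn) */
}

/* Flush side: before a dirty page is written out during recovery,
 * minRecoveryPoint is advanced to cover the page's LSN. If redo left the
 * LSN stale, a flush advances nothing, and a crash mid-write could leave a
 * torn page that a subsequent recovery considers consistent too early. */
static XLogRecPtr
flush_page(const PageModel *page, XLogRecPtr minRecoveryPoint)
{
    if (page->lsn > minRecoveryPoint)
        minRecoveryPoint = page->lsn;
    return minRecoveryPoint;
}
```

In this model, once redo sets the page LSN, any flush pushes minRecoveryPoint past the record that last touched the page, so recovery is forced to replay that record before declaring consistency.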
Backpatch to all supported versions.
Reported-by: Konstantin Knizhnik
Reviewed-by: Konstantin Knizhnik
Discussion: https://postgr.es/m/fed17dac-8cb8-4f5b-d462-1bb4908c029e@garret.ru
Backpatch-through: 11
Diffstat (limited to 'src/backend/access/heap/heapam.c')
-rw-r--r--  src/backend/access/heap/heapam.c  6
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 0a3ebecd221..fbe7c222f73 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -7956,8 +7956,7 @@ heap_xlog_visible(XLogReaderState *record)
 	/*
 	 * We don't bump the LSN of the heap page when setting the visibility
 	 * map bit (unless checksums or wal_hint_bits is enabled, in which
-	 * case we must), because that would generate an unworkable volume of
-	 * full-page writes. This exposes us to torn page hazards, but since
+	 * case we must). This exposes us to torn page hazards, but since
 	 * we're not inspecting the existing page contents in any way, we
 	 * don't care.
 	 *
@@ -7971,6 +7970,9 @@ heap_xlog_visible(XLogReaderState *record)
 
 		PageSetAllVisible(page);
 
+		if (XLogHintBitIsNeeded())
+			PageSetLSN(page, lsn);
+
 		MarkBufferDirty(buffer);
 	}
 	else if (action == BLK_RESTORED)