author	Andres Freund <andres@anarazel.de>	2023-03-07 21:36:51 -0800
committer	Andres Freund <andres@anarazel.de>	2023-03-07 21:36:51 -0800
commit	e6d77f22c7a8bed88ceef3a3267931f4ce988664 (patch)
tree	dd9dde40c9a7d641cf8ba262667ad6534527b8c5
parent	695b34ab359d84cf8ea94f15852f12bbf04bb038 (diff)
Fix corruption due to vacuum_defer_cleanup_age underflowing 64bit xids
When vacuum_defer_cleanup_age is bigger than the current xid, including the
epoch, the subtraction of vacuum_defer_cleanup_age would lead to a wrapped
around xid. While that normally is not a problem, the subsequent conversion to
a 64bit xid results in a 64bit-xid very far into the future. As that xid is
used as a horizon to detect whether row versions are old enough to be removed,
that allows removal of rows that are still visible (i.e. corruption).

If vacuum_defer_cleanup_age was never changed from the default, there is no
chance of this bug occurring.

This bug was introduced in dc7420c2c92. A lesser version of it exists in
12-13, introduced by fb5344c969a, affecting only GiST.

The 12-13 version of the issue can, in rare cases, lead to pages in a gist
index getting recycled too early, potentially causing index entries to be
found multiple times.

The fix is fairly simple - don't allow vacuum_defer_cleanup_age to retreat
further than FirstNormalTransactionId.

Patches to make similar bugs easier to find, by adding asserts to the 64bit
xid infrastructure, have been proposed, but are not suitable for
backpatching.

Currently there are no tests for vacuum_defer_cleanup_age. A patch
introducing infrastructure to make writing a test easier has been posted to
the list.

Reported-by: Michail Nikolaev <michail.nikolaev@gmail.com>
Reviewed-by: Matthias van de Meent <boekewurm+postgres@gmail.com>
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20230108002923.cyoser3ttmt63bfn@awork3.anarazel.de
Backpatch: 12-, but impact/fix is smaller for 12-13
-rw-r--r--	src/backend/utils/time/snapmgr.c	20
1 file changed, 16 insertions(+), 4 deletions(-)
diff --git a/src/backend/utils/time/snapmgr.c b/src/backend/utils/time/snapmgr.c
index 9d02f7638b8..85b5d5cdb9e 100644
--- a/src/backend/utils/time/snapmgr.c
+++ b/src/backend/utils/time/snapmgr.c
@@ -983,24 +983,36 @@ GetFullRecentGlobalXmin(void)
 	uint32		nextxid_epoch;
 	TransactionId nextxid_xid;
 	uint32		epoch;
+	TransactionId horizon = RecentGlobalXmin;
 
-	Assert(TransactionIdIsNormal(RecentGlobalXmin));
+	Assert(TransactionIdIsNormal(horizon));
 
 	/*
 	 * Compute the epoch from the next XID's epoch. This relies on the fact
 	 * that RecentGlobalXmin must be within the 2 billion XID horizon from the
 	 * next XID.
+	 *
+	 * Need to be careful to prevent wrapping around during epoch 0, otherwise
+	 * we would generate an xid far into the future when converting to a
+	 * FullTransactionId. This can happen because RecentGlobalXmin can be held
+	 * back via vacuum_defer_cleanup_age.
 	 */
 	nextxid_full = ReadNextFullTransactionId();
 	nextxid_epoch = EpochFromFullTransactionId(nextxid_full);
 	nextxid_xid = XidFromFullTransactionId(nextxid_full);
 
-	if (RecentGlobalXmin > nextxid_xid)
+	if (horizon <= nextxid_xid)
+		epoch = nextxid_epoch;
+	else if (nextxid_epoch > 0)
 		epoch = nextxid_epoch - 1;
 	else
-		epoch = nextxid_epoch;
+	{
+		/* don't wrap around */
+		epoch = 0;
+		horizon = FirstNormalTransactionId;
+	}
 
-	return FullTransactionIdFromEpochAndXid(epoch, RecentGlobalXmin);
+	return FullTransactionIdFromEpochAndXid(epoch, horizon);
 }
/*