author		Robert Haas <rhaas@postgresql.org>	2016-03-01 21:49:41 -0500
committer	Robert Haas <rhaas@postgresql.org>	2016-03-01 21:49:41 -0500
commit		a892234f830e832110f63fc0a2afce2fb21d1584 (patch)
tree		fbb37cd6dc4e68f450bf5360610756368c90ae46	/src/backend/executor/nodeIndexonlyscan.c
parent		68c521eb92c3515e3306f51a7fd3f32d16c97524 (diff)
Change the format of the VM fork to add a second bit per page.
The new bit indicates whether every tuple on the page is already frozen. It is cleared only when the all-visible bit is cleared, and it can be set only when we vacuum a page and find that every tuple on that page is both visible to every transaction and in no need of any future vacuuming.

A future commit will use this new bit to optimize away full-table scans that would otherwise be triggered by XID wraparound considerations. A page which is merely all-visible must still be scanned in that case, but a page which is all-frozen need not be. This commit does not attempt that optimization, although that optimization is the goal here. It seems better to get the basic infrastructure in place first.

Per discussion, it's very desirable for pg_upgrade to automatically migrate existing VM forks from the old format to the new format. That, too, will be handled in a follow-on patch.

Masahiko Sawada, reviewed by Kyotaro Horiguchi, Fujii Masao, Amit Kapila, Simon Riggs, Andres Freund, and others, and substantially revised by me.
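To make the two-bit layout concrete, here is a minimal, self-contained sketch. It is not PostgreSQL source; the flag names, helper function, and arithmetic are illustrative assumptions. With two status bits per heap page, one map byte covers four heap pages, and a page's bits are recovered with a shift and a mask.

/*
 * Illustrative two-bit-per-page visibility map (a sketch, not the
 * real PostgreSQL data structure).  One map byte covers four heap
 * pages; each page gets an all-visible bit and an all-frozen bit.
 */
#include <stdint.h>
#include <stdio.h>

#define ALL_VISIBLE 0x01	/* every tuple visible to all transactions */
#define ALL_FROZEN  0x02	/* every tuple already frozen */

#define BITS_PER_HEAPBLOCK 2
#define HEAPBLOCKS_PER_BYTE (8 / BITS_PER_HEAPBLOCK)	/* = 4 */

/* Extract the two status bits for heap block 'blkno' from the map. */
static uint8_t
vm_get_status(const uint8_t *map, uint32_t blkno)
{
	uint32_t	byteno = blkno / HEAPBLOCKS_PER_BYTE;
	uint32_t	shift = BITS_PER_HEAPBLOCK * (blkno % HEAPBLOCKS_PER_BYTE);

	return (map[byteno] >> shift) & (ALL_VISIBLE | ALL_FROZEN);
}

int
main(void)
{
	/* One map byte tracking heap blocks 0..3. */
	uint8_t		map[1] = {0};

	/* Mark block 0 all-visible only; block 2 all-visible and all-frozen. */
	map[0] |= ALL_VISIBLE << (BITS_PER_HEAPBLOCK * 0);
	map[0] |= (ALL_VISIBLE | ALL_FROZEN) << (BITS_PER_HEAPBLOCK * 2);

	for (uint32_t blk = 0; blk < 4; blk++)
	{
		uint8_t		status = vm_get_status(map, blk);

		printf("block %u: all-visible=%d all-frozen=%d\n", (unsigned) blk,
			   (status & ALL_VISIBLE) != 0, (status & ALL_FROZEN) != 0);
	}
	return 0;
}

Note the invariant from the message above: in this encoding ALL_FROZEN is only meaningful alongside ALL_VISIBLE, since the frozen bit is cleared whenever the all-visible bit is cleared.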
Diffstat (limited to 'src/backend/executor/nodeIndexonlyscan.c')
-rw-r--r--	src/backend/executor/nodeIndexonlyscan.c	12
1 file changed, 6 insertions, 6 deletions
diff --git a/src/backend/executor/nodeIndexonlyscan.c b/src/backend/executor/nodeIndexonlyscan.c
index 90afbdca652..4f6f91c8dba 100644
--- a/src/backend/executor/nodeIndexonlyscan.c
+++ b/src/backend/executor/nodeIndexonlyscan.c
@@ -85,9 +85,9 @@ IndexOnlyNext(IndexOnlyScanState *node)
* which all tuples are known visible to everybody. In any case,
* we'll use the index tuple not the heap tuple as the data source.
*
- * Note on Memory Ordering Effects: visibilitymap_test does not lock
- * the visibility map buffer, and therefore the result we read here
- * could be slightly stale. However, it can't be stale enough to
+ * Note on Memory Ordering Effects: visibilitymap_get_status does not
+ * lock the visibility map buffer, and therefore the result we read
+ * here could be slightly stale. However, it can't be stale enough to
* matter.
*
* We need to detect clearing a VM bit due to an insert right away,
@@ -114,9 +114,9 @@ IndexOnlyNext(IndexOnlyScanState *node)
* It's worth going through this complexity to avoid needing to lock
* the VM buffer, which could cause significant contention.
*/
- if (!visibilitymap_test(scandesc->heapRelation,
- ItemPointerGetBlockNumber(tid),
- &node->ioss_VMBuffer))
+ if (!VM_ALL_VISIBLE(scandesc->heapRelation,
+ ItemPointerGetBlockNumber(tid),
+ &node->ioss_VMBuffer))
{
/*
* Rats, we have to visit the heap to check visibility.
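For context on the new call shown in this hunk: under the two-bit format a single boolean no longer describes a page, so the old visibilitymap_test gives way to visibilitymap_get_status plus boolean wrapper macros such as VM_ALL_VISIBLE. The sketch below is an assumption about their shape consistent with the commit message, not a verbatim copy of src/include/access/visibilitymap.h; the stub function and stand-in types exist only to make the example self-contained and runnable.

/*
 * Sketch of boolean wrappers over visibilitymap_get_status(), which
 * under the two-bit format returns both status bits for a heap block.
 * Flag values and macro shapes are assumptions, not verbatim headers.
 */
#include <stdint.h>
#include <stdio.h>

typedef struct Relation { int dummy; } *Relation;	/* stand-in types */
typedef uint32_t BlockNumber;
typedef int Buffer;

#define VISIBILITYMAP_ALL_VISIBLE 0x01	/* assumed flag values */
#define VISIBILITYMAP_ALL_FROZEN  0x02

/* Stub standing in for the real function: pretend block 7 is both. */
static uint8_t
visibilitymap_get_status(Relation rel, BlockNumber blk, Buffer *vmbuf)
{
	(void) rel;
	(void) vmbuf;
	return (blk == 7) ?
		(VISIBILITYMAP_ALL_VISIBLE | VISIBILITYMAP_ALL_FROZEN) : 0;
}

/* Each macro fetches the status byte and masks the bit it cares about. */
#define VM_ALL_VISIBLE(r, b, v) \
	((visibilitymap_get_status((r), (b), (v)) & VISIBILITYMAP_ALL_VISIBLE) != 0)
#define VM_ALL_FROZEN(r, b, v) \
	((visibilitymap_get_status((r), (b), (v)) & VISIBILITYMAP_ALL_FROZEN) != 0)

int
main(void)
{
	struct Relation rel = {0};
	Buffer		vmbuf = 0;

	printf("block 7 all-visible: %d\n", VM_ALL_VISIBLE(&rel, 7, &vmbuf));
	printf("block 8 all-visible: %d\n", VM_ALL_VISIBLE(&rel, 8, &vmbuf));
	return 0;
}

This keeps the caller's pattern in IndexOnlyNext() unchanged in spirit: the scan still asks one yes/no question per TID without locking the VM buffer, exactly as the comment in the hunk above describes.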