author    Andres Freund <andres@anarazel.de>    2019-03-11 12:46:41 -0700
committer Andres Freund <andres@anarazel.de>    2019-03-11 12:46:41 -0700
commit    c2fe139c201c48f1133e9fbea2dd99b8efe2fadd (patch)
tree      ab0a6261b412b8284b6c91af158f72af97e02a35 /src/backend/access/index/indexam.c
parent    a47841528107921f02c280e0c5f91c5a1d86adb0 (diff)
tableam: Add and use scan APIs.
To allow table accesses to not depend directly on heap, several new abstractions are needed. Specifically:

1) Heap scans need to be generalized into table scans. Do this by introducing TableScanDesc, which will be the "base class" for individual AMs. This contains the AM-independent fields from HeapScanDesc.

   The previous heap_{beginscan,rescan,endscan} et al. have been replaced with table_ versions.

   There's no direct replacement for heap_getnext(), as that returned a HeapTuple, which is undesirable for other AMs. Instead there's table_scan_getnextslot(). But note that heap_getnext() lives on; it's still used widely to access catalog tables.

   This is achieved by new scan_begin, scan_end, scan_rescan, scan_getnextslot callbacks.

2) The portion of parallel scans that's shared between backends needs to be initialized without the caller doing per-AM work. To achieve that, new parallelscan_{estimate, initialize, reinitialize} callbacks are introduced, which operate on a new ParallelTableScanDesc, which again can be subclassed by AMs.

   As it is likely that several AMs are going to be block oriented, block-oriented callbacks that can be shared between such AMs are provided and used by heap: table_block_parallelscan_{estimate, initialize, reinitialize} as callbacks, and table_block_parallelscan_{nextpage, init} for use in AMs. These operate on a ParallelBlockTableScanDesc.

3) Index scans need to be able to access tables to return a tuple, and there needs to be state across individual accesses to the heap to store state like buffers. That's now handled by introducing a sort-of-scan, IndexFetchTable, which again is intended to be subclassed by individual AMs (for heap, IndexFetchHeap).

   The relevant callbacks for an AM are index_fetch_{begin, reset, end} to manage the necessary state, and index_fetch_tuple to retrieve an indexed tuple. Note that index_fetch_tuple implementations need to be smarter than just blindly fetching tuples for AMs that have optimizations similar to heap's HOT: the currently alive tuple in the update chain needs to be fetched if appropriate.

   Similar to table_scan_getnextslot(), it's undesirable to continue to return HeapTuples. Thus index_fetch_heap (might want to rename that later) now accepts a slot as an argument. Core code doesn't have a lot of call sites performing index scans without going through the systable_* API (in contrast to loads of heap_getnext calls and working directly with HeapTuples).

   Index scans now store the result of a search in IndexScanDesc->xs_heaptid, rather than xs_ctup->t_self. As the target is not generally a HeapTuple anymore, that seems cleaner.

To be able to sensibly adapt code to use the above, two further callbacks have been introduced:

a) slot_callbacks returns a TupleTableSlotOps* suitable for creating slots capable of holding a tuple of the AM's type. table_slot_callbacks() and table_slot_create() are based upon that, but have additional logic to deal with views, foreign tables, etc.

   While this change could have been done separately, nearly all the call sites that needed to be adapted for the rest of this commit would also have needed to be adapted for table_slot_callbacks(), making separation not worthwhile.

b) tuple_satisfies_snapshot checks whether the tuple in a slot is currently visible according to a snapshot. That's required as a few places now don't have a buffer + HeapTuple around, but a slot (which in heap's case internally has that information).
Additionally a few infrastructure changes were needed:

I) SysScanDesc, as used by systable_{beginscan, getnext} et al., now internally uses a slot to keep track of tuples. While systable_getnext() still returns HeapTuples, and will do so for the foreseeable future, the index API (see 1) above) now only deals with slots.

The remainder, and largest part, of this commit is then adjusting all scans in postgres to use the new APIs.

Author: Andres Freund, Haribabu Kommi, Alvaro Herrera
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
    https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
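As an illustration of points 1 and a above, here is a minimal sketch (not part of this commit) of how a caller iterates over a table with the new API; the helper name scan_all_tuples is hypothetical, and snapshot management and error handling are elided:

#include "postgres.h"

#include "access/tableam.h"
#include "executor/tuptable.h"
#include "utils/snapmgr.h"

/* Hypothetical example: visit every visible tuple in "rel". */
static void
scan_all_tuples(Relation rel)
{
    /* table_slot_create() relies on the AM's slot_callbacks */
    TupleTableSlot *slot = table_slot_create(rel, NULL);
    TableScanDesc scan = table_beginscan(rel, GetActiveSnapshot(),
                                         0, NULL);

    while (table_scan_getnextslot(scan, ForwardScanDirection, slot))
    {
        /* the current tuple is held in "slot"; process it here */
    }

    table_endscan(scan);
    ExecDropSingleTupleTableSlot(slot);
}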
Diffstat (limited to 'src/backend/access/index/indexam.c')
-rw-r--r--  src/backend/access/index/indexam.c | 164
1 file changed, 64 insertions(+), 100 deletions(-)
diff --git a/src/backend/access/index/indexam.c b/src/backend/access/index/indexam.c
index 4ad30186d97..ae1c87ebadd 100644
--- a/src/backend/access/index/indexam.c
+++ b/src/backend/access/index/indexam.c
@@ -72,6 +72,7 @@
#include "access/amapi.h"
#include "access/heapam.h"
#include "access/relscan.h"
+#include "access/tableam.h"
#include "access/transam.h"
#include "access/xlog.h"
#include "catalog/index.h"
@@ -235,6 +236,9 @@ index_beginscan(Relation heapRelation,
scan->heapRelation = heapRelation;
scan->xs_snapshot = snapshot;
+ /* prepare to fetch index matches from table */
+ scan->xs_heapfetch = table_index_fetch_begin(heapRelation);
+
return scan;
}
@@ -318,16 +322,12 @@ index_rescan(IndexScanDesc scan,
Assert(nkeys == scan->numberOfKeys);
Assert(norderbys == scan->numberOfOrderBys);
- /* Release any held pin on a heap page */
- if (BufferIsValid(scan->xs_cbuf))
- {
- ReleaseBuffer(scan->xs_cbuf);
- scan->xs_cbuf = InvalidBuffer;
- }
-
- scan->xs_continue_hot = false;
+ /* Release resources (like buffer pins) from table accesses */
+ if (scan->xs_heapfetch)
+ table_index_fetch_reset(scan->xs_heapfetch);
scan->kill_prior_tuple = false; /* for safety */
+ scan->xs_heap_continue = false;
scan->indexRelation->rd_indam->amrescan(scan, keys, nkeys,
orderbys, norderbys);
@@ -343,11 +343,11 @@ index_endscan(IndexScanDesc scan)
SCAN_CHECKS;
CHECK_SCAN_PROCEDURE(amendscan);
- /* Release any held pin on a heap page */
- if (BufferIsValid(scan->xs_cbuf))
+ /* Release resources (like buffer pins) from table accesses */
+ if (scan->xs_heapfetch)
{
- ReleaseBuffer(scan->xs_cbuf);
- scan->xs_cbuf = InvalidBuffer;
+ table_index_fetch_end(scan->xs_heapfetch);
+ scan->xs_heapfetch = NULL;
}
/* End the AM's scan */
@@ -379,17 +379,16 @@ index_markpos(IndexScanDesc scan)
/* ----------------
* index_restrpos - restore a scan position
*
- * NOTE: this only restores the internal scan state of the index AM.
- * The current result tuple (scan->xs_ctup) doesn't change. See comments
- * for ExecRestrPos().
- *
- * NOTE: in the presence of HOT chains, mark/restore only works correctly
- * if the scan's snapshot is MVCC-safe; that ensures that there's at most one
- * returnable tuple in each HOT chain, and so restoring the prior state at the
- * granularity of the index AM is sufficient. Since the only current user
- * of mark/restore functionality is nodeMergejoin.c, this effectively means
- * that merge-join plans only work for MVCC snapshots. This could be fixed
- * if necessary, but for now it seems unimportant.
+ * NOTE: this only restores the internal scan state of the index AM. See
+ * comments for ExecRestrPos().
+ *
+ * NOTE: For heap, in the presence of HOT chains, mark/restore only works
+ * correctly if the scan's snapshot is MVCC-safe; that ensures that there's at
+ * most one returnable tuple in each HOT chain, and so restoring the prior
+ * state at the granularity of the index AM is sufficient. Since the only
+ * current user of mark/restore functionality is nodeMergejoin.c, this
+ * effectively means that merge-join plans only work for MVCC snapshots. This
+ * could be fixed if necessary, but for now it seems unimportant.
* ----------------
*/
void
@@ -400,9 +399,12 @@ index_restrpos(IndexScanDesc scan)
SCAN_CHECKS;
CHECK_SCAN_PROCEDURE(amrestrpos);
- scan->xs_continue_hot = false;
+ /* release resources (like buffer pins) from table accesses */
+ if (scan->xs_heapfetch)
+ table_index_fetch_reset(scan->xs_heapfetch);
scan->kill_prior_tuple = false; /* for safety */
+ scan->xs_heap_continue = false;
scan->indexRelation->rd_indam->amrestrpos(scan);
}
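The rescan, endscan, and restrpos hunks above all follow the same pattern: direct buffer-pin bookkeeping is replaced by the fetch lifecycle from point 3 of the commit message. A hedged, self-contained sketch of that lifecycle outside indexam.c; the driver function fetch_one_chain is hypothetical, while the table_index_fetch_* calls are the real tableam.h entry points:

#include "postgres.h"

#include "access/tableam.h"
#include "executor/tuptable.h"

static void
fetch_one_chain(Relation rel, ItemPointerData tid, Snapshot snapshot,
                TupleTableSlot *slot)
{
    IndexFetchTableData *fetch = table_index_fetch_begin(rel);
    bool        call_again = false;
    bool        all_dead = false;

    /* returns true while a visible chain member has been put in "slot" */
    while (table_index_fetch_tuple(fetch, &tid, snapshot, slot,
                                   &call_again, &all_dead))
    {
        /* process the tuple in "slot" */
        if (!call_again)
            break;              /* no further members to visit */
    }

    table_index_fetch_reset(fetch); /* at rescan: drop pins, keep state */
    table_index_fetch_end(fetch);   /* at endscan: release everything */
}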
@@ -483,6 +485,9 @@ index_parallelrescan(IndexScanDesc scan)
{
SCAN_CHECKS;
+ if (scan->xs_heapfetch)
+ table_index_fetch_reset(scan->xs_heapfetch);
+
/* amparallelrescan is optional; assume no-op if not provided by AM */
if (scan->indexRelation->rd_indam->amparallelrescan != NULL)
scan->indexRelation->rd_indam->amparallelrescan(scan);
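index_parallelrescan() above only resets the fetch state; the shared, AM-independent part of a parallel table scan (point 2 of the commit message) is set up via the new parallelscan callbacks. A minimal leader-side sketch, assuming the caller has already allocated table_parallelscan_estimate() bytes of shared memory (DSM plumbing elided; begin_shared_scan is hypothetical):

#include "postgres.h"

#include "access/tableam.h"

static TableScanDesc
begin_shared_scan(Relation rel, Snapshot snapshot, void *shared_mem)
{
    ParallelTableScanDesc pscan = (ParallelTableScanDesc) shared_mem;

    /* initialize the shared state; workers attach to the same pscan */
    table_parallelscan_initialize(rel, pscan, snapshot);

    /* each participant (leader and workers) joins the scan like this */
    return table_beginscan_parallel(rel, pscan);
}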
@@ -513,6 +518,9 @@ index_beginscan_parallel(Relation heaprel, Relation indexrel, int nkeys,
scan->heapRelation = heaprel;
scan->xs_snapshot = snapshot;
+ /* prepare to fetch index matches from table */
+ scan->xs_heapfetch = table_index_fetch_begin(heaprel);
+
return scan;
}
@@ -535,7 +543,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
/*
* The AM's amgettuple proc finds the next index entry matching the scan
- * keys, and puts the TID into scan->xs_ctup.t_self. It should also set
+ * keys, and puts the TID into scan->xs_heaptid. It should also set
* scan->xs_recheck and possibly scan->xs_itup/scan->xs_hitup, though we
* pay no attention to those fields here.
*/
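The comment above states the adjusted contract an AM's amgettuple must meet. A hypothetical skeleton, with toy_find_next_match standing in for the AM's actual index traversal:

#include "postgres.h"

#include "access/genam.h"
#include "access/relscan.h"

/* Hypothetical: locate the next matching index entry (stubbed out). */
static bool
toy_find_next_match(IndexScanDesc scan, ScanDirection dir, ItemPointer tid)
{
    return false;               /* a real AM would walk its index here */
}

static bool
toy_amgettuple(IndexScanDesc scan, ScanDirection dir)
{
    ItemPointerData next_tid;

    if (!toy_find_next_match(scan, dir, &next_tid))
        return false;           /* no more matches */

    scan->xs_heaptid = next_tid;    /* formerly scan->xs_ctup.t_self */
    scan->xs_recheck = false;       /* quals matched exactly; no recheck */
    return true;
}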
@@ -543,23 +551,23 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
/* Reset kill flag immediately for safety */
scan->kill_prior_tuple = false;
+ scan->xs_heap_continue = false;
/* If we're out of index entries, we're done */
if (!found)
{
- /* ... but first, release any held pin on a heap page */
- if (BufferIsValid(scan->xs_cbuf))
- {
- ReleaseBuffer(scan->xs_cbuf);
- scan->xs_cbuf = InvalidBuffer;
- }
+ /* release resources (like buffer pins) from table accesses */
+ if (scan->xs_heapfetch)
+ table_index_fetch_reset(scan->xs_heapfetch);
+
return NULL;
}
+ Assert(ItemPointerIsValid(&scan->xs_heaptid));
pgstat_count_index_tuples(scan->indexRelation, 1);
/* Return the TID of the tuple we found. */
- return &scan->xs_ctup.t_self;
+ return &scan->xs_heaptid;
}
/* ----------------
@@ -580,53 +588,18 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
* enough information to do it efficiently in the general case.
* ----------------
*/
-HeapTuple
-index_fetch_heap(IndexScanDesc scan)
+bool
+index_fetch_heap(IndexScanDesc scan, TupleTableSlot *slot)
{
- ItemPointer tid = &scan->xs_ctup.t_self;
bool all_dead = false;
- bool got_heap_tuple;
-
- /* We can skip the buffer-switching logic if we're in mid-HOT chain. */
- if (!scan->xs_continue_hot)
- {
- /* Switch to correct buffer if we don't have it already */
- Buffer prev_buf = scan->xs_cbuf;
-
- scan->xs_cbuf = ReleaseAndReadBuffer(scan->xs_cbuf,
- scan->heapRelation,
- ItemPointerGetBlockNumber(tid));
+ bool found;
- /*
- * Prune page, but only if we weren't already on this page
- */
- if (prev_buf != scan->xs_cbuf)
- heap_page_prune_opt(scan->heapRelation, scan->xs_cbuf);
- }
+ found = table_index_fetch_tuple(scan->xs_heapfetch, &scan->xs_heaptid,
+ scan->xs_snapshot, slot,
+ &scan->xs_heap_continue, &all_dead);
- /* Obtain share-lock on the buffer so we can examine visibility */
- LockBuffer(scan->xs_cbuf, BUFFER_LOCK_SHARE);
- got_heap_tuple = heap_hot_search_buffer(tid, scan->heapRelation,
- scan->xs_cbuf,
- scan->xs_snapshot,
- &scan->xs_ctup,
- &all_dead,
- !scan->xs_continue_hot);
- LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
-
- if (got_heap_tuple)
- {
- /*
- * Only in a non-MVCC snapshot can more than one member of the HOT
- * chain be visible.
- */
- scan->xs_continue_hot = !IsMVCCSnapshot(scan->xs_snapshot);
+ if (found)
pgstat_count_heap_fetch(scan->indexRelation);
- return &scan->xs_ctup;
- }
-
- /* We've reached the end of the HOT chain. */
- scan->xs_continue_hot = false;
/*
* If we scanned a whole HOT chain and found only dead tuples, tell index
@@ -638,17 +611,17 @@ index_fetch_heap(IndexScanDesc scan)
if (!scan->xactStartedInRecovery)
scan->kill_prior_tuple = all_dead;
- return NULL;
+ return found;
}
/* ----------------
- * index_getnext - get the next heap tuple from a scan
+ * index_getnext_slot - get the next tuple from a scan
*
- * The result is the next heap tuple satisfying the scan keys and the
- * snapshot, or NULL if no more matching tuples exist.
+ * The result is true if a tuple satisfying the scan keys and the snapshot was
+ * found, false otherwise. The tuple is stored in the specified slot.
*
- * On success, the buffer containing the heap tup is pinned (the pin will be
- * dropped in a future index_getnext_tid, index_fetch_heap or index_endscan
+ * On success, resources (like buffer pins) are likely to be held, and will be
+ * dropped by a future index_getnext_tid, index_fetch_heap or index_endscan
* call).
*
* Note: caller must check scan->xs_recheck, and perform rechecking of the
@@ -656,32 +629,23 @@ index_fetch_heap(IndexScanDesc scan)
* enough information to do it efficiently in the general case.
* ----------------
*/
-HeapTuple
-index_getnext(IndexScanDesc scan, ScanDirection direction)
+bool
+index_getnext_slot(IndexScanDesc scan, ScanDirection direction, TupleTableSlot *slot)
{
- HeapTuple heapTuple;
- ItemPointer tid;
-
for (;;)
{
- if (scan->xs_continue_hot)
- {
- /*
- * We are resuming scan of a HOT chain after having returned an
- * earlier member. Must still hold pin on current heap page.
- */
- Assert(BufferIsValid(scan->xs_cbuf));
- Assert(ItemPointerGetBlockNumber(&scan->xs_ctup.t_self) ==
- BufferGetBlockNumber(scan->xs_cbuf));
- }
- else
+ if (!scan->xs_heap_continue)
{
+ ItemPointer tid;
+
/* Time to fetch the next TID from the index */
tid = index_getnext_tid(scan, direction);
/* If we're out of index entries, we're done */
if (tid == NULL)
break;
+
+ Assert(ItemPointerEquals(tid, &scan->xs_heaptid));
}
/*
@@ -689,12 +653,12 @@ index_getnext(IndexScanDesc scan, ScanDirection direction)
* If we don't find anything, loop around and grab the next TID from
* the index.
*/
- heapTuple = index_fetch_heap(scan);
- if (heapTuple != NULL)
- return heapTuple;
+ Assert(ItemPointerIsValid(&scan->xs_heaptid));
+ if (index_fetch_heap(scan, slot))
+ return true;
}
- return NULL; /* failure exit */
+ return false;
}
/* ----------------
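Call sites throughout the tree are adapted along these lines (a hedged sketch; count_matches is hypothetical, and the pre-commit version would have consumed HeapTuples from index_getnext() instead):

#include "postgres.h"

#include "access/genam.h"
#include "access/tableam.h"
#include "executor/tuptable.h"

static uint64
count_matches(Relation heapRel, Relation indexRel, Snapshot snapshot,
              int nkeys, ScanKey keys)
{
    IndexScanDesc scan = index_beginscan(heapRel, indexRel, snapshot,
                                         nkeys, 0);
    TupleTableSlot *slot = table_slot_create(heapRel, NULL);
    uint64      ntuples = 0;

    index_rescan(scan, keys, nkeys, NULL, 0);

    /* new style: each match is returned in the slot, not as a HeapTuple */
    while (index_getnext_slot(scan, ForwardScanDirection, slot))
        ntuples++;

    index_endscan(scan);
    ExecDropSingleTupleTableSlot(slot);
    return ntuples;
}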