author     Andres Freund <andres@anarazel.de>  2019-03-11 12:46:41 -0700
committer  Andres Freund <andres@anarazel.de>  2019-03-11 12:46:41 -0700
commit     c2fe139c201c48f1133e9fbea2dd99b8efe2fadd (patch)
tree       ab0a6261b412b8284b6c91af158f72af97e02a35 /src/backend/commands/constraint.c
parent     a47841528107921f02c280e0c5f91c5a1d86adb0 (diff)
tableam: Add and use scan APIs.
To allow table accesses to not be directly dependent on heap, several
new abstractions are needed. Specifically:
1) Heap scans need to be generalized into table scans. Do this by
introducing TableScanDesc, which will be the "base class" for
individual AMs. This contains the AM-independent fields from
HeapScanDesc.
The previous heap_{beginscan,rescan,endscan} et al. have been
replaced with a table_ version.
There's no direct replacement for heap_getnext(), as that returned
a HeapTuple, which is undesirable for other AMs. Instead there's
table_scan_getnextslot(). But note that heap_getnext() lives on;
it's still widely used to access catalog tables.
This is achieved by new scan_begin, scan_end, scan_rescan,
scan_getnextslot callbacks.
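For illustration (not part of this commit), a minimal sketch of a
whole-table scan under the new API; the helper name scan_all_tuples
is hypothetical, and an already-opened, suitably locked relation and
an active snapshot are assumed:

#include "postgres.h"

#include "access/tableam.h"
#include "executor/tuptable.h"
#include "utils/snapmgr.h"

/*
 * Scan every visible tuple in "rel" via the AM-independent scan API.
 * table_slot_create() (see further below) returns a slot matching the
 * table's AM, so this works for any table AM, not just heap.
 */
static void
scan_all_tuples(Relation rel)
{
    TupleTableSlot *slot = table_slot_create(rel, NULL);
    TableScanDesc scan = table_beginscan(rel, GetActiveSnapshot(), 0, NULL);

    while (table_scan_getnextslot(scan, ForwardScanDirection, slot))
    {
        /* process the tuple currently stored in "slot" */
    }

    table_endscan(scan);
    ExecDropSingleTupleTableSlot(slot);
}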
2) The portion of a parallel scan that's shared between backends needs
to be set up without the user doing per-AM work. To achieve
that new parallelscan_{estimate, initialize, reinitialize}
callbacks are introduced, which operate on a new
ParallelTableScanDesc, which again can be subclassed by AMs.
As it is likely that several AMs are going to be block-oriented,
block-oriented callbacks that can be shared between such AMs are
provided and used by heap: table_block_parallelscan_{estimate,
initialize, reinitialize} as the callbacks, and
table_block_parallelscan_{nextpage, init} for use within AMs. These
operate on a ParallelBlockTableScanDesc.
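A hedged sketch of the resulting division of labor between the leader
and workers; the helper names setup_shared_scan and worker_scan are
hypothetical, and the DSM/shm_toc plumbing is reduced to its essentials:

#include "postgres.h"

#include "access/tableam.h"
#include "executor/tuptable.h"
#include "storage/shm_toc.h"

/* Leader: reserve and initialize the shared scan state in the DSM toc. */
static ParallelTableScanDesc
setup_shared_scan(Relation rel, Snapshot snapshot, shm_toc *toc, uint64 key)
{
    Size        sz = table_parallelscan_estimate(rel, snapshot);
    ParallelTableScanDesc pscan;

    pscan = (ParallelTableScanDesc) shm_toc_allocate(toc, sz);
    table_parallelscan_initialize(rel, pscan, snapshot);
    shm_toc_insert(toc, key, pscan);
    return pscan;
}

/* Worker: attach to the shared state; the AM hands each worker a disjoint set of blocks. */
static void
worker_scan(Relation rel, ParallelTableScanDesc pscan, TupleTableSlot *slot)
{
    TableScanDesc scan = table_beginscan_parallel(rel, pscan);

    while (table_scan_getnextslot(scan, ForwardScanDirection, slot))
    {
        /* process this worker's share of the tuples */
    }
    table_endscan(scan);
}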
3) Index scans need to be able to access tables to return a tuple, and
state like buffers needs to be kept across the individual accesses to
the table. That's now handled by introducing a sort-of-scan
IndexFetchTableData, which again is intended to be subclassed by
individual AMs (for heap, IndexFetchHeapData).
The relevant callbacks for an AM are index_fetch_{begin, reset,
end} to manage the necessary state, and index_fetch_tuple to
retrieve an indexed tuple. Note that for AMs with optimizations
similar to heap's HOT, index_fetch_tuple implementations need to be
smarter than just blindly fetching the indexed tuple - the currently
alive tuple in the update chain needs to be fetched if appropriate.
As with table_scan_getnextslot(), it's undesirable to continue
returning HeapTuples. Thus index_fetch_heap (might want to rename
that later) now accepts a slot as an argument. Core code doesn't
have a lot of call sites performing index scans without going
through the systable_* API (in contrast to loads of heap_getnext
calls and working directly with HeapTuples).
Index scans now store the result of a search in
IndexScanDesc->xs_heaptid, rather than xs_ctup->t_self. As the
target is not generally a HeapTuple anymore, that seems cleaner.
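A sketch of the fetch path (not from the commit; the helper name
fetch_indexed_tuple is hypothetical, and rel, tid, snapshot, and a
suitable slot are assumed to be in hand) - it mirrors the constraint.c
hunk further below:

#include "postgres.h"

#include "access/tableam.h"
#include "executor/tuptable.h"
#include "storage/itemptr.h"

/*
 * Fetch the currently-visible tuple reachable from the index-provided
 * "tid" into "slot"; for heap this follows the HOT chain.  Returns
 * true iff a visible tuple was found.
 */
static bool
fetch_indexed_tuple(Relation rel, ItemPointer tid, Snapshot snapshot,
                    TupleTableSlot *slot)
{
    IndexFetchTableData *fetch = table_index_fetch_begin(rel);
    bool        call_again = false;
    bool        all_dead = false;
    bool        found;

    found = table_index_fetch_tuple(fetch, tid, snapshot, slot,
                                    &call_again, &all_dead);
    table_index_fetch_end(fetch);
    return found;
}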
To be able to sensibly adapt code to use the above, two further
callbacks have been introduced:
a) slot_callbacks returns a TupleTableSlotOps* suitable for creating
slots capable of holding a tuple of the AM's
type. table_slot_callbacks() and table_slot_create() are based
upon that, but have additional logic to deal with views, foreign
tables, etc.
While this change could have been done separately, nearly all the
call sites that needed to be adapted for the rest of this commit
would also have needed to be adapted for table_slot_callbacks(),
making separation not worthwhile.
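For example (a sketch; the helper name make_slots is hypothetical and
an open relation is assumed):

#include "postgres.h"

#include "access/tableam.h"
#include "executor/tuptable.h"

static void
make_slots(Relation rel)
{
    /* The convenient form: one call yields a slot matching the table's AM. */
    TupleTableSlot *slot = table_slot_create(rel, NULL);

    /* The underlying form: fetch just the ops, e.g. to create slots lazily. */
    const TupleTableSlotOps *ops = table_slot_callbacks(rel);
    TupleTableSlot *slot2 = MakeSingleTupleTableSlot(RelationGetDescr(rel), ops);

    ExecDropSingleTupleTableSlot(slot2);
    ExecDropSingleTupleTableSlot(slot);
}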
b) tuple_satisfies_snapshot checks whether the tuple in a slot is
currently visible according to a snapshot. That's required as a few
places now don't have a buffer + HeapTuple around, but a
slot (which in heap's case internally has that information).
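Roughly (a sketch, not from the commit; slot_visible_now is a
hypothetical helper):

#include "postgres.h"

#include "access/tableam.h"
#include "executor/tuptable.h"
#include "utils/snapmgr.h"

/* Visibility check on a slot; no buffer + HeapTuple needed. */
static bool
slot_visible_now(Relation rel, TupleTableSlot *slot)
{
    return table_tuple_satisfies_snapshot(rel, slot, GetActiveSnapshot());
}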
Additionally a few infrastructure changes were needed:
I) SysScanDesc, as used by systable_{beginscan, getnext} et al. now
internally uses a slot to keep track of tuples. While
systable_getnext() still returns HeapTuples, and will do so for the
foreseeable future, the index API (see 1) above) now only deals with
slots.
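For illustration, the caller-facing pattern, which is unchanged by this
commit (the helper name scan_catalog is hypothetical; an open catalog
relation and its index OID are assumed):

#include "postgres.h"

#include "access/genam.h"
#include "access/htup_details.h"

/*
 * Callers still receive HeapTuples, even though SysScanDesc now tracks
 * the current tuple in a slot internally.
 */
static void
scan_catalog(Relation catrel, Oid indexId)
{
    SysScanDesc sscan;
    HeapTuple   tup;

    sscan = systable_beginscan(catrel, indexId, true, NULL, 0, NULL);

    while (HeapTupleIsValid(tup = systable_getnext(sscan)))
    {
        /* examine tup; it is only valid until the next call */
    }
    systable_endscan(sscan);
}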
The remainder, and largest part, of this commit is then adjusting all
scans in postgres to use the new APIs.
Author: Andres Freund, Haribabu Kommi, Alvaro Herrera
Discussion:
https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
Diffstat (limited to 'src/backend/commands/constraint.c')
 src/backend/commands/constraint.c | 68 +++++++++++++++++++++++++++++++++++++-------------------------------
 1 file changed, 37 insertions(+), 31 deletions(-)
diff --git a/src/backend/commands/constraint.c b/src/backend/commands/constraint.c
index f9ada29af84..cd04e4ea81b 100644
--- a/src/backend/commands/constraint.c
+++ b/src/backend/commands/constraint.c
@@ -15,6 +15,7 @@
 
 #include "access/genam.h"
 #include "access/heapam.h"
+#include "access/tableam.h"
 #include "catalog/index.h"
 #include "commands/trigger.h"
 #include "executor/executor.h"
@@ -41,7 +42,7 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 {
 	TriggerData *trigdata = castNode(TriggerData, fcinfo->context);
 	const char *funcname = "unique_key_recheck";
-	HeapTuple	new_row;
+	ItemPointerData checktid;
 	ItemPointerData tmptid;
 	Relation	indexRel;
 	IndexInfo  *indexInfo;
@@ -73,28 +74,30 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	 * Get the new data that was inserted/updated.
 	 */
 	if (TRIGGER_FIRED_BY_INSERT(trigdata->tg_event))
-		new_row = trigdata->tg_trigtuple;
+		checktid = trigdata->tg_trigslot->tts_tid;
 	else if (TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event))
-		new_row = trigdata->tg_newtuple;
+		checktid = trigdata->tg_newslot->tts_tid;
 	else
 	{
 		ereport(ERROR,
 				(errcode(ERRCODE_E_R_I_E_TRIGGER_PROTOCOL_VIOLATED),
 				 errmsg("function \"%s\" must be fired for INSERT or UPDATE",
 						funcname)));
-		new_row = NULL;			/* keep compiler quiet */
+		ItemPointerSetInvalid(&checktid);	/* keep compiler quiet */
 	}
 
+	slot = table_slot_create(trigdata->tg_relation, NULL);
+
 	/*
-	 * If the new_row is now dead (ie, inserted and then deleted within our
-	 * transaction), we can skip the check.  However, we have to be careful,
-	 * because this trigger gets queued only in response to index insertions;
-	 * which means it does not get queued for HOT updates.  The row we are
-	 * called for might now be dead, but have a live HOT child, in which case
-	 * we still need to make the check --- effectively, we're applying the
-	 * check against the live child row, although we can use the values from
-	 * this row since by definition all columns of interest to us are the
-	 * same.
+	 * If the row pointed at by checktid is now dead (ie, inserted and then
+	 * deleted within our transaction), we can skip the check.  However, we
+	 * have to be careful, because this trigger gets queued only in response
+	 * to index insertions; which means it does not get queued e.g. for HOT
+	 * updates.  The row we are called for might now be dead, but have a live
+	 * HOT child, in which case we still need to make the check ---
+	 * effectively, we're applying the check against the live child row,
+	 * although we can use the values from this row since by definition all
+	 * columns of interest to us are the same.
 	 *
 	 * This might look like just an optimization, because the index AM will
 	 * make this identical test before throwing an error.  But it's actually
@@ -103,13 +106,23 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	 * it's possible the index entry has also been marked dead, and even
 	 * removed.
 	 */
-	tmptid = new_row->t_self;
-	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL))
+	tmptid = checktid;
 	{
-		/*
-		 * All rows in the HOT chain are dead, so skip the check.
-		 */
-		return PointerGetDatum(NULL);
+		IndexFetchTableData *scan = table_index_fetch_begin(trigdata->tg_relation);
+		bool		call_again = false;
+
+		if (!table_index_fetch_tuple(scan, &tmptid, SnapshotSelf, slot,
+									 &call_again, NULL))
+		{
+			/*
+			 * All rows referenced by the index entry are dead, so skip the
+			 * check.
+			 */
+			ExecDropSingleTupleTableSlot(slot);
+			table_index_fetch_end(scan);
+			return PointerGetDatum(NULL);
+		}
+		table_index_fetch_end(scan);
 	}
 
 	/*
@@ -122,14 +135,6 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	indexInfo = BuildIndexInfo(indexRel);
 
 	/*
-	 * The heap tuple must be put into a slot for FormIndexDatum.
-	 */
-	slot = MakeSingleTupleTableSlot(RelationGetDescr(trigdata->tg_relation),
-									&TTSOpsHeapTuple);
-
-	ExecStoreHeapTuple(new_row, slot, false);
-
-	/*
 	 * Typically the index won't have expressions, but if it does we need an
 	 * EState to evaluate them.  We need it for exclusion constraints too,
 	 * even if they are just on simple columns.
@@ -163,11 +168,12 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	{
 		/*
 		 * Note: this is not a real insert; it is a check that the index entry
-		 * that has already been inserted is unique.  Passing t_self is
-		 * correct even if t_self is now dead, because that is the TID the
-		 * index will know about.
+		 * that has already been inserted is unique.  Passing the tuple's tid
+		 * (i.e. unmodified by table_index_fetch_tuple()) is correct even if
+		 * the row is now dead, because that is the TID the index will know
+		 * about.
 		 */
-		index_insert(indexRel, values, isnull, &(new_row->t_self),
+		index_insert(indexRel, values, isnull, &checktid,
 					 trigdata->tg_relation, UNIQUE_CHECK_EXISTING,
 					 indexInfo);
 	}