| author | Tom Lane <tgl@sss.pgh.pa.us> | 2018-09-23 16:05:45 -0400 |
|---|---|---|
| committer | Tom Lane <tgl@sss.pgh.pa.us> | 2018-09-23 16:05:45 -0400 |
| commit | 89b280e139c463c98eb33592216a123e89052d08 (patch) | |
| tree | 226e7608f24f3ef658286ca444cfad43f9d95314 /src/backend/executor/execScan.c | |
| parent | 2f39106a209e647d7b1895331fca115f9bb6ec8d (diff) | |
| download | postgresql-89b280e139c463c98eb33592216a123e89052d08.tar.gz postgresql-89b280e139c463c98eb33592216a123e89052d08.zip | |
Fix failure in WHERE CURRENT OF after rewinding the referenced cursor.
In a case where we have multiple relation-scan nodes in a cursor plan,
such as a scan of an inheritance tree, it's possible to fetch from a
given scan node, then rewind the cursor and fetch some row from an
earlier scan node. In such a case, execCurrent.c mistakenly thought
that the later scan node was still active, because ExecReScan hadn't
done anything to make it look not-active. We'd get some sort of
failure in the case of a SeqScan node, because the node's scan tuple
slot would be pointing at a HeapTuple whose t_self gets reset to
invalid by heapam.c. But it seems possible that for other relation
scan node types we'd actually return a valid tuple TID to the caller,
resulting in updating or deleting a tuple that shouldn't have been
considered current. To fix, forcibly clear the ScanTupleSlot in
ExecScanReScan.
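As context for that fix, here is a minimal, hypothetical sketch (not code from the commit) of how a caller such as execCurrent.c can tell, once the slot has been cleared, whether a scan node is still positioned on a tuple. The helper name scan_node_is_positioned is invented for illustration; ScanState, ss_ScanTupleSlot, TupleTableSlot, and TupIsNull are the executor's real definitions.

```c
#include "postgres.h"

#include "executor/tuptable.h"		/* TupIsNull, TupleTableSlot */
#include "nodes/execnodes.h"		/* ScanState */

/*
 * Hypothetical helper (illustration only): after this fix, a rescanned
 * node's ss_ScanTupleSlot is empty, so testing the slot is enough to tell
 * whether the node is currently positioned on a tuple.
 */
static bool
scan_node_is_positioned(ScanState *scanstate)
{
	TupleTableSlot *slot = scanstate->ss_ScanTupleSlot;

	/* TupIsNull is true for both a NULL pointer and an empty slot. */
	return !TupIsNull(slot);
}
```

With the slot cleared in ExecScanReScan, a rescanned-but-not-yet-fetched node reads as "not positioned", which is exactly the property a WHERE CURRENT OF lookup needs to rely on.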
Another issue here, which seems only latent at the moment but could
easily become a live bug in future, is that rewinding a cursor does
not necessarily lead to *immediately* applying ExecReScan to every
scan-level node in the plan tree. Upper-level nodes will think that
they can postpone that call if their child node is already marked
with chgParam flags. I don't see a way for that to happen today in
a plan tree that's simple enough for execCurrent.c's search_plan_tree
to understand, but that's one heck of a fragile assumption. So, add
some logic in search_plan_tree to detect chgParam flags being set on
nodes that it descended to/through, and assume that that means we
should consider lower scan nodes to be logically reset even if their
ReScan call hasn't actually happened yet.
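The shape of that safeguard could look roughly like the sketch below. This is a simplified illustration under stated assumptions, not the committed execCurrent.c code: search_plan_tree_sketch and the pending_rescan output parameter are invented names, while PlanState.chgParam, nodeTag(), ScanState.ss_currentRelation, and RelationGetRelid() are real executor/relcache definitions.

```c
#include "postgres.h"

#include "nodes/execnodes.h"
#include "utils/rel.h"				/* RelationGetRelid */

/*
 * Simplified sketch: walk down from a plan node looking for the scan node
 * on table_oid.  If any node visited along the way has chgParam set, a
 * rescan of that subtree has been requested but possibly deferred, so
 * lower scan nodes must be treated as logically reset.
 */
static ScanState *
search_plan_tree_sketch(PlanState *node, Oid table_oid, bool *pending_rescan)
{
	if (node == NULL)
		return NULL;

	/* Non-empty chgParam means a deferred rescan is queued for this subtree. */
	if (node->chgParam != NULL)
		*pending_rescan = true;

	switch (nodeTag(node))
	{
		case T_SeqScanState:
		case T_IndexScanState:
		case T_IndexOnlyScanState:
		case T_BitmapHeapScanState:
		case T_TidScanState:
			{
				ScanState  *sstate = (ScanState *) node;

				if (RelationGetRelid(sstate->ss_currentRelation) == table_oid)
					return sstate;
				return NULL;
			}

		default:
			/* The real code also descends through Append, Result, etc. */
			return search_plan_tree_sketch(node->lefttree, table_oid,
										   pending_rescan);
	}
}
```

A caller would initialize pending_rescan to false before the walk and, if it comes back true, decline to treat whatever tuple the found scan node is holding as the cursor's current row.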
Per bug #15395 from Matvey Arye. This has been broken for a long time,
so back-patch to all supported branches.
Discussion: https://postgr.es/m/153764171023.14986.280404050547008575@wrigleys.postgresql.org
Diffstat (limited to 'src/backend/executor/execScan.c')
-rw-r--r-- | src/backend/executor/execScan.c | 6
1 file changed, 6 insertions, 0 deletions
```diff
diff --git a/src/backend/executor/execScan.c b/src/backend/executor/execScan.c
index caf91730ce1..f84a3fb0dbc 100644
--- a/src/backend/executor/execScan.c
+++ b/src/backend/executor/execScan.c
@@ -263,6 +263,12 @@ ExecScanReScan(ScanState *node)
 {
 	EState	   *estate = node->ps.state;
 
+	/*
+	 * We must clear the scan tuple so that observers (e.g., execCurrent.c)
+	 * can tell that this plan node is not positioned on a tuple.
+	 */
+	ExecClearTuple(node->ss_ScanTupleSlot);
+
 	/* Rescan EvalPlanQual tuple if we're inside an EvalPlanQual recheck */
 	if (estate->es_epqScanDone != NULL)
 	{
```