author     Kevin Grittner <kgrittn@postgresql.org>    2016-06-02 12:23:19 -0500
committer  Kevin Grittner <kgrittn@postgresql.org>    2016-06-02 12:23:19 -0500
commit     236d569f92b298c697e0f54891418acfc8310003
tree       98851345dc38630f2d50a3822289c90b97992487 /src
parent     43d3fbe369088f089afd55847dde0f34b339b5f2
Fix btree mark/restore bug.
Commit 2ed5b87f96d473962ec5230fd820abfeaccb2069 introduced a bug in mark/restore, in an attempt to optimize repeated restores to the same page. This caused an assertion failure during a merge join which fed directly from an index scan, although the impact would not be limited to that case. Revert the bad chunk of code from that commit.

While investigating this bug it was discovered that a particular "paranoia" setting of the mark position field would not prevent bad behavior; it would just make it harder to diagnose. Change that into an assertion, which will draw attention to any future problem in that area more directly.

Backpatch to 9.5, where the bug was introduced.

Bug #14169 reported by Shinta Koyanagi. Preliminary analysis by Tom Lane identified which commit caused the bug.
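
To make the mark/restore mechanics easier to follow, here is a minimal, self-contained C sketch of the general pattern, using hypothetical simplified names (ScanPos, ScanState, mark_position, step_to_next_page, restore_position) rather than the real nbtree types. It only illustrates why a restore must either reuse markItemIndex while the mark is still on the current page, or else copy the full saved position back; it is not the PostgreSQL implementation.

/*
 * mark_restore_sketch.c — illustrative only; hypothetical names, not the
 * real nbtree code.  Build with: cc mark_restore_sketch.c
 */
#include <assert.h>
#include <stdio.h>

typedef struct ScanPos
{
	int			currPage;		/* page the scan is positioned on */
	int			itemIndex;		/* offset of the current item on that page */
} ScanPos;

typedef struct ScanState
{
	ScanPos		currPos;		/* current scan position */
	ScanPos		markPos;		/* saved (marked) position */
	int			markItemIndex;	/* >= 0: mark is on currPos.currPage and only
								 * the item index is remembered; -1: markPos
								 * holds the full saved position */
} ScanState;

static void
mark_position(ScanState *so)
{
	/* Cheap path: remember just the item index while we stay on this page. */
	so->markItemIndex = so->currPos.itemIndex;
}

static void
step_to_next_page(ScanState *so)
{
	/* Before leaving the page, materialize the mark into markPos. */
	if (so->markItemIndex >= 0)
	{
		so->markPos = so->currPos;
		so->markPos.itemIndex = so->markItemIndex;
		so->markItemIndex = -1;
	}
	so->currPos.currPage++;
	so->currPos.itemIndex = 0;
}

static void
restore_position(ScanState *so)
{
	if (so->markItemIndex >= 0)
	{
		/* Mark is on the current page: just reset the item index. */
		so->currPos.itemIndex = so->markItemIndex;
	}
	else
	{
		/*
		 * Mark is on another page: copy the whole saved position back.  The
		 * reverted optimization skipped part of this copy whenever currPage
		 * happened to equal markPos.currPage, which is roughly the shortcut
		 * that misbehaved.
		 */
		so->currPos = so->markPos;
	}
}

int
main(void)
{
	ScanState	so = {{1, 3}, {0, 0}, -1};

	mark_position(&so);			/* mark item 3 on page 1 */
	so.currPos.itemIndex = 7;	/* scan forward within the page */
	restore_position(&so);		/* same page: just the index comes back */
	assert(so.currPos.itemIndex == 3);

	step_to_next_page(&so);		/* the mark is spilled into markPos */
	restore_position(&so);		/* different page: full copy back */
	assert(so.currPos.currPage == 1 && so.currPos.itemIndex == 3);

	printf("restored to page %d, item %d\n",
		   so.currPos.currPage, so.currPos.itemIndex);
	return 0;
}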
Diffstat (limited to 'src')
-rw-r--r--  src/backend/access/nbtree/nbtree.c     | 19
-rw-r--r--  src/backend/access/nbtree/nbtsearch.c  |  2
2 files changed, 1 insertion, 20 deletions
diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c
index cf4a6dc7c47..cd2d4a6c54e 100644
--- a/src/backend/access/nbtree/nbtree.c
+++ b/src/backend/access/nbtree/nbtree.c
@@ -592,25 +592,6 @@ btrestrpos(PG_FUNCTION_ARGS)
*/
so->currPos.itemIndex = so->markItemIndex;
}
- else if (so->currPos.currPage == so->markPos.currPage)
- {
- /*
- * so->markItemIndex < 0 but mark and current positions are on the
- * same page. This would be an unusual case, where the scan moved to
- * a new index page after the mark, restored, and later restored again
- * without moving off the marked page. It is not clear that this code
- * can currently be reached, but it seems better to make this function
- * robust for this case than to Assert() or elog() that it can't
- * happen.
- *
- * We neither want to set so->markItemIndex >= 0 (because that could
- * cause a later move to a new page to redo the memcpy() executions)
- * nor re-execute the memcpy() functions for a restore within the same
- * page. The previous restore to this page already set everything
- * except markPos as it should be.
- */
- so->currPos.itemIndex = so->markPos.itemIndex;
- }
else
{
/*
diff --git a/src/backend/access/nbtree/nbtsearch.c b/src/backend/access/nbtree/nbtsearch.c
index 101a7d80a95..3bdbe757aeb 100644
--- a/src/backend/access/nbtree/nbtsearch.c
+++ b/src/backend/access/nbtree/nbtsearch.c
@@ -1000,7 +1000,7 @@ _bt_first(IndexScanDesc scan, ScanDirection dir)
so->currPos.moreRight = false;
}
so->numKilled = 0; /* just paranoia */
- so->markItemIndex = -1; /* ditto */
+ Assert(so->markItemIndex == -1);
/* position to the precise item on the page */
offnum = _bt_binsrch(rel, buf, keysCount, scankeys, nextkey);
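
The nbtsearch.c hunk above replaces a defensive reset with an assertion. The following sketch, with hypothetical names (ScanOpaqueSketch, begin_scan) rather than the real PostgreSQL ones, shows that general pattern under the assumption of an assert-enabled build: a field that is merely convenient to reset is cleared, while a field whose value encodes an invariant is asserted so any violation surfaces immediately instead of being papered over.

/*
 * assert_vs_paranoia.c — illustrative only; hypothetical names, not
 * PostgreSQL code.  Build with: cc assert_vs_paranoia.c
 */
#include <assert.h>

typedef struct ScanOpaqueSketch
{
	int			numKilled;		/* harmless if stale, so reset defensively */
	int			markItemIndex;	/* must already be -1 when a scan starts */
} ScanOpaqueSketch;

static void
begin_scan(ScanOpaqueSketch *so)
{
	so->numKilled = 0;			/* just paranoia, as in the original code */

	/*
	 * A stale mark here would only mask a bug elsewhere; asserting draws
	 * attention to it instead of quietly clearing it.
	 */
	assert(so->markItemIndex == -1);
}

int
main(void)
{
	ScanOpaqueSketch so = {5, -1};

	begin_scan(&so);
	return 0;
}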