path: root/src/backend/executor
author      Tom Lane <tgl@sss.pgh.pa.us>    2017-02-15 16:40:05 -0500
committer   Tom Lane <tgl@sss.pgh.pa.us>    2017-02-15 16:40:05 -0500
commit      354dfa235b4dfb01e23cee6ad8ec5fe6904004fc (patch)
tree        22ccdb280c96f1bd17ffa0b68aa5c198d7144cd0 /src/backend/executor
parent      a3f4c8e50e8e00fc88c44d5ff7a99c1ce88256be (diff)
download    postgresql-354dfa235b4dfb01e23cee6ad8ec5fe6904004fc.tar.gz
            postgresql-354dfa235b4dfb01e23cee6ad8ec5fe6904004fc.zip
Make sure that hash join's bulk-tuple-transfer loops are interruptible.
The loops in ExecHashJoinNewBatch(), ExecHashIncreaseNumBatches(), and ExecHashRemoveNextSkewBucket() are all capable of iterating over many tuples without ever doing a CHECK_FOR_INTERRUPTS, so that the backend might fail to respond to SIGINT or SIGTERM for an unreasonably long time.  Fix that.  In the case of ExecHashJoinNewBatch(), it seems useful to put the added CHECK_FOR_INTERRUPTS into ExecHashJoinGetSavedTuple() rather than directly in the loop, because that will also ensure that both principal code paths through ExecHashJoinOuterGetTuple() will do a CHECK_FOR_INTERRUPTS, which seems like a good idea to avoid surprises.

Back-patch to all supported branches.

Tom Lane and Thomas Munro

Discussion: https://postgr.es/m/6044.1487121720@sss.pgh.pa.us
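For context, a minimal sketch (not taken from the commit) of the loop pattern being fixed: a backend loop that walks many tuples must call CHECK_FOR_INTERRUPTS() itself, because signal handlers only set a flag and the interrupt is serviced at the next such call site.  The chunk layout and process_one_tuple() helper below are hypothetical; CHECK_FOR_INTERRUPTS() comes from miscadmin.h and MAXALIGN() from c.h (via postgres.h).

    /*
     * Hypothetical sketch of the pattern: walk every tuple stored in a memory
     * chunk, checking for interrupts once per tuple so that SIGINT/SIGTERM
     * are serviced promptly.  process_one_tuple() is an invented stand-in,
     * not backend code.
     */
    #include "postgres.h"
    #include "miscadmin.h"      /* CHECK_FOR_INTERRUPTS() */

    extern size_t process_one_tuple(char *tup);   /* hypothetical helper */

    static void
    walk_chunk(char *chunk_data, size_t used)
    {
        size_t      idx = 0;

        while (idx < used)
        {
            size_t      tuple_size = process_one_tuple(chunk_data + idx);

            /* advance to the next tuple in this chunk */
            idx += MAXALIGN(tuple_size);

            /* allow this loop to be cancellable */
            CHECK_FOR_INTERRUPTS();
        }
    }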
Diffstat (limited to 'src/backend/executor')
-rw-r--r--    src/backend/executor/nodeHash.c        6
-rw-r--r--    src/backend/executor/nodeHashjoin.c    7
2 files changed, 13 insertions, 0 deletions
diff --git a/src/backend/executor/nodeHash.c b/src/backend/executor/nodeHash.c
index 6375d9bfda7..ebfe9278b6a 100644
--- a/src/backend/executor/nodeHash.c
+++ b/src/backend/executor/nodeHash.c
@@ -720,6 +720,9 @@ ExecHashIncreaseNumBatches(HashJoinTable hashtable)
 
 			/* next tuple in this chunk */
 			idx += MAXALIGN(hashTupleSize);
+
+			/* allow this loop to be cancellable */
+			CHECK_FOR_INTERRUPTS();
 		}
 
 		/* we're done with this chunk - free it and proceed to the next one */
@@ -1599,6 +1602,9 @@ ExecHashRemoveNextSkewBucket(HashJoinTable hashtable)
 		}
 
 		hashTuple = nextHashTuple;
+
+		/* allow this loop to be cancellable */
+		CHECK_FOR_INTERRUPTS();
 	}
 
 	/*
diff --git a/src/backend/executor/nodeHashjoin.c b/src/backend/executor/nodeHashjoin.c
index 369e666f885..c4d4c600312 100644
--- a/src/backend/executor/nodeHashjoin.c
+++ b/src/backend/executor/nodeHashjoin.c
@@ -912,6 +912,13 @@ ExecHashJoinGetSavedTuple(HashJoinState *hjstate,
 	MinimalTuple tuple;
 
 	/*
+	 * We check for interrupts here because this is typically taken as an
+	 * alternative code path to an ExecProcNode() call, which would include
+	 * such a check.
+	 */
+	CHECK_FOR_INTERRUPTS();
+
+	/*
 	 * Since both the hash value and the MinimalTuple length word are uint32,
 	 * we can read them both in one BufFileRead() call without any type
 	 * cheating.
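Worth noting when reading the hunks above: a per-tuple check is cheap, because in the common case CHECK_FOR_INTERRUPTS() only tests a flag.  Roughly, on non-Windows builds (simplified from miscadmin.h; details vary by branch):

    /* approximate expansion of CHECK_FOR_INTERRUPTS() on non-Windows builds */
    #define CHECK_FOR_INTERRUPTS() \
    do { \
        if (InterruptPending) \
            ProcessInterrupts(); \
    } while (0)

So the added calls cost one flag test per tuple in the normal case; the signal handlers merely set InterruptPending, and the actual cancel/terminate work happens inside ProcessInterrupts() at these call sites.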