author     Tom Lane <tgl@sss.pgh.pa.us>    2024-11-08 13:42:01 -0500
committer  Tom Lane <tgl@sss.pgh.pa.us>    2024-11-08 13:42:10 -0500
commit     b8df690492568d7852b799b4eff3274fbbd91e78 (patch)
tree       c97cc6e4cc3b67da1bd4e45e9aeb1086e80d643f /src/backend/executor/nodeHashjoin.c
parent     b5ee4e52026b03b19ba47cde27bed9cf740576cc (diff)
Improve fix for not entering parallel mode when holding interrupts.
Commit ac04aa84a put the shutoff for this into the planner, which is not ideal because it doesn't prevent us from re-using a previously made parallel plan. Revert the planner change and instead put the shutoff into InitializeParallelDSM, modeling it on the existing code there for recovering from failure to allocate a DSM segment.

However, that code path is mostly untested, and testing a bit harder showed there's at least one bug: ExecHashJoinReInitializeDSM is not prepared for us to have skipped doing parallel DSM setup. I also thought the Assert in ReinitializeParallelWorkers is pretty ill-advised, and replaced it with a silent Min() operation.

The existing test case added by ac04aa84a serves fine to test this version of the fix, so no change needed there.

Patch by me, but thanks to Noah Misch for the core idea that we could shut off worker creation when !INTERRUPTS_CAN_BE_PROCESSED. Back-patch to v12, as ac04aa84a was.

Discussion: https://postgr.es/m/CAC-SaSzHUKT=vZJ8MPxYdC_URPfax+yoA1hKTcF4ROz_Q6z0_Q@mail.gmail.com
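The two changes outside this file are easiest to see in sketch form. Below is a rough, non-verbatim sketch of the shutoff as the message describes it landing in InitializeParallelDSM (src/backend/access/parallel.c); the function body is abbreviated and only the guard is the point. INTERRUPTS_CAN_BE_PROCESSED() is the existing macro from miscadmin.h:

    void
    InitializeParallelDSM(ParallelContext *pcxt)
    {
        /* ... estimation of DSM segment sizes elided ... */

        /*
         * If we got here while holding interrupts, launching workers would
         * be unsafe: we could not process the interrupts they send us.
         * Pretend no workers were requested, mirroring the existing
         * recovery path for failure to allocate a DSM segment.
         */
        if (!INTERRUPTS_CAN_BE_PROCESSED())
            pcxt->nworkers = 0;

        /* ... DSM segment creation and TOC setup elided ... */
    }

And a similar sketch, under the same caveat, of the Assert in ReinitializeParallelWorkers being replaced with a silent clamp; Min() is the macro from c.h, and the nworkers_to_launch field is assumed from the function's existing role:

    void
    ReinitializeParallelWorkers(ParallelContext *pcxt, int nworkers_to_launch)
    {
        /*
         * Formerly Assert(pcxt->nworkers >= nworkers_to_launch).  If DSM
         * setup was skipped, nworkers may have been forced to zero and the
         * caller has no way to know that, so clamp silently instead of
         * asserting.
         */
        pcxt->nworkers_to_launch = Min(pcxt->nworkers, nworkers_to_launch);
    }

The diff below makes ExecHashJoinReInitializeDSM tolerate the skipped setup in the same spirit: bail out early when no DSM segment exists.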
Diffstat (limited to 'src/backend/executor/nodeHashjoin.c')
-rw-r--r--    src/backend/executor/nodeHashjoin.c    9
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/src/backend/executor/nodeHashjoin.c b/src/backend/executor/nodeHashjoin.c
index 2f7170604d6..6c3009fba0f 100644
--- a/src/backend/executor/nodeHashjoin.c
+++ b/src/backend/executor/nodeHashjoin.c
@@ -1713,8 +1713,13 @@ void
 ExecHashJoinReInitializeDSM(HashJoinState *state, ParallelContext *pcxt)
 {
     int         plan_node_id = state->js.ps.plan->plan_node_id;
-    ParallelHashJoinState *pstate =
-        shm_toc_lookup(pcxt->toc, plan_node_id, false);
+    ParallelHashJoinState *pstate;
+
+    /* Nothing to do if we failed to create a DSM segment. */
+    if (pcxt->seg == NULL)
+        return;
+
+    pstate = shm_toc_lookup(pcxt->toc, plan_node_id, false);
 
     /*
      * It would be possible to reuse the shared hash table in single-batch