| author | Robert Haas <rhaas@postgresql.org> | 2016-01-20 14:29:22 -0500 |
|---|---|---|
| committer | Robert Haas <rhaas@postgresql.org> | 2016-01-20 14:40:26 -0500 |
| commit | 45be99f8cd5d606086e0a458c9c72910ba8a613d | |
| tree | 8d3f186879c12b8f84dc1b2b018450f6fb972a51 /src/backend/executor/execParallel.c | |
| parent | a7de3dc5c346e07e0439275982569996e645b3c2 | |
Support parallel joins, and make related improvements.
The core innovation of this patch is the introduction of the concept
of a partial path; that is, a path which, if executed in parallel, will
generate a subset of the output rows in each process. Gathering a
partial path produces an ordinary (complete) path. This allows us to
generate paths for parallel joins by joining a partial path for one
side (which at the baserel level is currently always a Partial Seq
Scan) to an ordinary path on the other side. This is subject to
various restrictions at present, especially that this strategy seems
unlikely to be sensible for merge joins, so only nested loop and
hash join paths are generated.
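As a concrete illustration of that argument (a standalone sketch, not code from this patch): partition the outer rows across workers the way a Parallel Seq Scan would, join each worker's subset against the complete inner side, and the union of the workers' outputs, which is all that Gather does, is exactly the serial join result.

```c
#include <stdio.h>

#define NWORKERS 2
#define NOUTER 8
#define NINNER 4

int
main(void)
{
	int		outer[NOUTER] = {1, 2, 3, 4, 5, 6, 7, 8};	/* "partial" side */
	int		inner[NINNER] = {2, 4, 6, 8};	/* ordinary side */
	int		total = 0;

	for (int w = 0; w < NWORKERS; w++)
	{
		/* Partial path: each worker sees a disjoint subset of outer. */
		for (int i = w; i < NOUTER; i += NWORKERS)
		{
			/* Ordinary path: every worker joins against all of inner. */
			for (int j = 0; j < NINNER; j++)
			{
				if (outer[i] == inner[j])
				{
					printf("worker %d emits (%d, %d)\n", w, outer[i], inner[j]);
					total++;
				}
			}
		}
	}

	/* "Gather" is just the union of worker outputs: 4 rows, as in serial. */
	printf("total rows: %d\n", total);
	return 0;
}
```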
This also allows an Append node to be pushed below a Gather node in
the case of a partitioned table.
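The reason that is safe, sketched in the same standalone style (illustrative C, not the patch's code): an Append whose children are each scanned by a partial path is itself a partial path, so a single Gather above the Append reassembles the complete result.

```c
#include <stdio.h>

#define NWORKERS 2

static void
partial_scan(const char *label, const int *rows, int n, int worker)
{
	/* Each worker claims a disjoint, interleaved subset of this child. */
	for (int i = worker; i < n; i += NWORKERS)
		printf("worker %d emits %d from %s\n", worker, rows[i], label);
}

int
main(void)
{
	int		child_a[] = {10, 11, 12};
	int		child_b[] = {20, 21, 22, 23};

	for (int w = 0; w < NWORKERS; w++)
	{
		/*
		 * The "Append": every worker runs the partial scan of each child
		 * in turn; across workers the outputs still partition the whole
		 * appended relation, so one Gather on top reassembles it.
		 */
		partial_scan("child_a", child_a, 3, w);
		partial_scan("child_b", child_b, 4, w);
	}
	return 0;
}
```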
Testing revealed that early versions of this patch made poor decisions
in some cases, which turned out to be because the original cost model
for Parallel Seq Scan wasn't very good. So this patch also tries to
make some modest improvements in that area.
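The message doesn't spell out those cost-model changes. As a hedged sketch of the kind of adjustment involved, modeled on the get_parallel_divisor logic found in PostgreSQL 9.6's costsize.c (attributing this exact form to this commit is an assumption): the leader participates in the scan too, but is assumed to lose roughly 30% of its time per worker to coordination, so each backend is costed for rows / divisor rather than all rows.

```c
#include <stdio.h>

/*
 * Sketch of the leader-contribution heuristic (modeled on PostgreSQL
 * 9.6's get_parallel_divisor; the exact form in this patch may differ).
 */
static double
parallel_divisor(int nworkers)
{
	double		divisor = nworkers;
	double		leader_contribution = 1.0 - (0.3 * nworkers);

	/* Past ~3 workers the leader is assumed to do no useful scan work. */
	if (leader_contribution > 0)
		divisor += leader_contribution;

	return divisor;
}

int
main(void)
{
	for (int w = 1; w <= 4; w++)
		printf("%d workers -> each backend costed for %.0f%% of the rows\n",
			   w, 100.0 / parallel_divisor(w));
	return 0;
}
```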
There is much more to be done in the area of generating good parallel
plans in all cases, but this seems like a useful step forward.
Patch by me, reviewed by Dilip Kumar and Amit Kapila.
Diffstat (limited to 'src/backend/executor/execParallel.c')
| mode | file | lines changed |
|---|---|---|
| -rw-r--r-- | src/backend/executor/execParallel.c | 66 |

1 file changed, 40 insertions, 26 deletions
```diff
diff --git a/src/backend/executor/execParallel.c b/src/backend/executor/execParallel.c
index 4658e59941a..c30b3485dd5 100644
--- a/src/backend/executor/execParallel.c
+++ b/src/backend/executor/execParallel.c
@@ -167,25 +167,25 @@ ExecParallelEstimate(PlanState *planstate, ExecParallelEstimateContext *e)
 	e->nnodes++;
 
 	/* Call estimators for parallel-aware nodes. */
-	switch (nodeTag(planstate))
+	if (planstate->plan->parallel_aware)
 	{
-		case T_SeqScanState:
-			ExecSeqScanEstimate((SeqScanState *) planstate,
-								e->pcxt);
-			break;
-		default:
-			break;
+		switch (nodeTag(planstate))
+		{
+			case T_SeqScanState:
+				ExecSeqScanEstimate((SeqScanState *) planstate,
+									e->pcxt);
+				break;
+			default:
+				break;
+		}
 	}
 
 	return planstate_tree_walker(planstate, ExecParallelEstimate, e);
 }
 
 /*
- * Ordinary plan nodes won't do anything here, but parallel-aware plan nodes
- * may need to initialize shared state in the DSM before parallel workers
- * are available.  They can allocate the space they previous estimated using
- * shm_toc_allocate, and add the keys they previously estimated using
- * shm_toc_insert, in each case targeting pcxt->toc.
+ * Initialize the dynamic shared memory segment that will be used to control
+ * parallel execution.
  */
 static bool
 ExecParallelInitializeDSM(PlanState *planstate,
@@ -202,15 +202,26 @@ ExecParallelInitializeDSM(PlanState *planstate,
 	/* Count this node. */
 	d->nnodes++;
 
-	/* Call initializers for parallel-aware plan nodes. */
-	switch (nodeTag(planstate))
+	/*
+	 * Call initializers for parallel-aware plan nodes.
+	 *
+	 * Ordinary plan nodes won't do anything here, but parallel-aware plan
+	 * nodes may need to initialize shared state in the DSM before parallel
+	 * workers are available.  They can allocate the space they previously
+	 * estimated using shm_toc_allocate, and add the keys they previously
+	 * estimated using shm_toc_insert, in each case targeting pcxt->toc.
+	 */
+	if (planstate->plan->parallel_aware)
 	{
-		case T_SeqScanState:
-			ExecSeqScanInitializeDSM((SeqScanState *) planstate,
-									 d->pcxt);
-			break;
-		default:
-			break;
+		switch (nodeTag(planstate))
+		{
+			case T_SeqScanState:
+				ExecSeqScanInitializeDSM((SeqScanState *) planstate,
+										 d->pcxt);
+				break;
+			default:
+				break;
+		}
 	}
 
 	return planstate_tree_walker(planstate, ExecParallelInitializeDSM, d);
@@ -623,13 +634,16 @@ ExecParallelInitializeWorker(PlanState *planstate, shm_toc *toc)
 		return false;
 
 	/* Call initializers for parallel-aware plan nodes. */
-	switch (nodeTag(planstate))
+	if (planstate->plan->parallel_aware)
 	{
-		case T_SeqScanState:
-			ExecSeqScanInitializeWorker((SeqScanState *) planstate, toc);
-			break;
-		default:
-			break;
+		switch (nodeTag(planstate))
+		{
+			case T_SeqScanState:
+				ExecSeqScanInitializeWorker((SeqScanState *) planstate, toc);
+				break;
+			default:
+				break;
+		}
 	}
 
 	return planstate_tree_walker(planstate, ExecParallelInitializeWorker, toc);
```
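After this refactoring, plan->parallel_aware is the single gate at all three dispatch points, and each per-node-type hook hangs off a switch inside that gate. Below is a standalone C sketch of the same walker-plus-gate pattern; PlanNode, estimate_walker, and the tags are invented illustrative names, not PostgreSQL's types.

```c
#include <stdio.h>
#include <stdbool.h>

typedef enum { T_SeqScan, T_HashJoin } NodeTag;

typedef struct PlanNode
{
	NodeTag		tag;
	bool		parallel_aware;
	struct PlanNode *left;
	struct PlanNode *right;
} PlanNode;

static bool
estimate_walker(PlanNode *node)
{
	if (node == NULL)
		return false;

	/* Only parallel-aware nodes get per-type estimation work. */
	if (node->parallel_aware)
	{
		switch (node->tag)
		{
			case T_SeqScan:
				printf("estimating DSM space for parallel seq scan\n");
				break;
			default:
				break;
		}
	}

	/* Recurse into children, as planstate_tree_walker does for real. */
	return estimate_walker(node->left) || estimate_walker(node->right);
}

int
main(void)
{
	PlanNode	scan = {T_SeqScan, true, NULL, NULL};
	PlanNode	join = {T_HashJoin, false, &scan, NULL};

	estimate_walker(&join);
	return 0;
}
```

Checking the flag before the switch matters because a plan tree can contain both parallel-aware and non-parallel-aware nodes of the same type; only the former reserved space in the DSM, so only the former should be handed shared state.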