path: root/src/backend/executor/tqueue.c
* Update copyright via script for 2017 (Bruce Momjian, 2017-01-03)
* Add macros to make AllocSetContextCreate() calls simpler and safer. (Tom Lane, 2016-08-27)

  I found that half a dozen (nearly 5%) of our AllocSetContextCreate calls
  had typos in the context-sizing parameters.  While none of these led to
  especially significant problems, they did create minor inefficiencies,
  and it's now clear that expecting people to copy-and-paste those calls
  accurately is not a great idea.  Let's reduce the risk of future errors
  by introducing single macros that encapsulate the common use-cases.
  Three such macros are enough to cover all but two special-purpose
  contexts; those two calls can be left as-is, I think.

  While this patch doesn't in itself improve matters for third-party
  extensions, it doesn't break anything for them either, and they can
  gradually adopt the simplified notation over time.

  In passing, change TopMemoryContext to use the default allocation
  parameters.  Formerly it could only be extended 8K at a time.  That was
  probably reasonable when this code was written; but nowadays we create
  many more contexts than we did then, so that it's not unusual to have a
  couple hundred K in TopMemoryContext, even without considering various
  dubious code that sticks other things there.  There seems no good
  reason not to let it use growing blocks like most other contexts.

  Back-patch to 9.6, mostly because that's still close enough to HEAD
  that it's easy to do so, and keeping the branches in sync can be
  expected to avoid some future back-patching pain.  The bugs fixed by
  these changes don't seem to be significant enough to justify fixing
  them further back.

  Discussion: <21072.1472321324@sss.pgh.pa.us>
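  A minimal sketch of the notation change this commit describes; the
  context name and parent are illustrative, and ALLOCSET_DEFAULT_SIZES is
  one of the new macros (alongside ALLOCSET_SMALL_SIZES and
  ALLOCSET_START_SMALL_SIZES):

      MemoryContext cxt;

      /* Before: three separate sizing parameters, easy to mistype */
      cxt = AllocSetContextCreate(CurrentMemoryContext,
                                  "ExampleContext",
                                  ALLOCSET_DEFAULT_MINSIZE,
                                  ALLOCSET_DEFAULT_INITSIZE,
                                  ALLOCSET_DEFAULT_MAXSIZE);

      /* After: one macro expands to the same three parameters */
      cxt = AllocSetContextCreate(CurrentMemoryContext,
                                  "ExampleContext",
                                  ALLOCSET_DEFAULT_SIZES);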
* Code review for tqueue.c: fix memory leaks, speed it up, other fixes. (Tom Lane, 2016-07-31)

  When doing record typmod remapping, tqueue.c did fresh catalog lookups
  for each tuple it processed, which was pretty horrible performance-wise
  (it seemed to about halve the already none-too-quick speed of bulk
  reads in parallel mode).  Worse, it insisted on putting bits of that
  data into TopMemoryContext, from where it never freed them, causing a
  session-lifespan memory leak.  (I suppose this was coded with the idea
  that the sender process would quit after finishing the query --- but
  the receiver uses the same code.)

  Restructure to avoid repetitive catalog lookups and to keep that data
  in a query-lifespan context, in or below the context where the
  TQueueDestReceiver or TupleQueueReader itself lives.

  Fix some other bugs such as continuing to use a tupledesc after
  releasing our refcount on it.  Clean up cavalier datatype choices
  (typmods are int32, please, not int, and certainly not Oid).  Improve
  comments and error message wording.
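  A hedged sketch of the caching idea described above: do the typcache
  lookup once per distinct typmod and remember the result in a hash table
  kept in a query-lifespan context.  The entry type, table, and helper
  are hypothetical, not the actual tqueue.c structures, and real code
  must also manage the tuple descriptor's reference count:

      typedef struct RecordCacheEntry
      {
          int32       typmod;     /* hash key: record typmod */
          TupleDesc   tupledesc;  /* result of the one-time lookup */
      } RecordCacheEntry;

      static TupleDesc
      get_cached_tupledesc(HTAB *cache, int32 typmod)
      {
          RecordCacheEntry *entry;
          bool        found;

          entry = hash_search(cache, &typmod, HASH_ENTER, &found);
          if (!found)
          {
              /* one typcache lookup per distinct record typmod */
              entry->tupledesc = lookup_rowtype_tupdesc(RECORDOID, typmod);
          }
          return entry->tupledesc;
      }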
* Fix worst memory leaks in tqueue.c. (Tom Lane, 2016-07-29)

  TupleQueueReaderNext() leaks like a sieve if it has to do any tuple
  disassembly/reconstruction.  While we could try to clean up its
  allocations piecemeal, it seems like a better idea just to insist that
  it should be run in a short-lived memory context, so that any transient
  space goes away automatically.  I chose to have nodeGather.c switch
  into its existing per-tuple context before the call, rather than
  inventing a separate context inside tqueue.c.

  This is sufficient to stop all leakage in the simple case I exhibited
  earlier today (see link below), but it does not deal with leaks induced
  in more complex cases by tqueue.c's insistence on using
  TopMemoryContext for data that it's not actually trying hard to keep
  track of.  That issue is intertwined with another major source of
  inefficiency, namely failure to cache lookup results across calls, so
  it seems best to deal with it separately.

  In passing, improve some comments, and modify gather_readnext's method
  for deciding when it's visited all the readers so that it's more
  obviously correct.  (I'm not actually convinced that the previous code
  *is* correct in the case of a reader deletion; it certainly seems
  fragile.)

  Discussion: <32763.1469821037@sss.pgh.pa.us>
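  A sketch of the caller-side pattern this commit describes: run the
  reader inside an existing per-tuple context so transient space is
  reclaimed automatically.  Variable names are assumed from context, and
  this is not the exact nodeGather.c code:

      MemoryContext oldctx;
      HeapTuple   tup;

      /* wipe out allocations made while processing the previous tuple */
      ResetExprContext(econtext);

      /* anything TupleQueueReaderNext allocates dies at the next reset */
      oldctx = MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory);
      tup = TupleQueueReaderNext(reader, nowait, &done);
      MemoryContextSwitchTo(oldctx);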
* Fix tqueue.c's range-remapping code. (Tom Lane, 2016-07-29)

  It's depressingly clear that nobody ever tested this.
* Message style improvements (Peter Eisentraut, 2016-07-28)
* tqueue.c's record-typmod hashtables need the HASH_BLOBS option. (Tom Lane, 2016-07-28)

  The keys are integers, not strings.  The code accidentally worked on
  little-endian machines, at least up to 256 distinct record types within
  a session, but failed utterly on big-endian.  This was unexpectedly
  exposed by a test case added by commit 4452000f3, which apparently is
  the only parallelizable query in the regression suite that uses more
  than one anonymous record type.

  Fortunately, buildfarm member mandrill is big-endian and is running
  with force_parallel_mode on, so it failed.
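  A minimal sketch of the distinction involved: without HASH_BLOBS,
  dynahash treats keys as NUL-terminated strings, which only appears to
  work for small int32 keys on little-endian machines.  The entry type
  and table name here are illustrative:

      typedef struct ExampleEntry
      {
          int32       typmod;     /* binary key, not a string */
          void       *payload;
      } ExampleEntry;

      HASHCTL     ctl;
      HTAB       *htab;

      MemSet(&ctl, 0, sizeof(ctl));
      ctl.keysize = sizeof(int32);
      ctl.entrysize = sizeof(ExampleEntry);
      htab = hash_create("example typmod table", 100, &ctl,
                         HASH_ELEM | HASH_BLOBS);  /* HASH_BLOBS is the fix */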
* pgindent run for 9.6 (Robert Haas, 2016-06-09)
* Stop the executor if no more tuples can be sent from worker to leader. (Robert Haas, 2016-06-06)

  If a Gather node has read as many tuples as it needs (for example, due
  to Limit) it may detach the queue connecting it to the worker before
  reading all of the worker's tuples.  Rather than let the worker
  continue to generate and send all of the results, have it stop after
  sending the next tuple.

  More could be done here to stop the worker even quicker, but this is
  about as well as we can hope to do for 9.6.

  This is in response to a problem report from Andreas Seltenreich.
  Commit 44339b892a04e94bbb472235882dc6f7023bdc65 should actually be
  sufficient to fix that example even without this change, but it seems
  better to do this, too, since we might otherwise waste quite a large
  amount of effort in one or more workers.

  Discussion: CAA4eK1KOKGqmz9bGu+Z42qhRwMbm4R5rfnqsLCNqFs9j14jzEA@mail.gmail.com

  Amit Kapila
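  A simplified sketch of the mechanism described above (the state type
  and names are hypothetical, and this is not the exact tqueue.c code):
  the tuple-queue DestReceiver reports failure once the leader has
  detached, so the executor stops feeding it tuples:

      static bool
      example_receive_slot(TupleTableSlot *slot, DestReceiver *self)
      {
          ExampleQueueState *state = (ExampleQueueState *) self;
          HeapTuple   tuple = ExecMaterializeSlot(slot);

          if (shm_mq_send(state->handle, tuple->t_len, tuple->t_data,
                          false) == SHM_MQ_DETACHED)
              return false;   /* reader gone: tell the executor to stop */
          return true;
      }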
* Update copyright for 2016 (Bruce Momjian, 2016-01-02)

  Backpatch certain files through 9.1
* Fix TupleQueueReaderNext not to ignore its nowait argument. (Robert Haas, 2015-12-18)

  This was a silly goof on my (rhaas's) part.  Report and fix by Rushabh
  Lathia.
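  A sketch of what honoring the flag looks like, assuming shm_mq as the
  transport (function and helper names are illustrative): nowait is
  passed through, and the caller gets NULL back instead of blocking:

      static HeapTuple
      example_reader_next(shm_mq_handle *mqh, bool nowait, bool *done)
      {
          shm_mq_result result;
          Size        nbytes;
          void       *data;

          *done = false;
          result = shm_mq_receive(mqh, &nbytes, &data, nowait);
          if (result == SHM_MQ_WOULD_BLOCK)
              return NULL;        /* caller asked us not to wait */
          if (result == SHM_MQ_DETACHED)
          {
              *done = true;       /* sender has shut down */
              return NULL;
          }
          return example_build_tuple(data, nbytes);   /* hypothetical */
      }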
* Fix dumb bug in tqueue.c (Robert Haas, 2015-11-18)

  When I wrote this code originally, the intention was to recompute the
  remapinfo only when the tupledesc changes.  This presumably only
  happens once per query, but I copied the design pattern from other
  DestReceivers.  However, due to a silly oversight on my part,
  tqueue->tupledesc never got set, leading to recomputation for every
  tuple.  This should improve the performance of parallel scans that
  return a significant number of tuples.

  Report by Amit Kapila; patch by me, reviewed by him.
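  The intended caching pattern, sketched with illustrative field and
  helper names; the last assignment is the kind of line whose omission
  causes exactly this bug:

      if (tqueue->tupledesc != slot->tts_tupleDescriptor)
      {
          rebuild_remap_info(tqueue, slot->tts_tupleDescriptor); /* hypothetical */
          tqueue->tupledesc = slot->tts_tupleDescriptor;  /* the missing step */
      }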
* Remove accidentally-committed debugging code. (Robert Haas, 2015-11-15)

  Amit Kapila
* Add a dummy return statement to TupleQueueRemap. (Robert Haas, 2015-11-09)

  This is unreachable for multiple reasons, but per Amit Kapila the
  Windows compiler he is using still thinks we can get there.
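  A sketch of the idiom in question, with hypothetical names: even when
  every switch arm returns, some compilers cannot prove the function
  never falls off the end, so a dummy return is added to silence them:

      static Datum
      example_remap(Datum value, ExampleRemapClass rc)
      {
          switch (rc)
          {
              case EXAMPLE_REMAP_ARRAY:
                  return example_remap_array(value);
              case EXAMPLE_REMAP_RECORD:
                  return example_remap_record(value);
          }
          /* unreachable, but keeps the compiler quiet */
          return (Datum) 0;
      }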
* Remove set-but-not-used variables. (Robert Haas, 2015-11-07)

  Reported by both Peter Eisentraut and Kevin Grittner.
* Try to convince gcc that TupleQueueRemap never falls off the end. (Robert Haas, 2015-11-06)

  Without this, MacOS gcc version 4.2.1 isn't convinced.
* Modify tqueue infrastructure to support transient record types. (Robert Haas, 2015-11-06)

  Commit 4a4e6893aa080b9094dadbe0e65f8a75fee41ac6, which introduced this
  mechanism, failed to account for the fact that the RECORD pseudo-type
  uses transient typmods that are only meaningful within a single
  backend.  Transferring such tuples without modification between two
  cooperating backends does not work.  This commit installs a system for
  passing the tuple descriptors over the same shm_mq being used to send
  the tuples themselves.  The two sides might not assign the same
  transient typmod to any given tuple descriptor, so we must also
  substitute the appropriate receiver-side typmod for the one used by the
  sender.  That adds some CPU overhead, but still seems better than being
  unable to pass records between cooperating parallel processes.

  Along the way, move the logic for handling multiple tuple queues from
  tqueue.c to nodeGather.c; tqueue.c now provides a TupleQueueReader,
  which reads from a single queue, rather than a TupleQueueFunnel, which
  potentially reads from multiple queues.  This change was suggested
  previously as a way to make sure that nodeGather.c rather than tqueue.c
  had policy control over the order in which to read from queues, but it
  wasn't clear to me until now how good an idea it was.  typmod mapping
  needs to be performed separately for each queue, and it is much simpler
  if the tqueue.c code handles that and leaves multiplexing multiple
  queues to higher layers of the stack.
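  A hedged sketch of the receiver-side substitution described above.  The
  map entry type and function are hypothetical; assign_record_type_typmod
  is the real typcache routine that registers a TupleDesc locally and
  fills in its tdtypmod, but the surrounding bookkeeping here is
  illustrative only:

      typedef struct TypmodMapEntry
      {
          int32       remote_typmod;  /* hash key: sender's typmod */
          int32       local_typmod;   /* our equivalent */
      } TypmodMapEntry;

      static int32
      map_remote_typmod(HTAB *map, int32 remote_typmod, TupleDesc tupdesc)
      {
          TypmodMapEntry *entry;
          bool        found;

          entry = hash_search(map, &remote_typmod, HASH_ENTER, &found);
          if (!found)
          {
              /* register the descriptor in this backend's typcache */
              assign_record_type_typmod(tupdesc);
              entry->local_typmod = tupdesc->tdtypmod;
          }
          return entry->local_typmod;
      }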
* Parallel executor support. (Robert Haas, 2015-09-28)

  This code provides infrastructure for a parallel leader to start up
  parallel workers to execute subtrees of the plan tree being executed in
  the master.  User-supplied parameters from ParamListInfo are passed
  down, but PARAM_EXEC parameters are not.  Various other constructs,
  such as initplans, subplans, and CTEs, are also not currently shared.
  Nevertheless, there's enough here to support a basic implementation of
  parallel query, and we can lift some of the current restrictions as
  needed.

  Amit Kapila and Robert Haas
* Glue layer to connect the executor to the shm_mq mechanism. (Robert Haas, 2015-09-18)

  The shm_mq mechanism was built to send error (and notice) messages and
  tuples between backends.  However, shm_mq itself only deals in raw
  bytes.  Since commit 2bd9e412f92bc6a68f3e8bcb18e04955cc35001d, we have
  had infrastructure for one backend to redirect protocol messages to a
  queue and for another backend to parse them and do useful things with
  them.  This commit introduces a somewhat analogous facility for tuples
  by adding a new type of DestReceiver, DestTupleQueue, which writes each
  tuple generated by a query into a shm_mq, and a new TupleQueueFunnel
  facility which reads raw tuples out of the queue and reconstructs the
  HeapTuple format expected by the executor.

  The TupleQueueFunnel abstraction supports reading from multiple tuple
  streams at the same time, but only in round-robin fashion.  Someone
  could imaginably want other policies, but this should be good enough to
  meet our short-term needs related to parallel query, and we can always
  extend it later.

  This also makes one minor addition to the shm_mq API that didn't seem
  worth breaking out as a separate patch.

  Extracted from Amit Kapila's parallel sequential scan patch.  This code
  was originally written by me, and then it was revised by Amit, and then
  it was revised some more by me.
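  A simplified sketch of the round-robin policy described above; the
  funnel structure and read helper are hypothetical stand-ins for the
  TupleQueueFunnel internals:

      typedef struct ExampleFunnel
      {
          int         nqueues;    /* number of attached tuple queues */
          int         next;       /* index of the queue to poll next */
          shm_mq_handle **queues;
      } ExampleFunnel;

      static HeapTuple
      example_funnel_next(ExampleFunnel *funnel)
      {
          int         tries;

          /* poll each queue at most once, in strict rotation */
          for (tries = 0; tries < funnel->nqueues; tries++)
          {
              HeapTuple   tup;

              funnel->next = (funnel->next + 1) % funnel->nqueues;
              tup = example_try_read(funnel->queues[funnel->next]); /* hypothetical */
              if (tup != NULL)
                  return tup;
          }
          return NULL;    /* no tuple ready on any queue right now */
      }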