path: root/src/backend/executor/nodeAgg.c
Commits (message, author, date):
* Update copyright for 2022 (Bruce Momjian, 2022-01-07)
  Backpatch-through: 10
* Avoid some other O(N^2) hazards in list manipulation. (Tom Lane, 2021-11-01)
  In the same spirit as 6301c3ada, fix some more places where we were using list_delete_first() in a loop and thereby risking O(N^2) behavior. It's not clear that the lists manipulated in these spots can get long enough to be really problematic ... but it's not clear that they can't, either, and the fixes are simple enough. As before, back-patch to v13.
  Discussion: https://postgr.es/m/CD2F0E7F-9822-45EC-A411-AE56F14DEA9F@amazon.com
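  The hazard is easy to reproduce outside the executor: with an array-backed list, removing the head shifts every remaining cell, so draining the list from the front costs O(N^2), while one pass over the cells is linear. A stand-alone sketch in plain C (toy list type, not the actual pg_list.h API):

    #include <stdlib.h>
    #include <string.h>

    /* Toy array-backed list, loosely modeled on how pg_list.h stores cells. */
    typedef struct { int *cells; int len; } IntList;

    /* O(N): removing the head shifts every remaining cell down by one. */
    static void list_delete_first(IntList *l)
    {
        memmove(l->cells, l->cells + 1, (l->len - 1) * sizeof(int));
        l->len--;
    }

    /* Anti-pattern: draining the list from the front is O(N^2) overall. */
    static void drain_slow(IntList *l)
    {
        while (l->len > 0)
        {
            int head = l->cells[0];
            (void) head;              /* ... process head ... */
            list_delete_first(l);     /* O(N) shift on every iteration */
        }
    }

    /* Fix: walk the cells once and discard the list afterwards, O(N) overall. */
    static void drain_fast(IntList *l)
    {
        for (int i = 0; i < l->len; i++)
        {
            int head = l->cells[i];
            (void) head;              /* ... process head ... */
        }
        l->len = 0;                   /* drop all cells at once */
    }

    int main(void)
    {
        int storage[4] = {1, 2, 3, 4};
        IntList slow = { storage, 4 };
        drain_slow(&slow);

        int storage2[4] = {1, 2, 3, 4};
        IntList fast = { storage2, 4 };
        drain_fast(&fast);
        return 0;
    }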
* Refactor LogicalTapeSet/LogicalTape interface. (Heikki Linnakangas, 2021-10-18)
  All the tape functions, like LogicalTapeRead and LogicalTapeWrite, now take a LogicalTape as argument, instead of LogicalTapeSet+tape number. You can create any number of LogicalTapes in a single LogicalTapeSet, and you don't need to decide the number upfront, when you create the tape set.
  This makes the tape management in hash agg spilling in nodeAgg.c simpler.
  Discussion: https://www.postgresql.org/message-id/420a0ec7-602c-d406-1e75-1ef7ddc58d83%40iki.fi
  Reviewed-by: Peter Geoghegan, Zhihong Yu, John Naylor
* Get rid of artificial restriction on hash table sizes on Windows. (Tom Lane, 2021-07-25)
  The point of introducing the hash_mem_multiplier GUC was to let users reproduce the old behavior of hash aggregation, i.e. that it could use more than work_mem at need. However, the implementation failed to get the job done on Win64, where work_mem is clamped to 2GB to protect various places that calculate memory sizes using "long int". As written, the same clamp was applied to hash_mem. This resulted in severe performance regressions for queries requiring a bit more than 2GB for hash aggregation, as they now spill to disk and there's no way to stop that.
  Getting rid of the work_mem restriction seems like a good idea, but it's a big job and could not conceivably be back-patched. However, there's only a fairly small number of places that are concerned with the hash_mem value, and it turns out to be possible to remove the restriction there without too much code churn or any ABI breaks. So, let's do that for now to fix the regression, and leave the larger task for another day.
  This patch does introduce a bit more infrastructure that should help with the larger task, namely pg_bitutils.h support for working with size_t values.
  Per gripe from Laurent Hasson. Back-patch to v13 where the behavior change came in.
  Discussion: https://postgr.es/m/997817.1627074924@sss.pgh.pa.us
  Discussion: https://postgr.es/m/MN2PR15MB25601E80A9B6D1BA6F592B1985E39@MN2PR15MB2560.namprd15.prod.outlook.com
* Cleanup some aggregate code in the executor (David Rowley, 2021-07-04)
  Here we alter the code that calls build_pertrans_for_aggref() so that the function no longer needs to special-case whether it's dealing with an aggtransfn or an aggcombinefn. This allows us to reuse the build_aggregate_transfn_expr() function and just get rid of the build_aggregate_combinefn_expr() completely. All of the special case code that was in build_pertrans_for_aggref() has been moved up to the calling functions. This saves about a dozen lines of code in nodeAgg.c and a few dozen more in parse_agg.c.
  Also, rename a few variables in nodeAgg.c to try to make it more clear that we're working with either an aggtransfn or an aggcombinefn. Some of the old names would have you believe that we were always working with an aggtransfn.
  Discussion: https://postgr.es/m/CAApHDvptMQ9FmF0D67zC_w88yVnoNVR2+kkOQGUrCmdxWxLULQ@mail.gmail.com
* Initial pgindent and pgperltidy run for v14. (Tom Lane, 2021-05-12)
  Also "make reformat-dat-files".
  The only change worthy of note is that pgindent messed up the formatting of launcher.c's struct LogicalRepWorkerId, which led me to notice that that struct wasn't used at all anymore, so I just took it out.
* Fix some typos, grammar and style in docs and comments (Michael Paquier, 2021-02-24)
  The portions fixing the documentation are backpatched where needed.
  Author: Justin Pryzby
  Discussion: https://postgr.es/m/20210210235557.GQ20012@telsasoft.com
  backpatch-through: 9.6
* Fix bug in HashAgg's selective-column-spilling logic. (Tom Lane, 2021-02-04)
  Commit 230230223 taught nodeAgg.c that, when spilling tuples from memory in an oversized hash aggregation, it only needed to spill input columns referenced in the node's tlist and quals. Unfortunately, that's wrong: we also have to save the grouping columns. The error is masked in common cases because the grouping columns also appear in the tlist, but that's not necessarily true. The main category of plans where it's not true seems to come from semijoins ("WHERE outercol IN (SELECT innercol FROM innertable)") where the innercol needs an implicit promotion to make it comparable to the outercol. The grouping column will be "innercol::promotedtype", but that expression appears nowhere in the Agg node's own tlist and quals; only the bare "innercol" is found in the tlist.
  I spent quite a bit of time looking for a suitable regression test case for this, without much success. If the number of distinct values of the innercol is large enough to make spilling happen, the planner tends to prefer a non-HashAgg plan, at least for problem sizes that are reasonable to use in the regression tests. So, no new regression test. However, this patch does demonstrably fix the originally-reported test case.
  Per report from s.p.e (at) gmx-topmail.de. Backpatch to v13 where the troublesome code came in.
  Discussion: https://postgr.es/m/trinity-1c565d44-159f-488b-a518-caf13883134f-1611835701633@3c-app-gmx-bap78
* Update copyright for 2021 (Bruce Momjian, 2021-01-02)
  Backpatch-through: 9.5
* Fix bug #16784 in Disk-based Hash Aggregation. (Jeff Davis, 2020-12-26)
  Before processing tuples, agg_refill_hash_table() was setting all pergroup pointers to NULL to signal to advance_aggregates() that it should not attempt to advance groups that had spilled. The problem was that it also set the pergroups for sorted grouping sets to NULL, which caused rescanning to fail.
  Instead, change agg_refill_hash_table() to only set the pergroups for hashed grouping sets to NULL; and when compiling the expression, pass doSort=false.
  Reported-by: Alexander Lakhin
  Discussion: https://postgr.es/m/16784-7ff169bf2c3d1588%40postgresql.org
  Backpatch-through: 13
* Move per-agg and per-trans duplicate finding to the planner. (Heikki Linnakangas, 2020-11-24)
  This has the advantage that the cost estimates for aggregates can count the number of calls to transition and final functions correctly.
  Bump catalog version, because views can contain Aggrefs.
  Reviewed-by: Andres Freund
  Discussion: https://www.postgresql.org/message-id/b2e3536b-1dbc-8303-c97e-89cb0b4a9a48%40iki.fi
* Skip allocating hash table in EXPLAIN-only mode. (Heikki Linnakangas, 2020-11-18)
  Author: Alexey Bashtanov
  Discussion: https://www.postgresql.org/message-id/36823f65-050d-ae24-aa4d-a37726998240%40imap.cc
* Remove useless entries for aggregate functions from fmgrtab.c. (Tom Lane, 2020-11-04)
  Gen_fmgrtab.pl treated aggregate functions the same as other built-in functions, which is wasteful because there is no real need to have entries for them in the fmgr_builtins[] table. Suppressing those entries saves about 3KB in the compiled table on my machine; which is not a lot but it's not nothing either, considering that that table is pretty "hot".
  The only outside code change needed is that ExecInitWindowAgg() can't be allowed to call fmgr_info_cxt() on a plain aggregate function. But that saves a few cycles anyway.
  Having done that, the aggregate_dummy() function is unreferenced and might as well be dropped. Using "aggregate_dummy" as the prosrc value for an aggregate is now just a documentation convention not something that matters. There was some discussion of using NULL instead to save a few bytes in pg_proc, but we'd have to remove prosrc's BKI_FORCE_NOT_NULL marking which doesn't seem a great idea. Anyway, it's possible there's client-side code that expects to see "aggregate_dummy" there, so I'm loath to change it without a strong reason.
  Discussion: https://postgr.es/m/533989.1604263665@sss.pgh.pa.us
* Change LogicalTapeSetBlocks() to use nBlocksWritten. (Jeff Davis, 2020-09-15)
  Previously, it was based on nBlocksAllocated to account for tapes with open write buffers that may not have made it to the BufFile yet. That was unnecessary, because callers do not need to get the number of blocks while a tape has an open write buffer; and it also conflicted with the preallocation logic added for HashAgg.
  Reviewed-by: Peter Geoghegan
  Discussion: https://postgr.es/m/ce5af05900fdbd0e9185747825a7423c48501964.camel@j-davis.com
  Backpatch-through: 13
* HashAgg: release write buffers sooner by rewinding tape. (Jeff Davis, 2020-09-15)
  This was an oversight. The purpose of 7fdd919ae7 was to avoid keeping tape buffers around unnecessarily, but HashAgg didn't rewind early enough.
  Reviewed-by: Peter Geoghegan
  Discussion: https://postgr.es/m/1fb1151c2cddf8747d14e0532da283c3f97e2685.camel@j-davis.com
  Backpatch-through: 13
* logtape.c: do not preallocate for tapes when sorting (Jeff Davis, 2020-09-11)
  The preallocation logic is only useful for HashAgg, so disable it when sorting.
  Also, adjust an out-of-date comment.
  Reviewed-by: Peter Geoghegan
  Discussion: https://postgr.es/m/CAH2-Wzn_o7tE2+hRVvwSFghRb75AJ5g-nqGzDUqLYMexjOAe=g@mail.gmail.com
  Backpatch-through: 13
* Add hash_mem_multiplier GUC. (Peter Geoghegan, 2020-07-29)
  Add a GUC that acts as a multiplier on work_mem. It gets applied when sizing executor node hash tables that were previously size constrained using work_mem alone.
  The new GUC can be used to preferentially give hash-based nodes more memory than the generic work_mem limit. It is intended to enable admin tuning of the executor's memory usage. Overall system throughput and system responsiveness can be improved by giving hash-based executor nodes more memory (especially over sort-based alternatives, which are often much less sensitive to being memory constrained).
  The default value for hash_mem_multiplier is 1.0, which is also the minimum valid value. This means that hash-based nodes continue to apply work_mem in the traditional way by default.
  hash_mem_multiplier is generally useful. However, it is being added now due to concerns about hash aggregate performance stability for users that upgrade to Postgres 13 (which added disk-based hash aggregation in commit 1f39bce0). While the old hash aggregate behavior risked out-of-memory errors, it is nevertheless likely that many users actually benefited. Hash agg's previous indifference to work_mem during query execution was not just faster; it also accidentally made aggregation resilient to grouping estimate problems (at least in cases where this didn't create destabilizing memory pressure).
  hash_mem_multiplier can provide a certain kind of continuity with the behavior of Postgres 12 hash aggregates in cases where the planner incorrectly estimates that all groups (plus related allocations) will fit in work_mem/hash_mem. This seems necessary because hash-based aggregation is usually much slower when only a small fraction of all groups can fit. Even when it isn't possible to totally avoid hash aggregates that spill, giving hash aggregation more memory will reliably improve performance (the same cannot be said for external sort operations, which appear to be almost unaffected by memory availability provided it's at least possible to get a single merge pass).
  The PostgreSQL 13 release notes should advise users that increasing hash_mem_multiplier can help with performance regressions associated with hash aggregation. That can be taken care of by a later commit.
  Author: Peter Geoghegan
  Reviewed-By: Álvaro Herrera, Jeff Davis
  Discussion: https://postgr.es/m/20200625203629.7m6yvut7eqblgmfo@alap3.anarazel.de
  Discussion: https://postgr.es/m/CAH2-WzmD%2Bi1pG6rc1%2BCjc4V6EaFJ_qSuKCCHVnH%3DoruqD-zqow%40mail.gmail.com
  Backpatch: 13-, where disk-based hash aggregation was introduced.
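  The arithmetic behind the GUC is simple: the memory budget for a hash-based node is work_mem scaled by hash_mem_multiplier, with 1.0 reproducing the plain work_mem behavior. A minimal sketch of that calculation (hypothetical helper name and hard-coded settings; in PostgreSQL these are GUC globals and work_mem is expressed in kilobytes):

    #include <stddef.h>
    #include <stdio.h>

    /* Illustrative settings; real values come from the GUC machinery. */
    static int    work_mem = 4096;              /* kB, i.e. 4MB */
    static double hash_mem_multiplier = 2.0;    /* give hash nodes 2x work_mem */

    /* Hypothetical helper: byte budget for a hash-based executor node. */
    static size_t
    hash_memory_limit_bytes(void)
    {
        return (size_t) ((double) work_mem * hash_mem_multiplier * 1024.0);
    }

    int main(void)
    {
        /* With the settings above this prints 8388608 (8MB). */
        printf("hash memory limit: %zu bytes\n", hash_memory_limit_bytes());
        return 0;
    }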
* HashAgg: use better cardinality estimate for recursive spilling. (Jeff Davis, 2020-07-28)
  Use HyperLogLog to estimate the group cardinality in a spilled partition. This estimate is used to choose the number of partitions if we recurse.
  The previous behavior was to use the number of tuples in a spilled partition as the estimate for the number of groups, which led to overpartitioning. That could cause the number of batches to be much higher than expected (with each batch being very small), which made it harder to interpret EXPLAIN ANALYZE results.
  Reviewed-by: Peter Geoghegan
  Discussion: https://postgr.es/m/a856635f9284bc36f7a77d02f47bbb6aaf7b59b3.camel@j-davis.com
  Backpatch-through: 13
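  The substantive change is which quantity feeds the partition-count decision: an estimate of distinct groups in the spilled data rather than the raw tuple count. A simplified sketch of that decision (hypothetical constants and function name; the real code derives estimated_groups from a HyperLogLog counter updated with each spilled tuple's hash):

    #include <math.h>

    /* Illustrative tuning knobs, not PostgreSQL's actual constants. */
    #define MIN_PARTITIONS               4
    #define MAX_PARTITIONS               256
    #define TARGET_GROUPS_PER_PARTITION  1024.0

    /*
     * Choose the fan-out for recursively processing a spilled partition.
     * Basing this on estimated distinct groups (instead of tuple count)
     * avoids overpartitioning when many tuples belong to the same group.
     */
    static int
    choose_num_partitions(double estimated_groups)
    {
        double want = ceil(estimated_groups / TARGET_GROUPS_PER_PARTITION);

        if (want < MIN_PARTITIONS)
            want = MIN_PARTITIONS;
        if (want > MAX_PARTITIONS)
            want = MAX_PARTITIONS;
        return (int) want;
    }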
* Rename another "hash_mem" local variable. (Peter Geoghegan, 2020-07-28)
  Missed by my commit 564ce621.
  Backpatch: 13-, where disk-based hash aggregation was introduced.
* Make EXPLAIN ANALYZE of HashAgg more similar to Hash Join (David Rowley, 2020-07-29)
  There were various unnecessary differences between Hash Agg's EXPLAIN ANALYZE output and Hash Join's. Here we modify the Hash Agg output so that it's better aligned to Hash Join's.
  The following changes have been made:
  1. Start batches counter at 1 instead of 0.
  2. Always display the "Batches" property, even when we didn't spill to disk.
  3. Use the text "Batches" instead of "HashAgg Batches" for text format.
  4. Use the text "Memory Usage" instead of "Peak Memory Usage" for text format.
  5. Include "Batches" before "Memory Usage" in both text and non-text formats.
  In passing also modify the "Planned Partitions" property so that we show it regardless of whether the value is 0 for non-text EXPLAIN formats. This was pointed out by Justin Pryzby and probably should have been part of 40efbf870.
  Reviewed-by: Justin Pryzby, Jeff Davis
  Discussion: https://postgr.es/m/CAApHDvrshRnA6C0VFnu7Fb9TVvgGo80PUMm5+2DiaS1gEkPvtw@mail.gmail.com
  Backpatch-through: 13, where HashAgg batching was introduced
* Fix LookupTupleHashEntryHash() pipeline-stall issue. (Jeff Davis, 2020-07-26)
  Refactor hash lookups in nodeAgg.c to improve performance.
  Author: Andres Freund and Jeff Davis
  Discussion: https://postgr.es/m/20200612213715.op4ye4q7gktqvpuo%40alap3.anarazel.de
  Backpatch-through: 13
* Rename "hash_mem" local variable. (Peter Geoghegan, 2020-07-17)
  The term "hash_mem" will take on new significance when pending work to add a new hash_mem_multiplier GUC is committed. Rename a local variable that happens to have been called hash_mem now to avoid confusion.
* HashAgg: before spilling tuples, set unneeded columns to NULL. (Jeff Davis, 2020-07-12)
  This is a replacement for 4cad2534. Instead of projecting all tuples going into a HashAgg, only remove unnecessary attributes when actually spilling. This avoids the regression for the in-memory case.
  Discussion: https://postgr.es/m/a2fb7dfeb4f50aa0a123e42151ee3013933cb802.camel%40j-davis.com
  Backpatch-through: 13
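  The idea is that the spill path, and only the spill path, blanks out attributes the aggregate node will never read again (anything outside the grouping columns and the columns used by the tlist and quals), so the serialized spill tuples stay small. A stand-alone sketch using plain arrays (toy types; the real code operates on TupleTableSlots and a set of needed column numbers):

    #include <stdbool.h>
    #include <stdint.h>

    /* Minimal stand-in for a deformed tuple: per-column value and null flag. */
    typedef struct
    {
        int        natts;
        uintptr_t *values;
        bool      *isnull;
    } ToyTuple;

    /*
     * Called only when a tuple is about to be spilled: clear every column
     * that is not in the "needed" set.  NULL columns take almost no space
     * once the tuple is serialized, and the in-memory path never pays for
     * this because it is skipped entirely when no spilling occurs.
     */
    static void
    null_out_unneeded_columns(ToyTuple *tup, const bool *needed)
    {
        for (int att = 0; att < tup->natts; att++)
        {
            if (!needed[att])
            {
                tup->values[att] = 0;
                tup->isnull[att] = true;
            }
        }
    }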
* Fix EXPLAIN ANALYZE for parallel HashAgg plans (David Rowley, 2020-06-19)
  Since 1f39bce02, HashAgg nodes have had the ability to spill to disk when memory consumption exceeds work_mem. That commit added new properties to EXPLAIN ANALYZE to show the maximum memory usage and disk usage, however, it didn't quite go as far as showing that information for parallel workers. Since workers may have experienced something very different from the main process, we should show this information per worker, as is done in Sort.
  Reviewed-by: Justin Pryzby
  Reviewed-by: Jeff Davis
  Discussion: https://postgr.es/m/CAApHDvpEKbfZa18mM1TD7qV6PG+w97pwCWq5tVD0dX7e11gRJw@mail.gmail.com
  Backpatch-through: 13, where the hashagg spilling code was added.
* Fix HashAgg regression from choosing too many initial buckets. (Jeff Davis, 2020-06-08)
  Diagnosis by Andres.
  Reported-by: Pavel Stehule
  Discussion: https://postgr.es/m/CAFj8pRDLVakD5Aagt3yZeEQeTeEWaS3YE5h8XC3Q3qJ6TYkc2Q%40mail.gmail.com
  Backpatch-through: 13
* Run pgindent with new pg_bsd_indent version 2.1.1. (Tom Lane, 2020-05-16)
  Thomas Munro fixed a longstanding annoyance in pg_bsd_indent, that it would misformat lines containing IsA() macros on the assumption that the IsA() call should be treated like a cast. This improves some other cases involving field/variable names that match typedefs, too. The only places that get worse are a couple of uses of the OpenSSL macro STACK_OF(); we'll gladly take that trade-off.
  Discussion: https://postgr.es/m/20200114221814.GA19630@alvherre.pgsql
* Initial pgindent and pgperltidy run for v13. (Tom Lane, 2020-05-14)
  Includes some manual cleanup of places that pgindent messed up, most of which weren't per project style anyway.
  Notably, it seems some people didn't absorb the style rules of commit c9d297751, because there were a bunch of new occurrences of function calls with a newline just after the left paren, all with faulty expectations about how the rest of the call would get indented.
* Fix minor violations of FunctionCallInvoke usage protocol. (Tom Lane, 2020-04-21)
  Working on commit 1c455078b led me to check through FunctionCallInvoke call sites to see if every one was being honest about (a) making sure that fcinfo.isnull is initially false, and (b) checking its state after the call. Sure enough, I found some violations.
  The main one is that finalize_partialaggregate re-used serialfn_fcinfo without resetting isnull, even though it clearly intends to cater for serialfns that return NULL. There would only be an issue with a non-strict serialfn, since it's unlikely that a serialfn would return NULL for non-null input. We have no non-strict serialfns in core, and there may be none in the wild either, which would account for the lack of complaints. Still, it's clearly wrong, so back-patch that fix to 9.6 where finalize_partialaggregate was introduced.
  Also, arrayfuncs.c and rowtypes.c contained various callers that were not bothering to check for result nulls. While what's being called is a comparison or hash function that probably *shouldn't* return null, that's a lousy excuse for not having any check at all. There are existing places that just Assert(!fcinfo->isnull) in comparable situations, so I added that to the places that were calling btree comparison or hash support functions. In the places calling boolean-returning equality functions, it's quite cheap to have them treat isnull as FALSE, so make those places do that. Also remove some "locfcinfo->isnull = false" assignments that are unnecessary given the assumption that no previous call returned null.
  These changes seem like mostly neatnik-ism or debugging support, so I didn't back-patch.
* Create memory context for HashAgg with a reasonable maxBlockSize. (Jeff Davis, 2020-04-07)
  If the memory context's maxBlockSize is too big, a single block allocation can suddenly exceed work_mem. For Hash Aggregation, this can mean spilling to disk too early or reporting a confusing memory usage number for EXPLAIN ANALYZE.
  Introduce CreateWorkExprContext(), which is like CreateExprContext(), except that it creates the AllocSet with a maxBlockSize that is reasonable in proportion to work_mem. Right now, CreateWorkExprContext() is only used by Hash Aggregation, but it may be generally useful in the future.
  Discussion: https://postgr.es/m/412a3fbf306f84d8d78c4009e11791867e62b87c.camel@j-davis.com
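  The sizing rule only has to guarantee that one block allocation stays well under the budget. A rough sketch of such a rule (hypothetical bounds, fraction, and helper names; the actual CreateWorkExprContext() computation may differ in detail):

    #include <stddef.h>

    #define MIN_BLOCK_SIZE   (8 * 1024)          /* illustrative lower bound */
    #define MAX_BLOCK_SIZE   (8 * 1024 * 1024)   /* illustrative upper bound */

    /* Round down to a power of two, a common choice for allocator block sizes. */
    static size_t
    prev_power_of_two(size_t n)
    {
        size_t p = 1;

        while (p * 2 <= n)
            p *= 2;
        return p;
    }

    /*
     * Pick a maxBlockSize that is a small fraction of work_mem (given here in
     * kilobytes), so a single block allocation cannot jump well past the
     * memory budget and distort spill decisions or EXPLAIN ANALYZE numbers.
     */
    static size_t
    work_mem_max_block_size(int work_mem_kb)
    {
        size_t target = (size_t) work_mem_kb * 1024 / 16;   /* ~1/16 of budget */
        size_t block = prev_power_of_two(target);

        if (block < MIN_BLOCK_SIZE)
            block = MIN_BLOCK_SIZE;
        if (block > MAX_BLOCK_SIZE)
            block = MAX_BLOCK_SIZE;
        return block;
    }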
* Include chunk overhead in hash table entry size estimate. (Jeff Davis, 2020-04-03)
  Don't try to be precise about it, just use a constant 16 bytes of chunk overhead. Being smarter would require knowing the memory context where the chunk will be allocated, which is not known by all callers.
  Discussion: https://postgr.es/m/20200325220936.il3ni2fj2j2b45y5@alap3.anarazel.de
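  In other words, the estimate is intentionally crude: a flat 16-byte allowance per palloc chunk instead of asking the (unknown) memory context how it would round the request. A hedged sketch of the arithmetic (hypothetical function and parameters; the real estimate is expressed in terms of the stored tuple and per-group transition data):

    #include <stddef.h>

    #define CHUNK_OVERHEAD 16   /* flat guess for allocator chunk headers */

    /*
     * Rough size of one hash table entry: the stored group-key tuple plus
     * the per-group transition states, each charged a fixed chunk overhead.
     * Ballpark accuracy is enough for bucket sizing and spill decisions.
     */
    static size_t
    estimate_hash_entry_size(size_t tuple_width, int num_trans, size_t trans_size)
    {
        return (tuple_width + CHUNK_OVERHEAD)                      /* group key tuple */
             + ((size_t) num_trans * trans_size + CHUNK_OVERHEAD); /* pergroup data */
    }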
* Fix costing for disk-based hash aggregation. (Jeff Davis, 2020-03-28)
  Report and suggestions from Richard Guo and Tomas Vondra.
  Discussion: https://postgr.es/m/CAMbWs4_W8fYbAn8KxgidAaZHON_Oo08OYn9ze=7remJymLqo5g@mail.gmail.com
* Avoid allocating unnecessary zero-sized array. (Jeff Davis, 2020-03-24)
  If there are no aggregates, there is no need to allocate an array of zero AggStatePerGroupData elements.
* Fixes for Disk-based Hash Aggregation. (Jeff Davis, 2020-03-23)
  Justin Pryzby raised a couple issues with commit 1f39bce0. Fixed.
  Also, tweak the way the size of a hash entry is estimated and the number of buckets is estimated when calling BuildTupleHashTableExt().
  Discussion: https://www.postgresql.org/message-id/20200319064222.GR26184@telsasoft.com
* Disk-based Hash Aggregation. (Jeff Davis, 2020-03-18)
  While performing hash aggregation, track memory usage when adding new groups to a hash table. If the memory usage exceeds work_mem, enter "spill mode".
  In spill mode, new groups are not created in the hash table(s), but existing groups continue to be advanced if input tuples match. Tuples that would cause a new group to be created are instead spilled to a logical tape to be processed later. The tuples are spilled in a partitioned fashion.
  When all tuples from the outer plan are processed (either by advancing the group or spilling the tuple), finalize and emit the groups from the hash table. Then, create new batches of work from the spilled partitions, and select one of the saved batches and process it (possibly spilling recursively).
  Author: Jeff Davis
  Reviewed-by: Tomas Vondra, Adam Lee, Justin Pryzby, Taylor Vesely, Melanie Plageman
  Discussion: https://postgr.es/m/507ac540ec7c20136364b5272acbcd4574aa76ef.camel@j-davis.com
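  Compressed into a toy program, the control flow reads: advance groups that are already in the table, admit new groups while under the limit, spill everything else into partitions, then treat each partition as a new (smaller) batch and recurse. A self-contained sketch (tiny fixed-capacity table and in-memory arrays in place of a real hash table and logical tapes; all names hypothetical):

    #include <stdio.h>
    #include <stdlib.h>

    /* One input row: a grouping key and a value to be summed per group. */
    typedef struct { int key; int val; } Row;
    /* One in-memory group: the key plus its running transition state (a sum). */
    typedef struct { int key; long sum; } Group;

    #define TABLE_CAPACITY 4     /* stand-in for the work_mem limit */
    #define NUM_PARTITIONS 2     /* fan-out when we spill */

    /* Linear-scan "hash table" lookup; the real code uses a proper hash table. */
    static Group *lookup_group(Group *groups, int ngroups, int key)
    {
        for (int i = 0; i < ngroups; i++)
            if (groups[i].key == key)
                return &groups[i];
        return NULL;
    }

    /*
     * Aggregate one batch of rows.  Groups that fit in the (tiny) table are
     * advanced in memory; once the table is full, rows for unseen groups are
     * "spilled" into per-partition arrays and handled by a recursive call,
     * mirroring the batching scheme described above.
     */
    static void agg_batch(const Row *rows, int nrows, int depth)
    {
        Group groups[TABLE_CAPACITY];
        int   ngroups = 0;
        Row  *part[NUM_PARTITIONS];
        int   partlen[NUM_PARTITIONS] = {0};

        for (int p = 0; p < NUM_PARTITIONS; p++)
            part[p] = malloc(sizeof(Row) * nrows);

        for (int i = 0; i < nrows; i++)
        {
            Group *g = lookup_group(groups, ngroups, rows[i].key);

            if (g == NULL && ngroups < TABLE_CAPACITY)
            {
                g = &groups[ngroups++];          /* room left: start a new group */
                g->key = rows[i].key;
                g->sum = 0;
            }
            if (g != NULL)
                g->sum += rows[i].val;           /* advance the existing group */
            else
            {
                int p = (unsigned) rows[i].key % NUM_PARTITIONS;
                part[p][partlen[p]++] = rows[i]; /* spill for a later batch */
            }
        }

        /* Finalize and emit everything that stayed in memory. */
        for (int i = 0; i < ngroups; i++)
            printf("depth %d: key=%d sum=%ld\n", depth, groups[i].key, groups[i].sum);

        /* Each spilled partition becomes a new, smaller batch. */
        for (int p = 0; p < NUM_PARTITIONS; p++)
        {
            if (partlen[p] > 0)
                agg_batch(part[p], partlen[p], depth + 1);
            free(part[p]);
        }
    }

    int main(void)
    {
        Row rows[] = {{1,1},{2,1},{3,1},{4,1},{5,1},{6,1},{1,1},{5,1}};
        agg_batch(rows, (int) (sizeof(rows) / sizeof(rows[0])), 0);
        return 0;
    }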
* Extend ExecBuildAggTrans() to support a NULL pointer check. (Jeff Davis, 2020-03-04)
  Optionally push a step to check for a NULL pointer to the pergroup state. This will be important for disk-based hash aggregation in combination with grouping sets. When memory limits are reached, a given tuple may find its per-group state for some grouping sets but not others. For the former, it advances the per-group state as normal; for the latter, it skips evaluation and the calling code will have to spill the tuple and reprocess it in a later batch.
  Add the NULL check as a separate expression step because in some common cases it's not needed.
  Discussion: https://postgr.es/m/20200221202212.ssb2qpmdgrnx52sj%40alap3.anarazel.de
* expression eval: Reduce number of steps for agg transition invocations. (Andres Freund, 2020-02-24)
  Do so by combining the various steps that are part of aggregate transition function invocation into one larger step. As some of the current steps are only necessary for some aggregates, have one variant of the aggregate transition step for each possible combination.
  To avoid further manual copies of code in the different transition step implementations, move most of the code into helper functions marked as "always inline".
  The benefit of this change is an increase in performance when aggregating lots of rows. This comes in part due to the reduced number of indirect jumps due to the reduced number of steps, and in part by reducing redundant setup code across steps. This mainly benefits interpreted execution, but the code generated by JIT is also improved a bit.
  As a nice side-effect it also ends up making the code a bit simpler.
  A small additional optimization is removing the need to set aggstate->curaggcontext before calling ExecAggInitGroup, choosing to instead pass curaggcontext as an argument. It was, in contrast to other aggregate related functions, only needed to fetch a memory context to copy the transition value into.
  Author: Andres Freund
  Discussion: https://postgr.es/m/20191023163849.sosqbfs5yenocez3@alap3.anarazel.de
  Discussion: https://postgr.es/m/5c371df7cee903e8cd4c685f90c6c72086d3a2dc.camel@j-davis.com
* Fixup for nodeAgg.c refactor. (Jeff Davis, 2020-02-21)
  Commit 5b618e1f made an unintended behavior change.
* Minor refactor of nodeAgg.c. (Jeff Davis, 2020-02-19)
  * Separate calculation of hash value from the lookup.
  * Split build_hash_table() into two functions.
  * Change lookup_hash_entry() to return AggStatePerGroup. That's all the caller needed, anyway.
  These changes are to support the upcoming Disk-based Hash Aggregation work.
  Discussion: https://postgr.es/m/31f5ab871a3ad5a1a91a7a797651f20e77ac7ce3.camel%40j-davis.com
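  Splitting hash computation from lookup is what allows a caller to compute the hash once and reuse it, for example to pick a spill partition when the entry cannot be created. A generic illustration of the two-step pattern (toy open-addressing table, not the simplehash-based TupleHashTable used by nodeAgg.c):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define NBUCKETS 64   /* toy fixed-size table */

    typedef struct { bool used; uint32_t hash; int key; long state; } Bucket;

    /* Step 1: hash the key once.  (FNV-1a here purely for illustration.) */
    static uint32_t compute_key_hash(int key)
    {
        uint32_t h = 2166136261u;

        for (int i = 0; i < 4; i++)
        {
            h ^= (uint32_t) (key >> (8 * i)) & 0xFF;
            h *= 16777619u;
        }
        return h;
    }

    /*
     * Step 2: look up (or create) the entry using the precomputed hash.  The
     * caller can reuse the same hash value to pick a spill partition if the
     * entry cannot be created, which is exactly why the two steps are split.
     */
    static Bucket *lookup_with_hash(Bucket *table, int key, uint32_t hash)
    {
        for (unsigned i = 0; i < NBUCKETS; i++)
        {
            Bucket *b = &table[(hash + i) % NBUCKETS];

            if (!b->used)
            {
                b->used = true; b->hash = hash; b->key = key; b->state = 0;
                return b;
            }
            if (b->hash == hash && b->key == key)
                return b;
        }
        return NULL;   /* table full */
    }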
* Remove duplicated words in comments (Michael Paquier, 2020-02-18)
  Author: Daniel Gustafsson
  Reviewed-by: Vik Fearing
  Discussion: https://postgr.es/m/EBC3BFEB-664C-4063-81ED-29F1227DB012@yesql.se
* Refactor hash_agg_entry_size(). (Jeff Davis, 2020-02-06)
  Consolidate the calculations for hash table size estimation. This will help with upcoming Hash Aggregation work that will add additional call sites.
* Clean up newlines following left parentheses (Alvaro Herrera, 2020-01-30)
  We used to strategically place newlines after some function call left parentheses to make pgindent move the argument list a few chars to the left, so that the whole line would fit under 80 chars. However, pgindent no longer does that, so the newlines just made the code vertically longer for no reason. Remove those newlines, and reflow some of those lines for some extra naturality.
  Reviewed-by: Michael Paquier, Tom Lane
  Discussion: https://postgr.es/m/20200129200401.GA6303@alvherre.pgsql
* Fix edge case leading to agg transitions skipping ExecAggTransReparent() calls. (Andres Freund, 2020-01-20)
  The code checking whether an aggregate transition value needs to be reparented into the current context has always only compared the transition return value with the previous transition value by datum, i.e. without regard for NULLness. This normally works, because when the transition function returns NULL (via fcinfo->isnull), it'll return a value that won't be the same as its input value.
  But there's no hard requirement that that's the case. And it turns out, it's possible to hit this case (see discussion or reproducers), leading to a non-null transition value not being reparented, followed by a crash caused by that.
  Instead of adding another comparison of NULLness, instead have ExecAggTransReparent() ensure that pergroup->transValue ends up as 0 when the new transition value is NULL. That avoids having to add an additional branch to the much more common cases of the transition function returning the old transition value (which is a pointer in this case), and when the new value is different, but not NULL.
  In branches since 69c3936a149, also deduplicate the reparenting code between the expression evaluation based transitions, and the path for ordered aggregates.
  Reported-By: Teodor Sigaev, Nikita Glukhov
  Author: Andres Freund
  Discussion: https://postgr.es/m/bd34e930-cfec-ea9b-3827-a8bc50891393@sigaev.ru
  Backpatch: 9.4-, this issue has existed since at least 7.4
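  The essence of the fix is in what gets stored when the transition function returns NULL: rather than adding a NULLness comparison to the hot path, the stored Datum is forced to 0 so a stale pointer can never masquerade as the current value. A simplified sketch of that decision (toy types and helper; the real ExecAggTransReparent() also handles by-value versus by-reference transition types and frees the old value):

    #include <stdbool.h>
    #include <stdint.h>

    typedef uintptr_t Datum;   /* stand-in for PostgreSQL's Datum */

    typedef struct
    {
        Datum transValue;
        bool  transValueIsNull;
    } ToyPerGroup;

    /* Placeholder for copying a by-reference value into the group's context. */
    static Datum copy_into_group_context(Datum value) { return value; }

    /*
     * Decide what to store as the new transition value.  Values are compared
     * as raw Datums elsewhere, so a NULL result must be normalized to 0 here;
     * otherwise a stale pointer could survive as the "current" value and
     * later be mistaken for a live transition value.
     */
    static void
    reparent_transition_value(ToyPerGroup *pergroup, Datum newValue, bool newIsNull)
    {
        if (!newIsNull)
            pergroup->transValue = copy_into_group_context(newValue);
        else
            pergroup->transValue = (Datum) 0;   /* never keep a stale pointer */
        pergroup->transValueIsNull = newIsNull;
    }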
* Update copyrights for 2020 (Bruce Momjian, 2020-01-01)
  Backpatch-through: update all files in master, backpatch legal files through 9.4
* Make the order of the header file includes consistent in backend modules. (Amit Kapila, 2019-11-12)
  Similar to commits 7e735035f2 and dddf4cdc33, this commit makes the order of header file inclusion consistent for backend modules.
  In passing, removed a couple of duplicate inclusions.
  Author: Vignesh C
  Reviewed-by: Kuntal Ghosh and Amit Kapila
  Discussion: https://postgr.es/m/CALDaNm2Sznv8RR6Ex-iJO6xAdsxgWhCoETkaYX=+9DW3q0QCfA@mail.gmail.com
* Don't include utils/array.h from acl.h. (Andres Freund, 2019-08-16)
  For most uses of acl.h the details of what "Acl" looks like internally are irrelevant. It might make sense to move a lot of the implementation details into a separate header at a later point.
  The main motivation of this change is to avoid including fmgr.h (via array.h, which needs it for exposed structs) in a lot of files that otherwise don't need it. A subsequent commit will remove the fmgr.h include from a lot of files.
  Directly include utils/array.h and utils/expandeddatum.h from the files that need them, but previously included them indirectly, via acl.h.
  Author: Andres Freund
  Discussion: https://postgr.es/m/20190803193733.g3l3x3o42uv4qj7l@alap3.anarazel.de
* Fix slot type handling for Agg nodes performing internal sorts. (Andres Freund, 2019-07-25)
  Since 15d8f8312 we assert that - and since 7ef04e4d2cb2, 4da597edf1 rely on - the slot type for an expression's ecxt_{outer,inner,scan}tuple not changing, unless explicitly flagged as such. That allows us to either skip deforming (for a virtual tuple slot) or optimize the code for JIT accelerated deforming appropriately (for other known slot types).
  This assumption was sometimes violated for grouping sets, when nodeAgg.c internally uses tuplesorts, and the child node doesn't return a TTSOpsMinimalTuple type slot. Detect that case, and flag that the outer slot might not be "fixed".
  It's probably worthwhile to optimize this further in the future, and more granularly determine whether the slot is fixed. As we already instantiate per-phase transition and equal expressions, we could cheaply set the slot type appropriately for each phase. But that's a separate change from this bugfix.
  This commit does include a very minor optimization by avoiding the creation of a slot for handling tuplesorts, if no such sorts are performed. Previously we created that slot unnecessarily in the common case of computing all grouping sets via hashing. The code looked too confusing without that, as the conditions for needing a sort slot and flagging that the slot type isn't fixed, are the same.
  Reported-By: Ashutosh Sharma
  Author: Andres Freund
  Discussion: https://postgr.es/m/CAE9k0PmNaMD2oHTEAhRyxnxpaDaYkuBYkLa1dpOpn=RS0iS2AQ@mail.gmail.com
  Backpatch: 12-, where the bug was introduced in 15d8f8312
* Fix inconsistencies and typos in the tree (Michael Paquier, 2019-07-22)
  This is numbered take 7, and addresses a set of issues with code comments, variable names and unreferenced variables.
  Author: Alexander Lakhin
  Discussion: https://postgr.es/m/dff75442-2468-f74f-568c-6006e141062f@gmail.com
* Fix inconsistencies and typos in the tree (Michael Paquier, 2019-07-16)
  This is numbered take 7, and addresses a set of issues around:
  - Fixes for typos and incorrect reference names.
  - Removal of unneeded comments.
  - Removal of unreferenced functions and structures.
  - Fixes regarding variable name consistency.
  Author: Alexander Lakhin
  Discussion: https://postgr.es/m/10bfd4ac-3e7c-40ab-2b2e-355ed15495e8@gmail.com
* Fix array size allocation for HashAggregate hash keys. (Andrew Gierth, 2019-05-23)
  When there were duplicate columns in the hash key list, the array sizes could be miscomputed, resulting in access off the end of the array. Adjust the computation to ensure the array is always large enough. (I considered whether the duplicates could be removed in planning, but I can't rule out the possibility that duplicate columns might have different hash functions assigned. Simpler to just make sure it works at execution time regardless.)
  Bug apparently introduced in fc4b3dea2 as part of narrowing down the tuples stored in the hashtable. Reported by Colm McHugh of Salesforce, though I didn't use their patch. Backpatch back to version 10 where the bug was introduced.
  Discussion: https://postgr.es/m/CAFeeJoKKu0u+A_A9R9316djW-YW3-+Gtgvy3ju655qRHR3jtdA@mail.gmail.com
* Phase 2 pgindent run for v12. (Tom Lane, 2019-05-22)
  Switch to 2.1 version of pg_bsd_indent. This formats multiline function declarations "correctly", that is with additional lines of parameter declarations indented to match where the first line's left parenthesis is.
  Discussion: https://postgr.es/m/CAEepm=0P3FeTXRcU5B2W3jv3PgRVZ-kGUXLGfd42FFhUROO3ug@mail.gmail.com