path: root/src
...
* Fix handling of structure for bytea data type in ECPG (Michael Paquier, 2020-07-27)

  Some code paths dedicated to bytea used the structure for varchar. This did not lead to any actual bugs, as bytea and varchar have the same definition, but it could become a trap if one of these definitions changes for a new feature or a bug fix. Issue introduced by 050710b.

  Author: Shenhao Wang
  Reviewed-by: Vignesh C, Michael Paquier
  Discussion: https://postgr.es/m/07ac7dee1efc44f99d7f53a074420177@G08CNEXMBPEKD06.g08.fujitsu.local
  Backpatch-through: 12
* Fix LookupTupleHashEntryHash() pipeline-stall issue. (Jeff Davis, 2020-07-26)

  Refactor hash lookups in nodeAgg.c to improve performance.

  Author: Andres Freund and Jeff Davis
  Discussion: https://postgr.es/m/20200612213715.op4ye4q7gktqvpuo%40alap3.anarazel.de
  Backpatch-through: 13
* Tweak behavior of pg_stat_activity.leader_pid (Michael Paquier, 2020-07-26)

  The initial implementation of leader_pid in pg_stat_activity, added by b025f32, took the approach of strictly printing what a PGPROC entry includes. In short, if a backend had been involved in parallel query at least once, leader_pid would remain set for as long as the backend was alive. For a parallel group leader, this meant that the field would always be set after it participated at least once in parallel query, and after more discussion this was judged potentially confusing when using, for example, a connection pooler. This commit changes the data printed so that leader_pid is always NULL for a parallel group leader, and a non-NULL value shows up only for the parallel workers, and only while a parallel query is actually running, since workers are shut down once the query has completed. This does not change the definition of any catalog, so no catalog bump is needed.

  Per discussion with Justin Pryzby, Álvaro Herrera, Julien Rouhaud and me.
  Discussion: https://postgr.es/m/20200721035145.GB17300@paquier.xyz
  Backpatch-through: 13
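  As an illustration, one way to observe the new behavior is to query pg_stat_activity from another session while a parallel query is running elsewhere; this is only a sketch using the standard v13 columns:

      -- The leader's own row now shows leader_pid = NULL; only the parallel
      -- worker rows report the leader's PID, and only while the parallel
      -- query is still executing.
      SELECT pid, leader_pid, backend_type, state
      FROM pg_stat_activity
      WHERE backend_type IN ('client backend', 'parallel worker');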
* Fix buffer usage stats for nodes above Gather Merge. (Amit Kapila, 2020-07-25)

  Commit 85c9d347 addressed a similar problem for Gather and Gather Merge nodes but forgot to account for nodes above parallel nodes. This still works for nodes above a Gather node because we shut down the workers for Gather as soon as there are no more tuples. We could do a similar thing for Gather Merge as well, but it seems better to account for the stats during node shutdown after completing the execution.

  Reported-by: Stéphane Lorek, Jehan-Guillaume de Rorthais
  Author: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>
  Reviewed-by: Amit Kapila
  Backpatch-through: 10, where it was introduced
  Discussion: https://postgr.es/m/20200718160206.584532a2@firost
* Replace TS_execute's TS_EXEC_CALC_NOT flag with TS_EXEC_SKIP_NOT. (Tom Lane, 2020-07-24)

  It's fairly silly that ignoring NOT subexpressions is TS_execute's default behavior. It's wrong on its face and it encourages errors of omission. Moreover, the only two remaining callers that aren't specifying CALC_NOT are in ts_headline calculations, and it's very arguable that those are bugs: if you've specified "!foo" in your query, why would you want to get a headline that includes "foo"? Hence, rip that out and change the default behavior to be to calculate NOT accurately. As a concession to the slim chance that there is still somebody somewhere who needs the incorrect behavior, provide a new SKIP_NOT flag to explicitly request that.

  Back-patch into v13, mainly because it seems better to change this at the same time as the previous commit's rejiggering of TS_execute related APIs. Any outside callers affected by this change are probably also affected by that one.

  Discussion: https://postgr.es/m/CALT9ZEE-aLotzBg-pOp2GFTesGWVYzXA3=mZKzRDa_OKnLF7Mg@mail.gmail.com
* Fix assorted bugs by changing TS_execute's callback API to ternary logic. (Tom Lane, 2020-07-24)

  Text search sometimes failed to find valid matches, for instance '!crew:A'::tsquery might fail to locate 'crew:1B'::tsvector during an index search. The root of the issue is that TS_execute's callback functions were not changed to use ternary (yes/no/maybe) reporting when we made the search logic itself do so. It's somewhat annoying to break that API, but on the other hand we now see that any code using plain boolean logic is almost certainly broken since the addition of phrase search. There seem to be very few outside callers of this code anyway, so we'll just break them intentionally to get them to adapt.

  This allows removal of tsginidx.c's private re-implementation of TS_execute, since that's now entirely duplicative. It's also no longer necessary to avoid use of CALC_NOT in tsgistidx.c, since the underlying callbacks can now do something reasonable.

  Back-patch into v13. We can't change this in stable branches, but it seems not quite too late to fix it in v13.

  Tom Lane and Pavel Borisov
  Discussion: https://postgr.es/m/CALT9ZEE-aLotzBg-pOp2GFTesGWVYzXA3=mZKzRDa_OKnLF7Mg@mail.gmail.com
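  For reference, a SQL-level restatement of the match cited in the message (the bug itself showed up during GIN/GiST index searches rather than in this direct evaluation, so this only illustrates the intended semantics):

      -- "crew" carries weight B here, so the weight-A match fails and the
      -- negation should therefore succeed.
      SELECT 'crew:1B'::tsvector @@ '!crew:A'::tsquery;   -- expected: true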
* Fix error message. (Thomas Munro, 2020-07-23)

  Remove extra space. Back-patch to all releases, like commit 7897e3bb.

  Author: Lu, Chenyang <lucy.fnst@cn.fujitsu.com>
  Discussion: https://postgr.es/m/795d03c6129844d3803e7eea48f5af0d%40G08CNEXMBPEKD04.g08.fujitsu.local
* neqjoinsel must now pass through collation to eqjoinsel. (Tom Lane, 2020-07-21)

  Since commit 044c99bc5, eqjoinsel passes the passed-in collation to any operators it invokes. However, neqjoinsel failed to pass on whatever collation it got, so that if we invoked a collation-dependent operator via that code path, we'd get "could not determine which collation to use for string comparison" or the like.

  Per report from Justin Pryzby. Back-patch to v12, like the previous commit.

  Discussion: https://postgr.es/m/20200721191606.GL5748@telsasoft.com
* Assert that we don't insert nulls into attnotnull catalog columns. (Tom Lane, 2020-07-21)

  The executor checks for this error, and so does the bootstrap catalog loader, but we never checked for it in retail catalog manipulations. The folly of that has now been exposed, so let's add assertions checking it. Checking in CatalogTupleInsert[WithInfo] and CatalogTupleUpdate[WithInfo] should be enough to cover this.

  Back-patch to v10; the aforesaid functions didn't exist before that, and it didn't seem worth adapting the patch to the oldest branches. But given the risk of JIT crashes, I think we certainly need this as far back as v11.

  Pre-v13, we have to explicitly exclude pg_subscription.subslotname and pg_subscription_rel.srsublsn from the checks, since they are mismarked. (Even if we change our mind about applying BKI_FORCE_NULL in the branch tips, it doesn't seem wise to have assertions that would fire in existing databases.)

  Discussion: https://postgr.es/m/298837.1595196283@sss.pgh.pa.us
* Correctly mark pg_subscription_rel.srsublsn as nullable. (Tom Lane, 2020-07-20)

  The code has always set this column to NULL when it's not valid, but the catalog header's description failed to reflect that, as did the SGML docs, as did some of the code. To prevent future coding errors of the same ilk, let's hide the field from C code as though it were variable-length (which, in a sense, it is).

  As with commit 72eab84a5, we can only fix this cleanly in HEAD and v13; the problem extends further back but we'll need some klugery in the released branches.

  Discussion: https://postgr.es/m/367660.1595202498@sss.pgh.pa.us
* Fix construction of updated-columns bitmap in logical replication. (Tom Lane, 2020-07-20)

  Commit b9c130a1f failed to apply the publisher-to-subscriber column mapping while checking which columns were updated. Perhaps less significantly, it didn't exclude dropped columns either. This could result in an incorrect updated-columns bitmap and thus wrong decisions about whether to fire column-specific triggers on the subscriber while applying updates. In HEAD (since commit 9de77b545), it could also result in accesses off the end of the colstatus array, as detected by buildfarm member skink. Fix the logic, and adjust 003_constraints.pl so that the problem is exposed in unpatched code.

  In HEAD, also add some assertions to check that we don't access off the ends of these newly variable-sized arrays.

  Back-patch to v10, as b9c130a1f was.

  Discussion: https://postgr.es/m/CAH2-Wz=79hKQ4++c5A060RYbjTHgiYTHz=fw6mptCtgghH2gJA@mail.gmail.com
* Rename wal_keep_segments to wal_keep_size. (Fujii Masao, 2020-07-20)

  max_slot_wal_keep_size, which was added in v13, and wal_keep_segments are the GUC parameters that specify how much WAL to retain for standby servers. While max_slot_wal_keep_size accepts a number of bytes of WAL, wal_keep_segments accepts a number of WAL files. This difference in units between two similar parameters could be confusing to users. To alleviate this situation, this commit renames wal_keep_segments to wal_keep_size, and makes users specify the amount of WAL as a size instead of a number of WAL files.

  There was also the idea, in the discussion, of renaming max_slot_wal_keep_size to max_slot_wal_keep_segments. But we have been moving away from measuring in segments; for example, checkpoint_segments was replaced by max_wal_size. So we concluded by renaming wal_keep_segments to wal_keep_size.

  Back-patch to v13 where max_slot_wal_keep_size was added.

  Author: Fujii Masao
  Reviewed-by: Álvaro Herrera, Kyotaro Horiguchi, David Steele
  Discussion: https://postgr.es/m/574b4ea3-e0f9-b175-ead2-ebea7faea855@oss.nttdata.com
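  As a sketch of the unit change: a configuration that previously reserved 64 WAL files would now be written as a size, which with the default 16MB segment size comes to 1GB.

      -- Old style (pre-rename):  wal_keep_segments = 64
      -- New style, assuming the default 16MB WAL segment size:
      ALTER SYSTEM SET wal_keep_size = '1GB';
      SELECT pg_reload_conf();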
* Fix minor typo in nodeIncrementalSort.c. (Amit Kapila, 2020-07-20)

  Author: Vignesh C
  Reviewed-by: James Coleman
  Backpatch-through: 13, where it was introduced
  Discussion: https://postgr.es/m/CALDaNm0WjZqRvdeL59ZfYH0o4mLbKQ23jm-bnjXcFzgpANx55g@mail.gmail.com
* Correctly mark pg_subscription.subslotname as nullable. (Tom Lane, 2020-07-19)

  Due to the layout of this catalog, subslotname has to be explicitly marked BKI_FORCE_NULL, else initdb will default to the assumption that it's non-nullable. Since, in fact, CREATE/ALTER SUBSCRIPTION will store null values there, the existing marking is just wrong, and has been since this catalog was invented.

  We haven't noticed because not much in the system actually depends on attnotnull being truthful. However, JIT'ed tuple deconstruction does depend on that in some cases, allowing crashes or wrong answers in queries that inspect pg_subscription. Commit 9de77b545 quite accidentally exposed this on the buildfarm members that force JIT activation.

  Back-patch to v13. The problem goes further back, but we cannot force initdb in released branches, so some klugier solution will be needed there. Before working on that, push this simple fix to try to get the buildfarm back to green.

  Discussion: https://postgr.es/m/4118109.1595096139@sss.pgh.pa.us
* Cope with data-offset-less archive files during out-of-order restores. (Tom Lane, 2020-07-17)

  pg_dump produces custom-format archive files that lack data offsets when it is unable to seek its output. Up to now that's been a hazard for pg_restore. But if pg_restore is able to seek in the archive file, there is no reason to throw up our hands when asked to restore data blocks out of order. Instead, whenever we are searching for a data block, record the locations of the blocks we passed over (that is, fill in the missing data-offset fields in our in-memory copy of the TOC data). Then, when we hit a case that requires going backwards, we can just seek back.

  Also track the furthest point that we've searched to, and seek back to there when beginning a search for a new data block. This avoids possible O(N^2) time consumption, by ensuring that each data block is examined at most twice. (On Unix systems, that's at most twice per parallel-restore job; but since Windows uses threads here, the threads can share block location knowledge, reducing the amount of duplicated work.)

  We can also improve the code a bit by using fseeko() to skip over data blocks during the search.

  This is all of some use even in simple restores, but it's really significant for parallel pg_restore. In that case, we require seekability of the input already, and we will very probably need to do out-of-order restores.

  Back-patch to v12, as this fixes a regression introduced by commit 548e50976. Before that, parallel restore avoided requesting out-of-order restores, so it would work on a data-offset-less archive. Now it will again.

  Ideally this patch would include some test coverage, but there are other open bugs that need to be fixed before we can extend our coverage of parallel restore very much. Plan to revisit that later.

  David Gilman and Tom Lane; reviewed by Justin Pryzby
  Discussion: https://postgr.es/m/CALBH9DDuJ+scZc4MEvw5uO-=vRyR2=QF9+Yh=3hPEnKHWfS81A@mail.gmail.com
* Remove manual tracking of file position in pg_dump/pg_backup_custom.c. (Tom Lane, 2020-07-17)

  We do not really need to track the file position by hand. We were already relying on ftello() whenever the archive file is seekable, while if it's not seekable we don't need the file position info anyway because we're not going to be able to re-write the TOC.

  Moreover, that tracking was buggy since it failed to account for the effects of fseeko(). Somewhat remarkably, that seems not to have made for any live bugs up to now. We could fix the oversights, but it seems better to just get rid of the whole error-prone mess.

  In itself this is merely code cleanup. However, it's necessary infrastructure for an upcoming bug-fix patch (because that code *does* need valid file position after fseeko). The bug fix needs to go back as far as v12; hence, back-patch that far.

  Discussion: https://postgr.es/m/CALBH9DDuJ+scZc4MEvw5uO-=vRyR2=QF9+Yh=3hPEnKHWfS81A@mail.gmail.com
* Avoid CREATE INDEX unique index deduplication. (Peter Geoghegan, 2020-07-17)

  There is no advantage to attempting deduplication for a unique index during CREATE INDEX, since there cannot possibly be any duplicates. Doing so wastes cycles due to unnecessary copying. Make sure that we avoid it consistently.

  We already avoided unique index deduplication in the case where there were some spool2 tuples to merge. That didn't account for the fact that spool2 is removed early/unset in the common case where it has no tuples that need to be merged (i.e. it failed to account for the "spool2 turns out to be unnecessary" optimization in _bt_spools_heapscan()).

  Oversight in commit 0d861bbb, which added nbtree deduplication.

  Backpatch: 13-, where nbtree deduplication was introduced.
* Ensure that distributed timezone abbreviation files are plain ASCII. (Tom Lane, 2020-07-17)

  We had two occurrences of "Mitteleuropäische Zeit" in Europe.txt, though the corresponding entries in Default were spelled "Mitteleuropaeische Zeit". Standardize on the latter spelling to avoid questions of which encoding to use.

  While here, correct a couple of other trivial inconsistencies between the Default file and the supposedly-matching entries in the *.txt files, as exposed by some checking with comm(1). Also, add BDST to the Europe.txt file; it previously was only listed in Default. None of this has any direct functional effect.

  Per complaint from Christoph Berg. As usual for timezone data patches, apply to all branches.

  Discussion: https://postgr.es/m/20200716100743.GE3534683@msg.df7cb.de
* Fix whitespace (Peter Eisentraut, 2020-07-17)
* Resolve gratuitous tabs in SQL file (Peter Eisentraut, 2020-07-17)
* Fix signal handler setup for SIGHUP in the apply launcher process. (Amit Kapila, 2020-07-17)

  Commit 1e53fe0e70 unified the usage of the config-file reload flag by using the same signal handler function for the SIGHUP signal in many places in the code. By mistake, in the apply launcher process it registered that SIGHUP handler function for the wrong signal.

  Author: Bharath Rupireddy
  Reviewed-by: Dilip Kumar
  Backpatch-through: 13, where it was introduced
  Discussion: https://postgr.es/m/CALj2ACVzHCRnS20bOiEHaLtP5PVBENZQn4khdsSJQgOv_GM-LA@mail.gmail.com
* Switch pg_test_fsync to use binary mode on Windows (Michael Paquier, 2020-07-16)

  pg_test_fsync has always opened files in text mode on Windows, as this is the default mode used if not enforced by _setmode(). This fixes a failure when running pg_test_fsync in 12 and newer, because O_DSYNC and text mode are not able to work together nicely. We fixed the handling of O_DSYNC in 12~ for the tool by switching to the concurrent-safe version of fopen() in src/port/ with 0ba06e0. And 40cfe86, by enforcing the text mode for compatibility reasons if O_TEXT or O_BINARY are not specified by the caller, broke pg_test_fsync. For all versions, this avoids any translation overhead, and pg_test_fsync should test binary writes anyway, so it is a gain in all cases.

  Note that O_DSYNC is still not handled correctly in ~11, leading pg_test_fsync to show insanely high numbers for open_datasync() (using this property it is easy to notice that the binary mode is much faster). This would require a backpatch of 0ba06e0 and 40cfe86, which could potentially break existing applications, so this is left out.

  There are no TAP tests for this tool yet, so I have checked all builds manually using MSVC. We could invent a new option to run a single transaction instead of using a duration of 1s to keep the tests as short as possible, but this is left as future work.

  Thanks to Bruce Momjian for the discussion.

  Reported-by: Jeff Janes
  Author: Michael Paquier
  Discussion: https://postgr.es/m/16526-279ded30a230d275@postgresql.org
  Backpatch-through: 9.5
* Fix handling of missing files when using pg_rewind with online source (Michael Paquier, 2020-07-15)

  When working with an online source cluster, pg_rewind gets a list of all the files in the source data directory using a WITH RECURSIVE query, returning a NULL result for a file's metadata if it gets removed between the moment it is listed in a directory and the moment its metadata is obtained with pg_stat_file() (say a recycled WAL segment). The query result was processed in such a way that for each tuple we checked only whether the first file's metadata was NULL. This could have two consequences, both resulting in a failure of the rewind:
  - If the first tuple referred to a removed file, all files from the source would be ignored.
  - Any file actually missing would not be considered as such.

  While on it, rework the code slightly so that no values are saved if we know that a file is going to be skipped.

  Issue introduced by b36805f, so backpatch down to 9.5.

  Author: Justin Pryzby, Michael Paquier
  Reviewed-by: Daniel Gustafsson, Masahiko Sawada
  Discussion: https://postgr.es/m/20200713061010.GC23581@telsasoft.com
  Backpatch-through: 9.5
* Fix bitmap AND/OR scans on the inside of a nestloop partition-wise join. (Tom Lane, 2020-07-14)

  reparameterize_path_by_child() failed to reparameterize BitmapAnd and BitmapOr paths. This matters only if such a path is chosen as the inside of a nestloop partition-wise join, where we have to pass in parameters from the outside of the nestloop. If that did happen, we generated a bad plan that would likely lead to crashes at execution.

  This is not entirely reparameterize_path_by_child()'s fault though; it's the victim of an ancient decision (my ancient decision, I think) to not bother filling in param_info in BitmapAnd/Or path nodes. That caused the function to believe that such nodes and their children contain no parameter references and so need not be processed.

  In hindsight that decision looks pretty penny-wise and pound-foolish: while it saves a few cycles during path node setup, we do commonly need the information later. In particular, by reversing the decision and requiring valid param_info data in all nodes of a bitmap path tree, we can get rid of indxpath.c's get_bitmap_tree_required_outer() function, which computed the data on-demand. It's not unlikely that that nets out as a savings of cycles in many scenarios. A couple of other things in indxpath.c can be simplified as well.

  While here, get rid of some cases in reparameterize_path_by_child() that are visibly dead or useless, given that we only care about reparameterizing paths that can be on the inside of a parameterized nestloop. This case reminds one of the maxim that untested code probably does not work, so I'm unwilling to leave unreachable code in this function. (I did leave the T_Gather case in place even though it's not reached in the regression tests. It's not very clear to me when the planner might prefer to put Gather below rather than above a nestloop, but at least in principle the case might be interesting.)

  Per bug #16536, originally from Arne Roland but with a test case by Andrew Gierth. Back-patch to v11 where this code came in.

  Discussion: https://postgr.es/m/16536-2213ee0b3aad41fd@postgresql.org
* Fix timing issue with ALTER TABLE's validate constraint (David Rowley, 2020-07-14)

  An ALTER TABLE that validates a foreign key while another subcommand in the same statement has already caused a pending table rewrite could fail, due to ALTER TABLE attempting to validate the foreign key before the actual table rewrite takes place. This situation could result in an error such as:

  ERROR: could not read block 0 in file "base/nnnnn/nnnnn": read only 0 of 8192 bytes

  The failure here was due to the SPI call which validates the foreign key trying to access an index which is yet to be rebuilt.

  Similarly, we also incorrectly tried to validate CHECK constraints before the heap had been rewritten.

  The fix for both is to delay constraint validation until phase 3, after the table has been rewritten. For CHECK constraints this means a slight behavioral change. Previously ALTER TABLE VALIDATE CONSTRAINT on inheritance tables would be validated from the bottom up. This was different from the order of evaluation when a new CHECK constraint was added. The changes made here align the VALIDATE CONSTRAINT evaluation order for inheritance tables to be the same as ADD CONSTRAINT, which is generally top-down.

  Reported-by: Nazli Ugur Koyluoglu, using SQLancer
  Discussion: https://postgr.es/m/CAApHDvp%3DZXv8wiRyk_0rWr00skhGkt8vXDrHJYXRMft3TjkxCA%40mail.gmail.com
  Backpatch-through: 9.5 (all supported versions)
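  A hedged sketch of the failing pattern, using hypothetical table, column, and constraint names (the column type change is assumed to be one that forces a table rewrite, such as integer to bigint):

      -- Before this fix, combining a rewrite-forcing subcommand with
      -- VALIDATE CONSTRAINT in a single ALTER TABLE could fail with
      -- "could not read block 0 ..." because validation ran before the
      -- rewrite; with the fix, validation is delayed until after it.
      ALTER TABLE orders
          ALTER COLUMN amount TYPE bigint,
          VALIDATE CONSTRAINT orders_customer_id_fkey;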
* Fix comments related to table AMs (Michael Paquier, 2020-07-14)

  Incorrect function names were referenced. As this fixes some portions of tableam.h, which is mentioned in the docs as something to look at when implementing a table AM, backpatch down to 12 where this was introduced.

  Author: Hironobu Suzuki
  Discussion: https://postgr.es/m/8fe6d672-28dd-3f1d-7aed-ac2f6d599d3f@interdb.jp
  Backpatch-through: 12
* Cope with lateral references in the quals of a subquery RTE. (Tom Lane, 2020-07-13)

  The qual pushdown logic assumed that all Vars in a restriction clause must be Vars referencing subquery outputs; but since we introduced LATERAL, it's possible for such a Var to be a lateral reference instead. This led to an assertion failure in debug builds. In a non-debug build, there might be no ill effects (if qual_is_pushdown_safe decided the qual was unsafe anyway), or we could get failures later due to construction of an invalid plan. I've not gone to much length to characterize the possible failures, but at least segfaults in the executor have been observed.

  Given that this has been busted since 9.3 and it took this long for anybody to notice, I judge that the case isn't worth going to great lengths to optimize. Hence, fix by just teaching qual_is_pushdown_safe that such quals are unsafe to push down, matching the previous behavior when it accidentally didn't fail.

  Per report from Tom Ellis. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/20200713175124.GQ8220@cloudinit-builder
* Fix uninitialized value in segno calculation (Alvaro Herrera, 2020-07-13)

  Remove the previous hack in KeepLogSeg that added a case to deal with a (badly represented) invalid segment number. This was added for the sake of GetWALAvailability. But it's not needed if, in that function, we initialize the segment number to be retreated to the segment currently being written, so do that instead.

  Per valgrind-running buildfarm member skink, and some sparc64 animals.

  Discussion: https://postgr.es/m/1724648.1594230917@sss.pgh.pa.us
* Fix bugs in libpq's management of GSS encryption state. (Tom Lane, 2020-07-13)

  GSS-related resources should be cleaned up in pqDropConnection, not freePGconn, else the wrong things happen when resetting a connection or trying to switch to a different server. It's also critical to reset conn->gssenc there.

  During connection setup, initialize conn->try_gss at the correct place, else switching to a different server won't work right.

  Remove now-redundant cleanup of GSS resources around one (and, for some reason, only one) pqDropConnection call in connectDBStart.

  Per report from Kyotaro Horiguchi that psql would freeze up, rather than successfully resetting a GSS-encrypted connection after a server restart.

  This is YA oversight in commit b0b39f72b, so back-patch to v12.

  Discussion: https://postgr.es/m/20200710.173803.435804731896516388.horikyota.ntt@gmail.com
* Improvements to psql \dAo and \dAp commands (Alexander Korotkov, 2020-07-13)

  * Strategy number and purpose are essential information for an operator family's operators, so show those columns in non-verbose output.
  * The "Left/right arg type" \dAp column names are confusing, because those types don't necessarily match the function arguments. Rename them to "Registered left/right type".
  * Replace manual assembling of operator/procedure names with casts to regoperator/regprocedure.
  * Add schema-qualification for pg_catalog functions and tables.

  Reported-by: Peter Eisentraut, Tom Lane
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/2edc7b27-031f-b2b6-0db2-864241c91cb9%402ndquadrant.com
  Backpatch-through: 13
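  For instance, the non-verbose form now includes strategy and purpose; a sketch of a psql invocation, using the built-in btree integer_ops operator family:

      -- List the operators of the btree integer_ops family, including
      -- their strategy numbers, without needing the verbose \dAo+ form.
      \dAo btree integer_ops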
* HashAgg: before spilling tuples, set unneeded columns to NULL. (Jeff Davis, 2020-07-12)

  This is a replacement for 4cad2534. Instead of projecting all tuples going into a HashAgg, only remove unnecessary attributes when actually spilling. This avoids the regression for the in-memory case.

  Discussion: https://postgr.es/m/a2fb7dfeb4f50aa0a123e42151ee3013933cb802.camel%40j-davis.com
  Backpatch-through: 13
* Revert "Use CP_SMALL_TLIST for hash aggregate"Jeff Davis2020-07-12
| | | | | | | | | | This reverts commit 4cad2534da6d17067d98cf04be2dfc1bda8f2cd0 due to a performance regression. It will be replaced by a new approach in an upcoming commit. Reported-by: Andres Freund Discussion: https://postgr.es/m/20200614181418.mx4bvljmfkkhoqzl@alap3.anarazel.de Backpatch-through: 13
* Revert "Track statistics for spilling of changes from ReorderBuffer".Amit Kapila2020-07-13
| | | | | | | | | | | | | | | | | | | | The stats with this commit was available only for WALSenders, however, users might want to see for backends doing logical decoding via SQL API. Then, users might want to reset and access these stats across server restart which was not possible with the current patch. List of commits reverted: caa3c4242c Don't call elog() while holding spinlock. e641b2a995 Doc: Update the documentation for spilled transaction statistics. 5883f5fe27 Fix unportable printf format introduced in commit 9290ad198. 9290ad198b Track statistics for spilling of changes from ReorderBuffer. Additionaly, remove the release notes entry for this feature. Backpatch-through: 13, where it was introduced Discussion: https://postgr.es/m/CA+fd4k5_pPAYRTDrO2PbtTOe0eHQpBvuqmCr8ic39uTNmR49Eg@mail.gmail.com
* Avoid trying to restore table ACLs and per-column ACLs in parallel. (Tom Lane, 2020-07-11)

  Parallel pg_restore has always supposed that ACL items for different objects are independent and can be restored in parallel without conflicts. However, there is one case where this fails: because REVOKE on a table is defined to also revoke the privilege(s) at column level, we can't restore per-column ACLs till after we restore any table-level privileges on their table. Failure to honor this restriction can lead to "tuple concurrently updated" errors during parallel restore, or even to the per-column ACLs silently disappearing because the table-level REVOKE is executed afterwards.

  To fix, add a dependency from each column-level ACL item to its table's ACL item, if there is one. Note that this doesn't fix the hazard for pre-existing archive files, only for ones made with a corrected pg_dump. Given that the bug's been there quite awhile without field reports, I think this is acceptable.

  This requires changing the API of pg_dump's dumpACL() function. To keep its argument list from getting even longer, I removed the "CatalogId objCatId" argument, which has been unused for ages.

  Per report from Justin Pryzby. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/20200706050129.GW4107@telsasoft.com
* Forbid numeric NaN in jsonpath (Alexander Korotkov, 2020-07-11)

  The SQL standard doesn't define numeric Inf or NaN values. It appears even more ridiculous to support them in jsonpath, given that JSON doesn't support these values either. This commit forbids returning NaN from .double(), which was previously allowed. NaN can't be the result of an inner-jsonpath computation over non-NaN values, so we cannot expect NaN in jsonpath output.

  Reported-by: Tom Lane
  Discussion: https://postgr.es/m/203949.1591879542%40sss.pgh.pa.us
  Author: Alexander Korotkov
  Reviewed-by: Tom Lane
  Backpatch-through: 12
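  A minimal illustration of the changed behavior (previously this returned a double-precision NaN; with this commit it raises an error instead):

      SELECT jsonb_path_query('"NaN"', '$.double()');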
* Improve error reporting for jsonpath .double() method (Alexander Korotkov, 2020-07-11)

  When the jsonpath .double() method detects that a numeric or string value can't be converted to double precision, it throws an error. This commit makes these errors explicitly express the reason for the failure.

  Discussion: https://postgr.es/m/CAPpHfdtqJtiSXkP7tOXez18NxhLUH_-75bL8%3DOce4Ki%2Bbv7V6Q%40mail.gmail.com
  Author: Alexander Korotkov
  Reviewed-by: Tom Lane
  Backpatch-through: 12
* Log the location field before any backtrace (Peter Eisentraut, 2020-07-10)

  This order makes more sense because the location is effectively at the lowest level of the backtrace.

  Discussion: https://www.postgresql.org/message-id/flat/90f5fa04-c410-a54e-9449-aa3749fb7972%402ndquadrant.com
* Remove WARNING message from brin_desummarize_range (Alvaro Herrera, 2020-07-09)

  This message was being emitted on the grounds that only crashed summarization could cause it, but in reality even an aborted vacuum could do it ... which makes it way too noisy, particularly since it shows up in regression tests and makes them die. Reported by Tom Lane.

  Discussion: https://postgr.es/m/489091.1593534251@sss.pgh.pa.us
* Tighten up Windows CRLF conversion in our TAP test scripts. (Tom Lane, 2020-07-09)

  Back-patch commits 91bdf499b and ffb4cee43, so that all branches agree on when and how to do Windows CRLF conversion. This should close the referenced thread.

  Thanks to Andrew Dunstan for discussion/review.

  Discussion: https://postgr.es/m/412ae8da-76bb-640f-039a-f3513499e53d@gmx.net
* Fix pg_current_logfile() to not emit a carriage return on Windows. (Tom Lane, 2020-07-09)

  Due to not having our signals straight about CRLF vs. LF line termination, the output of pg_current_logfile() included a trailing \r on Windows. To fix, force the file descriptor it uses into text mode.

  While here, move a couple of local variable declarations to make the function's logic clearer.

  In v12 and v13, also back-patch the test added by 1c4e88e2f so that this function has some test coverage. However, the 004_logrotate.pl test script doesn't exist before v12, and it didn't seem worth adding to older branches just for this.

  Per report from Thomas Kellerer. Back-patch to v10 where this function was added.

  Discussion: https://postgr.es/m/412ae8da-76bb-640f-039a-f3513499e53d@gmx.net
* Fix whitespace in HashAgg EXPLAIN ANALYZE (David Rowley, 2020-07-09)

  The Sort node does not put a space between the number of kilobytes and the "kB" of memory or disk space used, but HashAgg does. Here we align HashAgg to do the same as Sort. Sort has been displaying this information for longer than HashAgg, so it makes sense to align HashAgg to Sort rather than the other way around.

  Reported-by: Justin Pryzby
  Discussion: https://postgr.es/m/20200708163021.GW4107@telsasoft.com
  Backpatch-through: 13, where the hashagg started showing these details
* Fix incorrect variable datatype. (Fujii Masao, 2020-07-08)

  Since slot_keep_segs indicates a number of WAL segments, not an LSN, its datatype should not be XLogRecPtr. Back-patch to v13 where this issue was added.

  Reported-by: Atsushi Torikoshi
  Author: Atsushi Torikoshi, tweaked by Fujii Masao
  Discussion: https://postgr.es/m/ebd0d674f3e050222238a960cac5251a@oss.nttdata.com
* Morph pg_replication_slots.min_safe_lsn to safe_wal_size (Alvaro Herrera, 2020-07-07)

  The previous definition of the column was almost universally disliked, so provide this updated definition which is more useful for monitoring purposes: a large positive value is good, while zero or a negative value means danger. This should be operationally more convenient.

  Backpatch to 13, where the new column to pg_replication_slots (and the feature it represents) were added.

  Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
  Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
  Reported-by: Fujii Masao <masao.fujii@oss.nttdata.com>
  Discussion: https://postgr.es/m/9ddfbf8c-2f67-904d-44ed-cf8bc5916228@oss.nttdata.com
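  To check slots under the new definition, one can run something like the sketch below; safe_wal_size reports how many bytes of WAL can still be written before the slot is in danger of losing required WAL:

      SELECT slot_name, wal_status, safe_wal_size
      FROM pg_replication_slots;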
* Remove extra whitespace in comments atop ReorderBufferCheckMemoryLimit. (Amit Kapila, 2020-07-06)

  Backpatch-through: 13, where it was introduced
* Remove unused function parameter in end_parallel_vacuum. (Amit Kapila, 2020-07-06)

  Author: Vignesh C
  Reviewed-by: Sawada Masahiko
  Backpatch-through: 13, where it was introduced
  Discussion: https://postgr.es/m/CALDaNm3Ppt71NafGY5mk3V2i3Q+mm93pVibDq-0NpW7WU67Jcg@mail.gmail.com
* Rename enable_incrementalsort for clarity (Peter Eisentraut, 2020-07-05)

  Author: James Coleman <jtc331@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/df652910-e985-9547-152c-9d4357dc3979%402ndquadrant.com
* Fix "ignoring return value" complaints from commit 96d1f423f9Joe Conway2020-07-04
| | | | | | | | | | | | | | | The cfbot and some BF animals are complaining about the previous read_binary_file commit because of ignoring return value of ‘fread’. So let's make everyone happy by testing the return value even though not strictly needed. Reported by Justin Pryzby, and suggested patch by Tom Lane. Backpatched to v11 same as the previous commit. Reported-By: Justin Pryzby Reviewed-By: Tom Lane Discussion: https://postgr.es/m/flat/969b8d82-5bb2-5fa8-4eb1-f0e685c5d736%40joeconway.com Backpatch-through: 11
* Read until EOF vice stat-reported size in read_binary_file (Joe Conway, 2020-07-04)

  read_binary_file(), used by SQL functions pg_read_file() and friends, uses stat to determine the file length to read when not passed an explicit length as an argument. This is problematic, for example, if the file being read is a virtual file with a stat-reported length of zero. Arrange to read until EOF, or until the StringInfo data string length limit is reached, instead.

  Original complaint and patch by me, with significant review, corrections, advice, and code optimizations by Tom Lane. Backpatched to v11. Prior to that, only paths relative to the data and log dirs were allowed for files, so no "zero length" files were reachable anyway.

  Reviewed-By: Tom Lane
  Discussion: https://postgr.es/m/flat/969b8d82-5bb2-5fa8-4eb1-f0e685c5d736%40joeconway.com
  Backpatch-through: 11
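  A hedged illustration of the kind of file this helps with (it assumes a Linux host and a role allowed to read server files; /proc files report a stat size of zero even though they have content):

      -- Before this change, reading by the stat-reported length could
      -- return an empty result; reading until EOF returns the contents.
      SELECT pg_read_file('/proc/self/status');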
* Clamp total-tuples estimates for foreign tables to ensure planner sanity. (Tom Lane, 2020-07-03)

  After running GetForeignRelSize for a foreign table, adjust rel->tuples to be at least as large as rel->rows. This prevents bizarre behavior in estimate_num_groups() and perhaps other places, especially in the scenario where rel->tuples is zero because pg_class.reltuples is (suggesting that ANALYZE has never been run for the table). As things stood, we'd end up estimating one group out of any GROUP BY on such a table, whereas the default group-count estimate is more likely to result in a sane plan.

  Also, clarify in the documentation that GetForeignRelSize has the option to override the rel->tuples value if it has a better idea of what to use than what is in pg_class.reltuples.

  Per report from Jeff Janes. Back-patch to all supported branches.

  Patch by me; thanks to Etsuro Fujita for review
  Discussion: https://postgr.es/m/CAMkU=1xNo9cnan+Npxgz0eK7394xmjmKg-QEm8wYG9P5-CcaqQ@mail.gmail.com
* Fix temporary tablespaces for shared filesets some more. (Tom Lane, 2020-07-03)

  Commit ecd9e9f0b fixed the problem in the wrong place, causing unwanted side-effects on the behavior of GetNextTempTableSpace(). Instead, let's make SharedFileSetInit() responsible for subbing in the value of MyDatabaseTableSpace when the default tablespace is called for.

  The convention about what is in the tempTableSpaces[] array is evidently insufficiently documented, so try to improve that.

  It also looks like SharedFileSetInit() is doing the wrong thing in the case where temp_tablespaces is empty. It was hard-wiring use of the pg_default tablespace, but it seems like using MyDatabaseTableSpace is more consistent with what happens for other temp files.

  Back-patch the reversion of PrepareTempTablespaces()'s behavior to 9.5, as ecd9e9f0b was. The changes in SharedFileSetInit() go back to v11 where that was introduced. (Note there is net zero code change before v11 from these two patch sets, so nothing to release-note.)

  Magnus Hagander and Tom Lane
  Discussion: https://postgr.es/m/CABUevExg5YEsOvqMxrjoNvb3ApVyH+9jggWGKwTDFyFCVWczGQ@mail.gmail.com