path: root/src/backend/utils
* Fix EXPLAIN of SEARCH BREADTH FIRST with a constant initial value. (Tom Lane, 2022-10-16)

  If the non-recursive term of a SEARCH BREADTH FIRST recursive query has only constants in its target list, the planner will fold the starting RowExpr added by rewrite into a simple Const of type RECORD. The executor doesn't have any problem with that --- but EXPLAIN VERBOSE will encounter the Const as the ultimate source of truth about what the field names of the SET column are, and it didn't know what to do with that. Fortunately, we can pull the identifying typmod out of the Const, in much the same way that record_out would.

  For reasons that remain a bit obscure to me, this only fails with SEARCH BREADTH FIRST, not SEARCH DEPTH FIRST or CYCLE. But I added regression test cases for both of those options too, just to make sure we don't break it in future.

  Per bug #17644 from Matthijs van der Vleuten. Back-patch to v14 where these constructs were added.

  Discussion: https://postgr.es/m/17644-3bd1f3036d6d7a16@postgresql.org
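
  For illustration (not part of the commit message), a query of the shape that triggers the report might look like this; the CTE and column names are made up:

    EXPLAIN (VERBOSE, COSTS OFF)
    WITH RECURSIVE t(n) AS (
        SELECT 1                        -- all-constant non-recursive term
      UNION ALL
        SELECT n + 1 FROM t WHERE n < 3
    ) SEARCH BREADTH FIRST BY n SET ordercol
    SELECT * FROM t;

  Before the fix, only the EXPLAIN VERBOSE output for the SET column could fail; the query itself executed fine.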

* Change some errdetail() to errdetail_internal() (Alvaro Herrera, 2022-09-28)

  This prevents marking the argument string for translation for gettext, and it also prevents the given string (which is already translated) from being translated at runtime.

  Also, mark the strings used as arguments to check_rolespec_name for translation.

  Backpatch all the way back as appropriate. None of this is caught by any tests (necessarily so), so I verified it manually.

* Suppress variable-set-but-not-used warnings from clang 15. (Tom Lane, 2022-09-20)

  clang 15+ will issue a set-but-not-used warning when the only use of a variable is in autoincrements (e.g., "foo++;"). That's perfectly sensible, but it detects a few more cases that we'd not noticed before. Silence the warnings with our usual methods, such as PG_USED_FOR_ASSERTS_ONLY, or in one case by actually removing a useless variable.

  One thing that we can't nicely get rid of is that with %pure-parser, Bison emits "yynerrs" as a local variable that falls foul of this warning. To silence those, I inserted "(void) yynerrs;" in the top-level productions of affected grammars.

  Per recently-established project policy, this is a candidate for back-patching into out-of-support branches: it suppresses annoying compiler warnings but changes no behavior. Hence, back-patch to 9.5, which is as far as these patches go without issues. (A preliminary check shows that the prior branches need some other set-but-not-used cleanups too, so I'll leave them for another day.)

  Discussion: https://postgr.es/m/514615.1663615243@sss.pgh.pa.us

* Fix NaN comparison in circle_same test (Daniel Gustafsson, 2022-09-12)

  Commit c4c340088 changed geometric operators to use float4 and float8 functions, and handle NaN's in a better way. The circle sameness test had a typo in the code which resulted in all comparisons where the left circle has a NaN radius being considered the same:

    postgres=# select '<(0,0),NaN>'::circle ~= '<(0,0),1>'::circle;
     ?column?
    ----------
     t
    (1 row)

  This fixes the sameness test to consider the radius of both the left and right circle.

  Backpatch to v12 where this was introduced.

  Author: Ranier Vilela <ranier.vf@gmail.com>
  Discussion: https://postgr.es/m/CAEudQAo8dK=yctg2ZzjJuzV4zgOPBxRU5+Kb+yatFiddtQk6Rw@mail.gmail.com
  Backpatch-through: v12

* Fix some possibly latent bugs in slab.c (David Rowley, 2022-09-01)

  Primarily, this fixes an incorrect calculation in SlabCheck which was looking in the wrong byte for the sentinel check. The reason that we've never noticed this before in the form of a failing sentinel check is because the pre-check to this always fails because all current core users of slab contexts have a chunk size which is already MAXALIGNed, therefore there's never any space for the sentinel byte. It is possible that an extension needs to use a slab context and if they do with a chunk size that's not MAXALIGNed, then they'll likely get errors about overwritten sentinel bytes.

  Additionally, this patch changes various calculations which are being done based on the sizeof(SlabBlock). Currently, sizeof(SlabBlock) is a multiple of 8, therefore sizeof(SlabBlock) is the same as MAXALIGN(sizeof(SlabBlock)), however, if we were to ever have to add any fields to that struct as part of a bug fix, then SlabAlloc could end up returning a non-MAXALIGNed pointer. To be safe, let's ensure we always MAXALIGN sizeof(SlabBlock) before using it in any calculations.

  This patch has already been applied to master in d5ee4db0e.

  Diagnosed-by: Tomas Vondra, Tom Lane
  Author: Tomas Vondra, David Rowley
  Discussion: https://postgr.es/m/CAA4eK1%2B1JyW5TiL%3DyV-3Uq1CrfnTyn0Xrk5uArt31Z%3D8rgPhXQ%40mail.gmail.com
  Backpatch-through: 10

* Defend against stack overrun in a few more places. (Tom Lane, 2022-08-24)

  SplitToVariants() in the ispell code, lseg_inside_poly() in geo_ops.c, and regex_selectivity_sub() in selectivity estimation could recurse until stack overflow; fix by adding check_stack_depth() calls. So could next() in the regex compiler, but that case is better fixed by converting its tail recursion to a loop. (We probably get better code that way too, since next() can now be inlined into its sole caller.)

  There remains a reachable stack overrun in the Turkish stemmer, but we'll need some advice from the Snowball people about how to fix that.

  Per report from Egor Chindyaskin and Alexander Lakhin. These mistakes are old, so back-patch to all supported branches.

  Richard Guo and Tom Lane
  Discussion: https://postgr.es/m/1661334672.728714027@f473.i.mail.ru

* Preserve memory context of VarStringSortSupport buffers. (Tom Lane, 2022-08-14)

  When enlarging the work buffers of a VarStringSortSupport object, varstrfastcmp_locale was careful to keep them in the ssup_cxt memory context; but varstr_abbrev_convert just used palloc(). The latter creates a hazard that the buffers could be freed out from under the VarStringSortSupport object, resulting in stomping on whatever gets allocated in that memory later.

  In practice, because we only use this code for ICU collations (cf. 3df9c374e), the problem is confined to use of ICU collations. I believe it may have been unreachable before the introduction of incremental sort, too, as traditional sorting usually just uses one context for the duration of the sort.

  We could fix this by making the broken stanzas in varstr_abbrev_convert match the non-broken ones in varstrfastcmp_locale. However, it seems like a better idea to dodge the issue altogether by replacing the pfree-and-allocate-anew coding with repalloc, which automatically preserves the chunk's memory context. This fix does add a few cycles because repalloc will copy the chunk's content, which the existing coding assumes is useless. However, we don't expect that these buffer enlargement operations are performance-critical.

  Besides that, it's far from obvious that copying the buffer contents isn't required, since these stanzas make no effort to mark the buffers invalid by resetting last_returned, cache_blob, etc. That seems to be safe upon examination, but it's fragile and could easily get broken in future, which wouldn't get revealed in testing with short-to-moderate-size strings.

  Per bug #17584 from James Inform. Whether or not the issue is reachable in the older branches, this code has been broken on its own terms from its introduction, so patch all the way back.

  Discussion: https://postgr.es/m/17584-95c79b4a7d771f44@postgresql.org

* Fix trim_array() for zero-dimensional array argument. (Tom Lane, 2022-07-31)

  The code tried to access ARR_DIMS(v)[0] and ARR_LBOUND(v)[0] whether or not those values exist. This made the range check on the "n" argument unstable --- it might or might not fail, and if it did it would report garbage for the allowed upper limit. These bogus accesses would probably annoy Valgrind, and if you were very unlucky even lead to SIGSEGV.

  Report and fix by Martin Kalcher. Back-patch to v14 where this function was added.

  Discussion: https://postgr.es/m/baaeb413-b8a8-4656-5757-ef347e5ec11f@aboutsource.net
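
  For illustration (not part of the commit message), an empty array is zero-dimensional, so a call of this shape exercised the broken range check:

    SELECT trim_array('{}'::int[], 1);
    -- with the fix this should fail consistently with a sane range-check error,
    -- rather than unpredictably and with a garbage upper limit in the message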
* Allow "in place" tablespaces.Alvaro Herrera2022-07-27
| | | | | | | | | | | | | | | | | | | | | | This is a backpatch to branches 10-14 of the following commits: 7170f2159fb2 Allow "in place" tablespaces. c6f2f01611d4 Fix pg_basebackup with in-place tablespaces. f6f0db4d6240 Fix pg_tablespace_location() with in-place tablespaces 7a7cd84893e0 doc: Remove mention to in-place tablespaces for pg_tablespace_location() 5344723755bd Remove unnecessary Windows-specific basebackup code. In-place tablespaces were introduced as a testing helper mechanism, but they are going to be used for a bugfix in WAL replay to be backpatched to all stable branches. I (Álvaro) had to adjust some code to account for lack of get_dirent_type() in branches prior to 14. Author: Thomas Munro <thomas.munro@gmail.com> Author: Michaël Paquier <michael@paquier.xyz> Author: Álvaro Herrera <alvherre@alvh.no-ip.org> Discussion: https://postgr.es/m/20220722081858.omhn2in5zt3g4nek@alvherre.pgsql

* Fix ruleutils issues with dropped cols in functions-returning-composite. (Tom Lane, 2022-07-21)

  Due to lack of concern for the case in the dependency code, it's possible to drop a column of a composite type even though stored queries have references to the dropped column via functions-in-FROM that return the composite type. There are "soft" references, namely FROM-clause aliases for such columns, and "hard" references, that is actual Vars referring to them. The right fix for hard references is to add dependencies preventing the drop; something we've known for many years and not done (and this commit still doesn't address it). A "soft" reference shouldn't prevent a drop though.

  We've been around on this before (cf. 9b35ddce9, 2c4debbd0), but nobody had noticed that the current behavior can result in dump/reload failures, because ruleutils.c can print more column aliases than the underlying composite type now has. So we need to rejigger the column-alias-handling code to treat such columns as dropped and not print aliases for them.

  Rather than writing new code for this, I used expandRTE() which already knows how to figure out which function result columns are dropped. I'd initially thought maybe we could use expandRTE() in all cases, but that fails for EXPLAIN's purposes, because the planner strips a lot of RTE infrastructure that expandRTE() needs. So this patch just uses it for unplanned function RTEs and otherwise does things the old way.

  If there is a hard reference (Var), then removing the column alias causes us to fail to print the Var, since there's no longer a name to print. Failing seems less desirable than printing a made-up name, so I made it print "?dropped?column?" instead.

  Per report from Timo Stolz. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/5c91267e-3b6d-5795-189c-d15a55d61dbb@nullachtvierzehn.de

* Fix omissions in support for the "regcollation" type. (Tom Lane, 2022-07-17)

  The patch that added regcollation doesn't seem to have been too thorough about supporting it everywhere that other reg* types are supported. Fix that. (The find_expr_references omission is moderately serious, since it could result in missing expression dependencies. The others are less exciting.)

  Noted while fixing bug #17483. Back-patch to v13 where regcollation was added.

  Discussion: https://postgr.es/m/1423433.1652722406@sss.pgh.pa.us

* Invent qsort_interruptible(). (Tom Lane, 2022-07-12)

  Justin Pryzby reported that some scenarios could cause gathering of extended statistics to spend many seconds in an un-cancelable qsort() operation. To fix, invent qsort_interruptible(), which is just like qsort_arg() except that it will also do CHECK_FOR_INTERRUPTS every so often. This bloats the backend by a couple of kB, which seems like a good investment. (We considered just enabling CHECK_FOR_INTERRUPTS in the existing qsort and qsort_arg functions, but there are some callers for which that'd demonstrably be unsafe. Opt-in seems like a better way.)

  For now, just apply qsort_interruptible() in statistics collection. There's probably more places where it could be useful, but we can always change other call sites as we find problems.

  Back-patch to v14. Before that we didn't have extended stats on expressions, so that the problem was less severe. Also, this patch depends on the sort_template infrastructure introduced in v14.

  Tom Lane and Justin Pryzby
  Discussion: https://postgr.es/m/20220509000108.GQ28830@telsasoft.com

* Remove %error-verbose directive from jsonpath parser (Andrew Dunstan, 2022-07-03)

  None of the other bison parsers contains this directive, and it gives rise to some unfortunate and impenetrable messages, so just remove it.

  Backpatch to release 12, where it was introduced.

  Per gripe from Erik Rijkers
  Discussion: https://postgr.es/m/ba069ce2-a98f-dc70-dc17-2ccf2a9bf7c7@xs4all.nl

* Fix visibility check when XID is committed in CLOG but not in procarray. (Heikki Linnakangas, 2022-06-27)

  TransactionIdIsInProgress had a fast path to return 'false' if the single-item CLOG cache said that the transaction was known to be committed. However, that was wrong, because a transaction is first marked as committed in the CLOG but doesn't become visible to others until it has removed its XID from the proc array. That could lead to an error:

    ERROR: t_xmin is uncommitted in tuple to be updated

  or for an UPDATE to go ahead without blocking, before the previous UPDATE on the same row was made visible. The window is usually very short, but synchronous replication makes it much wider, because the wait for synchronous replica happens in that window.

  Another thing that makes it hard to hit is that it's hard to get such a commit-in-progress transaction into the single item CLOG cache. Normally, if you call TransactionIdIsInProgress on such a transaction, it determines that the XID is in progress without checking the CLOG and without populating the cache. One way to prime the cache is to explicitly call pg_xact_status() on the XID. Another way is to use a lot of subtransactions, so that the subxid cache in the proc array is overflown, making TransactionIdIsInProgress rely on pg_subtrans and CLOG checks.

  This has been broken ever since it was introduced in 2008, but the race condition is very hard to hit, especially without synchronous replication. There were a couple of reports of the error starting from summer 2021, but no one was able to find the root cause then.

  TransactionIdIsKnownCompleted() is now unused. In 'master', remove it, but I left it in place in backbranches in case it's used by extensions.

  Also change pg_xact_status() to check TransactionIdIsInProgress(). Previously, it only checked the CLOG, and returned "committed" before the transaction was actually made visible to other queries. Note that this also means that you cannot use pg_xact_status() to reproduce the bug anymore, even if the code wasn't fixed.

  Report and analysis by Konstantin Knizhnik. Patch by Simon Riggs, with the pg_xact_status() change added by me.

  Author: Simon Riggs
  Reviewed-by: Andres Freund
  Discussion: https://www.postgresql.org/message-id/flat/4da7913d-398c-e2ad-d777-f752cf7f0bbb%40garret.ru
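
  For illustration (not part of the commit message), pg_xact_status() is one way to observe the corrected behavior; the XID value below is hypothetical:

    SELECT pg_current_xact_id();          -- in the session holding the open transaction; say it returns 1234
    SELECT pg_xact_status('1234'::xid8);  -- from another session: should report 'in progress'
                                          -- until the XID has also left the proc array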

* Fix relptr's encoding of the base address. (Thomas Munro, 2022-06-27)

  Previously, we encoded both NULL and the first byte at the base address as 0. That confusion led to the assertion in commit e07d4ddc, which failed when min_dynamic_shared_memory was used. Give them distinct encodings, by switching to 1-based offsets for non-NULL pointers. Also improve macro hygiene in passing (missing/misplaced parentheses), and remove open-coded access to the raw offset value from freepage.c/h.

  Although e07d4ddc was back-patched to 10, the only code that actually makes use of relptr at the base address arrived in 84b1c63a, so no need to back-patch further than 14 for now.

  Reported-by: Justin Pryzby <pryzby@telsasoft.com>
  Reviewed-by: Robert Haas <robertmhaas@gmail.com>
  Discussion: https://postgr.es/m/20220519193839.GT19626%40telsasoft.com

* Fix SPI's handling of errors during transaction commit. (Tom Lane, 2022-06-22)

  SPI_commit previously left it up to the caller to recover from any error occurring during commit. Since that's complicated and requires use of low-level xact.c facilities, it's not too surprising that no caller got it right. Let's move the responsibility for cleanup into spi.c. Doing that requires redefining SPI_commit as starting a new transaction, so that it becomes equivalent to SPI_commit_and_chain except that you get default transaction characteristics instead of preserving the prior transaction's characteristics. We can make this pretty transparent API-wise by redefining SPI_start_transaction() as a no-op. Callers that expect to do something in between might be surprised, but available evidence is that no callers do so.

  Having made that API redefinition, we can fix this mess by having SPI_commit[_and_chain] trap errors and start a new, clean transaction before re-throwing the error. Likewise for SPI_rollback[_and_chain]. Some cleanup is also needed in AtEOXact_SPI, which was nowhere near smart enough to deal with SPI contexts nested inside a committing context.

  While plperl and pltcl need no changes beyond removing their now-useless SPI_start_transaction() calls, plpython needs some more work because it hadn't gotten the memo about catching commit/rollback errors in the first place. Such an error resulted in longjmp'ing out of the Python interpreter, which leaks Python stack entries at present and is reported to crash Python 3.11 altogether. Add the missing logic to catch such errors and convert them into Python exceptions.

  This is a back-patch of commit 2e517818f. That's now aged long enough to reduce the concerns about whether it will break something, and we do need to ensure that supported branches will work with Python 3.11.

  Peter Eisentraut and Tom Lane
  Discussion: https://postgr.es/m/3375ffd8-d71c-2565-e348-a597d6e739e3@enterprisedb.com
  Discussion: https://postgr.es/m/17416-ed8fe5d7213d6c25@postgresql.org

* Ensure ParseTzFile() closes the input file after failing. (Tom Lane, 2022-05-31)

  We hadn't noticed this because (a) few people feed invalid timezone abbreviation files to the server, and (b) in typical scenarios guc.c would throw ereport(ERROR) and then transaction abort handling would silently clean up the leaked file reference. However, it was possible to observe file leakage warnings if one breaks an already-active abbreviation file, because guc.c does not throw ERROR when loading supposedly-validated settings during session start or SIGHUP processing.

  Report and fix by Kyotaro Horiguchi (cosmetic adjustments by me)

  Discussion: https://postgr.es/m/20220530.173740.748502979257582392.horikyota.ntt@gmail.com

* Handle NULL for short descriptions of custom GUC variables (Michael Paquier, 2022-05-28)

  If a short description is specified as NULL in one of the various DefineCustomXXXVariable() functions available to external modules to define a custom parameter, SHOW ALL would crash. This change teaches SHOW ALL to properly handle NULL short descriptions, as well as any code paths that manipulate it, to gain in flexibility. Note that help_config.c was already able to do that, when describing a set of GUCs for postgres --describe-config.

  Author: Steve Chavez
  Reviewed by: Nathan Bossart, Andres Freund, Michael Paquier, Tom Lane
  Discussion: https://postgr.es/m/CAGRrpzY6hO-Kmykna_XvsTv8P2DshGiU6G3j8yGao4mk0CqjHA%40mail.gmail.com
  Backpatch-through: 10
* Show 'AS "?column?"' explicitly when it's important.Tom Lane2022-05-21
| | | | | | | | | | | | | | | | | | | | | | | | | | | | ruleutils.c was coded to suppress the AS label for a SELECT output expression if the column name is "?column?", which is the parser's fallback if it can't think of something better. This is fine, and avoids ugly clutter, so long as (1) nothing further up in the parse tree relies on that column name or (2) the same fallback would be assigned when the rule or view definition is reloaded. Unfortunately (2) is far from certain, both because ruleutils.c might print the expression in a different form from how it was originally written and because FigureColname's rules might change in future releases. So we shouldn't rely on that. Detecting exactly whether there is any outer-level use of a SELECT column name would be rather expensive. This patch takes the simpler approach of just passing down a flag indicating whether there *could* be any outer use; for example, the output column names of a SubLink are not referenceable, and we also do not care about the names exposed by the right-hand side of a setop. This is sufficient to suppress unwanted clutter in all but one case in the regression tests. That seems like reasonable evidence that it won't be too much in users' faces, while still fixing the cases we need to fix. Per bug #17486 from Nicolas Lutic. This issue is ancient, so back-patch to all supported branches. Discussion: https://postgr.es/m/17486-1ad6fd786728b8af@postgresql.org
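
  For illustration (not part of the commit message), a hypothetical view where the fallback name is referenced from an outer query level, so the alias must be printed for the stored definition to reload reliably:

    CREATE VIEW v AS SELECT "?column?" FROM (SELECT 1 + 1) AS s;
    SELECT pg_get_viewdef('v'::regclass);
    -- the inner target list entry should now be dumped with AS "?column?" attached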
* Revert "Disallow infinite endpoints in generate_series() for timestamps."Tom Lane2022-05-09
| | | | | | | | | | | | | | | | | | | This reverts commit eafdf9de06e9b60168f5e47cedcfceecdc6d4b5f and its back-branch counterparts. Corey Huinker pointed out that we'd discussed this exact change back in 2016 and rejected it, on the grounds that there's at least one usage pattern with LIMIT where an infinite endpoint can usefully be used. Perhaps that argument needs to be re-litigated, but there's no time left before our back-branch releases. To keep our options open, restore the status quo ante; if we do end up deciding to change things, waiting one more quarter won't hurt anything. Rather than just doing a straight revert, I added a new test case demonstrating the usage with LIMIT. That'll at least remind us of the issue if we forget again. Discussion: https://postgr.es/m/3603504.1652068977@sss.pgh.pa.us Discussion: https://postgr.es/m/CADkLM=dzw0Pvdqp5yWKxMd+VmNkAMhG=4ku7GnCZxebWnzmz3Q@mail.gmail.com
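
  For illustration (not part of the commit message), the usage pattern in question is roughly this, where LIMIT keeps the otherwise-endless series finite:

    SELECT g
    FROM generate_series('2022-01-01'::timestamptz, 'infinity', interval '1 month') AS g
    LIMIT 3;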

* Make relation-enumerating operations be security-restricted operations. (Noah Misch, 2022-05-09)

  When a feature enumerates relations and runs functions associated with all found relations, the feature's user shall not need to trust every user having permission to create objects. BRIN-specific functionality in autovacuum neglected to account for this, as did pg_amcheck and CLUSTER. An attacker having permission to create non-temp objects in at least one schema could execute arbitrary SQL functions under the identity of the bootstrap superuser. CREATE INDEX (not a relation-enumerating operation) and REINDEX protected themselves too late. This change extends to the non-enumerating amcheck interface. Back-patch to v10 (all supported versions).

  Sergey Shinderuk, reviewed (in earlier versions) by Alexander Lakhin. Reported by Alexander Lakhin.

  Security: CVE-2022-1552

* Fix CLUSTER tuplesorts on abbreviated expressions. (Peter Geoghegan, 2022-04-20)

  CLUSTER sort won't use the datum1 SortTuple field when clustering against an index whose leading key is an expression. This makes it unsafe to use the abbreviated keys optimization, which was missed by the logic that sets up SortSupport state. Affected tuplesorts output tuples in a completely bogus order as a result (the wrong SortSupport based comparator was used for the leading attribute).

  This issue is similar to the bug fixed on the master branch by recent commit cc58eecc5d. But it's a far older issue, that dates back to the introduction of the abbreviated keys optimization by commit 4ea51cdfe8.

  Backpatch to all supported versions.

  Author: Peter Geoghegan <pg@bowt.ie>
  Author: Thomas Munro <thomas.munro@gmail.com>
  Discussion: https://postgr.es/m/CA+hUKG+bA+bmwD36_oDxAoLrCwZjVtST2fqe=b4=qZcmU7u89A@mail.gmail.com
  Backpatch: 10-

* Disallow infinite endpoints in generate_series() for timestamps. (Tom Lane, 2022-04-20)

  Such cases will lead to infinite loops, so they're of no practical value. The numeric variant of generate_series() already threw error for this, so borrow its message wording.

  Per report from Richard Wesley. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/91B44E7B-68D5-448F-95C8-B4B3B0F5DEAF@duckdblabs.com

* Fix extract epoch from interval calculation (Peter Eisentraut, 2022-04-19)

  The new numeric code for extract epoch from interval accidentally truncated the DAYS_PER_YEAR value to an integer, leading to results that mismatched the floating-point interval_part calculations.

  The commit a2da77cdb4661826482ebf2ddba1f953bc74afe4 that introduced this actually contains the regression test change that this reverts. I suppose this was missed at the time.

  Reported-by: Joseph Koshakow <koshy44@gmail.com>
  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
  Discussion: https://www.postgresql.org/message-id/flat/CAAvxfHd5n%3D13NYA2q_tUq%3D3%3DSuWU-CufmTf-Ozj%3DfrEgt7pXwQ%40mail.gmail.com
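
  For illustration (not part of the commit message), a quick check that the fractional DAYS_PER_YEAR (365.25) is honored again:

    SELECT extract(epoch FROM interval '1 year');
    -- expected: 31557600, i.e. 365.25 days * 86400 seconds, matching the older float-based result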

* Fix NULL input behaviour of pg_stat_get_replication_slot(). (Andres Freund, 2022-03-27)

  pg_stat_get_replication_slot() accidentally was marked as non-strict, crashing when called with NULL input. As it's already released, introduce an explicit NULL check in 14, fix the catalog in HEAD.

  Bumps catversion in HEAD.

  Discussion: https://postgr.es/m/20220326212432.s5n2maw6kugnpyxw@alap3.anarazel.de
  Backpatch: 14-, where replication slot stats were introduced
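
  For illustration (not part of the commit message), the crashing call was simply:

    SELECT pg_stat_get_replication_slot(NULL);  -- should now yield a null result instead of crashing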

* Fix breakage of get_ps_display() in the PS_USE_NONE case. (Tom Lane, 2022-03-27)

  Commit 8c6d30f21 caused this function to fail to set *displen in the PS_USE_NONE code path. If the variable's previous value had been negative, that'd lead to a memory clobber at some call sites. We'd managed not to notice due to very thin test coverage of such configurations, but this appears to explain buildfarm member lorikeet's recent struggles.

  Credit to Andrew Dunstan for spotting the problem. Back-patch to v13 where the bug was introduced.

  Discussion: https://postgr.es/m/136102.1648320427@sss.pgh.pa.us

* Don't try to translate NULL in GetConfigOptionByNum(). (Andres Freund, 2022-03-23)

  Noticed via -fsanitize=undefined. Introduced when a few columns in GetConfigOptionByNum() / pg_settings started to be translated in 72be8c29a / PG 12.

  Backpatch to all affected branches, for the same reasons as 46ab07ffda9.

  Discussion: https://postgr.es/m/20220323173537.ll7klrglnp4gn2um@alap3.anarazel.de
  Backpatch: 12-

* Don't call fwrite() with len == 0 when writing out relcache init file. (Andres Freund, 2022-03-23)

  Noticed via -fsanitize=undefined.

  Backpatch to all branches, for the same reasons as 46ab07ffda9.

  Discussion: https://postgr.es/m/20220323173537.ll7klrglnp4gn2um@alap3.anarazel.de
  Backpatch: 10-

* Fix assorted missing logic for GroupingFunc nodes. (Tom Lane, 2022-03-21)

  The planner needs to treat GroupingFunc like Aggref for many purposes, in particular with respect to processing of the argument expressions, which are not to be evaluated at runtime. A few places hadn't gotten that memo, notably including subselect.c's processing of outer-level aggregates. This resulted in assertion failures or wrong plans for cases in which a GROUPING() construct references an outer aggregation level.

  Also fix missing special cases for GroupingFunc in cost_qual_eval (resulting in wrong cost estimates for GROUPING(), although it's not clear that that would affect plan shapes in practice) and in ruleutils.c (resulting in excess parentheses in pretty-print mode).

  Per bug #17088 from Yaoguang Chen. Back-patch to all supported branches.

  Richard Guo, Tom Lane
  Discussion: https://postgr.es/m/17088-e33882b387de7f5c@postgresql.org

* Fix incorrect xmlschema output for types timetz and timestamptz. (Tom Lane, 2022-03-18)

  The output of table_to_xmlschema() and allied functions includes a regex describing valid values for these types ... but the regex was itself invalid, as it failed to escape a literal "+" sign.

  Report and fix by Renan Soares Lopes. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/7f6fabaa-3f8f-49ab-89ca-59fbfe633105@me.com
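
  For illustration (not part of the commit message), a hypothetical call that emits such a regex; the table name is made up:

    CREATE TABLE tztab (ts timestamptz, t timetz);
    SELECT table_to_xmlschema('tztab'::regclass, true, false, '');
    -- the pattern describing the time-zone suffix should now escape the plus sign
    -- (roughly "(\+|-)..." rather than an unescaped "+")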

* Fix waiting in RegisterSyncRequest(). (Thomas Munro, 2022-03-16)

  If we run out of space in the checkpointer sync request queue (which is hopefully rare on real systems, but common with very small buffer pool), we wait for it to drain. While waiting, we should report that as a wait event so that users know what is going on, and also handle postmaster death, since otherwise the loop might never terminate if the checkpointer has exited.

  Back-patch to 12. Although the problem exists in earlier releases too, the code is structured differently before 12 so I haven't gone any further for now, in the absence of field complaints.

  Reported-by: Andres Freund <andres@anarazel.de>
  Reviewed-by: Andres Freund <andres@anarazel.de>
  Discussion: https://postgr.es/m/20220226213942.nb7uvb2pamyu26dj%40alap3.anarazel.de

* Wake up for latches in CheckpointWriteDelay(). (Thomas Munro, 2022-03-16)

  The checkpointer shouldn't ignore its latch. Other backends may be waiting for it to drain the request queue. Hopefully real systems don't have a full queue often, but the condition is reached easily when shared_buffers is small.

  This involves defining a new wait event, which will appear in the pg_stat_activity view often due to spread checkpoints.

  Back-patch only to 14. Even though the problem exists in earlier branches too, it's hard to hit there. In 14 we stopped using signal handlers for latches on Linux, *BSD and macOS, which were previously hiding this problem by interrupting the sleep (though not reliably, as the signal could arrive before the sleep begins; precisely the problem latches address).

  Reported-by: Andres Freund <andres@anarazel.de>
  Reviewed-by: Andres Freund <andres@anarazel.de>
  Discussion: https://postgr.es/m/20220226213942.nb7uvb2pamyu26dj%40alap3.anarazel.de

* Restore the previous semantics of get_constraint_index(). (Tom Lane, 2022-03-11)

  Commit 8b069ef5d changed this function to look at pg_constraint.conindid rather than searching pg_depend. That was a good performance improvement, but it failed to preserve the exact semantics. The old code would only return an index that was "owned by" (internally dependent on) the specified constraint, whereas the new code will also return indexes that are just referenced by foreign key constraints. This confuses ALTER TABLE, which was implicitly expecting the previous semantics, into failing with errors like

    ERROR: relation 146621 has multiple clustered indexes
    or
    ERROR: "pk_attbl" is not an index for table "atref"

  We can fix this without reverting the performance improvement by adding a contype check in get_constraint_index(). Another way could be to make ALTER TABLE check it, but I'm worried that extension code could also have subtle dependencies on the old semantics.

  Tom Lane and Japin Li, per bug #17409 from Holly Roberts. Back-patch to v14 where the error crept in.

  Discussion: https://postgr.es/m/17409-52871dda8b5741cb@postgresql.org

* Clean up assorted failures under clang's -fsanitize=undefined checks. (Tom Lane, 2022-03-03)

  Most of these are cases where we could call memcpy() or other libc functions with a NULL pointer and a zero count, which is forbidden by POSIX even though every production version of libc allows it. We've fixed such things before in a piecemeal way, but apparently never made an effort to try to get them all. I don't claim that this patch does so either, but it gets every failure I observe in check-world, using clang 12.0.1 on current RHEL8.

  numeric.c has a different issue that the sanitizer doesn't like: "ln(-1.0)" will compute log10(0) and then try to assign the resulting -Inf to an integer variable. We don't actually use the result in such a case, so there's no live bug.

  Back-patch to all supported branches, with the idea that we might start running a buildfarm member that tests this case. This includes back-patching c1132aae3 (Check the size in COPY_POINTER_FIELD), which previously silenced some of these issues in copyfuncs.c.

  Discussion: https://postgr.es/m/CALNJ-vT9r0DSsAOw9OXVJFxLENoVS_68kJ5x0p44atoYH+H4dg@mail.gmail.com

* Re-allow underscore as first character of custom GUC names. (Tom Lane, 2022-02-23)

  Commit 3db826bd5 intended that valid_custom_variable_name's rules for valid identifiers match those of scan.l. However, I (tgl) had some kind of brain fade and put "_" in the wrong list.

  Fix by Japin Li, per bug #17415 from Daniel Polski.

  Discussion: https://postgr.es/m/17415-ebdb683d7e09a51c@postgresql.org
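
  For illustration (not part of the commit message), a placeholder GUC whose components start with an underscore is accepted again; the parameter name here is made up:

    SELECT set_config('_myext._debug', 'on', false);
    -- previously rejected with "invalid configuration parameter name"; valid again after the fix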

* Add compute_query_id = regress (Michael Paquier, 2022-02-22)

  "regress" is a new mode added to compute_query_id aimed at facilitating regression testing when a module computing query IDs is loaded into the backend, like pg_stat_statements. It works the same way as "auto", meaning that query IDs are computed if a module enables it, except that query IDs are hidden in EXPLAIN outputs to ensure regression output stability.

  Like any GUCs of the kind (force_parallel_mode, etc.), this new configuration can be added to an instance's postgresql.conf, or just passed down with PGOPTIONS at command level. compute_query_id uses an enum for its set of option values, meaning that this addition ensures ABI compatibility.

  Using this new configuration mode allows installcheck-world to pass when running the tests on an instance with pg_stat_statements enabled, stabilizing the test output while checking the paths doing query ID computations.

  Reported-by: Anton Melnikov
  Reviewed-by: Julien Rouhaud
  Discussion: https://postgr.es/m/1634283396.372373993@f75.i.mail.ru
  Discussion: https://postgr.es/m/YgHlxgc/OimuPYhH@paquier.xyz
  Backpatch-through: 14
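
  For illustration (not part of the commit message), with a query-ID-computing module loaded the difference shows up in EXPLAIN VERBOSE; the query here is arbitrary:

    SET compute_query_id = regress;
    EXPLAIN (VERBOSE, COSTS OFF) SELECT 1;
    -- with "auto" or "on" a "Query Identifier: ..." line would appear in the output;
    -- under "regress" it is suppressed, keeping expected regression output stable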

* Suppress warning about stack_base_ptr with late-model GCC. (Tom Lane, 2022-02-17)

  GCC 12 complains that set_stack_base is storing the address of a local variable in a long-lived pointer. This is an entirely reasonable warning (indeed, it just helped us find a bug); but that behavior is intentional here. We can work around it by using __builtin_frame_address(0) instead of a specific local variable; that produces an address a dozen or so bytes different, in my testing, but we don't care about such a small difference. Maybe someday a compiler lacking that function will start to issue a similar warning, but we'll worry about that when it happens.

  Patch by me, per a suggestion from Andres Freund. Back-patch to v12, which is as far back as the patch will go without some pain. (Recently-established project policy would permit a back-patch as far as 9.2, but I'm disinclined to expend the work until GCC 12 is much more widespread.)

  Discussion: https://postgr.es/m/3773792.1645141467@sss.pgh.pa.us

* Make timeout.c more robust against missed timer interrupts. (Tom Lane, 2022-02-10)

  Commit 09cf1d522 taught schedule_alarm() to not do anything if the next requested event is after when we expect the next interrupt to fire. However, if somehow an interrupt gets lost, we'll continue to not do anything indefinitely, even after the "next interrupt" time is obviously in the past. Thus, one missed interrupt can break timeout scheduling for the life of the session. Michael Harris reported a scenario where a bug in a user-defined function caused this to happen, so you don't even need to assume kernel bugs exist to think this is worth fixing.

  We can make things more robust at little cost by detecting the case where signal_due_at is before "now" and forcing a new setitimer call to occur. This isn't a completely bulletproof fix of course; but in our typical usage pattern where we frequently set timeouts and clear them before they are reached, the interrupt will get re-enabled after at most one timeout interval, which with a little luck will be before we really need it.

  While here, let's mark signal_due_at as volatile, since the signal handler can both examine and set it. I'm not sure there's any actual risk given that signal_pending is already volatile, but it's surely questionable.

  Backpatch to v14 where this logic came in.

  Michael Harris and Tom Lane
  Discussion: https://postgr.es/m/CADofcAWbMrvgwSMqO4iG_iD3E2v8ZUrC-_crB41my=VMM02-CA@mail.gmail.com

* Prevent memory context logging from sending log message to connected client. (Fujii Masao, 2022-01-28)

  When pg_log_backend_memory_contexts() is executed, the target backend should use LOG_SERVER_ONLY to log its memory contexts, to prevent them from being sent to its connected client regardless of client_min_messages. But previously the backend unexpectedly used LOG to log the message "logging memory contexts of PID %d" and it could be sent to the client. This is a bug in memory context logging.

  To fix the bug, this commit changes that message so that it's logged with LOG_SERVER_ONLY.

  Back-patch to v14 where pg_log_backend_memory_contexts() was added.

  Author: Fujii Masao
  Reviewed-by: Bharath Rupireddy, Atsushi Torikoshi
  Discussion: https://postgr.es/m/82c12f36-86f7-5e72-79af-7f5c37f6cad7@oss.nttdata.com
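
  For illustration (not part of the commit message), the function is invoked like this; with the fix, the memory-context output goes only to the server log, never to the target backend's client:

    SELECT pg_log_backend_memory_contexts(pg_backend_pid());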

* Fix ordering of XIDs in ProcArrayApplyRecoveryInfo (Tomas Vondra, 2022-01-27)

  Commit 8431e296ea reworked ProcArrayApplyRecoveryInfo to sort XIDs before adding them to KnownAssignedXids. But the XIDs are sorted using xidComparator, which compares the XIDs simply as uint32 values, not logically. KnownAssignedXidsAdd() however expects XIDs in logical order, and calls TransactionIdFollowsOrEquals() to enforce that. If there are XIDs for which the two orderings disagree, an error is raised and the recovery fails/restarts.

  Hitting this issue is fairly easy - you just need two transactions, one started before the 4B limit (e.g. XID 4294967290), the other sometime after it (e.g. XID 1000). Logically (4294967290 <= 1000) but when compared using xidComparator we try to add them in the opposite order. Which makes KnownAssignedXidsAdd() fail with an error like this:

    ERROR: out-of-order XID insertion in KnownAssignedXids

  This only happens during replica startup, while processing RUNNING_XACTS records to build the snapshot. Once we reach STANDBY_SNAPSHOT_READY, we skip these records. So this does not affect already running replicas, but if you restart (or create) a replica while there are transactions with XIDs for which the two orderings disagree, you may hit this.

  Long-running transactions and frequent replica restarts increase the likelihood of hitting this issue. Once the replica gets into this state, it can't be started (even if the old transactions are terminated).

  Fixed by sorting the XIDs logically - this is fine because we're dealing with normal XIDs (because it's XIDs assigned to backends) and from the same wraparound epoch (otherwise the backends could not be running at the same time on the primary node). So there are no problems with the triangle inequality, which is why xidComparator compares raw values.

  Investigation and root cause analysis by Abhijit Menon-Sen. Patch by me.

  This issue is present in all releases since 9.4, however releases up to 9.6 are EOL already so backpatch to 10 only.

  Reviewed-by: Abhijit Menon-Sen
  Reviewed-by: Alvaro Herrera
  Backpatch-through: 10
  Discussion: https://postgr.es/m/36b8a501-5d73-277c-4972-f58a4dce088a%40enterprisedb.com

* Build inherited extended stats on partitioned tables (Tomas Vondra, 2022-01-15)

  Commit 859b3003de disabled building of extended stats for inheritance trees, to prevent updating the same catalog row twice. While that resolved the issue, it also means there are no extended stats for declaratively partitioned tables, because there are no data in the non-leaf relations.

  That also means declaratively partitioned tables were not affected by the issue 859b3003de addressed, which means this is a regression affecting queries that calculate estimates for the whole inheritance tree as a whole (which includes e.g. GROUP BY queries).

  But because partitioned tables are empty, we can invert the condition and build statistics only for the case with inheritance, without losing anything. And we can consider them when calculating estimates.

  It may be necessary to run ANALYZE on partitioned tables, to collect proper statistics. For declarative partitioning there should be no prior statistics, and it might take time before autoanalyze is triggered. For tables partitioned by inheritance the statistics may include data from child relations (if built before 859b3003de), contradicting the current code.

  Report and patch by Justin Pryzby, minor fixes and cleanup by me. Backpatch all the way back to PostgreSQL 10, where extended statistics were introduced (same as 859b3003de).

  Author: Justin Pryzby
  Reported-by: Justin Pryzby
  Backpatch-through: 10
  Discussion: https://postgr.es/m/20210923212624.GI831%40telsasoft.com

* Ignore extended statistics for inheritance trees (Tomas Vondra, 2022-01-15)

  Since commit 859b3003de we only build extended statistics for individual relations, ignoring the child relations. This resolved the issue with updating catalog tuple twice, but we still tried to use the statistics when calculating estimates for the whole inheritance tree. When the relations contain very distinct data, it may produce bogus estimates.

  This is roughly the same issue 427c6b5b9 addressed ~15 years ago, and we fix it the same way - by ignoring extended statistics when calculating estimates for the inheritance tree as a whole. We still consider extended statistics when calculating estimates for individual child relations, of course.

  This may result in plan changes due to different estimates, but if the old statistics were not describing the inheritance tree particularly well it's quite likely the new plans are actually better.

  Report and patch by Justin Pryzby, minor fixes and cleanup by me. Backpatch all the way back to PostgreSQL 10, where extended statistics were introduced (same as 859b3003de).

  Author: Justin Pryzby
  Reported-by: Justin Pryzby
  Backpatch-through: 10
  Discussion: https://postgr.es/m/20210923212624.GI831%40telsasoft.com

* Revert error handling improvements for cryptohashes (Michael Paquier, 2022-01-14)

  This reverts commits ab27df2, af8d530 and 3a0cced, that introduced pg_cryptohash_error(). In order to make the core code able to pass down the new error types that this introduced, some of the MD5-related routines had to be reworked, causing an ABI breakage, but we found that some external extensions rely on them. Maintaining compatibility outweighs the error report benefits, so just revert the change in v14.

  Reported-by: Laurenz Albe
  Discussion: https://postgr.es/m/9f0c0a96d28cf14fc87296bbe67061c14eb53ae8.camel@cybertec.at

* Fix ruleutils.c's dumping of whole-row Vars in more contexts. (Tom Lane, 2022-01-13)

  Commit 7745bc352 intended to ensure that whole-row Vars would be printed with "::type" decoration in all contexts where plain "var.*" notation would result in star-expansion, notably in ROW() and VALUES() constructs. However, it missed the case of INSERT with a single-row VALUES, as reported by Timur Khanjanov.

  Nosing around ruleutils.c, I found a second oversight: the code for RowCompareExpr generates ROW() notation without benefit of an actual RowExpr, and naturally it wasn't in sync :-(. (The code for FieldStore also does this, but we don't expect that to generate strictly parsable SQL anyway, so I left it alone.)

  Back-patch to all supported branches.

  Discussion: https://postgr.es/m/efaba6f9-4190-56be-8ff2-7a1674f9194f@intrans.baku.az

* Improve error handling of cryptohash computations (Michael Paquier, 2022-01-11)

  The existing cryptohash facility was causing problems in some code paths related to MD5 (frontend and backend) that relied on the fact that the only type of error that could happen would be an OOM, as the MD5 implementation used in PostgreSQL ~13 (the in-core implementation is used when compiling with or without OpenSSL in those older versions), could fail only under this circumstance.

  The new cryptohash facilities can fail for reasons other than OOMs, like attempting MD5 when FIPS is enabled (upstream OpenSSL allows that up to 1.0.2, Fedora and Photon patch OpenSSL 1.1.1 to allow that), so this would cause incorrect reports to show up.

  This commit extends the cryptohash APIs so as callers of those routines can fetch more context when an error happens, by using a new routine called pg_cryptohash_error(). The error states are stored within each implementation's internal context data, so as it is possible to extend the logic depending on what's suited for an implementation. The default implementation requires few error states, but OpenSSL could report various issues depending on its internal state so more is needed in cryptohash_openssl.c, and the code is shaped so as we are always able to grab the necessary information.

  The core code is changed to adapt to the new error routine, painting more "const" across the call stack where the static errors are stored, particularly in authentication code paths on variables that provide log details. This way, any future changes would warn if attempting to free these strings. The MD5 authentication code was also a bit blurry about the handling of "logdetail" (LOG sent to the postmaster), so improve the comments related that, while on it.

  The origin of the problem is 87ae969, that introduced the centralized cryptohash facility. Extra changes are done for pgcrypto in v14 for the non-OpenSSL code path to cope with the improvements done by this commit.

  Reported-by: Michael Mühlbeyer
  Author: Michael Paquier
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/89B7F072-5BBE-4C92-903E-D83E865D9367@trivadis.com
  Backpatch-through: 14

* Fix datatype confusion in logtape.c's right_offset(). (Tom Lane, 2021-12-14)

  This could only matter if (a) long is wider than int, and (b) the heap of free blocks exceeds UINT_MAX entries, which seems pretty unlikely. Still, it's a theoretical bug, so backpatch to v13 where the typo came in (in commit c02fdc922).

  In passing, also make swap_nodes() use consistent datatypes.

  Ma Liangzhu
  Discussion: https://postgr.es/m/17336-fc4e522d26a750fd@postgresql.org

* Fix alignment in multirange_get_range() function (Alexander Korotkov, 2021-12-13)

  The multirange_get_range() function fails when two boundaries of the same range have different alignments. Fix that by adding proper pointer alignment.

  Reported-by: Alexander Lakhin
  Discussion: https://postgr.es/m/17300-dced2d01ddeb1f2f%40postgresql.org
  Backpatch-through: 14

* Allow Memoize to operate in binary comparison mode (David Rowley, 2021-11-24)

  Memoize would always use the hash equality operator for the cache key types to determine if the current set of parameters were the same as some previously cached set. Certain types such as floating points where -0.0 and +0.0 differ in their binary representation but are classed as equal by the hash equality operator may cause problems as unless the join uses the same operator it's possible that whichever join operator is being used would be able to distinguish the two values. In which case we may accidentally return the incorrect rows out of the cache.

  To fix this here we add a binary mode to Memoize to allow it to compare the current set of parameters to previously cached values bit-by-bit rather than logically using the hash equality operator. This binary mode is always used for LATERAL joins and it's used for normal joins when any of the join operators are not hashable.

  Reported-by: Tom Lane
  Author: David Rowley
  Discussion: https://postgr.es/m/3004308.1632952496@sss.pgh.pa.us
  Backpatch-through: 14, where Memoize was added
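
  For illustration (not part of the commit message), the float zero case that motivates bitwise cache-key comparison:

    SELECT '-0.0'::float8 = '0.0'::float8 AS logically_equal;   -- true
    SELECT float8send('-0.0'::float8), float8send('0.0'::float8);
    -- the two send() results differ, showing distinct binary representations of "equal" values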

* Fix display of SQL-standard function's arguments in INSERT/SELECT. (Tom Lane, 2021-11-17)

  If a SQL-standard function body contains an INSERT ... SELECT statement, any function parameters referenced within the SELECT were always printed in $N style, rather than using the parameter name if any. While not strictly incorrect, this wasn't the intention, and it's inconsistent with the way that such parameters would be printed in any other kind of statement.

  The cause is that the recursion to get_query_def from get_insert_query_def neglected to pass down the context->namespaces list, passing constant NIL instead. This is a very ancient oversight, but AFAICT it had no visible consequences before commit e717a9a18 added an outermost namespace with function parameters. We don't allow INSERT ... SELECT as a sub-query, except in a top-level WITH clause, where it couldn't contain any outer references that might need to access upper namespaces. So although that's arguably a bug, I don't see any point in changing it before v14.

  In passing, harden the code added to get_parameter by e717a9a18 so that it won't crash if a PARAM_EXTERN Param appears in an unexpected place.

  Per report from Erki Eessaar. Code fix by me, regression test case by Masahiko Sawada.

  Discussion: https://postgr.es/m/AM9PR01MB8268347BED344848555167FAFE949@AM9PR01MB8268.eurprd01.prod.exchangelabs.com
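
  For illustration (not part of the commit message), a hypothetical SQL-standard function body of the affected shape; the table and function names are made up:

    CREATE TABLE audit (msg text);
    CREATE FUNCTION log_msg(msg text) RETURNS void
    LANGUAGE SQL
    BEGIN ATOMIC
        INSERT INTO audit SELECT msg;
    END;
    SELECT pg_get_functiondef('log_msg'::regproc);
    -- the SELECT inside the body should now be displayed with "msg" rather than "$1"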

* Fix memory overrun when querying pg_stat_slru (Michael Paquier, 2021-11-12)

  pg_stat_get_slru() in pgstatfuncs.c would point to one element after the end of the array PgStat_SLRUStats when finishing to scan its entries. This had no direct consequences as no data from the extra memory area was read, but static analyzers would rightfully complain here. So let's be clean.

  While on it, this adds one regression test in the area reserved for system views.

  Reported-by: Alexander Kozhemyakin, via AddressSanitizer
  Author: Kyotaro Horiguchi
  Discussion: https://postgr.es/m/17280-37da556e86032070@postgresql.org
  Backpatch-through: 13