path: root/src/test
Commit log (commit message, author, date)

* Fix privilege checks in pg_stats_ext and pg_stats_ext_exprs. (Nathan Bossart, 2024-05-06)

  The catalog view pg_stats_ext fails to consider privileges for expression
  statistics. The catalog view pg_stats_ext_exprs fails to consider privileges
  and row-level security policies. To fix, restrict the data in these views to
  table owners or roles that inherit privileges of the table owner. It may be
  possible to apply less restrictive privilege checks in some cases, but that
  is left as a future exercise. Furthermore, for pg_stats_ext_exprs, do not
  return data for tables with row-level security enabled, as is already done
  for pg_stats_ext.

  On the back-branches, a fix-CVE-2024-4317.sql script is provided that will
  be installed into the "share" directory. This file can be used to apply the
  fix to existing clusters.

  Bumps catversion on 'master' branch only.

  Reported-by: Lukas Fittl
  Reviewed-by: Noah Misch, Tomas Vondra, Tom Lane
  Security: CVE-2024-4317
  Backpatch-through: 14

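  As a hedged illustration (hypothetical table and statistics names), this is
  the kind of query whose output is now restricted to the table owner or to
  roles inheriting the owner's privileges:

      -- Hypothetical objects, for illustration only.
      CREATE TABLE stats_tab (a int, b int);
      CREATE STATISTICS stats_tab_exprs ON (a + b), (a * b) FROM stats_tab;
      ANALYZE stats_tab;

      -- Run as a role that neither owns stats_tab nor inherits the owner's
      -- privileges: after the fix, these views should return no rows for it.
      SELECT * FROM pg_stats_ext       WHERE tablename = 'stats_tab';
      SELECT * FROM pg_stats_ext_exprs WHERE tablename = 'stats_tab';
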
* Disable run condition optimization for some WindowFuncs (David Rowley, 2024-05-01)

  94985c210 added code to detect when WindowFuncs were monotonic and allowed
  additional quals to be "pushed down" into the subquery to be used as
  WindowClause runConditions in order to short-circuit execution in
  nodeWindowAgg.c.

  The Node representation of runConditions wasn't well selected, and because
  we do qual pushdown before planning the subquery, the planning of the
  subquery could perform subquery pull-up of nested subqueries. For
  WindowFuncs with args, the arguments could be changed after pushing the qual
  down to the subquery. This was made more difficult by the fact that the code
  duplicated the WindowFunc inside an OpExpr to include in the WindowClause's
  runCondition field. This could result in duplication of subqueries, and a
  pull-up of such a subquery could result in another initplan parameter being
  issued for the 2nd version of the subplan. This could result in errors such
  as:

      ERROR:  WindowFunc not found in subplan target lists

  Here in the backbranches, we don't have the flexibility to improve the Node
  representation to resolve this, so instead we just disable the runCondition
  optimization for ntile() unless the argument is a Const (v16 only), and
  likewise for count(expr) (both v15 and v16). count(*) is unaffected. All
  other window functions which support this optimization take zero arguments
  and therefore are unaffected.

  Bug: #18170
  Reported-by: Zuming Jiang
  Discussion: https://postgr.es/m/18170-f1d17bf9a0d58b24@postgresql.org
  Backpatch-through: 15 (master will be fixed independently)

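  A minimal sketch (hypothetical table and column names) of the query shape
  this affects; per the commit message, the qual on nt can only become a
  WindowAgg run condition when ntile()'s argument is a Const:

      -- ntile(4) has a Const argument, so "nt <= 2" may still be used as a
      -- run condition to stop the WindowAgg early; with a non-Const argument
      -- the optimization is now skipped as described above.
      SELECT *
      FROM (SELECT id,
                   ntile(4) OVER (ORDER BY id) AS nt
            FROM some_table) AS s
      WHERE nt <= 2;
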
* Ensure we allocate NAMEDATALEN bytes for names in Index Only Scans (David Rowley, 2024-05-01)

  As an optimization, we store "name" columns as cstrings in btree indexes.

  Here we modify it so that Index Only Scans convert these cstrings back to
  names with NAMEDATALEN bytes rather than storing the cstring in the tuple
  slot, as was happening previously.

  Bug: #17855
  Reported-by: Alexander Lakhin
  Reviewed-by: Alexander Lakhin, Tom Lane
  Discussion: https://postgr.es/m/17855-5f523e0f9769a566@postgresql.org
  Backpatch-through: 12, all supported versions

* Detect more overflows in timestamp[tz]_pl_interval. (Tom Lane, 2024-04-28)

  In commit 25cd2d640 I (tgl) opined that "The additions of the months and
  microseconds fields could also overflow, of course. However, I believe we
  need no additional checks there; the existing range checks should catch
  such cases". This is demonstrably wrong however for the microseconds field,
  and given that discovery it seems prudent to be paranoid about the months
  addition as well.

  Report and patch by Joseph Koshakow. As before, back-patch to all supported
  branches. (However, the test case doesn't work before v15 because we didn't
  allow wider-than-int32 numbers in interval literals. A variant test could
  probably be built that fits within that restriction, but it didn't seem
  worth the trouble.)

  Discussion: https://postgr.es/m/CAAvxfHf77sRHKoEzUw9_cMYSpbpNS2C+J_+8Dq4+0oi8iKopeA@mail.gmail.com

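  A hedged sketch of the kind of addition now caught; the exact values are
  chosen only for illustration (and, as noted above, the wide interval
  literal requires v15 or later):

      -- Should now fail cleanly with "timestamp out of range" instead of
      -- overflowing the microseconds addition and wrapping to a bogus
      -- in-range result.
      SELECT timestamp '294276-12-31 23:59:59'
             + interval '9223372036854775807 microseconds';
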
* Fix type-checking of RECORD-returning functions in FROM, redux. (Tom Lane, 2024-04-15)

  Commit 2ed8f9a01 intended to institute a policy that if a RangeTblFunction
  has a coldeflist, then the function return type is certainly RECORD, and we
  should use the coldeflist as the source of truth about what the columns of
  the record type are. When the original function has been folded to a
  constant, inspection of the constant might give a different answer. This
  situation will lead to a tuple-type-mismatch error at execution, but up
  until that point we need to consistently believe the coldeflist, or we'll
  have problems from different bits of code reaching different conclusions.

  expandRTE didn't get that memo though, and would try to produce a tupdesc
  based on the constant in this situation, leading to an assertion failure.
  (Desultory testing suggests that non-assert builds often manage to give the
  expected error, although I also saw a "cache lookup failed for type 0"
  error, and it seems at least possible that a crash could happen.)

  Some other callers of get_expr_result_type and get_expr_result_tupdesc were
  also being incautious about this. While none of them seem to have actual
  bugs, they're working harder than necessary in this case, besides which it
  seems safest to have an explicit policy of not using those functions on an
  RTE with a coldeflist. Adjust the code accordingly, and add commentary to
  funcapi.c about this policy.

  Also fix an obsolete comment that claimed "get_expr_result_type() doesn't
  know how to extract type info from a RECORD constant". That hasn't been
  true since commit d57534740.

  Per bug #18422 from Alexander Lakhin. As with the previous commit,
  back-patch to all supported branches.

  Discussion: https://postgr.es/m/18422-89ca86c8eac5246d@postgresql.org

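  A sketch (hypothetical function name) of the described situation: an
  immutable RECORD-returning SQL function that may be folded to a constant,
  called with a coldeflist that disagrees with it:

      -- The SQL function body can be folded to a ROW() constant at plan time.
      CREATE FUNCTION rec_const() RETURNS record
        LANGUAGE sql IMMUTABLE AS $$ SELECT 1 AS a, 2 AS b $$;

      -- The coldeflist (three columns) is treated as the source of truth, so
      -- the mismatch is reported as a consistent tuple-type error rather
      -- than tripping the assertion described above.
      SELECT * FROM rec_const() AS t(x int, y int, z int);
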
* freespace: Don't return blocks past the end of the main fork. (Noah Misch, 2024-04-13)

  GetPageWithFreeSpace() callers assume the returned block exists in the main
  fork, failing with "could not read block" errors if that doesn't hold. Make
  that assumption reliable now. It hadn't been guaranteed, due to the weak WAL
  and data ordering of participating components. Most operations on the fsm
  fork are not WAL-logged. Relation extension is not WAL-logged. Hence, an
  fsm-fork block on disk can reference a main-fork block that no WAL record
  has initialized. That could happen after an OS crash, a replica promote, or
  a PITR restore. wal_log_hints makes the trouble easier to hit; a replica
  promote or PITR ending just after a relevant fsm-fork FPI_FOR_HINT may yield
  this broken state. The v16 RelationAddBlocks() mechanism also makes the
  trouble easier to hit, since it bulk-extends even without extension lock
  waiters. Commit 917dc7d2393ce680dea7a59418be9ff341df3c14 stopped trouble
  around truncation, but vectors involving PageIsNew() pages remained.

  This implementation adds a RelationGetNumberOfBlocks() call when the cached
  relation size doesn't confirm a block exists. We've been unable to identify
  a benchmark that slows materially, but this may show up as additional time
  in lseek(). An alternative without that overhead would be a new
  ReadBufferMode such that ReadBufferExtended() returns NULL after a 0-byte
  read, with all other errors handled normally. However, each
  GetFreeIndexPage() caller would then need code for the return-NULL case.
  Back-patch to v14, due to earlier versions not caching relation size and
  the absence of a pre-v16 problem report.

  Ronan Dunklau. Reported by Ronan Dunklau.

  Discussion: https://postgr.es/m/1878547.tdWV9SEqCh%40aivenlaptop

* Fix plpgsql's handling of -- comments following expressions. (Tom Lane, 2024-04-10)

  Up to now, read_sql_construct() has collected all the source text from the
  statement or expression's initial token up to the character just before the
  "until" token. It normally tries to strip trailing whitespace from that,
  largely for neatness. If there was a "-- text" comment after the expression,
  this resulted in removing the newline that terminates the comment, which
  creates a hazard if we try to paste the collected text into a larger SQL
  construct without inserting a newline after it. In particular this caused
  our handling of CASE constructs to fail if there's a comment after a WHEN
  expression.

  Commit 4adead1d2 noticed a similar problem with cursor arguments, and worked
  around it through the rather crude hack of suppressing the
  whitespace-trimming behavior for those. Rather than do that and leave the
  hazard open for future hackers to trip over, let's fix it properly.
  pl_scanner.c already has enough infrastructure to report the end location of
  the expression's last token, so we can copy up to that location and never
  collect any trailing whitespace or comment to begin with.

  Erik Wienhold and Tom Lane, per report from Michal Bartak. Back-patch to all
  supported branches.

  Discussion: https://postgr.es/m/CAAVzF_FjRoi8fOVuLCZhQJx6HATQ7MKm=aFOHWZODFnLmjX-xA@mail.gmail.com

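  A minimal sketch of the construct that used to fail: a -- comment directly
  after a WHEN expression in a plpgsql CASE statement:

      DO $$
      BEGIN
        CASE
          WHEN 1 = 1 -- a trailing comment used to swallow the newline here
          THEN RAISE NOTICE 'matched';
          ELSE RAISE NOTICE 'not matched';
        END CASE;
      END
      $$;
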
* Don't clobber test exit code at cleanup in LDAP/Kerberos tests (Heikki Linnakangas, 2024-04-07)

  If the test script die()d before running the first test, the whole test was
  interpreted as SKIPped rather than failed. The PostgreSQL::Cluster module
  got this right. Backpatch to all supported versions.

  Discussion: https://www.postgresql.org/message-id/fb898a70-3a88-4629-88e9-f2375020061d@iki.fi

* Improve check in LDAP test to find the OpenLDAP installation (Heikki Linnakangas, 2024-04-07)

  If the OpenLDAP installation directory is not found, set $setup to 0 so that
  the LDAP tests are skipped. The macOS checks were already doing that, but
  the checks on other OS's were not. While we're at it, improve the error
  message when the tests are skipped, to specify whether the OS is supported
  at all, or if we just didn't find the installation directory.

  This was accidentally "working" without this, i.e. we were skipping the
  tests if the OpenLDAP installation was not found, because of a bug in the
  LdapServer test module: the END block clobbered the exit code so if the
  script die()s before running the first subtest, the whole test script was
  marked as SKIPped. The next commit will fix that bug, but we need to fix the
  setup code first.

  These checks should probably go into configure/meson, but this is better
  than nothing and allows fixing the bug in the END block. Backpatch to all
  supported versions.

  Discussion: https://www.postgresql.org/message-id/fb898a70-3a88-4629-88e9-f2375020061d@iki.fi

* Fix unstable aggregate regression test (David Rowley, 2024-03-28)

  Buildfarm member avocet has shown a plan change by switching the finalize
  aggregate stage to use a GroupAggregate rather than a HashAggregate. This is
  consistent with autovacuum having triggered on the table, per analysis by
  Alexander Lakhin.

  Fix this by disabling autovacuum on the table.

  Reported-by: Alexander Lakhin
  Discussion: https://postgr.es/m/d4493a28-589a-5328-fed5-250f2d7d3e2a@gmail.com
  Backpatch-through: 16, where this test was added.

* Fix failure of ALTER FOREIGN TABLE SET SCHEMA to move sequences. (Tom Lane, 2024-03-26)

  Ordinary ALTER TABLE SET SCHEMA will also move any owned sequences into the
  new schema. We failed to do likewise for foreign tables, because
  AlterTableNamespaceInternal believed that only certain relkinds could have
  indexes, owned sequences, or constraints.

  We could simply add foreign tables to that relkind list, but it seems likely
  that the same oversight could be made again in future. Instead let's remove
  the relkind filter altogether. These functions shouldn't cost much when
  there are no objects that they need to process, and surely this isn't an
  especially performance-critical case anyway.

  Per bug #18407 from Vidushi Gupta. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/18407-4fd07373d252c6a0@postgresql.org

* Fix EXPLAIN Bitmap heap scan to count pages with no visible tuples (Heikki Linnakangas, 2024-03-18)

  Previously, bitmap heap scans only counted lossy and exact pages for explain
  when there was at least one visible tuple on the page.

  heapam_scan_bitmap_next_block() returned true only if there was a "valid"
  page with tuples to be processed. However, the lossy and exact page counters
  in EXPLAIN should count the number of pages represented in a lossy or
  non-lossy way in the constructed bitmap, regardless of whether or not the
  pages ultimately contained visible tuples.

  Backpatch to all supported versions.

  Author: Melanie Plageman
  Discussion: https://www.postgresql.org/message-id/CAAKRu_ZwCwWFeL_H3ia26bP2e7HiKLWt0ZmGXPVwPO6uXq0vaA@mail.gmail.com
  Discussion: https://www.postgresql.org/message-id/CAAKRu_bxrXeZ2rCnY8LyeC2Ls88KpjWrQ%2BopUrXDRXdcfwFZGA@mail.gmail.com

* Fix EXPLAIN output for subplans in MERGE. (Dean Rasheed, 2024-03-17)

  Given a subplan in a MERGE query, EXPLAIN would sometimes fail to properly
  display expressions involving Params referencing variables in other parts of
  the plan tree.

  This would affect subplans outside the topmost join plan node, for which
  expansion of Params would go via the top-level ModifyTable plan node. The
  problem was that "inner_tlist" for the ModifyTable node's deparse_namespace
  was set to the join node's targetlist, but "inner_plan" was set to the
  ModifyTable node itself, rather than the join node, leading to incorrect
  results when descending to the referenced variable.

  Fix and backpatch to v15, where MERGE was introduced.

  Discussion: https://postgr.es/m/CAEZATCWAv-sZuH%2BwG5xJ-%2BGt7qGNGX8wUQd3XYydMFDKgRB9nw%40mail.gmail.com

* Fix handling of expecteddir in pg_regress (Daniel Gustafsson, 2024-03-15)

  Commit c855872074b introduced a new parameter to pg_regress to set the
  directory in which to look for expected files, but accidentally only
  implemented it when compiling pg_regress for ECPG tests. Fix by adding
  support for the parameter to the main regression test compilation of
  pg_regress as well.

  Backpatch to v16 where --expecteddir was introduced.

  Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
  Discussion: https://postgr.es/m/CAO6_Xqq5yKJHcJsq__LPcKwSY0XHRqVERNWGxx5ttNXXj7+W=A@mail.gmail.com
  Backpatch-through: 16

* Make INSERT-from-multiple-VALUES-rows handle domain target columns. (Tom Lane, 2024-03-14)

  Commit a3c7a993d fixed some cases involving target columns that are arrays
  or composites by applying transformAssignedExpr to the VALUES entries, and
  then stripping off any assignment ArrayRefs or FieldStores that the
  transformation added. But I forgot about domains over arrays or composites
  :-(. Such cases would either fail with surprising complaints about
  mismatched datatypes, or insert unexpected coercions that could lead to odd
  results. To fix, extend the stripping logic to get rid of CoerceToDomain if
  it's atop an ArrayRef or FieldStore.

  While poking at this, I realized that there's a poorly documented and
  not-at-all-tested behavior nearby: we coerce each VALUES column to the
  domain type separately, and rely on the rewriter to merge those operations
  so that the domain constraints are checked only once. If that merging did
  not happen, it's entirely possible that we'd get unexpected domain
  constraint failures due to checking a partially-updated container value.
  There's no bug there, but while we're here let's improve the commentary
  about it and add some test cases that explicitly exercise that behavior.

  Per bug #18393 from Pablo Kharo. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/18393-65fedb1a0de9260d@postgresql.org

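  A sketch (hypothetical names) of the case fixed here: a multi-row VALUES
  list assigning into subscripts of a domain-over-array target column:

      CREATE DOMAIN short_int_arr AS int[] CHECK (array_length(VALUE, 1) <= 3);
      CREATE TABLE dom_tab (arr short_int_arr);

      -- Previously this could fail with a datatype-mismatch complaint or
      -- apply an unexpected coercion; the stripping logic now also handles
      -- CoerceToDomain atop the assignment nodes.
      INSERT INTO dom_tab (arr[1], arr[2]) VALUES (1, 2), (3, 4);
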
* Fix confusion about the return rowtype of SQL-language procedures. (Tom Lane, 2024-03-12)

  There is a very ancient hack in check_sql_fn_retval that allows a single
  SELECT targetlist entry of composite type to be taken as supplying all the
  output columns of a function returning composite. (This is grotty and
  fundamentally ambiguous, but it's really hard to do nested
  composite-returning functions without it.)

  As far as I know, that doesn't cause any problems in ordinary functions.
  It's disastrous for procedures however. All procedures that have any output
  parameters are labeled with prorettype RECORD, and the CALL code expects it
  will get back a record with one column per output parameter, regardless of
  whether any of those parameters is composite. Doing something else leads to
  an assertion failure or core dump.

  This is simple enough to fix: we just need to not apply that rule when
  considering procedures. However, that requires adding another argument to
  check_sql_fn_retval, which at least in principle might be getting called by
  external callers. Therefore, in the back branches convert
  check_sql_fn_retval into an ABI-preserving wrapper around a new function
  check_sql_fn_retval_ext.

  Per report from Yahor Yuzefovich. This has been broken since we implemented
  procedures, so back-patch to all supported branches.

  Discussion: https://postgr.es/m/CABz5gWHSjj2df6uG0NRiDhZ_Uz=Y8t0FJP-_SVSsRsnrQT76Gg@mail.gmail.com

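  A hedged sketch (hypothetical names) of the problematic shape: a
  SQL-language procedure whose only output parameter is composite and whose
  body returns a single composite column:

      CREATE TYPE two_ints AS (a int, b int);

      CREATE PROCEDURE get_pair(OUT result two_ints)
      LANGUAGE sql
      AS $$ SELECT ROW(1, 2)::two_ints $$;

      -- CALL expects one output column per OUT parameter; before the fix,
      -- a shape like this could hit the assertion failure or crash
      -- described above instead of returning the composite value.
      CALL get_pair(NULL);
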
* Fix incorrect accessing of pfree'd memory in Memoize (David Rowley, 2024-03-11)

  For pass-by-reference types, the code added in 0b053e78b, which aimed to
  resolve a memory leak, was overly aggressive in resetting the per-tuple
  memory context, which could result in pfree'd memory being accessed,
  resulting in failing to find previously cached results in the hash table.

  What was happening was prepare_probe_slot() was switching to the per-tuple
  memory context and calling ExecEvalExpr(). ExecEvalExpr() may have required
  a memory allocation. Both MemoizeHash_hash() and MemoizeHash_equal() were
  aggressively resetting the per-tuple context and after determining the hash
  value, the context would have gotten reset before MemoizeHash_equal() was
  called. This could have resulted in MemoizeHash_equal() looking at pfree'd
  memory.

  This is less likely to have caused issues on a production build as some
  other allocation would have had to have reused the pfree'd memory to
  overwrite it. Otherwise, the original contents would have been intact.
  However, this clearly caused issues on MEMORY_CONTEXT_CHECKING builds.

  Author: Tender Wang, Andrei Lepikhov
  Reported-by: Tender Wang (using SQLancer)
  Reviewed-by: Andrei Lepikhov, Richard Guo, David Rowley
  Discussion: https://postgr.es/m/CAHewXNnT6N6UJkya0z-jLFzVxcwGfeRQSfhiwA+NyLg-x8iGew@mail.gmail.com
  Backpatch-through: 14, where Memoize was added

* Fix handling of self-modified tuples in MERGE. (Dean Rasheed, 2024-03-07)

  When an UPDATE or DELETE action in MERGE returns TM_SelfModified, there are
  2 possible causes:

  1) The target tuple was already updated or deleted by the current command.
     This can happen if the target row joins to more than one source row, and
     the SQL standard explicitly says that this must be an error.

  2) The target tuple was already updated or deleted by a later command in
     the current transaction. This can happen if the tuple is modified by a
     BEFORE trigger or a volatile function used in the query, and should be
     an error for the same reason that it is in a plain UPDATE or DELETE
     command.

  In MERGE's primary error handling block, it failed to check for (2), causing
  it to return a misleading error message in such cases.

  In the secondary error handling block, following a concurrent update from
  another session, it failed to check for (1), causing it to silently ignore
  target rows joined to more than one source row, instead of reporting an
  error.

  Fix this, and add tests for both of these cases.

  Per report from Wenjiang Zhang. Back-patch to v15, where MERGE was
  introduced.

  Discussion: https://postgr.es/m/tencent_41DE0FF443FE14B94A5898D373792109E408%40qq.com

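  A sketch of case (1), using hypothetical tables: the same target row
  matches two source rows, which must be reported as an error:

      CREATE TABLE tgt (id int PRIMARY KEY, val text);
      INSERT INTO tgt VALUES (1, 'old');

      -- Target row 1 joins to two source rows; the SQL standard says this is
      -- an error, and it is now reported consistently.
      MERGE INTO tgt t
      USING (VALUES (1, 'x'), (1, 'y')) AS s(id, val)
      ON t.id = s.id
      WHEN MATCHED THEN UPDATE SET val = s.val;
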
* Revert "Fix parallel-safety check of expressions and predicate for index builds"Michael Paquier2024-03-07
| | | | | | | | | | | This reverts commit eae7be600be7, following a discussion with Tom Lane, due to concerns that this impacts the decisions made by the planner for the number of workers spawned based on the inlining and const-folding of index expressions and predicate for cases that would have worked until this commit. Discussion: https://postgr.es/m/162802.1709746091@sss.pgh.pa.us Backpatch-through: 12
* Fix type-checking of RECORD-returning functions in FROM. (Tom Lane, 2024-03-06)

  In the corner case where a function returning RECORD has been simplified to
  a RECORD constant or an inlined ROW() expression, ExecInitFunctionScan
  failed to cross-check the function's result rowtype against the coldeflist
  provided by the calling query. That happened because get_expr_result_type is
  able to extract a tupdesc from such expressions, which led
  ExecInitFunctionScan to ignore the coldeflist. (Instead, it used the
  extracted tupdesc to check the function's output, which of course always
  succeeds.)

  I have not been able to demonstrate any really serious consequences from
  this, because if some column of the result is of the wrong type and is
  directly referenced by a Var of the calling query,
  CheckVarSlotCompatibility will catch it. However, we definitely do fail to
  report the case where the function returns more columns than the coldeflist
  expects, and in the converse case where it returns fewer columns, we get an
  assert failure (but, seemingly, no worse results in non-assert builds).

  To fix, always build the expected tupdesc from the coldeflist if there is
  one, and consult get_expr_result_type only when there isn't one.

  Also remove the failing Assert, even though it is no longer reached after
  this fix. It doesn't seem to be adding anything useful, since later checking
  will deal with cases with the wrong number of columns.

  The only other place I could find that is doing something similar is
  inline_set_returning_function. There's no live bug there because we cannot
  be looking at a Const or RowExpr, but for consistency change that code to
  agree with ExecInitFunctionScan.

  Per report from PetSerAl. After some debate I've concluded that this should
  be back-patched. There is a small risk that somebody has been relying on
  such a case not throwing an error, but I judge this outweighed by the risk
  that I've missed some way in which the failure to cross-check has worse
  consequences than sketched above.

  Discussion: https://postgr.es/m/CAKygsHSerA1eXsJHR9wft3Gn3wfHQ5RfP8XHBzF70_qcrrRvEg@mail.gmail.com

* Fix parallel-safety check of expressions and predicate for index builds (Michael Paquier, 2024-03-06)

  As coded, the planner logic that calculates the number of parallel workers
  to use for a parallel index build uses expressions and predicates from the
  relcache, which are flattened for the planner by eval_const_expressions().

  As reported in the bug, an immutable parallel-unsafe function flattened in
  the relcache would become a Const, which would be considered as
  parallel-safe, even if the predicate or the expressions including the
  function are not safe in parallel workers. Depending on the expressions or
  predicate used, this could cause the parallel build to fail.

  Tests are included that check parallel index builds with parallel-unsafe
  predicate and expressions. Two routines are added to lsyscache.h to be able
  to retrieve expressions and predicate of an index from its pg_index data.

  Reported-by: Alexander Lakhin
  Author: Tender Wang
  Reviewed-by: Jian He, Michael Paquier
  Discussion: https://postgr.es/m/CAHewXN=UaAaNn9ruHDH3Os8kxLVmtWqbssnf=dZN_s9=evHUFA@mail.gmail.com
  Backpatch-through: 12

* Fix mis-rounding and overflow hazards in date_bin(). (Tom Lane, 2024-02-28)

  In the case where the target timestamp is before the origin timestamp and
  their difference is already an exact multiple of the stride, the code
  incorrectly subtracted the stride anyway.

  Also detect several integer-overflow cases that previously produced bogus
  results. (The submitted patch tried to avoid overflow, but I'm not convinced
  it's right, and problematic cases are so far out of the plausibly-useful
  range that they don't seem worth sweating over. Let's just use
  overflow-detecting arithmetic and throw errors.)

  timestamp_bin() and timestamptz_bin() are basically identical and so had
  identical bugs. Fix both.

  Report and patch by Moaaz Assali, adjusted some by me. Back-patch to v14
  where date_bin() was introduced.

  Discussion: https://postgr.es/m/CALkF+nvtuas-2kydG-WfofbRSJpyODAJWun==W-yO5j2R4meqA@mail.gmail.com

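  A sketch of the mis-rounding case: a target before the origin whose
  distance from the origin is an exact multiple of the stride:

      -- The target is exactly 3648 hours before the origin, so the result
      -- should be the target itself; the bug subtracted one extra stride.
      SELECT date_bin('1 hour'::interval,
                      timestamp '2024-01-01 00:00:00',
                      timestamp '2024-06-01 00:00:00');
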
* Back-patch test modifications that were done as part of b6df0798a5. (Amit Kapila, 2024-02-26)

  This commit fixes the intermittent buildfarm failures in 031_column_list. I
  missed back-patching this while committing b6df0798a5 on HEAD.

  Diagnosed-by: Amit Kapila
  Author: Vignesh C
  Backpatch-through: 15
  Discussion: https://postgr.es/m/3307255.1706911634@sss.pgh.pa.us

* MERGE ... DO NOTHING: require SELECT privileges (Alvaro Herrera, 2024-02-21)

  Verify that a user running MERGE with a DO NOTHING clause has privileges to
  read the table, even if no columns are referenced. Such privileges were
  already required if the ON clause or any of the WHEN conditions referenced
  any column at all, so there's no functional change in practice.

  This change fixes an assertion failure in the case where no column is
  referenced by the command and the WHEN clauses are all DO NOTHING.

  Backpatch to 15, where MERGE was introduced.

  Reported-by: Alena Rybakina <a.rybakina@postgrespro.ru>
  Reported-by: Alexander Lakhin <exclusion@gmail.com>
  Discussion: https://postgr.es/m/4d65a385-7efa-4436-a825-0869f89d9d92@postgrespro.ru

* Fix incorrect pruning of NULL partition for boolean IS NOT clauses (David Rowley, 2024-02-20)

  Partition pruning wrongly assumed that, for a table partitioned on a boolean
  column, a clause in the form "boolcol IS NOT false" and "boolcol IS NOT
  true" could be inverted to correspondingly become "boolcol IS true" and
  "boolcol IS false". These are not equivalent as the NOT version matches the
  opposite boolean value *and* NULLs. This incorrect assumption meant that
  partition pruning pruned away partitions that could contain NULL values.

  Here we fix this by correctly not pruning partitions which could store
  NULLs.

  To be affected by this, the table must be partitioned by a NULLable boolean
  column and queries would have to contain "boolcol IS NOT false" or "boolcol
  IS NOT true". This could result in queries filtering out NULL values with a
  LIST partitioned table and "ERROR: invalid strategy number 0" for RANGE and
  HASH partitioned tables.

  Reported-by: Alexander Lakhin
  Bug: #18344
  Discussion: https://postgr.es/m/18344-8d3f00bada6d09c6@postgresql.org
  Backpatch-through: 12

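  A sketch of the affected setup (hypothetical names): a boolean list
  partitioning scheme with a NULL partition, queried with IS NOT false:

      CREATE TABLE pt (b boolean) PARTITION BY LIST (b);
      CREATE TABLE pt_true  PARTITION OF pt FOR VALUES IN (true);
      CREATE TABLE pt_false PARTITION OF pt FOR VALUES IN (false);
      CREATE TABLE pt_null  PARTITION OF pt FOR VALUES IN (NULL);
      INSERT INTO pt VALUES (true), (false), (NULL);

      -- "IS NOT false" matches true AND NULL, so pt_null must not be pruned;
      -- the bug could drop the NULL row from the result.
      SELECT * FROM pt WHERE b IS NOT false;
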
* Fix test race between primary XLOG_RUNNING_XACTS and standby logical slot. (Noah Misch, 2024-02-19)

  Before the previous commit, the test could hang until
  LOG_SNAPSHOT_INTERVAL_MS (15s), until checkpoint_timeout (300s), or
  indefinitely. An indefinite hang was awfully improbable. It entailed the
  test reaching checkpoint_timeout before the DecodingContextFindStartpoint()
  of a CREATE SUBSCRIPTION, yet after the preceding WAL record. Back-patch to
  v16, which introduced the test.

  Bertrand Drouvot, reported by Noah Misch.

  Discussion: https://postgr.es/m/20240211010227.a2.nmisch@google.com

* Bound waits in 035_standby_logical_decoding.pl. (Noah Misch, 2024-02-19)

  One IPC::Run::start() used an IPC::Run::timer() without checking for
  expiration. The other used no timeout or timer. Back-patch to v16, which
  introduced the test.

  Reviewed by Bertrand Drouvot.

  Discussion: https://postgr.es/m/20240211010227.a2.nmisch@google.com

* Use a safer outfuncs/readfuncs representation for BitStrings. (Tom Lane, 2024-02-13)

  For a long time, our outfuncs.c code has supposed that the string contents
  of a BitString node could just be printed literally with no concern for
  quoting/escaping. Now, that's okay if the string literal contains only valid
  binary or hex digits ... but our lexer doesn't check that, preferring to let
  bitin() be the sole authority on what's valid. So we could have raw parse
  trees that contain incorrect BitString literals, and that can result in
  failures when WRITE_READ_PARSE_PLAN_TREES debugging is enabled.

  Fix by using outToken() to print the string field, and debackslash() to read
  it. This results in a change in the emitted representation only in cases
  that would have failed before, and don't represent valid SQL in the first
  place. Between that and the fact that we don't store raw parse trees in the
  catalogs, I judge this safe to apply without a catversion bump.

  Per bug #18340 from Alexander Lakhin. Back-patch to v16; before that, we
  lacked readfuncs support for BitString nodes, so that the problem was only
  cosmetic.

  Discussion: https://postgr.es/m/18340-4aa1ae6ed4121912@postgresql.org

* Fix propagation of persistence to sequences in ALTER TABLE / ADD COLUMN (Peter Eisentraut, 2024-02-09)

  Fix for 344d62fb9a9: That commit introduced unlogged sequences and made it
  so that identity/serial sequences automatically get the persistence level of
  their owning table. But this works only for CREATE TABLE and not for ALTER
  TABLE / ADD COLUMN. The latter would always create the sequence as logged
  (default), independent of the persistence setting of the table. This is
  fixed here.

  Note: It is allowed to change the persistence of identity sequences directly
  using ALTER SEQUENCE. So mistakes in existing databases can be fixed
  manually.

  Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/c4b6e2ed-bcdf-4ea7-965f-e49761094827%40eisentraut.org

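  A sketch of the fixed behavior, assuming the identity sequence follows the
  usual <table>_<column>_seq naming pattern (hypothetical table name):

      CREATE UNLOGGED TABLE u_tab (x int);
      ALTER TABLE u_tab ADD COLUMN id int GENERATED ALWAYS AS IDENTITY;

      -- The identity sequence should now inherit the table's persistence
      -- ('u' for unlogged) instead of always being created as logged.
      SELECT c.relname, c.relpersistence
      FROM pg_class AS c
      WHERE c.relname = 'u_tab_id_seq';
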
* Fix assertion if index is dropped during REFRESH CONCURRENTLY (Heikki Linnakangas, 2024-02-05)

  When assertions are disabled, the built SQL statement is invalid and you get
  a "syntax error". So this isn't a serious problem, but let's avoid the
  assertion failure.

  Backpatch to all supported versions.

  Reviewed-by: Noah Misch

* Apply band-aid fix for an oversight in reparameterize_path_by_child. (Tom Lane, 2024-02-01)

  The path we wish to reparameterize is not a standalone object: in
  particular, it implicitly references baserestrictinfo clauses in the
  associated RelOptInfo, and if it's a SampleScan path then there is also the
  TableSampleClause in the RTE to worry about. Both of those could contain
  lateral references to the join partner relation, which would need to be
  modified to refer to its child. Since we aren't doing that, affected queries
  can give wrong answers, or odd failures such as "variable not found in
  subplan target list", or executor crashes. But we can't just summarily
  modify those expressions, because they are shared with other paths for the
  rel. We'd break things if we modify them and then end up using some
  non-partitioned-join path.

  In HEAD, we plan to fix this by postponing reparameterization until
  create_plan(), when we know that those other paths are no longer of
  interest, and then adjusting those expressions along with the ones in the
  path itself. That seems like too big a change for stable branches however.
  In the back branches, let's just detect whether any troublesome lateral
  references actually exist in those expressions, and fail reparameterization
  if so. This will result in not performing a partitioned join in such cases.
  Given the lack of field complaints, nobody's likely to miss the
  optimization.

  Report and patch by Richard Guo. Apply to 12-16 only, since the intended fix
  for HEAD looks quite different. We're not quite ready to push the HEAD fix,
  but with back-branch releases coming up soon, it seems wise to get this
  stopgap fix in place there.

  Discussion: https://postgr.es/m/CAMbWs496+N=UAjOc=rcD3P7B6oJe4rZw08e_TZRUsWbPxZW3Tw@mail.gmail.com

* Fix various issues with ALTER TEXT SEARCH CONFIGURATION (Michael Paquier, 2024-01-31)

  This commit addresses a set of issues when changing token type mappings in a
  text search configuration when using duplicated token names:

  - ADD MAPPING would fail on insertion because of a constraint failure after
    inserting the same mapping.
  - ALTER MAPPING with an "overridden" configuration failed with "tuple
    already updated by self" when the token mappings are removed.
  - DROP MAPPING failed with "tuple already updated by self", as previously,
    but in a different code path.

  The code is refactored so the token names (with their numbers) are handled
  as a List with unique members rather than an array with numbers, ensuring
  that no duplicates mess up the catalog inserts, updates and deletes. The
  list is generated by getTokenTypes(), with the same error handling as
  previously, while duplicated tokens are discarded from the list used to work
  on the catalogs.

  Regression tests are expanded to cover much more ground for the cases fixed
  by this commit, as there was no coverage for the code touched in this
  commit. A bit more is done regarding the fact that a token name not
  supported by a configuration's parser should result in an error even if IF
  EXISTS is used in a DROP MAPPING clause. This is implied in the code but
  there was no coverage for that, and it was very easy to miss.

  These issues have existed since at least their introduction in core with
  140d4ebcb46e, so backpatch all the way down.

  Reported-by: Alexander Lakhin
  Author: Tender Wang, Michael Paquier
  Discussion: https://postgr.es/m/18310-1eb233c5908189c8@postgresql.org
  Backpatch-through: 12

* Fix DROP ROLE when specifying duplicated roles (Michael Paquier, 2024-01-29)

  This commit fixes failures with "tuple already updated by self" when listing
  the same role twice in a DROP ROLE query. This is an oversight in
  6566133c5f52, which introduced a two-phase logic in DropRole() where
  dependencies of all the roles to drop are removed in a first phase, with the
  roles themselves removed from pg_authid in a second phase.

  The code is simplified to not rely on a List of ObjectAddress built in the
  first phase used to remove the pg_authid entries in the second phase,
  switching to a list of OIDs. Duplicated OIDs can be simply avoided in the
  first phase thanks to that. Using ObjectAddress was not necessary for the
  roles as they are not used for anything specific to dependency.c, building
  all the ObjectAddress in the List with AuthIdRelationId as class ID.

  In 15 and older versions, where a single phase is used, DROP ROLE with
  duplicated role names would fail on "role \"blah\" does not exist" for the
  second entry after the CCI() done by the first deletion. This is not really
  incorrect, but it does not seem worth changing based on a lack of
  complaints.

  Reported-by: Alexander Lakhin
  Reviewed-by: Tender Wang
  Discussion: https://postgr.es/m/18310-1eb233c5908189c8@postgresql.org
  Backpatch-through: 16

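  A minimal sketch of the fixed case (hypothetical role name):

      CREATE ROLE dup_role;

      -- Listing the same role twice used to fail on v16 with
      -- "tuple already updated by self"; it now simply drops the role.
      DROP ROLE dup_role, dup_role;
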
* Detect Julian-date overflow in timestamp[tz]_pl_interval. (Tom Lane, 2024-01-26)

  We perform addition of the days field of an interval via arithmetic on the
  Julian-date representation of the timestamp's date. This step is subject to
  int32 overflow, and we also should not let the Julian date become very
  negative, for fear of weird results from j2date. (In the timestamptz case,
  allow a Julian date of -1 to pass, since it might convert back to zero after
  timezone rotation.)

  The additions of the months and microseconds fields could also overflow, of
  course. However, I believe we need no additional checks there; the existing
  range checks should catch such cases. The difficulty here is that j2date's
  magic modular arithmetic could produce something that looks like it's
  in-range.

  Per bug #18313 from Christian Maurer. This has been wrong for a long time,
  so back-patch to all supported branches.

  Discussion: https://postgr.es/m/18313-64d2c8952d81e84b@postgresql.org

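  A hedged illustration of the days-field case (values chosen only to force
  the int32 Julian-date overflow described above):

      -- Should now raise "timestamp out of range" instead of producing
      -- weird results from overflowed Julian-date arithmetic.
      SELECT timestamp '2024-01-01' + interval '2147483647 days';
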
* Fix ALTER TABLE .. ADD COLUMN with complex inheritance trees (Michael Paquier, 2024-01-24)

  This command, when used to add a column on a parent table with a complex
  inheritance tree, tried to update the same tuple in pg_attribute multiple
  times for a child table when incrementing attinhcount, causing failures with
  "tuple already updated by self" because of a missing
  CommandCounterIncrement() between the updates.

  This has existed for a rather long time, so backpatch all the way down.

  Reported-by: Alexander Lakhin
  Author: Tender Wang
  Reviewed-by: Richard Guo
  Discussion: https://postgr.es/m/18297-b04cd83a55b51e35@postgresql.org
  Backpatch-through: 12

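  One shape of inheritance tree that could hit this, sketched with
  hypothetical table names:

      CREATE TABLE gp (a int);
      CREATE TABLE p1 () INHERITS (gp);
      CREATE TABLE p2 () INHERITS (gp);
      CREATE TABLE c  () INHERITS (p1, p2);   -- reaches gp through two paths

      -- Propagating the new column to c increments its attinhcount twice in
      -- the same command; this used to fail with
      -- "tuple already updated by self".
      ALTER TABLE gp ADD COLUMN b int;
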
* Improve stability of recovery test 035_standby_logical_decoding (Michael Paquier, 2024-01-23)

  This commit tweaks a couple of things in 035_standby_logical_decoding to
  hopefully stabilize it:

  - Autovacuum is now disabled, as it could hold a global xmin with a
    transaction.
  - Conflicts are generated with command sequences that removed rows (on
    catalogs, shared or non-shared, or just plain relations) followed by a
    VACUUM. This was unstable because these did not check that the horizon
    moved between the SQL commands and the VACUUM. The logic is refactored as
    follows, to ensure that VACUUM removes dead rows before testing for slot
    invalidation on a standby (idea suggested by Andres Freund):
    -- Grab the current horizon.
    -- Launch SQL commands removing rows.
    -- Check that the snapshot horizon has been updated.
    -- Launch VACUUM on the relation whose rows have been removed by the
       first step.

  Note that there are still some issues because of standby snapshot WAL
  records generated by the bgwriter, but this makes the test much more stable.

  Per reports from buildfarm members dikkop and skink, with analysis and tests
  from Alexander Lakhin.

  While on it, fix a couple of incorrect comments.

  Author: Bertrand Drouvot
  Reviewed-by: Alexander Lakhin, Michael Paquier
  Discussion: https://postgr.es/m/OSZPR01MB6310ED3CEDB531BCEDBC6AF2FD479@OSZPR01MB6310.jpnprd01.prod.outlook.com
  Discussion: https://postgr.es/m/bf67e076-b163-9ba3-4ade-b9fc51a3a8f6@gmail.com
  Backpatch-through: 16

* Fix buildfarm error from commit 5c31669058. (Jeff Davis, 2024-01-18)

  Skip test when not using unix domain sockets.

  Discussion: https://postgr.es/m/CALDaNm29-8OozsBWo9H6DN_Tb_3yA1QjRJput-KhaN8ncDJtJA@mail.gmail.com
  Backpatch-through: 16

* Re-validate connection string in libpqrcv_connect(). (Jeff Davis, 2024-01-12)

  A superuser may create a subscription with password_required=true, but which
  uses a connection string without a password.

  Previously, if the owner of such a subscription was changed to a
  non-superuser, the non-superuser was able to utilize a password from another
  source (like a password file or the PGPASSWORD environment variable), which
  should not have been allowed.

  This commit adds a step to re-validate the connection string before
  connecting.

  Reported-by: Jeff Davis
  Author: Vignesh C
  Reviewed-by: Peter Smith, Robert Haas, Amit Kapila
  Discussion: https://www.postgresql.org/message-id/flat/e5892973ae2a80a1a3e0266806640dae3c428100.camel%40j-davis.com
  Backpatch-through: 16

* pg_regress: Disable autoruns for cmd.exe on Windows (Michael Paquier, 2024-01-12)

  This is similar to 9886744a361b, to prevent the execution of other programs
  due to autorun configurations which could influence the postmaster startup.

  This was originally applied on HEAD as of 83c75ac7fb69 without a backpatch,
  but the patch has survived CI and buildfarm cycles. I have checked that cmd
  /d exists down to Windows XP, which should make this change work correctly
  in the oldest branches still supported.

  Discussion: https://postgr.es/m/20230922.161551.320043332510268554.horikyota.ntt@gmail.com
  Backpatch-through: 12

* Allow subquery pullup to wrap a PlaceHolderVar in another one. (Tom Lane, 2024-01-11)

  The code for wrapping subquery output expressions in PlaceHolderVars
  believed that if the expression already was a PlaceHolderVar, it was never
  necessary to wrap that in another one. That's wrong if the expression is
  underneath an outer join and involves a lateral reference to outside that
  scope: failing to add an additional PHV risks evaluating the expression at
  the wrong place and hence not forcing it to null when the outer join should
  do so. This is an oversight in commit 9e7e29c75, which added logic to
  forcibly wrap lateral-reference Vars in PlaceHolderVars, but didn't see that
  the adjacent case for PlaceHolderVars needed the same treatment.

  The test case we have for this doesn't fail before 4be058fe9, but now that I
  see the problem I wonder if it is possible to demonstrate related errors
  before that. That's moot though, since all such branches are out of support.

  Per bug #18284 from Holger Reise. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/18284-47505a20c23647f8@postgresql.org

* Handle WindowClause.runCondition in tree walker/mutator functions. (Tom Lane, 2024-01-10)

  Commit 9d9c02ccd, which added the notion of a "run condition" for window
  functions, neglected to teach nodeFuncs.c to process the new field.
  Remarkably, that doesn't seem to have had any ill effects before we invented
  Var.varnullingrels, but now it can cause visible failures in join-removal
  scenarios.

  I have no faith that there's not reachable problems in v15 too, so
  back-patch the code change to v15 where 9d9c02ccd came in. The test case
  seems irrelevant to v15, though.

  Per bug #18277 from Zuming Jiang. Diagnosis and patch by Richard Guo.

  Discussion: https://postgr.es/m/18277-089ead83b329a2fd@postgresql.org

* Fix use of incorrect TupleTableSlot in DISTINCT aggregates (David Rowley, 2024-01-04)

  1349d2790 added code to allow DISTINCT and ORDER BY aggregates to work more
  efficiently by using presorted input. That commit added some code that made
  use of the AggState's tmpcontext and adjusted the ecxt_outertuple and
  ecxt_innertuple slots before checking if the current row is distinct from
  the previously seen row. That code forgot to set the TupleTableSlots back to
  what they were originally, which could result in errors such as:

      ERROR:  attribute 1 of type record has wrong type

  This only affects aggregate functions which have multiple arguments when
  DISTINCT is used. For example: string_agg(DISTINCT col, ', ')

  Thanks to Tom Lane for identifying the breaking commit.

  Bug: #18264
  Reported-by: Vojtěch Beneš
  Discussion: https://postgr.es/m/18264-e363593d7e9feb7d@postgresql.org
  Backpatch-through: 16, where 1349d2790 was added

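  A sketch of the affected shape (hypothetical table): a multi-argument
  aggregate with DISTINCT:

      CREATE TABLE words (w text);
      INSERT INTO words VALUES ('b'), ('a'), ('b');

      -- A multi-argument aggregate with DISTINCT; this shape could fail with
      -- "attribute 1 of type record has wrong type" before the fix.
      SELECT string_agg(DISTINCT w, ', ') FROM words;
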
* Set readline-relevant ENV vars in interactive_psql(), not caller. (Tom Lane, 2023-12-23)

  Commit 664d75753 pulled 010_tab_completion.pl's infrastructure for invoking
  an interactive psql session out into a generally-useful test function, but
  it didn't move enough stuff. We need to set up various environment variables
  that readline will look at, both to ensure stability of test results and to
  prevent test actions from cluttering the calling user's ~/.psql_history.
  Expecting calling scripts to remember to do that is too failure-prone: the
  other existing caller 001_password.pl did not do it.

  Hence, remove those initialization steps from 010_tab_completion.pl and put
  them into interactive_psql(). Since interactive_psql was already making a
  local ENV hash, this has no effect on calling scripts.

  Discussion: https://postgr.es/m/794610.1703182896@sss.pgh.pa.us

* Avoid trying to fetch metapage of an SPGist partitioned index. (Tom Lane, 2023-12-21)

  This is necessary when spgcanreturn() is invoked on a partitioned index, and
  the failure might be reachable in other scenarios as well. The rest of what
  spgGetCache() does is perfectly sensible for a partitioned index, so we
  should allow it to go through.

  I think the main takeaway from this is that we lack sufficient test coverage
  for non-btree partitioned indexes. Therefore, I added simple test cases for
  brin and gin as well as spgist (hash and gist AMs were covered already in
  indexing.sql).

  Per bug #18256 from Alexander Lakhin. Although the known test case only
  fails since v16 (3c569049b), I've got no faith at all that there aren't
  other ways to reach this problem; so back-patch to all supported branches.

  Discussion: https://postgr.es/m/18256-0b0e1b6e4a620f1b@postgresql.org

* Fix BEFORE ROW trigger handling in cross-partition MERGE update. (Dean Rasheed, 2023-12-21)

  Fix a bug during MERGE if a cross-partition update is attempted on a
  partitioned table with a BEFORE DELETE ROW trigger that returns NULL, to
  prevent the update. This would cause an error to be thrown, or an assert
  failure in an assert-enabled build.

  This was an oversight in 9321c79c86, which failed to properly distinguish a
  DELETE prevented by a trigger from one prevented by a concurrent update. Fix
  by having ExecDelete() return the TM_Result status to
  ExecCrossPartitionUpdate(), so that it can distinguish the two cases, and
  make ExecCrossPartitionUpdate() return the TM_Result status to
  ExecUpdateAct(), so that it can return the correct status from a concurrent
  update.

  In addition, ensure that the command tag is correctly updated by having
  ExecMergeMatched() pass canSetTag to ExecUpdateAct(), rather than passing
  false, so that it updates the command tag if it does a cross-partition
  update, making this code path in ExecMergeMatched() consistent with
  ExecUpdate().

  Per bug #18238 from Alexander Lakhin. Back-patch to v15, where MERGE was
  introduced.

  Dean Rasheed, reviewed by Richard Guo and Jian He.

  Discussion: https://postgr.es/m/18238-2f2bdc7f720180b9%40postgresql.org

* Fix bugs in manipulation of large objects. (Tom Lane, 2023-12-15)

  In v16 and up (since commit afbfc0298), large object ownership checking has
  been broken because object_ownercheck() didn't take care of the discrepancy
  between our object-address representation of large objects (classId ==
  LargeObjectRelationId) and the catalog where their ownership info is
  actually stored (LargeObjectMetadataRelationId). This resulted in failures
  such as "unrecognized class ID: 2613" when trying to update blob properties
  as a non-superuser.

  Poking around for related bugs, I found that AlterObjectOwner_internal would
  pass the wrong classId to the PostAlterHook in the no-op code path where the
  large object already has the desired owner. Also, recordExtObjInitPriv
  checked for the wrong classId; that bug is only latent because the stanza is
  dead code anyway, but as long as we're carrying it around it should be less
  wrong. These bugs are quite old.

  In HEAD, we can reduce the scope for future bugs of this ilk by changing
  AlterObjectOwner_internal's API to let the translation happen inside that
  function, rather than requiring callers to know about it.

  A more bulletproof fix, perhaps, would be to start using
  LargeObjectMetadataRelationId as the dependency and object-address classId
  for blobs. However that has substantial risk of breaking third-party code;
  even within our own code, it'd create hassles for pg_dump which would have
  to cope with a version-dependent representation. For now, keep the status
  quo.

  Discussion: https://postgr.es/m/2650449.1702497209@sss.pgh.pa.us

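  One plausible sequence that exercised the broken path (assuming OID 4242 is
  free; names and OID are illustrative only):

      -- Run as an ordinary (non-superuser) role that owns the large object.
      SELECT lo_create(4242);

      -- Updating the blob's properties previously failed in v16 with
      -- "unrecognized class ID: 2613"; it should now succeed for the owner.
      COMMENT ON LARGE OBJECT 4242 IS 'owned by a non-superuser';
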
* Fix wrong description of BackgroundPsql's timeout. (Masahiko Sawada, 2023-11-30)

  Backpatch through 16 where this was introduced by 664d757531e1.

  Reviewed-by: Daniel Gustafsson
  Backpatch-through: 16
  Discussion: http://postgr.es/m/CAD21AoBXMEqDBLoDuAWVWoTLYB4aNsxx4oYNmyJJbhfq_vGQBQ@mail.gmail.com

* Fix BackgroundPsql's set_query_timer_restart() issue without argument. (Masahiko Sawada, 2023-11-30)

  The set_query_timer_restart() function required an argument to define a
  value for query_timer_restart, but none of the existing callers pass an
  argument to this function. This changes the function to set the value
  without requiring an argument.

  Backpatch through 16 where the background psql TAP functions were refactored
  by 664d757531e1.

  Reviewed-by: Bharath Rupireddy, Tom Lane
  Discussion: https://postgr.es/m/CAD21AoA0B6VKe_5A9nZi8i5umwSN-zJJuPVNht9DaOZ9SJumMA@mail.gmail.com
  Backpatch-through: 16

* Use BIO_{get,set}_app_data instead of BIO_{get,set}_data. (Tom Lane, 2023-11-28)

  We should have done it this way all along, but we accidentally got away with
  using the wrong BIO field up until OpenSSL 3.2. There, the library's BIO
  routines that we rely on use the "data" field for their own purposes, and
  our conflicting use causes assorted weird behaviors up to and including core
  dumps when SSL connections are attempted. Switch to using the approved field
  for the purpose, i.e. app_data.

  While at it, remove our configure probes for BIO_get_data as well as the
  fallback implementation. BIO_{get,set}_app_data have been there since long
  before any OpenSSL version that we still support, even in the back branches.

  Also, update src/test/ssl/t/001_ssltests.pl to allow for a minor change in
  an error message spelling that evidently came in with 3.2.

  Tristan Partin and Bo Andreson. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/CAN55FZ1eDDYsYaL7mv+oSLUij2h_u6hvD4Qmv-7PK7jkji0uyQ@mail.gmail.com

* Fix assertions with RI triggers in heap_update and heap_delete. (Heikki Linnakangas, 2023-11-28)

  If the tuple being updated is not visible to the crosscheck snapshot, we
  return TM_Updated but the assertions would not hold in that case. Move them
  to before the cross-check.

  Fixes bug #17893. Backpatch to all supported versions.

  Author: Alexander Lakhin
  Backpatch-through: 12
  Discussion: https://www.postgresql.org/message-id/17893-35847009eec517b5%40postgresql.org
