path: root/src
...
* Fix mishandling of OLD/NEW references in subqueries in rule actions. (Dean Rasheed, 2023-02-25)

  If a rule action contains a subquery that refers to columns from OLD or
  NEW, then those are really lateral references, and the planner will
  complain if it sees such things in a subquery that isn't marked as
  lateral. However, at rule-definition time, the user isn't required to
  mark the subquery with LATERAL, and so it can fail when the rule is
  used. Fix this by marking such subqueries as lateral in the rewriter,
  at the point where they're used.

  Dean Rasheed and Tom Lane, per report from Alexander Lakhin.
  Back-patch to all supported branches.

  Discussion: https://postgr.es/m/5e09da43-aaba-7ea7-0a51-a2eb981b058b%40gmail.com
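  A minimal sketch of the failing shape, with an illustrative schema (not
  taken from the original report): the rule is accepted at definition
  time, but using it could fail before the fix because the FROM subquery
  referencing NEW was not marked lateral.

      CREATE TABLE t (a int, b text);
      CREATE TABLE t_log (a int, prior_count bigint);

      -- The subquery in FROM refers to NEW.a: implicitly a lateral reference.
      CREATE RULE t_ins AS ON INSERT TO t
          DO ALSO INSERT INTO t_log
              SELECT NEW.a, s.cnt
              FROM (SELECT count(*) AS cnt FROM t WHERE t.a = NEW.a) AS s;

      INSERT INTO t VALUES (1, 'x');  -- could fail at plan time pre-fix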
* Don't repeatedly register cache callbacks in pgoutput plugin. (Tom Lane, 2023-02-23)

  Multiple cycles of starting up and shutting down the plugin within a
  single session would eventually lead to "out of relcache_callback_list
  slots", because pgoutput_startup blindly re-registered its cache
  callbacks each time. Fix it to register them only once, as all other
  users of cache callbacks already take care to do.

  This has been broken all along, so back-patch to all supported branches.

  Shi Yu

  Discussion: https://postgr.es/m/OSZPR01MB631004A78D743D68921FFAD3FDA79@OSZPR01MB6310.jpnprd01.prod.outlook.com
* Fix multi-row DEFAULT handling for INSERT ... SELECT rules. (Dean Rasheed, 2023-02-23)

  Given an updatable view with a DO ALSO INSERT ... SELECT rule, a
  multi-row INSERT ... VALUES query on the view fails if the VALUES list
  contains any DEFAULTs that are not replaced by view defaults. This
  manifests as an "unrecognized node type" error, or an Assert failure,
  in an assert-enabled build.

  The reason is that when RewriteQuery() attempts to replace the
  remaining DEFAULT items with NULLs in any product queries, using
  rewriteValuesRTEToNulls(), it assumes that the VALUES RTE is located at
  the same rangetable index in each product query. However, if the
  product query is an INSERT ... SELECT, then the VALUES RTE is actually
  in the SELECT part of that query (at the same index), rather than the
  top-level product query itself.

  Fix, by descending to the SELECT in such cases. Note that we can't
  simply use getInsertSelectQuery() for this, since that expects to be
  given a raw rule action with OLD and NEW placeholder entries, so we
  duplicate its logic instead.

  While at it, beef up the checks in getInsertSelectQuery() by checking
  that the jointree->fromlist node is indeed a RangeTblRef, and that the
  RTE it points to has rtekind == RTE_SUBQUERY.

  Per bug #17803, from Alexander Lakhin.

  Back-patch to all supported branches.

  Dean Rasheed, reviewed by Tom Lane.

  Discussion: https://postgr.es/m/17803-53c63ed4ecb4eac6%40postgresql.org
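  A hedged sketch of the triggering query shape (schema illustrative, not
  taken from bug #17803): the view has no column defaults of its own, so
  the DEFAULT in the VALUES list survives into the DO ALSO product query,
  which is an INSERT ... SELECT.

      CREATE TABLE base (a int, b text DEFAULT 'base-default');
      CREATE VIEW v AS SELECT * FROM base;  -- updatable view, no view defaults
      CREATE TABLE audit (a int, b text);

      CREATE RULE v_audit AS ON INSERT TO v
          DO ALSO INSERT INTO audit SELECT NEW.a, NEW.b;

      -- Multi-row VALUES with a DEFAULT not replaced by any view default:
      INSERT INTO v VALUES (1, DEFAULT), (2, 'two');  -- failed pre-fix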
* Fix snapshot handling in logicalmsg_decode (Tomas Vondra, 2023-02-22)

  When decoding a transactional logical message, logicalmsg_decode called
  SnapBuildGetOrBuildSnapshot. But we may not have a consistent snapshot
  yet at that point. We don't actually need the snapshot in this case
  (during replay we'll have the snapshot from the transaction), so in
  practice this is harmless. But in an assert-enabled build this crashes.

  Fixed by requesting the snapshot only in the non-transactional case,
  where we are guaranteed to have SNAPBUILD_CONSISTENT.

  Backpatch to 11. The issue exists since 9.6.

  Backpatch-through: 11
  Reviewed-by: Andres Freund
  Discussion: https://postgr.es/m/84d60912-6eab-9b84-5de3-41765a5449e8@enterprisedb.com
* Add missing support for the latest SPI status codes. (Dean Rasheed, 2023-02-22)

  SPI_result_code_string() was missing support for SPI_OK_TD_REGISTER,
  and in v15 and later, it was missing support for SPI_OK_MERGE, as was
  pltcl_process_SPI_result(). The last of those would trigger an error
  if a MERGE was executed from PL/Tcl. The others seem fairly innocuous,
  but worth fixing.

  Back-patch to all supported branches. Before v15, this is just adding
  SPI_OK_TD_REGISTER to SPI_result_code_string(), which is unlikely to be
  seen by anyone, but seems worth doing for completeness.

  Reviewed by Tom Lane.

  Discussion: https://postgr.es/m/CAEZATCUg8V%2BK%2BGcafOPqymxk84Y_prXgfe64PDoopjLFH6Z0Aw%40mail.gmail.com
  Discussion: https://postgr.es/m/CAEZATCUMe%2B_KedPMM9AxKqm%3DSZogSxjUcrMe%2BsakusZh3BFcQw%40mail.gmail.com
* Fix Assert failure for MERGE into a partitioned table with RLS. (Dean Rasheed, 2023-02-22)

  In ExecInitPartitionInfo(), the Assert when building the WITH CHECK
  OPTION list for the new partition assumed that the command would be an
  INSERT or UPDATE, but it can also be a MERGE. This can be triggered by
  a MERGE into a partitioned table with RLS checks to enforce.

  Fix, and back-patch to v15, where MERGE was introduced.

  Discussion: https://postgr.es/m/CAEZATCWWFtQmW67F3XTyMU5Am10Oxa_b8oe0x%2BNu5Mo%2BCdRErg%40mail.gmail.com
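  A sketch of the kind of statement that could trip the Assert in an
  assert-enabled build, assuming an illustrative schema (not taken from
  the original report); FORCE ROW LEVEL SECURITY makes the WITH CHECK
  policy apply even to the table owner.

      CREATE TABLE pt (id int, val text) PARTITION BY LIST (id);
      CREATE TABLE pt1 PARTITION OF pt FOR VALUES IN (1);
      ALTER TABLE pt ENABLE ROW LEVEL SECURITY;
      ALTER TABLE pt FORCE ROW LEVEL SECURITY;
      CREATE POLICY pt_policy ON pt USING (true) WITH CHECK (val <> 'forbidden');

      MERGE INTO pt USING (VALUES (1, 'x')) AS s(id, val) ON pt.id = s.id
      WHEN MATCHED THEN UPDATE SET val = s.val
      WHEN NOT MATCHED THEN INSERT VALUES (s.id, s.val);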
* Fix MERGE command tag for cross-partition updates. (Dean Rasheed, 2023-02-22)

  This ensures that the row count in the command tag for a MERGE is
  correctly computed. Previously, if MERGE updated a partitioned table,
  the row count would be incorrect if any row was moved to a different
  partition, since such updates were counted twice.

  Back-patch to v15, where MERGE was introduced.

  Discussion: https://postgr.es/m/CAEZATCWRMG7XX2QEsVL1LswmNo2d_YG8tKTLkpD3=Lp644S7rg@mail.gmail.com
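  A minimal sketch of the double-counting scenario (schema illustrative):
  the UPDATE changes the partition key, so the row moves between
  partitions, and pre-fix the command tag could report it twice.

      CREATE TABLE pt (id int, val text) PARTITION BY RANGE (id);
      CREATE TABLE pt_lo PARTITION OF pt FOR VALUES FROM (0) TO (10);
      CREATE TABLE pt_hi PARTITION OF pt FOR VALUES FROM (10) TO (20);
      INSERT INTO pt VALUES (5, 'row');

      MERGE INTO pt USING (VALUES (5)) AS s(id) ON pt.id = s.id
      WHEN MATCHED THEN UPDATE SET id = 15;   -- expected tag: MERGE 1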
* Fix corruption of templates after CREATE DATABASE .. STRATEGY WAL_LOG (Michael Paquier, 2023-02-22)

  WAL_LOG does a scan of the template's pg_class to determine the set of
  relations that need to be copied from a template database to the new
  one. However, as coded in 9c08aea, this copy strategy would load the
  pages of pg_class without considering it as a permanent relation,
  causing the loaded pages to never be flushed when they should. Any
  modification of the template's pg_class, mostly through DDLs, would
  then be missed, causing corruptions.

  STRATEGY = WAL_LOG has been the default over FILE_COPY since it was
  introduced, so any changes done to pg_class on a database template
  would be gone. Updates of database templates should be a rare thing,
  so the impact of this bug should hopefully be limited. The pre-14
  default strategy FILE_COPY is safe, and can be used as a workaround.

  Ryo Matsumura has found and analyzed the issue, and Nathan has written
  a test able to reproduce the failure (with a few tweaks from me).

  Backpatch down to 15, where STRATEGY = WAL_LOG was introduced.

  Author: Nathan Bossart, Ryo Matsumura
  Reviewed-by: Dilip Kumar, Michael Paquier
  Discussion: https://postgr.es/m/TYCPR01MB6868677E499C9AD5123084B5E8A39@TYCPR01MB6868.jpnprd01.prod.outlook.com
  Backpatch-through: 15
* Fix erroneous Valgrind markings in AllocSetRealloc. (Tom Lane, 2023-02-21)

  If asked to decrease the size of a large (>8K) palloc chunk,
  AllocSetRealloc could improperly change the Valgrind state of memory
  beyond the new end of the chunk: it would mark data UNDEFINED as far as
  the old end of the chunk after having done the realloc(3) call, thus
  tromping on the state of memory that no longer belongs to it. One
  would normally expect that memory to now be marked NOACCESS, so that
  this mislabeling might prevent detection of later errors. If realloc()
  had chosen to move the chunk someplace else (unlikely, but well within
  its rights) we could also mismark perfectly-valid DEFINED data as
  UNDEFINED, causing false-positive valgrind reports later. Also, any
  malloc bookkeeping data placed within this area might now be wrongly
  marked, causing additional problems.

  Fix by replacing relevant uses of "oldsize" with "Min(size, oldsize)".
  It's sufficient to mark as far as "size" when that's smaller, because
  whatever remains in the new chunk size will be marked NOACCESS below,
  and we expect realloc() to have taken care of marking the memory beyond
  the new official end of the chunk.

  While we're here, also rename the function's "oldsize" variable to
  "oldchksize" to more clearly explain what it actually holds, namely the
  distance to the end of the chunk (that is, requested size plus trailing
  padding). This is more consistent with the use of "size" and "chksize"
  to hold the new requested size and chunk size. Add a new variable
  "oldsize" in the one stanza where we're actually talking about the old
  requested size.

  Oversight in commit c477f3e44. Back-patch to all supported branches,
  as that was, just in case anybody wants to do valgrind testing on back
  branches.

  Karina Litskevich

  Discussion: https://postgr.es/m/CACiT8iaAET-fmzjjZLjaJC4zwSJmrFyL7LAdHwaYyjjQOQ4hcg@mail.gmail.com
* pgbench: Prepare commands in pipelines in advance (Alvaro Herrera, 2023-02-21)

  Failing to do so results in an error when a pgbench script tries to
  start a serializable transaction inside a pipeline, because by the time
  BEGIN ISOLATION LEVEL SERIALIZABLE is executed, we're already in a
  transaction that has acquired a snapshot, so the server rightfully
  complains.

  We can work around that by preparing all commands in the pipeline
  before actually starting the pipeline.

  This changes the existing code in two aspects: first, we now prepare
  each command individually at the point where that command is about to
  be executed; previously, we would prepare all commands in a script as
  soon as the first command of that script was executed. It's hard to
  see that this would make much of a difference (particularly since it
  only affects the first time each script is executed in a client), but I
  didn't actually try to measure it.

  Secondly, we no longer use PQsendPrepare() in pipeline mode, but only
  PQprepare(). There's no specific reason for this change other than no
  longer needing to do things differently in pipeline mode. (Previously
  we had no choice, because in pipeline mode PQprepare could not be
  used.)

  Backpatch to 14, where pgbench got support for pipeline mode.

  Reported-by: Yugo NAGATA <nagata@sraoss.co.jp>
  Discussion: https://postgr.es/m/20210716153013.fc53b1c780b06fccc07a7f0d@sraoss.co.jp
* Fix parsing of ISO-8601 interval fields with exponential notation. (Tom Lane, 2023-02-20)

  Historically we've accepted interval input like 'P.1e10D'. This is
  probably an accident of having used strtod() to do the parsing, rather
  than something anyone intended, but it's been that way for a long time.
  Commit e39f99046 broke this by trying to parse the integer and
  fractional parts separately, without accounting for the possibility of
  an exponent. In principle that coding allowed for precise conversions
  of field values wider than 15 decimal digits, but that does not seem
  like a goal worth sweating bullets for. So, rather than trying to
  manage an exponent on top of the existing complexity, let's just revert
  to the previous coding that used strtod() by itself.

  We can still improve on the old code to the extent of allowing the
  value to range up to 1.0e15 rather than only INT_MAX. (Allowing more
  than that risks creating problems due to precision loss: the converted
  fractional part might have absolute value more than 1. Perhaps that
  could be dealt with in some way, but it really does not seem worth
  additional effort.)

  Per bug #17795 from Alexander Lakhin. Back-patch to v15 where the
  faulty code came in.

  Discussion: https://postgr.es/m/17795-748d6db3ed95d313@postgresql.org
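  The historically-accepted form from the commit message, for reference;
  post-fix this parses again:

      SELECT interval 'P.1e10D';  -- 0.1e10 = 1000000000 days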
* Prevent join removal from removing the query's result relation. (Tom Lane, 2023-02-20)

  This was not something that required consideration before MERGE was
  invented; but MERGE builds a join tree that left-joins to the result
  relation, meaning that remove_useless_joins will consider removing it.
  That should generally be stopped by the query's use of output variables
  from the result relation. However, if the result relation is inherited
  (e.g. a partitioned table) then we don't add any row identity variables
  to the query until expand_inherited_rtentry, which happens after join
  removal. This was exposed as of commit 3c569049b, which made it
  possible to deduce that a partitioned table could contain at most one
  row matching a join key, enabling removal of the not-yet-expanded
  result relation. Ooops.

  To fix, let's just teach join_is_removable that the query result rel is
  never removable. It's a cheap enough test in any case, and it'll save
  some cycles that we'd otherwise expend in proving that it's not
  removable, even in the cases we got right.

  Back-patch to v15 where MERGE was added. Although I think the case
  cannot be reached in v15, this seems like cheap insurance.

  Per investigation of a report from Alexander Lakhin.

  Discussion: https://postgr.es/m/36bee393-b351-16ac-93b2-d46d83637e45@gmail.com
* Fix handling of multi-column BRIN indexes (Tomas Vondra, 2023-02-19)

  When evaluating clauses on multiple scan keys of a multi-column BRIN
  index, we can stop processing as soon as we find a scan key eliminating
  the range, and the range should not be added to the bitmap. That's how
  it worked before 14, but since a681e3c107a the code treated the range
  as matching if it matched at least the last scan key.

  Backpatch to 14, where this code was introduced.

  Backpatch-through: 14
  Discussion: https://postgr.es/m/ebc18613-125e-60df-7520-fcbe0f9274fc%40enterprisedb.com
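  A sketch of the affected query shape, assuming an illustrative table
  (not from the original report): with conditions on both indexed
  columns, a range eliminated by the scan key on a should be skipped
  rather than added to the bitmap just because the key on b matches.

      CREATE TABLE brin_test (a int, b int);
      INSERT INTO brin_test SELECT i, i FROM generate_series(1, 100000) i;
      CREATE INDEX brin_test_ab ON brin_test USING brin (a, b);

      SET enable_seqscan = off;
      SELECT count(*) FROM brin_test WHERE a < 100 AND b > 99900;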
* Print the correct aliases for DML target tables in ruleutils. (Tom Lane, 2023-02-17)

  ruleutils.c blindly printed the user-given alias (or nothing if there
  hadn't been one) for the target table of INSERT/UPDATE/DELETE queries.
  That works a large percentage of the time, but not always: for queries
  appearing in WITH, it's possible that we chose a different alias to
  avoid conflict with outer-scope names. Since the chosen alias would be
  used in any Var references to the target table, this'd lead to an
  inconsistent printout with consequences such as dump/restore failures.

  The correct logic for printing (or not) a relation alias was embedded
  in get_from_clause_item. Factor it out to a separate function so that
  we don't need a jointree node to use it. (Only a limited part of that
  function can be reached from these new call sites, but this seems like
  the cleanest non-duplicative factorization.)

  In passing, I got rid of a redundant "\d+ rules_src" step in rules.sql.

  Initial report from Jonathan Katz; thanks to Vignesh C for analysis.
  This has been broken for a long time, so back-patch to all supported
  branches.

  Discussion: https://postgr.es/m/e947fa21-24b2-f922-375a-d4f763ef3e4b@postgresql.org
  Discussion: https://postgr.es/m/CALDaNm1MMntjmT_NJGp-Z=xbF02qHGAyuSHfYHias3TqQbPF2w@mail.gmail.com
* Don't rely on uninitialized value in MERGE / DELETE (Alvaro Herrera, 2023-02-15)

  On MERGE / WHEN MATCHED DELETE it's not possible to get cross-partition
  updates, so we don't initialize cpUpdateRetrySlot; however, the code
  was not careful to ignore the value in that case. Make it do so.

  Backpatch to 15.

  Reported-by: Alexander Lakhin <exclusion@gmail.com>
  Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
  Discussion: https://postgr.es/m/17792-0f89452029662c36@postgresql.org
* Fix handling of SCRAM-SHA-256's channel binding with RSA-PSS certificates (Michael Paquier, 2023-02-15)

  OpenSSL 1.1.1 and newer versions have added support for RSA-PSS
  certificates, which require the use of a specific OpenSSL routine to
  determine which hash function to use for the channel binding data in
  SCRAM-SHA-256. X509_get_signature_nid(), the routine the channel
  binding code originally relied on, is not able to determine which hash
  algorithm to use for such certificates. However,
  X509_get_signature_info(), new to OpenSSL 1.1.1, is able to do it.

  This commit switches the channel binding logic to rely on
  X509_get_signature_info() over X509_get_signature_nid(), which would be
  the choice when building with 1.1.1 or newer. The error could have
  been triggered on the client or the server, hence libpq and the backend
  need to have their related code paths patched. Note that attempting to
  load an RSA-PSS certificate with OpenSSL 1.1.0 or older leads to a
  failure due to an unsupported algorithm.

  The discovery of relying on X509_get_signature_info() comes from Jacob,
  the tests have been written by Heikki (with a few tweaks from me),
  while I have bundled the whole together while adding the bits needed
  for MSVC and meson.

  This issue exists since channel binding exists, so backpatch all the
  way down. Some tests are added in 15~, triggered if compiling with
  OpenSSL 1.1.1 or newer, where the certificate and key files can easily
  be generated for RSA-PSS.

  Reported-by: Gunnar "Nick" Bluth
  Author: Jacob Champion, Heikki Linnakangas
  Discussion: https://postgr.es/m/17760-b6c61e752ec07060@postgresql.org
  Backpatch-through: 11
* Disable WindowAgg inverse transitions when subplans are present (David Rowley, 2023-02-13)

  When an aggregate function is used as a WindowFunc and a tuple
  transitions out of the window frame, we ordinarily try to make use of
  the aggregate function's inverse transition function to "unaggregate"
  the exiting tuple.

  This optimization is disabled for various cases, including when the
  aggregate contains a volatile function. In such a case we'd be unable
  to ensure that the transition value was calculated to the same value
  during transitions and inverse transitions. Unfortunately, we did this
  check by calling contain_volatile_functions() which does not
  recursively search SubPlans for volatile functions. If the aggregate
  function's arguments or its FILTER clause contained a subplan with
  volatile functions then we'd fail to notice this.

  Here we fix this by just disabling the optimization when the WindowFunc
  contains any subplans. Volatile functions are not the only reason that
  a subplan may have nonrepeatable results.

  Bug: #17777
  Reported-by: Anban Company
  Discussion: https://postgr.es/m/17777-860b739b6efde977%40postgresql.org
  Reviewed-by: Tom Lane
  Backpatch-through: 11
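  A hedged sketch of the affected query shape (purely illustrative): the
  FILTER clause holds a subplan that calls the volatile random(), which
  contain_volatile_functions() would not see, so the inverse-transition
  path must be disabled for the whole WindowFunc.

      SELECT i,
             sum(i) FILTER (WHERE i = (SELECT i WHERE random() < 2))
                 OVER (ORDER BY i ROWS BETWEEN 1 PRECEDING AND CURRENT ROW)
      FROM generate_series(1, 5) AS t(i);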
* Avoid dereferencing an undefined pointer in DecodeInterval(). (Tom Lane, 2023-02-12)

  Commit e39f99046 moved some code up closer to the start of
  DecodeInterval(), without noticing that it had been implicitly relying
  on previous checks to reject the case of empty input. Given empty
  input, we'd now dereference a pointer that hadn't been set, possibly
  leading to a core dump. (But if we fail to provoke a SIGSEGV, nothing
  bad happens, and the expected syntax error is thrown a bit later.)

  Per bug #17788 from Alexander Lakhin. Back-patch to v15 where the
  fault was introduced.

  Discussion: https://postgr.es/m/17788-dabac9f98f7eafd5@postgresql.org
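  The trigger is simply empty interval input, which must fail cleanly
  with a syntax error rather than crash:

      SELECT ''::interval;  -- ERROR: invalid input syntax for type interval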
* Un-revert "Disable STARTUP_PROGRESS_TIMEOUT in standby mode."Robert Haas2023-02-10
| | | | | | | | | This reverts commit 1eadfbdd7eb0679ba8d45787aa8b2f06e76de20a and thus reinstates commit 98e7234242a652497c99d4d0d6f2bf9a75d4e921. It's a better time to commit this now that the release is over. Discussion: http://postgr.es/m/3509384.1675878203@sss.pgh.pa.us
* Remove SQL regression tests for GUCs related to NO_SHOW_ALL (Michael Paquier, 2023-02-08)

  No GUCs that use NO_SHOW_ALL are reported in pg_show_all_settings(),
  hence trying to check combinations of flags related to it is pointless.
  These queries have been introduced by d10e41d, so backpatch down to 15
  to keep all the branches consistent.

  Equivalent checks based on NO_SHOW_ALL could be added in
  check_GUC_init() when a GUC is initially loaded, but this can be done
  only on HEAD.

  Author: Nitin Jadhav
  Discussion: https://postgr.es/m/CAMm1aWaYe0muu3ABo7iSAgK+OWDS9yNe8GGRYnCyeEpScYKa+g@mail.gmail.com
  Backpatch-through: 15
* Revert "Disable STARTUP_PROGRESS_TIMEOUT in standby mode."Robert Haas2023-02-06
| | | | | | | | | This reverts commit 98e7234242a652497c99d4d0d6f2bf9a75d4e921. I forgot that we're about to wrap a release, and this fix isn't critical enough to justify committing it right before we wrap a release. Discussion: http://postgr.es/m/2676424.1675700113@sss.pgh.pa.us
* Disable STARTUP_PROGRESS_TIMEOUT in standby mode. (Robert Haas, 2023-02-06)

  In standby mode, we don't actually report progress of recovery, but up
  until now, startup_progress_timeout_handler() nevertheless got called
  every log_startup_progress_interval seconds. That's an unnecessary
  expense, so avoid it.

  Report by Thomas Munro. Patch by Bharath Rupireddy, reviewed by Simon
  Riggs, Thomas Munro, and me. Back-patch to v15, where the problem was
  introduced.

  Discussion: https://www.postgresql.org/message-id/CA%2BhUKGKCHSffAj8zZJKJvNX7ygnQFxVD6wm1d-2j3fVw%2BMafPQ%40mail.gmail.com
* Translation updates (Peter Eisentraut, 2023-02-06)

  Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
  Source-Git-Hash: 3748d8972214a3d1e316cffc19824cd948e9e2d8
* Properly NULL-terminate GSS receive buffer on error packet reception (Michael Paquier, 2023-02-06)

  pqsecure_open_gss() includes a code path handling error messages with
  v2-style protocol messages coming from the server. The client-side
  buffer holding the error message does not force NULL-termination, with
  the data of the server getting copied to the errorMessage of the
  connection. Hence, it would be possible for a server to send an
  unterminated string and copy arbitrary bytes in the buffer receiving
  the error message in the client, opening the door to a crash or even
  data exposure.

  As at this stage of the authentication process the exchange has not
  been completed yet, this could be abused by an attacker without
  Kerberos credentials. Clients that have a valid Kerberos cache are
  vulnerable as libpq opportunistically requests it except if gssencmode
  is disabled.

  Author: Jacob Champion
  Backpatch-through: 12
  Security: CVE-2022-41862
* Make int64_div_fast_to_numeric() more robust. (Dean Rasheed, 2023-02-03)

  The prior coding of int64_div_fast_to_numeric() had a number of bugs
  that would cause it to fail under different circumstances, such as with
  log10val2 <= 0, or log10val2 a multiple of 4, or in the "slow" numeric
  path with log10val2 >= 10.

  None of those could be triggered by any of our current code, which only
  uses log10val2 = 3 or 6. However, they made it a hazard for any future
  code that might use it. Also, since this is exported by numeric.c,
  users writing their own C code might choose to use it. Therefore fix,
  and back-patch to v14, where it was introduced.

  Dean Rasheed, reviewed by Tom Lane.

  Discussion: https://postgr.es/m/CAEZATCW8gXgW0tgPxPgHDPhVX71%2BSWFRkhnXy%2BTfGDsKLepu2g%40mail.gmail.com
* Update time zone data files to tzdata release 2022g. (Tom Lane, 2023-01-31)

  DST law changes in Greenland and Mexico. Notably, a new timezone
  America/Ciudad_Juarez has been split off from America/Ojinaga.

  Historical corrections for northern Canada, Colombia, and Singapore.
* Remove recovery test 011_crash_recovery.pl (Michael Paquier, 2023-01-31)

  This test has been added as of 857ee8e, which introduced the SQL
  function txid_status(), with the purpose of checking that a transaction
  ID still in-progress during a crash is correctly marked as aborted
  after recovery finishes.

  This test is unstable, and some configuration scenarios make the
  failure easier to reproduce (wal_level=minimal, wal_compression=on)
  because the WAL holding the information about the in-progress
  transaction ID may not have made it to disk yet, hence a post-crash
  recovery may cause the same XID to be reused, triggering a test
  failure.

  We have discussed a few approaches, like making this function force a
  WAL flush to make it reliable across crashes, but we don't want to pay
  a performance penalty in some scenarios, as well. The test could have
  been tweaked to enforce a checkpoint but that actually breaks the
  promise of the test to rely on a stable result of txid_status() after
  a crash.

  This issue has been reported a few times across the past years, with an
  original report from Kyotaro Horiguchi. The buildfarm machines tanager,
  hachi and gokiburi enable wal_compression, and fail on this test
  periodically.

  Discussion: https://postgr.es/m/3163112.1674762209@sss.pgh.pa.us
  Discussion: https://postgr.es/m/20210305.115011.558061052471425531.horikyota.ntt@gmail.com
  Backpatch-through: 11
* Ensure that MERGE recomputes GENERATED expressions properly. (Dean Rasheed, 2023-01-30)

  This fixes a bug that, under some circumstances, would cause MERGE to
  fail to properly recompute expressions for GENERATED STORED columns.

  Formerly, ExecInitModifyTable() did not call ExecInitStoredGenerated()
  for a MERGE command, which meant that the generated expressions
  information was not computed until later, when the first merge action
  was executed. However, if the first merge action to execute was an
  UPDATE, then ExecInitStoredGenerated() could decide to skip some
  generated columns, if the columns on which they depended were not
  updated, which was a problem if the MERGE also contained an INSERT
  action, for which no generated columns should be skipped.

  So fix by having ExecInitModifyTable() call ExecInitStoredGenerated()
  for MERGE, and assume that it isn't safe to skip any generated columns
  in a MERGE. Possibly that could be relaxed, by allowing some generated
  columns to be skipped for a MERGE without an INSERT action, but it's
  not clear that it's worth the effort.

  Noticed while investigating bug #17759. Back-patch to v15, where MERGE
  was added.

  Dean Rasheed, reviewed by Tom Lane.

  Discussion: https://postgr.es/m/17759-e76d9bece1b5421c%40postgresql.org
  Discussion: https://postgr.es/m/CAEZATCXb_ezoMCcL0tzKwRGA1x0oeE%3DawTaysRfTPq%2B3wNJn8g%40mail.gmail.com
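  A sketch of the failure shape described above, with an illustrative
  schema: the first action to run is the UPDATE, which does not touch a
  (the only column c depends on), so pre-fix the skip decision made for
  the UPDATE could leave c uncomputed for rows added by the INSERT.

      CREATE TABLE tgt (
          a int PRIMARY KEY,
          b int,
          c int GENERATED ALWAYS AS (a * 2) STORED
      );
      INSERT INTO tgt (a, b) VALUES (1, 10);

      MERGE INTO tgt USING (VALUES (1, 20), (2, 30)) AS s(a, b)
          ON tgt.a = s.a
      WHEN MATCHED THEN UPDATE SET b = s.b
      WHEN NOT MATCHED THEN INSERT (a, b) VALUES (s.a, s.b);

      SELECT * FROM tgt ORDER BY a;  -- c should read 2 and 4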
* Fix rare sharedtuplestore.c corruption. (Thomas Munro, 2023-01-26)

  If the final chunk of an oversized tuple being written out to disk was
  exactly 32760 bytes, it would be corrupted due to a fencepost bug.

  Bug #17619. Back-patch to 11 where the code arrived.

  While testing that (see test module in archives), I (tmunro) noticed
  that the per-participant page counter was not initialized to zero as it
  should have been; that wasn't a live bug when it was written since DSM
  memory was originally always zeroed, but since 14
  min_dynamic_shared_memory might be configured and it supplies
  non-zeroed memory, so that is also fixed here.

  Author: Dmitry Astapov <dastapov@gmail.com>
  Discussion: https://postgr.es/m/17619-0de62ceda812b8b5%40postgresql.org
* Fix the Drop Database hang. (Amit Kapila, 2023-01-24)

  The drop database command waits for the logical replication sync worker
  to accept ProcSignalBarrier and the worker's slot creation waits for
  the drop database to finish, which leads to a deadlock. This happens
  because the tablesync worker holds interrupts while creating a slot.

  We prevent cancel/die interrupts while creating a slot in the table
  sync worker because it is possible that before the server finishes this
  command, a concurrent drop subscription happens which would complete
  without removing this slot and that leads to the slot existing until
  the end of walsender. However, the slot will eventually get dropped at
  the walsender exit time, so there is no danger of a dangling slot.

  This patch reallows cancel/die interrupts while creating a slot and
  modifies the test to wait for slots to become zero to prevent finding
  an ephemeral slot.

  The reported hang doesn't happen in PG14 as the drop database starts to
  wait for ProcSignalBarrier with PG15 (commits 4eb2176318 and
  e2f65f4255) but it is good to backpatch this till PG14 as it is not a
  good idea to prevent interrupts during a network call that could block
  indefinitely.

  Reported-by: Lakshmi Narayanan Sreethar
  Diagnosed-by: Andres Freund
  Author: Hou Zhijie
  Reviewed-by: Vignesh C, Amit Kapila
  Backpatch-through: 14, where it was introduced in commit 6b67d72b60
  Discussion: https://postgr.es/m/CA+kvmZELXQ4ZD3U=XCXuG3KvFgkuPoN1QrEj8c-rMRodrLOnsg@mail.gmail.com
* Fix error handling in libpqrcv_connect() (Andres Freund, 2023-01-23)

  When libpqrcv_connect (also known as walrcv_connect()) failed, it
  leaked the libpq connection. In most paths that's fairly harmless, as
  the calling process will exit soon after. But e.g. CREATE SUBSCRIPTION
  could lead to a somewhat longer lived leak.

  Fix by releasing resources, including the libpq connection, on error.

  Add a test exercising the error code path. To make it reliable and
  safe, the test tries to connect to port=-1, which happens to fail
  during connection establishment, rather than during connection string
  parsing.

  Reviewed-by: Noah Misch <noah@leadboat.com>
  Discussion: https://postgr.es/m/20230121011237.q52apbvlarfv6jm6@awork3.anarazel.de
  Backpatch: 11-
* Use OFFSET 0 instead of ORDER BY to stop subquery pullup (David Rowley, 2023-01-24)

  b762fed64 recently changed this test to prevent subquery pullup to
  allow us to test Memoize with lateral_vars.

  As pointed out by Tom Lane, OFFSET 0 is our standard way of preventing
  subquery pullups, so do it that way instead.

  Discussion: https://postgr.es/m/2144818.1674517061@sss.pgh.pa.us
  Backpatch-through: 14, same as b762fed64
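  An illustrative use of the idiom (tables are hypothetical, not the ones
  in memoize.sql): OFFSET 0 keeps the lateral subquery from being
  flattened into the outer query, so its lateral reference to t1.a
  survives into planning.

      CREATE TABLE t1 (a int);
      CREATE TABLE t2 (x int);

      EXPLAIN (COSTS OFF)
      SELECT * FROM t1,
          LATERAL (SELECT t1.a + t2.x AS y FROM t2 OFFSET 0) s;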
* Fix LATERAL join test in test memoize.sql (David Rowley, 2023-01-24)

  The test in question was meant to be testing Memoize to ensure it
  worked correctly when the inner side of the join contained lateral
  vars, however, nothing in the lateral subquery stopped it from being
  pulled up into the main query, so the planner did that, and that meant
  no more lateral vars.

  Here we add a simple ORDER BY to stop the planner from being able to
  pull up the lateral subquery.

  Author: Richard Guo
  Discussion: https://postgr.es/m/CAMbWs4_LHJaN4L-tXpKMiPFnsCJWU1P8Xh59o0W7AA6UN99=cQ@mail.gmail.com
  Backpatch-through: 14, where Memoize was added.
* Fix and clarify function comment on LogicalTapeSetCreate. (Heikki Linnakangas, 2023-01-23)

  Commit c4649cce39 removed the "shared" and "ntapes" arguments, but the
  comment still talked about "shared". It also talked about "a shared
  file handle", which was technically correct because even before commit
  c4649cce39, the "shared file handle" referred to the "fileset"
  argument, not "shared". But it was very confusing. Improve the
  comment.

  Also add a comment on what the "preallocate" argument does.

  Backpatch to v15, just to make backpatching other patches easier in the
  future.

  Discussion: https://www.postgresql.org/message-id/af989685-91d5-aad4-8f60-1d066b5ec309@enterprisedb.com
  Reviewed-by: Peter Eisentraut
* Allow REPLICA IDENTITY to be set on an index that's not (yet) valid. (Tom Lane, 2023-01-21)

  The motivation for this change is that when pg_dump dumps a partitioned
  index that's marked REPLICA IDENTITY, it generates a command sequence
  that applies REPLICA IDENTITY before the partitioned index has been
  marked valid, causing restore to fail. We could perhaps change pg_dump
  to not do it like that, but that would be difficult and would not fix
  existing dump files with the problem. There seems to be very little
  reason for the backend to disallow this anyway --- the code ignores
  indisreplident when the index isn't valid --- so instead let's fix it
  by allowing the case.

  Commit 9511fb37a previously expressed a concern that allowing
  indisreplident to be set on invalid indexes might allow us to wind up
  in a situation where a table could have indisreplident set on multiple
  indexes. I'm not sure I follow that concern exactly, but in any case
  the only way that could happen is because
  relation_mark_replica_identity is too trusting about the existing set
  of markings being valid. Let's just rip out its early-exit code path
  (which sure looks like premature optimization anyway; what are we doing
  expending code to make redundant ALTER TABLE ... REPLICA IDENTITY
  commands marginally faster and not-redundant ones marginally slower?)
  and fix it to positively guarantee that no more than one index is
  marked indisreplident.

  The pg_dump failure can be demonstrated in all supported branches, so
  back-patch all the way. I chose to back-patch 9511fb37a as well, just
  to keep indisreplident handling the same in all branches.

  Per bug #17756 from Sergey Belyashov.

  Discussion: https://postgr.es/m/17756-dd50e8e0c8dd4a40@postgresql.org
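  A hedged sketch of the now-allowed sequence, mirroring the order pg_dump
  emits (names illustrative): an index created ON ONLY a partitioned
  table stays invalid until matching partition indexes are attached, yet
  REPLICA IDENTITY may now reference it.

      CREATE TABLE pt (id int NOT NULL) PARTITION BY RANGE (id);
      CREATE TABLE pt1 PARTITION OF pt FOR VALUES FROM (0) TO (10);

      CREATE UNIQUE INDEX pt_idx ON ONLY pt (id);  -- invalid until attached
      ALTER TABLE pt REPLICA IDENTITY USING INDEX pt_idx;  -- accepted post-fix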
* Reject CancelRequestPacket having unexpected length. (Noah Misch, 2023-01-21)

  When the length was too short, the server read outside the allocation.
  That yielded the same log noise as sending the correct length with
  (backendPID,cancelAuthCode) matching nothing. Change to a message
  about the unexpected length.

  Given the attacker's lack of control over the memory layout and the
  general lack of diversity in memory layouts at the code in question, we
  doubt a would-be attacker could cause a segfault. Hence, while the
  report arrived via security@postgresql.org, this is not a
  vulnerability. Back-patch to v11 (all supported versions).

  Andrey Borodin, reviewed by Tom Lane. Reported by Andrey Borodin.
* Make our back branches build under -fkeep-inline-functions. (Tom Lane, 2023-01-20)

  Add "#ifndef FRONTEND" where necessary to make pg_waldump build on
  compilers that don't elide unused static-inline functions.

  This back-patches relevant parts of commit 3e9ca5260, fixing build
  breakage from dc7420c2c and back-patching of f10f0ae42.

  Per recently-resurrected buildfarm member castoroides. We aren't
  expecting castoroides to build anything newer than v11, but we might as
  well clean up the intermediate branches while at it.
* Avoid harmless warning from pg_dump --if-exists mode. (Tom Lane, 2023-01-19)

  If the public schema has a non-default owner (perhaps due to dropping
  and recreating it) then use of pg_dump's "--if-exists" option results
  in a warning message:

      warning: could not find where to insert IF EXISTS in statement "-- *not* dropping schema, since initdb creates it"

  This is harmless since the dump output is the same either way, but
  nonetheless it's undesirable. It's the fault of commit a7a7be1f2,
  which created situations where a TOC entry's "defn" or "dropStmt"
  fields could be just comments. Although that commit fixed up the
  kluges in pg_backup_archiver.c that munge defn strings, it missed doing
  so for the one that munges dropStmts.

  Per bug #17753 from Justin Zhang.

  Discussion: https://postgr.es/m/17753-9c8773631747ee1c@postgresql.org
* Log the correct ending timestamp in recovery_target_xid mode. (Tom Lane, 2023-01-19)

  When ending recovery based on recovery_target_xid matching with
  recovery_target_inclusive = off, we printed an incorrect timestamp
  (always 2000-01-01) in the "recovery stopping before ... transaction"
  log message.

  This is a consequence of sloppy refactoring in c945af80c: the code to
  fetch recordXtime out of the commit/abort record used to be executed
  unconditionally, but it was changed to get called only in the
  RECOVERY_TARGET_TIME case. We need only flip the order of operations
  to restore the intended behavior.

  Per report from Torsten Förtsch. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/CAKkG4_kUevPqbmyOfLajx7opAQk6Cvwkvx0HRcFjSPfRPTXanA@mail.gmail.com
* Add missing assign hook for GUC checkpoint_completion_target (Michael Paquier, 2023-01-19)

  This is wrong since 88e9823, which switched the WAL sizing
  configuration from checkpoint_segments to min_wal_size and
  max_wal_size. That change missed the recalculation of the internal
  value CheckPointSegments, which works as a mapping of the old GUC
  checkpoint_segments and controls the timing of checkpoints depending on
  the volume of WAL generated; the recalculation needs to happen on
  reload, for example.

  Most users tend to leave checkpoint_completion_target at 0.9 to smooth
  the I/O workload, which is why I guess this has gone unnoticed for so
  long, still it can be useful to tweak and reload the value dynamically
  in some cases to control the timing of checkpoints.

  Author: Bharath Rupireddy
  Discussion: https://postgr.es/m/CALj2ACXgPPAm28mruojSBno+F_=9cTOOxHAywu_dfZPeBdybQw@mail.gmail.com
  Backpatch-through: 11
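  With the assign hook in place, a routine reload sequence like the
  following (a tuning sketch, nothing more) also updates the derived
  CheckPointSegments value:

      ALTER SYSTEM SET checkpoint_completion_target = 0.5;
      SELECT pg_reload_conf();
      SHOW checkpoint_completion_target;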
* Fix failure with perlcritic in psql's create_help.pl (Michael Paquier, 2023-01-19)

  No buildfarm members have reported that yet, but a recently-refreshed
  Debian host did.

  Reviewed-by: Andrew Dunstan
  Discussion: https://postgr.es/m/Y8ey5z4Nav62g4/K@paquier.xyz
  Backpatch-through: 11
* AdjustUpgrade.pm should zap test_ext_cine, too. (Tom Lane, 2023-01-17)

  test_extensions' test_ext_cine extension has the same upgrade hazard as
  test_ext7: the regression test leaves it in an updated state from which
  no downgrade path to default is provided. This causes the
  update_extensions.sql script helpfully provided by pg_upgrade to fail.
  So drop it in cross-version-upgrade testing.

  Not entirely sure how come I didn't hit this in testing yesterday;
  possibly I'd built the upgrade reference databases with
  testmodules-install-check disabled.

  Backpatch to v10 where this module was introduced.
* Create common infrastructure for cross-version upgrade testing. (Tom Lane, 2023-01-16)

  To test pg_upgrade across major PG versions, we have to be able to
  modify or drop any old objects with no-longer-supported properties, and
  we have to be able to deal with cosmetic changes in pg_dump output.
  Up to now, the buildfarm and pg_upgrade's own test infrastructure had
  separate implementations of the former, and we had nothing but very
  ad-hoc rules for the latter (including an arbitrary threshold on how
  many lines of unchecked diff were okay!). This patch creates a Perl
  module that can be shared by both those use-cases, and adds logic that
  deals with pg_dump output diffs in a much more tightly defined fashion.

  This largely supersedes previous efforts in commits 0df9641d3,
  9814ff550, and 62be9e4cd, which developed a SQL-script-based solution
  for the task of dropping old objects. There was nothing fundamentally
  wrong with that work in itself, but it had no basis for solving the
  output-formatting problem. The most plausible way to deal with
  formatting is to build a Perl module that can perform editing on the
  dump files; and once we commit to that, it makes more sense for the
  same module to also embed the knowledge of what has to be done for
  dropping old objects.

  Back-patch versions of the helper module as far as 9.2, to support
  buildfarm animals that still test that far back. It's also necessary
  to back-patch PostgreSQL/Version.pm, because the new code depends on
  that. I fixed up pg_upgrade's 002_pg_upgrade.pl in v15, but did not
  look into back-patching it further than that.

  Tom Lane and Andrew Dunstan

  Discussion: https://postgr.es/m/891521.1673657296@sss.pgh.pa.us
* Fix some BufFileRead() error reporting (Peter Eisentraut, 2023-01-16)

  Remove "%m" from error messages where errno would be bogus. Add short
  read byte counts where appropriate.

  This is equivalent to what was done in
  7897e3bb902c557412645b82120f4d95f7474906, but some code was apparently
  developed concurrently to that and not updated accordingly.

  Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/f3501945-c591-8cc3-5ef0-b72a2e0eaa9c@enterprisedb.com
* Remove arbitrary FUNC_MAX_ARGS limit in int2vectorin and oidvectorin. (Tom Lane, 2023-01-15)

  int2vectorin limited the number of array elements it'd take to
  FUNC_MAX_ARGS, which is probably fine for the traditional use-cases.
  But now that pg_publication_rel.prattrs is an int2vector, it's not fine
  at all: it's easy to construct cases where that can have up to about
  MaxTupleAttributeNumber entries. Trying to replicate such tables leads
  to logical-replication failures.

  As long as we have to touch this code anyway, let's just remove the
  a-priori limit altogether, and let it accept any size that'll be
  allowed by repalloc. (Note that since int2vector isn't toastable, we
  cannot store arrays longer than about BLCKSZ/2; but there is no good
  excuse for letting int2vectorin depend on that. Perhaps we will lift
  the no-toast restriction someday.)

  While at it, also improve the equivalent logic in oidvectorin. I don't
  know of any practical use-case for long oidvectors right now, but doing
  it right actually makes the code shorter.

  Per report from Erik Rijkers. Back-patch to v15 where
  pg_publication_rel.prattrs was added.

  Discussion: https://postgr.es/m/668ba539-33c5-8190-ca11-def2913cb94b@xs4all.nl
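  A quick way to exercise the lifted limit (an illustrative one-liner,
  not the original reproducer): build an int2vector literal with far more
  than FUNC_MAX_ARGS elements and cast it through its input function.

      SELECT (SELECT string_agg(i::text, ' ')
              FROM generate_series(1, 1000) AS i)::int2vector;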
* Make new GENERATED-expressions code more bulletproof. (Tom Lane, 2023-01-15)

  In commit 8bf6ec3ba I assumed that no code path could reach
  ExecGetExtraUpdatedCols without having gone through
  ExecInitStoredGenerated. That turns out not to be the case in logical
  replication: if there's an ON UPDATE trigger on the target table,
  trigger.c will call this code before anybody has set up its generated
  columns. Having seen that, I don't have a lot of faith in there not
  being other such paths.

  ExecGetExtraUpdatedCols can call ExecInitStoredGenerated for itself, as
  long as we are willing to assume that it is only called in CMD_UPDATE
  operations, which on the whole seems like a safer leap of faith.

  Per report from Vitaly Davydov.

  Discussion: https://postgr.es/m/d259d69652b8c2ff50e14cda3c236c7f@postgrespro.ru
* Fix WaitEventSetWait() buffer overrun. (Thomas Munro, 2023-01-13)

  The WAIT_USE_EPOLL and WAIT_USE_KQUEUE implementations of
  WaitEventSetWaitBlock() confused the size of their internal buffer with
  the size of the caller's output buffer, and could ask the kernel for
  too many events. In fact the set of events retrieved from the kernel
  needs to be able to fit in both buffers, so take the smaller of the
  two. The WAIT_USE_POLL and WAIT_USE_WIN32 implementations didn't have
  this confusion.

  This probably didn't come up before because we always used the same
  number in both places, but commit 7389aad6 calculates a dynamic size at
  construction time, while using MAXLISTEN for its output event buffer on
  the stack. That seems like a reasonable thing to want to do, so
  consider this to be a pre-existing bug worth fixing.

  As discovered by valgrind on skink.

  Back-patch to all supported releases for epoll, and to release 13 for
  the kqueue part, which copied the incorrect epoll code.

  Reviewed-by: Andres Freund <andres@anarazel.de>
  Discussion: https://postgr.es/m/901504.1673504836%40sss.pgh.pa.us
* Fix jsonpath existence checking of missing variables (Alexander Korotkov, 2023-01-12)

  The current jsonpath code assumes that the referenced variable always
  exists: it could only throw an error at value evaluation time. At the
  same time, existence checking assumes the variable is present without
  evaluating it, and error suppression doesn't work for missing
  variables.

  This commit makes existence checking trigger an error for missing
  variables. This makes the overall behavior consistent.

  Backpatch to 12 where jsonpath was introduced.

  Reported-by: David G. Johnston
  Discussion: https://postgr.es/m/CAKFQuwbeytffJkVnEqDyLZ%3DrQsznoTh1OgDoOF3VmOMkxcTMjA%40mail.gmail.com
  Author: Alexander Korotkov, David G. Johnston
  Backpatch-through: 12
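  A sketch of the now-consistent behavior ($x is deliberately absent from
  the vars argument; the exact error text may vary by version):

      SELECT jsonb_path_exists('{"a": 1}', '$.a ? (@ == $x)', '{}');
      -- post-fix: raises an error for the undefined variable "x"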
* Acquire spinlock when updating 2PC slot data during logical decoding creation (Michael Paquier, 2023-01-12)

  The creation of a logical decoding context in CreateDecodingContext()
  updates some data of its slot for two-phase transactions if enabled by
  the caller, but the code forgot to acquire a spinlock when updating
  these fields, as any other code path does. This could lead to the read
  of inconsistent data.

  Oversight in a8fd13c.

  Author: Sawada Masahiko
  Discussion: https://postgr.es/m/CAD21AoAD8_fp47191LKuecjDd3DYhoQ4TaucFco1_TEr_jQ-Zw@mail.gmail.com
  Backpatch-through: 15
* Fix MERGE's test for unreachable WHEN clauses. (Dean Rasheed, 2023-01-10)

  The former code would only detect an unreachable WHEN clause if it had
  an AND condition. Fix, so that unreachable unconditional WHEN clauses
  are also detected.

  Back-patch to v15, where MERGE was added.

  Discussion: https://postgr.es/m/CAEZATCVQ=7E2z4cSBB49jjeGGsB6WeoYQY32NDeSvcHiLUZ=ow@mail.gmail.com
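  A minimal sketch of a newly-detected case (schema illustrative): the
  first WHEN MATCHED clause has no AND condition, so the second can never
  be reached, and the command is now rejected.

      CREATE TABLE tgt (a int);

      MERGE INTO tgt USING (VALUES (1)) AS s(a) ON tgt.a = s.a
      WHEN MATCHED THEN UPDATE SET a = s.a
      WHEN MATCHED THEN DELETE;   -- ERROR: unreachable WHEN clause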