* Avoid Assert failure when processing empty statement in aborted xact. (Tom Lane, 2023-06-21)

  exec_parse_message() wants to create a cached plan in all cases, including for empty input. The empty-input path does not have a test for being in an aborted transaction, making it possible that plancache.c will fail due to trying to do database lookups even though there's no real work to do.

  One solution would be to throw an aborted-transaction error in this path too, but it's not entirely clear whether the lack of such an error was intentional or whether some clients might be relying on non-error behavior. Instead, let's hack plancache.c so that it treats empty statements with the same logic it already had for transaction control commands, ensuring that it can soldier through even in an already-aborted transaction.

  Per bug #17983 from Alexander Lakhin. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/17983-da4569fcb878672e@postgresql.org
* Disable use of archiving in 009_twophase.pl (Michael Paquier, 2023-06-21)

  This partially reverts 68cb5af: relying on archiving to force the last partial segment of the old timeline to be renamed with a .partial suffix at promotion interferes with the tests when they perform switchovers. As shown by the logs gathered by the CI in the tests that failed, a new standby may fail to find the WAL segment it needs to follow a promoted instance across its timeline jump, as the segment got renamed to .partial. This problem would manifest as a run timeout in 009_twophase.pl, as the new standby repeatedly requests a segment from the promoted primary that it cannot find.

  Reported-by: Nathan Bossart
  Discussion: https://postgr.es/m/20230621043345.GA787473@nathanxps13
  Backpatch-through: 13
* Fix the errhint message and docs for drop subscription failure. (Amit Kapila, 2023-06-21)

  The existing errhint message and docs were missing the fact that we can't disassociate from the slot unless the subscription is disabled.

  Author: Robert Sjöblom, Peter Smith
  Reviewed-by: Peter Eisentraut, Amit Kapila
  Backpatch-through: 11
  Discussion: https://postgr.es/m/807bdf85-61ea-88e2-5712-6d9fcd4eabff@fortnox.se
* Fix hash join when inner hashkey expressions contain Params. (Tom Lane, 2023-06-20)

  If the inner-side expressions contain PARAM_EXEC Params, we must re-hash whenever the values of those Params change. The executor mechanism for that exists already, but we failed to invoke it because finalize_plan() neglected to search the Hash.hashkeys field for Params. This allowed a previous scan's hash table to be re-used when it should not be, leading to rows missing from the join's output. (I believe incorrectly-included join rows are impossible however, since checking the real hashclauses would reject false matches.)

  This bug is very ancient, dating probably to d24d75ff1 of 7.4. Sadly, this simple fix depends on the plan representational changes made by 2abd7ae9b, so it will only work back to v12. I thought about trying to make some kind of hack for v11, but I'm leery of putting code significantly different from what is used in the newer branches into a nearly-EOL branch. Seeing that the bug escaped detection for a full twenty years, problematic cases must be rare; so I don't feel too awful about leaving v11 as-is.

  Per bug #17985 from Zuming Jiang. Back-patch to v12.

  Discussion: https://postgr.es/m/17985-748b66607acd432e@postgresql.org
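  A minimal SQL sketch of the kind of query shape involved (table and column names are hypothetical): the join runs inside a correlated subplan, so one of its hash keys depends on a value supplied by the outer query, i.e. a PARAM_EXEC Param, and the hash table must be rebuilt whenever that value changes.

    -- Hypothetical illustration only.
    CREATE TABLE outer_t (id int, delta int);
    CREATE TABLE a (x int);
    CREATE TABLE b (x int);

    -- The EXISTS stays a correlated SubPlan because it references o.delta.
    -- If the join of a and b inside it is planned as a hash join whose inner
    -- hash key is (b.x + o.delta), that key contains a PARAM_EXEC Param, so
    -- the hash table must not be reused across outer rows with different delta.
    SELECT *
    FROM outer_t o
    WHERE EXISTS (
        SELECT 1
        FROM a
        JOIN b ON a.x = b.x + o.delta
    );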
* Enable archiving in recovery TAP test 009_twophase.pl (Michael Paquier, 2023-06-20)

  This is a follow-up of f663b00, which has been committed to v13 and v14, tweaking the TAP test for two-phase transactions so that it provides coverage for the bug that has been fixed. This change is done in its own commit for clarity, as v15 and HEAD did not show the problematic behavior but still missed coverage for it.

  While at it, this adds a comment about the dependency between the last partial segment rename and RecoverPreparedTransactions() at the end of recovery, as that can be easy to miss.

  Author: Michael Paquier
  Reviewed-by: Kyotaro Horiguchi
  Discussion: https://postgr.es/m/743b9b45a2d4013bd90b6a5cba8d6faeb717ee34.camel@cybertec.at
  Backpatch-through: 13
* Fix failure at promotion with 2PC transactions and archiving enabled (Michael Paquier, 2023-06-20)

  When archiving is enabled, a promotion request would fail with the following error when some 2PC transaction needs to be recovered from WAL, preventing the promotion from completing:
    FATAL: requested WAL segment pg_wal/000000010000000000000001 has already been removed

  The origin of the problem is that the last partial segment of the old timeline is renamed before recovering the 2PC data via RecoverPreparedTransactions() at the end of recovery, causing the FATAL because the wanted segment now carries a .partial suffix.

  This commit reorders the end-of-recovery actions a bit so that the execution of recovery_end_command, the cleanup of the old segments of the old timeline (RemoveNonParentXlogFiles) and the last partial segment rename are done after the 2PC transaction data is recovered with RecoverPreparedTransactions(). This makes the order of these end-of-recovery actions more consistent with ~15, with the exception of the end-of-recovery checkpoint, which still needs to happen before all the actions reordered here in v13 and v14, contrary to what 15~ does.

  v15 and newer versions have "fixed" this problem somewhat accidentally with 811051c, where the end-of-recovery actions got reordered. In this case, the recovery of 2PC transactions happens before the renaming of the last partial segment of the old timeline. v13 and v14 are the versions that can easily see this problem as per the refactoring of 38a95731 where XLogReaderState is reset in XLogBeginRead() before reading the 2PC transaction data. v11 and v12 could also see this problem, but may finish by reading the 2PC data from some of the WAL buffers instead. Perhaps something could be done for these two branches, but I am not really excited about doing anything there given the lack of complaints and the fact that v11 is soon going to be EOL'd (there is always a risk of breaking something).

  Note that the TAP test 009_twophase.pl is able to exhibit the issue if it enables archiving on the primary node, which does not impact the test coverage as restore_command would remain unused. This is something that should be changed on v15 and HEAD as well, so this will be changed in a separate commit for clarity.

  Author: Julian Markwort
  Reviewed-by: Kyotaro Horiguchi, Michael Paquier
  Discussion: https://postgr.es/m/743b9b45a2d4013bd90b6a5cba8d6faeb717ee34.camel@cybertec.at
  Backpatch-through: 13
* Don't use partial unique indexes for unique proofs in the planner (David Rowley, 2023-06-19)

  Here we adjust relation_has_unique_index_for() so that it no longer makes use of partial unique indexes as uniqueness proofs. It is incorrect to use these because check_index_predicates(), when setting predOK, uses not only baserestrictinfo quals as proofs but also quals from join conditions. For relation_has_unique_index_for()'s case, we need to know the relation is unique for a given set of columns before any joins are evaluated, so if predOK was only set to true due to some join qual, then it's unsafe to use such indexes in relation_has_unique_index_for(). The final plan may not even make use of that index, which could result in reading tuples that are not as unique as the planner previously expected them to be.

  Bug: #17975
  Reported-by: Tor Erik Linnerud
  Backpatch-through: 11, all supported versions
  Discussion: https://postgr.es/m/17975-98a90c156f25c952%40postgresql.org
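  A rough sketch of the distinction, with hypothetical tables; which implications the planner actually proves can vary, so this is illustrative of the shape only: the partial index's predicate may only follow from a join clause, so it must not prove uniqueness of the base relation before the join.

    -- Hypothetical illustration only.
    CREATE TABLE t1 (a int, b int);
    CREATE TABLE t2 (a int, b int);
    CREATE UNIQUE INDEX t1_a_partial_uniq ON t1 (a) WHERE b IS NOT NULL;

    -- The strict join clause t1.b = t2.b can imply t1.b IS NOT NULL, so predOK
    -- may be set for this index, but only because of a join qual; before the
    -- join is evaluated nothing restricts t1 to rows with b IS NOT NULL, so
    -- the index must not be used to prove t1 unique on (a).
    SELECT *
    FROM t1
    JOIN t2 ON t1.a = t2.a AND t1.b = t2.b;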
* Fix typo in comment. (Amit Langote, 2023-06-16)

  Back-patch down to 11.

  Author: Sho Kato (<kato-sho@fujitsu.com>)
  Discussion: https://postgr.es/m/TYCPR01MB68499042A33BC32241193AAF9F5BA%40TYCPR01MB6849.jpnprd01.prod.outlook.com
* intarray: Prevent out-of-bound memory reads with gist__int_ops (Michael Paquier, 2023-06-15)

  As gist__int_ops stands in intarray, it is possible to store GiST entries for leaf pages that can cause corruption when decompressed. Leaf nodes are always stored decompressed by the compression method, and the decompression method should match that, retrieving the contents of the page without doing any decompression. However, the code authorized the insertion of leaf page data with a higher number of array items than can be supported, generating a NOTICE message to inform about this matter (199 items for an 8kB page, for reference). When calling the decompression method, a decompression would be attempted on this leaf node item even though the contents should be retrieved as they are.

  The NOTICE message generated when dealing with the compression of a leaf page and too many elements in the input array for gist__int_ops was introduced by 08ee64e, removing the marker stored in the array to track whether this is actually a leaf node. However, it also missed the fact that the decompression path should do nothing for a leaf page. Hence, as the code stands, a too-large array would be stored uncompressed but the decompression path would attempt a decompression rather than retrieving the contents as they are.

  This leads to various problems. First, even if 08ee64e tried to address that, it is possible to do out-of-bounds chunk writes with a large input array, with the backend informing about that with WARNINGs. On decompression, retrieving the stored leaf data would lead to incorrect memory reads, leading to crashes or even worse.

  Perhaps somebody would be interested in expanding the number of array items that can be handled in a leaf page for this operator in the future, which would require revisiting the choice made in 08ee64e, but based on the lack of reports about this problem since 2005 that does not seem necessary. For now, this commit prevents the insertion of data for leaf pages when using more array items than the code can handle on decompression, switching the NOTICE message to an ERROR. If one wishes to use more array items, gist__intbig_ops is an alternative choice. While at it, use ERRCODE_PROGRAM_LIMIT_EXCEEDED as error code when a limit is reached, because that's what the module is facing in such cases.

  Author: Ankit Kumar Pandey, Alexander Lakhin
  Reviewed-by: Richard Guo, Michael Paquier
  Discussion: https://postgr.es/m/796b65c3-57b7-bddf-b0d5-a8afafb8b627@gmail.com
  Discussion: https://postgr.es/m/17888-f72930e6b5ce8c14@postgresql.org
  Backpatch-through: 11
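  A small SQL sketch of the user-visible change (table and index names are hypothetical): an oversized array that previously drew only a NOTICE is now rejected with an ERROR.

    -- Hypothetical illustration only.
    CREATE EXTENSION IF NOT EXISTS intarray;
    CREATE TABLE docs (tags int[]);
    CREATE INDEX docs_tags_idx ON docs USING gist (tags gist__int_ops);

    -- An array larger than a leaf entry can hold (about 199 items on 8kB
    -- pages) used to be accepted with a NOTICE, risking bad reads later on
    -- decompression; with this fix the insertion fails cleanly with an ERROR.
    INSERT INTO docs SELECT array_agg(i) FROM generate_series(1, 300) AS i;

    -- gist__intbig_ops is the suggested opclass for such large arrays.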
* Correctly update hasSubLinks while mutating a rule action. (Tom Lane, 2023-06-13)

  rewriteRuleAction neglected to check for SubLink nodes in the securityQuals of range table entries. This could lead to failing to convert such a SubLink to a SubPlan, resulting in assertion crashes or weird errors later in planning.

  In passing, fix some poor coding in rewriteTargetView: we should not pass the source parsetree's hasSubLinks field to ReplaceVarsFromTargetList's outer_hasSubLinks. ReplaceVarsFromTargetList knows enough to ignore that when a Query node is passed, but it's still confusing and bad precedent: if we did try to update that flag we'd be updating a stale copy of the parsetree.

  Per bug #17972 from Alexander Lakhin. This has been broken since we added RangeTblEntry.securityQuals (although the presented test case only fails back to 215b43cdc), so back-patch all the way.

  Discussion: https://postgr.es/m/17972-f422c094237847d0@postgresql.org
* Accept fractional seconds in jsonpath's datetime() method. (Tom Lane, 2023-06-12)

  Commit 927d9abb6 purported to make datetime() accept any string that could be output for a datetime value by to_jsonb(). But it overlooked the possibility of fractional seconds being present, so that cases as simple as to_jsonb(now()) would defeat it. Fix by adding formats that include ".US" to the list in executeDateTimeMethod(). (Note that while this is nominally microseconds, it'll do the right thing for fractions with fewer than six digits.)

  In passing, re-order the list to restore the datatype ordering specified in its comment. The violation accidentally did not break anything; but the next edit might be less lucky, so add more comments.

  Per report from Tim Field. Back-patch to v13 where datetime() was added, like the previous patch.

  Discussion: https://postgr.es/m/014A028B-5CE6-4FDF-AC24-426CA6FC9CEE@mohiohio.com
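  A one-line illustration of the case described: to_jsonb(now()) normally carries fractional seconds, which datetime() previously failed to parse.

    -- Before this fix, the fractional seconds produced by to_jsonb(now())
    -- made the datetime() method reject the value; now it is accepted.
    SELECT jsonb_path_query(to_jsonb(now()), '$.datetime()');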
* hstore: Tighten key/value parsing check for whitespaces (Michael Paquier, 2023-06-12)

  isspace() can be locale-sensitive depending on the platform, causing hstore to treat as whitespace characters it should not see as such. For example, U+0105, encoded as 0xC4 0x85 in UTF-8, could be discarded from the given input. This problem is similar to 9ae2661, though it was missed that hstore can also manipulate non-ASCII inputs, so replace the existing isspace() calls with scanner_isspace().

  This problem has existed for a long time, so backpatch all the way down.

  Author: Evan Jones
  Discussion: https://postgr.es/m/CA+HWA9awUW0+RV_gO9r1ABZwGoZxPztcJxPy8vMFSTbTfi4jig@mail.gmail.com
  Backpatch-through: 11
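  A small sketch of the symptom, assuming a platform and locale where isspace() misclassifies the 0x85 byte; the key and value here are only illustrative. A value containing U+0105 ("ą") should round-trip unchanged.

    -- Illustration only; the misbehavior depends on platform and locale.
    -- The multibyte character must come back intact instead of having part
    -- of its encoding stripped as "whitespace".
    SELECT 'k => "ząb"'::hstore;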
* Fix missing initializations of MyProc.delayChkptEnd (Michael Paquier, 2023-06-11)

  This commit fixes an oversight introduced in 10520f4, which added delayChkptEnd to PGPROC to avoid ABI breakages on stable branches; two spots missed initializing this variable (delayChkpt was switched back from int to bool, and it was initialized to 0, so there were no consequences for it):
  - InitProcess(), where the per-process data structures of a backend are initialized.
  - InitAuxiliaryProcess(), same but for auxiliary processes.

  An interruption during relation truncation while this flag is set could cause an assertion failure when a follow-up process does a relation truncation while reusing the same PGPROC entry. A second effect could be incorrect checkpoint end delays.

  While at it, add an assertion in ProcArrayClearTransaction() for delayChkptEnd to be in line with 5788e25. This is needed only for v14.

  This issue affects v11~v14, but not v15~, as we use a single field called delayChkptFlags to delay checkpoints there.

  Author: suyu.cmj (mengjuan.cmj@alibaba-inc.com)
  Reviewed-by: Kyotaro Horiguchi, Michael Paquier
  Discussion: https://postgr.es/m/9c3d2a49-db5f-43cb-840b-d58f9a684295.mengjuan.cmj@alibaba-inc.com
  Backpatch-through: 11
* Refactor routine to find single log content pattern in TAP tests (Michael Paquier, 2023-06-09)

  The same routine to check if a specific pattern can be found in the server logs was copied over four different test scripts. This refactors the whole thing to use a single routine located in PostgreSQL::Test::Cluster, named log_contains, to grab the contents of the server logs and check for a specific pattern.

  On HEAD, the previous code assumed that slurp_file() could not handle an undefined offset, setting it to zero, but slurp_file() only does an extra fseek() before retrieving the log contents if an offset is defined. In two places, the test was retrieving the full log contents with slurp_file() after calling substr() to apply an offset, ignoring that slurp_file() would be able to handle that.

  Backpatch all the way down to ease the introduction of new tests that could rely on the new routine.

  Author: Vignesh C
  Reviewed-by: Andrew Dunstan, Dagfinn Ilmari Mannsåker, Michael Paquier
  Discussion: https://postgr.es/m/CALDaNm0YSiLpjCmajwLfidQrFOrLNKPQir7s__PeVvh9U3uoTQ@mail.gmail.com
  Backpatch-through: 11
* doc: Fix example command for ALTER FOREIGN TABLE ... OPTIONS. (Fujii Masao, 2023-06-08)

  In the documentation, previously the example command for ALTER FOREIGN TABLE ... OPTIONS incorrectly included both the option name and value with the DROP operation. The correct syntax for the DROP operation requires only the name of the option to be specified. This commit fixes the example by removing the option value from the DROP operation.

  Back-patch to all supported versions.

  Author: Mehmet Emin KARAKAS <emin100@gmail.com>
  Reviewed-by: Fujii Masao
  Discussion: https://postgr.es/m/CANQrdXAHzbcEYhjGoe5A42OmfvdQhHFJzyKj9gJvHuDKyOF5Ng@mail.gmail.com
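  A sketch of the corrected syntax; the table and option names here are illustrative, and valid option names depend on the foreign data wrapper in use. ADD and SET take a name and a value, DROP takes only the name.

    -- Illustrative only; option names must be ones the FDW accepts.
    ALTER FOREIGN TABLE myschema.distributors
        OPTIONS (ADD opt1 'value', SET opt2 'value2', DROP opt3);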
* Use per-tuple context in ExecGetAllUpdatedCols (Tomas Vondra, 2023-06-07)

  Commit fc22b6623b (generated columns) replaced ExecGetUpdatedCols() with ExecGetAllUpdatedCols() in a couple places handling UPDATE (triggers and lock mode). However, ExecGetUpdatedCols() did exec_rt_fetch() while ExecGetAllUpdatedCols() also allocates memory through bms_union() without paying attention to the memory context and happened to use the long-lived ExecutorState, leaking the memory until the end of the query.

  The amount of leaked memory is proportional to the number of (updated) attributes, types of UPDATE triggers, and the number of processed rows (which for UPDATE ... FROM ... may be much higher than updated rows).

  Fixed by switching to the per-tuple context in GetAllUpdatedColumns(). This is fine for all in-core callers, but external callers may need to copy the result. But we're not aware of any such callers.

  Note the issue was introduced by fc22b6623b, but the macros were later renamed by f50e888990.

  Backpatch to 12, where the issue was introduced.

  Reported-by: Tomas Vondra
  Reviewed-by: Andres Freund, Tom Lane, Jakub Wartak
  Backpatch-through: 12
  Discussion: https://postgr.es/m/222a3442-7f7d-246c-ed9b-a76209d19239@enterprisedb.com
* Initialize 'recordXtime' to silence compiler warning. (Heikki Linnakangas, 2023-06-06)

  In reality, recordXtime will always be set by the getRecordTimestamp call, but the compiler doesn't necessarily see that.

  Back-patch to all supported versions.

  Author: Tristan Partin
  Discussion: https://www.postgresql.org/message-id/CT5MN8E11U0M.1NYNCHXYUHY41@gonk
* doc: add missing "the" in LATERAL sentence. (Bruce Momjian, 2023-06-01)

  Backpatch-through: 11
* nbtree VACUUM: cope with right sibling link corruption. (Peter Geoghegan, 2023-05-25)

  Avoid "right sibling's left-link doesn't match" errors when vacuuming a corrupt nbtree index. Just LOG the issue and press on. That way VACUUM will have a decent chance of finishing off all required processing for the index (and for the table as a whole). This error was seen in the field from time to time (it's more than a theoretical risk), so giving VACUUM the ability to press on like this has real value.

  Nothing short of a REINDEX is expected to fix the underlying index corruption, so giving up (by throwing an error) risks making a bad situation far worse. Anything that blocks forward progress by VACUUM like this might go unnoticed for a long time. This could eventually lead to a wraparound/xidStopLimit outage.

  Note that _bt_unlink_halfdead_page() has always been able to bail on page deletion when the target page's left sibling page was in an inconsistent state. It now does the same thing (returns false to back out of the second phase of deletion) when it notices sibling link corruption in the target page's right sibling page. This is similar to the work from commit 5b861baa (later backpatched as commit 43e409ce), which taught nbtree to press on with vacuuming an index when page deletion fails to "re-find" a downlink in the target page's parent page. The "re-find" check seems to make VACUUM bail on page deletion more often in practice, but there is no reason to take any chances here.

  Author: Peter Geoghegan <pg@bowt.ie>
  Reviewed-By: Heikki Linnakangas <hlinnaka@iki.fi>
  Discussion: https://postgr.es/m/CAH2-Wzko2q2kP1+UvgJyP9g0mF4hopK0NtQZcxwvMv9_ytGhkQ@mail.gmail.com
  Backpatch: 11- (all supported versions).
* Fix misbehavior of EvalPlanQual checks with multiple result relations. (Tom Lane, 2023-05-19)

  The idea of EvalPlanQual is that we replace the query's scan of the result relation with a single injected tuple, and see if we get a tuple out, thereby implying that the injected tuple still passes the query quals. (In join cases, other relations in the query are still scanned normally.) This logic was not updated when commit 86dc90056 made it possible for a single DML query plan to have multiple result relations, when the query target relation has inheritance or partition children. We replaced the output for the current result relation successfully, but other result relations were still scanned normally; thus, if any other result relation contained a tuple satisfying the quals, we'd think the EPQ check passed, even if it did not pass for the injected tuple itself. This would lead to update or delete actions getting performed when they should have been skipped due to a conflicting concurrent update in READ COMMITTED isolation mode.

  Fix by blocking all sibling result relations from emitting tuples during an EvalPlanQual recheck. In the back branches, the fix is complicated a bit by the need to not change the size of struct EPQState (else we'd have ABI-breaking changes in offsets in struct ModifyTableState). Like the back-patches of 3f7836ff6 and 4b3e37993, add a separately palloc'd struct to avoid that. The logic is the same as in HEAD otherwise.

  This is only a live bug back to v14 where 86dc90056 came in. However, I chose to back-patch the test cases further, on the grounds that this whole area is none too well tested. I skipped doing so in v11 though because none of the test cases applied cleanly, and it didn't quite seem worth extra work for a branch with only six months to live.

  Per report from Ante Krešić (via Aleksander Alekseev)

  Discussion: https://postgr.es/m/CAJ7c6TMBTN3rcz4=AjYhLPD_w3FFT0Wq_C15jxCDn8U4tZnH1g@mail.gmail.com
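  A rough sketch of the scenario described, with hypothetical tables: a partitioned target gives the UPDATE plan multiple result relations, and a concurrent update under READ COMMITTED triggers the EvalPlanQual recheck. Reproducing it needs two sessions, sketched here as comments.

    -- Illustrative only; requires two concurrent sessions.
    CREATE TABLE parted (id int, val text) PARTITION BY LIST (id);
    CREATE TABLE parted_1 PARTITION OF parted FOR VALUES IN (1);
    CREATE TABLE parted_2 PARTITION OF parted FOR VALUES IN (2);
    INSERT INTO parted VALUES (1, 'a'), (2, 'a');

    -- Session 1:
    --   BEGIN;
    --   UPDATE parted SET val = 'z' WHERE id = 1;  -- row in parted_1 no longer has val = 'a'
    -- Session 2 (blocks on that row, then rechecks it via EvalPlanQual):
    --   UPDATE parted SET val = 'y' WHERE val = 'a';
    -- Session 1:
    --   COMMIT;
    -- Before the fix, the recheck could still scan the sibling result relation
    -- (parted_2), whose untouched 'a' row made the check appear to pass, so the
    -- concurrently-updated row could be modified when it should have been skipped.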
* Avoid naming conflict between transactions.sql and namespace.sql. (Tom Lane, 2023-05-19)

  Commits 681d9e462 et al added a test case in namespace.sql that implicitly relied on there not being a table "public.abc". However, the concurrently-run transactions.sql test creates precisely such a table, so with the right timing you'd get a failure. Creating a table named as generically as "abc" in a common schema seems like bad practice, so fix this by changing the name of transactions.sql's table. (Compare 2cf8c7aa4.)

  Marina Polyakova

  Discussion: https://postgr.es/m/80d0201636665d82185942e7112257b4@postgrespro.ru
* Fix handling of empty ranges and NULLs in BRIN (Tomas Vondra, 2023-05-19)

  BRIN indexes did not properly distinguish between summaries for empty (no rows) and all-NULL ranges, treating them as essentially the same thing. Summaries were initialized with allnulls=true, and opclasses simply reset allnulls to false when processing the first non-NULL value. This however produces incorrect results if the range starts with a NULL value (or a sequence of NULL values), in which case we forget the range contains NULL values when adding the first non-NULL value.

  This happens because the allnulls flag is used for two separate purposes - to mark empty ranges (not representing any rows yet) and ranges containing only NULL values. Opclasses don't know which of these cases it is, and so don't know whether to set hasnulls=true. Setting the flag in both cases would make it correct, but it would also make BRIN indexes useless for queries with IS NULL clauses. All ranges start empty (and thus allnulls=true), so all ranges would end up with either allnulls=true or hasnulls=true.

  The severity of the issue is somewhat reduced by the fact that it only happens when adding values to an existing summary with allnulls=true. This can happen e.g. for small tables (because a summary for the first range exists for all BRIN indexes), or for tables with large fraction of NULL values in the indexed columns.

  Bulk summarization (e.g. during CREATE INDEX or automatic summarization) that processes all values at once is not affected by this issue. In this case the flags were updated in a slightly different way, not forgetting the NULL values.

  To identify empty ranges we use a new flag, stored in an unused bit in the BRIN tuple header so the on-disk format remains the same. A matching flag is added to BrinMemTuple, into a 3B gap after bt_placeholder. That means there's no risk of ABI breakage, although we don't actually pass the BrinMemTuple to any public API.

  We could also skip storing index tuples for empty summaries, but then we'd have to always process such ranges - even if there are no rows in large parts of the table (e.g. after a bulk DELETE), it would still require reading the pages etc. So we store them, but ignore them when building the bitmap.

  Backpatch to 11. The issue exists since BRIN indexes were introduced in 9.5, but older releases are already EOL.

  Backpatch-through: 11
  Reviewed-by: Justin Pryzby, Matthias van de Meent, Alvaro Herrera
  Discussion: https://postgr.es/m/402430e4-7d9d-6cf1-09ef-464d80afff3b@enterprisedb.com
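  A small SQL sketch of the affected pattern, with hypothetical names: the first range's summary already exists (allnulls=true) when rows are then added one by one, starting with NULLs.

    -- Illustrative only.
    CREATE TABLE brin_t (a int);
    CREATE INDEX brin_t_idx ON brin_t USING brin (a);

    -- The summary for the first range exists and is all-NULL; inserting NULLs
    -- first and non-NULL values afterwards exercised the buggy update path.
    INSERT INTO brin_t VALUES (NULL), (NULL);
    INSERT INTO brin_t SELECT i FROM generate_series(1, 1000) AS i;

    -- Before the fix, the summary could lose track of the NULLs, so a BRIN
    -- bitmap scan driven by this query might skip the range and miss rows.
    SELECT count(*) FROM brin_t WHERE a IS NULL;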
* Fix handling of NULLs when merging BRIN summaries (Tomas Vondra, 2023-05-18)

  When merging BRIN summaries, union_tuples() did not correctly update the target hasnulls/allnulls flags. When merging an all-NULL summary into a summary without any NULL values, the result had both flags set to false (instead of having hasnulls=true).

  This happened because the code only considered the hasnulls flags, ignoring the possibility the source summary has allnulls=true. Discovered while investigating issues with handling empty BRIN ranges and handling of NULL values, but it's a separate problem (has nothing to do with empty ranges).

  Fixed by considering both flags on the source summary, and updating the hasnulls flag on the target summary.

  Backpatch to 11. The bug exists since 9.5 (where BRIN indexes were introduced), but those releases are EOL already.

  Discussion: https://postgr.es/m/9d993d0d-e431-2196-9ccc-0554d0e60154%40enterprisedb.com
* Ensure Soundex difference() function handles empty input sanely. (Tom Lane, 2023-05-16)

  fuzzystrmatch's difference() function assumes that _soundex() always initializes its output buffer fully. This was not so for the case of a string containing no alphabetic characters, resulting in unstable output and Valgrind complaints. Fix by using memset() to fill the whole buffer in the early-exit case.

  Also make some cosmetic improvements (I didn't care for the random switches between "instr[0]" and "*instr" notation).

  Report and diagnosis by Alexander Lakhin (bug #17935). Back-patch to all supported branches.

  Discussion: https://postgr.es/m/17935-b99316aa79c18513@postgresql.org
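  A short illustration of the input class involved: arguments without any alphabetic characters now produce stable results. The comparison strings here are arbitrary.

    -- Before the fix, the partially-initialized Soundex buffer could make
    -- these results unstable from run to run.
    CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;
    SELECT difference('', 'Anne'), difference('12345', 'Anne');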
* Doc: Fix link to fillfactor reloption. (Peter Geoghegan, 2023-05-10)

  Fix a link from the "Heap-Only Tuples" documentation section. Previously, its "fillfactor" link pointed to the "CREATE TABLE" command's documentation. Now the link directly points to the fillfactor storage parameter documentation (which is about half way into the "CREATE TABLE" sect1).

  Oversight in commit 115464bb.

  Backpatch: 12-, the first version with a usable reloption link.
* Stamp 13.11. [tag: REL_13_11] (Tom Lane, 2023-05-08)
* Last-minute updates for release notes. (Tom Lane, 2023-05-08)

  Security: CVE-2023-2454, CVE-2023-2455
* Adjust sepgsql expected output for 681d9e462 et al. (Tom Lane, 2023-05-08)

  Security: CVE-2023-2454
* Handle RLS dependencies in inlined set-returning functions properly. (Tom Lane, 2023-05-08)

  If an SRF in the FROM clause references a table having row-level security policies, and we inline that SRF into the calling query, we neglected to mark the plan as potentially dependent on which role is executing it. This could lead to later executions in the same session returning or hiding rows that should have been hidden or returned instead.

  Our thanks to Wolfgang Walther for reporting this problem.

  Stephen Frost and Tom Lane

  Security: CVE-2023-2455
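  A minimal sketch of the pattern described, with hypothetical object names: an inlinable SQL set-returning function reading an RLS-protected table, used in FROM.

    -- Illustrative only.
    CREATE TABLE rls_t (owner text, secret text);
    ALTER TABLE rls_t ENABLE ROW LEVEL SECURITY;
    CREATE POLICY rls_t_policy ON rls_t USING (owner = current_user);

    -- A simple SQL SRF like this can be inlined into the calling query, so
    -- the resulting plan must be marked as depending on the executing role.
    CREATE FUNCTION visible_rows() RETURNS SETOF rls_t
    LANGUAGE sql STABLE AS 'SELECT * FROM rls_t';

    SELECT * FROM visible_rows();   -- results must track the current role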
* Replace last PushOverrideSearchPath() call with set_config_option(). (Noah Misch, 2023-05-08)

  The two methods don't cooperate, so set_config_option("search_path", ...) has been ineffective under non-empty overrideStack. This defect enabled an attacker having database-level CREATE privilege to execute arbitrary code as the bootstrap superuser. While that particular attack requires v13+ for the trusted extension attribute, other attacks are feasible in all supported versions.

  Standardize on the combination of NewGUCNestLevel() and set_config_option("search_path", ...). It is newer than PushOverrideSearchPath(), more-prevalent, and has no known disadvantages. The "override" mechanism remains for now, for compatibility with out-of-tree code. Users should update such code, which likely suffers from the same sort of vulnerability closed here.

  Back-patch to v11 (all supported versions).

  Alexander Lakhin. Reported by Alexander Lakhin.

  Security: CVE-2023-2454
* Translation updates (Peter Eisentraut, 2023-05-08)

  Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
  Source-Git-Hash: 5880bed52cbf5fb44921c4a42b23e3251575dcdb
* Release notes for 15.3, 14.8, 13.11, 12.15, 11.20. (Tom Lane, 2023-05-07)
* Fix typo with wait event for SLRU buffer of commit timestamps (Michael Paquier, 2023-05-05)

  This wait event had been documented as "CommitTsBuffer" since its introduction, but the code named it "CommitTSBuffer". This commit fixes the code to follow the documented term, which is also more consistent with the naming of the other wait events used for commit timestamps.

  Introduced by 5da1493.

  Author: Alexander Lakhin
  Discussion: https://postgr.es/m/e8c38840-596a-83d6-bd8d-cebc51111572@gmail.com
  Backpatch-through: 13
* Fix prove_installcheck when used with PGXS (Peter Eisentraut, 2023-05-05)

  Commit 153e215677 added the portlock directory. This is created in $ENV{top_builddir} if it is set. Under PGXS, top_builddir points into the installation directory, which is not necessarily writable and in any case inappropriate for a test suite to use.

  The cause of the problem is that the prove_installcheck target in Makefile.global exports top_builddir, which isn't useful (since no other Perl code actually reads it) and breaks this use case. The reason this code is there is probably that it has been dragged around with various other changes, in particular a0fc813266, but without a real purpose of its own.

  By just removing the exporting of top_builddir in prove_installcheck, the portlock directory then ends up under tmp_check in the build directory, which is more suitable.

  Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
  Discussion: https://www.postgresql.org/message-id/78d1cfa6-0065-865d-584b-cde6d8c18aff@enterprisedb.com
* Move return statements out of PG_TRY blocks. (Nathan Bossart, 2023-05-04)

  If we exit a PG_TRY block early via "continue", "break", "goto", or "return", we'll skip unwinding its exception stack. This change moves a couple of such "return" statements in PL/Python out of PG_TRY blocks. This was introduced in d0aa965c0a and affects all supported versions.

  We might also be able to add compile-time checks to prevent recurrence, but that is left as a future exercise.

  Reported-by: Mikhail Gribkov, Xing Guo
  Author: Xing Guo
  Reviewed-by: Michael Paquier, Andres Freund, Tom Lane
  Discussion: https://postgr.es/m/CAMEv5_v5Y%2B-D%3DCO1%2Bqoe16sAmgC4sbbQjz%2BUtcHmB6zcgS%2B5Ew%40mail.gmail.com
  Discussion: https://postgr.es/m/CACpMh%2BCMsGMRKFzFMm3bYTzQmMU5nfEEoEDU2apJcc4hid36AQ%40mail.gmail.com
  Backpatch-through: 11 (all supported versions)
* In array_position()/array_positions(), beware of empty input array. (Tom Lane, 2023-05-04)

  These functions incautiously fetched the array's first lower bound even when the array is zero-dimensional, thus fetching the word after the allocated array space. While almost always harmless, with very bad luck this could result in SIGSEGV. Fix by adding an early exit for empty input.

  Per bug #17920 from Alexander Lakhin.

  Discussion: https://postgr.es/m/17920-f7c228c627b6d02e%40postgresql.org
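  The trouble case in two lines: a zero-dimensional (empty) array now takes the early exit.

    -- An empty array has no lower bound to fetch; these return NULL and an
    -- empty array, respectively, without reading past the array header.
    SELECT array_position('{}'::int[], 1);
    SELECT array_positions('{}'::int[], 1);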
* Tighten array dimensionality checks in Python -> SQL array conversion. (Tom Lane, 2023-05-04)

  Like plperl before f47004add, plpython wasn't being sufficiently careful about checking that list-of-list structures represent rectangular arrays, so that it would accept some cases in which different parts of the "array" are nested to different depths. This was exacerbated by Python's weak distinction between sequences and lists, so that in some cases strings could get treated as though they are lists (and burst into individual characters) even though a different ordering of the upper-level list would give a different result.

  Some of this behavior was unreachable (without risking a crash) before 81eaaf65e. It seems like a good idea to clean it all up in the same releases, rather than shipping a non-crashing but nonetheless visibly buggy behavior in the name of minimal change. Hence, back-patch.

  Per bug #17912 and further testing by Alexander Lakhin.

  Discussion: https://postgr.es/m/17912-82ceed78731d9cdc@postgresql.org
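  A sketch of a shape that is now rejected, assuming plpython3u is installed; the function name is hypothetical. The sublists are nested to different depths, so the value does not describe a rectangular SQL array.

    -- Illustrative only.
    CREATE FUNCTION ragged_matrix() RETURNS int[]
    LANGUAGE plpython3u AS $$
    # One element is a list and the other a scalar: the nesting depths differ,
    # which the tightened checks now reject instead of silently accepting.
    return [[1, 2], 3]
    $$;

    SELECT ragged_matrix();   -- now fails with an array dimensionality error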
* Doc: clarify behavior of row-limit arguments in the PLs' SPI wrappers. (Tom Lane, 2023-05-02)

  plperl, plpython, and pltcl all provide query-execution functions that are thin wrappers around SPI_execute() or its variants. The SPI functions document their row-count limit arguments clearly, as "maximum number of rows to return, or 0 for no limit". However the PLs' documentation failed to explain this special behavior of zero, so that a reader might well assume it means "fetch zero rows". Improve that.

  Daniel Gustafsson and Tom Lane, per report from Kieran McCusker

  Discussion: https://postgr.es/m/CAGgUQ6H6qYScctOhktQ9HLFDDoafBKHyUgJbZ6q_dOApnzNTXg@mail.gmail.com
* Tighten array dimensionality checks in Perl -> SQL array conversion. (Tom Lane, 2023-04-29)

  plperl_array_to_datum() wasn't sufficiently careful about checking that nested lists represent a rectangular array structure; it would accept inputs such as "[1, []]". This is a bit related to the PL/Python bug fixed in commit 81eaaf65e, but it doesn't seem to provide any direct route to a memory stomp. Instead the likely failure mode is for makeMdArrayResult to be passed fewer Datums than the claimed array dimensionality requires, possibly leading to a wild pointer dereference and SIGSEGV.

  Per report from Alexander Lakhin. It's been broken for a long time, so back-patch to all supported branches.

  Discussion: https://postgr.es/m/5ebae5e4-d401-fadf-8585-ac3eaf53219c@gmail.com
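  The input shape quoted above, wrapped in a throwaway plperl function (the function name is hypothetical) so it can be tried directly:

    -- Illustrative only; requires the plperl extension.
    CREATE FUNCTION ragged_perl_array() RETURNS int[]
    LANGUAGE plperl AS $$
    # A scalar and a nested array reference at the same level is not a
    # rectangular structure; the tightened checks now reject it cleanly.
    return [1, []];
    $$;

    SELECT ragged_perl_array();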
* Handle zero-length sublist correctly in Python -> SQL array conversion. (Tom Lane, 2023-04-28)

  If PLySequence_ToArray came across a zero-length sublist, it'd compute the overall array size as zero, possibly leading to a memory clobber. (This would likely qualify as a security bug, were it not that plpython is an untrusted language already.)

  I think there are other corner-case issues in this code as well, notably that the error messages don't match the core code and for some ranges of array sizes you'd get "invalid memory alloc request size" rather than the intended message about array size. Really this code has no business doing its own array size calculation at all, so remove the faulty code in favor of using ArrayGetNItems().

  Per bug #17912 from Alexander Lakhin. Bug seems to have come in with commit 94aceed31, so back-patch to all supported branches.

  Discussion: https://postgr.es/m/17912-82ceed78731d9cdc@postgresql.org
* Fix crashes with CREATE SCHEMA AUTHORIZATION and schema elements (Michael Paquier, 2023-04-28)

  CREATE SCHEMA AUTHORIZATION with appended schema elements can lead to crashes when comparing the schema name of the query with the schemas used in the qualification of some clauses in the elements' queries.

  The origin of the problem is that the transformation routine for the elements listed in a CREATE SCHEMA query uses, as the new expected schema name, the one listed in CreateSchemaStmt itself. However, depending on the query, CreateSchemaStmt.schemaname may be NULL, being computed instead from the role specification of the query given by the AUTHORIZATION clause, which could be either:
  - A user name string, with the new schema name being set to the same value as the role given.
  - Guessed from CURRENT_ROLE, CURRENT_USER or SESSION_USER, with a new schema name computed from the security context where CREATE SCHEMA is running.

  Regression tests are added for CREATE SCHEMA with some appended elements (some of them with schema qualifications), covering also some role specification patterns.

  While at it, this simplifies the context structure used during the transformation of the elements listed in a CREATE SCHEMA query by removing the fields for the role specification and the role type. They were not used, and for the role specification this could be confusing as the schema name may be extracted from that at the beginning of CreateSchemaCommand().

  This issue has existed for a long time, so backpatch down to all the versions supported.

  Reported-by: Song Hongyu
  Author: Michael Paquier
  Reviewed-by: Richard Guo
  Discussion: https://postgr.es/m/17909-f65c12dfc5f0451d@postgresql.org
  Backpatch-through: 11
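  A purely illustrative sketch of the statement shape described; all object names are hypothetical. The schema name is supplied by the AUTHORIZATION clause rather than written explicitly, and a schema element containing a schema qualification follows.

    -- Illustrative only.
    CREATE SCHEMA AUTHORIZATION CURRENT_USER
        CREATE VIEW hypothetical_view AS
            SELECT a FROM some_other_schema.some_table;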
* Prevent underflow in KeepLogSeg(). (Nathan Bossart, 2023-04-27)

  The call to XLogGetReplicationSlotMinimumLSN() might return a greater LSN than the one given to the function. Subsequent segment number calculations might then underflow, which could result in unexpected behavior when removing or recycling WAL files. This was introduced with max_slot_wal_keep_size in c655077639. To fix, skip the block of code for replication slots if the LSN is greater.

  Reported-by: Xu Xingwang
  Author: Kyotaro Horiguchi
  Reviewed-by: Junwang Zhao
  Discussion: https://postgr.es/m/17903-4288d439dee856c6%40postgresql.org
  Backpatch-through: 13
* In hstore_plpython, avoid crashing when return value isn't a mapping. (Tom Lane, 2023-04-27)

  Python 3 changed the behavior of PyMapping_Check(), breaking the test in plpython_to_hstore() that verifies whether a function result to be transformed is acceptable. A backwards-compatible fix is to first verify that the object doesn't pass PySequence_Check().

  Perhaps accidentally, our other uses of PyMapping_Check() already follow uses of PySequence_Check(), so that no other bugs were created by this change.

  Per bug #17908 from Alexander Lakhin. Back-patch to all supported branches.

  Dmitry Dolgov and Tom Lane

  Discussion: https://postgr.es/m/17908-3f19a125d56a11d6@postgresql.org
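  A sketch of the failing case, with a hypothetical function name: returning a sequence rather than a mapping from a function whose result is transformed to hstore should raise a clean error rather than crash.

    -- Illustrative only; requires the hstore_plpython3u transform extension.
    CREATE EXTENSION IF NOT EXISTS hstore_plpython3u CASCADE;

    CREATE FUNCTION not_a_mapping() RETURNS hstore
    LANGUAGE plpython3u
    TRANSFORM FOR TYPE hstore AS $$
    # A list passes PySequence_Check() but is not a mapping, so it cannot be
    # converted to hstore; the fix makes this error out instead of crashing.
    return [1, 2, 3]
    $$;

    SELECT not_a_mapping();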
* Fix vacuum_cost_delay check for balance calculation. (Daniel Gustafsson, 2023-04-25)

  Commit 1021bd6a89 excluded autovacuum workers from cost-limit balance calculations when per-relation options were set. The code checks for limit and cost_delay being greater than zero, but since cost_delay can be set to -1 the test needs to check for greater than or equal to zero.

  Backpatch to all supported branches since 1021bd6a89 was backpatched all the way at the time.

  Author: Masahiko Sawada <sawada.mshk@gmail.com>
  Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
  Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
  Discussion: https://postgr.es/m/CAD21AoBS7o6Ljt_vfqPQPf67AhzKu3fR0iqk8B=vVYczMugKMQ@mail.gmail.com
  Backpatch-through: v11 (all supported branches)
* Fix memory leakage in plpgsql DO blocks that use cast expressions. (Tom Lane, 2023-04-24)

  Commit 04fe805a1 modified plpgsql so that datatype casts make use of expressions cached by plancache.c, in place of older code where these expression trees were managed by plpgsql itself. However, I (tgl) forgot that we use a separate, shorter-lived cast info hashtable in DO blocks. The new mechanism thus resulted in session-lifespan leakage of the plancache data once a DO block containing one or more casts terminated.

  To fix, split the cast hash table into two parts, one that tracks only the plancache's CachedExpressions and one that tracks the expression state trees generated from them. DO blocks need their own expression state trees and hence their own version of the second hash table, but there's no reason they can't share the CachedExpressions with regular plpgsql functions.

  Per report from Ajit Awekar. Back-patch to v12 where the issue was introduced.

  Ajit Awekar and Tom Lane

  Discussion: https://postgr.es/m/CAHv6PyrNaqdvyWUspzd3txYQguFTBSnhx+m6tS06TnM+KWc_LQ@mail.gmail.com
* Remove duplicate lines of code (Daniel Gustafsson, 2023-04-24)

  Commit 6df7a9698bb accidentally included two identical prototypes for default_multirange_selectivity() and commit 086cf1458c6 added a break statement where one was already present, thus duplicating it. While there is no bug caused by this, fix by removing the duplicated lines as they provide no value.

  Backpatch the fix for the duplicate prototypes to v14 and the duplicate break statement fix to all supported branches to avoid backpatching hazards due to the removal.

  Reported-by: Anton Voloshin <a.voloshin@postgrespro.ru>
  Discussion: https://postgr.es/m/0e69cb60-0176-f6d0-7e15-6478b7d85724@postgrespro.ru
* Validate ltree siglen GiST option to be int-aligned (Alexander Korotkov, 2023-04-23)

  Unaligned siglen could lead to an unaligned access to subsequent key fields.

  Backpatch to 13, where opclass options were introduced.

  Reported-by: Alexander Lakhin
  Bug: 17847
  Discussion: https://postgr.es/m/17847-171232970bea406b%40postgresql.org
  Reviewed-by: Tom Lane, Pavel Borisov, Alexander Lakhin
  Backpatch-through: 13
* Fix custom validators call in build_local_reloptions() (Alexander Korotkov, 2023-04-23)

  We need to call them only when validate == true.

  Backpatch to 13, where opclass options were introduced.

  Reported-by: Tom Lane
  Discussion: https://postgr.es/m/2656633.1681831542%40sss.pgh.pa.us
  Reviewed-by: Tom Lane, Pavel Borisov
  Backpatch-through: 13
* Avoid character classification in regex escape parsing. (Jeff Davis, 2023-04-21)

  For regex escape sequences, just test directly for the relevant ASCII characters rather than using locale-sensitive character classification. This fixes an assertion failure when a locale considers a non-ASCII character, such as "൧", to be a digit.

  Reported-by: Richard Guo
  Discussion: https://postgr.es/m/CAMbWs49Q6UoKGeT8pBkMtJGJd+16CBFZaaWUk9Du+2ERE5g_YA@mail.gmail.com
  Backpatch-through: 11
* Use --strip-unneeded when stripping static libraries with GNU strip. (Tom Lane, 2023-04-20)

  We've long used "--strip-unneeded" for shared libraries but plain "-x" for static libraries when stripping symbols with GNU strip. There doesn't seem to be any really good reason for that though, since --strip-unneeded produces smaller output (as "-x" alone does not remove debug symbols). Moreover it seems that llvm-strip, although it identifies as GNU strip, misbehaves when given "-x" for this purpose. It's unclear whether that's intentional or a bug in llvm-strip, but in any case it seems like changing to use --strip-unneeded in all cases should be a win.

  Note that this doesn't change our behavior when dealing with non-GNU strip.

  Per gripes from Ed Maste and Palle Girgensohn. Back-patch, in case anyone wants to use llvm-strip with stable branches.

  Discussion: https://postgr.es/m/17898-5308d09543463266@postgresql.org
  Discussion: https://postgr.es/m/20230420153338.bbj2g5jiyy3afhjz@awork3.anarazel.de