path: root/src/backend
...
* Track the number of presorted outer pathkeys in MergePath  (Richard Guo, 2025-05-08)

  When creating an explicit Sort node for the outer path of a mergejoin,
  we need to determine the number of presorted keys of the outer path to
  decide whether explicit incremental sort can be applied.  Currently,
  this is done by repeatedly calling pathkeys_count_contained_in.

  This patch caches the number of presorted outer pathkeys in MergePath,
  allowing us to save several calls to pathkeys_count_contained_in.  It
  can be considered a complement to the changes in commit 828e94c9d.

  Reported-by: David Rowley <dgrowleyml@gmail.com>
  Author: Richard Guo <guofenglinux@gmail.com>
  Reviewed-by: Tender Wang <tndrwang@gmail.com>
  Discussion: https://postgr.es/m/CAApHDvqvBireB_w6x8BN5txdvBEHxVgZBt=rUnpf5ww5P_E_ww@mail.gmail.com
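  A minimal sketch of the caching pattern: pathkeys_count_contained_in()
  is the real planner helper, while the cached field name is an
  assumption for illustration:

      int         n_presorted;

      /* How many leading pathkeys of the outer path already match? */
      (void) pathkeys_count_contained_in(outersortkeys,
                                         outer_path->pathkeys,
                                         &n_presorted);

      /* Compute once at path-creation time; later consumers read the
       * cached count instead of re-walking both pathkey lists. */
      pathnode->outer_presorted_keys = n_presorted;   /* assumed field */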
* Suppress unnecessary explicit sorting for EPQ mergejoin path  (Richard Guo, 2025-05-08)

  When building a ForeignPath for a joinrel, if there's a possibility
  that EvalPlanQual will be executed, we must identify a suitable path
  for EPQ checks.  If the outer or inner path of the chosen path is a
  ForeignPath representing a pushed-down join, we replace it with its
  fdw_outerpath to ensure that the EPQ check path consists entirely of
  local joins.

  If the chosen path is a MergePath, and its outer or inner path is a
  ForeignPath that is not already well enough ordered, the MergePath
  will have non-NIL outersortkeys or innersortkeys indicating the
  desired ordering to be created by an explicit Sort node.  If we then
  replace the outer or inner path with its corresponding fdw_outerpath,
  and that path is already sufficiently ordered, we end up in an
  inconsistent state: the MergePath has non-NIL outersortkeys or
  innersortkeys, and its input path is already properly ordered.  This
  inconsistency can result in an Assert failure or the addition of a
  redundant Sort node.

  To fix, check if the new outer or inner path of a MergePath is already
  properly sorted, and set its outersortkeys or innersortkeys to NIL if
  so.

  Bug: #18902
  Reported-by: Nikita Kalinin <n.kalinin@postgrespro.ru>
  Author: Richard Guo <guofenglinux@gmail.com>
  Reviewed-by: Tender Wang <tndrwang@gmail.com>
  Discussion: https://postgr.es/m/18902-71c1bed2b9f7c46f@postgresql.org
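  A sketch of the outer-side check described above, using real planner
  types and helpers (MergePath, pathkeys_contained_in) but with the
  surrounding fdw_outerpath substitution logic elided:

      if (IsA(epqpath, MergePath))
      {
          MergePath  *mpath = (MergePath *) epqpath;
          Path       *outer = mpath->jpath.outerjoinpath;

          /* If the substituted input is already ordered as required,
           * drop the explicit-sort request so no Sort node is added. */
          if (mpath->outersortkeys &&
              pathkeys_contained_in(mpath->outersortkeys, outer->pathkeys))
              mpath->outersortkeys = NIL;
      }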
* Remove pg_replication_origin's TOAST table.  (Nathan Bossart, 2025-05-07)

  A few places that access this catalog don't set up an active snapshot
  before potentially accessing its TOAST table.  However, roname (the
  replication origin name) is the only varlena column, so this is only a
  problem if the name requires out-of-line storage.

  This commit removes its TOAST table to avoid needing to set up a
  snapshot.  It also places a limit on replication origin names so that
  attempts to set long names will fail with a more user-friendly error.
  The chosen limit of 512 bytes should be sufficient to avoid "row is
  too big" errors independent of BLCKSZ, but it should also be lenient
  enough for all reasonable use-cases.

  Bumps catversion.

  Reviewed-by: Michael Paquier <michael@paquier.xyz>
  Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
  Reviewed-by: Euler Taveira <euler@eulerto.com>
  Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
  Discussion: https://postgr.es/m/ZvMSUPOqUU-VNADN%40nathan
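  A hedged sketch of what such a length check could look like (constant
  name and message wording are assumptions; only the 512-byte limit
  comes from the commit message):

      #define MAX_RONAME_LEN 512          /* assumed constant name */

      if (strlen(roname) > MAX_RONAME_LEN)
          ereport(ERROR,
                  (errcode(ERRCODE_INVALID_NAME),
                   errmsg("replication origin name is too long"),
                   errdetail("Replication origin names must not exceed %d bytes.",
                             MAX_RONAME_LEN)));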
* Prevent premature nbtree array advancement.  (Peter Geoghegan, 2025-05-07)

  nbtree array index scans could fail to return matching tuples in rare
  cases where the missed tuples cover key space that the scan's arrays
  incorrectly indicate has already been read.  These cases involved
  nearby tuples with NULL values that were evaluated using a skip array
  key while in pstate.forcenonrequired mode.

  To fix, prevent forcenonrequired mode from prematurely advancing the
  scan's array keys beyond key space that the scan has yet to read
  tuples from: reset the scan's array keys (to the first elements in the
  current scan direction) before the _bt_checkkeys call for
  pstate.finaltup.  That way _bt_checkkeys starts from a clean slate,
  which ensures that it will call _bt_advance_array_keys (while passing
  it sktrig_required=true).  This reliably restores the invariant that
  the scan's arrays always accurately track its progress through the
  index's key space (at least when the scan is "between pages").

  Oversight in commit 8a510275, which optimized nbtree search scan key
  comparisons.

  Author: Peter Geoghegan <pg@bowt.ie>
  Reviewed-By: Mark Dilger <mark.dilger@enterprisedb.com>
  Discussion: https://postgr.es/m/CAH2-WzmodSE+gpTd1CRGU9ez8ytyyDS+Kns2r9NzgUp1s56kpw@mail.gmail.com
* nbtree: tighten up array recheck rules.  (Peter Geoghegan, 2025-05-07)

  Be more conservative when performing a scheduled recheck of an nbtree
  scan's array keys once on the next page, having set so->scanBehind:
  back out of reading the page (perform another primitive scan instead)
  when the next page's high key/finaltup has an untruncated prefix of
  matching values and truncated suffix attributes associated with
  lower-order keys.  In other words, stop assuming that the lower-order
  keys have been satisfied by the truncated suffix attributes in this
  context (only do so when considering scheduling a recheck within
  _bt_advance_array_keys).

  The new behavior is more logical: if the next page read after setting
  so->scanBehind can only contain tuples that are themselves "behind the
  scan", that's reason enough to cut our losses.

  In general, when we set so->scanBehind, we only expect to perform one
  recheck on the next page to make a final decision about whether or not
  to continue the current primitive index scan.  It seems unprincipled
  for the recheck to allow a _bt_readpage to continue unless the scan's
  arrays will advance/unless the page might actually contain relevant
  tuples.  In practice it is highly unlikely that things will line up
  like this (the untruncated prefix of attribute values from the next
  page's high key is seldom an exact match for their corresponding
  array's current element following array advancement on the
  original/previous page).  That gives us all the more reason to keep
  things simple and consistent.

  This was arguably an oversight in commit 9a2e2a285a, which improved
  nbtree array primitive scan scheduling.

  Author: Peter Geoghegan <pg@bowt.ie>
  Discussion: https://postgr.es/m/CAH2-WzkXzJajgyW-pCQ7vaDPhaT3huU+Zw_j448rpCBEsu2YOQ@mail.gmail.com
* Refactor ChangeVarNodesExtended() using the custom callback  (Alexander Korotkov, 2025-05-07)

  fc069a3a6319 implemented Self-Join Elimination (SJE) and put related
  logic into ChangeVarNodes_walker().  This commit refactors that: it
  removes the SJE-related logic from ChangeVarNodes_walker() and adds a
  custom callback to ChangeVarNodesExtended(), which gets a chance to
  process a node before ChangeVarNodes_walker().  Passing this callback
  to ChangeVarNodesExtended() allows SJE-related node handling to be
  kept within analyzejoins.c.

  Reported-by: Richard Guo <guofenglinux@gmail.com>
  Discussion: https://postgr.es/m/CAMbWs49PE3CvnV8vrQ0Dr%3DHqgZZmX0tdNbzVNJxqc8yg-8kDQQ%40mail.gmail.com
  Author: Andrei Lepikhov <lepihov@gmail.com>
  Author: Alexander Korotkov <aekorotkov@gmail.com>
* Fix some comments related to IO workers  (Michael Paquier, 2025-05-07)

  IO workers are treated as auxiliary processes.  The comments fixed in
  this commit stated that there could be only one auxiliary process of
  each BackendType at the same time.  This is not true for IO workers,
  as up to MAX_IO_WORKERS of them can co-exist at the same time.

  Author: Cédric Villemain <Cedric.Villemain@data-bene.io>
  Co-authored-by: Michael Paquier <michael@paquier.xyz>
  Discussion: https://postgr.es/m/e4a3ac45-abce-4b58-a043-b4a31cd11113@Data-Bene.io
* With GB18030, prevent SIGSEGV from reading past end of allocation.  (Noah Misch, 2025-05-05)

  With GB18030 as source encoding, applications could crash the server
  via SQL functions convert() or convert_from().  Applications
  themselves could crash after passing unterminated GB18030 input to
  libpq functions PQescapeLiteral(), PQescapeIdentifier(),
  PQescapeStringConn(), or PQescapeString().  Extension code could crash
  by passing unterminated GB18030 input to jsonapi.h functions.  All
  those functions have been intended to handle untrusted, unterminated
  input safely.

  A crash required allocating the input such that the last byte of the
  allocation was the last byte of a virtual memory page.  Some malloc()
  implementations take measures against that, making the SIGSEGV hard to
  reach.  Back-patch to v13 (all supported versions).

  Author: Noah Misch <noah@leadboat.com>
  Author: Andres Freund <andres@anarazel.de>
  Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
  Backpatch-through: 13
  Security: CVE-2025-4207
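  The class of bug, as an illustrative sketch (not the actual decoder):
  a GB18030 lead byte announces a 2- or 4-byte sequence, so a
  length-aware scanner must confirm those continuation bytes exist
  before touching them:

      /* Return the sequence length at s, never reading past
       * s + remaining; -1 means truncated or invalid input.
       * Byte ranges follow the GB18030 definition: lead 0x81-0xFE,
       * with a second byte of 0x30-0x39 marking a 4-byte form. */
      static int
      gb18030_mblen_bounded(const unsigned char *s, int remaining)
      {
          if (s[0] <= 0x7f)
              return 1;                   /* ASCII */
          if (remaining >= 2 && s[0] >= 0x81 && s[0] <= 0xfe)
          {
              if (s[1] >= 0x30 && s[1] <= 0x39)
                  return (remaining >= 4) ? 4 : -1;
              return 2;
          }
          return -1;
      }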
* Translation updates  (Peter Eisentraut, 2025-05-05)

  Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
  Source-Git-Hash: f90ee4803c30491e5c49996b973b8a30de47bfb2
* Revert "Refactor ChangeVarNodesExtended() using the custom callback"Alexander Korotkov2025-05-03
| | | | | | | | This reverts commit 250a718aadad68793e82103282247556a46a3cfc. It shouldn't be pushed during the release freeze. Reported-by: Tom Lane Discussion: https://postgr.es/m/E1uBIbY-000owH-0O%40gemulon.postgresql.org
* Refactor ChangeVarNodesExtended() using the custom callback  (Alexander Korotkov, 2025-05-03)

  fc069a3a6319 implemented Self-Join Elimination (SJE) and put related
  logic into ChangeVarNodes_walker().  This commit refactors that: it
  removes the SJE-related logic from ChangeVarNodes_walker() and adds a
  custom callback to ChangeVarNodesExtended(), which gets a chance to
  process a node before ChangeVarNodes_walker().  Passing this callback
  to ChangeVarNodesExtended() allows SJE-related node handling to be
  kept within analyzejoins.c.

  Reported-by: Richard Guo <guofenglinux@gmail.com>
  Discussion: https://postgr.es/m/CAMbWs49PE3CvnV8vrQ0Dr%3DHqgZZmX0tdNbzVNJxqc8yg-8kDQQ%40mail.gmail.com
  Author: Andrei Lepikhov <lepihov@gmail.com>
  Author: Alexander Korotkov <aekorotkov@gmail.com>
* Fix memory allocation/copy mistakes.  (Etsuro Fujita, 2025-05-03)

  The previous code was allocating more memory and copying more data
  than necessary because it specified the wrong PgStat_KindInfo member
  as the size argument for MemoryContextAlloc and memcpy, respectively.

  Although these issues have existed since 5891c7a8e, there have been no
  reports from the field.  So for now, it seems sufficient to fix them
  in master.

  Author: Etsuro Fujita <etsuro.fujita@gmail.com>
  Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
  Reviewed-by: Gurjeet Singh <gurjeet@singh.im>
  Discussion: https://postgr.es/m/CAPmGK15eTRCZTnfgQ4EuBNo%3DQLYGFEbXS_7m2dXqtkcT7L8qrQ%40mail.gmail.com
* Fix typos in comments.  (Etsuro Fujita, 2025-05-03)

  Also adjust the phrasing in the comments.

  Author: Etsuro Fujita <etsuro.fujita@gmail.com>
  Author: Heikki Linnakangas <hlinnaka@iki.fi>
  Reviewed-by: Tender Wang <tndrwang@gmail.com>
  Reviewed-by: Gurjeet Singh <gurjeet@singh.im>
  Reviewed-by: Michael Paquier <michael@paquier.xyz>
  Discussion: https://postgr.es/m/CAPmGK17%3DPHSDZ%2B0G6jcj12buyyE1bQQc3sbp1Wxri7tODT-SDw%40mail.gmail.com
  Backpatch-through: 15
* Avoid treating nonrequired nbtree keys as required.  (Peter Geoghegan, 2025-05-02)

  Consistently prevent nbtree array advancement from treating a scankey
  as required when operating in pstate.forcenonrequired mode.
  Otherwise, we risk a NULL pointer dereference.  This was possible in
  the path where _bt_check_compare is called to recheck a tuple that
  advanced all of the scan's arrays to matching values: its
  continuescan=false handling expects _bt_advance_array_keys to have
  been called with a valid pstate, but it'll always be NULL during
  sktrig_required=false calls (which is how _bt_advance_array_keys must
  be called when pstate.forcenonrequired).

  Oversight in commit 8a510275, which optimized nbtree search scan key
  comparisons.

  Author: Peter Geoghegan <pg@bowt.ie>
  Reported-By: Mark Dilger <mark.dilger@enterprisedb.com>
  Discussion: https://postgr.es/m/CAHgHdKsn2W=gPBmj7p6MjQFvxB+zZDBkwTSg0o3f5Hh8rkRrsA@mail.gmail.com
  Discussion: https://postgr.es/m/CAH2-WzmodSE+gpTd1CRGU9ez8ytyyDS+Kns2r9NzgUp1s56kpw@mail.gmail.com
* Fix memory leak in _gin_parallel_merge  (Tomas Vondra, 2025-05-02)

  To insert the merged GIN entries in _gin_parallel_merge, the leader
  calls ginEntryInsert().  This may allocate memory, e.g. for a new leaf
  tuple.  This was allocated in the PortalContext, and kept until the
  end of the index build.  For most GIN indexes the amount of leaked
  memory is negligible, but for custom opclasses with large keys it may
  cause OOMs.

  Fixed by calling ginEntryInsert() in a temporary memory context, reset
  after each insert.  Other ginEntryInsert() callers do this too, except
  that the context is reset after batches of inserts.  More frequent
  resets don't seem to hurt performance; they may even help it a bit.

  Report and fix by Vinod Sridharan.

  Author: Vinod Sridharan <vsridh90@gmail.com>
  Reviewed-by: Tomas Vondra <tomas@vondra.me>
  Discussion: https://postgr.es/m/CAFMdLD4p0VBd8JG=Nbi=BKv6rzFAiGJ_sXSFrw-2tNmNZFO5Kg@mail.gmail.com
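  The general shape of that pattern, as a hedged sketch (the context
  name and loop are illustrative; the memory-context calls are the real
  API, and the ginEntryInsert() arguments are elided):

      MemoryContext insCtx = AllocSetContextCreate(CurrentMemoryContext,
                                                   "GIN merge insert",
                                                   ALLOCSET_DEFAULT_SIZES);

      /* for each merged GIN entry ... */
      {
          MemoryContext oldCtx = MemoryContextSwitchTo(insCtx);

          /* ginEntryInsert(...): its allocations now land in insCtx */

          MemoryContextSwitchTo(oldCtx);
          MemoryContextReset(insCtx);   /* free per-insert allocations */
      }

      MemoryContextDelete(insCtx);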
* Don't use a tuplestore if we don't have to for SQL-language functions.  (Tom Lane, 2025-05-02)

  We only need a tuplestore if we're actually going to accumulate
  multiple result tuples.  Obviously then we don't need one for non-set-
  returning functions; but even a SRF doesn't need one if we decide to
  use "lazyEval" (one row at a time) mode.  In these cases, it's
  sufficient to use the junkfilter's result slot to hold the single row
  that's due to be returned.  We just need to "materialize" that slot to
  ensure it holds onto the data past shutdown of the sub-executor.

  The original intent of this patch was partially to save a few cycles
  (by not putting tuples into a tuplestore only to pull them back out
  immediately), but mostly to ensure that we don't use a tuplestore in
  non-set-returning functions.  That's because I had concerns about
  whether a tuplestore is safe to keep across queries, which was
  possible for functions invoked via long-lived FmgrInfos such as those
  kept in the typcache.  There are no cases where SRFs are called that
  way, so getting rid of the tuplestore in non-SRFs should make things
  safer.

  However, it emerges that running fmgr_sql in a short-lived context (as
  595d1efed made it do) makes the existing coding unsafe anyway: we can
  end up with a long-lived TupleTableSlot holding a freeable reference
  to a short-lived tuple, resulting in a double-free crash.  Not trying
  to pull tuples out of the tuplestore using that slot dodges the
  problem, so I'm going to commit this now rather than invent a band-aid
  solution for v18.

  Reported-by: Alexander Lakhin <exclusion@gmail.com>
  Author: Tom Lane <tgl@sss.pgh.pa.us>
  Discussion: https://postgr.es/m/2443532.1744919968@sss.pgh.pa.us
  Discussion: https://postgr.es/m/9f975803-1a1c-4f21-b987-f572e110e860@gmail.com
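  The "materialize" step mentioned above, as a minimal sketch (the slot
  variables are assumptions; ExecFilterJunk() and ExecMaterializeSlot()
  are the real executor calls):

      /* Keep the single result row in the junkfilter's output slot ... */
      TupleTableSlot *slot = ExecFilterJunk(junkFilter, resultSlot);

      /* ... and force the slot to own a local copy of the tuple, so the
       * data survives shutdown of the sub-executor that produced it. */
      ExecMaterializeSlot(slot);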
* Handle self-referencing FKs correctly in partitioned tables  (Álvaro Herrera, 2025-05-02)

  For self-referencing foreign keys in partitioned tables, we weren't
  handling creation of pg_constraint rows during CREATE TABLE PARTITION
  AS as well as ALTER TABLE ATTACH PARTITION.  This is an old bug --
  mostly, we broke this in 614a406b4ff1 while trying to fix it (so
  12.13, 13.9, 14.6 and 15.0 and up all behave incorrectly).  This
  commit reverts part of that with additional fixes for full
  correctness, and installs more tests to verify the parts we broke, not
  just the catalog contents but also the user-visible behavior.
  Backpatch to all live branches.

  In branches 13 and 14, commit 46a8c27a7226 changed the behavior during
  DETACH to drop a FK constraint rather than trying to repair it,
  because the complete fix of repairing catalog constraints was
  problematic due to lack of previous fixes.  For this reason, the test
  behavior in those branches is a bit different.  However, as best as I
  can tell, the fix works correctly there.

  In release notes we have to recommend that all self-referencing
  foreign keys on partitioned tables be recreated if partitions have
  been created or attached after the FK was created, keeping in mind
  that violating rows might already be present on the referencing side.

  Reported-by: Guillaume Lelarge <guillaume@lelarge.info>
  Reported-by: Matthew Gabeler-Lee <fastcat@gmail.com>
  Reported-by: Luca Vallisa <luca.vallisa@gmail.com>
  Discussion: https://postgr.es/m/CAECtzeWHCA+6tTcm2Oh2+g7fURUJpLZb-=pRXgeWJ-Pi+VU=_w@mail.gmail.com
  Discussion: https://postgr.es/m/18156-a44bc7096f0683e6@postgresql.org
  Discussion: https://postgr.es/m/CAAT=myvsiF-Attja5DcWoUWh21R12R-sfXECY2-3ynt8kaOqjw@mail.gmail.com
* Make "directory" setting work with extension_control_pathPeter Eisentraut2025-05-02
| | | | | | | | | | | | | | | | | | | | | | | | | | | | The extension_control_path setting (commit 4f7f7b03758) did not support extensions that set a custom "directory" setting in their control file. Very few extensions use that and during the discussion on the previous commit it was suggested to maybe remove that functionality. But a fix was easier than initially thought, so this just adds that support. The fix is to use the control->control_dir as a share dir to return the path of the extension script files. To make this work more sensibly overall, the directory suffix "extension" is no longer to be included in the extension_control_path value. To quote the patch, it would be -extension_control_path = '/usr/local/share/postgresql/extension:/home/my_project/share/extension:$system' +extension_control_path = '/usr/local/share/postgresql:/home/my_project/share:$system' During the initial patch, there was some discussion on which of these two approaches would be better, and the committed patch was a 50/50 decision. But the support for the "directory" setting pushed it the other way, and also it seems like many people didn't like the previous behavior much. Author: Matheus Alcantara <mths.dev@pm.me> Reviewed-by: Christoph Berg <myon@debian.org> Reviewed-by: David E. Wheeler <david@justatheory.com> Discussion: https://www.postgresql.org/message-id/flat/aAi1VACxhjMhjFnb%40msg.df7cb.de#0cdf7b7d727cc593b029650daa3c4fbc
* Adjust overstrong nbtree skip array assertion.  (Peter Geoghegan, 2025-04-30)

  Make an nbtree array preprocessing assertion account for scans that
  add fewer skip arrays than initially expected due to preprocessing
  finding an unsatisfiable array qual.

  Oversight in commit 92fe23d9.

  Author: Peter Geoghegan <pg@bowt.ie>
  Reported-By: Mark Dilger <mark.dilger@enterprisedb.com>
  Discussion: https://postgr.es/m/CAHgHdKtQMhHy5qcB3KqCcGiW-Rp8P7KzUFRa9ZMKUiv6zen7LQ@mail.gmail.com
* Convert strncpy to strlcpy  (Daniel Gustafsson, 2025-04-30)

  We try to avoid using strncpy() due to the ease with which it can be
  misused.  Convert this callsite to use strlcpy() instead to match
  similar codepaths in this file.

  Suggested-by: Peter Eisentraut <peter@eisentraut.org>
  Discussion: https://postgr.es/m/2a796830-de2d-4030-b480-d673f6cc5d94@eisentraut.org
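  The general shape of such a conversion (buffer and source names are
  illustrative; strlcpy() comes from the BSDs, and PostgreSQL carries a
  fallback implementation where the platform lacks it):

      char    name[64];

      /* Before: if strlen(src) >= sizeof(name), no terminator is written */
      strncpy(name, src, sizeof(name));

      /* After: always nul-terminates, truncating if necessary */
      strlcpy(name, src, sizeof(name));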
* Typo and doc fixups for memory context reporting  (Daniel Gustafsson, 2025-04-30)

  This fixes typos in comments and docs, and makes a small documentation
  change for clarity.  Found via post-commit review.

  Author: Rahila Syed <rahilasyed90@gmail.com>
  Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
  Discussion: https://postgr.es/m/CAH2L28vt16C9xTuK+K7QZvtA3kCNWXOEiT=gEekUw3Xxp9LVQw@mail.gmail.com
* Add missing string terminator  (Daniel Gustafsson, 2025-04-30)

  When copying the string, strncpy won't add nul termination since the
  string length is equal to the length specified.  Explicitly set a nul
  terminator after copying to properly terminate.  Found via post-commit
  review.

  Author: Rahila Syed <rahilasyed90@gmail.com>
  Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
  Discussion: https://postgr.es/m/CAH2L28vt16C9xTuK+K7QZvtA3kCNWXOEiT=gEekUw3Xxp9LVQw@mail.gmail.com
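  The fix pattern, sketched with assumed names (the key point is that
  strncpy() writes no terminator when the copy fills the given length):

      strncpy(dest, src, len);    /* copies exactly len bytes, no '\0' */
      dest[len] = '\0';           /* dest must hold at least len + 1 bytes */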
* Fix broken indentation  (David Rowley, 2025-04-30)

  I forgot to run pgindent in d8555e522.

  Reported-by: Fujii Masao <masao.fujii@oss.nttdata.com>
  Discussion: https://postgr.es/m/156083c9-eac0-418d-9667-92dec4d6d6cd@oss.nttdata.com
* Fix a couple of comment typos  (David Rowley, 2025-04-30)

  Author: Junwang Zhao <zhjwpku@gmail.com>
  Discussion: https://postgr.es/m/CAEG8a3+MRwDKc4YSFKKPKq7Y+vMufVC5u94wM5KZPB2CbgCxnQ@mail.gmail.com
* Give up on running with NetBSD/OpenBSD's default semaphore settings.  (Tom Lane, 2025-04-29)

  This reverts commit 38da053463bef32adf563ddee5277d16d2b6c5af, which
  attempted to preserve our ability to start with only 60 semaphores.
  Subsequent changes (particularly 55b454d0e) have put that idea pretty
  much permanently out of reach: people wishing to use Postgres v18 on
  OpenBSD or NetBSD will have no choice but to increase those platforms'
  default values of SEMMNI and SEMMNS.

  Hence, revert 38da05346's changes in SEMAS_PER_SET and the minimum
  tested value of max_connections.  Adjust a comment from the subsequent
  patch 6d0154196, and tweak the wording in runtime.sgml to make it
  clear that changing SEMMNI/SEMMNS is no longer even a little bit
  optional on these platforms.

  Although 38da05346 was later back-patched into v17, leave that branch
  alone: it's still capable of starting with 60 semaphores, and there's
  no reason to break that.

  Author: Tom Lane <tgl@sss.pgh.pa.us>
  Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
  Discussion: https://postgr.es/m/E1tuZNv-0037Gs-34@gemulon.postgresql.org
  Discussion: https://postgr.es/m/1052019.1745947915@sss.pgh.pa.us
* Fixes for ChangeVarNodes_walker()  (Alexander Korotkov, 2025-04-29)

  This commit fixes two bugs in the ChangeVarNodes_walker() function.

  * When considering RestrictInfo, walk down to its clauses based on the
    presence of the relid to be deleted not just in clause_relids but
    also in required_relids.

  * Incrementally adjust num_base_rels based on the change of
    clause_relids instead of recalculating it using clause_relids, which
    could contain outer-join relids.

  Reported-by: Richard Guo <guofenglinux@gmail.com>
  Discussion: https://postgr.es/m/CAMbWs49PE3CvnV8vrQ0Dr%3DHqgZZmX0tdNbzVNJxqc8yg-8kDQQ%40mail.gmail.com
  Author: Andrei Lepikhov <lepihov@gmail.com>
  Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
* Fix assertion failure during decoding from synced slots.  (Amit Kapila, 2025-04-29)

  The slot synchronization skips updating the confirmed_flush LSN of the
  local slot if the local slot has a newer catalog_xmin or restart_lsn,
  but still allows updating the two_phase and two_phase_at fields of the
  slot.  This opens up a window for the prepared transactions between
  the old confirmed_flush LSN and two_phase_at to unexpectedly get
  decoded and sent to the downstream after promotion.  Then, while
  decoding the commit prepared record, an assert fails because it
  expects that the prepare hasn't been sent to the downstream.

  The fix is to skip updating the other slot fields when we are skipping
  the update of the confirmed_flush LSN of the slot.

  We didn't backpatch this commit as two_phase_at was not synced in back
  branches, which means prepared transactions won't be unexpectedly sent
  to the downstream.

  We discovered this problem while analyzing a BF failure reported in
  the discussion link.  Reliably reproducing this issue without a
  debugger is difficult.  Given its rarity, adding a specific injection
  point to test it doesn't seem worthwhile, so we won't be adding a
  dedicated test case.

  Author: Zhijie Hou <houzj.fnst@fujitsu.com>
  Reviewed-by: shveta malik <shveta.malik@gmail.com>
  Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
  Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
  Discussion: https://postgr.es/m/OS0PR01MB5716B44052000EB91EFAE60E94BC2@OS0PR01MB5716.jpnprd01.prod.outlook.com
* Add maintenance_io_concurrency flag to some read stream users  (Melanie Plageman, 2025-04-28)

  Index vacuuming and [auto]prewarm AIO concurrency should be governed
  by maintenance_io_concurrency.  As such, pass those read stream users
  the READ_STREAM_MAINTENANCE flag, which will calculate their read
  stream distance with maintenance_io_concurrency instead of
  effective_io_concurrency.  This was an oversight in the original
  commits making those operations use the read stream API.

  Discussion: https://postgr.es/m/flat/CAAKRu_aopDxTo4b41Mt_7Zc-z0_ngocrY8SFCCY6Aph1HgwuNw%40mail.gmail.com
* Fix obsolete nbtree array advancement comment.  (Peter Geoghegan, 2025-04-28)

  Commit 9a2e2a28 moved the check of whether another primitive scan is
  required after all (once the scan reaches the next leaf page) from
  _bt_checkkeys to its _bt_readpage caller.  Update a comment that
  incorrectly described the recheck mechanism as something that takes
  place in _bt_checkkeys.

  Also fix an older typo in related code comments.
* Make NULL tuple values always advance skip arrays.  (Peter Geoghegan, 2025-04-28)

  _bt_check_compare neglected to handle a case that can arise when the
  scan's keys are temporarily treated as nonrequired, as an
  optimization: whenever a NULL tuple value was encountered that had a
  skip array whose current element wasn't already NULL,
  _bt_check_compare failed to advance the array to the NULL element.
  This allowed _bt_check_compare to fail to return matching tuples
  containing a NULL value (though only with an array column that came
  before a skip array column with NULLs, and only during _bt_readpage
  calls that set pstate.forcenonrequired=true on a page where the
  higher-order column also had to advance).

  To fix, teach _bt_check_compare to handle this case just like any
  other case where a skip array key is unsatisfied and must be advanced
  directly (due to the key being considered a nonrequired key).

  Oversight in commit 8a510275, which optimized nbtree search scan key
  comparisons with skip arrays.

  Author: Peter Geoghegan <pg@bowt.ie>
  Reported-By: Mark Dilger <mark.dilger@enterprisedb.com>
  Discussion: https://postgr.es/m/CAHgHdKtLFWZcjr87hMH0hYDHgcifu4Tj7iHz-xh8qsJREt5cqA@mail.gmail.com
* Restore comments in ChangeVarNodesExtended()  (Alexander Korotkov, 2025-04-28)

  This commit restores comments in ChangeVarNodesExtended(), which were
  accidentally removed by fc069a3a6319.

  Reported-by: Richard Guo <guofenglinux@gmail.com>
  Discussion: https://postgr.es/m/CAMbWs49PE3CvnV8vrQ0Dr%3DHqgZZmX0tdNbzVNJxqc8yg-8kDQQ%40mail.gmail.com
* Fix xmin advancement during fast_forward decoding.  (Amit Kapila, 2025-04-28)

  During logical decoding, we advance the catalog_xmin of the logical
  slot too early in fast_forward mode, resulting in required catalog
  data being removed by vacuum.  This mode is normally used to advance
  the slot without processing the changes, but we still can't let the
  slot's xmin advance to an incorrect value.

  Commit f49a80c481 fixed a similar issue where the logical slot's
  catalog_xmin was getting advanced prematurely during non-fast-forward
  mode.  During xl_running_xacts processing, instead of directly
  advancing the slot's xmin to the oldest running xid in the record, it
  allowed the xmin to be held back for snapshots that can be used for
  not-yet-replayed transactions, as those might consider older txns as
  running too.  However, it missed the fact that the same problem can
  happen during fast_forward mode decoding, as we won't build a base
  snapshot in that mode, and the future call to get_changes from the
  same slot can miss seeing the required catalog changes, leading to
  incorrect results.

  This commit allows building the base snapshot even in fast_forward
  mode to prevent the early advancement of xmin.

  Reported-by: Amit Kapila <amit.kapila16@gmail.com>
  Author: Zhijie Hou <houzj.fnst@fujitsu.com>
  Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
  Reviewed-by: shveta malik <shveta.malik@gmail.com>
  Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
  Backpatch-through: 13
  Discussion: https://postgr.es/m/CAA4eK1LqWncUOqKijiafe+Ypt1gQAQRjctKLMY953J79xDBgAg@mail.gmail.com
  Discussion: https://postgr.es/m/OS0PR01MB57163087F86621D44D9A72BF94BB2@OS0PR01MB5716.jpnprd01.prod.outlook.com
* Remove circular #include's between wait_event.h and wait_event_types.h  (Michael Paquier, 2025-04-28)

  wait_event_types.h is generated by the code and included wait_event.h,
  while wait_event.h included wait_event_types.h in turn, causing a
  circular dependency between the two.  wait_event_types.h only needs to
  know about the wait event classes, so this information is moved into
  its own file, and wait_event_types.h now uses this new header so that
  it no longer depends on wait_event.h.

  Note that such errors can be found with clang-tidy, with commands like
  this one:

      clang-tidy source_file.c --checks=misc-header-include-cycle -- \
          -I/install/path/include/ -I/install/path/include/server/

  Issue introduced by fa88928470b5.

  Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
  Discussion: https://postgr.es/m/350192.1745768770@sss.pgh.pa.us
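  The resulting include graph, sketched with an assumed name and path
  for the new header (the principle: the shared definitions sink into a
  leaf header that includes nothing else from this group):

      /* wait_classes.h (new, assumed name): wait-class constants only,
       * no other wait_event includes */

      /* wait_event_types.h (generated) */
      #include "utils/wait_classes.h"     /* no longer wait_event.h */

      /* wait_event.h */
      #include "utils/wait_event_types.h" /* dependency now runs one way */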
* Disallow removing placeholders during Self-Join Elimination.  (Alexander Korotkov, 2025-04-28)

  fc069a3a6319 implements Self-Join Elimination (SJE), which can remove
  base relations when appropriate.  However, regression tests for SJE
  only cover the case when placeholder variables (PHVs) are evaluated
  and needed only in a single base rel.  If this baserel is removed due
  to SJE, its clauses, including PHVs, will be transferred to the
  keeping relation.  Removing these PHVs may trigger an error on plan
  creation -- thanks to b3ff6c742f6c for detecting that.

  This commit skips removal of PHVs during SJE.  It might happen that we
  thereby skip the removal of some PHVs that could have been removed.
  However, the overhead of extra PHVs is small compared to the
  complexity of the analysis needed to remove them.

  Reported-by: Alexander Lakhin <exclusion@gmail.com>
  Author: Alena Rybakina <a.rybakina@postgrespro.ru>
  Author: Andrei Lepikhov <lepihov@gmail.com>
  Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
  Reviewed-by: Richard Guo <guofenglinux@gmail.com>
* Don't use double-quotes in #include's of system headers, redux.  (Tom Lane, 2025-04-27)

  This cleans up some loose ends left by commit e8ca9ed1d.  I hadn't
  looked closely enough at these places before, but now I have.

  The use of double-quoted #includes for Perl headers in
  plperl_system.h seems to be simply a mistake introduced in 6c944bf3c
  and faithfully copied forward since then.  (I had thought possibly it
  was required by some weird Windows build setup, but there's no
  evidence of that in our history.)

  The occurrences in SectionMemoryManager.h and SectionMemoryManager.cpp
  evidently stem from those files' origin as LLVM code.  It's
  understandable that LLVM would treat their own files as needing
  double-quoted #includes; but they're still system headers to us.

  I also applied the same check to *.c files, and found a few other
  random incorrect usages in both directions.

  Our ECPG headers and test files routinely use angle brackets to refer
  to ECPG headers.  I left those usages alone, since it seems reasonable
  for an ECPG user to regard those headers as system headers.
* Eliminate divide in new fast-path locking code  (David Rowley, 2025-04-27)

  c4d5cb71d2 adjusted the fast-path locking code to allow some
  configuration of the number of fast-path locking slots via the
  max_locks_per_transaction GUC.  In that commit the
  FAST_PATH_REL_GROUP() macro used integer division to determine the
  fast-path locking group slot to use for the lock.  The divisor in this
  case is always a power-of-two value.  Here we swap out the divide by a
  bitwise-AND, which is a significantly faster operation to perform.

  In passing, adjust the code that's setting
  FastPathLockGroupsPerBackend so that it's more clear that the value
  being set is a power-of-two.

  Also, adjust some comments in the area which contained some magic
  numbers.  It seems better to justify the 1024 upper limit in the
  location where the #define is made instead of where it is used.

  Author: David Rowley <drowleyml@gmail.com>
  Reviewed-by: Tomas Vondra <tomas@vondra.me>
  Discussion: https://postgr.es/m/CAApHDvodr3bcnpxcs7+k-3cFwYR0tP-BYhyd2PpDhe-bCx9i=g@mail.gmail.com
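  The identity being exploited, as a self-contained sketch (the function
  and names are illustrative, not the actual macro definitions):

      #include <assert.h>
      #include <stdint.h>

      /* For any power-of-two d:  x % d == x & (d - 1). */
      static inline uint32_t
      fastpath_group(uint32_t hash, uint32_t groups)
      {
          /* the transformation is only valid for power-of-two divisors */
          assert(groups > 0 && (groups & (groups - 1)) == 0);

          return hash & (groups - 1);   /* replaces hash % groups */
      }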
* aio: Improve debug logging around waiting for IOs  (Andres Freund, 2025-04-25)

  Trying to investigate a bug report by Alexander Lakhin made it
  apparent that the debug logging around waiting for IO completion is
  insufficient.  Fix that.

  Discussion: https://postgr.es/m/h4in2db37vepagmi2oz5vvqymjasc5gyb4lpqkunj4eusu274i@37jpd3c2spd3
* Fix bug allowing io_combine_limit > io_max_combine_limit  (Andres Freund, 2025-04-25)

  10f66468475 intended to limit the value of io_combine_limit to the
  minimum of io_combine_limit and io_max_combine_limit.  To avoid issues
  with interdependent GUCs, it introduced io_combine_limit_guc and set
  io_combine_limit in assign hooks.  That plan was thwarted by
  guc_tables.c accidentally still referencing io_combine_limit, instead
  of io_combine_limit_guc.  That led to the GUC machinery overriding the
  work done in the assign hooks, potentially leaving io_combine_limit
  with a too-high value.

  The consequence of this bug was that when running with
  io_combine_limit > io_combine_limit_guc the AIO machinery would not
  have reserved large enough iovec and IO data arrays, with one IO's
  arrays overlapping with another IO's, leading to total confusion.

  To make such a problem easier to detect in the future, add assertions
  to pgaio_io_set_handle_data_* checking the length is smaller than
  io_max_combine_limit (not just PG_IOV_MAX).

  It'd be nice to have a few tests for this, but it's not entirely
  obvious how to do so portably.

  As remarked upon by Tom, the GUC assignment hooks really shouldn't set
  the underlying variable; that's the job of the GUC machinery.  Change
  that as well.

  Discussion: https://postgr.es/m/c5jyqnuwrpigd35qe7xdypxsisdjrdba5iw63mhcse4mzjogxo@qdjpv22z763f
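  A hedged sketch of the hook discipline described in the last
  paragraph (hook and variable names are assumptions; the int-GUC
  assign-hook signature is the real one):

      void
      assign_io_combine_limit(int newval, void *extra)
      {
          /* Do NOT write the GUC variable itself here -- the GUC
           * machinery assigns it after this hook returns.  Only derive
           * the dependent, clamped runtime value. */
          io_combine_limit = Min(newval, io_max_combine_limit);
      }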
* aio: Fix crash potential for pg_aios views due to late state update  (Andres Freund, 2025-04-25)

  pgaio_io_reclaim() reset the fields in PgAioHandle before updating the
  state to IDLE or incrementing the generation.  For most things that's
  OK, but for pg_get_aios() it is not - if it copied the PgAioHandle
  while fields were being reset, we wouldn't detect that and could call
  pgaio_io_get_target_description() with ioh->target == PGAIO_TID_INVALID,
  leading to a crash.

  Fix this issue by incrementing the generation and state earlier,
  before resetting.

  Also add an assertion to pgaio_io_get_target_description() for the
  target to be valid - that'd have made this case a bit easier to debug.
  While at it, add/update a few related assertions.

  Author: Alexander Lakhin <exclusion@gmail.com>
  Discussion: https://postgr.es/m/062daca9-dfad-4750-9da8-b13388301ad9@gmail.com
* Add sanity check for dshash entries when reading pgstats file  (Michael Paquier, 2025-04-24)

  Not having this check would produce a core dump at startup when
  running pgstat_read_statsfile(), in the case where the information of
  a stats kind for an entry in the dshash could not be found.  The same
  check already happens for fixed-numbered stats and entries that are
  stored with their names.  This issue can be seen with custom stats
  kinds.

  Note that this problem can be reproduced with what is in the core
  code:
  - Tweak the test module injection_points to not load the
    fixed-numbered stats part, leaving only the variable-numbered stats.
  - Create an instance with injection_points defined in
    shared_preload_libraries.
  - Create a pgstats entry by attaching and running a point.
  - Restart the server without shared_preload_libraries.  The startup
    process detects that something is wrong and reports a WARNING.

  Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
  Discussion: https://postgr.es/m/aAieZAvM+K1d89R2@ip-10-97-1-34.eu-west-3.compute.internal
* Avoid possibly-theoretical OOM crash hazard in hash_create().  (Tom Lane, 2025-04-23)

  One place in hash_create() used DynaHashAlloc() as a convenient
  shorthand for MemoryContextAlloc().  That was fine when it was
  written, but it stopped being fine when 9c911ec06 changed
  DynaHashAlloc() to use MCXT_ALLOC_NO_OOM (mea culpa).  Change the code
  to call plain MemoryContextAlloc() as intended.

  I think that this bug may be unreachable in practice, since we now
  always create AllocSets with some space already allocated, so that an
  OOM failure here for a non-shared hash table should be impossible
  (with a hash table name of reasonable length anyway).  And there
  aren't enough shared hash tables to make a crash for one of those
  probable.  Nonetheless it's clearly not operating as designed, so
  back-patch to v16 where 9c911ec06 came in.

  Reported-by: Maksim Korotkov <m.korotkov@postgrespro.ru>
  Author: Tom Lane <tgl@sss.pgh.pa.us>
  Discussion: https://postgr.es/m/219bdccd460510efaccf90b57e5e5ef2@postgrespro.ru
  Backpatch-through: 16
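  The hazard in miniature, as a sketch with assumed variable names:
  with MCXT_ALLOC_NO_OOM, the allocator returns NULL on failure instead
  of raising an error, so a caller that never checks for NULL can crash:

      /* Before: returns NULL on OOM; subsequent use of hashp crashes */
      hashp = (HTAB *) DynaHashAlloc(sizeof(HTAB) + strlen(tabname) + 1);

      /* After: MemoryContextAlloc() ereport()s on OOM, never returns NULL */
      hashp = (HTAB *) MemoryContextAlloc(CurrentDynaHashCxt,
                                          sizeof(HTAB) + strlen(tabname) + 1);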
* Maintain RelIdToTypeIdCacheHash in TypeCacheOpcCallback()  (Alexander Korotkov, 2025-04-23)

  b85a9d046efd introduced a new RelIdToTypeIdCacheHash, whose entries
  should exist for typecache entries with the TCFLAGS_HAVE_PG_TYPE_DATA
  flag set, or any of TCFLAGS_OPERATOR_FLAGS set, or tupDesc set.
  However, TypeCacheOpcCallback(), which resets TCFLAGS_OPERATOR_FLAGS,
  neglected to update RelIdToTypeIdCacheHash.  This commit adds a
  delete_rel_type_cache_if_needed() call to the TypeCacheOpcCallback()
  function to maintain RelIdToTypeIdCacheHash after resetting
  TCFLAGS_OPERATOR_FLAGS.

  Also, this commit fixes the name of the
  delete_rel_type_cache_if_needed() function in its mentions in the
  comments.

  Reported-by: Noah Misch
  Discussion: https://postgr.es/m/20250411203241.e9.nmisch%40google.com
* Properly prepare varinfos in estimate_multivariate_bucketsize()  (Alexander Korotkov, 2025-04-23)

  To estimate with extended statistics, we need to clear the
  varnullingrels field in the expression, and duplicates are not allowed
  in the GroupVarInfo list.  We might re-use add_unique_group_var(), but
  we don't do so for two reasons.

  1) We must keep the origin_rinfos list ordered exactly the same way as
     varinfos.

  2) add_unique_group_var() is designed for estimate_num_groups(), where
     a larger number of groups is worse.  While estimating the number of
     hash buckets, we have the opposite: a lesser number of groups is
     worse.  Therefore, we don't have to remove "known equal" vars: the
     removed var may valuably contribute to the multivariate statistics
     to grow the number of groups.

  This commit adds custom code to estimate_multivariate_bucketsize() to
  initialize varinfos properly.

  Reported-by: Robins Tharakan <tharakan@gmail.com>
  Discussion: https://postgr.es/m/18885-da51324078588253%40postgresql.org
  Author: Andrei Lepikhov <lepihov@gmail.com>
  Reviewed-by: David Rowley <dgrowleyml@gmail.com>
  Reviewed-by: Tomas Vondra <tomas@vondra.me>
  Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
* Change the names generated for child foreign key constraints.  (Tom Lane, 2025-04-23)

  When a foreign key constraint is placed on a partitioned table, we
  actually make two pg_constraint entries associated with that table.
  (I have my doubts about the wisdom of that, but it's been like that
  since v12 and post-feature-freeze is no time to be messing with such
  entrenched decisions.)  The second "child" entry always had a name
  generated according to the default rule, "table_column(s)_fkey[nnn]",
  even if the primary entry had an unrelated user-specified name.  The
  trouble with doing that is that the default name could collide with
  the user-specified name of some other constraint on the same table.
  While we were willing to adjust the generated name to avoid
  collisions, that only helps if it's made second; if it's made first
  then creation of the other constraint would fail, potentially causing
  dump/reload or pg_upgrade failures.

  The core of the problem here is that we're infringing on user
  namespace, so I doubt that there's any 100% solution other than to
  find a way to not need the "child" entry.  In the meantime, it seems
  like it'd be an improvement to make the child's name be the name of
  the parent constraint with an underscore and digit(s) appended as
  necessary to make it unique.  This rule can in theory fail in the same
  way, but it seems much less probable; for one thing, this rule is
  guaranteed not to match primary entries having auto-generated names.
  (While an auto-generated primary name isn't user-specified to begin
  with, it acts like that during dump/reload, so collisions against such
  names are definitely possible.)

  An additional bonus, visible in some of the regression test cases that
  change here, arises from the fact that some error messages cite the
  child constraint's name not the parent's.  In the previous approach
  the two names could be completely unrelated, leading to user confusion
  --- the more so since psql's \d command hides child constraints.  With
  this approach it's hopefully much clearer which
  constraint-the-user-knows-about is failing.

  However, that does mean that there's user-visible behavior change
  occurring here, making it seem like not something to back-patch.  I
  feel it's not too late for v18, though.

  Reported-by: Kirill Reshke <reshkekirill@gmail.com>
  Author: Tom Lane <tgl@sss.pgh.pa.us>
  Reviewed-by: Alvaro Herrera <alvherre@kurilemu.de>
  Discussion: https://postgr.es/m/CALdSSPhGitjpTfzEMJN-Y2x+Q-5QChSxAsmSJ1-E8mQJLkHOqQ@mail.gmail.com
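  The naming rule described above, as a hedged sketch (the helper
  function and variable names are illustrative; the real code would go
  through the catalog's uniqueness checks):

      char        childname[NAMEDATALEN];
      int         n;

      /* Try "parentname_1", "parentname_2", ... until one is free */
      for (n = 1; ; n++)
      {
          snprintf(childname, sizeof(childname), "%s_%d", parentname, n);
          if (!constraint_name_in_use(relid, childname))  /* assumed helper */
              break;
      }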
* Fix an oversight in 3f28b2fcac.  (Amit Kapila, 2025-04-23)

  Commit 3f28b2fcac tried to ensure that the replication origin
  shouldn't be advanced in case of an ERROR in the apply worker, so that
  it can request the same data again after restart.  However, it is
  possible that an ERROR was caught and handled by a (say PL/pgSQL)
  function, and the apply worker continues to apply further changes, in
  which case, we shouldn't reset the replication origin.

  Ensure to reset the origin only when the apply worker exits after an
  ERROR.

  Commit 3f28b2fcac added new function geterrlevel, which we removed in
  HEAD as part of this commit, but kept it in backbranches to avoid
  breaking any applications.  A separate case can be made to have such a
  function even for HEAD.

  Reported-by: Shawn McCoy <shawn.the.mccoy@gmail.com>
  Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>
  Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
  Reviewed-by: vignesh C <vignesh21@gmail.com>
  Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
  Backpatch-through: 16, where it was introduced
  Discussion: https://postgr.es/m/CALsgZNCGARa2mcYNVTSj9uoPcJo-tPuWUGECReKpNgTpo31_Pw@mail.gmail.com
* Remove assertion based on pending_since in pgstat_report_stat()  (Michael Paquier, 2025-04-23)

  This assertion, based on pending_since (a timestamp used to prevent
  stats reports from being too frequent, and to handle a partial flush),
  is reached when it is found that no data can be flushed but a previous
  call of pgstat_report_stat() determined that some stats data is in
  need of a flush.  So pending_since is set when some stats data is
  pending (in non-force mode) or if report attempts are too frequent,
  and reset to 0 once all stats have been flushed.

  Since 5cbbe70a9cc6, WAL senders have begun to report their stats on a
  periodic basis for IO stats in v16~ and backend stats on HEAD,
  creating some friction with the concurrent pgstat_report_stat() calls
  that can happen in the context of a WAL sender (shutdown callback
  doing a final report or backend-related code paths).  This problem is
  the cause of spurious failures in the TAP tests.

  In theory, this assertion can be also reached in v15, even if that's
  very unlikely.  For example, a process, say a background worker, could
  do periodic and direct stats flushes with concurrent calls of
  pgstat_report_stat() that could cause conflicting values of
  pending_since.  This can be done with WAL or SLRU stats flushes using
  pgstat_flush_wal() or pgstat_slru_flush().  HEAD makes this situation
  easier to happen with custom cumulative stats.

  This commit removes the assertion altogether, per discussion, as it is
  more useful to keep the state of things as they are for the WAL
  sender.  The assertion could use a special state based on for example
  am_walsender, but I doubt that this would be meaningful in the long
  run based on the other arguments raised while discussing this issue.

  Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
  Reported-by: Andres Freund <andres@anarazel.de>
  Discussion: https://postgr.es/m/1489124.1744685908@sss.pgh.pa.us
  Discussion: https://postgr.es/m/dwrkeszz6czvtkxzr5mqlciy652zau5qqnm3cp5f3p2po74ppk@omg4g3cc6dgq
  Backpatch-through: 15
* gen_node_support.pl: improve error message for unclosed struct.  (Tom Lane, 2025-04-22)

  This error message was 'runaway "struct_name"', which isn't all that
  clear; I think 'could not find closing brace for "struct_name"' is
  better.  Also, provide the location of the struct start using the
  script's usual '$file:$lineno' style.

  Bug: #18901
  Reported-by: Clemens Ruck <clemens.ruck@t-online.de>
  Author: Tom Lane <tgl@sss.pgh.pa.us>
  Discussion: https://postgr.es/m/18901-424272abe01357e6@postgresql.org
* Rename injection point for invalidation messages at end of transaction  (Michael Paquier, 2025-04-22)

  This injection point was named "AtEOXact_Inval-with-transInvalInfo",
  not respecting the implied naming convention that injection points
  should use lower-case characters, with terms separated by dashes.  All
  the other points defined in the tree follow this style, so let's be
  more consistent.

  Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>
  Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
  Discussion: https://postgr.es/m/OSCPR01MB14966E14C1378DEE51FB7B7C5F5B32@OSCPR01MB14966.jpnprd01.prod.outlook.com
  Backpatch-through: 17
* Fix INITCAP() word boundaries for PG_UNICODE_FAST.  (Jeff Davis, 2025-04-21)

  Word boundaries are based on whether a character is alphanumeric or
  not.  For the PG_UNICODE_FAST collation, alphanumeric includes
  non-ASCII digits; whereas for the PG_C_UTF8 collation, it only
  includes digits 0-9.  Pass down the right information from the
  pg_locale_t into initcap_wbnext to differentiate the behavior.

  Reported-by: Noah Misch <noah@leadboat.com>
  Reviewed-by: Noah Misch <noah@leadboat.com>
  Discussion: https://postgr.es/m/20250417135841.33.nmisch@google.com
* Use the same cmd_context throughout a walsender's lifetime.  (Tom Lane, 2025-04-21)

  exec_replication_command created a cmd_context to work in and then
  deleted it on exit.  This is pretty dangerous because some replication
  commands start/finish transactions.  In the wake of commit 1afe31f03,
  that could lead to re-selecting a CurrentMemoryContext that's already
  been deleted, leading to hilarity such as a memory context that is its
  own parent.

  To fix, let's make the cmd_context persist across
  exec_replication_command calls; instead of deleting it, we'll just
  reset it each time.  In this way it retains the same identity and
  there's no problem if transaction abort restores it as the working
  context.  It probably even saves a few microseconds to do this.

  This fix also ensures that exec_replication_command returns to the
  caller (PostgresMain) with the same context active that had been when
  it was called (probably MessageContext).  The previous coding could
  get that wrong too.

  Reported-by: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
  Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
  Discussion: https://postgr.es/m/CAO6_XqoJA7-_G6t7Uqe5nWF3nj+QBGn4F6Ptp=rUGDr0zo+KvA@mail.gmail.com