path: root/src
...

* Use generateClonedIndexStmt to propagate CREATE INDEX to partitions. (Tom Lane, 2024-10-05)

  When instantiating an existing partitioned index for a new child partition, we use generateClonedIndexStmt to build a suitable IndexStmt to pass to DefineIndex. However, when DefineIndex needs to recurse to instantiate a newly created partitioned index on an existing child partition, it was doing copyObject on the given IndexStmt and then applying a bunch of ad-hoc fixups. This has a number of problems, primarily that it implies fresh lookups of referenced objects such as opclasses and collations. Since commit 2af07e2f7 caused DefineIndex to restrict search_path internally, those lookups could fail or deliver different results than the original one. We can avoid those problems and save a few dozen lines of code by using generateClonedIndexStmt in this code path too.

  Another thing this fixes is incorrect propagation of parent-index comments to child indexes (because the copyObject approach copies the idxcomment field while generateClonedIndexStmt doesn't). I had noticed this in connection with commit c01eb619a, but not run the problem to ground.

  I'm tempted to back-patch this further than v17, but the only thing it's known to fix in older branches is the comment issue, which is pretty minor and doesn't seem worth the risk of introducing new issues in stable branches. (If anyone does care about that, clearing idxcomment in the copied IndexStmt would be a safer fix.)

  Per bug #18637 from usamoi. Back-patch to v17 where the search_path change came in.

  Discussion: https://postgr.es/m/18637-f51e314546e3ba2a@postgresql.org

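  For context, a minimal sketch (hypothetical names) of the recursion path this changes — CREATE INDEX on a partitioned table that already has a child:

      CREATE TABLE p (a int) PARTITION BY LIST (a);
      CREATE TABLE p1 PARTITION OF p FOR VALUES IN (1);  -- existing child
      CREATE INDEX p_idx ON p (a);  -- DefineIndex recurses to build p1's
                                    -- index, now via generateClonedIndexStmt
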
* Reject non-ASCII locale names. (Thomas Munro, 2024-10-05)

  Commit bf03cfd1 started scanning all available BCP 47 locale names on Windows. This caused an abort/crash in the Windows runtime library if the default locale name contained non-ASCII characters, because of our use of the setlocale() save/restore pattern with "char" strings. After switching to another locale with a different encoding, the saved name could no longer be understood, and setlocale() would abort.

  "Turkish_Türkiye.1254" is the example from recent reports, but there are other examples of countries and languages with non-ASCII characters in their names, and they appear in Windows' (old style) locale names.

  To defend against this:

  1. In initdb, reject non-ASCII locale names given explicitly on the command line, or returned by the operating system environment with setlocale(..., ""), or "canonicalized" by the operating system when we set it.

  2. In initdb only, perform the save-and-restore with Windows' non-standard wchar_t variant of setlocale(), so that it is not subject to round trip failures stemming from char string encoding confusion.

  3. In the backend, we don't have to worry about the save-and-restore problem because we have already vetted the defaults, so we just have to make sure that CREATE DATABASE also rejects non-ASCII names in any new databases. SET lc_XXX doesn't suffer from the problem, but the ban applies to it too because it uses check_locale(). CREATE COLLATION doesn't suffer from the problem either, but it doesn't use check_locale() so it is not included in the new ban for now, to minimize the change.

  Anyone who encounters the new error message should either create a new duplicated locale with an ASCII-only name using Windows Locale Builder, or consider using BCP 47 names like "tr-TR". Users already couldn't initialize a cluster with "Turkish_Türkiye.1254" on PostgreSQL 16+, but the new failure mode is an error message that explains why, instead of a crash.

  Back-patch to 16, where bf03cfd1 landed. Older versions are affected in theory too, but only 16 and later are causing crash reports.

  Reviewed-by: Andrew Dunstan <andrew@dunslane.net> (the idea, not the patch)
  Reported-by: Haifang Wang (Centific Technologies Inc) <v-haiwang@microsoft.com>
  Discussion: https://postgr.es/m/PH8PR21MB3902F334A3174C54058F792CE5182%40PH8PR21MB3902.namprd21.prod.outlook.com

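  A hedged illustration of the new behavior (hypothetical database name; assumes a Windows build where these locale names exist):

      CREATE DATABASE mydb LOCALE 'Turkish_Türkiye.1254' TEMPLATE template0;
      -- now fails with an error explaining the non-ASCII ban, instead of
      -- risking a runtime-library crash; a BCP 47 name works:
      CREATE DATABASE mydb LOCALE 'tr-TR' TEMPLATE template0;
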
* Fix wrong varnullingrels error for MERGE WHEN NOT MATCHED BY SOURCE. (Dean Rasheed, 2024-10-03)

  If a MERGE command contains WHEN NOT MATCHED BY SOURCE actions, the source relation appears on the outer side of the join. Thus, any Vars referring to the source in the merge join condition, actions, and RETURNING list should be marked as nullable by the join, since they are used in the ModifyTable node above the join. Note that this only applies to the copy of the join condition used in the executor to distinguish MATCHED from NOT MATCHED BY SOURCE cases. Vars in the original join condition, inside the join node itself, should not be marked.

  Failure to correctly mark these Vars led to a "wrong varnullingrels" error in the final stage of query planning, in some circumstances. We happened to get away without this in all previous tests, since they all involved a ModifyTable node directly on top of the join node, so that the top plan targetlist coincided with the output of the join, and the varnullingrels check was more lax. However, if another plan node, such as a one-time filter Result node, gets inserted between the ModifyTable node and the join node, then a stricter check is applied, which fails.

  Per bug #18634 from Alexander Lakhin. Thanks to Tom Lane and Richard Guo for review and analysis.

  Back-patch to v17, where WHEN NOT MATCHED BY SOURCE support was added to MERGE.

  Discussion: https://postgr.es/m/18634-db5299c937877f2b%40postgresql.org

* Fix incorrect non-strict join recheck in MERGE WHEN NOT MATCHED BY SOURCE. (Dean Rasheed, 2024-10-03)

  If a MERGE command contains WHEN NOT MATCHED BY SOURCE actions, the merge join condition is used by the executor to distinguish MATCHED from NOT MATCHED BY SOURCE cases. However, this qual is executed using the output from the join subplan node, which nulls the output from the source relation in the not matched case, and so the result may be incorrect if the join condition is "non-strict" -- for example, something like "src.col IS NOT DISTINCT FROM tgt.col".

  Fix this by enhancing the join recheck condition with an additional "src IS NOT NULL" check, so that it does the right thing when evaluated using the output from the join subplan.

  Noted by Tom Lane while investigating bug #18634 from Alexander Lakhin.

  Back-patch to v17, where WHEN NOT MATCHED BY SOURCE support was added to MERGE.

  Discussion: https://postgr.es/m/18634-db5299c937877f2b%40postgresql.org

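  A hedged sketch, with hypothetical tables src and tgt, of the kind of non-strict join condition this fix covers:

      MERGE INTO tgt USING src ON src.col IS NOT DISTINCT FROM tgt.col
      WHEN MATCHED THEN
          UPDATE SET val = src.val
      WHEN NOT MATCHED BY SOURCE THEN
          DELETE;
      -- the recheck qual now effectively includes "src IS NOT NULL", so
      -- NULL-extended rows from the outer join classify correctly
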
* Parse libpq's "keepalives" option more like other integer options. (Tom Lane, 2024-10-02)

  Use pqParseIntParam (nee parse_int_param) instead of using strtol directly. This allows trailing whitespace, which the previous coding didn't, and makes the spelling of the error message consistent with other similar cases.

  This seems to be an oversight in commit e7a221797, which introduced parse_int_param. That fixed places that were using atoi(), but missed this place which was randomly using strtol() instead.

  Ordinarily I'd consider this minor cleanup not worth back-patching. However, it seems that ecpg assumes it can add trailing whitespace to URL parameters, so that use of the keepalives option fails in that context. Perhaps that's worth improving as a separate matter. In the meantime, back-patch this to all supported branches.

  Yuto Sasaki (some further cleanup by me)

  Discussion: https://postgr.es/m/TY2PR01MB36286A7B97B9A15793335D18C1772@TY2PR01MB3628.jpnprd01.prod.outlook.com

* Fix race condition in COMMIT PREPARED causing orphaned 2PC files. (Michael Paquier, 2024-10-01)

  COMMIT PREPARED removes on-disk 2PC files near its end, but the flag tracking whether a file is on disk was read from shared memory without holding the two-phase state lock.

  Because of that, there was a small window where a second backend doing a PREPARE TRANSACTION could reuse the GlobalTransaction put back into the 2PC free list by the COMMIT PREPARED, overwriting the "ondisk" flag read afterwards by the COMMIT PREPARED to decide if its on-disk two-phase state file should be removed, preventing the file deletion.

  This commit fixes the issue so that the "ondisk" flag in the GlobalTransaction is read while holding the two-phase state lock, not from shared memory after its entry has been added to the free list.

  Orphaned two-phase state files flushed to disk after a checkpoint are discarded at the beginning of recovery. However, a truncation of pg_xact/ would make the startup process issue a FATAL when it cannot read the SLRU page holding the state of the transaction whose 2PC file was orphaned, which is a necessary step to decide if the 2PC file should be removed or not. Manually removing the file would be necessary in this case.

  Issue introduced by effe7d9552dd, so backpatch all the way down. Mea culpa.

  Author: wuchengwen
  Discussion: https://postgr.es/m/tencent_A7F059B5136A359625C7B2E4A386B3C3F007@qq.com
  Backpatch-through: 12

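  For context, a minimal sketch of the two-phase commands involved in the race (hypothetical GID; requires max_prepared_transactions > 0):

      BEGIN;
      -- ... some work ...
      PREPARE TRANSACTION 'gid1';   -- may leave a 2PC state file on disk
      COMMIT PREPARED 'gid1';       -- must remove that file; the "ondisk"
                                    -- flag is now read under the lock
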
* Remove incorrect entries in pg_walsummary's getopt_long call. (Tom Lane, 2024-09-30)

  For some reason this listed "-f" and "-w" as valid switches, though the code doesn't implement any such thing nor do the docs mention them. The effect of this was that if you tried to use one of these switches, you'd get an unhelpful error message.

  Yusuke Sugie

  Discussion: https://postgr.es/m/68e72a2a70f4d84c1c7847b13bcdaef8@oss.nttdata.com

* reindexdb: Skip reindexing temporary tables and indexes. (Fujii Masao, 2024-09-30)

  Reindexing temp tables or indexes of other sessions is not allowed. However, reindexdb in parallel mode previously listed them as the objects to process, leading to failures.

  This commit ensures reindexdb in parallel mode skips temporary tables and indexes by adding a condition based on the relpersistence column in pg_class to the object listing queries, preventing these issues.

  Note that this commit does not affect reindexdb when temporary tables or indexes are explicitly specified using the -t or -i options; reindexdb in that case still does not skip them and can cause an error.

  Back-patch to v13 where parallel mode was introduced in reindexdb.

  Author: Fujii Masao
  Reviewed-by: Michael Paquier
  Discussion: https://postgr.es/m/5f37ee56-14fb-44fe-9150-9eb97e10538b@oss.nttdata.com

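  A hedged sketch of the relpersistence-based filtering described above (the exact query text is an assumption):

      SELECT c.relname
      FROM pg_catalog.pg_class c
      WHERE c.relkind IN ('r', 'm')
        AND c.relpersistence != 't';   -- 't' = temporary; skip those
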
* Remove NULL dereference from RenameRelationInternal(). (Noah Misch, 2024-09-29)

  Defect in last week's commit aac2c9b4fde889d13f859c233c2523345e72d32b, per Coverity. Reaching this would need catalog corruption.

  Back-patch to v12, like that commit.

* Avoid 037_invalid_database.pl hang under debug_discard_caches. (Noah Misch, 2024-09-27)

  Back-patch to v12 (all supported versions).

* Fix incorrect memory access in VACUUM FULL with invalid toast indexes. (Michael Paquier, 2024-09-27)

  An invalid toast index is skipped in reindex_relation(). These would be remnants of a failed REINDEX CONCURRENTLY, and they should never be rebuilt as there can only be one valid toast index at a time.

  REINDEX_REL_SUPPRESS_INDEX_USE, used by CLUSTER and VACUUM FULL, needs to maintain a list of the indexes being processed. The list of indexes is retrieved from the relation cache, and includes invalid indexes. The code has missed that invalid toast indexes are ignored in reindex_relation(), as this leads to a hard failure in reindex_index(), and they were left in the reindex pending list, making the list inconsistent when rechecked. The incorrect memory access happened during the scan of pg_class done to refresh pg_database.datfrozenxid.

  This issue exists since REINDEX CONCURRENTLY exists, where invalid toast indexes can exist, so backpatch all the way down.

  Reported-by: Alexander Lakhin
  Author: Tender Wang
  Discussion: https://postgr.es/m/18630-9aed99c38830657d@postgresql.org
  Backpatch-through: 12

* Remove extra whitespace in pg_upgrade status message. (Nathan Bossart, 2024-09-25)

  There's no need to add another level of indentation to this status message. pg_log() will put it in the right place.

  Oversight in commit 347758b120.

  Reviewed-by: Daniel Gustafsson
  Discussion: https://postgr.es/m/ZunW7XHLd2uTts4f%40nathan
  Backpatch-through: 17

* vacuumdb: Skip temporary tables in query to build list of relations. (Michael Paquier, 2024-09-25)

  Running vacuumdb with a non-superuser while another user has created a temporary table would lead to a mid-flight permission failure, interrupting the operation. vacuum_rel() skips temporary relations of other backends, and it makes no sense for vacuumdb to know about these relations, so let's switch it to ignore temporary relations entirely.

  Adding a qual in the query based on relpersistence simplifies the generation of its WHERE clause in vacuum_one_database(), allowing the removal of "has_where".

  Author: VaibhaveS, Michael Paquier
  Reviewed-by: Fujii Masao
  Discussion: https://postgr.es/m/CAM_eQjwfAR=y3G1fGyS1U9FTmc+FyJm9amNfY2QCZBnDDbNPZg@mail.gmail.com
  Backpatch-through: 12

* For inplace update durability, make heap_update() callers wait. (Noah Misch, 2024-09-24)

  The previous commit fixed some ways of losing an inplace update. It remained possible to lose one when a backend working toward a heap_update() copied a tuple into memory just before inplace update of that tuple. In catalogs eligible for inplace update, use LOCKTAG_TUPLE to govern admission to the steps of copying an old tuple, modifying it, and issuing heap_update(). This includes MERGE commands. To avoid changing most of the pg_class DDL, don't require LOCKTAG_TUPLE when holding a relation lock sufficient to exclude inplace updaters.

  Back-patch to v12 (all supported versions). In v13 and v12, "UPDATE pg_class" or "UPDATE pg_database" can still lose an inplace update. The v14+ UPDATE fix needs commit 86dc90056dfdbd9d1b891718d2e5614e3e432f35, and it wasn't worth reimplementing that fix without such infrastructure.

  Reviewed by Nitin Motiani and (in earlier versions) Heikki Linnakangas.

  Discussion: https://postgr.es/m/20231027214946.79.nmisch@google.com

* Fix data loss at inplace update after heap_update(). (Noah Misch, 2024-09-24)

  As previously-added tests demonstrated, heap_inplace_update() could instead update an unrelated tuple of the same catalog. It could lose the update. Losing relhasindex=t was a source of index corruption.

  Inplace-updating commands like VACUUM will now wait for heap_update() commands like GRANT TABLE and GRANT DATABASE. That isn't ideal, but a long-running GRANT already hurts VACUUM progress more just by keeping an XID running. The VACUUM will behave like a DELETE or UPDATE waiting for the uncommitted change.

  For implementation details, start at the systable_inplace_update_begin() header comment and README.tuplock.

  Back-patch to v12 (all supported versions). In back branches, retain a deprecated heap_inplace_update(), for extensions.

  Reported by Smolkin Grigory. Reviewed by Nitin Motiani, (in earlier versions) Heikki Linnakangas, and (in earlier versions) Alexander Lakhin.

  Discussion: https://postgr.es/m/CAMp+ueZQz3yDk7qg42hk6-9gxniYbp-=bG2mgqecErqR5gGGOA@mail.gmail.com

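  A hedged sketch, with a hypothetical table t and role reader, of the new wait behavior:

      -- session 1: updates t's pg_class row (relacl) via heap_update()
      BEGIN;
      GRANT SELECT ON t TO reader;

      -- session 2: inplace-updates t's pg_class row (e.g. relhasindex);
      -- it now waits for session 1 instead of possibly losing its update
      VACUUM t;
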
* Warn if LOCKTAG_TUPLE is held at commit, under debug_assertions. (Noah Misch, 2024-09-24)

  The current use always releases this locktag. A planned use will continue that intent. It will involve more areas of code, making unlock omissions easier. Warn under debug_assertions, like we do for various resource leaks.

  Back-patch to v12 (all supported versions), the plan for the commit of the new use.

  Reviewed by Heikki Linnakangas.

  Discussion: https://postgr.es/m/20240512232923.aa.nmisch@google.com

* Fix psql describe commands' handling of ACL columns for old servers. (Tom Lane, 2024-09-24)

  Commit d1379ebf4 carelessly broke printACLColumn for pre-9.4 servers, by using the cardinality() function which we introduced in 9.4. We expect psql's describe-related commands to work back to 9.2, so this is bad. Use the longstanding array_length() function instead.

  Per report from Christoph Berg. Back-patch to v17.

  Discussion: https://postgr.es/m/ZvLXYglRS6hMMhtr@msg.df7cb.de

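  A quick illustration of the two functions; note they differ for empty arrays, which the calling SQL must account for:

      SELECT cardinality('{}'::aclitem[]);        -- 0, but cardinality() is 9.4+
      SELECT array_length('{}'::aclitem[], 1);    -- NULL, available well before 9.2
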
* Translation updates (Peter Eisentraut, 2024-09-23)

  Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
  Source-Git-Hash: 4b069f67b5be4227eb620a74c9900f079f2e59f4

* Fix nbtree pgstats accounting with parallel scans. (Peter Geoghegan, 2024-09-20)

  Commit 5bf748b8, which enhanced nbtree ScalarArrayOp execution, made parallel index scans work with the new design for arrays via explicit scheduling of primitive index scans. Under this scheme a parallel index scan with array keys will perform the same number of index descents as an equivalent serial index scan (barring corner cases where an individual parallel worker discovers that it can advance the scan's array keys without anybody needing to perform another descent of the index to get to the relevant page on the leaf level).

  Despite all this, the pgstats accounting wasn't updated; it continued to increment the total number of index scans for the rel once per _bt_first call, no matter the details. As a result, the number of (primitive) index scans could be over-counted during parallel scans.

  To fix, delay incrementing the count of index scans until after we've established that another descent of the index (using either _bt_search or _bt_endpoint) is required. That way pg_stat_user_tables.idx_scan always advances in the same way, regardless of whether or not the scan makes use of parallelism.

  Oversight in commit 5bf748b8, which enhanced nbtree ScalarArrayOp execution.

  Author: Peter Geoghegan <pg@bowt.ie>
  Reviewed-By: Tomas Vondra <tomas@vondra.me>
  Discussion: https://postgr.es/m/CAH2-Wz=E7XrkvscBN0U6V81NK3Q-dQOmivvbEsjG-zwEfDdFpg@mail.gmail.com
  Discussion: https://postgr.es/m/CAH2-WzkRqvaqR2CTNqTZP0z6FuL4-3ED6eQB0yx38XBNj1v-4Q@mail.gmail.com
  Backpatch: 17-, where nbtree SAOP execution was enhanced.

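  A hedged sketch (hypothetical table t with a btree index on column a) of the kind of ScalarArrayOp qual whose parallel scans were over-counted:

      SET max_parallel_workers_per_gather = 2;
      SELECT count(*) FROM t WHERE a = ANY (ARRAY[1, 2, 3]);
      -- pg_stat_user_tables.idx_scan now advances the same way whether or
      -- not the scan runs in parallel:
      SELECT idx_scan FROM pg_stat_user_tables WHERE relname = 't';
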
* Improve Perl script which adds commit links to release notes. (Bruce Momjian, 2024-09-19)

  Reported-by: Andrew Dunstan
  Discussion: https://postgr.es/m/b2465837-56df-4794-a0b5-5e6ed44ed870@dunslane.net
  Author: Andrew Dunstan
  Backpatch-through: 12

* psql: Fix memory leak with repeated calls of \bind. (Michael Paquier, 2024-09-19)

  Calling \bind repeatedly would cause the memory allocated for the list of bind parameters to be leaked after each call, as the list is reset at the beginning of each call without freeing the previous allocation.

  This issue is fixed by making the cleanup of the bind parameter list more aggressive, refactoring it into a single routine called after processing a query and before running an individual \bind.

  HEAD required more surgery and has been fixed by 87eeadaea143. Issue introduced by 5b66de3433e2.

  Reported-by: Anthonin Bonnefoy
  Discussion: https://postgr.es/m/2e5b89af-a351-ff0a-000c-037ac28314ab@gmail.com
  Backpatch-through: 16

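  For reference, a sketch of the repeated psql usage that leaked (hypothetical query):

      SELECT $1::int \bind 1 \g
      SELECT $1::int \bind 2 \g
      -- each \bind previously leaked the prior call's parameter list
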
* Don't enter parallel mode when holding interrupts. (Noah Misch, 2024-09-17)

  Doing so caused the leader to hang in wait_event=ParallelFinish, which required an immediate shutdown to resolve. Back-patch to v12 (all supported versions).

  Francesco Degrassi

  Discussion: https://postgr.es/m/CAC-SaSzHUKT=vZJ8MPxYdC_URPfax+yoA1hKTcF4ROz_Q6z0_Q@mail.gmail.com

* Add missing query ID reporting in extended query protocol. (Michael Paquier, 2024-09-18)

  This commit adds query ID reports for two code paths when processing extended query protocol messages:

  - When receiving a bind message, setting it to the first Query retrieved from a cached plan.
  - When receiving an execute message, setting it to the first PlannedStmt stored in a portal.

  An advantage of this method is that this is able to cover all the types of portals handled in the extended query protocol, particularly these two when the report done in ExecutorStart() is not enough (neither is an addition in ExecutorRun(), actually, for the second point):

  - Multiple execute messages, with multiple ExecutorRun().
  - Portal with execute/fetch messages, like a query with a RETURNING clause and a fetch size that stores the tuples in a first execute message going through ExecutorStart() and ExecutorRun(), followed by one or more execute messages doing only fetches from the tuplestore created in the first message. This corresponds to the case where execute_is_fetch is set, for example.

  Note that the query ID reporting done in ExecutorStart() is still necessary, as an EXECUTE requires it. Query ID reporting is optimistic and more calls to pgstat_report_query_id() don't matter as the first report takes priority except if the report is forced. The comment in ExecutorStart() is adjusted to reflect better the reality with the extended query protocol.

  The test added in pg_stat_statements is a courtesy of Robert Haas. This uses psql's \bind metacommand, hence this part is backpatched down to v16.

  Reported-by: Kaido Vaikla, Erik Wienhold
  Author: Sami Imseih
  Reviewed-by: Jian He, Andrei Lepikhov, Michael Paquier
  Discussion: https://postgr.es/m/CA+427g8DiW3aZ6pOpVgkPbqK97ouBdf18VLiHFesea2jUk3XoQ@mail.gmail.com
  Discussion: https://postgr.es/m/CA+TgmoZxtnf_jZ=VqBSyaU8hfUkkwoJCJ6ufy4LGpXaunKrjrg@mail.gmail.com
  Discussion: https://postgr.es/m/1391613709.939460.1684777418070@office.mailbox.org
  Backpatch-through: 14

* Allow ReadStream to be consumed as raw block numbers. (Thomas Munro, 2024-09-18)

  Commits 041b9680 and 6377e12a changed the interface of scan_analyze_next_block() to take a ReadStream instead of a BlockNumber and a BufferAccessStrategy, and to return a value to indicate when the stream has run out of blocks. This caused integration problems for at least one known extension that uses specially encoded BlockNumber values that map to different underlying storage, because acquire_sample_rows() sets up the stream so that read_stream_next_buffer() reads blocks from the main fork of the relation's SMgrRelation.

  Provide read_stream_next_block(), as a way for such an extension to access the stream of raw BlockNumbers directly and forward them to its own ReadBuffer() calls after decoding, as it could in earlier releases. The new function returns the BlockNumber and BufferAccessStrategy that were previously passed directly to scan_analyze_next_block().

  Alternatively, an extension could wrap the stream of BlockNumbers in another ReadStream with a callback that performs any decoding required to arrive at real storage manager BlockNumber values, so that it could benefit from the I/O combining and concurrency provided by read_stream.c.

  Another class of table access method that does nothing in scan_analyze_next_block() because it is not block-oriented could use this function to control the number of block sampling loops. It could match the previous behavior with "return read_stream_next_block(stream, &bas) != InvalidBlockNumber".

  Ongoing work is expected to provide better ANALYZE support for table access methods that don't behave like heapam with respect to storage blocks, but that will be for future releases.

  Back-patch to 17.

  Reported-by: Mats Kindahl <mats@timescale.com>
  Reviewed-by: Mats Kindahl <mats@timescale.com>
  Discussion: https://postgr.es/m/CA%2B14425%2BCcm07ocG97Fp%2BFrD9xUXqmBKFvecp0p%2BgV2YYR258Q%40mail.gmail.com

* Repair pg_upgrade for identity sequences with non-default persistence. (Tom Lane, 2024-09-17)

  Since we introduced unlogged sequences in v15, identity sequences have defaulted to having the same persistence as their owning table. However, it is possible to change that with ALTER SEQUENCE, and pg_dump tries to preserve the logged-ness of sequences when it doesn't match (as indeed it wouldn't for an unlogged table from before v15).

  The fly in the ointment is that ALTER SEQUENCE SET [UN]LOGGED fails in binary-upgrade mode, because it needs to assign a new relfilenode which we cannot permit in that mode. Thus, trying to pg_upgrade a database containing a mismatching identity sequence failed.

  To fix, add syntax to ADD/ALTER COLUMN GENERATED AS IDENTITY to allow the sequence's persistence to be set correctly at creation, and use that instead of ALTER SEQUENCE SET [UN]LOGGED in pg_dump. (I tried to make SET [UN]LOGGED work without any pg_dump modifications, but that seems too fragile to be a desirable answer. This way should be markedly faster anyhow.)

  In passing, document the previously-undocumented SEQUENCE NAME option that pg_dump also relies on for identity sequences; I see no value in trying to pretend it doesn't exist.

  Per bug #18618 from Anthony Hsu. Back-patch to v15 where we invented this stuff.

  Discussion: https://postgr.es/m/18618-d4eb26d669ed110a@postgresql.org

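  A hedged sketch, with hypothetical names and the option spelling assumed from the description above, of the new syntax:

      CREATE TABLE t (id int);
      ALTER TABLE t
          ALTER COLUMN id ADD GENERATED ALWAYS AS IDENTITY
              (SEQUENCE NAME t_id_seq UNLOGGED);
      -- sets the identity sequence's persistence at creation, avoiding
      -- ALTER SEQUENCE ... SET UNLOGGED, which fails in binary-upgrade mode
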
* Avoid parallel nbtree index scan hangs with SAOPs. (Peter Geoghegan, 2024-09-17)

  Commit 5bf748b8, which enhanced nbtree ScalarArrayOp execution, made parallel index scans work with the new design for arrays via explicit scheduling of primitive index scans. A backend that successfully scheduled the scan's next primitive index scan saved its backend local array keys in shared memory. Any backend could pick up the scheduled primitive scan within _bt_first. This scheme decouples scheduling a primitive scan from starting the scan (by performing another descent of the index via a _bt_search call from _bt_first) to make things robust.

  The scheme had a deadlock hazard, at least when the leader process participated in the scan. _bt_parallel_seize had a code path that made backends that were not in an immediate position to start a scheduled primitive index scan wait for some other backend to do so instead. Under the right circumstances, the leader process could wait here forever: the leader would wait for any other backend to start the primitive scan, while every worker was busy waiting on the leader to consume tuples from the scan's tuple queue.

  To fix, don't wait for a scheduled primitive index scan to be started by some other eligible backend from within _bt_parallel_seize (when the calling backend isn't in a position to do so itself). Return false instead, while recording that the scan has a scheduled primitive index scan in backend local state. This leaves the backend in the same state as the existing case where a backend schedules (or tries to schedule) another primitive index scan from within _bt_advance_array_keys, before calling _bt_parallel_seize. _bt_parallel_seize already handles that case by returning false without waiting, and without unsetting the backend local state. Leaving the backend in this state enables it to start a previously scheduled primitive index scan once it gets back to _bt_first.

  Oversight in commit 5bf748b8, which enhanced nbtree ScalarArrayOp execution.

  Matthias van de Meent, with tweaks by me.

  Author: Matthias van de Meent <boekewurm+postgres@gmail.com>
  Reported-By: Tomas Vondra <tomas@vondra.me>
  Reviewed-By: Peter Geoghegan <pg@bowt.ie>
  Discussion: https://postgr.es/m/CAH2-WzmMGaPa32u9x_FvEbPTUkP5e95i=QxR8054nvCRydP-sw@mail.gmail.com
  Backpatch: 17-, where nbtree SAOP execution was enhanced.

* scripts: add Perl script to add links to release notes. (Bruce Momjian, 2024-09-16)

  Reported-by: jian he
  Discussion: https://postgr.es/m/ZuYsS5XdA7hVcV9l@momjian.us
  Backpatch-through: 12

* Replace usages of xmlXPathCompile() with xmlXPathCtxtCompile(). (Tom Lane, 2024-09-15)

  In existing releases of libxml2, xmlXPathCompile can be driven to stack overflow because it fails to protect itself against too-deeply-nested input. While there is an upstream fix as of yesterday, it will take years for that to propagate into all shipping versions. In the meantime, we can protect our own usages basically for free by calling xmlXPathCtxtCompile instead.

  (The actual bug is that libxml2 keeps its nesting counter in the xmlXPathContext, and its parsing code was willing to just skip counting nesting levels if it didn't have a context. So if we supply a context, all is well. It seems odd actually that it works at all to not supply a context, because this means that XPath parsing does not have access to XML namespace info. Apparently libxml2 never checks namespaces until runtime? Anyway, this seems like good future-proofing even if its only immediate effect is to dodge a bug.)

  Sadly, this hack only offers protection with libxml2 2.9.11 and newer. Before that there are multiple similar problems, so if you are processing untrusted XML it behooves you to get a newer version. But we have some pretty old libxml2 in the buildfarm, so it seems impractical to add a regression test to verify this fix.

  Per bug #18617 from Jingzhou Fu. Back-patch to all supported versions.

  Discussion: https://postgr.es/m/18617-1cee4d2ed1f4e7ae@postgresql.org
  Discussion: https://gitlab.gnome.org/GNOME/libxml2/-/issues/799

* Run regression tests with timezone America/Los_Angeles. (Tom Lane, 2024-09-14)

  Historically we've used timezone "PST8PDT", but the recent release 2024b of tzdb changes the definition of that zone in a way that breaks many test cases concerned with dates before 1970. Although we've not yet adopted 2024b into our own tree, this is already problematic for people using --with-system-tzdata if their platform has already adopted 2024b.

  To work with both older and newer versions of tzdb, switch to using "America/Los_Angeles", accepting the ensuing changes in regression test results.

  Back-patch to all supported branches.

  Per report and patch from Wolfgang Walther.

  Discussion: https://postgr.es/m/0a997455-5aba-4cf2-a354-d26d8bcbfae6@technowledgy.de

* Allow _h_indexbuild() to be interrupted. (Tom Lane, 2024-09-13)

  When we are building a hash index that is large enough to need pre-sorting (larger than either maintenance_work_mem or NBuffers), the initial sorting phase is interruptible, but the insertion phase wasn't. Add the missing CHECK_FOR_INTERRUPTS().

  Per bug #18616 from Alexander Lakhin. Back-patch to all supported branches.

  Pavel Borisov

  Discussion: https://postgr.es/m/18616-acbb9e5caf41e964@postgresql.org

* Make jsonpath .string() be immutable for datetimes. (Tom Lane, 2024-09-12)

  Discussion of commit ed055d249 revealed that we don't actually want jsonpath's .string() method to depend on DateStyle, nor TimeZone either, because the non-"_tz" jsonpath functions are supposed to be immutable. Potentially we could allow a TimeZone dependency in the "_tz" variants, but it seems better to just uniformly define this method as returning the same string that jsonb text output would do. That's easier to implement too, saving a couple dozen lines.

  Patch by me, per complaint from Peter Eisentraut. Back-patch to v17 where this feature came in (in 66ea94e8e). Also back-patch ed055d249 to provide test cases.

  Discussion: https://postgr.es/m/5e8879d0-a3c8-4be2-950f-d83aa2af953a@eisentraut.org

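  A hedged example of the method in question; the point is that session settings no longer affect the result:

      SET datestyle = 'German';   -- no longer influences .string()
      SELECT jsonb_path_query('"2024-09-12"', '$.datetime().string()');
      -- returns the same text that jsonb output would use for the value
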
* SQL/JSON: Fix JSON_QUERY(... WITH CONDITIONAL WRAPPER). (Amit Langote, 2024-09-12)

  Currently, when WITH CONDITIONAL WRAPPER is specified, array wrappers are applied even to a single SQL/JSON item if it is a scalar JSON value, but this behavior does not comply with the standard. To fix, apply wrappers only when there are multiple SQL/JSON items in the result.

  Reported-by: Peter Eisentraut <peter@eisentraut.org>
  Author: Peter Eisentraut <peter@eisentraut.org>
  Author: Amit Langote <amitlangote09@gmail.com>
  Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
  Discussion: https://postgr.es/m/8022e067-818b-45d3-8fab-6e0d94d03626%40eisentraut.org
  Backpatch-through: 17

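  A hedged illustration of the corrected behavior:

      SELECT JSON_QUERY('{"a": 1}', '$.a' WITH CONDITIONAL WRAPPER);
      -- single scalar item: now returns 1, not [1]
      SELECT JSON_QUERY('{"a": [1, 2]}', '$.a[*]' WITH CONDITIONAL WRAPPER);
      -- multiple items: wrapper still applied, returns [1, 2]
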
* Remove incorrect Assert. (Tom Lane, 2024-09-11)

  check_agglevels_and_constraints() asserted that if we find an aggregate function in an EXPR_KIND_FROM_SUBSELECT expression, the expression must be in a LATERAL subquery. Alexander Lakhin found a case where that's not so: because of the odd scoping rules for NEW/OLD within a rule, a reference to NEW/OLD could cause an aggregate to be considered top-level even though it's in an unmarked sub-select. The error message that would be thrown seems sufficiently on-point, so just remove the Assert. (Hence, this is not a bug for production builds.)

  This Assert was added by me in commit eaccfded9 (9.3 era). It looks like I put it in to cross-check that the new logic for detecting misplaced aggregates (using agglevelsup) caught the same cases that a previous check on p_lateral_active did. So there might have been some related misbehavior before eaccfded9 ... but that's very ancient history by now, so I didn't dig any deeper.

  Per bug #18608 from Alexander Lakhin. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/18608-48de0717508ee429@postgresql.org

* Fix unique key checks in JSON object constructors. (Tomas Vondra, 2024-09-11)

  When building a JSON object, the code builds a hash table of keys, to allow checking if the keys are unique. The uniqueness check and adding the new key happens in json_unique_check_key(), but this assumes the pointer to the key remains valid.

  Unfortunately, two places passed pointers to keys in a buffer, while also appending more data (additional key/value pairs) to the buffer. With enough data the buffer is resized by enlargeStringInfo(), which calls repalloc(), invalidating the earlier key pointers. Due to this the uniqueness check may fail with both false negatives and false positives, producing JSON objects with duplicate keys or failing to produce a perfectly valid JSON object.

  This affects multiple functions that enforce uniqueness of keys, all introduced in PG16 with the new SQL/JSON:

  - json_object_agg_unique / jsonb_object_agg_unique
  - json_object / jsonb_objectagg

  Existing regression tests did not detect the issue, simply because the initial buffer size is 1024 and the objects were small enough not to require the repalloc. With a sufficiently large object, AddressSanitizer reported the access to invalid memory immediately. So would valgrind, of course.

  Fixed by copying the key into the hash table memory context, and adding regression tests with enough data to repalloc the buffer. Backpatch to 16, where the functions were introduced.

  Reported by Alexander Lakhin. Investigation and initial fix by Junwang Zhao, with various improvements and tests by me.

  Reported-by: Alexander Lakhin
  Author: Junwang Zhao, Tomas Vondra
  Backpatch-through: 16
  Discussion: https://postgr.es/m/18598-3279ed972a2347c7@postgresql.org
  Discussion: https://postgr.es/m/CAEG8a3JjH0ReJF2_O7-8LuEbO69BxPhYeXs95_x7+H9AMWF1gw@mail.gmail.com

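  A hedged example of a SQL-level constructor that relies on this uniqueness check:

      SELECT JSON_OBJECT('a' VALUE 1, 'a' VALUE 2 WITH UNIQUE KEYS);
      -- fails with a duplicate-key error; before the fix, the check could
      -- misfire either way once the object outgrew the initial 1024-byte
      -- buffer and the stored key pointers were repalloc'd away
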
* Fix some whitespace issues in XMLSERIALIZE(... INDENT). (Tom Lane, 2024-09-10)

  We must drop whitespace while parsing the input, else libxml2 will include "blank" nodes that interfere with the desired indentation behavior. The end result is that we didn't indent nodes separated by whitespace.

  Also, it seems that libxml2 may add a trailing newline when working in DOCUMENT mode. This is semantically insignificant, so strip it.

  This is in the gray area between being a bug fix and a definition change. However, the INDENT option is still pretty new (since v16), so I think we can get away with changing this in stable branches. Hence, back-patch to v16.

  Jim Jones

  Discussion: https://postgr.es/m/872865a8-548b-48e1-bfcd-4e38e672c1e4@uni-muenster.de

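  A hedged example of the whitespace case that previously went unindented:

      SELECT xmlserialize(DOCUMENT '<a> <b>42</b> </a>'::xml AS text INDENT);
      -- now indents <b> on its own line instead of keeping the raw spacing
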
* SQL/JSON: Avoid initializing unnecessary ON ERROR / ON EMPTY steps. (Amit Langote, 2024-09-09)

  When the ON ERROR / ON EMPTY behavior is to return NULL, returning NULL directly from ExecEvalJsonExprPath() suffices. Therefore, there's no need to create separate steps to check the error/empty flag or those to evaluate the constant NULL expression. This speeds up common cases because the default ON ERROR / ON EMPTY behavior for JSON_QUERY() and JSON_VALUE() is to return NULL. However, these steps are necessary if the RETURNING type is a domain, as constraints on the domain may need to be checked.

  Reported-by: Jian He <jian.universality@gmail.com>
  Author: Jian He <jian.universality@gmail.com>
  Author: Amit Langote <amitlangote09@gmail.com>
  Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
  Backpatch-through: 17

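  For context, a hedged example of the common default-behavior case this speeds up:

      SELECT JSON_VALUE('{"a": 1}', '$.b');
      -- path yields nothing; the default NULL ON EMPTY applies, so NULL is
      -- returned directly, with no extra steps (unless RETURNING a domain)
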
* Fix waits of REINDEX CONCURRENTLY for indexes with predicates or expressions. (Michael Paquier, 2024-09-09)

  As introduced by f9900df5f94, a REINDEX CONCURRENTLY job done for an index with predicates or expressions would set PROC_IN_SAFE_IC in its MyProc->statusFlags, causing it to be ignored by other concurrent operations. Such concurrent index rebuilds should never be ignored, as a predicate or an expression could call a user-defined function that accesses a different table than the table where the index is rebuilt.

  A test that uses injection points is added, backpatched down to 17. Michail has proposed a different test, but I have added something simpler with more coverage.

  Oversight in f9900df5f949.

  Author: Michail Nikolaev
  Discussion: https://postgr.es/m/CANtu0oj9A3kZVduFTG0vrmGnKB+DCHgEpzOp0qAyOgmks84j0w@mail.gmail.com
  Backpatch-through: 14

* Fix incorrect pg_stat_io output on 32-bit machines. (Tom Lane, 2024-09-06)

  pg_stat_get_io() applied TimestampTzGetDatum twice to the stat_reset_timestamp value. On 64-bit builds that's harmless because TimestampTzGetDatum is a no-op, but on 32-bit builds it results in displaying garbage in the stats_reset column of the pg_stat_io view.

  Bug dates to commit a9c70b46d which introduced pg_stat_io, so back-patch to v16 where that came in.

  Bertrand Drouvot

  Discussion: https://postgr.es/m/Ztrd+XcPTz1zorkg@ip-10-97-1-34.eu-west-3.compute.internal

* SQL/JSON: Fix default ON ERROR behavior for JSON_TABLE. (Amit Langote, 2024-09-06)

  Use EMPTY ARRAY instead of EMPTY.

  This change does not affect the runtime behavior of JSON_TABLE(), which continues to return an empty relation ON ERROR. It only alters whether the default ON ERROR behavior is shown in the deparsed output.

  Reported-by: Jian He <jian.universality@gmail.com>
  Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
  Backpatch-through: 17

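  A hedged example of the clause whose deparsed spelling changed (the strict path triggers the ON ERROR behavior):

      SELECT * FROM JSON_TABLE('{"a": 1}', 'strict $.b'
                               COLUMNS (x int) EMPTY ARRAY ON ERROR) AS jt;
      -- returns an empty relation; the deparsed default is now spelled
      -- EMPTY ARRAY ON ERROR rather than EMPTY ON ERROR
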
* SQL/JSON: Fix JSON_TABLE() column deparsing. (Amit Langote, 2024-09-06)

  The deparsing code in get_json_expr_options() unnecessarily emitted the default column-specific ON ERROR / EMPTY behavior when the top-level ON ERROR behavior in JSON_TABLE was set to ERROR. Fix that by not overriding the column-specific default, determined based on the column's JsonExprOp in get_json_table_columns(), with JSON_BEHAVIOR_ERROR when that is the top-level ON ERROR behavior.

  Note that this only removes redundancy; the current deparsing output is not incorrect, just redundant.

  Reviewed-by: Jian He <jian.universality@gmail.com>
  Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
  Backpatch-through: 17

* Revert recent SQL/JSON related commits. (Amit Langote, 2024-09-06)

  Reverts c88ce386c4d, 5067c230b8e, and e4e27976a68, because a few BF animals didn't like one or all of them.

* SQL/JSON: Avoid initializing unnecessary ON ERROR / ON EMPTY steps. (Amit Langote, 2024-09-06)

  When the ON ERROR / ON EMPTY behavior is to return NULL, returning NULL directly from ExecEvalJsonExprPath() suffices. Therefore, there's no need to create separate steps to check the error/empty flag or those to evaluate the constant NULL expression. This speeds up common cases because the default ON ERROR / ON EMPTY behavior for JSON_QUERY() and JSON_VALUE() is to return NULL. However, these steps are necessary if the RETURNING type is a domain, as constraints on the domain may need to be checked.

  Reported-by: Jian He <jian.universality@gmail.com>
  Author: Jian He <jian.universality@gmail.com>
  Author: Amit Langote <amitlangote09@gmail.com>
  Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
  Backpatch-through: 17

* SQL/JSON: Fix default ON ERROR behavior for JSON_TABLE. (Amit Langote, 2024-09-06)

  Use EMPTY ARRAY instead of EMPTY.

  This change does not affect the runtime behavior of JSON_TABLE(), which continues to return an empty relation ON ERROR. It only alters whether the default ON ERROR behavior is shown in the deparsed output.

  Reported-by: Jian He <jian.universality@gmail.com>
  Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
  Backpatch-through: 17

* SQL/JSON: Fix JSON_TABLE() column deparsing. (Amit Langote, 2024-09-06)

  The deparsing code in get_json_expr_options() unnecessarily emitted the default column-specific ON ERROR / EMPTY behavior when the top-level ON ERROR behavior in JSON_TABLE was set to ERROR. Fix that by not overriding the column-specific default, determined based on the column's JsonExprOp in get_json_table_columns(), with JSON_BEHAVIOR_ERROR when that is the top-level ON ERROR behavior.

  Note that this only removes redundancy; the current deparsing output is not incorrect, just redundant.

  Reviewed-by: Jian He <jian.universality@gmail.com>
  Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
  Backpatch-through: 17

* Update comment about ExprState.escontext. (Amit Langote, 2024-09-06)

  The updated comment provides more helpful guidance by mentioning that escontext should be set when soft error handling is needed.

  Reported-by: Jian He <jian.universality@gmail.com>
  Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
  Backpatch-through: 17

* Prevent mis-encoding of "trailing junk after numeric literal" errors. (Tom Lane, 2024-09-05)

  Since commit 2549f0661, we reject an identifier immediately following a numeric literal (without separating whitespace), because that risks ambiguity with hex/octal/binary integers. However, that patch used token patterns like "{integer}{ident_start}", which is problematic because {ident_start} matches only a single byte. If the first character after the integer is a multibyte character, this ends up with flex reporting an error message that includes a partial multibyte character. That can cause assorted bad-encoding problems downstream, both in the report to the client and in the postmaster log file.

  To fix, use {identifier} not {ident_start} in the "junk" token patterns, so that they will match complete multibyte characters. This seems generally better user experience quite aside from the encoding problem: for "123abc" the error message will now say that the error appeared at or near "123abc" instead of "123a".

  While at it, add some commentary about why these patterns exist and how they work.

  Report and patch by Karina Litskevich; review by Pavel Borisov. Back-patch to v15 where the problem came in.

  Discussion: https://postgr.es/m/CACiT8iZ_diop=0zJ7zuY3BXegJpkKK1Av-PU7xh0EDYHsa5+=g@mail.gmail.com

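  A hedged illustration of the improved error reporting:

      SELECT 123abc;
      -- the error is now reported at or near "123abc" (previously "123a");
      -- with a multibyte character after the digits, the old pattern could
      -- split that character and emit a mis-encoded message
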
* Fix inconsistent LWLock tranche name "CommitTsSLRU". (Michael Paquier, 2024-09-04)

  This term was using an inconsistent casing between the code and the documentation, using "CommitTsSLRU" in wait_event_names.txt and "CommitTSSLRU" in the code. Let's update the term in the code to reflect what's in the documentation, "CommitTs" being more commonly used, so that pg_stat_activity shows the same term as the documentation.

  Oversight in 53c2a97a9266.

  Author: Alexander Lakhin
  Discussion: https://postgr.es/m/f7e514cf-2446-21f1-a5d2-8c089a6e2168@gmail.com
  Backpatch-through: 17

* Avoid installcheck failure in TAP tests using injection_points. (Michael Paquier, 2024-09-04)

  These tests depend on the test module injection_points to be installed, but it may not be available as the contents of src/test/modules/ are not installed by default.

  This commit adds a workaround based on a scan of pg_available_extensions to check if the extension is available, skipping the test if it is not. This allows installcheck to work transparently.

  There are more tests impacted by this problem on HEAD, but for now this addresses only the tests that exist on HEAD and v17 as the release is close by.

  Reported-by: Maxim Orlov
  Discussion: https://postgr.es/m/CACG=ezZkoT-pFz6a9XnyToiuR-Wg8fGELqHLoyBodr+2h-77qA@mail.gmail.com
  Backpatch-through: 17

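  A hedged sketch of the kind of availability check this adds:

      SELECT count(*) > 0 AS has_injection_points
      FROM pg_available_extensions
      WHERE name = 'injection_points';
      -- the TAP test skips itself when this returns false
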
* Simplify makefiles exporting enable_injection_points twice. (Michael Paquier, 2024-09-04)

  This is confusing, as it exports the same variable twice. Oversight in 6782709df81f that has spread to more places afterwards.

  Reported-by: Alvaro Herrera, Tom Lane
  Discussion: https://postgr.es/m/202408201630.mn6vbohjh7hh@alvherre.pgsql
  Backpatch-through: 17

* Fix warnings from msgfmt (Peter Eisentraut, 2024-09-02)

  /usr/bin/msgfmt: po/fr.po: warning: PO file header fuzzy
                             warning: older versions of msgfmt will give an error on this

  Apparently, not all versions of msgfmt produce this. Quick fix for now, more to be researched later.