Commit log for path: root/src/include

* Split ProcSleep function into JoinWaitQueue and ProcSleep (Heikki Linnakangas, 2024-11-04)

Split ProcSleep into two functions: JoinWaitQueue and ProcSleep. JoinWaitQueue is called while holding the partition lock, and inserts the current process into the wait queue, while ProcSleep() does the actual sleeping. ProcSleep() is now called without holding the partition lock, and it no longer re-acquires the partition lock before returning. That makes the wakeup a little cheaper. Once upon a time, re-acquiring the partition lock was needed to prevent a signal handler from longjmping out at a bad time, but these days our signal handlers just set flags, and longjmping can only happen at points where we explicitly run CHECK_FOR_INTERRUPTS().

If JoinWaitQueue detects an "early deadlock" before even joining the wait queue, it returns without changing the shared lock entry, leaving the cleanup of the shared lock entry to the caller. This makes the handling of an early deadlock the same as the dontWait=true case.

One small user-visible side-effect of this refactoring is that we now only set the 'ps' title to say "waiting" when we actually enter the sleep, not when the lock is skipped because dontWait=true, or when a deadlock is detected early before entering the sleep.

This eliminates the 'lockAwaited' global variable in proc.c, which was largely redundant with 'awaitedLock' in lock.c.

Note: Updating the local lock table is now the caller's responsibility. JoinWaitQueue and ProcSleep are now only responsible for modifying the shared state. Seems a little nicer that way.

Based on Thomas Munro's earlier patch and observation that ProcSleep doesn't really need to re-acquire the partition lock.

Reviewed-by: Maxim Orlov
Discussion: https://www.postgresql.org/message-id/7c2090cd-a72a-4e34-afaa-6dd2ef31440e@iki.fi

* Fix inplace update buffer self-deadlock. (Noah Misch, 2024-11-02)

A CacheInvalidateHeapTuple* callee might call CatalogCacheInitializeCache(), which needs a relcache entry. Acquiring a valid relcache entry might scan pg_class. Hence, to prevent undetected LWLock self-deadlock, CacheInvalidateHeapTuple* callers must not hold BUFFER_LOCK_EXCLUSIVE on buffers of pg_class. Move the CacheInvalidateHeapTupleInplace() before the BUFFER_LOCK_EXCLUSIVE. No back-patch, since I've reverted commit 243e9b40f1b2dd09d6e5bf91ebf6e822a2cd3704 from non-master branches.

Reported by Alexander Lakhin. Reviewed by Alexander Lakhin.

Discussion: https://postgr.es/m/10ec0bc3-5933-1189-6bb8-5dec4114558e@gmail.com

* Rename two functions that wake up other processes (Heikki Linnakangas, 2024-11-01)

Instead of talking about setting latches, which is a pretty low-level mechanism, emphasize that they wake up other processes. This is in preparation for replacing Latches with a new abstraction. That's still work in progress, but this seems a little tidier anyway, so let's get this refactoring out of the way already.

Discussion: https://www.postgresql.org/message-id/391abe21-413e-4d91-a650-b663af49500c%40iki.fi

* Use ProcNumbers instead of direct Latch pointers to address other procs (Heikki Linnakangas, 2024-11-01)

This is in preparation for replacing Latches with a new abstraction. That's still work in progress, but this seems a little tidier anyway, so let's get this refactoring out of the way already.

Discussion: https://www.postgresql.org/message-id/391abe21-413e-4d91-a650-b663af49500c%40iki.fi

* Add pg_memory_is_all_zeros() in memutils.h (Michael Paquier, 2024-11-01)

This new function tests if a memory region starting at a given location for a defined length is made only of zeroes. This unifies in a single path the all-zero checks that were happening in a couple of places of the backend code:

- For pgstats entries of relation, checkpointer and bgwriter, where some "all_zeroes" variables were previously used with memcmp().
- For all-zero buffer pages in PageIsVerifiedExtended().

This new function uses the same forward scan as the check for all-zero buffer pages, applying it to the three pgstats paths mentioned above.

Author: Bertrand Drouvot
Reviewed-by: Peter Eisentraut, Heikki Linnakangas, Peter Smith
Discussion: https://postgr.es/m/ZupUDDyf1hHI4ibn@ip-10-97-1-34.eu-west-3.compute.internal

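A minimal before/after sketch of the check being unified, assuming the helper's signature is pg_memory_is_all_zeros(const void *ptr, size_t len); the caller and the consistency Assert below are purely illustrative:

    #include "postgres.h"
    #include "pgstat.h"
    #include "utils/memutils.h"

    /* Illustrative comparison of the old memcmp() pattern and the new helper. */
    static bool
    checkpointer_stats_pending(const PgStat_CheckpointerStats *stats)
    {
        /* old pattern: memcmp() against a scratch all-zeroes copy */
        PgStat_CheckpointerStats all_zeroes;

        memset(&all_zeroes, 0, sizeof(all_zeroes));
        Assert((memcmp(stats, &all_zeroes, sizeof(*stats)) == 0) ==
               pg_memory_is_all_zeros(stats, sizeof(*stats)));

        /* new pattern: one call, no scratch struct needed */
        return !pg_memory_is_all_zeros(stats, sizeof(*stats));
    }
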
* Add SQL function array_reverse() (Michael Paquier, 2024-11-01)

This function takes an array as input and reverses the position of all its elements. This operation only affects the first dimension of the array, like array_shuffle(). The implementation structure is inspired by array_shuffle(), with a subroutine called array_reverse_n() that may come in handy in the future, should more functions able to reverse portions of arrays be introduced.

Bump catalog version.

Author: Aleksander Alekseev
Reviewed-by: Ashutosh Bapat, Tom Lane, Vladlen Popolitov
Discussion: https://postgr.es/m/CAJ7c6TMpeO_ke+QGOaAx9xdJuxa7r=49-anMh3G5476e3CX1CA@mail.gmail.com

* Add a parse location field to struct FunctionParameter. (Tom Lane, 2024-10-31)

This allows an error cursor to be supplied for a bunch of bad-function-definition errors that previously lacked one, or that cheated a bit by pointing at the contained type name when the error isn't really about that.

Bump catversion from an abundance of caution --- I don't think this node type can actually appear in stored views/rules, but better safe than sorry.

Jian He and Tom Lane (extracted from a larger patch by Jian, with some additional work by me)

Discussion: https://postgr.es/m/CACJufxEmONE3P2En=jopZy1m=cCCUs65M4+1o52MW5og9oaUPA@mail.gmail.com

* Remove unused field from SubPlanState struct (David Rowley, 2024-10-31)

bf6c614a2 did some conversion work to use ExprState instead of manually calling equality functions to check if one set of values is not distinct from another set. That patch removed many of the fields that became redundant as a result of that change, but it forgot to remove SubPlanState.tab_eq_funcs. Fix that.

In passing, fix the header comment for TupleHashEntryData to correctly spell the field name it's talking about.

Author: Rafia Sabih <rafia.pghackers@gmail.com>
Reviewed-by: Andrei Lepikhov <lepihov@gmail.com>
Discussion: https://postgr.es/m/CA+FpmFeycdombFzrjZw7Rmc29CVm4OOzCWwu=dVBQ6q=PX8SvQ@mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvrWR2jYVhec=COyF2g2BE_ns91NDsCHAMFiXbyhEujKdQ@mail.gmail.com

* Fix bug in nbtree array primitive scan scheduling. (Peter Geoghegan, 2024-10-30)

A bug in nbtree's handling of primitive index scan scheduling could lead to wrong answers when a scrollable cursor was used with an index scan that had a SAOP index qual. Wrong answers were only possible when the scan direction changed after a primitive scan was scheduled, but before _bt_next was asked to fetch the next tuple in line (i.e. for things to break, _bt_next had to be denied the opportunity to step off the page in the same direction as the one used when the primscan was scheduled). Furthermore, the issue only occurred when the page in question happened to be the first page to be visited by the entire top-level scan; the issue hinged upon the cursor backing up to the absolute beginning of the key space that it returns tuples from (fetching in the opposite scan direction across a "primitive scan boundary" always worked correctly).

To fix, make _bt_next unset the "needs primitive index scan" flag when it detects that the current scan direction is not the one that was used by _bt_readpage back when the primitive scan in question was scheduled. This fixes the cases that are known to be faulty, and also seems like a good idea on general robustness grounds.

Affected scrollable cursor cases now avoid a spurious primitive index scan when they fetch backwards to the absolute start of the key space to be visited by their cursor. Fetching backwards now only returns those tuples at the start of the scan, as expected. It'll also be okay to once again fetch forwards from the start at that point, since the scan will be left in a state that's exactly consistent with the state it was in before any tuples were ever fetched, as expected.

Oversight in commit 5bf748b8, which enhanced nbtree ScalarArrayOp execution.

Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-Wznv49bFsE2jkt4GuZ0tU2C91dEST=50egzjY2FeOcHL4Q@mail.gmail.com
Backpatch: 17-, where commit 5bf748b8 first appears.

* Replicate generated columns when specified in the column list. (Amit Kapila, 2024-10-30)

This commit allows logical replication to publish and replicate generated columns when explicitly listed in the column list. We also ensured that the generated columns were copied during the initial tablesync when they were published.

We will allow replicating generated columns even when they are not specified in the column list (via a new publication option) in a separate commit.

The motivation of this work is to allow replication for cases where the client doesn't have generated columns. For example, the case where one is trying to replicate data from Postgres to a non-Postgres database.

Author: Shubham Khanna, Vignesh C, Hou Zhijie
Reviewed-by: Peter Smith, Hayato Kuroda, Shlok Kyal, Amit Kapila
Discussion: https://postgr.es/m/B80D17B2-2C8E-4C7D-87F2-E5B4BE3C069E@gmail.com

* Unpin buffer before inplace update waits for an XID to end. (Noah Misch, 2024-10-29)

Commit a07e03fd8fa7daf4d1356f7cb501ffe784ea6257 changed inplace updates to wait for heap_update() commands like GRANT TABLE and GRANT DATABASE. By keeping the pin during that wait, a sequence of autovacuum workers and an uncommitted GRANT starved one foreground LockBufferForCleanup() for six minutes, on buildfarm member sarus. Prevent that, at the cost of a bit of complexity. Back-patch to v12, like the earlier commit. That commit and heap_inplace_lock() have not yet appeared in any release.

Discussion: https://postgr.es/m/20241026184936.ae.nmisch@google.com

* doc: Add better description for rewrite functions in event triggers (Michael Paquier, 2024-10-29)

There are two functions that can be used in event triggers to get more details about a rewrite happening on a relation. Both had limited documentation:

- pg_event_trigger_table_rewrite_reason() and pg_event_trigger_table_rewrite_oid() were not mentioned in the main event trigger section in the paragraph dedicated to the event table_rewrite.
- pg_event_trigger_table_rewrite_reason() returns an integer which is a bitmap of the reasons why a rewrite happens. There was no explanation about the meaning of these values, forcing the reader to look at the code to find out that these are defined in event_trigger.h.

While on it, let's add a comment in event_trigger.h where the AT_REWRITE_* constants are defined, noting that the documentation should be updated when these values are changed.

Backpatch down to 13 as a consequence of 1ad23335f36b, where this area of the documentation has been heavily reworked.

Author: Greg Sabino Mullane
Discussion: https://postgr.es/m/CAKAnmmL+Z6j-C8dAx1tVrnBmZJu+BSoc68WSg3sR+CVNjBCqbw@mail.gmail.com
Backpatch-through: 13

* Unify src/common/'s definitions of MaxAllocSize. (Tom Lane, 2024-10-28)

As threatened in the previous patch, define MaxAllocSize in src/include/common/fe_memutils.h rather than having several copies of it in different src/common/*.c files. This also provides an opportunity to document it better.

While this would probably be safe to back-patch, I'll refrain (for now anyway).

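For context, a small frontend-side sketch assuming fe_memutils.h now exposes MaxAllocSize (the backend's limit is 0x3fffffff, just under 1 GB); the wrapper name and error wording are made up:

    #include "postgres_fe.h"
    #include "common/fe_memutils.h"
    #include "common/logging.h"

    /* Hypothetical frontend helper that refuses over-limit allocations. */
    static void *
    xmalloc_checked(size_t nbytes)
    {
        if (nbytes > MaxAllocSize)
            pg_fatal("requested allocation of %zu bytes exceeds MaxAllocSize",
                     nbytes);
        return pg_malloc(nbytes);
    }
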
* Set query ID for inner queries of CREATE TABLE AS and DECLARE (Michael Paquier, 2024-10-28)

Some utility statements contain queries that can be planned and executed: CREATE TABLE AS and DECLARE CURSOR. This commit adds query ID computation for the inner queries executed by these two utility commands, with and without EXPLAIN. This change leads to four new callers of JumbleQuery() and post_parse_analyze_hook() so that extensions can decide what to do with this new data.

Previously, extensions relying on the query ID, like pg_stat_statements, were not able to track these nested queries as the query_id was 0. For pg_stat_statements, this commit leads to additions under !toplevel when pg_stat_statements.track is set to "all", as shown in its regression tests. The output of EXPLAIN for these two utilities gains a "Query Identifier" if compute_query_id is enabled.

Author: Anthonin Bonnefoy
Reviewed-by: Michael Paquier, Jian He
Discussion: https://postgr.es/m/CAO6_XqqM6S9bQ2qd=75W+yKATwoazxSNhv5sjW06fjGAtHbTUA@mail.gmail.com

* Make table_scan_bitmap_next_block() async-friendly (Melanie Plageman, 2024-10-25)

Move all responsibility for indicating a block is exhausted into table_scan_bitmap_next_tuple() and advance the main iterator in heap-specific code. This flow control makes more sense and is a step toward using the read stream API for bitmap heap scans.

Previously, table_scan_bitmap_next_block() returned false to indicate table_scan_bitmap_next_tuple() should not be called for the tuples on the page. This happened both when 1) there were no visible tuples on the page and 2) when the block returned by the iterator was past the end of the table. BitmapHeapNext() (generic bitmap table scan code) handled the case when the bitmap was exhausted.

It makes more sense for table_scan_bitmap_next_tuple() to return false when there are no visible tuples on the page and table_scan_bitmap_next_block() to return false when the bitmap is exhausted or there are no more blocks in the table.

As part of this new design, TBMIterateResults are no longer used as a flow control mechanism in BitmapHeapNext(), so we removed table_scan_bitmap_next_tuple's TBMIterateResult parameter.

Note that the prefetch iterator is still saved in the BitmapHeapScanState node and advanced in generic bitmap table scan code. This is because 1) it was not necessary to change the prefetch iterator location to change the flow control in BitmapHeapNext(), 2) modifying prefetch iterator management requires several more steps better split over multiple commits, and 3) the prefetch iterator will be removed once the read stream API is used.

Author: Melanie Plageman
Reviewed-by: Tomas Vondra, Andres Freund, Heikki Linnakangas, Mark Dilger
Discussion: https://postgr.es/m/063e4eb4-32d9-439e-a0b1-75565a9835a8%40iki.fi

* Move EXPLAIN counter increment to heapam_scan_bitmap_next_block (Melanie Plageman, 2024-10-25)

Increment the lossy and exact page counters for EXPLAIN of bitmap heap scans in heapam_scan_bitmap_next_block(). Note that other table AMs will need to do this as well.

Pushing the counters into heapam_scan_bitmap_next_block() is required to be able to use the read stream API for bitmap heap scans. The bitmap iterator must be advanced from inside the read stream callback, so TBMIterateResults cannot be used as a flow control mechanism in BitmapHeapNext().

Author: Melanie Plageman
Reviewed-by: Tomas Vondra, Heikki Linnakangas
Discussion: https://postgr.es/m/063e4eb4-32d9-439e-a0b1-75565a9835a8%40iki.fi

* For inplace update, send nontransactional invalidations. (Noah Misch, 2024-10-25)

The inplace update survives ROLLBACK. The inval didn't, so another backend's DDL could then update the row without incorporating the inplace update. In the test this fixes, a mix of CREATE INDEX and ALTER TABLE resulted in a table with an index, yet relhasindex=f. That is a source of index corruption. Back-patch to v12 (all supported versions). The back branch versions don't change WAL, because those branches just added end-of-recovery SIResetAll(). All branches change the ABI of extern function PrepareToInvalidateCacheTuple(). No PGXN extension calls that, and there's no apparent use case in extensions.

Reviewed by Nitin Motiani and (in earlier versions) Andres Freund.

Discussion: https://postgr.es/m/20240523000548.58.nmisch@google.com

* Refactor code converting a publication name List to a StringInfo (Michael Paquier, 2024-10-25)

The existing get_publications_str() is renamed to GetPublicationsStr() and is moved to pg_subscription.c, so that it can be reused at two locations of the tablesync code where the same logic was duplicated.

fetch_remote_table_info() was doing two List->StringInfo conversions when dealing with a server of version 15 or newer. The conversion happens only once now.

This refactoring leads to less code overall.

Author: Peter Smith
Reviewed-by: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/CAHut+PtJMk4bKXqtpvqVy9ckknCgK9P6=FeG8zHF=6+Em_Snpw@mail.gmail.com

* Add functions pg_restore_relation_stats(), pg_restore_attribute_stats(). (Jeff Davis, 2024-10-24)

Similar to the pg_set_*_stats() functions, except with a variadic signature that's designed to be more future-proof. Additionally, most problems are reported as WARNINGs rather than ERRORs, allowing most stats to be restored even if some cannot.

These functions are intended to be called from pg_dump to avoid the need to run ANALYZE after an upgrade.

Author: Corey Huinker
Discussion: https://postgr.es/m/CADkLM=eErgzn7ECDpwFcptJKOk9SxZEk5Pot4d94eVTZsvj3gw@mail.gmail.com

* Support configuring TLSv1.3 cipher suites (Daniel Gustafsson, 2024-10-24)

The ssl_ciphers GUC can only set cipher suites for TLSv1.2 and lower connections. For TLSv1.3 connections a different OpenSSL API must be used. This adds a new GUC, ssl_tls13_ciphers, which can be used to configure a colon-separated list of cipher suites to support when performing a TLSv1.3 handshake.

Original patch by Erica Zhang with additional hacking by me.

Author: Erica Zhang <ericazhangy2021@qq.com>
Author: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Jelte Fennema-Nio <postgres@jeltef.nl>
Discussion: https://postgr.es/m/tencent_063F89FA72CCF2E48A0DF5338841988E9809@qq.com

* Raise the minimum supported OpenSSL version to 1.1.1 (Daniel Gustafsson, 2024-10-24)

Commit a70e01d4306fdbcd retired support for OpenSSL 1.0.2 in order to get rid of the need for manual initialization of the library. This left our API usage compatible with 1.1.0, which was defined as the minimum required version. Also mention that 3.4 is the minimum version required when using LibreSSL.

An upcoming commit will introduce support for configuring TLSv1.3 cipher suites, which requires an API call available in OpenSSL 1.1.1 and onwards. In order to support this setting, this commit sets 1.1.1 as the new minimum required version. The version-specific call for randomness init added in commit c3333dbc0c0 is removed as it's no longer needed.

Author: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/909A668B-06AD-47D1-B8EB-A164211AAD16@yesql.se
Discussion: https://postgr.es/m/tencent_063F89FA72CCF2E48A0DF5338841988E9809@qq.com

* Add 'no_error' argument to pg_wal_replay_wait() (Alexander Korotkov, 2024-10-24)

This argument allows skipping throwing an error. Instead, the result status can be obtained using the pg_wal_replay_wait_status() function.

Catversion is bumped.

Reported-by: Michael Paquier
Discussion: https://postgr.es/m/ZtUF17gF0pNpwZDI%40paquier.xyz
Reviewed-by: Pavel Borisov

* Refactor WaitForLSNReplay() to return the result of waiting (Alexander Korotkov, 2024-10-24)

Currently, WaitForLSNReplay() immediately throws an error if waiting for LSN replay is not successful. This commit teaches WaitForLSNReplay() to return the result of waiting, while making pg_wal_replay_wait() responsible for throwing an appropriate error.

This is preparation for adding a 'no_error' argument to pg_wal_replay_wait() and a new function pg_wal_replay_wait_status(), which returns the last wait result status.

Additionally, we stop distinguishing the situations where we find our instance to be not in a recovery state before entering the waiting loop and inside the waiting loop. Standby promotion may happen at any moment, even between issuing a procedure call statement and pg_wal_replay_wait() doing a first check of recovery status. Thus, there is no point in distinguishing these situations.

Also, since we may exit the waiting loop and see our instance not in recovery without throwing an error, we need to call deleteLSNWaiter() in that case. We do this unconditionally for the sake of simplicity; even if the standby was already promoted after reaching the target LSN, the startup process surely already deleted us.

Reported-by: Michael Paquier
Discussion: https://postgr.es/m/ZtUF17gF0pNpwZDI%40paquier.xyz
Reviewed-by: Michael Paquier, Pavel Borisov

* Move LSN waiting declarations and definitions to a better place (Alexander Korotkov, 2024-10-24)

3c5db1d6b implemented the pg_wal_replay_wait() stored procedure. Due to the patch development history, the implementation resided in src/backend/commands/waitlsn.c (src/include/commands/waitlsn.h for headers). 014f9f34d moved pg_wal_replay_wait() itself to src/backend/access/transam/xlogfuncs.c near to the WAL-manipulation functions. But most of the implementation stayed in place.

The code in src/backend/commands/waitlsn.c has nothing to do with commands, but is related to WAL. So, this commit moves this code into src/backend/access/transam/xlogwait.c (src/include/access/xlogwait.h for headers).

Reported-by: Peter Eisentraut
Discussion: https://postgr.es/m/18c0fa64-0475-415e-a1bd-665d922c5201%40eisentraut.org
Reviewed-by: Pavel Borisov

* Avoid looping over all type cache entries in TypeCacheRelCallback() (Alexander Korotkov, 2024-10-24)

Currently, when a single relcache entry gets invalidated, TypeCacheRelCallback() has to loop over all type cache entries to find the appropriate typentry to invalidate. Unfortunately, using the syscache here is impossible, because this callback could be called outside a transaction, which makes catalog lookups impossible. This is why the present commit introduces RelIdToTypeIdCacheHash to map a relation OID to its composite type OID.

We keep a RelIdToTypeIdCacheHash entry while the corresponding type cache entry has something to clean. Therefore, RelIdToTypeIdCacheHash shouldn't get bloated in the case of a flood of temporary tables.

There are many places in lookup_type_cache() where syscache invalidation, user interruption, or even an error could occur. In order to handle this, we keep an array of in-progress type cache entries. If lookup_type_cache() is interrupted, this array is processed to keep RelIdToTypeIdCacheHash in a consistent state.

Discussion: https://postgr.es/m/5812a6e5-68ae-4d84-9d85-b443176966a1%40sigaev.ru
Author: Teodor Sigaev
Reviewed-by: Aleksander Alekseev, Tom Lane, Michael Paquier, Roman Zharkov
Reviewed-by: Andrei Lepikhov, Pavel Borisov, Jian He, Alexander Lakhin
Reviewed-by: Artur Zakirov

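A rough sketch of how such an OID-to-OID mapping can be built with a dynahash table; the entry layout, table size, and names below are assumptions for illustration, not the committed code:

    #include "postgres.h"
    #include "utils/hsearch.h"

    /* Hypothetical entry: the relation OID keys the composite type OID. */
    typedef struct RelIdToTypeIdCacheEntry
    {
        Oid         relid;              /* hash key */
        Oid         composite_typid;    /* type cache entry to invalidate */
    } RelIdToTypeIdCacheEntry;

    static HTAB *RelIdToTypeIdCacheHash = NULL;

    static void
    build_relid_typeid_cache(void)
    {
        HASHCTL     ctl;

        ctl.keysize = sizeof(Oid);
        ctl.entrysize = sizeof(RelIdToTypeIdCacheEntry);
        RelIdToTypeIdCacheHash = hash_create("RelIdToTypeIdCache", 64, &ctl,
                                             HASH_ELEM | HASH_BLOBS);
    }
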
* Track more precisely query locations for nested statements (Michael Paquier, 2024-10-24)

Previously, a Query generated through the transform phase would have unset stmt_location, tracking the starting point of a query string. Extensions relying on the statement location to extract its relevant parts in the source text string would fall back to using the whole statement instead, leading to confusing results, like in pg_stat_statements for queries relying on nested queries:

- EXPLAIN, with top-level and nested query using the same query string, and a query ID coming from the nested query for the non-top-level entry.
- Multi-statements, with only partial portions of queries being normalized.
- COPY TO with a query, SELECT or DMLs.

This patch improves things by keeping track of the statement locations and propagating them to Query during transform, allowing PGSS to only show the relevant part of the query for nested queries. This leads to less bloat in entries for non-top-level entries, as queries can now be grouped within the same (toplevel, queryid) duos in pg_stat_statements. The result gives a stricter one-to-one mapping between query IDs and their query strings.

The regression tests introduced in 45e0ba30fc40 produce differences reflecting the new logic.

Author: Anthonin Bonnefoy
Reviewed-by: Michael Paquier, Jian He
Discussion: https://postgr.es/m/CAO6_XqqM6S9bQ2qd=75W+yKATwoazxSNhv5sjW06fjGAtHbTUA@mail.gmail.com

* Fix typo in tidstore.h. (Masahiko Sawada, 2024-10-23)

An oversight in commit f6bef362c.

Reviewed-by: David Rowley
Discussion: https://postgr.es/m/CAD21AoB8MJ5OHtpUw1UEGf7spioFmP3PNH44KNx6Yb3FiZSwKA%40mail.gmail.com

* Make SASL max message length configurable (Daniel Gustafsson, 2024-10-23)

The proposed OAUTHBEARER SASL mechanism will need to allow larger messages in the exchange, since tokens are sent directly by the client. Move this limit into the pg_be_sasl_mech struct so that it can be changed per-mechanism.

Author: Jacob Champion <jacob.champion@enterprisedb.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CAOYmi+nqX_5=Se0W0Ynrr55Fha3CMzwv_R9P3rkpHb=1kG7ZTQ@mail.gmail.com

* Add functions pg_set_attribute_stats() and pg_clear_attribute_stats(). (Jeff Davis, 2024-10-22)

Enable manipulation of attribute statistics. Only superficial validation is performed, so it's possible to add nonsense, and it's up to the planner (or other users of statistics) to behave reasonably in that case.

Bump catalog version.

Author: Corey Huinker
Discussion: https://postgr.es/m/CADkLM=eErgzn7ECDpwFcptJKOk9SxZEk5Pot4d94eVTZsvj3gw@mail.gmail.com

* Change pg_*_relation_stats() functions to return type void. (Jeff Davis, 2024-10-22)

These functions will either raise an ERROR or run to normal completion, so no return value is necessary.

Bump catalog version.

Author: Corey Huinker
Discussion: https://postgr.es/m/CADkLM=cBF8rnphuTyHFi3KYzB9ByDgx57HwK9Rz2yp7S+Om87w@mail.gmail.com

* Fix C23 compiler warning (Peter Eisentraut, 2024-10-22)

The approach of declaring a function pointer with an empty argument list and hoping that the compiler will not complain about casting it to another type no longer works with C23, because foo() is now equivalent to foo(void).

We don't need to do this here. With a few struct forward declarations we can supply a correct argument list without having to pull in another header file.

(This is the only new warning with C23. Together with the previous fix a67a49648d9, this makes the whole code compile cleanly under C23.)

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/95c6a9bf-d306-43d8-b880-664ef08f2944%40eisentraut.org

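A small sketch of the pattern involved; the typedef and struct names here are made up for illustration:

    /*
     * Pre-C23, an empty parameter list meant "unspecified arguments", so a
     * "generic" function pointer could be declared and later cast freely.
     * Under C23, foo() means foo(void), so such casts now draw warnings.
     */
    typedef void (*GenericHook) ();     /* old idiom, now problematic */

    /*
     * Fix in the spirit described above: forward-declare the structs and
     * spell out the real argument list, avoiding extra header inclusions.
     */
    struct ParserState;
    struct ExprNode;
    typedef void (*TypedHook) (struct ParserState *state, struct ExprNode *expr);
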
* SQL/JSON: Fix some oversights in commit b6e1157e7 (Amit Langote, 2024-10-20)

The decision in b6e1157e7 to ignore raw_expr when evaluating a JsonValueExpr was incorrect. While its value is not ultimately used (since formatted_expr's value is), failing to initialize it can lead to problems, for instance, when the expression tree in raw_expr contains Aggref nodes, which must be initialized to ensure the parent Agg node works correctly.

Also, optimize eval_const_expressions_mutator()'s handling of JsonValueExpr a bit. Currently, when formatted_expr cannot be folded into a constant, we end up processing it twice -- once directly in eval_const_expressions_mutator() and again recursively via ece_generic_processing(). This recursive processing is required to handle raw_expr. To avoid the redundant processing of formatted_expr, we now process raw_expr directly in eval_const_expressions_mutator().

Finally, update the comment of JsonValueExpr to describe the roles of raw_expr and formatted_expr more clearly.

Bug: #18657
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Diagnosed-by: Fabio R. Sluzala <fabio3rs@gmail.com>
Diagnosed-by: Tender Wang <tndrwang@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/18657-1b90ccce2b16bdb8@postgresql.org
Backpatch-through: 16

* Fix comment about pg_authid. (Tom Lane, 2024-10-19)

pg_shadow is not "publicly readable". (pg_group is, but there seems no need to make that distinction here.) Seems to be a thinko dating clear back to 7762619e9.

Antonin Houska

Discussion: https://postgr.es/m/31926.1729252247@antos

* Optimize nbtree backwards scans. (Peter Geoghegan, 2024-10-18)

Make nbtree backwards scans optimistically access the next page to be read to the left by following a prevPage block number that's now stashed in currPos when the leaf page is first read. This approach matches the one taken during forward scans, which follow a symmetric nextPage block number from currPos. We stash both a prevPage and a nextPage, since the scan direction might change (when fetching from a scrollable cursor).

Backwards scans will no longer need to lock the same page twice, except in rare cases where the scan detects a concurrent page split (or page deletion). Testing has shown this optimization to be particularly effective during parallel index-only backwards scans: ~12% reductions in query execution time are quite possible. We're much better off being optimistic; concurrent left sibling page splits are rare in general. It's possible that we'll need to lock more pages than the pessimistic approach would have, but only when there are _multiple_ concurrent splits of the left sibling page we now start at. If there's just a single concurrent left sibling page split, the new approach to scanning backwards will at least break even relative to the old one (we'll acquire the same number of leaf page locks as before).

The optimization from this commit has long been contemplated by comments added by commit 2ed5b87f96, which changed the rules for locking/pinning during nbtree index scans. The approach that that commit introduced to leaf level link traversal when scanning forwards is now more or less applied all the time, regardless of the direction we're scanning in. Following uniform conventions around sibling link traversal is simpler. The only real remaining difference between our forward and backwards handling is that our backwards handling must still detect and recover from any concurrent left sibling splits (and concurrent page deletions), as documented in the nbtree README. That is structured as a single, isolated extra step that takes place in _bt_readnextpage.

Also use this opportunity to further simplify the functions that deal with reading pages and traversing sibling links on the leaf level, and to document their preconditions and postconditions (with respect to things like buffer locks, buffer pins, and seizing the parallel scan).

This enhancement completely supersedes the one recently added by commit 3f44959f.

Author: Matthias van de Meent <boekewurm+postgres@gmail.com>
Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAEze2WgpBGRgTTxTWVPXc9+PB6fc1a7t+VyGXHzfnrFXcQVxnA@mail.gmail.com
Discussion: https://postgr.es/m/CAH2-WzkBTuFv7W2+84jJT8mWZLXVL0GHq2hMUTn6c9Vw=eYrCw@mail.gmail.com

* Improve ThrowErrorData() comments for use with soft errors. (Jeff Davis, 2024-10-17)

Reviewed-by: Corey Huinker
Discussion: https://postgr.es/m/901ab7cf01957f92ea8b30b6feeb0eacfb7505fc.camel@j-davis.com

* Improve node type forward reference (Peter Eisentraut, 2024-10-17)

Instead of using Node *, we can use an incomplete struct. That way, everything has the correct type and fewer casts are required. This technique is already used elsewhere in node type definitions.

Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/637eeea8-5663-460b-a114-39572c0f6c6e%40eisentraut.org

* Don't store intermediate hash values in ExprState->resvalue (David Rowley, 2024-10-17)

adf97c156 made it so ExprStates could support hashing and changed Hash Join to use that instead of manually extracting Datums from tuples and hashing them one column at a time.

When hashing multiple columns or expressions, the code added in that commit stored the intermediate hash value in the ExprState's resvalue field. That was a mistake as steps may be injected into the ExprState between each hashing step that look at or overwrite the stored intermediate hash value. EEOP_PARAM_SET is an example of such a step.

Here we fix this by adding a new dedicated field for storing intermediate hash values and adjust the code so that all apart from the final hashing step store their result in the intermediate field.

In passing, rename a variable so that it's more aligned to the surrounding code and also so a few lines stay within the 80 char margin.

Reported-by: Andres Freund
Reviewed-by: Alena Rybakina <a.rybakina@postgrespro.ru>
Discussion: https://postgr.es/m/CAApHDvqo9eenEFXND5zZ9JxO_k4eTA4jKMGxSyjdTrsmYvnmZw@mail.gmail.com

* Normalize nbtree truncated high key array behavior. (Peter Geoghegan, 2024-10-16)

Commit 5bf748b8 taught nbtree ScalarArrayOp index scans to decide when and how to start the next primitive index scan based on physical index characteristics. This included rules for deciding whether to start a new primitive index scan (or whether to move onto the right sibling leaf page instead) that specifically consider truncated lower-order columns (-inf columns) from leaf page high keys.

These omitted columns were treated as satisfying the scan's required scan keys, though only for scan keys marked required in the current scan direction (forward). Scan keys that didn't get this behavior (those marked required in the backwards direction only) usually didn't give the scan reasonable cause to reposition itself to a later leaf page (via another descent of the index in _bt_first), but _bt_advance_array_keys would nevertheless always give up by forcing another call to _bt_first.

_bt_advance_array_keys was unwilling to allow the scan to continue onto the next leaf page, to reconsider whether we really should start another primitive scan based on the details of the sibling page's tuples. This didn't match its behavior with similar cases involving keys required in the current scan direction (forward), which seems unprincipled. It led to an excessive number of primitive scans/index descents for queries with a higher-order = array scan key (with dense, contiguous values) mixed with a lower-order required > or >= scan key.

Bring > and >= strategy scan keys in line with other required scan key types: treat truncated -inf scan keys as having satisfied scan keys required in either scan direction (forwards and backwards alike) during array advancement. That way affected scans can continue to the right sibling leaf page. Advancement must now schedule an explicit recheck of the right sibling page's high key in cases involving > or >= scan keys. The recheck gives the scan a way to back out and start another primitive index scan (we can't just rely on _bt_checkkeys with > or >= scan keys).

This work can be considered a stand alone optimization on top of the work from commit 5bf748b8. But it was written in preparation for an upcoming patch that will add skip scan to nbtree. In practice scans that use "skip arrays" will tend to be much more sensitive to any implementation deficiencies in this area.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Tomas Vondra <tomas@vondra.me>
Discussion: https://postgr.es/m/CAH2-Wz=9A_UtM7HzUThSkQ+BcrQsQZuNhWOvQWK06PRkEp=SKQ@mail.gmail.com

* Add type cast to foreach_internal's loop variable. (Nathan Bossart, 2024-10-15)

C++ requires explicitly casting void pointers to the appropriate pointer type, which means the foreach_ptr macro cannot be used in C++ code without this change.

Author: Jelte Fennema-Nio
Reviewed-by: Bruce Momjian
Discussion: https://postgr.es/m/CAGECzQSYG3QfHrc-rOk2KbnB9iJOd7Qu-Xii1s-GTA%3D3JFt49Q%40mail.gmail.com
Backpatch-through: 17

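For reference, a usage sketch of the macro in question; the element type and the wrapper function here are illustrative:

    #include "postgres.h"
    #include "nodes/pg_list.h"
    #include "nodes/primnodes.h"

    /*
     * foreach_ptr declares the loop variable itself; the explicit cast added
     * by this commit is what lets the same macro also compile under C++.
     */
    static void
    walk_range_vars(List *rels)
    {
        foreach_ptr(RangeVar, rv, rels)
        {
            /* rv is a RangeVar * pointing at the current list element */
            (void) rv;
        }
    }
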
* Move clause_sides_match_join() into restrictinfo.h (David Rowley, 2024-10-15)

Two near-identical copies of clause_sides_match_join() existed in joinpath.c and analyzejoins.c. Deduplicate this by moving the function into restrictinfo.h.

It isn't quite clear that keeping the inline property of this function is worthwhile, but this commit is just an exercise in code deduplication. More effort would be required to determine if the inline property is worth keeping.

Author: James Hunter <james.hunter.pg@gmail.com>
Discussion: https://postgr.es/m/CAJVSvF7Nm_9kgMLOch4c-5fbh3MYg%3D9BdnDx3Dv7Fcb64zr64Q%40mail.gmail.com

* Add contrib/pg_logicalinspect. (Masahiko Sawada, 2024-10-14)

This module provides SQL functions that allow inspecting logical decoding components. It currently allows inspecting the contents of serialized logical snapshots of a running database cluster, which is useful for debugging or educational purposes.

Author: Bertrand Drouvot
Reviewed-by: Amit Kapila, Shveta Malik, Peter Smith, Peter Eisentraut
Reviewed-by: David G. Johnston
Discussion: https://postgr.es/m/ZscuZ92uGh3wm4tW%40ip-10-97-1-34.eu-west-3.compute.internal

* Move SnapBuild and SnapBuildOnDisk structs to snapshot_internal.h. (Masahiko Sawada, 2024-10-14)

This commit moves the definitions of the SnapBuild and SnapBuildOnDisk structs, related to logical snapshots, to the snapshot_internal.h file. This change allows external tools, such as pg_logicalinspect (with an upcoming patch), to access and utilize the contents of logical snapshots.

Author: Bertrand Drouvot
Reviewed-by: Amit Kapila, Shveta Malik, Peter Smith
Discussion: https://postgr.es/m/ZscuZ92uGh3wm4tW%40ip-10-97-1-34.eu-west-3.compute.internal

* Remove obsolete comment in reorderbuffer.h. (Masahiko Sawada, 2024-10-14)

Commit 9fab40ad32e changed ReorderBuffer to use Slab Context for allocating ReorderBufferTXN entries instead of using a caching mechanism. The txn->node is no longer used as an element of the list of preallocated ReorderBufferTXNs.

Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/CAD21AoB1CTnX66Ji3zTCnjoPVC9OzYe0B6LygUHcxEB2RV-hFw%40mail.gmail.com

* Track scan reversals in MergeJoin (Peter Eisentraut, 2024-10-14)

The MergeJoin struct was tracking "mergeStrategies", which were an array of btree strategy numbers, purely for the purpose of comparing it later against btree strategies to determine if the scan direction was forward or reverse. Change that. Instead, track "mergeReversals", an array of bool, to indicate the same without an unfortunate assumption that a strategy number refers specifically to a btree strategy.

Author: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/E72EAA49-354D-4C2E-8EB9-255197F55330@enterprisedb.com

* Track sort direction in SortGroupClause (Peter Eisentraut, 2024-10-14)

Functions make_pathkey_from_sortop() and transformWindowDefinitions(), which receive a SortGroupClause, were determining the sort order (ascending vs. descending) by comparing that structure's operator strategy to BTLessStrategyNumber, but could just as easily have gotten it from the SortGroupClause object, if it had such a field, so add one. This reduces the number of places that hardcode the assumption that the strategy refers specifically to a btree strategy, rather than some other index AM's operators.

Author: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/E72EAA49-354D-4C2E-8EB9-255197F55330@enterprisedb.com

* Remove traces of BeOS. (Peter Eisentraut, 2024-10-14)

Commit 15abc7788e6 tolerated namespace pollution from BeOS system headers. Commit 44f902122 de-supported BeOS. Since that stuff didn't make it into the Meson build system, synchronize by removing it from configure.

Author: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Japin Li <japinli@hotmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> (the idea, not the patch)
Discussion: https://postgr.es/m/ME3P282MB3166F9D1F71F787929C0C7E7B6312%40ME3P282MB3166.AUSP282.PROD.OUTLOOK.COM

* Create functions pg_set_relation_stats, pg_clear_relation_stats. (Jeff Davis, 2024-10-11)

These functions are used to tweak statistics on any relation, provided that the user has MAINTAIN privilege on the relation, or is the database owner.

Bump catalog version.

Author: Corey Huinker
Discussion: https://postgr.es/m/CADkLM=eErgzn7ECDpwFcptJKOk9SxZEk5Pot4d94eVTZsvj3gw@mail.gmail.com

* Avoid mixing custom and OpenSSL BIO functions (Daniel Gustafsson, 2024-10-11)

PostgreSQL has for a long time mixed two BIO implementations, which can lead to subtle bugs and inconsistencies. This cleans up our BIO by just setting up the methods we need. This patch does not introduce any functionality changes.

The following methods are no longer defined due to not being needed:

- gets: Not used by libssl
- puts: Not used by libssl
- create: Sets up state not used by libpq
- destroy: Not used since libpq uses BIO_NOCLOSE; if it were used it would close the socket out from under libpq
- callback_ctrl: Not implemented by sockets

The following methods are defined for our BIO:

- read: Used for reading arbitrary length data from the BIO. No change in functionality from the previous implementation.
- write: Used for writing arbitrary length data to the BIO. No change in functionality from the previous implementation.
- ctrl: Used for processing ctrl messages in the BIO (similar to ioctl). The only ctrl message which matters is BIO_CTRL_FLUSH, used for writing out buffered data (or signalling EOF and that no more data will be written). BIO_CTRL_FLUSH is mandatory to implement and is implemented as a no-op since there is no intermediate buffer to flush. BIO_CTRL_EOF is the out-of-band method for signalling EOF to read_ex-based BIOs. Our BIO is not read_ex-based, but someone could accidentally call BIO_CTRL_EOF on us, so it is implemented mainly for completeness' sake.

As the implementation is no longer related to BIO_s_socket or calling SSL_set_fd, methods have been renamed to reference the PGconn and Port types instead. This also reverts back to using BIO_set_data, with our fallback, as a small optimization, as BIO_set_app_data requires the ex_data mechanism in OpenSSL.

Author: David Benjamin <davidben@google.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CAF8qwaCZ97AZWXtg_y359SpOHe+HdJ+p0poLCpJYSUxL-8Eo8A@mail.gmail.com

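As an illustration of the general OpenSSL mechanism involved (not the actual libpq/backend code), a minimal custom BIO_METHOD with only read, write, and ctrl might be registered like this; the callback bodies are placeholders:

    #include <openssl/bio.h>

    /* Placeholder callbacks; real code would move data on the underlying socket. */
    static int
    example_read(BIO *h, char *buf, int size)
    {
        (void) h; (void) buf; (void) size;
        return -1;
    }

    static int
    example_write(BIO *h, const char *buf, int size)
    {
        (void) h; (void) buf; (void) size;
        return -1;
    }

    static long
    example_ctrl(BIO *h, int cmd, long num, void *ptr)
    {
        (void) h; (void) num; (void) ptr;
        /* BIO_CTRL_FLUSH must succeed; there is no buffer to flush here */
        return (cmd == BIO_CTRL_FLUSH) ? 1 : 0;
    }

    static BIO_METHOD *
    example_bio_method(void)
    {
        static BIO_METHOD *meth = NULL;

        if (meth == NULL)
        {
            meth = BIO_meth_new(BIO_get_new_index() | BIO_TYPE_SOURCE_SINK,
                                "example connection BIO");
            BIO_meth_set_read(meth, example_read);
            BIO_meth_set_write(meth, example_write);
            BIO_meth_set_ctrl(meth, example_ctrl);
        }
        return meth;
    }
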
* Add pg_ls_summariesdir(). (Nathan Bossart, 2024-10-11)

This function returns the name, size, and last modification time of each regular file in pg_wal/summaries. This allows administrators to grant privileges to view the contents of this directory without granting privileges on pg_ls_dir(), which allows listing the contents of many other directories. This commit also gives the pg_monitor predefined role EXECUTE privileges on the new pg_ls_summariesdir() function.

Bumps catversion.

Author: Yushi Ogiwara
Reviewed-by: Michael Paquier, Fujii Masao
Discussion: https://postgr.es/m/a0a3af15a9b9daa107739eb45aa9a9bc%40oss.nttdata.com

* Unbreak overflow test for attinhcount/coninhcount (Álvaro Herrera, 2024-10-10)

Commit 90189eefc1e1 narrowed pg_attribute.attinhcount and pg_constraint.coninhcount from 32 to 16 bits, but kept other related structs with 32-bit wide fields: ColumnDef and CookedConstraint contain an int 'inhcount' field which is itself checked for overflow on increments, but there's no check that the values aren't above INT16_MAX before assigning to the catalog columns. This means that a creative user can get an inconsistent table definition and override some protections. Fix it by changing those other structs to also use int16.

Also, modernize style by using pg_add_s16_overflow for overflow testing instead of checking for negative values.

We also have Constraint.inhcount, which is here removed completely. This was added by commit b0e96f311985 and not removed by its revert at 6f8bb7c1e961. It is not needed by the upcoming not-null constraints patch.

This is mostly academic, so we agreed not to backpatch to avoid ABI problems.

Bump catversion because of the changes to parse nodes.

Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Co-authored-by: 何建 (jian he) <jian.universality@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/202410081611.up4iyofb5ie7@alvherre.pgsql

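A sketch of the modernized overflow test mentioned above, using pg_add_s16_overflow from common/int.h instead of checking for a negative result afterwards; the wrapper function and error wording are illustrative:

    #include "postgres.h"
    #include "common/int.h"

    /* Increment a 16-bit inheritance count, erroring out on overflow. */
    static int16
    increment_inhcount(int16 inhcount)
    {
        int16       newcount;

        if (pg_add_s16_overflow(inhcount, 1, &newcount))
            ereport(ERROR,
                    errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
                    errmsg("too many inheritance parents"));
        return newcount;
    }
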