path: root/src/backend/access
* Add injection-point test for new multixact CV usage (Alvaro Herrera, 2024-08-20)

Before commit a0e0fb1ba56f, multixact.c contained a case in the
multixact-read path where it would loop sleeping 1ms each time until
another multixact-create path completed, which was not covered by any
tests.  That commit changed the code to rely on a condition variable
instead.  Add a test now, which relies on injection points and "loading"
thereof (because of it being in a critical section), per commit
4b211003ecc2.

Author: Andrey Borodin <x4mmm@yandex-team.ru>
Reviewed-by: Michaël Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/0925F9A9-4D53-4B27-A87E-3D83A757B0E0@yandex-team.ru

* Log the conflicts while applying changes in logical replication. (Amit Kapila, 2024-08-20)

This patch provides the additional logging information in the following
conflict scenarios while applying changes:

insert_exists: Inserting a row that violates a NOT DEFERRABLE unique constraint.
update_differ: Updating a row that was previously modified by another origin.
update_exists: The updated row value violates a NOT DEFERRABLE unique constraint.
update_missing: The tuple to be updated is missing.
delete_differ: Deleting a row that was previously modified by another origin.
delete_missing: The tuple to be deleted is missing.

For insert_exists and update_exists conflicts, the log can include the
origin and commit timestamp details of the conflicting key with
track_commit_timestamp enabled.

update_differ and delete_differ conflicts can only be detected when
track_commit_timestamp is enabled on the subscriber.

We do not offer additional logging for exclusion constraint violations
because these constraints can specify rules that are more complex than
simple equality checks.  Resolving such conflicts won't be
straightforward.  This area can be further enhanced if required.

Author: Hou Zhijie
Reviewed-by: Shveta Malik, Amit Kapila, Nisha Moond, Hayato Kuroda, Dilip Kumar
Discussion: https://postgr.es/m/OS0PR01MB5716352552DFADB8E9AD1D8994C92@OS0PR01MB5716.jpnprd01.prod.outlook.com

* Fix more holes with SLRU code in need of int64 for segment numbers (Michael Paquier, 2024-08-19)

This is a continuation of c9e24573905b, containing changes included into
the proposed patch that have been missed in the actual commit.  I have
managed to miss these diffs while doing a rebase of the original patch.

Thanks to Noah Misch, Peter Eisentraut and Alexander Korotkov for the
pokes.

Discussion: https://postgr.es/m/92fe572d-638e-4162-aef6-1c42a2936f25@eisentraut.org
Discussion: https://postgr.es/m/20240810175055.cd.nmisch@google.com
Backpatch-through: 17

* Search for SLRU page only in its own bank (Alvaro Herrera, 2024-08-18)

One of the two slot scans in SlruSelectLRUPage was not walking only the
slots in the specific bank where the buffer could be; change it to do
that.

Oversight in 53c2a97a9266.

Author: Sergey Sargsyan <sergey.sargsyan.2001@gmail.com>
Discussion: https://postgr.es/m/18582-5f301dd30ba91a38@postgresql.org

* Fix comments on wal_level=minimal, CREATE TABLESPACE and CREATE DATABASE. (Noah Misch, 2024-08-18)

Commit 97ddda8a82ac470ae581d0eb485b6577707678bc removed the rmtree()
behavior from XLOG_TBLSPC_CREATE, obsoleting that part of the comment.
The comment's point about XLOG_DBASE_CREATE was wrong when commit
fa0f466d5329e10b16f3b38c8eaf5306f7e234e8 introduced the point.  (It would
have been accurate if that commit had predated commit
fbcbc5d06f53aea412130deb52e216aa3883fb8d introducing the second
checkpoint of CREATE DATABASE.)  Nothing can skip log_smgrcreate() on the
basis of wal_level=minimal, so don't comment on that.  Commit
c6b92041d38512a4176ed76ad06f713d2e6c01a8 expanded WAL skipping from five
specific operations to relfilenodes generally, hence the
CreateDatabaseUsingFileCopy() comment change.

Discussion: https://postgr.es/m/20231008022204.cc@rfd.leadboat.com

* Use pgBufferUsage for buffer usage tracking in analyze. (Masahiko Sawada, 2024-08-13)

Previously, (auto)analyze used global variables VacuumPageHit,
VacuumPageMiss, and VacuumPageDirty to track buffer usage.  However,
pgBufferUsage provides a more generic way to track buffer usage with
support functions.

This change replaces those global variables with pgBufferUsage in
analyze.  Since analyze was the sole user of those variables, it removes
their declarations.  Vacuum previously used those variables but replaced
them with pgBufferUsage as part of a bug fix, commit 5cd72cc0c.

Additionally, it adjusts the buffer usage message in both vacuum and
analyze for better consistency.

Author: Anthonin Bonnefoy
Reviewed-by: Masahiko Sawada, Michael Paquier
Discussion: https://postgr.es/m/CAO6_Xqr__kTTCLkftqS0qSCm-J7_xbRG3Ge2rWhucxQJMJhcRA%40mail.gmail.com

* Give nbtree move right function internal linkage. (Peter Geoghegan, 2024-08-12)

Declare _bt_moveright() static.  This is a minor modularity win; the
routine was already private to nbtsearch.c for all practical purposes.

Author: Matthias van de Meent <boekewurm+postgres@gmail.com>
Discussion: https://postgr.es/m/CAEze2WgWVzCNEXQB_op5MMZMDgJ3fg3AhVm6bq2iZPpJNXGhWw@mail.gmail.com

* Fix a series of typos and outdated references (David Rowley, 2024-08-12)

Author: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/c1d63754-cb85-2d8a-8409-bde2c4d2d04b@gmail.com

* Fix bad indentation introduced in commit f011e82c2c (Heikki Linnakangas, 2024-08-12)

* Avoid unneeded nbtree backwards scan buffer locks. (Peter Geoghegan, 2024-08-11)

Teach nbtree backwards scans to avoid relocking a just-read leaf page to
read its current left sibling link when it isn't truly necessary.  This
happened inside _bt_readnextpage whenever _bt_readpage had already
determined that there'll be no further matches to the left (or at least
none for the current primitive index scan, for a scan with array keys).

A new precheck inside _bt_readnextpage is all that we need to avoid these
useless lock acquisitions.  Arguably, using a precheck like this was a
missed opportunity for commit 2ed5b87f96, which taught nbtree to drop
leaf page pins early to avoid blocking cleanup by VACUUM.  Forwards scans
already managed to avoid relocking the page like this.

The optimization added by this commit is particularly helpful with
backwards scans that use array keys where the scan must perform multiple
primitive index scans.  Such backwards scans will now avoid a useless
leaf page re-lock at the end of each primitive index scan.

Note that this commit does not attempt to avoid needlessly re-locking a
leaf page that was just read when the scan must follow the leaf page's
left link.  That more ambitious optimization could work by stashing the
left link when the page is first read by a backwards scan, allowing the
subsequent _bt_readnextpage call to optimistically skip re-reading the
original page just to get a new copy of its left link.  For now we only
address cases where we don't care about our original page's left link.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Matthias van de Meent <boekewurm+postgres@gmail.com>
Discussion: https://postgr.es/m/CAH2-Wz=xgs7PojG=EUvhgadwENzu_mY_riNh-w9wFPsaS717ew@mail.gmail.com

* Initialize HASHCTL differently, to suppress Coverity warning (Heikki Linnakangas, 2024-08-11)

Coverity complained that the hash_create() call might access
hash_table_ctl->hctl.  That's a false alarm, hash_create() only accesses
that field when passed the HASH_SHARED_MEM flag.  Try to silence it by
using a plain local variable instead of a const.  That's how the HASHCTL
is initialized in all the other hash_create() calls.

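For reference, a minimal sketch of the hash_create() pattern the commit settles
on, with a plain local HASHCTL; the entry struct and table name below are
hypothetical, not code from the commit:

    #include "postgres.h"
    #include "utils/hsearch.h"

    typedef struct MyEntry
    {
        Oid         relid;          /* hash key */
        int         counter;
    } MyEntry;

    static HTAB *
    create_my_table(void)
    {
        HASHCTL     ctl;            /* plain local variable, not static const */

        ctl.keysize = sizeof(Oid);
        ctl.entrysize = sizeof(MyEntry);

        /*
         * hash_create() reads ctl.hctl only when HASH_SHARED_MEM is passed,
         * so leaving it uninitialized here is fine.
         */
        return hash_create("my hash table", 128, &ctl, HASH_ELEM | HASH_BLOBS);
    }
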
* Mark misc static global variables as const (Heikki Linnakangas, 2024-08-06)

Reviewed-by: Andres Freund
Discussion: https://www.postgresql.org/message-id/54c29fb0-edf2-48ea-9814-44e918bbd6e8@iki.fi

* Turn a few 'validnsps' static variables into locals (Heikki Linnakangas, 2024-08-06)

There was no need for these to be static buffers, local variables work
just as well.  I think they were marked as 'static' to imply that they
are read-only, but 'const' is more appropriate for that, so change them
to const.

To make it possible to mark the variables as 'const', also add 'const'
decorations to the transformRelOptions() signature.

Reviewed-by: Andres Freund
Discussion: https://www.postgresql.org/message-id/54c29fb0-edf2-48ea-9814-44e918bbd6e8@iki.fi

* Allow parallel workers to cope with a newly-created session user ID. (Tom Lane, 2024-08-06)

Parallel workers failed after a sequence like
    BEGIN;
    CREATE USER foo;
    SET SESSION AUTHORIZATION foo;
because check_session_authorization could not see the uncommitted
pg_authid row for "foo".  This is because we ran RestoreGUCState() in a
separate transaction using an ordinary just-created snapshot.  The same
disease afflicts any other GUC that requires catalog lookups and isn't
forgiving about the lookups failing.

To fix, postpone RestoreGUCState() into the worker's main transaction
after we've set up a snapshot duplicating the leader's.  This affects
check_transaction_isolation and check_transaction_deferrable, which think
they should only run during transaction start.  Make them act like
check_transaction_read_only, which already knows it should silently
accept the value when InitializingParallelWorker.

This un-reverts commit f5f30c22e.  The original plan was to back-patch
that, but the fact that 0ae5b763e proved to be a pre-requisite shows that
the subtle API change for GUC hooks might actually break some of them.
The problem we're trying to fix seems not worth taking such a risk for in
stable branches.

Per bug #18545 from Andrey Rachitskiy.

Discussion: https://postgr.es/m/18545-feba138862f19aaa@postgresql.org

* Clean up handling of client_encoding GUC in parallel workers. (Tom Lane, 2024-08-06)

The previous coding here threw an error from assign_client_encoding if it
was invoked in a parallel worker.  That's a very fundamental violation of
the GUC hook API: assign hooks must not throw errors.  The place to
complain is in the check hook, so move the test to there, and use the
regular check-hook API (ie return false) to report it.

The reason this coding is a problem is that it breaks GUC rollback, which
may occur after we leave InitializingParallelWorker state.  That case
seems not actually reachable before now, but commit f5f30c22e made it
reachable, so we need to fix this before that can be un-reverted.

In passing, improve the commentary in ParallelWorkerMain, and add a check
for failure of SetClientEncoding.  That's another case that can't happen
now but might become possible after foreseeable code rearrangements
(notably, if the shortcut of skipping PrepareClientEncoding stops being
OK).

Discussion: https://postgr.es/m/18545-feba138862f19aaa@postgresql.org

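A hedged sketch of the check-hook shape this commit argues for: failures are
reported by returning false from the check hook rather than throwing from the
assign hook.  The GUC name and hook below are illustrative, not the exact
client_encoding code:

    #include "postgres.h"
    #include "access/parallel.h"    /* IsParallelWorker(), InitializingParallelWorker */
    #include "utils/guc.h"          /* GucSource, GUC_check_* reporting */

    static bool
    check_my_guc(char **newval, void **extra, GucSource source)
    {
        /*
         * While a parallel worker is absorbing the leader's settings, accept
         * the value silently; complain only about changes made later on.
         */
        if (IsParallelWorker() && !InitializingParallelWorker)
        {
            GUC_check_errcode(ERRCODE_INVALID_TRANSACTION_STATE);
            GUC_check_errmsg("cannot change \"my_guc\" during a parallel operation");
            return false;           /* regular check-hook failure, no ereport() */
        }

        return true;
    }
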
* Remove unnecessary declaration of heapam_methods (Michael Paquier, 2024-08-06)

This overlaps with the declaration at the end of heapam_handler.c that
lists all the callback routines for the heap table AM.

Author: Japin Li
Discussion: https://postgr.es/m/ME0P300MB04459456D5C4E70D48116896B6B12@ME0P300MB0445.AUSP300.PROD.OUTLOOK.COM

* Remove support for null pg_locale_t most places. (Jeff Davis, 2024-08-05)

Previously, passing NULL for pg_locale_t meant "use the libc provider and
the server environment".  Now that the database collation is represented
as a proper pg_locale_t (not dependent on setlocale()), remove special
cases for NULL.

Leave wchar2char() and char2wchar() unchanged for now, because the
callers don't always have a libc-based pg_locale_t available.

Discussion: https://postgr.es/m/cfd9eb85-c52a-4ec9-a90e-a5e4de56e57d@eisentraut.org
Reviewed-by: Peter Eisentraut, Andreas Karlsson

* Implement pg_wal_replay_wait() stored procedure (Alexander Korotkov, 2024-08-02)

pg_wal_replay_wait() is to be used on standby and specifies waiting for
the specific WAL location to be replayed.  This option is useful when the
user makes some data changes on primary and needs a guarantee to see
these changes are on standby.

The queue of waiters is stored in the shared memory as an LSN-ordered
pairing heap, where the waiter with the nearest LSN stays on the top.
During the replay of WAL, waiters whose LSNs have already been replayed
are deleted from the shared memory pairing heap and woken up by setting
their latches.

pg_wal_replay_wait() needs to wait without any snapshot held.  Otherwise,
the snapshot could prevent the replay of WAL records, implying a kind of
self-deadlock.  This is why it is only possible to implement
pg_wal_replay_wait() as a procedure working without an active snapshot,
not a function.

Catversion is bumped.

Discussion: https://postgr.es/m/eb12f9b03851bb2583adab5df9579b4b%40postgrespro.ru
Author: Kartyshov Ivan, Alexander Korotkov
Reviewed-by: Michael Paquier, Peter Eisentraut, Dilip Kumar, Amit Kapila
Reviewed-by: Alexander Lakhin, Bharath Rupireddy, Euler Taveira
Reviewed-by: Heikki Linnakangas, Kyotaro Horiguchi

* Add redo LSN to pgstats files (Michael Paquier, 2024-08-02)

This is used in the startup process to check that the pgstats file we are
reading includes the redo LSN referring to the shutdown checkpoint where
it has been written.  The redo LSN in the pgstats file needs to match
with what the control file has.

This is intended to be used for an upcoming change that will extend the
write of the stats file to happen during checkpoints, rather than only
shutdown sequences.

Bump PGSTAT_FILE_FORMAT_ID.

Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/Zp8o6_cl0KSgsnvS@paquier.xyz

* Revert "Allow parallel workers to cope with a newly-created session user ID." (Tom Lane, 2024-07-31)

This reverts commit f5f30c22ed69fb37b896c4d4546b2ab823c3fd61.

Some buildfarm animals are failing with "cannot change "client_encoding"
during a parallel operation".  It looks like assign_client_encoding is
unhappy at being asked to roll back a client_encoding setting after a
parallel worker encounters a failure.  There must be more to it though:
why didn't I see this during local testing?  In any case, it's clear that
moving the RestoreGUCState() call is not as side-effect-free as I
thought.

Given that the bug f5f30c22e intended to fix has gone unreported for
years, it's not something that's urgent to fix; I'm not willing to risk
messing with it further with only days to our next release wrap.

* Allow parallel workers to cope with a newly-created session user ID. (Tom Lane, 2024-07-31)

Parallel workers failed after a sequence like
    BEGIN;
    CREATE USER foo;
    SET SESSION AUTHORIZATION foo;
because check_session_authorization could not see the uncommitted
pg_authid row for "foo".  This is because we ran RestoreGUCState() in a
separate transaction using an ordinary just-created snapshot.  The same
disease afflicts any other GUC that requires catalog lookups and isn't
forgiving about the lookups failing.

To fix, postpone RestoreGUCState() into the worker's main transaction
after we've set up a snapshot duplicating the leader's.  This affects
check_transaction_isolation and check_transaction_deferrable, which think
they should only run during transaction start.  Make them act like
check_transaction_read_only, which already knows it should silently
accept the value when InitializingParallelWorker.

Per bug #18545 from Andrey Rachitskiy.  Back-patch to all supported
branches, because this has been wrong for awhile.

Discussion: https://postgr.es/m/18545-feba138862f19aaa@postgresql.org

* Relax check for return value from second call of pg_strnxfrm(). (Jeff Davis, 2024-07-30)

strxfrm() is not guaranteed to return the exact number of bytes needed to
store the result; it may return a higher value.

Discussion: https://postgr.es/m/32f85d88d1f64395abfe5a10dd97a62a4d3474ce.camel@j-davis.com
Reviewed-by: Heikki Linnakangas
Backpatch-through: 16

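A hedged sketch of the underlying two-call pattern, using plain libc strxfrm()
for illustration: the first call only sizes the buffer, and the value returned
by the second call may be smaller than that estimate, so an equality check is
too strict.  The helper name is invented:

    #include <stdlib.h>
    #include <string.h>
    #include <assert.h>

    static char *
    transform_key(const char *src)
    {
        size_t  estimate = strxfrm(NULL, src, 0);  /* sizing call */
        char   *buf = malloc(estimate + 1);
        size_t  actual;

        if (buf == NULL)
            return NULL;

        actual = strxfrm(buf, src, estimate + 1);
        assert(actual <= estimate);     /* relaxed: "<=", not "==" */
        return buf;
    }
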
* Replace static buf with a stack-allocated one in ReadControlFile (Heikki Linnakangas, 2024-07-30)

It's only used very locally within the function.

Reviewed-by: Robert Haas
Discussion: https://www.postgresql.org/message-id/7f86e06a-98c5-4ce3-8ec9-3885c8de0358@iki.fi

* Replace static buf with palloc in str_time() (Heikki Linnakangas, 2024-07-30)

The function is used only once in the startup process, so the leak into
current memory context is harmless.

This is a tiny step in making the server thread-safe.

Reviewed-by: Robert Haas
Discussion: https://www.postgresql.org/message-id/7f86e06a-98c5-4ce3-8ec9-3885c8de0358@iki.fi

* Fix more holes with SLRU code in need of int64 for segment numbers (Michael Paquier, 2024-07-27)

This is a continuation of 3937cadfd438, taking care of more areas I have
managed to miss previously.

Reported-by: Noah Misch
Reviewed-by: Noah Misch
Discussion: https://postgr.es/m/20240724130059.1f.nmisch@google.com
Backpatch-through: 17

* Wait for WAL summarization to catch up before creating .partial file. (Robert Haas, 2024-07-26)

When a standby is promoted, CleanupAfterArchiveRecovery() may decide to
rename the final WAL file from the old timeline by adding ".partial" to
the name.  If WAL summarization is enabled and this file is renamed
before its partial contents are summarized, WAL summarization breaks:
the summarizer gets stuck at that point in the WAL stream and just
errors out.

To fix that, first make the startup process wait for WAL summarization to
catch up before renaming the file.  Generally, this should be quick, and
if it's not, the user can shut off summarize_wal and try again.  To make
this fix work, also teach the WAL summarizer that after a promotion has
occurred, no more WAL can appear on the previous timeline: previously,
the WAL summarizer wouldn't switch to the new timeline until we actually
started writing WAL there, but that meant that when the startup process
was waiting for the WAL summarizer, it was waiting for an action that the
summarizer wasn't yet prepared to take.

In the process of fixing these bugs, I realized that the logic to wait
for WAL summarization to catch up was spread out in a way that made it
difficult to reuse properly, so this code refactors things to make it
easier.

Finally, add a test case that would have caught this bug and the
previously-fixed bug that WAL summarization sometimes needs to back up
when the timeline changes.

Discussion: https://postgr.es/m/CA+TgmoZGEsZodXC4f=XZNkAeyuDmWTSkpkjCEOcF19Am0mt_OA@mail.gmail.com

* Refactor tidstore.c iterator buffering. (Thomas Munro, 2024-07-24)

Previously, TidStoreIterateNext() would expand the set of offsets for
each block into an internal buffer that it overwrote each time.  In order
to be able to collect the offsets for multiple blocks before working with
them, change the contract.  Now, the offsets are obtained by a separate
call to TidStoreGetBlockOffsets(), which can be called at a later time.
TidStoreIteratorResult objects are safe to copy and store in a queue.

Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/CAAKRu_bbkmwAzSBgnezancgJeXrQZXy4G4kBTd+5=cr86H5yew@mail.gmail.com

* Allow altering of two_phase option of a SUBSCRIPTION. (Amit Kapila, 2024-07-24)

The two_phase option is controlled by both the publisher (as a slot
option) and the subscriber (as a subscription option), so the slot option
must also be modified.

Changing the 'two_phase' option for a subscription from 'true' to 'false'
is permitted only when there are no pending prepared transactions
corresponding to that subscription.  Otherwise, the changes of already
prepared transactions can be replicated again along with their
corresponding commit leading to duplicate data or errors.

To avoid data loss, the 'two_phase' option for a subscription can only be
changed from 'false' to 'true' once the initial data synchronization is
completed.  Therefore this is performed later by the logical replication
worker.

Author: Hayato Kuroda, Ajin Cherian, Amit Kapila
Reviewed-by: Peter Smith, Hou Zhijie, Amit Kapila, Vitaly Davydov, Vignesh C
Discussion: https://postgr.es/m/8fab8-65d74c80-1-2f28e880@39088166

* Use more consistently int64 for page numbers in SLRU-related code (Michael Paquier, 2024-07-23)

clog.c, async.c and predicate.c included some SLRU page numbers still
handled as 4-byte integers, while int64 should be used for this purpose.

These holes were introduced in 4ed8f0913bfd, which introduced the use of
8-byte integers for SLRU page numbers but missed the code paths updated
by this commit.

Reported-by: Noah Misch
Author: Aleksander Alekseev, Michael Paquier
Discussion: https://postgr.es/m/20240626002747.dc.nmisch@google.com
Backpatch-through: 17

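To make the shape of such fixes concrete, a minimal sketch with an invented
per-page constant and helper name (neither is the actual clog.c code); the
point is simply that page-number variables and arithmetic stay 64-bit:

    #include "postgres.h"
    #include "access/transam.h"     /* FullTransactionId, U64FromFullTransactionId() */

    #define XACTS_PER_PAGE  32768   /* hypothetical per-page capacity */

    static int64
    xid_to_slru_page(FullTransactionId fxid)
    {
        /* 64-bit arithmetic throughout; never narrow the result to int */
        return (int64) (U64FromFullTransactionId(fxid) / XACTS_PER_PAGE);
    }
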
* Get rid of a global variable (Peter Eisentraut, 2024-07-23)

bootstrap_data_checksum_version can just as easily be passed to where it
is used via function arguments.

Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/flat/e0a62134-83da-4ba4-8cdb-ceb0111c95ce@eisentraut.org

* Improve comments in slru.{c,h} about segment name format (Michael Paquier, 2024-07-23)

slru.h described incorrectly how SLRU segment names are formatted
depending on the segment number and if long or short segment names are
used.  This commit closes the gap with a better description, fitting with
the reality.

Reported-by: Noah Misch
Author: Aleksander Alekseev
Discussion: https://postgr.es/m/20240626002747.dc.nmisch@google.com
Backpatch-through: 17

* Initialize wal_level in the initial checkpoint record. (Robert Haas, 2024-07-22)

As per Coverity and Tom Lane, commit 402b586d0 (back-patched to v17 as
2b5819e2b) forgot to initialize this new structure member in this code
path.

* Ensure vacuum removes all visibly dead tuples older than OldestXmin (Melanie Plageman, 2024-07-19)

If vacuum fails to remove a tuple with xmax older than
VacuumCutoffs->OldestXmin and younger than GlobalVisState->maybe_needed,
it may attempt to freeze the tuple's xmax and then ERROR out in
pre-freeze checks with "cannot freeze committed xmax".

Fix this by having vacuum always remove tuples older than OldestXmin.

It is possible for GlobalVisState->maybe_needed to precede OldestXmin if
maybe_needed is forced to go backward while vacuum is running.  This can
happen if a disconnected standby with a running transaction older than
VacuumCutoffs->OldestXmin reconnects to the primary after vacuum
initially calculates GlobalVisState and OldestXmin.

In back branches starting with 14, the first version using
GlobalVisState, failing to remove tuples older than OldestXmin during
pruning caused vacuum to infinitely loop in lazy_scan_prune(), as
investigated on this [1] thread.  After 1ccc1e05ae removed the retry loop
in lazy_scan_prune() and stopped comparing tuples to OldestXmin, the hang
could no longer happen, but we could still attempt to freeze dead tuples
with xmax older than OldestXmin -- resulting in an ERROR.

Fix this by always removing dead tuples with xmax older than
VacuumCutoffs->OldestXmin.  This is okay because the standby won't replay
the tuple removal until the tuple is removable.  Thus, the worst that can
happen is a recovery conflict.

[1] https://postgr.es/m/20240415173913.4zyyrwaftujxthf2%40awork3.anarazel.de#1b216b7768b5bd577a3d3d51bd5aadee

Back-patch through 14

Author: Melanie Plageman
Reviewed-by: Peter Geoghegan, Robert Haas, Andres Freund, Heikki Linnakangas, and Noah Misch
Discussion: https://postgr.es/m/CAAKRu_bDD7oq9ZwB2OJqub5BovMG6UjEYsoK2LVttadjEqyRGg%40mail.gmail.com

* Do not summarize WAL if generated with wal_level=minimal. (Robert Haas, 2024-07-18)

To do this, we must include the wal_level in the first WAL record covered
by each summary file; so add wal_level to struct Checkpoint and the
payload of XLOG_CHECKPOINT_REDO and XLOG_END_OF_RECOVERY.

This, in turn, requires bumping XLOG_PAGE_MAGIC and, since the Checkpoint
is also stored in the control file, also PG_CONTROL_VERSION.  It's not
great to do that so late in the release cycle, but the alternative seems
to ship v17 without robust protections against this scenario, which could
result in corrupted incremental backups.

A side effect of this patch is that, when a server with wal_level=replica
is started with summarize_wal=on for the first time, summarization will
no longer begin with the oldest WAL that still exists in pg_wal, but
rather from the first checkpoint after that.  This change should be
harmless, because a WAL summary for a partial checkpoint cycle can never
make an incremental backup possible when it would otherwise not have
been.

Report by Fujii Masao.  Patch by me.  Review and/or testing by Jakub
Wartak and Fujii Masao.

Discussion: http://postgr.es/m/6e30082e-041b-4e31-9633-95a66de76f5d@oss.nttdata.com

* Use PqMsg_* macros in more places. (Nathan Bossart, 2024-07-17)

Commit f4b54e1ed9, which introduced macros for protocol characters,
missed updating a few places.  It also did not introduce macros for
messages sent from parallel workers to their leader processes.  This
commit adds a new section in protocol.h for those.

Author: Aleksander Alekseev
Discussion: https://postgr.es/m/CAJ7c6TNTd09AZq8tGaHS3LDyH_CCnpv0oOz2wN1dGe8zekxrdQ%40mail.gmail.com
Backpatch-through: 17

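A minimal sketch of the style this commit extends: the PqMsg_ReadyForQuery
macro from protocol.h replaces the bare 'Z' character literal.  The wrapper
function here is illustrative, not code from the commit:

    #include "postgres.h"
    #include "libpq/pqformat.h"
    #include "libpq/protocol.h"

    static void
    send_ready_for_query_sketch(char txn_status)
    {
        StringInfoData buf;

        pq_beginmessage(&buf, PqMsg_ReadyForQuery);     /* rather than 'Z' */
        pq_sendbyte(&buf, txn_status);
        pq_endmessage(&buf);
    }
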
* Assign error codes where missing for user-facing failures (Michael Paquier, 2024-07-04)

All the errors triggered in the code paths patched here would cause the
backend to issue an internal_error errcode, which is a state that should
be used only for "can't happen" situations.  However, these code paths
are reachable by the regression tests, and could be seen by users in
valid cases.

Some regression tests expect internal errcodes as they manipulate the
backend state to cause corruption (like checksums), or use elog() because
it is more convenient (like injection points); these have no need to
change.

This reduces the number of internal failures triggered in a check-world
by more than half, while providing correct errcodes for these valid
cases.

Reviewed-by: Robert Haas
Discussion: https://postgr.es/m/Zic_GNgos5sMxKoa@paquier.xyz

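As a hedged illustration of the pattern (not code from the commit), a
user-reachable failure gets an explicit errcode through ereport() instead of
defaulting to XX000; the helper and message text are invented:

    #include "postgres.h"
    #include "storage/block.h"

    /* illustrative only: report a user-visible failure with a real errcode */
    static void
    report_bad_page(BlockNumber blkno)
    {
        ereport(ERROR,
                (errcode(ERRCODE_DATA_CORRUPTED),
                 errmsg("invalid page in block %u", blkno)));
    }
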
* Use safe string copy routine (Daniel Gustafsson, 2024-07-02)

Using memcpy with strlen as the size parameter will not take the NULL
terminator into account, relying instead on the destination buffer being
properly initialized.  Replace with strlcpy which is a safer alternative,
and more in line with how we handle copying strings elsewhere.

Author: Ranier Vilela <ranier.vf@gmail.com>
Discussion: https://postgr.es/m/CAEudQApAsbLsQ+gGiw-hT+JwGhgogFa_=5NUkgFO6kOPxyNidQ@mail.gmail.com

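A small before/after sketch of the difference, with an illustrative helper
(postgres.h supplies strlcpy() on platforms that lack it):

    #include "postgres.h"
    #include <string.h>

    static void
    copy_name(char *dst, size_t dstsize, const char *src)
    {
        /*
         * Before: memcpy(dst, src, strlen(src)) omits the NUL terminator and
         * relies on dst having been zero-initialized beforehand.
         *
         * After: strlcpy() is bounded by dstsize and always NUL-terminates.
         */
        strlcpy(dst, src, dstsize);
    }
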
* Preserve CurrentMemoryContext across Start/CommitTransactionCommand. (Tom Lane, 2024-07-01)

Up to now, committing a transaction has caused CurrentMemoryContext to
get set to TopMemoryContext.  Most callers did not pay any particular
heed to this, which is problematic because TopMemoryContext is a
long-lived context that never gets reset.  If the caller assumes it can
leak memory because it's running in a limited-lifespan context, that
behavior translates into a session-lifespan memory leak.

The first-reported instance of this involved ProcessIncomingNotify, which
is called from the main processing loop that normally runs in
MessageContext.  That outer-loop code assumes that whatever it allocates
will be cleaned up when we're done processing the current client message
--- but if we service a notify interrupt, then whatever gets allocated
before the next switch to MessageContext will be permanently leaked in
TopMemoryContext.  sinval catchup interrupts have a similar problem, and
I strongly suspect that some places in logical replication do too.

To fix this in a generic way, let's redefine the behavior as
"CommitTransactionCommand restores the memory context that was current at
entry to StartTransactionCommand".  This clearly fixes the issue for the
notify and sinval cases, and it seems to match the mental model that's in
use in the logical replication code, to the extent that anybody thought
about it there at all.

For consistency, likewise make subtransaction exit restore the context
that was current at subtransaction start (rather than always selecting
the CurTransactionContext of the parent transaction level).  This case
has less risk of resulting in a permanent leak than the outer-level
behavior has, but it would not meet the principle of least surprise for
some CommitTransactionCommand calls to restore the previous context while
others don't.

While we're here, also change xact.c so that we reset
TopTransactionContext at transaction exit and then re-use it in later
transactions, rather than dropping and recreating it in each cycle.  This
probably doesn't save a lot given the context recycling mechanism in
aset.c, but it should save a little bit.  Per suggestion from David
Rowley.  (Parenthetically, the text in src/backend/utils/mmgr/README
implies that this is how I'd planned to implement it as far back as
commit 1aebc3618 --- but the code actually added in that commit just
drops and recreates it each time.)

This commit also cleans up a few places outside xact.c that were
needlessly making TopMemoryContext current, and thus risking more leaks
of the same kind.  I don't think any of them represent significant leak
risks today, but let's deal with them while the issue is top-of-mind.

Per bug #18512 from wizardbrony.  Commit to HEAD only, as this change
seems to have some risk of breaking things for some callers.  We'll apply
a narrower fix for the known-broken cases in the back branches.

Discussion: https://postgr.es/m/3478884.1718656625@sss.pgh.pa.us

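A hedged sketch of the calling pattern this commit legitimizes: code running in
a short-lived context can wrap transaction control calls and end up back in
that context afterwards.  Only the xact.c entry points are real; the function
name and its surroundings are illustrative:

    #include "postgres.h"
    #include "access/xact.h"

    static void
    process_notify_like_work(void)
    {
        /* caller is in some short-lived context, e.g. MessageContext */
        StartTransactionCommand();

        /* ... catalog lookups, allocations, etc. ... */

        CommitTransactionCommand();

        /*
         * With this commit, CurrentMemoryContext is now the context that was
         * current at StartTransactionCommand(), not TopMemoryContext, so
         * palloc() here no longer leaks for the life of the session.
         */
    }
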
* Prevent summarizer hang when summarize_wal turned off and back on. (Robert Haas, 2024-06-28)

Before this commit, when the WAL summarizer started up or recovered from
an error, it would resume summarization from wherever it left off.  That
was OK normally, but wrong if summarize_wal had been turned off
temporarily, allowing some WAL to be removed, and then turned back on
again.  In such cases, the WAL summarizer would simply hang forever.

This commit changes the reinitialization sequence for WAL summarizer to
rederive the starting position in the way we were already doing at
initial startup, fixing the problem.

Per report from Israel Barth Rubio.  Reviewed by Tom Lane.

Discussion: http://postgr.es/m/CA+TgmoYN6x=YS+FoFOS6=nr6=qkXZFWhdiL7k0oatGwug2hcuA@mail.gmail.com

* Fix comments in heaptuple.c (Michael Paquier, 2024-06-28)

Since e27f4ee0a701, fastgetattr() and heap_getattr() are not macros, but
inlined functions.

Author: Junwang Zhao
Reviewed-by: Stepan Neretin
Discussion: https://postgr.es/m/CAEG8a3JS-JKWWyOcM7BU=vPqFXa3W7mZSHnvc3CBqx=tC+3SCA@mail.gmail.com

* Add an injection_points isolation test suite. (Noah Misch, 2024-06-27)

Make the isolation harness recognize injection_points wait events as a
type of blocked state.  Test an extant inplace-update bug.

Reviewed by Robert Haas and Michael Paquier.

Discussion: https://postgr.es/m/20240512232923.aa.nmisch@google.com

* Fix MVCC bug with prepared xact with subxacts on standby (Heikki Linnakangas, 2024-06-27)

We did not recover the subtransaction IDs of prepared transactions when
starting a hot standby from a shutdown checkpoint.  As a result, such
subtransactions were considered as aborted, rather than in-progress.
That would lead to hint bits being set incorrectly, and the
subtransactions suddenly becoming visible to old snapshots when the
prepared transaction was committed.

To fix, update pg_subtrans with prepared transactions' subxids when
starting hot standby from a shutdown checkpoint.  The snapshots taken
from that state need to be marked as "suboverflowed", so that we also
check the pg_subtrans.

Backport to all supported versions.

Discussion: https://www.postgresql.org/message-id/6b852e98-2d49-4ca1-9e95-db419a2696e0@iki.fi

* Fix bugs in MultiXact truncation (Heikki Linnakangas, 2024-06-26)

1. TruncateMultiXact() performs the SLRU truncations in a critical
section.  Deleting the SLRU segments calls ForwardSyncRequest(), which
will try to compact the request queue if it's full
(CompactCheckpointerRequestQueue()).  That in turn allocates memory,
which is not allowed in a critical section.  Backtrace:

    TRAP: failed Assert("CritSectionCount == 0 || (context)->allowInCritSection"), File: "../src/backend/utils/mmgr/mcxt.c", Line: 1353, PID: 920981
    postgres: autovacuum worker template0(ExceptionalCondition+0x6e)[0x560a501e866e]
    postgres: autovacuum worker template0(+0x5dce3d)[0x560a50217e3d]
    postgres: autovacuum worker template0(ForwardSyncRequest+0x8e)[0x560a4ffec95e]
    postgres: autovacuum worker template0(RegisterSyncRequest+0x2b)[0x560a50091eeb]
    postgres: autovacuum worker template0(+0x187b0a)[0x560a4fdc2b0a]
    postgres: autovacuum worker template0(SlruDeleteSegment+0x101)[0x560a4fdc2ab1]
    postgres: autovacuum worker template0(TruncateMultiXact+0x2fb)[0x560a4fdbde1b]
    postgres: autovacuum worker template0(vac_update_datfrozenxid+0x4b3)[0x560a4febd2f3]
    postgres: autovacuum worker template0(+0x3adf66)[0x560a4ffe8f66]
    postgres: autovacuum worker template0(AutoVacWorkerMain+0x3ed)[0x560a4ffe7c2d]
    postgres: autovacuum worker template0(+0x3b1ead)[0x560a4ffecead]
    postgres: autovacuum worker template0(+0x3b620e)[0x560a4fff120e]
    postgres: autovacuum worker template0(+0x3b3fbb)[0x560a4ffeefbb]
    postgres: autovacuum worker template0(+0x2f724e)[0x560a4ff3224e]
    /lib/x86_64-linux-gnu/libc.so.6(+0x27c8a)[0x7f62cc642c8a]
    /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x7f62cc642d45]
    postgres: autovacuum worker template0(_start+0x21)[0x560a4fd16f31]

To fix, bail out in CompactCheckpointerRequestQueue() without doing
anything, if it's called in a critical section.  That covers the above
call path, as well as any other similar cases where RegisterSyncRequest
might be called in a critical section.

2. After fixing that, another problem became apparent: Autovacuum process
doing that truncation can deadlock with the checkpointer process.
TruncateMultiXact() sets "MyProc->delayChkptFlags |= DELAY_CHKPT_START".
If the sync request queue is full and cannot be compacted, the process
will repeatedly sleep and retry, until there is room in the queue.
However, if the checkpointer is trying to start a checkpoint at the same
time, and is waiting for the DELAY_CHKPT_START processes to finish, the
queue will never shrink.

More concretely, the autovacuum process is stuck here:

    #0  0x00007fc934926dc3 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
    #1  0x000056220b24348b in WaitEventSetWaitBlock (set=0x56220c2e4b50, occurred_events=0x7ffe7856d040, nevents=1, cur_timeout=<optimized out>) at ../src/backend/storage/ipc/latch.c:1570
    #2  WaitEventSetWait (set=0x56220c2e4b50, timeout=timeout@entry=10, occurred_events=<optimized out>, occurred_events@entry=0x7ffe7856d040, nevents=nevents@entry=1, wait_event_info=wait_event_info@entry=150994949) at ../src/backend/storage/ipc/latch.c:1516
    #3  0x000056220b243224 in WaitLatch (latch=<optimized out>, latch@entry=0x0, wakeEvents=wakeEvents@entry=40, timeout=timeout@entry=10, wait_event_info=wait_event_info@entry=150994949) at ../src/backend/storage/ipc/latch.c:538
    #4  0x000056220b26cf46 in RegisterSyncRequest (ftag=ftag@entry=0x7ffe7856d0a0, type=type@entry=SYNC_FORGET_REQUEST, retryOnError=true) at ../src/backend/storage/sync/sync.c:614
    #5  0x000056220af9db0a in SlruInternalDeleteSegment (ctl=ctl@entry=0x56220b7beb60 <MultiXactMemberCtlData>, segno=segno@entry=11350) at ../src/backend/access/transam/slru.c:1495
    #6  0x000056220af9dab1 in SlruDeleteSegment (ctl=ctl@entry=0x56220b7beb60 <MultiXactMemberCtlData>, segno=segno@entry=11350) at ../src/backend/access/transam/slru.c:1566
    #7  0x000056220af98e1b in PerformMembersTruncation (oldestOffset=<optimized out>, newOldestOffset=<optimized out>) at ../src/backend/access/transam/multixact.c:3006
    #8  TruncateMultiXact (newOldestMulti=newOldestMulti@entry=3221225472, newOldestMultiDB=newOldestMultiDB@entry=4) at ../src/backend/access/transam/multixact.c:3201
    #9  0x000056220b098303 in vac_truncate_clog (frozenXID=749, minMulti=<optimized out>, lastSaneFrozenXid=749, lastSaneMinMulti=3221225472) at ../src/backend/commands/vacuum.c:1917
    #10 vac_update_datfrozenxid () at ../src/backend/commands/vacuum.c:1760
    #11 0x000056220b1c3f76 in do_autovacuum () at ../src/backend/postmaster/autovacuum.c:2550
    #12 0x000056220b1c2c3d in AutoVacWorkerMain (startup_data=<optimized out>, startup_data_len=<optimized out>) at ../src/backend/postmaster/autovacuum.c:1569

and the checkpointer is stuck here:

    #0  0x00007fc9348ebf93 in clock_nanosleep () from /lib/x86_64-linux-gnu/libc.so.6
    #1  0x00007fc9348fe353 in nanosleep () from /lib/x86_64-linux-gnu/libc.so.6
    #2  0x000056220b40ecb4 in pg_usleep (microsec=microsec@entry=10000) at ../src/port/pgsleep.c:50
    #3  0x000056220afb43c3 in CreateCheckPoint (flags=flags@entry=108) at ../src/backend/access/transam/xlog.c:7098
    #4  0x000056220b1c6e86 in CheckpointerMain (startup_data=<optimized out>, startup_data_len=<optimized out>) at ../src/backend/postmaster/checkpointer.c:464

To fix, add AbsorbSyncRequests() to the loops where the checkpointer
waits for DELAY_CHKPT_START or DELAY_CHKPT_COMPLETE operations to finish.

Backpatch to v14.  Before that, SLRU deletion didn't call
RegisterSyncRequest, which avoided this failure.  I'm not sure if there
are other similar scenarios on older versions, but we haven't had any
such reports.

Discussion: https://www.postgresql.org/message-id/ccc66933-31c1-4f6a-bf4b-45fef0d4f22e@iki.fi

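A simplified outline of fix 1 above, not the exact checkpointer.c code:
CompactCheckpointerRequestQueue() gives up immediately inside a critical
section, because compaction would have to allocate memory.

    #include "postgres.h"
    #include "miscadmin.h"          /* CritSectionCount */

    static bool
    CompactCheckpointerRequestQueue(void)
    {
        /* memory allocation is forbidden in a critical section; do nothing */
        if (CritSectionCount > 0)
            return false;

        /* ... build a hash table of pending requests and drop duplicates ... */
        return true;
    }
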
* Fix nbtree array unsatisfied inequality check. (Peter Geoghegan, 2024-06-26)

_bt_advance_array_keys didn't take sufficient care at the point where it
decides whether to start a new primitive index scan based on a call to
_bt_check_compare against finaltup (a call with the scan direction
flipped around).  The final decision was conditioned on rules about how
the scan key offset sktrig that initially triggered array advancement
(passed to _bt_advance_array_keys from its _bt_checkkeys caller) compared
to the offset set by its own _bt_check_compare finaltup call.  This
approach was faulty, in that it allowed _bt_advance_array_keys to
incorrectly start a new primitive index scan, that landed on the same
leaf page (on assert-enabled builds it led to an assertion failure).

In general, scans with array keys are expected to never have to read the
same leaf page more than once (barring cases involving cursors, and cases
where the scan restores a marked position for the inner side of a merge
join).  This principle was established by commit 5bf748b8.

To fix, make the final decision based on whether the scan key offset set
by the _bt_check_compare finaltup call is an offset to an inequality
strategy scan key.  An unsatisfied required inequality strategy scan key
indicates that all of the scan's required equality strategy scan keys
must also be satisfied by finaltup (not just by caller's tuple), and that
there is a decent chance that _bt_first will be able to reposition the
scan to a position many leaf pages ahead of the current leaf page.

Oversight in commit 5bf748b8.

Discussion: https://postgr.es/m/CAH2-Wz=DyHbcg7o6zXqzyiin8WE8vzk4tvU8Lrnh-a=EAvO0TQ@mail.gmail.com

* Fix insertion of SP-GiST REDIRECT tuples during REINDEX CONCURRENTLY. (Tom Lane, 2024-06-17)

Reconstruction of an SP-GiST index by REINDEX CONCURRENTLY may insert
some REDIRECT tuples.  This will typically happen in a transaction that
lacks an XID, which leads either to assertion failure in spgFormDeadTuple
or to insertion of a REDIRECT tuple with zero xid.  The latter's not good
either, since eventually VACUUM will apply GlobalVisTestIsRemovableXid()
to the zero xid, resulting in either an assertion failure or a garbage
answer.

In practice, since REINDEX CONCURRENTLY locks out index scans till it's
done, it doesn't matter whether it inserts REDIRECTs or PLACEHOLDERs; and
likewise it doesn't matter how soon VACUUM reduces such a REDIRECT to a
PLACEHOLDER.  So in non-assert builds there's no observable problem here,
other than perhaps a little index bloat.  But it's not behaving as
intended.

To fix, remove the failing Assert in spgFormDeadTuple, acknowledging that
we might sometimes insert a zero XID; and guard VACUUM's
GlobalVisTestIsRemovableXid() call with a test for valid XID, ensuring
that we'll reduce such a REDIRECT the first time VACUUM sees it.
(Versions before v14 use TransactionIdPrecedes here, which won't fail on
zero xid, so they really have no bug at all in non-assert builds.)

Another solution could be to not create REDIRECTs at all during REINDEX
CONCURRENTLY, making the relevant code paths treat that case like index
build (which likewise knows that no concurrent index scans can be
happening).  That would allow restoring the Assert in spgFormDeadTuple,
but we'd still need the VACUUM change because redirection tuples with
zero xid may be out there already.  But there doesn't seem to be a nice
way for spginsert() to tell that it's being called in REINDEX
CONCURRENTLY without some API changes, so we'll leave that as a possible
future improvement.

In HEAD, also rename the SpGistState.myXid field to redirectXid, which
seems less misleading (since it might not in fact be our transaction's
XID) and is certainly less uninformatively generic.

Per bug #18499 from Alexander Lakhin.  Back-patch to all supported
branches.

Discussion: https://postgr.es/m/18499-8a519c280f956480@postgresql.org

* Convert confusing macros in multixact.c to static inline functions (Heikki Linnakangas, 2024-06-16)

The macros were confused about the argument data types.  All the
arguments were called 'xid', and some of the macros included casts to
TransactionId, even though the arguments were actually either
MultiXactIds or MultiXactOffsets.  It compiles to the same thing, because
TransactionId, MultiXactId and MultiXactOffset are all typedefs of
uint32, but it was highly misleading.

Author: Maxim Orlov <orlovmg@gmail.com>
Discussion: https://www.postgresql.org/message-id/CACG%3DezbLUG-OD1osAW3OchOMxZtdxHh2itYR9Zhh-a13wEBEQw%40mail.gmail.com
Discussion: https://www.postgresql.org/message-id/ff143b24-a093-40da-9833-d36b83726bdf%40iki.fi

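A hedged sketch of the style change: a macro whose argument was vaguely named
"xid" and casted to TransactionId becomes a static inline function with a
properly typed MultiXactId parameter.  The per-page constant and names below
are illustrative, not the actual multixact.c helpers:

    #include "postgres.h"

    #define OFFSETS_PER_PAGE  2048          /* hypothetical value */

    /* before (illustrative):
     *   #define MultiXactIdToOffsetPage(xid) \
     *       ((int64) (TransactionId) (xid) / OFFSETS_PER_PAGE)
     */

    static inline int64
    MultiXactIdToOffsetPage(MultiXactId multi)
    {
        return (int64) multi / OFFSETS_PER_PAGE;
    }
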
* Reintroduce dead tuple counter in pg_stat_progress_vacuum. (Masahiko Sawada, 2024-06-14)

Commit 667e65aac3 changed both num_dead_tuples and max_dead_tuples
columns to dead_tuple_bytes and max_dead_tuple_bytes columns,
respectively.  But as per discussion, the number of dead tuples collected
still provides meaningful insights for users.

This commit reintroduces the column for the count of dead tuples, renamed
as num_dead_item_ids.  It avoids confusion with the number of dead tuples
removed by VACUUM, which includes dead heap-only tuples but excludes any
pre-existing LP_DEAD items left behind by opportunistic pruning.

Bump catalog version.

Reviewed-by: Peter Geoghegan, Álvaro Herrera, Andrey Borodin
Discussion: https://postgr.es/m/CAD21AoBL5sJE9TRWPyv%2Bw7k5Ee5QAJqDJEDJBUdAaCzGWAdvZw%40mail.gmail.com

* Clamp result of MultiXactMemberFreezeThreshold (Heikki Linnakangas, 2024-06-13)

The purpose of the function is to reduce the effective
autovacuum_multixact_freeze_max_age if the multixact members SLRU is
approaching wraparound, to make multixid freezing more aggressive.  The
returned value should therefore never be greater than plain
autovacuum_multixact_freeze_max_age.

Reviewed-by: Robert Haas
Discussion: https://www.postgresql.org/message-id/85fb354c-f89f-4d47-b3a2-3cbd461c90a3@iki.fi
Backpatch-through: 12, all supported versions

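A hedged sketch of the clamp being described, simplified rather than the exact
multixact.c arithmetic; the function name carries a "Sketch" suffix to mark it
as illustrative:

    #include "postgres.h"

    extern int autovacuum_multixact_freeze_max_age;    /* the GUC */

    static int
    MultiXactMemberFreezeThresholdSketch(int pressure_adjusted_age)
    {
        /* never exceed the plain GUC value, whatever member-space pressure says */
        return Min(pressure_adjusted_age, autovacuum_multixact_freeze_max_age);
    }
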
* Harmonize function parameter names for Postgres 17. (Peter Geoghegan, 2024-06-12)

Make sure that function declarations use names that exactly match the
corresponding names from function definitions in a few places.  These
inconsistencies were all introduced during Postgres 17 development.

pg_bsd_indent still has a couple of similar inconsistencies, which I
(pgeoghegan) have left untouched for now.

This commit was written with help from clang-tidy, by mechanically
applying the same rules as similar clean-up commits (the earliest such
commit was commit 035ce1fe).

* Make RelationFlushRelation() work without ResourceOwner during abort (Heikki Linnakangas, 2024-06-06)

ReorderBufferImmediateInvalidation() executes invalidation messages in an
aborted transaction.  However, RelationFlushRelation sometimes required a
valid resource owner, to temporarily increment the refcount of the
relcache entry.  Commit b8bff07daa worked around that in the main
subtransaction abort function, AbortSubTransaction(), but missed this
similar case in ReorderBufferImmediateInvalidation().

To fix, introduce a separate function to invalidate a relcache entry.  It
does the same thing as RelationClearRelation(rebuild==true) does when
outside a transaction, but can be called without incrementing the
refcount.

Add regression test.  Before this fix, it failed with:

    ERROR: ResourceOwnerEnlarge called after release started

Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://www.postgresql.org/message-id/e56be7d9-14b1-664d-0bfc-00ce9772721c@gmail.com