path: root/src/backend/access

* Fix race in SSI interaction with gin fast path. (Thomas Munro, 2023-07-04)

The ginfast.c code previously checked for conflicts before locking the relevant buffer, leaving a window where a RW conflict could be missed. Re-order.

There was also a place where buffer ID and block number were confused while trying to predicate-lock a page, noted by visual inspection.

Back-patch to all supported releases. Fixes one more problem discovered with the reproducer from bug #17949, in this case when Dmitry tried other index types.

Reported-by: Artem Anisimov <artem.anisimov.255@gmail.com>
Reported-by: Dmitry Dolgov <9erthalion6@gmail.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/17949-a0f17035294a55e2%40postgresql.org

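The fix is purely an ordering change. As a rough, self-contained sketch of the race class involved (generic pthread code, not the actual ginfast.c logic; all names here are illustrative), inspecting shared state before taking the lock that protects it leaves a window for a concurrent writer:

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t buf_lock = PTHREAD_MUTEX_INITIALIZER;
    static bool conflict_flag = false;      /* stands in for SSI conflict state */

    /* Racy ordering: the flag may change after we peek but before we lock. */
    static bool check_then_lock(void)
    {
        bool saw_conflict = conflict_flag;  /* unsynchronized peek */
        pthread_mutex_lock(&buf_lock);
        /* ... work on the buffer, trusting the possibly stale value ... */
        pthread_mutex_unlock(&buf_lock);
        return saw_conflict;
    }

    /* Fixed ordering: take the lock first, then inspect the protected state. */
    static bool lock_then_check(void)
    {
        pthread_mutex_lock(&buf_lock);
        bool saw_conflict = conflict_flag;  /* stable while the lock is held */
        /* ... work on the buffer ... */
        pthread_mutex_unlock(&buf_lock);
        return saw_conflict;
    }
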
* Fix race in SSI interaction with bitmap heap scan. (Thomas Munro, 2023-07-04)

When performing a bitmap heap scan, we don't want to miss concurrent writes that occurred after we observed the heap's rs_nblocks, but before we took predicate locks on index pages. Therefore, we can't skip fetching any heap tuples that are referenced by the index, because we need to test them all with CheckForSerializableConflictOut(). The old optimization that would ignore any references to blocks >= rs_nblocks gets in the way of that requirement, because it means that concurrent writes in that window are ignored.

Removing that optimization shouldn't affect correctness at any isolation level, because any new tuples shouldn't be visible to an MVCC snapshot. There also shouldn't be any error-causing references to heap blocks past the end, because we should have held at least an AccessShareLock on the table before the index scan. It can't get smaller while our transaction is running.

For now, though, we'll keep the optimization at lower levels to avoid making unnecessary changes in a bug fix.

Back-patch to all supported releases. In release 11, the code is in a different place but not fundamentally different. Fixes one aspect of bug #17949.

Reported-by: Artem Anisimov <artem.anisimov.255@gmail.com>
Reviewed-by: Dmitry Dolgov <9erthalion6@gmail.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/17949-a0f17035294a55e2%40postgresql.org

* Fix race in SSI interaction with empty btrees. (Thomas Munro, 2023-07-04)

When predicate-locking btrees, we have a special case for completely empty btrees, since there is no page to lock. This was racy, because, without buffer lock held, a matching key could be inserted between the _bt_search() and the PredicateLockRelation() calls.

Fix, by rechecking _bt_search() after taking the relation-level SIREAD lock, if using SERIALIZABLE isolation and an empty btree is discovered.

Back-patch to all supported releases. Fixes one aspect of bug #17949.

Reported-by: Artem Anisimov <artem.anisimov.255@gmail.com>
Reviewed-by: Dmitry Dolgov <9erthalion6@gmail.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/17949-a0f17035294a55e2%40postgresql.org

* Silence "missing contrecord" error.Thomas Munro2023-07-03
| | | | | | | | | | | | | | | | | Commit dd38ff28ad added a new error message "missing contrecord" when we fail to reassemble a record. Unfortunately that caused noisy messages to be logged by pg_waldump at end of segment, and by walsender when asked to shut down on a segment boundary. Remove the new error message, so that this condition signals end-of- WAL without a message. It's arguably a reportable condition that should not be silenced while performing crash recovery, but fixing that without introducing noise in the other cases will require more research. Back-patch to 15. Reported-by: Tomas Vondra <tomas.vondra@enterprisedb.com> Discussion: https://postgr.es/m/6a1df56e-4656-b3ce-4b7a-a9cb41df8189%40enterprisedb.com
* nbtree VACUUM: cope with topparent inconsistencies. (Peter Geoghegan, 2023-06-21)

Avoid "right sibling %u of block %u is not next child" errors when vacuuming a corrupt nbtree index. Just LOG the issue and press on. That way VACUUM will have a decent chance of finishing off all required processing for the index (and for the table as a whole).

This is similar to recent work from commit 5abff197, as well as work from commit 5b861baa (later backpatched as commit 43e409ce), which taught nbtree VACUUM to keep going when its "re-find" check fails. The hardening added by this commit takes place directly after the "re-find" check, right before the critical section for the first stage of page deletion.

Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-Wz=dayg0vjs4+er84TS9ami=csdzjpuiCGbEw=idhwqhzQ@mail.gmail.com
Backpatch: 11- (all supported versions).

* Enable archiving in recovery TAP test 009_twophase.pl (Michael Paquier, 2023-06-20)

This is a follow-up of f663b00, which has been committed to v13 and v14, tweaking the TAP test for two-phase transactions so that it provides coverage for the bug that has been fixed. This change is done in its own commit for clarity, as v15 and HEAD did not show the problematic behavior but still lacked coverage for it.

While on it, this adds a comment about the dependency between the last partial segment rename and RecoverPreparedTransactions() at the end of recovery, as that can be easy to miss.

Author: Michael Paquier
Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/743b9b45a2d4013bd90b6a5cba8d6faeb717ee34.camel@cybertec.at
Backpatch-through: 13

* Initialize 'recordXtime' to silence compiler warning. (Heikki Linnakangas, 2023-06-06)

In reality, recordXtime will always be set by the getRecordTimestamp call, but the compiler doesn't necessarily see that.

Back-patch to all supported versions.

Author: Tristan Partin
Discussion: https://www.postgresql.org/message-id/CT5MN8E11U0M.1NYNCHXYUHY41@gonk

* nbtree VACUUM: cope with right sibling link corruption. (Peter Geoghegan, 2023-05-25)

Avoid "right sibling's left-link doesn't match" errors when vacuuming a corrupt nbtree index. Just LOG the issue and press on. That way VACUUM will have a decent chance of finishing off all required processing for the index (and for the table as a whole).

This error was seen in the field from time to time (it's more than a theoretical risk), so giving VACUUM the ability to press on like this has real value. Nothing short of a REINDEX is expected to fix the underlying index corruption, so giving up (by throwing an error) risks making a bad situation far worse. Anything that blocks forward progress by VACUUM like this might go unnoticed for a long time. This could eventually lead to a wraparound/xidStopLimit outage.

Note that _bt_unlink_halfdead_page() has always been able to bail on page deletion when the target page's left sibling page was in an inconsistent state. It now does the same thing (returns false to back out of the second phase of deletion) when it notices sibling link corruption in the target page's right sibling page.

This is similar to the work from commit 5b861baa (later backpatched as commit 43e409ce), which taught nbtree to press on with vacuuming an index when page deletion fails to "re-find" a downlink in the target page's parent page. The "re-find" check seems to make VACUUM bail on page deletion more often in practice, but there is no reason to take any chances here.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/CAH2-Wzko2q2kP1+UvgJyP9g0mF4hopK0NtQZcxwvMv9_ytGhkQ@mail.gmail.com
Backpatch: 11- (all supported versions).

* Fix handling of empty ranges and NULLs in BRIN (Tomas Vondra, 2023-05-19)

BRIN indexes did not properly distinguish between summaries for empty (no rows) and all-NULL ranges, treating them as essentially the same thing. Summaries were initialized with allnulls=true, and opclasses simply reset allnulls to false when processing the first non-NULL value. This however produces incorrect results if the range starts with a NULL value (or a sequence of NULL values), in which case we forget the range contains NULL values when adding the first non-NULL value.

This happens because the allnulls flag is used for two separate purposes - to mark empty ranges (not representing any rows yet) and ranges containing only NULL values. Opclasses don't know which of these cases it is, and so don't know whether to set hasnulls=true. Setting the flag in both cases would make it correct, but it would also make BRIN indexes useless for queries with IS NULL clauses. All ranges start empty (and thus allnulls=true), so all ranges would end up with either allnulls=true or hasnulls=true.

The severity of the issue is somewhat reduced by the fact that it only happens when adding values to an existing summary with allnulls=true. This can happen e.g. for small tables (because a summary for the first range exists for all BRIN indexes), or for tables with large fraction of NULL values in the indexed columns.

Bulk summarization (e.g. during CREATE INDEX or automatic summarization) that processes all values at once is not affected by this issue. In this case the flags were updated in a slightly different way, not forgetting the NULL values.

To identify empty ranges we use a new flag, stored in an unused bit in the BRIN tuple header so the on-disk format remains the same. A matching flag is added to BrinMemTuple, into a 3B gap after bt_placeholder. That means there's no risk of ABI breakage, although we don't actually pass the BrinMemTuple to any public API.

We could also skip storing index tuples for empty summaries, but then we'd have to always process such ranges - even if there are no rows in large parts of the table (e.g. after a bulk DELETE), it would still require reading the pages etc. So we store them, but ignore them when building the bitmap.

Backpatch to 11. The issue exists since BRIN indexes were introduced in 9.5, but older releases are already EOL.

Backpatch-through: 11
Reviewed-by: Justin Pryzby, Matthias van de Meent, Alvaro Herrera
Discussion: https://postgr.es/m/402430e4-7d9d-6cf1-09ef-464d80afff3b@enterprisedb.com

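A simplified, self-contained sketch of the flag logic described above (struct fields and function names are illustrative, not the real BRIN code): once "empty" is tracked separately from "allnulls", adding the first non-NULL value to a range that already saw NULLs keeps hasnulls set instead of losing that information.

    #include <stdbool.h>

    typedef struct Summary
    {
        bool empty;     /* no rows summarized yet */
        bool allnulls;  /* every summarized value was NULL */
        bool hasnulls;  /* at least one summarized value was NULL */
    } Summary;          /* a fresh summary starts as {true, false, false} here */

    static void
    summary_add_value(Summary *s, bool value_is_null)
    {
        if (value_is_null)
        {
            if (s->empty)
            {
                s->allnulls = true;     /* first value seen is NULL */
                s->empty = false;
            }
            else
                s->hasnulls = true;
            return;
        }

        /*
         * First non-NULL value.  If the range already holds NULLs (allnulls set
         * and not empty), remember that via hasnulls rather than forgetting it,
         * which was the bug being fixed here.
         */
        if (s->allnulls && !s->empty)
            s->hasnulls = true;
        s->allnulls = false;
        s->empty = false;
        /* ... fold the value into the opclass state (min/max etc.) ... */
    }
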
* Fix handling of NULLs when merging BRIN summaries (Tomas Vondra, 2023-05-18)

When merging BRIN summaries, union_tuples() did not correctly update the target hasnulls/allnulls flags. When merging all-NULL summary into a summary without any NULL values, the result had both flags set to false (instead of having hasnulls=true).

This happened because the code only considered the hasnulls flags, ignoring the possibility the source summary has allnulls=true. Discovered while investigating issues with handling empty BRIN ranges and handling of NULL values, but it's a separate problem (has nothing to do with empty ranges). Fixed by considering both flags on the source summary, and updating the hasnulls flag on the target summary.

Backpatch to 11. The bug exists since 9.5 (where BRIN indexes were introduced), but those releases are EOL already.

Discussion: https://postgr.es/m/9d993d0d-e431-2196-9ccc-0554d0e60154%40enterprisedb.com

* Prevent underflow in KeepLogSeg(). (Nathan Bossart, 2023-04-27)

The call to XLogGetReplicationSlotMinimumLSN() might return a greater LSN than the one given to the function. Subsequent segment number calculations might then underflow, which could result in unexpected behavior when removing or recycling WAL files. This was introduced with max_slot_wal_keep_size in c655077639. To fix, skip the block of code for replication slots if the LSN is greater.

Reported-by: Xu Xingwang
Author: Kyotaro Horiguchi
Reviewed-by: Junwang Zhao
Discussion: https://postgr.es/m/17903-4288d439dee856c6%40postgresql.org
Backpatch-through: 13

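A standalone illustration of the underflow being guarded against (not the actual KeepLogSeg() code; the variable names are made up): segment numbers are unsigned, so subtracting a "newer" segment number from an "older" one wraps around to a huge value instead of going negative.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t current_seg = 100;   /* segment derived from the given LSN */
        uint64_t slot_seg    = 105;   /* segment derived from the slot minimum LSN */

        /* Unguarded: assumes slot_seg <= current_seg; wraps toward 2^64 otherwise. */
        uint64_t keep = current_seg - slot_seg;
        printf("unguarded: keep %llu segments\n", (unsigned long long) keep);

        /* Guarded: skip the slot-based computation when the slot LSN is ahead. */
        if (slot_seg <= current_seg)
            keep = current_seg - slot_seg;
        else
            keep = 0;
        printf("guarded:   keep %llu segments\n", (unsigned long long) keep);
        return 0;
    }
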
* Re-add tracking of wait event SLRUFlushSync (Michael Paquier, 2023-04-26)

SLRUFlushSync has been accidentally removed in dee663f, which moved the flush of the SLRU files to the checkpointer, so add it back. The issue has been noticed by Thomas when checking for orphaned wait events.

Author: Thomas Munro
Reviewed-by: Bharath Rupireddy
Discussion: https://postgr.es/m/CA+hUKGK6tqm59KuF1z+h5Y8fsWcu5v8+84kduSHwRzwjB2aa_A@mail.gmail.com

* Fix custom validators call in build_local_reloptions() (Alexander Korotkov, 2023-04-23)

We need to call them only when validate == true.

Backpatch to 13, where opclass options were introduced.

Reported-by: Tom Lane
Discussion: https://postgr.es/m/2656633.1681831542%40sss.pgh.pa.us
Reviewed-by: Tom Lane, Pavel Borisov
Backpatch-through: 13

* Avoid trying to write an empty WAL record in log_newpage_range(). (Tom Lane, 2023-04-17)

If the last few pages in the specified range are empty (all zero), then log_newpage_range() could try to emit an empty WAL record containing no FPIs. This at least upsets an Assert in ReserveXLogInsertLocation, and might perhaps have bad real-world consequences in non-assert builds.

This has been broken since log_newpage_range() was introduced, but the case was hard if not impossible to hit before commit 3d6a98457 decided it was okay to leave VM and FSM pages intentionally zero. Nonetheless, it seems prudent to back-patch. log_newpage_range() was added in v12 but later back-patched, so this affects all supported branches.

Matthias van de Meent, per report from Justin Pryzby

Discussion: https://postgr.es/m/ZD1daibg4RF50IOj@telsasoft.com

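A minimal sketch of the shape of the guard (standalone C with assumed helpers page_is_empty() and emit_record(); this is not the actual log_newpage_range() code): when batching pages into WAL records, an all-empty tail of the range must not produce a record containing zero page images.

    #include <stdbool.h>

    #define MAX_PAGES_PER_RECORD 32

    extern bool page_is_empty(int blkno);              /* assumed helper */
    extern void emit_record(const int *pages, int n);  /* assumed helper */

    static void
    log_page_range(int start_blk, int end_blk)
    {
        int batch[MAX_PAGES_PER_RECORD];
        int nbatch = 0;

        for (int blk = start_blk; blk < end_blk; blk++)
        {
            if (page_is_empty(blk))
                continue;               /* all-zero pages contribute no image */
            batch[nbatch++] = blk;
            if (nbatch == MAX_PAGES_PER_RECORD)
            {
                emit_record(batch, nbatch);
                nbatch = 0;
            }
        }

        /* The fix: only emit a trailing record if it would contain something. */
        if (nbatch > 0)
            emit_record(batch, nbatch);
    }
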
* Fix dereference of dangling pointer in GiST index buffering build. (Tom Lane, 2023-03-29)

gistBuildCallback tried to fetch the size of an index tuple that might have already been freed by gistProcessEmptyingQueue. While this seems to usually be harmless in production builds, in principle it could result in a SIGSEGV, or more likely a bogus value for indtuplesSize leading to poor page-split decisions later in the build.

The memory management here is confusing and could stand to be refactored, but for the moment it seems to be enough to fetch the tuple size sooner. AFAICT the indtuples[Size] totals aren't used in between these places; even if they were, the updated values shouldn't be any worse to use. So just move the incrementing of the totals up.

It's not very clear why our valgrind-using buildfarm animals haven't noticed this problem, because the relevant code path does seem to be exercised according to the code coverage report. I think the reason that we didn't fix this bug after the first report is that I'd wanted to try to understand that better. However, now that it's been re-discovered let's just be pragmatic and fix it already.

Original report by Alexander Lakhin (bug #16329), later rediscovered by Egor Chindyaskin (bug #17874). Patch by Alexander Lakhin (commentary by Pavel Borisov and me). Back-patch to all supported branches.

Discussion: https://postgr.es/m/16329-7a6aa9b6fa1118a1@postgresql.org
Discussion: https://postgr.es/m/17874-63ca6c7ce42d2103@postgresql.org

* Fix netmask handling in inet_minmax_multi_ops (Tomas Vondra, 2023-03-20)

When calculating distance in brin_minmax_multi_distance_inet(), the netmask was applied incorrectly. This results in (seemingly) incorrect ordering of values, triggering an assert. For builds without asserts this is mostly harmless - we may merge other ranges, possibly resulting in slightly less efficient index. But it's still correct and the greedy algorithm doesn't guarantee optimality anyway.

Backpatch to 14, where minmax-multi indexes were introduced.

Reported by Dmitry Dolgov, investigation and fix by me.

Reported-by: Dmitry Dolgov
Backpatch-through: 14
Discussion: https://postgr.es/m/17774-c6f3e36dd4471e67@postgresql.org

* Fix handling of multi-column BRIN indexes (Tomas Vondra, 2023-02-19)

When evaluating clauses on multiple scan keys of a multi-column BRIN index, we can stop processing as soon as we find a scan key eliminating the range, and the range should not be added to the bitmap.

That's how it worked before 14, but since a681e3c107a the code treated the range as matching if it matched at least the last scan key.

Backpatch to 14, where this code was introduced.

Backpatch-through: 14
Discussion: https://postgr.es/m/ebc18613-125e-60df-7520-fcbe0f9274fc%40enterprisedb.com

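A self-contained sketch of the two loop shapes described above (illustrative types, not the actual bringetbitmap() code): the pre-14 and fixed behavior requires every scan key to match, while the buggy shape effectively kept only the last key's result.

    #include <stdbool.h>

    typedef bool (*KeyMatchFn)(int key_index);   /* stands in for consistent-fn calls */

    /* Buggy shape: whatever the last key says wins. */
    static bool
    range_matches_buggy(KeyMatchFn key_matches, int nkeys)
    {
        bool matches = true;
        for (int i = 0; i < nkeys; i++)
            matches = key_matches(i);
        return matches;
    }

    /* Fixed shape: any non-matching key eliminates the range immediately. */
    static bool
    range_matches_fixed(KeyMatchFn key_matches, int nkeys)
    {
        for (int i = 0; i < nkeys; i++)
        {
            if (!key_matches(i))
                return false;       /* do not add this range to the bitmap */
        }
        return true;
    }
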
* Un-revert "Disable STARTUP_PROGRESS_TIMEOUT in standby mode."Robert Haas2023-02-10
| | | | | | | | | This reverts commit 1eadfbdd7eb0679ba8d45787aa8b2f06e76de20a and thus reinstates commit 98e7234242a652497c99d4d0d6f2bf9a75d4e921. It's a better time to commit this now that the release is over. Discussion: http://postgr.es/m/3509384.1675878203@sss.pgh.pa.us
* Revert "Disable STARTUP_PROGRESS_TIMEOUT in standby mode."Robert Haas2023-02-06
| | | | | | | | | This reverts commit 98e7234242a652497c99d4d0d6f2bf9a75d4e921. I forgot that we're about to wrap a release, and this fix isn't critical enough to justify committing it right before we wrap a release. Discussion: http://postgr.es/m/2676424.1675700113@sss.pgh.pa.us
* Disable STARTUP_PROGRESS_TIMEOUT in standby mode. (Robert Haas, 2023-02-06)

In standby mode, we don't actually report progress of recovery, but up until now, startup_progress_timeout_handler() nevertheless got called every log_startup_progress_interval seconds. That's an unnecessary expense, so avoid it.

Report by Thomas Munro. Patch by Bharath Rupireddy, reviewed by Simon Riggs, Thomas Munro, and me. Back-patch to v15, where the problem was introduced.

Discussion: https://www.postgresql.org/message-id/CA%2BhUKGKCHSffAj8zZJKJvNX7ygnQFxVD6wm1d-2j3fVw%2BMafPQ%40mail.gmail.com

* Log the correct ending timestamp in recovery_target_xid mode. (Tom Lane, 2023-01-19)

When ending recovery based on recovery_target_xid matching with recovery_target_inclusive = off, we printed an incorrect timestamp (always 2000-01-01) in the "recovery stopping before ... transaction" log message.

This is a consequence of sloppy refactoring in c945af80c: the code to fetch recordXtime out of the commit/abort record used to be executed unconditionally, but it was changed to get called only in the RECOVERY_TARGET_TIME case. We need only flip the order of operations to restore the intended behavior.

Per report from Torsten Förtsch. Back-patch to all supported branches.

Discussion: https://postgr.es/m/CAKkG4_kUevPqbmyOfLajx7opAQk6Cvwkvx0HRcFjSPfRPTXanA@mail.gmail.com

* Fix typos in comments, code and documentation (Michael Paquier, 2023-01-03)

While on it, newlines are removed from the end of two elog() strings. The others are simple grammar mistakes. One comment in pg_upgrade referred incorrectly to sequences since a7e5457.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20221230231257.GI1153@telsasoft.com
Backpatch-through: 11

* Fix assert in BRIN build_distances (Tomas Vondra, 2022-12-30)

When brin_minmax_multi_union merges summaries, we may end up with just a single range after merge_overlapping_ranges. The summaries may contain just one range each, and they may overlap (or be exactly the same).

With a single range there's no distance to calculate, but we happen to call build_distances anyway - which is fine, we don't calculate the distance in this case, except that with asserts this failed due to a check there are at least two ranges.

The assert is unnecessarily strict, so relax it a bit and bail out if there's just a single range. The relaxed assert would be enough, but this way we don't allocate unnecessary memory for distance.

Backpatch to 14, where minmax-multi opclasses were introduced.

Reported-by: Jaime Casanova
Backpatch-through: 14
Discussion: https://postgr.es/m/YzVA55qS0hgz8P3r@ahch-to

* Rethink handling of [Prevent|Is]InTransactionBlock in pipeline mode. (Tom Lane, 2022-12-13)

Commits f92944137 et al. made IsInTransactionBlock() set the XACT_FLAGS_NEEDIMMEDIATECOMMIT flag before returning "false", on the grounds that that kept its API promises equivalent to those of PreventInTransactionBlock(). This turns out to be a bad idea though, because it allows an ANALYZE in a pipelined series of commands to cause an immediate commit, which is unexpected.

Furthermore, if we return "false" then we have another issue, which is that ANALYZE will decide it's allowed to do internal commit-and-start-transaction sequences, thus possibly unexpectedly committing the effects of previous commands in the pipeline.

To fix the latter situation, invent another transaction state flag XACT_FLAGS_PIPELINING, which explicitly records the fact that we have executed some extended-protocol command and not yet seen a commit for it. Then, require that flag to not be set before allowing IsInTransactionBlock() to return "false".

Having done that, we can remove its setting of NEEDIMMEDIATECOMMIT without fear of causing problems. This means that the API guarantees of IsInTransactionBlock now diverge from PreventInTransactionBlock, which is mildly annoying, but it seems OK given the very limited usage of IsInTransactionBlock. (In any case, a caller preferring the old behavior could always set NEEDIMMEDIATECOMMIT for itself.)

For consistency also require XACT_FLAGS_PIPELINING to not be set in PreventInTransactionBlock. This too is meant to prevent commands such as CREATE DATABASE from silently committing previous commands in a pipeline.

Per report from Peter Eisentraut. As before, back-patch to all supported branches (which sadly no longer includes v10).

Discussion: https://postgr.es/m/65a899dd-aebc-f667-1d0a-abb89ff3abf8@enterprisedb.com

* Fix memory leak for hashing with nondeterministic collations. (Jeff Davis, 2022-12-01)

Backpatch through 12, where nondeterministic collations were introduced (5e1963fb76).

Backpatch-through: 12

* Improve heuristics for compressing the KnownAssignedXids array. (Tom Lane, 2022-11-29)

Previously, we'd compress only when the active range of array entries reached Max(4 * PROCARRAY_MAXPROCS, 2 * pArray->numKnownAssignedXids). If max_connections is large, the first term could result in not compressing for a long time, resulting in much wastage of cycles in hot-standby backends scanning the array to take snapshots. Get rid of that term, and just bound it to 2 * pArray->numKnownAssignedXids.

That however creates the opposite risk, that we might spend too much effort compressing. Hence, consider compressing only once every 128 commit records. (This frequency was chosen by benchmarking. While we only tried one benchmark scenario, the results seem stable over a fairly wide range of frequencies.)

Also, force compression when processing RecoveryInfo WAL records (which should be infrequent); the old code could perform compression then, but would do so only after the same array-range check as for the transaction-commit path.

Also, opportunistically run compression if the startup process is about to wait for WAL, though not oftener than once a second. This should prevent cases where we waste lots of time by leaving the array not-compressed for long intervals due to low WAL traffic.

Lastly, add a simple check to keep us from uselessly compressing when the array storage is already compact.

Back-patch, as the performance problem is worse in pre-v14 branches than in HEAD.

Simon Riggs and Michail Nikolaev, with help from Tom Lane and Andres Freund.

Discussion: https://postgr.es/m/CALdSSPgahNUD_=pB_j=1zSnDBaiOtqVfzo8Ejt5J_k7qZiU1Tw@mail.gmail.com

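An illustrative, self-contained sketch of the decision heuristics described above (names, enum values, and constants are placeholders, not the real procarray.c code): compress when the occupied range is well beyond the live entry count, but only every so many commit records, always for the infrequent RecoveryInfo records, and never when the array is already compact.

    #include <stdbool.h>

    typedef enum
    {
        REASON_COMMIT_RECORD,
        REASON_RECOVERY_INFO,
        REASON_ABOUT_TO_WAIT        /* rate-limited to once a second by caller */
    } CompressReason;

    static bool
    should_compress(CompressReason reason, int range_used, int live_xids,
                    int commits_since_compress)
    {
        if (range_used <= live_xids + 16)   /* already compact: nothing to gain */
            return false;

        switch (reason)
        {
            case REASON_RECOVERY_INFO:
                return true;                /* infrequent, so always worth it */
            case REASON_ABOUT_TO_WAIT:
                return true;                /* startup process is idle anyway */
            case REASON_COMMIT_RECORD:
                /* bound wasted space, but do not pay the cost on every commit */
                return range_used >= 2 * live_xids &&
                       commits_since_compress >= 128;
        }
        return false;
    }
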
* Make multixact error message more explicit (Alvaro Herrera, 2022-11-24)

There are recent reports involving a very old error message that we have no history of hitting -- perhaps a recently introduced bug. Improve the error message in an attempt to improve our chances of investigating the bug.

Per reports from Dimos Stamatakis and Bob Krier. Backpatch to 11.

Discussion: https://postgr.es/m/CO2PR0801MB2310579F65529380A4E5EDC0E20A9@CO2PR0801MB2310.namprd08.prod.outlook.com
Discussion: https://postgr.es/m/17518-04e368df5ad7f2ee@postgresql.org

* Fix cleanup lock acquisition in SPLIT_ALLOCATE_PAGE replay. (Amit Kapila, 2022-11-14)

During XLOG_HASH_SPLIT_ALLOCATE_PAGE replay, we were checking for a cleanup lock on the new bucket page after acquiring an exclusive lock on it and raising a PANIC error on failure. However, it is quite possible that checkpointer can acquire the pin on the same page before acquiring a lock on it, and then the replay will lead to an error. So instead, directly acquire the cleanup lock on the new bucket page during XLOG_HASH_SPLIT_ALLOCATE_PAGE replay operation.

Reported-by: Andres Freund
Author: Robert Haas
Reviewed-By: Amit Kapila, Andres Freund, Vignesh C
Backpatch-through: 11
Discussion: https://postgr.es/m/20220810022617.fvjkjiauaykwrbse@awork3.anarazel.de

* Fix theoretical torn page hazard. (Jeff Davis, 2022-11-11)

The original report was concerned with a possible inconsistency between the heap and the visibility map, which I was unable to confirm. The concern has been retracted.

However, there did seem to be a torn page hazard when using checksums. By not setting the heap page LSN during redo, the protections of minRecoveryPoint were bypassed. Fixed, along with a misleading comment.

It may have been impossible to hit this problem in practice, because it would require a page tear between the checksum and the flags, so I am marking this as a theoretical risk. But, as discussed, it did violate expectations about the page LSN, so it may have other consequences.

Backpatch to all supported versions.

Reported-by: Konstantin Knizhnik
Reviewed-by: Konstantin Knizhnik
Discussion: https://postgr.es/m/fed17dac-8cb8-4f5b-d462-1bb4908c029e@garret.ru
Backpatch-through: 11

* Fix comment of SimpleLruInit() in slru.c (Michael Paquier, 2022-11-10)

sync_handler was not mentioned in the comment block of the function. Oversight in dee663f.

Author: Aleksander Alekseev
Discussion: https://postgr.es/m/CAJ7c6TPUd9BwNY47TtMxaijLHSbyHNdhu=kvbGnvO_bi+oC6_Q@mail.gmail.com
Backpatch-through: 14

* Doc: add comments about PreventInTransactionBlock/IsInTransactionBlock. (Tom Lane, 2022-11-09)

Add a little to the header comments for these functions to make it clearer what guarantees about commit behavior are provided to callers. (See commit f92944137 for context.)

Although this is only a comment change, it's really documentation aimed at authors of extensions, so it seems appropriate to back-patch.

Yugo Nagata and Tom Lane, per further discussion of bug #17434.

Discussion: https://postgr.es/m/17434-d9f7a064ce2a88a3@postgresql.org

* Fix ordering issue with WAL operations in GIN fast insert path (Michael Paquier, 2022-10-26)

Contrary to what is documented in src/backend/access/transam/README, ginHeapTupleFastInsert() had a few ordering issues with the way it does its WAL operations when inserting items in its fast path.

First, when using a separate list, XLogBeginInsert() was being always called before START_CRIT_SECTION(), and in this case a second thing was wrong when merging lists, as an exclusive lock was taken on the tail page *before* calling XLogBeginInsert(). Finally, when inserting items into a tail page, the order of XLogBeginInsert() and START_CRIT_SECTION() was reversed. This commit addresses all these issues by moving the calls of XLogBeginInsert() after all the pages logged are locked and pinned, within a critical section.

This has been applied first only on HEAD as of 56b6625, but as per discussion with Tom Lane and Álvaro Herrera, a backpatch is preferred to keep all the branches consistent and to respect the transam's README where we can.

Author: Matthias van de Meent, Zhang Mingli
Discussion: https://postgr.es/m/CAEze2WhL8uLMqynnnCu1LAPwxD5RKEo0nHV+eXGg_N6ELU88HQ@mail.gmail.com
Backpatch-through: 10

* Rename SetSingleFuncCall() to InitMaterializedSRF() (Michael Paquier, 2022-10-18)

Per discussion, the existing routine name able to initialize a SRF function with materialize mode is unpopular, so rename it. Equally, the flags of this function are renamed, as of:
- SRF_SINGLE_USE_EXPECTED -> MAT_SRF_USE_EXPECTED_DESC
- SRF_SINGLE_BLESS -> MAT_SRF_BLESS

The previous function and flags introduced in 9e98583 are kept around for compatibility purposes, so that any extension code already compiled with v15 continues to work as-is. The declarations introduced here for compatibility will be removed from HEAD in a follow-up commit.

The new names have been suggested by Andres Freund and Melanie Plageman.

Discussion: https://postgr.es/m/20221013194820.ciktb2sbbpw7cljm@awork3.anarazel.de
Backpatch-through: 15

* doc: Fix some grammar and typos (Michael Paquier, 2022-10-01)

This fixes some areas related to logical replication and custom RMGRs.

Author: Ekaterina Kiryanova
Discussion: https://postgr.es/m/fa4773f1-1396-384a-bcd7-85b5e013f399@postgrespro.ru
Backpatch-through: 15

* Avoid improbable PANIC during heap_update, redux. (Tom Lane, 2022-09-30)

Commit 34f581c39 intended to ensure that RelationGetBufferForTuple would acquire a visibility-map page pin in case the otherBuffer's all-visible bit had become set since we last had lock on that page. But I missed a case: when we're extending the relation, VM concerns were dealt with only in the relatively-less-likely case that we fail to conditionally lock the otherBuffer. I think I'd believed that we couldn't need to worry about it if the conditional lock succeeds, which is true for the target buffer; but the otherBuffer was unlocked for awhile so its bit might be set anyway. So we need to do the GetVisibilityMapPins dance, and then also recheck the page's free space, in both cases.

Per report from Jaime Casanova. Back-patch to v12 as the previous patch was (although there's still no evidence that the bug is reachable pre-v14).

Discussion: https://postgr.es/m/E1lWLjP-00006Y-Ml@gemulon.postgresql.org

* Message style improvements (Peter Eisentraut, 2022-09-24)

* Fix race condition where heap_delete() fails to pin VM page. (Jeff Davis, 2022-09-22)

Similar to 5f12bc94dc, the code must re-check PageIsAllVisible() after buffer lock is re-acquired. Backpatching to the same version, 12.

Discussion: https://postgr.es/m/CAEP4nAw9jYQDKd_5Y+-s2E4YiUJq1vqiikFjYGpLShtp-K3gag@mail.gmail.com
Reported-by: Robins Tharakan
Reviewed-by: Robins Tharakan
Backpatch-through: 12

* Clear ps display of startup process at the end of recovery (Michael Paquier, 2022-09-22)

If the ps display is not cleared at this point, the process could continue displaying "recovering NNN" even if handling end-of-recovery steps. df9274a has tackled that by providing some information with the end-of-recovery checkpoint but 7ff23c6 has nullified the effect of the first commit.

Per a suggestion from Justin, just clear the ps display when we are done with recovery, so that no incorrect information is displayed. This may get extended in the future, but for now restore the pre-7ff23c6 behavior.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20220913223954.GU31833@telsasoft.com
Backpatch-through: 15

* Suppress variable-set-but-not-used warnings from clang 15. (Tom Lane, 2022-09-20)

clang 15+ will issue a set-but-not-used warning when the only use of a variable is in autoincrements (e.g., "foo++;"). That's perfectly sensible, but it detects a few more cases that we'd not noticed before. Silence the warnings with our usual methods, such as PG_USED_FOR_ASSERTS_ONLY, or in one case by actually removing a useless variable.

One thing that we can't nicely get rid of is that with %pure-parser, Bison emits "yynerrs" as a local variable that falls foul of this warning. To silence those, I inserted "(void) yynerrs;" in the top-level productions of affected grammars.

Per recently-established project policy, this is a candidate for back-patching into out-of-support branches: it suppresses annoying compiler warnings but changes no behavior. Hence, back-patch to 9.5, which is as far as these patches go without issues. (A preliminary check shows that the prior branches need some other set-but-not-used cleanups too, so I'll leave them for another day.)

Discussion: https://postgr.es/m/514615.1663615243@sss.pgh.pa.us

* Reset InstallXLogFileSegmentActive after walreceiver self-initiated exit. (Noah Misch, 2022-09-15)

After commit cc2c7d65fc27e877c9f407587b0b92d46cd6dd16 added this flag, failure to reset it caused assertion failures. In non-assert builds, it made the system fail to achieve the objectives listed in that commit; chiefly, we might emit a spurious log message. Back-patch to v15, where that commit first appeared.

Bharath Rupireddy and Kyotaro Horiguchi. Reviewed by Dilip Kumar, Nathan Bossart and Michael Paquier. Reported by Dilip Kumar.

Discussion: https://postgr.es/m/CAFiTN-sE3ry=ycMPVtC+Djw4Fd7gbUGVv_qqw6qfzp=JLvqT3g@mail.gmail.com

* Don't reference out-of-bounds array elements in brin_minmax_multi.c (David Rowley, 2022-09-13)

The primary fix here is to fix has_matching_range() so it does not reference ranges->values[-1] when nranges == 0. Similar problems existed in AssertCheckRanges() too. It does not look like any of these problems could lead to a crash as the array in question is at the end of the Ranges struct, and values[-1] is memory that belongs to other fields in the struct. However, let's get rid of these rather unsafe coding practices.

In passing, I (David) adjusted some comments to try to make it more clear what some of the fields are for in the Ranges struct. I had to study the code to find out what nsorted was for as I couldn't tell from the comments.

Author: Ranier Vilela
Discussion: https://postgr.es/m/CAEudQAqJQzPitufX-jR=YUbJafpCDAKUnwgdbX_MzSc93wuvdw@mail.gmail.com
Backpatch-through: 14, where multi-range brin was added.

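A minimal sketch of the out-of-bounds access being removed (illustrative struct and a simplified boundary check, not the real Ranges code): with nranges == 0 the old logic effectively peeked at values[2 * nranges - 1], i.e. values[-1].

    #include <stdbool.h>

    typedef struct Ranges
    {
        int     nranges;    /* number of [min,max] ranges at the start of values */
        int     nvalues;    /* number of single points stored after the ranges */
        double  values[];   /* flexible storage: range bounds, then points */
    } Ranges;

    static bool
    has_matching_range(const Ranges *r, double key)
    {
        if (r->nranges == 0)
            return false;               /* the added guard: nothing to look at */

        double minval = r->values[0];
        double maxval = r->values[2 * r->nranges - 1];  /* safe only when nranges > 0 */

        return key >= minval && key <= maxval;
    }
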
* Add more error context to RestoreBlockImage() and consume it (Michael Paquier, 2022-09-09)

On failure in restoring a block image, no details were provided, while it is possible to see failure with an inconsistent record state, a failure in processing decompression or a failure in decompression because a build does not support this option.

RestoreBlockImage() is used in two code paths in the backend code, during recovery and when checking a page consistency after applying masking, and both places are changed to consume the error message produced by the internal routine when it returns a false status. All the error messages are reported under ERRCODE_INTERNAL_ERROR, that gets used also when attempting to access a page compressed by a method not supported by the build attempting the decompression. This is something that can happen in core when doing physical replication with primary and standby using inconsistent build options, for example.

This routine is available since 2c03216d and it has never provided any context about the error happening when it failed. This change is justified even more after 57aa5b2, that introduced compression of FPWs in WAL.

Reported-by: Justin Pryzby
Author: Michael Paquier
Discussion: https://postgr.es/m/20220905002320.GD31833@telsasoft.com
Backpatch-through: 15

* Fix recovery_prefetch with low maintenance_io_concurrency. (Thomas Munro, 2022-09-08)

We should process completed IOs *before* trying to start more, so that it is always possible to decode one more record when the decoded record queue is empty, even if maintenance_io_concurrency is set so low that a single earlier WAL record might have saturated the IO queue.

That bug was hidden because the effect of maintenance_io_concurrency was arbitrarily clamped to be at least 2. Fix the ordering, and also remove that clamp. We need a special case for 0, which is now treated the same as recovery_prefetch=off, but otherwise the number is used directly. This allows for testing with 1, which would have made the problem obvious in simple test scenarios.

Also add an explicit error message for missing contrecords. It was a bit strange that we didn't report an error already, and became a latent bug with prefetching, since the internal state that tracks aborted contrecords would not survive retrying, as revealed by 026_overwrite_contrecord.pl with this adjustment. Reporting an error prevents that.

Back-patch to 15.

Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20220831140128.GS31833%40telsasoft.com

* Fix cache invalidation bug in recovery_prefetch. (Thomas Munro, 2022-09-03)

XLogPageRead() can retry internally after a pread() system call has succeeded, in the case of short reads, and page validation failures while in standby mode (see commit 0668719801). Due to an oversight in commit 3f1ce973, these cases could leave stale data in the internal cache of xlogreader.c without marking it invalid. The main defense against stale cached data on failure to read a page was in the error handling path of the calling function ReadPageInternal(), but that wasn't quite enough for errors handled internally by XLogPageRead()'s retry loop if we then exited with XLREAD_WOULDBLOCK.

1. ReadPageInternal() now marks the cache invalid before calling the page_read callback, by setting state->readLen to 0. It'll be set to a non-zero value only after a successful read. It'll stay valid as long as the caller requests data in the cached range.

2. XLogPageRead() no longer performs internal retries while reading ahead. While such retries should work, the general philosophy is that we should give up prefetching if anything unusual happens so we can handle it when recovery catches up, to reduce the complexity of the system. Let's do that here too.

3. While here, a new function XLogReaderResetError() improves the separation between xlogrecovery.c and xlogreader.c, where the former previously clobbered the latter's internal error buffer directly. The new function makes this more explicit, and also clears a related flag, without which a standby would needlessly retry in the outer function.

Thanks to Noah Misch for tracking down the conditions required for a rare build farm failure in src/bin/pg_ctl/t/003_promote.pl, and providing a reproducer.

Back-patch to 15.

Reported-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/20220807003627.GA4168930%40rfd.leadboat.com

* Prevent WAL corruption after a standby promotion. (Robert Haas, 2022-08-29)

When a PostgreSQL instance performing archive recovery but not using standby mode is promoted, and the last WAL segment that it attempted to read ended in a partial record, the previous code would create invalid WAL on the new timeline. The WAL from the previous timeline would be copied to the new timeline up until the end of the last valid record, but instead of beginning to write WAL immediately afterwards, the promoted server would write an overwrite contrecord at the beginning of the next segment. The end of the previous segment would be left as all-zeroes, resulting in failures if anything tried to read WAL from that file.

The root of the issue is that ReadRecord() decides whether to set abortedRecPtr and missingContrecPtr based on the value of StandbyMode, but ReadRecord() switches to a new timeline based on the value of ArchiveRecoveryRequested. We shouldn't try to write an overwrite contrecord if we're switching to a new timeline, so change the test in ReadRecord() to check ArchiveRecoveryRequested instead.

Code fix by Dilip Kumar. Comments by me incorporating suggested language from Álvaro Herrera. Further review from Kyotaro Horiguchi and Sami Imseih.

Discussion: http://postgr.es/m/CAFiTN-t7umki=PK8dT1tcPV=mOUe2vNhHML6b3T7W7qqvvajjg@mail.gmail.com
Discussion: http://postgr.es/m/FB0DEA0B-E14E-43A0-811F-C1AE93D00FF3%40amazon.com

* Fix code comments still referring to pg_start/stop_backup() (Michael Paquier, 2022-08-24)

pg_start_backup() and pg_stop_backup() have been respectively renamed to pg_backup_start() and pg_backup_stop() as of 39969e2, but a few comments did not get the call.

Reviewed-by: Kyotaro Horiguchi, David Steele
Discussion: https://postgr.es/m/YrqGlj1+4DF3dbZ/@paquier.xyz

* Fix assert in logicalmsg_desc (Tomas Vondra, 2022-08-17)

The assert, introduced by 9f1cf97bb5, is intended to check if the prefix is terminated by a \0 byte, but it has two flaws. Firstly, prefix_size includes the \0 byte, so prefix[prefix_size] points to the byte after the null byte. Secondly, the check ensures the byte is not equal to \0, while it should be checking the opposite.

Backpatch-through: 14
Discussion: https://postgr.es/m/b99b6101-2f14-3796-3dfa-4a6cd7d4326d@enterprisedb.com

* Move basebackup code to new directory src/backend/backup (Robert Haas, 2022-08-10)

Reviewed by David Steele and Justin Pryzby

Discussion: http://postgr.es/m/CA+TgmoafqboATDSoXHz8VLrSwK_MDhjthK4hEpYjqf9_1Fmczw%40mail.gmail.com

* BRIN: mask BRIN_EVACUATE_PAGE for WAL consistency checking (Alvaro Herrera, 2022-08-05)

That bit is unlogged and therefore it's wrong to consider it in WAL page comparison. Add a test that tickles the case, as branch testing technology allows.

This has been a problem ever since wal consistency checking was introduced (commit a507b86900f6 for pg10), so backpatch to all supported branches.

Author: 王海洋 (Haiyang Wang) <wanghaiyang.001@bytedance.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/CACciXAD2UvLMOhc4jX9VvOKt7DtYLr3OYRBhvOZ-jRxtzc_7Jg@mail.gmail.com
Discussion: https://postgr.es/m/CACciXADOfErX9Bx0nzE_SkdfXr6Bbpo5R=v_B6MUTEYW4ya+cg@mail.gmail.com

* Fix replay of create database records on standby (Alvaro Herrera, 2022-07-28)

Crash recovery on standby may encounter missing directories when replaying database-creation WAL records. Prior to this patch, the standby would fail to recover in such a case; however, the directories could be legitimately missing. Consider the following sequence of commands:

    CREATE DATABASE
    DROP DATABASE
    DROP TABLESPACE

If, after replaying the last WAL record and removing the tablespace directory, the standby crashes and has to replay the create database record again, crash recovery must be able to continue.

A fix for this problem was already attempted in 49d9cfc68bf4, but it was reverted because of design issues. This new version is based on Robert Haas' proposal: any missing tablespaces are created during recovery before reaching consistency. Tablespaces are created as real directories, and should be deleted by later replay. CheckRecoveryConsistency ensures they have disappeared.

The problems detected by this new code are reported as PANIC, except when allow_in_place_tablespaces is set to ON, in which case they are WARNING. Apart from making tests possible, this gives users an escape hatch in case things don't go as planned.

Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Author: Asim R Praveen <apraveen@pivotal.io>
Author: Paul Guo <paulguo@gmail.com>
Reviewed-by: Anastasia Lubennikova <lubennikovaav@gmail.com> (older versions)
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com> (older versions)
Reviewed-by: Michaël Paquier <michael@paquier.xyz>
Diagnosed-by: Paul Guo <paulguo@gmail.com>
Discussion: https://postgr.es/m/CAEET0ZGx9AvioViLf7nbR_8tH9-=27DN5xWJ2P9-ROH16e4JUA@mail.gmail.com
