path: root/src/backend
...
* Fix multi-table VACUUM VERBOSE accounting. (Peter Geoghegan, 2022-04-15)
  Per-backend global variables like VacuumPageHit are initialized once per VACUUM command. This was missed by commit 49c9d9fc, which unified VACUUM VERBOSE and autovacuum logging. As a result of that oversight, incorrect values were shown when multiple relations were processed by a single VACUUM VERBOSE command. Relations that happened to be processed later on would show "buffer usage:" values that incorrectly included buffer accesses made while processing earlier unrelated relations. The same accesses were counted multiple times.
  To fix, take initial values for the tracker variables at the start of heap_vacuum_rel(), and report delta values later on.
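  A minimal SQL illustration of the user-visible effect; the table names t1 and t2 are hypothetical:
      VACUUM (VERBOSE) t1, t2;
      -- the "buffer usage:" figures reported for t2 now reflect only the work
      -- done while vacuuming t2, rather than also accumulating the buffer
      -- accesses made earlier while vacuuming t1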
* Tighten ComputeXidHorizons' handling of walsenders. (Tom Lane, 2022-04-15)
  ComputeXidHorizons (nee GetOldestXmin) thought that it could identify walsenders by checking for proc->databaseId == 0. Perhaps that was safe when the code was written, but it's been wrong at least since autovacuum was invented. Background processes that aren't connected to any particular database, such as the autovacuum launcher and logical replication launcher, look like that too.
  This imprecision is harmful because when such a process advertises an xmin, the result is to hold back dead-tuple cleanup in all databases, though it'd be sufficient to hold it back in shared catalogs (which are the only relations such a process can access). Aside from being generally inefficient, this has recently been seen to cause regression test failures in the buildfarm, as a consequence of the logical replication launcher's startup transaction preventing VACUUM from marking pages of a user table as all-visible.
  We only want that global hold-back effect for the case where a walsender is advertising a hot standby feedback xmin. Therefore, invent a new PGPROC flag that says that a process' xmin should be considered globally, and check that instead of using the incorrect databaseId == 0 test. Currently only a walsender sets that flag, and only if it is not connected to any particular database. (This is for bug-compatibility with the undocumented behavior of the existing code, namely that feedback sent by a client who has connected to a particular database would not be applied globally. I'm not sure this is a great definition; however, such a client is capable of issuing plain SQL commands, and I don't think we want xmins advertised for such commands to be applied globally. Perhaps this could do with refinement later.)
  While at it, I rewrote the comment in ComputeXidHorizons, and re-ordered the commented-upon if-tests, to make them match up for intelligibility's sake.
  This is arguably a back-patchable bug fix, but given the lack of complaints I think it prudent to let it age awhile in HEAD first.
  Discussion: https://postgr.es/m/1346227.1649887693@sss.pgh.pa.us
* VACUUM VERBOSE: Show dead items for an empty table. (Peter Geoghegan, 2022-04-15)
  Be consistent about the lines that VACUUM VERBOSE outputs by including an "index scan not needed: " line for completely empty tables. This makes the output more readable, especially with multiple distinct VACUUM operations processed by the same VACUUM command. It's also more consistent; even empty tables can use the failsafe, which wasn't reported in the standard way until now.
  Follow-up to commit 6e20f460, which taught VACUUM VERBOSE to be more consistent about reporting on scanned pages with empty tables.
* Adjust VACUUM's removable cutoff log message. (Peter Geoghegan, 2022-04-15)
  The age of OldestXmin (a.k.a. "removable cutoff") when VACUUM ends often indicates the approximate number of XIDs consumed while VACUUM ran. However, there is at least one important exception: the cutoff could be held back by a snapshot that was acquired before our VACUUM even began. Successive VACUUM operations may even use exactly the same old cutoff in extreme cases involving long held snapshots.
  The log messages that described how removable cutoff aged (which were added by commit 872770fd) created the impression that we were reporting on how VACUUM's usable cutoff advanced while VACUUM ran, which was misleading in these extreme cases. Fix by using a more general wording.
  Per gripe from Tom Lane.
  In passing, relocate related instrumentation code for clarity.
  Author: Peter Geoghegan <pg@bowt.ie>
  Discussion: https://postgr.es/m/1643035.1650035653@sss.pgh.pa.us
* Small cleanups in SQL/JSON code (Andrew Dunstan, 2022-04-15)
  These are to keep Coverity happy. In one case remove a redundant NULL check, and in another explicitly ignore a function result that is already known.
* pgstat: set timestamps of fixed-numbered stats after a crash. (Andres Freund, 2022-04-14)
  When not loading stats at startup (i.e. pgstat_discard_stats() getting called), reset timestamps of fixed-numbered stats would be left at 0. Oversight in 5891c7a8ed8.
  Instead use pgstat_reset_after_failure() and add tests verifying that fixed-numbered reset timestamps are set appropriately.
  Reported-By: "David G. Johnston" <david.g.johnston@gmail.com>
  Discussion: https://postgr.es/m/CAKFQuwamFuaQHKdhcMt4Gbw5+Hca2UE741B8gOOXoA=TtAd2Yw@mail.gmail.com
* Have CLUSTER ignore partitions not owned by caller (Alvaro Herrera, 2022-04-14)
  If a partitioned table has partitions owned by roles other than the owner of the partitioned table, don't include them in the to-be-clustered list. This is similar to what VACUUM FULL does (except we do it sooner, because there is no reason to postpone it). Add a simple test to verify that only owned partitions are clustered.
  While at it, change memory context switch-and-back to occur once per partition instead of outside of the loop.
  Author: Justin Pryzby <pryzby@telsasoft.com>
  Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
  Reviewed-by: Michael Paquier <michael@paquier.xyz>
  Discussion: https://postgr.es/m/20220411140609.GF26620@telsasoft.com
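  A sketch of the behavior described above, assuming a partitioned table where one partition is owned by a different role; all object and role names here are hypothetical:
      CREATE TABLE measurements (id int, val int) PARTITION BY RANGE (id);
      CREATE TABLE meas_p1 PARTITION OF measurements FOR VALUES FROM (1) TO (1000);
      CREATE TABLE meas_p2 PARTITION OF measurements FOR VALUES FROM (1000) TO (2000);
      CREATE INDEX ON measurements (id);
      ALTER TABLE meas_p2 OWNER TO alice;
      -- run as the owner of "measurements": meas_p2 is quietly skipped rather
      -- than raising an error, mirroring what VACUUM FULL does
      CLUSTER measurements USING measurements_id_idx;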
* Improve a couple of sql/json error messages (Andrew Dunstan, 2022-04-14)
  Fix the grammar in two, and add a hint to one.
* Fix transformJsonBehavior (Andrew Dunstan, 2022-04-14)
  Commit 1a36bc9dba8 contained some logic that was a little opaque and could have involved a NULL dereference, as complained about by Coverity. Make the logic more transparent and in doing so avoid the NULL dereference.
* Add missing spaces after single-line comments (David Rowley, 2022-04-14)
  Only 1 of 3 of these changes appear to be handled by pgindent. That change is new to v15. The remaining two appear to be left alone by pgindent. The exact reason for that is not 100% clear to me. It seems related to the fact that it's a line that contains *only* a single line comment and no actual code. It does not seem worth investigating this in too much detail. In any case, these do not conform to our usual practices, so fix them.
  Author: Justin Pryzby
  Discussion: https://postgr.es/m/20220411020336.GB26620@telsasoft.com
* Prevent access to no-longer-pinned buffer in heapam_tuple_lock(). (Tom Lane, 2022-04-13)
  heap_fetch() used to have a "keep_buf" parameter that told it to return ownership of the buffer pin to the caller after finding that the requested tuple TID exists but is invisible to the specified snapshot. This was thoughtlessly removed in commit 5db6df0c0, which broke heapam_tuple_lock() (formerly EvalPlanQualFetch) because that function needs to do more accesses to the tuple even if it's invisible. The net effect is that we would continue to touch the page for a microsecond or two after releasing pin on the buffer. Usually no harm would result; but if a different session decided to defragment the page concurrently, we could see garbage data and mistakenly conclude that there's no newer tuple version to chain up to. (It's hard to say whether this has happened in the field. The bug was actually found thanks to a later change that allowed valgrind to detect accesses to non-pinned buffers.)
  The most reasonable way to fix this is to reintroduce keep_buf, although I made it behave slightly differently: buffer ownership is passed back only if there is a valid tuple at the requested TID. In HEAD, we can just add the parameter back to heap_fetch(). To avoid an API break in the back branches, introduce an additional function heap_fetch_extended() in those branches.
  In HEAD there is an additional, less obvious API change: tuple->t_data will be set to NULL in all cases where buffer ownership is not returned, in particular when the tuple exists but fails the time qual (and !keep_buf). This is to defend against any other callers attempting to access non-pinned buffers. We concluded that making that change in back branches would be more likely to introduce problems than cure any.
  In passing, remove a comment about heap_fetch that was obsoleted by 9a8ee1dc6.
  Per bug #17462 from Daniil Anisimov. Back-patch to v12 where the bug was introduced.
  Discussion: https://postgr.es/m/17462-9c98a0f00df9bd36@postgresql.org
* Remove extraneous blank lines before block-closing braces (Alvaro Herrera, 2022-04-13)
  These are useless and distracting. We wouldn't have written the code with them to begin with, so there's no reason to keep them.
  Author: Justin Pryzby <pryzby@telsasoft.com>
  Discussion: https://postgr.es/m/20220411020336.GB26620@telsasoft.com
  Discussion: https://postgr.es/m/attachment/133167/0016-Extraneous-blank-lines.patch
* Release cache tuple when no longer needed (Alvaro Herrera, 2022-04-13)
  There was a small buglet in commit 52e4f0cd472d whereby a tuple acquired from cache was not released, giving rise to WARNING messages; fix that. While at it, restructure the code a bit on stylistic grounds.
  Author: Hou zj <houzj.fnst@fujitsu.com>
  Reported-by: Peter Smith <smithpb2250@gmail.com>
  Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
  Discussion: https://postgr.es/m/CAHut+PvKTyhTBtYCQsP6Ph7=o-oWRSX+v+PXXLXp81-o2bazig@mail.gmail.com
* Fix finalization for json_objectagg and friends (Andrew Dunstan, 2022-04-13)
  Commit f4fb45d15c misguidedly tried to free some state during aggregate finalization for json_objectagg. This resulted in attempts to access freed memory, especially when the function is used as a window function. Commit 4eb9798879 attempted to ameliorate that, but in fact it should just be ripped out, which is done here. Also add some regression tests for json_objectagg in various flavors as a window function.
  Original report from Jaime Casanova, diagnosis by Andres Freund.
  Discussion: https://postgr.es/m/YkfeMNYRCGhySKyg@ahch-to
* Fix incorrect format placeholders (Peter Eisentraut, 2022-04-13)
* Remove "recheck" argument from check_index_is_clusterable() (Michael Paquier, 2022-04-13)
  The last usage of this argument in this routine can be tracked down to 7e2f9062, aka 11 years ago. Getting rid of this argument can also be an advantage for extensions calling check_index_is_clusterable(), as it removes any need to worry about the meaning of what a recheck would be at this level.
  Author: Justin Pryzby
  Discussion: https://postgr.es/m/20220411140609.GF26620@telsasoft.com
* Revert the addition of GetMaxBackends() and related stuff. (Robert Haas, 2022-04-12)
  This reverts commits 0147fc7, 4567596, aa64f23, and 5ecd018. There is no longer agreement that introducing this function was the right way to address the problem. The consensus now seems to favor trying to make a correct value for MaxBackends available to modules executing their _PG_init() functions.
  Nathan Bossart
  Discussion: http://postgr.es/m/20220323045229.i23skfscdbvrsuxa@jrouhaud
* Use WRITE_ENUM_FIELD for enum field (Peter Eisentraut, 2022-04-12)
* Make node output prefix match node structure name (Peter Eisentraut, 2022-04-12)
  as done in e58136069687b9cf29c27281e227ac397d72141d
* adjust_partition_colnos mustn't be called if not needed (Alvaro Herrera, 2022-04-12)
  Add an assert to make this very explicit, as well as a code comment. The former should silence Coverity complaining about this.
  Introduced by 7103ebb7aae8.
  Reported-by: Ranier Vilela
  Discussion: https://postgr.es/m/CAEudQAqTTAOzXiYybab+1DQOb3ZUuK99=p_KD+yrRFhcDbd0jg@mail.gmail.com
* Change mechanism to set up source targetlist in MERGE (Alvaro Herrera, 2022-04-12)
  We were setting MERGE source subplan's targetlist by expanding the individual attributes of the source relation completely, early in the parse analysis phase. This failed to work when the condition of an action included a whole-row reference, causing setrefs.c to error out with
      ERROR:  variable not found in subplan target lists
  because at that point there is nothing to resolve the whole-row reference with.
  We can fix this by having preprocess_targetlist expand the source targetlist for Vars required from the source rel by all actions. Moreover, by using this expansion mechanism we can do away with the targetlist expansion in transformMergeStmt, which is good because then we no longer pull in columns that aren't needed for anything.
  Add a test case for the problem.
  While at it, remove some redundant code in preprocess_targetlist(): MERGE was doing separately what is already being done for UPDATE/DELETE, so we can just rely on the latter and remove the former. (The handling of inherited rels was different for MERGE, but that was a no-longer-necessary hack.)
  Fix outdated, related comments for fix_join_expr also.
  Author: Richard Guo <guofenglinux@gmail.com>
  Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
  Reported-by: Joe Wildish <joe@lateraljoin.com>
  Discussion: https://postgr.es/m/fab3b90a-914d-46a9-beb0-df011ee39ee5@www.fastmail.com
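  A sketch of the kind of statement that previously failed, with a whole-row reference to the source relation in an action's condition; table and column names are hypothetical:
      MERGE INTO target t
      USING source s ON t.id = s.id
      WHEN MATCHED AND s IS NOT NULL THEN
          UPDATE SET val = s.val
      WHEN NOT MATCHED THEN
          INSERT (id, val) VALUES (s.id, s.val);
      -- "s IS NOT NULL" is a whole-row reference to the source rel; with this
      -- change it is resolved via the expanded source targetlist instead of
      -- failing in setrefs.c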
* Rename backup_compression.{c,h} to compression.{c,h} (Michael Paquier, 2022-04-12)
  Compression option handling (level, algorithm or even workers) can be used across several parts of the system and not only base backups. Structures, objects and routines are renamed in consequence, to remove the concept of base backups from this part of the code, making this change straight-forward.
  pg_receivewal, which has gained support for LZ4 since babbbb5, will make use of this infrastructure for its set of compression options, bringing more consistency with pg_basebackup. This cleanup needs to be done before releasing a beta of 15. pg_dump is a potential future target, as well, and adding more compression options to it may happen in 16~.
  Author: Michael Paquier
  Reviewed-by: Robert Haas, Georgios Kokolatos
  Discussion: https://postgr.es/m/YlPQGNAAa04raObK@paquier.xyz
* Make XLogRecGetBlockTag() throw error if there's no such block. (Tom Lane, 2022-04-11)
  All but a few existing callers assume without checking that this function succeeds. While it probably will, that's a poor excuse for not checking. Let's make it return void and instead throw an error if it doesn't find the block reference. Callers that actually need to handle the no-such-block case must now use the underlying function XLogRecGetBlockTagExtended.
  In addition to being a bit less error-prone, this should also serve to suppress some Coverity complaints about XLogRecGetBlockRefInfo.
  While at it, clean up some inconsistency about use of the XLogRecHasBlockRef macro: make XLogRecGetBlockTagExtended use that instead of open-coding the same condition, and avoid calling XLogRecHasBlockRef twice in relevant code paths. (That is, calling XLogRecHasBlockRef followed by XLogRecGetBlockTag is now deprecated: use XLogRecGetBlockTagExtended instead.)
  Patch HEAD only; this doesn't seem to have enough value to consider a back-branch API break.
  Discussion: https://postgr.es/m/425039.1649701221@sss.pgh.pa.us
* Remove comment about historic heap vacuuming issue. (Peter Geoghegan, 2022-04-11)
  Remove comment block about how heap page vacuuming used to set tuples with storage to LP_UNUSED in a rare edge case that can no longer happen following commit 8523492d4e. The comments seem unnecessary now, since it's now generally clear that heap vacuuming only applies to LP_DEAD items from VACUUM's first heap pass following more recent work from commits 12b5ade902 and 4f8d9d1217.
* Remove dead code in do_pg_backup_start(). (Tom Lane, 2022-04-11)
  As of commit 39969e2a1, no caller of do_pg_backup_start() passes NULL for labelfile or tblspcmapfile, nor is it plausible that any would do so in the future. Remove the code that coped with that case, as (a) it's dead and (b) it causes Coverity to bleat about possibly leaked storage.
  While here, do some janitorial work on the function's header comment.
* Explicitly ignore guaranteed-true result from pgstat_lock_entry(). (Tom Lane, 2022-04-11)
  With nowait passed as false, pgstat_lock_entry() must return true, so there's no need to check its result. Coverity seems unconvinced of this, so whack it upside the head with a (void) cast.
* fgetc() returns int, not char. (Tom Lane, 2022-04-11)
  This has no practical effect, since this code doesn't actually need to distinguish EOF (-1) from \0377; but it silences a Coverity complaint.
* Fix various typos and spelling mistakes in code comments (David Rowley, 2022-04-11)
  Author: Justin Pryzby
  Discussion: https://postgr.es/m/20220411020336.GB26620@telsasoft.com
* Fix the dates of some copyright notices (Michael Paquier, 2022-04-11)
  0ad8032 and 4e34747 are at the origin of that. Julien has found the one in parse_jsontable.c, while I have spotted the rest.
  Author: Julien Rouhaud, Michael Paquier
  Discussion: https://postgr.es/m/20220411060838.ftnzyvflpwu6f74w@jrouhaud
* Add missing serial commas (Peter Eisentraut, 2022-04-09)
* Rename delayChkpt to delayChkptFlags. (Robert Haas, 2022-04-08)
  Before commit 412ad7a55639516f284cd0ef9757d6ae5c7abd43, delayChkpt was a Boolean. Now it's an integer. Extensions using it need to be appropriately updated, so let's rename the field to make sure that a hard compilation failure occurs.
  Replacing delayChkpt with delayChkptFlags made a few comments extend past 80 characters, so I reflowed them and changed some wording very slightly.
  The back-branches will need a different change to restore compatibility with existing minor releases; this is just for master.
  Per suggestion from Tom Lane.
  Discussion: http://postgr.es/m/a7880f4d-1d74-582a-ada7-dad168d046d1@enterprisedb.com
* Check XLogRecHasBlockRef() before XLogRecHasBlockImage(). (Jeff Davis, 2022-04-08)
  Trial fix of buildfarm failures on kestrel and tamandua.
* Add contrib/pg_walinspect. (Jeff Davis, 2022-04-08)
  Provides similar functionality to pg_waldump, but from a SQL interface rather than a separate utility.
  Author: Bharath Rupireddy
  Reviewed-by: Greg Stark, Kyotaro Horiguchi, Andres Freund, Ashutosh Sharma, Nitin Jadhav, RKN Sai Krishna
  Discussion: https://postgr.es/m/CALj2ACUGUYXsEQdKhEdsBzhGEyF3xggvLdD8C0VT72TNEfOiog%40mail.gmail.com
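  A usage sketch; the function names here are an assumption based on the extension's SQL interface and are best checked against its documentation, and the starting LSN is purely illustrative:
      CREATE EXTENSION pg_walinspect;
      -- inspect WAL records between a starting LSN and the current insert position
      SELECT * FROM pg_get_wal_records_info('0/1000000', pg_current_wal_lsn()) LIMIT 5;
      SELECT * FROM pg_get_wal_stats('0/1000000', pg_current_wal_lsn());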
* Remove error message hints mentioning configure options (Peter Eisentraut, 2022-04-08)
  These are usually not useful since users will use packaged distributions and won't be interested in rebuilding their installation from source. Also, we have only used these kinds of hints for some features and in some places, not consistently throughout.
  Reviewed-by: Andres Freund <andres@anarazel.de>
  Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
  Discussion: https://www.postgresql.org/message-id/flat/2552aed7-d0e9-280a-54aa-2dc7073f371d%40enterprisedb.com
* Track I/O timing for temporary file blocks in EXPLAIN (BUFFERS) (Michael Paquier, 2022-04-08)
  Previously, the output of the EXPLAIN (BUFFERS) option showed only the I/O timing spent reading and writing shared and local buffers. This commit adds on top of that the I/O timing for temporary buffers in the output of EXPLAIN (for spilled external sorts, hashes, materialization, etc.). This can be helpful for users in cases where the I/O related to temporary buffers is the bottleneck.
  Like its cousin, this information is available only when track_io_timing is enabled.
  Playing with the patch, this shows an extra overhead of up to 1% even when using gettimeofday() as the implementation for interval timings, which is close to the usual noise range, though still measurable.
  Author: Masahiko Sawada
  Reviewed-by: Georgios Kokolatos, Melanie Plageman, Julien Rouhaud, Ranier Vilela
  Discussion: https://postgr.es/m/CAD21AoAJgotTeP83p6HiAGDhs_9Fw9pZ2J=_tYTsiO5Ob-V5GQ@mail.gmail.com
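  A sketch of how one might observe the new output; the query is illustrative and the exact wording of the "I/O Timings" line may differ:
      SET track_io_timing = on;
      SET work_mem = '64kB';   -- small enough that the sort spills to temp files
      EXPLAIN (ANALYZE, BUFFERS)
      SELECT g FROM generate_series(1, 100000) AS g ORDER BY g DESC;
      -- the Sort node is expected to report temp-file I/O time (e.g.
      -- "I/O Timings: temp read=... write=...") alongside the temp block counts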
* Truncate line pointer array during heap pruning. (Peter Geoghegan, 2022-04-07)
  Reclaim space from the line pointer array when heap pruning leaves behind a contiguous group of LP_UNUSED items at the end of the array. This happens during subsequent page defragmentation. Certain kinds of heap line pointer bloat are ameliorated by this new optimization.
  Follow-up work to commit 3c3b8a4b26, which taught VACUUM to truncate the line pointer array in about the same way during VACUUM's second pass over the heap. We now apply line pointer array truncation during both the first and the second pass over the heap made by VACUUM. We can also perform line pointer array truncation during opportunistic pruning.
  Matthias van de Meent, with small tweaks by me.
  Author: Matthias van de Meent <boekewurm+postgres@gmail.com>
  Discussion: https://postgr.es/m/CAEze2WjgaQc55Y5f5CQd3L=eS5CZcff2Obxp=O6pto8-f0hC4w@mail.gmail.com
  Discussion: https://postgr.es/m/CAEze2Wg36%2B4at2eWJNcYNiW2FJmht34x3YeX54ctUSs7kKoNcA%40mail.gmail.com
* Teach planner and executor about monotonic window funcs (David Rowley, 2022-04-08)
  Window functions such as row_number() always return a value higher than the previously returned value for tuples in any given window partition.
  Traditionally queries such as:
      SELECT * FROM
         (SELECT *, row_number() over (order by c) rn FROM t) t
      WHERE rn <= 10;
  were executed fairly inefficiently. Neither the query planner nor the executor knew that once rn made it to 11 that nothing further would match the outer query's WHERE clause. It would blindly continue until all tuples were exhausted from the subquery.
  Here we implement means to make the above execute more efficiently. This is done by way of adding a pg_proc.prosupport function to various of the built-in window functions and adding supporting code to allow the support function to inform the planner if the window function is monotonically increasing, monotonically decreasing, both or neither. The planner is then able to make use of that information and possibly allow the executor to short-circuit execution by way of adding a "run condition" to the WindowAgg to allow it to determine if some of its execution work can be skipped.
  This "run condition" is not like a normal filter. These run conditions are only built using quals comparing values to monotonic window functions. For monotonically increasing functions, quals making use of the btree operators for <, <= and = can be used (assuming the window function column is on the left). You can see here that once such a condition becomes false, a monotonically increasing function could never make it subsequently true again. For monotonically decreasing functions the >, >= and = btree operators for the given type can be used for run conditions.
  The best-case situation for this is when there is a single WindowAgg node without a PARTITION BY clause. Here when the run condition becomes false the WindowAgg node can simply return NULL. No more tuples will ever match the run condition. It's a little more complex when there is a PARTITION BY clause. In this case, we cannot return NULL as we must still process other partitions. To speed this case up we pull tuples from the outer plan to check if they're from the same partition and simply discard them if they are. When we find a tuple belonging to another partition we start processing as normal again until the run condition becomes false or we run out of tuples to process.
  When there are multiple WindowAgg nodes to evaluate then this complicates the situation. For intermediate WindowAggs we must ensure we always return all tuples to the calling node. Any filtering done could lead to incorrect results in WindowAgg nodes above. For all intermediate nodes, we can still save some work when the run condition becomes false. We've no need to evaluate the WindowFuncs anymore. Other WindowAgg nodes cannot reference the value of these and these tuples will not appear in the final result anyway. The savings here are small in comparison to what can be saved in the top-level WindowAgg, but still worthwhile.
  Intermediate WindowAgg nodes never filter out tuples, but here we change WindowAgg so that the top-level WindowAgg filters out tuples that don't match the intermediate WindowAgg node's run condition. Such filters appear in the "Filter" clause in EXPLAIN for the top-level WindowAgg node.
  Here we add prosupport functions to allow the above to work for: row_number(), rank(), dense_rank(), count(*) and count(expr). It appears technically possible to do the same for min() and max(), however, it seems unlikely to be useful enough, so that's not done here.
  Bump catversion
  Author: David Rowley
  Reviewed-by: Andy Fan, Zhihong Yu
  Discussion: https://postgr.es/m/CAApHDvqvp3At8++yF8ij06sdcoo1S_b2YoaT9D4Nf+MObzsrLQ@mail.gmail.com
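  A sketch of the new plan shape for the query quoted above; "t" and "c" are the hypothetical names from that example, and the exact EXPLAIN wording is an assumption:
      EXPLAIN (COSTS OFF)
      SELECT * FROM
         (SELECT *, row_number() over (order by c) rn FROM t) t
      WHERE rn <= 10;
      -- expected to show a WindowAgg node carrying something like:
      --   Run Condition: (row_number() OVER (?) <= 10)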
* Revert "Rewrite some RI code to avoid using SPI" (Alvaro Herrera, 2022-04-07)
  This reverts commit 99392cdd78b788295e52b9f4942fa11992fd5ba9. We'd rather rewrite ri_triggers.c as a whole rather than piecemeal.
  Discussion: https://postgr.es/m/E1ncXX2-000mFt-Pe@gemulon.postgresql.org
* Rewrite some RI code to avoid using SPI (Alvaro Herrera, 2022-04-07)
  Modify the subroutines called by RI trigger functions that want to check if a given referenced value exists in the referenced relation to simply scan the foreign key constraint's unique index, instead of using SPI to execute
      SELECT 1 FROM referenced_relation WHERE ref_key = $1
  This saves a lot of work, especially when inserting into or updating a referencing relation.
  This rewrite allows us to fix a PK row visibility bug caused by a partition descriptor hack which requires ActiveSnapshot to be set to come up with the correct set of partitions for the RI query running under REPEATABLE READ isolation. We now set that snapshot independently of the snapshot to be used by the PK index scan, so the two no longer interfere.
  The buggy output in src/test/isolation/expected/fk-snapshot.out of the relevant test case added by commit 00cb86e75d6d has been corrected. (The bug still exists in branch 14, however, but this fix is too invasive to backpatch.)
  Author: Amit Langote <amitlangote09@gmail.com>
  Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
  Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
  Reviewed-by: Li Japin <japinli@hotmail.com>
  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
  Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
  Discussion: https://postgr.es/m/CA+HiwqGkfJfYdeq5vHPh6eqPKjSbfpDDY+j-kXYFePQedtSLeg@mail.gmail.com
* Revert "Logical decoding of sequences" (Tomas Vondra, 2022-04-07)
  This reverts a sequence of commits, implementing features related to logical decoding and replication of sequences:
      - 0da92dc530c9251735fc70b20cd004d9630a1266
      - 80901b32913ffa59bf157a4d88284b2b3a7511d9
      - b779d7d8fdae088d70da5ed9fcd8205035676df3
      - d5ed9da41d96988d905b49bebb273a9b2d6e2915
      - a180c2b34de0989269fdb819bff241a249bf5380
      - 75b1521dae1ff1fde17fda2e30e591f2e5d64b6a
      - 2d2232933b02d9396113662e44dca5f120d6830e
      - 002c9dd97a0c874fd1693a570383e2dd38cd40d5
      - 05843b1aa49df2ecc9b97c693b755bd1b6f856a9
  The implementation has issues, mostly due to combining transactional and non-transactional behavior of sequences. It's not clear how this could be fixed, but it'll require reworking significant part of the patch.
  Discussion: https://postgr.es/m/95345a19-d508-63d1-860a-f5c2f41e8d40@enterprisedb.com
* Unlogged sequences (Peter Eisentraut, 2022-04-07)
  Add support for unlogged sequences. Unlike for unlogged tables, this is not a performance feature. It allows sequences associated with unlogged tables to be excluded from replication.
  A new subcommand ALTER SEQUENCE ... SET LOGGED/UNLOGGED is added.
  An identity/serial sequence now automatically gets and follows the persistence level (logged/unlogged) of its owning table. (The sequences owned by temporary tables were already temporary through the separate mechanism in RangeVarAdjustRelationPersistence().) But you can still change the persistence of an owned sequence separately. Also, pg_dump and pg_upgrade preserve the persistence of existing sequences.
  Discussion: https://www.postgresql.org/message-id/flat/04e12818-2f98-257c-b926-2845d74ed04f%402ndquadrant.com
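  A brief sketch of the syntax described above; the sequence and table names are hypothetical:
      CREATE UNLOGGED SEQUENCE s;
      ALTER SEQUENCE s SET LOGGED;    -- switch persistence in either direction
      ALTER SEQUENCE s SET UNLOGGED;
      -- an identity sequence now follows its owning table's persistence
      CREATE UNLOGGED TABLE events (id int GENERATED ALWAYS AS IDENTITY, payload text);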
* Fix typo in xlogrecovery.c code comment (Daniel Gustafsson, 2022-04-07)
  Author: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
  Discussion: https://postgr.es/m/CALj2ACUoPtnReT=yAQMcWLtcCpk7p83xjeA8tiRX8Q0_sjh8kw@mail.gmail.com
* Prefetch data referenced by the WAL, take II. (Thomas Munro, 2022-04-07)
  Introduce a new GUC recovery_prefetch. When enabled, look ahead in the WAL and try to initiate asynchronous reading of referenced data blocks that are not yet cached in our buffer pool. For now, this is done with posix_fadvise(), which has several caveats. Since not all OSes have that system call, "try" is provided so that it can be enabled where available. Better mechanisms for asynchronous I/O are possible in later work. Set to "try" for now for test coverage. Default setting to be finalized before release.
  The GUC wal_decode_buffer_size limits the distance we can look ahead in bytes of decoded data.
  The existing GUC maintenance_io_concurrency is used to limit the number of concurrent I/Os allowed, based on pessimistic heuristics used to infer that I/Os have begun and completed. We'll also not look more than maintenance_io_concurrency * 4 block references ahead.
  Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
  Reviewed-by: Tomas Vondra <tomas.vondra@2ndquadrant.com>
  Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com> (earlier version)
  Reviewed-by: Andres Freund <andres@anarazel.de> (earlier version)
  Reviewed-by: Justin Pryzby <pryzby@telsasoft.com> (earlier version)
  Tested-by: Tomas Vondra <tomas.vondra@2ndquadrant.com> (earlier version)
  Tested-by: Jakub Wartak <Jakub.Wartak@tomtom.com> (earlier version)
  Tested-by: Dmitry Dolgov <9erthalion6@gmail.com> (earlier version)
  Tested-by: Sait Talha Nisanci <Sait.Nisanci@microsoft.com> (earlier version)
  Discussion: https://postgr.es/m/CA%2BhUKGJ4VJN8ttxScUFM8dOKX0BrBiboo5uz1cq%3DAovOddfHpA%40mail.gmail.com
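  The GUCs named above can be set in postgresql.conf; a sketch using SQL, with illustrative values (whether a reload is enough or a restart is needed depends on each GUC's context):
      ALTER SYSTEM SET recovery_prefetch = 'try';
      ALTER SYSTEM SET wal_decode_buffer_size = '512kB';   -- look-ahead distance, value illustrative
      ALTER SYSTEM SET maintenance_io_concurrency = 10;    -- caps concurrent prefetch I/Os
      SELECT pg_reload_conf();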
* Fix warning introduced in 5c279a6d350. (Jeff Davis, 2022-04-07)
  Change two macros to be static inline functions instead to keep the data type consistent. This avoids a "comparison is always true" warning that was occurring with -Wtype-limits. In the process, change the names to look less like macros.
  Discussion: https://postgr.es/m/20220407063505.njnnrmbn4sxqfsts@alap3.anarazel.de
* pgstat: add pg_stat_have_stats() test helper. (Andres Freund, 2022-04-07)
  Will be used by tests committed subsequently.
  Bumps catversion (this time for real, the one in 0f96965c658 got lost when rebasing over 5c279a6d350).
  Author: Melanie Plageman <melanieplageman@gmail.com>
  Discussion: https://postgr.es/m/CAAKRu_aNxL1WegCa45r=VAViCLnpOU7uNC7bTtGw+=QAPyYivw@mail.gmail.com
* pgstat: add pg_stat_force_next_flush(), use it to simplify tests. (Andres Freund, 2022-04-06)
  In the stats collector days it was hard to write tests for the stats system, because fundamentally delivery of stats messages over UDP was not synchronous (nor guaranteed). Now we easily can force pending stats updates to be flushed synchronously.
  This moves stats.sql into a parallel group; there isn't a reason for it to run in isolation anymore. And it may shake out some bugs.
  Bumps catversion.
  Author: Andres Freund <andres@anarazel.de>
  Discussion: https://postgr.es/m/20220303021600.hs34ghqcw6zcokdh@alap3.anarazel.de
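  A sketch of the testing pattern this enables; the table name is hypothetical:
      CREATE TABLE stats_demo (x int);
      INSERT INTO stats_demo VALUES (1);
      SELECT pg_stat_force_next_flush();   -- force pending stats to be flushed
      SELECT n_tup_ins FROM pg_stat_user_tables WHERE relname = 'stats_demo';
      -- the insert above should now be visible in the cumulative stats view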
* pgstat: fix small bug in pgstat_drop_relation(). (Andres Freund, 2022-04-06)
  Just after committing 5891c7a8ed8, a test running with debug_discard_caches=1 failed locally... pgstat_drop_relation() neither checked pgstat_should_count_relation() nor called pgstat_prep_relation_pending(). With debug_discard_caches=1 rel->pgstat_info wasn't set up, leading to pg_stat_get_xact_tuples_inserted() spuriously still returning > 0 while in the transaction dropping the table.
* pgstat: prevent pgstat_reinit_entry() from zeroing out lwlock. (Andres Freund, 2022-04-06)
  Zeroing out an lwlock in a normal build turns out to not trigger any alarms, if nobody can use the lwlock at that moment (as the case here). But with --disable-spinlocks --disable-atomics, the sema field needs to be initialized.
  We probably should make sure that this fails on more common configurations as well...
  Per buildfarm animal rorqual
* Fix compilation with WAL_DEBUG. (Andres Freund, 2022-04-06)
  Broke with 5c279a6d350. But looks like it had been half-broken since 70e81861fad, because 'rmid' didn't refer to the current record's rmid anymore, but to rmid from "Initialize resource managers" - a constant.
* Custom WAL Resource Managers. (Jeff Davis, 2022-04-06)
  Allow extensions to specify a new custom resource manager (rmgr), which allows specialized WAL. This is meant to be used by a Table Access Method or Index Access Method.
  Prior to this commit, only Generic WAL was available, which offers support for recovery and physical replication but not logical replication.
  Reviewed-by: Julien Rouhaud, Bharath Rupireddy, Andres Freund
  Discussion: https://postgr.es/m/ed1fb2e22d15d3563ae0eb610f7b61bb15999c0a.camel%40j-davis.com