path: root/src
...
* Refactoring for CommitTransactionCommand()/AbortCurrentTransaction() (Alexander Korotkov, 2024-04-18)

  fefd9a3fed turned the tail recursion of CommitTransactionCommand() and AbortCurrentTransaction() into iteration. However, it split the handling of cases between different functions. This commit puts the handling of all the cases into AbortCurrentTransactionInternal() and CommitTransactionCommandInternal(). Now CommitTransactionCommand() and AbortCurrentTransaction() just repeatedly call the internal functions.

  Reported-by: Andres Freund
  Discussion: https://postgr.es/m/20240415224834.w6piwtefskoh32mv%40awork3.anarazel.de
  Author: Andres Freund

* Remove GlobalVisTestNonRemovable[Full]Horizon, not used anymore (Andres Freund, 2024-04-17)

  GlobalVisTestNonRemovableHorizon() was only used for the implementation of snapshot_too_old, which was removed in f691f5b80a8. As using GlobalVisTestNonRemovableHorizon() is not particularly efficient, no new uses for it should be added. Therefore remove.

  Discussion: https://postgr.es/m/20240415185720.q4dg4dlcyvvrabz4@awork3.anarazel.de

* Cleanup parallel BRIN index build code (Tomas Vondra, 2024-04-17)

  Commit b43757171470 added support for parallel builds of BRIN indexes, using code similar to BTREE. But there turned out to be a couple of unnecessary differences, particularly in how the leader waits for the workers and merges the results. So remove these, to make the code more similar.

  The leader never waited on the workersdonecv condition variable, but simply called WaitForParallelWorkersToFinish() in _brin_end_parallel() and then merged the per-worker results. This worked correctly, but it seems better to do the wait and merge before _brin_end_parallel(). This commit moves the relevant code to _brin_parallel_heapscan/merge(), which means _brin_end_parallel() remains responsible only for exiting the parallel mode and accumulating WAL usage data.

  Discussion: https://postgr.es/m/3733d042-71e1-6ae6-5fac-00c12db62db6@enterprisedb.com

* Remove dead code (Peter Eisentraut, 2024-04-17)

  The configure check for HAVE_DECL_LLVMORCREGISTERPERF was removed by e9a9843e138, but some code guarded by it was left. (That commit removed the "register" calls but left the "unregister" calls.) That code cannot be reached anymore, so remove it.

  Reported-by: Heikki Linnakangas <hlinnaka@iki.fi>
  Discussion: https://www.postgresql.org/message-id/5539b16c-cff7-46d5-9621-c3fb6b549e9e@iki.fi

* Add missing source file to libpq/nls.mk (Peter Eisentraut, 2024-04-17)

* Fix typos with function name in event_trigger.c (Michael Paquier, 2024-04-17)

  Databases exist, contrary to datatabases. Oversight in e83d1b0c40cc.

  Author: Japin Li
  Discussion: https://postgr.es/m/ME3P282MB316611A2F7BF43919F695228B6082@ME3P282MB3166.AUSP282.PROD.OUTLOOK.COM

* Disallow specifying ON_ERROR option without value. (Masahiko Sawada, 2024-04-17)

  The ON_ERROR option of the COPY command previously allowed omitting its value, which was inconsistent with the syntax synopsis in the documentation and the behavior of other non-boolean COPY options. This change enforces providing a value for the ON_ERROR option, ensuring consistency with other non-boolean options and aligning with the documented syntax.

  Author: Atsushi Torikoshi
  Reviewed-by: Masahiko Sawada
  Discussion: https://postgr.es/m/a9770bf57646d90dedc3d54cf32634b2%40oss.nttdata.com

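  As an illustration, a minimal SQL sketch of the now-required spelling (the table and file names here are hypothetical):

      -- ON_ERROR must be given an explicit value:
      -- 'stop' (the default) or 'ignore' (skip rows whose data cannot be converted)
      COPY measurements FROM '/tmp/measurements.csv'
          WITH (FORMAT csv, ON_ERROR ignore);

      -- Omitting the value, e.g. WITH (ON_ERROR), is now rejected.
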
* Update mmgr's README to mention BumpContext (David Rowley, 2024-04-17)

  Oversight in 29f6a959c.

  In passing, since we now have 4 memory context types to choose from, provide a brief overview of the specialities of each memory context type.

  Reported-by: Amul Sul
  Author: Amul Sul, David Rowley
  Discussion: https://postgr.es/m/CAAJ_b94U2s9nHh--DEK=sPEZUQ+x7vQJ7529fF8UAH97QJ9NXg@mail.gmail.com

* Push dedicated BumpBlocks to the tail of the blocks list (David Rowley, 2024-04-17)

  BumpContext relies on the head block of its 'blocks' field as the current block to allocate new chunks to. When we receive an allocation request larger than allocChunkLimit, we place these chunks on a new dedicated block and, until now, we pushed the block onto the *head* of the 'blocks' list. This behavior caused the previous bump block to no longer be available for new normal-sized (non-large) allocations and would result in blocks only being partially filled if a large allocation request arrived before the block became full.

  Here adjust the code to push these dedicated blocks onto the *tail* of the blocks list so that the head block remains intact and available to be used by normal allocation request sizes until it becomes full.

  In passing, make the elog(ERROR) calls for the unsupported callbacks consistent. Likewise for the header comments for those functions.

  Discussion: https://postgr.es/m/CAApHDvp9___r-ayJj0nZ6GD3MeCGwGZ0_6ZptWpwj+zqHtmwCw@mail.gmail.com
  Discussion: https://postgr.es/m/CAApHDvqerXpzUnuDQfUEi3DZA+9=Ud9WSt3ruxN5b6PcOosx2g@mail.gmail.com

* Mark some new location fields as ParseLoc (Peter Eisentraut, 2024-04-16)

  Some new code probably didn't see 605721f819f and continued to use type int for parse location fields. Fix those.

* Clean up more indent breakage from 6377e12a5. (Tom Lane, 2024-04-16)

  Per buildfarm member koel.

* Fix assorted bugs in ecpg's macro mechanism. (Tom Lane, 2024-04-16)

  The code associated with EXEC SQL DEFINE was unreadable and full of bugs, notably:

  * It'd attempt to free a non-malloced string if the ecpg program tries to redefine a macro that was defined on the command line.

  * Possible memory stomp if user writes "-D=foo".

  * Undef'ing or redefining a macro defined on the command line would change the state visible to the next file, when multiple files are specified on the command line. (While possibly that could have been an intentional choice, the code clearly intends to revert to the original macro state; it's just failing to consider this interaction.)

  * Missing "break" in defining a new macro meant that redefinition of an existing name would cause an extra entry to be added to the definition list. While not immediately harmful, a subsequent undef would result in the prior entry becoming visible again.

  * The interactions with input buffering are subtle and were entirely undocumented.

  It's not that surprising that we hadn't noticed these bugs, because there was no test coverage at all of either the -D command line switch or multiple input files. This patch adds such coverage (in a rather hacky way I guess).

  In addition to the code bugs, the user documentation was confused about whether the -D switch defines a C macro or an ecpg one, and it failed to mention that you can write "-Dsymbol=value".

  These problems are old, so back-patch to all supported branches.

  Discussion: https://postgr.es/m/998011.1713217712@sss.pgh.pa.us

* Fix nbtree "deduce NOT NULL" scan key comment.Peter Geoghegan2024-04-16
| | | | Oversight in commit c9c0589fda.
* Undo incorrect typedefs.list change. (Tom Lane, 2024-04-16)

  6377e12a5 should not have removed AcquireSampleRowsFunc, as that's still in use. Per buildfarm member koel.

* Ensure generated join clauses for child rels have correct relids. (Tom Lane, 2024-04-16)

  When building a join clause derived from an EquivalenceClass, if the clause is to be used with an appendrel child relation then make sure its clause_relids include the relids of that child relation. Normally this would be true already because the EquivalenceMember would be a Var of that relation. However, if the appendrel represents a flattened UNION ALL construct then some child EquivalenceMembers could be constants with no relids. The resulting under-marked clause is problematic because it could mislead join_clause_is_movable_into about where the clause should be evaluated.

  We do not have an example showing incorrect plan generation, but there are existing cases in the regression tests that will fail the Asserts this patch adds to get_baserel_parampathinfo. A similarly wrong conclusion about a clause being considered by get_joinrel_parampathinfo would lead to wrong placement of the clause. (This also squares with the way that clause_relids is calculated for non-equijoin clauses in adjust_appendrel_attrs.)

  The other reason for wanting these new Asserts is that the previous blithe assumption that the results of generate_join_implied_equalities "necessarily satisfy join_clause_is_movable_into" turns out to be wrong pre-v16. If it's still wrong it'd be good to find out.

  Per bug #18429 from Benoît Ryder. The bug as filed was fixed by commit 2489d76c4, but these changes correlate with the fix we will need to apply in pre-v16 branches.

  Discussion: https://postgr.es/m/18429-8982d4a348cc86c6@postgresql.org

* Fix nbtree posting list comment. (Peter Geoghegan, 2024-04-16)

  Oversight in commit 0d861bbb70.

* Fix nbtree page recycling comment. (Peter Geoghegan, 2024-04-16)

  Oversight in commit e5d8a99903.

* revert: Generalize relation analyze in table AM interface (Alexander Korotkov, 2024-04-16)

  This commit reverts 27bc1772fc and dd1f6b0c17. Per review by Andres Freund.

  Discussion: https://postgr.es/m/20240415201057.khoyxbwwxfgzomeo%40awork3.anarazel.de

* Improve test coverage in bump.c (David Rowley, 2024-04-16)

  There were no callers of BumpAllocLarge() in the regression tests, so here we add a sort with a tuple large enough to use that path in bump.c.

  Also, BumpStats() wasn't being called, so add a test to sysviews.sql to query the pg_backend_memory_contexts view while a bump context exists in the backend.

  Reported-by: Andres Freund
  Discussion: https://postgr.es/m/20240414223305.m3i5eju6zylabvln@awork3.anarazel.de

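  A hedged sketch of the kind of check such a test can make (not necessarily the exact query added to sysviews.sql, and it assumes the view exposes a type column for distinguishing context kinds):

      -- list any bump memory contexts currently present in this backend
      SELECT name, total_bytes > 0 AS has_memory
      FROM pg_backend_memory_contexts
      WHERE type = 'Bump';
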
* Add missing PGDLLIMPORT markings (Michael Paquier, 2024-04-16)

  All backend-side variables should be marked with PGDLLIMPORT, as per policy introduced in 8ec569479f. aafc05de1bf5 has forgotten MyClientSocket, and 05c3980e7f47 LoadedSSL.

  These can be spotted with a command like this one (be careful of not switching __pg_log_level):

      src/tools/mark_pgdllimport.pl $(git ls-files src/include/)

  Reported-by: Peter Eisentraut
  Discussion: https://postgr.es/m/ZhzkRzrkKhbeQMRm@paquier.xyz

* Fix type-checking of RECORD-returning functions in FROM, redux. (Tom Lane, 2024-04-15)

  Commit 2ed8f9a01 intended to institute a policy that if a RangeTblFunction has a coldeflist, then the function return type is certainly RECORD, and we should use the coldeflist as the source of truth about what the columns of the record type are. When the original function has been folded to a constant, inspection of the constant might give a different answer. This situation will lead to a tuple-type-mismatch error at execution, but up until that point we need to consistently believe the coldeflist, or we'll have problems from different bits of code reaching different conclusions.

  expandRTE didn't get that memo though, and would try to produce a tupdesc based on the constant in this situation, leading to an assertion failure. (Desultory testing suggests that non-assert builds often manage to give the expected error, although I also saw a "cache lookup failed for type 0" error, and it seems at least possible that a crash could happen.)

  Some other callers of get_expr_result_type and get_expr_result_tupdesc were also being incautious about this. While none of them seem to have actual bugs, they're working harder than necessary in this case, besides which it seems safest to have an explicit policy of not using those functions on an RTE with a coldeflist. Adjust the code accordingly, and add commentary to funcapi.c about this policy.

  Also fix an obsolete comment that claimed "get_expr_result_type() doesn't know how to extract type info from a RECORD constant". That hasn't been true since commit d57534740.

  Per bug #18422 from Alexander Lakhin. As with the previous commit, back-patch to all supported branches.

  Discussion: https://postgr.es/m/18422-89ca86c8eac5246d@postgresql.org

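  For context, a generic example of the construct involved, i.e. a column definition list attached to a RECORD-returning function in FROM (this is not the failing query from the bug report):

      -- the coldeflist "AS t(a int, b text)" defines the record's columns
      SELECT t.a, t.b
      FROM json_to_record('{"a": 1, "b": "x"}') AS t(a int, b text);
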
* ATTACH PARTITION: Don't match a PK with a UNIQUE constraint (Alvaro Herrera, 2024-04-15)

  When matching constraints in AttachPartitionEnsureIndexes() we weren't testing the constraint type, which could make a UNIQUE key lacking a not-null constraint incorrectly satisfy a primary key requirement. Fix this by testing that the constraint types match. (Other possible mismatches are verified by comparing index properties.)

  Discussion: https://postgr.es/m/202402051447.wimb4xmtiiyb@alvherre.pgsql

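  A minimal sketch of the kind of scenario the check is about (table and constraint names are hypothetical, and the exact pre- and post-fix error behavior is not shown here):

      CREATE TABLE parent (id int PRIMARY KEY) PARTITION BY RANGE (id);
      -- the partition-to-be has a unique key on id, but id itself is nullable
      CREATE TABLE part1 (id int UNIQUE);
      -- with the fix, this UNIQUE constraint is no longer treated as satisfying
      -- the parent's PRIMARY KEY requirement during ATTACH
      ALTER TABLE parent ATTACH PARTITION part1 FOR VALUES FROM (1) TO (100);
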
* Grammar fixes for split/merge partitions code (Alexander Korotkov, 2024-04-15)

  The fixes relate to comments, error messages, and corresponding expected output of regression tests.

  Discussion: https://postgr.es/m/CAMbWs49DDsknxyoycBqiE72VxzL_sYHF6zqL8dSeNehKPJhkKg%40mail.gmail.com
  Discussion: https://postgr.es/m/86bfd241-a58c-479a-9a72-2c67a02becf8%40postgrespro.ru
  Discussion: https://postgr.es/m/CAHewXNkGMPU50QG7V6Q60JGFORfo8LfYO1_GCkCa0VWbmB-fEw%40mail.gmail.com
  Author: Richard Guo, Dmitry Koval, Tender Wang

* Fix propagating attnotnull in multiple inheritance (Alvaro Herrera, 2024-04-15)

  In one of the many strange corner cases of multiple inheritance being used, commit b0e96f311985 missed a CommandCounterIncrement() call after updating the attnotnull flag during ALTER TABLE ADD COLUMN, which caused an update of the same catalog tuple to be attempted twice in the same command, giving rise to a "tuple already updated by self" error. Add the missing call to solve that, and a test case that reproduces the scenario.

  As a (perhaps surprising) secondary effect, this CCI addition triggers another behavior change: when a primary key is added to a parent partitioned table and the column in an existing partition does not have a not-null constraint, we no longer error out. This will probably be a welcome change by some users, and I think it's unlikely that anybody will miss the old behavior.

  Reported-by: Alexander Lakhin <exclusion@gmail.com>
  Discussion: http://postgr.es/m/045dec3f-9b3d-aa44-0c99-85f6992306c7@gmail.com

* psql: Make output of \dD more stable (Peter Eisentraut, 2024-04-15)

  \dD showed domain check constraints in arbitrary order, which can cause regression test failures, as was exposed by commit 9895b35cb8. To fix, order the constraints by conname, which matches what psql does in other queries listing constraints.

* Fix ALTER DOMAIN NOT NULL syntax (Peter Eisentraut, 2024-04-15)

  This addresses a few problems with commit e5da0fe3c22 ("Catalog domain not-null constraints").

  In CREATE DOMAIN, a NOT NULL constraint looks like

      CREATE DOMAIN d1 AS int [ CONSTRAINT conname ] NOT NULL

  (Before e5da0fe3c22, the constraint name was accepted but ignored.)

  But in ALTER DOMAIN, a NOT NULL constraint looks like

      ALTER DOMAIN d1 ADD [ CONSTRAINT conname ] NOT NULL VALUE

  where VALUE is where the column name would be in a table constraint. (This works as of e5da0fe3c22. Before e5da0fe3c22, this syntax resulted in an internal error.)

  But for domains, this latter syntax is confusing and needlessly inconsistent between CREATE and ALTER. So this changes it to just

      ALTER DOMAIN d1 ADD [ CONSTRAINT conname ] NOT NULL

  (None of these syntaxes are per SQL standard; we are just living with the bits of inconsistency that have built up over time.)

  In passing, this also changes the psql \dD output to not show not-null constraints in the column "Check", since it's already shown in the column "Nullable". This has also been off since e5da0fe3c22.

  Reviewed-by: jian he <jian.universality@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/9ec24d7b-633d-463a-84c6-7acff769c9e8%40eisentraut.org

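  A concrete example of the adjusted spelling, following the synopses above (the domain and constraint names here are hypothetical):

      CREATE DOMAIN nonnull_int AS int CONSTRAINT nn_create NOT NULL;

      CREATE DOMAIN plain_int AS int;
      ALTER DOMAIN plain_int ADD CONSTRAINT nn_alter NOT NULL;  -- no trailing VALUE
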
* Put back initialization of 'sslmode', to silence Coverity (Heikki Linnakangas, 2024-04-14)

  Coverity pointed out that the function checks for conn->sslmode != NULL, which implies that it might be NULL, but later we access it without a NULL-check anyway. It doesn't know that it is in fact always initialized earlier, in conninfo_add_defaults(), and hence the NULL-check is not necessary. However, there is a lot of distance between conninfo_add_defaults() and pqConnectOptions2(), so it's not surprising that it doesn't see that.

  Put back the initialization code, as it existed before commit 05fd30c0e7, to silence the warning.

  In the long run, I'd like to refactor the libpq options handling and initialization code. It seems silly to strdup() and copy strings for things like sslmode that have a limited set of possible values; it should be an enum. But that's for another day.

* Fix unnecessary padding in incremental backups (Tomas Vondra, 2024-04-14)

  Commit 10e3226ba13d added padding to incremental backups to ensure the block data is properly aligned. The code in sendFile() however failed to consider that the header may be a multiple of BLCKSZ and thus already aligned, adding a full BLCKSZ of unnecessary padding.

  Not only does this make the incremental file a bit larger, but the other places calculating the amount of padding correctly realized it's not needed and did not include it in the formula. This resulted in pg_basebackup getting confused while parsing the data stream, trying to access files with invalid filenames (e.g. with binary data etc.) and failing.

* Use the correct PG_DETOAST_DATUM macro in BRIN (Tomas Vondra, 2024-04-14)

  Commit 6bcda4a721 replaced PG_DETOAST_DATUM with PG_DETOAST_DATUM_PACKED in two BRIN output functions, for minmax-multi and bloom opclasses. But this is incorrect - the code is accessing the data through structs that already include a 4B header, so the detoast needs to match that. But the PACKED macro may keep the 1B header, which means the struct fields will point to incorrect data.

  Backpatch-through: 16
  Discussion: https://postgr.es/m/1df00a66-db5a-4e66-809a-99b386a06d86%40enterprisedb.com

* Update nbits_set in brin_bloom_union (Tomas Vondra, 2024-04-14)

  Properly update the number of bits set in the bitmap after merging the filters in brin_bloom_union. This is mostly harmless, as the counter is used only in the output function, which means pageinspect may show incorrect information about the BRIN summary. The counter does not affect correctness.

  Discovered while adding a regression test comparing indexes built with and without parallelism. The parallel index builds exercise the union procedure when merging results from workers, which is otherwise very hard to do in a test. Which is why this went unnoticed until now.

  Backpatch through 14, where the BRIN bloom opclasses were introduced.

  Backpatch-through: 14
  Discussion: https://postgr.es/m/1df00a66-db5a-4e66-809a-99b386a06d86%40enterprisedb.com

* freespace: Don't return blocks past the end of the main fork. (Noah Misch, 2024-04-13)

  GetPageWithFreeSpace() callers assume the returned block exists in the main fork, failing with "could not read block" errors if that doesn't hold. Make that assumption reliable now. It hadn't been guaranteed, due to the weak WAL and data ordering of participating components. Most operations on the fsm fork are not WAL-logged. Relation extension is not WAL-logged. Hence, an fsm-fork block on disk can reference a main-fork block that no WAL record has initialized. That could happen after an OS crash, a replica promote, or a PITR restore. wal_log_hints makes the trouble easier to hit; a replica promote or PITR ending just after a relevant fsm-fork FPI_FOR_HINT may yield this broken state. The v16 RelationAddBlocks() mechanism also makes the trouble easier to hit, since it bulk-extends even without extension lock waiters. Commit 917dc7d2393ce680dea7a59418be9ff341df3c14 stopped trouble around truncation, but vectors involving PageIsNew() pages remained.

  This implementation adds a RelationGetNumberOfBlocks() call when the cached relation size doesn't confirm a block exists. We've been unable to identify a benchmark that slows materially, but this may show up as additional time in lseek(). An alternative without that overhead would be a new ReadBufferMode such that ReadBufferExtended() returns NULL after a 0-byte read, with all other errors handled normally. However, each GetFreeIndexPage() caller would then need code for the return-NULL case.

  Back-patch to v14, due to earlier versions not caching relation size and the absence of a pre-v16 problem report.

  Ronan Dunklau. Reported by Ronan Dunklau.

  Discussion: https://postgr.es/m/1878547.tdWV9SEqCh%40aivenlaptop

* Document PG_TEST_EXTRA=libpq_encryption and also check 'kerberos' (Heikki Linnakangas, 2024-04-12)

  In the libpq encryption negotiation tests, don't run the GSSAPI tests unless PG_TEST_EXTRA='kerberos' is also set. That makes it possible to still run most of the tests when GSSAPI support is compiled in, but there's no MIT Kerberos installation.

* Move libpq encryption negotiation tests (Heikki Linnakangas, 2024-04-12)

  The test targets libpq's options, so 'src/test/interfaces/libpq/t' is a more natural place for it.

  While doing this, I noticed that I had missed adding the libpq_encryption subdir to the Makefile. That's why this commit only needs to remove it from the meson.build file.

  Per Peter Eisentraut's suggestion.

  Discussion: https://www.postgresql.org/message-id/09d4bf5d-d0fa-4c66-a1d7-5ec757609646@eisentraut.org

* Fix compilation with --with-gssapi --without-openssl (Heikki Linnakangas, 2024-04-12)

  The #define is spelled ENABLE_GSS, not USE_GSS. Introduced in commit 05fd30c0e7, reported by Thomas Munro.

  Discussion: https://www.postgresql.org/message-id/CA%2BhUKG%2BHRTtB%2Bx%2BKKKj_cfX6sNhbeGuqmGxjGMwdVPG7YGFP8w@mail.gmail.com

* Fix libpq_encryption tests when compiled without SSL support (Heikki Linnakangas, 2024-04-12)

  The test suite correctly skipped tests involving SSL in the server when SSL support was not compiled in, but even when SSL is not enabled in the server and the connection is established without SSL, libpq behaves differently in many of the test scenarios when libpq is compiled without SSL support. For example, with sslmode=prefer, if libpq is compiled with SSL support it will attempt to use SSL, but without SSL support it will try authenticating in plaintext mode directly. The expected test output didn't take that into account.

  Discussion: https://www.postgresql.org/message-id/CA%2BhUKG%2BHRTtB%2Bx%2BKKKj_cfX6sNhbeGuqmGxjGMwdVPG7YGFP8w@mail.gmail.com

* Don't allocate large buffer on the stack in pg_verifybackup (Andrew Dunstan, 2024-04-12)

  Per complaint from Andres Freund. Follow his suggestion to allocate the buffer once in the calling routine instead. Also make a tiny indentation improvement.

  Discussion: https://postgr.es/m/20240411190147.a3yries632olfcgg@awork3.anarazel.de

* Assorted minor cleanups in the test_json_parser module (Andrew Dunstan, 2024-04-12)

  Per gripes from Michael Paquier

  Discussion: https://postgr.es/m/ZhTQ6_w1vwOhqTQI@paquier.xyz

  Along the way, also clean up a handful of typos in 3311ea86ed and ea7b4e9a2a, found by Alexander Lakhin, and a couple of stylistic snafus noted by Daniel Westermann and Daniel Gustafsson.

* Shrink test file for test_json_parser module (Andrew Dunstan, 2024-04-12)

  Also delete live URLs

  Jacob Champion

  Discussion: https://postgr.es/m/CAOYmi+mtH=V1wZKAOauCd5QqQWr61hnXMJbJ9h-CZXAa1JXd3w@mail.gmail.com

* Add a TAP test for test_json_parser_perf (Andrew Dunstan, 2024-04-12)

  This just makes sure the test can run with a single iteration. A real performance test would test with many more.

* Fix some memory leaks associated with parsing json and manifests (Andrew Dunstan, 2024-04-12)

  Coverity complained about not freeing some memory associated with incrementally parsing backup manifests. To fix that, provide and use a new shutdown function for the JsonManifestParseIncrementalState object, in line with a suggestion from Tom Lane.

  While analysing the problem, I noticed a buglet in freeing memory for incremental json lexers. To fix that, remove a bogus condition on freeing the memory allocated for them.

* Fix recently introduced typo in code comment (David Rowley, 2024-04-12)

  Reported-by: Richard Guo
  Discussion: https://postgr.es/m/CAMbWs49kAsZUsj7-0SBLvE9+uKz0RCqMEmM3NVytc1YvS8sTrQ@mail.gmail.com

* Fix the review comments and a bug in the slot sync code. (Amit Kapila, 2024-04-12)

  Ensure that when updating the catalog_xmin of the synced slots, it is first written to disk before changing the in-memory value (effective_catalog_xmin). This is to prevent a scenario where the in-memory value change triggers a vacuum to remove catalog tuples before the catalog_xmin is written to disk. In the event of a crash before the catalog_xmin is persisted, we would not know that some required catalog tuples have been removed and the synced slot would be invalidated.

  Change the sanity check to ensure that remote_slot's confirmed_flush LSN can't precede the local/synced slot during slot sync. Note that the restart_lsn of the synced/local slot can be ahead of remote_slot. This can happen when slot advancing machinery finds a running xacts record after reaching the consistent state at a later point than the primary where it serializes the snapshot and updates the restart_lsn.

  Make the check to sync slots robust by allowing the sync only when the confirmed_lsn, restart_lsn, or catalog_xmin of the remote slot is ahead of the synced/local slot.

  Reported-by: Amit Kapila and Shveta Malik
  Author: Hou Zhijie, Shveta Malik
  Reviewed-by: Amit Kapila, Bertrand Drouvot
  Discussion: https://postgr.es/m/OS0PR01MB57162B67D3CB01B2756FBA6D94062@OS0PR01MB5716.jpnprd01.prod.outlook.com
  Discussion: https://postgr.es/m/CAJpy0uCSS5zmdyUXhvw41HSdTbRqX1hbYqkOfHNj7qQ+2zn0AQ@mail.gmail.com

* Fix IS [NOT] NULL qual optimization for inheritance tables (David Rowley, 2024-04-12)

  b262ad440 added code to have the planner remove redundant IS NOT NULL quals and eliminate needless scans for IS NULL quals on tables where the qual's column has a NOT NULL constraint.

  That commit failed to consider that an inheritance parent table could have differing NOT NULL constraints between the parent and the child. This caused issues as if we eliminated a qual on the parent, when applying the quals to child tables in apply_child_basequals(), the qual might not have been added to the parent's baserestrictinfo.

  Here we fix this by not applying the optimization to remove redundant quals to RelOptInfos belonging to inheritance parents and applying the optimization again in apply_child_basequals(). Effectively, this means that the parent and child are considered independently as the parent has both an inh=true and inh=false RTE and we still apply the optimization to the RelOptInfo corresponding to the inh=false RTE.

  We're able to still apply the optimization in add_base_clause_to_rel() for partitioned tables as the NULLability of partitions must match that of their parent. And, if we ever expand restriction_is_always_false() and restriction_is_always_true() to handle partition constraints then we can apply the same logic as, even in multi-level partitioned tables, there's no way to route values to a partition when the qual does not match the partition qual of the partitioned table's parent partition. The same is true for CHECK constraints as those must also match between parent partitioned tables and their partitions.

  Author: Richard Guo, David Rowley
  Discussion: https://postgr.es/m/CAMbWs4930gQSZmjR7aANzEapdy61gCg6z8dT-kAEYD0sYWKPdQ@mail.gmail.com

* Revert: Implement pg_wal_replay_wait() stored procedure (Alexander Korotkov, 2024-04-11)

  This commit reverts 06c418e163, e37662f221, bf1e650806, 25f42429e2, ee79928441, and 74eaf66f98 per review by Heikki Linnakangas.

  Discussion: https://postgr.es/m/b155606b-e744-4218-bda5-29379779da1a%40iki.fi

* Revert: Allow table AM to store complex data structures in rd_amcache (Alexander Korotkov, 2024-04-11)

  This commit reverts 02eb07ea89 per review by Andres Freund.

  Discussion: https://postgr.es/m/20240410165236.rwyrny7ihi4ddxw4%40awork3.anarazel.de

* Revert: Allow table AM tuple_insert() method to return the different slot (Alexander Korotkov, 2024-04-11)

  This commit reverts c35a3fb5e0 per review by Andres Freund.

  Discussion: https://postgr.es/m/20240410165236.rwyrny7ihi4ddxw4%40awork3.anarazel.de

* Revert: Allow locking updated tuples in tuple_update() and tuple_delete() (Alexander Korotkov, 2024-04-11)

  This commit reverts 87985cc925 and 818861eb57 per review by Andres Freund.

  Discussion: https://postgr.es/m/20240410165236.rwyrny7ihi4ddxw4%40awork3.anarazel.de

* Revert: Let table AM insertion methods control index insertion (Alexander Korotkov, 2024-04-11)

  This commit reverts b1484a3f19 per review by Andres Freund.

  Discussion: https://postgr.es/m/20240410165236.rwyrny7ihi4ddxw4%40awork3.anarazel.de

* Revert: Custom reloptions for table AM (Alexander Korotkov, 2024-04-11)

  This commit reverts 9bd99f4c26 and 422041542f per review by Andres Freund.

  Discussion: https://postgr.es/m/20240410165236.rwyrny7ihi4ddxw4%40awork3.anarazel.de

* Use correct datatype for xmin variables in slot.c (Michael Paquier, 2024-04-11)

  Two variables storing a slot's effective_xmin and effective_catalog_xmin were saved as XLogRecPtr, which is incorrect as these should be TransactionIds. Oversight in 818fefd8fd44.

  Author: Bharath Rupireddy
  Discussion: https://postgr.es/m/CALj2ACVPSB74mrDTFezz-LV3Oi6F3SN71QA0oUHvndzi5dwTNg@mail.gmail.com
  Backpatch-through: 16