path: root/src/backend
...
* Disallow specifying ON_ERROR option without value. (Masahiko Sawada, 2024-04-17)
  The ON_ERROR option of the COPY command previously allowed omitting its value, which was inconsistent with the syntax synopsis in the documentation and with the behavior of other non-boolean COPY options. This change requires a value for the ON_ERROR option, bringing it in line with the other non-boolean options and with the documented syntax.
  Author: Atsushi Torikoshi
  Reviewed-by: Masahiko Sawada
  Discussion: https://postgr.es/m/a9770bf57646d90dedc3d54cf32634b2%40oss.nttdata.com
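  For illustration, a minimal SQL sketch of the enforced syntax; the table and file names here are hypothetical:
      COPY t FROM '/tmp/data.csv' WITH (FORMAT csv, ON_ERROR ignore);  -- value now required
      COPY t FROM '/tmp/data.csv' WITH (FORMAT csv, ON_ERROR);         -- now rejected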
* Update mmgr's README to mention BumpContext (David Rowley, 2024-04-17)
  Oversight in 29f6a959c. In passing, since we now have 4 memory context types to choose from, provide a brief overview of the specialities of each memory context type.
  Reported-by: Amul Sul
  Author: Amul Sul, David Rowley
  Discussion: https://postgr.es/m/CAAJ_b94U2s9nHh--DEK=sPEZUQ+x7vQJ7529fF8UAH97QJ9NXg@mail.gmail.com
* Push dedicated BumpBlocks to the tail of the blocks list (David Rowley, 2024-04-17)
  BumpContext relies on using the head block from its 'blocks' field as the current block to allocate new chunks to. When we receive an allocation request larger than allocChunkLimit, we place these chunks on a new dedicated block and, until now, we pushed the block onto the *head* of the 'blocks' list. This behavior caused the previous bump block to no longer be available for new normal-sized (non-large) allocations and would result in blocks only being partially filled if a large allocation request arrived before the block became full.

  Here adjust the code to push these dedicated blocks onto the *tail* of the blocks list so that the head block remains intact and available to be used by normal allocation request sizes until it becomes full.

  In passing, make the elog(ERROR) calls for the unsupported callbacks consistent. Likewise for the header comments for those functions.
  Discussion: https://postgr.es/m/CAApHDvp9___r-ayJj0nZ6GD3MeCGwGZ0_6ZptWpwj+zqHtmwCw@mail.gmail.com
  Discussion: https://postgr.es/m/CAApHDvqerXpzUnuDQfUEi3DZA+9=Ud9WSt3ruxN5b6PcOosx2g@mail.gmail.com
* Fix nbtree "deduce NOT NULL" scan key comment. (Peter Geoghegan, 2024-04-16)
  Oversight in commit c9c0589fda.
* Ensure generated join clauses for child rels have correct relids. (Tom Lane, 2024-04-16)
  When building a join clause derived from an EquivalenceClass, if the clause is to be used with an appendrel child relation then make sure its clause_relids include the relids of that child relation. Normally this would be true already because the EquivalenceMember would be a Var of that relation. However, if the appendrel represents a flattened UNION ALL construct then some child EquivalenceMembers could be constants with no relids.

  The resulting under-marked clause is problematic because it could mislead join_clause_is_movable_into about where the clause should be evaluated. We do not have an example showing incorrect plan generation, but there are existing cases in the regression tests that will fail the Asserts this patch adds to get_baserel_parampathinfo. A similarly wrong conclusion about a clause being considered by get_joinrel_parampathinfo would lead to wrong placement of the clause. (This also squares with the way that clause_relids is calculated for non-equijoin clauses in adjust_appendrel_attrs.)

  The other reason for wanting these new Asserts is that the previous blithe assumption that the results of generate_join_implied_equalities "necessarily satisfy join_clause_is_movable_into" turns out to be wrong pre-v16. If it's still wrong it'd be good to find out.

  Per bug #18429 from Benoît Ryder. The bug as filed was fixed by commit 2489d76c4, but these changes correlate with the fix we will need to apply in pre-v16 branches.
  Discussion: https://postgr.es/m/18429-8982d4a348cc86c6@postgresql.org
* revert: Generalize relation analyze in table AM interface (Alexander Korotkov, 2024-04-16)
  This commit reverts 27bc1772fc and dd1f6b0c17. Per review by Andres Freund.
  Discussion: https://postgr.es/m/20240415201057.khoyxbwwxfgzomeo%40awork3.anarazel.de
* Fix type-checking of RECORD-returning functions in FROM, redux. (Tom Lane, 2024-04-15)
  Commit 2ed8f9a01 intended to institute a policy that if a RangeTblFunction has a coldeflist, then the function return type is certainly RECORD, and we should use the coldeflist as the source of truth about what the columns of the record type are. When the original function has been folded to a constant, inspection of the constant might give a different answer. This situation will lead to a tuple-type-mismatch error at execution, but up until that point we need to consistently believe the coldeflist, or we'll have problems from different bits of code reaching different conclusions.

  expandRTE didn't get that memo though, and would try to produce a tupdesc based on the constant in this situation, leading to an assertion failure. (Desultory testing suggests that non-assert builds often manage to give the expected error, although I also saw a "cache lookup failed for type 0" error, and it seems at least possible that a crash could happen.)

  Some other callers of get_expr_result_type and get_expr_result_tupdesc were also being incautious about this. While none of them seem to have actual bugs, they're working harder than necessary in this case, besides which it seems safest to have an explicit policy of not using those functions on an RTE with a coldeflist. Adjust the code accordingly, and add commentary to funcapi.c about this policy.

  Also fix an obsolete comment that claimed "get_expr_result_type() doesn't know how to extract type info from a RECORD constant". That hasn't been true since commit d57534740.

  Per bug #18422 from Alexander Lakhin. As with the previous commit, back-patch to all supported branches.
  Discussion: https://postgr.es/m/18422-89ca86c8eac5246d@postgresql.org
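  For illustration, a minimal sketch of the construct in question, a RECORD-returning function in FROM with a column definition list; the names are hypothetical:
      CREATE FUNCTION f() RETURNS record LANGUAGE sql AS $$ SELECT 1, 'one'::text $$;
      SELECT * FROM f() AS t(a int, b text);  -- the coldeflist defines the row type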
* ATTACH PARTITION: Don't match a PK with a UNIQUE constraint (Alvaro Herrera, 2024-04-15)
  When matching constraints in AttachPartitionEnsureIndexes() we weren't testing the constraint type, which could make a UNIQUE key lacking a not-null constraint incorrectly satisfy a primary key requirement. Fix this by testing that the constraint types match. (Other possible mismatches are verified by comparing index properties.)
  Discussion: https://postgr.es/m/202402051447.wimb4xmtiiyb@alvherre.pgsql
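  For illustration, a hedged sketch of the kind of mismatch this guards against; all names are hypothetical:
      CREATE TABLE parent (a int PRIMARY KEY) PARTITION BY RANGE (a);
      CREATE TABLE part1 (a int UNIQUE);   -- unique, but nullable
      -- On ATTACH, part1's UNIQUE constraint is no longer accepted as the
      -- counterpart of parent's PRIMARY KEY; the constraint types must match.
      ALTER TABLE parent ATTACH PARTITION part1 FOR VALUES FROM (1) TO (100);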
* Grammar fixes for split/merge partitions code (Alexander Korotkov, 2024-04-15)
  The fixes relate to comments, error messages, and corresponding expected output of regression tests.
  Discussion: https://postgr.es/m/CAMbWs49DDsknxyoycBqiE72VxzL_sYHF6zqL8dSeNehKPJhkKg%40mail.gmail.com
  Discussion: https://postgr.es/m/86bfd241-a58c-479a-9a72-2c67a02becf8%40postgrespro.ru
  Discussion: https://postgr.es/m/CAHewXNkGMPU50QG7V6Q60JGFORfo8LfYO1_GCkCa0VWbmB-fEw%40mail.gmail.com
  Author: Richard Guo, Dmitry Koval, Tender Wang
* Fix propagating attnotnull in multiple inheritance (Alvaro Herrera, 2024-04-15)
  In one of the many strange corner cases of multiple inheritance being used, commit b0e96f311985 missed a CommandCounterIncrement() call after updating the attnotnull flag during ALTER TABLE ADD COLUMN, which caused an update of the same catalog tuple to be attempted twice in the same command, giving rise to a "tuple already updated by self" error. Add the missing call to solve that, and a test case that reproduces the scenario.

  As a (perhaps surprising) secondary effect, this CCI addition triggers another behavior change: when a primary key is added to a parent partitioned table and the column in an existing partition does not have a not-null constraint, we no longer error out. This will probably be a welcome change by some users, and I think it's unlikely that anybody will miss the old behavior.
  Reported-by: Alexander Lakhin <exclusion@gmail.com>
  Discussion: http://postgr.es/m/045dec3f-9b3d-aa44-0c99-85f6992306c7@gmail.com
* Fix ALTER DOMAIN NOT NULL syntax (Peter Eisentraut, 2024-04-15)
  This addresses a few problems with commit e5da0fe3c22 ("Catalog domain not-null constraints").

  In CREATE DOMAIN, a NOT NULL constraint looks like

      CREATE DOMAIN d1 AS int [ CONSTRAINT conname ] NOT NULL

  (Before e5da0fe3c22, the constraint name was accepted but ignored.) But in ALTER DOMAIN, a NOT NULL constraint looks like

      ALTER DOMAIN d1 ADD [ CONSTRAINT conname ] NOT NULL VALUE

  where VALUE appears where, for a table constraint, the column name would be. (This works as of e5da0fe3c22. Before e5da0fe3c22, this syntax resulted in an internal error.) But for domains, this latter syntax is confusing and needlessly inconsistent between CREATE and ALTER. So this changes it to just

      ALTER DOMAIN d1 ADD [ CONSTRAINT conname ] NOT NULL

  (None of these syntaxes are per SQL standard; we are just living with the bits of inconsistency that have built up over time.)

  In passing, this also changes the psql \dD output to not show not-null constraints in the column "Check", since it's already shown in the column "Nullable". This has also been off since e5da0fe3c22.
  Reviewed-by: jian he <jian.universality@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/9ec24d7b-633d-463a-84c6-7acff769c9e8%40eisentraut.org
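  For illustration, a minimal sketch of the syntax accepted after this change; the domain and constraint names are hypothetical:
      CREATE DOMAIN d1 AS int CONSTRAINT d1_nn NOT NULL;
      CREATE DOMAIN d2 AS int;
      ALTER DOMAIN d2 ADD CONSTRAINT d2_nn NOT NULL;   -- no trailing VALUE keyword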
* Fix unnecessary padding in incremental backups (Tomas Vondra, 2024-04-14)
  Commit 10e3226ba13d added padding to incremental backups to ensure the block data is properly aligned. The code in sendFile() however failed to consider that the header may be a multiple of BLCKSZ and thus already aligned, adding a full BLCKSZ of unnecessary padding.

  Not only does this make the incremental file a bit larger, but the other places calculating the amount of padding did realize it's not needed and did not include it in the formula. This resulted in pg_basebackup getting confused while parsing the data stream, trying to access files with invalid filenames (e.g. with binary data etc.) and failing.
* Use the correct PG_DETOAST_DATUM macro in BRIN (Tomas Vondra, 2024-04-14)
  Commit 6bcda4a721 replaced PG_DETOAST_DATUM with PG_DETOAST_DATUM_PACKED in two BRIN output functions, for minmax-multi and bloom opclasses. But this is incorrect - the code is accessing the data through structs that already include a 4B header, so the detoast needs to match that. But the PACKED macro may keep the 1B header, which means the struct fields will point to incorrect data.
  Backpatch-through: 16
  Discussion: https://postgr.es/m/1df00a66-db5a-4e66-809a-99b386a06d86%40enterprisedb.com
* Update nbits_set in brin_bloom_union (Tomas Vondra, 2024-04-14)
  Properly update the number of bits set in the bitmap after merging the filters in brin_bloom_union. This is mostly harmless, as the counter is used only in the output function, which means pageinspect may show incorrect information about the BRIN summary. The counter does not affect correctness.

  Discovered while adding a regression test comparing indexes built with and without parallelism. The parallel index builds exercise the union procedure when merging results from workers, which is otherwise very hard to do in a test, which is why this went unnoticed until now.

  Backpatch through 14, where the BRIN bloom opclasses were introduced.
  Backpatch-through: 14
  Discussion: https://postgr.es/m/1df00a66-db5a-4e66-809a-99b386a06d86%40enterprisedb.com
* freespace: Don't return blocks past the end of the main fork. (Noah Misch, 2024-04-13)
  GetPageWithFreeSpace() callers assume the returned block exists in the main fork, failing with "could not read block" errors if that doesn't hold. Make that assumption reliable now. It hadn't been guaranteed, due to the weak WAL and data ordering of participating components. Most operations on the fsm fork are not WAL-logged. Relation extension is not WAL-logged. Hence, an fsm-fork block on disk can reference a main-fork block that no WAL record has initialized. That could happen after an OS crash, a replica promote, or a PITR restore. wal_log_hints makes the trouble easier to hit; a replica promote or PITR ending just after a relevant fsm-fork FPI_FOR_HINT may yield this broken state. The v16 RelationAddBlocks() mechanism also makes the trouble easier to hit, since it bulk-extends even without extension lock waiters. Commit 917dc7d2393ce680dea7a59418be9ff341df3c14 stopped trouble around truncation, but vectors involving PageIsNew() pages remained.

  This implementation adds a RelationGetNumberOfBlocks() call when the cached relation size doesn't confirm a block exists. We've been unable to identify a benchmark that slows materially, but this may show up as additional time in lseek(). An alternative without that overhead would be a new ReadBufferMode such that ReadBufferExtended() returns NULL after a 0-byte read, with all other errors handled normally. However, each GetFreeIndexPage() caller would then need code for the return-NULL case.

  Back-patch to v14, due to earlier versions not caching relation size and the absence of a pre-v16 problem report.
  Ronan Dunklau. Reported by Ronan Dunklau.
  Discussion: https://postgr.es/m/1878547.tdWV9SEqCh%40aivenlaptop
* Assorted minor cleanups in the test_json_parser module (Andrew Dunstan, 2024-04-12)
  Per gripes from Michael Paquier
  Discussion: https://postgr.es/m/ZhTQ6_w1vwOhqTQI@paquier.xyz

  Along the way, also clean up a handful of typos in 3311ea86ed and ea7b4e9a2a, found by Alexander Lakhin, and a couple of stylistic snafus noted by Daniel Westermann and Daniel Gustafsson.
* Fix some memory leaks associated with parsing json and manifests (Andrew Dunstan, 2024-04-12)
  Coverity complained about not freeing some memory associated with incrementally parsing backup manifests. To fix that, provide and use a new shutdown function for the JsonManifestParseIncrementalState object, in line with a suggestion from Tom Lane.

  While analysing the problem, I noticed a buglet in freeing memory for incremental json lexers. To fix that, remove a bogus condition on freeing the memory allocated for them.
* Fix recently introduced typo in code comment (David Rowley, 2024-04-12)
  Reported-by: Richard Guo
  Discussion: https://postgr.es/m/CAMbWs49kAsZUsj7-0SBLvE9+uKz0RCqMEmM3NVytc1YvS8sTrQ@mail.gmail.com
* Fix the review comments and a bug in the slot sync code. (Amit Kapila, 2024-04-12)
  Ensure that when updating the catalog_xmin of the synced slots, it is first written to disk before changing the in-memory value (effective_catalog_xmin). This is to prevent a scenario where the in-memory value change triggers a vacuum to remove catalog tuples before the catalog_xmin is written to disk. In the event of a crash before the catalog_xmin is persisted, we would not know that some required catalog tuples have been removed and the synced slot would be invalidated.

  Change the sanity check to ensure that the remote_slot's confirmed_flush LSN can't precede the local/synced slot during slot sync. Note that the restart_lsn of the synced/local slot can be ahead of the remote_slot. This can happen when the slot advancing machinery finds a running xacts record after reaching the consistent state at a later point than the primary, where it serializes the snapshot and updates the restart_lsn.

  Make the check to sync slots robust by allowing sync only when the confirmed_lsn, restart_lsn, or catalog_xmin of the remote slot is ahead of the synced/local slot.
  Reported-by: Amit Kapila and Shveta Malik
  Author: Hou Zhijie, Shveta Malik
  Reviewed-by: Amit Kapila, Bertrand Drouvot
  Discussion: https://postgr.es/m/OS0PR01MB57162B67D3CB01B2756FBA6D94062@OS0PR01MB5716.jpnprd01.prod.outlook.com
  Discussion: https://postgr.es/m/CAJpy0uCSS5zmdyUXhvw41HSdTbRqX1hbYqkOfHNj7qQ+2zn0AQ@mail.gmail.com
* Fix IS [NOT] NULL qual optimization for inheritance tables (David Rowley, 2024-04-12)
  b262ad440 added code to have the planner remove redundant IS NOT NULL quals and eliminate needless scans for IS NULL quals on tables where the qual's column has a NOT NULL constraint.

  That commit failed to consider that an inheritance parent table could have differing NOT NULL constraints between the parent and the child. This caused issues: if we eliminated a qual on the parent, then when applying the quals to child tables in apply_child_basequals(), the qual might not have been added to the parent's baserestrictinfo.

  Here we fix this by not applying the optimization to remove redundant quals to RelOptInfos belonging to inheritance parents and applying the optimization again in apply_child_basequals(). Effectively, this means that the parent and child are considered independently, as the parent has both an inh=true and inh=false RTE and we still apply the optimization to the RelOptInfo corresponding to the inh=false RTE.

  We're able to still apply the optimization in add_base_clause_to_rel() for partitioned tables as the NULLability of partitions must match that of their parent. And, if we ever expand restriction_is_always_false() and restriction_is_always_true() to handle partition constraints then we can apply the same logic as, even in multi-level partitioned tables, there's no way to route values to a partition when the qual does not match the partition qual of the partitioned table's parent partition. The same is true for CHECK constraints as those must also match between parent partitioned tables and their partitions.
  Author: Richard Guo, David Rowley
  Discussion: https://postgr.es/m/CAMbWs4930gQSZmjR7aANzEapdy61gCg6z8dT-kAEYD0sYWKPdQ@mail.gmail.com
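  For illustration, a minimal sketch (hypothetical names) of an inheritance setup where parent and child differ on NOT NULL, which is why the two must be considered independently:
      CREATE TABLE parent (a int);                           -- nullable on the parent
      CREATE TABLE child (a int NOT NULL) INHERITS (parent);
      -- "a IS NOT NULL" is redundant for the child but not for the parent.
      EXPLAIN SELECT * FROM parent WHERE a IS NOT NULL;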
* Revert: Implement pg_wal_replay_wait() stored procedure (Alexander Korotkov, 2024-04-11)
  This commit reverts 06c418e163, e37662f221, bf1e650806, 25f42429e2, ee79928441, and 74eaf66f98 per review by Heikki Linnakangas.
  Discussion: https://postgr.es/m/b155606b-e744-4218-bda5-29379779da1a%40iki.fi
* Revert: Allow table AM to store complex data structures in rd_amcache (Alexander Korotkov, 2024-04-11)
  This commit reverts 02eb07ea89 per review by Andres Freund.
  Discussion: https://postgr.es/m/20240410165236.rwyrny7ihi4ddxw4%40awork3.anarazel.de
* Revert: Allow table AM tuple_insert() method to return the different slot (Alexander Korotkov, 2024-04-11)
  This commit reverts c35a3fb5e0 per review by Andres Freund.
  Discussion: https://postgr.es/m/20240410165236.rwyrny7ihi4ddxw4%40awork3.anarazel.de
* Revert: Allow locking updated tuples in tuple_update() and tuple_delete() (Alexander Korotkov, 2024-04-11)
  This commit reverts 87985cc925 and 818861eb57 per review by Andres Freund.
  Discussion: https://postgr.es/m/20240410165236.rwyrny7ihi4ddxw4%40awork3.anarazel.de
* Revert: Let table AM insertion methods control index insertion (Alexander Korotkov, 2024-04-11)
  This commit reverts b1484a3f19 per review by Andres Freund.
  Discussion: https://postgr.es/m/20240410165236.rwyrny7ihi4ddxw4%40awork3.anarazel.de
* Revert: Custom reloptions for table AM (Alexander Korotkov, 2024-04-11)
  This commit reverts 9bd99f4c26 and 422041542f per review by Andres Freund.
  Discussion: https://postgr.es/m/20240410165236.rwyrny7ihi4ddxw4%40awork3.anarazel.de
* Use correct datatype for xmin variables in slot.c (Michael Paquier, 2024-04-11)
  Two variables storing a slot's effective_xmin and effective_catalog_xmin were saved as XLogRecPtr, which is incorrect as these should be TransactionIds.
  Oversight in 818fefd8fd44.
  Author: Bharath Rupireddy
  Discussion: https://postgr.es/m/CALj2ACVPSB74mrDTFezz-LV3Oi6F3SN71QA0oUHvndzi5dwTNg@mail.gmail.com
  Backpatch-through: 16
* Revert indexed and enlargeable binary heap implementation. (Masahiko Sawada, 2024-04-11)
  This reverts commit b840508644 and bcb14f4abc. These commits were made for commit 5bec1d6bc5 (Improve eviction algorithm in ReorderBuffer using max-heap for many subtransactions). However, per discussion, commit efb8acc0d0 replaced binary heap + index with pairing heap, and made these commits unnecessary.
  Reported-by: Jeff Davis
  Discussion: https://postgr.es/m/12747c15811d94efcc5cda72d6b35c80d7bf3443.camel%40j-davis.com
* Replace binaryheap + index with pairingheap in reorderbuffer.c (Masahiko Sawada, 2024-04-11)
  A pairing heap can perform the same operations as the binary heap + index, with as good or better algorithmic complexity, and it is an existing data structure, so we don't need to invent anything new compared to v16. This commit makes the new binaryheap functionality that was added in commits b840508644 and bcb14f4abc unnecessary, but they will be reverted separately.

  Remove the optimization to only build and maintain the heap when the amount of memory used is close to the limit, because the bookkeeping overhead with the pairing heap seems to be small enough that it doesn't matter in practice.
  Reported-by: Jeff Davis
  Author: Heikki Linnakangas
  Reviewed-by: Michael Paquier, Hayato Kuroda, Masahiko Sawada
  Discussion: https://postgr.es/m/12747c15811d94efcc5cda72d6b35c80d7bf3443.camel%40j-davis.com
* Fix grammar. (Thomas Munro, 2024-04-11)
  Reported-by: Michael Paquier <michael@paquier.xyz>
  Discussion: https://postgr.es/m/ZhdKqj5DwoOzirFv%40paquier.xyz
* Fix potential stack overflow in incremental backup. (Thomas Munro, 2024-04-11)
  The user can set RELSEG_SIZE to a high number at compile time, so we can't use it to control the size of an array on the stack: it could be many gigabytes in size. On closer inspection, we don't really need that intermediate array anyway. Let's just write directly into the output array, and then perform the absolute->relative adjustment in place. This fixes new code from commit dc212340058.
  Reviewed-by: Robert Haas <robertmhaas@gmail.com>
  Discussion: https://postgr.es/m/CA%2BhUKG%2B2hZ0sBztPW4mkLfng0qfkNtAHFUfxOMLizJ0BPmi5%2Bg%40mail.gmail.com
* Fix inconsistency with replay of hash squeeze record for clean buffers (Michael Paquier, 2024-04-11)
  aa5edbe379d6 has tweaked _hash_freeovflpage() so that the write buffer's LSN is updated only when necessary, when REGBUF_NO_CHANGE is not used. The replay code was not consistent with that, causing the write buffer's LSN to be updated and its page to be marked as dirty even if the buffer was registered in a "clean" state. This was possible for the case of a squeeze record when there are no tuples to add to the write buffer, for (is_prim_bucket_same_wrt && !is_prev_bucket_same_wrt).

  I have performed some validation of this commit with wal_consistency_checking and a change in WAL that logs REGBUF_NO_CHANGE to a new BKPIMAGE_*. Thanks to that, it is possible to know at replay if a buffer was clean when it was registered, then cross-check the LSN of the "clean" page copy coming from WAL with the LSN of the block once the record has been replayed. This eats one bit in bimg_info, which is not acceptable to be integrated as-is, but it could become handy in the future. I didn't spot other areas than the one fixed by this commit, to the extent of what the main regression test suite covers.

  As this is an oversight in aa5edbe379d6, no backpatch is required.
  Reported-by: Zubeyr Eryilmaz
  Author: Hayato Kuroda
  Reviewed-by: Amit Kapila, Michael Paquier
  Discussion: https://postgr.es/m/ZbyVVG_7eW3YD5-A@paquier.xyz
* Fix illegal attribute propagation in LLVM JIT. (Thomas Munro, 2024-04-10)
  Commit 72559438 started copying more attributes from AttributeTemplate to the functions we generate on the fly. In the case of deform functions, which return void, this meant that "noundef", from AttributeTemplate's return value (a Datum), was copied to a void type. Older LLVM releases were OK with that, but LLVM 18 crashes.

  Update our llvm_copy_attributes() function to skip copying the attribute for the return value, if the target function returns void.

  Thanks to Dmitry Dolgov for help chasing this down.
  Back-patch to all supported releases, like 72559438.
  Reported-by: Pavel Stehule <pavel.stehule@gmail.com>
  Reviewed-by: Dmitry Dolgov <9erthalion6@gmail.com>
  Discussion: https://postgr.es/m/CAFj8pRACpVFr7LMdVYENUkScG5FCYMZDDdSGNU-tch%2Bw98OxYg%40mail.gmail.com
* Fixup various StringInfo function usages (David Rowley, 2024-04-10)
  This adjusts various appendStringInfo* function calls to use a more appropriate and efficient function with the same behavior. For example, use appendStringInfoChar() when appending a single character rather than appendStringInfo(), and use appendStringInfoString() when no formatting is required rather than appendStringInfo().

  All adjustments made here are in code that's new to v17, so it makes sense to fix these now rather than wait a few years and make backpatching harder.
  Discussion: https://postgr.es/m/CAApHDvojY2UvMiO+9_55ArTj10P1LBNJyyoGB+C65BLDNT0GsQ@mail.gmail.com
  Reviewed-by: Nathan Bossart, Tom Lane
* revert: Transform OR clauses to ANY expression (Alexander Korotkov, 2024-04-10)
  This commit reverts 72bd38cc99 due to implementation and design issues.
  Reported-by: Tom Lane
  Discussion: https://postgr.es/m/3604469.1712628736%40sss.pgh.pa.us
* Remove unused BumpBlockIsValid macro (David Rowley, 2024-04-10)
  The bump allocator was recently added in 29f6a959c. Our other allocators have a similar macro to this, but seemingly the version of the macro for those allocators is only used in places where the chunk header is decoded. Since the bump allocator has no chunk header, none of those functions exist for bump, so the macro is unused. Remove it.
  Reported-by: Peter Eisentraut
  Discussion: https://postgr.es/m/5f724fb2-96e1-4f36-b65b-47b337ad432e@eisentraut.org
* Checks for ALTER TABLE ... SPLIT/MERGE PARTITIONS ... commands (Alexander Korotkov, 2024-04-10)
  Check that the target partition actually belongs to the parent table.
  Reported-by: Alexander Lakhin
  Discussion: https://postgr.es/m/cd842601-cf1a-9806-f7b7-d2509b93ba61%40gmail.com
  Author: Dmitry Koval
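  For illustration, a hedged sketch of the new check, using the SPLIT PARTITION syntax as it stood in the development branch at the time; all names are hypothetical:
      CREATE TABLE t (a int) PARTITION BY RANGE (a);
      CREATE TABLE tp1 PARTITION OF t FOR VALUES FROM (0) TO (100);
      CREATE TABLE other (a int);
      -- "other" is not a partition of t, so this is now rejected up front:
      ALTER TABLE t SPLIT PARTITION other INTO
        (PARTITION tp2 FOR VALUES FROM (0) TO (50),
         PARTITION tp3 FOR VALUES FROM (50) TO (100));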
* Fix incorrect format placeholders (Peter Eisentraut, 2024-04-09)
* Get rid of anonymous struct (John Naylor, 2024-04-09)
  This is a C11 feature, and we require C99. While at it, go the further step and get rid of the surrounding union (with uintptr_t) entirely, as there is currently no use case for this file to access the header of BlocktableEntry as a uintptr_t, and there are no additional alignment requirements. The least invasive way seems to be to transfer the old union name to this struct.
  Reported by Pavel Borisov and Andres Freund, per buildfarm member mylodon
  Reviewed by Pavel Borisov
  Discussion: https://postgr.es/m/CALT9ZEH11NYV8AOzKb1bWhCf6J0H=H31f0MgT9xX+HdqvcA1rw@mail.gmail.com
* Teach radix tree to embed values at runtime (John Naylor, 2024-04-08)
  Previously, the decision to store values in leaves or within the child pointer was made at compile time, with variable length values using leaves by necessity. This commit allows introspecting the length of variable length values at runtime for that decision. This requires the ability to tell whether the last-level child pointer is actually a value, so we use a pointer tag in the lowest level bit.

  Use this in TID store. This entails adding a byte to the header to reserve space for the tag. Commit f35bd9bf3 stores up to three offsets within the header with no bitmap, and now the header can be embedded as above. This reduces worst-case memory usage when TIDs are sparse.
  Reviewed (in an earlier version) by Masahiko Sawada
  Discussion: https://postgr.es/m/CANWCAZYw+_KAaUNruhJfE=h6WgtBKeDG32St8vBJBEY82bGVRQ@mail.gmail.com
  Discussion: https://postgr.es/m/CAD21AoBci3Hujzijubomo1tdwH3XtQ9F89cTNQ4bsQijOmqnEw@mail.gmail.com
* Teach TID store to skip bitmap for small numbers of offsets (John Naylor, 2024-04-08)
  The header portion of BlocktableEntry has enough padding space for an array of 3 offsets (1 on 32-bit platforms). Use this space instead of having a sparse bitmap array. This will take up a constant amount of space no matter what the offsets are.
  Reviewed (in an earlier version) by Masahiko Sawada
  Discussion: https://postgr.es/m/CANWCAZYw+_KAaUNruhJfE=h6WgtBKeDG32St8vBJBEY82bGVRQ@mail.gmail.com
  Discussion: https://postgr.es/m/CAD21AoBci3Hujzijubomo1tdwH3XtQ9F89cTNQ4bsQijOmqnEw@mail.gmail.com
* Provide a way block-level table AMs could re-use acquire_sample_rows() (Alexander Korotkov, 2024-04-08)
  While keeping the API the same, this commit provides a way for block-level table AMs to re-use the existing acquire_sample_rows() by providing custom callbacks for getting the next block and the next tuple.
  Reported-by: Andres Freund
  Discussion: https://postgr.es/m/20240407214001.jgpg5q3yv33ve6y3%40awork3.anarazel.de
  Reviewed-by: Pavel Borisov
* Fix some grammar errors in error messages and code comments (Alexander Korotkov, 2024-04-08)
  Discussion: https://postgr.es/m/CAHewXNkGMPU50QG7V6Q60JGFORfo8LfYO1_GCkCa0VWbmB-fEw%40mail.gmail.com
  Author: Tender Wang
* Fill CommonRdOptions with default values in extract_autovac_opts() (Alexander Korotkov, 2024-04-08)
  Reported-by: Thomas Munro
  Reported-by: Pavel Borisov
  Discussion: https://postgr.es/m/CA%2BhUKGLZzLR50RBvuqOO3MZ%3DF54ETz-rTp1PDX9uDGP_GqyYqA%40mail.gmail.com
* Adjust wording of trace_connection_negotiation GUC's description (Heikki Linnakangas, 2024-04-08)
  We're not very consistent about this across all the GUCs, but the "Logs ..." phrasing is more common than "Log ...", and is used by the neighboring "log_connections" and "log_disconnections" GUCs, so switch to that.
  Author: Kyotaro Horiguchi
  Discussion: https://www.postgresql.org/message-id/20240408.154010.1170771365226258348.horikyota.ntt@gmail.com
* Custom reloptions for table AM (Alexander Korotkov, 2024-04-08)
  Let table AM define custom reloptions for its tables. This allows specifying AM-specific parameters via the WITH clause when creating a table. The reloptions that could be used outside of a table AM are now extracted into the CommonRdOptions data structure. Depending on the table AM's decision, these options can be specified directly by a user or calculated in some way.

  The new test module test_tam_options evaluates the ability to set up custom reloptions and calculate fields of CommonRdOptions on their base.

  The code may use some parts from prior work by Hao Wu.
  Discussion: https://postgr.es/m/CAPpHfdurb9ycV8udYqM%3Do0sPS66PJ4RCBM1g-bBpvzUfogY0EA%40mail.gmail.com
  Discussion: https://postgr.es/m/AMUA1wBBBxfc3tKRLLdU64rb.1.1683276279979.Hmail.wuhao%40hashdata.cn
  Reviewed-by: Pavel Borisov, Matthias van de Meent, Jess Davis
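  For illustration, a hedged sketch of what the WITH clause enables for a table AM; the access method, handler, and option names are hypothetical, and this change is reverted by the "Revert: Custom reloptions for table AM" entry above:
      CREATE ACCESS METHOD my_tam TYPE TABLE HANDLER my_tam_handler;       -- hypothetical AM
      CREATE TABLE t (a int) USING my_tam WITH (my_custom_option = 42);    -- AM-specific reloption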
* Use bump context for TID bitmaps stored by vacuum (John Naylor, 2024-04-08)
  Vacuum does not pfree individual entries, and only frees the entire storage space when finished with it. This allows using a bump context, eliminating the chunk header in each leaf allocation. Most leaf allocations will be 16 to 32 bytes, so that's a significant savings.

  TidStoreCreateLocal gets a boolean parameter to indicate that the created store is insert-only. This requires a separate tree context for iteration, since we free the iteration state after iteration completes.
  Discussion: https://postgr.es/m/CANWCAZac%3DpBePg3rhX8nXkUuaLoiAJJLtmnCfZsPEAS4EtJ%3Dkg%40mail.gmail.com
  Discussion: https://postgr.es/m/CANWCAZZQFfxvzO8yZHFWtQV+Z2gAMv1ku16Vu7KWmb5kZQyd1w@mail.gmail.com
* JSON_TABLE: Add support for NESTED paths and columns (Amit Langote, 2024-04-08)
  A NESTED path allows extracting data from nested levels of JSON objects given by the parent path expression, which are projected as columns specified using a nested COLUMNS clause, just like the parent COLUMNS clause. Rows produced from NESTED columns are "joined" to the row produced from the parent columns. If a particular NESTED path evaluates to 0 rows, then the nested COLUMNS will emit NULLs, making it an OUTER join.

  NESTED columns themselves may include NESTED paths to allow extracting data from arbitrary nesting levels, which are likewise joined against the rows at the parent level.

  Multiple NESTED paths at a given level are called "sibling" paths, and their rows are combined by UNIONing them, that is, after being joined against the parent row as described above.
  Author: Nikita Glukhov <n.gluhov@postgrespro.ru>
  Author: Teodor Sigaev <teodor@sigaev.ru>
  Author: Oleg Bartunov <obartunov@gmail.com>
  Author: Alexander Korotkov <aekorotkov@gmail.com>
  Author: Andrew Dunstan <andrew@dunslane.net>
  Author: Amit Langote <amitlangote09@gmail.com>
  Author: Jian He <jian.universality@gmail.com>
  Reviewers have included (in no particular order): Andres Freund, Alexander Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zihong Yu, Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby, Álvaro Herrera, Jian He
  Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
  Discussion: https://postgr.es/m/20220616233130.rparivafipt6doj3@alap3.anarazel.de
  Discussion: https://postgr.es/m/abd9b83b-aa66-f230-3d6d-734817f0995d%40postgresql.org
  Discussion: https://postgr.es/m/CA+HiwqE4XTdfb1nW=Ojoy_tQSRhYt-q_kb6i5d4xcKyrLC1Nbg@mail.gmail.com
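  For illustration, a minimal sketch of a NESTED path with hypothetical data:
      SELECT *
      FROM JSON_TABLE(
        '[{"a": 1, "b": [10, 20]}, {"a": 2, "b": []}]'::jsonb,
        '$[*]'
        COLUMNS (
          a int PATH '$.a',
          NESTED PATH '$.b[*]' COLUMNS (b int PATH '$')
        )
      ) AS jt;
      -- a=1 joins to b=10 and b=20; a=2 has no nested rows, so b is NULL (OUTER join behavior)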
* Fix JsonExpr deparsing to emit QUOTES and WRAPPER correctly (Amit Langote, 2024-04-08)
  Currently, get_json_expr_options() does not emit the default values for QUOTES (KEEP QUOTES) and WRAPPER (WITHOUT WRAPPER). That causes the deparsed JSON_TABLE() columns, such as those contained in a view's query, to behave differently when executed than the original definition. That's because the rules encoded in transformJsonTableColumns() will choose either JSON_VALUE() or JSON_QUERY() as the implementation to execute a given column's path expression depending on the QUOTES and WRAPPER specifications, and they have slightly different semantics.
  Reported-by: Jian He <jian.universality@gmail.com>
  Discussion: https://postgr.es/m/CACJufxEqhqsfrg_p7EMyo5zak3d767iFDL8vz_4%3DZBHpOtrghw%40mail.gmail.com
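  For illustration, a hedged sketch of the kind of definition affected (hypothetical names): a view whose JSON_TABLE() column relies on QUOTES behavior; the deparsed definition returned by pg_get_viewdef() now spells out the QUOTES/WRAPPER clauses so the view executes as originally defined:
      CREATE VIEW v AS
        SELECT jt.*
        FROM JSON_TABLE('{"a": "x"}'::jsonb, '$'
               COLUMNS (a text PATH '$.a' OMIT QUOTES)) AS jt;
      SELECT pg_get_viewdef('v'::regclass, true);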
* Fix restriction on specifying KEEP QUOTES in JSON_QUERY() (Amit Langote, 2024-04-08)
  Currently, transformJsonFuncExpr() enforces some restrictions on the combinations of QUOTES and WRAPPER clauses that can be specified in JSON_QUERY(). The intent was to only prevent the useless combination WITH WRAPPER OMIT QUOTES, but the coding prevented KEEP QUOTES too, which is not helpful. Fix that.
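  For illustration, a minimal sketch of the combinations involved (the literal is arbitrary):
      SELECT JSON_QUERY('"x"'::jsonb, '$' WITH WRAPPER KEEP QUOTES);  -- now accepted
      SELECT JSON_QUERY('"x"'::jsonb, '$' WITH WRAPPER OMIT QUOTES);  -- still rejected as useless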