path: root/src/backend/access/nbtree/nbtinsert.c

* Silence another _bt_check_unique compiler warning.  (Peter Geoghegan, 2021-04-08)

    Per complaint from Tom Lane.

    Discussion: https://postgr.es/m/1922884.1617909599@sss.pgh.pa.us

* Don't consider newly inserted tuples in nbtree VACUUM.  (Peter Geoghegan, 2021-03-10)

    Remove the entire idea of "stale stats" within nbtree VACUUM (stop caring about stats involving the number of inserted tuples). Also remove the vacuum_cleanup_index_scale_factor GUC/param on the master branch (though just disable them on postgres 13).

    The vacuum_cleanup_index_scale_factor/stats interface made the nbtree AM partially responsible for deciding when pg_class.reltuples stats needed to be updated. This seems contrary to the spirit of the index AM API, though -- it is not actually necessary for an index AM's bulk delete and cleanup callbacks to provide accurate stats when it happens to be inconvenient. The core code owns that. (Index AMs have the authority to perform or not perform certain kinds of deferred cleanup based on their own considerations, such as page deletion and recycling, but that has little to do with pg_class.reltuples/num_index_tuples.)

    This issue was fairly harmless until the introduction of the autovacuum_vacuum_insert_threshold feature by commit b07642db, which had an undesirable interaction with the vacuum_cleanup_index_scale_factor mechanism: it made insert-driven autovacuums perform full index scans, even though there is no real benefit to doing so. This has been tied to a regression with an append-only insert benchmark [1].

    Also have remaining cases that perform a full scan of an index during a cleanup-only nbtree VACUUM indicate that the final tuple count is only an estimate. This prevents vacuumlazy.c from setting the index's pg_class.reltuples in those cases (it will now only update pg_class when vacuumlazy.c had TIDs for nbtree to bulk delete). This arguably fixes an oversight in deduplication-related bugfix commit 48e12913.

    [1] https://smalldatum.blogspot.com/2021/01/insert-benchmark-postgres-is-still.html

    Author: Peter Geoghegan <pg@bowt.ie>
    Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>
    Discussion: https://postgr.es/m/CAD21AoA4WHthN5uU6+WScZ7+J_RcEjmcuH94qcoUPuB42ShXzg@mail.gmail.com
    Backpatch: 13-, where autovacuum_vacuum_insert_threshold was added.

* Use full 64-bit XIDs in deleted nbtree pages.  (Peter Geoghegan, 2021-02-24)

    Otherwise we risk "leaking" deleted pages by making them non-recyclable indefinitely. Commit 6655a729 did the same thing for deleted pages in GiST indexes. That work was used as a starting point here.

    Stop storing an XID indicating the oldest bpto.xact across all deleted though unrecycled pages in nbtree metapages. There is no longer any reason to care about that condition/the oldest XID. It only ever made sense when wraparound was something _bt_vacuum_needs_cleanup() had to consider.

    The btm_oldest_btpo_xact metapage field has been repurposed and renamed. It is now btm_last_cleanup_num_delpages, which is used to remember how many non-recycled deleted pages remain from the last VACUUM (in practice its value is usually the precise number of pages that were _newly deleted_ during the specific VACUUM operation that last set the field).

    The general idea behind storing btm_last_cleanup_num_delpages is to use it to give _some_ consideration to non-recycled deleted pages inside _bt_vacuum_needs_cleanup() -- though never too much. We only really need to avoid leaving a truly excessive number of deleted pages in an unrecycled state forever. We only do this to cover certain narrow cases where no other factor makes VACUUM do a full scan, and yet the index continues to grow (and so actually misses out on recycling existing deleted pages).

    These metapage changes result in a clear user-visible benefit: We no longer trigger full index scans during VACUUM operations solely due to the presence of only 1 or 2 known deleted (though unrecycled) blocks from a very large index. All that matters now is keeping the costs and benefits in balance over time.

    Fix an issue that has been around since commit 857f9c36, which added the "skip full scan of index" mechanism (i.e. the _bt_vacuum_needs_cleanup() logic). The accuracy of btm_last_cleanup_num_heap_tuples accidentally hinged upon _when_ the source value gets stored. We now always store btm_last_cleanup_num_heap_tuples in btvacuumcleanup(). This fixes the issue because IndexVacuumInfo.num_heap_tuples (the source field) is expected to accurately indicate the state of the table _after_ the VACUUM completes inside btvacuumcleanup().

    A backpatchable fix cannot easily be extracted from this commit. A targeted fix for the issue will follow in a later commit, though that won't happen today.

    I (pgeoghegan) have chosen to remove any mention of deleted pages in the documentation of the vacuum_cleanup_index_scale_factor GUC/param, since the presence of deleted (though unrecycled) pages is no longer of much concern to users. The vacuum_cleanup_index_scale_factor description in the docs now seems rather unclear in any case, and it should probably be rewritten in the near future. Perhaps some passing mention of page deletion will be added back at the same time.

    Bump XLOG_PAGE_MAGIC due to nbtree WAL records using full XIDs now.

    Author: Peter Geoghegan <pg@bowt.ie>
    Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>
    Discussion: https://postgr.es/m/CAH2-WznpdHvujGUwYZ8sihX=d5u-tRYhi-F4wnV2uN2zHpMUXw@mail.gmail.com

* Enhance nbtree index tuple deletion.  (Peter Geoghegan, 2021-01-13)

    Teach nbtree and heapam to cooperate in order to eagerly remove duplicate tuples representing dead MVCC versions. This is "bottom-up deletion". Each bottom-up deletion pass is triggered lazily in response to a flood of versions on an nbtree leaf page. This usually involves a "logically unchanged index" hint (these are produced by the executor mechanism added by commit 9dc718bd).

    The immediate goal of bottom-up index deletion is to avoid "unnecessary" page splits caused entirely by version duplicates. It naturally has an even more useful effect, though: it acts as a backstop against accumulating an excessive number of index tuple versions for any given _logical row_. Bottom-up index deletion complements what we might now call "top-down index deletion": index vacuuming performed by VACUUM. Bottom-up index deletion responds to the immediate local needs of queries, while leaving it up to autovacuum to perform infrequent clean sweeps of the index. The overall effect is to avoid certain pathological performance issues related to "version churn" from UPDATEs.

    The previous tableam interface used by index AMs to perform tuple deletion (the table_compute_xid_horizon_for_tuples() function) has been replaced with a new interface that supports certain new requirements. Many (perhaps all) of the capabilities added to nbtree by this commit could also be extended to other index AMs. That is left as work for a later commit.

    Extend deletion of LP_DEAD-marked index tuples in nbtree by adding logic to consider extra index tuples (that are not LP_DEAD-marked) for deletion in passing. This increases the number of index tuples deleted significantly in many cases. The LP_DEAD deletion process (which is now called "simple deletion" to clearly distinguish it from bottom-up deletion) won't usually need to visit any extra table blocks to check these extra tuples. We have to visit the same table blocks anyway to generate a latestRemovedXid value (at least in the common case where the index deletion operation's WAL record needs such a value).

    Testing has shown that the "extra tuples" simple deletion enhancement increases the number of index tuples deleted with almost any workload that has LP_DEAD bits set in leaf pages. That is, it almost never fails to delete at least a few extra index tuples. It helps most of all in cases that happen to naturally have a lot of delete-safe tuples. It's not uncommon for an individual deletion operation to end up deleting an order of magnitude more index tuples compared to the old naive approach (e.g., custom instrumentation of the patch shows that this happens fairly often when the regression tests are run).

    Add a further enhancement that augments simple deletion and bottom-up deletion in indexes that make use of deduplication: Teach nbtree's _bt_delitems_delete() function to support granular TID deletion in posting list tuples. It is now possible to delete individual TIDs from posting list tuples provided the TIDs have a tableam block number of a table block that gets visited as part of the deletion process (visiting the table block can be triggered directly or indirectly). Setting the LP_DEAD bit of a posting list tuple is still an all-or-nothing thing, but that matters much less now that deletion only needs to start out with the right _general_ idea about which index tuples are deletable.

    Bump XLOG_PAGE_MAGIC because xl_btree_delete changed.

    No bump in BTREE_VERSION, since there are no changes to the on-disk representation of nbtree indexes. Indexes built on PostgreSQL 12 or PostgreSQL 13 will automatically benefit from bottom-up index deletion (i.e. no reindexing required) following a pg_upgrade. The enhancement to simple deletion is available with all B-Tree indexes following a pg_upgrade, no matter what PostgreSQL version the user upgrades from.

    Author: Peter Geoghegan <pg@bowt.ie>
    Reviewed-By: Heikki Linnakangas <hlinnaka@iki.fi>
    Reviewed-By: Victor Yegorov <vyegorov@gmail.com>
    Discussion: https://postgr.es/m/CAH2-Wzm+maE3apHB8NOtmM=p-DO65j2V5GzAWCOEEuy3JZgb2g@mail.gmail.com

* Update copyright for 2021  (Bruce Momjian, 2021-01-02)

    Backpatch-through: 9.5

* Deprecate nbtree's BTP_HAS_GARBAGE flag.  (Peter Geoghegan, 2020-11-17)

    Streamline handling of the various strategies that we have to avoid a page split in nbtinsert.c. When it looks like a leaf page is about to overflow, we now perform deleting LP_DEAD items and deduplication in one central place. This greatly simplifies _bt_findinsertloc().

    This has an independently useful consequence: nbtree no longer relies on the BTP_HAS_GARBAGE page level flag/hint for anything important. We still set and unset the flag in the same way as before, but it's no longer treated as a gating condition when considering if we should check for already-set LP_DEAD bits. This happens at the point where the page looks like it might have to be split anyway, so simply checking the LP_DEAD bits in passing is practically free. This avoids missing LP_DEAD bits just because the page-level hint is unset, which is probably reasonably common (e.g. it happens when VACUUM unsets the page-level flag without actually removing index tuples whose LP_DEAD-bit was set recently, after the VACUUM operation began but before it reached the leaf page in question).

    Note that this isn't a big behavioral change compared to PostgreSQL 13. We were already checking for set LP_DEAD bits regardless of whether the BTP_HAS_GARBAGE page level flag was set before we considered doing a deduplication pass. This commit only goes slightly further by doing the same check for all indexes, even indexes where deduplication won't be performed.

    We don't completely remove the BTP_HAS_GARBAGE flag. We still rely on it as a gating condition with pg_upgrade'd indexes from before B-tree version 4/PostgreSQL 12. That makes sense because we sometimes have to make a choice among pages full of duplicates when inserting a tuple with pre version 4 indexes. It probably still pays to avoid accessing the line pointer array of a page there, since it won't yet be clear whether we'll insert on to the page in question at all, let alone split it as a result.

    Author: Peter Geoghegan <pg@bowt.ie>
    Reviewed-By: Victor Yegorov <vyegorov@gmail.com>
    Discussion: https://postgr.es/m/CAH2-Wz%3DYpc1PDdk8OVJDChGJBjT06%3DA0Mbv9HyTLCsOknGcUFg%40mail.gmail.com

* nbtree: Rename nbtinsert.c variables for consistency.  (Peter Geoghegan, 2020-11-17)

    Stop naming special area/opaque pointer variables 'lpageop' in contexts where it doesn't make sense. This is a holdover from a time when logic that performs tasks that are now spread across _bt_insertonpg(), _bt_findinsertloc(), and _bt_split() was more centralized. 'lpageop' denotes "left page", which doesn't make sense outside of contexts in which there isn't also a right page.

    Also acquire page flag variables up front within _bt_insertonpg(). This makes it closer to _bt_split() following refactoring commit bc3087b626d. This allows the page split and retail insert paths to both make use of the same variables.

* nbtree: Demote incomplete split "can't happen" error.  (Peter Geoghegan, 2020-11-15)

    Only a basic logic bug in a _bt_insertonpg() caller could lead to a violation of this invariant (index corruption won't do it). A "can't happen" error seems inappropriate (it is arbitrary at best). Demote the error to a simple assertion. This matches similar nearby sanity checks.

* Correct nbtree page split lock coupling comment.  (Peter Geoghegan, 2020-08-09)

    There is no reason to distinguish between readers and writers here.

* Add nbtree Valgrind buffer lock checks.  (Peter Geoghegan, 2020-07-21)

    Holding just a buffer pin (with no buffer lock) on an nbtree buffer/page provides very weak guarantees, especially compared to heapam, where it's often safe to read a page while only holding a buffer pin. This commit has Valgrind enforce the following rule: it is never okay to access an nbtree buffer without holding both a pin and a lock on the buffer.

    A draft version of this patch detected questionable code that was cleaned up by commits fa7ff642 and 7154aa16. The code in question used to access an nbtree buffer page's special/opaque area with no buffer lock (only a buffer pin). This practice (which isn't obviously unsafe) is hereby formally disallowed in nbtree. There doesn't seem to be any reason to allow it, and banning it keeps things simple for Valgrind.

    The new checks are implemented by adding custom nbtree client requests (located in LockBuffer() wrapper functions); these requests are "superimposed" on top of the generic bufmgr.c Valgrind client requests added by commit 1e0dfd16. No custom resource management cleanup code is needed to undo the effects of marking buffers as non-accessible under this scheme.

    Author: Peter Geoghegan
    Reviewed-By: Anastasia Lubennikova, Georgios Kokolatos
    Discussion: https://postgr.es/m/CAH2-WzkLgyN3zBvRZ1pkNJThC=xi_0gpWRUb_45eexLH1+k2_Q@mail.gmail.com

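A minimal standalone sketch of the idea behind such client requests, using the real VALGRIND_MAKE_MEM_NOACCESS/VALGRIND_MAKE_MEM_DEFINED macros from <valgrind/memcheck.h>. The page buffer and the lock_buffer()/unlock_buffer() helpers below are made up for illustration; they are not PostgreSQL's LockBuffer() wrappers:

    #include <stdio.h>
    #include <string.h>
    #include <valgrind/memcheck.h>  /* assumption: Valgrind headers installed */

    #define PAGE_SIZE 8192

    static char page[PAGE_SIZE];

    static void
    lock_buffer(void)
    {
        /* taking the "lock": page contents may legitimately be accessed now */
        VALGRIND_MAKE_MEM_DEFINED(page, PAGE_SIZE);
    }

    static void
    unlock_buffer(void)
    {
        /* dropping the lock while keeping a "pin": accesses are disallowed */
        VALGRIND_MAKE_MEM_NOACCESS(page, PAGE_SIZE);
    }

    int
    main(void)
    {
        memset(page, 0, PAGE_SIZE);
        unlock_buffer();            /* pinned but unlocked: page is off limits */

        lock_buffer();
        page[0] = 1;                /* fine: we "hold the lock" */
        unlock_buffer();

        /* Uncommenting the next line makes memcheck report an invalid read: */
        /* printf("%d\n", page[0]); */
        printf("done\n");
        return 0;
    }

Outside Valgrind the client-request macros are no-ops, which is why no extra cleanup code is needed when such marking is superimposed on normal execution.
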
* Fix misuse of table_index_fetch_tuple_check().  (Peter Geoghegan, 2020-06-25)

    Commit 0d861bbb, which added deduplication to nbtree, had _bt_check_unique() pass a TID to table_index_fetch_tuple_check() that isn't safe to mutate. table_index_fetch_tuple_check()'s tid argument is modified when the TID in question is not the latest visible tuple in a hot chain, though this wasn't documented.

    To fix, go back to using a local copy of the TID in _bt_check_unique(), and update comments above table_index_fetch_tuple_check().

    Backpatch: 13-, where B-Tree deduplication was introduced.

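The underlying defensive pattern, as a toy sketch with made-up types (ItemPointerToy and fetch_check_toy() are illustrative stand-ins, not the real tableam interface): when a callee may overwrite the TID it is handed, the caller passes a disposable local copy.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct ItemPointerToy
    {
        unsigned    block;
        unsigned    offset;
    } ItemPointerToy;

    /* may redirect *tid to another tuple version as a side effect */
    static bool
    fetch_check_toy(ItemPointerToy *tid)
    {
        if (tid->offset == 1)
            tid->offset = 3;        /* pretend offset 3 is the live version */
        return true;
    }

    int
    main(void)
    {
        ItemPointerToy itup_tid = {.block = 7, .offset = 1};    /* must stay intact */

        /* pass a disposable copy so the caller's own TID is never mutated */
        ItemPointerToy htid = itup_tid;

        (void) fetch_check_toy(&htid);

        printf("index tuple TID still (%u,%u); checked heap TID (%u,%u)\n",
               itup_tid.block, itup_tid.offset, htid.block, htid.offset);
        return 0;
    }
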
* Silence _bt_check_unique compiler warning.  (Peter Geoghegan, 2020-06-13)

    Reported-By: Tom Lane
    Discussion: https://postgr.es/m/841649.1592065060@sss.pgh.pa.us

* Adjust "root of to-be-deleted subtree" function.Peter Geoghegan2020-05-11
| | | | | | | | | | | | | | | | | | | | Restructure the function that locates the root of the to-be-deleted subtree during nbtree page deletion. Handle the conditions that make page deletion unsafe in a slightly more uniform way, and acknowledge the fact that the behavior with incomplete splits on internal pages is different (as pointed out in the nbtree README as of commit 35bc0ec7). Also invent new terminology that avoids ambiguity around which pages are about to be deleted. Consistently use the term "to-be-deleted subtree", not the ambiguous term "branch". We were calling the subtree parent page the "top parent page", but that was quite misleading. The top parent page usually refers to a page unlinked from its siblings and marked deleted (during the second stage of page deletion). There was one kind of top parent page that we merely removed a downlink from, and another kind of top parent page that we actually marked deleted. Eliminate the ambiguity by inventing a new term ("subtree parent page") that refers to the former kind of page only.
* Refactor btvacuumpage().  (Peter Geoghegan, 2020-05-02)

    Remove one of the arguments to btvacuumpage(), and give up on the idea that it's a recursive function. We now use the term "backtracking" to refer to the case where an earlier block must be visited to make sure no tuples that need to be removed were missed.

    Advertising btvacuumpage() as a recursive function was unhelpful. In reality the function always simulates recursion with a loop (it doesn't actually call itself). This wasn't just necessary as a precaution (per the comments mentioning tail recursion), though. There is no reliable natural limit on the number of times we can backtrack.

    There are important behavioral differences when "recursing"/backtracking, mostly related to page deletion. We don't perform page deletion when backtracking due to the extra complexity. And when we recurse, we're not performing a physical order scan anymore, so we expect fairly different conditions to hold for the page.

    Structuring the code like this makes it clearer how _bt_pagedel() cooperates with btvacuumpage() and btvacuumscan() (as established in commit b0229f26 and commit 73a076b0).

    Author: Peter Geoghegan
    Reviewed-By: Masahiko Sawada
    Discussion: https://postgr.es/m/CAH2-WzmRGMDWiLMcb+zagG9652PboNN4Gfcq1Gc_wJL6A716MA@mail.gmail.com

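A toy sketch of simulating recursion with a backtracking loop; the block numbers and the scan_block() helper are invented for illustration, so this is only the shape of the control flow, not btvacuumpage() itself:

    #include <stdio.h>

    /* returns the block number to backtrack to, or -1 if none is needed */
    static int
    scan_block(int blkno)
    {
        printf("scanning block %d\n", blkno);
        /* pretend block 5 reveals that block 2 must be revisited */
        return (blkno == 5) ? 2 : -1;
    }

    int
    main(void)
    {
        int nblocks = 8;

        /* the outer loop is the physical-order scan */
        for (int current = 0; current < nblocks; current++)
        {
            int blkno = current;

            /* "recurse" via a loop: keep backtracking until nothing is left */
            for (;;)
            {
                int target = scan_block(blkno);

                if (target < 0)
                    break;
                blkno = target; /* revisit an earlier block */
            }
        }
        return 0;
    }
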
* Fix AddressSanitizer use-after-scope complaint.  (Peter Geoghegan, 2020-04-30)

    XLogRegisterBufData() does not copy data pointed to by caller's pointer argument. Oversight in commit 0d861bbb702.

    Author: Peter Eisentraut
    Reported-By: Peter Eisentraut
    Discussion: https://postgr.es/m/21800dbe-a13e-22f7-d423-b81db9d249f5@2ndquadrant.com

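A self-contained sketch of the bug class being fixed; register_data() and flush() below are made-up stand-ins for the WAL registration and insertion steps. Because the registered pointer is only dereferenced later, the pointed-to variable must outlive the registration:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static const void *registered_ptr;
    static size_t registered_len;

    static void
    register_data(const void *data, size_t len)
    {
        registered_ptr = data;      /* no copy is made! */
        registered_len = len;
    }

    static void
    flush(void)
    {
        unsigned char buf[8];

        memcpy(buf, registered_ptr, registered_len);    /* deferred read */
        printf("flushed %zu bytes, first byte %d\n", registered_len, buf[0]);
    }

    int
    main(void)
    {
        /* BROKEN pattern: 'meta' dies before any later flush() could run */
        {
            uint32_t meta = 42;

            register_data(&meta, sizeof(meta));
        }                           /* use-after-scope if flush() ran after this */

        /* FIXED pattern: keep the registered variable alive until the flush */
        uint32_t meta_ok = 42;

        register_data(&meta_ok, sizeof(meta_ok));
        flush();
        return 0;
    }
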
* Remove obsolete "hole in center of page" comment.Peter Geoghegan2020-04-14
| | | | | | | | | A comment from the Berkeley days incorrectly claimed that the page management code cares about the contents of the hole in the center of the page (at least in the case of the left half of an nbtree page split). Commit 8fa30f906be added an addendum that stated that the original comment was "probably obsolete". It's definitely obsolete, though, so remove the original comment plus the addendum.
* Rearrange _bt_insertonpg() "update metapage" code.  (Peter Geoghegan, 2020-04-14)

    Nest the "update metapage as part of insert into root-like page" branch inside the broader "insert into internal page" branch. This improves readability.

* Add defensive "split_only_page" nbtree assertion.  (Peter Geoghegan, 2020-04-13)

    Clearly it's not okay for nbtree to split a page that is the only page on its level, and then find that it has to split the parent one level up in turn. There is simply no code to handle the split_only_page case in the _bt_insertonpg() "newitem won't fit" branch (only the "newitem fits" branch handles split_only_page). Add a defensive assertion that will fail if a split_only_page call to _bt_insertonpg() somehow ends up splitting the target/parent page.

    I (pgeoghegan) believe that we don't need split_only_page handling for the "newitem won't fit" branch because anybody calling _bt_insertonpg() like this would have to hold a lock on the same one and only child page.

* Make _bt_insertonpg() more like _bt_split().  (Peter Geoghegan, 2020-04-13)

    It seems like a good idea for nbtree's retail insert code to be absolutely consistent with nbtree's page split code for anything that naturally requires equivalent handling. Anything that concerns inserting newitem (which is handled as part of the page split atomic action when a page split is required) should work in exactly the same way.

    With that in mind, make _bt_insertonpg() handle 'cbuf' in a way that matches _bt_split().

* Harmonize nbtree page split point code.  (Peter Geoghegan, 2020-04-13)

    An nbtree split point can be thought of as a point between two adjoining tuples from an imaginary version of the page being split that includes the incoming/new item (in addition to the items that really are on the page). These adjoining tuples are called the lastleft and firstright tuples.

    The variables that represent split points contained a field called firstright, which is an offset number of the first data item from the original page that goes on the new right page. The corresponding tuple from origpage was usually the same thing as the actual firstright tuple, but not always: the firstright tuple is sometimes the new/incoming item instead. This situation seems unnecessarily confusing.

    Make things clearer by renaming the origpage offset returned by _bt_findsplitloc() to "firstrightoff". We now have a firstright tuple and a firstrightoff offset number which are comparable to the newitem/lastleft tuples and the newitemoff/lastleftoff offset numbers respectively. Also make sure that we are consistent about how we describe nbtree page split point state.

    Push the responsibility for dealing with pg_upgrade'd !heapkeyspace indexes down to lower level code, relieving _bt_split() from dealing with it directly. This means that we always have a palloc'd left page high key on the leaf level, no matter what. This enables simplifying some of the code (and code comments) within _bt_split().

    Finally, restructure the page split code to make it clearer why suffix truncation (which only takes place during leaf page splits) is completely different to the first data item truncation that takes place during internal page splits. Tuples are marked as having fewer attributes stored in both cases, and the firstright tuple is truncated in both cases, so it's easy to imagine somebody missing the distinction.

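A toy model of the firstright/firstrightoff distinction described above; integer keys stand in for index tuples, and firstright_key() is invented for illustration (it is not _bt_split()). The firstright tuple is the original item at firstrightoff unless the incoming item is itself the first item bound for the right page:

    #include <stdbool.h>
    #include <stdio.h>

    static int
    firstright_key(const int *origitems, int firstrightoff,
                   int newitem, int newitemoff, bool newitemonleft)
    {
        if (!newitemonleft && newitemoff == firstrightoff)
            return newitem;         /* incoming item becomes first right-page item */
        return origitems[firstrightoff];
    }

    int
    main(void)
    {
        int origpage[] = {10, 20, 30, 40};

        /* new item 25 lands at offset 2 and goes to the right page */
        printf("firstright = %d\n", firstright_key(origpage, 2, 25, 2, false));
        /* new item 15 lands at offset 1 and stays on the left page */
        printf("firstright = %d\n", firstright_key(origpage, 2, 15, 1, true));
        return 0;
    }
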
* Remove nbtree BTreeTupleSetAltHeapTID() function.  (Peter Geoghegan, 2020-04-07)

    Since heap TID is supposed to be just another key attribute to the implementation, it doesn't make much sense to have separate BTreeTupleSetNAtts() and BTreeTupleSetAltHeapTID() functions. Merge the two functions together. This slightly simplifies _bt_truncate().

* Justify nbtree page split locking in code comment.  (Peter Geoghegan, 2020-03-27)

    Delaying unlocking the right child page until after the point that the left child's parent page has been refound is no longer truly necessary. Commit 40dae7ec made nbtree tolerant of interrupted page splits. VACUUM was taught to avoid deleting a page that happens to be the right half of an incomplete split. As long as page splits don't unlock the left child page until the end of the second/final phase, it should be safe to unlock the right child page earlier (at the end of the first phase).

    It probably isn't actually useful to release the right child's lock earlier like this (it probably won't improve performance). Even still, pointing out that it ought to be safe to do so should make it easier to understand the overall design.

* nbtree: Remove obsolete _bt_pgaddtup() comments.  (Peter Geoghegan, 2020-03-19)

    Remove comments that are a throw back to a time when nbtree cared about write-ordering dependencies. The comments are similar to those removed by commit 9ee7414e, among others.

* nbtree: Use raw PageAddItem() for retail inserts.  (Peter Geoghegan, 2020-03-18)

    Only internal page splits need to call _bt_pgaddtup() instead of PageAddItem(), and only for data items, one of which will end up at the first offset (or first offset after the high key offset) on the new right page. This data item alone will need to be truncated in _bt_pgaddtup().

    Since there is no reason why retail inserts ever need to truncate the incoming item, use a raw PageAddItem() call there instead. Even _bt_split() uses raw PageAddItem() calls for left page and right page high keys. Clearly the _bt_pgaddtup() shim function wasn't really encapsulating anything. _bt_pgaddtup() should now be thought of as a _bt_split() helper function.

    Note that the assertions from commit d1e241c2 verify that retail inserts never insert an item at an internal page's negative infinity offset. This invariant could only ever be violated as a result of a basic logic error in nbtinsert.c.

* Refactor nbtree fastpath optimization.  (Peter Geoghegan, 2020-03-18)

    Commit 2b272734, which added the fastpath rightmost leaf page cache insert optimization, added code to _bt_doinsert() to handle using and invalidating the backend local block cache. It doesn't seem like a good place to handle these low level details, though. _bt_doinsert() is supposed to be a high level function -- it is the main entry point to nbtinsert.c.

    Restructure the code by placing handling of the rightmost block cache at the start of a new _bt_search() shim function, _bt_search_insert(). The new function is called from _bt_doinsert(), which uses it as a _bt_search() variant that conveniently accepts its BTInsertState state as an argument. _bt_doinsert() no longer needs to directly consider the fastpath optimization.

    Discussion: https://postgr.es/m/CAH2-Wzk59cxKJRd=rfbyub6-V4yWRjsOYRkUNHBLT1P1GdtCQQ@mail.gmail.com

* nbtree: Remove useless local variables.  (Peter Geoghegan, 2020-03-17)

    Copying block and offset numbers to local variables in _bt_insertonpg() made the code less readable. Remove the variables. There is already code that conditionally calls BufferGetBlockNumber() in the same block, so consistently do it that way instead.

    Calling BufferGetBlockNumber() is very cheap, but we might as well avoid it when it isn't truly necessary. It isn't truly necessary for _bt_insertonpg() to call BufferGetBlockNumber() in almost all cases.

    Spotted while working on a patch that refactors the fastpath rightmost leaf page cache optimization, which was added by commit 2b272734.

* nbtree: Pass down MAXALIGN()'d itemsz for new item.  (Peter Geoghegan, 2020-03-16)

    Refactor nbtinsert.c so that the final itemsz of each new non-pivot tuple (the MAXALIGN()'d size) is determined once. Most of the functions used by leaf page inserts used the insertstate.itemsz value already. This commit makes everything use insertstate.itemsz as standard practice.

    The goal is to decouple tuple size from "effective" tuple size. Making this distinction isn't truly necessary right now, but that might change in the future. Also explain why we consistently apply MAXALIGN() to get an effective index tuple size. This was rather unclear, in part because it isn't actually strictly necessary right now.

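A small sketch of the MAXALIGN()-style rounding involved, assuming a maximum alignment of 8 bytes as on most 64-bit platforms; TOY_MAXALIGN is a stand-in written for this note, not the actual PostgreSQL macro:

    #include <stdint.h>
    #include <stdio.h>

    #define TOY_MAXIMUM_ALIGNOF 8
    /* round len up to the next multiple of TOY_MAXIMUM_ALIGNOF */
    #define TOY_MAXALIGN(len) \
        (((uintptr_t) (len) + (TOY_MAXIMUM_ALIGNOF - 1)) & \
         ~((uintptr_t) (TOY_MAXIMUM_ALIGNOF - 1)))

    int
    main(void)
    {
        size_t raw_sizes[] = {16, 18, 23, 24};

        for (int i = 0; i < 4; i++)
            printf("raw %zu -> effective %zu\n",
                   raw_sizes[i], (size_t) TOY_MAXALIGN(raw_sizes[i]));
        /* prints 16->16, 18->24, 23->24, 24->24 */
        return 0;
    }
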
* nbtree: Reorder nbtinsert.c prototypes.  (Peter Geoghegan, 2020-03-15)

    Relocate _bt_newroot() prototype, so that the order that prototypes appear in matches the order that the functions are defined in.

* nbtree: Move fastpath NULL descent stack assertion.  (Peter Geoghegan, 2020-03-10)

    Commit 074251db added an assertion that verified the fastpath/rightmost page insert optimization's assumption about free space: There should always be enough free space on the page to insert the new item without splitting the page. Otherwise, we end up using the "concurrent root page split" phony/fake stack path in _bt_insert_parent(). This does not lead to incorrect behavior, but it is likely to be far slower than simply using the regular _bt_search() path. The assertion catches serious performance bugs that would probably take a long time to detect any other way.

    It seems much more natural to make this assertion just before the point that we generate a fake/phony descent stack. Move the assert there. This also makes _bt_insertonpg() a bit more readable.

* nbtree: Demote minus infinity "can't happen" error.  (Peter Geoghegan, 2020-03-10)

    Only a very basic logic bug in a _bt_insertonpg() caller could lead to a violation of this invariant. Besides, any newitemoff used for an internal page is sanitized using other "can't happen" errors in _bt_getstackbuf() or its callers, before _bt_insertonpg() even gets called.

    Also, move the error/assertion from the insert-without-split path of _bt_insertonpg() to the top of the same function. There is no reason why this invariant only applies to insertions that happen to not result in a page split; cover every insertion. The assertion naturally belongs next to the existing generic assertions that document relatively high-level invariants for the item being inserted.

* Remove overzealous _bt_split() assertions.  (Peter Geoghegan, 2020-03-02)

    _bt_split() is passed NULL as its insertion scankey for internal page splits. Two recently added Assert() statements failed to consider this, leading to a crash with pg_upgrade'd BTREE_VERSION < 4 indexes. Remove the assertions.

    The assertions in question were added by commit 0d861bbb, which added nbtree deduplication. It would be possible to fix the assertions directly instead, but they weren't adding much anyway.

* Silence another compiler warning in nbtinsert.c.  (Peter Geoghegan, 2020-02-26)

    Per complaint from Álvaro Herrera.

* Silence compiler warning in nbtinsert.c.  (Peter Geoghegan, 2020-02-26)

    Per buildfarm member longfin.

* Add deduplication to nbtree.  (Peter Geoghegan, 2020-02-26)

    Deduplication reduces the storage overhead of duplicates in indexes that use the standard nbtree index access method. The deduplication process is applied lazily, after the point where opportunistic deletion of LP_DEAD-marked index tuples occurs. Deduplication is only applied at the point where a leaf page split would otherwise be required. New posting list tuples are formed by merging together existing duplicate tuples. The physical representation of the items on an nbtree leaf page is made more space efficient by deduplication, but the logical contents of the page are not changed. Even unique indexes make use of deduplication as a way of controlling bloat from duplicates whose TIDs point to different versions of the same logical table row.

    The lazy approach taken by nbtree has significant advantages over a GIN style eager approach. Most individual inserts of index tuples have exactly the same overhead as before. The extra overhead of deduplication is amortized across insertions, just like the overhead of page splits. The key space of indexes works in the same way as it has since commit dd299df8 (the commit that made heap TID a tiebreaker column).

    Testing has shown that nbtree deduplication can generally make indexes with about 10 or 15 tuples for each distinct key value about 2.5X - 4X smaller, even with single column integer indexes (e.g., an index on a referencing column that accompanies a foreign key). The final size of single column nbtree indexes comes close to the final size of a similar contrib/btree_gin index, at least in cases where GIN's posting list compression isn't very effective. This can significantly improve transaction throughput, and significantly reduce the cost of vacuuming indexes.

    A new index storage parameter (deduplicate_items) controls the use of deduplication. The default setting is 'on', so all new B-Tree indexes automatically use deduplication where possible. This decision will be reviewed at the end of the Postgres 13 beta period.

    There is a regression of approximately 2% of transaction throughput with synthetic workloads that consist of append-only inserts into a table with several non-unique indexes, where all indexes have few or no repeated values. The underlying issue is that cycles are wasted on unsuccessful attempts at deduplicating items in non-unique indexes. There doesn't seem to be a way around it short of disabling deduplication entirely. Note that deduplication of items in unique indexes is fairly well targeted in general, which avoids the problem there (we can use a special heuristic to trigger deduplication passes in unique indexes, since we're specifically targeting "version bloat").

    Bump XLOG_PAGE_MAGIC because xl_btree_vacuum changed.

    No bump in BTREE_VERSION, since the representation of posting list tuples works in a way that's backwards compatible with version 4 indexes (i.e. indexes built on PostgreSQL 12). However, users must still REINDEX a pg_upgrade'd index to use deduplication, regardless of the Postgres version they've upgraded from. This is the only way to set the new nbtree metapage flag indicating that deduplication is generally safe.

    Author: Anastasia Lubennikova, Peter Geoghegan
    Reviewed-By: Peter Geoghegan, Heikki Linnakangas
    Discussion: https://postgr.es/m/55E4051B.7020209@postgrespro.ru
                https://postgr.es/m/4ab6e2db-bcee-f4cf-0916-3a06e6ccbb55@postgrespro.ru

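A toy sketch of the basic posting-list idea: adjacent duplicates are merged into one key plus an array of TIDs. Integer keys and integer TIDs stand in for real index keys and ItemPointers here; this is only the concept, not the nbtree on-disk format or its deduplication code.

    #include <stdio.h>

    #define MAXTIDS 16

    typedef struct PostingToy
    {
        int     key;
        int     ntids;
        int     tids[MAXTIDS];
    } PostingToy;

    int
    main(void)
    {
        /* duplicates as they might appear on a leaf page: (key, tid) pairs */
        int keys[] = {5, 5, 5, 8, 8};
        int tids[] = {101, 102, 107, 201, 205};
        int nitems = 5;

        PostingToy merged[MAXTIDS];
        int nmerged = 0;

        for (int i = 0; i < nitems; i++)
        {
            if (nmerged > 0 && merged[nmerged - 1].key == keys[i] &&
                merged[nmerged - 1].ntids < MAXTIDS)
            {
                /* absorb the duplicate into the previous "posting list" */
                PostingToy *p = &merged[nmerged - 1];

                p->tids[p->ntids++] = tids[i];
            }
            else
            {
                /* start a new entry for a new key value */
                merged[nmerged].key = keys[i];
                merged[nmerged].ntids = 1;
                merged[nmerged].tids[0] = tids[i];
                nmerged++;
            }
        }

        for (int i = 0; i < nmerged; i++)
        {
            printf("key %d:", merged[i].key);
            for (int j = 0; j < merged[i].ntids; j++)
                printf(" %d", merged[i].tids[j]);
            printf("\n");
        }
        return 0;
    }
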
* Remove dependency on HeapTuple from predicate locking functions.  (Thomas Munro, 2020-01-28)

    The following changes make the predicate locking functions more generic and suitable for use by future access methods:

    - PredicateLockTuple() is renamed to PredicateLockTID(). It takes ItemPointer and inserting transaction ID instead of HeapTuple.
    - CheckForSerializableConflictIn() takes blocknum instead of buffer.
    - CheckForSerializableConflictOut() no longer takes HeapTuple or buffer.

    Author: Ashwin Agrawal
    Reviewed-by: Andres Freund, Kuntal Ghosh, Thomas Munro
    Discussion: https://postgr.es/m/CALfoeiv0k3hkEb3Oqk%3DziWqtyk2Jys1UOK5hwRBNeANT_yX%2Bng%40mail.gmail.com

* Remove redundant incomplete split assertion.  (Peter Geoghegan, 2020-01-05)

    The fastpath insert optimization's incomplete split flag Assert() is redundant. We'll reach the more general Assert() within _bt_findinsertloc() in all cases. (Besides, Assert()'ing that the rightmost page doesn't have the flag set never made much sense.)

* Update copyrights for 2020  (Bruce Momjian, 2020-01-01)

    Backpatch-through: update all files in master, backpatch legal files through 9.4

* Rename nbtree tuple macros.  (Peter Geoghegan, 2019-12-16)

    Rename two function-style macros, removing the word "inner". This makes things more consistent.

* nbtree: Tweak _bt_pgaddtup() comments.  (Peter Geoghegan, 2019-11-18)

    Make it clear that _bt_pgaddtup() truncates the first data item on an internal page because its key is supposed to be treated as minus infinity within _bt_compare().

* Explain subtlety in nbtree locking protocol.  (Peter Geoghegan, 2019-08-23)

    The Postgres approach to coupling locks during an ascent of the tree is slightly different to the approach taken by Lehman and Yao. Add a new paragraph to the "Differences to the Lehman & Yao algorithm" section of the nbtree README that explains the similarities and differences.

* Remove block number field from nbtree stack.  (Peter Geoghegan, 2019-08-14)

    The initial value of the nbtree stack downlink block number field recorded during an initial descent of the tree wasn't actually used. Both _bt_getstackbuf() callers overwrote the value with their own value.

    Remove the block number field from the stack struct, and add a child block number argument to _bt_getstackbuf() in its place. This makes the overall design of _bt_getstackbuf() clearer.

    Author: Peter Geoghegan
    Reviewed-By: Anastasia Lubennikova
    Discussion: https://postgr.es/m/CAH2-Wzmx+UbXt2YNOUCZ-a04VdXU=S=OHuAuD7Z8uQq-PXTYUg@mail.gmail.com

* Add error codes to some corruption log messages  (Peter Eisentraut, 2019-08-01)

    In some cases we have elog(ERROR) while corruption is certain and we can give a clear error code ERRCODE_DATA_CORRUPTED or ERRCODE_INDEX_CORRUPTED.

    Author: Andrey Borodin <x4mmm@yandex-team.ru>
    Discussion: https://www.postgresql.org/message-id/flat/25F6C686-6442-4A6B-BAF8-A6F7B84B16DE@yandex-team.ru

* Fix typos.  (Amit Kapila, 2019-05-26)

    Reported-by: Alexander Lakhin
    Author: Alexander Lakhin
    Reviewed-by: Amit Kapila and Tom Lane
    Discussion: https://postgr.es/m/7208de98-add8-8537-91c0-f8b089e2928c@gmail.com

* Phase 2 pgindent run for v12.  (Tom Lane, 2019-05-22)

    Switch to 2.1 version of pg_bsd_indent. This formats multiline function declarations "correctly", that is with additional lines of parameter declarations indented to match where the first line's left parenthesis is.

    Discussion: https://postgr.es/m/CAEepm=0P3FeTXRcU5B2W3jv3PgRVZ-kGUXLGfd42FFhUROO3ug@mail.gmail.com

* Initial pgindent run for v12.  (Tom Lane, 2019-05-22)

    This is still using the 2.0 version of pg_bsd_indent. I thought it would be good to commit this separately, so as to document the differences between 2.0 and 2.1 behavior.

    Discussion: https://postgr.es/m/16296.1558103386@sss.pgh.pa.us

* Remove obsolete nbtree insertion comment.  (Peter Geoghegan, 2019-05-15)

    Remove a Berkeley-era comment above _bt_insertonpg() that admonishes the reader to grok Lehman and Yao's paper before making any changes. This made a certain amount of sense back when _bt_insertonpg() was responsible for most of the things that are now spread across _bt_insertonpg(), _bt_findinsertloc(), _bt_insert_parent(), and _bt_split(), but it doesn't work like that anymore.

    I believe that this comment alludes to the need to "couple" or "crab" buffer locks as we ascend the tree as page splits cascade upwards. The nbtree README already explains this in detail, which seems sufficient. Besides, the changes to page splits made by commit 40dae7ec537 altered the exact details of how buffer locks are retained during splits; Lehman and Yao's original algorithm seems to release the lock on the left child page/buffer slightly earlier than _bt_insertonpg()/_bt_insert_parent() can.

* Standardize ItemIdData terminology.  (Peter Geoghegan, 2019-05-13)

    The term "item pointer" should not be used to refer to ItemIdData variables, since that is needlessly ambiguous. Only ItemPointerData/ItemPointer variables should be called item pointers.

    To fix, establish the convention that ItemIdData variables should always be referred to either as "item identifiers" or "line pointers". The term "item identifier" already predominates in docs and translatable messages, and so should be the preferred alternative there.

    Discussion: https://postgr.es/m/CAH2-Wz=c=MZQjUzde3o9+2PLAPuHTpVZPPdYxN=E4ndQ2--8ew@mail.gmail.com

* Don't leave behind junk nbtree pages during split.  (Peter Geoghegan, 2019-05-13)

    Commit 8fa30f906be reduced the elevel of a number of "can't happen" _bt_split() errors from PANIC to ERROR. At the same time, the new right page buffer for the split could continue to be acquired well before the critical section. This was possible because it was relatively straightforward to make sure that _bt_split() could not throw an error, with a few specific exceptions. The exceptional cases were safe because they involved specific, well understood errors, making it possible to consistently zero the right page before actually raising an error using elog(). There was no danger of leaving around a junk page, provided _bt_split() stuck to this coding rule.

    Commit 8224de4f, which introduced INCLUDE indexes, added code to make _bt_split() truncate away non-key attributes. This happened at a point that broke the rule around zeroing the right page in _bt_split(). If truncation failed (perhaps due to palloc() failure), that would result in an errant right page buffer with junk contents. This could confuse VACUUM when it attempted to delete the page, and should be avoided on general principle.

    To fix, reorganize _bt_split() so that truncation occurs before the new right page buffer is even acquired. A junk page/buffer will not be left behind if _bt_nonkey_truncate()/_bt_truncate() raise an error.

    Discussion: https://postgr.es/m/CAH2-WzkcWT_-NH7EeL=Az4efg0KCV+wArygW8zKB=+HoP=VWMw@mail.gmail.com
    Backpatch: 11-, where INCLUDE indexes were introduced.

* Correct more obsolete nbtree page split comments.  (Peter Geoghegan, 2019-05-03)

    Commit 3f342839 corrected obsolete comments about buffer locks at the main _bt_insert_parent() call site, but missed similar obsolete comments above _bt_insert_parent() itself. Both sets of comments were rendered obsolete by commit 40dae7ec537, which made the nbtree page split algorithm more robust. Fix the comments that were missed the first time around now.

    In passing, refine a related _bt_insert_parent() comment about re-finding the parent page to insert a new downlink.

* Remove obsolete _bt_insert_parent() comment.  (Peter Geoghegan, 2019-04-29)

    Remove a comment that refers to a coding practice that was fully removed by commit a8b8f4db, which introduced MarkBufferDirty(). It looks like the comment was even obsolete before then, since it concerns write-ordering dependencies with synchronous buffer writes.
