path: root/src/backend/access/nbtree/nbtinsert.c
* nbtree: Make BTMaxItemSize into object-like macro.  (Peter Geoghegan, 2025-03-11)

  Make nbtree's "1/3 of a page limit" BTMaxItemSize function-like macro (which
  accepts a "page" argument) into an object-like macro that can be used from
  code that doesn't have convenient access to an nbtree page.

  Preparation for an upcoming patch that adds skip scan to nbtree.  Parallel
  index scans that use skip scan will serialize datums (not just SAOP array
  subscripts) when scheduling primitive scans.  BTMaxItemSize will be used by
  btestimateparallelscan to determine how much DSM to request.

  Author: Peter Geoghegan <pg@bowt.ie>
  Discussion: https://postgr.es/m/CAH2-Wz=H_RG5weNGeUG_TkK87tRBnH9mGCQj6WpM4V4FNWKv2g@mail.gmail.com
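  A sketch of the shape of this change, assuming the new definition derives
  the limit from the compile-time BLCKSZ instead of the page header
  (illustrative, not the committed text):

      /* Before: function-like, needs a page */
      #define BTMaxItemSize(page) \
          (MAXALIGN_DOWN((PageGetPageSize(page) - \
                          MAXALIGN(SizeOfPageHeaderData + 3 * sizeof(ItemIdData)) - \
                          MAXALIGN(sizeof(BTPageOpaqueData))) / 3) - \
           MAXALIGN(sizeof(ItemPointerData)))

      /* After: object-like, usable without a page (e.g. when sizing DSM) */
      #define BTMaxItemSize \
          (MAXALIGN_DOWN((BLCKSZ - \
                          MAXALIGN(SizeOfPageHeaderData + 3 * sizeof(ItemIdData)) - \
                          MAXALIGN(sizeof(BTPageOpaqueData))) / 3) - \
           MAXALIGN(sizeof(ItemPointerData)))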
* Remove unnecessary (char *) casts [xlog]  (Peter Eisentraut, 2025-02-13)

  Remove (char *) casts no longer needed after XLogRegisterData() and
  XLogRegisterBufData() argument type change.

  Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
  Discussion: https://www.postgresql.org/message-id/flat/fd1fcedb-3492-4fc8-9e3e-74b97f2db6c7%40eisentraut.org
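  The mechanical shape of the change, at a representative nbtinsert.c call
  site (sketch; exact sites vary):

      /* Before: the old char * parameter forced casts at every call site */
      XLogRegisterData((char *) &xlrec, SizeOfBtreeInsert);

      /* After: the parameter is now const void *, so the cast can go */
      XLogRegisterData(&xlrec, SizeOfBtreeInsert);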
* Update copyright for 2025  (Bruce Momjian, 2025-01-01)

  Backpatch-through: 13
* Fix obsolete nbtree split buffer comment.  (Peter Geoghegan, 2024-10-27)

  Oversight in commit d088ba5a.
* Remove unused #include's from backend .c files  (Peter Eisentraut, 2024-03-04)

  as determined by include-what-you-use (IWYU)

  While IWYU also suggests to *add* a bunch of #include's (which is its main
  purpose), this patch does not do that.  In some cases, a more specific
  #include replaces another less specific one.

  Some manual adjustments of the automatic result:

  - IWYU currently doesn't know about includes that provide global variable
    declarations (like -Wmissing-variable-declarations), so those includes
    are being kept manually.

  - All includes for port(ability) headers are being kept for now, to play
    it safe.

  - No changes of catalog/pg_foo.h to catalog/pg_foo_d.h, to keep the patch
    from exploding in size.

  Note that this patch touches just *.c files, so nothing declared in header
  files changes in hidden ways.

  As a small example, in src/backend/access/transam/rmgr.c, some IWYU pragma
  annotations are added to handle a special case there.

  Discussion: https://www.postgresql.org/message-id/flat/af837490-6b2f-46df-ba05-37ea6a6653fc%40eisentraut.org
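  For reference, an IWYU pragma annotation of the kind mentioned looks like
  this (hypothetical include; "keep" tells IWYU not to suggest removal):

      #include "utils/guc.h"    /* IWYU pragma: keep */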
* Use new overflow-safe integer comparison functions.  (Nathan Bossart, 2024-02-16)

  Commit 6b80394781 introduced integer comparison functions designed to be as
  efficient as possible while avoiding overflow.  This commit makes use of
  these functions in many of the in-tree qsort() comparators to help ensure
  transitivity.  Many of these comparator functions should also see a small
  performance boost.

  Author: Mats Kindahl
  Reviewed-by: Andres Freund, Fabrízio de Royes Mello
  Discussion: https://postgr.es/m/CA%2B14426g2Wa9QuUpmakwPxXFWG_1FaY0AsApkvcTBy-YfS6uaw%40mail.gmail.com
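  A sketch of the comparator pattern involved, using pg_cmp_s32() from
  common/int.h (hypothetical comparator name):

      #include "common/int.h"

      static int
      compare_int32(const void *a, const void *b)
      {
          /*
           * Not "return x - y": the subtraction can overflow, yielding a
           * comparator that isn't transitive.  pg_cmp_s32() compares
           * without overflow risk.
           */
          return pg_cmp_s32(*(const int32 *) a, *(const int32 *) b);
      }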
* Update copyright for 2024  (Bruce Momjian, 2024-01-03)

  Reported-by: Michael Paquier
  Discussion: https://postgr.es/m/ZZKTDPxBBMt3C0J9@paquier.xyz
  Backpatch-through: 12
* Remove some more "snapshot too old" vestiges.  (Thomas Munro, 2023-09-08)

  Commit f691f5b8 removed the logic, but left behind some now-useless
  Snapshot arguments to various AM-internal functions, and missed a couple of
  comments.

  Reported-by: Peter Geoghegan <pg@bowt.ie>
  Discussion: https://postgr.es/m/CAH2-Wznj9qSNXZ1P1uWTUD_FeaTezbUazb416EPwi4Qr_jR_6A%40mail.gmail.com
* nbtree: Allocate new pages in separate function.  (Peter Geoghegan, 2023-06-10)

  Split nbtree's _bt_getbuf function in two: code that read locks or write
  locks existing pages remains in _bt_getbuf, while code that deals with
  allocating new pages is moved to a new, dedicated function called
  _bt_allocbuf.  This simplifies most _bt_getbuf callers, since it is no
  longer necessary for them to pass a heaprel argument.  Many of the changes
  to nbtree from commit 61b313e4 can be reverted.  This minimizes the
  divergence between HEAD/PostgreSQL 16 and earlier release branches.

  _bt_allocbuf replaces the previous nbtree idiom of passing P_NEW to
  _bt_getbuf.  There are only 3 affected call sites, all of which continue to
  pass a heaprel for recovery conflict purposes.  Note that nbtree's use of
  P_NEW was superficial; nbtree never actually relied on the P_NEW code paths
  in bufmgr.c, so this change is strictly mechanical.

  GiST already took the same approach; it has a dedicated function for
  allocating new pages called gistNewBuffer().  That factor allowed commit
  61b313e4 to make much more targeted changes to GiST.

  Author: Peter Geoghegan <pg@bowt.ie>
  Reviewed-By: Heikki Linnakangas <hlinnaka@iki.fi>
  Discussion: https://postgr.es/m/CAH2-Wz=8Z9qY58bjm_7TAHgtW6RzZ5Ke62q5emdCEy9BAzwhmg@mail.gmail.com
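  The resulting division of labor, sketched from the commit message
  (prototypes assumed):

      /* Read lock or write lock an existing page; no heaprel needed */
      Buffer _bt_getbuf(Relation rel, BlockNumber blkno, int access);

      /* Allocate a new page; still takes heaprel, for recovery conflicts */
      Buffer _bt_allocbuf(Relation rel, Relation heaprel);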
* Remove useless argument from nbtree dedup function.  (Peter Geoghegan, 2023-04-18)

  _bt_dedup_pass()'s heapRel argument hasn't been needed or used since commit
  cf2acaf4dc made deleting any existing LP_DEAD index tuples the caller's
  responsibility.
* Pass down table relation into more index relation functions  (Andres Freund, 2023-04-01)

  This is done in preparation for logical decoding on standby, which needs to
  include whether visibility affecting WAL records are about a (user) catalog
  table.  That is only known for the table, not the indexes.

  It's also nice to be able to pass the heap relation to GlobalVisTestFor()
  in vacuumRedirectAndPlaceholder().

  Author: "Drouvot, Bertrand" <bertranddrouvot.pg@gmail.com>
  Discussion: https://postgr.es/m/21b700c3-eecf-2e05-a699-f8c78dd31ec7@gmail.com
* Update copyright for 2023  (Bruce Momjian, 2023-01-02)

  Backpatch-through: 11
* Add macros in hash and btree AMs to get the special area of their pages  (Michael Paquier, 2022-04-01)

  This makes the code more consistent with SpGiST, GiST and GIN, which
  already use this style, and the idea is to make it easier to introduce more
  sanity checks for each of these AM-specific macros.  BRIN uses a different
  set of macros to get a page's type and flags, so it has no need for
  something similar.

  Author: Matthias van de Meent
  Discussion: https://postgr.es/m/CAEze2WjE3+tGO9Fs9+iZMU+z6mMZKo54W1Zt98WKqbEUHbHOBg@mail.gmail.com
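  The btree macro follows the shape the other AMs already use (assumed
  definition):

      #define BTPageGetOpaque(page) ((BTPageOpaque) PageGetSpecialPointer(page))

      /* so call sites become: */
      BTPageOpaque opaque = BTPageGetOpaque(page);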
* Add UNIQUE null treatment option  (Peter Eisentraut, 2022-02-03)

  The SQL standard has been ambiguous about whether null values in unique
  constraints should be considered equal or not.  Different implementations
  have different behaviors.  In the SQL:202x draft, this has been formalized
  by making this implementation-defined and adding an option on unique
  constraint definitions

      UNIQUE [ NULLS [NOT] DISTINCT ]

  to choose a behavior explicitly.

  This patch adds this option to PostgreSQL.  The default behavior remains
  UNIQUE NULLS DISTINCT.  Making this happen in the btree code is pretty
  easy; most of the patch is just to carry the flag around to all the places
  that need it.

  The CREATE UNIQUE INDEX syntax extension is not from the standard, it's my
  own invention.

  I named all the internal flags, catalog columns, etc. in the negative
  ("nulls not distinct") so that the default PostgreSQL behavior is the
  default if the flag is false.

  Reviewed-by: Maxim Orlov <orlovmg@gmail.com>
  Reviewed-by: Pavel Borisov <pashkin.elfe@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/84e5ee1b-387e-9a54-c326-9082674bde78@enterprisedb.com
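  In nbtinsert.c the flag mainly gates the existing "NULL key bypasses the
  uniqueness check" shortcut.  A hedged sketch, assuming the catalog flag is
  named indnullsnotdistinct:

      /*
       * Under the default NULLS DISTINCT behavior, a new tuple whose scan
       * key contains any NULL can skip uniqueness checking outright, since
       * NULL is considered unequal to everything.  Under NULLS NOT DISTINCT
       * the check must still run.
       */
      if (checkingunique && itup_key->anynullkeys &&
          !rel->rd_index->indnullsnotdistinct)
          checkingunique = false;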
* Update copyright for 2022  (Bruce Momjian, 2022-01-07)

  Backpatch-through: 10
* Replace random(), pg_erand48(), etc with a better PRNG API and algorithm.  (Tom Lane, 2021-11-28)

  Standardize on xoroshiro128** as our basic PRNG algorithm, eliminating a
  bunch of platform dependencies as well as fundamentally-obsolete PRNG code.
  In addition, this API replacement will ease replacing the algorithm again
  in future, should that become necessary.

  xoroshiro128** is a few percent slower than the drand48 family, but it can
  produce full-width 64-bit random values, not just 48-bit ones, and it
  should be much more trustworthy.  It's likely to be noticeably faster than
  the platform's random(), depending on which platform you are thinking
  about; and we can have non-global state vectors easily, unlike with
  random().  It is not cryptographically strong, but neither are the
  functions it replaces.

  Fabien Coelho, reviewed by Dean Rasheed, Aleksander Alekseev, and myself
  Discussion: https://postgr.es/m/alpine.DEB.2.22.394.2105241211230.165418@pseudo
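  The replacement API, as used from C code (a minimal sketch):

      #include "common/pg_prng.h"

      pg_prng_state prng;        /* non-global state vector */

      pg_prng_seed(&prng, 12345);
      uint64  r = pg_prng_uint64(&prng);   /* full-width 64-bit value */
      double  d = pg_prng_double(&prng);   /* uniform in [0, 1) */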
* Add hardening to catch invalid TIDs in indexes.  (Peter Geoghegan, 2021-11-04)

  Add hardening to the heapam index tuple deletion path to catch TIDs in
  index pages that point to a heap item that index tuples should never point
  to.  The corruption we're trying to catch here is particularly tricky to
  detect, since it typically involves "extra" (corrupt) index tuples, as
  opposed to the absence of required index tuples in the index.

  For example, a heap TID from an index page that turns out to point to an
  LP_UNUSED item in the heap page has a good chance of being caught by one of
  the new checks.  There is a decent chance that the recently fixed parallel
  VACUUM bug (see commit 9bacec15) would have been caught had that particular
  check been in place for Postgres 14.  No backpatch of this extra hardening
  for now, though.

  Author: Peter Geoghegan <pg@bowt.ie>
  Reviewed-By: Andres Freund <andres@anarazel.de>
  Discussion: https://postgr.es/m/CAH2-Wzk-4_raTzawWGaiqNvkpwDXxv3y1AQhQyUeHfkU=tFCeA@mail.gmail.com
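  A hedged sketch of the kind of check added on the heapam side (variable
  names assumed; ERRCODE_INDEX_CORRUPTED is the real errcode):

      /* An index TID must point at a used line pointer on the heap page */
      if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(page))
          ereport(ERROR,
                  (errcode(ERRCODE_INDEX_CORRUPTED),
                   errmsg_internal("index TID points past end of heap line pointer array")));
      if (!ItemIdIsUsed(PageGetItemId(page, offnum)))
          ereport(ERROR,
                  (errcode(ERRCODE_INDEX_CORRUPTED),
                   errmsg_internal("index TID points at unused (LP_UNUSED) heap item")));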
* Remove obsolete nbtree LP_DEAD item comments.  (Peter Geoghegan, 2021-10-27)

  Comments above _bt_findinsertloc() that talk about LP_DEAD items are now
  out of place.  We already discuss index tuple deletion at an earlier point
  in the same comment block.

  Oversight in commit d168b666.
* Fix ordering of items in nbtree error message.  (Peter Geoghegan, 2021-10-27)

  Oversight in commit a5213adf.

  Backpatch: 13-, just like commit a5213adf.
* Further harden nbtree posting split code.  (Peter Geoghegan, 2021-10-27)

  Add more defensive checks around posting list split code.  These should
  detect corruption involving duplicate table TIDs earlier and more reliably
  than any existing check.

  Follow up to commit 8f72bbac.

  Discussion: https://postgr.es/m/CAH2-WzkrSY_kjyd1_M5xJK1uM0govJXMxPn8JUSvwcUOiHuWVw@mail.gmail.com
  Backpatch: 13-, where nbtree deduplication was introduced.
* Fix "single value strategy" index deletion issue.Peter Geoghegan2021-09-21
| | | | | | | | | | | | | | | | | | | | | | | | It is not appropriate for deduplication to apply single value strategy when triggered by a bottom-up index deletion pass. This wastes cycles because later bottom-up deletion passes will overinterpret older duplicate tuples that deduplication actually just skipped over "by design". It also makes bottom-up deletion much less effective for low cardinality indexes that happen to cross a meaningless "index has single key value per leaf page" threshold. To fix, slightly narrow the conditions under which deduplication's single value strategy is considered. We already avoided the strategy for a unique index, since our high level goal must just be to buy time for VACUUM to run (not to buy space). We'll now also avoid it when we just had a bottom-up pass that reported failure. The two cases share the same high level goal, and already overlapped significantly, so this approach is quite natural. Oversight in commit d168b666, which added bottom-up index deletion. Author: Peter Geoghegan <pg@bowt.ie> Discussion: https://postgr.es/m/CAH2-WznaOvM+Gyj-JQ0X=JxoMDxctDTYjiEuETdAGbF5EUc3MA@mail.gmail.com Backpatch: 14-, where bottom-up deletion was introduced.
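  A hedged sketch of the narrowed gating condition in the dedup pass (the
  bottomupdedup flag name is assumed):

      /*
       * Avoid single value strategy for unique indexes (the goal there is
       * to buy time for VACUUM, not to buy space), and now also when this
       * dedup pass was triggered by a bottom-up deletion pass that failed.
       */
      if (!checkingunique && !bottomupdedup)
          singlevalstrat = _bt_do_singleval(rel, page, state, minoff, newitem);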
* Silence another _bt_check_unique compiler warning.  (Peter Geoghegan, 2021-04-08)

  Per complaint from Tom Lane

  Discussion: https://postgr.es/m/1922884.1617909599@sss.pgh.pa.us
* Don't consider newly inserted tuples in nbtree VACUUM.  (Peter Geoghegan, 2021-03-10)

  Remove the entire idea of "stale stats" within nbtree VACUUM (stop caring
  about stats involving the number of inserted tuples).  Also remove the
  vacuum_cleanup_index_scale_factor GUC/param on the master branch (though
  just disable them on postgres 13).

  The vacuum_cleanup_index_scale_factor/stats interface made the nbtree AM
  partially responsible for deciding when pg_class.reltuples stats needed to
  be updated.  This seems contrary to the spirit of the index AM API, though
  -- it is not actually necessary for an index AM's bulk delete and cleanup
  callbacks to provide accurate stats when it happens to be inconvenient.
  The core code owns that.  (Index AMs have the authority to perform or not
  perform certain kinds of deferred cleanup based on their own
  considerations, such as page deletion and recycling, but that has little to
  do with pg_class.reltuples/num_index_tuples.)

  This issue was fairly harmless until the introduction of the
  autovacuum_vacuum_insert_threshold feature by commit b07642db, which had an
  undesirable interaction with the vacuum_cleanup_index_scale_factor
  mechanism: it made insert-driven autovacuums perform full index scans, even
  though there is no real benefit to doing so.  This has been tied to a
  regression with an append-only insert benchmark [1].

  Also have remaining cases that perform a full scan of an index during a
  cleanup-only nbtree VACUUM indicate that the final tuple count is only an
  estimate.  This prevents vacuumlazy.c from setting the index's
  pg_class.reltuples in those cases (it will now only update pg_class when
  vacuumlazy.c had TIDs for nbtree to bulk delete).  This arguably fixes an
  oversight in deduplication-related bugfix commit 48e12913.

  [1] https://smalldatum.blogspot.com/2021/01/insert-benchmark-postgres-is-still.html

  Author: Peter Geoghegan <pg@bowt.ie>
  Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>
  Discussion: https://postgr.es/m/CAD21AoA4WHthN5uU6+WScZ7+J_RcEjmcuH94qcoUPuB42ShXzg@mail.gmail.com
  Backpatch: 13-, where autovacuum_vacuum_insert_threshold was added.
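  The stats-contract part of the change can be sketched like this (field
  names from IndexBulkDeleteResult):

      /*
       * Cleanup-only scan: report the tuple count as an estimate, so that
       * vacuumlazy.c won't overwrite pg_class.reltuples with it.
       */
      stats->num_index_tuples = num_index_tuples;
      stats->estimated_count = true;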
* Use full 64-bit XIDs in deleted nbtree pages.  (Peter Geoghegan, 2021-02-24)

  Otherwise we risk "leaking" deleted pages by making them non-recyclable
  indefinitely.  Commit 6655a729 did the same thing for deleted pages in GiST
  indexes.  That work was used as a starting point here.

  Stop storing an XID indicating the oldest btpo.xact across all deleted
  though unrecycled pages in nbtree metapages.  There is no longer any reason
  to care about that condition/the oldest XID.  It only ever made sense when
  wraparound was something _bt_vacuum_needs_cleanup() had to consider.

  The btm_oldest_btpo_xact metapage field has been repurposed and renamed.
  It is now btm_last_cleanup_num_delpages, which is used to remember how many
  non-recycled deleted pages remain from the last VACUUM (in practice its
  value is usually the precise number of pages that were _newly deleted_
  during the specific VACUUM operation that last set the field).

  The general idea behind storing btm_last_cleanup_num_delpages is to use it
  to give _some_ consideration to non-recycled deleted pages inside
  _bt_vacuum_needs_cleanup() -- though never too much.  We only really need
  to avoid leaving a truly excessive number of deleted pages in an unrecycled
  state forever.  We only do this to cover certain narrow cases where no
  other factor makes VACUUM do a full scan, and yet the index continues to
  grow (and so actually misses out on recycling existing deleted pages).

  These metapage changes result in a clear user-visible benefit: We no longer
  trigger full index scans during VACUUM operations solely due to the
  presence of only 1 or 2 known deleted (though unrecycled) blocks from a
  very large index.  All that matters now is keeping the costs and benefits
  in balance over time.

  Fix an issue that has been around since commit 857f9c36, which added the
  "skip full scan of index" mechanism (i.e. the _bt_vacuum_needs_cleanup()
  logic).  The accuracy of btm_last_cleanup_num_heap_tuples accidentally
  hinged upon _when_ the source value gets stored.  We now always store
  btm_last_cleanup_num_heap_tuples in btvacuumcleanup().  This fixes the
  issue because IndexVacuumInfo.num_heap_tuples (the source field) is
  expected to accurately indicate the state of the table _after_ the VACUUM
  completes inside btvacuumcleanup().

  A backpatchable fix cannot easily be extracted from this commit.  A
  targeted fix for the issue will follow in a later commit, though that won't
  happen today.

  I (pgeoghegan) have chosen to remove any mention of deleted pages in the
  documentation of the vacuum_cleanup_index_scale_factor GUC/param, since the
  presence of deleted (though unrecycled) pages is no longer of much concern
  to users.  The vacuum_cleanup_index_scale_factor description in the docs
  now seems rather unclear in any case, and it should probably be rewritten
  in the near future.  Perhaps some passing mention of page deletion will be
  added back at the same time.

  Bump XLOG_PAGE_MAGIC due to nbtree WAL records using full XIDs now.

  Author: Peter Geoghegan <pg@bowt.ie>
  Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>
  Discussion: https://postgr.es/m/CAH2-WznpdHvujGUwYZ8sihX=d5u-tRYhi-F4wnV2uN2zHpMUXw@mail.gmail.com
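  A sketch of the new deleted-page representation, as described above (exact
  layout assumed):

      /*
       * Deleted page contents now carry a full 64-bit XID that cannot wrap
       * around, replacing the old 32-bit btpo.xact.
       */
      typedef struct BTDeletedPageData
      {
          FullTransactionId safexid;  /* XID from page deletion; recycling
                                       * is safe only once this is
                                       * sufficiently old */
      } BTDeletedPageData;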
* Enhance nbtree index tuple deletion.  (Peter Geoghegan, 2021-01-13)

  Teach nbtree and heapam to cooperate in order to eagerly remove duplicate
  tuples representing dead MVCC versions.  This is "bottom-up deletion".
  Each bottom-up deletion pass is triggered lazily in response to a flood of
  versions on an nbtree leaf page.  This usually involves a "logically
  unchanged index" hint (these are produced by the executor mechanism added
  by commit 9dc718bd).

  The immediate goal of bottom-up index deletion is to avoid "unnecessary"
  page splits caused entirely by version duplicates.  It naturally has an
  even more useful effect, though: it acts as a backstop against accumulating
  an excessive number of index tuple versions for any given _logical row_.
  Bottom-up index deletion complements what we might now call "top-down index
  deletion": index vacuuming performed by VACUUM.  Bottom-up index deletion
  responds to the immediate local needs of queries, while leaving it up to
  autovacuum to perform infrequent clean sweeps of the index.  The overall
  effect is to avoid certain pathological performance issues related to
  "version churn" from UPDATEs.

  The previous tableam interface used by index AMs to perform tuple deletion
  (the table_compute_xid_horizon_for_tuples() function) has been replaced
  with a new interface that supports certain new requirements.  Many (perhaps
  all) of the capabilities added to nbtree by this commit could also be
  extended to other index AMs.  That is left as work for a later commit.

  Extend deletion of LP_DEAD-marked index tuples in nbtree by adding logic to
  consider extra index tuples (that are not LP_DEAD-marked) for deletion in
  passing.  This increases the number of index tuples deleted significantly
  in many cases.  The LP_DEAD deletion process (which is now called "simple
  deletion" to clearly distinguish it from bottom-up deletion) won't usually
  need to visit any extra table blocks to check these extra tuples.  We have
  to visit the same table blocks anyway to generate a latestRemovedXid value
  (at least in the common case where the index deletion operation's WAL
  record needs such a value).

  Testing has shown that the "extra tuples" simple deletion enhancement
  increases the number of index tuples deleted with almost any workload that
  has LP_DEAD bits set in leaf pages.  That is, it almost never fails to
  delete at least a few extra index tuples.  It helps most of all in cases
  that happen to naturally have a lot of delete-safe tuples.  It's not
  uncommon for an individual deletion operation to end up deleting an order
  of magnitude more index tuples compared to the old naive approach (e.g.,
  custom instrumentation of the patch shows that this happens fairly often
  when the regression tests are run).

  Add a further enhancement that augments simple deletion and bottom-up
  deletion in indexes that make use of deduplication: Teach nbtree's
  _bt_delitems_delete() function to support granular TID deletion in posting
  list tuples.  It is now possible to delete individual TIDs from posting
  list tuples provided the TIDs have a tableam block number of a table block
  that gets visited as part of the deletion process (visiting the table block
  can be triggered directly or indirectly).  Setting the LP_DEAD bit of a
  posting list tuple is still an all-or-nothing thing, but that matters much
  less now that deletion only needs to start out with the right _general_
  idea about which index tuples are deletable.

  Bump XLOG_PAGE_MAGIC because xl_btree_delete changed.

  No bump in BTREE_VERSION, since there are no changes to the on-disk
  representation of nbtree indexes.  Indexes built on PostgreSQL 12 or
  PostgreSQL 13 will automatically benefit from bottom-up index deletion
  (i.e. no reindexing required) following a pg_upgrade.  The enhancement to
  simple deletion is available with all B-Tree indexes following a
  pg_upgrade, no matter what PostgreSQL version the user upgrades from.

  Author: Peter Geoghegan <pg@bowt.ie>
  Reviewed-By: Heikki Linnakangas <hlinnaka@iki.fi>
  Reviewed-By: Victor Yegorov <vyegorov@gmail.com>
  Discussion: https://postgr.es/m/CAH2-Wzm+maE3apHB8NOtmM=p-DO65j2V5GzAWCOEEuy3JZgb2g@mail.gmail.com
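  The new tableam-level interface can be sketched as follows (field set as
  of this commit is assumed; see tableam.h for the real definitions):

      /*
       * The index AM builds a deletion request covering TIDs it would like
       * to delete; heapam determines which of them are safely deletable,
       * and returns a latestRemovedXid for the WAL record.
       */
      typedef struct TM_IndexDeleteOp
      {
          bool            bottomup;           /* bottom-up (not simple) deletion? */
          int             bottomupfreespace;  /* space target for bottom-up case */
          int             ndeltids;
          TM_IndexDelete *deltids;            /* candidate TIDs, from index AM */
          TM_IndexStatus *status;             /* per-TID verdicts, from heapam */
      } TM_IndexDeleteOp;

      TransactionId table_index_delete_tuples(Relation rel,
                                              TM_IndexDeleteOp *delstate);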
* Update copyright for 2021  (Bruce Momjian, 2021-01-02)

  Backpatch-through: 9.5
* Deprecate nbtree's BTP_HAS_GARBAGE flag.  (Peter Geoghegan, 2020-11-17)

  Streamline handling of the various strategies that we have to avoid a page
  split in nbtinsert.c.  When it looks like a leaf page is about to overflow,
  we now perform LP_DEAD item deletion and deduplication in one central
  place.  This greatly simplifies _bt_findinsertloc().

  This has an independently useful consequence: nbtree no longer relies on
  the BTP_HAS_GARBAGE page level flag/hint for anything important.  We still
  set and unset the flag in the same way as before, but it's no longer
  treated as a gating condition when considering if we should check for
  already-set LP_DEAD bits.  This happens at the point where the page looks
  like it might have to be split anyway, so simply checking the LP_DEAD bits
  in passing is practically free.  This avoids missing LP_DEAD bits just
  because the page-level hint is unset, which is probably reasonably common
  (e.g. it happens when VACUUM unsets the page-level flag without actually
  removing index tuples whose LP_DEAD-bit was set recently, after the VACUUM
  operation began but before it reached the leaf page in question).

  Note that this isn't a big behavioral change compared to PostgreSQL 13.  We
  were already checking for set LP_DEAD bits regardless of whether the
  BTP_HAS_GARBAGE page level flag was set before we considered doing a
  deduplication pass.  This commit only goes slightly further by doing the
  same check for all indexes, even indexes where deduplication won't be
  performed.

  We don't completely remove the BTP_HAS_GARBAGE flag.  We still rely on it
  as a gating condition with pg_upgrade'd indexes from before B-tree version
  4/PostgreSQL 12.  That makes sense because we sometimes have to make a
  choice among pages full of duplicates when inserting a tuple with pre
  version 4 indexes.  It probably still pays to avoid accessing the line
  pointer array of a page there, since it won't yet be clear whether we'll
  insert on to the page in question at all, let alone split it as a result.

  Author: Peter Geoghegan <pg@bowt.ie>
  Reviewed-By: Victor Yegorov <vyegorov@gmail.com>
  Discussion: https://postgr.es/m/CAH2-Wz%3DYpc1PDdk8OVJDChGJBjT06%3DA0Mbv9HyTLCsOknGcUFg%40mail.gmail.com
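  The "one central place" can be sketched like this (helper name as added to
  nbtinsert.c; argument list abbreviated and assumed):

      /*
       * Page looks like it will overflow: try LP_DEAD item deletion, then
       * deduplication, before accepting that a page split is needed.
       */
      if (PageGetFreeSpace(page) < insertstate->itemsz)
          _bt_delete_or_dedup_one_page(rel, heapRel, insertstate,
                                       checkingunique, uniquedup);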
* nbtree: Rename nbtinsert.c variables for consistency.  (Peter Geoghegan, 2020-11-17)

  Stop naming special area/opaque pointer variables 'lpageop' in contexts
  where it doesn't make sense.  This is a holdover from a time when logic
  that performs tasks that are now spread across _bt_insertonpg(),
  _bt_findinsertloc(), and _bt_split() was more centralized.  'lpageop'
  denotes "left page", which doesn't make sense outside of contexts in which
  there isn't also a right page.

  Also acquire page flag variables up front within _bt_insertonpg().  This
  makes it closer to _bt_split() following refactoring commit bc3087b626d.
  This allows the page split and retail insert paths to both make use of the
  same variables.
* nbtree: Demote incomplete split "can't happen" error.  (Peter Geoghegan, 2020-11-15)

  Only a basic logic bug in a _bt_insertonpg() caller could lead to a
  violation of this invariant (index corruption won't do it).  A "can't
  happen" error seems inappropriate (it is arbitrary at best).  Demote the
  error to a simple assertion.  This matches similar nearby sanity checks.
* Correct nbtree page split lock coupling comment.  (Peter Geoghegan, 2020-08-09)

  There is no reason to distinguish between readers and writers here.
* Add nbtree Valgrind buffer lock checks.  (Peter Geoghegan, 2020-07-21)

  Holding just a buffer pin (with no buffer lock) on an nbtree buffer/page
  provides very weak guarantees, especially compared to heapam, where it's
  often safe to read a page while only holding a buffer pin.  This commit has
  Valgrind enforce the following rule: it is never okay to access an nbtree
  buffer without holding both a pin and a lock on the buffer.

  A draft version of this patch detected questionable code that was cleaned
  up by commits fa7ff642 and 7154aa16.  The code in question used to access
  an nbtree buffer page's special/opaque area with no buffer lock (only a
  buffer pin).  This practice (which isn't obviously unsafe) is hereby
  formally disallowed in nbtree.  There doesn't seem to be any reason to
  allow it, and banning it keeps things simple for Valgrind.

  The new checks are implemented by adding custom nbtree client requests
  (located in LockBuffer() wrapper functions); these requests are
  "superimposed" on top of the generic bufmgr.c Valgrind client requests
  added by commit 1e0dfd16.  No custom resource management cleanup code is
  needed to undo the effects of marking buffers as non-accessible under this
  scheme.

  Author: Peter Geoghegan
  Reviewed-By: Anastasia Lubennikova, Georgios Kokolatos
  Discussion: https://postgr.es/m/CAH2-WzkLgyN3zBvRZ1pkNJThC=xi_0gpWRUb_45eexLH1+k2_Q@mail.gmail.com
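  The wrapper approach, sketched (close to the committed _bt_lockbuf(); the
  Valgrind client request macro is real):

      void
      _bt_lockbuf(Relation rel, Buffer buf, int access)
      {
          /* LockBuffer() asserts that a pin is held by this backend */
          LockBuffer(buf, access);

          /* Page contents only become accessible once the lock is held */
          if (!RelationUsesLocalBuffers(rel))
              VALGRIND_MAKE_MEM_DEFINED(BufferGetPage(buf), BLCKSZ);
      }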
* Fix misuse of table_index_fetch_tuple_check().  (Peter Geoghegan, 2020-06-25)

  Commit 0d861bbb, which added deduplication to nbtree, had
  _bt_check_unique() pass a TID to table_index_fetch_tuple_check() that isn't
  safe to mutate.  table_index_fetch_tuple_check()'s tid argument is modified
  when the TID in question is not the latest visible tuple in a hot chain,
  though this wasn't documented.

  To fix, go back to using a local copy of the TID in _bt_check_unique(), and
  update comments above table_index_fetch_tuple_check().

  Backpatch: 13-, where B-Tree deduplication was introduced.
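  The shape of the fix, sketched at one of the _bt_check_unique() call sites:

      ItemPointerData htid;

      /*
       * Use a local copy: table_index_fetch_tuple_check() may update *tid
       * to point to the latest visible tuple in a HOT chain.
       */
      htid = itup->t_tid;
      if (table_index_fetch_tuple_check(heapRel, &htid, SnapshotSelf, NULL))
      {
          /* a visible conflicting tuple exists */
      }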
* Silence _bt_check_unique compiler warning.  (Peter Geoghegan, 2020-06-13)

  Reported-By: Tom Lane
  Discussion: https://postgr.es/m/841649.1592065060@sss.pgh.pa.us
* Adjust "root of to-be-deleted subtree" function.Peter Geoghegan2020-05-11
| | | | | | | | | | | | | | | | | | | | Restructure the function that locates the root of the to-be-deleted subtree during nbtree page deletion. Handle the conditions that make page deletion unsafe in a slightly more uniform way, and acknowledge the fact that the behavior with incomplete splits on internal pages is different (as pointed out in the nbtree README as of commit 35bc0ec7). Also invent new terminology that avoids ambiguity around which pages are about to be deleted. Consistently use the term "to-be-deleted subtree", not the ambiguous term "branch". We were calling the subtree parent page the "top parent page", but that was quite misleading. The top parent page usually refers to a page unlinked from its siblings and marked deleted (during the second stage of page deletion). There was one kind of top parent page that we merely removed a downlink from, and another kind of top parent page that we actually marked deleted. Eliminate the ambiguity by inventing a new term ("subtree parent page") that refers to the former kind of page only.
* Refactor btvacuumpage().  (Peter Geoghegan, 2020-05-02)

  Remove one of the arguments to btvacuumpage(), and give up on the idea that
  it's a recursive function.  We now use the term "backtracking" to refer to
  the case where an earlier block must be visited to make sure no tuples that
  need to be removed were missed.

  Advertising btvacuumpage() as a recursive function was unhelpful.  In
  reality the function always simulates recursion with a loop (it doesn't
  actually call itself).  This wasn't just necessary as a precaution (per the
  comments mentioning tail recursion), though.  There is no reliable natural
  limit on the number of times we can backtrack.

  There are important behavioral differences when "recursing"/backtracking,
  mostly related to page deletion.  We don't perform page deletion when
  backtracking due to the extra complexity.  And when we recurse, we're not
  performing a physical order scan anymore, so we expect fairly different
  conditions to hold for the page.

  Structuring the code like this makes it clearer how _bt_pagedel()
  cooperates with btvacuumpage() and btvacuumscan() (as established in commit
  b0229f26 and commit 73a076b0).

  Author: Peter Geoghegan
  Reviewed-By: Masahiko Sawada
  Discussion: https://postgr.es/m/CAH2-WzmRGMDWiLMcb+zagG9652PboNN4Gfcq1Gc_wJL6A716MA@mail.gmail.com
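  Backtracking is simulated with a plain loop, roughly like this (sketch;
  label and variable names assumed):

      BlockNumber backtrack_to;

  backtrack:
      backtrack_to = P_NONE;

      /*
       * ... read and process block blkno; a page split during this VACUUM
       * whose right half landed at a lower block number than the scan
       * position sets backtrack_to ...
       */

      if (backtrack_to != P_NONE)
      {
          blkno = backtrack_to;
          goto backtrack;        /* no page deletion while backtracking */
      }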
* Fix AddressSanitizer use-after-scope complaint.  (Peter Geoghegan, 2020-04-30)

  XLogRegisterBufData() does not copy data pointed to by caller's pointer
  argument.

  Oversight in commit 0d861bbb702.

  Author: Peter Eisentraut
  Reported-By: Peter Eisentraut
  Discussion: https://postgr.es/m/21800dbe-a13e-22f7-d423-b81db9d249f5@2ndquadrant.com
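  The hazard, in sketch form (hypothetical block-local variable):

      /*
       * XLogRegisterBufData() only records the pointer; the bytes are read
       * later, when XLogInsert() assembles the record.
       */
      {
          uint16      nupdated = 0;   /* hypothetical local */

          XLogRegisterBufData(0, (char *) &nupdated, sizeof(nupdated));
      }   /* scope ends here: use-after-scope once XLogInsert() runs */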
* Remove obsolete "hole in center of page" comment.Peter Geoghegan2020-04-14
| | | | | | | | | A comment from the Berkeley days incorrectly claimed that the page management code cares about the contents of the hole in the center of the page (at least in the case of the left half of an nbtree page split). Commit 8fa30f906be added an addendum that stated that the original comment was "probably obsolete". It's definitely obsolete, though, so remove the original comment plus the addendum.
* Rearrange _bt_insertonpg() "update metapage" code.  (Peter Geoghegan, 2020-04-14)

  Nest the "update metapage as part of insert into root-like page" branch
  inside the broader "insert into internal page" branch.  This improves
  readability.
* Add defensive "split_only_page" nbtree assertion.Peter Geoghegan2020-04-13
| | | | | | | | | | | | | | Clearly it's not okay for nbtree to split a page that is the only page on its level, and then find that it has to split the parent one level up in turn. There is simply no code to handle the split_only_page case in the _bt_insertonpg() "newitem won't fit" branch (only the "newitem fits" branch handles split_only_page). Add a defensive assertion that will fail if a split_only_page call to _bt_insertonpg() somehow ends up splitting the target/parent page. I (pgeoghegan) believe that we don't need split_only_page handling for the "newitem won't fit" branch because anybody calling _bt_insertonpg() like this would have to hold a lock on the same one and only child page.
* Make _bt_insertonpg() more like _bt_split().  (Peter Geoghegan, 2020-04-13)

  It seems like a good idea for nbtree's retail insert code to be absolutely
  consistent with nbtree's page split code for anything that naturally
  requires equivalent handling.  Anything that concerns inserting newitem
  (which is handled as part of the page split atomic action when a page split
  is required) should work in exactly the same way.  With that in mind, make
  _bt_insertonpg() handle 'cbuf' in a way that matches _bt_split().
* Harmonize nbtree page split point code.  (Peter Geoghegan, 2020-04-13)

  An nbtree split point can be thought of as a point between two adjoining
  tuples from an imaginary version of the page being split that includes the
  incoming/new item (in addition to the items that really are on the page).
  These adjoining tuples are called the lastleft and firstright tuples.

  The variables that represent split points contained a field called
  firstright, which is an offset number of the first data item from the
  original page that goes on the new right page.  The corresponding tuple
  from origpage was usually the same thing as the actual firstright tuple,
  but not always: the firstright tuple is sometimes the new/incoming item
  instead.  This situation seems unnecessarily confusing.

  Make things clearer by renaming the origpage offset returned by
  _bt_findsplitloc() to "firstrightoff".  We now have a firstright tuple and
  a firstrightoff offset number which are comparable to the newitem/lastleft
  tuples and the newitemoff/lastleftoff offset numbers respectively.  Also
  make sure that we are consistent about how we describe nbtree page split
  point state.

  Push the responsibility for dealing with pg_upgrade'd !heapkeyspace indexes
  down to lower level code, relieving _bt_split() from dealing with it
  directly.  This means that we always have a palloc'd left page high key on
  the leaf level, no matter what.  This enables simplifying some of the code
  (and code comments) within _bt_split().

  Finally, restructure the page split code to make it clearer why suffix
  truncation (which only takes place during leaf page splits) is completely
  different to the first data item truncation that takes place during
  internal page splits.  Tuples are marked as having fewer attributes stored
  in both cases, and the firstright tuple is truncated in both cases, so it's
  easy to imagine somebody missing the distinction.
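  A sketch of the resulting split point representation, using the commit's
  terminology (not the exact struct):

      typedef struct SplitPoint
      {
          OffsetNumber firstrightoff;  /* first origpage item on new right page */
          bool         newitemonleft;  /* does newitem go on the left half? */
      } SplitPoint;

      /*
       * The firstright *tuple* is usually the origpage item at
       * firstrightoff, but it is newitem itself when newitem becomes the
       * first item on the new right page.
       */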
* Remove nbtree BTreeTupleSetAltHeapTID() function.  (Peter Geoghegan, 2020-04-07)

  Since heap TID is supposed to be just another key attribute to the
  implementation, it doesn't make much sense to have separate
  BTreeTupleSetNAtts() and BTreeTupleSetAltHeapTID() functions.  Merge the
  two functions together.  This slightly simplifies _bt_truncate().
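  The merged helper's assumed shape, with the old second function folded in
  as a flag:

      /*
       * Set a pivot tuple's number of key attributes, and optionally
       * indicate that a truncated heap TID is stored as well.
       */
      static inline void BTreeTupleSetNAtts(IndexTuple itup, uint16 nkeyatts,
                                            bool heaptid);

      BTreeTupleSetNAtts(pivot, nkeyattrs, true);   /* replaces SetAltHeapTID */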
* Justify nbtree page split locking in code comment.  (Peter Geoghegan, 2020-03-27)

  Delaying unlocking the right child page until after the point that the left
  child's parent page has been refound is no longer truly necessary.  Commit
  40dae7ec made nbtree tolerant of interrupted page splits.  VACUUM was
  taught to avoid deleting a page that happens to be the right half of an
  incomplete split.  As long as page splits don't unlock the left child page
  until the end of the second/final phase, it should be safe to unlock the
  right child page earlier (at the end of the first phase).

  It probably isn't actually useful to release the right child's lock earlier
  like this (it probably won't improve performance).  Even still, pointing
  out that it ought to be safe to do so should make it easier to understand
  the overall design.
* nbtree: Remove obsolete _bt_pgaddtup() comments.  (Peter Geoghegan, 2020-03-19)

  Remove comments that are a throwback to a time when nbtree cared about
  write-ordering dependencies.  The comments are similar to those removed by
  commit 9ee7414e, among others.
* nbtree: Use raw PageAddItem() for retail inserts.  (Peter Geoghegan, 2020-03-18)

  Only internal page splits need to call _bt_pgaddtup() instead of
  PageAddItem(), and only for data items, one of which will end up at the
  first offset (or first offset after the high key offset) on the new right
  page.  This data item alone will need to be truncated in _bt_pgaddtup().

  Since there is no reason why retail inserts ever need to truncate the
  incoming item, use a raw PageAddItem() call there instead.  Even
  _bt_split() uses raw PageAddItem() calls for left page and right page high
  keys.  Clearly the _bt_pgaddtup() shim function wasn't really encapsulating
  anything.  _bt_pgaddtup() should now be thought of as a _bt_split() helper
  function.

  Note that the assertions from commit d1e241c2 verify that retail inserts
  never insert an item at an internal page's negative infinity offset.  This
  invariant could only ever be violated as a result of a basic logic error in
  nbtinsert.c.
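  The retail insert path now does the obvious thing (sketch, close to the
  resulting nbtinsert.c code):

      /* Retail inserts never truncate the new item, so no _bt_pgaddtup() */
      if (PageAddItem(page, (Item) itup, itemsz, newitemoff,
                      false, false) == InvalidOffsetNumber)
          elog(PANIC, "failed to add new item to block %u in index \"%s\"",
               BufferGetBlockNumber(buf), RelationGetRelationName(rel));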
* Refactor nbtree fastpath optimization.  (Peter Geoghegan, 2020-03-18)

  Commit 2b272734, which added the fastpath rightmost leaf page cache insert
  optimization, added code to _bt_doinsert() to handle using and invalidating
  the backend local block cache.  It doesn't seem like a good place to handle
  these low level details, though.  _bt_doinsert() is supposed to be a high
  level function -- it is the main entry point to nbtinsert.c.

  Restructure the code by placing handling of the rightmost block cache at
  the start of a new _bt_search() shim function, _bt_search_insert().  The
  new function is called from _bt_doinsert(), which uses it as a _bt_search()
  variant that conveniently accepts its BTInsertState state as an argument.
  _bt_doinsert() no longer needs to directly consider the fastpath
  optimization.

  Discussion: https://postgr.es/m/CAH2-Wzk59cxKJRd=rfbyub6-V4yWRjsOYRkUNHBLT1P1GdtCQQ@mail.gmail.com
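  The shim's assumed shape:

      static BTStack
      _bt_search_insert(Relation rel, BTInsertState insertstate)
      {
          if (RelationGetTargetBlock(rel) != InvalidBlockNumber)
          {
              /*
               * Try the cached rightmost leaf page first; on any doubt
               * (page was split, deleted, or lacks free space), invalidate
               * the cache and fall through to a normal descent.
               */
          }

          /* Cannot use optimization -- descend the tree from the root */
          return _bt_search(rel, insertstate->itup_key, &insertstate->buf,
                            BT_WRITE, NULL);
      }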
* nbtree: Remove useless local variables.  (Peter Geoghegan, 2020-03-17)

  Copying block and offset numbers to local variables in _bt_insertonpg()
  made the code less readable.  Remove the variables.  There is already code
  that conditionally calls BufferGetBlockNumber() in the same block, so
  consistently do it that way instead.

  Calling BufferGetBlockNumber() is very cheap, but we might as well avoid it
  when it isn't truly necessary.  It isn't truly necessary for
  _bt_insertonpg() to call BufferGetBlockNumber() in almost all cases.

  Spotted while working on a patch that refactors the fastpath rightmost leaf
  page cache optimization, which was added by commit 2b272734.
* nbtree: Pass down MAXALIGN()'d itemsz for new item.  (Peter Geoghegan, 2020-03-16)

  Refactor nbtinsert.c so that the final itemsz of each new non-pivot tuple
  (the MAXALIGN()'d size) is determined once.  Most of the functions used by
  leaf page inserts used the insertstate.itemsz value already.  This commit
  makes everything use insertstate.itemsz as standard practice.

  The goal is to decouple tuple size from "effective" tuple size.  Making
  this distinction isn't truly necessary right now, but that might change in
  the future.  Also explain why we consistently apply MAXALIGN() to get an
  effective index tuple size.  This was rather unclear, in part because it
  isn't actually strictly necessary right now.
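  The standard practice being established, at its _bt_doinsert() origin
  (representative line):

      /*
       * Compute the "effective" size once; every later free-space and
       * page-fit decision uses the MAXALIGN()'d value, since that is the
       * space the tuple actually consumes once added to a page.
       */
      insertstate.itemsz = MAXALIGN(IndexTupleSize(itup));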
* nbtree: Reorder nbtinsert.c prototypes.  (Peter Geoghegan, 2020-03-15)

  Relocate _bt_newroot() prototype, so that the order that prototypes appear
  in matches the order that the functions are defined in.
* nbtree: Move fastpath NULL descent stack assertion.  (Peter Geoghegan, 2020-03-10)

  Commit 074251db added an assertion that verified the fastpath/rightmost
  page insert optimization's assumption about free space: There should always
  be enough free space on the page to insert the new item without splitting
  the page.  Otherwise, we end up using the "concurrent root page split"
  phony/fake stack path in _bt_insert_parent().  This does not lead to
  incorrect behavior, but it is likely to be far slower than simply using the
  regular _bt_search() path.  The assertion catches serious performance bugs
  that would probably take a long time to detect any other way.

  It seems much more natural to make this assertion just before the point
  that we generate a fake/phony descent stack.  Move the assert there.  This
  also makes _bt_insertonpg() a bit more readable.