path: root/src/backend/access/heap
...
* Remove lazy_scan_heap parallel VACUUM comment block.  (Peter Geoghegan, 2021-11-21)
  This doesn't belong next to very high level discussion of the tasks that lazy_scan_heap performs.  There is already a similar, longer comment block at the top of vacuumlazy.c that mentions lazy_scan_heap directly.
* Fix parallel operations that prevent oldest xmin from advancing.  (Amit Kapila, 2021-11-19)
  While determining xid horizons, we skip over backends that are running Vacuum.  We also ignore Create Index Concurrently and Reindex Concurrently for the purposes of computing Xmin for Vacuum.  But we were not setting the flags corresponding to these operations when they are performed in parallel, which was preventing the Xid horizon from advancing.
  The optimization related to skipping Create Index Concurrently and Reindex Concurrently operations was implemented in PG-14, but the fix is the same for Parallel Vacuum as well, so back-patch through PG-13.
  Author: Masahiko Sawada
  Reviewed-by: Amit Kapila
  Backpatch-through: 13
  Discussion: https://postgr.es/m/CAD21AoCLQqgM1sXh9BrDFq0uzd3RBFKi=Vfo6cjjKODm0Onr5w@mail.gmail.com
* Explain pruning pgstats accounting subtleties.  (Peter Geoghegan, 2021-11-12)
  Add a comment explaining why the pgstats accounting used during opportunistic heap pruning operations (to maintain the current number of dead tuples in the relation) needs to compensate by subtracting away the number of new LP_DEAD items.  This is needed so it can avoid completely forgetting about tuples that become LP_DEAD items during pruning -- they should still count.
  It seems more natural to discuss this issue at the only relevant call site (opportunistic pruning), since the same issue does not apply to the only other caller (the VACUUM call site).  Move everything there too.
  Author: Peter Geoghegan <pg@bowt.ie>
  Discussion: https://postgr.es/m/CAH2-Wzm7f+A6ej650gi_ifTgbhsadVW5cujAL3punpupHff5Yg@mail.gmail.com
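  A minimal sketch of the accounting being documented, with illustrative helper and variable names (not the actual pruneheap.c code):

      /*
       * Opportunistic pruning counts every reclaimed item in ndeleted, but
       * items that merely became LP_DEAD stubs are still tracked as dead
       * tuples by pgstats, so subtract them back out before reporting.
       */
      static void
      report_opportunistic_prune(Relation relation, int ndeleted, int nnewlpdead)
      {
          pgstat_update_heap_dead_tuples(relation, ndeleted - nnewlpdead);
      }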
* Update heap_page_prune() free space map comments.  (Peter Geoghegan, 2021-11-11)
  It is up to the heap_page_prune() caller to decide what to do about updating the FSM for a page following pruning.  Update old comments that address what we might want to do as if it were the responsibility of heap_page_prune() itself.  heap_page_prune() doesn't have enough high-level context to make a sensible choice.
* Update another obsolete reference in vacuumlazy.c.  (Peter Geoghegan, 2021-11-11)
  Addresses an oversight in commit 7ab96cf6.
* Update obsolete reference in vacuumlazy.c.  (Peter Geoghegan, 2021-11-05)
  Oversight in commit 7ab96cf6.
* Update obsolete heap pruning comments.  (Peter Geoghegan, 2021-11-05)
  Add new comments that spell out what VACUUM expects from heap pruning: pruning must never leave behind DEAD tuples that still have tuple storage.  This has at least been the case since commit 8523492d, which established the principle that vacuumlazy.c doesn't have to deal with DEAD tuples that still have tuple storage directly, except perhaps by simply retrying pruning (to handle a rare corner case involving concurrent transaction abort).
  In passing, update some references to old symbol names that were missed by the snapshot scalability work (specifically commit dc7420c2c9).
* Add hardening to catch invalid TIDs in indexes.  (Peter Geoghegan, 2021-11-04)
  Add hardening to the heapam index tuple deletion path to catch TIDs in index pages that point to a heap item that index tuples should never point to.  The corruption we're trying to catch here is particularly tricky to detect, since it typically involves "extra" (corrupt) index tuples, as opposed to the absence of required index tuples in the index.
  For example, a heap TID from an index page that turns out to point to an LP_UNUSED item in the heap page has a good chance of being caught by one of the new checks.  There is a decent chance that the recently fixed parallel VACUUM bug (see commit 9bacec15) would have been caught had that particular check been in place for Postgres 14.  No backpatch of this extra hardening for now, though.
  Author: Peter Geoghegan <pg@bowt.ie>
  Reviewed-By: Andres Freund <andres@anarazel.de>
  Discussion: https://postgr.es/m/CAH2-Wzk-4_raTzawWGaiqNvkpwDXxv3y1AQhQyUeHfkU=tFCeA@mail.gmail.com
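  One of the new checks might look roughly like this sketch (error wording and exact placement are assumptions, not the committed code):

      /* An index tuple's heap TID must never point at an LP_UNUSED heap item. */
      if (unlikely(!ItemIdIsUsed(itemid)))
          ereport(ERROR,
                  (errcode(ERRCODE_INDEX_CORRUPTED),
                   errmsg_internal("index tuple's heap TID (%u,%u) points to unused item in relation \"%s\"",
                                   ItemPointerGetBlockNumber(htid),
                                   ItemPointerGetOffsetNumber(htid),
                                   RelationGetRelationName(rel))));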
* Add various assertions to heap pruning code.  (Peter Geoghegan, 2021-11-04)
  These assertions document (and verify) our high level assumptions about how pruning can and cannot affect existing items from target heap pages.  For example, one of the new assertions verifies that pruning does not set a heap-only tuple to LP_DEAD.
  Author: Peter Geoghegan <pg@bowt.ie>
  Reviewed-By: Andres Freund <andres@anarazel.de>
  Discussion: https://postgr.es/m/CAH2-Wz=vhvBx1GjF+oueHh8YQcHoQYrMi0F0zFMHEr8yc4sCoA@mail.gmail.com
* Fix parallel amvacuumcleanup safety bug.  (Peter Geoghegan, 2021-11-02)
  Commit b4af70cb inverted the return value of the function parallel_processing_is_safe(), but missed the amvacuumcleanup test.  Index AMs that don't support parallel cleanup at all were affected.
  The practical consequences of this bug were not very serious.  Hash indexes are affected, but since they just return the number of blocks during hashvacuumcleanup anyway, it can't have had much impact.
  Author: Masahiko Sawada <sawada.mshk@gmail.com>
  Discussion: https://postgr.es/m/CAD21AoA-Em+aeVPmBbL_s1V-ghsJQSxYL-i3JP8nTfPiD1wjKw@mail.gmail.com
  Backpatch: 14-, where commit b4af70cb appears.
* Don't overlook indexes during parallel VACUUM.  (Peter Geoghegan, 2021-11-02)
  Commit b4af70cb, which simplified state managed by VACUUM, performed refactoring of parallel VACUUM in passing.  Confusion about the exact details of the tasks that the leader process is responsible for led to code that made it possible for parallel VACUUM to miss a subset of the table's indexes entirely.  Specifically, indexes that fell under the min_parallel_index_scan_size size cutoff were missed.  These indexes are supposed to be vacuumed by the leader (alongside any parallel unsafe indexes), but weren't vacuumed at all.  Affected indexes could easily end up with duplicate heap TIDs, once heap TIDs were recycled for new heap tuples.  This had generic symptoms that might be seen with almost any index corruption involving structural inconsistencies between an index and its table.
  To fix, make sure that the parallel VACUUM leader process performs any required index vacuuming for indexes that happen to be below the size cutoff.  Also document the design of parallel VACUUM with these below-size-cutoff indexes.
  It's unclear how many users might be affected by this bug.  There had to be at least three indexes on the table to hit the bug: a smaller index, plus at least two additional indexes that themselves exceed the size cutoff.  Cases with just one additional index would not run into trouble, since the parallel VACUUM cost model requires two larger-than-cutoff indexes on the table to apply any parallel processing.  Note also that autovacuum was not affected, since it never uses parallel processing.
  Test case based on tests from a larger patch to test parallel VACUUM by Masahiko Sawada.
  Many thanks to Kamigishi Rei for her invaluable help with tracking this problem down.
  Author: Peter Geoghegan <pg@bowt.ie>
  Author: Masahiko Sawada <sawada.mshk@gmail.com>
  Reported-By: Kamigishi Rei <iijima.yun@koumakan.jp>
  Reported-By: Andrew Gierth <andrew@tao11.riddles.org.uk>
  Diagnosed-By: Andres Freund <andres@anarazel.de>
  Bug: #17245
  Discussion: https://postgr.es/m/17245-ddf06aaf85735f36@postgresql.org
  Discussion: https://postgr.es/m/20211030023740.qbnsl2xaoh2grq3d@alap3.anarazel.de
  Backpatch: 14-, where the refactoring commit appears.
* Demote pg_unreachable() in heapam to an assertion.  (Peter Geoghegan, 2021-10-29)
  Commit d168b66682, which overhauled index deletion, added a pg_unreachable() to the end of a sort comparator used when sorting heap TIDs from an index page.  This allows the compiler to apply optimizations that assume that the heap TIDs from the index AM must always be unique.
  That doesn't seem like a good idea now, given recent reports of corruption involving duplicate TIDs in indexes on Postgres 14.  Demote to an assertion, just in case.
  Backpatch: 14-, where index deletion was overhauled.
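  The shape of the change, sketched with a simplified comparator (the real comparator operates on index-deletion entries rather than bare TIDs; this is illustrative):

      static int
      tid_cmp(ItemPointer tid1, ItemPointer tid2)
      {
          BlockNumber  blk1 = ItemPointerGetBlockNumber(tid1);
          BlockNumber  blk2 = ItemPointerGetBlockNumber(tid2);
          OffsetNumber off1 = ItemPointerGetOffsetNumber(tid1);
          OffsetNumber off2 = ItemPointerGetOffsetNumber(tid2);

          if (blk1 != blk2)
              return (blk1 < blk2) ? -1 : 1;
          if (off1 != off2)
              return (off1 < off2) ? -1 : 1;

          /* Duplicate TIDs "can't happen", but don't let the optimizer assume it */
          Assert(false);          /* was pg_unreachable() */
          return 0;
      }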
* Replace occurrences of InvalidXid with InvalidTransactionId  (Daniel Gustafsson, 2021-10-04)
  While Xid is a known shortening of TransactionId, InvalidXid is not defined in the code.  Fix comments which mistakenly were using the shorter version.
  Author: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
  Discussion: https://postgr.es/m/CALj2ACUQzdigML868nV4cojfELPkEzNLNOk7b91Pho4JB90fng@mail.gmail.com
* vacuumlazy.c: Remove obsolete 'onecall' comment.  (Peter Geoghegan, 2021-09-25)
  Remove obsolete reference to lazy_vacuum()'s onecall argument.  The function argument was removed by commit 3499df0dee.
  Also remove adjoining comment block that introduces the wraparound failsafe concept.  Talking about the failsafe here no longer makes sense, since lazy_vacuum() (and related functions) are no longer the only place where the failsafe might be triggered.  This has been the case since commit c242baa4a8 taught VACUUM to consider triggering the failsafe mechanism during its initial heap scan.
* Document issue with heapam line pointer truncation.  (Peter Geoghegan, 2021-09-22)
  Checking that an offset number isn't past the end of a heap page's line pointer array was just a defensive sanity check for HOT-chain traversal code before commit 3c3b8a4b.  It's strictly necessary now, though.  Add comments that reference the issue to code in heapam that needs to get it right.
  Per suggestion from Alexander Lakhin.
  Discussion: https://postgr.es/m/f76a292c-9170-1aef-91a0-59d9443b99a3@gmail.com
* Remove overzealous index deletion assertion.  (Peter Geoghegan, 2021-09-20)
  A broken HOT chain is not an unexpected condition, even when the offset number points past the end of the page's line pointer array.  heap_prune_chain() does not (and never has) treated this condition as unexpected, so derivative code in heap_index_delete_tuples() shouldn't do so either.
  Oversight in commit 4228817449.
  The assertion can probably only fail on Postgres 14 and master.  Earlier releases don't have commit 3c3b8a4b, which taught VACUUM to truncate the line pointer array of heap pages.  Backpatch all the same, just to be consistent.
  Author: Peter Geoghegan <pg@bowt.ie>
  Reported-By: Alexander Lakhin <exclusion@gmail.com>
  Discussion: https://postgr.es/m/17197-9438f31f46705182@postgresql.org
  Backpatch: 12-, just like commit 4228817449.
* Clean up some code using "(expr) ? true : false"  (Michael Paquier, 2021-09-08)
  All the code paths simplified here were already using a boolean or used an expression that led to zero or one, making the extra bits unnecessary.
  Author: Justin Pryzby
  Reviewed-by: Tom Lane, Michael Paquier, Peter Smith
  Discussion: https://postgr.es/m/20210428182936.GE27406@telsasoft.com
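  An illustrative before/after of the pattern being removed (given some int nmatches):

      bool    saw_match;

      saw_match = (nmatches > 0) ? true : false;      /* before: redundant ternary */
      saw_match = (nmatches > 0);                     /* after: already a boolean */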
* VACUUM VERBOSE: Don't report "pages removed".  (Peter Geoghegan, 2021-08-31)
  It doesn't make any sense to report this information, since VACUUM VERBOSE reports on heap relation truncation directly.  This was an oversight in commit 7ab96cf6, which made VACUUM VERBOSE output a little more consistent with nearby autovacuum-specific log output.  Adjust comments that describe how this is supposed to work in passing.
  Also bring truncation-related VACUUM VERBOSE output in line with the convention established for VACUUM VERBOSE output by commit f4f4a649.
  Author: Peter Geoghegan <pg@bowt.ie>
  Backpatch: 14-, where VACUUM VERBOSE's output changed.
* vacuumlazy.c: Correct prune state comment.  (Peter Geoghegan, 2021-08-31)
  Oversight in commit 7ab96cf6b3.
* Remove unneeded old_rel_pages VACUUM state field.  (Peter Geoghegan, 2021-08-31)
  The field hasn't been used since commit 3d351d91, which redefined pg_class.reltuples to be -1 before the first VACUUM or ANALYZE.
  Also rename a local variable of the same name ("old_rel_pages").  This is used by relation truncation to represent the original relation size at the start of the ongoing VACUUM operation.  Rename it to orig_rel_pages, since that's a lot clearer.  (This name matches similar nearby code.)
* Report tuple address in data-corruption error message  (Alvaro Herrera, 2021-08-30)
  Most data-corruption reports mention the location of the problem, but this one failed to.  Add it.
  Backpatch all the way back.  In 12 and older, also assign the ERRCODE_DATA_CORRUPTED error code as was done in commit fd6ec93bf890 for 13 and later.
  Discussion: https://postgr.es/m/202108191637.oqyzrdtnheir@alvherre.pgsql
* track_io_timing logging: Don't special case 0 ms.  (Peter Geoghegan, 2021-08-27)
  Adjust track_io_timing related logging code added by commit 94d13d474d.  Make it consistent with other nearby autovacuum and autoanalyze logging code by removing logic that suppressed zero millisecond outputs.
  log_autovacuum_min_duration log output now reliably shows "read:" and "write:" millisecond-based values in its report (when track_io_timing is enabled).
  Author: Peter Geoghegan <pg@bowt.ie>
  Reviewed-By: Stephen Frost <sfrost@snowman.net>
  Discussion: https://postgr.es/m/CAH2-WznW0FNxSVQMSRazAMYNfZ6DR_gr5WE78hc6E1CBkkJpzw@mail.gmail.com
  Backpatch: 14-, where the track_io_timing logging was introduced.
* Reorder log_autovacuum_min_duration log output.  (Peter Geoghegan, 2021-08-27)
  This order seems more natural.  It starts with details that are particular to heap and index data structures, and ends with system-level costs incurred during the autovacuum worker's VACUUM/ANALYZE operation.
  Author: Peter Geoghegan <pg@bowt.ie>
  Discussion: https://postgr.es/m/CAH2-WzkzxK6ahA9xxsOftRtBX_R0swuHZsvo4QUbak1Bz7hb7Q@mail.gmail.com
  Backpatch: 14-, which enhanced the log output in various ways.
* vacuumlazy.c: Remove unnecessary parentheses.  (Peter Geoghegan, 2021-08-27)
  This was arguably a minor oversight in commit b4af70cb, which cleaned up the function signatures of functions that modify IndexBulkDeleteResult variables.
* Emit namespace in the post-copy errmsg  (Daniel Gustafsson, 2021-08-16)
  During a VACUUM or CLUSTER command, the initial output emits a fully qualified relation path with namespace.  The post-action errmsg only emitted the relation name however, which may lead to hard to parse output when using multiple jobs with vacuumdb, as the output from different jobs may be interleaved.
  Include the full path in the post-action errmsg to be consistent with the initial errmsg.
  Author: Mike Fiedler <miketheman@gmail.com>
  Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
  Discussion: https://postgr.es/m/CAMerE0oz+8G-aORZL_BJcPxnBqewZAvND4bSUysjz+r-oT1BxQ@mail.gmail.com
* Message style improvements  (Peter Eisentraut, 2021-08-07)
* Move InRecovery and standbyState global vars to xlogutils.c.  (Heikki Linnakangas, 2021-07-31)
  They are used in code that runs both during normal operation and during WAL replay, and needs to behave differently during replay.  Move them to xlogutils.c, because that's where we have other helper functions used by redo routines.
  Reviewed-by: Andres Freund
  Discussion: https://www.postgresql.org/message-id/b3b71061-4919-e882-4857-27e370ab134a%40iki.fi
* Replace RelationOpenSmgr() with RelationGetSmgr().  (Tom Lane, 2021-07-12)
  The idea behind this patch is to design out bugs like the one fixed by commit 9d523119f.  Previously, once one did RelationOpenSmgr(rel), it was considered okay to access rel->rd_smgr directly for some not-very-clear interval.  But since that pointer will be cleared by relcache flushes, we had bugs arising from overreliance on a previous RelationOpenSmgr call still being effective.
  Now, very little code except that in rel.h and relcache.c should ever touch the rd_smgr field directly.  The normal coding rule is to use RelationGetSmgr(rel) and not expect the result to be valid for longer than one smgr function call.  There are a couple of places where using the function every single time seemed like overkill, but they are now annotated with large warning comments.
  Amul Sul, after an idea of mine.
  Discussion: https://postgr.es/m/CANiYTQsU7yMFpQYnv=BrcRVqK_3U3mtAzAsJCaqtzsDHfsUbdQ@mail.gmail.com
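  The new accessor is, roughly, an inline function in rel.h (reproduced from memory, so treat it as a sketch); callers then write e.g. smgrnblocks(RelationGetSmgr(rel), MAIN_FORKNUM) instead of caching rel->rd_smgr across calls:

      static inline SMgrRelation
      RelationGetSmgr(Relation rel)
      {
          /* Re-open on demand, so a relcache flush clearing rd_smgr is harmless */
          if (unlikely(rel->rd_smgr == NULL))
              smgrsetowner(&(rel->rd_smgr), smgropen(rel->rd_node, rel->rd_backend));
          return rel->rd_smgr;
      }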
* Use WaitLatch() instead of pg_usleep() at end-of-vacuum truncation  (Michael Paquier, 2021-07-02)
  This has the advantage of making a process more responsive when the postmaster dies, even if the wait time was rather limited as there was only a 50ms timeout here.  Another advantage of this change is for monitoring, as we gain a new wait event for the end-of-vacuum truncation.
  Author: Bharath Rupireddy
  Reviewed-by: Aleksander Alekseev, Thomas Munro, Michael Paquier
  Discussion: https://postgr.es/m/CALj2ACU4AdPCq6NLfcA-ZGwX7pPCK5FgEj-CAU0xCKzkASSy_A@mail.gmail.com
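  The replacement looks roughly like the following (the wait-event name is an assumption):

      /* Formerly pg_usleep(50000L); the latch wait also notices postmaster death */
      (void) WaitLatch(MyLatch,
                       WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
                       50,                              /* timeout in ms */
                       WAIT_EVENT_VACUUM_TRUNCATE);
      ResetLatch(MyLatch);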
* Pre branch pgindent / pgperltidy run  (Andrew Dunstan, 2021-06-28)
  Along the way make a slight adjustment to src/include/utils/queryjumble.h to avoid an unused typedef.
* Remove overzealous VACUUM failsafe assertions.  (Peter Geoghegan, 2021-06-20)
  The failsafe can trigger when index processing is already disabled.  This can happen when VACUUM's INDEX_CLEANUP parameter is "off" and the failsafe happens to trigger.  Remove assertions that assume that index processing is directly tied to the failsafe.
  Oversight in commit c242baa4, which made it possible for the failsafe to trigger in a two-pass strategy VACUUM that has yet to make its first call to lazy_vacuum_all_indexes().
* Support disabling index bypassing by VACUUM.  (Peter Geoghegan, 2021-06-18)
  Generalize the INDEX_CLEANUP VACUUM parameter (and the corresponding reloption): make it into a ternary style boolean parameter.  It now exposes a third option, "auto".  The "auto" option (which is now the default) enables the "bypass index vacuuming" optimization added by commit 1e55e7d1.
  "VACUUM (INDEX_CLEANUP TRUE)" is redefined to once again make VACUUM simply do any required index vacuuming, regardless of how few dead tuples are encountered during the first scan of the target heap relation (unless there are exactly zero).  This gives users a way of opting out of the "bypass index vacuuming" optimization, if for whatever reason that proves necessary.  It is also expected to be used by PostgreSQL developers as a testing option from time to time.
  "VACUUM (INDEX_CLEANUP FALSE)" does the same thing as it always has: it forcibly disables both index vacuuming and index cleanup.  It's not expected to be used much in PostgreSQL 14.  The failsafe mechanism added by commit 1e55e7d1 addresses the same problem in a simpler way.  INDEX_CLEANUP can now be thought of as a testing and compatibility option.
  Author: Peter Geoghegan <pg@bowt.ie>
  Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>
  Reviewed-By: Justin Pryzby <pryzby@telsasoft.com>
  Discussion: https://postgr.es/m/CAH2-WznrBoCST4_Gxh_G9hA8NzGUbeBGnOUC8FcXcrhqsv6OHQ@mail.gmail.com
* Remove unneeded field from VACUUM state.  (Peter Geoghegan, 2021-06-15)
  Bugfix commit 5fc89376 effectively made the lock_waiter_detected field from vacuumlazy.c's global state struct into private state owned by lazy_truncate_heap().  Finish this off by replacing the struct field with a local variable.
* Remove forced toast recompression in VACUUM FULL/CLUSTER  (Michael Paquier, 2021-06-14)
  The extra checks added by the recompression of toast data introduced in bbe0a81 are proving to have a performance impact on VACUUM or CLUSTER even if no recompression is done.  This is more noticeable with more toastable columns that contain non-NULL values.
  Improvements could be done to make those extra checks less expensive, but that's not material for 14 at this stage, and we are not sure either if the code path of VACUUM FULL/CLUSTER is adapted for this job.
  Per discussion with several people, including Andres Freund, Robert Haas, Álvaro Herrera, Tom Lane and myself.
  Discussion: https://postgr.es/m/20210527003144.xxqppojoiwurc2iz@alap3.anarazel.de
* Fix pg_visibility regression failure with CLOBBER_CACHE_ALWAYS  (Tomas Vondra, 2021-06-08)
  Commit 8e03eb92e9 reverted a bit too much code, reintroducing one of the issues fixed by 39b66a91bd - a page might have been left partially empty after relcache invalidation.
  Reported-By: Tom Lane
  Author: Masahiko Sawada
  Discussion: https://postgr.es/m/822752.1623032114@sss.pgh.pa.us
  Discussion: https://postgr.es/m/CAD21AoA%3D%3Df2VSw3c-Cp_y%3DWLKHMKc1D6s7g3YWsCOvgaYPpJcg%40mail.gmail.com
* Standardize usages of appendStringInfo and appendPQExpBuffer  (David Rowley, 2021-06-03)
  Fix a few places that were using appendStringInfo() when they should have been using appendStringInfoString().  Also some cases of appendPQExpBuffer() that would have been better suited to use appendPQExpBufferChar(), and finally, some places that used appendPQExpBuffer() when appendPQExpBufferStr() would have suited better.
  No bugs are being fixed here.  The aim is just to make the code use the most optimal function for the job.
  All the code being changed here is new to PG14.  It makes sense to fix these before we branch for PG15.  There are a few other places that we could fix, but those cases are older code so fixing those seems less worthwhile, as it may cause unnecessary back-patching pain in the future.
  Author: Hou Zhijie
  Discussion: https://postgr.es/m/OS0PR01MB5716732158B1C4142C6FE375943D9@OS0PR01MB5716.jpnprd01.prod.outlook.com
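  Illustrative substitutions of the kind being made (buf is a StringInfo, out a PQExpBuffer):

      appendStringInfoString(buf, "constant text");   /* not appendStringInfo(buf, "constant text") */
      appendPQExpBufferStr(out, "constant text");     /* not appendPQExpBuffer(out, "constant text") */
      appendPQExpBufferChar(out, '\n');               /* not appendPQExpBuffer(out, "\n") */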
* Revert most of 39b66a91bd  (Tomas Vondra, 2021-06-03)
  Reverts most of commit 39b66a91bd, which was found to cause significant regression for REFRESH MATERIALIZED VIEW.  This means only rows inserted by heap_multi_insert will benefit from the optimization, implemented in commit 7db0cd2145.
  Reported-by: Masahiko Sawada
  Discussion: https://postgr.es/m/CAD21AoA%3D%3Df2VSw3c-Cp_y%3DWLKHMKc1D6s7g3YWsCOvgaYPpJcg%40mail.gmail.com
* Fix VACUUM VERBOSE's LP_DEAD item pages output.  (Peter Geoghegan, 2021-05-27)
  Oversight in commit 5100010e.
* Rethink definition of pg_attribute.attcompression.  (Tom Lane, 2021-05-27)
  Redefine '\0' (InvalidCompressionMethod) as meaning "if we need to compress, use the current setting of default_toast_compression".  This allows '\0' to be a suitable default choice regardless of datatype, greatly simplifying code paths that initialize tupledescs and the like.  It seems like a more user-friendly approach as well, because now the default compression choice doesn't migrate into table definitions, meaning that changing default_toast_compression is usually sufficient to flip an installation's behavior; one needn't tediously issue per-column ALTER SET COMPRESSION commands.
  Along the way, fix a few minor bugs and documentation issues with the per-column-compression feature.  Adopt more robust APIs for SetIndexStorageProperties and GetAttributeCompression.
  Bump catversion because typical contents of attcompression will now be different.  We could get away without doing that, but it seems better to ensure v14 installations all agree on this.  (We already forced initdb for beta2, anyway.)
  Discussion: https://postgr.es/m/626613.1621787110@sss.pgh.pa.us
* Fix typo in heapam.c  (Michael Paquier, 2021-05-26)
  Author: Hou Zhijie
  Discussion: https://postgr.es/m/OS0PR01MB571612191738540B27A8DE5894249@OS0PR01MB5716.jpnprd01.prod.outlook.com
* Fix memory leak when de-toasting compressed values in VACUUM FULL/CLUSTER  (Michael Paquier, 2021-05-25)
  VACUUM FULL and CLUSTER can be used to enforce the use of the existing compression method of a toastable column if a value currently stored is compressed with a method that does not match the column's defined method.  The code in charge of decompressing and recompressing toast values at rewrite left around the detoasted values, causing an accumulation of memory allocated in TopTransactionContext.
  When processing large relations, this could cause the system to run out of memory.  The detoasted values are not needed once their tuple is rewritten, and this commit ensures that the necessary cleanup happens.
  Issue introduced by bbe0a81d.  The comments of the area are reordered a bit while on it.
  Reported-by: Andres Freund
  Analyzed-by: Andres Freund
  Author: Michael Paquier
  Reviewed-by: Dilip Kumar
  Discussion: https://postgr.es/m/20210521211929.pcehg6f23icwstdb@alap3.anarazel.de
* Consider triggering VACUUM failsafe during scan.  (Peter Geoghegan, 2021-05-24)
  The wraparound failsafe mechanism added by commit 1e55e7d1 handled the one-pass strategy case (i.e. the "table has no indexes" case) by adding a dedicated failsafe check.  This made up for the fact that the usual one-pass checks inside lazy_vacuum_all_indexes() cannot ever be reached during a one-pass strategy VACUUM.
  This approach failed to account for two-pass VACUUMs that opt out of index vacuuming up-front.  The INDEX_CLEANUP off case is the only case that works like that.  Fix this by performing a failsafe check every 4GB during the first scan of the heap, regardless of the details of the VACUUM.  This eliminates the special case, and will make the failsafe trigger more reliably.
  Author: Peter Geoghegan <pg@bowt.ie>
  Reported-By: Andres Freund <andres@anarazel.de>
  Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>
  Discussion: https://postgr.es/m/20210424002921.pb3t7h6frupdqnkp@alap3.anarazel.de
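  A sketch of the periodic check in the first heap pass (constant and function names reproduced from memory; treat as an approximation):

      #define FAILSAFE_EVERY_PAGES \
          ((BlockNumber) (((uint64) 4 * 1024 * 1024 * 1024) / BLCKSZ))

      /* inside the per-block loop of the first heap pass */
      if (blkno - next_failsafe_block >= FAILSAFE_EVERY_PAGES)
      {
          lazy_check_wraparound_failsafe(vacrel);
          next_failsafe_block = blkno;
      }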
* Fix autovacuum log output heap truncation issue.  (Peter Geoghegan, 2021-05-13)
  The percentage of blocks from the table value reported by autovacuum log output (following commit 5100010ee4d) should never exceed 100% because it describes the state of the table back when lazy_vacuum() was called.  The value could nevertheless exceed 100% in the event of heap relation truncation.  We failed to compensate for how truncation affects rel_pages.
  Fix the faulty accounting by using the original rel_pages value instead of the current/final rel_pages value.
  Reported-By: Andres Freund <andres@anarazel.de>
  Discussion: https://postgr.es/m/20210423204306.5osfpkt2ggaedyvy@alap3.anarazel.de
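  Illustrative arithmetic only (variable names are assumptions): the "% of table" figure has to be computed against the page count captured when lazy_vacuum() was called, so later truncation cannot push it past 100%.

      double      pct_of_table;

      pct_of_table = (orig_rel_pages == 0) ? 0.0 :
          100.0 * (double) lpdead_item_pages / (double) orig_rel_pages;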
* Initial pgindent and pgperltidy run for v14.  (Tom Lane, 2021-05-12)
  Also "make reformat-dat-files".
  The only change worthy of note is that pgindent messed up the formatting of launcher.c's struct LogicalRepWorkerId, which led me to notice that that struct wasn't used at all anymore, so I just took it out.
* Change data type of counters in BufferUsage and WalUsage from long to int64.  (Fujii Masao, 2021-05-12)
  Previously long was used as the data type for some counters in BufferUsage and WalUsage.  But long is only four bytes, e.g., on Windows, and it's entirely possible to wrap a four byte counter.  For example, emitting more than four billion WAL records in one transaction isn't actually particularly rare.
  To avoid the overflows of those counters, this commit changes the data type of them from long to int64.
  Suggested-by: Andres Freund
  Author: Masahiro Ikeda
  Reviewed-by: Fujii Masao
  Discussion: https://postgr.es/m/20201221211650.k7b53tcnadrciqjo@alap3.anarazel.de
  Discussion: https://postgr.es/m/af0964ac-7080-1984-dc23-513754987716@oss.nttdata.com
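  A sketch of the widened counters (only a few representative fields shown):

      typedef struct BufferUsage
      {
          int64       shared_blks_hit;        /* was: long */
          int64       shared_blks_read;       /* was: long */
          int64       shared_blks_dirtied;    /* was: long */
          int64       shared_blks_written;    /* was: long */
          /* ... remaining block and temp-block counters widened the same way ... */
      } BufferUsage;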
* Fix mishandling of resjunk columns in ON CONFLICT ... UPDATE tlists.  (Tom Lane, 2021-05-10)
  It's unusual to have any resjunk columns in an ON CONFLICT ... UPDATE list, but it can happen when MULTIEXPR_SUBLINK SubPlans are present.  If it happens, the ON CONFLICT UPDATE code path would end up storing tuples that include the values of the extra resjunk columns.  That's fairly harmless in the short run, but if new columns are added to the table then the values would become accessible, possibly leading to malfunctions if they don't match the datatypes of the new columns.
  This had escaped notice through a confluence of missing sanity checks, including
  * There's no cross-check that a tuple presented to heap_insert or heap_update matches the table rowtype.  While it's difficult to check that fully at reasonable cost, we can easily add assertions that there aren't too many columns.
  * The output-column-assignment cases in execExprInterp.c lacked any sanity checks on the output column numbers, which seems like an oversight considering there are plenty of assertion checks on input column numbers.  Add assertions there too.
  * We failed to apply nodeModifyTable's ExecCheckPlanOutput() to the ON CONFLICT UPDATE tlist.  That wouldn't have caught this specific error, since that function is chartered to ignore resjunk columns; but it sure seems like a bad omission now that we've seen this bug.
  In HEAD, the right way to fix this is to make the processing of ON CONFLICT UPDATE tlists work the same as regular UPDATE tlists now do, that is don't add "SET x = x" entries, and use ExecBuildUpdateProjection to evaluate the tlist and combine it with old values of the not-set columns.  This adds a little complication to ExecBuildUpdateProjection, but allows removal of a comparable amount of now-dead code from the planner.
  In the back branches, the most expedient solution seems to be to (a) use an output slot for the ON CONFLICT UPDATE projection that actually matches the target table, and then (b) invent a variant of ExecBuildProjectionInfo that can be told to not store values resulting from resjunk columns, so it doesn't try to store into nonexistent columns of the output slot.  (We can't simply ignore the resjunk columns altogether; they have to be evaluated for MULTIEXPR_SUBLINK to work.)  This works back to v10.  In 9.6, projections work much differently and we can't cheaply give them such an option.  The 9.6 version of this patch works by inserting a JunkFilter when it's necessary to get rid of resjunk columns.
  In addition, v11 and up have the reverse problem when trying to perform ON CONFLICT UPDATE on a partitioned table.  Through a further oversight, adjust_partition_tlist() discarded resjunk columns when re-ordering the ON CONFLICT UPDATE tlist to match a partition.  This accidentally prevented the storing-bogus-tuples problem, but at the cost that MULTIEXPR_SUBLINK cases didn't work, typically crashing if more than one row has to be updated.  Fix by preserving resjunk columns in that routine.  (I failed to resist the temptation to add more assertions there too, and to do some minor code beautification.)
  Per report from Andres Freund.  Back-patch to all supported branches.
  Security: CVE-2021-32028
* Remove overzealous VACUUM visibility map assertion.  (Peter Geoghegan, 2021-05-06)
  The all_visible_according_to_vm variable's value is inherently prone to becoming invalidated concurrently, since it is set before we even acquire a lock on a related heap page buffer.
  Oversight in commit 7136bf34, which added the assertion in passing.
  Author: Masahiko Sawada <sawada.mshk@gmail.com>
  Reported-By: Tang <tanghy.fnst@fujitsu.com>
  Diagnosed-By: Masahiko Sawada <sawada.mshk@gmail.com>
  Discussion: https://postgr.es/m/CAD21AoDzgc8_MYrA5m1fyydomw_eVKtQiYh7sfDK4KEhdMsf_g@mail.gmail.com
* Document LP_DEAD accounting issues in VACUUM.  (Peter Geoghegan, 2021-04-19)
  Document VACUUM's soft assumption that any LP_DEAD items encountered during pruning will become LP_UNUSED items before VACUUM finishes up.  This is integral to the accounting used by VACUUM to generate its final report on the table to the stats collector.  It also affects how VACUUM determines which heap pages are truncatable.  In both cases VACUUM is concerned with the likely contents of the page in the near future, not the current contents of the page.
  This state of affairs created the false impression that VACUUM's dead tuple accounting had significant differences from the similar accounting used during ANALYZE.  There were and are no substantive differences, at least when the soft assumption completely works out.  This is far clearer now.
  Also document cases where things don't quite work out for VACUUM's dead tuple accounting.  It's possible that a significant number of LP_DEAD items will be left behind by VACUUM, and won't be recorded as remaining dead tuples in VACUUM's statistics collector report.  This behavior dates back to commit a96c41fe, which taught VACUUM to run without index and heap vacuuming at the user's request.  The failsafe mechanism added to VACUUM more recently by commit 1e55e7d1 takes the same approach to dead tuple accounting.
  Reported-By: Masahiko Sawada <sawada.mshk@gmail.com>
  Discussion: https://postgr.es/m/CAH2-Wz=Jmtu18PrsYq3EvvZJGOmZqSO2u3bvKpx9xJa5uhNp=Q@mail.gmail.com
* Use correct format placeholder for block numbers  (Peter Eisentraut, 2021-04-17)
  Should be %u rather than %d.
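  BlockNumber is an unsigned 32-bit type, so messages that print one should use %u, for example:

      BlockNumber blkno = 4000000000;     /* a valid block number, but it prints as negative with %d */

      elog(DEBUG2, "processing block %u", blkno);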
* Don't truncate heap when VACUUM's failsafe is in effect.  (Peter Geoghegan, 2021-04-13)
  It seems like a good idea to bypass heap truncation when the wraparound failsafe mechanism (which was added in commit 1e55e7d1) is in effect.
  Deliberately don't bypass heap truncation in the INDEX_CLEANUP=off case, even though it is similar to the failsafe case.  There is already a separate reloption (and related VACUUM parameter) for that.
  Reported-By: Masahiko Sawada <sawada.mshk@gmail.com>
  Discussion: https://postgr.es/m/CAD21AoDWRh6oTN5T8wa+cpZUVpHXET8BJ8Da7WHVHpwkPP6KLg@mail.gmail.com
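  A hypothetical sketch of the new bail-out in the should-we-truncate test (the field name is an assumption):

      /* in the "should we attempt heap truncation?" test */
      if (vacrel->failsafe_active)
          return false;       /* wraparound failsafe triggered: skip truncation */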