path: root/src/backend/commands
...
* Remove workarounds for avoiding [U]INT64_FORMAT in translatable strings. (Tom Lane, 2022-03-21)

  Further code simplification along the same lines as d914eb347 and earlier patches.

  Aleksander Alekseev, Japin Li

  Discussion: https://postgr.es/m/CAJ7c6TMSKi3Xs8h5MP38XOnQQpBLazJvVxVfPn++roitDJcR7g@mail.gmail.com
* Enforce foreign key correctly during cross-partition updates (Alvaro Herrera, 2022-03-20)

  When an update on a partitioned table referenced in foreign key constraints causes a row to move from one partition to another, the fact that the move is implemented as a delete followed by an insert on the target partition causes the foreign key triggers to have surprising behavior. For example, a given foreign key's delete trigger which implements the ON DELETE CASCADE clause of that key will delete any referencing rows when triggered for that internal DELETE, although it should not, because the referenced row is simply being moved from one partition of the referenced root partitioned table into another, not being deleted from it.

  This commit teaches trigger.c to skip queuing such delete trigger events on the leaf partitions in favor of an UPDATE event fired on the root target relation. Doing so is sensible because both the old and the new tuple "logically" belong to the root relation.

  The after trigger event queuing interface now allows passing the source and the target partitions of a particular cross-partition update when registering the update event for the root partitioned table. Along with the two ctids of the old and the new tuple, the after trigger event now also stores the OIDs of those partitions. The tuples fetched from the source and the target partitions are converted into the root table format, if necessary, before they are passed to the trigger function.

  The implementation currently has a limitation that only the foreign keys pointing into the query's target relation are considered, not those of its sub-partitioned partitions. That seems like a reasonable limitation, because it sounds rare to have distinct foreign keys pointing to sub-partitioned partitions instead of to the root table.

  This misbehavior stems from commit f56f8f8da6af (which added support for foreign keys to reference partitioned tables) not paying sufficient attention to commit 2f178441044b (which had introduced cross-partition updates a year earlier). Even though the former commit goes back to Postgres 12, we're not backpatching this fix at this time for fear of destabilizing things too much, and because there are a few ABI breaks in it that we'd have to work around in older branches. It also depends on commit f4566345cf40, which had its own share of backpatchability issues.

  Author: Amit Langote <amitlangote09@gmail.com>
  Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
  Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
  Reported-by: Eduard Català <eduard.catala@gmail.com>
  Discussion: https://postgr.es/m/CA+HiwqFvkBCmfwkQX_yBqv2Wz8ugUGiBDxum8=WvVbfU1TXaNg@mail.gmail.com
  Discussion: https://postgr.es/m/CAL54xNZsLwEM1XCk5yW9EqaRzsZYHuWsHQkA2L5MOSKXAwviCQ@mail.gmail.com
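  As an illustrative sketch (not part of the commit; table names invented):

      CREATE TABLE pk (a int PRIMARY KEY) PARTITION BY LIST (a);
      CREATE TABLE pk1 PARTITION OF pk FOR VALUES IN (1);
      CREATE TABLE pk2 PARTITION OF pk FOR VALUES IN (2);
      CREATE TABLE fk (a int REFERENCES pk ON DELETE CASCADE);
      INSERT INTO pk VALUES (1);
      INSERT INTO fk VALUES (1);
      UPDATE pk SET a = 2;
      -- The row move is internally a DELETE from pk1 plus an INSERT into pk2.
      -- Previously the ON DELETE CASCADE trigger fired for that internal
      -- DELETE and silently removed the fk row; the event is now queued as an
      -- UPDATE on the root table, so the normal ON UPDATE action applies.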
* Fix global ICU collations for ICU < 54 (Peter Eisentraut, 2022-03-20)

  createdb() didn't check the validity of collation attributes, which has to be done explicitly on ICU < 54. It also forgot to close the ICU collator opened during the check, which leaks some memory. To fix both, add a new check_icu_locale() that does all the appropriate verification and closes the ICU collator.

  initdb also had a partial check for ICU < 54. To have consistent error reporting across major ICU versions, and to get rid of the need to include ucol.h, remove the partial check there. The backend will report an error if needed during the post-bootstrap initialization phase.

  Author: Julien Rouhaud <julien.rouhaud@free.fr>
  Discussion: https://www.postgresql.org/message-id/20220319041459.qqqiqh335sga5ezj@jrouhaud
* Silence -Wmaybe-uninitialized compiler warning in dbcommands.c. (Andres Freund, 2022-03-18)

  Introduced in f2553d43060e. See also 3f6b3be39ca9, which did so for nearby variables.

  Discussion: https://postgr.es/m/20220319014707.kgtomqdzm6m2ulro@alap3.anarazel.de
* Add option to use ICU as global locale provider (Peter Eisentraut, 2022-03-17)

  This adds the option to use ICU as the default locale provider for either the whole cluster or a database. New options for initdb, createdb, and CREATE DATABASE are used to select this.

  Since some (legacy) code still uses the libc locale facilities directly, we still need to set the libc global locale settings even if ICU is otherwise selected. So pg_database now has three locale-related fields: the existing datcollate and datctype, which are always set, and a new daticulocale, which is only set if ICU is selected. A similar change is made in pg_collation for consistency, but in that case, only the libc-related fields or the ICU-related field is set, never both.

  Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/5e756dd6-0e91-d778-96fd-b1bcb06c161a%402ndquadrant.com
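  A sketch of the new options (database and locale names invented):

      -- Cluster-wide: initdb --locale-provider=icu --icu-locale=en
      -- Per database:
      CREATE DATABASE mydb
          TEMPLATE template0
          LOCALE_PROVIDER icu
          ICU_LOCALE 'en-US';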
* Fix publish_as_relid with multiple publications (Tomas Vondra, 2022-03-16)

  Commit 83fd4532a7 allowed publishing of changes via ancestors, for publications defined with publish_via_partition_root. But the way the ancestor was determined in get_rel_sync_entry() was incorrect, simply updating the same variable. So with multiple publications, replicating different ancestors, the outcome depended on the order of publications in the list - the value from the last loop was used, even if it wasn't the top-most ancestor.

  This is probably a rare situation, as in most cases publications do not overlap, so each partition has exactly one candidate ancestor to replicate as and there's no ambiguity.

  Fixed by tracking the "ancestor level" for each publication, and picking the top-most ancestor. Adds a test case, verifying the correct ancestor is used for publishing the changes and that this does not depend on the order of publications in the list.

  Older releases have another bug in this loop - once all actions are replicated, the loop is terminated, on the assumption that inspecting additional publications is unnecessary. But that misses the fact that those additional publications may replicate different ancestors. Fixed by removal of this break condition. We might still terminate the loop in some cases (e.g. when replicating all actions and the ancestor is the partition root).

  Backpatch to 13, where publish_via_partition_root was introduced.

  Initial report and fix by me, test added by Hou zj. Reviews and improvements by Amit Kapila.

  Author: Tomas Vondra, Hou zj, Amit Kapila
  Reviewed-by: Amit Kapila, Hou zj
  Discussion: https://postgr.es/m/d26d24dd-2fab-3c48-0162-2b7f84a9c893%40enterprisedb.com
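  A hedged sketch of the ambiguity being fixed (names invented; tbl is assumed to be the root of a two-level partition hierarchy containing tbl_part):

      CREATE PUBLICATION pub_mid  FOR TABLE tbl_part
          WITH (publish_via_partition_root = true);
      CREATE PUBLICATION pub_root FOR TABLE tbl
          WITH (publish_via_partition_root = true);
      -- A subscription using both publications now always publishes leaf
      -- changes as the top-most ancestor (tbl), regardless of the order in
      -- which the publications appear in the list.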
* Fix collection of typos in the code and the documentation (Michael Paquier, 2022-03-15)

  Some words were duplicated while other places were grammatically incorrect, including one variable name in the code.

  Author: Otto Kekalainen, Justin Pryzby
  Discussion: https://postgr.es/m/7DDBEFC5-09B6-4325-B942-B563D1A24BDC@amazon.com
* Optionally disable subscriptions on error. (Amit Kapila, 2022-03-14)

  Logical replication apply workers for a subscription can easily get stuck in an infinite loop of attempting to apply a change, triggering an error (such as a constraint violation), exiting with the error written to the subscription server log, and restarting.

  To partially remedy the situation, this patch adds a new subscription option named 'disable_on_error'. To be consistent with old behavior, this option defaults to false. When true, both the tablesync worker and apply worker catch any errors thrown and disable the subscription in order to break the loop. The error is still also written in the logs.

  Once the subscription is disabled, users can either manually resolve the conflict/error or skip the conflicting transaction by using the pg_replication_origin_advance() function. After resolving the conflict, users need to enable the subscription to allow the apply process to proceed.

  Author: Osumi Takamichi and Mark Dilger
  Reviewed-by: Greg Nancarrow, Vignesh C, Amit Kapila, Wang wei, Tang Haiying, Peter Smith, Masahiko Sawada, Shi Yu
  Discussion: https://postgr.es/m/DB35438F-9356-4841-89A0-412709EBD3AB%40enterprisedb.com
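  A minimal usage sketch (connection string and object names invented):

      CREATE SUBSCRIPTION mysub
          CONNECTION 'host=publisher dbname=postgres'
          PUBLICATION mypub
          WITH (disable_on_error = true);
      -- After resolving the conflict (or skipping the transaction with
      -- pg_replication_origin_advance()), re-enable the subscription:
      ALTER SUBSCRIPTION mysub ENABLE;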
* Add API of sorts for transition table handling in trigger.c (Alvaro Herrera, 2022-03-11)

  Preparatory patch for further additions in this area, particularly to allow MERGE to have separate transition tables for each action.

  Author: Pavan Deolasee <pavan.deolasee@gmail.com>
  Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
  Discussion: https://postgr.es/m/CABOikdNj+8HEJ5D8tu56mrPkjHVRrBb2_cdKWwpiYNcjXgDw8g@mail.gmail.com
  Discussion: https://postgr.es/m/20201231134736.GA25392@alvherre.pgsql
* DefineCollation() code cleanup (Peter Eisentraut, 2022-03-11)

  Reorganize the code in DefineCollation() so that the parts using the FROM clause and the parts not doing so are more cleanly separated. No functionality change intended.

  Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
  Discussion: https://www.postgresql.org/message-id/29ae752f-80e9-8d31-601c-62cf01cc93d8@enterprisedb.com
* Add pg_analyze_and_rewrite_varparams() (Peter Eisentraut, 2022-03-07)

  This new function extracts common code from PrepareQuery() and exec_parse_message(). It is then exactly analogous to the existing pg_analyze_and_rewrite_fixedparams() and pg_analyze_and_rewrite_withcb().

  To unify these two code paths, this makes PrepareQuery() now subject to log_parser_stats. Also, both paths now invoke TRACE_POSTGRESQL_QUERY_REWRITE_START(). PrepareQuery() no longer checks whether a utility statement was specified. The grammar doesn't allow that anyway, and exec_parse_message() supports it, so restricting it doesn't seem necessary.

  This also adds QueryEnvironment support to the *varparams functions, for consistency with its cousins, even though it is not used right now.

  Reviewed-by: Nathan Bossart <bossartn@amazon.com>
  Discussion: https://www.postgresql.org/message-id/flat/c67ce276-52b4-0239-dc0e-39875bf81840@enterprisedb.com
* Create routine able to set single-call SRFs for Materialize mode (Michael Paquier, 2022-03-07)

  Set-returning functions that use the Materialize mode, creating a tuplestore to include all the tuples returned in a set rather than doing so in multiple calls, use roughly the same set of steps to prepare ReturnSetInfo for this job:
  - Check if ReturnSetInfo supports returning a tuplestore and if the materialize mode is enabled.
  - Create a tuplestore for all the tuples part of the returned set in the per-query memory context, stored in ReturnSetInfo->setResult.
  - Build a tuple descriptor mostly from get_call_result_type(), then stored in ReturnSetInfo->setDesc. Note that there are some cases where the SRF's tuple descriptor has to be the one specified by the function caller.

  This refactoring is done so that there are (well, should be) no behavior changes in any of the in-core functions refactored, and the centralized function that checks and sets up the function's ReturnSetInfo can be controlled with a set of bits32 options. Two of them prove to be necessary now:
  - SRF_SINGLE_USE_EXPECTED to use expectedDesc as the tuple descriptor, as expected by the function's caller.
  - SRF_SINGLE_BLESS to validate the tuple descriptor for the SRF.

  The same initialization pattern is simplified in 28 places per my count as of src/backend/, shaving up to ~900 lines of code. These mostly come from the removal of the per-query initializations and the sanity checks now grouped in a single location. There are more locations that could be simplified in contrib/, which are left for a follow-up cleanup.

  fcc2817, 07daca5 and d61a361 have prepared the areas of the code related to this change, to ease this refactoring.

  Author: Melanie Plageman, Michael Paquier
  Reviewed-by: Álvaro Herrera, Justin Pryzby
  Discussion: https://postgr.es/m/CAAKRu_azyd1Z3W_r7Ou4sorTjRCs+PxeHw1CWJeXKofkE6TuZg@mail.gmail.com
* Parse/analyze function renaming (Peter Eisentraut, 2022-03-04)

  There are three parallel ways to call parse/analyze: with fixed parameters, with variable parameters, and by supplying your own parser callback. Some of the involved functions were confusingly named and made this API structure more confusing. This patch renames some functions to make this clearer:

  parse_analyze() -> parse_analyze_fixedparams()
  pg_analyze_and_rewrite() -> pg_analyze_and_rewrite_fixedparams()

  (Otherwise one might think this variant doesn't accept parameters, but in fact all three ways accept parameters.)

  pg_analyze_and_rewrite_params() -> pg_analyze_and_rewrite_withcb()

  (Before, and also when considering pg_analyze_and_rewrite(), one might think this is the only way to pass parameters. Moreover, the parser callback doesn't necessarily need to parse only parameters, it's just one of the things it could do.)

  parse_fixed_parameters() -> setup_parse_fixed_parameters()
  parse_variable_parameters() -> setup_parse_variable_parameters()

  (These functions don't actually do any parsing, they just set up callbacks to use during parsing later.)

  This patch also adds some const decorations to the fixed-parameters API, so the distinction from the variable-parameters API is more clear.

  Reviewed-by: Nathan Bossart <bossartn@amazon.com>
  Discussion: https://www.postgresql.org/message-id/flat/c67ce276-52b4-0239-dc0e-39875bf81840@enterprisedb.com
* Clean up and simplify code in a couple of set-returning functions (Michael Paquier, 2022-02-24)

  The following set-returning functions have their logic simplified, to be more consistent with other in-core areas:
  - pg_prepared_statement()'s tuple descriptor is now created with get_call_result_type() instead of being created from scratch, saving from some duplication with pg_proc.dat.
  - show_all_file_settings(), similarly, now uses get_call_result_type() to build its tuple descriptor instead of creating it from scratch.
  - pg_options_to_table() made use of a static routine called only once. This commit removes this internal routine to make the function easier to follow.
  - pg_config() was using a unique logic style, doing checks on the tuple descriptor passed down in expectedDesc, but it has no need to do so. This switches the function to use a tuplestore with a tuple descriptor retrieved from get_call_result_type(), instead.

  This simplifies an upcoming patch aimed at refactoring the way tuplestores are created and checked in set-returning functions; this change makes sense as its own independent cleanup by shaving some code.

  Author: Melanie Plageman, Michael Paquier
  Reviewed-by: Justin Pryzby
  Discussion: https://postgr.es/m/CAAKRu_azyd1Z3W_r7Ou4sorTjRCs+PxeHw1CWJeXKofkE6TuZg@mail.gmail.com
* Allow specifying row filters for logical replication of tables. (Amit Kapila, 2022-02-22)

  This feature adds row filtering for publication tables. When a publication is defined or modified, an optional WHERE clause can be specified. Rows that don't satisfy this WHERE clause will be filtered out. This allows a set of tables to be partially replicated. The row filter is per table. A new row filter can be added simply by specifying a WHERE clause after the table name. The WHERE clause must be enclosed by parentheses.

  The row filter WHERE clause for a table added to a publication that publishes UPDATE and/or DELETE operations must contain only columns that are covered by REPLICA IDENTITY. The row filter WHERE clause for a table added to a publication that publishes INSERT can use any column. If the row filter evaluates to NULL, it is regarded as "false". The WHERE clause only allows simple expressions that don't have user-defined functions, user-defined operators, user-defined types, user-defined collations, non-immutable built-in functions, or references to system columns. These restrictions could be addressed in the future.

  If you choose to do the initial table synchronization, only data that satisfies the row filters is copied to the subscriber. If the subscription has several publications in which a table has been published with different WHERE clauses, rows that satisfy ANY of the expressions will be copied. If a subscriber is a pre-15 version, the initial table synchronization won't use row filters even if they are defined in the publisher.

  The row filters are applied before publishing the changes. If the subscription has several publications in which the same table has been published with different filters (for the same publish operation), those expressions get OR'ed together so that rows satisfying any of the expressions will be replicated. This means all the other filters become redundant if (a) one of the publications has no filter at all, (b) one of the publications was created using FOR ALL TABLES, or (c) one of the publications was created using FOR ALL TABLES IN SCHEMA and the table belongs to that same schema.

  If your publication contains a partitioned table, the publication parameter publish_via_partition_root determines if it uses the partition's row filter (if the parameter is false, the default) or the root partitioned table's row filter.

  The psql commands \dRp+ and \d <table-name> will display any row filters.

  Author: Hou Zhijie, Euler Taveira, Peter Smith, Ajin Cherian
  Reviewed-by: Greg Nancarrow, Haiying Tang, Amit Kapila, Tomas Vondra, Dilip Kumar, Vignesh C, Alvaro Herrera, Andres Freund, Wei Wang
  Discussion: https://www.postgresql.org/message-id/flat/CAHE3wggb715X%2BmK_DitLXF25B%3DjE6xyNCH4YOwM860JR7HarGQ%40mail.gmail.com
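  A short sketch (table and column names invented; id is assumed to be the replica identity, as required for publications that publish UPDATE/DELETE):

      CREATE PUBLICATION big_items FOR TABLE items WHERE (id > 100);
      -- The filter can later be replaced:
      ALTER PUBLICATION big_items SET TABLE items WHERE (id > 1000);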
* Add compute_query_id = regress (Michael Paquier, 2022-02-22)

  "regress" is a new mode added to compute_query_id aimed at facilitating regression testing when a module computing query IDs is loaded into the backend, like pg_stat_statements. It works the same way as "auto", meaning that query IDs are computed if a module enables it, except that query IDs are hidden in EXPLAIN outputs to ensure regression output stability.

  Like any GUCs of the kind (force_parallel_mode, etc.), this new configuration can be added to an instance's postgresql.conf, or just passed down with PGOPTIONS at command level. compute_query_id uses an enum for its set of option values, meaning that this addition ensures ABI compatibility.

  Using this new configuration mode allows installcheck-world to pass when running the tests on an instance with pg_stat_statements enabled, stabilizing the test output while checking the paths doing query ID computations.

  Reported-by: Anton Melnikov
  Reviewed-by: Julien Rouhaud
  Discussion: https://postgr.es/m/1634283396.372373993@f75.i.mail.ru
  Discussion: https://postgr.es/m/YgHlxgc/OimuPYhH@paquier.xyz
  Backpatch-through: 14
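  For example:

      SET compute_query_id = regress;
      EXPLAIN (VERBOSE) SELECT 1;
      -- Query IDs are still computed for modules such as pg_stat_statements,
      -- but the "Query Identifier" line is hidden from the EXPLAIN output,
      -- keeping regression test output stable.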
* Fix comment in CheckIndexCompatible(). (Fujii Masao, 2022-02-18)

  Commit 5f173040 removed the parameter "heapRelation" from CheckIndexCompatible(), but forgot to remove the mention of it from the comment. This commit removes that unnecessary mention. Also this commit adds the missing mention of the parameter "oldId" in the comment.

  Author: Yugo Nagata
  Reviewed-by: Nathan Bossart, Fujii Masao
  Discussion: https://postgr.es/m/20220204014634.b39314f278ff4ae3de96e201@sraoss.co.jp
* Avoid VACUUM reltuples distortion. (Peter Geoghegan, 2022-02-16)

  Add a heuristic that avoids distortion in the pg_class.reltuples estimates used by VACUUM. Without the heuristic, successive manually run VACUUM commands (run against a table that is never modified after initial bulk loading) will scan the same page in each VACUUM operation. Eventually pg_class.reltuples may reach the point where one single heap page is accidentally considered highly representative of the entire table. This is likely to be completely wrong, since the last heap page typically has fewer tuples than average for the table.

  It's not obvious that this was a problem prior to commit 44fa8488, which made vacuumlazy.c consistently scan the last heap page (even when it is all-visible in the visibility map). It seems possible that there were more subtle variants of the same problem that went unnoticed for quite some time, though. Commit 44fa8488 simplified certain aspects of when and how relation truncation was considered, but it did not introduce the "scan the last page" behavior. Essentially the same behavior was introduced much earlier, in commit e8429082. Until recently it was conditioned on whether or not truncation looked promising towards the end of VACUUM's initial heap pass, which was at least somewhat protective. That doesn't seem like something that we should be relying on, though.

  Author: Peter Geoghegan <pg@bowt.ie>
  Discussion: https://postgr.es/m/CAH2-WzkNKORurux459M64mR63Aw4Jq7MBRVcX=CvALqN3A88WA@mail.gmail.com
* Remove all traces of tuplestore_donestoring() in the C code (Michael Paquier, 2022-02-17)

  This routine is a no-op since dd04e95 from 2003, with a macro kept around for compatibility purposes. This has led to the same code patterns being copy-pasted around for no effect, sometimes in confusing ways like in pg_logical_slot_get_changes_guts() from logical.c where the code was actually incorrect.

  This issue has been discussed on two different threads recently, so rather than living with this legacy, remove any uses of this routine in the C code to simplify things. The compatibility macro is kept to avoid breaking any out-of-core modules that depend on it.

  Reported-by: Tatsuhito Kasahara, Justin Pryzby
  Author: Tatsuhito Kasahara
  Discussion: https://postgr.es/m/20211217200419.GQ17618@telsasoft.com
  Discussion: https://postgr.es/m/CAP0=ZVJeeYfAeRfmzqAF2Lumdiv4S4FewyBnZd4DPTrsSQKJKw@mail.gmail.com
* Add missing TYPEALIGN macros (John Naylor, 2022-02-16)

  A couple of call sites still had hard-coded characters.

  Amul Sul

  Discussion: https://www.postgresql.org/message-id/CAAJ_b94Y35MWB3PJoCbc_O-_Q4%2B-9DHKhWtAwboEyx8wm4mqcA%40mail.gmail.com
* Reject change of output-column collation in CREATE OR REPLACE VIEW. (Tom Lane, 2022-02-15)

  checkViewTupleDesc() didn't get the memo that it should verify same attcollation along with same type/typmod. (A quick scan did not find other similar oversights.)

  Per bug #17404 from Pierre-Aurélien Georges. On another day I might've back-patched this, but today I'm feeling paranoid about unnecessary behavioral changes in back branches.

  Discussion: https://postgr.es/m/17404-8a4a270ef30a6709@postgresql.org
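  A sketch of the now-rejected case (view name invented):

      CREATE VIEW v AS SELECT 'abc'::text AS c;
      CREATE OR REPLACE VIEW v AS
          SELECT 'abc'::text COLLATE "C" AS c;
      -- now fails, since the output column's collation would change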
* Database-level collation version tracking (Peter Eisentraut, 2022-02-14)

  This adds to database objects the same version tracking that collation objects have. There is a new pg_database column datcollversion that stores the version, a new function pg_database_collation_actual_version() to get the version from the operating system, and a new subcommand ALTER DATABASE ... REFRESH COLLATION VERSION.

  This was not originally added together with pg_collation.collversion, since originally version tracking was only supported for ICU, and ICU on a database level is not currently supported. But we now have version tracking for glibc (since PG13), FreeBSD (since PG14), and Windows (since PG13), so this is useful to have now.

  Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/f0ff3190-29a3-5b39-a179-fa32eee57db6%40enterprisedb.com
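  A usage sketch (database name invented):

      -- Compare the recorded version with what the OS currently reports:
      SELECT datcollversion,
             pg_database_collation_actual_version(oid) AS actual
      FROM pg_database WHERE datname = current_database();
      -- After checking objects affected by a collation change, record the
      -- new version:
      ALTER DATABASE mydb REFRESH COLLATION VERSION;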
* Consolidate VACUUM xid cutoff logic. (Peter Geoghegan, 2022-02-11)

  Push the logic for determining whether or not a VACUUM operation will be aggressive down into vacuum_set_xid_limits(). This makes the function's signature significantly simpler, and seems clearer overall.

  Author: Peter Geoghegan <pg@bowt.ie>
  Discussion: https://postgr.es/m/CAH2-WzkymFbz6D_vL+jmqSn_5q1wsFvFrE+37yLgL_Rkfd6Gzg@mail.gmail.com
* Add VACUUM instrumentation for scanned pages, relfrozenxid. (Peter Geoghegan, 2022-02-11)

  Report on scanned pages within VACUUM VERBOSE and autovacuum logging. These are pages that were physically examined during the VACUUM operation. Note that this can include a small number of pages that were marked all-visible in the visibility map by some earlier VACUUM operation. VACUUM won't skip all-visible pages that aren't part of a range of all-visible pages that's at least 32 blocks in length (partly to avoid missing out on opportunities to advance relfrozenxid during non-aggressive VACUUMs).

  Commit 44fa8488 simplified the definition of scanned pages. It became the complement of the pages (of those pages from rel_pages) that were skipped using the visibility map. And so scanned pages precisely indicates how effective the visibility map was at saving work. (Before now we displayed the number of pages skipped via the visibility map when they happened to be frozen pages, but not when they were merely all-visible, which was less useful to users.)

  Rename the user-visible OldestXmin output field to "removal cutoff", and show some supplementary information: how far behind the cutoff is (number of XIDs behind) by the time the VACUUM operation finished. This will help users to figure out what's _not_ working in extreme cases where VACUUM is fundamentally unable to remove dead tuples or freeze older tuples (e.g., due to a leaked replication slot). Also report when relfrozenxid is advanced by VACUUM in output that immediately follows "removal cutoff". This structure is intended to highlight the relationship between the new relfrozenxid value for the table, and the VACUUM operation's removal cutoff.

  Finally, add instrumentation of "missed dead tuples", and the number of pages that had at least one such tuple. These are fully DEAD (not just RECENTLY_DEAD) tuples with storage that could not be pruned due to failure to acquire a cleanup lock on a heap page. This is a replacement for the "skipped due to pin" instrumentation removed by commit 44fa8488. It shows more details than before for pages where failing to get a cleanup lock actually resulted in VACUUM missing out on useful work, but usually shows nothing at all instead (the mere fact that we couldn't get a cleanup lock is usually of no consequence whatsoever now).

  Author: Peter Geoghegan <pg@bowt.ie>
  Reviewed-By: Andres Freund <andres@anarazel.de>
  Discussion: https://postgr.es/m/CAH2-Wznp=c=Opj8Z7RMR3G=ec3_JfGYMN_YvmCEjoPCHzWbx0g@mail.gmail.com
* Fix DROP {DATABASE,TABLESPACE} on Windows. (Thomas Munro, 2022-02-12)

  Previously, it was possible for DROP DATABASE, DROP TABLESPACE and ALTER DATABASE SET TABLESPACE to fail because other backends still had file handles open for dropped tables. Windows won't allow a directory containing unlinked-but-still-open files to be unlinked. Tackle this problem by forcing all backends to close all smgr fds. No change for Unix systems, which don't suffer from the problem, but the new code path can be tested by Unix-based developers by defining USE_BARRIER_SMGRRELEASE explicitly.

  It's possible that PROCSIGNAL_BARRIER_SMGRRELEASE will have more bug-fixing applications soon (under discussion). Note that this is the first user of the ProcSignalBarrier mechanism from commit 16a4e4aec. It could in principle be back-patched as far as 14, but since field complaints are rare and ProcSignalBarrier hasn't been battle-tested, that seems like a bad idea. Fix in master only, where these failures have started to show up in automated testing due to new tests.

  Suggested-by: Andres Freund <andres@anarazel.de>
  Reviewed-by: Andres Freund <andres@anarazel.de>
  Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
  Reviewed-by: Robert Haas <robertmhaas@gmail.com>
  Discussion: https://postgr.es/m/CA+hUKGLdemy2gBm80kz20GTe6hNVwoErE8KwcJk6-U56oStjtg@mail.gmail.com
* Logical decoding of sequences (Tomas Vondra, 2022-02-10)

  This extends the logical decoding to also decode sequence increments. We differentiate between sequences created in the current (in-progress) transaction, and sequences created earlier. This mixed behavior is necessary because while sequences are not transactional (increments are not subject to ROLLBACK), relfilenode changes are. So we do this:

  * Changes for sequences created in the same top-level transaction are treated as transactional, i.e. just like any other change from that transaction, and discarded in case of a rollback.

  * Changes for sequences created earlier are applied immediately, as if performed outside any transaction. This applies also after ALTER SEQUENCE, which may create a new relfilenode.

  Moreover, if we ever get support for DDL replication, the sequence won't exist until the transaction gets applied.

  Sequences created in the current transaction are tracked in a simple hash table, identified by a relfilenode. That means a sequence may already exist, but if a transaction does ALTER SEQUENCE then the increments for the new relfilenode will be treated as transactional. For each relfilenode we track the XID of the (sub)transaction that created it, which is needed for cleanup at transaction end. We don't need to check the XID to decide if an increment is transactional - if we find a match in the hash table, it has to be the same transaction.

  This requires two minor changes to WAL-logging. Firstly, we need to ensure the sequence record has a valid XID - until now the increment might have XID 0 if it was the first change in a subxact. But the sequence might have been created in the same top-level transaction. So we ensure the XID is assigned when WAL-logging increments.

  The other change is the addition of a "created" flag, marking increments for newly created relfilenodes. This makes it easier to maintain the hash table of sequences that need transactional handling. Note: This is needed because of subxacts. An increment with XID 0 might still be for a sequence created in a different subxact of the same top-level xact.

  This does not include any changes to test_decoding and/or the built-in replication - those will be committed in separate patches.

  A patch adding decoding of sequences was originally submitted by Cary Huang. This commit reworks various important aspects (e.g. the WAL logging and transactional/non-transactional handling). However, the original patch and reviews were very useful.

  Author: Tomas Vondra, Cary Huang
  Reviewed-by: Peter Eisentraut, Hannu Krosing, Andres Freund
  Discussion: https://postgr.es/m/d045f3c2-6cfb-06d3-5540-e63c320df8bc@enterprisedb.com
  Discussion: https://postgr.es/m/1710ed7e13b.cd7177461430746.3372264562543607781@highgo.ca
* Reduce more the number of calls to GetMaxBackends() (Michael Paquier, 2022-02-10)

  Some of the code paths changed by aa64f23 can reduce the number of times GetMaxBackends() is called. The performance gain is marginal, but most of the code changed by this commit already did that. Hence, let's be clean and apply the same rule everywhere, for consistency.

  Some of the code paths, like in deadlock.c, involve only assertions. These are left unchanged.

  Reviewed-by: Nathan Bossart, Robert Haas
  Discussion: https://postgr.es/m/YgMpGZhPOjNfS7er@paquier.xyz
* Remove MaxBackends variable in favor of GetMaxBackends() function. (Robert Haas, 2022-02-08)

  Previously, it was really easy to write code that accessed MaxBackends before we'd actually initialized it, especially when coding up an extension. To make this less error-prone, introduce a new function GetMaxBackends() which should be used to obtain the correct value. This will ERROR if called too early. Demote the global variable to a file-level static, so that nobody can peek at it directly.

  Nathan Bossart. Idea by Andres Freund. Review by Greg Sabino Mullane, by Michael Paquier (who had doubts about the approach), and by me.

  Discussion: http://postgr.es/m/20210802224204.bckcikl45uezv5e4@alap3.anarazel.de
* Add UNIQUE null treatment option (Peter Eisentraut, 2022-02-03)

  The SQL standard has been ambiguous about whether null values in unique constraints should be considered equal or not. Different implementations have different behaviors. In the SQL:202x draft, this has been formalized by making this implementation-defined and adding an option on unique constraint definitions UNIQUE [ NULLS [NOT] DISTINCT ] to choose a behavior explicitly.

  This patch adds this option to PostgreSQL. The default behavior remains UNIQUE NULLS DISTINCT. Making this happen in the btree code is pretty easy; most of the patch is just to carry the flag around to all the places that need it.

  The CREATE UNIQUE INDEX syntax extension is not from the standard, it's my own invention.

  I named all the internal flags, catalog columns, etc. in the negative ("nulls not distinct") so that the default PostgreSQL behavior is the default if the flag is false.

  Reviewed-by: Maxim Orlov <orlovmg@gmail.com>
  Reviewed-by: Pavel Borisov <pashkin.elfe@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/84e5ee1b-387e-9a54-c326-9082674bde78@enterprisedb.com
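  For example (table names invented):

      CREATE TABLE t (x int UNIQUE NULLS NOT DISTINCT);
      INSERT INTO t VALUES (NULL);
      INSERT INTO t VALUES (NULL);  -- rejected: nulls are treated as equal
      -- The (non-standard) index syntax:
      CREATE UNIQUE INDEX t2_x ON t2 (x) NULLS NOT DISTINCT;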
* Some cleanup for change of collate and ctype fields to type text (Peter Eisentraut, 2022-02-02)

  Some cleanup for commit 54637508f87bd5f07fb9406bac6b08240283be3b: Reformat pg_database.dat to reflect the new field order. Also update the corresponding example in bki.sgml. Reorder the way the fields are filled in dbcommands.c to correspond to the new order.
* Remove xloginsert.h from xlog.h (Alvaro Herrera, 2022-01-30)

  xlog.h is directly and indirectly #included in a lot of places. With this change, xloginsert.h is no longer unnecessarily included in the large number of them that don't need it.

  Author: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
  Discussion: https://postgr.es/m/CALj2ACVe-W+WM5P44N7eG9C2_FmaeM8Dq5aCnD3fHt0Ba=WR6w@mail.gmail.com
* Fix incorrect memory context switch in COPY TO execution (Michael Paquier, 2022-01-29)

  c532d15 has split the logic of COPY commands into multiple files, one change being to move the internals of BeginCopy() to BeginCopyTo(). Originally the code was written so that we'd switch back and forth between the current execution memory context and the dedicated memory context for the COPY command, and this refactoring introduced an extra switch to the current memory context from the COPY context once BeginCopyTo() is done with the logic that used to be in BeginCopy(). The code was correctly doing the analyze, rewrite and planning phases in the COPY context, but it was not assigning "copy_file" (the FILE* used when copying to a source file) and "filename" in the COPY context, making the COPY status data inconsistent.

  Author: Bharath Rupireddy
  Reviewed-by: Japin Li
  Discussion: https://postgr.es/m/CALj2ACWvVa69foi9jhHFY=2BuHxAoYboyE+vXQTARwxZcJnVrQ@mail.gmail.com
  Backpatch-through: 14
* Add HEADER support to COPY text format (Peter Eisentraut, 2022-01-28)

  The COPY CSV format supports the HEADER option to output a header line. This patch adds the same option to the default text format. On input, the HEADER option causes the first line to be skipped, same as with CSV.

  Author: Rémi Lapeyre <remi.lapeyre@lenstra.fr>
  Discussion: https://www.postgresql.org/message-id/flat/CAF1-J-0PtCWMeLtswwGV2M70U26n4g33gpe1rcKQqe6wVQDrFA@mail.gmail.com
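  For example (table name and file path invented):

      COPY t TO '/tmp/t.dat' (FORMAT text, HEADER);   -- writes a header line
      COPY t FROM '/tmp/t.dat' (FORMAT text, HEADER); -- skips the first line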
* Add some const decorations (Peter Eisentraut, 2022-01-28)
* Change collate and ctype fields to type text (Peter Eisentraut, 2022-01-27)

  This changes the data type of the catalog fields datcollate, datctype, collcollate, and collctype from name to text. There wasn't ever a really good reason for them to be of type name; presumably this was just carried over from when they were fixed-size fields in pg_control, first into the corresponding pg_database fields, and then to pg_collation. The values are not identifiers or object names, and we don't ever look them up that way.

  Changing to type text saves space in the typical case, since locale names are typically only a few bytes long. But it is also possible that an ICU locale name with several customization options appended could be longer than 63 bytes, so this also enables that case, which was previously probably broken.

  Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/5e756dd6-0e91-d778-96fd-b1bcb06c161a@2ndquadrant.com
* Fix various typos, grammar and code style in comments and docs (Michael Paquier, 2022-01-25)

  This fixes a set of issues that have accumulated over the past months (or years) in various code areas. Most fixes are related to some recent additions, as of the development of v15.

  Author: Justin Pryzby
  Discussion: https://postgr.es/m/20220124030001.GQ23027@telsasoft.com
* pg_upgrade: Preserve database OIDs. (Robert Haas, 2022-01-24)

  Commit 9a974cbcba005256a19991203583a94b4f9a21a9 arranged to preserve relfilenodes and tablespace OIDs. For similar reasons, also arrange to preserve database OIDs.

  One problem is that, up until now, the OIDs assigned to the template0 and postgres databases have not been fixed. This could be a problem when upgrading, because pg_upgrade might try to migrate a database from the old cluster to the new cluster while keeping the OID and find a different database with that OID, resulting in a failure. If it finds a database with the same name and the same OID that's OK: it will be dropped and recreated. But the same OID and a different name is a problem.

  To prevent that, fix the OIDs for postgres and template0 to specific values less than 16384. To avoid running afoul of this rule, these values should not be changed in future releases. It's not a problem that these OIDs aren't fixed in existing releases, because the OIDs that we're assigning here weren't used for either of these databases in any previous release. Thus, there's no chance that an upgrade of a cluster from any previous release will collide with the OIDs we're assigning here. And going forward, the OIDs will always be fixed, so the only potential collision is with a system database having the same name and the same OID, which is OK.

  This patch lets users assign a specific OID to a database as well, provided however that it can't be less than 16384. I (rhaas) thought it might be better not to expose this capability to users, but the consensus was otherwise, so the syntax is documented. Letting users assign OIDs below 16384 would not be OK, though, because a user-created database with a low-numbered OID might collide with a system-created database in a future release. We therefore prohibit that.

  Shruthi KC, based on an earlier patch from Antonin Houska, reviewed and with some adjustments by me.

  Discussion: http://postgr.es/m/CA+TgmoYgTwYcUmB=e8+hRHOFA0kkS6Kde85+UNdon6q7bt1niQ@mail.gmail.com
  Discussion: http://postgr.es/m/CAASxf_Mnwm1Dh2vd5FAhVX6S1nwNSZUB1z12VddYtM++H2+p7w@mail.gmail.com
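  A sketch of the user-facing syntax (database name invented):

      CREATE DATABASE mydb OID 50000;  -- OIDs below 16384 are rejected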
* Check syscache result in AlterStatistics (Tomas Vondra, 2022-01-23)

  The syscache lookup may return NULL even for a valid OID, for example due to a concurrent DROP STATISTICS, so a HeapTupleIsValid check is necessary. Without it, it may fail with a segfault.

  Reported by Alexander Lakhin, patch by me. Backpatch to 13, where ALTER STATISTICS ... SET STATISTICS was introduced.

  Backpatch-through: 13
  Discussion: https://postgr.es/m/17372-bf3b6e947e35ae77%40postgresql.org
* Remove useless inline marker. (Tom Lane, 2022-01-22)

  Putting "inline" on a function that's not used anywhere in its own file is useless unless the linker is doing global optimization, a method we don't generally enable. Moreover, it draws warnings from some buildfarm members (curculio at least).

  Looks like this was sloppiness in cc8b25712, which moved the function from somewhere else where the inline marker was more appropriate.
* Remove 'datlastsysoid'. (Robert Haas, 2022-01-20)

  It hasn't been used for anything for a long time. Up until recently, we still queried it when dumping very old servers, but since commit 30e7c175b81d53c0f60f6ad12d1913a6d7d77008, there's no longer any code at all that cares about it.

  Discussion: http://postgr.es/m/CA+Tgmoa14=BRq0WEd0eevjEMn9EkghDB1FZEkBw7+UAb7tF49A@mail.gmail.com
* Remove redundant memory context switches in BeginCopyFrom(). (Tom Lane, 2022-01-19)

  This is probably a leftover from code refactoring.

  Japin Li

  Discussion: https://postgr.es/m/MEYP282MB16693DDABDFEC7949AC31857B6599@MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM
* pg_upgrade: Preserve relfilenodes and tablespace OIDs. (Robert Haas, 2022-01-17)

  Currently, database OIDs, relfilenodes, and tablespace OIDs can all change when a cluster is upgraded using pg_upgrade. It seems better to preserve them, because (1) it makes troubleshooting pg_upgrade easier, since you don't have to do a lot of work to match up files in the old and new clusters, (2) it allows 'rsync' to save bandwidth when used to re-sync a cluster after an upgrade, and (3) if we ever encrypt or sign blocks, we would likely want to use a nonce that depends on these values.

  This patch only arranges to preserve relfilenodes and tablespace OIDs. Database OIDs have a similar issue, but there are some tricky points in that case that do not apply to these cases, so that problem is left for another patch.

  Shruthi KC, based on an earlier patch from Antonin Houska, reviewed and with some adjustments by me.

  Discussion: http://postgr.es/m/CA+TgmoYgTwYcUmB=e8+hRHOFA0kkS6Kde85+UNdon6q7bt1niQ@mail.gmail.com
* Add Boolean node (Peter Eisentraut, 2022-01-17)

  Before, SQL-level boolean constants were represented by a string with a cast, and internal Boolean values in DDL commands were usually represented by Integer nodes. This takes the place of both of these uses, making the intent clearer and having some amount of type safety.

  Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/8c1a2e37-c68d-703c-5a83-7a6077f4f997@enterprisedb.com
* Add stxdinherit flag to pg_statistic_ext_data (Tomas Vondra, 2022-01-16)

  Add pg_statistic_ext_data.stxdinherit flag, so that for each extended statistics definition we can store two versions of data - one for the relation alone, one for the whole inheritance tree. This is analogous to pg_statistic.stainherit, but we failed to include such a flag in catalogs for extended statistics, and we had to work around it (see commits 859b3003de, 36c4bc6e72 and 20b9fa308e).

  This changes the relationship between the two catalogs storing extended statistics objects (pg_statistic_ext and pg_statistic_ext_data). Until now, there was a simple 1:1 mapping - for each definition there was one pg_statistic_ext_data row, and this row was inserted while creating the statistics (and then updated during ANALYZE). With the stxdinherit flag, we don't know how many rows there will be (child relations may be added after the statistics object is defined), so there may be up to two rows.

  We could make CREATE STATISTICS always create both rows, but that seems wasteful - without partitioning we only need stxdinherit=false rows, and declaratively partitioned tables need only stxdinherit=true. So we no longer initialize pg_statistic_ext_data in CREATE STATISTICS, and instead make that a responsibility of ANALYZE. Which is what we do for regular statistics too.

  Patch by me, with extensive improvements and fixes by Justin Pryzby.

  Author: Tomas Vondra, Justin Pryzby
  Reviewed-by: Tomas Vondra, Justin Pryzby
  Discussion: https://postgr.es/m/20210923212624.GI831%40telsasoft.com
* Build inherited extended stats on partitioned tables (Tomas Vondra, 2022-01-15)

  Commit 859b3003de disabled building of extended stats for inheritance trees, to prevent updating the same catalog row twice. While that resolved the issue, it also means there are no extended stats for declaratively partitioned tables, because there are no data in the non-leaf relations.

  That also means declaratively partitioned tables were not affected by the issue 859b3003de addressed, which means this is a regression affecting queries that calculate estimates for the inheritance tree as a whole (which includes e.g. GROUP BY queries).

  But because partitioned tables are empty, we can invert the condition and build statistics only for the case with inheritance, without losing anything. And we can consider them when calculating estimates.

  It may be necessary to run ANALYZE on partitioned tables, to collect proper statistics. For declarative partitioning there should be no prior statistics, and it might take time before autoanalyze is triggered. For tables partitioned by inheritance the statistics may include data from child relations (if built before 859b3003de), contradicting the current code.

  Report and patch by Justin Pryzby, minor fixes and cleanup by me. Backpatch all the way back to PostgreSQL 10, where extended statistics were introduced (same as 859b3003de).

  Author: Justin Pryzby
  Reported-by: Justin Pryzby
  Backpatch-through: 10
  Discussion: https://postgr.es/m/20210923212624.GI831%40telsasoft.com
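  A sketch of the resulting workflow (names invented):

      CREATE TABLE t (a int, b int) PARTITION BY RANGE (a);
      CREATE TABLE t1 PARTITION OF t FOR VALUES FROM (0) TO (100);
      CREATE STATISTICS s (dependencies) ON a, b FROM t;
      ANALYZE t;  -- builds the inherited statistics for the partition tree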
* Unify VACUUM VERBOSE and autovacuum logging. (Peter Geoghegan, 2022-01-14)

  The log_autovacuum_min_duration instrumentation used its own dedicated code for logging, which was not reused by VACUUM VERBOSE. This was highly duplicative, and sometimes led to each code path using slightly different accounting for essentially the same information.

  Clean things up by making VACUUM VERBOSE reuse the same instrumentation code. This code restructuring changes the structure of the VACUUM VERBOSE output itself, but that seems like an overall improvement. The most noticeable change in VACUUM VERBOSE output is that it no longer outputs a distinct message per index per round of index vacuuming. Most of the same information (about each index) is now shown in its new per-operation summary message. This is far more legible.

  A few details are no longer displayed by VACUUM VERBOSE, but that's no real loss in practice, especially in the common case where we don't need multiple index scans/rounds of vacuuming. This super fine-grained information is still available via DEBUG2 messages, which might still be useful in debugging scenarios.

  VACUUM VERBOSE now shows new instrumentation, which is typically very useful: all of the log_autovacuum_min_duration instrumentation that it missed out on before now. This includes information about WAL overhead, buffers hit/missed/dirtied information, and I/O timing information.

  VACUUM VERBOSE still retains a few INFO messages of its own. This is limited to output concerning the progress of heap rel truncation, as well as some basic information about parallel workers. These details are still potentially quite useful. They aren't a good fit for the log output, which must summarize the whole operation.

  Author: Peter Geoghegan <pg@bowt.ie>
  Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>
  Reviewed-By: Andres Freund <andres@anarazel.de>
  Discussion: https://postgr.es/m/CAH2-WzmW4Me7_qR4X4ka7pxP-jGmn7=Npma_-Z-9Y1eD0MQRLw@mail.gmail.com
* Allow "in place" tablespaces.Thomas Munro2022-01-15
| | | | | | | | | | | | | Provide a developer-only GUC allow_in_place_tablespaces, disabled by default. When enabled, tablespaces can be created with an empty LOCATION string, meaning that they should be created as a directory directly beneath pg_tblspc. This can be used for new testing scenarios, in a follow-up patch. Not intended for end-user usage, since it might confuse backup tools that expect symlinks. Reviewed-by: Andres Freund <andres@anarazel.de> Reviewed-by: Michael Paquier <michael@paquier.xyz> Discussion: https://postgr.es/m/CA%2BhUKGKpRWQ9SxdxxDmTBCJoR0YnFpMBe7kyzY8SUQk%2BHeskxg%40mail.gmail.com
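  For example (tablespace name invented):

      SET allow_in_place_tablespaces = true;
      CREATE TABLESPACE ts_test LOCATION '';
      -- creates a directory directly under pg_tblspc instead of a symlink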
* Rename value node fields (Peter Eisentraut, 2022-01-14)

  For the formerly-Value node types, rename the "val" field to a name specific to the node type, namely "ival", "fval", "sval", and "bsval". This makes some code clearer and catches mixups better.

  Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/8c1a2e37-c68d-703c-5a83-7a6077f4f997@enterprisedb.com
* Refactor AlterRole() (Peter Eisentraut, 2022-01-14)

  Get rid of the three-valued logic for the Boolean variables to track whether the value has been specified and what the new value should be. Instead, we can use the "dfoo" variables to determine whether the value was specified and should be applied. This was already done in some cases, so this makes things more uniform and removes one layer of indirection.

  Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/8c1a2e37-c68d-703c-5a83-7a6077f4f997@enterprisedb.com
* Improve error message for missing extension. (Tom Lane, 2022-01-11)

  If we get ENOENT while trying to read an extension control file, report that as a missing extension (with a HINT to install it) rather than as a filesystem access problem. The message wording was extensively bikeshedded in hopes of pointing people to the idea that they need to do a software installation before they can install the extension into the current database.

  Nathan Bossart, with review/wording suggestions from Daniel Gustafsson, Chapman Flack, and myself

  Discussion: https://postgr.es/m/3950D56A-4E47-48E7-BF9B-F5F22E268BE7@amazon.com
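  For example (extension name invented):

      CREATE EXTENSION no_such_extension;
      -- ENOENT on the control file is now reported as a missing extension,
      -- with a HINT that it must first be installed on the system where
      -- PostgreSQL is running, rather than as a filesystem access error.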