path: root/src
...
* Harden nbtree page deletion.  (Peter Geoghegan, 2021-02-04)

  Add some additional defensive checks in the second phase of index deletion to detect and report index corruption during VACUUM, and to avoid having VACUUM become stuck in more cases. The code is still not robust in the presence of a circular chain of sibling links, though it's not clear whether that really matters. This is follow-up work to commit 3a01f68e.

  The new defensive checks rely on the assumption that there can be no more than one VACUUM operation running for an index at any given time. Remove an old comment suggesting that multiple concurrent VACUUMs need to be considered here. This concern now seems highly unlikely to have any real validity, since we clearly rely on the same assumption in several other places. For example, there are much more recent comments that appear in the same function (added by commit efada2b8e92) that make the same assumption.

  Also add a CHECK_FOR_INTERRUPTS() to the relevant code path. Contrary to comments added by commit 3a01f68e, it is actually possible to handle interrupts here, at least in the common case where processing takes place at the leaf level. We only hold a pin on leafbuf/target page when stepping right at the leaf level.

  No backpatch due to the lack of complaints following hardening added to the same area by commit 3a01f68e.
* Fix small error in COPY FROM progress reporting.  (Heikki Linnakangas, 2021-02-04)

  The number of bytes processed was accumulated slightly incorrectly. After loading more data into the input buffer, we added the number of bytes in the buffer to the sum. But in case of multi-byte characters or escapes, there can be a few unprocessed bytes left over from the previous load in the buffer. Those bytes got counted twice.
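  The counter in question feeds the bytes_processed column of the pg_stat_progress_copy view added in this release cycle. A rough sketch of how to watch it from another session while a long COPY FROM runs, assuming the view's column names as currently documented:

      -- Run in a separate session while a COPY FROM is in progress.
      SELECT relid::regclass AS table_name,
             bytes_processed,
             bytes_total,        -- reported as 0 when the total size is unknown
             tuples_processed
      FROM pg_stat_progress_copy;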
* Refactor Windows error message for easier translation  (Peter Eisentraut, 2021-02-04)

  In the error messages referring to the user right "Lock pages in memory", this is a term from the Windows OS, so it should be translated in accordance with the OS localization. Refactor the error messages so this is easier and clearer. Also fix the capitalization to match the existing capitalization in the OS.
* Ensure unlinking of old index file with REINDEX (TABLESPACE)  (Michael Paquier, 2021-02-04)

  The original versions of the patch included this part, but a mismerge from my side has made this piece go missing.

  Oversight in c5b28604.
* Clarify comment in tablesync.c  (Michael Paquier, 2021-02-04)

  Author: Peter Smith
  Reviewed-by: Amit Kapila, Michael Paquier, Euler Taveira
  Discussion: https://postgr.es/m/CAHut+Pt9_T6pWar0FLtPsygNmme8HPWPdGUyZ_8mE1Yvjdf0ZA@mail.gmail.com
* Add TABLESPACE option to REINDEX  (Michael Paquier, 2021-02-04)

  This patch adds the possibility to move indexes to a new tablespace while rebuilding them. Both the concurrent and the non-concurrent cases are supported, and the following set of restrictions apply:
  - When using TABLESPACE with a REINDEX command that targets a partitioned table or index, all the indexes of the leaf partitions are moved to the new tablespace. The tablespace references of the non-leaf, partitioned tables in pg_class.reltablespace are not changed. This requires an extra ALTER TABLE SET TABLESPACE.
  - Any index on a toast table rebuilt as part of a parent table is kept in its original tablespace.
  - The operation is forbidden on system catalogs, including trying to directly move a toast relation with REINDEX. This results in an error if doing REINDEX on a single object. REINDEX SCHEMA, DATABASE and SYSTEM skip system relations when TABLESPACE is used.

  Author: Alexey Kondratov, Michael Paquier, Justin Pryzby
  Reviewed-by: Álvaro Herrera, Michael Paquier
  Discussion: https://postgr.es/m/8a8f5f73-00d3-55f8-7583-1375ca8f6a91@postgrespro.ru
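  A minimal sketch of the new syntax; the table, index, and tablespace names here are hypothetical:

      -- Rebuild all indexes of a table, moving them into another tablespace.
      REINDEX (TABLESPACE fast_ssd) TABLE my_table;

      -- Concurrent rebuild of a single index into the new tablespace.
      REINDEX (TABLESPACE fast_ssd) INDEX CONCURRENTLY my_table_pkey;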
* Avoid crash when rolling back within a prepared statement.  (Tom Lane, 2021-02-03)

  If a portal is used to run a prepared CALL or DO statement that contains a ROLLBACK, PortalRunMulti fails because the portal's statement list gets cleared by the rollback. (Since the grammar doesn't allow CALL/DO in PREPARE, the only easy way to get to this is via extended query protocol, which treats all inputs as prepared statements.) It's difficult to avoid resetting the portal early because of resource-management issues, so work around this by teaching PortalRunMulti to be wary of portal->stmts having suddenly become NIL.

  The crash has only been seen to occur in v13 and HEAD (as a consequence of commit 1cff1b95a having added an extra touch of portal->stmts). But even before that, the code involved touching a List that the portal no longer has any claim on. In the test case at hand, the List will still exist because of another refcount on the cached plan; but I'm far from convinced that it's impossible for the cached plan to have been dropped by the time control gets back to PortalRunMulti. Hence, backpatch to v11 where nested transactions were added.

  Thomas Munro and Tom Lane, per bug #16811 from James Inform

  Discussion: https://postgr.es/m/16811-c1b599b2c6c2d622@postgresql.org
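  The kind of statement involved looks roughly like the following sketch (hypothetical object names). Note that the crash only appeared when such a CALL was sent as a prepared statement over the extended query protocol, e.g. by a driver, rather than as a simple query:

      CREATE TABLE audit_log (logged_at timestamptz);

      CREATE PROCEDURE demo_rollback() LANGUAGE plpgsql AS $$
      BEGIN
          INSERT INTO audit_log VALUES (now());
          ROLLBACK;   -- transaction control is allowed inside CALL
      END
      $$;

      -- Harmless as a simple query; the problematic path was reached when a
      -- driver prepared and executed this via the extended query protocol.
      CALL demo_rollback();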
* Factor pattern-construction logic out of processSQLNamePattern.  (Robert Haas, 2021-02-03)

  The logic for converting the shell-glob-like syntax supported by utilities like psql and pg_dump to regular expressions is extracted into a new function patternToSQLRegex. The existing function processSQLNamePattern now uses this function as a subroutine.

  patternToSQLRegex is a little more general than what is required by processSQLNamePattern. That function is only interested in patterns that can have up to 2 parts, a schema and a relation; but patternToSQLRegex can limit the maximum number of parts to between 1 and 3, so that patterns can look like either "database.schema.relation", "schema.relation", or "relation" depending on how it's invoked and what the user specifies.

  processSQLNamePattern only passes two buffers, so works exactly the same as before, always interpreting the pattern as either a "schema.relation" pattern or a "relation" pattern. But, future callers can use this function in other ways.

  Mark Dilger, reviewed by me. The larger patch series of which this is a part has also had review from Peter Geoghegan, Andres Freund, Álvaro Herrera, Michael Paquier, and Amul Sul, but I don't know whether any of them have reviewed this bit specifically.

  Discussion: http://postgr.es/m/12ED3DA8-25F0-4B68-937D-D907CFBF08E7@enterprisedb.com
  Discussion: http://postgr.es/m/5F743835-3399-419C-8324-2D424237E999@enterprisedb.com
  Discussion: http://postgr.es/m/70655DF3-33CE-4527-9A4D-DDEB582B6BA0@enterprisedb.com
* Remove special BKI_LOOKUP magic for namespace and role OIDs.  (Tom Lane, 2021-02-03)

  Now that commit 62f34097c attached BKI_LOOKUP annotation to all the namespace and role OID columns in the catalogs, there's no real reason to have the magic PGNSP and PGUID symbols. Get rid of them in favor of implementing those lookups according to genbki.pl's normal pattern.

  This means that in the catalog headers, BKI_DEFAULT(PGNSP) becomes BKI_DEFAULT(pg_catalog), which seems a lot more transparent. BKI_DEFAULT(PGUID) becomes BKI_DEFAULT(POSTGRES), which is perhaps less so; but you can look into pg_authid.dat to discover that POSTGRES is the nonce name for the bootstrap superuser.

  This change also means that if we ever need cross-references in the initial catalog data to any of the other built-in roles besides POSTGRES, or to some other built-in schema besides pg_catalog, we can just do it.

  No catversion bump here, as there's no actual change in the contents of postgres.bki.

  Discussion: https://postgr.es/m/3240355.1612129197@sss.pgh.pa.us
* pg_dump: Fix dumping of inherited generated columns  (Peter Eisentraut, 2021-02-03)

  Generation expressions of generated columns are always inherited, so there is no need to set them separately in child tables, and there is no syntax to do so either. The code previously used the code paths for the handling of default values, for which different rules apply; in particular it might want to set a default value explicitly for an inherited column. This resulted in unrestorable dumps. For generated columns, just skip them in inherited tables.

  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
  Discussion: https://www.postgresql.org/message-id/flat/15830.1575468847%40sss.pgh.pa.us
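  For reference, a minimal schema of the shape that previously produced an unrestorable dump (hypothetical table names); the child inherits the generation expression, so pg_dump must not emit a separate one for it:

      CREATE TABLE parent (
          a int,
          b int GENERATED ALWAYS AS (a * 2) STORED
      );

      -- Column b is inherited together with its generation expression; there
      -- is no syntax to redefine it on the child.
      CREATE TABLE child (c text) INHERITS (parent);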
* Retire findoidjoins.  (Tom Lane, 2021-02-02)

  In the wake of commit 62f34097c, we no longer need this tool.

  Discussion: https://postgr.es/m/3240355.1612129197@sss.pgh.pa.us
* Build in some knowledge about foreign-key relationships in the catalogs.  (Tom Lane, 2021-02-02)

  This follows in the spirit of commit dfb75e478, which created primary key and uniqueness constraints to improve the visibility of constraints imposed on the system catalogs. While our catalogs contain many foreign-key-like relationships, they don't quite follow SQL semantics, in that the convention for an omitted reference is to write zero not NULL. Plus, we have some cases in which there are arrays each of whose elements is supposed to be an FK reference; SQL has no way to model that. So we can't create actual foreign key constraints to describe the situation.

  Nonetheless, we can collect and use knowledge about these relationships. This patch therefore adds annotations to the catalog header files to declare foreign-key relationships. (The BKI_LOOKUP annotations cover simple cases, but we weren't previously distinguishing which such columns are allowed to contain zeroes; we also need new markings for multi-column FK references.) Then, Catalog.pm and genbki.pl are taught to collect this information into a table in a new generated header "system_fk_info.h". The only user of that at the moment is a new SQL function pg_get_catalog_foreign_keys(), which exposes the table to SQL.

  The oidjoins regression test is rewritten to use pg_get_catalog_foreign_keys() to find out which columns to check. Aside from removing the need for manual maintenance of that test script, this allows it to cover numerous relationships that were not checked by the old implementation based on findoidjoins. (As of this commit, 217 relationships are checked by the test, versus 181 before.)

  Discussion: https://postgr.es/m/3240355.1612129197@sss.pgh.pa.us
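  A quick way to inspect the new metadata from SQL, sketched under the assumption that the function's output columns are named as in its accompanying documentation:

      -- List the catalog foreign-key relationships that point at pg_class.
      SELECT fktable, fkcols, pktable, pkcols, is_array, is_opt
      FROM pg_get_catalog_foreign_keys()
      WHERE pktable = 'pg_class'::regclass
      ORDER BY fktable::text;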
* Remove extra increment of plpgsql's statement counter for FOR loops.  (Tom Lane, 2021-02-02)

  This left gaps in the internal statement numbering, which is not terribly harmful (else we'd have noticed sooner), but it's not great either.

  Oversight in bbd5c207b; backpatch to v12 where that came in.

  Pavel Stehule

  Discussion: https://postgr.es/m/CAFj8pRDXyQaJmpotNTQVc-t-WxdWZC35V2PnmwOaV1-taidFWA@mail.gmail.com
* Improve confusing variable names  (Peter Eisentraut, 2021-02-02)

  The prototype calls the second argument of pgstat_progress_update_multi_param() "index", and some callers name their local variable that way. But when the surrounding code deals with index relations, this is confusing, and in at least one case shadowed another variable that is referring to an index relation. Adjust those call sites to have clearer local variable naming, similar to existing callers in indexcmds.c.
* Remove unused column atttypmod from initial tablesync query  (Michael Paquier, 2021-02-02)

  The initial tablesync done by logical replication used a query to fetch the information of a relation's columns that included atttypmod, but it was left unused. This was added by 7c4f524.

  Author: Euler Taveira
  Reviewed-by: Önder Kalacı, Amit Langote, Japin Li
  Discussion: https://postgr.es/m/CAHE3wggb715X+mK_DitLXF25B=jE6xyNCH4YOwM860JR7HarGQ@mail.gmail.com
* Remove [Merge]AppendPath.partitioned_rels.  (Tom Lane, 2021-02-01)

  It turns out that the calculation of [Merge]AppendPath.partitioned_rels in allpaths.c is faulty and sometimes omits relevant non-leaf partitions, allowing an assertion added by commit a929e17e5a8 to trigger. Rather than fix that, it seems better to get rid of those fields altogether. We don't really need the info until create_plan time, and calculating it once for the selected plan should be cheaper than calculating it for each append path we consider.

  The preceding two commits did away with all use of the partitioned_rels values; this commit just mechanically removes the fields and the code that calculated them.

  Discussion: https://postgr.es/m/87sg8tqhsl.fsf@aurora.ydns.eu
  Discussion: https://postgr.es/m/CAJKUy5gCXDSmFs2c=R+VGgn7FiYcLCsEFEuDNNLGfoha=pBE_g@mail.gmail.com
* Remove incidental dependencies on partitioned_rels lists.  (Tom Lane, 2021-02-01)

  It turns out that the calculation of [Merge]AppendPath.partitioned_rels in allpaths.c is faulty and sometimes omits relevant non-leaf partitions, allowing an assertion added by commit a929e17e5a8 to trigger. Rather than fix that, it seems better to get rid of those fields altogether. We don't really need the info until create_plan time, and calculating it once for the selected plan should be cheaper than calculating it for each append path we consider.

  This patch undoes a couple of very minor uses of the partitioned_rels values. createplan.c was testing for nil-ness to optimize away the preparatory work for make_partition_pruneinfo(). That is worth doing if the check is nigh free, but it's not worth going to any great lengths to avoid.

  create_append_path() was testing for nil-ness as part of deciding how to set up ParamPathInfo for an AppendPath. I replaced that with a check for the appendrel's parent rel being partitioned. That's not quite the same thing but should cover most cases. If we note any interesting loss of optimizations, we can dumb this down to just always use the more expensive method when the parent is a baserel.

  Discussion: https://postgr.es/m/87sg8tqhsl.fsf@aurora.ydns.eu
  Discussion: https://postgr.es/m/CAJKUy5gCXDSmFs2c=R+VGgn7FiYcLCsEFEuDNNLGfoha=pBE_g@mail.gmail.com
* Revise make_partition_pruneinfo to not use its partitioned_rels input.  (Tom Lane, 2021-02-01)

  It turns out that the calculation of [Merge]AppendPath.partitioned_rels in allpaths.c is faulty and sometimes omits relevant non-leaf partitions, allowing an assertion added by commit a929e17e5a8 to trigger. Rather than fix that, it seems better to get rid of those fields altogether. We don't really need the info until create_plan time, and calculating it once for the selected plan should be cheaper than calculating it for each append path we consider.

  As a first step, teach make_partition_pruneinfo to collect the relevant partitioned tables for itself. It's not hard to do so by traversing from child tables up to parents using the AppendRelInfo links.

  While here, make some minor stylistic improvements; mainly, don't use the "Relids" alias for bitmapsets that are not identities of any relation considered by the planner. Try to document the logic better, too.

  No backpatch, as there does not seem to be a live problem before a929e17e5a8. Also no new regression test; the code where the bug was will be gone at the end of this patch series, so it seems a bit pointless to memorialize the issue.

  Tom Lane and David Rowley, per reports from Andreas Seltenreich and Jaime Casanova.

  Discussion: https://postgr.es/m/87sg8tqhsl.fsf@aurora.ydns.eu
  Discussion: https://postgr.es/m/CAJKUy5gCXDSmFs2c=R+VGgn7FiYcLCsEFEuDNNLGfoha=pBE_g@mail.gmail.com
* SEARCH and CYCLE clauses  (Peter Eisentraut, 2021-02-01)

  This adds the SQL standard feature that adds the SEARCH and CYCLE clauses to recursive queries, to be able to produce breadth- or depth-first search orders and detect cycles. These clauses can be rewritten into queries using existing syntax, and that is what this patch does in the rewriter.

  Reviewed-by: Vik Fearing <vik@postgresfriends.org>
  Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/db80ceee-6f97-9b4a-8ee8-3ba0c58e5be2@2ndquadrant.com
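  A brief sketch of the new syntax on a hypothetical self-referencing table named graph:

      -- Depth-first ordering plus cycle detection for a recursive query.
      WITH RECURSIVE search_graph(id, link) AS (
          SELECT g.id, g.link FROM graph g
        UNION ALL
          SELECT g.id, g.link
          FROM graph g
          JOIN search_graph sg ON g.id = sg.link
      )
      SEARCH DEPTH FIRST BY id SET ordercol
      CYCLE id SET is_cycle TO true DEFAULT false USING path
      SELECT * FROM search_graph
      WHERE NOT is_cycle
      ORDER BY ordercol;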
* Get rid of unnecessary memory allocation in jsonb_subscript_assign()  (Alexander Korotkov, 2021-02-01)

  The current code allocates memory for a JsonbValue, but it could be placed locally.
* Introduce --with-ssl={openssl} as a configure option  (Michael Paquier, 2021-02-01)

  This is a replacement for the existing --with-openssl, extending the logic to make the addition of new SSL libraries easier. The grammar is chosen to be similar to --with-uuid, where multiple values can be chosen, with "openssl" as the only supported value for now. The original switch, --with-openssl, is kept for compatibility.

  Author: Daniel Gustafsson, Michael Paquier
  Reviewed-by: Jacob Champion
  Discussion: https://postgr.es/m/FAB21FC8-0F62-434F-AA78-6BD9336D630A@yesql.se
* Fix portability issue in new jsonbsubs code.  (Tom Lane, 2021-02-01)

  On machines where sizeof(Datum) > sizeof(Oid) (that is, any 64-bit platform), the previous coding would compute a misaligned workspace->index pointer if nupper is odd. Architectures where misaligned access is a hard no-no would then fail. This appears to explain why thorntail is unhappy but other buildfarm members are not.
* Throw error when assigning jsonb scalar instead of a composite object  (Alexander Korotkov, 2021-01-31)

  During a jsonb subscripting assignment, the provided path might assume an object or an array where the source jsonb has a scalar value. The initial subscripting assignment logic would skip such an update operation with no message shown. This commit makes it throw an error to indicate this type of situation.

  Discussion: https://postgr.es/m/CA%2Bq6zcV8qvGcDXurwwgUbwACV86Th7G80pnubg42e-p9gsSf%3Dg%40mail.gmail.com
  Discussion: https://postgr.es/m/CA%2Bq6zcX3mdxGCgdThzuySwH-ApyHHM-G4oB1R0fn0j2hZqqkLQ%40mail.gmail.com
  Discussion: https://postgr.es/m/CA%2Bq6zcVDuGBv%3DM0FqBYX8DPebS3F_0KQ6OVFobGJPM507_SZ_w%40mail.gmail.com
  Discussion: https://postgr.es/m/CA%2Bq6zcVovR%2BXY4mfk-7oNk-rF91gH0PebnNfuUjuuDsyHjOcVA%40mail.gmail.com
  Author: Dmitry Dolgov
  Reviewed-by: Tom Lane, Arthur Zakirov, Pavel Stehule, Dian M Fay
  Reviewed-by: Andrew Dunstan, Chapman Flack, Merlin Moncure, Peter Geoghegan
  Reviewed-by: Alvaro Herrera, Jim Nasby, Josh Berkus, Victor Wagner
  Reviewed-by: Aleksander Alekseev, Robert Haas, Oleg Bartunov
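  The situation in question, sketched with a hypothetical table: the path ['a'] assumes an object, but the stored value is a scalar, which now raises an error instead of being silently ignored.

      CREATE TABLE docs (payload jsonb);
      INSERT INTO docs VALUES ('1');    -- a bare scalar, not an object

      -- Previously a silent no-op; now reports an error, because a scalar
      -- cannot be treated as an object with key 'a'.
      UPDATE docs SET payload['a'] = '"x"';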
* Filling array gaps during jsonb subscripting  (Alexander Korotkov, 2021-01-31)

  This commit introduces two new flags for jsonb assignment:
  * JB_PATH_FILL_GAPS: When appending array elements at the specified position, gaps are filled with nulls (similar to the JavaScript behavior). This mode also instructs the code to create the whole path in a jsonb object if some part of the path (more than just the last element) is not present.
  * JB_PATH_CONSISTENT_POSITION: Assignment keeps array positions consistent by preventing prepending of elements.

  Both flags are used only in jsonb subscripting assignment.

  Initially proposed by Nikita Glukhov based on the polymorphic subscripting patch, but transformed into an independent change.

  Discussion: https://postgr.es/m/CA%2Bq6zcV8qvGcDXurwwgUbwACV86Th7G80pnubg42e-p9gsSf%3Dg%40mail.gmail.com
  Discussion: https://postgr.es/m/CA%2Bq6zcX3mdxGCgdThzuySwH-ApyHHM-G4oB1R0fn0j2hZqqkLQ%40mail.gmail.com
  Discussion: https://postgr.es/m/CA%2Bq6zcVDuGBv%3DM0FqBYX8DPebS3F_0KQ6OVFobGJPM507_SZ_w%40mail.gmail.com
  Discussion: https://postgr.es/m/CA%2Bq6zcVovR%2BXY4mfk-7oNk-rF91gH0PebnNfuUjuuDsyHjOcVA%40mail.gmail.com
  Author: Dmitry Dolgov
  Reviewed-by: Tom Lane, Arthur Zakirov, Pavel Stehule, Dian M Fay
  Reviewed-by: Andrew Dunstan, Chapman Flack, Merlin Moncure, Peter Geoghegan
  Reviewed-by: Alvaro Herrera, Jim Nasby, Josh Berkus, Victor Wagner
  Reviewed-by: Aleksander Alekseev, Robert Haas, Oleg Bartunov
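  How this surfaces through subscripting assignment, as a hedged sketch with a hypothetical table; the expected results in the comments assume the behavior described above:

      CREATE TABLE arr_demo (val jsonb);
      INSERT INTO arr_demo VALUES ('[0]');

      -- Assigning past the end appends, filling the gap with nulls:
      -- expected result: [0, null, null, 1]
      UPDATE arr_demo SET val[3] = '1';

      -- Creating a whole missing path inside an object also works:
      -- expected result: {"a": {"b": {"c": "x"}}}
      UPDATE arr_demo SET val = '{}';
      UPDATE arr_demo SET val['a']['b']['c'] = '"x"';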
* Implementation of subscripting for jsonb  (Alexander Korotkov, 2021-01-31)

  Subscripting for jsonb does not support slices, does not have a limit for the number of subscripts, and an assignment expects the replacement value to have jsonb type. There is also one functional difference between assignment via subscripting and assignment via jsonb_set(). When the original jsonb container is NULL, subscripting replaces it with an empty jsonb and proceeds with the assignment.

  For the sake of code reuse, we rearrange some parts of jsonb functionality to allow the usage of the same functions for jsonb_set and the assign subscripting operation.

  The original idea belongs to Oleg Bartunov.

  Catversion is bumped.

  Discussion: https://postgr.es/m/CA%2Bq6zcV8qvGcDXurwwgUbwACV86Th7G80pnubg42e-p9gsSf%3Dg%40mail.gmail.com
  Discussion: https://postgr.es/m/CA%2Bq6zcX3mdxGCgdThzuySwH-ApyHHM-G4oB1R0fn0j2hZqqkLQ%40mail.gmail.com
  Discussion: https://postgr.es/m/CA%2Bq6zcVDuGBv%3DM0FqBYX8DPebS3F_0KQ6OVFobGJPM507_SZ_w%40mail.gmail.com
  Discussion: https://postgr.es/m/CA%2Bq6zcVovR%2BXY4mfk-7oNk-rF91gH0PebnNfuUjuuDsyHjOcVA%40mail.gmail.com
  Author: Dmitry Dolgov
  Reviewed-by: Tom Lane, Arthur Zakirov, Pavel Stehule, Dian M Fay
  Reviewed-by: Andrew Dunstan, Chapman Flack, Merlin Moncure, Peter Geoghegan
  Reviewed-by: Alvaro Herrera, Jim Nasby, Josh Berkus, Victor Wagner
  Reviewed-by: Aleksander Alekseev, Robert Haas, Oleg Bartunov
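  Basic usage of the new syntax, sketched with hypothetical data:

      -- Reading with subscripts (returns jsonb, or NULL if the path is absent).
      SELECT ('{"a": {"b": 42}}'::jsonb)['a']['b'];     -- 42
      SELECT ('[10, 20, 30]'::jsonb)[1];                -- 20

      -- Writing: the assigned value must itself be (coercible to) jsonb.
      CREATE TABLE settings (opts jsonb);
      INSERT INTO settings VALUES ('{"log": {"level": "info"}}');
      UPDATE settings SET opts['log']['level'] = '"debug"';

      -- Unlike jsonb_set(), assigning into a NULL column first replaces it
      -- with an empty container and then applies the assignment.
      UPDATE settings SET opts = NULL;
      UPDATE settings SET opts['fresh'] = 'true';       -- {"fresh": true}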
* Remove unused _bt_delitems_delete() argument.  (Peter Geoghegan, 2021-01-31)

  The latestRemovedXid values used by nbtree deletion operations are determined by _bt_delitems_delete()'s caller, so there is no reason to pass a separate heapRel argument.

  Oversight in commit d168b666823.
* Fix parsing of complex morphs to tsquery  (Alexander Korotkov, 2021-01-31)

  When to_tsquery() or websearch_to_tsquery() meet a complex morph containing multiple words residing in adjacent positions, these words are connected with the OP_AND operator. That leads to surprising results. For instance, both websearch_to_tsquery('"pg_class pg"') and to_tsquery('pg_class <-> pg') produce the '( pg & class ) <-> pg' tsquery. This tsquery requires the 'pg' and 'class' words to reside at the same position and doesn't match to_tsvector('pg_class pg'). It appears to be ridiculous behavior, which needs to be fixed.

  This commit makes to_tsquery() and websearch_to_tsquery() connect words residing in adjacent positions with OP_PHRASE. Therefore, those words are now normally chained with the other OP_PHRASE operators. The examples above now produce the 'pg <-> class <-> pg' tsquery, which matches to_tsvector('pg_class pg').

  Another effect of this commit is that complex morph word positions now need to match the tsvector even if there is no surrounding OP_PHRASE. This behavior change generally looks like an improvement, but it makes this commit not backpatchable.

  Reported-by: Barry Pederson
  Bug: #16592
  Discussion: https://postgr.es/m/16592-70b110ff9731c07d@postgresql.org
  Discussion: https://postgr.es/m/CAPpHfdv0EzVhf6CWfB1_TTZqXV_2Sn-jSY3zSd7ePH%3D-%2B1V2DQ%40mail.gmail.com
  Author: Alexander Korotkov
  Reviewed-by: Tom Lane, Neil Chen
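  The user-visible change, assuming the english dictionary splits 'pg_class' into 'pg' and 'class' as in the report above:

      -- Before this change these produced '( pg & class ) <-> pg', which could
      -- never match; now they produce 'pg <-> class <-> pg'.
      SELECT websearch_to_tsquery('english', '"pg_class pg"');
      SELECT to_tsquery('english', 'pg_class <-> pg');

      -- And the phrase query now matches the corresponding tsvector:
      SELECT to_tsvector('english', 'pg_class pg')
             @@ websearch_to_tsquery('english', '"pg_class pg"');   -- true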
* Add primary keys and unique constraints to system catalogs  (Peter Eisentraut, 2021-01-30)

  For those system catalogs that have unique indexes, make a primary key and unique constraints, using ALTER TABLE ... PRIMARY KEY/UNIQUE USING INDEX.

  This can be helpful for GUI tools that look for a primary key, and it might in the future allow declaring foreign keys, for making schema diagrams.

  The constraint creation statements are automatically created by genbki.pl from DECLARE_UNIQUE_INDEX directives. To specify which one of the available unique indexes is the primary key, use the new directive DECLARE_UNIQUE_INDEX_PKEY instead. By convention, we usually make a catalog's OID column its primary key, if it has one.

  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
  Discussion: https://www.postgresql.org/message-id/flat/dc5f44d9-5ec1-a596-0251-dadadcdede98@2ndquadrant.com
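  The new constraints are visible through the ordinary catalogs; for example (a sketch, with the exact constraint names following the underlying index names):

      -- Show the primary-key and unique constraints now attached to pg_class.
      SELECT conname, contype
      FROM pg_constraint
      WHERE conrelid = 'pg_class'::regclass
      ORDER BY contype, conname;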
* Allow GRANTED BY clause in normal GRANT and REVOKE statements  (Peter Eisentraut, 2021-01-30)

  The SQL standard allows a GRANTED BY clause on GRANT and REVOKE (privilege) statements that can specify CURRENT_USER or CURRENT_ROLE. In PostgreSQL, both of these are the default behavior. Since we already have all the parsing support for this for the GRANT (role) statement, we might as well add basic support for this for the privilege variant as well. This allows us to check off SQL feature T332. In the future, perhaps more interesting things could be done with this, too.

  Reviewed-by: Simon Riggs <simon@2ndquadrant.com>
  Discussion: https://www.postgresql.org/message-id/flat/f2feac44-b4c5-f38f-3699-2851d6a76dc9@2ndquadrant.com
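  The accepted (if currently redundant) syntax, with hypothetical object and role names:

      -- GRANTED BY CURRENT_USER (or CURRENT_ROLE) is now accepted on privilege
      -- GRANT/REVOKE; it simply names what would have been the grantor anyway.
      GRANT SELECT ON TABLE accounts TO reporting_role GRANTED BY CURRENT_USER;
      REVOKE SELECT ON TABLE accounts FROM reporting_role GRANTED BY CURRENT_USER;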
* Revive "snapshot too old" with wal_level=minimal and SET TABLESPACE.Noah Misch2021-01-30
| | | | | | | | | | | | | | | | | | Given a permanent relation rewritten in the current transaction, the old_snapshot_threshold mechanism assumed the relation had never been subject to early pruning. Hence, a query could fail to report "snapshot too old" when the rewrite followed an early truncation. ALTER TABLE SET TABLESPACE is probably the only rewrite mechanism capable of exposing this bug. REINDEX sets indcheckxmin, avoiding the problem. CLUSTER has zeroed page LSNs since before old_snapshot_threshold existed, so old_snapshot_threshold has never cooperated with it. ALTER TABLE ... SET DATA TYPE makes the table look empty to every past snapshot, which is strictly worse. Back-patch to v13, where commit c6b92041d38512a4176ed76ad06f713d2e6c01a8 broke this. Kyotaro Horiguchi and Noah Misch Discussion: https://postgr.es/m/20210113.160705.2225256954956139776.horikyota.ntt@gmail.com
* Fix error with CREATE PUBLICATION, wal_level=minimal, and new tables.  (Noah Misch, 2021-01-30)

  CREATE PUBLICATION has failed spuriously when applied to a permanent relation created or rewritten in the current transaction. Make the same change to another site having the same semantic intent; the second instance has no user-visible consequences.

  Back-patch to v13, where commit c6b92041d38512a4176ed76ad06f713d2e6c01a8 broke this.

  Kyotaro Horiguchi

  Discussion: https://postgr.es/m/20210113.160705.2225256954956139776.horikyota.ntt@gmail.com
* Fix CREATE INDEX CONCURRENTLY for simultaneous prepared transactions.  (Noah Misch, 2021-01-30)

  In a cluster having used CREATE INDEX CONCURRENTLY while having enabled prepared transactions, queries that use the resulting index can silently fail to find rows. Fix this for future CREATE INDEX CONCURRENTLY by making it wait for prepared transactions like it waits for ordinary transactions. This expands the VirtualTransactionId structure domain to admit prepared transactions. It may be necessary to reindex to recover from past occurrences.

  Back-patch to 9.5 (all supported versions).

  Andrey Borodin, reviewed (in earlier versions) by Tom Lane and Michael Paquier.

  Discussion: https://postgr.es/m/2E712143-97F7-4890-B470-4A35142ABC82@yandex-team.ru
* Fix typo  (Peter Eisentraut, 2021-01-29)
* Adjust comments of CheckRelationTableSpaceMove() and SetRelationTableSpace()  (Michael Paquier, 2021-01-29)

  4c9c359, which introduced those two functions, was overoptimistic on the point that only ShareUpdateExclusiveLock would be required when moving a relation to a new tablespace. AccessExclusiveLock is a requirement, but ShareUpdateExclusiveLock may be used under specific conditions like REINDEX CONCURRENTLY, where waits on past transactions make the operation safe even with a lower-level lock. The current code does only the former, so update the existing comments to reflect that. Once a REINDEX (TABLESPACE) is introduced, those comments would require an extra refresh to mention their new use case.

  While on it, fix an incorrect variable name.

  Per discussion with Álvaro Herrera.

  Discussion: https://postgr.es/m/20210127140741.GA14174@alvherre.pgsql
* Retire pg_standby.  (Thomas Munro, 2021-01-29)

  pg_standby was useful more than a decade ago, but now it is obsolete. It has been proposed many times that we retire it. Now seems like a good time to finally do it, because "waiting restore commands" are incompatible with a proposed recovery prefetching feature.

  Discussion: https://postgr.es/m/20201029024412.GP5380%40telsasoft.com
  Author: Justin Pryzby <pryzby@telsasoft.com>
  Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
  Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
  Reviewed-by: Michael Paquier <michael@paquier.xyz>
  Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
* Silence another gcc 11 warning.  (Tom Lane, 2021-01-28)

  Per buildfarm and local experimentation, bleeding-edge gcc isn't convinced that the MemSet in reorder_function_arguments() is safe. Shut it up by adding an explicit check that pronargs isn't negative, and by changing MemSet to memset. (It appears that either change is enough to quiet the warning at -O2, but let's do both to be sure.)
* Remove bogus restriction from BEFORE UPDATE triggers  (Alvaro Herrera, 2021-01-28)

  In trying to protect the user from inconsistent behavior, commit 487e9861d0cf "Enable BEFORE row-level triggers for partitioned tables" tried to prevent BEFORE UPDATE FOR EACH ROW triggers from moving the row from one partition to another. However, it turns out that the restriction is wrong in two ways: first, it fails spuriously, preventing valid situations from working, as in bug #16794; and second, it doesn't protect from any misbehavior, because tuple routing would cope anyway.

  Fix by removing that restriction. We keep the same restriction on BEFORE INSERT FOR EACH ROW triggers, though. It is valid and useful there. In the future we could remove it by having tuple reroute work for inserts as it does for updates.

  Backpatch to 13.

  Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
  Reported-by: Phillip Menke <pg@pmenke.de>
  Discussion: https://postgr.es/m/16794-350a655580fbb9ae@postgresql.org
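  A hedged sketch of the now-permitted pattern, with hypothetical names: the BEFORE UPDATE trigger changes the partition key, and tuple routing is expected to move the row to the matching partition.

      CREATE TABLE events (region text, payload text) PARTITION BY LIST (region);
      CREATE TABLE events_eu PARTITION OF events FOR VALUES IN ('eu');
      CREATE TABLE events_us PARTITION OF events FOR VALUES IN ('us');

      CREATE FUNCTION relocate_event() RETURNS trigger LANGUAGE plpgsql AS $$
      BEGIN
          NEW.region := 'us';      -- changes the partition key
          RETURN NEW;
      END
      $$;

      CREATE TRIGGER relocate_before_update
          BEFORE UPDATE ON events
          FOR EACH ROW EXECUTE FUNCTION relocate_event();

      INSERT INTO events VALUES ('eu', 'hello');
      -- Previously rejected by the removed restriction; now the updated row
      -- should end up in events_us.
      UPDATE events SET payload = 'moved' WHERE region = 'eu';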
* Fix hash partition pruning with asymmetric partition sets.  (Tom Lane, 2021-01-28)

  perform_pruning_combine_step() was not taught about the number of partition indexes used in hash partitioning; more embarrassingly, get_matching_hash_bounds() also had it wrong. These errors are masked in the common case where all the partitions have the same modulus and no partition is missing. However, with missing or unequal-size partitions, we could erroneously prune some partitions that need to be scanned, leading to silently wrong query answers.

  While a minimal-footprint fix for this could be to export get_partition_bound_num_indexes and make the incorrect functions use it, I'm of the opinion that that function should never have existed in the first place. It's not reasonable data structure design that PartitionBoundInfoData lacks any explicit record of the length of its indexes[] array. Perhaps that was all right when it could always be assumed equal to ndatums, but something should have been done about it as soon as that stopped being true. Putting in an explicit "nindexes" field makes both partition_bounds_equal() and partition_bounds_copy() simpler, safer, and faster than before, and removes explicit knowledge of the number-of-partition-indexes rules from some other places too.

  This change also makes get_hash_partition_greatest_modulus obsolete. I left that in place in case any external code uses it, but no core code does anymore.

  Per bug #16840 from Michał Albrycht. Back-patch to v11 where the hash partitioning code came in. (In the back branches, add the new field at the end of PartitionBoundInfoData to minimize ABI risks.)

  Discussion: https://postgr.es/m/16840-571a22976f829ad4@postgresql.org
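  An example of the kind of asymmetric layout that could be mispruned, sketched with hypothetical names: partitions with different moduli, and one remainder deliberately left without a partition.

      CREATE TABLE ht (id int) PARTITION BY HASH (id);

      -- Unequal moduli: one coarse partition plus a finer-grained one, with
      -- the slot for (MODULUS 4, REMAINDER 3) intentionally missing.
      CREATE TABLE ht_half PARTITION OF ht FOR VALUES WITH (MODULUS 2, REMAINDER 0);
      CREATE TABLE ht_q1   PARTITION OF ht FOR VALUES WITH (MODULUS 4, REMAINDER 1);

      -- Pruning for a query like this must not discard a partition that could
      -- hold matching rows.
      EXPLAIN SELECT * FROM ht WHERE id = 42;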
* Make ecpg's rjulmdy() and rmdyjul() agree with their declarations.  (Tom Lane, 2021-01-28)

  We had "short *mdy" in the extern declarations, but "short mdy[3]" in the actual function definitions. Per C99 these are equivalent, but recent versions of gcc have started to issue warnings about the inconsistency. Clean it up before the warnings get any more widespread.

  Back-patch, in case anyone wants to build older PG versions with bleeding-edge compilers.

  Discussion: https://postgr.es/m/2401575.1611764534@sss.pgh.pa.us
* pgbench: Remove dead code  (Alvaro Herrera, 2021-01-28)

  doConnect() never returns connections in state CONNECTION_BAD, so checking for that is pointless. Remove the code that does.

  This code has been dead since ba708ea3dc84, 20 years ago.

  Discussion: https://postgr.es/m/20210126195224.GA20361@alvherre.pgsql
  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
* Remove gratuitous uses of deprecated SELECT INTO  (Peter Eisentraut, 2021-01-28)

  CREATE TABLE AS has been preferred over SELECT INTO (outside of ecpg and PL/pgSQL) for a long time. There were still a few uses of SELECT INTO in tests and documentation, some old, some more recent. This changes them to CREATE TABLE AS. Some occurrences in the tests remain where they are specifically testing SELECT INTO parsing or similar.

  Discussion: https://www.postgresql.org/message-id/flat/96dc0df3-e13a-a85d-d045-d6e2c85218da%40enterprisedb.com
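  The two equivalent forms, for reference (hypothetical table and column names); the second is the preferred spelling outside of ecpg and PL/pgSQL:

      -- Deprecated form:
      SELECT * INTO archived_orders FROM orders WHERE shipped;

      -- Preferred form:
      CREATE TABLE archived_orders AS SELECT * FROM orders WHERE shipped;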
* Add direct conversion routines between EUC_TW and Big5.  (Heikki Linnakangas, 2021-01-28)

  Conversions between EUC_TW and Big5 were previously implemented by converting the whole input to MIC first, and then from MIC to the target encoding. Implement functions to convert directly between the two.

  The reason to do this now is that I'm working on a patch that will change the conversion function signature so that if the input is invalid, we convert as much as we can and return the number of bytes successfully converted. That's not possible if we use an intermediary format, because if an error happens in the intermediary -> final conversion, we lose track of the location of the invalid character in the original input. Avoiding the intermediate step makes the conversions faster, too.

  Reviewed-by: John Naylor
  Discussion: https://www.postgresql.org/message-id/b9e3167f-f84b-7aa4-5738-be578a4db924%40iki.fi
* Add mbverifystr() functions specific to each encoding.  (Heikki Linnakangas, 2021-01-28)

  This makes the pg_verify_mbstr() function faster, by allowing more efficient encoding-specific implementations. All the implementations included in this commit are pretty naive: they just call the same encoding-specific verifychar functions that were used previously, but that already gives a performance boost because the tight character-at-a-time loop is simpler.

  Reviewed-by: John Naylor
  Discussion: https://www.postgresql.org/message-id/e7861509-3960-538a-9025-b75a61188e01@iki.fi
* Don't add bailout adjustment for non-strict deserialize calls.  (Andrew Gierth, 2021-01-28)

  When building aggregate expression steps, strict checks need a bailout jump for when a null value is encountered, so there is a list of steps that require later adjustment. Adding entries to that list for steps that aren't actually strict would be harmless, except that there is an Assert which catches them. This leads to spurious errors on assert builds, for data sets that trigger parallel aggregation of an aggregate with a non-strict deserialization function (no such aggregates exist in the core system).

  Repair by not adding the adjustment entry when it's not needed.

  Backpatch back to 11 where the code was introduced.

  Per a report from Darafei (Komzpa) of the PostGIS project; analysis and patch by me.

  Discussion: https://postgr.es/m/87mty7peb3.fsf@news-spur.riddles.org.uk
* Refactor SQL functions of SHA-2 in cryptohashfuncs.c  (Michael Paquier, 2021-01-28)

  The same code pattern was repeated four times when compiling a SHA-2 hash. This refactoring has the advantage of issuing a compilation warning if a new value is added to pg_cryptohash_type, so that anybody doing an addition in this area would need to consider whether support for a new SQL function is needed or not.

  Author: Sehrope Sarkuni, Michael Paquier
  Discussion: https://postgr.es/m/YA7DvLRn2xnTgsMc@paquier.xyz
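  The four SQL-level digest functions whose implementations this unifies, for reference:

      SELECT sha224('abc'),
             sha256('abc'),
             sha384('abc'),
             sha512('abc');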
* Reduce the default value of vacuum_cost_page_miss.  (Peter Geoghegan, 2021-01-27)

  When commit f425b605 introduced cost based vacuum delays back in 2004, the defaults reflected then-current trends in hardware, as well as certain historical limitations in PostgreSQL. There have been enormous improvements in both areas since that time. The cost limit GUC defaults finally became much more representative of current trends following commit cbccac37, which decreased autovacuum_vacuum_cost_delay's default by 10x for PostgreSQL 12 (it went from 20ms to only 2ms).

  The relative costs have shifted too. This should also be accounted for by the defaults. More specifically, the relative importance of avoiding dirtying pages within VACUUM has greatly increased, primarily due to main memory capacity scaling and trends in flash storage. Within Postgres itself, improvements like sequential access during index vacuuming (at least in nbtree and GiST indexes) have also been contributing factors.

  To reflect all this, decrease the default of vacuum_cost_page_miss to 2. Since the default of vacuum_cost_page_dirty remains 20, dirtying a page is now considered 10x "costlier" than a page miss by default.

  Author: Peter Geoghegan <pg@bowt.ie>
  Discussion: https://postgr.es/m/CAH2-WzmLPFnkWT8xMjmcsm7YS3+_Qi3iRWAb2+_Bc8UhVyHfuA@mail.gmail.com
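  A quick way to confirm the built-in defaults on a running server; after this change vacuum_cost_page_miss should report a boot value of 2 and vacuum_cost_page_dirty 20:

      SELECT name, setting, boot_val
      FROM pg_settings
      WHERE name IN ('vacuum_cost_page_hit',
                     'vacuum_cost_page_miss',
                     'vacuum_cost_page_dirty');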
* In TrimCLOG(), don't reset XactCtl->shared->latest_page_number.  (Robert Haas, 2021-01-27)

  Since the CLOG page number is not recorded directly in the checkpoint record, we have to use ShmemVariableCache->nextXid to figure out the latest CLOG page number at the start of recovery. However, as recovery progresses, replay of CLOG/EXTEND records will update our notion of the latest page number, and we should rely on that being accurate rather than recomputing the value based on an updated notion of nextXid. ShmemVariableCache->nextXid is only an approximation during recovery anyway, whereas CLOG/EXTEND records are an authoritative representation of how the SLRU has been updated.

  Commit 0fcc2decd485a61321a3220d8f76cb108b082009 makes this simplification possible, as before that change clog_redo() might have injected a bogus value here, and we'd want to get rid of that before entering normal running.

  Patch by me, reviewed by Heikki Linnakangas.

  Discussion: http://postgr.es/m/CA+TgmoZYig9+AQodhF5sRXuKkJ=RgFDugLr3XX_dz_F-p=TwTg@mail.gmail.com
* In clog_redo(), don't set XactCtl->shared->latest_page_number.  (Robert Haas, 2021-01-27)

  The comment is no longer accurate, and hasn't been entirely accurate since Hot Standby was introduced. The original idea here was that StartupCLOG() wouldn't be called until the end of recovery and therefore this value would be uninitialized when this code is reached, but Hot Standby made that true only when hot_standby=off, and commit 1f113abdf87cd085dee3927960bb4f70442b7250 means that this value is now always initialized before replay even starts.

  The original purpose of this code was to bypass the sanity check in SimpleLruTruncate(), which will no longer occur: now, if something is wrong, that sanity check might trip during recovery. That's probably a good thing, because in the current code base latest_page_number should always be initialized and therefore we expect that the sanity check should pass. If it doesn't, something has gone wrong, and complaining about it is appropriate.

  Patch by me, reviewed by Heikki Linnakangas.

  Discussion: http://postgr.es/m/CA+TgmoZYig9+AQodhF5sRXuKkJ=RgFDugLr3XX_dz_F-p=TwTg@mail.gmail.com
* Move StartupCLOG() calls to just after we initialize ShmemVariableCache.  (Robert Haas, 2021-01-27)

  Previously, the hot_standby=off code path did this at end of recovery, while the hot_standby=on code path did it at the beginning of recovery. It's better to do this in only one place because (a) it's simpler, (b) StartupCLOG() is trivial so trying to postpone the work isn't useful, and (c) this will make it possible to simplify some other logic.

  Patch by me, reviewed by Heikki Linnakangas.

  Discussion: http://postgr.es/m/CA+TgmoZYig9+AQodhF5sRXuKkJ=RgFDugLr3XX_dz_F-p=TwTg@mail.gmail.com
* Fix GiST index deletion assert issue.  (Peter Geoghegan, 2021-01-26)

  Avoid calling heap_index_delete_tuples() with an empty deltids array to avoid an assertion failure.

  This issue was arguably an oversight in commit b5f58cf2, though the failing assert itself was added by my recent commit d168b666. No backpatch, though, since the oversight is harmless in the back branches.

  Author: Peter Geoghegan <pg@bowt.ie>
  Reported-By: Jaime Casanova <jcasanov@systemguards.com.ec>
  Discussion: https://postgr.es/m/CAJKUy5jscES84n3puE=sYngyF+zpb4wv8UMtuLnLPv5z=6yyNw@mail.gmail.com