path: root/src/test
Commit message | Author | Date

* Remove ATT_TABLE for ALTER TABLE ... ATTACH/DETACH | Michael Paquier | 2024-09-24
  Attempting these commands for a non-partitioned table would result in a failure when creating the relation in transformPartitionCmd(). This gives the possibility to throw an error earlier with a much better error message, thanks to d69a3f4d70b7. The extra test cases are from me. Note that FINALIZE uses a different subcommand and it had no coverage for its failure path with non-partitioned tables.
  Author: Álvaro Herrera, Michael Paquier
  Reviewed-by: Nathan Bossart
  Discussion: https://postgr.es/m/202409190803.tnis52adt2n5@alvherre.pgsql

* Remove pg_authid's TOAST table. | Nathan Bossart | 2024-09-21
  pg_authid's only varlena column is rolpassword, which unfortunately cannot be de-TOASTed during authentication because we haven't selected a database yet and cannot read pg_class. By removing pg_authid's TOAST table, attempts to set password hashes that require out-of-line storage will fail with a "row is too big" error instead. We may want to provide a more user-friendly error in the future, but for now let's just remove the useless TOAST table.
  Bumps catversion.
  Reported-by: Alexander Lakhin
  Reviewed-by: Tom Lane, Michael Paquier
  Discussion: https://postgr.es/m/89e8649c-eb74-db25-7945-6d6b23992394%40gmail.com

* Add parameter "connstr" to PostgreSQL::Test::Cluster::background_psql | Michael Paquier | 2024-09-20
  As for Cluster::psql, this can be handy to force the use of a connection string with some values overridden, like a "host".
  Author: Aidar Imamov
  Discussion: https://postgr.es/m/ecacb079efc533aed3c234cbcb5b07b6@postgrespro.ru

* psql: Add tests for repeated calls of \bind[_named] | Michael Paquier | 2024-09-20
  The implementation assumes that on multiple calls of these meta-commands the last one wins. Multiple \g calls in-between mean multiple executions. There were no tests to check these properties, hence let's add something.
  Author: Jelte Fennema-Nio, Michael Paquier
  Discussion: https://postgr.es/m/CAGECzQSTE7CoM=Gst56Xj8pOvjaPr09+7jjtWqTC40pGETyAuA@mail.gmail.com

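  A minimal psql sketch of the "last one wins" rule described above (the query and parameter values are hypothetical):

    -- Only the last \bind before \g takes effect, so this is expected
    -- to return 'second' rather than 'first'.
    SELECT $1 AS val \bind 'first' \bind 'second' \g
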
* psql: Clean up more aggressively state of \bind[_named], \parse and \close | Michael Paquier | 2024-09-19
  This fixes a couple of issues with the psql meta-commands mentioned above when called repeatedly:
  - The statement name is reset for each call. If a command errors out, its send_mode would still be set, causing an incorrect path to be taken when processing a query. For \bind_named, this could trigger an assertion failure as a statement name is always expected for this meta-command. This issue has been introduced by d55322b0da60.
  - The memory allocated for bind parameters can be leaked. This is a bug enlarged by d55322b0da60 that exists since 5b66de3433e2, as it is also possible to leak memory with \bind in v16 and v17. This requires a fix that will be done on the affected branches separately. This issue is taken care of here for HEAD.
  This patch tightens the cleanup of the state used for the extended protocol meta-commands (bind parameters, send mode, statement name) by doing it before running each meta-command on top of doing it once a query has been processed, avoiding any leaks and the inconsistencies when mixing calls, by refactoring the cleanup in a single routine used in all the code paths where this step is required.
  Reported-by: Alexander Lakhin
  Author: Anthonin Bonnefoy
  Discussion: https://postgr.es/m/2e5b89af-a351-ff0a-000c-037ac28314ab@gmail.com

* Add TOAST table to pg_index. | Nathan Bossart | 2024-09-18
  This change allows pg_index rows to use out-of-line storage for the "indexprs" and "indpred" columns, which enables use-cases such as very large index expressions.
  This system catalog was previously not given a TOAST table due to a fear of circularity issues (see commit 96cdeae07f). Testing has not revealed any such problems, and it seems unlikely that the entries for system indexes could ever need out-of-line storage. In any case, it is still early in the v18 development cycle, so committing this now will hopefully increase the chances of finding any unexpected problems prior to release.
  Bumps catversion.
  Reported-by: Jonathan Katz
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/b611015f-b423-458c-aa2d-be0e655cc1b4%40postgresql.org

* Extend PgStat_HashKey.objid from 4 to 8 bytes | Michael Paquier | 2024-09-18
  This opens the possibility to define keys for more types of statistics kinds in PgStat_HashKey, the first case being 8-byte query IDs for statistics like pg_stat_statements.
  This increases the size of PgStat_HashKey from 12 to 16 bytes, while PgStatShared_HashEntry, entry stored in the dshash for pgstats, keeps the same size due to alignment.
  xl_xact_stats_item, that tracks the stats items to drop in commit WAL records, is increased from 12 to 16 bytes. Note that individual chunks in commit WAL records should be multiples of sizeof(int), hence 8-byte object IDs are stored as two uint32, based on a suggestion from Heikki Linnakangas.
  While on it, the field of PgStat_HashKey is renamed from "objoid" to "objid", as for some stats kinds this field does not refer to OIDs but just IDs, like for replication slot stats.
  This commit bumps the following format variables:
  - PGSTAT_FILE_FORMAT_ID, as PgStat_HashKey is written to the stats file for non-serialized stats kinds in the dshash table.
  - XLOG_PAGE_MAGIC for the changes in xl_xact_stats_item.
  - Catalog version, for the SQL function pg_stat_have_stats().
  Reviewed-by: Bertrand Drouvot
  Discussion: https://postgr.es/m/ZsvTS9EW79Up8I62@paquier.xyz

* Don't enter parallel mode when holding interrupts. | Noah Misch | 2024-09-17
  Doing so caused the leader to hang in wait_event=ParallelFinish, which required an immediate shutdown to resolve. Back-patch to v12 (all supported versions).
  Francesco Degrassi
  Discussion: https://postgr.es/m/CAC-SaSzHUKT=vZJ8MPxYdC_URPfax+yoA1hKTcF4ROz_Q6z0_Q@mail.gmail.com

* Repair pg_upgrade for identity sequences with non-default persistence. | Tom Lane | 2024-09-17
  Since we introduced unlogged sequences in v15, identity sequences have defaulted to having the same persistence as their owning table. However, it is possible to change that with ALTER SEQUENCE, and pg_dump tries to preserve the logged-ness of sequences when it doesn't match (as indeed it wouldn't for an unlogged table from before v15).
  The fly in the ointment is that ALTER SEQUENCE SET [UN]LOGGED fails in binary-upgrade mode, because it needs to assign a new relfilenode which we cannot permit in that mode. Thus, trying to pg_upgrade a database containing a mismatching identity sequence failed.
  To fix, add syntax to ADD/ALTER COLUMN GENERATED AS IDENTITY to allow the sequence's persistence to be set correctly at creation, and use that instead of ALTER SEQUENCE SET [UN]LOGGED in pg_dump. (I tried to make SET [UN]LOGGED work without any pg_dump modifications, but that seems too fragile to be a desirable answer. This way should be markedly faster anyhow.)
  In passing, document the previously-undocumented SEQUENCE NAME option that pg_dump also relies on for identity sequences; I see no value in trying to pretend it doesn't exist.
  Per bug #18618 from Anthony Hsu. Back-patch to v15 where we invented this stuff.
  Discussion: https://postgr.es/m/18618-d4eb26d669ed110a@postgresql.org

* Ensure standby promotion point in 043_wal_replay_wait.pl | Alexander Korotkov | 2024-09-17
  This commit ensures the standby will be promoted at least at the primary insert LSN we have just observed. We use pg_switch_wal() to force the insert LSN to be written, then wait for the standby to catch up.
  Reported-by: Alexander Lakhin
  Discussion: https://postgr.es/m/1d7b08f2-64a2-77fb-c666-c9a74c68eeda%40gmail.com

* Minor cleanup related to pg_wal_replay_wait() procedure | Alexander Korotkov | 2024-09-17
  * Rename $node_standby1 to $node_standby in 043_wal_replay_wait.pl as there is only one standby.
  * Remove useless debug printing in 043_wal_replay_wait.pl.
  * Fix typo in one check description in 043_wal_replay_wait.pl.
  * Fix some wording in comments and documentation.
  Reported-by: Alexander Lakhin
  Discussion: https://postgr.es/m/1d7b08f2-64a2-77fb-c666-c9a74c68eeda%40gmail.com
  Reviewed-by: Alexander Lakhin

* Add temporal FOREIGN KEY constraints | Peter Eisentraut | 2024-09-17
  Add PERIOD clause to foreign key constraint definitions. This is supported for range and multirange types. Temporal foreign keys check for range containment instead of equality.
  This feature matches the behavior of the SQL standard temporal foreign keys, but it works on PostgreSQL's native ranges instead of SQL's "periods", which don't exist in PostgreSQL (yet).
  Reference actions ON {UPDATE,DELETE} {CASCADE,SET NULL,SET DEFAULT} are not supported yet.
  (previously committed as 34768ee3616, reverted by 8aee330af55; this is essentially unchanged from those)
  Author: Paul A. Jungwirth <pj@illuminatedcomputing.com>
  Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
  Reviewed-by: jian he <jian.universality@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/CA+renyUApHgSZF9-nd-a0+OPGharLQLO=mDHcY4_qQ0+noCUVg@mail.gmail.com

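  A minimal sketch of the new syntax (table and column names are hypothetical; the WITHOUT OVERLAPS primary key on the referenced table comes from the companion commit below):

    CREATE EXTENSION IF NOT EXISTS btree_gist;  -- GiST equality support for the scalar key part
    CREATE TABLE rooms (
        id       int,
        valid_at daterange,
        PRIMARY KEY (id, valid_at WITHOUT OVERLAPS)
    );
    CREATE TABLE bookings (
        room_id  int,
        valid_at daterange,
        FOREIGN KEY (room_id, PERIOD valid_at)
            REFERENCES rooms (id, PERIOD valid_at)
    );
    -- The referencing range must be contained in the referenced ranges
    -- for the same id, rather than being equal to one of them.
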
* Add temporal PRIMARY KEY and UNIQUE constraints | Peter Eisentraut | 2024-09-17
  Add WITHOUT OVERLAPS clause to PRIMARY KEY and UNIQUE constraints. These are backed by GiST indexes instead of B-tree indexes, since they are essentially exclusion constraints with = for the scalar parts of the key and && for the temporal part.
  (previously committed as 46a0cd4cefb, reverted by 8aee330af55; the new part is this:)
  Because 'empty' && 'empty' is false, the temporal PK/UQ constraint allowed duplicates, which is confusing to users and breaks internal expectations. For instance, when GROUP BY checks functional dependencies on the PK, it allows selecting other columns from the table, but in the presence of duplicate keys you could get the value from any of their rows. So we need to forbid empties.
  This all means that at the moment we can only support ranges and multiranges for temporal PK/UQs, unlike the original patch (above). Documentation and tests for this are added. But this could conceivably be extended by introducing some more general support for the notion of "empty" for other types.
  Author: Paul A. Jungwirth <pj@illuminatedcomputing.com>
  Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
  Reviewed-by: jian he <jian.universality@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/CA+renyUApHgSZF9-nd-a0+OPGharLQLO=mDHcY4_qQ0+noCUVg@mail.gmail.com

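  Continuing the hypothetical rooms table from the sketch above, the empty-range rule described in this commit means an insert like the following is expected to be rejected (the exact error text is an assumption):

    -- 'empty' overlaps nothing, so it could defeat the WITHOUT OVERLAPS
    -- check; such values are therefore forbidden in the temporal key part.
    INSERT INTO rooms VALUES (1, 'empty');
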
* Add stratnum GiST support function | Peter Eisentraut | 2024-09-17
  This is support function 12 for the GiST AM and translates "well-known" RT*StrategyNumber values into whatever strategy number is used by the opclass (since no particular numbers are actually required). We will use this to support temporal PRIMARY KEY/UNIQUE/FOREIGN KEY/FOR PORTION OF functionality.
  This commit adds two implementations, one for internal GiST opclasses (just an identity function) and another for btree_gist opclasses. It updates btree_gist from 1.7 to 1.8, adding the support function for all its opclasses.
  (previously committed as 6db4598fcb8, reverted by 8aee330af55; this is essentially unchanged from those)
  Author: Paul A. Jungwirth <pj@illuminatedcomputing.com>
  Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
  Reviewed-by: jian he <jian.universality@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/CA+renyUApHgSZF9-nd-a0+OPGharLQLO=mDHcY4_qQ0+noCUVg@mail.gmail.com

* Add memory/disk usage for Window aggregate nodes in EXPLAIN. | Tatsuo Ishii | 2024-09-17
  This commit is similar to 1eff8279d and expands the idea to Window aggregate nodes so that users can know how much memory or disk the tuplestore used. It uses the newly introduced tuplestore_get_stats() to obtain this information and adds some additional output in EXPLAIN ANALYZE to display the information for the Window aggregate node.
  Reviewed-by: David Rowley, Ashutosh Bapat, Maxim Orlov, Jian He
  Discussion: https://postgr.es/m/20240706.202254.89740021795421286.ishii%40postgresql.org

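  A minimal sketch of how the new output can be seen (the query is hypothetical, and the exact storage figures depend on work_mem and the data):

    EXPLAIN (ANALYZE, COSTS OFF)
    SELECT sum(g) OVER (ORDER BY g)
    FROM generate_series(1, 10000) AS g;
    -- The WindowAgg node is expected to report its tuplestore usage,
    -- e.g. a "Storage: Memory  Maximum Storage: ...kB" line.
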
* Run regression tests with timezone America/Los_Angeles. | Tom Lane | 2024-09-14
  Historically we've used timezone "PST8PDT", but the recent release 2024b of tzdb changes the definition of that zone in a way that breaks many test cases concerned with dates before 1970. Although we've not yet adopted 2024b into our own tree, this is already problematic for people using --with-system-tzdata if their platform has already adopted 2024b. To work with both older and newer versions of tzdb, switch to using "America/Los_Angeles", accepting the ensuing changes in regression test results.
  Back-patch to all supported branches.
  Per report and patch from Wolfgang Walther.
  Discussion: https://postgr.es/m/0a997455-5aba-4cf2-a354-d26d8bcbfae6@technowledgy.de

* Prohibit altering invalidated replication slots. | Amit Kapila | 2024-09-13
  ALTER_REPLICATION_SLOT for invalid replication slots should not be allowed because there is no way to get the invalidated (logical) slot back to a working state.
  Author: Bharath Rupireddy
  Reviewed-by: Peter Smith, Shveta Malik
  Discussion: https://www.postgresql.org/message-id/CALj2ACW4fSOMiKjQ3=2NVBMTZRTG8Ujg6jsK9z3EvOtvA4vzKQ@mail.gmail.com

* Make jsonpath .string() be immutable for datetimes. | Tom Lane | 2024-09-12
  Discussion of commit ed055d249 revealed that we don't actually want jsonpath's .string() method to depend on DateStyle, nor TimeZone either, because the non-"_tz" jsonpath functions are supposed to be immutable. Potentially we could allow a TimeZone dependency in the "_tz" variants, but it seems better to just uniformly define this method as returning the same string that jsonb text output would do. That's easier to implement too, saving a couple dozen lines.
  Patch by me, per complaint from Peter Eisentraut. Back-patch to v17 where this feature came in (in 66ea94e8e). Also back-patch ed055d249 to provide test cases.
  Discussion: https://postgr.es/m/5e8879d0-a3c8-4be2-950f-d83aa2af953a@eisentraut.org

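  A small sketch of the behavior described above (the output shown in the comment is an assumption based on jsonb text output, not taken from the commit):

    SELECT jsonb_path_query('"2024-09-12 12:34:56"', '$.timestamp().string()');
    -- Expected to yield the jsonb text form, e.g. "2024-09-12T12:34:56",
    -- regardless of the session's DateStyle setting.
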
* Add has_largeobject_privilege function. | Fujii Masao | 2024-09-12
  This function checks whether a user has specific privileges on a large object, identified by OID. The user can be provided by name, OID, or default to the current user. If the specified large object doesn't exist, the function returns NULL. It raises an error for a non-existent user name. This behavior is basically consistent with other privilege inquiry functions like has_table_privilege.
  Bump catalog version.
  Author: Yugo Nagata
  Reviewed-by: Fujii Masao
  Discussion: https://postgr.es/m/20240702163444.ab586f6075e502eb84f11b1a@sranhm.sraoss.co.jp

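  A minimal sketch of the call forms described above (the role name and large object OID are hypothetical):

    -- Check a named role's access to large object 2454; SELECT and
    -- UPDATE are the privilege types that apply to large objects.
    SELECT has_largeobject_privilege('alice', 2454, 'SELECT');
    -- Without a role argument, the current user is assumed.
    SELECT has_largeobject_privilege(2454, 'UPDATE');
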
* Remove old RULE privilege completely. | Fujii Masao | 2024-09-12
  The RULE privilege for tables was removed in v8.2, but for backward compatibility, GRANT/REVOKE and privilege functions like has_table_privilege continued to accept the RULE keyword without any effect. After discussions on pgsql-hackers, it was agreed that this compatibility is no longer needed. Since it's been long enough since the deprecation, we've decided to fully remove support for RULE privilege, so GRANT/REVOKE and privilege functions will no longer accept it.
  Author: Fujii Masao
  Reviewed-by: Nathan Bossart
  Discussion: https://postgr.es/m/976a3581-6939-457f-b947-fc3dc836c083@oss.nttdata.com

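  A short sketch of the visible change (table and role names are hypothetical, and the exact error wording is an assumption):

    -- Previously accepted as a silent no-op; now rejected outright,
    -- e.g. with an "unrecognized privilege type" style error.
    GRANT RULE ON TABLE mytab TO alice;
    SELECT has_table_privilege('alice', 'mytab', 'RULE');
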
* SQL/JSON: Fix JSON_QUERY(... WITH CONDITIONAL WRAPPER) | Amit Langote | 2024-09-12
  Currently, when WITH CONDITIONAL WRAPPER is specified, array wrappers are applied even to a single SQL/JSON item if it is a scalar JSON value, but this behavior does not comply with the standard. To fix, apply wrappers only when there are multiple SQL/JSON items in the result.
  Reported-by: Peter Eisentraut <peter@eisentraut.org>
  Author: Peter Eisentraut <peter@eisentraut.org>
  Author: Amit Langote <amitlangote09@gmail.com>
  Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
  Discussion: https://postgr.es/m/8022e067-818b-45d3-8fab-6e0d94d03626%40eisentraut.org
  Backpatch-through: 17

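  A small sketch of the corrected rule (the results in comments follow the description above and are assumptions, not output copied from the commit):

    -- Single scalar item: no array wrapper is added any more.
    SELECT JSON_QUERY(jsonb '{"a": 1}', '$.a' WITH CONDITIONAL WRAPPER);  -- 1
    -- Multiple items: the wrapper is still applied.
    SELECT JSON_QUERY(jsonb '[1, 2]', '$[*]' WITH CONDITIONAL WRAPPER);   -- [1, 2]
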
* Fix unique key checks in JSON object constructors | Tomas Vondra | 2024-09-11
  When building a JSON object, the code builds a hash table of keys, to allow checking if the keys are unique. The uniqueness check and adding the new key happens in json_unique_check_key(), but this assumes the pointer to the key remains valid.
  Unfortunately, two places passed pointers to keys in a buffer, while also appending more data (additional key/value pairs) to the buffer. With enough data the buffer is resized by enlargeStringInfo(), which calls repalloc(), invalidating the earlier key pointers.
  Due to this the uniqueness check may fail with both false negatives and false positives, producing JSON objects with duplicate keys or failing to produce a perfectly valid JSON object.
  This affects multiple functions that enforce uniqueness of keys, all introduced in PG16 with the new SQL/JSON:
  - json_object_agg_unique / jsonb_object_agg_unique
  - json_object / jsonb_objectagg
  Existing regression tests did not detect the issue, simply because the initial buffer size is 1024 and the objects were small enough not to require the repalloc. With a sufficiently large object, AddressSanitizer reported the access to invalid memory immediately. So would valgrind, of course.
  Fixed by copying the key into the hash table memory context, and adding regression tests with enough data to repalloc the buffer.
  Backpatch to 16, where the functions were introduced.
  Reported by Alexander Lakhin. Investigation and initial fix by Junwang Zhao, with various improvements and tests by me.
  Reported-by: Alexander Lakhin
  Author: Junwang Zhao, Tomas Vondra
  Backpatch-through: 16
  Discussion: https://postgr.es/m/18598-3279ed972a2347c7@postgresql.org
  Discussion: https://postgr.es/m/CAEG8a3JjH0ReJF2_O7-8LuEbO69BxPhYeXs95_x7+H9AMWF1gw@mail.gmail.com

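  A small sketch of the uniqueness check involved (a duplicate-key error is the expected outcome per the description above; the bug itself only manifested with objects large enough to force the internal buffer to be enlarged):

    -- Duplicate keys must be rejected by the *_unique constructors.
    SELECT json_object_agg_unique(k, v)
    FROM (VALUES ('a', 1), ('b', 2), ('a', 3)) AS t(k, v);
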
* Update .gitignore | Peter Eisentraut | 2024-09-11
  for commit 0785d1b8b2

* common/jsonapi: support libpq as a client | Peter Eisentraut | 2024-09-11
  Based on a patch by Michael Paquier.
  For libpq, use PQExpBuffer instead of StringInfo. This requires us to track allocation failures so that we can return JSON_OUT_OF_MEMORY as needed rather than exit()ing.
  Author: Jacob Champion <jacob.champion@enterprisedb.com>
  Co-authored-by: Michael Paquier <michael@paquier.xyz>
  Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
  Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
  Discussion: https://www.postgresql.org/message-id/flat/d1b467a78e0e36ed85a09adf979d04cf124a9d4b.camel@vmware.com

* Fix some whitespace issues in XMLSERIALIZE(... INDENT). | Tom Lane | 2024-09-10
  We must drop whitespace while parsing the input, else libxml2 will include "blank" nodes that interfere with the desired indentation behavior. The end result is that we didn't indent nodes separated by whitespace.
  Also, it seems that libxml2 may add a trailing newline when working in DOCUMENT mode. This is semantically insignificant, so strip it.
  This is in the gray area between being a bug fix and a definition change. However, the INDENT option is still pretty new (since v16), so I think we can get away with changing this in stable branches. Hence, back-patch to v16.
  Jim Jones
  Discussion: https://postgr.es/m/872865a8-548b-48e1-bfcd-4e38e672c1e4@uni-muenster.de

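  A minimal sketch of the case described above (the indented output in the comment shows the expected general shape, not exact bytes):

    SELECT xmlserialize(DOCUMENT '<root> <a>1</a> <b>2</b> </root>'::xml
                        AS text INDENT);
    -- With the whitespace-only text nodes dropped, the children should
    -- come out indented on their own lines, roughly:
    --   <root>
    --     <a>1</a>
    --     <b>2</b>
    --   </root>
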
* Improve documentation and testing of jsonpath string() for datetimes. | Tom Lane | 2024-09-10
  Point out that the output format depends on DateStyle, and test that, along with testing some cases previously not covered.
  In passing, adjust the horology test to verify that the prevailing DateStyle is 'Postgres, MDY', much as it has long verified the prevailing TimeZone. We expect pg_regress to have set these up, and there are multiple regression tests relying on these settings.
  Also make the formatting of entries in table 9.50 more consistent.
  David Wheeler (marginal additional hacking by me); review by jian he
  Discussion: https://postgr.es/m/56955B33-6959-4FDA-A459-F00363ECDFEE@justatheory.com

* Add amgettreeheight index AM API routine | Peter Eisentraut | 2024-09-10
  The only current implementation is for btree where it calls _bt_getrootheight(). Other index types can now also use this to pass information to their amcostestimate routine. Previously, btree was hardcoded and other index types could not hook into the optimizer at this point.
  Author: Mark Dilger <mark.dilger@enterprisedb.com>
  Discussion: https://www.postgresql.org/message-id/flat/E72EAA49-354D-4C2E-8EB9-255197F55330@enterprisedb.com

* Mark expressions nullable by grouping sets | Richard Guo | 2024-09-10
  When generating window_pathkeys, distinct_pathkeys, or sort_pathkeys, we failed to realize that the grouping/ordering expressions might be nullable by grouping sets. As a result, we may incorrectly deem that the PathKeys are redundant by EquivalenceClass processing and thus remove them from the pathkeys list. That would lead to wrong results in some cases.
  To fix this issue, we mark the grouping expressions nullable by grouping sets if that is the case. If the grouping expression is a Var or PlaceHolderVar or constructed from those, we can just add the RT index of the RTE_GROUP RTE to the existing nullingrels field(s); otherwise we have to add a PlaceHolderVar to carry on the nullingrel bit.
  However, we have to manually remove this nullingrel bit from expressions in various cases where these expressions are logically below the grouping step, such as when we generate groupClause pathkeys for grouping sets, or when we generate PathTarget for initial input to grouping nodes.
  Furthermore, in set_upper_references, the targetlist and quals of an Agg node should have nullingrels that include the effects of the grouping step, ie they will have nullingrels equal to the input Vars/PHVs' nullingrels plus the nullingrel bit that references the grouping RTE. In order to perform exact nullingrels matches, we also need to manually remove this nullingrel bit.
  Bump catversion because this changes the querytree produced by the parser.
  Thanks to Tom Lane for the idea to invent a new kind of RTE.
  Per reports from Geoff Winkless, Tobias Wendorff, Richard Guo from various threads.
  Author: Richard Guo
  Reviewed-by: Ashutosh Bapat, Sutou Kouhei
  Discussion: https://postgr.es/m/CAMbWs4_dp7e7oTwaiZeBX8+P1rXw4ThkZxh1QG81rhu9Z47VsQ@mail.gmail.com

* Introduce an RTE for the grouping step | Richard Guo | 2024-09-10
  If there are subqueries in the grouping expressions, each of these subqueries in the targetlist and HAVING clause is expanded into distinct SubPlan nodes. As a result, only one of these SubPlan nodes would be converted to reference to the grouping key column output by the Agg node; others would have to get evaluated afresh. This is not efficient, and with grouping sets this can cause wrong results issues in cases where they should go to NULL because they are from the wrong grouping set. Furthermore, during re-evaluation, these SubPlan nodes might use nulled column values from grouping sets, which is not correct.
  This issue is not limited to subqueries. For other types of expressions that are part of grouping items, if they are transformed into another form during preprocessing, they may fail to match lower target items. This can also lead to wrong results with grouping sets.
  To fix this issue, we introduce a new kind of RTE representing the output of the grouping step, with columns that are the Vars or expressions being grouped on. In the parser, we replace the grouping expressions in the targetlist and HAVING clause with Vars referencing this new RTE, so that the output of the parser directly expresses the semantic requirement that the grouping expressions be gotten from the grouping output rather than computed some other way. In the planner, we first preprocess all the columns of this new RTE and then replace any Vars in the targetlist and HAVING clause that reference this new RTE with the underlying grouping expressions, so that we will have only one instance of a SubPlan node for each subquery contained in the grouping expressions.
  Bump catversion because this changes the querytree produced by the parser.
  Thanks to Tom Lane for the idea to invent a new kind of RTE.
  Per reports from Geoff Winkless, Tobias Wendorff, Richard Guo from various threads.
  Author: Richard Guo
  Reviewed-by: Ashutosh Bapat, Sutou Kouhei
  Discussion: https://postgr.es/m/CAMbWs4_dp7e7oTwaiZeBX8+P1rXw4ThkZxh1QG81rhu9Z47VsQ@mail.gmail.com

* Don't bother checking the result of SPI_connect[_ext] anymore. | Tom Lane | 2024-09-09
  SPI_connect/SPI_connect_ext have not returned any value other than SPI_OK_CONNECT since commit 1833f1a1c in v10; any errors are thrown via ereport. (The most likely failure is out-of-memory, which has always been thrown that way, so callers had better be prepared for such errors.) This makes it somewhat pointless to check these functions' result, and some callers within our code haven't been bothering; indeed, the only usage example within spi.sgml doesn't bother. So it's likely that the omission has propagated into extensions too.
  Hence, let's standardize on not checking, and document the return value as historical, while not actually changing these functions' behavior. (The original proposal was to change their return type to "void", but that would needlessly break extensions that are conforming to the old practice.) This saves a small amount of boilerplate code in a lot of places.
  Stepan Neretin
  Discussion: https://postgr.es/m/CAMaYL5Z9Uk8cD9qGz9QaZ2UBJFOu7jFx5Mwbznz-1tBbPDQZow@mail.gmail.com

* Fix waits of REINDEX CONCURRENTLY for indexes with predicates or expressions | Michael Paquier | 2024-09-09
  As introduced by f9900df5f94, a REINDEX CONCURRENTLY job done for an index with predicates or expressions would set PROC_IN_SAFE_IC in its MyProc->statusFlags, causing it to be ignored by other concurrent operations. Such concurrent index rebuilds should never be ignored, as a predicate or an expression could call a user-defined function that accesses a different table than the table where the index is rebuilt.
  A test that uses injection points is added, backpatched down to 17. Michail has proposed a different test, but I have added something simpler with more coverage.
  Oversight in f9900df5f949.
  Author: Michail Nikolaev
  Discussion: https://postgr.es/m/CANtu0oj9A3kZVduFTG0vrmGnKB+DCHgEpzOp0qAyOgmks84j0w@mail.gmail.com
  Backpatch-through: 14

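  A minimal sketch of the kind of index this applies to (names and the indexed expression are hypothetical):

    CREATE TABLE events (id int, payload text);
    CREATE INDEX events_payload_idx ON events (lower(payload));
    -- Rebuilding an expression (or partial) index concurrently no longer
    -- advertises itself as "safe" (PROC_IN_SAFE_IC), so other concurrent
    -- index operations will wait for it instead of ignoring it.
    REINDEX INDEX CONCURRENTLY events_payload_idx;
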
* SQL/JSON: Fix default ON ERROR behavior for JSON_TABLE | Amit Langote | 2024-09-06
  Use EMPTY ARRAY instead of EMPTY.
  This change does not affect the runtime behavior of JSON_TABLE(), which continues to return an empty relation ON ERROR. It only alters whether the default ON ERROR behavior is shown in the deparsed output.
  Reported-by: Jian He <jian.universality@gmail.com>
  Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
  Backpatch-through: 17

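  For context, a minimal JSON_TABLE sketch (names hypothetical); as noted above, the runtime default is unchanged, so an error in the row path simply yields an empty result:

    -- 'strict $[*]' raises an error for a non-array input; with the
    -- default ON ERROR behavior the query just returns no rows.
    SELECT * FROM JSON_TABLE(jsonb '"scalar"', 'strict $[*]'
                             COLUMNS (a int PATH '$.a'));
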
* SQL/JSON: Fix JSON_TABLE() column deparsing | Amit Langote | 2024-09-06
  The deparsing code in get_json_expr_options() unnecessarily emitted the default column-specific ON ERROR / EMPTY behavior when the top-level ON ERROR behavior in JSON_TABLE was set to ERROR. Fix that by not overriding the column-specific default, determined based on the column's JsonExprOp in get_json_table_columns(), with JSON_BEHAVIOR_ERROR when that is the top-level ON ERROR behavior.
  Note that this only removes redundancy; the current deparsing output is not incorrect, just redundant.
  Reviewed-by: Jian He <jian.universality@gmail.com>
  Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
  Backpatch-through: 17

* Revert recent SQL/JSON related commits | Amit Langote | 2024-09-06
  Reverts 68222851d5a8, 565caaa79af, and 3a97460970f, because a few BF animals didn't like one or all of them.

* SQL/JSON: Fix default ON ERROR behavior for JSON_TABLE | Amit Langote | 2024-09-06
  Use EMPTY ARRAY instead of EMPTY.
  This change does not affect the runtime behavior of JSON_TABLE(), which continues to return an empty relation ON ERROR. It only alters whether the default ON ERROR behavior is shown in the deparsed output.
  Reported-by: Jian He <jian.universality@gmail.com>
  Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
  Backpatch-through: 17

* SQL/JSON: Fix JSON_TABLE() column deparsing | Amit Langote | 2024-09-06
  The deparsing code in get_json_expr_options() unnecessarily emitted the default column-specific ON ERROR / EMPTY behavior when the top-level ON ERROR behavior in JSON_TABLE was set to ERROR. Fix that by not overriding the column-specific default, determined based on the column's JsonExprOp in get_json_table_columns(), with JSON_BEHAVIOR_ERROR when that is the top-level ON ERROR behavior.
  Note that this only removes redundancy; the current deparsing output is not incorrect, just redundant.
  Reviewed-by: Jian He <jian.universality@gmail.com>
  Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
  Backpatch-through: 17

* Prevent mis-encoding of "trailing junk after numeric literal" errors. | Tom Lane | 2024-09-05
  Since commit 2549f0661, we reject an identifier immediately following a numeric literal (without separating whitespace), because that risks ambiguity with hex/octal/binary integers. However, that patch used token patterns like "{integer}{ident_start}", which is problematic because {ident_start} matches only a single byte. If the first character after the integer is a multibyte character, this ends up with flex reporting an error message that includes a partial multibyte character. That can cause assorted bad-encoding problems downstream, both in the report to the client and in the postmaster log file.
  To fix, use {identifier} not {ident_start} in the "junk" token patterns, so that they will match complete multibyte characters. This seems generally better user experience quite aside from the encoding problem: for "123abc" the error message will now say that the error appeared at or near "123abc" instead of "123a".
  While at it, add some commentary about why these patterns exist and how they work.
  Report and patch by Karina Litskevich; review by Pavel Borisov.
  Back-patch to v15 where the problem came in.
  Discussion: https://postgr.es/m/CACiT8iZ_diop=0zJ7zuY3BXegJpkKK1Av-PU7xh0EDYHsa5+=g@mail.gmail.com

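  A small sketch of the user-visible difference (per the description above; the full error text is paraphrased, not quoted):

    -- An identifier glued onto a numeric literal is rejected; the error
    -- now points at the whole junk token "123abc" rather than the
    -- truncated "123a", which also avoids splitting a multibyte
    -- character when the junk starts with one.
    SELECT 123abc;
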
* Fix misleading error message context | Peter Eisentraut | 2024-09-05
  Author: Pavel Stehule <pavel.stehule@gmail.com>
  Reviewed-by: Stepan Neretin <sncfmgg@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/CAFj8pRAw+OkVW=FgMKHKyvY3CgtWy3cWdY7XT+S5TJaTttu=oA@mail.gmail.com

* Fix two NULL pointer dereferences when reading custom pgstats from file | Michael Paquier | 2024-09-05
  There were two spots in pgstat_read_statsfile() where it was possible to finish with a null-pointer-dereference crash for custom pgstats kinds:
  - When reading stats for a fixed-numbered stats entry.
  - When reading a variable stats entry with name serialization.
  For both cases, these issues were reachable by starting a server after changing shared_preload_libraries so that the stats written previously could not be loaded. The code is changed so that the stats are ignored in this case, like the other code paths doing similar sanity checks. Two WARNINGs are added to be able to debug these issues. A test is added for the case of fixed-numbered stats with the module injection_points.
  Oversights in 7949d9594582, spotted while looking at a different report.
  Discussion: https://postgr.es/m/Ztj0Jftsn4xXuXtl@paquier.xyz

* Check availability of module injection_points in TAP tests | Michael Paquier | 2024-09-05
  This fixes defects with installcheck for TAP tests that expect the module injection_points to exist in an installation, but the contents of src/test/modules are not installed by default with installcheck. This would cause, for example, failures under installcheck-world for a build with injection points enabled, when the contents of src/test/modules/ are not installed.
  The availability of the module can be checked with a scan of pg_available_extensions. This has been introduced in 2cdcae9da696, and it is refactored here as a new routine in Cluster.pm.
  Tests are changed in different ways depending on what they need:
  - The libpq TAP test sets up a node even without injection points, so it is enough to check that CREATE EXTENSION can be used. There is no need for the variable enable_injection_points.
  - In test_misc, 006_signal_autovacuum requires a runtime check.
  - 041_checkpoint_at_promote in recovery tests and 005_timeouts in test_misc are updated to use the routine introduced in Cluster.pm.
  - test_slru's 001_multixact, injection_points's 001_stats and modules/gin/ do not require a check as these modules disable installcheck entirely.
  Discussion: https://postgr.es/m/ZtesYQ-WupeAK7xK@paquier.xyz

* Collect statistics about conflicts in logical replication. | Amit Kapila | 2024-09-04
  This commit adds columns in view pg_stat_subscription_stats to show the number of times a particular conflict type has occurred during the application of logical replication changes. The following columns are added:
  confl_insert_exists: Number of times a row insertion violated a NOT DEFERRABLE unique constraint.
  confl_update_origin_differs: Number of times an update was performed on a row that was previously modified by another origin.
  confl_update_exists: Number of times that the updated value of a row violates a NOT DEFERRABLE unique constraint.
  confl_update_missing: Number of times that the tuple to be updated is missing.
  confl_delete_origin_differs: Number of times a delete was performed on a row that was previously modified by another origin.
  confl_delete_missing: Number of times that the tuple to be deleted is missing.
  The update_origin_differs and delete_origin_differs conflicts can be detected only when track_commit_timestamp is enabled.
  Author: Hou Zhijie
  Reviewed-by: Shveta Malik, Peter Smith, Amit Kapila
  Discussion: https://postgr.es/m/OS0PR01MB57160A07BD575773045FC214948F2@OS0PR01MB5716.jpnprd01.prod.outlook.com

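  A minimal sketch of how the new counters can be inspected (column names are taken from the list above; the subscription name is hypothetical):

    SELECT subname,
           confl_insert_exists,
           confl_update_origin_differs,
           confl_update_exists,
           confl_update_missing,
           confl_delete_origin_differs,
           confl_delete_missing
    FROM pg_stat_subscription_stats
    WHERE subname = 'mysub';
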
* Avoid unnecessary post-sort projection | Richard Guo | 2024-09-04
  When generating paths for the ORDER BY clause, one thing we need to ensure is that the output paths project the correct final_target. To achieve this, in create_ordered_paths, we compare the pathtarget of each generated path with the given 'target', and add a post-sort projection step if the two targets do not match.
  Currently we perform a simple pointer comparison between the two targets. It turns out that this is not sufficient. Each sorted_path generated in create_ordered_paths initially projects the correct target required by the preceding steps of sort. If it is the same pointer as sort_input_target, pointer comparison suffices, because sort_input_target is always identical to final_target when no post-sort projection is needed.
  However, sorted_path's initial pathtarget may not be the same pointer as sort_input_target, because in apply_scanjoin_target_to_paths, if the target to be applied has the same expressions as the existing reltarget, we only inject the sortgroupref info into the existing pathtargets, rather than create new projection paths. As a result, pointer comparison in create_ordered_paths is not reliable.
  Instead, we can compare PathTarget.exprs to determine whether a projection step is needed. If the expressions match, we can be confident that a post-sort projection is not required.
  It could be argued that this change adds extra check cost each time we decide whether a post-sort projection is needed. However, as explained in apply_scanjoin_target_to_paths, by avoiding the creation of projection paths, we save effort both immediately and at plan creation time. This, I think, justifies the extra check cost.
  There are two ensuing plan changes in the regression tests, but they look reasonable and are exactly what we are fixing here. So no additional test cases are added.
  No backpatch as this could result in plan changes.
  Author: Richard Guo
  Reviewed-by: Peter Eisentraut, David Rowley, Tom Lane
  Discussion: https://postgr.es/m/CAMbWs48TosSvmnz88663_2yg3hfeOFss-J2PtnENDH6J_rLnRQ@mail.gmail.com

* Check the validity of commutators for merge/hash clauses | Richard Guo | 2024-09-04
  When creating merge or hash join plans in createplan.c, the merge or hash clauses may need to get commuted to ensure that the outer var is on the left and the inner var is on the right if they are not already in the expected form. This requires that their operators have commutators. Failing to find a commutator at this stage would result in 'ERROR: could not find commutator for operator xxx', with no opportunity to select an alternative plan.
  Typically, this is not an issue because mergejoinable or hashable operators are expected to always have valid commutators. But in some artificial cases this assumption may not hold true. Therefore, here in this patch we check the validity of commutators for clauses in the form "inner op outer" when selecting mergejoin/hash clauses, and consider a clause unusable for the current pair of outer and inner relations if it lacks a commutator.
  There are not (and should not be) any such operators built into Postgres that are mergejoinable or hashable but have no commutators; so we leverage the alias type 'int8alias1' created in equivclass.sql to build the test case. This is why the test case is included in equivclass.sql rather than in join.sql.
  Although this is arguably a bug fix, it cannot be reproduced without installing an incomplete opclass, which is unlikely to happen in practice, so no back-patch.
  Reported-by: Alexander Pyhalov
  Author: Richard Guo
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/c59ec04a2fef94d9ffc35a9b17dfc081@postgrespro.ru

* Avoid installcheck failure in TAP tests using injection_points | Michael Paquier | 2024-09-04
  These tests depend on the test module injection_points to be installed, but it may not be available as the contents of src/test/modules/ are not installed by default.
  This commit adds a workaround based on a scan of pg_available_extensions to check if the extension is available, skipping the test if it is not. This allows installcheck to work transparently.
  There are more tests impacted by this problem on HEAD, but for now this addresses only the tests that exist on HEAD and v17 as the release is close by.
  Reported-by: Maxim Orlov
  Discussion: https://postgr.es/m/CACG=ezZkoT-pFz6a9XnyToiuR-Wg8fGELqHLoyBodr+2h-77qA@mail.gmail.com
  Backpatch-through: 17

* Simplify makefiles exporting twice enable_injection_points | Michael Paquier | 2024-09-04
  This is confusing, as it exports the same variable twice. Oversight in 6782709df81f that has spread in more places afterwards.
  Reported-by: Alvaro Herrera, Tom Lane
  Discussion: https://postgr.es/m/202408201630.mn6vbohjh7hh@alvherre.pgsql
  Backpatch-through: 17

* Use library functions to edit config in SSL tests | Daniel Gustafsson | 2024-09-03
  The SSL tests were editing the postgres configuration by directly reading and writing the files rather than using append_conf() from the testcode library.
  Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
  Discussion: https://postgr.es/m/01F4684C-8C98-4BBE-AB83-AC8D7C746AF8@yesql.se

* Test for PG_TEST_EXTRA separately in SSL tests | Daniel Gustafsson | 2024-09-03
  PG_TEST_EXTRA is an override and should be tested for separately from any other test as there is no dependency on whether OpenSSL is available or not.
  Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
  Discussion: https://postgr.es/m/01F4684C-8C98-4BBE-AB83-AC8D7C746AF8@yesql.se

* Fix typos and grammar in code comments and docs | Michael Paquier | 2024-09-03
  Author: Alexander Lakhin
  Discussion: https://postgr.es/m/f7e514cf-2446-21f1-a5d2-8c089a6e2168@gmail.com

* Fix rarely-run test for message wording change | Peter Eisentraut | 2024-09-02
  fixup for 2e6a8047f0
  Reported-by: Nazir Bilal Yavuz <byavuz81@gmail.com>

* Remove support for OpenSSL older than 1.1.0 | Daniel Gustafsson | 2024-09-02
  OpenSSL 1.0.2 has been EOL from the upstream OpenSSL project for some time, and is no longer the default OpenSSL version with any vendor that packages PostgreSQL. By retiring support for OpenSSL 1.0.2 we can remove a lot of no longer required complexity for managing state within libcrypto which is now handled by OpenSSL.
  Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
  Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
  Reviewed-by: Michael Paquier <michael@paquier.xyz>
  Discussion: https://postgr.es/m/ZG3JNursG69dz1lr@paquier.xyz
  Discussion: https://postgr.es/m/CA+hUKGKh7QrYzu=8yWEUJvXtMVm_CNWH1L_TLWCbZMwbi1XP2Q@mail.gmail.com