path: root/src/backend
Commit message / Author / Date
* Allow _h_indexbuild() to be interrupted.  (Tom Lane, 2024-09-13)

  When we are building a hash index that is large enough to need pre-sorting (larger than either maintenance_work_mem or NBuffers), the initial sorting phase is interruptible, but the insertion phase wasn't. Add the missing CHECK_FOR_INTERRUPTS().

  Per bug #18616 from Alexander Lakhin. Back-patch to all supported branches.

  Pavel Borisov

  Discussion: https://postgr.es/m/18616-acbb9e5caf41e964@postgresql.org
* Remove incorrect Assert.  (Tom Lane, 2024-09-11)

  check_agglevels_and_constraints() asserted that if we find an aggregate function in an EXPR_KIND_FROM_SUBSELECT expression, the expression must be in a LATERAL subquery. Alexander Lakhin found a case where that's not so: because of the odd scoping rules for NEW/OLD within a rule, a reference to NEW/OLD could cause an aggregate to be considered top-level even though it's in an unmarked sub-select. The error message that would be thrown seems sufficiently on-point, so just remove the Assert. (Hence, this is not a bug for production builds.)

  This Assert was added by me in commit eaccfded9 (9.3 era). It looks like I put it in to cross-check that the new logic for detecting misplaced aggregates (using agglevelsup) caught the same cases that a previous check on p_lateral_active did. So there might have been some related misbehavior before eaccfded9 ... but that's very ancient history by now, so I didn't dig any deeper.

  Per bug #18608 from Alexander Lakhin. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/18608-48de0717508ee429@postgresql.org
* Fix unique key checks in JSON object constructors  (Tomas Vondra, 2024-09-11)

  When building a JSON object, the code builds a hash table of keys, to allow checking if the keys are unique. The uniqueness check and adding the new key happens in json_unique_check_key(), but this assumes the pointer to the key remains valid.

  Unfortunately, two places passed pointers to keys in a buffer, while also appending more data (additional key/value pairs) to the buffer. With enough data the buffer is resized by enlargeStringInfo(), which calls repalloc(), invalidating the earlier key pointers.

  Due to this the uniqueness check may fail with both false negatives and false positives, producing JSON objects with duplicate keys or failing to produce a perfectly valid JSON object.

  This affects multiple functions that enforce uniqueness of keys, all introduced in PG16 with the new SQL/JSON:

  - json_object_agg_unique / jsonb_object_agg_unique
  - json_object / jsonb_objectagg

  Existing regression tests did not detect the issue, simply because the initial buffer size is 1024 and the objects were small enough not to require the repalloc. With a sufficiently large object, AddressSanitizer reported the access to invalid memory immediately. So would valgrind, of course.

  Fixed by copying the key into the hash table memory context, and adding regression tests with enough data to repalloc the buffer.

  Backpatch to 16, where the functions were introduced.

  Reported by Alexander Lakhin. Investigation and initial fix by Junwang Zhao, with various improvements and tests by me.

  Reported-by: Alexander Lakhin
  Author: Junwang Zhao, Tomas Vondra
  Backpatch-through: 16
  Discussion: https://postgr.es/m/18598-3279ed972a2347c7@postgresql.org
  Discussion: https://postgr.es/m/CAEG8a3JjH0ReJF2_O7-8LuEbO69BxPhYeXs95_x7+H9AMWF1gw@mail.gmail.com
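  An illustrative reproducer (hypothetical; not taken from the commit or its regression tests) would use keys long enough to push the StringInfo buffer past its initial 1024 bytes while the duplicate check is in flight:

      -- Duplicate keys must be rejected under WITH UNIQUE KEYS; with the bug,
      -- the repalloc() could leave the check comparing against a stale pointer.
      SELECT JSON_OBJECT(repeat('k', 2000) VALUE 1,
                         repeat('k', 2000) VALUE 2
                         WITH UNIQUE KEYS);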
* Fix some whitespace issues in XMLSERIALIZE(... INDENT).  (Tom Lane, 2024-09-10)

  We must drop whitespace while parsing the input, else libxml2 will include "blank" nodes that interfere with the desired indentation behavior. The end result is that we didn't indent nodes separated by whitespace.

  Also, it seems that libxml2 may add a trailing newline when working in DOCUMENT mode. This is semantically insignificant, so strip it.

  This is in the gray area between being a bug fix and a definition change. However, the INDENT option is still pretty new (since v16), so I think we can get away with changing this in stable branches. Hence, back-patch to v16.

  Jim Jones

  Discussion: https://postgr.es/m/872865a8-548b-48e1-bfcd-4e38e672c1e4@uni-muenster.de
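  A sketch of the first symptom (an invented example, not from the commit): sibling elements separated by whitespace in the input previously came out without the expected indentation.

      -- The "blank" text nodes between the <bar> elements no longer defeat INDENT.
      SELECT xmlserialize(DOCUMENT '<foo> <bar>42</bar> <bar>43</bar> </foo>' AS text INDENT);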
* Fix waits of REINDEX CONCURRENTLY for indexes with predicates or expressions  (Michael Paquier, 2024-09-09)

  As introduced by f9900df5f94, a REINDEX CONCURRENTLY job done for an index with predicates or expressions would set PROC_IN_SAFE_IC in its MyProc->statusFlags, causing it to be ignored by other concurrent operations. Such concurrent index rebuilds should never be ignored, as a predicate or an expression could call a user-defined function that accesses a different table than the table where the index is rebuilt.

  A test that uses injection points is added, backpatched down to 17. Michail has proposed a different test, but I have added something simpler with more coverage.

  Oversight in f9900df5f949.

  Author: Michail Nikolaev
  Discussion: https://postgr.es/m/CANtu0oj9A3kZVduFTG0vrmGnKB+DCHgEpzOp0qAyOgmks84j0w@mail.gmail.com
  Backpatch-through: 14
* Fix incorrect pg_stat_io output on 32-bit machines.  (Tom Lane, 2024-09-06)

  pg_stat_get_io() applied TimestampTzGetDatum twice to the stat_reset_timestamp value. On 64-bit builds that's harmless because TimestampTzGetDatum is a no-op, but on 32-bit builds it results in displaying garbage in the stats_reset column of the pg_stat_io view.

  Bug dates to commit a9c70b46d which introduced pg_stat_io, so back-patch to v16 where that came in.

  Bertrand Drouvot

  Discussion: https://postgr.es/m/Ztrd+XcPTz1zorkg@ip-10-97-1-34.eu-west-3.compute.internal
* Prevent mis-encoding of "trailing junk after numeric literal" errors.  (Tom Lane, 2024-09-05)

  Since commit 2549f0661, we reject an identifier immediately following a numeric literal (without separating whitespace), because that risks ambiguity with hex/octal/binary integers. However, that patch used token patterns like "{integer}{ident_start}", which is problematic because {ident_start} matches only a single byte. If the first character after the integer is a multibyte character, this ends up with flex reporting an error message that includes a partial multibyte character. That can cause assorted bad-encoding problems downstream, both in the report to the client and in the postmaster log file.

  To fix, use {identifier} not {ident_start} in the "junk" token patterns, so that they will match complete multibyte characters. This seems generally better user experience quite aside from the encoding problem: for "123abc" the error message will now say that the error appeared at or near "123abc" instead of "123a".

  While at it, add some commentary about why these patterns exist and how they work.

  Report and patch by Karina Litskevich; review by Pavel Borisov. Back-patch to v15 where the problem came in.

  Discussion: https://postgr.es/m/CACiT8iZ_diop=0zJ7zuY3BXegJpkKK1Av-PU7xh0EDYHsa5+=g@mail.gmail.com
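  For illustration (examples invented here, not taken from the patch):

      SELECT 123abc;   -- error is now reported at or near "123abc", not "123a"
      SELECT 123éxyz;  -- a multibyte character after the digits is no longer split mid-byte in the report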
* Clarify restrict_nonsystem_relation_kind description.  (Masahiko Sawada, 2024-08-30)

  This change improves the description of the restrict_nonsystem_relation_kind parameter in guc_tables.c and the documentation for better clarity.

  Backpatch to 12, where this GUC parameter was introduced.

  Reviewed-by: Peter Eisentraut
  Discussion: https://postgr.es/m/6a96f1af-22b4-4a80-8161-1f26606b9ee2%40eisentraut.org
  Backpatch-through: 12
* Avoid inserting PlaceHolderVars in cases where pre-v16 PG did not.  (Tom Lane, 2024-08-30)

  Commit 2489d76c4 removed some logic from pullup_replace_vars() that avoided wrapping a PlaceHolderVar around a pulled-up subquery output expression if the expression could be proven to go to NULL anyway (because it contained Vars or PHVs of the pulled-up relation and did not contain non-strict constructs). But removing that logic turns out to cause performance regressions in some cases, because the extra PHV blocks subexpression folding, and will do so even if outer-join reduction later turns it into a no-op with no phnullingrels bits. This can for example prevent an expression from being matched to an index.

  The reason for always adding a PHV was to ensure we had someplace to put the varnullingrels marker bits of the Var being replaced. However, it turns out we can optimize in exactly the same cases that the previous code did, because we can instead attach the needed varnullingrels bits to the contained Var(s)/PHV(s).

  This is not a complete solution --- it would be even better if we could remove PHVs after reducing them to no-ops. It doesn't look practical to back-patch such an improvement, but this change seems safe and at least gets rid of the performance-regression cases.

  Per complaint from Nikhil Raj. Back-patch to v16 where the problem appeared.

  Discussion: https://postgr.es/m/CAG1ps1xvnTZceKK24OUfMKLPvDP2vjT-d+F2AOCWbw_v3KeEgg@mail.gmail.com
* Fix mis-deparsing of ORDER BY lists when there is a name conflict.  (Tom Lane, 2024-08-29)

  If an ORDER BY item in SELECT is a bare identifier, the parser first seeks it as an output column name of the SELECT (for SQL92 compatibility). However, ruleutils.c is expecting the SQL99 interpretation where such a name is an input column name. So it's possible to produce an incorrect display of a view in the (admittedly pretty ill-advised) case where some other column is renamed in the SELECT output list to match an ORDER BY column.

  This can be fixed by table-qualifying such names in the dumped view text. To avoid cluttering less-ill-advised queries, we'd like to do so only when there's an actual name conflict. That requires passing the current get_query_def call's resultDesc parameter down to get_variable, so that it can determine what the output column names are. In hopes of reducing rather than increasing notational clutter in ruleutils.c, I moved that value into the deparse_context struct and removed it from the parameter lists of get_query_def's other subroutines.

  I made a few other cosmetic changes while at it:

  * Likewise move the colNamesVisible parameter into deparse_context.

  * Rename deparse_context's windowTList field to targetList, since it's no longer used only in connection with WINDOW clauses.

  * Replace the special_exprkind field with a bool inGroupBy, since that was all it was being used for, and the apparent flexibility of storing a ParseExprKind proved to be illusory. (We need a separate varInOrderBy field to make this patch work.)

  * Remove useless save/restore logic in get_select_query_def.

  In principle, this bug is quite old. However, it seems unreachable before 1b4d280ea, because before that the presence of "new" and "old" entries in a view's rangetable caused us to always table-qualify every Var reference in dumped views. Hence, back-patch to v16 where that came in.

  Per bug #18589 from Quynh Tran.

  Discussion: https://postgr.es/m/18589-70091cb81db1a3f1@postgresql.org
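  A simplified sketch of the kind of conflict involved (hypothetical, not the reported case): in a view such as

      CREATE TABLE t (a int, b int);
      CREATE VIEW v AS SELECT a AS b FROM t ORDER BY t.b;

  dumping the sort key as a bare "b" would be re-read as the output column (really "a"), so it is now emitted table-qualified as "t.b".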
* Disallow USING clause when altering type of generated column  (Peter Eisentraut, 2024-08-29)

  This does not make sense. It would write the output of the USING clause into the converted column, which would violate the generation expression.

  This adds a check to error out if this is specified.

  There was a test for this, but that test errored out for a different reason, so it was not effective.

  Reported-by: Jian He <jian.universality@gmail.com>
  Reviewed-by: Yugo NAGATA <nagata@sraoss.co.jp>
  Discussion: https://www.postgresql.org/message-id/flat/c7083982-69f4-4b14-8315-f9ddb20b9834%40eisentraut.org
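  For example (an invented sketch, not the reporter's case), a command of this shape is now rejected instead of writing the USING result into the generated column:

      CREATE TABLE t (a int, g int GENERATED ALWAYS AS (a * 2) STORED);
      ALTER TABLE t ALTER COLUMN g TYPE bigint USING g + 1;  -- now raises an error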
* Don't advance origin during apply failure.  (Amit Kapila, 2024-08-21)

  We advance origin progress during abort on successful streaming and application of ROLLBACK in parallel streaming mode. But the origin shouldn't be advanced during an error or unsuccessful apply due to shutdown. Otherwise, it will result in a transaction loss as such a transaction won't be sent again by the server.

  Reported-by: Hou Zhijie
  Author: Hayato Kuroda and Shveta Malik
  Reviewed-by: Amit Kapila
  Backpatch-through: 16
  Discussion: https://postgr.es/m/TYAPR01MB5692FAC23BE40C69DA8ED4AFF5B92@TYAPR01MB5692.jpnprd01.prod.outlook.com
* Avoid failure to open dropped detached partition  (Alvaro Herrera, 2024-08-19)

  When a partition is detached and immediately dropped, a prepared statement could try to compute a new partition descriptor that includes it. This leads to this kind of error:

      ERROR:  could not open relation with OID 457639

  Avoid this by skipping the partition in expand_partitioned_rtentry if it doesn't exist.

  Noted by me while investigating bug #18559. Kuntal Ghosh helped to identify the exact failure.

  Backpatch to 14, where DETACH CONCURRENTLY was introduced.

  Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
  Reviewed-by: Kuntal Ghosh <kuntalghosh.2007@gmail.com>
  Reviewed-by: Junwang Zhao <zhjwpku@gmail.com>
  Discussion: https://postgr.es/m/202408122233.bo4adt3vh5bi@alvherre.pgsql
* Explain dropdb can't use syscache because of TOAST  (Tomas Vondra, 2024-08-19)

  Add a comment explaining dropdb() can't rely on syscache. The issue with flattened rows was fixed by commit 0f92b230f88b, but better to have a clear explanation why the systable scan is necessary. The other places doing in-place updates on pg_database have the same comment.

  Suggestion and patch by Yugo Nagata. Backpatch to 12, same as the fix.

  Author: Yugo Nagata
  Backpatch-through: 12
  Discussion: https://postgr.es/m/CAJTYsWWNkCt+-UnMhg=BiCD3Mh8c2JdHLofPxsW3m2dkDFw8RA@mail.gmail.com
* Fix regression in TLS session ticket disabling  (Daniel Gustafsson, 2024-08-19)

  Commit 274bbced disabled session tickets for TLSv1.3 on top of the already disabled TLSv1.2 session tickets, but accidentally caused a regression where TLSv1.2 session tickets were incorrectly sent. Fix by unconditionally disabling TLSv1.2 session tickets and only disable TLSv1.3 tickets when the right version of OpenSSL is used.

  Backpatch to all supported branches.

  Reported-by: Cameron Vogt <cvogt@automaticcontrols.net>
  Reported-by: Fire Emerald <fire.github@gmail.com>
  Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
  Discussion: https://postgr.es/m/DM6PR16MB3145CF62857226F350C710D1AB852@DM6PR16MB3145.namprd16.prod.outlook.com
  Backpatch-through: v12
* Fix harmless LC_COLLATE[_MASK] confusion.  (Thomas Munro, 2024-08-19)

  Commit ca051d8b101 called newlocale(LC_COLLATE, ...) instead of newlocale(LC_COLLATE_MASK, ...), in code reached only on FreeBSD. They have the same value on that OS, explaining why it worked. Fix.

  Back-patch to 14, where ca051d8b101 landed.
* Fix DROP DATABASE for databases with many ACLs  (Tomas Vondra, 2024-08-19)

  Commit c66a7d75e652 modified DROP DATABASE so that if interrupted, the database is known to be in an invalid state and can only be dropped. This is done by setting a flag using an in-place update, so that it's not lost in case of rollback.

  For databases with many ACLs, this may however fail like this:

      ERROR:  wrong tuple length

  This happens because with many ACLs, the pg_database.datacl attribute gets TOASTed. The dropdb() code reads the tuple from the syscache, which means it's detoasted. But the in-place update expects the tuple length to match the on-disk tuple.

  Fixed by reading the tuple from the catalog directly, not from syscache.

  Report and fix by Ayush Tiwari. Backpatch to 12. The DROP DATABASE fix was backpatched to 11, but 11 is EOL at this point.

  Reported-by: Ayush Tiwari
  Author: Ayush Tiwari
  Reviewed-by: Tomas Vondra
  Backpatch-through: 12
  Discussion: https://postgr.es/m/CAJTYsWWNkCt+-UnMhg=BiCD3Mh8c2JdHLofPxsW3m2dkDFw8RA@mail.gmail.com
* Fix creation of partition descriptor during concurrent detach+drop  (Alvaro Herrera, 2024-08-12)

  If a partition undergoes DETACH CONCURRENTLY immediately followed by DROP, this could cause a problem for a concurrent transaction recomputing the partition descriptor when running a prepared statement, because it tries to dereference a pointer to a tuple that's not found in a catalog scan.

  The existing retry logic added in commit dbca3469ebf8 is sufficient to cope with the overall problem, provided we don't try to dereference a non-existent heap tuple.

  Arguably, the code in RelationBuildPartitionDesc() has been wrong all along, since no check was added in commit 898e5e3290a7 against receiving a NULL tuple from the catalog scan; that bug has only become user-visible with DETACH CONCURRENTLY which was added in branch 14.

  Therefore, even though there's no known mechanism to cause a crash because of this, backpatch the addition of such a check to all supported branches. In branches prior to 14, this would cause the code to fail with a "missing relpartbound for relation XYZ" error instead of crashing; that's okay, because there are no reports of such behavior anyway.

  Author: Kuntal Ghosh <kuntalghosh.2007@gmail.com>
  Reviewed-by: Junwang Zhao <zhjwpku@gmail.com>
  Reviewed-by: Tender Wang <tndrwang@gmail.com>
  Discussion: https://postgr.es/m/18559-b48286d2eacd9a4e@postgresql.org
* Suppress Coverity warnings about Asserts in get_name_for_var_field.  (Tom Lane, 2024-08-11)

  Coverity thinks dpns->plan could be null at these points. That shouldn't really be possible, but it's easy enough to modify the Asserts so they'd not core-dump if it were true.

  These are new in b919a97a6. Back-patch to v13; the v12 version of the patch didn't have these Asserts.
* Allow adjusting session_authorization and role in parallel workers.  (Tom Lane, 2024-08-10)

  The code intends to allow GUCs to be set within parallel workers via function SET clauses, but not otherwise. However, doing so fails for "session_authorization" and "role", because the assign hooks for those attempt to set the subsidiary "is_superuser" GUC, and that call falls foul of the "not otherwise" prohibition. We can't switch to using GUC_ACTION_SAVE for this, so instead add a new GUC variable flag GUC_ALLOW_IN_PARALLEL to mark is_superuser as being safe to set anyway. (This is okay because is_superuser has context PGC_INTERNAL and thus only hard-wired calls can change it. We'd need more thought before applying the flag to other GUCs; but maybe there are other use-cases.) This isn't the prettiest fix perhaps, but other alternatives we thought of would be much more invasive.

  While here, correct a thinko in commit 059de3ca4: when rejecting a GUC setting within a parallel worker, we should return 0 not -1 if the ereport doesn't longjmp. (This seems to have no consequences right now because no caller cares, but it's inconsistent.) Improve the comments to try to forestall future confusion of the same kind.

  Despite the lack of field complaints, this seems worth back-patching. Thanks to Nathan Bossart for the idea to invent a new flag, and for review.

  Discussion: https://postgr.es/m/2833457.1723229039@sss.pgh.pa.us
* Fix "failed to find plan for subquery/CTE" errors in EXPLAIN.Tom Lane2024-08-09
| | | | | | | | | | | | | | | | | | | | | | To deparse a reference to a field of a RECORD-type output of a subquery, EXPLAIN normally digs down into the subquery's plan to try to discover exactly which anonymous RECORD type is meant. However, this can fail if the subquery has been optimized out of the plan altogether on the grounds that no rows could pass the WHERE quals, which has been possible at least since 3fc6e2d7f. There isn't anything remaining in the plan tree that would help us, so fall back to printing the field name as "fN" for the N'th column of the record. (This will actually be the right thing some of the time, since it matches the column names we assign to RowExprs.) In passing, fix a comment typo in create_projection_plan, which I noticed while experimenting with an alternative fix for this. Per bug #18576 from Vasya B. Back-patch to all supported branches. Richard Guo and Tom Lane Discussion: https://postgr.es/m/18576-9feac34e132fea9e@postgresql.org
* Refuse ATTACH of a table referenced by a foreign key  (Alvaro Herrera, 2024-08-08)

  Trying to attach a table as a partition which is already on the referenced side of a foreign key on the partitioned table that it is being attached to, leads to strange behavior: we try to clone the foreign key from the parent to the partition, but this new FK points to the partition itself, and the mix of pg_constraint rows and triggers doesn't behave well.

  Rather than trying to untangle the mess (which might be possible given sufficient time), I opted to forbid the ATTACH. This doesn't seem a problematic restriction, given that we already fail to create the foreign key if you do it the other way around, that is, having the partition first and the FK second.

  Backpatch to all supported branches.

  Reported-by: Alexander Lakhin <exclusion@gmail.com>
  Reviewed-by: Tender Wang <tndrwang@gmail.com>
  Discussion: https://postgr.es/m/18541-628a61bc267cd2d3@postgresql.org
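  A minimal sketch of the now-forbidden shape (hypothetical names, not the reporter's test case):

      CREATE TABLE ref (a int PRIMARY KEY);
      CREATE TABLE part (a int REFERENCES ref) PARTITION BY LIST (a);
      ALTER TABLE part ATTACH PARTITION ref FOR VALUES IN (1);  -- now rejected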
* Fix edge case in plpgsql's make_callstmt_target().  (Tom Lane, 2024-08-07)

  If the plancache entry for the CALL statement is already stale, it's possible for us to fetch an old procedure OID out of it, and then fail with "cache lookup failed for function NNN". In ordinary usage this never happens because make_callstmt_target is called just once immediately after building the plancache entry. It can be forced however by setting up an erroneous CALL (that causes make_callstmt_target itself to report an error), then dropping/recreating the target procedure, then repeating the erroneous CALL.

  To fix, use SPI_plan_get_cached_plan() to fetch the plancache's plan, rather than assuming we can use SPI_plan_get_plan_sources(). This shouldn't add any noticeable overhead in the normal case, and in the stale-plan case we'd have had to replan anyway a little further down.

  The other callers of SPI_plan_get_plan_sources() seem OK, because either they don't need up-to-date plans or they know that the query was just (re) planned. But add some commentary in hopes of not falling into this trap again.

  Per bug #18574 from Song Hongyu. Back-patch to v14 where this coding was introduced. (Older branches have comparable code, but it's run after any required replanning, so there's no issue.)

  Discussion: https://postgr.es/m/18574-2ce7ba3249221389@postgresql.org
* Restrict accesses to non-system views and foreign tables during pg_dump.  (Masahiko Sawada, 2024-08-05)

  When pg_dump retrieves the list of database objects and performs the data dump, there was a possibility that objects would be replaced with others of the same name, such as views, and then accessed. This vulnerability could result in code execution with superuser privileges during the pg_dump process.

  This issue can arise when dumping data of sequences, foreign tables (only 13 or later), or tables registered with a WHERE clause in the extension configuration table.

  To address this, pg_dump now utilizes the newly introduced restrict_nonsystem_relation_kind GUC parameter to restrict the accesses to non-system views and foreign tables during the dump process. This new GUC parameter is added to back branches too, but these changes do not require cluster recreation.

  Back-patch to all supported branches.

  Reviewed-by: Noah Misch
  Security: CVE-2024-7348
  Backpatch-through: 12
* Translation updates  (Peter Eisentraut, 2024-08-05)

  Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
  Source-Git-Hash: 2a4e0c192e2738ce2451e6d6970dcb2210d31800
* Revert "Allow parallel workers to cope with a newly-created session user ID."Tom Lane2024-07-31
| | | | | | | | | | | | | | | | This reverts commit 849326e49a5dd56941eb8fb4699130c301bff303. Some buildfarm animals are failing with "cannot change "client_encoding" during a parallel operation". It looks like assign_client_encoding is unhappy at being asked to roll back a client_encoding setting after a parallel worker encounters a failure. There must be more to it though: why didn't I see this during local testing? In any case, it's clear that moving the RestoreGUCState() call is not as side-effect-free as I thought. Given that the bug f5f30c22e intended to fix has gone unreported for years, it's not something that's urgent to fix; I'm not willing to risk messing with it further with only days to our next release wrap.
* Allow parallel workers to cope with a newly-created session user ID.  (Tom Lane, 2024-07-31)

  Parallel workers failed after a sequence like

      BEGIN;
      CREATE USER foo;
      SET SESSION AUTHORIZATION foo;

  because check_session_authorization could not see the uncommitted pg_authid row for "foo". This is because we ran RestoreGUCState() in a separate transaction using an ordinary just-created snapshot. The same disease afflicts any other GUC that requires catalog lookups and isn't forgiving about the lookups failing.

  To fix, postpone RestoreGUCState() into the worker's main transaction after we've set up a snapshot duplicating the leader's. This affects check_transaction_isolation and check_transaction_deferrable, which think they should only run during transaction start. Make them act like check_transaction_read_only, which already knows it should silently accept the value when InitializingParallelWorker.

  Per bug #18545 from Andrey Rachitskiy. Back-patch to all supported branches, because this has been wrong for awhile.

  Discussion: https://postgr.es/m/18545-feba138862f19aaa@postgresql.org
* Relax check for return value from second call of pg_strnxfrm().  (Jeff Davis, 2024-07-30)

  strxfrm() is not guaranteed to return the exact number of bytes needed to store the result; it may return a higher value.

  Discussion: https://postgr.es/m/32f85d88d1f64395abfe5a10dd97a62a4d3474ce.camel@j-davis.com
  Reviewed-by: Heikki Linnakangas
  Backpatch-through: 16
* Fix incorrect return value for pg_size_pretty(bigint)  (David Rowley, 2024-07-28)

  pg_size_pretty(bigint) would return the value in bytes rather than PB for the most negative bigint value. This happened due to an incorrect assumption that the absolute value of -9223372036854775808 could be stored inside a signed 64-bit type.

  Here we fix that by instead storing that value in an unsigned 64-bit type.

  This bug does exist in versions prior to 15 but the code there is sufficiently different and the bug seems sufficiently non-critical that it does not seem worth risking backpatching further.

  Author: Joseph Koshakow <koshy44@gmail.com>
  Discussion: https://postgr.es/m/CAAvxfHdTsMZPWEHUrZ=h3cky9Ccc3Mtx2whUHygY+ABP-mCmUw@mail.gmail.com
  Backpatch-through: 15
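  An illustrative case (not taken from the commit):

      -- Previously returned the raw byte count for the most negative value;
      -- with the fix it is reported in PB like other large inputs.
      SELECT pg_size_pretty('-9223372036854775808'::bigint);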
* libpq: Use strerror_r instead of strerror  (Peter Eisentraut, 2024-07-28)

  Commit 453c4687377 introduced a use of strerror() into libpq, but that is not thread-safe. Fix by using strerror_r() instead.

  In passing, update some of the code comments added by 453c4687377, as we have learned more about the reason for the change in OpenSSL that started this.

  Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
  Discussion: https://postgr.es/m/b6fb018b-f05c-4afd-abd3-318c649faf18@highgo.ca
* Disable all TLS session tickets  (Daniel Gustafsson, 2024-07-26)

  OpenSSL supports two types of session tickets for TLSv1.3, stateless and stateful. The option we've used only turns off stateless tickets leaving stateful tickets active. Use the new API introduced in 1.1.1 to disable all types of tickets.

  Backpatch to all supported versions.

  Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
  Reported-by: Andres Freund <andres@anarazel.de>
  Discussion: https://postgr.es/m/20240617173803.6alnafnxpiqvlh3g@awork3.anarazel.de
  Backpatch-through: v12
* Reset relhassubclass upon attaching table as a partition  (Alvaro Herrera, 2024-07-24)

  We don't allow inheritance parents as partitions, and have checks to prevent this; but if a table was in the past an inheritance parent and all its children have since been removed, the pg_class.relhassubclass flag may remain set, which confuses the partition pruning code (most obviously, it results in an assertion failure; in production builds it may be worse).

  Fix by resetting relhassubclass on attach.

  Backpatch to all supported versions.

  Reported-by: Alexander Lakhin <exclusion@gmail.com>
  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
  Discussion: https://postgr.es/m/18550-d5e047e9a897a889@postgresql.org
* Detect integer overflow in array_set_slice().  (Nathan Bossart, 2024-07-23)

  When provided an empty initial array, array_set_slice() fails to check for overflow when computing the new array's dimensions. While such overflows are ordinarily caught by ArrayGetNItems(), commands with the following form are accepted:

      INSERT INTO t (i[-2147483648:2147483647]) VALUES ('{}');

  To fix, perform the hazardous computations using overflow-detecting arithmetic routines. As with commit 18b585155a, the added test cases generate errors that include a platform-dependent value, so we again use psql's VERBOSITY parameter to suppress printing the message text.

  Reported-by: Alexander Lakhin
  Author: Joseph Koshakow
  Reviewed-by: Jian He
  Discussion: https://postgr.es/m/31ad2cd1-db94-bdb3-f91a-65ffdb4bef95%40gmail.com
  Backpatch-through: 12
* Correctly check updatability of columns targeted by INSERT...DEFAULT.  (Tom Lane, 2024-07-20)

  If a view has some updatable and some non-updatable columns, we failed to verify updatability of any columns for which an INSERT or UPDATE on the view explicitly specifies a DEFAULT item (unless the view has a declared default for that column, which is rare anyway, and one would almost certainly not write one for a non-updatable column). This would lead to an unexpected "attribute number N not found in view targetlist" error rather than the intended error.

  Per bug #18546 from Alexander Lakhin. This bug is old, so back-patch to all supported branches.

  Discussion: https://postgr.es/m/18546-84a292e759a9361d@postgresql.org
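  A hypothetical sketch of the situation (names invented here): a view with one updatable and one non-updatable column, where DEFAULT targets the non-updatable one.

      CREATE TABLE t (a int);
      CREATE VIEW v AS SELECT a, a + 1 AS b FROM t;  -- "b" is not updatable
      INSERT INTO v (a, b) VALUES (1, DEFAULT);
      -- now reports the intended "cannot insert into column" error instead of
      -- "attribute number N not found in view targetlist"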
* Add overflow checks to money type.  (Nathan Bossart, 2024-07-19)

  None of the arithmetic functions for the money type handle overflow. This commit introduces several helper functions with overflow checking and makes use of them in the money type's arithmetic functions.

  Fixes bug #18240.

  Reported-by: Alexander Lakhin
  Author: Joseph Koshakow
  Discussion: https://postgr.es/m/18240-c5da758d7dc1ecf0%40postgresql.org
  Discussion: https://postgr.es/m/CAAvxfHdBPOyEGS7s%2Bxf4iaW0-cgiq25jpYdWBqQqvLtLe_t6tw%40mail.gmail.com
  Backpatch-through: 12
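  For instance (an illustrative case, not from the commit's tests), arithmetic just past the top of the money range should now raise an overflow error rather than silently wrapping:

      SELECT '92233720368547758.07'::money + '0.01'::money;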
* Ensure vacuum removes all visibly dead tuples older than OldestXmin  (Melanie Plageman, 2024-07-19)

  If vacuum fails to remove a tuple with xmax older than VacuumCutoffs->OldestXmin and younger than GlobalVisState->maybe_needed, it will loop infinitely in lazy_scan_prune(), which compares tuples' visibility information to OldestXmin.

  Starting in version 14, which uses GlobalVisState for visibility testing during pruning, it is possible for GlobalVisState->maybe_needed to precede OldestXmin if maybe_needed is forced to go backward while vacuum is running. This can happen if a disconnected standby with a running transaction older than VacuumCutoffs->OldestXmin reconnects to the primary after vacuum initially calculates GlobalVisState and OldestXmin.

  Fix this by having vacuum always remove tuples older than OldestXmin during pruning. This is okay because the standby won't replay the tuple removal until the tuple is removable. Thus, the worst that can happen is a recovery conflict.

  Fixes BUG# 17257

  Back-patched in versions 14-17

  Author: Melanie Plageman
  Reviewed-by: Noah Misch, Peter Geoghegan, Robert Haas, Andres Freund, and Heikki Linnakangas
  Discussion: https://postgr.es/m/CAAKRu_Y_NJzF4-8gzTTeaOuUL3CcGoXPjXcAHbTTygT8AyVqag%40mail.gmail.com
* Propagate query IDs of utility statements in functions  (Michael Paquier, 2024-07-19)

  For utility statements defined within a function, the query tree is copied to a PlannedStmt as utility commands do not require planning. However, the query ID was missing from the information passed down.

  This leads to plugins relying on the query ID like pg_stat_statements to not be able to track utility statements within function calls. Tests are added to check this behavior, depending on pg_stat_statements.track.

  This is an old bug. Now, query IDs for utilities are compiled using their parsed trees rather than the query string since v16 (3db72ebcbe20), leading to less bloat with utilities, so backpatch down only to this version.

  Author: Anthonin Bonnefoy
  Discussion: https://postgr.es/m/CAO6_XqrGp-uwBqi3vBPLuRULKkddjC7R5QZCgsFren=8E+m2Sg@mail.gmail.com
  Backpatch-through: 16
* Fix bad indentation introduced in 43cd30bcd1c  (Andres Freund, 2024-07-15)

  Oops.

  Reported-by: Nathan Bossart <nathandbossart@gmail.com>
  Discussion: https://postgr.es/m/ZpVZB9rH5tHllO75@nathan
  Backpatch: 12-, like 43cd30bcd1c
* Fix type confusion in guc_var_compare()  (Andres Freund, 2024-07-15)

  Before this change guc_var_compare() cast the input arguments to const struct config_generic *. That's not quite right however, as the input on one side is often just a char *. Instead just use char *, the first field in config_generic.

  This fixes a -Warray-bounds warning with some versions of gcc. While the warning is only known to be triggered for <= 15, the issue the warning points out seems real, so apply the fix everywhere.

  Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
  Reported-by: Erik Rijkers <er@xs4all.nl>
  Suggested-by: Andres Freund <andres@anarazel.de>
  Discussion: https://postgr.es/m/a74a1a0d-0fd2-3649-5224-4f754e8f91aa%40xs4all.nl
* Avoid unhelpful internal error for incorrect recursive-WITH queries.  (Tom Lane, 2024-07-14)

  checkWellFormedRecursion would issue "missing recursive reference" if a WITH RECURSIVE query contained a single self-reference but that self-reference was inside a top-level WITH, ORDER BY, LIMIT, etc, rather than inside the second arm of the UNION as expected.

  We already intended to throw more-on-point errors for such cases, but those error checks must be done before examining the UNION arm in order to have the desired results. So this patch need only move some code (and improve the comments).

  Per bug #18536 from Alexander Lakhin. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/18536-0a342ec07901203e@postgresql.org
* Don't lose partitioned table reltuples=0 after relhassubclass=f.  (Noah Misch, 2024-07-13)

  ANALYZE sets relhassubclass=f when a partitioned table no longer has partitions. An ANALYZE doing that proceeded to apply the inplace update of pg_class.reltuples to the old pg_class tuple instead of the new tuple, losing that reltuples=0 change if the ANALYZE committed. Non-partitioning inheritance trees were unaffected.

  Back-patch to v14, where commit 375aed36ad83f0e021e9bdd3a0034c0c992c66dc introduced maintenance of partitioned table pg_class.reltuples.

  Reported by Alexander Lakhin.

  Discussion: https://postgr.es/m/a295b499-dcab-6a99-c06e-01cf60593344@gmail.com
* Fix lost Windows socket EOF events.  (Thomas Munro, 2024-07-13)

  Winsock only signals an FD_CLOSE event once if the other end of the socket shuts down gracefully. Because each WaitLatchOrSocket() call constructs and destroys a new event handle every time, with unlucky timing we can lose it and hang. We get away with this only if the other end disconnects non-gracefully, because FD_CLOSE is repeatedly signaled in that case.

  To fix this design flaw in our Windows socket support fundamentally, we'd probably need to rearchitect it so that a single event handle exists for the lifetime of a socket, or switch to completely different multiplexing or async I/O APIs. That's going to be a bigger job and probably wouldn't be back-patchable.

  This brute force kludge closes the race by explicitly polling with MSG_PEEK before sleeping.

  Back-patch to all supported releases. This should hopefully clear up some random build farm and CI hang failures reported over the years. It might also allow us to try using graceful shutdown in more places again (reverted in commit 29992a6) to fix instability in the transmission of FATAL error messages, but that isn't done by this commit.

  Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
  Tested-by: Alexander Lakhin <exclusion@gmail.com>
  Discussion: https://postgr.es/m/176008.1715492071%40sss.pgh.pa.us
* Fix ALTER TABLE DETACH for inconsistent indexes  (Alvaro Herrera, 2024-07-12)

  When a partitioned table has an index that doesn't support a constraint, but a partition has an equivalent index that does, then a DETACH operation would misbehave: a crash in assertion-enabled systems (because we fail to find the constraint in the parent that we expect to), or a broken coninhcount value (-1) in production systems (because we blindly believe that we've successfully detached the parent).

  While we should reject an ATTACH of a partition with such an index, we have failed to do so in existing releases, so adding an error in stable releases might break the (unlikely) existing applications that rely on this behavior. At this point I don't even want to reject them in master, because it'd break pg_upgrade if such databases exist, and there would be no easy way to fix existing databases without expensive index rebuilds.

  (Later on we could add ALTER TABLE ... ADD CONSTRAINT USING INDEX to partitioned tables, which would allow the user to fix such patterns. At that point we could add more restrictions to prevent the problem from its root.)

  Also, add a test case that leaves one table in this condition, so that we can verify that pg_upgrade continues to work if we later decide to change the policy on the master branch.

  Backpatch to all supported branches.

  Co-authored-by: Tender Wang <tndrwang@gmail.com>
  Reported-by: Alexander Lakhin <exclusion@gmail.com>
  Reviewed-by: Tender Wang <tndrwang@gmail.com>
  Reviewed-by: Michael Paquier <michael@paquier.xyz>
  Discussion: https://postgr.es/m/18500-62948b6fe5522f56@postgresql.org
* Fix possibility of logical decoding partial transaction changes.  (Masahiko Sawada, 2024-07-11)

  When creating and initializing a logical slot, the restart_lsn is set to the latest WAL insertion point (or the latest replay point on standbys). Subsequently, WAL records are decoded from that point to find the start point for extracting changes in the DecodingContextFindStartpoint() function. Since the initial restart_lsn could be in the middle of a transaction, the start point must be a consistent point where we won't see the data for partial transactions.

  Previously, when not building a full snapshot, serialized snapshots were restored, and the SnapBuild jumps to the consistent state even while finding the start point. Consequently, the slot's restart_lsn and confirmed_flush could be set to the middle of a transaction. This could lead to various unexpected consequences. Specifically, there were reports of logical decoding emitting partial transactions, and assertion failures occurred because only subtransactions were decoded without decoding their top-level transaction until decoding the commit record.

  To resolve this issue, the changes prevent restoring the serialized snapshot and jumping to the consistent state while finding the start point.

  On v17 and HEAD, a flag indicating whether snapshot restores should be skipped has been added to the SnapBuild struct, and SNAPBUILD_VERSION has been bumped. On backbranches, the flag is stored in the LogicalDecodingContext instead, preserving on-disk compatibility.

  Backpatch to all supported versions.

  Reported-by: Drew Callahan
  Reviewed-by: Amit Kapila, Hayato Kuroda
  Discussion: https://postgr.es/m/2444AA15-D21B-4CCE-8052-52C7C2DAFE5C%40amazon.com
  Backpatch-through: 12
* Make our back branches compatible with libxml2 2.13.x.  (Tom Lane, 2024-07-10)

  This back-patches HEAD commits 066e8ac6e, 6082b3d5d, e7192486d, and 896cd266f into supported branches. Changes:

  * Use xmlAddChildList not xmlAddChild in XMLSERIALIZE (affects v16 and up only). This was a flat-out coding mistake that we got away with due to lax checking in previous versions of xmlAddChild.

  * Use xmlParseInNodeContext not xmlParseBalancedChunkMemory. This is to dodge a bug in xmlParseBalancedChunkMemory in libxml2 releases 2.13.0-2.13.2. While that bug is now fixed upstream and will probably never be seen in any production-oriented distro, it is currently a problem on some more-bleeding-edge-friendly platforms.

  * Suppress "chunk is not well balanced" errors from libxml2, unless it is the only error. This eliminates an error-reporting discrepancy between 2.13 and older releases. This error is almost always redundant with previous errors, if not flat-out inappropriate, which is why 2.13 changed the behavior and why nobody's likely to miss it.

  Erik Wienhold and Tom Lane, per report from Frank Streitzig.

  Discussion: https://postgr.es/m/trinity-b0161630-d230-4598-9ebc-7a23acdb37cb-1720186432160@3c-app-gmx-bap25
  Discussion: https://postgr.es/m/trinity-361ba18b-541a-4fe7-bc63-655ae3a7d599-1720259822452@3c-app-gmx-bs01
* Fix scale clamping in numeric round() and trunc().  (Dean Rasheed, 2024-07-08)

  The numeric round() and trunc() functions clamp the scale argument to the range between +/- NUMERIC_MAX_RESULT_SCALE (2000), which is much smaller than the actual allowed range of type numeric. As a result, they return incorrect results when asked to round/truncate more than 2000 digits before or after the decimal point.

  Fix by using the correct upper and lower scale limits based on the actual allowed (and documented) range of type numeric.

  While at it, use the new NUMERIC_WEIGHT_MAX constant instead of SHRT_MAX in all other overflow checks, and fix a comment thinko in power_var() introduced by e54a758d24 -- the minimum value of ln_dweight is -NUMERIC_DSCALE_MAX (-16383), not -SHRT_MAX, though this doesn't affect the point being made in the comment, that the resulting local_rscale value may exceed NUMERIC_MAX_DISPLAY_SCALE (1000).

  Back-patch to all supported branches.

  Dean Rasheed, reviewed by Joel Jacobson.

  Discussion: https://postgr.es/m/CAEZATCXB%2BrDTuMjhK5ZxcouufigSc-X4tGJCBTMpZ3n%3DxxQuhg%40mail.gmail.com
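  A hypothetical illustration (values chosen only to exceed the old 2000-digit clamp):

      -- Previously the scale argument was clamped to 2000, so the lone digit in
      -- the 2001st decimal place was rounded away to zero.
      SELECT round(1e-2001, 2005);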
* Fix right-anti-joins when the inner relation is proven unique  (Richard Guo, 2024-07-08)

  For an inner_unique join, we always assume that the executor will stop scanning for matches after the first match. Therefore, for a mergejoin that is inner_unique and whose mergeclauses are sufficient to identify a match, we set the skip_mark_restore flag to true, indicating that the executor need not do mark/restore calls. However, merge-right-anti-join did not get this memo and continues scanning the inner side for matches after the first match. If there are duplicates in the outer scan, we may incorrectly skip matching some inner tuples, which can lead to wrong results.

  Here we fix this issue by ensuring that merge-right-anti-join also advances to next outer tuple after the first match in inner_unique cases. This also saves cycles by avoiding unnecessary scanning of inner tuples after the first match.

  Although hash-right-anti-join does not suffer from this wrong results issue, we apply the same change to it as well, to help save cycles for the same reason.

  Per bug #18522 from Antti Lampinen, and bug #18526 from Feliphe Pozzer. Back-patch to v16 where right-anti-join was introduced.

  Author: Richard Guo
  Discussion: https://postgr.es/m/18522-c7a8956126afdfd0@postgresql.org
* Fix incorrect sentinel byte logic in GenerationRealloc()  (David Rowley, 2024-07-06)

  This only affects MEMORY_CONTEXT_CHECKING builds.

  This fixes an off-by-one issue in GenerationRealloc()'s fast-path code, which tries to reuse the existing allocation if the existing chunk is >= the new requested size. The code there thought it was always ok to use the existing chunk, but when oldsize == size there isn't enough space to store the sentinel byte. If both sizes matched exactly set_sentinel() would overwrite the first byte beyond the chunk and then subsequent GenerationRealloc() calls could then fail the Assert(chunk->requested_size < oldsize) check which is trying to ensure the chunk is large enough to store the sentinel.

  The same issue does not exist in aset.c as the sentinel checking code only adds a sentinel byte if there's enough space in the chunk.

  Reported-by: Alexander Lakhin <exclusion@gmail.com>
  Discussion: https://postgr.es/m/49275921-7b39-41af-5eb8-97b50ce3312e@gmail.com
  Backpatch-through: 16, where the problem was introduced by 0e480385e
* SQL/JSON: Fix some obsolete comments.  (Amit Langote, 2024-07-04)

  JSON_OBJECT(), JSON_OBJECTAGG(), JSON_ARRAY(), and JSON_ARRAYAGG() added in 7081ac46ace are not transformed into direct calls to user-defined functions as the comments claim. Fix by mentioning instead that they are transformed into JsonConstructorExpr nodes, which may call them, for example, for the *AGG() functions.

  Reported-by: Alexander Lakhin <exclusion@gmail.com>
  Discussion: https://postgr.es/m/058c856a-e090-ac42-ff00-ffe394f52a87%40gmail.com
  Backpatch-through: 16
* Preserve CurrentMemoryContext across notify and sinval interrupts.  (Tom Lane, 2024-07-01)

  ProcessIncomingNotify is called from the main processing loop that normally runs in MessageContext. That outer-loop code assumes that whatever it allocates will be cleaned up when we're done processing the current client message --- but if we service a notify interrupt, then whatever gets allocated before the next switch into MessageContext will be permanently leaked in TopMemoryContext, because CommitTransactionCommand sets CurrentMemoryContext to TopMemoryContext.

  There are observable leaks associated with (at least) encoding conversion of incoming queries and parameters attached to Bind messages. sinval catchup interrupts have a similar problem. There might be others, but I've not identified any other clear cases.

  To fix, take care to save and restore CurrentMemoryContext across the Start/CommitTransactionCommand calls in these functions.

  Per bug #18512 from wizardbrony. Commit to back branches only; in HEAD, this was dealt with by the riskier but more thoroughgoing approach in commit 1afe31f03.

  Discussion: https://postgr.es/m/3478884.1718656625@sss.pgh.pa.us