path: root/src/backend
Commit log (most recent first):
...
* Make dsm_impl_posix_resize more future-proof. (Thomas Munro, 2022-07-16)

  Commit 4518c798 blocks signals for a short region of code, but it assumed that whatever called it had the signal mask set to UnBlockSig on entry. That may be true today (or may even not be, in extensions in the wild), but it would be better not to make that assumption. We should save-and-restore the caller's signal mask.

  The PG_SETMASK() portability macro couldn't be used for that, which is why it wasn't done before. But... considering that commit a65e0864 established back in 9.6 that supported POSIX systems have sigprocmask(), and that this is POSIX-only code, there is no reason not to use standard sigprocmask() directly to achieve that.

  Back-patch to all supported releases, like 4518c798 and 80845b7c.

  Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
  Discussion: https://postgr.es/m/CA%2BhUKGKx6Biq7_UuV0kn9DW%2B8QWcpJC1qwhizdtD9tN-fn0H0g%40mail.gmail.com

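  A minimal sketch of that save-and-restore pattern using only standard POSIX calls; the function name and the work done inside the blocked region are placeholders, not the actual dsm_impl.c code.

      #include <signal.h>

      static int
      with_all_signals_blocked(int (*critical_work)(void))
      {
          sigset_t    block_all;
          sigset_t    saved_mask;
          int         rc;

          /* Block everything blockable for the short critical region. */
          sigfillset(&block_all);
          if (sigprocmask(SIG_SETMASK, &block_all, &saved_mask) != 0)
              return -1;

          rc = critical_work();

          /* Restore whatever mask the caller had, instead of assuming it. */
          sigprocmask(SIG_SETMASK, &saved_mask, NULL);
          return rc;
      }
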
* Don't clobber postmaster sigmask in dsm_impl_resize. (Thomas Munro, 2022-07-15)

  Commit 4518c798 intended to block signals in regular backends that allocate DSM segments, but dsm_impl_resize() is also reached by dsm_postmaster_startup(). It's not OK to clobber the postmaster's signal mask, so only manipulate the signal mask when under the postmaster.

  Back-patch to all releases, like 4518c798.

  Discussion: https://postgr.es/m/CA%2BhUKGKNpK%3D2OMeea_AZwpLg7Bm4%3DgYWk7eDjZ5F6YbozfOf8w%40mail.gmail.com

* Block signals while allocating DSM memory. (Thomas Munro, 2022-07-14)

  On Linux, we call posix_fallocate() on shm_open()'d memory to avoid later potential SIGBUS (see commit 899bd785).

  Based on field reports of systems stuck in an EINTR retry loop there, we made it possible to break out of that loop via slightly odd coding where the CHECK_FOR_INTERRUPTS() call was somewhat removed from the loop (see commit 422952ee). On further reflection, that was not a great choice for at least two reasons:

  1. If interrupts were held, the CHECK_FOR_INTERRUPTS() would do nothing and the EINTR error would be surfaced to the user.

  2. If EINTR was reported but neither QueryCancelPending nor ProcDiePending was set, then we'd dutifully retry, but with a bit more understanding of how posix_fallocate() works, it's now clear that you can get into a loop that never terminates. posix_fallocate() is not a function that can do some of the job and tell you about progress if it's interrupted; it has to undo what it's done so far and report EINTR, and if signals keep arriving faster than it can complete (cf recovery conflict signals), you're stuck.

  Therefore, for now, we'll simply block most signals to guarantee progress. SIGQUIT is not blocked (see InitPostmasterChild()), because its expected handler doesn't return, and unblockable signals like SIGCONT are not expected to arrive at a high rate. For good measure, we'll include the ftruncate() call in the blocked region, and add a retry loop.

  Back-patch to all supported releases.

  Reported-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
  Reported-by: Nicola Contu <nicola.contu@gmail.com>
  Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
  Reviewed-by: Andres Freund <andres@anarazel.de>
  Discussion: https://postgr.es/m/20220701154105.jjfutmngoedgiad3%40alvherre.pgsql

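  A hedged sketch of the general shape described above, illustrative only (the real code uses PostgreSQL's BlockSig mask and ereport machinery): block signals around both ftruncate() and posix_fallocate(), and retry ftruncate() on EINTR.

      #include <errno.h>
      #include <fcntl.h>
      #include <signal.h>
      #include <unistd.h>

      static int
      allocate_with_signals_blocked(int fd, off_t size)
      {
          sigset_t    block, save;
          int         rc;

          sigfillset(&block);
          sigprocmask(SIG_SETMASK, &block, &save);

          /* ftruncate() can also report EINTR, so retry it. */
          do
              rc = ftruncate(fd, size);
          while (rc < 0 && errno == EINTR);

          if (rc == 0)
          {
              /* posix_fallocate() returns an error number; it does not set errno. */
              rc = posix_fallocate(fd, 0, size);
              if (rc != 0)
              {
                  errno = rc;
                  rc = -1;
              }
          }

          sigprocmask(SIG_SETMASK, &save, NULL);
          return rc;
      }
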
* Fix lock assertions in dshash.c. (Thomas Munro, 2022-07-11)

  dshash.c previously maintained flags to be able to assert that you didn't hold any partition lock. These flags could get out of sync with reality in error scenarios.

  Get rid of all that, and make assertions about the locks themselves instead. Since LWLockHeldByMe() loops internally, we don't want to put that inside another loop over all partition locks. Introduce a new debugging-only interface LWLockAnyHeldByMe() to avoid that.

  This problem was noted by Tom and Andres while reviewing changes to support the new shared memory stats system, and later showed up in reality while working on commit 389869af.

  Back-patch to 11, where dshash.c arrived.

  Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
  Reported-by: Andres Freund <andres@anarazel.de>
  Reviewed-by: Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>
  Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
  Reviewed-by: Andres Freund <andres@anarazel.de>
  Discussion: https://postgr.es/m/20220311012712.botrpsikaufzteyt@alap3.anarazel.de
  Discussion: https://postgr.es/m/CA%2BhUKGJ31Wce6HJ7xnVTKWjFUWQZPBngxfJVx4q0E98pDr3kAw%40mail.gmail.com

* Fix alias matching in transformLockingClause(). (Dean Rasheed, 2022-07-07)

  When locking a specific named relation for a FOR [KEY] UPDATE/SHARE clause, transformLockingClause() finds the relation to lock by scanning the rangetable for an RTE with a matching eref->aliasname. However, it failed to account for the visibility rules of a join RTE. If a join RTE doesn't have a user-supplied alias, it will have a generated eref->aliasname of "unnamed_join" that is not visible as a relation name in the parse namespace. Such an RTE needs to be skipped, otherwise it might be found in preference to a regular base relation with a user-supplied alias of "unnamed_join", preventing it from being locked.

  In addition, if a join RTE doesn't have a user-supplied alias, but does have a join_using_alias, then the RTE needs to be matched using that alias rather than the generated eref->aliasname, otherwise a misleading "relation not found" error will be reported rather than a "join cannot be locked" error.

  Backpatch all the way, except for the second part which only goes back to 14, where JOIN USING aliases were added.

  Dean Rasheed, reviewed by Tom Lane.

  Discussion: https://postgr.es/m/CAEZATCUY_KOBnqxbTSPf=7fz9HWPnZ5Xgb9SwYzZ8rFXe7nb=w@mail.gmail.com

* Remove %error-verbose directive from jsonpath parser (Andrew Dunstan, 2022-07-03)

  None of the other bison parsers contains this directive, and it gives rise to some unfortunate and impenetrable messages, so just remove it.

  Backpatch to release 12, where it was introduced.

  Per gripe from Erik Rijkers.

  Discussion: https://postgr.es/m/ba069ce2-a98f-dc70-dc17-2ccf2a9bf7c7@xs4all.nl

* Harden dsm_impl.c against unexpected EEXIST. (Thomas Munro, 2022-07-01)

  Previously, we trusted the OS not to report EEXIST unless we'd passed in IPC_CREAT | IPC_EXCL or O_CREAT | O_EXCL, as appropriate. Solaris's shm_open() can in fact do that, causing us to crash because we didn't ereport and then we blithely assumed the mapping was successful.

  Let's treat EEXIST just like any other error, unless we're actually trying to create a new segment. This applies to shm_open(), where this behavior has been seen, and also to the equivalent operations for our sysv and mmap modes just on principle.

  Based on the underlying reason for the error, namely contention on a lock file managed by Solaris librt for each distinct name, this problem is only likely to happen on 15 and later, because the new shared memory stats system produces shm_open() calls for the same path from potentially large numbers of backends concurrently during authentication. Earlier releases only shared memory segments between a small number of parallel workers under one Gather node. You could probably hit it if you tried hard enough though, and we should have been more defensive in the first place.

  Therefore, back-patch to all supported releases.

  Per build farm animal margay. This isn't the end of the story, though, it just changes random crashes into random "File exists" errors; more work needed for a green build farm.

  Reviewed-by: Robert Haas <robertmhaas@gmail.com>
  Discussion: https://postgr.es/m/CA%2BhUKGKqKrCV5xKWfh9rnm%3Do%3DDwZLTLtnsj_XpUi9g5%3DV%2B9oyg%40mail.gmail.com

* Fix visibility check when XID is committed in CLOG but not in procarray. (Heikki Linnakangas, 2022-06-27)

  TransactionIdIsInProgress had a fast path to return 'false' if the single-item CLOG cache said that the transaction was known to be committed. However, that was wrong, because a transaction is first marked as committed in the CLOG but doesn't become visible to others until it has removed its XID from the proc array. That could lead to an error:

      ERROR: t_xmin is uncommitted in tuple to be updated

  or allow an UPDATE to go ahead without blocking, before the previous UPDATE on the same row was made visible.

  The window is usually very short, but synchronous replication makes it much wider, because the wait for the synchronous replica happens in that window.

  Another thing that makes it hard to hit is that it's hard to get such a commit-in-progress transaction into the single-item CLOG cache. Normally, if you call TransactionIdIsInProgress on such a transaction, it determines that the XID is in progress without checking the CLOG and without populating the cache. One way to prime the cache is to explicitly call pg_xact_status() on the XID. Another way is to use a lot of subtransactions, so that the subxid cache in the proc array overflows, making TransactionIdIsInProgress rely on pg_subtrans and CLOG checks.

  This has been broken ever since it was introduced in 2008, but the race condition is very hard to hit, especially without synchronous replication. There were a couple of reports of the error starting from summer 2021, but no one was able to find the root cause then.

  TransactionIdIsKnownCompleted() is now unused. In 'master', remove it, but I left it in place in backbranches in case it's used by extensions.

  Also change pg_xact_status() to check TransactionIdIsInProgress(). Previously, it only checked the CLOG, and returned "committed" before the transaction was actually made visible to other queries. Note that this also means that you cannot use pg_xact_status() to reproduce the bug anymore, even if the code wasn't fixed.

  Report and analysis by Konstantin Knizhnik. Patch by Simon Riggs, with the pg_xact_status() change added by me.

  Author: Simon Riggs
  Reviewed-by: Andres Freund
  Discussion: https://www.postgresql.org/message-id/flat/4da7913d-398c-e2ad-d777-f752cf7f0bbb%40garret.ru

* CREATE INDEX: use the original userid for more ACL checks. (Noah Misch, 2022-06-25)

  Commit a117cebd638dd02e5c2e791c25e43745f233111b used the original userid for ACL checks located directly in DefineIndex(), but it still adopted the table owner userid for more ACL checks than intended. That broke dump/reload of indexes that refer to an operator class, collation, or exclusion operator in a schema other than "public" or "pg_catalog".

  Back-patch to v10 (all supported versions), like the earlier commit.

  Nathan Bossart and Noah Misch

  Discussion: https://postgr.es/m/f8a4105f076544c180a87ef0c4822352@stmuk.bayern.de

* Fix memory leak due to LogicalRepRelMapEntry.attrmap. (Amit Kapila, 2022-06-23)

  When rebuilding the relation mapping on subscribers, we were not releasing the attribute mapping's memory which was no longer required.

  The attribute mapping used in logical tuple conversion was refactored in PG13 (by commit e1551f96e6) but we forgot to update the related code that frees the attribute map.

  Author: Hou Zhijie
  Reviewed-by: Amit Langote, Amit Kapila, Shi yu
  Backpatch-through: 10, where it was introduced
  Discussion: https://postgr.es/m/OSZPR01MB6310F46CD425A967E4AEF736FDA49@OSZPR01MB6310.jpnprd01.prod.outlook.com

* Fix SPI's handling of errors during transaction commit. (Tom Lane, 2022-06-22)

  SPI_commit previously left it up to the caller to recover from any error occurring during commit. Since that's complicated and requires use of low-level xact.c facilities, it's not too surprising that no caller got it right. Let's move the responsibility for cleanup into spi.c.

  Doing that requires redefining SPI_commit as starting a new transaction, so that it becomes equivalent to SPI_commit_and_chain except that you get default transaction characteristics instead of preserving the prior transaction's characteristics. We can make this pretty transparent API-wise by redefining SPI_start_transaction() as a no-op. Callers that expect to do something in between might be surprised, but available evidence is that no callers do so.

  Having made that API redefinition, we can fix this mess by having SPI_commit[_and_chain] trap errors and start a new, clean transaction before re-throwing the error. Likewise for SPI_rollback[_and_chain]. Some cleanup is also needed in AtEOXact_SPI, which was nowhere near smart enough to deal with SPI contexts nested inside a committing context.

  While plperl and pltcl need no changes beyond removing their now-useless SPI_start_transaction() calls, plpython needs some more work because it hadn't gotten the memo about catching commit/rollback errors in the first place. Such an error resulted in longjmp'ing out of the Python interpreter, which leaks Python stack entries at present and is reported to crash Python 3.11 altogether. Add the missing logic to catch such errors and convert them into Python exceptions.

  This is a back-patch of commit 2e517818f. That's now aged long enough to reduce the concerns about whether it will break something, and we do need to ensure that supported branches will work with Python 3.11.

  Peter Eisentraut and Tom Lane

  Discussion: https://postgr.es/m/3375ffd8-d71c-2565-e348-a597d6e739e3@enterprisedb.com
  Discussion: https://postgr.es/m/17416-ed8fe5d7213d6c25@postgresql.org

* Fix stale values in partition map entries on subscribers. (Amit Kapila, 2022-06-21)

  We build the partition map entries on subscribers while applying the changes for update/delete on partitions. The component relation in each entry is closed after its use, so we need to update it on successive uses of cache entries.

  This problem has been there since the original commit f1ac27bfda that introduced this code, but we didn't notice it until the recent commit 26b3455afa started to use the component relation of the partition map cache entry.

  Reported-by: Tom Lane, as per buildfarm
  Author: Amit Langote, Hou Zhijie
  Reviewed-by: Amit Kapila, Shi Yu
  Backpatch-through: 13, where it was introduced
  Discussion: https://postgr.es/m/OSZPR01MB6310F46CD425A967E4AEF736FDA49@OSZPR01MB6310.jpnprd01.prod.outlook.com

* Fix partition table's REPLICA IDENTITY checking on the subscriber. (Amit Kapila, 2022-06-21)

  In logical replication, we check whether the target table on the subscriber is updatable by comparing the replica identity of the table on the publisher with the table on the subscriber. When the target table is a partitioned table, we only check its replica identity but not that of the partition tables. This leads to an assertion failure while applying changes for update/delete, as we expect those to succeed only when the corresponding partition table has a primary key or has a replica identity defined.

  Fix it by checking the replica identity of the partition table while applying changes.

  Reported-by: Shi Yu
  Author: Shi Yu, Hou Zhijie
  Reviewed-by: Amit Langote, Amit Kapila
  Backpatch-through: 13, where it was introduced
  Discussion: https://postgr.es/m/OSZPR01MB6310F46CD425A967E4AEF736FDA49@OSZPR01MB6310.jpnprd01.prod.outlook.com

* Fix data inconsistency between publisher and subscriber. (Amit Kapila, 2022-06-16)

  We were not updating the partition map cache on the subscriber even when the corresponding remote rel changed. Due to this, data was getting incorrectly replicated for partition tables after the publisher had changed the table schema.

  Fix it by resetting the required entries in the partition map cache after receiving a new relation mapping from the publisher.

  Reported-by: Shi Yu
  Author: Shi Yu, Hou Zhijie
  Reviewed-by: Amit Langote, Amit Kapila
  Backpatch-through: 13, where it was introduced
  Discussion: https://postgr.es/m/OSZPR01MB6310F46CD425A967E4AEF736FDA49@OSZPR01MB6310.jpnprd01.prod.outlook.com

* Fix cache look-up failures while applying changes in logical replication. (Amit Kapila, 2022-06-15)

  While building a new attrmap which maps partition attribute numbers to remoterel's, we incorrectly updated the map for dropped column attributes. Later, this caused a cache look-up failure when we tried to use the map to fetch the information about attributes.

  This also fixes the partition map cache invalidation, which was using the wrong type cast to fetch the entry. We were using a stale partition map entry after invalidation, which led to the assertion or cache look-up failure.

  Reported-by: Shi Yu
  Author: Hou Zhijie, Shi Yu
  Reviewed-by: Amit Langote, Amit Kapila
  Backpatch-through: 13, where it was introduced
  Discussion: https://postgr.es/m/OSZPR01MB6310F46CD425A967E4AEF736FDA49@OSZPR01MB6310.jpnprd01.prod.outlook.com

* Un-break whole-row Vars referencing domain-over-composite types. (Tom Lane, 2022-06-10)

  In commit ec62cb0aa, I foolishly replaced ExecEvalWholeRowVar's lookup_rowtype_tupdesc_domain call with just lookup_rowtype_tupdesc, because I didn't see how a domain could be involved there, and there were no regression test cases to jog my memory. But the existing code was correct, so revert that change and add a test case showing why it's necessary.

  (Note: per comment in struct DatumTupleFields, it is correct to produce an output tuple that's labeled with the base composite type, not the domain; hence just blindly looking through the domain is correct here.)

  Per bug #17515 from Dan Kubb. Back-patch to v11 where domains over composites became a thing.

  Discussion: https://postgr.es/m/17515-a24737438363aca0@postgresql.org

* Fix off-by-one loop termination condition in pg_stat_get_subscription(). (Tom Lane, 2022-06-07)

  pg_stat_get_subscription scanned one more LogicalRepWorker array entry than is really allocated. In the worst case this could lead to SIGSEGV, if the LogicalRepCtx data structure is near the end of shared memory. That seems quite unlikely though (thanks to the ordering of calls in CreateSharedMemoryAndSemaphores) and we've heard no field reports of it. A more likely misbehavior is one row of garbage data in the function's result, but even that is not real likely because of the check that the pid field matches some live backend.

  Report and fix by Kuntal Ghosh. This bug is old, so back-patch to all supported branches.

  Discussion: https://postgr.es/m/CAGz5QCJykEDzW6jQK6Yz7Qh_PMtD=95de_7QoocbVR2Qy8hWZA@mail.gmail.com

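  The underlying mistake is a plain off-by-one in a loop bound; a generic sketch with made-up names, not the actual pg_stat_get_subscription code:

      #define MAX_WORKERS 8

      static int  worker_pids[MAX_WORKERS];

      static int
      count_active_workers(void)
      {
          int     n = 0;

          /* The fix amounts to using "< MAX_WORKERS" where the code had the
           * equivalent of "<= MAX_WORKERS", which read one entry past the array. */
          for (int i = 0; i < MAX_WORKERS; i++)
          {
              if (worker_pids[i] != 0)
                  n++;
          }
          return n;
      }
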
* Silence compiler warnings from some older compilers. (Tom Lane, 2022-06-01)

  Since a117cebd6, some older gcc versions issue "variable may be used uninitialized in this function" complaints for brin_summarize_range. Silence that using the same coding pattern as in bt_index_check_internal; arguably, a117cebd6 had too narrow a view of which compilers might give trouble.

  Nathan Bossart and Tom Lane. Back-patch as the previous commit was.

  Discussion: https://postgr.es/m/20220601163537.GA2331988@nathanxps13

* Ensure ParseTzFile() closes the input file after failing. (Tom Lane, 2022-05-31)

  We hadn't noticed this because (a) few people feed invalid timezone abbreviation files to the server, and (b) in typical scenarios guc.c would throw ereport(ERROR) and then transaction abort handling would silently clean up the leaked file reference. However, it was possible to observe file leakage warnings if one breaks an already-active abbreviation file, because guc.c does not throw ERROR when loading supposedly-validated settings during session start or SIGHUP processing.

  Report and fix by Kyotaro Horiguchi (cosmetic adjustments by me).

  Discussion: https://postgr.es/m/20220530.173740.748502979257582392.horikyota.ntt@gmail.com

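  A sketch of the leak's shape, with made-up names and a stand-in validity test; the point is only that every early-return error path must close the file it opened:

      #include <stdio.h>
      #include <string.h>

      static int
      load_abbrev_file(const char *path)
      {
          FILE   *fp = fopen(path, "r");
          char    line[256];

          if (fp == NULL)
              return -1;

          while (fgets(line, sizeof(line), fp) != NULL)
          {
              /* Stand-in for "syntax error found in this line". */
              if (strchr(line, '\t') == NULL)
              {
                  fclose(fp);     /* previously the leaked file reference */
                  return -1;
              }
          }

          fclose(fp);
          return 0;
      }
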
* Handle NULL for short descriptions of custom GUC variables (Michael Paquier, 2022-05-28)

  If a short description is specified as NULL in one of the various DefineCustomXXXVariable() functions available to external modules to define a custom parameter, SHOW ALL would crash. This change teaches SHOW ALL to properly handle NULL short descriptions, as well as any code paths that manipulate it, to gain in flexibility. Note that help_config.c was already able to do that, when describing a set of GUCs for postgres --describe-config.

  Author: Steve Chavez
  Reviewed-by: Nathan Bossart, Andres Freund, Michael Paquier, Tom Lane
  Discussion: https://postgr.es/m/CAGRrpzY6hO-Kmykna_XvsTv8P2DshGiU6G3j8yGao4mk0CqjHA%40mail.gmail.com
  Backpatch-through: 10

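  The fix boils down to tolerating a NULL short description wherever it is displayed or copied; a trivial sketch of that defensive pattern (illustrative, not the actual guc.c code):

      #include <stdio.h>
      #include <string.h>

      /* Return something printable even when a GUC's short description is NULL. */
      static const char *
      displayable_desc(const char *short_desc)
      {
          return (short_desc != NULL) ? short_desc : "";
      }

      int
      main(void)
      {
          const char *descs[] = {"Sets the planner's fudge factor.", NULL};

          for (size_t i = 0; i < sizeof(descs) / sizeof(descs[0]); i++)
              printf("desc: %s (len %zu)\n",
                     displayable_desc(descs[i]),
                     strlen(displayable_desc(descs[i])));
          return 0;
      }
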
* Remove misguided SSL key file ownership check in libpq. (Tom Lane, 2022-05-26)

  Commits a59c79564 et al. tried to sync libpq's SSL key file permissions checks with what we've used for years in the backend. We did not intend to create any new failure cases, but it turns out we did: restricting the key file's ownership breaks cases where the client is allowed to read a key file despite not having the identical UID. In particular a client running as root used to be able to read someone else's key file; and having seen that I suspect that there are other, less-dubious use cases that this restriction breaks on some platforms.

  We don't really need an ownership check, since if we can read the key file despite its having restricted permissions, it must have the right ownership --- under normal conditions anyway, and the point of this patch is that any additional corner cases where that works should be deemed allowable, as they have been historically.

  Hence, just drop the ownership check, and rearrange the permissions check to get rid of its faulty assumption that geteuid() can't be zero. (Note that the comparable backend-side code doesn't have to cater for geteuid() == 0, since the server rejects that very early on.)

  This does have the end result that the permissions safety check used for a root user's private key file is weaker than that used for anyone else's. While odd, root really ought to know what she's doing with file permissions, so I think this is acceptable.

  Per report from Yogendra Suralkar. Like the previous patch, back-patch to all supported branches.

  Discussion: https://postgr.es/m/MW3PR15MB3931DF96896DC36D21AFD47CA3D39@MW3PR15MB3931.namprd15.prod.outlook.com

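  A rough sketch of the policy the commit message describes, not libpq's actual code: drop the ownership test entirely, and apply a looser mode check only when running as root.

      #include <sys/stat.h>
      #include <stdbool.h>
      #include <unistd.h>

      static bool
      key_file_mode_ok(const char *path)
      {
          struct stat buf;

          if (stat(path, &buf) != 0)
              return false;

          if (geteuid() == 0)
          {
              /* Root can read the file regardless of ownership, so only insist
               * that it is not writable by group or other. */
              return (buf.st_mode & (S_IWGRP | S_IWOTH)) == 0;
          }

          /* Everyone else: no group/other access at all (0600 or tighter). */
          return (buf.st_mode & (S_IRWXG | S_IRWXO)) == 0;
      }
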
* Show 'AS "?column?"' explicitly when it's important.Tom Lane2022-05-21
| | | | | | | | | | | | | | | | | | | | | | | | | | | | ruleutils.c was coded to suppress the AS label for a SELECT output expression if the column name is "?column?", which is the parser's fallback if it can't think of something better. This is fine, and avoids ugly clutter, so long as (1) nothing further up in the parse tree relies on that column name or (2) the same fallback would be assigned when the rule or view definition is reloaded. Unfortunately (2) is far from certain, both because ruleutils.c might print the expression in a different form from how it was originally written and because FigureColname's rules might change in future releases. So we shouldn't rely on that. Detecting exactly whether there is any outer-level use of a SELECT column name would be rather expensive. This patch takes the simpler approach of just passing down a flag indicating whether there *could* be any outer use; for example, the output column names of a SubLink are not referenceable, and we also do not care about the names exposed by the right-hand side of a setop. This is sufficient to suppress unwanted clutter in all but one case in the regression tests. That seems like reasonable evidence that it won't be too much in users' faces, while still fixing the cases we need to fix. Per bug #17486 from Nicolas Lutic. This issue is ancient, so back-patch to all supported branches. Discussion: https://postgr.es/m/17486-1ad6fd786728b8af@postgresql.org
* Fix DDL deparse of CREATE OPERATOR CLASS (Alvaro Herrera, 2022-05-20)

  When an implicit operator family is created, it wasn't getting reported. Make it do so. This has always been missing.

  Backpatch to 10.

  Author: Masahiko Sawada <sawada.mshk@gmail.com>
  Reported-by: Leslie LEMAIRE <leslie.lemaire@developpement-durable.gouv.fr>
  Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
  Reviewed-by: Michael Paquier <michael@paquier.xyz>
  Discussion: https://postgr.es/m/f74d69e151b22171e8829551b1159e77@developpement-durable.gouv.fr

* Repurpose PROC_COPYABLE_FLAGS as PROC_XMIN_FLAGS (Alvaro Herrera, 2022-05-19)

  This is a slight, convenient semantics change from what commit 0f0cfb494004 ("Fix parallel operations that prevent oldest xmin from advancing") introduced that lets us simplify the coding in the one place where it is used.

  Backpatch to 13. This is related to commit 6fea65508a1a ("Tighten ComputeXidHorizons' handling of walsenders") rewriting the code site where this is used, which has not yet been backpatched, but it may well be in the future.

  Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
  Discussion: https://postgr.es/m/202204191637.eldwa2exvguw@alvherre.pgsql

* Check column list length in XMLTABLE/JSON_TABLE alias (Alvaro Herrera, 2022-05-18)

  We weren't checking the length of the column list in the alias clause of an XMLTABLE or JSON_TABLE function (a "tablefunc" RTE), and it was possible to make the server crash by passing an overly long one. Fix it by throwing an error in that case, like the other places that deal with alias lists.

  In passing, modify the equivalent test used for join RTEs to look like the other ones, which was different for no apparent reason.

  This bug came in when XMLTABLE was born in version 10; backpatch to all stable versions.

  Reported-by: Wang Ke <krking@zju.edu.cn>
  Discussion: https://postgr.es/m/17480-1c9d73565bb28e90@postgresql.org

* Fix control file update done in restartpoints still running after promotion (Michael Paquier, 2022-05-16)

  If a cluster is promoted (i.e. the control file shows a state different from DB_IN_ARCHIVE_RECOVERY) while CreateRestartPoint() is still processing, this function could miss an update of the control file for "checkPoint" and "checkPointCopy" but still do the recycling and/or removal of the past WAL segments, assuming that the to-be-updated LSN values should be used as reference points for the cleanup. This causes a follow-up restart attempting crash recovery to fail with a PANIC on a missing checkpoint record if the end-of-recovery checkpoint triggered by the promotion did not complete because the cluster abruptly stopped or crashed before it finished. The PANIC would be caused by the redo LSN referred to in the control file being located in a segment already gone, recycled by the previous restartpoint with "checkPoint" out of sync in the control file.

  This commit fixes the update of the control file during restartpoints so that "checkPoint" and "checkPointCopy" are updated even if the cluster has been promoted while a restartpoint is running, to be on par with the set of WAL segments actually recycled at the end of CreateRestartPoint().

  7863ee4 has already fixed this problem on master, but the release timing of the latest point versions did not leave me enough time to study and fix it on all the stable branches.

  Reported-by: Fujii Masao, Rui Zhao
  Author: Kyotaro Horiguchi
  Reviewed-by: Nathan Bossart, Michael Paquier
  Discussion: https://postgr.es/m/20220316.102444.2193181487576617583.horikyota.ntt@gmail.com
  Backpatch-through: 10

* Make pull_var_clause() handle GroupingFuncs exactly like Aggrefs. (Tom Lane, 2022-05-12)

  This follows in the footsteps of commit 2591ee8ec by removing one more ill-advised shortcut from planning of GroupingFuncs. It's true that we don't intend to execute the argument expression(s) at runtime, but we still have to process any Vars appearing within them, or we risk failure at setrefs.c time (or more fundamentally, in EXPLAIN trying to print such an expression). Vars in upper plan nodes have to have referents in the next plan level, whether we ever execute 'em or not.

  Per bug #17479 from Michael J. Sullivan. Back-patch to all supported branches.

  Richard Guo

  Discussion: https://postgr.es/m/17479-6260deceaf0ad304@postgresql.org

* Fix the logical replication timeout during large transactions. (Amit Kapila, 2022-05-11)

  The problem is that we don't send keep-alive messages for a long time while processing large transactions during logical replication when we don't send any data of such transactions. This can happen when the table modified in the transaction is not published or because all the changes got filtered. We do try to send a keepalive if necessary at the end of the transaction (via WalSndWriteData()), but by that time the subscriber side can time out and exit.

  To fix this, we try to send the keepalive message if required after processing a certain threshold of changes.

  Reported-by: Fabrice Chapuis
  Author: Wang wei and Amit Kapila
  Reviewed-by: Masahiko Sawada, Euler Taveira, Hou Zhijie, Hayato Kuroda
  Backpatch-through: 10
  Discussion: https://postgr.es/m/CAA5-nLARN7-3SLU_QUxfy510pmrYK6JJb=bk3hcgemAM_pAv+w@mail.gmail.com

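  A sketch of the described approach with assumed names (the threshold value and the keepalive helper are placeholders): count the changes that produce no output and give the sender a chance to emit a keepalive after every so many of them.

      #define CHANGES_THRESHOLD 100   /* assumed value, for illustration */

      /* Placeholder for "send a keepalive and process subscriber replies". */
      static void
      update_progress_and_keepalive(void)
      {
          /* the real code would call the walsender's keepalive machinery here */
      }

      static void
      note_skipped_change(void)
      {
          static int  changes_count = 0;

          if (++changes_count >= CHANGES_THRESHOLD)
          {
              update_progress_and_keepalive();
              changes_count = 0;
          }
      }
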
* Fix core dump in transformValuesClause when there are no columns. (Tom Lane, 2022-05-09)

  The parser code that transformed VALUES from row-oriented to column-oriented lists failed if there were zero columns. You can't write that straightforwardly (though probably you should be able to), but the case can be reached by expanding a "tab.*" reference to a zero-column table.

  Per bug #17477 from Wang Ke. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/17477-0af3c6ac6b0a6ae0@postgresql.org

* Revert "Disallow infinite endpoints in generate_series() for timestamps."Tom Lane2022-05-09
| | | | | | | | | | | | | | | | | | | This reverts commit eafdf9de06e9b60168f5e47cedcfceecdc6d4b5f and its back-branch counterparts. Corey Huinker pointed out that we'd discussed this exact change back in 2016 and rejected it, on the grounds that there's at least one usage pattern with LIMIT where an infinite endpoint can usefully be used. Perhaps that argument needs to be re-litigated, but there's no time left before our back-branch releases. To keep our options open, restore the status quo ante; if we do end up deciding to change things, waiting one more quarter won't hurt anything. Rather than just doing a straight revert, I added a new test case demonstrating the usage with LIMIT. That'll at least remind us of the issue if we forget again. Discussion: https://postgr.es/m/3603504.1652068977@sss.pgh.pa.us Discussion: https://postgr.es/m/CADkLM=dzw0Pvdqp5yWKxMd+VmNkAMhG=4ku7GnCZxebWnzmz3Q@mail.gmail.com
* In REFRESH MATERIALIZED VIEW, set user ID before running user code. (Noah Misch, 2022-05-09)

  It intended to, but did not, achieve this. Adopt the new standard of setting user ID just after locking the relation.

  Back-patch to v10 (all supported versions).

  Reviewed by Simon Riggs. Reported by Alvaro Herrera.

  Security: CVE-2022-1552

* Make relation-enumerating operations be security-restricted operations. (Noah Misch, 2022-05-09)

  When a feature enumerates relations and runs functions associated with all found relations, the feature's user shall not need to trust every user having permission to create objects. BRIN-specific functionality in autovacuum neglected to account for this, as did pg_amcheck and CLUSTER. An attacker having permission to create non-temp objects in at least one schema could execute arbitrary SQL functions under the identity of the bootstrap superuser. CREATE INDEX (not a relation-enumerating operation) and REINDEX protected themselves too late. This change extends to the non-enumerating amcheck interface.

  Back-patch to v10 (all supported versions).

  Sergey Shinderuk, reviewed (in earlier versions) by Alexander Lakhin. Reported by Alexander Lakhin.

  Security: CVE-2022-1552

* Translation updates (Peter Eisentraut, 2022-05-09)

  Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
  Source-Git-Hash: 7c8dcb6669ccc6ae33090d02ed92f0bad6ada742

* Fix possibility of self-deadlock in ResolveRecoveryConflictWithBufferPin(). (Andres Freund, 2022-05-02)

  The tests added in 9f8a050f68d failed nearly reliably on FreeBSD in CI, and occasionally on the buildfarm. That turns out to be caused not by a bug in the test, but by a longstanding bug in recovery conflict handling.

  The standby timeout handler, used by ResolveRecoveryConflictWithBufferPin(), executed SendRecoveryConflictWithBufferPin() inside a signal handler. A bad idea, because the deadlock timeout handler (or a spurious latch set) could have interrupted ProcWaitForSignal(). If unlucky that could cause a self-deadlock on ProcArrayLock, if the deadlock check is in SendRecoveryConflictWithBufferPin()->CancelDBBackends().

  To fix, set a flag in StandbyTimeoutHandler(), and check the flag in ResolveRecoveryConflictWithBufferPin().

  Subsequently the recovery conflict tests will be backpatched.

  Discussion: https://postgr.es/m/20220413002626.udl7lll7f3o7nre7@alap3.anarazel.de
  Backpatch: 10-

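  The fix follows the usual "signal handlers only set flags" pattern; a generic sketch with illustrative names, not the server's actual code:

      #include <signal.h>

      static volatile sig_atomic_t timeout_pending = 0;

      /* Runs in signal context: record the event and nothing else. */
      static void
      standby_timeout_handler(int signo)
      {
          (void) signo;
          timeout_pending = 1;
      }

      static void
      resolve_conflict(void)
      {
          signal(SIGALRM, standby_timeout_handler);

          /* ... wait for the conflicting buffer pin to be released ... */

          if (timeout_pending)
          {
              timeout_pending = 0;
              /* Now, outside the handler, it is safe to take locks and signal
               * other backends (the work previously done in signal context). */
          }
      }
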
* Fix typo in comment. (Etsuro Fujita, 2022-05-02)

* Remove inadequate assertion check in CTE inlining. (Tom Lane, 2022-04-21)

  inline_cte() expected to find exactly as many references to the target CTE as its cterefcount indicates. While that should be accurate for the tree as emitted by the parser, there are some optimizations that occur upstream of here that could falsify it, notably removal of unused subquery output expressions.

  Trying to make the accounting 100% accurate seems expensive and doomed to future breakage. It's not really worth it, because all this code is protecting is downstream assumptions that every referenced CTE has a plan. Let's convert those assertions to regular test-and-elog just in case there's some actual problem, and then drop the failing assertion.

  Per report from Tomas Vondra (thanks also to Richard Guo for analysis). Back-patch to v12 where the faulty code came in.

  Discussion: https://postgr.es/m/29196a1e-ed47-c7ca-9be2-b1c636816183@enterprisedb.com

* Fix CLUSTER tuplesorts on abbreviated expressions. (Peter Geoghegan, 2022-04-20)

  CLUSTER sort won't use the datum1 SortTuple field when clustering against an index whose leading key is an expression. This makes it unsafe to use the abbreviated keys optimization, which was missed by the logic that sets up SortSupport state. Affected tuplesorts output tuples in a completely bogus order as a result (the wrong SortSupport based comparator was used for the leading attribute).

  This issue is similar to the bug fixed on the master branch by recent commit cc58eecc5d. But it's a far older issue, that dates back to the introduction of the abbreviated keys optimization by commit 4ea51cdfe8.

  Backpatch to all supported versions.

  Author: Peter Geoghegan <pg@bowt.ie>
  Author: Thomas Munro <thomas.munro@gmail.com>
  Discussion: https://postgr.es/m/CA+hUKG+bA+bmwD36_oDxAoLrCwZjVtST2fqe=b4=qZcmU7u89A@mail.gmail.com
  Backpatch: 10-

* Disallow infinite endpoints in generate_series() for timestamps. (Tom Lane, 2022-04-20)

  Such cases will lead to infinite loops, so they're of no practical value. The numeric variant of generate_series() already threw error for this, so borrow its message wording.

  Per report from Richard Wesley. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/91B44E7B-68D5-448F-95C8-B4B3B0F5DEAF@duckdblabs.com

* Fix breakage in AlterFunction(). (Tom Lane, 2022-04-19)

  An ALTER FUNCTION command that tried to update both the function's proparallel property and its proconfig list failed to do the former, because it stored the new proparallel value into a tuple that was no longer the interesting one. Carelessness in 7aea8e4f2.

  (I did not bother with a regression test, because the only likely future breakage would be for someone to ignore the comment I added and add some other field update after the heap_modify_tuple step. A test using existing function properties could not catch that.)

  Per report from Bryn Llewellyn. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/8AC9A37F-99BD-446F-A2F7-B89AD0022774@yugabyte.com

* Fix the check to limit sync workers. (Amit Kapila, 2022-04-19)

  We don't allow invoking more sync workers once we have reached the sync worker limit per subscription. But the check to enforce this also didn't allow launching an apply worker when it got restarted.

  This code was introduced by commit de43897122, but we caught the problem only with the test added by recent commit c91f71b9dc, which started failing occasionally in the buildfarm.

  As per buildfarm.

  Diagnosed-by: Amit Kapila, Masahiko Sawada, Tomas Vondra
  Author: Amit Kapila
  Backpatch-through: 10
  Discussion: https://postgr.es/m/CAH2L28vddB_NFdRVpuyRBJEBWjz4BSyTB=_ektNRH8NJ1jf95g@mail.gmail.com
  Discussion: https://postgr.es/m/f90d2b03-4462-ce95-a524-d91464e797c8@enterprisedb.com

* Avoid invalid array reference in transformAlterTableStmt(). (Tom Lane, 2022-04-18)

  Don't try to look at the attidentity field of system attributes, because they're not there in the TupleDescAttr array. Sometimes this is harmless because we accidentally pick up a zero, but otherwise we'll report "no owned sequence found" from an attempt to alter a system attribute. (It seems possible that a SIGSEGV could occur, too, though I've not seen it in testing.)

  It's not in this function's charter to complain that you can't alter a system column, so instead just hard-wire an assumption that system attributes aren't identities.

  I didn't bother with a regression test because the appearance of the bug is very erratic.

  Per bug #17465 from Roman Zharkov. Back-patch to all supported branches. (There's not actually a live bug before v12, because before that get_attidentity() did the right thing anyway. But for consistency I changed the test in the older branches too.)

  Discussion: https://postgr.es/m/17465-f2a554a6cb5740d3@postgresql.org

* Rethink the delay-checkpoint-end mechanism in the back-branches. (Robert Haas, 2022-04-14)

  The back-patch of commit bbace5697df12398e87ffd9879171c39d27f5b33 had the unfortunate effect of changing the layout of PGPROC in the back-branches, which could break extensions. This happened because it changed the delayChkpt from type bool to type int. So, change it back, and add a new bool delayChkptEnd field instead. The new field should fall within what used to be padding space within the struct, and so hopefully won't cause any extensions to break.

  Per report from Markus Wanner and discussion with Tom Lane and others.

  Patch originally by me, somewhat revised by Markus Wanner per a suggestion from Michael Paquier. A very similar patch was developed by Kyotaro Horiguchi, but I failed to see the email in which that was posted before writing one of my own.

  Discussion: http://postgr.es/m/CA+Tgmoao-kUD9c5nG5sub3F7tbo39+cdr8jKaOVEs_1aBWcJ3Q@mail.gmail.com
  Discussion: http://postgr.es/m/20220406.164521.17171257901083417.horikyota.ntt@gmail.com

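  Schematically, the back-branch fix keeps the old field's type so existing struct offsets do not move, and tucks the new flag into what used to be padding. The field names follow the commit message; the rest of the struct is elided:

      #include <stdbool.h>

      typedef struct SketchPGPROC
      {
          /* ... earlier fields unchanged ... */
          bool        delayChkpt;     /* restored to its original bool type */
          bool        delayChkptEnd;  /* new flag, expected to land in former
                                       * padding so that later field offsets
                                       * and the struct size stay the same */
          /* ... later fields unchanged ... */
      } SketchPGPROC;
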
* Prevent access to no-longer-pinned buffer in heapam_tuple_lock(). (Tom Lane, 2022-04-13)

  heap_fetch() used to have a "keep_buf" parameter that told it to return ownership of the buffer pin to the caller after finding that the requested tuple TID exists but is invisible to the specified snapshot. This was thoughtlessly removed in commit 5db6df0c0, which broke heapam_tuple_lock() (formerly EvalPlanQualFetch) because that function needs to do more accesses to the tuple even if it's invisible. The net effect is that we would continue to touch the page for a microsecond or two after releasing pin on the buffer. Usually no harm would result; but if a different session decided to defragment the page concurrently, we could see garbage data and mistakenly conclude that there's no newer tuple version to chain up to. (It's hard to say whether this has happened in the field. The bug was actually found thanks to a later change that allowed valgrind to detect accesses to non-pinned buffers.)

  The most reasonable way to fix this is to reintroduce keep_buf, although I made it behave slightly differently: buffer ownership is passed back only if there is a valid tuple at the requested TID. In HEAD, we can just add the parameter back to heap_fetch(). To avoid an API break in the back branches, introduce an additional function heap_fetch_extended() in those branches.

  In HEAD there is an additional, less obvious API change: tuple->t_data will be set to NULL in all cases where buffer ownership is not returned, in particular when the tuple exists but fails the time qual (and !keep_buf). This is to defend against any other callers attempting to access non-pinned buffers. We concluded that making that change in back branches would be more likely to introduce problems than cure any.

  In passing, remove a comment about heap_fetch that was obsoleted by 9a8ee1dc6.

  Per bug #17462 from Daniil Anisimov. Back-patch to v12 where the bug was introduced.

  Discussion: https://postgr.es/m/17462-9c98a0f00df9bd36@postgresql.org

* Suppress "variable 'pagesaving' set but not used" warning.Tom Lane2022-04-06
| | | | | | | With asserts disabled, late-model clang notices that this variable is incremented but never otherwise read. Discussion: https://postgr.es/m/3171401.1649275153@sss.pgh.pa.us
* Fix typo in comment. (Etsuro Fujita, 2022-03-30)

* Revert "Fix replay of create database records on standby"Alvaro Herrera2022-03-29
| | | | | | | This reverts commit 49d9cfc68bf4. The approach taken by this patch has problems, so we'll come up with a radically different fix. Discussion: https://postgr.es/m/CA+TgmoYcUPL+WOJL2ZzhH=zmrhj0iOQ=iCFM0SuYqBbqZEamEg@mail.gmail.com
* Fix breakage of get_ps_display() in the PS_USE_NONE case. (Tom Lane, 2022-03-27)

  Commit 8c6d30f21 caused this function to fail to set *displen in the PS_USE_NONE code path. If the variable's previous value had been negative, that'd lead to a memory clobber at some call sites. We'd managed not to notice due to very thin test coverage of such configurations, but this appears to explain buildfarm member lorikeet's recent struggles.

  Credit to Andrew Dunstan for spotting the problem. Back-patch to v13 where the bug was introduced.

  Discussion: https://postgr.es/m/136102.1648320427@sss.pgh.pa.us

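  The underlying rule is that an out-parameter must be assigned on every code path, including the "no ps display support" one; a generic sketch with illustrative names:

      static const char *
      get_display_sketch(int *displen, int have_ps_display)
      {
          static const char buffer[] = "some status";

          if (!have_ps_display)
          {
              *displen = 0;       /* the assignment that was missing */
              return "";
          }

          *displen = (int) (sizeof(buffer) - 1);
          return buffer;
      }
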
* Fix replay of create database records on standby (Alvaro Herrera, 2022-03-25)

  Crash recovery on standby may encounter missing directories when replaying create database WAL records. Prior to this patch, the standby would fail to recover in such a case. However, the directories could be legitimately missing. Consider a sequence of WAL records as follows:

      CREATE DATABASE
      DROP DATABASE
      DROP TABLESPACE

  If, after replaying the last WAL record and removing the tablespace directory, the standby crashes and has to replay the create database record again, the crash recovery must be able to move on.

  This patch adds a mechanism similar to invalid-page tracking, to keep a tally of missing directories during crash recovery. If all the missing directory references are matched with corresponding drop records at the end of crash recovery, the standby can safely continue following the primary.

  Backpatch to 13, at least for now. The bug is older, but fixing it in older branches requires more careful study of the interactions with commit e6d8069522c8, which appeared in 13.

  A new TAP test file is added to verify the condition. However, because it depends on commit d6d317dbf615, it can only be added to branch master. I (Álvaro) manually verified that the code behaves as expected in branch 14. It's a bit nervous-making to leave the code uncovered by tests in older branches, but leaving the bug unfixed is even worse. Also, the main reason this fix took so long is precisely that we couldn't agree on a good strategy to approach testing for the bug, so perhaps this is the best we can do.

  Diagnosed-by: Paul Guo <paulguo@gmail.com>
  Author: Paul Guo <paulguo@gmail.com>
  Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
  Author: Asim R Praveen <apraveen@pivotal.io>
  Discussion: https://postgr.es/m/CAEET0ZGx9AvioViLf7nbR_8tH9-=27DN5xWJ2P9-ROH16e4JUA@mail.gmail.com

* Fix possible recovery trouble if TRUNCATE overlaps a checkpoint. (Robert Haas, 2022-03-24)

  If TRUNCATE causes some buffers to be invalidated and thus the checkpoint does not flush them, TRUNCATE must also ensure that the corresponding files are truncated on disk. Otherwise, a replay from the checkpoint might find that the buffers exist but have the wrong contents, which may cause replay to fail.

  Report by Teja Mupparti. Patch by Kyotaro Horiguchi, per a design suggestion from Heikki Linnakangas, with some changes to the comments by me. Review of this and a prior patch that approached the issue differently by Heikki Linnakangas, Andres Freund, Álvaro Herrera, Masahiko Sawada, and Tom Lane.

  Discussion: http://postgr.es/m/BYAPR06MB6373BF50B469CA393C614257ABF00@BYAPR06MB6373.namprd06.prod.outlook.com

* Don't try to translate NULL in GetConfigOptionByNum(). (Andres Freund, 2022-03-23)

  Noticed via -fsanitize=undefined. Introduced when a few columns in GetConfigOptionByNum() / pg_settings started to be translated in 72be8c29a / PG 12.

  Backpatch to all affected branches, for the same reasons as 46ab07ffda9.

  Discussion: https://postgr.es/m/20220323173537.ll7klrglnp4gn2um@alap3.anarazel.de
  Backpatch: 12-