path: root/src/backend/storage
Commit message / Author / Age
* Lock before setting relhassubclass on RELKIND_PARTITIONED_INDEX. (Noah Misch, 2024-06-27)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | Commit 5b562644fec696977df4a82790064e8287927891 added a comment that SetRelationHasSubclass() callers must hold this lock. When commit 17f206fbc824d2b4b14480199ca9ff7dea417eda extended use of this column to partitioned indexes, it didn't take the lock. As the latter commit message mentioned, we currently never reset a partitioned index to relhassubclass=f. That largely avoids harm from the lock omission. The cause for fixing this now is to unblock introducing a rule about locks required to heap_update() a pg_class row. This might cause more deadlocks. It gives minor user-visible benefits: - If an ALTER INDEX SET TABLESPACE runs concurrently with ALTER TABLE ATTACH PARTITION or CREATE PARTITION OF, one transaction blocks instead of failing with "tuple concurrently updated". (Many cases of DDL concurrency still fail that way.) - Match ALTER INDEX ATTACH PARTITION in choosing to lock the index. While not user-visible today, we'll need this if we ever make something set the flag to false for a partitioned index, like ANALYZE does today for tables. Back-patch to v12 (all supported versions), the plan for the commit relying on the new rule. In back branches, add LockOrStrongerHeldByMe() instead of adding a LockHeldByMe() parameter. Reviewed (in an earlier version) by Robert Haas. Discussion: https://postgr.es/m/20240611024525.9f.nmisch@google.com
* Fix MVCC bug with prepared xact with subxacts on standby (Heikki Linnakangas, 2024-06-27)

We did not recover the subtransaction IDs of prepared transactions when starting a hot standby from a shutdown checkpoint. As a result, such subtransactions were considered aborted, rather than in-progress. That would lead to hint bits being set incorrectly, and the subtransactions suddenly becoming visible to old snapshots when the prepared transaction was committed.

To fix, update pg_subtrans with the prepared transactions' subxids when starting hot standby from a shutdown checkpoint. The snapshots taken from that state need to be marked as "suboverflowed", so that we also check pg_subtrans.

Backport to all supported versions.

Discussion: https://www.postgresql.org/message-id/6b852e98-2d49-4ca1-9e95-db419a2696e0@iki.fi
* Fix WaitEventSet resource leak in WaitLatchOrSocket(). (Etsuro Fujita, 2024-04-11)

This function would have the same issue we solved in commit 501cfd07d: If an error is thrown after calling CreateWaitEventSet(), the file descriptor (on epoll- or kqueue-based systems) or handles (on Windows) that the WaitEventSet contains are leaked.

Like that commit, use PG_TRY-PG_FINALLY (PG_TRY-PG_CATCH in v12) to make sure the WaitEventSet is freed properly.

Back-patch to all supported versions, but as we do not have this issue in HEAD (cf. commit 50c67c201), no need to apply this patch to it.

Discussion: https://postgr.es/m/CAPmGK16MqdDoD8oatp8SQWaEa4vS3nfQqDN_Sj9YRuu5J3Lj9g%40mail.gmail.com
* Avoid deadlock during orphan temp table removal. (Tom Lane, 2024-04-02)
| | | | | | | | | | | | | | | | | | | | | | | If temp tables have dependencies (such as sequences) then it's possible for autovacuum's cleanup of orphan temp tables to deadlock against an incoming backend that's trying to clean out the temp namespace for its own use. That can happen because RemoveTempRelations' performDeletion call can visit objects within the namespace in an order different from the order in which a per-table deletion will visit them. To fix, observe that performDeletion will begin by taking an exclusive lock on the temp namespace (even though it won't actually delete it). So, if we can get a shared lock on the namespace, we can be sure we're not running concurrently with RemoveTempRelations, while also not conflicting with ordinary use of the namespace. This requires introducing a conditional version of LockDatabaseObject, but that's no big deal. (It's surprising we've got along without that this long.) Report and patch by Mikhail Zhilin. Back-patch to all supported branches. Discussion: https://postgr.es/m/c43ce028-2bc2-4865-9b89-3f706246eed5@postgrespro.ru
* Fix 'mmap' DSM implementation with allocations larger than 4 GB (Heikki Linnakangas, 2024-02-13)

Fixes bug #18341. Backpatch to all supported versions.

Discussion: https://www.postgresql.org/message-id/18341-ce16599e7fd6228c@postgresql.org
* lwlock: Fix quadratic behavior with very long wait lists (Andres Freund, 2024-01-18)

Until now LWLockDequeueSelf() sequentially searched the list of waiters to see if the current proc is still on the list of waiters, or has already been removed. In extreme workloads, where the wait lists are very long, this leads to quadratic behavior: #backends iterating over a list #backends long.

Additionally, the likelihood of needing to call LWLockDequeueSelf() in the first place also increases with the increased length of the wait queue, as it becomes more likely that a lock is released while waiting for the wait list lock, which is held for longer during lock release.

Due to the exponential back-off in perform_spin_delay() this is surprisingly hard to detect. We should make that easier, e.g. by adding a wait event around the pg_usleep() - but that's a separate patch.

The fix is simple - track whether a proc is currently waiting in the wait list or already removed but waiting to be woken up in PGPROC->lwWaiting.

In some workloads with a lot of clients contending for a small number of lwlocks (e.g. WALWriteLock), the fix can substantially increase throughput.

This was originally fixed for 16~ with a4adc31f6902 without a backpatch, and we have heard complaints from users impacted by this quadratic behavior in older versions as well.

Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://postgr.es/m/20221027165914.2hofzp4cvutj6gin@awork3.anarazel.de
Discussion: https://postgr.es/m/CALj2ACXktNbG=K8Xi7PSqbofTZozavhaxjatVc14iYaLu4Maag@mail.gmail.com
Backpatch-through: 12
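
The fix's core idea can be illustrated with a small standalone C sketch (illustrative only; the types and names here are made up and much simpler than the real lwlock.c/PGPROC code): record queue membership in the waiter itself, so that "am I still on the wait list?" becomes an O(1) state check instead of an O(n) scan.

    #include <stdbool.h>
    #include <stdio.h>

    typedef enum { WS_NOT_WAITING, WS_WAITING, WS_PENDING_WAKEUP } WaitState;

    typedef struct Waiter
    {
        struct Waiter *prev;
        struct Waiter *next;
        WaitState      state;   /* plays the role of PGPROC->lwWaiting as a tri-state */
    } Waiter;

    typedef struct { Waiter head; } WaitList;       /* circular doubly linked list */

    static void list_init(WaitList *l) { l->head.prev = l->head.next = &l->head; }

    static void enqueue(WaitList *l, Waiter *w)
    {
        w->prev = l->head.prev;
        w->next = &l->head;
        l->head.prev->next = w;
        l->head.prev = w;
        w->state = WS_WAITING;
    }

    static void dequeue(Waiter *w)
    {
        w->prev->next = w->next;
        w->next->prev = w->prev;
        w->state = WS_NOT_WAITING;
    }

    /* old approach: O(n) scan of the list to learn whether w is still queued */
    static bool still_queued_by_scan(WaitList *l, Waiter *w)
    {
        for (Waiter *it = l->head.next; it != &l->head; it = it->next)
            if (it == w)
                return true;
        return false;
    }

    int main(void)
    {
        WaitList lst;
        Waiter   a, b;

        list_init(&lst);
        enqueue(&lst, &a);
        enqueue(&lst, &b);
        dequeue(&a);
        /* the O(1) state check agrees with the O(n) scan, without walking the list */
        printf("a: scan=%d state=%d  b: scan=%d state=%d\n",
               still_queued_by_scan(&lst, &a), a.state == WS_WAITING,
               still_queued_by_scan(&lst, &b), b.state == WS_WAITING);
        return 0;
    }

With very long wait lists, every waiter doing the scan version is what produces the quadratic cost described above.
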
* Remove incorrect file reference in comment. (Etsuro Fujita, 2023-11-13)

Commit b7eda3e0e moved XidInMVCCSnapshot() from tqual.c into snapmgr.c, but follow-up commit c91560def incorrectly updated this reference. We could fix it, but as pointed out by Daniel Gustafsson, 1) the reader can easily find the file that contains the definition of that function, e.g. by grepping, and 2) this kind of reference is prone to going stale; so let's just remove it.

Back-patch to all supported branches.

Reviewed by Daniel Gustafsson.

Discussion: https://postgr.es/m/CAPmGK145VdKkPBLWS2urwhgsfidbSexwY-9zCL6xSUJH%2BBTUUg%40mail.gmail.com
* Ban role pg_signal_backend from more superuser backend types. (Noah Misch, 2023-11-06)
| | | | | | | | | | | | | | | | | | | Documentation says it cannot signal "a backend owned by a superuser". On the contrary, it could signal background workers, including the logical replication launcher. It could signal autovacuum workers and the autovacuum launcher. Block all that. Signaling autovacuum workers and those two launchers doesn't stall progress beyond what one could achieve other ways. If a cluster uses a non-core extension with a background worker that does not auto-restart, this could create a denial of service with respect to that background worker. A background worker with bugs in its code for responding to terminations or cancellations could experience those bugs at a time the pg_signal_backend member chooses. Back-patch to v11 (all supported versions). Reviewed by Jelte Fennema-Nio. Reported by Hemanth Sandrana and Mahendrakar Srinivasarao. Security: CVE-2023-5870
* Avoid calling proc_exit() in processes forked by system(). (Nathan Bossart, 2023-10-17)

The SIGTERM handler for the startup process immediately calls proc_exit() for the duration of the restore_command, i.e., a call to system(). This system() call forks a new process to execute the shell command, and this child process inherits the parent's signal handlers. If both the parent and child processes receive SIGTERM, both will attempt to call proc_exit(). This can end badly. For example, both processes will try to remove themselves from the PGPROC shared array.

To fix this problem, this commit adds a check in StartupProcShutdownHandler() to see whether MyProcPid == getpid(). If they match, this is the parent process, and we can proc_exit() like before. If they do not match, this is a child process, and we just emit a message to STDERR (in a signal safe manner) and _exit(), thereby skipping any problematic exit callbacks.

This commit also adds checks in proc_exit(), ProcKill(), and AuxiliaryProcKill() that verify they are not being called within such child processes.

Suggested-by: Andres Freund
Reviewed-by: Thomas Munro, Andres Freund
Discussion: https://postgr.es/m/Y9nGDSgIm83FHcad%40paquier.xyz
Discussion: https://postgr.es/m/20230223231503.GA743455%40nathanxps13
Backpatch-through: 11
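
The getpid() comparison can be shown with a small standalone C program (a simplified sketch, not the PostgreSQL sources; it uses a plain fork() in place of system(), and the names are illustrative):

    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    static pid_t startup_pid;                  /* analogous to MyProcPid */

    static void sigterm_handler(int signo)
    {
        (void) signo;
        if (getpid() == startup_pid)
        {
            /* parent: safe to run the normal exit path (proc_exit() in PostgreSQL) */
            const char msg[] = "parent: running normal exit path\n";
            write(STDERR_FILENO, msg, sizeof(msg) - 1);   /* async-signal-safe */
            _exit(0);
        }
        /* forked child: skip the parent-only exit callbacks entirely */
        const char msg[] = "child: plain _exit(), no parent callbacks\n";
        write(STDERR_FILENO, msg, sizeof(msg) - 1);
        _exit(1);
    }

    int main(void)
    {
        struct sigaction sa;

        startup_pid = getpid();
        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = sigterm_handler;
        sigaction(SIGTERM, &sa, NULL);

        if (fork() == 0)        /* the child inherits the handler, as with system() */
            pause();
        sleep(1);
        kill(0, SIGTERM);       /* deliver SIGTERM to our process group: parent and child */
        pause();
        return 0;
    }

Both processes run the same handler, but only the one whose getpid() matches the saved PID takes the full exit path.
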
* Ensure that creation of an empty relfile is fsync'd at checkpoint. (Heikki Linnakangas, 2023-07-04)
| | | | | | | | | | | | | | | | | If you create a table and don't insert any data into it, the relation file is never fsync'd. You don't lose data, because an empty table doesn't have any data to begin with, but if you crash and lose the file, subsequent operations on the table will fail with "could not open file" error. To fix, register an fsync request in mdcreate(), like we do for mdwrite(). Per discussion, we probably should also fsync the containing directory after creating a new file. But that's a separate and much wider issue. Backpatch to all supported versions. Reviewed-by: Andres Freund, Thomas Munro Discussion: https://www.postgresql.org/message-id/d47d8122-415e-425c-d0a2-e0160829702d%40iki.fi
* Fix missing initializations of MyProc.delayChkptEnd (Michael Paquier, 2023-06-11)

This commit fixes an oversight introduced in 10520f4, which added delayChkptEnd to PGPROC to avoid ABI breakages on stable branches: two spots missed initializing this variable (delayChkpt was switched back from int to bool, and it was initialized as 0 so there were no consequences for it):
- InitProcess(), where the per-process data structures of a backend are initialized.
- InitAuxiliaryProcess(), same but for auxiliary processes.

An interruption during relation truncation while this flag is set could cause an assertion failure when a follow-up process does a relation truncation while reusing the same PGPROC entry. A second effect could be incorrect checkpoint end delays.

While on it, add an assertion in ProcArrayClearTransaction() for delayChkptEnd to be in line with 5788e25. This is needed only for v14.

This issue affects v11~v14, but not v15~, as we use a single field called delayChkptFlags to delay checkpoints there.

Author: suyu.cmj (mengjuan.cmj@alibaba-inc.com)
Reviewed-by: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/9c3d2a49-db5f-43cb-840b-d58f9a684295.mengjuan.cmj@alibaba-inc.com
Backpatch-through: 11
* Fix race in SERIALIZABLE READ ONLY. (Thomas Munro, 2023-03-09)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Commit bdaabb9b started skipping doomed transactions when building the list of possible conflicts for SERIALIZABLE READ ONLY. That makes sense, because doomed transactions won't commit, but a couple of subtle things broke: 1. If all uncommitted r/w transactions are doomed, a READ ONLY transaction would arbitrarily not benefit from the safe snapshot optimization. It would not be taken immediately, and yet no other transaction would set SXACT_FLAG_RO_SAFE later. 2. In the same circumstances but with DEFERRABLE, GetSafeSnapshot() would correctly exit its wait loop without sleeping and then take the optimization in non-assert builds, but assert builds would fail a sanity check that SXACT_FLAG_RO_SAFE had been set by another transaction. This is similar to the case for PredXact->WritableSxactCount == 0. We should opt out immediately if our possibleUnsafeConflicts list is empty after filtering. The code to maintain the serializable global xmin is moved down below the new opt out site, because otherwise we'd have to reverse its effects before returning. Back-patch to all supported releases. Bug #17368. Reported-by: Alexander Lakhin <exclusion@gmail.com> Discussion: https://postgr.es/m/17116-d6ca217acc180e30%40postgresql.org Discussion: https://postgr.es/m/20110707212159.GF76634%40csail.mit.edu
* Fix assert failures in parallel SERIALIZABLE READ ONLY. (Thomas Munro, 2023-03-06)
| | | | | | | | | | | | | | | | | | | | | | 1. Make sure that we don't decrement SxactGlobalXminCount twice when the SXACT_FLAG_RO_SAFE optimization is reached in a parallel query. This could trigger a sanity check failure in assert builds. Non-assert builds recompute the count in SetNewSxactGlobalXmin(), so the problem was hidden, explaining the lack of field reports. Add a new isolation test to exercise that case. 2. Remove an assertion that the DOOMED flag can't be set on a partially released SERIALIZABLEXACT. Instead, ignore the flag (our transaction was already determined to be read-only safe, and DOOMED is in fact set during partial release, and there was already an assertion that it wasn't set sooner). Improve an existing isolation test so that it reaches that case (previously it wasn't quite testing what it was supposed to be testing; see discussion). Back-patch to 12. Bug #17116. Defects in commit 47a338cf. Reported-by: Alexander Lakhin <exclusion@gmail.com> Discussion: https://postgr.es/m/17116-d6ca217acc180e30%40postgresql.org
* Fix WaitEventSetWait() buffer overrun. (Thomas Munro, 2023-01-13)

The WAIT_USE_EPOLL and WAIT_USE_KQUEUE implementations of WaitEventSetWaitBlock() confused the size of their internal buffer with the size of the caller's output buffer, and could ask the kernel for too many events. In fact the set of events retrieved from the kernel needs to be able to fit in both buffers, so take the smaller of the two.

The WAIT_USE_POLL and WAIT_USE_WIN32 implementations didn't have this confusion.

This probably didn't come up before because we always used the same number in both places, but commit 7389aad6 calculates a dynamic size at construction time, while using MAXLISTEN for its output event buffer on the stack. That seems like a reasonable thing to want to do, so consider this to be a pre-existing bug worth fixing.

As discovered by valgrind on skink.

Back-patch to all supported releases for epoll, and to release 13 for the kqueue part, which copied the incorrect epoll code.

Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/901504.1673504836%40sss.pgh.pa.us
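
The rule being applied is simply "never ask the kernel for more events than the smaller buffer can hold". A Linux-only standalone sketch (illustrative; not latch.c, and the sizes are invented):

    #include <stdio.h>
    #include <sys/epoll.h>
    #include <unistd.h>

    #define CALLER_NEVENTS 4        /* caller's output array, like MAXLISTEN */

    int main(void)
    {
        struct epoll_event ev, out[CALLER_NEVENTS];
        int epfd = epoll_create1(0);
        int pipefd[2];
        int set_size = 64;          /* size the wait-event set was built with */
        int ask;

        if (epfd < 0 || pipe(pipefd) != 0)
            return 1;
        ev.events = EPOLLIN;
        ev.data.fd = pipefd[0];
        epoll_ctl(epfd, EPOLL_CTL_ADD, pipefd[0], &ev);
        write(pipefd[1], "x", 1);

        /* the fix: clamp to the smaller of the two buffer sizes */
        ask = set_size < CALLER_NEVENTS ? set_size : CALLER_NEVENTS;
        printf("got %d event(s), asked for at most %d\n",
               epoll_wait(epfd, out, ask, 1000), ask);
        return 0;
    }

Passing set_size (64) directly would let the kernel write past the four-element output array, which is the overrun valgrind flagged.
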
* Improve heuristics for compressing the KnownAssignedXids array. (Tom Lane, 2022-11-29)

Previously, we'd compress only when the active range of array entries reached Max(4 * PROCARRAY_MAXPROCS, 2 * pArray->numKnownAssignedXids). If max_connections is large, the first term could result in not compressing for a long time, resulting in much wastage of cycles in hot-standby backends scanning the array to take snapshots. Get rid of that term, and just bound it to 2 * pArray->numKnownAssignedXids.

That however creates the opposite risk, that we might spend too much effort compressing. Hence, consider compressing only once every 128 commit records. (This frequency was chosen by benchmarking. While we only tried one benchmark scenario, the results seem stable over a fairly wide range of frequencies.)

Also, force compression when processing RecoveryInfo WAL records (which should be infrequent); the old code could perform compression then, but would do so only after the same array-range check as for the transaction-commit path.

Also, opportunistically run compression if the startup process is about to wait for WAL, though not oftener than once a second. This should prevent cases where we waste lots of time by leaving the array not-compressed for long intervals due to low WAL traffic.

Lastly, add a simple check to keep us from uselessly compressing when the array storage is already compact.

Back-patch, as the performance problem is worse in pre-v14 branches than in HEAD.

Simon Riggs and Michail Nikolaev, with help from Tom Lane and Andres Freund.

Discussion: https://postgr.es/m/CALdSSPgahNUD_=pB_j=1zSnDBaiOtqVfzo8Ejt5J_k7qZiU1Tw@mail.gmail.com
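
The resulting decision logic can be sketched roughly as follows. This is a hypothetical, simplified rendering based only on the description above (the reason names, counters, and thresholds are illustrative, not the actual KnownAssignedXids compression code):

    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    typedef enum
    {
        COMPRESS_ON_COMMIT,         /* processing a transaction-commit record */
        COMPRESS_ON_RUNNING_XACTS,  /* processing a RecoveryInfo/RUNNING_XACTS record */
        COMPRESS_BEFORE_WAL_WAIT    /* startup process about to wait for WAL */
    } CompressReason;

    static int    commits_since_compress = 0;
    static time_t last_wait_compress = 0;

    static bool
    should_compress(CompressReason reason, int used_slots, int nxids)
    {
        if (used_slots == nxids)            /* storage already compact: never bother */
            return false;

        switch (reason)
        {
            case COMPRESS_ON_COMMIT:
                /* check only every 128 commits, and only if wastage exceeds 2x */
                if (++commits_since_compress < 128)
                    return false;
                commits_since_compress = 0;
                return used_slots >= 2 * nxids;

            case COMPRESS_ON_RUNNING_XACTS: /* infrequent, so always worthwhile */
                return true;

            case COMPRESS_BEFORE_WAL_WAIT:  /* opportunistic, at most once a second */
            {
                time_t now = time(NULL);
                if (now == last_wait_compress)
                    return false;
                last_wait_compress = now;
                return true;
            }
        }
        return false;
    }

    int main(void)
    {
        printf("%d %d\n",
               should_compress(COMPRESS_ON_RUNNING_XACTS, 500, 100),
               should_compress(COMPRESS_ON_COMMIT, 500, 100));
        return 0;
    }
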
* Fix mislabeling of PROC_QUEUE->links as PGPROC, fixing UBSan on 32bit (Andres Freund, 2022-11-19)

ProcSleep() used a PGPROC* variable to point to PROC_QUEUE->links.next, because that does "the right thing" with SHMQueueInsertBefore(). While that largely works, it's certainly not correct, and it's unnecessary - we can just use SHM_QUEUE* to point to the insertion point.

Noticed when testing a 32-bit build of postgres with undefined behavior sanitizer. UBSan noticed that sometimes the supposed PGPROC wasn't sufficiently aligned (required since 46d6e5f5679, ensured indirectly, via ShmemAllocRaw() guaranteeing cacheline alignment).

For now fix this by using a SHM_QUEUE* for the insertion point. Subsequently we should replace all use of PROC_QUEUE and SHM_QUEUE with ilist.h, but that's a larger change that we don't want to backpatch.

Backpatch to all supported versions - it's useful to be able to run postgres under UBSan.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20221117014230.op5kmgypdv2dtqsf@awork3.anarazel.de
Backpatch: 11-
* Replace RelationOpenSmgr() with RelationGetSmgr(). (Tom Lane, 2022-11-17)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This is a back-patch of the v15-era commit f10f0ae42 into older supported branches. The idea is to design out bugs in which an ill-timed relcache flush clears rel->rd_smgr partway through some code sequence that wasn't expecting that. We had another report today of a corner case that reliably crashes v14 under debug_discard_caches (nee CLOBBER_CACHE_ALWAYS), and therefore would crash once in a blue moon in the field. We're unlikely to get rid of all such code paths unless we adopt the more rigorous coding rules instituted by f10f0ae42. Therefore, even though this is a bit invasive, it's time to back-patch. Some comfort can be taken in the fact that f10f0ae42 has been in v15 for 16 months without problems. I left the RelationOpenSmgr macro present in the back branches, even though no core code should use it anymore, in order to not break third-party extensions in minor releases. Such extensions might opt to start using RelationGetSmgr instead, to reduce their code differential between v15 and earlier branches. This carries a hazard of failing to compile against headers from existing minor releases. However, once compiled the extension should work fine even with such releases, because RelationGetSmgr is a "static inline" function so it creates no link-time dependency. So depending on distribution practices, that might be an OK tradeoff. Per report from Spyridon Dimitrios Agathos. Original patch by Amul Sul. Discussion: https://postgr.es/m/CAFM5RaqdgyusQvmWkyPYaWMwoK5gigdtW-7HcgHgOeAw7mqJ_Q@mail.gmail.com Discussion: https://postgr.es/m/CANiYTQsU7yMFpQYnv=BrcRVqK_3U3mtAzAsJCaqtzsDHfsUbdQ@mail.gmail.com
* Harden pmsignal.c against clobbered shared memory. (Tom Lane, 2022-10-11)
| | | | | | | | | | | | | | | | | | | | | | | | | | | The postmaster is not supposed to do anything that depends fundamentally on shared memory contents, because that creates the risk that a backend crash that trashes shared memory will take the postmaster down with it, preventing automatic recovery. In commit 969d7cd43 I lost sight of this principle and coded AssignPostmasterChildSlot() in such a way that it could fail or even crash if the shared PMSignalState structure became corrupted. Remarkably, we've not seen field reports of such crashes; but I managed to induce one while testing the recent changes around palloc chunk headers. To fix, make a semi-duplicative state array inside the postmaster so that we need consult only local state while choosing a "child slot" for a new backend. Ensure that other postmaster-executed routines in pmsignal.c don't have critical dependencies on the shared state, either. Corruption of PMSignalState might now lead ReleasePostmasterChildSlot() to conclude that backend X failed, when actually backend Y was the one that trashed things. But that doesn't matter, because we'll force a cluster-wide reset regardless. Back-patch to all supported branches, since this is an old bug. Discussion: https://postgr.es/m/3436789.1665187055@sss.pgh.pa.us
* Add HINT for restartpoint race with KeepFileRestoredFromArchive(). (Noah Misch, 2022-08-05)

The five commits ending at cc2c7d65fc27e877c9f407587b0b92d46cd6dd16 closed this race condition for v15+. For v14 through v10, add a HINT to discourage studying the cosmetic problem.

Reviewed by Kyotaro Horiguchi and David Steele.

Discussion: https://postgr.es/m/20220731061747.GA3692882@rfd.leadboat.com
* Make dsm_impl_posix_resize more future-proof. (Thomas Munro, 2022-07-16)

Commit 4518c798 blocks signals for a short region of code, but it assumed that whatever called it had the signal mask set to UnBlockSig on entry. That may be true today (or may even not be, in extensions in the wild), but it would be better not to make that assumption. We should save-and-restore the caller's signal mask.

The PG_SETMASK() portability macro couldn't be used for that, which is why it wasn't done before. But... considering that commit a65e0864 established back in 9.6 that supported POSIX systems have sigprocmask(), and that this is POSIX-only code, there is no reason not to use standard sigprocmask() directly to achieve that.

Back-patch to all supported releases, like 4518c798 and 80845b7c.

Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CA%2BhUKGKx6Biq7_UuV0kn9DW%2B8QWcpJC1qwhizdtD9tN-fn0H0g%40mail.gmail.com
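
The save-and-restore pattern, combined with the retry loop added by the related commits below, looks roughly like this as a free-standing Linux-oriented C sketch (simplified and hypothetical; not the dsm_impl.c code, and error handling is abbreviated):

    #include <errno.h>
    #include <fcntl.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    static int
    reserve_space(int fd, off_t size)
    {
        sigset_t block_most, save_mask;
        int      rc;

        sigfillset(&block_most);
        sigdelset(&block_most, SIGQUIT);        /* leave "die immediately" deliverable */
        if (sigprocmask(SIG_BLOCK, &block_most, &save_mask) != 0)
            return errno;

        do
        {
            rc = posix_fallocate(fd, 0, size);  /* returns an errno value, not -1 */
        } while (rc == EINTR);                  /* belt and suspenders; signals are blocked */

        /* restore exactly what the caller had, rather than assuming a default mask */
        sigprocmask(SIG_SETMASK, &save_mask, NULL);
        return rc;
    }

    int main(void)
    {
        int fd = open("/tmp/fallocate_demo", O_CREAT | O_RDWR, 0600);

        if (fd < 0)
            return 1;
        printf("posix_fallocate: %d\n", reserve_space(fd, 1024 * 1024));
        close(fd);
        unlink("/tmp/fallocate_demo");
        return 0;
    }
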
* Don't clobber postmaster sigmask in dsm_impl_resize. (Thomas Munro, 2022-07-15)

Commit 4518c798 intended to block signals in regular backends that allocate DSM segments, but dsm_impl_resize() is also reached by dsm_postmaster_startup(). It's not OK to clobber the postmaster's signal mask, so only manipulate the signal mask when under the postmaster.

Back-patch to all releases, like 4518c798.

Discussion: https://postgr.es/m/CA%2BhUKGKNpK%3D2OMeea_AZwpLg7Bm4%3DgYWk7eDjZ5F6YbozfOf8w%40mail.gmail.com
* Block signals while allocating DSM memory. (Thomas Munro, 2022-07-14)

On Linux, we call posix_fallocate() on shm_open()'d memory to avoid later potential SIGBUS (see commit 899bd785). Based on field reports of systems stuck in an EINTR retry loop there, we made it possible to break out of that loop via slightly odd coding where the CHECK_FOR_INTERRUPTS() call was somewhat removed from the loop (see commit 422952ee).

On further reflection, that was not a great choice for at least two reasons:

1. If interrupts were held, the CHECK_FOR_INTERRUPTS() would do nothing and the EINTR error would be surfaced to the user.

2. If EINTR was reported but neither QueryCancelPending nor ProcDiePending was set, then we'd dutifully retry, but with a bit more understanding of how posix_fallocate() works, it's now clear that you can get into a loop that never terminates. posix_fallocate() is not a function that can do some of the job and tell you about progress if it's interrupted, it has to undo what it's done so far and report EINTR, and if signals keep arriving faster than it can complete (cf recovery conflict signals), you're stuck.

Therefore, for now, we'll simply block most signals to guarantee progress. SIGQUIT is not blocked (see InitPostmasterChild()), because its expected handler doesn't return, and unblockable signals like SIGCONT are not expected to arrive at a high rate. For good measure, we'll include the ftruncate() call in the blocked region, and add a retry loop.

Back-patch to all supported releases.

Reported-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Reported-by: Nicola Contu <nicola.contu@gmail.com>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20220701154105.jjfutmngoedgiad3%40alvherre.pgsql
* Fix lock assertions in dshash.c. (Thomas Munro, 2022-07-11)
| | | | | | | | | | | | | | | | | | | | | | | | | dshash.c previously maintained flags to be able to assert that you didn't hold any partition lock. These flags could get out of sync with reality in error scenarios. Get rid of all that, and make assertions about the locks themselves instead. Since LWLockHeldByMe() loops internally, we don't want to put that inside another loop over all partition locks. Introduce a new debugging-only interface LWLockAnyHeldByMe() to avoid that. This problem was noted by Tom and Andres while reviewing changes to support the new shared memory stats system, and later showed up in reality while working on commit 389869af. Back-patch to 11, where dshash.c arrived. Reported-by: Tom Lane <tgl@sss.pgh.pa.us> Reported-by: Andres Freund <andres@anarazel.de> Reviewed-by: Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> Reviewed-by: Zhihong Yu <zyu@yugabyte.com> Reviewed-by: Andres Freund <andres@anarazel.de> Discussion: https://postgr.es/m/20220311012712.botrpsikaufzteyt@alap3.anarazel.de Discussion: https://postgr.es/m/CA%2BhUKGJ31Wce6HJ7xnVTKWjFUWQZPBngxfJVx4q0E98pDr3kAw%40mail.gmail.com
* Harden dsm_impl.c against unexpected EEXIST. (Thomas Munro, 2022-07-01)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Previously, we trusted the OS not to report EEXIST unless we'd passed in IPC_CREAT | IPC_EXCL or O_CREAT | O_EXCL, as appropriate. Solaris's shm_open() can in fact do that, causing us to crash because we didn't ereport and then we blithely assumed the mapping was successful. Let's treat EEXIST just like any other error, unless we're actually trying to create a new segment. This applies to shm_open(), where this behavior has been seen, and also to the equivalent operations for our sysv and mmap modes just on principle. Based on the underlying reason for the error, namely contention on a lock file managed by Solaris librt for each distinct name, this problem is only likely to happen on 15 and later, because the new shared memory stats system produces shm_open() calls for the same path from potentially large numbers of backends concurrently during authentication. Earlier releases only shared memory segments between a small number of parallel workers under one Gather node. You could probably hit it if you tried hard enough though, and we should have been more defensive in the first place. Therefore, back-patch to all supported releases. Per build farm animal margay. This isn't the end of the story, though, it just changes random crashes into random "File exists" errors; more work needed for a green build farm. Reviewed-by: Robert Haas <robertmhaas@gmail.com> Discussion: https://postgr.es/m/CA%2BhUKGKqKrCV5xKWfh9rnm%3Do%3DDwZLTLtnsj_XpUi9g5%3DV%2B9oyg%40mail.gmail.com
* Fix visibility check when XID is committed in CLOG but not in procarray. (Heikki Linnakangas, 2022-06-27)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | TransactionIdIsInProgress had a fast path to return 'false' if the single-item CLOG cache said that the transaction was known to be committed. However, that was wrong, because a transaction is first marked as committed in the CLOG but doesn't become visible to others until it has removed its XID from the proc array. That could lead to an error: ERROR: t_xmin is uncommitted in tuple to be updated or for an UPDATE to go ahead without blocking, before the previous UPDATE on the same row was made visible. The window is usually very short, but synchronous replication makes it much wider, because the wait for synchronous replica happens in that window. Another thing that makes it hard to hit is that it's hard to get such a commit-in-progress transaction into the single item CLOG cache. Normally, if you call TransactionIdIsInProgress on such a transaction, it determines that the XID is in progress without checking the CLOG and without populating the cache. One way to prime the cache is to explicitly call pg_xact_status() on the XID. Another way is to use a lot of subtransactions, so that the subxid cache in the proc array is overflown, making TransactionIdIsInProgress rely on pg_subtrans and CLOG checks. This has been broken ever since it was introduced in 2008, but the race condition is very hard to hit, especially without synchronous replication. There were a couple of reports of the error starting from summer 2021, but no one was able to find the root cause then. TransactionIdIsKnownCompleted() is now unused. In 'master', remove it, but I left it in place in backbranches in case it's used by extensions. Also change pg_xact_status() to check TransactionIdIsInProgress(). Previously, it only checked the CLOG, and returned "committed" before the transaction was actually made visible to other queries. Note that this also means that you cannot use pg_xact_status() to reproduce the bug anymore, even if the code wasn't fixed. Report and analysis by Konstantin Knizhnik. Patch by Simon Riggs, with the pg_xact_status() change added by me. Author: Simon Riggs Reviewed-by: Andres Freund Discussion: https://www.postgresql.org/message-id/flat/4da7913d-398c-e2ad-d777-f752cf7f0bbb%40garret.ru
* Fix possibility of self-deadlock in ResolveRecoveryConflictWithBufferPin(). (Andres Freund, 2022-05-02)
| | | | | | | | | | | | | | | | | | | | | The tests added in 9f8a050f68d failed nearly reliably on FreeBSD in CI, and occasionally on the buildfarm. That turns out to be caused not by a bug in the test, but by a longstanding bug in recovery conflict handling. The standby timeout handler, used by ResolveRecoveryConflictWithBufferPin(), executed SendRecoveryConflictWithBufferPin() inside a signal handler. A bad idea, because the deadlock timeout handler (or a spurious latch set) could have interrupted ProcWaitForSignal(). If unlucky that could cause a self-deadlock on ProcArrayLock, if the deadlock check is in SendRecoveryConflictWithBufferPin()->CancelDBBackends(). To fix, set a flag in StandbyTimeoutHandler(), and check the flag in ResolveRecoveryConflictWithBufferPin(). Subsequently the recovery conflict tests will be backpatched. Discussion: https://postgr.es/m/20220413002626.udl7lll7f3o7nre7@alap3.anarazel.de Backpatch: 10-
* Fix typo in comment. (Etsuro Fujita, 2022-05-02)
* Rethink the delay-checkpoint-end mechanism in the back-branches. (Robert Haas, 2022-04-14)

The back-patch of commit bbace5697df12398e87ffd9879171c39d27f5b33 had the unfortunate effect of changing the layout of PGPROC in the back-branches, which could break extensions. This happened because it changed the delayChkpt from type bool to type int. So, change it back, and add a new bool delayChkptEnd field instead. The new field should fall within what used to be padding space within the struct, and so hopefully won't cause any extensions to break.

Per report from Markus Wanner and discussion with Tom Lane and others.

Patch originally by me, somewhat revised by Markus Wanner per a suggestion from Michael Paquier. A very similar patch was developed by Kyotaro Horiguchi, but I failed to see the email in which that was posted before writing one of my own.

Discussion: http://postgr.es/m/CA+Tgmoao-kUD9c5nG5sub3F7tbo39+cdr8jKaOVEs_1aBWcJ3Q@mail.gmail.com
Discussion: http://postgr.es/m/20220406.164521.17171257901083417.horikyota.ntt@gmail.com
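
Why squeezing a bool into former padding preserves the ABI can be seen with a toy layout (hypothetical and much smaller than the real PGPROC; the point is only that the struct size and the offsets of pre-existing members stay the same):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    typedef struct                  /* "before": bool + int leaves 3 padding bytes */
    {
        bool delayChkpt;
        int  other;                 /* stands in for the rest of the struct */
    } ProcBefore;

    typedef struct                  /* "after": the new flag lands in that padding */
    {
        bool delayChkpt;
        bool delayChkptEnd;
        int  other;
    } ProcAfter;

    int main(void)
    {
        printf("sizeof: %zu -> %zu\n", sizeof(ProcBefore), sizeof(ProcAfter));
        printf("offsetof(other): %zu -> %zu\n",
               offsetof(ProcBefore, other), offsetof(ProcAfter, other));
        return 0;
    }

On common ABIs both structs have the same size and the same offset for the existing member, which is what keeps compiled extensions working.
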
* Fix possible recovery trouble if TRUNCATE overlaps a checkpoint. (Robert Haas, 2022-03-24)
| | | | | | | | | | | | | | | | If TRUNCATE causes some buffers to be invalidated and thus the checkpoint does not flush them, TRUNCATE must also ensure that the corresponding files are truncated on disk. Otherwise, a replay from the checkpoint might find that the buffers exist but have the wrong contents, which may cause replay to fail. Report by Teja Mupparti. Patch by Kyotaro Horiguchi, per a design suggestion from Heikki Linnakangas, with some changes to the comments by me. Review of this and a prior patch that approached the issue differently by Heikki Linnakangas, Andres Freund, Álvaro Herrera, Masahiko Sawada, and Tom Lane. Discussion: http://postgr.es/m/BYAPR06MB6373BF50B469CA393C614257ABF00@BYAPR06MB6373.namprd06.prod.outlook.com
* Fix race between DROP TABLESPACE and checkpointing. (Thomas Munro, 2022-03-16)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | Commands like ALTER TABLE SET TABLESPACE may leave files for the next checkpoint to clean up. If such files are not removed by the time DROP TABLESPACE is called, we request a checkpoint so that they are deleted. However, there is presently a window before checkpoint start where new unlink requests won't be scheduled until the following checkpoint. This means that the checkpoint forced by DROP TABLESPACE might not remove the files we expect it to remove, and the following ERROR will be emitted: ERROR: tablespace "mytblspc" is not empty To fix, add a call to AbsorbSyncRequests() just before advancing the unlink cycle counter. This ensures that any unlink requests forwarded prior to checkpoint start (i.e., when ckpt_started is incremented) will be processed by the current checkpoint. Since AbsorbSyncRequests() performs memory allocations, it cannot be called within a critical section, so we also need to move SyncPreCheckpoint() to before CreateCheckPoint()'s critical section. This is an old bug, so back-patch to all supported versions. Author: Nathan Bossart <nathandbossart@gmail.com> Reported-by: Nathan Bossart <nathandbossart@gmail.com> Reviewed-by: Thomas Munro <thomas.munro@gmail.com> Reviewed-by: Andres Freund <andres@anarazel.de> Discussion: https://postgr.es/m/20220215235845.GA2665318%40nathanxps13
* Fix waiting in RegisterSyncRequest(). (Thomas Munro, 2022-03-16)
| | | | | | | | | | | | | | | | | If we run out of space in the checkpointer sync request queue (which is hopefully rare on real systems, but common with very small buffer pool), we wait for it to drain. While waiting, we should report that as a wait event so that users know what is going on, and also handle postmaster death, since otherwise the loop might never terminate if the checkpointer has exited. Back-patch to 12. Although the problem exists in earlier releases too, the code is structured differently before 12 so I haven't gone any further for now, in the absence of field complaints. Reported-by: Andres Freund <andres@anarazel.de> Reviewed-by: Andres Freund <andres@anarazel.de> Discussion: https://postgr.es/m/20220226213942.nb7uvb2pamyu26dj%40alap3.anarazel.de
* Clean up assorted failures under clang's -fsanitize=undefined checks. (Tom Lane, 2022-03-03)

Most of these are cases where we could call memcpy() or other libc functions with a NULL pointer and a zero count, which is forbidden by POSIX even though every production version of libc allows it.

We've fixed such things before in a piecemeal way, but apparently never made an effort to try to get them all. I don't claim that this patch does so either, but it gets every failure I observe in check-world, using clang 12.0.1 on current RHEL8.

numeric.c has a different issue that the sanitizer doesn't like: "ln(-1.0)" will compute log10(0) and then try to assign the resulting -Inf to an integer variable. We don't actually use the result in such a case, so there's no live bug.

Back-patch to all supported branches, with the idea that we might start running a buildfarm member that tests this case. This includes back-patching c1132aae3 (Check the size in COPY_POINTER_FIELD), which previously silenced some of these issues in copyfuncs.c.

Discussion: https://postgr.es/m/CALNJ-vT9r0DSsAOw9OXVJFxLENoVS_68kJ5x0p44atoYH+H4dg@mail.gmail.com
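
For the memcpy() class of failure, the usual fix is simply to skip the libc call when the count is zero; a tiny self-contained example of the guarded pattern:

    #include <stdio.h>
    #include <string.h>

    static void
    copy_items(int *dst, const int *src, size_t n)
    {
        if (n > 0)                  /* guard: never pass a NULL pointer to memcpy() */
            memcpy(dst, src, n * sizeof(int));
    }

    int main(void)
    {
        int buf[4];

        copy_items(buf, NULL, 0);   /* fine: the call to memcpy() never happens */
        printf("ok\n");
        return 0;
    }

Without the guard, -fsanitize=undefined reports the NULL argument even though the copy length is zero.
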
* Fix ordering of XIDs in ProcArrayApplyRecoveryInfo (Tomas Vondra, 2022-01-27)

Commit 8431e296ea reworked ProcArrayApplyRecoveryInfo to sort XIDs before adding them to KnownAssignedXids. But the XIDs are sorted using xidComparator, which compares the XIDs simply as uint32 values, not logically. KnownAssignedXidsAdd() however expects XIDs in logical order, and calls TransactionIdFollowsOrEquals() to enforce that. If there are XIDs for which the two orderings disagree, an error is raised and the recovery fails/restarts.

Hitting this issue is fairly easy - you just need two transactions, one started before the 4B limit (e.g. XID 4294967290), the other sometime after it (e.g. XID 1000). Logically (4294967290 <= 1000) but when compared using xidComparator we try to add them in the opposite order. Which makes KnownAssignedXidsAdd() fail with an error like this:

    ERROR: out-of-order XID insertion in KnownAssignedXids

This only happens during replica startup, while processing RUNNING_XACTS records to build the snapshot. Once we reach STANDBY_SNAPSHOT_READY, we skip these records. So this does not affect already running replicas, but if you restart (or create) a replica while there are transactions with XIDs for which the two orderings disagree, you may hit this.

Long-running transactions and frequent replica restarts increase the likelihood of hitting this issue. Once the replica gets into this state, it can't be started (even if the old transactions are terminated).

Fixed by sorting the XIDs logically - this is fine because we're dealing with normal XIDs (because it's XIDs assigned to backends) and from the same wraparound epoch (otherwise the backends could not be running at the same time on the primary node). So there are no problems with the triangle inequality, which is why xidComparator compares raw values.

Investigation and root cause analysis by Abhijit Menon-Sen. Patch by me.

This issue is present in all releases since 9.4, however releases up to 9.6 are EOL already so backpatch to 10 only.

Reviewed-by: Abhijit Menon-Sen
Reviewed-by: Alvaro Herrera
Backpatch-through: 10
Discussion: https://postgr.es/m/36b8a501-5d73-277c-4972-f58a4dce088a%40enterprisedb.com
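
The disagreement between the two orderings is easy to reproduce in a few lines of standalone C, using the XIDs from the example above (the circular comparison follows the well-known TransactionIdPrecedes() definition; special XIDs below the first normal XID are ignored here):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* xidComparator-style ordering: plain unsigned comparison */
    static bool raw_precedes(uint32_t a, uint32_t b)
    {
        return a < b;
    }

    /* logical ordering: "a precedes b" if a is at most 2^31 behind b, modulo 2^32 */
    static bool logical_precedes(uint32_t a, uint32_t b)
    {
        int32_t diff = (int32_t) (a - b);

        return diff < 0;
    }

    int main(void)
    {
        uint32_t old_xid = 4294967290u;     /* started just before the 4B wraparound */
        uint32_t new_xid = 1000u;           /* assigned just after it */

        printf("raw:     %u precedes %u? %d\n", old_xid, new_xid,
               raw_precedes(old_xid, new_xid));
        printf("logical: %u precedes %u? %d\n", old_xid, new_xid,
               logical_precedes(old_xid, new_xid));
        return 0;
    }

The raw comparison says the old XID comes last, while the logical comparison says it comes first, which is exactly the mismatch that made KnownAssignedXidsAdd() reject the sorted input.
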
* Reset lastOverflowedXid on standby when needed (Alexander Korotkov, 2021-11-06)

Currently, lastOverflowedXid is never reset. It's just adjusted on new transactions known to be overflowed. But if there are no overflowed transactions for a long time, snapshots could be mistakenly marked as suboverflowed due to wraparound. This commit fixes this issue by resetting lastOverflowedXid, together with KnownAssignedXids, when needed.

Backpatch to all supported versions.

Reported-by: Stan Hu
Discussion: https://postgr.es/m/CAMBWrQ%3DFp5UAsU_nATY7EMY7NHczG4-DTDU%3DmCvBQZAQ6wa2xQ%40mail.gmail.com
Author: Kyotaro Horiguchi, Alexander Korotkov
Reviewed-by: Stan Hu, Simon Riggs, Nikolay Samokhvalov, Andrey Borodin, Dmitry Dolgov
* Fix snapshot reference leak if lo_export fails. (Heikki Linnakangas, 2021-11-03)
| | | | | | | | | | | | | | | | | | | If lo_export() fails to open the target file or to write to it, it leaks the created LargeObjectDesc and its snapshot in the top-transaction context and resource owner. That's pretty harmless, it's a small leak after all, but it gives the user a "Snapshot reference leak" warning. Fix by using a short-lived memory context and no resource owner for transient LargeObjectDescs that are opened and closed within one function call. The leak is easiest to reproduce with lo_export() on a directory that doesn't exist, but in principle the other lo_* functions could also fail. Backpatch to all supported versions. Reported-by: Andrew B Reviewed-by: Alvaro Herrera Discussion: https://www.postgresql.org/message-id/32bf767a-2d65-71c4-f170-122f416bab7e@iki.fi
* Fix CREATE INDEX CONCURRENTLY for the newest prepared transactions. (Noah Misch, 2021-10-23)
| | | | | | | | | | | | | | | | | | | The purpose of commit 8a54e12a38d1545d249f1402f66c8cde2837d97c was to fix this, and it sufficed when the PREPARE TRANSACTION completed before the CIC looked for lock conflicts. Otherwise, things still broke. As before, in a cluster having used CIC while having enabled prepared transactions, queries that use the resulting index can silently fail to find rows. It may be necessary to reindex to recover from past occurrences; REINDEX CONCURRENTLY suffices. Fix this for future index builds by making CIC wait for arbitrarily-recent prepared transactions and for ordinary transactions that may yet PREPARE TRANSACTION. As part of that, have PREPARE TRANSACTION transfer locks to its dummy PGPROC before it calls ProcArrayClearTransaction(). Back-patch to 9.6 (all supported versions). Andrey Borodin, reviewed (in earlier versions) by Andres Freund. Discussion: https://postgr.es/m/01824242-AA92-4FE9-9BA7-AEBAFFEA3D0C@yandex-team.ru
* Fix variable shadowing in procarray.c. (Fujii Masao, 2021-09-16)

The ProcArrayGroupClearXid function has a parameter named "proc", but the same name was used for its local variables. This commit fixes this variable shadowing, to improve code readability.

Back-patch to all supported versions, to make future back-patching easy though this patch is classified as refactoring only.

Reported-by: Ranier Vilela
Author: Ranier Vilela, Aleksander Alekseev
https://postgr.es/m/CAEudQAqyoTZC670xWi6w-Oe2_Bk1bfu2JzXz6xRfiOUzm7xbyQ@mail.gmail.com
* Check for relation length overrun soon enough. (Tom Lane, 2021-09-09)

We don't allow relations to exceed 2^32-1 blocks, because block numbers are 32 bits and the last possible block number is reserved to mean InvalidBlockNumber. There is a check for this in mdextend, but that's really way too late, because the smgr API requires us to create a buffer for the block-to-be-added, and we do not want to have any buffer with blocknum InvalidBlockNumber. (Such a case can trigger assertions in bufmgr.c, plus I think it might confuse ReadBuffer's logic for data-past-EOF later on.) So put the check into ReadBuffer.

Per report from Christoph Berg. It's been like this forever, so back-patch to all supported branches.

Discussion: https://postgr.es/m/YTn1iTkUYBZfcODk@msg.credativ.de
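
A hypothetical sketch of the kind of guard described above (illustrative names; the real check lives in the ReadBuffer path, not in a helper like this): refuse to hand out a buffer whose block number would be the reserved sentinel.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef uint32_t BlockNumber;
    #define InvalidBlockNumber ((BlockNumber) 0xFFFFFFFF)   /* reserved sentinel */

    static BlockNumber
    next_block_to_add(BlockNumber current_nblocks)
    {
        /* extending a relation appends block number current_nblocks */
        if (current_nblocks == InvalidBlockNumber)
        {
            fprintf(stderr, "cannot extend relation beyond %u blocks\n",
                    InvalidBlockNumber);
            exit(EXIT_FAILURE);
        }
        return current_nblocks;     /* safe to create a buffer for this block */
    }

    int main(void)
    {
        printf("next block: %u\n", next_block_to_add(42));
        printf("next block: %u\n", next_block_to_add(InvalidBlockNumber)); /* errors out */
        return 0;
    }
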
* Fix check for conflicting session- vs transaction-level locks. (Tom Lane, 2021-07-24)
| | | | | | | | | | | | | | | | | | | | | | | We have an implementation restriction that PREPARE TRANSACTION can't handle cases where both session-lifespan and transaction-lifespan locks are held on the same lockable object. (That's because we'd otherwise need to acquire a new PROCLOCK entry during post-prepare cleanup, which is an operation that might fail. The situation can only arise with odd usages of advisory locks, so removing the restriction is probably not worth the amount of effort it would take.) AtPrepare_Locks attempted to enforce this, but its logic was many bricks shy of a load, because it only detected cases where the session and transaction locks had the same lockmode. Locks of different modes on the same object would lead to the rather unhelpful message "PANIC: we seem to have dropped a bit somewhere". To fix, build a transient hashtable with one entry per locktag, not one per locktag + mode, and use that to detect conflicts. Per bug #17122 from Alexander Pyhalov. This bug is ancient, so back-patch to all supported branches. Discussion: https://postgr.es/m/17122-04f3c32098a62233@postgresql.org
* Remove more obsolete comments about semaphores. (Thomas Munro, 2021-07-09)

Commit 6753333f stopped using semaphores as the sleep/wake mechanism for heavyweight locks, but some obsolete references to that scheme remained in comments. As with similar commit 25b93a29, back-patch all the way.

Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CA%2BhUKGLafjB1uzXcy%3D%3D2L3cy7rjHkqOVn7qRYGBjk%3D%3DtMJE7Yg%40mail.gmail.com
* Fix typo and outdated information in README.barrier (David Rowley, 2021-05-18)

README.barrier didn't seem to get the memo when atomics were added. Fix that.

Author: Tatsuo Ishii, David Rowley
Discussion: https://postgr.es/m/20210516.211133.2159010194908437625.t-ishii%40sraoss.co.jp
Backpatch-through: 9.6, oldest supported release
* Shut down transaction tracking at startup process exit. (Fujii Masao, 2021-04-06)
| | | | | | | | | | | | | | | | | | | | | | | Maxim Orlov reported that the shutdown of standby server could result in the following assertion failure. The cause of this issue was that, when the shutdown caused the startup process to exit, recovery-time transaction tracking was not shut down even if it's already initialized, and some locks the tracked transactions were holding could not be released. At this situation, if other process was invoked and the PGPROC entry that the startup process used was assigned to it, it found such unreleased locks and caused the assertion failure, during the initialization of it. TRAP: FailedAssertion("SHMQueueEmpty(&(MyProc->myProcLocks[i]))" This commit fixes this issue by making the startup process shut down transaction tracking and release all locks, at the exit of it. Back-patch to all supported branches. Reported-by: Maxim Orlov Author: Fujii Masao Reviewed-by: Maxim Orlov Discussion: https://postgr.es/m/ad4ce692cc1d89a093b471ab1d969b0b@postgrespro.ru
* Hold interrupts while running dsm_detach() callbacks. (Thomas Munro, 2021-02-15)
| | | | | | | | | | | | | | | | | | While cleaning up after a parallel query or parallel index creation that created temporary files, we could be interrupted by a statement timeout. The error handling path would then fail to clean up the files when it ran dsm_detach() again, because the callback was already popped off the list. Prevent this hazard by holding interrupts while the cleanup code runs. Thanks to Heikki Linnakangas for this suggestion, and also to Kyotaro Horiguchi, Masahiko Sawada, Justin Pryzby and Tom Lane for discussion of this and earlier ideas on how to fix the problem. Back-patch to all supported releases. Reported-by: Justin Pryzby <pryzby@telsasoft.com> Discussion: https://postgr.es/m/20191212180506.GR2082@telsasoft.com
* Fix CREATE INDEX CONCURRENTLY for simultaneous prepared transactions. (Noah Misch, 2021-01-30)
| | | | | | | | | | | | | | | In a cluster having used CREATE INDEX CONCURRENTLY while having enabled prepared transactions, queries that use the resulting index can silently fail to find rows. Fix this for future CREATE INDEX CONCURRENTLY by making it wait for prepared transactions like it waits for ordinary transactions. This expands the VirtualTransactionId structure domain to admit prepared transactions. It may be necessary to reindex to recover from past occurrences. Back-patch to 9.5 (all supported versions). Andrey Borodin, reviewed (in earlier versions) by Tom Lane and Michael Paquier. Discussion: https://postgr.es/m/2E712143-97F7-4890-B470-4A35142ABC82@yandex-team.ru
* Prevent excess SimpleLruTruncate() deletion. (Noah Misch, 2021-01-16)
| | | | | | | | | | | | | | | | | Every core SLRU wraps around. With the exception of pg_notify, the wrap point can fall in the middle of a page. Account for this in the PagePrecedes callback specification and in SimpleLruTruncate()'s use of said callback. Update each callback implementation to fit the new specification. This changes SerialPagePrecedesLogically() from the style of asyncQueuePagePrecedes() to the style of CLOGPagePrecedes(). (Whereas pg_clog and pg_serial share a key space, pg_serial is nothing like pg_notify.) The bug fixed here has the same symptoms and user followup steps as 592a589a04bd456410b853d86bd05faa9432cbbb. Back-patch to 9.5 (all supported versions). Reviewed by Andrey Borodin and (in earlier versions) by Tom Lane. Discussion: https://postgr.es/m/20190202083822.GC32531@gust.leadboat.com
* Fix calculation of how much shared memory is required to store a TOC. (Fujii Masao, 2021-01-15)

Commit ac883ac453 refactored shm_toc_estimate() but changed its calculation of shared memory size for TOC incorrectly. Previously, this could cause too much memory to be allocated.

Back-patch to v11 where the bug was introduced.

Author: Takayuki Tsunakawa
Discussion: https://postgr.es/m/TYAPR01MB2990BFB73170E2C4921E2C4DFEA80@TYAPR01MB2990.jpnprd01.prod.outlook.com
* Detect the deadlocks between backends and the startup process. (Fujii Masao, 2021-01-06)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The deadlocks that the recovery conflict on lock is involved in can happen between hot-standby backends and the startup process. If a backend takes an access exclusive lock on the table and which finally triggers the deadlock, that deadlock can be detected as expected. On the other hand, previously, if the startup process took an access exclusive lock and which finally triggered the deadlock, that deadlock could not be detected and could remain even after deadlock_timeout passed. This is a bug. The cause of this bug was that the code for handling the recovery conflict on lock didn't take care of deadlock case at all. It assumed that deadlocks involving the startup process and backends were able to be detected by the deadlock detector invoked within backends. But this assumption was incorrect. The startup process also should have invoked the deadlock detector if necessary. To fix this bug, this commit makes the startup process invoke the deadlock detector if deadlock_timeout is reached while handling the recovery conflict on lock. Specifically, in that case, the startup process requests all the backends holding the conflicting locks to check themselves for deadlocks. Back-patch to v9.6. v9.5 has also this bug, but per discussion we decided not to back-patch the fix to v9.5. Because v9.5 doesn't have some infrastructure codes (e.g., 37c54863cf) that this bug fix patch depends on. We can apply those codes for the back-patch, but since the next minor version release is the final one for v9.5, it's risky to do that. If we unexpectedly introduce new bug to v9.5 by the back-patch, there is no chance to fix that. We determined that the back-patch to v9.5 would give more risk than gain. Author: Fujii Masao Reviewed-by: Bertrand Drouvot, Masahiko Sawada, Kyotaro Horiguchi Discussion: https://postgr.es/m/4041d6b6-cf24-a120-36fa-1294220f8243@oss.nttdata.com
* Fix index deletion latestRemovedXid bug. (Peter Geoghegan, 2020-12-30)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The logic for determining the latest removed XID for the purposes of generating recovery conflicts in REDO routines was subtly broken. It failed to follow links from HOT chains, and so failed to consider all relevant heap tuple headers in some cases. To fix, expand the loop that deals with LP_REDIRECT line pointers to also deal with HOT chains. The new version of the loop is loosely based on a similar loop from heap_prune_chain(). The impact of this bug is probably quite limited, since the horizon code necessarily deals with heap tuples that are pointed to by LP_DEAD-set index tuples. The process of setting LP_DEAD index tuples (e.g. within the kill_prior_tuple mechanism) is highly correlated with opportunistic pruning of pointed-to heap tuples. Plus the question of generating a recovery conflict usually comes up some time after index tuple LP_DEAD bits were initially set, unlike heap pruning, where a latestRemovedXid is generated at the point of the pruning operation (heap pruning has no deferred "would-be page split" style processing that produces conflicts lazily). Only backpatch to Postgres 12, the first version where this logic runs during original execution (following commit 558a9165e08). The index latestRemovedXid mechanism has had the same bug since it first appeared over 10 years ago (in commit a760893d), but backpatching to all supported versions now seems like a bad idea on balance. Running the new improved code during recovery seems risky, especially given the lack of complaints from the field. Author: Peter Geoghegan <pg@bowt.ie> Discussion: https://postgr.es/m/CAH2-Wz=Eib393+HHcERK_9MtgNS7Ew1HY=RDC_g6GL46zM5C6Q@mail.gmail.com Backpatch: 12-
* Free disk space for dropped relations on commit. (Thomas Munro, 2020-12-01)

When committing a transaction that dropped a relation, we previously truncated only the first segment file to free up disk space (the one that won't be unlinked until the next checkpoint).

Truncate higher numbered segments too, even though we unlink them on commit. This frees the disk space immediately, even if other backends have open file descriptors and might take a long time to get around to handling shared invalidation events and closing them. Also extend the same behavior to the first segment, in recovery.

Back-patch to all supported releases.

Bug: #16663
Reported-by: Denis Patron <denis.patron@previnet.it>
Reviewed-by: Pavel Borisov <pashkin.elfe@gmail.com>
Reviewed-by: Neil Chen <carpenter.nail.cz@gmail.com>
Reviewed-by: David Zhang <david.zhang@highgo.ca>
Discussion: https://postgr.es/m/16663-fe97ccf9932fc800%40postgresql.org
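
The POSIX behavior this relies on: unlink() only removes the name, and the storage stays allocated until the last open file descriptor is closed, whereas ftruncate() gives the space back immediately. A minimal standalone illustration (not the md.c code; the file name is invented):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/tmp/segment_demo.1";   /* stands in for a relation segment */
        struct stat st;
        int         fd, keeper;

        fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0600);
        if (fd < 0)
            return 1;
        ftruncate(fd, 1024 * 1024);     /* give the "segment" some length */
        keeper = open(path, O_RDONLY);  /* simulates another backend's open descriptor */

        ftruncate(fd, 0);               /* length (and space) released right away */
        unlink(path);                   /* the name goes away; space was already freed */

        fstat(keeper, &st);
        printf("length seen through the stale descriptor: %lld\n",
               (long long) st.st_size);
        close(fd);
        close(keeper);
        return 0;
    }
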
* Adjust DSM and DSA slot usage constants (back-patch). (Thomas Munro, 2020-11-20)

1. Previously, a DSA area would create up to four segments at each size before doubling the size. After this commit, it will create only two at each size, so it ramps up faster and therefore needs fewer slots.

2. Previously, the total limit on DSM slots allowed for 2 per connection. Switch to 5 per connection.

This back-patches commit d061ea21 from release 13 into 10-12 based on a field complaint.

Discussion: https://postgr.es/m/CAO03teA%2BjE1qt5iWDWzHqaufqBsF6EoOgZphnazps_tr_jDPZA%40mail.gmail.com
Discussion: https://postgr.es/m/CA%2BhUKGL6H2BpGbiF7Lj6QiTjTGyTLW_vLR%3DSn2tEBeTcYXiMKw%40mail.gmail.com
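
A rough model of the first change (illustrative arithmetic only; the real dsa.c constants and initial segment size differ): creating two segments per size class instead of four reaches the same total with fewer slots.

    #include <stddef.h>
    #include <stdio.h>

    /* slots needed to reach 'target' bytes when 'per_size' segments are created
     * at each size before the segment size doubles */
    static int
    slots_needed(size_t target, size_t first_size, int per_size)
    {
        size_t total = 0, seg = first_size;
        int    slots = 0;

        while (total < target)
        {
            for (int i = 0; i < per_size && total < target; i++)
            {
                total += seg;
                slots++;
            }
            seg *= 2;
        }
        return slots;
    }

    int main(void)
    {
        size_t target = (size_t) 1 << 30;   /* 1 GB of DSA-managed memory */
        size_t first  = (size_t) 1 << 20;   /* assume a 1 MB initial segment */

        printf("4 per size: %d slots, 2 per size: %d slots\n",
               slots_needed(target, first, 4), slots_needed(target, first, 2));
        return 0;
    }
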