Commit log for src/backend/storage/ipc

* Don't CHECK_FOR_INTERRUPTS between WaitLatch and ResetLatch. (Tom Lane, 2016-08-01)

  This coding pattern creates a race condition, because if an interesting
  interrupt happens after we've checked InterruptPending but before we reset
  our latch, the latch-setting done by the signal handler would get lost, and
  then we might block at WaitLatch in the next iteration without ever noticing
  the interrupt condition. You can put the CHECK_FOR_INTERRUPTS before
  WaitLatch or after ResetLatch, but not between them.

  Aside from fixing the bugs, add some explanatory comments to latch.h to
  perhaps forestall the next person from making the same mistake.

  In HEAD, also replace gather_readnext's direct call of
  HandleParallelMessages with CHECK_FOR_INTERRUPTS. It does not seem clean or
  useful for this one caller to bypass ProcessInterrupts and go straight to
  HandleParallelMessages; not least because that fails to consider the
  InterruptPending flag, resulting in useless work both here (if
  InterruptPending isn't set) and in the next CHECK_FOR_INTERRUPTS call (if
  it is).

  This thinko seems to have been introduced in the initial coding of
  storage/ipc/shm_mq.c (commit ec9037df2), and then blindly copied into all
  the subsequent parallel-query support logic. Back-patch relevant hunks to
  9.4 to extirpate the error everywhere.

  Discussion: <1661.1469996911@sss.pgh.pa.us>

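  The safe ordering reads naturally as a loop; below is a minimal sketch
  using the 9.6-era WaitLatch signature, with the event mask and one-second
  timeout chosen purely for illustration:

    #include "postgres.h"

    #include "miscadmin.h"      /* CHECK_FOR_INTERRUPTS, MyLatch */
    #include "storage/ipc.h"    /* proc_exit */
    #include "storage/latch.h"

    static void
    example_wait_loop(void)
    {
        for (;;)
        {
            int     rc;

            /* Sleep until the latch is set, a timeout, or postmaster death. */
            rc = WaitLatch(MyLatch,
                           WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
                           1000L);          /* illustrative 1s timeout */

            if (rc & WL_POSTMASTER_DEATH)
                proc_exit(1);

            /* Clear the latch first ... */
            ResetLatch(MyLatch);

            /*
             * ... then check for interrupts.  A signal landing anywhere in
             * between re-sets the latch, so the next WaitLatch returns at
             * once and the interrupt is noticed on the next iteration.
             */
            CHECK_FOR_INTERRUPTS();

            /* ... per-iteration work goes here ... */
        }
    }
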
* Adjust spellings of forms of "cancel" (Peter Eisentraut, 2016-07-14)

* Revert "Add some temporary code to record stack usage at server process exit."Tom Lane2016-07-10
| | | | | | | This reverts commit 88cf37d2a86d5b66380003d7c3384530e3f91e40 as well as follow-on commits ea9c4a16d5ad88a1d28d43ef458e3209b53eb106 and c57562725d219c4249b82f4a4fb5aaeee3ae0d53. We've learned about as much as we can from the buildfarm.
* Improve recording of IA64 stack data. (Tom Lane, 2016-07-09)

  Examination of the results from anole and gharial suggests that we're only
  managing to track the size of one of the two stacks of IA64 machines. Some
  googling gave the answer: on HPUX11, the register stack is reported as a
  page type I don't see in pstat.h on my HPUX10 box. Let's try testing for
  that.

* Add more temporary code to record stack usage at server process exit. (Tom Lane, 2016-07-08)

  After a look at preliminary results from commit 88cf37d2a86d5b66, I
  realized it'd be a good idea to spew out the maximum depth measurement seen
  by check_stack_depth. So add some quick-n-dirty code to do that. Like the
  previous commit, this will be reverted once we've gathered a set of
  buildfarm runs with it.

* Add some temporary code to record stack usage at server process exit. (Tom Lane, 2016-07-08)

  This patch is meant to gather information from the buildfarm members, and
  will be reverted in a day or so.

  The idea is to try to find out the high-water stack consumption while
  running the regression tests, particularly on IA64, which is suspected to
  use much more stack than other architectures. On machines with pmap, we can
  use that; but the IA64 farm members are running HPUX, so also include some
  bespoke code for HPUX. (I've tested the latter on HPUX 10/HPPA; not
  entirely sure it will work on HPUX 11/IA64, but we'll soon find out.)

  Discussion: <CAM-w4HMwwcwaVvYcAH0_FGtG5GeXdYVRfvG81pXnSJWHnCfosQ@mail.gmail.com>

* Fix obsolete comment. (Robert Haas, 2016-06-29)

  Commit 3bd261ca18c67eafe18088e58fab511e3b965418 should have updated this,
  but didn't. Extracted from a larger patch by Piotr Stefaniak.

* pgindent run for 9.6 (Robert Haas, 2016-06-09)

* Correct phrasing in dsm.c comments (Simon Riggs, 2016-06-07)

* shm_mq: After a send fails with SHM_MQ_DETACHED, later ones should too. (Robert Haas, 2016-06-06)

  Prior to this patch, it was occasionally possible, after shm_mq_sendv had
  previously returned SHM_MQ_DETACHED, for a later shm_mq_sendv operation to
  fail an assertion instead of just again returning SHM_MQ_DETACHED. From the
  shm_mq code's point of view, it was expecting to be called again with the
  same arguments, since the previous operation had only partially completed.
  However, a caller who isn't using non-blocking mode won't be prepared to
  repeat the call with the same arguments, and this code shouldn't expect
  that they will. Repair in such a way that we'll be OK whether the next call
  uses the same arguments or not.

  Found by Andreas Seltenreich. Analysis and sketch of fix by Amit Kapila.
  Patch by me, reviewed by Amit Kapila.

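  For illustration, here is the caller-side contract sketched against the
  9.6-era API; mqh is assumed to be a shm_mq_handle attached elsewhere, and
  the helper name is invented:

    #include "postgres.h"

    #include "storage/shm_mq.h"

    /* Hypothetical helper: try a non-blocking send and classify the result. */
    static bool
    try_send(shm_mq_handle *mqh, const void *data, Size len)
    {
        switch (shm_mq_send(mqh, len, data, true))      /* nowait = true */
        {
            case SHM_MQ_SUCCESS:
                return true;        /* message fully written */

            case SHM_MQ_WOULD_BLOCK:
                /*
                 * Queue full: the send may have partially completed, so a
                 * retry must pass the same arguments.
                 */
                return false;

            case SHM_MQ_DETACHED:
                /*
                 * Receiver gone.  With this fix, every subsequent send
                 * returns SHM_MQ_DETACHED again, whether or not the caller
                 * repeats the same arguments.
                 */
                return false;
        }
        return false;               /* unreachable; keep compiler quiet */
    }
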
* Fix whitespace (Peter Eisentraut, 2016-06-05)

* Fix various common misspellings. (Greg Stark, 2016-06-03)

  Mostly these are just comments, but there are a few in documentation and a
  handful in code and tests.

  Hopefully this doesn't cause too much unnecessary pain for backpatching. I
  relented from some of the most common, like "thru", for that reason. The
  rest don't seem numerous enough to cause problems.

  Thanks to Kevin Lyda's tool https://pypi.python.org/pypi/misspellings

* Be conservative about alignment requirements of struct epoll_event. (Greg Stark, 2016-06-02)

  Use MAXALIGN size/alignment to guarantee that later uses of memory are
  aligned correctly. E.g. epoll_event might need 8-byte alignment on some
  platforms, but earlier allocations like WaitEventSet and WaitEvent might
  not be sized to guarantee that when purely using sizeof().

  Found by myself while testing on a Sun Ultra 5 (SPARC IIi), with some
  editorializing by Andres Freund.

  In passing, fix a couple of typos in the area.

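  A sketch of the allocation pattern being protected, with a made-up struct
  standing in for latch.c's private WaitEventSet layout:

    #include "postgres.h"

    #include <sys/epoll.h>

    #include "storage/latch.h"

    /* Toy stand-in for the private struct in latch.c. */
    typedef struct ToySet
    {
        WaitEvent  *events;
        struct epoll_event *epoll_ret_events;
    } ToySet;

    /*
     * Carve one palloc'd block into a header, an event array, and epoll
     * storage.  MAXALIGN-ing each piece's size keeps every later piece
     * aligned for any C type -- including epoll_event, whose sizeof() alone
     * guarantees nothing about what follows it.
     */
    static ToySet *
    toy_create_set(int nevents)
    {
        char   *data;
        Size    sz = 0;
        ToySet *set;

        sz += MAXALIGN(sizeof(ToySet));
        sz += MAXALIGN(sizeof(WaitEvent) * nevents);
        sz += MAXALIGN(sizeof(struct epoll_event) * nevents);

        data = (char *) palloc0(sz);

        set = (ToySet *) data;
        data += MAXALIGN(sizeof(ToySet));

        set->events = (WaitEvent *) data;
        data += MAXALIGN(sizeof(WaitEvent) * nevents);

        set->epoll_ret_events = (struct epoll_event *) data;

        return set;
    }
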
* Emit invalidations to standby for transactions without xid. (Andres Freund, 2016-04-26)

  So far, when a transaction with pending invalidations, but without an
  assigned xid, committed, we simply ignored those invalidation messages.
  That's problematic, because those are actually sent for a reason. Known
  symptoms of this include that existing sessions on a hot-standby replica
  sometimes fail to notice new concurrently built indexes and visibility map
  updates.

  The solution is to WAL-log such invalidations in transactions without an
  xid. We considered force-assigning an xid instead, but that'd be
  problematic for vacuum, which might be run in systems with few xids.

  Important: This adds a new WAL record, but as the patch has to be
  back-patched, we can't bump the WAL page magic. This means that standbys
  have to be updated before primaries; otherwise "PANIC: standby_redo:
  unknown op code 32" errors can be encountered.

  Reported-By: Васильев Дмитрий, Masahiko Sawada
  Discussion:
  CAB-SwXY6oH=9twBkXJtgR4UC1NqT-vpYAtxCseME62ADwyK5OA@mail.gmail.com
  CAD21AoDpZ6Xjg=gFrGPnSn4oTRRcwK1EBrWCq9OqOHuAcMMC=w@mail.gmail.com

* Fix assorted defects in 09adc9a8c09c9640de05c7023b27fb83c761e91c. (Robert Haas, 2016-04-21)

  That commit increased all shared memory allocations to the next higher
  multiple of PG_CACHE_LINE_SIZE, but it didn't ensure that allocation
  started on a cache line boundary. It also failed to remove a couple of
  other pieces of now-useless code.

  BUFFERALIGN() is perhaps obsolete at this point, and likely should be
  removed at some point, too, but that seems like it can be left to a future
  cleanup.

  Mistakes all pointed out by Andres Freund. The patch is mine, with a few
  extra assertions which I adopted from his version of this fix.

* Avoid extra locks in GetSnapshotData if old_snapshot_threshold < 0 (Kevin Grittner, 2016-04-12)

  On a big NUMA machine with 1000 connections in saturation load, there was a
  performance regression due to spinlock contention when acquiring values
  which were never used. Just fill with dummy values if we're not going to
  use them.

  This patch has not been benchmarked yet on a big NUMA machine, but it seems
  like a good idea on general principle, and it seemed to prevent an apparent
  2.2% regression on a single-socket i7 box running 200 connections at
  saturation load.

* Add the "snapshot too old" featureKevin Grittner2016-04-08
| | | | | | | | | | | | | | | | This feature is controlled by a new old_snapshot_threshold GUC. A value of -1 disables the feature, and that is the default. The value of 0 is just intended for testing. Above that it is the number of minutes a snapshot can reach before pruning and vacuum are allowed to remove dead tuples which the snapshot would otherwise protect. The xmin associated with a transaction ID does still protect dead tuples. A connection which is using an "old" snapshot does not get an error unless it accesses a page modified recently enough that it might not be able to produce accurate results. This is similar to the Oracle feature, and we use the same SQLSTATE and error message for compatibility.
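  For illustration only, those semantics map onto postgresql.conf entries
  like these (the 60-minute figure is an arbitrary example; the parameter can
  only be set at server start):

    # postgresql.conf -- old_snapshot_threshold examples (illustrative)
    #old_snapshot_threshold = -1    # default: feature disabled
    #old_snapshot_threshold = 0     # testing only
    old_snapshot_threshold = 60     # snapshots older than 60 minutes stop
                                    # blocking pruning/vacuum of dead tuples
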
* Revert bf08f2292ffca14fd133aa0901d1563b6ecd6894 (Simon Riggs, 2016-04-06)

  Remove recent changes to logging XLOG_RUNNING_XACTS by request.

* Align all shared memory allocations to cache line boundaries. (Robert Haas, 2016-04-05)

  Experimentation shows this only costs about 6kB, which seems well worth it
  given the major performance effects that can be caused by insufficient
  alignment, especially on larger systems.

  Discussion: 14166.1458924422@sss.pgh.pa.us

* Avoid archiving XLOG_RUNNING_XACTS on idle server (Simon Riggs, 2016-04-04)

  If archive_timeout > 0, we should avoid logging XLOG_RUNNING_XACTS if idle.

  Bug 13685 reported by Laurence Rowe, investigated in detail by Michael
  Paquier, though this is not his proposed fix.
  20151016203031.3019.72930@wrigleys.postgresql.org

  Simple non-invasive patch to allow later backpatch to 9.4 and 9.5.

* Make all the declarations of WaitEventSetWaitBlock be marked "inline". (Tom Lane, 2016-04-02)

  The inconsistency here triggered compiler warnings on some buildfarm
  members, and it's surely pretty pointless.

* Copyedit comments and documentation. (Noah Misch, 2016-04-01)

* Fix typo in comment. (Robert Haas, 2016-03-28)

  Thomas Munro

* Introduce WaitEventSet API. (Andres Freund, 2016-03-21)

  Commit ac1d794 ("Make idle backends exit if the postmaster dies.")
  introduced a regression on, at least, large Linux systems. Constantly
  adding the same postmaster_alive_fds to the OS's internal data structures
  for implementing poll/select can cause significant contention, leading to a
  performance regression of nearly 3x in one example. This can be avoided by
  using e.g. Linux's epoll, which avoids having to add/remove file
  descriptors to the wait data structures at a high rate.

  Unfortunately the current latch interface makes it hard to allocate any
  persistent per-backend resources. Replace WaitLatchOrSocket, with a
  backward-compatibility layer, with a new WaitEventSet API. Users can
  allocate such a set across multiple calls, and add more than one file
  descriptor to wait on. The latter has been added because there are upcoming
  postgres features where that will be helpful.

  In addition to the previously existing poll(2), select(2), and
  WaitForMultipleObjects() implementations, also provide an epoll_wait(2)
  based implementation to address the aforementioned performance problem.
  Epoll is only available on Linux, but that is the most likely OS for
  machines large enough (four sockets) to reproduce the problem.

  To actually address the aforementioned regression, create and use a
  long-lived WaitEventSet for FE/BE communication. There are additional
  places that would benefit from a long-lived set, but that's a task for
  another day.

  Thanks to Amit Kapila, who helped make the windows code I blindly wrote
  actually work.

  Reported-By: Dmitry Vasilyev
  Discussion:
  CAB-SwXZh44_2ybvS5Z67p_CDz=XFn4hNAD=CnMEF+QqkXwFrGg@mail.gmail.com
  20160114143931.GG10941@awork2.anarazel.de

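  A minimal usage sketch against the API as committed (these signatures
  gained parameters in later releases; the socket argument and function name
  are assumptions for the example):

    #include "postgres.h"

    #include "miscadmin.h"
    #include "storage/ipc.h"
    #include "storage/latch.h"
    #include "utils/memutils.h"

    /* Hypothetical: wait until "sock" is readable, servicing the latch. */
    static void
    wait_for_input(pgsocket sock)
    {
        WaitEventSet *set;
        WaitEvent   event;

        /* Allocate once; the set can be reused across many waits. */
        set = CreateWaitEventSet(TopMemoryContext, 3);
        AddWaitEventToSet(set, WL_LATCH_SET, PGINVALID_SOCKET, MyLatch, NULL);
        AddWaitEventToSet(set, WL_POSTMASTER_DEATH, PGINVALID_SOCKET,
                          NULL, NULL);
        AddWaitEventToSet(set, WL_SOCKET_READABLE, sock, NULL, NULL);

        for (;;)
        {
            /* Collect one occurred event; -1 means wait indefinitely. */
            WaitEventSetWait(set, -1L, &event, 1);

            if (event.events & WL_POSTMASTER_DEATH)
                proc_exit(1);
            if (event.events & WL_LATCH_SET)
            {
                ResetLatch(MyLatch);
                CHECK_FOR_INTERRUPTS();
            }
            if (event.events & WL_SOCKET_READABLE)
                break;          /* data available on sock */
        }

        FreeWaitEventSet(set);
    }
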
* Combine win32 and unix latch implementations. (Andres Freund, 2016-03-21)

  Previously latches for windows and unix had been implemented in different
  files. A later patch will introduce an expanded wait infrastructure, and
  keeping the implementations separate would introduce too much duplication.

  This basically just moves the functions, without too much change. The
  reason to keep this separate is that it allows git blame to continue
  working a little less badly, and makes review a tiny bit easier.

  Discussion: 20160114143931.GG10941@awork2.anarazel.de

* Fix typos. (Robert Haas, 2016-03-15)

  Oskari Saarenmaa

* Rework wait for AccessExclusiveLocks on Hot Standby (Simon Riggs, 2016-03-10)

  The earlier version, committed in 9.0, caused spurious waits in some cases.
  The new lock-wait infrastructure added in 9.3 is used to correct and
  improve this.

  Jeff Janes, based upon a proposal by Simon Riggs, who also reviewed.
  Additional review comments from Amit Kapila.

* Change the format of the VM fork to add a second bit per page. (Robert Haas, 2016-03-01)

  The new bit indicates whether every tuple on the page is already frozen. It
  is cleared only when the all-visible bit is cleared, and it can be set only
  when we vacuum a page and find that every tuple on that page is both
  visible to every transaction and in no need of any future vacuuming.

  A future commit will use this new bit to optimize away full-table scans
  that would otherwise be triggered by XID wraparound considerations. A page
  which is merely all-visible must still be scanned in that case, but a page
  which is all-frozen need not be. This commit does not attempt that
  optimization, although that optimization is the goal here. It seems better
  to get the basic infrastructure in place first.

  Per discussion, it's very desirable for pg_upgrade to automatically migrate
  existing VM forks from the old format to the new format. That, too, will be
  handled in a follow-on patch.

  Masahiko Sawada, reviewed by Kyotaro Horiguchi, Fujii Masao, Amit Kapila,
  Simon Riggs, Andres Freund, and others, and substantially revised by me.

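  The two flag bits land in visibilitymap.h roughly as below; the interplay
  comment restates the rule from the message above:

    /* Flag bits per heap page in the visibility map (visibilitymap.h). */
    #define VISIBILITYMAP_ALL_VISIBLE   0x01    /* all tuples visible to all */
    #define VISIBILITYMAP_ALL_FROZEN    0x02    /* all tuples also frozen */
    #define VISIBILITYMAP_VALID_BITS    0x03    /* OR of all valid bits */

    /*
     * ALL_FROZEN is cleared whenever ALL_VISIBLE is cleared, so a page can
     * be all-frozen only while it is also all-visible; 0x02 never appears
     * alone.
     */
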
* Create a function to reliably identify which sessions block which others. (Tom Lane, 2016-02-22)

  This patch introduces "pg_blocking_pids(int) returns int[]", which returns
  the PIDs of any sessions that are blocking the session with the given PID.
  Historically people have obtained such information using a self-join on the
  pg_locks view, but it's unreasonably tedious to do it that way with any
  modicum of correctness, and the addition of parallel queries has pretty
  much broken that approach altogether. (Given some more columns in the view
  than there are today, you could imagine handling parallel-query cases with
  a 4-way join; but ugh.)

  The new function has the following behaviors that are painful or impossible
  to get right via pg_locks:

  1. Correctly understands which lock modes block which other ones.

  2. In soft-block situations (two processes both waiting for conflicting
  lock modes), only the one that's in front in the wait queue is reported to
  block the other.

  3. In parallel-query cases, reports all sessions blocking any member of the
  given PID's lock group, and reports a session by naming its leader
  process's PID, which will be the pg_backend_pid() value visible to clients.

  The motivation for doing this right now is mostly to fix the isolation
  tests. Commit 38f8bdcac4982215beb9f65a19debecaf22fd470 lobotomized
  isolationtester's is-it-waiting query by removing its ability to recognize
  nonconflicting lock modes, as a crude workaround for the inability to
  handle soft-block situations properly. But even without the lock mode
  tests, the old query was excessively slow, particularly in
  CLOBBER_CACHE_ALWAYS builds; some of our buildfarm animals fail the new
  deadlock-hard test because the deadlock timeout elapses before they can
  probe the waiting status of all eight sessions. Replacing the pg_locks
  self-join with use of pg_blocking_pids() is not only much more correct, but
  a lot faster: I measure it at about 9X faster in a typical dev build with
  Asserts, and 3X faster in CLOBBER_CACHE_ALWAYS builds. That should provide
  enough headroom for the slower CLOBBER_CACHE_ALWAYS animals to pass the
  test, without having to lengthen deadlock_timeout yet more and thus slow
  down the test for everyone else.

* Rename PGPROC fields related to group XID clearing again. (Robert Haas, 2016-02-11)

  Commit 0e141c0fbb211bdd23783afa731e3eef95c9ad7a introduced a new facility
  to reduce ProcArrayLock contention by clearing several XIDs from the
  ProcArray under a single lock acquisition. The names initially chosen were
  deemed not to be very good choices, so commit
  4aec49899e5782247e134f94ce1c6ee926f88e1c renamed them. But now it seems
  like we still didn't get it right. A pending patch wants to add similar
  infrastructure for batching CLOG updates, so the names need to be clear
  enough to allow a new set of structure members with a related purpose.

  Amit Kapila

* Revert "Temporarily make pg_ctl and server shutdown a whole lot chattier."Tom Lane2016-02-10
| | | | | | This reverts commit 3971f64843b02e4a55d854156bd53e46a0588e45 and a couple of followon debugging commits; I think we've learned what we can from them.
* Add still more chattiness in server shutdown. (Tom Lane, 2016-02-09)

  Further investigation says that there may be some slow operations after
  we've finished ShutdownXLOG(), so add some more log messages to try to
  isolate that. This is all temporary code too.

* Migrate PGPROC's backendLock into PGPROC itself, using a new tranche. (Robert Haas, 2016-01-29)

  Previously, each PGPROC's backendLock was part of the main tranche, and the
  PGPROC just contained a pointer. Now, the actual LWLock is part of the
  PGPROC.

  As with previous, similar patches, this makes it significantly easier to
  identify these lwlocks in LWLOCK_STATS or Trace_lwlocks output and improves
  modularity.

  Author: Ildus Kurbangaliev
  Reviewed-by: Amit Kapila, Robert Haas

* Correct comment in GetConflictingVirtualXIDs() (Simon Riggs, 2016-01-24)

  We use Share lock because it is safe to do so.

* Update copyright for 2016 (Bruce Momjian, 2016-01-02)

  Backpatch certain files through 9.1

* shm_mq: Third attempt at fixing nowait behavior in shm_mq_receive. (Robert Haas, 2015-11-03)

  Commit a1480ec1d3bacb9acb08ec09f22bc25bc033115b purported to fix the
  problems with commit b2ccb5f4e6c81305386edb34daf7d1d1e1ee112a, but it
  didn't completely fix them. The problem is that the checks were performed
  in the wrong order, leading to a race condition. If the sender attached,
  sent a message, and detached after the receiver called shm_mq_get_sender
  and before the receiver called shm_mq_counterparty_gone, we'd incorrectly
  return SHM_MQ_DETACHED before all messages were read. Repair by reversing
  the order of operations, and add a long comment explaining why this new
  logic is (hopefully) correct.

* Remove some more dead Alpha-specific code. (Tom Lane, 2015-11-02)

* shm_mq: Repair breakage from previous commit. (Robert Haas, 2015-10-22)

  If the counterparty writes some data into the queue and then detaches, it's
  wrong to return SHM_MQ_DETACHED right away. If we do that, we fail to read
  whatever was written.

* shm_mq: Fix failure to notice a dead counterparty when nowait is used. (Robert Haas, 2015-10-22)

  The shm_mq mechanism was intended to optionally notice when the process on
  the other end of the queue fails to attach to the queue. It does this by
  allowing the user to pass a BackgroundWorkerHandle; if the background
  worker in question is launched and dies without attaching to the queue,
  then we know it never will. This logic works OK in blocking mode, but when
  called with nowait = true we fail to notice that this has happened due to
  an asymmetry in the logic. Repair.

  Reported off-list by Rushabh Lathia. Patch by me.

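  A sketch of the receive side this fixes, against the 9.5-era API; the dsm
  segment, queue, and worker handle are assumed to be set up elsewhere, and
  the function name is invented:

    #include "postgres.h"

    #include "postmaster/bgworker.h"
    #include "storage/dsm.h"
    #include "storage/shm_mq.h"

    /* Hypothetical: drain whatever the worker has queued, without blocking. */
    static void
    drain_queue(shm_mq *mq, dsm_segment *seg, BackgroundWorkerHandle *wh)
    {
        shm_mq_handle *mqh;

        /*
         * Passing the worker handle lets shm_mq notice a worker that is
         * launched but dies before ever attaching to the queue.
         */
        mqh = shm_mq_attach(mq, seg, wh);

        for (;;)
        {
            Size    nbytes;
            void   *data;
            shm_mq_result res;

            res = shm_mq_receive(mqh, &nbytes, &data, true);    /* nowait */

            if (res == SHM_MQ_SUCCESS)
            {
                /* ... process nbytes bytes at "data" ... */
                continue;
            }
            if (res == SHM_MQ_WOULD_BLOCK)
                break;      /* nothing available right now */
            if (res == SHM_MQ_DETACHED)
                break;      /* counterparty gone -- after this fix,
                             * reported reliably even in nowait mode */
        }

        shm_mq_detach(mq);      /* 9.5-era signature takes the shm_mq */
    }
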
* Remove volatile qualifiers from proc.c and procarray.c (Robert Haas, 2015-10-16)

  Prior to commit 0709b7ee72e4bc71ad07b7120acd117265ab51d0, access to
  variables within a spinlock-protected critical section had to be done
  through a volatile pointer, but that should no longer be necessary.

  Michael Paquier

* Remove volatile qualifiers from dynahash.c, shmem.c, and sinvaladt.c (Robert Haas, 2015-10-16)

  Prior to commit 0709b7ee72e4bc71ad07b7120acd117265ab51d0, access to
  variables within a spinlock-protected critical section had to be done
  through a volatile pointer, but that should no longer be necessary.

  Thomas Munro

* Remove set_latch_on_sigusr1 flag. (Robert Haas, 2015-10-09)

  This flag has proven to be a recipe for bugs, and it doesn't seem like it
  can really buy anything in terms of performance. So let's just *always*
  set the process latch when we receive SIGUSR1 instead of trying to do it
  only when needed.

  Per my recent proposal on pgsql-hackers.

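  The resulting handler behavior amounts to unconditionally poking the latch;
  a simplified sketch (the real procsignal_sigusr1_handler also services
  ProcSignal reason flags first):

    #include "postgres.h"

    #include "miscadmin.h"      /* MyLatch */
    #include "storage/latch.h"

    /* Simplified SIGUSR1 handler: always set the process latch. */
    static void
    example_sigusr1_handler(SIGNAL_ARGS)
    {
        int     save_errno = errno;     /* SetLatch may clobber errno */

        /* ... check ProcSignal reason flags here, as the real one does ... */

        SetLatch(MyLatch);              /* unconditional since this commit */

        errno = save_errno;
    }
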
* Glue layer to connect the executor to the shm_mq mechanism. (Robert Haas, 2015-09-18)

  The shm_mq mechanism was built to send error (and notice) messages and
  tuples between backends. However, shm_mq itself only deals in raw bytes.
  Since commit 2bd9e412f92bc6a68f3e8bcb18e04955cc35001d, we have had
  infrastructure for one backend to redirect protocol messages to a queue and
  for another backend to parse them and do useful things with them. This
  commit introduces a somewhat analogous facility for tuples by adding a new
  type of DestReceiver, DestTupleQueue, which writes each tuple generated by
  a query into a shm_mq, and a new TupleQueueFunnel facility which reads raw
  tuples out of the queue and reconstructs the HeapTuple format expected by
  the executor.

  The TupleQueueFunnel abstraction supports reading from multiple tuple
  streams at the same time, but only in round-robin fashion. Someone could
  imaginably want other policies, but this should be good enough to meet our
  short-term needs related to parallel query, and we can always extend it
  later.

  This also makes one minor addition to the shm_mq API that didn't seem worth
  breaking out as a separate patch.

  Extracted from Amit Kapila's parallel sequential scan patch. This code was
  originally written by me, and then it was revised by Amit, and then it was
  revised some more by me.

* Assorted code review for recent ProcArrayLock patch. (Robert Haas, 2015-09-03)

  Post-commit review by Andres Freund discovered a couple of concurrency bugs
  in the original patch: specifically, if the leader cleared a follower's XID
  before it reached PGSemaphoreLock, the semaphore would be left in the wrong
  state; and if another process did PGSemaphoreUnlock for some unrelated
  reason, we might resume execution before the fact that our XID was cleared
  was globally visible.

  Also, improve the wording of some comments, rename nextClearXidElem to
  firstClearXidElem in PROC_HDR for clarity, and drop some volatile
  qualifiers that aren't necessary.

  Amit Kapila, reviewed and slightly revised by me.

* Don't use function definitions looking like old-style ones. (Andres Freund, 2015-08-15)

  This fixes a bunch of somewhat pedantic warnings with new compilers. Since
  by far the majority of other function definitions use the (void) style, it
  just seems consistent to do so as well in the remaining few places.

* Fix attach-related race condition in shm_mq_send_bytes. (Robert Haas, 2015-08-07)

  Spotted by Antonin Houska.

* Fix incorrect calculation in shm_mq_receive. (Robert Haas, 2015-08-06)

  If some, but not all, of the length word has already been read, and the
  next attempt to read sees exactly the number of bytes needed to complete
  the length word, or fewer, then we'll incorrectly read less than all of the
  available data.

  Antonin Houska

* Reduce ProcArrayLock contention by removing backends in batches. (Robert Haas, 2015-08-06)

  When a write transaction commits, it must clear its XID advertised via the
  ProcArray, which requires that we hold ProcArrayLock in exclusive mode in
  order to prevent concurrent processes running GetSnapshotData from seeing
  inconsistent results. When many processes try to commit at once,
  ProcArrayLock must change hands repeatedly, with each concurrent process
  trying to commit waking up to acquire the lock in turn.

  To make things more efficient, when more than one backend is trying to
  commit a write transaction at the same time, have just one of them acquire
  ProcArrayLock in exclusive mode and clear the XIDs of all processes in the
  group. Benchmarking reveals that this is much more efficient at very high
  client counts.

  Amit Kapila, heavily revised by me, with some review also from Pavan
  Deolasee.

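  This is an instance of a general leader/follower batching pattern. The
  sketch below shows only the shape: every name in it (groupFirst, groupNext,
  the sleep/wake steps) is invented for illustration and does not match the
  actual PROC_HDR/PGPROC fields:

    #include "postgres.h"

    #include "port/atomics.h"
    #include "postmaster/postmaster.h"  /* MAX_BACKENDS */
    #include "storage/lwlock.h"
    #include "storage/proc.h"

    /* Hypothetical globals; initialize groupFirst to INVALID_PGPROCNO with
     * pg_atomic_init_u32() at startup. */
    static pg_atomic_uint32 groupFirst;         /* head of pending list */
    static uint32 groupNext[MAX_BACKENDS];      /* per-backend next links */

    static void
    group_clear_xid(PGPROC *proc)
    {
        uint32  head = pg_atomic_read_u32(&groupFirst);

        /* Push ourselves onto the lock-free pending list with a CAS loop. */
        for (;;)
        {
            groupNext[proc->pgprocno] = head;
            if (pg_atomic_compare_exchange_u32(&groupFirst, &head,
                                               (uint32) proc->pgprocno))
                break;
            /* on failure, "head" now holds the current value; retry */
        }

        if (head != (uint32) INVALID_PGPROCNO)
        {
            /* A leader exists: sleep on our semaphore until it has cleared
             * our advertised XID, then return. */
            return;
        }

        /* List was empty, so we are the leader: one exclusive acquisition
         * of ProcArrayLock serves the whole group. */
        LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);
        /* ... pop each queued proc and clear its advertised XID ... */
        LWLockRelease(ProcArrayLock);

        /* ... wake each follower's semaphore so it can return ... */
    }
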
* pgindent run for 9.5 (Bruce Momjian, 2015-05-23)

* Fix more typos in comments. (Heikki Linnakangas, 2015-05-20)

  Patch by CharSyam, plus a few more I spotted with grep.