Commit log: src/backend/commands/async.c
* Prevent excess SimpleLruTruncate() deletion.  (Noah Misch, 2021-01-16)

  Every core SLRU wraps around. With the exception of pg_notify, the wrap
  point can fall in the middle of a page. Account for this in the
  PagePrecedes callback specification and in SimpleLruTruncate()'s use of
  said callback. Update each callback implementation to fit the new
  specification. This changes SerialPagePrecedesLogically() from the style
  of asyncQueuePagePrecedes() to the style of CLOGPagePrecedes(). (Whereas
  pg_clog and pg_serial share a key space, pg_serial is nothing like
  pg_notify.)

  The bug fixed here has the same symptoms and user followup steps as
  592a589a04bd456410b853d86bd05faa9432cbbb. Back-patch to 9.5 (all
  supported versions).

  Reviewed by Andrey Borodin and (in earlier versions) by Tom Lane.

  Discussion: https://postgr.es/m/20190202083822.GC32531@gust.leadboat.com
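
  To make the "CLOGPagePrecedes() style" concrete, here is a sketch of a
  wrap-aware page-comparison callback, paraphrased from clog.c rather than
  quoted from the patch, so exact constants and comments may differ. The key
  point is that the comparison must hold for every entry on the page, since
  the XID wrap point can fall mid-page:

      static bool
      CLOGPagePrecedes(int page1, int page2)
      {
          TransactionId xid1;
          TransactionId xid2;

          xid1 = ((TransactionId) page1) * CLOG_XACTS_PER_PAGE;
          xid1 += FirstNormalTransactionId + 1;
          xid2 = ((TransactionId) page2) * CLOG_XACTS_PER_PAGE;
          xid2 += FirstNormalTransactionId + 1;

          /* precede for both the first and the last XID of page2 */
          return (TransactionIdPrecedes(xid1, xid2) &&
                  TransactionIdPrecedes(xid1, xid2 + CLOG_XACTS_PER_PAGE - 1));
      }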
* Fix a recently-introduced race condition in LISTEN/NOTIFY handling.  (Tom Lane, 2020-11-28)

  Commit 566372b3d fixed some race conditions involving concurrent
  SimpleLruTruncate calls, but it introduced new ones in async.c. A
  newly-listening backend could attempt to read Notify SLRU pages that
  were in process of being truncated, possibly causing an error. Also, the
  QUEUE_TAIL pointer could become set to a value that's not equal to the
  queue position of any backend. While that's fairly harmless in v13 and
  up (thanks to commit 51004c717), in older branches it resulted in
  near-permanent disabling of the queue truncation logic, so that
  continued use of NOTIFY led to queue-fill warnings and eventual
  inability to send any more notifies. (A server restart is enough to make
  that go away, but it's still pretty unpleasant.)

  The core of the problem is confusion about whether QUEUE_TAIL represents
  the "logical" tail of the queue (i.e., the oldest still-interesting
  data) or the "physical" tail (the oldest data we've not yet truncated
  away). To fix, split that into two variables. QUEUE_TAIL regains its
  definition as the logical tail, and we introduce a new variable to track
  the oldest un-truncated page.

  Per report from Mikael Gustavsson. Like the previous patch, back-patch
  to all supported branches.

  Discussion: https://postgr.es/m/1b8561412e8a4f038d7a491c8b922788@smhi.se
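
  A minimal sketch of the split described above; these declarations and
  field names are illustrative (quoted from memory), not a quote of the
  committed struct:

      typedef struct AsyncQueueControl
      {
          QueuePosition head;     /* next position to write new entries */
          QueuePosition tail;     /* logical tail: oldest interesting data */
          int           stopPage; /* oldest not-yet-truncated page */
          /* ... per-backend listener state follows ... */
      } AsyncQueueControl;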
* Prevent concurrent SimpleLruTruncate() for any given SLRU.  (Noah Misch, 2020-08-15)

  The SimpleLruTruncate() header comment states the new coding rule. To
  achieve this, add locktype "frozenid" and two LWLocks. This closes a
  rare opportunity for data loss, which manifested as "apparent
  wraparound" or "could not access status of transaction" errors. Data
  loss is more likely in pg_multixact, due to released branches' thin
  margin between multiStopLimit and multiWrapLimit. If a user's physical
  replication primary logged ": apparent wraparound" messages, the user
  should rebuild standbys of that primary regardless of symptoms. At less
  risk is a cluster having emitted "not accepting commands" errors or
  "must be vacuumed" warnings at some point. One can test a cluster for
  this data loss by running VACUUM FREEZE in every database. Back-patch to
  9.5 (all supported versions).

  Discussion: https://postgr.es/m/20190218073103.GA1434723@rfd.leadboat.com
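
  In async.c terms, the rule means tail advancement and truncation must be
  serialized. A hedged sketch, assuming the NotifyQueueTailLock introduced
  by this work and present-day names (the SLRU control struct was called
  AsyncCtl before the v13 renaming):

      /* Only one process may compute and apply a truncation cutoff. */
      LWLockAcquire(NotifyQueueTailLock, LW_EXCLUSIVE);

      /* ... recompute the minimum backend position, update the tail ... */

      SimpleLruTruncate(NotifyCtl, cutoffPage);  /* drop old segments */
      LWLockRelease(NotifyQueueTailLock);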
* Spelling adjustments  (Peter Eisentraut, 2020-06-07)
* Rename SLRU structures and associated LWLocks.  (Tom Lane, 2020-05-15)

  Originally, the names assigned to SLRUs had no purpose other than being
  shmem lookup keys, so not a lot of thought went into them. As of v13,
  though, we're exposing them in the pg_stat_slru view and the
  pg_stat_reset_slru function, so it seems advisable to take a bit more
  care. Rename them to names based on the associated on-disk storage
  directories (which fortunately we *did* think about, to some extent;
  since those are also visible to DBAs, consistency seems like a good
  thing). Also rename the associated LWLocks, since those names are
  likewise user-exposed now as wait event names.

  For the most part I only touched symbols used in the respective modules'
  SimpleLruInit() calls, not the names of other related objects. This
  renaming could have been taken further, and maybe someday we will do so.
  But for now it seems undesirable to change the names of any globally
  visible functions or structs, so some inconsistency is unavoidable. (But
  I *did* terminate "oldserxid" with prejudice, as I found that name both
  unreadable and not descriptive of the SLRU's contents.)

  Table 27.12 needs re-alphabetization now, but I'll leave that till after
  the other LWLock renamings I have in mind.

  Discussion: https://postgr.es/m/28683.1589405363@sss.pgh.pa.us
* Fix async.c to not register any SLRU stats counts in the postmaster.  (Tom Lane, 2020-05-13)

  Previously, AsyncShmemInit forcibly initialized the first page of the
  async SLRU, to save dealing with that case in asyncQueueAddEntries. But
  this is a poor tradeoff, since many installations do not ever use
  NOTIFY; for them, expending those cycles in AsyncShmemInit is a complete
  waste. Besides, this only saves a couple of instructions in
  asyncQueueAddEntries, which hardly seems likely to be measurable.

  The real reason to change this now, though, is that now that we track
  SLRU access stats, the existing code is causing the postmaster to
  accumulate some access counts, which then get inherited into child
  processes by fork(), messing up the statistics. Delaying the
  initialization into the first child that does a NOTIFY fixes that.
  Hence, we can revert f3d23d83e, which was an incorrect attempt at fixing
  that issue. Also, add an Assert to pgstat.c that should catch any future
  errors of the same sort.

  Discussion: https://postgr.es/m/8367.1589391884@sss.pgh.pa.us
* Refactor ps_status.c API  (Peter Eisentraut, 2020-03-11)

  The init_ps_display() arguments were mostly lies by now, so to match
  typical usage, just use one argument and let the caller assemble it from
  multiple sources if necessary. The only user of the additional arguments
  is BackendInitialize(), which was already doing string assembly on the
  caller side anyway.

  Remove the second argument of set_ps_display() ("force") and just handle
  that in init_ps_display() internally.

  BackendInitialize() also used to set the initial status as
  "authentication", but that was very far from where authentication
  actually happened. So now it's set to "initializing" and then
  "authentication" just before the actual call to ClientAuthentication().

  Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
  Reviewed-by: Kuntal Ghosh <kuntalghosh.2007@gmail.com>
  Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
  Discussion: https://www.postgresql.org/message-id/flat/c65e5196-4f04-4ead-9353-6088c19615a3@2ndquadrant.com
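
  In the adjusted API the calls look roughly like this (the activity
  strings here are illustrative, not from the patch):

      init_ps_display("walreceiver");   /* single fixed-part argument now */
      set_ps_display("streaming");      /* the "force" flag is gone */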
* Move src/backend/utils/hash/hashfn.c to src/common  (Robert Haas, 2020-02-27)

  This also involves renaming src/include/utils/hashutils.h, which becomes
  src/include/common/hashfn.h. Perhaps an argument can be made for keeping
  the hashutils.h name, but it seemed more consistent to make it match the
  name of the file, and also more descriptive of what is actually going on
  here.

  Patch by me, reviewed by Suraj Kharage and Mark Dilger. Off-list advice
  on how not to break the Windows build from Davinder Singh and Amit
  Kapila.

  Discussion: http://postgr.es/m/CA+TgmoaRiG4TXND8QuM6JXFRkM_1wL2ZNhzaUKsuec9-4yrkgw@mail.gmail.com
* Update copyrights for 2020  (Bruce Momjian, 2020-01-01)

  Backpatch-through: update all files in master, backpatch legal files
  through 9.4
* Doc: improve discussion of race conditions involved in LISTEN.  (Tom Lane, 2019-11-24)

  The user docs didn't really explain how to use LISTEN safely, so clarify
  that. Also clean up some fuzzy-headed explanations in comments. No code
  changes.

  Discussion: https://postgr.es/m/3ac7f397-4d5f-be8e-f354-440020675694@gmail.com
* Stabilize NOTIFY behavior by transmitting notifies before ReadyForQuery.  (Tom Lane, 2019-11-24)

  This patch ensures that, if any notify messages were received during a
  just-finished transaction, they get sent to the frontend just before,
  not just after, the ReadyForQuery message. With libpq and other client
  libraries that act similarly, this guarantees that the client will see
  the notify messages as available as soon as it thinks the transaction is
  done.

  This probably makes no difference in practice, since in realistic
  use-cases the application would have to cope with asynchronous arrival
  of notify events anyhow. However, it makes it a lot easier to build
  cross-session-notify test cases with stable behavior. I'm a bit
  surprised now that we've not seen any buildfarm instability with the
  test cases added by commit b10f40bf0. Tests that I intend to add in an
  upcoming bug fix are definitely unstable without this.

  Back-patch to 9.6, which is as far back as we can do NOTIFY testing with
  the isolationtester infrastructure.

  Discussion: https://postgr.es/m/13881.1574557302@sss.pgh.pa.us
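
  From the client side, the guarantee means a poll for notifies right
  after a command completes cannot miss events from that transaction. A
  hedged libpq sketch (drain_notifies is a hypothetical helper, not part
  of libpq):

      #include <stdio.h>
      #include <libpq-fe.h>

      static void
      drain_notifies(PGconn *conn)
      {
          PGnotify   *n;

          /* notifies are transmitted before ReadyForQuery, so they are
           * already queued once the previous command's results are read */
          if (!PQconsumeInput(conn))
              return;
          while ((n = PQnotifies(conn)) != NULL)
          {
              printf("NOTIFY \"%s\" from pid %d: %s\n",
                     n->relname, n->be_pid, n->extra);
              PQfreemem(n);
          }
      }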
* Stabilize the results of pg_notification_queue_usage().  (Tom Lane, 2019-11-24)

  This function wasn't touched in commit 51004c717, but that turns out to
  be a bad idea, because its results now include any dead space that
  exists in the NOTIFY queue on account of our being lazy about advancing
  the queue tail. Notably, the isolation tests now fail if run twice
  without a server restart between, because async-notify's first test of
  the function will already show a positive value. It seems likely that
  end users would be equally unhappy about the result's instability.

  To fix, just make the function call asyncQueueAdvanceTail before
  computing its result. That should end in producing the same value as
  before, and it's hard to believe that there's any practical use-case
  where pg_notification_queue_usage() is called so often as to create a
  performance degradation, especially compared to what we did before.

  Out of paranoia, also mark this function parallel-restricted (it was
  volatile, but parallel-safe by default, before). Although the code seems
  to work fine when run in a parallel worker, that's outside the design
  scope of async.c, and it's a bit scary to have intentional side-effects
  happening in a parallel worker. There seems no plausible use-case where
  it'd be important to try to parallelize this, so let's not take any risk
  of introducing new bugs.

  In passing, re-pgindent async.c and run reformat-dat-files on
  pg_proc.dat, just because I'm a neatnik.

  Discussion: https://postgr.es/m/13881.1574557302@sss.pgh.pa.us
* PG_FINALLY  (Peter Eisentraut, 2019-11-01)

  This gives an alternative way of catching exceptions, for the common
  case where the cleanup code is the same in the error and non-error
  cases. So instead of

      PG_TRY();
      {
          ... code that might throw ereport(ERROR) ...
      }
      PG_CATCH();
      {
          cleanup();
          PG_RE_THROW();
      }
      PG_END_TRY();
      cleanup();

  one can write

      PG_TRY();
      {
          ... code that might throw ereport(ERROR) ...
      }
      PG_FINALLY();
      {
          cleanup();
      }
      PG_END_TRY();

  Discussion: https://www.postgresql.org/message-id/flat/95a822c3-728b-af0e-d7e5-71890507ae0c%402ndquadrant.com
* Remove AtSubStart_Notify.  (Robert Haas, 2019-10-04)

  Allocate notify-related state lazily instead. This makes trivial
  subtransactions noticeably faster.

  Patch by me, reviewed and tested by Dilip Kumar, Kyotaro Horiguchi, and
  Jeevan Ladhe.

  Discussion: https://postgr.es/m/CA+TgmobE1J22S1eC-6N-je9LgrcwZypkwp+zH6JXo9mc=4Nk3A@mail.gmail.com
* Make some efficiency improvements in LISTEN/NOTIFY.  (Tom Lane, 2019-09-22)

  Move the responsibility for advancing the NOTIFY queue tail pointer from
  the listener(s) to the notification sender, and only have the sender do
  it once every few queue pages, rather than after every batch of
  notifications as at present. This reduces the number of times we execute
  asyncQueueAdvanceTail, and reduces contention when there are multiple
  listeners (since that function requires exclusive lock).

  This change relies on the observation that we don't really need the tail
  pointer to be exactly up-to-date. It's certainly not necessary to
  attempt to release disk space more often than once per SLRU segment. The
  only other usage of the tail pointer is that an incoming listener, if
  it's the only listener in its database, will need to scan the queue
  forward from the tail; but that's surely a less performance-critical
  path than routine sending and receiving of notifies. We compromise by
  advancing the tail pointer after every 4 pages of output, so that it
  shouldn't get more than a few pages behind.

  Also, when sending signals to other backends after adding notify
  message(s) to the queue, recognize that only backends in our own
  database are going to care about those messages, so only such backends
  really need to be awakened promptly. Backends in other databases should
  get kicked if they're well behind on reading the queue, else they'll
  hold back the global tail pointer; but wakening them for every single
  message is pointless. This change can substantially reduce signal
  traffic if listeners are spread among many databases. It won't help for
  the common case of only a single active database, but the extra check
  costs very little.

  Martijn van Oosterhout, with some adjustments by me

  Discussion: https://postgr.es/m/CADWG95vtRBFDdrx1JdT1_9nhOFw48KaeTev6F_LtDQAFVpSPhA@mail.gmail.com
  Discussion: https://postgr.es/m/CADWG95uFj8rLM52Er80JnhRsTbb_AqPP1ANHS8XQRGbqLrU+jA@mail.gmail.com
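
  A hedged sketch of the "every 4 pages" batching in the sender path; the
  macro and flag names follow async.c conventions but are quoted from
  memory, not from the patch:

      #define QUEUE_CLEANUP_DELAY 4   /* pages between tail-advance tries */

      /* On stepping onto a new queue page, occasionally volunteer to
       * advance the global tail (done after releasing the queue lock). */
      if (QUEUE_POS_PAGE(queue_head) % QUEUE_CLEANUP_DELAY == 0)
          tryAdvanceTail = true;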
* Reduce overhead of scanning the backend[] array in LISTEN/NOTIFY.  (Tom Lane, 2019-09-10)

  Up to now, async.c scanned its whole array of per-backend state whenever
  it needed to find listening backends. That's expensive if MaxBackends is
  large, so extend the data structure with list links that thread the
  active entries together.

  A downside of this change is that asyncQueueUnregister (unregister a
  listening backend at backend exit) now requires exclusive not shared
  lock, and it can take awhile if there are many other listening backends.
  We could improve the latter issue by using a doubly- not singly-linked
  list, but it's probably not worth the storage space; typical usage
  patterns for LISTEN/NOTIFY have fairly long-lived listeners.

  In return for that, Exec_ListenPreCommit (initially register a listening
  backend), SignalBackends, and asyncQueueAdvanceTail get significantly
  faster when MaxBackends is much larger than the number of listening
  backends. If most of the potential backend slots are listening, we don't
  win, but that's a case where the actual interprocess-signal overhead is
  going to swamp these considerations anyway.

  Martijn van Oosterhout, hacked a bit more by me

  Discussion: https://postgr.es/m/CADWG95vtRBFDdrx1JdT1_9nhOFw48KaeTev6F_LtDQAFVpSPhA@mail.gmail.com
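
  A hedged sketch of the threaded per-backend state; field and macro names
  are illustrative, and the committed struct may differ in detail:

      typedef struct QueueBackendStatus
      {
          int32          pid;           /* InvalidPid if slot is unused */
          Oid            dboid;         /* backend's database OID */
          BackendId      nextListener;  /* next listener, or invalid */
          QueuePosition  pos;           /* backend's read position */
      } QueueBackendStatus;

      /* walk only active listeners instead of all MaxBackends slots */
      for (i = QUEUE_FIRST_LISTENER; i > 0; i = QUEUE_NEXT_LISTENER(i))
          ...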
* Use a hash table to de-duplicate NOTIFY events faster.  (Tom Lane, 2019-08-15)

  Previously, async.c got rid of duplicate notifications by scanning the
  list of pending events to compare each one to the proposed new event.
  This works okay for very small numbers of distinct events, but degrades
  as O(N^2) for many events. We can improve matters by using a hash table
  to probe for duplicates. So as not to add a lot of overhead for the
  simple cases that the code did handle well before, create the hash table
  only once a (sub)transaction has queued more than 16 distinct notify
  events.

  A downside is that we now have to do per-event work to propagate a
  successful subtransaction's notify events up to its parent. (But this
  isn't significant unless the subtransaction had many events, in which
  case the O(N^2) behavior would have been in play already, so we still
  come out ahead.)

  We can make some lemonade out of this lemon, though: since we must
  examine each event anyway, it's now possible to de-duplicate events
  fully, rather than skipping that for events merged up from
  subtransactions. Hence, remove the old weasel wording in notify.sgml
  about whether de-duplication happens or not, and adjust the test case in
  async-notify.spec that exhibited the old behavior.

  While at it, rearrange the definition of struct Notification to make it
  more compact and require just one palloc per event, rather than two or
  three. This saves space when there are a lot of events, in fact more
  than enough to buy back the space needed for the hash table.

  Patch by me, based on discussions around a different patch submitted by
  Filip Rembiałkowski.

  Discussion: https://postgr.es/m/17822.1564186806@sss.pgh.pa.us
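
  A hedged sketch of how such a dynahash table could be set up; the entry
  type and the notification_hash/notification_match helpers are assumed
  names, and the real patch may differ:

      HASHCTL     hash_ctl;
      HTAB       *hashtab;

      memset(&hash_ctl, 0, sizeof(hash_ctl));
      hash_ctl.keysize = sizeof(Notification *);
      hash_ctl.entrysize = sizeof(NotificationHash);
      hash_ctl.hash = notification_hash;    /* hashes channel + payload */
      hash_ctl.match = notification_match;  /* compares channel + payload */
      hash_ctl.hcxt = CurTransactionContext;
      hashtab = hash_create("Pending Notifies", 256L, &hash_ctl,
                            HASH_ELEM | HASH_FUNCTION |
                            HASH_COMPARE | HASH_CONTEXT);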
* Remove dead code.  (Tom Lane, 2019-07-15)

  These memory context switches are useless in the wake of commit
  1cff1b95a. Noted by Jesper Pedersen.

  Discussion: https://postgr.es/m/f078ce63-9e04-0f3e-d200-d7ee66279abe@redhat.com
* Represent Lists as expansible arrays, not chains of cons-cells.  (Tom Lane, 2019-07-15)

  Originally, Postgres Lists were a more or less exact reimplementation of
  Lisp lists, which consist of chains of separately-allocated cons cells,
  each having a value and a next-cell link. We'd hacked that once before
  (commit d0b4399d8) to add a separate List header, but the data was still
  in cons cells. That makes some operations -- notably list_nth() -- O(N),
  and it's bulky because of the next-cell pointers and per-cell palloc
  overhead, and it's very cache-unfriendly if the cons cells end up
  scattered around rather than being adjacent.

  In this rewrite, we still have List headers, but the data is in a
  resizable array of values, with no next-cell links. Now we need at most
  two palloc's per List, and often only one, since we can allocate some
  values in the same palloc call as the List header. (Of course, extending
  an existing List may require repalloc's to enlarge the array. But this
  involves just O(log N) allocations not O(N).)

  Of course this is not without downsides. The key difficulty is that
  addition or deletion of a list entry may now cause other entries to
  move, which it did not before.

  For example, that breaks foreach() and sister macros, which historically
  used a pointer to the current cons-cell as loop state. We can repair
  those macros transparently by making their actual loop state be an
  integer list index; the exposed "ListCell *" pointer is no longer state
  carried across loop iterations, but is just a derived value. (In
  practice, modern compilers can optimize things back to having just one
  loop state value, at least for simple cases with inline loop bodies.) In
  principle, this is a semantics change for cases where the loop body
  inserts or deletes list entries ahead of the current loop index; but I
  found no such cases in the Postgres code.

  The change is not at all transparent for code that doesn't use foreach()
  but chases lists "by hand" using lnext(). The largest share of such code
  in the backend is in loops that were maintaining "prev" and "next"
  variables in addition to the current-cell pointer, in order to delete
  list cells efficiently using list_delete_cell(). However, we no longer
  need a previous-cell pointer to delete a list cell efficiently. Keeping
  a next-cell pointer doesn't work, as explained above, but we can improve
  matters by changing such code to use a regular foreach() loop and then
  using the new macro foreach_delete_current() to delete the current cell.
  (This macro knows how to update the associated foreach loop's state so
  that no cells will be missed in the traversal.)

  There remains a nontrivial risk of code assuming that a ListCell *
  pointer will remain good over an operation that could now move the list
  contents. To help catch such errors, list.c can be compiled with a new
  define symbol DEBUG_LIST_MEMORY_USAGE that forcibly moves list contents
  whenever that could possibly happen. This makes list operations
  significantly more expensive so it's not normally turned on (though it
  is on by default if USE_VALGRIND is on).

  There are two notable API differences from the previous code:

  * lnext() now requires the List's header pointer in addition to the
    current cell's address.

  * list_delete_cell() no longer requires a previous-cell argument.

  These changes are somewhat unfortunate, but on the other hand code using
  either function needs inspection to see if it is assuming anything it
  shouldn't, so it's not all bad.

  Programmers should be aware of these significant performance changes:

  * list_nth() and related functions are now O(1); so there's no major
    access-speed difference between a list and an array.

  * Inserting or deleting a list element now takes time proportional to
    the distance to the end of the list, due to moving the array elements.
    (However, it typically *doesn't* require palloc or pfree, so except in
    long lists it's probably still faster than before.) Notably, lcons()
    used to be about the same cost as lappend(), but that's no longer true
    if the list is long. Code that uses lcons() and list_delete_first() to
    maintain a stack might usefully be rewritten to push and pop at the
    end of the list rather than the beginning.

  * There are now list_insert_nth...() and list_delete_nth...() functions
    that add or remove a list cell identified by index. These have the
    data-movement penalty explained above, but there's no search penalty.

  * list_concat() and variants now copy the second list's data into
    storage belonging to the first list, so there is no longer any sharing
    of cells between the input lists. The second argument is now declared
    "const List *" to reflect that it isn't changed.

  This patch just does the minimum needed to get the new implementation in
  place and fix bugs exposed by the regression tests. As suggested by the
  foregoing, there's a fair amount of followup work remaining to do.

  Also, the ENABLE_LIST_COMPAT macros are finally removed in this commit.
  Code using those should have been gone a dozen years ago.

  Patch by me; thanks to David Rowley, Jesper Pedersen, and others for
  review.

  Discussion: https://postgr.es/m/11587.1550975080@sss.pgh.pa.us
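
  A hedged sketch of the new deletion idiom the message describes, using
  an integer list purely for illustration:

      #include "postgres.h"
      #include "nodes/pg_list.h"

      /* Remove every occurrence of 'bad' from 'items'. */
      static List *
      drop_matching(List *items, int bad)
      {
          ListCell   *lc;

          foreach(lc, items)
          {
              if (lfirst_int(lc) == bad)
                  /* fixes up the loop's index so no element is skipped */
                  items = foreach_delete_current(items, lc);
          }
          return items;
      }

  Note also the new lnext() form for hand-rolled traversal: lnext(items,
  lc) rather than lnext(lc).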
* Phase 2 pgindent run for v12.  (Tom Lane, 2019-05-22)

  Switch to 2.1 version of pg_bsd_indent. This formats multiline function
  declarations "correctly", that is with additional lines of parameter
  declarations indented to match where the first line's left parenthesis
  is.

  Discussion: https://postgr.es/m/CAEepm=0P3FeTXRcU5B2W3jv3PgRVZ-kGUXLGfd42FFhUROO3ug@mail.gmail.com
* Move remaining code from tqual.[ch] to heapam.h / heapam_visibility.c.  (Andres Freund, 2019-01-21)

  Given these routines are heap specific, and that there will be more
  generic visibility support via table AM, it makes sense to move the
  prototypes to heapam.h (routines like HeapTupleSatisfiesVacuum will not
  be exposed in a generic fashion, because they are too storage specific).

  Similarly, the code in tqual.c is specific to heap, so moving it into
  access/heap/ makes sense.

  Author: Andres Freund
  Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
* Move generic snapshot related code from tqual.h to snapmgr.h.  (Andres Freund, 2019-01-21)

  The code in tqual.c is largely heap specific. Due to the upcoming
  pluggable storage work, it therefore makes sense to move it into
  access/heap/ (as the file's header notes, the tqual name isn't very
  good).

  But the various statically allocated snapshot and snapshot
  initialization functions are now (see previous commit) generic and do
  not depend on functions declared in tqual.h anymore. Therefore move.
  Also move XidInMVCCSnapshot as that's useful for future AMs, and already
  used outside of tqual.c.

  Author: Andres Freund
  Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
* Update copyright for 2019  (Bruce Momjian, 2019-01-02)

  Backpatch-through: certain files through 9.4
* Update copyright for 2018  (Bruce Momjian, 2018-01-02)

  Backpatch-through: certain files through 9.3
* Replace remaining uses of pq_sendint with pq_sendint{8,16,32}.  (Andres Freund, 2017-10-11)

  pq_sendint() remains, so extension code doesn't unnecessarily break.

  Author: Andres Freund
  Discussion: https://postgr.es/m/20170914063418.sckdzgjfrsbekae4@alap3.anarazel.de
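
  A brief hedged illustration of the old and new calls (buf and the
  values are assumed to be set up by the caller):

      #include "libpq/pqformat.h"

      /* old style: byte width passed as a run-time argument */
      pq_sendint(buf, queue_pos, 4);

      /* new style: the width is part of the function name */
      pq_sendint32(buf, queue_pos);
      pq_sendint16(buf, flags);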
* Fix low-probability loss of NOTIFY messages due to XID wraparound.  (Tom Lane, 2017-10-11)

  Up to now async.c has used TransactionIdIsInProgress() to detect whether
  a notify message's source transaction is still running. However, that
  function has a quick-exit path that reports that XIDs before RecentXmin
  are no longer running. If a listening backend is doing nothing but
  listening, and not running any queries, there is nothing that will
  advance its value of RecentXmin. Once 2 billion transactions elapse, the
  RecentXmin check causes active transactions to be reported as not
  running. If they aren't committed yet according to CLOG, async.c decides
  they aborted and discards their messages. The timing for that is a bit
  tight but it can happen when multiple backends are sending notifies
  concurrently. The net symptom therefore is that a
  sufficiently-long-surviving listen-only backend starts to miss some
  fraction of NOTIFY traffic, but only under heavy load.

  The only function that updates RecentXmin is GetSnapshotData(). A
  brute-force fix would therefore be to take a snapshot before processing
  incoming notify messages. But that would add cycles, as well as
  contention for the ProcArrayLock. We can be smarter: having taken the
  snapshot, let's use that to check for running XIDs, and not call
  TransactionIdIsInProgress() at all. In this way we reduce the number of
  ProcArrayLock acquisitions from one per message to one per notify
  interrupt; that's the same under light load but should be a benefit
  under heavy load. Light testing says that this change is a wash
  performance-wise for normal loads.

  I looked around for other callers of TransactionIdIsInProgress() that
  might be at similar risk, and didn't find any; all of them are inside
  transactions that presumably have already taken a snapshot.

  Problem report and diagnosis by Marko Tiikkaja, patch by me. Back-patch
  to all supported branches, since it's been like this since 9.0.

  Discussion: https://postgr.es/m/20170926182935.14128.65278@wrigleys.postgresql.org
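
  A hedged sketch of the snapshot-based test; qe->xid stands in for a
  queue entry's source XID, and error handling is elided:

      Snapshot    snapshot = RegisterSnapshot(GetLatestSnapshot());

      if (XidInMVCCSnapshot(qe->xid, snapshot))
      {
          /* source transaction still running: stop, retry entry later */
      }
      else if (TransactionIdDidCommit(qe->xid))
      {
          /* committed: deliver the notification */
      }
      else
      {
          /* aborted: discard the message */
      }
      ...
      UnregisterSnapshot(snapshot);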
* Phase 3 of pgindent updates.  (Tom Lane, 2017-06-21)

  Don't move parenthesized lines to the left, even if that means they flow
  past the right margin.

  By default, BSD indent lines up statement continuation lines that are
  within parentheses so that they start just to the right of the preceding
  left parenthesis. However, traditionally, if that resulted in the
  continuation line extending to the right of the desired right margin,
  then indent would push it left just far enough to not overrun the
  margin, if it could do so without making the continuation line start to
  the left of the current statement indent. That makes for a weird mix of
  indentations unless one has been completely rigid about never violating
  the 80-column limit.

  This behavior has been pretty universally panned by Postgres developers.
  Hence, disable it with indent's new -lpl switch, so that parenthesized
  lines are always lined up with the preceding left paren.

  This patch is much less interesting than the first round of indent
  changes, but also bulkier, so I thought it best to separate the effects.

  Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
  Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
* Phase 2 of pgindent updates.  (Tom Lane, 2017-06-21)

  Change pg_bsd_indent to follow upstream rules for placement of comments
  to the right of code, and remove pgindent hack that caused comments
  following #endif to not obey the general rule.

  Commit e3860ffa4dd0dad0dd9eea4be9cc1412373a8c89 wasn't actually using
  the published version of pg_bsd_indent, but a hacked-up version that
  tried to minimize the amount of movement of comments to the right of
  code. The situation of interest is where such a comment has to be moved
  to the right of its default placement at column 33 because there's code
  there. BSD indent has always moved right in units of tab stops in such
  cases --- but in the previous incarnation, indent was working in 8-space
  tab stops, while now it knows we use 4-space tabs. So the net result is
  that in about half the cases, such comments are placed one tab stop left
  of before. This is better all around: it leaves more room on the line
  for comment text, and it means that in such cases the comment uniformly
  starts at the next 4-space tab stop after the code, rather than
  sometimes one and sometimes two tabs after.

  Also, ensure that comments following #endif are indented the same as
  comments following other preprocessor commands such as #else. That
  inconsistency turns out to have been self-inflicted damage from a
  poorly-thought-through post-indent "fixup" in pgindent.

  This patch is much less interesting than the first round of indent
  changes, but also bulkier, so I thought it best to separate the effects.

  Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
  Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
* Improve castNode notation by introducing list-extraction-specific variants.  (Tom Lane, 2017-04-10)

  This extends the castNode() notation introduced by commit 5bcab1114 to
  provide, in one step, extraction of a list cell's pointer and coercion
  to a concrete node type. For example, "lfirst_node(Foo, lc)" is the same
  as "castNode(Foo, lfirst(lc))". Almost half of the uses of castNode that
  have appeared so far include a list extraction call, so this is pretty
  widely useful, and it saves a few more keystrokes compared to the old
  way.

  As with the previous patch, back-patch the addition of these macros to
  pg_list.h, so that the notation will be available when back-patching.

  Patch by me, after an idea of Andrew Gierth's.

  Discussion: https://postgr.es/m/14197.1491841216@sss.pgh.pa.us
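
  In use, the variants read like this (RangeTblEntry chosen only for
  illustration):

      ListCell   *lc;

      foreach(lc, rtable)
      {
          /* one step: extract the cell's pointer and type-check it ... */
          RangeTblEntry *rte = lfirst_node(RangeTblEntry, lc);

          /* ... equivalent to: castNode(RangeTblEntry, lfirst(lc)) */
      }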
* Use the new castNode() macro in a number of places.  (Andres Freund, 2017-01-26)

  This is far from a pervasive conversion, but it's a good starting point.

  Author: Peter Eisentraut, with some minor changes by me
  Reviewed-By: Tom Lane
  Discussion: https://postgr.es/m/c5d387d9-3440-f5e0-f9d4-71d53b9fbe52@2ndquadrant.com
* Update copyright via script for 2017  (Bruce Momjian, 2017-01-03)
* Fix several mistakes around parallel workers and client_encoding.  (Robert Haas, 2016-06-30)

  Previously, workers sent data to the leader using the client encoding.
  That mostly worked, but the leader then converted the data back to the
  server encoding. Since not all encoding conversions are reversible, that
  could provoke failures. Fix by using the database encoding for all
  communication between worker and leader.

  Also, while temporary changes to GUC settings, as from the SET clause of
  a function, are in general OK for parallel query, changing
  client_encoding this way inside of a parallel worker is not OK.
  Previously, that would have confused the leader; with these changes, it
  would not confuse the leader, but it wouldn't do anything either. So
  refuse such changes in parallel workers.

  Also, the previous code naively assumed that when it received a
  NotifyResponse from the worker, it could pass that directly back to the
  user. But now that worker-to-leader communication always uses the
  database encoding, that's clearly no longer correct - though, actually,
  the old way was always broken for V2 clients. So disassemble and
  reconstitute the message instead.

  Issues reported by Peter Eisentraut. Patch by me, reviewed by Peter
  Eisentraut.
* Make all built-in lwlock tranche IDs fixed.  (Robert Haas, 2016-02-02)

  This makes the values more stable, which seems like a good thing for
  anybody who needs to look at them.

  Alexander Korotkov and Amit Kapila
* Update copyright for 2016  (Bruce Momjian, 2016-01-02)

  Backpatch certain files through 9.1
* Move each SLRU's lwlocks to a separate tranche.  (Robert Haas, 2015-11-12)

  This makes it significantly easier to identify these lwlocks in
  LWLOCK_STATS or Trace_lwlocks output. It's also arguably better from a
  modularity standpoint, since lwlock.c no longer needs to know anything
  about the LWLock needs of the higher-level SLRU facility.

  Ildus Kurbangaliev, reviewed by Álvaro Herrera and by me.
* Fix a couple of bugs in recent parallelism-related commits.  (Robert Haas, 2015-10-22)

  Commit 816e336f12ecabdc834d4cc31bcf966b2dd323dc added the wrong error
  check to async.c; sending notifications is restricted to the leader, not
  altogether unsafe.

  Commit 3bd909b220930f21d6e15833a17947be749e7fde added ExecShutdownNode
  to traverse the planstate tree and call shutdown functions, but made a
  Gather node, the only node that actually has such a function, abort the
  tree traversal, which is wrong.
* Mark more functions parallel-restricted or parallel-unsafe.  (Robert Haas, 2015-10-16)

  Commit 7aea8e4f2daa4b39ca9d1309a0c4aadb0f7ed81b was overoptimistic about
  the degree of safety associated with running various functions in
  parallel mode. Functions that take a table name or OID as an argument
  are at least parallel-restricted, because the table might be temporary,
  and we currently don't allow parallel workers to touch temporary tables.
  Functions that take a query as an argument are outright unsafe, because
  the query could be anything, including a parallel-unsafe query.

  Also, the queue of pending notifications is backend-private, so adding
  to it from a worker doesn't behave correctly. We could fix this by
  transferring the worker's queue of pending notifications to the master
  during worker cleanup, but that seems like more trouble than it's worth
  for now. In addition to adjusting the pg_proc.h markings, also add an
  explicit check for this in async.c.
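
  The explicit check is presumably a guard of roughly this shape at the
  top of the NOTIFY-queuing path (a hedged sketch, not a quote of the
  patch):

      /* pending notifications are backend-local, so a worker's
       * additions would be silently lost; reject them outright */
      if (IsParallelWorker())
          elog(ERROR, "cannot send notifications from a parallel worker");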
* Fix errors in commit a04bb65f70dafdf462e0478ad19e6de56df89bfc.  (Tom Lane, 2015-09-30)

  Not a lot of commentary needed here really.
* Improve LISTEN startup time when there are many unread notifications.  (Tom Lane, 2015-09-30)

  If some existing listener is far behind, incoming new listener sessions
  would start from that session's read pointer and then need to advance
  over many already-committed notification messages, which they have no
  interest in. This was expensive in itself and also thrashed the
  pg_notify SLRU buffers a lot more than necessary.

  We can improve matters considerably in typical scenarios, without much
  added cost, by starting from the furthest-ahead read pointer, not the
  furthest-behind one. We do have to consider only sessions in our own
  database when doing this, which requires an extra field in the data
  structure, but that's a pretty small cost.

  Back-patch to 9.0 where the current LISTEN/NOTIFY logic was introduced.

  Matt Newell, slightly adjusted by me
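
  A hedged sketch of the registration logic, using async.c-style accessor
  macros quoted from memory (the QUEUE_POS_MAX helper in particular is
  assumed):

      QueuePosition max = QUEUE_TAIL;

      /* start at the furthest-ahead reader in our own database */
      for (i = 1; i <= MaxBackends; i++)
      {
          if (QUEUE_BACKEND_PID(i) != InvalidPid &&
              QUEUE_BACKEND_DBOID(i) == MyDatabaseId)
              max = QUEUE_POS_MAX(max, QUEUE_BACKEND_POS(i));
      }
      QUEUE_BACKEND_POS(MyBackendId) = max;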
* Small improvements in comments in async.c.  (Tom Lane, 2015-09-29)

  We seem to have lost a line somewhere along the way in the comment block
  that discusses async.c's locks, because it suddenly refers to "both
  locks" without previously having mentioned more than one. Add a sentence
  to make that read more sanely. Also, refer to the "pos of the slowest
  backend" not the "tail of the slowest backend", since we have no
  per-backend value called "tail".
* Add new function pg_notification_queue_usage.  (Robert Haas, 2015-07-17)

  This tells you what fraction of NOTIFY's queue is currently filled.

  Brendan Jurd, reviewed by Merlin Moncure and Gurjeet Singh. A few
  further tweaks by me.
* Use FLEXIBLE_ARRAY_MEMBER in a number of other places.  (Tom Lane, 2015-02-21)

  I think we're about done with this...
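
  For context, the idiom looks like this (NotifyList is a hypothetical
  struct, shown only to illustrate the pattern):

      typedef struct NotifyList
      {
          int     nitems;
          char    items[FLEXIBLE_ARRAY_MEMBER];   /* really nitems bytes */
      } NotifyList;

      /* allocate with the true size, not sizeof(NotifyList): */
      NotifyList *nl = palloc(offsetof(NotifyList, items) + nitems);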
* Introduce and use infrastructure for interrupt processing during client reads.  (Andres Freund, 2015-02-03)

  Up to now large swathes of backend code ran inside signal handlers while
  reading commands from the client, to allow for speedy reaction to
  asynchronous events. Most prominently shared invalidation and NOTIFY
  handling. That means that complex code like the starting/stopping of
  transactions is run in signal handlers... The required code was fragile
  and verbose, and is likely to contain bugs.

  That approach also severely limited what could be done while
  communicating with the client. As the read might be from within openssl
  it wasn't safely possible to trigger an error, e.g. to cancel a backend
  in idle-in-transaction state. We did that in some cases, namely fatal
  errors, nonetheless.

  Now that FE/BE communication in the backend employs non-blocking sockets
  and latches to block, we can quite simply interrupt reads from signal
  handlers by setting the latch. That allows us to signal an interrupted
  read, which is supposed to be retried after returning from within the
  ssl library.

  As signal handlers now only need to set the latch to guarantee timely
  interrupt processing, remove a fair amount of complicated & fragile code
  from async.c and sinval.c. We could now actually start to process some
  kinds of interrupts, like sinval ones, more often than before, but that
  seems better done separately.

  This work will hopefully allow cases like being blocked by sending data
  and interrupting idle transactions to be handled without too much
  effort. In addition to allowing getting rid of ImmediateInterruptOK,
  that is.

  Author: Andres Freund
  Reviewed-By: Heikki Linnakangas
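
  After this change, a notify signal handler can be reduced to roughly the
  following hedged sketch (the pending-flag name follows async.c
  convention but is quoted from memory):

      static void
      HandleNotifyInterrupt(SIGNAL_ARGS)
      {
          int         save_errno = errno;

          /* just record the event; mainline code does the real work */
          notifyInterruptPending = true;

          /* make any latch wait (e.g. a blocked client read) return */
          SetLatch(MyLatch);

          errno = save_errno;
      }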
* Fix volatile-safety issue in asyncQueueReadAllNotifications().  (Tom Lane, 2015-01-26)

  The "pos" variable is modified within PG_TRY and then referenced within
  PG_CATCH, so for strict POSIX conformance it must be marked volatile.
  Superficially the code looked safe because pos's address was taken,
  which was sufficient to force it into memory ... but it's not sufficient
  to ensure that the compiler applies updates exactly where the program
  text says to. The volatility marking has to extend into a couple of
  subroutines too, but I think that's probably a good thing because the
  risk of out-of-order updates is mostly in those subroutines not
  asyncQueueReadAllNotifications() itself. In principle the compiler could
  have re-ordered operations such that an error could be thrown while
  "pos" had an incorrect value.

  It's unclear how real the risk is here, but for safety back-patch to all
  active branches.
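
  The general pattern being enforced, as a hedged sketch (the PG_CATCH
  body here is illustrative):

      /* volatile: written inside PG_TRY, read inside PG_CATCH */
      volatile QueuePosition pos;

      PG_TRY();
      {
          /* ... advance pos while processing entries;
           * may ereport(ERROR) ... */
      }
      PG_CATCH();
      {
          /* without volatile, the compiler need not have stored the
           * latest value of pos before the longjmp landed here */
          PG_RE_THROW();
      }
      PG_END_TRY();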
* Update copyright for 2015  (Bruce Momjian, 2015-01-06)

  Backpatch certain files through 9.0
* pgindent run for 9.4  (Bruce Momjian, 2014-05-06)

  This includes removing tabs after periods in C comments, which was
  applied to back branches, so this change should not affect backpatching.
* Avoid transaction-commit race condition while receiving a NOTIFY message.  (Tom Lane, 2014-03-13)

  Use TransactionIdIsInProgress, then TransactionIdDidCommit, to
  distinguish whether a NOTIFY message's originating transaction is in
  progress, committed, or aborted. The previous coding could accept a
  message from a transaction that was still in-progress according to the
  PGPROC array; if the client were fast enough at starting a new
  transaction, it might fail to see table rows added/updated by the
  message-sending transaction. Which of course would usually be the point
  of receiving the message. We noted this type of race condition long ago
  in tqual.c, but async.c overlooked it.

  The race condition probably cannot occur unless there are multiple
  NOTIFY senders in action, since an individual backend doesn't send
  NOTIFY signals until well after it's done committing. But if two senders
  commit in close succession, it's certainly possible that we could see
  the second sender's message within the race condition window while
  responding to the signal from the first one.

  Per bug #9557 from Marko Tiikkaja. This patch is slightly more invasive
  than what he proposed, since it removes the now-redundant
  TransactionIdDidAbort call. Back-patch to 9.0, where the current NOTIFY
  implementation was introduced.
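
  The decision order the fix mandates (the same rule tqual.c has long
  followed), sketched with an illustrative xid variable:

      if (TransactionIdIsInProgress(xid))
      {
          /* still in progress: not safe to deliver yet */
      }
      else if (TransactionIdDidCommit(xid))
      {
          /* definitely committed: deliver the notification */
      }
      else
      {
          /* aborted, or crashed before committing: discard it */
      }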
* Update copyright for 2014  (Bruce Momjian, 2014-01-07)

  Update all files in head, and files COPYRIGHT and legal.sgml in all back
  branches.
* Allow on-detach callbacks for dynamic shared memory segments.  (Robert Haas, 2013-12-18)

  Just as backends must clean up their shared memory state (releasing
  lwlocks, buffer pins, etc.) before exiting, they must also perform any
  similar cleanups related to dynamic shared memory segments they have
  mapped before unmapping those segments. So add a mechanism to ensure
  that.

  Existing on_shmem_exit hooks include both "user level" cleanup such as
  transaction abort and removal of leftover temporary relations and also
  "low level" cleanup that forcibly released leftover shared memory
  resources. On-detach callbacks should run after the first group but
  before the second group, so create a new before_shmem_exit function for
  registering the early callbacks and keep on_shmem_exit for the regular
  callbacks. (An earlier draft of this patch added an additional argument
  to on_shmem_exit, but that had a much larger footprint and probably a
  substantially higher risk of breaking third party code for no real
  gain.)

  Patch by me, reviewed by KaiGai Kohei and Andres Freund.
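
  A hedged usage sketch, given an already-mapped segment seg; my_detach_cb
  is a hypothetical callback:

      #include "storage/dsm.h"
      #include "storage/ipc.h"

      static void
      my_detach_cb(dsm_segment *seg, Datum arg)
      {
          /* release resources tied to this mapping before it goes away */
      }

      /* runs automatically when seg is unmapped or at backend exit */
      on_dsm_detach(seg, my_detach_cb, (Datum) 0);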
* Add HOLD/RESUME_INTERRUPTS in HandleCatchupInterrupt/HandleNotifyInterrupt.  (Tom Lane, 2013-12-13)

  This prevents a possible longjmp out of the signal handler if a timeout
  or SIGINT occurs while something within the handler has transiently set
  ImmediateInterruptOK. For safety we must hold off the timeout or cancel
  error until we're back in mainline, or at least till we reach the end of
  the signal handler when ImmediateInterruptOK was true at entry. This
  syncs these functions with the logic now present in handle_sig_alarm.

  AFAICT there is no live bug here in 9.0 and up, because I don't think we
  currently can wait for any heavyweight lock inside these functions, and
  there is no other code (except read-from-client) that will turn on
  ImmediateInterruptOK. However, that was not true pre-9.0: in older
  branches ProcessIncomingNotify might block trying to lock pg_listener,
  and then a SIGINT could lead to undesirable control flow. It might be
  all right anyway given the relatively narrow code ranges in which NOTIFY
  interrupts are enabled, but for safety's sake I'm back-patching this.
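
  A hedged sketch of the guard, as the pre-latch-era handler would carry
  it (the body is elided):

      static void
      HandleNotifyInterrupt(SIGNAL_ARGS)
      {
          /* prevent cancel/die from longjmp'ing out of the handler */
          HOLD_INTERRUPTS();

          /* ... existing handler body, which may transiently set
           * ImmediateInterruptOK ... */

          RESUME_INTERRUPTS();
      }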