path: root/src

...
* Fix RBM_ZERO_AND_LOCK mode to not acquire lock on local buffers. (Heikki Linnakangas, 2015-05-13)

  Commit 81c45081 introduced a new RBM_ZERO_AND_LOCK mode to ReadBuffer,
  which takes a lock on the buffer before zeroing it. However, you cannot
  take a lock on a local buffer, and you got a segfault instead. The
  version of that patch committed to master included a check for
  !isLocalBuf, and therefore didn't crash, but oddly I missed that in the
  back-patched versions. This patch adds that check to the back-branches
  too.

  RBM_ZERO_AND_LOCK mode is only used during WAL replay, and in hash
  indexes. WAL replay only deals with shared buffers, so the only way to
  trigger the bug is with a temporary hash index.

  Reported by Artem Ignatyev, analysis by Tom Lane.

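  As a rough illustration, here is a minimal C sketch of the guard's
  shape, modeled on bufmgr.c's buffer-zeroing path. The types and locking
  declarations below are simplified stand-ins for PostgreSQL internals,
  not the committed code.

      /* Schematic stand-ins for bufmgr.c internals. */
      typedef int Buffer;
      typedef enum
      {
          RBM_NORMAL,
          RBM_ZERO_AND_LOCK,
          RBM_ZERO_AND_CLEANUP_LOCK
      } ReadBufferMode;

      #define BUFFER_LOCK_EXCLUSIVE 2

      extern void LockBuffer(Buffer buf, int mode);   /* stand-in */
      extern void LockBufferForCleanup(Buffer buf);   /* stand-in */

      static void
      lock_zeroed_buffer(ReadBufferMode mode, int isLocalBuf, Buffer buf)
      {
          if (mode == RBM_ZERO_AND_LOCK || mode == RBM_ZERO_AND_CLEANUP_LOCK)
          {
              /* Local (temp-relation) buffers have no content lock to
               * take; skipping them is the essence of the fix. */
              if (!isLocalBuf)
              {
                  if (mode == RBM_ZERO_AND_LOCK)
                      LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);
                  else
                      LockBufferForCleanup(buf);
              }
          }
      }
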
* Fix incorrect checking of deferred exclusion constraint after a HOT update. (Tom Lane, 2015-05-11)

  If a row that potentially violates a deferred exclusion constraint is
  HOT-updated later in the same transaction, the exclusion constraint
  would be reported as violated when the check finally occurs, even if the
  row(s) the new row originally conflicted with have since been removed.
  This happened because the wrong TID was passed to
  check_exclusion_constraint(), causing the live HOT-updated row to be
  seen as a conflicting row rather than recognized as the row-under-test.

  Per bug #13148 from Evan Martin. It's been broken since exclusion
  constraints were invented, so back-patch to all supported branches.

* Increase threshold for multixact member emergency autovac to 50%. (Robert Haas, 2015-05-11)

  Analysis by Noah Misch shows that the 25% threshold set by commit
  53bb309d2d5a9432d2602c93ed18e58bd2924e15 is lower than any other,
  similar autovac threshold. While we don't know exactly what value will
  be optimal for all users, it is better to err a little on the high side
  than on the low side. A higher value increases the risk that users might
  exhaust the available space and start seeing errors before autovacuum
  can clean things up sufficiently, but a user who hits that problem can
  compensate for it by reducing autovacuum_multixact_freeze_max_age to a
  value dependent on their average multixact size. On the flip side, if
  the emergency cap imposed by that patch kicks in too early, the user
  will experience excessive wraparound scanning and will be unable to
  mitigate that problem by configuration.

  The new value will hopefully reduce the risk of such bad experiences
  while still providing enough headroom to avoid multixact member
  exhaustion for most users.

  Along the way, adjust the documentation to reflect the effects of commit
  04e6d3b877e060d8445eb653b7ea26b1ee5cec6b, which taught autovacuum to run
  for multixact wraparound even when autovacuum is configured off.

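  To make the configuration advice concrete, here is a small, runnable C
  example of the arithmetic involved; the 50-member average is an assumed
  workload figure, not something taken from the commit.

      #include <stdio.h>
      #include <stdint.h>

      int main(void)
      {
          /* The member space holds 2^32 members; the emergency threshold
           * discussed above is 50% of that. */
          uint64_t member_space = UINT64_C(1) << 32;
          uint64_t threshold = member_space / 2;
          double avg_members = 50.0;  /* assumed average multixact size */

          /* Keep the freeze horizon small enough that the multixacts it
           * retains stay under the member-space threshold. */
          printf("keep autovacuum_multixact_freeze_max_age under %.0f\n",
                 (double) threshold / avg_members);
          return 0;
      }
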
* Even when autovacuum=off, force it for members as we do in other cases. (Robert Haas, 2015-05-11)

  Thomas Munro, with some adjustments by me.

* Advance the stop point for multixact offset creation only at checkpoint. (Robert Haas, 2015-05-10)

  Commit b69bf30b9bfacafc733a9ba77c9587cf54d06c0c advanced the stop point
  at vacuum time, but this has subsequently been shown to be unsafe as a
  result of analysis by myself and Thomas Munro and testing by Thomas
  Munro. The crux of the problem is that the SLRU deletion logic may get
  confused about what to remove if, at exactly the right time during the
  checkpoint process, the head of the SLRU crosses what used to be the
  tail.

  This patch, by me, fixes the problem by advancing the stop point only
  following a checkpoint. This has the additional advantage of making the
  removal logic work during recovery more like the way it works during
  normal running, which is probably good.

  At least one of the calls to DetermineSafeOldestOffset which this patch
  removes was already dead, because MultiXactAdvanceOldest is called only
  during recovery and DetermineSafeOldestOffset was set up to do nothing
  during recovery. That, however, is inconsistent with the principle that
  recovery and normal running should work similarly, and was confusing to
  boot.

  Along the way, fix some comments that previous patches in this area
  neglected to update. It's not clear to me whether there's any concrete
  basis for the decision to use only half of the multixact ID space, but
  it's neither necessary nor sufficient to prevent multixact member
  wraparound, so the comments should not say otherwise.

* Fix DetermineSafeOldestOffset for the case where there are no mxacts. (Robert Haas, 2015-05-10)

  Commit b69bf30b9bfacafc733a9ba77c9587cf54d06c0c failed to take into
  account the possibility that there might be no multixacts in existence
  at all.

  Report by Thomas Munro; patch by me.

* Teach autovacuum about multixact member wraparound. (Robert Haas, 2015-05-08)

  The logic introduced in commit b69bf30b9bfacafc733a9ba77c9587cf54d06c0c
  and repaired in commits 669c7d20e6374850593cb430d332e11a3992bbcf and
  7be47c56af3d3013955c91c2877c08f2a0e3e6a2 helps to ensure that we don't
  overwrite old multixact member information while it is still needed, but
  a user who creates many large multixacts can still exhaust the member
  space (and thus start getting errors) while autovacuum stands idly by.

  To fix this, progressively ramp down the effective value (but not the
  actual contents) of autovacuum_multixact_freeze_max_age as member space
  utilization increases. This makes autovacuum more aggressive and also
  reduces the threshold for a manual VACUUM to perform a full-table scan.

  This patch leaves unsolved the problem of ensuring that emergency
  autovacuums are triggered even when autovacuum=off. We'll need to fix
  that via a separate patch.

  Thomas Munro and Robert Haas

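  The following is a minimal sketch of what such a ramp-down can look
  like; the linear curve and the constants are illustrative assumptions,
  and the real logic lives in multixact.c.

      #include <stdint.h>

      #define MEMBER_SPACE ((uint64_t) 1 << 32)
      #define SAFE_FRACTION 0.5   /* assumed start of the ramp */

      /* Shrink the effective freeze age toward zero as member-space
       * utilization climbs past the safe fraction. */
      static int
      effective_freeze_max_age(uint64_t members_in_use, int configured_age)
      {
          double used = (double) members_in_use / (double) MEMBER_SPACE;

          if (used <= SAFE_FRACTION)
              return configured_age;  /* plenty of room: no change */

          return (int) (configured_age * (1.0 - used) / (1.0 - SAFE_FRACTION));
      }
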
* Fix incorrect math in DetermineSafeOldestOffset. (Robert Haas, 2015-05-07)

  The old formula didn't have enough parentheses, so it would do the wrong
  thing, and it used / rather than % to find a remainder. The effect of
  these oversights is that the stop point chosen by the logic introduced
  in commit b69bf30b9bfacafc733a9ba77c9587cf54d06c0c might be rather
  meaningless.

  Thomas Munro, reviewed by Kevin Grittner, with a whitespace tweak by me.

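  Both oversights are easy to reproduce outside PostgreSQL. The toy
  program below (illustrative values only, not the actual formula) rounds
  a position down to a segment boundary: % and * have equal precedence and
  associate left to right, so the inner parentheses matter, and / in place
  of % computes a segment count rather than a remainder.

      #include <stdio.h>

      #define PER_PAGE 8        /* toy "members per page" */
      #define PAGES_PER_SEG 4   /* toy "pages per segment": 32 per segment */

      int main(void)
      {
          int x = 70;

          int right  = x - (x % (PER_PAGE * PAGES_PER_SEG)); /* 70 - 6 = 64 */
          int wrong1 = x - (x % PER_PAGE * PAGES_PER_SEG);   /* 70 - 24 = 46 */
          int wrong2 = x - (x / (PER_PAGE * PAGES_PER_SEG)); /* 70 - 2 = 68 */

          printf("%d %d %d\n", right, wrong1, wrong2);
          return 0;
      }
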
* Properly send SCM status updates when shutting down service on Windows (Magnus Hagander, 2015-05-07)

  The Service Control Manager should be notified regularly during a
  shutdown that takes a long time. Previously we would increase the
  counter, but forgot to actually send the notification to the system. The
  loop counter was also incorrectly initialized in the event that the
  startup of the system took long enough for it to increase, which could
  cause the shutdown process not to wait as long as expected.

  Krystian Bigaj, reviewed by Michael Paquier

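  A minimal Windows C sketch of the reporting pattern, assuming hStatus
  and status were set up by the service's main routine; this shows the
  documented SCM interface, not PostgreSQL's actual service code.

      #include <windows.h>

      static SERVICE_STATUS_HANDLE hStatus;  /* assumed already registered */
      static SERVICE_STATUS status;          /* assumed already populated */

      static void
      report_stop_pending(void)
      {
          status.dwCurrentState = SERVICE_STOP_PENDING;
          status.dwWaitHint = 10000;  /* expect progress within 10 seconds */
          status.dwCheckPoint++;      /* incrementing alone is not enough... */
          SetServiceStatus(hStatus, &status); /* ...it must also be reported */
      }
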
* Fix some problems with patch to fsync the data directory. (Robert Haas, 2015-05-05)

  pg_win32_is_junction() was a typo for pgwin32_is_junction(). open() was
  used not only in a two-argument form, which breaks on Windows, but also
  where BasicOpenFile() should have been used.

  Per reports from Andrew Dunstan and David Rowley.

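  A small sketch of the open() pitfall, assuming a POSIX-style wrapper;
  BasicOpenFile() is the real backend wrapper, which additionally retries
  when the server runs short of file descriptors.

      #include <fcntl.h>

      static int
      open_for_read(const char *path)
      {
          /* not open(path, O_RDONLY): the two-argument form breaks the
           * Windows port, so always pass the mode argument. */
          return open(path, O_RDONLY, 0);
      }
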
* Recursively fsync() the data directory after a crash. (Robert Haas, 2015-05-04)

  Otherwise, if there's another crash, some writes from after the first
  crash might make it to disk while writes from before the crash fail to
  make it to disk. This could lead to data corruption.

  Back-patch to all supported versions.

  Abhijit Menon-Sen, reviewed by Andres Freund and slightly revised by me.

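  A minimal POSIX sketch of such a recursive walk is below. The real
  implementation differs (it handles symlinks, Windows junctions, and
  error-reporting policy), so treat this as a simplified illustration.

      #include <dirent.h>
      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <sys/stat.h>
      #include <unistd.h>

      static void
      fsync_recursively(const char *path)
      {
          struct stat st;

          if (lstat(path, &st) < 0)
              return;

          if (S_ISDIR(st.st_mode))
          {
              DIR *dir = opendir(path);
              struct dirent *de;

              if (dir == NULL)
                  return;
              while ((de = readdir(dir)) != NULL)
              {
                  char sub[4096];

                  if (strcmp(de->d_name, ".") == 0 ||
                      strcmp(de->d_name, "..") == 0)
                      continue;
                  snprintf(sub, sizeof(sub), "%s/%s", path, de->d_name);
                  fsync_recursively(sub);
              }
              closedir(dir);
          }

          if (S_ISREG(st.st_mode) || S_ISDIR(st.st_mode))
          {
              /* fsync directories too, so renames and file creations
               * that preceded the crash are made durable as well. */
              int fd = open(path, O_RDONLY);

              if (fd >= 0)
              {
                  (void) fsync(fd);
                  close(fd);
              }
          }
      }
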
* Fix two small bugs in json's populate_record_worker (Andrew Dunstan, 2015-05-04)

  The first bug is not releasing a tupdesc when doing an early return out
  of the function. The second bug is a logic error in choosing when to do
  an early return if given an empty jsonb object.

  Bug reports from Pavel Stehule and Tom Lane respectively.

  Backpatch to 9.4 where these were introduced.

* Fix overlooked relcache invalidation in ALTER TABLE ... ALTER CONSTRAINT. (Tom Lane, 2015-05-03)

  When altering the deferredness state of a foreign key constraint, we
  correctly updated the catalogs and then invalidated the relcache state
  for the target relation ... but that's not the only relation with
  relevant triggers. Must invalidate the other table as well, or the state
  change fails to take effect promptly for operations triggered on the
  other table. Per bug #13224 from Christian Ullrich.

  In passing, reorganize regression test case for this feature so that it
  isn't randomly injected into the middle of an unrelated test sequence.

  Oversight in commit f177cbfe676dc2c7ca2b206c54d6bf819feeea8b. Back-patch
  to 9.4 where the faulty code was added.

* Mark views created from tables as replication identity 'nothing' (Bruce Momjian, 2015-05-01)

  pg_dump turns tables into views using a method that was not setting
  pg_class.relreplident properly.

  Patch by Marko Tiikkaja

  Backpatch through 9.4

* Fix pg_upgrade's multixact handling (again) (Alvaro Herrera, 2015-04-30)

  We need to create the pg_multixact/offsets file deleted by pg_upgrade
  much earlier than we originally were: it was in TrimMultiXact(), which
  runs after we exit recovery, but it actually needs to run earlier than
  the first call to SetMultiXactIdLimit (before recovery), because that
  routine already wants to read the first offset segment.

  Per pg_upgrade trouble report from Jeff Janes.

  While at it, silence a compiler warning about a pointless assert that an
  unsigned variable was being tested non-negative. This was a signed
  constant in Thomas Munro's patch which I changed to unsigned before
  commit. Pointed out by Andres Freund.

* Code review for multixact bugfix (Alvaro Herrera, 2015-04-28)

  Reword messages, rename a confusingly named function.

  Per Robert Haas.

* Protect against multixact members wraparound (Alvaro Herrera, 2015-04-28)

  Multixact member files are subject to early wraparound overflow and
  removal: if the average multixact size is above a certain threshold (see
  note below) the protections against offset overflow are not enough:
  during multixact truncation at checkpoint time, some
  pg_multixact/members files would be removed because the server considers
  them to be old and not needed anymore. This leads to loss of files that
  are critical to interpret existing tuples' Xmax values.

  To protect against this, since we don't have enough info in pg_control
  and we can't modify it in old branches, we maintain shared memory state
  about the oldest value that we need to keep; we use this during new
  multixact creation to abort if an old still-needed file would get
  overwritten. This value is kept up to date by checkpoints, which makes
  it not completely accurate but should be good enough. We start emitting
  warnings sometime earlier, so that the eventual multixact shutdown
  doesn't take DBAs completely by surprise (more precisely: once 20 member
  SLRU segments remain before shutdown).

  On troublesome average multixact size: the threshold size depends on the
  multixact freeze parameters. The oldest age is related to the greater of
  multixact_freeze_table_age and multixact_freeze_min_age: anything older
  than that should be removed promptly by autovacuum. If autovacuum is
  keeping up with multixact freezing, the troublesome multixact average
  size is (2^32-1) / Max(freeze table age, freeze min age), or around 28
  members per multixact. Having an average multixact size larger than that
  will eventually cause new multixact data to overwrite the data area for
  older multixacts. (If autovacuum is not able to keep up, or there are
  errors in vacuuming, the actual maximum is multixact_freeze_max_age
  instead, at which point multixact generation is stopped completely. The
  default value for this limit is 400 million, which means that the
  multixact size that would cause trouble is about 10 members.)

  Initial bug report by Timothy Garnett, bug #12990.

  Backpatch to 9.3, where the problem was introduced.

  Authors: Álvaro Herrera, Thomas Munro
  Reviews: Thomas Munro, Amit Kapila, Robert Haas, Kevin Grittner

* Use an fd opened for read/write when syncing slots during startup. (Andres Freund, 2015-04-28)

  Some operating systems, including the reporter's Windows, return EBADFD
  or similar when fsync() is invoked on an O_RDONLY file descriptor.
  Unfortunately RestoreSlotFromDisk() does exactly that, which causes
  failures after restarts in at least some scenarios.

  If you hit the bug the error message will be something like:

      ERROR: could not fsync file "pg_replslot/$name/state": Bad file descriptor

  Simply use O_RDWR instead of O_RDONLY when opening the relevant file
  descriptor to fix the bug. Unfortunately I have no way of verifying the
  fix, but we've seen similar problems in the past.

  This bug goes back to 9.4 where slots were introduced. Backpatch
  accordingly.

  Reported-By: Patrice Drolet
  Bug: #13143
  Discussion: 20150424101006.2556.60897@wrigleys.postgresql.org

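  The portable rule is easy to demonstrate in a few lines of standalone C;
  "state" is a placeholder path, and the point is simply that a file you
  intend to fsync() should be opened read/write.

      #include <fcntl.h>
      #include <stdio.h>
      #include <unistd.h>

      int main(void)
      {
          int fd = open("state", O_RDWR);  /* not O_RDONLY */

          if (fd < 0)
          {
              perror("open");
              return 1;
          }
          if (fsync(fd) != 0)
              perror("fsync"); /* O_RDONLY can yield EBADF-style errors */
          close(fd);
          return 0;
      }
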
* Prevent improper reordering of antijoins vs. outer joins. (Tom Lane, 2015-04-25)

  An outer join appearing within the RHS of an antijoin can't commute with
  the antijoin, but somehow I missed teaching make_outerjoininfo() about
  that. In Teodor Sigaev's recent trouble report, this manifests as a
  "could not find RelOptInfo for given relids" error within eqjoinsel();
  but I think silently wrong query results are possible too, if the
  planner misorders the joins and doesn't happen to trigger any internal
  consistency checks. It's broken as far back as we had antijoins, so
  back-patch to all supported branches.

* Build every ECPG library with -DFRONTEND. (Noah Misch, 2015-04-24)

  Each of the libraries incorporates src/port files, which often check
  FRONTEND. Build systems disagreed on whether to build libpgtypes this
  way. Only libecpg incorporates files that rely on it today. Back-patch
  to 9.0 (all supported versions) to forestall surprises.

* Fix obsolete comment in set_rel_size(). (Tom Lane, 2015-04-24)

  The cross-reference to set_append_rel_pathlist() was obsoleted by commit
  e2fa76d80ba571d4de8992de6386536867250474, which split what had been
  set_rel_pathlist() and child routines into two sets of functions. But I
  (tgl) evidently missed updating this comment. Back-patch to 9.2 to avoid
  unnecessary divergence among branches.

  Amit Langote

* Fix deadlock at startup, if max_prepared_transactions is too small. (Heikki Linnakangas, 2015-04-23)

  When the startup process recovers transactions by scanning pg_twophase
  directory, it should clear MyLockedGxact after it's done processing each
  transaction, like we do during normal operation at PREPARE TRANSACTION.
  Otherwise, if the startup process exits due to an error, it will try to
  clear the locking_backend field of the last recovered transaction.
  That's usually harmless, but if the error happens in MarkAsPreparing,
  while holding TwoPhaseStateLock, the shmem-exit hook will try to acquire
  TwoPhaseStateLock again, and deadlock with itself.

  This fixes bug #13128 reported by Grant McAlister. The bug was
  introduced by commit bb38fb0d, so backpatch to all supported versions
  like that commit.

* Fix typo in comment (Alvaro Herrera, 2015-04-14)

  SLRU_SEGMENTS_PER_PAGE -> SLRU_PAGES_PER_SEGMENT

  I introduced this ancient typo in subtrans.c and later propagated it to
  multixact.c. I fixed the latter in f741300c, but only back to 9.3;
  backpatch to all supported branches for consistency.

* Don't archive bogus recycled or preallocated files after timeline switch. (Heikki Linnakangas, 2015-04-13)

  After a timeline switch, we would leave behind recycled WAL segments
  that are in the future, but on the old timeline. After promotion, and
  after they become old enough to be recycled again, we would notice that
  they don't have a .ready or .done file, create a .ready file for them,
  and archive them. That's bogus, because the files contain garbage,
  recycled from an older timeline (or preallocated as zeros). We shouldn't
  archive such files.

  This could happen when we're following a timeline switch during replay,
  or when we switch to a new timeline at end-of-recovery.

  To fix, whenever we switch to a new timeline, scan the data directory
  for WAL segments on the old timeline, but with a higher segment number,
  and remove them. Those don't belong to our timeline history, and are
  most likely bogus recycled or preallocated files. They could also be
  valid files that we streamed from the primary ahead of time, but in any
  case, they're not needed to recover to the new timeline.

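  A simplified sketch of the filename filter is below. WAL segment names
  are 24 hex digits (timeline, log, segment); the real cleanup code is
  more careful about timeline history, so this is schematic only.

      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      /* Return true for a segment on an older timeline whose position is
       * past the switch point: such files are the bogus leftovers. */
      static int
      is_bogus_after_switch(const char *fname,
                            unsigned int newTLI, uint64_t switchSeg)
      {
          unsigned int tli, log, seg;

          if (strlen(fname) != 24 ||
              sscanf(fname, "%08X%08X%08X", &tli, &log, &seg) != 3)
              return 0;   /* not a WAL segment file name */

          uint64_t segpos = ((uint64_t) log << 32) | seg; /* ordering key */

          return tli < newTLI && segpos > switchSeg;
      }
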
* Remove duplicated words in comments. (Heikki Linnakangas, 2015-04-12)

  David Rowley

* Fix autovacuum launcher shutdown sequence (Alvaro Herrera, 2015-04-08)

  It was previously possible to have the launcher re-execute its main loop
  before shutting down if some other signal was received or an error
  occurred after getting SIGTERM, as reported by Qingqing Zhou.

  While investigating, Tom Lane further noticed that if autovacuum had
  been disabled in the config file, it would misbehave by trying to start
  a new worker instead of bailing out immediately -- it would consider
  itself as invoked in emergency mode.

  Fix both problems by checking the shutdown flag in a few more places.
  These problems have existed since autovacuum was introduced, so
  backpatch all the way back.

* Fix assorted inconsistent function declarations. (Tom Lane, 2015-04-07)

  While gcc doesn't complain if you declare a function "static" and then
  define it not-static, other compilers do; and in any case the code is
  highly misleading this way. Add the missing "static" keywords to a
  couple of recent patches. Per buildfarm member pademelon.

* Fix incorrect matching of subexpressions in outer-join plan nodes. (Tom Lane, 2015-04-04)

  Previously we would re-use input subexpressions in all expression trees
  attached to a Join plan node. However, if it's an outer join and the
  subexpression appears in the nullable-side input, this is potentially
  incorrect for apparently-matching subexpressions that came from above
  the outer join (ie, targetlist and qpqual expressions), because the
  executor will treat the subexpression value as NULL when maybe it should
  not be.

  The case is fairly hard to hit because (a) you need a non-strict
  subexpression (else NULL is correct), and (b) we don't usually compute
  expressions in the outputs of non-toplevel plan nodes. But we might do
  so if the expressions are sort keys for a mergejoin, for example.

  Probably in the long run we should make a more explicit distinction
  between Vars appearing above and below an outer join, but that will be a
  major planner redesign and not at all back-patchable. For the moment,
  just hack set_join_references so that it will not match any non-Var
  expressions coming from nullable inputs to expressions that came from
  above the join. (This is somewhat overkill, in that a strict expression
  could still be matched, but it doesn't seem worth the effort to check
  that.)

  Per report from Qingqing Zhou. The added regression test case is based
  on his example.

  This has been broken for a very long time, so back-patch to all active
  branches.

* Fix TAP tests to use only standard command-line argument ordering. (Tom Lane, 2015-04-04)

  Some of the TAP tests were supposing that PG programs would accept
  switches after non-switch arguments on their command lines. While GNU
  getopt_long() does allow that, our own implementation does not, and it's
  nowhere suggested in our documentation that such cases should work.
  Adjust the tests to use only the documented syntax.

  Back-patch to 9.4, since without this the TAP tests fail when run with
  src/port's getopt_long() implementation.

  Michael Paquier

* Remove unnecessary variables in _hash_splitbucket(). (Tom Lane, 2015-04-03)

  Commit ed9cc2b5df59fdbc50cce37399e26b03ab2c1686 made it unnecessary to
  pass start_nblkno to _hash_splitbucket(), and for that matter
  unnecessary to have the internal nblkno variable either. My compiler
  didn't complain about that, but some did. I also rearranged the use of
  oblkno a bit to make that case more parallel.

  Report and initial patch by Petr Jelinek, rearranged a bit by me.
  Back-patch to all branches, like the previous patch.

* Fix rare startup failure induced by MVCC-catalog-scans patch. (Tom Lane, 2015-04-03)

  While a new backend nominally participates in sinval signaling starting
  from the SharedInvalBackendInit call near the top of InitPostgres, it
  cannot recognize sinval messages for unshared catalogs of its database
  until it has set up MyDatabaseId. This is not problematic for the
  catcache or relcache, which by definition won't have loaded any data
  from or about such catalogs before that point. However, commit
  568d4138c646cd7c introduced a mechanism for re-using MVCC snapshots for
  catalog scans, and made invalidation of those depend on recognizing
  relevant sinval messages. So it's possible to establish a catalog
  snapshot to read pg_authid and pg_database, then before we set
  MyDatabaseId, receive sinval messages that should result in invalidating
  that snapshot --- but do not, because we don't realize they are for our
  database.

  This mechanism explains the intermittent buildfarm failures we've seen
  since commit 31eae6028eca4365. That commit was not itself at fault, but
  it introduced a new regression test that does reconnections concurrently
  with the "vacuum full pg_am" command in vacuum.sql. This allowed the
  pre-existing error to be exposed, given just the right timing, because
  we'd fail to update our information about how to access pg_am. In
  principle any VACUUM FULL on a system catalog could have created a
  similar hazard for concurrent incoming connections. Perhaps there are
  more subtle failure cases as well.

  To fix, force invalidation of the catalog snapshot as soon as we've set
  MyDatabaseId.

  Back-patch to 9.4 where the error was introduced.

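  In outline, the fix amounts to invalidating the snapshot immediately
  after the database OID becomes known. The sketch below abbreviates the
  InitPostgres context; InvalidateCatalogSnapshot() is the real snapshot
  manager entry point, while the rest is a schematic stand-in.

      extern void InvalidateCatalogSnapshot(void);

      static unsigned int MyDatabaseId;  /* stand-in for the global */

      static void
      finish_database_setup(unsigned int dboid)
      {
          MyDatabaseId = dboid;         /* sinval matching works from here */
          InvalidateCatalogSnapshot();  /* drop any snapshot taken before
                                         * we could match those messages */
      }
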
* After a crash, don't restart workers with BGW_NEVER_RESTART. (Robert Haas, 2015-04-02)

  Amit Khandekar

* Correct comment to use RS_EPHEMERAL (Simon Riggs, 2015-04-02)

* psql: fix \connect with URIs and conninfo strings (Alvaro Herrera, 2015-04-01)

  psql was already accepting conninfo strings as the first parameter in
  \connect, but the way it worked wasn't sane; some of the other
  parameters would get the previous connection's values, causing it to
  connect to a completely unexpected server or, more likely, not finding
  any server at all because of completely wrong combinations of
  parameters. Fix by explicitly checking for a conninfo-looking parameter
  in the dbname position; if one is found, use its complete specification
  rather than mix with the other arguments. Also, change tab-completion to
  not try to complete conninfo/URI-looking "dbnames" and document that
  conninfos are accepted as first argument.

  There was a weak consensus to backpatch this, because while the behavior
  of using the dbname as a conninfo is nowhere documented for \connect, it
  is reasonable to expect that it works because it does work in many other
  contexts. Therefore this is backpatched all the way back to 9.0.

  To implement this, routines previously private to libpq have been
  duplicated so that psql can decide what looks like a conninfo/URI
  string. In back branches, just duplicate the same code all the way back
  to 9.2, where URIs were introduced; 9.0 and 9.1 have a simpler version.
  In master, the routines are moved to src/common and renamed.

  Author: David Fetter, Andrew Dunstan. Some editorialization by me
  (probably earning a Gierth's "Sloppy" badge in the process.)
  Reviewers: Andrew Gierth, Erik Rijkers, Pavel Stěhule, Stephen Frost,
  Robert Haas, Andrew Dunstan.

* Remove spurious semicolons. (Heikki Linnakangas, 2015-03-31)

  Petr Jelinek

* Run pg_upgrade and pg_resetxlog with restricted token on Windows (Andrew Dunstan, 2015-03-30)

  As with initdb these programs need to run with a restricted token, and
  if they don't pg_upgrade will fail when run as a user with Administrator
  privileges.

  Backpatch to all live branches. On the development branch the code is
  reorganized so that the restricted token code is now in a single
  location. On the stable branches a less invasive change is made by
  simply copying the relevant code to pg_upgrade.c and pg_resetxlog.c.

  Patches and bug report from Muhammad Asif Naeem, reviewed by Michael
  Paquier, slightly edited by me.

* Fix bogus concurrent use of _hash_getnewbuf() in bucket split code. (Tom Lane, 2015-03-30)

  _hash_splitbucket() obtained the base page of the new bucket by calling
  _hash_getnewbuf(), but it held no exclusive lock that would prevent some
  other process from calling _hash_getnewbuf() at the same time. This is
  contrary to _hash_getnewbuf()'s API spec and could in fact cause
  failures. In practice, we must only call that function while holding
  write lock on the hash index's metapage.

  An additional problem was that we'd already modified the metapage's
  bucket mapping data, meaning that failure to extend the index would
  leave us with a corrupt index.

  Fix both issues by moving the _hash_getnewbuf() call to just before we
  modify the metapage in _hash_expandtable().

  Unfortunately there's still a large problem here, which is that we could
  also incur ENOSPC while trying to get an overflow page for the new
  bucket. That would leave the index corrupt in a more subtle way, namely
  that some index tuples that should be in the new bucket might still be
  in the old one. Fixing that seems substantially more difficult; even
  preallocating as many pages as we could possibly need wouldn't entirely
  guarantee that the bucket split would complete successfully. So for
  today let's just deal with the base case.

  Per report from Antonin Houska. Back-patch to all active branches.

* Fix rare core dump in BackendIdGetTransactionIds(). (Tom Lane, 2015-03-30)

  BackendIdGetTransactionIds() neglected the possibility that the PROC
  pointer in a ProcState array entry is null. In current usage, this could
  only crash if the other backend had exited since
  pgstat_read_current_status saw it as active, which is a pretty narrow
  window. But it's reachable in the field, per bug #12918 from Vladimir
  Borodin.

  Back-patch to 9.4 where the faulty code was introduced.

* Add vacuum_delay_point call in compute_index_stats's per-sample-row loop. (Tom Lane, 2015-03-29)

  Slow functions in index expressions might cause this loop to take long
  enough to make it worth being cancellable. Probably it would be enough
  to call CHECK_FOR_INTERRUPTS here, but for consistency with other
  per-sample-row loops in this file, let's use vacuum_delay_point.

  Report and patch by Jeff Janes. Back-patch to all supported branches.

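  The pattern is simply a cancellation/cost-delay checkpoint at the top of
  the loop; vacuum_delay_point() is the real backend call, while the loop
  body here is a hypothetical stand-in.

      extern void vacuum_delay_point(void);

      static void
      examine_sample_rows(int numrows)
      {
          for (int i = 0; i < numrows; i++)
          {
              vacuum_delay_point();  /* honors interrupts and cost limits */
              /* ... evaluate index expressions for sample row i ... */
          }
      }
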
* Make SyncRepWakeQueue a static function (Tatsuo Ishii, 2015-03-26)

  It is only used in src/backend/replication/syncrep.c.

  Back-patch to all supported branches except 9.1 which declares the
  function as static.

* Fix ExecOpenScanRelation to take a lock on a ROW_MARK_COPY relation. (Tom Lane, 2015-03-24)

  ExecOpenScanRelation assumed that any relation listed in the ExecRowMark
  list has been locked by InitPlan; but this is not true if the rel's
  markType is ROW_MARK_COPY, which is possible if it's a foreign table.

  In most (possibly all) cases, failure to acquire a lock here isn't
  really problematic because the parser, planner, or plancache would have
  taken the appropriate lock already. In principle though it might leave
  us vulnerable to working with a relation that we hold no lock on, and in
  any case if the executor isn't depending on previously-taken locks
  otherwise then it should not do so for ROW_MARK_COPY relations.

  Noted by Etsuro Fujita. Back-patch to all active versions, since the
  inconsistency has been there a long time. (It's almost certainly
  irrelevant in 9.0, since that predates foreign tables, but the code's
  still wrong on its own terms.)

* Don't delay replication for less than recovery_min_apply_delay's resolution. (Andres Freund, 2015-03-23)

  Recovery delays are implemented by waiting on a latch, and latches take
  milliseconds as a parameter. The required amount of waiting was computed
  using microsecond resolution though, and the wait loop's abort condition
  was checking the delay in microseconds as well. This could lead to short
  spurts of busy looping when the overall wait time was below a
  millisecond, but above 0 microseconds.

  Instead just formulate the wait loop's abort condition in millisecond
  granularity as well. Given that that's recovery_min_apply_delay's
  resolution, it seems harmless to not wait for less than a millisecond.

  Backpatch to 9.4 where recovery_min_apply_delay was introduced.

  Discussion: 20150323141819.GH26995@alap3.anarazel.de

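  The rounding issue can be shown with a helper like the one below (an
  illustrative sketch, not the patched code): once the remaining delay is
  computed in milliseconds, a sub-millisecond remainder rounds to zero and
  the loop simply stops waiting instead of spinning.

      #include <stdint.h>

      /* Remaining delay in the wait primitive's own resolution (ms).
       * The loop should be "while (remaining_delay_ms(...) > 0) wait". */
      static int64_t
      remaining_delay_ms(int64_t target_us, int64_t now_us)
      {
          return (target_us - now_us) / 1000;  /* <1ms rounds to 0: done */
      }
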
* Fix status reporting for terminated bgworkers that were never started. (Robert Haas, 2015-03-19)

  Previously, GetBackgroundWorkerPid() would return BGWH_NOT_YET_STARTED
  if the slot used for the worker registration had not been reused by
  unrelated activity, and BGWH_STOPPED if it had. Either way, a process
  that had requested notification when the state of one of its background
  workers changed did not receive such notifications. Fix things so that
  GetBackgroundWorkerPid() always returns BGWH_STOPPED in this situation,
  so that we do not erroneously give waiters the impression that the
  worker will eventually be started; and send notifications just as we
  would if the process terminated after having been started, so that it's
  possible to wait for the postmaster to process a worker termination
  request without polling.

  Discovered by Amit Kapila during testing of parallel sequential scan.
  Analysis and fix by me. Back-patch to 9.4; there may not be anyone
  relying on this interface yet, but if anyone is, the new behavior is a
  clear improvement.

* Build src/port/dirmod.c only on Windows. (Tom Lane, 2015-03-14)

  Since commit ba7c5975adea74c6f17bdb0e0427ad85962092a2, port/dirmod.c has
  contained only Windows-specific functions. Most platforms don't seem to
  mind uselessly building an empty file, but OS X for one issues warnings.
  Hence, treat dirmod.c as a Windows-specific file selected by configure
  rather than one that's always built. We can revert this change if
  dirmod.c ever gains any non-Windows functionality again.

  Back-patch to 9.4 where the mentioned commit appeared.

* Remove workaround for ancient incompatibility between readline and libedit. (Tom Lane, 2015-03-14)

  GNU readline defines the return value of write_history() as "zero if
  OK, else an errno code". libedit's version of that function used to have
  a different definition (to wit, "-1 if error, else the number of lines
  written to the file"). We tried to work around that by checking whether
  errno had become nonzero, but this method has never been kosher
  according to the published API of either library. It's reportedly
  completely broken in recent Ubuntu releases: psql bleats about "No such
  file or directory" when saving ~/.psql_history, even though the write
  worked fine.

  However, libedit has been following the readline definition since
  somewhere around 2006, so it seems all right to finally break
  compatibility with ancient libedit releases and trust that the return
  value is what readline specifies. (I'm not sure when the various Linux
  distributions incorporated this fix, but I did find that OS X has been
  shipping fixed versions since 10.5/Leopard.)

  If anyone is still using such an ancient libedit, they will find that
  psql complains it can't write ~/.psql_history at exit, even when the
  file was written correctly. This is no worse than the behavior we're
  fixing for current releases.

  Back-patch to all supported branches.

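  The portable idiom is to trust the documented return value. A minimal
  runnable example (link with -lreadline; the history file name is a
  placeholder):

      #include <stdio.h>
      #include <string.h>
      #include <readline/history.h>

      int main(void)
      {
          int rc;

          using_history();
          add_history("select 1;");
          rc = write_history(".demo_history");
          if (rc != 0)   /* 0 if OK, else an errno code */
              fprintf(stderr, "could not save history: %s\n", strerror(rc));
          return rc != 0;
      }
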
* Fix integer overflow in debug message of walreceiver (Tatsuo Ishii, 2015-03-14)

  The message tries to report the replication apply delay, which fails if
  the first WAL record has not been applied yet. The fix is to show "N/A",
  indicating that the delay data is not yet available, instead of an
  overflowed negative number.

  Problem reported by me and patch by Fabrízio de Royes Mello.

  Back-patched to 9.4, 9.3 and 9.2 stable branches (9.1 and 9.0 do not
  have the debug message).

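  A small sketch of the reporting fix (illustrative only; the real message
  is built inside walreceiver): when no WAL has been applied yet there is
  no valid reference timestamp, so print "N/A" rather than an overflowed
  difference.

      #include <stdint.h>
      #include <stdio.h>

      static void
      report_apply_delay(int64_t apply_ts_us, int64_t send_ts_us)
      {
          if (apply_ts_us == 0)  /* nothing applied yet */
              printf("replication delay: N/A\n");
          else
              printf("replication delay: %lld ms\n",
                     (long long) ((send_ts_us - apply_ts_us) / 1000));
      }
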
* Ensure tableoid reads correctly in EvalPlanQual-manufactured tuples. (Tom Lane, 2015-03-12)

  The ROW_MARK_COPY path in EvalPlanQualFetchRowMarks() was just setting
  tableoid to InvalidOid, I think on the assumption that the referenced
  RTE must be a subquery or other case without a meaningful OID. However,
  foreign tables also use this code path, and they do have meaningful
  table OIDs; so failure to set the tuple field can lead to user-visible
  misbehavior. Fix that by fetching the appropriate OID from the range
  table.

  There's still an issue about whether CTID can ever have a meaningful
  value in this case; at least with postgres_fdw foreign tables, it does.
  But that is a different problem that seems to require a significantly
  different patch --- it's debatable whether postgres_fdw really wants to
  use this code path at all.

  Simplified version of a patch by Etsuro Fujita, who also noted the
  problem to begin with. The issue can be demonstrated in all versions
  having FDWs, so back-patch to 9.1.

* Fix memory leaks in GIN index vacuum. (Heikki Linnakangas, 2015-03-12)

  Per bug #12850 by Walter Nordmann. Backpatch to 9.4 where the leak was
  introduced.

* Cast to (void *) rather than (int *) when passing int64's to PQfn(). (Tom Lane, 2015-03-08)

  This is a possibly-vain effort to silence a Coverity warning about bogus
  endianness dependency. The code's fine, because it takes care of
  endianness issues for itself, but Coverity sees an int64 being passed to
  an int* argument and not unreasonably suspects something's wrong. I'm
  not sure if putting the void* cast in the way will shut it up; but it
  can't hurt and seems better from a documentation standpoint anyway,
  since the pointer is not used as an int* in this code path.

  Just for a bit of additional safety, verify that the result length is 8
  bytes as expected.

  Back-patch to 9.3 where the code in question was added.

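  The call pattern looks roughly like the sketch below; fnid is a
  hypothetical server function OID, the value is assumed to already be in
  network byte order, and this is a simplified illustration of the libpq
  fast-path interface rather than the patched code.

      #include <libpq-fe.h>

      static int
      call_int64_fn(PGconn *conn, int fnid, pg_int64 value_net,
                    pg_int64 *result_net)
      {
          PQArgBlock arg;
          int result_len = 0;
          PGresult *res;

          arg.isint = 0;                   /* pass by reference, not as int */
          arg.len = 8;
          arg.u.ptr = (void *) &value_net; /* (void *), not (int *) */

          res = PQfn(conn, fnid, (void *) result_net, &result_len,
                     0 /* want a binary result */, &arg, 1);

          /* Verify the expected 8-byte result, as the commit suggests. */
          if (res == NULL || PQresultStatus(res) != PGRES_COMMAND_OK ||
              result_len != 8)
          {
              PQclear(res);
              return -1;
          }
          PQclear(res);
          return 0;
      }
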
* Fix documentation for libpq's PQfn(). (Tom Lane, 2015-03-08)

  The SGML docs claimed that 1-byte integers could be sent or received
  with the "isint" options, but no such behavior has ever been implemented
  in pqGetInt() or pqPutInt(). The in-code documentation header for PQfn()
  was even less in tune with reality, and the code itself used parameter
  names matching neither the SGML docs nor its libpq-fe.h declaration. Do
  a bit of additional wordsmithing on the SGML docs while at it.

  Since the business about 1-byte integers is a clear documentation bug,
  back-patch to all supported branches.
