path: root/src
* Stamp 9.1.18. [tag: REL9_1_18] (Tom Lane, 2015-06-09)

* Report more information if pg_perm_setlocale() fails at startup. (Tom Lane, 2015-06-09)
  We don't know why a few Windows users have seen this fail, but the taciturnity of the error message certainly isn't helping debug it. Let's at least find out which LC category isn't working.

* Use a safer method for determining whether relcache init file is stale. (Tom Lane, 2015-06-07)
  When we invalidate the relcache entry for a system catalog or index, we must also delete the relcache "init file" if the init file contains a copy of that rel's entry. The old way of doing this relied on a specially maintained list of the OIDs of relations present in the init file: we made the list either when reading the file in, or when writing the file out. The problem is that when writing the file out, we included only rels present in our local relcache, which might have already suffered some deletions due to relcache inval events. In such cases we correctly decided not to overwrite the real init file with incomplete data --- but we still used the incomplete initFileRelationIds list for the rest of the current session. This could result in wrong decisions about whether the session's own actions require deletion of the init file, potentially allowing an init file created by some other concurrent session to be left around even though it's been made stale.

  Since we don't support changing the schema of a system catalog at runtime, the only likely scenario in which this would cause a problem in the field involves a "vacuum full" on a catalog concurrently with other activity, and even then it's far from easy to provoke. Remarkably, this has been broken since 2002 (in commit 786340441706ac1957a031f11ad1c2e5b6e18314), but we had never seen a reproducible test case until recently. If it did happen in the field, the symptoms would probably involve unexpected "cache lookup failed" errors to begin with, then "could not open file" failures after the next checkpoint, as all accesses to the affected catalog stopped working. Recovery would require manually removing the stale "pg_internal.init" file.

  To fix, get rid of the initFileRelationIds list, and instead consult syscache.c's list of relations used in catalog caches to decide whether a relation is included in the init file. This should be a tad more efficient anyway, since we're replacing a linear search of a list of ~100 entries with a binary search. It's a bit ugly that the init file contents are now so directly tied to the catalog caches, but in practice that won't make much difference.

  Back-patch to all supported branches.

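  A standalone sketch of the data-structure change described above (names and the sorted-OID array are illustrative stand-ins, not the actual relcache/syscache code): instead of maintaining a separate list of init-file relation OIDs and scanning it linearly, keep one sorted array and probe it with bsearch().

      #include <stdbool.h>
      #include <stdlib.h>

      typedef unsigned int Oid;

      static int
      oid_cmp(const void *a, const void *b)
      {
          Oid x = *(const Oid *) a;
          Oid y = *(const Oid *) b;

          return (x < y) ? -1 : (x > y) ? 1 : 0;
      }

      /* Is relid among the relations whose entries live in the init file?
       * sorted_oids stands in for the list derived from the catalog caches. */
      static bool
      relation_in_init_file(Oid relid, const Oid *sorted_oids, size_t n)
      {
          return bsearch(&relid, sorted_oids, n, sizeof(Oid), oid_cmp) != NULL;
      }
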
* Fix incorrect order of database-locking operations in InitPostgres(). (Tom Lane, 2015-06-05)
  We should set MyProc->databaseId after acquiring the per-database lock, not beforehand. The old way risked deadlock against processes trying to copy or delete the target database, since they would first acquire the lock and then wait for processes with matching databaseId to exit; that left a window wherein an incoming process could set its databaseId and then block on the lock, while the other process had the lock and waited in vain for the incoming process to exit. CountOtherDBBackends() would time out and fail after 5 seconds, so this just resulted in an unexpected failure not a permanent lockup, but it's still annoying when it happens. A real-world example of a use-case is that short-duration connections to a template database should not cause CREATE DATABASE to fail.

  Doing it in the other order should be fine since the contract has always been that processes searching the ProcArray for a database ID must hold the relevant per-database lock while searching. Thus, this actually removes the former race condition that required an assumption that storing to MyProc->databaseId is atomic.

  It's been like this for a long time, so back-patch to all active branches.

* Stamp 9.1.17. [tag: REL9_1_17] (Tom Lane, 2015-06-01)

* Remove special cases for ETXTBSY from new fsync'ing logic. (Tom Lane, 2015-05-29)
  The argument that this is a sufficiently-expected case to be silently ignored seems pretty thin. Andres had brought it up back when we were still considering that most fsync failures should be hard errors, and it probably would be legit not to fail hard for ETXTBSY --- but the same is true for EROFS and other cases, which is why we gave up on hard failures. ETXTBSY is surely not a normal case, so logging the failure seems fine from here.

* Fix fsync-at-startup code to not treat errors as fatal. (Tom Lane, 2015-05-28)
  Commit 2ce439f3379aed857517c8ce207485655000fc8e introduced a rather serious regression, namely that if its scan of the data directory came across any un-fsync-able files, it would fail and thereby prevent database startup. Worse yet, symlinks to such files also caused the problem, which meant that crash restart was guaranteed to fail on certain common installations such as older Debian.

  After discussion, we agreed that (1) failure to start is worse than any consequence of not fsync'ing is likely to be, therefore treat all errors in this code as nonfatal; (2) we should not chase symlinks other than those that are expected to exist, namely pg_xlog/ and tablespace links under pg_tblspc/. The latter restriction avoids possibly fsync'ing a much larger part of the filesystem than intended, if the user has left random symlinks hanging about in the data directory.

  This commit takes care of that and also does some code beautification, mainly moving the relevant code into fd.c, which seems a much better place for it than xlog.c, and making sure that the conditional compilation for the pre_sync_fname pass has something to do with whether pg_flush_data works.

  I also relocated the call site in xlog.c down a few lines; it seems a bit silly to be doing this before ValidateXLOGDirectoryStructure().

  The similar logic in initdb.c ought to be made to match this, but that change is noncritical and will be dealt with separately.

  Back-patch to all active branches, like the prior commit.

  Abhijit Menon-Sen and Tom Lane

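  A simplified, standalone POSIX sketch of the behavior described in this commit and in "Recursively fsync() the data directory after a crash" further down (not fd.c's actual code): walk a directory tree, fsync regular files, skip symlinks rather than chase them, and on any error emit a warning and keep going instead of refusing to start.

      #include <dirent.h>
      #include <errno.h>
      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <sys/stat.h>
      #include <unistd.h>

      static void
      fsync_dir_recursively(const char *path)
      {
          DIR           *dir = opendir(path);
          struct dirent *de;

          if (dir == NULL)
          {
              fprintf(stderr, "warning: could not open directory \"%s\": %s\n",
                      path, strerror(errno));
              return;                         /* nonfatal: keep starting up */
          }

          while ((de = readdir(dir)) != NULL)
          {
              char        subpath[4096];
              struct stat st;

              if (strcmp(de->d_name, ".") == 0 || strcmp(de->d_name, "..") == 0)
                  continue;
              snprintf(subpath, sizeof(subpath), "%s/%s", path, de->d_name);

              if (lstat(subpath, &st) < 0)    /* lstat: do not follow symlinks */
                  continue;
              if (S_ISDIR(st.st_mode))
                  fsync_dir_recursively(subpath);
              else if (S_ISREG(st.st_mode))
              {
                  int fd = open(subpath, O_RDONLY);

                  if (fd < 0 || fsync(fd) < 0)
                      fprintf(stderr, "warning: could not fsync \"%s\": %s\n",
                              subpath, strerror(errno));
                  if (fd >= 0)
                      close(fd);
              }
              /* symlinks and special files are deliberately ignored */
          }
          closedir(dir);
      }
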
* Fix portability issue in isolationtester grammar. (Tom Lane, 2015-05-27)
  specparse.y and specscanner.l used "string" as a token name. Now, bison likes to define each token name as a macro for the token code it assigns, which means those names are basically off-limits for any other use within the grammar file or included headers. So names as generic as "string" are dangerous. This is what was causing the recent failures on protosciurus: some versions of Solaris' sys/kstat.h use "string" as a field name. With late-model bison we don't see this problem because the token macros aren't defined till later (that is why castoroides didn't show the problem even though it's on the same machine). But protosciurus uses bison 1.875 which defines the token macros up front.

  This land mine has been there from day one; we'd have found it sooner except that protosciurus wasn't trying to run the isolation tests till recently.

  To fix, rename the token to "string_literal" which is hopefully less likely to collide with names used by system headers.

  Back-patch to all branches containing the isolation tests.

* Rename pg_shdepend.c's typedef "objectType" to SharedDependencyObjectType. (Tom Lane, 2015-05-24)
  The name objectType is widely used as a field name, and it's pure luck that this conflict has not caused pgindent to go crazy before. It messed up pg_audit.c pretty good though. Since pg_shdepend.c doesn't export this typedef and only uses it in three places, changing that seems saner than changing the field usages.

  Back-patch because we're contemplating using the union of all branch typedefs for future pgindent runs, so this won't fix anything if it stays the same in back branches.

* Back-patch libpq support for TLS versions beyond v1. (Tom Lane, 2015-05-21)
  Since 7.3.2, libpq has been coded in such a way that the only SSL protocol it would allow was TLS v1. That approach is looking increasingly obsolete. In commit 820f08cabdcbb899 we fixed it to allow TLS >= v1, but did not back-patch the change at the time, partly out of caution and partly because the question was confused by a contemporary server-side change to reject the now-obsolete SSL protocol v3.

  9.4 has now been out long enough that it seems safe to assume the change is OK; hence, back-patch into 9.0-9.3. (I also chose to back-patch some relevant comments added by commit 326e1d73c476a0b5, but did *not* change the server behavior; hence, pre-9.4 servers will continue to allow SSL v3, even though no remotely modern client will request it.)

  Per gripe from Jan Bilek.

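  A hedged sketch of the protocol selection this back-patch enables, using the pre-1.1.0 OpenSSL API (not necessarily libpq's exact code): ask for the highest mutually supported protocol, then forbid the obsolete SSLv2/SSLv3, which leaves "TLS >= v1" rather than pinning the connection to TLSv1 only.

      #include <openssl/ssl.h>

      static SSL_CTX *
      make_client_ctx(void)
      {
          /* SSLv23_method() negotiates the best protocol both sides support... */
          SSL_CTX *ctx = SSL_CTX_new(SSLv23_method());

          /* ...and these options keep it from ever falling back below TLSv1. */
          if (ctx != NULL)
              SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3);
          return ctx;
      }
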
* Revert error-throwing wrappers for the printf family of functions. (Tom Lane, 2015-05-19)
  This reverts commit 16304a013432931e61e623c8d85e9fe24709d9ba, except for its changes in src/port/snprintf.c; as well as commit cac18a76bb6b08f1ecc2a85e46c9d2ab82dd9d23 which is no longer needed.

  Fujii Masao reported that the previous commit caused failures in psql on OS X, since if one exits the pager program early while viewing a query result, psql sees an EPIPE error from fprintf --- and the wrapper function thought that was reason to panic. (It's a bit surprising that the same does not happen on Linux.) Further discussion among the security list concluded that the risk of other such failures was far too great, and that the one-size-fits-all approach to error handling embodied in the previous patch is unlikely to be workable.

  This leaves us again exposed to the possibility of the type of failure envisioned in CVE-2015-3166. However, that failure mode is strictly hypothetical at this point: there is no concrete reason to believe that an attacker could trigger information disclosure through the supposed mechanism. In the first place, the attack surface is fairly limited, since so much of what the backend does with format strings goes through stringinfo.c or psprintf(), and those already had adequate defenses. In the second place, even granting that an unprivileged attacker could control the occurrence of ENOMEM with some precision, it's a stretch to believe that he could induce it just where the target buffer contains some valuable information. So we concluded that the risk of non-hypothetical problems induced by the patch greatly outweighs the security risks.

  We will therefore revert, and instead undertake closer analysis to identify specific calls that may need hardening, rather than attempt a universal solution.

  We have kept the portion of the previous patch that improved snprintf.c's handling of errors when it calls the platform's sprintf(). That seems to be an unalloyed improvement.

  Security: CVE-2015-3166

* Fix off-by-one error in Assertion. (Heikki Linnakangas, 2015-05-19)
  The point of the assertion is to ensure that the arrays allocated on the stack are large enough, but the check was one item short. This won't matter in practice because MaxIndexTuplesPerPage is an overestimate, so you can't have that many items on a page in reality. But let's be tidy.

  Spotted by Anastasia Lubennikova. Backpatch to all supported versions, like the patch that added the assertion.

* Don't MultiXactIdIsRunning when in recovery (Alvaro Herrera, 2015-05-18)
  In 9.1 and earlier, it is possible for index_getnext() to try to examine a heap buffer for possible HOT-prune when in recovery; this causes a problem when a multixact is found in a tuple's Xmax, because GetMultiXactIdMembers refuses to run when in recovery, raising an error:

      ERROR: cannot GetMultiXactIdMembers() during recovery

  This can be solved easily by having MultiXactIdIsRunning always return false when in recovery, which is reasonable because a HOT standby cannot acquire further tuple locks nor update/delete tuples.

  (Note: it doesn't look like this specific code path has a problem in 9.2, because instead of doing HeapTupleSatisfiesUpdate directly, heap_hot_search_buffer uses HeapTupleIsSurelyDead instead. Still, there may be other paths affected by the same bug, for instance in pgrowlocks, and the multixact code hasn't changed; so apply the same fix throughout.)

  Apply this fix to 9.0 through 9.2. In 9.3 the multixact code has been changed completely and is no longer subject to this problem.

  Per report from Marko Tiikkaja, https://www.postgresql.org/message-id/54EB3283.2080305@joh.to
  Analysis by Andres Freund

* Stamp 9.1.16. (Tom Lane, 2015-05-18)

* Fix error message in pre_sync_fname. (Robert Haas, 2015-05-18)
  The old one didn't include %m anywhere, and required extra translation.

  Report by Peter Eisentraut. Fix by me. Review by Tom Lane.

* Check return values of sensitive system library calls. (Noah Misch, 2015-05-18)
  PostgreSQL already checked the vast majority of these, missing this handful that nearly cannot fail. If putenv() failed with ENOMEM in pg_GSS_recvauth(), authentication would proceed with the wrong keytab file. If strftime() returned zero in cache_locale_time(), using the unspecified buffer contents could lead to information exposure or a crash. Back-patch to 9.0 (all supported versions).

  Other unchecked calls to these functions, especially those in frontend code, pose negligible security concern. This patch does not address them. Nonetheless, it is always better to check return values whose specification provides for indicating an error.

  In passing, fix an off-by-one error in strftime_win32()'s invocation of WideCharToMultiByte(). Upon retrieving a value of exactly MAX_L10N_DATA bytes, strftime_win32() would overrun the caller's buffer by one byte. MAX_L10N_DATA is chosen to exceed the length of every possible value, so the vulnerable scenario probably does not arise.

  Security: CVE-2015-3166

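  A standalone sketch of the rule this commit applies (the environment variable and file names are hypothetical, not the backend's): check even the calls that "nearly cannot fail". putenv() can fail with ENOMEM, and strftime() reports truncation or other failure by returning zero.

      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      int
      main(void)
      {
          /* putenv() keeps a pointer to its argument, so use static storage. */
          static char envbuf[] = "KRB5_KTNAME=/some/keytab";
          char        timebuf[128];
          time_t      now = time(NULL);

          if (putenv(envbuf) != 0)
          {
              fprintf(stderr, "putenv failed\n");
              return 1;               /* don't proceed with the wrong environment */
          }

          if (strftime(timebuf, sizeof(timebuf), "%A %B", localtime(&now)) == 0)
          {
              fprintf(stderr, "strftime failed or result did not fit\n");
              return 1;               /* don't use unspecified buffer contents */
          }

          printf("%s\n", timebuf);
          return 0;
      }
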
* Add error-throwing wrappers for the printf family of functions. (Noah Misch, 2015-05-18)
  All known standard library implementations of these functions can fail with ENOMEM. A caller neglecting to check for failure would experience missing output, information exposure, or a crash. Check return values within wrappers and code, currently just snprintf.c, that bypasses the wrappers. The wrappers do not return after an error, so their callers need not check. Back-patch to 9.0 (all supported versions).

  Popular free software standard library implementations do take pains to bypass malloc() in simple cases, but they risk ENOMEM for floating point numbers, positional arguments, large field widths, and large precisions. No specification demands such caution, so this commit regards every call to a printf family function as a potential threat.

  Injecting the wrappers implicitly is a compromise between patch scope and design goals. I would prefer to edit each call site to name a wrapper explicitly. libpq and the ECPG libraries would, ideally, convey errors to the caller rather than abort(). All that would be painfully invasive for a back-patched security fix, hence this compromise.

  Security: CVE-2015-3166

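  A minimal sketch of the wrapper idea, not the actual src/port code and with a hypothetical name: forward to the real vfprintf(), and treat a negative return value (which all known libc implementations can produce under ENOMEM) as fatal rather than silently losing output.

      #include <stdarg.h>
      #include <stdio.h>
      #include <stdlib.h>

      static int
      fprintf_checked(FILE *stream, const char *fmt, ...)
      {
          va_list args;
          int     rc;

          va_start(args, fmt);
          rc = vfprintf(stream, fmt, args);
          va_end(args);

          if (rc < 0)
          {
              /* The wrappers never return after an error, so callers need not check. */
              perror("vfprintf");
              abort();
          }
          return rc;
      }
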
* Permit use of vsprintf() in PostgreSQL code. (Noah Misch, 2015-05-18)
  The next commit needs it. Back-patch to 9.0 (all supported versions).

* Prevent a double free by not reentering be_tls_close(). (Noah Misch, 2015-05-18)
  Reentering this function with the right timing caused a double free, typically crashing the backend. By synchronizing a disconnection with the authentication timeout, an unauthenticated attacker could achieve this somewhat consistently. Call be_tls_close() solely from within proc_exit_prepare(). Back-patch to 9.0 (all supported versions).

  Benkocs Norbert Attila

  Security: CVE-2015-3165

* Translation updates (Peter Eisentraut, 2015-05-18)
  Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
  Source-Git-Hash: 3fd92c72461f8fa03989609f4f2513fe1d865582

* Fix typos (Peter Eisentraut, 2015-05-17)

* Update time zone data files to tzdata release 2015d. (Tom Lane, 2015-05-15)
  DST law changes in Egypt, Mongolia, Palestine. Historical corrections for Canada and Chile. Revised zone abbreviation for America/Adak (HST/HDT not HAST/HADT).

* Fix RBM_ZERO_AND_LOCK mode to not acquire lock on local buffers. (Heikki Linnakangas, 2015-05-13)
  Commit 81c45081 introduced a new RBM_ZERO_AND_LOCK mode to ReadBuffer, which takes a lock on the buffer before zeroing it. However, you cannot take a lock on a local buffer, and you got a segfault instead. The version of that patch committed to master included a check for !isLocalBuf, and therefore didn't crash, but oddly I missed that in the back-patched versions. This patch adds that check to the back-branches too.

  RBM_ZERO_AND_LOCK mode is only used during WAL replay, and in hash indexes. WAL replay only deals with shared buffers, so the only way to trigger the bug is with a temporary hash index.

  Reported by Artem Ignatyev, analysis by Tom Lane.

* Fix incorrect checking of deferred exclusion constraint after a HOT update. (Tom Lane, 2015-05-11)
  If a row that potentially violates a deferred exclusion constraint is HOT-updated later in the same transaction, the exclusion constraint would be reported as violated when the check finally occurs, even if the row(s) the new row originally conflicted with have since been removed. This happened because the wrong TID was passed to check_exclusion_constraint(), causing the live HOT-updated row to be seen as a conflicting row rather than recognized as the row-under-test.

  Per bug #13148 from Evan Martin. It's been broken since exclusion constraints were invented, so back-patch to all supported branches.

* Properly send SCM status updates when shutting down service on Windows (Magnus Hagander, 2015-05-07)
  The Service Control Manager should be notified regularly during a shutdown that takes a long time. Previously we would increase the counter, but forgot to actually send the notification to the system. The loop counter was also incorrectly initialized in the event that the startup of the system took long enough for it to increase, which could cause the shutdown process not to wait as long as expected.

  Krystian Bigaj, reviewed by Michael Paquier

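  A minimal Win32 sketch of the pattern the fix restores (not the pgwin32 service code; hStatus is assumed to have been registered elsewhere): during a slow stop, both bump dwCheckPoint and actually call SetServiceStatus() so the Service Control Manager sees progress and does not assume the service is hung.

      #include <windows.h>

      static SERVICE_STATUS_HANDLE hStatus;   /* from RegisterServiceCtrlHandler() */
      static SERVICE_STATUS        status;

      static void
      report_stopping_progress(void)
      {
          status.dwCurrentState = SERVICE_STOP_PENDING;
          status.dwWaitHint = 10000;          /* ms the SCM should allow until the next update */
          status.dwCheckPoint++;              /* incrementing the counter alone is not enough... */
          SetServiceStatus(hStatus, &status); /* ...the SCM must also be told about it */
      }
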
* Fix some problems with patch to fsync the data directory. (Robert Haas, 2015-05-05)
  pg_win32_is_junction() was a typo for pgwin32_is_junction(). open() was used not only in a two-argument form, which breaks on Windows, but also where BasicOpenFile() should have been used.

  Per reports from Andrew Dunstan and David Rowley.

* Recursively fsync() the data directory after a crash. (Robert Haas, 2015-05-04)
  Otherwise, if there's another crash, some writes from after the first crash might make it to disk while writes from before the crash fail to make it to disk. This could lead to data corruption. Back-patch to all supported versions.

  Abhijit Menon-Sen, reviewed by Andres Freund and slightly revised by me.

* Build libecpg with -DFRONTEND in all supported versions. (Noah Misch, 2015-04-26)
  Fix an oversight in commit 151e74719b0cc5c040bd3191b51b95f925773dd1 by back-patching commit 44c5d387eafb4ba1a032f8d7b13d85c553d69181 to 9.0.

* Prevent improper reordering of antijoins vs. outer joins. (Tom Lane, 2015-04-25)
  An outer join appearing within the RHS of an antijoin can't commute with the antijoin, but somehow I missed teaching make_outerjoininfo() about that. In Teodor Sigaev's recent trouble report, this manifests as a "could not find RelOptInfo for given relids" error within eqjoinsel(); but I think silently wrong query results are possible too, if the planner misorders the joins and doesn't happen to trigger any internal consistency checks. It's broken as far back as we had antijoins, so back-patch to all supported branches.

* Build every ECPG library with -DFRONTEND. (Noah Misch, 2015-04-24)
  Each of the libraries incorporates src/port files, which often check FRONTEND. Build systems disagreed on whether to build libpgtypes this way. Only libecpg incorporates files that rely on it today. Back-patch to 9.0 (all supported versions) to forestall surprises.

* Fix deadlock at startup, if max_prepared_transactions is too small. (Heikki Linnakangas, 2015-04-23)
  When the startup process recovers transactions by scanning the pg_twophase directory, it should clear MyLockedGxact after it's done processing each transaction, like we do during normal operation at PREPARE TRANSACTION. Otherwise, if the startup process exits due to an error, it will try to clear the locking_backend field of the last recovered transaction. That's usually harmless, but if the error happens in MarkAsPreparing, while holding TwoPhaseStateLock, the shmem-exit hook will try to acquire TwoPhaseStateLock again, and deadlock with itself.

  This fixes bug #13128 reported by Grant McAlister. The bug was introduced by commit bb38fb0d, so backpatch to all supported versions like that commit.

* Fix typo in comment (Alvaro Herrera, 2015-04-14)
  SLRU_SEGMENTS_PER_PAGE -> SLRU_PAGES_PER_SEGMENT

  I introduced this ancient typo in subtrans.c and later propagated it to multixact.c. I fixed the latter in f741300c, but only back to 9.3; backpatch to all supported branches for consistency.

* Don't archive bogus recycled or preallocated files after timeline switch. (Heikki Linnakangas, 2015-04-13)
  After a timeline switch, we would leave behind recycled WAL segments that are in the future, but on the old timeline. After promotion, and after they become old enough to be recycled again, we would notice that they don't have a .ready or .done file, create a .ready file for them, and archive them. That's bogus, because the files contain garbage, recycled from an older timeline (or preallocated as zeros). We shouldn't archive such files.

  This could happen when we're following a timeline switch during replay, or when we switch to a new timeline at end-of-recovery.

  To fix, whenever we switch to a new timeline, scan the data directory for WAL segments on the old timeline, but with a higher segment number, and remove them. Those don't belong to our timeline history, and are most likely bogus recycled or preallocated files. They could also be valid files that we streamed from the primary ahead of time, but in any case, they're not needed to recover to the new timeline.

* Fix autovacuum launcher shutdown sequence (Alvaro Herrera, 2015-04-08)
  It was previously possible to have the launcher re-execute its main loop before shutting down if some other signal was received or an error occurred after getting SIGTERM, as reported by Qingqing Zhou.

  While investigating, Tom Lane further noticed that if autovacuum had been disabled in the config file, it would misbehave by trying to start a new worker instead of bailing out immediately -- it would consider itself as invoked in emergency mode.

  Fix both problems by checking the shutdown flag in a few more places. These problems have existed since autovacuum was introduced, so backpatch all the way back.

* Fix assorted inconsistent function declarations. (Tom Lane, 2015-04-07)
  While gcc doesn't complain if you declare a function "static" and then define it not-static, other compilers do; and in any case the code is highly misleading this way. Add the missing "static" keywords to a couple of recent patches. Per buildfarm member pademelon.

* Fix incorrect matching of subexpressions in outer-join plan nodes. (Tom Lane, 2015-04-04)
  Previously we would re-use input subexpressions in all expression trees attached to a Join plan node. However, if it's an outer join and the subexpression appears in the nullable-side input, this is potentially incorrect for apparently-matching subexpressions that came from above the outer join (ie, targetlist and qpqual expressions), because the executor will treat the subexpression value as NULL when maybe it should not be.

  The case is fairly hard to hit because (a) you need a non-strict subexpression (else NULL is correct), and (b) we don't usually compute expressions in the outputs of non-toplevel plan nodes. But we might do so if the expressions are sort keys for a mergejoin, for example.

  Probably in the long run we should make a more explicit distinction between Vars appearing above and below an outer join, but that will be a major planner redesign and not at all back-patchable. For the moment, just hack set_join_references so that it will not match any non-Var expressions coming from nullable inputs to expressions that came from above the join. (This is somewhat overkill, in that a strict expression could still be matched, but it doesn't seem worth the effort to check that.)

  Per report from Qingqing Zhou. The added regression test case is based on his example.

  This has been broken for a very long time, so back-patch to all active branches.

* Remove unnecessary variables in _hash_splitbucket(). (Tom Lane, 2015-04-03)
  Commit ed9cc2b5df59fdbc50cce37399e26b03ab2c1686 made it unnecessary to pass start_nblkno to _hash_splitbucket(), and for that matter unnecessary to have the internal nblkno variable either. My compiler didn't complain about that, but some did. I also rearranged the use of oblkno a bit to make that case more parallel.

  Report and initial patch by Petr Jelinek, rearranged a bit by me. Back-patch to all branches, like the previous patch.

* psql: fix \connect with URIs and conninfo strings (Alvaro Herrera, 2015-04-01)
  psql was already accepting conninfo strings as the first parameter in \connect, but the way it worked wasn't sane; some of the other parameters would get the previous connection's values, causing it to connect to a completely unexpected server or, more likely, not finding any server at all because of completely wrong combinations of parameters. Fix by explicitly checking for a conninfo-looking parameter in the dbname position; if one is found, use its complete specification rather than mix with the other arguments. Also, change tab-completion to not try to complete conninfo/URI-looking "dbnames" and document that conninfos are accepted as first argument.

  There was a weak consensus to backpatch this, because while the behavior of using the dbname as a conninfo is nowhere documented for \connect, it is reasonable to expect that it works because it does work in many other contexts. Therefore this is backpatched all the way back to 9.0.

  To implement this, routines previously private to libpq have been duplicated so that psql can decide what looks like a conninfo/URI string. In back branches, just duplicate the same code all the way back to 9.2, where URIs were introduced; 9.0 and 9.1 have a simpler version. In master, the routines are moved to src/common and renamed.

  Author: David Fetter, Andrew Dunstan. Some editorialization by me (probably earning a Gierth's "Sloppy" badge in the process.)
  Reviewers: Andrew Gierth, Erik Rijkers, Pavel Stěhule, Stephen Frost, Robert Haas, Andrew Dunstan.

* Remove spurious semicolons. (Heikki Linnakangas, 2015-03-31)
  Petr Jelinek

* Run pg_upgrade and pg_resetxlog with restricted token on Windows (Andrew Dunstan, 2015-03-30)
  As with initdb, these programs need to run with a restricted token, and if they don't pg_upgrade will fail when run as a user with Administrator privileges.

  Backpatch to all live branches. On the development branch the code is reorganized so that the restricted token code is now in a single location. On the stable branches a less invasive change is made by simply copying the relevant code to pg_upgrade.c and pg_resetxlog.c.

  Patches and bug report from Muhammad Asif Naeem, reviewed by Michael Paquier, slightly edited by me.

* Fix bogus concurrent use of _hash_getnewbuf() in bucket split code. (Tom Lane, 2015-03-30)
  _hash_splitbucket() obtained the base page of the new bucket by calling _hash_getnewbuf(), but it held no exclusive lock that would prevent some other process from calling _hash_getnewbuf() at the same time. This is contrary to _hash_getnewbuf()'s API spec and could in fact cause failures. In practice, we must only call that function while holding write lock on the hash index's metapage.

  An additional problem was that we'd already modified the metapage's bucket mapping data, meaning that failure to extend the index would leave us with a corrupt index.

  Fix both issues by moving the _hash_getnewbuf() call to just before we modify the metapage in _hash_expandtable().

  Unfortunately there's still a large problem here, which is that we could also incur ENOSPC while trying to get an overflow page for the new bucket. That would leave the index corrupt in a more subtle way, namely that some index tuples that should be in the new bucket might still be in the old one. Fixing that seems substantially more difficult; even preallocating as many pages as we could possibly need wouldn't entirely guarantee that the bucket split would complete successfully. So for today let's just deal with the base case.

  Per report from Antonin Houska. Back-patch to all active branches.

* Add vacuum_delay_point call in compute_index_stats's per-sample-row loop. (Tom Lane, 2015-03-29)
  Slow functions in index expressions might cause this loop to take long enough to make it worth being cancellable. Probably it would be enough to call CHECK_FOR_INTERRUPTS here, but for consistency with other per-sample-row loops in this file, let's use vacuum_delay_point.

  Report and patch by Jeff Janes. Back-patch to all supported branches.

* Fix ExecOpenScanRelation to take a lock on a ROW_MARK_COPY relation. (Tom Lane, 2015-03-24)
  ExecOpenScanRelation assumed that any relation listed in the ExecRowMark list has been locked by InitPlan; but this is not true if the rel's markType is ROW_MARK_COPY, which is possible if it's a foreign table.

  In most (possibly all) cases, failure to acquire a lock here isn't really problematic because the parser, planner, or plancache would have taken the appropriate lock already. In principle though it might leave us vulnerable to working with a relation that we hold no lock on, and in any case if the executor isn't depending on previously-taken locks otherwise then it should not do so for ROW_MARK_COPY relations.

  Noted by Etsuro Fujita. Back-patch to all active versions, since the inconsistency has been there a long time. (It's almost certainly irrelevant in 9.0, since that predates foreign tables, but the code's still wrong on its own terms.)

* Remove workaround for ancient incompatibility between readline and libedit. (Tom Lane, 2015-03-14)
  GNU readline defines the return value of write_history() as "zero if OK, else an errno code". libedit's version of that function used to have a different definition (to wit, "-1 if error, else the number of lines written to the file"). We tried to work around that by checking whether errno had become nonzero, but this method has never been kosher according to the published API of either library. It's reportedly completely broken in recent Ubuntu releases: psql bleats about "No such file or directory" when saving ~/.psql_history, even though the write worked fine.

  However, libedit has been following the readline definition since somewhere around 2006, so it seems all right to finally break compatibility with ancient libedit releases and trust that the return value is what readline specifies. (I'm not sure when the various Linux distributions incorporated this fix, but I did find that OS X has been shipping fixed versions since 10.5/Leopard.)

  If anyone is still using such an ancient libedit, they will find that psql complains it can't write ~/.psql_history at exit, even when the file was written correctly. This is no worse than the behavior we're fixing for current releases.

  Back-patch to all supported branches.

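  A small sketch of the convention the fix relies on (not psql's actual code; link with -lreadline or a modern libedit): trust write_history()'s documented return value, zero on success or otherwise an errno code, instead of peeking at errno afterwards, which was never part of either library's published API.

      #include <stdio.h>
      #include <string.h>
      #include <readline/history.h>

      static void
      save_history(const char *fname)
      {
          int rc = write_history(fname);

          if (rc != 0)
              fprintf(stderr, "could not save history to \"%s\": %s\n",
                      fname, strerror(rc));
      }
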
* Ensure tableoid reads correctly in EvalPlanQual-manufactured tuples. (Tom Lane, 2015-03-12)
  The ROW_MARK_COPY path in EvalPlanQualFetchRowMarks() was just setting tableoid to InvalidOid, I think on the assumption that the referenced RTE must be a subquery or other case without a meaningful OID. However, foreign tables also use this code path, and they do have meaningful table OIDs; so failure to set the tuple field can lead to user-visible misbehavior. Fix that by fetching the appropriate OID from the range table.

  There's still an issue about whether CTID can ever have a meaningful value in this case; at least with postgres_fdw foreign tables, it does. But that is a different problem that seems to require a significantly different patch --- it's debatable whether postgres_fdw really wants to use this code path at all.

  Simplified version of a patch by Etsuro Fujita, who also noted the problem to begin with. The issue can be demonstrated in all versions having FDWs, so back-patch to 9.1.

* Fix documentation for libpq's PQfn(). (Tom Lane, 2015-03-08)
  The SGML docs claimed that 1-byte integers could be sent or received with the "isint" options, but no such behavior has ever been implemented in pqGetInt() or pqPutInt(). The in-code documentation header for PQfn() was even less in tune with reality, and the code itself used parameter names matching neither the SGML docs nor its libpq-fe.h declaration. Do a bit of additional wordsmithing on the SGML docs while at it.

  Since the business about 1-byte integers is a clear documentation bug, back-patch to all supported branches.

* Fix user mapping object description (Alvaro Herrera, 2015-03-05)
  We were using "user mapping for user XYZ" as description for user mappings, but that's ambiguous because users can have mappings on multiple foreign servers; therefore change it to "for user XYZ on server UVW" instead. Object identities for user mappings are also updated in the same way, in branches 9.3 and above.

  The incomplete description string was introduced together with the whole SQL/MED infrastructure by commit cae565e503 of 8.4 era, so backpatch all the way back.

* Fix pg_dump handling of extension config tables (Stephen Frost, 2015-03-02)
  Since 9.1, we've provided extensions with a way to denote "configuration" tables: tables created by an extension which the user may modify. By marking these as "configuration" tables, the extension is asking for the data in these tables to be pg_dump'd (tables which are not marked in this way are assumed to be entirely handled during CREATE EXTENSION and are not included at all in a pg_dump).

  Unfortunately, pg_dump neglected to consider foreign key relationships between extension configuration tables and therefore could end up trying to reload the data in an order which would cause FK violations.

  This patch teaches pg_dump about these dependencies, so that the data dumped out is done so in the best order possible. Note that there's no way to handle circular dependencies, but those have yet to be seen in the wild.

  The release notes for this should include a caution to users that existing pg_dump-based backups may be invalid due to this issue. The data is all there, but restoring from it will require extracting the data for the configuration tables and then loading them in the correct order by hand.

  Discussed initially back in bug #6738, more recently brought up by Gilles Darold, who provided an initial patch which was further reworked by Michael Paquier. Further modifications and documentation updates by me.

  Back-patch to 9.1 where we added the concept of extension configuration tables.

* Unlink static libraries before rebuilding them. (Noah Misch, 2015-03-01)
  When the library already exists in the build directory, "ar" preserves members not named on its command line. This mattered when, for example, a "configure" rerun dropped a file from $(LIBOBJS). libpgport carried the obsolete member until "make clean". Back-patch to 9.0 (all supported versions).

* Reconsider when to wait for WAL flushes/syncrep during commit. (Andres Freund, 2015-02-26)
  Up to now RecordTransactionCommit() waited for WAL to be flushed (if synchronous_commit != off) and to be synchronously replicated (if enabled), even if a transaction did not have a xid assigned. The primary reason for that is that a sequence's nextval() does not assign a xid, but is worthwhile to wait for on commit.

  This can be problematic because sometimes read only transactions do write WAL, e.g. HOT page prune records. That then could lead to read only transactions having to wait during commit. Not something people expect in a read only transaction.

  This led to such strange symptoms as backends being seemingly stuck during connection establishment when all synchronous replicas are down. Especially annoying when said stuck connection is the standby trying to reconnect to allow syncrep again...

  This behavior also is involved in a rather complicated <= 9.4 bug where the transaction started by catchup interrupt processing waited for syncrep using latches, but didn't get the wakeup because it was already running inside the same overloaded signal handler. The fix here doesn't properly solve that issue, merely papers over the problems. In 9.5 catchup interrupts aren't processed out of signal handlers anymore.

  To fix all this, make nextval() acquire a top level xid, and only wait for transaction commit if a transaction both acquired a xid and emitted WAL records. If only a xid has been assigned we don't uselessly want to wait just because of writes to temporary/unlogged tables; if only WAL has been written we don't want to wait just because of HOT prunes.

  The xid assignment in nextval() is unlikely to cause overhead in real-world workloads. For one it only happens every SEQ_LOG_VALS/32 values anyway, for another only usage of nextval() without using the result in an insert or similar is affected.

  Discussion: 20150223165359.GF30784@awork2.anarazel.de, 369698E947874884A77849D8FE3680C2@maumau, 5CF4ABBA67674088B3941894E22A0D25@maumau

  Per complaint from maumau and Thom Brown

  Backpatch all the way back; 9.0 doesn't have syncrep, but it seems better to have consistent behavior across all maintained branches.

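  A rough standalone sketch of the commit-time rule described above (hypothetical names; the real state lives in xact.c, not here): wait for the WAL flush and synchronous replication only when the transaction both acquired a top-level xid and wrote WAL.

      #include <stdbool.h>
      #include <stdio.h>

      /* Stand-ins for per-transaction state tracked by the backend. */
      static bool assigned_xid;   /* did this transaction acquire a top-level xid? */
      static bool wrote_wal;      /* did it emit any WAL records (e.g. a HOT prune)? */

      static void
      commit_transaction(void)
      {
          if (assigned_xid && wrote_wal)
              printf("flush WAL and wait for synchronous replication\n");
          else
              printf("commit without waiting\n");   /* read-only or temp/unlogged-only work */
      }
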