path: root/src
Commit message (Author, Date)
* xlogreader.c: Fix report_invalid_record translatability flag (Alvaro Herrera, 2015-01-09)
For some reason I overlooked specifying in GETTEXT_TRIGGERS which argument should be read by gettext, in 7fcbf6a405ffc12a4546a25b98592ee6733783fc. This will drop the translation percentages for the backend all the way back to 9.3 ...

Problem reported by Heikki.
* On Darwin, detect and report a multithreaded postmaster. (Noah Misch, 2015-01-07)
Darwin --enable-nls builds use a substitute setlocale() that may start a thread. Buildfarm member orangutan experienced BackendList corruption on account of different postmaster threads executing signal handlers simultaneously. Furthermore, a multithreaded postmaster risks undefined behavior from sigprocmask() and fork(). Emit LOG messages about the problem and its workaround.

Back-patch to 9.0 (all supported versions).
* Always set the six locale category environment variables in main(). (Noah Misch, 2015-01-07)
Typical server invocations already achieved that. Invalid locale settings in the initial postmaster environment interfered, as could malloc() failure. Setting "LC_MESSAGES=pt_BR.utf8 LC_ALL=invalid" in the postmaster environment will now choose C-locale messages, not Brazilian Portuguese messages. Most localized programs, including all PostgreSQL frontend executables, do likewise. Users are unlikely to observe changes involving locale categories other than LC_MESSAGES. CheckMyDatabase() ensures that we successfully set LC_COLLATE and LC_CTYPE; main() sets the remaining three categories to locale "C", which almost cannot fail.

Back-patch to 9.0 (all supported versions).
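A minimal sketch of the category handling described above, using plain setlocale() calls; the actual main.c code differs in detail:

    #include <locale.h>

    static void
    init_locale_categories(void)
    {
        /* LC_COLLATE, LC_CTYPE and LC_MESSAGES come from the environment;
         * CheckMyDatabase() later verifies LC_COLLATE and LC_CTYPE. */
        setlocale(LC_COLLATE, "");
        setlocale(LC_CTYPE, "");
    #ifdef LC_MESSAGES
        setlocale(LC_MESSAGES, "");
    #endif
        /* The remaining three categories are pinned to "C",
         * which almost cannot fail. */
        setlocale(LC_MONETARY, "C");
        setlocale(LC_NUMERIC, "C");
        setlocale(LC_TIME, "C");
    }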
* Reject ANALYZE commands during VACUUM FULL or another ANALYZE. (Noah Misch, 2015-01-07)
vacuum()'s static variable handling makes it non-reentrant; an ensuing null pointer dereference crashed the backend.

Back-patch to 9.0 (all supported versions).
* Improve relcache invalidation handling of currently invisible relations. (Andres Freund, 2015-01-07)
The corner case where a relcache invalidation tried to rebuild the entry for a referenced relation but couldn't find it in the catalog wasn't correct.

The code tried to RelationCacheDelete/RelationDestroyRelation the entry. That didn't work when assertions are enabled because the latter contains an assertion ensuring the refcount is zero. It's also more generally a bad idea, because by virtue of being referenced somebody might actually look at the entry, which is possible if the error is trapped and handled via a subtransaction abort.

Instead just error out, without deleting the entry. As the entry is marked invalid, the worst that can happen is that the invalid (and at some point unused) entry lingers in the relcache.

Discussion: 22459.1418656530@sss.pgh.pa.us

There should be no way to hit this case before 9.4, where logical decoding introduced a bug that can reach it. But since the code for handling the corner case is there, it should do something halfway sane, so backpatch all the way back. The logical decoding bug will be handled in a separate commit.
* Fix thinko in plpython error message (Alvaro Herrera, 2015-01-06)
* Fix broken pg_dump code for dumping comments on event triggers. (Tom Lane, 2015-01-05)
This never worked, I think. Per report from Marc Munro.

In passing, fix funny spacing in the COMMENT ON command as a result of excess space in the "label" string.
* Fix thinko in lock mode enum (Alvaro Herrera, 2015-01-04)
Commit 0e5680f4737a9c6aa94aa9e77543e5de60411322 contained a thinko mixing LOCKMODE with LockTupleMode. This caused misbehavior in the case where a tuple is marked with a multixact with at most a FOR SHARE lock, and another transaction tries to acquire a FOR NO KEY EXCLUSIVE lock; this case should block but doesn't.

Include a new isolation tester spec file to explicitly try all the tuple lock combinations; without the fix it shows the problem:

    starting permutation: s1_begin s1_lcksvpt s1_tuplock2 s2_tuplock3 s1_commit
    step s1_begin: BEGIN;
    step s1_lcksvpt: SELECT * FROM multixact_conflict FOR KEY SHARE; SAVEPOINT foo;
    a
    1
    step s1_tuplock2: SELECT * FROM multixact_conflict FOR SHARE;
    a
    1
    step s2_tuplock3: SELECT * FROM multixact_conflict FOR NO KEY UPDATE;
    a
    1
    step s1_commit: COMMIT;

With the fixed code, step s2_tuplock3 blocks until session 1 commits, which is the correct behavior. All other cases behave correctly.

Backpatch to 9.3, like the commit that introduced the problem.
* Fix inconsequential fd leak in the new mark_file_as_archived() function. (Andres Freund, 2015-01-04)
As every error in mark_file_as_archived() will lead to a failure of pg_basebackup, the FD leak couldn't ever lead to a real problem. It seems better to fix the leak anyway, though, rather than silence Coverity, as the usage of the function might get extended or copied at some point in the future.

Pointed out by Coverity.

Backpatch to 9.2, like the relevant part of the previous patch.
* Prevent WAL files created by pg_basebackup -x/X from being archived again. (Andres Freund, 2015-01-03)
WAL (and timeline history) files created by pg_basebackup did not maintain the new base backup's archive status. That's currently not a problem if the new node is used as a standby - but if that node is promoted, all still-existing files can get archived again. With a high wal_keep_segments setting that can happen a significant time later - which is quite confusing.

Change both the backend (for the -x/-X fetch case) and pg_basebackup (for -X stream) itself to always mark WAL/timeline files included in the base backup as .done. That's in line with walreceiver.c doing so.

The verbosity of the pg_basebackup changes shows pretty clearly that it needs some refactoring, but that'd result in changes that wouldn't be backpatchable.

Backpatch to 9.1 where pg_basebackup was introduced.

Discussion: 20141205002854.GE21964@awork2.anarazel.de
* Add pg_string_endswith as the start of a string helper library in src/common. (Andres Freund, 2015-01-03)
Backpatch to 9.3, where src/common was introduced, because a bugfix that needs to be backpatched requires the function. Earlier branches will have to duplicate the code.
* Improve consistency of parsing of psql's magic variables. (Tom Lane, 2014-12-31)
For simple boolean variables such as ON_ERROR_STOP, psql has for a long time recognized variant spellings of "on" and "off" (such as "1"/"0"), and it also made a point of warning you if you'd misspelled the setting. But these conveniences did not exist for other keyword-valued variables. In particular, though ECHO_HIDDEN and ON_ERROR_ROLLBACK include "on" and "off" as possible values, none of the alternative spellings for those were recognized; and to make matters worse the code would just silently assume "on" was meant for any unrecognized spelling. Several people have reported getting bitten by this, so let's fix it.

In detail, this patch:

* Allows all spellings recognized by ParseVariableBool() for ECHO_HIDDEN and ON_ERROR_ROLLBACK.
* Reports a warning for unrecognized values for COMP_KEYWORD_CASE, ECHO, ECHO_HIDDEN, HISTCONTROL, ON_ERROR_ROLLBACK, and VERBOSITY.
* Recognizes all values for all these variables case-insensitively; previously there was a mishmash of case-sensitive and case-insensitive behaviors.

Back-patch to all supported branches. There is a small risk of breaking existing scripts that were accidentally failing to malfunction; but the consensus is that the chance of detecting real problems and preventing future mistakes outweighs this.
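A rough sketch of the spelling-tolerant parsing described above, assuming a simple strcasecmp-based helper; psql's actual ParseVariableBool() and its warning text differ:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>
    #include <strings.h>    /* strcasecmp */

    static bool
    parse_variable_bool_sketch(const char *value, const char *name, bool *result)
    {
        if (strcasecmp(value, "on") == 0 || strcmp(value, "1") == 0 ||
            strcasecmp(value, "true") == 0 || strcasecmp(value, "yes") == 0)
            *result = true;
        else if (strcasecmp(value, "off") == 0 || strcmp(value, "0") == 0 ||
                 strcasecmp(value, "false") == 0 || strcasecmp(value, "no") == 0)
            *result = false;
        else
        {
            /* warn instead of silently assuming "on" */
            fprintf(stderr, "unrecognized value \"%s\" for \"%s\"\n", value, name);
            return false;
        }
        return true;
    }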
* Backpatch variable renaming in formatting.c (Bruce Momjian, 2014-12-29)
Backpatch a9c22d1480aa8e6d97a000292d05ef2b31bbde4e to make future backpatching easier.

Backpatch through 9.0
* Grab heavyweight tuple lock only before sleeping (Alvaro Herrera, 2014-12-26)
We were trying to acquire the lock even when we were subsequently not sleeping in some other transaction, which opens us up unnecessarily to deadlocks. In particular, this is troublesome if an update tries to lock an updated version of a tuple and finds itself doing EvalPlanQual update chain walking; more than two sessions doing this concurrently will find themselves sleeping on each other because the HW tuple lock acquisition in heap_lock_tuple called from EvalPlanQualFetch races with the same tuple lock being acquired in heap_update -- one of these sessions sleeps on the other one to finish while holding the tuple lock, and the other one sleeps on the tuple lock.

Per trouble report from Andrew Sackville-West in http://www.postgresql.org/message-id/20140731233051.GN17765@andrew-ThinkPad-X230

His scenario can be simplified down to a relatively simple isolationtester spec file which I don't include in this commit; the reason is that the current isolationtester is not able to deal with more than one blocked session concurrently and it blocks instead of raising the expected deadlock. In the future, if we improve isolationtester, it would be good to include the spec file in the isolation schedule. I posted it in http://www.postgresql.org/message-id/20141212205254.GC1768@alvh.no-ip.org

Hat tip to Mark Kirkwood, who helped diagnose the trouble.
* Have config_sspi_auth() permit IPv6 localhost connections. (Noah Misch, 2014-12-25)
Windows versions later than Windows Server 2003 map "localhost" to ::1. Account for that in the generated pg_hba.conf, fixing another oversight in commit f6dc6dd5ba54d52c0733aaafc50da2fbaeabb8b0. Back-patch to 9.0, like that commit.

David Rowley and Noah Misch
* Add CST (China Standard Time) to our lists of timezone abbreviations. (Tom Lane, 2014-12-24)
For some reason this seems to have been missed when the lists in src/timezone/tznames/ were first constructed. We can't put it in Default because of the conflict with US CST, but we should certainly list it among the alternative entries in Asia.txt. (I checked for other oversights, but all the other abbreviations that are in current use according to the IANA files seem to be accounted for.)

Noted while responding to bug #12326.
* Fix timestamp in end-of-recovery WAL records. (Heikki Linnakangas, 2014-12-19)
We used time(NULL) to set a TimestampTz field, which gave bogus results. Noticed while looking at pg_xlogdump output.

Backpatch to 9.3 and above, where the fast promotion was introduced.
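A minimal sketch of the distinction, assuming the xl_end_of_recovery record's end_time field as declared in xlog.h of that era:

    /* Wrong: time() returns Unix seconds, not a TimestampTz
     * (microseconds since 2000-01-01). */
    xlrec.end_time = time(NULL);

    /* Right: use the backend's timestamp API. */
    xlrec.end_time = GetCurrentTimestamp();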
* Prevent potentially hazardous compiler/cpu reordering during lwlock release. (Andres Freund, 2014-12-19)
In LWLockRelease() (and in 9.4+ LWLockUpdateVar()) we release enqueued waiters using PGSemaphoreUnlock(). As there are other sources of such unlocks, backends only wake up if MyProc->lwWaiting is set to false; which is only done in the aforementioned functions.

Before this commit there were dangers because the store to lwWaiting could become visible before the store to lwWaitLink. This could happen both due to compiler reordering (on most compilers) and on some platforms due to the CPU reordering stores.

The possible consequence of this is that a backend stops waiting before lwWaitLink is set to NULL. If that backend then tries to acquire another lock and has to wait there, the list could become corrupted once the lwWaitLink store is finally performed.

Add a write memory barrier to prevent that issue. Unfortunately barrier support has only been added in 9.2. Given that the issue has not knowingly been observed in practice, it seems sufficient to prohibit compiler reordering using volatile for 9.0 and 9.1. Actual problems due to compiler reordering are more likely anyway.

Discussion: 20140210134625.GA15246@awork2.anarazel.de
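A minimal sketch of the ordering fix, simplified from the description above (the real lwlock.c release path has more bookkeeping around it):

    /* Release one enqueued waiter. The barrier guarantees the waiter
     * cannot observe lwWaiting == false while its lwWaitLink is still
     * about to be overwritten with NULL. */
    proc->lwWaitLink = NULL;
    pg_write_barrier();
    proc->lwWaiting = false;
    PGSemaphoreUnlock(&proc->sem);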
* Recognize Makefile line continuations in fetchRegressOpts(). (Noah Misch, 2014-12-18)
Back-patch to 9.0 (all supported versions). This is mere future-proofing in the context of the master branch, but commit f6dc6dd5ba54d52c0733aaafc50da2fbaeabb8b0 requires it of older branches.
* Fix (re-)starting from a basebackup taken off a standby after a failure. (Andres Freund, 2014-12-18)
When starting up from a basebackup taken off a standby, extra logic has to be applied to compute the point where the data directory is consistent. Normal base backups use a WAL record for that purpose, but that isn't possible on a standby.

That logic had an error check ensuring that the cluster's control file indicates being in recovery. Unfortunately that check was too strict, disregarding the fact that the control file could also indicate that the cluster was shut down while in recovery.

That's possible when a cluster starting from a basebackup is shut down before the backup label has been removed. When everything goes well that's a short window, but when either restore_command or primary_conninfo isn't configured correctly the window can get much wider. That's because in between reading and unlinking the label we restore the last checkpoint from WAL, which can need additional WAL.

To fix, simply also allow starting when the control file indicates "shutdown in recovery". There are nicer fixes imaginable, but they'd be more invasive.

Backpatch to 9.2 where support for taking basebackups from standbys was added.
* Lock down regression testing temporary clusters on Windows. (Noah Misch, 2014-12-17)
Use SSPI authentication to allow connections exclusively from the OS user that launched the test suite. This closes on Windows the vulnerability that commit be76a6d39e2832d4b88c0e1cc381aa44a7f86881 closed on other platforms. Users of "make installcheck" or custom test harnesses can run "pg_regress --config-auth=DATADIR" to activate the same authentication configuration that "make check" would use. Back-patch to 9.0 (all supported versions).

Security: CVE-2014-0067
* Fix another poorly worded error message. (Tom Lane, 2014-12-17)
Spotted by Álvaro Herrera.
* Fix off-by-one loop count in MapArrayTypeName, and get rid of static array. (Tom Lane, 2014-12-16)
MapArrayTypeName would copy up to NAMEDATALEN-1 bytes of the base type name, which of course is wrong: after prepending '_' there is only room for NAMEDATALEN-2 bytes. Aside from being the wrong result, this case would lead to overrunning the statically allocated work buffer. This would be a security bug if the function were ever used outside bootstrap mode, but it isn't, at least not in any currently supported branches.

Aside from fixing the off-by-one loop logic, this patch gets rid of the static work buffer by having MapArrayTypeName pstrdup its result; the sole caller was already doing that, so this just requires moving the pstrdup call. This saves a few bytes but mainly it makes the API a lot cleaner.

Back-patch on the off chance that there is some third-party code using MapArrayTypeName with less-secure input. Pushing pstrdup into the function should not cause any serious problems for such hypothetical code; at worst there might be a short term memory leak.

Per Coverity scanning.
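A hedged sketch of the corrected copy logic; the function and variable names are illustrative, not the bootstrap code verbatim:

    char *
    map_array_type_name(const char *base)
    {
        char    buf[NAMEDATALEN];
        int     i, j;

        buf[0] = '_';
        /* Leave room for the leading '_' and the trailing NUL:
         * at most NAMEDATALEN - 2 bytes of the base name fit. */
        for (i = 0, j = 1; base[i] != '\0' && j < NAMEDATALEN - 1; i++, j++)
            buf[j] = base[i];
        buf[j] = '\0';

        /* Return palloc'd memory instead of a pointer to a static buffer. */
        return pstrdup(buf);
    }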
* Misc comment typo fixes. (Heikki Linnakangas, 2014-12-16)
Backpatch the applicable parts, just to make backpatching future patches easier.
* Fix planning of SELECT FOR UPDATE on child table with partial index. (Tom Lane, 2014-12-11)
Ordinarily we can omit checking of a WHERE condition that matches a partial index's condition, when we are using an indexscan on that partial index. However, in SELECT FOR UPDATE we must include the "redundant" filter condition in the plan so that it gets checked properly in an EvalPlanQual recheck. The planner got this mostly right, but improperly omitted the filter condition if the index in question was on an inheritance child table. In READ COMMITTED mode, this could result in incorrectly returning just-updated rows that no longer satisfy the filter condition.

The cause of the error is using get_parse_rowmark() when get_plan_rowmark() is what should be used during planning. In 9.3 and up, also fix the same mistake in contrib/postgres_fdw. It's currently harmless there (for lack of inheritance support) but wrong is wrong, and the incorrect code might get copied to someplace where it's more significant.

Report and fix by Kyotaro Horiguchi. Back-patch to all supported branches.
* Fix corner case where SELECT FOR UPDATE could return a row twice. (Tom Lane, 2014-12-11)
In READ COMMITTED mode, if a SELECT FOR UPDATE discovers it has to redo WHERE-clause checking on rows that have been updated since the SELECT's snapshot, it invokes EvalPlanQual processing to do that. If this first occurs within a non-first child table of an inheritance tree, the previous coding could accidentally re-return a matching row from an earlier, already-scanned child table. (And, to add insult to injury, I think this could make it miss returning a row that should have been returned, if the updated row that this happens on should still have passed the WHERE qual.)

Per report from Kyotaro Horiguchi; the added isolation test is based on his test case.

This has been broken for quite awhile, so back-patch to all supported branches.
* Fix assorted confusion between Oid and int32. (Tom Lane, 2014-12-11)
In passing, also make some debugging elog's in pgstat.c a bit more consistently worded.

Back-patch as far as applicable (9.3 or 9.4; none of these mistakes are really old).

Mark Dilger identified and patched the type violations; the message rewordings are mine.
* Give a proper error message if initdb password file is empty. (Heikki Linnakangas, 2014-12-05)
Used to say just "could not read password from file "...": Success", which isn't very informative.

Mats Erik Andersson. Backpatch to all supported versions.
* Fix JSON aggregates to work properly when final function is re-executed. (Tom Lane, 2014-12-02)
Davide S. reported that json_agg() sometimes produced multiple trailing right brackets. This turns out to be because json_agg_finalfn() attaches the final right bracket, and was doing so by modifying the aggregate state in-place. That's verboten, though unfortunately it seems there's no way for nodeAgg.c to check for such mistakes.

Fix that back to 9.3 where the broken code was introduced. In 9.4 and HEAD, likewise fix json_object_agg(), which had copied the erroneous logic.

Make some cosmetic cleanups as well.
* Guard against bad "dscale" values in numeric_recv(). (Tom Lane, 2014-12-01)
We were not checking to see if the supplied dscale was valid for the given digit array when receiving binary-format numeric values. While dscale can validly be more than the number of nonzero fractional digits, it shouldn't be less; that case causes fractional digits to be hidden on display even though they're there and participate in arithmetic.

Bug #12053 from Tommaso Sala indicates that there's at least one broken client library out there that sometimes supplies an incorrect dscale value, leading to strange behavior. This suggests that simply throwing an error might not be the best response; it would lead to failures in applications that might seem to be working fine today. What seems the least risky fix is to truncate away any digits that would be hidden by dscale. This preserves the existing behavior in terms of what will be printed for the transmitted value, while preventing subsequent arithmetic from producing results inconsistent with that.

In passing, throw a specific error for the case of dscale being outside the range that will fit into a numeric's header. Before you got "value overflows numeric format", which is a bit misleading.

Back-patch to all supported branches.
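A hedged sketch of the header-range check mentioned above; the macro name and message text are illustrative rather than quoted from numeric.c:

    /* dscale must fit in the numeric header's display-scale field */
    if (dscale < 0 || dscale > NUMERIC_DSCALE_MASK)
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_BINARY_REPRESENTATION),
                 errmsg("invalid scale in external \"numeric\" value")));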
* Fix backpatching error in commit 55c88079 (Andrew Dunstan, 2014-12-01)
* Fix hstore_to_json_loose's detection of valid JSON number values. (Andrew Dunstan, 2014-12-01)
We expose a function IsValidJsonNumber that internally calls the lexer for json numbers. That allows us to use the same test everywhere, instead of inventing a broken test for hstore conversions. The new function is also used in datum_to_json, replacing the code that is now moved to the new function.

Backpatch to 9.3 where hstore_to_json_loose was introduced.
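A rough usage sketch of the exposed helper, assuming a string value src and a StringInfo dst; simplified from the contrib code:

    if (IsValidJsonNumber(src, strlen(src)))
        appendStringInfoString(dst, src);   /* emit as a bare JSON number */
    else
        escape_json(dst, src);              /* otherwise emit a quoted JSON string */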
* Fix minor bugs in commit 30bf4689a96cd283af33edcdd6b7210df3f20cd8 et al. (Tom Lane, 2014-11-30)
Coverity complained that the "else" added to fillPGconn() was unreachable, which it was. Remove the dead code. In passing, rearrange the tests so as not to bother trying to fetch values for options that can't be assigned.

Pre-9.3 did not have that issue, but it did have a "return" that should be "goto oom_error" to ensure that a suitable error message gets filled in.
* Update transaction README for persistent multixacts (Alvaro Herrera, 2014-11-28)
Multixacts are now maintained during recovery, but the README didn't get the memo. Backpatch to 9.3, where the divergence was introduced.
* Make \watch respect the user's \pset null setting. (Fujii Masao, 2014-11-28)
Previously \watch always ignored the user's \pset null setting. The \pset null setting should be ignored for \d and similar queries: for those, the code can reasonably have an opinion about what the presentation should be like, since it knows what SQL query it's issuing. This argument surely doesn't apply to \watch, so this commit makes \watch use the user's \pset null setting.

Back-patch to 9.3 where \watch was added.
* Mark response messages for translation in pg_isready. (Fujii Masao, 2014-11-28)
Back-patch to 9.3 where pg_isready was added.

Mats Erik Andersson
* Allow "dbname" from connection string to be overridden in PQconnectDBParams (Heikki Linnakangas, 2014-11-25)
If the "dbname" attribute in PQconnectDBParams contained a connection string or URI (and expand_dbname = TRUE), the database name from the connection string could not be overridden by a subsequent "dbname" keyword in the array. That was not intentional; all other options can be overridden. Furthermore, any subsequent "dbname" caused the connection string from the first dbname value to be processed again, overriding any values for the same options that were given between the connection string and the second dbname option.

In passing, clarify in the docs that only the first dbname option in the array is parsed as a connection string.

Alex Shulgin. Backpatch to all supported versions.
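A small usage sketch of the behavior being fixed, with hypothetical host and database names; with expand_dbname set, only the first "dbname" entry is treated as a connection string, and the later one simply overrides the database name:

    #include <libpq-fe.h>

    int
    main(void)
    {
        const char *const keywords[] = {"dbname", "application_name", "dbname", NULL};
        const char *const values[]   = {"host=db.example.com dbname=app", "demo", "otherdb", NULL};

        /* With the fix, this connects to "otherdb" on db.example.com;
         * previously the later "dbname" could not override the first one. */
        PGconn *conn = PQconnectdbParams(keywords, values, /* expand_dbname */ 1);

        PQfinish(conn);
        return 0;
    }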
* Check return value of strdup() in libpq connection option parsing. (Heikki Linnakangas, 2014-11-25)
An out-of-memory failure in most of these would lead to strange behavior, like connecting to a different database than intended, but some would lead to an outright segfault.

Alex Shulgin and me. Backpatch to all supported versions.
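A minimal sketch of the pattern being enforced; the field and buffer names are illustrative, not fe-connect.c verbatim:

    option->val = strdup(value);
    if (!option->val)
    {
        printfPQExpBuffer(errorMessage, "out of memory\n");
        return false;       /* caller frees whatever was already allocated */
    }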
* Fix mishandling of system columns in FDW queries. (Tom Lane, 2014-11-22)
postgres_fdw would send query conditions involving system columns to the remote server, even though it makes no effort to ensure that system columns other than CTID match what the remote side thinks. tableoid, in particular, probably won't match and might have some use in queries. Hence, prevent sending conditions that include non-CTID system columns.

Also, create_foreignscan_plan neglected to check local restriction conditions while determining whether to set fsSystemCol for a foreign scan plan node. This again would bollix the results for queries that test a foreign table's tableoid.

Back-patch the first fix to 9.3 where postgres_fdw was introduced. Back-patch the second to 9.2. The code is probably broken in 9.1 as well, but the patch doesn't apply cleanly there; given the weak state of support for FDWs in 9.1, it doesn't seem worth fixing.

Etsuro Fujita, reviewed by Ashutosh Bapat, and somewhat modified by me
* Don't require bleeding-edge timezone data in timestamptz regression test. (Tom Lane, 2014-11-18)
The regression test cases added in commits b2cbced9e et al depended in part on the Russian timezone offset changes of Oct 2014. While this is of no particular concern for a default Postgres build, it was possible for a build using --with-system-tzdata to fail the tests if the system tzdata database wasn't au courant. Bjorn Munch and Christoph Berg both complained about this while packaging 9.4rc1, so we probably shouldn't insist on the system tzdata being up-to-date. Instead, make an equivalent test using a zone change that occurred in Venezuela in 2007.

With this patch, the regression tests should pass using any tzdata set from 2012 or later. (I can't muster much sympathy for somebody using --with-system-tzdata on a machine whose system tzdata is more than three years out-of-date.)
* Fix some bogus direct uses of realloc(). (Tom Lane, 2014-11-18)
pg_dump/parallel.c was using realloc() directly with no error check. While the odds of an actual failure here seem pretty low, Coverity complains about it, so fix by using pg_realloc() instead.

While looking for other instances, I noticed a couple of places in psql that hadn't gotten the memo about the availability of pg_realloc. These aren't bugs, since they did have error checks, but verbosely inconsistent code is not a good thing.

Back-patch as far as 9.3. 9.2 did not have pg_dump/parallel.c, nor did it have pg_realloc available in all frontend code.
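A small sketch of the substitution described above, assuming a generic buffer variable; pg_realloc() reports the failure and exits, so no separate check is needed:

    /* Unchecked: on failure realloc() returns NULL and the old pointer leaks. */
    buf = realloc(buf, newsize);

    /* Checked: pg_realloc() exits with an "out of memory" message on failure. */
    buf = pg_realloc(buf, newsize);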
* Update time zone data files to tzdata release 2014j. (Tom Lane, 2014-11-17)
DST law changes in the Turks & Caicos Islands (America/Grand_Turk) and in Fiji. New zone Pacific/Bougainville for portions of Papua New Guinea. Historical changes for Korea and Vietnam.
* Fix initdb --sync-only to also sync tablespaces. (Andres Freund, 2014-11-15)
630cd14426dc added initdb --sync-only, for use by pg_upgrade, by just exposing the existing fsync code. That's wrong, because initdb so far had absolutely no reason to deal with tablespaces.

Fix --sync-only by additionally explicitly syncing each of the tablespaces.

Backpatch to 9.3 where --sync-only was introduced.

Abhijit Menon-Sen and Andres Freund
* Sync unlogged relations to disk after they have been reset. (Andres Freund, 2014-11-15)
Unlogged relations are only reset when performing an unclean restart. That means they have to be synced to disk during clean shutdowns. During normal processing that's achieved by registering a buffer's file to be fsynced at the next checkpoint when flushed. But ResetUnloggedRelations() doesn't go through the buffer manager, so nothing will force reset relations to disk before the next shutdown checkpoint.

So just make ResetUnloggedRelations() fsync the newly created main forks to disk.

Discussion: 20140912112246.GA4984@alap3.anarazel.de

Backpatch to 9.1 where unlogged tables were introduced.

Abhijit Menon-Sen and Andres Freund
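A minimal sketch of the idea, assuming the copy_file/fsync_fname helpers exposed by the related backport; the path variables are illustrative and the surrounding directory-scan loop is omitted:

    /* Recreate the main fork from the init fork, then force it to disk so
     * it survives a crash before the next checkpoint. */
    copy_file(initforkpath, mainforkpath);
    fsync_fname(mainforkpath, false);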
* Ensure unlogged tables are reset even if crash recovery errors out. (Andres Freund, 2014-11-15)
Unlogged relations are reset at the end of crash recovery as they're only synced to disk during a proper shutdown. Unfortunately that and later steps can fail, e.g. due to running out of space. This reset was, up to now, performed after marking the database as having finished crash recovery successfully. As out-of-space errors trigger a crash restart, that could lead to the situation that not all unlogged relations are reset.

Once that happened, usage of unlogged relations could yield errors like "could not open file "...": No such file or directory". Luckily clusters that show the problem can be fixed by performing an immediate shutdown, and starting the database again.

To fix, just call ResetUnloggedRelations(UNLOGGED_RELATION_INIT) earlier, before marking the database as having successfully recovered.

Discussion: 20140912112246.GA4984@alap3.anarazel.de

Backpatch to 9.1 where unlogged tables were introduced.

Abhijit Menon-Sen and Andres Freund
* Backport "Expose fsync_fname as a public API". (Andres Freund, 2014-11-15)
Backport commit cc52d5b33ff5df29de57dcae9322214cfe9c8464 back to 9.1 to allow backpatching some unlogged table fixes that use fsync_fname.
* Allow interrupting GetMultiXactIdMembers (Alvaro Herrera, 2014-11-14)
This function has a loop which can lead to uninterruptible process "stalls" (actually infinite loops) when some bugs are triggered. Avoid that unpleasant situation by adding a check for interrupts in a place that shouldn't degrade performance in the normal case.

Backpatch to 9.3. Older branches have an identical loop here, but the aforementioned bugs are only a problem starting in 9.3 so there doesn't seem to be any point in backpatching any further.
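A minimal sketch of the kind of change described, assuming the retry loop in GetMultiXactIdMembers; the exact placement in the loop is illustrative:

    for (;;)
    {
        /* Let a pending cancel or terminate request break an otherwise
         * uninterruptible retry loop. */
        CHECK_FOR_INTERRUPTS();

        /* ... try to read the members; retry if they are not yet available ... */
    }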
* Fix pg_dumpall to restore its ability to dump from ancient servers. (Tom Lane, 2014-11-13)
Fix breakage induced by commits d8d3d2a4f37f6df5d0118b7f5211978cca22091a and 463f2625a5fb183b6a8925ccde98bb3889f921d9: pg_dumpall has crashed when attempting to dump from pre-8.1 servers since then, due to faulty construction of the query used for dumping roles from older servers. The query was erroneous as of the earlier commit, but it wasn't exposed unless you tried to use --binary-upgrade, which you presumably wouldn't with a pre-8.1 server. However commit 463f2625a made it fail always.

In HEAD, also fix additional breakage induced in the same query by commit 491c029dbc4206779cf659aa0ff986af7831d2ff, which evidently wasn't tested against pre-8.1 servers either.

The bug is only latent in 9.1 because 463f2625a hadn't landed yet, but it seems best to back-patch all branches containing the faulty query.

Gilles Darold
* Fix race condition between hot standby and restoring a full-page image. (Heikki Linnakangas, 2014-11-13)
There was a window in RestoreBackupBlock where a page would be zeroed out, but not yet locked. If a backend pinned and locked the page in that window, it saw the zeroed page instead of the old page or new page contents, which could lead to missing rows in a result set, or errors.

To fix, replace RBM_ZERO with RBM_ZERO_AND_LOCK, which atomically pins, zeroes, and locks the page, if it's not in the buffer cache already.

In stable branches, the old RBM_ZERO constant is renamed to RBM_DO_NOT_USE, to avoid breaking any 3rd party extensions that might use RBM_ZERO. More importantly, this avoids renumbering the other enum values, which would cause even bigger confusion in extensions that use ReadBufferExtended, but haven't been recompiled.

Backpatch to all supported versions; this has been racy since hot standby was introduced.
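A minimal sketch of the redo-side usage, under the assumption of the XLogReadBufferExtended() interface of those branches:

    /* The page is pinned, zeroed, and exclusively locked in one step, so no
     * other backend can observe the transient all-zeros state. */
    buffer = XLogReadBufferExtended(rnode, forknum, blkno, RBM_ZERO_AND_LOCK);
    page = (Page) BufferGetPage(buffer);
    /* ... copy the full-page image into the locked page ... */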
* Explicitly support the case that a plancache's raw_parse_tree is NULL. (Tom Lane, 2014-11-12)
This only happens if a client issues a Parse message with an empty query string, which is a bit odd; but since it is explicitly called out as legal by our FE/BE protocol spec, we'd probably better continue to allow it.

Fix by adding tests everywhere that the raw_parse_tree field is passed to functions that don't or shouldn't accept NULL. Also make it clear in the relevant comments that NULL is an expected case.

This reverts commits a73c9dbab0165b3395dfe8a44a7dfd16166963c4 and 2e9650cbcff8c8fb0d9ef807c73a44f241822eee, which fixed specific crash symptoms by hacking things at what now seems to be the wrong end, ie the callee functions. Making the callees allow NULL is superficially more robust, but it's not always true that there is a defensible thing for the callee to do in such cases. The caller has more context and is better able to decide what the empty-query case ought to do.

Per followup discussion of bug #11335. Back-patch to 9.2. The code before that is sufficiently different that it would require development of a separate patch, which doesn't seem worthwhile for what is believed to be an essentially cosmetic change.
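A hedged illustration of the caller-side guard this commit favors; the actual call sites and their tag handling vary:

    if (plansource->raw_parse_tree != NULL)
        commandTag = CreateCommandTag(plansource->raw_parse_tree);
    else
        commandTag = NULL;   /* empty query string: report no command tag */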