path: root/src/backend
* Fix ancient encoding error in hungarian.stop. (Tom Lane, 2014-06-10)
  When we grabbed this file off the Snowball project's website, we mistakenly supposed that it was in LATIN1 encoding, but evidently it was actually in LATIN2. This resulted in ő (o-double-acute, U+0151, which is code 0xF5 in LATIN2) being misconverted into õ (o-tilde, U+00F5), as complained of in bug #10589 from Zoltán Sörös. We'd have messed up u-double-acute too, but there aren't any of those in the file. Other characters used in the file have the same codes in LATIN1 and LATIN2, which no doubt helped hide the problem for so long.

  The error is not only ours: the Snowball project also was confused about which encoding is required for Hungarian. But dealing with that will require source-code changes that I'm not at all sure we'll wish to back-patch. Fixing the stopword file seems reasonably safe to back-patch however.

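  As an aside illustrating the mix-up (not part of the commit message): byte 0xF5 decodes to a different character depending on which Latin encoding is assumed. A minimal C sketch using POSIX iconv with common glibc encoding names; the program and names are illustrative only:

      #include <stdio.h>
      #include <iconv.h>

      static void show(const char *enc)
      {
          char in[2] = { (char) 0xF5, '\0' };
          char out[8] = { 0 };
          char *ip = in, *op = out;
          size_t il = 1, ol = sizeof(out) - 1;
          iconv_t cd = iconv_open("UTF-8", enc);

          if (cd == (iconv_t) -1)
          {
              perror("iconv_open");
              return;
          }
          iconv(cd, &ip, &il, &op, &ol);
          printf("byte 0xF5 in %s -> %s\n", enc, out);
          iconv_close(cd);
      }

      int main(void)
      {
          show("ISO-8859-1");   /* LATIN1: decodes to U+00F5, o-tilde */
          show("ISO-8859-2");   /* LATIN2: decodes to U+0151, o-double-acute */
          return 0;
      }
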
* Fix infinite loop when splitting inner tuples in SPGiST text indexes. (Tom Lane, 2014-06-09)
  Previously, the code used a node label of zero both for strings that contain no bytes beyond the inner tuple's prefix, and for cases where an "allTheSame" inner tuple has to be split to allow a string with a different next byte to be inserted into it. Failing to distinguish these cases meant that if a string ending with the current prefix needed to be inserted into an allTheSame tuple, we got into an infinite loop, because after splitting the tuple we'd descend into the child allTheSame tuple and then find we need to split again.

  To fix, instead use -1 and -2 as the node labels for these two cases. This requires widening the node label type from "char" to int2, but fortunately SPGiST stores all pass-by-value node label types in their Datum representation, which means that this change is transparently upward compatible so far as the on-disk representation goes. We continue to recognize zero as a dummy node label for reading purposes, but will not attempt to push new index entries down into such a label, so that the loop won't occur even when dealing with an existing index.

  Per report from Teodor Sigaev. Back-patch to 9.2 where the faulty code was introduced.

* Wrap multixact/members correctly during extension, take 2 (Alvaro Herrera, 2014-06-09)
  In a50d97625497b7 I already changed this, but got it wrong for the case where the number of members is larger than the number of entries that fit in the last page of the last segment.

  As reported by Serge Negodyuck in a followup to bug #8673.

* Add defenses against running with a wrong selection of LOBLKSIZE. (Tom Lane, 2014-06-05)
  It's critical that the backend's idea of LOBLKSIZE match the way data has actually been divided up in pg_largeobject. While we don't provide any direct way to adjust that value, doing so is a one-line source code change and various people have expressed interest recently in changing it. So, just as with TOAST_MAX_CHUNK_SIZE, it seems prudent to record the value in pg_control and cross-check that the backend's compiled-in setting matches the on-disk data.

  Also tweak the code in inv_api.c so that fetches from pg_largeobject explicitly verify that the length of the data field is not more than LOBLKSIZE. Formerly we just had Asserts() for that, which is no protection at all in production builds. In some of the call sites an overlength data value would translate directly to a security-relevant stack clobber, so it seems worth one extra runtime comparison to be sure.

  In the back branches, we can't change the contents of pg_control; but we can still make the extra checks in inv_api.c, which will offer some amount of protection against running with the wrong value of LOBLKSIZE.

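  To illustrate the kind of compiled-in versus on-disk cross-check being described (a standalone sketch, not the actual pg_control code; the struct, field, and function names here are hypothetical):

      #include <stdint.h>
      #include <stdio.h>
      #include <stdlib.h>

      #define LOBLKSIZE 2048                 /* compiled-in value (illustrative) */

      typedef struct { uint32_t loblksize; } ControlFileData;   /* hypothetical */

      /* Refuse to start if the value recorded at initdb time differs from
       * the value this binary was compiled with. */
      static void check_control_file(const ControlFileData *cf)
      {
          if (cf->loblksize != LOBLKSIZE)
          {
              fprintf(stderr,
                      "database was initialized with LOBLKSIZE %u, "
                      "but the server was compiled with LOBLKSIZE %u\n",
                      (unsigned) cf->loblksize, (unsigned) LOBLKSIZE);
              exit(1);
          }
      }

      int main(void)
      {
          ControlFileData cf = { 2048 };
          check_control_file(&cf);
          puts("LOBLKSIZE matches");
          return 0;
      }
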
* Fix longstanding bug in HeapTupleSatisfiesVacuum(). (Andres Freund, 2014-06-04)
  HeapTupleSatisfiesVacuum() didn't properly discern between DELETE_IN_PROGRESS and INSERT_IN_PROGRESS for rows that have been inserted in the current transaction and deleted in an aborted subtransaction of the current backend. At the very least that caused problems for CLUSTER and CREATE INDEX in transactions that had aborting subtransactions producing rows, leading to warnings like:

      WARNING: concurrent delete in progress within table "..."

  possibly in an endless, uninterruptible, loop.

  Instead of treating *InProgress xmins the same as *IsCurrent ones, treat them as being distinct like the other visibility routines. As implemented this separation can cause a behaviour change for rows that have been inserted and deleted in another, still running, transaction. HTSV will now return INSERT_IN_PROGRESS instead of DELETE_IN_PROGRESS for those. That's both more in line with the other visibility routines and arguably more correct, the latter because an INSERT_IN_PROGRESS will make callers look at/wait for xmin, instead of xmax. The only current caller where that's possibly worse than the old behaviour is heap_prune_chain(), which now won't mark the page as prunable if a row has concurrently been inserted and deleted. That's harmless enough.

  As a cautionary measure, also insert an interrupt check before the gotos in IndexBuildHeapScan() that lead to the uninterruptible loop. There are other possible causes of repeated loops, like a row that several sessions try to update and all fail, and the cost of doing the check in the retry case is low.

  As this bug goes back all the way to the introduction of subtransactions in 573a71a5da, backpatch to all supported releases.

  Reported-By: Sandro Santilli

* Set the process latch when processing recovery conflict interrupts. (Andres Freund, 2014-06-03)
  Because RecoveryConflictInterrupt() didn't set the process latch, anything using the latter to wait for events didn't get notified about recovery conflicts. Most latch users are never the target of recovery conflicts, which explains the lack of reports about this until now. Since 9.3 two possibly affected users exist though: the SQL-callable pg_sleep() now uses latches to wait, and background workers are expected to use latches in their main loop. Both would currently wait until the end of WaitLatch's timeout.

  Fix by adding a SetLatch() to RecoveryConflictInterrupt(). It'd also be possible to fix the issue by having each latch user set set_latch_on_sigusr1. That seems failure prone though, as most of these callsites won't often receive recovery conflicts and thus will likely only be tested against normal query cancels et al. It'd also be unnecessarily verbose.

  Backpatch to 9.1 where latches were introduced. Arguably 9.3 would be sufficient, because that's where pg_sleep() was converted to waiting on the latch and background workers got introduced; but there could be user-level code making use of the latch pre-9.3.

* Revert "Fix bogus %name-prefix option syntax in all our Bison files."Tom Lane2014-05-28
| | | | | | | | | | | | This reverts commit ece7aa8b0f57d92577055a88579555df895eb929. It turns out that the %name-prefix syntax without "=" does not work at all in pre-2.4 Bison. We are not prepared to make such a large jump in minimum required Bison version just to suppress a warning message in a version hardly any developers are using yet. When 3.0 gets more popular, we'll figure out a way to deal with this. In the meantime, BISONFLAGS=-Wno-deprecated is recommendable for anyone using 3.0 who doesn't want to see the warning.
* Fix bogus %name-prefix option syntax in all our Bison files. (Tom Lane, 2014-05-28)
  %name-prefix doesn't use an "=" sign according to the Bison docs, but it silently accepted one anyway, until Bison 3.0. This was originally a typo of mine in commit 012abebab1bc72043f3f670bf32e91ae4ee04bd2, and we seem to have slavishly copied the error into all the other grammar files.

  Per report from Vik Fearing; analysis by Peter Eisentraut. Back-patch to all active branches, since somebody might try to build a back branch with up-to-date tools.

* Ensure cleanup in case of early errors in streaming base backups (Magnus Hagander, 2014-05-28)
  Move the code that sends the initial status information as well as the calculation of paths inside the ENSURE_ERROR_CLEANUP block. If this code failed, we would "leak" a counter of the number of concurrent backups, thereby making the system always believe it was in backup mode. This could happen if the sending failed (which it probably never did given that the small amount of data to send would never cause a flush). It is very low risk, but all operations after do_pg_start_backup should be protected.

* Prevent auto_explain from changing the output of a user's EXPLAIN. (Tom Lane, 2014-05-20)
  Commit af7914c6627bcf0b0ca614e9ce95d3f8056602bf, which introduced the EXPLAIN (TIMING) option, for some reason coded explain.c to look at planstate->instrument->need_timer rather than es->timing to decide whether to print timing info. However, the former flag might get set as a result of contrib/auto_explain wanting timing information. We certainly don't want activation of auto_explain to change user-visible statement behavior, so fix that.

  Also fix an independent bug introduced in the same patch: in the code path for a never-executed node with a machine-friendly output format, if timing was selected, it would fail to print the Actual Rows and Actual Loops items.

  Per bug #10404 from Tomonari Katsumata. Back-patch to 9.2 where the faulty code was introduced.

* Use 0-based numbering in comments about backup blocks. (Heikki Linnakangas, 2014-05-19)
  The macros and functions that work with backup blocks in the redo function use 0-based numbering, so let's use that consistently in the function that generates the records too. Makes it so much easier to compare the generation and replay functions.

  Backpatch to 9.0, where we switched from 1-based to 0-based numbering.

* Initialize tsId and dbId fields in WAL record of COMMIT PREPARED. (Heikki Linnakangas, 2014-05-16)
  Commit dd428c79 added dbId and tsId to the xl_xact_commit struct but missed that prepared transaction commits reuse that struct. Fix that.

  Because those fields were left uninitialized, replaying a commit prepared WAL record in a hot standby node would fail to remove the relcache init file. That can lead to "could not open file" errors on the standby. The relcache init file only needs to be removed when a system table/index is rewritten in the transaction using two-phase commit, so that should be rare in practice.

  In HEAD, the incorrect dbId/tsId values are also used for filtering in logical replication code, causing the transaction to always be filtered out.

  Analysis and fix by Andres Freund. Backpatch to 9.0 where hot standby was introduced.

* Fix unportable setvbuf() usage in initdb. (Tom Lane, 2014-05-15)
  In yesterday's commit 2dc4f011fd61501cce507be78c39a2677690d44b, I tried to force buffering of stdout/stderr in initdb to be what it is by default when the program is run interactively on Unix (since that's how most manual testing is done). This tripped over the fact that Windows doesn't support _IOLBF mode. We dealt with that a long time ago in syslogger.c by falling back to unbuffered mode on Windows. Export that solution in port.h and use it in initdb.

  Back-patch to 8.4, like the previous commit.

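  The fallback has roughly this shape (a hedged sketch of the idea described above, not the actual port.h/initdb code):

      #include <stdio.h>

      /* Line-buffered where supported; Windows lacks _IOLBF support in
       * setvbuf(), so fall back to unbuffered there. */
      #ifdef WIN32
      #define PG_IOLBF _IONBF
      #else
      #define PG_IOLBF _IOLBF
      #endif

      int main(void)
      {
          setvbuf(stdout, NULL, PG_IOLBF, 0);
          printf("output buffered per platform\n");
          return 0;
      }
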
* Handle duplicate XIDs in txid_snapshot. (Heikki Linnakangas, 2014-05-15)
  The proc array can contain duplicate XIDs, when a transaction is just being prepared for two-phase commit. To cope, remove any duplicates in txid_current_snapshot(). Also ignore duplicates in the input functions, so that if e.g. you have an old pg_dump file that already contains duplicates, it will be accepted.

  Report and fix by Jan Wieck. Backpatch to all supported versions.

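  A minimal sketch of the sort-and-deduplicate step described above (standalone C for illustration, not the actual txid.c code):

      #include <stdio.h>
      #include <stdlib.h>
      #include <stdint.h>

      typedef uint64_t txid;

      static int cmp_txid(const void *a, const void *b)
      {
          txid x = *(const txid *) a, y = *(const txid *) b;
          return (x > y) - (x < y);
      }

      /* Sort the xip array and drop adjacent duplicates; returns new length. */
      static int sort_snapshot(txid *xip, int n)
      {
          int out = 0;

          qsort(xip, n, sizeof(txid), cmp_txid);
          for (int i = 0; i < n; i++)
              if (out == 0 || xip[out - 1] != xip[i])
                  xip[out++] = xip[i];
          return out;
      }

      int main(void)
      {
          txid xip[] = {100, 42, 100, 7};
          int n = sort_snapshot(xip, 4);

          for (int i = 0; i < n; i++)
              printf("%llu ", (unsigned long long) xip[i]);
          printf("\n");
          return 0;
      }
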
* Fix race condition in preparing a transaction for two-phase commit. (Heikki Linnakangas, 2014-05-15)
  To lock a prepared transaction's shared memory entry, we used to mark it with the XID of the backend. When the XID was no longer active according to the proc array, the entry was implicitly considered as not locked anymore. However, when preparing a transaction, the backend's proc array entry was cleared before transferring the locks (and some other state) to the prepared transaction's dummy PGPROC entry, so there was a window where another backend could finish the transaction before it was in fact fully prepared.

  To fix, rewrite the locking mechanism of global transaction entries. Instead of an XID, just have a simple locked-or-not flag in each entry (we store the locking backend's backend id rather than a simple boolean, but that's just for debugging purposes). The backend is responsible for explicitly unlocking the entry, and to make sure that that happens, install a callback to unlock it on abort or process exit.

  Backpatch to all supported versions.

* Code review for recent changes in relcache.c. (Tom Lane, 2014-05-14)
  rd_replidindex should be managed the same as rd_oidindex, and rd_keyattr and rd_idattr should be managed like rd_indexattr. Omissions in this area meant that the bitmapsets computed for rd_keyattr and rd_idattr would be leaked during any relcache flush, resulting in a slow but permanent leak in CacheMemoryContext. There was also a tiny probability of relcache entry corruption if we ran out of memory at just the wrong point in RelationGetIndexAttrBitmap. Otherwise, the fields were not zeroed where expected, which would not bother the code any AFAICS but could greatly confuse anyone examining the relcache entry while debugging.

  Also, create an API function RelationGetReplicaIndex rather than letting non-relcache code be intimate with the mechanisms underlying caching of that value (we won't even mention the memory leak there).

  Also, fix a relcache flush hazard identified by Andres Freund: RelationGetIndexAttrBitmap must not assume that rd_replidindex stays valid across index_open.

  The aspects of this involving rd_keyattr date back to 9.3, so back-patch those changes.

* Get rid of bogus dependency on typcategory in to_json() and friends. (Tom Lane, 2014-05-09)
  These functions were relying on typcategory to identify arrays and composites, which is not reliable and not the normal way to do it. Using typcategory to identify boolean, numeric types, and json itself is also pretty questionable, though the code in those cases didn't seem to be at risk of anything worse than wrong output. Instead, use the standard lsyscache functions to identify arrays and composites, and rely on a direct check of the type OID for the other cases.

  In HEAD, also be sure to look through domains so that a domain is treated the same as its base type for conversions to JSON. However, this is a small behavioral change; given the lack of field complaints, we won't back-patch it.

  In passing, refactor so that there's only one copy of the code that decides which conversion strategy to apply, not multiple copies that could (and have) gotten out of sync.

* Protect against torn pages when deleting GIN list pages. (Heikki Linnakangas, 2014-05-08)
  To-be-deleted list pages contain no useful information, as they are being deleted, but we must still protect the writes from being torn by a crash after a partial write. To do that, re-initialize the pages on WAL replay.

  Jeff Janes caught this with a test program to test partial writes. Backpatch to all supported versions.

* Fix failure to set ActiveSnapshot while rewinding a cursor. (Tom Lane, 2014-05-07)
  ActiveSnapshot needs to be set when we call ExecutorRewind because some plan node types may execute user-defined functions during their ReScan calls (nodeLimit.c does so, at least). The wisdom of that is somewhat debatable, perhaps, but for now the simplest fix is to make sure the required context is valid. Failure to do this typically led to a null-pointer-dereference core dump, though it's possible that in more complex cases a function could be executed with the wrong snapshot leading to very subtle misbehavior.

  Per report from Leif Jensen. It's been broken for a long time, so back-patch to all active branches.

* Remove tabs after spaces in C comments (Bruce Momjian, 2014-05-06)
  This was not changed in HEAD, but will be done later as part of a pgindent run. Future pgindent runs will also do this.

  Report by Tom Lane.

  Backpatch through all supported branches, but not HEAD.

* Correct comment in Hot Standby nbtree handling (Simon Riggs, 2014-05-06)
  Logic is correct, matching handling of LP_DEAD elsewhere.

* Fix use of free in walsender error handling after a sysid mismatch. (Heikki Linnakangas, 2014-05-06)
  Found via valgrind. The bug has existed since the introduction of the walsender, so backpatch to 9.0.

  Andres Freund

* Fix possible cache invalidation failure in ReceiveSharedInvalidMessages. (Tom Lane, 2014-05-05)
  Commit fad153ec45299bd4d4f29dec8d9e04e2f1c08148 modified sinval.c to reduce the number of calls into sinvaladt.c (which require taking a shared lock) by keeping a local buffer of collected-but-not-yet-processed messages. However, if processing of the last message in a batch resulted in a recursive call to ReceiveSharedInvalidMessages, we could overwrite that message with a new one while the outer invalidation function was still working on it. This would be likely to lead to invalidation of the wrong cache entry, allowing subsequent processing to use stale cache data. The fix is just to make a local copy of each message while we're processing it.

  Spotted by Andres Freund. Back-patch to 8.4 where the bug was introduced.

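  The essence of the fix, reduced to a standalone sketch (the types and names are illustrative, not the real sinval.c structures):

      #include <stdio.h>

      typedef struct { int cache_id; } Msg;          /* illustrative message type */

      static Msg  batch[4] = {{1}, {2}, {3}, {4}};   /* stand-in for the local buffer */
      static int  nextmsg = 0, nummsgs = 4;

      /* Copy each message out of the batch before calling the invalidation
       * function, so that if the function recurses and the batch is refilled,
       * the message being processed is not overwritten. */
      static void process_messages(void (*inval)(const Msg *))
      {
          while (nextmsg < nummsgs)
          {
              Msg msg = batch[nextmsg++];   /* local copy, per the fix above */
              inval(&msg);                  /* may recurse and refill 'batch' */
          }
      }

      static void print_inval(const Msg *m)
      {
          printf("invalidate %d\n", m->cache_id);
      }

      int main(void)
      {
          process_messages(print_inval);
          return 0;
      }
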
* Fix yet another corner case in dumping rules/views with USING clauses. (Tom Lane, 2014-05-01)
  ruleutils.c tries to cope with additions/deletions/renamings of columns in tables referenced by views, by means of adding machine-generated aliases to the printed form of a view when needed to preserve the original semantics. A recent blog post by Marko Tiikkaja pointed out a case I'd missed though: if one input of a join with USING is itself a join, there is nothing to stop the user from adding a column of the same name as the USING column to whichever side of the sub-join didn't provide the USING column. And then there'll be an error when the view is re-parsed, since now the sub-join exposes two columns matching the USING specification. We were catching a lot of related cases, but not this one, so add some logic to cope with it.

  Back-patch to 9.3, which is the first release that makes any serious attempt to cope with such cases (cf commit 2ffa740be and follow-ons).

* Fix failure to detoast fields in composite elements of structured types. (Tom Lane, 2014-05-01)
  If we have an array of records stored on disk, the individual record fields cannot contain out-of-line TOAST pointers: the tuptoaster.c mechanisms are only prepared to deal with TOAST pointers appearing in top-level fields of a stored row. The same applies for ranges over composite types, nested composites, etc. However, the existing code only took care of expanding sub-field TOAST pointers for the case of nested composites, not for other structured types containing composites. For example, given a command such as

      UPDATE tab SET arraycol = ARRAY[(ROW(x,42)::mycompositetype)] ...

  where x is a direct reference to a field of an on-disk tuple, if that field is long enough to be toasted out-of-line then the TOAST pointer would be inserted as-is into the array column. If the source record for x is later deleted, the array field value would become a dangling pointer, leading to errors along the line of "missing chunk number 0 for toast value ..." when the value is referenced. A reproducible test case for this was provided by Jan Pecek, but it seems likely that some of the "missing chunk number" reports we've heard in the past were caused by similar issues.

  Code-wise, the problem is that PG_DETOAST_DATUM() is not adequate to produce a self-contained Datum value if the Datum is of composite type. Seen in this light, the problem is not just confined to arrays and ranges, but could also affect some other places where detoasting is done in that way, for example form_index_tuple().

  I tried teaching the array code to apply toast_flatten_tuple_attribute() along with PG_DETOAST_DATUM() when the array element type is composite, but this was messy and imposed extra cache lookup costs whether or not any TOAST pointers were present, indeed sometimes when the array element type isn't even composite (since sometimes it takes a typcache lookup to find that out). The idea of extending that approach to all the places that currently use PG_DETOAST_DATUM() wasn't attractive at all.

  This patch instead solves the problem by decreeing that composite Datum values must not contain any out-of-line TOAST pointers in the first place; that is, we expand out-of-line fields at the point of constructing a composite Datum, not at the point where we're about to insert it into a larger tuple. This rule is applied only to true composite Datums, not to tuples that are being passed around the system as tuples, so it's not as invasive as it might sound at first. With this approach, the amount of code that has to be touched for a full solution is greatly reduced, and added cache lookup costs are avoided except when there actually is a TOAST pointer that needs to be inlined.

  The main drawback of this approach is that we might sometimes dereference a TOAST pointer that will never actually be used by the query, imposing a rather large cost that wasn't there before. On the other side of the coin, if the field value is used multiple times then we'll come out ahead by avoiding repeat detoastings. Experimentation suggests that common SQL coding patterns are unaffected either way, though. Applications that are very negatively affected could be advised to modify their code to not fetch columns they won't be using.

  In future, we might consider reverting this solution in favor of detoasting only at the point where data is about to be stored to disk, using some method that can drill down into multiple levels of nested structured types. That will require defining new APIs for structured types, though, so it doesn't seem feasible as a back-patchable fix.

  Note that this patch changes HeapTupleGetDatum() from a macro to a function call; this means that any third-party code using that macro will not get protection against creating TOAST-pointer-containing Datums until it's recompiled. The same applies to any uses of PG_RETURN_HEAPTUPLEHEADER(). It seems likely that this is not a big problem in practice: most of the tuple-returning functions in core and contrib produce outputs that could not possibly be toasted anyway, and the same probably holds for third-party extensions.

  This bug has existed since TOAST was invented, so back-patch to all supported branches.

* Check for interrupts and stack overflow during rule/view dumps. (Tom Lane, 2014-04-30)
  Since ruleutils.c recurses, it could be driven to stack overflow by deeply nested constructs. Very large queries might also take long enough to deparse that a check for interrupts seems like a good idea. Stick appropriate tests into a couple of key places.

  Noted by Greg Stark. Back-patch to all supported branches.

* Reduce indentation/parenthesization of set operations in rule/view dumps. (Tom Lane, 2014-04-30)
  A query such as "SELECT x UNION SELECT y UNION SELECT z UNION ..." produces a left-deep nested parse tree, which we formerly showed in its full nested glory and with all the possible parentheses. This does little for readability, though, and long UNION lists resulting in excessive indentation are common. Instead, let's omit parentheses and indent all the subqueries at the same level in such cases.

  This patch skips indentation/parenthesization whenever the lefthand input of a SetOperationStmt is another SetOperationStmt of the same kind and ALL/DISTINCT property. We could teach the code the exact syntactic precedence of set operations and thereby avoid parenthesization in some more cases, but it's not clear that that'd be a readability win: it seems better to parenthesize if the set operation changes. (As an example, if there's one UNION in a long list of UNION ALL, it now stands out like a sore thumb, which seems like a good thing.)

  Back-patch to 9.3. This completes our response to a complaint from Greg Stark that since commit 62e666400d there's a performance problem in pg_dump for views containing long UNION sequences (or other types of deeply nested constructs). The previous commit 0601cb54dac14d979d726ab2ebeda251ae36e857 handles the general problem, but this one makes the specific case of UNION lists look a lot nicer.

* Limit overall indentation in rule/view dumps. (Tom Lane, 2014-04-30)
  Continuing to indent no matter how deeply nested we get doesn't really do anything for readability; what's worse, it results in O(N^2) total whitespace, which can become a performance and memory-consumption issue.

  To address this, once we get past 40 characters of indentation, reduce the indentation step distance 4x, and also limit the maximum indentation by reducing it modulo 40. This latter choice is a bit weird at first glance, but it seems to preserve readability better than a simple cap would do.

  Back-patch to 9.3, because since commit 62e666400d the performance issue is a hazard for pg_dump.

  Greg Stark and Tom Lane

* Fix indentation of JOIN clauses in rule/view dumps. (Tom Lane, 2014-04-30)
  The code attempted to outdent JOIN clauses further left than the parent FROM keyword, which was odd in any case, and led to inconsistent formatting since in simple cases the clauses couldn't be moved any further left than that. And it left a permanent decrement of the indentation level, causing subsequent lines to be much further left than they should be (again, this couldn't be seen in simple cases for lack of indentation to give up).

  After a little experimentation I chose to make it indent JOIN keywords two spaces from the parent FROM, which is one space more than the join's lefthand input in cases where that appears on a different line from FROM.

  Back-patch to 9.3. This is a purely cosmetic change, and the bug is quite old, so that may seem arbitrary; but we are going to be making some other changes to the indentation behavior in both HEAD and 9.3, so it seems reasonable to include this in 9.3 too. I committed this one first because its effects are more visible in the regression test results as they currently stand than they will be later.

* Improve planner to drop constant-NULL inputs of AND/OR where it's legal. (Tom Lane, 2014-04-29)
  In general we can't discard constant-NULL inputs, since they could change the result of the AND/OR to be NULL. But at top level of WHERE, we do not need to distinguish a NULL result from a FALSE result, so it's okay to treat NULL as FALSE and then simplify AND/OR accordingly.

  This is a very ancient oversight, but in 9.2 and later it can lead to failure to optimize queries that previous releases did optimize, as a result of more aggressive parameter substitution rules making it possible to reduce more subexpressions to NULL constants. This is the root cause of bug #10171 from Arnold Scheffler. We could alternatively have fixed that by teaching orclauses.c to ignore constant-NULL OR arms, but it seems better to get rid of them globally.

  I resisted the temptation to back-patch this change into all active branches, but it seems appropriate to back-patch as far as 9.2 so that there will not be performance regressions of the kind shown in this bug.

* Fix two bugs in WAL-logging of GIN pending-list pages. (Heikki Linnakangas, 2014-04-28)
  In writeListPage, never take a full-page image of the page, because we have all the information required to re-initialize in the WAL record anyway. Before this fix, a full-page image was always generated, unless full_page_writes=off, because when the page is initialized its LSN is always 0. In stable branches, keep the code to restore the backup blocks if they exist, in case that the WAL is generated with an older minor version, but in master Assert that there are no full-page images.

  In the redo routine, add missing "off++". Otherwise the tuples are added to the page in reverse order. That happens to be harmless because we always scan and remove all the tuples together, but it was clearly wrong. Also, it was masked by the first bug unless full_page_writes=off, because the page was always restored from a full-page image.

  Backpatch to all supported versions.

* Fix race when updating a tuple concurrently locked by another process (Alvaro Herrera, 2014-04-24)
  If a tuple is locked, and this lock is later upgraded either to an update or to a stronger lock, and in the meantime some other process tries to lock, update or delete the same tuple, it (the tuple) could end up being updated twice, or having conflicting locks held.

  The reason for this is that the second updater checks for a change in Xmax value, or in the HEAP_XMAX_IS_MULTI infomask bit, after noticing the first lock; and if there's a change, it restarts and re-evaluates its ability to update the tuple. But it neglected to check for changes in lock strength or in lock-vs-update status when those two properties stayed the same. This would lead it to take the wrong decision and continue with its own update, when in reality it shouldn't do so but instead restart from the top. This could lead to either an assertion failure much later (when a multixact containing multiple updates is detected), or duplicate copies of tuples.

  To fix, make sure to compare the other relevant infomask bits alongside the Xmax value and HEAP_XMAX_IS_MULTI bit, and restart from the top if necessary.

  Also, in the belt-and-suspenders spirit, add a check to MultiXactCreateFromMembers that a multixact being created does not have two or more members that are claimed to be updates. This should protect against other bugs that might cause similar bogus situations.

  Backpatch to 9.3, where the possibility of multixacts containing updates was introduced. (In prior versions it was possible to have the tuple lock upgraded from shared to exclusive, and an update would not restart from the top; yet we're protected against a bug there because there's always a sleep to wait for the locking transaction to complete before continuing to do anything. Really, the fact that tuple locks always conflicted with concurrent updates is what protected against bugs here.)

  Per report from Andrew Dunstan and Josh Berkus in thread at http://www.postgresql.org/message-id/534C8B33.9050807@pgexperts.com

  Bug analysis by Andres Freund.

* Reset pg_stat_activity.xact_start during PREPARE TRANSACTION. (Tom Lane, 2014-04-24)
  Once we've completed a PREPARE, our session is not running a transaction, so its entry in pg_stat_activity should show xact_start as null, rather than leaving the value as the start time of the now-prepared transaction.

  I think possibly this oversight was triggered by faulty extrapolation from the adjacent comment that says PrepareTransaction should not call AtEOXact_PgStat, so tweak the wording of that comment.

  Noted by Andres Freund while considering bug #10123 from Maxim Boguk, although this error doesn't seem to explain that report.

  Back-patch to all active branches.

* Update obsolete comments. (Heikki Linnakangas, 2014-04-23)
  We no longer have a TLI field in the page header.

* Fix typos in comment. (Heikki Linnakangas, 2014-04-23)

* Set the all-visible flag on heap page before writing WAL record, not after. (Heikki Linnakangas, 2014-04-17)
  If we set the all-visible flag after writing the WAL record, and XLogInsert takes a full-page image of the page, the image would not include the flag. We will then proceed to set the VM bit, which would then be set without the corresponding all-visible flag on the heap page.

  Found by comparing page images on master and standby, after writing/replaying each WAL record. (There is still a discrepancy: the all-visible flag won't be set after replaying the HEAP_CLEAN record, even though it is set in the master. However, it will be set when replaying the HEAP2_VISIBLE record and setting the VM bit, so the all-visible flag and VM bit are always consistent on the standby, even though they are momentarily out-of-sync with master.)

  Backpatch to 9.3 where this code was introduced.

* Fix object identities for text search objects (Alvaro Herrera, 2014-04-16)
  We were neglecting to schema-qualify them.

  Backpatch to 9.3, where object identities were introduced as a concept by commit f8348ea32ec8.

* Use AF_UNSPEC not PF_UNSPEC in getaddrinfo calls. (Tom Lane, 2014-04-16)
  According to the Single Unix Spec and assorted man pages, you're supposed to use the constants named AF_xxx when setting ai_family for a getaddrinfo call. In a few places we were using PF_xxx instead. Use of PF_xxx appears to be an ancient BSD convention that was not adopted by later standardization. On BSD and most later Unixen, it doesn't matter much because those constants have equivalent values anyway; but nonetheless this code is not per spec.

  In the same vein, replace PF_INET by AF_INET in one socket() call, which wasn't even consistent with the other socket() call in the same function, let alone the remainder of our code.

  Per investigation of a Cygwin trouble report from Marco Atzeri. It's probably a long shot that this will fix his issue, but it's wrong in any case.

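  For reference, the spec-conformant pattern looks like this (a minimal standalone example; the host and port here are arbitrary):

      #include <stdio.h>
      #include <string.h>
      #include <sys/types.h>
      #include <sys/socket.h>
      #include <netdb.h>

      int main(void)
      {
          struct addrinfo hints, *res;

          memset(&hints, 0, sizeof(hints));
          hints.ai_family = AF_UNSPEC;      /* per spec: AF_xxx, not PF_xxx */
          hints.ai_socktype = SOCK_STREAM;

          if (getaddrinfo("localhost", "5432", &hints, &res) == 0)
          {
              printf("resolved %d address family\n", res->ai_family);
              freeaddrinfo(res);
          }
          return 0;
      }
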
* check socket creation errors against PGINVALID_SOCKET (Bruce Momjian, 2014-04-16)
  Previously, in some places, socket creation errors were checked for negative values, which is not true for Windows because sockets are unsigned. This masked socket creation errors on Windows.

  Backpatch through 9.0. 8.4 doesn't have the infrastructure to fix this.

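  A sketch of the portable check; the pgsocket/PGINVALID_SOCKET definitions below mirror the usual approach but are reproduced here only for illustration (on Windows this would additionally need WSAStartup before use):

      #include <stdio.h>
      #ifdef WIN32
      #include <winsock2.h>
      typedef SOCKET pgsocket;
      #define PGINVALID_SOCKET INVALID_SOCKET
      #else
      #include <sys/socket.h>
      typedef int pgsocket;
      #define PGINVALID_SOCKET (-1)
      #endif

      int main(void)
      {
          pgsocket sock = socket(AF_INET, SOCK_STREAM, 0);

          /* On Windows SOCKET is unsigned, so "sock < 0" can never be true
           * and would silently mask a failed socket() call; compare against
           * the invalid-socket constant instead. */
          if (sock == PGINVALID_SOCKET)
          {
              perror("socket");
              return 1;
          }
          printf("socket created\n");
          return 0;
      }
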
* Use correctly-sized buffer when zero-filling a WAL file. (Heikki Linnakangas, 2014-04-16)
  I mixed up BLCKSZ and XLOG_BLCKSZ when I changed the way the buffer is allocated a couple of weeks ago. With the default settings, they are both 8k, but they can be changed at compile-time.

* Fix hot standby bug with GiST scans. (Heikki Linnakangas, 2014-04-08)
  Don't reset the rightlink of a page when replaying a page update record. This was a leftover from pre-hot standby days, when it was not possible to have scans concurrent with WAL replay. Resetting the right-link was not necessary back then either, but it was done for the sake of tidiness. But with hot standby, it's wrong, because a concurrent scan might still need it.

  Backpatch all versions with hot standby, 9.0 and above.

* Assert that strong-lock count is >0 everywhere it's decremented. (Robert Haas, 2014-04-07)
  The one existing assertion of this type has tripped a few times in the buildfarm lately, but it's not clear whether the problem is really originating there or whether it's leftovers from a trip through one of the other two paths that lack a matching assertion. So add one.

  Since the same bug(s) most likely exist(s) in the back-branches also, back-patch to 9.2, where the fast-path lock mechanism was added.

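  The pattern being added is simply an assertion ahead of the decrement; a standalone sketch with a hypothetical counter array standing in for the shared fast-path lock counts:

      #include <assert.h>
      #include <stdint.h>
      #include <stdio.h>

      static uint32_t strong_lock_counts[16];   /* hypothetical stand-in */

      static void forget_strong_lock(unsigned hashcode)
      {
          /* An unsigned counter wraps silently on underflow; asserting first
           * catches a missed increment at the point of the bad decrement. */
          assert(strong_lock_counts[hashcode] > 0);
          strong_lock_counts[hashcode]--;
      }

      int main(void)
      {
          strong_lock_counts[7] = 1;
          forget_strong_lock(7);
          printf("count now %u\n", (unsigned) strong_lock_counts[7]);
          return 0;
      }
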
* Block signals earlier during postmaster startup. (Tom Lane, 2014-04-05)
  Formerly, we set up the postmaster's signal handling only when we were about to start launching subprocesses. This is a bad idea though, as it means that for example a SIGINT arriving before that will kill the postmaster instantly, perhaps leaving lockfiles, socket files, shared memory, etc laying about. We'd rather that such a signal caused orderly postmaster termination including releasing of those resources.

  A simple fix is to move the PostmasterMain stanza that initializes signal handling to an earlier point, before we've created any such resources. Then, an early-arriving signal will be blocked until we're ready to deal with it in the usual way. (The only part that really needs to be moved up is blocking of signals, but it seems best to keep the signal handler installation calls together with that; for one thing this ensures the kernel won't drop any signals we wished to get. The handlers won't get invoked in any case until we unblock signals in ServerLoop.)

  Per a report from MauMau. He proposed changing the way "pg_ctl stop" works to deal with this, but that'd just be masking one symptom not fixing the core issue.

  It's been like this since forever, so back-patch to all supported branches.

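  The ordering idea, reduced to a standalone sketch (block first, create resources, then install handlers and unblock; this is not the actual postmaster code):

      #include <signal.h>
      #include <stdio.h>

      int main(void)
      {
          sigset_t block_all;

          /* Block signals before any resources exist, so an early SIGINT is
           * held pending rather than killing the process mid-setup. */
          sigfillset(&block_all);
          sigprocmask(SIG_BLOCK, &block_all, NULL);

          /* ... create lockfiles, sockets, shared memory here ... */

          /* Install handlers, then unblock once the main loop is ready. */
          sigprocmask(SIG_UNBLOCK, &block_all, NULL);
          puts("ready");
          return 0;
      }
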
* Fix processing of PGC_BACKEND GUC parameters on Windows. (Tom Lane, 2014-04-05)
  EXEC_BACKEND builds (i.e., Windows) failed to absorb values of PGC_BACKEND parameters if they'd been changed post-startup via the config file. This for example prevented log_connections from working if it were turned on post-startup.

  The mechanism for handling this case has always been a bit of a kluge, and it wasn't revisited when we implemented EXEC_BACKEND. While in a normal forking environment new backends will inherit the postmaster's value of such settings, EXEC_BACKEND backends have to read the settings from the CONFIG_EXEC_PARAMS file, and they were mistakenly rejecting them.

  This case has always been broken in the Windows port, so back-patch to all supported branches.

  Amit Kapila

* Fix tablespace creation WAL replay to work on Windows. (Tom Lane, 2014-04-04)
  The code segment that removes the old symlink (if present) wasn't clued into the fact that on Windows, symlinks are junction points which have to be removed with rmdir().

  Backpatch to 9.0, where the failing code was introduced.

  MauMau, reviewed by Muhammad Asif Naeem and Amit Kapila

* Allow "-C variable" and "--describe-config" even to root users.Tom Lane2014-04-04
| | | | | | | | | | | | | | | | | | | | | | | | There's no really compelling reason to refuse to do these read-only, non-server-starting options as root, and there's at least one good reason to allow -C: pg_ctl uses -C to find out the true data directory location when pointed at a config-only directory. On Windows, this is done before dropping administrator privileges, which means that pg_ctl fails for administrators if and only if a config-only layout is used. Since the root-privilege check is done so early in startup, it's a bit awkward to check for these switches. Make the somewhat arbitrary decision that we'll only skip the root check if -C is the first switch. This is not just to make the code a bit simpler: it also guarantees that we can't misinterpret a --boot mode switch. (While AuxiliaryProcessMain doesn't currently recognize any such switch, it might have one in the future.) This is no particular problem for pg_ctl, and since the whole behavior is undocumented anyhow, it's not a documentation issue either. (--describe-config only works as the first switch anyway, so this is no restriction for that case either.) Back-patch to 9.2 where pg_ctl first began to use -C. MauMau, heavily edited by me
* Fix bogus time printout in walreceiver's debug log messages. (Tom Lane, 2014-04-04)
  The displayed sendtime and receipttime were always exactly equal, because somebody forgot that timestamptz_to_str returns a static buffer (thereby simplifying life for most callers, at the cost of complicating it for those who need two results concurrently). Apply the same pstrdup solution used by the other call sites with this issue. Back-patch to 9.2 where the faulty code was introduced.

  Per bug #9849 from Haruka Takatsuka, though this is not exactly his patch. Possibly we should change timestamptz_to_str's API, but I wouldn't want to do so in the back branches.

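  A standalone illustration of the static-buffer pitfall and the duplicate-the-first-result fix (strdup here plays the role of pstrdup; the formatter is a stand-in, not timestamptz_to_str itself):

      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      #include <time.h>

      /* Stand-in for a formatter that returns a pointer into one static buffer. */
      static const char *fmt_time(time_t t)
      {
          static char buf[64];

          strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S", gmtime(&t));
          return buf;
      }

      int main(void)
      {
          time_t sendtime = 1000000000;
          time_t receipttime = 1000000123;

          /* Without the strdup, both pointers would alias the same static
           * buffer and the two times would print identically -- the bug
           * described above.  Duplicating the first result fixes it. */
          char *sent = strdup(fmt_time(sendtime));
          const char *received = fmt_time(receipttime);

          printf("sent %s, received %s\n", sent, received);
          free(sent);
          return 0;
      }
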
* Move multixid allocation out of critical section. (Heikki Linnakangas, 2014-04-04)
  It can fail if you run out of memory. This call was added in 9.3, so backpatch to 9.3 only.

* Avoid allocations in critical sections. (Heikki Linnakangas, 2014-04-04)
  If a palloc in a critical section fails, it becomes a PANIC.

* Fix non-equivalence of VARIADIC and non-VARIADIC function call formats. (Tom Lane, 2014-04-03)
  For variadic functions (other than VARIADIC ANY), the syntaxes foo(x,y,...) and foo(VARIADIC ARRAY[x,y,...]) should be considered equivalent, since the former is converted to the latter at parse time. They have indeed been equivalent, in all releases before 9.3. However, commit 75b39e790 made an ill-considered decision to record which syntax had been used in FuncExpr nodes, and then to make equal() test that in checking node equality --- which caused the syntaxes to not be seen as equivalent by the planner. This is the underlying cause of bug #9817 from Dmitry Ryabov.

  It might seem that a quick fix would be to make equal() disregard FuncExpr.funcvariadic, but the same commit made that untenable, because the field actually *is* semantically significant for some VARIADIC ANY functions. This patch instead adopts the approach of redefining funcvariadic (and aggvariadic, in HEAD) as meaning that the last argument is a variadic array, whether it got that way by parser intervention or was supplied explicitly by the user. Therefore the value will always be true for non-ANY variadic functions, restoring the principle of equivalence. (However, the planner will continue to consider use of VARIADIC as a meaningful difference for VARIADIC ANY functions, even though some such functions might disregard it.)

  In HEAD, this change lets us simplify the decompilation logic in ruleutils.c, since the funcvariadic/aggvariadic flag tells directly whether to print VARIADIC. However, in 9.3 we have to continue to cope with existing stored rules/views that might contain the previous definition. Fortunately, this just means no change in ruleutils.c, since its existing behavior effectively ignores funcvariadic for all cases other than VARIADIC ANY functions.

  In HEAD, bump catversion to reflect the fact that FuncExpr.funcvariadic changed meanings; this is sort of pro forma, since I don't believe any built-in views are affected.

  Unfortunately, this patch doesn't magically fix everything for affected 9.3 users. After installing 9.3.5, they might need to recreate their rules/views/indexes containing variadic function calls in order to get everything consistent with the new definition. As in the cited bug, the symptom of a problem would be failure to use a nominally matching index that has a variadic function call in its definition. We'll need to mention this in the 9.3.5 release notes.
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | For variadic functions (other than VARIADIC ANY), the syntaxes foo(x,y,...) and foo(VARIADIC ARRAY[x,y,...]) should be considered equivalent, since the former is converted to the latter at parse time. They have indeed been equivalent, in all releases before 9.3. However, commit 75b39e790 made an ill-considered decision to record which syntax had been used in FuncExpr nodes, and then to make equal() test that in checking node equality --- which caused the syntaxes to not be seen as equivalent by the planner. This is the underlying cause of bug #9817 from Dmitry Ryabov. It might seem that a quick fix would be to make equal() disregard FuncExpr.funcvariadic, but the same commit made that untenable, because the field actually *is* semantically significant for some VARIADIC ANY functions. This patch instead adopts the approach of redefining funcvariadic (and aggvariadic, in HEAD) as meaning that the last argument is a variadic array, whether it got that way by parser intervention or was supplied explicitly by the user. Therefore the value will always be true for non-ANY variadic functions, restoring the principle of equivalence. (However, the planner will continue to consider use of VARIADIC as a meaningful difference for VARIADIC ANY functions, even though some such functions might disregard it.) In HEAD, this change lets us simplify the decompilation logic in ruleutils.c, since the funcvariadic/aggvariadic flag tells directly whether to print VARIADIC. However, in 9.3 we have to continue to cope with existing stored rules/views that might contain the previous definition. Fortunately, this just means no change in ruleutils.c, since its existing behavior effectively ignores funcvariadic for all cases other than VARIADIC ANY functions. In HEAD, bump catversion to reflect the fact that FuncExpr.funcvariadic changed meanings; this is sort of pro forma, since I don't believe any built-in views are affected. Unfortunately, this patch doesn't magically fix everything for affected 9.3 users. After installing 9.3.5, they might need to recreate their rules/views/indexes containing variadic function calls in order to get everything consistent with the new definition. As in the cited bug, the symptom of a problem would be failure to use a nominally matching index that has a variadic function call in its definition. We'll need to mention this in the 9.3.5 release notes.