path: root/src/backend
Commit message (Author, Date)
* If recovery.conf is created after "pg_ctl stop -m i", do crash recovery. (Heikki Linnakangas, 2013-02-22)

  If you create a base backup using an atomic filesystem snapshot, and try to perform PITR starting from that base backup, or if you just kill a master server and create recovery.conf to put it into standby mode, we don't know how far we need to recover before reaching consistency.

  Normally in crash recovery, we replay all the WAL present in pg_xlog, and assume that we're consistent after that. And normally in archive recovery, minRecoveryPoint, backupEndRequired, or backupEndPoint is set in the control file, indicating how far we need to replay to reach consistency. But if the server was previously up and running normally, and you kill -9 it or take an atomic filesystem snapshot, none of those fields are set in the control file.

  The solution is to perform crash recovery first, replaying all the WAL in pg_xlog. After that's done, we assume that the system is consistent like in normal crash recovery, and switch to archive recovery mode after that.

  Per report from Kyotaro HORIGUCHI. In his scenario, recovery.conf was created after "pg_ctl stop -m i". I'm not sure we need to support that exact scenario, but we should support backing up using a filesystem snapshot, which looks identical.

  This issue goes back to at least 9.0, where hot standby was introduced and we started to track when consistency is reached. In 9.1 and 9.2, we would open up for hot standby too early, and queries could briefly see an inconsistent state. But 9.2 made it more visible, as we started to PANIC if we see a reference to a non-existing page during recovery, if we've already reached consistency.

  This is a fairly big patch, so back-patch to 9.2 only, where the issue is more visible. We can consider back-patching further after this has received some more testing in 9.2 and master.

* Better fix for "unarchived WAL files get deleted on crash recovery" bug. (Heikki Linnakangas, 2013-02-15)

  Revert my earlier fix for the bug that unarchived WAL files get deleted on crash recovery, commit c9cc7e05c6d82a9781883a016c70d95aa4923122. We create a .done file for files streamed or restored from archive, so the WAL file recycling logic used during normal operation works just as well during archive recovery.

  Per Fujii Masao's suggestion.

* Don't delete unarchived WAL files during crash recovery. (Heikki Linnakangas, 2013-02-15)

  Bug reported by Jehan-Guillaume (ioguix) de Rorthais. This was introduced with the change to keep WAL files restored from archive in pg_xlog, in 9.2.

* Fix bogus when-to-deregister-from-listener-array logic. (Tom Lane, 2013-02-13)

  Since a backend adds itself to the global listener array during Exec_ListenPreCommit, it's inappropriate for it to remove itself during Exec_UnlistenCommit or Exec_UnlistenAllCommit --- that leads to failure when committing a transaction that did UNLISTEN then LISTEN, since we end up not registered though we should be. (This leads to missing later notifications, or to Assert failures in assert-enabled builds.) Instead deal with deregistering at the bottom of AtCommit_Notify, when we know the final state of the listenChannels list.

  Also, simplify the representation of registration status by replacing the transient backendHasExecutedInitialListen flag with an amRegisteredListener flag.

  Per report from Greg Sabino Mullane. Back-patch to 9.0, where the problem was introduced during the LISTEN/NOTIFY rewrite.

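  A minimal sketch of the failing sequence (channel name hypothetical):

      LISTEN ch;     -- session registers itself in the global listener array
      BEGIN;
      UNLISTEN ch;
      LISTEN ch;
      COMMIT;        -- still listening per listenChannels, but the buggy code
                     -- had already deregistered the backend, so NOTIFY ch
                     -- from other sessions could be missed
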
* Further cleanup of gistsplit.c. (Tom Lane, 2013-02-10)

  After further reflection I was unconvinced that the existing coding is guaranteed to return valid union datums in every code path for multi-column indexes. Fix that by forcing a gistunionsubkey() call at the end of the recursion. Having done that, we can remove some clearly-redundant calls elsewhere. This should be a little faster for multi-column indexes (since the previous coding would uselessly do such a call for each column while unwinding the recursion), as well as much harder to break.

  Also, simplify the handling of cases where one side or the other of a primary split contains only don't-care tuples. The previous coding used a very ugly hack in removeDontCares() that essentially forced one random tuple to be treated as non-don't-care, providing a random initial choice of seed datum for the secondary split. It seems unlikely that that method will give better-than-random splits. Instead, treat such a split as degenerate and just let the next column determine the split, the same way that we handle fully degenerate cases where the two sides produce identical union datums.

* Remove useless picksplit-doesn't-support-secondary-split log spam. (Tom Lane, 2013-02-10)

  This LOG message was put in over five years ago with the evident expectation that we'd make all GiST opclasses support secondary split directly. However, no such thing ever happened, and indeed the number of opclasses supporting it decreased to zero in 9.2. The reason is that improving on the default implementation isn't that easy --- the opclass-specific code that did exist, before 9.2, doesn't appear to have been any improvement over the default.

  Hence, remove the message altogether. There's certainly no point in nagging users about this in released branches, but I doubt that we'll ever implement complete opclass-specific support anyway.

* Remove vestigial secondary-split support in gist_box_picksplit(). (Tom Lane, 2013-02-10)

  Not only is this implementation of secondary-split not better than the default implementation in gistsplit.c, it's actually worse. The gistsplit.c code at least looks to see if switching the left and right sides would make a better merge with the previously-split tuples, while this doesn't. In any case it's rather useless to support secondary split only in an edge case. There used to be more complete support for it here (in chooseLR()), but that was removed in commit 7f3bd86843e5aad84585a57d3f6b80db3c609916. It appears to me though that the chooseLR() code was really isomorphic to the default implementation, since it was still based on choosing the cheaper way of adding two sub-split vectors that had been chosen without regard to the primary split initially. I think an implementation of secondary split that could beat the default implementation would have to be pretty fully integrated into the split algorithm, not plastered on at the end.

  Back-patch to 9.2, but not further; previous branches have the chooseLR() code which I don't feel a great need to mess with. This is mainly so we just have two behaviors and not three among the various branches (IOW, this patch is cleanup for commit 7f3bd86843e5aad84585a57d3f6b80db3c609916's incomplete removal of secondary-split support).

* Document and clean up gistsplit.c. (Tom Lane, 2013-02-10)

  Improve comments, rename some variables and functions, slightly simplify a couple of APIs, in an attempt to make this code readable by people other than its original author.

  Even though this is essentially just cosmetic, back-patch to all active branches, because otherwise it's going to make back-patching future fixes in this file very painful.

* Fix gist_box_same and gist_point_consistent to handle fuzziness correctly. (Tom Lane, 2013-02-08)

  While there's considerable doubt that we want fuzzy behavior in the geometric operators at all (let alone as currently implemented), nobody is stepping forward to redesign that stuff. In the meantime it behooves us to make sure that index searches agree with the behavior of the underlying operators. This patch fixes two problems in this area.

  First, gist_box_same was using fuzzy equality, but it really needs to use exact equality to prevent not-quite-identical upper index keys from being treated as identical, which for example would prevent an existing upper key from being extended by an amount less than epsilon. This would result in inconsistent indexes. (The next release notes will need to recommend that users reindex GiST indexes on boxes, polygons, circles, and points, since all four opclasses use gist_box_same.)

  Second, gist_point_consistent used exact comparisons for upper-page comparisons in ~= searches, when it needs to use fuzzy comparisons to ensure it finds all matches; and it used fuzzy comparisons for point <@ box searches, when it needs to use exact comparisons because that's what the <@ operator (rather inconsistently) does.

  The added regression test cases illustrate all three misbehaviors.

  Back-patch to all active branches. (8.4 did not have GiST point_ops, but it still seems prudent to apply the gist_box_same patch to it.)

  Alexander Korotkov, reviewed by Noah Misch

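  Since the gist_box_same change means pre-fix indexes may be inconsistent, affected indexes need a rebuild after updating; a minimal sketch with a hypothetical index name:

      -- any GiST index on box, polygon, circle, or point columns qualifies,
      -- since all four opclasses use gist_box_same
      REINDEX INDEX my_geo_gist_idx;
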
* Fix performance issue in EXPLAIN (ANALYZE, TIMING OFF). (Tom Lane, 2013-02-07)

  Commit af7914c6627bcf0b0ca614e9ce95d3f8056602bf, which added the TIMING option to EXPLAIN, had an oversight: if the TIMING option is disabled then control in InstrStartNode() goes through an elog(DEBUG2) call, which typically does nothing but takes a noticeable amount of time to do it. Tweak the logic to avoid that.

  In HEAD, also change the elog(DEBUG2)'s in instrument.c to elog(ERROR). It's not very clear why they weren't like that to begin with, but this episode shows that not complaining more vociferously about misuse is likely to do little except allow bugs to remain hidden.

  While at it, adjust some code that was making possibly-dangerous assumptions about flag bits being in the rightmost byte of the instrument_options word.

  Problem reported by Pavel Stehule (via Tomas Vondra).

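  For reference, the invocation in question; TIMING OFF keeps row and loop counts while skipping the per-node clock reads:

      EXPLAIN (ANALYZE, TIMING OFF)
      SELECT count(*) FROM generate_series(1, 1000000);
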
* Repair bugs in GiST page splitting code for multi-column indexes. (Tom Lane, 2013-02-07)

  When considering a non-last column in a multi-column GiST index, gistsplit.c tries to improve on the split chosen by the opclass-specific pickSplit function by considering penalties for the next column. However, there were two bugs in this code: it failed to recompute the union keys for the leftmost index columns, even though these might well change after reassigning tuples; and it included the old union keys in the recomputation for the columns it did recompute, so that those keys couldn't get smaller even if they should. The first problem could result in an invalid index in which searches wouldn't find index entries that are in fact present; the second would make the index less efficient to search.

  Both of these errors were caused by misuse of gistMakeUnionItVec, whose API was designed in a way that just begged such errors to be made. There is no situation in which it's safe or useful to compute the union keys for a subset of the index columns, and there is no caller that wants any previous union keys to be included in the computation; so the undocumented choice to treat the union keys as in/out rather than pure output parameters is a waste of code as well as being dangerous. Hence, rather than just making a minimal patch, I've changed the API of gistMakeUnionItVec to remove the "startkey" parameter (it now always processes all index columns) and treat the attr/isnull arrays as purely output parameters.

  In passing, also get rid of a couple of unnecessary and dangerous uses of static variables in gistutil.c. It's remarkable that the one in gistMakeUnionKey hasn't given us portability troubles before now, because in addition to posing a re-entrancy hazard, it was unsafely assuming that a static char[] array would have at least Datum alignment.

  Per investigation of a trouble report from Tomas Vondra. (There are also some bugs in contrib/btree_gist to be fixed, but that seems like material for a separate patch.)

  Back-patch to all supported branches.

* Fix possible failure to send final transaction counts to stats collector. (Tom Lane, 2013-02-07)

  Normally, we suppress sending a tabstats message to the collector unless there were some actual table stats to send. However, during backend exit we should force out the message if there are any transaction commit/abort counts to send, else the session's last few commit/abort counts will never get reported at all. We had logic for this, but the short-circuit test at the top of pgstat_report_stat() ignored the "force" flag, with the consequence that session-ending transactions that touched no database-local tables would not get counted.

  Seems to be an oversight in my commit 641912b4d17fd214a5e5bae4e7bb9ddbc28b144b, which added the "force" flag. That was back in 8.3, so back-patch to all supported versions.

* Enable building with Microsoft Visual Studio 2012. (Andrew Dunstan, 2013-02-06)

  Backpatch to release 9.2.

  Brar Piening and Noah Misch, reviewed by Craig Ringer.

* Prevent execution of enum_recv() from SQL. (Tom Lane, 2013-02-04)

  This function was misdeclared to take cstring when it should take internal. This at least allows crashing the server, and in principle an attacker might be able to use the function to examine the contents of server memory. The correct fix is to adjust the system catalog contents (and fix the regression tests that should have caught this but failed to). However, asking users to correct the catalog contents in existing installations is a pain, so as a band-aid fix for the back branches, install a check in enum_recv() to make it throw error if called with a cstring argument. We will later revert this in HEAD in favor of correcting the catalogs.

  Our thanks to Sumit Soni (via Secunia SVCRP) for reporting this issue.

  Security: CVE-2013-0255

* Reset vacuum_defer_cleanup_age to PGC_SIGHUP. (Simon Riggs, 2013-02-04)

  Revert commit 84725aa5efe11688633b553e58113efce4181f2e

* Translation updates (Peter Eisentraut, 2013-02-04)
* Mark vacuum_defer_cleanup_age as PGC_POSTMASTER. (Simon Riggs, 2013-02-02)

  Following bug analysis of #7819 by Tom Lane

* Fix typo in freeze_table_age implementation (Alvaro Herrera, 2013-02-01)

  The original code used freeze_min_age instead of freeze_table_age. The main consequence of this mistake is that lowering freeze_min_age would cause full-table scans to occur much more frequently, which causes serious issues because the number of writes required is much larger. That feature (freeze_min_age) is supposed to affect only how soon tuples are frozen; some pages should still be skipped due to the visibility map.

  Backpatch to 8.4, where the freeze_table_age feature was introduced.

  Report and patch from Andres Freund

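  For reference, the two GUCs whose roles the fix disentangles (the values shown are the stock defaults):

      SET vacuum_freeze_min_age = 50000000;    -- how soon individual tuples
                                               -- are frozen
      SET vacuum_freeze_table_age = 150000000; -- when VACUUM escalates to a
                                               -- full-table scan instead of
                                               -- skipping pages via the
                                               -- visibility map
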
* Fix plpgsql's reporting of plan-time errors in possibly-simple expressions. (Tom Lane, 2013-01-30)

  exec_simple_check_plan and exec_eval_simple_expr attempted to call GetCachedPlan directly. This meant that if an error was thrown during planning, the resulting context traceback would not include the line normally contributed by _SPI_error_callback. This is already inconsistent, but just to be really odd, a re-execution of the very same expression *would* show the additional context line, because we'd already have cached the plan and marked the expression as non-simple.

  The problem is easy to demonstrate in 9.2 and HEAD because planning of a cached plan doesn't occur at all until GetCachedPlan is done. In earlier versions, it could only be an issue if initial planning had succeeded, then a replan was forced (already somewhat improbable for a simple expression), and the replan attempt failed. Since the issue is mainly cosmetic in older branches anyway, it doesn't seem worth the risk of trying to fix it there. It is worth fixing in 9.2 since the instability of the context printout can affect the results of GET STACKED DIAGNOSTICS, as per a recent discussion on pgsql-novice.

  To fix, introduce a SPI function that wraps GetCachedPlan while installing the correct callback function. Use this instead of calling GetCachedPlan directly from plpgsql. Also introduce a wrapper function for extracting a SPI plan's CachedPlanSource list. This lets us stop including spi_priv.h in pl_exec.c, which was never a very good idea from a modularity standpoint.

  In passing, fix a similar inconsistency that could occur in SPI_cursor_open, which was also calling GetCachedPlan without setting up a context callback.

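  A minimal sketch of where the context printout becomes user-visible, and hence why its stability matters; 1/0 fails during constant folding in the planner, making it a plan-time error in a possibly-simple expression:

      DO $$
      DECLARE
        ctx text;
      BEGIN
        PERFORM 1 / 0;
      EXCEPTION WHEN OTHERS THEN
        GET STACKED DIAGNOSTICS ctx = PG_EXCEPTION_CONTEXT;
        RAISE NOTICE 'context: %', ctx;  -- pre-fix, the traceback for such an
                                         -- error could differ between first
                                         -- and repeat execution
      END $$;
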
* Fix grammar for subscripting or field selection from a sub-SELECT result. (Tom Lane, 2013-01-30)

  Such cases should work, but the grammar failed to accept them because of our ancient precedence hacks to convince bison that extra parentheses around a sub-SELECT in an expression are unambiguous. (Formally, they *are* ambiguous, but we don't especially care whether they're treated as part of the sub-SELECT or part of the expression. Bison cares, though.) Fix by adding a redundant-looking production for this case.

  This is a fine example of why fixing shift/reduce conflicts via precedence declarations is more dangerous than it looks: you can easily cause the parser to reject cases that should work. This has been wrong since commit 3db4056e22b0c6b2adc92543baf8408d2894fe91 or maybe before, and apparently some people have been working around it by inserting no-op casts. That method introduces a dump/reload hazard, as illustrated in bug #7838 from Jan Mate. Hence, back-patch to all active branches.

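  A hedged sketch of the construct in question:

      SELECT (SELECT ARRAY[1,2,3])[2];           -- now accepted; returns 2
      SELECT ((SELECT ARRAY[1,2,3])::int[])[2];  -- the old no-op-cast workaround
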
* DROP OWNED: don't try to drop tablespaces/databases (Alvaro Herrera, 2013-01-28)

  My "fix" for bugs #7578 and #6116 on DROP OWNED at fe3b5eb08a1 not only misstated that it applied to REASSIGN OWNED (which it did not affect), but it also failed to fix the problems fully, because I didn't test the case of owned shared objects. Thus I created a new bug, reported by Thomas Kellerer as #7748, which would cause DROP OWNED to fail with a not-for-user-consumption error message. The code would attempt to drop the database, which not only fails to work because the underlying code does not support that, but is a pretty dangerous and undesirable thing to be doing as well.

  This patch fixes that bug by having DROP OWNED only attempt to process shared objects when grants on them are found, ignoring ownership.

  Backpatch to 8.3, which is as far as the previous bug was backpatched.

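  The resulting behavior, sketched with hypothetical names:

      GRANT CONNECT ON DATABASE somedb TO doomed_role;
      DROP OWNED BY doomed_role;   -- revokes the grant; never tries to drop
                                   -- the database or tablespace itself
      DROP ROLE doomed_role;
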
* Fix SPI documentation for new handling of ExecutorRun's count parameter. (Tom Lane, 2013-01-24)

  Since 9.0, the count parameter has only limited the number of tuples actually returned by the executor. It doesn't affect the behavior of INSERT/UPDATE/DELETE unless RETURNING is specified, because without RETURNING, the ModifyTable plan node doesn't return control to execMain.c for each tuple. And we only check the limit at the top level.

  While this behavioral change was unintentional at the time, discussion of bug #6572 led us to the conclusion that we prefer the new behavior anyway, and so we should just adjust the docs to match rather than change the code. Accordingly, do that. Back-patch as far as 9.0 so that the docs match the code in each branch.

* Fix rare missing cancellations in Hot Standby. (Simon Riggs, 2013-01-24)

  The machinery around XLOG_HEAP2_CLEANUP_INFO failed to correctly pass through the necessary information on latestRemovedXid, avoiding cancellations in some infrequent concurrent update/cleanup scenarios.

  Backpatchable fix to 9.0.

  Detailed bug report and fix by Noah Misch, backpatchable version by me.

* Also fix rotation of csvlog on Windows. (Heikki Linnakangas, 2013-01-24)

  Backpatch to 9.2, like the previous fix.

* Fix failure to rotate postmaster log file for size reasons on Windows. (Tom Lane, 2013-01-23)

  When we eliminated "unnecessary" wakeups of the syslogger process, we broke size-based logfile rotation on Windows, because on that platform data transfer is done in a separate thread. While non-Windows platforms would recheck the output file size after every log message, Windows only did so when the control thread woke up for some other reason, which might be quite infrequent.

  Per bug #7814 from Tsunezumi. Back-patch to 9.2 where the problem was introduced.

  Jeff Janes

* Fix performance problems with autovacuum truncation in busy workloads. (Kevin Grittner, 2013-01-23)

  In situations where there are over 8MB of empty pages at the end of a table, the truncation work for trailing empty pages takes longer than deadlock_timeout, and there is frequent access to the table by processes other than autovacuum, there was a problem with the autovacuum worker process being canceled by the deadlock checking code. The truncation work done by autovacuum up to that point was lost, and the attempt was tried again by a later autovacuum worker. The attempts could continue indefinitely without making progress, consuming resources and blocking other processes for up to deadlock_timeout each time.

  This patch has the autovacuum worker checking whether it is blocking any other thread at 20ms intervals. If such a condition develops, the autovacuum worker will persist the work it has done so far, release its lock on the table, and sleep in 50ms intervals for up to 5 seconds, hoping to be able to re-acquire the lock and try again. If it is unable to get the lock in that time, it moves on and a worker will try to continue later from the point this one left off.

  While this patch doesn't change the rules about when and what to truncate, it does cause the truncation to occur sooner, with less blocking, and with the consumption of fewer resources when there is contention for the table's lock. The only user-visible change other than improved performance is that the table size during truncation may change incrementally instead of just once.

  Backpatched to 9.0 from initial master commit at b19e4250b45e91c9cbdd18d35ea6391ab5961c8d -- before that the differences are too large to be clearly safe.

  Jan Wieck

* Fix error-checking typo in check_TSCurrentConfig(). (Tom Lane, 2013-01-20)

  The code failed to detect an out-of-memory failure.

  Xi Wang

* Protect against SnapshotNow race conditions in pg_tablespace scans. (Tom Lane, 2013-01-18)

  Use of SnapshotNow is known to expose us to race conditions if the tuple(s) being sought could be updated by concurrently-committing transactions. CREATE DATABASE and DROP DATABASE are particularly exposed because they do heavyweight filesystem operations during their scans of pg_tablespace, so that the scans run for a very long time compared to most. Furthermore, the potential consequences of a missed or twice-visited row are nastier than average:

  * createdb() could fail with a bogus "file already exists" error, or silently fail to copy one or more tablespace's worth of files into the new database.

  * remove_dbtablespaces() could miss one or more tablespaces, thus failing to free filesystem space for the dropped database.

  * check_db_file_conflict() could likewise miss a tablespace, leading to an OID conflict that could result in data loss either immediately or in future operations. (This seems of very low probability, though, since a duplicate database OID would be unlikely to start with.)

  Hence, it seems worth fixing these three places to use MVCC snapshots, even though this will someday be superseded by a generic solution to SnapshotNow race conditions.

  Back-patch to all active branches.

  Stephen Frost and Tom Lane

* Unbreak lock conflict detection for Hot Standby. (Robert Haas, 2013-01-18)

  This got broken in the original fast-path locking patch, because I failed to account for the fact that the Hot Standby startup process might take a strong relation lock on a relation in a database to which it is not bound, and confused MyDatabaseId with the database ID of the relation being locked.

  Report and diagnosis by Andres Freund. Final form of patch by me.

* Reject out-of-range dates in to_date(). (Tom Lane, 2013-01-14)

  Dates outside the supported range could be entered, but would not print reasonably, and operations such as conversion to timestamp wouldn't behave sanely either. Since this has the potential to result in undumpable table data, it seems worth back-patching.

  Hitoshi Harada

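  A hedged illustration; the cutoff is the date type's supported range, which ends in year 5874897:

      SELECT to_date('5874898-01-01', 'YYYY-MM-DD');
      -- now rejected with a "date out of range" error instead of silently
      -- storing a value that cannot be printed or dumped
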
* Revert ill-considered change of index-size fudge factor. (Tom Lane, 2013-01-11)

  This partially reverts commit 21a39de5809cd3050a37d2554323cc1d0cbeed9d, restoring the pre-9.2 cost estimates for index usage. That change introduced much too large a bias against larger indexes, as per reports from Jeff Janes and others. The whole thing needs a rewrite, which I've done in HEAD, but the safest thing to do in 9.2 is just to undo this multiplier change.

* Fix potential corruption of lock table in CREATE/DROP INDEX CONCURRENTLY. (Tom Lane, 2013-01-08)

  If VirtualXactLock() has to wait for a transaction that holds its VXID lock as a fast-path lock, it must first convert the fast-path lock to a regular lock. It failed to take the required "partition" lock on the main shared-memory lock table while doing so. This is the direct cause of the assert failure in GetLockStatusData() recently observed in the buildfarm, but more worryingly it could result in arbitrary corruption of the shared lock table if some other process were concurrently engaged in modifying the same partition of the lock table. Fortunately, VirtualXactLock() is only used by CREATE INDEX CONCURRENTLY and DROP INDEX CONCURRENTLY, so the opportunities for failure are fewer than they might have been.

  In passing, improve some comments and be a bit more consistent about order of operations.

* Invent a "one-shot" variant of CachedPlans for better performance.Tom Lane2013-01-04
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | SPI_execute() and related functions create a CachedPlan, execute it once, and immediately discard it, so that the functionality offered by plancache.c is of no value in this code path. And performance measurements show that the extra data copying and invalidation checking done by plancache.c slows down simple queries by 10% or more compared to 9.1. However, enough of the SPI code is shared with functions that do need plan caching that it seems impractical to bypass plancache.c altogether. Instead, let's invent a variant version of cached plans that preserves 99% of the API but doesn't offer any of the actual functionality, nor the overhead. This puts SPI_execute() performance back on par, or maybe even slightly better, than it was before. This change should resolve recent complaints of performance degradation from Dong Ye, Pavel Stehule, and others. By avoiding data copying, this change also reduces the amount of memory needed to execute many-statement SPI_execute() strings, as for instance in a recent complaint from Tomas Vondra. An additional benefit of this change is that multi-statement SPI_execute() query strings are now processed fully serially, that is we complete execution of earlier statements before running parse analysis and planning on following ones. This eliminates a long-standing POLA violation, in that DDL that affects the behavior of a later statement will now behave as expected. Back-patch to 9.2, since this was a performance regression compared to 9.1. (In 9.2, place the added struct fields so as to avoid changing the offsets of existing fields.) Heikki Linnakangas and Tom Lane
* Tolerate timeline switches while "pg_basebackup -X fetch" is running. (Heikki Linnakangas, 2013-01-03)

  If you take a base backup from a standby server with "pg_basebackup -X fetch", and the timeline switches while the backup is being taken, the backup used to fail with an error "requested WAL segment %s has already been removed". This is because the server-side code that sends over the required WAL files would not construct the WAL filename with the correct timeline after a switch. Fix that by using readdir() to scan pg_xlog for all the WAL segments in the range, regardless of timeline.

  Also, include all timeline history files in the backup, if taken with "-X fetch". That fixes another related bug: If a timeline switch happened just before the backup was initiated in a standby, the WAL segment containing the initial checkpoint record contains WAL from the older timeline too. Recovery will not accept that without a timeline history file that lists the older timeline.

  Backpatch to 9.2. Versions prior to that were not affected as you could not take a base backup from a standby before 9.2.

* Keep timeline history files restored from archive in pg_xlog. (Heikki Linnakangas, 2012-12-30)

  The cascading standby patch in 9.2 changed the way WAL files are treated when restored from the archive. Before, they were restored under a temporary filename, and not kept in pg_xlog, but after the patch, they were copied under pg_xlog. This is necessary for a cascading standby to find them, but it also means that if the archive goes offline and a standby is restarted, it can recover back to where it was using the files in pg_xlog. It also means that if you take an offline backup from a standby server, it includes all the required WAL files in pg_xlog.

  However, the same change was not made to timeline history files, so if the WAL segment containing the checkpoint record contains a timeline switch, you will still get an error if you try to restart recovery without the archive, or recover from an offline backup taken from the standby.

  With this patch, timeline history files restored from archive are copied into pg_xlog like WAL files are, so that pg_xlog contains all the files required to recover. This is a corner-case pre-existing issue in 9.2, but even more important in master where it's possible for a standby to follow a timeline switch through streaming replication. To make that possible, the timeline history files must be present in pg_xlog.

* Fix some minor issues in view pretty-printing. (Tom Lane, 2012-12-24)

  Code review for commit 2f582f76b1945929ff07116cd4639747ce9bb8a1: don't use a static variable for what ought to be a deparse_context field, fix non-multibyte-safe test for spaces, avoid useless and potentially O(N^2) (though admittedly with a very small constant) calculations of wrap positions when we aren't going to wrap.

* Prevent failure when RowExpr or XmlExpr is parse-analyzed twice. (Tom Lane, 2012-12-23)

  transformExpr() is required to cope with already-transformed expression trees, for various ugly-but-not-quite-worth-cleaning-up reasons. However, some of its newer subroutines hadn't gotten the memo. This accounts for bug #7763 from Norbert Buchmuller: transformRowExpr() was overwriting the previously determined type of a RowExpr during CREATE TABLE LIKE INCLUDING INDEXES. Additional investigation showed that transformXmlExpr had the same kind of problem, but all the other cases seem to be safe.

  Andres Freund and Tom Lane

* Fix race condition if a file is removed while pg_basebackup is running. (Heikki Linnakangas, 2012-12-21)

  If a relation file was removed when the server-side counterpart of pg_basebackup was just about to open it to send it to the client, you'd get a "could not open file" error. Fix that.

  Backpatch to 9.1; this goes back to when pg_basebackup was introduced.

* Fix grammatical mistake in error message (Peter Eisentraut, 2012-12-20)
* Fix pg_extension_config_dump() to handle update cases more sanely. (Tom Lane, 2012-12-20)

  If pg_extension_config_dump() is executed again for a table already listed in the extension's extconfig, the code was blindly making a new array entry. This does not seem useful. Fix it to replace the existing array entry instead, so that it's possible for extension update scripts to alter the filter conditions for configuration tables.

  In addition, teach ALTER EXTENSION DROP TABLE to check for an extconfig entry for the target table, and remove it if present. This is not a 100% solution because it's allowed for an extension update script to just summarily DROP a member table, and that code path doesn't go through ExecAlterExtensionContentsStmt. We could probably make that case clean things up if we had to, but it would involve sticking a very ugly wart somewhere in the guts of dependency.c. Since on the whole it seems quite unlikely that extension updates would want to remove pre-existing configuration tables, making the case possible with an explicit command seems sufficient.

  Per bug #7756 from Regina Obe. Back-patch to 9.1 where extensions were introduced.

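  A sketch of the now-possible update-script usage, with hypothetical extension and table names (pg_extension_config_dump must be called from an extension script):

      -- first call records the table and filter in extconfig/extcondition
      SELECT pg_extension_config_dump('my_config', 'WHERE NOT standard_entry');
      -- a later call now replaces the stored filter instead of adding a
      -- duplicate array entry
      SELECT pg_extension_config_dump('my_config', '');
      -- dropping the table from the extension also clears its extconfig entry
      ALTER EXTENSION my_ext DROP TABLE my_config;
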
* Fix recycling of WAL segments after changing recovery target timeline. (Heikki Linnakangas, 2012-12-20)

  After the recovery target timeline is changed, we would still recycle and preallocate WAL segments on the old target timeline. Those WAL segments created for the old timeline are a waste of space, although otherwise harmless.

  The problem is that when installing a recycled WAL segment as a future one, ThisTimeLineID is used to construct the filename. ThisTimeLineID is initialized in the checkpointer process to the recovery target timeline at startup, but it was not updated when the startup process chooses a new target timeline (recovery_target_timeline='latest'). To fix, always update ThisTimeLineID before recycling WAL segments at a restartpoint.

  This still leaves a small window where we might install WAL segments under wrong timeline ID, if the target timeline is changed just as we're about to start recycling. Also, when we're not on the target timeline yet, but still replaying some older timeline, we'll install WAL segments to the newer timeline anyway and they will still go wasted. We'll just live with the waste in that situation.

  Commit to 9.2 and 9.1. Older versions didn't change recovery target timeline after startup, and for master, I'll commit a slightly different variant of this.

* Check if we've reached end-of-backup point also if no redo is required. (Heikki Linnakangas, 2012-12-19)

  If you restored from a backup taken from a standby, and the last record in the backup is the checkpoint record, i.e. there is no redo required except for the checkpoint record, we would fail to notice that we've reached the end-of-backup point, and that the database is consistent. The result was an error "WAL ends before end of online backup".

  To fix, move the have-we-reached-end-of-backup check into CheckRecoveryConsistency(), which is already responsible for similar checks with minRecoveryPoint, and is called in the right places.

  Backpatch to 9.2; this check and bug did not exist before that.

* Fix failure to ignore leftover temp tables after a server crash. (Tom Lane, 2012-12-17)

  During crash recovery, we remove disk files belonging to temporary tables, but the system catalog entries for such tables are intentionally not cleaned up right away. Instead, the first backend that uses a temp schema is expected to clean out any leftover objects therein. This approach requires that we be careful to ignore leftover temp tables (since any actual access attempt would fail), *even if their BackendId matches our session*, if we have not yet established use of the session's corresponding temp schema. That worked fine in the past, but was broken by commit debcec7dc31a992703911a9953e299c8d730c778 which incorrectly removed the rd_islocaltemp relcache flag. Put it back, and undo various changes that substituted tests like "rel->rd_backend == MyBackendId" for use of a state-aware flag. Per trouble report from Heikki Linnakangas.

  Back-patch to 9.1 where the erroneous change was made. In the back branches, be careful to add rd_islocaltemp in a spot in the struct that was alignment padding before, so as not to break existing add-on code.

* Fix filling of postmaster.pid in bootstrap/standalone mode. (Tom Lane, 2012-12-16)

  We failed to ever fill the sixth line (LISTEN_ADDR), which caused the attempt to fill the seventh line (SHMEM_KEY) to fail, so that the shared memory key never got added to the file in standalone mode. This has been broken since we added more content to our lock files in 9.1.

  To fix, tweak the logic in CreateLockFile to add an empty LISTEN_ADDR line in standalone mode. This is a tad grotty, but since that function already knows almost everything there is to know about the contents of lock files, it doesn't seem that it's any better to hack it elsewhere.

  It's not clear how significant this bug really is, since a standalone backend should never have any children and thus it seems not critical to be able to check the nattch count of the shmem segment externally. But I'm going to back-patch the fix anyway.

  This problem had escaped notice because of an ancient (and in hindsight pretty dubious) decision to suppress LOG-level messages by default in standalone mode; so that the elog(LOG) complaint in AddToDataDirLockFile that should have warned of the problem didn't do anything. Fixing that is material for a separate patch though.

* In multi-insert, don't go into infinite loop on a huge tuple and fillfactor. (Heikki Linnakangas, 2012-12-12)

  If a tuple is larger than page size minus space reserved for fillfactor, heap_multi_insert would never find a page that it fits in and repeatedly ask for a new page from RelationGetBufferForTuple. If a tuple is too large to fit on any page, taking fillfactor into account, RelationGetBufferForTuple will always expand the relation. In a normal insert, heap_insert will accept that and put the tuple on the new page. heap_multi_insert, however, does a fillfactor check of its own, and doesn't accept the newly-extended page RelationGetBufferForTuple returns, even though there is no other choice to make the tuple fit.

  Fix that by making the logic in heap_multi_insert more like the heap_insert logic. The first tuple is always put on the page RelationGetBufferForTuple gives us, and the fillfactor check is only applied to the subsequent tuples.

  Report from David Gould, although I didn't use his patch.

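  A hedged sketch of a setup that could reach this code path (COPY is the main user of multi-insert; the names and file are hypothetical, and a row must still be large after compression/TOAST to trigger the bug):

      CREATE TABLE t (payload text) WITH (fillfactor = 10);
      -- a single row larger than the fillfactor-limited free space used to
      -- make heap_multi_insert loop forever asking for yet another new page
      COPY t FROM '/tmp/one_big_row.txt';
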
* Add defenses against integer overflow in dynahash numbuckets calculations. (Tom Lane, 2012-12-11)

  The dynahash code requires the number of buckets in a hash table to fit in an int; but since we calculate the desired hash table size dynamically, there are various scenarios where we might calculate too large a value. The resulting overflow can lead to infinite loops, division-by-zero crashes, etc. I (tgl) had previously installed some defenses against that in commit 299d1716525c659f0e02840e31fbe4dea3, but that covered only one call path. Moreover it worked by limiting the request size to work_mem, but in a 64-bit machine it's possible to set work_mem high enough that the problem appears anyway. So let's fix the problem at the root by installing limits in the dynahash.c functions themselves.

  Trouble report and patch by Jeff Davis.

* Consistency check should compare last record replayed, not last record read. (Heikki Linnakangas, 2012-12-11)

  EndRecPtr is the last record that we've read, but not necessarily yet replayed. CheckRecoveryConsistency should compare minRecoveryPoint with the last replayed record instead. This caused recovery to think it's reached consistency too early.

  Now that we do the check in CheckRecoveryConsistency correctly, we have to move the call of that function to after redoing a record. The current place, after reading a record but before replaying it, is wrong. In particular, if there are no more records after the one ending at minRecoveryPoint, we don't enter hot standby until one extra record is generated and read by the standby, and CheckRecoveryConsistency is called. These two bugs conspired to make the code appear to work correctly, except for the small window between reading the last record that reaches minRecoveryPoint, and replaying it.

  In passing, rename recoveryLastRecPtr, which is the last record replayed, to lastReplayedEndRecPtr. This makes it slightly less confusing with replayEndRecPtr, which is the last record read that we're about to replay.

  Original report from Kyotaro HORIGUCHI, further diagnosis by Fujii Masao. Backpatch to 9.0, where Hot Standby subtly changed the test from "minRecoveryPoint < EndRecPtr" to "minRecoveryPoint <= EndRecPtr". The former works because where the test is performed, we have always read one more record than we've replayed.

* Update minimum recovery point on truncation. (Heikki Linnakangas, 2012-12-10)

  If a file is truncated, we must update minRecoveryPoint. Once a file is truncated, there's no going back; it would not be safe to stop recovery at a point earlier than that anymore.

  Per report from Kyotaro HORIGUCHI. Backpatch to 8.4. Before that, minRecoveryPoint was not updated during recovery at all.

* Fix assorted bugs in privileges-for-types patch. (Tom Lane, 2012-12-09)

  Commit 729205571e81b4767efc42ad7beb53663e08d1ff added privileges on data types, but there were a number of oversights. The implementation of default privileges for types missed a few places, and pg_dump was utterly innocent of the whole concept.

  Per bug #7741 from Nathan Alden, and subsequent wider investigation.

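  The feature surface involved, sketched with hypothetical names; both forms belong to the 9.2 type-privileges feature this patch repairs:

      GRANT USAGE ON TYPE mytype TO app_role;
      ALTER DEFAULT PRIVILEGES IN SCHEMA public
          GRANT USAGE ON TYPES TO app_role;
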
* Fix intermittent crash in DROP INDEX CONCURRENTLY. (Tom Lane, 2012-12-05)

  When deleteOneObject closes and reopens the pg_depend relation, we must see to it that the relcache pointer held by the calling function (typically performMultipleDeletions) is updated. Usually the relcache entry is retained so that the pointer value doesn't change, which is why the problem had escaped notice ... but after a cache flush event there's no guarantee that the same memory will be reassigned. To fix, change the recursive functions' APIs so that we pass around a "Relation *" not just "Relation".

  Per investigation of occasional buildfarm failures. This is trivial to reproduce with -DCLOBBER_CACHE_ALWAYS, which points up the sad lack of any buildfarm member running that way on a regular basis.