path: root/src/backend
* Fix off-by-one bug in xlog reading logic (Alvaro Herrera, 2013-01-18)

  Bug reported by Michael Paquier.

  Author: Andres Freund

* Now that START_REPLICATION returns the next timeline's ID after reaching end of timeline, take advantage of that in walreceiver (Heikki Linnakangas, 2013-01-18)

  Startup process is still in control of choosing the target timeline, by scanning the timeline history files present in pg_xlog, but walreceiver now uses the next timeline's ID to fetch its history file immediately after it has finished streaming the old timeline. Before, the standby would first try to restart streaming on the old timeline, which fetches the missing timeline history file as a side-effect, and only then restart from the new timeline. This patch eliminates the extra iteration, which speeds up the timeline switch and reduces the noise in the log caused by the extra restart on the old timeline.

* Use the right timeline when beginning to stream from master (Heikki Linnakangas, 2013-01-18)

  The xlogreader refactoring broke the logic to decide which timeline to start streaming from. XLogPageRead() uses the timeline history to check which timeline the requested WAL position falls into. However, after the refactoring, XLogPageRead() is always first called with the first page in the segment, to verify the segment header, and only then with the actual WAL position we're interested in. That first read of the segment's header made XLogPageRead() always start streaming from the old timeline containing the segment header, not the timeline containing the actual record, if there was a timeline switch within the segment.

  I thought I fixed this yesterday, but that fix was too narrow and only fixed this for the corner case that the timeline switch happened in the first page of the segment. To fix this more robustly, pass the position of the record we're actually interested in explicitly to XLogPageRead, and use that to decide which timeline to read from, rather than deducing it from the page and offset.

  Per report from Fujii Masao.

* When xlogreader asks the callback function to read a page, make sure we get a large enough part of the page to include the beginning of the next record we're interested in (Heikki Linnakangas, 2013-01-17)

  The XLogPageRead callback uses the requested length to decide which timeline to stream WAL from, and if the first call is short and the page contains a timeline switch, we'll repeatedly try to stream that page from the old timeline, and never get across the timeline switch.

* I added a result set to START_STREAMING command, but neglected walreceiver (Heikki Linnakangas, 2013-01-17)

  The patch to allow pg_receivexlog to switch timeline added a result set after copy has ended in the START_STREAMING command, to return the next timeline's ID to the client. But walreceiver didn't get the memo, and threw an error on the unexpected result set. Fix.

* Accelerate end-of-transaction dropping of relations (Alvaro Herrera, 2013-01-17)

  When relations are dropped, at end of transaction we need to remove the files and clean the buffer pool of buffers containing pages of those relations. Previously we would scan the buffer pool once per relation to clean up buffers. When there are many relations to drop, the repeated scans make this process slow; so we now instead pass a list of relations to drop and scan the pool once, checking each buffer against the passed list. When the number of relations is larger than a threshold (which as of this patch is set to 20 relations) we sort the array before starting, and bsearch the array; when it's smaller, we simply scan the array linearly each time, because that's faster. The exact optimal threshold value depends on many factors, but the difference is not likely to be significant enough to justify making it user-settable.

  This has been measured to be a significant win (a 15x win when dropping 100,000 relations; an extreme case, but reportedly a real one).

  Author: Tomas Vondra, some tweaks by me
  Reviewed by: Robert Haas, Shigeru Hanada, Andres Freund, Álvaro Herrera

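  A minimal standalone sketch of the threshold idea described above (names and structure are illustrative, not the actual bufmgr code):

      #include <stdlib.h>

      #define DROP_SORT_THRESHOLD 20          /* cutoff used by the patch */

      typedef unsigned int Oid;

      static int
      oid_cmp(const void *a, const void *b)
      {
          Oid oa = *(const Oid *) a, ob = *(const Oid *) b;
          return (oa > ob) - (oa < ob);
      }

      /* One pass over all buffers: sort the drop list once if it is
       * large, then test each buffer's relation against it. */
      static void
      drop_rel_buffers(Oid *buf_rel, size_t nbuffers,
                       Oid *dropped, size_t ndropped)
      {
          int use_bsearch = ndropped > DROP_SORT_THRESHOLD;

          if (use_bsearch)
              qsort(dropped, ndropped, sizeof(Oid), oid_cmp);

          for (size_t i = 0; i < nbuffers; i++)
          {
              int hit;

              if (use_bsearch)
                  hit = bsearch(&buf_rel[i], dropped, ndropped,
                                sizeof(Oid), oid_cmp) != NULL;
              else
              {
                  hit = 0;
                  for (size_t j = 0; j < ndropped; j++)
                      if (dropped[j] == buf_rel[i]) { hit = 1; break; }
              }

              if (hit)
                  /* invalidate this buffer here */;
          }
      }
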
* Make pg_receivexlog and pg_basebackup -X stream work across timeline switches (Heikki Linnakangas, 2013-01-17)

  This mirrors the changes done earlier to the server in standby mode. When receivelog reaches the end of a timeline, as reported by the server, it fetches the timeline history file of the next timeline, and restarts streaming from the new timeline by issuing a new START_STREAMING command.

  When pg_receivexlog crosses a timeline, it leaves the .partial suffix on the last segment on the old timeline. This helps you to tell apart a partial segment left in the directory because of a timeline switch from a completed segment. If you just follow a single server, it won't make a difference, but it can be significant in more complicated scenarios where new WAL is still generated on the old timeline.

  This includes two small changes to the streaming replication protocol: First, when you reach the end of timeline while streaming, the server now sends the TLI of the next timeline in the server's history to the client. pg_receivexlog uses that as the next timeline, so that it doesn't need to parse the timeline history file like a standby server does. Second, when the BASE_BACKUP command sends the begin and end WAL positions, it now also sends the timeline IDs corresponding to those positions.

* Improve memory space management in tuplesort and tuplestore (Tom Lane, 2013-01-17)

  The code originally just doubled the size of the tuple-pointer array so long as that would fit in allowedMem. This could result in failing to use as much as half of allowedMem, if (as is typical) the last doubling attempt didn't quite fit. Worse, we might double the array size but be unable to use most of the added slots, because there was no room left within the allowedMem limit for tuples the slots should point to.

  To fix, double only so long as we've used less than half of allowedMem in total. Then do one more array enlargement, but scale it based on total memory consumption so far. This will work nicely as long as the average tuple size is reasonably stable, and in any case should be better than the old method.

  This change will result in large sort operations consuming a larger fraction of work_mem than they typically did in the past. The release notes should mention that users may want to revisit their work_mem settings, if they'd tuned those settings based on the old behavior of sorting.

  Jeff Janes, reviewed by Peter Geoghegan and Robert Haas

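  A sketch of the growth rule as described above (illustrative only; the real logic lives in grow_memtuples in tuplesort.c, and the function here is invented):

      /* Decide the new slot-array size.  "mem_used" is total memory
       * consumed so far (array plus tuples); "allowed_mem" is the budget. */
      static size_t
      choose_array_size(size_t nslots, size_t mem_used, size_t allowed_mem)
      {
          if (mem_used < allowed_mem / 2)
              return nslots * 2;          /* cheap case: plain doubling */

          /* Final enlargement: extrapolate from the average memory cost
           * per slot so far to how many slots the whole budget can hold. */
          double per_slot = (double) mem_used / (double) nslots;
          size_t target = (size_t) ((double) allowed_mem / per_slot);

          return target > nslots ? target : nslots;
      }
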
* Fix a couple of error-handling bugs in the xlogreader patch (Heikki Linnakangas, 2013-01-17)

  XLogReadRecord should reset its state on every error, to make sure it re-reads the page on next call. It was inconsistent in that some errors did that, but some did not.

  In ReadRecord(), don't give up on an error if we're in standby mode. The loop was set up to retry, but the checks within the loop broke out of the loop on any error.

  Andres Freund, with some tweaking by me.

* Make GiST indexes on-disk compatible with 9.2 again (Heikki Linnakangas, 2013-01-17)

  The patch that turned XLogRecPtr into a uint64 inadvertently changed the on-disk format of GiST indexes, because the NSN field in the GiST page opaque is an XLogRecPtr. That breaks pg_upgrade. Revert the format of that field back to the two-field struct that XLogRecPtr was before. This is the same as what we did to LSNs in the page header to avoid changing the on-disk format.

  Bump catversion, as this invalidates any existing GiST indexes built on 9.3devel.

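  The two-field on-disk form looks roughly like this (a sketch close to what bufpage.h ended up with; verify the names against the tree before relying on them — in the backend, uint32/uint64 come from c.h):

      typedef unsigned int uint32;
      typedef unsigned long long uint64;

      typedef struct
      {
          uint32      xlogid;         /* high 32 bits of the LSN */
          uint32      xrecoff;        /* low 32 bits of the LSN */
      } PageXLogRecPtr;

      /* convert between the on-disk pair and the in-memory uint64 */
      #define PageXLogRecPtrGet(val) \
          ((uint64) (val).xlogid << 32 | (val).xrecoff)
      #define PageXLogRecPtrSet(ptr, lsn) \
          ((ptr).xlogid = (uint32) ((lsn) >> 32), \
           (ptr).xrecoff = (uint32) (lsn))
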
* Base the default SSL ciphers on DEFAULT instead of ALL (Magnus Hagander, 2013-01-17)

  It's better to start from what the OpenSSL people consider a good default and then remove insecure things (low encryption, exportable encryption and md5 at this point) from that, instead of starting from everything that exists and removing from that. We trust the OpenSSL people to make good choices about what the default is.

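  In postgresql.conf terms, the resulting default looks like this (quoted from memory; check the commit for the exact string):

      ssl_ciphers = 'DEFAULT:!LOW:!EXP:!MD5:@STRENGTH'
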
* Split out XLog reading as an independent facility (Alvaro Herrera, 2013-01-16)

  This new facility can not only be used by xlog.c to carry out crash recovery, but also by external programs. By supplying a function to read XLog pages from somewhere, all the WAL reading can be used for completely different purposes.

  For the standard backend use, the behavior should be pretty much the same as previously. As for non-backend programs, a hypothetical pg_xlogdump program is now closer to reality, but some more backend support is still necessary.

  This patch was originally submitted by Andres Freund in a different form, but Heikki Linnakangas opted for and authored another design of the concept. Andres has advanced the patch since Heikki's initial version. Review and some (mostly cosmetic) changes by me.

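  The rough shape of the facility, with signatures paraphrased from memory (see access/xlogreader.h for the real ones; this compiles only inside the backend tree):

      #include "postgres.h"
      #include "access/xlogreader.h"

      /* The caller supplies the page-read callback, so the same decoding
       * loop can pull WAL from a file, the archive, or a network stream. */
      static int
      my_read_page(XLogReaderState *reader, XLogRecPtr target_page_ptr,
                   int req_len, XLogRecPtr target_rec_ptr,
                   char *read_buf, TimeLineID *page_tli)
      {
          /* fill read_buf with the page containing target_page_ptr;
             return the number of bytes read, or -1 on failure */
          return -1;              /* stub */
      }

      static void
      walk_wal(XLogRecPtr start_lsn)
      {
          char            *errormsg;
          XLogReaderState *reader = XLogReaderAllocate(&my_read_page, NULL);
          XLogRecord      *record = XLogReadRecord(reader, start_lsn,
                                                   &errormsg);

          /* loop over records, decode, etc. */
          (void) record;
      }
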
* Rework order of checks in ALTER / SET SCHEMA (Alvaro Herrera, 2013-01-15)

  When attempting to move an object into the schema in which it already was, for most object classes we were correctly complaining about exactly that ("object is already in schema"); but for some other object classes, such as functions, we were instead complaining of a name collision ("object already exists in schema"). The latter is wrong and misleading, per complaint from Robert Haas in CA+TgmoZ0+gNf7RDKRc3u5rHXffP=QjqPZKGxb4BsPz65k7qnHQ@mail.gmail.com

  To fix, refactor the way these checks are done. As a bonus, the resulting code is smaller and can also share some code with the Rename cases.

  While at it, remove use of getObjectDescriptionOids() in error messages. These are normally disallowed because of translatability considerations, but this one had slipped through since 9.1. (Not sure that this is worth backpatching, though, as it would create some untranslated messages in back branches.)

  This is loosely based on a patch by KaiGai Kohei, heavily reworked by me.

* Fix hash_update_hash_key() to handle same-bucket case correctly (Tom Lane, 2013-01-14)

  Original coding would corrupt the hashtable if the item being updated was at the end of its bucket chain and the new hash key hashed to that same bucket.

  Diagnosis and fix by Heikki Linnakangas.

* Return value of lseek() can be negative on failure (Heikki Linnakangas, 2013-01-15)

  Because the return value of lseek() was assigned to an unsigned size_t variable, we'd fail to notice an error return code of -1. The compiler gave a warning about this.

  Andres Freund

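  A standalone illustration of the bug pattern (not the backend code; the buggy comparison is exactly what draws the compiler warning):

      #include <stdio.h>
      #include <unistd.h>
      #include <fcntl.h>

      int main(void)
      {
          int fd = open("/tmp/example", O_RDONLY);

          /* Buggy: size_t is unsigned, so -1 becomes a huge value and
           * the check below can never fire ("comparison always false"). */
          size_t bad = lseek(fd, 0, SEEK_END);
          if (bad < 0)
              perror("lseek");

          /* Fixed: lseek() returns off_t, a signed type. */
          off_t pos = lseek(fd, 0, SEEK_END);
          if (pos < 0)
              perror("lseek");
          else
              printf("file is %lld bytes\n", (long long) pos);
          return 0;
      }
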
* Fix obsolete SQL syntax in comment (Tom Lane, 2013-01-14)

  This was legal back in the days of add_missing_from, though perhaps never good style. It's not legal anymore ...

  Jan Urbański

* Reject out-of-range dates in to_date() (Tom Lane, 2013-01-14)

  Dates outside the supported range could be entered, but would not print reasonably, and operations such as conversion to timestamp wouldn't behave sanely either. Since this has the potential to result in undumpable table data, it seems worth back-patching.

  Hitoshi Harada

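  The essence of the fix, paraphrased rather than quoted (IS_VALID_JULIAN is the existing range-check macro from utils/datetime.h; the surrounding variable names and error code are approximations):

      if (!IS_VALID_JULIAN(tm->tm_year, tm->tm_mon, tm->tm_mday))
          ereport(ERROR,
                  (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),
                   errmsg("date out of range: \"%s\"", date_str)));
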
* Remove spurious space (Alvaro Herrera, 2013-01-14)

  Andres Freund

* Prevent very-low-probability PANIC during PREPARE TRANSACTION (Tom Lane, 2013-01-13)

  The code in PostPrepare_Locks supposed that it could reassign locks to the prepared transaction's dummy PGPROC by deleting the PROCLOCK table entries and immediately creating new ones. This was safe when that code was written, but since we invented partitioning of the shared lock table, it's not safe --- another process could steal away the PROCLOCK entry in the short interval when it's on the freelist. Then, if we were otherwise out of shared memory, PostPrepare_Locks would have to PANIC, since it's too late to back out of the PREPARE at that point.

  Fix by inventing a dynahash.c function to atomically update a hashtable entry's key. (This might possibly have other uses in future.)

  This is an ancient bug that in principle we ought to back-patch, but the odds of someone hitting it in the field seem really tiny, because (a) the risk window is small, and (b) nobody runs servers with maxed-out lock tables for long, because they'll be getting non-PANIC out-of-memory errors anyway. So fixing it in HEAD seems sufficient, at least until the new code has gotten some testing.

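  Usage of the new dynahash call looks roughly like this (a sketch; the real caller is PostPrepare_Locks in lock.c, and the variable names here are invented):

      /* Re-key the PROCLOCK entry in place.  The entry never passes
       * through the freelist, so no other process can steal it
       * mid-flight.  Returns false if the new key already exists. */
      if (!hash_update_hash_key(LockMethodProcLockHash,
                                proclock, &newtag))
          elog(PANIC, "duplicate PROCLOCK entry found");
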
* Make spelling more uniform (Peter Eisentraut, 2013-01-13)

* Update comments for elog_start() (Tom Lane, 2013-01-13)

  Forgot I was going to do this as part of the previous patch ...

* Improve handling of ereport(ERROR) and elog(ERROR) (Tom Lane, 2013-01-13)

  In commit 71450d7fd6c7cf7b3e38ac56e363bff6a681973c, we added code to inform suitably-intelligent compilers that ereport() doesn't return if the elevel is ERROR or higher. This patch extends that to elog(), and also fixes a double-evaluation hazard that the previous commit created in ereport(), as well as reducing the emitted code size.

  The elog() improvement requires the compiler to support __VA_ARGS__, which should be available in just about anything nowadays since it's required by C99. But our minimum language baseline is still C89, so add a configure test for that.

  The previous commit assumed that ereport's elevel could be evaluated twice, which isn't terribly safe --- there are already counterexamples in xlog.c. On compilers that have __builtin_constant_p, we can use that to protect the second test, since there's no possible optimization gain if the compiler doesn't know the value of elevel. Otherwise, use a local variable inside the macros to prevent double evaluation. The local-variable solution is inferior because (a) it leads to useless code being emitted when elevel isn't constant, and (b) it increases the optimization level needed for the compiler to recognize that subsequent code is unreachable. But it seems better than not teaching non-gcc compilers about unreachability at all.

  Lastly, if the compiler has __builtin_unreachable(), we can use that instead of abort(), resulting in a noticeable code savings since no function call is actually emitted. However, it seems wise to do this only in non-assert builds. In an assert build, continue to use abort(), so that the behavior will be predictable and debuggable if the "impossible" happens.

  These changes involve making the ereport and elog macros emit do-while statement blocks, not just expressions, which forces small changes in a few call sites.

  Andres Freund, Tom Lane, Heikki Linnakangas

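  A simplified sketch of the resulting macro shape (the real definitions are in utils/elog.h; "emit_error_stuff" is a stand-in for the actual error-reporting calls):

      #define ereport(elevel, rest) \
          do { \
              emit_error_stuff(elevel, rest); \
              /* The second look at elevel is guarded by \
               * __builtin_constant_p, so a side-effecting argument is \
               * never evaluated twice. */ \
              if (__builtin_constant_p(elevel) && (elevel) >= ERROR) \
                  pg_unreachable();   /* __builtin_unreachable(), or \
                                       * abort() in assert builds */ \
          } while (0)
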
* Redesign the planner's handling of index-descent cost estimation (Tom Lane, 2013-01-11)

  Historically we've used a couple of very ad-hoc fudge factors to try to get the right results when indexes of different sizes would satisfy a query with the same number of index leaf tuples being visited. In commit 21a39de5809cd3050a37d2554323cc1d0cbeed9d I tweaked one of these fudge factors, with results that proved disastrous for larger indexes. Commit bf01e34b556ff37982ba2d882db424aa484c0d07 fudged it some more, but still with not a lot of principle behind it.

  What seems like a better way to address these issues is to explicitly model index-descent costs, since that's what's really at stake when considering different indexes with similar leaf-page-level costs. We tried that once long ago, and found that charging random_page_cost per page descended through was way too much, because upper btree levels tend to stay in cache in real-world workloads. However, there's still CPU costs to think about, and the previous fudge factors can be seen as a crude attempt to account for those costs. So this patch replaces those fudge factors with explicit charges for the number of tuple comparisons needed to descend the index tree, plus a small charge per page touched in the descent. The cost multipliers are chosen so that the resulting charges are in the vicinity of the historical (pre-9.2) fudge factors for indexes of up to about a million tuples, while not ballooning unreasonably beyond that, as the old fudge factor did (even more so in 9.2).

  To make this work accurately for btree indexes, add some code that allows extraction of the known root-page height from a btree. There's no equivalent number readily available for other index types, but we can use the log of the number of index pages as an approximate substitute.

  This seems like too much of a behavioral change to risk back-patching, but it should improve matters going forward. In 9.2 I'll just revert the fudge-factor change.

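  For a btree, the new charges have roughly this shape (constants and names paraphrased from the commit-era selfuncs.c; treat them as illustrative):

      /* one tuple comparison per step of a binary search over all
       * index tuples, charged at cpu_operator_cost apiece ... */
      descent_cost = ceil(log(index_tuples) / log(2.0)) * cpu_operator_cost;

      /* ... plus a small CPU charge per page touched in the descent,
       * using the measured root-page height for btrees (other index
       * types substitute the log of the number of index pages) */
      descent_cost += (tree_height + 1) * 50.0 * cpu_operator_cost;
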
* Fix potential corruption of lock table in CREATE/DROP INDEX CONCURRENTLY (Tom Lane, 2013-01-08)

  If VirtualXactLock() has to wait for a transaction that holds its VXID lock as a fast-path lock, it must first convert the fast-path lock to a regular lock. It failed to take the required "partition" lock on the main shared-memory lock table while doing so. This is the direct cause of the assert failure in GetLockStatusData() recently observed in the buildfarm, but more worryingly it could result in arbitrary corruption of the shared lock table if some other process were concurrently engaged in modifying the same partition of the lock table.

  Fortunately, VirtualXactLock() is only used by CREATE INDEX CONCURRENTLY and DROP INDEX CONCURRENTLY, so the opportunities for failure are fewer than they might have been.

  In passing, improve some comments and be a bit more consistent about order of operations.

* Fix incorrect error message when schema-CREATE permission is absent (Robert Haas, 2013-01-07)

  Report by me. Fix by KaiGai Kohei.

* Make some spelling more consistent (Peter Eisentraut, 2013-01-05)

* Invent a "one-shot" variant of CachedPlans for better performance.Tom Lane2013-01-04
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | SPI_execute() and related functions create a CachedPlan, execute it once, and immediately discard it, so that the functionality offered by plancache.c is of no value in this code path. And performance measurements show that the extra data copying and invalidation checking done by plancache.c slows down simple queries by 10% or more compared to 9.1. However, enough of the SPI code is shared with functions that do need plan caching that it seems impractical to bypass plancache.c altogether. Instead, let's invent a variant version of cached plans that preserves 99% of the API but doesn't offer any of the actual functionality, nor the overhead. This puts SPI_execute() performance back on par, or maybe even slightly better, than it was before. This change should resolve recent complaints of performance degradation from Dong Ye, Pavel Stehule, and others. By avoiding data copying, this change also reduces the amount of memory needed to execute many-statement SPI_execute() strings, as for instance in a recent complaint from Tomas Vondra. An additional benefit of this change is that multi-statement SPI_execute() query strings are now processed fully serially, that is we complete execution of earlier statements before running parse analysis and planning on following ones. This eliminates a long-standing POLA violation, in that DDL that affects the behavior of a later statement will now behave as expected. Back-patch to 9.2, since this was a performance regression compared to 9.1. (In 9.2, place the added struct fields so as to avoid changing the offsets of existing fields.) Heikki Linnakangas and Tom Lane
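  The conceptual call flow in SPI, with argument lists paraphrased rather than exact (see plancache.c for the real signatures):

      /* One-shot: no copy of the parse trees, no invalidation tracking,
       * because the plan is executed once and thrown away. */
      plansource = CreateOneShotCachedPlan(raw_parse_tree,
                                           query_string, commandTag);
      cplan = GetCachedPlan(plansource, params, true);  /* plans in place */
      /* ... run the statements in cplan->stmt_list ... */
      ReleaseCachedPlan(cplan, true);
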
* Tolerate timeline switches while "pg_basebackup -X fetch" is running (Heikki Linnakangas, 2013-01-03)

  If you take a base backup from a standby server with "pg_basebackup -X fetch", and the timeline switches while the backup is being taken, the backup used to fail with an error "requested WAL segment %s has already been removed". This is because the server-side code that sends over the required WAL files would not construct the WAL filename with the correct timeline after a switch. Fix that by using readdir() to scan pg_xlog for all the WAL segments in the range, regardless of timeline.

  Also, include all timeline history files in the backup, if taken with "-X fetch". That fixes another related bug: If a timeline switch happened just before the backup was initiated in a standby, the WAL segment containing the initial checkpoint record contains WAL from the older timeline too. Recovery will not accept that without a timeline history file that lists the older timeline.

  Backpatch to 9.2. Versions prior to that were not affected as you could not take a base backup from a standby before 9.2.

* Delay reading timeline history file until it's fetched from master (Heikki Linnakangas, 2013-01-03)

  Streaming replication can fetch any missing timeline history files from the master, but recovery would read the timeline history file for the target timeline before reading the checkpoint record, and before walreceiver has had a chance to fetch it from the master. Delay reading it, and the sanity checks involving timeline history, until after reading the checkpoint record.

  There is at least one scenario where this makes a difference: if you take a base backup from a standby server right after a timeline switch, the WAL segment containing the initial checkpoint record will begin with an older timeline ID. Without the timeline history file, recovering that file will fail as the older timeline ID is not recognized to be an ancestor of the target timeline. If you try to recover from such a backup, using only streaming replication to fetch the WAL, this patch is required for that to work.

* Fix IsUnderPostmaster/EXEC_BACKEND confusion (Alvaro Herrera, 2013-01-02)

* Set MaxBackends only in bootstrap and standalone modes (Alvaro Herrera, 2013-01-02)

  ... not in auxiliary processes. I managed to overlook the fact that I had disabled assertions on my HEAD checkout long ago. Hopefully this will turn the buildfarm green again, and put an end to today's silliness.

* Move tar function headers to pgtar.h (Magnus Hagander, 2013-01-02)

  This makes it possible to include them only where they are used, so we can avoid the conflict between the uid_t and gid_t datatypes that happened in plperl (since plperl doesn't need the tar functions).

* Make sure MaxBackends is always set (Alvaro Herrera, 2013-01-02)

  Auxiliary and bootstrap processes weren't getting it, causing initdb to fail completely.

* Fix background workers for EXEC_BACKEND (Alvaro Herrera, 2013-01-02)

  Commit da07a1e8 was broken for EXEC_BACKEND because I failed to realize that the MaxBackends recomputation needed to be duplicated by subprocesses in SubPostmasterMain. However, instead of having the value be recomputed at all, it's better to assign the correct value at postmaster initialization time, and have it be propagated to exec'ed backends via BackendParameters.

  MaxBackends stays as zero until after modules in shared_preload_libraries have had a chance to register bgworkers, since the value is going to be untrustworthy till that's finished.

  Heikki Linnakangas and Álvaro Herrera

* Fix bug in streaming replication over multiple tli switches (Heikki Linnakangas, 2013-01-02)

  After receiving some WAL over streaming replication, try to open the file from the timeline we're currently receiving, not recoveryTargetTLI. They are usually the same, which is why this wasn't noticed before, but you'd get an error if there has been more than one timeline switch between the current point in WAL and the recovery target.

* Fix silly typo in code, which broke the check for reaching consistency (Heikki Linnakangas, 2013-01-02)

* Update copyrights for 2013 (Bruce Momjian, 2013-01-01)

  Fully update git head, and update back branches in ./COPYRIGHT and legal.sgml files.

* Unify some tar functionality across different parts (Magnus Hagander, 2013-01-01)

  Move some of the tar functionality that existed mostly duplicated in both pg_dump and the walsender basebackup functionality into port/tar.c instead, so it can be used from both. It will also be used by pg_basebackup in the future, which would've caused a third copy of it around.

  Zoltan Boszormenyi and Magnus Hagander

* Fix ruleutils to cope with conflicts from adding/dropping/renaming columns (Tom Lane, 2012-12-31)

  In commit 11e131854f8231a21613f834c40fe9d046926387, we improved the rule/view dumping code so that it would produce valid query representations even if some of the tables involved in a query had been renamed since the query was parsed. This patch extends that idea to fix problems that occur when individual columns are renamed, or added or dropped.

  As before, the core of the fix is to assign unique new aliases when a name conflict has been created. This is complicated by the JOIN USING feature, which requires the same column alias to be used in both input relations, but we can handle that with a sufficiently complex approach to assigning aliases.

  A fortiori, this patch takes care of situations where the query didn't have unique column names to begin with, such as in a recent complaint from Bryan Nuse. (Because of expansion of "SELECT *", re-parsing a dumped query can require column name uniqueness even though the original text did not.)

* Fix compiler warning about uninitialized variable (Peter Eisentraut, 2012-12-31)

* Keep timeline history files restored from archive in pg_xlog (Heikki Linnakangas, 2012-12-30)

  The cascading standby patch in 9.2 changed the way WAL files are treated when restored from the archive. Before, they were restored under a temporary filename, and not kept in pg_xlog, but after the patch, they were copied under pg_xlog. This is necessary for a cascading standby to find them, but it also means that if the archive goes offline and a standby is restarted, it can recover back to where it was using the files in pg_xlog. It also means that if you take an offline backup from a standby server, it includes all the required WAL files in pg_xlog.

  However, the same change was not made to timeline history files, so if the WAL segment containing the checkpoint record contains a timeline switch, you will still get an error if you try to restart recovery without the archive, or recover from an offline backup taken from the standby.

  With this patch, timeline history files restored from archive are copied into pg_xlog like WAL files are, so that pg_xlog contains all the files required to recover. This is a corner-case pre-existing issue in 9.2, but even more important in master where it's possible for a standby to follow a timeline switch through streaming replication. To make that possible, the timeline history files must be present in pg_xlog.

* Adjust more backend functions to return OID rather than void (Robert Haas, 2012-12-29)

  This is again intended to support extensions to the event trigger functionality. This may go a bit further than we need for that purpose, but there's some value in being consistent, and the OID may be useful for other purposes also.

  Dimitri Fontaine

* Remove obsolete XLogRecPtr macros (Alvaro Herrera, 2012-12-28)

  This gets rid of XLByteLT, XLByteLE, XLByteEQ and XLByteAdvance. These were useful for brevity when XLogRecPtrs were split in xlogid/xrecoff; but now that they are simple uint64's, they are just clutter. The only downside to making this change would be ease of backporting patches, but that has been negated by other substantive changes to the involved code anyway. The clarity of simpler expressions makes the change worthwhile.

  Most of the changes are mechanical, but in a couple of places, the patch author chose to invert the operator sense, making the code flow more logical (and more in line with preceding comments).

  Author: Andres Freund
  Eyeballed by Dimitri Fontaine and Alvaro Herrera

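  The mechanical shape of the change (variable names invented for illustration):

      /* before: opaque macros over the old two-field struct */
      if (XLByteLE(rec_end, flushed_up_to))
          return true;

      /* after: XLogRecPtr is a plain uint64, so ordinary operators work */
      if (rec_end <= flushed_up_to)
          return true;
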
* Assign InvalidXLogRecPtr instead of MemSet(0) (Alvaro Herrera, 2012-12-27)

  For consistency.

  Author: Andres Freund

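  The pattern being replaced, for illustration (InvalidXLogRecPtr is zero, so the behavior is unchanged):

      /* before: zeroing the value byte by byte */
      MemSet(&recptr, 0, sizeof(XLogRecPtr));

      /* after: a plain assignment that states the intent */
      recptr = InvalidXLogRecPtr;
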
* Fix some minor issues in view pretty-printing (Tom Lane, 2012-12-24)

  Code review for commit 2f582f76b1945929ff07116cd4639747ce9bb8a1: don't use a static variable for what ought to be a deparse_context field; fix a non-multibyte-safe test for spaces; and avoid useless and potentially O(N^2) (though admittedly with a very small constant) calculations of wrap positions when we aren't going to wrap.

* Update comments on rd_newRelfilenodeSubid (Simon Riggs, 2012-12-24)

  Ensure comments accurately reflect the state of the code, given new understanding and recent changes. Include example code from Noah Misch to illustrate how rd_newRelfilenodeSubid can be reset deterministically.

  No code changes.

* Keep rd_newRelfilenodeSubid across overflow (Simon Riggs, 2012-12-24)

  Teach RelationCacheInvalidate() to keep rd_newRelfilenodeSubid across rel cache message overflows, so that behaviour is now fully deterministic.

  Noah Misch

* Adjust many backend functions to return OID rather than void (Robert Haas, 2012-12-23)

  Extracted from a larger patch by Dimitri Fontaine. It is hoped that this will provide infrastructure for enriching the new event trigger functionality, but it seems possibly useful for other purposes as well.

* Prevent failure when RowExpr or XmlExpr is parse-analyzed twice (Tom Lane, 2012-12-23)

  transformExpr() is required to cope with already-transformed expression trees, for various ugly-but-not-quite-worth-cleaning-up reasons. However, some of its newer subroutines hadn't gotten the memo. This accounts for bug #7763 from Norbert Buchmuller: transformRowExpr() was overwriting the previously determined type of a RowExpr during CREATE TABLE LIKE INCLUDING INDEXES. Additional investigation showed that transformXmlExpr had the same kind of problem, but all the other cases seem to be safe.

  Andres Freund and Tom Lane

* Fix sloppiness in the timeline switch over streaming replication patch (Heikki Linnakangas, 2012-12-21)

  Here's another attempt at fixing the logic that decides how far the WAL can be streamed, which was still broken if the timeline changed while streaming. You would get an assertion failure. The way the logic is now written is more readable, too.

  Thom Brown reported the assertion failure.