path: root/src
* Fix `make installcheck` for serializable transactions.  (Kevin Grittner, 2015-08-06)

Commit e5550d5fec66aa74caad1f79b79826ec64898688 added some new tests for ALTER TABLE which involved table scans. When default_transaction_isolation = 'serializable' these acquire relation-level SIReadLocks. The test results didn't cope with that.

Add SIReadLock as the minimum lock level for purposes of these tests. This could also be fixed by excluding this type of lock from the my_locks view, but it would be a bug for SIReadLock to show up for a relation which was not otherwise locked, so do it this way to allow that sort of condition to cause a regression test failure.

There is some question whether we could avoid taking SIReadLocks during these operations, but confirming the safety of that and figuring out how to avoid the locks is not trivial, and would be a separate patch.

Backpatch to 9.4 where the new tests were added.

* Make real sure we don't reassociate joins into or out of SEMI/ANTI joins.  (Tom Lane, 2015-08-05)

Per the discussion in optimizer/README, it's unsafe to reassociate anything into or out of the RHS of a SEMI or ANTI join. An example from Piotr Stefaniak showed that join_is_legal() wasn't sufficiently enforcing this rule, so lock it down a little harder.

I couldn't find a reasonably simple example of the optimizer trying to do this, so no new regression test. (Piotr's example involved the random search in GEQO accidentally trying an invalid case and triggering a sanity check way downstream in clause selectivity estimation, which did not seem like a sequence of events that would be useful to memorialize in a regression test as-is.)

Back-patch to all active branches.

* Fix pg_dump to dump shell types.  (Tom Lane, 2015-08-04)

Per discussion, it really ought to do this. The original choice to exclude shell types was probably made in the dark ages before we made it harder to accidentally create shell types; but that was in 7.3.

Also, cause the standard regression tests to leave a shell type behind, for convenience in testing the case in pg_dump and pg_upgrade.

Back-patch to all supported branches.

* Fix bogus "out of memory" reports in tuplestore.c.Tom Lane2015-08-04
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | The tuplesort/tuplestore memory management logic assumed that the chunk allocation overhead for its memtuples array could not increase when increasing the array size. This is and always was true for tuplesort, but we (I, I think) blindly copied that logic into tuplestore.c without noticing that the assumption failed to hold for the much smaller array elements used by tuplestore. Given rather small work_mem, this could result in an improper complaint about "unexpected out-of-memory situation", as reported by Brent DeSpain in bug #13530. The easiest way to fix this is just to increase tuplestore's initial array size so that the assumption holds. Rather than relying on magic constants, though, let's export a #define from aset.c that represents the safe allocation threshold, and make tuplestore's calculation depend on that. Do the same in tuplesort.c to keep the logic looking parallel, even though tuplesort.c isn't actually at risk at present. This will keep us from breaking it if we ever muck with the allocation parameters in aset.c. Back-patch to all supported versions. The error message doesn't occur pre-9.3, not so much because the problem can't happen as because the pre-9.3 tuplestore code neglected to check for it. (The chance of trouble is a great deal larger as of 9.3, though, due to changes in the array-size-increasing strategy.) However, allowing LACKMEM() to become true unexpectedly could still result in less-than-desirable behavior, so let's patch it all the way back.
* Fix a PlaceHolderVar-related oversight in star-schema planning patch.  (Tom Lane, 2015-08-04)

In commit b514a7460d9127ddda6598307272c701cbb133b7, I changed the planner so that it would allow nestloop paths to remain partially parameterized, ie the inner relation might need parameters from both the current outer relation and some upper-level outer relation. That's fine so long as we're talking about distinct parameters; but the patch also allowed creation of nestloop paths for cases where the inner relation's parameter was a PlaceHolderVar whose eval_at set included the current outer relation and some upper-level one. That does *not* work.

In principle we could allow such a PlaceHolderVar to be evaluated at the lower join node using values passed down from the upper relation along with values from the join's own outer relation. However, nodeNestloop.c only supports simple Vars not arbitrary expressions as nestloop parameters. createplan.c is also a few bricks shy of being able to handle such cases; it misplaces the PlaceHolderVar parameters in the plan tree, which is why the visible symptoms of this bug are "plan should not reference subplan's variable" and "failed to assign all NestLoopParams to plan nodes" planner errors.

Adding the necessary complexity to make this work doesn't seem like it would be repaid in significantly better plans, because in cases where such a PHV exists, there is probably a corresponding join order constraint that would allow a good plan to be found without using the star-schema exception. Furthermore, adding complexity to nodeNestloop.c would create a run-time penalty even for plans where this whole consideration is irrelevant. So let's just reject such paths instead.

Per fuzz testing by Andreas Seltenreich; the added regression test is based on his example query. Back-patch to 9.2, like the previous patch.

* Cap wal_buffers to avoid a server crash when it's set very large.  (Robert Haas, 2015-08-04)

It must be possible to multiply wal_buffers by XLOG_BLCKSZ without overflowing int, or calculations in StartupXLOG will go badly wrong and crash the server. Avoid that by imposing a maximum value on wal_buffers. This will be just under 2GB, assuming the usual value for XLOG_BLCKSZ.

Josh Berkus, per an analysis by Andrew Gierth.

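A minimal standalone sketch of the arithmetic behind that cap, assuming the usual 8 kB XLOG_BLCKSZ (the real limit is enforced through the server's GUC machinery, not code like this):

    #include <limits.h>
    #include <stdio.h>

    #define XLOG_BLCKSZ 8192            /* usual WAL block size */

    int
    main(void)
    {
        /* largest wal_buffers (in pages) whose byte size still fits in an int */
        int max_wal_buffers = INT_MAX / XLOG_BLCKSZ;

        printf("max wal_buffers = %d pages (%ld bytes, just under 2GB)\n",
               max_wal_buffers, (long) max_wal_buffers * XLOG_BLCKSZ);
        return 0;
    }

With XLOG_BLCKSZ = 8192 this works out to 262143 pages, i.e. 2147475456 bytes.
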
* Fix incorrect order of lock file removal and failure to close() sockets.  (Tom Lane, 2015-08-02)

Commit c9b0cbe98bd783e24a8c4d8d8ac472a494b81292 accidentally broke the order of operations during postmaster shutdown: it resulted in removing the per-socket lockfiles after, not before, postmaster.pid. This creates a race-condition hazard for a new postmaster that's started immediately after observing that postmaster.pid has disappeared; if it sees the socket lockfile still present, it will quite properly refuse to start. This error appears to be the explanation for at least some of the intermittent buildfarm failures we've seen in the pg_upgrade test.

Another problem, which has been there all along, is that the postmaster has never bothered to close() its listen sockets, but has just allowed them to close at process death. This creates a different race condition for an incoming postmaster: it might be unable to bind to the desired listen address because the old postmaster is still incumbent. This might explain some odd failures we've seen in the past, too. (Note: this is not related to the fact that individual backends don't close their client communication sockets. That behavior is intentional and is not changed by this patch.)

Fix by adding an on_proc_exit function that closes the postmaster's ports explicitly, and (in 9.3 and up) reshuffling the responsibility for where to unlink the Unix socket files. Lock file unlinking can stay where it is, but teach it to unlink the lock files in reverse order of creation.

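Registering the close as an on_proc_exit callback means it runs during every orderly postmaster exit. In outline, and only as a hypothetical sketch (the real callback name and socket bookkeeping live in postmaster.c):

    #include "postgres.h"
    #include "storage/ipc.h"

    #include <unistd.h>

    #define MAX_LISTEN_SOCKETS 64

    /* Sketch: listen sockets tracked by the postmaster, closed explicitly at
     * exit so a successor can rebind the addresses immediately. */
    static int  listen_sockets[MAX_LISTEN_SOCKETS];
    static int  num_listen_sockets = 0;

    static void
    close_server_ports(int code, Datum arg)
    {
        for (int i = 0; i < num_listen_sockets; i++)
            close(listen_sockets[i]);
        num_listen_sockets = 0;
        /* lock and socket files would be unlinked after this point */
    }

    static void
    sketch_register_cleanup(void)
    {
        on_proc_exit(close_server_ports, (Datum) 0);
    }
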
* Fix race condition that led to WALInsertLock deadlock with commit_delay.  (Heikki Linnakangas, 2015-08-02)

If a call to WaitXLogInsertionsToFinish() returned a value in the middle of a page, and another backend then started to insert a record to the same page, and then you called WaitXLogInsertionsToFinish() again, the second call might return a smaller value than the first call. The problem was in GetXLogBuffer(), which always updated the insertingAt value to the beginning of the requested page, not the actual requested location. Because of that, the second call might return an xlog pointer to the beginning of the page, while the first one returned a later position on the same page. XLogFlush() performs two calls to WaitXLogInsertionsToFinish() in succession, and holds WALWriteLock on the second call, which can deadlock if the second call to WaitXLogInsertionsToFinish() blocks.

Reported by Spiros Ioannou. Backpatch to 9.4, where the more scalable WALInsertLock mechanism, and this bug, was introduced.

* Fix some planner issues with degenerate outer join clauses.  (Tom Lane, 2015-08-01)

An outer join clause that didn't actually reference the RHS (perhaps only after constant-folding) could confuse the join order enforcement logic, leading to wrong query results. Also, nested occurrences of such things could trigger an Assertion that on reflection seems incorrect.

Per fuzz testing by Andreas Seltenreich. The practical use of such cases seems thin enough that it's not too surprising we've not heard field reports about it.

This has been broken for a long time, so back-patch to all active branches.

* Fix an oversight in checking whether a join with LATERAL refs is legal.  (Tom Lane, 2015-07-31)

In many cases, we can implement a semijoin as a plain innerjoin by first passing the righthand-side relation through a unique-ification step. However, one of the cases where this does NOT work is where the RHS has a LATERAL reference to the LHS; that makes the RHS dependent on the LHS so that unique-ification is meaningless. joinpath.c understood this, and so would not generate any join paths of this kind ... but join_is_legal neglected to check for the case, so it would think that we could do it. The upshot would be a "could not devise a query plan for the given query" failure once we had failed to generate any join paths at all for the bogus join pair.

Back-patch to 9.3 where LATERAL was added.

* Consolidate makefile code for setting top_srcdir, srcdir and VPATH.  (Noah Misch, 2015-07-30)

Responsibility was formerly split between Makefile.global and pgxs.mk. As a result of commit b58233c71b93a32fcab7219585cafc25a27eb769, in the PGXS case, these variables were unset while parsing Makefile.global and callees. Inclusion of Makefile.custom did not work from PGXS, and the subtle difference seemed like a recipe for future bugs. Back-patch to 9.4, where that commit first appeared.

* Avoid some zero-divide hazards in the planner.  (Tom Lane, 2015-07-30)

Although I think on all modern machines floating division by zero results in Infinity not SIGFPE, we still don't want infinities running around in the planner's costing estimates; too much risk of that leading to insane behavior.

grouping_planner() failed to consider the possibility that final_rel might be known dummy and hence have zero rowcount. (I wonder if it would be better to set a rows estimate of 1 for dummy relations? But at least in the back branches, changing this convention seems like a bad idea, so I'll leave that for another day.)

Make certain that get_variable_numdistinct() produces a nonzero result. The case that can be shown to be broken is with stadistinct < 0.0 and small ntuples; we did not prevent the result from rounding to zero. For good luck I applied clamp_row_est() to all the nonconstant return values.

In ExecChooseHashTableSize(), Assert that we compute positive nbuckets and nbatch. I know of no reason to think this isn't the case, but it seems like a good safety check.

Per reports from Piotr Stefaniak. Back-patch to all active branches.

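The clamp being applied keeps row estimates finite, integral, and at least one, so downstream divisions can never hit zero or infinity. A hypothetical clamp in the same spirit (an illustration, not the exact planner code):

    #include <math.h>

    /* Sketch of a row-estimate clamp for costing arithmetic. */
    static double
    clamp_row_estimate(double nrows)
    {
        if (isnan(nrows) || nrows <= 1.0)
            nrows = 1.0;                /* never fewer than one row */
        else if (isinf(nrows))
            nrows = 1e100;              /* arbitrary large-but-finite ceiling */
        else
            nrows = rint(nrows);        /* round to the nearest whole row */
        return nrows;
    }
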
* Reduce chatter from signaling of autovacuum workers.  (Tom Lane, 2015-07-28)

Don't print a WARNING if we get ESRCH from a kill() that's attempting to cancel an autovacuum worker. It's possible (and has been seen in the buildfarm) that the worker is already gone by the time we are able to execute the kill, in which case the failure is harmless. About the only plausible reason for reporting such cases would be to help debug corrupted lock table contents, but this is hardly likely to be the most important symptom if that happens. Moreover issuing a WARNING might scare users more than is warranted.

Also, since sending a signal to an autovacuum worker is now entirely a routine thing, and the worker will log the query cancel on its end anyway, reduce the message saying we're doing that from LOG to DEBUG1 level.

Very minor cosmetic cleanup as well.

Since the main practical reason for doing this is to avoid unnecessary buildfarm failures, back-patch to all active branches.

* Disable ssl renegotiation by default.  (Andres Freund, 2015-07-28)

While postgres' use of SSL renegotiation is a good idea in theory, it turned out to not work well in practice. The specification and openssl's implementation of it have led to several security issues. Postgres' use of renegotiation also had its share of bugs.

Additionally OpenSSL has a bunch of bugs around renegotiation, reported and open for years, that regularly lead to connections breaking with obscure error messages. We tried increasingly complex workarounds to get around these bugs, but we didn't find anything complete.

Since these connection breakages often lead to hard-to-debug problems, e.g. spuriously failing base backups and significant latency spikes when synchronous replication is used, we have decided to change the default setting for ssl renegotiation to 0 (disabled) in the released back branches and remove it entirely in 9.5 and master.

Author: Michael Paquier, with changes by me
Discussion: 20150624144148.GQ4797@alap3.anarazel.de
Backpatch: 9.0-9.4; 9.5 and master get a different patch

* Make tap tests store postmaster logs and handle vpaths correctly  (Andrew Dunstan, 2015-07-28)

Given this it is possible that the buildfarm animals running these tests will be able to capture adequate logging to allow diagnosis of failures.

* Remove an unsafe Assert, and explain join_clause_is_movable_into() better.  (Tom Lane, 2015-07-28)

join_clause_is_movable_into() is approximate, in the sense that it might sometimes return "false" when actually it would be valid to push the given join clause down to the specified level. This is okay ... but there was an Assert in get_joinrel_parampathinfo() that's only safe if the answers are always exact. Comment out the Assert, and add a bunch of commentary to clarify what's going on.

Per fuzz testing by Andreas Seltenreich. The added regression test is a pretty silly query, but it's based on his crasher example.

Back-patch to 9.2 where the faulty logic was introduced.

* Improve logging of TAP tests.  (Andrew Dunstan, 2015-07-28)

Create a log file for each test run. Stdout and stderr of the test script, as well as any subprocesses run as part of the test, are redirected to the log file. This makes it a lot easier to debug test failures. Also print the test output (ok 12 - ... messages) to the log file, and the command line of any external programs executed with the system_or_bail and run_log functions. This makes it a lot easier to debug failing tests.

Modify some of the pg_ctl and other command invocations to not use 'silent' or 'quiet' options, and don't redirect output to /dev/null, so that you get all the information in the log instead.

In passing, construct some command lines in a way that works if $tempdir contains quote-characters. I haven't systematically gone through all of them or tested that, so I don't know if this is enough to make that work.

pg_rewind tests had a custom mechanism for creating a similar log file. Use the new generic facility instead.

Michael Paquier and Heikki Linnakangas.

This is a backpatch of Heikki's commit 1ea06203b82b98b5098808667f6ba652181ef5b2 modified by me to suit 9.4

* Don't assume that PageIsEmpty() returns true on an all-zeros page.  (Heikki Linnakangas, 2015-07-27)

It does currently, and I don't see us changing that any time soon, but we don't make that assumption anywhere else. Per Tom Lane's suggestion.

Backpatch to 9.2, like the previous patch that added this assumption.

* Reuse all-zero pages in GIN.  (Heikki Linnakangas, 2015-07-27)

In GIN, an all-zeros page would be leaked forever, and never reused. Just add them to the FSM in vacuum, and they will be reinitialized when grabbed from the FSM. On master and 9.5, attempting to access the page's opaque struct also caused an assertion failure, although that was otherwise harmless.

Reported by Jeff Janes. Backpatch to all supported versions.

* Fix handling of all-zero pages in SP-GiST vacuum.  (Heikki Linnakangas, 2015-07-27)

SP-GiST initialized an all-zeros page at vacuum, but that was not WAL-logged, which is not safe. You might get a torn page write, when it gets flushed to disk, and end up with a half-initialized index page. To fix, leave it in the all-zeros state, and add it to the FSM. It will be initialized when reused.

Also don't set the page-deleted flag when recycling an empty page. That was also not WAL-logged, and a torn write of that would cause the page to have an invalid checksum.

Backpatch to 9.2, where SP-GiST indexes were added.

* Make entirely-dummy appendrels get marked as such in set_append_rel_size.  (Tom Lane, 2015-07-26)

The planner generally expects that the estimated rowcount of any relation is at least one row, *unless* it has been proven empty by constraint exclusion or similar mechanisms, which is marked by installing a dummy path as the rel's cheapest path (cf. IS_DUMMY_REL). When I split up allpaths.c's processing of base rels into separate set_base_rel_sizes and set_base_rel_pathlists steps, the intention was that dummy rels would get marked as such during the "set size" step; this is what justifies an Assert in indxpath.c's get_loop_count that other relations should either be dummy or have positive rowcount. Unfortunately I didn't get that quite right for append relations: if all the child rels have been proven empty then set_append_rel_size would come up with a rowcount of zero, which is correct, but it didn't then do set_dummy_rel_pathlist. (We would have ended up with the right state after set_append_rel_pathlist, but that's too late, if we generate indexpaths for some other rel first.)

In addition to fixing the actual bug, I installed an Assert enforcing this convention in set_rel_size; that then allows simplification of a couple of now-redundant tests for zero rowcount in set_append_rel_size. Also, to cover the possibility that third-party FDWs have been careless about not returning a zero rowcount estimate, apply clamp_row_est to whatever an FDW comes up with as the rows estimate.

Per report from Andreas Seltenreich. Back-patch to 9.2. Earlier branches did not have the separation between set_base_rel_sizes and set_base_rel_pathlists steps, so there was no intermediate state where an appendrel would have had inconsistent rowcount and pathlist. It's possible that adding the Assert to set_rel_size would be a good idea in older branches too; but since they're not under development any more, it's likely not worth the trouble.

* Restore use of zlib default compression in pg_dump directory mode.  (Andrew Dunstan, 2015-07-25)

This was broken by commit 0e7e355f27302b62af3e1add93853ccd45678443 and friends, which ignored the fact that gzopen() will treat "-1" in the mode argument as an invalid character, which it ignores, and a flag for compression level 1. Now, when this value is encountered no compression level flag is passed to gzopen, leaving it to use the zlib default.

Also, enforce the documented allowed range for pg_dump's -Z option, namely 0 .. 9, and remove some consequently dead code from pg_backup_tar.c.

Problem reported by Marc Mamin. Backpatch to 9.1, like the patch that introduced the bug.

* Fix off-by-one error in calculating subtrans/multixact truncation point.  (Heikki Linnakangas, 2015-07-23)

If there were no subtransactions (or multixacts) active, we would calculate the oldestxid == next xid. That's correct, but if next XID happens to be on the next pg_subtrans (pg_multixact) page, the page does not exist yet, and SimpleLruTruncate will produce an "apparent wraparound" warning. The warning is harmless in this case, but looks very alarming to users.

Backpatch to all supported versions. Patch and analysis by Thomas Munro.

* Fix add_rte_to_flat_rtable() for recent feature additions.  (Tom Lane, 2015-07-21)

The TABLESAMPLE and row security patches each overlooked this function, though their errors of omission were opposite: RLS failed to zero out the securityQuals field, leading to wasteful copying of useless expression trees in finished plans, while TABLESAMPLE neglected to add a comment saying that it intentionally *isn't* deleting the tablesample subtree. There probably should be a similar comment about ctename, too.

Back-patch as appropriate.

* Fix (some of) pltcl memory usage  (Alvaro Herrera, 2015-07-20)

As reported by Bill Parker, PL/Tcl did not validate some malloc() calls against NULL return. Fix by using palloc() in a new long-lived memory context instead. This allows us to simplify error handling too, by simply deleting the memory context instead of doing retail frees.

There's still a lot that could be done to improve PL/Tcl's memory handling ...

This is pretty ancient, so backpatch all the way back.

Author: Michael Paquier and Álvaro Herrera
Discussion: https://www.postgresql.org/message-id/CAFrbyQwyLDYXfBOhPfoBGqnvuZO_Y90YgqFM11T2jvnxjLFmqw@mail.gmail.com

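The pattern being adopted — allocate in a dedicated long-lived context so error cleanup is one MemoryContextDelete() instead of many retail frees — looks roughly like the following backend-side sketch. This assumes the 9.4-era three-size AllocSetContextCreate() form; the function and variable names here are invented for illustration.

    #include "postgres.h"
    #include "utils/memutils.h"

    /* Sketch: long-lived per-interpreter storage that can be torn down wholesale. */
    static MemoryContext pltcl_interp_cxt = NULL;

    static void
    sketch_setup(void)
    {
        pltcl_interp_cxt = AllocSetContextCreate(TopMemoryContext,
                                                 "PL/Tcl interpreter sketch",
                                                 ALLOCSET_DEFAULT_MINSIZE,
                                                 ALLOCSET_DEFAULT_INITSIZE,
                                                 ALLOCSET_DEFAULT_MAXSIZE);
    }

    static char *
    sketch_store_string(const char *s)
    {
        /* allocate in the long-lived context; no NULL check needed, it throws on OOM */
        return MemoryContextStrdup(pltcl_interp_cxt, s);
    }

    static void
    sketch_teardown(void)
    {
        /* one call frees every allocation made in the context */
        MemoryContextDelete(pltcl_interp_cxt);
        pltcl_interp_cxt = NULL;
    }
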
* Make WaitLatchOrSocket's timeout detection more robust.  (Tom Lane, 2015-07-18)

In the previous coding, timeout would be noticed and reported only when poll() or select() returned zero (or the equivalent behavior on Windows). Ordinarily that should work well enough, but it seems conceivable that we could get into a state where poll() always returns a nonzero value --- for example, if it is noticing a condition on one of the file descriptors that we do not think is reason to exit the loop. If that happened, we'd be in a busy-wait loop that would fail to terminate even when the timeout expires.

We can make this more robust at essentially no cost, by deciding to exit of our own accord if we compute a zero or negative time-remaining-to-wait. Previously the code noted this but just clamped the time-remaining to zero, expecting that we'd detect timeout on the next loop iteration.

Back-patch to 9.2. While 9.1 had a version of WaitLatchOrSocket, it was primitive compared to later versions, and did not guarantee reliable detection of timeouts anyway. (Essentially, this is a refinement of commit 3e7fdcffd6f77187, which was back-patched only as far as 9.2.)

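The change amounts to recomputing the remaining wait each time around the loop and leaving as soon as it reaches zero or goes negative, instead of trusting poll() to report the timeout. A self-contained POSIX sketch of that loop shape (not the latch code itself):

    #include <poll.h>
    #include <time.h>

    /* milliseconds on a monotonic clock, for measuring elapsed wait time */
    static long
    now_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000L + ts.tv_nsec / 1000000L;
    }

    /* Wait for readability on fd, but never longer than timeout_ms, even if
     * poll() keeps returning "ready" for conditions we don't care about. */
    static int
    wait_with_deadline(int fd, long timeout_ms)
    {
        long start = now_ms();

        for (;;)
        {
            long remaining = timeout_ms - (now_ms() - start);

            if (remaining <= 0)
                return 0;               /* timeout: decide this ourselves */

            struct pollfd pfd = { .fd = fd, .events = POLLIN };
            int rc = poll(&pfd, 1, (int) remaining);

            if (rc > 0 && (pfd.revents & POLLIN))
                return 1;               /* the event we actually wanted */
            /* otherwise loop: 'remaining' shrinks, so no endless busy-wait */
        }
    }
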
* AIX: Test the -qlonglong option before use.  (Noah Misch, 2015-07-17)

xlc provides "long long" unconditionally at C99-compatible language levels, and this option provokes a warning. The warning interferes with "configure" tests that fail in response to any warning. Notably, before commit 85a2a8903f7e9151793308d0638621003aded5ae, it interfered with the test for -qnoansialias. Back-patch to 9.0 (all supported versions).

* Fix a low-probability crash in our qsort implementation.  (Tom Lane, 2015-07-16)

It's standard for quicksort implementations, after having partitioned the input into two subgroups, to recurse to process the smaller partition and then handle the larger partition by iterating. This method guarantees that no more than log2(N) levels of recursion can be needed. However, Bentley and McIlroy argued that checking to see which partition is smaller isn't worth the cycles, and so their code doesn't do that but just always recurses on the left partition. In most cases that's fine; but with worst-case input we might need O(N) levels of recursion, and that means that qsort could be driven to stack overflow. Such an overflow seems to be the only explanation for today's report from Yiqing Jin of a SIGSEGV in med3_tuple while creating an index of a couple billion entries with a very large maintenance_work_mem setting.

Therefore, let's spend the few additional cycles and lines of code needed to choose the smaller partition for recursion.

Also, fix up the qsort code so that it properly uses size_t not int for some intermediate values representing numbers of items. This would only be a live risk when sorting more than INT_MAX bytes (in qsort/qsort_arg) or tuples (in qsort_tuple), which I believe would never happen with any caller in the current core code --- but perhaps it could happen with call sites in third-party modules? In any case, this is trouble waiting to happen, and the corrected code is probably if anything shorter and faster than before, since it removes sign-extension steps that had to happen when converting between int and size_t.

In passing, move a couple of CHECK_FOR_INTERRUPTS() calls so that it's not necessary to preserve the value of "r" across them, and prettify the output of gen_qsort_tuple.pl a little.

Back-patch to all supported branches. The odds of hitting this issue are probably higher in 9.4 and up than before, due to the new ability to allocate sort workspaces exceeding 1GB, but there's no good reason to believe that it's impossible to crash older branches this way.

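The guarantee described above — recurse only into the smaller partition and iterate over the larger one — is what bounds the stack depth to about log2(N). A compact generic-C sketch of that shape (an illustration, not the pg_qsort template):

    #include <stddef.h>

    /* Sketch: quicksort that recurses only into the smaller partition and
     * loops over the larger one, so stack depth stays O(log N). */
    static void
    sort_ints(int *a, size_t n)
    {
        while (n > 1)
        {
            /* simple Lomuto partition around the last element */
            int     pivot = a[n - 1];
            size_t  i = 0;

            for (size_t j = 0; j < n - 1; j++)
                if (a[j] < pivot)
                {
                    int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
                    i++;
                }
            { int tmp = a[i]; a[i] = a[n - 1]; a[n - 1] = tmp; }

            size_t left_n = i;              /* elements before the pivot */
            size_t right_n = n - i - 1;     /* elements after the pivot */

            if (left_n < right_n)
            {
                sort_ints(a, left_n);       /* recurse on the smaller side */
                a += i + 1;                 /* ... and loop on the larger */
                n = right_n;
            }
            else
            {
                sort_ints(a + i + 1, right_n);
                n = left_n;
            }
        }
    }

Note the use of size_t rather than int for the item counts, echoing the second part of the fix.
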
* Fix spelling error  (Magnus Hagander, 2015-07-16)

David Rowley

* AIX: Link the postgres executable with -Wl,-brtllib.  (Noah Misch, 2015-07-15)

This allows PostgreSQL modules and their dependencies to have undefined symbols, resolved at runtime. Perl module shared objects rely on that in Perl 5.8.0 and later. This fixes the crash when PL/PerlU loads such modules, as the hstore_plperl test suite does. Module authors can link using -Wl,-G to permit undefined symbols; by default, linking will fail as it has. Back-patch to 9.0 (all supported versions).

* Fix assorted memory leaks.  (Tom Lane, 2015-07-12)

Per Coverity (not that any of these are so non-obvious that they should not have been caught before commit). The extent of leakage is probably minor to unnoticeable, but a leak is a leak. Back-patch as necessary.

Michael Paquier

* Fix postmaster's handling of a startup-process crash.  (Tom Lane, 2015-07-09)

Ordinarily, a failure (unexpected exit status) of the startup subprocess should be considered fatal, so the postmaster should just close up shop and quit. However, if we sent the startup process a SIGQUIT or SIGKILL signal, the failure is hardly "unexpected", and we should attempt restart; this is necessary for recovery from ordinary backend crashes in hot-standby scenarios. I attempted to implement the latter rule with a two-line patch in commit 442231d7f71764b8c628044e7ce2225f9aa43b67, but it now emerges that that patch was a few bricks shy of a load: it failed to distinguish the case of a signaled startup process from the case where the new startup process crashes before reaching database consistency. That resulted in infinitely respawning a new startup process only to have it crash again.

To handle this properly, we really must track whether we have sent the *current* startup process a kill signal. Rather than add yet another ad-hoc boolean to the postmaster's state, I chose to unify this with the existing RecoveryError flag into an enum tracking the startup process's state. That seems more consistent with the postmaster's general state machine design.

Back-patch to 9.0, like the previous patch.

* Fix null pointer dereference in "\c" psql command.  (Noah Misch, 2015-07-08)

The psql crash happened when no current connection existed. (The second new check is optional given today's undocumented NULL argument handling in PQhost() etc.) Back-patch to 9.0 (all supported versions).

* Improve handling of out-of-memory in libpq.  (Heikki Linnakangas, 2015-07-07)

If an allocation fails in the main message handling loop, pqParseInput3 or pqParseInput2, it should not be treated as "not enough data available yet". Otherwise libpq will wait indefinitely for more data to arrive from the server, and gets stuck forever.

This isn't a complete fix - getParamDescriptions and getCopyStart still have the same issue, but it's a step in the right direction.

Michael Paquier and me. Backpatch to all supported versions.

* Turn install.bat into a pure one line wrapper for the perl script.  (Heikki Linnakangas, 2015-07-07)

Build.bat and vcregress.bat got similar treatment years ago. I'm not sure why install.bat wasn't treated at the same time, but it seems like a good idea anyway.

The immediate problem with the old install.bat was that it had quoting issues, and wouldn't work if the target directory's name contained spaces. This fixes that problem.

I committed this to master yesterday, this is a backpatch of the same for all supported versions.

* Fix logical decoding bug leading to inefficient reopening of files.  (Andres Freund, 2015-07-07)

When spilling transaction data to disk a simple typo caused the output file to be closed and reopened for every serialized change. That happens to not have a huge impact on linux, which is why it probably wasn't noticed so far, but on windows that appears to trigger actual disk writes after every change. Not fun.

The bug fortunately does not have any impact besides speed. A change could end up being in the wrong segment (last instead of next), but since we read all files to the end, that's just ugly, not really problematic.

It's not a problem to upgrade, since transaction spill files do not persist across restarts.

Bug: #13484
Reported-By: Olivier Gosseaume
Discussion: 20150703090217.1190.63940@wrigleys.postgresql.org
Backpatch to 9.4, where logical decoding was added.

* Fix pg_recvlogical not to fsync output when it's a tty or pipe.  (Andres Freund, 2015-07-07)

The previous coding tried to handle possible failures when fsyncing a tty or pipe fd by accepting EINVAL - but apparently some platforms (windows, OSX) don't reliably return that. So instead check whether the output fd refers to a pipe or a tty when opening it.

Reported-By: Olivier Gosseaume, Marko Tiikkaja
Discussion: 559AF98B.3050901@joh.to
Backpatch to 9.4, where pg_recvlogical was added.

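Checking the output fd up front, rather than interpreting fsync() errno values after the fact, can be done portably on Unix with isatty() and fstat(). A small sketch of that check (POSIX only, not the actual pg_recvlogical code):

    #include <stdbool.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Return true if fd refers to something fsync() is not meaningful for:
     * a terminal or a pipe/FIFO.  Decide this once, when the fd is opened. */
    static bool
    fd_is_tty_or_pipe(int fd)
    {
        struct stat st;

        if (isatty(fd))
            return true;
        if (fstat(fd, &st) == 0 && S_ISFIFO(st.st_mode))
            return true;
        return false;
    }
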
* Fix some typos in regression test comments.  (Tom Lane, 2015-07-05)

Back-patch to avoid unnecessary cross-branch differences.

CharSyam

* Make numeric form of PG version number readily available in Makefiles.  (Tom Lane, 2015-07-05)

Expose PG_VERSION_NUM (e.g., "90600") as a Make variable; but for consistency with the other Make variables holding similar info, call the variable just VERSION_NUM not PG_VERSION_NUM.

There was some discussion of making this value available as a pg_config value as well. However, that would entail substantially more work than this two-line patch. Given that there was not exactly universal consensus that we need this at all, let's just do a minimal amount of work for now.

Back-patch of commit a5d489ccb7e613c7ca3be6141092b8c1d2c13fa7, so that this variable is actually useful for its intended purpose sometime before 2020.

Michael Paquier, reviewed by Pavel Stehule

* PL/Perl: Add alternative expected file for Perl 5.22  (Peter Eisentraut, 2015-07-03)

* Don't emit a spurious space at end of line in pg_dump of event triggers.  (Heikki Linnakangas, 2015-07-02)

Backpatch to 9.3 and above, where event triggers were added.

* Don't call PageGetSpecialPointer() on page until it's been initialized.  (Heikki Linnakangas, 2015-06-30)

After calling XLogInitBufferForRedo(), the page might be all-zeros if it was not in page cache already. btree_xlog_unlink_page initialized the page correctly, but it called PageGetSpecialPointer before initializing it, which would lead to a corrupt page at WAL replay, if the unlinked page is not in page cache.

Backpatch to 9.4, the bug came with the rewrite of B-tree page deletion.

* Back-patch some minor bug fixes in GUC code.  (Tom Lane, 2015-06-28)

In 9.4, fix a 9.4.1 regression that allowed multiple entries for a PGC_POSTMASTER variable to cause bogus complaints in the postmaster log. (The issue here was that commit bf007a27acd7b2fb unintentionally reverted 3e3f65973a3c94a6, which suppressed any duplicate entries within ParseConfigFp. Back-patch the reimplementation just made in HEAD, which makes use of an "ignore" field to prevent application of superseded items.)

Add missed failure check in AlterSystemSetConfigFile(). We don't really expect ParseConfigFp() to fail, but that's not an excuse for not checking.

In both 9.3 and 9.4, remove mistaken assignment to ConfigFileLineno that caused line counting after an include_dir directive to be completely wrong.

* Fix comment for GetCurrentIntegerTimestamp().  (Kevin Grittner, 2015-06-28)

The unit of measure is microseconds, not milliseconds.

Backpatch to 9.3 where the function and its comment were added.

* Add opaque declaration of HTAB to tqual.h.  (Kevin Grittner, 2015-06-27)

Commit b89e151054a05f0f6d356ca52e3b725dd0505e53 added the ResolveCminCmaxDuringDecoding declaration to tqual.h, which uses an HTAB parameter, without declaring HTAB. The header happens to build with current sources only because a declaration is already included, directly or indirectly, in every source file that uses tqual.h before tqual.h is first included, but we shouldn't count on that. Since an opaque declaration is enough here, just use that, as was done in snapmgr.h.

Backpatch to 9.4, where the HTAB reference was added to tqual.h.

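An opaque (forward) declaration lets a header mention a struct pointer without pulling in the full definition; roughly the idiom in question, as a generic illustration with an invented function name rather than the exact tqual.h hunk:

    /* in some header: no need to include the hash-table header here */
    struct HTAB;                            /* opaque forward declaration */

    extern void do_something_with_table(struct HTAB *table);

    /* only the .c file that actually looks inside the HTAB includes the
     * real definition (hsearch.h in the backend) before using it */
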
* Revoke incorrectly applied patch version  (Simon Riggs, 2015-06-27)

* Avoid hot standby cancels from VAC FREEZE  (Simon Riggs, 2015-06-27)

VACUUM FREEZE generated false cancelations of standby queries on an otherwise idle master. Caused by an off-by-one error on cutoff_xid which goes back to original commit.

Backpatch to all versions 9.0+

Analysis and report by Marco Nenciarini

Bug fix by Simon Riggs

* Fix a couple of bugs with wal_log_hints.  (Heikki Linnakangas, 2015-06-26)

1. Replay of the WAL record for setting a bit in the visibility map contained an assertion that a full-page image of that record type can only occur with checksums enabled. But it can also happen with wal_log_hints, so remove the assertion. Unlike checksums, wal_log_hints can be changed on the fly, so it would be complicated to figure out if it was enabled at the time that the WAL record was generated.

2. wal_log_hints has the same effect on the locking needed to read the LSN of a page as data checksums. BufferGetLSNAtomic() didn't get the memo.

Backpatch to 9.4, where wal_log_hints was added.

* Allow background workers to connect to no particular database.  (Robert Haas, 2015-06-25)

The documentation claims that this is supported, but it didn't actually work. Fix that.

Reported by Pavel Stehule; patch by me.

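For a worker that only needs the shared catalogs, the database name passed to the connection-initialization call is simply NULL. A hedged sketch of a 9.4-era worker body (the worker and function names here are invented; a real worker would also set up signal handling and use its latch rather than sleeping):

    #include "postgres.h"
    #include "miscadmin.h"
    #include "postmaster/bgworker.h"

    /* Sketch: a background worker main function that attaches to shared
     * memory but to no particular database (NULL dbname). */
    void
    sketch_worker_main(Datum main_arg)
    {
        BackgroundWorkerUnblockSignals();

        /* NULL database name: connect to shared catalogs only */
        BackgroundWorkerInitializeConnection(NULL, NULL);

        for (;;)
        {
            /* ... do work that touches only shared catalogs ... */
            CHECK_FOR_INTERRUPTS();
            pg_usleep(1000000L);        /* 1 second */
        }
    }
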
* Fix the logic for putting relations into the relcache init file.  (Tom Lane, 2015-06-25)

Commit f3b5565dd4e59576be4c772da364704863e6a835 was a couple of bricks shy of a load; specifically, it missed putting pg_trigger_tgrelid_tgname_index into the relcache init file, because that index is not used by any syscache. However, we have historically nailed that index into cache for performance reasons. The upshot was that load_relcache_init_file always decided that the init file was busted and silently ignored it, resulting in a significant hit to backend startup speed.

To fix, reinstantiate RelationIdIsInInitFile() as a wrapper around RelationSupportsSysCache(), which can know about additional relations that should be in the init file despite being unknown to syscache.c.

Also install some guards against future mistakes of this type: make write_relcache_init_file Assert that all nailed relations get written to the init file, and make load_relcache_init_file emit a WARNING if it takes the "wrong number of nailed relations" exit path. Now that we remove the init files during postmaster startup, that case should never occur in the field, even if we are starting a minor-version update that added or removed rels from the nailed set. So the warning shouldn't ever be seen by end users, but it will show up in the regression tests if somebody breaks this logic.

Back-patch to all supported branches, like the previous commit.
