* Remove USE_VPATH make variable from PGXS (Peter Eisentraut, 2014-12-04)
  The user can just set VPATH directly. There is no need to invent another variable.

* Fix SHLIB_PREREQS use in contrib, allowing PGXS builds (Peter Eisentraut, 2014-12-04)
  dblink and postgres_fdw use SHLIB_PREREQS = submake-libpq to build libpq first. This doesn't work in a PGXS build, because there is no libpq to build. So just omit setting SHLIB_PREREQS in this case.

  Note that PGXS users can still use SHLIB_PREREQS (although it is not documented). The problem here is only that contrib modules can be built in-tree or using PGXS, and the prerequisite is only applicable in the former case.

  Commit 6697aa2bc25c83b88d6165340348a31328c35de6 previously attempted to address this by creating a somewhat fake submake-libpq target in Makefile.global. That was not the right fix, and it was also done in a nonportable way, so revert that.

* Move PG_AUTOCONF_FILENAME definition (Peter Eisentraut, 2014-12-03)
  Since this is not something that a user should change, pg_config_manual.h was an inappropriate place for it.

  In initdb.c, remove the use of the macro, because utils/guc.h can't be included by non-backend code. But we hardcode all the other configuration file names there, so this isn't a disaster.

* Fix typos (Alvaro Herrera, 2014-12-03)

* Improve error messages for malformed array input strings. (Tom Lane, 2014-12-02)
  Make the error messages issued by array_in() uniformly follow the style

      ERROR: malformed array literal: "actual input string"
      DETAIL: specific complaint here

  and rewrite many of the specific complaints to be clearer.

  The immediate motivation for doing this is a complaint from Josh Berkus that json_to_record() produced an unintelligible error message when dealing with an array item, because it tries to feed the JSON-format array value to array_in(). Really it ought to be smart enough to perform JSON-to-Postgres array conversion, but that's a future feature not a bug fix. In the meantime, this change is something we agreed we could back-patch into 9.4, and it should help de-confuse things a bit.
* Don't skip SQL backends in logical decoding for visibility computation. (Andres Freund, 2014-12-02)
  The logical decoding patchset introduced the PROC_IN_LOGICAL_DECODING PGXACT flag, which allows such backends to be skipped when computing the xmin horizon/snapshots. That's fine and sensible for walsenders streaming out logical changes, but not at all fine for SQL backends doing logical decoding. If the latter set that flag, any change they have performed outside of logical decoding will not be regarded as visible, which can e.g. lead to that change being vacuumed away.

  Note that not setting the flag for SQL backends isn't particularly bothersome: the SQL backend doesn't do streaming, so it only runs for a limited amount of time.

  Per buildfarm member 'tick' and Alvaro. Backpatch to 9.4, where logical decoding was introduced.
* Fix JSON aggregates to work properly when final function is re-executed. (Tom Lane, 2014-12-02)
  Davide S. reported that json_agg() sometimes produced multiple trailing right brackets. This turns out to be because json_agg_finalfn() attaches the final right bracket, and was doing so by modifying the aggregate state in-place. That's verboten, though unfortunately it seems there's no way for nodeAgg.c to check for such mistakes.

  Fix that back to 9.3 where the broken code was introduced. In 9.4 and HEAD, likewise fix json_object_agg(), which had copied the erroneous logic.

  Make some cosmetic cleanups as well.

* Guard against bad "dscale" values in numeric_recv(). (Tom Lane, 2014-12-01)
  We were not checking to see if the supplied dscale was valid for the given digit array when receiving binary-format numeric values. While dscale can validly be more than the number of nonzero fractional digits, it shouldn't be less; that case causes fractional digits to be hidden on display even though they're there and participate in arithmetic.

  Bug #12053 from Tommaso Sala indicates that there's at least one broken client library out there that sometimes supplies an incorrect dscale value, leading to strange behavior. This suggests that simply throwing an error might not be the best response; it would lead to failures in applications that might seem to be working fine today. What seems the least risky fix is to truncate away any digits that would be hidden by dscale. This preserves the existing behavior in terms of what will be printed for the transmitted value, while preventing subsequent arithmetic from producing results inconsistent with that.

  In passing, throw a specific error for the case of dscale being outside the range that will fit into a numeric's header. Before you got "value overflows numeric format", which is a bit misleading.

  Back-patch to all supported branches.

* Fix hstore_to_json_loose's detection of valid JSON number values. (Andrew Dunstan, 2014-12-01)
  We expose a function IsValidJsonNumber that internally calls the lexer for json numbers. That allows us to use the same test everywhere, instead of inventing a broken test for hstore conversions. The new function is also used in datum_to_json, replacing the code that is now moved to the new function.

  Backpatch to 9.3 where hstore_to_json_loose was introduced.
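  A hedged sketch of how a caller might use the new helper: the signature IsValidJsonNumber(str, len) is taken from the commit above, while the include path and the wrapper function are assumptions for illustration only.

  ```c
  #include "postgres.h"

  #include <string.h>

  #include "utils/jsonapi.h"      /* assumed header; the declaration's location may vary by branch */

  /*
   * Hypothetical caller, for illustration only: decide whether a raw text
   * value can be emitted as a bare JSON number or must be quoted.
   */
  static bool
  value_needs_quoting(const char *value)
  {
      /* IsValidJsonNumber() runs the JSON number lexer over the candidate string */
      return !IsValidJsonNumber(value, (int) strlen(value));
  }
  ```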
* Fix missing space in documentation (Magnus Hagander, 2014-12-01)
  Ian Barwick

* Fix minor bugs in commit 30bf4689a96cd283af33edcdd6b7210df3f20cd8 et al. (Tom Lane, 2014-11-30)
  Coverity complained that the "else" added to fillPGconn() was unreachable, which it was. Remove the dead code. In passing, rearrange the tests so as not to bother trying to fetch values for options that can't be assigned.

  Pre-9.3 did not have that issue, but it did have a "return" that should be "goto oom_error" to ensure that a suitable error message gets filled in.

* Remove PQhostaddr() from 9.4 release notes. (Noah Misch, 2014-11-29)
  Back-patch to 9.4, like the feature's removal.

* Reimplement 9f80f4835a55a1cbffcda5d23a617917f3286c14 with PQconninfo(). (Noah Misch, 2014-11-29)
  Apart from ignoring "hostaddr" set to the empty string, this behaves identically to its predecessor. Back-patch to 9.4, where the original commit first appeared.

  Reviewed by Fujii Masao.
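  For reference, a minimal libpq sketch showing how PQconninfo() exposes the effective connection parameters that the reverted PQhostaddr() would have reported; the connection string is a placeholder, not taken from the commit.

  ```c
  #include <stdio.h>
  #include <libpq-fe.h>

  int
  main(void)
  {
      /* placeholder connection string; adjust for your environment */
      PGconn *conn = PQconnectdb("dbname=postgres");

      if (PQstatus(conn) == CONNECTION_OK)
      {
          PQconninfoOption *opts = PQconninfo(conn);

          /* the returned array is terminated by an entry whose keyword is NULL */
          for (PQconninfoOption *o = opts; o && o->keyword; o++)
          {
              if (o->val)
                  printf("%s = %s\n", o->keyword, o->val);
          }
          PQconninfoFree(opts);
      }
      PQfinish(conn);
      return 0;
  }
  ```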
* Revert "Add libpq function PQhostaddr()."Noah Misch2014-11-29
| | | | | | | This reverts commit 9f80f4835a55a1cbffcda5d23a617917f3286c14. The function returned the raw value of a connection parameter, a task served by PQconninfo(). The next commit will reimplement the psql \conninfo change that way. Back-patch to 9.4, where that commit first appeared.
* Update transaction README for persistent multixactsAlvaro Herrera2014-11-28
| | | | | Multixacts are now maintained during recovery, but the README didn't get the memo. Backpatch to 9.3, where the divergence was introduced.
* Add tab-completion for ALTER TABLE ALTER CONSTRAINT in psql.Fujii Masao2014-11-28
| | | | | | Back-patch to 9.4 where ALTER TABLE ALTER CONSTRAINT was added. Michael Paquier, bug reported by Andrey Lizenko.
* Make \watch respect the user's \pset null setting. (Fujii Masao, 2014-11-28)
  Previously \watch always ignored the user's \pset null setting.

  The \pset null setting should be ignored for \d and similar queries. For those, the code can reasonably have an opinion about what the presentation should be like, since it knows what SQL query it's issuing. This argument surely doesn't apply to \watch, so this commit makes \watch use the user's \pset null setting.

  Back-patch to 9.3 where \watch was added.
* Mark response messages for translation in pg_isready. (Fujii Masao, 2014-11-28)
  Back-patch to 9.3 where pg_isready was added.

  Mats Erik Andersson

* Free libxml2/libxslt resources in a safer order. (Tom Lane, 2014-11-27)
  Mark Simonetti reported that libxslt sometimes crashes for him, and that swapping xslt_process's object-freeing calls around to do them in reverse order of creation seemed to fix it. I've not reproduced the crash, but valgrind clearly shows a reference to already-freed memory, which is consistent with the idea that shutdown of the xsltTransformContext is trying to reference the already-freed stylesheet or input document. With this patch, valgrind is no longer unhappy.

  I have an inquiry in to see if this is a libxslt bug or if we're just abusing the library; but even if it's a library bug, we'd want to adjust our code so it doesn't fail with unpatched libraries.

  Back-patch to all supported branches, because we've been doing this in the wrong(?) order for a long time.
* Allow "dbname" from connection string to be overridden in PQconnectDBParamsHeikki Linnakangas2014-11-25
| | | | | | | | | | | | | | | | If the "dbname" attribute in PQconnectDBParams contained a connection string or URI (and expand_dbname = TRUE), the database name from the connection string could not be overridden by a subsequent "dbname" keyword in the array. That was not intentional; all other options can be overridden. Furthermore, any subsequent "dbname" caused the connection string from the first dbname value to be processed again, overriding any values for the same options that were given between the connection string and the second dbname option. In the passing, clarify in the docs that only the first dbname option in the array is parsed as a connection string. Alex Shulgin. Backpatch to all supported versions.
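  For illustration, a minimal libpq sketch of the fixed behavior: with expand_dbname enabled, the first dbname entry is parsed as a connection string and a later dbname entry overrides the database name taken from it. The URI and database names are placeholders, not taken from the commit.

  ```c
  #include <stdio.h>
  #include <libpq-fe.h>

  int
  main(void)
  {
      /* first "dbname" holds a connection string/URI; the second overrides its database name */
      const char *const keywords[] = {"dbname", "dbname", NULL};
      const char *const values[]   = {"postgresql://localhost:5432/template1", "postgres", NULL};

      /* expand_dbname = 1: the first dbname value is expanded as a connection string */
      PGconn *conn = PQconnectdbParams(keywords, values, 1);

      if (PQstatus(conn) == CONNECTION_OK)
          printf("connected to database: %s\n", PQdb(conn));
      else
          fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));

      PQfinish(conn);
      return 0;
  }
  ```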
* Check return value of strdup() in libpq connection option parsing. (Heikki Linnakangas, 2014-11-25)
  An out-of-memory failure in most of these would lead to strange behavior, like connecting to a different database than intended, but some would lead to an outright segfault.

  Alex Shulgin and me. Backpatch to all supported versions.
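  The pattern being enforced, as a hedged sketch in plain C; set_option() is an illustrative name, not the actual libpq code.

  ```c
  #include <stdlib.h>
  #include <string.h>

  /*
   * Illustrative only: copy one option value, failing cleanly instead of
   * silently dropping the option when strdup() runs out of memory.
   */
  static int
  set_option(char **dest, const char *value)
  {
      char *copy = strdup(value);

      if (copy == NULL)
          return -1;              /* caller reports "out of memory" */

      free(*dest);                /* free(NULL) is a no-op */
      *dest = copy;
      return 0;
  }
  ```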
* Fix mishandling of system columns in FDW queries. (Tom Lane, 2014-11-22)
  postgres_fdw would send query conditions involving system columns to the remote server, even though it makes no effort to ensure that system columns other than CTID match what the remote side thinks. tableoid, in particular, probably won't match and might have some use in queries. Hence, prevent sending conditions that include non-CTID system columns.

  Also, create_foreignscan_plan neglected to check local restriction conditions while determining whether to set fsSystemCol for a foreign scan plan node. This again would bollix the results for queries that test a foreign table's tableoid.

  Back-patch the first fix to 9.3 where postgres_fdw was introduced. Back-patch the second to 9.2. The code is probably broken in 9.1 as well, but the patch doesn't apply cleanly there; given the weak state of support for FDWs in 9.1, it doesn't seem worth fixing.

  Etsuro Fujita, reviewed by Ashutosh Bapat, and somewhat modified by me
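  A hedged sketch of the kind of test involved: system columns have negative attribute numbers, and only ctid (SelfItemPointerAttributeNumber) is safe to ship to the remote side. The helper name is hypothetical, not the postgres_fdw code.

  ```c
  #include "postgres.h"

  #include "access/attnum.h"      /* AttrNumber */
  #include "access/sysattr.h"     /* SelfItemPointerAttributeNumber */

  /*
   * Hypothetical predicate: may a Var with this attribute number appear in a
   * condition sent to the remote server?  System columns other than ctid are
   * not guaranteed to match the remote side's values.
   */
  static bool
  attnum_is_shippable(AttrNumber attno)
  {
      if (attno < 0 && attno != SelfItemPointerAttributeNumber)
          return false;           /* non-CTID system column: evaluate locally */
      return true;
  }
  ```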
* Improve documentation's description of JOIN clauses. (Tom Lane, 2014-11-19)
  In bug #12000, Andreas Kunert complained that the documentation was misleading in saying "FROM T1 CROSS JOIN T2 is equivalent to FROM T1, T2". That's correct as far as it goes, but the equivalence doesn't hold when you consider three or more tables, since JOIN binds more tightly than comma. I added a <note> to explain this, and ended up rearranging some of the existing text so that the note would make sense in context.

  In passing, rewrite the description of JOIN USING, which was unnecessarily vague, and hadn't been helped any by somebody's reliance on markup as a substitute for clear writing. (Mostly this involved reintroducing a concrete example that was unaccountably removed by commit 032f3b7e166cfa28.)

  Back-patch to all supported branches.

* Avoid file descriptor leak in pg_test_fsync. (Robert Haas, 2014-11-19)
  This can cause problems on Windows, where files that are still open can't be unlinked.

  Jeff Janes
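  The underlying portability point, as a minimal POSIX sketch; the function and its arguments are illustrative, not the pg_test_fsync code.

  ```c
  #include <unistd.h>

  /*
   * Illustrative only: on Windows an open file generally cannot be unlinked,
   * so close the descriptor before removing the temporary test file.
   */
  static void
  cleanup_test_file(const char *path, int fd)
  {
      if (fd >= 0)
          close(fd);          /* must come before unlink() on Windows */
      unlink(path);
  }
  ```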
* Fix bug in the test of file descriptor of current WAL file in pg_receivexlog. (Fujii Masao, 2014-11-19)
  In pg_receivexlog, in order to check whether the current WAL file is being opened or not, its file descriptor has to be checked against -1 as an invalid value. But, oops, 7900e94 added the incorrect test checking the descriptor against 1. This commit fixes that bug.

  Back-patch to 9.4 where the bug was added.

  Spotted by Magnus Hagander
* Fix pg_receivexlog --slot so that it doesn't prevent the server shutdown. (Fujii Masao, 2014-11-19)
  When pg_receivexlog --slot is connecting to the server, at the shutdown of the server, walsender keeps waiting for the last WAL record to be replicated and flushed in pg_receivexlog. But previously pg_receivexlog issued the sync only when the WAL file was switched. So there was a case where the last WAL record was never flushed and walsender had to keep waiting indefinitely. This caused the server shutdown to get stuck.

  pg_recvlogical handles this problem by calling fsync() when it receives a request for immediate reply from the server. That is, at shutdown, walsender sends the request, pg_recvlogical receives it, flushes the last WAL record, and sends the flush location back to the server. Since walsender can see that the last WAL record is successfully flushed, it can exit cleanly.

  This commit introduces the same logic as pg_recvlogical has, to pg_receivexlog.

  Back-patch to 9.4 where pg_receivexlog was changed so that it can use the replication slot.

  Original patch by Michael Paquier, rewritten by me. Bug report by Furuya Osamu.
* Don't require bleeding-edge timezone data in timestamptz regression test. (Tom Lane, 2014-11-18)
  The regression test cases added in commits b2cbced9e et al depended in part on the Russian timezone offset changes of Oct 2014. While this is of no particular concern for a default Postgres build, it was possible for a build using --with-system-tzdata to fail the tests if the system tzdata database wasn't au courant. Bjorn Munch and Christoph Berg both complained about this while packaging 9.4rc1, so we probably shouldn't insist on the system tzdata being up-to-date.

  Instead, make an equivalent test using a zone change that occurred in Venezuela in 2007. With this patch, the regression tests should pass using any tzdata set from 2012 or later. (I can't muster much sympathy for somebody using --with-system-tzdata on a machine whose system tzdata is more than three years out-of-date.)

* Fix some bogus direct uses of realloc(). (Tom Lane, 2014-11-18)
  pg_dump/parallel.c was using realloc() directly with no error check. While the odds of an actual failure here seem pretty low, Coverity complains about it, so fix by using pg_realloc() instead.

  While looking for other instances, I noticed a couple of places in psql that hadn't gotten the memo about the availability of pg_realloc. These aren't bugs, since they did have error checks, but verbosely inconsistent code is not a good thing.

  Back-patch as far as 9.3. 9.2 did not have pg_dump/parallel.c, nor did it have pg_realloc available in all frontend code.
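  The preferred frontend pattern, as a minimal sketch assuming common/fe_memutils.h is available (9.3 and later frontend code); grow_buffer() is an illustrative name.

  ```c
  #include "postgres_fe.h"

  #include "common/fe_memutils.h"

  /*
   * Illustrative only: grow a buffer in frontend code.  pg_realloc() reports
   * "out of memory" and exits on failure, so no separate check is needed at
   * the call site, unlike a bare realloc().
   */
  static char *
  grow_buffer(char *buf, size_t newsize)
  {
      return pg_realloc(buf, newsize);
  }
  ```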
* Stamp 9.4rc1. (tag: REL9_4_RC1) (Tom Lane, 2014-11-17)

* Update 9.4 release notes for commits through today. (Tom Lane, 2014-11-17)

* Update time zone data files to tzdata release 2014j. (Tom Lane, 2014-11-17)
  DST law changes in the Turks & Caicos Islands (America/Grand_Turk) and in Fiji. New zone Pacific/Bougainville for portions of Papua New Guinea. Historical changes for Korea and Vietnam.

* Fix WAL-logging of B-tree "unlink halfdead page" operation. (Heikki Linnakangas, 2014-11-17)
  There was some confusion on how to record the case that the operation unlinks the last non-leaf page in the branch being deleted. _bt_unlink_halfdead_page set the "topdead" field in the WAL record to the leaf page, but the redo routine assumed that it would be an invalid block number in that case. This commit fixes _bt_unlink_halfdead_page to do what the redo routine expected.

  This code is new in 9.4, so backpatch there.

* Translation updates (Peter Eisentraut, 2014-11-16)

* Mention the TZ environment variable for initdb (Magnus Hagander, 2014-11-16)
  Daniel Gustafsson

* Fix duplicated platforms due to copy/paste error (Magnus Hagander, 2014-11-16)
  Patch from Michael Paquier, mistake spotted by KOIZUMI Satoru

* Fix initdb --sync-only to also sync tablespaces. (Andres Freund, 2014-11-15)
  630cd14426dc added initdb --sync-only, for use by pg_upgrade, by just exposing the existing fsync code. That's wrong, because initdb so far had absolutely no reason to deal with tablespaces. Fix --sync-only by additionally explicitly syncing each of the tablespaces.

  Backpatch to 9.3 where --sync-only was introduced.

  Abhijit Menon-Sen and Andres Freund
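  A hedged POSIX sketch of the kind of per-tablespace pass this adds; the real initdb code walks pg_tblspc recursively, and the function below is illustrative only.

  ```c
  #include <dirent.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  /*
   * Illustrative only: fsync every regular file directly under "dir", then
   * the directory itself, so file contents and directory entries both reach
   * stable storage.
   */
  static void
  fsync_dir_contents(const char *dir)
  {
      DIR        *d = opendir(dir);
      struct dirent *de;
      char        path[4096];
      int         fd;

      if (d == NULL)
          return;

      while ((de = readdir(d)) != NULL)
      {
          if (strcmp(de->d_name, ".") == 0 || strcmp(de->d_name, "..") == 0)
              continue;
          snprintf(path, sizeof(path), "%s/%s", dir, de->d_name);
          if ((fd = open(path, O_RDONLY)) >= 0)
          {
              (void) fsync(fd);
              close(fd);
          }
      }
      closedir(d);

      if ((fd = open(dir, O_RDONLY)) >= 0)
      {
          (void) fsync(fd);
          close(fd);
      }
  }
  ```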
* Sync unlogged relations to disk after they have been reset. (Andres Freund, 2014-11-15)
  Unlogged relations are only reset when performing an unclean restart. That means they have to be synced to disk during clean shutdowns. During normal processing that's achieved by registering a buffer's file to be fsynced at the next checkpoint when flushed. But ResetUnloggedRelations() doesn't go through the buffer manager, so nothing will force reset relations to disk before the next shutdown checkpoint.

  So just make ResetUnloggedRelations() fsync the newly created main forks to disk.

  Discussion: 20140912112246.GA4984@alap3.anarazel.de

  Backpatch to 9.1 where unlogged tables were introduced.

  Abhijit Menon-Sen and Andres Freund
* Ensure unlogged tables are reset even if crash recovery errors out. (Andres Freund, 2014-11-15)
  Unlogged relations are reset at the end of crash recovery as they're only synced to disk during a proper shutdown. Unfortunately that and later steps can fail, e.g. due to running out of space. This reset was, up to now, performed after marking the database as having finished crash recovery successfully. Since out-of-space errors trigger a crash restart, that could lead to a situation where not all unlogged relations are reset.

  Once that happened, use of unlogged relations could yield errors like "could not open file "...": No such file or directory". Luckily clusters that show the problem can be fixed by performing an immediate shutdown, and starting the database again.

  To fix, just call ResetUnloggedRelations(UNLOGGED_RELATION_INIT) earlier, before marking the database as having successfully recovered.

  Discussion: 20140912112246.GA4984@alap3.anarazel.de

  Backpatch to 9.1 where unlogged tables were introduced.

  Abhijit Menon-Sen and Andres Freund
* Document evaluation-order considerations for aggregate functions. (Tom Lane, 2014-11-14)
  The SELECT reference page didn't really address the question of when aggregate function evaluation occurs, nor did the "expression evaluation rules" documentation mention that CASE can't be used to control whether an aggregate gets evaluated or not. Improve that.

  Per discussion of bug #11661. Original text by Marti Raudsepp and Michael Paquier, rewritten significantly by me.

* Revert change to ALTER TABLESPACE summary. (Stephen Frost, 2014-11-14)
  When ALTER TABLESPACE MOVE ALL was changed to be ALTER TABLE ALL IN TABLESPACE, the ALTER TABLESPACE summary should have been adjusted back to its original definition.

  Patch by Thom Brown (thanks!).

* Allow interrupting GetMultiXactIdMembers (Alvaro Herrera, 2014-11-14)
  This function has a loop which can lead to uninterruptible process "stalls" (actually infinite loops) when some bugs are triggered. Avoid that unpleasant situation by adding a check for interrupts in a place that shouldn't degrade performance in the normal case.

  Backpatch to 9.3. Older branches have an identical loop here, but the aforementioned bugs are only a problem starting in 9.3 so there doesn't seem to be any point in backpatching any further.
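  The general backend pattern being applied, sketched under assumptions: the loop and function below are hypothetical, not the actual GetMultiXactIdMembers code; CHECK_FOR_INTERRUPTS() comes from miscadmin.h.

  ```c
  #include "postgres.h"

  #include "miscadmin.h"          /* CHECK_FOR_INTERRUPTS() */

  /*
   * Illustrative only: a retry loop that could otherwise spin forever now
   * gives query cancel / backend termination a chance to take effect.
   */
  static void
  wait_for_condition(volatile bool *done)
  {
      while (!*done)
      {
          CHECK_FOR_INTERRUPTS();     /* cheap when no interrupt is pending */
          pg_usleep(1000L);           /* 1 ms; the real loop retries its own work here */
      }
  }
  ```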
* Improve logical decoding log messages (Peter Eisentraut, 2014-11-13)
  suggestions from Robert Haas

* Fix pg_dumpall to restore its ability to dump from ancient servers. (Tom Lane, 2014-11-13)
  Fix breakage induced by commits d8d3d2a4f37f6df5d0118b7f5211978cca22091a and 463f2625a5fb183b6a8925ccde98bb3889f921d9: pg_dumpall has crashed when attempting to dump from pre-8.1 servers since then, due to faulty construction of the query used for dumping roles from older servers. The query was erroneous as of the earlier commit, but it wasn't exposed unless you tried to use --binary-upgrade, which you presumably wouldn't with a pre-8.1 server. However commit 463f2625a made it fail always.

  In HEAD, also fix additional breakage induced in the same query by commit 491c029dbc4206779cf659aa0ff986af7831d2ff, which evidently wasn't tested against pre-8.1 servers either.

  The bug is only latent in 9.1 because 463f2625a hadn't landed yet, but it seems best to back-patch all branches containing the faulty query.

  Gilles Darold
* Fix and improve cache invalidation logic for logical decoding. (Andres Freund, 2014-11-13)
  There are basically three situations in which logical decoding needs to perform cache invalidation: during/after replaying a transaction with catalog changes, when skipping an uninteresting transaction that performed catalog changes, and when erroring out while replaying a transaction. Unfortunately these three cases were all done slightly differently, partially because 8de3e410fa, which greatly simplifies matters, got committed in the midst of the development of logical decoding.

  The actually problematic case was when logical decoding skipped transaction commits (and thus processed invalidations). When used via the SQL interface, cache invalidation could access the catalog: bad, because we didn't set up enough state to allow that correctly. It'd not be hard to set up sufficient state, but the simpler solution is to always perform cache invalidation outside a valid transaction.

  Also make the different cache invalidation cases look as similar as possible, to ease code review.

  This fixes the assertion failure reported by Antonin Houska in 53EE02D9.7040702@gmail.com. The presented testcase has been expanded into a regression test.

  Backpatch to 9.4, where logical decoding was introduced.
* Fix xmin/xmax horizon computation during logical decoding initialization. (Andres Freund, 2014-11-13)
  When building the initial historic catalog snapshot there were scenarios where snapbuild.c would use incorrect xmin/xmax values when starting from a xl_running_xacts record. The values used were always a bit suspect, but happened to be correct in the easy-to-test cases. Notably the values used when the initial snapshot was computed while no other transactions were running were correct.

  This is likely to be the cause of the occasional buildfarm failures on animals markhor and tick; but it's quite possible to reproduce problems without CLOBBER_CACHE_ALWAYS.

  Backpatch to 9.4, where logical decoding was introduced.
* Fix race condition between hot standby and restoring a full-page image. (Heikki Linnakangas, 2014-11-13)
  There was a window in RestoreBackupBlock where a page would be zeroed out, but not yet locked. If a backend pinned and locked the page in that window, it saw the zeroed page instead of the old page or new page contents, which could lead to missing rows in a result set, or errors.

  To fix, replace RBM_ZERO with RBM_ZERO_AND_LOCK, which atomically pins, zeroes, and locks the page, if it's not in the buffer cache already.

  In stable branches, the old RBM_ZERO constant is renamed to RBM_DO_NOT_USE, to avoid breaking any 3rd party extensions that might use RBM_ZERO. More importantly, this avoids renumbering the other enum values, which would cause even bigger confusion in extensions that use ReadBufferExtended, but haven't been recompiled.

  Backpatch to all supported versions; this has been racy since hot standby was introduced.
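  A hedged sketch of the new mode from a redo routine's point of view; ReadBufferExtended() and RBM_ZERO_AND_LOCK are the bufmgr API named by the commit, while the surrounding function is illustrative only.

  ```c
  #include "postgres.h"

  #include "storage/bufmgr.h"

  /*
   * Illustrative only: obtain a buffer to restore a full-page image into.
   * With RBM_ZERO_AND_LOCK the buffer comes back pinned, zeroed (if it was
   * not already cached) and exclusively locked, closing the window in which
   * another backend could see the zeroed page.
   */
  static Buffer
  get_fpi_target_buffer(Relation rel, BlockNumber blkno)
  {
      Buffer      buf = ReadBufferExtended(rel, MAIN_FORKNUM, blkno,
                                           RBM_ZERO_AND_LOCK, NULL);

      /* caller restores the page contents, marks the buffer dirty, then releases it */
      return buf;
  }
  ```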
* Tweak row-level locking documentation (Alvaro Herrera, 2014-11-13)
  Move the meat of locking levels to mvcc.sgml, leaving only a link to it in the SELECT reference page.

  Michael Paquier, with some tweaks by Álvaro

* doc: Add index entry for "hypothetical-set aggregate" (Peter Eisentraut, 2014-11-13)

* Explicitly support the case that a plancache's raw_parse_tree is NULL. (Tom Lane, 2014-11-12)
  This only happens if a client issues a Parse message with an empty query string, which is a bit odd; but since it is explicitly called out as legal by our FE/BE protocol spec, we'd probably better continue to allow it.

  Fix by adding tests everywhere that the raw_parse_tree field is passed to functions that don't or shouldn't accept NULL. Also make it clear in the relevant comments that NULL is an expected case. This reverts commits a73c9dbab0165b3395dfe8a44a7dfd16166963c4 and 2e9650cbcff8c8fb0d9ef807c73a44f241822eee, which fixed specific crash symptoms by hacking things at what now seems to be the wrong end, ie the callee functions. Making the callees allow NULL is superficially more robust, but it's not always true that there is a defensible thing for the callee to do in such cases. The caller has more context and is better able to decide what the empty-query case ought to do.

  Per followup discussion of bug #11335. Back-patch to 9.2. The code before that is sufficiently different that it would require development of a separate patch, which doesn't seem worthwhile for what is believed to be an essentially cosmetic change.
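  The calling-side convention, as a minimal sketch with a hypothetical helper name.

  ```c
  #include "postgres.h"

  #include "nodes/nodes.h"

  /*
   * Illustrative only: a NULL raw_parse_tree means the client sent an empty
   * query string, and callers should handle that explicitly rather than
   * passing NULL into code that cannot take it.
   */
  static bool
  parse_tree_is_empty_query(Node *raw_parse_tree)
  {
      return raw_parse_tree == NULL;
  }
  ```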
* Fix several weaknesses in slot and logical replication on-disk serialization. (Andres Freund, 2014-11-12)
  Heikki noticed in 544E23C0.8090605@vmware.com that slot.c and snapbuild.c were missing the FIN_CRC32 call when computing/checking checksums of on-disk files. That doesn't lower the error detection capabilities of the checksum, but is inconsistent with other usages.

  In a followup mail Heikki also noticed that, contrary to a comment, the 'version' and 'length' struct fields of a replication slot's on-disk data were not covered by the checksum. That's not likely to lead to actually missed corruption as those fields are cross-checked with the expected version and the actual file length. But it's wrong nonetheless.

  As fixing these issues makes existing on-disk files unreadable, bump the expected versions of on-disk files for both slots and logical decoding historic catalog snapshots. This means that loading old files will fail with ERROR: "replication slot file ... has unsupported version 1" and ERROR: "snapbuild state file ... has unsupported version 1 instead of 2" respectively. Given the low likelihood of anybody already using these new features in a production setup that seems acceptable.

  Fixing these issues made me notice that there's no regression test covering the loading of historic snapshots from disk, so add one.

  Backpatch to 9.4 where these features were introduced.
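  The checksum discipline being enforced, sketched with the 9.4-era CRC macros from utils/pg_crc.h; the on-disk struct below is hypothetical, not the actual slot or snapbuild layout.

  ```c
  #include "postgres.h"

  #include "utils/pg_crc.h"

  /* hypothetical on-disk header, for illustration only */
  typedef struct OnDiskState
  {
      uint32      magic;
      pg_crc32    checksum;       /* covers everything after this field */
      uint32      version;
      uint32      length;
      char        data[64];
  } OnDiskState;

  /*
   * Illustrative only: include every field after the checksum itself, and do
   * not forget FIN_CRC32() before storing or comparing the value.
   */
  static void
  set_checksum(OnDiskState *s)
  {
      INIT_CRC32(s->checksum);
      COMP_CRC32(s->checksum,
                 (char *) s + offsetof(OnDiskState, version),
                 sizeof(OnDiskState) - offsetof(OnDiskState, version));
      FIN_CRC32(s->checksum);
  }
  ```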