...
* Make dblink try harder to form useful error messages  (Joe Conway, 2016-12-22)
  When libpq encounters a connection-level error, e.g. runs out of memory while forming a result, there will be no error associated with PGresult, but a message will be placed into PGconn's error buffer. postgres_fdw takes care to use the PGconn error message when PGresult does not have one, but dblink has been negligent in that regard. Modify dblink to mirror what postgres_fdw has been doing.

  Back-patch to all supported branches.

  Author: Joe Conway
  Reviewed-By: Tom Lane
  Discussion: https://postgr.es/m/02fa2d90-2efd-00bc-fefc-c23c00eb671e%40joeconway.com

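  For context, a minimal libpq sketch of the fallback described above. The helper name is invented for illustration; this is not dblink's actual code.

      #include <libpq-fe.h>

      /* Prefer the PGresult's message, then the PGconn error buffer. */
      static const char *
      best_error_message(PGconn *conn, PGresult *res)
      {
          const char *msg = res ? PQresultErrorMessage(res) : NULL;

          if (msg == NULL || msg[0] == '\0')
              msg = PQerrorMessage(conn); /* connection-level error buffer */
          if (msg == NULL || msg[0] == '\0')
              msg = "unknown error";      /* last-ditch fallback */
          return msg;
      }
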
* Protect dblink from invalid options when using postgres_fdw server  (Joe Conway, 2016-12-22)
  When dblink uses a postgres_fdw server name for its connection, it is possible for the connection to have options that are invalid with dblink (e.g. "updatable"). The recommended way to avoid this problem is to use dblink_fdw servers instead. However there are use cases for using postgres_fdw, and possibly other FDWs, for dblink connection options, therefore protect against trying to use any options that do not apply by using is_valid_dblink_option() when building the connection string from the options.

  Back-patch to 9.3. Although 9.2 supports FDWs for connection info, is_valid_dblink_option() did not yet exist, and neither did postgres_fdw, at least in the postgres source tree. Given the lack of previous complaints, fixing that seems too invasive/not worth it.

  Author: Corey Huinker
  Reviewed-By: Joe Conway
  Discussion: https://postgr.es/m/CADkLM%3DfWyXVEyYcqbcRnxcHutkP45UHU9WD7XpdZaMfe7S%3DRwA%40mail.gmail.com

* Give a useful error message if uuid-ossp is built without preconfiguration.  (Tom Lane, 2016-12-22)
  Before commit b8cc8f947, it was possible to build contrib/uuid-ossp without having told configure you meant to; you could just cd into that directory and "make". That no longer works because the code depends on configure to have done header and library probes, but the ensuing error messages are not so easy to interpret if you're not an old C hand. We've gotten a couple of complaints recently from people trying to do this the low-tech way, so add an explicit #error directing the user to use --with-uuid.

  (In principle we might want to do something similar in the other optionally-built contrib modules; but I don't think any of the others have ever worked without preconfiguration, so there are no bad habits to break people of.)

  Back-patch to 9.4 where the previous commit came in.

  Report: https://postgr.es/m/CAHeEsBf42AWTnk=1qJvFv+mYgRFm07Knsfuc86Ono8nRjf3tvQ@mail.gmail.com
  Report: https://postgr.es/m/CAKYdkBrUaZX+F6KpmzoHqMtiUqCtAW_w6Dgvr6F0WTiopuGxow@mail.gmail.com

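  An illustrative sketch of such a preprocessor guard. The macro names here are assumptions about what configure defines when --with-uuid is used; the real check and wording live in contrib/uuid-ossp.

      /* Fail early, with a readable message, if configure was never told
         which UUID library to use. */
      #if !defined(HAVE_UUID_OSSP) && !defined(HAVE_UUID_E2FS) && !defined(HAVE_UUID_BSD)
      #error Please use the --with-uuid option of configure to select a UUID library.
      #endif
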
* Fix buffer overflow on particularly named files and clarify documentation about output file naming.  (Michael Meskes, 2016-12-22)
  Patch by Tsunakawa, Takayuki <tsunakawa.takay@jp.fujitsu.com>

* Improve dblink error message when remote does not provide it  (Joe Conway, 2016-12-21)
  When dblink or postgres_fdw detects an error on the remote side of the connection, it will try to construct a local error message as best it can using libpq's PQresultErrorField(). When no primary message is available, it was bailing out with an unhelpful "unknown error". Make that message better and more style guide compliant. Per discussion on hackers.

  Backpatch to 9.2 except postgres_fdw which didn't exist before 9.3.

  Discussion: https://postgr.es/m/19872.1482338965%40sss.pgh.pa.us

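  A hedged sketch of pulling the primary message field via libpq; the function name and fallback wording here are illustrative, not the committed text.

      #include <libpq-fe.h>

      /* Return the remote error's primary message, or a more descriptive
         fallback than "unknown error" when the remote did not supply one. */
      static const char *
      remote_primary_message(const PGresult *res)
      {
          char *primary = PQresultErrorField(res, PG_DIAG_MESSAGE_PRIMARY);

          if (primary != NULL && primary[0] != '\0')
              return primary;
          return "could not obtain message string for remote error";
      }
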
* Fix detection of unfinished Unicode surrogate pair at end of string.  (Tom Lane, 2016-12-21)
  The U&'...' and U&"..." syntaxes silently discarded a surrogate pair start (that is, a code between U+D800 and U+DBFF) if it occurred at the very end of the string. This seems like an obvious oversight, since we throw an error for every other invalid combination of surrogate characters, including the very same situation in E'...' syntax.

  This has been wrong since the pair processing was added (in 9.0), so back-patch to all supported branches.

  Discussion: https://postgr.es/m/19113.1482337898@sss.pgh.pa.us

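  A minimal standalone sketch of the rule being enforced (illustrative names; not the lexer's actual code): a high surrogate must be followed by a low surrogate, even when it is the last code point in the string.

      #include <stdbool.h>

      static bool
      is_high_surrogate(unsigned int cp)
      {
          return cp >= 0xD800 && cp <= 0xDBFF;
      }

      static bool
      is_low_surrogate(unsigned int cp)
      {
          return cp >= 0xDC00 && cp <= 0xDFFF;
      }

      /* Reject an unpaired high surrogate, including one at end of input. */
      static bool
      surrogate_ok_at(const unsigned int *cp, int n, int i)
      {
          if (!is_high_surrogate(cp[i]))
              return true;
          return (i + 1 < n) && is_low_surrogate(cp[i + 1]);
      }
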
* Fix strange behavior (and possible crashes) in full text phrase search.  (Tom Lane, 2016-12-21)
  In an attempt to simplify the tsquery matching engine, the original phrase search patch invented rewrite rules that would rearrange a tsquery so that no AND/OR/NOT operator appeared below a PHRASE operator. But this approach had numerous problems. The rearrangement step was missed by ts_rewrite (and perhaps other places), allowing tsqueries to be created that would cause Assert failures or perhaps crashes at execution, as reported by Andreas Seltenreich. The rewrite rules effectively defined semantics for operators underneath PHRASE that were buggy, or at least unintuitive. And because rewriting was done in tsqueryin() rather than at execution, the rearrangement was user-visible, which is not very desirable --- for example, it might cause unexpected matches or failures to match in ts_rewrite.

  As a somewhat independent problem, the behavior of nested PHRASE operators was only sane for left-deep trees; queries like "x <-> (y <-> z)" did not behave intuitively at all.

  To fix, get rid of the rewrite logic altogether, and instead teach the tsquery execution engine to manage AND/OR/NOT below a PHRASE operator by explicitly computing the match location(s) and match widths for these operators. This requires introducing some additional fields into the publicly visible ExecPhraseData struct; but since there's no way for third-party code to pass such a struct to TS_phrase_execute, it shouldn't create an ABI problem as long as we don't move the offsets of the existing fields.

  Another related problem was that index searches supposed that "!x <-> y" could be lossily approximated as "!x & y", which isn't correct because the latter will reject, say, "x q y" which the query itself accepts. This required some tweaking in TS_execute_ternary along with the main tsquery engine.

  Back-patch to 9.6 where phrase operators were introduced. While this could be argued to change behavior more than we'd like in a stable branch, we have to do something about the crash hazards and index-vs-seqscan inconsistency, and it doesn't seem desirable to let the unintuitive behaviors induced by the rewriting implementation stand as precedent.

  Discussion: https://postgr.es/m/28215.1481999808@sss.pgh.pa.us
  Discussion: https://postgr.es/m/26706.1482087250@sss.pgh.pa.us

* Improve ALTER TABLE documentation  (Stephen Frost, 2016-12-21)
  The ALTER TABLE documentation wasn't terribly clear when it came to which commands could be combined together and what it meant when they were. In particular, SET TABLESPACE *can* be combined with other commands, when it's operating against a single table, but not when multiple tables are being moved with ALL IN TABLESPACE. Further, the actions are applied together but not really in 'parallel', at least today.

  Pointed out by: Amit Langote

  Improved wording from Tom.

  Back-patch to 9.4, where the ALL IN TABLESPACE option was added.

  Discussion: https://www.postgresql.org/message-id/14c535b4-13ef-0590-1b98-76af355a0763%40lab.ntt.co.jp

* Fix dumping of casts and transforms using built-in functions  (Stephen Frost, 2016-12-21)
  In pg_dump.c dumpCast() and dumpTransform(), we would happily ignore the cast or transform if it happened to use a built-in function because we weren't including the information about built-in functions when querying pg_proc from getFuncs().

  Modify the query in getFuncs() to also gather information about functions which are used by user-defined casts and transforms (where "user-defined" means "has an OID >= FirstNormalObjectId"). This also adds to the TAP regression tests for 9.6 and master to cover these types of objects.

  Back-patch all the way for casts, back to 9.5 for transforms.

  Discussion: https://www.postgresql.org/message-id/flat/20160504183952.GE10850%40tamriel.snowman.net

* For 8.0 servers, get last built-in oid from pg_database  (Stephen Frost, 2016-12-21)
  We didn't start ensuring that all built-in objects had OIDs less than 16384 until 8.1, so for 8.0 servers we still need to query the value out of pg_database. We need this, in particular, to distinguish which casts were built-in and which were user-defined.

  For HEAD, we only worry about going back to 8.0; for the back-branches, we also ensure that 7.0-7.4 work.

  Discussion: https://www.postgresql.org/message-id/flat/20160504183952.GE10850%40tamriel.snowman.net

* Fix order of operations in CREATE OR REPLACE VIEW.  (Dean Rasheed, 2016-12-21)
  When CREATE OR REPLACE VIEW acts on an existing view, don't update the view options until after the view query has been updated.

  This is necessary in the case where CREATE OR REPLACE VIEW is used on an existing view that is not updatable, and the new view is updatable and specifies the WITH CHECK OPTION. In this case, attempting to apply the new options to the view before updating its query fails, because the options are applied using the ALTER TABLE infrastructure which checks that WITH CHECK OPTION is only applied to an updatable view.

  If new columns are being added to the view, that is also done using the ALTER TABLE infrastructure, but it is important that that still be done before updating the view query, because the rules system checks that the query columns match those on the view relation. Added a comment to explain that, in case someone is tempted to move that to where the view options are now being set.

  Back-patch to 9.4 where WITH CHECK OPTION was added.

  Report: https://postgr.es/m/CAEZATCUp%3Dz%3Ds4SzZjr14bfct_bdJNwMPi-gFi3Xc5k1ntbsAgQ%40mail.gmail.com

* Fix corner-case bug in WaitEventSetWaitBlock on Windows.  (Robert Haas, 2016-12-21)
  If we do not reset the FD_READ event, WaitForMultipleObjects won't return it again unless we've meanwhile read from the socket, which is generally true but not guaranteed. WaitEventSetWaitBlock itself may fail to return the event to the caller if the latch is also set, and even if we changed that, the caller isn't obliged to handle all returned events at once. On non-Windows systems, the socket-read event is purely level-triggered, so this issue does not exist. To fix, make Windows reset the event when needed.

  This bug was introduced by 98a64d0bd713cb89e61bef6432befc4b7b5da59e, and causes hangs when trying to use the pldebugger extension.

  Patch by Amit Kapila. Reported and tested by Ashutosh Sharma, who also provided some analysis. Further analysis by Michael Paquier.

* Fix sharing Agg transition state of DISTINCT or ordered aggs.  (Heikki Linnakangas, 2016-12-20)
  If a query contained two aggregates that could share the transition value, we would correctly collect the input into a tuplesort only once, but incorrectly run the transition function over the accumulated input twice, in finalize_aggregates(). That caused a crash, when we tried to call tuplesort_performsort() on an already-freed (and now NULL) tuplesort.

  Backport to 9.6, where sharing of transition state and this bug were introduced.

  Analysis by Tom Lane.

  Discussion: https://www.postgresql.org/message-id/ac5b0b69-744c-9114-6218-8300ac920e61@iki.fi

* Fix handling of phrase operator removal while removing tsquery stopwords.  (Tom Lane, 2016-12-19)
  The distance of a removed phrase operator should propagate up to a parent phrase operator if there is one, but this only worked correctly in left-deep trees. Throwing in a few parentheses confused it completely, as indeed was illustrated by bizarre results in existing regression test cases.

  To fix, track unaccounted-for distances that should propagate to the left and to the right of the current node, rather than trying to make it work with only one returned distance.

  Also make some adjustments to behave as well as we can for cases of intermixed phrase and regular (AND/OR) operators. I don't think it's possible to be 100% correct for that without a rethinking of the tsquery representation; for example, maybe we should just not drop stopword nodes at all underneath phrase operators. But this is better than it was, and changing tsquery representation wouldn't be safely back-patchable.

  While at it, I simplified the API of the clean_fakeval_intree function a bit by getting rid of the "char *result" output parameter; that wasn't doing anything that wasn't redundant with whether the result node is NULL or not, and testing for NULL seems a lot clearer/safer.

  This is part of a larger project to fix various infelicities in the phrase-search implementation, but this part seems committable on its own.

  Back-patch to 9.6 where phrase operators were introduced.

  Discussion: https://postgr.es/m/28215.1481999808@sss.pgh.pa.us
  Discussion: https://postgr.es/m/26706.1482087250@sss.pgh.pa.us

* Fix base backup rate limiting in presence of slow I/O  (Magnus Hagander, 2016-12-19)
  When source I/O on disk was too slow compared to the rate limiting specified, the system could end up with a negative sleep value that it never got out of, which caused rate limiting to effectively be turned off.

  Discussion: https://postgr.es/m/CABUevEy_-e0YvL4ayoX8bH_Ja9w%2BBHoP6jUgdxZuG2nEj3uAfQ%40mail.gmail.com

  Analysis by me, patch by Antonin Houska

* In contrib/uuid-ossp, #include headers needed for ntohl() and ntohs().  (Tom Lane, 2016-12-17)
  Oversight in commit b8cc8f947. I just noticed this causes compiler warnings on FreeBSD, and it really ought to cause warnings elsewhere too: all references I can find say that <arpa/inet.h> is required for these. We have a lot of code elsewhere that thinks that both <netinet/in.h> and <arpa/inet.h> should be included for these functions, so do it that way here too, even though <arpa/inet.h> ought to be sufficient according to the references I consulted.

  Back-patch to 9.4 where the previous commit landed.

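  For reference, the belt-and-suspenders include set described above looks like this (a sketch of the pattern, not the exact patch):

      #include <sys/types.h>
      #include <netinet/in.h>     /* some platforms want this for ntohl()/ntohs() */
      #include <arpa/inet.h>      /* POSIX home of ntohl()/ntohs() */
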
* Fix FK-based join selectivity estimation for semi/antijoins.  (Tom Lane, 2016-12-17)
  This case wasn't thought through sufficiently in commit 100340e2d. It's true that the FK proves that every outer row has a match in the inner table, but we forgot that some of the inner rows might be filtered away by WHERE conditions located within the semijoin's RHS.

  If the RHS is just one table, we can reasonably take the semijoin selectivity as equal to the fraction of the referenced table's rows that are expected to survive its restriction clauses.

  If the RHS is a join, it's not clear how much of the referenced table might get through the join, so fall back to the same rule we were already using for other outer-join cases: use the minimum of the regular per-clause selectivity estimates.

  This gives the same result as if we hadn't considered the FK at all when there's a single FK column, but it should still help for multi-column FKs, which is the case that 100340e2d is really meant to help with.

  Back-patch to 9.6 where the previous commit came in.

  Report: https://postgr.es/m/16149.1481835103@sss.pgh.pa.us

* Ensure that num_sync is greater than zero in synchronous_standby_names.  (Fujii Masao, 2016-12-17)
  Previously num_sync could be set to zero, and this setting caused an assertion failure. This means that the multiple-synchronous-standbys code should assume that num_sync is greater than zero. Also, setting num_sync to zero is nonsense because synchronous_standby_names is precisely the configuration for synchronous replication. If users do not want transaction commits to wait for any standbys, synchronous_standby_names should be emptied to disable synchronous replication instead of setting num_sync to zero.

  This patch forbids users from setting num_sync to zero in synchronous_standby_names. If zero is specified, an error will occur while processing the parameter setting.

  Back-patch to 9.6 where the multiple synchronous standbys feature was added.

  Patch by me. Reviewed by Tom Lane.

  Discussion: <CAHGQGwHWB3izc6cXuFLh5kOcAbFXaRhhgwd-X5PeN9TEjxqXwg@mail.gmail.com>

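  For illustration (the standby names are made up), the intended usage after this change, in postgresql.conf:

      # Wait for two of the three listed standbys; num_sync must be > 0.
      synchronous_standby_names = '2 (s1, s2, s3)'

      # To disable synchronous replication, empty the list instead of using 0.
      synchronous_standby_names = ''
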
* Improve documentation around TS_execute().  (Tom Lane, 2016-12-16)
  I got frustrated by the lack of commentary in this area, so here is some reverse-engineered documentation, along with minor stylistic cleanup. No code changes more significant than removal of unused variables.

  Back-patch to 9.6, not because that's useful in itself, but because we have some bugs to fix in phrase search and this would cause merge failures if it's only in HEAD.

* Add missing documentation for effective_io_concurrency tablespace option.  (Fujii Masao, 2016-12-17)
  The description of the effective_io_concurrency option was missing from the ALTER TABLESPACE docs, though it is included in the CREATE TABLESPACE ones.

  Back-patch to 9.6 where the effective_io_concurrency tablespace option was added.

  Michael Paquier, reported by Marc-Olaf Jaschke

* Fix off-by-one in memory allocation for quote_literal_cstr().  (Heikki Linnakangas, 2016-12-16)
  The calculation didn't take into account the NULL terminator. That led to overwriting the palloc'd buffer by one byte, if the input consists entirely of backslashes. For example "format('%L', E'\\')".

  Fixes bug #14468. Backpatch to all supported versions.

  Report: https://www.postgresql.org/message-id/20161216105001.13334.42819%40wrigleys.postgresql.org

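  A generic, self-contained illustration of this class of bug (not the actual quote_literal_cstr() code): when every input byte can be doubled on output, the buffer must still reserve space for the surrounding quotes and the terminating NUL.

      #include <stdlib.h>
      #include <string.h>

      static char *
      quote_doubling_escapes(const char *in)
      {
          size_t  len = strlen(in);
          /* worst case: every byte doubled, plus 2 quotes, plus 1 for the NUL */
          char   *out = malloc(len * 2 + 2 + 1);
          char   *p = out;

          if (out == NULL)
              return NULL;
          *p++ = '\'';
          for (; *in; in++)
          {
              if (*in == '\'' || *in == '\\')
                  *p++ = *in;     /* double characters that need escaping */
              *p++ = *in;
          }
          *p++ = '\'';
          *p = '\0';              /* the byte an off-by-one calculation forgets */
          return out;
      }
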
* Sync our copy of the timezone library with IANA release tzcode2016j.  (Tom Lane, 2016-12-15)
  This is a trivial update (consisting in fact only of the addition of a comment). The point is just to get back to being synced with an official release of tzcode, rather than some ad-hoc point in their commit history, which is where commit 1f87181e1 left it.

* Prevent planagg.c from failing on queries containing CTEs.  (Tom Lane, 2016-12-13)
  The existing tests in preprocess_minmax_aggregates() usually prevent it from trying to do anything with queries containing CTEs, but there's an exception: a CTE could be present as a member of an appendrel, if we flattened a UNION ALL that contains CTE references. If it did try to generate an optimized path for a query using a CTE, it failed with "could not find plan for CTE", as reported by Torsten Förtsch. The proximate cause is an unwise decision in commit 3fc6e2d7f to clear subroot->cte_plan_ids in build_minmax_path(). That left the subroot's cte_plan_ids list out of step with its parse->cteList.

  Removing the "subroot->cte_plan_ids = NIL;" assignment is enough to let the case work again, but really it's pretty silly to be expending any cycles at all in this module when there are CTEs: we always treat their outputs as unordered so there's no way for the optimization to win. Hence, also add an early-exit test so we don't waste time like that.

  Back-patch to 9.6 where the misbehavior was introduced.

  Report: https://postgr.es/m/CAKkG4_=gjY5QiHtqSZyWMwDuTd_CftKoTaCqxjJ7uUz1-Gw=qw@mail.gmail.com

* doc: Fix purported type of pg_am.amhandler to match reality.  (Robert Haas, 2016-12-12)
  Joel Jacobson

* Use "%option prefix" to set API names in ecpg's lexer.  (Tom Lane, 2016-12-11)
  Clean up some technical debt left behind by commit 72b1e3a21: instead of quickly hacking the name of base_yylex() with a #define, set it properly with "%option prefix". This causes the names of pgc.l's other exported symbols to change as well, so run around and modify the outside references to them as needed. Similarly, make pgc.l's external references to base_yylval use that variable's true name instead of a macro.

  The reason for doing this now is that the quick-hack solution will fail with future versions of flex, as reported by Дилян Палаузов. Hence, back-patch into 9.6 where the previous commit appeared, since it's likely people will build 9.6 with newer flex versions during its lifetime.

  Discussion: https://postgr.es/m/d845c1af-e18d-6651-178f-9f08cdf37e10@aegee.org

* Prevent crash when ts_rewrite() replaces a non-top-level subtree with null.  (Tom Lane, 2016-12-11)
  When ts_rewrite()'s replacement argument is an empty tsquery, it's supposed to simplify any operator nodes whose operand(s) become NULL; but it failed to do that reliably, because dropvoidsubtree() only examined the top level of the result tree. Rather than make a second recursive pass, let's just give the responsibility to dofindsubquery() to simplify while it's doing the main replacement pass. Per report from Andreas Seltenreich.

  Artur Zakirov, with some cosmetic changes by me. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/8737i01dew.fsf@credativ.de

* Be more careful about Python refcounts while creating exception objects.  (Tom Lane, 2016-12-09)
  PLy_generate_spi_exceptions neglected to do Py_INCREF on the new exception objects, evidently supposing that PyModule_AddObject would do that --- but it doesn't. This left us in a situation where a Python garbage collection cycle could result in deletion of exception object(s), causing server crashes or wrong answers if the exception objects are used later in the session.

  In addition, PLy_generate_spi_exceptions didn't bother to test for a null result from PyErr_NewException, which at best is inconsistent with the code in PLy_add_exceptions. And PLy_add_exceptions, while it did do Py_INCREF on the exceptions it makes, waited to do that till after some PyModule_AddObject calls, creating a similar risk for failure if garbage collection happened within those calls.

  To fix, refactor to have just one piece of code that creates an exception object and adds it to the spiexceptions module, bumping the refcount first.

  Also, let's add an additional refcount to represent the pointer we're going to store in a C global variable or hash table. This should only matter if the user does something weird like delete the spiexceptions Python module, but lack of paranoia has caused us enough problems in PL/Python already.

  The fact that PyModule_AddObject doesn't do a Py_INCREF of its own explains the need for the Py_INCREF added in commit 4c966d920, so we can improve the comment about that; also, this means we really want to do that before not after the PyModule_AddObject call.

  The missing Py_INCREF in PLy_generate_spi_exceptions was reported and diagnosed by Rafa de la Torre; the other fixes by me. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/CA+Fz15kR1OXZv43mDrJb3XY+1MuQYWhx5kx3ea6BRKQp6ezGkg@mail.gmail.com

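  A sketch of the refcount-safe pattern described above, using the Python C API directly (illustrative names; not PL/Python's actual code). PyModule_AddObject steals a reference on success, so any pointer kept on the C side needs its own Py_INCREF, taken before the add.

      #include <Python.h>

      static PyObject *
      add_exception_to_module(PyObject *mod, const char *qualname, const char *attrname)
      {
          PyObject *exc = PyErr_NewException(qualname, NULL, NULL);

          if (exc == NULL)
              return NULL;            /* don't silently ignore a failed creation */

          Py_INCREF(exc);             /* extra reference for the pointer we keep in C */
          if (PyModule_AddObject(mod, attrname, exc) < 0)
          {
              /* the module did not take ownership; drop both references */
              Py_DECREF(exc);
              Py_DECREF(exc);
              return NULL;
          }
          return exc;                 /* caller stores this in a global or hash table */
      }
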
* Fix reporting of column typmods for multi-row VALUES constructs.  (Tom Lane, 2016-12-09)
  expandRTE() and get_rte_attribute_type() reported the exprType() and exprTypmod() values of the expressions in the first row of the VALUES as being the column type/typmod returned by the VALUES RTE. That's fine for the data type, since we coerce all expressions in a column to have the same common type. But we don't coerce them to have a common typmod, so it was possible for rows after the first one to return values that violate the claimed column typmod. This leads to the incorrect result seen in bug #14448 from Hassan Mahmood, as well as some other corner-case misbehaviors.

  The desired behavior is the same as we use in other type-unification cases: report the common typmod if there is one, but otherwise return -1 indicating no particular constraint.

  We fixed this in HEAD by deriving the typmods during transformValuesClause and storing them in the RTE, but that's not a feasible solution in the back branches. Instead, just use a brute-force approach of determining the correct common typmod during expandRTE() and get_rte_attribute_type(). Simple testing says that that doesn't really cost much, at least not in common cases where expandRTE() is only used once per query. It turns out that get_rte_attribute_type() is typically never used at all on VALUES RTEs, so the inefficiency there is of no great concern.

  Report: https://postgr.es/m/20161205143037.4377.60754@wrigleys.postgresql.org
  Discussion: https://postgr.es/m/27429.1480968538@sss.pgh.pa.us

* Fix crasher bug in array_position(s)  (Alvaro Herrera, 2016-12-09)
  array_position and its cousin array_positions were caching the element type equality function's FmgrInfo without being careful enough to put it in a long-lived context. This is obviously broken, but it didn't matter in most cases; only when using arrays of records (involving record_eq) does it become a problem. The fix is to ensure that the type's equality function's FmgrInfo is cached in array_position's flinfo->fn_mcxt rather than the current memory context.

  Apart from record types, the only other case that seems complex enough to possibly cause the same problem is range types. I didn't find a way to reproduce the problem with those, so I only include the test case submitted with the bug report as regression test.

  Bug report and patch: Junseok Yang

  Discussion: https://postgr.es/m/CAE+byMupUURYiZ6bKYgMZb9pgV1CYAijJGqWj-90W=nS7uEOeA@mail.gmail.com

  Backpatch to 9.5, where array_position appeared.

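  A simplified backend-side sketch of the caching pattern this fix implies (the helper is assumed, not the committed code): look up the element type's equality function once and build its FmgrInfo in the calling function's fn_mcxt so it survives across calls.

      #include "postgres.h"
      #include "fmgr.h"
      #include "utils/typcache.h"

      /* Cache the equality FmgrInfo for element_type in a long-lived context. */
      static void
      cache_element_eq_func(FunctionCallInfo fcinfo, Oid element_type, FmgrInfo *cached)
      {
          TypeCacheEntry *typentry = lookup_type_cache(element_type,
                                                       TYPECACHE_EQ_OPR_FINFO);

          fmgr_info_cxt(typentry->eq_opr_finfo.fn_oid, cached,
                        fcinfo->flinfo->fn_mcxt);   /* not CurrentMemoryContext */
      }
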
* Fix bogus comment.  (Robert Haas, 2016-12-08)
  Commit 4212cb73262bbdd164727beffa4c4744b4ead92d rendered a comment in execMain.c incorrect. Per complaint from Tom Lane, repair.

  Patch from Amit Kapila, per wording suggested by Tom Lane and me.

* Log the creation of an init fork unconditionally.  (Robert Haas, 2016-12-08)
  Previously, it was thought that this only needed to be done for the benefit of possible standbys, so wal_level = minimal skipped it. But that's not safe, because during crash recovery we might replay an XLOG_DBASE_CREATE or XLOG_TBLSPC_CREATE record which recursively removes the directory that contains the new init fork. So log it always.

  The user-visible effect of this bug is that if you create a database or tablespace, then create an unlogged table, then crash without checkpointing, then restart, accessing the table will fail, because it won't have been properly reset. This commit fixes that.

  Michael Paquier, per a report from Konstantin Knizhnik. Wording of the comments per a suggestion from me.

* Restore psql's SIGPIPE setting if popen() fails.  (Tom Lane, 2016-12-07)
  Ancient oversight in PageOutput(): if popen() fails, we'd better reset the SIGPIPE handler before returning stdout, because ClosePager() won't.

  Noticed while fixing the empty-PAGER issue.

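  A hedged sketch of the pattern being fixed (illustrative; not psql's actual PageOutput() code): SIGPIPE is ignored for the pager's benefit, so the handler must be put back if popen() fails and output falls back to stdout.

      #include <stdio.h>
      #include <signal.h>

      static FILE *
      open_pager_or_stdout(const char *pager_cmd)
      {
          FILE *pagerpipe;

          signal(SIGPIPE, SIG_IGN);       /* don't die if the pager exits early */
          pagerpipe = popen(pager_cmd, "w");
          if (pagerpipe != NULL)
              return pagerpipe;

          signal(SIGPIPE, SIG_DFL);       /* the reset this commit adds */
          return stdout;
      }
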
* Handle empty or all-blank PAGER setting more sanely in psql.  (Tom Lane, 2016-12-07)
  If the PAGER environment variable is set but contains an empty string, psql would pass it to "sh" which would silently exit, causing whatever query output we were printing to vanish entirely. This is quite mystifying; it took a long time for us to figure out that this was the cause of Joseph Brenner's trouble report.

  Rather than allowing that to happen, we should treat this as another way to specify "no pager". (We could alternatively treat it as selecting the default pager, but it seems more likely that the former is what the user meant to achieve by setting PAGER this way.)

  Nonempty, but all-white-space, PAGER values have the same behavior, and it's pretty easy to test for that, so let's handle that case the same way.

  Most other cases of faulty PAGER values will result in the shell printing some kind of complaint to stderr, which should be enough to diagnose the problem, so we don't need to work harder than this. (Note that there's been an intentional decision not to be very chatty about apparent failure returns from the pager process, since that may happen if, eg, the user quits the pager with control-C or some such. I'd just as soon not start splitting hairs about which exit codes might merit making our own report.)

  libpq's old PQprint() function was already on board with ignoring empty PAGER values, but for consistency, make it ignore all-white-space values as well.

  It's been like this a long time, so back-patch to all supported branches.

  Discussion: https://postgr.es/m/CAFfgvXWLOE2novHzYjmQK8-J6TmHz42G8f3X0SORM44+stUGmw@mail.gmail.com

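  A minimal sketch of the "unset, empty, or all-whitespace means no pager" test (illustrative; psql's real check also considers terminal state and other settings):

      #include <stdlib.h>
      #include <string.h>
      #include <stdbool.h>

      static bool
      pager_setting_usable(void)
      {
          const char *pagerprog = getenv("PAGER");

          /* strspn() == strlen() means the value is empty or all whitespace */
          return pagerprog != NULL &&
                 strspn(pagerprog, " \t\r\n") != strlen(pagerprog);
      }
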
* Fix unsafe assumption that struct timeval.tv_sec is a "long".  (Tom Lane, 2016-12-06)
  It typically is a "long", but it seems possible that on some platforms it wouldn't be. In any case, this silences a compiler warning on OpenBSD (cf buildfarm member curculio).

  While at it, use snprintf not sprintf. This format string couldn't possibly overrun the supplied buffer, but that doesn't seem like a good reason not to use the safer style.

  Oversight in commit f828654e1. Back-patch to 9.6 where that came in.

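  The portable idiom the commit points at, as a small sketch: cast tv_sec (and tv_usec) explicitly rather than assuming their width, and prefer snprintf over sprintf.

      #include <stdio.h>
      #include <sys/time.h>

      static void
      format_elapsed(char *buf, size_t buflen, const struct timeval *tv)
      {
          snprintf(buf, buflen, "%ld.%06ld s",
                   (long) tv->tv_sec, (long) tv->tv_usec);
      }
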
* Fix interaction of parallel query with prepared statements.  (Robert Haas, 2016-12-06)
  Previously, a prepared statement created via a Parse message could get a parallel plan, but one created with a PREPARE statement could not. This state of affairs was due to confusion on my (rhaas) part: I erroneously believed that a CREATE TABLE .. AS EXECUTE statement could only be performed with a prepared statement by PREPARE, but in fact one created by a Prepare message works just as well. Therefore, it makes no sense to allow parallel query in one case but not the other.

  To fix, allow parallel query with all prepared statements, but run the parallel plan serially (i.e. without workers) in the case of CREATE TABLE .. AS EXECUTE. Also, document this.

  Amit Kapila and Tobias Bussman, plus an extra sentence of documentation by me.

* Revert "Permit dump/reload of not-too-large >1GB tuples"  (Alvaro Herrera, 2016-12-06)
  This reverts commit 4e01ecae98275298c680c92fdba62daf603dc98e.

  Per Tom Lane, changing the definition of StringInfoData amounts to an ABI break, which is unacceptable in back branches.

* Ensure gatherstate->nextreader is properly initialized.  (Robert Haas, 2016-12-05)
  The previous code worked OK as long as a Gather node was never rescanned, or if it was rescanned, as long as it got at least as many workers on rescan as it had originally. But if the number of workers ever decreased on a rescan, then it could crash.

  Andreas Seltenreich

* Fix typo in docs.  (Fujii Masao, 2016-12-05)
  Reported-by: Darko Prelec

* Fix incorrect output from gin_desc().  (Fujii Masao, 2016-12-05)
  Previously gin_desc() displayed incorrect output "unknown action 0" for XLOG_GIN_INSERT and XLOG_GIN_VACUUM_DATA_LEAF_PAGE records with valid actions. The cause of this problem was that gin_desc() wrongly used XLogRecGetData() to extract data from those records. Since they were registered by XLogRegisterBufData(), gin_desc() should have used XLogRecGetBlockData(), instead, like gin_redo().

  Also there were other differences about how to treat XLOG_GIN_INSERT record between gin_desc() and gin_redo(). This commit fixes the gin_desc() routine so that it treats those records in the same way as gin_redo().

  Back-patch to 9.5 where the WAL record format was revamped and XLogRegisterBufData() was added.

  Reported-By: Andres Freund
  Reviewed-By: Tom Lane
  Discussion: <20160509194645.7lewnpw647zegx2m@alap3.anarazel.de>

* Don't mess up pstate->p_next_resno in transformOnConflictClause().  (Tom Lane, 2016-12-04)
  transformOnConflictClause incremented p_next_resno while generating the phony targetlist for the EXCLUDED pseudo-rel. Then that field got incremented some more during transformTargetList, possibly leading to free_parsestate concluding that we'd overrun the allowed length of a tlist, as reported by Justin Pryzby.

  We could fix this by resetting p_next_resno to 1 after using it for the EXCLUDED pseudo-rel tlist, but it seems easier and less coupled to other places if we just don't use that field at all in this loop. (Note that this doesn't change anything about the resnos that end up appearing in the main target list, because those are all replaced with target-column numbers by updateTargetListEntry.)

  In passing, fix incorrect type OID assigned to the whole-row Var for "EXCLUDED.*" (somehow this escaped having any bad consequences so far, but it's certainly wrong); remove useless assignment to var->location; pstrdup the column names in case of a relcache flush; and improve nearby comments.

  Back-patch to 9.5 where ON CONFLICT was introduced.

  Report: https://postgr.es/m/20161204163237.GA8030@telsasoft.com

* Make pgwin32_putenv() visit debug CRTs.  (Noah Misch, 2016-12-03)
  This has no effect in the most conventional case, where no relevant DLL uses a debug build. For an example where it does matter, given a debug build of MIT Kerberos, the krb_server_keyfile parameter usually had no effect. Since nobody wants a Heisenbug, back-patch to 9.2 (all supported versions).

  Christian Ullrich, reviewed by Michael Paquier.

* Remove wrong CloseHandle() call.  (Noah Misch, 2016-12-03)
  In accordance with its own documentation, invoke CloseHandle() only when directed in the documentation for the function that furnished the handle. GetModuleHandle() does not so direct. We have been issuing this call only in the rare event that a CRT DLL contains no "_putenv" symbol, so lack of bug reports is uninformative. Back-patch to 9.2 (all supported versions).

  Christian Ullrich, reviewed by Michael Paquier.

* Refine win32env.c cosmetics.  (Noah Misch, 2016-12-03)
  Replace use of plain 0 as a null pointer constant. In comments, update terminology and lessen redundancy. Back-patch to 9.2 (all supported versions) for the convenience of back-patching the next two commits.

  Christian Ullrich and Noah Misch, reviewed (in earlier versions) by Michael Paquier.

* Permit dump/reload of not-too-large >1GB tuples  (Alvaro Herrera, 2016-12-02)
  Our documentation states that our maximum field size is 1 GB, and that our maximum row size is 1.6 TB. However, while this might be attainable in theory with enough contortions, it is not workable in practice; for starters, pg_dump fails to dump tables containing rows larger than 1 GB, even if individual columns are well below the limit; and even if one does manage to manufacture a dump file containing a row that large, the server refuses to load it anyway.

  This commit enables dumping and reloading of such tuples, provided two conditions are met:

  1. no single column is larger than 1 GB (in output size -- for bytea this includes the formatting overhead)
  2. the whole row is not larger than 2 GB

  There are three related changes to enable this:

  a. StringInfo's API now has two additional functions that allow creating a string that grows beyond the typical 1GB limit (a "long" string). ABI compatibility is maintained. We still limit these strings to 2 GB, though, for reasons explained below.

  b. COPY now uses long StringInfos, so that pg_dump doesn't choke trying to emit rows longer than 1GB.

  c. heap_form_tuple now uses the MCXT_ALLOW_HUGE flag in its allocation for the input tuple, which means that large tuples are accepted on input (see the sketch after this entry). Note that at this point we do not apply any further limit to the input tuple size.

  The main reason to limit to 2 GB is that the FE/BE protocol uses 32 bit length words to describe each row; and because the documentation is ambiguous on its signedness and libpq does consider it signed, we cannot use the highest-order bit. Additionally, the StringInfo API uses "int" (which is 4 bytes wide in most platforms) in many places, so we'd need to change that API too in order to improve, which has lots of fallout.

  Backpatch to 9.5, which is the oldest that has MemoryContextAllocExtended, a necessary piece of infrastructure. We could apply to 9.4 with very minimal additional effort, but any further than that would require backpatching "huge" allocations too.

  This is the largest set of changes we could find that can be back-patched without breaking compatibility with existing systems. Fixing a bigger set of problems (for example, dumping tuples bigger than 2GB, or dumping fields bigger than 1GB) would require changing the FE/BE protocol and/or changing the StringInfo API in an ABI-incompatible way, neither of which would be back-patchable.

  Authors: Daniel Vérité, Álvaro Herrera
  Reviewed by: Tomas Vondra
  Discussion: https://postgr.es/m/20160229183023.GA286012@alvherre.pgsql

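  A backend-side sketch of the allocation idea in point (c): MemoryContextAllocExtended() with MCXT_ALLOW_HUGE lets an allocation exceed the normal palloc limit of roughly 1 GB. This is an assumed illustration of the mechanism, not the committed heap_form_tuple() change.

      #include "postgres.h"
      #include "utils/memutils.h"

      /* Allocate a buffer that may legitimately exceed the 1 GB palloc cap. */
      static void *
      alloc_possibly_huge_tuple_buffer(Size len)
      {
          return MemoryContextAllocExtended(CurrentMemoryContext, len,
                                            MCXT_ALLOW_HUGE);
      }
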
* Doc: improve description of trim() and related functions.  (Tom Lane, 2016-11-30)
  Per bug #14441 from Mark Pether, the documentation could be misread, mainly because some of the examples failed to show what happens with a multicharacter "characters to trim" string. Also, while the text description in most of these entries was fairly clear that the "characters" argument is a set of characters not a substring to match, some of them used variant wording that was a bit less clear. trim() itself suffered from both deficiencies and was thus pretty misinterpretable.

  Also fix failure to explain which of LEADING/TRAILING/BOTH is the default.

  Discussion: https://postgr.es/m/20161130011710.6539.53657@wrigleys.postgresql.org

* Fix bogus handling of JOIN_UNIQUE_OUTER/INNER cases for parallel joins.  (Tom Lane, 2016-11-29)
  consider_parallel_nestloop passed the wrong jointype down to its subroutines for JOIN_UNIQUE_INNER cases (it should pass JOIN_INNER), and it thought that it could pass paths other than innerrel->cheapest_total_path to create_unique_path, which create_unique_path is not on board with. These bugs would lead to assertion failures or other errors, suggesting that this code path hasn't been tested much.

  hash_inner_and_outer's code for parallel join effectively treated both JOIN_UNIQUE_OUTER and JOIN_UNIQUE_INNER the same as JOIN_INNER (for different reasons :-(), leading to incorrect plans that treated a semijoin as if it were a plain join.

  Michael Day submitted a test case demonstrating that hash_inner_and_outer failed for JOIN_UNIQUE_OUTER, and I found the other cases through code review.

  Report: https://postgr.es/m/D0E8A029-D1AC-42E8-979A-5DE4A77E4413@rcmail.com

* Fix incorrect variable type in set_rel_consider_parallel().  (Tom Lane, 2016-11-29)
  func_parallel() returns char not Oid. Harmless, but still wrong.

  Amit Langote

* Clarify pg_dump -b documentation  (Stephen Frost, 2016-11-29)
  The documentation around the -b/--blobs option to pg_dump seemed to imply that it might be possible to add blobs to a "schema-only" dump or similar. Clarify that blobs are data and therefore will only be included in dumps where data is being included, even when -b is used to request blobs be included.

  The -b option has been around since before 9.2, so back-patch to all supported branches.

  Discussion: https://postgr.es/m/20161119173316.GA13284@tamriel.snowman.net

* Correct psql documentation example  (Stephen Frost, 2016-11-29)
  An example in the psql documentation used a field name that differs from what the command actually produces.

  Pointed out by Fabien COELHO

  Back-patch to 9.6 where the example was added.

  Discussion: https://postgr.es/m/alpine.DEB.2.20.1611291349400.19314@lancre

* Fix get_relation_info name typo'ed in a comment  (Alvaro Herrera, 2016-11-28)
  Plus add a missing comment about this in get_relation_info itself.

  Author: Amit Langote
  Discussion: https://postgr.es/m/e46c0569-0449-afa0-e2fe-f3776e4b3fd5@lab.ntt.co.jp