path: root/src/backend
...
* BRIN revmap pages are not standard pages ... (Alvaro Herrera, 2017-01-09)

... and therefore we ought not to tell XLogRegisterBuffer the opposite, when writing XLog for a brin update that moves the index tuple to a different page. Otherwise, xlog insertion would try to "compress the hole" when producing a full-page image for it; but since we don't update pd_lower/upper, the hole covers the whole page. On WAL replay, the revmap page becomes empty and so the entire portion of the index is useless and needs to be recomputed.

This is low-probability: a BRIN update only moves an index tuple to a different page when the summary tuple is larger than the existing one, which doesn't happen with fixed-width datatypes. Also, the revmap page must be first after a checkpoint.

Report and patch: Kuntal Ghosh
Bug is alleged to have been detected by a WAL-consistency-checking tool.
Discussion: https://postgr.es/m/CAGz5QCJ=00UQjScSEFbV=0qO5ShTZB9WWz_Fm7+Wd83zPs9Geg@mail.gmail.com

I posted a test case demonstrating the problem, but I'm refraining from adding it to the test suite; if the WAL consistency tool makes it in, that will be a better way to catch this from regressing. (We should definitely have something that causes not-same-page updates, though.)

* Invalidate cached plans on FDW option changes. (Tom Lane, 2017-01-06)

This fixes problems where a plan must change but fails to do so, as seen in a bug report from Rajkumar Raghuwanshi.

For ALTER FOREIGN TABLE OPTIONS, do this through the standard method of forcing a relcache flush on the table. For ALTER FOREIGN DATA WRAPPER and ALTER SERVER, just flush the whole plan cache on any change in pg_foreign_data_wrapper or pg_foreign_server. That matches the way we handle some other low-probability cases such as opclass changes, and it's unclear that the case arises often enough to be worth working harder. Besides, that gives a patch that is simple enough to back-patch with confidence.

Back-patch to 9.3. In principle we could apply the code change to 9.2 as well, but (a) we lack postgres_fdw to test it with, (b) it's doubtful that anyone is doing anything exciting enough with FDWs that far back to need this desperately, and (c) the patch doesn't apply cleanly.

Patch originally by Amit Langote, reviewed by Etsuro Fujita and Ashutosh Bapat, who each contributed substantial changes as well.

Discussion: https://postgr.es/m/CAKcux6m5cA6rRPTKkqVdJ-R=KKDfe35Q_ZuUqxDSV_4hwga=og@mail.gmail.com

* Fix possible leak of semaphore count. (Robert Haas, 2017-01-05)

Commit 4aec49899e5782247e134f94ce1c6ee926f88e1c reorganized the order of operations here so that we no longer increment the number of "extra waits" before locking the semaphore, but it did not change the starting value of extraWaits from 0 to -1 to compensate. In the worst case, this could leak a semaphore count, but that seems to be unlikely in practice.

Discussion: http://postgr.es/m/CAA4eK1JyVqXiMba+-a589Rk0pyHsyKkGxeumVKjU6Y74hdrVLQ@mail.gmail.com

Amit Kapila, per an off-list report by Dilip Kumar. Reviewed by me.

* Fix handling of empty arrays in array_fill(). (Tom Lane, 2017-01-05)

array_fill(..., array[0]) produced an empty array, which is probably what users expect, but it was a one-dimensional zero-length array which is not our standard representation of empty arrays. Also, for no very good reason, it rejected empty input arrays; that case should be allowed and produce an empty output array.

In passing, remove the restriction that the input array(s) have lower bound 1. That seems rather pointless, and it would have needed extra complexity to make the check deal with empty input arrays.

Per bug #14487 from Andrew Gierth. It's been broken all along, so back-patch to all supported branches.

Discussion: https://postgr.es/m/20170105152156.10135.64195@wrigleys.postgresql.org

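A short SQL sketch of the corrected behavior (expected results, per the description above, shown in comments):

    -- array[0] now yields the standard empty-array representation:
    SELECT array_fill(7, array[0]) = '{}'::int[];   -- true
    -- an empty dimensions array is now accepted, producing an empty array:
    SELECT array_fill(7, '{}'::int[]);              -- {}
    -- input arrays with lower bounds other than 1 are no longer rejected:
    SELECT array_fill(7, '[0:0]={2}'::int[]);       -- {7,7}
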
* Handle OID column inheritance correctly in ALTER TABLE ... INHERIT. (Tom Lane, 2017-01-04)

Inheritance operations must treat the OID column, if any, much like regular user columns. But MergeAttributesIntoExisting() neglected to do that, leading to weird results after a table with OIDs is associated with a parent with OIDs via ALTER TABLE ... INHERIT.

Report and patch by Amit Langote, reviewed by Ashutosh Bapat, some adjustments by me. It's been broken all along, so back-patch to all supported branches.

Discussion: https://postgr.es/m/cb13cfe7-a48c-5720-c383-bb843ab28298@lab.ntt.co.jp

* Silence compiler warnings (Joe Conway, 2017-01-02)

Rearrange a bit of code to ensure that 'mode' in LWLockRelease is obviously always set, which seems a bit cleaner and avoids a compiler warning (thanks to Robert for the suggestion!).

Back-patch to 9.5 where the warning is first seen.

Author: Stephen Frost
Discussion: https://postgr.es/m/20161129152102.GR13284%40tamriel.snowman.net

* Silence compiler warnings (Joe Conway, 2017-01-02)

In GetCachedPlan(), initialize 'plan' to silence a compiler warning, but also add an Assert() to make sure we don't ever actually fall through with 'plan' still being set to NULL, since we are about to dereference it.

Back-patch to 9.2.

Author: Stephen Frost
Discussion: https://postgr.es/m/20161129152102.GR13284%40tamriel.snowman.net

* Fix interval_transform so it doesn't throw away non-no-op casts. (Tom Lane, 2016-12-27)

interval_transform() contained two separate bugs that caused it to sometimes mistakenly decide that a cast from interval to restricted interval is a no-op and throw it away.

First, it was wrong to rely on dt.h's field type macros to have an ordering consistent with the field's significance; in one case they do not. This led to mistakenly treating YEAR as less significant than MONTH, so that a cast from INTERVAL MONTH to INTERVAL YEAR was incorrectly discarded.

Second, fls(1<<k) produces k+1 not k, so comparing its output directly to SECOND was wrong. This led to supposing that a cast to INTERVAL MINUTE was really a cast to INTERVAL SECOND and so could be discarded.

To fix, get rid of the use of fls(), and make a function based on intervaltypmodout to produce a field ID code adapted to the need here.

Per bug #14479 from Piotr Stefaniak. Back-patch to 9.2 where transform functions were introduced, because this code was born broken.

Discussion: https://postgr.es/m/20161227172307.10135.7747@wrigleys.postgresql.org

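Roughly the kind of casts affected, assuming the usual typmod-truncation semantics (expected results in comments):

    -- YEAR was treated as less significant than MONTH, so this cast was dropped:
    SELECT (INTERVAL '14 months')::interval month::interval year;
    -- expected: 1 year (the buggy no-op returned 1 year 2 mons)
    -- a cast to MINUTE was mistaken for a cast to SECOND and dropped:
    SELECT (INTERVAL '1 min 30 sec')::interval second::interval minute;
    -- expected: 00:01:00 (the buggy no-op returned 00:01:30)
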
* Remove triggerable Assert in hashname(). (Tom Lane, 2016-12-26)

hashname() asserted that the key string it is given is shorter than NAMEDATALEN. That should surely always be true if the input is in fact a regular value of type "name". However, for reasons of coding convenience, we allow plain old C strings to be treated as "name" values in many places. Some SQL functions accept arbitrary "text" inputs, convert them to C strings, and pass them otherwise-untransformed to syscache lookups for name columns, allowing an overlength input value to trigger hashname's Assert.

This would be a DOS problem, except that it only happens in assert-enabled builds which aren't recommended for production. In a production build, you'll just get a name lookup error, since regardless of the hash value computed by hashname, the later equality comparison checks can't match. Likewise, if the catalog lookup is done by seqscan or indexscan searches, there will just be a lookup error, since the name comparison functions don't contain any similar length checks, and will see an overlength input as unequal to any stored entry.

After discussion we concluded that we should simply remove this Assert. It's inessential to hashname's own functionality, and having such an assertion in only some paths for name lookup is more of a foot-gun than a useful check. There may or may not be a case for the affected callers to do something other than let the name lookup fail, but we'll consider that separately; in any case we probably don't want to change such behavior in the back branches.

Per report from Tushar Ahuja. Back-patch to all supported branches.

Report: https://postgr.es/m/7d0809ee-6f25-c9d6-8e74-5b2967830d49@enterprisedb.com
Discussion: https://postgr.es/m/17691.1482523168@sss.pgh.pa.us

* Use TSConfigRelationId in AlterTSConfiguration() (Stephen Frost, 2016-12-22)

When we are altering a text search configuration, we are getting the tuple from pg_ts_config and using its OID, so use TSConfigRelationId when invoking any post-alter hooks and setting the object address.

Further, in the functions called from AlterTSConfiguration(), we're saving information about the command via EventTriggerCollectAlterTSConfig(), so we should be setting commandCollected to true. Also add a regression test to test_ddl_deparse for ALTER TEXT SEARCH CONFIGURATION.

Author: Artur Zakirov, a few additional comments by me
Discussion: https://www.postgresql.org/message-id/57a71eba-f2c7-e7fd-6fc0-2126ec0b39bd%40postgrespro.ru

Back-patch the fix for the InvokeObjectPostAlterHook() call to 9.3 where it was introduced, and the fix for the ObjectAddressSet() call and setting commandCollected to true to 9.5 where those changes to ProcessUtilitySlow() were introduced.

* Fix CREATE TABLE ... LIKE ... WITH OIDS. (Tom Lane, 2016-12-22)

Having a WITH OIDS specification should result in the creation of an OID column, but commit b943f502b broke that in the case that there were LIKE tables without OIDS. Commentary in that patch makes it look like this was intentional, but if so it was based on a faulty reading of what inheritance does: the parent tables can add an OID column, but they can't subtract one. AFAICS, the behavior ought to be that you get an OID column if any of the inherited tables, LIKE tables, or WITH clause ask for one.

Also, revert that patch's unnecessary split of transformCreateStmt's loop over the tableElts list into two passes. That seems to have been based on a misunderstanding as well: we already have two-pass processing here, we don't need three passes.

Per bug #14474 from Jeff Dafoe. Back-patch to 9.6 where the misbehavior was introduced.

Report: https://postgr.es/m/20161222145304.25620.47445@wrigleys.postgresql.org

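A minimal sketch of the fixed case (table names hypothetical):

    CREATE TABLE src (f1 int);             -- no OIDs
    CREATE TABLE dst (LIKE src) WITH OIDS;
    SELECT oid FROM dst;                   -- before the fix: ERROR, no oid column
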
* Fix handling of expanded objects in CoerceToDomain and CASE execution. (Tom Lane, 2016-12-22)

When the input value to a CoerceToDomain expression node is a read-write expanded datum, we should pass a read-only pointer to any domain CHECK expressions and then return the original read-write pointer as the expression result. Previously we were blindly passing the same pointer to all the consumers of the value, making it possible for a function in CHECK to modify or even delete the expanded value. (Since a plpgsql function will absorb a passed-in read-write expanded array as a local variable value, it will in fact delete the value on exit.)

A similar hazard of passing the same read-write pointer to multiple consumers exists in domain_check() and in ExecEvalCase, so fix those too.

The fix requires adding MakeExpandedObjectReadOnly calls at the appropriate places, which is simple enough except that we need to get the data type's typlen from somewhere. For the domain cases, solve this by redefining DomainConstraintRef.tcache as okay for callers to access; there wasn't any reason for the original convention against that, other than not wanting the API of typcache.c to be any wider than it had to be. For CASE, there's no good solution except to add a syscache lookup during executor start.

Per bug #14472 from Marcos Castedo. Back-patch to 9.5 where expanded values were introduced.

Discussion: https://postgr.es/m/15225.1482431619@sss.pgh.pa.us

* Fix broken error check in _hash_doinsert. (Robert Haas, 2016-12-22)

You can't just cast a HashMetaPage to a Page, because the meta page data is stored after the page header, not at offset 0. Fortunately, this didn't break anything because it happens to find hashm_bsize at the offset at which it expects to find pd_pagesize_version, and the values are close enough to the same that this works out. Still, it's a bug, so back-patch to all supported versions.

Mithun Cy, revised a bit by me.

* Fix detection of unfinished Unicode surrogate pair at end of string. (Tom Lane, 2016-12-21)

The U&'...' and U&"..." syntaxes silently discarded a surrogate pair start (that is, a code between U+D800 and U+DBFF) if it occurred at the very end of the string. This seems like an obvious oversight, since we throw an error for every other invalid combination of surrogate characters, including the very same situation in E'...' syntax.

This has been wrong since the pair processing was added (in 9.0), so back-patch to all supported branches.

Discussion: https://postgr.es/m/19113.1482337898@sss.pgh.pa.us

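For illustration (the U& syntax requires standard_conforming_strings), a high-surrogate code left dangling at the very end of the string:

    SELECT U&'\D83D';   -- previously silently dropped the trailing code;
                        -- now raises "invalid Unicode surrogate pair"
    SELECT E'\uD83D';   -- the E'...' syntax already rejected this case
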
* Fix strange behavior (and possible crashes) in full text phrase search. (Tom Lane, 2016-12-21)

In an attempt to simplify the tsquery matching engine, the original phrase search patch invented rewrite rules that would rearrange a tsquery so that no AND/OR/NOT operator appeared below a PHRASE operator. But this approach had numerous problems. The rearrangement step was missed by ts_rewrite (and perhaps other places), allowing tsqueries to be created that would cause Assert failures or perhaps crashes at execution, as reported by Andreas Seltenreich. The rewrite rules effectively defined semantics for operators underneath PHRASE that were buggy, or at least unintuitive. And because rewriting was done in tsqueryin() rather than at execution, the rearrangement was user-visible, which is not very desirable --- for example, it might cause unexpected matches or failures to match in ts_rewrite.

As a somewhat independent problem, the behavior of nested PHRASE operators was only sane for left-deep trees; queries like "x <-> (y <-> z)" did not behave intuitively at all.

To fix, get rid of the rewrite logic altogether, and instead teach the tsquery execution engine to manage AND/OR/NOT below a PHRASE operator by explicitly computing the match location(s) and match widths for these operators. This requires introducing some additional fields into the publicly visible ExecPhraseData struct; but since there's no way for third-party code to pass such a struct to TS_phrase_execute, it shouldn't create an ABI problem as long as we don't move the offsets of the existing fields.

Another related problem was that index searches supposed that "!x <-> y" could be lossily approximated as "!x & y", which isn't correct because the latter will reject, say, "x q y" which the query itself accepts. This required some tweaking in TS_execute_ternary along with the main tsquery engine.

Back-patch to 9.6 where phrase operators were introduced. While this could be argued to change behavior more than we'd like in a stable branch, we have to do something about the crash hazards and index-vs-seqscan inconsistency, and it doesn't seem desirable to let the unintuitive behaviors induced by the rewriting implementation stand as precedent.

Discussion: https://postgr.es/m/28215.1481999808@sss.pgh.pa.us
Discussion: https://postgr.es/m/26706.1482087250@sss.pgh.pa.us

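A sketch of the index-lossiness point using the example from above (simple text search configuration assumed):

    -- 'x q y' must satisfy "!x <-> y" (the lexeme before "y" is "q", not "x"),
    -- but the old approximation "!x & y" would reject it:
    SELECT to_tsvector('simple', 'x q y') @@ '!x <-> y'::tsquery;          -- true
    -- right-deep phrase nesting, previously not sane:
    SELECT to_tsvector('simple', 'x y z') @@ 'x <-> (y <-> z)'::tsquery;   -- true
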
* Fix order of operations in CREATE OR REPLACE VIEW. (Dean Rasheed, 2016-12-21)

When CREATE OR REPLACE VIEW acts on an existing view, don't update the view options until after the view query has been updated.

This is necessary in the case where CREATE OR REPLACE VIEW is used on an existing view that is not updatable, and the new view is updatable and specifies the WITH CHECK OPTION. In this case, attempting to apply the new options to the view before updating its query fails, because the options are applied using the ALTER TABLE infrastructure which checks that WITH CHECK OPTION is only applied to an updatable view.

If new columns are being added to the view, that is also done using the ALTER TABLE infrastructure, but it is important that that still be done before updating the view query, because the rules system checks that the query columns match those on the view relation. Added a comment to explain that, in case someone is tempted to move that to where the view options are now being set.

Back-patch to 9.4 where WITH CHECK OPTION was added.

Report: https://postgr.es/m/CAEZATCUp%3Dz%3Ds4SzZjr14bfct_bdJNwMPi-gFi3Xc5k1ntbsAgQ%40mail.gmail.com

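A sketch of the previously-failing sequence (names hypothetical):

    CREATE TABLE t (a int);
    CREATE VIEW v AS SELECT DISTINCT a FROM t;   -- not auto-updatable
    -- Applying WITH CHECK OPTION before replacing the query made this fail,
    -- since the check ran against the old, non-updatable view:
    CREATE OR REPLACE VIEW v AS
        SELECT a FROM t WHERE a > 0
        WITH CHECK OPTION;
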
* Fix corner-case bug in WaitEventSetWaitBlock on Windows. (Robert Haas, 2016-12-21)

If we do not reset the FD_READ event, WaitForMultipleObjects won't return it again unless we've meanwhile read from the socket, which is generally true but not guaranteed. WaitEventSetWaitBlock itself may fail to return the event to the caller if the latch is also set, and even if we changed that, the caller isn't obliged to handle all returned events at once. On non-Windows systems, the socket-read event is purely level-triggered, so this issue does not exist. To fix, make Windows reset the event when needed.

This bug was introduced by 98a64d0bd713cb89e61bef6432befc4b7b5da59e, and causes hangs when trying to use the pldebugger extension.

Patch by Amit Kapila. Reported and tested by Ashutosh Sharma, who also provided some analysis. Further analysis by Michael Paquier.

* Fix sharing Agg transition state of DISTINCT or ordered aggs. (Heikki Linnakangas, 2016-12-20)

If a query contained two aggregates that could share the transition value, we would correctly collect the input into a tuplesort only once, but incorrectly run the transition function over the accumulated input twice, in finalize_aggregates(). That caused a crash when we tried to call tuplesort_performsort() on an already-freed NULL tuplesort state.

Backport to 9.6, where sharing of transition state and this bug were introduced.

Analysis by Tom Lane.

Discussion: https://www.postgresql.org/message-id/ac5b0b69-744c-9114-6218-8300ac920e61@iki.fi

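A sketch of the crashing shape, assuming sum(numeric) and avg(numeric) share a transition state as described:

    SELECT sum(DISTINCT x), avg(DISTINCT x)
    FROM (VALUES (1::numeric), (2), (2), (3)) v(x);
    -- expected: 6 and 2.0; previously the shared tuplesort was
    -- sorted (and freed) twice, crashing on the second pass
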
* Fix handling of phrase operator removal while removing tsquery stopwords. (Tom Lane, 2016-12-19)

The distance of a removed phrase operator should propagate up to a parent phrase operator if there is one, but this only worked correctly in left-deep trees. Throwing in a few parentheses confused it completely, as indeed was illustrated by bizarre results in existing regression test cases.

To fix, track unaccounted-for distances that should propagate to the left and to the right of the current node, rather than trying to make it work with only one returned distance.

Also make some adjustments to behave as well as we can for cases of intermixed phrase and regular (AND/OR) operators. I don't think it's possible to be 100% correct for that without a rethinking of the tsquery representation; for example, maybe we should just not drop stopword nodes at all underneath phrase operators. But this is better than it was, and changing tsquery representation wouldn't be safely back-patchable.

While at it, I simplified the API of the clean_fakeval_intree function a bit by getting rid of the "char *result" output parameter; that wasn't doing anything that wasn't redundant with whether the result node is NULL or not, and testing for NULL seems a lot clearer/safer.

This is part of a larger project to fix various infelicities in the phrase-search implementation, but this part seems committable on its own.

Back-patch to 9.6 where phrase operators were introduced.

Discussion: https://postgr.es/m/28215.1481999808@sss.pgh.pa.us
Discussion: https://postgr.es/m/26706.1482087250@sss.pgh.pa.us

* Fix base backup rate limiting in presence of slow I/O (Magnus Hagander, 2016-12-19)

When source I/O on disk was too slow compared to the rate limiting specified, the system could end up with a negative value for sleep that it never got out of, which caused rate limiting to effectively be turned off.

Discussion: https://postgr.es/m/CABUevEy_-e0YvL4ayoX8bH_Ja9w%2BBHoP6jUgdxZuG2nEj3uAfQ%40mail.gmail.com

Analysis by me, patch by Antonin Houska

* Fix FK-based join selectivity estimation for semi/antijoins. (Tom Lane, 2016-12-17)

This case wasn't thought through sufficiently in commit 100340e2d. It's true that the FK proves that every outer row has a match in the inner table, but we forgot that some of the inner rows might be filtered away by WHERE conditions located within the semijoin's RHS.

If the RHS is just one table, we can reasonably take the semijoin selectivity as equal to the fraction of the referenced table's rows that are expected to survive its restriction clauses.

If the RHS is a join, it's not clear how much of the referenced table might get through the join, so fall back to the same rule we were already using for other outer-join cases: use the minimum of the regular per-clause selectivity estimates. This gives the same result as if we hadn't considered the FK at all when there's a single FK column, but it should still help for multi-column FKs, which is the case that 100340e2d is really meant to help with.

Back-patch to 9.6 where the previous commit came in.

Discussion: https://postgr.es/m/16149.1481835103@sss.pgh.pa.us

* Ensure that num_sync is greater than zero in synchronous_standby_names. (Fujii Masao, 2016-12-17)

Previously num_sync could be set to zero, and this setting caused an assertion failure. This means that the multiple-synchronous-standbys code should assume that num_sync is greater than zero. Also, setting num_sync to zero is nonsense because synchronous_standby_names is the mechanism for configuring synchronous replication: if users want transaction commits not to wait for any standbys, synchronous_standby_names should be emptied to disable synchronous replication instead of setting num_sync to zero.

This patch forbids users from setting num_sync to zero in synchronous_standby_names. If zero is specified, an error is raised while the parameter setting is processed.

Back-patch to 9.6 where the multiple synchronous standbys feature was added.

Patch by me. Reviewed by Tom Lane.

Discussion: <CAHGQGwHWB3izc6cXuFLh5kOcAbFXaRhhgwd-X5PeN9TEjxqXwg@mail.gmail.com>

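For illustration (standby names hypothetical):

    ALTER SYSTEM SET synchronous_standby_names = '0 (s1, s2)';  -- now an error
    ALTER SYSTEM SET synchronous_standby_names = '';            -- correct way to disable
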
* Improve documentation around TS_execute(). (Tom Lane, 2016-12-16)

I got frustrated by the lack of commentary in this area, so here is some reverse-engineered documentation, along with minor stylistic cleanup. No code changes more significant than removal of unused variables.

Back-patch to 9.6, not because that's useful in itself, but because we have some bugs to fix in phrase search and this would cause merge failures if it's only in HEAD.

* Fix off-by-one in memory allocation for quote_literal_cstr(). (Heikki Linnakangas, 2016-12-16)

The calculation didn't take into account the NULL terminator. That led to overwriting the palloc'd buffer by one byte, if the input consists entirely of backslashes. For example "format('%L', E'\\')".

Fixes bug #14468. Backpatch to all supported versions.

Report: https://www.postgresql.org/message-id/20161216105001.13334.42819%40wrigleys.postgresql.org

* Prevent planagg.c from failing on queries containing CTEs. (Tom Lane, 2016-12-13)

The existing tests in preprocess_minmax_aggregates() usually prevent it from trying to do anything with queries containing CTEs, but there's an exception: a CTE could be present as a member of an appendrel, if we flattened a UNION ALL that contains CTE references. If it did try to generate an optimized path for a query using a CTE, it failed with "could not find plan for CTE", as reported by Torsten Förtsch.

The proximate cause is an unwise decision in commit 3fc6e2d7f to clear subroot->cte_plan_ids in build_minmax_path(). That left the subroot's cte_plan_ids list out of step with its parse->cteList.

Removing the "subroot->cte_plan_ids = NIL;" assignment is enough to let the case work again, but really it's pretty silly to be expending any cycles at all in this module when there are CTEs: we always treat their outputs as unordered so there's no way for the optimization to win. Hence, also add an early-exit test so we don't waste time like that.

Back-patch to 9.6 where the misbehavior was introduced.

Report: https://postgr.es/m/CAKkG4_=gjY5QiHtqSZyWMwDuTd_CftKoTaCqxjJ7uUz1-Gw=qw@mail.gmail.com

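Roughly the shape of the failing query: a CTE reference hidden inside a flattened UNION ALL beneath MIN():

    WITH cte(x) AS (SELECT 42)
    SELECT min(x)
    FROM (SELECT x FROM cte
          UNION ALL
          SELECT x FROM cte) ss;
    -- previously: ERROR: could not find plan for CTE
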
* Prevent crash when ts_rewrite() replaces a non-top-level subtree with null. (Tom Lane, 2016-12-11)

When ts_rewrite()'s replacement argument is an empty tsquery, it's supposed to simplify any operator nodes whose operand(s) become NULL; but it failed to do that reliably, because dropvoidsubtree() only examined the top level of the result tree. Rather than make a second recursive pass, let's just give the responsibility to dofindsubquery() to simplify while it's doing the main replacement pass. Per report from Andreas Seltenreich.

Artur Zakirov, with some cosmetic changes by me. Back-patch to all supported branches.

Discussion: https://postgr.es/m/8737i01dew.fsf@credativ.de

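A sketch of the non-top-level case, with the simplification the description implies:

    SELECT ts_rewrite('a & (b | c)'::tsquery, 'c'::tsquery, ''::tsquery);
    -- the voided operand should collapse, leaving: 'a' & 'b'
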
* Fix reporting of column typmods for multi-row VALUES constructs. (Tom Lane, 2016-12-09)

expandRTE() and get_rte_attribute_type() reported the exprType() and exprTypmod() values of the expressions in the first row of the VALUES as being the column type/typmod returned by the VALUES RTE. That's fine for the data type, since we coerce all expressions in a column to have the same common type. But we don't coerce them to have a common typmod, so it was possible for rows after the first one to return values that violate the claimed column typmod. This leads to the incorrect result seen in bug #14448 from Hassan Mahmood, as well as some other corner-case misbehaviors.

The desired behavior is the same as we use in other type-unification cases: report the common typmod if there is one, but otherwise return -1 indicating no particular constraint.

We fixed this in HEAD by deriving the typmods during transformValuesClause and storing them in the RTE, but that's not a feasible solution in the back branches. Instead, just use a brute-force approach of determining the correct common typmod during expandRTE() and get_rte_attribute_type(). Simple testing says that that doesn't really cost much, at least not in common cases where expandRTE() is only used once per query. It turns out that get_rte_attribute_type() is typically never used at all on VALUES RTEs, so the inefficiency there is of no great concern.

Report: https://postgr.es/m/20161205143037.4377.60754@wrigleys.postgresql.org
Discussion: https://postgr.es/m/27429.1480968538@sss.pgh.pa.us

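An example of the inconsistency (values hypothetical): the first row stamped the column as varchar(3), yet later rows are not coerced to that typmod:

    SELECT x FROM (VALUES ('abc'::varchar(3)), ('0123456789')) v(x);
    -- the second row returns ten characters; the column typmod is now
    -- reported as -1 (unconstrained) since the rows don't share one
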
* Fix crasher bug in array_position(s) (Alvaro Herrera, 2016-12-09)

array_position and its cousin array_positions were caching the element type's equality-function FmgrInfo without being careful enough to put it in a long-lived context. This is obviously broken, but it didn't matter in most cases; it only becomes a problem when using arrays of records (which involve record_eq). The fix is to ensure that the type's equality function's FmgrInfo is cached in array_position's flinfo->fn_mcxt rather than the current memory context.

Apart from record types, the only other case that seems complex enough to possibly cause the same problem is range types. I didn't find a way to reproduce the problem with those, so I only include the test case submitted with the bug report as a regression test.

Bug report and patch: Junseok Yang
Discussion: https://postgr.es/m/CAE+byMupUURYiZ6bKYgMZb9pgV1CYAijJGqWj-90W=nS7uEOeA@mail.gmail.com

Backpatch to 9.5, where array_position appeared.

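A sketch of the record case (type and values hypothetical):

    CREATE TYPE pair AS (a int, b text);
    SELECT array_position(ARRAY[(1,'x')::pair, (2,'y')::pair], (2,'y')::pair);
    -- expected: 2; record_eq's cached FmgrInfo is what went stale before
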
* Fix bogus comment. (Robert Haas, 2016-12-08)

Commit 4212cb73262bbdd164727beffa4c4744b4ead92d rendered a comment in execMain.c incorrect. Per complaint from Tom Lane, repair.

Patch from Amit Kapila, per wording suggested by Tom Lane and me.

* Log the creation of an init fork unconditionally. (Robert Haas, 2016-12-08)

Previously, it was thought that this only needed to be done for the benefit of possible standbys, so wal_level = minimal skipped it. But that's not safe, because during crash recovery we might replay an XLOG_DBASE_CREATE or XLOG_TBLSPC_CREATE record which recursively removes the directory that contains the new init fork. So log it always.

The user-visible effect of this bug is that if you create a database or tablespace, then create an unlogged table, then crash without checkpointing, then restart, accessing the table will fail, because it won't have been properly reset. This commit fixes that.

Michael Paquier, per a report from Konstantin Knizhnik. Wording of the comments per a suggestion from me.

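A sketch of the failure sequence under wal_level = minimal (names and path hypothetical):

    CREATE TABLESPACE ts LOCATION '/some/path';
    CREATE UNLOGGED TABLE u (a int) TABLESPACE ts;
    -- crash before any checkpoint; recovery replays XLOG_TBLSPC_CREATE,
    -- recreating the directory without the never-logged init fork
    SELECT * FROM u;   -- failed before the fix
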
* Fix unsafe assumption that struct timeval.tv_sec is a "long". (Tom Lane, 2016-12-06)

It typically is a "long", but it seems possible that on some platforms it wouldn't be. In any case, this silences a compiler warning on OpenBSD (cf buildfarm member curculio).

While at it, use snprintf not sprintf. This format string couldn't possibly overrun the supplied buffer, but that doesn't seem like a good reason not to use the safer style.

Oversight in commit f828654e1. Back-patch to 9.6 where that came in.

* Fix interaction of parallel query with prepared statements. (Robert Haas, 2016-12-06)

Previously, a prepared statement created via a Parse message could get a parallel plan, but one created with a PREPARE statement could not. This state of affairs was due to confusion on my (rhaas) part: I erroneously believed that a CREATE TABLE .. AS EXECUTE statement could only be performed with a prepared statement created by PREPARE, but in fact one created by a Prepare message works just as well. Therefore, it makes no sense to allow parallel query in one case but not the other.

To fix, allow parallel query with all prepared statements, but run the parallel plan serially (i.e. without workers) in the case of CREATE TABLE .. AS EXECUTE. Also, document this.

Amit Kapila and Tobias Bussmann, plus an extra sentence of documentation by me.

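For illustration (table names hypothetical):

    PREPARE q AS SELECT count(*) FROM big_table;
    EXPLAIN EXECUTE q;                  -- may now produce a Gather (parallel) plan
    CREATE TABLE copy_t AS EXECUTE q;   -- same plan, but run without workers
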
* Revert "Permit dump/reload of not-too-large >1GB tuples"Alvaro Herrera2016-12-06
| | | | | | This reverts commit 4e01ecae98275298c680c92fdba62daf603dc98e. Per Tom Lane, changing the definition of StringInfoData amounts to an ABI break, which is unacceptable in back branches.
* Ensure gatherstate->nextreader is properly initialized. (Robert Haas, 2016-12-05)

The previous code worked OK as long as a Gather node was never rescanned, or if it was rescanned, as long as it got at least as many workers on rescan as it had originally. But if the number of workers ever decreased on a rescan, then it could crash.

Andreas Seltenreich

* Fix incorrect output from gin_desc(). (Fujii Masao, 2016-12-05)

Previously gin_desc() displayed incorrect output "unknown action 0" for XLOG_GIN_INSERT and XLOG_GIN_VACUUM_DATA_LEAF_PAGE records with valid actions. The cause of this problem was that gin_desc() wrongly used XLogRecGetData() to extract data from those records. Since they were registered by XLogRegisterBufData(), gin_desc() should have used XLogRecGetBlockData(), instead, like gin_redo().

Also there were other differences about how to treat XLOG_GIN_INSERT record between gin_desc() and gin_redo(). This commit fixes gin_desc() routine so that it treats those records in the same way as gin_redo().

Back-patch to 9.5 where the WAL record format was revamped and XLogRegisterBufData() was added.

Reported-By: Andres Freund
Reviewed-By: Tom Lane
Discussion: <20160509194645.7lewnpw647zegx2m@alap3.anarazel.de>

* Don't mess up pstate->p_next_resno in transformOnConflictClause(). (Tom Lane, 2016-12-04)

transformOnConflictClause incremented p_next_resno while generating the phony targetlist for the EXCLUDED pseudo-rel. Then that field got incremented some more during transformTargetList, possibly leading to free_parsestate concluding that we'd overrun the allowed length of a tlist, as reported by Justin Pryzby.

We could fix this by resetting p_next_resno to 1 after using it for the EXCLUDED pseudo-rel tlist, but it seems easier and less coupled to other places if we just don't use that field at all in this loop. (Note that this doesn't change anything about the resnos that end up appearing in the main target list, because those are all replaced with target-column numbers by updateTargetListEntry.)

In passing, fix incorrect type OID assigned to the whole-row Var for "EXCLUDED.*" (somehow this escaped having any bad consequences so far, but it's certainly wrong); remove useless assignment to var->location; pstrdup the column names in case of a relcache flush; and improve nearby comments.

Back-patch to 9.5 where ON CONFLICT was introduced.

Report: https://postgr.es/m/20161204163237.GA8030@telsasoft.com

* Permit dump/reload of not-too-large >1GB tuples (Alvaro Herrera, 2016-12-02)

Our documentation states that our maximum field size is 1 GB, and that our maximum row size is 1.6 TB. However, while this might be attainable in theory with enough contortions, it is not workable in practice; for starters, pg_dump fails to dump tables containing rows larger than 1 GB, even if individual columns are well below the limit; and even if one does manage to manufacture a dump file containing a row that large, the server refuses to load it anyway.

This commit enables dumping and reloading of such tuples, provided two conditions are met:

1. no single column is larger than 1 GB (in output size -- for bytea this includes the formatting overhead)
2. the whole row is not larger than 2 GB

There are three related changes to enable this:

a. StringInfo's API now has two additional functions that allow creating a string that grows beyond the typical 1GB limit (a "long" string). ABI compatibility is maintained. We still limit these strings to 2 GB, though, for reasons explained below.

b. COPY now uses long StringInfos, so that pg_dump doesn't choke trying to emit rows longer than 1GB.

c. heap_form_tuple now uses the MCXT_ALLOW_HUGE flag in its allocation for the input tuple, which means that large tuples are accepted on input. Note that at this point we do not apply any further limit to the input tuple size.

The main reason to limit to 2 GB is that the FE/BE protocol uses 32 bit length words to describe each row; and because the documentation is ambiguous on its signedness and libpq does consider it signed, we cannot use the highest-order bit. Additionally, the StringInfo API uses "int" (which is 4 bytes wide in most platforms) in many places, so we'd need to change that API too in order to improve, which has lots of fallout.

Backpatch to 9.5, which is the oldest that has MemoryContextAllocExtended, a necessary piece of infrastructure. We could apply to 9.4 with very minimal additional effort, but any further than that would require backpatching "huge" allocations too.

This is the largest set of changes we could find that can be back-patched without breaking compatibility with existing systems. Fixing a bigger set of problems (for example, dumping tuples bigger than 2GB, or dumping fields bigger than 1GB) would require changing the FE/BE protocol and/or changing the StringInfo API in an ABI-incompatible way, neither of which would be back-patchable.

Authors: Daniel Vérité, Álvaro Herrera
Reviewed by: Tomas Vondra
Discussion: https://postgr.es/m/20160229183023.GA286012@alvherre.pgsql

* Fix bogus handling of JOIN_UNIQUE_OUTER/INNER cases for parallel joins. (Tom Lane, 2016-11-29)

consider_parallel_nestloop passed the wrong jointype down to its subroutines for JOIN_UNIQUE_INNER cases (it should pass JOIN_INNER), and it thought that it could pass paths other than innerrel->cheapest_total_path to create_unique_path, which create_unique_path is not on board with. These bugs would lead to assertion failures or other errors, suggesting that this code path hasn't been tested much.

hash_inner_and_outer's code for parallel join effectively treated both JOIN_UNIQUE_OUTER and JOIN_UNIQUE_INNER the same as JOIN_INNER (for different reasons :-(), leading to incorrect plans that treated a semijoin as if it were a plain join.

Michael Day submitted a test case demonstrating that hash_inner_and_outer failed for JOIN_UNIQUE_OUTER, and I found the other cases through code review.

Report: https://postgr.es/m/D0E8A029-D1AC-42E8-979A-5DE4A77E4413@rcmail.com

* Fix incorrect variable type in set_rel_consider_parallel(). (Tom Lane, 2016-11-29)

func_parallel() returns char not Oid. Harmless, but still wrong.

Amit Langote

* Fix get_relation_info name typo'ed in a comment (Alvaro Herrera, 2016-11-28)

Plus add a missing comment about this in get_relation_info itself.

Author: Amit Langote
Discussion: https://postgr.es/m/e46c0569-0449-afa0-e2fe-f3776e4b3fd5@lab.ntt.co.jp

* Fix test about ignoring extension dependencies during extension scripts. (Tom Lane, 2016-11-26)

Commit 08dd23cec introduced an exception to the rule that extension member objects can only be dropped as part of dropping the whole extension, intending to allow such drops while running the extension's own creation or update scripts. However, the exception was only applied at the outermost recursion level, because it was modeled on a pre-existing check to ignore dependencies on objects listed in pendingObjects.

Bug #14434 from Philippe Beaudoin shows that this is inadequate: in some cases we can reach an extension member object by recursion from another one. (The bug concerns the serial-sequence case; I'm not sure if there are other cases, but there might well be.)

To fix, revert 08dd23cec's changes to findDependentObjects() and instead apply the creating_extension exception regardless of stack level.

Having seen this example, I'm a bit suspicious that the pendingObjects logic is also wrong and such cases should likewise be allowed at any recursion level. However, changing that would interact in subtle ways with the recursion logic (at least it would need to be moved to after the recursing-from check). Given that the code's been like that a long time, I'll refrain from touching it without a clear example showing it's wrong.

Back-patch to all active branches. In HEAD and 9.6, where suitable test infrastructure exists, add a regression test case based on the bug report.

Report: <20161125151448.6529.33039@wrigleys.postgresql.org>
Discussion: <13224.1480177514@sss.pgh.pa.us>

* Bring some clarity to the defaults for the xxx_flush_after parameters. (Tom Lane, 2016-11-25)

Instead of confusingly stating platform-dependent defaults for these parameters in the comments in postgresql.conf.sample (with the main entry being a lie on Linux), teach initdb to install the correct platform-dependent value in postgresql.conf, similarly to the way we handle other platform-dependent defaults. This won't do anything for existing 9.6 installations, but since it's effectively only a documentation improvement, that seems OK.

Since this requires initdb to have access to the default values, move the #define's for those to pg_config_manual.h; the original placement in bufmgr.h is unworkable because that file can't be included by frontend programs.

Adjust the default value for wal_writer_flush_after so that it is 1MB regardless of XLOG_BLCKSZ, conforming to what is stated in both the SGML docs and postgresql.conf. (We could alternatively make it scale with XLOG_BLCKSZ, but I'm not sure I see the point.)

Copy-edit related SGML documentation.

Fabien Coelho and Tom Lane, per a gripe from Tomas Vondra.

Discussion: <30ebc6e3-8358-09cf-44a8-578252938424@2ndquadrant.com>

* Mark a query's topmost Paths parallel-unsafe if they will have initPlans. (Tom Lane, 2016-11-25)

Andreas Seltenreich found another case where we were being too optimistic about allowing a plan to be considered parallelizable despite it containing initPlans. It seems like the real issue here is that if we know we are going to tack initPlans onto the topmost Plan node for a subquery, we had better mark that subquery's result Paths as not-parallel-safe. That fixes this problem and allows reversion of a kluge (added in commit 7b67a0a49 and extended in f24cf960d) to not trust the parallel_safe flag at top level.

Discussion: <874m2w4k5d.fsf@ex.ansel.ydns.eu>

* Check for pending trigger events on far end when dropping an FK constraint. (Tom Lane, 2016-11-25)

When dropping a foreign key constraint with ALTER TABLE DROP CONSTRAINT, we refuse the drop if there are any pending trigger events on the named table; this ensures that we won't remove the pg_trigger row that will be consulted by those events. But we should make the same check for the referenced relation, else we might remove a due-to-be-referenced pg_trigger row for that relation too, resulting in "could not find trigger NNN" or "relation NNN has no triggers" errors at commit.

Per bug #14431 from Benjie Gillam. Back-patch to all supported branches.

Report: <20161124114911.6530.31200@wrigleys.postgresql.org>

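A sketch of the previously-broken sequence (default constraint name assumed):

    CREATE TABLE pk (a int PRIMARY KEY);
    CREATE TABLE fk (a int REFERENCES pk DEFERRABLE INITIALLY DEFERRED);
    INSERT INTO pk VALUES (1);
    BEGIN;
    DELETE FROM pk;   -- queues a deferred RI trigger event on pk
    ALTER TABLE fk DROP CONSTRAINT fk_a_fkey;   -- now refused
    COMMIT;           -- previously failed here: could not find trigger NNN
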
* Fix typo in comment (Magnus Hagander, 2016-11-25)

Thomas Munro

* Fix commit_ts for FrozenXid and BootstrapXid (Alvaro Herrera, 2016-11-24)

Previously, requesting commit timestamp for transactions FrozenTransactionId and BootstrapTransactionId resulted in an error. But since those values can validly appear in committed tuples' Xmin, this behavior is unhelpful and error prone: each caller would have to special-case those values before requesting timestamp data for an Xid. We already have a perfectly good interface for returning "the Xid you requested is too old for us to have commit TS data for it", so let's use that instead.

Backpatch to 9.5, where commit timestamps appeared.

Author: Craig Ringer
Discussion: https://www.postgresql.org/message-id/CAMsr+YFM5Q=+ry3mKvWEqRTxrB0iU3qUSRnS28nz6FJYtBwhJg@mail.gmail.com

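For illustration (requires track_commit_timestamp = on):

    SELECT pg_xact_commit_timestamp('1'::xid);  -- BootstrapXid: now NULL, not ERROR
    SELECT pg_xact_commit_timestamp('2'::xid);  -- FrozenXid: now NULL, not ERROR
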
* Make sure ALTER TABLE preserves index tablespaces. (Tom Lane, 2016-11-23)

When rebuilding an existing index, ALTER TABLE correctly kept the physical file in the same tablespace, but it messed up the pg_class entry if the index had been in the database's default tablespace and "default_tablespace" was set to some non-default tablespace. This led to an inaccessible index.

Fix by fixing pg_get_indexdef_string() to always include a tablespace clause, whether or not the index is in the default tablespace. The previous behavior was installed in commit 537e92e41, and I think it just wasn't thought through very clearly; certainly the possible effect of default_tablespace wasn't considered. There's some risk in changing the behavior of this function, but there are no other call sites in the core code. Even if it's being used by some third party extension, it's fairly hard to envision a usage that is okay with a tablespace clause being appended some of the time but can't handle it being appended all the time.

Back-patch to all supported versions.

Code fix by me, investigation and test cases by Michael Paquier.

Discussion: <1479294998857-5930602.post@n3.nabble.com>

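A sketch of the affected sequence (tablespace name hypothetical):

    CREATE TABLE t (a int PRIMARY KEY);  -- index in the database default tablespace
    SET default_tablespace = 'ts1';
    ALTER TABLE t ALTER COLUMN a TYPE bigint;   -- rebuilds the PK index
    -- before the fix, pg_class claimed the rebuilt index lived in ts1 while
    -- its file remained in the database default, leaving it inaccessible
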
* Fix PGLC_localeconv() to handle errors better. (Tom Lane, 2016-11-21)

The code was intentionally not very careful about leaking strdup'd strings in case of an error. That was forgivable probably, but it also failed to notice strdup() failures, which could lead to subsequent null-pointer-dereference crashes, since many callers unsurprisingly didn't check for null pointers in the struct lconv fields. An even worse problem is that it could throw error while we were setlocale'd to a non-C locale, causing unwanted behavior in subsequent libc calls.

Rewrite to ensure that we cannot throw elog(ERROR) until after we've restored the previous locale settings, or at least attempted to. (I'm sorely tempted to make restore failure be a FATAL error, but will refrain for the moment.) Having done that, it's not much more work to ensure that we clean up strdup'd storage on the way out, too.

This code is substantially the same in all supported branches, so back-patch all the way.

Michael Paquier and Tom Lane

Discussion: <CAB7nPqRMbGqa_mesopcn4MPyTs34eqtVEK7ELYxvvV=oqS00YA@mail.gmail.com>

* Fix test for subplans in force-parallel mode. (Tom Lane, 2016-11-21)

We mustn't force parallel mode if the query has any subplans, since ExecSerializePlan doesn't transmit them to workers. Testing top_plan->initPlan is inadequate because (1) there might be initPlans attached to lower plan nodes, and (2) non-initPlan subplans don't work either. There's certainly room for improvement in those restrictions, but for the moment that's what we've got.

Amit Kapila, per report from Andreas Seltenreich

Discussion: <8737im6pmh.fsf@credativ.de>

* Prevent multicolumn expansion of "foo.*" in an UPDATE source expression. (Tom Lane, 2016-11-20)

Because we use transformTargetList() for UPDATE as well as SELECT tlists, the code accidentally tried to expand a "*" reference into several columns. This is nonsensical, because the UPDATE syntax provides exactly one target column to put the value into. The immediate result was that transformUpdateTargetList() got confused and reported "UPDATE target count mismatch --- internal error". It seems better to treat such a reference as a plain whole-row variable, as it would be in other contexts. (This could produce useful results when the target column is of composite type.)

Fix by tweaking transformTargetList() to perform *-expansion only conditionally, depending on its exprKind parameter.

Back-patch to 9.3. The problem exists further back, but a fix would be much more invasive before that, because transformTargetList() wasn't told what kind of list it was working on. Doesn't seem worth the trouble given the lack of field reports. (I only noticed it because I was checking the code while trying to improve the documentation about how we handle "foo.*".)

Discussion: <4308.1479595330@sss.pgh.pa.us>

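A sketch of the construct in question (type and names hypothetical):

    CREATE TYPE complex AS (r float8, i float8);
    CREATE TABLE t (id int, c complex);
    -- "s.*" as an UPDATE source expression is now a whole-row reference
    -- instead of failing with "UPDATE target count mismatch":
    UPDATE t SET c = s.* FROM (SELECT 1.0::float8, 2.0::float8) s(r, i);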