path: root/src/backend

* tableam: Move heap-specific logic from needs_toast_table below tableam. (Robert Haas, 2019-05-21)

  This allows table AMs to completely suppress TOAST table creation, or to
  modify the conditions under which they are created.

  Patch by me.  Reviewed by Andres Freund.

  Discussion: http://postgr.es/m/CA+Tgmoa4O2n=yphqD2pERUnYmUO84bH1SqMsA-nSxBGsZ7gWfA@mail.gmail.com
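  For instance, a table AM that handles wide values itself could suppress
  TOAST tables entirely via the new callback.  A minimal sketch (the
  callback name matches the tableam interface this commit introduces; the
  surrounding AM is hypothetical and most required callbacks are elided):

      #include "postgres.h"
      #include "access/tableam.h"

      /* Hypothetical AM that stores wide values itself; TOAST never needed. */
      static bool
      myam_relation_needs_toast_table(Relation rel)
      {
          return false;       /* suppress TOAST table creation entirely */
      }

      static const TableAmRoutine myam_methods = {
          .type = T_TableAmRoutine,
          /* ... other required callbacks ... */
          .relation_needs_toast_table = myam_relation_needs_toast_table,
      };
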
* Fix comment for issue_xlog_fsync(). (Fujii Masao, 2019-05-21)

  "segno" is the argument for the function, not "log" and "seg".

  Author: Antonin Houska
  Discussion: https://postgr.es/m/11863.1558361020@spoje.net
* Make VACUUM accept 1 and 0 as a boolean value. (Fujii Masao, 2019-05-21)

  Commit 41b54ba78e allowed existing VACUUM options to take a boolean
  argument.  It's documented that the valid boolean values VACUUM can
  accept are true, false, on, off, 1, and 0.  But previously the parser
  failed to accept 1 and 0 as boolean values in VACUUM syntax because of a
  lack of a NumericOnly clause for vac_analyze_option_arg in gram.y.
  This commit adds such a NumericOnly clause so that VACUUM options can
  also take 1 and 0 as a boolean value.

  Discussion: https://postgr.es/m/CAHGQGwGYg82A8UCQxZe7Zn9MnyUBGdyB=1CNpKF3jBny+RbyfA@mail.gmail.com
* Translation updates (Peter Eisentraut, 2019-05-20)

  Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
  Source-Git-Hash: a20bf6b8a5b4e32450967055eb5b07cee4704edd
* Remove outdated comment in copy.c. (Andres Freund, 2019-05-19)
* Minimally fix partial aggregation for aggregates that don't have one argument. (Andres Freund, 2019-05-19)

  For partial aggregation combine steps, AggStatePerTrans->numTransInputs
  was set to the transition function's number of inputs, rather than the
  combine function's number of inputs (always 1).

  That led to partial aggregates with strict combine functions wrongly
  checking for NOT NULL input, as required by strictness.  When the
  aggregate wasn't passed exactly one argument, the strictness check was
  either omitted (in the 0 args case) or too many arguments were checked.
  In the latter case we'd read beyond the end of
  FunctionCallInfoData->args (only in master).

  AggStatePerTrans->numTransInputs has actually been wrong since 9.6,
  where partial aggregates were added.  But it turns out not to be an
  active problem in 9.6 and 10, because numTransInputs wasn't used at all
  for combine functions: before c253b722f6 there simply was no NULL check
  for the input to strict trans functions, and after that the check was
  simply hardcoded for the right offset in fcinfo, as it's done by code
  specific to combine functions.

  In bf6c614a2f2 (11) the strictness check was generalized, with common
  code doing the strictness checks for both plain and combine transition
  functions, based on numTransInputs.  For combine functions this led to
  not emitting an expression step to check for strict input in the 0
  arguments case, and in the > 1 arguments case we'd check too many
  arguments.  Due to the fact that the relevant fcinfo->isnull[2..] was
  always zero-initialized (more or less by accident, by being part of the
  AggStatePerTrans struct, which is palloc0'ed), there was no observable
  damage in the latter case before a9c35cf85ca1f; we just checked too
  many array elements.

  Due to the changes in a9c35cf85ca1f, the > 1 argument bug became
  visible, because these days fcinfo is a) dynamically allocated without
  being zeroed and b) exactly the length required for the number of
  specified arguments (hardcoded to 2 in this case).

  This commit only contains a fairly minimal fix, setting numTransInputs
  to a hardcoded 1 when building a pertrans for a combine function.  It
  seems likely that we'll want to clean this up further (e.g. the
  arguments to build_pertrans_for_aggref() aren't particularly meaningful
  for combine functions).  But the wrap date for 12 beta1 is coming up
  fast, so it seems good to have a minimal fix in place.

  Backpatch to 11.  While AggStatePerTrans->numTransInputs was set wrongly
  before that, the value was not used for combine functions.

  Reported-By: Rajkumar Raghuwanshi
  Diagnosed-By: Kyotaro Horiguchi, Jeevan Chalke, Andres Freund, David Rowley
  Author: David Rowley, Kyotaro Horiguchi, Andres Freund
  Discussion: https://postgr.es/m/CAKcux6=uZEyWyLw0N7HtR9OBc-sWEFeByEZC7t-KDf15FKxVew@mail.gmail.com
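  The shape of the minimal fix is tiny; a sketch of the relevant logic
  when building a pertrans (variable names assumed for illustration, not
  verbatim from the commit):

      /* When initializing a pertrans for a combine-function aggregate: */
      if (DO_AGGSPLIT_COMBINE(aggstate->aggsplit))
      {
          /*
           * A combine function always takes exactly one input: the
           * transition value produced by the partial aggregate.  Using the
           * aggregate's own argument count here made the strictness check
           * inspect the wrong number of arguments.
           */
          pertrans->numTransInputs = 1;
      }
      else
          pertrans->numTransInputs = numArguments;
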
* Don't take a predicate lock for analyze scans, refactor scan option passing. (Andres Freund, 2019-05-19)

  Before this commit, when ANALYZE was run on a table and serializable was
  used (either by virtue of an explicit BEGIN TRANSACTION ISOLATION LEVEL
  SERIALIZABLE, or default_transaction_isolation being set to
  serializable), a null pointer dereference led to a crash.

  The analyze scan doesn't need a snapshot (nor predicate locking), but
  before this commit a scan only contained information about being a
  bitmap or sample scan.

  Refactor the option passing to the scan_begin callback to use a bitmask
  instead, as shown in the sketch below.  Alternatively we could have
  added a new boolean parameter, but that seems harder to read.  Even
  before this issue various people (Heikki, Tom, Robert) suggested doing
  so.

  These changes don't change the scan APIs outside of tableam.  The flags
  argument could be exposed, but it's not necessary to fix this problem.
  Also, the wrapper table_beginscan* functions encapsulate most of that
  complexity.

  After these changes, fixing the bug is trivial: just don't acquire a
  predicate lock for analyze-style scans.  That was already done for
  bitmap heap scans.  Add an assert that a snapshot is passed when
  acquiring the predicate lock, so this kind of bug doesn't require
  running with serializable.

  Also add a comment about sample scans currently requiring predicate
  locking of the entire relation; that previously wasn't remarked upon.

  Reported-By: Joe Wildish
  Author: Andres Freund
  Discussion: https://postgr.es/m/4EA80A20-E9BF-49F1-9F01-5B66CAB21453@elusive.cx
      https://postgr.es/m/20190411164947.nkii4gaeilt4bui7@alap3.anarazel.de
      https://postgr.es/m/20190518203102.g7peu2fianukjuxm@alap3.anarazel.de
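  The resulting bitmask looks roughly like this (a sketch following the
  commit's tableam.h refactoring; exact member names and values are from
  my reading of the change and may differ slightly):

      typedef enum ScanOptions
      {
          /* one of SO_TYPE_* may be specified */
          SO_TYPE_SEQSCAN = 1 << 0,
          SO_TYPE_BITMAPSCAN = 1 << 1,
          SO_TYPE_SAMPLESCAN = 1 << 2,
          SO_TYPE_ANALYZE = 1 << 3,       /* analyze scans: no predicate lock */

          /* several of SO_ALLOW_* may be specified */
          SO_ALLOW_STRAT = 1 << 4,        /* allow buffer access strategy */
          SO_ALLOW_SYNC = 1 << 5,         /* allow synchronized scans */
          SO_ALLOW_PAGEMODE = 1 << 6,     /* allow page-mode visibility checks */

          SO_TEMP_SNAPSHOT = 1 << 7       /* unregister snapshot at scan end */
      } ScanOptions;
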
* Revert "postmaster: Start syslogger earlier".Tom Lane2019-05-19
| | | | | | | | | | | | | | | | This commit reverts 57431a911d3a650451d198846ad3194900666152. While that's still a good idea in the abstract, we found out that there are multiple crasher bugs in it on Windows builds, making the logging_collector option unusable on Windows. There's no time left to fix these issues before 12beta1, so revert the patch to allow Windows beta testing to proceed. We'll try again at some future date. Per bug #15804 from Yulian Khodorkovskiy and additional investigation by Michael Paquier. Discussion: https://postgr.es/m/15804-3721117bf40fb654@postgresql.org
* Fix declarations of a couple of jsonpath functions (Alexander Korotkov, 2019-05-19)

  Make jsonb_path_query_array() and jsonb_path_query_first() use the
  PG_FUNCTION_ARGS macro instead of its expansion.
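  For context, PG_FUNCTION_ARGS simply expands to "FunctionCallInfo
  fcinfo", so the two declarations below are equivalent; the macro form
  is the project convention for V1 SQL-callable functions:

      #include "postgres.h"
      #include "fmgr.h"

      /* Preferred: the macro form used everywhere else in the tree. */
      Datum jsonb_path_query_array(PG_FUNCTION_ARGS);

      /* What the commit removed: the macro's expansion, written by hand. */
      Datum jsonb_path_query_array(FunctionCallInfo fcinfo);
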
* Make BufFileCreateTemp() ensure that temp tablespaces are set up. (Tom Lane, 2019-05-18)

  If PrepareTempTablespaces() has never been called in the current
  transaction, OpenTemporaryFile() will fall back to using the default
  tablespace, which is a bug if the user wanted temp files placed
  elsewhere.  gistInitBuildBuffers() appears to have this disease
  already, and it seems like an easy trap for future coders to fall into.

  We discussed other ways to close this gap, but none of them are
  prettier or more reliable than just having BufFileCreateTemp do it.
  In particular, having fd.c do this creates layering issues that we
  could do without.

  Per suggestion from Melanie Plageman.  Arguably this is a bug fix, but
  nobody seems very excited about back-patching, so change in HEAD only.

  Discussion: https://postgr.es/m/CAAKRu_YwzjuGAmmaw4-8XO=OVFGR1QhY_Pq-t3wjb9ribBJb_Q@mail.gmail.com
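  The fix amounts to one extra call at the top of the function; an
  abridged sketch of the resulting shape (not the verbatim buffile.c
  code):

      BufFile *
      BufFileCreateTemp(bool interXact)
      {
          File    pfile;

          /*
           * Ensure temp tablespaces are set up, so OpenTemporaryFile
           * honors the user's temp_tablespaces setting instead of
           * silently falling back to the default tablespace.
           */
          PrepareTempTablespaces();

          pfile = OpenTemporaryFile(interXact);
          Assert(pfile >= 0);

          return makeBufFile(pfile);  /* wrap the fd.c file in a BufFile */
      }
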
* "A void function may not return a value".Tom Lane2019-05-18
| | | | Per buildfarm.
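  For context, that is the C compiler diagnostic in question: a return
  statement with an expression is not allowed in a function whose return
  type is void (C99 6.8.6.4), and some buildfarm compilers reject it
  outright.  A hypothetical illustration:

      static void do_cleanup(void) { /* ... */ }

      /* Rejected by strict compilers. */
      static void
      report_and_cleanup(void)
      {
          return do_cleanup();  /* "a void function may not return a value" */
      }

      /* Portable spelling: call, then return with no expression. */
      static void
      report_and_cleanup_fixed(void)
      {
          do_cleanup();
          return;
      }
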
* tableam: Avoid relying on relation size to determine validity of tids. (Andres Freund, 2019-05-17)

  Instead add a tableam callback to do so.  To avoid adding per-validation
  overhead, pass a scan to tuple_tid_valid.  In heap's case we'd otherwise
  have incurred a RelationGetNumberOfBlocks() call for each tid, which
  would have added noticeable overhead to nodeTidscan.c.

  Author: Andres Freund
  Reviewed-By: Ashwin Agrawal
  Discussion: https://postgr.es/m/20190515185447.gno2jtqxyktylyvs@alap3.anarazel.de
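  A sketch of the callback and heap's implementation of it (simplified;
  the point is that heap can answer from the block count already cached
  in the scan descriptor instead of calling RelationGetNumberOfBlocks()
  per tid):

      /* In TableAmRoutine (access/tableam.h): */
      bool        (*tuple_tid_valid) (TableScanDesc scan, ItemPointer tid);

      /* Heap's implementation, roughly: */
      static bool
      heapam_tuple_tid_valid(TableScanDesc scan, ItemPointer tid)
      {
          HeapScanDesc hscan = (HeapScanDesc) scan;

          return ItemPointerIsValid(tid) &&
              ItemPointerGetBlockNumber(tid) < hscan->rs_nblocks;
      }
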
* tableam: Don't assume that every AM uses md.c style storage. (Andres Freund, 2019-05-17)

  Previously various parts of the code routed size requests through
  RelationGetNumberOfBlocks[InFork].  That works if md.c is used by the
  AM, but not otherwise.

  Add a tableam callback to return the size of the table.  As not every
  AM will use postgres' BLCKSZ, have it return bytes, and have
  RelationGetNumberOfBlocksInFork() round the byte size up into blocks.

  To allow code outside of the AM to determine the actual relation size,
  map InvalidForkNumber to the total size of a relation, as not every AM
  might just need the postgres-defined forks.

  A few users of RelationGetNumberOfBlocks() ought to be converted away
  from that.  One case, the use of it to determine whether a tid is
  valid, will be fixed in a follow-up commit.  Others will have to wait
  for v13.

  Author: Andres Freund
  Discussion: https://postgr.es/m/20190423225201.3bbv6tbqzkb5w7cw@alap3.anarazel.de
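  The byte-to-block rounding described above ends up looking roughly like
  this (a simplified sketch of the callback signature plus the
  RelationGetNumberOfBlocksInFork() adjustment; the ceiling division is
  the key detail):

      /* In TableAmRoutine: size of a fork in bytes, not blocks. */
      uint64      (*relation_size) (Relation rel, ForkNumber forkNumber);

      BlockNumber
      RelationGetNumberOfBlocksInFork(Relation relation, ForkNumber forkNum)
      {
          if (relation->rd_rel->relkind == RELKIND_RELATION)
          {
              /* table AMs report bytes; round up to whole blocks */
              uint64      szbytes = table_relation_size(relation, forkNum);

              return (BlockNumber) ((szbytes + (BLCKSZ - 1)) / BLCKSZ);
          }

          /* indexes, sequences etc. still go through smgr directly */
          return smgrnblocks(relation->rd_smgr, forkNum);
      }
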
* Restructure creation of run-time pruning steps. (Tom Lane, 2019-05-17)

  Previously, gen_partprune_steps() always built executor pruning steps
  using all suitable clauses, including those containing PARAM_EXEC
  Params.  This meant that the pruning steps were only completely safe
  for executor run-time (scan start) pruning.  To prune at executor
  startup, we had to ignore the steps involving exec Params.  But this
  doesn't really work in general, since there may be logic changes needed
  as well --- for example, pruning according to the last operator's btree
  strategy is the wrong thing if we're not applying that operator.  The
  rules embodied in gen_partprune_steps() and its minions are
  sufficiently complicated that tracking their incremental effects in
  other logic seems quite impractical.

  Short of a complete redesign, the only safe fix seems to be to run
  gen_partprune_steps() twice, once to create executor startup pruning
  steps and then again for run-time pruning steps.  We can save a few
  cycles however by noting during the first scan whether we rejected any
  clauses because they involved exec Params --- if not, we don't need to
  do the second scan.

  In support of this, refactor the internal APIs in partprune.c to make
  more use of passing information in the GeneratePruningStepsContext
  struct, rather than as separate arguments.

  This is, I hope, the last piece of our response to a bug report from
  Alan Jackson.  Back-patch to v11 where this code came in.

  Discussion: https://postgr.es/m/FAD28A83-AC73-489E-A058-2681FA31D648@tvsquared.com
* More message style fixes (Alvaro Herrera, 2019-05-16)

  Discussion: https://postgr.es/m/20190515183005.GA26486@alvherre.pgsql
* Remove extra nbtree half-dead internal page check. (Peter Geoghegan, 2019-05-16)

  It's not safe for nbtree VACUUM to attempt to delete a target page
  whose right sibling is already half-dead, since that would fail the
  cross-check when VACUUM attempts to re-find a downlink to the right
  sibling in the parent page.  Logic to prevent this from happening was
  added by commit 8da31837803, which addressed a bug in the overhaul of
  page deletion that went into PostgreSQL 9.4 (commit efada2b8e92).
  VACUUM was made to check the right sibling page, and back off when it
  happened to be half-dead already.

  However, it is only truly necessary to do the right sibling check on
  the leaf level, since that transitively determines if the deletion
  target's parent's right sibling page is itself undergoing deletion.
  Remove the internal page level check, and add a comment explaining why
  the leaf level check alone suffices.

  The extra check is also unnecessary due to the fact that internal pages
  that are marked half-dead are generally considered corrupt.  Commit
  efada2b8e92 established the principle that there should never be
  half-dead internal pages (internal pages pending deletion are possible,
  but that status is never directly represented in the internal page).
  VACUUM will complain about corruption when it encounters half-dead
  internal pages, so VACUUM is bound to raise an error one way or another
  when an nbtree index has a half-dead internal page (contrib/amcheck
  will also report that the page is corrupt).

  It's possible that a pg_upgrade'd 9.3 database will still have
  half-dead internal pages, so it may seem like there is an argument for
  leaving the check in place to reliably get a cleaner error message that
  advises the user to REINDEX.  However, leaf pages are also deleted in
  the first phase of deletion prior to PostgreSQL 9.4, so I believe we
  won't even attempt to re-find the parent page anyway (we won't have the
  fully deleted leaf page as the right sibling of our target page, so we
  won't even try to find a downlink for it).

  Discussion: https://postgr.es/m/CAH2-Wzm_ntmqJjWLRyKzimFmFvk+BnVAvUpaA4s1h9Ja58woaQ@mail.gmail.com
* Fix bogus logic for combining range-partitioned columns during pruning. (Tom Lane, 2019-05-16)

  gen_prune_steps_from_opexps's notion of how to do this was overly
  complicated and underly correct.

  Per discussion of a report from Alan Jackson (though this fixes only
  one aspect of that problem).  Back-patch to v11 where this code came
  in.

  Amit Langote

  Discussion: https://postgr.es/m/FAD28A83-AC73-489E-A058-2681FA31D648@tvsquared.com
* Fix partition pruning to treat stable comparison operators properly. (Tom Lane, 2019-05-16)

  Cross-type comparison operators in a btree or hash opclass might be
  only stable, not immutable (this is true of timestamp vs. timestamptz,
  for example).  partprune.c ignored this possibility and would perform
  plan-time pruning with them anyway, possibly leading to wrong answers
  if the environment changed between planning and execution.

  To fix, teach gen_partprune_steps() to do things differently when
  creating plan-time pruning steps vs. run-time pruning steps.
  analyze_partkey_exprs() also needs an extra check, which is rather
  annoying, but now is not the time to restructure things enough to
  avoid that.

  While at it, simplify the logic for the plan-time case a little by
  insisting that the comparison value be a Const and nothing else.  This
  relies on the assumption that eval_const_expressions will have reduced
  any immutable expression to a Const; which is not quite 100% true, but
  certainly any case that comes up often enough to be interesting should
  have simplification logic there.

  Also improve a bunch of inadequate/obsolete/wrong comments.

  Per discussion of a report from Alan Jackson (though this fixes only
  one aspect of that problem).  Back-patch to v11 where this code came
  in.

  David Rowley, with some further hacking by me

  Discussion: https://postgr.es/m/FAD28A83-AC73-489E-A058-2681FA31D648@tvsquared.com
* Remove obsolete nbtree insertion comment. (Peter Geoghegan, 2019-05-15)

  Remove a Berkeley-era comment above _bt_insertonpg() that admonishes
  the reader to grok Lehman and Yao's paper before making any changes.
  This made a certain amount of sense back when _bt_insertonpg() was
  responsible for most of the things that are now spread across
  _bt_insertonpg(), _bt_findinsertloc(), _bt_insert_parent(), and
  _bt_split(), but it doesn't work like that anymore.

  I believe that this comment alludes to the need to "couple" or "crab"
  buffer locks as we ascend the tree as page splits cascade upwards.
  The nbtree README already explains this in detail, which seems
  sufficient.  Besides, the changes to page splits made by commit
  40dae7ec537 altered the exact details of how buffer locks are retained
  during splits; Lehman and Yao's original algorithm seems to release the
  lock on the left child page/buffer slightly earlier than
  _bt_insertonpg()/_bt_insert_parent() can.
* Reverse order of newitem nbtree candidate splits. (Peter Geoghegan, 2019-05-15)

  Commit fab25024, which taught nbtree to choose candidate split points
  more carefully, had _bt_findsplitloc() record all possible split points
  in an initial pass over a page that is about to be split.  The order
  that candidate split points were processed and stored in was assumed to
  match the offset number order of split points on an imaginary version
  of the page that contains the same items as the original, but also fits
  newitem (the item that provoked the split precisely because it didn't
  fit).

  However, the order of split points in the final array was not quite
  what was expected: the split point that makes newitem the firstright
  item came after the split point that makes newitem the lastleft item
  -- not before.  As a result, _bt_findsplitloc() could get confused
  about the leftmost and rightmost tuples among all possible split points
  recorded for the page.  This seems to have no appreciable impact on the
  quality of the final split point chosen by _bt_findsplitloc(), but it's
  still wrong.

  To fix, switch the order in which newitem candidate splits are
  recorded.  This also makes it possible to describe candidate split
  points in terms of which pair of adjoining tuples enclose the split
  point within _bt_findsplitloc(), making it clearer why it's generally
  safe for _bt_split() to expect lastleft and firstright tuples.
* Handle table_complete_speculative's succeeded argument as documented. (Andres Freund, 2019-05-14)

  For some reason both the callsite and the implementation for heapam had
  the meaning inverted (i.e. succeeded == true was passed in case of a
  conflict).  That's confusing.

  I (Andres) briefly pondered whether it'd be better to rename
  table_complete_speculative's argument to 'bool specConflict' or such,
  but decided not to.  The 'complete' in the function name for me makes
  `succeeded` sound a bit better.

  Reported-By: Ashwin Agrawal, Melanie Plageman, Heikki Linnakangas
  Discussion: https://postgr.es/m/CALfoeitk7-TACwYv3hCw45FNPjkA86RfXg4iQ5kAOPhR+F1Y4w@mail.gmail.com
      https://postgr.es/m/97673451-339f-b21e-a781-998d06b1067c@iki.fi
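  After the fix, the executor callsite passes the flag with its
  documented sense; a simplified sketch of the nodeModifyTable.c call:

      /*
       * specConflict is true when another session beat us to the insert.
       * The AM callback's argument means "the speculative insertion
       * succeeded", so it must be the negation -- exactly what the old
       * code got backwards.
       */
      table_complete_speculative(resultRelationDesc, slot,
                                 specToken, !specConflict);
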
* Fix SQL-style substring() to have spec-compliant greediness behavior. (Tom Lane, 2019-05-14)

  SQL's regular-expression substring() function is defined to have a
  pattern argument that's separated into three subpatterns by
  escape-double-quote markers; the function result is the part of the
  input matching the second subpattern.  The standard makes it clear that
  if there is ambiguity about how to match the input to the subpatterns,
  the first and third subpatterns should be taken to match the smallest
  possible amount of text (i.e., they're "non greedy", in the terms of
  our regex code).  We were not doing it that way: the first subpattern
  would eat the largest possible amount of text, causing the function
  result to be shorter than what the spec requires.

  Fix that by attaching explicit greediness quantifiers to the
  subpatterns.  (This depends on the regex fix in commit 8a29ed053;
  before that, this didn't reliably change the regex engine's behavior.)

  Also, by adding parentheses around each subpattern, we ensure that "|"
  (OR) in the subpatterns behave sanely.  Previously, "|" in the first or
  third subpatterns didn't work.

  This patch also makes the function throw an error if you write more
  than two escape-double-quote markers, and do something sane if you
  write just one, and documents that behavior.  Previously, an odd number
  of markers led to a confusing complaint about unbalanced parentheses,
  while extra pairs of markers were just ignored.  (Note that the spec
  requires exactly two markers, but we've historically allowed there to
  be none, and this patch preserves the old behavior for that case.)

  In passing, adjust some substring() test cases that didn't really prove
  what they said they were testing for: they used patterns that didn't
  match the data string, so that the output would be NULL whether or not
  the function was really strict.

  Although this is certainly a bug fix, changing the behavior in back
  branches seems undesirable: applications could perhaps be depending on
  the old behavior, since it's not obviously wrong unless you read the
  spec very closely.  Hence, no back-patch.

  Discussion: https://postgr.es/m/5bb27a41-350d-37bf-901e-9d26f5592dd0@charter.net
* In bootstrap mode, use default signal handling for SIGINT etc. (Tom Lane, 2019-05-14)

  Previously, the code pointed the standard process-termination signals
  to postgres.c's die().  That would typically result in an attempt to
  execute a transaction abort, which is not possible in bootstrap mode,
  leading to PANIC.  This choice seems to be a leftover from an old code
  structure in which the same signal-assignment code was used for many
  sorts of auxiliary processes, including interactive standalone
  backends.  It's not very sensible for bootstrap mode, which has no
  interest in either interactivity or continuing after an error.  We can
  get better behavior with less effort by just letting normal process
  termination happen, after which the parent initdb process will clean
  up.

  This is basically cosmetic in any case, since initdb will react the
  same way whether bootstrap dies on a signal or abort().  Given the lack
  of previous complaints, I don't feel a need to back-patch, even though
  the behavior is old.

  Discussion: https://postgr.es/m/3850b11a.5121.16aaf827e4a.Coremail.thunder1@126.com
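  In code terms the change is small; a before/after sketch of the
  bootstrap signal setup (the exact handler assignments are my reading
  of the description, not the verbatim diff):

      /* Before: bootstrap borrowed the regular backend's handlers. */
      pqsignal(SIGINT, die);      /* die() tries a transaction abort: PANIC */
      pqsignal(SIGTERM, die);

      /* After: default process termination; the parent initdb cleans up. */
      pqsignal(SIGINT, SIG_DFL);
      pqsignal(SIGTERM, SIG_DFL);
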
* Update SQL features/conformance information to SQL:2016 (Peter Eisentraut, 2019-05-14)
* Update information_schema for SQL:2016 (Peter Eisentraut, 2019-05-14)

  This is mainly a light renumbering to match the sections in the
  standard.
* Detect internal GiST page splits correctly during index build. (Heikki Linnakangas, 2019-05-14)

  As we descend the GiST tree during insertion, we modify any downlinks
  on the way down to include the new tuple we're about to insert (if they
  don't cover it already).  Modifying an existing downlink might cause an
  internal page to split, if the new downlink tuple is larger than the
  old one.  If that happens, we need to back up to the parent and
  re-choose a page to insert to.

  We used to detect that situation, thanks to the NSN-LSN interlock
  normally used to detect concurrent page splits, but that got broken by
  commit 9155580fd5.  With that commit, we now use a dummy constant LSN
  value for every page during index build, so the LSN-NSN interlock no
  longer works.  I thought that was OK because there can't be any other
  backends modifying the index during index build, but missed that the
  insertion itself can modify the page we're inserting to.  The
  consequence was that we would sometimes insert the new tuple to an
  incorrect page, one whose downlink doesn't cover the new tuple.

  To fix, add a flag to the stack that keeps track of the state while
  descending the tree, to indicate that a page was split, and that we
  need to retry the descent from the parent.

  Thomas Munro first reported that the contrib/intarray regression test
  was failing occasionally on the buildfarm after commit 9155580fd5.  The
  failure was intermittent, because the gistchoose() function is not
  deterministic, and would only occasionally create the right
  circumstances for this bug to cause the failure.

  Patch by Anastasia Lubennikova, with some changes by me to make it work
  correctly also when the internal page split also causes the
  "grandparent" to be split.

  Discussion: https://www.postgresql.org/message-id/CA%2BhUKGJRzLo7tZExWfSbwM3XuK7aAK7FhdBV0FLkbUG%2BW0v0zg%40mail.gmail.com
* Fix comment on when HOT update is possible. (Heikki Linnakangas, 2019-05-14)

  The conditions listed in this comment have changed several times, and
  at some point the thing that the "if so" referred to was negated.  The
  text was OK up to 9.6.  It was differently wrong in v10, v11 and
  master, so fix in all those versions.
* Fix typo. (Etsuro Fujita, 2019-05-14)
* Fix duplicated words in comments (Michael Paquier, 2019-05-14)

  Author: Stephen Amell
  Discussion: https://postgr.es/m/539fa271-21b3-777e-a468-d96cffe9c768@gmail.com
* Standardize ItemIdData terminology. (Peter Geoghegan, 2019-05-13)

  The term "item pointer" should not be used to refer to ItemIdData
  variables, since that is needlessly ambiguous.  Only
  ItemPointerData/ItemPointer variables should be called item pointers.

  To fix, establish the convention that ItemIdData variables should
  always be referred to either as "item identifiers" or "line pointers".
  The term "item identifier" already predominates in docs and
  translatable messages, and so should be the preferred alternative
  there.

  Discussion: https://postgr.es/m/CAH2-Wz=c=MZQjUzde3o9+2PLAPuHTpVZPPdYxN=E4ndQ2--8ew@mail.gmail.com
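  The two types being disambiguated (abridged from storage/itemid.h and
  storage/itemptr.h) make the terminology problem obvious: one is a
  per-page slot, the other a global tuple address:

      /* A "line pointer": one slot in a page's item array (itemid.h). */
      typedef struct ItemIdData
      {
          unsigned    lp_off:15,   /* offset to tuple (from start of page) */
                      lp_flags:2,  /* state of line pointer */
                      lp_len:15;   /* byte length of tuple */
      } ItemIdData;

      /* An "item pointer" (tid): block number plus offset (itemptr.h). */
      typedef struct ItemPointerData
      {
          BlockIdData  ip_blkid;
          OffsetNumber ip_posid;
      } ItemPointerData;
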
* Fix logical replication's ideas about which type OIDs are built-in. (Tom Lane, 2019-05-13)

  Only hand-assigned type OIDs should be presumed to match across
  different PG servers; those assigned during genbki.pl or during initdb
  are likely to change due to addition or removal of unrelated objects.

  This means that the cutoff should be FirstGenbkiObjectId (in HEAD) or
  FirstBootstrapObjectId (before that), not FirstNormalObjectId.  Compare
  postgres_fdw's is_builtin() test.

  It's likely that this error has no observable consequence in a
  normally-functioning system, since ATM the only affected type OIDs are
  system catalog rowtypes and information_schema types, which would not
  typically be interesting for logical replication.  But you could
  probably break it if you tried hard, so back-patch.

  Discussion: https://postgr.es/m/15150.1557257111@sss.pgh.pa.us
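  The OID ranges involved, and the corrected test, look roughly like this
  (constants as defined in access/transam.h for this era; the wrapper
  function is illustrative, not the verbatim patch):

      /* From access/transam.h (HEAD at the time of this commit): */
      #define FirstGenbkiObjectId     10000  /* genbki.pl assigns from here */
      #define FirstBootstrapObjectId  12000  /* initdb assigns from here */
      #define FirstNormalObjectId     16384  /* user objects start here */

      /* Only OIDs below the genbki cutoff are stable across installations. */
      static bool
      type_oid_is_builtin(Oid typid)
      {
          return typid < FirstGenbkiObjectId;
      }
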
* Improve commentary about hack in is_publishable_class(). (Tom Lane, 2019-05-13)

  The FirstNormalObjectId test here is a kluge that needs to go away, but
  the only substitute we can think of is to add a column to pg_class,
  which will take more work than can be handled right now.  Add some
  commentary in the meanwhile.

  Discussion: https://postgr.es/m/15150.1557257111@sss.pgh.pa.us
* Don't leave behind junk nbtree pages during split. (Peter Geoghegan, 2019-05-13)

  Commit 8fa30f906be reduced the elevel of a number of "can't happen"
  _bt_split() errors from PANIC to ERROR.  At the same time, the new
  right page buffer for the split could continue to be acquired well
  before the critical section.  This was possible because it was
  relatively straightforward to make sure that _bt_split() could not
  throw an error, with a few specific exceptions.  The exceptional cases
  were safe because they involved specific, well understood errors,
  making it possible to consistently zero the right page before actually
  raising an error using elog().  There was no danger of leaving around a
  junk page, provided _bt_split() stuck to this coding rule.

  Commit 8224de4f, which introduced INCLUDE indexes, added code to make
  _bt_split() truncate away non-key attributes.  This happened at a point
  that broke the rule around zeroing the right page in _bt_split().  If
  truncation failed (perhaps due to palloc() failure), that would result
  in an errant right page buffer with junk contents.  This could confuse
  VACUUM when it attempted to delete the page, and should be avoided on
  general principle.

  To fix, reorganize _bt_split() so that truncation occurs before the new
  right page buffer is even acquired.  A junk page/buffer will not be
  left behind if _bt_nonkey_truncate()/_bt_truncate() raise an error.

  Discussion: https://postgr.es/m/CAH2-WzkcWT_-NH7EeL=Az4efg0KCV+wArygW8zKB=+HoP=VWMw@mail.gmail.com
  Backpatch: 11-, where INCLUDE indexes were introduced.
* Fix incorrect return value in JSON equality function for scalars (Michael Paquier, 2019-05-13)

  equalsJsonbScalarValue() uses a boolean as its return type, but for one
  code path -1 was returned, which is confusing.  The origin of the
  confusion is visibly that this code got copy-pasted from
  compareJsonbScalarValue() when it was introduced in d1d50bf.

  No backpatch, as this is only cosmetic.

  Author: Rikard Falkeborn
  Discussion: https://postgr.es/m/CADRDgG7mJnek6HNW13f+LF6V=6gag9PM+P7H5dnyWZAv49aBGg@mail.gmail.com
* Fix misoptimization of "{1,1}" quantifiers in regular expressions. (Tom Lane, 2019-05-12)

  A bounded quantifier with m = n = 1 might be thought a no-op.  But
  according to our documentation (which traces back to Henry Spencer's
  original man page) it still imposes greediness, or non-greediness in
  the case of the non-greedy variant "{1,1}?", on whatever it's attached
  to.

  This turns out not to work though, because parseqatom() optimizes away
  the m = n = 1 case without regard for whether it's supposed to change
  the greediness of the argument RE.

  We can fix this by just not applying the optimization when the
  greediness needs to change; the subsequent general cases handle it
  fine.  The three cases in which we can still apply the optimization are
  (a) no quantifier, or quantifier does not impose a preference;
  (b) atom has no greediness property, implying it cannot match a
  variable amount of text anyway; or (c) quantifier's greediness is same
  as atom's.  Note that in most cases where one of these applies, we'd
  have exited earlier in the "not a messy case" fast path.  I think it's
  now only possible to get to the optimization when the atom involves
  capturing parentheses or a non-top-level backref.

  Back-patch to all supported branches.  I'd ordinarily be hesitant to
  put a subtle behavioral change into back branches, but in this case
  it's very hard to see a reason why somebody would write "{1,1}?" unless
  they're trying to get the documented change-of-greediness behavior.

  Discussion: https://postgr.es/m/5bb27a41-350d-37bf-901e-9d26f5592dd0@charter.net
* Fail pgwin32_message_to_UTF16() for SQL_ASCII messages. (Noah Misch, 2019-05-12)

  The function had been interpreting SQL_ASCII messages as UTF8, throwing
  an error when they were invalid UTF8.  The new behavior is consistent
  with pg_do_encoding_conversion().  This affects LOG_DESTINATION_STDERR
  and LOG_DESTINATION_EVENTLOG, which will send untranslated bytes to
  write() and ReportEventA().  On buildfarm member bowerbird, enabling
  log_connections caused an error whenever the role name was not valid
  UTF8.  Back-patch to 9.4 (all supported versions).

  Discussion: https://postgr.es/m/20190512015615.GD1124997@rfd.leadboat.com
* Rearrange pgstat_bestart() to avoid failures within its critical section. (Tom Lane, 2019-05-11)

  We long ago decided to design the shared PgBackendStatus data structure
  to minimize the cost of writing status updates, which means that
  writers just have to increment the st_changecount field twice.  That
  isn't hooked into any sort of resource management mechanism, which
  means that if something were to throw error between the two increments,
  the st_changecount field would be left odd indefinitely.  That would
  cause readers to lock up.  Now, since it's also a bad idea to leave the
  field odd for longer than absolutely necessary (because readers will
  spin while we have it set), the expectation was that we'd treat these
  segments like spinlock critical sections, with only short, more or less
  straight-line, code in them.

  That was fine as originally designed, but commit 9029f4b37 broke it by
  inserting a significant amount of non-straight-line code into
  pgstat_bestart(), code that is very capable of throwing errors, not to
  mention taking a significant amount of time during which readers will
  spin.  We have a report from Neeraj Kumar of readers actually locking
  up, which I suspect was due to an encoding conversion error in
  X509_NAME_to_cstring, though conceivably it was just a garden-variety
  OOM failure.

  Subsequent commits have loaded even more dubious code into
  pgstat_bestart's critical section (and commit fc70a4b0d deserves some
  kind of booby prize for managing to miss the critical section entirely,
  although the negative consequences seem minimal given that the
  PgBackendStatus entry should be seen by readers as inactive at that
  point).

  The right way to fix this mess seems to be to compute all these values
  into a local copy of the process' PgBackendStatus struct, and then just
  copy the data back within the critical section proper.  This plan can't
  be implemented completely cleanly because of the struct's heavy
  reliance on out-of-line strings, which we must initialize separately
  within the critical section.  But still, the critical section is far
  smaller and safer than it was before.

  In hopes of forestalling future errors of the same ilk, rename the
  macros for st_changecount management to make it more apparent that the
  writer-side macros create a critical section.  And to prevent the worst
  consequences if we nonetheless manage to mess it up anyway, adjust
  those macros so that they really are a critical section, ie they now
  bump CritSectionCount.  That doesn't add much overhead, and it
  guarantees that if we do somehow throw an error while the counter is
  odd, it will lead to PANIC and a database restart to reset shared
  memory.

  Back-patch to 9.5 where the problem was introduced.

  In HEAD, also fix an oversight in commit b0b39f72b: it failed to teach
  pgstat_read_current_status to copy st_gssstatus data from shared memory
  to local memory.  Hence, subsequent use of that data within the
  transaction would potentially see changing data that it shouldn't see.

  Discussion: https://postgr.es/m/CAPR3Wj5Z17=+eeyrn_ZDG3NQGYgMEOY6JV6Y-WRRhGgwc16U3Q@mail.gmail.com
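  The changecount protocol being discussed works roughly like this (a
  sketch of the writer-side macros after this commit, plus the reader
  loop in conceptual form; simplified from pgstat.h/pgstat.c):

      /* Writer side: an odd st_changecount means "update in progress". */
      #define PGSTAT_BEGIN_WRITE_ACTIVITY(beentry) \
          do { \
              START_CRIT_SECTION();          /* errors now escalate to PANIC */ \
              (beentry)->st_changecount++;   /* counter becomes odd */ \
              pg_write_barrier(); \
          } while (0)

      #define PGSTAT_END_WRITE_ACTIVITY(beentry) \
          do { \
              pg_write_barrier(); \
              (beentry)->st_changecount++;   /* counter becomes even again */ \
              END_CRIT_SECTION(); \
          } while (0)

      /* Reader side: retry until a copy with a stable, even counter. */
      for (;;)
      {
          int     before_count = beentry->st_changecount;

          pg_read_barrier();
          memcpy(localentry, beentry, sizeof(PgBackendStatus));
          pg_read_barrier();

          if (before_count == beentry->st_changecount &&
              (before_count & 1) == 0)
              break;          /* consistent snapshot obtained */
      }
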
* Cope with EINVAL and EIDRM shmat() failures in PGSharedMemoryAttach. (Tom Lane, 2019-05-10)

  There's a very old race condition in our code to see whether a
  pre-existing shared memory segment is still in use by a conflicting
  postmaster: it's possible for the other postmaster to remove the
  segment in between our shmctl() and shmat() calls.  It's a narrow
  window, and there's no risk unless both postmasters are using the same
  port number, but that's possible during parallelized "make check"
  tests.  (Note that while the TAP tests take some pains to choose a
  randomized port number, pg_regress doesn't.)

  If it does happen, we treated that as an unexpected case and errored
  out.  To fix, allow EINVAL to be treated as segment-not-present, and
  the same for EIDRM on Linux.  AFAICS, the considerations here are
  basically identical to the checks for acceptable shmctl() failures, so
  I documented and coded it that way.

  While at it, adjust PGSharedMemoryAttach's API to remove its
  undocumented dependency on UsedShmemSegAddr in favor of passing the
  attach address explicitly.  This makes it easier to be sure we're using
  a null shmaddr when probing for segment conflicts (thus avoiding
  questions about what EINVAL means).  I don't think there was a bug
  there, but it required fragile assumptions about the state of
  UsedShmemSegAddr during PGSharedMemoryIsInUse.

  Commit c09850992 may have made this failure more probable by applying
  the conflicting-segment tests more often.  Hence, back-patch to all
  supported branches, as that was.

  Discussion: https://postgr.es/m/22224.1557340366@sss.pgh.pa.us
* Improve and fix some error handling for REINDEX INDEX/TABLE CONCURRENTLY (Michael Paquier, 2019-05-10)

  This improves the user experience when it comes to restricting several
  flavors of REINDEX CONCURRENTLY.  First, for INDEX, remove a
  restriction on shared relations, as we already check for catalog
  relations.  Then, for TABLE, add a proper error message when attempting
  to run the command on system catalogs.  The code path of CREATE INDEX
  CONCURRENTLY already complains about that, but if a REINDEX is issued
  then the error generated is confusing.

  While on it, add more tests to check restrictions on catalog indexes
  and on toast tables/indexes for catalogs.  Some error messages are
  improved, with wording suggestions coming from Tom Lane.

  Reported-by: Tom Lane
  Author: Michael Paquier
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/23694.1556806002@sss.pgh.pa.us
* Repair issues with faulty generation of merge-append plans. (Tom Lane, 2019-05-09)

  create_merge_append_plan failed to honor the CP_EXACT_TLIST flag: it
  would generate the expected targetlist but then it felt free to add
  resjunk sort targets to it.  This demonstrably leads to assertion
  failures in v11 and HEAD, and it's probably just accidental that we
  don't see the same in older branches.  I've not looked into whether
  there would be any real-world consequences in non-assert builds.
  In HEAD, create_append_plan has sprouted the same problem, so fix that
  too (although we do not have any test cases that seem able to reach
  that bug).  This is an oversight in commit 3fc6e2d7f which invented the
  CP_EXACT_TLIST flag, so back-patch to 9.6 where that came in.

  convert_subquery_pathkeys would create pathkeys for subquery output
  values if they match any EquivalenceClass known in the outer query and
  are available in the subquery's syntactic targetlist.  However, the
  second part of that condition is wrong, because such values might not
  appear in the subquery relation's reltarget list, which would mean that
  they couldn't be accessed above the level of the subquery scan.  We
  must check that they appear in the reltarget list, instead.  This can
  lead to dropping knowledge about the subquery's sort ordering, but I
  believe it's okay, because any sort key that the outer query actually
  has any interest in would appear in the reltarget list.  This second
  issue is of very long standing, but right now there's no evidence that
  it causes observable problems before 9.6, so I refrained from
  back-patching further than that.  We can revisit that choice if
  somebody finds a way to make it cause problems in older branches.
  (Developing useful test cases for these issues is really problematic;
  fixing convert_subquery_pathkeys removes the only known way to exhibit
  the create_merge_append_plan bug, and neither of the test cases added
  by this patch causes a problem in all branches, even when considering
  the issues separately.)

  The second issue explains bug #15795 from Suresh Kumar R ("could not
  find pathkey item to sort" with nested DISTINCT queries).  I stumbled
  across the first issue while investigating that.

  Discussion: https://postgr.es/m/15795-fadb56c8e44ee73c@postgresql.org
* postgres_fdw: Fix cost estimation for aggregate pushdown. (Etsuro Fujita, 2019-05-09)

  In commit 7012b132d0, which added support for aggregate pushdown in
  postgres_fdw, the expense of evaluating the final scan/join target
  computed by make_group_input_target() was not accounted for at all in
  costing aggregate pushdown paths with local statistics.  The right fix
  for this would be to have a separate upper stage to adjust the final
  scan/join relation (see comments for apply_scanjoin_target_to_paths());
  but for now, fix by adding the tlist eval cost when costing aggregate
  pushdown paths with local statistics.

  Apply this to HEAD only to avoid destabilizing existing plan choices.

  Author: Etsuro Fujita
  Reviewed-By: Antonin Houska
  Discussion: https://postgr.es/m/5C66A056.60007%40lab.ntt.co.jp
* Fix SxactGlobalXmin tracking. (Thomas Munro, 2019-05-09)

  Commit bb16aba50 broke the code that maintains SxactGlobalXmin.  It
  could get stuck when a well-timed READ ONLY transaction runs.  If
  SxactGlobalXmin stops advancing, transactions on the
  FinishedSerializableTransactions queue are never cleaned up, so
  resources are effectively leaked.  Revert that hunk of the commit.

  Also revert another similar hunk that was probably harmless, but
  unnecessary and unjustified, relating to the DOOMED flag in case of
  RO_SAFE early release.

  Author: Thomas Munro
  Reported-by: Tom Lane
  Discussion: https://postgr.es/m/16170.1557251214%40sss.pgh.pa.us
* Clean up the behavior and API of catalog.c's is-catalog-relation tests. (Tom Lane, 2019-05-08)

  The right way for IsCatalogRelation/Class to behave is to return true
  for OIDs less than FirstBootstrapObjectId (not FirstNormalObjectId),
  without any of the ad-hoc fooling around with schema membership.

  The previous code was wrong because (1) it claimed that
  information_schema tables were not catalog relations but their toast
  tables were, which is silly; and (2) if you dropped and recreated
  information_schema, which is a supported operation, the behavior
  changed.  That's even sillier.  With this definition, "catalog
  relations" are exactly the ones traceable to the postgres.bki data,
  which seems like what we want.

  With this simplification, we don't actually need access to the pg_class
  tuple to identify a catalog relation; we only need its OID.  Hence,
  replace IsCatalogClass with "IsCatalogRelationOid(oid)".  But keep
  IsCatalogRelation as a convenience function.

  This allows fixing some arguably-wrong semantics in contrib/sepgsql and
  ReindexRelationConcurrently, which were using an IsSystemNamespace test
  where what they really should be using is IsCatalogRelationOid.  The
  previous coding failed to protect toast tables of system catalogs, and
  also was not on board with the general principle that user-created
  tables do not become catalogs just by virtue of being renamed into
  pg_catalog.  We can also get rid of a messy hack in
  ReindexMultipleTables.

  While we're at it, also rename IsSystemNamespace to IsCatalogNamespace,
  because the previous name invited confusion with the more expansive
  semantics used by IsSystemRelation/Class.  Also improve the comments in
  catalog.c.

  There are a few remaining places in replication-related code that are
  special-casing OIDs below FirstNormalObjectId.  I'm inclined to think
  those are wrong too, and if there should be any special case it should
  just extend to FirstBootstrapObjectId.  But first we need to debate
  whether a FOR ALL TABLES publication should include information_schema.

  Discussion: https://postgr.es/m/21697.1557092753@sss.pgh.pa.us
  Discussion: https://postgr.es/m/15150.1557257111@sss.pgh.pa.us
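  The simplified test boils down to a pure OID comparison; a sketch of
  the new predicate, modeled on the commit's description rather than the
  verbatim catalog.c code:

      /*
       * A relation is a catalog relation iff its OID was assigned from
       * postgres.bki data, i.e. is below FirstBootstrapObjectId.  No
       * schema lookup is needed, so no pg_class tuple is needed either.
       */
      bool
      IsCatalogRelationOid(Oid relid)
      {
          return (relid < (Oid) FirstBootstrapObjectId);
      }
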
* Remove obsolete nbtree split REDO routine comment. (Peter Geoghegan, 2019-05-08)

  Commit dd299df8189, which added suffix truncation to nbtree, simplified
  the WAL record format used by page splits.  It became necessary to
  explicitly WAL-log the new high key for the left half of a split in all
  cases, which relieved the REDO routine from having to reconstruct a new
  high key for the left page by copying the first item from the right
  page.  Remove a comment that referred to the previous practice.
* Fix error messages (Alvaro Herrera, 2019-05-08)

  Some messages related to foreign servers were reporting the server name
  without quotes, or not at all; our style is to have all names be
  quoted, and the server name already appears quoted in a few other
  messages, so just add quotes and make them all consistent.

  Remove an extra "s" in other messages (typos introduced by myself in
  f56f8f8da6af).
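  For illustration, the message style rule at work (a hypothetical
  message showing the quoted-name convention, not the exact strings
  touched by this commit):

      /* Before: server name unquoted. */
      ereport(ERROR,
              (errcode(ERRCODE_UNDEFINED_OBJECT),
               errmsg("server %s does not exist", servername)));

      /* After: all object names are quoted, matching the other messages. */
      ereport(ERROR,
              (errcode(ERRCODE_UNDEFINED_OBJECT),
               errmsg("server \"%s\" does not exist", servername)));
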
* Fix table lock levels for REINDEX INDEX CONCURRENTLY (Peter Eisentraut, 2019-05-08)

  REINDEX CONCURRENTLY locks tables with ShareUpdateExclusiveLock rather
  than the ShareLock used by a plain REINDEX.  However,
  RangeVarCallbackForReindexIndex() was not updated for that and still
  used the ShareLock only.  This would lead to lock upgrades later,
  leading to possible deadlocks.

  Reported-by: Andres Freund <andres@anarazel.de>
  Reviewed-by: Andres Freund <andres@anarazel.de>
  Reviewed-by: Michael Paquier <michael@paquier.xyz>
  Discussion: https://www.postgresql.org/message-id/flat/20190430151735.wi52sxjvxsjvaxxt%40alap3.anarazel.de
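  The essence of the fix is to pick the table lock level from the
  concurrent flag inside the range-var callback, so the later relation
  open takes the same lock and no upgrade is needed.  A sketch (the flag
  plumbing into the callback is simplified):

      /* In RangeVarCallbackForReindexIndex(), roughly: */
      LOCKMODE    table_lockmode;

      /*
       * Match the lock that the REINDEX itself will take on the table:
       * upgrading from ShareLock to ShareUpdateExclusiveLock afterwards
       * is a classic deadlock recipe.
       */
      table_lockmode = concurrent ? ShareUpdateExclusiveLock : ShareLock;
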
* Remove leftover reference to old "flat file" mechanism in a comment. (Heikki Linnakangas, 2019-05-08)

  The flat file mechanism was removed in PostgreSQL 9.0.
* Correct obsolete nbtsort.c minimum key comment. (Peter Geoghegan, 2019-05-07)

  It is no longer possible under any circumstances for nbtree code to
  reconstruct a strict lower bound key (parent page's pivot tuple key)
  for a right sibling page by retrieving the first item in the right
  sibling page.
* Improve error reporting in jsonpath (Alexander Korotkov, 2019-05-08)

  This commit contains multiple improvements to error reporting in
  jsonpath, including but not limited to getting rid of the following
  things:

   * definition of error messages in macros,
   * errdetail() when valuable information could fit into errmsg(),
   * the word "singleton", which is not properly explained anywhere,
   * line breaks in error messages.

  Reported-by: Tom Lane
  Discussion: https://postgr.es/m/14890.1555523005%40sss.pgh.pa.us
  Author: Alexander Korotkov
  Reviewed-by: Tom Lane
* Add TRUNCATE parameter to VACUUM. (Fujii Masao, 2019-05-08)

  This commit adds a new parameter to the VACUUM command, TRUNCATE, which
  specifies that VACUUM should attempt to truncate off any empty pages at
  the end of the table and allow the disk space for the truncated pages
  to be returned to the operating system.

  This parameter, if specified, overrides the vacuum_truncate reloption.
  If neither the reloption nor the VACUUM option is used, the default is
  true, as before.

  Author: Fujii Masao
  Reviewed-by: Julien Rouhaud, Masahiko Sawada
  Discussion: https://postgr.es/m/CAD21AoD+qtrSDL=GSma4Wd3kLYLeRC0hPna-YAdkDeV4z156vg@mail.gmail.com