path: root/src/backend
* Fix typo. (Thomas Munro, 2019-03-29)

  Author: Masahiko Sawada
* Fix deserialization of pg_mcv_list values (Tomas Vondra, 2019-03-28)

  There were multiple issues in deserialization of pg_mcv_list values.

  Firstly, the data is loaded from syscache, but the deserialization was
  performed after ReleaseSysCache(), at which point the data might have
  already disappeared. Fixed by moving the calls in statext_mcv_load, and
  using the same NULL-handling code as existing stats.

  Secondly, the deserialized representation used pointers into the
  serialized representation. But that is also unsafe, because the data
  may disappear at any time. Fixed by reworking and simplifying the
  deserialization code to always copy all the data.

  And thirdly, when deserializing values for types passed by value, the
  code simply did memcpy(d,s,typlen), which however does not work on
  bigendian machines. Fixed by using fetch_att/store_att_byval.
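  A minimal sketch of the endianness issue described above, assuming the
  usual access/tupmacs.h macros; both helper functions here are invented
  for illustration, not code from the commit:

      #include "postgres.h"
      #include "access/tupmacs.h"

      /* Buggy pattern: fills the low-address bytes of the Datum, which
       * are the high-order bytes on a big-endian machine. */
      static Datum
      copy_byval_wrong(const char *src, int typlen)
      {
          Datum       d = 0;

          memcpy(&d, src, typlen);
          return d;
      }

      /* Fixed pattern: fetch_att expands to a correctly-sized load for
       * 1-, 2-, 4- and 8-byte by-value types on any byte order. */
      static Datum
      copy_byval_right(const char *src, int typlen)
      {
          return fetch_att(src, true, typlen);
      }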
* Use FullTransactionId for the transaction stack. (Thomas Munro, 2019-03-28)

  Provide GetTopFullTransactionId() and GetCurrentFullTransactionId().
  The intended users of these interfaces are access methods that use xids
  for visibility checks but don't want to have to go back and "freeze"
  existing references some time later before the 32 bit xid counter wraps
  around.

  Use a new struct to serialize the transaction state for parallel query,
  because FullTransactionId doesn't fit into the previous serialization
  scheme very well.

  Author: Thomas Munro
  Reviewed-by: Heikki Linnakangas
  Discussion: https://postgr.es/m/CAA4eK1%2BMv%2Bmb0HFfWM9Srtc6MVe160WFurXV68iAFMcagRZ0dQ%40mail.gmail.com
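  A minimal sketch of the intended usage, assuming the xact.h/transam.h
  interfaces this commit adds; the Demo* and demo_* names are invented
  for illustration:

      #include "postgres.h"
      #include "access/transam.h"
      #include "access/xact.h"

      /* Hypothetical AM-private tuple metadata; not from the commit. */
      typedef struct DemoTupleHeader
      {
          FullTransactionId inserted_fxid;    /* 64 bits, never wraps */
      } DemoTupleHeader;

      static void
      demo_stamp(DemoTupleHeader *hdr)
      {
          /* Full xid of the top-level transaction; safe to store
           * durably, with no later "freeze" pass needed. */
          hdr->inserted_fxid = GetTopFullTransactionId();
      }

      static bool
      demo_inserted_before(const DemoTupleHeader *hdr,
                           FullTransactionId horizon)
      {
          /* Plain 64-bit comparison; no wraparound-aware arithmetic. */
          return FullTransactionIdPrecedes(hdr->inserted_fxid, horizon);
      }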
* Add basic infrastructure for 64 bit transaction IDs. (Thomas Munro, 2019-03-28)

  Instead of inferring epoch progress from xids and checkpoints,
  introduce a 64 bit FullTransactionId type and use it to track xid
  generation. This fixes an unlikely bug where the epoch is reported
  incorrectly if the range of active xids wraps around more than once
  between checkpoints.

  The only user-visible effect of this commit is to correct the epoch
  used by txid_current() and txid_status(), also visible with
  pg_controldata, in those rare circumstances. It also creates some basic
  infrastructure so that later patches can use 64 bit transaction IDs in
  more places.

  The new type is a struct that we pass by value, as a form of strong
  typedef. This prevents the sort of accidental confusion between
  TransactionId and FullTransactionId that would be possible if we were
  to use a plain old uint64.

  Author: Thomas Munro
  Reported-by: Amit Kapila
  Reviewed-by: Andres Freund, Tom Lane, Heikki Linnakangas
  Discussion: https://postgr.es/m/CAA4eK1%2BMv%2Bmb0HFfWM9Srtc6MVe160WFurXV68iAFMcagRZ0dQ%40mail.gmail.com
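  The strong-typedef pattern the message describes looks roughly like
  this - a simplified sketch of what access/transam.h defines, not a
  verbatim copy:

      typedef struct FullTransactionId
      {
          uint64      value;          /* (epoch << 32) | 32-bit xid */
      } FullTransactionId;

      /* Passing the struct by value means the compiler rejects any
       * accidental mixing with a bare TransactionId or uint64. */
      #define EpochFromFullTransactionId(x) ((uint32) ((x).value >> 32))
      #define XidFromFullTransactionId(x)   ((TransactionId) (x).value)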
* tableam: Support for an index build's initial table scan(s). (Andres Freund, 2019-03-27)

  To support building indexes over tables of different AMs, the scans to
  do so need to be routed through the table AM. While moving a fair
  amount of code, nearly all the changes are just moving code to below a
  callback.

  Currently the range based interface wouldn't make much sense for non
  block based table AMs. But that seems acceptable for now.

  Author: Andres Freund
  Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
* Minor improvements for the multivariate MCV lists (Tomas Vondra, 2019-03-27)

  The MCV build should always call get_mincount_for_mcv_list(), as there
  is no other logic to decide whether the MCV list represents all the
  data. So just remove the (ngroups > nitems) condition.

  Also, when building MCV lists, the number of items was limited by the
  statistics target (i.e. up to 10000). But when deserializing the MCV
  list, a different value (8192) was used to check the input, causing an
  error. Simply ensure that the same value is used in both places.

  This should have been included in 7300a69950, but I forgot to include
  it in that commit.
* Add support for multivariate MCV lists (Tomas Vondra, 2019-03-27)

  Introduce a third extended statistic type, supported by the CREATE
  STATISTICS command - MCV lists, a generalization of the statistic
  already built and used for individual columns.

  Compared to the already supported types (n-distinct coefficients and
  functional dependencies), MCV lists are more complex, include column
  values and allow estimation of a much wider range of common clauses
  (equality and inequality conditions, IS NULL, IS NOT NULL etc.).

  Similarly to the other types, a new pseudo-type (pg_mcv_list) is used.

  Author: Tomas Vondra
  Reviewed-by: Dean Rasheed, David Rowley, Mark Dilger, Alvaro Herrera
  Discussion: https://postgr.es/m/dfdac334-9cf2-2597-fb27-f0fb3753f435@2ndquadrant.com
* Avoid passing query tlist around separately from root->processed_tlist. (Tom Lane, 2019-03-27)

  In the dim past, the planner kept the fully-processed version of the
  query targetlist (the result of preprocess_targetlist) in
  grouping_planner's local variable "tlist", and only grudgingly passed
  it to individual other routines as needed. Later we discovered a need
  to still have it available after grouping_planner finishes, and
  invented the root->processed_tlist field for that purpose, but it
  wasn't used internally to grouping_planner; the tlist was still being
  passed around separately in the same places as before.

  Now comes a proposed patch to allow appendrel expansion to add entries
  to the processed tlist, well after preprocess_targetlist has finished
  its work. To avoid having to pass around the tlist explicitly, it's
  proposed to allow appendrel expansion to modify root->processed_tlist.
  That makes aliasing the tlist with assorted parameters and local
  variables really scary. It would accidentally work as long as the
  tlist is initially nonempty, because then the List header won't move
  around, but it's not exactly hard to think of ways for that to break.
  Aliased values are poor programming practice anyway.

  Hence, get rid of local variables and parameters that can be
  identified with root->processed_tlist, in favor of just using that
  field directly. And adjust comments to match. (Some of the new
  comments speak as though it's already possible for appendrel expansion
  to modify the tlist; that's not true yet, but will happen in a later
  patch.)

  Discussion: https://postgr.es/m/9d7c5112-cb99-6a47-d3be-cf1ee6862a1d@lab.ntt.co.jp
* Suppress uninitialized-variable warning. (Tom Lane, 2019-03-27)

  Apparently Andres' compiler is smart enough to see that hpage must be
  initialized before use ... but mine isn't.
* Improve error handling of column references in expression transformation (Michael Paquier, 2019-03-27)

  Column references are not allowed in default expressions and partition
  bound expressions, and are restricted as such once the transformation
  of their expressions is done. However, trying to use more complex
  column references can lead to confusing error messages. For example,
  trying to use a two-field column reference name for default
  expressions and partition bounds leads to "missing FROM-clause entry
  for table", which makes no sense in their respective context.

  In order to make the errors generated more useful, this commit adds
  more verbose messages when transforming column references depending on
  the context. This has a little consequence though: for example an
  expression using an aggregate with a column reference as argument
  would cause an error to be generated for the column reference, while
  the aggregate was the problem reported before this commit, because
  column references get transformed first.

  The confusion has existed for default expressions for a long time, and
  the problem is new as of v12 for partition bounds. Still, per the lack
  of complaints on the matter, no backpatch is done.

  The patch has been written by Amit Langote and me, and Tom Lane has
  provided the improvement of the documentation for default expressions
  on the CREATE TABLE page.

  Author: Amit Langote, Michael Paquier
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/20190326020853.GM2558@paquier.xyz
* Fix off-by-one error in txid_status(). (Thomas Munro, 2019-03-27)

  The transaction ID returned by GetNextXidAndEpoch() is in the future,
  so we can't attempt to access its status or we might try to read a
  CLOG page that doesn't exist. The > vs >= confusion probably stemmed
  from the choice of a variable name containing the word "last" instead
  of "next", so fix that too.

  Back-patch to 10 where the function arrived.

  Author: Thomas Munro
  Discussion: https://postgr.es/m/CA%2BhUKG%2Buua_BV5cyfsioKVN2d61Lukg28ECsWTXKvh%3DBtN2DPA%40mail.gmail.com
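  A minimal sketch of the boundary fix, assuming the usual transam.h
  helpers; the function name here is invented, not the patched code:

      #include "postgres.h"
      #include "access/transam.h"

      static bool
      xid_status_is_lookupable(TransactionId xid, TransactionId next_xid)
      {
          /* next_xid has not been assigned yet, so it must be excluded:
           * strict "precedes" (<), not "precedes or equals" (<=). */
          return TransactionIdPrecedes(xid, next_xid);
      }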
* Switch some palloc/memset calls to palloc0 (Michael Paquier, 2019-03-27)

  Some code paths have been doing an allocation followed by an immediate
  memset() to initialize the allocated area with zeros. This is a bit of
  overkill as there are already interfaces to do both things in one
  call.

  Author: Daniel Gustafsson
  Reviewed-by: Michael Paquier
  Discussion: https://postgr.es/m/vN0OodBPkKs7g2Z1uyk3CUEmhdtspHgYCImhlmSxv1Xn6nY1ZnaaGHL8EWUIQ-NEv36tyc4G5-uA3UXUF2l4sFXtK_EQgLN1hcgunlFVKhA=@yesql.se
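  The before/after pattern, sketched with the standard memory-context
  allocators (the demo function is invented):

      #include "postgres.h"

      static void
      demo(void)
      {
          char       *buf;

          /* Before: two calls, allocate then zero. */
          buf = palloc(128);
          memset(buf, 0, 128);
          pfree(buf);

          /* After: palloc0 zeroes the allocation in one call. */
          buf = palloc0(128);
          pfree(buf);
      }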
* Track unowned relations in doubly-linked list (Tomas Vondra, 2019-03-27)

  Relations dropped in a single transaction are tracked in a list of
  unowned relations. With a large number of dropped relations this
  resulted in poor performance at the end of a transaction, when the
  relations are removed from the singly linked list one by one.

  Commit b4166911 attempted to address this issue (particularly when it
  happens during recovery) by removing the relations in a reverse order,
  resulting in O(1) lookups in the list of unowned relations. This did
  not work reliably, though, and it was possible to trigger the O(N^2)
  behavior in various ways.

  Instead of trying to remove the relations in a specific order with
  respect to the linked list, which seems rather fragile, switch to a
  regular doubly linked list. That allows us to remove relations cheaply
  no matter where in the list they are.

  As b4166911 was a bugfix, backpatched to all supported versions, do
  the same thing here.

  Reviewed-by: Alvaro Herrera
  Discussion: https://www.postgresql.org/message-id/flat/80c27103-99e4-1d0c-642c-d9f3b94aaa0a%402ndquadrant.com
  Backpatch-through: 9.4
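  A hedged sketch using PostgreSQL's embedded doubly-linked list from
  lib/ilist.h, the kind of structure the commit switches to; the element
  type and function names here are invented:

      #include "postgres.h"
      #include "lib/ilist.h"

      /* Hypothetical element type; the real code tracks SMgrRelations. */
      typedef struct UnownedRel
      {
          dlist_node  node;       /* embedded doubly-linked list links */
          Oid         relid;
      } UnownedRel;

      static dlist_head unowned_rels = DLIST_STATIC_INIT(unowned_rels);

      static void
      track_unowned(UnownedRel *rel)
      {
          dlist_push_tail(&unowned_rels, &rel->node);
      }

      static void
      untrack_unowned(UnownedRel *rel)
      {
          /* O(1) no matter where the element sits: no walk to find the
           * predecessor, which is what made the singly-linked version
           * degrade to O(N^2). */
          dlist_delete(&rel->node);
      }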
* Compute XID horizon for page level index vacuum on primary. (Andres Freund, 2019-03-26)

  Previously the xid horizon was only computed during WAL replay. That
  had two major problems:

  1) It relied on knowing what the table pointed to looks like. That was
     easy enough before the introduction of tableam (we knew it had to
     be heap, although some trickery around logging the heap
     relfilenodes was required). But to properly handle table AMs we
     need per-database catalog access to look up the AM handler, which
     recovery doesn't allow.

  2) Not knowing the xid horizon also makes it hard to support logical
     decoding on standbys. When on a catalog table, we need to be able
     to conflict with slots that have an xid horizon that's too old. But
     computing the horizon by visiting the heap only works once
     consistency is reached, while we always need to be able to detect
     conflicts.

  There's also a secondary problem, in that the current method performs
  redundant work on every standby. But that's counterbalanced by
  potentially computing the value when not necessary (either because
  there's no standby, or because there are no connected backends).

  Solve 1) and 2) by moving computation of the xid horizon to the
  primary and by involving tableam in the computation of the horizon.

  To address the potentially increased overhead, increase the efficiency
  of the xid horizon computation for heap by sorting the tids, and
  eliminating redundant buffer accesses. When prefetching is available,
  additionally perform prefetching of buffers. As this is more of a
  maintenance task, rather than something routinely done in every read
  only query, we add an arbitrary 10 to the effective concurrency -
  thereby using IO concurrency, when not globally enabled. That's
  possibly not the perfect formula, but seems good enough for now.

  Bumps WAL format, as latestRemovedXid is now part of the records, and
  the heap's relfilenode isn't anymore.

  Author: Andres Freund, Amit Khandekar, Robert Haas
  Reviewed-By: Robert Haas
  Discussion:
    https://postgr.es/m/20181212204154.nsxf3gzqv3gesl32@alap3.anarazel.de
    https://postgr.es/m/20181214014235.dal5ogljs3bmlq44@alap3.anarazel.de
    https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
* Fix partitioned index creation bug with dropped columns (Alvaro Herrera, 2019-03-26)

  ALTER INDEX .. ATTACH PARTITION fails if the partitioned table where
  the index is defined contains more dropped columns than its partition,
  with this message:
    ERROR: incorrect attribute map
  The cause was that one caller of CompareIndexInfo was passing the
  number of attributes of the partition rather than the parent, which
  confused the length check. Repair.

  This can cause pg_upgrade to fail when used on such a database. Leave
  some more objects around after regression tests, so that the case is
  detected by the pg_upgrade test suite.

  Remove some spurious empty lines noticed while looking for other cases
  of the same problem.

  Discussion: https://postgr.es/m/20190326213924.GA2322@alvherre.pgsql
* Build "other rels" of appendrel baserels in a separate step.Tom Lane2019-03-26
| | | | | | | | | | | | | | | | | | | | | | | Up to now, otherrel RelOptInfos were built at the same time as baserel RelOptInfos, thanks to recursion in build_simple_rel(). However, nothing in query_planner's preprocessing cares at all about otherrels, only baserels, so we don't really need to build them until just before we enter make_one_rel. This has two benefits: * create_lateral_join_info did a lot of extra work to propagate lateral-reference information from parents to the correct children. But if we delay creation of the children till after that, it's trivial (and much harder to break, too). * Since we have all the restriction quals correctly assigned to parent appendrels by this point, it'll be possible to do plan-time pruning and never make child RelOptInfos at all for partitions that can be pruned away. That's not done here, but will be later on. Amit Langote, reviewed at various times by Dilip Kumar, Jesper Pedersen, Yoshikazu Imai, and David Rowley Discussion: https://postgr.es/m/9d7c5112-cb99-6a47-d3be-cf1ee6862a1d@lab.ntt.co.jp
* Fix oversight in data-type change for autovacuum_vacuum_cost_delay. (Tom Lane, 2019-03-26)

  Commit caf626b2c missed that the relevant reloptions entry needs to be
  moved from the intRelOpts[] array to realRelOpts[]. Somewhat
  surprisingly, it seems to work anyway, perhaps because the desired
  default and limit values are all integers. We ought to have either a
  simpler data structure or better cross-checking here, but that's for
  another patch.

  Nikolay Shaplov
  Discussion: https://postgr.es/m/4861742.12LTaSB3sv@x200m
* Get rid of duplicate child RTE for a partitioned table. (Tom Lane, 2019-03-26)

  We've been creating duplicate RTEs for partitioned tables just because
  we do so for regular inheritance parent tables. But unlike
  regular-inheritance parents, which are themselves regular tables and
  thus need to be scanned, partitioned tables don't need the extra RTE.

  This makes the conditions for building a child RTE the same as those
  for building an AppendRelInfo, allowing minor simplification in
  expand_single_inheritance_child. Since the planner's actual processing
  is driven off the AppendRelInfo list, nothing much changes beyond
  that; we just have one fewer useless RTE entry.

  Amit Langote, reviewed and hacked a bit by me

  Discussion: https://postgr.es/m/9d7c5112-cb99-6a47-d3be-cf1ee6862a1d@lab.ntt.co.jp
* Fix misplaced const (Peter Eisentraut, 2019-03-26)

  These instances were apparently trying to carry the const qualifier
  from the arguments through the complex casts, but for that the const
  qualifier was misplaced.
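  To illustrate the distinction (an invented example, not the patched
  code):

      const char *src = "abc";

      /* Misplaced: in "char * const", the qualifier binds to the
       * pointer itself, so the cast discards const from the
       * pointed-to characters. */
      char       *bad = (char * const) src;

      /* Intended: in "const char *", the qualifier stays on the
       * pointed-to characters through the cast. */
      const char *good = (const char *) src;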
* Remove heap_hot_search(). (Andres Freund, 2019-03-25)

  After 71bdc99d0d7, "tableam: Add helper for indexes to check if a
  corresponding table tuple exists.", there's no in-core user left. As
  there's unlikely to be an external user, and such an external user
  could easily be adjusted to use table_index_fetch_tuple_check(),
  remove heap_hot_search().

  Per complaint from Peter Geoghegan

  Author: Andres Freund
  Discussion: https://postgr.es/m/CAH2-Wzn0Oq4ftJrTqRAsWy2WGjv0QrJcwoZ+yqWsF_Z5vjUBFw@mail.gmail.com
* Fix crash when using partition bound expressions (Michael Paquier, 2019-03-26)

  Since 7c079d7, partition bounds are able to use generalized expression
  syntax when processed, treating "minvalue" and "maxvalue" as specific
  cases as they get passed down for transformation as column references.
  The checks for infinite bounds in range expressions have been lax
  though, causing crashes when trying to use column reference names with
  more than one field. Here is an example causing a crash:
    CREATE TABLE list_parted (a int) PARTITION BY LIST (a);
    CREATE TABLE part_list_crash PARTITION OF list_parted
      FOR VALUES IN (somename.somename);

  Note that the creation of the second relation should fail as partition
  bounds cannot have column references in their expressions, so when an
  expression is found which does not match the expected infinite bounds,
  this commit lets the generic transformation machinery check it
  afterwards. The error message generated in this case also references a
  missing RTE, which is confusing. This problem will be treated
  separately, as it has also impacted default expressions for some time;
  for now only the cases where a crash can happen are fixed.

  While on it, extend the set of regression tests in place for list
  partition bounds and add an extra set for range partition bounds.

  Reported-by: Alexander Lakhin
  Author: Michael Paquier
  Reviewed-by: Amit Langote
  Discussion: https://postgr.es/m/15668-0377b1981aa1a393@postgresql.org
* tableam: Add table_get_latest_tid, to wrap heap_get_latest_tid. (Andres Freund, 2019-03-25)

  This primarily is to allow WHERE CURRENT OF to continue to work as it
  currently does. It's not clear to me that these semantics make sense
  for every AM, but it works for the in-core heap and the out-of-core
  zheap. We can refine it further at a later point if necessary.

  Author: Andres Freund
  Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
* tableam: Add helper for indexes to check if a corresponding table tuple exists. (Andres Freund, 2019-03-25)

  This is, likely exclusively, useful to verify that conflicts detected
  in a unique index are with live tuples, rather than dead ones.

  Author: Andres Freund
  Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
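  A hedged sketch of how a caller might use the new helper,
  table_index_fetch_tuple_check(); the demo_ wrapper is invented:

      #include "postgres.h"
      #include "access/tableam.h"

      static bool
      demo_conflict_is_live(Relation rel, ItemPointer tid,
                            Snapshot snapshot)
      {
          bool        all_dead = false;

          /* Returns true if some version of the tuple at tid satisfies
           * the snapshot; all_dead is set if every version is dead to
           * all transactions, so the index entry could be killed. */
          return table_index_fetch_tuple_check(rel, tid, snapshot,
                                               &all_dead);
      }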
* Improve planner's selectivity estimates for inequalities on CTID. (Tom Lane, 2019-03-25)

  We were getting just DEFAULT_INEQ_SEL for comparisons such as
  "ctid >= constant", but it's possible to do a lot better if we don't
  mind some assumptions about the table's tuple density being reasonably
  uniform. There are already assumptions much like that elsewhere in the
  planner, so that hardly seems like much of an objection.

  Extracted from a patch set that also proposes to introduce a special
  executor node type for such queries. Not sure if that's going to make
  it into v12, but improving the selectivity estimate is useful
  independently of that.

  Edmund Horner, reviewed by David Rowley

  Discussion: https://postgr.es/m/CAMyN-kB-nFTkF=VA_JPwFNo08S0d-Yk0F741S2B7LDmYAi8eyA@mail.gmail.com
* Suppress Append and MergeAppend plan nodes that have a single child. (Tom Lane, 2019-03-25)

  If there's only one child relation, the Append or MergeAppend isn't
  doing anything useful, and can be elided. It does have a purpose
  during planning though, which is to serve as a buffer between parent
  and child Var numbering. Therefore we keep it all the way through to
  setrefs.c, and get rid of it only after fixing references in the plan
  level(s) above it.

  This works largely the same as setrefs.c's ancient hack to get rid of
  no-op SubqueryScan nodes, and can even share some code with that.

  Note the change to make setrefs.c use apply_tlist_labeling rather than
  ad-hoc code. This has the effect of propagating the child's resjunk
  and ressortgroupref labels, which formerly weren't propagated when
  removing a SubqueryScan. Doing that is demonstrably necessary for the
  [Merge]Append cases, and seems harmless for SubqueryScan, if only
  because trivial_subqueryscan is afraid to collapse cases where the
  resjunk marking differs. (I suspect that restriction could now be
  removed, though it's unclear that it'd make any new matches possible,
  since the outer query can't have references to a child resjunk
  column.)

  David Rowley, reviewed by Alvaro Herrera and Tomas Vondra

  Discussion: https://postgr.es/m/CAKJS1f_7u8ATyJ1JGTMHFoKDvZdeF-iEBhs+sM_SXowOr9cArg@mail.gmail.com
* Add "split after new tuple" nbtree optimization.Peter Geoghegan2019-03-25
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Add additional heuristics to the algorithm for locating an optimal split location. New logic identifies localized monotonically increasing values in indexes with multiple columns. When this insertion pattern is detected, page splits split just after the new item that provoked a page split (or apply leaf fillfactor in the style of a rightmost page split). This optimization is a variation of the long established leaf fillfactor optimization used during rightmost page splits. 50/50 page splits are only appropriate with a pattern of truly random insertions, where the average space utilization ends up at 65% - 70%. Without this patch, affected cases have leaf pages that are no more than about 50% full on average. Future insertions can never make use of the free space left behind. With this patch, affected cases have leaf pages that are about 90% full on average (assuming a fillfactor of 90). Localized monotonically increasing insertion patterns are presumed to be fairly common in real-world applications. There is a fair amount of anecdotal evidence for this. Both pg_depend system catalog indexes (pg_depend_depender_index and pg_depend_reference_index) are at least 20% smaller after the regression tests are run when the optimization is available. Furthermore, many of the indexes created by a fair use implementation of TPC-C for Postgres are consistently about 40% smaller when the optimization is available. Note that even pg_upgrade'd v3 indexes make use of this optimization. Author: Peter Geoghegan Reviewed-By: Heikki Linnakangas Discussion: https://postgr.es/m/CAH2-WzkpKeZJrXvR_p7VSY1b-s85E3gHyTbZQzR0BkJ5LrWF_A@mail.gmail.com
* Further code review for new integerset code. (Tom Lane, 2019-03-25)

  Mostly cosmetic adjustments, but I added a more reliable method of
  detecting whether an iteration is in progress.
* Fix use of wrong datatype with sizeof(). (Robert Haas, 2019-03-25)

  OID and int are the same size, but they are not the same thing.

  David Rowley
  Discussion: http://postgr.es/m/CAKJS1f_MhS++XngkTvWL9X1v8M5t-0N0B-R465yHQY=TmNV0Ew@mail.gmail.com
* Add progress reporting for CLUSTER and VACUUM FULL. (Robert Haas, 2019-03-25)

  This uses the same progress reporting infrastructure added in commit
  c16dc1aca5e01e6acaadfcf38f5fc964a381dc62 and extends it to these
  additional cases. We lack the ability to track the internal progress
  of sorts and index builds so the information reported is
  coarse-grained for some parts of the operation, but it still seems
  like a significant improvement over having nothing at all.

  Tatsuro Yamada, reviewed by Thomas Munro, Masahiko Sawada, Michael
  Paquier, Jeff Janes, Alvaro Herrera, Rafia Sabih, and by me. A fair
  amount of polishing also by me.

  Discussion: http://postgr.es/m/59A77072.3090401@lab.ntt.co.jp
* Get rid of backtracking in jsonpath_scan.l (Alexander Korotkov, 2019-03-25)

  Non-backtracking flex parsers work faster than backtracking ones. So,
  this commit gets rid of backtracking in jsonpath_scan.l. That required
  explicit handling of some cases as well as manual backtracking for
  some cases. More regression tests for numerics are added.

  Discussion: https://mail.google.com/mail/u/0?ik=a20b091faa&view=om&permmsgid=msg-f%3A1628425344167939063
  Author: John Naylor, Nikita Glukhov, Alexander Korotkov
* Cosmetic changes for jsonpath_gram.y and jsonpath_scan.l (Alexander Korotkov, 2019-03-25)

  This commit includes formatting improvements, renamings and comments.
  Also, it makes jsonpath_scan.l more uniform with our other lexers.
  Firstly, state names are renamed to shorter alternatives. Secondly,
  the <INITIAL> prefix is removed from the rules. The corresponding
  rules are moved to the tail, so they would work only in the initial
  state anyway.

  Author: Alexander Korotkov
  Reviewed-by: John Naylor
* Clean up the Simple-8b encoder code. (Heikki Linnakangas, 2019-03-25)

  Coverity complained that simple8b_encode() might read beyond the end
  of the 'diffs' array, in the loop to encode the integers. That was a
  false positive, because we never get into the loop in modes 0 or 1,
  and the array is large enough for all the other modes. But I admit
  it's very subtle, so it's not surprising that Coverity didn't see it,
  and it's not very obvious to humans either. Refactor it, so that the
  second loop re-computes the differences, instead of carrying them over
  from the first loop in the 'diffs' array. This way, the 'diffs' array
  is not needed anymore. It makes no measurable difference in
  performance, and seems more straightforward this way.

  Also, improve the comments in simple8b_encode(): fix the comment about
  its return value that was flat-out wrong, and explain the condition
  when it returns EMPTY_CODEWORD better.

  In passing, move the 'selector' from the codeword's low bits to the
  high bits. It doesn't matter much, but looking at the original paper,
  and googling around for other Simple-8b implementations, that's how
  it's usually done.

  Per Coverity, and Tom Lane's report off-list.
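  The layout change, sketched; these helper functions are invented for
  illustration (the real encoder lives in src/backend/lib/integerset.c):

      #include "postgres.h"

      /* Selector now lives in the top 4 bits of the 64-bit codeword,
       * with the packed integers below it, as in the original paper. */
      static inline uint64
      make_codeword(uint64 selector, uint64 payload)
      {
          return (selector << 60) | payload;
      }

      static inline int
      codeword_selector(uint64 codeword)
      {
          return (int) (codeword >> 60);
      }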
* Add macro to cast away volatile without allowing changes to underlying type (Peter Eisentraut, 2019-03-25)

  This adds unvolatize(), which works just like unconstify() but for
  volatile.

  Discussion: https://www.postgresql.org/message-id/flat/7a5cbea7-b8df-e910-0f10-04014bcad701%402ndquadrant.com
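  In simplified form - ignoring the compile-time type check the real
  macro in c.h adds on supported compilers - the idea amounts to this
  (the demo_ names are invented):

      #include "postgres.h"

      /* Simplified sketch of the macro's shape, not its definition. */
      #define demo_unvolatize(underlying_type, expr) \
          ((underlying_type) (expr))

      static volatile int counter = 0;

      static int
      demo_read_counter(void)
      {
          /* Drop volatile without being able to change the type. */
          int        *p = demo_unvolatize(int *, &counter);

          return *p;
      }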
* tableam: Add and use table_fetch_row_version(). (Andres Freund, 2019-03-25)

  This is essentially the tableam version of heapam_fetch(), i.e.
  fetching a tuple identified by a tid, performing visibility checks.

  Note that this is different from table_index_fetch_tuple(), which is
  for index lookups. That function therefore has to handle a tid
  pointing to an earlier version of a tuple if the AM uses an
  optimization like heap's HOT. Add comments to that end.

  This commit removes the stats_relation argument from heap_fetch, as
  it's been unused for a long time.

  Author: Andres Freund
  Reviewed-By: Haribabu Kommi
  Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
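  A usage sketch, assuming the signature this commit adds to
  access/tableam.h; the demo_ wrapper is invented:

      #include "postgres.h"
      #include "access/tableam.h"

      static bool
      demo_fetch(Relation rel, ItemPointer tid, Snapshot snapshot,
                 TupleTableSlot *slot)
      {
          /* Fetch the row version identified by tid, subject to
           * snapshot visibility; index lookups instead go through
           * table_index_fetch_tuple(), which also handles HOT chains. */
          return table_fetch_row_version(rel, tid, snapshot, slot);
      }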
* tableam: Use in CREATE TABLE AS and CREATE MATERIALIZED VIEW. (Andres Freund, 2019-03-24)

  Previously those directly performed a heap_insert(). Use
  table_insert() instead.

  The input slot of those routines is not of the target relation - we
  could fix that by copying if necessary, but that'd not be beneficial
  for performance. As those codepaths don't access any AM specific tuple
  fields (say xmin/xmax), there's no need to use an AM specific slot.

  Author: Andres Freund
  Reviewed-By: Haribabu Kommi
  Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
* Sort dependent objects before reporting them in DROP ROLE. (Tom Lane, 2019-03-24)

  Commit 8aa9dd74b didn't quite finish the job in this area after all,
  because DROP ROLE has a code path distinct from DROP OWNED BY, and it
  was still reporting dependent objects in whatever order the index scan
  returned them in.

  Buildfarm experience shows that index ordering of equal-keyed objects
  is significantly less stable than before in the wake of using heap
  TIDs as tie-breakers. So if we try to hide the unstable ordering by
  suppressing DETAIL reports, we're just going to end up having to do
  that for every DROP that reports multiple objects. That's not great
  from a coverage or problem-detection standpoint, and it's something
  we'll inevitably forget in future patches, leading to more iterations
  of fixing-an-unstable-result. So let's just bite the bullet and sort
  here too.

  Discussion: https://postgr.es/m/E1h6eep-0001Mw-Vd@gemulon.postgresql.org
* Remove dead code from nbtsplitloc.c. (Peter Geoghegan, 2019-03-24)

  It doesn't make sense to consider the possibility that there will only
  be one candidate split point when choosing among split points to find
  the split with the lowest penalty. This is a vestige of an earlier
  version of the patch that became commit fab25024.

  Issue spotted while rereviewing coverage of the nbtree patch series
  using gcov.
* Make current_logfiles use permissions assigned to files in data directory (Michael Paquier, 2019-03-24)

  Since its introduction in 19dc233c, current_logfiles has been assigned
  the same permissions as a log file, which can be enforced with
  log_file_mode. This setup can lead to incompatibility problems with
  group access permissions as current_logfiles is not located in the log
  directory, but at the root of the data folder. Hence, if group
  permissions are used but log_file_mode is more restrictive, a backup
  with a user in the group having read access could fail even if the log
  directory is located outside of the data folder.

  Per discussion with the folks mentioned below, we have concluded that
  current_logfiles should not be treated as a log file as it only stores
  metadata related to log files, and that it should use the same
  permissions as all other files in the data directory. This solution
  has the merit of being simple and fixes all the interaction problems
  between group access and log_file_mode.

  Author: Haribabu Kommi
  Reviewed-by: Stephen Frost, Robert Haas, Tom Lane, Michael Paquier
  Discussion: https://postgr.es/m/CAJrrPGcEotF1P7AWoeQyD3Pqr-0xkQg_Herv98DjbaMj+naozw@mail.gmail.com
  Backpatch-through: 11, where group access has been added.
* Transaction chaining (Peter Eisentraut, 2019-03-24)

  Add command variants COMMIT AND CHAIN and ROLLBACK AND CHAIN, which
  start new transactions with the same transaction characteristics as
  the just finished one, per SQL standard.

  Support for transaction chaining in PL/pgSQL is also added. This
  functionality is especially useful when running COMMIT in a loop in
  PL/pgSQL.

  Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
  Discussion: https://www.postgresql.org/message-id/flat/28536681-324b-10dc-ade8-ab46f7645a5a@2ndquadrant.com
* tableam: Add tuple_{insert, delete, update, lock} and use. (Andres Freund, 2019-03-23)

  This adds new, required, table AM callbacks for insert/delete/update
  and lock_tuple. To be able to reasonably use those, the EvalPlanQual
  mechanism had to be adapted, moving more logic into the AM.

  Previously both delete/update/lock call-sites and the EPQ mechanism
  had to have awareness of the specific tuple format to be able to fetch
  the latest version of a tuple. Obviously that needs to be abstracted
  away.

  To do so, move the logic that finds the latest row version into the
  AM. lock_tuple has a new flag argument, TUPLE_LOCK_FLAG_FIND_LAST_VERSION,
  that forces it to lock the last version, rather than the current one.
  It'd have been possible to do so via a separate callback as well, but
  finding the last version usually also necessitates locking the newest
  version, making it sensible to combine the two. This replaces the
  previous use of EvalPlanQualFetch(). Additionally HeapTupleUpdated,
  which previously signaled either a concurrent update or delete, is now
  split into two, to avoid callers needing AM specific knowledge to
  differentiate.

  The move of finding the latest row version into tuple_lock means that
  encountering a row concurrently moved into another partition will now
  raise an error about "tuple to be locked" rather than "tuple to be
  updated/deleted" - which is accurate, as that always happens when
  locking rows. While possibly slightly less helpful for users, it seems
  like an acceptable trade-off.

  As part of this commit HTSU_Result has been renamed to TM_Result, and
  its members expanded to differentiate between updating and deleting.
  HeapUpdateFailureData has been renamed to TM_FailureData.

  The interface to speculative insertion is changed so nodeModifyTable.c
  does not have to set the speculative token itself anymore. Instead
  there's a version of tuple_insert, tuple_insert_speculative, that
  performs the speculative insertion (without requiring a flag to signal
  that fact), and the speculative insertion is either made permanent
  with table_complete_speculative(succeeded = true) or aborted with
  succeeded = false.

  Note that multi_insert is not yet routed through tableam, nor is COPY.
  Changing multi_insert requires changes to copy.c that are large enough
  to better be done separately. Similarly, although simpler, CREATE
  TABLE AS and CREATE MATERIALIZED VIEW are also only going to be
  adjusted in a later commit.

  Author: Andres Freund and Haribabu Kommi
  Discussion:
    https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
    https://postgr.es/m/20190313003903.nwvrxi7rw3ywhdel@alap3.anarazel.de
    https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
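  A hedged sketch of the TM_Updated/TM_Deleted split described above;
  demo_handle_result is invented, while the TM_* names come from
  tableam.h:

      #include "postgres.h"
      #include "access/tableam.h"

      static const char *
      demo_handle_result(TM_Result result)
      {
          switch (result)
          {
              case TM_Ok:
                  return "modified";
              case TM_Updated:
                  /* A newer version exists; EPQ can chase it via
                   * tuple_lock with TUPLE_LOCK_FLAG_FIND_LAST_VERSION. */
                  return "concurrently updated";
              case TM_Deleted:
                  /* No newer version exists; nothing to re-check. */
                  return "concurrently deleted";
              default:
                  return "other";
          }
      }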
* Remove inadequate check for duplicate "xml" PI. (Tom Lane, 2019-03-23)

  I failed to think about PIs starting with "xml". We don't really need
  this check at all, so just take it out. Oversight in commit 8d1dadb25
  et al.
* Accept XML documents when xmloption = content, as required by SQL:2006+. (Tom Lane, 2019-03-23)

  Previously we were using the SQL:2003 definition, which doesn't allow
  this, but that creates a serious dump/restore gotcha: there is no
  setting of xmloption that will allow all valid XML data. Hence, switch
  to the 2006 definition.

  Since libxml doesn't accept <!DOCTYPE> directives in the mode we use
  for CONTENT parsing, the implementation is to detect <!DOCTYPE> in the
  input and switch to DOCUMENT parsing mode. This should not cost much,
  because <!DOCTYPE> should be close to the front of the input if it's
  there at all. It's possible that this causes the error messages for
  malformed input to be slightly different than they were before, if
  said input includes <!DOCTYPE>; but that does not seem like a big
  problem.

  In passing, buy back a few cycles in parsing of large XML documents by
  not doing strlen() of the whole input in parse_xml_decl().

  Back-patch because dump/restore failures are not nice. This change
  shouldn't break any cases that worked before, so it seems safe to
  back-patch.

  Chapman Flack (revised a bit by me)

  Discussion: https://postgr.es/m/CAN-V+g-6JqUQEQZ55Q3toXEN6d5Ez5uvzL4VR+8KtvJKj31taw@mail.gmail.com
* Add nbtree high key "continuescan" optimization. (Peter Geoghegan, 2019-03-23)

  Teach nbtree forward index scans to check the high key before moving
  to the right sibling page in the hope of finding that it isn't
  actually necessary to do so. The new check may indicate that the scan
  definitely cannot find matching tuples to the right, ending the scan
  immediately. We already opportunistically force a similar
  "continuescan orientated" key check of the final non-pivot tuple when
  it's clear that it cannot be returned to the scan due to being
  dead-to-all. The new high key check is complementary.

  The new approach for forward scans is more effective than checking the
  final non-pivot tuple, especially with composite indexes and
  non-unique indexes. The improvements to the logic for picking a split
  point added by commit fab25024 make it likely that relatively
  dissimilar high keys will appear on a page. A distinguishing key value
  that can only appear on non-pivot tuples on the right sibling page
  will often be present in leaf page high keys.

  Since forcing the final item to be key checked no longer makes any
  difference in the case of forward scans, the existing extra key check
  is now only used for backwards scans. Backward scans continue to
  opportunistically check the final non-pivot tuple, which is actually
  the first non-pivot tuple on the page (not the last).

  Note that even pg_upgrade'd v3 indexes make use of this optimization.

  Author: Peter Geoghegan, Heikki Linnakangas
  Reviewed-By: Heikki Linnakangas
  Discussion: https://postgr.es/m/CAH2-WzkOmUduME31QnuTFpimejuQoiZ-HOf0pOWeFZNhTMctvA@mail.gmail.com
* Add unreachable "break" to satisfy -Wimplicit-fallthrough. (Tom Lane, 2019-03-23)

  gcc is a bit pickier about this than perhaps it should be.

  Discussion: https://postgr.es/m/E1h6zzT-0003ft-DD@gemulon.postgresql.org
* Revert "Add gitignore entries for jsonpath_gram.h"Peter Eisentraut2019-03-23
| | | | | | This reverts commit 4e274a043fc8310ce1148190ef674beca06e990c. These files aren't actually built anymore since 550b9d26f.
* Add gitignore entries for jsonpath_gram.h (Peter Eisentraut, 2019-03-22)
* Rearrange make_partitionedrel_pruneinfo to avoid work when we can't prune. (Tom Lane, 2019-03-22)

  Postpone most of the effort of constructing PartitionedRelPruneInfos
  until after we have found out whether run-time pruning is needed at
  all. This costs very little duplicated effort (basically just an extra
  find_base_rel() call per partition) and saves quite a bit when we
  can't do run-time pruning.

  Also, merge the first loop (for building relid_subpart_map) into the
  second loop, since we don't need the map to be valid during that loop.

  Amit Langote

  Discussion: https://postgr.es/m/9d7c5112-cb99-6a47-d3be-cf1ee6862a1d@lab.ntt.co.jp
* Don't copy PartitionBoundInfo in set_relation_partition_info. (Tom Lane, 2019-03-22)

  I (tgl) remain dubious that it's a good idea for PartitionDirectory to
  hold a pin on a relcache entry throughout planning, rather than
  copying the data or using some kind of refcount scheme. However, it's
  certainly the responsibility of the PartitionDirectory code to ensure
  that what it's handing back is a stable data structure, not that of
  its caller. So this is a pretty clear oversight in commit 898e5e329,
  and one that can cost a lot of performance when there are many
  partitions.

  Amit Langote (extracted from a much larger patch set)

  Discussion: https://postgr.es/m/CA+TgmoY3bRmGB6-DUnoVy5fJoreiBJ43rwMrQRCdPXuKt4Ykaw@mail.gmail.com
  Discussion: https://postgr.es/m/9d7c5112-cb99-6a47-d3be-cf1ee6862a1d@lab.ntt.co.jp
* Fix yet more portability bugs in integerset and its tests. (Heikki Linnakangas, 2019-03-22)

  There were more large constants that needed UINT64CONST. And one
  variable was declared as "int", when it needed to be uint64. These
  bugs were only visible on 32-bit systems; clearly I should've tested
  on one, given that this code does a lot of work with 64-bit integers.

  Also, in the "huge distances" test, the code created some values with
  random distances between them, but the test logic didn't take into
  account the possibility that the random distance was exactly 1. That
  never actually happens with the seed we're using, but let's be tidy.
* Fix bug in the GiST vacuum's 2nd stage. (Heikki Linnakangas, 2019-03-22)

  We mustn't assume that the IndexVacuumInfo pointer passed to the
  bulkdelete() stage is still valid in the vacuumcleanup() stage.

  Per very pink buildfarm.