path: root/src/backend
...
* Clean up the Simple-8b encoder code. (Heikki Linnakangas, 2019-03-25)

    Coverity complained that simple8b_encode() might read beyond the end of
    the 'diffs' array, in the loop to encode the integers. That was a false
    positive, because we never get into the loop in modes 0 or 1, and the
    array is large enough for all the other modes. But I admit it's very
    subtle, so it's not surprising that Coverity didn't see it, and it's not
    very obvious to humans either. Refactor it, so that the second loop
    re-computes the differences, instead of carrying them over from the
    first loop in the 'diffs' array. This way, the 'diffs' array is not
    needed anymore. It makes no measurable difference in performance, and
    seems more straightforward this way.

    Also, improve the comments in simple8b_encode(): fix the comment about
    its return value that was flat-out wrong, and explain the condition
    when it returns EMPTY_CODEWORD better.

    In passing, move the 'selector' from the codeword's low bits to the
    high bits. It doesn't matter much, but looking at the original paper,
    and googling around for other Simple-8b implementations, that's how
    it's usually done.

    Per Coverity, and Tom Lane's report off-list.
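    The selector placement mentioned above is easy to picture with a small
    sketch. This is a hypothetical illustration of the convention, not the
    actual PostgreSQL code; encode_codeword() and codeword_selector() are
    made-up names:

        #include <stdint.h>

        /* Pack a 4-bit selector into the high bits of a 64-bit Simple-8b
         * codeword, leaving the low 60 bits for the packed integers. */
        static inline uint64_t
        encode_codeword(uint64_t selector, uint64_t payload)
        {
            return (selector << 60) | (payload & ((UINT64_C(1) << 60) - 1));
        }

        static inline uint64_t
        codeword_selector(uint64_t codeword)
        {
            return codeword >> 60;
        }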
* Add macro to cast away volatile without allowing changes to underlying type (Peter Eisentraut, 2019-03-25)

    This adds unvolatize(), which works just like unconstify() but for
    volatile.

    Discussion: https://www.postgresql.org/message-id/flat/7a5cbea7-b8df-e910-0f10-04014bcad701%402ndquadrant.com
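    A sketch of how such a macro can be built on GCC-style builtins,
    mirroring unconstify(); the actual definition in c.h may differ in
    details:

        /* The static assertion checks that the expression really is
         * "volatile underlying_type", so the cast can only strip the
         * volatile qualifier, never change the underlying type. */
        #define unvolatize(underlying_type, expr) \
            (StaticAssertExpr(__builtin_types_compatible_p(__typeof(expr), \
                                                           volatile underlying_type), \
                              "wrong cast"), \
             (underlying_type) (expr))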
* tableam: Add and use table_fetch_row_version(). (Andres Freund, 2019-03-25)

    This is essentially the tableam version of heapam_fetch(), i.e.
    fetching a tuple identified by a tid, performing visibility checks.

    Note that this is different from table_index_fetch_tuple(), which is
    for index lookups. It therefore has to handle a tid pointing to an
    earlier version of a tuple if the AM uses an optimization like heap's
    HOT. Add comments to that end.

    This commit removes the stats_relation argument from heap_fetch, as
    it's been unused for a long time.

    Author: Andres Freund
    Reviewed-By: Haribabu Kommi
    Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
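    A hedged usage sketch, assuming the signature follows the pattern of
    the other table_* wrappers in tableam.h (relation, tid, snapshot,
    output slot); declarations and error handling are elided:

        /* Fetch the row version a tid points at, if visible to snapshot. */
        TupleTableSlot *slot = table_slot_create(rel, NULL);

        if (table_fetch_row_version(rel, &tid, snapshot, slot))
        {
            /* slot now holds the visible tuple version */
        }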
* tableam: Use in CREATE TABLE AS and CREATE MATERIALIZED VIEW. (Andres Freund, 2019-03-24)

    Previously those directly performed a heap_insert(). Use table_insert()
    instead.

    The input slot of those routines is not of the target relation - we
    could fix that by copying if necessary, but that'd not be beneficial
    for performance. As those codepaths don't access any AM specific tuple
    fields (say xmin/xmax), there's no need to use an AM specific slot.

    Author: Andres Freund
    Reviewed-By: Haribabu Kommi
    Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
* Sort dependent objects before reporting them in DROP ROLE. (Tom Lane, 2019-03-24)

    Commit 8aa9dd74b didn't quite finish the job in this area after all,
    because DROP ROLE has a code path distinct from DROP OWNED BY, and it
    was still reporting dependent objects in whatever order the index scan
    returned them in.

    Buildfarm experience shows that index ordering of equal-keyed objects
    is significantly less stable than before in the wake of using heap TIDs
    as tie-breakers. So if we try to hide the unstable ordering by
    suppressing DETAIL reports, we're just going to end up having to do
    that for every DROP that reports multiple objects. That's not great
    from a coverage or problem-detection standpoint, and it's something
    we'll inevitably forget in future patches, leading to more iterations
    of fixing-an-unstable-result. So let's just bite the bullet and sort
    here too.

    Discussion: https://postgr.es/m/E1h6eep-0001Mw-Vd@gemulon.postgresql.org
* Remove dead code from nbtsplitloc.c. (Peter Geoghegan, 2019-03-24)

    It doesn't make sense to consider the possibility that there will only
    be one candidate split point when choosing among split points to find
    the split with the lowest penalty. This is a vestige of an earlier
    version of the patch that became commit fab25024.

    Issue spotted while rereviewing coverage of the nbtree patch series
    using gcov.
* Make current_logfiles use permissions assigned to files in data directory (Michael Paquier, 2019-03-24)

    Since its introduction in 19dc233c, current_logfiles has been assigned
    the same permissions as a log file, which can be enforced with
    log_file_mode. This setup can lead to incompatibility problems with
    group access permissions as current_logfiles is not located in the log
    directory, but at the root of the data folder. Hence, if group
    permissions are used but log_file_mode is more restrictive, a backup
    with a user in the group having read access could fail even if the log
    directory is located outside of the data folder.

    Per discussion with the folks mentioned below, we have concluded that
    current_logfiles should not be treated as a log file as it only stores
    metadata related to log files, and that it should use the same
    permissions as all other files in the data directory. This solution has
    the merit of being simple and fixes all the interaction problems
    between group access and log_file_mode.

    Author: Haribabu Kommi
    Reviewed-by: Stephen Frost, Robert Haas, Tom Lane, Michael Paquier
    Discussion: https://postgr.es/m/CAJrrPGcEotF1P7AWoeQyD3Pqr-0xkQg_Herv98DjbaMj+naozw@mail.gmail.com
    Backpatch-through: 11, where group access has been added.
* Transaction chaining (Peter Eisentraut, 2019-03-24)

    Add command variants COMMIT AND CHAIN and ROLLBACK AND CHAIN, which
    start new transactions with the same transaction characteristics as the
    just finished one, per SQL standard.

    Support for transaction chaining in PL/pgSQL is also added. This
    functionality is especially useful when running COMMIT in a loop in
    PL/pgSQL.

    Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
    Discussion: https://www.postgresql.org/message-id/flat/28536681-324b-10dc-ade8-ab46f7645a5a@2ndquadrant.com
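    A minimal client-side sketch of what the new syntax buys you, using
    libpq (result handling and PQclear() calls are omitted for brevity):

        #include <libpq-fe.h>

        /* COMMIT AND CHAIN commits and immediately starts a new
         * transaction with the same isolation level, read/write mode, and
         * deferrability, saving a BEGIN with repeated characteristics. */
        PQexec(conn, "BEGIN ISOLATION LEVEL REPEATABLE READ");
        /* ... first batch of work ... */
        PQexec(conn, "COMMIT AND CHAIN");  /* next txn, still REPEATABLE READ */
        /* ... second batch of work ... */
        PQexec(conn, "COMMIT");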
* tableam: Add tuple_{insert, delete, update, lock} and use. (Andres Freund, 2019-03-23)

    This adds new, required, table AM callbacks for insert/delete/update
    and lock_tuple. To be able to reasonably use those, the EvalPlanQual
    mechanism had to be adapted, moving more logic into the AM.

    Previously both delete/update/lock call-sites and the EPQ mechanism had
    to have awareness of the specific tuple format to be able to fetch the
    latest version of a tuple. Obviously that needs to be abstracted away.
    To do so, move the logic that finds the latest row version into the AM.
    lock_tuple has a new flag argument, TUPLE_LOCK_FLAG_FIND_LAST_VERSION,
    that forces it to lock the last version, rather than the current one.
    It'd have been possible to do so via a separate callback as well, but
    finding the last version usually also necessitates locking the newest
    version, making it sensible to combine the two. This replaces the
    previous use of EvalPlanQualFetch(). Additionally HeapTupleUpdated,
    which previously signaled either a concurrent update or delete, is now
    split into two, to avoid callers needing AM specific knowledge to
    differentiate.

    The move of finding the latest row version into tuple_lock means that
    encountering a row concurrently moved into another partition will now
    raise an error about "tuple to be locked" rather than "tuple to be
    updated/deleted" - which is accurate, as that always happens when
    locking rows. While possibly slightly less helpful for users, it seems
    like an acceptable trade-off.

    As part of this commit HTSU_Result has been renamed to TM_Result, and
    its members expanded to differentiate between updating and deleting.
    HeapUpdateFailureData has been renamed to TM_FailureData.

    The interface to speculative insertion is changed so nodeModifyTable.c
    does not have to set the speculative token itself anymore. Instead
    there's a version of tuple_insert, tuple_insert_speculative, that
    performs the speculative insertion (without requiring a flag to signal
    that fact), and the speculative insertion is either made permanent with
    table_complete_speculative(succeeded = true) or aborted with
    succeeded = false.

    Note that multi_insert is not yet routed through tableam, nor is COPY.
    Changing multi_insert requires changes to copy.c that are large enough
    to better be done separately. Similarly, although simpler, CREATE TABLE
    AS and CREATE MATERIALIZED VIEW are also only going to be adjusted in a
    later commit.

    Author: Andres Freund and Haribabu Kommi
    Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
    https://postgr.es/m/20190313003903.nwvrxi7rw3ywhdel@alap3.anarazel.de
    https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
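    The update/delete split described above is easiest to see in the result
    type itself. A hedged sketch of the renamed enum, with members as I
    understand them from the description (consult tableam.h for the
    authoritative definition):

        typedef enum TM_Result
        {
            TM_Ok,             /* operation succeeded */
            TM_Invisible,      /* tuple was never visible to us */
            TM_SelfModified,   /* modified by our own transaction */
            TM_Updated,        /* concurrently updated */
            TM_Deleted,        /* concurrently deleted (formerly folded
                                * into HeapTupleUpdated) */
            TM_BeingModified,  /* being modified by a live transaction */
            TM_WouldBlock      /* lock not acquirable without waiting */
        } TM_Result;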
* Remove inadequate check for duplicate "xml" PI. (Tom Lane, 2019-03-23)

    I failed to think about PIs starting with "xml". We don't really need
    this check at all, so just take it out. Oversight in commit 8d1dadb25
    et al.
* Accept XML documents when xmloption = content, as required by SQL:2006+. (Tom Lane, 2019-03-23)

    Previously we were using the SQL:2003 definition, which doesn't allow
    this, but that creates a serious dump/restore gotcha: there is no
    setting of xmloption that will allow all valid XML data. Hence, switch
    to the 2006 definition.

    Since libxml doesn't accept <!DOCTYPE> directives in the mode we use
    for CONTENT parsing, the implementation is to detect <!DOCTYPE> in the
    input and switch to DOCUMENT parsing mode. This should not cost much,
    because <!DOCTYPE> should be close to the front of the input if it's
    there at all. It's possible that this causes the error messages for
    malformed input to be slightly different than they were before, if said
    input includes <!DOCTYPE>; but that does not seem like a big problem.

    In passing, buy back a few cycles in parsing of large XML documents by
    not doing strlen() of the whole input in parse_xml_decl().

    Back-patch because dump/restore failures are not nice. This change
    shouldn't break any cases that worked before, so it seems safe to
    back-patch.

    Chapman Flack (revised a bit by me)
    Discussion: https://postgr.es/m/CAN-V+g-6JqUQEQZ55Q3toXEN6d5Ez5uvzL4VR+8KtvJKj31taw@mail.gmail.com
* Add nbtree high key "continuescan" optimization. (Peter Geoghegan, 2019-03-23)

    Teach nbtree forward index scans to check the high key before moving to
    the right sibling page in the hope of finding that it isn't actually
    necessary to do so. The new check may indicate that the scan definitely
    cannot find matching tuples to the right, ending the scan immediately.
    We already opportunistically force a similar "continuescan oriented"
    key check of the final non-pivot tuple when it's clear that it cannot
    be returned to the scan due to being dead-to-all. The new high key
    check is complementary.

    The new approach for forward scans is more effective than checking the
    final non-pivot tuple, especially with composite indexes and non-unique
    indexes. The improvements to the logic for picking a split point added
    by commit fab25024 make it likely that relatively dissimilar high keys
    will appear on a page. A distinguishing key value that can only appear
    on non-pivot tuples on the right sibling page will often be present in
    leaf page high keys.

    Since forcing the final item to be key checked no longer makes any
    difference in the case of forward scans, the existing extra key check
    is now only used for backwards scans. Backward scans continue to
    opportunistically check the final non-pivot tuple, which is actually
    the first non-pivot tuple on the page (not the last).

    Note that even pg_upgrade'd v3 indexes make use of this optimization.

    Author: Peter Geoghegan, Heikki Linnakangas
    Reviewed-By: Heikki Linnakangas
    Discussion: https://postgr.es/m/CAH2-WzkOmUduME31QnuTFpimejuQoiZ-HOf0pOWeFZNhTMctvA@mail.gmail.com
* Add unreachable "break" to satisfy -Wimplicit-fallthrough.Tom Lane2019-03-23
| | | | | | gcc is a bit pickier about this than perhaps it should be. Discussion: https://postgr.es/m/E1h6zzT-0003ft-DD@gemulon.postgresql.org
* Revert "Add gitignore entries for jsonpath_gram.h"Peter Eisentraut2019-03-23
| | | | | | This reverts commit 4e274a043fc8310ce1148190ef674beca06e990c. These files aren't actually built anymore since 550b9d26f.
* Add gitignore entries for jsonpath_gram.h (Peter Eisentraut, 2019-03-22)
* Rearrange make_partitionedrel_pruneinfo to avoid work when we can't prune. (Tom Lane, 2019-03-22)

    Postpone most of the effort of constructing PartitionedRelPruneInfos
    until after we have found out whether run-time pruning is needed at
    all. This costs very little duplicated effort (basically just an extra
    find_base_rel() call per partition) and saves quite a bit when we can't
    do run-time pruning.

    Also, merge the first loop (for building relid_subpart_map) into the
    second loop, since we don't need the map to be valid during that loop.

    Amit Langote
    Discussion: https://postgr.es/m/9d7c5112-cb99-6a47-d3be-cf1ee6862a1d@lab.ntt.co.jp
* Don't copy PartitionBoundInfo in set_relation_partition_info. (Tom Lane, 2019-03-22)

    I (tgl) remain dubious that it's a good idea for PartitionDirectory to
    hold a pin on a relcache entry throughout planning, rather than copying
    the data or using some kind of refcount scheme. However, it's certainly
    the responsibility of the PartitionDirectory code to ensure that what
    it's handing back is a stable data structure, not that of its caller.
    So this is a pretty clear oversight in commit 898e5e329, and one that
    can cost a lot of performance when there are many partitions.

    Amit Langote (extracted from a much larger patch set)
    Discussion: https://postgr.es/m/CA+TgmoY3bRmGB6-DUnoVy5fJoreiBJ43rwMrQRCdPXuKt4Ykaw@mail.gmail.com
    Discussion: https://postgr.es/m/9d7c5112-cb99-6a47-d3be-cf1ee6862a1d@lab.ntt.co.jp
* Fix yet more portability bugs in integerset and its tests. (Heikki Linnakangas, 2019-03-22)

    There were more large constants that needed UINT64CONST. And one
    variable was declared as "int", when it needed to be uint64. These bugs
    were only visible on 32-bit systems; clearly I should've tested on one,
    given that this code does a lot of work with 64-bit integers.

    Also, in the "huge distances" test, the code created some values with
    random distances between them, but the test logic didn't take into
    account the possibility that the random distance was exactly 1. That
    never actually happens with the seed we're using, but let's be tidy.
* Fix bug in the GiST vacuum's 2nd stage. (Heikki Linnakangas, 2019-03-22)

    We mustn't assume that the IndexVacuumInfo pointer passed to the
    bulkdelete() stage is still valid in the vacuumcleanup() stage.

    Per very pink buildfarm.
* Delete empty pages during GiST VACUUM. (Heikki Linnakangas, 2019-03-22)

    To do this, we scan GiST twice. In the first pass we make note of empty
    leaf pages and internal pages. In the second pass, we scan through the
    internal pages, looking for downlinks to the empty pages.

    Deleting internal pages is still not supported: as in nbtree, the last
    child of an internal page is never deleted. That means that if you have
    a workload where new keys are always inserted into a different area
    than where old keys are removed, the index will still grow without
    bound. But the rate of growth will be an order of magnitude slower than
    before.

    Author: Andrey Borodin
    Discussion: https://www.postgresql.org/message-id/B1E4DF12-6CD3-4706-BDBD-BF3283328F60@yandex-team.ru
* Add IntegerSet, to hold large sets of 64-bit ints efficiently. (Heikki Linnakangas, 2019-03-22)

    The set is implemented as a B-tree, with a compact representation for
    leaf items, using the Simple-8b algorithm, so that clusters of nearby
    values use less memory.

    The IntegerSet isn't used for anything yet, aside from the test code,
    but we have two patches in the works that would benefit from this: a
    patch to allow GiST vacuum to delete empty pages, and a patch to reduce
    heap VACUUM's memory usage, by storing the list of dead TIDs more
    efficiently and lifting the 1 GB limit on its size.

    This includes a unit test module, in src/test/modules/test_integerset.
    It can be used to verify correctness, as a regression test, but if you
    run it manually, it can also print memory usage and execution time of
    some of the tests.

    Author: Heikki Linnakangas, Andrey Borodin
    Reviewed-by: Julien Rouhaud
    Discussion: https://www.postgresql.org/message-id/b5e82599-1966-5783-733c-1a947ddb729f@iki.fi
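    A hedged usage sketch, assuming the accessors in
    src/include/lib/integerset.h follow the intset_* naming pattern; note
    that the structure is built by adding members in ascending order, which
    fits the intended use cases (sorted dead TIDs, page numbers):

        #include "lib/integerset.h"

        IntegerSet *set = intset_create();

        /* members must be added in ascending order */
        intset_add_member(set, UINT64CONST(1000));
        intset_add_member(set, UINT64CONST(1001));
        intset_add_member(set, UINT64CONST(500000));

        if (intset_is_member(set, UINT64CONST(1001)))
            elog(DEBUG1, "found it");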
* Collations with nondeterministic comparison (Peter Eisentraut, 2019-03-22)

    This adds a flag "deterministic" to collations. If that is false, such
    a collation disables various optimizations that assume that strings are
    equal only if they are byte-wise equal. That then allows use cases such
    as case-insensitive or accent-insensitive comparisons or handling of
    strings with different Unicode normal forms.

    This functionality is only supported with the ICU provider. At least
    glibc doesn't appear to have any locales that work in a
    nondeterministic way, so it's not worth supporting this for the libc
    provider.

    The term "deterministic comparison" in this context is from Unicode
    Technical Standard #10
    (https://unicode.org/reports/tr10/#Deterministic_Comparison).

    This patch makes changes in three areas:

    - CREATE COLLATION DDL changes and system catalog changes to support
      this new flag.

    - Many executor nodes and auxiliary code are extended to track
      collations. Previously, this code would just throw away collation
      information, because the eventually-called user-defined functions
      didn't use it since they only cared about equality, which didn't
      need collation information.

    - String data type functions that do equality comparisons and hashing
      are changed to take the (non-)deterministic flag into account. For
      comparison, this just means skipping various shortcuts and tie
      breakers that use byte-wise comparison. For hashing, we first need
      to convert the input string to a canonical "sort key" using the ICU
      analogue of strxfrm() (sketched below).

    Reviewed-by: Daniel Verite <daniel@manitou-mail.org>
    Reviewed-by: Peter Geoghegan <pg@bowt.ie>
    Discussion: https://www.postgresql.org/message-id/flat/1ccc668f-4cbc-0bef-af67-450b47cdfee7@2ndquadrant.com
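    The hashing point in the last list item can be sketched with ICU's C
    API. This illustrates the idea only; collator, ustr, and ustr_len are
    assumed to be in scope, and buffer sizing and error handling are
    elided:

        #include <unicode/ucol.h>

        /* For a nondeterministic collation, hash the collation's sort key
         * (ICU's analogue of strxfrm()) instead of the raw string bytes,
         * so strings that compare equal also hash equal. */
        uint8_t sortkey[1024];
        int32_t keylen;

        keylen = ucol_getSortKey(collator, ustr, ustr_len,
                                 sortkey, sizeof(sortkey));
        /* ... feed sortkey[0..keylen) to the hash function ... */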
* Fix crash with pg_partition_root (Michael Paquier, 2019-03-22)

    Trying to call the function with the top-most parent of a partition
    tree was leading to a crash. In this case the correct result is to
    return the top-most parent itself.

    Reported-by: Álvaro Herrera
    Author: Michael Paquier
    Reviewed-by: Amit Langote
    Discussion: https://postgr.es/m/20190322032612.GA323@alvherre.pgsql
* Fix dependency recording bug for partitioned PKs (Alvaro Herrera, 2019-03-21)

    When DefineIndex recurses to create constraints on partitions, it needs
    to use the value returned by index_constraint_create to set up
    partition dependencies. However, in the course of fixing the
    DEPENDENCY_INTERNAL_AUTO mess, commit 1d92a0c9f7dd introduced some code
    to that function that clobbered the return value, causing the recorded
    OID to be of the wrong object. Close examination of pg_depend after
    creating the tables leads to indescribable objects :-( My sin (in
    commit bdc3d7fa2376, while preparing for DDL deparsing in event
    triggers) was to use a variable name for the return value that's
    typically used for throwaway objects in dependency-setting calls
    ("referenced"). Fix by changing the variable names to match extended
    practice (the return value is "myself" rather than "referenced".)

    The pg_upgrade test notices the problem (in an indirect way: the
    pg_dump outputs are in different order), but only if you create the
    objects in a specific way that wasn't being used in the existing tests.
    Add a stanza to leave some objects around that shows the bug.

    Catversion bump because preexisting databases might have bogus
    pg_depend entries.

    Discussion: https://postgr.es/m/20190318204235.GA30360@alvherre.pgsql
* Improve error reporting for DROP FUNCTION/PROCEDURE/AGGREGATE/ROUTINE. (Tom Lane, 2019-03-21)

    These commands allow the argument type list to be omitted if there is
    just one object that matches by name. However, if that syntax was used
    with DROP IF EXISTS and there was more than one match, you got a
    "function ... does not exist, skipping" notice message rather than a
    truthful complaint about the ambiguity. This was basically due to poor
    factorization and a rats-nest of logic, so refactor the relevant lookup
    code to make it cleaner.

    Note that this amounts to narrowing the scope of which sorts of error
    conditions IF EXISTS will bypass. Per discussion, we only intend it to
    skip no-such-object cases, not multiple-possible-matches cases.

    Per bug #15572 from Ash Marath. Although this definitely seems like a
    bug, it's not clear that people would thank us for changing the
    behavior in minor releases, so no back-patch.

    David Rowley, reviewed by Julien Rouhaud and Pavel Stehule
    Discussion: https://postgr.es/m/15572-ed1b9ed09503de8a@postgresql.org
* Add DNS SRV support for LDAP server discovery. (Thomas Munro, 2019-03-21)

    LDAP servers can be advertised on a network with RFC 2782 DNS SRV
    records. The OpenLDAP command-line tools automatically try to find
    servers that way, if no server name is provided by the user. Teach
    PostgreSQL to do the same using OpenLDAP's support functions, when
    building with OpenLDAP.

    For now, we assume that HAVE_LDAP_INITIALIZE (an OpenLDAP extension
    available since OpenLDAP 2.0 and also present in Apple LDAP) implies
    that you also have ldap_domain2hostlist() (which arrived in the same
    OpenLDAP version and is also present in Apple LDAP).

    Author: Thomas Munro
    Reviewed-by: Daniel Gustafsson
    Discussion: https://postgr.es/m/CAEepm=2hAnSfhdsd6vXsM6VZVN0br-FbAZ-O+Swk18S5HkCP=A@mail.gmail.com
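    A hedged sketch of the OpenLDAP helper named above; my reading is that
    it resolves _ldap._tcp SRV records for a domain into a space-separated
    "host:port" list, though exact conventions may differ between versions:

        #include <ldap.h>

        char *hostlist = NULL;

        if (ldap_domain2hostlist("example.net", &hostlist) == LDAP_SUCCESS)
        {
            /* e.g. "ldap1.example.net:389 ldap2.example.net:389" */
            /* ... hand the list to ldap_initialize() ... */
            ldap_memfree(hostlist);
        }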
* Sort the dependent objects before deletion in DROP OWNED BY. (Tom Lane, 2019-03-20)

    This finishes a task we left undone in commit f1ad067fc, by extending
    the delete-in-descending-OID-order rule to deletions triggered by DROP
    OWNED BY. We've coped with machine-dependent deletion orders one time
    too many, and the new issues caused by Peter G's recent nbtree hacking
    seem like the last straw.

    Discussion: https://postgr.es/m/E1h6eep-0001Mw-Vd@gemulon.postgresql.org
* Add index_get_partition convenience function (Alvaro Herrera, 2019-03-20)

    This new function simplifies some existing coding, as well as supports
    future patches.

    Discussion: https://postgr.es/m/201901222145.t6wws6t6vrcu@alvherre.pgsql
    Reviewed-by: Amit Langote, Jesper Pedersen
* Fix spurious compiler warning in nbtxlog.c. (Peter Geoghegan, 2019-03-20)

    Cleanup from commit dd299df8.

    Per complaint from Tom Lane.
* Restore RI trigger sanity check (Alvaro Herrera, 2019-03-20)

    I unnecessarily removed this check in 3de241dba86f because I
    misunderstood what the final representation of constraints across a
    partitioning hierarchy was to be. Put it back (in both branches).

    Discussion: https://postgr.es/m/201901222145.t6wws6t6vrcu@alvherre.pgsql
* Consider secondary factors during nbtree splits. (Peter Geoghegan, 2019-03-20)

    Teach nbtree to give some consideration to how "distinguishing"
    candidate leaf page split points are. This should not noticeably affect
    the balance of free space within each half of the split, while still
    making suffix truncation truncate away significantly more attributes on
    average.

    The logic for choosing a leaf split point now uses a fallback mode in
    the case where the page is full of duplicates and it isn't possible to
    find even a minimally distinguishing split point. When the page is full
    of duplicates, the split should pack the left half very tightly, while
    leaving the right half mostly empty. Our assumption is that logical
    duplicates will almost always be inserted in ascending heap TID order
    with v4 indexes. This strategy leaves most of the free space on the
    half of the split that will likely be where future logical duplicates
    of the same value need to be placed.

    The number of cycles added is not very noticeable. This is important
    because deciding on a split point takes place while at least one
    exclusive buffer lock is held. We avoid using authoritative insertion
    scankey comparisons to save cycles, unlike suffix truncation proper. We
    use a faster binary comparison instead.

    Note that even pg_upgrade'd v3 indexes make use of these optimizations.
    Benchmarking has shown that even v3 indexes benefit, despite the fact
    that suffix truncation will only truncate non-key attributes in INCLUDE
    indexes. Grouping relatively similar tuples together is beneficial in
    and of itself, since it reduces the number of leaf pages that must be
    accessed by subsequent index scans.

    Author: Peter Geoghegan
    Reviewed-By: Heikki Linnakangas
    Discussion: https://postgr.es/m/CAH2-WzmmoLNQOj9mAD78iQHfWLJDszHEDrAzGTUMG3mVh5xWPw@mail.gmail.com
* Make heap TID a tiebreaker nbtree index column. (Peter Geoghegan, 2019-03-20)

    Make nbtree treat all index tuples as having a heap TID attribute.
    Index searches can distinguish duplicates by heap TID, since heap TID
    is always guaranteed to be unique. This general approach has numerous
    benefits for performance, and is prerequisite to teaching VACUUM to
    perform "retail index tuple deletion".

    Naively adding a new attribute to every pivot tuple has unacceptable
    overhead (it bloats internal pages), so suffix truncation of pivot
    tuples is added. This will usually truncate away the "extra" heap TID
    attribute from pivot tuples during a leaf page split, and may also
    truncate away additional user attributes. This can increase fan-out,
    especially in a multi-column index. Truncation can only occur at the
    attribute granularity, which isn't particularly effective, but works
    well enough for now. A future patch may add support for truncating
    "within" text attributes by generating truncated key values using new
    opclass infrastructure.

    Only new indexes (BTREE_VERSION 4 indexes) will have insertions that
    treat heap TID as a tiebreaker attribute, or will have pivot tuples
    undergo suffix truncation during a leaf page split (on-disk
    compatibility with versions 2 and 3 is preserved). Upgrades to version
    4 cannot be performed on-the-fly, unlike upgrades from version 2 to
    version 3. contrib/amcheck continues to work with version 2 and 3
    indexes, while also enforcing stricter invariants when verifying
    version 4 indexes. These stricter invariants are the same invariants
    described by "3.1.12 Sequencing" from the Lehman and Yao paper.

    A later patch will enhance the logic used by nbtree to pick a split
    point. This patch is likely to negatively impact performance without
    smarter choices around the precise point to split leaf pages at. Making
    these two mostly-distinct sets of enhancements into distinct commits
    seems like it might clarify their design, even though neither commit is
    particularly useful on its own.

    The maximum allowed size of new tuples is reduced by an amount equal to
    the space required to store an extra MAXALIGN()'d TID in a new high key
    during leaf page splits. The user-facing definition of the "1/3 of a
    page" restriction is already imprecise, and so does not need to be
    revised. However, there should be a compatibility note in the v12
    release notes.

    Author: Peter Geoghegan
    Reviewed-By: Heikki Linnakangas, Alexander Korotkov
    Discussion: https://postgr.es/m/CAH2-WzkVb0Kom=R+88fDFb=JSxZMFvbHVC6Mn9LJ2n=X=kS-Uw@mail.gmail.com
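    The tiebreaker rule itself is simple to state in code. A hypothetical
    distillation (the real comparison logic lives in _bt_compare(), and
    index_tuple_order() is a made-up name):

        #include "storage/itemptr.h"

        /* Compare user attributes first; only when they are all equal
         * does the heap TID, which is guaranteed unique, decide order. */
        static int
        index_tuple_order(int keycmp, ItemPointer a, ItemPointer b)
        {
            if (keycmp != 0)
                return keycmp;
            return ItemPointerCompare(a, b);
        }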
* Refactor nbtree insertion scankeys. (Peter Geoghegan, 2019-03-20)

    Use a dedicated struct to represent nbtree insertion scan keys. Having
    a dedicated struct makes the difference between search type scankeys
    and insertion scankeys a lot clearer, and simplifies the signature of
    several related functions. This is based on a suggestion by Andrey
    Lepikhov.

    Streamline how unique index insertions cache binary search progress.
    Cache the state of in-progress binary searches within
    _bt_check_unique() for later instead of having callers avoid repeating
    the binary search in an ad-hoc manner. This makes it easy to add a new
    optimization: _bt_check_unique() now falls out of its loop immediately
    in the common case where it's already clear that there couldn't
    possibly be a duplicate.

    The new _bt_check_unique() scheme makes it a lot easier to manage
    cached binary search effort afterwards, from within
    _bt_findinsertloc(). This is needed for the upcoming patch to make
    nbtree tuples unique by treating heap TID as a final tiebreaker column.
    Unique key binary searches need to restore lower and upper bounds. They
    cannot simply continue to use the >= lower bound as the offset to
    insert at, because the heap TID tiebreaker column must be used in
    comparisons for the restored binary search (unlike the original
    _bt_check_unique() binary search, where scankey's heap TID column must
    be omitted).

    Author: Peter Geoghegan, Heikki Linnakangas
    Reviewed-By: Heikki Linnakangas, Andrey Lepikhov
    Discussion: https://postgr.es/m/CAH2-WzmE6AhUdk9NdWBf4K3HjWXZBX3+umC7mH7+WDrKcRtsOw@mail.gmail.com
* Get rid of jsonpath_gram.h and jsonpath_scanner.h (Alexander Korotkov, 2019-03-20)

    The jsonpath grammar and scanner are both quite small. It isn't worth
    the complexity of compiling them separately. This commit makes the
    grammar and scanner be compiled at once. Therefore, jsonpath_gram.h and
    jsonpath_scanner.h are no longer needed. This commit also does some
    reorganization of code in jsonpath_gram.y.

    Discussion: https://postgr.es/m/d47b2023-3ecb-5f04-d253-d557547cf74f%402ndQuadrant.com
* Rename typedef in jsonpath_gram.y from "string" to "JsonPathString" (Alexander Korotkov, 2019-03-19)

    Reason is the same as in 75c57058b0.
* Tweak nbtsearch.c function prototype order. (Peter Geoghegan, 2019-03-19)

    nbtsearch.c's static function prototypes were slightly out of order.
    Make the order consistent with static function definition order.
* Make checkpoint requests more robust. (Tom Lane, 2019-03-19)

    Commit 6f6a6d8b1 introduced a delay of up to 2 seconds if we're trying
    to request a checkpoint but the checkpointer hasn't started yet (or,
    much less likely, our kill() call fails). However buildfarm experience
    shows that that's not quite enough for slow or heavily-loaded machines.
    There's no good reason to assume that the checkpointer won't start
    eventually, so we may as well make the timeout much longer, say 60 sec.

    However, if the caller didn't say CHECKPOINT_WAIT, it seems like a bad
    idea to be waiting at all, much less for as long as 60 sec. We can
    remove the need for that, and make this whole thing more robust, by
    adjusting the code so that the existence of a pending checkpoint
    request is clear from the contents of shared memory, and making sure
    that the checkpointer process will notice it at startup even if it did
    not get a signal. In this way there's no need for a non-CHECKPOINT_WAIT
    call to wait at all; if it can't send the signal, it can nonetheless
    assume that the checkpointer will eventually service the request.

    A potential downside of this change is that "kill -INT" on the
    checkpointer process is no longer enough to trigger a checkpoint,
    should anyone be relying on something so hacky. But there's no obvious
    reason to do it like that rather than issuing a plain old CHECKPOINT
    command, so we'll assume that nobody is. There doesn't seem to be a way
    to preserve this undocumented quasi-feature without introducing race
    conditions.

    Since a principal reason for messing with this is to prevent
    intermittent buildfarm failures, back-patch to all supported branches.

    Discussion: https://postgr.es/m/27830.1552752475@sss.pgh.pa.us
* Rename typedef in jsonpath_scan.l from "keyword" to "JsonPathKeyword" (Alexander Korotkov, 2019-03-19)

    Typedef names should be unique and should not collide with variable
    names across all the sources. That makes both pgindent and debuggers
    happy.

    Discussion: https://postgr.es/m/23865.1552936099%40sss.pgh.pa.us
* Ignore attempts to add TOAST table to shared or catalog tables (Peter Eisentraut, 2019-03-19)

    Running ALTER TABLE on any table will check if a TOAST table needs to
    be added. On shared tables, this would previously fail, thus
    effectively disabling ALTER TABLE for those tables. On (non-shared)
    system catalogs, on the other hand, it would add a TOAST table, even
    though we don't really want TOAST tables on some system catalogs. In
    some cases, it would also fail with an error "AccessExclusiveLock
    required to add toast table.", depending on what locks the ALTER TABLE
    actions had already taken.

    So instead, just ignore attempts to add TOAST tables to such tables,
    outside of bootstrap mode, pretending they don't need one.

    This allows running ALTER TABLE on such tables without messing up the
    TOAST situation. Legitimate uses for ALTER TABLE on system catalogs
    include setting reloptions (say, fillfactor or autovacuum settings).

    (All this still requires allow_system_table_mods, which is independent
    of this.)

    Discussion: https://www.postgresql.org/message-id/flat/e49f825b-fb25-0bc8-8afc-d5ad895c7975@2ndquadrant.com
* Fix whitespace (Peter Eisentraut, 2019-03-19)
* Fix bug in support for collation attributes on older ICU versions (Peter Eisentraut, 2019-03-19)

    Unrecognized attribute names are supposed to be ignored. But the code
    would error out on an unrecognized attribute value even if it did not
    recognize the attribute name. So unrecognized attributes wouldn't
    really be ignored unless the value happened to be one that matched a
    recognized value. This would break some important cases where the
    attribute would be processed by ucol_open() directly. Fix that and add
    a test case.

    The restructured code should also avoid compiler warnings about
    initializing a UColAttribute value to -1, because the type might be an
    unsigned enum. (reported by Andres Freund)
* Fix copyfuncs/equalfuncs support for VacuumStmt. (Robert Haas, 2019-03-18)

    Commit 6776142a07afb4c28961f27059d800196902f5f1 failed to do this, and
    the buildfarm broke.

    Patch by me, per advice from Tom Lane and Michael Paquier.

    Discussion: http://postgr.es/m/13988.1552960403@sss.pgh.pa.us
* Implement OR REPLACE option for CREATE AGGREGATE. (Andrew Gierth, 2019-03-19)

    Aggregates have acquired a dozen or so optional attributes in recent
    years for things like parallel query and moving-aggregate mode; the
    lack of an OR REPLACE option to add or change these for an existing agg
    makes extension upgrades gratuitously hard. Rectify.
* Fix memory leak in printtup.c. (Tom Lane, 2019-03-18)

    Commit f2dec34e1 changed things so that printtup's output stringinfo
    buffer was allocated outside the per-row temporary context, not inside
    it. This creates a need to free that buffer explicitly when the temp
    context is freed, but that was overlooked. In most cases, this is all
    happening inside a portal or executor context that will go away shortly
    anyhow, but that's not always true. Notably, the stringinfo ends up
    getting leaked when JDBC uses row-at-a-time fetches. For a query that
    returns wide rows, that adds up after awhile.

    Per bug #15700 from Matthias Otterbach. Back-patch to v11 where the
    faulty code was added.

    Discussion: https://postgr.es/m/15700-8c408321a87d56bb@postgresql.org
* Revise parse tree representation for VACUUM and ANALYZE. (Robert Haas, 2019-03-18)

    Like commit f41551f61f9cf4eedd5b7173f985a3bdb4d9858c, this aims to make
    it easier to add non-Boolean options to VACUUM (or, in this case, to
    ANALYZE). Instead of building up a bitmap of options directly in the
    parser, build up a list of DefElem objects and let ExecVacuum() sort it
    out; right now, we make no use of the fact that a DefElem can carry an
    associated value, but it will be easy to make that change in the
    future.

    Masahiko Sawada
    Discussion: http://postgr.es/m/CAD21AoATE4sn0jFFH3NcfUZXkU2BMbjBWB_kDj-XWYA-LXDcQA@mail.gmail.com
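    A hedged sketch of what "let ExecVacuum() sort it out" might look like,
    using the usual parse-node idioms; the names here are illustrative
    rather than copied from the commit, and surrounding declarations are
    elided:

        ListCell   *lc;

        /* Walk the DefElem list by option name instead of testing bits
         * assembled in the parser. */
        foreach(lc, vacstmt->options)
        {
            DefElem    *opt = (DefElem *) lfirst(lc);

            if (strcmp(opt->defname, "verbose") == 0)
                options |= VACOPT_VERBOSE;
            else if (strcmp(opt->defname, "analyze") == 0)
                options |= VACOPT_ANALYZE;
            /* a future non-Boolean option could read its value with
             * defGetInt32(opt) or defGetString(opt) */
        }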
* Fold vacuum's 'int options' parameter into VacuumParams. (Robert Haas, 2019-03-18)

    Many places need both, so this allows a few functions to take one fewer
    parameter. More importantly, as soon as we add a VACUUM option that
    takes a non-Boolean parameter, we need to replace 'int options' with a
    struct, and it seems better to think of adding more fields to
    VacuumParams rather than passing around both VacuumParams and a
    separate struct as well.

    Patch by me, reviewed by Masahiko Sawada

    Discussion: http://postgr.es/m/CA+Tgmob6g6-s50fyv8E8he7APfwCYYJ4z0wbZC2yZeSz=26CYQ@mail.gmail.com
* Fix optimization of foreign-key on update actions (Peter Eisentraut, 2019-03-18)

    In RI_FKey_pk_upd_check_required(), we check among other things whether
    the old and new key are equal, so that we don't need to run cascade
    actions when nothing has actually changed. This was using the equality
    operator. But the effect of this is that if a value in the primary key
    is changed to one that "looks" different but compares as equal, the
    update is not propagated. (Examples are float -0 and 0 and
    case-insensitive text.) This appears to violate the SQL standard, and
    it also behaves inconsistently if in a multicolumn key another key is
    also updated that would cause the row to compare as not equal.

    To fix, if we are looking at the PK table in ri_KeysEqual(), then do a
    bytewise comparison similar to record_image_eq() instead of using the
    equality operators. This only makes a difference for ON UPDATE CASCADE,
    but for consistency we treat all changes to the PK the same. For the FK
    table, we continue to use the equality operators.

    Discussion: https://www.postgresql.org/message-id/flat/3326fc2e-bc02-d4c5-e3e5-e54da466e89a@2ndquadrant.com
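    A minimal sketch of the "image equality" idea for varlena values, in
    the spirit of record_image_eq(); varlena_image_eq() is a made-up name,
    and the real code also has to handle by-value and fixed-length types:

        /* Equal only if the detoasted lengths and every byte match, so
         * float -0 vs 0 or case-folded text do not count as "unchanged". */
        static bool
        varlena_image_eq(Datum a, Datum b)
        {
            struct varlena *va = PG_DETOAST_DATUM_PACKED(a);
            struct varlena *vb = PG_DETOAST_DATUM_PACKED(b);
            Size        len = VARSIZE_ANY_EXHDR(va);

            return len == VARSIZE_ANY_EXHDR(vb) &&
                memcmp(VARDATA_ANY(va), VARDATA_ANY(vb), len) == 0;
        }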
* Revert 4178d8b91c (Alexander Korotkov, 2019-03-18)

    It was agreed that the change worsened code readability.

    Discussion: https://postgr.es/m/ecfcfb5f-3233-eaa9-0c83-07056fb49a83%402ndquadrant.com
* Refactor more code logic to update the control file (Michael Paquier, 2019-03-18)

    ce6afc6 began the refactoring work by plugging pg_rewind into a central
    routine to update the control file, and left around two extra copies,
    with one in xlog.c for the backend and one in pg_resetwal.c. By adding
    an extra option to the central routine in controldata_utils.c to
    control whether a flush of the control file needs to be done, it proves
    straightforward to make xlog.c and pg_resetwal.c use the central code
    path, on the condition of moving the wait event tracking there. Hence,
    this leaves only one central code path to update the control file,
    shaving off the duplicated code.

    This refactoring actually fixes a problem in pg_resetwal. Previously,
    the control file was first removed before being recreated. So if a
    crash happened between the moment the file was removed and the moment
    the file was created, then it would have been possible to not have a
    control file anymore in the database folder.

    Author: Fabien Coelho
    Reviewed-by: Michael Paquier
    Discussion: https://postgr.es/m/alpine.DEB.2.21.1903170935210.2506@lancre
* Beautify initialization of JsonValueList and JsonLikeRegexContext (Alexander Korotkov, 2019-03-17)

    Instead of a tricky assignment to {0}, introduce special macros that
    explicitly initialize every field.