Commit log for path: root/src
* Revert "Suppress DETAIL output from a foreign_data test."Peter Geoghegan2019-03-21
| | | | This should be superseded by commit 8aa9dd74.
* Catversion bump announced in previous commit but forgotten (Alvaro Herrera, 2019-03-21)
* Fix dependency recording bug for partitioned PKs (Alvaro Herrera, 2019-03-21)

    When DefineIndex recurses to create constraints on partitions, it needs
    to use the value returned by index_constraint_create to set up partition
    dependencies.  However, in the course of fixing the
    DEPENDENCY_INTERNAL_AUTO mess, commit 1d92a0c9f7dd introduced some code
    to that function that clobbered the return value, causing the recorded
    OID to be of the wrong object.  Close examination of pg_depend after
    creating the tables leads to indescribable objects :-(

    My sin (in commit bdc3d7fa2376, while preparing for DDL deparsing in
    event triggers) was to use a variable name for the return value that's
    typically used for throwaway objects in dependency-setting calls
    ("referenced").  Fix by changing the variable names to match extended
    practice (the return value is "myself" rather than "referenced").

    The pg_upgrade test notices the problem (in an indirect way: the
    pg_dump outputs are in different order), but only if you create the
    objects in a specific way that wasn't being used in the existing tests.
    Add a stanza to leave some objects around that shows the bug.

    Catversion bump because preexisting databases might have bogus
    pg_depend entries.

    Discussion: https://postgr.es/m/20190318204235.GA30360@alvherre.pgsql
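To see the kind of pg_depend state described above, one can create a partitioned table with a primary key and inspect the dependencies recorded for the partition's index. This is an illustrative sketch only; the object names are hypothetical and not from the commit:

```sql
-- hypothetical example: a partitioned table with a PK, plus one partition
CREATE TABLE p (a int PRIMARY KEY) PARTITION BY LIST (a);
CREATE TABLE p1 PARTITION OF p FOR VALUES IN (1);

-- the partition's index should reference its parent index and constraint;
-- with the clobbered return value, the recorded OID was of the wrong object
SELECT refclassid::regclass AS refclass, refobjid, deptype
FROM pg_depend
WHERE objid = 'p1_pkey'::regclass;
```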
* Improve error reporting for DROP FUNCTION/PROCEDURE/AGGREGATE/ROUTINE. (Tom Lane, 2019-03-21)

    These commands allow the argument type list to be omitted if there is
    just one object that matches by name.  However, if that syntax was used
    with DROP IF EXISTS and there was more than one match, you got a
    "function ... does not exist, skipping" notice message rather than a
    truthful complaint about the ambiguity.  This was basically due to poor
    factorization and a rats-nest of logic, so refactor the relevant lookup
    code to make it cleaner.

    Note that this amounts to narrowing the scope of which sorts of error
    conditions IF EXISTS will bypass.  Per discussion, we only intend it to
    skip no-such-object cases, not multiple-possible-matches cases.

    Per bug #15572 from Ash Marath.  Although this definitely seems like a
    bug, it's not clear that people would thank us for changing the
    behavior in minor releases, so no back-patch.

    David Rowley, reviewed by Julien Rouhaud and Pavel Stehule

    Discussion: https://postgr.es/m/15572-ed1b9ed09503de8a@postgresql.org
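A sketch of the change in behavior, using hypothetical overloaded functions; previously the IF EXISTS form reported "does not exist, skipping" here:

```sql
CREATE FUNCTION f(int) RETURNS int LANGUAGE sql AS 'SELECT $1';
CREATE FUNCTION f(text) RETURNS text LANGUAGE sql AS 'SELECT $1';

-- argument list omitted: two candidates match by name, so this now raises
-- an error about the ambiguity instead of a misleading "skipping" notice
DROP FUNCTION IF EXISTS f;
```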
* Add DNS SRV support for LDAP server discovery. (Thomas Munro, 2019-03-21)

    LDAP servers can be advertised on a network with RFC 2782 DNS SRV
    records.  The OpenLDAP command-line tools automatically try to find
    servers that way, if no server name is provided by the user.  Teach
    PostgreSQL to do the same using OpenLDAP's support functions, when
    building with OpenLDAP.

    For now, we assume that HAVE_LDAP_INITIALIZE (an OpenLDAP extension
    available since OpenLDAP 2.0 and also present in Apple LDAP) implies
    that you also have ldap_domain2hostlist() (which arrived in the same
    OpenLDAP version and is also present in Apple LDAP).

    Author: Thomas Munro
    Reviewed-by: Daniel Gustafsson
    Discussion: https://postgr.es/m/CAEepm=2hAnSfhdsd6vXsM6VZVN0br-FbAZ-O+Swk18S5HkCP=A@mail.gmail.com
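A hypothetical pg_hba.conf sketch of what this enables; the domain and base DN are invented, and the exact option set is an assumption:

```
# ldapserver intentionally omitted: when built with OpenLDAP, servers are
# now located via RFC 2782 DNS SRV records (e.g. _ldap._tcp for example.net)
host all all 0.0.0.0/0 ldap ldapbasedn="dc=example,dc=net" ldapsearchattribute=uid
```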
* Sort the dependent objects before deletion in DROP OWNED BY. (Tom Lane, 2019-03-20)

    This finishes a task we left undone in commit f1ad067fc, by extending
    the delete-in-descending-OID-order rule to deletions triggered by DROP
    OWNED BY.  We've coped with machine-dependent deletion orders one time
    too many, and the new issues caused by Peter G's recent nbtree hacking
    seem like the last straw.

    Discussion: https://postgr.es/m/E1h6eep-0001Mw-Vd@gemulon.postgresql.org
* Add index_get_partition convenience function (Alvaro Herrera, 2019-03-20)

    This new function simplifies some existing coding, as well as
    supporting future patches.

    Discussion: https://postgr.es/m/201901222145.t6wws6t6vrcu@alvherre.pgsql
    Reviewed-by: Amit Langote, Jesper Pedersen
* Fix spurious compiler warning in nbtxlog.c. (Peter Geoghegan, 2019-03-20)

    Cleanup from commit dd299df8.

    Per complaint from Tom Lane.
* Suppress DETAIL output from a foreign_data test. (Peter Geoghegan, 2019-03-20)

    Unstable sort order related to changes to nbtree from commit dd299df8
    can cause two lines of DETAIL output to be in opposite-of-expected
    order.  Suppress the output using the same VERBOSITY hack that is used
    elsewhere in the foreign_data tests.

    Note that the same foreign_data.out DETAIL output was mechanically
    updated by commit dd299df8.  Only a few such changes were required,
    though.

    Per buildfarm member batfish.

    Discussion: https://postgr.es/m/CAH2-WzkCQ_MtKeOpzozj7QhhgP1unXsK8o9DMAFvDqQFEPpkYQ@mail.gmail.com
* Restore RI trigger sanity check (Alvaro Herrera, 2019-03-20)

    I unnecessarily removed this check in 3de241dba86f because I
    misunderstood what the final representation of constraints across a
    partitioning hierarchy was to be.  Put it back (in both branches).

    Discussion: https://postgr.es/m/201901222145.t6wws6t6vrcu@alvherre.pgsql
* Consider secondary factors during nbtree splits. (Peter Geoghegan, 2019-03-20)

    Teach nbtree to give some consideration to how "distinguishing"
    candidate leaf page split points are.  This should not noticeably
    affect the balance of free space within each half of the split, while
    still making suffix truncation truncate away significantly more
    attributes on average.

    The logic for choosing a leaf split point now uses a fallback mode in
    the case where the page is full of duplicates and it isn't possible to
    find even a minimally distinguishing split point.  When the page is
    full of duplicates, the split should pack the left half very tightly,
    while leaving the right half mostly empty.  Our assumption is that
    logical duplicates will almost always be inserted in ascending heap
    TID order with v4 indexes.  This strategy leaves most of the free
    space on the half of the split that will likely be where future
    logical duplicates of the same value need to be placed.

    The number of cycles added is not very noticeable.  This is important
    because deciding on a split point takes place while at least one
    exclusive buffer lock is held.  We avoid using authoritative insertion
    scankey comparisons to save cycles, unlike suffix truncation proper.
    We use a faster binary comparison instead.

    Note that even pg_upgrade'd v3 indexes make use of these
    optimizations.  Benchmarking has shown that even v3 indexes benefit,
    despite the fact that suffix truncation will only truncate non-key
    attributes in INCLUDE indexes.  Grouping relatively similar tuples
    together is beneficial in and of itself, since it reduces the number
    of leaf pages that must be accessed by subsequent index scans.

    Author: Peter Geoghegan
    Reviewed-By: Heikki Linnakangas
    Discussion: https://postgr.es/m/CAH2-WzmmoLNQOj9mAD78iQHfWLJDszHEDrAzGTUMG3mVh5xWPw@mail.gmail.com
* Make heap TID a tiebreaker nbtree index column. (Peter Geoghegan, 2019-03-20)

    Make nbtree treat all index tuples as having a heap TID attribute.
    Index searches can distinguish duplicates by heap TID, since heap TID
    is always guaranteed to be unique.  This general approach has numerous
    benefits for performance, and is prerequisite to teaching VACUUM to
    perform "retail index tuple deletion".

    Naively adding a new attribute to every pivot tuple has unacceptable
    overhead (it bloats internal pages), so suffix truncation of pivot
    tuples is added.  This will usually truncate away the "extra" heap TID
    attribute from pivot tuples during a leaf page split, and may also
    truncate away additional user attributes.  This can increase fan-out,
    especially in a multi-column index.  Truncation can only occur at the
    attribute granularity, which isn't particularly effective, but works
    well enough for now.  A future patch may add support for truncating
    "within" text attributes by generating truncated key values using new
    opclass infrastructure.

    Only new indexes (BTREE_VERSION 4 indexes) will have insertions that
    treat heap TID as a tiebreaker attribute, or will have pivot tuples
    undergo suffix truncation during a leaf page split (on-disk
    compatibility with versions 2 and 3 is preserved).  Upgrades to
    version 4 cannot be performed on-the-fly, unlike upgrades from version
    2 to version 3.  contrib/amcheck continues to work with version 2 and
    3 indexes, while also enforcing stricter invariants when verifying
    version 4 indexes.  These stricter invariants are the same invariants
    described by "3.1.12 Sequencing" from the Lehman and Yao paper.

    A later patch will enhance the logic used by nbtree to pick a split
    point.  This patch is likely to negatively impact performance without
    smarter choices around the precise point to split leaf pages at.
    Making these two mostly-distinct sets of enhancements into distinct
    commits seems like it might clarify their design, even though neither
    commit is particularly useful on its own.

    The maximum allowed size of new tuples is reduced by an amount equal
    to the space required to store an extra MAXALIGN()'d TID in a new high
    key during leaf page splits.  The user-facing definition of the "1/3
    of a page" restriction is already imprecise, and so does not need to
    be revised.  However, there should be a compatibility note in the v12
    release notes.

    Author: Peter Geoghegan
    Reviewed-By: Heikki Linnakangas, Alexander Korotkov
    Discussion: https://postgr.es/m/CAH2-WzkVb0Kom=R+88fDFb=JSxZMFvbHVC6Mn9LJ2n=X=kS-Uw@mail.gmail.com
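For example, the stricter version-4 checks are exercised through the existing contrib/amcheck interface; the index name below is hypothetical:

```sql
CREATE EXTENSION amcheck;

-- runs the Lehman & Yao invariant checks; on a BTREE_VERSION 4 index the
-- stricter invariants apply, and heapallindexed = true additionally
-- verifies that each heap tuple has a matching index entry
SELECT bt_index_check('some_btree_index', true);
```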
* Refactor nbtree insertion scankeys. (Peter Geoghegan, 2019-03-20)

    Use dedicated struct to represent nbtree insertion scan keys.  Having
    a dedicated struct makes the difference between search type scankeys
    and insertion scankeys a lot clearer, and simplifies the signature of
    several related functions.  This is based on a suggestion by Andrey
    Lepikhov.

    Streamline how unique index insertions cache binary search progress.
    Cache the state of in-progress binary searches within
    _bt_check_unique() for later instead of having callers avoid repeating
    the binary search in an ad-hoc manner.  This makes it easy to add a
    new optimization: _bt_check_unique() now falls out of its loop
    immediately in the common case where it's already clear that there
    couldn't possibly be a duplicate.

    The new _bt_check_unique() scheme makes it a lot easier to manage
    cached binary search effort afterwards, from within
    _bt_findinsertloc().  This is needed for the upcoming patch to make
    nbtree tuples unique by treating heap TID as a final tiebreaker
    column.  Unique key binary searches need to restore lower and upper
    bounds.  They cannot simply continue to use the >= lower bound as the
    offset to insert at, because the heap TID tiebreaker column must be
    used in comparisons for the restored binary search (unlike the
    original _bt_check_unique() binary search, where scankey's heap TID
    column must be omitted).

    Author: Peter Geoghegan, Heikki Linnakangas
    Reviewed-By: Heikki Linnakangas, Andrey Lepikhov
    Discussion: https://postgr.es/m/CAH2-WzmE6AhUdk9NdWBf4K3HjWXZBX3+umC7mH7+WDrKcRtsOw@mail.gmail.com
* Get rid of jsonpath_gram.h and jsonpath_scanner.h (Alexander Korotkov, 2019-03-20)

    Jsonpath grammar and scanner are both quite small.  It isn't worth the
    complexity to compile them separately.  This commit makes grammar and
    scanner be compiled at once.  Therefore, jsonpath_gram.h and
    jsonpath_scanner.h are no longer needed.  This commit also does some
    reorganization of code in jsonpath_gram.y.

    Discussion: https://postgr.es/m/d47b2023-3ecb-5f04-d253-d557547cf74f%402ndQuadrant.com
* Remove ambiguity for jsonb_path_match() and jsonb_path_exists() (Alexander Korotkov, 2019-03-20)

    There are two-argument and four-argument versions of
    jsonb_path_match() and jsonb_path_exists().  But the four-argument
    versions have optional 3rd and 4th arguments, which leads to
    ambiguity.  At the same time, the two-argument versions are needed
    only for the @@ and @? operators.  So, rename the two-argument
    versions to remove the ambiguity.

    Catversion is bumped.
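A sketch of the resulting interface; the operators take exactly two arguments, while the four-argument functions expose the optional vars and silent parameters:

```sql
-- operator forms, backed by the renamed two-argument functions
SELECT '{"a": 1}'::jsonb @? '$.a';        -- does the path yield any item?
SELECT '{"a": 1}'::jsonb @@ '$.a == 1';   -- does the predicate hold?

-- four-argument form: a vars object supplying $x, plus the "silent" flag
SELECT jsonb_path_exists('{"a": 1}', '$.a ? (@ > $x)', '{"x": 0}', false);
```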
* Restructure libpq's handling of send failures. (Tom Lane, 2019-03-19)

    Originally, if libpq got a failure (e.g., ECONNRESET) while trying to
    send data to the server, it would just report that and wash its hands
    of the matter.  It was soon found that that wasn't a very pleasant way
    of coping with server-initiated disconnections, so we introduced a
    hack (pqHandleSendFailure) in the code that sends queries to make it
    peek ahead for server error reports before reporting the send failure.

    It now emerges that related cases can occur during connection setup;
    in particular, as of TLS 1.3 it's unsafe to assume that SSL connection
    failures will be reported by SSL_connect rather than during our first
    send attempt.  We could have fixed that in a hacky way by applying
    pqHandleSendFailure after a startup packet send failure, but (a)
    pqHandleSendFailure explicitly disclaims suitability for use in any
    state except query startup, and (b) the problem still potentially
    exists for other send attempts in libpq.

    Instead, let's fix this in a more general fashion by eliminating
    pqHandleSendFailure altogether, and instead arranging to postpone all
    reports of send failures in libpq until after we've made an attempt to
    read and process server messages.  The send failure won't be reported
    at all if we find a server message or detect input EOF.

    (Note: this removes one of the reasons why libpq typically overwrites,
    rather than appending to, conn->errorMessage: pqHandleSendFailure
    needed that behavior so that the send failure report would be replaced
    if we got a server message or read failure report.  Eventually I'd
    like to get rid of that overwrite behavior altogether, but today is
    not that day.  For the moment, pqSendSome is assuming that its callees
    will overwrite not append to conn->errorMessage.)

    Possibly this change should get back-patched someday; but it needs
    testing first, so let's not consider that till after v12 beta.

    Discussion: https://postgr.es/m/CAEepm=2n6Nv+5tFfe8YnkUm1fXgvxR0Mm1FoD+QKG-vLNGLyKg@mail.gmail.com
* Rename typedef in jsonpath_gram.y from "string" to "JsonPathString" (Alexander Korotkov, 2019-03-19)

    The reason is the same as in 75c57058b0.
* Tweak nbtsearch.c function prototype order. (Peter Geoghegan, 2019-03-19)

    nbtsearch.c's static function prototypes were slightly out of order.
    Make the order consistent with static function definition order.
* Make checkpoint requests more robust. (Tom Lane, 2019-03-19)

    Commit 6f6a6d8b1 introduced a delay of up to 2 seconds if we're trying
    to request a checkpoint but the checkpointer hasn't started yet (or,
    much less likely, our kill() call fails).  However buildfarm
    experience shows that that's not quite enough for slow or
    heavily-loaded machines.  There's no good reason to assume that the
    checkpointer won't start eventually, so we may as well make the
    timeout much longer, say 60 sec.

    However, if the caller didn't say CHECKPOINT_WAIT, it seems like a bad
    idea to be waiting at all, much less for as long as 60 sec.  We can
    remove the need for that, and make this whole thing more robust, by
    adjusting the code so that the existence of a pending checkpoint
    request is clear from the contents of shared memory, and making sure
    that the checkpointer process will notice it at startup even if it did
    not get a signal.  In this way there's no need for a
    non-CHECKPOINT_WAIT call to wait at all; if it can't send the signal,
    it can nonetheless assume that the checkpointer will eventually
    service the request.

    A potential downside of this change is that "kill -INT" on the
    checkpointer process is no longer enough to trigger a checkpoint,
    should anyone be relying on something so hacky.  But there's no
    obvious reason to do it like that rather than issuing a plain old
    CHECKPOINT command, so we'll assume that nobody is.  There doesn't
    seem to be a way to preserve this undocumented quasi-feature without
    introducing race conditions.

    Since a principal reason for messing with this is to prevent
    intermittent buildfarm failures, back-patch to all supported branches.

    Discussion: https://postgr.es/m/27830.1552752475@sss.pgh.pa.us
* Reorder LOCALLOCK structure members to compact the size (Peter Eisentraut, 2019-03-19)

    Save 8 bytes (on x86-64) by filling up padding holes.

    Author: Takayuki Tsunakawa <tsunakawa.takay@jp.fujitsu.com>
    Discussion: https://www.postgresql.org/message-id/20190219001639.ft7kxir2iz644alf@alap3.anarazel.de
* Rename typedef in jsonpath_scan.l from "keyword" to "JsonPathKeyword" (Alexander Korotkov, 2019-03-19)

    A typedef name should be unique and should not coincide with variable
    names across all the sources.  That makes both pg_indent and debuggers
    happy.

    Discussion: https://postgr.es/m/23865.1552936099%40sss.pgh.pa.us
* Ignore attempts to add TOAST table to shared or catalog tables (Peter Eisentraut, 2019-03-19)

    Running ALTER TABLE on any table will check if a TOAST table needs to
    be added.  On shared tables, this would previously fail, thus
    effectively disabling ALTER TABLE for those tables.  On (non-shared)
    system catalogs, on the other hand, it would add a TOAST table, even
    though we don't really want TOAST tables on some system catalogs.  In
    some cases, it would also fail with an error "AccessExclusiveLock
    required to add toast table.", depending on what locks the ALTER TABLE
    actions had already taken.

    So instead, just ignore attempts to add TOAST tables to such tables,
    outside of bootstrap mode, pretending they don't need one.  This
    allows running ALTER TABLE on such tables without messing up the TOAST
    situation.  Legitimate uses for ALTER TABLE on system catalogs include
    setting reloptions (say, fillfactor or autovacuum settings).

    (All this still requires allow_system_table_mods, which is independent
    of this.)

    Discussion: https://www.postgresql.org/message-id/flat/e49f825b-fb25-0bc8-8afc-d5ad895c7975@2ndquadrant.com
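For example, the reloptions use case called out above now works; this sketch assumes a server running with allow_system_table_mods enabled:

```sql
-- previously the implicit "does this table need a TOAST table?" check
-- could error out; it is now skipped for catalogs outside bootstrap mode
ALTER TABLE pg_attribute SET (fillfactor = 90);
```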
* Fix whitespace (Peter Eisentraut, 2019-03-19)
* Fix bug in support for collation attributes on older ICU versions (Peter Eisentraut, 2019-03-19)

    Unrecognized attribute names are supposed to be ignored.  But the code
    would error out on an unrecognized attribute value even if it did not
    recognize the attribute name.  So unrecognized attributes wouldn't
    really be ignored unless the value happened to be one that matched a
    recognized value.  This would break some important cases where the
    attribute would be processed by ucol_open() directly.  Fix that and
    add a test case.

    The restructured code should also avoid compiler warnings about
    initializing a UColAttribute value to -1, because the type might be an
    unsigned enum.

    (reported by Andres Freund)
* Fix copyfuncs/equalfuncs support for VacuumStmt. (Robert Haas, 2019-03-18)

    Commit 6776142a07afb4c28961f27059d800196902f5f1 failed to do this, and
    the buildfarm broke.

    Patch by me, per advice from Tom Lane and Michael Paquier.

    Discussion: http://postgr.es/m/13988.1552960403@sss.pgh.pa.us
* Implement OR REPLACE option for CREATE AGGREGATE. (Andrew Gierth, 2019-03-19)

    Aggregates have acquired a dozen or so optional attributes in recent
    years for things like parallel query and moving-aggregate mode; the
    lack of an OR REPLACE option to add or change these for an existing
    agg makes extension upgrades gratuitously hard.  Rectify.
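A sketch of the upgrade pattern this enables; the aggregate and its attributes are hypothetical:

```sql
CREATE AGGREGATE my_sum (int4) (
    sfunc = int4pl,
    stype = int4,
    initcond = '0'
);

-- an extension upgrade can now add optional attributes in place, e.g.
-- enabling parallel aggregation, without dropping dependent objects
CREATE OR REPLACE AGGREGATE my_sum (int4) (
    sfunc = int4pl,
    stype = int4,
    initcond = '0',
    combinefunc = int4pl,
    parallel = safe
);
```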
* Fix memory leak in printtup.c. (Tom Lane, 2019-03-18)

    Commit f2dec34e1 changed things so that printtup's output stringinfo
    buffer was allocated outside the per-row temporary context, not inside
    it.  This creates a need to free that buffer explicitly when the temp
    context is freed, but that was overlooked.  In most cases, this is all
    happening inside a portal or executor context that will go away
    shortly anyhow, but that's not always true.  Notably, the stringinfo
    ends up getting leaked when JDBC uses row-at-a-time fetches.  For a
    query that returns wide rows, that adds up after awhile.

    Per bug #15700 from Matthias Otterbach.  Back-patch to v11 where the
    faulty code was added.

    Discussion: https://postgr.es/m/15700-8c408321a87d56bb@postgresql.org
* Revise parse tree representation for VACUUM and ANALYZE. (Robert Haas, 2019-03-18)

    Like commit f41551f61f9cf4eedd5b7173f985a3bdb4d9858c, this aims to
    make it easier to add non-Boolean options to VACUUM (or, in this case,
    to ANALYZE).  Instead of building up a bitmap of options directly in
    the parser, build up a list of DefElem objects and let ExecVacuum()
    sort it out; right now, we make no use of the fact that a DefElem can
    carry an associated value, but it will be easy to make that change in
    the future.

    Masahiko Sawada

    Discussion: http://postgr.es/m/CAD21AoATE4sn0jFFH3NcfUZXkU2BMbjBWB_kDj-XWYA-LXDcQA@mail.gmail.com
* Fold vacuum's 'int options' parameter into VacuumParams. (Robert Haas, 2019-03-18)

    Many places need both, so this allows a few functions to take one
    fewer parameter.  More importantly, as soon as we add a VACUUM option
    that takes a non-Boolean parameter, we need to replace 'int options'
    with a struct, and it seems better to think of adding more fields to
    VacuumParams rather than passing around both VacuumParams and a
    separate struct as well.

    Patch by me, reviewed by Masahiko Sawada

    Discussion: http://postgr.es/m/CA+Tgmob6g6-s50fyv8E8he7APfwCYYJ4z0wbZC2yZeSz=26CYQ@mail.gmail.com
* Fix optimization of foreign-key on update actions (Peter Eisentraut, 2019-03-18)

    In RI_FKey_pk_upd_check_required(), we check among other things
    whether the old and new key are equal, so that we don't need to run
    cascade actions when nothing has actually changed.  This was using the
    equality operator.  But the effect of this is that if a value in the
    primary key is changed to one that "looks" different but compares as
    equal, the update is not propagated.  (Examples are float -0 and 0 and
    case-insensitive text.)  This appears to violate the SQL standard, and
    it also behaves inconsistently if in a multicolumn key another key is
    also updated that would cause the row to compare as not equal.

    To fix, if we are looking at the PK table in ri_KeysEqual(), then do a
    bytewise comparison similar to record_image_eq() instead of using the
    equality operators.  This only makes a difference for ON UPDATE
    CASCADE, but for consistency we treat all changes to the PK the same.
    For the FK table, we continue to use the equality operators.

    Discussion: https://www.postgresql.org/message-id/flat/3326fc2e-bc02-d4c5-e3e5-e54da466e89a@2ndquadrant.com
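A sketch of the float -0/0 case mentioned above; the table names are hypothetical:

```sql
CREATE TABLE pk (f float8 PRIMARY KEY);
CREATE TABLE fk (f float8 REFERENCES pk ON UPDATE CASCADE);
INSERT INTO pk VALUES (0);
INSERT INTO fk VALUES (0);

-- '-0'::float8 = '0'::float8 is true, so the old equality-based test saw
-- "no change" and skipped the cascade; the bytewise comparison propagates it
UPDATE pk SET f = '-0';
SELECT * FROM fk;   -- now shows -0
```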
* Remove unused macro (Peter Eisentraut, 2019-03-18)

    It has never been used.
* Revert 4178d8b91c (Alexander Korotkov, 2019-03-18)

    Per agreement that it worsened code readability.

    Discussion: https://postgr.es/m/ecfcfb5f-3233-eaa9-0c83-07056fb49a83%402ndquadrant.com
* Refactor more code logic to update the control file (Michael Paquier, 2019-03-18)

    ce6afc6 has begun the refactoring work by plugging pg_rewind into a
    central routine to update the control file, and left around two extra
    copies, with one in xlog.c for the backend and one in pg_resetwal.c.
    By adding an extra option to the central routine in
    controldata_utils.c to control whether a flush of the control file
    needs to be done, it proves to be straightforward to make xlog.c and
    pg_resetwal.c use the central code path, on condition of moving the
    wait event tracking there.  Hence, this allows having only one central
    code path to update the control file, shaving the duplicated code.

    This refactoring actually fixes a problem in pg_resetwal.  Previously,
    the control file was first removed before being recreated.  So if a
    crash happened between the moment the file was removed and the moment
    the file was created, then it would have been possible to not have a
    control file anymore in the database folder.

    Author: Fabien Coelho
    Reviewed-by: Michael Paquier
    Discussion: https://postgr.es/m/alpine.DEB.2.21.1903170935210.2506@lancre
* Fix pg_rewind when rewinding new database with tables included (Michael Paquier, 2019-03-18)

    This fixes an issue introduced by 266b6ac, which has added filters to
    exclude file patterns on the target and source data directories to
    reduce the number of files transferred.  Filters get applied to both
    the target and source data files, and include pg_internal.init, which
    is present for each database once relations are created on it.
    However, if the target differed from the source with at least one new
    database with relations, the rewind would fail due to the exclusion
    filters applied on the target files, causing pg_internal.init to still
    be present on the target database folder, while its contents should
    have been completely removed so that nothing remains inside at the
    time of the folder deletion.

    Applying exclusion filters on the source files is fine, because this
    way the amount of data copied from the source to the target is
    reduced.  And actually, not applying the filters on the target is what
    pg_rewind should do, because this causes such files to be
    automatically removed during the rewind on the target.  Exclusion
    filters apply to paths which are removed or recreated automatically at
    startup, so removing all those files on the target during the rewind
    is a win.

    The existing set of TAP tests already stresses the rewind of
    databases, but it did not include any tables on those newly-created
    databases.  Creating extra tables in this case is enough to reproduce
    the failure, so the existing tests are extended to close the gap.

    Reported-by: Mithun Cy
    Author: Michael Paquier
    Discussion: https://postgr.es/m/CADq3xVYt6_pO7ZzmjOqPgY9HWsL=kLd-_tNyMtdfjKqEALDyTA@mail.gmail.com
    Backpatch-through: 11
* Error out in pg_checksums on incompatible block size (Michael Paquier, 2019-03-18)

    pg_checksums is compiled with a given block size and has a hard
    dependency on it, per the way checksums are calculated via
    checksum_impl.h.  Trying to use the tool on a data folder which does
    not have the same block size would result in incorrect checksum
    calculations and/or block read errors, meaning that the data folder is
    corrupted.  This is harmless, as checksums are only checked for now,
    but very confusing for the user, so issue an error properly if the
    block size used at compilation and the block size used in the data
    folder do not match.

    Reported-by: Sergei Kornilov
    Author: Michael Banck, Michael Paquier
    Reviewed-by: Fabien Coelho, Magnus Hagander
    Discussion: https://postgr.es/m/20190317054657.GA3357@paquier.xyz
    Backpatch-through: 11
* Beautify initialization of JsonValueList and JsonLikeRegexContext (Alexander Korotkov, 2019-03-17)

    Instead of a tricky assignment to {0}, introduce special macros which
    explicitly initialize every field.
* Apply const qualifier to keywords of jsonpath_scan.l (Alexander Korotkov, 2019-03-17)

    Discussion: https://postgr.es/m/CAEeOP_a-Pfy%3DU9-f%3DgQ0AsB8FrxrC8xCTVq%2BeO71-2VoWP5cag%40mail.gmail.com
    Author: Mark G
* Remove some make rules added in 142c400d72 (Alexander Korotkov, 2019-03-17)

    Because they break the build of jsonpath_scan.c.
* Fix make rules for jsonpath grammar making them similar to SQL grammar (Alexander Korotkov, 2019-03-17)

    Reported-by: Jeff Janes, Tom Lane
    Discussion: https://postgr.es/m/CAMkU%3D1w1qBvoW82ZTFpAKae027R-2OHw-m6ALe0VQRNAFueBVA%40mail.gmail.com
* Add support for collation attributes on older ICU versions (Peter Eisentraut, 2019-03-17)

    Starting in ICU 54, collation customization attributes can be
    specified in the locale string, for example
    "@colStrength=primary;colCaseLevel=yes".

    Add support for this for older ICU versions as well, by adding some
    minimal parsing of the attributes in the locale string and calling
    ucol_setAttribute() on them.  This is essentially what newer ICU
    versions do internally in ucol_open().  This way we can offer this
    functionality in a consistent way in all ICU versions supported by
    PostgreSQL.

    Also add some tests for ICU collation customization.

    Reported-by: Daniel Verite <daniel@manitou-mail.org>
    Discussion: https://www.postgresql.org/message-id/0270ebd4-f67c-8774-1a5a-91adfb9bb41f@2ndquadrant.com
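A sketch using the attribute string quoted in the message; with this change it behaves the same on ICU versions before 54:

```sql
-- on older ICU, the attributes are now parsed out of the locale string
-- and applied via ucol_setAttribute() instead of being silently ignored
CREATE COLLATION mycoll (
    provider = icu,
    locale = '@colStrength=primary;colCaseLevel=yes'
);

-- primary strength: case differences rank below letter differences
SELECT s FROM (VALUES ('a'), ('B'), ('A'), ('b')) AS t(s)
ORDER BY s COLLATE mycoll;
```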
* Fix compiler warning in jsonpath_exec.c (Alexander Korotkov, 2019-03-17)

    Warning was observed in gcc 4.4.6, gcc 4.4.7 and probably others.

    Reported-by: Tom Lane
    Discussion: https://postgr.es/m/25151.1552751426%40sss.pgh.pa.us
* Remove another unnecessary application_name specification in test (Peter Eisentraut, 2019-03-16)

    see 8e93a516e68bac3c329fd2e7f423ee9aceca943a
* Further adjust the tests for the hyperbolic functions. (Tom Lane, 2019-03-16)

    It looks like we can leave in most of the test cases for Infinity/NaN
    inputs, but buildfarm member jacana gets the wrong answer for
    acosh(Inf).  It's not worth carrying a variant expected file for that,
    so just disable that one test.

    Discussion: https://postgr.es/m/E1h3nUY-0000sM-Vf@gemulon.postgresql.org
* Suppress -Wimplicit-fallthrough warnings in new jsonpath code. (Tom Lane, 2019-03-16)

    Per buildfarm.  See commit 41c912cad for precedent.
* Update copyright year in files added by 1bb5e78218. (Amit Kapila, 2019-03-16)
* Numeric error suppression in jsonpath (Alexander Korotkov, 2019-03-16)

    Add support for numeric error suppression to jsonpath, as required by
    the standard.  This commit doesn't use PG_TRY()/PG_CATCH() to
    implement that.  Instead, it provides internal versions of the numeric
    functions used, which support error suppression.

    Discussion: https://postgr.es/m/fcc6fc6a-b497-f39a-923d-aa34d0c588e8%402ndQuadrant.com
    Author: Alexander Korotkov, Nikita Glukhov
    Reviewed-by: Tomas Vondra
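A minimal sketch of the effect, assuming standard-conforming suppression inside filter expressions:

```sql
-- division by zero while testing the element 0 is suppressed, so that
-- element is filtered out instead of aborting the whole query
SELECT jsonb_path_query_array('[0, 2]', '$[*] ? (1 / @ > 0)');
-- expected result: [2]
```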
* Partial implementation of SQL/JSON path language (Alexander Korotkov, 2019-03-16)

    The SQL:2016 standard, among other things, contains a set of SQL/JSON
    features for JSON processing inside of a relational database.  The
    core of SQL/JSON is the JSON path language, allowing access to parts
    of JSON documents and computations over them.  This commit implements
    partial support for the JSON path language as a separate datatype
    called "jsonpath".  The implementation is partial because it's lacking
    datetime support and suppression of numeric errors.  Missing features
    will be added later by separate commits.

    Support of SQL/JSON features requires implementation of separate
    nodes, and it will be considered in subsequent patches.  This commit
    includes the following set of plain functions, allowing execution of
    jsonpath over jsonb values:

      * jsonb_path_exists(jsonb, jsonpath[, jsonb, bool]),
      * jsonb_path_match(jsonb, jsonpath[, jsonb, bool]),
      * jsonb_path_query(jsonb, jsonpath[, jsonb, bool]),
      * jsonb_path_query_array(jsonb, jsonpath[, jsonb, bool]),
      * jsonb_path_query_first(jsonb, jsonpath[, jsonb, bool]).

    This commit also implements "jsonb @? jsonpath" and "jsonb @@
    jsonpath", which are wrappers over jsonpath_exists(jsonb, jsonpath)
    and jsonpath_predicate(jsonb, jsonpath) respectively.  These operators
    will have index support (implemented in subsequent patches).

    Catversion bumped, to add new functions and operators.

    Code was written by Nikita Glukhov and Teodor Sigaev, revised by me.
    Documentation was written by Oleg Bartunov and Liudmila Mantrova.  The
    work was inspired by Oleg Bartunov.

    Discussion: https://postgr.es/m/fcc6fc6a-b497-f39a-923d-aa34d0c588e8%402ndQuadrant.com
    Author: Nikita Glukhov, Teodor Sigaev, Alexander Korotkov, Oleg Bartunov, Liudmila Mantrova
    Reviewed-by: Tomas Vondra, Andrew Dunstan, Pavel Stehule, Alexander Korotkov
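A few hypothetical calls illustrating the functions and operators listed above:

```sql
-- set-returning: each matching item comes back as its own row
SELECT jsonb_path_query('{"a": [1, 2, 3]}', '$.a[*] ? (@ > 1)');

-- same matches, aggregated into a single jsonb array
SELECT jsonb_path_query_array('{"a": [1, 2, 3]}', '$.a[*] ? (@ > 1)');

-- operator wrappers
SELECT '{"a": [1, 2, 3]}'::jsonb @? '$.a[*] ? (@ > 2)';  -- any match?
SELECT '{"a": [1, 2, 3]}'::jsonb @@ '$.a[2] > 2';        -- predicate result
```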
* Avoid casting away a const (Peter Eisentraut, 2019-03-16)
* Don't propagate PGAPPNAME through pg_ctl in tests (Peter Eisentraut, 2019-03-16)

    When libpq is loaded in the server (for instance, by
    libpqwalreceiver), it may use libpq environment variables set in the
    postmaster environment for connection parameter defaults.  This has
    some confusing effects in our test suites.  For example, the TAP test
    infrastructure sets PGAPPNAME to allow identifying clients in the
    server log.  But this environment variable is also inherited by
    temporary servers started with pg_ctl and is then in turn used by
    libpqwalreceiver as the application_name for connecting to remote
    servers, where it then shows up in pg_stat_replication and is relevant
    for things like synchronous_standby_names.  Replication already has a
    suitable default for application_name, and overriding that
    accidentally then requires the individual test cases to re-override
    it, which is all very confusing and unnecessary.

    To fix, unset PGAPPNAME temporarily before running pg_ctl start or
    restart in the tests.  More comprehensive approaches like unsetting
    all environment variables in pg_ctl were considered but might be too
    complicated to achieve portably.

    The now unnecessary re-overriding of application_name by test cases is
    also removed.

    Reviewed-by: Noah Misch <noah@leadboat.com>
    Discussion: https://www.postgresql.org/message-id/flat/33383613-690e-6f1b-d5ba-4957ff40f6ce@2ndquadrant.com
* Use correct connection name variable in ecpglib. (Michael Meskes, 2019-03-16)

    Fixed-by: Kuroda-san <kuroda.hayato@jp.fujitsu.com>