path: root/src/backend/utils/adt
...
* Optimize escaping of JSON strings  (David Rowley, 2024-07-27)

  There were quite a few places where we either had a non-NUL-terminated string or a text Datum which we needed to call escape_json() on. Many of these places required that a temporary string was created due to the fact that escape_json() needs a NUL-terminated cstring. For text types, those first had to be converted to cstring before calling escape_json() on them.

  Here we introduce two new functions to make escaping JSON more optimal:

  escape_json_text() can be given a text Datum to append onto the given buffer. This is more optimal as it foregoes the need to convert the text Datum into a cstring. A temporary allocation is only required if the text Datum needs to be detoasted.

  escape_json_with_len() can be used when the length of the cstring is already known or the given string isn't NUL-terminated. Having this allows various places which were creating a temporary NUL-terminated string to just call escape_json_with_len() without any temporary memory allocations.

  Discussion: https://postgr.es/m/CAApHDvpLXwMZvbCKcdGfU9XQjGCDm7tFpRdTXuB9PVgpNUYfEQ@mail.gmail.com
  Reviewed-by: Melih Mutlu, Heikki Linnakangas

* SQL/JSON: Respect OMIT QUOTES when RETURNING domains over jsonb  (Amit Langote, 2024-07-26)

  populate_domain() didn't take into account the omit_quotes flag passed down to json_populate_type() by ExecEvalJsonCoercion(), which led to incorrect behavior when the RETURNING type is a domain over jsonb. Fix that by adding a new function parameter to populate_domain() so that the flag can be passed down to it.

  Reported-by: Jian He <jian.universality@gmail.com>
  Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
  Backpatch-through: 17

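  For illustration, with the fix a query along these lines (the domain name is invented for the example) should now strip the quotes and re-parse the value, matching the behavior of RETURNING jsonb:

      CREATE DOMAIN jsonb_dom AS jsonb;
      SELECT JSON_QUERY(jsonb '{"a": "[1, 2]"}', '$.a'
                        RETURNING jsonb_dom OMIT QUOTES);
      -- expected result: [1, 2] rather than "[1, 2]"
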
* Fix -Wmissing-variable-declarations warnings for float.c special case  (Peter Eisentraut, 2024-07-25)

  This adds extern declarations for the global variables defined in float.c but not meant for external use. This is a workaround to be able to add -Wmissing-variable-declarations to the global set of warning options in the near future.

  Reviewed-by: Andres Freund <andres@anarazel.de>
  Discussion: https://www.postgresql.org/message-id/flat/e0a62134-83da-4ba4-8cdb-ceb0111c95ce@eisentraut.org

* Add path column to pg_backend_memory_contexts view  (David Rowley, 2024-07-25)

  "path" provides a reliable method of determining the parent/child relationships between memory contexts. Previously this could be done in a non-reliable way by writing a recursive query and joining the "parent" and "name" columns. This wasn't reliable as the names were not unique, which could result in joining to the wrong parent.

  To make this reliable, "path" stores an array of numerical identifiers starting with the identifier for TopLevelMemoryContext. It contains an element for each intermediate parent between that and the current context.

  Incompatibility: Here we also adjust the "level" column to make it 1-based rather than 0-based. A 1-based level provides a convenient way to access elements in the "path" array, e.g. path[level] gives the identifier for the current context.

  Identifiers are not stable across multiple evaluations of the view. In an attempt to make these more stable for ad-hoc queries, the identifiers are assigned breadth-first. Contexts closer to TopLevelMemoryContext are less likely to change between queries and during queries.

  Author: Melih Mutlu <m.melihmutlu@gmail.com>
  Discussion: https://postgr.es/m/CAGPVpCThLyOsj3e_gYEvLoHkr5w=tadDiN_=z2OwsK3VJppeBA@mail.gmail.com
  Reviewed-by: Andres Freund, Stephen Frost, Atsushi Torikoshi
  Reviewed-by: Michael Paquier, Robert Haas, David Rowley

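  For example, since path[level] holds the current context's identifier, a query along these lines (an illustration, not from the commit) can reliably collect a context together with all of its descendants:

      SELECT child.name, child.level, child.total_bytes
      FROM pg_backend_memory_contexts AS parent
      JOIN pg_backend_memory_contexts AS child
        ON parent.path[parent.level] = ANY (child.path)
      WHERE parent.name = 'CacheMemoryContext';
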
* Detect integer overflow in array_set_slice().  (Nathan Bossart, 2024-07-23)

  When provided an empty initial array, array_set_slice() fails to check for overflow when computing the new array's dimensions. While such overflows are ordinarily caught by ArrayGetNItems(), commands with the following form are accepted:

      INSERT INTO t (i[-2147483648:2147483647]) VALUES ('{}');

  To fix, perform the hazardous computations using overflow-detecting arithmetic routines. As with commit 18b585155a, the added test cases generate errors that include a platform-dependent value, so we again use psql's VERBOSITY parameter to suppress printing the message text.

  Reported-by: Alexander Lakhin
  Author: Joseph Koshakow
  Reviewed-by: Jian He
  Discussion: https://postgr.es/m/31ad2cd1-db94-bdb3-f91a-65ffdb4bef95%40gmail.com
  Backpatch-through: 12

* Replace some strtok() with strsep()  (Peter Eisentraut, 2024-07-22)

  strtok() considers adjacent delimiters to be one delimiter, which is arguably the wrong behavior in some cases. Replace with strsep(), which has the right behavior: adjacent delimiters create an empty token.

  Affected by this are parsing of:

  - Stored SCRAM secrets ("SCRAM-SHA-256$<iterations>:<salt>$<storedkey>:<serverkey>")
  - ICU collation attributes ("und@colStrength=primary;colCaseLevel=yes") for ICU older than version 54
  - PG_COLORS environment variable ("error=01;31:warning=01;35:note=01;36:locus=01")
  - pg_regress command-line options with comma-separated list arguments (--dbname, --create-role) (currently only used by pg_regress_ecpg)

  Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
  Reviewed-by: David Steele <david@pgmasters.net>
  Discussion: https://www.postgresql.org/message-id/flat/79692bf9-17d3-41e6-b9c9-fc8c3944222a@eisentraut.org

* Add overflow checks to money type.  (Nathan Bossart, 2024-07-19)

  None of the arithmetic functions for the money type handle overflow. This commit introduces several helper functions with overflow checking and makes use of them in the money type's arithmetic functions.

  Fixes bug #18240.

  Reported-by: Alexander Lakhin
  Author: Joseph Koshakow
  Discussion: https://postgr.es/m/18240-c5da758d7dc1ecf0%40postgresql.org
  Discussion: https://postgr.es/m/CAAvxfHdBPOyEGS7s%2Bxf4iaW0-cgiq25jpYdWBqQqvLtLe_t6tw%40mail.gmail.com
  Backpatch-through: 12

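  With the checks in place, arithmetic that exceeds the range of the money type (roughly +/- 92233720368547758.07) should now fail with an out-of-range error instead of silently wrapping; for example:

      SELECT '92233720368547758.07'::money + '0.01'::money;
      -- now raises an "out of range" error rather than returning a wrapped value
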
* Add min and max aggregates for composite types (records).  (Tom Lane, 2024-07-11)

  Like min/max for arrays, these are just thin wrappers around the existing btree comparison function for records.

  Aleksander Alekseev
  Discussion: https://postgr.es/m/CAO=iB8L4WYSNxCJ8GURRjQsrXEQ2-zn3FiCsh2LMqvWq2WcONg@mail.gmail.com

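  A minimal illustration (ad-hoc row values; ordering follows the existing btree comparison rules for records, i.e. column by column from left to right):

      SELECT min(v), max(v)
      FROM (VALUES (ROW(1, 'b'::text)),
                   (ROW(2, 'a'::text)),
                   (ROW(1, 'a'::text))) AS t(v);
      -- min should be (1,a), max should be (2,a)
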
* Improve the numeric width_bucket() computation.  (Dean Rasheed, 2024-07-10)

  Formerly, the computation of the bucket index involved calling div_var() with a scale determined by select_div_scale(), and then taking the floor of the result. That involved computing anything from 16 to 1000 digits after the decimal point, only for floor_var() to throw them away. In addition, the quotient was computed with rounding in the final digit, which meant that in rare cases the whole result could round up to the wrong bucket, and could exceed count. Thus it was also necessary to clamp the result to the range [1, count], though that didn't prevent the result being in the wrong internal bucket.

  Instead, compute the quotient using floor division, which guarantees the correct result, as specified by the SQL spec, and doesn't need to be clamped. This is both much simpler and more efficient, since it no longer computes any quotient digits after the decimal point.

  In addition, it is not necessary to have separate code to handle reversed bounds, since the signs cancel out when dividing.

  As with b0e9e4d76c and a2a0c7c29e, no back-patch.

  Dean Rasheed, reviewed by Joel Jacobson.
  Discussion: https://postgr.es/m/CAEZATCVbJH%2BLE9EXW8Rk3AxLe%3DjbOk2yrT_AUJGGh5Rah6zoeg%40mail.gmail.com

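  The bucket index for an in-range operand is essentially floor((operand - low) * count / (high - low)) + 1, and the user-visible result is unchanged for ordinary inputs; for example:

      SELECT width_bucket(5.35, 0.024, 10.06, 5);
      -- returns 3: 5.35 falls into the third of five equal-width buckets
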
* Suppress "chunk is not well balanced" errors from libxml2.Tom Lane2024-07-09
| | | | | | | | | | | | | | | | | | | | | | | libxml2 2.13 has an entirely different rule than earlier versions about when to emit "chunk is not well balanced" errors. This causes regression test output discrepancies for three test cases that formerly provoked that error (along with others) and now don't. Closer inspection shows that at least in 2.13, this error is pretty useless because it can only be emitted after some other more-relevant error. So let's get rid of the cross-version discrepancy by just suppressing it. In case some older libxml2 version is capable of emitting this error by itself, suppress only when some other error has already been captured. Like 066e8ac6e and 6082b3d5d, this will need to be back-patched, but let's check the results in HEAD first. (The patch for xml_2.out, in particular, is blind since I can't test it here.) Erik Wienhold and Tom Lane, per report from Frank Streitzig. Discussion: https://postgr.es/m/trinity-b0161630-d230-4598-9ebc-7a23acdb37cb-1720186432160@3c-app-gmx-bap25 Discussion: https://postgr.es/m/trinity-361ba18b-541a-4fe7-bc63-655ae3a7d599-1720259822452@3c-app-gmx-bs01
* Optimise numeric multiplication for short inputs.  (Dean Rasheed, 2024-07-09)

  When either input has a small number of digits, and the exact product is requested, the speed of numeric multiplication can be increased significantly by using a faster direct multiplication algorithm. This works by fully computing each result digit in turn, starting with the least significant, and propagating the carry up. This saves cycles by not requiring a temporary buffer to store digit products, not making multiple passes over the digits of the longer input, and not requiring separate carry-propagation passes.

  For now, this is used when the shorter input has 1-4 NBASE digits (up to 13-16 decimal digits), and the longer input is of any size, which covers a lot of common real-world cases. Also, the relative benefit increases as the size of the longer input increases.

  Possible future work would be to try extending the technique to larger numbers of digits in the shorter input.

  Joel Jacobson and Dean Rasheed.
  Discussion: https://postgr.es/m/44d2ffca-d560-4919-b85a-4d07060946aa@app.fastmail.com

* Teach planner how to estimate rows for timestamp generate_series  (David Rowley, 2024-07-09)

  This provides the planner with row estimates for generate_series(TIMESTAMP, TIMESTAMP, INTERVAL), generate_series(TIMESTAMPTZ, TIMESTAMPTZ, INTERVAL) and generate_series(TIMESTAMPTZ, TIMESTAMPTZ, INTERVAL, TEXT) when the input parameter values can be estimated during planning.

  Author: David Rowley
  Reviewed-by: jian he <jian.universality@gmail.com>
  Discussion: https://postgr.es/m/CAApHDvrBE%3D%2BASo_sGYmQJ3GvO8GPvX5yxXhRS%3Dt_ybd4odFkhQ%40mail.gmail.com

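  For example, with constant arguments the planner can now derive the expected row count from the bounds and the step rather than falling back on the generic default; something along these lines:

      EXPLAIN SELECT *
      FROM generate_series(TIMESTAMP '2024-01-01', TIMESTAMP '2024-01-10',
                           INTERVAL '1 day');
      -- the Function Scan should now be estimated at rows=10 instead of the
      -- default estimate of 1000
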
* Use xmlParseInNodeContext not xmlParseBalancedChunkMemory.  (Tom Lane, 2024-07-08)

  xmlParseInNodeContext has basically the same functionality with a different API: we have to supply an xmlNode that's attached to a document rather than just the document. That's not hard though. The benefits are two:

  * Early 2.13.x releases of libxml2 contain a bug that causes xmlParseBalancedChunkMemory to return the wrong status value in some cases. This breaks our regression tests. While that bug is now fixed upstream and will probably never be seen in any production-oriented distro, it is currently a problem on some more-bleeding-edge-friendly platforms.

  * xmlParseBalancedChunkMemory is considered to depend on libxml2's semi-deprecated SAX1 APIs, and will go away when and if they do. There may already be libxml2 builds out there that lack this function.

  So there are both short- and long-term reasons to make this change.

  While here, avoid allocating an xmlParserCtxt in DOCUMENT parse mode, since that code path is not going to use it.

  Like 066e8ac6e, this will need to be back-patched. This is just a trial commit to see if the buildfarm agrees that we can use xmlParseInNodeContext unconditionally.

  Erik Wienhold and Tom Lane, per report from Frank Streitzig.
  Discussion: https://postgr.es/m/trinity-b0161630-d230-4598-9ebc-7a23acdb37cb-1720186432160@3c-app-gmx-bap25
  Discussion: https://postgr.es/m/trinity-361ba18b-541a-4fe7-bc63-655ae3a7d599-1720259822452@3c-app-gmx-bs01

* Fix scale clamping in numeric round() and trunc().  (Dean Rasheed, 2024-07-08)

  The numeric round() and trunc() functions clamp the scale argument to the range between +/- NUMERIC_MAX_RESULT_SCALE (2000), which is much smaller than the actual allowed range of type numeric. As a result, they return incorrect results when asked to round/truncate more than 2000 digits before or after the decimal point.

  Fix by using the correct upper and lower scale limits based on the actual allowed (and documented) range of type numeric.

  While at it, use the new NUMERIC_WEIGHT_MAX constant instead of SHRT_MAX in all other overflow checks, and fix a comment thinko in power_var() introduced by e54a758d24 -- the minimum value of ln_dweight is -NUMERIC_DSCALE_MAX (-16383), not -SHRT_MAX, though this doesn't affect the point being made in the comment, that the resulting local_rscale value may exceed NUMERIC_MAX_DISPLAY_SCALE (1000).

  Back-patch to all supported branches.

  Dean Rasheed, reviewed by Joel Jacobson.
  Discussion: https://postgr.es/m/CAEZATCXB%2BrDTuMjhK5ZxcouufigSc-X4tGJCBTMpZ3n%3DxxQuhg%40mail.gmail.com

* Use xmlAddChildList not xmlAddChild in XMLSERIALIZE.  (Tom Lane, 2024-07-06)

  It looks like we should have been doing this all along, but we got away with the wrong coding until libxml2 2.13.0 tightened up xmlAddChild's behavior.

  There is more stuff to be fixed to be compatible with 2.13.0, and it will all need to be back-patched. This is just a trial commit to see if the buildfarm agrees that we can use xmlAddChildList unconditionally.

  Erik Wienhold, per report from Frank Streitzig.
  Discussion: https://postgr.es/m/trinity-b0161630-d230-4598-9ebc-7a23acdb37cb-1720186432160@3c-app-gmx-bap25
  Discussion: https://postgr.es/m/trinity-361ba18b-541a-4fe7-bc63-655ae3a7d599-1720259822452@3c-app-gmx-bs01

* Assign error codes where missing for user-facing failures  (Michael Paquier, 2024-07-04)

  All the errors triggered in the code paths patched here would cause the backend to issue an internal_error errcode, which is a state that should be used only for "can't happen" situations. However, these code paths are reachable by the regression tests, and could be seen by users in valid cases.

  Some regression tests expect internal errcodes because they manipulate the backend state to cause corruption (like checksums), or use elog() because it is more convenient (like injection points); these have no need to change.

  This reduces the number of internal failures triggered in a check-world by more than half, while providing correct errcodes for these valid cases.

  Reviewed-by: Robert Haas
  Discussion: https://postgr.es/m/Zic_GNgos5sMxKoa@paquier.xyz

* Add context type field to pg_backend_memory_contexts  (David Rowley, 2024-07-01)

  Since we now (as of v17) have 4 MemoryContext types, the type of context seems like useful information to include in the pg_backend_memory_contexts view. Here we add that.

  Reviewed-by: David Christensen, Michael Paquier
  Discussion: https://postgr.es/m/CAApHDvrXX1OR09Zjb5TnB0AwCKze9exZN%3D9Nxxg1ZCVV8W-3BA%40mail.gmail.com

* SQL/JSON: Always coerce JsonExpr result at runtime  (Amit Langote, 2024-06-28)

  Instead of looking up casts at parse time for converting the result of JsonPath* query functions to the specified or the default RETURNING type, always perform the conversion at runtime using either the target type's input function or the function json_populate_type().

  There are two motivations for this change:

  1. json_populate_type() coerces to types with typmod such that any string values that exceed the length limit cause an error instead of silent truncation, which is necessary to be standard-conforming.

  2. It was possible to end up with a cast expression that doesn't support soft handling of errors, causing bugs in the handling of the ON ERROR clause.

  JsonExpr.coercion_expr, which would store the cast expression, is no longer necessary, so remove it.

  Bump catversion because stored rules change due to the above removal.

  Reported-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
  Reviewed-by: Jian He <jian.universality@gmail.com>
  Discussion: https://postgr.es/m/202405271326.5a5rprki64aw%40alvherre.pgsql

* Add an injection_points isolation test suite.  (Noah Misch, 2024-06-27)

  Make the isolation harness recognize injection_points wait events as a type of blocked state. Test an extant inplace-update bug.

  Reviewed by Robert Haas and Michael Paquier.
  Discussion: https://postgr.es/m/20240512232923.aa.nmisch@google.com

* Create waitfuncs.c for pg_isolation_test_session_is_blocked().  (Noah Misch, 2024-06-27)

  The next commit makes the function inspect an additional non-lock contention source, so it no longer fits in lockfuncs.c.

  Reviewed by Robert Haas.
  Discussion: https://postgr.es/m/20240512232923.aa.nmisch@google.com

* jsonapi: Use const char *  (Peter Eisentraut, 2024-06-21)

  Apply const qualifiers to char * arguments and fields throughout the jsonapi. This allows the top-level APIs such as pg_parse_json_incremental() to declare their input argument as const. It also reduces the number of unconstify() calls.

  Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
  Discussion: https://www.postgresql.org/message-id/flat/f732b014-f614-4600-a437-dba5a2c3738b%40eisentraut.org

* SQL/JSON: Correct jsonpath variable name matching  (Amit Langote, 2024-06-19)

  Previously, GetJsonPathVar() allowed a jsonpath expression to reference any prefix of a PASSING variable's name. For example, the following query would incorrectly work:

      SELECT JSON_QUERY(context_item, jsonpath '$xy' PASSING val AS xyz);

  The fix ensures that the length of the variable name mentioned in a jsonpath expression exactly matches the length of the PASSING variable's name before comparing the strings using strncmp().

  Reported-by: Alvaro Herrera (off-list)
  Discussion: https://postgr.es/m/CA+HiwqFGkLWMvELBH6E4SQ45qUHthgcRH6gCJL20OsYDRtFx_w@mail.gmail.com

* Fix incorrect punctuation in error message  (Peter Eisentraut, 2024-06-18)

* Improve tracking of role dependencies of pg_init_privs entries.  (Tom Lane, 2024-06-17)

  Commit 534287403 invented SHARED_DEPENDENCY_INITACL entries in pg_shdepend, but installed them only for non-owner roles mentioned in a pg_init_privs entry. This turns out to be the wrong thing, because there is nothing to cue REASSIGN OWNED to go and update pg_init_privs entries when the object's ownership is reassigned. That leads to leaving dangling entries in pg_init_privs, as reported by Hannu Krosing.

  Instead, install INITACL entries for all roles mentioned in pg_init_privs entries (except pinned roles), and change ALTER OWNER to not touch them, just as it doesn't touch pg_init_privs entries. REASSIGN OWNED will now substitute the new owner OID for the old in pg_init_privs entries. This feels like perhaps not quite the right thing, since pg_init_privs ought to be a historical record of the state of affairs just after CREATE EXTENSION. However, it's hard to see what else to do, if we don't want to disallow dropping the object's original owner. In any case this is better than the previous do-nothing behavior, and we're unlikely to come up with a superior solution in time for v17.

  While here, tighten up some coding rules about how ACLs in pg_init_privs should never be null or empty. There's not any obvious reason to allow that, and perhaps asserting that it's not so will catch some bugs. (We were previously inconsistent on the point, with some code paths taking care not to store empty ACLs and others not.)

  This leaves recordExtensionInitPrivWorker not doing anything with its ownerId argument, but we'll deal with that separately.

  catversion bump forced because of change of expected contents of pg_shdepend when pg_init_privs entries exist.

  Discussion: https://postgr.es/m/CAMT0RQSVgv48G5GArUvOVhottWqZLrvC5wBzBa4HrUdXe9VRXw@mail.gmail.com

* Teach jsonpath string() to unwrap in lax mode  (Andrew Dunstan, 2024-06-17)

  This was an omission in commit 66ea94e, and brings it into compliance with both the other methods and the standard.

  Per complaint from David Wheeler.

  Author: David Wheeler, Jeevan Chalke
  Reviewed-by: Chapman Flack
  Discussion: https://postgr.es/m/A64AE04F-4410-42B7-A141-7A7349260F4D@justatheory.com

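  With lax-mode unwrapping, applying .string() to an array now operates on its elements rather than erroring out; something like:

      SELECT jsonb_path_query('[1, 2]', '$.string()');
      -- now returns "1" and "2" instead of raising an error
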
* Fix parsing of ignored operators in websearch_to_tsquery().  (Tom Lane, 2024-06-13)

  The manual says clearly that punctuation in the input of websearch_to_tsquery() is ignored, except for the special cases of dashes and quotes. However, this failed for cases like "(foo bar) or something", or in general an ISOPERATOR character in front of the "or". We'd switch back to WAITOPERAND state, then ignore the operator character while remaining in that state, and then reach the "or" in WAITOPERAND state which (intentionally) makes us treat it as data.

  The fix is simple enough: if we see an ISOPERATOR character while in WAITOPERATOR state, we have to skip it while staying in that state. (We don't need to worry about other punctuation characters: those will be consumed as though they were words, but then rejected by lexizing.)

  In v14 and up (since commit eb086056f) we can simplify the code a bit more too, because there is no longer a reason for the WAITOPERAND state to distinguish between quoted and unquoted operands.

  Per bug #18479 from Manos Emmanouilidis. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/18479-d9b46e2fc242c33e@postgresql.org

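  With the fix, the stray parenthesis is skipped and the "or" keeps its operator meaning; for example:

      SELECT websearch_to_tsquery('english', '(foo bar) or something');
      -- "or" is now treated as the OR operator rather than as a search term
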
* Harmonize function parameter names for Postgres 17.  (Peter Geoghegan, 2024-06-12)

  Make sure that function declarations use names that exactly match the corresponding names from function definitions in a few places. These inconsistencies were all introduced during Postgres 17 development.

  pg_bsd_indent still has a couple of similar inconsistencies, which I (pgeoghegan) have left untouched for now.

  This commit was written with help from clang-tidy, by mechanically applying the same rules as similar clean-up commits (the earliest such commit was commit 035ce1fe).

* Fix another couple of outdated comments for MERGE RETURNING.  (Dean Rasheed, 2024-06-04)

  Oversights in c649fa24a4 which added RETURNING support to MERGE.

  Discussion: https://postgr.es/m/CAApHDvpqp6vtUzG-_josUEiBGyqnrnVxJ-VdF+hJLXjHdHzsyQ@mail.gmail.com

* Fix input of ISO "extended" time format for types time and timetz.  (Tom Lane, 2024-05-22)

  Commit 3e1a373e2 missed teaching DecodeTimeOnly the same "ptype" manipulations it added to DecodeDateTime. While likely harmless at the time, it became a problem after 5b3c59535 added an error check that ptype must be zero once we exit the parsing loop (that is, there shouldn't be any unused prefixes). The consequence was that we'd reject time or timetz input like T12:34:56 (the "extended" format per ISO 8601-1:2019), even though that still worked in timestamp input.

  Since this is clearly under-tested code, add test cases covering all the ISO 8601 time formats. (Note: although 8601 allows just "Thh", we have never accepted that, and this patch doesn't change that. I'm content to leave that as-is because it seems too likely to be a mistake rather than intended input. If anyone wants to allow that, it should be a separate patch anyway, and not back-patched.)

  Per bug #18470 from David Perez. Back-patch to v16 where we broke it.

  Discussion: https://postgr.es/m/18470-34fad4c829106848@postgresql.org

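  Examples of the previously rejected "extended" inputs that are accepted again:

      SELECT time 'T12:34:56', timetz 'T12:34:56+02';
      -- both should now parse, matching what timestamp input already accepted
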
* Revise GUC names quoting in messages again  (Peter Eisentraut, 2024-05-17)

  After further review, we want to move in the direction of always quoting GUC names in error messages, rather than the previous (PG16) wildly mixed practice or the intermittent (mid-PG17) idea of doing this depending on how possibly confusing the GUC name is. This commit applies appropriate quotes to (almost?) all mentions of GUC names in error messages. It partially supersedes a243569bf65 and 8d9978a7176, which had moved things a bit in the opposite direction but which then were abandoned in a partial state.

  Author: Peter Smith <smithpb2250@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/CAHut%2BPv-kSN8SkxSdoHano_wPubqcg5789ejhCDZAcLFceBR-w%40mail.gmail.com

* Revert temporal primary keys and foreign keys  (Peter Eisentraut, 2024-05-16)

  This feature set did not handle empty ranges correctly, and it's now too late for PostgreSQL 17 to fix it.

  The following commits are reverted:

      6db4598fcb8 Add stratnum GiST support function
      46a0cd4cefb Add temporal PRIMARY KEY and UNIQUE constraints
      86232a49a43 Fix comment on gist_stratnum_btree
      030e10ff1a3 Rename pg_constraint.conwithoutoverlaps to conperiod
      a88c800deb6 Use daterange and YMD in without_overlaps tests instead of tsrange.
      5577a71fb0c Use half-open interval notation in without_overlaps tests
      34768ee3616 Add temporal FOREIGN KEY contraints
      482e108cd38 Add test for REPLICA IDENTITY with a temporal key
      c3db1f30cba doc: clarify PERIOD and WITHOUT OVERLAPS in CREATE TABLE
      144c2ce0cc7 Fix ON CONFLICT DO NOTHING/UPDATE for temporal indexes

  Discussion: https://www.postgresql.org/message-id/d0b64a7a-dfe4-4b84-a906-c7dedfa40a3e@eisentraut.org

* Revert structural changes to not-null constraints  (Alvaro Herrera, 2024-05-13)

  There are some problems with the new way to handle these constraints that were detected at the last minute, and they require fixes that appear too invasive to be doing this late in the cycle. Revert this (again) for now; we'll try again with these problems fixed.

  The following commits are reverted:

      b0e96f311985 Catalog not-null constraints
      9b581c534186 Disallow changing NO INHERIT status of a not-null constraint
      d0ec2ddbe088 Fix not-null constraint test
      ac22a9545ca9 Move privilege check to the right place
      b0f7dd915bca Check stack depth in new recursive functions
      3af721794272 Update information_schema definition for not-null constraints
      c3709100be73 Fix propagating attnotnull in multiple inheritance
      d9f686a72ee9 Fix restore of not-null constraints with inheritance
      d72d32f52d26 Don't try to assign smart names to constraints
      0cd711271d42 Better handle indirect constraint drops
      13daa33fa5a6 Disallow NO INHERIT not-null constraints on partitioned tables
      d45597f72fe5 Disallow direct change of NO INHERIT of not-null constraints
      21ac38f498b3 Fix inconsistencies in error messages

  Discussion: https://postgr.es/m/202405110940.joxlqcx4dogd@alvherre.pgsql

* Remove obsolete comment.  (Jeff Davis, 2024-05-07)

  Per suggestion from Peter, the comment was not helpful, so remove it rather than fixing it.

  Reported-by: Peter Eisentraut
  Discussion: https://postgr.es/m/d9421b21-e759-4b74-a039-c487b469c1f3@eisentraut.org

* Fix an assortment of typos  (David Rowley, 2024-05-04)

  Author: Alexander Lakhin
  Discussion: https://postgr.es/m/ae9f2fcb-4b24-5bb0-4240-efbbbd944ca1@gmail.com

* Detect more overflows in timestamp[tz]_pl_interval.  (Tom Lane, 2024-04-28)

  In commit 25cd2d640 I (tgl) opined that "The additions of the months and microseconds fields could also overflow, of course. However, I believe we need no additional checks there; the existing range checks should catch such cases". This is demonstrably wrong however for the microseconds field, and given that discovery it seems prudent to be paranoid about the months addition as well.

  Report and patch by Joseph Koshakow. As before, back-patch to all supported branches. (However, the test case doesn't work before v15 because we didn't allow wider-than-int32 numbers in interval literals. A variant test could probably be built that fits within that restriction, but it didn't seem worth the trouble.)

  Discussion: https://postgr.es/m/CAAvxfHf77sRHKoEzUw9_cMYSpbpNS2C+J_+8Dq4+0oi8iKopeA@mail.gmail.com

* Fix typos and duplicate words  (Daniel Gustafsson, 2024-04-18)

  This fixes various typos, duplicated words, and tiny bits of whitespace mainly in code comments but also in docs.

  Author: Daniel Gustafsson <daniel@yesql.se>
  Author: Heikki Linnakangas <hlinnaka@iki.fi>
  Author: Alexander Lakhin <exclusion@gmail.com>
  Author: David Rowley <dgrowleyml@gmail.com>
  Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
  Discussion: https://postgr.es/m/3F577953-A29E-4722-98AD-2DA9EFF2CBB8@yesql.se

* SQL/JSON: Miscellaneous fixes and improvements  (Amit Langote, 2024-04-18)

  This addresses some post-commit review comments for commits 6185c973, de3600452, and 9425c596a0, with the following changes:

  * Fix JSON_TABLE() syntax documentation to use the term "path_expression" for JSON path expressions instead of "json_path_specification" to be consistent with the other SQL/JSON functions.

  * Fix a typo in the example code in JSON_TABLE() documentation.

  * Rewrite some newly added comments in jsonpath.h.

  * In JsonPathQuery(), add missing cast to int before printing an enum value.

  Reported-by: Jian He <jian.universality@gmail.com>
  Discussion: https://postgr.es/m/CACJufxG_e0QLCgaELrr2ZNz7AxPeGCNKAORe3fHtFCQLsH4J4Q@mail.gmail.com

* SQL/JSON: Improve some error messages  (Amit Langote, 2024-04-18)

  This improves some error messages emitted by SQL/JSON query functions by mentioning the column name when available, such as when they are invoked as part of evaluating JSON_TABLE() columns. To do so, a new field column_name is added to both JsonFuncExpr and JsonExpr that is only populated when creating those nodes for transformed JSON_TABLE() columns.

  While at it, relevant error messages are reworded for clarity.

  Reported-by: Jian He <jian.universality@gmail.com>
  Suggested-by: Jian He <jian.universality@gmail.com>
  Discussion: https://postgr.es/m/CACJufxG_e0QLCgaELrr2ZNz7AxPeGCNKAORe3fHtFCQLsH4J4Q@mail.gmail.com

* Fix ALTER DOMAIN NOT NULL syntax  (Peter Eisentraut, 2024-04-15)

  This addresses a few problems with commit e5da0fe3c22 ("Catalog domain not-null constraints").

  In CREATE DOMAIN, a NOT NULL constraint looks like

      CREATE DOMAIN d1 AS int [ CONSTRAINT conname ] NOT NULL

  (Before e5da0fe3c22, the constraint name was accepted but ignored.)

  But in ALTER DOMAIN, a NOT NULL constraint looks like

      ALTER DOMAIN d1 ADD [ CONSTRAINT conname ] NOT NULL VALUE

  where VALUE appears where, for a table constraint, the column name would be. (This works as of e5da0fe3c22. Before e5da0fe3c22, this syntax resulted in an internal error.)

  But for domains, this latter syntax is confusing and needlessly inconsistent between CREATE and ALTER. So this changes it to just

      ALTER DOMAIN d1 ADD [ CONSTRAINT conname ] NOT NULL

  (None of these syntaxes are per SQL standard; we are just living with the bits of inconsistency that have built up over time.)

  In passing, this also changes the psql \dD output to not show not-null constraints in the column "Check", since it's already shown in the column "Nullable". This has also been off since e5da0fe3c22.

  Reviewed-by: jian he <jian.universality@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/9ec24d7b-633d-463a-84c6-7acff769c9e8%40eisentraut.org

* Fixup various StringInfo function usages  (David Rowley, 2024-04-10)

  This adjusts various appendStringInfo* function calls to use a more appropriate and efficient function with the same behavior. For example, use appendStringInfoChar() when appending a single character rather than appendStringInfo(), and appendStringInfoString() when no formatting is required rather than appendStringInfo().

  All adjustments made here are in code that's new to v17, so it makes sense to fix these now rather than wait a few years and make backpatching harder.

  Discussion: https://postgr.es/m/CAApHDvojY2UvMiO+9_55ArTj10P1LBNJyyoGB+C65BLDNT0GsQ@mail.gmail.com
  Reviewed-by: Nathan Bossart, Tom Lane

* JSON_TABLE: Add support for NESTED paths and columns  (Amit Langote, 2024-04-08)

  A NESTED path allows data to be extracted from nested levels of JSON objects given by the parent path expression, which are projected as columns specified using a nested COLUMNS clause, just like the parent COLUMNS clause. Rows comprised from NESTED columns are "joined" to the row comprised from the parent columns. If a particular NESTED path evaluates to 0 rows, then the nested COLUMNS will emit NULLs, making it an OUTER join.

  NESTED columns themselves may include NESTED paths to allow extracting data from arbitrary nesting levels, which are likewise joined against the rows at the parent level.

  Multiple NESTED paths at a given level are called "sibling" paths and their rows are combined by UNIONing them, that is, after being joined against the parent row as described above.

  Author: Nikita Glukhov <n.gluhov@postgrespro.ru>
  Author: Teodor Sigaev <teodor@sigaev.ru>
  Author: Oleg Bartunov <obartunov@gmail.com>
  Author: Alexander Korotkov <aekorotkov@gmail.com>
  Author: Andrew Dunstan <andrew@dunslane.net>
  Author: Amit Langote <amitlangote09@gmail.com>
  Author: Jian He <jian.universality@gmail.com>

  Reviewers have included (in no particular order): Andres Freund, Alexander Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zihong Yu, Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby, Álvaro Herrera, Jian He

  Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
  Discussion: https://postgr.es/m/20220616233130.rparivafipt6doj3@alap3.anarazel.de
  Discussion: https://postgr.es/m/abd9b83b-aa66-f230-3d6d-734817f0995d%40postgresql.org
  Discussion: https://postgr.es/m/CA+HiwqE4XTdfb1nW=Ojoy_tQSRhYt-q_kb6i5d4xcKyrLC1Nbg@mail.gmail.com

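  A sketch of the syntax (made-up data; the nested rows are joined to their parent row as described above):

      SELECT *
      FROM JSON_TABLE(
             jsonb '[{"name": "x", "items": [1, 2]}, {"name": "y", "items": []}]',
             '$[*]'
             COLUMNS (
               name text PATH '$.name',
               NESTED PATH '$.items[*]' COLUMNS (item int PATH '$')
             )
           ) AS jt;
      -- "x" is joined to items 1 and 2; "y" yields one row with a NULL item
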
* Fix JsonExpr deparsing to emit QUOTES and WRAPPER correctly  (Amit Langote, 2024-04-08)

  Currently, get_json_expr_options() does not emit the default values for QUOTES (KEEP QUOTES) and WRAPPER (WITHOUT WRAPPER). That causes the deparsed JSON_TABLE() columns, such as those contained in a view's query, to behave differently when executed than the original definition. That's because the rules encoded in transformJsonTableColumns() will choose either JSON_VALUE() or JSON_QUERY() as the implementation to execute a given column's path expression depending on the QUOTES and WRAPPER specifications, and the two have slightly different semantics.

  Reported-by: Jian He <jian.universality@gmail.com>
  Discussion: https://postgr.es/m/CACJufxEqhqsfrg_p7EMyo5zak3d767iFDL8vz_4%3DZBHpOtrghw%40mail.gmail.com

* Implement ALTER TABLE ... SPLIT PARTITION ... command  (Alexander Korotkov, 2024-04-07)

  This new DDL command splits a single partition into several partitions. Just like the ALTER TABLE ... MERGE PARTITIONS ... command, the new partitions are created using the createPartitionTable() function with the parent partition as the template.

  This commit comprises a quite naive implementation which works in a single process and holds the ACCESS EXCLUSIVE LOCK on the parent table during all the operations including the tuple routing. This is why this new DDL command can't be recommended for large partitioned tables under a high load. However, this implementation comes in handy in certain cases even as is. Also, it could be used as a foundation for future implementations with less locking and possibly parallelism.

  Discussion: https://postgr.es/m/c73a1746-0cd0-6bdd-6b23-3ae0b7c0c582%40postgrespro.ru
  Author: Dmitry Koval
  Reviewed-by: Matthias van de Meent, Laurenz Albe, Zhihong Yu, Justin Pryzby
  Reviewed-by: Alvaro Herrera, Robert Haas, Stephane Tachoires

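  A hypothetical usage sketch (table, partition names and bound values are all invented for the example):

      ALTER TABLE measurements SPLIT PARTITION meas_2024
        INTO (PARTITION meas_2024_h1 FOR VALUES FROM ('2024-01-01') TO ('2024-07-01'),
              PARTITION meas_2024_h2 FOR VALUES FROM ('2024-07-01') TO ('2025-01-01'));
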
* Enhance nbtree ScalarArrayOp execution.  (Peter Geoghegan, 2024-04-06)

  Commit 9e8da0f7 taught nbtree to handle ScalarArrayOpExpr quals natively. This works by pushing down the full context (the array keys) to the nbtree index AM, enabling it to execute multiple primitive index scans that the planner treats as one continuous index scan/index path. This earlier enhancement enabled nbtree ScalarArrayOp index-only scans. It also allowed scans with ScalarArrayOp quals to return ordered results (with some notable restrictions, described further down).

  Take this general approach a lot further: teach nbtree SAOP index scans to decide how to execute ScalarArrayOp scans (when and where to start the next primitive index scan) based on physical index characteristics. This can be far more efficient. All SAOP scans will now reliably avoid duplicative leaf page accesses (just like any other nbtree index scan). SAOP scans whose array keys are naturally clustered together now require far fewer index descents, since we'll reliably avoid starting a new primitive scan just to get to a later offset from the same leaf page.

  The scan's arrays now advance using binary searches for the array element that best matches the next tuple's attribute value. Required scan key arrays (i.e. arrays from scan keys that can terminate the scan) ratchet forward in lockstep with the index scan. Non-required arrays (i.e. arrays from scan keys that can only exclude non-matching tuples) "advance" without the process ever rolling over to a higher-order array.

  Naturally, only required SAOP scan keys trigger skipping over leaf pages (non-required arrays cannot safely end or start primitive index scans). Consequently, even index scans of a composite index with a high-order inequality scan key (which we'll mark required) and a low-order SAOP scan key (which we won't mark required) now avoid repeating leaf page accesses -- that benefit isn't limited to simpler equality-only cases.

  In general, all nbtree index scans now output tuples as if they were one continuous index scan -- even scans that mix a high-order inequality with lower-order SAOP equalities reliably output tuples in index order. This allows us to remove a couple of special cases that were applied when building index paths with SAOP clauses during planning.

  Bugfix commit 807a40c5 taught the planner to avoid generating unsafe path keys: path keys on a multicolumn index path, with a SAOP clause on any attribute beyond the first/most significant attribute. These cases are now all safe, so we go back to generating path keys without regard for the presence of SAOP clauses (just like with any other clause type). Affected queries can now exploit scan output order in all the usual ways (e.g., certain "ORDER BY ... LIMIT n" queries can now terminate early).

  Also undo changes from follow-up bugfix commit a4523c5a, which taught the planner to produce alternative index paths, with path keys, but without low-order SAOP index quals (filter quals were used instead). We'll no longer generate these alternative paths, since they can no longer offer any meaningful advantages over standard index qual paths. Affected queries thereby avoid all of the disadvantages that come from using filter quals within index scan nodes. They can avoid extra heap page accesses from using filter quals to exclude non-matching tuples (index quals will never have that problem). They can also skip over irrelevant sections of the index in more cases (though only when nbtree determines that starting another primitive scan actually makes sense).

  There is a theoretical risk that removing restrictions on SAOP index paths from the planner will break compatibility with amcanorder-based index AMs maintained as extensions. Such an index AM could have the same limitations around ordered SAOP scans as nbtree had up until now. Adding a pro forma incompatibility item about the issue to the Postgres 17 release notes seems like a good idea.

  Author: Peter Geoghegan <pg@bowt.ie>
  Author: Matthias van de Meent <boekewurm+postgres@gmail.com>
  Reviewed-By: Heikki Linnakangas <hlinnaka@iki.fi>
  Reviewed-By: Matthias van de Meent <boekewurm+postgres@gmail.com>
  Reviewed-By: Tomas Vondra <tomas.vondra@enterprisedb.com>
  Discussion: https://postgr.es/m/CAH2-Wz=ksvN_sjcnD1+Bt-WtifRA5ok48aDYnq3pkKhxgMQpcw@mail.gmail.com

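  For example, a query of this shape (hypothetical table t with an index on (a, b)) mixes a high-order inequality with a lower-order SAOP equality; it can now use the SAOP clause as an index qual while still returning tuples in index order, so the LIMIT can terminate the scan early:

      CREATE INDEX ON t (a, b);
      SELECT * FROM t
      WHERE a BETWEEN 1 AND 10 AND b = ANY (ARRAY[2, 4, 8])
      ORDER BY a, b
      LIMIT 10;
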
* Add basic JSON_TABLE() functionality  (Amit Langote, 2024-04-04)

  JSON_TABLE() allows JSON data to be converted into a relational view and thus used, for example, in a FROM clause, like other tabular data. Data to show in the view is selected from a source JSON object using a JSON path expression to get a sequence of JSON objects that's called a "row pattern", which becomes the source to compute the SQL/JSON values that populate the view's output columns. Column values themselves are computed using JSON path expressions applied to each of the JSON objects comprising the "row pattern", for which the SQL/JSON query functions added in 6185c9737cf4 are used.

  To implement JSON_TABLE() as a table function, this augments the TableFunc and TableFuncScanState nodes that are currently used to support XMLTABLE() with some JSON_TABLE()-specific fields.

  Note that the JSON_TABLE() spec includes NESTED COLUMNS and PLAN clauses, which are required to provide more flexibility to extract data out of nested JSON objects, but they are not implemented here to keep this commit of manageable size.

  Author: Nikita Glukhov <n.gluhov@postgrespro.ru>
  Author: Teodor Sigaev <teodor@sigaev.ru>
  Author: Oleg Bartunov <obartunov@gmail.com>
  Author: Alexander Korotkov <aekorotkov@gmail.com>
  Author: Andrew Dunstan <andrew@dunslane.net>
  Author: Amit Langote <amitlangote09@gmail.com>
  Author: Jian He <jian.universality@gmail.com>

  Reviewers have included (in no particular order): Andres Freund, Alexander Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zihong Yu, Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby, Álvaro Herrera, Jian He

  Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
  Discussion: https://postgr.es/m/20220616233130.rparivafipt6doj3@alap3.anarazel.de
  Discussion: https://postgr.es/m/abd9b83b-aa66-f230-3d6d-734817f0995d%40postgresql.org
  Discussion: https://postgr.es/m/CA+HiwqE4XTdfb1nW=Ojoy_tQSRhYt-q_kb6i5d4xcKyrLC1Nbg@mail.gmail.com

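  A minimal example of the new syntax (ad-hoc data, not from the commit):

      SELECT *
      FROM JSON_TABLE(
             jsonb '[{"a": 1, "b": "x"}, {"a": 2, "b": "y"}]',
             '$[*]'
             COLUMNS (a int PATH '$.a', b text PATH '$.b')
           ) AS jt;
      -- two rows: (1, x) and (2, y)
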
* Add pg_basetype() function to extract a domain's base type.  (Tom Lane, 2024-03-30)

  This SQL-callable function behaves much like our internal utility function getBaseType(), except it returns NULL rather than failing for an invalid type OID. (That behavior is modeled on our experience with other catalog-inquiry functions such as the ACL checking functions.) The key advantage over doing a join to pg_type is that it will loop as needed to find the bottom base type of a nest of domains.

  Steve Chavez, reviewed by jian he and others
  Discussion: https://postgr.es/m/CAGRrpzZSX8j=MQcbCSEisFA=ic=K3bknVfnFjAv1diVJxFHJvg@mail.gmail.com

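  For example (domain names invented for the illustration):

      CREATE DOMAIN us_postal_code AS text CHECK (VALUE ~ '^\d{5}$');
      CREATE DOMAIN strict_postal_code AS us_postal_code NOT NULL;
      SELECT pg_basetype('strict_postal_code'::regtype);
      -- should return text, following the nest of domains down to the base type
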
* Add support for MERGE ... WHEN NOT MATCHED BY SOURCE.  (Dean Rasheed, 2024-03-30)

  This allows MERGE commands to include WHEN NOT MATCHED BY SOURCE actions, which operate on rows that exist in the target relation, but not in the data source. These actions can execute UPDATE, DELETE, or DO NOTHING sub-commands.

  This is in contrast to already-supported WHEN NOT MATCHED actions, which operate on rows that exist in the data source, but not in the target relation. To make this distinction clearer, such actions may now be written as WHEN NOT MATCHED BY TARGET.

  Writing WHEN NOT MATCHED without specifying BY SOURCE or BY TARGET is equivalent to writing WHEN NOT MATCHED BY TARGET.

  Dean Rasheed, reviewed by Alvaro Herrera, Ted Yu and Vik Fearing.
  Discussion: https://postgr.es/m/CAEZATCWqnKGc57Y_JanUBHQXNKcXd7r=0R4NEZUVwP+syRkWbA@mail.gmail.com

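  For example, a sketch that also removes target rows that no longer appear in the source (table names invented for the illustration):

      MERGE INTO inventory t
      USING latest_stock s ON t.item_id = s.item_id
      WHEN MATCHED THEN
        UPDATE SET quantity = s.quantity
      WHEN NOT MATCHED BY TARGET THEN
        INSERT (item_id, quantity) VALUES (s.item_id, s.quantity)
      WHEN NOT MATCHED BY SOURCE THEN
        DELETE;
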
* Add unicode_strtitle() for Unicode Default Case Conversion.  (Jeff Davis, 2024-03-29)

  This brings the titlecasing implementation for the builtin provider out of formatting.c and into unicode_case.c, along with unicode_strlower() and unicode_strupper().

  Accepts an arbitrary word boundary callback. Simple for now, but can be extended to support the Unicode Default Case Conversion algorithm with full case mapping.

  Discussion: https://postgr.es/m/3bc653b5d562ae9e2838b11cb696816c328a489a.camel@j-davis.com
  Reviewed-by: Peter Eisentraut

* Use version for builtin collations.  (Jeff Davis, 2024-03-29)

  Given that the version field already exists, there's little reason not to use it. Suggestion from Peter Eisentraut.

  Discussion: https://postgr.es/m/613c120a-5413-4fa7-a501-6590eae558f8@eisentraut.org
  Reviewed-by: Peter Eisentraut

* Add functions to generate random numbers in a specified range.  (Dean Rasheed, 2024-03-27)

  This adds 3 new variants of the random() function:

      random(min integer, max integer) returns integer
      random(min bigint, max bigint) returns bigint
      random(min numeric, max numeric) returns numeric

  Each returns a random number x in the range min <= x <= max. For the numeric function, the number of digits after the decimal point is equal to the number of digits that "min" or "max" has after the decimal point, whichever has more.

  The main entry points for these functions are in a new C source file. The existing random(), random_normal(), and setseed() functions are moved there too, so that they can all share the same PRNG state, which is kept private to that file.

  Dean Rasheed, reviewed by Jian He, David Zhang, Aleksander Alekseev, and Tomas Vondra.
  Discussion: https://postgr.es/m/CAEZATCV89Vxuq93xQdmc0t-0Y2zeeNQTdsjbmV7dyFBPykbV4Q@mail.gmail.com

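  Example calls (results vary per call, of course):

      SELECT random(1, 6);           -- integer between 1 and 6, inclusive
      SELECT random(0, 10::bigint);  -- bigint in [0, 10]
      SELECT random(1.5, 2.75);      -- numeric in [1.5, 2.75] with two decimal places
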