path: root/src/backend/parser/parse_coerce.c
* Require callers of coerce_to_domain() to supply base type/typmod. (Tom Lane, 2025-01-29)

  In view of the issue fixed in commit 0da39aa76, it no longer seems like a great idea for coerce_to_domain() to offer to perform a lookup that its caller probably should have done already. The caller should be providing a value of the domain's base type, so it's hard to envision a valid case where it hasn't looked up that type. After 0da39aa76 there is only one caller using the option for internal lookup, and that one can trivially be rearranged to not do that. So this seems more like a bug-encouraging misfeature than a useful shortcut; let's get rid of it (in HEAD only, there's no need to break any external callers in back branches).

  Discussion: https://postgr.es/m/1865579.1738113656@sss.pgh.pa.us

* Handle default NULL insertion a little better. (Tom Lane, 2025-01-29)

  If a column is omitted in an INSERT, and there's no column default, the code in preptlist.c generates a NULL Const to be inserted. Furthermore, if the column is of a domain type, we wrap the Const in CoerceToDomain, so as to throw a run-time error if the domain has a NOT NULL constraint. That's fine as far as it goes, but there are two problems:

  1. We're being sloppy about the type/typmod that the Const is labeled with. It really should have the domain's base type/typmod, since it's the input to CoerceToDomain not the output. This can result in coerce_to_domain inserting a useless length-coercion function (useless because it's being applied to a null). The coercion would typically get const-folded away later, but it'd be better not to create it in the first place.

  2. We're not applying expression preprocessing (specifically, eval_const_expressions) to the resulting expression tree. The planner's primary expression-preprocessing pass already happened, so that means the length coercion step and CoerceToDomain node miss preprocessing altogether. This is at the least inefficient, since it means the length coercion and CoerceToDomain will actually be executed for each inserted row, though they could be const-folded away in most cases. Worse, it seems possible that missing preprocessing for the length coercion could result in an invalid plan (for example, due to failing to perform default-function-argument insertion). I'm not aware of any live bug of that sort with core datatypes, and it might be unreachable for extension types as well because of restrictions of CREATE CAST, but I'm not entirely convinced that it's unreachable. Hence, it seems worth back-patching the fix (although I only went back to v14, as the patch doesn't apply cleanly at all in v13).

  There are several places in the rewriter that are building null domain constants the same way as preptlist.c. While those are before the planner and hence don't have any reachable bug, they're still applying a length coercion that will be const-folded away later, uselessly wasting cycles. Hence, make a utility routine that all of these places can call to do it right.

  Making this code more careful about the typmod assigned to the generated NULL constant has visible but cosmetic effects on some of the plans shown in contrib/postgres_fdw's regression tests.

  Discussion: https://postgr.es/m/1865579.1738113656@sss.pgh.pa.us
  Backpatch-through: 14

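  For illustration, a hedged sketch of the situation this code path handles (domain and table names invented, not from the commit):

      -- A NOT NULL domain over a length-constrained base type: omitting
      -- the column in an INSERT makes preptlist.c supply a NULL Const
      -- wrapped in CoerceToDomain, which raises the error at run time.
      CREATE DOMAIN dtext AS varchar(10) NOT NULL;
      CREATE TABLE demo (id int, d dtext);
      INSERT INTO demo (id) VALUES (1);
      -- ERROR:  domain dtext does not allow null values
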
* Update copyright for 2025 (Bruce Momjian, 2025-01-01)

  Backpatch-through: 13

* Update copyright for 2024 (Bruce Momjian, 2024-01-03)

  Reported-by: Michael Paquier
  Discussion: https://postgr.es/m/ZZKTDPxBBMt3C0J9@paquier.xyz
  Backpatch-through: 12

* Make Vars be outer-join-aware. (Tom Lane, 2023-01-30)

  Traditionally we used the same Var struct to represent the value of a table column everywhere in parse and plan trees. This choice predates our support for SQL outer joins, and it's really a pretty bad idea with outer joins, because the Var's value can depend on where it is in the tree: it might go to NULL above an outer join. So expression nodes that are equal() per equalfuncs.c might not represent the same value, which is a huge correctness hazard for the planner.

  To improve this, decorate Var nodes with a bitmapset showing which outer joins (identified by RTE indexes) may have nulled them at the point in the parse tree where the Var appears. This allows us to trust that equal() Vars represent the same value. A certain amount of klugery is still needed to cope with cases where we re-order two outer joins, but it's possible to make it work without sacrificing that core principle. PlaceHolderVars receive similar decoration for the same reason.

  In the planner, we include these outer join bitmapsets into the relids that an expression is considered to depend on, and in consequence also add outer-join relids to the relids of join RelOptInfos. This allows us to correctly perceive whether an expression can be calculated above or below a particular outer join.

  This change affects FDWs that want to plan foreign joins. They *must* follow suit when labeling foreign joins in order to match with the core planner, but for many purposes (if postgres_fdw is any guide) they'd prefer to consider only base relations within the join. To support both requirements, redefine ForeignScan.fs_relids as base+OJ relids, and add a new field fs_base_relids that's set up by the core planner.

  Large though it is, this commit just does the minimum necessary to install the new mechanisms and get check-world passing again. Follow-up patches will perform some cleanup. (The README additions and comments mention some stuff that will appear in the follow-up.)

  Patch by me; thanks to Richard Guo for review.

  Discussion: https://postgr.es/m/830269.1656693747@sss.pgh.pa.us

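  A hedged illustration of the hazard being fixed (table names invented):

      -- Two textually identical Vars with different values: in the join
      -- clause, t2.x is evaluated below the outer join; in the SELECT
      -- list, the "same" Var can have gone to NULL above it.  Tagging
      -- Vars with the outer joins that may have nulled them lets the
      -- planner trust that equal() Vars really mean the same value.
      SELECT t2.x
      FROM t1 LEFT JOIN t2 ON t1.id = t2.id AND t2.x > 0;
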
* Update copyright for 2023 (Bruce Momjian, 2023-01-02)

  Backpatch-through: 11

* Record dependencies of a cast on other casts that it requires. (Tom Lane, 2022-10-17)

  When creating a cast that uses a conversion function, we've historically allowed the input and result types to be binary-compatible with the function's input and result types, rather than necessarily being identical. This means that the new cast is logically dependent on the binary-compatible cast or casts that it references: if those are defined by pg_cast entries, and you try to restore the new cast without having defined them, it'll fail. Hence, we should make pg_depend entries to record these dependencies so that pg_dump knows that there is an ordering requirement.

  This is not the only place where we allow such shortcuts; aggregate functions for example are similarly lax, and in principle should gain similar dependencies. However, for now it seems sufficient to fix the cast-versus-cast case, as pg_dump's other ordering heuristics should keep it out of trouble for other object types.

  Per report from David Turoň; thanks also to Robert Haas for preliminary investigation. I considered back-patching, but seeing that this issue has existed for many years without previous reports, it's not clear it's worth the trouble. Moreover, back-patching wouldn't be enough to ensure that the new pg_depend entries exist in existing databases anyway.

  Discussion: https://postgr.es/m/OF0A160F3E.578B15D1-ONC12588DA.003E4857-C12588DA.0045A428@notes.linuxbox.cz

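  A hedged sketch of the kind of dependent cast involved (assuming the built-in xml(text) conversion function; the example itself is not from the commit):

      -- The cast is declared from varchar, but the function takes text;
      -- this is accepted because varchar is binary-compatible with text.
      -- The new cast therefore depends on the varchar -> text cast, and
      -- now gets a pg_depend entry recording that ordering requirement.
      CREATE CAST (varchar AS xml) WITH FUNCTION xml(text);
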
* Fix failure to validate the result of select_common_type(). (Tom Lane, 2022-01-29)

  Although select_common_type() has a failure-return convention, an apparent successful return just provides a type OID that *might* work as a common supertype; we've not validated that the required casts actually exist. In the mainstream use-cases that doesn't matter, because we'll proceed to invoke coerce_to_common_type() on each input, which will fail appropriately if the proposed common type doesn't actually work. However, a few callers didn't read the (nonexistent) fine print, and thought that if they got back a nonzero OID then the coercions were sure to work.

  This affects in particular the recently-added "anycompatible" polymorphic types; we might think that a function/operator using such types matches cases it really doesn't. A likely end result of that is unexpected "ambiguous operator" errors, as for example in bug #17387 from James Inform. Another, much older, case is that the parser might try to transform an "x IN (list)" construct to a ScalarArrayOpExpr even when the list elements don't actually have a common supertype.

  It doesn't seem desirable to add more checking to select_common_type itself, as that'd just slow down the mainstream use-cases. Instead, write a separate function verify_common_type that performs the missing checks, and add a call to that where necessary. Likewise add verify_common_type_from_oids to go with select_common_type_from_oids.

  Back-patch to v13 where the "anycompatible" types came in. (The symptom complained of in bug #17387 doesn't appear till v14, but that's just because we didn't get around to converting || to use anycompatible till then.) In principle the "x IN (list)" fix could go back all the way, but I'm not currently convinced that it makes much difference in real-world cases, so I won't bother for now.

  Discussion: https://postgr.es/m/17387-5dfe54b988444963@postgresql.org

* Update copyright for 2022 (Bruce Momjian, 2022-01-07)

  Backpatch-through: 10

* Ensure casting to typmod -1 generates a RelabelType. (Tom Lane, 2021-12-16)

  Fix the code changed by commit 5c056b0c2 so that we always generate RelabelType, not something else, for a cast to unspecified typmod. Otherwise planner optimizations might not happen.

  It appears we missed this point because the previous experiments were done on type numeric: the parser undesirably generates a call on the numeric() length-coercion function, but then numeric_support() optimizes that down to a RelabelType, so that everything seems fine. It misbehaves for types that have a non-optimized length coercion function, such as bpchar.

  Per report from John Naylor. Back-patch to all supported branches, as the previous patch eventually was. Unfortunately, that no longer includes 9.6 ... we really shouldn't put this type of change into a nearly-EOL branch.

  Discussion: https://postgr.es/m/CAFBsxsEfbFHEkouc+FSj+3K1sHipLPbEC67L0SAe-9-da8QtYg@mail.gmail.com

* Don't elide casting to typmod -1. (Tom Lane, 2021-08-06)

  Casting a value that's already of a type with a specific typmod to an unspecified typmod doesn't do anything so far as run-time behavior is concerned. However, it really ought to change the exposed type of the expression to match. Up to now, coerce_type_typmod hasn't bothered with that, which creates gotchas in contexts such as recursive unions. If for example one side of the union is numeric(18,3), but it needs to be plain numeric to match the other side, there's no direct way to express that.

  This is easy enough to fix, by inserting a RelabelType to update the exposed type of the expression. However, it's a bit nervous-making to change this behavior, because it's stood for a really long time. (I strongly suspect that it's like this in part because the logic pre-dates the introduction of RelabelType in 7.0. The commit log message for 57b30e8e2 is interesting reading here.) As a compromise, we'll sneak the change into 14beta3, and consider back-patching to stable branches if no complaints emerge in the next three months.

  Discussion: https://postgr.es/m/CABNQVagu3bZGqiTjb31a8D5Od3fUMs7Oh3gmZMQZVHZ=uWWWfQ@mail.gmail.com

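  A sketch of the recursive-union gotcha the message describes (invented example, not from the commit):

      -- The non-recursive arm is numeric(18,3); "n + 1" in the recursive
      -- arm is plain numeric.  The explicit ::numeric cast now relabels
      -- the exposed type so the two arms match.
      WITH RECURSIVE t(n) AS (
          SELECT 1.500::numeric(18,3)::numeric
          UNION ALL
          SELECT n + 1 FROM t WHERE n < 3
      )
      SELECT n FROM t;
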
* Fix bugs in polymorphic-argument resolution for multiranges. (Tom Lane, 2021-07-27)

  We failed to deal with an UNKNOWN-type input for anycompatiblemultirange; that should throw an error indicating that we don't know how to resolve the multirange type. We also failed to infer the type of an anycompatiblerange output from an anycompatiblemultirange input or vice versa.

  Per bug #17066 from Alexander Lakhin. Back-patch to v14 where multiranges were added.

  Discussion: https://postgr.es/m/17066-16a37f6223a8470b@postgresql.org

* Refactor some error messages for easier translation (Peter Eisentraut, 2021-05-12)

* Fix old bug with coercing the result of a COLLATE expression. (Tom Lane, 2021-04-12)

  There are hacks in parse_coerce.c to push down a requested coercion to below any CollateExpr that may appear. However, we did that even if the requested data type is non-collatable, leading to an invalid expression tree in which CollateExpr is applied to a non-collatable type. The fix is just to drop the CollateExpr altogether, reasoning that it's useless.

  This bug is ten years old, dating to the original addition of COLLATE support. The lack of field complaints suggests that there aren't a lot of user-visible consequences. We noticed the problem because it would trigger an assertion in DefineVirtualRelation if the invalid structure appears as an output column of a view; however, in a non-assert build, you don't see a crash, just a (subtly incorrect) complaint about applying collation to a non-collatable type. I found that by putting the incorrect structure further down in a view, I could make a view definition that would fail dump/reload, per the added regression test case. But CollateExpr doesn't do anything at run-time, so this likely doesn't lead to any really exciting consequences.

  Per report from Yulin Pei. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/HK0PR01MB22744393C474D503E16C8509F4709@HK0PR01MB2274.apcprd01.prod.exchangelabs.com

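  A hedged sketch of the kind of expression involved (invented; not the commit's regression case):

      -- COLLATE followed by a coercion to a non-collatable type.  The
      -- parser previously pushed the cast below the CollateExpr, leaving
      -- a CollateExpr applied to an integer; now the useless CollateExpr
      -- is simply dropped.
      CREATE VIEW v AS SELECT ('42' COLLATE "C")::int AS n;
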
* Re-implement pl/pgsql's expression and assignment parsing. (Tom Lane, 2021-01-04)

  Invent new RawParseModes that allow the core grammar to handle pl/pgsql expressions and assignments directly, and thereby get rid of a lot of hackery in pl/pgsql's parser. This moves a good deal of knowledge about pl/pgsql into the core code: notably, we have to invent a CoercionContext that matches pl/pgsql's (rather dubious) historical behavior for assignment coercions. That's getting away from the original idea of pl/pgsql as an arm's-length extension of the core, but really we crossed that bridge a long time ago.

  The main advantage of doing this is that we can now use the core parser to generate FieldStore and/or SubscriptingRef nodes to handle assignments to pl/pgsql variables that are records or arrays. That fixes a number of cases that had never been implemented in pl/pgsql assignment, such as nested records and array slicing, and it allows pl/pgsql assignment to support the datatype-specific subscripting behaviors introduced in commit c7aba7c14.

  There are cosmetic benefits too: when a syntax error occurs in a pl/pgsql expression, the error report no longer includes the confusing "SELECT" keyword that used to get prefixed to the expression text. Also, there seem to be some small speed gains.

  Discussion: https://postgr.es/m/4165684.1607707277@sss.pgh.pa.us

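  A hedged sketch of one newly-supported assignment form (illustrative only):

      DO $$
      DECLARE
        a int[] := ARRAY[1, 2, 3, 4];
      BEGIN
        -- array-slice assignment, one of the cases the commit enables
        a[2:3] := ARRAY[20, 30];
        RAISE NOTICE '%', a;   -- {1,20,30,4}
      END $$;
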
* Add the ability for the core grammar to have more than one parse target. (Tom Lane, 2021-01-04)

  This patch essentially allows gram.y to implement a family of related syntax trees, rather than necessarily always parsing a list of SQL statements. raw_parser() gains a new argument, enum RawParseMode, to say what to do. As proof of concept, add a mode that just parses a TypeName without any other decoration, and use that to greatly simplify typeStringToTypeName().

  In addition, invent a new SPI entry point SPI_prepare_extended() to allow SPI users (particularly plpgsql) to get at this new functionality. In hopes of making this the last variant of SPI_prepare(), set up its additional arguments as a struct rather than direct arguments, and promise that future additions to the struct can default to zero. SPI_prepare_cursor() and SPI_prepare_params() can perhaps go away at some point.

  Discussion: https://postgr.es/m/4165684.1607707277@sss.pgh.pa.us

* Update copyright for 2021 (Bruce Momjian, 2021-01-02)

  Backpatch-through: 9.5

* Multirange datatypes (Alexander Korotkov, 2020-12-20)

  Multiranges are basically sorted arrays of non-overlapping ranges with set-theoretic operations defined over them. Since v14, each range type automatically gets a corresponding multirange datatype. There are both manual and automatic mechanisms for naming multirange types. One can specify a multirange type name using the multirange_type_name attribute in CREATE TYPE. Otherwise, a multirange type name is generated automatically. If the range type name contains "range" then we change that to "multirange". Otherwise, we add "_multirange" to the end.

  Implementation of multiranges comes with a space-efficient internal representation format, which avoids extra padding and duplicated storage of OIDs. Altogether this format allows fetching a particular range by its index in O(n).

  Statistic gathering and selectivity estimation are implemented for multiranges. For this purpose, stored multirange is approximated as union range without gaps. This field will likely need improvements in the future.

  Catversion is bumped.

  Discussion: https://postgr.es/m/CALNJ-vSUpQ_Y%3DjXvTxt1VYFztaBSsWVXeF1y6gTYQ4bOiWDLgQ%40mail.gmail.com
  Discussion: https://postgr.es/m/a0b8026459d1e6167933be2104a6174e7d40d0ab.camel%40j-davis.com#fe7218c83b08068bfffb0c5293eceda0
  Author: Paul Jungwirth, revised by me
  Reviewed-by: David Fetter, Corey Huinker, Jeff Davis, Pavel Stehule
  Reviewed-by: Alvaro Herrera, Tom Lane, Isaac Morland, David G. Johnston
  Reviewed-by: Zhihong Yu, Alexander Korotkov

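  For illustration, a couple of hedged examples of the new types (values invented):

      -- literal syntax: a sorted set of non-overlapping ranges
      SELECT '{[1,3),[5,8)}'::int4multirange;

      -- constructor function and element-containment operator
      SELECT int4multirange(int4range(1, 3), int4range(5, 8)) @> 6;
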
* Support subscripting of arbitrary types, not only arrays. (Tom Lane, 2020-12-09)

  This patch generalizes the subscripting infrastructure so that any data type can be subscripted, if it provides a handler function to define what that means. Traditional variable-length (varlena) arrays all use array_subscript_handler(), while the existing fixed-length types that support subscripting use raw_array_subscript_handler(). It's expected that other types that want to use subscripting notation will define their own handlers. (This patch provides no such new features, though; it only lays the foundation for them.)

  To do this, move the parser's semantic processing of subscripts (including coercion to whatever data type is required) into a method callback supplied by the handler. On the execution side, replace the ExecEvalSubscriptingRef* layer of functions with direct calls to callback-supplied execution routines. (Thus, essentially no new run-time overhead should be caused by this patch. Indeed, there is room to remove some overhead by supplying specialized execution routines. This patch does a little bit in that line, but more could be done.)

  Additional work is required here and there to remove formerly hard-wired assumptions about the result type, collation, etc of a SubscriptingRef expression node; and to remove assumptions that the subscript values must be integers.

  One useful side-effect of this is that we now have a less squishy mechanism for identifying whether a data type is a "true" array: instead of wiring in weird rules about typlen, we can look to see if pg_type.typsubscript == F_ARRAY_SUBSCRIPT_HANDLER. For this to be bulletproof, we have to forbid user-defined types from using that handler directly; but there seems no good reason for them to do so.

  This patch also removes assumptions that the number of subscripts is limited to MAXDIM (6), or indeed has any hard-wired limit. That limit still applies to types handled by array_subscript_handler or raw_array_subscript_handler, but to discourage other dependencies on this constant, I've moved it from c.h to utils/array.h.

  Dmitry Dolgov, reviewed at various times by Tom Lane, Arthur Zakirov, Peter Eisentraut, Pavel Stehule

  Discussion: https://postgr.es/m/CA+q6zcVDuGBv=M0FqBYX8DPebS3F_0KQ6OVFobGJPM507_SZ_w@mail.gmail.com
  Discussion: https://postgr.es/m/CA+q6zcVovR+XY4mfk-7oNk-rF91gH0PebnNfuUjuuDsyHjOcVA@mail.gmail.com

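  A hedged way to observe the new catalog mechanism from SQL (assuming a v14-or-later pg_type):

      -- "true" arrays are now identified by their subscript handler,
      -- rather than by ad-hoc typlen rules
      SELECT typname
      FROM pg_type
      WHERE typsubscript = 'array_subscript_handler'::regproc
      LIMIT 5;
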
* Add select_common_typmod() (Peter Eisentraut, 2020-10-27)

  This accompanies select_common_type() and select_common_collation(). Typmods were previously combined using hand-coded logic in several places. The logic in select_common_typmod() isn't very exciting, but it makes the code more compact and readable in a few locations, and in the future we can perhaps do more complicated things if desired.

  As a small enhancement, the type unification of the direct and aggregate arguments of hypothetical-set aggregates now unifies the typmod as well using this new function, instead of just dropping it.

  Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
  Discussion: https://www.postgresql.org/message-id/flat/97df3af9-8b5e-fb7f-a029-3eb7e80d7af9@2ndquadrant.com

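  A sketch of the behavior the helper centralizes (invented example; the rule is that a typmod is kept only when all inputs agree on it):

      -- both arms varchar(5): the UNION column keeps typmod varchar(5)
      CREATE VIEW v1 AS
        SELECT 'a'::varchar(5) AS c UNION ALL SELECT 'b'::varchar(5);

      -- mixed varchar(5)/varchar(9): the column becomes plain varchar
      CREATE VIEW v2 AS
        SELECT 'a'::varchar(5) AS c UNION ALL SELECT 'b'::varchar(9);
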
* Clarify comments in enforce_generic_type_consistency(). (Tom Lane, 2020-09-07)

  Some of the pre-existing comments were vague about whether they referred to all polymorphic types or only the old-style ones. Also be more consistent about using the "family 1" vs "family 2" terminology.

  Himanshu Upadhyaya and Tom Lane

  Discussion: https://postgr.es/m/CAPF61jBUg9XoMPNuLpoZ+h6UZ2VxKdNt3rQL1xw1GOBwjWzAXQ@mail.gmail.com

* Put back mistakenly removed #include. (Tom Lane, 2020-04-08)

  In commit 4dbcb3f84 I removed some code from parse_coerce.c, and also removed some apparently-no-longer-needed #includes. But removing datum.h broke some not-compiled-by-default code.

  Discussion: https://postgr.es/m/20200407205436.pyjhddw5bn5upvsu@development

* Remove bogus Assert, add some regression test cases showing why. (Tom Lane, 2020-04-04)

  Commit 77ec5affb added an assertion to enforce_generic_type_consistency that boils down to "if the function result is polymorphic, there must be at least one polymorphic argument". This should be true for user-created functions, but there are built-in functions for which it's not true, as pointed out by Jaime Casanova. Hence, go back to the old behavior of leaving the return type alone. There's only a limited amount of stuff you can do with such a function result, but it does work to some extent; add some regression test cases to ensure we don't break that again.

  Discussion: https://postgr.es/m/CAJGNTeMbhtsCUZgJJ8h8XxAJbK7U2ipsX8wkHRtZRz-NieT8RA@mail.gmail.com

* Go back to returning int from ereport auxiliary functions. (Tom Lane, 2020-03-25)

  This reverts the parts of commit 17a28b03645e27d73bf69a95d7569b61e58f06eb that changed ereport's auxiliary functions from returning dummy integer values to returning void. It turns out that a minority of compilers complain (not entirely unreasonably) about constructs such as

      (condition) ? errdetail(...) : 0

  if errdetail() returns void rather than int. We could update those call sites to say "(void) 0" perhaps, but the expectation for this patch set was that ereport callers would not have to change anything. And this aspect of the patch set was already the most invasive and least compelling part of it, so let's just drop it.

  Per buildfarm.

  Discussion: https://postgr.es/m/CA+fd4k6N8EjNvZpM8nme+y+05mz-SM8Z_BgkixzkA34R+ej0Kw@mail.gmail.com

* Improve the internal implementation of ereport(). (Tom Lane, 2020-03-24)

  Change all the auxiliary error-reporting routines to return void, now that we no longer need to pretend they are passing something useful to errfinish(). While this probably doesn't save anything significant at the machine-code level, it allows detection of some additional types of mistakes.

  Pass the error location details (__FILE__, __LINE__, PG_FUNCNAME_MACRO) to errfinish not errstart. This shaves a few cycles off the case where errstart decides we're not going to emit anything.

  Re-implement elog() as a trivial wrapper around ereport(), removing the separate support infrastructure it used to have. Aside from getting rid of some now-surplus code, this means that elog() now really does have exactly the same semantics as ereport(), in particular that it can skip evaluation work if the message is not to be emitted.

  Andres Freund and Tom Lane

  Discussion: https://postgr.es/m/CA+fd4k6N8EjNvZpM8nme+y+05mz-SM8Z_BgkixzkA34R+ej0Kw@mail.gmail.com

* Introduce "anycompatible" family of polymorphic types. (Tom Lane, 2020-03-19)

  This patch adds the pseudo-types anycompatible, anycompatiblearray, anycompatiblenonarray, and anycompatiblerange. They work much like anyelement, anyarray, anynonarray, and anyrange respectively, except that the actual input values need not match precisely in type. Instead, if we can find a common supertype (using the same rules as for UNION/CASE type resolution), then the parser automatically promotes the input values to that type. For example, "myfunc(anycompatible, anycompatible)" can match a call with one integer and one bigint argument, with the integer automatically promoted to bigint. With anyelement in the definition, the user would have had to cast the integer explicitly.

  The new types also provide a second, independent set of type variables for function matching; thus with "myfunc(anyelement, anyelement, anycompatible) returns anycompatible" the first two arguments are constrained to be the same type, but the third can be some other type, and the result has the type of the third argument. The need for more than one set of type variables was foreseen back when we first invented the polymorphic types, but we never did anything about it.

  Pavel Stehule, revised a bit by me

  Discussion: https://postgr.es/m/CAFj8pRDna7VqNi8gR+Tt2Ktmz0cq5G93guc3Sbn_NVPLdXAkqA@mail.gmail.com

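  A minimal version of the example from the message (the function body is invented for illustration):

      CREATE FUNCTION myfunc(anycompatible, anycompatible)
        RETURNS anycompatible
        LANGUAGE sql AS $$ SELECT greatest($1, $2) $$;

      -- the integer argument is promoted to bigint; the result is bigint
      SELECT myfunc(1, 2::bigint);
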
* Refactor our checks for valid function and aggregate signatures. (Tom Lane, 2020-03-17)

  pg_proc.c and pg_aggregate.c had near-duplicate copies of the logic to decide whether a function or aggregate's signature is legal. This seems like a bad thing even without the problem that the upcoming "anycompatible" patch would roughly double the complexity of that logic. Hence, refactor so that the rules are localized in new subroutines supplied by parse_coerce.c. (One could quibble about just where to add that code, but putting it beside enforce_generic_type_consistency seems not totally unreasonable.)

  The fact that the rules are about to change would mandate some changes in the wording of the associated error messages in any case. I ended up spelling things out in a fairly literal fashion in the errdetail messages, eg "A result of type anyelement requires at least one input of type anyelement, anyarray, anynonarray, anyenum, or anyrange." Perhaps this is overkill, but once there's more than one subgroup of polymorphic types, people might get confused by more-abstract messages.

  Discussion: https://postgr.es/m/24137.1584139352@sss.pgh.pa.us

* Adjust handling of an ANYARRAY actual input for an ANYARRAY argument. (Tom Lane, 2020-03-17)

  Ordinarily it's impossible for an actual input of a function to have declared type ANYARRAY, since we'd resolve that to a concrete array type before doing argument type resolution for the function. But an exception arises for functions applied to certain columns of pg_statistic or pg_stats, since we abuse the "anyarray" pseudotype by using it to declare those columns. So parse_coerce.c has to deal with the case.

  Previously we allowed an ANYARRAY actual input to match an ANYARRAY polymorphic argument, but only if no other argument or result was declared ANYELEMENT. When that logic was written, those were the only two polymorphic types, and I fear nobody thought carefully about how it ought to extend to the other ones that came along later. But actually it was wrong even then, because if a function has two ANYARRAY arguments, it should be able to expect that they have identical element types, and we'd not be able to ensure that. The correct generalization is that we can match an ANYARRAY actual input to an ANYARRAY polymorphic argument only if no other argument or result is of any polymorphic type, so that no promises are being made about element type compatibility.

  check_generic_type_consistency can't test that condition, but it seems better anyway to accept such matches there and then throw an error if needed in enforce_generic_type_consistency. That way we can produce a specific error message rather than an unintuitive "function does not exist" complaint. (There's some risk perhaps of getting new ambiguous-function complaints, but I think that any set of functions for which that could happen would be ambiguous against ordinary array columns as well.) While we're at it, we can improve the message that's produced in cases that the code did already object to, as shown in the regression test changes.

  Also, remove a similar test that got cargo-culted in for ANYRANGE; there are no catalog columns of type ANYRANGE, and I hope we never create any, so that's not needed. (It was incomplete anyway.)

  While here, update some comments and rearrange the code a bit in preparation for upcoming additions of more polymorphic types.

  In practical situations I believe this is just exchanging one error message for another, hopefully better, one. So it doesn't seem needful to back-patch, even though the mistake is ancient.

  Discussion: https://postgr.es/m/21569.1584314271@sss.pgh.pa.us

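  A hedged SQL-level view of the situation described (output will vary by database):

      -- pg_stats.most_common_vals is declared anyarray.  array_length()
      -- is acceptable here because its anyarray argument is its only
      -- polymorphic type, so no element-type promises are implied.
      -- By contrast, unnest(most_common_vals) is rejected, since its
      -- anyelement result would require knowing the element type.
      SELECT tablename, attname, array_length(most_common_vals, 1) AS n_mcv
      FROM pg_stats
      LIMIT 5;
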
* Restructure polymorphic-type resolution in funcapi.c. (Tom Lane, 2020-03-14)

  resolve_polymorphic_tupdesc() and resolve_polymorphic_argtypes() failed to cover the case of having to resolve anyarray given only an anyrange input. The bug was masked if anyelement was also used (as either input or output), which probably helps account for our not having noticed.

  While looking at this I noticed that resolve_generic_type() would produce the wrong answer if asked to make that same resolution. ISTM that resolve_generic_type() is confusingly defined and overly complex, so rather than fix it, let's just make funcapi.c do the specific lookups it requires for itself. With this change, resolve_generic_type() is not used anywhere, so remove it in HEAD. In the back branches, leave it alone (complete with bug) just in case any external code is using it.

  While we're here, make some other refactoring adjustments in funcapi.c with an eye to upcoming future expansion of the set of polymorphic types:

  * Simplify quick-exit tests by adding an overall have_polymorphic_result flag. This is about a wash now but will be a win when there are more flags.

  * Reduce duplication of code between resolve_polymorphic_tupdesc() and resolve_polymorphic_argtypes().

  * Don't bother to validate correct matching of anynonarray or anyenum; the parser should have done that, and even if it didn't, just doing "return false" here would lead to a very confusing, off-point error message. (Really, "return false" in these two functions should only occur if the call_expr isn't supplied or we can't obtain data type info from it.)

  * For the same reason, throw an elog rather than "return false" if we fail to resolve a polymorphic type.

  The bug's been there since we added anyrange, so back-patch to all supported branches.

  Discussion: https://postgr.es/m/6093.1584202130@sss.pgh.pa.us

* Make parser rely more heavily on the ParseNamespaceItem data structure. (Tom Lane, 2020-01-02)

  When I added the ParseNamespaceItem data structure (in commit 5ebaaa494), it wasn't very tightly integrated into the parser's APIs. In the wake of adding p_rtindex to that struct (commit b541e9acc), there is a good reason to make more use of it: by passing around ParseNamespaceItem pointers instead of bare RTE pointers, we can get rid of various messy methods for passing back or deducing the rangetable index of an RTE during parsing. Hence, refactor the addRangeTableEntryXXX functions to build and return a ParseNamespaceItem struct, not just the RTE proper; and replace addRTEtoQuery with addNSItemToQuery, which is passed a ParseNamespaceItem rather than building one internally.

  Also, add per-column data (a ParseNamespaceColumn array) to each ParseNamespaceItem. These arrays are built during addRangeTableEntryXXX, where we have column type data at hand so that it's nearly free to fill the data structure. Later, when we need to build Vars referencing RTEs, we can use the ParseNamespaceColumn info to avoid the rather expensive operations done in get_rte_attribute_type() or expandRTE(). get_rte_attribute_type() is indeed dead code now, so I've removed it. This makes for a useful improvement in parse analysis speed, around 20% in one moderately-complex test query.

  The ParseNamespaceColumn structs also include Var identity information (varno/varattno). That info isn't actually being used in this patch, except that p_varno == 0 is a handy test for a dropped column. A follow-on patch will make more use of it.

  Discussion: https://postgr.es/m/2461.1577764221@sss.pgh.pa.us

* Update copyrights for 2020 (Bruce Momjian, 2020-01-01)

  Backpatch-through: update all files in master, backpatch legal files through 9.4

* Represent Lists as expansible arrays, not chains of cons-cells. (Tom Lane, 2019-07-15)

  Originally, Postgres Lists were a more or less exact reimplementation of Lisp lists, which consist of chains of separately-allocated cons cells, each having a value and a next-cell link. We'd hacked that once before (commit d0b4399d8) to add a separate List header, but the data was still in cons cells. That makes some operations -- notably list_nth() -- O(N), and it's bulky because of the next-cell pointers and per-cell palloc overhead, and it's very cache-unfriendly if the cons cells end up scattered around rather than being adjacent.

  In this rewrite, we still have List headers, but the data is in a resizable array of values, with no next-cell links. Now we need at most two palloc's per List, and often only one, since we can allocate some values in the same palloc call as the List header. (Of course, extending an existing List may require repalloc's to enlarge the array. But this involves just O(log N) allocations not O(N).)

  Of course this is not without downsides. The key difficulty is that addition or deletion of a list entry may now cause other entries to move, which it did not before.

  For example, that breaks foreach() and sister macros, which historically used a pointer to the current cons-cell as loop state. We can repair those macros transparently by making their actual loop state be an integer list index; the exposed "ListCell *" pointer is no longer state carried across loop iterations, but is just a derived value. (In practice, modern compilers can optimize things back to having just one loop state value, at least for simple cases with inline loop bodies.) In principle, this is a semantics change for cases where the loop body inserts or deletes list entries ahead of the current loop index; but I found no such cases in the Postgres code.

  The change is not at all transparent for code that doesn't use foreach() but chases lists "by hand" using lnext(). The largest share of such code in the backend is in loops that were maintaining "prev" and "next" variables in addition to the current-cell pointer, in order to delete list cells efficiently using list_delete_cell(). However, we no longer need a previous-cell pointer to delete a list cell efficiently. Keeping a next-cell pointer doesn't work, as explained above, but we can improve matters by changing such code to use a regular foreach() loop and then using the new macro foreach_delete_current() to delete the current cell. (This macro knows how to update the associated foreach loop's state so that no cells will be missed in the traversal.)

  There remains a nontrivial risk of code assuming that a ListCell * pointer will remain good over an operation that could now move the list contents. To help catch such errors, list.c can be compiled with a new define symbol DEBUG_LIST_MEMORY_USAGE that forcibly moves list contents whenever that could possibly happen. This makes list operations significantly more expensive so it's not normally turned on (though it is on by default if USE_VALGRIND is on).

  There are two notable API differences from the previous code:

  * lnext() now requires the List's header pointer in addition to the current cell's address.

  * list_delete_cell() no longer requires a previous-cell argument.

  These changes are somewhat unfortunate, but on the other hand code using either function needs inspection to see if it is assuming anything it shouldn't, so it's not all bad.

  Programmers should be aware of these significant performance changes:

  * list_nth() and related functions are now O(1); so there's no major access-speed difference between a list and an array.

  * Inserting or deleting a list element now takes time proportional to the distance to the end of the list, due to moving the array elements. (However, it typically *doesn't* require palloc or pfree, so except in long lists it's probably still faster than before.) Notably, lcons() used to be about the same cost as lappend(), but that's no longer true if the list is long. Code that uses lcons() and list_delete_first() to maintain a stack might usefully be rewritten to push and pop at the end of the list rather than the beginning.

  * There are now list_insert_nth...() and list_delete_nth...() functions that add or remove a list cell identified by index. These have the data-movement penalty explained above, but there's no search penalty.

  * list_concat() and variants now copy the second list's data into storage belonging to the first list, so there is no longer any sharing of cells between the input lists. The second argument is now declared "const List *" to reflect that it isn't changed.

  This patch just does the minimum needed to get the new implementation in place and fix bugs exposed by the regression tests. As suggested by the foregoing, there's a fair amount of followup work remaining to do.

  Also, the ENABLE_LIST_COMPAT macros are finally removed in this commit. Code using those should have been gone a dozen years ago.

  Patch by me; thanks to David Rowley, Jesper Pedersen, and others for review.

  Discussion: https://postgr.es/m/11587.1550975080@sss.pgh.pa.us

* Phase 2 pgindent run for v12. (Tom Lane, 2019-05-22)

  Switch to 2.1 version of pg_bsd_indent. This formats multiline function declarations "correctly", that is with additional lines of parameter declarations indented to match where the first line's left parenthesis is.

  Discussion: https://postgr.es/m/CAEepm=0P3FeTXRcU5B2W3jv3PgRVZ-kGUXLGfd42FFhUROO3ug@mail.gmail.com

* Update copyright for 2019 (Bruce Momjian, 2019-01-02)

  Backpatch-through: certain files through 9.4

* Fix interaction of CASE and ArrayCoerceExpr. (Tom Lane, 2018-10-30)

  An array-type coercion appearing within a CASE that has a constant (after const-folding) test expression was mangled by the planner, causing all the elements of the resulting array to be equal to the coerced value of the CASE's test expression.

  This is my oversight in commit c12d570fa: that changed ArrayCoerceExpr to use a subexpression involving a CaseTestExpr, and I didn't notice that eval_const_expressions needed an adjustment to keep from folding such a CaseTestExpr to a constant when it's inside a suitable CASE.

  This is another in what's getting to be a depressingly long line of bugs associated with misidentification of the referent of a CaseTestExpr. We're overdue to redesign that mechanism; but any such fix is unlikely to be back-patchable into v11. As a stopgap, fix eval_const_expressions to do what it must here. Also add a bunch of comments pointing out the restrictions and assumptions that are needed to make this work at all.

  Also fix a related oversight: contain_context_dependent_node() was not aware of the relationship of ArrayCoerceExpr to CaseTestExpr. That was somewhat fail-soft, in that the outcome of a wrong answer would be to prevent optimizations that could have been made, but let's fix it while we're at it.

  Per bug #15471 from Matt Williams. Back-patch to v11 where the faulty logic came in.

  Discussion: https://postgr.es/m/15471-1117f49271989bad@postgresql.org

* Sprinkle some const decorations (Peter Eisentraut, 2018-10-23)

  These mainly help understanding the function signatures better.

* Merge catalog/pg_foo_fn.h headers back into pg_foo.h headers. (Tom Lane, 2018-04-08)

  Traditionally, include/catalog/pg_foo.h contains extern declarations for functions in backend/catalog/pg_foo.c, in addition to its function as the authoritative definition of the pg_foo catalog's rowtype. In some cases, we'd been forced to split out those extern declarations into separate pg_foo_fn.h headers so that the catalog definitions could be #include'd by frontend code. That problem is gone as of commit 9c0a0de4c, so let's undo the splits to make things less confusing.

  Discussion: https://postgr.es/m/23690.1523031777@sss.pgh.pa.us

* Add prokind column, replacing proisagg and proiswindow (Peter Eisentraut, 2018-03-02)

  The new column distinguishes normal functions, procedures, aggregates, and window functions. This replaces the existing columns proisagg and proiswindow, and replaces the convention that procedures are indicated by prorettype == 0. Also change prorettype to be VOIDOID for procedures.

  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
  Reviewed-by: Michael Paquier <michael@paquier.xyz>

* Update copyright for 2018 (Bruce Momjian, 2018-01-02)

  Backpatch-through: certain files through 9.3

* Change TRUE/FALSE to true/false (Peter Eisentraut, 2017-11-08)

  The lower case spellings are C and C++ standard and are used in most parts of the PostgreSQL sources. The upper case spellings are only used in some files/modules. So standardize on the standard spellings.

  The APIs for ICU, Perl, and Windows define their own TRUE and FALSE, so those are left as is when using those APIs.

  In code comments, we use the lower-case spelling for the C concepts and keep the upper-case spelling for the SQL concepts.

  Reviewed-by: Michael Paquier <michael.paquier@gmail.com>

* Support domains over composite types. (Tom Lane, 2017-10-26)

  This is the last major omission in our domains feature: you can now make a domain over anything that's not a pseudotype.

  The major complication from an implementation standpoint is that places that might be creating tuples of a domain type now need to be prepared to apply domain_check(). It seems better that unprepared code fail with an error like "<type> is not composite" than that it silently fail to apply domain constraints. Therefore, relevant infrastructure like get_func_result_type() and lookup_rowtype_tupdesc() has been adjusted to treat domain-over-composite as a distinct case that unprepared code won't recognize, rather than just transparently treating it the same as plain composite. This isn't a 100% solution to the possibility of overlooked domain checks, but it catches most places.

  In passing, improve typcache.c's support for domains (it can now cache the identity of a domain's base type), and rewrite the argument handling logic in jsonfuncs.c's populate_record[set]_worker to reduce duplicative per-call lookups.

  I believe this is code-complete so far as the core and contrib code go. The PLs need varying amounts of work, which will be tackled in followup patches.

  Discussion: https://postgr.es/m/4206.1499798337@sss.pgh.pa.us

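  A hedged sketch of the newly-allowed construct (type and constraint names invented):

      CREATE TYPE complexpair AS (r float8, i float8);
      CREATE DOMAIN dcomplex AS complexpair
        CHECK ((VALUE).r >= 0);

      -- tuple creation now goes through the domain's constraints
      SELECT ROW(1.0, 2.0)::dcomplex;   -- OK
      SELECT ROW(-1.0, 2.0)::dcomplex;  -- fails the CHECK
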
* Support arrays over domains. (Tom Lane, 2017-09-30)

  Allowing arrays with a domain type as their element type was left un-done in the original domain patch, but not for any very good reason. This omission leads to such surprising results as array_agg() not working on a domain column, because the parser can't identify a suitable output type for the polymorphic aggregate.

  In order to fix this, first clean up the APIs of coerce_to_domain() and some internal functions in parse_coerce.c so that we consistently pass around a CoercionContext along with CoercionForm. Previously, we sometimes passed an "isExplicit" boolean flag instead, which is strictly less information; and coerce_to_domain() didn't even get that, but instead had to reverse-engineer isExplicit from CoercionForm. That's contrary to the documentation in primnodes.h that says that CoercionForm only affects display and not semantics. I don't think this change fixes any live bugs, but it makes things more consistent. The main reason for doing it though is that now build_coercion_expression() receives ccontext, which it needs in order to be able to recursively invoke coerce_to_target_type().

  Next, reimplement ArrayCoerceExpr so that the node does not directly know any details of what has to be done to the individual array elements while performing the array coercion. Instead, the per-element processing is represented by a sub-expression whose input is a source array element and whose output is a target array element. This simplifies life in parse_coerce.c, because it can build that sub-expression by a recursive invocation of coerce_to_target_type().

  The executor now handles the per-element processing as a compiled expression instead of hard-wired code. The main advantage of this is that we can use a single ArrayCoerceExpr to handle as many as three successive steps per element: base type conversion, typmod coercion, and domain constraint checking. The old code used two stacked ArrayCoerceExprs to handle type + typmod coercion, which was pretty inefficient, and adding yet another array deconstruction to do domain constraint checking seemed very unappetizing.

  In the case where we just need a single, very simple coercion function, doing this straightforwardly leads to a noticeable increase in the per-array-element runtime cost. Hence, add an additional shortcut evalfunc in execExprInterp.c that skips unnecessary overhead for that specific form of expression. The runtime speed of simple cases is within 1% or so of where it was before, while cases that previously required two levels of array processing are significantly faster.

  Finally, create an implicit array type for every domain type, as we do for base types, enums, etc. Everything except the array-coercion case seems to just work without further effort.

  Tom Lane, reviewed by Andrew Dunstan

  Discussion: https://postgr.es/m/9852.1499791473@sss.pgh.pa.us

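  For illustration, the array_agg() case the message cites, in a hedged sketch (names invented):

      CREATE DOMAIN posint AS int CHECK (VALUE > 0);
      CREATE TABLE t (d posint);
      INSERT INTO t VALUES (1), (2), (3);

      -- previously failed because posint had no array type for the
      -- polymorphic aggregate to return; now yields a posint[]
      SELECT array_agg(d) FROM t;
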
* Reduce excessive dereferencing of function pointers (Peter Eisentraut, 2017-09-07)

  It is equivalent in ANSI C to write (*funcptr) () and funcptr(). These two styles have been applied inconsistently. After discussion, we'll use the more verbose style for plain function pointer variables, to make it clear that it's a variable, and the shorter style when the function pointer is in a struct (s.func() or s->func()), because then it's clear that it's not a plain function name, and otherwise the excessive punctuation makes some of those invocations hard to read.

  Discussion: https://www.postgresql.org/message-id/f52c16db-14ed-757d-4b48-7ef360b1631d@2ndquadrant.com

* Change tupledesc->attrs[n] to TupleDescAttr(tupledesc, n). (Andres Freund, 2017-08-20)

  This is a mechanical change in preparation for a later commit that will change the layout of TupleDesc. Introducing a macro to abstract the details of where attributes are stored will allow us to change that in a separate step and revise it in future.

  Author: Thomas Munro, editorialized by Andres Freund
  Reviewed-By: Andres Freund
  Discussion: https://postgr.es/m/CAEepm=0ZtQ-SpsgCyzzYpsXS6e=kZWqk3g5Ygn3MDV7A8dabUA@mail.gmail.com

* Phase 3 of pgindent updates. (Tom Lane, 2017-06-21)

  Don't move parenthesized lines to the left, even if that means they flow past the right margin.

  By default, BSD indent lines up statement continuation lines that are within parentheses so that they start just to the right of the preceding left parenthesis. However, traditionally, if that resulted in the continuation line extending to the right of the desired right margin, then indent would push it left just far enough to not overrun the margin, if it could do so without making the continuation line start to the left of the current statement indent. That makes for a weird mix of indentations unless one has been completely rigid about never violating the 80-column limit. This behavior has been pretty universally panned by Postgres developers. Hence, disable it with indent's new -lpl switch, so that parenthesized lines are always lined up with the preceding left paren.

  This patch is much less interesting than the first round of indent changes, but also bulkier, so I thought it best to separate the effects.

  Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
  Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us

* Phase 2 of pgindent updates. (Tom Lane, 2017-06-21)

  Change pg_bsd_indent to follow upstream rules for placement of comments to the right of code, and remove pgindent hack that caused comments following #endif to not obey the general rule.

  Commit e3860ffa4dd0dad0dd9eea4be9cc1412373a8c89 wasn't actually using the published version of pg_bsd_indent, but a hacked-up version that tried to minimize the amount of movement of comments to the right of code. The situation of interest is where such a comment has to be moved to the right of its default placement at column 33 because there's code there. BSD indent has always moved right in units of tab stops in such cases --- but in the previous incarnation, indent was working in 8-space tab stops, while now it knows we use 4-space tabs. So the net result is that in about half the cases, such comments are placed one tab stop left of before. This is better all around: it leaves more room on the line for comment text, and it means that in such cases the comment uniformly starts at the next 4-space tab stop after the code, rather than sometimes one and sometimes two tabs after.

  Also, ensure that comments following #endif are indented the same as comments following other preprocessor commands such as #else. That inconsistency turns out to have been self-inflicted damage from a poorly-thought-through post-indent "fixup" in pgindent.

  This patch is much less interesting than the first round of indent changes, but also bulkier, so I thought it best to separate the effects.

  Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
  Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us

* Support XMLTABLE query expression (Alvaro Herrera, 2017-03-08)

  XMLTABLE is defined by the SQL/XML standard as a feature that allows turning XML-formatted data into relational form, so that it can be used as a <table primary> in the FROM clause of a query.

  This new construct provides significant simplicity and performance benefit for XML data processing; what in a client-side custom implementation was reported to take 20 minutes can be executed in 400ms using XMLTABLE. (The same functionality was said to take 10 seconds using nested PostgreSQL XPath function calls, and 5 seconds using XMLReader under PL/Python).

  The implemented syntax deviates slightly from what the standard requires. First, the standard indicates that the PASSING clause is optional and that multiple XML input documents may be given to it; we make it mandatory and accept a single document only. Second, we don't currently support a default namespace to be specified.

  This implementation relies on a new executor node based on a hardcoded method table. (Because the grammar is fixed, there is no extensibility in the current approach; further constructs can be implemented on top of this such as JSON_TABLE, but they require changes to core code.)

  Author: Pavel Stehule, Álvaro Herrera
  Extensively reviewed by: Craig Ringer
  Discussion: https://postgr.es/m/CAFj8pRAgfzMD-LoSmnMGybD0WsEznLHWap8DO79+-GTRAPR4qA@mail.gmail.com

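  A minimal, hedged usage sketch (document contents invented):

      SELECT xt.*
      FROM XMLTABLE('/rows/row'
                    PASSING '<rows><row><a>1</a></row><row><a>2</a></row></rows>'::xml
                    COLUMNS a int PATH 'a') AS xt;
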
* Make messages mentioning type names more uniform (Alvaro Herrera, 2017-01-18)

  This avoids additional translatable strings for each distinct type, as well as making our quoting style around type names more consistent (namely, that we don't quote type names). This continues what started as f402b9950120.

  Discussion: https://postgr.es/m/20160401170642.GA57509@alvherre.pgsql

* Update copyright via script for 2017 (Bruce Momjian, 2017-01-03)

* Add defenses against putting expanded objects into Const nodes. (Tom Lane, 2016-01-21)

  Putting a reference to an expanded-format value into a Const node would be a bad idea for a couple of reasons. It'd be possible for the supposedly immutable Const to change value, if something modified the referenced variable ... in fact, if the Const's reference were R/W, any function that has the Const as argument might itself change it at runtime. Also, because datumIsEqual() is pretty simplistic, the Const might fail to compare equal to other Consts that it should compare equal to, notably including copies of itself. This could lead to unexpected planner behavior, such as "could not find pathkey item to sort" errors or inferior plans.

  I have not been able to find any way to get an expanded value into a Const within the existing core code; but Paul Ramsey was able to trigger the problem by writing a datatype input function that returns an expanded value.

  The best fix seems to be to establish a rule that varlena values being placed into Const nodes should be passed through pg_detoast_datum(). That will do nothing (and cost little) in normal cases, but it will flatten expanded values and thereby avoid the above problems. Also, it will convert short-header or compressed values into canonical format, which will avoid possible unexpected lack-of-equality issues for those cases too. And it provides a last-ditch defense against putting a toasted value into a Const, which we already knew was dangerous, cf commit 2b0c86b66563cf2f. (In the light of this discussion, I'm no longer sure that that commit provided 100% protection against such cases, but this fix should do it.)

  The test added in commit 65c3d05e18e7c530 to catch datatype input functions with unstable results would fail for functions that returned expanded values; but it seems a bit uncharitable to deem a result unstable just because it's expressed in expanded form, so revise the coding so that we check for bitwise equality only after applying pg_detoast_datum(). That's a sufficient condition anyway given the new rule about detoasting when forming a Const.

  Back-patch to 9.5 where the expanded-object facility was added. It's possible that this should go back further; but in the absence of clear evidence that there's any live bug in older branches, I'll refrain for now.