path: root/src/backend/parser
* Fix type-checking of RECORD-returning functions in FROM, redux.  (Tom Lane, 2024-04-15)

  Commit 2ed8f9a01 intended to institute a policy that if a RangeTblFunction has a coldeflist, then the function return type is certainly RECORD, and we should use the coldeflist as the source of truth about what the columns of the record type are.  When the original function has been folded to a constant, inspection of the constant might give a different answer.  This situation will lead to a tuple-type-mismatch error at execution, but up until that point we need to consistently believe the coldeflist, or we'll have problems from different bits of code reaching different conclusions.

  expandRTE didn't get that memo though, and would try to produce a tupdesc based on the constant in this situation, leading to an assertion failure.  (Desultory testing suggests that non-assert builds often manage to give the expected error, although I also saw a "cache lookup failed for type 0" error, and it seems at least possible that a crash could happen.)

  Some other callers of get_expr_result_type and get_expr_result_tupdesc were also being incautious about this.  While none of them seem to have actual bugs, they're working harder than necessary in this case, besides which it seems safest to have an explicit policy of not using those functions on an RTE with a coldeflist.  Adjust the code accordingly, and add commentary to funcapi.c about this policy.

  Also fix an obsolete comment that claimed "get_expr_result_type() doesn't know how to extract type info from a RECORD constant".  That hasn't been true since commit d57534740.

  Per bug #18422 from Alexander Lakhin.  As with the previous commit, back-patch to all supported branches.

  Discussion: https://postgr.es/m/18422-89ca86c8eac5246d@postgresql.org

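  For context, a function RTE with a column definition list looks like the sketch below; this is only an illustration of the coldeflist syntax the commit is talking about, not the reproducer from the bug report, and the literal value is made up.

      -- a RECORD-returning function in FROM with a coldeflist; the parser
      -- must treat (x int, y text) as the source of truth about the columns
      SELECT t.x, t.y
      FROM json_to_record('{"x": 1, "y": "one"}'::json) AS t(x int, y text);
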
* Make INSERT-from-multiple-VALUES-rows handle domain target columns.  (Tom Lane, 2024-03-14)

  Commit a3c7a993d fixed some cases involving target columns that are arrays or composites by applying transformAssignedExpr to the VALUES entries, and then stripping off any assignment ArrayRefs or FieldStores that the transformation added.  But I forgot about domains over arrays or composites :-(.  Such cases would either fail with surprising complaints about mismatched datatypes, or insert unexpected coercions that could lead to odd results.  To fix, extend the stripping logic to get rid of CoerceToDomain if it's atop an ArrayRef or FieldStore.

  While poking at this, I realized that there's a poorly documented and not-at-all-tested behavior nearby: we coerce each VALUES column to the domain type separately, and rely on the rewriter to merge those operations so that the domain constraints are checked only once.  If that merging did not happen, it's entirely possible that we'd get unexpected domain constraint failures due to checking a partially-updated container value.  There's no bug there, but while we're here let's improve the commentary about it and add some test cases that explicitly exercise that behavior.

  Per bug #18393 from Pablo Kharo.  Back-patch to all supported branches.

  Discussion: https://postgr.es/m/18393-65fedb1a0de9260d@postgresql.org

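  A hedged sketch of the kind of statement involved (object names invented here, not taken from the commit's regression tests):

      -- a domain over an array type as the INSERT target, with subscripted
      -- target columns and multiple VALUES rows
      CREATE DOMAIN int_arr AS int[];
      CREATE TABLE dt (a int_arr);
      INSERT INTO dt (a[1], a[2]) VALUES (1, 2), (3, 4);
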
* Compute aggregate argument types correctly in transformAggregateCall().  (Tom Lane, 2023-11-06)

  transformAggregateCall() captures the datatypes of the aggregate's arguments immediately to construct the Aggref.aggargtypes list.  This seems reasonable because the arguments have already been transformed --- but there is an edge case where they haven't been.  Specifically, if we have an unknown-type literal in an ANY argument position, nothing will have been done with it earlier.  But if we also have DISTINCT, then addTargetToGroupList() converts the literal to "text" type, resulting in the aggargtypes list not matching the actual runtime type of the argument.  The end result is that the aggregate tries to interpret a "text" value as being of type "unknown", that is a zero-terminated C string.  If the text value contains no zero bytes, this could result in disclosure of server memory following the text literal value.

  To fix, move the collection of the aggargtypes list to the end of transformAggregateCall(), after DISTINCT has been handled.  This requires slightly more code, but not a great deal.

  Our thanks to Jingzhou Fu for reporting this problem.

  Security: CVE-2023-5868

* Track nesting depth correctly when drilling down into RECORD Vars.  (Tom Lane, 2023-09-15)

  expandRecordVariable() failed to adjust the parse nesting structure correctly when recursing to inspect an outer-level Var.  This could result in assertion failures or core dumps in corner cases.

  Likewise, get_name_for_var_field() failed to adjust the deparse namespace stack correctly when recursing to inspect an outer-level Var.  In this case the likely result was a "bogus varno" error while deparsing a view.

  Per bug #18077 from Jingzhou Fu.  Back-patch to all supported branches.

  Richard Guo, with some adjustments by me

  Discussion: https://postgr.es/m/18077-b9db97c6e0ab45d8@postgresql.org

* Avoid unnecessary plancache revalidation of utility statements.  (Tom Lane, 2023-08-24)

  Revalidation of a plancache entry (after a cache invalidation event) requires acquiring a snapshot.  Normally that is harmless, but not if the cached statement is one that needs to run without acquiring a snapshot.  We were already aware of that for TransactionStmts, but for some reason hadn't extrapolated to the other statements that PlannedStmtRequiresSnapshot() knows mustn't set a snapshot.  This can lead to unexpected failures of commands such as SET TRANSACTION ISOLATION LEVEL.  We can fix it in the same way, by excluding those command types from revalidation.

  However, we can do even better than that: there is no need to revalidate for any statement type for which parse analysis, rewrite, and plan steps do nothing interesting, which is nearly all utility commands.  To mechanize this, invent a parser function stmt_requires_parse_analysis() that tells whether parse analysis does anything beyond wrapping a CMD_UTILITY Query around the raw parse tree.  If that's what it does, then rewrite and plan will just skip the Query, so that it is not possible for the same raw parse tree to produce a different plan tree after cache invalidation.

  stmt_requires_parse_analysis() is basically equivalent to the existing function analyze_requires_snapshot(), except that for obscure reasons that function omits ReturnStmt and CallStmt.  It is unclear whether those were oversights or intentional.  I have not been able to demonstrate a bug from not acquiring a snapshot while analyzing these commands, but at best it seems mighty fragile.  It seems safer to acquire a snapshot for parse analysis of these commands too, which allows making stmt_requires_parse_analysis and analyze_requires_snapshot equivalent.

  In passing this fixes a second bug, which is that ResetPlanCache would exclude ReturnStmts and CallStmts from revalidation.  That's surely *not* safe, since they contain parsable expressions.

  Per bug #18059 from Pavel Kulakov.  Back-patch to all supported branches.

  Discussion: https://postgr.es/m/18059-79c692f036b25346@postgresql.org

* Fix crashes with CREATE SCHEMA AUTHORIZATION and schema elements  (Michael Paquier, 2023-04-28)

  CREATE SCHEMA AUTHORIZATION with appended schema elements can lead to crashes when comparing the schema name of the query with the schemas used in the qualification of some clauses in the elements' queries.

  The origin of the problem is that the transformation routine for the elements listed in a CREATE SCHEMA query uses as new, expected, schema name the one listed in CreateSchemaStmt itself.  However, depending on the query, CreateSchemaStmt.schemaname may be NULL, being computed instead from the role specification of the query given by the AUTHORIZATION clause, which could be either:
  - A user name string, with the new schema name being set to the same value as the role given.
  - Guessed from CURRENT_USER, CURRENT_ROLE or SESSION_USER, with a new schema name computed from the security context where CREATE SCHEMA is running.

  Regression tests are added for CREATE SCHEMA with some appended elements (some of them with schema qualifications), covering also some role specification patterns.

  While on it, this simplifies the context structure used during the transformation of the elements listed in a CREATE SCHEMA query by removing the fields for the role specification and the role type.  They were not used, and for the role specification this could be confusing as the schema name may be extracted from that at the beginning of CreateSchemaCommand().

  This issue has existed for a long time, so backpatch down to all supported versions.

  Reported-by: Song Hongyu
  Author: Michael Paquier
  Reviewed-by: Richard Guo
  Discussion: https://postgr.es/m/17909-f65c12dfc5f0451d@postgresql.org
  Backpatch-through: 11

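  For illustration, a sketch of the shape of statement being fixed (the role and object names here are invented): CREATE SCHEMA AUTHORIZATION with appended elements and no explicit schema name, so the schema name has to be derived from the role.

      -- schemaname is not given, so it is taken from the AUTHORIZATION clause
      CREATE SCHEMA AUTHORIZATION some_role
          CREATE TABLE t1 (a int)
          CREATE VIEW v1 AS SELECT a FROM t1;
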
* Fix List memory issue in transformColumnDefinition  (David Rowley, 2023-03-31)

  When calling generateSerialExtraStmts(), we would pass in the constraint->options.  In some cases, generateSerialExtraStmts() would modify the referenced List to remove elements from it, but doing so is invalid without assigning the list back to all variables that point to it.  In the particular reported problem case, the List became empty, in which case it became NIL, but the passed-in constraint->options didn't get to find out about that and was left pointing to free'd memory.

  To fix this, just perform a list_copy() inside generateSerialExtraStmts().  We could have done the list_copy() just before deleting from the list; however, that seems less robust.  Let's make sure the generated CreateSeqStmt gets a completely different copy of the list to be safe.

  Bug: #17879
  Reported-by: Fei Changhong
  Diagnosed-by: Fei Changhong
  Discussion: https://postgr.es/m/17879-b7dfb5debee58ff5@postgresql.org
  Backpatch-through: 11, all supported versions

* Fix failure to detect some cases of improperly-nested aggregates.  (Tom Lane, 2023-03-13)

  check_agg_arguments_walker() supposed that it needn't descend into the arguments of a lower-level aggregate function, but this is just wrong in the presence of multiple levels of sub-select.  The oversight would lead to executor failures on queries that should be rejected.  (Prior to v11, they actually were rejected, thanks to a "redundant" execution-time check.)

  Per bug #17835 from Anban Company.  Back-patch to all supported branches.

  Discussion: https://postgr.es/m/17835-4f29f3098b2d0ba4@postgresql.org

* Fix more bugs caused by adding columns to the end of a view.  (Tom Lane, 2023-03-07)

  If a view is defined atop another view, and then CREATE OR REPLACE VIEW is used to add columns to the lower view, then when the upper view's referencing RTE is expanded by ApplyRetrieveRule we will have a subquery RTE with fewer eref->colnames than output columns.  This confuses various code that assumes those lists are always in sync, as they are in plain parser output.

  We have seen such problems before (cf commit d5b760ecb), and now I think the time has come to do what was speculated about in that commit: let's make ApplyRetrieveRule synthesize some column names to preserve the invariant that holds in parser output.  Otherwise we'll be chasing this class of bugs indefinitely.  Moreover, it appears from testing that this actually gives us better results in the test case d5b760ecb added, and likely in other corner cases that we lack coverage for.

  In HEAD, I replaced d5b760ecb's hack to make expandRTE exit early with an elog(ERROR) call, since the case is now presumably unreachable.  But it seems like changing that in back branches would bring more risk than benefit, so there I just updated the comment.

  Per bug #17811 from Alexander Lakhin.  Back-patch to all supported branches.

  Discussion: https://postgr.es/m/17811-d31686b78f0dffc9@postgresql.org

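  As a rough illustration of the scenario (a minimal sketch with made-up names, not the reported test case):

      CREATE TABLE t (a int);
      CREATE VIEW v_lower AS SELECT a FROM t;
      CREATE VIEW v_upper AS SELECT * FROM v_lower;
      -- adding a column to the lower view leaves v_upper's expanded subquery
      -- RTE with more output columns than eref->colnames
      CREATE OR REPLACE VIEW v_lower AS SELECT a, a + 1 AS b FROM t;
      SELECT * FROM v_upper;
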
* Rename parser token REF to REF_P to avoid a symbol conflict.  (Tom Lane, 2022-10-16)

  In the latest version of Apple's macOS SDK, <sys/socket.h> fails to compile if "REF" is #define'd as something.  Apple may or may not agree that this is a bug, and even if they do accept the bug report I filed, they probably won't fix it very quickly.  In the meantime, our back branches will all fail to compile gram.y.  v15 and HEAD currently escape the problem thanks to the refactoring done in 98e93a1fc, but that's purely accidental.  Moreover, since that patch removed a widely-visible inclusion of <netdb.h>, back-patching it seems too likely to break third-party code.

  Instead, change the token's code name to REF_P, following our usual convention for naming parser tokens that are likely to have symbol conflicts.  The effects of that should be localized to the grammar and immediately surrounding files, so it seems like a safer answer.

  Per project policy that we want to keep recently-out-of-support branches buildable on modern systems, back-patch all the way to 9.2.

  Discussion: https://postgr.es/m/1803927.1665938411@sss.pgh.pa.us

* Suppress variable-set-but-not-used warnings from clang 15.  (Tom Lane, 2022-09-20)

  clang 15+ will issue a set-but-not-used warning when the only use of a variable is in autoincrements (e.g., "foo++;").  That's perfectly sensible, but it detects a few more cases that we'd not noticed before.  Silence the warnings with our usual methods, such as PG_USED_FOR_ASSERTS_ONLY, or in one case by actually removing a useless variable.

  One thing that we can't nicely get rid of is that with %pure-parser, Bison emits "yynerrs" as a local variable that falls foul of this warning.  To silence those, I inserted "(void) yynerrs;" in the top-level productions of affected grammars.

  Per recently-established project policy, this is a candidate for back-patching into out-of-support branches: it suppresses annoying compiler warnings but changes no behavior.  Hence, back-patch to 9.5, which is as far as these patches go without issues.  (A preliminary check shows that the prior branches need some other set-but-not-used cleanups too, so I'll leave them for another day.)

  Discussion: https://postgr.es/m/514615.1663615243@sss.pgh.pa.us

* Catch stack overflow when recursing in transformFromClauseItem().  (Tom Lane, 2022-08-13)

  Most parts of the parser can expect that the stack overflow check in transformExprRecurse() will trigger before things get desperate.  However, transformFromClauseItem() can recurse directly to self without having analyzed any expressions, so it's possible to drive it to a stack-overrun crash.  Add a check to prevent that.

  Per bug #17583 from Egor Chindyaskin.  Back-patch to all supported branches.

  Richard Guo

  Discussion: https://postgr.es/m/17583-33be55b9f981f75c@postgresql.org

* In extensions, don't replace objects not belonging to the extension.  (Tom Lane, 2022-08-08)

  Previously, if an extension script did CREATE OR REPLACE and there was an existing object not belonging to the extension, it would overwrite the object and adopt it into the extension.  This is problematic, first because the overwrite is probably unintentional, and second because we didn't change the object's ownership.  Thus a hostile user could create an object in advance of an expected CREATE EXTENSION command, and would then have ownership rights on an extension object, which could be modified for trojan-horse-type attacks.

  Hence, forbid CREATE OR REPLACE of an existing object unless it already belongs to the extension.  (Note that we've always forbidden replacing an object that belongs to some other extension; only the behavior for previously-free-standing objects changes here.)  For the same reason, also fail CREATE IF NOT EXISTS when there is an existing object that doesn't belong to the extension.

  Our thanks to Sven Klemm for reporting this problem.

  Security: CVE-2022-2625

* Check maximum number of columns in function RTEs, too.  (Tom Lane, 2022-08-01)

  I thought commit fd96d14d9 had plugged all the holes of this sort, but no, function RTEs could produce oversize tuples too, either via long coldeflists or just from multiple functions in one RTE.  (I'm pretty sure the other variants of base RTEs aren't a problem, because they ultimately refer to either a table or a sub-SELECT, whose widths are enforced elsewhere.  But we explicitly allow join RTEs to be overwidth, as long as you don't try to form their tuple result.)

  Per further discussion of bug #17561.  As before, patch all branches.

  Discussion: https://postgr.es/m/17561-80350151b9ad2ad4@postgresql.org

* In transformRowExpr(), check for too many columns in the row.  (Tom Lane, 2022-07-29)

  A RowExpr with more than MaxTupleAttributeNumber columns would fail at execution anyway, since we cannot form a tuple datum with more than that many columns.  While heap_form_tuple() has a check for too many columns, it emerges that there are some intermediate bits of code that don't check and can be driven to failure with sufficiently many columns.  Checking this at parse time seems like the most appropriate place to install a defense, since we already check SELECT list length there.

  While at it, make the SELECT-list-length error use the same errcode (TOO_MANY_COLUMNS) as heap_form_tuple does, rather than the generic PROGRAM_LIMIT_EXCEEDED.

  Per bug #17561 from Egor Chindyaskin.  The given test case crashes in all supported branches (and probably a lot further back), so patch all.

  Discussion: https://postgr.es/m/17561-80350151b9ad2ad4@postgresql.org

* Fix ruleutils issues with dropped cols in functions-returning-composite.  (Tom Lane, 2022-07-21)

  Due to lack of concern for the case in the dependency code, it's possible to drop a column of a composite type even though stored queries have references to the dropped column via functions-in-FROM that return the composite type.  There are "soft" references, namely FROM-clause aliases for such columns, and "hard" references, that is actual Vars referring to them.  The right fix for hard references is to add dependencies preventing the drop; something we've known for many years and not done (and this commit still doesn't address it).  A "soft" reference shouldn't prevent a drop though.  We've been around on this before (cf. 9b35ddce9, 2c4debbd0), but nobody had noticed that the current behavior can result in dump/reload failures, because ruleutils.c can print more column aliases than the underlying composite type now has.  So we need to rejigger the column-alias-handling code to treat such columns as dropped and not print aliases for them.

  Rather than writing new code for this, I used expandRTE() which already knows how to figure out which function result columns are dropped.  I'd initially thought maybe we could use expandRTE() in all cases, but that fails for EXPLAIN's purposes, because the planner strips a lot of RTE infrastructure that expandRTE() needs.  So this patch just uses it for unplanned function RTEs and otherwise does things the old way.

  If there is a hard reference (Var), then removing the column alias causes us to fail to print the Var, since there's no longer a name to print.  Failing seems less desirable than printing a made-up name, so I made it print "?dropped?column?" instead.

  Per report from Timo Stolz.  Back-patch to all supported branches.

  Discussion: https://postgr.es/m/5c91267e-3b6d-5795-189c-d15a55d61dbb@nullachtvierzehn.de

* Fix alias matching in transformLockingClause().  (Dean Rasheed, 2022-07-07)

  When locking a specific named relation for a FOR [KEY] UPDATE/SHARE clause, transformLockingClause() finds the relation to lock by scanning the rangetable for an RTE with a matching eref->aliasname.  However, it failed to account for the visibility rules of a join RTE.  If a join RTE doesn't have a user-supplied alias, it will have a generated eref->aliasname of "unnamed_join" that is not visible as a relation name in the parse namespace.  Such an RTE needs to be skipped, otherwise it might be found in preference to a regular base relation with a user-supplied alias of "unnamed_join", preventing it from being locked.

  In addition, if a join RTE doesn't have a user-supplied alias, but does have a join_using_alias, then the RTE needs to be matched using that alias rather than the generated eref->aliasname, otherwise a misleading "relation not found" error will be reported rather than a "join cannot be locked" error.

  Backpatch all the way, except for the second part which only goes back to 14, where JOIN USING aliases were added.

  Dean Rasheed, reviewed by Tom Lane.

  Discussion: https://postgr.es/m/CAEZATCUY_KOBnqxbTSPf=7fz9HWPnZ5Xgb9SwYzZ8rFXe7nb=w@mail.gmail.com

* Check column list length in XMLTABLE/JSON_TABLE alias  (Alvaro Herrera, 2022-05-18)

  We weren't checking the length of the column list in the alias clause of an XMLTABLE or JSON_TABLE function (a "tablefunc" RTE), and it was possible to make the server crash by passing an overly long one.  Fix it by throwing an error in that case, like the other places that deal with alias lists.

  In passing, modify the equivalent test used for join RTEs to look like the other ones, which was different for no apparent reason.

  This bug came in when XMLTABLE was born in version 10; backpatch to all stable versions.

  Reported-by: Wang Ke <krking@zju.edu.cn>
  Discussion: https://postgr.es/m/17480-1c9d73565bb28e90@postgresql.org

* Fix core dump in transformValuesClause when there are no columns.  (Tom Lane, 2022-05-09)

  The parser code that transformed VALUES from row-oriented to column-oriented lists failed if there were zero columns.  You can't write that straightforwardly (though probably you should be able to), but the case can be reached by expanding a "tab.*" reference to a zero-column table.

  Per bug #17477 from Wang Ke.  Back-patch to all supported branches.

  Discussion: https://postgr.es/m/17477-0af3c6ac6b0a6ae0@postgresql.org

* Avoid invalid array reference in transformAlterTableStmt().  (Tom Lane, 2022-04-18)

  Don't try to look at the attidentity field of system attributes, because they're not there in the TupleDescAttr array.  Sometimes this is harmless because we accidentally pick up a zero, but otherwise we'll report "no owned sequence found" from an attempt to alter a system attribute.  (It seems possible that a SIGSEGV could occur, too, though I've not seen it in testing.)

  It's not in this function's charter to complain that you can't alter a system column, so instead just hard-wire an assumption that system attributes aren't identities.  I didn't bother with a regression test because the appearance of the bug is very erratic.

  Per bug #17465 from Roman Zharkov.  Back-patch to all supported branches.  (There's not actually a live bug before v12, because before that get_attidentity() did the right thing anyway.  But for consistency I changed the test in the older branches too.)

  Discussion: https://postgr.es/m/17465-f2a554a6cb5740d3@postgresql.org

* Ensure casting to typmod -1 generates a RelabelType.  (Tom Lane, 2021-12-16)

  Fix the code changed by commit 5c056b0c2 so that we always generate RelabelType, not something else, for a cast to unspecified typmod.  Otherwise planner optimizations might not happen.

  It appears we missed this point because the previous experiments were done on type numeric: the parser undesirably generates a call on the numeric() length-coercion function, but then numeric_support() optimizes that down to a RelabelType, so that everything seems fine.  It misbehaves for types that have a non-optimized length coercion function, such as bpchar.

  Per report from John Naylor.  Back-patch to all supported branches, as the previous patch eventually was.  Unfortunately, that no longer includes 9.6 ... we really shouldn't put this type of change into a nearly-EOL branch.

  Discussion: https://postgr.es/m/CAFBsxsEfbFHEkouc+FSj+3K1sHipLPbEC67L0SAe-9-da8QtYg@mail.gmail.com

* Remove bogus assertion in transformExpressionList().  (Tom Lane, 2021-10-19)

  I think when I added this assertion (in commit 8f889b108), I was only thinking of the use of transformExpressionList at top level of INSERT and VALUES.  But it's also called by transformRowExpr(), which can certainly occur in an UPDATE targetlist, so it's inappropriate to suppose that p_multiassign_exprs must be empty.  Besides, since the input is not expected to contain ResTargets, there's no reason it should contain MultiAssignRefs either.  Hence this code need not be concerned about the state of p_multiassign_exprs, and we should just drop the assertion.

  Per bug #17236 from ocean_li_996.  It's been wrong for years, so back-patch to all supported branches.

  Discussion: https://postgr.es/m/17236-3210de9bcba1d7ca@postgresql.org

* Don't elide casting to typmod -1.  (Tom Lane, 2021-09-20)

  Casting a value that's already of a type with a specific typmod to an unspecified typmod doesn't do anything so far as run-time behavior is concerned.  However, it really ought to change the exposed type of the expression to match.  Up to now, coerce_type_typmod hasn't bothered with that, which creates gotchas in contexts such as recursive unions.  If for example one side of the union is numeric(18,3), but it needs to be plain numeric to match the other side, there's no direct way to express that.

  This is easy enough to fix, by inserting a RelabelType to update the exposed type of the expression.  However, it's a bit nervous-making to change this behavior, because it's stood for a really long time.  But no complaints have emerged about 14beta3, so go ahead and back-patch.

  Back-patch of 5c056b0c2 into previous supported branches.

  Discussion: https://postgr.es/m/CABNQVagu3bZGqiTjb31a8D5Od3fUMs7Oh3gmZMQZVHZ=uWWWfQ@mail.gmail.com
  Discussion: https://postgr.es/m/1488389.1631984807@sss.pgh.pa.us

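  A hedged sketch of the recursive-union situation described above (the values and names are made up; before this change the trailing ::numeric cast did not change the exposed type, so the two sides of the UNION could not be made to match):

      WITH RECURSIVE r(n) AS (
          SELECT '1.5'::numeric(18,3)::numeric   -- cast back to unspecified typmod
        UNION ALL
          SELECT n + 1 FROM r WHERE n < 10       -- plain numeric on this side
      )
      SELECT * FROM r;
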
* Avoid using ambiguous word "positive" in error message.  (Fujii Masao, 2021-08-25)

  There are two identical error messages about the valid value of the modulus for hash partitioning in the PostgreSQL source code.  Commit 0e1275fb07 improved only one of them so that the ambiguous word "positive" was avoided there, and forgot to improve the other.  This commit improves the other one, which also reduces the translation burden.

  Back-patch to v11 where the error message exists.

  Author: Kyotaro Horiguchi
  Reviewed-by: Fujii Masao
  Discussion: https://postgr.es/m/20210819.170315.1413060634876301811.horikyota.ntt@gmail.com

* Avoid trying to lock OLD/NEW in a rule with FOR UPDATE.  (Tom Lane, 2021-08-19)

  transformLockingClause neglected to exclude the pseudo-RTEs for OLD/NEW when processing a rule's query.  This led to odd errors or even crashes later on.  This bug is very ancient, but it's not terribly surprising that nobody noticed, since the use-case for SELECT FOR UPDATE in a non-view rule is somewhere between thin and non-existent.  Still, crashing is not OK.

  Per bug #17151 from Zhiyong Wu.  Thanks to Masahiko Sawada for analysis of the problem.

  Discussion: https://postgr.es/m/17151-c03a3e6e4ec9aadb@postgresql.org

* Fix check_agg_arguments' examination of aggregate FILTER clauses.  (Tom Lane, 2021-08-18)

  Recursion into the FILTER clause was mis-implemented, such that a relevant Var or Aggref at the very top of the FILTER clause would be ignored.  (Of course, that'd have to be a plain boolean Var or boolean-returning aggregate.)  The consequence would be mis-identification of the correct semantic level of the aggregate, which could lead to not-per-spec query behavior.  If the FILTER expression is an aggregate, this could also lead to failure to issue an expected "aggregate function calls cannot be nested" error, which would likely result in a core dump later on, since the planner and executor aren't expecting such cases to appear.

  The root cause is that commit b560ec1b0 blindly copied some code that assumed it's recursing into a List, and thus didn't examine the top-level node.  To forestall questions about why this call doesn't look like the others, as well as possible future copy-and-paste mistakes, let's change all three check_agg_arguments_walker calls in check_agg_arguments, even though only the one for the filter clause is really broken.

  Per bug #17152 from Zhiyong Wu.  This has been wrong since we implemented FILTER, so back-patch to all supported versions.  (Testing suggests that pre-v11 branches manage to avoid crashing in the bad-Aggref case, thanks to "redundant" checks in ExecInitAgg.  But I'm not sure how thorough that protection is, and anyway the wrong-behavior issue remains, so fix 9.6 and 10 too.)

  Discussion: https://postgr.es/m/17152-c7f906cc1a88e61b@postgresql.org

* Reject SELECT ... GROUP BY GROUPING SETS (()) FOR UPDATE.  (Tom Lane, 2021-06-01)

  This case should be disallowed, just as FOR UPDATE with a plain GROUP BY is disallowed; FOR UPDATE only makes sense when each row of the query result can be identified with a single table row.  However, we missed teaching CheckSelectLocking() to check groupingSets as well as groupClause, so that it would allow degenerate grouping sets.  That resulted in a bad plan and a null-pointer dereference in the executor.

  Looking around for other instances of the same bug, the only one I found was in examine_simple_variable().  That'd just lead to silly estimates, but it should be fixed too.

  Per private report from Yaoguang Chen.  Back-patch to all supported branches.

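  For illustration, the shape of query now rejected (the table name is made up):

      -- disallowed after this fix, just like a plain GROUP BY ... FOR UPDATE
      SELECT count(*) FROM t GROUP BY GROUPING SETS (()) FOR UPDATE;
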
* Fix old bug with coercing the result of a COLLATE expression.  (Tom Lane, 2021-04-12)

  There are hacks in parse_coerce.c to push down a requested coercion to below any CollateExpr that may appear.  However, we did that even if the requested data type is non-collatable, leading to an invalid expression tree in which CollateExpr is applied to a non-collatable type.  The fix is just to drop the CollateExpr altogether, reasoning that it's useless.

  This bug is ten years old, dating to the original addition of COLLATE support.  The lack of field complaints suggests that there aren't a lot of user-visible consequences.  We noticed the problem because it would trigger an assertion in DefineVirtualRelation if the invalid structure appears as an output column of a view; however, in a non-assert build, you don't see a crash, just a (subtly incorrect) complaint about applying collation to a non-collatable type.  I found that by putting the incorrect structure further down in a view, I could make a view definition that would fail dump/reload, per the added regression test case.  But CollateExpr doesn't do anything at run-time, so this likely doesn't lead to any really exciting consequences.

  Per report from Yulin Pei.  Back-patch to all supported branches.

  Discussion: https://postgr.es/m/HK0PR01MB22744393C474D503E16C8509F4709@HK0PR01MB2274.apcprd01.prod.exchangelabs.com

* Forbid marking an identity column as nullable.  (Tom Lane, 2021-03-12)

  GENERATED ALWAYS AS IDENTITY implies NOT NULL, but the code failed to complain if you overrode that with "GENERATED ALWAYS AS IDENTITY NULL".  One might think the old behavior was a feature, but it was inconsistent because the outcome varied depending on the order of the clauses, so it seems to have been just an oversight.

  Per bug #16913 from Pavel Boev.  Back-patch to v10 where identity columns were introduced.

  Vik Fearing (minor tweaks by me)

  Discussion: https://postgr.es/m/16913-3b5198410f67d8c6@postgresql.org

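  A sketch of the now-rejected column definition (the table name is made up):

      -- rejected after this fix: an explicit NULL contradicts the NOT NULL
      -- that GENERATED ALWAYS AS IDENTITY implies
      CREATE TABLE t (id int GENERATED ALWAYS AS IDENTITY NULL);
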
* Ensure that expandTableLikeClause() re-examines the same table.  (Tom Lane, 2020-12-01)

  As it stood, expandTableLikeClause() re-did the same relation_openrv call that transformTableLikeClause() had done.  However there are scenarios where this would not find the same table as expected.  We hold lock on the LIKE source table, so it can't be renamed or dropped, but another table could appear before it in the search path.  This explains the odd behavior reported in bug #16758 when cloning a table as a temp table of the same name.  This case worked as expected before commit 502898192 introduced the need to open the source table twice, so we should fix it.

  To make really sure we get the same table, let's re-open it by OID not name.  That requires adding an OID field to struct TableLikeClause, which is a little nervous-making from an ABI standpoint, but as long as it's at the end I don't think there's any serious risk.

  Per bug #16758 from Marc Boeren.  Like the previous patch, back-patch to all supported branches.

  Discussion: https://postgr.es/m/16758-840e84a6cfab276d@postgresql.org

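  Roughly, the same-name temp-table scenario looks like the sketch below (an illustration under the assumption of a default search_path; the names are invented here):

      CREATE TABLE foo (a int DEFAULT 0);
      -- the LIKE source must keep resolving to the permanent foo, not the
      -- new temp table that now shadows it in the search path
      CREATE TEMP TABLE foo (LIKE foo INCLUDING ALL);
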
* Further fixes for CREATE TABLE LIKE: cope with self-referential FKs.  (Tom Lane, 2020-11-19)

  Commit 502898192 was too careless about the order of execution of the additional ALTER TABLE operations generated by expandTableLikeClause.  It just stuck them all at the end, which seems okay for most purposes.  But it falls down in the case where LIKE is importing a primary key or unique index and the outer CREATE TABLE includes a FOREIGN KEY constraint that needs to depend on that index.  Weird as that is, it used to work, so we ought to keep it working.

  To fix, make parse_utilcmd.c insert LIKE clauses between index-creation and FK-creation commands in the transformed list of commands, and change utility.c so that the commands generated by expandTableLikeClause are executed immediately not at the end.  One could imagine scenarios where this wouldn't work either; but currently expandTableLikeClause only makes column default expressions, CHECK constraints, and indexes, and this ordering seems fine for those.

  Per bug #16730 from Sofoklis Papasofokli.  Like the previous patch, back-patch to all supported branches.

  Discussion: https://postgr.es/m/16730-b902f7e6e0276b30@postgresql.org

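  An illustrative sketch of the case being kept working (names invented here, not from the bug report):

      CREATE TABLE src (a int PRIMARY KEY);
      -- the self-referential FK depends on the primary-key index that
      -- LIKE ... INCLUDING INDEXES imports, so that index must exist first
      CREATE TABLE dst (
          LIKE src INCLUDING INDEXES,
          FOREIGN KEY (a) REFERENCES dst (a)
      );
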
* Calculate extraUpdatedCols in query rewriter, not parser.  (Tom Lane, 2020-10-28)

  It's unsafe to do this at parse time because addition of generated columns to a table would not invalidate stored rules containing UPDATEs on the table ... but there might now be dependent generated columns that were not there when the rule was made.  This also fixes an oversight that rewriteTargetView failed to update extraUpdatedCols when transforming an UPDATE on an updatable view.  (Since the new calculation is downstream of that, rewriteTargetView doesn't actually need to do anything; but before, there was a demonstrable bug there.)

  In v13 and HEAD, this leads to easily-visible bugs because (since commit c6679e4fc) we won't recalculate generated columns that aren't listed in extraUpdatedCols.  In v12 this bitmap is mostly just used for trigger-firing decisions, so you'd only notice a problem if a trigger cared whether a generated column had been updated.

  I'd complained about this back in May, but then forgot about it until bug #16671 from Michael Paul Killian revived the issue.

  Back-patch to v12 where this field was introduced.  If existing stored rules contain any extraUpdatedCols values, they'll be ignored because the rewriter will overwrite them, so the bug will be fixed even for existing rules.  (But note that if someone were to update to 13.1 or 12.5, store some rules with UPDATEs on tables having generated columns, and then downgrade to a prior minor version, they might observe issues similar to what this patch fixes.  That seems unlikely enough to not be worth going to a lot of effort to fix.)

  Discussion: https://postgr.es/m/10206.1588964727@sss.pgh.pa.us
  Discussion: https://postgr.es/m/16671-2fa55851859fb166@postgresql.org

* Fix optimization hazard in gram.y's makeOrderedSetArgs(), redux.  (Tom Lane, 2020-10-07)

  It appears that commit cf63c641c, which intended to prevent misoptimization of the result-building step in makeOrderedSetArgs, didn't go far enough: buildfarm member hornet's version of xlc is now optimizing back to the old, broken behavior in which list_length(directargs) is fetched only after list_concat() has changed that value.  I'm not entirely convinced whether that's an undeniable compiler bug or whether it can be justified by a sufficiently aggressive interpretation of C sequence points.  So let's just change the code to make it harder to misinterpret.

  Back-patch to all supported versions, just in case.

  Discussion: https://postgr.es/m/1830491.1601944935@sss.pgh.pa.us

* Assign collations in partition bound expressions.  (Tom Lane, 2020-09-28)

  Failure to do this can result in errors during evaluation of the bound expression, as illustrated by the new regression test.  Back-patch to v12 where the ability for partition bounds to be expressions was added.

  Discussion: https://postgr.es/m/CAJV4CdrZ5mKuaEsRSbLf2URQ3h6iMtKD=hik8MaF5WwdmC9uZw@mail.gmail.com

* Fix handling of CREATE TABLE LIKE with inheritance.  (Tom Lane, 2020-08-21)

  If a CREATE TABLE command uses both LIKE and traditional inheritance, Vars in CHECK constraints and expression indexes that are absorbed from a LIKE parent table tended to get mis-numbered, resulting in wrong answers and/or bizarre error messages (though probably not any actual crashes, thanks to validation occurring in the executor).  In v12 and up, the same could happen to Vars in GENERATED expressions, even in cases with no LIKE clause but multiple traditional-inheritance parents.

  The cause of the problem for LIKE is that parse_utilcmd.c supposed it could renumber such Vars correctly during transformCreateStmt(), which it cannot since we have not yet accounted for columns added via inheritance.  Fix that by postponing processing of LIKE INCLUDING CONSTRAINTS, DEFAULTS, GENERATED, INDEXES till after we've performed DefineRelation().

  The error with GENERATED and multiple inheritance is a simple oversight in MergeAttributes(); it knows it has to renumber Vars in inherited CHECK constraints, but forgot to apply the same processing to inherited GENERATED expressions (a/k/a defaults).

  Per bug #16272 from Tom Gottfried.  The non-GENERATED variants of the issue are ancient, presumably dating right back to the addition of CREATE TABLE LIKE; hence back-patch to all supported branches.

  Discussion: https://postgr.es/m/16272-6e32da020e9a9381@postgresql.org

* Fix CREATE TABLE LIKE INCLUDING GENERATED column order issue  (Peter Eisentraut, 2020-04-09)

  CREATE TABLE LIKE INCLUDING GENERATED would fail if a generated column referred to a column with a higher attribute number.  This is because the column mapping mechanism created the mapping incrementally as columns are added.  This was sufficient for previous uses of that mechanism (omitting dropped columns), and it also happened to work if generated columns only referred to columns with lower attribute numbers, but here it failed.

  The fix is to build the attribute mapping in a separate loop before processing the columns in detail.

  Bug: #16342
  Reported-by: Ethan Waldo <ewaldo@healthetechs.com>
  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>

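  A sketch of the previously-failing case (names invented here):

      -- the generated column refers to a column with a higher attribute number
      CREATE TABLE src (
          a int GENERATED ALWAYS AS (b * 2) STORED,
          b int
      );
      CREATE TABLE dst (LIKE src INCLUDING GENERATED);
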
* Fix crash when using COLLATE in partition bound expressions  (Michael Paquier, 2020-04-08)

  Attempting to use a COLLATE clause with a type that is not collatable in a partition bound expression could crash the server.  This commit fixes the code by adding more checks, similar to what is done when computing index or partition attributes, making sure that there is a collation iff the type is collatable.

  Backpatch down to 12, as 7c079d7 introduced this problem.

  Reported-by: Alexander Lakhin
  Author: Dmitry Dolgov
  Discussion: https://postgr.es/m/16325-809194cf742313ab@postgresql.org
  Backpatch-through: 12

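  A hedged sketch of the sort of statement involved (whether this exact form reproduces the crash is an assumption; the names and values are invented):

      CREATE TABLE plist (a int) PARTITION BY LIST (a);
      -- COLLATE applied to a non-collatable type inside a partition bound;
      -- with the fix this is rejected cleanly instead of risking a crash
      CREATE TABLE plist_1 PARTITION OF plist FOR VALUES IN (1 COLLATE "C");
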
* Revert "Skip WAL for new relfilenodes, under wal_level=minimal."Noah Misch2020-03-22
| | | | | | | | This reverts commit cb2fd7eac285b1b0a24eeb2b8ed4456b66c5a09f. Per numerous buildfarm members, it was incompatible with parallel query, and a test case assumed LP64. Back-patch to 9.5 (all supported versions). Discussion: https://postgr.es/m/20200321224920.GB1763544@rfd.leadboat.com
* Skip WAL for new relfilenodes, under wal_level=minimal.  (Noah Misch, 2020-03-21)

  Until now, only selected bulk operations (e.g. COPY) did this.  If a given relfilenode received both a WAL-skipping COPY and a WAL-logged operation (e.g. INSERT), recovery could lose tuples from the COPY.  See src/backend/access/transam/README section "Skipping WAL for New RelFileNode" for the new coding rules.  Maintainers of table access methods should examine that section.

  To maintain data durability, just before commit, we choose between an fsync of the relfilenode and copying its contents to WAL.  A new GUC, wal_skip_threshold, guides that choice.  If this change slows a workload that creates small, permanent relfilenodes under wal_level=minimal, try adjusting wal_skip_threshold.  Users setting a timeout on COMMIT may need to adjust that timeout, and log_min_duration_statement analysis will reflect time consumption moving to COMMIT from commands like COPY.

  Internally, this requires a reliable determination of whether RollbackAndReleaseCurrentSubTransaction() would unlink a relation's current relfilenode.  Introduce rd_firstRelfilenodeSubid.  Amend the specification of rd_createSubid such that the field is zero when a new rel has an old rd_node.  Make relcache.c retain entries for certain dropped relations until end of transaction.

  Back-patch to 9.5 (all supported versions).  This introduces a new WAL record type, XLOG_GIST_ASSIGN_LSN, without bumping XLOG_PAGE_MAGIC.  As always, update standby systems before master systems.  This changes sizeof(RelationData) and sizeof(IndexStmt), breaking binary compatibility for affected extensions.  (The most recent commit to affect the same class of extensions was 089e4d405d0f3b94c74a2c6a54357a84a681754b.)

  Kyotaro Horiguchi, reviewed (in earlier, similar versions) by Robert Haas.  Heikki Linnakangas and Michael Paquier implemented earlier designs that materially clarified the problem.  Reviewed, in earlier designs, by Andrew Dunstan, Andres Freund, Alvaro Herrera, Tom Lane, Fujii Masao, and Simon Riggs.

  Reported by Martijn van Oosterhout.

  Discussion: https://postgr.es/m/20150702220524.GA9392@svana.org

* Fix assertion failure with ALTER TABLE ATTACH PARTITION and indexes  (Michael Paquier, 2020-03-03)

  Using ALTER TABLE ATTACH PARTITION causes an assertion failure when attempting to work on a partitioned index, because partitioned indexes cannot have partition bounds.  The grammar of ALTER TABLE ATTACH PARTITION requires partition bounds, but not ALTER INDEX, so mixing ALTER TABLE with partitioned indexes is confusing.  Hence, on HEAD, prevent ALTER TABLE from attaching a partition if the relation involved is a partitioned index.  On back-branches, as applications may rely on the existing behavior, just remove the culprit assertion.

  Reported-by: Alexander Lakhin
  Author: Amit Langote, Michael Paquier
  Discussion: https://postgr.es/m/16276-5cd1dcc8fb8be7b5@postgresql.org
  Backpatch-through: 11

* Fill in extraUpdatedCols in logical replication  (Peter Eisentraut, 2020-02-17)

  The extraUpdatedCols field of the target RTE records which generated columns are affected by an update.  This is used in a variety of places, including per-column triggers and foreign data wrappers.  When an update was initiated by a logical replication subscription, this field was not filled in, so such an update would not affect generated columns in a way that is consistent with normal updates.

  To fix, factor out some code from analyze.c to fill in extraUpdatedCols in the logical replication worker as well.

  Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/b05e781a-fa16-6b52-6738-761181204567@2ndquadrant.com

* Fix handling of GENERATED columns in CREATE TABLE LIKE INCLUDING DEFAULTS.  (Tom Lane, 2019-09-25)

  LIKE INCLUDING DEFAULTS tried to copy the attrdef expression without copying the state of the attgenerated column.  This is in fact wrong, because GENERATED and DEFAULT expressions are not the same kind of animal; one can contain Vars and the other not.  We *must* copy attgenerated when we're copying the attrdef expression.  Rearrange the if-tests so that the expression is copied only when the correct one of INCLUDING DEFAULTS and INCLUDING GENERATED has been specified.

  Per private report from Manuel Rigger.

  Tom Lane and Peter Eisentraut

* Require the schema qualification in pg_temp.type_name(arg).  (Noah Misch, 2019-08-05)

  Commit aa27977fe21a7dfa4da4376ad66ae37cb8f0d0b5 introduced this restriction for pg_temp.function_name(arg); do likewise for types created in temporary schemas.  Programs that this breaks should add "pg_temp." schema qualification or switch to arg::type_name syntax.  Back-patch to 9.4 (all supported versions).

  Reviewed by Tom Lane.  Reported by Tom Lane.

  Security: CVE-2019-10208

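  A sketch of the two spellings the commit message recommends, assuming a hypothetical type created in the temporary schema (the type name and values here are invented):

      CREATE TYPE pg_temp.color AS ENUM ('red', 'green');
      -- function-call-style cast, now requiring the explicit schema qualification
      SELECT pg_temp.color('red');
      -- or use the cast syntax instead
      SELECT 'red'::pg_temp.color;
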
* Fix many typos and inconsistencies  (Michael Paquier, 2019-07-01)

  Author: Alexander Lakhin
  Discussion: https://postgr.es/m/af27d1b3-a128-9d62-46e0-88f424397f44@gmail.com

* Fix typos and inconsistencies in code comments  (Michael Paquier, 2019-06-14)

  Author: Alexander Lakhin
  Discussion: https://postgr.es/m/dec6aae8-2d63-639f-4d50-20e229fb83e3@gmail.com

* Reconcile nodes/*funcs.c with PostgreSQL 12 work.  (Noah Misch, 2019-06-09)

  One would have needed out-of-tree code to observe the defects.  Remove unreferenced fields instead of completing their support functions.  Since in-tree code can't reach _readIntoClause(), no catversion bump.

* Phase 2 pgindent run for v12.  (Tom Lane, 2019-05-22)

  Switch to 2.1 version of pg_bsd_indent.  This formats multiline function declarations "correctly", that is with additional lines of parameter declarations indented to match where the first line's left parenthesis is.

  Discussion: https://postgr.es/m/CAEepm=0P3FeTXRcU5B2W3jv3PgRVZ-kGUXLGfd42FFhUROO3ug@mail.gmail.com

* Initial pgindent run for v12.  (Tom Lane, 2019-05-22)

  This is still using the 2.0 version of pg_bsd_indent.  I thought it would be good to commit this separately, so as to document the differences between 2.0 and 2.1 behavior.

  Discussion: https://postgr.es/m/16296.1558103386@sss.pgh.pa.us

* Make VACUUM accept 1 and 0 as a boolean value.  (Fujii Masao, 2019-05-21)

  Commit 41b54ba78e allowed existing VACUUM options to take a boolean argument.  It's documented that the valid boolean values that VACUUM can accept are true, false, on, off, 1, and 0.  But previously the parser failed to accept 1 and 0 as a boolean value in VACUUM syntax because of the lack of a NumericOnly clause for vac_analyze_option_arg in gram.y.  This commit adds such a NumericOnly clause so that VACUUM options can also take 1 and 0 as a boolean value.

  Discussion: https://postgr.es/m/CAHGQGwGYg82A8UCQxZe7Zn9MnyUBGdyB=1CNpKF3jBny+RbyfA@mail.gmail.com

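  For illustration (the table name is made up):

      -- 1 and 0 are now accepted as boolean option values, same as on/off
      VACUUM (VERBOSE 1, ANALYZE 0) mytable;
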
* More message style fixes  (Alvaro Herrera, 2019-05-16)

  Discussion: https://postgr.es/m/20190515183005.GA26486@alvherre.pgsql