path: root/src/backend/parser
Commit message / Author / Age
...
* Make more use of castNode()  (Peter Eisentraut, 2017-02-21)
* Suppress "unused variable" warnings with older versions of flex.  (Tom Lane, 2017-02-19)
  Versions of flex before 2.5.36 might generate code that results in an "unused variable" warning, when using %option reentrant.  Historically we've worked around that by specifying -Wno-error, but that's an unsatisfying solution.  The official "fix" for this was just to insert a dummy reference to the variable, so write a small perl script that edits the generated C code similarly.
  The MSVC side of this is untested, but the buildfarm should soon reveal if I broke that.
  Discussion: https://postgr.es/m/25456.1487437842@sss.pgh.pa.us
* Avoid crash in ALTER TABLE not_partitioned DETACH PARTITION.  (Robert Haas, 2017-02-16)
  Amit Langote, reviewed and slightly changed by me.
* Fix YA unwanted behavioral difference with operator_precedence_warning.  (Tom Lane, 2017-02-15)
  Jeff Janes noted that the error cursor position shown for some errors would vary when operator_precedence_warning is turned on.  We'd prefer that option to have no undocumented effects, so this isn't desirable.  To fix, make sure that an AEXPR_PAREN node has the same exprLocation as its child node.
  (Note: it would be a little cheaper to use @2 here instead of an exprLocation call, but there are cases where that wouldn't produce the identical answer, so don't do it like that.)
  Back-patch to 9.5 where this feature was introduced.
  Discussion: https://postgr.es/m/CAMkU=1ykK+VhhcQ4Ky8KBo9FoaUJH3f3rDQB8TkTXi-ZsBRUkQ@mail.gmail.com
* Add CREATE COLLATION IF NOT EXISTS clause  (Peter Eisentraut, 2017-02-15)
  The core of the functionality was already implemented when pg_import_system_collations was added.  This just exposes it as an option in the SQL command.
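  A minimal sketch of the new clause; the collation name and locale below are placeholders, not taken from the commit:
      -- skip creation if a collation with this name already exists
      CREATE COLLATION IF NOT EXISTS german (locale = 'de_DE.utf8');
      -- repeating the command is then a no-op rather than an error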
* Add CREATE SEQUENCE AS <data type> clause  (Peter Eisentraut, 2017-02-10)
  This stores a data type, required to be an integer type, with the sequence.  The sequence's min and max values default to the range supported by the type, and they cannot be set to values exceeding that range.  The internal implementation of the sequence is not affected.
  Change the serial types to create sequences of the appropriate type.  This makes sure that the min and max values of the sequence for a serial column match the range of values supported by the table column.  So the sequence can no longer overflow the table column.
  This also makes monitoring for sequence exhaustion/wraparound easier, which currently requires various contortions to cross-reference the sequences with the table columns they are used with.
  This commit also effectively reverts the pg_sequence column reordering in f3b421da5f4addc95812b9db05a24972b8fd9739, because the new seqtypid column allows us to fill the hole in the struct and create a more natural overall column ordering.
  Reviewed-by: Steve Singer <steve@ssinger.info>
  Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
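  A short sketch of the new syntax; the sequence name is illustrative only:
      -- the sequence is bounded by the smallint range (-32768 .. 32767)
      CREATE SEQUENCE order_seq AS smallint;
      -- attempts to set MINVALUE/MAXVALUE outside the declared type's range are rejected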
* Fix typos in comments.  (Heikki Linnakangas, 2017-02-06)
  Backpatch to all supported versions, where applicable, to make backpatching of future fixes go more smoothly.
  Josh Soref
  Discussion: https://www.postgresql.org/message-id/CACZqfqCf+5qRztLPgmmosr-B0Ye4srWzzw_mo4c_8_B_mtjmJQ@mail.gmail.com
* Use castNode() in a bunch of statement-list-related code.  (Tom Lane, 2017-01-26)
  When I wrote commit ab1f0c822, I really missed the castNode() macro that Peter E. had proposed shortly before.  This back-fills the uses I would have put it to.  It's probably not all that significant, but there are more assertions here than there were before, and conceivably they will help catch any bugs associated with those representation changes.
  I left behind a number of usages like "(Query *) copyObject(query_var)".  Those could have been converted as well, but Peter has proposed another notational improvement that would handle copyObject cases automatically, so I let that be for now.
* Remove vestigial resolveUnknown arguments from transformSortClause etc.  (Tom Lane, 2017-01-25)
  There's really no situation where we don't want these unknown-to-text conversions to happen.  The alternative is failure anyway, and the one caller that was passing "false" did so only because it expected the case could not arise.  Might as well simplify the code.
  Discussion: https://postgr.es/m/CAH2L28uwwbL9HUM-WR=hromW1Cvamkn7O-g8fPY2m=_7muJ0oA@mail.gmail.com
* Change unknown-type literals to type text in SELECT and RETURNING lists.  (Tom Lane, 2017-01-25)
  Previously, we left such literals alone if the query or subquery had no properties forcing a type decision to be made (such as an ORDER BY or DISTINCT clause using that output column).  This meant that "unknown" could be an exposed output column type, which has never been a great idea because it could result in strange failures later on.  For example, an outer query that tried to do any operations on an unknown-type subquery output would generally fail with some weird error like "failed to find conversion function from unknown to text" or "could not determine which collation to use for string comparison".  Also, if the case occurred in a CREATE VIEW's query then the view would have an unknown-type column, causing similar failures in queries trying to use the view.
  To fix, at the tail end of parse analysis of a query, forcibly convert any remaining "unknown" literals in its SELECT or RETURNING list to type text.  However, provide a switch to suppress that, and use it in the cases of SELECT inside a set operation or INSERT command.  In those cases we already had type resolution rules that make use of context information from outside the subquery proper, and we don't want to change that behavior.
  Also, change creation of an unknown-type column in a relation from a warning to a hard error.  The error should be unreachable now in CREATE VIEW or CREATE MATVIEW, but it's still possible to explicitly say "unknown" in CREATE TABLE or CREATE (composite) TYPE.  We want to forbid that because it's nothing but a foot-gun.
  This change creates a pg_upgrade failure case: a matview that contains an unknown-type column can't be pg_upgraded, because reparsing the matview's defining query will now decide that the column is of type text, which doesn't match the cstring-like storage that the old materialized column would actually have.  Add a checking pass to detect that.  While at it, we can detect tables or composite types that would fail, essentially for free.  Those would fail safely anyway later on, but we might as well fail earlier.
  This patch is by me, but it owes something to previous investigations by Rahila Syed.  Also thanks to Ashutosh Bapat and Michael Paquier for review.
  Discussion: https://postgr.es/m/CAH2L28uwwbL9HUM-WR=hromW1Cvamkn7O-g8fPY2m=_7muJ0oA@mail.gmail.com
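  A small sketch of the user-visible effect described above; view and column names are made up:
      -- before this change the view column c could expose type "unknown";
      -- it is now resolved to text at the end of parse analysis
      CREATE VIEW v AS SELECT 'hello' AS c;
      -- explicitly declaring an unknown column is now a hard error instead of a warning
      -- CREATE TABLE bad (c unknown);   -- rejected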
* Move some things from builtins.h to new header files  (Peter Eisentraut, 2017-01-20)
  This avoids builtins.h having to include additional header files.
* Logical replication  (Peter Eisentraut, 2017-01-20)
  - Add PUBLICATION catalogs and DDL
  - Add SUBSCRIPTION catalog and DDL
  - Define logical replication protocol and output plugin
  - Add logical replication workers
  From: Petr Jelinek <petr@2ndquadrant.com>
  Reviewed-by: Steve Singer <steve@ssinger.info>
  Reviewed-by: Andres Freund <andres@anarazel.de>
  Reviewed-by: Erik Rijkers <er@xs4all.nl>
  Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
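  A minimal sketch of the new DDL added here; the connection string, publication, and table names are placeholders:
      -- on the publishing server
      CREATE PUBLICATION mypub FOR TABLE users;
      -- on the subscribing server
      CREATE SUBSCRIPTION mysub
          CONNECTION 'host=pub.example.com dbname=appdb'
          PUBLICATION mypub;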
* Make messages mentioning type names more uniform  (Alvaro Herrera, 2017-01-18)
  This avoids additional translatable strings for each distinct type, as well as making our quoting style around type names more consistent (namely, that we don't quote type names).  This continues what started as f402b9950120.
  Discussion: https://postgr.es/m/20160401170642.GA57509@alvherre.pgsql
* Fix check_srf_call_placement() to handle VALUES cases correctly.  (Tom Lane, 2017-01-16)
  INSERT ... VALUES with a single VALUES row is implemented quite differently from the general VALUES case.  A user-visible implication of that is that we accept SRFs in the single-row case, but not in the multi-row case.  That's a historical artifact no doubt, but in view of the lack of field complaints, I'm not excited about fixing it right now.
  However, check_srf_call_placement() needs to know about this, first because it should throw an error in the unsupported case, and second because it should set p_hasTargetSRFs in the single-row case (because we treat that like a SELECT tlist).  That's an oversight in commit a4c35ea1c.
  To fix, split EXPR_KIND_VALUES into two values.  So far as I can see, this is the only place where we need to distinguish the two cases at present; but there might be more later.
  Patch by me, per report from Andres Freund.
  Discussion: https://postgr.es/m/20170116081548.zg63zltblwimpfgp@alap3.anarazel.de
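  An illustration of the single-row versus multi-row distinction described above; the table name is hypothetical:
      -- single-row VALUES: a set-returning function is accepted,
      -- behaving like an SRF in a SELECT target list
      INSERT INTO nums VALUES (generate_series(1, 3));
      -- multi-row VALUES: the same SRF is rejected with an error
      -- INSERT INTO nums VALUES (generate_series(1, 3)), (99);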
* Change representation of statement lists, and add statement location info.  (Tom Lane, 2017-01-14)
  This patch makes several changes that improve the consistency of representation of lists of statements.  It's always been the case that the output of parse analysis is a list of Query nodes, whatever the types of the individual statements in the list.  This patch brings similar consistency to the outputs of raw parsing and planning steps:
  - The output of raw parsing is now always a list of RawStmt nodes; the statement-type-dependent nodes are one level down from that.
  - The output of pg_plan_queries() is now always a list of PlannedStmt nodes, even for utility statements.  In the case of a utility statement, "planning" just consists of wrapping a CMD_UTILITY PlannedStmt around the utility node.  This list representation is now used in Portal and CachedPlan plan lists, replacing the former convention of intermixing PlannedStmts with bare utility-statement nodes.
  Now, every list of statements has a consistent head-node type depending on how far along it is in processing.  This allows changing many places that formerly used generic "Node *" pointers to use a more specific pointer type, thus reducing the number of IsA() tests and casts needed, as well as improving code clarity.
  Also, the post-parse-analysis representation of DECLARE CURSOR is changed so that it looks more like EXPLAIN, PREPARE, etc.  That is, the contained SELECT remains a child of the DeclareCursorStmt rather than getting flipped around to be the other way.  It's now true for both Query and PlannedStmt that utilityStmt is non-null if and only if commandType is CMD_UTILITY.  That allows simplifying a lot of places that were testing both fields.  (I think some of those were just defensive programming, but in many places, it was actually necessary to avoid confusing DECLARE CURSOR with SELECT.)
  Because PlannedStmt carries a canSetTag field, we're also able to get rid of some ad-hoc rules about how to reconstruct canSetTag for a bare utility statement; specifically, the assumption that a utility is canSetTag if and only if it's the only one in its list.  While I see no near-term need for relaxing that restriction, it's nice to get rid of the ad-hocery.
  The API of ProcessUtility() is changed so that what it's passed is the wrapper PlannedStmt not just the bare utility statement.  This will affect all users of ProcessUtility_hook, but the changes are pretty trivial; see the affected contrib modules for examples of the minimum change needed.  (Most compilers should give pointer-type-mismatch warnings for uncorrected code.)
  There's also a change in the API of ExplainOneQuery_hook, to pass through cursorOptions instead of expecting hook functions to know what to pick.  This is needed because of the DECLARE CURSOR changes, but really should have been done in 9.6; it's unlikely that any extant hook functions know about using CURSOR_OPT_PARALLEL_OK.
  Finally, teach gram.y to save statement boundary locations in RawStmt nodes, and pass those through to Query and PlannedStmt nodes.  This allows more intelligent handling of cases where a source query string contains multiple statements.  This patch doesn't actually do anything with the information, but a follow-on patch will.  (Passing this information through cleanly is the true motivation for these changes; while I think this is all good cleanup, it's unlikely we'd have bothered without this end goal.)
  catversion bump because addition of location fields to struct Query affects stored rules.
  This patch is by me, but it owes a good deal to Fabien Coelho who did a lot of preliminary work on the problem, and also reviewed the patch.
  Discussion: https://postgr.es/m/alpine.DEB.2.20.1612200926310.29821@lancre
* Get rid of ParseState.p_value_substitute; use a columnref hook instead.  (Tom Lane, 2017-01-07)
  I noticed that p_value_substitute, which is a single-purpose kluge I added in 2002 (commit b0422b215), could be replaced by having domainAddConstraint install a parser hook that looks for the name "value".  The parser hook code only dates back to 2009, so it's not surprising that we had to kluge this in 2002, but we can do it more cleanly now.
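  For context, this is the SQL feature the hook serves: the VALUE reference inside a domain CHECK constraint (the domain name is illustrative):
      -- domainAddConstraint resolves the bare name VALUE to the value being checked
      CREATE DOMAIN positive_int AS integer CHECK (VALUE > 0);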
* Update copyright via script for 2017  (Bruce Momjian, 2017-01-03)
* Make more use of RoleSpec struct  (Peter Eisentraut, 2016-12-29)
  Most code was casting this through a generic Node.  By declaring everything as RoleSpec appropriately, we can remove a bunch of casts and ad-hoc node type checking.
  Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
* Replace enum InhOption with simple boolean.  (Tom Lane, 2016-12-23)
  Now that it has only INH_NO and INH_YES values, it's just weird that it's not a plain bool, so make it that way.
  Also rename RangeVar.inhOpt to "inh", to be like RangeTblEntry.inh.  My recollection is that we gave it a different name specifically because it had a different representation than the derived bool value, but it no longer does.  And this is a good forcing function to be sure we catch any places that are affected by the change.
  Bump catversion because of possible effect on stored RangeVar nodes.  I'm not exactly convinced that we ever store RangeVar on disk, but we have a readfuncs function for it, so be cautious.  (If we do do so, then commit e13486eba was in error not to bump catversion.)
  Follow-on to commit e13486eba.
  Discussion: http://postgr.es/m/CA+TgmoYe+EG7LdYX6pkcNxr4ygkP4+A=jm9o-CPXyOvRiCNwaQ@mail.gmail.com
* Remove sql_inheritance GUC.  (Robert Haas, 2016-12-23)
  This backward-compatibility GUC is long overdue for removal.
  Discussion: http://postgr.es/m/CA+TgmoYe+EG7LdYX6pkcNxr4ygkP4+A=jm9o-CPXyOvRiCNwaQ@mail.gmail.com
* Fix CREATE TABLE ... LIKE ... WITH OIDS.  (Tom Lane, 2016-12-22)
  Having a WITH OIDS specification should result in the creation of an OID column, but commit b943f502b broke that in the case that there were LIKE tables without OIDS.  Commentary in that patch makes it look like this was intentional, but if so it was based on a faulty reading of what inheritance does: the parent tables can add an OID column, but they can't subtract one.  AFAICS, the behavior ought to be that you get an OID column if any of the inherited tables, LIKE tables, or WITH clause ask for one.
  Also, revert that patch's unnecessary split of transformCreateStmt's loop over the tableElts list into two passes.  That seems to have been based on a misunderstanding as well: we already have two-pass processing here, we don't need three passes.
  Per bug #14474 from Jeff Dafoe.  Back-patch to 9.6 where the misbehavior was introduced.
  Report: https://postgr.es/m/20161222145304.25620.47445@wrigleys.postgresql.org
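  The intended behavior, in a hypothetical example (table names are invented):
      CREATE TABLE base (id int);                 -- a LIKE source without OIDs
      -- the explicit WITH OIDS clause should (and now does) add an oid column
      CREATE TABLE copycat (LIKE base) WITH OIDS;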
* Fix detection of unfinished Unicode surrogate pair at end of string.  (Tom Lane, 2016-12-21)
  The U&'...' and U&"..." syntaxes silently discarded a surrogate pair start (that is, a code between U+D800 and U+DBFF) if it occurred at the very end of the string.  This seems like an obvious oversight, since we throw an error for every other invalid combination of surrogate characters, including the very same situation in E'...' syntax.
  This has been wrong since the pair processing was added (in 9.0), so back-patch to all supported branches.
  Discussion: https://postgr.es/m/19113.1482337898@sss.pgh.pa.us
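  A sketch of the now-detected case, assuming the default Unicode escape character (backslash):
      -- \D83D is the first half of a surrogate pair; ending the string there
      -- used to be silently discarded and is now reported as an error
      SELECT U&'\D83D';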
* Fix reporting of column typmods for multi-row VALUES constructs.  (Tom Lane, 2016-12-08)
  expandRTE() and get_rte_attribute_type() reported the exprType() and exprTypmod() values of the expressions in the first row of the VALUES as being the column type/typmod returned by the VALUES RTE.  That's fine for the data type, since we coerce all expressions in a column to have the same common type.  But we don't coerce them to have a common typmod, so it was possible for rows after the first one to return values that violate the claimed column typmod.  This leads to the incorrect result seen in bug #14448 from Hassan Mahmood, as well as some other corner-case misbehaviors.
  The desired behavior is the same as we use in other type-unification cases: report the common typmod if there is one, but otherwise return -1 indicating no particular constraint.  It's cheap for transformValuesClause to determine the common typmod while transforming a multi-row VALUES, but it'd be less cheap for expandRTE() and get_rte_attribute_type() to re-determine that info every time they're asked --- possibly a lot less cheap, if the VALUES has many rows.
  Therefore, the best fix is to record the common typmods explicitly in a list in the VALUES RTE, as we were already doing for column collations.  This looks quite a bit like what we're doing for CTE RTEs, so we can save a little bit of space and code by unifying the representation for those two RTE types.  They both now share coltypes/coltypmods/colcollations fields.  (At some point it might seem desirable to populate those fields for all RTE types; but right now it looks like constructing them for other RTE types would add more code and cycles than it would save.)
  The RTE change requires a catversion bump, so this fix is only usable in HEAD.  If we fix this at all in the back branches, the patch will need to look quite different.
  Report: https://postgr.es/m/20161205143037.4377.60754@wrigleys.postgresql.org
  Discussion: https://postgr.es/m/27429.1480968538@sss.pgh.pa.us
* Implement table partitioning.  (Robert Haas, 2016-12-07)
  Table partitioning is like table inheritance and reuses much of the existing infrastructure, but there are some important differences.  The parent is called a partitioned table and is always empty; it may not have indexes or non-inherited constraints, since those make no sense for a relation with no data of its own.  The children are called partitions and contain all of the actual data.  Each partition has an implicit partitioning constraint.  Multiple inheritance is not allowed, and partitioning and inheritance can't be mixed.  Partitions can't have extra columns and may not allow nulls unless the parent does.  Tuples inserted into the parent are automatically routed to the correct partition, so tuple-routing ON INSERT triggers are not needed.  Tuple routing isn't yet supported for partitions which are foreign tables, and it doesn't handle updates that cross partition boundaries.
  Currently, tables can be range-partitioned or list-partitioned.  List partitioning is limited to a single column, but range partitioning can involve multiple columns.  A partitioning "column" can be an expression.
  Because table partitioning is less general than table inheritance, it is hoped that it will be easier to reason about properties of partitions, and therefore that this will serve as a better foundation for a variety of possible optimizations, including query planner optimizations.  The tuple routing this patch does based on the implicit partitioning constraints is an example of this, but it seems likely that many other useful optimizations are also possible.
  Amit Langote, reviewed and tested by Robert Haas, Ashutosh Bapat, Amit Kapila, Rajkumar Raghuwanshi, Corey Huinker, Jaime Casanova, Rushabh Lathia, Erik Rijkers, among others.  Minor revisions by me.
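  A brief sketch of the new syntax; the table and column names are invented for illustration:
      CREATE TABLE measurement (
          logdate   date NOT NULL,
          peaktemp  int
      ) PARTITION BY RANGE (logdate);
      CREATE TABLE measurement_2017 PARTITION OF measurement
          FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');
      -- rows inserted into measurement are routed to the matching partition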
* Add support for restrictive RLS policies  (Stephen Frost, 2016-12-05)
  We have had support for restrictive RLS policies since 9.5, but they were only available through extensions which use the appropriate hooks.  This adds support into the grammar, catalog, psql and pg_dump for restrictive RLS policies, thus reducing the cases where an extension is necessary.
  In passing, also move away from using "AND"d and "OR"d in comments.  As pointed out by Alvaro, it's not really appropriate to attempt to make verbs out of "AND" and "OR", so reword those comments which attempted to.
  Reviewed By: Jeevan Chalke, Dean Rasheed
  Discussion: https://postgr.es/m/20160901063404.GY4028@tamriel.snowman.net
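  A minimal sketch of the grammar addition; the table, column, and policy names are placeholders:
      -- restrictive policies are combined with AND on top of the permissive ones
      CREATE POLICY only_own_rows ON accounts
          AS RESTRICTIVE
          USING (owner = current_user);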
* Don't mess up pstate->p_next_resno in transformOnConflictClause().  (Tom Lane, 2016-12-04)
  transformOnConflictClause incremented p_next_resno while generating the phony targetlist for the EXCLUDED pseudo-rel.  Then that field got incremented some more during transformTargetList, possibly leading to free_parsestate concluding that we'd overrun the allowed length of a tlist, as reported by Justin Pryzby.
  We could fix this by resetting p_next_resno to 1 after using it for the EXCLUDED pseudo-rel tlist, but it seems easier and less coupled to other places if we just don't use that field at all in this loop.  (Note that this doesn't change anything about the resnos that end up appearing in the main target list, because those are all replaced with target-column numbers by updateTargetListEntry.)
  In passing, fix incorrect type OID assigned to the whole-row Var for "EXCLUDED.*" (somehow this escaped having any bad consequences so far, but it's certainly wrong); remove useless assignment to var->location; pstrdup the column names in case of a relcache flush; and improve nearby comments.
  Back-patch to 9.5 where ON CONFLICT was introduced.
  Report: https://postgr.es/m/20161204163237.GA8030@telsasoft.com
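  For context, a sketch of the kind of statement that exercises the EXCLUDED pseudo-relation discussed here; the table and columns are invented:
      INSERT INTO counters (key, hits) VALUES ('home', 1)
          ON CONFLICT (key) DO UPDATE SET hits = counters.hits + EXCLUDED.hits;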
* Add aggregate_with_argtypes and use it consistently  (Peter Eisentraut, 2016-12-01)
  This works like function_with_argtypes, but aggregates allow slightly different arguments.
  Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
* Move function_with_argtypes to a better location  (Peter Eisentraut, 2016-12-01)
  It was apparently added for use by GRANT/REVOKE, but move it closer to where other function signature related things are kept.
  Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
* Use grammar symbol function_with_argtypes consistently  (Peter Eisentraut, 2016-12-01)
  Instead of sometimes referring to a function signature like func_name func_args, use the existing function_with_argtypes symbol, which combines the two.
  Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
* Improve handling of "UPDATE ... SET (column_list) = row_constructor".  (Tom Lane, 2016-11-22)
  Previously, the right-hand side of a multiple-column assignment, if it wasn't a sub-SELECT, had to be a simple parenthesized expression list, because gram.y was responsible for "bursting" the construct into independent column assignments.  This had the minor defect that you couldn't write ROW (though you should be able to, since the standard says this is a row constructor), and the rather larger defect that unlike other uses of row constructors, we would not expand a "foo.*" item into multiple columns.
  Fix that by changing the RHS to be just "a_expr" in the grammar, leaving it to transformMultiAssignRef to separate the elements of a RowExpr; which it will do only after performing standard transformation of the RowExpr, so that "foo.*" behaves as expected.
  The key reason we didn't do that before was the hard-wired handling of DEFAULT tokens (SetToDefault nodes).  This patch deals with that issue by allowing DEFAULT in any a_expr and having parse analysis throw an error if SetToDefault is found in an unexpected place.  That's an improvement anyway since the error can be more specific than just "syntax error".
  The SQL standard suggests that the RHS could be any a_expr yielding a suitable row value.  This patch doesn't really move the goal posts in that respect --- you're still limited to RowExpr or a sub-SELECT --- but it does fix the grammar restriction, so it provides some tangible progress towards a full implementation.  And the limitation is now documented by an explicit error message rather than an unhelpful "syntax error".
  Discussion: <8542.1479742008@sss.pgh.pa.us>
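  A short sketch of the accepted forms after this change; the table and column names are illustrative:
      -- an explicit ROW keyword now works on the right-hand side
      UPDATE accounts SET (first_name, last_name) = ROW ('Ada', 'Lovelace') WHERE id = 1;
      -- the sub-SELECT form continues to work as before
      UPDATE accounts a SET (first_name, last_name) =
          (SELECT s.fn, s.ln FROM staging s WHERE s.id = a.id);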
* Prevent multicolumn expansion of "foo.*" in an UPDATE source expression.  (Tom Lane, 2016-11-20)
  Because we use transformTargetList() for UPDATE as well as SELECT tlists, the code accidentally tried to expand a "*" reference into several columns.  This is nonsensical, because the UPDATE syntax provides exactly one target column to put the value into.  The immediate result was that transformUpdateTargetList() got confused and reported "UPDATE target count mismatch --- internal error".  It seems better to treat such a reference as a plain whole-row variable, as it would be in other contexts.  (This could produce useful results when the target column is of composite type.)
  Fix by tweaking transformTargetList() to perform *-expansion only conditionally, depending on its exprKind parameter.
  Back-patch to 9.3.  The problem exists further back, but a fix would be much more invasive before that, because transformTargetList() wasn't told what kind of list it was working on.  Doesn't seem worth the trouble given the lack of field reports.  (I only noticed it because I was checking the code while trying to improve the documentation about how we handle "foo.*".)
  Discussion: <4308.1479595330@sss.pgh.pa.us>
* Implement syntax for transition tables in AFTER triggers.  (Kevin Grittner, 2016-11-04)
  This is infrastructure for the complete SQL standard feature.  No support is included at this point for execution nodes or PLs.  The intent is to add that soon.
  As this patch leaves things, standard syntax can create tuplestores to contain old and/or new versions of rows affected by a statement.  References to these tuplestores are in the TriggerData structure.  C triggers can access the tuplestores directly, so they are usable, but they cannot yet be referenced within a SQL statement.
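  A sketch of the standard syntax being added; the trigger, table, and function names are hypothetical, and as noted above only C triggers can actually read the transition tables at this point:
      CREATE TRIGGER audit_changes
          AFTER UPDATE ON accounts
          REFERENCING OLD TABLE AS old_rows NEW TABLE AS new_rows
          FOR EACH STATEMENT
          EXECUTE PROCEDURE audit_changes_fn();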
* Add ALTER EXTENSION ADD/DROP ACCESS METHOD, and use it in pg_upgrade.  (Tom Lane, 2016-10-02)
  Without this, an extension containing an access method is not properly dumped/restored during pg_upgrade --- the AM ends up not being a member of the extension after upgrading.
  Another oversight in commit 473b93287, reported by Andrew Dunstan.
  Report: <f7ac29f3-515c-2a44-21c5-ec925053265f@dunslane.net>
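  A minimal sketch of the new clause, using the bloom extension only as a plausible example:
      -- attach/detach an access method as a member object of an extension
      ALTER EXTENSION bloom ADD ACCESS METHOD bloom;
      ALTER EXTENSION bloom DROP ACCESS METHOD bloom;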
* C comment: fix function header comment  (Bruce Momjian, 2016-09-22)
  Fix for transformOnConflictClause().
  Author: Tomonari Katsumata
* Improve parser's and planner's handling of set-returning functions.  (Tom Lane, 2016-09-13)
  Teach the parser to reject misplaced set-returning functions during parse analysis using p_expr_kind, in much the same way as we do for aggregates and window functions (cf commit eaccfded9).  While this isn't complete (it misses nesting-based restrictions), it's much better than the previous error reporting for such cases, and it allows elimination of assorted ad-hoc expression_returns_set() error checks.  We could add nesting checks later if it seems important to catch all cases at parse time.
  There is one case the parser will now throw error for although previous versions allowed it, which is SRFs in the tlist of an UPDATE.  That never behaved sensibly (since it's ill-defined which generated row should be used to perform the update) and it's hard to see why it should not be treated as an error.  It's a release-note-worthy change though.
  Also, add a new Query field hasTargetSRFs reporting whether there are any SRFs in the targetlist (including GROUP BY/ORDER BY expressions).  The parser can now set that basically for free during parse analysis, and we can use it in a number of places to avoid expression_returns_set searches.  (There will be more such checks soon.)  In some places, this allows decontorting the logic since it's no longer expensive to check for SRFs in the tlist --- so I made the checks parallel to the handling of hasAggs/hasWindowFuncs wherever it seemed appropriate.
  catversion bump because adding a Query field changes stored rules.
  Andres Freund and Tom Lane
  Discussion: <24639.1473782855@sss.pgh.pa.us>
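  An illustration of the newly rejected case mentioned above; the table name is made up:
      -- a set-returning function in an UPDATE target list is now a parse-time error
      UPDATE counters SET n = generate_series(1, 10);   -- ERROR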
* Support renaming an existing value of an enum type.  (Tom Lane, 2016-09-07)
  Not much to be said about this patch: it does what it says on the tin.
  In passing, rename AlterEnumStmt.skipIfExists to skipIfNewValExists to clarify what it actually does.
  In the discussion of this patch we considered supporting other similar options, such as IF EXISTS on the type as a whole or IF NOT EXISTS on the target name.  This patch doesn't actually add any such feature, but it might happen later.
  Dagfinn Ilmari Mannsåker, reviewed by Emre Hasegeli
  Discussion: <CAO=2mx6uvgPaPDf-rHqG8=1MZnGyVDMQeh8zS4euRyyg4D35OQ@mail.gmail.com>
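  A small sketch of the new command; the enum type and labels are placeholders:
      CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');
      ALTER TYPE mood RENAME VALUE 'sad' TO 'unhappy';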
* Add location field to DefElem  (Peter Eisentraut, 2016-09-06)
  Add a location field to the DefElem struct, used to parse many utility commands.  Update various error messages to supply error position information.
  To propagate the error position information in a more systematic way, create a ParseState in standard_ProcessUtility() and pass that to interested functions implementing the utility commands.  This seems better than passing the query string and then reassembling a parse state ad hoc, which violates the encapsulation of the ParseState type.
  Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
* Improve parsetree representation of special functions such as CURRENT_DATE.  (Tom Lane, 2016-08-16)
  We implement a dozen or so parameterless functions that the SQL standard defines special syntax for.  Up to now, that was done by converting them into more or less ad-hoc constructs such as "'now'::text::date".  That's messy for multiple reasons: it exposes what should be implementation details to users, and performance is worse than it needs to be in several cases.
  To improve matters, invent a new expression node type SQLValueFunction that can represent any of these parameterless functions.
  Bump catversion because this changes stored parsetrees for rules.
  Discussion: <30058.1463091294@sss.pgh.pa.us>
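  For context, a few of the special-syntax functions in question (shown as an indicative sample rather than the full set covered by SQLValueFunction):
      SELECT CURRENT_DATE,
             CURRENT_TIMESTAMP,
             LOCALTIME,
             CURRENT_USER,
             SESSION_USER;
      -- previously, CURRENT_DATE would deparse in rules and views as ('now'::text)::date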
* Doc: clarify that DROP ... CASCADE is recursive.  (Tom Lane, 2016-08-12)
  Apparently that's not obvious to everybody, so let's belabor the point.
  In passing, document that DROP POLICY has CASCADE/RESTRICT options (which it does, per gram.y) but they do nothing (I assume, anyway).  Also update some long-obsolete commentary in gram.y.
  Discussion: <20160805104837.1412.84915@wrigleys.postgresql.org>
* Make INSERT-from-multiple-VALUES-rows handle targetlist indirection better.  (Tom Lane, 2016-08-03)
  Previously, if an INSERT with multiple rows of VALUES had indirection (array subscripting or field selection) in its target-columns list, the parser handled that by applying transformAssignedExpr() to each element of each VALUES row independently.  This led to having ArrayRef assignment nodes or FieldStore nodes in each row of the VALUES RTE.  That works for simple cases, but in bug #14265 Nuri Boardman points out that it fails if there are multiple assignments to elements/fields of the same target column.  For such cases to work, rewriteTargetListIU() has to nest the ArrayRefs or FieldStores together to produce a single expression to be assigned to the column.  But it failed to find them in the top-level targetlist and issued an error about "multiple assignments to same column".
  We could possibly fix this by teaching the rewriter to apply rewriteTargetListIU to each VALUES row separately, but that would be messy (it would change the output rowtype of the VALUES RTE, for example) and inefficient.  Instead, let's fix the parser so that the VALUES RTE outputs are just the user-specified values, cast to the right type if necessary, and then the ArrayRefs or FieldStores are applied in the top-level targetlist to Vars representing the RTE's outputs.  This is the same parsetree representation already used for similar cases with INSERT/SELECT syntax, so it allows simplifications in ruleutils.c, which no longer needs to treat INSERT-from-multiple-VALUES as its own special case.
  This implementation works by applying transformAssignedExpr to the VALUES entries as before, and then stripping off any ArrayRefs or FieldStores it adds.  With lots of VALUES rows it would be noticeably more efficient to not add those nodes in the first place.  But that's just an optimization not a bug fix, and there doesn't seem to be any good way to do it without significant refactoring.  (A non-invasive answer would be to apply transformAssignedExpr + stripping to just the first VALUES row, and then just forcibly cast remaining rows to the same data types exposed in the first row.  But this way would lead to different, not-INSERT-specific errors being reported in casting failure cases, so it doesn't seem very nice.)  So leave that for later; this patch at least isn't making the per-row parsing work worse, and it does make the finished parsetree smaller, saving rewriter and planner work.
  Catversion bump because stored rules containing such INSERTs would need to change.  Because of that, no back-patch, even though this is a very long-standing bug.
  Report: <20160727005725.7438.26021@wrigleys.postgresql.org>
  Discussion: <9578.1469645245@sss.pgh.pa.us>
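  A sketch of the previously failing shape described above; the table definition is invented:
      CREATE TABLE t (arr int[]);
      -- multiple assignments to elements of the same target column, combined with
      -- multi-row VALUES, used to fail with "multiple assignments to same column"
      INSERT INTO t (arr[1], arr[2]) VALUES (1, 2), (3, 4);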
* Teach parser to transform "x IS [NOT] DISTINCT FROM NULL" to a NullTest.  (Tom Lane, 2016-07-28)
  Now that we've nailed down the principle that NullTest with !argisrow is fully equivalent to SQL's IS [NOT] DISTINCT FROM NULL, let's teach the parser about it.  This produces a slightly more compact parse tree and is much more amenable to optimization than a DistinctExpr, since the planner knows a good deal about NullTest and next to nothing about DistinctExpr.
  I'm not sure that there are all that many queries in the wild that could be improved by this, but at least one source of such cases is the patch just made to postgres_fdw to emit IS [NOT] DISTINCT FROM NULL when IS [NOT] NULL isn't semantically correct.
  No back-patch, since to the extent that this does affect planning results, it might be considered undesirable plan destabilization.
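  In other words, for a non-row-valued column these two predicates are equivalent, and both now produce a NullTest node (the table and column are hypothetical):
      SELECT * FROM orders WHERE shipped_at IS DISTINCT FROM NULL;
      SELECT * FROM orders WHERE shipped_at IS NOT NULL;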
* Improve documentation about CREATE TABLE ... LIKE.  (Tom Lane, 2016-07-28)
  The docs failed to explain that LIKE INCLUDING INDEXES would not preserve the names of indexes and associated constraints.  Also, it wasn't mentioned that EXCLUDE constraints would be copied by this option.  The latter oversight seems enough of a documentation bug to justify back-patching.
  In passing, do some minor copy-editing in the same area, and add an entry for LIKE under "Compatibility", since it's not exactly a faithful implementation of the standard's feature.
  Discussion: <20160728151154.AABE64016B@smtp.hushmail.com>
* Rethink node-level representation of partial-aggregation modes.  (Tom Lane, 2016-06-26)
  The original coding had three separate booleans representing partial aggregation behavior, which was confusing, unreadable, and error-prone, not least because the booleans weren't always listed in the same order.  It was also inadequate for the allegedly-desirable future extension to support intermediate partial aggregation, because we'd need separate markers for serialization and deserialization in such a case.
  Merge these bools into an enum "AggSplit" to provide symbolic names for the supported operating modes (and document what those are).  By assigning the values of the enum constants carefully, we can treat AggSplit values as options bitmasks so that tests of what to do aren't noticeably more expensive than before.
  While at it, get rid of Aggref.aggoutputtype.  That's not needed since commit 59a3795c2 got rid of setrefs.c's special-purpose Aggref comparison code, and it likewise seemed more confusing than helpful.
  Assorted comment cleanup as well (there's still more that I want to do in that line).
  catversion bump for change in Aggref node contents.  Should be the last one for partial-aggregation changes.
  Discussion: <29309.1466699160@sss.pgh.pa.us>
* Fix type-safety problem with parallel aggregate serial/deserialization.  (Tom Lane, 2016-06-22)
  The original specification for this called for the deserialization function to have signature "deserialize(serialtype) returns transtype", which is a security violation if transtype is INTERNAL (which it always would be in practice) and serialtype is not (which ditto).  The patch blithely overrode the opr_sanity check for that, which was sloppy-enough work in itself, but the indisputable reason this cannot be allowed to stand is that CREATE FUNCTION will reject such a signature and thus it'd be impossible for extensions to create parallelizable aggregates.
  The minimum fix to make the signature type-safe is to add a second, dummy argument of type INTERNAL.  But to lock it down a bit more and make misuse of INTERNAL-accepting functions less likely, let's get rid of the ability to specify a "serialtype" for an aggregate and just say that the only useful serialtype is BYTEA --- which, in practice, is the only interesting value anyway, due to the usefulness of the send/recv infrastructure for this purpose.  That means we only have to allow "serialize(internal) returns bytea" and "deserialize(bytea, internal) returns internal" as the signatures for these support functions.
  In passing fix bogus signature of int4_avg_combine, which I found thanks to adding an opr_sanity check on combinefunc signatures.
  catversion bump due to removing pg_aggregate.aggserialtype and adjusting signatures of assorted built-in functions.
  David Rowley and Tom Lane
  Discussion: <27247.1466185504@sss.pgh.pa.us>
* Update comment about allowing GUCs to change scanning.  (Bruce Momjian, 2016-06-21)
  Reported-by: David G. Johnston
  Discussion: CAKFQuwZZvnxwSq9tNtvL+uyuDKGgV91zR_agtPxQHRWMWQRP8g@mail.gmail.com
* Fix handling of argument and result datatypes for partial aggregation.  (Tom Lane, 2016-06-17)
  When doing partial aggregation, the args list of the upper (combining) Aggref node is replaced by a Var representing the output of the partial aggregation steps, which has either the aggregate's transition data type or a serialized representation of that.  However, nodeAgg.c blindly continued to use the args list as an indication of the user-level argument types.  This broke resolution of polymorphic transition datatypes at executor startup (though it accidentally failed to fail for the ANYARRAY case, which is likely the only one anyone had tested).  Moreover, the constructed FuncExpr passed to the finalfunc contained completely wrong information, which would have led to bogus answers or crashes for any case where the finalfunc examined that information (which is only likely to be with polymorphic aggregates using a non-polymorphic transition type).
  As an independent bug, apply_partialaggref_adjustment neglected to resolve a polymorphic transition datatype before assigning it as the output type of the lower-level Aggref node.  This again accidentally failed to fail for ANYARRAY but would be unlikely to work in other cases.
  To fix the first problem, record the user-level argument types in a separate OID-list field of Aggref, and look to that rather than the args list when asking what the argument types were.  (It turns out to be convenient to include any "direct" arguments in this list too, although those are not currently subject to being overwritten.)
  Rather than adding yet another resolve_aggregate_transtype() call to fix the second problem, add an aggtranstype field to Aggref, and store the resolved transition type OID there when the planner first computes it.  (By doing this in the planner and not the parser, we can allow the aggregate's transition type to change from time to time, although no DDL support yet exists for that.)  This saves nothing of consequence for simple non-polymorphic aggregates, but for polymorphic transition types we save a catalog lookup during executor startup as well as several planner lookups that are new in 9.6 due to parallel query planning.
  In passing, fix an error that was introduced into count_agg_clauses_walker some time ago: it was applying exprTypmod() to something that wasn't an expression node at all, but a TargetEntry.  exprTypmod silently returned -1 so that there was not an obvious failure, but this broke the intended sensitivity of aggregate space consumption estimates to the typmod of varchar and similar data types.  This part needs to be back-patched.
  Catversion bump due to change of stored Aggref nodes.
  Discussion: <8229.1466109074@sss.pgh.pa.us>
* Add VACUUM (DISABLE_PAGE_SKIPPING) for emergencies.  (Robert Haas, 2016-06-17)
  If you really want to vacuum every single page in the relation, regardless of apparent visibility status or anything else, you can use this option.  In previous releases, this behavior could be achieved using VACUUM (FREEZE), but because we can now recognize all-frozen pages as not needing to be frozen again, that no longer works.  There should be no need for routine use of this option, but maybe bugs or disaster recovery will necessitate its use.
  Patch by me, reviewed by Andres Freund.
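  A minimal usage sketch; the table name is a placeholder:
      -- force every page to be visited, ignoring the visibility map
      VACUUM (DISABLE_PAGE_SKIPPING) important_table;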
* pgindent run for 9.6  (Robert Haas, 2016-06-09)
* Fix loose ends for SQL ACCESS METHOD objects  (Alvaro Herrera, 2016-06-07)
  COMMENT ON ACCESS METHOD was missing; add it, along with psql tab-completion support for it.
  psql was also missing a way to list existing access methods; the new \dA command does that.  Also add tab-completion support for DROP ACCESS METHOD.
  Author: Michael Paquier
  Discussion: https://www.postgresql.org/message-id/CAB7nPqTzdZdu8J7EF8SXr_R2U5bSUUYNOT3oAWBZdEoggnwhGA@mail.gmail.com
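  A small sketch of the additions; the comment text here is made up:
      COMMENT ON ACCESS METHOD gin IS 'GIN index access method';
      -- in psql, list existing access methods with:
      --   \dA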
* Fix grammar's AND/OR flattening to work with operator_precedence_warning.  (Tom Lane, 2016-06-03)
  It'd be good for "(x AND y) AND z" to produce a three-child AND node whether or not operator_precedence_warning is on, but that failed to happen when it's on because makeAndExpr() didn't look through the added AEXPR_PAREN node.  This has no effect on generated plans because prepqual.c would flatten the AND nest anyway; but it does affect the number of parens printed in ruleutils.c, for example.
  I'd already fixed some similar hazards in parse_expr.c in commit abb164655, but didn't think to search gram.y for problems of this ilk.
  Per gripe from Jean-Pierre Pelletier.
  Report: <fa0535ec6d6428cfec40c7e8a6d11156@mail.gmail.com>