path: root/src/backend/parser/analyze.c
...
* Merge parser's p_relnamespace and p_varnamespace lists into a single list. (Tom Lane, 2012-08-08)
  Now that we are storing structs in these lists, the distinction between the two lists can be represented with a couple of extra flags while using only a single list. This simplifies the code and should save a little bit of palloc traffic, since the majority of RTEs are represented in both lists anyway.
* Implement SQL-standard LATERAL subqueries. (Tom Lane, 2012-08-07)
  This patch implements the standard syntax of LATERAL attached to a sub-SELECT in FROM, and also allows LATERAL attached to a function in FROM, since set-returning function calls are expected to be one of the principal use-cases.
  The main change here is a rewrite of the mechanism for keeping track of which relations are visible for column references while the FROM clause is being scanned. The parser "namespace" lists are no longer lists of bare RTEs, but are lists of ParseNamespaceItem structs, which carry an RTE pointer as well as some visibility-controlling flags. Aside from supporting LATERAL correctly, this lets us get rid of the ancient hacks that required rechecking subqueries and JOIN/ON and function-in-FROM expressions for invalid references after they were initially parsed. Invalid column references are now always correctly detected on sight.
  In passing, remove assorted parser error checks that are now dead code by virtue of our having gotten rid of add_missing_from, as well as some comments that are obsolete for the same reason. (It was mainly add_missing_from that caused so much fudging here in the first place.)
  The planner support for this feature is very minimal, and will be improved in future patches. It works well enough for testing purposes, though.
  catversion bump forced due to new field in RangeTblEntry.
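  As an illustration of the syntax this commit adds (not part of the commit message; the table and column names below are hypothetical), LATERAL can be attached to a sub-SELECT or to a function call in FROM:

      SELECT p.id, s.total
      FROM products p,
           LATERAL (SELECT sum(qty) AS total
                    FROM sales
                    WHERE sales.product_id = p.id) s;

      SELECT t.id, g.n
      FROM tenants t,
           LATERAL generate_series(1, t.quota) AS g(n);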
* Fix WITH attached to a nested set operation (UNION/INTERSECT/EXCEPT). (Tom Lane, 2012-07-31)
  Parse analysis neglected to cover the case of a WITH clause attached to an intermediate-level set operation; it only handled WITH at the top level or WITH attached to a leaf-level SELECT. Per report from Adam Mackler.
  In HEAD, I rearranged the order of SelectStmt's fields to put withClause with the other fields that can appear on non-leaf SelectStmts. In back branches, leave it alone to avoid a possible ABI break for third-party code.
  Back-patch to 8.4 where WITH support was added.
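  A sketch of the query shape this fix covers (illustrative only, with a throwaway CTE name; not taken from the commit): a WITH clause attached to an inner branch of a set operation rather than to the top level or to a leaf SELECT:

      SELECT 1 AS x
      UNION
      (WITH inner_cte AS (SELECT 2 AS x)
       SELECT x FROM inner_cte
       UNION
       SELECT 3);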
* Run pgindent on 9.2 source tree in preparation for first 9.3 commit-fest. (Bruce Momjian, 2012-06-10)
* Add some infrastructure for contrib/pg_stat_statements. (Tom Lane, 2012-03-27)
  Add a queryId field to Query and PlannedStmt. This is not used by the core backend, except for being copied around at appropriate times. It's meant to allow plug-ins to track a particular query forward from parse analysis to execution. The queryId is intentionally not dumped into stored rules (and hence this commit doesn't bump catversion). You could argue that choice either way, but it seems better that stored rule strings not have any dependency on plug-ins that might or might not be present.
  Also, add a post_parse_analyze_hook that gets invoked at the end of parse analysis (but only for top-level analysis of complete queries, not cases such as analyzing a domain's default-value expression). This is mainly meant to be used to compute and assign a queryId, but it could have other applications.
  Peter Geoghegan
* Clean up compiler warnings from unused variables with asserts disabled. (Peter Eisentraut, 2012-03-21)
  For those variables only used when asserts are enabled, use a new macro PG_USED_FOR_ASSERTS_ONLY, which expands to __attribute__((unused)) when asserts are not enabled.
* Restructure SELECT INTO's parsetree representation into CreateTableAsStmt. (Tom Lane, 2012-03-19)
  Making this operation look like a utility statement seems generally a good idea, and particularly so in light of the desire to provide command triggers for utility statements. The original choice of representing it as SELECT with an IntoClause appendage had metastasized into rather a lot of places, unfortunately, so that this patch is a great deal more complicated than one might at first expect.
  In particular, keeping EXPLAIN working for SELECT INTO and CREATE TABLE AS subcommands required restructuring some EXPLAIN-related APIs. Add-on code that calls ExplainOnePlan or ExplainOneUtility, or uses ExplainOneQuery_hook, will need adjustment.
  Also, the cases PREPARE ... SELECT INTO and CREATE RULE ... SELECT INTO, which formerly were accepted though undocumented, are no longer accepted. The PREPARE case can be replaced with use of CREATE TABLE AS EXECUTE. The CREATE RULE case doesn't seem to have much real-world use (since the rule would work only once before failing with "table already exists"), so we'll not bother with that one.
  Both SELECT INTO and CREATE TABLE AS still return a command tag of "SELECT nnnn". There was some discussion of returning "CREATE TABLE nnnn", but for the moment backwards compatibility wins the day.
  Andres Freund and Tom Lane
* Check misplaced window functions before checking aggregate/group by sanity. (Tom Lane, 2012-02-08)
  If somebody puts a window function in WHERE, we should complain about that in so many words. The previous coding tended to complain about the window function's arguments instead, which is likely to be misleading to users who are unclear on the semantics of window functions; as seen for example in bug #6440 from Matyas Novak.
  Just another example of how "add new code at the end" is frequently a bad heuristic.
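  For illustration (not part of the commit message; accounts and balance are hypothetical names), the kind of misplaced window function this change reports more clearly:

      SELECT *
      FROM accounts
      WHERE row_number() OVER (ORDER BY balance) <= 10;

  After this commit the error names the misplaced window function itself rather than complaining about its arguments; the exact wording varies across versions.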
* Update copyright notices for year 2012. (Bruce Momjian, 2012-01-01)
* Fix unsupported options in CREATE TABLE ... AS EXECUTE. (Tom Lane, 2011-11-24)
  The WITH [NO] DATA option was not supported, nor the ability to specify replacement column names; the former limitation wasn't even documented, as per recent complaint from Naoya Anzai.
  Fix by moving the responsibility for supporting these options into the executor. It actually takes less code this way ...
  catversion bump due to change in representation of IntoClause, which might affect stored rules.
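  A sketch of what these options look like with EXECUTE (illustrative only; q and t are hypothetical names):

      PREPARE q AS SELECT generate_series(1, 10) AS g;
      CREATE TABLE t (n) AS EXECUTE q WITH NO DATA;  -- replacement column name, no rows copied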
* Pgindent run before 9.1 beta2. (Bruce Momjian, 2011-06-09)
* Expose the "*VALUES*" alias that we generate for a stand-alone VALUES list. (Tom Lane, 2011-06-04)
  We were trying to make that strictly an internal implementation detail, but it turns out that it's exposed anyway when dumping a view defined like
      CREATE VIEW test_view AS VALUES (1), (2), (3) ORDER BY 1;
  This comes out as
      CREATE VIEW ... ORDER BY "*VALUES*".column1;
  which fails to parse when reloading the dump.
  Hacking ruleutils.c to suppress the column qualification looks like it'd be a risky business, so instead promote the RTE alias to full-fledged usability.
  Per bug #6049 from Dylan Adams. Back-patch to all supported branches.
* Make a code-cleanup pass over the collations patch. (Tom Lane, 2011-04-22)
  This patch is almost entirely cosmetic --- mostly cleaning up a lot of neglected comments, and fixing code layout problems in places where the patch made lines too long and then pgindent did weird things with that. I did find a bug-of-omission in equalTupleDescs().
* Fix handling of collations in multi-row VALUES constructs. (Tom Lane, 2011-04-18)
  Per spec we ought to apply select_common_collation() across the expressions in each column of the VALUES table. The original coding was just taking the first row and assuming it was representative.
  This patch adds a field to struct RangeTblEntry to carry the resolved collations, so initdb is forced for changes in stored rule representation.
* pgindent run before PG 9.1 beta 1. (Bruce Momjian, 2011-04-10)
* Revise collation derivation method and expression-tree representation. (Tom Lane, 2011-03-19)
  All expression nodes now have an explicit output-collation field, unless they are known to only return a noncollatable data type (such as boolean or record). Also, nodes that can invoke collation-aware functions store a separate field that is the collation value to pass to the function. This avoids confusion that arises when a function has collatable inputs and noncollatable output type, or vice versa.
  Also, replace the parser's on-the-fly collation assignment method with a post-pass over the completed expression tree. This allows us to use a more complex (and hopefully more nearly spec-compliant) assignment rule without paying for it in extra storage in every expression node.
  Fix assorted bugs in the planner's handling of collations by making collation one of the defining properties of an EquivalenceClass and by converting CollateExprs into discardable RelabelType nodes during expression preprocessing.
* Improve handling of unknown-type literals in UNION/INTERSECT/EXCEPT. (Tom Lane, 2011-03-15)
  This patch causes unknown-type Consts to be coerced to the resolved output type of the set operation at parse time. Formerly such Consts were left alone until late in the planning stage. The disadvantage of that approach is that it disables some optimizations, because the planner sees the set-op leaf query as having different output column types than the overall set-op. We saw an example of that in a recent performance gripe from Claudio Freire.
  Fixing such a Const requires scribbling on the leaf query in transformSetOperationTree, but that should be all right since if the leaf query's semantics depended on that output column, it would already have resolved the unknown to something else.
  Most of the bulk of this patch is a simple adjustment of transformSetOperationTree's API so that upper levels can get at the TargetEntry containing a Const to be replaced: it now returns a list of TargetEntries, instead of just the bare expressions.
* Support data-modifying commands (INSERT/UPDATE/DELETE) in WITH. (Tom Lane, 2011-02-25)
  This patch implements data-modifying WITH queries according to the semantics that the updates all happen with the same command counter value, and in an unspecified order. Therefore one WITH clause can't see the effects of another, nor can the outer query see the effects other than through the RETURNING values. And attempts to do conflicting updates will have unpredictable results. We'll need to document all that.
  This commit just fixes the code; documentation updates are waiting on author.
  Marko Tiikkaja and Hitoshi Harada
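  For illustration (not from the commit message; orders and orders_archive are hypothetical tables), a data-modifying WITH of the kind this commit enables:

      WITH moved AS (
          DELETE FROM orders
          WHERE created < now() - interval '1 year'
          RETURNING *
      )
      INSERT INTO orders_archive SELECT * FROM moved;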
* Add a relkind field to RangeTblEntry to avoid some syscache lookups. (Tom Lane, 2011-02-22)
  The recent additions for FDW support required checking foreign-table-ness in several places in the parse/plan chain. While it's not clear whether that would really result in a noticeable slowdown, it seems best to avoid any performance risk by keeping a copy of the relation's relkind in RangeTblEntry. That might have some other uses later, anyway. Per discussion.
* Implement an API to let foreign-data wrappers actually be functional. (Tom Lane, 2011-02-20)
  This commit provides the core code and documentation needed. A contrib module test case will follow shortly.
  Shigeru Hanada, Jan Urbanski, Heikki Linnakangas
* Per-column collation support (Peter Eisentraut, 2011-02-08)
  This adds collation support for columns and domains, a COLLATE clause to override it per expression, and B-tree index support.
  Peter Eisentraut
  reviewed by Pavel Stehule, Itagaki Takahiro, Robert Haas, Noah Misch
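  For illustration (not from the commit message; the table and collation names are hypothetical, and the available collations depend on the operating system), the pieces this adds:

      CREATE TABLE people (name text COLLATE "en_US");
      CREATE INDEX people_name_idx ON people (name);       -- B-tree index using the column's collation
      SELECT name FROM people ORDER BY name COLLATE "C";   -- per-expression override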
* Stamp copyrights for year 2011. (Bruce Momjian, 2011-01-01)
* Provide hashing support for arrays. (Tom Lane, 2010-10-30)
  The core of this patch is hash_array() and associated typcache infrastructure, which works just about exactly like the existing support for array comparison.
  In addition I did some work to ensure that the planner won't think that an array type is hashable unless its element type is hashable, and similarly for sorting. This includes adding a datatype parameter to op_hashjoinable and op_mergejoinable, and adding an explicit "hashable" flag to SortGroupClause. The lack of a cross-check on the element type was a pre-existing bug in mergejoin support --- but it didn't matter so much before, because if you couldn't sort the element type there wasn't any good alternative to failing anyhow. Now that we have the alternative of hashing the array type, there are cases where we can avoid a failure by being picky at the planner stage, so it's time to be picky.
  The issue of exactly how to combine the per-element hash values to produce an array hash is still open for discussion, but the rest of this is pretty solid, so I'll commit it as-is.
* Allow WITH clauses to be attached to INSERT, UPDATE, DELETE statements. (Tom Lane, 2010-10-15)
  This is not the hoped-for facility of using INSERT/UPDATE/DELETE inside a WITH, but rather the other way around. It seems useful in its own right anyway.
  Note: catversion bumped because, although the contents of stored rules might look compatible, there's actually a subtle semantic change. A single Query containing a WITH and INSERT...VALUES now represents writing the WITH before the INSERT, not before the VALUES. While it's not clear that that matters to anyone, it seems like a good idea to have it cited in the git history for catversion.h.
  Original patch by Marko Tiikkaja, with updating and cleanup by Hitoshi Harada.
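  A sketch of the statement form this commit allows (illustrative only; events and staging are hypothetical tables, and note that at this point the WITH query itself is still a plain SELECT):

      WITH recent AS (
          SELECT id FROM events WHERE ts > now() - interval '1 day'
      )
      DELETE FROM staging WHERE id IN (SELECT id FROM recent);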
* Behave correctly if INSERT ... VALUES is decorated with additional clauses. (Tom Lane, 2010-10-02)
  In versions 8.2 and up, the grammar allows attaching ORDER BY, LIMIT, FOR UPDATE, or WITH to VALUES, and hence to INSERT ... VALUES. But the special-case code for VALUES in transformInsertStmt() wasn't expecting any of those, and just ignored them, leading to unexpected results. Rather than complicate the special-case path, just ensure that the presence of any of those clauses makes us treat the query as if it had a general SELECT. Per report from Hitoshi Harada.
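  An illustrative example of such a decorated statement (t is a hypothetical table; before this fix the trailing clauses were silently ignored):

      INSERT INTO t (x)
      VALUES (1), (2), (3)
      ORDER BY 1 DESC
      LIMIT 2;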
* Remove cvs keywords from all files. (Magnus Hagander, 2010-09-20)
* Give a suitable HINT when an INSERT's data source is a RowExpr containing the same number of columns expected by the insert. (Tom Lane, 2010-09-18)
  This suggests that there were extra parentheses that converted the intended column list into a row expression.
  Original patch by Marko Tiikkaja, rather heavily editorialized by me.
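  For illustration (not from the commit message; pairs is a hypothetical two-column table), the mistake the new HINT is aimed at:

      INSERT INTO pairs VALUES ((1, 2));   -- extra parentheses turn (1, 2) into a single row expression
      INSERT INTO pairs VALUES (1, 2);     -- the intended two-column insert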
* Small refactoring of makeVar() from a TargetEntry (Peter Eisentraut, 2010-08-27)
* pgindent run for 9.0 (Bruce Momjian, 2010-02-26)
* Tweak the order of processing of WITH clauses so that they are processed before we start analyzing the parent statement. (Tom Lane, 2010-02-12)
  This is to make it more clear that the WITH isn't affected by anything in the parent. I don't believe there's any actual bug here, because the stuff that was being done before WITH didn't affect subqueries; but it's certainly a potential for error (and apparently misled Marko into committing some real errors...).
* Do parse analysis of an EXPLAIN's contained statement during the normal parse analysis phase, rather than at execution time. (Tom Lane, 2010-01-15)
  This makes parameter handling work the same as it does in ordinary plannable queries, and in particular fixes the incompatibility that Pavel pointed out with plpgsql's new handling of variable references. plancache.c gets a little bit grottier, but the alternatives seem worse.
* Update copyright for the year 2010. (Bruce Momjian, 2010-01-02)
* Avoid a premature coercion failure in transformSetOperationTree() when presented with an UNKNOWN-type Var, which can happen in cases where an unknown literal appeared in a subquery. (Tom Lane, 2009-12-16)
  While many such cases will fail later on anyway in the planner, there are some cases where the planner is able to flatten the query and replace the Var by the constant before it has to coerce the union column to the final type. I had added this check in 8.4 to provide earlier/better error detection, but it causes a regression for some cases that worked OK before. Fix by not making the check if the input node is UNKNOWN type and not a Const or Param. If it isn't going to work, it will fail anyway at plan time, with the only real loss being inability to provide an error cursor. Per gripe from Britt Piehler.
  In passing, rename a couple of variables to remove confusion from an inner scope masking the same variable names in an outer scope.
* Support ORDER BY within aggregate function calls, at long last providing a non-kluge method for controlling the order in which values are fed to an aggregate function. (Tom Lane, 2009-12-15)
  At the same time eliminate the old implementation restriction that DISTINCT was only supported for single-argument aggregates.
  Possibly release-notable behavioral change: formerly, agg(DISTINCT x) dropped null values of x unconditionally. Now, it does so only if the agg transition function is strict; otherwise nulls are treated as DISTINCT normally would, ie, you get one copy.
  Andrew Gierth, reviewed by Hitoshi Harada
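  For illustration (not from the commit message; employees, dept, name, and hire_date are hypothetical), an aggregate call with ORDER BY inside the parentheses:

      SELECT dept, array_agg(name ORDER BY hire_date) AS staff_by_seniority
      FROM employees
      GROUP BY dept;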
* Implement parser hooks for processing ColumnRef and ParamRef nodes, as per my recent proposal. (Tom Lane, 2009-10-31)
  As proof of concept, remove knowledge of Params from the core parser, arranging for them to be handled entirely by parser hook functions. It turns out we need an additional hook for that --- I had forgotten about the code that handles inferring a parameter's type from context.
  This is a preliminary step towards letting plpgsql handle its variables through parser hooks. Additional work remains to be done to expose the facility through SPI, but I think this is all the changes needed in the core parser.
* When FOR UPDATE/SHARE is used with LIMIT, put the LockRows plan node underneath the Limit node, not atop it. (Tom Lane, 2009-10-28)
  This fixes the old problem that such a query might unexpectedly return fewer rows than the LIMIT says, due to LockRows discarding updated rows.
  There is a related problem that LockRows might destroy the sort ordering produced by earlier steps; but fixing that by pushing LockRows below Sort would create serious performance problems that are unjustified in many real-world applications, as well as potential deadlock problems from locking many more rows than expected. Instead, keep the present semantics of applying FOR UPDATE after ORDER BY within a single query level; but allow the user to specify the other way by writing FOR UPDATE in a sub-select. To make that work, track whether FOR UPDATE appeared explicitly in sub-selects or got pushed down from the parent, and don't flatten a sub-select that contained an explicit FOR UPDATE.
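  For illustration (not from the commit message; jobs and priority are hypothetical), the two query shapes the message contrasts:

      -- single query level: FOR UPDATE is applied after ORDER BY, and with this fix
      -- the query no longer silently returns fewer rows than the LIMIT asks for
      SELECT * FROM jobs ORDER BY priority LIMIT 1 FOR UPDATE;

      -- explicit FOR UPDATE in a sub-select requests locking before the outer ORDER BY/LIMIT
      SELECT * FROM (SELECT * FROM jobs FOR UPDATE) j ORDER BY priority LIMIT 1;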
* Make FOR UPDATE/SHARE in the primary query not propagate into WITH queries. (Tom Lane, 2009-10-27)
  For example in
      WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE
  the FOR UPDATE will now affect bar but not foo. This is more useful and consistent than the original 8.4 behavior, which tried to propagate FOR UPDATE into the WITH query but always failed due to assorted implementation restrictions. Even though we are in process of removing those restrictions, it seems correct on philosophical grounds to not let the outer query's FOR UPDATE affect the WITH query.
  In passing, fix isLockedRel which frequently got things wrong in nested-subquery cases: "FOR UPDATE OF foo" applies to an alias foo in the current query level, not subqueries. This has been broken for a long time, but it doesn't seem worth back-patching further than 8.4 because the actual consequences are minimal. At worst the parser would sometimes get RowShareLock on a relation when it should be AccessShareLock or vice versa. That would only make a difference if someone were using ExclusiveLock concurrently, which no standard operation does, and anyway FOR UPDATE doesn't result in visible changes so it's not clear that the someone would notice any problem. Between that and the fact that FOR UPDATE barely works with subqueries at all in existing releases, I'm not excited about worrying about it.
* Re-implement EvalPlanQual processing to improve its performance and eliminate a lot of strange behaviors that occurred in join cases. (Tom Lane, 2009-10-26)
  We now identify the "current" row for every joined relation in UPDATE, DELETE, and SELECT FOR UPDATE/SHARE queries. If an EvalPlanQual recheck is necessary, we jam the appropriate row into each scan node in the rechecking plan, forcing it to emit only that one row. The former behavior could rescan the whole of each joined relation for each recheck, which was terrible for performance, and what's much worse could result in duplicated output tuples.
  Also, the original implementation of EvalPlanQual could not re-use the recheck execution tree --- it had to go through a full executor init and shutdown for every row to be tested. To avoid this overhead, I've associated a special runtime Param with each LockRows or ModifyTable plan node, and arranged to make every scan node below such a node depend on that Param. Thus, by signaling a change in that Param, the EPQ machinery can just rescan the already-built test plan.
  This patch also adds a prohibition on set-returning functions in the targetlist of SELECT FOR UPDATE/SHARE. This is needed to avoid the duplicate-output-tuple problem. It seems fairly reasonable since the other restrictions on SELECT FOR UPDATE are meant to ensure that there is a unique correspondence between source tuples and result tuples, which an output SRF destroys as much as anything else does.
* Move the handling of SELECT FOR UPDATE locking and rechecking out of execMain.c and into a new plan node type LockRows. (Tom Lane, 2009-10-12)
  Like the recent change to put table updating into a ModifyTable plan node, this increases planning flexibility by allowing the operations to occur below the top level of the plan tree. It's necessary in any case to restore the previous behavior of having FOR UPDATE locking occur before ModifyTable does.
  This partially refactors EvalPlanQual to allow multiple rows-under-test to be inserted into the EPQ machinery before starting an EPQ test query. That isn't sufficient to fix EPQ's general bogosity in the face of plans that return multiple rows per test row, though. Since this patch is mostly about getting some plan node infrastructure in place and not about fixing ten-year-old bugs, I will leave EPQ improvements for another day.
  Another behavioral change that we could now think about is doing FOR UPDATE before LIMIT, but that too seems like it should be treated as a followon patch.
* Fix bug with WITH RECURSIVE immediately inside WITH RECURSIVE. (Tom Lane, 2009-09-09)
  99% of the code was already okay with this, but the hack that obtained the output column types of a recursive union in advance of doing real parse analysis of the recursive union forgot to handle the case where there was an inner WITH clause available to the non-recursive term. Best fix seems to be to refactor so that we don't need the "throwaway" parse analysis step at all. Instead, teach the transformSetOperationStmt code to set up the CTE's output column information after it's processed the non-recursive term normally. Per report from David Fetter.
* Modify the definition of window-function PARTITION BY and ORDER BY clauses so that their elements are always taken as simple expressions over the query's input columns. (Tom Lane, 2009-08-27)
  It originally seemed like a good idea to make them act exactly like GROUP BY and ORDER BY, right down to the SQL92-era behavior of accepting output column names or numbers. However, that was not such a great idea, for two reasons:
  1. It permits circular references, as exhibited in bug #5018: the output column could be the one containing the window function itself. (We actually had a regression test case illustrating this, but nobody thought twice about how confusing that would be.)
  2. It doesn't seem like a good idea for, eg, "lead(foo) OVER (ORDER BY foo)" to potentially use two completely different meanings for "foo".
  Accordingly, narrow down the behavior of window clauses to use only the SQL99-compliant interpretation that the expressions are simple expressions.
* 8.4 pgindent run, with new combined Linux/FreeBSD/MinGW typedef list provided by Andrew. (Bruce Momjian, 2009-06-11)
* Support column-level privileges, as required by SQL standard. (Tom Lane, 2009-01-22)
  Stephen Frost, with help from KaiGai Kohei and others
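  For illustration (not from the commit message; customers and reporting are hypothetical names), a column-level grant of the kind this adds:

      GRANT SELECT (name, email) ON customers TO reporting;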
* Defend against null input in analyze_requires_snapshot(), per report from Rushabh Lathia. (Tom Lane, 2009-01-08)
* Update copyright for 2009. (Bruce Momjian, 2009-01-01)
* Support window functions a la SQL:2008. (Tom Lane, 2008-12-28)
  Hitoshi Harada, with some kibitzing from Heikki and Tom.
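  For illustration (not from the commit message; employees, dept, and salary are hypothetical), a basic window-function query of the kind this commit introduces:

      SELECT name, dept, salary,
             rank() OVER (PARTITION BY dept ORDER BY salary DESC) AS dept_rank
      FROM employees;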
* Fix failure to ensure that a snapshot is available to datatype input functions when they are invoked by the parser. (Tom Lane, 2008-12-13)
  We had been setting up a snapshot at plan time but really it needs to be done earlier, before parse analysis. Per report from Dmitry Koterov.
  Also fix two related problems discovered while poking at this one: exec_bind_message called datatype input functions without establishing a snapshot, and SET CONSTRAINTS IMMEDIATE could call trigger functions without establishing a snapshot.
  Backpatch to 8.2. The underlying problem goes much further back, but it is masked in 8.1 and before because we didn't attempt to invoke domain check constraints within datatype input. It would only be exposed if a C-language datatype input function used the snapshot; which evidently none do, or we'd have heard complaints sooner. Since this code has changed a lot over time, a back-patch is hardly risk-free, and so I'm disinclined to patch further than absolutely necessary.
* Make SELECT FOR UPDATE/SHARE work on inheritance trees, by having the plan return the tableoid as well as the ctid for any FOR UPDATE targets that have child tables. (Tom Lane, 2008-11-15)
  All child tables are listed in the ExecRowMark list, but the executor just skips the ones that didn't produce the current row.
  Curiously, this longstanding restriction doesn't seem to have been documented anywhere; so no doc changes.
* Improve parser error location for cases where an INSERT or UPDATE command supplies an expression that can't be coerced to the target column type. (Tom Lane, 2008-10-07)
  The code previously attempted to point at the target column name, which doesn't work at all in an INSERT with omitted column name list, and is also not remarkably helpful when the problem is buried somewhere in a long INSERT-multi-VALUES command. Make it point at the failed expression instead.
* Fix GetCTEForRTE() to deal with the possibility that the RTE it's given came from a query level above the current ParseState. (Tom Lane, 2008-10-06)