path: root/src/backend/parser/analyze.c
* Support writable foreign tables. (Tom Lane, 2013-03-10)

  This patch adds the core-system infrastructure needed to support updates on foreign tables, and extends contrib/postgres_fdw to allow updates against remote Postgres servers. There's still a great deal of room for improvement in optimization of remote updates, but at least there's basic functionality there now.

  KaiGai Kohei, reviewed by Alexander Korotkov and Laurenz Albe, and rather heavily revised by Tom Lane.

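  A quick sketch of what the new write support looks like from SQL, assuming a postgres_fdw server named remote_pg has already been defined (server, table, and column names here are hypothetical, not taken from the commit):

      CREATE FOREIGN TABLE remote_accounts (id int, balance numeric)
          SERVER remote_pg OPTIONS (table_name 'accounts');

      -- with this commit, DML on the foreign table is routed to the remote server
      UPDATE remote_accounts SET balance = balance - 100 WHERE id = 42;
      DELETE FROM remote_accounts WHERE balance < 0;
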
* Add materialized view relations. (Kevin Grittner, 2013-03-03)

  A materialized view has a rule just like a view, and a heap and other physical properties like a table. The rule is only used to populate the table; references in queries refer to the materialized data.

  This is a minimal implementation, but should still be useful in many cases. Currently data is only populated "on demand" by the CREATE MATERIALIZED VIEW and REFRESH MATERIALIZED VIEW statements. It is expected that future releases will add incremental updates with various timings, and that a more refined concept of defining what is "fresh" data will be developed. At some point it may even be possible to have queries use a materialized view in place of references to underlying tables, but that requires the other above-mentioned features to be working first.

  Much of the documentation work by Robert Haas. Review by Noah Misch, Thom Brown, Robert Haas, and Marko Tiikkaja. Security review by KaiGai Kohei, with a decision on how best to implement sepgsql still pending.

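  The "on demand" population described above looks like this in practice (table and view names are hypothetical):

      CREATE MATERIALIZED VIEW order_totals AS
          SELECT customer_id, sum(amount) AS total
          FROM orders
          GROUP BY customer_id;

      -- later, re-run the stored query to pick up new data
      REFRESH MATERIALIZED VIEW order_totals;
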
* Improve error message wording. (Alvaro Herrera, 2013-02-06)

  The wording changes applied in 0ac5ad513 were universally disliked. Per gripe from Andrew Dunstan.

* Improve concurrency of foreign key locking. (Alvaro Herrera, 2013-01-23)

  This patch introduces two additional lock modes for tuples: "SELECT FOR KEY SHARE" and "SELECT FOR NO KEY UPDATE". These don't block each other, in contrast with the already existing "SELECT FOR SHARE" and "SELECT FOR UPDATE". UPDATE commands that do not modify the values stored in the columns that are part of the key of the tuple now grab a SELECT FOR NO KEY UPDATE lock on the tuple, allowing them to proceed concurrently with tuple locks of the FOR KEY SHARE variety.

  Foreign key triggers now use FOR KEY SHARE instead of FOR SHARE; this means the concurrency improvement applies to them, which is the whole point of this patch.

  The added tuple lock semantics require some rejiggering of the multixact module, so that the locking level that each transaction is holding can be stored alongside its Xid. Also, multixacts now need to persist across server restarts and crashes, because they can now represent not only tuple locks, but also tuple updates. This means we need more careful tracking of the lifetime of pg_multixact SLRU files; since they now persist longer, we require more infrastructure to figure out when they can be removed. pg_upgrade also needs to be careful to copy pg_multixact files over from the old server to the new, or at least part of multixact.c state, depending on the versions of the old and new servers.

  Tuple time qualification rules (HeapTupleSatisfies routines) need to be careful not to consider tuples with the "is multi" infomask bit set as being only locked; they might need to look up MultiXact values (i.e. possibly do pg_multixact I/O) to find out the Xid that updated a tuple, whereas they previously were assured to only use information readily available from the tuple header. This is considered acceptable, because the extra I/O would involve cases that would previously cause some commands to block waiting for concurrent transactions to finish.

  Another important change is the fact that locking tuples that have previously been updated causes the future versions to be marked as locked, too; this is essential for correctness of foreign key checks. This also causes additional WAL-logging (there was previously a single WAL record for a locked tuple; now there is one for each updated copy of the tuple that exists).

  With all this in place, contention related to tuples being checked by foreign key rules should be much reduced. As a bonus, the old behavior whereby a subtransaction grabbing a stronger tuple lock than the parent (sub)transaction held on a given tuple, and later aborting, caused the weaker lock to be lost has been fixed.

  Many new spec files were added for the isolation tester framework, to ensure overall behavior is sane. There's probably room for several more tests.

  There were several reviewers of this patch; in particular, Noah Misch and Andres Freund spent considerable time on it. The original idea for the patch came from Simon Riggs, after a problem report by Joel Jacobson. Most code is from me, with contributions from Marti Raudsepp, Alexander Shulgin, Noah Misch and Andres Freund.

  This patch was discussed in several pgsql-hackers threads; the most important ones start at the following message-ids:
      AANLkTimo9XVcEzfiBR-ut3KVNDkjm2Vxh+t8kAmWjPuv@mail.gmail.com
      1290721684-sup-3951@alvh.no-ip.org
      1294953201-sup-2099@alvh.no-ip.org
      1320343602-sup-2290@alvh.no-ip.org
      1339690386-sup-8927@alvh.no-ip.org
      4FE5FF020200002500048A3D@gw.wicourts.gov
      4FEAB90A0200002500048B7D@gw.wicourts.gov

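  To make the new lock modes concrete, here is roughly how they come into play (hypothetical table; the comments paraphrase the commit message rather than quote implementation details):

      -- the kind of lock a foreign-key check now takes on the referenced row
      SELECT * FROM parent WHERE id = 1 FOR KEY SHARE;

      -- the kind of lock an UPDATE that doesn't touch key columns now takes;
      -- it does not conflict with FOR KEY SHARE locks held by FK checks
      SELECT * FROM parent WHERE id = 1 FOR NO KEY UPDATE;
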
* Update copyrights for 2013. (Bruce Momjian, 2013-01-01)

  Fully update git head, and update back branches in ./COPYRIGHT and legal.sgml files.

* Check for stack overflow in transformSetOperationTree(). (Tom Lane, 2012-11-11)

  Since transformSetOperationTree() recurses, it can be driven to stack overflow with enough UNION/INTERSECT/EXCEPT clauses in a query. Add a check to ensure it fails cleanly instead of crashing. Per report from Matthew Gerber (though it's not clear whether this is the only thing going wrong for him).

  Historical note: I think the reasoning behind not putting a check here in the beginning was that the check in transformExpr() ought to be sufficient to guard the whole parser. However, because transformSetOperationTree() recurses all the way to the bottom of the set-operation tree before doing any analysis of the statement's expressions, that check doesn't save it.

* Allow OLD and NEW in multi-row VALUES within rules. (Tom Lane, 2012-08-19)

  Now that we have LATERAL, it's fairly painless to allow this case, which was left as a TODO in the original multi-row VALUES implementation.

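  A minimal sketch of the now-permitted case (table and rule names are hypothetical):

      CREATE TABLE t (a int);
      CREATE TABLE audit (kind text, val int);

      -- OLD and NEW inside a multi-row VALUES list in a rule action
      CREATE RULE t_audit AS ON UPDATE TO t
          DO ALSO INSERT INTO audit VALUES ('old', OLD.a), ('new', NEW.a);
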
* Centralize the logic for detecting misplaced aggregates, window funcs, etc. (Tom Lane, 2012-08-10)

  Formerly we relied on checking after-the-fact to see if an expression contained aggregates, window functions, or sub-selects when it shouldn't. This is grotty, easily forgotten (indeed, we had forgotten to teach DefineIndex about rejecting window functions), and none too efficient since it requires extra traversals of the parse tree.

  To improve matters, define an enum type that classifies all SQL sub-expressions, store it in ParseState to show what kind of expression we are currently parsing, and make transformAggregateCall, transformWindowFuncCall, and transformSubLink check the expression type and throw error if the type indicates the construct is disallowed. This allows removal of a large number of ad-hoc checks scattered around the code base. The enum type is sufficiently fine-grained that we can still produce error messages of at least the same specificity as before.

  Bringing these error checks together revealed that we'd been none too consistent about phrasing of the error messages, so standardize the wording a bit.

  Also, rewrite checking of aggregate arguments so that it requires only one traversal of the arguments, rather than up to three as before.

  In passing, clean up some more comments left over from add_missing_from support, and annotate some tests that I think are dead code now that that's gone. (I didn't risk actually removing said dead code, though.)

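  For instance, misplaced-construct errors of this kind are now all caught by the centralized check (hypothetical table; the exact error wording is not quoted from the commit):

      -- aggregate in WHERE: rejected during parse analysis
      SELECT dept FROM emp WHERE sum(salary) > 10000 GROUP BY dept;

      -- window function in WHERE: likewise rejected
      SELECT * FROM emp WHERE row_number() OVER (ORDER BY salary) <= 3;
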
* Merge parser's p_relnamespace and p_varnamespace lists into a single list. (Tom Lane, 2012-08-08)

  Now that we are storing structs in these lists, the distinction between the two lists can be represented with a couple of extra flags while using only a single list. This simplifies the code and should save a little bit of palloc traffic, since the majority of RTEs are represented in both lists anyway.

* Implement SQL-standard LATERAL subqueries. (Tom Lane, 2012-08-07)

  This patch implements the standard syntax of LATERAL attached to a sub-SELECT in FROM, and also allows LATERAL attached to a function in FROM, since set-returning function calls are expected to be one of the principal use-cases.

  The main change here is a rewrite of the mechanism for keeping track of which relations are visible for column references while the FROM clause is being scanned. The parser "namespace" lists are no longer lists of bare RTEs, but are lists of ParseNamespaceItem structs, which carry an RTE pointer as well as some visibility-controlling flags. Aside from supporting LATERAL correctly, this lets us get rid of the ancient hacks that required rechecking subqueries and JOIN/ON and function-in-FROM expressions for invalid references after they were initially parsed. Invalid column references are now always correctly detected on sight.

  In passing, remove assorted parser error checks that are now dead code by virtue of our having gotten rid of add_missing_from, as well as some comments that are obsolete for the same reason. (It was mainly add_missing_from that caused so much fudging here in the first place.)

  The planner support for this feature is very minimal, and will be improved in future patches. It works well enough for testing purposes, though.

  catversion bump forced due to new field in RangeTblEntry.

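  Two typical uses of the new syntax (table and column names are made up for illustration):

      -- LATERAL sub-SELECT that refers to the preceding FROM item
      SELECT p.id, best.price
      FROM products p,
           LATERAL (SELECT price FROM offers o
                    WHERE o.product_id = p.id
                    ORDER BY price LIMIT 1) AS best;

      -- LATERAL attached to a set-returning function in FROM
      SELECT t.id, s.n
      FROM things t, LATERAL generate_series(1, t.cnt) AS s(n);
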
* Fix WITH attached to a nested set operation (UNION/INTERSECT/EXCEPT). (Tom Lane, 2012-07-31)

  Parse analysis neglected to cover the case of a WITH clause attached to an intermediate-level set operation; it only handled WITH at the top level or WITH attached to a leaf-level SELECT. Per report from Adam Mackler.

  In HEAD, I rearranged the order of SelectStmt's fields to put withClause with the other fields that can appear on non-leaf SelectStmts. In back branches, leave it alone to avoid a possible ABI break for third-party code.

  Back-patch to 8.4 where WITH support was added.

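  Roughly the shape of query that previously fell through the cracks, namely a WITH attached to a set operation that is itself nested inside a larger set operation (illustrative only, not taken from the bug report):

      SELECT 1 AS x
      UNION
      (WITH w AS (SELECT 2 AS x)
       SELECT x FROM w
       UNION
       SELECT 3);
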
* Run pgindent on 9.2 source tree in preparation for first 9.3 commit-fest. (Bruce Momjian, 2012-06-10)

* Add some infrastructure for contrib/pg_stat_statements. (Tom Lane, 2012-03-27)

  Add a queryId field to Query and PlannedStmt. This is not used by the core backend, except for being copied around at appropriate times. It's meant to allow plug-ins to track a particular query forward from parse analysis to execution.

  The queryId is intentionally not dumped into stored rules (and hence this commit doesn't bump catversion). You could argue that choice either way, but it seems better that stored rule strings not have any dependency on plug-ins that might or might not be present.

  Also, add a post_parse_analyze_hook that gets invoked at the end of parse analysis (but only for top-level analysis of complete queries, not cases such as analyzing a domain's default-value expression). This is mainly meant to be used to compute and assign a queryId, but it could have other applications.

  Peter Geoghegan

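  For context, the contrib module this infrastructure serves is used roughly like this (the extension must also be listed in shared_preload_libraries; this usage note is not part of the commit):

      CREATE EXTENSION pg_stat_statements;

      SELECT calls, query
      FROM pg_stat_statements
      ORDER BY calls DESC
      LIMIT 5;
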
* Clean up compiler warnings from unused variables with asserts disabled. (Peter Eisentraut, 2012-03-21)

  For those variables only used when asserts are enabled, use a new macro PG_USED_FOR_ASSERTS_ONLY, which expands to __attribute__((unused)) when asserts are not enabled.

* Restructure SELECT INTO's parsetree representation into CreateTableAsStmt. (Tom Lane, 2012-03-19)

  Making this operation look like a utility statement seems generally a good idea, and particularly so in light of the desire to provide command triggers for utility statements. The original choice of representing it as SELECT with an IntoClause appendage had metastasized into rather a lot of places, unfortunately, so that this patch is a great deal more complicated than one might at first expect.

  In particular, keeping EXPLAIN working for SELECT INTO and CREATE TABLE AS subcommands required restructuring some EXPLAIN-related APIs. Add-on code that calls ExplainOnePlan or ExplainOneUtility, or uses ExplainOneQuery_hook, will need adjustment.

  Also, the cases PREPARE ... SELECT INTO and CREATE RULE ... SELECT INTO, which formerly were accepted though undocumented, are no longer accepted. The PREPARE case can be replaced with use of CREATE TABLE AS EXECUTE. The CREATE RULE case doesn't seem to have much real-world use (since the rule would work only once before failing with "table already exists"), so we'll not bother with that one.

  Both SELECT INTO and CREATE TABLE AS still return a command tag of "SELECT nnnn". There was some discussion of returning "CREATE TABLE nnnn", but for the moment backwards compatibility wins the day.

  Andres Freund and Tom Lane

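  The recommended replacement for the now-rejected PREPARE ... SELECT INTO case looks like this (statement and table names are hypothetical):

      PREPARE recent_orders(date) AS
          SELECT * FROM orders WHERE ordered_on >= $1;

      -- instead of SELECT INTO inside the prepared statement:
      CREATE TABLE recent AS EXECUTE recent_orders('2012-01-01');
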
* Check misplaced window functions before checking aggregate/group by sanity. (Tom Lane, 2012-02-08)

  If somebody puts a window function in WHERE, we should complain about that in so many words. The previous coding tended to complain about the window function's arguments instead, which is likely to be misleading to users who are unclear on the semantics of window functions; as seen for example in bug #6440 from Matyas Novak.

  Just another example of how "add new code at the end" is frequently a bad heuristic.

* Update copyright notices for year 2012. (Bruce Momjian, 2012-01-01)

* Fix unsupported options in CREATE TABLE ... AS EXECUTE. (Tom Lane, 2011-11-24)

  The WITH [NO] DATA option was not supported, nor the ability to specify replacement column names; the former limitation wasn't even documented, as per recent complaint from Naoya Anzai. Fix by moving the responsibility for supporting these options into the executor. It actually takes less code this way ...

  catversion bump due to change in representation of IntoClause, which might affect stored rules.

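  The two options in question, sketched with a hypothetical prepared statement:

      PREPARE q AS SELECT 1, 2;

      -- replacement column names, and creation of the table without running the query
      CREATE TABLE t2 (a, b) AS EXECUTE q WITH NO DATA;
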
* Pgindent run before 9.1 beta2. (Bruce Momjian, 2011-06-09)

* Expose the "*VALUES*" alias that we generate for a stand-alone VALUES list. (Tom Lane, 2011-06-04)

  We were trying to make that strictly an internal implementation detail, but it turns out that it's exposed anyway when dumping a view defined like

      CREATE VIEW test_view AS VALUES (1), (2), (3) ORDER BY 1;

  This comes out as

      CREATE VIEW ... ORDER BY "*VALUES*".column1;

  which fails to parse when reloading the dump.

  Hacking ruleutils.c to suppress the column qualification looks like it'd be a risky business, so instead promote the RTE alias to full-fledged usability.

  Per bug #6049 from Dylan Adams. Back-patch to all supported branches.

* Make a code-cleanup pass over the collations patch. (Tom Lane, 2011-04-22)

  This patch is almost entirely cosmetic --- mostly cleaning up a lot of neglected comments, and fixing code layout problems in places where the patch made lines too long and then pgindent did weird things with that. I did find a bug-of-omission in equalTupleDescs().

* Fix handling of collations in multi-row VALUES constructs. (Tom Lane, 2011-04-18)

  Per spec we ought to apply select_common_collation() across the expressions in each column of the VALUES table. The original coding was just taking the first row and assuming it was representative.

  This patch adds a field to struct RangeTblEntry to carry the resolved collations, so initdb is forced for changes in stored rule representation.

* pgindent run before PG 9.1 beta 1. (Bruce Momjian, 2011-04-10)

* Revise collation derivation method and expression-tree representation. (Tom Lane, 2011-03-19)

  All expression nodes now have an explicit output-collation field, unless they are known to only return a noncollatable data type (such as boolean or record). Also, nodes that can invoke collation-aware functions store a separate field that is the collation value to pass to the function. This avoids confusion that arises when a function has collatable inputs and noncollatable output type, or vice versa.

  Also, replace the parser's on-the-fly collation assignment method with a post-pass over the completed expression tree. This allows us to use a more complex (and hopefully more nearly spec-compliant) assignment rule without paying for it in extra storage in every expression node.

  Fix assorted bugs in the planner's handling of collations by making collation one of the defining properties of an EquivalenceClass and by converting CollateExprs into discardable RelabelType nodes during expression preprocessing.

* Improve handling of unknown-type literals in UNION/INTERSECT/EXCEPT. (Tom Lane, 2011-03-15)

  This patch causes unknown-type Consts to be coerced to the resolved output type of the set operation at parse time. Formerly such Consts were left alone until late in the planning stage. The disadvantage of that approach is that it disables some optimizations, because the planner sees the set-op leaf query as having different output column types than the overall set-op. We saw an example of that in a recent performance gripe from Claudio Freire.

  Fixing such a Const requires scribbling on the leaf query in transformSetOperationTree, but that should be all right since if the leaf query's semantics depended on that output column, it would already have resolved the unknown to something else.

  Most of the bulk of this patch is a simple adjustment of transformSetOperationTree's API so that upper levels can get at the TargetEntry containing a Const to be replaced: it now returns a list of TargetEntries, instead of just the bare expressions.

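  The kind of query affected, for illustration (literal values made up):

      -- the string literals start out with unknown type; with this patch they are
      -- coerced to the set operation's resolved output type during parse analysis
      SELECT 'foo' UNION ALL SELECT 'bar';
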
* Support data-modifying commands (INSERT/UPDATE/DELETE) in WITH. (Tom Lane, 2011-02-25)

  This patch implements data-modifying WITH queries according to the semantics that the updates all happen with the same command counter value, and in an unspecified order. Therefore one WITH clause can't see the effects of another, nor can the outer query see the effects other than through the RETURNING values. And attempts to do conflicting updates will have unpredictable results. We'll need to document all that.

  This commit just fixes the code; documentation updates are waiting on author.

  Marko Tiikkaja and Hitoshi Harada

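  The canonical use case is moving rows between tables in a single statement (table names are hypothetical):

      WITH moved AS (
          DELETE FROM events WHERE created < '2010-01-01' RETURNING *
      )
      INSERT INTO events_archive SELECT * FROM moved;
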
* Add a relkind field to RangeTblEntry to avoid some syscache lookups. (Tom Lane, 2011-02-22)

  The recent additions for FDW support required checking foreign-table-ness in several places in the parse/plan chain. While it's not clear whether that would really result in a noticeable slowdown, it seems best to avoid any performance risk by keeping a copy of the relation's relkind in RangeTblEntry. That might have some other uses later, anyway.

  Per discussion.

* Implement an API to let foreign-data wrappers actually be functional. (Tom Lane, 2011-02-20)

  This commit provides the core code and documentation needed. A contrib module test case will follow shortly.

  Shigeru Hanada, Jan Urbanski, Heikki Linnakangas

* Per-column collation support. (Peter Eisentraut, 2011-02-08)

  This adds collation support for columns and domains, a COLLATE clause to override it per expression, and B-tree index support.

  Peter Eisentraut, reviewed by Pavel Stehule, Itagaki Takahiro, Robert Haas, Noah Misch

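  In SQL terms, the feature looks like this (the collation names depend on what locales the operating system provides, so treat them as placeholders):

      CREATE TABLE people (
          name  text COLLATE "en_US",
          alias text
      );

      -- per-expression override with the new COLLATE clause
      SELECT name FROM people ORDER BY name COLLATE "C";
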
* Stamp copyrights for year 2011. (Bruce Momjian, 2011-01-01)

* Provide hashing support for arrays. (Tom Lane, 2010-10-30)

  The core of this patch is hash_array() and associated typcache infrastructure, which works just about exactly like the existing support for array comparison.

  In addition I did some work to ensure that the planner won't think that an array type is hashable unless its element type is hashable, and similarly for sorting. This includes adding a datatype parameter to op_hashjoinable and op_mergejoinable, and adding an explicit "hashable" flag to SortGroupClause. The lack of a cross-check on the element type was a pre-existing bug in mergejoin support --- but it didn't matter so much before, because if you couldn't sort the element type there wasn't any good alternative to failing anyhow. Now that we have the alternative of hashing the array type, there are cases where we can avoid a failure by being picky at the planner stage, so it's time to be picky.

  The issue of exactly how to combine the per-element hash values to produce an array hash is still open for discussion, but the rest of this is pretty solid, so I'll commit it as-is.

* Allow WITH clauses to be attached to INSERT, UPDATE, DELETE statements. (Tom Lane, 2010-10-15)

  This is not the hoped-for facility of using INSERT/UPDATE/DELETE inside a WITH, but rather the other way around. It seems useful in its own right anyway.

  Note: catversion bumped because, although the contents of stored rules might look compatible, there's actually a subtle semantic change. A single Query containing a WITH and INSERT...VALUES now represents writing the WITH before the INSERT, not before the VALUES. While it's not clear that that matters to anyone, it seems like a good idea to have it cited in the git history for catversion.h.

  Original patch by Marko Tiikkaja, with updating and cleanup by Hitoshi Harada.

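  That is, a read-only WITH can now precede a DML statement (hypothetical tables; the reverse direction, DML inside WITH, arrived later in the 2011-02-25 commit above):

      WITH recent AS (
          SELECT id FROM orders WHERE ordered_on >= '2010-10-01'
      )
      UPDATE order_flags SET flagged = true
      WHERE order_id IN (SELECT id FROM recent);
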
* Behave correctly if INSERT ... VALUES is decorated with additional clauses. (Tom Lane, 2010-10-02)

  In versions 8.2 and up, the grammar allows attaching ORDER BY, LIMIT, FOR UPDATE, or WITH to VALUES, and hence to INSERT ... VALUES. But the special-case code for VALUES in transformInsertStmt() wasn't expecting any of those, and just ignored them, leading to unexpected results. Rather than complicate the special-case path, just ensure that the presence of any of those clauses makes us treat the query as if it had a general SELECT.

  Per report from Hitoshi Harada.

* Remove cvs keywords from all files. (Magnus Hagander, 2010-09-20)

* Give a suitable HINT when an INSERT's data source is a RowExpr containing the same number of columns expected by the insert. (Tom Lane, 2010-09-18)

  This suggests that there were extra parentheses that converted the intended column list into a row expression.

  Original patch by Marko Tiikkaja, rather heavily editorialized by me.

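  The situation the HINT is aimed at (hypothetical table):

      CREATE TABLE t (a int, b int, c int);

      -- the extra parentheses make this a single composite value, not three columns;
      -- the INSERT fails, and the new HINT points at the likely cause
      INSERT INTO t VALUES ((1, 2, 3));

      -- intended form
      INSERT INTO t VALUES (1, 2, 3);
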
* Small refactoring of makeVar() from a TargetEntry. (Peter Eisentraut, 2010-08-27)

* pgindent run for 9.0. (Bruce Momjian, 2010-02-26)

* Tweak the order of processing of WITH clauses so that they are processed before we start analyzing the parent statement. (Tom Lane, 2010-02-12)

  This is to make it more clear that the WITH isn't affected by anything in the parent. I don't believe there's any actual bug here, because the stuff that was being done before WITH didn't affect subqueries; but it's certainly a potential for error (and apparently misled Marko into committing some real errors...).

* Do parse analysis of an EXPLAIN's contained statement during the normal parse analysis phase, rather than at execution time. (Tom Lane, 2010-01-15)

  This makes parameter handling work the same as it does in ordinary plannable queries, and in particular fixes the incompatibility that Pavel pointed out with plpgsql's new handling of variable references. plancache.c gets a little bit grottier, but the alternatives seem worse.

* Update copyright for the year 2010. (Bruce Momjian, 2010-01-02)

* Avoid a premature coercion failure in transformSetOperationTree() when presented with an UNKNOWN-type Var, which can happen in cases where an unknown literal appeared in a subquery. (Tom Lane, 2009-12-16)

  While many such cases will fail later on anyway in the planner, there are some cases where the planner is able to flatten the query and replace the Var by the constant before it has to coerce the union column to the final type. I had added this check in 8.4 to provide earlier/better error detection, but it causes a regression for some cases that worked OK before. Fix by not making the check if the input node is UNKNOWN type and not a Const or Param. If it isn't going to work, it will fail anyway at plan time, with the only real loss being inability to provide an error cursor.

  Per gripe from Britt Piehler.

  In passing, rename a couple of variables to remove confusion from an inner scope masking the same variable names in an outer scope.

* Support ORDER BY within aggregate function calls, at long last providing a non-kluge method for controlling the order in which values are fed to an aggregate function. (Tom Lane, 2009-12-15)

  At the same time, eliminate the old implementation restriction that DISTINCT was only supported for single-argument aggregates.

  Possibly release-notable behavioral change: formerly, agg(DISTINCT x) dropped null values of x unconditionally. Now, it does so only if the agg transition function is strict; otherwise nulls are treated as DISTINCT normally would, ie, you get one copy.

  Andrew Gierth, reviewed by Hitoshi Harada

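  The new syntax in a nutshell (hypothetical table and columns):

      SELECT dept,
             array_agg(name ORDER BY hired_on)        AS names_in_hire_order,
             array_agg(DISTINCT title ORDER BY title) AS distinct_titles
      FROM emp
      GROUP BY dept;
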
* Implement parser hooks for processing ColumnRef and ParamRef nodes, as per my recent proposal. (Tom Lane, 2009-10-31)

  As proof of concept, remove knowledge of Params from the core parser, arranging for them to be handled entirely by parser hook functions. It turns out we need an additional hook for that --- I had forgotten about the code that handles inferring a parameter's type from context.

  This is a preliminary step towards letting plpgsql handle its variables through parser hooks. Additional work remains to be done to expose the facility through SPI, but I think this is all the changes needed in the core parser.

* When FOR UPDATE/SHARE is used with LIMIT, put the LockRows plan node underneath the Limit node, not atop it. (Tom Lane, 2009-10-28)

  This fixes the old problem that such a query might unexpectedly return fewer rows than the LIMIT says, due to LockRows discarding updated rows.

  There is a related problem that LockRows might destroy the sort ordering produced by earlier steps; but fixing that by pushing LockRows below Sort would create serious performance problems that are unjustified in many real-world applications, as well as potential deadlock problems from locking many more rows than expected.

  Instead, keep the present semantics of applying FOR UPDATE after ORDER BY within a single query level; but allow the user to specify the other way by writing FOR UPDATE in a sub-select. To make that work, track whether FOR UPDATE appeared explicitly in sub-selects or got pushed down from the parent, and don't flatten a sub-select that contained an explicit FOR UPDATE.

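  The two orderings described above, in query form (hypothetical table; the comments paraphrase the commit message):

      -- FOR UPDATE applied after ORDER BY at this query level; with this commit
      -- the query no longer returns fewer rows than the LIMIT asks for
      SELECT * FROM jobs ORDER BY priority LIMIT 10 FOR UPDATE;

      -- to lock before sorting and limiting, write FOR UPDATE in a sub-select
      SELECT * FROM (SELECT * FROM jobs FOR UPDATE) j
      ORDER BY priority LIMIT 10;
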
* Make FOR UPDATE/SHARE in the primary query not propagate into WITH queries. (Tom Lane, 2009-10-27)

  For example, in

      WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE

  the FOR UPDATE will now affect bar but not foo. This is more useful and consistent than the original 8.4 behavior, which tried to propagate FOR UPDATE into the WITH query but always failed due to assorted implementation restrictions. Even though we are in process of removing those restrictions, it seems correct on philosophical grounds to not let the outer query's FOR UPDATE affect the WITH query.

  In passing, fix isLockedRel, which frequently got things wrong in nested-subquery cases: "FOR UPDATE OF foo" applies to an alias foo in the current query level, not subqueries. This has been broken for a long time, but it doesn't seem worth back-patching further than 8.4 because the actual consequences are minimal. At worst the parser would sometimes get RowShareLock on a relation when it should be AccessShareLock or vice versa. That would only make a difference if someone were using ExclusiveLock concurrently, which no standard operation does, and anyway FOR UPDATE doesn't result in visible changes so it's not clear that the someone would notice any problem.

  Between that and the fact that FOR UPDATE barely works with subqueries at all in existing releases, I'm not excited about worrying about it.

* Re-implement EvalPlanQual processing to improve its performance and eliminate a lot of strange behaviors that occurred in join cases. (Tom Lane, 2009-10-26)

  We now identify the "current" row for every joined relation in UPDATE, DELETE, and SELECT FOR UPDATE/SHARE queries. If an EvalPlanQual recheck is necessary, we jam the appropriate row into each scan node in the rechecking plan, forcing it to emit only that one row. The former behavior could rescan the whole of each joined relation for each recheck, which was terrible for performance, and what's much worse could result in duplicated output tuples.

  Also, the original implementation of EvalPlanQual could not re-use the recheck execution tree --- it had to go through a full executor init and shutdown for every row to be tested. To avoid this overhead, I've associated a special runtime Param with each LockRows or ModifyTable plan node, and arranged to make every scan node below such a node depend on that Param. Thus, by signaling a change in that Param, the EPQ machinery can just rescan the already-built test plan.

  This patch also adds a prohibition on set-returning functions in the targetlist of SELECT FOR UPDATE/SHARE. This is needed to avoid the duplicate-output-tuple problem. It seems fairly reasonable since the other restrictions on SELECT FOR UPDATE are meant to ensure that there is a unique correspondence between source tuples and result tuples, which an output SRF destroys as much as anything else does.

* Move the handling of SELECT FOR UPDATE locking and rechecking out of execMain.c and into a new plan node type LockRows. (Tom Lane, 2009-10-12)

  Like the recent change to put table updating into a ModifyTable plan node, this increases planning flexibility by allowing the operations to occur below the top level of the plan tree. It's necessary in any case to restore the previous behavior of having FOR UPDATE locking occur before ModifyTable does.

  This partially refactors EvalPlanQual to allow multiple rows-under-test to be inserted into the EPQ machinery before starting an EPQ test query. That isn't sufficient to fix EPQ's general bogosity in the face of plans that return multiple rows per test row, though. Since this patch is mostly about getting some plan node infrastructure in place and not about fixing ten-year-old bugs, I will leave EPQ improvements for another day.

  Another behavioral change that we could now think about is doing FOR UPDATE before LIMIT, but that too seems like it should be treated as a followon patch.

* Fix bug with WITH RECURSIVE immediately inside WITH RECURSIVE. (Tom Lane, 2009-09-09)

  99% of the code was already okay with this, but the hack that obtained the output column types of a recursive union in advance of doing real parse analysis of the recursive union forgot to handle the case where there was an inner WITH clause available to the non-recursive term.

  Best fix seems to be to refactor so that we don't need the "throwaway" parse analysis step at all. Instead, teach the transformSetOperationStmt code to set up the CTE's output column information after it's processed the non-recursive term normally.

  Per report from David Fetter.

* Modify the definition of window-function PARTITION BY and ORDER BY clauses so that their elements are always taken as simple expressions over the query's input columns. (Tom Lane, 2009-08-27)

  It originally seemed like a good idea to make them act exactly like GROUP BY and ORDER BY, right down to the SQL92-era behavior of accepting output column names or numbers. However, that was not such a great idea, for two reasons:

  1. It permits circular references, as exhibited in bug #5018: the output column could be the one containing the window function itself. (We actually had a regression test case illustrating this, but nobody thought twice about how confusing that would be.)

  2. It doesn't seem like a good idea for, eg, "lead(foo) OVER (ORDER BY foo)" to potentially use two completely different meanings for "foo".

  Accordingly, narrow down the behavior of window clauses to use only the SQL99-compliant interpretation that the expressions are simple expressions.

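  So, in a query like the following (hypothetical table), both mentions of foo now refer to the input column foo, even though an output column carries the same name:

      SELECT lead(foo) OVER (ORDER BY foo) AS foo
      FROM tbl;
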
* 8.4 pgindent run, with new combined Linux/FreeBSD/MinGW typedef list provided by Andrew. (Bruce Momjian, 2009-06-11)