path: root/src/backend/parser
Commit message (Author, Date)
...
* Fix several bugs related to ON CONFLICT's EXCLUDED pseudo relation. (Andres Freund, 2015-10-03)

  Four related issues:

  1) attnos/varnos/resnos for EXCLUDED were out of sync when a column after one dropped in the underlying relation was referenced.
  2) References to whole-row variables (i.e. EXCLUDED.*) led to errors.
  3) It was possible to reference system columns in the EXCLUDED pseudo relations, even though they would not have valid contents.
  4) References to EXCLUDED were rewritten by the RLS machinery, as EXCLUDED was treated as if it were the underlying relation.

  To fix the first two issues, generate the excluded targetlist with dropped columns in mind and add an entry for whole-row variables. Instead of unconditionally adding a wholerow entry we could pull up the expression if needed, but doing it unconditionally seems simpler. The wholerow entry is only really needed for ruleutils/EXPLAIN support anyway.

  The remaining two issues are addressed by changing the EXCLUDED RTE to have relkind = composite. That fits with EXCLUDED not actually being a real relation, and allows treating it differently in the relevant places. scanRTEForColumn now skips looking up system columns when the RTE has a composite relkind; fireRIRrules() already had a corresponding check, thereby preventing RLS expansion on EXCLUDED.

  Also add tests for these issues, and improve a few comments around excluded handling in setrefs.c.

  Reported-By: Peter Geoghegan, Geoff Winkless
  Author: Andres Freund, Amit Langote, Peter Geoghegan
  Discussion: CAEzk6fdzJ3xYQZGbcuYM2rBd2BuDkUksmK=mY9UYYDugg_GgZg@mail.gmail.com, CAM3SWZS+CauzbiCEcg-GdE6K6ycHE_Bz6Ksszy8AoixcMHOmsA@mail.gmail.com
  Backpatch: 9.5, where ON CONFLICT was introduced
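  For illustration, a minimal sketch of the affected construct, using hypothetical table and column names: an ON CONFLICT DO UPDATE clause that reads the proposed row through the EXCLUDED pseudo relation (whole-row references such as EXCLUDED.* are among the cases fixed above).

      -- Hypothetical upsert referencing EXCLUDED
      CREATE TABLE kv (k int PRIMARY KEY, v text);

      INSERT INTO kv (k, v) VALUES (1, 'new')
      ON CONFLICT (k) DO UPDATE
        SET v = EXCLUDED.v                        -- value proposed for insertion
        WHERE kv.v IS DISTINCT FROM EXCLUDED.v;   -- existing row vs. proposed row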
* Determine whether it's safe to attempt a parallel plan for a query. (Robert Haas, 2015-09-16)

  Commit 924bcf4f16d54c55310b28f77686608684734f42 introduced a framework for parallel computation in PostgreSQL that makes most but not all built-in functions safe to execute in parallel mode. In order to have parallel query, we'll need to be able to determine whether that query contains functions (either built-in or user-defined) that cannot be safely executed in parallel mode. This requires those functions to be labeled, so this patch introduces an infrastructure for that. Some functions currently labeled as safe may need to be revised depending on how pending issues related to heavyweight locking under parallelism are resolved.

  Parallel plans can't be used except for the case where the query will run to completion. If portal execution were suspended, the parallel mode restrictions would need to remain in effect during that time, but that might make other queries fail. Therefore, this patch introduces a framework that enables consideration of parallel plans only when it is known that the plan will be run to completion. This probably needs some refinement; for example, at bind time, we do not know whether a query run via the extended protocol will be run to completion or run with a limited fetch count. Having the client indicate its intentions at bind time would constitute a wire protocol break.

  Some contexts in which parallel mode would be safe are not adjusted by this patch; the default is not to try parallel plans except from call sites that have been updated to say that such plans are OK. This commit doesn't introduce any parallel paths or plans; it just provides a way to determine whether they could potentially be used. I'm committing it on the theory that the remaining parallel sequential scan patches will also get committed to this release, hopefully in the not-too-distant future.

  Robert Haas and Amit Kapila. Reviewed (in earlier versions) by Noah Misch.
* Rename 'cmd' to 'cmd_name' in CreatePolicyStmt (Stephen Frost, 2015-08-21)

  To avoid confusion, rename CreatePolicyStmt's 'cmd' to 'cmd_name', parse_policy_command's 'cmd' to 'polcmd', and AlterPolicy's 'cmd_datum' to 'polcmd_datum', per discussion with Noah and as a follow-up to his correction of copynodes/equalnodes handling of the CreatePolicyStmt 'cmd' field.

  Back-patch to 9.5 where the CreatePolicyStmt was introduced, as we are still only in alpha.
* Remove gram.y's precedence declaration for OVERLAPS. (Tom Lane, 2015-08-09)

  The allowed syntax for OVERLAPS, viz "row OVERLAPS row", is sufficiently constrained that we don't actually need a precedence declaration for OVERLAPS; indeed removing this declaration does not change the generated gram.c file at all. Let's remove it to avoid confusion about whether OVERLAPS has precedence or not. If we ever generalize what we allow for OVERLAPS, we might need to put back a precedence declaration for it, but we might want some other level than what it has today --- and leaving the declaration there would just risk confusion about whether that would be an incompatible change.

  Likewise, remove OVERLAPS from the documentation's precedence table.

  Per discussion with Noah Misch. Back-patch to 9.5 where we hacked up some nearby precedence decisions.
* Share transition state between different aggregates when possible. (Heikki Linnakangas, 2015-08-04)

  If there are two different aggregates in the query with the same inputs, and the aggregates have the same initial condition and transition function, calculate the state value only once, and call only the final functions separately. For example, the AVG(x) and SUM(x) aggregates have the same transition function, which accumulates the sum and number of input tuples. For a query like "SELECT AVG(x), SUM(x) FROM x", we can therefore compute that shared transition state only once, which gives a nice speedup.

  David Rowley, reviewed and edited by me.
* Add IF NOT EXISTS processing to ALTER TABLE ADD COLUMN (Andrew Dunstan, 2015-07-29)

  Fabrízio de Royes Mello, reviewed by Payal Singh, Alvaro Herrera and Michael Paquier.
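  For illustration, a minimal sketch of the added syntax, using hypothetical table and column names:

      -- The second statement is skipped instead of failing,
      -- because the column already exists.
      ALTER TABLE items ADD COLUMN IF NOT EXISTS note text;
      ALTER TABLE items ADD COLUMN IF NOT EXISTS note text;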
* Create new ParseExprKind for use by policy expressions. (Joe Conway, 2015-07-29)

  Policy USING and WITH CHECK expressions were using EXPR_KIND_WHERE for parse analysis, which results in inappropriate ERROR messages when the expression contains unsupported constructs such as aggregates. Create a new ParseExprKind called EXPR_KIND_POLICY and tailor the related messages to fit.

  Reported by Noah Misch. Reviewed by Dean Rasheed, Alvaro Herrera, and Robert Haas. Back-patch to 9.5 where RLS was introduced.
* Fix flattening of nested grouping sets. (Andres Freund, 2015-07-26)

  Previously, nested grouping set specifications accidentally weren't flattened, but instead contained the nested specification as an element in the outer list. Fix this by, as actually documented in comments, concatenating the nested set specification into the outer one. Also add tests to prevent this from breaking again.

  Author: Andrew Gierth, with tests from Jeevan Chalke
  Reported-By: Jeevan Chalke
  Discussion: CAM2+6=V5YvuxB+EyN4iH=GbD-XTA435TCNvnDFSD--YvXs+pww@mail.gmail.com
  Backpatch: 9.5, where grouping sets were introduced
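  A rough sketch of the kind of specification affected, using hypothetical column names; after the fix the inner list is concatenated into the outer one during parse analysis:

      -- Equivalent, post-fix, to GROUP BY GROUPING SETS ((a), (b), (c))
      SELECT a, b, c, count(*)
      FROM tbl
      GROUP BY GROUPING SETS (a, GROUPING SETS (b, c));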
* Redesign tablesample method API, and do extensive code review. (Tom Lane, 2015-07-25)

  The original implementation of TABLESAMPLE modeled the tablesample method API on index access methods, which wasn't a good choice because, without specialized DDL commands, there's no way to build an extension that can implement a TSM. (Raw inserts into system catalogs are not an acceptable thing to do, because we can't undo them during DROP EXTENSION, nor will pg_upgrade behave sanely.) Instead adopt an API more like procedural language handlers or foreign data wrappers, wherein the only SQL-level support object needed is a single handler function identified by having a special return type. This lets us get rid of the supporting catalog altogether, so that no custom DDL support is needed for the feature.

  Adjust the API so that it can support non-constant tablesample arguments (the original coding assumed we could evaluate the argument expressions at ExecInitSampleScan time, which is undesirable even if it weren't outright unsafe), and discourage sampling methods from looking at invisible tuples. Make sure that the BERNOULLI and SYSTEM methods are genuinely repeatable within and across queries, as required by the SQL standard, and deal more honestly with methods that can't support that requirement.

  Make a full code-review pass over the tablesample additions, and fix assorted bugs, omissions, infelicities, and cosmetic issues (such as failure to put the added code stanzas in a consistent ordering). Improve EXPLAIN's output of tablesample plans, too.

  Back-patch to 9.5 so that we don't have to support the original API in production.
* Fix bug around assignment expressions containing indirections. (Andres Freund, 2015-07-24)

  Handling of assigned-to expressions with indirection (e.g. set f1[1] = 3) was broken for ON CONFLICT DO UPDATE. The problem was that ParseState was consulted to determine if an INSERT-appropriate or UPDATE-appropriate behavior should be used when transforming expressions with indirections. When the wrong path was taken the old row was substituted with NULL, leading to wrong results.

  To fix, remove p_is_update and only use p_is_insert to decide how to transform the assignment expression, and set p_is_insert while parsing the ON CONFLICT statement. This isn't particularly pretty, but it's not any worse than before.

  Author: Peter Geoghegan, slightly edited by me
  Discussion: CAM3SWZS8RPvA=KFxADZWw3wAHnnbxMxDzkEC6fNaFc7zSm411w@mail.gmail.com
  Backpatch: 9.5, where the feature was introduced
* Add ALTER OPERATOR command, for changing selectivity estimator functions. (Heikki Linnakangas, 2015-07-14)

  Other options cannot be changed, as it's not totally clear whether cached plans would need to be invalidated if one of the other options changes. Selectivity estimator functions only change plan costs, not correctness of plans, so those should be safe.

  Original patch by Uriy Zhuravlev, heavily edited by me.
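  A hedged sketch of the new command, assuming a user-defined operator === on integers and the built-in eqsel/eqjoinsel estimators:

      -- Attach selectivity estimators to an existing operator;
      -- other operator properties cannot be changed.
      ALTER OPERATOR === (integer, integer)
        SET (RESTRICT = eqsel, JOIN = eqjoinsel);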
* Avoid passing NULL to memcmp() in lookups of zero-argument functions. (Tom Lane, 2015-06-27)

  A few places assumed they could pass NULL for the argtypes array when looking up functions known to have zero arguments. At first glance it seems that this should be safe enough, since memcmp() is surely not allowed to fetch any bytes if its count argument is zero. However, close reading of the C standard says that such calls have undefined behavior, so we'd probably best avoid it.

  Since the number of places doing this is quite small, and some other places looking up zero-argument functions were already passing dummy arrays, let's standardize on the latter solution rather than hacking the function lookup code to avoid calling memcmp() in these cases. I also added Asserts to catch any future violations of the new rule.

  Given the utter lack of any evidence that this actually causes any problems in the field, I don't feel a need to back-patch this change.

  Per report from Piotr Stefaniak, though this is not his patch.
* pgindent run for 9.5 (Bruce Momjian, 2015-05-23)
* Another typo fix. (Tom Lane, 2015-05-20)

  In the spirit of the season.
* Collection of typo fixes. (Heikki Linnakangas, 2015-05-20)

  Use "a" and "an" correctly, mostly in comments. Two error messages were also fixed (they were just elogs, so no translation work required). Two function comments in pg_proc.h were also fixed. Etsuro Fujita reported one of these, but I found a lot more with grep.

  Also fix a few other typos spotted while grepping for the a/an typos. For example, "consists out of ..." -> "consists of ...". Plus a "though"/"through" mixup reported by Euler Taveira.

  Many of these typos were in old code, which would be nice to backpatch to make future backpatching easier. But much of the code was new, and I didn't feel like crafting separate patches for each branch. So no backpatching.
* Various fixes around ON CONFLICT for rule deparsing. (Andres Freund, 2015-05-19)

  Neither the deparsing of the new alias for INSERT's target table, nor of the inference clause was supported. Also fix up a typo in an error message.

  Add regression tests to test those code paths.

  Author: Peter Geoghegan
* Refactor ON CONFLICT index inference parse tree representation. (Andres Freund, 2015-05-19)

  Defer lookup of the opfamily and input type of a user-specified opclass until the optimizer selects among available unique indexes, and store the opclass in the parse-analyzed tree instead. The primary reason for doing this is that for rule deparsing it's easier to use the opclass than the previous representation.

  While at it also rename a variable in the inference code to better fit its purpose.

  This is separate from the actual fixes for deparsing to make review easier.
* Fix parse tree of DROP TRANSFORM and COMMENT ON TRANSFORM (Peter Eisentraut, 2015-05-18)

  The plain C string language name needs to be wrapped in makeString() so that the parse tree is copyable. This is detectable by -DCOPY_PARSE_PLAN_TREES. Add a test case for the COMMENT case.

  Also make the quoting in the error messages more consistent.

  discovered by Tom Lane
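  The commands whose parse trees are affected look roughly like this (hypothetical type and language names):

      -- The language name in these commands is the string now wrapped in makeString()
      COMMENT ON TRANSFORM FOR hstore LANGUAGE plpythonu IS 'hstore <-> dict mapping';
      DROP TRANSFORM IF EXISTS FOR hstore LANGUAGE plpythonu;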
* Support GROUPING SETS, CUBE and ROLLUP. (Andres Freund, 2015-05-16)

  This SQL-standard functionality allows data to be aggregated by several different GROUP BY clauses at once. Each grouping set returns rows in which the columns grouped by only in other sets are set to NULL. This could previously be achieved by doing each grouping as a separate query, conjoined by UNION ALLs. Besides being considerably more concise, grouping sets will in many cases be faster, requiring only one scan over the underlying data.

  The current implementation of grouping sets only supports using sorting for input. Individual sets that share a sort order are computed in one pass. If there are sets that don't share a sort order, additional sort & aggregation steps are performed. These additional passes are sourced by the previous sort step, thus avoiding repeated scans of the source data. The code is structured in a way that adding support for purely using hash aggregation or a mix of hashing and sorting is possible. Sorting was chosen to be supported first, as it is the most generic method of implementation.

  Instead of, as in earlier versions of the patch, representing the chain of sort and aggregation steps as full-blown planner and executor nodes, all but the first sort are performed inside the aggregation node itself. This avoids the need to do some unusual gymnastics to handle having to return aggregated and non-aggregated tuples from underlying nodes, as well as having to shut down underlying nodes early to limit memory usage. The optimizer still builds Sort/Agg nodes to describe each phase, but they're not part of the plan tree; instead they are additional data for the aggregation node. They're a convenient and preexisting way to describe aggregation and sorting. The first (and possibly only) sort step is still performed as a separate execution step. That retains similarity with existing GROUP BY plans, makes rescans fairly simple, avoids very deep plans (leading to slow explains) and easily allows the sorting step to be skipped if the underlying data is sorted by other means.

  A somewhat ugly side of this patch is having to deal with a grammar ambiguity between the new CUBE keyword and the cube extension/functions named cube (and rollup). To avoid breaking existing deployments of the cube extension it has not been renamed, neither has cube been made a reserved keyword. Instead precedence hacking is used to make GROUP BY cube(..) refer to the CUBE grouping sets feature, and not the function cube(). To actually group by a function cube(), unlikely as that might be, the function name has to be quoted.

  Needs a catversion bump because stored rules may change.

  Author: Andrew Gierth and Atri Sharma, with contributions from Andres Freund
  Reviewed-By: Andres Freund, Noah Misch, Tom Lane, Svenne Krap, Tomas Vondra, Erik Rijkers, Marti Raudsepp, Pavel Stehule
  Discussion: CAOeZVidmVRe2jU6aMk_5qkxnB7dfmPROzM7Ur8JPW5j8Y5X-Lw@mail.gmail.com
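  A minimal sketch of the new syntax, using hypothetical table and column names, including the quoting needed to call a function named cube inside GROUP BY:

      -- One query computes several groupings at once
      SELECT region, product, sum(amount)
      FROM sales
      GROUP BY GROUPING SETS ((region), (product), ());

      -- CUBE and ROLLUP are shorthand for common collections of grouping sets
      SELECT region, product, sum(amount) FROM sales GROUP BY CUBE (region, product);
      SELECT region, product, sum(amount) FROM sales GROUP BY ROLLUP (region, product);

      -- Because of the precedence hack, calling a function named cube in GROUP BY
      -- requires quoting the name: GROUP BY "cube"(col)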
* TABLESAMPLE, SQL Standard and extensible (Simon Riggs, 2015-05-15)

  Add a TABLESAMPLE clause to SELECT statements that allows the user to specify random BERNOULLI sampling or block-level SYSTEM sampling. The implementation allows extensible sampling functions to be written, using a standard API. The basic version follows the SQL standard exactly. Usable concrete use cases for the sampling API follow in later commits.

  Petr Jelinek

  Reviewed by Michael Paquier and Simon Riggs
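  A minimal sketch of the clause, using a hypothetical table name:

      -- Row-level Bernoulli sampling of roughly 10% of the table
      SELECT * FROM orders TABLESAMPLE BERNOULLI (10);

      -- Block-level sampling, made repeatable with a seed
      SELECT * FROM orders TABLESAMPLE SYSTEM (10) REPEATABLE (42);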
* Support VERBOSE option in REINDEX command. (Fujii Masao, 2015-05-15)

  When this option is specified, a progress report is printed as each index is reindexed.

  Per discussion, we agreed on the following syntax for the extensibility of the options:

      REINDEX (flexible options) { INDEX | ... } name

  Sawada Masahiko. Reviewed by Robert Haas, Fabrízio Mello, Alvaro Herrera, Kyotaro Horiguchi, Jim Nasby and me.

  Discussion: CAD21AoA0pK3YcOZAFzMae+2fcc3oGp5zoRggDyMNg5zoaWDhdQ@mail.gmail.com
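  A minimal sketch of the option, using a hypothetical table name:

      -- Prints a progress line for each index as it is rebuilt
      REINDEX (VERBOSE) TABLE orders;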
* Allow on-the-fly capture of DDL event details (Alvaro Herrera, 2015-05-11)

  This feature lets user code inspect and take action on DDL events. Whenever a ddl_command_end event trigger is installed, DDL actions executed are saved to a list which can be inspected during execution of a function attached to ddl_command_end.

  The set-returning function pg_event_trigger_ddl_commands can be used to list actions so captured; it returns data about the type of command executed, as well as the affected object. This is sufficient for many uses of this feature. For the cases where it is not, we also provide a "command" column of a new pseudo-type pg_ddl_command, which is a pointer to a C structure that can be accessed by C code. The struct contains all the info necessary to completely inspect and even reconstruct the executed command.

  There is no actual deparse code here; that's expected to come later. What we have is enough infrastructure that the deparsing can be done in an external extension. The intention is that we will add some deparsing code in a later release, as an in-core extension.

  A new test module is included. It's probably insufficient as is, but it should be sufficient as a starting point for a more complete and future-proof approach.

  Authors: Álvaro Herrera, with some help from Andres Freund, Ian Barwick, Abhijit Menon-Sen.
  Reviews by Andres Freund, Robert Haas, Amit Kapila, Michael Paquier, Craig Ringer, David Steele.
  Additional input from Chris Browne, Dimitri Fontaine, Stephen Frost, Petr Jelínek, Tom Lane, Jim Nasby, Steven Singer, Pavel Stěhule.

  Based on original work by Dimitri Fontaine, though I didn't use his code.

  Discussion:
    https://www.postgresql.org/message-id/m2txrsdzxa.fsf@2ndQuadrant.fr
    https://www.postgresql.org/message-id/20131108153322.GU5809@eldon.alvh.no-ip.org
    https://www.postgresql.org/message-id/20150215044814.GL3391@alvh.no-ip.org
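  A hedged sketch of how the new function might be consumed from a ddl_command_end trigger; the function and trigger names are illustrative only:

      -- Log the command tag and object identity of each captured DDL action
      CREATE FUNCTION log_ddl() RETURNS event_trigger
      LANGUAGE plpgsql AS $$
      DECLARE
          r record;
      BEGIN
          FOR r IN SELECT * FROM pg_event_trigger_ddl_commands()
          LOOP
              RAISE NOTICE 'DDL: % on %', r.command_tag, r.object_identity;
          END LOOP;
      END;
      $$;

      CREATE EVENT TRIGGER log_ddl_commands ON ddl_command_end
          EXECUTE PROCEDURE log_ddl();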
* Add support for INSERT ... ON CONFLICT DO NOTHING/UPDATE. (Andres Freund, 2015-05-08)

  The newly added ON CONFLICT clause allows specifying an alternative to raising a unique or exclusion constraint violation error when inserting. ON CONFLICT refers to constraints that can either be specified using an inference clause (by specifying the columns of a unique constraint) or by naming a unique or exclusion constraint. DO NOTHING avoids the constraint violation, without touching the pre-existing row. DO UPDATE SET ... [WHERE ...] updates the pre-existing tuple, and has access to both the tuple proposed for insertion and the existing tuple; the optional WHERE clause can be used to prevent an update from being executed. The UPDATE SET and WHERE clauses have access to the tuple proposed for insertion using the "magic" EXCLUDED alias, and to the pre-existing tuple using the table name or its alias.

  This feature is often referred to as upsert.

  This is implemented using a new infrastructure called "speculative insertion". It is an optimistic variant of regular insertion that first does a pre-check for existing tuples and then attempts an insert. If a violating tuple was inserted concurrently, the speculatively inserted tuple is deleted and a new attempt is made. If the pre-check finds a matching tuple the alternative DO NOTHING or DO UPDATE action is taken. If the insertion succeeds without detecting a conflict, the tuple is deemed inserted.

  To handle the possible ambiguity between the excluded alias and a table named excluded, and for convenience with long relation names, INSERT INTO now can alias its target table.

  Bumps catversion as stored rules change.

  Author: Peter Geoghegan, with significant contributions from Heikki Linnakangas and Andres Freund. Testing infrastructure by Jeff Janes.
  Reviewed-By: Heikki Linnakangas, Andres Freund, Robert Haas, Simon Riggs, Dean Rasheed, Stephen Frost and many others.
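  For illustration, a minimal sketch of both forms, using hypothetical table and column names:

      CREATE TABLE counters (name text PRIMARY KEY, hits bigint NOT NULL DEFAULT 0);

      -- DO NOTHING: silently skip the insert when the row already exists
      INSERT INTO counters (name) VALUES ('home')
      ON CONFLICT (name) DO NOTHING;

      -- DO UPDATE: combine the existing row with the proposed one (EXCLUDED),
      -- using the new target-table alias to keep references unambiguous
      INSERT INTO counters AS c (name, hits) VALUES ('home', 1)
      ON CONFLICT (name) DO UPDATE
        SET hits = c.hits + EXCLUDED.hits
        WHERE c.hits < 1000000;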
* Represent columns requiring insert and update privileges independently. (Andres Freund, 2015-05-08)

  Previously, relation range table entries used a single Bitmapset field representing which columns required either UPDATE or INSERT privileges, despite the fact that INSERT and UPDATE privileges are separately cataloged, and may be independently held. As statements so far required either insert or update privileges but never both, that was sufficient. The required permission could be inferred from the top-level statement run.

  The upcoming INSERT ... ON CONFLICT UPDATE feature needs to independently check for both privileges in one statement though, so that is not sufficient anymore.

  Bumps catversion as stored rules change.

  Author: Peter Geoghegan
  Reviewed-By: Andres Freund
* Rename coerce_type() local variable. (Noah Misch, 2015-05-02)

  coerce_type() has local variables named targetTypeId, baseTypeId, and targetType. targetType has been the Type structure for baseTypeId, so rename it to baseType.
* Deparse named arguments to use the new => operator instead of := (Robert Haas, 2015-05-01)

  Tom Lane pointed out that this wasn't done, and asked whether that was intentional. Subsequent discussion was in favor of making the change, so here we go.
* Fix up some loose ends for CURRENT_USER as RoleSpec (Alvaro Herrera, 2015-04-30)

  In commit 31eae6028eca4, some documents were not updated to show the new capability; fix that. Also, the error message you get when CURRENT_USER and SESSION_USER are used in a context that doesn't accept them could be clearer about it being a problem only in those contexts; so add the word "here".

  Author: Kyotaro HORIGUCHI

  His patch submission also included changes to GRANT/REVOKE, but those seemed more controversial, so I left them out. We can reconsider these changes later.
* Fix another test for RELKIND_RELATION that should allow foreign tables now. (Tom Lane, 2015-04-28)

  I thought I'd gone through all of these before, but a fresh review found this one too. (Perhaps it would be better to just delete this test and let the failure occur later, but for the moment I'll preserve the logic.)

  The case that this was rejecting is like

      CREATE FOREIGN TABLE ft (f1 int ...) ...;
      CREATE TABLE c1 (UNIQUE(f1)) INHERITS(ft);
* Add transforms feature (Peter Eisentraut, 2015-04-26)

  This provides a mechanism for specifying conversions between SQL data types and procedural languages. As examples, there are transforms for hstore and ltree for PL/Perl and PL/Python.

  reviews by Pavel Stěhule and Andres Freund
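  A hedged sketch of the DDL involved; the support function names here are assumptions for illustration (the shipped transforms live in contrib modules rather than being hand-declared like this):

      -- Make PL/Python see hstore values as native dicts
      CREATE TRANSFORM FOR hstore LANGUAGE plpythonu (
          FROM SQL WITH FUNCTION hstore_to_plpython(internal),
          TO SQL WITH FUNCTION plpython_to_hstore(internal)
      );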
* Add comments warning against generalizing default_with_oids. (Tom Lane, 2015-04-25)

  pg_dump has historically assumed that default_with_oids affects only plain tables and not other relkinds. Conceivably we could make it apply to some newly invented relkind if we did so from the get-go, but changing the behavior for existing object types will break existing dump scripts. Add code comments warning about this interaction.

  Also, make sure that default_with_oids doesn't cause parse_utilcmd.c to think that CREATE FOREIGN TABLE will create an OID column. I think this is only a latent bug right now, since we don't allow UNIQUE/PKEY constraints in CREATE FOREIGN TABLE, but it's better to be consistent and future-proof.
* Revert: Honor OID status of CREATE LIKE'd tables (Bruce Momjian, 2015-04-25)

  Reverts d992f8a8961c09ec219373ffe2b5e6473febd065

  Report by Tom Lane
* Honor OID status of CREATE LIKE'd tables (Bruce Momjian, 2015-04-20)

  Previously, tables created by CREATE LIKE never had OIDs.

  Report by Tom Lane
* Remove obsolete FORCE option from REINDEX. (Fujii Masao, 2015-04-09)

  The FORCE option has been marked "obsolete" since the very old version 7.4, but was kept for backwards compatibility. Per discussion on pgsql-hackers, we concluded that it's no longer worth continuing to support the option.
* Transform ALTER TABLE/SET TYPE/USING expr during parse analysis (Alvaro Herrera, 2015-04-03)

  This lets later stages have access to the transformed expression; in particular it allows DDL-deparsing code during event triggers to pass the transformed expression to ruleutils.c, so that the complete command can be deparsed.

  This shuffles the timing of the transform calls a bit: previously, nothing was transformed during parse analysis, and only the RELKIND_RELATION case was being handled during execution. After this patch, all expressions are transformed during parse analysis (including those for relkinds other than RELATION), and the error for other relation kinds is thrown only during execution. So we do more work than before to reject some bogus cases. That seems acceptable.
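  The command whose USING expression is now transformed at parse analysis time looks roughly like this (hypothetical table and column names):

      -- The USING expression becomes available to DDL-deparsing code
      ALTER TABLE measurements
        ALTER COLUMN reading TYPE numeric USING reading::numeric;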
* Remove spurious semicolons. (Heikki Linnakangas, 2015-03-31)

  Petr Jelinek
* Fix gram.y comment to match reality (Alvaro Herrera, 2015-03-25)

  There are other comments in there that don't precisely match what's implemented, but this one confused me enough to be worth fixing.
* Add support for ALTER TABLE IF EXISTS ... RENAME CONSTRAINT (Bruce Momjian, 2015-03-24)

  Also add a regression test. Previously this was documented to work, but didn't.
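  A minimal sketch of the now-working syntax, using hypothetical names:

      -- No error is raised even if the table does not exist
      ALTER TABLE IF EXISTS orders
        RENAME CONSTRAINT orders_amount_check TO orders_amount_positive;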
* Allow foreign tables to participate in inheritance. (Tom Lane, 2015-03-22)

  Foreign tables can now be inheritance children, or parents. Much of the system was already ready for this, but we had to fix a few things of course, mostly in the area of planner and executor handling of row locks.

  As side effects of this, allow foreign tables to have NOT VALID CHECK constraints (and hence to accept ALTER ... VALIDATE CONSTRAINT), and to accept ALTER SET STORAGE and ALTER SET WITH/WITHOUT OIDS. Continuing to disallow these things would've required bizarre and inconsistent special cases in inheritance behavior. Since foreign tables don't enforce CHECK constraints anyway, a NOT VALID one is a complete no-op, but that doesn't mean we shouldn't allow it. And it's possible that some FDWs might have use for SET STORAGE or SET WITH OIDS, though doubtless they will be no-ops for most.

  An additional change in support of this is that when a ModifyTable node has multiple target tables, they will all now be explicitly identified in EXPLAIN output, for example:

      Update on pt1  (cost=0.00..321.05 rows=3541 width=46)
        Update on pt1
        Foreign Update on ft1
        Foreign Update on ft2
        Update on child3
        ->  Seq Scan on pt1  (cost=0.00..0.00 rows=1 width=46)
        ->  Foreign Scan on ft1  (cost=100.00..148.03 rows=1170 width=46)
        ->  Foreign Scan on ft2  (cost=100.00..148.03 rows=1170 width=46)
        ->  Seq Scan on child3  (cost=0.00..25.00 rows=1200 width=46)

  This was done mainly to provide an unambiguous place to attach "Remote SQL" fields, but it is useful for inherited updates even when no foreign tables are involved.

  Shigeru Hanada and Etsuro Fujita, reviewed by Ashutosh Bapat and Kyotaro Horiguchi, some additional hacking by me
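  A hedged sketch of a foreign table joining an inheritance hierarchy; the server and per-FDW options shown are assumptions for illustration:

      CREATE TABLE measurements (city text, reading numeric);

      -- A foreign table as an inheritance child
      CREATE FOREIGN TABLE measurements_remote ()
          INHERITS (measurements)
          SERVER remote_srv OPTIONS (table_name 'measurements');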
* Setup cursor position for schema-qualified elements (Alvaro Herrera, 2015-03-18)

  This makes any errors thrown while looking up such schemas report the position of the error.

  Author: Ryan Kelly
  Reviewed by: Jeevan Chalke, Tom Lane
* Rationalize vacuuming options and parameters (Alvaro Herrera, 2015-03-18)

  We were involving the parser too much in setting up initial vacuuming parameters. This patch moves that responsibility elsewhere to simplify code, and also to make future additions easier. To do this, create a new struct VacuumParams which is filled just prior to vacuum execution, instead of at parse time; for user-invoked vacuuming this is set up in a new function ExecVacuum, while autovacuum sets it up by itself.

  While at it, add a new member VACOPT_SKIPTOAST to enum VacuumOption, only set by autovacuum, which is used to disable vacuuming of the toast table instead of the old do_toast parameter; this relieves the argument list of vacuum() and some callees a bit. This partially makes up for having added more arguments in an effort to avoid having autovacuum construct a VacuumStmt parse node.

  Author: Michael Paquier. Some tweaks by Álvaro
  Reviewed by: Robert Haas, Stephen Frost, Álvaro Herrera
* Support opfamily members in get_object_address (Alvaro Herrera, 2015-03-16)

  In the spirit of 890192e99af and 4464303405f: have get_object_address understand individual pg_amop and pg_amproc objects. There is no way to refer to such objects directly in the grammar -- rather, they are almost always considered an integral part of the opfamily that contains them. (The only case that deals with them individually is ALTER OPERATOR FAMILY ADD/DROP, which carries the opfamily address separately and thus does not need it to be part of each added/dropped element's address.) In event triggers it becomes possible to become involved with individual amop/amproc elements, and this commit enables pg_get_object_address to do so as well.

  To make the overall coding simpler, this commit also slightly changes the get_object_address representation for opclasses and opfamilies: instead of having the AM name in the objargs array, I moved it as the first element of the objnames array. This enables the new code to use objargs for the type names used by pg_amop and pg_amproc.

  Reviewed by: Stephen Frost
* Improve representation of PlanRowMark. (Tom Lane, 2015-03-15)

  This patch fixes two inadequacies of the PlanRowMark representation.

  First, that the original LockingClauseStrength isn't stored (and cannot be inferred for foreign tables, which always get ROW_MARK_COPY). Since some PlanRowMarks are created out of whole cloth and don't actually have an ancestral RowMarkClause, this requires adding a dummy LCS_NONE value to enum LockingClauseStrength, which is fairly annoying but the alternatives seem worse. This fix allows getting rid of the use of get_parse_rowmark() in FDWs (as per the discussion around commits 462bd95705a0c23b and 8ec8760fc87ecde0), and it simplifies some things elsewhere.

  Second, that the representation assumed that all child tables in an inheritance hierarchy would use the same RowMarkType. That's true today but will soon not be true. We add an "allMarkTypes" field that identifies the union of mark types used in all a parent table's children, and use that where appropriate (currently, only in preprocess_targetlist()).

  In passing fix a couple of minor infelicities left over from the SKIP LOCKED patch, notably that _outPlanRowMark still thought waitPolicy is a bool.

  Catversion bump is required because the numeric values of enum LockingClauseStrength can appear in on-disk rules.

  Extracted from a much larger patch to support foreign table inheritance; it seemed worth breaking this out, since it's a separable concern.

  Shigeru Hanada and Etsuro Fujita, somewhat modified by me
* Require non-NULL pstate for all addRangeTableEntryFor* functions. (Robert Haas, 2015-03-11)

  Per discussion, it's better to have a consistent coding rule here.

  Michael Paquier, per a note from Greg Stark referencing an old post from Tom Lane.
* Make operator precedence follow the SQL standard more closely. (Tom Lane, 2015-03-11)

  While the SQL standard is pretty vague on the overall topic of operator precedence (because it never presents a unified BNF for all expressions), it does seem reasonable to conclude from the spec for <boolean value expression> that OR has the lowest precedence, then AND, then NOT, then IS tests, then the six standard comparison operators, then everything else (since any non-boolean operator in a WHERE clause would need to be an argument of one of these).

  We were only sort of on board with that: most notably, while "<" ">" and "=" had properly low precedence, "<=" ">=" and "<>" were treated as generic operators and so had significantly higher precedence. And "IS" tests were even higher precedence than those, which is very clearly wrong per spec.

  Another problem was that "foo NOT SOMETHING bar" constructs, such as "x NOT LIKE y", were treated inconsistently because of a bison implementation artifact: they had the documented precedence with respect to operators to their right, but behaved like NOT (i.e., very low priority) with respect to operators to their left.

  Fixing the precedence issues is just a small matter of rearranging the precedence declarations in gram.y, except for the NOT problem, which requires adding an additional lookahead case in base_yylex() so that we can attach a different token precedence to NOT LIKE and allied two-word operators.

  The bulk of this patch is not the bug fix per se, but adding logic to parse_expr.c to allow giving warnings if an expression has changed meaning because of these precedence changes. These warnings are off by default and are enabled by the new GUC operator_precedence_warning. It's believed that very few applications will be affected by these changes, but it was agreed that a warning mechanism is essential to help debug any that are.
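  A hedged illustration of the new GUC and of one affected construct (whether a given application is affected depends on its expressions):

      -- Ask the parser to warn about expressions whose meaning changes
      -- under the revised precedence rules.
      SET operator_precedence_warning = on;

      -- "<=" now shares the precedence of "<" and "=", and IS tests bind more
      -- loosely than comparisons, so this is parsed as (1 <= 2) IS TRUE;
      -- previously IS bound more tightly than "<=".
      SELECT 1 <= 2 IS TRUE;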
* Suggest to the user the column they may have meant to reference. (Robert Haas, 2015-03-11)

  Error messages informing the user that no such column exists can sometimes provoke a perplexed response. This often happens due to a subtle typo in the column name or, perhaps less likely, in the alias name. To speed discovery of what the real issue is in such cases, we'll now search the range table for approximate matches. If there are one or two such matches that are good enough to think that they might be what the user intended to type, and better than all other approximate matches, we'll issue a hint suggesting that the user might have intended to reference those columns.

  Peter Geoghegan and Robert Haas
* Allow named parameters to be specified using => in addition to := (Robert Haas, 2015-03-10)

  SQL has standardized on => as the syntax for specifying named parameters, and we've wanted for many years to support the same syntax ourselves, but this has been complicated by the possible use of => as an operator name. In PostgreSQL 9.0, we began emitting a warning when an operator named => was defined, and in PostgreSQL 9.2, we stopped shipping a =>(text, text) operator as part of hstore. By the time the next major version of PostgreSQL is released, => will have been deprecated for a full five years, so hopefully there won't be too many people still relying on it. We continue to support := for compatibility with previous PostgreSQL releases.

  Pavel Stehule, reviewed by Petr Jelinek, with a few documentation tweaks by me.
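  For illustration, a hypothetical function showing both notations:

      CREATE FUNCTION describe(item text, qty integer DEFAULT 1) RETURNS text
          LANGUAGE sql AS $$ SELECT $1 || ' x ' || $2 $$;

      SELECT describe(item => 'widget', qty => 3);   -- SQL-standard syntax
      SELECT describe(item := 'widget', qty := 3);   -- older syntax, still accepted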
* Allow CURRENT/SESSION_USER to be used in certain commands (Alvaro Herrera, 2015-03-09)

  Commands such as ALTER USER, ALTER GROUP, ALTER ROLE, GRANT, and the various ALTER OBJECT / OWNER TO, as well as ad-hoc clauses related to roles such as the AUTHORIZATION clause of CREATE SCHEMA, the FOR clause of CREATE USER MAPPING, and the FOR ROLE clause of ALTER DEFAULT PRIVILEGES can now take the keywords CURRENT_USER and SESSION_USER as user specifiers in place of an explicit user name.

  This commit also fixes some quite ugly handling of special standards-mandated syntax in CREATE USER MAPPING, which in particular would fail to work in presence of a role named "current_user".

  The special role specifiers PUBLIC and NONE also have more consistent handling now.

  Also take the opportunity to add location tracking to user specifiers.

  Authors: Kyotaro Horiguchi. Heavily reworked by Álvaro Herrera.
  Reviewed by: Rushabh Lathia, Adam Brightwell, Marti Raudsepp.
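  A minimal sketch of the new role specifiers in a few of the affected commands, using hypothetical object names:

      ALTER TABLE reports OWNER TO CURRENT_USER;
      GRANT SELECT ON reports TO SESSION_USER;
      CREATE SCHEMA staging AUTHORIZATION CURRENT_USER;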
* Remove residual NULL-pstate handling in addRangeTableEntry. (Robert Haas, 2015-03-03)

  Passing a NULL pstate wouldn't actually work, because isLockedRefname() isn't prepared to cope with it; and there hasn't been any in-core code that tries in over a decade. So just remove the residual NULL handling.

  Spotted by Coverity; analysis and patch by Michael Paquier.
* Improve parser's one-extra-token lookahead mechanism. (Tom Lane, 2015-02-24)

  There are a couple of places in our grammar that fail to be strict LALR(1), by requiring more than a single token of lookahead to decide what to do. Up to now we've dealt with that by using a filter between the lexer and parser that merges adjacent tokens into one in the places where two tokens of lookahead are necessary. But that creates a number of user-visible anomalies, for instance that you can't name a CTE "ordinality" because "WITH ordinality AS ..." triggers folding of WITH and ORDINALITY into one token. I realized that there's a better way.

  In this patch, we still do the lookahead basically as before, but we never merge the second token into the first; we replace just the first token by a special lookahead symbol when one of the lookahead pairs is seen. This requires a couple extra productions in the grammar, but it involves fewer special tokens, so that the grammar tables come out a bit smaller than before. The filter logic is no slower than before, perhaps a bit faster.

  I also fixed the filter logic so that when backing up after a lookahead, the current token's terminator is correctly restored; this eliminates some weird behavior in error message issuance, as is shown by the one change in existing regression test outputs.

  I believe that this patch entirely eliminates odd behaviors caused by lookahead for WITH. It doesn't really improve the situation for NULLS followed by FIRST/LAST unfortunately: those sequences still act like a reserved word, even though there are cases where they should be seen as two ordinary identifiers, eg "SELECT nulls first FROM ...". I experimented with additional grammar hacks but couldn't find any simple solution for that. Still, this is better than before, and it seems much more likely that we *could* somehow solve the NULLS case on the basis of this filter behavior than the previous one.
* Further tweaking of raw grammar output to distinguish different inputs. (Tom Lane, 2015-02-23)

  Use a different A_Expr_Kind for LIKE/ILIKE/SIMILAR TO constructs, so that they can be distinguished from direct invocation of the underlying operators. Also, postpone selection of the operator name when transforming "x IN (select)" to "x = ANY (select)", so that those syntaxes can be told apart at parse analysis time.

  I had originally thought I'd also have to do something special for the syntaxes IS NOT DISTINCT FROM, IS NOT DOCUMENT, and x NOT IN (SELECT...), which the grammar translates as though they were NOT (construct). On reflection though, we can distinguish those cases reliably by noting whether the parse location shown for the NOT is the same as for its child node. This only requires tweaking the parse locations for NOT IN, which I've done here.

  These changes should have no effect outside the parser; they're just in support of being able to give accurate warnings for planned operator precedence changes.