path: root/src/backend/nodes
...
* Add hasRowSecurity to copyfuncs/outfuncs (Stephen Frost, 2015-02-25)
  The RLS patch added a hasRowSecurity field to PlannerGlobal and PlannedStmt but didn't update nodes/copyfuncs.c and nodes/outfuncs.c to reflect those additional fields. Correct that by adding entries to the appropriate functions for those fields.

  Pointed out by Robert.
* Further tweaking of raw grammar output to distinguish different inputs. (Tom Lane, 2015-02-23)
  Use a different A_Expr_Kind for LIKE/ILIKE/SIMILAR TO constructs, so that they can be distinguished from direct invocation of the underlying operators. Also, postpone selection of the operator name when transforming "x IN (select)" to "x = ANY (select)", so that those syntaxes can be told apart at parse analysis time.

  I had originally thought I'd also have to do something special for the syntaxes IS NOT DISTINCT FROM, IS NOT DOCUMENT, and x NOT IN (SELECT...), which the grammar translates as though they were NOT (construct). On reflection though, we can distinguish those cases reliably by noting whether the parse location shown for the NOT is the same as for its child node. This only requires tweaking the parse locations for NOT IN, which I've done here.

  These changes should have no effect outside the parser; they're just in support of being able to give accurate warnings for planned operator precedence changes.
* Add parse location fields to NullTest and BooleanTest structs. (Tom Lane, 2015-02-22)
  We did not need a location tag on NullTest or BooleanTest before, because no error messages referred directly to their locations. That's planned to change though, so add these fields in a separate housekeeping commit.

  Catversion bump because stored rules may change.
* Get rid of multiple applications of transformExpr() to the same tree. (Tom Lane, 2015-02-22)
  transformExpr() has for many years had provisions to do nothing when applied to an already-transformed expression tree. However, this was always ugly and of dubious reliability, so we'd be much better off without it.

  The primary historical reason for it was that gram.y sometimes returned multiple links to the same subexpression, which is no longer true as of my BETWEEN fixes. We'd also grown some lazy hacks in CREATE TABLE LIKE (failing to distinguish between raw and already-transformed index specifications) and one or two other places.

  This patch removes the need for and support for re-transforming already transformed expressions. The index case is dealt with by adding a flag to struct IndexStmt to indicate that it's already been transformed; which has some benefit anyway in that tablecmds.c can now Assert that transformation has happened rather than just assuming. The other main reason was some rather sloppy code for array type coercion, which can be fixed (and its performance improved too) by refactoring.

  I did leave transformJoinUsingClause() still constructing expressions containing untransformed operator nodes being applied to Vars, so that transformExpr() still has to allow Var inputs. But that's a much narrower, and safer, special case than before, since Vars will never appear in a raw parse tree, and they don't have any substructure to worry about.

  In passing fix some oversights in the patch that added CREATE INDEX IF NOT EXISTS (missing processing of IndexStmt.if_not_exists). These appear relatively harmless, but still sloppy coding practice.
* Represent BETWEEN as a special node type in raw parse trees. (Tom Lane, 2015-02-22)
  Previously, gram.y itself converted BETWEEN into AND (or AND/OR) nests of expression comparisons. This was always as bogus as could be, but fixing it hasn't risen to the top of the to-do list. The present patch invents an A_Expr representation for BETWEEN expressions, and does the expansion to comparison trees in parse_expr.c which is at least a slightly saner place to be doing semantic conversions. There should be no change in the post-parse-analysis results.

  This does nothing for the semantic issues with BETWEEN (dubious connection to btree-opclass semantics, and multiple evaluation of possibly volatile subexpressions) ... but it's a necessary preliminary step before we could fix any of that. The main immediate benefit is that preserving BETWEEN as an identifiable raw-parse-tree construct will enable better error messages.

  While at it, fix the code so that multiply-referenced subexpressions are physically duplicated before being passed through transformExpr(). This gets rid of one of the principal reasons why transformExpr() has historically had to allow already-processed input.
* Use FLEXIBLE_ARRAY_MEMBER in a bunch more places. (Tom Lane, 2015-02-20)
  Replace some bogus "x[1]" declarations with "x[FLEXIBLE_ARRAY_MEMBER]". Aside from being more self-documenting, this should help prevent bogus warnings from static code analyzers and perhaps compiler misoptimizations.

  This patch is just a down payment on eliminating the whole problem, but it gets rid of a lot of easy-to-fix cases.

  Note that the main problem with doing this is that one must no longer rely on computing sizeof(the containing struct), since the result would be compiler-dependent. Instead use offsetof(struct, lastfield). Autoconf also warns against spelling that offsetof(struct, lastfield[0]).

  Michael Paquier, review and additional fixes by me.
* Fix EXPLAIN output for cases where parent table is excluded by constraints. (Tom Lane, 2015-02-17)
  The previous coding in EXPLAIN always labeled a ModifyTable node with the name of the target table affected by its first child plan. When originally written, this was necessarily the parent table of the inheritance tree, so everything was unconfusing. But when we added NO INHERIT constraints, it became possible for the parent table to be deleted from the plan by constraint exclusion while still leaving child tables present. This led to the ModifyTable plan node being labeled with the first surviving child, which was deemed confusing. Fix it by retaining the parent table's RT index in a new field in ModifyTable.

  Etsuro Fujita, reviewed by Ashutosh Bapat and myself
* Improve new caching logic in tbm_add_tuples(). (Tom Lane, 2015-01-16)
  For no significant extra complexity, we can cache knowledge that the target page is lossy, and save a hash_search per iteration in that case as well. This probably makes little difference, since the extra rechecks that must occur when pages are lossy are way more expensive than anything we can save here ... but we might as well do it if we're going to cache anything.
* Make tbm_add_tuples more efficient by caching the last accessed page. (Andres Freund, 2015-01-16)
  When adding a large number of tuples to a TID bitmap using tbm_add_tuples() sometimes a lot of time was spent looking up a page's entry in the bitmap's internal hashtable. Improve efficiency by caching the last accessed page, while iterating over the passed-in tuples, hoping consecutive tuples will often be on the same page. In many cases that's a good bet, and in the rest the added overhead isn't big.

  Discussion: 54479A85.8060309@sigaev.ru
  Author: Teodor Sigaev
  Reviewed-By: David Rowley
* Update copyright for 2015 (Bruce Momjian, 2015-01-06)
  Backpatch certain files through 9.0
* Remove unused fields from ReindexStmt. (Fujii Masao, 2014-12-24)
  fe263d1 changed the REINDEX logic so that those fields are not used at all, but forgot to remove them.

  Sawada Masahiko
* Improve hash_create's API for selecting simple-binary-key hash functions. (Tom Lane, 2014-12-18)
  Previously, if you wanted anything besides C-string hash keys, you had to specify a custom hashing function to hash_create(). Nearly all such callers were specifying tag_hash or oid_hash; which is tedious, and rather error-prone, since a caller could easily miss the opportunity to optimize by using hash_uint32 when appropriate. Replace this with a design whereby callers using simple binary-data keys just specify HASH_BLOBS and don't need to mess with specific support functions. hash_create() itself will take care of optimizing when the key size is four bytes.

  This nets out saving a few hundred bytes of code space, and offers a measurable performance improvement in tidbitmap.c (which was not exploiting the opportunity to use hash_uint32 for its 4-byte keys). There might be some wins elsewhere too, I didn't analyze closely.

  In future we could look into offering a similar optimized hashing function for 8-byte keys. Under this design that could be done in a centralized and machine-independent fashion, whereas getting it right for keys of platform-dependent sizes would've been notationally painful before.

  For the moment, the old way still works fine, so as not to break source code compatibility for loadable modules. Eventually we might want to remove tag_hash and friends from the exported API altogether, since there's no real need for them to be explicitly referenced from outside dynahash.c.

  Teodor Sigaev and Tom Lane
* Add CINE option for CREATE TABLE AS and CREATE MATERIALIZED VIEW (Andrew Dunstan, 2014-12-13)
  Fabrízio de Royes Mello reviewed by Rushabh Lathia.
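
  A minimal SQL sketch of the CREATE-IF-NOT-EXISTS (CINE) variants added here; the object and table names are hypothetical:

      -- A second run issues a notice instead of raising an error.
      CREATE TABLE IF NOT EXISTS t_copy AS SELECT * FROM t;
      CREATE MATERIALIZED VIEW IF NOT EXISTS mv_copy AS SELECT * FROM t;
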
* Use correct macro for reltablespace. (Heikki Linnakangas, 2014-12-11)
  It's an OID. WRITE_UINT_FIELD is identical to WRITE_OID_FIELD, but let's be tidy.

  Mark Dilger
* Add bms_get_singleton_member(), and use it where appropriate. (Tom Lane, 2014-11-28)
  This patch adds a function that replaces a bms_membership() test followed by a bms_singleton_member() call, performing both the test and the extraction of a singleton set's member in one scan of the bitmapset. The performance advantage over the old way is probably minimal in current usage, but it seems worthwhile on notational grounds anyway.

  David Rowley
* Add bms_next_member(), and use it where appropriate. (Tom Lane, 2014-11-28)
  This patch adds a way of iterating through the members of a bitmapset nondestructively, unlike the old way with bms_first_member(). While bms_next_member() is very slightly slower than bms_first_member() (at least for typical-size bitmapsets), eliminating the need to palloc and pfree a temporary copy of the target bitmapset is a significant win. So this method should be preferred in all cases where a temporary copy would be necessary.

  Tom Lane, with suggestions from Dean Rasheed and David Rowley
* Support arrays as input to array_agg() and ARRAY(SELECT ...). (Tom Lane, 2014-11-25)
  These cases formerly failed with errors about "could not find array type for data type". Now they yield arrays of the same element type and one higher dimension.

  The implementation involves creating functions with API similar to the existing accumArrayResult() family. I (tgl) also extended the base family by adding an initArrayResult() function, which allows callers to avoid special-casing the zero-inputs case if they just want an empty array as result. (Not all do, so the previous calling convention remains valid.) This allowed simplifying some existing code in xml.c and plperl.c.

  Ali Akbar, reviewed by Pavel Stehule, significantly modified by me
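
  A short SQL sketch of the behavior described above; the input values are hypothetical:

      -- Aggregating arrays now yields an array of one higher dimension.
      SELECT array_agg(a) FROM (VALUES (ARRAY[1,2]), (ARRAY[3,4])) AS v(a);
      --  {{1,2},{3,4}}

      SELECT ARRAY(SELECT ARRAY[i, i * 2] FROM generate_series(1, 3) AS i);
      --  {{1,2},{2,4},{3,6}}
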
* Rearrange CustomScan API. (Tom Lane, 2014-11-21)
  Make it work more like FDW plans do: instead of assuming that there are expressions in a CustomScan plan node that the core code doesn't know about, insist that all subexpressions that need planner attention be in a "custom_exprs" list in the Plan representation. (Of course, the custom plugin can break the list apart again at executor initialization.) This lets us revert the parts of the patch that exposed setrefs.c and subselect.c processing to the outside world.

  Also revert the GetSpecialCustomVar stuff in ruleutils.c; that concept may work in future, but it's far from fully baked right now.
* Introduce custom path and scan providers. (Robert Haas, 2014-11-07)
  This allows extension modules to define their own methods for scanning a relation, and get the core code to use them. It's unclear as yet how much use this capability will find, but we won't find out if we never commit it.

  KaiGai Kohei, reviewed at various times and in various levels of detail by Shigeru Hanada, Tom Lane, Andres Freund, Álvaro Herrera, and myself.
* Implement IF NOT EXISTS for CREATE INDEX. (Fujii Masao, 2014-11-06)
  Fabrízio de Royes Mello, reviewed by Marti Raudsepp, Adam Brightwell and me.
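
  A minimal SQL sketch; note that IF NOT EXISTS requires an explicit index name (table and index names here are hypothetical):

      -- A second run raises a NOTICE rather than an error.
      CREATE INDEX IF NOT EXISTS t_col_idx ON t (col);
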
* Avoid core dump in _outPathInfo() for Path without a parent RelOptInfo. (Tom Lane, 2014-10-17)
  Nearly all Paths have parents, but a ResultPath representing an empty FROM clause does not. Avoid a core dump in such cases. I believe this is only a hazard for debugging usage, not for production, else we'd have heard about it before. Nonetheless, back-patch to 9.1 where the troublesome code was introduced.

  Noted while poking at bug #11703.
* Implement SKIP LOCKED for row-level locks (Alvaro Herrera, 2014-10-07)
  This clause changes the behavior of SELECT locking clauses in the presence of locked rows: instead of causing a process to block waiting for the locks held by other processes (or raise an error, with NOWAIT), SKIP LOCKED makes the new reader skip over such rows. While this is not appropriate behavior for general purposes, there are some cases in which it is useful, such as queue-like tables.

  Catalog version bumped because this patch changes the representation of stored rules.

  Reviewed by Craig Ringer (based on a previous attempt at an implementation by Simon Riggs, who also provided input on the syntax used in the current patch), David Rowley, and Álvaro Herrera.

  Author: Thomas Munro
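
  A minimal sketch of the queue-table usage mentioned above; the table and column names are hypothetical:

      -- Grab one unclaimed job without blocking on rows other workers
      -- have already locked.
      BEGIN;
      SELECT id, payload
      FROM   job_queue
      ORDER  BY id
      LIMIT  1
      FOR UPDATE SKIP LOCKED;
      -- ... process the row, then delete or mark it ...
      COMMIT;
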
* Fix mishandling of CreateEventTrigStmt's eventname field. (Robert Haas, 2014-09-22)
  It's a string, not a scalar.

  Petr Jelinek
* Row-Level Security Policies (RLS) (Stephen Frost, 2014-09-19)
  Building on the updatable security-barrier views work, add the ability to define policies on tables to limit the set of rows which are returned from a query and which are allowed to be added to a table. Expressions defined by the policy for filtering are added to the security barrier quals of the query, while expressions defined to check records being added to a table are added to the with-check options of the query.

  New top-level commands are CREATE/ALTER/DROP POLICY and are controlled by the table owner. Row Security is able to be enabled and disabled by the owner on a per-table basis using ALTER TABLE .. ENABLE/DISABLE ROW SECURITY.

  Per discussion, ROW SECURITY is disabled on tables by default and must be enabled for policies on the table to be used. If no policies exist on a table with ROW SECURITY enabled, a default-deny policy is used and no records will be visible.

  By default, row security is applied at all times except for the table owner and the superuser. A new GUC, row_security, is added which can be set to ON, OFF, or FORCE. When set to FORCE, row security will be applied even for the table owner and superusers. When set to OFF, row security will be disabled when allowed and an error will be thrown if the user does not have rights to bypass row security.

  Per discussion, pg_dump sets row_security = OFF by default to ensure that exports and backups will have all data in the table or will error if there are insufficient privileges to bypass row security. A new option has been added to pg_dump, --enable-row-security, to ask pg_dump to export with row security enabled.

  A new role capability, BYPASSRLS, which can only be set by the superuser, is added to allow other users to be able to bypass row security using row_security = OFF.

  Many thanks to the various individuals who have helped with the design, particularly Robert Haas for his feedback.

  Authors include Craig Ringer, KaiGai Kohei, Adam Brightwell, Dean Rasheed, with additional changes and rework by me. Reviewers have included all of the above, Greg Smith, Jeff McCormick, and Robert Haas.
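
  A minimal sketch of a policy, using the syntax as described in this commit message; the table, column, and policy names are hypothetical:

      CREATE TABLE accounts (owner name, balance numeric);
      ALTER TABLE accounts ENABLE ROW SECURITY;  -- spelled ENABLE ROW LEVEL SECURITY in released versions
      CREATE POLICY accounts_owner ON accounts
          USING (owner = current_user)
          WITH CHECK (owner = current_user);
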
* Implement IF NOT EXISTS for CREATE SEQUENCE. (Heikki Linnakangas, 2014-08-26)
  Fabrízio de Royes Mello
* Rework 'MOVE ALL' to 'ALTER .. ALL IN TABLESPACE' (Stephen Frost, 2014-08-21)
  As 'ALTER TABLESPACE .. MOVE ALL' really didn't change the tablespace but instead changed objects inside tablespaces, it made sense to rework the syntax and supporting functions to operate under the 'ALTER (TABLE|INDEX|MATERIALIZED VIEW)' syntax and to be in tablecmds.c.

  Pointed out by Alvaro, who also suggested the new syntax.

  Back-patch to 9.4.
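
  A minimal sketch of the reworked syntax; the tablespace and role names are hypothetical:

      -- Move every table the current user owns out of old_ts.
      ALTER TABLE ALL IN TABLESPACE old_ts SET TABLESPACE new_ts;

      -- Restrict to objects owned by a particular role.
      ALTER INDEX ALL IN TABLESPACE old_ts OWNED BY app_role SET TABLESPACE new_ts;
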
* Partial fix for dropped columns in functions returning composite. (Tom Lane, 2014-07-19)
  When a view has a function-returning-composite in FROM, and there are some dropped columns in the underlying composite type, ruleutils.c printed junk in the column alias list for the reconstructed FROM entry. Before 9.3, this was prevented by doing get_rte_attribute_is_dropped tests while printing the column alias list; but that solution is not currently available to us for reasons I'll explain below. Instead, check for empty-string entries in the alias list, which can only exist if that column position had been dropped at the time the view was made. (The parser fills in empty strings to preserve the invariant that the aliases correspond to physical column positions.)

  While this is sufficient to handle the case of columns dropped before the view was made, we have still got issues with columns dropped after the view was made. In particular, the view could contain Vars that explicitly reference such columns! The dependency machinery really ought to refuse the column drop attempt in such cases, as it would do when trying to drop a table column that's explicitly referenced in views. However, we currently neglect to store dependencies on columns of composite types, and fixing that is likely to be too big to be back-patchable (not to mention that existing views in existing databases would not have the needed pg_depend entries anyway). So I'll leave that for a separate patch.

  Pre-9.3, ruleutils would print such Vars normally (with their original column names) even though it suppressed their entries in the RTE's column alias list. This is certainly bogus, since the printed view definition would fail to reload, but at least it didn't crash. However, as of 9.3 the printed column alias list is tightly tied to the names printed for Vars; so we can't treat columns as dropped for one purpose and not dropped for the other. This is why we can't just put back the get_rte_attribute_is_dropped test: it results in an assertion failure if the view in fact contains any Vars referencing the dropped column. Once we've got dependencies preventing such cases, we'll probably want to do it that way instead of relying on the empty-string test used here.

  This fix turned up a very ancient bug in outfuncs/readfuncs, namely that T_String nodes containing empty strings were not dumped/reloaded correctly: the node was printed as "<>" which is read as a string value of <>. Since (per SQL) we disallow empty-string identifiers, such nodes don't occur normally, which is why we'd not noticed. (Such nodes aren't used for literal constants, just identifiers.)

  Per report from Marc Schablewski. Back-patch to 9.3 which is where the rule printing behavior changed. The dangling-variable case is broken all the way back, but that's not what his complaint is about.
* Implement IMPORT FOREIGN SCHEMA. (Tom Lane, 2014-07-10)
  This command provides an automated way to create foreign table definitions that match remote tables, thereby reducing tedium and chances for error. In this patch, we provide the necessary core-server infrastructure and implement the feature fully in the postgres_fdw foreign-data wrapper. Other wrappers will throw a "feature not supported" error until/unless they are updated.

  Ronan Dunklau and Michael Paquier, additional work by me
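
  A minimal sketch of the command through postgres_fdw; the server, schema, and table names are hypothetical:

      IMPORT FOREIGN SCHEMA public
          LIMIT TO (orders, customers)
          FROM SERVER remote_db
          INTO local_schema;
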
* Implement UPDATE tab SET (col1,col2,...) = (SELECT ...), ... (Tom Lane, 2014-06-18)
  This SQL-standard feature allows a sub-SELECT yielding multiple columns (but only one row) to be used to compute the new values of several columns to be updated. While the same results can be had with an independent sub-SELECT per column, such a workaround can require a great deal of duplicated computation.

  The standard actually says that the source for a multi-column assignment could be any row-valued expression. The implementation used here is tightly tied to our existing sub-SELECT support and can't handle other cases; the Bison grammar would have some issues with them too. However, I don't feel too bad about this since other cases can be converted into sub-SELECTs. For instance, "SET (a,b,c) = row_valued_function(x)" could be written "SET (a,b,c) = (SELECT * FROM row_valued_function(x))".
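
  A short sketch of the multi-column assignment; the table and column names are hypothetical:

      -- Compute several new column values from one correlated sub-SELECT.
      UPDATE accounts a
      SET    (total, last_order, n_orders) =
             (SELECT sum(amount), max(ordered_at), count(*)
              FROM   orders o
              WHERE  o.account_id = a.id);
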
* Avoid recursion when processing simple lists of AND'ed or OR'ed clauses. (Tom Lane, 2014-06-16)
  Since most of the system thinks AND and OR are N-argument expressions anyway, let's have the grammar generate a representation of that form when dealing with input like "x AND y AND z AND ...", rather than generating a deeply-nested binary tree that just has to be flattened later by the planner. This avoids stack overflow in parse analysis when dealing with queries having more than a few thousand such clauses; and in any case it removes some rather unsightly inconsistencies, since some parts of parse analysis were generating N-argument ANDs/ORs already.

  It's still possible to get a stack overflow with weirdly parenthesized input, such as "x AND (y AND (z AND ( ... )))", but such cases are not mainstream usage. The maximum depth of parenthesization is already limited by Bison's stack in such cases, anyway, so that the limit is probably fairly platform-independent.

  Patch originally by Gurjeet Singh, heavily revised by me
* pgindent run for 9.4 (Bruce Momjian, 2014-05-06)
  This includes removing tabs after periods in C comments, which was applied to back branches, so this change should not affect backpatching.
* Make security barrier views automatically updatable (Stephen Frost, 2014-04-12)
  Views which are marked as security_barrier must have their quals applied before any user-defined quals are called, to prevent user-defined functions from being able to see rows which the security barrier view is intended to prevent them from seeing.

  Remove the restriction on security barrier views being automatically updatable by adding a new securityQuals list to the RTE structure which keeps track of the quals from security barrier views at each level, independently of the user-supplied quals. When RTEs are later discovered which have securityQuals populated, they are turned into subquery RTEs which are marked as security_barrier to prevent any user-supplied quals being pushed down (modulo LEAKPROOF quals).

  Dean Rasheed, reviewed by Craig Ringer, Simon Riggs, KaiGai Kohei
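
  A minimal sketch of a security_barrier view that remains automatically updatable; the table, view, and column names are hypothetical:

      CREATE VIEW my_docs WITH (security_barrier) AS
          SELECT * FROM docs WHERE owner = current_user;

      UPDATE my_docs SET title = 'draft' WHERE id = 1;  -- now allowed
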
* Avoid repeated name lookups during table and index DDL. (Robert Haas, 2014-02-17)
  If the name lookups come to different conclusions due to concurrent activity, we might perform some parts of the DDL on a different table than other parts. At least in the case of CREATE INDEX, this can be used to cause the permissions checks to be performed against a different table than the index creation, allowing for a privilege escalation attack.

  This changes the calling convention for DefineIndex, CreateTrigger, transformIndexStmt, transformAlterTableStmt, CheckIndexCompatible (in 9.2 and newer), and AlterTable (in 9.1 and older). In addition, CheckRelationOwnership is removed in 9.2 and newer and the calling convention is changed in older branches. A field has also been added to the Constraint node (FkConstraint in 8.4). Third-party code calling these functions or using the Constraint node will require updating.

  Report by Andres Freund. Patch by Robert Haas and Andres Freund, reviewed by Tom Lane.

  Security: CVE-2014-0062
* Separate multixact freezing parameters from xid's (Alvaro Herrera, 2014-02-13)
  Previously we were piggybacking on transaction ID parameters to freeze multixacts; but since there isn't necessarily any relationship between rates of Xid and multixact consumption, this turns out not to be a good idea. Therefore, we now have multixact-specific freezing parameters:

    vacuum_multixact_freeze_min_age: when to remove multis as we come across them in vacuum (default to 5 million, i.e. early in comparison to Xid's default of 50 million)

    vacuum_multixact_freeze_table_age: when to force whole-table scans instead of scanning only the pages marked as not all visible in visibility map (default to 150 million, same as for Xids). Whichever of the two reaches the 150 million mark earlier will cause a whole-table scan.

    autovacuum_multixact_freeze_max_age: when to force emergency, uninterruptible whole-table scans (default to 400 million, double that for Xids).

  This means there shouldn't be more frequent emergency vacuuming than previously, unless multixacts are being used very rapidly.

  Backpatch to 9.3 where multixacts were made to persist enough to require freezing. To avoid an ABI break in 9.3, VacuumStmt has a couple of fields in an unnatural place, and StdRdOptions is split in two so that the newly added fields can go at the end.

  Patch by me, reviewed by Robert Haas, with additional input from Andres Freund and Tom Lane.
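
  A quick SQL sketch of inspecting the new settings, assuming the defaults listed above:

      SHOW vacuum_multixact_freeze_min_age;        -- 5000000
      SHOW vacuum_multixact_freeze_table_age;      -- 150000000
      SHOW autovacuum_multixact_freeze_max_age;    -- 400000000
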
* ALTER TABLESPACE ... MOVE ... OWNED BY (Stephen Frost, 2014-01-23)
  Add the ability to specify the objects to move by who those objects are owned by (as relowner) and change ALL to mean ALL objects. This makes the command always operate against a well-defined set of objects and not have the objects-to-be-moved based on the role of the user running the command.

  Per discussion with Simon and Tom.
* Allow use of "z" flag in our printf calls, and use it where appropriate.Tom Lane2014-01-23
| | | | | | | | | | | | | | | | | | | | | | | | | Since C99, it's been standard for printf and friends to accept a "z" size modifier, meaning "whatever size size_t has". Up to now we've generally dealt with printing size_t values by explicitly casting them to unsigned long and using the "l" modifier; but this is really the wrong thing on platforms where pointers are wider than longs (such as Win64). So let's start using "z" instead. To ensure we can do that on all platforms, teach src/port/snprintf.c to understand "z", and add a configure test to force use of that implementation when the platform's version doesn't handle "z". Having done that, modify a bunch of places that were using the unsigned-long hack to use "z" instead. This patch doesn't pretend to have gotten everyplace that could benefit, but it catches many of them. I made an effort in particular to ensure that all uses of the same error message text were updated together, so as not to increase the number of translatable strings. It's possible that this change will result in format-string warnings from pre-C99 compilers. We might have to reconsider if there are any popular compilers that will warn about this; but let's start by seeing what the buildfarm thinks. Andres Freund, with a little additional work by me
* Add CREATE TABLESPACE ... WITH ... Options (Stephen Frost, 2014-01-18)
  Tablespaces have a few options which can be set on them to give PG hints as to how the tablespace behaves (perhaps it's faster for sequential scans, or better able to handle random access, etc). These options were only available through the ALTER TABLESPACE command.

  This adds the ability to set these options at CREATE TABLESPACE time, removing the need to do both a CREATE TABLESPACE and ALTER TABLESPACE to get the correct options set on the tablespace.

  Vik Fearing, reviewed by Michael Paquier.
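
  A minimal sketch of setting planner cost options at creation time; the tablespace name and path are hypothetical:

      CREATE TABLESPACE fast_ssd
          LOCATION '/mnt/ssd/pgdata'
          WITH (random_page_cost = 1.1, seq_page_cost = 1.0);
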
* Add ALTER TABLESPACE ... MOVE command (Stephen Frost, 2014-01-18)
  This adds a 'MOVE' sub-command to ALTER TABLESPACE which allows moving sets of objects from one tablespace to another. This can be extremely handy and avoids a lot of error-prone scripting. ALTER TABLESPACE ... MOVE will only move objects the user owns, will notify the user if no objects were found, and can be used to move ALL objects or specific types of objects (TABLES, INDEXES, or MATERIALIZED VIEWS).
* Suppress Coverity complaints in readfuncs.c. (Heikki Linnakangas, 2014-01-16)
  Coverity is complaining that the value returned by pg_strtok in READ_LOCATION_FIELD and READ_BITMAPSET_FIELD macros is not used. In commit 39bfc94c86f1990e9db8ea3da0e82995cc1b76db, we did this to the other macros to placate compilers that complained when the variable was completely unused; this extends that to the last remaining macros.
* Update copyright for 2014 (Bruce Momjian, 2014-01-07)
  Update all files in head, and files COPYRIGHT and legal.sgml in all back branches.
* Support ordered-set (WITHIN GROUP) aggregates. (Tom Lane, 2013-12-23)
  This patch introduces generic support for ordered-set and hypothetical-set aggregate functions, as well as implementations of the instances defined in SQL:2008 (percentile_cont(), percentile_disc(), rank(), dense_rank(), percent_rank(), cume_dist()). We also added mode() though it is not in the spec, as well as versions of percentile_cont() and percentile_disc() that can compute multiple percentile values in one pass over the data.

  Unlike the original submission, this patch puts full control of the sorting process in the hands of the aggregate's support functions. To allow the support functions to find out how they're supposed to sort, a new API function AggGetAggref() is added to nodeAgg.c. This allows retrieval of the aggregate call's Aggref node, which may have other uses beyond the immediate need. There is also support for ordered-set aggregates to install cleanup callback functions, so that they can be sure that infrastructure such as tuplesort objects gets cleaned up.

  In passing, make some fixes in the recently-added support for variadic aggregates, and make some editorial adjustments in the recent FILTER additions for aggregates. Also, simplify use of IsBinaryCoercible() by allowing it to succeed whenever the target type is ANY or ANYELEMENT. It was inconsistent that it dealt with other polymorphic target types but not these.

  Atri Sharma and Andrew Gierth; reviewed by Pavel Stehule and Vik Fearing, and rather heavily editorialized upon by Tom Lane
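
  A short sketch of the new aggregates; the table and column names are hypothetical:

      -- Ordered-set aggregates, including the multi-percentile variant.
      SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY response_ms),
             percentile_disc(ARRAY[0.9, 0.99]) WITHIN GROUP (ORDER BY response_ms),
             mode() WITHIN GROUP (ORDER BY status)
      FROM   requests;

      -- A hypothetical-set aggregate: where would 1500 rank?
      SELECT rank(1500) WITHIN GROUP (ORDER BY response_ms) FROM requests;
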
* Add ALTER SYSTEM command to edit the server configuration file. (Tatsuo Ishii, 2013-12-18)
  Patch contributed by Amit Kapila. Reviewed by Hari Babu, Masao Fujii, Boszormenyi Zoltan, Andres Freund, Greg Smith and others.
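
  A minimal usage sketch; the setting chosen here is just an example:

      -- Persist a setting in the server's auto-generated configuration file,
      -- then reload so it takes effect.
      ALTER SYSTEM SET work_mem = '64MB';
      SELECT pg_reload_conf();
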
* Rename TABLE() to ROWS FROM(). (Noah Misch, 2013-12-10)
  SQL-standard TABLE() is a subset of UNNEST(); they deal with arrays and other collection types. This feature, however, deals with set-returning functions. Use a different syntax for this feature to keep open the possibility of implementing the standard TABLE().
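
  A short sketch of the renamed syntax; the functions and aliases are hypothetical:

      SELECT * FROM ROWS FROM (generate_series(1, 3),
                               unnest(ARRAY['a', 'b'])) AS t(n, label);
      -- rows: (1,a) (2,b) (3,NULL)
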
* Support multi-argument UNNEST(), and TABLE() syntax for multiple functions. (Tom Lane, 2013-11-21)
  This patch adds the ability to write TABLE( function1(), function2(), ...) as a single FROM-clause entry. The result is the concatenation of the first row from each function, followed by the second row from each function, etc; with NULLs inserted if any function produces fewer rows than others. This is believed to be a much more useful behavior than what Postgres currently does with multiple SRFs in a SELECT list.

  This syntax also provides a reasonable way to combine use of column definition lists with WITH ORDINALITY: put the column definition list inside TABLE(), where it's clear that it doesn't control the ordinality column as well.

  Also implement SQL-compliant multiple-argument UNNEST(), by turning UNNEST(a,b,c) into TABLE(unnest(a), unnest(b), unnest(c)).

  The SQL standard specifies TABLE() with only a single function, not multiple functions, and it seems to require an implicit UNNEST() which is not what this patch does. There may be something wrong with that reading of the spec, though, because if it's right then the spec's TABLE() is just a pointless alternative spelling of UNNEST(). After further review of that, we might choose to adopt a different syntax for what this patch does, but in any case this functionality seems clearly worthwhile.

  Andrew Gierth, reviewed by Zoltán Böszörményi and Heikki Linnakangas, and significantly revised by me
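
  A short sketch of multi-argument UNNEST() in FROM, with the shorter array NULL-padded; the values and aliases are hypothetical:

      SELECT *
      FROM   unnest(ARRAY[1, 2],
                    ARRAY['a', 'b', 'c']) WITH ORDINALITY AS u(n, label, ord);
      -- rows: (1,a,1) (2,b,2) (NULL,c,3)
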
* Fix incorrect loop counts in tidbitmap.c. (Tom Lane, 2013-11-15)
  A couple of places that should have been iterating over WORDS_PER_CHUNK words were iterating over WORDS_PER_PAGE words instead. This thinko accidentally failed to fail, because (at least on common architectures with default BLCKSZ) WORDS_PER_CHUNK is a bit less than WORDS_PER_PAGE, and the extra words being looked at were always zero so nothing happened. Still, it's a bug waiting to happen if anybody ever fools with the parameters affecting TIDBitmap sizes, and it's a small waste of cycles too. So back-patch to all active branches.

  Etsuro Fujita
* Compute correct em_nullable_relids in get_eclass_for_sort_expr(). (Tom Lane, 2013-11-15)
  Bug #8591 from Claudio Freire demonstrates that get_eclass_for_sort_expr must be able to compute valid em_nullable_relids for any new equivalence class members it creates. I'd worried about this in the commit message for db9f0e1d9a4a0842c814a464cdc9758c3f20b96c, but claimed that it wasn't a problem because multi-member ECs should already exist when it runs. That is transparently wrong, though, because this function is also called by initialize_mergeclause_eclasses, which runs during deconstruct_jointree. The example given in the bug report (which the new regression test item is based upon) fails because the COALESCE() expression is first seen by initialize_mergeclause_eclasses rather than process_equivalence.

  Fixing this requires passing the appropriate nullable_relids set to get_eclass_for_sort_expr, and it requires new code to compute that set for top-level expressions such as ORDER BY, GROUP BY, etc. We store the top-level nullable_relids in a new field in PlannerInfo to avoid computing it many times. In the back branches, I've added the new field at the end of the struct to minimize ABI breakage for planner plugins. There doesn't seem to be a good alternative to changing get_eclass_for_sort_expr's API signature, though. There probably aren't any third-party extensions calling that function directly; moreover, if there are, they probably need to think about what to pass for nullable_relids anyway.

  Back-patch to 9.2, like the previous patch in this area.
* Fix whitespace issues found by git diff --check, add gitattributes (Peter Eisentraut, 2013-11-10)
  Set per-file-type attributes in .gitattributes to fine-tune whitespace checks. With the associated cleanups, the tree is now clean for git.
* Add the notion of REPLICA IDENTITY for a table. (Robert Haas, 2013-11-08)
  Pending patches for logical replication will use this to determine which columns of a tuple ought to be considered as its candidate key.

  Andres Freund, with minor, mostly cosmetic adjustments by me
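
  A minimal sketch of choosing the replica identity; the table and index names are hypothetical:

      -- Identify rows by a unique index, or log whole rows if no suitable key exists.
      ALTER TABLE orders REPLICA IDENTITY USING INDEX orders_pkey;
      ALTER TABLE audit_log REPLICA IDENTITY FULL;
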
* Use appendStringInfoString instead of appendStringInfo where possible. (Robert Haas, 2013-10-31)
  This shaves a few cycles, and generally seems like good programming practice.

  David Rowley
* In bms_add_member(), use repalloc() if the bms needs to be enlarged. (Heikki Linnakangas, 2013-09-30)
  Previously bms_add_member() would palloc a whole new copy of the existing set, copy the words, and pfree the old one. repalloc() is potentially much faster, and more importantly, this is less surprising if CurrentMemoryContext is not the same as the context the old set is in. bms_add_member() still allocates a new bitmapset in CurrentMemoryContext if NULL is passed as argument, but that is a lot less likely to induce bugs.

  Nicholas White.