path: root/src/backend
Commit message  Author  Age
* Fix the raw-parsetree representation of star (as in SELECT * FROM or  Tom Lane  2008-08-30
    SELECT foo.*) so that it cannot be confused with a quoted identifier "*". Instead create a separate node type A_Star to represent this notation. Per pgsql-hackers discussion of 2007-Sep-27.
* In GCC-based builds, use a better newNode() macro that relies on GCC-specific  Tom Lane  2008-08-29
    syntax to avoid a useless store into a global variable. Per experimentation, this works better than my original thought of trying to push the code into an out-of-line subroutine.
* Suppress gcc warning about possibly-uninitialized variable. It's not  Tom Lane  2008-08-29
    clear to me why I'd not seen this message before --- on F-9 it seems to only happen if Asserts are disabled, which ought to be irrelevant. Maybe that affects a decision whether to inline get_ten(), which would be needed to expose the warning condition to the compiler? Anyway, the fix is clear.
* Remove all traces that suggest that a non-Bison yacc might be supported, and  Peter Eisentraut  2008-08-29
    change build system to use only Bison. Simplify build rules, make file names uniform. Don't build the token table header file where it is not needed.
* Extend the parser location infrastructure to include a location field in  Tom Lane  2008-08-28
    most node types used in expression trees (both before and after parse analysis). This allows us to place an error cursor in many situations where we formerly could not, because the information wasn't available beyond the very first level of parse analysis. There's a fair amount of work still to be done to persuade individual ereport() calls to actually include an error location, but this gets the initdb-forcing part of the work out of the way; and the situation is already markedly better than before for complaints about unimplementable implicit casts, such as CASE and UNION constructs with incompatible alternative data types. Per my proposal of a few days ago.
* Teach eval_const_expressions() to simplify an ArrayCoerceExpr to a constant  Tom Lane  2008-08-26
    when its input is constant and the element coercion function is immutable (or nonexistent, ie, binary-coercible case). This is an oversight in the 8.3 implementation of ArrayCoerceExpr, and its result is that certain cases involving IN or NOT IN with constants don't get optimized as they should be. Per experimentation with an example from Ow Mun Heng.
* Move exprType(), exprTypmod(), expression_tree_walker(), and related routines  Tom Lane  2008-08-25
    into nodes/nodeFuncs, so as to reduce wanton cross-subsystem #includes inside the backend. There's probably more that should be done along this line, but this is a start anyway.
* Get rid of the last remaining uses of var_is_rel(), to wit some debugging  Tom Lane  2008-08-25
    checks in ExecIndexBuildScanKeys() that were inadequate anyway: it's better to verify the correct varno on an expected index key, not just reject OUTER and INNER. This makes the entire current contents of nodeFuncs.c dead code. I'll be replacing it with some other stuff later, as per recent proposal.
* Unconditionally write the statsfile when SIGHUP is received, to minimize  Magnus Hagander  2008-08-25
    the window during which backends have no statistics file to read.
* Update URL to Ross Williams' paper.  Alvaro Herrera  2008-08-25
    Devrim Gunduz.
* Make stats_temp_directory PGC_SIGHUP, and document how it may cause a temporary  Magnus Hagander  2008-08-25
    "outage" of the statistics views. This requires making the stats collector respond to SIGHUP, like the other utility processes already did.
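    A hedged illustration of reloading the setting once it has been pointed at a RAM-backed location in postgresql.conf (the path is made up for this sketch; pg_reload_conf() is the standard reload function):
        -- postgresql.conf (illustrative path only): stats_temp_directory = '/mnt/ramdisk/pg_stat_tmp'
        SELECT pg_reload_conf();
        SHOW stats_temp_directory;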
* Convert remaining builtin set-returning functions to use OUT parameters, making  Magnus Hagander  2008-08-25
    it possible to call them without specifying a column list.
    Jaime Casanova
* Add missing descriptions for aggregates, functions and conversions.  Bruce Momjian  2008-08-23
    Bernd Helmle
* Fix possible duplicate tuples during a GiST scan. Now the page is processed  Teodor Sigaev  2008-08-23
    at once and ItemPointers are collected in memory. Remove tuple killing by killtuple() if the tuple was moved to another page - it could produce unacceptable overhead. Backpatch up to 8.1 because the bug was introduced by GiST's concurrency support.
* Make "log_temp_files" super-user set only, like other logging options.Bruce Momjian2008-08-22
| | | | Simon Riggs
* Minor patch on pgbench  Bruce Momjian  2008-08-22
    1. The -i option should run vacuum analyze only on pgbench tables, not *all* tables in the database.
    2. The pre-run cleanup step was DELETE FROM HISTORY then VACUUM HISTORY. This is just a slow version of TRUNCATE HISTORY.
    Simon Riggs
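    For illustration, the equivalence the second point relies on (history is the pgbench accounting table named HISTORY above; both forms empty it, but TRUNCATE is far cheaper):
        -- old cleanup, slow:
        DELETE FROM history; VACUUM history;
        -- new cleanup:
        TRUNCATE history;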
* Improve wording of error message when a postgresql.conf setting is  Bruce Momjian  2008-08-22
    ignored because it can only be set at server start.
* Arrange to convert EXISTS subqueries that are equivalent to hashable IN  Tom Lane  2008-08-22
    subqueries into the same thing you'd have gotten from IN (except always with unknownEqFalse = true, so as to get the proper semantics for an EXISTS). I believe this fixes the last case within CVS HEAD in which an EXISTS could give worse performance than an equivalent IN subquery. The tricky part of this is that if the upper query probes the EXISTS for only a few rows, the hashing implementation can actually be worse than the default, and therefore we need to make a cost-based decision about which way to use. But at the time when the planner generates plans for subqueries, it doesn't really know how many times the subquery will be executed. The least invasive solution seems to be to generate both plans and postpone the choice until execution. Therefore, in a query that has been optimized this way, EXPLAIN will show two subplans for the EXISTS, of which only one will actually get executed. There is a lot more that could be done based on this infrastructure: in particular it's interesting to consider switching to the hash plan if we start out using the non-hashed plan but find a lot more upper rows going by than we expected. I have therefore left some minor inefficiencies in place, such as initializing both subplans even though we will currently only use one.
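    A minimal sketch of the pattern this targets, using hypothetical tables orders and order_lines (not from the commit); after this change, EXPLAIN on such a query can show two subplans for the EXISTS, with only one actually executed:
        -- hypothetical tables, for illustration only
        SELECT o.id
        FROM orders o
        WHERE EXISTS (SELECT 1 FROM order_lines l WHERE l.order_id = o.id);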
* Marginal improvement in sublink planning: allow unknownEqFalse optimization  Tom Lane  2008-08-20
    to be used for SubLinks that are underneath a top-level OR clause. Just as at the very top level of WHERE, it's not necessary to be accurate about whether the sublink returns FALSE or NULL, because either result has the same impact on whether the WHERE will succeed.
* Fix obsolete comment. It's no longer the case that Param nodes don't  Tom Lane  2008-08-20
    carry typmod.
* Cause the output from debug_print_parse, debug_print_rewritten, and  Tom Lane  2008-08-19
    debug_print_plan to appear at LOG message level, not DEBUG1 as historically. Make debug_pretty_print default to on. Also, cause plans generated via EXPLAIN to be subject to debug_print_plan. This is all to make debug_print_plan a reasonably comfortable substitute for the former behavior of EXPLAIN VERBOSE.
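    An illustrative way to view this output from a session (both GUCs are standard; setting client_min_messages to log is just a convenience assumption here, since the plan also goes to the server log):
        SET debug_print_plan = on;
        SET client_min_messages = log;  -- so the LOG-level plan dump is echoed to the client
        SELECT 2 + 2;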
* Add some defenses against constant-FALSE outer join conditions. Since  Tom Lane  2008-08-17
    eval_const_expressions will generally throw away anything that's ANDed with constant FALSE, what we're left with given an example like
        select * from tenk1 a where (unique1,0) in (select unique2,1 from tenk1 b);
    is a cartesian product computation, which is really not acceptable. This is a regression in CVS HEAD compared to previous releases, which were able to notice the impossible join condition in this case --- though not in some related cases that are also improved by this patch, such as
        select * from tenk1 a left join tenk1 b on (a.unique1=b.unique2 and 0=1);
    Fix by skipping evaluation of the appropriate side of the outer join in cases where it's demonstrably unnecessary.
* Remove prohibition against SubLinks in the WHERE clause of an EXISTS subquery  Tom Lane  2008-08-17
    that we're considering pulling up. I hadn't wanted to think through whether that could work during the first pass at this stuff. However, on closer inspection it seems to be safe enough.
* Improve sublink pullup code to handle ANY/EXISTS sublinks that are at top  Tom Lane  2008-08-17
    level of a JOIN/ON clause, not only at top level of WHERE. (However, we can't do this in an outer join's ON clause, unless the ANY/EXISTS refers only to the nullable side of the outer join, so that it can effectively be pushed down into the nullable side.) Per request from Kevin Grittner. In passing, fix a bug in the initial implementation of EXISTS pullup: it would Assert if the EXISTS's WHERE clause used a join alias variable. Since we haven't yet flattened join aliases when this transformation happens, it's necessary to include join relids in the computed set of RHS relids.
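    A sketch of the newly handled placement, with hypothetical tables a, b and c (not from the commit); the EXISTS sits at the top level of the JOIN/ON condition rather than in WHERE:
        -- hypothetical tables, for illustration only
        SELECT *
        FROM a JOIN b ON a.x = b.x AND EXISTS (SELECT 1 FROM c WHERE c.y = a.y);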
* Clean up the loose ends in selectivity estimation left by my patch for semi  Tom Lane  2008-08-16
    and anti joins. To do this, pass the SpecialJoinInfo struct for the current join as an additional optional argument to operator join selectivity estimation functions. This allows the estimator to tell not only what kind of join is being formed, but which variable is on which side of the join; a requirement long recognized but not dealt with till now. This also leaves the door open for future improvements in the estimators, such as accounting for the null-insertion effects of lower outer joins. I didn't do anything about that in the current patch but the information is in principle deducible from what's passed. The patch also clarifies the definition of join selectivity for semi/anti joins: it's the fraction of the left input that has (at least one) match in the right input. This allows getting rid of some very fuzzy thinking that I had committed in the original 7.4-era IN-optimization patch. There's probably room to estimate this better than the present patch does, but at least we know what to estimate. Since I had to touch CREATE OPERATOR anyway to allow a variant signature for join estimator functions, I took the opportunity to add a couple of additional checks that were missing, per my recent message to -hackers:
      * Check that estimator functions return float8;
      * Require execute permission at the time of CREATE OPERATOR on the operator's function as well as the estimator functions;
      * Require ownership of any pre-existing operator that's modified by the command.
    I also moved the lookup of the functions out of OperatorCreate() and into operatorcmds.c, since that seemed more consistent with most of the other catalog object creation processes, eg CREATE TYPE.
* Performance fix for new anti-join code in nodeMergejoin.c: after finding a  Tom Lane  2008-08-15
    match in antijoin mode, we should advance to next outer tuple not next inner. We know we don't want to return this outer tuple, and there is no point in advancing over matching inner tuples now, because we'd just have to do it again if the next outer tuple has the same merge key. This makes a noticeable difference if there are lots of duplicate keys in both inputs. Similarly, after finding a match in semijoin mode, arrange to advance to the next outer tuple after returning the current match; or immediately, if it fails the extra quals. The rationale is the same. (This is a performance bug in existing releases; perhaps worth back-patching? The planner tries to avoid using mergejoin with lots of duplicates, so it may not be a big issue in practice.) Nestloop and hash got this right to start with, but I made some cosmetic adjustments there to make the corresponding bits of logic look more similar.
* Make the temporary directory for pgstat files configurable by the GUC  Magnus Hagander  2008-08-15
    variable stats_temp_directory, instead of requiring the admin to mount/symlink the pg_stat_tmp directory manually. For now the config variable is PGC_POSTMASTER. Room for further improvement that would allow it to be changed on-the-fly.
* Fix pull_up_simple_union_all to copy all rtable entries from child subquery to  Heikki Linnakangas  2008-08-14
    parent, not only those with RangeTblRefs. We need them in ExecCheckRTPerms. Report by Brendan O'Shea. Back-patch to 8.2, where pull_up_simple_union_all was introduced.
* Implement SEMI and ANTI joins in the planner and executor. (Semijoins replace  Tom Lane  2008-08-14
    the old JOIN_IN code, but antijoins are new functionality.) Teach the planner to convert appropriate EXISTS and NOT EXISTS subqueries into semi and anti joins respectively. Also, LEFT JOINs with suitable upper-level IS NULL filters are recognized as being anti joins. Unify the InClauseInfo and OuterJoinInfo infrastructure into "SpecialJoinInfo". With that change, it becomes possible to associate a SpecialJoinInfo with every join attempt, which permits some cleanup of join selectivity estimation. That needs to be taken much further than this patch does, but the next step is to change the API for oprjoin selectivity functions, which seems like material for a separate patch. So for the moment the output size estimates for semi and especially anti joins are quite bogus.
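    For illustration, the two query shapes described above that can now become an anti join, written against hypothetical tables a and b (not from the commit); both ask for rows of a with no match in b:
        -- hypothetical tables, for illustration only
        SELECT a.id FROM a WHERE NOT EXISTS (SELECT 1 FROM b WHERE b.a_id = a.id);
        SELECT a.id FROM a LEFT JOIN b ON b.a_id = a.id WHERE b.a_id IS NULL;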
* Have autovacuum consider processing TOAST tables separately from their  Alvaro Herrera  2008-08-13
    main tables. This requires vacuum() to accept processing a toast table standalone, so there's a user-visible change in that it's now possible (for a superuser) to execute "VACUUM pg_toast.pg_toast_XXX".
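    A hedged sketch of how a superuser might use this; my_table is hypothetical, and the XXX suffix (typically the owning table's OID) is looked up rather than guessed:
        -- find the toast table behind a given table (my_table is illustrative)
        SELECT reltoastrelid::regclass FROM pg_class WHERE relname = 'my_table';
        -- then, as superuser:
        VACUUM pg_toast.pg_toast_XXX;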
* Introduce the concept of relation forks. An smgr relation can now consist  Heikki Linnakangas  2008-08-11
    of multiple forks, and each fork can be created and grown separately. The bulk of this patch is about changing the smgr API to include an extra ForkNumber argument in every smgr function. Also, smgrscheduleunlink and smgrdounlink no longer implicitly call smgrclose, because other forks might still exist after unlinking one. The callers of those functions have been modified to call smgrclose instead. This patch in itself doesn't have any user-visible effect, but provides the infrastructure needed for upcoming patches. The additional forks envisioned are a rewritten FSM implementation that doesn't rely on a fixed-size shared memory block, and a visibility map to allow skipping portions of a table in VACUUM that have no dead tuples.
* Fix corner-case bug introduced with HOT: if REINDEX TABLE pg_class (or a  Tom Lane  2008-08-10
    REINDEX DATABASE including same) is done before a session has done any other update on pg_class, the pg_class relcache entry was left with an incorrect setting of rd_indexattr, because the indexed-attributes set would be first demanded at a time when we'd forced a partial list of indexes into the pg_class entry, and it would remain cached after that. This could result in incorrect decisions about HOT-update safety later in the same session. In practice, since only pg_class_relname_nsp_index would be missed out, only ALTER TABLE RENAME and ALTER TABLE SET SCHEMA could trigger a problem. Per report and test case from Ondrej Jirman.
* Install checks in executor startup to ensure that the tuples produced by an  Tom Lane  2008-08-08
    INSERT or UPDATE will match the target table's current rowtype. In pre-8.3 releases inconsistency can arise with stale cached plans, as reported by Merlin Moncure. (We patched the equivalent hazard on the SELECT side in Feb 2007; I'm not sure why we thought there was no risk on the insertion side.) In 8.3 and HEAD this problem should be impossible due to plan cache invalidation management, but it seems prudent to make the check anyway. Back-patch as far as 8.0. 7.x versions lack ALTER COLUMN TYPE, so there seems no way to abuse a stale plan comparably.
* Improve INTERSECT/EXCEPT hashing by realizing that we don't need to make any  Tom Lane  2008-08-07
    hashtable entries for tuples that are found only in the second input: they can never contribute to the output. Furthermore, this implies that the planner should endeavor to put first the smaller (in number of groups) input relation for an INTERSECT. Implement that, and upgrade prepunion's estimation of the number of rows returned by setops so that there's some amount of sanity in the estimate of which one is smaller.
* Support hashing for duplicate-elimination in INTERSECT and EXCEPT queries.  Tom Lane  2008-08-07
    This completes my project of improving usage of hashing for duplicate elimination (aggregate functions with DISTINCT remain undone, but that's for some other day). As with the previous patches, this means we can INTERSECT/EXCEPT on datatypes that can hash but not sort, and it means that INTERSECT/EXCEPT without ORDER BY are no longer certain to produce sorted output.
* Teach the system how to use hashing for UNION. (INTERSECT/EXCEPT will follow,  Tom Lane  2008-08-07
    but seem like a separate patch since most of the remaining work is on the executor side.) I took the opportunity to push selection of the grouping operators for set operations into the parser where it belongs. Otherwise this is just a small exercise in making prepunion.c consider both alternatives. As with the recent DISTINCT patch, this means we can UNION on datatypes that can hash but not sort, and it means that UNION without ORDER BY is no longer certain to produce sorted output.
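    A small illustration of the user-visible side effect noted above, using the regression table tenk1 referenced elsewhere in this log: without an ORDER BY, the UNION result may now come back in hash order.
        -- add ORDER BY if sorted output is required
        SELECT two FROM tenk1 UNION SELECT four FROM tenk1 ORDER BY 1;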
* Do not allow Unique nodes to be scanned backwards. The code claimed that it  Tom Lane  2008-08-05
    would work, but in fact it didn't return the same rows when moving backwards as when moving forwards. This would have no visible effect in a DISTINCT query (at least assuming the column datatypes use a strong definition of equality), but it gave entirely wrong answers for DISTINCT ON queries.
* Department of second thoughts: fix newly-added code in planner.c to make real  Tom Lane  2008-08-05
    sure that DISTINCT ON does what it's supposed to, ie, sort by the full ORDER BY list before unique-ifying. The error seems masked in simple cases by the fact that query_planner won't return query pathkeys that only partially match the requested sort order, but I wouldn't want to bet that it couldn't be exposed in some way or other.
* In ReadOrZeroBuffer (and related entry points), don't bother to call  Tom Lane  2008-08-05
    PageHeaderIsValid when we zero the buffer instead of reading the page in. The actual performance improvement is probably marginal since this function isn't very heavily used, but a cycle saved is a cycle earned.
    Zdenek Kotala
* Move pgstat.tmp into a temporary directory under $PGDATA named pg_stat_tmp.  Magnus Hagander  2008-08-05
    This allows the use of a ramdrive (either through mount or symlink) for the temporary file that's written every half second, which should reduce I/O. On server shutdown/startup, the file is written to the old location in the global directory, to preserve data across restarts. Bump catversion since the $PGDATA directory layout changed.
* Improve SELECT DISTINCT to consider hash aggregation, as well as sort/uniq,  Tom Lane  2008-08-05
    as methods for implementing the DISTINCT step. This eliminates the former performance gap between DISTINCT and GROUP BY, and also makes it possible to do SELECT DISTINCT on datatypes that only support hashing not sorting. SELECT DISTINCT ON is still always implemented by sorting; it would take executor changes to support hashing that, and it's not clear it's worth the trouble. This is a release-note-worthy incompatibility from previous PG versions, since SELECT DISTINCT can no longer be counted on to deliver sorted output without explicitly saying ORDER BY. (Anyone who can't cope with that can consider turning off enable_hashagg.) Several regression test queries needed to have ORDER BY added to preserve stable output order. I fixed the ones that manifested here, but there might be some other cases that show up on other platforms.
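    Two illustrative ways to cope with the incompatibility described above (tenk1 and its ten column are regression-test fixtures referenced elsewhere in this log):
        -- state the ordering explicitly when it matters
        SELECT DISTINCT ten FROM tenk1 ORDER BY ten;
        -- or fall back to the old sort-based behaviour for a session
        SET enable_hashagg = off;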
* Improve CREATE/DROP/RENAME DATABASE so that when failing because the source  Tom Lane  2008-08-04
    or target database is being accessed by other users, it tells you whether the "other users" are live sessions or uncommitted prepared transactions. (Indeed, it tells you exactly how many of each, but that's mostly just because it was easy to do so.) This should help forestall the gotcha of not realizing that a prepared transaction is what's blocking the command. Per discussion.
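    If a command is blocked this way, the standard pg_prepared_xacts view shows any uncommitted prepared transactions and the databases they belong to, for example:
        SELECT gid, owner, database FROM pg_prepared_xacts;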
* Make GROUP BY work properly for datatypes that only support hashing and not  Tom Lane  2008-08-03
    sorting. The infrastructure for this was all in place already; it's only necessary to fix the planner to not assume that sorting is always an available option.
* Tighten up the sanity checks in TypeCreate(): pass-by-value types must have  Tom Lane  2008-08-03
    a size that is one of the supported values, not just anything <= sizeof(Datum). Cross-check the alignment specification against size as well.
* Rearrange the querytree representation of ORDER BY/GROUP BY/DISTINCT items  Tom Lane  2008-08-02
    as per my recent proposal:
    1. Fold SortClause and GroupClause into a single node type SortGroupClause. We were already relying on them to be struct-equivalent, so using two node tags wasn't accomplishing much except to get in the way of comparing items with equal().
    2. Add an "eqop" field to SortGroupClause to carry the associated equality operator. This is cheap for the parser to get at the same time it's looking up the sort operator, and storing it eliminates the need for repeated not-so-cheap lookups during planning. In future this will also let us represent GROUP/DISTINCT operations on datatypes that have hash opclasses but no btree opclasses (ie, they have equality but no natural sort order). The previous representation simply didn't work for that, since its only indicator of comparison semantics was a sort operator.
    3. Add a hasDistinctOn boolean to struct Query to explicitly record whether the distinctClause came from DISTINCT or DISTINCT ON. This allows removing some complicated and not 100% bulletproof code that attempted to figure that out from the distinctClause alone.
    This patch doesn't in itself create any new capability, but it's necessary infrastructure for future attempts to use hash-based grouping for DISTINCT and UNION/INTERSECT/EXCEPT.
* Add a few more DTrace probes to the backend.  Alvaro Herrera  2008-08-01
    Robert Lor
* Rearrange the code in auth.c so that all functions for a single authentication  Magnus Hagander  2008-08-01
    method are grouped together in a reasonably similar way, keeping the "global shared functions" together in their own section as well. Makes it a lot easier to find your way around the code.
* Move ident authentication code into auth.c along with the other authentication  Magnus Hagander  2008-08-01
    routines, leaving hba.c to deal only with processing the HBA specific files.
* Fix parser so that we don't modify the user-written ORDER BY list in order  Tom Lane  2008-07-31
    to represent DISTINCT or DISTINCT ON. This gets rid of a longstanding annoyance that a view or rule using SELECT DISTINCT will be dumped out with an overspecified ORDER BY list, and is one small step along the way to decoupling DISTINCT and ORDER BY enough so that hash-based implementation of DISTINCT will be possible. In passing, improve transformDistinctClause so that it doesn't reject duplicate DISTINCT ON items, as was reported by Steve Midgley a couple weeks ago.
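    A small sketch of the duplicate-item case that is no longer rejected (tenk1 is the regression table referenced elsewhere in this log); the repeated DISTINCT ON expression is now simply accepted:
        SELECT DISTINCT ON (two, two) two, ten FROM tenk1 ORDER BY two, ten;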
* Require superuser privilege to create base types (but not composites, enums,  Tom Lane  2008-07-31
    or domains). This was already effectively required because you had to own the I/O functions, and the I/O functions pretty much have to be written in C since we don't let PL functions take or return cstring. But given the possible security consequences of a malicious type definition, it seems prudent to enforce superuser requirement directly. Per recent discussion.