path: root/src/backend/executor
...
* Fix several memory leaks when rescanning SRFs. (Neil Conway, 2008-02-29)
  Arrange for an SRF's "multi_call_ctx" to be a distinct sub-context of the EState's per-query context, and delete the multi_call_ctx as soon as the SRF finishes execution. This avoids leaking SRF memory until the end of the current query, which is particularly egregious when the SRF is scanned multiple times. This change also fixes a leak of the fields of the AttInMetadata struct in shutdown_MultiFuncCall().
  Also fix a leak of the SRF result TupleDesc when rescanning a FunctionScan node. The TupleDesc is allocated in the per-query context for every call to ExecMakeTableFunctionResult(), so we should free it after calling that function. Since the SRF might choose to return a non-expendable TupleDesc, we only free the TupleDesc if it is not being reference-counted.
  Backpatch to 8.3 and 8.2 stable branches.
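  For illustration, a minimal sketch (not taken from the commit) of how a C set-returning function uses the multi-call context that this fix reparents and now frees as soon as the SRF completes; the function name and the constant row count are made up:

      #include "postgres.h"
      #include "fmgr.h"
      #include "funcapi.h"

      PG_FUNCTION_INFO_V1(three_ints);

      /* Return the integers 0, 1, 2 as a set.  State that must survive across
       * calls is allocated in multi_call_memory_ctx; with this fix that context
       * is a child of the per-query context and is deleted when the SRF ends,
       * instead of lingering until end of query. */
      Datum
      three_ints(PG_FUNCTION_ARGS)
      {
          FuncCallContext *funcctx;

          if (SRF_IS_FIRSTCALL())
          {
              MemoryContext oldctx;

              funcctx = SRF_FIRSTCALL_INIT();
              oldctx = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
              funcctx->max_calls = 3;   /* cross-call state belongs in this context */
              MemoryContextSwitchTo(oldctx);
          }

          funcctx = SRF_PERCALL_SETUP();

          if (funcctx->call_cntr < funcctx->max_calls)
              SRF_RETURN_NEXT(funcctx, Int32GetDatum((int32) funcctx->call_cntr));

          SRF_RETURN_DONE(funcctx);
      }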
* Refactor backend makefiles to remove lots of duplicate code. (Peter Eisentraut, 2008-02-19)
* Fix SPI_cursor_open() and SPI_is_cursor_plan() to push the SPI stack before doing anything interesting, such as calling RevalidateCachedPlan(). (Tom Lane, 2008-02-12)
  The necessity of this is demonstrated by an example from Willem Buitendyk: during a replan, the planner might try to evaluate SPI-using functions, and so we'd better be in a clean SPI context.
  A small downside of this fix is that these two functions will now fail outright if called when not inside a SPI-using procedure (ie, a SPI_connect/SPI_finish pair). The documentation never promised or suggested that that would work, though; and they are normally used in concert with other functions, mainly SPI_prepare, that always have failed in such a case. So the odds of breaking something seem pretty low.
  In passing, make SPI_is_cursor_plan's error handling convention clearer, and fix documentation's erroneous claim that SPI_cursor_open would return NULL on error.
  Before 8.3 these functions could not invoke replanning, so there is probably no need for back-patching.
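  As a reminder of the calling pattern the fixed functions now insist on, here is a rough sketch (assuming the 8.3-era SPI API; the query text and cursor name are illustrative) of opening a cursor from C strictly between SPI_connect and SPI_finish:

      #include "postgres.h"
      #include "executor/spi.h"

      /* Sketch: plan a query and scan it through a SPI cursor.  SPI_cursor_open
       * must be called inside a SPI_connect/SPI_finish pair; after this fix it
       * fails outright otherwise instead of misbehaving during a replan. */
      static void
      scan_with_cursor(void)
      {
          SPIPlanPtr  plan;
          Portal      portal;

          if (SPI_connect() != SPI_OK_CONNECT)
              elog(ERROR, "SPI_connect failed");

          plan = SPI_prepare("SELECT relname FROM pg_class", 0, NULL);
          if (plan == NULL)
              elog(ERROR, "SPI_prepare failed");

          portal = SPI_cursor_open("illustrative_cursor", plan, NULL, NULL, true);

          SPI_cursor_fetch(portal, true, 10);   /* fetch up to 10 rows forward */
          /* ... inspect SPI_processed and SPI_tuptable here ... */

          SPI_cursor_close(portal);
          SPI_finish();
      }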
* Fix CREATE TABLE ... LIKE ... INCLUDING INDEXES to not cause unwanted tablespace permissions failures when copying an index that is in the database's default tablespace. (Tom Lane, 2008-02-07)
  A side-effect of the change is that explicitly specifying the default tablespace no longer triggers a permissions check; this is not how it was done in pre-8.3 releases but is argued to be more consistent. Per bug #3921 from Andrew Gilligan.
  (Note: I argued in the subsequent discussion that maybe LIKE shouldn't copy index tablespaces at all, but since no one indicated agreement with that idea, I've refrained from doing it.)
* The original implementation of polymorphic aggregates didn't really get the checking of argument compatibility right; although the problem is only exposed with multiple-input aggregates in which some arguments are polymorphic and some are not. (Tom Lane, 2008-01-11)
  Per bug #3852 from Sokolov Yura.
* Update copyrights in source tree to 2008. (Bruce Momjian, 2008-01-01)
* Avoid incrementing the CommandCounter when CommandCounterIncrement is called but no database changes have been made since the last CommandCounterIncrement. (Tom Lane, 2007-11-30)
  This should result in a significant improvement in the number of "commands" that can typically be performed within a transaction before hitting the 2^32 CommandId size limit. In particular this buys back (and more) the possible adverse consequences of my previous patch to fix plan caching behavior.
  The implementation requires tracking whether the current CommandCounter value has been "used" to mark any tuples. CommandCounter values stored into snapshots are presumed not to be used for this purpose. This requires some small executor changes, since the executor used to conflate the curcid of the snapshot it was using with the command ID to mark output tuples with. Separating these concepts allows some small simplifications in executor APIs.
  Something for the TODO list: look into having CommandCounterIncrement not do AcceptInvalidationMessages. It seems fairly bogus to be doing it there, but exactly where to do it instead isn't clear, and I'm disinclined to mess with asynchronous behavior during late beta.
* Repair bug that allowed RevalidateCachedPlan to attempt to rebuild a cached plan before the effects of DDL executed in an immediately prior SPI operation had been absorbed. (Tom Lane, 2007-11-30)
  Per report from Chris Wood. This patch has an unpleasant side effect of causing the number of CommandCounterIncrement()s done by a typical plpgsql function to approximately double. Amelioration of the consequences of that will be undertaken in a separate patch.
* Re-run pgindent with updated list of typedefs. (Updated README should avoid this problem in the future.) (Bruce Momjian, 2007-11-15)
* pgindent run for 8.3. (Bruce Momjian, 2007-11-15)
* Tweak new error messages to match the actual syntax of DECLARE CURSOR. (Tom Lane, 2007-10-25)
  (Last night I copied-and-pasted from the WITH HOLD case, but that's wrong because of the bizarrely irregular syntax specified by the standard.)
* Disallow scrolling of FOR UPDATE/FOR SHARE cursors, so as to avoid problems in corner cases such as re-fetching a just-deleted row. (Tom Lane, 2007-10-24)
  We may be able to relax this someday, but let's find out how many people really care before we invest a lot of work in it. Per report from Heikki and subsequent discussion.
  While in the neighborhood, make the combination of INSENSITIVE and FOR UPDATE throw an error, since they are semantically incompatible. (Up to now we've accepted but just ignored the INSENSITIVE option of DECLARE CURSOR.)
* Fix UPDATE/DELETE WHERE CURRENT OF to support repeated update and update-then-delete on the current cursor row. (Tom Lane, 2007-10-24)
  The basic fix is that nodeTidscan.c has to apply heap_get_latest_tid() to the current-scan-TID obtained from the cursor query; this ensures we get the latest row version to work with. However, since that only works if the query plan is a TID scan, we also have to hack the planner to make sure only that type of plan will be selected. (Formerly, the planner might decide to apply a seqscan if the table is very small. This change is probably a Good Thing anyway, since it's hard to see how a seqscan could really win.) That means the execQual.c code to support CurrentOfExpr as a regular expression type is dead code, so replace it with just an elog().
  Also, add regression tests covering these cases. Note that the added tests expose the fact that re-fetching an updated row misbehaves if the cursor used FOR UPDATE. That's an independent bug that should be fixed later. Per report from Dharmendra Goyal.
* HOT updates. (Tom Lane, 2007-09-20)
  When we update a tuple without changing any of its indexed columns, and the new version can be stored on the same heap page, we no longer generate extra index entries for the new version. Instead, index searches follow the HOT-chain links to ensure they find the correct tuple version.
  In addition, this patch introduces the ability to "prune" dead tuples on a per-page basis, without having to do a complete VACUUM pass to recover space. VACUUM is still needed to clean up dead index entries, however.
  Pavan Deolasee, with help from a bunch of other people.
* Redefine the lp_flags field of item pointers as having four states, rather than two independent bits (one of which was never used in heap pages anyway, or at least hadn't been in a very long time). (Tom Lane, 2007-09-12)
  This gives us flexibility to add the HOT notions of redirected and dead item pointers without requiring anything so klugy as magic values of lp_off and lp_len. The state values are chosen so that for the states currently in use (pre-HOT) there is no change in the physical representation.
* Don't take ProcArrayLock while exiting a transaction that has no XID; there is no need for serialization against snapshot-taking because the xact doesn't affect anyone else's snapshot anyway. (Tom Lane, 2007-09-07)
  Per discussion. Also, move various info about the interlocking of transactions and snapshots out of code comments and into a hopefully-more-cohesive discussion in access/transam/README.
  Also, remove a couple of now-obsolete comments about having to force some WAL to be written to persuade RecordTransactionCommit to do its thing.
* Make eval_const_expressions() preserve typmod when simplifying something like null::char(3) to a simple Const node. (Tom Lane, 2007-09-06)
  (It already worked for non-null values, but not when we skipped evaluation of a strict coercion function.) This prevents loss of typmod knowledge in situations such as exhibited in bug #3598. Unfortunately there seems no good way to fix that bug in 8.1 and 8.2, because they simply don't carry a typmod for a plain Const node.
  In passing I made all the other callers of makeNullConst supply "real" typmod values too, though I think it probably doesn't matter anywhere else.
* Extend whole-row Var evaluation to cope with the case that the sub-plan generating the tuples has resjunk output columns. (Tom Lane, 2007-08-31)
  This is not possible for simple table scans but can happen when evaluating a whole-row Var for a view. Per example from Patryk Kordylewski. The problem exists back to 8.0 but I'm not going to risk back-patching further than 8.2 because of the many changes in this area.
* Make ARRAY(SELECT ...) return an empty array, rather than a NULL, when the sub-select returns zero rows. (Tom Lane, 2007-08-26)
  Per complaint from Jens Schicke. Since this is more in the nature of a definition change than a bug, not back-patched.
* Arrange to cache a ResultRelInfo in the executor's EState for relations that are not one of the query's defined result relations, but nonetheless have triggers fired against them while the query is active. (Tom Lane, 2007-08-15)
  This was formerly impossible but can now occur because of my recent patch to fix the firing order for RI triggers. Caching a ResultRelInfo avoids duplicating work by repeatedly opening and closing the same relation, and also allows EXPLAIN ANALYZE to "see" and report on these extra triggers.
  Use the same mechanism to cache open relations when firing deferred triggers at transaction shutdown; this replaces the former one-element-cache strategy used in that case, and should improve performance a bit when there are deferred triggers on a number of relations.
* Repair problems occurring when multiple RI updates have to be done to the same row within one query: we were firing check triggers before all the updates were done, leading to bogus failures. (Tom Lane, 2007-08-15)
  Fix by making the triggers queued by an RI update go at the end of the outer query's trigger event list, thereby effectively making the processing "breadth-first". This was indeed how it worked pre-8.0, so the bug does not occur in the 7.x branches. Per report from Pavel Stehule.
* Fix a gradual memory leak in ExecReScanAgg(). (Neil Conway, 2007-08-08)
  Because the aggregation hash table is allocated in a child context of the agg node's memory context, MemoryContextReset() will reset but *not* delete the child context. Since ExecReScanAgg() proceeds to build a new hash table from scratch (in a new sub-context), this results in leaking the header for the previous memory context. Therefore, use MemoryContextResetAndDeleteChildren() instead.
  Credit: My colleague Sailesh Krishnamurthy at Truviso for isolating the cause of the leak.
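  The distinction the fix turns on, sketched with throwaway context names (not the actual nodeAgg.c code): resetting a context recycles its children's memory but keeps their headers, while the ...AndDeleteChildren variant drops the children entirely:

      #include "postgres.h"
      #include "utils/memutils.h"

      /* Sketch of the rescan pattern: per-scan state lives in a child of some
       * longer-lived node context.  Plain MemoryContextReset(nodectx) keeps the
       * old child's header around, so building a fresh child on every rescan
       * leaks one header per rescan; deleting the children first avoids that. */
      static MemoryContext
      rebuild_scan_context(MemoryContext nodectx)
      {
          /* drop previous children (and their headers), not just their contents */
          MemoryContextResetAndDeleteChildren(nodectx);

          /* recreate the per-scan child context from scratch */
          return AllocSetContextCreate(nodectx,
                                       "PerRescanHashTable",
                                       ALLOCSET_DEFAULT_MINSIZE,
                                       ALLOCSET_DEFAULT_INITSIZE,
                                       ALLOCSET_DEFAULT_MAXSIZE);
      }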
* If we're gonna use ExecRelationIsTargetRelation here, might as well simplify a bit further. (Tom Lane, 2007-07-31)
* Slight refactor for ExecOpenScanRelation(): we can use ExecRelationIsTargetRelation() to check if the relation is a target rel, rather than scanning through the result relation array ourselves. (Neil Conway, 2007-07-27)
* Revert an ill-considered portion of my patch of 12-Mar, which tried to save a few lines in sql_exec_error_callback() by using the function source string field that the patch added to SQL function cache entries. (Tom Lane, 2007-06-17)
  This doesn't work because the fn_extra field isn't filled in yet during init_sql_fcache(). Probably it could be made to work, but it doesn't seem appropriate to contort the main code paths to make an error-reporting path a tad faster. Per report from Pavel Stehule.
* Improve UPDATE/DELETE WHERE CURRENT OF so that they can be used from plpgsql with a plpgsql-defined cursor. (Tom Lane, 2007-06-11)
  The underlying mechanism for this is that the main SQL engine will now take "WHERE CURRENT OF $n" where $n is a refcursor parameter. Not sure if we should document that fact or consider it an implementation detail. Per discussion with Pavel Stehule.
* Support UPDATE/DELETE WHERE CURRENT OF cursor_name, per SQL standard. (Tom Lane, 2007-06-11)
  Along the way, allow FOR UPDATE in non-WITH-HOLD cursors; there may once have been a reason to disallow that, but it seems to work now, and it's really rather necessary if you want to select a row via a cursor and then update it in a concurrent-safe fashion.
  Original patch by Arul Shaji, rather heavily editorialized by Tom Lane.
* Teach heapam code to know the difference between a real seqscan and the pseudo HeapScanDesc created for a bitmap heap scan. (Tom Lane, 2007-06-09)
  This avoids some useless overhead during a bitmap scan startup, in particular invoking the syncscan code. (We might someday want to do that, but right now it's merely useless contention for shared memory, to say nothing of possibly pushing useful entries out of syncscan's small LRU list.) This also allows elimination of ugly pgstat_discount_heap_scan() kluge.
* Rework temp_tablespaces patch so that temp tablespaces are assigned separately for each temp file, rather than once per sort or hashjoin; this allows spreading the data of a large sort or join across multiple tablespaces. (Tom Lane, 2007-06-07)
  (I remain dubious that this will make any difference in practice, but certain people insisted.) Arrange to cache the results of parsing the GUC variable instead of recomputing from scratch on every demand, and push usage of the cache down to the bottommost fd.c level.
* Fix up text concatenation so that it accepts all the reasonable cases that were accepted by prior Postgres releases. (Tom Lane, 2007-06-06)
  This takes care of the loose end left by the preceding patch to downgrade implicit casts-to-text. To avoid breaking desirable behavior for array concatenation, introduce a new polymorphic pseudo-type "anynonarray" --- the added concatenation operators are actually text || anynonarray and anynonarray || text.
* Downgrade implicit casts to text to be assignment-only, except for the ones from the other string-category types; this eliminates a lot of surprising interpretations that the parser could formerly make when there was no directly applicable operator. (Tom Lane, 2007-06-05)
  Create a general mechanism that supports casts to and from the standard string types (text, varchar, bpchar) for *every* datatype, by invoking the datatype's I/O functions. These new casts are assignment-only in the to-string direction, explicit-only in the other, and therefore should create no surprising behavior. Remove a bunch of thereby-obsoleted datatype-specific casting functions.
  The "general mechanism" is a new expression node type CoerceViaIO that can actually convert between *any* two datatypes if their external text representations are compatible. This is more general than needed for the immediate feature, but might be useful in plpgsql or other places in future.
  This commit does nothing about the issue that applying the concatenation operator || to non-text types will now fail, often with strange error messages due to misinterpreting the operator as array concatenation. Since it often (not always) worked before, we should either make it succeed or at least give a more user-friendly error; but details are still under debate.
  Peter Eisentraut and Tom Lane
* Create a GUC parameter temp_tablespaces that allows selection of the tablespace(s) in which to store temp tables and temporary files. (Tom Lane, 2007-06-03)
  This is a list to allow spreading the load across multiple tablespaces (a random list element is chosen each time a temp object is to be created). Temp files are not stored in per-database pgsql_tmp/ directories anymore, but per-tablespace directories.
  Jaime Casanova and Albert Cervera, with review by Bernd Helmle and Tom Lane.
* Buy back some of the cycles spent in more-expensive hash functions by selecting power-of-2, rather than prime, numbers of buckets in hash joins. (Tom Lane, 2007-06-01)
  If the hash functions are doing their jobs properly by making all hash bits equally random, this is good enough, and it saves expensive integer division and modulus operations.
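  A small standalone sketch of the arithmetic being saved (plain C, not the actual nodeHash.c code): with a power-of-two bucket count, the bucket number is a bit mask rather than an integer modulus:

      #include <stdint.h>

      /* Round a requested bucket count up to the next power of two. */
      static uint32_t
      next_pow2(uint32_t n)
      {
          uint32_t p = 1;

          while (p < n)
              p <<= 1;
          return p;
      }

      /* If the hash function makes all bits equally random, masking off the low
       * bits is as good as taking the value modulo a prime, and much cheaper
       * than an integer division. */
      static uint32_t
      bucket_for(uint32_t hashvalue, uint32_t nbuckets_pow2)
      {
          return hashvalue & (nbuckets_pow2 - 1);   /* one AND instead of a modulus */
      }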
* The shortcut exit that I recently added to ExecInitIndexScan() for EXPLAIN-only operation was a little too short; it skipped initializing the node's result tuple type, which may be needed depending on what's above the indexscan node. (Tom Lane, 2007-05-31)
  Call ExecAssignResultTypeFromTL before exiting. (For good luck I moved up the ExecAssignScanProjectionInfo call as well, so that everything except indexscan-specific initialization will still be done.) Per example from Grant Finnemore.
* Fix up pgstats counting of live and dead tuples to recognize that committed and aborted transactions have different effects; also teach it not to assume that prepared transactions are always committed. (Tom Lane, 2007-05-27)
  Along the way, simplify the pgstats API by tying counting directly to Relations; I cannot detect any redeeming social value in having stats pointers in HeapScanDesc and IndexScanDesc structures. And fix a few corner cases in which counts might be missed because the relation's pgstat_info pointer hadn't been set.
* Create hooks to let a loadable plugin monitor (or even replace) the planner and/or create plans for hypothetical situations; in particular, investigate plans that would be generated using hypothetical indexes. (Tom Lane, 2007-05-25)
  This is a heavily-rewritten version of the hooks proposed by Gurjeet Singh for his Index Advisor project. In this formulation, the index advisor can be entirely a loadable module instead of requiring a significant part to be in the core backend, and plans can be generated for hypothetical indexes without requiring the creation and rolling-back of system catalog entries.
  The index advisor patch as-submitted is not compatible with these hooks, but it needs significant work anyway due to other 8.2-to-8.3 planner changes. With these hooks in the core backend, development of the advisor can proceed as a pgfoundry project.
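  A hedged sketch of how a loadable module might attach to the hook this commit adds, assuming the 8.3 hook signature; the module and function names are hypothetical and real index-advisor logic is elided:

      #include "postgres.h"
      #include "fmgr.h"
      #include "optimizer/planner.h"

      PG_MODULE_MAGIC;

      void _PG_init(void);

      static planner_hook_type prev_planner_hook = NULL;

      /* Wrap the planner: a real advisor could inject hypothetical indexes here,
       * then fall through to the previous hook or the standard planner. */
      static PlannedStmt *
      advisor_planner(Query *parse, int cursorOptions, ParamListInfo boundParams)
      {
          if (prev_planner_hook)
              return prev_planner_hook(parse, cursorOptions, boundParams);
          return standard_planner(parse, cursorOptions, boundParams);
      }

      void
      _PG_init(void)
      {
          prev_planner_hook = planner_hook;
          planner_hook = advisor_planner;
      }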
* Teach tuplestore.c to throw away data before the "mark" point when the caller is using mark/restore but not rewind or backward-scan capability. (Tom Lane, 2007-05-21)
  Insert a materialize plan node between a mergejoin and its inner child if the inner child is a sort that is expected to spill to disk. The materialize shields the sort from the need to do mark/restore and thereby allows it to perform its final merge pass on-the-fly; while the materialize itself is normally cheap since it won't spill to disk unless the number of tuples with equal key values exceeds work_mem.
  Greg Stark, with some kibitzing from Tom Lane.
* Fix parameter recalculation for Limit nodes: during a ReScan call we must recompute the limit/offset immediately, so that the updated values are available when the child's ReScan function is invoked. (Tom Lane, 2007-05-17)
  Add a regression test for this, too. Bug is new in HEAD (due to the bounded-sorting patch) so no need for back-patch. I did not do anything about merging this signaling with chgParam processing, but if we were to do that we'd still need to compute the updated values at this point rather than during the first ProcNode call.
  Per observation and test case from Greg Stark, though I didn't use his patch.
* Teach tuplesort.c about "top N" sorting, in which only the first N tuples need be returned. (Tom Lane, 2007-05-04)
  We keep a heap of the current best N tuples and sift-up new tuples into it as we scan the input. For M input tuples this means only about M*log(N) comparisons instead of M*log(M), not to mention a lot less workspace when N is small --- avoiding spill-to-disk for large M is actually the most attractive thing about it. Patch includes planner and executor support for invoking this facility in ORDER BY ... LIMIT queries.
  Greg Stark, with some editorialization by moi.
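  A rough illustration of the bounded-heap idea in plain C (not tuplesort.c itself): keep the N smallest values seen so far in a max-heap and evict the root whenever a smaller value arrives, giving roughly M*log(N) comparisons over M inputs:

      #include <stddef.h>

      /* heap[0..n-1] holds the n smallest values seen so far, arranged as a
       * max-heap so the worst retained value sits at the root. */
      static void
      sift_down(int *heap, size_t n, size_t i)
      {
          for (;;)
          {
              size_t  l = 2 * i + 1;
              size_t  r = l + 1;
              size_t  big = i;
              int     tmp;

              if (l < n && heap[l] > heap[big])
                  big = l;
              if (r < n && heap[r] > heap[big])
                  big = r;
              if (big == i)
                  break;
              tmp = heap[i];
              heap[i] = heap[big];
              heap[big] = tmp;
              i = big;
          }
      }

      /* Offer one input value to a bounded "top n" heap holding *count entries. */
      static void
      top_n_offer(int *heap, size_t n, size_t *count, int value)
      {
          if (*count < n)
          {
              /* heap not yet full: append and sift the new entry up */
              size_t  i = (*count)++;

              heap[i] = value;
              while (i > 0 && heap[(i - 1) / 2] < heap[i])
              {
                  int     tmp = heap[i];

                  heap[i] = heap[(i - 1) / 2];
                  heap[(i - 1) / 2] = tmp;
                  i = (i - 1) / 2;
              }
          }
          else if (value < heap[0])
          {
              heap[0] = value;    /* replace the current worst, then re-heapify */
              sift_down(heap, n, 0);
          }
      }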
* Modify processing of DECLARE CURSOR and EXPLAIN so that they can resolve the types of unspecified parameters when submitted via extended query protocol. (Tom Lane, 2007-04-27)
  This worked in 8.2 but I had broken it during plancache changes. DECLARE CURSOR is now treated almost exactly like a plain SELECT through parse analysis, rewrite, and planning; only just before sending to the executor do we divert it away to ProcessUtility. This requires a special-case check in a number of places, but practically all of them were already special-casing SELECT INTO, so it's not too ugly. (Maybe it would be a good idea to merge the two by treating IntoClause as a form of utility statement? Not going to worry about that now, though.)
  That approach doesn't work for EXPLAIN, however, so for that I punted and used a klugy solution of running parse analysis an extra time if under extended query protocol.
* Fix dynahash.c to suppress hash bucket splits while a hash_seq_search() scan is in progress on the same hashtable. (Tom Lane, 2007-04-26)
  This seems the least invasive way to fix the recently-recognized problem that a split could cause the scan to visit entries twice or (with much lower probability) miss them entirely. The only field-reported problem caused by this is the "failed to re-find shared lock object" PANIC in COMMIT PREPARED reported by Michel Dorochevsky, which was caused by multiply visited entries. However, it seems certain that mdsync() is vulnerable to missing required fsync's due to missed entries, and I am fearful that RelationCacheInitializePhase2() might be at risk as well. Because of that and the generalized hazard presented by this bug, back-patch all the supported branches.
  Along the way, fix pg_prepared_statement() and pg_cursor() to not assume that the hashtables they are examining will stay static between calls. This is risky regardless of the newly noted dynahash problem, because hash_seq_search() has never promised to cope with deletion of table entries other than the just-returned one. There may be no bug here because the only supported way to call these functions is via ExecMakeTableFunctionResult() which will cycle them to completion before doing anything very interesting, but it seems best to get rid of the assumption. This affects 8.2 and HEAD only, since those functions weren't there earlier.
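  The scan pattern this change protects, as a hedged sketch (the entry struct and its refcount field are hypothetical): a hash_seq_search loop whose only mutation is deleting the entry just returned, which is all dynahash has ever promised to tolerate:

      #include "postgres.h"
      #include "utils/hsearch.h"

      typedef struct MyEntry
      {
          Oid         key;        /* hash key must come first */
          int         refcount;   /* hypothetical payload */
      } MyEntry;

      /* Walk a dynahash table and remove unreferenced entries.  Deleting the
       * just-returned entry is safe; with this fix, a bucket split triggered by
       * concurrent insertions can no longer make the scan visit an entry twice
       * or miss it. */
      static void
      drop_unreferenced(HTAB *htab)
      {
          HASH_SEQ_STATUS status;
          MyEntry    *entry;

          hash_seq_init(&status, htab);
          while ((entry = (MyEntry *) hash_seq_search(&status)) != NULL)
          {
              if (entry->refcount == 0)
                  (void) hash_search(htab, (void *) &entry->key, HASH_REMOVE, NULL);
          }
      }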
* Make plancache store cursor options so it can pass them to planner during a replan. (Tom Lane, 2007-04-16)
  I had originally thought this was not necessary, but the new SPI facilities create a path whereby queries planned with non-default options can get into the cache, so it is necessary.
* Support scrollable cursors (ie, 'direction' clause in FETCH) in plpgsql. (Tom Lane, 2007-04-16)
  Pavel Stehule, reworked a bit by Tom.
* Expose more cursor-related functionality in SPI: specifically, allow access to the planner's cursor-related planning options, and provide new FETCH/MOVE routines that allow access to the full power of those commands. (Tom Lane, 2007-04-16)
  Small refactoring of planner(), pg_plan_query(), and pg_plan_queries() APIs to make it convenient to pass the planning options down from SPI. This is the core-code portion of Pavel Stehule's patch for scrollable cursor support in plpgsql; I'll review and apply the plpgsql changes separately.
* Make 'col IS NULL' clauses be indexable conditions. (Tom Lane, 2007-04-06)
  Teodor Sigaev, with some kibitzing from Tom Lane.
* Support varlena fields with single-byte headers and unaligned storage. (Tom Lane, 2007-04-06)
  This commit breaks any code that assumes that the mere act of forming a tuple (without writing it to disk) does not "toast" any fields. While all available regression tests pass, I'm not totally sure that we've fixed every nook and cranny, especially in contrib.
  Greg Stark with some help from Tom Lane
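  For a sense of what "breaks any code that assumes..." means in practice, a hedged sketch of the access pattern that copes with packed (single-byte-header, unaligned) varlena values, using the packed-datum macros from this era of the code; the function name is made up:

      #include "postgres.h"
      #include "fmgr.h"

      PG_FUNCTION_INFO_V1(text_payload_bytes);

      /* Return the number of data bytes in a text value.  PG_GETARG_TEXT_PP and
       * VARSIZE_ANY_EXHDR cope with short headers and unaligned data, whereas
       * code that dereferences a 4-byte header directly can no longer assume it
       * is looking at an untoasted, aligned datum. */
      Datum
      text_payload_bytes(PG_FUNCTION_ARGS)
      {
          text   *t = PG_GETARG_TEXT_PP(0);

          PG_RETURN_INT32((int32) VARSIZE_ANY_EXHDR(t));
      }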
* Fix check_sql_fn_retval to allow the case where a SQL function declared to return void ends with a SELECT, if that SELECT has a single result that is also of type void. (Tom Lane, 2007-04-02)
  Without this, it's hard to write a void function that calls another void function. Per gripe from Peter. Back-patch as far as 8.0.
* Support enum data types. (Tom Lane, 2007-04-02)
  Along the way, use macros for the values of pg_type.typtype wherever practical.
  Tom Dunstan, with some kibitzing from Tom Lane.
* Teach CLUSTER to skip writing WAL if not needed (ie, not using archiving) --- Simon. (Tom Lane, 2007-03-29)
  Also, code review and cleanup for the previous COPY-no-WAL patches --- Tom.
* Fix array coercion expressions to ensure that the correct volatility is seen by code inspecting the expression. (Tom Lane, 2007-03-27)
  The best way to do this seems to be to drop the original representation as a function invocation, and instead make a special expression node type that represents applying the element-type coercion function to each array element. In this way the element function is exposed and will be checked for volatility. Per report from Guillaume Smet.