path: root/src/backend/executor
* Add defenses against integer overflow in dynahash numbuckets calculations. (Tom Lane, 2012-12-11)
  The dynahash code requires the number of buckets in a hash table to fit in an int; but since we calculate the desired hash table size dynamically, there are various scenarios where we might calculate too large a value. The resulting overflow can lead to infinite loops, division-by-zero crashes, etc. I (tgl) had previously installed some defenses against that in commit 299d1716525c659f0e02840e31fbe4dea3, but that covered only one call path. Moreover it worked by limiting the request size to work_mem, but on a 64-bit machine it's possible to set work_mem high enough that the problem appears anyway. So let's fix the problem at the root by installing limits in the dynahash.c functions themselves.

  Trouble report and patch by Jeff Davis.

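  A minimal standalone C sketch of the kind of clamp this entry describes (the function name and the headroom choice are invented for illustration; this is not the actual dynahash.c code):

      #include <limits.h>
      #include <stdio.h>

      /* Hypothetical sketch: clamp a dynamically computed size estimate so
       * the derived bucket count always fits in an int.  Leaving headroom
       * below INT_MAX means later doublings of the bucket count cannot
       * overflow either. */
      static int
      clamp_nbuckets(long nelem_estimate)
      {
          const long max_nbuckets = INT_MAX / 2;

          if (nelem_estimate < 1)
              return 1;
          if (nelem_estimate > max_nbuckets)
              return (int) max_nbuckets;
          return (int) nelem_estimate;
      }

      int
      main(void)
      {
          printf("%d\n", clamp_nbuckets(LONG_MAX));   /* clamped, no overflow */
          return 0;
      }
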
* Fix assorted bugs in CREATE INDEX CONCURRENTLY. (Tom Lane, 2012-11-29)
  This patch changes CREATE INDEX CONCURRENTLY so that the pg_index flag changes it makes without exclusive lock on the index are made via heap_inplace_update() rather than a normal transactional update. The latter is not very safe because moving the pg_index tuple could result in concurrent SnapshotNow scans finding it twice or not at all, thus possibly resulting in index corruption.

  In addition, fix various places in the code that ought to check to make sure that the indexes they are manipulating are valid and/or ready as appropriate. These represent bugs that have existed since 8.2, since a failed CREATE INDEX CONCURRENTLY could leave a corrupt or invalid index behind, and we ought not try to do anything that might fail with such an index.

  Also fix RelationReloadIndexInfo to ensure it copies all the pg_index columns that are allowed to change after initial creation. Previously we could have been left with stale values of some fields in an index relcache entry. It's not clear whether this actually had any user-visible consequences, but it's at least a bug waiting to happen.

  This is a subset of a patch already applied in 9.2 and HEAD. Back-patch into all earlier supported branches.

  Tom Lane and Andres Freund

* Fix cross-type case in partial row matching for hashed subplans. (Tom Lane, 2012-10-11)
  When hashing a subplan like "WHERE (a, b) NOT IN (SELECT x, y FROM ...)", findPartialMatch() attempted to match rows using the hashtable's internal equality operators, which of course are for x and y's datatypes. What we need to use are the potentially cross-type operators for a=x, b=y, etc. Failure to do that leads to wrong answers or even crashes. The scope for problems is limited to cases where we have different types with compatible hash functions (else we'd not be using a hashed subplan), but for example int4 vs int8 can cause the problem.

  Per bug #7597 from Bo Jensen. This has been wrong since the hashed-subplan code was written, so patch all the way back.

* Fix rescan logic in nodeCtescan. (Tom Lane, 2012-08-15)
  The previous coding essentially assumed that nodes would be rescanned in the same order they were initialized in; or at least that the "leader" of a group of CTEscans would be rescanned before any others were required to execute. Unfortunately, that isn't even a little bit true. It's possible to devise queries in which the leader isn't rescanned until other CTEscans on the same CTE have run to completion, or even in which the leader never gets a rescan call at all.

  The fix makes the leader specially responsible only for initial creation and final destruction of the tuplestore; rescan resets are now a symmetrically shared responsibility. This means that we might reset the tuplestore multiple times when restarting a plan subtree containing multiple CTEscans; but resetting an already-empty tuplestore is cheap enough that that doesn't seem like a problem.

  Per report from Adam Mackler; the new regression test cases are based on his example query. Back-patch to 8.4 where CTE scans were introduced.

* Fix whole-row Var evaluation to cope with resjunk columns (again). (Tom Lane, 2012-07-20)
  When a whole-row Var is reading the result of a subquery, we need it to ignore any "resjunk" columns that the subquery might have evaluated for GROUP BY or ORDER BY purposes. We've hacked this area before, in commit 68e40998d058c1f6662800a648ff1e1ce5d99cba, but that fix only covered whole-row Vars of named composite types, not those of RECORD type; and it was mighty klugy anyway, since it just assumed without checking that any extra columns in the result must be resjunk. A proper fix requires getting hold of the subquery's targetlist so we can actually see which columns are resjunk (whereupon we can use a JunkFilter to get rid of them). So bite the bullet and add some infrastructure to make that possible.

  Per report from Andrew Dunstan and additional testing by Merlin Moncure. Back-patch to all supported branches. In 8.3, also back-patch commit 292176a118da6979e5d368a4baf27f26896c99a5, which for some reason I had not done at the time, but it's a prerequisite for this change.

* Fix memory leak in ARRAY(SELECT ...) subqueries. (Tom Lane, 2012-06-21)
  Repeated execution of an uncorrelated ARRAY_SUBLINK sub-select (which I think can only happen if the sub-select is embedded in a larger, correlated subquery) would leak memory for the duration of the query, due to not reclaiming the array generated in the previous execution. Per bug #6698 from Armando Miraglia. Diagnosis and fix idea by Heikki, patch itself by me.

  This has been like this all along, so back-patch to all supported versions.

* Don't allow CREATE TABLE AS to put relations in pg_global. (Robert Haas, 2012-03-21)
  This was never intended to be allowed, and is blocked for an ordinary CREATE TABLE, but CREATE TABLE AS slipped through the cracks. This commit won't do anything to fix existing cases where this loophole has been exploited, but it still seems prudent to lock it down going forward.

  Back-branch commit only, as this problem has been refactored away on the master branch.

  Andres Freund

* Make executor's SELECT INTO code save and restore original tuple receiver. (Tom Lane, 2012-01-04)
  As previously coded, the QueryDesc's dest pointer was left dangling (pointing at an already-freed receiver object) after ExecutorEnd. It's a bit astonishing that it took us this long to notice, and I'm not sure that the known problem case with SQL functions is the only one. Fix it by saving and restoring the original receiver pointer, which seems the most bulletproof way of ensuring any related bugs are also covered.

  Per bug #6379 from Paul Ramsey. Back-patch to 8.4 where the current handling of SELECT INTO was introduced.

* Avoid integer overflow when LIMIT + OFFSET >= 2^63. (Heikki Linnakangas, 2011-08-02)
  This fixes bug #6139 reported by Hitoshi Harada.

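  The hazard is plain 64-bit arithmetic: with both values near the top of the int64 range, "offset + limit" wraps negative. A standalone C sketch of one overflow-safe formulation (hypothetical function name, not the actual nodeLimit.c code; assumes both inputs are non-negative):

      #include <stdint.h>
      #include <stdio.h>

      /* Hypothetical sketch: compute the last row position requested by a
       * LIMIT/OFFSET pair without letting the signed addition wrap.  When
       * the sum would overflow, saturate to INT64_MAX (effectively
       * "unbounded"), which is one reasonable way to defuse the overflow. */
      static int64_t
      limit_bound(int64_t offset, int64_t limit)
      {
          if (limit > INT64_MAX - offset)     /* safe because offset >= 0 */
              return INT64_MAX;
          return offset + limit;
      }

      int
      main(void)
      {
          printf("%lld\n", (long long) limit_bound(INT64_MAX - 1, 5));
          return 0;
      }
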
* Disallow SELECT FOR UPDATE/SHARE on sequences. (Tom Lane, 2011-06-02)
  We can't allow this because such an operation stores its transaction XID into the sequence tuple's xmax. Because VACUUM doesn't process sequences (and we don't want it to start doing so), such an xmax value won't get frozen, meaning it will eventually refer to nonexistent pg_clog storage, and even wrap around completely. Since the row lock is ignored by nextval and setval, the usefulness of the operation is highly debatable anyway. Per reports of trouble with pgpool 3.0, which had ill-advisedly started using such commands as a form of locking.

  In HEAD, also disallow SELECT FOR UPDATE/SHARE on toast tables. Although this does work safely given the current implementation, there seems no good reason to allow it. I refrained from changing that behavior in back branches, however.

* Install defenses against overflow in BuildTupleHashTable(). (Tom Lane, 2011-05-23)
  The planner can sometimes compute very large values for numGroups, and in cases where we have no alternative to building a hashtable, such a value will get fed directly to BuildTupleHashTable as its nbuckets parameter. There were two ways in which that could go bad.

  First, BuildTupleHashTable declared the parameter as "int" but most callers were passing "long"s, so on 64-bit machines undetected overflow could occur leading to a bogus negative value. The obvious fix for that is to change the parameter to "long", which is what I've done in HEAD. In the back branches that seems a bit risky, though, since third-party code might be calling this function. So for them, just put in a kluge to treat negative inputs as INT_MAX.

  Second, hash_create can go nuts with extremely large requested table sizes (notably, my_log2 becomes an infinite loop for inputs larger than LONG_MAX/2). What seems most appropriate to avoid that is to bound the initial table size request to work_mem.

  This fixes bug #6035 reported by Daniel Schreiber. Although the reported case only occurs back to 8.4 since it involves WITH RECURSIVE, I think it's a good idea to install the defenses in all supported branches.

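  The back-branch kluge described above is small enough to spell out. A standalone C sketch (invented names, not the actual BuildTupleHashTable code): when a "long" is forced through an "int" parameter on a 64-bit machine, the overflow typically shows up as a negative value, so saturating non-positive inputs to INT_MAX keeps old callers safe without changing the signature:

      #include <limits.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Hypothetical sketch of the back-branch kluge: the parameter stays
       * declared "int" (so third-party callers keep working), and any
       * non-positive value, the visible symptom of a truncated "long", is
       * treated as a request for INT_MAX buckets, to be cut down further
       * by whatever work_mem-based bound applies. */
      static int
      sanitize_nbuckets(int nbuckets)
      {
          return (nbuckets <= 0) ? INT_MAX : nbuckets;
      }

      int
      main(void)
      {
          int64_t requested = (int64_t) 1 << 31;   /* doesn't fit in int */
          int     nbuckets = (int) requested;      /* typically wraps negative */

          printf("%d\n", sanitize_nbuckets(nbuckets));
          return 0;
      }
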
* Fix dangling-pointer problem in before-row update trigger processing. (Tom Lane, 2011-02-21)
  ExecUpdate checked for whether ExecBRUpdateTriggers had returned a new tuple value by seeing if the returned tuple was pointer-equal to the old one. But the "old one" was in estate->es_junkFilter's result slot, which would be scribbled on if we had done an EvalPlanQual update in response to a concurrent update of the target tuple; therefore we were comparing a dangling pointer to a live one. Given the right set of circumstances we could get a false match, resulting in not forcing the tuple to be stored in the slot we thought it was stored in. In the case reported by Maxim Boguk in bug #5798, this led to "cannot extract system attribute from virtual tuple" failures when trying to do "RETURNING ctid". I believe there is a very-low-probability chance of more serious errors, such as generating incorrect index entries based on the original rather than the trigger-modified version of the row.

  In HEAD, change all of ExecBRInsertTriggers, ExecIRInsertTriggers, ExecBRUpdateTriggers, and ExecIRUpdateTriggers so that they continue to have similar APIs. In the back branches I just changed ExecBRUpdateTriggers, since there is no bug in the ExecBRInsertTriggers case.

* Fix wrong error reports in "number of array dimensions exceeds the maximum allowed" messages, which reported one fewer dimension than the actual number. (Itagaki Takahiro, 2011-02-01)
  Alexey Klyukin

* Prevent inlining a SQL function with multiple OUT parameters. (Tom Lane, 2010-12-01)
  There were corner cases in which the planner would attempt to inline such a function, which would result in a failure at runtime due to loss of information about exactly what the result record type is. Fix by disabling inlining when the function's recorded result type is RECORD. There might be some sub-cases where inlining could still be allowed, but this is a simple and backpatchable fix, so leave refinements for another day.

  Per bug #5777 from Nate Carson. Back-patch to all supported branches. 8.1 happens to avoid a core-dump here, but it still does the wrong thing.

* Fix ExecMakeTableFunctionResult to verify that all rows returned by a SRF returning "record" actually do have the same rowtype. (Tom Lane, 2010-08-26)
  This is needed because the parser can't realistically enforce that they will all have the same typmod, as seen in a recent example from David Wheeler.

  Back-patch to 8.0, which is as far back as we have the notion of RECORD subtypes being distinguished by typmod. Wheeler's example depends on 8.4-and-up features, but I suspect there may be ways to provoke similar failures before 8.4.

* Fix potential failure when hashing the output of a subplan that produces a pass-by-reference datatype with a nontrivial projection step. (Tom Lane, 2010-07-28)
  We were using the same memory context for the projection operation as for the temporary context used by the hashtable routines in execGrouping.c. However, the hashtable routines feel free to reset their temp context at any time, which'd lead to destroying input data that was still needed. Report and diagnosis by Tao Ma.

  Back-patch to 8.1, where the problem was introduced by the changes that allowed us to work with "virtual" tuples instead of materializing intermediate tuple values everywhere. The earlier code looks quite similar, but it doesn't suffer the problem because the data gets copied into another context as a result of having to materialize ExecProject's output tuple.

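  The fix pattern here is to give each independent consumer its own memory context, so that one party's reset cannot clobber the other's data. A backend-style sketch of the principle (assumes PostgreSQL's memutils.h and the 8.x-era AllocSetContextCreate signature; the context names are invented, and this is not the committed patch):

      /* Sketch: separate contexts for the hashtable's scratch space and
       * the projection's output, so the hashtable code may reset its temp
       * context at any time without destroying projected input values. */
      MemoryContext hash_tmp_cxt = AllocSetContextCreate(CurrentMemoryContext,
                                                         "HashTableTemp",
                                                         ALLOCSET_DEFAULT_MINSIZE,
                                                         ALLOCSET_DEFAULT_INITSIZE,
                                                         ALLOCSET_DEFAULT_MAXSIZE);
      MemoryContext proj_cxt = AllocSetContextCreate(CurrentMemoryContext,
                                                     "ProjectionTemp",
                                                     ALLOCSET_DEFAULT_MINSIZE,
                                                     ALLOCSET_DEFAULT_INITSIZE,
                                                     ALLOCSET_DEFAULT_MAXSIZE);

      /* ... evaluate the projection with proj_cxt current, then: */
      MemoryContextReset(hash_tmp_cxt);   /* cannot touch proj_cxt's data */
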
* Rejigger mergejoin logic so that a tuple with a null in the first merge column is treated like end-of-input, if nulls sort last in that column and we are not doing outer-join filling for that input. (Tom Lane, 2010-05-28)
  In such a case, the tuple cannot join to anything from the other input (because we assume mergejoinable operators are strict), and neither can any tuple following it in the sort order. If we're not interested in doing outer-join filling we can just pretend the tuple and its successors aren't there at all. This can save a great deal of time in situations where there are many nulls in the join column, as in a recent example from Scott Marlowe. Also, since the planner tends to not count nulls in its mergejoin scan selectivity estimates, this is an important fix to make the runtime behavior more like the estimate.

  I regard this as an omission in the patch I wrote years ago to teach mergejoin that tuples containing nulls aren't joinable, so I'm back-patching it. But only to 8.3 --- in older versions, we didn't have a solid notion of whether nulls sort high or low, so attempting to apply this optimization could break things.

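  The three conditions in the message can be read off directly; as a sketch (hypothetical helper, not the actual nodeMergejoin.c logic):

      #include <stdbool.h>

      /* Hypothetical sketch of the early-termination test described above.
       * With strict mergejoinable operators, a null in the first merge
       * column can never join; if nulls also sort last and this input
       * needs no outer-join filling, every following tuple is equally
       * unjoinable, so the input may be treated as exhausted. */
      static bool
      merge_input_exhausted(bool first_key_is_null,
                            bool nulls_sort_last,
                            bool fill_this_side)
      {
          return first_key_is_null && nulls_sort_last && !fill_this_side;
      }
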
* Modify error context callback functions to not assume that they can fetch catalog entries via SearchSysCache and related operations. (Tom Lane, 2010-03-19)
  Although, at the time that these callbacks are called by elog.c, we have not officially aborted the current transaction, it still seems rather risky to initiate any new catalog fetches. In all these cases the needed information is readily available in the caller and so it's just a matter of a bit of extra notation to pass it to the callback.

  Per crash report from Dennis Koegel. I've concluded that the real fix for his problem is to clear the error context stack at entry to proc_exit, but it still seems like a good idea to make the callbacks a bit less fragile for other cases.

  Backpatch to 8.4. We could go further back, but the patch doesn't apply cleanly. In the absence of proof that this fixes something and isn't just paranoia, I'm not going to expend the effort.

* Fix ExecEvalArrayRef to pass down the old value of the array element or slice being assigned to, in case the expression to be assigned is a FieldStore that would need to modify that value. (Tom Lane, 2010-02-18)
  The need for this was foreseen some time ago, but not implemented then because we did not have arrays of composites. Now we do, but the point evidently got overlooked in that patch. Net result is that updating a field of an array element doesn't work right, as illustrated if you try the new regression test on an unpatched backend. Noted while experimenting with EXPLAIN VERBOSE, which has also got some issues in this area.

  Backpatch to 8.3, where arrays of composites were introduced.

* Improve ExecEvalVar's handling of whole-row variables in cases where the rowtype contains dropped columns. (Tom Lane, 2010-01-11)
  Sometimes the input tuple will be formed from a select targetlist in which dropped columns are filled with a NULL of an arbitrary type (the planner typically uses INT4, since it can't tell what type the dropped column really was). So we need to relax the rowtype compatibility check to not insist on physical compatibility if the actual column value is NULL.

  In principle we might need to do this for functions returning composite types, too (see tupledesc_match()). In practice there doesn't seem to be a bug there, probably because the function will be using the same cached rowtype descriptor as the caller. Fixing that code path would require significant rearrangement, so I left it alone for now.

  Per complaint from Filip Rembialkowski.

* Add support for doing FULL JOIN ON FALSE. (Tom Lane, 2010-01-05)
  While this is really a rather peculiar variant of UNION ALL, and so wouldn't likely get written directly as-is, it's possible for it to arise as a result of simplification of less-obviously-silly queries. In particular, now that we can do flattening of subqueries that have constant outputs and are underneath an outer join, it's possible for the case to result from simplification of queries of the type exhibited in bug #5263. Back-patch to 8.4 to avoid a functionality regression for this type of query.

* Previous fix for temporary file management broke returning a set from a PL/pgSQL function within an exception handler. (Heikki Linnakangas, 2009-12-29)
  Make sure we use the right resource owner when we create the tuplestore to hold returned tuples.

  Simplify tuplestore API so that the caller doesn't need to be in the right memory context when calling tuplestore_put* functions. tuplestore.c automatically switches to the memory context used when the tuplestore was created. Tuplesort was already modified like this earlier. This patch also removes the now useless MemoryContextSwitchTo calls from callers.

  Report by Aleksei on pgsql-bugs on Dec 22 2009. Backpatch to 8.1, like the previous patch that broke this.

* Fix a bug introduced when set-returning SQL functions were made inline-able: we have to cope with the possibility that the declared result rowtype contains dropped columns. (Tom Lane, 2009-12-14)
  This fails in 8.4, as per bug #5240.

  While at it, be more paranoid about inserting binary coercions when inlining. The pre-8.4 code did not really need to worry about that because it could not inline at all in any case where an added coercion could change the behavior of the function's statement. However, when inlining a SRF we allow sorting, grouping, and set-ops such as UNION. In these cases, modifying one of the targetlist entries that the sort/group/setop depends on could conceivably change the behavior of the function's statement --- so don't inline when such a case applies.

* Prevent indirect security attacks via changing session-local state within an allegedly immutable index function. (Tom Lane, 2009-12-09)
  It was previously recognized that we had to prevent such a function from executing SET/RESET ROLE/SESSION AUTHORIZATION, or it could trivially obtain the privileges of the session user. However, since there is in general no privilege checking for changes of session-local state, it is also possible for such a function to change settings in a way that might subvert later operations in the same session. Examples include changing search_path to cause an unexpected function to be called, or replacing an existing prepared statement with another one that will execute a function of the attacker's choosing.

  The present patch secures VACUUM, ANALYZE, and CREATE INDEX/REINDEX against these threats, which are the same places previously deemed to need protection against the SET ROLE issue. GUC changes are still allowed, since there are many useful cases for that, but we prevent security problems by forcing a rollback of any GUC change after completing the operation. Other cases are handled by throwing an error if any change is attempted; these include temp table creation, closing a cursor, and creating or deleting a prepared statement. (In 7.4, the infrastructure to roll back GUC changes doesn't exist, so we settle for rejecting changes of "search_path" in these contexts.)

  Original report and patch by Gurjeet Singh, additional analysis by Tom Lane.

  Security: CVE-2009-4136

* Make the overflow guards in ExecChooseHashTableSize be more protective. (Tom Lane, 2009-10-30)
  The original coding ensured nbuckets and nbatch didn't exceed INT_MAX, which while not insane on its own terms did nothing to protect subsequent code like "palloc(nbatch * sizeof(BufFile *))". Since enormous join size estimates might well be planner error rather than reality, it seems best to constrain the initial sizes to be not more than work_mem/sizeof(pointer), thus ensuring the allocated arrays don't exceed work_mem. We will allow nbatch to get bigger than that during subsequent ExecHashIncreaseNumBatches calls, but we should still guard against integer overflow in those palloc requests.

  Per bug #5145 from Bernt Marius Johnsen. Although the given test case only seems to fail back to 8.2, previous releases have variants of this issue, so patch all supported branches.

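  The subsequent palloc the message quotes is a textbook unchecked multiplication. A standalone C sketch of the checked-multiply pattern (hypothetical function name; PostgreSQL's own palloc additionally enforces its request-size limit):

      #include <stdint.h>
      #include <stdio.h>
      #include <stdlib.h>

      /* Hypothetical sketch: refuse the allocation outright if the
       * element count is so large that count * elemsize would wrap. */
      static void *
      alloc_pointer_array(size_t count)
      {
          if (count > SIZE_MAX / sizeof(void *))
              return NULL;                      /* product would overflow */
          return calloc(count, sizeof(void *));
      }

      int
      main(void)
      {
          if (alloc_pointer_array(SIZE_MAX / 2) == NULL)
              printf("overflow (or allocation failure) detected\n");
          return 0;
      }
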
* Ensure that a cursor has an immutable snapshot throughout its lifespan. (Alvaro Herrera, 2009-10-02)
  The old coding was using a regular snapshot, referenced elsewhere, that was subject to having its command counter updated. Fix by creating a private copy of the snapshot exclusively for the cursor.

  Backpatch to 8.4, which is when the bug was introduced during the snapshot management rewrite.

* Tweak ExecIndexEvalRuntimeKeys to forcibly detoast any toasted comparison values before they get passed to the index access method. (Tom Lane, 2009-08-23)
  This avoids repeated detoastings that will otherwise ensue as the comparison value is examined by various index support functions. We have seen a couple of reports of cases where repeated detoastings result in an order-of-magnitude slowdown, so it seems worth adding a bit of extra logic to prevent this. I had previously proposed trying to avoid duplicate detoastings in general, but this fix takes care of what seems the most important case in practice with very little effort or risk.

  Back-patch to 8.4 so that the PostGIS folk won't have to wait a year to have this fix in a production release. (The issue exists further back, of course, but the code's diverged enough to make backpatching further a higher-risk action. Also it appears that the possible gains may be limited in prior releases because of different handling of lossy operators.)

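  The shape of the fix is a single up-front detoast at key-evaluation time. A backend-style fragment of the idea (assumes PostgreSQL's fmgr.h macros; the variable names are illustrative, it presumes a pass-by-reference toastable key type, and it is not the literal committed code):

      /* Detoast the comparison value once, up front, so the index support
       * functions that examine it repeatedly never have to detoast it
       * themselves. */
      if (!isNull)
          scanvalue = PointerGetDatum(PG_DETOAST_DATUM(scanvalue));
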
* In a non-hashed Agg node, reset the "aggcontext" at group boundaries, instead of individually pfree'ing pass-by-reference transition values. (Tom Lane, 2009-07-23)
  This should be at least as fast as the prior coding, and it has the major advantage of clearing out any working data an aggregate function may have stored in or underneath the aggcontext. This avoids memory leakage when an aggregate such as array_agg() is used in GROUP BY mode. Per report from Chris Spotts.

  Back-patch to 8.4. In principle the problem could arise in prior versions, but since they didn't have array_agg the issue seems not critical.

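  The two cleanup strategies contrast as follows, in a backend-style sketch (field and variable names are illustrative, not the committed code):

      /* Before: pfree only the transition value we can see; anything the
       * aggregate stashed elsewhere under aggcontext (e.g. array_agg's
       * internal state) leaks until query end. */
      if (!transValueIsNull && !transtypeByVal)
          pfree(DatumGetPointer(transValue));

      /* After: one reset at each group boundary releases everything
       * allocated in the context, hidden working state included. */
      MemoryContextReset(aggcontext);
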
* Fix error cleanup failure caused by 8.4 changes in plpgsql to try to avoid memory leakage in error recovery. (Tom Lane, 2009-07-18)
  We were calling FreeExprContext, and therefore invoking ExprContextCallback callbacks, in both normal and error exits from subtransactions. However this isn't very safe, as shown in recent trouble report from Frank van Vugt, in which releasing a tupledesc refcount failed. It's also unnecessary, since the resources that callbacks might wish to release should be cleaned up by other error recovery mechanisms (ie the resource owners). We only really want FreeExprContext to release memory attached to the exprcontext in the error-exit case. So, add a bool parameter to FreeExprContext to tell it not to call the callbacks.

  A more general solution would be to pass the isCommit bool parameter on to the callbacks, so they could do only safe things during error exit. But that would make the patch significantly more invasive and possibly break third-party code that registers ExprContextCallback callbacks. We might want to do that later in HEAD, but for now I'll just do what seems reasonable to back-patch.

* Fix things so that array_agg_finalfn does not modify or free its input ArrayBuildState, per trouble report from Merlin Moncure. (Tom Lane, 2009-06-20)
  By adopting this fix, we are essentially deciding that aggregate final-functions should not modify their inputs ever. Adjust documentation and comments to match that conclusion.

* ExecAgg() failed to finish running out set-returning functions in the last aggregated tuple of a run. (Tom Lane, 2009-06-17)
  Per report from Laurenz Albe. This is a new bug in 8.4, but only because prior versions rejected SRFs in an Agg plan node altogether.

* Revisit AlterTableCreateToastTable's API once again, hoping to make it what pg_migrator actually needs and not just a partial solution. (Tom Lane, 2009-06-11)
  We have to be able to specify the OID that the new toast table should be created with.

* Fix things so that you can still do "select foo()" where foo is a SQL function returning setof record. (Tom Lane, 2009-06-11)
  This used to work, more or less accidentally, but I had broken it while extending the code to allow materialize-mode functions to be called in select lists. Add a regression test case so it doesn't get broken again. Per gripe from Greg Davidson.

* 8.4 pgindent run, with new combined Linux/FreeBSD/MinGW typedef list provided by Andrew. (Bruce Momjian, 2009-06-11)

* Fix xmlattribute escaping XML special characters twice (bug #4822). (Peter Eisentraut, 2009-06-09)
  Author: Itagaki Takahiro <itagaki.takahiro@oss.ntt.co.jp>

* Improve the recently-added support for properly pluralized error messages by extending the ereport() API to cater for pluralization directly. (Tom Lane, 2009-06-04)
  This is better than the original method of calling ngettext outside the elog.c code because (1) it avoids double translation, which wastes cycles and in the worst case could give a wrong result; and (2) it avoids having to use a different coding method in PL code than in the core backend. The client-side uses of ngettext are not touched since neither of these concerns is very pressing in the client environment.

  Per my proposal of yesterday.

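  A hedged example of what the extended API looks like at a call site (the message text and the count variable are invented; errmsg_plural() is the entry point this commit describes):

      /* elog.c now receives both forms plus the count, so it can pick,
       * and translate, the correct plural in one step, instead of the
       * caller running ngettext() and risking double translation. */
      ereport(ERROR,
              (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
               errmsg_plural("%d argument was supplied",
                             "%d arguments were supplied",
                             nargs, nargs)));
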
* Add an option to AlterTableCreateToastTable() to allow its caller to force a toast table to be built, even if the sum-of-column-widths calculation indicates one isn't needed. (Tom Lane, 2009-05-07)
  This is needed by pg_migrator because if the old table has a toast table, we have to migrate over the toast table since it might contain some live data, even though subsequent column drops could mean that no recently-added rows could require toasting.

* XMLATTRIBUTES() should send the attribute values through map_sql_value_to_xml_value() instead of directly through the data type output function. (Peter Eisentraut, 2009-04-08)
  This is per SQL standard, and consistent with XMLELEMENT().

* Make ExecInitExpr build the list of SubPlans found in a plan tree in order of discovery, rather than reverse order. (Tom Lane, 2009-04-05)
  This doesn't matter functionally (I suppose the previous coding dates from the time when lcons was markedly cheaper than lappend). However now that EXPLAIN is labeling subplans with IDs that are based on order of creation, this may help produce a slightly less surprising printout.

* Refactor ExecProject and associated routines so that fast-path code is used for simple Var targetlist entries all the time, even when there are other entries that are not simple Vars. (Tom Lane, 2009-04-02)
  Also, ensure that we prefetch attributes (with slot_getsomeattrs) for all Vars in the targetlist, even those buried within expressions. In combination these changes seem to significantly reduce the runtime for cases where tlists are mostly but not exclusively Vars. Per my proposal of yesterday.

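  The prefetch half of this change exploits the fact that heap tuple decoding is sequential, so one extraction pass up to the highest attribute number beats many partial decodes. A backend-style sketch (illustrative variable names, not the committed code):

      /* One pass extracts attributes 1..max_varattno into the slot's
       * arrays; afterwards every Var in the targetlist is a cheap array
       * fetch rather than a fresh walk over the tuple. */
      Datum value;
      bool  isnull;

      slot_getsomeattrs(slot, max_varattno);

      value  = slot->tts_values[attno - 1];
      isnull = slot->tts_isnull[attno - 1];
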
* Revert DTrace patch from Robert Lor (Bruce Momjian, 2009-04-02)

* Add support for additional DTrace probes. (Bruce Momjian, 2009-04-02)
  Robert Lor

* Fix an oversight in the support for storing/retrieving "minimal tuples" in TupleTableSlots. (Tom Lane, 2009-03-30)
  We have functions for retrieving a minimal tuple from a slot after storing a regular tuple in it, or vice versa; but these were implemented by converting the internal storage from one format to the other. The problem with that is it invalidates any pass-by-reference Datums that were already fetched from the slot, since they'll be pointing into the just-freed version of the tuple.

  The known problem cases involve fetching both a whole-row variable and a pass-by-reference value from a slot that is fed from a tuplestore or tuplesort object. The added regression tests illustrate some simple cases, but there may be other failure scenarios traceable to the same bug. Note that the added tests probably only fail on unpatched code if it's built with --enable-cassert; otherwise the bug leads to fetching from freed memory, which will not have been overwritten without additional conditions.

  Fix by allowing a slot to contain both formats simultaneously; which turns out not to complicate the logic much at all, if anything it seems less contorted than before.

  Back-patch to 8.2, where minimal tuples were introduced.

* Fix possible failures when a tuplestore switches from in-memory to on-disk mode while callers hold pointers to in-memory tuples. (Tom Lane, 2009-03-27)
  I reported this for the case of nodeWindowAgg's primary scan tuple, but inspection of the code shows that all of the calls in nodeWindowAgg and nodeCtescan are at risk. For the moment, fix it with a rather brute-force approach of copying whenever one of the at-risk callers requests a tuple. Later we might think of some sort of reference-count approach to reduce tuple copying.

* Gettext plural support (Peter Eisentraut, 2009-03-26)
  In the backend, I changed only a handful of exemplary or important-looking instances to make use of the plural support; there is probably more work there. For the rest of the source, this should cover all relevant cases.

* Optimize multi-batch hash joins when the outer relation has a nonuniform distribution, by creating a special fast path for the (first few) most common values of the outer relation. (Tom Lane, 2009-03-21)
  Tuples having hashvalues matching the MCVs are effectively forced to be in the first batch, so that we never write them out to the batch temp files.

  Bryce Cutt and Ramon Lawrence, with some editorialization by me.

* Add new SQL:2008 error codes for invalid LIMIT and OFFSET values. (Peter Eisentraut, 2009-03-04)
  Remove unused nonstandard error code that was perhaps intended for this but never used.

* Ensure that INSERT ... SELECT into a table with OIDs never copies row OIDs from the source table. (Tom Lane, 2009-02-08)
  This could never happen anyway before 8.4 because the executor invariably applied a "junk filter" to rows due to be inserted; but now that we skip doing that when it's not necessary, the case can occur. Problem noted 2008-11-27 by KaiGai Kohei, though I misunderstood what he was on about at the time (the opacity of the patch he proposed didn't help).

* Allow reloption names to have qualifiers, initially supporting a TOAST qualifier, and add support for this in pg_dump. (Alvaro Herrera, 2009-02-02)
  This allows TOAST tables to have user-defined fillfactor, and will also enable us to move the autovacuum parameters to reloptions without taking away the possibility of setting values for TOAST tables.

* Support column-level privileges, as required by SQL standard. (Tom Lane, 2009-01-22)
  Stephen Frost, with help from KaiGai Kohei and others