path: root/src/backend
...
* Request XLOG switch before writing checkpoint in pg_start_backup().  (Heikki Linnakangas, 2009-05-07)
  Otherwise you can end up with an unrecoverable backup if you start a new base backup right after finishing archive recovery. In that scenario, the redo pointer of the checkpoint that pg_start_backup() writes points to the XLOG segment where the timeline-changing end-of-archive-recovery checkpoint is. The beginning of that segment contains pages with the old timeline ID, and we don't accept that in recovery unless we find a history file covering the old timeline ID. If you omit pg_xlog from the base backup and clear the archive directory before starting the backup, there will be no such history file available. The bug is present in all versions since PITR was introduced in 8.0, but I'm back-patching only back to 8.2. Earlier versions didn't have XLOG switch records, making this fix unfeasible. Given the lack of reports until now, it doesn't seem worthwhile to spend more effort to fix 8.0 and 8.1. Per report and suggestion by Mikael Krantz.
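  For context, a sketch of the base backup sequence this fix affects; the label string is arbitrary and the copy step is elided:

      SELECT pg_start_backup('nightly');   -- now requests an XLOG segment switch before the checkpoint
      -- ... copy the data directory with an external tool ...
      SELECT pg_stop_backup();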
* Call SetLastError(0) before calling the file mapping functions.  (Magnus Hagander, 2009-05-04)
  This makes sure that the error code is reset, as a precaution in case the API doesn't properly reset it on success. It could be necessary, since we check the error value even when the function doesn't fail, for specific success cases.
* When checking for datetime field overflow, we should allow a fractional-second part that rounds up to exactly 1.0 second.  (Tom Lane, 2009-05-01)
  The previous coding rejected input like "00:12:57.9999999999999999999999999999", with the exact number of nines needed to cause failure varying depending on the float-timestamp option and possibly on platform. Obviously this should round up to the next integral second, if we don't have enough precision to distinguish the value from that. Per bug #4789 from Robert Kruus. In passing, fix a missed check for fractional seconds in one copy of the "is it greater than 24:00:00" code. Broken all the way back, so patch all the way back.
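  As an illustration of the boundary case described above (the literal is the one from the bug report; whether it was rejected before the fix depends on build options):

      SELECT '00:12:57.9999999999999999999999999999'::time;
      -- previously this could be rejected as out of range; with the fix it rounds up to 00:12:58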
* Fix the handling of sub-SELECTs appearing in the arguments of an outer-level aggregate function.  (Tom Lane, 2009-04-25)
  By definition, such a sub-SELECT cannot reference any variables of query levels between itself and the aggregate's semantic level (else the aggregate would've been assigned to that lower level instead). So the correct, most efficient implementation is to treat the sub-SELECT as being a sub-select of that outer query level, not the level the aggregate syntactically appears in. Not doing so also confuses the heck out of our parameter-passing logic, as illustrated in a bug report from Daniel Grace. Fortunately, we were already copying the whole Aggref expression up to the outer query level, so all that's needed is to delay SS_process_sublinks processing of the sub-SELECT until control returns to the outer level. This has been broken since we introduced spec-compliant treatment of outer aggregates in 7.4; so patch all the way back.
* Remove HELIOS Software GmbH name and copyright from AIX dynloader files, per approval from Helmut Tschemernjak, President.  (Bruce Momjian, 2009-04-25)
  Only back branches; files removed from CVS HEAD.
* Fix "all at one page" bug in the picksplit method of the R-tree emulation.  (Teodor Sigaev, 2009-04-06)
  Also add a defense in GiST against buggy user-defined picksplit functions.
* Rewrite interval_hash() so that the hashcodes are equal for values that interval_eq() considers equal.  (Tom Lane, 2009-04-04)
  I'm not sure how that fundamental requirement escaped us through multiple revisions of this hash function, but there it is; it's been wrong since interval_hash was first written for PG 7.1. Per bug #4748 from Roman Kononov. Backpatch to all supported releases. This patch changes the contents of hash indexes for interval columns. That's no particular problem for PG 8.4, since we've broken on-disk compatibility of hash indexes already; but it will require a migration warning note in the next minor releases of all existing branches: "if you have any hash indexes on columns of type interval, REINDEX them after updating".
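  A small SQL sketch of the invariant being restored, plus the post-update step mentioned above (the index name is hypothetical):

      SELECT '1 day'::interval = '24 hours'::interval;   -- true, so both values must hash to the same code
      REINDEX INDEX my_interval_hash_idx;                -- needed after updating, if such an index exists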
* Fix a rare race condition when commit_siblings > 0 and a transaction commits at the same instant as a new backend is spawned.  (Heikki Linnakangas, 2009-03-31)
  Since CountActiveBackends() doesn't hold ProcArrayLock, it needs to be prepared for the case that a pointer at the end of the proc array is still NULL even though numProcs says it should be valid. Backpatch to 8.1. 8.0 and earlier had this right, but it was broken in the split of PGPROC and sinval shared memory arrays. Per report and proposal by Marko Kreen.
* Fix an oversight in the support for storing/retrieving "minimal tuples" in TupleTableSlots.  (Tom Lane, 2009-03-30)
  We have functions for retrieving a minimal tuple from a slot after storing a regular tuple in it, or vice versa; but these were implemented by converting the internal storage from one format to the other. The problem with that is it invalidates any pass-by-reference Datums that were already fetched from the slot, since they'll be pointing into the just-freed version of the tuple. The known problem cases involve fetching both a whole-row variable and a pass-by-reference value from a slot that is fed from a tuplestore or tuplesort object. The added regression tests illustrate some simple cases, but there may be other failure scenarios traceable to the same bug. Note that the added tests probably only fail on unpatched code if it's built with --enable-cassert; otherwise the bug leads to fetching from freed memory, which will not have been overwritten without additional conditions. Fix by allowing a slot to contain both formats simultaneously; which turns out not to complicate the logic much at all, if anything it seems less contorted than before. Back-patch to 8.2, where minimal tuples were introduced.
* Install a search tree depth limit in GIN bulk-insert operations, to prevent them from degrading badly when the input is sorted or nearly so.  (Tom Lane, 2009-03-24)
  In this scenario the tree is unbalanced to the point of becoming a mere linked list, so insertions become O(N^2). The easiest and most safely back-patchable solution is to stop growing the tree sooner, ie limit the growth of N. We might later consider a rebalancing tree algorithm, but it's not clear that the benefit would be worth the cost and complexity. Per report from Sergey Burladyan and an earlier complaint from Heikki. Back-patch to 8.2; older versions didn't have GIN indexes.
* Fix core dump due to null-pointer dereference in to_char() when datetime format codes are misapplied to a numeric argument.  (Tom Lane, 2009-03-12)
  (The code still produces a pretty bogus error message in such cases, but I'll settle for stopping the crash for now.) Per bug #4700 from Sergey Burladyan. Problem exists in all supported branches, so patch all the way back. In HEAD, also clean up some ugly coding in the nearby cache management code.
* Put back our old workaround for machines that declare cbrt() in math.h but fail to provide the function itself.  (Tom Lane, 2009-03-04)
  Not sure how we escaped testing anything later than 7.3 on such cases, but they still exist, as per André Volpato's report about AIX 5.3.
* Ooops ... fix some confusion between gettext() and _() in my previous patch.  (Tom Lane, 2009-03-03)
  This has moved around in past releases, so just copying-and-pasting from HEAD didn't work as intended.
* When we are in error recursion trouble, arrange to suppress translation and encoding conversion of any elog/ereport message being sent to the frontend.  (Tom Lane, 2009-03-02)
  This generalizes a patch that I put in last October, which suppressed translation of only specific messages known to be associated with recursive can't-translate-the-message behavior. As shown in bug #4680, we need a more general answer in order to have some hope of coping with broken encoding conversion setups. This approach seems a good deal less klugy anyway. Patch in all supported branches.
* Fix buffer allocations in encoding conversion routines so that they won't fail on zero-length inputs.  (Tom Lane, 2009-02-28)
  This isn't an issue in normal use because the conversion infrastructure skips calling the converters for empty strings. However, a problem was created by yesterday's patch to check whether the right conversion function is supplied in CREATE CONVERSION. The most future-proof fix seems to be to make the converters safe for this corner case.
* In CREATE CONVERSION, test that the given function is a valid conversion function for the specified source and destination encodings.  (Heikki Linnakangas, 2009-02-27)
  We do that by calling the function with an empty string. If it can't perform the requested conversion, it will throw an error. Backport to 7.4 - 8.3. Per bug report #4680 by Denis Afonin.
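  For reference, the statement shape that the new check validates; the conversion and function names here are hypothetical:

      CREATE CONVERSION my_latin1_to_utf8 FOR 'LATIN1' TO 'UTF8' FROM my_latin1_to_utf8_func;
      -- now fails at creation time if my_latin1_to_utf8_func can't actually convert LATIN1 to UTF8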
* Fix an old problem in decompilation of CASE constructs: the ruleutils.c code looks for a CaseTestExpr to figure out what the parser did, but it failed to consider the possibility that an implicit coercion might be inserted above the CaseTestExpr.  (Tom Lane, 2009-02-25)
  This could result in an Assert failure in some cases (but correct results if Asserts weren't enabled), or an "unexpected CASE WHEN clause" error in other cases. Per report from Alan Li. Back-patch to 8.1; problem doesn't exist before that because CASE was implemented differently.
* Repair a longstanding bug in CLUSTER and the rewriting variants of ALTER TABLE.  (Tom Lane, 2009-02-24)
  If the command is executed by someone other than the table owner (eg, a superuser) and the table has a toast table, the toast table's pg_type row ends up with the wrong typowner, ie, the command issuer not the table owner. This is quite harmless for most purposes, since no interesting permissions checks consult the pg_type row. However, it could lead to unexpected failures if one later tries to drop the role that issued the command (in 8.1 or 8.2), or strange warnings from pg_dump afterwards (in 8.3 and up, which will allow the DROP ROLE because we don't create a "redundant" owner dependency for table rowtypes). Problem identified by Cott Lang. Back-patch to 8.1. The problem is actually far older --- the CLUSTER variant can be demonstrated in 7.0 --- but it's mostly cosmetic before 8.1 because we didn't track ownership dependencies before 8.1. Also, fixing it before 8.1 would require changing the call signature of heap_create_with_catalog(), which seems to carry a nontrivial risk of breaking add-on modules.
* Defend against null input in analyze_requires_snapshot(), per report from Rushabh Lathia.  (Tom Lane, 2009-01-30)
  Back-patch of the patch of 2009-01-08. This is necessary in 8.3, as reported by Bjorn Munch. It's not currently necessary in 8.2, AFAICS, but seems best to include it there too.
* Translation updates  (Peter Eisentraut, 2009-01-29)
* Replace argument-checking Asserts with regular test-and-elog checks in all encoding conversion functions.  (Tom Lane, 2009-01-29)
  These are not can't-happen cases because it's possible to create a conversion with the wrong conversion function for the specified encoding pair. That would lead to an Assert crash in an Assert-enabled build, or incorrect conversion otherwise, neither of which is desirable. This would be a DOS issue if production databases were customarily built with asserts enabled, but fortunately that's not so. Per an observation by Heikki. Back-patch to all supported branches.
* Go over all OpenSSL return values and make sure we compare them to the documented API value.  (Magnus Hagander, 2009-01-28)
  The previous code got it right as it's implemented, but accepted too much/too little compared to the API documentation. Per comment from Zdenek Kotala.
* Fix erroneous memory context switch in autovacuum, which was returning to a context long after it had been destroyed.  (Alvaro Herrera, 2009-01-20)
  Per problem report from Justin Pasher. Patch by Tom Lane and me. 8.3 and later do not have this bug, because this code has been restructured for unrelated reasons. In 8.2 it does not manifest as a crash, but it still seems safer to fix it nonetheless.
* Insert conditional SPI_push/SPI_pop calls into InputFunctionCall, OutputFunctionCall, and friends.  (Tom Lane, 2009-01-07)
  This allows SPI-using functions to invoke datatype I/O without concern for the possibility that a SPI-using function will be called (which could be either the I/O function itself, or a function used in a domain check constraint). It's a tad ugly, but not nearly as ugly as what'd be needed to make this work via retail insertion of push/pop operations in all the PLs. This reverts my patch of 2007-01-30 that inserted some retail SPI_push/pop calls into plpgsql; that approach only fixed plpgsql, and not any other PLs. But the other PLs have the issue too, as illustrated by a recent gripe from Christian Schröder. Back-patch to 8.2, which is as far back as this solution will work. It's also as far back as we need to worry about the domain-constraint case, since earlier versions did not attempt to check domain constraints within datatype input. I'm not aware of any old I/O functions that use SPI themselves, so this should be sufficient for a back-patch.
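  A sketch of the kind of call chain this is meant to cover: a domain whose check constraint runs a PL/pgSQL function, with the domain's input invoked from inside another PL/pgSQL (hence SPI-using) function. All object names are made up for the example.

      CREATE FUNCTION chk(i integer) RETURNS boolean LANGUAGE plpgsql AS $$
      BEGIN
        RETURN i > 0;                          -- executed through SPI by plpgsql
      END;
      $$;
      CREATE DOMAIN posint AS integer CHECK (chk(VALUE));
      CREATE TABLE nums (n posint);
      CREATE FUNCTION add_num(s text) RETURNS void LANGUAGE plpgsql AS $$
      BEGIN
        -- the literal is coerced to posint via datatype input, firing the domain check
        EXECUTE 'INSERT INTO nums VALUES (' || quote_literal(s) || ')';
      END;
      $$;
      SELECT add_num('7');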
* Fix logic in lazy vacuum to decide if it's worth trying to truncate the heap.  (Heikki Linnakangas, 2009-01-06)
  If the table was smaller than REL_TRUNCATE_FRACTION (= 16) pages, we always tried to acquire AccessExclusiveLock on it even if there were no empty pages at the end. Report by Simon Riggs. Back-patch all the way to 7.4.
* Make heap_update() set newtup->t_tableOid correctly, for consistency with the other major heapam.c functions.  (Tom Lane, 2008-12-16)
  The only known consequence of this omission is that UPDATE RETURNING failed to return the correct value for "tableoid", as per report from KaiGai Kohei. Back-patch to 8.2. Arguably it's wrong all the way back; but without evidence of visible breakage before RETURNING was added, I'll desist from patching the older branches.
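  The user-visible symptom, sketched with a hypothetical table t and column x:

      UPDATE t SET x = x + 1 RETURNING tableoid::regclass, x;
      -- before the fix the returned tableoid could be incorrect; it now identifies t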
* Fix failure to ensure that a snapshot is available to datatype input functions when they are invoked by the parser.  (Tom Lane, 2008-12-13)
  We had been setting up a snapshot at plan time but really it needs to be done earlier, before parse analysis. Per report from Dmitry Koterov. Also fix two related problems discovered while poking at this one: exec_bind_message called datatype input functions without establishing a snapshot, and SET CONSTRAINTS IMMEDIATE could call trigger functions without establishing a snapshot. Backpatch to 8.2. The underlying problem goes much further back, but it is masked in 8.1 and before because we didn't attempt to invoke domain check constraints within datatype input. It would only be exposed if a C-language datatype input function used the snapshot; which evidently none do, or we'd have heard complaints sooner. Since this code has changed a lot over time, a back-patch is hardly risk-free, and so I'm disinclined to patch further than absolutely necessary.
* Fix an oversight in the code that makes transitive-equality deductions from outer join clauses.  (Tom Lane, 2008-12-01)
  Given, say,
      ... from a left join b on a.a1 = b.b1 where a.a1 = 42;
  we'll deduce a clause b.b1 = 42 and then mark the original join clause redundant (we can't remove it completely for reasons I don't feel like squeezing into this log entry). However, the original implementation of that wasn't bulletproof, because clause_selectivity() wouldn't honor this_selec if given nonzero varRelid --- which in practice meant that it worked as desired *except* when considering index scan quals. Which resulted in bogus underestimation of the size of the indexscan result for an inner indexscan in an outer join, and consequently a possibly bad choice of indexscan vs. bitmap scan. Fix by introducing an explicit test into clause_selectivity(). Also, to make sure we don't trigger that test in corner cases, change the convention to be that this_selec > 1, not this_selec = 1, means it's been marked redundant. Per trouble report from Scara Maccai. Back-patch to 8.2, where the problem was introduced.
* Ensure that the contents of a holdable cursor don't depend on out-of-line toasted values, since those could get dropped once the cursor's transaction is over.  (Tom Lane, 2008-12-01)
  Per bug #4553 from Andrew Gierth. Back-patch as far as 8.1. The bug actually exists back to 7.4 when holdable cursors were introduced, but this patch won't work before 8.1 without significant adjustments. Given the lack of field complaints, it doesn't seem worth the work (and risk of introducing new bugs) to try to make a patch for the older branches.
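  A sketch of the scenario being protected against; table and column names are hypothetical:

      BEGIN;
      DECLARE c CURSOR WITH HOLD FOR SELECT big_text_col FROM docs;
      COMMIT;                 -- the cursor's contents are materialized here and must stand alone
      DROP TABLE docs;        -- removes the toast table holding the out-of-line values
      FETCH ALL FROM c;       -- with the fix, this still returns complete rows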
* Remove inappropriate memory context switch in shutdown_MultiFuncCall().  (Tom Lane, 2008-11-30)
  This was a thinko introduced in a patch from last February; it results in memory leakage if an SRF is shut down before the actual end of query, because subsequent code will be running in a longer-lived context than it's expecting to be.
* In predtest.c, install a limit on the number of branches we will process in AND, OR, or equivalent clauses: if there are too many (more than 100) just exit without proving anything.  (Tom Lane, 2008-11-12)
  This ensures that we don't spend O(N^2) time trying (and most likely failing) to prove anything about very long IN lists and similar cases. Also, install a couple of CHECK_FOR_INTERRUPTS calls to ensure that a long proof attempt can be interrupted. Per gripe from Sergey Konoplev. Back-patch the whole patch to 8.2 and just the CHECK_FOR_INTERRUPTS addition to 8.1. (The rest of the patch doesn't apply cleanly, and since 8.1 doesn't show the complained-of behavior anyway, it doesn't seem necessary to work hard on it.)
* Get rid of adjust_appendrel_attr_needed(), which has been broken ever since we extended the appendrel mechanism to support UNION ALL optimization.  (Tom Lane, 2008-11-11)
  The reason nobody noticed was that we are not actually using attr_needed data for appendrel children; hence it seems more reasonable to rip it out than fix it. Back-patch to 8.2 because an Assert failure is possible in corner cases. Per examination of an example from Jim Nasby. In HEAD, also get rid of AppendRelInfo.col_mappings, which is quite inadequate to represent UNION ALL situations; depend entirely on translated_vars instead.
* Translation updates  (Peter Eisentraut, 2008-10-30)
* Install a more robust solution for the problem of infinite error-processing recursion when we are unable to convert a localized error message to the client's encoding.  (Tom Lane, 2008-10-27)
  We've been over this ground before, but as reported by Ibrar Ahmed, it still didn't work in the case of conversion failures for the conversion-failure message itself :-(. Fix by installing a "circuit breaker" that disables attempts to localize this message once we get into recursion trouble. Patch all supported branches, because it is in fact broken in all of them; though I had to add some missing translations to the older branches in order to expose the failure in the particular test case I was using.
* Better solution to the IN-list issue: instead of having an arbitrary cutoff, treat Var and non-Var IN-list items differently.  (Tom Lane, 2008-10-26)
  Only non-Var items are candidates to go into an ANY(ARRAY) construct --- we put all Vars as separate OR conditions on the grounds that that leaves more scope for optimization. Per suggestion from Robert Haas.
* Add a heuristic to transformAExprIn() to make it prefer expanding "x IN (list)" into an OR of equality comparisons, rather than x = ANY(ARRAY[...]), when there are Vars in the right-hand side.  (Tom Lane, 2008-10-25)
  This avoids a performance regression compared to pre-8.2 releases, in cases where the OR form can be optimized into scans of multiple indexes. Limit the possible downside by preferring this form only when the list isn't very long (I set the cutoff at 32 elements, which is a bit arbitrary but in the right ballpark). Per discussion with Jim Nasby. In passing, also make it try the OR form if it cannot select a common type for the array elements; we've seen a complaint or two about how the OR form worked for such cases and ARRAY doesn't.
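  To illustrate the two expansions discussed in the last two entries (table and column names are hypothetical; the rewritten form is internal and visible via EXPLAIN):

      SELECT * FROM t WHERE x IN (1, 2, 3);        -- still becomes x = ANY (ARRAY[1, 2, 3])
      SELECT * FROM t WHERE x IN (col_a, col_b);   -- now becomes (x = col_a OR x = col_b),
                                                   -- which can use an index on each comparison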
* Fix an old bug in after-trigger handling: AfterTriggerEndQuery took the address of afterTriggers->query_stack[afterTriggers->query_depth] and hung onto it through all its firings of triggers.  (Tom Lane, 2008-10-25)
  However, if a trigger causes sufficiently many nested query executions, query_stack will get repalloc'd bigger, leaving AfterTriggerEndQuery --- and hence afterTriggerInvokeEvents --- using a stale pointer. So far as I can find, the only consequence of this error is to stomp on a couple of words of already-freed memory; which would lead to a failure only if that chunk had already gotten re-allocated for something else. So it's hard to exhibit a simple failure case, but this is surely a bug. I noticed this while working on my recent patch to reduce pending-trigger space usage. The present patch is mighty ugly, because it requires making afterTriggerInvokeEvents know about all the possible event lists it might get called on. Fortunately, this is only needed in back branches because CVS HEAD avoids the problem in a different way: afterTriggerInvokeEvents only touches the passed AfterTriggerEventList pointer once at startup. Back branches are stable enough that wiring in knowledge of all possible call usages doesn't seem like a killer problem. Back-patch to 8.0. 7.4's trigger code is completely different and doesn't seem to have the problem (it doesn't even use repalloc).
* Fix GiST tuple killing: GISTScanOpaque->curpos wasn't set correctly.  (Teodor Sigaev, 2008-10-22)
  As a result, killtuple() marked the wrong tuple on the page as dead. The bug was introduced by me while fixing possible duplicates during GiST index scans.
* Fix a small memory leak in ExecReScanAgg() in the hashed aggregation case.  (Neil Conway, 2008-10-16)
  In the previous coding, the list of columns that needed to be hashed on was allocated in the per-query context, but we reallocated it every time the Agg node was rescanned. Since this information doesn't change over a rescan, just construct the list of columns once during ExecInitAgg().
* Fix SPI_getvalue and SPI_getbinval to range-check the given attribute number according to the TupleDesc's natts, not the number of physical columns in the tuple.  (Tom Lane, 2008-10-16)
  The previous coding would do the wrong thing in cases where natts is different from the tuple's column count: either incorrectly report an error when it should just treat the column as null, or actually crash due to indexing off the end of the TupleDesc's attribute array. (The second case is probably not possible in modern PG versions, due to more careful handling of inheritance cases than we once had. But it's still a clear lack of robustness here.) The incorrect error indication is ignored by all callers within the core PG distribution, so this bug has no symptoms visible within the core code, but it might well be an issue for add-on packages. So patch all the way back.
* When a relation is moved to another tablespace, we can't assume that we can use the old relfilenode in the new tablespace.  (Heikki Linnakangas, 2008-10-07)
  There might be another relation in the new tablespace with the same relfilenode, so we must generate a fresh relfilenode in the new tablespace. The 8.3 patch to let deleted relation files linger as zero-length files until the next checkpoint made this more obvious: moving a relation from one tablespace to another, and then back again, caused a collision with the lingering file. Back-patch to 8.1. The issue is present in 8.0 as well, but it doesn't seem worth fixing there, because we didn't have protection from OID collisions after OID wraparound before 8.1. Report by Guillaume Lelarge.
* Fix improper display of fractional seconds in interval values when using --enable-integer-datetimes and a non-ISO datestyle.  (Tom Lane, 2008-10-02)
  Ron Mayer
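  A sketch of the case involved, assuming a server built with --enable-integer-datetimes; the 'Postgres' setting is just one example of a non-ISO datestyle:

      SET datestyle = 'Postgres';
      SELECT interval '1 minute 0.7 seconds';   -- fractional seconds now display correctly in this style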
* Fix more problems with rewriter failing to set Query.hasSubLinks when inserting a SubLink expression into a rule query.  (Tom Lane, 2008-09-24)
  We missed cases where the original query contained a sub-SELECT in a function in FROM, a multi-row VALUES list, or a RETURNING list. Per bug #4434 from Dean Rasheed and subsequent investigation. Back-patch to 8.1; older releases don't have the issue because they didn't try to be smart about setting hasSubLinks only when needed.
* Initialize the minimum frozen Xid in vac_update_datfrozenxid using GetOldestXmin() instead of RecentGlobalXmin.  (Alvaro Herrera, 2008-09-11)
  This is safer because we do not depend on the latter being correctly set elsewhere, and while it is more expensive, this code path is not performance-critical. This is a real risk for autovacuum, because it can execute whole cycles without doing a single vacuum, which would mean that RecentGlobalXmin would stay at its initialization value, FirstNormalTransactionId, causing a bogus value to be inserted in pg_database. This bug could explain some recent reports of failure to truncate pg_clog. At the same time, change the initialization of RecentGlobalXmin to InvalidTransactionId, and ensure that it's set to something else whenever it's going to be used. Using it as FirstNormalTransactionId in HOT page pruning could incur data loss. InitPostgres takes care of setting it to a valid value, but the extra checks are there to prevent "special" backends from behaving in unusual ways. Per Tom Lane's detailed problem dissection in 29544.1221061979@sss.pgh.pa.us
* Fix possible duplicate tuples during a GiST index scan.  (Teodor Sigaev, 2008-08-23)
  Now a page is processed all at once and the ItemPointers are collected in memory. Remove tuple killing by killtuple() if the tuple was moved to another page - it could produce unacceptable overhead. Backpatch up to 8.1 because the bug was introduced by GiST's concurrency support.
* Fix pull_up_simple_union_all to copy all rtable entries from child subquery to parent, not only those with RangeTblRefs.  (Heikki Linnakangas, 2008-08-14)
  We need them in ExecCheckRTPerms. Report by Brendan O'Shea. Back-patch to 8.2, where pull_up_simple_union_all was introduced.
* Install checks in executor startup to ensure that the tuples produced by an INSERT or UPDATE will match the target table's current rowtype.  (Tom Lane, 2008-08-08)
  In pre-8.3 releases inconsistency can arise with stale cached plans, as reported by Merlin Moncure. (We patched the equivalent hazard on the SELECT side in Feb 2007; I'm not sure why we thought there was no risk on the insertion side.) In 8.3 and HEAD this problem should be impossible due to plan cache invalidation management, but it seems prudent to make the check anyway. Back-patch as far as 8.0. 7.x versions lack ALTER COLUMN TYPE, so there seems no way to abuse a stale plan comparably.
* Do not allow Unique nodes to be scanned backwards.  (Tom Lane, 2008-08-05)
  The code claimed that it would work, but in fact it didn't return the same rows when moving backwards as when moving forwards. This would have no visible effect in a DISTINCT query (at least assuming the column datatypes use a strong definition of equality), but it gave entirely wrong answers for DISTINCT ON queries.
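  A sketch of how the problem could surface, using a scrollable cursor over a DISTINCT ON query; the table and columns are hypothetical:

      BEGIN;
      DECLARE c SCROLL CURSOR FOR
        SELECT DISTINCT ON (grp) grp, val FROM samples ORDER BY grp, val;
      FETCH FORWARD 3 FROM c;
      FETCH BACKWARD 3 FROM c;   -- before the fix, this could return different rows than the forward fetch
      COMMIT;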
* Fix parsing of LDAP URLs so it doesn't reject spaces in the "suffix" part.  (Tom Lane, 2008-07-24)
  Per report from César Miguel Oliveira Alves.
* Fix an oversight in the original implementation of performMultipleDeletions(): the alreadyDeleted list has to be passed down through deleteDependentObjects(), else objects that are deleted via auto/internal dependencies don't get reported back up to performMultipleDeletions().  (Tom Lane, 2008-07-11)
  Depending on the visitation order, this could cause the code to try to delete an already-deleted object, leading to strange errors in DROP OWNED (typically "cache lookup failed for relation NNNNN" or similar). Per bug #4289. Patch for back branches only. This code has recently been rewritten in HEAD, and doesn't have this particular bug anymore.