OID or new relfilenode. If the existing OIDs are sufficiently densely
populated, this could take a long time (perhaps even be an infinite loop),
so it seems wise to allow the system to respond to a cancel interrupt here.
Per a gripe from Jacky Leng.
Backpatch as far as 8.1. Older versions just fail on OID collision,
instead of looping.
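
A sketch of the loop shape being described, with the collision test reduced to a hypothetical helper (the real code probes the relation for an existing OID):

    /* Keep generating candidate OIDs until an unused one turns up, but
     * let a query cancel break out if the OID space is densely populated. */
    Oid         newOid;
    bool        collides;

    do
    {
        CHECK_FOR_INTERRUPTS();     /* respond to cancel inside the loop */
        newOid = GetNewObjectId();  /* next candidate from the OID counter */
        collides = oid_is_in_use(relation, newOid);   /* hypothetical check */
    } while (collides);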
calculating a page's initial free space was fine, and should not have been
"improved" by letting PageGetHeapFreeSpace do it. VACUUM FULL is going to
reclaim LP_DEAD line pointers later, so there is no need for a guard
against the page being too full of line pointers, and having one risks
rejecting pages that are perfectly good move destinations.
This also exposed a second bug, which is that the empty_end_pages logic
assumed that any page with no live tuples would get entered into the
fraged_pages list automatically (by virtue of having more free space than
the threshold in the do_frag calculation). This assumption certainly
seems risky when a low fillfactor has been chosen, and even without
tunable fillfactor I think it could conceivably fail on a page with many
unused line pointers. So fix the code to force do_frag true when notup
is true, and patch this part of the fix all the way back.
Per report from Tomas Szepe.
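
A minimal sketch of the second part of the fix, with illustrative variable names rather than the actual vacuum.c code:

    /* A page with no live tuples must always reach fraged_pages, even if
     * fillfactor or dead line pointers keep its free space under the
     * threshold used by the do_frag calculation. */
    do_frag = (free_space >= min_free_threshold);
    if (notup)
        do_frag = true;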
statement be a list of bare C strings, rather than String nodes, which is
what they need to be for copyfuncs/equalfuncs to work. Fortunately these
node types never go out to disk (if they did, we'd likely have noticed the
problem sooner), so we can just fix it without creating a need for initdb.
This bug has been there since 8.0, but 8.3 exposes it in a more common
code path (Parse messages) than prior releases did. Per bug #3940 from
Vladimir Kokovic.
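
A small sketch of the difference, using the standard list- and value-node helpers; the variable names are illustrative:

    /* Wrong: a bare C string in the List; copyObject()/equal() cannot
     * handle it, since they expect a Node-tagged element. */
    argtypes = lappend(argtypes, typname);

    /* Right: wrap the string in a String node so copyfuncs/equalfuncs work. */
    argtypes = lappend(argtypes, makeString(pstrdup(typname)));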
erroring out of a wait. We can use a PG_TRY block for this, but add a comment
explaining why it'd be a bad idea to use it for any other state cleanup.
Back-patch to 8.2. Prior releases had the same issue, but only with respect
to the process title, which is likely to get reset almost immediately anyway
after the transaction aborts, so it seems not worth changing them. In 8.2
and HEAD, the pg_stat_activity "waiting" flag could remain set incorrectly
for a long time.
Per report from Gurjeet Singh.
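
Roughly the shape of such a fix using the standard PG_TRY machinery; the flag-setting helper here is illustrative, not the actual function:

    set_waiting_flag(true);         /* hypothetical: pg_stat_activity.waiting */
    PG_TRY();
    {
        wait_for_lock();            /* hypothetical wait that may ereport(ERROR) */
    }
    PG_CATCH();
    {
        set_waiting_flag(false);    /* clear the flag before propagating */
        PG_RE_THROW();
    }
    PG_END_TRY();
    set_waiting_flag(false);        /* normal exit path */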
Should fix a problem where two clusters are running under
two different service accounts and get colliding names,
causing only the first cluster to contain the pgident
event description.
Per report from Stephen Denne.
cost-limit vacuum code. Per trouble report from Joshua Drake.
subquery output column exactly once left-to-right. Although this is the case
in the original parser output, it might not be so after rewriting and
constant-folding, as illustrated by bug #3882 from Jan Mate. Instead
scan the subquery's target list to obtain needed per-column information;
this is duplicative of what the parser did, but only a couple dozen lines
need be copied, and we can clean up a couple of notational uglinesses.
Bug was introduced in 8.2 as part of revision of SubLink representation.
constraint yields TRUE for every row of its table, only that it does not
yield FALSE (a NULL result isn't disallowed). This breaks a couple of
implications that would be true in two-valued logic. I had put in one such
mistake in an 8.2.5 patch: foo IS NULL doesn't refute a strict operator
on foo. But there was another in the original 8.2 release: NOT foo doesn't
refute an expression whose truth would imply the truth of foo.
Per report from Rajesh Kumar Mallah.
To preserve the ability to do constraint exclusion with one partition
holding NULL values, extend relation_excluded_by_constraints() to check
for attnotnull flags, and add col IS NOT NULL expressions to the set of
constraints we hope to refute.
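
A sketch of that extension, built from standard node constructors; treat the surrounding variables (att, varno, constraints) as illustrative:

    /* For each NOT NULL column, add "col IS NOT NULL" to the set of
     * constraints available for refutation proofs. */
    if (att->attnotnull)
    {
        NullTest   *ntest = makeNode(NullTest);

        ntest->arg = (Expr *) makeVar(varno, att->attnum,
                                      att->atttypid, att->atttypmod, 0);
        ntest->nulltesttype = IS_NOT_NULL;
        constraints = lappend(constraints, (Node *) ntest);
    }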
clauseless joins of relations that have unexploited join clauses. Rather
than looking at every other base relation in the query, the correct thing is
to examine the other relations in the "initial_rels" list of the current
make_rel_from_joinlist() invocation, because those are what we actually have
the ability to join against. This might be a subset of the whole query in
cases where join_collapse_limit or from_collapse_limit or full joins have
prevented merging the whole query into a single join problem. This is a bit
untidy because we have to pass those rels down through a new PlannerInfo
field, but it's necessary. Per bug #3865 from Oleg Kharin.
report of poor planning in 8.3: it's unsafe to push a constant across an
outer join when the outer-join condition is delayed by lower outer joins,
unless we recheck the outer-join condition at the upper outer join.
8.2.x doesn't really have the ability to tell whether this is the case
or not, but fortunately it doesn't matter --- it seems most desirable to
keep the join condition whether it's entirely redundant or not. However,
it's usually mostly redundant, so force its selectivity to 1.0.
It might be a good idea to back-patch this into 8.1 as well, but I'll
refrain until/unless there's evidence that 8.1 actually fails on any
cases that this would fix.
constant ORDER/GROUP BY entries properly:
http://archives.postgresql.org/pgsql-hackers/2001-04/msg00457.php
The original solution to that was in fact no good, as demonstrated by
today's report from Martin Pitt:
http://archives.postgresql.org/pgsql-bugs/2008-01/msg00027.php
We can't use the column-number-reference format for a constant that is
a resjunk targetlist entry, a case that was unfortunately not thought of
in the original discussion. What we can do instead (which did not work
at the time, but does work in 7.3 and up) is to emit the constant with
explicit ::typename decoration, even if it otherwise wouldn't need it.
This is sufficient to keep the parser from thinking it's a column number
reference, and indeed is probably what the user must have done to get
such a thing into the querytree in the first place.
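
What "explicit ::typename decoration" amounts to when deparsing, as a hedged sketch around ruleutils-style helpers (literal_text stands in for the already-quoted constant):

    /* Emit the constant with an explicit cast, so a re-parse cannot take
     * a bare integer such as "ORDER BY 1" for a column-number reference. */
    appendStringInfo(buf, "%s::%s",
                     literal_text,
                     format_type_with_typemod(con->consttype,
                                              con->consttypmod));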
and CLUSTER) execute as the table owner rather than the calling user, using
the same privilege-switching mechanism already used for SECURITY DEFINER
functions. The purpose of this change is to ensure that user-defined
functions used in index definitions cannot acquire the privileges of a
superuser account that is performing routine maintenance. While a function
used in an index is supposed to be IMMUTABLE and thus not able to do anything
very interesting, there are several easy ways around that restriction; and
even if we could plug them all, there would remain a risk of reading sensitive
information and broadcasting it through a covert channel such as CPU usage.
To prevent bypassing this security measure, execution of SET SESSION
AUTHORIZATION and SET ROLE is now forbidden within a SECURITY DEFINER context.
Thanks to Itagaki Takahiro for reporting this vulnerability.
Security: CVE-2007-6600
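
A sketch of the accompanying guard, with the context test reduced to an assumed helper and the message text approximate:

    /* Reject SET SESSION AUTHORIZATION / SET ROLE while running with
     * switched privileges, so the restricted context can't be escaped. */
    if (in_security_definer_context())      /* assumed helper */
        ereport(ERROR,
                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
                 errmsg("cannot set parameter \"%s\" within security-definer function",
                        name)));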
are shared with Tcl, since it's their code to begin with, and the patches
have been copied from Tcl 8.5.0. Problems:
CVE-2007-4769: Inadequate check on the range of backref numbers allows
crash due to out-of-bounds read.
CVE-2007-4772: Infinite loop in regex optimizer for pattern '($|^)*'.
CVE-2007-6067: Very slow optimizer cleanup for regex with a large NFA
representation, as well as crash if we encounter an out-of-memory condition
during NFA construction.
Part of the response to CVE-2007-6067 is to put a limit on the number of
states in the NFA representation of a regex. This seems needed even though
the within-the-code problems have been corrected, since otherwise the code
could try to use very large amounts of memory for a suitably-crafted regex,
leading to potential DOS by driving the system into swap, activating a kernel
OOM killer, etc.
Although there are certainly plenty of ways to drive the system into effective
DOS with poorly-written SQL queries, these problems seem worth treating as
security issues because many applications might accept regex search patterns
from untrustworthy sources.
Thanks to Will Drewry of Google for reporting these problems. Patches by Will
Drewry and Tom Lane.
Security: CVE-2007-4769, CVE-2007-4772, CVE-2007-6067
since these seem to happen after all in corrupted indexes. Make sure we
supply the index name in all cases, and provide relevant block numbers where
available. Also consistently identify the index name as such.
Back-patch to 8.2, in hopes that this might help Mason Hale figure out his
problem.
The zero-point case is sensible so far as the data structure is concerned,
so maybe we ought to allow it sometime; but right now the textual input
routines for these types don't allow it, and it seems that not all the
functions for the types are prepared to cope.
Report and patch by Merlin Moncure.
out that it's actually quite likely that a string that is an extension of
the given prefix will sort as larger than the "greater" string our previous
code created. To provide some defense against that, do the comparisons
against a modified string instead of just the bare prefix. We tack on
"Z", "z", "y", or "9", whichever is seen as largest in the current locale.
Testing suggests that this is sufficient at least for cases involving
ASCII data.
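
A sketch of how the largest suffix byte might be chosen; varstr_cmp is the locale-aware comparator, the rest is illustrative:

    /* Of the candidate suffix bytes, keep whichever one the current
     * locale's collation considers largest. */
    static char candidates[] = "Zzy9";
    char        best = candidates[0];
    int         i;

    for (i = 1; i < 4; i++)
    {
        if (varstr_cmp(&candidates[i], 1, &best, 1) > 0)
            best = candidates[i];
    }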
whole table instead, to ensure that it goes away when the table is dropped.
Per bug #3723 from Sam Mason.
Backpatch as far as 7.4; AFAICT 7.3 does not have the issue, because it doesn't
have general-purpose expression indexes and so there must be at least one
column referenced by an index.
make_greater_string() try harder to generate a string that's actually greater
than its input string. Before we just assumed that making a string that was
memcmp-greater was enough, but it is easy to generate examples where this is
not so when the locale is not C. Instead, loop until the relevant comparison
function agrees that the generated string is greater than the input.
Unfortunately this is probably not enough to guarantee that the generated
string is greater than all extensions of the input, so we cannot relax the
restriction to C locale for the LIKE/regex index optimization. But it should
at least improve the odds of getting a useful selectivity estimate in
prefix_selectivity(). Per example from Guillaume Smet.
Backpatch to 8.1, mainly because that's what the complainant is using...
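
A simplified sketch of the retry loop (byte incrementing shown naively, and the helper that converts the buffer back to a constant is assumed):

    /* Don't trust memcmp order: keep bumping the last byte until the
     * locale-aware comparison agrees the candidate really sorts greater. */
    while (len > 0)
    {
        unsigned char *lastchar = (unsigned char *) (workstr + len - 1);

        while (*lastchar < 255)
        {
            (*lastchar)++;
            if (varstr_cmp(workstr, len, patt, pattlen) > 0)
                return string_to_const(workstr, datatype);  /* assumed helper */
        }
        len--;              /* that byte is exhausted; truncate and retry */
    }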
negated-match operators. patternsel had been using the supplied operator as
though it were a positive-match operator, and thus obtaining a wrong result,
which was even more wrong after the caller subtracted it from 1. Seems
cleanest to give patternsel an explicit "negate" argument so that it knows
what's going on. Also install the same factorization scheme for pattern
join selectivity estimators; even though they are just stubs at the
moment, this may keep someone from making the same type of mistake when
they get filled out. Per report from Greg Mullane.
Backpatch to 8.2 --- previous releases do not show the problem because
patternsel() doesn't actually use the operator directly.
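
The factorization being described, sketched with a hypothetical positive-match helper; the wrapper follows the style of the pattern-selectivity entry points:

    static double
    patternsel(PG_FUNCTION_ARGS, Pattern_Type ptype, bool negate)
    {
        double      result = match_selectivity(fcinfo, ptype); /* hypothetical */

        if (negate)
            result = 1.0 - result;  /* negation handled where it's known */
        return result;
    }

    Datum
    icnlikesel(PG_FUNCTION_ARGS)    /* NOT ILIKE estimator */
    {
        PG_RETURN_FLOAT8(patternsel(fcinfo, Pattern_Type_Like_IC, true));
    }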
ginRedoInsert(), because the other ginRedo* functions rewrite the whole page or
make changes that can be applied several times without loss of consistency.
- Remove the check that identifies the corresponding split record:
it's possible for WAL replay to start after the actual page split but before
that split is removed from the incomplete-splits list, in which case the check
causes a FATAL error.
Per a stress test that reproduces the bug reported by Craig McElroy
<craig.mcelroy@contegix.com>
usage of any information from the system catalogs, because it can be called
during WAL replay.
Per bug report from Craig McElroy <craig.mcelroy@contegix.com>. The patch
doesn't change on-disk storage.
if either of the input relations can legally be joined to any other rels using
join clauses. This avoids uselessly (and expensively) considering a lot of
really stupid join paths when there is a join restriction with a large
footprint, that is, lots of relations inside its LHS or RHS. My patch of
15-Feb-2007 had been causing the code to consider joining *every* combination
of rels inside such a group, which is exponentially bad :-(. With this
behavior, clauseless bushy joins will be done if necessary, but they'll be
put off as long as possible. Per report from Jakub Ouhrabka.
Backpatch to 8.2. We might someday want to backpatch to 8.1 as well, but 8.1
does not have the problem for OUTER JOIN nests, only for IN-clauses, so it's
not clear anyone's very likely to hit it in practice; and the current patch
doesn't apply cleanly to 8.1.
of the sequence. Since OWNED BY never existed before 8.2, this seems
unlikely to create any compatibility issues. Other forms of ALTER SEQUENCE
continue to do what they did before, namely update currval to match the
sequence's actual last_val. That seems wrong on consideration, but we'll
not change it in a minor release --- 8.3 will make that fix.
neglected to test whether an outer join's join-condition actually refers to
the lower outer join it is looking at. (The comment correctly described what
was supposed to happen, but the code didn't do it...) This often resulted in
adding an unnecessary constraint on the join order of the two outer joins,
which was bad enough. However, it also seems to expose a performance
problem in an older patch (from 15-Feb): once we've decided that there is a
join ordering constraint, we will start trying clauseless joins between every
combination of rels within the constraint, which pointlessly eats up lots of
time and space if there are numerous rels below the outer join. That probably
needs to be revisited :-(. Per gripe from Jakub Ouhrabka.
it affects. The original coding neglected tablespace entirely (causing
the indexes to move to the database's default tablespace) and for an index
belonging to a UNIQUE or PRIMARY KEY constraint, it would actually try to
assign the parent table's reloptions to the index :-(. Per bug #3672 and
subsequent investigation.
8.0 and 8.1 did not have reloptions, but the tablespace bug is present.
simplification gets detoasted before it is incorporated into a Const node.
Otherwise, if an immutable function were to return a TOAST pointer (an
unlikely case, but it can be made to happen), we would end up with a plan
that depends on the continued existence of the out-of-line toast datum.
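
The core of such a fix, sketched with fmgr's detoast macro; the variable names are illustrative:

    /* If the function result is a varlena, flatten any out-of-line TOAST
     * pointer before the value is wrapped in a Const node. */
    if (resultTypLen == -1 && !resultIsNull)
        resultDatum = PointerGetDatum(PG_DETOAST_DATUM_COPY(resultDatum));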
eval_const_expressions simplifies this to just "WHERE false", but we have
already done pull_up_IN_clauses so the IN join will be done, or at least
planned, anyway. The trouble case comes when the sub-SELECT is itself a join
and we decide to implement the IN by unique-ifying the sub-SELECT outputs:
with no remaining reference to the output Vars in WHERE, we won't have
propagated the Vars up to the upper join point, leading to "variable not found
in subplan target lists" error. Fix by adding an extra scan of in_info_list
and forcing all Vars mentioned therein to be propagated up to the IN join
point. Per bug report from Miroslav Sulc.
CREATE INDEX CONCURRENTLY). Such an index might not have entries for every
heap row and thus clustering with it would result in silent data loss.
The scenario requires a pretty foolish DBA, but still ...
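
A sketch of the guard this implies, using the standard error-reporting calls:

    /* An invalid index (e.g. from a failed CREATE INDEX CONCURRENTLY) may
     * lack entries for some heap rows; clustering with it would lose data. */
    if (!OldIndex->rd_index->indisvalid)
        ereport(ERROR,
                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                 errmsg("cannot cluster on invalid index \"%s\"",
                        RelationGetRelationName(OldIndex))));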
recovery stop time was used. This avoids a corner-case risk of trying to
overwrite an existing archived copy of the last WAL segment, and seems
simpler and cleaner all around than the original definition. Per example
from Jon Colverson and subsequent analysis by Simon.
happen condition can happen given incorrect input. The real problem is that
gram.y should try harder to distinguish * from "*" --- the latter is a legal
column name per spec, and someday we ought to treat it that way. However
fixing that is too invasive for a back-patch, and it's too late for the 8.3
cycle too. So just reduce the Assert to a plain elog for now. Per report
from NikhilS.
table, by allocating just enough for a hardcoded number of dead tuples per
page. The current estimate is 200 dead tuples per page.
Per reports from Jeff Amiel, Erik Jones and Marko Kreen, and subsequent
discussion.
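
A sketch of the sizing rule described, assuming vacuumlazy.c-style variables (relblocks stands for the table's block count):

    #define LAZY_ALLOC_TUPLES  200      /* assumed dead tuples per page */

    /* Cap the dead-tuple array by a per-page estimate instead of sizing
     * it for all of maintenance_work_mem up front. */
    maxtuples = (maintenance_work_mem * 1024L) / sizeof(ItemPointerData);
    maxtuples = Min(maxtuples, (long) LAZY_ALLOC_TUPLES * relblocks);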
per ITAGAKI Takahiro. Also, rewrite syslogger_forkexec() in hopes of
eliminating the confusion in the first place.
values. The previous coding essentially assumed that x = sqrt(x*x), which
does not hold for x < 0.
Thanks to Jie Zhang at Greenplum and Gavin Sherry for reporting this
issue.
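
A self-contained illustration of why that identity fails:

    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
        double      x = -3.0;

        /* sqrt(x*x) equals fabs(x), not x, whenever x < 0 */
        printf("x = %g, sqrt(x*x) = %g, fabs(x) = %g\n",
               x, sqrt(x * x), fabs(x));
        return 0;
    }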
Seems to have been introduced in 8.1 by careless SECS_PER_DAY
search-and-replace.
no-longer-needed pages at the end of a table. We thought we could throw away
pages containing HEAPTUPLE_DEAD tuples; but this is not so, because such
tuples very likely have index entries pointing at them, and we wouldn't have
removed the index entries. The problem only emerges in a somewhat unlikely
race condition: the dead tuples have to have been inserted by a transaction
that later aborted, and this has to have happened between VACUUM's initial
scan of the page and then rechecking it for empty in count_nondeletable_pages.
But that timespan will include an index-cleaning pass, so it's not all that
hard to hit. This seems to explain a couple of previously unsolved bug
reports.
was removed.
recover from elog(ERROR). Problem was created by introduction of hash seq
search tracking awhile back, and affects all branches that have bgwriter;
in HEAD the disease has snuck into autovacuum and walwriter too. (Not sure
that the latter two use hash_seq_search at the moment, but surely they might
someday.) Per report from Sergey Koposov.
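
For scans that may stop early, dynahash's seq-search tracking has to be released explicitly; a usage sketch, with the table, entry type, and predicate illustrative:

    HASH_SEQ_STATUS status;
    MyEntry    *entry;              /* illustrative entry type */

    hash_seq_init(&status, my_hash_table);
    while ((entry = hash_seq_search(&status)) != NULL)
    {
        if (is_the_one_we_want(entry))  /* hypothetical predicate */
        {
            hash_seq_term(&status);     /* release the scan tracking */
            break;
        }
    }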
database-wide editions.
Per report from bitsandbytes88 <at> hotmail.com and subsequent discussion.
an exclusive lock on the table at this point, which we want to release as soon
as possible. This is called in the phase of lazy vacuum where we truncate the
empty pages at the end of the table.
An alternative solution would be to lower the vacuum delay settings before
starting the truncating phase, but this doesn't work very well in autovacuum
due to the autobalancing code (which can cause other processes to change our
cost delay settings). This case could be considered in the balancing code, but
it is simpler this way.
big misalignment, then it tries to split the page based on the distribution
of the boxes' centers.
Per report from Dolafi, Tom <dolafit@janelia.hhmi.org>
Backpatch is needed; the changes don't affect on-disk storage.
the number of rows likely to be produced by a query such as
SELECT * FROM t1 LEFT JOIN t2 USING (key) WHERE t2.key IS NULL;
What this is doing is selecting for t1 rows with no match in t2, and thus
it may produce a significant number of rows even if the t2.key table column
contains no nulls at all. 8.2 thinks the table column's null fraction is
relevant and thus may estimate no rows out, which results in terrible plans
if there are more joins above this one. A proper fix for this will involve
passing much more information about the context of a clause to the selectivity
estimator functions than we ever have. There's no time left to write such a
patch for 8.3, and it wouldn't be back-patchable into 8.2 anyway. Instead,
put in an ad-hoc test to defeat the normal table-stats-based estimation when
an IS NULL test is evaluated at an outer join, and just use a constant
estimate instead --- I went with 0.5 for lack of a better idea. This won't
catch every case but it will catch the typical ways of writing such queries,
and it seems unlikely to make things worse for other queries.
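
The ad-hoc test amounts to a constant fallback, sketched here with an assumed condition:

    /* Table-level null-fraction stats are meaningless for an IS NULL test
     * applied above an outer join; use a flat guess instead. */
    if (nulltest_evaluated_at_outer_join)   /* assumed condition */
        return (Selectivity) 0.5;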
generating the tuples has resjunk output columns. This is not possible for
simple table scans but can happen when evaluating a whole-row Var for a view.
Per example from Patryk Kordylewski. The problem exists back to 8.0 but
I'm not going to risk back-patching further than 8.2 because of the many
changes in this area.
sets for outer joins, in the light of bug #3588 and additional thought and
experimentation. The original methodology was fatally flawed for nests of
more than two outer joins: it got the relationships between adjacent joins
right, but didn't always come to the right conclusions about whether a join
could be interchanged with one two or more levels below it. This was largely
caused by a mistaken idea that we should use the min_lefthand + min_righthand
sets of a sub-join as the minimum left or right input set of an upper join
when we conclude that the sub-join can't commute with the upper one. If
there's a still-lower join that the sub-join *can* commute with, this method
led us to think that that one could commute with the topmost join; which it
can't. Another problem (not directly connected to bug #3588) was that
make_outerjoininfo's processing-order-dependent method for enforcing outer
join identity #3 didn't work right: if we decided that join A could safely
commute with lower join B, we dropped all information about sub-joins under B
that join A could perhaps not safely commute with, because we removed B's
entire min_righthand from A's.
To fix, make an explicit computation of all inner join combinations that occur
below an outer join, and add to that the full syntactic relsets of any lower
outer joins that we determine it can't commute with. This method gives much
more direct enforcement of the outer join rearrangement identities, and it
turns out not to cost a lot of additional bookkeeping.
Thanks to Richard Harris for the bug report and test case.
relcache entry after having heap_close'd it. This could lead to misbehavior
if a relcache flush wiped out the cache entry meanwhile. In 8.2 there is a
very real risk of CREATE INDEX CONCURRENTLY using the wrong relid for locking
and waiting purposes. I think the bug is only cosmetic in 8.0 and 8.1,
because their transgression is limited to using RelationGetRelationName(rel)
in an ereport message immediately after heap_close, and there's no way (except
with special debugging options) for a cache flush to occur in that interval.
Not quite sure that it's cosmetic in 7.4, but seems best to patch anyway.
Found by trying to run the regression tests with CLOBBER_CACHE_ALWAYS enabled.
Maybe we should try to do that on a regular basis --- it's awfully slow,
but perhaps some fast buildfarm machine could do it once in awhile.
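
The bug pattern, reduced to a sketch; NOTICE stands in for the actual report call:

    /* Wrong: after heap_close, a relcache flush may free the entry */
    heap_close(rel, NoLock);
    elog(NOTICE, "table \"%s\"", RelationGetRelationName(rel));

    /* Right: copy what's needed while the relation is still open */
    char   *relname = pstrdup(RelationGetRelationName(rel));

    heap_close(rel, NoLock);
    elog(NOTICE, "table \"%s\"", relname);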
byte after the last full byte of the bit array, regardless of whether that
byte was part of the valid data or not. Found by buildfarm testing.
Thanks to Stefan Kaltenbrunner for nailing down the cause.
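
The safe comparison shape, as a sketch: compare whole bytes, then mask the partial byte so nothing past the valid data is read (p1/p2 are the data pointers, nbits the valid bit length):

    int         nbytes = nbits / 8;     /* full bytes of valid data */
    int         res = memcmp(p1, p2, nbytes);

    if (res == 0 && (nbits % 8) != 0)   /* touch the partial byte only if any */
    {
        unsigned char mask = (unsigned char) (0xFF << (8 - nbits % 8));

        res = (p1[nbytes] & mask) - (p2[nbytes] & mask);
    }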
row within one query: we were firing check triggers before all the updates
were done, leading to bogus failures. Fix by making the triggers queued by
an RI update go at the end of the outer query's trigger event list, thereby
effectively making the processing "breadth-first". This was indeed how it
worked pre-8.0, so the bug does not occur in the 7.x branches.
Per report from Pavel Stehule.
hash table is allocated in a child context of the agg node's memory
context, MemoryContextReset() will reset but *not* delete the child
context. Since ExecReScanAgg() proceeds to build a new hash table
from scratch (in a new sub-context), this results in leaking the
header for the previous memory context. Therefore, use
MemoryContextResetAndDeleteChildren() instead.
Credit: My colleague Sailesh Krishnamurthy at Truviso for isolating
the cause of the leak.
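
The essence of the fix, sketched against an 8.x-era AggState (build_hash_table is assumed to allocate in a fresh child context):

    /* Reset the per-aggregate context AND delete its children; a plain
     * MemoryContextReset() leaves emptied child contexts behind, leaking
     * a context header on every rescan. */
    MemoryContextResetAndDeleteChildren(aggstate->aggcontext);
    build_hash_table(aggstate);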
on Windows. This is yet another manifestation of the problem that Windows
returns time zone names that may be in a different encoding than we are using.
I've put a better solution in HEAD, but the back branches need a simple patch.
Per report from Hiroshi Saito.