| Commit message | Author | Age |
|
| |
of AND/OR clause branches that predtest.c would attempt to deal with. As
noted in bug #4721, that change disabled proof attempts for sizes of problems
that people are actually expecting it to work for. The original complaint
it was trying to solve was O(N^2) behavior for long IN-lists, so let's try
applying the limit to just ScalarArrayOpExprs rather than everything.
Another case of "foolish consistency" I fear.
Back-patch to 8.2, same as the previous patch was.
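(Illustrative sketch, not taken from the bug report: an IN list such as
WHERE x IN (1, 2, 3, 4, 5)
is represented as a single ScalarArrayOpExpr, equivalent to x = ANY ('{1,2,3,4,5}'),
so the length limit can be applied to that one node, while an explicit chain such as
WHERE x = 1 OR x = 2 OR x = 3
is an ordinary OR clause and remains eligible for proof attempts.)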
|
| |
you can end up with an unrecoverable backup if you start a new base backup
right after finishing archive recovery. In that scenario, the redo pointer of
the checkpoint that pg_start_backup() writes points to the XLOG segment where
the timeline-changing end-of-archive-recovery checkpoint is. The beginning
of that segment contains pages with the old timeline ID, and we don't accept
that in recovery unless we find a history file covering the old timeline ID.
If you omit pg_xlog from the base backup and clear the archive directory
before starting the backup, there will be no such history file available.
The bug is present in all versions since PITR was introduced in 8.0, but I'm
back-patching only back to 8.2. Earlier versions didn't have XLOG switch
records, making this fix unfeasible. Given the lack of reports until now,
it doesn't seem worthwhile to spend more effort to fix 8.0 and 8.1.
Per report and suggestion by Mikael Krantz
|
| |
to make sure that the error code is reset, as a precaution in
case the API doesn't properly reset it on success. This could
be necessary, since for certain success cases we check the error value even
though the function didn't fail.
|
| |
Fujii Masao
|
| |
part that rounds up to exactly 1.0 second. The previous coding rejected input
like "00:12:57.9999999999999999999999999999", with the exact number of nines
needed to cause failure varying depending on float-timestamp option and
possibly on platform. Obviously this should round up to the next integral
second, if we don't have enough precision to distinguish the value from that.
Per bug #4789 from Robert Kruus.
In passing, fix a missed check for fractional seconds in one copy of the
"is it greater than 24:00:00" code.
Broken all the way back, so patch all the way back.
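(Hypothetical illustration, assuming the time input path; the reported case may
have involved a different datatype:
SELECT time '00:12:57.9999999999999999999999999999';
now rounds up to 00:12:58 instead of being rejected.)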
|
| |
aggregate function. By definition, such a sub-SELECT cannot reference any
variables of query levels between itself and the aggregate's semantic level
(else the aggregate would've been assigned to that lower level instead).
So the correct, most efficient implementation is to treat the sub-SELECT as
being a sub-select of that outer query level, not the level the aggregate
syntactically appears in. Not doing so also confuses the heck out of our
parameter-passing logic, as illustrated in bug report from Daniel Grace.
Fortunately, we were already copying the whole Aggref expression up to the
outer query level, so all that's needed is to delay SS_process_sublinks
processing of the sub-SELECT until control returns to the outer level.
This has been broken since we introduced spec-compliant treatment of
outer aggregates in 7.4; so patch all the way back.
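(A query of roughly this shape exercises the case; table and column names are
hypothetical:
SELECT (SELECT max((SELECT i.y FROM inner_tab i WHERE i.x = o.x)))
FROM outer_tab o;
The max() call is written inside a scalar sub-SELECT, but its argument references
o, a variable of the top-level query, so semantically it is a top-level aggregate;
the sub-SELECT in its argument is what confused the parameter-passing logic.)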
|
| |
per approval from Helmut Tschemernjak, President.
Only back branches; files removed from CVS HEAD.
|
| |
Argentina/San_Luis, Cuba, Jordan (historical correction only), Morocco,
Palestine, Syria, Tunisia.
|
| |
from buggy user-defined picksplit to GiST.
|
| |
interval_eq() considers equal. I'm not sure how that fundamental requirement
escaped us through multiple revisions of this hash function, but there it is;
it's been wrong since interval_hash was first written for PG 7.1.
Per bug #4748 from Roman Kononov.
Backpatch to all supported releases.
This patch changes the contents of hash indexes for interval columns. That's
no particular problem for PG 8.4, since we've broken on-disk compatibility
of hash indexes already; but it will require a migration warning note in
the next minor releases of all existing branches: "if you have any hash
indexes on columns of type interval, REINDEX them after updating".
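(To illustrate the requirement; the values are mine, not from the report:
SELECT interval '1 day' = interval '24 hours';  -- true
Before the fix these two equal values could hash to different buckets, breaking
hash joins and hash indexes on interval columns. The migration advice amounts to
REINDEX INDEX my_interval_hash_index;
where my_interval_hash_index stands for any hash index on an interval column.)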
|
| |
Windows without that, but we shouldn't put bad examples where people might
copy them. Also, reformat slightly to improve the odds that pgindent
won't go nuts on this.
|
| |
This method will not catch all the different ways of spelling a path, since the locale
handling in NTFS doesn't provide an easy way to do that, but it
will hopefully solve the most common cases causing startup
problems when the backend is found in the system PATH.
Attempts to fix bug #4694.
|
| |
casting effort whenever the input value was NULL. However this prevents
application of not-null domain constraints in the cases that use this
function, as illustrated in bug #4741. Since this function isn't meant
for use in performance-critical paths anyway, this certainly seems like
another case of "premature optimization is the root of all evil".
Back-patch as far as 8.2; older versions made no effort to enforce
domain constraints here anyway.
|
| |
Give an error message and exit instead, like we do elsewhere...
Per report from Wez Furlong and Robert Treat.
|
| |
at the same instant as a new backend is spawned. Since CountActiveBackends()
doesn't hold ProcArrayLock, it needs to be prepared for the case that a
pointer at the end of the proc array is still NULL even though numProcs says
it should be valid. Backpatch to 8.1.
8.0 and earlier had this right, but it was broken in the split of PGPROC and
sinval shared memory arrays.
Per report and proposal by Marko Kreen.
|
| |
TupleTableSlots. We have functions for retrieving a minimal tuple from a slot
after storing a regular tuple in it, or vice versa; but these were implemented
by converting the internal storage from one format to the other. The problem
with that is it invalidates any pass-by-reference Datums that were already
fetched from the slot, since they'll be pointing into the just-freed version
of the tuple. The known problem cases involve fetching both a whole-row
variable and a pass-by-reference value from a slot that is fed from a
tuplestore or tuplesort object. The added regression tests illustrate some
simple cases, but there may be other failure scenarios traceable to the same
bug. Note that the added tests probably only fail on unpatched code if it's
built with --enable-cassert; otherwise the bug leads to fetching from freed
memory, which will not have been overwritten without additional conditions.
Fix by allowing a slot to contain both formats simultaneously, which turns out
not to complicate the logic much at all; if anything it seems less contorted
than before.
Back-patch to 8.2, where minimal tuples were introduced.
|
| |
with EXPLAIN ANALYZE VERBOSE.
Greg Sabino Mullane, reformatted by myself. Backpatch to 8.1, where the
bug was introduced.
|
| |
them from degrading badly when the input is sorted or nearly so. In this
scenario the tree is unbalanced to the point of becoming a mere linked list,
so insertions become O(N^2). The easiest and most safely back-patchable
solution is to stop growing the tree sooner, ie limit the growth of N. We
might later consider a rebalancing tree algorithm, but it's not clear that
the benefit would be worth the cost and complexity. Per report from Sergey
Burladyan and an earlier complaint from Heikki.
Back-patch to 8.2; older versions didn't have GIN indexes.
|
| |
format codes are misapplied to a numeric argument. (The code still produces
a pretty bogus error message in such cases, but I'll settle for stopping the
crash for now.) Per bug #4700 from Sergey Burladyan.
Problem exists in all supported branches, so patch all the way back.
In HEAD, also clean up some ugly coding in the nearby cache management
code.
|
| |
Mauritius began using DST in the summer of 2008-2009; the Olson library has been
updated already.
Xavier Bugaud
|
| |
fail to provide the function itself. Not sure how we escaped testing anything
later than 7.3 on such cases, but they still exist, as per André Volpato's
report about AIX 5.3.
|
| |
This has moved around in past releases, so just copying-and-pasting from HEAD
didn't work as intended.
|
| |
encoding conversion of any elog/ereport message being sent to the frontend.
This generalizes a patch that I put in last October, which suppressed
translation of only specific messages known to be associated with recursive
can't-translate-the-message behavior. As shown in bug #4680, we need a more
general answer in order to have some hope of coping with broken encoding
conversion setups. This approach seems a good deal less klugy anyway.
Patch in all supported branches.
|
| |
fail on zero-length inputs. This isn't an issue in normal use because the
conversion infrastructure skips calling the converters for empty strings.
However a problem was created by yesterday's patch to check whether the
right conversion function is supplied in CREATE CONVERSION. The most
future-proof fix seems to be to make the converters safe for this corner case.
|
| |
function for the specified source and destination encodings. We do that by
calling the function with an empty string. If it can't perform the requested
conversion, it will throw an error.
Backport to 7.4 - 8.3. Per bug report #4680 by Denis Afonin.
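(Hypothetical illustration of the new check: declaring a built-in conversion
function for the wrong encoding pair now fails at CREATE CONVERSION time, e.g.
CREATE CONVERSION bad_conv FOR 'LATIN2' TO 'UTF8' FROM iso8859_1_to_utf8;
since the function is called with the declared encodings and an empty string,
and it raises an error because it only handles LATIN1 to UTF8.)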
|
| |
they are out of scope for any code after that anyway, so leaving isnull true
should be harmless. However, PL/pgSQL Debugger doesn't seem to care about
the scoping and crashed, per report by Robert Walker (bug #4635). And it's
good to be tidy for debugging purposes too.
Fix in 8.3, 8.2 and 8.1 branches, CVS HEAD was fixed earlier already.
Analysis and fix by Ashesh Vashi and Dave Page.
|
| |
looks for a CaseTestExpr to figure out what the parser did, but it failed to
consider the possibility that an implicit coercion might be inserted above
the CaseTestExpr. This could result in an Assert failure in some cases
(but correct results if Asserts weren't enabled), or an "unexpected CASE WHEN
clause" error in other cases. Per report from Alan Li.
Back-patch to 8.1; problem doesn't exist before that because CASE was
implemented differently.
|
| |
TABLE: if the command is executed by someone other than the table owner (eg,
a superuser) and the table has a toast table, the toast table's pg_type row
ends up with the wrong typowner, ie, the command issuer not the table owner.
This is quite harmless for most purposes, since no interesting permissions
checks consult the pg_type row. However, it could lead to unexpected failures
if one later tries to drop the role that issued the command (in 8.1 or 8.2),
or strange warnings from pg_dump afterwards (in 8.3 and up, which will allow
the DROP ROLE because we don't create a "redundant" owner dependency for table
rowtypes). Problem identified by Cott Lang.
Back-patch to 8.1. The problem is actually far older --- the CLUSTER variant
can be demonstrated in 7.0 --- but it's mostly cosmetic before 8.1 because we
didn't track ownership dependencies before 8.1. Also, fixing it before 8.1
would require changing the call signature of heap_create_with_catalog(), which
seems to carry a nontrivial risk of breaking add-on modules.
|
| |
since these can be transient failures, causing kill() to not properly send
signals.
Original patch from Steve Marshall, modified by me.
|
| |
in the string, not just at the start. Per bug #4629 from Martin Blazek.
Back-patch to 8.2; prior versions don't have the problem, at least not in
the reported case, because they don't try to recognize INTO in non-SELECT
statements. (IOW, this is really fallout from the RETURNING patch.)
|
| |
from Rushabh Lathia.
Back-patch of the patch of 2009-01-08. This is necessary in 8.3, as reported
by Bjorn Munch. It's not currently necessary in 8.2, AFAICS, but seems
best to include it there too.
|
| |
as the preferred spelling of that zone name, corrects historical DST
information for Switzerland and Cuba.
|
| |
encoding conversion functions. These are not can't-happen cases because
it's possible to create a conversion with the wrong conversion function
for the specified encoding pair. That would lead to an Assert crash in
an Assert-enabled build, or incorrect conversion otherwise, neither of
which is desirable. This would be a DoS issue if production databases
were customarily built with asserts enabled, but fortunately that's not so.
Per an observation by Heikki.
Back-patch to all supported branches.
|
| |
to the documented API value. The previous code matched the behavior of the
actual implementation, but accepted too much or too little compared to
the API documentation.
Per comment from Zdenek Kotala.
|
| |
context long after it had been destroyed.
Per problem report from Justin Pasher. Patch by Tom Lane and me.
8.3 and later do not have this bug, because this code has been restructured for
unrelated reasons. In 8.2 it does not manifest as a crash, but it still seems
safer to fix it nonetheless.
|
| |
rewritten into another kind of statement, for example if an INSERT is
rewritten into an UPDATE.
Back-patch to 8.3 and 8.2. For HEAD, Tom suggested inventing a new
SPI_OK_REWRITTEN return code, but that's not a backportable solution. I'll
do that as a separate patch; this patch will do as a stopgap measure for HEAD
too in the meantime.
|
| |
It's not possible to do CREATE DATABASE inside a transaction, so previously
we just got a server error instead.
Backpatch to 8.2, which is where the -1 feature appeared.
|
| |
OutputFunctionCall, and friends. This allows SPI-using functions to invoke
datatype I/O without concern for the possibility that a SPI-using function
will be called (which could be either the I/O function itself, or a function
used in a domain check constraint). It's a tad ugly, but not nearly as ugly
as what'd be needed to make this work via retail insertion of push/pop
operations in all the PLs.
This reverts my patch of 2007-01-30 that inserted some retail SPI_push/pop
calls into plpgsql; that approach only fixed plpgsql, and not any other PLs.
But the other PLs have the issue too, as illustrated by a recent gripe from
Christian Schröder.
Back-patch to 8.2, which is as far back as this solution will work. It's
also as far back as we need to worry about the domain-constraint case, since
earlier versions did not attempt to check domain constraints within datatype
input. I'm not aware of any old I/O functions that use SPI themselves, so
this should be sufficient for a back-patch.
|
| |
If the table was smaller than REL_TRUNCATE_FRACTION (= 16) pages, we always
tried to acquire AccessExclusiveLock on it even if there were no empty pages
at the end.
Report by Simon Riggs. Back-patch all the way to 7.4.
|
| |
the other major heapam.c functions. The only known consequence of this
omission is that UPDATE RETURNING failed to return the correct value for
"tableoid", as per report from KaiGai Kohei.
Back-patch to 8.2. Arguably it's wrong all the way back; but without
evidence of visible breakage before RETURNING was added, I'll desist from
patching the older branches.
|
| |
when they are invoked by the parser. We had been setting up a snapshot at
plan time but really it needs to be done earlier, before parse analysis.
Per report from Dmitry Koterov.
Also fix two related problems discovered while poking at this one:
exec_bind_message called datatype input functions without establishing a
snapshot, and SET CONSTRAINTS IMMEDIATE could call trigger functions without
establishing a snapshot.
Backpatch to 8.2. The underlying problem goes much further back, but it is
masked in 8.1 and before because we didn't attempt to invoke domain check
constraints within datatype input. It would only be exposed if a C-language
datatype input function used the snapshot; which evidently none do, or we'd
have heard complaints sooner. Since this code has changed a lot over time,
a back-patch is hardly risk-free, and so I'm disinclined to patch further
than absolutely necessary.
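(Sketch of the kind of setup involved; names are hypothetical, not from the report.
A domain whose check constraint calls a SQL function that reads a table means that
merely coercing a value to the domain requires running a query, hence a snapshot:
CREATE FUNCTION is_allowed(int) RETURNS boolean
  AS 'SELECT EXISTS (SELECT 1 FROM allowed_ids WHERE id = $1)' LANGUAGE sql;
CREATE DOMAIN allowed_id AS int CHECK (is_allowed(VALUE));
Binding a parameter of type allowed_id in the extended query protocol then invokes
is_allowed() during datatype input.)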
|
| |
outer join clauses. Given, say,
... from a left join b on a.a1 = b.b1 where a.a1 = 42;
we'll deduce a clause b.b1 = 42 and then mark the original join clause
redundant (we can't remove it completely for reasons I don't feel like
squeezing into this log entry). However the original implementation of
that wasn't bulletproof, because clause_selectivity() wouldn't honor
this_selec if given nonzero varRelid --- which in practice meant that
it worked as desired *except* when considering index scan quals. Which
resulted in bogus underestimation of the size of the indexscan result for
an inner indexscan in an outer join, and consequently a possibly bad
choice of indexscan vs. bitmap scan. Fix by introducing an explicit test
into clause_selectivity(). Also, to make sure we don't trigger that test
in corner cases, change the convention to be that this_selec > 1, not
this_selec = 1, means it's been marked redundant. Per trouble report from
Scara Maccai.
Back-patch to 8.2, where the problem was introduced.
|
| |
toasted values, since those could get dropped once the cursor's transaction
is over. Per bug #4553 from Andrew Gierth.
Back-patch as far as 8.1. The bug actually exists back to 7.4 when holdable
cursors were introduced, but this patch won't work before 8.1 without
significant adjustments. Given the lack of field complaints, it doesn't seem
worth the work (and risk of introducing new bugs) to try to make a patch for
the older branches.
|
| |
This was a thinko introduced in a patch from last February; it results
in memory leakage if an SRF is shut down before the actual end of query,
because subsequent code will be running in a longer-lived context than
it's expecting to be.
|
| |
AND, OR, or equivalent clauses: if there are too many (more than 100) just
exit without proving anything. This ensures that we don't spend O(N^2) time
trying (and most likely failing) to prove anything about very long IN lists
and similar cases.
Also, install a couple of CHECK_FOR_INTERRUPTS calls to ensure that a long
proof attempt can be interrupted.
Per gripe from Sergey Konoplev.
Back-patch the whole patch to 8.2 and just the CHECK_FOR_INTERRUPTS addition
to 8.1. (The rest of the patch doesn't apply cleanly, and since 8.1 doesn't
show the complained-of behavior anyway, it doesn't seem necessary to work
hard on it.)
|
| |
we extended the appendrel mechanism to support UNION ALL optimization. The
reason nobody noticed was that we are not actually using attr_needed data for
appendrel children; hence it seems more reasonable to rip it out than fix it.
Back-patch to 8.2 because an Assert failure is possible in corner cases.
Per examination of an example from Jim Nasby.
In HEAD, also get rid of AppendRelInfo.col_mappings, which is quite inadequate
to represent UNION ALL situations; depend entirely on translated_vars instead.
|