in a subtransaction stays open even if the subtransaction is aborted, so
any temporary files related to it must stay alive as well. With the patch,
we use ResourceOwners to track open temporary files and don't automatically
close them at subtransaction end (though in the normal case temporary files
are registered with the subtransaction resource owner and will therefore be
closed).
At end of top transaction, we still check that no temporary files marked
close-at-end-of-transaction remain open, but that's now just a debugging
cross-check, as the resource owner cleanup should have closed them already.
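
Illustrative sketch (not PostgreSQL's actual resowner.c API — the type and
function names here are hypothetical): each owner remembers the temp files
opened under it and closes whatever is still open when it is released, so a
file belonging to a still-open cursor simply gets registered with a
longer-lived owner and survives subtransaction abort.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical, simplified resource owner: tracks open temp files. */
typedef struct ResOwner {
    FILE **files;
    int    nfiles;
    int    capacity;
} ResOwner;

static ResOwner *resowner_create(void) {
    return calloc(1, sizeof(ResOwner));
}

/* Remember a temp file so owner cleanup can close it later. */
static void resowner_remember_file(ResOwner *o, FILE *f) {
    if (o->nfiles == o->capacity) {
        o->capacity = o->capacity ? o->capacity * 2 : 8;
        o->files = realloc(o->files, o->capacity * sizeof(FILE *));
    }
    o->files[o->nfiles++] = f;
}

/* Forget a file that was closed explicitly before owner release. */
static void resowner_forget_file(ResOwner *o, FILE *f) {
    for (int i = 0; i < o->nfiles; i++) {
        if (o->files[i] == f) {
            o->files[i] = o->files[--o->nfiles];
            return;
        }
    }
}

/* On (sub)transaction end, close only what this owner still holds. */
static void resowner_release(ResOwner *o) {
    for (int i = 0; i < o->nfiles; i++)
        fclose(o->files[i]);
    free(o->files);
    free(o);
}

int main(void) {
    ResOwner *subxact = resowner_create();
    FILE *keep = tmpfile();
    FILE *tmp  = tmpfile();
    resowner_remember_file(subxact, keep);
    resowner_remember_file(subxact, tmp);

    /* Normal case: file closed explicitly, owner told to forget it. */
    resowner_forget_file(subxact, keep);
    fclose(keep);

    /* Subtransaction aborts: owner closes whatever it still tracks. */
    resowner_release(subxact);      /* closes tmp */
    return 0;
}
```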
This avoids a useless connection retry and complaint in the postmaster log
when receiving a connection from 8.5 or later libpq.
Backpatch in all supported branches, but of course *not* HEAD.
be part of multixacts, so allocate a slot for each prepared transaction in
the "oldest member" array in multixact.c. On PREPARE TRANSACTION, transfer
the oldest member value from the current backend's slot to the prepared xact
slot. Also save and recover the value from the 2pc state file.
The symptom of the bug was that, after a transaction was prepared, a shared lock
still held by the prepared transaction was sometimes ignored by other
transactions.
Fix back to 8.1, where both 2PC and multixact were introduced.
In VACUUM FULL, an interrupt after the initial transaction has been recorded
as committed can cause postmaster to restart with the following error message:
PANIC: cannot abort transaction NNNN, it was already committed
This problem has been reported many times.
In lazy VACUUM, an interrupt after the table has been truncated by
lazy_truncate_heap causes other backends' relcache to still point to the
removed pages; this can cause future INSERT and UPDATE queries to error out
with the following error message:
could not read block XX of relation 1663/NNN/MMMM: read only 0 of 8192 bytes
The window for this race condition is extremely narrow, but it has been seen
in the wild, involving a cancelled autovacuum process.
The solution for both problems is to inhibit interrupts in both operations
until after the respective transactions have been committed. It's not a
complete solution, because the transaction could theoretically be aborted by
some other error, but it at least fixes the most common causes of both
problems.
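
A self-contained sketch of the hold-off pattern (PostgreSQL's real mechanism
is its HOLD_INTERRUPTS/RESUME_INTERRUPTS macros and CHECK_FOR_INTERRUPTS;
this standalone version with a latched signal flag is illustrative only):

```c
#include <signal.h>
#include <stdio.h>

static volatile sig_atomic_t interrupt_pending = 0;
static volatile sig_atomic_t holdoff_count = 0;

static void process_interrupt(void) {
    interrupt_pending = 0;
    fprintf(stderr, "canceling operation\n");
    /* longjmp back to the main loop, abort transaction, etc. */
}

static void sigint_handler(int signo) {
    (void) signo;
    interrupt_pending = 1;          /* just latch it; act later */
}

/* Service a pending cancel only when not in a critical section. */
static void check_for_interrupts(void) {
    if (interrupt_pending && holdoff_count == 0)
        process_interrupt();
}

static void vacuum_like_operation(void) {
    check_for_interrupts();         /* safe to cancel here */

    holdoff_count++;                /* like HOLD_INTERRUPTS() */
    /* ... record transaction as committed, truncate heap ... */
    holdoff_count--;                /* like RESUME_INTERRUPTS() */

    check_for_interrupts();         /* a deferred cancel fires here */
}

int main(void) {
    signal(SIGINT, sigint_handler);
    vacuum_like_operation();
    return 0;
}
```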
The original coding ensured nbuckets and nbatch didn't exceed INT_MAX,
which while not insane on its own terms did nothing to protect subsequent
code like "palloc(nbatch * sizeof(BufFile *))". Since enormous join size
estimates might well be planner error rather than reality, it seems best
to constrain the initial sizes to be not more than work_mem/sizeof(pointer),
thus ensuring the allocated arrays don't exceed work_mem. We will allow
nbatch to get bigger than that during subsequent ExecHashIncreaseNumBatches
calls, but we should still guard against integer overflow in those palloc
requests. Per bug #5145 from Bernt Marius Johnsen.
Although the given test case only seems to fail back to 8.2, previous
releases have variants of this issue, so patch all supported branches.
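
A sketch of the two guards under hypothetical names (the real logic lives in
nodeHash.c): clamp the initial sizes so the pointer arrays fit in work_mem,
and still check for multiplication overflow before any later, larger
allocation.

```c
#include <limits.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Cap an estimated count so count * sizeof(void *) <= work_mem bytes. */
static long clamp_initial_size(double estimate, size_t work_mem_bytes) {
    double max_pointers = (double) (work_mem_bytes / sizeof(void *));
    if (estimate > max_pointers)
        estimate = max_pointers;
    if (estimate < 1)
        estimate = 1;
    return (long) estimate;
}

/* Later growth must still guard the allocation request itself. */
static void **alloc_batch_array(long nbatch) {
    if (nbatch <= 0 || (uint64_t) nbatch > SIZE_MAX / sizeof(void *)) {
        fprintf(stderr, "hash batch count %ld out of range\n", nbatch);
        exit(1);
    }
    return calloc((size_t) nbatch, sizeof(void *));
}

int main(void) {
    size_t work_mem = 4UL * 1024 * 1024;               /* 4 MB */
    long nbatch = clamp_initial_size(1e18, work_mem);  /* insane estimate */
    printf("clamped nbatch = %ld\n", nbatch);
    void **batches = alloc_batch_array(nbatch);
    free(batches);
    return 0;
}
```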
that it's called within an AfterTriggerBeginQuery/AfterTriggerEndQuery pair.
The RI cascade triggers suppress that overhead on the assumption that they
are always run non-deferred, so it's possible to violate the condition if
someone mistakenly changes pg_trigger to mark such a trigger deferred.
We don't really care about supporting that, but throwing an error instead
of crashing seems desirable. Per report from Marcelo Costa.
pam_message array contains exactly one PAM_PROMPT_ECHO_OFF message.
Instead, deal with however many messages there are, and don't throw error
for PAM_ERROR_MSG and PAM_TEXT_INFO messages. This logic is borrowed from
openssh 5.2p1, which hopefully has seen more real-world PAM usage than we
have. Per bug #5121 from Ryan Douglas, which turned out to be caused by
the conv_proc being called with zero messages. Apparently that is normal
behavior given the combination of Linux pam_krb5 with MS Active Directory
as the domain controller.
Patch all the way back, since this code has been essentially untouched
since 7.4. (Surprising we've not heard complaints before.)
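
For reference, a hedged sketch of a conversation callback in that style:
loop over however many messages arrive (possibly zero), answer only the
PAM_PROMPT_ECHO_OFF ones, and tolerate PAM_ERROR_MSG/PAM_TEXT_INFO instead
of erroring out. The constants and structs are the standard PAM API; the
password plumbing via appdata_ptr is a stand-in.

```c
#include <security/pam_appl.h>
#include <stdlib.h>
#include <string.h>

/* Conversation proc: must handle any mix of messages, including none. */
int
pam_conv_proc(int num_msg, const struct pam_message **msg,
              struct pam_response **resp, void *appdata_ptr)
{
    const char *password = (const char *) appdata_ptr;
    struct pam_response *replies;

    if (num_msg < 0 || num_msg > PAM_MAX_NUM_MSG)
        return PAM_CONV_ERR;

    replies = calloc(num_msg ? num_msg : 1, sizeof(struct pam_response));
    if (replies == NULL)
        return PAM_BUF_ERR;

    for (int i = 0; i < num_msg; i++) {
        switch (msg[i]->msg_style) {
            case PAM_PROMPT_ECHO_OFF:
                replies[i].resp = strdup(password); /* supply password */
                break;
            case PAM_ERROR_MSG:
            case PAM_TEXT_INFO:
                /* Log or ignore; don't fail the whole conversation. */
                break;
            default:
                /* Unrecognized style: give up cleanly. */
                for (int j = 0; j < num_msg; j++)
                    free(replies[j].resp);
                free(replies);
                return PAM_CONV_ERR;
        }
    }
    *resp = replies;
    return PAM_SUCCESS;
}
```

The callback is installed via struct pam_conv as usual; the key point is
that num_msg may be zero or include informational messages, as seen with
pam_krb5 against Active Directory.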
8, bitncmp() may dereference a pointer one byte out of bounds.
Chris Mikkelson (bug #5101)
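
Illustrative standalone version of the bounds-safe comparison: compare whole
bytes first and touch the final partial byte only when some bits remain, so
no byte past the prefix is ever read.

```c
#include <stdio.h>
#include <string.h>

/*
 * Compare the high-order n bits of l and r, reading no more bytes
 * than those bits occupy.  Returns <0, 0, >0 like memcmp().
 */
static int
bitncmp_safe(const unsigned char *l, const unsigned char *r, int n)
{
    int whole = n / 8;          /* full bytes covered by the prefix */
    int rest  = n % 8;          /* leftover bits, 0..7 */
    int cmp = memcmp(l, r, whole);

    if (cmp != 0 || rest == 0)
        return cmp;             /* never read byte l[whole] if rest == 0 */

    /* Mask keeps only the top 'rest' bits of the final byte. */
    unsigned char mask = (unsigned char) (0xFF << (8 - rest));
    return (l[whole] & mask) - (r[whole] & mask);
}

int main(void) {
    unsigned char a[] = {0xC0, 0xA8};   /* 192.168 */
    unsigned char b[] = {0xC0, 0xA9};   /* 192.169 */
    printf("%d\n", bitncmp_safe(a, b, 16));  /* nonzero */
    printf("%d\n", bitncmp_safe(a, b, 15));  /* 0: first 15 bits equal */
    return 0;
}
```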
in CREATE OR REPLACE FUNCTION. The original code would update pg_shdepend
as if a new function was being created, even if it wasn't, with two bad
consequences: pg_shdepend might record the wrong owner for the function,
and any dependencies for roles mentioned in the function's ACL would be lost.
The fix is very easy: just don't touch pg_shdepend at all when doing a
function replacement.
Also update the CREATE FUNCTION reference page, which never explained
exactly what does and does not change in a function replacement.
In passing, fix the CREATE VIEW reference page similarly; there's no
code bug there, but the docs didn't say what happens.
possibility of shared-inval messages causing a relcache flush while it tries
to fill in missing data in preloaded relcache entries. There are actually
two distinct failure modes here:
1. The flush could delete the next-to-be-processed cache entry, causing
the subsequent hash_seq_search calls to go off into the weeds. This is
the problem reported by Michael Brown, and I believe it also accounts
for bug #5074. The simplest fix is to restart the hashtable scan after
we've read any new data from the catalogs. It appears that pre-8.4
branches have not suffered from this failure, because by chance there were
no other catalogs sharing the same hash chains with the catalogs that
RelationCacheInitializePhase2 had work to do for. However that's obviously
pretty fragile, and it seems possible that derivative versions with
additional system catalogs might be vulnerable, so I'm back-patching this
part of the fix anyway.
2. The flush could delete the *current* cache entry, in which case the
pointer to the newly-loaded data would end up being stored into an
already-deleted Relation struct. As long as it was still deleted, the only
consequence would be some leaked space in CacheMemoryContext. But it seems
possible that the Relation struct could already have been recycled, in
which case this represents a hard-to-reproduce clobber of cached data
structures, with unforeseeable consequences. The fix here is to pin the
entry while we work on it.
In passing, also change RelationCacheInitializePhase2 to Assert that
formrdesc() set up the relation's cached TupleDesc (rd_att) with the
correct type OID and hasoids values. This is more appropriate than
silently updating the values, because the original tupdesc might already
have been copied into the catcache. However this part of the patch is
not in HEAD because it fails due to some questionable recent changes in
formrdesc :-(. That will be cleaned up in a subsequent patch.
only for secondary page splits (i.e., for non-first columns of the index).
Patch by Paul Ramsey <pramsey@opengeo.org>
of checkpoint. The checkpoint has already been written to WAL at that point,
so all data is safe, and we'll retry removing the WAL segment at the next
checkpoint; but if such a failure persists, we won't be able to remove any
other old WAL segments either and will eventually run out of disk space. It's
better to treat the failure as non-fatal, move on to clean up any other WAL
segments, and continue with the rest of the end-of-checkpoint cleanup.
We don't normally expect any such failures, but on Windows it can happen with
some anti-virus or backup software that lock files without FILE_SHARE_DELETE
flag.
Also, the loop in pgrename() to retry when the file is locked was broken. If a
file is locked on Windows, you get ERROR_SHARE_VIOLATION, not
ERROR_ACCESS_DENIED, at least on modern versions. Fix that, although I left
the check for ERROR_ACCESS_DENIED in there as well (presumably it was correct
in some environment), and added ERROR_LOCK_VIOLATION to be consistent with
similar checks in pgwin32_open(). Reduce the timeout on the loop from 30s to
10s, on the grounds that since it's been broken, we've effectively had a
timeout of 0s and no-one has complained, so a smaller timeout is actually
closer to the old behavior. A longer timeout would mean that if recycling a
WAL file fails because it's locked for some reason, InstallXLogFileSegment()
will hold ControlFileLock for longer, potentially blocking other backends, so
a long timeout isn't totally harmless.
While we're at it, set errno correctly in pgrename().
Backpatch to 8.2, which is the oldest version supported on Windows. The xlog.c
changes would make sense on other platforms and thus on older versions as
well, but since there's no such locking issues on other platforms, it's not
worth it.
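
A sketch of the retry loop as described (MoveFileExA, Sleep, and the three
error codes are the real Win32 API; the function shape and the 100 ms retry
delay are illustrative):

```c
#ifdef WIN32
#include <windows.h>
#include <errno.h>

/* Rename with retries while the file is transiently locked. */
static int
pgrename_sketch(const char *from, const char *to)
{
    int loops = 0;

    while (!MoveFileExA(from, to, MOVEFILE_REPLACE_EXISTING))
    {
        DWORD err = GetLastError();

        /* Retry only for errors that mean "locked right now". */
        if (err != ERROR_ACCESS_DENIED &&
            err != ERROR_SHARE_VIOLATION &&
            err != ERROR_LOCK_VIOLATION)
        {
            errno = EACCES;     /* set errno for the caller */
            return -1;
        }
        if (++loops > 100)      /* ~10 s at 100 ms per retry */
        {
            errno = EACCES;
            return -1;
        }
        Sleep(100);             /* milliseconds */
    }
    return 0;
}
#endif
```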
file handle on it, the file goes into "pending deletion" state where it
still shows up in directory listings but isn't accessible otherwise. That
confuses RemoveOldXLogFiles(), making it think that the file hasn't been
archived yet, when in fact it had already been archived and deleted along
with its .done file.
Fix that by renaming the file with a ".deleted" extension before deleting it.
Also check the return values of rename() and unlink(), so that if the removal
fails for any reason (e.g. another process holding the file locked), we
don't delete the .done file until the WAL file is really gone.
Backpatch to 8.2, which is the oldest version supported on Windows.
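
Illustrative sketch of the two-step removal with checked return values
(portable calls; error handling simplified — the caller keeps the .done file
unless this returns success):

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Remove a WAL segment; return 0 only if it is really gone. */
int
remove_wal_segment(const char *path)
{
    char deleted_path[1024];

    /* Rename first, so a file stuck in "pending deletion" state
     * can't be mistaken for a not-yet-archived segment. */
    snprintf(deleted_path, sizeof(deleted_path), "%s.deleted", path);
    if (rename(path, deleted_path) != 0)
    {
        perror("rename");
        return -1;              /* keep the .done file around */
    }
    if (unlink(deleted_path) != 0)
    {
        perror("unlink");
        return -1;              /* ditto: caller must not drop .done */
    }
    return 0;                   /* safe to remove the .done file now */
}
```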
to unload and re-load the library.
The difficulty with unloading a library is that we haven't defined safe
protocols for doing so. In particular, there's no safe mechanism for
getting out of a "hook" function pointer unless libraries are unloaded
in reverse order of loading. And there's no mechanism at all for undefining
a custom GUC variable, so GUC would be left with a pointer to an old value
that might or might not still be valid, and very possibly wouldn't be in
the same place anymore.
While the unload and reload behavior had some usefulness in easing
development of new loadable libraries, it's of no use whatever to normal
users, so just disabling it isn't giving up that much. Someday we might
care to expend the effort to develop safe unload protocols; but even if
we did, there'd be little certainty that every third-party loadable module
was following them, so some security restrictions would still be needed.
Back-patch to 8.2; before that, LOAD was superuser-only anyway.
Security: unprivileged users could crash backend. CVE not assigned yet
functions.
This extends the previous patch that forbade SETting these variables inside
security-definer functions. RESET is equally a security hole, since it
would allow regaining privileges of the caller; furthermore it can trigger
Assert failures and perhaps other internal errors, since the code is not
expecting these variables to change in such contexts. The previous patch
did not cover this case because assign hooks don't really have enough
information, so move the responsibility for preventing this into guc.c.
Problem discovered by Heikki Linnakangas.
Security: no CVE assigned yet, extends CVE-2007-6600
and integer datetimes are in use. Per bug report from Hubert Depesz
Lubaczewski.
Alex Hunsaker
we should ignore NULL array entries, not non-NULL ones. This had the
effect of disabling commit_delay, and could have caused a crash in the
rare race condition the patch was intended to fix.
Bug report and diagnosis by Jeff Janes, in bug #4952.
In what seems like an oversight, we used to treat 'TH' the same as lowercase
'th', but only with HH/HH12.
we already do it for PAM.
a number of other geometric operators also depend on. It miscalculated the
slope of the perpendicular to the given line segment anytime that slope was
other than 0, infinite, or +/-1. In some cases the error would be masked
because the true closest point on the line segment was one of its endpoints
rather than the intersection point, but in other cases it could give an
arbitrarily bad answer. Per bug #4872 from Nick Roosevelt.
Bug goes clear back to Berkeley days, so patch all supported branches.
Make a couple of cosmetic adjustments while at it.
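
The patch itself repairs the perpendicular-slope computation inside the
line-segment code; as an independent way to see the correct answer, here is
the standard parametric projection, which avoids slopes and their
0/infinity/±1 special cases entirely:

```c
#include <stdio.h>

typedef struct { double x, y; } Point;

/* Closest point on segment AB to point P, by projecting P onto AB
 * and clamping the parameter to [0,1] (so an endpoint wins when the
 * perpendicular foot falls outside the segment). */
static Point
closest_point_on_segment(Point p, Point a, Point b)
{
    double dx = b.x - a.x, dy = b.y - a.y;
    double len2 = dx * dx + dy * dy;
    double t;
    Point r;

    if (len2 == 0.0)
        return a;               /* degenerate segment */

    t = ((p.x - a.x) * dx + (p.y - a.y) * dy) / len2;
    if (t < 0.0) t = 0.0;
    if (t > 1.0) t = 1.0;

    r.x = a.x + t * dx;
    r.y = a.y + t * dy;
    return r;
}

int main(void) {
    Point a = {0, 0}, b = {10, 5}, p = {3, 4};
    Point c = closest_point_on_segment(p, a, b);
    printf("(%g, %g)\n", c.x, c.y);   /* (4, 2): slope 1/2, not 0/1/inf */
    return 0;
}
```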
e.g. Japan. Report and fix by Itagaki Takahiro. Also fix CASHDEBUG printout
format for branches with 64-bit money type, and some minor comment cleanup.
Back-patch to 7.4, because it's broken all the way back.
there are no analyzable attributes or indexes. We also used to report 0 live
and dead tuples for such tables, which messed with autovacuum threshold
calculations.
This fixes bug #4812 reported by George Su. Backpatch back to 8.1.
of AND/OR clause branches that predtest.c would attempt to deal with. As
noted in bug #4721, that change disabled proof attempts for sizes of problems
that people are actually expecting it to work for. The original complaint
it was trying to solve was O(N^2) behavior for long IN-lists, so let's try
applying the limit to just ScalarArrayOpExprs rather than everything.
Another case of "foolish consistency" I fear.
Back-patch to 8.2, same as the previous patch was.
you can end up with an unrecoverable backup if you start a new base backup
right after finishing archive recovery. In that scenario, the redo pointer of
the checkpoint that pg_start_backup() writes points to the XLOG segment where
the timeline-changing end-of-archive-recovery checkpoint is. The beginning
of that segment contains pages with the old timeline ID, and we don't accept
that in recovery unless we find a history file covering the old timeline ID.
If you omit pg_xlog from the base backup and clear the archive directory
before starting the backup, there will be no such history file available.
The bug is present in all versions since PITR was introduced in 8.0, but I'm
back-patching only back to 8.2. Earlier versions didn't have XLOG switch
records, making this fix unfeasible. Given the lack of reports until now,
it doesn't seem worthwhile to spend more effort to fix 8.0 and 8.1.
Per report and suggestion by Mikael Krantz
to make sure that the error code is reset, as a precaution in
case the API doesn't properly reset it on success. This could
be necessary, since in certain success cases we check the error
value even though the function did not fail.
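
Sketch of the pattern on Win32 (SetLastError/GetLastError/ReadFile are the
real API; the wrapped call is just an example):

```c
#ifdef WIN32
#include <windows.h>

/* Some APIs can leave a stale last-error value behind on success,
 * and we inspect the error code even on certain success paths, so
 * clear it explicitly before making the call. */
static DWORD
read_with_clean_error_state(HANDLE h, void *buf, DWORD len, DWORD *nread)
{
    SetLastError(0);
    if (!ReadFile(h, buf, len, nread, NULL))
        return GetLastError();      /* genuine failure */

    /* Success: any nonzero GetLastError() now reflects this call,
     * not a value left over from an earlier one. */
    return 0;
}
#endif
```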
part that rounds up to exactly 1.0 second. The previous coding rejected input
like "00:12:57.9999999999999999999999999999", with the exact number of nines
needed to cause failure varying depending on float-timestamp option and
possibly on platform. Obviously this should round up to the next integral
second, if we don't have enough precision to distinguish the value from that.
Per bug #4789 from Robert Kruus.
In passing, fix a missed check for fractional seconds in one copy of the
"is it greater than 24:00:00" code.
Broken all the way back, so patch all the way back.
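
A sketch of the rounding rule with hypothetical names: round the fraction to
the stored precision first, and if it comes out as a full second, carry it
into the integer seconds rather than rejecting the input.

```c
#include <math.h>
#include <stdio.h>

/* Round fractional seconds to microseconds; carry into whole
 * seconds when the fraction rounds up to exactly 1.0. */
static void
round_fractional_seconds(int *sec, double *fsec)
{
    *fsec = rint(*fsec * 1e6) / 1e6;
    if (*fsec >= 1.0)
    {
        *fsec -= 1.0;
        (*sec)++;
    }
}

int main(void) {
    int sec = 57;
    double fsec = 0.9999999999999999;      /* rounds up to 1.0 */
    round_fractional_seconds(&sec, &fsec);
    printf("%d sec + %.6f\n", sec, fsec);  /* 58 sec + 0.000000 */
    return 0;
}
```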
aggregate function. By definition, such a sub-SELECT cannot reference any
variables of query levels between itself and the aggregate's semantic level
(else the aggregate would've been assigned to that lower level instead).
So the correct, most efficient implementation is to treat the sub-SELECT as
being a sub-select of that outer query level, not the level the aggregate
syntactically appears in. Not doing so also confuses the heck out of our
parameter-passing logic, as illustrated in bug report from Daniel Grace.
Fortunately, we were already copying the whole Aggref expression up to the
outer query level, so all that's needed is to delay SS_process_sublinks
processing of the sub-SELECT until control returns to the outer level.
This has been broken since we introduced spec-compliant treatment of
outer aggregates in 7.4; so patch all the way back.
per approval from Helmut Tschemernjak, President.
Only back branches; files removed from CVS HEAD.
from buggy user-defined picksplit to GiST.
interval_eq() considers equal. I'm not sure how that fundamental requirement
escaped us through multiple revisions of this hash function, but there it is;
it's been wrong since interval_hash was first written for PG 7.1.
Per bug #4748 from Roman Kononov.
Backpatch to all supported releases.
This patch changes the contents of hash indexes for interval columns. That's
no particular problem for PG 8.4, since we've broken on-disk compatibility
of hash indexes already; but it will require a migration warning note in
the next minor releases of all existing branches: "if you have any hash
indexes on columns of type interval, REINDEX them after updating".
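
The invariant is that hash(a) must equal hash(b) whenever interval_eq(a, b)
holds. A sketch of the approach: hash the same canonical value that interval
comparison uses (months taken as 30 days, days as 24 hours; the final hash
step here is a stand-in, not PostgreSQL's actual hash function):

```c
#include <stdint.h>
#include <stdio.h>

#define USECS_PER_DAY   INT64_C(86400000000)
#define DAYS_PER_MONTH  30

typedef struct {
    int64_t time;   /* microseconds */
    int32_t day;
    int32_t month;
} Interval;

/* Collapse an interval to the single value interval comparison
 * uses, so equal-comparing intervals hash identically. */
static int64_t
interval_cmp_value(const Interval *iv)
{
    int64_t days = iv->day + (int64_t) iv->month * DAYS_PER_MONTH;
    return iv->time + days * USECS_PER_DAY;
}

static uint32_t
interval_hash_sketch(const Interval *iv)
{
    uint64_t v = (uint64_t) interval_cmp_value(iv);
    return (uint32_t) (v ^ (v >> 32));   /* stand-in hash */
}

int main(void) {
    Interval a = { 0, 1, 0 };                    /* '1 day'    */
    Interval b = { INT64_C(86400000000), 0, 0 }; /* '24 hours' */
    unsigned ha = interval_hash_sketch(&a);
    unsigned hb = interval_hash_sketch(&b);
    printf("%u %u\n", ha, hb);   /* equal: the intervals compare equal */
    return 0;
}
```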
at the same instant as a new backend is spawned. Since CountActiveBackends()
doesn't hold ProcArrayLock, it needs to be prepared for the case that a
pointer at the end of the proc array is still NULL even though numProcs says
it should be valid. Backpatch to 8.1.
8.0 and earlier had this right, but it was broken in the split of PGPROC and
sinval shared memory arrays.
Per report and proposal by Marko Kreen.
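
Illustrative sketch of the lock-free scan tolerating a not-yet-initialized
slot (types simplified; the real array holds PGPROC pointers):

```c
typedef struct Proc {
    int pid;
    int is_waiting;
} Proc;

/* Count active backends without taking the array lock; a slot the
 * count says is valid may still read as NULL for an instant. */
static int
count_active_backends(Proc *const *procs, int num_procs)
{
    int count = 0;

    for (int i = 0; i < num_procs; i++)
    {
        const Proc *p = procs[i];

        if (p == NULL)
            continue;           /* being spawned right now: skip it */
        if (p->is_waiting)
            continue;
        count++;
    }
    return count;
}

int main(void) {
    Proc a = {100, 0}, b = {101, 1};
    Proc *arr[3] = { &a, NULL /* partially spawned */, &b };
    return count_active_backends(arr, 3) == 1 ? 0 : 1;
}
```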
TupleTableSlots. We have functions for retrieving a minimal tuple from a slot
after storing a regular tuple in it, or vice versa; but these were implemented
by converting the internal storage from one format to the other. The problem
with that is it invalidates any pass-by-reference Datums that were already
fetched from the slot, since they'll be pointing into the just-freed version
of the tuple. The known problem cases involve fetching both a whole-row
variable and a pass-by-reference value from a slot that is fed from a
tuplestore or tuplesort object. The added regression tests illustrate some
simple cases, but there may be other failure scenarios traceable to the same
bug. Note that the added tests probably only fail on unpatched code if it's
built with --enable-cassert; otherwise the bug leads to fetching from freed
memory, which will not have been overwritten without additional conditions.
Fix by allowing a slot to contain both formats simultaneously; which turns out
not to complicate the logic much at all, if anything it seems less contorted
than before.
Back-patch to 8.2, where minimal tuples were introduced.
them from degrading badly when the input is sorted or nearly so. In this
scenario the tree is unbalanced to the point of becoming a mere linked list,
so insertions become O(N^2). The easiest and most safely back-patchable
solution is to stop growing the tree sooner, ie limit the growth of N. We
might later consider a rebalancing tree algorithm, but it's not clear that
the benefit would be worth the cost and complexity. Per report from Sergey
Burladyan and an earlier complaint from Heikki.
Back-patch to 8.2; older versions didn't have GIN indexes.
format codes are misapplied to a numeric argument. (The code still produces
a pretty bogus error message in such cases, but I'll settle for stopping the
crash for now.) Per bug #4700 from Sergey Burladyan.
Problem exists in all supported branches, so patch all the way back.
In HEAD, also clean up some ugly coding in the nearby cache management
code.
fail to provide the function itself. Not sure how we escaped testing anything
later than 7.3 in such cases, but they still exist, as per André Volpato's
report about AIX 5.3.
This has moved around in past releases, so just copying-and-pasting from HEAD
didn't work as intended.
encoding conversion of any elog/ereport message being sent to the frontend.
This generalizes a patch that I put in last October, which suppressed
translation of only specific messages known to be associated with recursive
can't-translate-the-message behavior. As shown in bug #4680, we need a more
general answer in order to have some hope of coping with broken encoding
conversion setups. This approach seems a good deal less klugy anyway.
Patch in all supported branches.
fail on zero-length inputs. This isn't an issue in normal use because the
conversion infrastructure skips calling the converters for empty strings.
However a problem was created by yesterday's patch to check whether the
right conversion function is supplied in CREATE CONVERSION. The most
future-proof fix seems to be to make the converters safe for this corner case.
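
A sketch of the guard with a hypothetical converter signature: return before
touching the input when the length is zero, so a probe call with an empty
string (as used by the CREATE CONVERSION check described in the next entry)
is always safe.

```c
#include <stddef.h>

/* Hypothetical encoding converter: src is not necessarily
 * NUL-terminated, and len may legitimately be zero. */
int
convert_sketch(const unsigned char *src, int len, unsigned char *dest)
{
    if (len <= 0)
    {
        dest[0] = '\0';
        return 0;               /* nothing to convert */
    }
    /* ... real conversion loop over exactly len bytes ... */
    for (int i = 0; i < len; i++)
        dest[i] = src[i];
    dest[len] = '\0';
    return len;
}
```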
function for the specified source and destination encodings. We do that by
calling the function with an empty string. If it can't perform the requested
conversion, it will throw an error.
Backport to 7.4 - 8.3. Per bug report #4680 by Denis Afonin.
looks for a CaseTestExpr to figure out what the parser did, but it failed to
consider the possibility that an implicit coercion might be inserted above
the CaseTestExpr. This could result in an Assert failure in some cases
(but correct results if Asserts weren't enabled), or an "unexpected CASE WHEN
clause" error in other cases. Per report from Alan Li.
Back-patch to 8.1; problem doesn't exist before that because CASE was
implemented differently.
TABLE: if the command is executed by someone other than the table owner (eg,
a superuser) and the table has a toast table, the toast table's pg_type row
ends up with the wrong typowner, ie, the command issuer not the table owner.
This is quite harmless for most purposes, since no interesting permissions
checks consult the pg_type row. However, it could lead to unexpected failures
if one later tries to drop the role that issued the command (in 8.1 or 8.2),
or strange warnings from pg_dump afterwards (in 8.3 and up, which will allow
the DROP ROLE because we don't create a "redundant" owner dependency for table
rowtypes). Problem identified by Cott Lang.
Back-patch to 8.1. The problem is actually far older --- the CLUSTER variant
can be demonstrated in 7.0 --- but it's mostly cosmetic before 8.1 because we
didn't track ownership dependencies before 8.1. Also, fixing it before 8.1
would require changing the call signature of heap_create_with_catalog(), which
seems to carry a nontrivial risk of breaking add-on modules.
from Rushabh Lathia.
Back-patch of patch of 2009-01-08. This is necessary in 8.3, as reported
by Bjorn Munch. It's not currently necessary in 8.2, AFAICS, but seems
best to include it there too.
encoding conversion functions. These are not can't-happen cases because
it's possible to create a conversion with the wrong conversion function
for the specified encoding pair. That would lead to an Assert crash in
an Assert-enabled build, or incorrect conversion otherwise, neither of
which is desirable. This would be a DOS issue if production databases
were customarily built with asserts enabled, but fortunately that's not so.
Per an observation by Heikki.
Back-patch to all supported branches.
to the documented API value. The previous code matched the actual
implementation's behavior, but accepted too much or too little compared to
the API documentation.
Per comment from Zdenek Kotala.