| Commit message | Author | Age |
| |
UPDATE/SHARE couldn't occur as a subquery in a query with a non-SELECT
top-level operation. Symptoms included outright failure (as in report from
Mark Mielke) and silently neglecting to take the requested row locks.
Back-patch to 8.3, because the visible failure in the INSERT ... SELECT case
is a regression from 8.2. I'm a bit hesitant to back-patch further given the
lack of field complaints.
|
| |
I never understood why the original authors of GiST in pgsql chose such a
strange signature for the 'same' method:
bool *sameFn(Datum a, Datum b, bool *result)
instead of the simple, logical
bool sameFn(Datum a, Datum b)
Changing it now would break every existing GiST extension, so we have lived
with it and will keep living with it.
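For reference, a minimal sketch (not taken from any actual extension; my_same
and my_key_eq are invented names) of what a 'same' support function looks like
under the existing fmgr convention, where the answer is passed back through
the third argument:

    #include "postgres.h"
    #include "fmgr.h"

    PG_FUNCTION_INFO_V1(my_same);

    /* Hypothetical key comparison; a real opclass compares its own key type. */
    static bool
    my_key_eq(Datum a, Datum b)
    {
        return a == b;          /* only valid for pass-by-value keys */
    }

    /*
     * GiST 'same' under the existing convention: the boolean result is
     * stored through the pointer in argument 3, and that same pointer is
     * handed back as the return value.
     */
    Datum
    my_same(PG_FUNCTION_ARGS)
    {
        Datum   a = PG_GETARG_DATUM(0);
        Datum   b = PG_GETARG_DATUM(1);
        bool   *result = (bool *) PG_GETARG_POINTER(2);

        *result = my_key_eq(a, b);
        PG_RETURN_POINTER(result);
    }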
|
| |
file; the idea is that we should clean up as much as we can, even if there's
some problem removing one file. Make the error messages a bit less misleading,
too. In passing, const-ify function arguments.
|
| |
place to prevent reusing relation OIDs before next checkpoint, and DROP
DATABASE. First, if a database was dropped, bgwriter would still try to unlink
the files that the rmtree() call by the DROP DATABASE command has already
deleted, or is just about to delete. Second, if a database is dropped, and
another database is created with the same OID, bgwriter would in the worst
case delete a relation in the new database that happened to get the same OID
as a dropped relation in the old database.
To fix these race conditions:
- make rmtree() ignore ENOENT errors (a sketch of the idea follows below).
  This fixes the 1st race condition.
- make ForgetDatabaseFsyncRequests forget unlink requests as well.
- force a checkpoint in dropdb on all platforms.
Since ForgetDatabaseFsyncRequests() is asynchronous, the 2nd change isn't
enough on its own to fix the problem of dropping and creating a database with
same OID, but forcing a checkpoint on DROP DATABASE makes it sufficient.
Per Tom Lane's bug report and proposal. Backpatch to 8.3.
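A minimal illustration of the first fix above, using an invented stand-alone
helper rather than the real rmtree(): a file that is already gone (ENOENT) is
treated as success, since DROP DATABASE may have unlinked it concurrently.

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Hypothetical helper, not PostgreSQL's rmtree(). */
    static bool
    remove_file_tolerant(const char *path)
    {
        if (unlink(path) < 0 && errno != ENOENT)
        {
            /* A real failure is reported; ENOENT is silently OK. */
            fprintf(stderr, "could not remove file \"%s\": %s\n",
                    path, strerror(errno));
            return false;
        }
        return true;            /* removed, or already gone */
    }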
|
| |
we had several code paths where a physical tlist could be used for the input
to a Sort node, which is a dumb idea because any unneeded table columns will
increase the volume of data the sort has to push around.
(Unfortunately the easy-looking fix of calling disuse_physical_tlist during
make_sort_xxx doesn't work because in most cases we're already committed to
the current input tlist --- it's been marked with sort column numbers, or
we've built grouping column numbers using it, etc. The tlist has to be
selected properly at the calling level before we start constructing sort-col
information. This is easy enough to do, we were just failing to take the
point into consideration.)
Back-patch to 8.3. I believe the problem probably exists clear back to 7.4
when the physical tlist optimization was added, but I'm afraid to back-patch
further than 8.3 without a great deal more study than I want to put into it.
The code in this area has drifted a lot over time. The real-world importance
of these code paths is uncertain anyway --- I think in many cases we'd
probably prefer hash-based methods.
|
| |
corrupted. (Neither is very important if SIGTERM is used to shut down the
whole database cluster together, but there's a problem if someone tries to
SIGTERM individual backends.) To do this, introduce new infrastructure
macros PG_ENSURE_ERROR_CLEANUP/PG_END_ENSURE_ERROR_CLEANUP that take care
of transiently pushing an on_shmem_exit cleanup hook. Also use this method
for createdb cleanup --- that wasn't a shared-memory-corruption problem,
but SIGTERM abort of createdb could leave orphaned files lying around.
Backpatch as far as 8.2. The shmem corruption cases don't exist in 8.1,
and the createdb usage doesn't seem important enough to risk backpatching
further.
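Roughly how the new macros are meant to be used (my_cleanup and the body of
do_guarded_work are placeholders): the cleanup callback has the usual
on_shmem_exit signature and is registered only for the duration of the
guarded block.

    #include "postgres.h"
    #include "storage/ipc.h"

    /* Placeholder cleanup callback, with the standard on_shmem_exit signature. */
    static void
    my_cleanup(int code, Datum arg)
    {
        /* undo whatever state the guarded block may have left behind */
    }

    static void
    do_guarded_work(void)
    {
        /* Run my_cleanup if the block below elog(ERROR)s or is SIGTERMed. */
        PG_ENSURE_ERROR_CLEANUP(my_cleanup, (Datum) 0);
        {
            /* ... work that must not leave orphaned state on abort ... */
        }
        PG_END_ENSURE_ERROR_CLEANUP(my_cleanup, (Datum) 0);
    }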
|
| |
it is trying to build a relcache entry for. This is an oversight in my 8.2
patch that tried to ensure we always took a lock on a relation before trying
to build its relcache entry. The implication is that if someone committed a
reindex of a critical system index at about the same time that some other
backend were starting up without a valid pg_internal.init file, the second one
might PANIC due to not seeing any valid version of the index's pg_class row.
Improbable case, but definitely not impossible.
|
| |
results to contain uninitialized, unpredictable values. While this was okay
as far as the datatypes themselves were concerned, it's a problem for the
parser because occurrences of the "same" literal might not be recognized as
equal by datumIsEqual (and hence not by equal()). It seems sufficient to fix
this in the input functions since the only critical use of equal() is in the
parser's comparisons of ORDER BY and DISTINCT expressions.
Per a trouble report from Marc Cousin.
Patch all the way back. Interestingly, array_in did not have the bug before
8.2, which may explain why the issue went unnoticed for so long.
|
| |
the columns it works with to be domains over the expected type, not just
exactly the expected type. In passing, fix ts_stat() the same way.
Per report from Markus Wollny.
|
| |
currently support this because we must be able to build Vars referencing
join columns, and varattno is only 16 bits wide. Perhaps this should be
improved in future, but considering that it never came up before, I'm not
sure the problem is worth much effort. Per bug #4070 from Marcello
Ceschia.
The problem seems largely academic in 8.0 and 7.4, because they have
(different) O(N^2) performance issues with such wide joins, but
back-patch all the way anyway.
|
| |
classed all as "dead"; also get it to count DEAD item pointers as dead rows,
instead of ignoring them as before. Also improve matters so that tuples
previously inserted or deleted by our own transaction are handled nicely:
the stats collector's live-tuple and dead-tuple counts will end up correct
after our transaction ends, regardless of whether we end in commit or abort.
While there's more work that could be done to improve the counting of in-doubt
tuples in both VACUUM and ANALYZE, this commit is enough to alleviate some
known bad behaviors in 8.3; and the other stuff that's been discussed seems
like research projects anyway.
Pavan Deolasee and Tom Lane
|
| |
responsible for copying the query string into the new Portal. Such copying
is unnecessary in the common code path through exec_simple_query, and in
this case it can be enormously expensive because the string might contain
a large number of individual commands; we were copying the entire, long
string for each command, resulting in O(N^2) behavior for N commands.
(This is the cause of bug #4079.) A second problem with it is that
PortalDefineQuery really can't risk error, because if it elog's before
having set up the Portal, we will leak the plancache refcount that the
caller is trying to hand off to the portal. So go back to the design in
which the caller is responsible for making sure everything is copied into
the portal if necessary.
|
| |
eval_const_expressions needs to be passed the PlannerInfo ("root") structure,
because in some cases we want it to substitute values for Param nodes.
(So "constant" is not so constant as all that ...) This mistake partially
disabled optimization of unnamed extended-Query statements in 8.3: in
particular the LIKE-to-indexscan optimization would never be applied if the
LIKE pattern was passed as a parameter, and constraint exclusion depending
on a parameter value didn't work either.
|
| |
Add some regression tests for plausible failures in this area.
|
| |
The places that did, eg,
(statbuf.st_mode & S_IFMT) == S_IFDIR
were correct, but there is no good reason not to use S_ISDIR() instead,
especially when that's what the other 90% of our code does. The places
that did, eg,
(statbuf.st_mode & S_IFDIR)
were flat out *wrong* and would fail in various platform-specific ways,
eg a symlink could be mistaken for a regular file on most Unixen.
The actual impact of this is probably small, since the problem cases
seem to always involve symlinks or sockets, which are unlikely to be
found in the directories that PG code might be scanning. But it's
clearly trouble waiting to happen, so patch all the way back anyway.
(There seem to be no occurrences of the mistake in 7.4.)
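A small illustration of the three forms discussed above (path_is_directory is
an invented helper, not code from this commit):

    #include <stdbool.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    static bool
    path_is_directory(const char *path)
    {
        struct stat statbuf;

        if (stat(path, &statbuf) != 0)
            return false;

        /*
         * Wrong:   (statbuf.st_mode & S_IFDIR)
         *          The file-type field is not a set of independent flag bits,
         *          so e.g. a socket or block device can pass this test.
         *
         * Correct: ((statbuf.st_mode & S_IFMT) == S_IFDIR)
         *
         * Preferred, and what the rest of the code already uses:
         */
        return S_ISDIR(statbuf.st_mode);
    }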
|
| |
Whatever we do about that, this isn't the path to the solution.
|
| |
the query result must be exactly one row (since we don't do this when there's
any GROUP BY). Therefore any ORDER BY or DISTINCT attached to the query is
useless and can be dropped. Aside from saving useless cycles, this protects
us against problems with matching the hacked-up tlist entries to sort clauses,
as seen in a bug report from Taiki Yamaguchi. We might need to work harder
if we ever try to optimize grouped queries with this approach, but this
solution will do for now.
|
| |
knowledge up through any joins it participates in. We were doing that already
in some special cases but not in the general case. Also, defend against zero
row estimates for the input relations in cost_mergejoin --- this fix may have
eliminated the only scenario in which that can happen, but be safe. Per
report from Alex Solovey.
|
| |
friends. Avoid double translation of some messages, ensure other messages
are exposed for translation (and make them follow the style guidelines),
avoid unsafe passing of an unpredictable message text as a format string.
|
| |
ISO_8859-5 <-> MULE_INTERNAL conversion tables.
This was discovered when trying to convert a string containing those characters
from ISO_8859-5 to Windows-1251, because we use MULE_INTERNAL/KOI8R as an
intermediate encoding between those two.
While the missing "Yo" was just an omission in the conversion tables, there are
a few other characters, like the "Numero" sign ("No" as a single character), that
exist in all the other cyrillic encodings (win1251, ISO_8859-5 and cp866) but
not in KOI8R. Added comments about that.
Patch by Sergey Burladyan. Back-patch to 7.4.
|
| |
case where there is a match to the pattern overall but the user has specified
a parenthesized subexpression and that subexpression hasn't got a match.
An example is substring('foo' from 'foo(bar)?'). This should return NULL,
since (bar) isn't matched, but it was mistakenly returning the whole-pattern
match instead (ie, 'foo'). Per bug #4044 from Rui Martins.
This has been broken since the beginning; patch in all supported versions.
The old behavior was sufficiently inconsistent that it's impossible to believe
anyone is depending on it.
|
| |
This accidentally failed to fail before 8.3, because the context we were
switching back to was long-lived anyway; but it sure looks risky as can be
now. Well spotted by Pavan Deolasee.
|
| |
job (i.e., to prevent Xid wraparound problems). Bug reported by ITAGAKI
Takahiro in 20080314103837.63D3.52131E4D@oss.ntt.co.jp, though I didn't use his
patch.
|
| |
that are reported as "equal" by wcscoll() are checked to see if they really
are bitwise equal, and are sorted per strcmp() if not. We made this happen
a couple of years ago in the regular code path, but it unaccountably got
left out of the Windows/UTF8 case (probably brain fade on my part at the
time). As in the prior set of changes, affected users may need to reindex
indexes on textual columns.
Backpatch as far as 8.2, which is the oldest release we are still supporting
on Windows.
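The tie-break rule, in sketch form (the backend's actual comparison routine
also deals with encoding conversion and non-null-terminated inputs; the names
below are illustrative):

    #include <string.h>
    #include <wchar.h>

    /*
     * Compare via the locale, but never report equality for strings that are
     * not bitwise identical; fall back to strcmp so that distinct strings
     * always sort in some consistent order.
     */
    static int
    compare_with_tiebreak(const wchar_t *w1, const wchar_t *w2,
                          const char *s1, const char *s2)
    {
        int result = wcscoll(w1, w2);

        if (result == 0 && strcmp(s1, s2) != 0)
            result = strcmp(s1, s2);

        return result;
    }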
|
| |
messages if the calling transaction aborts later on. Collapsing out line
pointer redirects is a done deal as soon as we complete the page update,
so syscache *must* be notified even if the VACUUM FULL as a whole doesn't
complete. To fix, add some functionality to inval.c to allow the pending
inval messages to be sent immediately while heap_page_prune is still
running. The implementation is a bit chintzy: it will only work in the
context of VACUUM FULL. But that's all we need now, and it can always be
extended later if needed. Per my trouble report of a week ago.
|
| |
(probably NULL) before exiting. Up to now it's just left the variable as it
set it, which means that after we're done processing the current client
message, ActiveSnapshot is probably pointing at garbage (because this function
is typically run in MessageContext which will get reset). There doesn't seem
to have been any code path in which that mattered before 8.3, but now the
plancache module might try to use the stale value if the next client message
is a Bind for a prepared statement that is in need of replanning. Per report
from Alex Hunsaker.
|
| |
pg_listener modifications commanded by LISTEN and UNLISTEN until the end
of the current transaction. This allows us to hold the ExclusiveLock on
pg_listener until after commit, with no greater risk of deadlock than there
was before. Aside from fixing the race condition, this gets rid of a
truly ugly kludge that was there before, namely having to ignore
HeapTupleBeingUpdated failures during NOTIFY. There is a small potential
incompatibility, which is that if a transaction issues LISTEN or UNLISTEN
and then looks into pg_listener before committing, it won't see any resulting
row insertion or deletion, where before it would have. It seems unlikely
that anyone would be depending on that, though.
This patch also disallows LISTEN and UNLISTEN inside a prepared transaction.
That case had some pretty undesirable properties already, such as possibly
allowing pg_listener entries to be made for PIDs no longer present, so
disallowing it seems like a better idea than trying to maintain the behavior.
|
| |
it accumulates the set of changes to be made and then applies them. It had
to accumulate the set of changes anyway to prepare a WAL record for the
pruning action, so this isn't an enormous change; the only new complexity is
to not doubly mark tuples that are visited twice in the scan. The main
advantage is that we can substantially reduce the scope of the critical
section in which the changes are applied, thus avoiding PANIC in foreseeable
cases like running out of memory in inval.c. A nice secondary advantage is
that it is now far clearer that WAL replay will actually do the same thing
that the original pruning did.
This commit doesn't do anything about the open problem that
CacheInvalidateHeapTuple doesn't have the right semantics for a CTID change
caused by collapsing out a redirect pointer. But whatever we do about that,
it'll be a good idea to not do it inside a critical section.
|
| |
TopMemoryContext, rather than scattered through executor per-query contexts.
This poses no danger of memory leak since the ResourceOwner mechanism
guarantees release of no-longer-needed items. It is needed because the
per-query context might already be released by the time we try to clean up
the hash scan list. Report by ykhuang, diagnosis by Heikki.
Back-patch to 8.0, where the ResourceOwner-based cleanup was introduced.
The given test case does not fail before 8.2, probably because we rearranged
transaction abort processing somehow; but this coding is undoubtedly risky
so I'll patch 8.0 and 8.1 anyway.
|
| |
an unused memory hole in tsquery.
Per report by Richard Huxton <dev@archonet.com>.
It was working anyway because tsquery->size is not actually used for any
operation except comparing tsqueries. To avoid requiring all stored tsquery
values to be rebuilt, the optimization in CompareTSQ is removed.
|
| |
caches that we don't actually need to touch. This saves some trivial
number of cycles and avoids certain cases of deadlock when doing concurrent
VACUUM FULL on system catalogs. Per report from Gavin Roy.
Backpatch to 8.2. In earlier versions, CatalogCacheInitializeCache didn't
lock the relation so there's no deadlock risk (though that certainly had
plenty of risks of its own).
|
| |
temporary table; we can't support that because there's no way to clean up the
source backend's internal state if the eventual COMMIT PREPARED is done by
another backend. This was checked correctly in 8.1 but I broke it in 8.2 :-(.
Patch by Heikki Linnakangas, original trouble report by John Smith.
|
| |
"struct varlena" would be at least word-aligned. Per buildfarm results
from gypsy_moth. I did a little bit of trawling for other instances of
this coding pattern, and didn't find any; but if we turn up any more
of them I think we'd better revert the "char [4]" patch and find another
way of making tuptoaster.c alignment-safe.
|
| |
to explicitly cast the output back to char before comparing it to a char
value, else we get the wrong result for high-bit-set characters. Found by
Rolf Jentsch. Also, fix several places where <ctype.h> functions were being
called without casting the argument to unsigned char; this is likewise
unportable, but we keep making that mistake :-(. These found by buildfarm
member salamander, which I will desperately miss if it ever goes belly-up.
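Both hazards in one tiny invented example: the <ctype.h> argument must be cast
to unsigned char, and the int result must be cast back to char before comparing
it with a char value, or high-bit-set characters misbehave wherever char is
signed.

    #include <ctype.h>

    /* Count case-insensitive occurrences of 'target' in 's'. Illustrative only. */
    static int
    count_char_ci(const char *s, char target)
    {
        int     n = 0;
        char    t = (char) tolower((unsigned char) target);

        for (; *s; s++)
        {
            /* argument cast to unsigned char; result cast back to char */
            if ((char) tolower((unsigned char) *s) == t)
                n++;
        }
        return n;
    }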
|
| |
left in the code though it was not meant to be provided. It represents a
security hole because unprivileged users could use it to look at (at least the
first line of) any file readable by the backend. Fortunately, this is only
possible if the backend was built with XML support, so the damage is at least
mitigated; and 8.3 probably hasn't propagated into any security-critical uses
yet anyway. Per report from Sergey Burladyan.
|
| |
is also licensed to put a local variable declared that way at an unaligned
address. Which will not work if the variable is then manipulated with
SET_VARSIZE or other macros that assume alignment. So the previous patch
is not an unalloyed good, but on balance I think it's still a win, since
we have very few places that do that sort of thing. Fix the one place in
tuptoaster.c that does it. Per buildfarm results from gypsy_moth
(I'm a bit surprised that only one machine showed a failure).
|
| |
"multi_call_ctx" to be a distinct sub-context of the EState's per-query
context, and delete the multi_call_ctx as soon as the SRF finishes
execution. This avoids leaking SRF memory until the end of the current
query, which is particularly egregious when the SRF is scanned
multiple times. This change also fixes a leak of the fields of the
AttInMetadata struct in shutdown_MultiFuncCall().
Also fix a leak of the SRF result TupleDesc when rescanning a
FunctionScan node. The TupleDesc is allocated in the per-query context
for every call to ExecMakeTableFunctionResult(), so we should free it
after calling that function. Since the SRF might choose to return
a non-expendable TupleDesc, we only free the TupleDesc if it is
not being reference-counted.
Backpatch to 8.3 and 8.2 stable branches.
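For orientation, the standard funcapi pattern that this change affects:
per-SRF state is allocated in multi_call_memory_ctx, which now lives in its
own sub-context and is released as soon as the SRF finishes (my_srf is an
invented example, not code from this commit).

    #include "postgres.h"
    #include "fmgr.h"
    #include "funcapi.h"

    PG_FUNCTION_INFO_V1(my_srf);

    /* Returns the integers 0, 1, 2, one per call. */
    Datum
    my_srf(PG_FUNCTION_ARGS)
    {
        FuncCallContext *funcctx;

        if (SRF_IS_FIRSTCALL())
        {
            MemoryContext oldcxt;

            funcctx = SRF_FIRSTCALL_INIT();

            /* State that must survive across calls goes in this context. */
            oldcxt = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
            funcctx->max_calls = 3;
            /* e.g. funcctx->user_fctx = palloc(...); */
            MemoryContextSwitchTo(oldcxt);
        }

        funcctx = SRF_PERCALL_SETUP();

        if (funcctx->call_cntr < funcctx->max_calls)
            SRF_RETURN_NEXT(funcctx, Int32GetDatum((int32) funcctx->call_cntr));

        SRF_RETURN_DONE(funcctx);
    }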
|
| |
a relevant error message instead of just dumping core. Odd that nobody
reported this before Darren Reed.
|
| |
values into \nnn octal escape sequences. When the database encoding is
multibyte this is *necessary* to avoid generating invalidly encoded text.
Even in a single-byte encoding, the old behavior seems very hazardous ---
consider for example what happens if the text is transferred to another
database with a different encoding. Decoding would then yield some other
bytea value than what was encoded, which is surely undesirable. Per gripe
from Hernan Gonzalez.
Backpatch to 8.3, but not further. This is a bit of a judgment call, but I
make it on these grounds: pre-8.3 we don't really have much encoding safety
anyway because of the convert() function family, and we would also have much
higher risk of breaking existing apps that may not be expecting this behavior.
8.3 is still new enough that we can probably get away with making this change
in the function's behavior.
|
| |
Formerly, DecodeDate attempted to verify the day-of-the-month exactly, but
it was under the misapprehension that it would know whether we were looking
at a BC year or not. In reality this check can't be made until the calling
function (eg DecodeDateTime) has processed all the fields. So, split the
BC adjustment and validity checks out into a new function ValidateDate that
is called only after processing all the fields. In passing, this patch
makes DecodeTimeOnly work for BC inputs, which it never did before.
(The historical veracity of all this is nonexistent, of course, but if
we're going to say we support proleptic Gregorian calendar then we should
do it correctly. In any case the unpatched code is broken because it could
emit dates that it would then reject on re-inputting.)
Per report from Bernd Helmle. Back-patch as far as 8.0; in 7.x we were
not using our own calendar support and so this seems a bit too risky
to put into 7.4.
|
| |
platforms this works, but on some it crashes. Zdenek Kotala
|
| |
represented as "char ...[4]" not "int32". Since the length word is never
supposed to be accessed via this struct member anyway, this won't break
any existing code that is following the rules. The advantage is that C
compilers will no longer assume that a pointer to struct varlena is
word-aligned, which prevents incorrect optimizations in TOAST-pointer
access and perhaps other places. gcc doesn't seem to do this (at least
not at -O2), but the problem is demonstrable on some other compilers.
I changed struct inet as well, but didn't bother to touch a lot of other
struct definitions in which it wouldn't make any difference because there
were other fields forcing int alignment anyway. Hopefully none of those
struct definitions are used for accessing unaligned Datums.
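The shape of the change, approximately (see c.h for the authoritative
declaration; the struct names below are invented for contrast): with only char
members, the compiler can no longer assume that a pointer to the struct is
word-aligned.

    #include "c.h"

    /* Before: the int32 member let compilers assume 4-byte alignment. */
    struct varlena_before
    {
        int32       vl_len;
        char        vl_dat[1];
    };

    /*
     * After: same layout, but no member forces any alignment assumption.
     * The length word must be read and written only through the
     * VARSIZE/SET_VARSIZE family of macros.
     */
    struct varlena_after
    {
        char        vl_len_[4];
        char        vl_dat[1];
    };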
|
| |
OID or new relfilenode. If the existing OIDs are sufficiently densely
populated, this could take a long time (perhaps even be an infinite loop),
so it seems wise to allow the system to respond to a cancel interrupt here.
Per a gripe from Jacky Leng.
Backpatch as far as 8.1. Older versions just fail on OID collision,
instead of looping.
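The gist of the fix, as a sketch (choose_unused_oid and oid_is_in_use are
invented stand-ins for the real uniqueness probe against the target index):

    #include "postgres.h"
    #include "access/transam.h"
    #include "miscadmin.h"

    /* Hypothetical stand-in for the catalog/index collision check. */
    static bool oid_is_in_use(Oid candidate);

    static Oid
    choose_unused_oid(void)
    {
        for (;;)
        {
            Oid     candidate;

            /* Let a query cancel or fast shutdown break out of the loop. */
            CHECK_FOR_INTERRUPTS();

            candidate = GetNewObjectId();
            if (!oid_is_in_use(candidate))
                return candidate;
        }
    }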
|
| |
from Jaime Casanova.
|
| |
and RI_FKey_keyequal_upd_fk, as well as no-longer-needed calls of
ri_BuildQueryKeyFull. Aside from saving a few cycles, this avoids needless
deadlock risks when an update is not changing the columns that participate
in an RI constraint. Per a gripe from Alexey Nalbat.
Back-patch to 8.3. Earlier releases did have a need to open the other
relation due to the way in which they retrieved information about the RI
constraint, so this problem unfortunately can't easily be improved pre-8.3.
Tom Lane and Stephan Szabo
|
| |
doing anything interesting, such as calling RevalidateCachedPlan(). The
necessity of this is demonstrated by an example from Willem Buitendyk:
during a replan, the planner might try to evaluate SPI-using functions,
and so we'd better be in a clean SPI context.
A small downside of this fix is that these two functions will now fail
outright if called when not inside a SPI-using procedure (ie, a
SPI_connect/SPI_finish pair). The documentation never promised or suggested
that that would work, though; and they are normally used in concert with
other functions, mainly SPI_prepare, that always have failed in such a case.
So the odds of breaking something seem pretty low.
In passing, make SPI_is_cursor_plan's error handling convention clearer,
and fix documentation's erroneous claim that SPI_cursor_open would
return NULL on error.
Before 8.3 these functions could not invoke replanning, so there is probably
no need for back-patching.
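Usage now has to look roughly like this (an illustrative wrapper with invented
names), with the call made between SPI_connect and SPI_finish:

    #include "postgres.h"
    #include "executor/spi.h"

    /* Returns true if 'query' would produce a plan usable as a cursor. */
    static bool
    query_is_cursor_plan(const char *query)
    {
        SPIPlanPtr  plan;
        bool        result = false;

        if (SPI_connect() != SPI_OK_CONNECT)
            elog(ERROR, "SPI_connect failed");

        plan = SPI_prepare(query, 0, NULL);
        if (plan != NULL)
            result = SPI_is_cursor_plan(plan);

        SPI_finish();       /* releases the unsaved plan with other SPI memory */
        return result;
    }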
|