-O3 or higher (presumably because it inlines more things). Per gripe
from Mark Mielke.
of an index on a serial column, rather than the name of the associated
sequence. Fallout from recent changes in dependency setup for serials.
Per bug #2732 from Basil Evseenko.
added to information_schema (per a SQL2003 addition). The original coding
failed if a referenced column participated in more than one pg_constraint
entry. Also, it did not work if an FK relied directly on a unique index
without any constraint syntactic sugar --- this case is outside the SQL spec,
but PG has always supported it, so it's reasonable for our information_schema
to handle it too. Per bug #2750 from Stephen Haberman.
Although this patch changes the initial catalog contents, I didn't force
initdb. Any beta3 testers who need the fix can install it via CREATE OR
REPLACE VIEW, so forcing them to initdb seems an unnecessary imposition.
accurately: we have to distinguish the effects of the join's own ON
clauses from the effects of pushed-down clauses. Failing to do so
was a quick hack long ago, but it's time to be smarter. Per example
from Thomas H.
30 seconds instead of retrying forever. Also modify xlog.c so that if
it fails to rename an old xlog segment up to a future slot, it will
unlink the segment instead. Per discussion of bug #2712, in which it
became apparent that Windows can handle unlinking a file that's being
held open, but not renaming it.
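
A minimal sketch of that fallback, assuming hypothetical paths and omitting
xlog.c's error reporting: recycle the old segment by renaming it into a
future slot, and fall back to unlinking if the rename fails.

    #include <stdio.h>      /* rename() */
    #include <unistd.h>     /* unlink() */

    static void
    recycle_or_remove(const char *oldpath, const char *newpath)
    {
        /* Windows can unlink a file that is held open, but not rename it */
        if (rename(oldpath, newpath) != 0)
            (void) unlink(oldpath);
    }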
The former coding relied on the actual allocated size of the last block,
which made it behave strangely if the first allocation in a context was
larger than ALLOC_CHUNK_LIMIT: subsequent allocations would be referenced
to that and not to the intended series of block sizes. Noted while
studying a memory wastage gripe from Tatsuo.
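
A self-contained sketch of the corrected idea, with hypothetical names rather
than the real aset.c structures: the context itself tracks the next size in
the intended series, so one oversized chunk cannot skew later block sizes.

    #include <stddef.h>

    typedef struct Context
    {
        size_t  maxBlockSize;   /* cap for the doubling series */
        size_t  nextBlockSize;  /* next size in the intended series */
    } Context;

    static size_t
    choose_block_size(Context *cxt, size_t needed)
    {
        size_t  size = cxt->nextBlockSize;

        /* advance the series regardless of what this request looks like */
        if (cxt->nextBlockSize < cxt->maxBlockSize)
            cxt->nextBlockSize *= 2;
        /* an over-limit chunk simply gets a block of its own size */
        if (size < needed)
            size = needed;
        return size;
    }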
more space is needed, instead of incrementing by a fixed amount; the old
method wastes lots of space and time when the ultimate size is large.
Per gripe from Tatsuo.
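
The principle, as a self-contained sketch with a hypothetical buffer type:
doubling the capacity keeps total copying linear in the final size, while a
fixed increment makes it quadratic.

    #include <stdlib.h>

    typedef struct
    {
        char   *data;
        size_t  len;
        size_t  cap;
    } Buffer;

    static int
    buffer_reserve(Buffer *buf, size_t needed)
    {
        char   *p;
        size_t  newcap;

        if (buf->len + needed <= buf->cap)
            return 0;
        newcap = (buf->cap > 0) ? buf->cap : 64;
        while (newcap < buf->len + needed)
            newcap *= 2;            /* double, don't add a fixed step */
        p = realloc(buf->data, newcap);
        if (p == NULL)
            return -1;
        buf->data = p;
        buf->cap = newcap;
        return 0;
    }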
text_to_array(): they all had O(N^2) behavior on long input strings in
multibyte encodings, because of repeated rescanning of the input text to
identify substrings whose positions/lengths were computed in characters
instead of bytes. Fix by tracking the current source position as a char
pointer as well as a character-count. Also avoid some unnecessary palloc
operations. text_to_array() also leaked memory intra-call due to failure
to pfree temporary strings. Per gripe from Tatsuo Ishii.
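
The heart of the fix, sketched as a helper that assumes PostgreSQL's
mb/pg_wchar.h for the real pg_mblen() (byte length of the multibyte character
at a pointer); the function name is illustrative.

    #include "mb/pg_wchar.h"    /* pg_mblen() */

    /* Advance from a known (p, charpos) to character position 'target',
     * so repeated position lookups never rescan from the string start. */
    static const char *
    advance_to_char(const char *p, int charpos, int target)
    {
        while (charpos < target)
        {
            p += pg_mblen(p);   /* step over one possibly-multibyte char */
            charpos++;
        }
        return p;
    }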
established: referencing an undefined parameter should result in an
error, not NULL.
sub-arrays. Per discussion, if all inputs are empty arrays then result
must be an empty array too, whereas a mix of empty and nonempty arrays
should (and already did) draw an error. In the back branches, the
construct was strict: any NULL input immediately yielded a NULL output;
so I left that behavior alone. HEAD was simply ignoring NULL sub-arrays,
which doesn't seem very sensible. For lack of a better idea it now
treats NULL sub-arrays the same as empty ones.
with fopen() not using FILE_SHARE_DELETE was indeed the bug we were after,
given lack of recent reports.
the backend should rely on its working-directory setting instead.
Also do some message-style police work in contrib/adminpack.
manually release the LDAP handle via ldap_unbind(). This isn't a
significant problem in practice because an error eventually results
in exiting the process, but we can clean up correctly without too
much pain.
In passing, fix an error in snprintf() usage: the "size" parameter
to snprintf() is the size of the destination buffer, including space
for the NUL terminator. Also, depending on the value of NAMEDATALEN,
the old coding could have allowed for a buffer overflow.
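
The snprintf() contract in question, shown in a minimal hypothetical helper:
pass the full buffer size, NUL included, i.e. sizeof(buf) at the call site,
never sizeof(buf) + 1.

    #include <stdio.h>

    static void
    format_message(char *buf, size_t bufsize, const char *name)
    {
        /* bufsize counts the terminating NUL, so this cannot overflow
         * as long as the caller passes sizeof(buf) */
        snprintf(buf, bufsize, "user \"%s\"", name);
    }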
stale relcache init files (pg_internal.init), and there is no mechanism for
updating them during WAL replay. Easiest solution is just to delete the init
files at conclusion of startup, and let the first backend started in each
database take care of rebuilding the init file. Simon Riggs and Tom Lane.
Back-patched to 8.1. Arguably this should be fixed in 8.0 too, but it would
require significantly more code since 8.0 has no handy startup-time scan of
pg_database to piggyback on. Manual solution of the problem is possible
in 8.0 (just delete the pg_internal.init files before starting WAL replay),
so that may be a sufficient answer.
in PITR scenarios. We now WAL-log the replacement of old XIDs with
FrozenTransactionId, so that such replacement is guaranteed to propagate to
PITR slave databases. Also, rather than relying on hint-bit updates to be
preserved, pg_clog is not truncated until all instances of an XID are known to
have been replaced by FrozenTransactionId. Add new GUC variables and
pg_autovacuum columns to allow management of the freezing policy, so that
users can trade off the size of pg_clog against the amount of freezing work
done. Revise the already-existing code that forces autovacuum of tables
approaching the wraparound point to make it more bulletproof; also, revise the
autovacuum logic so that anti-wraparound vacuuming is done per-table rather
than per-database. initdb forced because of changes in pg_class, pg_database,
and pg_autovacuum catalogs. Heikki Linnakangas, Simon Riggs, and Tom Lane.
deletion code to avoid the case where an upper-level btree page remains "half
dead" for a significant period of time, and to block insertions into a key
range that is in process of being re-assigned to the right sibling of the
deleted page's parent. This prevents the scenario reported by Ed L. wherein
index keys could become out-of-order in the grandparent index level.
Since this is a moderately invasive fix, I'm applying it only to HEAD.
The bug exists back to 7.4, but the back branches will get a different patch.
node of a SubLink or SubPlan testexpr field. Bug resulted from replacing
the old lefthand/exprs list fields with a simple expression field, and not
remembering that expression_tree_walker is coded to save a few cycles by
recursing directly to self on list fields (on the assumption the walker
isn't interested in List nodes per se). On non-list fields it must of
course call the walker. Possibly that hack isn't worth the risk of more
such bugs, but I'll leave it be for now. Per bug report from James Robinson.
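
A self-contained sketch of the convention, with made-up node types standing in
for expression_tree_walker's real ones: a list field may shortcut straight to
the per-element walk (the callback never sees the list header), but an
expression field must go through the callback, or the callback never sees that
node --- which is what happened to testexpr.

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct Node Node;
    typedef bool (*walker_fn) (Node *node, void *context);

    struct Node
    {
        Node   *expr_field;     /* holds an arbitrary expression */
        Node  **list_items;     /* a List-like field */
        int     list_len;
    };

    /* walk a list: the walker sees each element, not the list itself */
    static bool
    list_walk(Node **items, int n, walker_fn walker, void *context)
    {
        for (int i = 0; i < n; i++)
            if (walker(items[i], context))
                return true;
        return false;
    }

    static bool
    tree_walk(Node *node, walker_fn walker, void *context)
    {
        if (node == NULL)
            return false;
        /* an expression field must go through the walker itself ... */
        if (node->expr_field && walker(node->expr_field, context))
            return true;
        /* ... while a list field may use the direct shortcut; applying
         * the shortcut to expr_field is exactly the bug described */
        return list_walk(node->list_items, node->list_len, walker, context);
    }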
outer joins. Originally it was only looking for overlap of the righthand
side of a left join, but we have to do it on the lefthand side too.
Per example from Jean-Pierre Pelletier.
This was required back when RESUME_INTERRUPTS could actually
execute ProcessInterrupts, but that hasn't been true since 2001...
Going to wait for buildfarm results before backpatching, this time.
a fastpath function call.
be NULL ... seems an unsafe assumption.
if available (which it usually should be), during processing of Bind
and Execute protocol messages. This improves usefulness of
log_min_error_statement logging for extended query protocol.
from Magnus that MSVC complains about this.
sin_port in the returned IP address struct when servname is NULL. This has
been observed to cause failure to bind the stats collection socket, and
could perhaps cause other issues too. Per reports from Brad Nicholson
and Chris Browne.
that conflict with the OID that we want to use for the new database.
This avoids the risk of trying to remove files that maybe we shouldn't
remove. Per gripe from Jon Lapham and subsequent discussion of 27-Sep.
timezone actually has a daylight-savings rule. This avoids breaking
cases that used to work because they went through the DecodePosixTimezone
code path. Per contrib regression failures (mea culpa for not running
those yesterday...). Also document the already-applied change to allow
GMT offsets up to 14 hours.
input routines. Remove the former "DecodePosixTimezone" function in favor of
letting the zic code handle POSIX-style zone specs (see tzparse()). In
particular this means that "PST+3" now means the same as "-03", whereas it
used to mean "-11" --- the zone abbreviation is effectively just a noise word
in this syntax. Make sure that all named and POSIX-style zone names will be
parsed as a single token. Fix long-standing bogosities in printing and input
of fractional-hour timezone offsets (since the tzparse() code will accept
these, we'd better make 'em work). Also correct an error in the original
coding of the zic-zone-name patch: in "timestamp without time zone" input,
zone names are supposed to be allowed but ignored, but the coding was such
that the zone changed the interpretation anyway.
modules; the first try was not usable in EXEC_BACKEND builds (e.g.,
Windows). Instead, just provide some entry points to increase the
allocation requests during postmaster start, and provide a dedicated
LWLock that can be used to synchronize allocation operations performed
by backends. Per discussion with Marc Munro.
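
A hedged sketch of how a module might use those entry points, assuming the
8.2-era names RequestAddinShmemSpace()/RequestAddinLWLocks() and a made-up
size; the library must be loaded via shared_preload_libraries so this runs
during postmaster start.

    #include "postgres.h"
    #include "fmgr.h"
    #include "storage/ipc.h"
    #include "storage/lwlock.h"

    PG_MODULE_MAGIC;

    #define MY_SHMEM_SIZE (1024 * 1024)     /* hypothetical */

    void        _PG_init(void);

    void
    _PG_init(void)
    {
        /* reserve shared memory and an LWLock during postmaster start,
         * for the module to allocate from and synchronize with later */
        RequestAddinShmemSpace(MY_SHMEM_SIZE);
        RequestAddinLWLocks(1);
    }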
1) pgwin32_waitforsinglesocket(): WaitForMultipleObjectsEx is now called with
a finite timeout (100ms) in the case of FD_WRITE on a UDP socket. If the
timeout occurs, pgwin32_waitforsinglesocket() tries to write an empty packet
and goes to WaitForMultipleObjectsEx again.
2) pgwin32_send(): add a loop around WSASend and pgwin32_waitforsinglesocket().
The reason: for an overlapped socket, an 'ok' result from
pgwin32_waitforsinglesocket() is no guarantee that the socket is still free;
it can become busy again, and the following WSASend call will then fail with
a WSAEWOULDBLOCK error.
See http://archives.postgresql.org/pgsql-hackers/2006-10/msg00561.php
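
The loop in (2), sketched with a simplified signature for
pgwin32_waitforsinglesocket(); the Winsock calls are real, the surrounding
function is illustrative.

    #include <winsock2.h>

    /* simplified stand-in for the real wait helper */
    extern int pgwin32_waitforsinglesocket(SOCKET s, int what);

    static int
    send_with_retry(SOCKET s, char *data, u_long len)
    {
        WSABUF  wbuf;
        DWORD   sent;

        wbuf.len = len;
        wbuf.buf = data;
        for (;;)
        {
            if (WSASend(s, &wbuf, 1, &sent, 0, NULL, NULL) == 0)
                return 0;           /* the send was accepted */
            if (WSAGetLastError() != WSAEWOULDBLOCK)
                return -1;          /* a hard error */
            /* the socket went busy again after the last wait: wait anew */
            if (pgwin32_waitforsinglesocket(s, FD_WRITE) < 0)
                return -1;
        }
    }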
rows --- if the surrounding query queued any trigger events between the rows,
the events would be fired at the wrong time, leading to bizarre behavior.
Per report from Merlin Moncure.
This is a simple patch that should solve the problem fully in the back
branches, but in HEAD we also need to consider the possibility of queries
with RETURNING clauses. Will look into a fix for that separately.
I introduced in 7.4.1 :-(. It's correct to allow unknown to be coerced to
ANY or ANYELEMENT, since it's a real-enough data type, but it most certainly
isn't an array datatype. This can cause a backend crash but AFAICT is not
exploitable as a security hole. Per report from Michael Fuhr.
Note: as fixed in HEAD, this changes a constant in the pg_stats view,
resulting in a change in the expected regression outputs. The back-branch
patches have been hacked to avoid that, so that pre-existing installations
won't start failing their regression tests.
don't cheat on the raw-vs-cooked status of a constraint.
Apple developer tools. We now use dlopen directly if available, and fall
back to the older code if not. Chris Campbell
specify it explicitly in backend/Makefile. Arrange for this value to
be known by get_stack_depth_rlimit() too. Per suggestion from Magnus.
query_list into the Portal, ie, the one seen and possibly modified by
the planner. My fault :-( Per report from Sergey Koposov.
max_stack_depth is not set to an unsafe value.
This commit also provides configure-time checking for <sys/resource.h>,
and cleans up some perhaps-unportable code associated with use of that
include file and getrlimit().
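
The check itself, in a minimal self-contained form using the real getrlimit()
interface; the -1 convention for unknown/unlimited is illustrative.

    #include <sys/resource.h>

    /* Return the current stack rlimit in bytes, or -1 if it cannot be
     * determined, so callers can verify max_stack_depth stays below it. */
    static long
    get_stack_limit(void)
    {
        struct rlimit rlim;

        if (getrlimit(RLIMIT_STACK, &rlim) != 0)
            return -1;
        if (rlim.rlim_cur == RLIM_INFINITY)
            return -1;
        return (long) rlim.rlim_cur;
    }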
platform limit, rather than just blindly raising max_stack_depth.
Also, tweak the code to work properly if someone sets max_stack_depth
to more than 2Gb, which guc.c will allow on a 64-bit machine.
overlapping possible matches for the separator string, such as
string_to_array('123xx456xxx789', 'xx').
Also, revise the logic of replace(), split_part(), and string_to_array()
to avoid O(N^2) work from redundant searches and conversions to pg_wchar
format when there are N matches to the separator string.
Backpatched the full patch as far as 8.0. 7.4 also has the bug, but the
code has diverged a lot, so I just went for a quick-and-dirty fix of the
bug itself in that branch.
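
The intended non-overlapping matching, as a self-contained sketch rather than
the committed code: resume the scan past the whole separator after each hit,
so '123xx456xxx789' split on 'xx' yields 123, 456, x789.

    #include <stdio.h>
    #include <string.h>

    static void
    split_print(const char *str, const char *sep)
    {
        size_t      seplen = strlen(sep);
        const char *start = str;
        const char *match;

        while ((match = strstr(start, sep)) != NULL)
        {
            printf("%.*s\n", (int) (match - start), start);
            start = match + seplen;     /* skip the full separator */
        }
        printf("%s\n", start);
    }

    int
    main(void)
    {
        split_print("123xx456xxx789", "xx");    /* prints 123, 456, x789 */
        return 0;
    }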
been initialized yet. This can happen because there are code paths that call
SysCacheGetAttr() on a tuple originally fetched from a different syscache
(hopefully on the same catalog) than the one specified in the call. It
doesn't seem useful or robust to try to prevent that from happening, so just
improve the function to cope instead. Per bug #2678 from Jeff Trout. The
specific example shown by Jeff is new in 8.1, but to be on the safe side
I'm backpatching 8.0 as well. We could patch 7.x similarly but I think
that's probably overkill, given the lack of evidence of old bugs of this ilk.