| Commit message | Author | Age |
be NULL ... seems an unsafe assumption.
if available (which it usually should be), during processing of Bind
and Execute protocol messages. This improves the usefulness of
log_min_error_statement logging for the extended query protocol.
from Magnus that MSVC complains about this.
sin_port in the returned IP address struct when servname is NULL. This has
been observed to cause failure to bind the stats collection socket, and
could perhaps cause other issues too. Per reports from Brad Nicholson
and Chris Browne.
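
A minimal standalone sketch of the defensive pattern involved (the socket
calls are standard; the surrounding program is hypothetical): after calling
getaddrinfo() with a NULL servname, don't trust the returned port field,
set it explicitly.

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netdb.h>

    int
    main(void)
    {
        struct addrinfo hints, *res;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_INET;
        hints.ai_socktype = SOCK_DGRAM;

        if (getaddrinfo("localhost", NULL, &hints, &res) != 0)
            return 1;

        /* Defensive fix: set the port ourselves rather than assuming a
         * NULL servname left sin_port zeroed -- a buggy getaddrinfo()
         * may leave garbage there. */
        ((struct sockaddr_in *) res->ai_addr)->sin_port = htons(0);

        printf("port forced to %d\n",
               ntohs(((struct sockaddr_in *) res->ai_addr)->sin_port));
        freeaddrinfo(res);
        return 0;
    }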
that conflict with the OID that we want to use for the new database.
This avoids the risk of trying to remove files that maybe we shouldn't
remove. Per gripe from Jon Lapham and subsequent discussion of 27-Sep.
timezone actually has a daylight-savings rule. This avoids breaking
cases that used to work because they went through the DecodePosixTimezone
code path. Per contrib regression failures (mea culpa for not running
those yesterday...). Also document the already-applied change to allow
GMT offsets up to 14 hours.
input routines. Remove the former "DecodePosixTimezone" function in favor of
letting the zic code handle POSIX-style zone specs (see tzparse()). In
particular this means that "PST+3" now means the same as "-03", whereas it
used to mean "-11" --- the zone abbreviation is effectively just a noise word
in this syntax. Make sure that all named and POSIX-style zone names will be
parsed as a single token. Fix long-standing bogosities in printing and input
of fractional-hour timezone offsets (since the tzparse() code will accept
these, we'd better make 'em work). Also correct an error in the original
coding of the zic-zone-name patch: in "timestamp without time zone" input,
zone names are supposed to be allowed but ignored, but the coding was such
that the zone changed the interpretation anyway.
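
The user-visible rule follows standard POSIX TZ syntax: a positive offset
means west of Greenwich, and the leading abbreviation is arbitrary. A
standalone C illustration of that semantics via the C library's own TZ
handling (not PostgreSQL code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* In POSIX syntax "PST+3" is an arbitrary abbreviation plus an
     * offset of 3 hours west of Greenwich, i.e. the same mapping as
     * UTC-3 -- the "PST" part is effectively a noise word. */
    int
    main(void)
    {
        time_t now = time(NULL);
        char buf[64];

        setenv("TZ", "PST+3", 1);
        tzset();
        strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M %Z%z", localtime(&now));
        printf("PST+3 -> %s\n", buf);   /* note the -0300 offset */
        return 0;
    }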
modules; the first try was not usable in EXEC_BACKEND builds (e.g.,
Windows). Instead, just provide some entry points to increase the
allocation requests during postmaster start, and provide a dedicated
LWLock that can be used to synchronize allocation operations performed
by backends. Per discussion with Marc Munro.
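
A rough sketch of how an add-on module would use such entry points, assuming
names along the lines of RequestAddinShmemSpace()/RequestAddinLWLocks() and
the dedicated AddinShmemInitLock; MyModuleState and the sizing are
hypothetical, and this is a module source sketch, not standalone code:

    #include "postgres.h"
    #include "fmgr.h"
    #include "storage/lwlock.h"
    #include "storage/shmem.h"

    PG_MODULE_MAGIC;

    typedef struct MyModuleState { int counter; } MyModuleState;

    void
    _PG_init(void)
    {
        /* Runs while the postmaster is sizing its shmem request */
        RequestAddinShmemSpace(sizeof(MyModuleState));
        RequestAddinLWLocks(1);
    }

    static MyModuleState *
    attach_state(void)
    {
        bool found;
        MyModuleState *state;

        /* The dedicated lock serializes first-time initialization done
         * lazily by whichever backend gets there first. */
        LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE);
        state = ShmemInitStruct("my_module", sizeof(MyModuleState), &found);
        if (!found)
            state->counter = 0;
        LWLockRelease(AddinShmemInitLock);
        return state;
    }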
1) pgwin32_waitforsinglesocket(): WaitForMultipleObjectsEx is now called with
a finite timeout (100ms) in the case of FD_WRITE on a UDP socket. If the
timeout occurs, pgwin32_waitforsinglesocket() tries to write an empty packet
and then goes back to WaitForMultipleObjectsEx again.
2) pgwin32_send(): add a loop around WSASend and pgwin32_waitforsinglesocket().
The reason: for an overlapped socket, an 'ok' result from
pgwin32_waitforsinglesocket() is no guarantee that the socket is still free;
it can become busy again, and the following WSASend call will then fail with
a WSAEWOULDBLOCK error.
See http://archives.postgresql.org/pgsql-hackers/2006-10/msg00561.php
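
The loop in point 2 has roughly this shape (a simplified Windows-only sketch,
not the actual source; pgwin32_waitforsinglesocket's real signature and error
handling differ):

    #include <winsock2.h>

    extern int pgwin32_waitforsinglesocket(SOCKET s, int what);

    static int
    send_with_retry(SOCKET s, WSABUF *wbuf)
    {
        DWORD sent;

        for (;;)
        {
            if (WSASend(s, wbuf, 1, &sent, 0, NULL, NULL) == 0)
                return (int) sent;          /* send succeeded */

            if (WSAGetLastError() != WSAEWOULDBLOCK)
                return -1;                  /* hard error */

            /* Socket busy: wait for FD_WRITE readiness, then loop and
             * retry WSASend -- on an overlapped socket the readiness
             * report can already be stale by the time we send. */
            if (pgwin32_waitforsinglesocket(s, FD_WRITE) < 0)
                return -1;
        }
    }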
rows --- if the surrounding query queued any trigger events between the rows,
the events would be fired at the wrong time, leading to bizarre behavior.
Per report from Merlin Moncure.
This is a simple patch that should solve the problem fully in the back
branches, but in HEAD we also need to consider the possibility of queries
with RETURNING clauses. Will look into a fix for that separately.
I introduced in 7.4.1 :-(. It's correct to allow unknown to be coerced to
ANY or ANYELEMENT, since it's a real-enough data type, but it most certainly
isn't an array datatype. This can cause a backend crash but AFAICT is not
exploitable as a security hole. Per report from Michael Fuhr.
Note: as fixed in HEAD, this changes a constant in the pg_stats view,
resulting in a change in the expected regression outputs. The back-branch
patches have been hacked to avoid that, so that pre-existing installations
won't start failing their regression tests.
don't cheat on the raw-vs-cooked status of a constraint.
Apple developer tools. We now use dlopen directly if available, and fall
back to the older code if not. Chris Campbell
specify it explicitly in backend/Makefile. Arrange for this value to
be known by get_stack_depth_rlimit() too. Per suggestion from Magnus.
query_list into the Portal, ie, the one seen and possibly modified by
the planner. My fault :-( Per report from Sergey Koposov.
max_stack_depth is not set to an unsafe value.
This commit also provides configure-time checking for <sys/resource.h>,
and cleans up some perhaps-unportable code associated with use of that
include file and getrlimit().
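
The check itself boils down to a getrlimit() call; a standalone sketch of
the idea (getrlimit/RLIMIT_STACK are standard, the helper name and the 2MB
figure are made up):

    #include <stdio.h>
    #include <sys/resource.h>

    /* Return the usable stack limit in bytes, or -1 if unknown --
     * the information a max_stack_depth sanity check needs. */
    static long
    stack_rlimit_bytes(void)
    {
        struct rlimit rlim;

        if (getrlimit(RLIMIT_STACK, &rlim) != 0)
            return -1;                  /* no limit info available */
        if (rlim.rlim_cur == RLIM_INFINITY)
            return -1;                  /* treat "unlimited" as unknown */
        return (long) rlim.rlim_cur;
    }

    int
    main(void)
    {
        long limit = stack_rlimit_bytes();
        long wanted = 2 * 1024 * 1024;  /* e.g. max_stack_depth = 2MB */

        if (limit >= 0 && wanted > limit)
            fprintf(stderr, "requested depth exceeds platform limit %ld\n",
                    limit);
        return 0;
    }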
platform limit, rather than just blindly raising max_stack_depth.
Also, tweak the code to work properly if someone sets max_stack_depth
to more than 2GB, which guc.c will allow on a 64-bit machine.
overlapping possible matches for the separator string, such as
string_to_array('123xx456xxx789', 'xx').
Also, revise the logic of replace(), split_part(), and string_to_array()
to avoid O(N^2) work from redundant searches and conversions to pg_wchar
format when there are N matches to the separator string.
Backpatched the full patch as far as 8.0. 7.4 also has the bug, but the
code has diverged a lot, so I just went for a quick-and-dirty fix of the
bug itself in that branch.
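
The essence of the overlap fix is to resume each search just past the end of
the previous match, rather than one character later. A standalone sketch over
plain C strings (single-byte encoding; the helper name is hypothetical):

    #include <stdio.h>
    #include <string.h>

    /* Count non-overlapping occurrences of sep in str, advancing past
     * each whole match -- so "xxx" contains one "xx", not two. */
    static int
    count_separators(const char *str, const char *sep)
    {
        int n = 0;
        size_t seplen = strlen(sep);
        const char *p = str;

        while ((p = strstr(p, sep)) != NULL)
        {
            n++;
            p += seplen;        /* skip the whole match, not one byte */
        }
        return n;
    }

    int
    main(void)
    {
        /* Two separators, i.e. fields 123 / 456 / x789 */
        printf("%d\n", count_separators("123xx456xxx789", "xx"));
        return 0;
    }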
been initialized yet. This can happen because there are code paths that call
SysCacheGetAttr() on a tuple originally fetched from a different syscache
(hopefully on the same catalog) than the one specified in the call. It
doesn't seem useful or robust to try to prevent that from happening, so just
improve the function to cope instead. Per bug#2678 from Jeff Trout. The
specific example shown by Jeff is new in 8.1, but to be on the safe side
I'm backpatching 8.0 as well. We could patch 7.x similarly but I think
that's probably overkill, given the lack of evidence of old bugs of this ilk.
remaining functions, simplify pglz_compress's API to not require a useless
data copy when compression fails. Also add a check in pglz_decompress that
the expected amount of data was decompressed.
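
The calling convention this enables is "try to compress; on failure just use
the original bytes", with no extra copy on the failure path. A standalone
sketch of the pattern (try_compress() is a stand-in, not pglz_compress's
real signature):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Stand-in for pglz_compress: returns false when compression
     * would not save space, writing nothing. */
    static bool
    try_compress(const char *src, size_t slen, char *dst, size_t *dlen)
    {
        (void) src; (void) slen; (void) dst; (void) dlen;
        return false;           /* pretend the data is incompressible */
    }

    /* Store a datum compressed if possible, raw otherwise.  The point
     * of the API change: the failure path hands back the caller's data
     * as-is, with no useless copy of the uncompressed bytes. */
    static const char *
    store_datum(const char *src, size_t slen, char *workbuf, size_t *outlen)
    {
        if (try_compress(src, slen, workbuf, outlen))
            return workbuf;     /* compressed copy */
        *outlen = slen;
        return src;             /* raw data, zero-copy */
    }

    int
    main(void)
    {
        char buf[64];
        size_t len;
        const char *out = store_datum("hello", 5, buf, &len);

        printf("stored %zu bytes (%s)\n", len,
               out == buf ? "compressed" : "raw");
        return 0;
    }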
static variables. This avoids any risk of potential non-reentrancy,
and in particular offers a much cleaner workaround for the Intel compiler
bug that was affecting ginutil.c.
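
The reentrancy idiom being applied is the usual one: replace function-local
static state with caller-supplied state. A generic standalone illustration
(names hypothetical):

    #include <stdio.h>

    /* Non-reentrant style hides state in a static:
     *     static int counter;
     *     int next_id(void) { return ++counter; }
     * Reentrant style: the caller owns the state, so concurrent or
     * recursive users cannot trample each other. */
    typedef struct IdGen { int counter; } IdGen;

    static int
    next_id(IdGen *gen)
    {
        return ++gen->counter;
    }

    int
    main(void)
    {
        IdGen a = {0}, b = {0};

        printf("%d ", next_id(&a));
        printf("%d ", next_id(&a));
        printf("%d\n", next_id(&b));    /* prints: 1 2 1 */
        return 0;
    }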
proposed patches from John Jorgensen and Steve Singer.
repeatedly. Now that we don't have to worry about memory leaks from
glibc's qsort, we can safely put CHECK_FOR_INTERRUPTS into the tuplesort
comparators, as was requested a couple months ago. Also, get rid of
non-reentrancy and an extra level of function call in tuplesort.c by
providing a variant qsort_arg() API that passes an extra void * argument
through to the comparison routine. (We might want to use that in other
places too, I didn't look yet.)
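
The API shape in question: a qsort variant whose comparator receives an extra
void * passthrough, so per-sort state needs no static variable. A
self-contained sketch (a tiny insertion sort stands in for the real
qsort_arg implementation):

    #include <stdio.h>
    #include <string.h>

    typedef int (*qsort_arg_cmp)(const void *a, const void *b, void *arg);

    /* Insertion-sort stand-in with the qsort_arg() calling convention:
     * the extra arg reaches the comparator directly.  Assumes
     * element size <= 64 for the scratch buffer. */
    static void
    qsort_arg_sketch(void *base, size_t n, size_t size,
                     qsort_arg_cmp cmp, void *arg)
    {
        char *a = base, tmp[64];

        for (size_t i = 1; i < n; i++)
        {
            memcpy(tmp, a + i * size, size);
            size_t j = i;
            while (j > 0 && cmp(a + (j - 1) * size, tmp, arg) > 0)
            {
                memcpy(a + j * size, a + (j - 1) * size, size);
                j--;
            }
            memcpy(a + j * size, tmp, size);
        }
    }

    /* Comparator state (sort direction) arrives via arg */
    static int
    cmp_int(const void *a, const void *b, void *arg)
    {
        int dir = *(int *) arg;
        return dir * (*(const int *) a - *(const int *) b);
    }

    int
    main(void)
    {
        int v[] = {3, 1, 2};
        int descending = -1;

        qsort_arg_sketch(v, 3, sizeof(int), cmp_int, &descending);
        printf("%d %d %d\n", v[0], v[1], v[2]);     /* 3 2 1 */
        return 0;
    }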
David Fetter
Euler Taveira de Oliveira
postgresql.conf.
- shared_buffers = 32000kB => 32MB
- temp_buffers = 8000kB => 8MB
- wal_buffers = 8 => 64kB
The initdb code was modified slightly to write MB-unit values;
values greater than 8000kB are rounded to whole MB.
GUC_UNIT_XBLOCKS is added for wal_buffers. It is like GUC_UNIT_BLOCKS,
but uses XLOG_BLCKSZ instead of BLCKSZ.
Also, I cleaned up the test of GUC_UNIT_* flags in preparation for
adding more unit flags in fewer bits.
ITAGAKI Takahiro
stats_start_collector and stats_row_level to also be on
David Wheeler
severity. This is to ensure the user can cancel a query that's spitting
out lots of notice/warning messages, even if they're coming from a loop
that doesn't otherwise contain a CHECK_FOR_INTERRUPTS. Per gripe from
Stephen Frost.
CaseTestExpr, but forgot that the optimizer is sometimes able to replace
CaseTestExpr by Const.
present; intervening positions are filled with nulls. This behavior
is required by SQL99 but was not implementable before 8.2 due to lack
of support for nulls in arrays. I have only made it work for the
one-dimensional case, which is all that SQL99 requires. It seems quite
complex to get it right in higher dimensions, and since we never allowed
extension at all in higher dimensions, I think that must count as a
future feature addition, not a bug fix.
the SQL spec, viz IS NULL is true if all the row's fields are null, IS NOT
NULL is true if all the row's fields are not null. The former coding got
this right for a limited number of cases with IS NULL (ie, those where it
could disassemble a ROW constructor at parse time), but was entirely wrong
for IS NOT NULL. Per report from Teodor.
I desisted from changing the behavior for arrays, since on closer inspection
it's not clear that there's any support for that in the SQL spec. This
probably needs more consideration.
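
Per the spec semantics described, a row value IS NULL only when every field
is null, and IS NOT NULL only when every field is non-null, so the two tests
are not each other's negation for mixed rows. A standalone sketch of the
evaluation (over an array of per-field null flags, not executor code):

    #include <stdbool.h>
    #include <stdio.h>

    static bool
    row_is_null(const bool *isnull, int nfields)
    {
        for (int i = 0; i < nfields; i++)
            if (!isnull[i])
                return false;
        return true;            /* all fields null */
    }

    static bool
    row_is_not_null(const bool *isnull, int nfields)
    {
        for (int i = 0; i < nfields; i++)
            if (isnull[i])
                return false;
        return true;            /* all fields non-null */
    }

    int
    main(void)
    {
        bool mixed[] = {true, false, false};    /* like ROW(NULL, 1, 2) */

        /* Both tests are false for a mixed row: prints 0 0 */
        printf("%d %d\n", row_is_null(mixed, 3), row_is_not_null(mixed, 3));
        return 0;
    }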
to performance. (A wholesale effort to get rid of strncpy should be
undertaken sometime, but not during beta.) This commit also fixes dynahash.c
to correctly truncate overlength string keys for hashtables, so that its
callers don't have to anymore.
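
The performance point is that strncpy() zero-fills the entire destination
buffer and does not guarantee NUL termination, while strlcpy() stops after
the copy and always terminates. A standalone contrast, with a minimal local
strlcpy since the function isn't universally available:

    #include <stdio.h>
    #include <string.h>

    /* Minimal strlcpy: copies at most dstsize-1 bytes, always
     * NUL-terminates, returns strlen(src) -- and, unlike strncpy,
     * never zero-fills the rest of a large destination buffer. */
    static size_t
    my_strlcpy(char *dst, const char *src, size_t dstsize)
    {
        size_t srclen = strlen(src);

        if (dstsize > 0)
        {
            size_t copylen = srclen < dstsize - 1 ? srclen : dstsize - 1;
            memcpy(dst, src, copylen);
            dst[copylen] = '\0';
        }
        return srclen;
    }

    int
    main(void)
    {
        char a[8], b[8];

        strncpy(a, "overlong-key", sizeof(a));  /* NOT NUL-terminated! */
        my_strlcpy(b, "overlong-key", sizeof(b));
        printf("strlcpy result: %s\n", b);      /* "overlon" */
        (void) a;
        return 0;
    }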
discussion.
Patch from Simon Riggs.
wrong answer, as has been seen to occur with a buggy Linux kernel. Not
really our bug, but it's a simple test in a seldom-used control path,
so might as well have a defense.
for DROP AGGREGATE IF EXISTS. Per report from Teodor.
backward compatibility for anyone using the old userlock code that's now
on pgfoundry --- locks from that code still show as 'userlock'.
return true for exactly the characters treated as whitespace by their flex
scanners. Per report from Victor Snezhko and subsequent investigation.
Also fix a passel of unsafe usages of <ctype.h> functions, that is, ye olde
char-vs-unsigned-char issue. I won't miss <ctype.h> when we are finally
able to stop using it.
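
The char-vs-unsigned-char hazard: passing a plain char that happens to be
negative (any high-bit-set byte where char is signed) to a <ctype.h> function
is undefined behavior. The safe idiom is to widen through unsigned char:

    #include <ctype.h>
    #include <stdio.h>

    int
    main(void)
    {
        char c = (char) 0xA0;   /* e.g. a Latin-1 NBSP byte */

        /* Wrong: if char is signed this passes a negative value,
         * which is undefined behavior for ctype functions:
         *     isspace(c);
         * Right: cast to unsigned char first. */
        if (isspace((unsigned char) c))
            printf("whitespace\n");
        else
            printf("not whitespace\n");
        return 0;
    }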
match what SHOW displays as default value, to make the user experience
uniform.
even when a single relation requires more than max_fsm_pages pages. Also,
make VACUUM emit a warning in this case, since it likely means that VACUUM
FULL or other drastic corrective measure is needed. Per reports from Jeff
Frost and others of unexpected changes in the claimed max_fsm_pages need.
is a large enough histogram, it will use the number of matches in the
histogram to derive a selectivity estimate, rather than the admittedly
pretty bogus heuristics involving examining the pattern contents. I set
'large enough' at 100, but perhaps we should change that later. Also
apply the same technique in contrib/ltree's <@ and @> estimator. Per
discussion with Stefan Kaltenbrunner and Matteo Beccati.
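
The estimation idea reduces to: if the histogram has at least ~100 entries,
count how many of them match the pattern and use that fraction as the
selectivity, instead of guessing from the pattern's contents. A standalone
sketch with a stub matcher (all names and the fallback constant hypothetical):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define MIN_HIST_SIZE 100   /* the "large enough" threshold */

    /* Stub: real code would apply the operator (~, LIKE, <@, ...) */
    static bool
    pattern_matches(const char *value, const char *pattern)
    {
        return strstr(value, pattern) != NULL;
    }

    /* Fraction of histogram entries matching the pattern; fall back
     * to a fixed guess when the histogram is too small to trust. */
    static double
    histogram_selectivity(const char **hist, int nhist, const char *pattern)
    {
        if (nhist < MIN_HIST_SIZE)
            return 0.005;       /* stand-in for the pattern heuristics */

        int matches = 0;
        for (int i = 0; i < nhist; i++)
            if (pattern_matches(hist[i], pattern))
                matches++;
        return (double) matches / nhist;
    }

    int
    main(void)
    {
        const char *hist[2] = {"abc", "abd"};

        printf("%.3f\n", histogram_selectivity(hist, 2, "ab"));
        return 0;
    }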
tables in the query compete for cache space, not just the one we are
currently costing an indexscan for. This seems more realistic, and it
definitely will help in examples recently exhibited by Stefan
Kaltenbrunner. To get the total size of all the tables involved, we must
tweak the handling of 'append relations' a bit --- formerly we looked up
information about the child tables on-the-fly during set_append_rel_pathlist,
but it needs to be done before we start doing any cost estimation, so
push it into the add_base_rels_to_query scan.
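
The costing change amounts to letting each table claim only a share of the
cache proportional to its size, rather than the whole of effective_cache_size.
A toy sketch of that apportionment (names hypothetical; not the planner's
actual interpolation formula):

    #include <stdio.h>

    /* Give each relation a cache slice proportional to its share of
     * the total pages competing for that cache. */
    static double
    cache_share_pages(double rel_pages, double total_pages,
                      double effective_cache_pages)
    {
        if (total_pages <= 0)
            return effective_cache_pages;
        return effective_cache_pages * (rel_pages / total_pages);
    }

    int
    main(void)
    {
        /* Two tables of 1000 and 9000 pages sharing a 4000-page cache */
        printf("small table's share: %.0f pages\n",
               cache_share_pages(1000, 10000, 4000));   /* 400 */
        printf("big table's share:   %.0f pages\n",
               cache_share_pages(9000, 10000, 4000));   /* 3600 */
        return 0;
    }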
contrib functionality. Along the way, remove the USER_LOCKS configuration
symbol, since it no longer makes any sense to try to compile that out.
No user documentation yet ... mmoncure has promised to write some.
Thanks to Abhijit Menon-Sen for creating a first draft to work from.