failed (due to lock conflicts or out-of-space). We might have already
extended the index's filesystem EOF before failing, causing the EOF to be
beyond what the metapage says is the last used page. Hence the invariant
maintained by the code needs to be "EOF is at or beyond last used page",
not "EOF is exactly the last used page". Problem was created by my patch
of 2006-11-19 that attempted to repair bug #2737. Since that was
back-patched to 7.4, this needs to be as well. Per report and test case
from Vlastimil Krejcir.
 
return void ends with a SELECT, if that SELECT has a single result that is
also of type void. Without this, it's hard to write a void function that
calls another void function. Per gripe from Peter.
Back-patch as far as 8.0.
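For illustration, a minimal sketch of the newly accepted case (the function
and table names here are hypothetical):

    CREATE TABLE audit_log (note text);

    -- a void SQL function ending in a DML statement
    CREATE FUNCTION log_note() RETURNS void AS
        'INSERT INTO audit_log VALUES (''noted'')'
    LANGUAGE sql;

    -- with this change, a void function may simply end with a SELECT of
    -- another void function
    CREATE FUNCTION call_log_note() RETURNS void AS
        'SELECT log_note()'
    LANGUAGE sql;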
 
and GB18030. Patches from ITAGAKI Takahiro.
 
executed in read_only mode. This could lead to various relatively-subtle
failures, such as an allegedly stable function returning non-stable results.
Bug goes all the way back to the introduction of read-only mode in 8.0.
Per report from Gaetano Mendola.
 
did not expect that a DEAD tuple could follow a RECENTLY_DEAD tuple in an
update chain, but because the OldestXmin rule for determining deadness is a
simplification of reality, it is possible for this situation to occur
(implying that the RECENTLY_DEAD tuple is in fact dead to all observers,
but this patch does not attempt to exploit that). The code would follow a
chain forward all the way, but then stop before a DEAD tuple when backing
up, meaning that not all of the chain got moved. This could lead to copying
the chain multiple times (resulting in duplicate copies of the live tuple at
its end), or leaving dangling index entries behind (which, aside from
generating warnings from later vacuums, creates a risk of wrong query
results or bogus duplicate-key errors once the heap slot the index entry
points to is repopulated).
The fix is to recheck HeapTupleSatisfiesVacuum while following a chain
forward, and to stop if a DEAD tuple is reached. Each contiguous group
of RECENTLY_DEAD tuples will therefore be copied as a separate chain.
The patch also adds a couple of extra sanity checks to verify correct
behavior.
Per report and test case from Pavan Deolasee.
 
check_sql_fn_retval allows binary-compatibility cases, the expression
extracted from an inline-able SQL function might have a type that is only
binary-compatible with the declared function result type. To avoid possibly
changing the semantics of the expression, we should insert a RelabelType node
in such cases. This has only been shown to have bad consequences in recent
8.1 and up releases, but I suspect there may be failure cases in the older
branches too, so patch it all the way back. Per bug #3116 from Greg Mullane.
Along the way, fix an omission in eval_const_expressions_mutator: it failed
to copy the relabelformat field when processing a RelabelType. No known
observable failures from this, but it definitely isn't intended behavior.
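As a hedged illustration of the binary-compatibility case (names invented):
a SQL function declared to return text whose body yields a varchar is
inline-able, and the expression extracted from it now gets an explicit
RelabelType to text rather than being used with its varchar type unchanged.

    CREATE FUNCTION vc_as_text(v varchar) RETURNS text AS
        'SELECT $1'
    LANGUAGE sql;

    -- when inlined, the extracted expression ($1) is of type varchar, which is
    -- only binary-compatible with the declared result type text
    SELECT vc_as_text(some_column) FROM some_table;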
 
It has been wrong for this case since it was first written for 7.1 :-(
Per report from Pavel Hanák.
 
portals using PORTAL_UTIL_SELECT strategy. This is currently significant only
for FETCH queries, which are supposed to include a count in the tag. Seems
it's been broken since 7.4, but nobody noticed before Knut Lehre.
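For concreteness, a sketch of the kind of statement affected (cursor name
invented); the FETCH command tag is expected to carry the number of rows
actually returned, which is the count that was being lost for portals run
with the PORTAL_UTIL_SELECT strategy:

    BEGIN;
    DECLARE c CURSOR FOR SELECT * FROM generate_series(1, 10);
    FETCH 3 FROM c;   -- completion tag should read "FETCH 3"
    COMMIT;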
 
thought that it didn't have to reposition the underlying tuplestore if the
portal is atEnd. But this is not so, because tuplestores have separate read
and write cursors ... and the read cursor hasn't moved from the start.
This mistake explains bug #2970 from William Zhang.
Note: the coding here is pretty inefficient, but given that no one has noticed
this bug until now, I'd say hardly anyone uses the case where the cursor has
been advanced before being persisted. So maybe it's not worth worrying about.
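A sketch of the scenario (cursor name invented): the cursor is advanced to
the end inside the transaction, and the portal is only persisted at COMMIT,
which is where the saved tuplestore's read position has to be brought in
line with the cursor position:

    BEGIN;
    DECLARE c CURSOR WITH HOLD FOR SELECT * FROM generate_series(1, 5);
    FETCH ALL FROM c;  -- the portal is at the end when it gets persisted
    COMMIT;            -- the portal contents are saved to a tuplestore here
    FETCH ALL FROM c;  -- should return no further rows; before the fix the
                       -- saved store was still positioned at its start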
 
out that ExecEvalVar and friends don't necessarily have access to a tuple
descriptor with correct typmod: it definitely can contain -1, and possibly
might contain other values that are different from the Var's value.
Arguably this should be cleaned up someday, but it's not a simple change,
and in any case typmod discrepancies don't pose a security hazard.
Per reports from numerous people :-(
I'm not entirely sure whether the failure can occur in 8.0 --- the simple
test cases reported so far don't trigger it there. But back-patch the
change all the way anyway.
 
made query plan. Use of ALTER COLUMN TYPE creates a hazard for cached
query plans: they could contain Vars that claim a column has a different
type than it now has. Fix this by checking during plan startup that Vars
at relation scan level match the current relation tuple descriptor. Since
at that point we already have at least AccessShareLock, we can be sure the
column type will not change underneath us later in the query. However,
since a backend's locks do not conflict against itself, there is still a
hole for an attacker to exploit: he could try to execute ALTER COLUMN TYPE
while a query is in progress in the current backend. Seal that hole by
rejecting ALTER TABLE whenever the target relation is already open in
the current backend.
This is a significant security hole: not only can one trivially crash the
backend, but with appropriate misuse of pass-by-reference datatypes it is
possible to read out arbitrary locations in the server process's memory,
which could allow retrieving database content the user should not be able
to see. Our thanks to Jeff Trout for the initial report.
Security: CVE-2007-0556
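A hypothetical two-session illustration of the cached-plan hazard (object
names invented):

    -- session 1
    CREATE TABLE t (c integer);
    PREPARE p AS SELECT c FROM t;

    -- session 2
    ALTER TABLE t ALTER COLUMN c TYPE bigint;

    -- session 1: the cached plan's Var still claims c is an integer; the new
    -- check at relation-scan startup detects the tuple-descriptor mismatch
    -- and fails cleanly instead of misreading the column
    EXECUTE p;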
 
we should check that the function code returns the claimed result datatype
every time we parse the function for execution. Formerly, for simple
scalar result types we assumed the creation-time check was sufficient, but
this fails if the function selects from a table that's been redefined since
then, and even more obviously fails if check_function_bodies had been OFF.
This is a significant security hole: not only can one trivially crash the
backend, but with appropriate misuse of pass-by-reference datatypes it is
possible to read out arbitrary locations in the server process's memory,
which could allow retrieving database content the user should not be able
to see. Our thanks to Jeff Trout for the initial report.
Security: CVE-2007-0555
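A hypothetical illustration of the redefined-table case (names invented):

    CREATE TABLE prices (amount integer);
    CREATE FUNCTION first_amount() RETURNS integer AS
        'SELECT amount FROM prices LIMIT 1'
    LANGUAGE sql;

    DROP TABLE prices;
    CREATE TABLE prices (amount text);   -- same name, different column type

    -- the body now yields text, not the declared integer; with this change
    -- the mismatch is reported when the function is parsed for execution,
    -- rather than the text datum being treated as an integer
    SELECT first_amount();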
 
exactly at the point where we need to insert a new item, the calculation used
the wrong size for the "high key" of the new left page. This could lead to
choosing an unworkable split, resulting in "PANIC: failed to add item to the
left sibling" (or "right sibling") failure. Although this bug has been there
a long time, it's very difficult to trigger a failure before 8.2, since there
was generally a lot of free space on both sides of a chosen split. In 8.2,
where the user-selected fill factor determines how much free space the code
tries to leave, an unworkable split is much more likely. Report by Joe
Conway, diagnosis and fix by Heikki Linnakangas.
 
page about the maximum UTF8 sequence length we support (4 bytes since 8.1,
3 before that). pg_utf2wchar_with_len never got updated to support 4-byte
characters at all, and in any case had a buffer-overrun risk in that it
could produce multiple pg_wchars from what mblen claims to be just one UTF8
character. The only reason we don't have a major security hole is that most
callers allocate worst-case output buffers; the sole exception in released
versions appears to be pre-8.2 iwchareq() (ie, ILIKE), which can be crashed
due to zeroing out its return address --- but AFAICS that can't be exploited
for anything more than a crash, due to inability to control what gets written
there. Per report from James Russell and Michael Fuhr.
Pre-8.1 the risk is much less, but I still think pg_utf2wchar_with_len's
behavior given an incomplete final character risks buffer overrun, so
back-patch that logic change anyway.
This patch also makes sure that UTF8 sequences exceeding the supported
length (whichever it is) are consistently treated as error cases, rather
than being treated like a valid shorter sequence in some places.
 
involving unions of types having typmods. Variants of the failure are known
to occur in 8.1 and up; not sure if it's possible in 8.0 and 7.4, but since
the code exists that far back, I'll just patch 'em all. Per report from
Brian Hurt.
 
form '^(foo)$'. Before, these could never be optimized into indexscans.
The recent changes to make psql and pg_dump generate such patterns (for \d
commands and -t and related switches, respectively) therefore represented
a big performance hit for people with large pg_class catalogs, as seen in
recent gripe from Erik Jones. While at it, be more paranoid about
case-sensitivity checking in multibyte encodings, and fix some other
corner cases in which a regex might be interpreted too liberally.
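For example (whether an index is actually used is of course up to the
planner), the anchored, parenthesized patterns now emitted by psql's \d and
pg_dump -t have this shape:

    SELECT relname FROM pg_class WHERE relname ~ '^(pg_class)$';
    -- with this change the condition can be recognized as an exact match on
    -- relname and can use the index on pg_class (relname, relnamespace)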
 
ps_TupFromTlist in plan nodes that make use of it. This was being done
correctly in join nodes and Result nodes but not in any relation-scan nodes.
Bug would lead to bogus results if a set-returning function appeared in the
targetlist of a subquery that could be rescanned after partial execution,
for example a subquery within EXISTS(). Bug has been around forever :-(
... surprising it wasn't reported before.
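A hypothetical shape of an affected query (table and column names invented):
the correlated EXISTS subquery is a relation scan whose targetlist contains a
set-returning function, and it is rescanned for each outer row after being
only partially executed:

    SELECT *
    FROM orders o
    WHERE EXISTS (
        SELECT generate_series(1, 2)   -- set-returning function in the targetlist
        FROM order_lines l
        WHERE l.order_id = o.id        -- correlation forces a rescan per outer row
    );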
 
any no-longer-needed segments; just truncate them to zero bytes and leave
the files in place for possible future re-use. This avoids problems when
the segments are re-used due to relation growth shortly after truncation.
Before, the bgwriter, and possibly other backends, could still be holding
open file references to the old segment files, and would write dirty blocks
into those files where they'd disappear from the view of other processes.
Back-patch as far as 8.0. I believe the 7.x branches are not vulnerable,
because they had no bgwriter, and "blind" writes by other backends would
always be done via freshly-opened file references.
 
preference for filling pages out-of-order tends to confuse the sanity checks
in md.c, as per report from Balazs Nagy in bug #2737. The fix is to ensure
that the smgr-level code always has the same idea of the logical EOF as the
hash index code does, by using ReadBuffer(P_NEW) where we are adding a single
page to the end of the index, and using smgrextend() to reserve a large batch
of pages when creating a new splitpoint. The patch is a bit ugly because it
avoids making any changes in md.c, which seems the most prudent approach for a
backpatchable beta-period fix. After 8.3 development opens, I'll take a look
at a cleaner but more invasive patch, in particular getting rid of the now
unnecessary hack to allow reading beyond EOF in mdread().
Backpatch as far as 7.4. The bug likely exists in 7.3 as well, but because
of the magnitude of the 7.3-to-7.4 changes in hash, the later-version patch
doesn't even begin to apply. Given the other known bugs in the 7.3-era hash
code, it does not seem worth trying to develop a separate patch for 7.3.
 
sub-arrays. Per discussion, if all inputs are empty arrays then result
must be an empty array too, whereas a mix of empty and nonempty arrays
should (and already did) draw an error. In the back branches, the
construct was strict: any NULL input immediately yielded a NULL output;
so I left that behavior alone. HEAD was simply ignoring NULL sub-arrays,
which doesn't seem very sensible. For lack of a better idea it now
treats NULL sub-arrays the same as empty ones.
 
_bt_pagedel to recover from the failure: just search the whole parent level
if searching to the right fails. This does nothing for the underlying problem
that index keys became out-of-order in the grandparent level. However, we
believe that there is no other consequence worse than slightly inefficient
searching, so this narrow patch seems like the safest solution for the back
branches.
 
sin_port in the returned IP address struct when servname is NULL. This has
been observed to cause failure to bind the stats collection socket, and
could perhaps cause other issues too. Per reports from Brad Nicholson
and Chris Browne.
 
1) pgwin32_waitforsinglesocket(): WaitForMultipleObjectsEx is now called with
a finite timeout (100ms) in the case of FD_WRITE on a UDP socket. If the
timeout expires, pgwin32_waitforsinglesocket() tries to write an empty packet
and then goes back to WaitForMultipleObjectsEx.
2) pgwin32_send(): add a loop around WSASend and pgwin32_waitforsinglesocket().
The reason is that for an overlapped socket, an 'ok' result from
pgwin32_waitforsinglesocket() is no guarantee that the socket is still free;
it can become busy again, and the following WSASend call will then fail with
a WSAEWOULDBLOCK error.
See http://archives.postgresql.org/pgsql-hackers/2006-10/msg00561.php
 
rows --- if the surrounding query queued any trigger events between the rows,
the events would be fired at the wrong time, leading to bizarre behavior.
Per report from Merlin Moncure.
This is a simple patch that should solve the problem fully in the back
branches, but in HEAD we also need to consider the possibility of queries
with RETURNING clauses. Will look into a fix for that separately.
 
I introduced in 7.4.1 :-(. It's correct to allow unknown to be coerced to
ANY or ANYELEMENT, since it's a real-enough data type, but it most certainly
isn't an array datatype. This can cause a backend crash but AFAICT is not
exploitable as a security hole. Per report from Michael Fuhr.
Note: as fixed in HEAD, this changes a constant in the pg_stats view,
resulting in a change in the expected regression outputs. The back-branch
patches have been hacked to avoid that, so that pre-existing installations
won't start failing their regression tests.
 
attislocal = true, since they are not really inherited but merely copied
from the original table. I'm not sure if there are any cases where it makes
a real difference given the existing uses of the flag, but wrong is wrong.
This was fixed in passing in HEAD by the LIKE INCLUDING CONSTRAINTS patch,
but never back-patched.
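Assuming this refers to columns copied by CREATE TABLE (LIKE ...), a minimal
sketch (table names invented):

    CREATE TABLE original (a integer);
    CREATE TABLE copycat (LIKE original);
    -- copycat.a is an ordinary, locally defined column: it is merely copied
    -- from "original", not inherited, so its pg_attribute.attislocal should
    -- be true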
 
overlapping possible matches for the separator string, such as
string_to_array('123xx456xxx789', 'xx').
Also, revise the logic of replace(), split_part(), and string_to_array()
to avoid O(N^2) work from redundant searches and conversions to pg_wchar
format when there are N matches to the separator string.
Backpatched the full patch as far as 8.0. 7.4 also has the bug, but the
code has diverged a lot, so I just went for a quick-and-dirty fix of the
bug itself in that branch.
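The cited example, for concreteness:

    SELECT string_to_array('123xx456xxx789', 'xx');
    -- expected result: {123,456,x789}; the candidate matches inside 'xxx'
    -- overlap, which is the case the old search logic got wrong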
 
been initialized yet. This can happen because there are code paths that call
SysCacheGetAttr() on a tuple originally fetched from a different syscache
(hopefully on the same catalog) than the one specified in the call. It
doesn't seem useful or robust to try to prevent that from happening, so just
improve the function to cope instead. Per bug#2678 from Jeff Trout. The
specific example shown by Jeff is new in 8.1, but to be on the safe side
I'm backpatching 8.0 as well. We could patch 7.x similarly but I think
that's probably overkill, given the lack of evidence of old bugs of this ilk.
 
didn't create a dependency from the new conversion to its schema. Back-patch
to all supported releases.
 
various sockets - should fix the occasional stats test regression failures we see.
 
EINTR; the stats code was failing to do this and so were a couple of places
in the postmaster. The stats code assumed that recv() could not return EINTR
if a preceding select() showed the socket to be read-ready, but this is
demonstrably false with our Windows implementation of recv(), and it may
not be the case on all Unix variants either. I think this explains the
intermittent stats regression test failures we've been seeing, as well
as reports of stats collector instability under high load on Windows.
Backpatch as far as 8.0.
 
a table. Otherwise a USING clause that yields NULL can leave the table
violating its constraint (possibly there are other cases too). Per report
from Alexander Pravking.
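Assuming the constraint in question is a column's NOT NULL, a minimal sketch
of the hazard (table name invented):

    CREATE TABLE t (c integer NOT NULL);
    INSERT INTO t VALUES (42);
    ALTER TABLE t ALTER COLUMN c TYPE text USING NULL;
    -- the USING expression yields NULL for every row; re-verifying the
    -- constraint during the rewrite makes the ALTER fail instead of silently
    -- leaving NULLs in a NOT NULL column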
 
already-aborted transaction block. GetSnapshotData throws an Assert if
not in a valid transaction; hence we mustn't attempt to set a snapshot
for the function until after checking for aborted transaction. This is
harmless AFAICT if Asserts aren't enabled (GetSnapshotData will compute
a bogus snapshot, but it doesn't matter since HandleFunctionRequest will
throw an error shortly anyway). Hence, not a major bug.
Along the way, add some ability to log fastpath calls when statement
logging is turned on. This could probably stand to be improved further,
but not logging anything is clearly undesirable.
Backpatched as far as 8.0; bug doesn't exist before that.
 
HAVE_INT64_TIMESTAMP was mentioning PG_CONTROL_VERSION instead.
Victor Snezhko
 
in a rule WHERE expression while inserting it into the original query,
because the 8.0 ResolveNew API is wrongly designed. This is fixed in 8.1
but I'm disinclined to risk back-porting the changes. Instead, just stop
the coredump and instead issue the same 'cannot handle whole-row reference'
message that 7.4 and before generated in this situation.
 
parser will allow "\'" to be used to represent a literal quote mark. The
"\'" representation has been deprecated for some time in favor of the
SQL-standard representation "''" (two single quote marks), but it has been
used often enough that just disallowing it immediately won't do. Hence
backslash_quote allows the settings "on", "off", and "safe_encoding",
the last meaning to allow "\'" only if client_encoding is a valid server
encoding. That is now the default, and the reason is that in encodings
such as SJIS that allow 0x5c (ASCII backslash) to be the last byte of a
multibyte character, accepting "\'" allows SQL-injection attacks as per
CVE-2006-2314 (further details will be published after release). The
"on" setting is available for backward compatibility, but it must not be
used with clients that are exposed to untrusted input.
Thanks to Akio Ishida and Yasuo Ohgaki for identifying this security issue.
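For concreteness (settings and behavior as described above):

    SET backslash_quote = safe_encoding;   -- the new default
    SELECT 'O''Reilly';                    -- SQL-standard spelling, always accepted
    SELECT 'O\'Reilly';                    -- accepted only if client_encoding is
                                           -- a valid server encoding
    SET backslash_quote = off;             -- reject \' unconditionally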
 
characters in all cases. Formerly we mostly just threw warnings for invalid
input, and failed to detect it at all if no encoding conversion was required.
The tighter check is needed to defend against SQL-injection attacks as per
CVE-2006-2313 (further details will be published after release). Embedded
zero (null) bytes will be rejected as well. The checks are applied during
input to the backend (receipt from client or COPY IN), so it no longer seems
necessary to check in textin() and related routines; any string arriving at
those functions will already have been validated. Conversion failure
reporting (for characters with no equivalent in the destination encoding)
has been cleaned up and made consistent while at it.
Also, fix a few longstanding errors in little-used encoding conversion
routines: win1251_to_iso, win866_to_iso, euc_tw_to_big5, euc_tw_to_mic,
mic_to_euc_tw were all broken to varying extents.
Patches by Tatsuo Ishii and Tom Lane. Thanks to Akio Ishida and Yasuo Ohgaki
for identifying the security issues.
 
and 8.0. Later releases already patched.
 
a multiple (OR'ed) indexscan. It was checking for duplicate
tuple->t_data->t_ctid, when what it should be checking is tuple->t_self.
The trouble situation occurs when a live tuple has t_ctid not pointing to
itself, which can happen if an attempted UPDATE was rolled back. After a
VACUUM, an unrelated tuple could be installed where the failed update tuple
was, leading to one live tuple's t_ctid pointing to an unrelated tuple.
If one of these tuples is fetched by an earlier OR'ed indexscan and the other
by a later indexscan, nodeIndexscan.c would incorrectly ignore the second
tuple. The bug exists in all 7.4.* and 8.0.* versions, but not in earlier
or later branches because this code was only used in those releases. Per
trouble report from Rafael Martinez Guerrero.
 
custom-generated DH parameters from actually being used by the server.
Found by Michael Fuhr.
 
tablespace, not the base directory.
Kris Jurka
 
alternatives ("|" symbol). The original coding allowed the added ^ and $
constraints to be absorbed into the first and last alternatives, producing
a pattern that would match more than it should. Per report from Eric Noriega.
I also changed the pattern to add an ARE director ("***:"), ensuring that
SIMILAR TO patterns do not change behavior if regex_flavor is changed. This
is necessary to make the non-capturing parentheses work, and seems like a
good idea on general principles.
Back-patched as far as 7.4. 7.3 also has the bug, but a fix seems impractical
because that version's regex engine doesn't have non-capturing parens.
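The behavioral point, in SQL: a SIMILAR TO pattern has to match the entire
string, so the implicit anchors must bind the whole alternation rather than
only its first and last branches.

    SELECT 'ab'   SIMILAR TO 'ab|cd';   -- true
    SELECT 'cd'   SIMILAR TO 'ab|cd';   -- true
    SELECT 'abcd' SIMILAR TO 'ab|cd';   -- false: neither alternative matches
                                        -- the whole string, but the old
                                        -- translation let the anchors be
                                        -- absorbed into the branches, so
                                        -- strings like this wrongly matched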
 
have symlinks (ie, Windows). Although it'll never be called on to do anything
useful during normal operation on such a platform, it's still needed to
re-create dropped directories during WAL replay.
 
passed extend = true whenever we are reading a page we intend to reinitialize
completely, even if we think the page "should exist". This is because it
might indeed not exist, if the relation got truncated sometime after the
current xlog record was made and before the crash we're trying to recover
from. These two thinkos appear to explain both of the old bug reports
discussed here:
http://archives.postgresql.org/pgsql-hackers/2005-05/msg01369.php
 
byte-swapping on the port number, which causes the call to fail on Intel
Macs.
This patch uses htons() instead of htonl() and fixes this bug.
Ashley Clark
|