| Commit message | Author | Age |
|
The code in shift_jis_20042euc_jis_2004() would fetch two bytes even when
only one remained in the string. Since conversion functions aren't
supposed to assume null-terminated input, this poses a small risk of
fetching past the end of memory and incurring SIGSEGV. No such crash has
been identified in the field, but we've certainly seen the equivalent
happen in other code paths, so patch this one all the way back.
Report and patch by Noah Misch.
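As an illustration of the defensive pattern the fix calls for, here is a
minimal sketch; the function and constants are invented for the example,
not taken from the conversion module:

    #include <stddef.h>

    /*
     * Sketch: fetch the next character, which may be one or two bytes.
     * Without the "remaining < 2" check, a lead byte sitting at the very
     * end of a non-null-terminated buffer causes a read past the end.
     */
    static int
    fetch_char(const unsigned char *src, size_t remaining)
    {
        unsigned char b1;

        if (remaining < 1)
            return -1;
        b1 = src[0];
        if (b1 >= 0x81)             /* lead byte of a two-byte sequence */
        {
            if (remaining < 2)      /* the previously missing length check */
                return -1;          /* truncated input: report an error */
            return (b1 << 8) | src[1];
        }
        return b1;
    }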
|
Since the last couple of columns of pg_type are often NULL,
sizeof(FormData_pg_type) can be an overestimate of the actual size of the
tuple data part. Therefore memcpy'ing that much out of the catalog cache,
as analyze.c was doing, poses a small risk of copying past the end of
memory and incurring SIGSEGV. No such crash has been identified in the
field, but we've certainly seen the equivalent happen in other code paths,
so patch this one all the way back.
Per valgrind testing by Noah Misch, though this is not his proposed patch.
I chose to use SearchSysCacheCopy1 rather than inventing special-purpose
infrastructure for copying only the minimal part of a pg_type tuple.
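For reference, the SearchSysCacheCopy1 pattern looks roughly like this; a
sketch against backend internals with error handling abbreviated, not the
literal analyze.c change:

    HeapTuple   typtuple = SearchSysCacheCopy1(TYPEOID,
                                               ObjectIdGetDatum(typid));
    Form_pg_type typeForm;

    if (!HeapTupleIsValid(typtuple))
        elog(ERROR, "cache lookup failed for type %u", typid);
    typeForm = (Form_pg_type) GETSTRUCT(typtuple);
    /* ... use typeForm; the copy is exactly as long as the cached tuple,
     * so no fixed-size memcpy can overrun it ... */
    heap_freetuple(typtuple);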
|
DST law changes in Canada, Egypt, Russia, Samoa, South Sudan.
|
The $(PERL) macro will be set by configure if it finds perl at all,
but $(perl_privlibexp) isn't configured unless you said --with-perl.
This results in confusing error messages if someone cd's into
src/pl/plperl and tries to build there despite the configure omission,
as reported by Tomas Vondra in bug #6198. Add simple checks to
provide a more useful report, while not disabling other use of the
makefile such as "make clean".
Back-patch to 9.0, which is as far as the patch applies easily.
|
Remove unnecessary and circular #include of syncrep.h from proc.h.
Add htup.h to tablecmds.h so it will compile without prerequisites.
|
">" should be ">>". This typo results in failure to use all of the bits
of the provided seed.
This might rise to the level of a security bug if we were relying on
srand48 for any security-critical purposes, but we are not --- in fact,
it's not used at all unless the platform lacks srandom(), which is
improbable. Even on such a platform the exposure seems minimal.
Reported privately by Andres Freund.
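The bug in isolation, modeled on the usual srand48-replacement coding
(variable names are illustrative):

    unsigned short _rand48_seed[3];

    void
    pg_srand48(long seed)
    {
        _rand48_seed[0] = 0x330e;
        _rand48_seed[1] = (unsigned short) seed;
        /* buggy: "(seed > 16)" is a comparison, yielding only 0 or 1 */
        _rand48_seed[2] = (unsigned short) (seed >> 16);    /* fixed */
    }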
|
fit PostgreSQL style.
|
changes for the win32 setlocale() wrapper I put into ecpg, to make it compile
on MinGW.
|
on Windows. ecpglib doesn't link with libpgport, but picks and compiles
the .c files it needs individually. To cope with that, move the setlocale()
wrapper from chklocale.c to a separate setlocale.c file, and include that
in ecpglib.
|
dots. I previously worked around this in initdb, mapping the known
problematic locale names to aliases that work, but Hiroshi Inoue pointed
out that that's not enough because even if you use one of the aliases, like
"Chinese_HKG", setlocale(LC_CTYPE, NULL) returns back the long form, ie.
"Chinese_Hong Kong S.A.R.". When we try to restore an old locale value by
passing that value back to setlocale(), it fails. Note that you are also
affected by this bug if you use one of those short-form names manually, so
just reverting the hack in initdb won't fix it.
To work around that, move the locale name mapping from initdb to a wrapper
around setlocale(), so that the mapping is invoked on every setlocale() call.
Also, add a few checks for failed setlocale() calls in the backend. These
calls shouldn't fail, and if they do there isn't much we can do about it,
but at least you'll get a warning.
Backpatch to 9.1, where the initdb hack was introduced. The Windows bug
affects older versions too if you set locale manually to one of the aliases,
but given the lack of complaints from the field, I'm hesitant to backpatch.
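A sketch of such a wrapper; the table contents and retry strategy here are
illustrative assumptions, not the exact shipped code:

    #include <locale.h>
    #include <string.h>

    struct locale_map
    {
        const char *long_form;      /* what setlocale() may hand back */
        const char *short_form;     /* an alias setlocale() accepts */
    };

    static const struct locale_map map[] = {
        {"Chinese_Hong Kong S.A.R.", "Chinese_HKG"},
        /* ... */
    };

    char *
    pgwin32_setlocale(int category, const char *locale)
    {
        char       *result = setlocale(category, locale);

        if (result == NULL && locale != NULL)
        {
            size_t      i;

            /* Retry with a known-good alias for a problematic name. */
            for (i = 0; i < sizeof(map) / sizeof(map[0]); i++)
            {
                if (strcmp(locale, map[i].long_form) == 0)
                {
                    result = setlocale(category, map[i].short_form);
                    break;
                }
            }
        }
        return result;
    }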
|
ifdef block. It has nothing to do with whether the replacement snprintf
function is used. It caused no live bug, because the replacement snprintf
function is always used on Win32, but it was nevertheless misplaced.
|
Examination of examples provided by Mark Kirkwood and others has convinced
me that actually commit 7f3eba30c9d622d1981b1368f2d79ba0999cdff2 was quite
a few bricks shy of a load. The useful part of that patch was clamping
ndistinct for the inner side of a semi or anti join, and the reason why
that's needed is that it's the only way that restriction clauses
eliminating rows from the inner relation can affect the estimated size of
the join result. I had not clearly understood why the clamping was
appropriate, and so mis-extrapolated to conclude that we should clamp
ndistinct for the outer side too, as well as for both sides of regular
joins. These latter actions were all wrong, and are reverted with this
patch. In addition, the clamping logic is now made to affect the behavior
of both paths in eqjoinsel_semi, with or without MCV lists to compare.
When we have MCVs, we suppose that the most common values are the ones
that are most likely to survive the decimation resulting from a lower
restriction clause, so we think of the clamping as eliminating non-MCV
values, or potentially even the least-common MCVs for the inner relation.
Back-patch to 8.4, same as previous fixes in this area.
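The clamping rule itself is tiny; detached from the planner's data
structures it amounts to:

    static double
    clamp_semijoin_inner_ndistinct(double ndistinct, double inner_rows)
    {
        /*
         * Restriction clauses can only remove rows, so the number of
         * distinct join-key values surviving on the inner side cannot
         * exceed the estimated surviving row count.
         */
        return (ndistinct > inner_rows) ? inner_rows : ndistinct;
    }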
|
This fixes a pg_upgrade bug that could lead to query errors when
clog files are improperly removed.
Backpatch to 8.4, 9.0, 9.1.
|
This patch fixes an oversight in my commit
7f3eba30c9d622d1981b1368f2d79ba0999cdff2 of 2008-10-23. That patch
accounted for baserel restriction clauses that reduced the number of rows
coming out of a table (and hence the number of possibly-distinct values of
a join variable), but not for join restriction clauses that might have been
applied at a lower level of join. To account for the latter, look up the
sizes of the min_lefthand and min_righthand inputs of the current join,
and clamp with those in the same way as for the base relations.
Noted while investigating a complaint from Ben Chobot, although this in
itself doesn't seem to explain his report.
Back-patch to 8.4; previous versions used different estimation methods
for which this heuristic isn't relevant.
|
It is possible for VACUUM to scan no pages at all, if the visibility map
shows that all pages are all-visible. In this situation VACUUM has no new
information to report about the relation's tuple density, so it wasn't
changing pg_class.reltuples ... but it updated pg_class.relpages anyway.
That's wrong in general, since there is no evidence to justify changing the
density ratio reltuples/relpages, but it's particularly bad if the previous
state was relpages=reltuples=0, which means "unknown tuple density".
We just replaced "unknown" with "zero". ANALYZE would eventually recover
from this, but it could take a lot of repetitions of ANALYZE to do so if
the relation size is much larger than the maximum number of pages ANALYZE
will scan, because of the moving-average behavior introduced by commit
b4b6923e03f4d29636a94f6f4cc2f5cf6298b8c8.
The only known situation where we could have relpages=reltuples=0 and yet
the visibility map asserts everything's visible is immediately following
a pg_upgrade. It might be advisable for pg_upgrade to try to preserve the
relpages/reltuples statistics; but in any case this code is wrong on its
own terms, so fix it. Per report from Sergey Koposov.
Back-patch to 8.4, where the visibility map was introduced, same as the
previous change.
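The shape of the fix, as a simplified model rather than the actual vacuum
code:

    static void
    report_vacuum_stats(long scanned_pages, long total_pages,
                        double new_live_tuples,
                        double *relpages, double *reltuples)
    {
        /*
         * No pages scanned means no new evidence about tuple density;
         * leave both figures alone rather than turning the "unknown
         * density" state (0/0) into a bogus zero density.
         */
        if (scanned_pages == 0)
            return;

        *relpages = (double) total_pages;
        *reltuples = new_live_tuples;
    }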
|
Per bug #6181 from Itagaki Takahiro. Also do some marginal code cleanup
and improve error handling.
|
On closer inspection, whining in restore_toc_entries_parallel is really
much too late for any user-facing error case. The right place to do it
is at the start of RestoreArchive(), before we've done anything interesting
(such as trying to DROP all the targets ...)
Back-patch to 8.4, where parallel restore was introduced.
|
If we are unable to do a parallel restore because the input file is stdin
or is otherwise unseekable, we should complain and fail immediately, not
after having done some of the restore. Complaining once per thread isn't
so cool either, and the messages should be worded to make it clear this is
an unsupported case not some weird race-condition bug. Per complaint from
Lonni Friedman.
Back-patch to 8.4, where parallel restore was introduced.
|
These days, such a response is far more likely to signify a server-side
problem, such as fork failure. Reporting "server does not support SSL"
(in sslmode=require) could be quite misleading. But the results could
be even worse in sslmode=prefer: if the problem was transient and the
next connection attempt succeeds, we'll have silently fallen back to
protocol version 2.0, possibly disabling features the user needs.
Hence, it seems best to just eliminate the assumption that backing off
to non-SSL/2.0 protocol is the way to recover from an "E" response, and
instead treat the server error the same as we would in non-SSL cases.
I tested this change against a pre-7.0 server, and found that there
was a second logic bug in the "prefer" path: the test to decide whether
to make a fallback connection attempt assumed that we must have opened
conn->ssl, which in fact does not happen given an "E" response. After
fixing that, the code does indeed connect successfully to pre-7.0,
as long as you didn't set sslmode=require. (If you did, you get
"Unsupported frontend protocol", which isn't completely off base
given the server certainly doesn't support SSL.)
Since there seems no reason to believe that pre-7.0 servers exist anymore
in the wild, back-patch to all supported branches.
|
There are assorted situations wherein PQconnectPoll() will abandon a
connection attempt and try again with different parameters (eg, SSL versus
not SSL). However, the code forgot to discard any pending data in libpq's
I/O buffers when doing this. In at least one case (server returns E
message during SSL negotiation), there is unread input data which bollixes
the next connection attempt. I have not checked to see whether this is
possible in the other cases where we close the socket and retry, but it
seems like a matter of good defensive programming to add explicit
buffer-flushing code to all of them.
This is one of several issues exposed by Daniel Farina's report of
misbehavior after a server-side fork failure.
This has been wrong since forever, so back-patch to all supported branches.
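The buffer-flushing addition amounts to resetting libpq's I/O cursors
before each retry; roughly, with field names following libpq's internal
PGconn layout:

    /* Forget anything buffered from the abandoned connection attempt. */
    conn->inStart = conn->inCursor = conn->inEnd = 0;   /* unread input  */
    conn->outCount = 0;                                 /* unsent output */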
|
tsvector_concat() allocated its result workspace using the "conservative"
estimate of the sum of the two input tsvectors' sizes. Unfortunately that
wasn't so conservative as all that, because it supposed that the number of
pad bytes required could not grow. Which it can, as per test case from
Jesper Krogh, if there's a mix of lexemes with positions and lexemes
without them in the input data. The fix is to assume that we might add
a not-previously-present pad byte for each and every lexeme in the two
inputs; which really is conservative, but it doesn't seem worthwhile to
try to be more precise.
This is an aboriginal bug in tsvector_concat, so back-patch to all
versions containing it.
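The corrected worst-case arithmetic, as an illustrative model of the sizing
rule:

    #include <stddef.h>

    /*
     * Reserve space as if every lexeme in either input could gain one
     * not-previously-present pad byte in the result.
     */
    static size_t
    concat_worst_case_size(size_t bytes1, size_t nlexemes1,
                           size_t bytes2, size_t nlexemes2)
    {
        return bytes1 + bytes2 + nlexemes1 + nlexemes2;
    }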
|
This was deemed unnecessary initially but in later discussion it was
agreed otherwise.
Original file from Kevin Grittner, allegedly from Dan Ports.
I had to clean up whitespace a bit per changes from Heikki.
|
Per previous experimentation, backtracking slows down lexing performance
significantly (by about a third). It's usually pretty easy to avoid, just
need to have rules that accept an incomplete construct and do whatever the
lexer would have done otherwise.
The backtracking was introduced by the patch that added quoted variable
substitution. Back-patch to 9.0 where that was added.
|
Shigeru Hanada
|
This reverts commit 1bde67c0b9adce8b7ed2a2d1fcb2788cf96cea64, which
should have been done only on the master branch.
|
This makes it slightly more clear that '*' is not part of the default
value, in case that wasn't obvious.
As requested by Dougal Sutherland.
|
The previous coding resulted in contrib modules unintentionally overriding
the use of CONTRIB_TESTDB. There seems no particularly good reason to
allow that (after all, the makefile can set CONTRIB_TESTDB if that's really
what it intends).
In passing, document REGRESS_OPTS where the other pgxs.mk options are
documented.
Back-patch to 9.1 --- in prior versions, there were no cases of contrib
modules setting REGRESS_OPTS without including the --dbname switch, so
while the coding was fragile there was no actual bug.
|
When we implemented extensions, we made findDependentObjects() treat
EXTENSION dependency links similarly to INTERNAL links. However, that
logic contained an implicit assumption that an object could have at most
one INTERNAL dependency, so it did not work correctly for objects having
both INTERNAL and DEPENDENCY links. This led to failure to drop some
extension member objects when dropping the extension. Furthermore, we'd
never actually exercised the case of recursing to an internally-referenced
(owning) object from anything other than a NORMAL dependency, and it turns
out that passing the incoming dependency's flags to the owning object is
the Wrong Thing. This led to sometimes dropping a whole extension silently
when we should have rejected the drop command for lack of CASCADE.
Since we obviously were under-testing extension drop scenarios, add some
regression test cases. Unfortunately, such test cases require some
extensions (duh), so we can't test for problems in the core regression
tests. I chose to add them to the earthdistance contrib module, which is
a good test case because it has a dependency on the cube contrib module.
Back-patch to 9.1. Arguably these are pre-existing bugs in INTERNAL
dependency handling, but since it appears that the cases can never arise
pre-9.1, I'll refrain from back-patching the logic changes further than
that.
|
When creating a new schema for a non-relocatable extension, we neglected
to check whether the calling user has permission to create schemas.
That didn't matter in the original coding, since we had already checked
superuserness, but in the new dispensation where users need not be
superusers, we should check it. Use CreateSchemaCommand() rather than
calling NamespaceCreate() directly, so that we also enforce the rules
about reserved schema names.
Per complaint from KaiGai Kohei, though this isn't the same as his patch.
|
set_append_rel_pathlist supposed that, while computing per-column width
estimates for the appendrel, it could ignore child rels for which the
translated reltargetlist entry wasn't a Var. This gave rise to completely
silly estimates in some common cases, such as constant outputs from some or
all of the arms of a UNION ALL. Instead, fall back on get_typavgwidth to
estimate from the value's datatype; which might be a poor estimate but at
least it's not completely wacko.
That problem was exposed by an Assert in set_subquery_size_estimates, which
unfortunately was still overoptimistic even with that fix, since we don't
compute attr_widths estimates for appendrels that are entirely excluded by
constraints. So remove the Assert; we'll just fall back on get_typavgwidth
in such cases.
Also, since set_subquery_size_estimates calls set_baserel_size_estimates
which calls set_rel_width, there's no need for set_subquery_size_estimates
to call get_typavgwidth; set_rel_width will handle it for us if we just
leave the estimate set to zero. Remove the unnecessary code.
Per report from Erik Rijkers and subsequent investigation.
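The fallback is essentially the following; get_typavgwidth is the real
selfuncs helper, while the Var-width lookup is abbreviated to a
hypothetical call:

    if (IsA(texpr, Var))
        width = child_attr_width((Var *) texpr);    /* hypothetical lookup */
    else
        width = get_typavgwidth(exprType((Node *) texpr),
                                exprTypmod((Node *) texpr));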
|
The previous coding would result in deleting and not re-creating the
extension membership pg_depend rows, since there was no
CommandCounterIncrement that would allow recordDependencyOnCurrentExtension
to see that the deletion had happened. Make it work like the shell type
case, ie, keep the existing entries (and then throw an error if they're for
the wrong extension).
Per bug #6172 from Hitoshi Harada. Investigation and fix by Dimitri
Fontaine.
|
Due to tuple-slot mismanagement, evaluation of WHEN conditions for AFTER
ROW UPDATE triggers could crash if there had been a BEFORE ROW trigger
fired for the same update. Fix by not trying to overload the use of
estate->es_trig_tuple_slot. Per report from Yoran Heling.
Back-patch to 9.0, when trigger WHEN conditions were introduced.
|
As pointed out by Sergey Koposov, repeated invocations of tbm_lossify can
make building a large tidbitmap into an O(N^2) operation. To fix, make
sure we remove more than the minimum amount of information per call, and
add a fallback path to behave sanely if we're unable to fit the bitmap
within the requested amount of memory.
This has been wrong since the tidbitmap code was written, so back-patch
to all supported branches.
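In outline, each lossification pass now does a meaningful batch of work and
gives up sanely when nothing more can be compressed; a sketch with
hypothetical helpers, not the real tidbitmap code:

    static bool lossify_one_page(void);     /* hypothetical: false when nothing left to lossify */
    static long current_entry_count(void);  /* hypothetical */

    static void
    lossify_enough(long max_entries)
    {
        /* Aim well below the limit so we aren't called again immediately. */
        long        target = max_entries - max_entries / 10;

        while (current_entry_count() > target)
        {
            if (!lossify_one_page())
                break;          /* fallback: accept exceeding the budget */
        }
    }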
|
Now that we have a test that requires nondefault settings to pass, it seems
like we'd better mention that detail in the directions about how to run the
tests.
Also do some very minor copy-editing.
|
isolation regression tests.
Alvaro committed these fixes to master branch on Tue Jul 29th, as part of
Noah Misch's patch. The rest of that patch is not needed on 9.1, but this
part should be backpatched.
|
order of begin, prepare, and commit of three concurrent transactions that
have conflicts between them.
The test runs for quite a long time, and the expected output file is huge,
but this test caught some serious bugs during development, so it seems
worthwhile to keep. The test uses prepared transactions, so it fails if the
server has max_prepared_transactions=0. Because of that, it's marked as
"ignore" in the schedule file.
Dan Ports
|
cosmetic, it removes a lot of IMHO ugly whitespace from the expected output.
|
Module initialization functions in Python 3 must have external
linkage, because PyMODINIT_FUNC does dllexport on Windows-like
platforms. Without this change, the build with Python 3 fails on
Windows.
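The linkage requirement in miniature; the init-function name follows
plpython's module name, plpy, though the exact spelling here should be
treated as an assumption:

    #if PY_MAJOR_VERSION >= 3
    /* Wrong: "static PyMODINIT_FUNC PyInit_plpy(void);" hides the symbol,
     * defeating the dllexport that PyMODINIT_FUNC supplies on Windows. */
    PyMODINIT_FUNC PyInit_plpy(void);   /* must have external linkage */
    #endif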
|
Dropped columns within a composite type were not handled correctly.
Also, we did not check for whether a composite result type had changed
since we cached the information about it.
Jan Urbański, per a bug report from Jean-Baptiste Quenot
|
Bug reported by David Wheeler, fix by Alex Hunsaker.
|
streamed backup, throw an error and refuse to start up. The restore has not
finished correctly in that case and the data directory is possibly corrupt.
We already errored out in case of archive recovery, but could not during
crash recovery because we couldn't distinguish between the case that
pg_start_backup() was called and the database then crashed (must not error,
data is OK), and the case that we're restoring from a backup and not all
the needed WAL was replayed (data can be corrupt).
To distinguish those cases, add a line to backup_label to indicate
whether the backup was taken with pg_start/stop_backup(), or by streaming
(ie. pg_basebackup).
This is a different implementation than what I committed to 9.2 a week ago.
That implementation was not back-patchable because it required re-initdb.
Fujii Masao
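A sketch of how recovery can consume the new backup_label line (label
format inferred from the description above; error handling omitted):

    char        backupfrom[20];

    if (fscanf(lfp, "BACKUP FROM: %19s\n", backupfrom) == 1 &&
        strcmp(backupfrom, "streamed") == 0)
        backup_end_required = true;     /* refuse to start up unless the
                                         * backup-end WAL record was replayed */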
|
The translation tools are very unhappy about seeing \r in translatable
strings, so move it to a separate fprintf call.
|
The TID isn't stable enough: we might queue an sinval event before a VACUUM
FULL, and then process it afterwards, when the target tuple no longer has
the same TID. So we must invalidate entries on the basis of hash value
only. The old coding can be shown to result in various bizarre,
hard-to-reproduce errors in the presence of concurrent VACUUM FULLs on
system catalogs, and could easily result in permanent catalog corruption,
up to and including complete loss of tables.
This commit is just a minimal fix that removes the unsafe comparison.
We should remove transmission of the tuple TID from sinval messages
altogether, and then arrange to suppress the extra message in the common
case of a heap_update that doesn't change the key hashvalue. But that's
going to be much more invasive, and will only produce a probably-marginal
performance gain, so it doesn't seem like material for a back-patch.
Back-patch to 9.0. Before that, VACUUM FULL refused to do any tuple moving
if it found any INSERT_IN_PROGRESS or DELETE_IN_PROGRESS tuples (and
CLUSTER would give up altogether), so there was no risk of moving a tuple
that might be the subject of an unsent sinval message.
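Schematically, the minimal fix reduces the match test to the hash value
alone (names invented for the sketch):

    /* Before: required both the hash match and ItemPointerEquals() on the
     * TID; after: the TID may have moved under VACUUM FULL, so match on
     * the hash value only. */
    if (msg_hash_value == entry_hash_value)
        invalidate_catcache_entry(entry);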
|
We have to be sure that we have revalidated each nailed-in-cache relcache
entry before we try to use it to load data for some other relcache entry.
The introduction of "mapped relations" in 9.0 broke this, because although
we updated the state kept in relmapper.c early enough, we failed to
propagate that information into relcache entries soon enough; in
particular, we could try to fetch pg_class rows out of pg_class before
we'd updated its relcache entry's rd_node.relNode value from the map.
This bug accounts for Dave Gould's report of failures after "vacuum full
pg_class", and I believe that there is risk for other system catalogs
as well.
The core part of the fix is to copy relmapper data into the relcache
entries during "phase 1" in RelationCacheInvalidate(), before they'll be
used in "phase 2". To try to future-proof the code against other similar
bugs, I also rearranged the order in which nailed relations are visited
during phase 2: now it's pg_class first, then pg_class_oid_index, then
other nailed relations. This should ensure that RelationClearRelation can
apply RelationReloadIndexInfo to all nailed indexes without risking use
of not-yet-revalidated relcache entries.
Back-patch to 9.0 where the relation mapper was introduced.
|