Commit log
|
The function was renamed in 908e23473, but the comment never learned
about it.
|
The reloptions stuff allows this option to be set on a matview.
While it's questionable whether that is useful or was really intended,
it does work, and we shouldn't change that in minor releases. Commit
e3e66d8a9 disabled the option since I didn't realize that it was
possible for it to be set on a matview. Tweak the test to re-allow it.

Discussion: <19749.1478711862@sss.pgh.pa.us>
|
We really ought to make StdRdOptions and the other decoded forms of
reloptions self-identifying, but for the moment, assume that only plain
relations could possibly be user_catalog_tables. Fixes problem with bogus
"ON CONFLICT is not supported on table ... used as a catalog table" error
when target is a view with cascade option.

Discussion: <26681.1477940227@sss.pgh.pa.us>
|
In pg_log_v(), we need to translate the fmt before processing, not the
formatted message afterwards.
|
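A minimal standalone sketch of the ordering this fix describes, assuming a gettext-style _() macro (illustrative C, not the actual pg_log_v() code): the message catalog is keyed on the untranslated format string, so the lookup must happen before formatting, never on the formatted result.

    #include <libintl.h>
    #include <stdarg.h>
    #include <stdio.h>

    #define _(x) gettext(x)

    /* Illustrative logger in the shape of pg_log_v(): translate fmt
     * first, then format; translating the already-formatted message
     * would never find a catalog entry. */
    static void
    log_v(const char *fmt, va_list args)
    {
        char message[2048];

        vsnprintf(message, sizeof(message), _(fmt), args);
        fputs(message, stderr);
    }

    static void
    log_error(const char *fmt, ...)
    {
        va_list args;

        va_start(args, fmt);
        log_v(fmt, args);
        va_end(args);
    }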
|
This was already fixed in HEAD as part of 6ad8ac60 but was not
backpatched.

Also change the way pg_xlog is handled to be the same as the other
directories.

Patch from me with pg_xlog addition from Michael Paquier, test updates
from David Steele.
|
For a very long time, pltcl's spi_exec and spi_execp commands have had
a behavior of storing the current row number as an element of output
arrays, but this was never documented. Fix that.

For an equally long time, pltcl_trigger_handler had a behavior of silently
ignoring ".tupno" as an output column name, evidently so that the result
of spi_exec could be used directly as a trigger result tuple. Not sure
how useful that really is, but in any case it's bad that it would break
attempts to use ".tupno" as an actual column name. We can fix it by not
checking for ".tupno" until after we check for a column name match. This
comports with the effective behavior of spi_exec[p] that ".tupno" is only
magic when you don't have an actual column named that.

In passing, wordsmith the description of returning modified tuples from
a pltcl trigger.

Noted while working on Jim Nasby's patch to support composite results
from pltcl. The inability to return trigger tuples using ".tupno" as
a column name is a bug, so back-patch to all supported branches.
|
We must do this in case the expression evaluation results in calling
another plpgsql function (or, really, anything using SPI). I missed
the need for this when I converted exec_cast_value() from doing a
simple InputFunctionCall() to doing ExecEvalExpr() in commit 1345cc67b.
There is a SPI_push_conditional in InputFunctionCall(), so there was
no bug before that change.

Per bug #14414 from Marcos Castedo. Add a regression test based on his
example, which was that a plpgsql function in a domain check constraint
didn't work when assigning to a domain-type variable within plpgsql.

Report: <20161106010947.1387.66380@wrigleys.postgresql.org>
|
The workaround the IANA guys chose to get rid of the clang warning
we'd silenced in commit 23ed2ba81 turns out not to satisfy Coverity.
Go back to the previous solution, ie, remove the useless comparison
to SIZE_MAX. (In principle, there could be machines out there where
it's not useless because ptrdiff_t is wider than size_t. But the whole
thing is pretty academic anyway, as we could never approach this limit
for any sane estimate of the amount of data that zic will ever be asked
to work with.)

Also, s/lineno/lineno_t/g, because if we accept their decision to start
using "lineno" as a typedef, it is going to have very unpleasant
consequences in our next pgindent run. Noted while fooling with
pltcl yesterday.
|
Seems to be a copy-and-pasteo. Odd that we heard no reports of
compiler warnings about it.

Thomas Munro
|
This patch absorbs some unreleased fixes for symlink manipulation bugs
introduced in tzcode 2016g. Ordinarily I'd wait around for a released
version, but in this case it seems like we could do with extra testing,
in particular checking whether it works in EDB's VMware build environment.
This corresponds to commit aec59156abbf8472ba201b6c7ca2592f9c10e077 in
https://github.com/eggert/tz.

Per a report from Sandeep Thakkar, building in an environment where hard
links are not supported in the timezone data installation directory failed,
because upstream code refactoring had broken the case of symlinking from an
existing symlink. Further experimentation also showed that the symlinks
were sometimes made incorrectly, with too many or too few "../"'s in the
symlink contents.

Back-patch of commit 1f87181e12beb067d21b79493393edcff14c190b.

Report: <CANFyU94_p6mqRQc2i26PFp5QAOQGB++AjGX=FO8LDpXw0GSTjw@mail.gmail.com>
Discussion: http://mm.icann.org/pipermail/tz/2016-November/024431.html
|
tsquery_rewrite() tries to find matches to subsets of AND/OR conditions;
for example, in the query 'a | b | c' the substitution subquery 'a | c'
should match and lead to replacement of the first and third items.
That's fine, but the matching algorithm apparently takes about O(2^N)
for an N-clause query (I say "apparently" because the code is also both
unintelligible and uncommented). We could probably do better than that
even without any extra assumptions --- but actually, we know that the
subclauses are sorted, indeed are depending on that elsewhere in this very
same function. So we can just scan the two lists a single time to detect
matches, as though we were doing a merge join.

Also do a re-flattening call (QTNTernary()) in tsquery_rewrite_query, just
to make sure that the tree fits the expectations of the next search cycle.

I didn't try to devise a test case for this, but I'm pretty sure that the
oversight could have led to failure to match in some cases where a match
would be expected.

Improve comments, and also stick a CHECK_FOR_INTERRUPTS into
dofindsubquery, just in case it's still too slow for somebody.

Per report from Andreas Seltenreich. Back-patch to all supported branches.

Discussion: <8760oasf2y.fsf@credativ.de>
|
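A minimal illustration of the sorted-lists idea (standalone C, not the actual tsquery code; assumes both clause arrays are sorted with the same comparator): testing whether every clause of the substitution subquery appears in the query takes one simultaneous scan, as in a merge join, instead of exponential backtracking.

    #include <stdbool.h>
    #include <string.h>

    /* Return true if every string in sub[] (sorted) also appears in
     * qry[] (sorted); a single linear pass over both lists. */
    static bool
    sorted_subset(char **sub, int nsub, char **qry, int nqry)
    {
        int i = 0, j = 0;

        while (i < nsub && j < nqry)
        {
            int cmp = strcmp(sub[i], qry[j]);

            if (cmp == 0)
                i++, j++;        /* matched this subquery clause */
            else if (cmp > 0)
                j++;             /* advance query side, merge-join style */
            else
                return false;    /* sub[i] can't appear later: lists sorted */
        }
        return i == nsub;
    }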
|
QTNTernary() contains logic to flatten, eg, '(a & b) & c' into 'a & b & c',
which is all well and good, but it tries to do that to NOT nodes as well,
so that '!!a' gets changed to '!a'. Explicitly restrict the conversion to
be done only on AND and OR nodes, and add a test case illustrating the bug.

In passing, provide some comments for the sadly naked functions in
tsquery_util.c, and simplify some baroque logic in QTNFree(), which
I think may have been leaking some items it intended to free.

Noted while investigating a complaint from Andreas Seltenreich.
Back-patch to all supported versions.
|
In the previous coding, if an aggregate's transition function returned an
expanded array, nodeAgg.c and nodeWindowAgg.c would always copy it and thus
force it into the flat representation. This led to ping-ponging between
flat and expanded formats, which costs a lot. For an aggregate using
array_append as transition function, I measured about a 15X slowdown
compared to the pre-9.5 code, when working on simple int[] arrays.
Of course, the old code was already O(N^2) in this usage due to copying
flat arrays all the time, but it wasn't quite this inefficient.

To fix, teach nodeAgg.c and nodeWindowAgg.c to allow expanded transition
values without copying, so long as the transition function takes care to
return the transition value already properly parented under the aggcontext.
That puts a bit of extra responsibility on the transition function, but
doing it this way allows us to not need any extra logic in the fast path
of advance_transition_function (ie, with a pass-by-value transition value,
or with a modified-in-place pass-by-reference value). We already know
that that's a hot spot so I'm loath to add any cycles at all there. Also,
while only array_append currently knows how to follow this convention,
this solution allows other transition functions to opt-in without needing
to have a whitelist in the core aggregation code.

(The reason we would need a whitelist is that currently, if you pass a
R/W expanded-object pointer to an arbitrary function, it's allowed to do
anything with it including deleting it; that breaks the core agg code's
assumption that it should free discarded values. Returning a value under
aggcontext is the transition function's signal that it knows it is an
aggregate transition function and will play nice. Possibly the API rules
for expanded objects should be refined, but that would not be a
back-patchable change.)

With this fix, an aggregate using array_append is no longer O(N^2), so it's
much faster than pre-9.5 code rather than much slower. It's still a bit
slower than the bespoke infrastructure for array_agg, but the differential
seems to be only about 10%-20% rather than orders of magnitude.

Discussion: <6315.1477677885@sss.pgh.pa.us>
|
This bug exists as far back as 9.0, when Hot Standby was introduced,
so back-patch to all supported branches.

Report and patch by Takayuki Tsunakawa, reviewed by Michael Paquier
and Kuntal Ghosh.
|
If a restartpoint flushed no dirty buffers, it could fail to update
the minimum recovery point, leading to a minimum recovery point prior
to the starting REDO location. perform_base_backup() would interpret
that as meaning that no WAL files at all needed to be included in the
backup, failing an internal sanity check. To fix, have restartpoints
always update the minimum recovery point to just after the checkpoint
record itself, so that the file (or files) containing the checkpoint
record will always be included in the backup.

Code by Amit Kapila, per a design suggestion by me, with some
additional work on the code comment by me. Test case by Michael
Paquier. Report by Kyotaro Horiguchi.
|
The code to change the deferrability properties of a foreign-key constraint
updated all the associated triggers to match; but a moment's examination of
the code that creates those triggers in the first place shows that only
some of them should track the constraint's deferrability properties. This
leads to odd failures in subsequent exercise of the foreign key, as the
triggers are fired at the wrong times. Fix that, and add a regression test
comparing the trigger properties produced by ALTER CONSTRAINT with those
you get by creating the constraint as-intended to begin with.

Per report from James Parks. Back-patch to 9.4 where this ALTER
functionality was introduced.

Report: <CAJ3Xv+jzJ8iNNUcp4RKW8b6Qp1xVAxHwSXVpjBNygjKxcVuE9w@mail.gmail.com>
|
I broke this in commit f3094920a. Apparently it's dead code anyway,
at least as far as our buildfarm is concerned (and the upstream IANA
code doesn't worry at all about symlink() not being present).
But as long as the rest of our code is willing to guard against not
having symlink(), this should too. Noted while investigating a
tangentially-related complaint from Sandeep Thakkar.

Back-patch to keep branches in sync.
|
Translation updates

Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 2c7265d5e5c1d7858c6636bd366f28c41ed5173e
|
An oversight in setting the boundaries of known commit timestamps during
startup caused old commit timestamps to become inaccessible after a
server restart.

Author and reporter: Julien Rouhaud
Review, test code: Craig Ringer
|
INSERT ... ON CONFLICT (specifically ExecCheckHeapTupleVisible) contains
another example of this unsafe coding practice. It is much harder to get
a failure out of it than the case fixed in commit 6292c2339, because in
most scenarios any hint bits that could be set would have already been set
earlier in the command. However, Konstantin Knizhnik reported a failure
with a custom transaction manager, and it's clearly possible to get a
failure via a race condition in async-commit mode.

For lack of a reproducible example, no regression test case in this
commit.

I did some testing with Asserts added to tqual.c's functions, and can say
that running "make check-world" exposed these two bugs and no others.
The Asserts are messy enough that I've not added them to the code for now.

Report: <57EE93C8.8080504@postgrespro.ru>
Related-Discussion: <CAO3NbwOycQjt2Oqy2VW-eLTq2M5uGMyHnGm=RNga4mjqcYD7gQ@mail.gmail.com>
|
A transaction that conflicts against itself, for example

    INSERT INTO t(pk) VALUES (1),(1) ON CONFLICT DO NOTHING;

should behave the same regardless of isolation level. It certainly
shouldn't throw a serialization error, as retrying will not help.
We got this wrong due to the ON CONFLICT logic not considering the case,
as reported by Jason Dusek.

Core of this patch is by Peter Geoghegan (based on an earlier patch by
Thomas Munro), though I didn't take his proposed code refactoring for fear
that it might have unexpected side-effects. Test cases by Thomas Munro
and myself.

Report: <CAO3NbwOycQjt2Oqy2VW-eLTq2M5uGMyHnGm=RNga4mjqcYD7gQ@mail.gmail.com>
Related-Discussion: <57EE93C8.8080504@postgrespro.ru>
|
Despite the argumentation I wrote in commit 7a2fe85b0, it's unsafe to do
this, because in corner cases it's possible for HeapTupleSatisfiesSelf
to try to set hint bits on the target tuple; and at least since 8.2 we
have required the buffer content lock to be held while setting hint bits.

The added regression test exercises one such corner case. Unpatched, it
causes an assertion failure in assert-enabled builds, or otherwise would
cause a hint bit change in a buffer we don't hold lock on, which given
the right race condition could result in checksum failures or other data
consistency problems. The odds of a problem in the field are probably
pretty small, but nonetheless back-patch to all supported branches.

Report: <19391.1477244876@sss.pgh.pa.us>
|
With track_io_timing = on, EXPLAIN (ANALYZE, BUFFERS) will emit fields
named like "I/O Read Time". The slash makes that invalid as an XML
element name, so that adding FORMAT XML would produce invalid XML.

We already have code in there to translate spaces to dashes, so let's
generalize that to convert anything that isn't a valid XML name character,
viz letters, digits, hyphens, underscores, and periods. We could just
reject slashes, which would run a bit faster. But the fact that this went
unnoticed for so long doesn't give me a warm feeling that we'd notice the
next creative violation, so let's make it a permanent fix.

Reported by Markus Winand, though this isn't his initial patch proposal.

Back-patch to 9.2 where track_io_timing was added. The problem is only
latent in 9.1, so I don't feel a need to fix it there.

Discussion: <E0BF6A45-68E8-45E6-918F-741FB332C6BB@winand.at>
|
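A small standalone sketch of that mapping (assumed shape, not the actual explain.c code): keep ASCII letters, digits, hyphens, underscores, and periods, and convert everything else to a dash.

    #include <ctype.h>

    /* Map a field name such as "I/O Read Time" to a valid XML element
     * name in place: any character outside [A-Za-z0-9._-] becomes '-'. */
    static void
    xmlize_name(char *name)
    {
        for (char *p = name; *p; p++)
        {
            if (!isalnum((unsigned char) *p) &&
                *p != '-' && *p != '_' && *p != '.')
                *p = '-';
        }
    }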
|
This absorbs a fix for a symlink-manipulation bug in zic that was
introduced in 2016g. It probably isn't interesting for our use-case,
but I'm not quite sure, so let's update while we're at it.
|
(Didn't I just do this? Oh well.)
DST law changes in Palestine. Historical corrections for Turkey.
Switch to numeric abbreviations for Asia/Colombo.
|
clang points out that SIZE_MAX wouldn't fit into an int, which means
this comparison is pretty useless. Per report from Thomas Munro.
|
Per buildfarm.
|
This is mostly to absorb some corner-case fixes in zic for year-2037
timestamps. The other changes that have been made are unlikely to affect
our usage, but nonetheless we may as well take 'em.
|
IANA got rid of the really silly "abbreviation" and replaced it with one
that's only moderately silly. But it's still pointless, so keep on not
showing it.
|
DST law changes in Turkey. Historical corrections for America/Los_Angeles,
Europe/Kirov, Europe/Moscow, Europe/Samara, and Europe/Ulyanovsk.
Rename Asia/Rangoon to Asia/Yangon, with a backward compatibility link.

The IANA crew continue their campaign to replace invented time zone
abbreviations with numeric GMT offsets. This update changes numerous zones
in Antarctica and the former Soviet Union, for instance Antarctica/Casey
now reports "+08" not "AWST" in the pg_timezone_names view. I kept these
abbreviations in the tznames/ data files, however, so that we will still
accept them for input. (We may want to start trimming those files someday,
but today is not that day.)

An exception is that since IANA no longer claims that "AMT" is in use
in Armenia for GMT+4, I replaced it in the Default file with GMT-4,
corresponding to Amazon Time which is in use in South America. It may be
that that meaning is also invented and IANA will drop it in a future
update; but for now, it seems silly to give pride of place to a meaning
not traceable to IANA over one that is.
|
When a relation is truncated, it is important that the FSM is truncated as
well. Otherwise, after recovery, the FSM can return a page that has been
truncated away, leading to errors like:

    ERROR:  could not read block 28991 in file "base/16390/572026": read only 0 of 8192 bytes

We were using MarkBufferDirtyHint() to dirty the buffer holding the last
remaining page of the FSM, but during recovery, that might in fact not
dirty the page, and the FSM update might be lost.

To fix, use the stronger MarkBufferDirty() function. MarkBufferDirty()
requires us to do WAL-logging ourselves, to protect from a torn page, if
checksumming is enabled.

Also fix an oversight in visibilitymap_truncate: it also needs to WAL-log
when checksumming is enabled.

Analysis by Pavan Deolasee.

Discussion: <CABOikdNr5vKucqyZH9s1Mh0XebLs_jRhKv6eJfNnD2wxTn=_9A@mail.gmail.com>

Backpatch to 9.3, where we got data checksums.
|
CommandId is declared as uint32, and values up to 4G are indeed legal.
cidout() handles them properly by treating the value as unsigned int.
But cidin() was just using atoi(), which has platform-dependent behavior
for values outside the range of signed int, as reported by Bart Lengkeek
in bug #14379. Use strtoul() instead, as xidin() does.

In passing, make some purely cosmetic changes to make xidin/xidout
look more like cidin/cidout; the former didn't have a monopoly on
best practice IMO.

Neither xidin nor cidin make any attempt to throw error for invalid input.
I didn't change that here, and am not sure it's worth worrying about
since neither is really a user-facing type. The point is just to ensure
that indubitably-valid inputs work as expected.

It's been like this for a long time, so back-patch to all supported
branches.

Report: <20161018152550.1413.6439@wrigleys.postgresql.org>
|
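A minimal illustration of why strtoul() is the right tool here (hypothetical test program, not the actual cidin code): atoi() has undefined, platform-dependent behavior once the text exceeds INT_MAX, while strtoul() covers the full uint32 range.

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    int
    main(void)
    {
        const char *str = "4294967294";   /* a legal CommandId near 4G */

        /* atoi() overflows signed int here; the result is unspecified. */
        printf("atoi:    %d\n", atoi(str));

        /* strtoul() parses the value as unsigned, like xidin() does. */
        uint32_t cid = (uint32_t) strtoul(str, NULL, 10);
        printf("strtoul: %u\n", cid);
        return 0;
    }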
|
bitshiftright() and bitshiftleft() would recursively call each other
infinitely if the user passed INT_MIN for the shift amount, due to integer
overflow in negating the shift amount. To fix, clamp to -VARBITMAXLEN.
That doesn't change the results since any shift distance larger than the
input bit string's length produces an all-zeroes result.

Also fix some places that seemed inadequately paranoid about input typmods
exceeding VARBITMAXLEN. While a typmod accepted by anybit_typmodin() will
certainly be much less than that, at least some of these spots are
reachable with user-chosen integer values.

Andreas Seltenreich and Tom Lane

Discussion: <87d1j2zqtz.fsf@credativ.de>
|
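A standalone sketch of the overflow and the clamp (assumed shape; MAXBITLEN stands in for VARBITMAXLEN): negating INT_MIN overflows back to INT_MIN, so clamp before negating. Any distance longer than the bit string yields all zeroes anyway, so the clamp changes no results.

    #include <limits.h>
    #include <stdio.h>

    #define MAXBITLEN 32            /* stand-in for VARBITMAXLEN */

    static unsigned shift_right(unsigned v, int n);

    static unsigned
    shift_left(unsigned v, int n)
    {
        if (n < 0)                  /* clamp first: -INT_MIN overflows */
            return shift_right(v, (n < -MAXBITLEN) ? MAXBITLEN : -n);
        return (n >= MAXBITLEN) ? 0 : v << n;
    }

    static unsigned
    shift_right(unsigned v, int n)
    {
        if (n < 0)
            return shift_left(v, (n < -MAXBITLEN) ? MAXBITLEN : -n);
        return (n >= MAXBITLEN) ? 0 : v >> n;
    }

    int
    main(void)
    {
        /* Without the clamp, this call would recurse forever. */
        printf("%u\n", shift_right(0xffffffffu, INT_MIN));  /* prints 0 */
        return 0;
    }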
|
pgstat_twophase_postcommit is supposed to duplicate the math in
AtEOXact_PgStat, but it had missed out the bit about clearing
t_delta_live_tuples/t_delta_dead_tuples for a TRUNCATE.

It's harder than you might think to replicate the issue here, because
those counters would only be nonzero when a previous transaction in
the same backend had added/deleted tuples in the truncated table,
and those counts hadn't been sent to the stats collector yet.

Evident oversight in commit d42358efb. I've not added a regression
test for this; we tried to add one in d42358efb, and had to revert it
because it was too timing-sensitive for the buildfarm.

Back-patch to 9.5 where d42358efb came in.

Stas Kelvich

Discussion: <EB57BF68-C06D-4737-BDDC-4BA778F4E62B@postgrespro.ru>
|
It's not good for an inherited child constraint to be marked connoinherit;
that would result in the constraint not propagating to grandchild tables,
if any are created later. The code mostly prevented this from happening
but there was one case that was missed.

This is somewhat related to commit e55a946a8, which also tightened checks
on constraint merging. Hence, back-patch to 9.2 like that one. This isn't
so much because there's a concrete feature-related reason to stop there,
as to avoid having more distinct behaviors than we have to in this area.

Amit Langote

Discussion: <b28ee774-7009-313d-dd55-5bdd81242c41@lab.ntt.co.jp>
|
Even if Linux's mmap() is okay with a partial-hugepage request, munmap()
is not, as reported by Chris Richards. Therefore it behooves us to try
a bit harder to find out the actual hugepage size, instead of assuming
that we can skate by with a guess.

For the moment, just look into /proc/meminfo to find out the default
hugepage size, and use that. Later, on kernels that support requests
for nondefault sizes, we might try to consider other alternatives.
But that smells more like a new feature than a bug fix, especially if
we want to provide any way for the DBA to control it, so leave it for
another day.

I set this up to allow easy addition of platform-specific code for
non-Linux platforms, if needed; but right now there are no reports
suggesting that we need to work harder on other platforms.

Back-patch to 9.4 where hugepage support was introduced.

Discussion: <31056.1476303954@sss.pgh.pa.us>
|
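A standalone sketch of reading the default hugepage size the way the fix describes (assumed shape, not the actual sysv_shmem.c code): scan /proc/meminfo for the Hugepagesize line.

    #include <stdio.h>

    /* Return the default hugepage size in bytes, or 0 if unknown. */
    static long
    default_hugepage_size(void)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        char line[128];
        long kb = 0;

        if (f == NULL)
            return 0;
        while (fgets(line, sizeof(line), f))
        {
            /* Typical line: "Hugepagesize:       2048 kB" */
            if (sscanf(line, "Hugepagesize: %ld kB", &kb) == 1)
                break;
        }
        fclose(f);
        return kb * 1024;
    }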
|
Fix detaching of the mmap'd segment to have its own on_shmem_exit callback,
rather than piggybacking on the one for detaching from the SysV segment.
That was confusing, and given the distance between the two attach calls,
it was trouble waiting to happen.

Make the detaching calls idempotent by clearing AnonymousShmem to show
we've already unmapped. I spent quite a bit of time yesterday trying
to find a path that would allow the munmap()'s to be done twice, and
while I did not succeed, it seems silly that there's even a question.

Make the #ifdef logic less confusing by separating "do we want to use
anonymous shmem" from EXEC_BACKEND. Even though there's no current
scenario where those conditions are different, it is not helpful for
different places in the same file to be testing EXEC_BACKEND for what
are fundamentally different reasons.

Don't do on_exit_reset() in StartBackgroundWorker(). At best that's
useless (InitPostmasterChild would have done it already) and at worst
it could zap some callback that's unrelated to shared memory.

Improve comments, and simplify the huge_pages enablement logic slightly.

Back-patch to 9.4 where hugepage support was introduced.
Arguably this should go into 9.3 as well, but the code looks
significantly different there, and I doubt it's worth the
trouble of adapting the patch given I can't show a live bug.
|
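A generic sketch of the idempotent-detach pattern described here, with atexit() standing in for on_shmem_exit() and illustrative names: clear the pointer on the first unmap so a second call is a no-op.

    #include <stdlib.h>
    #include <sys/mman.h>

    static void *AnonymousShmem = NULL;   /* illustrative globals */
    static size_t AnonymousShmemSize = 0;

    /* Exit callback dedicated to the mmap'd segment.  Safe to run
     * twice: the first call unmaps and clears the pointer, later
     * calls do nothing. */
    static void
    AnonymousShmemDetach(void)
    {
        if (AnonymousShmem != NULL)
        {
            munmap(AnonymousShmem, AnonymousShmemSize);
            AnonymousShmem = NULL;
        }
    }

    int
    main(void)
    {
        AnonymousShmemSize = 1 << 20;
        AnonymousShmem = mmap(NULL, AnonymousShmemSize,
                              PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (AnonymousShmem == MAP_FAILED)
            AnonymousShmem = NULL;
        atexit(AnonymousShmemDetach);     /* its own callback, not piggybacked */
        return 0;
    }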
|
This turns out not to be as harmless as I thought: MSVC will complain
if it sees an "extern" declaration without PGDLLEXPORT and then one with.
(Seems fairly silly, given that this can be changed after the fact by the
linker, but there you have it.) Therefore, contrib modules that have
extern's for V1 functions in header files are falling over in the
buildfarm, since none of those externs are marked PGDLLEXPORT.

We might or might not conclude that we're willing to plaster those
declarations with PGDLLEXPORT in HEAD, but in any case there's no way we're
going to ship this change in the back branches. Third-party authors would
not thank us for breaking their code in a minor release. Hence, revert
the addition of PGDLLEXPORT (but let's keep the extra info in the comment).
If we do the other changes we can revert this commit in HEAD.

Per buildfarm.
|
This isn't really necessary for our own code, because we use a .DEF file
in MSVC builds (see gendef.pl), or --export-all-symbols in MinGW and
Cygwin builds, to ensure that all global symbols in loadable modules
will be exported on Windows. However, third-party authors might use
different build processes that need this marker, and it's harmless
enough for our own builds.

To some extent, this is an oversight in commit e7128e8db, so back-patch
to 9.4 where that was added.

Laurenz Albe

Discussion: <A737B7A37273E048B164557ADEF4A58B539300BD@ntex2010a.host.magwien.gv.at>
|
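For illustration, the kind of third-party declaration this marker affects (a sketch; the function name is hypothetical): the PGDLLEXPORT keyword is what makes MSVC export the symbol even without a .DEF file or --export-all-symbols.

    #include "postgres.h"
    #include "fmgr.h"

    PG_MODULE_MAGIC;

    /* The declaration a third-party module might keep in a shared
     * header, marked for export on Windows. */
    extern PGDLLEXPORT Datum add_one(PG_FUNCTION_ARGS);

    PG_FUNCTION_INFO_V1(add_one);

    Datum
    add_one(PG_FUNCTION_ARGS)
    {
        PG_RETURN_INT32(PG_GETARG_INT32(0) + 1);
    }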
|
Amit Langote
|
Ordinarily there would not be an async result sitting around at this
point, but it appears that in corner cases there can be. Considering
all the work we're about to launch, it's hardly going to cost anything
noticeable to check.

It's been like this forever, so back-patch to all supported branches.

Report: <CAD-Qf1eLUtBOTPXyFQGW-4eEsop31tVVdZPu4kL9pbQ6tJPO8g@mail.gmail.com>
|
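In libpq terms, the defensive pattern this describes looks roughly like the following (a sketch; the helper name is ours):

    #include <libpq-fe.h>

    /* Before launching new work, consume any async result that may
     * be sitting around from earlier activity on this connection. */
    static void
    drain_pending_results(PGconn *conn)
    {
        PGresult *res;

        while ((res = PQgetResult(conn)) != NULL)
            PQclear(res);
    }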
|
Historically, we've allowed users to add a CHECK constraint to a child
table and then add an identical CHECK constraint to the parent. This
results in "merging" the two constraints so that the pre-existing
child constraint ends up with both conislocal = true and coninhcount > 0.
However, if you tried to do it in the other order, you got a duplicate
constraint error. This is problematic for pg_dump, which needs to issue
separate ADD CONSTRAINT commands in some cases, but has no good way to
ensure that the constraints will be added in the required order.

And it's more than a bit arbitrary, too. The goal of complaining about
duplicated ADD CONSTRAINT commands can be served if we reject the case of
adding a constraint when the existing one already has conislocal = true;
but if it has conislocal = false, let's just make the ADD CONSTRAINT set
conislocal = true. In this way, either order of adding the constraints
has the same end result.

Another problem was that the code allowed creation of a parent constraint
marked convalidated that is merged with a child constraint that is
!convalidated. In this case, an inheritance scan of the parent table could
emit some rows violating the constraint condition, which would be an
unexpected result given the marking of the parent constraint as validated.
Hence, forbid merging of constraints in this case. (Note: valid child and
not-valid parent seems fine, so continue to allow that.)

Per report from Benedikt Grundmann. Back-patch to 9.2 where we introduced
possibly-not-valid check constraints. The second bug obviously doesn't
apply before that, and I think the first doesn't either, because pg_dump
only gets into this situation when dealing with not-valid constraints.

Report: <CADbMkNPT-Jz5PRSQ4RbUASYAjocV_KHUWapR%2Bg8fNvhUAyRpxA%40mail.gmail.com>
Discussion: <22108.1475874586@sss.pgh.pa.us>
|
Back-patch commit 0dba54f1666ead71c54ce100b39efda67596d297 into the older
branches. This test is almost as much of a patching hazard there as it is
in HEAD, and it has no more reason to be needed than it does in HEAD.

I went back as far as 9.2; I judged 9.1 not worth the trouble since
it's on the verge of being EOL'd.
|
The need for this was previously obscured even on picky platforms
by the hack we used to support direct cross-module references in
the transforms contrib modules. Now that that hack is gone, the
undefined symbol is exposed, as reported by Robert Haas.

Back-patch to 9.5 where we started to use -Wl,-undefined,dynamic_lookup.
I'm a bit surprised that the older branches don't seem to contain
any gettext references in this module, but since they don't fail
at build time, they must not. (We might be able to get away with
leaving this alone in 9.5/9.6, but I think it's cleaner if the
reference gets resolved at link time.)

Report: <CA+TgmoaHJKU5kcWZcYduATYVT7Mnx+8jUnycaYYL7OtCwCigug@mail.gmail.com>
|
I somehow had assumed that in the spinlock (in turn possibly using
semaphores) based fallback atomics implementation 32 bit writes could be
done without a lock. As far as the write goes that's correct, since
postgres supports only platforms with single-copy atomicity for aligned
32bit writes. But writing without holding the spinlock breaks
read-modify-write operations like pg_atomic_compare_exchange_u32(),
since they'll potentially "miss" a concurrent write, which can't happen
in actual hardware implementations.

In 9.6+ when using the fallback atomics implementation this could lead
to buffer header locks not being properly marked as released, and
potentially some related state corruption. I don't see a related danger
in 9.5 (earliest release with the API), because pg_atomic_write_u32()
wasn't used in a concurrent manner there.

The state variables of local buffers were, before this change,
manipulated using pg_atomic_write_u32(), to avoid unnecessary
synchronization overhead. As that'd not be the case anymore, introduce
and use pg_atomic_unlocked_write_u32(), which does not correctly
interact with RMW operations.

This bug only caused issues when postgres is compiled on platforms
without atomics support (i.e. no common new platform), or when compiled
with --disable-atomics, which explains why this wasn't noticed in
testing.

Reported-By: Tom Lane
Discussion: <14947.1475690465@sss.pgh.pa.us>
Backpatch: 9.5-, where the atomic operations API was introduced.
|
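A schematic of the bug in plain C (illustrative, not the real implementation; a pthread mutex stands in for the spinlock): compare-exchange reads and writes under the lock, so a plain store that bypasses the lock can land between the read and the write and be lost.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct
    {
        pthread_mutex_t lock;       /* stand-in for the fallback spinlock */
        uint32_t        value;
    } fallback_atomic_u32;

    static fallback_atomic_u32 var = { PTHREAD_MUTEX_INITIALIZER, 0 };

    static bool
    cas_u32(fallback_atomic_u32 *a, uint32_t *expected, uint32_t newval)
    {
        bool ok;

        pthread_mutex_lock(&a->lock);
        ok = (a->value == *expected);
        if (ok)
            a->value = newval;
        else
            *expected = a->value;
        pthread_mutex_unlock(&a->lock);
        return ok;
    }

    /* BUGGY: a store without the lock can slip between cas_u32()'s
     * read and write, and the CAS's write then clobbers it. */
    static void
    write_u32_buggy(fallback_atomic_u32 *a, uint32_t v)
    {
        a->value = v;
    }

    /* FIXED: take the lock, so stores serialize against RMW operations. */
    static void
    write_u32(fallback_atomic_u32 *a, uint32_t v)
    {
        pthread_mutex_lock(&a->lock);
        a->value = v;
        pthread_mutex_unlock(&a->lock);
    }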
|
Recent Perl and/or new Linux distributions are starting to remove "." from
the @INC list by default. That breaks pg_rewind and ssl test suites, which
use helper perl modules that reside in the same directory. To fix, add the
current source directory explicitly to prove's include dir.

The vcregress.pl script probably also needs something like this, but I
wasn't able to remove '.' from @INC on Windows to test this, and don't want
to try doing that blindly.

Discussion: <20160908204529.flg6nivjuwp5vaoy@alap3.anarazel.de>
|
They are supposed to be mutually exclusive, but there was no check for
that.

Michael Banck

Discussion: <20161007103414.GD12247@nighthawk.caipicrew.dd-dns.de>
|
Leaving the error in the error queue used to be harmless, because the
X509_STORE_load_locations() call used to be the last step in
initialize_SSL(), and we would clear the queue before the next
SSL_connect() call. But the previous commit moved things around. The
symptom was that if a CRL file was not found, and one of the subsequent
initialization steps, like loading the client certificate or private key,
failed, we would incorrectly print the "no such file" error message from
the earlier X509_STORE_load_locations() call as the reason.

Backpatch to all supported versions, like the previous patch.
|
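The shape of the fix, sketched against the OpenSSL API (illustrative, not libpq's exact code): if the optional CRL load fails, clear the error queue so a later failure doesn't report the stale message.

    #include <openssl/ssl.h>
    #include <openssl/err.h>

    static void
    load_optional_crl(SSL_CTX *ctx, const char *crl_file)
    {
        X509_STORE *store = SSL_CTX_get_cert_store(ctx);

        if (X509_STORE_load_locations(store, crl_file, NULL) != 1)
        {
            /* The CRL is optional; drop the "no such file" error so
             * it can't be mistaken for the cause of a later failure. */
            ERR_clear_error();
        }
    }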
|
There were several issues with the old coding:

1. There was a race condition, if two threads opened a connection at the
   same time. We used a mutex around SSL_CTX_* calls, but that was not
   enough, e.g. if one thread called SSL_CTX_load_verify_locations() with
   one path, and another thread set it with a different path, before the
   first thread got to establish the connection.

2. Opening two different connections, with different sslrootcert settings,
   seemed to fail outright with "SSL error: block type is not 01". Not
   sure why.

3. We created the SSL object before calling SSL_CTX_load_verify_locations
   and SSL_CTX_use_certificate_chain_file on the SSL context. That was
   wrong, because the options set on the SSL context are propagated to the
   SSL object when the SSL object is created. If they are set after the
   SSL object has already been created, they won't take effect until the
   next connection. (This is bug #14329)

At least some of these could've been fixed while still using a shared
context, but it would've been more complicated and error-prone. To keep
things simple, let's just use a separate SSL context for each connection,
and accept the overhead.

Backpatch to all supported versions.

Report, analysis and test case by Kacper Zuk.

Discussion: <20160920101051.1355.79453@wrigleys.postgresql.org>
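A sketch of the resulting initialization order with a per-connection context (OpenSSL calls as described above; error handling elided, names ours):

    #include <openssl/ssl.h>

    /* One SSL_CTX per connection: configure the context fully, then
     * create the SSL object, so the settings actually propagate. */
    static SSL *
    open_connection_ssl(const char *rootcert, const char *certchain)
    {
        SSL_CTX *ctx = SSL_CTX_new(SSLv23_method());
        SSL     *ssl;

        /* Set verify locations and certificate on the context first... */
        SSL_CTX_load_verify_locations(ctx, rootcert, NULL);
        SSL_CTX_use_certificate_chain_file(ctx, certchain);

        /* ...and only then create the SSL object that inherits them. */
        ssl = SSL_new(ctx);

        /* The SSL object holds a reference to the context, so the
         * per-connection context can be released here. */
        SSL_CTX_free(ctx);
        return ssl;
    }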