Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 729c338a50b452e86cd740cb9878554be4264f32
pg_upgrade has always attempted to ensure that the transient dump files
it creates are inaccessible except to the owner. However, refactoring
in commit 76a7650c4 broke that for the file containing "pg_dumpall -g"
output; since then, that file was protected according to the process's
default umask. Since that file may contain role passwords (hopefully
encrypted, but passwords nonetheless), this is a particularly unfortunate
oversight. Prudent users of pg_upgrade on multiuser systems would
probably run it under a umask tight enough that the issue is moot, but
perhaps some users are depending only on pg_upgrade's umask changes to
protect their data.
To fix this in a future-proof way, let's just tighten the umask at
process start. There are no files pg_upgrade needs to write at a
weaker security level; and if there were, transiently relaxing the
umask around where they're created would be a safer approach.
Report and patch by Tom Lane; the idea for the fix is due to Noah Misch.
Back-patch to all supported branches.
Security: CVE-2018-1053
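
A minimal sketch of the idea, not the committed patch (the dump file name is
illustrative): tightening the umask once at process start protects every file
created afterwards, regardless of later refactoring.

    #include <stdio.h>
    #include <sys/stat.h>

    int
    main(int argc, char **argv)
    {
        /*
         * Deny all group/other access up front: files created from here on
         * are readable/writable by the owner only (mode 0600 at most).
         */
        umask(S_IRWXG | S_IRWXO);

        /* Any transient dump file now inherits the tight mode. */
        FILE *fp = fopen("pg_upgrade_dump_globals.sql", "w");

        if (fp != NULL)
            fclose(fp);
        return 0;
    }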
Update the documentation for \pset to mention
columns|linestyle|pager_min_lines.
Author: Дилян Палаузов <dpa-postgres@aegee.org>
create_plan_recurse lacked any stack depth check. This is not per
our normal coding rules, but I'd supposed it was safe because earlier
planner processing is more complex and presumably should eat more
stack. But bug #15033 from Andrew Grossman shows this isn't true,
at least not for queries having the form of a many-thousand-way
INTERSECT stack.
Further testing showed that recurse_set_operations is also capable
of being crashed in this way, since it likewise will recurse to the
bottom of a parsetree before calling any support functions that
might themselves contain any stack checks. However, its stack
consumption is only perhaps a third of create_plan_recurse's.
It's possible that this particular problem with create_plan_recurse can
only manifest in 9.6 and later, since before that we didn't build a Path
tree for set operations. But having seen this example, I now have no
faith in the proposition that create_plan_recurse doesn't need a stack
check, so back-patch to all supported branches.
Discussion: https://postgr.es/m/20180127050845.28812.58244@wrigleys.postgresql.org
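
The fix is the usual one-line guard at the top of each recursive entry point;
a sketch, abbreviated from the real planner functions:

    #include "miscadmin.h"          /* declares check_stack_depth() */

    static Plan *
    create_plan_recurse(PlannerInfo *root, Path *best_path, int flags)
    {
        /* Error out cleanly instead of crashing on very deep Path trees. */
        check_stack_depth();

        /* ... existing dispatch on best_path->pathtype continues here ... */
    }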
DST law changes in Brazil, Sao Tome and Principe. Historical corrections
for Bolivia, Japan, and South Sudan. The "US/Pacific-New" zone has been
removed (it was only a link to America/Los_Angeles anyway).
If we're inside a lateral subquery, there may be no unparameterized paths
for a particular child relation of an appendrel, in which case we *must*
be able to create similarly-parameterized paths for each other child
relation, else the planner will fail with "could not devise a query plan
for the given query". This means that there are situations where we'd
better be able to reparameterize at least one path for each child.
This calls into question the assumption in reparameterize_path() that
it can just punt if it feels like it. However, the only case that is
known broken right now is where the child is itself an appendrel so that
all its paths are AppendPaths. (I think possibly I disregarded that in
the original coding on the theory that nested appendrels would get folded
together --- but that only happens *after* reparameterize_path(), so it's
not excused from handling a child AppendPath.) Given that this code's been
like this since 9.3 when LATERAL was introduced, it seems likely we'd have
heard of other cases by now if there were a larger problem.
Per report from Elvis Pranskevichus. Back-patch to 9.3.
Discussion: https://postgr.es/m/5981018.zdth1YWmNy@hammer.magicstack.net
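
A sketch of the case reparameterize_path() was missing, on the assumption that
each child can be reparameterized in turn (create_append_path's argument list
differs across branches and is abbreviated here):

    case T_AppendPath:
        {
            AppendPath *apath = (AppendPath *) path;
            List       *childpaths = NIL;
            ListCell   *lc;

            /* Reparameterize every child; punt only if some child can't be. */
            foreach(lc, apath->subpaths)
            {
                Path *spath = reparameterize_path(root, (Path *) lfirst(lc),
                                                  required_outer, loop_count);

                if (spath == NULL)
                    return NULL;
                childpaths = lappend(childpaths, spath);
            }
            return (Path *) create_append_path(rel, childpaths, required_outer);
        }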
_tocEntryRequired() expects that it can identify ACL, SECURITY LABEL,
and COMMENT TOC entries that are for large objects by seeing whether
the tag for them starts with "LARGE OBJECT ". While that works fine
for actual large objects, which are indeed tagged that way, it's
subject to false positives unless every such entry's tag starts with an
appropriate type ID. And in fact it does not work for ACLs, because
up to now we customarily tagged those entries with just the bare name
of the object. This means that an ACL for an object named
"LARGE OBJECT something" would be misclassified as data, not schema,
with undesirable results in a schema-only or data-only dump ---
although pg_upgrade seems unaffected, due to the special case for
binary-upgrade mode further down in _tocEntryRequired().
We can fix this by changing all the dumpACL calls to use the label
strings already in use for comments and security labels, which do
follow the convention of starting with an object type indicator.
Well, mostly they follow it. dumpDatabase() got it wrong, using
just the bare database name for those purposes, so that a database
named "LARGE OBJECT something" would similarly be subject to having
its comment or security label dropped or included when not wanted.
Bring that into line too. (Note that up to now, database ACLs have
not been processed by pg_dump, so that this issue doesn't affect them.)
_tocEntryRequired() itself is not free of fault: it was overly liberal
about matching object tags to "LARGE OBJECT " in binary-upgrade mode.
This looks like it is probably harmless because there would be no data
component to strip anyway in that mode, but at best it's trouble
waiting to happen, so tighten that up too.
The possible misclassification of SECURITY LABEL entries for databases is
in principle a security problem, but the opportunities for actual exploits
seem too narrow to be interesting. The other cases seem like just bugs:
since an object owner can change its ACL or comment for himself, he needn't
try to trick someone else into doing it by choosing a strange name.
This has been broken since per-large-object TOC entries were introduced
in 9.0, so back-patch to all supported branches.
Discussion: https://postgr.es/m/21714.1516553459@sss.pgh.pa.us
For no apparent reason, this function was using a 16-bit-wide inhseqno
value, rather than the correct 32-bit width which is what is stored in
the pg_inherits catalog. This becomes evident if you try to create a
table with more than 65535 parents, because this error appears:
ERROR: duplicate key value violates unique constraint «pg_inherits_relid_seqno_index»
DETAIL: Key (inhrelid, inhseqno)=(329371, 0) already exists.
Needless to say, having so many parents is an uncommon situation, which
explains why this error has never been reported despite having been
introduced with the Postgres95 1.01 sources in commit d31084e9d111:
https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/commands/creatinh.c;hb=d31084e9d111#l349
Backpatch all the way back.
David Rowley noticed this while reviewing a patch of mine.
Discussion: https://postgr.es/m/CAKJS1f8Dn7swSEhOWwzZzssW7747YB=2Hi+T7uGud40dur69-g@mail.gmail.com
Patch by: "Rader, David" <davidr@openscg.com>
If a query against an inheritance tree runs concurrently with an ALTER
TABLE that's disinheriting one of the tree members, it's possible to get
a "could not find inherited attribute" error because after obtaining lock
on the removed member, make_inh_translation_list sees that its columns
have attinhcount=0 and decides they aren't the columns it's looking for.
An ideal fix, perhaps, would avoid including such a just-removed member
table in the query at all; but there seems no way to accomplish that
without adding expensive catalog rechecks or creating a likelihood of
deadlocks. Instead, let's just drop the check on attinhcount. In this
way, a query that's included a just-disinherited child will still
succeed, which is not a completely unreasonable behavior.
This problem has existed for a long time, so back-patch to all supported
branches. Also add an isolation test verifying related behaviors.
Patch by me; the new isolation test is based on Kyotaro Horiguchi's work.
Discussion: https://postgr.es/m/20170626.174612.23936762.horiguchi.kyotaro@lab.ntt.co.jp
If we flatten a subquery whose target list contains constants or
expressions, when those output columns are used in GROUPING SET columns,
the planner was capable of doing the wrong thing by merging a pulled-up
expression into the surrounding expression during const-simplification.
Then the late processing that attempts to match subexpressions to grouping
sets would fail to match those subexpressions to grouping sets, with the
effect that they'd not go to null when expected.
To fix, wrap such subquery outputs in PlaceHolderVars, ensuring that
they preserve their separate identity throughout the planner's expression
processing. This is a bit of a band-aid, because the wrapper defeats
const-simplification even in places where it would be safe to allow.
But a nicer fix would likely be too invasive to back-patch, and the
consequences of the missed optimizations probably aren't large in most
cases.
Back-patch to 9.5 where grouping sets were introduced.
Heikki Linnakangas, with small mods and better test cases by me;
additional review by Andrew Gierth
Discussion: https://postgr.es/m/7dbdcf5c-b5a6-ef89-4958-da212fe10176@iki.fi
As src/backend/access/transam/README says, PageGetLSN may only be called
by processes holding either exclusive lock on buffer, or a shared lock
on buffer plus buffer header lock. Therefore any place that only holds
a shared buffer lock must use BufferGetLSNAtomic instead of PageGetLSN,
which internally obtains buffer header lock prior to reading the LSN.
A few callsites failed to comply with this rule. This was detected by
running all tests under a new (not committed) assertion that verifies
PageGetLSN locking contract. All but one of the callsites that failed
the assertion are fixed by this patch. Remaining callsites were
inspected manually and determined not to need any change.
The exception (unfixed callsite) is in TestForOldSnapshot, which only
has a Page argument, making it impossible to access the corresponding
Buffer from it. Fixing that seems a much larger patch that will have to
be done separately; and that's just as well, since it was only
introduced in 9.6 and other bugs are much older.
Some of these bugs are ancient; backpatch all the way back to 9.3.
Authors: Jacob Champion, Asim Praveen, Ashwin Agrawal
Reviewed-by: Michaël Paquier
Discussion: https://postgr.es/m/CABAq_6GXgQDVu3u12mK9O5Xt5abBZWQ0V40LZCE+oUf95XyNFg@mail.gmail.com
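
The rule reduces to a simple pattern; a sketch of both sides of it:

    /* With only a shared content lock, go through the buffer header. */
    LockBuffer(buf, BUFFER_LOCK_SHARE);
    lsn = BufferGetLSNAtomic(buf);          /* takes buffer header lock */
    LockBuffer(buf, BUFFER_LOCK_UNLOCK);

    /* With an exclusive content lock, reading the page directly is safe. */
    LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);
    lsn = PageGetLSN(BufferGetPage(buf));
    LockBuffer(buf, BUFFER_LOCK_UNLOCK);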
Backpatch-through: 9.4 (9.3 didn't need improving)
Logical decoding's reorderbuffer.c may spill transaction files to disk
when transactions are large. These are supposed to be removed when they
become "too old" by xid; but file removal requires the boundary LSNs of
the transaction to be known. The final_lsn is only set when we see the
commit or abort record for the transaction, but nothing sets the value
for transactions that crash, so the removal code misbehaves -- in
assertion-enabled builds, it crashes by a failed assertion.
To fix, modify the final_lsn of transactions that don't have a value
set, to the LSN of the very latest change in the transaction. This
causes the spilled files to be removed appropriately.
Author: Atsushi Torikoshi
Reviewed-by: Kyotaro HORIGUCHI, Craig Ringer, Masahiko Sawada
Discussion: https://postgr.es/m/54e4e488-186b-a056-6628-50628e4e4ebc@lab.ntt.co.jp
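
A sketch of the repair at cleanup time, simplified from reorderbuffer.c:

    /*
     * No commit/abort record was ever seen (the transaction crashed), so
     * final_lsn was never set; fall back to the last change's LSN so the
     * spill files can be located and removed.
     */
    if (txn->final_lsn == InvalidXLogRecPtr && !dlist_is_empty(&txn->changes))
    {
        ReorderBufferChange *last =
            dlist_tail_element(ReorderBufferChange, node, &txn->changes);

        txn->final_lsn = last->lsn;
    }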
Upcoming versions of glibc will contain copy_file_range(2), a wrapper
around a new Linux syscall for in-kernel copying of data ranges. This
conflicts with pg_rewind's function of the same name.
Therefore rename pg_rewind's version. As our version isn't a generic
copying facility, we decided to choose a rewind-specific function name.
Per buildfarm animal caiman and subsequent discussion with Tom Lane.
Author: Andres Freund
Discussion:
https://postgr.es/m/20180103033425.w7jkljth3e26sduc@alap3.anarazel.de
https://postgr.es/m/31122.1514951044@sss.pgh.pa.us
Backpatch: 9.5-, where pg_rewind was introduced
Commit 614350a3 allowed for different builds of the OpenSSL libraries on
Windows, but ignored the fact that the alternative builds don't have
config-specific libraries. This patch fixes the Solution file to ask for
the correct libraries.
Per offline discussions with Leonardo Cecchi and Marco Nenciarini.
Backpatch to all live branches.
XactLockTableWait assumed that its xid argument had already added itself
to the lock table. That assumption led to another assumption: that if
locking the xid has succeeded but the xid is reported as still in
progress, then the input xid must have been a subtransaction.
These assumptions hold true for the original uses of this code in
locking related to on-disk tuples, but they break down in logical
replication slot snapshot building -- in particular, when a logged
standby snapshot contains an xid that's already in ProcArray but not yet
in the lock table. This leads to assertion failures that can be
reproduced all the way back to 9.4, when logical decoding was
introduced.
To fix, change SubTransGetParent to SubTransGetTopmostTransaction which
has a slightly different API: it returns the argument Xid if there is no
parent, and it goes all the way to the top instead of moving up the
levels one by one. Also, to avoid busy-waiting, add a 1ms sleep to give
the other process time to register itself in the lock table.
For consistency, change ConditionalXactLockTableWait the same way.
Author: Petr Jelínek
Discussion: https://postgr.es/m/1B3E32D8-FCF4-40B4-AEF9-5C0E3AC57969@postgrespro.ru
Reported-by: Konstantin Knizhnik
Diagnosed-by: Stas Kelvich, Petr Jelínek
Reviewed-by: Andres Freund, Robert Haas
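
In outline, the reworked wait loop looks like this (a sketch; details vary
by branch):

    LOCKTAG     tag;
    bool        first = true;

    for (;;)
    {
        SET_LOCKTAG_TRANSACTION(tag, xid);
        (void) LockAcquire(&tag, ShareLock, false, false);
        LockRelease(&tag, ShareLock, false);

        if (!TransactionIdIsInProgress(xid))
            break;

        /*
         * Still in progress despite the lock being free: either a
         * subtransaction, or a transaction that hasn't registered itself
         * in the lock table yet.  Climb to the topmost xid (the argument
         * comes back unchanged if there is no parent), sleeping 1ms
         * between retries to avoid busy-waiting.
         */
        if (!first)
            pg_usleep(1000L);
        first = false;
        xid = SubTransGetTopmostTransaction(xid);
    }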
Multiple sessions doing CREATE INDEX CONCURRENTLY simultaneously are
supposed to be able to work in parallel, as evidenced by fixes in commit
c3d09b3bd23f specifically to support this case. In reality, one of the
sessions would be aborted by a mysterious "deadlock detected" error.
Jeff Janes diagnosed that this is because of leftover snapshots used for
system catalog scans -- this was broken by 8aa3e47510b9 keeping track of
(registering) the catalog snapshot. To fix the deadlocks, it's enough
to de-register that snapshot prior to waiting.
Backpatch to 9.4, which introduced MVCC catalog scans.
Include an isolationtester spec that 8 out of 10 times reproduces the
deadlock with the unpatched code for me (Álvaro).
Author: Jeff Janes
Diagnosed-by: Jeff Janes
Reported-by: Jeremy Finzel
Discussion: https://postgr.es/m/CAMa1XUhHjCv8Qkx0WOr1Mpm_R4qxN26EibwCrj0Oor2YBUFUTg%40mail.gmail.com
Since 9.4, we've allowed the syntax "select union select" and variants
of that. However, the planner wasn't expecting a no-column set operation
and ended up treating the set operation as if it were UNION ALL.
Pre-v10, there seem to be some executor issues that would need to be
fixed to support such cases, and it doesn't really seem worth expending
much effort on. Just disallow it, instead.
Per report from Victor Yegorov.
Discussion: https://postgr.es/m/CAGnEbojGJrRSOgJwNGM7JSJZpVAf8xXcVPbVrGdhbVEHZ-BUMw@mail.gmail.com
The previous commit has shown that the sanity checks around freezing
aren't strong enough. Strengthening them seems especially important
because the existence of the bug has caused corruption that we don't
want to make even worse during future vacuum cycles.
The errors are emitted with ereport rather than elog, despite being
"should never happen" messages, so a proper error code is emitted. To
avoid superfluous translations, mark the messages as internal.
Author: Andres Freund and Alvaro Herrera
Reviewed-By: Alvaro Herrera, Michael Paquier
Discussion: https://postgr.es/m/20171102112019.33wb7g5wp4zpjelu@alap3.anarazel.de
Backpatch: 9.3-
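
The reporting pattern described above, sketched with an illustrative check
and message:

    if (!TransactionIdIsValid(xid) ||
        TransactionIdPrecedes(xid, relfrozenxid))
        ereport(ERROR,
                (errcode(ERRCODE_DATA_CORRUPTED),
                 /* errmsg_internal() exempts the text from translation */
                 errmsg_internal("found xmin %u from before relfrozenxid %u",
                                 xid, relfrozenxid)));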
Previously it was possible that a tuple was not pruned during vacuum,
even though its update xmax (i.e. the updating xid in a multixact with
both key-share lockers and an updater) was below the cutoff horizon.
As the freezing code assumed, rightly so, that that's not supposed to
happen, xmax would be preserved (as a member of a new multixact or as
xmax directly). That causes two problems: For one, the tuple is below
the xmin horizon, which can cause problems if the clog is truncated or
once there's an xid wraparound. The bigger problem is that this will
break HOT chains, which in turn can lead to two breakages: First,
failing index lookups, which can e.g. lead to constraints being
violated. Second, future hot prunes / vacuums can end up making
invisible tuples visible again. There are other harmful scenarios.
Fix the problem by recognizing that tuples can be DEAD instead of
RECENTLY_DEAD, even if the multixactid has alive members, if the
update_xid is below the xmin horizon. That's safe because newer
versions of the tuple will contain the locking xids.
A followup commit will harden the code somewhat against future similar
bugs and already corrupted data.
Author: Andres Freund, with changes by Alvaro Herrera
Reported-By: Daniel Wood
Analyzed-By: Andres Freund, Alvaro Herrera, Robert Haas, Peter
Geoghegan, Daniel Wood, Yi Wen Wong, Michael Paquier
Reviewed-By: Alvaro Herrera, Robert Haas, Michael Paquier
Discussion:
https://postgr.es/m/E5711E62-8FDF-4DCA-A888-C200BF6B5742@amazon.com
https://postgr.es/m/20171102112019.33wb7g5wp4zpjelu@alap3.anarazel.de
Backpatch: 9.3-
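
A sketch of the changed determination, assuming the surrounding
visibility-check context of HeapTupleSatisfiesVacuum() (tuple is the
HeapTupleHeader whose xmax is a multixact containing an update):

    TransactionId update_xid = HeapTupleGetUpdateXid(tuple);

    /*
     * Even if some locker members are still alive, the tuple is dead to
     * everyone once the updating xid is below the horizon; newer tuple
     * versions carry the locking xids, so this is safe.
     */
    if (TransactionIdPrecedes(update_xid, OldestXmin))
        return HEAPTUPLE_DEAD;
    return HEAPTUPLE_RECENTLY_DEAD;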
The logical slots have a fast code path for sending data so as not to
impose too high a per-message overhead. The fast path skips checks for
interrupts and timeouts. However, the existing coding failed to consider
the fact that a transaction with a large number of changes may take a
very long time to be processed and sent to the client. This causes the
walsender to ignore interrupts for potentially a long time and more
importantly it will result in the walsender being killed due to
timeout at the end of such a transaction.
This commit changes the fast path to also check for interrupts and only
allows calling the fast path when the last keepalive check happened less
than half the walsender timeout ago. Otherwise the slower code path will
be taken.
Backpatched to 9.4
Petr Jelinek, reviewed by Kyotaro HORIGUCHI, Yura Sokolov, Craig
Ringer and Robert Haas.
Discussion: https://postgr.es/m/e082a56a-fd95-a250-3bae-0fff93832510@2ndquadrant.com
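
The gate on the fast path becomes, in outline (a sketch using walsender.c's
variable names):

    /*
     * Take the fast path only if no data is pending and the last
     * keepalive exchange was less than half the timeout ago; otherwise
     * fall through to the slow path, which handles interrupts, replies,
     * and keepalives.
     */
    if (!pq_is_send_pending() &&
        !TimestampDifferenceExceeds(last_reply_timestamp,
                                    GetCurrentTimestamp(),
                                    wal_sender_timeout / 2))
        return;                     /* fast path */

    CHECK_FOR_INTERRUPTS();
    /* ... ProcessRepliesIfAny(), WalSndKeepaliveIfNecessary(), ... */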
I noticed that _SPI_execute_plan initially sets spierrcontext.arg = NULL,
and only fills it in some time later. If an error were to happen in
between, _SPI_error_callback would try to dereference the null pointer.
This is unlikely --- there's not much between those points except
push-snapshot calls --- but it's clearly not impossible. Tweak the
callback to do nothing if the pointer isn't set yet.
It's been like this for awhile, so back-patch to all supported branches.
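
The tweak amounts to an early exit in the callback; a sketch matching the
description above:

    static void
    _SPI_error_callback(void *arg)
    {
        const char *query = (const char *) arg;

        /* Nothing to report if the argument hasn't been filled in yet. */
        if (query == NULL)
            return;

        errcontext("SQL statement \"%s\"", query);
    }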
Notably, this permits linking to the 32-bit Perl binaries advertised on
perl.org, namely Strawberry Perl and ActivePerl. This has a side effect
of permitting linking to binaries built with obsolete MSVC versions.
By default, MSVC 2012 and later require a "safe exception handler table"
in each binary. MinGW-built, 32-bit DLLs lack the relevant exception
handler metadata, so linking to them failed with error LNK2026. Restore
the semantics of MSVC 2010, which omits the table from a given binary if
some linker input lacks metadata. This has no effect on 64-bit builds
or on MSVC 2010 and earlier. Back-patch to 9.3 (all supported
versions).
Reported by Victor Wagner.
Discussion: https://postgr.es/m/20160326154321.7754ab8f@wagner.wagner.home
Commits 5a5c2feca3fd858e70ea348822595547e6fa6c15 and
b5178c5d08ca59e30f9d9428fa6fdb2741794e65 introduced support for modern
MSVC-built, 32-bit Perl, but they broke use of MinGW-built, 32-bit Perl
distributions like Strawberry Perl and modern ActivePerl. Perl has no
robust means to report whether it expects a -D_USE_32BIT_TIME_T ABI, so
test this. Back-patch to 9.3 (all supported versions).
The chief alternative was a heuristic of adding -D_USE_32BIT_TIME_T when
$Config{gccversion} is nonempty. That banks on every gcc-built Perl
using the same ABI. gcc could change its default ABI the way MSVC once
did, and one could build Perl with gcc and the non-default ABI.
The GNU make build system could benefit from a similar test, without
which it does not support MSVC-built Perl. For now, just add a comment.
Most users taking the special step of building Perl with MSVC probably
build PostgreSQL with MSVC.
Discussion: https://postgr.es/m/20171130041441.GA3161526@rfd.leadboat.com
This simplifies back-patch of the next change to v9.5 and v9.6.
Reported-by: Masahiko Sawada <sawada.mshk@gmail.com>
When a worker is flagged as BGW_NEVER_RESTART and we fail to start it,
or if it is not marked BGW_NEVER_RESTART but is terminated before
startup succeeds, what BgwHandleStatus should be reported? The
previous code really hadn't considered this possibility (as indicated
by the comments which ignore it completely) and would typically return
BGWH_NOT_YET_STARTED, but that's not a good answer, because then
there's no way for code using GetBackgroundWorkerPid() to tell the
difference between a worker that has not started but will start
later and a worker that has not started and will never be started.
So, when this case happens, return BGWH_STOPPED instead. Update the
comments to reflect this.
The preceding fix by itself is insufficient to fix the problem,
because the old code also didn't send a notification to the process
identified in bgw_notify_pid when startup failed. That might've
been technically correct under the theory that the status of the
worker was BGWH_NOT_YET_STARTED, because the status would indeed not
change when the worker failed to start, but now that we're more
usefully reporting BGWH_STOPPED, a notification is needed.
Without these fixes, code which starts background workers and then
uses the recommended APIs to wait for those background workers to
start would hang indefinitely if the postmaster failed to fork a
worker.
Amit Kapila and Robert Haas
Discussion: http://postgr.es/m/CAA4eK1KDfKkvrjxsKJi3WPyceVi3dH1VCkbTJji2fuwKuB=3uw@mail.gmail.com
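
From a caller's perspective, the return values now distinguish all the
cases; a sketch:

    pid_t           pid;
    BgwHandleStatus status = GetBackgroundWorkerPid(handle, &pid);

    switch (status)
    {
        case BGWH_STARTED:          /* running; pid is valid */
            break;
        case BGWH_NOT_YET_STARTED:  /* postmaster will still try to start it */
            break;
        case BGWH_STOPPED:          /* exited -- or, with this fix, it failed
                                     * to start and never will */
            break;
        case BGWH_POSTMASTER_DIED:  /* no one left to start it */
            break;
    }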
This makes life easier for extension authors who wish to support
Windows.
Brian Cloutier, slightly amended by me.
Discussion: http://postgr.es/m/CAJCy68fscdNhmzFPS4kyO00CADkvXvEa-28H-OtENk-pa2OTWw@mail.gmail.com
This patch fixes a couple of low-probability bugs that could lead to
reporting an irrelevant errno value (and hence possibly a wrong SQLSTATE)
concerning directory-open or file-open failures. It also fixes places
where we took shortcuts in reporting such errors, either by using elog
instead of ereport or by using ereport but forgetting to specify an
errcode. And it eliminates a lot of just plain redundant error-handling
code.
In service of all this, export fd.c's formerly-static function
ReadDirExtended, so that external callers can make use of the coding
pattern
    dir = AllocateDir(path);
    while ((de = ReadDirExtended(dir, path, LOG)) != NULL)
if they'd like to treat directory-open failures as mere LOG conditions
rather than errors. Also fix FreeDir to be a no-op if we reach it
with dir == NULL, as such a coding pattern would cause.
Then, remove code at many call sites that was throwing an error or log
message for AllocateDir failure, as ReadDir or ReadDirExtended can handle
that job just fine. Aside from being a net code savings, this gets rid of
a lot of not-quite-up-to-snuff reports, as mentioned above. (In some
places these changes result in replacing a custom error message such as
"could not open tablespace directory" with more generic wording "could not
open directory", but it was agreed that the custom wording buys little as
long as we report the directory name.) In some other call sites where we
can't just remove code, change the error reports to be fully
project-style-compliant.
Also reorder code in restoreTwoPhaseData that was acquiring a lock
between AllocateDir and ReadDir; in the unlikely but surely not
impossible case that LWLockAcquire changes errno, AllocateDir failures
would be misreported. There is no great value in opening the directory
before acquiring TwoPhaseStateLock, so just do it in the other order.
Also fix CheckXLogRemoved to guarantee that it preserves errno,
as quite a number of call sites are implicitly assuming. (Again,
it's unlikely but I think not impossible that errno could change
during a SpinLockAcquire. If so, this function was broken for its
own purposes as well as breaking callers.)
And change a few places that were using not-per-project-style messages,
such as "could not read directory" when "could not open directory" is
more correct.
Back-patch the exporting of ReadDirExtended, in case we have occasion
to back-patch some fix that makes use of it; it's not needed right now
but surely making it global is pretty harmless. Also back-patch the
restoreTwoPhaseData and CheckXLogRemoved fixes. The rest of this is
essentially cosmetic and need not get back-patched.
Michael Paquier, with a bit of additional work by me
Discussion: https://postgr.es/m/CAB7nPqRpOCxjiirHmebEFhXVTK7V5Jvw4bz82p7Oimtsm3TyZA@mail.gmail.com
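
Spelled out, the tolerant directory-scan pattern is (a sketch):

    DIR        *dir;
    struct dirent *de;

    dir = AllocateDir(path);        /* doesn't throw even if the open failed */
    while ((de = ReadDirExtended(dir, path, LOG)) != NULL)
    {
        /* ... process de->d_name ... */
    }
    FreeDir(dir);                   /* no-op when dir == NULL */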
Invoking the Makefile without an explicit target was building every
possible target instead of just the "all" target. Back-patch to 9.3
(all supported versions).
rewriteTargetListUD's processing is dependent on the relkind of the query's
target table. That was fine at the time it was made to act that way, even
for queries on inheritance trees, because all tables in an inheritance tree
would necessarily be plain tables. However, the 9.5 feature addition
allowing some members of an inheritance tree to be foreign tables broke the
assumption that rewriteTargetListUD's output tlist could be applied to all
child tables with nothing more than column-number mapping. This led to
visible failures if foreign child tables had row-level triggers, and would
also break in cases where child tables belonged to FDWs that used methods
other than CTID for row identification.
To fix, delay running rewriteTargetListUD until after the planner has
expanded inheritance, so that it is applied separately to the (already
mapped) tlist for each child table. We can conveniently call it from
preprocess_targetlist. Refactor associated code slightly to avoid the
need to heap_open the target relation multiple times during
preprocess_targetlist. (The APIs remain a bit ugly, particularly around
the point of which steps scribble on parse->targetList and which don't.
But avoiding such scribbling would require a change in FDW callback APIs,
which is more pain than it's worth.)
Also fix ExecModifyTable to ensure that "tupleid" is reset to NULL when
we transition from rows providing a CTID to rows that don't. (That's
really an independent bug, but it manifests in much the same cases.)
Add a regression test checking one manifestation of this problem, which
was that row-level triggers on a foreign child table did not work right.
Back-patch to 9.5 where the problem was introduced.
Etsuro Fujita, reviewed by Ildus Kurbangaliev and Ashutosh Bapat
Discussion: https://postgr.es/m/20170514150525.0346ba72@postgrespro.ru
Andreas Karlsson
Originally, we palloc'd this buffer just barely big enough to hold the
largest xlog record seen so far. It turns out that that can result in
valgrind complaints, because some compilers will emit code that assumes
it can safely fetch padding bytes at the end of a struct, and those
padding bytes were unallocated so far as aset.c was concerned. We can
fix that by MAXALIGN'ing the palloc request size, ensuring that it is big
enough to include any possible padding that might've been omitted from
the on-disk record.
An additional objection to the original coding is that it could result in
many repeated palloc cycles, in the worst case where we see a series of
gradually larger xlog records. We can ameliorate that cheaply by
imposing a minimum buffer size that's large enough for most xlog records.
BLCKSZ/2 was chosen after a bit of discussion.
In passing, remove an obsolete comment in struct xl_heap_new_cid that the
combocid field is free due to alignment considerations. Perhaps that was
true at some point, but it's not now.
Back-patch to 9.5 where this code came in.
Discussion: https://postgr.es/m/E1eHa4J-0006hI-Q8@gemulon.postgresql.org
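
A sketch of the adjusted sizing rule (variable names are illustrative):

    /*
     * Round up so that any trailing padding a compiler may fetch is
     * actually allocated, and don't bother with anything smaller than
     * half a block, to limit repalloc cycles on growing record sizes.
     */
    newSize = MAXALIGN(record_len);
    newSize = Max(newSize, BLCKSZ / 2);
    readRecordBuf = (char *) palloc(newSize);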
The various has_*_privilege() functions all support an optional
WITH GRANT OPTION added to the supported privilege types to test
whether the privilege is held with grant option. That is, all except
has_sequence_privilege() variations. Fix that.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/005147f6-8280-42e9-5a03-dd2c1e4397ef@joeconway.com
Missed this dependency in commits 7cce222c9 et al.
Traditionally IANA has distributed their timezone data in pure source
form, replete with extensive historical comments. As of release 2017c,
they've added a compact single-file format that omits comments and
abbreviates command keywords. This form is way shorter than the pure
source, even before considering its allegedly better compressibility.
Hence, let's distribute the data in that form rather than pure source.
I'm pushing this now, rather than at the next timezone database update,
so that it's easy to confirm that this data file produces compiled zic
output that's identical to what we were getting before.
Discussion: https://postgr.es/m/1915.1511210334@sss.pgh.pa.us
When nodeValuesscan.c was written, it was impossible to have a SubPlan in
VALUES --- any sub-SELECT there would have to be uncorrelated and thereby
would produce an InitPlan instead. We therefore took a shortcut in the
logic that throws away a ValuesScan's per-row expression evaluation data
structures. This was broken by the introduction of LATERAL however; a
sub-SELECT containing a lateral reference produces a correlated SubPlan.
The cleanest fix for this would be to give up the optimization of
discarding the expression eval state. But that still seems pretty
unappetizing for long VALUES lists. It seems to work to just prevent
the subexpressions from hooking into the ValuesScan node's subPlan
list, so let's do that and see how well it works. (If this breaks,
due to additional connections between the subexpressions and the outer
query structures, we might consider compromises like throwing away data
only for VALUES rows not containing SubPlans.)
Per bug #14924 from Christian Duta. Back-patch to 9.3 where LATERAL
was introduced.
Discussion: https://postgr.es/m/20171124120836.1463.5310@wrigleys.postgresql.org
This is necessary for ActivePerl 5.18 onwards and for Strawberry Perl.
It is not sufficient for 32-bit builds with newer Visual Studio; these
fail with error LNK2026. Back-patch to 9.3 (all supported versions).
Reported by Victor Wagner.
Discussion: https://postgr.es/m/20160326154321.7754ab8f@wagner.wagner.home
Previously, any attempt to request a 3.x protocol version other than
3.0 would lead to a hard connection failure, which made the minor
protocol version really no different from the major protocol version
and precluded gentle protocol version breaks. Instead, when the
client requests a 3.x protocol version where x is greater than 0, send
the new NegotiateProtocolVersion message to convey that we support
only 3.0. This makes it possible to introduce new minor protocol
versions without requiring a connection retry when the server is
older.
In addition, if the startup packet includes name/value pairs where
the name starts with "_pq_.", assume that those are protocol options,
not GUCs. Include those we don't support (i.e. all of them, at
present) in the NegotiateProtocolVersion message so that the client
knows they were not understood. This makes it possible for the
client to request previously-unsupported features without bumping
the protocol version at all; the client can tell from the server's
response whether the option was understood.
It will take some time before servers that support these new
facilities become common in the wild; to speed things up and make
things easier for a future 3.1 protocol version, back-patch to all
supported releases.
Robert Haas and Badrul Chowdhury
Discussion: http://postgr.es/m/BN6PR21MB0772FFA0CBD298B76017744CD1730@BN6PR21MB0772.namprd21.prod.outlook.com
Discussion: http://postgr.es/m/30788.1498672033@sss.pgh.pa.us
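
On the server side, startup-packet handling conceptually becomes (a sketch;
the real code is in ProcessStartupPacket):

    if (strncmp(nameptr, "_pq_.", 5) == 0)
    {
        /*
         * Reserved protocol-level option.  None are defined yet, so just
         * remember the name; it is reported back to the client in the
         * NegotiateProtocolVersion message.
         */
        unrecognized_protocol_options =
            lappend(unrecognized_protocol_options, pstrdup(nameptr));
    }
    else
    {
        /* an ordinary GUC name/value pair, handled as before */
    }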
David Carlier (from a patch being carried by OpenBSD packagers)
Discussion: https://postgr.es/m/CA+XhMqzwFSGVU7MEnfhCecc8YdP98tigXzzpd0AAdwaGwaVXEA@mail.gmail.com
Apparently there are still people out there who care about this old
architecture. They probably care about dusty versions of Postgres
too, so back-patch to all supported branches.
David Carlier (from a patch being carried by OpenBSD packagers)
Discussion: https://postgr.es/m/CA+XhMqzwFSGVU7MEnfhCecc8YdP98tigXzzpd0AAdwaGwaVXEA@mail.gmail.com
Our initial work with int128 neglected alignment considerations, an
oversight that came back to bite us in bug #14897 from Vincent Lachenal.
It is unsurprising that int128 might have a 16-byte alignment requirement;
what's slightly more surprising is that even notoriously lax Intel chips
sometimes enforce that.
Raising MAXALIGN seems out of the question: the costs in wasted disk and
memory space would be significant, and there would also be an on-disk
compatibility break. Nor does it seem very practical to try to allow some
data structures to have more-than-MAXALIGN alignment requirement, as we'd
have to push knowledge of that throughout various code that copies data
structures around.
The only way out of the box is to make type int128 conform to the system's
alignment assumptions. Fortunately, gcc supports that via its
__attribute__(aligned()) pragma; and since we don't currently support
int128 on non-gcc-workalike compilers, we shouldn't be losing any platform
support this way.
Although we could have just done pg_attribute_aligned(MAXIMUM_ALIGNOF) and
called it a day, I did a little bit of extra work to make the code more
portable than that: it will also support int128 on compilers without
__attribute__(aligned()), if the native alignment of their 128-bit-int
type is no more than that of int64.
Add a regression test case that exercises the one known instance of the
problem, in parallel aggregation over a bigint column.
Back-patch of commit 751804998. The code known to be affected only exists
in 9.6 and later, but we do have some stuff using int128 in 9.5, so patch
back to 9.5.
Discussion: https://postgr.es/m/20171110185747.31519.28038@wrigleys.postgresql.org
Generalize section 1 to handle stuff that is principally about the
compiler (not libraries), such as attributes, and collect stuff there
that had been dropped into various other parts of c.h. Also, push
all the gettext macros into section 8, so that section 0 is really
just inclusions rather than inclusions and random other stuff.
The primary goal here is to get pg_attribute_aligned() defined before
section 3, so that we can use it with int128. But this seems like good
cleanup anyway.
This patch just moves macro definitions around, and shouldn't result
in any changes in generated code.
Back-patch of commit 91aec93e6.
Discussion: https://postgr.es/m/20171110185747.31519.28038@wrigleys.postgresql.org
Also, add a warning to catch future instances of naming a nonexistent
file as a prerequisite. Back-patch to 9.3 (all supported versions).
Apart from calling write_stderr() on failure, the handler depends on no
PostgreSQL facilities. We have experienced crashes before reaching the
former call site. Given such an early crash, this change cannot hurt
and may produce a helpful dump. Absent an early crash, this change has
no effect. Back-patch to 9.3 (all supported versions).
Takayuki Tsunakawa
Discussion: https://postgr.es/m/0A3221C70F24FB45833433255569204D1F80CD13@G01JPEXMBYT05
PostgreSQL running as a Windows service crashed upon calling
write_stderr() before MemoryContextInit(). This fix completes work
started in 5735efee15540765315aa8c1a230575e756037f7. Messages this
early contain only ASCII bytes; if we removed the CurrentMemoryContext
requirement, the ensuing conversions would have no effect. Back-patch
to 9.3 (all supported versions).
Takayuki Tsunakawa, reviewed by Michael Paquier.
Discussion: https://postgr.es/m/0A3221C70F24FB45833433255569204D1F80CC73@G01JPEXMBYT05
This suite had been a proper superset of the regular ecpg test suite,
but the three newest tests didn't reach it. To make this less likely to
recur, delete the extra schedule file and pass the TCP-specific test on
the command line. Back-patch to 9.3 (all supported versions).
Since commit 868898739a8da9ab74c105b8349b7b5c711f265a, it has assumed
"localhost" resolves to both ::1 and 127.0.0.1. We gain nothing from
that assumption, and it does not hold in a default installation of Red
Hat Enterprise Linux 5. Back-patch to 9.3 (all supported versions).
|