The code took the LOCALE option as the default/fallback for
ICU_LOCALE, but this was neither documented nor intended, so remove
it. (It was probably left in from an earlier patch version.)
Reported-by: Marina Polyakova <m.polyakova@postgrespro.ru>
Discussion: https://www.postgresql.org/message-id/flat/f385ba25e7f8be427b8c582e5cca7d79%40postgrespro.ru#515a31c5429d6d37ad1d5c9d66962a1e
sysctl is more portable than Linux's /proc/sys file tree, and
often easier to use too. That's why most of our docs refer to
sysctl when talking about how to adjust kernel parameters.
Bring the few stragglers into line.
Discussion: https://postgr.es/m/361175.1661187463@sss.pgh.pa.us
While decoding changes in a loop, if we skip all the changes there is
no CHECK_FOR_INTERRUPTS() call, making the loop uninterruptible.
Reported-by: Whale Song and Andrey Borodin
Bug: 17580
Author: Masahiko Sawada
Reviewed-by: Amit Kapila
Backpatch-through: 10
Discussion: https://postgr.es/m/17580-849c1d5b6d7eb422@postgresql.org
Discussion: https://postgr.es/m/B319ECD6-9A28-4CDF-A8F4-3591E0BF2369@yandex-team.ru
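
In sketch form, the fix follows the standard pattern for keeping a
skip-heavy loop interruptible (helper names here are illustrative, not
the actual decoding code):

    #include "postgres.h"
    #include "miscadmin.h"          /* CHECK_FOR_INTERRUPTS() */

    while ((change = next_change(iter)) != NULL)
    {
        /* Reached even when every change is skipped below. */
        CHECK_FOR_INTERRUPTS();

        if (should_skip(change))
            continue;               /* previously bypassed any CFI */

        apply_change(change);
    }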
Somewhere during the development of the patch, the lock acquisition for
read access to variable-numbered stats got lost. The missing lock
acquisition won't cause corruption, but it can lead to reading torn
values when accessing stats. Add the missing lock acquisitions.
Reported-by: Greg Stark <stark@mit.edu>
Reviewed-by: "Drouvot, Bertrand" <bdrouvot@amazon.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/CAM-w4HMYkM_DkYhWtUGV+qE_rrBxKOzOF0+5faozxO3vXrc9wA@mail.gmail.com
Backpatch: 15-
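
The shape of the fix, as a minimal sketch (illustrative names; the real
code locks each shared stats entry before copying it out):

    /* Take the entry's lock for the duration of the read, so that a
     * concurrent writer cannot leave us with a torn value. */
    LWLockAcquire(&entry->lock, LW_SHARED);
    memcpy(&snap, &entry->stats, sizeof(snap));
    LWLockRelease(&entry->lock);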
An assertion would fail when creating a database with libc locale
provider from a template database with icu locale provider.
Reported-by: Marina Polyakova <m.polyakova@postgrespro.ru>
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/f385ba25e7f8be427b8c582e5cca7d79%40postgrespro.ru#515a31c5429d6d37ad1d5c9d66962a1e
As such, the current usage of & won't produce incorrect results, but it
would be better to use && to short-circuit the evaluation of the second
condition when it is not required.
Author: Ranier Vilela
Reviewed-by: Tom Lane, Bharath Rupireddy
Backpatch-through: 15, where it was introduced
Discussion: https://postgr.es/m/CAEudQApL8QcoYwQuutkWKY_h7gBY8F0Xs34YKfc7-G0i83K_pw@mail.gmail.com
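
The difference in sketch form (cheap_check and expensive_check are
hypothetical):

    /* Bitwise &: both operands are always evaluated. */
    if (cheap_check(x) & expensive_check(x))
        do_work(x);

    /* Logical &&: expensive_check() runs only if cheap_check() is true. */
    if (cheap_check(x) && expensive_check(x))
        do_work(x);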
Compiling with -Wshadow=compatible-local yields quite a few warnings about
local variables being shadowed by compatible local variables in an inner
scope. Of course, this is perfectly valid in C, but we have had bugs in
the past as a result of developers failing to notice this. af7d270dd is a
recent example.
Here we do a cleanup of warnings we receive from -Wshadow=compatible-local
for code which is new to PostgreSQL 15. We've yet to have the discussion
about whether we actually ever want to run that as a standard compilation
flag. We'll need to at least get the number of warnings down to something
easier to manage before we can realistically consider if we want this or
not. This commit is the first step towards reducing the warnings.
The changes being made here are all fairly trivial. Because of that, and
the fact that v15 is still in beta, this is being back-patched into 15.
It seems riskier not to do this, since skipping the back-patch would
increase the chance of conflicts for any future bug fixes touching the
same areas as this commit.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20220817145434.GC26426%40telsasoft.com
Backpatch-through: 15
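
A minimal example of the class of code the flag warns about (not taken
from the patch itself):

    static int
    total(int **rows, int nrows, int ncols)
    {
        int     sum = 0;

        for (int i = 0; i < nrows; i++)
        {
            int     sum = 0;    /* shadows the outer "sum"; the flag warns here */

            for (int j = 0; j < ncols; j++)
                sum += rows[i][j];
            /* the per-row sum is silently discarded */
        }

        return sum;             /* always 0: a classic shadowing bug */
    }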
Consistently avoid trusting a sample of only one page at the point that
VACUUM determines a new reltuples for the target table (though only when
the table is larger than a single page). This is follow-up work to
commit 74388a1a, which added a heuristic to prevent reltuples from
becoming distorted by successive VACUUM operations that each scan only a
single heap page (which was itself more or less a bugfix for an issue in
commit 44fa8488, which simplified VACUUM's handling of scanned pages).
The original bugfix commit did not account for certain remaining cases
that were not affected by its "2% of total relpages" heuristic. This
happened with relations that are small enough that just one of their
pages exceeded the 2% threshold, yet still big enough for VACUUM to deem
skipping most of their pages via the visibility map worthwhile. reltuples
could still become distorted over time with such a table, at least in
scenarios where the VACUUM command is run repeatedly and without the
table itself ever changing.
Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-Wzk7d4m3oEbEWkWQKd+gz-eD_peBvdXVk1a_KBygXadFeg@mail.gmail.com
Backpatch: 15-, where the rules for scanned pages changed.
Initialize shared memory allocated for index stats to avoid a hard
crash. This was possible when parallel VACUUM became confused about the
current phase of index processing.
Oversight in commit 8e1fae1938, which refactored parallel VACUUM.
Author: Masahiko Sawada <sawada.mshk@gmail.com>
Reported-By: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20220818133406.GL26426@telsasoft.com
Backpatch: 15-, the first version with the refactoring commit.
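
A sketch of the defensive pattern applied (illustrative names, not the
actual parallel VACUUM code):

    /* Zero the freshly allocated shared index-stats array so that no
     * worker ever reads uninitialized memory, even if the phase
     * bookkeeping is confused. */
    indstats = (IndexBulkDeleteResult *)
        shm_toc_allocate(pcxt->toc,
                         mul_size(sizeof(IndexBulkDeleteResult), nindexes));
    memset(indstats, 0, sizeof(IndexBulkDeleteResult) * nindexes);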
When creating a partitioned index, DefineIndex tries to identify
any existing indexes on the partitions that match the partitioned
index, so that it can absorb those as child indexes instead of
building new ones. Part of the matching is to compare IndexInfo
structs --- but that wasn't done quite right. We're comparing
the IndexInfo built within DefineIndex itself to one made from
existing catalog contents by BuildIndexInfo. Notably, while
BuildIndexInfo will run index expressions and predicates through
expression preprocessing, that has not happened to DefineIndex's
struct. The result is failure to match and subsequent creation
of duplicate indexes.
The easiest and most bulletproof fix is to build a new IndexInfo
using BuildIndexInfo, thereby guaranteeing that the processing done
is identical.
While here, let's also extract the opfamily and collation data
from the new partitioned index, removing ad-hoc logic that
duplicated knowledge about how those are constructed.
Per report from Christophe Pettus. Back-patch to v11 where
we invented partitioned indexes.
Richard Guo and Tom Lane
Discussion: https://postgr.es/m/8864BFAA-81FD-4BF9-8E06-7DEB8D4164ED@thebuild.com
This should improve performance, and was suggested by Andres Freund.
Back-patch to v15 to keep the code consistent across branches.
Dilip Kumar
Discussion: http://postgr.es/m/C3458199-FEDD-4356-865A-08DFAA5D4065@anarazel.de
Discussion: http://postgr.es/m/CAFiTN-sJ0vVpJrZ=R5M+g7Tr8=NN4wKOtrqOcDEsfFfnZgivVA@mail.gmail.com
The table column that stores this is of type oid, but is actually limited
to uint16 and has a different path for creating new values. Some of
the documentation already referred to it as an ID, so let's standardize
on that.
While at it, most format strings already use %u, so for consistency
change the remaining stragglers that use %d.
Per suggestions from Tom Lane and Justin Pryzby
Discussion: https://www.postgresql.org/message-id/3437166.1659620465%40sss.pgh.pa.us
Backpatch to v15
The assert, introduced by 9f1cf97bb5, is intended to check if the prefix
is terminated by a \0 byte, but it has two flaws. Firstly, prefix_size
includes the \0 byte, so prefix[prefix_size] points to the byte after
the null byte. Secondly, the check ensures the byte is not equal to \0,
while it should be checking the opposite.
Backpatch-through: 14
Discussion: https://postgr.es/m/b99b6101-2f14-3796-3dfa-4a6cd7d4326d@enterprisedb.com
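
In sketch form, the broken assertion and the intended check:

    /* Broken: indexes one byte past the terminator (prefix_size already
     * counts the \0), and tests the opposite of what was meant. */
    Assert(prefix[prefix_size] != '\0');

    /* Intended: the last byte of the prefix is the terminator. */
    Assert(prefix[prefix_size - 1] == '\0');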
The current publisher code checks if UPDATE or DELETE can be executed with
the replica identity of the table even if it's a partitioned table. We can
skip checking the replica identity for partitioned tables because the
operations are actually performed on the leaf partitions (not the
partitioned table).
Reported-by: Brad Nicholson
Author: Hou Zhijie
Reviewed-by: Peter Smith, Amit Kapila
Backpatch-through: 13
Discussion: https://postgr.es/m/CAMMnM%3D8i5DohH%3DYKzV0_wYuYSYvuOJoL9F5nzXTc%2ByzsG1f6rg%40mail.gmail.com
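
The essence of the change, sketched (simplified from the actual check):

    /* UPDATE/DELETE are executed on the leaf partitions, never on the
     * partitioned table itself, so its replica identity is irrelevant. */
    if (relation->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
        return;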
This option switch supports a total of 8 values, as set_plan_disabling_options()
and the documentation indicate, but this was not reflected in the output
generated by --help.
Author: Junwang Zhao
Discussion: https://postgr.es/m/CAEG8a3+pT3cWzyjzKs184L1XMNm8NDnoJLiSjAYSO7XqpRh_vA@mail.gmail.com
Backpatch-through: 10
When enlarging the work buffers of a VarStringSortSupport object,
varstrfastcmp_locale was careful to keep them in the ssup_cxt
memory context; but varstr_abbrev_convert just used palloc().
The latter creates a hazard that the buffers could be freed out
from under the VarStringSortSupport object, resulting in stomping
on whatever gets allocated in that memory later.
In practice, because we only use this code for ICU collations
(cf. 3df9c374e), the problem is confined to use of ICU collations.
I believe it may have been unreachable before the introduction
of incremental sort, too, as traditional sorting usually just
uses one context for the duration of the sort.
We could fix this by making the broken stanzas in varstr_abbrev_convert
match the non-broken ones in varstrfastcmp_locale. However, it seems
like a better idea to dodge the issue altogether by replacing the
pfree-and-allocate-anew coding with repalloc, which automatically
preserves the chunk's memory context. This fix does add a few cycles
because repalloc will copy the chunk's content, which the existing
coding assumes is useless. However, we don't expect that these buffer
enlargement operations are performance-critical. Besides that, it's
far from obvious that copying the buffer contents isn't required, since
these stanzas make no effort to mark the buffers invalid by resetting
last_returned, cache_blob, etc. That seems to be safe upon examination,
but it's fragile and could easily get broken in future, which wouldn't
get revealed in testing with short-to-moderate-size strings.
Per bug #17584 from James Inform. Whether or not the issue is
reachable in the older branches, this code has been broken on its
own terms from its introduction, so patch all the way back.
Discussion: https://postgr.es/m/17584-95c79b4a7d771f44@postgresql.org
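
The change, in sketch form:

    /* Before: loses the original memory context unless the caller
     * remembered to switch into ssup_cxt first. */
    pfree(sss->buf1);
    sss->buf1 = palloc(newlen);     /* allocated in CurrentMemoryContext */

    /* After: repalloc keeps the chunk in its original context (and
     * copies the old contents, which costs a few cycles but is safe). */
    sss->buf1 = repalloc(sss->buf1, newlen);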
It's possible to reach this case when work_mem is very small and tupsize
is (relatively) very large. In that case ExecChooseHashTableSize would
get an assertion failure, or with asserts off it'd compute nbuckets = 0,
which'd likely cause misbehavior later (I've not checked). To fix,
clamp the number of buckets to be at least 1.
This is due to faulty conversion of old my_log2() coding in 28d936031.
Back-patch to v13, as that was.
Zhang Mingli
Discussion: https://postgr.es/m/beb64ca0-91e2-44ac-bf4a-7ea36275ec02@Spark
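
A sketch of the fix (variable names illustrative):

    nbuckets = (int) (hash_table_bytes / bucket_bytes);
    nbuckets = Max(nbuckets, 1);            /* clamp: never zero buckets */
    nbuckets = (int) pg_nextpower2_32(nbuckets);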
Most parts of the parser can expect that the stack overflow check
in transformExprRecurse() will trigger before things get desperate.
However, transformFromClauseItem() can recurse directly to self
without having analyzed any expressions, so it's possible to drive
it to a stack-overrun crash. Add a check to prevent that.
Per bug #17583 from Egor Chindyaskin. Back-patch to all supported
branches.
Richard Guo
Discussion: https://postgr.es/m/17583-33be55b9f981f75c@postgresql.org
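
The defense, sketched with a simplified signature:

    #include "postgres.h"
    #include "miscadmin.h"          /* check_stack_depth() */

    static void
    transform_from_clause_item(Node *n)     /* hypothetical simplification */
    {
        /* This function can recurse directly to itself without going
         * through transformExprRecurse(), so it needs its own check. */
        check_stack_depth();

        /* ... recurse into nested join tree items ... */
    }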
As of 897795240cfaaed724af2f53ed2c50c9862f951f, check constraints can
be declared invalid. But that patch didn't update _outConstraint() to
also show the relevant struct fields (which were only applicable to
foreign keys before that). This currently only affects debugging
output, so no impact in practice.
If an error occurs before we close the fake relcache entry, the fake
relcache entry will be destroyed, but the SmgrRelation will survive
until end of transaction. Its smgr_owner pointer ends up pointing to
already-freed memory.
The original reason for using a fake relcache entry here was to try
to avoid reusing an SMgrRelation across a relevant invalidation. To
avoid that problem, just call smgropen() again each time we need a
reference to it. Hopefully someday we will come up with a more
elegant approach, but accessing uninitialized memory is bad so let's
do this for now.
Dilip Kumar, reviewed by Andres Freund and Tom Lane. Report by
Justin Pryzby.
Discussion: http://postgr.es/m/20220802175043.GA13682@telsasoft.com
Discussion: http://postgr.es/m/CAFiTN-vSFeE6_W9z698XNtFROOA_nSqUXWqLcG0emob_kJ+dEQ@mail.gmail.com
The grammar added for MERGE inadvertently made it accepted syntax in
places that were not prepared to deal with it -- namely COPY and inside
CTEs. Invoking these things with MERGE currently causes assertion
failures or weird misbehavior in non-assertion builds. Protect those
places by checking for it explicitly until somebody decides to implement
it.
Reported-by: Alexey Borzov <borz_off@cs.msu.su>
Discussion: https://postgr.es/m/17579-82482cd7b267b862@postgresql.org
The set of fields printed by _outConstraint() in the CONSTR_IDENTITY
case didn't match the set of fields actually used in that case. (The
code was probably carelessly copied from the CONSTR_DEFAULT case.)
Fix that by using the right set of fields. Since there is no read
support for this node type, this is really just for debugging output
right now, so it doesn't affect anything important.
Mistake introduced by 639a86e36aaecb84faaf941dcd0b183ba0aba9e9.
Previously, we relied on HEAP2_NEW_CID records and XACT_INVALIDATION
records to know if the transaction has modified the catalog, and that
information is not serialized to the snapshot. Therefore, after a restart,
if logical decoding decodes only the commit record of a transaction that
has actually modified a catalog, we will miss adding its XID to the
snapshot. Thus, we will end up looking at catalogs with the wrong
snapshot.
To fix this problem, this changes the snapshot builder so that it
remembers the last-running-xacts list of the decoded RUNNING_XACTS record
after restoring the previously serialized snapshot. Then, we mark the
transaction as containing catalog changes if it's in the list of initial
running transactions and its commit record has XACT_XINFO_HAS_INVALS. To
avoid ABI breakage, we store the array of the initial running transactions
in the static variables InitialRunningXacts and NInitialRunningXacts,
instead of storing those in SnapBuild or ReorderBuffer.
This approach can produce a false positive: we could end up adding a
transaction that didn't change the catalog to the snapshot, since we
cannot distinguish whether the transaction has catalog changes only by
checking the COMMIT record. It doesn't have the information on which
(sub)transaction has catalog changes, and XACT_XINFO_HAS_INVALS doesn't
necessarily indicate that the transaction has catalog changes. But that
won't be a problem since we use the snapshot built during decoding only
to read system catalogs.
On the master branch, we took a more future-proof approach by writing
catalog modifying transactions to the serialized snapshot which avoids the
above false positive. But we cannot backpatch it because of a change in
the SnapBuild.
Reported-by: Mike Oh
Author: Masahiko Sawada
Reviewed-by: Amit Kapila, Shi yu, Takamichi Osumi, Kyotaro Horiguchi, Bertrand Drouvot, Ahsan Hadi
Backpatch-through: 10
Discussion: https://postgr.es/m/81D0D8B0-E7C4-4999-B616-1E5004DBDCD2%40amazon.com
Reviewed by David Steele and Justin Pryzby
Discussion: http://postgr.es/m/CA+TgmoafqboATDSoXHz8VLrSwK_MDhjthK4hEpYjqf9_1Fmczw%40mail.gmail.com
fmgr_sql must make expanded-datum arguments read-only, because
it's possible that the function body will pass the argument to
more than one callee function. If one of those functions takes
the datum's R/W property as license to scribble on it, then later
callees will see an unexpected value, leading to wrong answers.
From a performance standpoint, it'd be nice to skip this in the
common case that the argument value is passed to only one callee.
However, detecting that seems fairly hard, and certainly not
something that I care to attempt in a back-patched bug fix.
Per report from Adam Mackler. This has been broken since we
invented expanded datums, so back-patch to all supported branches.
Discussion: https://postgr.es/m/WScDU5qfoZ7PB2gXwNqwGGgDPmWzz08VdydcPFLhOwUKZcdWbblbo-0Lku-qhuEiZoXJ82jpiQU4hOjOcrevYEDeoAvz6nR0IU4IHhXnaCA=@mackler.email
Discussion: https://postgr.es/m/187436.1660143060@sss.pgh.pa.us
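
A sketch of the pattern (loop and variable names illustrative; the macro
is PostgreSQL's MakeExpandedObjectReadOnly):

    /* Flatten R/W expanded-datum pointers to R/O before the function
     * body can hand the same argument to more than one callee. */
    for (int i = 0; i < nargs; i++)
        args[i] = MakeExpandedObjectReadOnly(args[i],
                                             argnull[i],
                                             get_typlen(argtypes[i]));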
Previously, if an extension script did CREATE OR REPLACE and there was
an existing object not belonging to the extension, it would overwrite
the object and adopt it into the extension. This is problematic, first
because the overwrite is probably unintentional, and second because we
didn't change the object's ownership. Thus a hostile user could create
an object in advance of an expected CREATE EXTENSION command, and would
then have ownership rights on an extension object, which could be
modified for trojan-horse-type attacks.
Hence, forbid CREATE OR REPLACE of an existing object unless it already
belongs to the extension. (Note that we've always forbidden replacing
an object that belongs to some other extension; only the behavior for
previously-free-standing objects changes here.)
For the same reason, also fail CREATE IF NOT EXISTS when there is
an existing object that doesn't belong to the extension.
Our thanks to Sven Klemm for reporting this problem.
Security: CVE-2022-2625
Source-Git-URL: ssh://git@git.postgresql.org/pgtranslation/messages.git
Source-Git-Hash: 32d3177724e332eac673a46449d46a4ee8670147
RelationCopyStorageUsingBuffer thought it could skip copying
empty pages, but of course that does not work at all, because
subsequent blocks will be out of place.
Also fix it to acquire share lock on the source buffer. It *might*
be safe to not do that, but it's not very certain, and I don't think
this code deserves any benefit of the doubt.
Dilip Kumar, per complaint from me
Discussion: https://postgr.es/m/3679800.1659654066@sss.pgh.pa.us
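
The corrected loop shape, sketched (simplified; the real code reads the
source buffer without a relcache entry):

    for (BlockNumber blkno = 0; blkno < nblocks; blkno++)
    {
        Buffer      srcbuf = ReadBuffer(src, blkno);

        /* Hold a share lock while the page contents are copied. */
        LockBuffer(srcbuf, BUFFER_LOCK_SHARE);
        /* copy the page (even if empty) into the destination, WAL-log it */
        UnlockReleaseBuffer(srcbuf);
    }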
On closer inspection, mcv.c isn't as broken for ScalarArrayOpExpr
as I thought. The Var-on-right issue is real enough, but actually
it does cope fine with a NULL array constant --- I was misled by
an XXX comment suggesting it didn't. Undo that part of the code
change, and replace the XXX comment with something less misleading.
Since v14, the extended stats machinery will try to estimate for
otherwise-unsupported boolean expressions if they match an expression
available from an extended stats object. mcv.c did not get the memo
about this, and would spit up with "unknown clause type". Fortunately
the case is easy to handle, since we can expect the expression yields
boolean.
While here, replace some not-terribly-on-point assertions with
simpler runtime tests for lookup failure. That seems appropriate
so that we get an elog not a crash if we somehow get to the new
it-should-be-a-bool-expression code with a subexpression that
doesn't match any stats column.
Per report from Danny Shemesh. Thanks to Justin Pryzby for
preliminary investigation.
Discussion: https://postgr.es/m/CAFZC=QqD6=27wQPOW1pbRa98KPyuyn+7cL_Ay_Ck-roZV84vHg@mail.gmail.com
statext_is_compatible_clause_internal() checked that the arguments
of a ScalarArrayOpExpr are one Var and one Const, but it would allow
cases where the Const was on the left. Subsequent uses of the clause
are not expecting that and would suffer assertion failures or core
dumps. mcv.c also had not bothered to cope with the case of a NULL
array constant, which seems really unacceptably sloppy of somebody.
(Although our tools failed us there too, since AFAIK neither Coverity
nor any compiler warned of the obvious use-of-uninitialized-variable
condition.) It seems best to handle that by having
statext_is_compatible_clause_internal() reject it.
Noted while fixing bug #17570. Back-patch to v13 where the
extended stats code grew some awareness of ScalarArrayOpExpr.
Commit a4d75c86b improved the extended-stats logic to allow extended
stats to be collected on expressions not just bare Vars. To apply
such stats, we first verify that the user has permissions to read all
columns used in the stats. (If not, the query will likely fail at
runtime, but the planner ought not do so.) That had to get extended
to check permissions of columns appearing within such expressions,
but the code for that was completely wrong: it applied pull_varattnos
to the wrong pointer, leading to "unrecognized node type" failures.
Furthermore, although you couldn't get to this because of that bug,
it failed to account for the attnum offset applied by pull_varattnos.
This escaped recognition so far because the code in question is not
reached when the user has whole-table SELECT privilege (which is the
common case), and because only subexpressions not specially handled
by statext_is_compatible_clause_internal() are at risk.
I think a large part of the reason for this bug is under-documentation
of what statext_is_compatible_clause() is doing and what its arguments
are, so do some work on the comments to try to improve that.
Per bug #17570 from Alexander Kozhemyakin. Patch by Richard Guo;
comments and other cosmetic improvements by me. (Thanks also to
Japin Li for diagnosis.) Back-patch to v14 where the bug came in.
Discussion: https://postgr.es/m/17570-f2f2e0f4bccf0965@postgresql.org
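
The corrected usage, sketched (illustrative variables):

    Bitmapset  *attnums = NULL;
    int         attnum = -1;

    /* Pass the expression node itself, not the containing clause. */
    pull_varattnos((Node *) expr, relid, &attnums);

    while ((attnum = bms_next_member(attnums, attnum)) >= 0)
    {
        /* Undo the offset pull_varattnos applies to attribute numbers. */
        AttrNumber  col = attnum + FirstLowInvalidHeapAttributeNumber;

        /* ... check SELECT privilege on "col" ... */
    }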
That bit is unlogged and therefore it's wrong to consider it in WAL page
comparison.
Add a test that tickles the case, as branch testing technology allows.
This has been a problem ever since WAL consistency checking was
introduced (commit a507b86900f6 for pg10), so backpatch to all supported
branches.
Author: 王海洋 (Haiyang Wang) <wanghaiyang.001@bytedance.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/CACciXAD2UvLMOhc4jX9VvOKt7DtYLr3OYRBhvOZ-jRxtzc_7Jg@mail.gmail.com
Discussion: https://postgr.es/m/CACciXADOfErX9Bx0nzE_SkdfXr6Bbpo5R=v_B6MUTEYW4ya+cg@mail.gmail.com
Using ATSimpleRecursion() in ATPrepCmd() to do so as bbb927b4db9b did is
not correct, because ATPrepCmd() can't distinguish between triggers that
may be cloned and those that may not, so would wrongly try to recurse
for the latter category of triggers.
So this commit restores the code in EnableDisableTrigger() that
86f575948c77 had added to do the recursion, which would do it only for
triggers that may be cloned, that is, row-level triggers. This also
changes tablecmds.c such that ATExecCmd() is able to pass the value of
the ONLY flag down to EnableDisableTrigger() using its new 'recurse'
parameter.
This also fixes what seems like an oversight of 86f575948c77 that the
recursion to partition triggers would only occur if EnableDisableTrigger()
had actually changed the trigger. It is more apt to recurse to inspect
partition triggers even if the parent's trigger didn't need to be
changed: only then can we be certain that all descendants share the same
state afterwards.
Backpatch all the way back to 11, like bbb927b4db9b. Care is taken not
to break ABI compatibility (and to ensure that no catversion bump is
needed).
Co-authored-by: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Dmitry Koval <d.koval@postgrespro.ru>
Discussion: https://postgr.es/m/CA+HiwqG-cZT3XzGAnEgZQLoQbyfJApVwOTQaCaas1mhpf+4V5A@mail.gmail.com
This was a simple omission in 9d9c02ccd where the code didn't correctly
set the operator to use in the run condition OpExpr when the window
function was both monotonically increasing and decreasing.
Bug discovered by Julien Roze, although he did not report it.
Reported-by: Phil Florent
Discussion: https://postgr.es/m/PA4P191MB160009A09B9D0624359278CFBA9F9@PA4P191MB1600.EURP191.PROD.OUTLOOK.COM
Backpatch-through: 15, where 9d9c02ccd was added
Adjusting this function was overlooked in commit 94aa7cc5f. The only
visible symptom (so far) is that INSERT ... ON CONFLICT could go into
an endless loop when inserting a null that has a conflict.
Richard Guo and Tom Lane, per bug #17558 from Andrew Kesper
Discussion: https://postgr.es/m/17558-3f6599ffcf52fd4a@postgresql.org
Ordinarily the functions called in this loop ought to have plenty
of CFIs themselves; but we've now seen a case where no such CFI is
reached, making the loop uninterruptible. Even though that's from
a recently-introduced bug, it seems prudent to install a CFI at
the loop level in all branches.
Per discussion of bug #17558 from Andrew Kesper (an actual fix for
that bug will follow).
Discussion: https://postgr.es/m/17558-3f6599ffcf52fd4a@postgresql.org
Commit fac1b470a thought we could check for set-returning functions
by testing only the top-level node in an expression tree. This is
wrong in itself, and to make matters worse it encouraged others
to make the same mistake, by exporting tlist.c's special-purpose
IS_SRF_CALL() as a widely-visible macro. I can't find any evidence
that anyone's taken the bait, but it was only a matter of time.
Use expression_returns_set() instead, and stuff the IS_SRF_CALL()
genie back in its bottle, this time with a warning label. I also
added a couple of cross-reference comments.
After a fair amount of fooling around, I've despaired of making
a robust test case that exposes the bug reliably, so no test case
here. (Note that the test case added by fac1b470a is itself
broken, in that it doesn't notice if you remove the code change.
The repro given by the bug submitter currently doesn't fail either
in v15 or HEAD, though I suspect that may indicate an unrelated bug.)
Per bug #17564 from Martijn van Oosterhout. Back-patch to v13,
as the faulty patch was.
Discussion: https://postgr.es/m/17564-c7472c2f90ef2da3@postgresql.org
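
The replacement test, sketched:

    #include "postgres.h"
    #include "nodes/nodeFuncs.h"

    /* IS_SRF_CALL() looked only at the top node; this walks the whole
     * expression tree, catching SRFs nested anywhere within it. */
    static bool
    contains_srf(Node *expr)
    {
        return expression_returns_set(expr);
    }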
Previously, a byte with the high bit set was just transmitted
as-is by charin() and charout(). This is problematic if the
database encoding is multibyte, because the result of charout()
won't be validly encoded, which breaks various stuff that
expects all text strings to be validly encoded. We've
previously decided to enforce encoding validity rather than try
to individually harden each place that might have a problem with
such strings, so it's time to do something about "char".
To fix, represent high-bit-set characters as \ooo (backslash
and three octal digits), following the ancient "escape" format
for bytea. charin() will continue to accept the old way as well,
though that is only reachable in single-byte encodings.
Add some test cases just so there is coverage for this code.
We'll otherwise leave this question undocumented as it was before,
because we don't really want to encourage end-user use of "char".
For the moment, back-patch into v15 so that this change appears
in 15beta3. If there's not great pushback we should consider
absorbing this change into the older branches.
Discussion: https://postgr.es/m/2318797.1638558730@sss.pgh.pa.us
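
A sketch of the new output behavior (simplified from charout()):

    char       *result = palloc(5);         /* \ooo plus trailing NUL */

    if (IS_HIGHBIT_SET(c))
        snprintf(result, 5, "\\%03o", (unsigned char) c);   /* e.g. \377 */
    else
    {
        result[0] = c;
        result[1] = '\0';
    }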
I thought commit fd96d14d9 had plugged all the holes of this sort,
but no, function RTEs could produce oversize tuples too, either
via long coldeflists or just from multiple functions in one RTE.
(I'm pretty sure the other variants of base RTEs aren't a problem,
because they ultimately refer to either a table or a sub-SELECT,
whose widths are enforced elsewhere. But we explicitly allow join
RTEs to be overwidth, as long as you don't try to form their
tuple result.)
Per further discussion of bug #17561. As before, patch all branches.
Discussion: https://postgr.es/m/17561-80350151b9ad2ad4@postgresql.org
The code tried to access ARR_DIMS(v)[0] and ARR_LBOUND(v)[0]
whether or not those values exist. This made the range check
on the "n" argument unstable --- it might or might not fail, and
if it did it would report garbage for the allowed upper limit.
These bogus accesses would probably annoy Valgrind, and if you
were very unlucky even lead to SIGSEGV.
Report and fix by Martin Kalcher. Back-patch to v14 where this
function was added.
Discussion: https://postgr.es/m/baaeb413-b8a8-4656-5757-ef347e5ec11f@aboutsource.net
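
The missing guard, sketched:

    /* Only touch dims/lbound if the array actually has a dimension;
     * a zero-dimensional (empty) array has neither. */
    if (ARR_NDIM(v) > 0)
    {
        int         nelems = ARR_DIMS(v)[0];
        int         lbound = ARR_LBOUND(v)[0];

        /* ... validate "n" against lbound .. lbound + nelems - 1 ... */
    }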
Two callers of generate_useful_gather_paths were testing the wrong
thing when deciding whether to call that function: they checked for
being at the top of the current join subproblem, rather than being at
the actual top join. This'd result in failing to construct parallel
paths for a sub-join for which they might be useful.
While set_rel_pathlist() isn't actively broken, it seems best to
make its identical-in-intention test for this be like the other two.
This has been wrong all along, but given the lack of field complaints
I'm hesitant to back-patch into stable branches; we usually prefer
to avoid non-bug-fix changes in plan choices in minor releases.
It seems not too late for v15 though.
Richard Guo, reviewed by Antonin Houska and Tom Lane
Discussion: https://postgr.es/m/CAMbWs4-mH8Zf87-w+3P2J=nJB+5OyicO28ia9q_9o=Lamf_VHg@mail.gmail.com
A RowExpr with more than MaxTupleAttributeNumber columns would fail at
execution anyway, since we cannot form a tuple datum with more than that
many columns. While heap_form_tuple() has a check for too many columns,
it emerges that there are some intermediate bits of code that don't
check and can be driven to failure with sufficiently many columns.
Checking this at parse time seems like the most appropriate place to
install a defense, since we already check SELECT list length there.
While at it, make the SELECT-list-length error use the same errcode
(TOO_MANY_COLUMNS) as heap_form_tuple does, rather than the generic
PROGRAM_LIMIT_EXCEEDED.
Per bug #17561 from Egor Chindyaskin. The given test case crashes
in all supported branches (and probably a lot further back),
so patch all.
Discussion: https://postgr.es/m/17561-80350151b9ad2ad4@postgresql.org
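
The parse-time defense, sketched (error text illustrative):

    if (list_length(rowexpr->args) > MaxTupleAttributeNumber)
        ereport(ERROR,
                (errcode(ERRCODE_TOO_MANY_COLUMNS),
                 errmsg("ROW expressions can have at most %d entries",
                        MaxTupleAttributeNumber)));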
Commit 9a974cbcba005256a19991203583a94b4f9a21a9 arranged to preserve
the relfilenode of user tables across pg_upgrade, but failed to notice
that pg_upgrade treats pg_largeobject as a user table and thus it needs
the same treatment. Otherwise, large objects will appear to vanish
after a pg_upgrade.
Commit d498e052b4b84ae21b3b68d5b3fda6ead65d1d4d fixed this problem
by teaching pg_dump to UPDATE pg_class.relfilenode for pg_largeobject
and its index. However, because an UPDATE on the catalog rows doesn't
change anything on disk, this can leave stray files behind in the new
cluster. They will normally be empty, but it's a little bit untidy.
Hence, this commit arranges to do the same thing using DDL. Specifically,
it makes TRUNCATE work for the pg_largeobject catalog when in
binary-upgrade mode, and it then uses that command in binary-upgrade
dumps as a way of setting pg_class.relfilenode for pg_largeobject and
its index. That way, the old files are removed from the new cluster.
Discussion: http://postgr.es/m/CA+TgmoYYMXGUJO5GZk1-MByJGu_bB8CbOL6GJQC8=Bzt6x6vDg@mail.gmail.com
Crash recovery on standby may encounter missing directories
when replaying database-creation WAL records. Prior to this
patch, the standby would fail to recover in such a case;
however, the directories could be legitimately missing.
Consider the following sequence of commands:
CREATE DATABASE
DROP DATABASE
DROP TABLESPACE
If, after replaying the last WAL record and removing the
tablespace directory, the standby crashes and has to replay the
create database record again, crash recovery must be able to continue.
A fix for this problem was already attempted in 49d9cfc68bf4, but it
was reverted because of design issues. This new version is based
on Robert Haas' proposal: any missing tablespaces are created
during recovery before reaching consistency. Tablespaces
are created as real directories, and should be deleted
by later replay. CheckRecoveryConsistency ensures
they have disappeared.
The problems detected by this new code are reported as PANIC,
except when allow_in_place_tablespaces is set to ON, in which
case they are WARNING. Apart from making tests possible, this
gives users an escape hatch in case things don't go as planned.
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Author: Asim R Praveen <apraveen@pivotal.io>
Author: Paul Guo <paulguo@gmail.com>
Reviewed-by: Anastasia Lubennikova <lubennikovaav@gmail.com> (older versions)
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com> (older versions)
Reviewed-by: Michaël Paquier <michael@paquier.xyz>
Diagnosed-by: Paul Guo <paulguo@gmail.com>
Discussion: https://postgr.es/m/CAEET0ZGx9AvioViLf7nbR_8tH9-=27DN5xWJ2P9-ROH16e4JUA@mail.gmail.com
We have a few commands that "can't run in a transaction block",
meaning that if they complete their processing but then we fail
to COMMIT, we'll be left with inconsistent on-disk state.
However, the existing defenses for this are only watertight for
simple query protocol. In extended protocol, we didn't commit
until receiving a Sync message. Since the client is allowed to
issue another command instead of Sync, we're in trouble if that
command fails or is an explicit ROLLBACK. In any case, sitting
in an inconsistent state while waiting for a client message
that might not come seems pretty risky.
This case wasn't reachable via libpq before we introduced pipeline
mode, but it's always been an intended aspect of extended query
protocol, and likely there are other clients that could reach it
before.
To fix, set a flag in PreventInTransactionBlock that tells
exec_execute_message to force an immediate commit. This seems
to be the approach that does least damage to existing working
cases while still preventing the undesirable outcomes.
While here, add some documentation to protocol.sgml that explicitly
says how to use pipelining. That's latent in the existing docs if
you know what to look for, but it's better to spell it out; and it
provides a place to document this new behavior.
Per bug #17434 from Yugo Nagata. It's been wrong for ages,
so back-patch to all supported branches.
Discussion: https://postgr.es/m/17434-d9f7a064ce2a88a3@postgresql.org
Since a2c8499, HbaFileName (default pg_hba.conf) was used instead of
IdentFileName (default pg_ident.conf) as the reference file when parsing
the contents of pg_ident.conf for pg_ident_file_mappings, even though
pg_ident.conf itself was opened correctly. This had two consequences:
- On an I/O error when reading pg_ident.conf, the user would get an
ERROR message referring to pg_hba.conf and not pg_ident.conf.
- When reading an external file with a relative path using '@' in
pg_ident.conf, the directory used to look at the file to load would be
the base directory of pg_hba.conf rather than the one of pg_ident.conf,
leading to errors in pg_ident_file_mappings inconsistent with what gets
loaded at startup when pg_ident.conf and pg_hba.conf are located in
different directories.
This error only impacted the SQL view pg_ident_file_mappings, which uses
logic new to v15 to fill the view with the parsed information, not the
code paths loading these authentication files at startup.
Author: Julien Rouhaud
Discussion: https://postgr.es/m/20220726050402.vsr6fmz7rsgpmdz3@jrouhaud
Backpatch-through: 15
Previously we did this after InitPostgres, at a somewhat randomly chosen
place within PostgresMain. However, since commit a0ffa885e doing this
outside a transaction can cause a crash, if we need to check permissions
while replacing a placeholder GUC. (Besides which, a preloaded library
could itself want to do database access within _PG_init.)
To avoid needing an additional transaction start/end in every session,
move the process_session_preload_libraries call to within InitPostgres's
transaction. That requires teaching the code not to call it when
InitPostgres is called from somewhere other than PostgresMain, since
we don't want session_preload_libraries to affect background workers.
The most future-proof solution here seems to be to add an additional
flag parameter to InitPostgres; fortunately, we're not yet very worried
about API stability for v15.
Doing this also exposed the fact that we're currently honoring
session_preload_libraries in walsenders, even those not connected to
any database. This seems, at minimum, a POLA violation: walsenders
are not interactive sessions. Let's stop doing that.
(All these comments also apply to local_preload_libraries, of course.)
Per report from Gurjeet Singh (thanks also to Nathan Bossart and Kyotaro
Horiguchi for review). Backpatch to v15 where a0ffa885e came in.
Discussion: https://postgr.es/m/CABwTF4VEpwTHhRQ+q5MiC5ucngN-whN-PdcKeufX7eLSoAfbZA@mail.gmail.com