In commit ec62cb0aa, I foolishly replaced ExecEvalWholeRowVar's
lookup_rowtype_tupdesc_domain call with just lookup_rowtype_tupdesc,
because I didn't see how a domain could be involved there, and
there were no regression test cases to jog my memory. But the
existing code was correct, so revert that change and add a test
case showing why it's necessary. (Note: per comment in struct
DatumTupleFields, it is correct to produce an output tuple that's
labeled with the base composite type, not the domain; hence just
blindly looking through the domain is correct here.)
Per bug #17515 from Dan Kubb. Back-patch to v11 where domains over
composites became a thing.
Discussion: https://postgr.es/m/17515-a24737438363aca0@postgresql.org
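
As an illustrative sketch of the case being tested (type and function
names here are invented, not taken from the commit): a whole-row
reference to a function returning a domain over a composite type goes
through ExecEvalWholeRowVar.

    -- Hypothetical names, for illustration only.
    CREATE TYPE comptype AS (r float8, i float8);
    CREATE DOMAIN dcomptype AS comptype CHECK ((value).r > 0);
    CREATE FUNCTION getd() RETURNS dcomptype LANGUAGE sql
      AS $$ SELECT ROW(1.0::float8, 2.0::float8)::comptype::dcomptype $$;
    -- The whole-row Var below must look through the domain; its output
    -- tuple is labeled with the base type comptype, per the commit.
    SELECT d FROM getd() AS d;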
Bug #17512 highlighted that a suitably broken data type could crash the
backend if either its hash function or its equality function was in
some way non-deterministic with respect to its input values. The crash
came from code which assumes that we'll always find a hash table entry
corresponding to an item in the Memoize LRU list.
Here we remove the assumption that we'll always find the entry
corresponding to the given LRU list item and add run-time checks to verify
we have found the given item in the cache.
This is not a fix for bug #17512, but it will turn the crash reported by
that bug report into an internal ERROR.
Reported-by: Ales Zeleny
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAApHDvpxFSTwvoYWT7kmFVSZ9zLAeHb=S9vrz=RExMgSkQNWqw@mail.gmail.com
Backpatch-through: 14, where Memoize was added.
pg_stat_get_subscription scanned one more LogicalRepWorker array entry
than is really allocated. In the worst case this could lead to SIGSEGV,
if the LogicalRepCtx data structure is near the end of shared memory.
That seems quite unlikely though (thanks to the ordering of calls in
CreateSharedMemoryAndSemaphores) and we've heard no field reports of it.
A more likely misbehavior is one row of garbage data in the function's
result, but even that is not very likely because of the check that the
pid field matches some live backend.
Report and fix by Kuntal Ghosh. This bug is old, so back-patch
to all supported branches.
Discussion: https://postgr.es/m/CAGz5QCJykEDzW6jQK6Yz7Qh_PMtD=95de_7QoocbVR2Qy8hWZA@mail.gmail.com
Since a117cebd6, some older gcc versions issue "variable may be used
uninitialized in this function" complaints for brin_summarize_range.
Silence that using the same coding pattern as in bt_index_check_internal;
arguably, a117cebd6 had too narrow a view of which compilers might give
trouble.
Nathan Bossart and Tom Lane. Back-patch as the previous commit was.
Discussion: https://postgr.es/m/20220601163537.GA2331988@nathanxps13
This reverts commit d9d076222f5b "VACUUM: ignore indexing operations
with CONCURRENTLY".
These changes caused indexes created with the CONCURRENTLY option to
miss heap tuples that were HOT-updated and HOT-pruned during the index
creation. Before these changes, HOT pruning would have been prevented
by the Xmin of the transaction creating the index, but because this
change was precisely to allow the Xmin to move forward ignoring that
backend, now other backends scanning the table can prune them. This is
not a problem for VACUUM (which requires a lock that conflicts with a
CREATE INDEX CONCURRENTLY operation), but HOT-prune can definitely
occur. In other words, Xmin advancement was sped up, but at the cost of
corrupting the resulting index.
Regrettably, this means that the new feature in PG14 whereby REINDEX
CONCURRENTLY and CREATE INDEX CONCURRENTLY (RIC/CIC) on very large
tables no longer force VACUUM to retain very old tuples goes away. We
might try to implement it again in a later release, but for now the
risk of indexes missing tuples is too high and there's no easy fix.
Backpatch to 14, where this change appeared.
Reported-by: Peter Slavov <pet.slavov@gmail.com>
Diagnosed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Diagnosed-by: Michael Paquier <michael@paquier.xyz>
Diagnosed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/17485-396609c6925b982d%40postgresql.org
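
For anyone who built indexes with CONCURRENTLY under an affected
release, a sketch of the straightforward remedy (the index name is a
placeholder):

    -- Rebuild a possibly-affected index without blocking writes.
    REINDEX INDEX CONCURRENTLY my_index;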
We hadn't noticed this because (a) few people feed invalid
timezone abbreviation files to the server, and (b) in typical
scenarios guc.c would throw ereport(ERROR) and then transaction
abort handling would silently clean up the leaked file reference.
However, it was possible to observe file leakage warnings if one
breaks an already-active abbreviation file, because guc.c does
not throw ERROR when loading supposedly-validated settings during
session start or SIGHUP processing.
Report and fix by Kyotaro Horiguchi (cosmetic adjustments by me)
Discussion: https://postgr.es/m/20220530.173740.748502979257582392.horikyota.ntt@gmail.com
In the codepath when no encoding conversion is required, the check for
incomplete character at the end of input incorrectly used server
encoding's max character length, instead of the client's. Usually the
server and client encodings are the same when we're not performing
encoding conversion, but SQL_ASCII is an exception.
In passing, also fix some outdated comments that still talked about
the old COPY protocol, which was removed in v14.
Per bug #17501 from Vitaly Voronov. Backpatch to v14 where this was
introduced.
Discussion: https://www.postgresql.org/message-id/17501-128b1dd039362ae6@postgresql.org
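
A minimal sketch of the exceptional case (the table name is a
placeholder): with a SQL_ASCII client and a multi-byte server encoding,
no conversion is performed, so the incomplete-character check must use
the client encoding's maximum character length.

    SET client_encoding = 'SQL_ASCII';
    -- Data flows through the no-conversion path checked here.
    COPY mytab FROM STDIN;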
If a short description is specified as NULL in one of the various
DefineCustomXXXVariable() functions available to external modules to
define a custom parameter, SHOW ALL would crash. This change teaches
SHOW ALL, and any other code paths that manipulate short descriptions,
to handle NULL properly. Note that help_config.c was already able to
do that, when describing the set of GUCs for postgres
--describe-config.
Author: Steve Chavez
Reviewed by: Nathan Bossart, Andres Freund, Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/CAGRrpzY6hO-Kmykna_XvsTv8P2DshGiU6G3j8yGao4mk0CqjHA%40mail.gmail.com
Backpatch-through: 10
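
A quick way to exercise the fixed paths, assuming some loaded module
registered a GUC with a NULL short description (hypothetical scenario):

    -- Both of these walk the GUC array and, pre-fix, could crash on a
    -- NULL short description.
    SHOW ALL;
    SELECT name FROM pg_settings WHERE short_desc IS NULL;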
Commits a59c79564 et al. tried to sync libpq's SSL key file
permissions checks with what we've used for years in the backend.
We did not intend to create any new failure cases, but it turns out
we did: restricting the key file's ownership breaks cases where the
client is allowed to read a key file despite not having the identical
UID. In particular a client running as root used to be able to read
someone else's key file; and having seen that I suspect that there are
other, less-dubious use cases that this restriction breaks on some
platforms.
We don't really need an ownership check, since if we can read the key
file despite its having restricted permissions, it must have the right
ownership --- under normal conditions anyway, and the point of this
patch is that any additional corner cases where that works should be
deemed allowable, as they have been historically. Hence, just drop
the ownership check, and rearrange the permissions check to get rid
of its faulty assumption that geteuid() can't be zero. (Note that the
comparable backend-side code doesn't have to cater for geteuid() == 0,
since the server rejects that very early on.)
This does have the end result that the permissions safety check used
for a root user's private key file is weaker than that used for
anyone else's. While odd, root really ought to know what she's doing
with file permissions, so I think this is acceptable.
Per report from Yogendra Suralkar. Like the previous patch,
back-patch to all supported branches.
Discussion: https://postgr.es/m/MW3PR15MB3931DF96896DC36D21AFD47CA3D39@MW3PR15MB3931.namprd15.prod.outlook.com
ruleutils.c was coded to suppress the AS label for a SELECT output
expression if the column name is "?column?", which is the parser's
fallback if it can't think of something better. This is fine, and
avoids ugly clutter, so long as (1) nothing further up in the parse
tree relies on that column name or (2) the same fallback would be
assigned when the rule or view definition is reloaded. Unfortunately
(2) is far from certain, both because ruleutils.c might print the
expression in a different form from how it was originally written
and because FigureColname's rules might change in future releases.
So we shouldn't rely on that.
Detecting exactly whether there is any outer-level use of a SELECT
column name would be rather expensive. This patch takes the simpler
approach of just passing down a flag indicating whether there *could*
be any outer use; for example, the output column names of a SubLink
are not referenceable, and we also do not care about the names exposed
by the right-hand side of a setop. This is sufficient to suppress
unwanted clutter in all but one case in the regression tests. That
seems like reasonable evidence that it won't be too much in users'
faces, while still fixing the cases we need to fix.
Per bug #17486 from Nicolas Lutic. This issue is ancient, so
back-patch to all supported branches.
Discussion: https://postgr.es/m/17486-1ad6fd786728b8af@postgresql.org
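
A sketch of the shape at issue (not the reporter's exact query): the
inner expression's column name falls back to "?column?", and the outer
query relies on that name.

    CREATE TABLE t (a int);
    -- a + 1 gets the fallback name "?column?"; the ORDER BY above it
    -- references that name, so ruleutils must not drop the AS label.
    CREATE VIEW v AS
      SELECT * FROM (SELECT a, a + 1 FROM t) ss ORDER BY "?column?";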
When an implicit operator family is created, it wasn't getting reported.
Make it do so.
This has always been missing. Backpatch to 10.
Author: Masahiko Sawada <sawada.mshk@gmail.com>
Reported-by: Leslie LEMAIRE <leslie.lemaire@developpement-durable.gouv.fr>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/f74d69e151b22171e8829551b1159e77@developpement-durable.gouv.fr
This is a slight, convenient semantics change from what commit
0f0cfb494004 ("Fix parallel operations that prevent oldest xmin from
advancing") introduced that lets us simplify the coding in the one place
where it is used.
Backpatch to 13. This is related to commit 6fea65508a1a ("Tighten
ComputeXidHorizons' handling of walsenders") rewriting the code site
where this is used, which has not yet been backpatched, but it may well
be in the future.
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/202204191637.eldwa2exvguw@alvherre.pgsql
We weren't checking the length of the column list in the alias clause of
an XMLTABLE or JSON_TABLE function (a "tablefunc" RTE), and it was
possible to make the server crash by passing an overly long one. Fix it
by throwing an error in that case, like the other places that deal with
alias lists.
In passing, modify the equivalent test used for join RTEs to look like
the other ones, which was different for no apparent reason.
This bug came in when XMLTABLE was born in version 10; backpatch to all
stable versions.
Reported-by: Wang Ke <krking@zju.edu.cn>
Discussion: https://postgr.es/m/17480-1c9d73565bb28e90@postgresql.org
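
An illustrative sketch of the now-rejected shape (the original crash
needed a far longer alias list than this):

    -- The alias supplies more column names than the tablefunc yields;
    -- this now fails with a clean error rather than risking a crash.
    SELECT * FROM XMLTABLE('/r' PASSING '<r><c>1</c></r>'::xml
                           COLUMNS c int) AS t(c1, c2, c3);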
In order to estimate the cache hit ratio of a Memoize node, one of the
inputs we require is the estimated number of times the Memoize node
will be rescanned. The higher this number, the larger the cache hit
ratio is likely to become. Unfortunately, the value being passed as
the number of "calls" to the Memoize node was incorrectly the Nested
Loop's outer_path->parent->rows instead of outer_path->rows. This
failed to account for the fact that the outer_path might be
parameterized by some upper-level Nested Loop.
This problem could lead to Memoize plans appearing more favorable than
they actually are. It could also lead to extended executor startup
times when work_mem values were large, because the planner would set
an overly large MemoizePath->est_entries, making the Memoize hash
table initially much larger than required.
Fix this simply by passing outer_path->rows rather than
outer_path->parent->rows. Also, adjust the expected regression test
output for a plan change.
Reported-by: Pavel Stehule
Author: David Rowley
Discussion: https://postgr.es/m/CAFj8pRAMp%3DQsMi6sPQJ4W3hczoFJRvyXHJV3AZAZaMyTVM312Q%40mail.gmail.com
Backpatch-through: 14, where Memoize was introduced
If a cluster is promoted (i.e. the control file shows a state other
than DB_IN_ARCHIVE_RECOVERY) while CreateRestartPoint() is still
processing, this function could miss updating the control file's
"checkPoint" and "checkPointCopy" but still recycle and/or remove past
WAL segments, assuming that the to-be-updated LSN values should be
used as reference points for the cleanup. This causes a follow-up
restart attempting crash recovery to fail with a PANIC on a missing
checkpoint record, if the cluster abruptly stopped or crashed before
the end-of-recovery checkpoint triggered by the promotion could
complete. The PANIC arises because the redo LSN referred to in the
control file is located in a segment already gone, recycled by the
previous restartpoint while "checkPoint" was out-of-sync in the
control file.
This commit fixes the update of the control file during restartpoints
so that "checkPoint" and "checkPointCopy" are updated even if the
cluster has been promoted while a restartpoint is running, keeping
them consistent with the set of WAL segments actually recycled at the
end of CreateRestartPoint().
7863ee4 has already fixed this problem on master, but the release
timing of the latest point versions did not leave me enough time to
study and fix it on all the stable branches.
Reported-by: Fujii Masao, Rui Zhao
Author: Kyotaro Horiguchi
Reviewed-by: Nathan Bossart, Michael Paquier
Discussion: https://postgr.es/m/20220316.102444.2193181487576617583.horikyota.ntt@gmail.com
Backpatch-through: 10
This follows in the footsteps of commit 2591ee8ec by removing one more
ill-advised shortcut from planning of GroupingFuncs. It's true that
we don't intend to execute the argument expression(s) at runtime, but
we still have to process any Vars appearing within them, or we risk
failure at setrefs.c time (or more fundamentally, in EXPLAIN trying
to print such an expression). Vars in upper plan nodes have to have
referents in the next plan level, whether we ever execute 'em or not.
Per bug #17479 from Michael J. Sullivan. Back-patch to all supported
branches.
Richard Guo
Discussion: https://postgr.es/m/17479-6260deceaf0ad304@postgresql.org
The problem is that we don't send keep-alive messages for a long time
while processing large transactions during logical replication in
which we send no data of those transactions. This can happen either
because the table modified in the transaction is not published or
because all the changes got filtered out. We do try to send a
keep-alive if necessary at the end of the transaction (via
WalSndWriteData()), but by that time the subscriber side can time out
and exit.
To fix this, we try to send the keep-alive message if required after
processing a certain threshold of changes.
Reported-by: Fabrice Chapuis
Author: Wang wei and Amit Kapila
Reviewed By: Masahiko Sawada, Euler Taveira, Hou Zhijie, Hayato Kuroda
Backpatch-through: 10
Discussion: https://postgr.es/m/CAA5-nLARN7-3SLU_QUxfy510pmrYK6JJb=bk3hcgemAM_pAv+w@mail.gmail.com
The parser code that transformed VALUES from row-oriented to
column-oriented lists failed if there were zero columns.
You can't write that straightforwardly (though probably you
should be able to), but the case can be reached by expanding
a "tab.*" reference to a zero-column table.
Per bug #17477 from Wang Ke. Back-patch to all supported branches.
Discussion: https://postgr.es/m/17477-0af3c6ac6b0a6ae0@postgresql.org
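
A sketch of one way the expansion can arise (assuming a LATERAL
reference; the bug report's query may have differed):

    CREATE TABLE nocols ();
    -- Expanding nocols.* here produces a zero-column VALUES row, the
    -- case the transformation previously failed on.
    SELECT * FROM nocols, LATERAL (VALUES (nocols.*)) AS v;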
This reverts commit eafdf9de06e9b60168f5e47cedcfceecdc6d4b5f
and its back-branch counterparts. Corey Huinker pointed out that
we'd discussed this exact change back in 2016 and rejected it,
on the grounds that there's at least one usage pattern with LIMIT
where an infinite endpoint can usefully be used. Perhaps that
argument needs to be re-litigated, but there's no time left before
our back-branch releases. To keep our options open, restore the
status quo ante; if we do end up deciding to change things, waiting
one more quarter won't hurt anything.
Rather than just doing a straight revert, I added a new test case
demonstrating the usage with LIMIT. That'll at least remind us of
the issue if we forget again.
Discussion: https://postgr.es/m/3603504.1652068977@sss.pgh.pa.us
Discussion: https://postgr.es/m/CADkLM=dzw0Pvdqp5yWKxMd+VmNkAMhG=4ku7GnCZxebWnzmz3Q@mail.gmail.com
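
The usage pattern in question, reconstructed as a sketch: an infinite
endpoint is useful as long as something like LIMIT bounds the output.

    -- The set-returning function is evaluated lazily in the select
    -- list, so LIMIT bounds the otherwise-endless series.
    SELECT generate_series(now(), 'infinity'::timestamptz, '1 hour')
    LIMIT 3;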
It intended to, but did not, achieve this. Adopt the new standard of
setting user ID just after locking the relation. Back-patch to v10 (all
supported versions).
Reviewed by Simon Riggs. Reported by Alvaro Herrera.
Security: CVE-2022-1552
When a feature enumerates relations and runs functions associated with
all found relations, the feature's user shall not need to trust every
user having permission to create objects. BRIN-specific functionality
in autovacuum neglected to account for this, as did pg_amcheck and
CLUSTER. An attacker having permission to create non-temp objects in at
least one schema could execute arbitrary SQL functions under the
identity of the bootstrap superuser. CREATE INDEX (not a
relation-enumerating operation) and REINDEX protected themselves too
late. This change extends to the non-enumerating amcheck interface.
Back-patch to v10 (all supported versions).
Sergey Shinderuk, reviewed (in earlier versions) by Alexander Lakhin.
Reported by Alexander Lakhin.
Security: CVE-2022-1552
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: b7586f1542a8ffdfd1416e425f55e4e89c9a9505
The tests added in 9f8a050f68d failed nearly reliably on FreeBSD in CI, and
occasionally on the buildfarm. That turns out to be caused not by a bug in the
test, but by a longstanding bug in recovery conflict handling.
The standby timeout handler, used by ResolveRecoveryConflictWithBufferPin(),
executed SendRecoveryConflictWithBufferPin() inside a signal handler. A bad
idea, because the deadlock timeout handler (or a spurious latch set) could
have interrupted ProcWaitForSignal(). If unlucky that could cause a
self-deadlock on ProcArrayLock, if the deadlock check is in
SendRecoveryConflictWithBufferPin()->CancelDBBackends().
To fix, set a flag in StandbyTimeoutHandler(), and check the flag in
ResolveRecoveryConflictWithBufferPin().
Subsequently the recovery conflict tests will be backpatched.
Discussion: https://postgr.es/m/20220413002626.udl7lll7f3o7nre7@alap3.anarazel.de
Backpatch: 10-
mark_async_capable_plan(), which is called from create_append_plan()
to determine whether subplans are async-capable, failed to take into
account that the subplan created from a given subpath might include a
gating Result node when the subpath is a SubqueryScanPath or
ForeignPath. This caused a segmentation fault when the subplan created
from a SubqueryScanPath included the Result node, and caused
ExecAsyncRequest() to throw an error about an unrecognized node type
when the subplan created from a ForeignPath included it: in the latter
case the Result node was unintentionally considered async-capable, but
we don't currently support executing Result nodes asynchronously. Fix
by modifying mark_async_capable_plan() to disable asynchronous
execution in such cases. Also, adjust code in the ProjectionPath case
of mark_async_capable_plan() for consistency with the other cases, and
adjust/improve the comments there.
is_async_capable_path(), added in commit 27e1f1456 and later rewritten
into mark_async_capable_plan(), has the same issue, causing the
execution-time error mentioned above; so back-patch to v14 where that
commit went in.
Per report from Justin Pryzby.
Etsuro Fujita, reviewed by Zhihong Yu and Justin Pryzby.
Discussion: https://postgr.es/m/20220408124338.GK24419%40telsasoft.com
This function looks for a reference to the recursive WITH CTE,
but it checked only the CTE name, not ctelevelsup, so that it could
seize on a lower CTE that happened to have the same name. This
would result in planner failures later, either weird errors such as
"could not find attribute 2 in subquery targetlist", or crashes
or assertion failures. The code also merely Assert'ed that it found
a matching entry, which is not guaranteed at all by the parser.
Per bugs #17320 and #17318 from Zhiyong Wu.
Thanks to Kyotaro Horiguchi for investigation.
Discussion: https://postgr.es/m/17320-70e37868182512ab@postgresql.org
Discussion: https://postgr.es/m/17318-2eb65a3a611d2368@postgresql.org
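
A sketch of the confusable shape (not necessarily the reported query):
an inner WITH reuses the recursive CTE's name at a different
ctelevelsup, so matching on name alone can seize on the wrong entry.

    WITH RECURSIVE cte (n) AS (
        -- This inner cte shadows the outer one by name.
        WITH cte (n) AS (SELECT 1)
        SELECT n FROM cte
    )
    SELECT * FROM cte;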
inline_cte() expected to find exactly as many references to the
target CTE as its cterefcount indicates. While that should be
accurate for the tree as emitted by the parser, there are some
optimizations that occur upstream of here that could falsify it,
notably removal of unused subquery output expressions.
Trying to make the accounting 100% accurate seems expensive and
doomed to future breakage. It's not really worth it, because
all this code is protecting is downstream assumptions that every
referenced CTE has a plan. Let's convert those assertions to
regular test-and-elog just in case there's some actual problem,
and then drop the failing assertion.
Per report from Tomas Vondra (thanks also to Richard Guo for
analysis). Back-patch to v12 where the faulty code came in.
Discussion: https://postgr.es/m/29196a1e-ed47-c7ca-9be2-b1c636816183@enterprisedb.com
CLUSTER sort won't use the datum1 SortTuple field when clustering
against an index whose leading key is an expression. This makes it
unsafe to use the abbreviated keys optimization, which was missed by the
logic that sets up SortSupport state. Affected tuplesorts output tuples
in a completely bogus order as a result (the wrong SortSupport based
comparator was used for the leading attribute).
This issue is similar to the bug fixed on the master branch by recent
commit cc58eecc5d. But it's a far older issue, that dates back to the
introduction of the abbreviated keys optimization by commit 4ea51cdfe8.
Backpatch to all supported versions.
Author: Peter Geoghegan <pg@bowt.ie>
Author: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/CA+hUKG+bA+bmwD36_oDxAoLrCwZjVtST2fqe=b4=qZcmU7u89A@mail.gmail.com
Backpatch: 10-
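
A minimal sketch of the affected shape (names invented):

    CREATE TABLE ct (t text);
    -- The leading key is an expression, so the datum1 field isn't
    -- usable and abbreviated keys must be disabled for this sort.
    CREATE INDEX ct_expr_idx ON ct ((t || ''));
    CLUSTER ct USING ct_expr_idx;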
Such cases will lead to infinite loops, so they're of no practical
value. The numeric variant of generate_series() already threw error
for this, so borrow its message wording.
Per report from Richard Wesley. Back-patch to all supported branches.
Discussion: https://postgr.es/m/91B44E7B-68D5-448F-95C8-B4B3B0F5DEAF@duckdblabs.com
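
The now-rejected call, as a sketch:

    -- Previously an infinite loop; now fails like the numeric variant.
    SELECT * FROM generate_series(now(), now() + '1 day', '0 hours');
    -- ERROR:  step size cannot equal zero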
An ALTER FUNCTION command that tried to update both the function's
proparallel property and its proconfig list failed to do the former,
because it stored the new proparallel value into a tuple that was
no longer the interesting one. Carelessness in 7aea8e4f2.
(I did not bother with a regression test, because the only likely
future breakage would be for someone to ignore the comment I added
and add some other field update after the heap_modify_tuple step.
A test using existing function properties could not catch that.)
Per report from Bryn Llewellyn. Back-patch to all supported branches.
Discussion: https://postgr.es/m/8AC9A37F-99BD-446F-A2F7-B89AD0022774@yugabyte.com
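
The previously broken combination, sketched with a hypothetical
function f(int):

    -- Pre-fix, the PARALLEL SAFE part was silently lost when combined
    -- with a SET action in the same command.
    ALTER FUNCTION f(int) PARALLEL SAFE SET work_mem = '64MB';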
The new numeric code for extract epoch from interval accidentally
truncated the DAYS_PER_YEAR value to an integer, leading to results
that mismatched the floating-point interval_part calculations.
Commit a2da77cdb4661826482ebf2ddba1f953bc74afe4, which introduced
this, actually contains the regression test change that this reverts;
I suppose that was missed at the time.
Reported-by: Joseph Koshakow <koshy44@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/CAAvxfHd5n%3D13NYA2q_tUq%3D3%3DSuWU-CufmTf-Ozj%3DfrEgt7pXwQ%40mail.gmail.com
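
A quick check of the restored behavior, using 365.25 days per year:

    SELECT extract(epoch FROM interval '1 year');
    -- 31557600 = 365.25 * 86400, matching the float8 interval_part
    -- calculation again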
We don't allow invoking more sync workers once we have reached the
sync-worker limit per subscription. But the check enforcing this also
failed to allow launching an apply worker when it got restarted.
This code was introduced by commit de43897122 but we caught the problem
only with the test added by recent commit c91f71b9dc which started failing
occasionally in the buildfarm.
As per buildfarm.
Diagnosed-by: Amit Kapila, Masahiko Sawada, Tomas Vondra
Author: Amit Kapila
Backpatch-through: 10
Discussion: https://postgr.es/m/CAH2L28vddB_NFdRVpuyRBJEBWjz4BSyTB=_ektNRH8NJ1jf95g@mail.gmail.com
https://postgr.es/m/f90d2b03-4462-ce95-a524-d91464e797c8@enterprisedb.com
Don't try to look at the attidentity field of system attributes,
because they're not there in the TupleDescAttr array. Sometimes
this is harmless because we accidentally pick up a zero, but
otherwise we'll report "no owned sequence found" from an attempt
to alter a system attribute. (It seems possible that a SIGSEGV
could occur, too, though I've not seen it in testing.)
It's not in this function's charter to complain that you can't
alter a system column, so instead just hard-wire an assumption
that system attributes aren't identities. I didn't bother with
a regression test because the appearance of the bug is very
erratic.
Per bug #17465 from Roman Zharkov. Back-patch to all supported
branches. (There's not actually a live bug before v12, because
before that get_attidentity() did the right thing anyway.
But for consistency I changed the test in the older branches too.)
Discussion: https://postgr.es/m/17465-f2a554a6cb5740d3@postgresql.org
The back-patch of commit bbace5697df12398e87ffd9879171c39d27f5b33 had
the unfortunate effect of changing the layout of PGPROC in the
back-branches, which could break extensions. This happened because it
changed the delayChkpt from type bool to type int. So, change it back,
and add a new bool delayChkptEnd field instead. The new field should
fall within what used to be padding space within the struct, and so
hopefully won't cause any extensions to break.
Per report from Markus Wanner and discussion with Tom Lane and others.
Patch originally by me, somewhat revised by Markus Wanner per a
suggestion from Michael Paquier. A very similar patch was developed
by Kyotaro Horiguchi, but I failed to see the email in which that was
posted before writing one of my own.
Discussion: http://postgr.es/m/CA+Tgmoao-kUD9c5nG5sub3F7tbo39+cdr8jKaOVEs_1aBWcJ3Q@mail.gmail.com
Discussion: http://postgr.es/m/20220406.164521.17171257901083417.horikyota.ntt@gmail.com
heap_fetch() used to have a "keep_buf" parameter that told it to return
ownership of the buffer pin to the caller after finding that the
requested tuple TID exists but is invisible to the specified snapshot.
This was thoughtlessly removed in commit 5db6df0c0, which broke
heapam_tuple_lock() (formerly EvalPlanQualFetch) because that function
needs to do more accesses to the tuple even if it's invisible. The net
effect is that we would continue to touch the page for a microsecond or
two after releasing pin on the buffer. Usually no harm would result;
but if a different session decided to defragment the page concurrently,
we could see garbage data and mistakenly conclude that there's no newer
tuple version to chain up to. (It's hard to say whether this has
happened in the field. The bug was actually found thanks to a later
change that allowed valgrind to detect accesses to non-pinned buffers.)
The most reasonable way to fix this is to reintroduce keep_buf,
although I made it behave slightly differently: buffer ownership
is passed back only if there is a valid tuple at the requested TID.
In HEAD, we can just add the parameter back to heap_fetch().
To avoid an API break in the back branches, introduce an additional
function heap_fetch_extended() in those branches.
In HEAD there is an additional, less obvious API change: tuple->t_data
will be set to NULL in all cases where buffer ownership is not returned,
in particular when the tuple exists but fails the time qual (and
!keep_buf). This is to defend against any other callers attempting to
access non-pinned buffers. We concluded that making that change in back
branches would be more likely to introduce problems than cure any.
In passing, remove a comment about heap_fetch that was obsoleted by
9a8ee1dc6.
Per bug #17462 from Daniil Anisimov. Back-patch to v12 where the bug
was introduced.
Discussion: https://postgr.es/m/17462-9c98a0f00df9bd36@postgresql.org
With asserts disabled, late-model clang notices that this variable
is incremented but never otherwise read.
Discussion: https://postgr.es/m/3171401.1649275153@sss.pgh.pa.us
This reverts commit 49d9cfc68bf4. The approach taken by this patch has
problems, so we'll come up with a radically different fix.
Discussion: https://postgr.es/m/CA+TgmoYcUPL+WOJL2ZzhH=zmrhj0iOQ=iCFM0SuYqBbqZEamEg@mail.gmail.com
pg_stat_get_replication_slot() was accidentally marked as non-strict,
crashing when called with NULL input. As 14 is already released,
introduce an explicit NULL check there; fix the catalog in HEAD.
Bumps catversion in HEAD.
Discussion: https://postgr.es/m/20220326212432.s5n2maw6kugnpyxw@alap3.anarazel.de
Backpatch: 14-, where replication slot stats were introduced
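
The crashing call, as a sketch:

    -- Crashed pre-fix when given NULL input; now handled safely.
    SELECT pg_stat_get_replication_slot(NULL);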
Commit 8c6d30f21 caused this function to fail to set *displen
in the PS_USE_NONE code path. If the variable's previous value
had been negative, that'd lead to a memory clobber at some call
sites. We'd managed not to notice due to very thin test coverage
of such configurations, but this appears to explain buildfarm member
lorikeet's recent struggles.
Credit to Andrew Dunstan for spotting the problem. Back-patch
to v13 where the bug was introduced.
Discussion: https://postgr.es/m/136102.1648320427@sss.pgh.pa.us
Crash recovery on standby may encounter missing directories when
replaying create database WAL records. Prior to this patch, the standby
would fail to recover in such a case. However, the directories could be
legitimately missing. Consider a sequence of WAL records as follows:
CREATE DATABASE
DROP DATABASE
DROP TABLESPACE
If, after replaying the last WAL record and removing the tablespace
directory, the standby crashes and has to replay the create database
record again, the crash recovery must be able to move on.
This patch adds a mechanism similar to invalid-page tracking, to keep a
tally of missing directories during crash recovery. If all the missing
directory references are matched with corresponding drop records at the
end of crash recovery, the standby can safely continue following the
primary.
Backpatch to 13, at least for now. The bug is older, but fixing it in
older branches requires more careful study of the interactions with
commit e6d8069522c8, which appeared in 13.
A new TAP test file is added to verify the condition. However, because
it depends on commit d6d317dbf615, it can only be added to branch
master. I (Álvaro) manually verified that the code behaves as expected
in branch 14. It's a bit nervous-making to leave the code uncovered by
tests in older branches, but leaving the bug unfixed is even worse.
Also, the main reason this fix took so long is precisely that we
couldn't agree on a good strategy to approach testing for the bug, so
perhaps this is the best we can do.
Diagnosed-by: Paul Guo <paulguo@gmail.com>
Author: Paul Guo <paulguo@gmail.com>
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Author: Asim R Praveen <apraveen@pivotal.io>
Discussion: https://postgr.es/m/CAEET0ZGx9AvioViLf7nbR_8tH9-=27DN5xWJ2P9-ROH16e4JUA@mail.gmail.com
If TRUNCATE causes some buffers to be invalidated and thus the
checkpoint does not flush them, TRUNCATE must also ensure that the
corresponding files are truncated on disk. Otherwise, a replay
from the checkpoint might find that the buffers exist but have
the wrong contents, which may cause replay to fail.
Report by Teja Mupparti. Patch by Kyotaro Horiguchi, per a design
suggestion from Heikki Linnakangas, with some changes to the
comments by me. Review of this and a prior patch that approached
the issue differently by Heikki Linnakangas, Andres Freund, Álvaro
Herrera, Masahiko Sawada, and Tom Lane.
Discussion: http://postgr.es/m/BYAPR06MB6373BF50B469CA393C614257ABF00@BYAPR06MB6373.namprd06.prod.outlook.com
Noticed via -fsanitize=undefined. Introduced when a few columns in
GetConfigOptionByNum() / pg_settings started to be translated in 72be8c29a /
PG 12.
Backpatch to all affected branches, for the same reasons as 46ab07ffda9.
Discussion: https://postgr.es/m/20220323173537.ll7klrglnp4gn2um@alap3.anarazel.de
Backpatch: 12-
Noticed via -fsanitize=undefined.
Backpatch to all branches, for the same reasons as 46ab07ffda9.
Discussion: https://postgr.es/m/20220323173537.ll7klrglnp4gn2um@alap3.anarazel.de
Backpatch: 10-
Invalidate abortedRecPtr and missingContrecPtr after a missing
continuation record is successfully skipped on a standby. This fixes a
PANIC caused when a recently promoted standby attempts to write an
OVERWRITE_RECORD with an LSN of the previously read aborted record.
Backpatch to 10 (all stable versions).
Author: Sami Imseih <simseih@amazon.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/44D259DE-7542-49C4-8A52-2AB01534DCA9@amazon.com
The planner needs to treat GroupingFunc like Aggref for many purposes,
in particular with respect to processing of the argument expressions,
which are not to be evaluated at runtime. A few places hadn't gotten
that memo, notably including subselect.c's processing of outer-level
aggregates. This resulted in assertion failures or wrong plans for
cases in which a GROUPING() construct references an outer aggregation
level.
Also fix missing special cases for GroupingFunc in cost_qual_eval
(resulting in wrong cost estimates for GROUPING(), although it's
not clear that that would affect plan shapes in practice) and in
ruleutils.c (resulting in excess parentheses in pretty-print mode).
Per bug #17088 from Yaoguang Chen. Back-patch to all supported
branches.
Richard Guo, Tom Lane
Discussion: https://postgr.es/m/17088-e33882b387de7f5c@postgresql.org
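
A sketch of a failing shape: a GROUPING() construct that belongs to
the outer query's aggregation level but appears inside a sublink.

    -- Pre-fix this could hit assertion failures or produce wrong
    -- plans; the GROUPING() argument is an outer grouped column.
    SELECT (SELECT GROUPING(a)) AS g
    FROM (VALUES (1), (2)) AS v(a)
    GROUP BY a;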
DROP INDEX needs to lock the index's table before the index itself,
else it will deadlock against ordinary queries that acquire the
relation locks in that order. This is correctly mechanized for
plain indexes by RangeVarCallbackForDropRelation; but in the case of
a partitioned index, we neglected to lock the child tables in advance
of locking the child indexes. We can fix that by traversing the
inheritance tree and acquiring the needed locks in RemoveRelations,
after we have acquired our locks on the parent partitioned table and
index.
While at it, do some refactoring to eliminate confusion between
the actual and expected relkind in RangeVarCallbackForDropRelation.
We can save a couple of syscache lookups too, by having that function
pass back info that RemoveRelations will need.
Back-patch to v11 where partitioned indexes were added.
Jimmy Yih, Gaurab Dey, Tom Lane
Discussion: https://postgr.es/m/BYAPR05MB645402330042E17D91A70C12BD5F9@BYAPR05MB6454.namprd05.prod.outlook.com
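
A minimal sketch of the ordering now enforced (names invented):

    CREATE TABLE parted (a int) PARTITION BY RANGE (a);
    CREATE TABLE parted_1 PARTITION OF parted
      FOR VALUES FROM (0) TO (10);
    CREATE INDEX parted_idx ON parted (a);
    -- DROP INDEX now locks each child table before its child index,
    -- matching the lock order ordinary queries use.
    DROP INDEX parted_idx;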
The output of table_to_xmlschema() and allied functions includes
a regex describing valid values for these types ... but the regex
was itself invalid, as it failed to escape a literal "+" sign.
Report and fix by Renan Soares Lopes. Back-patch to all
supported branches.
Discussion: https://postgr.es/m/7f6fabaa-3f8f-49ab-89ca-59fbfe633105@me.com
In commit bf7ca1587, I had the bright idea that we could make the
result of a whole-row Var (that is, foo.*) track any column aliases
that had been applied to the FROM entry the Var refers to. However,
that's not terribly logically consistent, because now the output of
the Var is no longer of the named composite type that the Var claims
to emit. bf7ca1587 tried to handle that by changing the output
tuple values to be labeled with a blessed RECORD type, but that's
really pretty disastrous: we can wind up storing such tuples onto
disk, whereupon they're not readable by other sessions.
The only practical fix I can see is to give up on what bf7ca1587
tried to do, and say that the column names of tuples produced by
a whole-row Var are always those of the underlying named composite
type, query aliases or no. While this introduces some inconsistencies,
it removes others, so it's not that awful in the abstract. What *is*
kind of awful is to make such a behavioral change in a back-patched
bug fix. But corrupt data is worse, so back-patched it will be.
(A workaround available to anyone who's unhappy about this is to
introduce an extra level of sub-SELECT, so that the whole-row Var is
referring to the sub-SELECT's output and not to a named table type.
Then the Var is of type RECORD to begin with and there's no issue.)
Per report from Miles Delahunty. The faulty commit dates to 9.5,
so back-patch to all supported branches.
Discussion: https://postgr.es/m/2950001.1638729947@sss.pgh.pa.us
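
A sketch of the behavioral change and the workaround mentioned above
(table name invented):

    CREATE TABLE nt (a int, b int);
    -- With the fix, the whole-row value keeps the composite type's
    -- own column names, ignoring the query aliases:
    SELECT to_json(x) FROM nt AS x(c, d);              -- keys "a", "b"
    -- Workaround: an extra sub-SELECT makes the whole-row Var be of
    -- type RECORD, so the aliases do apply:
    SELECT to_json(y)
    FROM (SELECT * FROM nt AS x(c, d)) AS y;           -- keys "c", "d"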
Commit 83fd4532a7 allowed publishing of changes via ancestors, for
publications defined with publish_via_partition_root. But the way
the ancestor was determined in get_rel_sync_entry() was incorrect,
simply updating the same variable. So with multiple publications,
replicating different ancestors, the outcome depended on the order
of publications in the list - the value from the last loop was used,
even if it wasn't the top-most ancestor.
This is probably a rare situation, as in most cases publications do
not overlap, so each partition has exactly one candidate ancestor
to replicate as and there's no ambiguity.
Fixed by tracking the "ancestor level" for each publication, and
picking the top-most ancestor. Adds a test case, verifying the
correct ancestor is used for publishing the changes and that this
does not depend on order of publications in the list.
Older releases have another bug in this loop - once all actions are
replicated, the loop is terminated, on the assumption that inspecting
additional publications is unnecessary. But that misses the fact that
those additional publications may replicate different ancestors.
Fixed by removing this break condition. We might still terminate the
loop in some cases (e.g. when replicating all actions and the ancestor
is the partition root).
Backpatch to 13, where publish_via_partition_root was introduced.
Initial report and fix by me, test added by Hou zj. Reviews and
improvements by Amit Kapila.
Author: Tomas Vondra, Hou zj, Amit Kapila
Reviewed-by: Amit Kapila, Hou zj
Discussion: https://postgr.es/m/d26d24dd-2fab-3c48-0162-2b7f84a9c893%40enterprisedb.com
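
A sketch of the ambiguous case (names invented): two publications that
replicate different ancestors of the same leaf partition.

    CREATE TABLE root (a int) PARTITION BY RANGE (a);
    CREATE TABLE mid PARTITION OF root
      FOR VALUES FROM (0) TO (100) PARTITION BY RANGE (a);
    CREATE TABLE leaf PARTITION OF mid
      FOR VALUES FROM (0) TO (50);
    CREATE PUBLICATION pub_root FOR TABLE root
      WITH (publish_via_partition_root = true);
    CREATE PUBLICATION pub_mid FOR TABLE mid
      WITH (publish_via_partition_root = true);
    -- A subscription to both must publish leaf's changes via the
    -- top-most ancestor, root, regardless of publication order.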