We can use a single line to print all tuple counts that MERGE processed,
for conciseness, and elide those that are zero. Non-text formats
report all numbers, as is typical.
Per comment from Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20220511163350.GL19626@telsasoft.com
s/primary_slotname/primary_slot_name/.
Author: Bharath Rupireddy
Discussion: https://postgr.es/m/CALj2ACX3=pHkCpoGG-z+O=7Gp5YZv70jmfTyGnNV7YF3SkK73g@mail.gmail.com
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: dde45df385dab9032155c1f867b677d55695310c
In order to estimate the cache hit ratio of a Memoize node, one of the
inputs we require is the estimated number of times the Memoize node will
be rescanned. The higher this number, the larger the cache hit ratio is
likely to become. Unfortunately, the value being passed as the number of
"calls" to the Memoize was incorrectly using the Nested Loop's
outer_path->parent->rows instead of outer_path->rows. This failed to
account for the fact that the outer_path might be parameterized by some
upper-level Nested Loop.
This problem could lead to Memoize plans appearing more favorable than
they might actually be. It could also lead to extended executor startup
times when work_mem values were large due to the planner setting overly
large MemoizePath->est_entries resulting in the Memoize hash table being
initially made much larger than might be required.
Fix this simply by passing outer_path->rows rather than
outer_path->parent->rows. Also, adjust the expected regression test
output for a plan change.
Reported-by: Pavel Stehule
Author: David Rowley
Discussion: https://postgr.es/m/CAFj8pRAMp%3DQsMi6sPQJ4W3hczoFJRvyXHJV3AZAZaMyTVM312Q%40mail.gmail.com
Backpatch-through: 14, where Memoize was introduced
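
In sketch form, the fix is the final "calls" argument of the
create_memoize_path() call in get_memoize_path() (argument names and
order approximate):

    /*
     * Sketch only: pass the outer path's per-rescan row estimate, which
     * accounts for parameterization by upper-level Nested Loops, rather
     * than the raw size of the outer relation.
     */
    return (Path *) create_memoize_path(root, innerrel, inner_path,
                                        param_exprs, hash_operators,
                                        extra->inner_unique, binary_mode,
                                        outer_path->rows);  /* was outer_path->parent->rows */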
pgstat_create_function() creates stats for a function in a transactional
fashion, so the stats would be dropped if the transaction creating the
function is aborted, not committed.
Author: Amul Sul
Discussion: https://postgr.es/m/CAAJ_b97x1T3xgAMWNj4w7kSgN0nTuG-vLrQJ4NB-dsNr0Kudxw@mail.gmail.com
Like commit c9d297751959.
I started out with the intention to rename value_type to item_type to
avoid a collision with a typedef name that appears on some platforms.
Along the way, I noticed that the adjacent field "format" was not being
correctly handled by the backend/nodes/ infrastructure functions:
copyfuncs.c erroneously treated it as a scalar, while equalfuncs,
outfuncs, and readfuncs omitted handling it altogether. This looks like
it might be cosmetic at the moment because the field is always NULL
after parse analysis; but that's likely a bug in itself, and the code's
certainly not very future-proof. Let's fix it while we can still do so
without forcing an initdb on beta testers.
Further study found a few other inconsistencies in the backend/nodes/
infrastructure for the recently-added JSON node types, so fix those too.
catversion bumped because of potential change in stored rules.
Discussion: https://postgr.es/m/526703.1652385613@sss.pgh.pa.us
Currently, preloaded libraries are expected to request additional
shared memory and LWLocks in _PG_init(). However, it is not unusual
for such requests to depend on MaxBackends, which won't be
initialized at that time. Such requests could also depend on GUCs
that other modules might change. This introduces a new hook where
modules can safely use MaxBackends and GUCs to request additional
shared memory and LWLocks.
Furthermore, this change restricts requests for shared memory and
LWLocks to this hook. Previously, libraries could make requests
until the size of the main shared memory segment was calculated.
Unlike before, we no longer silently ignore requests received at
invalid times. Instead, we now raise a FATAL error if someone tries to request
additional shared memory or LWLocks outside of the hook.
Nathan Bossart and Julien Rouhaud
Discussion: https://postgr.es/m/20220412210112.GA2065815%40nathanxps13
Discussion: https://postgr.es/m/Yn2jE/lmDhKtkUdr@paquier.xyz
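
For extension authors, usage of the new hook looks roughly like this
(a minimal sketch; "my_ext" and MyExtState are hypothetical):

    #include "postgres.h"
    #include "miscadmin.h"
    #include "storage/ipc.h"
    #include "storage/lwlock.h"
    #include "storage/shmem.h"

    typedef struct MyExtState { int counter; } MyExtState;  /* hypothetical */

    static shmem_request_hook_type prev_shmem_request_hook = NULL;
    static void my_ext_shmem_request(void);

    void
    _PG_init(void)
    {
        /* MaxBackends is not initialized yet, so only install the hook here. */
        prev_shmem_request_hook = shmem_request_hook;
        shmem_request_hook = my_ext_shmem_request;
    }

    static void
    my_ext_shmem_request(void)
    {
        if (prev_shmem_request_hook)
            prev_shmem_request_hook();

        /* Here it is safe to consult MaxBackends and GUCs. */
        RequestAddinShmemSpace(mul_size(MaxBackends, sizeof(MyExtState)));
        RequestNamedLWLockTranche("my_ext", 1);
    }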
In the style of pgindent, done semi-manually.
Discussion: https://www.postgresql.org/message-id/flat/7d062ecc-7444-23ec-a159-acd8adf9b586%40enterprisedb.com
pgstat_report_stat() is only supposed to be called outside of transactions. In
5891c7a8ed8 I added a pgstat_report_stat() call into LogicalRepApplyLoop()'s
timeout branch. While not commonly reached inside a transaction, it is
reachable (e.g. due to network bottlenecks or the sender being stalled / slow
for some reason).
To fix, add a !IsTransactionState() check.
No test added because there's no easy way to reproduce this case without
patching the code.
Reported-By: Erik Rijkers <er@xs4all.nl>
Discussion: https://postgr.es/m/b3463b8c-2328-dcac-0136-af95715493c1@xs4all.nl
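
The fix amounts to a guard of this shape (sketch):

    /* In LogicalRepApplyLoop()'s timeout branch: reporting stats is only
     * safe between transactions. */
    if (!IsTransactionState())
        pgstat_report_stat(true);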
The declaration is automatically generated. Noticed when experimenting with
adding PGDLLIMPORT markers for functions.
Discussion: https://postgr.es/m/20220512164513.vaheofqp2q24l65r@alap3.anarazel.de
A few places used binary_upgrade_* variables without including the header,
which worked without warnings because the variables are defined in those
places. However that can cause linker complaints with MSVC - except that we
don't see them right now, due to the use of a symbol export file.
Discussion: https://postgr.es/m/20220512164513.vaheofqp2q24l65r@alap3.anarazel.de
Noticed when comparing the set of exported symbols without / with
-fvisibility=hidden after adding PGDLLIMPORT to intentionally exported
symbols.
Discussion: https://postgr.es/m/20220512164513.vaheofqp2q24l65r@alap3.anarazel.de
Run pgindent, pgperltidy, and reformat-dat-files.
I manually fixed a couple of comments that pgindent uglified.
Discussion: https://postgr.es/m/20220506234924.6mxxotl3xl63db3l@alap3.anarazel.de
This follows in the footsteps of commit 2591ee8ec by removing one more
ill-advised shortcut from planning of GroupingFuncs. It's true that
we don't intend to execute the argument expression(s) at runtime, but
we still have to process any Vars appearing within them, or we risk
failure at setrefs.c time (or more fundamentally, in EXPLAIN trying
to print such an expression). Vars in upper plan nodes have to have
referents in the next plan level, whether we ever execute 'em or not.
Per bug #17479 from Michael J. Sullivan. Back-patch to all supported
branches.
Richard Guo
Discussion: https://postgr.es/m/17479-6260deceaf0ad304@postgresql.org
The code for unloading a library has been commented-out for over 12
years, ever since commit 602a9ef5a7c60151e10293ae3c4bb3fbb0132d03, and we're
no closer to supporting it now than we were back then.
Nathan Bossart, reviewed by Michael Paquier and by me.
Discussion: http://postgr.es/m/Ynsc9bRL1caUSBSE@paquier.xyz
This fixes the grammar of some comments in a couple of tests (SQL and
TAP), and in some C files.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20220511020334.GH19626@telsasoft.com
To enable diagnosis of systems that are not processing ProcSignalBarrier
requests promptly, add a LOG message every 5 seconds if we seem to be
wedged. Although you could already see this state as a wait event in
pg_stat_activity, the log message also shows the PID of the process that
is preventing progress.
Also add DEBUG1 logging around the whole wait loop.
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/CA%2BTgmoYJ03r5359gQutRGP9BtigYCg3_UskcmnVjBf-QO3-0pQ%40mail.gmail.com
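
A hedged sketch of the wait loop's new shape (slot layout and loop
predicate simplified):

    /* In WaitForProcSignalBarrier(), per ProcSignal slot (sketch) */
    while (!barrier_absorbed)   /* hypothetical loop condition */
    {
        /* Sleep at most 5 seconds; on timeout, name the laggard and retry. */
        if (ConditionVariableTimedSleep(&slot->pss_barrierCV, 5000,
                                        WAIT_EVENT_PROC_SIGNAL_BARRIER))
            elog(LOG, "still waiting for backend with PID %d to accept ProcSignalBarrier",
                 (int) slot->pss_pid);
    }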
The problem is that we don't send keep-alive messages for a long time
while processing large transactions during logical replication in which
we don't send any data of those transactions. This can happen when the
table modified in the transaction is not published, or when all the
changes got filtered out. We do try to send a keepalive if necessary at
the end of the transaction (via WalSndWriteData()), but by that time the
subscriber side can time out and exit.
To fix this, we try to send the keepalive message, if required, after
processing a certain threshold of changes.
Reported-by: Fabrice Chapuis
Author: Wang wei and Amit Kapila
Reviewed By: Masahiko Sawada, Euler Taveira, Hou Zhijie, Hayato Kuroda
Backpatch-through: 10
Discussion: https://postgr.es/m/CAA5-nLARN7-3SLU_QUxfy510pmrYK6JJb=bk3hcgemAM_pAv+w@mail.gmail.com
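
A hedged sketch of the mechanism (threshold value, helper name, and
callback signature approximate):

    #define CHANGES_THRESHOLD 100   /* approximate */

    static void
    update_replication_progress(LogicalDecodingContext *ctx)
    {
        static int changes_count = 0;

        /*
         * Every CHANGES_THRESHOLD changes, give the walsender a chance to
         * send a keepalive, so that an idle subscriber does not reach
         * wal_receiver_timeout during a large, fully-filtered transaction.
         */
        if (++changes_count >= CHANGES_THRESHOLD)
        {
            OutputPluginUpdateProgress(ctx);
            changes_count = 0;
        }
    }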
Presently, the server may emit a variety of log messages when inspecting
a runtime-computed GUC, mostly in the shape of one LOG message with the
default configuration, related to the startup sequence: such GUCs
require loading the control file and external shared libraries.
For example, the server will always emit a "database system is shut
down" LOG (unless the user has set log_min_messages higher than LOG),
which is annoying behavior, as "postgres -C" is expected to emit only
the parameter value we are looking for. The parameter value is sent to
stdout, while the logs are sent to stderr, so we could recommend using a
redirection, but there was not much love for this workaround either.
To avoid such extra log messages, per discussion, this change sets
log_min_messages to FATAL internally when -C is used on a
runtime-computed GUC (even if set to PANIC in postgresql.conf). At
FATAL, the user will still receive messages explaining why a GUC value
cannot be inspected, and will know if the command is attempted on a
server already running, something not supported yet for a
runtime-computed GUC.
Reported-by: Magnus Hagander, Bruce Momjian
Author: Nathan Bossart, Michael Paquier
Discussion: https://postgr.es/m/Yni6ZHkGotUU+RSf@paquier.xyz
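
The essence of the change, as a sketch (assuming the GUC-flags helpers
of the same vintage; the real code may differ):

    /* When "-C <name>" targets a runtime-computed GUC, silence everything
     * below FATAL before the startup sequence runs (sketch). */
    if (GetConfigOptionFlags(output_config_variable, true) & GUC_RUNTIME_COMPUTED)
        SetConfigOption("log_min_messages", "fatal",
                        PGC_POSTMASTER, PGC_S_OVERRIDE);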
697492434 added 3 new quicksort specialization functions for common
datatypes.
That commit was not very consistent in how it would determine if we're
compiling for 32-bit or 64-bit machines. It would sometimes use
USE_FLOAT8_BYVAL and at other times check if SIZEOF_DATUM == 8. This
could cause theoretical problems due to the way USE_FLOAT8_BYVAL is now
defined based on SIZEOF_VOID_P >= 8. If pointers for some reason were
ever larger than 8 bytes, then we'd mistakenly end up doing 32-bit
comparisons. Let's just always check SIZEOF_DATUM >= 8.
It also seems that ssup_datum_signed_cmp is just never used on 32-bit
builds, so let's just ifdef that out to make sure we never accidentally
use that comparison function on such machines. This also allows us to
ifdef out 1 of the 3 new specialization quicksort functions in 32-bit
builds which seems to shrink down the binary by over 4KB on my machine.
In passing, also add the missing DatumGetInt32() / DatumGetInt64() macros
in the comparison functions.
Discussion: https://postgr.es/m/CAApHDvqcQExRhtRa9hJrJB_5egs3SUfOcutP3m+3HO8A+fZTPA@mail.gmail.com
Reviewed-by: John Naylor
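
After the change, the pattern looks roughly like this:

    /* Compile the signed-Datum comparator only when a Datum can hold an
     * int64, testing SIZEOF_DATUM rather than USE_FLOAT8_BYVAL (sketch). */
    #if SIZEOF_DATUM >= 8
    static int
    ssup_datum_signed_cmp(Datum x, Datum y, SortSupport ssup)
    {
        int64 xx = DatumGetInt64(x);    /* the newly-added macro use */
        int64 yy = DatumGetInt64(y);

        if (xx < yy)
            return -1;
        else if (xx > yy)
            return 1;
        else
            return 0;
    }
    #endif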
The parser code that transformed VALUES from row-oriented to
column-oriented lists failed if there were zero columns.
You can't write that straightforwardly (though probably you
should be able to), but the case can be reached by expanding
a "tab.*" reference to a zero-column table.
Per bug #17477 from Wang Ke. Back-patch to all supported branches.
Discussion: https://postgr.es/m/17477-0af3c6ac6b0a6ae0@postgresql.org
This reverts commit eafdf9de06e9b60168f5e47cedcfceecdc6d4b5f
and its back-branch counterparts. Corey Huinker pointed out that
we'd discussed this exact change back in 2016 and rejected it,
on the grounds that there's at least one usage pattern with LIMIT
where an infinite endpoint can usefully be used. Perhaps that
argument needs to be re-litigated, but there's no time left before
our back-branch releases. To keep our options open, restore the
status quo ante; if we do end up deciding to change things, waiting
one more quarter won't hurt anything.
Rather than just doing a straight revert, I added a new test case
demonstrating the usage with LIMIT. That'll at least remind us of
the issue if we forget again.
Discussion: https://postgr.es/m/3603504.1652068977@sss.pgh.pa.us
Discussion: https://postgr.es/m/CADkLM=dzw0Pvdqp5yWKxMd+VmNkAMhG=4ku7GnCZxebWnzmz3Q@mail.gmail.com
It intended to, but did not, achieve this. Adopt the new standard of
setting user ID just after locking the relation. Back-patch to v10 (all
supported versions).
Reviewed by Simon Riggs. Reported by Alvaro Herrera.
Security: CVE-2022-1552
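
The "new standard" pattern, sketched with the real miscadmin.h helpers
(surrounding code simplified):

    Oid         save_userid;
    int         save_sec_context;
    Relation    rel;

    rel = table_open(relid, AccessExclusiveLock);

    /* Switch to the table owner immediately after locking, so the rest of
     * the operation runs as the owner rather than the invoking user. */
    GetUserIdAndSecContext(&save_userid, &save_sec_context);
    SetUserIdAndSecContext(rel->rd_rel->relowner,
                           save_sec_context | SECURITY_RESTRICTED_OPERATION);

    /* ... perform the maintenance work ... */

    SetUserIdAndSecContext(save_userid, save_sec_context);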
When a feature enumerates relations and runs functions associated with
all found relations, the feature's user shall not need to trust every
user having permission to create objects. BRIN-specific functionality
in autovacuum neglected to account for this, as did pg_amcheck and
CLUSTER. An attacker having permission to create non-temp objects in at
least one schema could execute arbitrary SQL functions under the
identity of the bootstrap superuser. CREATE INDEX (not a
relation-enumerating operation) and REINDEX protected themselves too
late. This change extends to the non-enumerating amcheck interface.
Back-patch to v10 (all supported versions).
Sergey Shinderuk, reviewed (in earlier versions) by Alexander Lakhin.
Reported by Alexander Lakhin.
Security: CVE-2022-1552
If a cluster is promoted (i.e., the control file shows a state other
than DB_IN_ARCHIVE_RECOVERY) while CreateRestartPoint() is still
processing, this function could miss an update of the control file for
"checkPoint" and "checkPointCopy" but still do the recycling and/or
removal of the past WAL segments, assuming that the to-be-updated LSN
values should be used as reference points for the cleanup. This causes
a follow-up restart attempting crash recovery to fail with a PANIC on a
missing checkpoint record if the end-of-recovery checkpoint triggered by
the promotion did not complete while the cluster abruptly stopped or
crashed before the completion of this checkpoint. The PANIC would be
caused by the redo LSN referred in the control file as located in a
segment already gone, recycled by the previous restartpoint with
"checkPoint" out-of-sync in the control file.
This commit fixes the update of the control file during restartpoints so
that "checkPoint" and "checkPointCopy" are updated even if the cluster
has been promoted while a restartpoint is running, keeping them on par
with the set of WAL segments actually recycled at the end of
CreateRestartPoint().
This problem exists in all the stable branches. However, commit
7ff23c6, by removing the last call of CreateCheckPoint() from the
startup process, has made this bug much easier to reason about as
concurrent checkpoints are not possible anymore. No backpatch is done
yet, mostly out of caution from me as a point release is close by, but
we need to think harder about the case of concurrent checkpoints at
promotion if the bgwriter is not considered as running by the startup
process in ~v14, so this change is done only on HEAD for the moment.
Reported-by: Fujii Masao, Rui Zhao
Author: Kyotaro Horiguchi
Reviewed-by: Nathan Bossart, Michael Paquier
Discussion: https://postgr.es/m/20220316.102444.2193181487576617583.horikyota.ntt@gmail.com
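
In rough sketch form, the control-file update now looks like this (the
recovery-only fields, such as minRecoveryPoint, are elided):

    LWLockAcquire(ControlFileLock, LW_EXCLUSIVE);
    if (ControlFile->checkPointCopy.redo < lastCheckPoint.redo)
    {
        /* Refresh these fields even if the state is no longer
         * DB_IN_ARCHIVE_RECOVERY, i.e. even after a promotion, so that
         * the control file agrees with the WAL segments recycled below. */
        ControlFile->checkPoint = lastCheckPointRecPtr;
        ControlFile->checkPointCopy = lastCheckPoint;
        UpdateControlFile();
    }
    LWLockRelease(ControlFileLock);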
Commits 4eb21763 and b74e94dc introduced a way to force every backend to
close all relation files, to fix an ancient Windows-only bug.
This commit extends that behavior to all operating systems and adds
a couple of extra barrier points, to fix a totally different class of
bug: the reuse of relfilenodes in scenarios that have no other kind of
cache invalidation to prevent file descriptor mix-ups.
In all releases, data corruption could occur when you moved a database
to another tablespace and then back again. Despite that, no back-patch
for now as the infrastructure required is too new and invasive. In
master only, since commit aa010514, it could also happen when using
CREATE DATABASE with a user-supplied OID or via pg_upgrade.
Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/20220209220004.kb3dgtn2x2k2gtdm%40alap3.anarazel.de
With sufficiently bad luck, it was possible for IssuePendingWritebacks()
to reopen a file after we'd processed PROCSIGNAL_BARRIER_SMGRRELEASE and
before the file was unlinked by some other backend. That left a small
hole in commit 4eb21763's plan to fix all spurious errors from DROP
TABLESPACE and similar on Windows.
Fix by closing md.c's segments, instead of just closing fd.c's
descriptors, and then teaching smgrwriteback() not to open files that
aren't already open.
Reported-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/20220209220004.kb3dgtn2x2k2gtdm%40alap3.anarazel.de
Since 6bc8ef0b7f1f1df3998745a66e1790e27424aa0c, the maximum number
of backends can't change as background workers are registered, but
these comments still reflect the way things worked prior to that.
Also, per recent discussion, some modules call SetConfigOption()
from _PG_init(). It's not entirely clear to me whether we want to
regard that as a fully supported operation, but since we know it's
a thing that happens, it at least deserves a mention in the comments,
so add that.
Nathan Bossart, reviewed by Anton A. Melnikov
Discussion: http://postgr.es/m/20220419154658.GA2487941@nathanxps13
Introduced in 5aa2350.
Author: Peter Smith
Discussion: https://postgr.es/m/CAHut+PsuWz6_7aCmivNU5FahgQxDUTQtc3+__XnWkBzQcfn43w@mail.gmail.com
Update a few items that have become supported or mostly supported but
weren't updated at the time.
SubqueryScan was always getting labeled with a rowcount estimate
appropriate for non-parallel cases. However, nodes that are
underneath a Gather should be treated as processing only one
worker's share of the rows, whether the particular node is explicitly
parallel-aware or not. Most non-scan-level node types get this
right automatically because they base their rowcount estimate on
that of their input sub-Path(s). But SubqueryScan didn't do that,
instead using the whole-relation rowcount estimate as if it were
a non-parallel-aware scan node. If there is a parallel-aware node
below the SubqueryScan, this is wrong, and it results in inflating
the cost estimates for nodes above the SubqueryScan, which can cause
us to not choose a parallel plan, or choose a silly one --- as indeed
is visible in the one regression test whose results change with this
patch. (Although that plan tree appears to contain no SubqueryScans,
there were some in it before setrefs.c deleted them.)
To fix, use path->subpath->rows not baserel->tuples as the number
of input tuples we'll process. This requires estimating the quals'
selectivity afresh, which is slightly annoying; but it shouldn't
really add much cost thanks to the caching done in RestrictInfo.
This is pretty clearly a bug fix, but I'll refrain from back-patching
as people might not appreciate plan choices changing in stable branches.
The fact that it took us this long to identify the bug suggests that
it's not a major problem.
Per report from bucoo, though this is not his proposed patch.
Discussion: https://postgr.es/m/202204121457159307248@sohu.com
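
A hedged sketch of the costing change in cost_subqueryscan() (the real
code also deals with parameterized paths):

    /* Base the estimate on the subpath's row count, which is already
     * divided among parallel workers where appropriate, instead of the
     * whole-relation baserel->tuples; the quals' selectivity has to be
     * recomputed, but RestrictInfo caching keeps that cheap. */
    path->path.rows = clamp_row_est(path->subpath->rows *
                                    clauselist_selectivity(root,
                                                           baserel->baserestrictinfo,
                                                           0, JOIN_INNER, NULL));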
It doesn't seem very useful, and it's a bit in the way of the planned
node support automation.
Discussion: https://www.postgresql.org/message-id/202204191140.3wsbevfhqmu3@alvherre.pgsql
The tests added in 9f8a050f68d failed nearly reliably on FreeBSD in CI, and
occasionally on the buildfarm. That turns out to be caused not by a bug in the
test, but by a longstanding bug in recovery conflict handling.
The standby timeout handler, used by ResolveRecoveryConflictWithBufferPin(),
executed SendRecoveryConflictWithBufferPin() inside a signal handler. That
was a bad idea, because the deadlock timeout handler (or a spurious latch
set) could have interrupted ProcWaitForSignal(). If unlucky, that could
cause a self-deadlock on ProcArrayLock, if the deadlock check is in
SendRecoveryConflictWithBufferPin()->CancelDBBackends().
To fix, set a flag in StandbyTimeoutHandler(), and check the flag in
ResolveRecoveryConflictWithBufferPin().
Subsequently the recovery conflict tests will be backpatched.
Discussion: https://postgr.es/m/20220413002626.udl7lll7f3o7nre7@alap3.anarazel.de
Backpatch: 10-
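
The flag-based pattern of the fix, sketched:

    /* The timeout handler now only sets a flag; sending the conflict
     * signal from signal-handler context was the unsafe part. */
    static volatile sig_atomic_t got_standby_timeout = false;

    static void
    StandbyTimeoutHandler(void)
    {
        got_standby_timeout = true;
    }

    /* Later, in ResolveRecoveryConflictWithBufferPin()'s wait loop: */
    if (got_standby_timeout)
    {
        got_standby_timeout = false;
        SendRecoveryConflictWithBufferPin(PROCSIG_RECOVERY_CONFLICT_BUFFERPIN);
    }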
Instability in the test for pg_walinspect revealed that
pg_get_wal_records_info_till_end_of_wal(x) would try to decode all the
records with a start LSN earlier than the flush LSN, even though that
might include a partial record at the end of the range. In that case,
read_local_xlog_page_no_wait() would return NULL when it tried to read
past the flush LSN, which would be interpreted as an error by the
caller. That caused a test failure only on a BF animal that had been
restarted recently, but could be expected to happen in the wild quite
easily depending on the alignment of various parameters.
Fix by using private data in read_local_xlog_page_no_wait() to signal
end-of-wal to the caller, so that it can be properly distinguished
from a real error.
Discussion: https://postgr.es/m/Ymd/e5eeZMNAkrXo%40paquier.xyz
Discussion: https://postgr.es/m/111657.1650910309@sss.pgh.pa.us
Authors: Thomas Munro, Bharath Rupireddy.
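
The private-data mechanism, sketched:

    /* State handed to the xlogreader callback (sketch) */
    typedef struct ReadLocalXLogPageNoWaitPrivate
    {
        bool end_of_wal;    /* true once we run past the flush LSN */
    } ReadLocalXLogPageNoWaitPrivate;

    /* In read_local_xlog_page_no_wait(), on hitting the flush LSN: */
    private_data->end_of_wal = true;
    return -1;      /* the caller now checks the flag, not just the -1 */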
Discussion: https://postgr.es/m/d03d809c-d0fb-fd6a-1476-d6dc18ec940e@dunslane.net
Commit f4fb45d15c contained a bug in the removal of items with null
values when unique keys are required: it misbehaved when the leading
sorted items contained such values. Fix that and add a test for it.
Discussion: https://postgr.es/m/CAJA4AWQ_XbSmsNbW226UqNyRLJ+wb=iQkQMj77cQyoNkqtf=2Q@mail.gmail.com
mark_async_capable_plan(), which is called from create_append_plan() to
determine whether subplans are async-capable, failed to take into
account that the subplan created from a given subpath might include a
gating Result node if the subpath is a SubqueryScanPath or ForeignPath.
This caused a segmentation fault when the subplan created from a
SubqueryScanPath included the Result node, and caused ExecAsyncRequest()
to throw an error about an unrecognized node type when the subplan
created from a ForeignPath included the Result node: in the latter case
the Result node was unintentionally considered async-capable, but we
don't currently support executing Result nodes asynchronously. Fix by
modifying mark_async_capable_plan() to disable asynchronous execution in
such cases. Also, adjust code in the ProjectionPath case in
mark_async_capable_plan(), for consistency with other cases, and
adjust/improve comments there.
is_async_capable_path() added in commit 27e1f1456, which was rewritten
to mark_async_capable_plan() in a later commit, has the same issue,
causing the error at execution mentioned above, so back-patch to v14
where the aforesaid commit went in.
Per report from Justin Pryzby.
Etsuro Fujita, reviewed by Zhihong Yu and Justin Pryzby.
Discussion: https://postgr.es/m/20220408124338.GK24419%40telsasoft.com
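
The added guards amount to checks of this shape (fragment, hedged):

    switch (nodeTag(path))
    {
        case T_SubqueryScanPath:
            /* If a gating Result sits on top, the plan node is a Result,
             * not a SubqueryScan, and Result cannot run asynchronously. */
            if (!IsA(plan, SubqueryScan))
                return false;
            break;
        case T_ForeignPath:
            if (!IsA(plan, ForeignScan))
                return false;
            break;
        default:
            break;
    }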
This reverts commits 2c902bb and ccfbd92. Per buildfarm members
kestrel, rorqual and calliphoridae, the assertions checking that a TLI
history file should not exist when created by a WAL receiver have been
failing, and switching to durable_rename() over durable_rename_excl()
would cause the newest TLI history file to overwrite the existing one.
We need to think harder about such cases, so revert the new logic for
now.
Note that all the failures have been reported in the test
025_stuck_on_old_timeline.
Discussion: https://postgr.es/m/511362.1651116498@sss.pgh.pa.us
Euler Taveira
Discussion: https://www.postgresql.org/message-id/25f95189-eef8-43c4-9d7b-419b651963c8%40www.fastmail.com
ccfbd92 has replaced all existing in-core callers of this function in
favor of durable_rename(). durable_rename_excl() is inherently unsafe
if a crash happens at the wrong time, so just remove it.
Author: Nathan Bossart
Reviewed-by: Robert Haas, Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/20220407182954.GA1231544@nathanxps13
durable_rename_excl() attempts to avoid overwriting any existing files
by using link() and unlink(), falling back to rename() on some platforms
(e.g., Windows where link() followed by unlink() is not concurrent-safe,
see 909b449). Most callers of durable_rename_excl() use it just in case
there is an existing file, but it happens that for all of them we never
expect a target file to exist (WAL segment recycling, creation of
timeline history file and basic_archive).
basic_archive used durable_rename_excl() to avoid overwriting an archive
concurrently created by another server. Now, there is a stat() call to
avoid overwriting an existing archive a couple of lines above, so note
that this change opens a small TOCTOU window in this module between the
stat() call and durable_rename().
Furthermore, as mentioned in the top comment of durable_rename_excl(),
this routine can result in multiple hard links to the same file and data
corruption, with two or more links to the same file in pg_wal/ if a
crash happens before the unlink() call during WAL recycling.
Specifically, this would produce links to the same file for the current
WAL file and the next one because the half-recycled WAL file was
re-recycled during crash recovery of a follow-up cluster restart.
This change replaces all calls to durable_rename_excl() with
durable_rename(). This removes the protection against accidentally
overwriting an existing file, but some platforms already live without
it, and none of these code paths ever expects an existing file (a couple
of assertions are added to check that, just in case).
This is a bug fix, but given the unlikeliness of the problem, which
requires one or more crashes at an exceptionally bad moment, no
backpatch is done. This could be revisited in the future.
Author: Nathan Bossart
Reviewed-by: Robert Haas, Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/20220407182954.GA1231544@nathanxps13
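
The hazard is easiest to see from the essence of the two primitives
(plain POSIX sketch; the fsync and error-reporting details are omitted):

    #include <stdio.h>
    #include <unistd.h>

    /* Essence of the removed durable_rename_excl(): refuses to overwrite,
     * but a crash between link() and unlink() leaves two hard links to
     * the same file, the WAL-recycling corruption described above. */
    static int
    rename_excl_essence(const char *oldfile, const char *newfile)
    {
        if (link(oldfile, newfile) < 0)
            return -1;
        /* a crash here leaves oldfile and newfile as links to one inode */
        return unlink(oldfile);
    }

    /* Essence of durable_rename(): atomic and never leaves two links,
     * but silently replaces an existing newfile. */
    static int
    rename_essence(const char *oldfile, const char *newfile)
    {
        return rename(oldfile, newfile);
    }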
Unlike existing WRITE_*_ARRAY macros, WRITE_INDEX_ARRAY needs to
handle the case that the field is NULL. We already have the
convention to print NULL fields as "<>", so we do that here as well.
There is currently no corresponding read function for this, so reading
this back in is not implemented, but it could be if needed.
Reported-by: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/CAMbWs4-LN%3DbF8f9eU2R94dJtF54DfDvBq%2BovqHnOQqbinYDrUw%40mail.gmail.com
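
The macro now handles the NULL case along these lines (sketch of the
outfuncs.c side):

    #define WRITE_INDEX_ARRAY(fldname, len) \
        do { \
            appendStringInfoString(str, " :" CppAsString(fldname) " "); \
            if (node->fldname) \
                for (int i = 0; i < len; i++) \
                    appendStringInfo(str, " %u", node->fldname[i]); \
            else \
                appendStringInfoString(str, "<>"); \
        } while (0)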
Several places didn't do it, and in many cases it didn't matter because
it would be a small allocation in a short-lived context; but other
places may accumulate a few (for example, in CreateDatabaseUsingFileCopy,
one per tablespace). In most databases this is highly unlikely to be
very serious either, but it seems better to make the code consistent in
case there's future copy-and-paste.
The only case of actual concern seems to be the aforementioned routine,
which is new with commit 9c08aea6a309, so there's no need to backpatch.
As pointed out by Coverity.
This function looks for a reference to the recursive WITH CTE,
but it checked only the CTE name not ctelevelsup, so that it could
seize on a lower CTE that happened to have the same name. This
would result in planner failures later, either weird errors such as
"could not find attribute 2 in subquery targetlist", or crashes
or assertion failures. The code also merely Assert'ed that it found
a matching entry, which is not guaranteed at all by the parser.
Per bugs #17320 and #17318 from Zhiyong Wu.
Thanks to Kyotaro Horiguchi for investigation.
Discussion: https://postgr.es/m/17320-70e37868182512ab@postgresql.org
Discussion: https://postgr.es/m/17318-2eb65a3a611d2368@postgresql.org
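
In sketch form, the tightened lookup (variable names approximate):

    /* Match the recursive CTE by name AND query level, and report failure
     * instead of merely Assert'ing that a match exists. */
    if (rte->rtekind == RTE_CTE &&
        rte->ctelevelsup == levelsup &&
        strcmp(rte->ctename, cte->ctename) == 0)
        return rte;

    elog(ERROR, "could not find self-reference to CTE \"%s\"",
         cte->ctename);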