Add WITHOUT OVERLAPS clause to PRIMARY KEY and UNIQUE constraints.
These are backed by GiST indexes instead of B-tree indexes, since they
are essentially exclusion constraints with = for the scalar parts of
the key and && for the temporal part.
(previously committed as 46a0cd4cefb, reverted by 46a0cd4cefb; the new
part is this:)
Because 'empty' && 'empty' is false, the temporal PK/UQ constraint
allowed duplicates, which is confusing to users and breaks internal
expectations. For instance, when GROUP BY checks functional
dependencies on the PK, it allows selecting other columns from the
table, but in the presence of duplicate keys you could get the value
from any of their rows. So we need to forbid empties.
This all means that at the moment we can only support ranges and
multiranges for temporal PK/UQs, unlike the original patch (above).
Documentation and tests for this are added. But this could
conceivably be extended by introducing some more general support for
the notion of "empty" for other types.
Author: Paul A. Jungwirth <pj@illuminatedcomputing.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: jian he <jian.universality@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CA+renyUApHgSZF9-nd-a0+OPGharLQLO=mDHcY4_qQ0+noCUVg@mail.gmail.com
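
For illustration, a minimal sketch of the resulting syntax (table and column
names are hypothetical, and the scalar key part is assumed to need btree_gist
for GiST equality support):

    CREATE EXTENSION IF NOT EXISTS btree_gist;

    CREATE TABLE room_reservation (
        room_id  integer,
        during   daterange,
        PRIMARY KEY (room_id, during WITHOUT OVERLAPS)
    );

    -- Overlapping (or empty) ranges for the same room_id are rejected.
    INSERT INTO room_reservation VALUES (1, '[2024-01-01,2024-01-10)');
    INSERT INTO room_reservation VALUES (1, '[2024-01-05,2024-01-15)');  -- error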
This commit is similar to 1eff8279d and expands the idea to Window
aggregate nodes so that users can know how much memory or disk the
tuplestore used.
This commit uses the newly introduced tuplestore_get_stats() to obtain this
information and adds some additional output in EXPLAIN ANALYZE to
display the information for the Window aggregate node.
Reviewed-by: David Rowley, Ashutosh Bapat, Maxim Orlov, Jian He
Discussion: https://postgr.es/m/20240706.202254.89740021795421286.ishii%40postgresql.org
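
A hedged usage sketch (the query is hypothetical and the exact wording of the
extra output is whatever this commit emits):

    EXPLAIN (ANALYZE, COSTS OFF)
    SELECT n, count(*) OVER (PARTITION BY n % 10)
    FROM generate_series(1, 10000) AS t(n);
    -- The WindowAgg node is expected to report its tuplestore usage,
    -- along the lines of:  Storage: Memory  Maximum Storage: NNNkB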
1eff8279d added an API to tuplestore.c to allow callers to obtain
storage telemetry data. That API wasn't quite good enough for callers
that perform tuplestore_clear() as the telemetry functions only
accounted for the current state of the tuplestore, not the maximums
before tuplestore_clear() was called.
There's a pending patch that would like to add tuplestore telemetry
output to EXPLAIN ANALYZE for WindowAgg. That node type uses
tuplestore_clear() before moving to the next window partition and we
want to show the maximum space used, not the space used for the final
partition.
Reviewed-by: Tatsuo Ishii, Ashutosh Bapat
Discussion: https://postgr.es/m/CAApHDvoY8cibGcicLV0fNh=9JVx9PANcWvhkdjBnDCc9Quqytg@mail.gmail.com
If there are subqueries in the grouping expressions, each of these
subqueries in the targetlist and HAVING clause is expanded into
distinct SubPlan nodes. As a result, only one of these SubPlan nodes
would be converted to a reference to the grouping key column output by
the Agg node; the others would have to be evaluated afresh. This is not
efficient, and with grouping sets it can produce wrong results in cases
where the values should go to NULL because they come from the wrong
grouping set. Furthermore, during re-evaluation, these SubPlan nodes
might use nulled column values from grouping sets, which is not
correct.
This issue is not limited to subqueries. For other types of
expressions that are part of grouping items, if they are transformed
into another form during preprocessing, they may fail to match lower
target items. This can also lead to wrong results with grouping sets.
To fix this issue, we introduce a new kind of RTE representing the
output of the grouping step, with columns that are the Vars or
expressions being grouped on. In the parser, we replace the grouping
expressions in the targetlist and HAVING clause with Vars referencing
this new RTE, so that the output of the parser directly expresses the
semantic requirement that the grouping expressions be obtained from the
grouping output rather than computed some other way. In the planner,
we first preprocess all the columns of this new RTE and then replace
any Vars in the targetlist and HAVING clause that reference this new
RTE with the underlying grouping expressions, so that we will have
only one instance of a SubPlan node for each subquery contained in the
grouping expressions.
Bump catversion because this changes the querytree produced by the
parser.
Thanks to Tom Lane for the idea to invent a new kind of RTE.
Per reports from Geoff Winkless, Tobias Wendorff, Richard Guo from
various threads.
Author: Richard Guo
Reviewed-by: Ashutosh Bapat, Sutou Kouhei
Discussion: https://postgr.es/m/CAMbWs4_dp7e7oTwaiZeBX8+P1rXw4ThkZxh1QG81rhu9Z47VsQ@mail.gmail.com
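
A hypothetical sketch of the kind of query affected (t1(a, b) is an assumed
table; the subquery appears both in the target list and in a grouping set):

    SELECT (SELECT a) AS a_sub, sum(b)
    FROM t1
    GROUP BY GROUPING SETS (((SELECT a)), ());
    -- a_sub must be taken from the grouping output, so it is NULL for the
    -- () grouping set rather than a freshly re-evaluated subquery result.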
This change adds WAL usage reporting to the output of ANALYZE VERBOSE
and autoanalyze reports. It aligns the analyze output with VACUUM,
providing consistency. Additionally, it aids in troubleshooting cases
where WAL records are generated during analyze operations.
Author: Anthonin Bonnefoy
Reviewed-by: Masahiko Sawada
Discussion: https://postgr.es/m/CAO6_Xqr__kTTCLkftqS0qSCm-J7_xbRG3Ge2rWhucxQJMJhcRA%40mail.gmail.com
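
A usage sketch, assuming a hypothetical table name and paraphrasing the
reporting style already used by VACUUM:

    ANALYZE VERBOSE some_table;
    -- The INFO output now also carries a WAL usage line, e.g.:
    --   WAL usage: N records, N full page images, N bytes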
SPI_connect/SPI_connect_ext have not returned any value other than
SPI_OK_CONNECT since commit 1833f1a1c in v10; any errors are thrown
via ereport. (The most likely failure is out-of-memory, which has
always been thrown that way, so callers had better be prepared for
such errors.) This makes it somewhat pointless to check these
functions' result, and some callers within our code haven't been
bothering; indeed, the only usage example within spi.sgml doesn't
bother. So it's likely that the omission has propagated into
extensions too.
Hence, let's standardize on not checking, and document the return
value as historical, while not actually changing these functions'
behavior. (The original proposal was to change their return type
to "void", but that would needlessly break extensions that are
conforming to the old practice.) This saves a small amount of
boilerplate code in a lot of places.
Stepan Neretin
Discussion: https://postgr.es/m/CAMaYL5Z9Uk8cD9qGz9QaZ2UBJFOu7jFx5Mwbznz-1tBbPDQZow@mail.gmail.com
As introduced by f9900df5f94, a REINDEX CONCURRENTLY job done for an
index with predicates or expressions would set PROC_IN_SAFE_IC in its
MyProc->statusFlags, causing it to be ignored by other concurrent
operations.
Such concurrent index rebuilds should never be ignored, as a predicate
or an expression could call a user-defined function that accesses a
different table than the table where the index is rebuilt.
A test that uses injection points is added, backpatched down to 17.
Michail has proposed a different test, but I have added something
simpler with more coverage.
Oversight in f9900df5f949.
Author: Michail Nikolaev
Discussion: https://postgr.es/m/CANtu0oj9A3kZVduFTG0vrmGnKB+DCHgEpzOp0qAyOgmks84j0w@mail.gmail.com
Backpatch-through: 14
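
A sketch of the kind of index involved (table t and the immutable function
f() are hypothetical):

    CREATE INDEX t_expr_idx ON t ((f(a)));           -- expression index
    CREATE INDEX t_pred_idx ON t (a) WHERE f(a) > 0; -- partial index
    REINDEX INDEX CONCURRENTLY t_expr_idx;
    -- Since f() could read another table, this rebuild must not be marked
    -- with PROC_IN_SAFE_IC and must not be ignored by concurrent operations.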
The following routines are changed to use the syscache entries added for
pg_extension in 490f869d92e5:
- get_extension_oid()
- get_extension_name()
- get_extension_schema()
A catalog scan is costly and could easily lead to a noticeable
performance impact when called once or more per query, so this will be
helpful for developers doing extension data lookups.
Author: Andrei Lepikhov
Reviewed-by: Jelte Fennema-Nio
Discussion: https://postgr.es/m/529295b2-6ba9-4dae-acd1-20a9c6fb8f9a@gmail.com
Instead just look up the collation and check collate_is_c field.
Author: Andreas Karlsson
Discussion: https://postgr.es/m/60929555-4709-40a7-b136-bcb44cff5a3c@proxel.se
This commit updates a couple of error messages around control file data,
GUCs and server settings, unifying to the same message where possible.
This reduces the translation burden a bit.
Author: Peter Smith
Discussion: https://postgr.es/m/CAHut+Pv-kSN8SkxSdoHano_wPubqcg5789ejhCDZAcLFceBR-w@mail.gmail.com
Author: Alexander Lakhin
Discussion: https://postgr.es/m/f7e514cf-2446-21f1-a5d2-8c089a6e2168@gmail.com
Similarly to 2065ddf5e34c, this introduces a define for "pg_tblspc".
This makes the style more consistent with the existing PG_STAT_TMP_DIR,
for example.
One difference from the other cases is the introduction of
PG_TBLSPC_DIR_SLASH, required in two places for recovery and backups.
Author: Bertrand Drouvot
Reviewed-by: Ashutosh Bapat, Álvaro Herrera, Yugo Nagata, Michael
Paquier
Discussion: https://postgr.es/m/ZryVvjqS9SnV1GPP@ip-10-97-1-34.eu-west-3.compute.internal
This commit removes log_cnt from the tuple returned by the SQL function.
This field is an internal counter that tracks when a WAL record should
be generated for a sequence, and it is reset each time the sequence is
restored or recovered. The field is not needed to rebuild the sequence
DDL commands for pg_dump and pg_upgrade, where this function is used.
The field can still be queried with a scan of the "table" created
under the hood for a sequence.
Issue noticed while hacking on a feature that can rely on this new
function rather than pg_sequence_last_value(), aimed at making sequence
computation more easily pluggable.
Bump catalog version.
Reviewed-by: Nathan Bossart
Discussion: https://postgr.es/m/Zsvka3r-y2ZoXAdH@paquier.xyz
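
A hedged sketch of the resulting behavior (sequence name hypothetical):

    CREATE SEQUENCE s;
    SELECT * FROM pg_sequence_read_tuple('s');  -- no longer includes log_cnt
    SELECT log_cnt FROM s;                      -- still visible via the
                                                -- sequence's backing "table"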
Specifying a USING expression when altering the type of a generated
column does not make sense: it would write the output of the USING
clause into the converted column, which would violate the generation
expression. This adds a check to error out if this is specified.
There was a test for this, but that test errored out for a different
reason, so it was not effective.
Reported-by: Jian He <jian.universality@gmail.com>
Reviewed-by: Yugo NAGATA <nagata@sraoss.co.jp>
Discussion: https://www.postgresql.org/message-id/flat/c7083982-69f4-4b14-8315-f9ddb20b9834%40eisentraut.org
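
A minimal sketch of the now-rejected case (table and expressions are
hypothetical):

    CREATE TABLE gt (
        a int,
        b int GENERATED ALWAYS AS (a * 2) STORED
    );
    ALTER TABLE gt ALTER COLUMN b TYPE bigint USING (b + 1);  -- now an error
    ALTER TABLE gt ALTER COLUMN b TYPE bigint;  -- without USING, presumably
                                                -- still allowed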
Both sub-commands used the same routine to switch the relpersistence of a
relation, duplicated the same checks, and used a style inconsistent with
access methods and tablespaces.
SET LOGGED/UNLOGGED is refactored to avoid any duplication, setting the
reason why a relation rewrite happens within ATPrepChangePersistence().
This shaves some code.
Discussion: https://postgr.es/m/ZiiyGFTBNkqcMQi_@paquier.xyz
macOS doesn't have posix_fadvise(), but fcntl() with the F_RDADVISE
command does the same thing.
Some related documentation has been generalized to not mention
posix_fadvise() specifically anymore.
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/0827edec-1317-4917-a186-035eb1e3241d%40eisentraut.org
This commit reverts 1adf16b8fb, 87c21bb941, and subsequent fixes and
improvements including df64c81ca9, c99ef1811a, 9dfcac8e15, 885742b9f8,
842c9b2705, fcf80c5d5f, 96c7381c4c, f4fc7cb54b, 60ae37a8bc, 259c96fa8f,
449cdcd486, 3ca43dbbb6, 2a679ae94e, 3a82c689fd, fbd4321fd5, d53a4286d7,
c086896625, 4e5d6c4091, 04158e7fa3.
The reason for reverting is security issues related to repeatable name lookups
(CVE-2014-0062). Even though 04158e7fa3 solved part of the problem, there
are still remaining issues, which aren't feasible to even carefully analyze
before the RC deadline.
Reported-by: Noah Misch, Robert Haas
Discussion: https://postgr.es/m/20240808171351.a9.nmisch%40google.com
Backpatch-through: 17
Currently, createPartitionTable() opens the newly created table using its
name. This approach is prone to a privilege escalation attack, because we
might end up opening a different table than the one we just created.
This commit addresses the issue by opening the newly created table by its
OID. It appears to be tricky to get a relation OID out of ProcessUtility(),
so we have to extend TableLikeClause with a new newRelationOid field, which
is filled within ProcessUtility() to be accessed by the caller afterwards.
Security: CVE-2014-0062
Reported-by: Noah Misch
Discussion: https://postgr.es/m/20240808171351.a9.nmisch%40google.com
Reviewed-by: Pavel Borisov, Dmitry Koval
Apply the same code simplification to ATExecAddColumn as was done in
7ff9afbbd: apply GETSTRUCT() once instead of doing it repeatedly in
the same function.
Author: Tender Wang
Discussion: https://postgr.es/m/CAHewXNkO9+U437jvKT14s0MCu6Qpf6G-p2mZK5J9mAi4cHDgpQ@mail.gmail.com
For a long time we have forbidden binary-coercible casts to or from
composite and array types, because such a cast cannot work correctly:
the type OID embedded in the value would need to change, but it won't
in a binary coercion. That reasoning applies equally to range types,
but we overlooked installing a similar restriction here when we
invented range types. Do so now.
Given the lack of field complaints, we won't change this in stable
branches, but it seems not too late for v17.
Per discussion of a problem noted by Peter Eisentraut.
Discussion: https://postgr.es/m/076968e1-0852-40a9-bc0b-117cd3f0e43c@eisentraut.org
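
A hypothetical sketch of the kind of cast that is now refused (the custom
range type exists only for illustration):

    CREATE TYPE myintrange AS RANGE (subtype = int4);
    CREATE CAST (myintrange AS int4range) WITHOUT FUNCTION;
    -- rejected: a binary coercion cannot fix up the range type OID
    -- embedded in the value.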
Now that disable_cost is not included in the cost estimate, there's
no visible sign in EXPLAIN output of which plan nodes are disabled.
Fix that by propagating the number of disabled nodes from Path to
Plan, and then showing it in the EXPLAIN output.
There is some question about whether this is a desirable change.
While I personally believe that it is, it seems best to make it a
separate commit, in case we decide to back out just this part, or
rework it.
Reviewed by Andres Freund, Heikki Linnakangas, and David Rowley.
Discussion: http://postgr.es/m/CA+TgmoZ_+MS+o6NeGK2xyBv-xM+w1AfFVuHE4f_aq6ekHv7YSQ@mail.gmail.com
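
A hedged sketch of how this might surface (table name hypothetical; the
exact label is whatever this commit prints):

    SET enable_seqscan = off;
    EXPLAIN SELECT * FROM t;   -- t has no usable index
    -- The planner still has to pick a Seq Scan; the node is now annotated
    -- with a disabled-node count, e.g. "Disabled Nodes: 1".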
Apply GETSTRUCT() once instead of doing it repeatedly in the same
function. This simplifies the notation and makes the function's
structure more similar to the surrounding ones.
Discussion: https://www.postgresql.org/message-id/flat/a368248e-69e4-40be-9c07-6c3b5880b0a6@eisentraut.org
Add a comment explaining dropdb() can't rely on syscache. The issue with
flattened rows was fixed by commit 0f92b230f88b, but it's better to have
a clear explanation of why the systable scan is necessary. The other places
doing in-place updates on pg_database have the same comment.
Suggestion and patch by Yugo Nagata. Backpatch to 12, same as the fix.
Author: Yugo Nagata
Backpatch-through: 12
Discussion: https://postgr.es/m/CAJTYsWWNkCt+-UnMhg=BiCD3Mh8c2JdHLofPxsW3m2dkDFw8RA@mail.gmail.com
Commit c66a7d75e652 modified DROP DATABASE so that if interrupted, the
database is known to be in an invalid state and can only be dropped.
This is done by setting a flag using an in-place update, so that it's
not lost in case of rollback.
For databases with many ACLs, this may however fail like this:
ERROR: wrong tuple length
This happens because with many ACLs, the pg_database.datacl attribute
gets TOASTed. The dropdb() code reads the tuple from the syscache, which
means it's detoasted. But the in-place update expects the tuple length
to match the on-disk tuple.
Fixed by reading the tuple from the catalog directly, not from syscache.
Report and fix by Ayush Tiwari. Backpatch to 12. The DROP DATABASE fix
was backpatched to 11, but 11 is EOL at this point.
Reported-by: Ayush Tiwari
Author: Ayush Tiwari
Reviewed-by: Tomas Vondra
Backpatch-through: 12
Discussion: https://postgr.es/m/CAJTYsWWNkCt+-UnMhg=BiCD3Mh8c2JdHLofPxsW3m2dkDFw8RA@mail.gmail.com
Commit 97ddda8a82ac470ae581d0eb485b6577707678bc removed the rmtree()
behavior from XLOG_TBLSPC_CREATE, obsoleting that part of the comment.
The comment's point about XLOG_DBASE_CREATE was wrong when commit
fa0f466d5329e10b16f3b38c8eaf5306f7e234e8 introduced the point. (It
would have been accurate if that commit had predated commit
fbcbc5d06f53aea412130deb52e216aa3883fb8d introducing the second
checkpoint of CREATE DATABASE.) Nothing can skip log_smgrcreate() on
the basis of wal_level=minimal, so don't comment on that.
Commit c6b92041d38512a4176ed76ad06f713d2e6c01a8 expanded WAL skipping
from five specific operations to relfilenodes generally, hence the
CreateDatabaseUsingFileCopy() comment change.
Discussion: https://postgr.es/m/20231008022204.cc@rfd.leadboat.com
There's not much point in asserting a pointer isn't NULL after some code
has already dereferenced that pointer.
Adjust the code so that the Assert occurs before the pointer dereference.
The Assert probably has questionable value in the first place, but it
seems worth keeping around to document the contract between
CopyMultiInsertInfoNextFreeSlot() and its callers.
Author: Amul Sul <sulamul@gmail.com>
Discussion: https://postgr.es/m/CAAJ_b94hXQzXaJxTLShkxQUgezf_SUxhzX9TH2f-g6gP7bne7g@mail.gmail.com
The handling of binary and text formats is quite different here, so
it's clearer to check for the format first and have two separate
loops.
Author: jian he <jian.universality@gmail.com>
Reviewed-by: Ilia Evdokimov, Junwang Zhao
Discussion: https://www.postgresql.org/message-id/CACJufxFzHCeFBQF0M%2BSgk_NwknWuQ4oU7tS1isVeBrbhcKOHkg@mail.gmail.com
There were several sets of very similar local variable names, such as
"downer" and "dbowner", which was very confusing and error-prone.
Rename the former to "ownerEl" and so on, similar to collationcmds.c
and typecmds.c.
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/flat/e5bce225-ee04-40c7-a280-ea7214318048%40eisentraut.org
Attempting to add a system column for a table to an existing publication
would result in the not very intuitive error message of:
ERROR: negative bitmapset member not allowed
Here we improve that so it displays the same error message a user
would see if they tried to include a system column when first adding
the table to the publication.
Doing this requires making the function which validates the list of
columns an extern function. The signature of the static function wasn't
an ideal external API as it made the code more complex than it needed to be.
Here we adjust the function to have it populate a Bitmapset of attribute
numbers. Doing it this way allows code simplification.
There was no particular bug here other than the weird error message, so
no backpatch.
Bug: #18558
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Author: Peter Smith, David Rowley
Discussion: https://postgr.es/m/18558-411bc81b03592125@postgresql.org
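
A hypothetical reproduction sketch (publication and table names assumed):

    CREATE PUBLICATION pub FOR TABLE other_tab;
    ALTER PUBLICATION pub ADD TABLE t (ctid, a);
    -- previously: ERROR:  negative bitmapset member not allowed
    -- now: the same "cannot use system column" style of error that
    --      CREATE PUBLICATION ... FOR TABLE t (ctid, a) already raised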
Previously, log_autovacuum_min_duration utilized dedicated code for
logging resource statistics, such as system and buffer usage during
autoanalyze. However, this logging functionality was not utilized by
ANALYZE VERBOSE.
This commit adds resource statistics reporting to ANALYZE VERBOSE by
reusing the same logging code as autoanalyze.
Author: Anthonin Bonnefoy
Reviewed-by: Masahiko Sawada
Discussion: https://postgr.es/m/CAO6_Xqr__kTTCLkftqS0qSCm-J7_xbRG3Ge2rWhucxQJMJhcRA%40mail.gmail.com
Previously, (auto)analyze used global variables VacuumPageHit,
VacuumPageMiss, and VacuumPageDirty to track buffer usage. However,
pgBufferUsage provides a more generic way to track buffer usage with
support functions.
This change replaces those global variables with pgBufferUsage in
analyze. Since analyze was the sole user of those variables, it
removes their declarations. Vacuum previously used those variables but
replaced them with pgBufferUsage as part of a bug fix, commit
5cd72cc0c.
Additionally, it adjusts the buffer usage message in both vacuum and
analyze for better consistency.
Author: Anthonin Bonnefoy
Reviewed-by: Masahiko Sawada, Michael Paquier
Discussion: https://postgr.es/m/CAO6_Xqr__kTTCLkftqS0qSCm-J7_xbRG3Ge2rWhucxQJMJhcRA%40mail.gmail.com
Reflect the fact that we remove waiters from the heap, not just set their
latches.
pg_wal_replay_wait() is intended to be called on a standby. However, a
standby can be promoted to primary at any moment, even concurrently with the
pg_wal_replay_wait() call. The fact that recovery is no longer in progress
doesn't mean the wait was unsuccessful. Thus, we always need to recheck
whether the target LSN has been replayed.
Reported-by: Kevin Hale Boyes
Discussion: https://postgr.es/m/CAPpHfdu5QN%2BZGACS%2B7foxmr8_nekgA2PA%2B-G3BuOUrdBLBFb6Q%40mail.gmail.com
Author: Alexander Korotkov
Trying to attach a table as a partition which is already on the
referenced side of a foreign key on the partitioned table that it is
being attached to, leads to strange behavior: we try to clone the
foreign key from the parent to the partition, but this new FK points to
the partition itself, and the mix of pg_constraint rows and triggers
doesn't behave well.
Rather than trying to untangle the mess (which might be possible given
sufficient time), I opted to forbid the ATTACH. This doesn't seem a
problematic restriction, given that we already fail to create the
foreign key if you do it the other way around, that is, having the
partition first and the FK second.
Backpatch to all supported branches.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Discussion: https://postgr.es/m/18541-628a61bc267cd2d3@postgresql.org
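
A minimal sketch of the now-forbidden sequence (names hypothetical):

    CREATE TABLE parent (a int, b int) PARTITION BY LIST (a);
    CREATE TABLE part1 (a int PRIMARY KEY, b int);
    ALTER TABLE parent ADD FOREIGN KEY (b) REFERENCES part1;
    -- part1 is now on the referenced side of a foreign key on parent,
    -- so attaching it as a partition of parent is rejected:
    ALTER TABLE parent ATTACH PARTITION part1 FOR VALUES IN (1);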
I also took the liberty of changing
errmsg("COPY DEFAULT only available using COPY FROM")
to
errmsg("COPY %s cannot be used with %s", "DEFAULT", "COPY TO")
because the original wording is unlike all other messages that indicate
option incompatibility. This message was added by commit 9f8377f7a279
(16-era), in whose development thread there was no discussion on this
point.
Backpatch to 17.
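
A hedged sketch of where the message fires (table name hypothetical):

    COPY t TO STDOUT (DEFAULT '\D');
    -- old: ERROR:  COPY DEFAULT only available using COPY FROM
    -- new: ERROR:  COPY DEFAULT cannot be used with COPY TO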
There was no need for these to be static buffers; local variables work
just as well. I think they were marked as 'static' to imply that they
are read-only, but 'const' is more appropriate for that, so change
them to const.
To make it possible to mark the variables as 'const', also add 'const'
decorations to the transformRelOptions() signature.
Reviewed-by: Andres Freund
Discussion: https://www.postgresql.org/message-id/54c29fb0-edf2-48ea-9814-44e918bbd6e8@iki.fi
Parallel workers failed after a sequence like
BEGIN;
CREATE USER foo;
SET SESSION AUTHORIZATION foo;
because check_session_authorization could not see the uncommitted
pg_authid row for "foo". This is because we ran RestoreGUCState()
in a separate transaction using an ordinary just-created snapshot.
The same disease afflicts any other GUC that requires catalog lookups
and isn't forgiving about the lookups failing.
To fix, postpone RestoreGUCState() into the worker's main transaction
after we've set up a snapshot duplicating the leader's. This affects
check_transaction_isolation and check_transaction_deferrable, which
think they should only run during transaction start. Make them
act like check_transaction_read_only, which already knows it should
silently accept the value when InitializingParallelWorker.
This un-reverts commit f5f30c22e. The original plan was to back-patch
that, but the fact that 0ae5b763e proved to be a pre-requisite shows
that the subtle API change for GUC hooks might actually break some of
them. The problem we're trying to fix seems not worth taking such a
risk for in stable branches.
Per bug #18545 from Andrey Rachitskiy.
Discussion: https://postgr.es/m/18545-feba138862f19aaa@postgresql.org
The previous coding here threw an error from assign_client_encoding
if it was invoked in a parallel worker. That's a very fundamental
violation of the GUC hook API: assign hooks must not throw errors.
The place to complain is in the check hook, so move the test to
there, and use the regular check-hook API (ie return false) to
report it.
The reason this coding is a problem is that it breaks GUC rollback,
which may occur after we leave InitializingParallelWorker state.
That case seems not actually reachable before now, but commit
f5f30c22e made it reachable, so we need to fix this before that
can be un-reverted.
In passing, improve the commentary in ParallelWorkerMain, and
add a check for failure of SetClientEncoding. That's another
case that can't happen now but might become possible after
foreseeable code rearrangements (notably, if the shortcut of
skipping PrepareClientEncoding stops being OK).
Discussion: https://postgr.es/m/18545-feba138862f19aaa@postgresql.org
Since commit 4b74ebf726, the refresh logic is used to populate
materialized views, so we can simplify the error message in
ExecCreateTableAs().
Also, RefreshMatViewByOid() is moved to just after
the create_ctas_nodata() call to improve code readability.
Author: Yugo Nagata
Discussion: https://postgr.es/m/20240802161301.d975daca9ba7a706fa05ecd7@sraoss.co.jp
pg_wal_replay_wait() is to be used on a standby and waits for a specific
WAL location to be replayed. This is useful when the user makes some data
changes on the primary and needs a guarantee that these changes are
visible on the standby.
The queue of waiters is stored in shared memory as an LSN-ordered pairing
heap, where the waiter with the nearest LSN stays on top. During
the replay of WAL, waiters whose LSNs have already been replayed are deleted
from the shared memory pairing heap and woken up by setting their latches.
pg_wal_replay_wait() needs to wait without any snapshot held. Otherwise,
the snapshot could prevent the replay of WAL records, implying a kind of
self-deadlock. This is why it is only possible to implement
pg_wal_replay_wait() as a procedure working without an active snapshot,
not a function.
Catversion is bumped.
Discussion: https://postgr.es/m/eb12f9b03851bb2583adab5df9579b4b%40postgrespro.ru
Author: Kartyshov Ivan, Alexander Korotkov
Reviewed-by: Michael Paquier, Peter Eisentraut, Dilip Kumar, Amit Kapila
Reviewed-by: Alexander Lakhin, Bharath Rupireddy, Euler Taveira
Reviewed-by: Heikki Linnakangas, Kyotaro Horiguchi
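
A hedged usage sketch (table name and LSN are hypothetical; the procedure
is assumed to also accept an optional timeout):

    -- On the primary, after making a change:
    INSERT INTO t VALUES (1);
    SELECT pg_current_wal_insert_lsn();   -- e.g. '0/306EE20'

    -- On the standby, wait for that LSN to be replayed before reading:
    CALL pg_wal_replay_wait('0/306EE20');
    SELECT * FROM t WHERE a = 1;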
This reverts commit f5f30c22ed69fb37b896c4d4546b2ab823c3fd61.
Some buildfarm animals are failing with "cannot change
"client_encoding" during a parallel operation". It looks like
assign_client_encoding is unhappy at being asked to roll back a
client_encoding setting after a parallel worker encounters a
failure. There must be more to it though: why didn't I see this
during local testing? In any case, it's clear that moving the
RestoreGUCState() call is not as side-effect-free as I thought.
Given that the bug f5f30c22e intended to fix has gone unreported
for years, it's not something that's urgent to fix; I'm not
willing to risk messing with it further with only days to our
next release wrap.
RefreshMatviewByOid is used for both REFRESH and CREATE MATERIALIZED
VIEW. This flag is currently just used for handling internal error
messages, but is also aimed at improving code readability.
Author: Yugo Nagata
Discussion: https://postgr.es/m/20240726122630.70e889f63a4d7e26f8549de8@sraoss.co.jp
Author: Yugo Nagata
Discussion: https://postgr.es/m/20240726122630.70e889f63a4d7e26f8549de8@sraoss.co.jp
Parallel workers failed after a sequence like
BEGIN;
CREATE USER foo;
SET SESSION AUTHORIZATION foo;
because check_session_authorization could not see the uncommitted
pg_authid row for "foo". This is because we ran RestoreGUCState()
in a separate transaction using an ordinary just-created snapshot.
The same disease afflicts any other GUC that requires catalog lookups
and isn't forgiving about the lookups failing.
To fix, postpone RestoreGUCState() into the worker's main transaction
after we've set up a snapshot duplicating the leader's. This affects
check_transaction_isolation and check_transaction_deferrable, which
think they should only run during transaction start. Make them
act like check_transaction_read_only, which already knows it should
silently accept the value when InitializingParallelWorker.
Per bug #18545 from Andrey Rachitskiy. Back-patch to all
supported branches, because this has been wrong for awhile.
Discussion: https://postgr.es/m/18545-feba138862f19aaa@postgresql.org
This new function returns the data for the given sequence, i.e.,
the values within the sequence tuple. Since this function is a
substitute for SELECT from the sequence, the SELECT privilege is
required on the sequence in question. It returns all NULLs for
sequences for which we lack privileges, other sessions' temporary
sequences, and unlogged sequences on standbys.
This function is primarily intended for use by pg_dump in a
follow-up commit that will use it to optimize dumpSequenceData().
Like pg_sequence_last_value(), which is a support function for the
pg_sequences system view, pg_sequence_read_tuple() is left
undocumented.
Bumps catversion.
Reviewed-by: Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/20240503025140.GA1227404%40nathanxps13
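
A hedged usage sketch (sequence name hypothetical):

    CREATE SEQUENCE s;
    SELECT nextval('s');
    SELECT * FROM pg_sequence_read_tuple('s');
    -- Returns the values from the sequence tuple; requires SELECT privilege
    -- on the sequence, and yields all NULLs when that privilege is missing.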
Add some comments to mention that this value must be at least 2 and also
add a StaticAssertDecl to cause compilation failure if anyone tries to
build with an invalid value.
The multiInsertBuffers list must have at least two elements due to how the
code in CopyMultiInsertInfoFlush() pushes the current ResultRelInfo's
CopyMultiInsertBuffer to the end of the list. If the first element is
also the last element, bad things will happen.
Author: Zhang Mingli <avamingli@gmail.com>
Discussion: https://postgr.es/m/CAApHDvpQ6t9ROcqbD-OgqR04Kfq4vQKw79Vo6r5j%2BciHwsSfkA%40mail.gmail.com
We restrict typed tables (those declared as "OF composite_type")
to be based on stand-alone composite types, not composite types
that are the implicitly-created rowtypes of other tables.
But if you tried to do that, you got the very confusing error
message "type foo is not a composite type". Provide a more specific
message for that case. Also clarify related documentation in the
CREATE TABLE man page.
Erik Wienhold and David G. Johnston, per complaint from Hannu Krosing.
Discussion: https://postgr.es/m/CAMT0RQRysCb_Amy5CTENSc5GfsvXL1a4qX3mv_hx31_v74P==g@mail.gmail.com
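
A minimal sketch of the two cases (names hypothetical):

    CREATE TYPE person_type AS (name text, age int);
    CREATE TABLE persons OF person_type;   -- allowed: stand-alone composite type

    CREATE TABLE templ (name text, age int);
    CREATE TABLE persons2 OF templ;        -- rejected: "templ" is a table rowtype,
                                           -- now with a more specific message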
The two_phase option is controlled by both the publisher (as a slot
option) and the subscriber (as a subscription option), so the slot option
must also be modified.
Changing the 'two_phase' option for a subscription from 'true' to 'false'
is permitted only when there are no pending prepared transactions
corresponding to that subscription. Otherwise, the changes of already
prepared transactions can be replicated again along with their corresponding
commit, leading to duplicate data or errors.
To avoid data loss, the 'two_phase' option for a subscription can only be
changed from 'false' to 'true' once the initial data synchronization is
completed. Therefore this is performed later by the logical replication worker.
Author: Hayato Kuroda, Ajin Cherian, Amit Kapila
Reviewed-by: Peter Smith, Hou Zhijie, Amit Kapila, Vitaly Davydov, Vignesh C
Discussion: https://postgr.es/m/8fab8-65d74c80-1-2f28e880@39088166
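
A hedged sketch of turning two_phase off (subscription name hypothetical,
and assuming the usual requirement that the subscription be disabled first):

    ALTER SUBSCRIPTION sub DISABLE;
    ALTER SUBSCRIPTION sub SET (two_phase = off);
    -- permitted only if there are no pending prepared transactions
    -- for this subscription
    ALTER SUBSCRIPTION sub ENABLE;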
Add extern declarations in appropriate header files for global
variables related to GUC. In many cases, this was handled quite
inconsistently before, with some GUC variables declared in a header
file and some only pulled in via ad-hoc extern declarations in various
.c files.
Also add PGDLLIMPORT qualifications to those variables. These were
previously missing because src/tools/mark_pgdllimport.pl has only been
used with header files.
This also fixes -Wmissing-variable-declarations warnings for GUC
variables (not yet part of the standard warning options).
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/flat/e0a62134-83da-4ba4-8cdb-ceb0111c95ce@eisentraut.org