| Commit message | Author | Age |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
"meson test" used to ignore the PG_TEST_EXTRA environment variable,
which meant that in order to run additional tests, you had to run
"meson setup -DPG_TEST_EXTRA=...". That's somewhat expensive, and not
consistent with autoconf builds. Allow the PG_TEST_EXTRA environment
variable to override the setup-time option at run time, so that you
can do "PG_TEST_EXTRA=... meson test".
To implement this, the configuration time value is passed as an extra
"--pg-test-extra" argument to testwrap instead of adding it to the
test environment. If the environment variable is set at the time the
tests are run, testwrap uses the value from the environment variable
and ignores the --pg-test-extra option.
Now that "meson test" obeys the environment variable, we can remove it
from the "meson setup" steps in the CI script. It will now be picked
up from the environment variable like with "make check".
Author: Nazir Bilal Yavuz, Ashutosh Bapat
Reviewed-by: Ashutosh Bapat with inputs from Tom Lane and Andrew Dunstan
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The inplace update survives ROLLBACK. The inval didn't, so another
backend's DDL could then update the row without incorporating the
inplace update. In the test this fixes, a mix of CREATE INDEX and ALTER
TABLE resulted in a table with an index, yet relhasindex=f. That is a
source of index corruption. Back-patch to v12 (all supported versions).
The back branch versions don't change WAL, because those branches just
added end-of-recovery SIResetAll(). All branches change the ABI of
extern function PrepareToInvalidateCacheTuple(). No PGXN extension
calls that, and there's no apparent use case in extensions.
Reviewed by Nitin Motiani and (in earlier versions) Andres Freund.
Discussion: https://postgr.es/m/20240523000548.58.nmisch@google.com
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Currently, WaitForLSNReplay() immediately throws an error if waiting for LSN
replay is not successful. This commit teaches WaitForLSNReplay() to return
the result of waiting, while making pg_wal_replay_wait() responsible for
throwing an appropriate error.
This is preparation for adding a 'no_error' argument to pg_wal_replay_wait()
and a new function pg_wal_replay_wait_status(), which returns the last wait
result status.
Additionally, we stop distinguishing between finding our instance not in
recovery before entering the waiting loop and finding that inside the waiting
loop. Standby promotion may happen at any moment, even between issuing a
procedure call statement and pg_wal_replay_wait() doing its first check of
recovery status. Thus, there is no point in distinguishing these situations.
Also, since we may exit the waiting loop and see our instance not in recovery
without throwing an error, we need to call deleteLSNWaiter() in that case. We
do this unconditionally for the sake of simplicity; even if the standby was
already promoted after reaching the target LSN, the startup process has surely
already deleted us.
Reported-by: Michael Paquier
Discussion: https://postgr.es/m/ZtUF17gF0pNpwZDI%40paquier.xyz
Reviewed-by: Michael Paquier, Pavel Borisov
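A rough sketch of the new division of labor might look like the following; the enum and value names here are assumptions for illustration, not necessarily the committed spellings:

    /* caller side (pg_wal_replay_wait), names assumed */
    WaitLSNResult result = WaitForLSNReplay(target_lsn, timeout);

    switch (result)
    {
        case WAIT_LSN_RESULT_SUCCESS:
            break;              /* target LSN was replayed in time */
        case WAIT_LSN_RESULT_TIMEOUT:
            ereport(ERROR, (errmsg("timed out while waiting for target LSN")));
            break;
        case WAIT_LSN_RESULT_NOT_IN_RECOVERY:
            ereport(ERROR, (errmsg("recovery is not in progress")));
            break;
    }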
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Currently, when a single relcache entry gets invalidated,
TypeCacheRelCallback() has to loop over all type cache entries to find the
appropriate typentry to invalidate. Unfortunately, using the syscache here
is impossible, because this callback can be called outside a transaction,
which makes catalog lookups impossible. This is why this commit introduces
RelIdToTypeIdCacheHash to map a relation OID to its composite type OID.
We keep a RelIdToTypeIdCacheHash entry only while the corresponding type
cache entry has something to clean up. Therefore, RelIdToTypeIdCacheHash
shouldn't get bloated by a flood of temporary tables.
There are many places in lookup_type_cache() where a syscache invalidation,
a user interruption, or even an error could occur. In order to handle this,
we keep an array of in-progress type cache entries. If lookup_type_cache()
is interrupted, this array is processed to keep RelIdToTypeIdCacheHash in a
consistent state.
Discussion: https://postgr.es/m/5812a6e5-68ae-4d84-9d85-b443176966a1%40sigaev.ru
Author: Teodor Sigaev
Reviewed-by: Aleksander Alekseev, Tom Lane, Michael Paquier, Roman Zharkov
Reviewed-by: Andrei Lepikhov, Pavel Borisov, Jian He, Alexander Lakhin
Reviewed-by: Artur Zakirov
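A conceptual sketch of the relcache-callback lookup this enables; the entry layout and the invalidation helper are assumptions for illustration, not the committed code:

    /* on invalidation of relation 'relid', find its composite type directly */
    RelIdToTypeIdCacheEntry *mapent;
    TypeCacheEntry *typentry;
    bool        found;

    mapent = hash_search(RelIdToTypeIdCacheHash, &relid, HASH_FIND, &found);
    if (found)
    {
        typentry = hash_search(TypeCacheHash, &mapent->composite_typid,
                               HASH_FIND, NULL);
        if (typentry)
            InvalidateCompositeTypeCacheEntry(typentry);    /* hypothetical helper */
    }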
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously, CREATE/ALTER EXTENSION gave basically no useful
context about errors reported while executing script files.
I think the idea was that you could run the same commands
manually to see the error, but that's often quite inconvenient.
Let's improve that.
If we get an error during raw parsing, we won't have a current
statement identified by a RawStmt node, but we should always get
a syntax error position. Show the portion of the script from
the last semicolon-newline before the error position to the first
one after it. There are cases where this might show only a
fragment of a statement, but that should be uncommon, and it
seems better than showing the whole script file.
Without an error cursor, if we have gotten past raw parsing (which
we probably have), we can report just the current SQL statement as
an item of error context.
In any case also report the script file name as error context,
since it might not be entirely obvious which of a series of
update scripts failed. We can also show an approximate script
line number in case whatever we printed of the query isn't
sufficiently identifiable.
The error-context code path is already exercised by some
test_extensions test cases, but add tests for the syntax-error
path.
Discussion: https://postgr.es/m/ZvV1ClhnbJLCz7Sm@msg.df7cb.de
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
... to fix bugs when the referenced table is partitioned.
The catalog representation we chose for foreign keys connecting
partitioned tables (in commit f56f8f8da6af) is inconvenient, in the
sense that a standalone table represents the constraint differently
when it references a partitioned table than when the same table
becomes a partition (and vice versa). Because of this, we need to
create additional catalog rows on detach (pg_constraint and pg_trigger),
and remove them on attach. We were doing some of those things, but not
all of them, leading to missing catalog rows in certain cases.
The worst problem seems to be that we are missing action triggers after
detaching a partition, which means that you could update or delete rows
from the referenced partitioned table that still had referencing rows in
that table, with the server failing to throw the required errors.
!!!
Note that this means existing databases with FKs that reference
partitioned tables might have rows that break relational integrity, on
tables that were once partitions on the referencing side of the FK.
Another possible problem is that trying to reattach a table
that had been detached would fail indicating that internal triggers
cannot be found, which from the user's point of view is nonsensical.
In branches 15 and above, we fix this by creating a new helper function
addFkConstraint() which is in charge of creating a standalone
pg_constraint row, and repurposing addFkRecurseReferencing() and
addFkRecurseReferenced() so that they're only the recursive routine for
each side of the FK, and they call addFkConstraint() to create
pg_constraint at each partitioning level and add the necessary triggers.
These new routines can be used during partition creation, partition
attach and detach, and foreign key creation. This reduces redundant
code and simplifies the flow.
In branches 14 and 13, we have a much simpler fix that consists of
simply removing the constraint on detach. The reason is that those
branches are missing commit f4566345cf40, which reworked the way this
works in a way that we didn't consider back-patchable at the time.
We opted to leave branch 12 alone, because it's different enough from
branch 13 that the fix doesn't apply; and because it is going into EOL
mode very soon, patching it now might be worse, since there's no way to
undo the damage if it goes wrong.
Existing databases might need to be repaired.
In the future we might want to rethink the catalog representation to
avoid this problem, but for now the code seems to do what's required to
make the constraints operate correctly.
Co-authored-by: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>
Co-authored-by: Tender Wang <tndrwang@gmail.com>
Co-authored-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Reported-by: Guillaume Lelarge <guillaume@lelarge.info>
Reported-by: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>
Reported-by: Thomas Baehler (SBB CFF FFS) <thomas.baehler2@sbb.ch>
Discussion: https://postgr.es/m/20230420144344.40744130@karst
Discussion: https://postgr.es/m/20230705233028.2f554f73@karst
Discussion: https://postgr.es/m/GVAP278MB02787E7134FD691861635A8BC9032@GVAP278MB0278.CHEP278.PROD.OUTLOOK.COM
Discussion: https://postgr.es/m/18541-628a61bc267cd2d3@postgresql.org
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Invent a notion of "local" storage that will automatically be
reclaimed at the end of each statement. Use this for location
strings as well as other visibly short-lived data within the parser.
Also, make cat_str and make_str return local storage and not free
their inputs, which allows dispensing with a whole lot of retail
mm_strdup calls. We do have to add some new ones in places where
a local-lifetime string needs to be added to a longer-lived data
structure, but on balance there are far fewer mm_strdup calls than
before.
In hopes of flushing out places where changes were necessary,
I changed YYLTYPE from "char *" to "const char *", which forced
const-ification of various function arguments that probably
should've been like that all along.
This still leaks somewhat more memory than v17, but that will be
cleaned up in future commits.
Discussion: https://postgr.es/m/2011420.1713493114@sss.pgh.pa.us
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Commit 149ac7d4559, which re-implemented pgindent in Perl, explicitly
imported the devnull function from File::Spec, but that module does
not export anything. In recent versions of Perl, calling a missing
import function causes a warning, which, combined with warnings being
fatal, causes pgindent to error out.
Backpatch to all supported versions.
Author: Erik Wienhold <ewie@ewie.name>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/2372cd74-11b0-46f9-b28e-8f9627215d19@ewie.name
Backpatch-through: v12
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Separate out psql_completion's giant else-if chain of *Matches
tests into a new function. Add the infrastructure needed for
table-driven checking of the initial match of each completion
rule. As-is, however, the code continues to operate as it did.
The new behavior applies only if SWITCH_CONVERSION_APPLIED
is #defined, which it is not here. (The preprocessor added
in the next patch will add a #define for that.)
The first and last couple of bits of psql_completion are not
based on HeadMatches/TailMatches/Matches tests, so they stay
where they are; they won't become part of the switch.
This patch also fixes up a couple of if-conditions that didn't meet
the conditions enumerated in the comment for match_previous_words().
Those restrictions exist to simplify the preprocessor.
Discussion: https://postgr.es/m/2208466.1720729502@sss.pgh.pa.us
|
|
|
|
|
|
|
|
|
|
|
|
| |
In the PostgreSQL C type naming schema, the type PageData should be
what the pointer of type Page points to. But in this case it's
actually an unrelated type local to generic_xlog.c. Rename that to a
more specific name. This makes room to possibly add a PageData type
with the mentioned meaning, but that is not done here.
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/001d457e-c118-4219-8132-e1846c2ae3c9%40eisentraut.org
|
|
|
|
|
|
|
|
| |
This also works for compressed tar-format backups. However, -n must be
used, because we use pg_waldump to verify WAL, and it doesn't yet know
how to verify WAL that is stored inside of a tarfile.
Amul Sul, reviewed by Sravan Kumar and by me, and revised by me.
|
|
|
|
|
|
|
|
|
| |
Some header files under contrib/isn/ are not meant to be included
independently, and they trigger -Wmissing-variable-declarations
warnings when included that way.
Reported-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BYVt5MBD-w0HyHpsGb4U8RNge3DvAbDmOFy_epGhZ2Mg%40mail.gmail.com#aba3226c6dd493923bd6ce95d25a2d77
|
|
|
|
|
|
|
|
|
|
| |
Reported-by: Andrew Dunstan
Discussion: https://postgr.es/m/b2465837-56df-4794-a0b5-5e6ed44ed870@dunslane.net
Author: Andrew Dunstan
Backpatch-through: 12
|
|
|
|
|
| |
While it doesn't directly influence indentation right now, add it for
uniformity.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
A number of pg_upgrade steps require connecting to every database
in the cluster and running the same query in each one. When there
are many databases, these steps are particularly time-consuming,
especially since they are performed sequentially, i.e., we connect
to a database, run the query, and process the results before moving
on to the next database.
This commit introduces a new framework that makes it easy to
parallelize most of these once-in-each-database tasks by processing
multiple databases concurrently. This framework manages a set of
slots that follow a simple state machine, and it uses libpq's
asynchronous APIs to establish the connections and run the queries.
The --jobs option is used to determine the number of slots to use.
To use this new task framework, callers simply need to provide the
query and a callback function to process its results, and the
framework takes care of the rest. A more complete description is
provided at the top of the new task.c file.
None of the eligible once-in-each-database tasks are converted to
use this new framework in this commit. That will be done via
several follow-up commits.
Reviewed-by: Jeff Davis, Robert Haas, Daniel Gustafsson, Ilya Gladyshev, Corey Huinker
Discussion: https://postgr.es/m/20240516211638.GA1688936%40nathanxps13
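As a hypothetical illustration of the usage pattern described above, a caller might look roughly like this; the function and type names below are illustrative, not necessarily the committed API:

    /* callback invoked once per database with that database's query result */
    static void
    process_results(DbInfo *dbinfo, PGresult *res, void *arg)
    {
        /* inspect PQntuples(res) / PQgetvalue(res, ...) here */
    }

    static void
    check_in_every_db(ClusterInfo *cluster)
    {
        UpgradeTask *task = upgrade_task_create();

        upgrade_task_add_step(task,
                              "SELECT relname FROM pg_catalog.pg_class",
                              process_results,
                              true,     /* free the result afterwards */
                              NULL);    /* no extra callback argument */
        upgrade_task_run(task, cluster);    /* uses up to --jobs connections */
        upgrade_task_free(task);
    }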
|
|
|
|
|
|
|
|
| |
Reported-by: jian he
Discussion: https://postgr.es/m/ZuYsS5XdA7hVcV9l@momjian.us
Backpatch-through: 12
|
|
|
|
|
|
|
|
|
|
| |
Small improvement not worth the code churn.
Reported-by: Andrew Dunstan
Discussion: https://postgr.es/m/42f2242a-422b-4aa3-8d60-d67b229c4a52@dunslane.net
Backpatch-through: master
|
|
|
|
|
|
| |
Eliminate warnings of Perl Critic from src/tools.
Backpatch-through: master
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
If there are subqueries in the grouping expressions, each of these
subqueries in the targetlist and HAVING clause is expanded into
distinct SubPlan nodes. As a result, only one of these SubPlan nodes
would be converted to a reference to the grouping key column output by
the Agg node; the others would have to be evaluated afresh. This is not
efficient, and with grouping sets it can cause wrong results in cases
where the expressions should go to NULL because they are from the wrong
grouping set. Furthermore, during re-evaluation, these SubPlan nodes
might use nulled column values from grouping sets, which is not
correct.
This issue is not limited to subqueries. For other types of
expressions that are part of grouping items, if they are transformed
into another form during preprocessing, they may fail to match lower
target items. This can also lead to wrong results with grouping sets.
To fix this issue, we introduce a new kind of RTE representing the
output of the grouping step, with columns that are the Vars or
expressions being grouped on. In the parser, we replace the grouping
expressions in the targetlist and HAVING clause with Vars referencing
this new RTE, so that the output of the parser directly expresses the
semantic requirement that the grouping expressions be obtained from the
grouping output rather than computed some other way. In the planner,
we first preprocess all the columns of this new RTE and then replace
any Vars in the targetlist and HAVING clause that reference this new
RTE with the underlying grouping expressions, so that we will have
only one instance of a SubPlan node for each subquery contained in the
grouping expressions.
Bump catversion because this changes the querytree produced by the
parser.
Thanks to Tom Lane for the idea to invent a new kind of RTE.
Per reports from Geoff Winkless, Tobias Wendorff, Richard Guo from
various threads.
Author: Richard Guo
Reviewed-by: Ashutosh Bapat, Sutou Kouhei
Discussion: https://postgr.es/m/CAMbWs4_dp7e7oTwaiZeBX8+P1rXw4ThkZxh1QG81rhu9Z47VsQ@mail.gmail.com
|
|
|
|
|
|
|
|
|
| |
This replaces two functions for iterating over all blocks in a range. A
pending patch will use this instead of adding a third.
Nazir Bilal Yavuz
Discussion: https://postgr.es/m/20240820184742.f2.nmisch@google.com
|
|
|
|
|
|
|
|
| |
This commit reverts c14d4acb8 as the patch design didn't take into account
that TypeCacheEntry could be invalidated during the lookup_type_cache() call.
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/1927cba4-177e-5c23-cbcc-d444a850304f%40gmail.com
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Currently, when a single relcache entry gets invalidated,
TypeCacheRelCallback() has to loop over all type cache entries to find the
appropriate typentry to invalidate. Unfortunately, using the syscache here
is impossible, because this callback can be called outside a transaction,
which makes catalog lookups impossible. This is why this commit introduces
RelIdToTypeIdCacheHash to map a relation OID to its composite type OID.
We keep a RelIdToTypeIdCacheHash entry only while the corresponding type
cache entry has something to clean up. Therefore, RelIdToTypeIdCacheHash
shouldn't get bloated by a flood of temporary tables.
Discussion: https://postgr.es/m/5812a6e5-68ae-4d84-9d85-b443176966a1%40sigaev.ru
Author: Teodor Sigaev
Reviewed-by: Aleksander Alekseev, Tom Lane, Michael Paquier, Roman Zharkov
Reviewed-by: Andrei Lepikhov, Pavel Borisov
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This commit reverts 1adf16b8fb, 87c21bb941, and subsequent fixes and
improvements including df64c81ca9, c99ef1811a, 9dfcac8e15, 885742b9f8,
842c9b2705, fcf80c5d5f, 96c7381c4c, f4fc7cb54b, 60ae37a8bc, 259c96fa8f,
449cdcd486, 3ca43dbbb6, 2a679ae94e, 3a82c689fd, fbd4321fd5, d53a4286d7,
c086896625, 4e5d6c4091, 04158e7fa3.
The reason for reverting is security issues related to repeatable name lookups
(CVE-2014-0062). Even though 04158e7fa3 solved part of the problem, there
are still remaining issues, which aren't feasible to even carefully analyze
before the RC deadline.
Reported-by: Noah Misch, Robert Haas
Discussion: https://postgr.es/m/20240808171351.a9.nmisch%40google.com
Backpatch-through: 17
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Currently, only unnamed prepared statements are supported by psql, via
the meta-command \bind. With only that command, it is not possible to
test named statement creation, execution, or closing through the
extended protocol.
This commit introduces three additional commands:
* \parse creates a prepared statement using the extended protocol,
acting as a wrapper of libpq's PQsendPrepare().
* \bind_named binds and executes an existing prepared statement using
the extended protocol, for PQsendQueryPrepared().
* \close closes an existing prepared statement using the extended
protocol, for PQsendClosePrepared().
This is going to be useful to add regression tests for the extended
query protocol, and I have some plans for that on separate threads.
Note that \bind relies on PQsendQueryParams().
The psql code is refactored so that bind_flag is replaced by an enum in
_psqlSettings that tracks the type of libpq routine to execute, based on
the meta-command involved, with the default being PQsendQuery(). This
refactoring piece has been written by me, while Anthonin has implemented
the rest.
Author: Anthonin Bonnefoy, Michael Paquier
Reviewed-by: Aleksander Alekseev, Jelte Fennema-Nio
Discussion: https://postgr.es/m/CAO6_XqpSq0Q0kQcVLCbtagY94V2GxNP3zCnR6WnOM8WqXPK4nw@mail.gmail.com
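Since the new meta-commands wrap the libpq routines named above, a minimal client-side sketch of the equivalent call sequence (connection setup, result consumption, and error handling omitted) could look like this:

    #include <libpq-fe.h>

    /* \parse stmt1: prepare a named statement over the extended protocol */
    PQsendPrepare(conn, "stmt1", "SELECT $1::int + $2::int", 0, NULL);

    /* \bind_named stmt1 1 2 \g: bind and execute the named statement */
    const char *params[2] = {"1", "2"};
    PQsendQueryPrepared(conn, "stmt1", 2, params, NULL, NULL, 0);

    /* \close stmt1: close the named statement */
    PQsendClosePrepared(conn, "stmt1");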
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch provides additional logging information in the following
conflict scenarios while applying changes:
insert_exists: Inserting a row that violates a NOT DEFERRABLE unique constraint.
update_differ: Updating a row that was previously modified by another origin.
update_exists: The updated row value violates a NOT DEFERRABLE unique constraint.
update_missing: The tuple to be updated is missing.
delete_differ: Deleting a row that was previously modified by another origin.
delete_missing: The tuple to be deleted is missing.
For insert_exists and update_exists conflicts, the log can include the origin
and commit timestamp details of the conflicting key with track_commit_timestamp
enabled.
update_differ and delete_differ conflicts can only be detected when
track_commit_timestamp is enabled on the subscriber.
We do not offer additional logging for exclusion constraint violations because
these constraints can specify rules that are more complex than simple equality
checks. Resolving such conflicts won't be straightforward. This area can be
further enhanced if required.
Author: Hou Zhijie
Reviewed-by: Shveta Malik, Amit Kapila, Nisha Moond, Hayato Kuroda, Dilip Kumar
Discussion: https://postgr.es/m/OS0PR01MB5716352552DFADB8E9AD1D8994C92@OS0PR01MB5716.jpnprd01.prod.outlook.com
|
|
|
|
|
|
|
|
|
|
|
| |
MacPorts version 2.9.3 started failing in our ci_macports_packages.sh
script, for reasons not fully determined, but plausibly linked to the
release of 2.10.1. 2.10.1 seems to work, so let's switch to it.
Back-patch to 15, where CI began.
Reported-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/81f104e8-f0a9-43c0-85bd-2bbbf590a5b8%40eisentraut.org
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
I (rhaas) intended "bbstreamer" to stand for "base backup streamer,"
but that implies that this infrastructure can only ever be used by
pg_basebackup. In fact, it is a generally useful way of streaming
data from a tar or compressed tar file, and it could be extended to
work with other archive formats as well if we ever wanted to do that.
Hence, rename it to "astreamer" (archive streamer) in preparation for
reusing the infrastructure from pg_verifybackup (and perhaps
eventually also other utilities, such as pg_combinebackup or
pg_waldump).
This is purely a renaming commit. Comment adjustments and relocation
of the actual code to someplace from which it can be reused are left
to future commits.
Amul Sul, reviewed by Sravan Kumar and by me.
Discussion: http://postgr.es/m/CAAJ_b94StvLWrc_p4q-f7n3OPfr6GhL8_XuAg2aAaYZp1tF-nw@mail.gmail.com
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Like 75534436a477, this acts mainly as a template to show what can be
achieved with fixed-numbered stats (like WAL, bgwriter, etc.) with the
pluggable cumulative statistics APIs introduced in 7949d9594582.
Fixed-numbered stats are defined in their own file, named
injection_stats_fixed.c, separated entirely from the variable-numbered
case in injection_stats.c. This is mainly for clarity as having both
examples in the same file would be confusing.
Note that this commit uses the helper routines added in 2eff9e678d35.
The stats stored track globally the number of times injection points
have been attached, detached or run. Two more fields should be added
later for the number of times a point has been cached or loaded, but
what's here is enough as a template.
More TAP tests are added, providing coverage for fixed-numbered custom
stats.
Author: Michael Paquier
Reviewed-by: Dmitry Dolgov, Bertrand Drouvot
Discussion: https://postgr.es/m/Zmqm9j5EO0I4W8dx@paquier.xyz
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This acts as a template of what can be achieved with the pluggable
cumulative stats APIs introduced in 7949d9594582 for the
variable-numbered case where stats entries are stored in the pgstats
dshash, while being potentially useful on its own for injection points,
say to add starting and/or stopping conditions based on the statistics
(want to trigger a callback after N calls, for example?).
Currently, the only data gathered is the number of times an injection
point is run. More fields can always be added as required. All the
routines related to the stats are located in their own file, called
injection_stats.c in the test module injection_points, for clarity.
The stats can be used only if the test module is loaded through
shared_preload_libraries. The key of the dshash uses InvalidOid for the
database, and an int4 hash of the injection point name as object ID.
A TAP test is added to provide coverage for the new custom cumulative
stats APIs, showing the persistency of the data across restarts, for
example.
Author: Michael Paquier
Reviewed-by: Dmitry Dolgov, Bertrand Drouvot
Discussion: https://postgr.es/m/Zmqm9j5EO0I4W8dx@paquier.xyz
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This warning flag detects global variables not declared in header
files. This is similar to what -Wmissing-prototypes does for
functions. (More correctly, it is similar to what
-Wmissing-declarations does for functions, but -Wmissing-prototypes is
a superset of that in C.)
This flag is new in GCC 14. Clang has supported it for a while.
Several recent commits have cleaned up warnings triggered by this, so
it should now be clean.
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/flat/e0a62134-83da-4ba4-8cdb-ceb0111c95ce@eisentraut.org
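For illustration (the file and variable names are invented), the kind of code this flag complains about, and the usual fix:

    /* foo.c: a global defined with no declaration visible from any header */
    int         fanout = 32;       /* -Wmissing-variable-declarations fires here */

    /* Fix: declare it in a header that foo.c includes, e.g. in foo.h: */
    extern int  fanout;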
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
pg_wal_replay_wait() is to be used on a standby and waits for a specific
WAL location to be replayed. This is useful when the user makes data
changes on the primary and needs a guarantee that those changes are
visible on the standby.
The queue of waiters is stored in shared memory as an LSN-ordered pairing
heap, where the waiter with the nearest LSN stays on top. During
the replay of WAL, waiters whose LSNs have already been replayed are deleted
from the shared-memory pairing heap and woken up by setting their latches.
pg_wal_replay_wait() needs to wait without any snapshot held. Otherwise,
the snapshot could prevent the replay of WAL records, implying a kind of
self-deadlock. This is why it is only possible to implement
pg_wal_replay_wait() as a procedure working without an active snapshot,
not a function.
Catversion is bumped.
Discussion: https://postgr.es/m/eb12f9b03851bb2583adab5df9579b4b%40postgrespro.ru
Author: Kartyshov Ivan, Alexander Korotkov
Reviewed-by: Michael Paquier, Peter Eisentraut, Dilip Kumar, Amit Kapila
Reviewed-by: Alexander Lakhin, Bharath Rupireddy, Euler Taveira
Reviewed-by: Heikki Linnakangas, Kyotaro Horiguchi
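A minimal client-side sketch of the intended usage; the LSN literal is illustrative and would normally come from pg_current_wal_insert_lsn() on the primary:

    /* on a connection to the standby */
    PGresult   *res = PQexec(standby_conn,
                             "CALL pg_wal_replay_wait('0/306EE20')");

    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "wait failed: %s", PQerrorMessage(standby_conn));
    PQclear(res);
    /* from here on, reads on the standby see everything up to that LSN */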
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This function dumps the sequence definitions. It is called once
per sequence, and each such call executes a query to retrieve the
metadata for a single sequence. This can cause pg_dump to take
significantly longer, especially when there are many sequences.
This commit improves the performance of this function by gathering
all the sequence metadata with a single query at the beginning of
pg_dump. This information is stored in a sorted array that
dumpSequence() can bsearch() for what it needs. This follows a
similar approach as commits d5e8930f50 and 2329cad1b9, which
introduced sorted arrays for role information and pg_class
information, respectively. As with those commits, this patch will
cause pg_dump to use more memory, but that isn't expected to be too
egregious.
Note that before version 10, the sequence metadata was stored in
the sequence relation itself, which makes it difficult to gather
all the sequence metadata with a single query. For those older
versions, we continue to use the preexisting query-per-sequence
approach.
Reviewed-by: Euler Taveira
Discussion: https://postgr.es/m/20240503025140.GA1227404%40nathanxps13
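An illustrative sketch of the lookup pattern this enables; the struct and variable names here are assumptions, not necessarily those used in pg_dump:

    typedef struct SequenceItem
    {
        Oid         oid;            /* pg_class OID of the sequence */
        /* ... parsed sequence metadata ... */
    } SequenceItem;

    static SequenceItem *sequences;     /* collected up front, sorted by OID */
    static int  nsequences;

    static int
    seq_cmp(const void *a, const void *b)
    {
        Oid         aoid = ((const SequenceItem *) a)->oid;
        Oid         boid = ((const SequenceItem *) b)->oid;

        return (aoid > boid) - (aoid < boid);
    }

    /* dumpSequence() can then find its metadata without a per-sequence query */
    static SequenceItem *
    find_sequence(Oid seqrelid)
    {
        SequenceItem key = {.oid = seqrelid};

        return bsearch(&key, sequences, nsequences,
                       sizeof(SequenceItem), seq_cmp);
    }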
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This commit modifies dumpSequence() to parse all the sequence
metadata into the appropriate types instead of carting around
string pointers to the PGresult data. Besides allowing us to free
the PGresult storage earlier in the function, this eliminates the
need to compare min_value and max_value to their respective
defaults as strings.
This is preparatory work for a follow-up commit that will improve
the performance of dumpSequence() in a similar manner to how commit
2329cad1b9 optimized binary_upgrade_set_pg_class_oids().
Reviewed-by: Euler Taveira
Discussion: https://postgr.es/m/20240503025140.GA1227404%40nathanxps13
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Commit d01ce180 invented a new way to find the latest MacPorts version.
By bad luck, a new beta release has just been published, and it seems
to lack some packages we need. Go back to searching for this specific
version for now. We still search with a pattern so that we can find the
package for the running version of macOS, but for now we always look for
2.9.3. The code to do that had already been anticipated in a
commented-out line; I just didn't expect to have to use it so soon...
Also include the whole MacPorts installation script in the cache key, so
that changes to the script cause a fresh installation. This should make
it a bit easier to reason about the effect of changes on cached state in
github accounts using CI, when we make adjustments.
Back-patch to 15, like d01ce180.
Discussion: https://postgr.es/m/CA%2BhUKGLqJdv6RcwyZ_0H7khxtLTNJyuK%2BvDFzv3uwYbn8hKH6A%40mail.gmail.com
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
1. Previously we were using ghcr.io/cirruslabs/macos-XXX-base:latest
images, but Cirrus has started ignoring that and using a particular
image, currently ghcr.io/cirruslabs/macos-runner:sonoma, for github
accounts using free CI resources (as opposed to dedicated runner
machines, as cfbot uses). Let's just ask for that image anyway, to stay
in sync.
2. Instead of hard-coding a MacPorts installation URL, deduce it from
the running macOS version and the available releases. This removes the
need to keep the ci_macports_packages.sh in sync with .cirrus.task.yml,
and to advance the MacPorts version from time to time.
3. Change the cache key we use to cache the whole macports installation
across builds to include the OS major version, to trigger a fresh
installation when appropriate.
Back-patch to 15 where CI began.
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CA%2BhUKGLqJdv6RcwyZ_0H7khxtLTNJyuK%2BvDFzv3uwYbn8hKH6A%40mail.gmail.com
|
|
|
|
|
|
|
|
|
|
| |
This allows using injection points without having a PGPROC, like early
at backend startup, or in the postmaster.
The injection points facility is new in v17, so backpatch there.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/4317a7f7-8d24-435e-9e49-29b72a3dc418@iki.fi
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Test result files might be checked out using Unix or Windows style line
endings, depending on git flags, so on Windows we use the
--strip-trailing-cr flag to tell diff to ignore line-ending
differences.
The flag is added to the diff invocation for the test_json_parser module
tests and the pg_bsd_indent tests. In pg_regress.c we replace the
current use of the "-w" flag, which ignores all whitespace differences,
with this one, which only ignores line-ending differences.
Discussion: https://postgr.es/m/20240707052030.r77hbdkid3mwksop@awork3.anarazel.de
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Nodes like Memoize report the cache stats for each parallel worker, so it
makes sense to show the exact and lossy pages in Parallel Bitmap Heap Scan
in a similar way. Likewise, Sort shows the method and memory used for
each worker.
There was some discussion on whether the leader stats should include the
totals for each parallel worker or not. I did some analysis on this to
see what other parallel node types do and it seems only Parallel Hash does
anything like this. All the rest, per what's supported by
ExecParallelRetrieveInstrumentation(), are consistent with each other.
Author: David Geier <geidav.pg@gmail.com>
Author: Heikki Linnakangas <hlinnaka@iki.fi>
Author: Donghang Lin <donghanglin@gmail.com>
Author: Alena Rybakina <lena.ribackina@yandex.ru>
Author: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: Dmitry Dolgov <9erthalion6@gmail.com>
Reviewed-by: Michael Christofides <michael@pgmustard.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Donghang Lin <donghanglin@gmail.com>
Reviewed-by: Masahiro Ikeda <Masahiro.Ikeda@nttdata.com>
Discussion: https://postgr.es/m/b3d80961-c2e5-38cc-6a32-61886cdf766d%40gmail.com
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
macOS 15's SDK pulls in headers related to <regex.h> when we include
<xlocale.h>. This causes our own regex_t implementation to clash with
the OS's regex_t implementation. Luckily our function names already had
pg_ prefixes, but the macros and typenames did not.
Include <regex.h> explicitly on all POSIX systems, and fix everything
that breaks. Then we can prove that we are capable of fully hiding and
replacing the system regex API with our own.
1. Deal with standard-clobbering macros by undefining them all first.
POSIX says they are "symbolic constants". If they are macros, this
allows us to redefine them. If they are enums or variables, our macros
will hide them.
2. Deal with standard-clobbering types by giving our types pg_
prefixes, and then using macros to redirect xxx_t -> pg_xxx_t.
After including our "regex/regex.h", the system <regex.h> is hidden,
because we've replaced all the standard names. The PostgreSQL source
tree and extensions can continue to use standard prefix-less type and
macro names, but reach our implementation, if they included our
"regex/regex.h" header.
Back-patch to all supported branches, so that macOS 15's tool chain can
build them.
Reported-by: Stan Hu <stanhu@gmail.com>
Suggested-by: Tom Lane <tgl@sss.pgh.pa.us>
Tested-by: Aleksander Alekseev <aleksander@timescale.com>
Discussion: https://postgr.es/m/CAMBWrQnEwEJtgOv7EUNsXmFw2Ub4p5P%2B5QTBEgYwiyjy7rAsEQ%40mail.gmail.com
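An abbreviated sketch of the hiding technique described above; the names and values shown are illustrative, and the real header covers many more symbols:

    #include <regex.h>              /* pull in the system header explicitly */

    /* 1. neutralize standard-clobbering macros, then redefine with our values */
    #undef REG_EXTENDED
    #define REG_EXTENDED 000001     /* value illustrative */

    /* 2. our types carry pg_ prefixes ... */
    typedef struct pg_regex_t
    {
        /* ... */
    } pg_regex_t;

    /* ... and macros redirect the standard names to them */
    #define regex_t pg_regex_t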
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This function generates the commands that preserve the OIDs and
relfilenodes of relations during pg_upgrade. It is called once per
relevant relation, and each such call executes a relatively
expensive query to retrieve information for a single pg_class_oid.
This can cause pg_dump to take significantly longer when
--binary-upgrade is specified, especially when there are many
tables.
This commit improves the performance of this function by gathering
all the required pg_class information with a single query at the
beginning of pg_dump. This information is stored in a sorted array
that binary_upgrade_set_pg_class_oids() can bsearch() for what it
needs. This follows a similar approach as commit d5e8930f50, which
introduced a sorted array for role information.
With this patch, 'pg_dump --binary-upgrade' will use more memory,
but that isn't expected to be too egregious. Per the mailing list
discussion, folks feel that this is worth the trade-off.
Reviewed-by: Corey Huinker, Michael Paquier, Daniel Gustafsson
Discussion: https://postgr.es/m/20240418041712.GA3441570%40nathanxps13
|
|
|
|
|
|
|
|
|
|
|
|
| |
This is a continuation of 0c1aca461481, with some cleanup in:
- msvc_gendef.pl
- pgindent
- 005_negotiate_encryption.pl, due to an oversight in d39a49c1e459 that
removed %params in test_matrix(), also making $server_config
useless.
Author: Dagfinn Ilmari Mannsåker
Discussion: https://postgr.es/m/87wmm4dkci.fsf@wibble.ilmari.org
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This old CPU architecture hasn't been produced in decades, and
whatever instances might still survive are surely too underpowered
for anyone to consider running Postgres on in production. We'd
nonetheless continued to carry code support for it (largely at my
insistence), because its unique implementation of spinlocks seemed
like a good edge case for our spinlock infrastructure. However,
our last buildfarm animal of this type was retired last year, and
it seems quite unlikely that another will emerge. Without the ability
to run tests, the argument that this is useful test code fails to
hold water. Furthermore, carrying code support for an untestable
architecture has costs not to be ignored. So, remove HPPA-specific
code, in the same vein as commits 718aa43a4 and 92d70b77e.
Discussion: https://postgr.es/m/3351991.1697728588@sss.pgh.pa.us
|
|
|
|
|
|
|
|
|
|
|
|
| |
The standby_slot_names GUC allows the specification of physical standby
slots that must be synchronized before the logical walsenders associated
with logical failover slots. However, for this purpose, the GUC name is
too generic.
Author: Hou Zhijie
Reviewed-by: Bertrand Drouvot, Masahiko Sawada
Backpatch-through: 17
Discussion: https://postgr.es/m/ZnWeUgdHong93fQN@momjian.us
|
|
|
|
| |
Let the hacking begin ...
|
|
|
|
|
|
|
|
|
| |
Both injection points and customization of type "Extension" are new in
v17, so this just changes a detail of an unreleased feature.
Reported by Robert Haas. Reviewed by Michael Paquier.
Discussion: https://postgr.es/m/CA+TgmobfMU5pdXP36D5iAwxV5WKE_vuDLtp_1QyH+H5jMMt21g@mail.gmail.com
|
|
|
|
|
|
|
|
|
|
| |
This could have caused spurious failures only on SPARC Linux, because
today's only todo_start tests for that platform. Back-patch to v16,
where Meson support first appeared.
Reviewed by Robert Haas.
Discussion: https://postgr.es/m/20240512232923.aa.nmisch@google.com
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We did not recover the subtransaction IDs of prepared transactions
when starting a hot standby from a shutdown checkpoint. As a result,
such subtransactions were considered as aborted, rather than
in-progress. That would lead to hint bits being set incorrectly, and
the subtransactions suddenly becoming visible to old snapshots when
the prepared transaction was committed.
To fix, update pg_subtrans with prepared transactions's subxids when
starting hot standby from a shutdown checkpoint. The snapshots taken
from that state need to be marked as "suboverflowed", so that we also
check the pg_subtrans.
Backport to all supported versions.
Discussion: https://www.postgresql.org/message-id/6b852e98-2d49-4ca1-9e95-db419a2696e0@iki.fi
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Commit f5e4dedfa exposed libpq's internal function PQsocketPoll
without a lot of thought about whether that was an API we really
wanted to chisel in stone. The main problem with it is the use of
time_t to specify the timeout. While we do want an absolute time
so that a loop around PQsocketPoll doesn't have problems with
timeout slippage, time_t has only 1-second resolution. That's
already problematic for libpq's own internal usage --- for example,
pqConnectDBComplete has long had a kluge to treat "connect_timeout=1"
as 2 seconds so that it doesn't accidentally round to nearly zero.
And it's even less likely to be satisfactory for external callers.
Hence, let's change this while we still can.
The best idea seems to be to use an int64 count of microseconds since
the epoch --- basically the same thing as the backend's TimestampTz,
but let's use the standard Unix epoch (1970-01-01) since that's more
likely for clients to be easy to calculate. Millisecond resolution
would be plenty for foreseeable uses, but maybe the day will come that
we're glad we used microseconds.
Also, since time(2) isn't especially helpful for computing timeouts
defined this way, introduce a new function PQgetCurrentTimeUSec
to get the current time in this form.
Remove the hack in pqConnectDBComplete, so that "connect_timeout=1"
now means what you'd expect.
We can also remove the "#include <time.h>" that f5e4dedfa added to
libpq-fe.h, since there's no longer a need for time_t in that header.
It seems better for v17 not to enlarge libpq-fe.h's include footprint
from what it's historically been, anyway.
I also failed to resist the temptation to do some wordsmithing
on PQsocketPoll's documentation.
Patch by me, per complaint from Dominique Devienne.
Discussion: https://postgr.es/m/913559.1718055575@sss.pgh.pa.us
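As a usage sketch of the revised API (assuming the timeout type is exposed as an int64-based typedef such as pg_usec_time_t, per the description above):

    #include <libpq-fe.h>

    /* wait up to five seconds for conn's socket to become readable */
    static int
    wait_readable(PGconn *conn)
    {
        pg_usec_time_t deadline = PQgetCurrentTimeUSec() + 5 * 1000000;

        /* >0: readable, 0: deadline passed, <0: failure in poll()/select() */
        return PQsocketPoll(PQsocket(conn), 1 /* forRead */, 0 /* forWrite */,
                            deadline);
    }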
|
|
|
|
|
|
|
|
|
|
| |
Make sure that function declarations use names that exactly match the
corresponding names from function definitions in pg_bsd_indent.
This commit was written with help from clang-tidy, by mechanically
applying the same rules as similar clean-up commits.
Discussion: https://postgr.es/m/CAH2-WzkaBS8w-vCbG5M5Bx7XikC0WhNLJV_+Z_YAWW9Kef6OBQ@mail.gmail.com
|