The phase of the test waiting for a restartpoint to complete was not
working as intended, due to an incorrectly written log_contains()
call.
The problem reported by the author could be reproduced simply by
removing the injection_points_wakeup() call: the test succeeds rather
than waiting for the restartpoint completion. In most cases, the
restartpoint completion is fast enough that the test offered the
intended coverage. On slow machines, it could have become unreliable.
Oversight in 6782709df81f.
Author: Nitin Jadhav
Discussion: https://postgr.es/m/CAMm1aWa_6u+o52r7h7G6pX-oWD0Qraf0ee17Ma50qxGS0B_Rzg@mail.gmail.com
Backpatch-through: 17
The function documented to free the memory allocated for the results
returned by PQescapeIdentifier() and PQescapeLiteral() is
PQfreemem(). pg_createsubscriber.c relied on pg_free() instead, which
is not incorrect, as both do a free() internally, but it is
inconsistent with the documentation.
While at it, this commit fixes a small memory leak introduced by
4867f8a555ce, as the rest of pg_createsubscriber's code makes the
effort to free its memory.
Author: Ranier Vilela
Reviewed-by: Euler Taveira
Discussion: https://postgr.es/m/CAEudQAp=AW5dJXrGLbC_aZg_9nOo=42W7uLDRONFQE-gcgnkgQ@mail.gmail.com
Backpatch-through: 17
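
A minimal usage sketch of the documented pairing, not pg_createsubscriber's
actual code: results of PQescapeIdentifier()/PQescapeLiteral() are allocated
by libpq and must be released with PQfreemem().

    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn *conn = PQconnectdb("dbname=postgres");
        char   *ident;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        /* returns malloc'ed memory, or NULL on encoding/OOM errors */
        ident = PQescapeIdentifier(conn, "my table", strlen("my table"));
        if (ident != NULL)
        {
            printf("escaped: %s\n", ident);
            PQfreemem(ident);   /* the documented counterpart, not free()/pg_free() */
        }
        PQfinish(conn);
        return 0;
    }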
Remove some (char *) casts related to uses of the pg_checksum_page()
function. These casts are useless, because everything involved
already has the right type. Moreover, these casts actually silently
discarded a const qualifier. The declaration of a higher-level
function needs to be adjusted to fix that.
Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Discussion: https://www.postgresql.org/message-id/flat/fd1fcedb-3492-4fc8-9e3e-74b97f2db6c7%40eisentraut.org
Remove (char *) casts around memory functions such as memcmp(),
memcpy(), or memset() where the cast is useless. Since these
functions don't take char * arguments anyway, these casts are at best
complicated casts to (void *), about which see commit 7f798aca1d5.
Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Discussion: https://www.postgresql.org/message-id/flat/fd1fcedb-3492-4fc8-9e3e-74b97f2db6c7%40eisentraut.org
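
An illustrative before/after of the cast removal; memcmp() takes const void *
arguments, so the pointer conversion is implicit:

    #include <string.h>

    typedef struct { int x; int y; } Point;

    static int
    points_equal(const Point *a, const Point *b)
    {
        /* before: memcmp((char *) a, (char *) b, sizeof(Point)) == 0 */
        return memcmp(a, b, sizeof(Point)) == 0;    /* cast-free, same behavior */
    }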
Remove (char *) casts around string functions where the arguments or
result already have the right type and the cast is useless (or worse,
potentially casts away a qualifier, but this doesn't appear to be the
case here).
Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Discussion: https://www.postgresql.org/message-id/flat/fd1fcedb-3492-4fc8-9e3e-74b97f2db6c7%40eisentraut.org
This commit adds the amount of time spent sleeping due to
cost-based delay to the pg_stat_progress_vacuum and
pg_stat_progress_analyze system views. A new configuration
parameter named track_cost_delay_timing, which is off by default,
controls whether this information is gathered. For vacuum, the
reported value includes the sleep time of any associated parallel
workers. However, parallel workers only report their sleep time
once per second to avoid overloading the leader process.
Bumps catversion.
Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Co-authored-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Masahiro Ikeda <ikedamsh@oss.nttdata.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Sergei Kornilov <sk@zsrv.org>
Discussion: https://postgr.es/m/ZmaXmWDL829fzAVX%40ip-10-97-1-34.eu-west-3.compute.internal
This function is used in both vacuum and analyze code paths, and a
follow-up commit will require distinguishing between the two. This
commit forces callers to specify whether they are in a vacuum or
analyze path, but it does not use that information for anything
yet.
Author: Nathan Bossart <nathandbossart@gmail.com>
Co-authored-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/ZmaXmWDL829fzAVX%40ip-10-97-1-34.eu-west-3.compute.internal
pgbench client-side data generation uses COPY FREEZE to load data for most
tables. COPY FREEZE isn't supported for partitioned tables, and since pgbench
only supports partitioning pgbench_accounts, it used a hard-coded check to
skip COPY FREEZE and use plain COPY for a partitioned pgbench_accounts.
If the user has manually partitioned one of the other pgbench tables, this
causes client-side data generation to error out with:
ERROR: cannot perform COPY FREEZE on a partitioned table
Fix this by limiting COPY FREEZE to ordinary tables (RELKIND_RELATION).
Author: Sergey Tatarintsev <s.tatarintsev@postgrespro.ru>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/flat/97f55fca-8a7b-4da8-b413-7d1c57010676%40postgrespro.ru
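
A hedged sketch of the fix's shape; the relkind lookup below is illustrative,
not pgbench's exact query:

    #include <string.h>
    #include <stdbool.h>
    #include <libpq-fe.h>

    /* Use COPY FREEZE only for ordinary tables (relkind 'r', i.e.
     * RELKIND_RELATION); anything else falls back to plain COPY. */
    static bool
    use_copy_freeze(PGconn *conn, const char *table)
    {
        const char *params[1] = {table};
        PGresult   *res;
        bool        ok = false;

        res = PQexecParams(conn,
                           "SELECT relkind FROM pg_catalog.pg_class "
                           "WHERE relname = $1",
                           1, NULL, params, NULL, NULL, 0);
        if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) == 1)
            ok = (strcmp(PQgetvalue(res, 0, 0), "r") == 0);
        PQclear(res);
        return ok;
    }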
Requires adding a guard against shift-by-32. Previously, that was
impossible because the number of partitions was always greater than 1,
but a new injection point can force the number of partitions to 1.
Discussion: https://postgr.es/m/ff4e59305e5d689e03cd256a736348d3e7958f8f.camel@j-davis.com
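
For context, shifting a 32-bit value by 32 or more is undefined behavior in
C, so the single-partition case needs an explicit guard; a minimal sketch:

    #include <stdint.h>

    /* with one partition, a shift amount of 32 becomes reachable */
    static inline uint32_t
    shift_right_guarded(uint32_t value, unsigned nbits)
    {
        return (nbits >= 32) ? 0 : (value >> nbits);
    }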
Aggressive vacuums must scan every unfrozen tuple in order to advance
the relfrozenxid/relminmxid. Because data is often vacuumed before it is
old enough to require freezing, relations may build up a large backlog
of pages that are set all-visible but not all-frozen in the visibility
map. When an aggressive vacuum is triggered, all of these pages must be
scanned. These pages have often been evicted from shared buffers and
even from the kernel buffer cache. Thus, aggressive vacuums often incur
large amounts of extra I/O at the expense of foreground workloads.
To amortize the cost of aggressive vacuums, eagerly scan some
all-visible but not all-frozen pages during normal vacuums.
All-visible pages that are eagerly scanned and set all-frozen in the
visibility map are counted as successful eager freezes and those not
frozen are counted as failed eager freezes.
If too many eager scans fail in a row, eager scanning is temporarily
suspended until a later portion of the relation. The number of failures
tolerated is configurable globally and per table.
To effectively amortize aggressive vacuums, we cap the number of
successes as well. Capping eager freeze successes also limits the amount
of potentially wasted work if these pages are modified again before the
next aggressive vacuum. Once we reach the maximum number of blocks
successfully eager frozen, eager scanning is disabled for the remainder
of the vacuum of the relation.
Original design idea from Robert Haas, with enhancements from
Andres Freund, Tomas Vondra, and me
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Robert Treat <rob@xzilla.net>
Reviewed-by: Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/flat/CAAKRu_ZF_KCzZuOrPrOqjGVe8iRVWEAJSpzMgRQs%3D5-v84cXUg%40mail.gmail.com
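
A hedged sketch of the capping logic described above; the names and the exact
suspension policy are illustrative, not vacuumlazy.c's code:

    #include <stdbool.h>

    typedef struct EagerScanState
    {
        int     region_failures;    /* failed eager freezes in this region */
        int     max_failures;       /* configurable failure tolerance */
        long    successes;          /* pages successfully set all-frozen */
        long    max_successes;      /* cap that amortizes aggressive vacuums */
        bool    enabled;
    } EagerScanState;

    static void
    eager_scan_record(EagerScanState *st, bool froze)
    {
        if (froze && ++st->successes >= st->max_successes)
            st->enabled = false;        /* off for the rest of this vacuum */
        else if (!froze && ++st->region_failures >= st->max_failures)
            st->region_failures = 0;    /* suspend until a later region */
    }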
"I/O" seems more descriptive than "Asynchronous Behavior", given that some of
the GUCs in the section don't relate to anything asynchronous.
Most other abbreviations in the config sections are un-abbreviated, but
"Input/Output" seems less likely to be helpful than just IO or I/O.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/x3tlw2jk5gm3r3mv47hwrshffyw7halpczkfbk3peksxds7bvc@lguk43z3bsyq
Having all the worker-related GUCs in the same section as the
IO-controlling GUCs doesn't really make sense. Create a separate
section for them.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/x3tlw2jk5gm3r3mv47hwrshffyw7halpczkfbk3peksxds7bvc@lguk43z3bsyq
Commit 1dc5ebc90 allowed PL/pgSQL to perform in-place updates
of expanded-object variables that are being updated with
assignments like "x := f(x, ...)". However this was allowed
only for a hard-wired list of functions f(), since we need to
be sure that f() will not modify the variable if it fails.
It was always envisioned that we should make that extensible,
but at the time we didn't have a good way to do so. Since
then we've invented the idea of "support functions" to allow
attaching specialized optimization knowledge to functions,
and that is a perfect mechanism for doing this.
Hence, adjust PL/pgSQL to use a support function request instead
of hard-wired logic to decide if in-place update is safe.
Preserve the previous optimizations by creating support functions
for the three functions that were previously hard-wired.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Reviewed-by: Pavel Borisov <pashkin.elfe@gmail.com>
Discussion: https://postgr.es/m/CACxu=vJaKFNsYxooSnW1wEgsAO5u_v1XYBacfVJ14wgJV_PYeg@mail.gmail.com
If a read/write expanded variable is declared locally to the
assignment statement that is updating it, and it is referenced
exactly once in the assignment RHS, then we can optimize the
operation as a direct update of the expanded value, whether
or not the function(s) operating on it can be trusted not to
modify the value before throwing an error. This works because
if an error does get thrown, we no longer care what value the
variable has.
In cases where that doesn't work, fall back to the previous
rule that checks for safety of the top-level function.
In any case, postpone determination of whether these optimizations
are feasible until we are executing a Param referencing the target
variable and that variable holds a R/W expanded object. While the
previous incarnation of exec_check_rw_parameter was pretty cheap,
this is a bit less so, and our plan to invoke support functions
will make it even less so. So avoiding the check for variables
where it couldn't be useful should be a win.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Reviewed-by: Pavel Borisov <pashkin.elfe@gmail.com>
Discussion: https://postgr.es/m/CACxu=vJaKFNsYxooSnW1wEgsAO5u_v1XYBacfVJ14wgJV_PYeg@mail.gmail.com
Mark whether the target of a potentially optimizable assignment
is "local", in the sense of being declared inside any exception
block that could trap an error thrown from the assignment.
(This implies that we needn't preserve the variable's value
in case of an error. This patch doesn't do anything with the
knowledge, but the next one will.)
Normally, this requires a post-parsing scan of the function's
parse tree, since we don't know while parsing a BEGIN ...
construct whether we will find EXCEPTION at its end. However,
if there are no BEGIN ... EXCEPTION blocks in the function at
all, then all assignments are local, even those to variables
representing function arguments. We optimize that common case
by initializing the target_is_local flags to "true", and fixing
them up with a post-scan only if we found EXCEPTION.
Note that variables' default-value expressions are never interesting
for expanded-variable optimization, since they couldn't contain a
reference to the target variable anyway. But the code is set up
to compute their target_param and target_is_local correctly anyway,
for consistency and in case someone thinks of a use for that data.
I added a bit of plpgsql_dumptree support to help verify that this
code sets the flags as expected. I also added a plpgsql_dumptree
call in plpgsql_compile_inline. It was at best an oversight that
"#option dump" didn't work in a DO block; now it does.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Reviewed-by: Pavel Borisov <pashkin.elfe@gmail.com>
Discussion: https://postgr.es/m/CACxu=vJaKFNsYxooSnW1wEgsAO5u_v1XYBacfVJ14wgJV_PYeg@mail.gmail.com
This short and boring patch simply moves the responsibility for
initializing PLpgSQL_expr.target_param into plpgsql parsing,
rather than doing it at first execution of the expr as before.
This doesn't save anything in terms of runtime, since the work was
trivial and done only once per expr anyway. But it makes the info
available during parsing, which will be useful for the next step.
Likewise set PLpgSQL_expr.func during parsing. According to the
comments, this was once impossible; but it's certainly possible
since we invented the plpgsql_curr_compile variable. Again, this
saves little runtime, but it seems far cleaner conceptually.
While at it, I reordered stuff in struct PLpgSQL_expr to make it
clearer which fields are filled when, and merged some duplicative
code in pl_gram.y.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Reviewed-by: Pavel Borisov <pashkin.elfe@gmail.com>
Discussion: https://postgr.es/m/CACxu=vJaKFNsYxooSnW1wEgsAO5u_v1XYBacfVJ14wgJV_PYeg@mail.gmail.com
We haven't done this up to now because there was only one use-case,
namely plpgsql_free_function_memory's search for expressions to clean
up. However an upcoming patch has another need for walking plpgsql
functions' statement trees, so let's create sharable tree-walker
infrastructure in the same style as expression_tree_walker().
This patch actually makes the code shorter, although that's
mainly down to having used a more compact coding style. (I didn't
write a separate subroutine for each statement type, and I made
use of some newer notations like foreach_ptr.)
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Reviewed-by: Pavel Borisov <pashkin.elfe@gmail.com>
Discussion: https://postgr.es/m/CACxu=vJaKFNsYxooSnW1wEgsAO5u_v1XYBacfVJ14wgJV_PYeg@mail.gmail.com
This was overlooked in commit 9c727360bcc.
Oversight in commit 3d17d7d7fb7a. Worth noting that pgindent was fine
as-is.
Author: Sami Imseih
Discussion: https://postgr.es/m/CAA5RZ0t80hP2aTv97QtEJy39GkxKmDBVDiTBApfiuTa4O=TEWQ@mail.gmail.com
We need to tell fmtId() what encoding to assume, but this function
doesn't know that. Fortunately we can fix that without changing the
function's API, because we can just use SQL_ASCII. That's because
database names in connection requests are effectively binary not text:
no encoding-aware processing will happen on them.
This fixes XversionUpgrade failures seen in the buildfarm. The
alternative of having pg_upgrade use setFmtEncoding() is unappetizing,
given that it's connecting to multiple databases that may have
different encodings.
Andres Freund, Noah Misch, Tom Lane
Security: CVE-2025-1094
Follow the locking behavior of ANALYZE when importing statistics. In
particular, when importing index statistics, the table must be locked
in ShareUpdateExclusive mode. Fixes a bug reported by Jian He.
ANALYZE doesn't update statistics on partitioned indexes, and the
locking requirements are slightly different for in-place updates on
partitioned indexes versus normal indexes. To be conservative, lock
both the partitioned table and the partitioned index in
ShareUpdateExclusive mode when importing stats for a partitioned
index.
Author: Corey Huinker
Reported-by: Jian He
Reviewed-by: Michael Paquier
Discussion: https://www.postgresql.org/message-id/CACJufxGreTY7qsCV8%2BBkuv0p5SXGTScgh%3DD%2BDq6%3D%2B_%3DXTp7FWg%40mail.gmail.com
On machines where char is unsigned this could lead to option parsing looping
endlessly. It's also too narrow a type on other hardware.
Found via Tom Lane's monitoring of the buildfarm.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Security: CVE-2025-1094
Backpatch-through: 13
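
The bug class, sketched: getopt() returns int, and storing the result in
char loses the -1 sentinel on platforms where char is unsigned:

    #include <stdio.h>
    #include <unistd.h>

    int
    main(int argc, char **argv)
    {
        int c;      /* must be int: (char) -1 != -1 when char is unsigned */

        while ((c = getopt(argc, argv, "ab:")) != -1)
        {
            switch (c)
            {
                case 'a': puts("got -a"); break;
                case 'b': printf("got -b %s\n", optarg); break;
                default: return 1;
            }
        }
        return 0;
    }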
As highlighted by the prior commit, writing correct escape functions is less
trivial than one might hope.
This test module tries to verify that different escaping functions behave
reasonably. It e.g. tests:
- Invalidly encoded input to an escape function leads to invalidly encoded
output
- Trailing incomplete multi-byte characters are handled sensibly
- Escaped strings are parsed as a single statement by psql's parser (which
derives from the backend parser)
There are further tests that would be good to add. But even in the current
state it was rather useful for writing the fix in the prior commit.
Reviewed-by: Noah Misch <noah@leadboat.com>
Backpatch-through: 13
Security: CVE-2025-1094
Previously invalidly encoded input to various escaping functions could lead to
the escaped string getting incorrectly parsed by psql. To be safe, escaping
functions need to ensure that neither invalid nor incomplete multi-byte
characters can be used to "escape" from being quoted.
Functions which can report errors now return an error in more cases than
before. Functions that cannot report errors now replace invalid input bytes
with a byte sequence that cannot be used to escape the quotes and that is
guaranteed to error out when a query is sent to the server.
The following functions are fixed by this commit:
- PQescapeLiteral()
- PQescapeIdentifier()
- PQescapeString()
- PQescapeStringConn()
- fmtId()
- appendStringLiteral()
Reported-by: Stephen Fewer <stephen_fewer@rapid7.com>
Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Backpatch-through: 13
Security: CVE-2025-1094
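
A hedged client-side usage sketch showing the error indication these
functions now report more often; quote_literal() is an illustrative helper,
not a libpq API (PQescapeLiteral() does all of this in one call):

    #include <stdlib.h>
    #include <string.h>
    #include <libpq-fe.h>

    static char *
    quote_literal(PGconn *conn, const char *raw)
    {
        size_t  rawlen = strlen(raw);
        char   *buf = malloc(2 * rawlen + 3);   /* worst case + quotes + NUL */
        int     err = 0;
        size_t  outlen;

        if (buf == NULL)
            return NULL;
        buf[0] = '\'';
        outlen = PQescapeStringConn(conn, buf + 1, raw, rawlen, &err);
        if (err)                    /* e.g. invalidly encoded input */
        {
            free(buf);
            return NULL;
        }
        buf[1 + outlen] = '\'';
        buf[2 + outlen] = '\0';
        return buf;
    }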
This commit adds fmtIdEnc() and fmtQualifiedIdEnc(), which allow the
encoding to be specified as an explicit argument. Additionally,
setFmtEncoding() is provided, which defines the encoding used when no
explicit encoding is provided, to avoid breaking all code using fmtId().
All users of fmtId()/fmtQualifiedId() are either converted to the
explicit version or have had a call to setFmtEncoding() added.
This commit does not yet utilize the now well-defined encoding; that
will happen in a subsequent commit.
Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Backpatch-through: 13
Security: CVE-2025-1094
There are cases where we cannot / do not want to error out for invalidly
encoded input. In such cases it can be useful to replace e.g. an incomplete
multi-byte characters with bytes that will trigger an error when getting
validated as part of a larger string.
Unfortunately, until now, for some encodings no such sequence existed. For
those encodings this commit removes one previously accepted input
combination; we consider that to be OK, as the chosen bytes are outside
the valid ranges for those encodings, and we simply failed to detect that
previously.
As we cannot add a new field to pg_wchar_table without breaking ABI, this is
implemented "in-line" in the newly added function.
Author: Noah Misch <noah@leadboat.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Backpatch-through: 13
Security: CVE-2025-1094
This is similar to d575051b9af9, but this time for the comments in
plannodes.h, to avoid long lines, which is useful if adding per-field
annotations with pg_node_attr() to these planner structures.
Some patches under discussion would add such properties to planner
fields, which may or may not happen; this change makes future proposals
easier to work on and review, while being more consistent in style with
the parse nodes.
Author: Sami Imseih
Discussion: https://postgr.es/m/Z5xTb5iBHVGns35R@paquier.xyz
Now that we generate different SQL for temporal NO ACTION vs RESTRICT
foreign keys, we should cache their query plans with different keys.
Since the key also includes the constraint oid, this shouldn't be
necessary, but we have been seeing buildfarm failures that suggest we
might sometimes be using a cached NO ACTION plan to implement a RESTRICT
constraint.
Author: Paul A. Jungwirth <pj@illuminatedcomputing.com>
Discussion: https://www.postgresql.org/message-id/flat/CA+renyUApHgSZF9-nd-a0+OPGharLQLO=mDHcY4_qQ0+noCUVg@mail.gmail.com
This also makes it match the equivalent APIs in libpq.
Author: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Discussion: https://www.postgresql.org/message-id/flat/fd1fcedb-3492-4fc8-9e3e-74b97f2db6c7%40eisentraut.org
This unifies the output produced when differences are found in the
files compared, information that 027_stream_regress did not show on
failures. TAP tests of pg_combinebackup and pg_upgrade now rely on the
refactored routine, reducing the dependency on the diff command. The
callers of this routine can optionally specify a custom line-comparison
function.
A couple of tests still use a diff command directly:
001_pg_bsd_indent, 017_shm and test_json_parser's 003. These rely on
different properties and are left out for now.
Extracted from a larger patch by the same author.
Author: Ashutosh Bapat
Discussion: https://postgr.es/m/Z6RQS-tMzGYjlA-H@paquier.xyz
Commit af35fe501 caused "pgbench -i" to emit a '\r' character
for each data row loaded (when stderr is a terminal).
That's effectively invisible on-screen, but it causes the
connected terminal program to consume a lot of cycles.
It's even worse if you're connected over ssh, as the data
then has to pass through the ssh tunnel.
Simplest fix is to move the added logic inside the if-tests
that check whether to print a progress line. We could do
it another way that avoids duplicating these few lines,
but on the whole this seems the most transparent way to
write it.
Like the previous commit, back-patch to all supported versions.
Reported-by: Andres Freund <andres@anarazel.de>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/4k4drkh7bcmdezq6zbkhp25mnrzpswqi2o75d5uv2eeg3aq6q7@b7kqdmzzwzgb
Backpatch-through: 13
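
A hedged sketch of the fix's shape, with illustrative names and thresholds:
the '\r' is emitted only on the paths that actually print a progress line,
never once per row:

    #include <stdio.h>
    #include <stdbool.h>

    static void
    report_progress(long done, long total, bool stderr_is_tty)
    {
        if (done % 100000 == 0 || done == total)    /* progress condition */
        {
            if (stderr_is_tty)
                fprintf(stderr, "\r%ld of %ld tuples done", done, total);
            else
                fprintf(stderr, "%ld of %ld tuples done\n", done, total);
        }
        /* no unconditional per-row '\r': that flooded terminals and ssh */
    }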
Previously, only btrees were supported as the arbiter index for
speculative insertion because there was no way to get the equality
strategy number for other index methods. We have this now (commit
c09e5a6a016), so we can support this.
At the moment, only btree supports unique indexes, so this does not
change anything in practice, but it would allow another index method
that has amcanunique to be supported.
Co-authored-by: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/E72EAA49-354D-4C2E-8EB9-255197F55330@enterprisedb.com
Previously, only btrees were supported as the referenced unique index
for foreign keys because there was no way to get the equality strategy
number for other index methods. We have this now (commit
c09e5a6a016), so we can support this. In fact, this is now just a
special case of the existing generalized "period" foreign key
support, since that already knows how to look up equality strategy
numbers.
Note that this does not change the requirement that the referenced
index needs to be unique, and at the moment, only btree supports that,
so this does not change anything in practice, but it would allow
another index method that has amcanunique to be supported.
Co-authored-by: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/E72EAA49-354D-4C2E-8EB9-255197F55330@enterprisedb.com
This adds a new variant of generated columns that are computed on read
(like a view, unlike the existing stored generated columns, which are
computed on write, like a materialized view).
The syntax for the column definition is
... GENERATED ALWAYS AS (...) VIRTUAL
and VIRTUAL is also optional. VIRTUAL is the default rather than
STORED to match various other SQL products. (The SQL standard makes
no specification about this, but it also doesn't know about VIRTUAL or
STORED.) (Also, virtual views are the default, rather than
materialized views.)
Virtual generated columns are stored in tuples as null values. (A
very early version of this patch had the ambition to not store them at
all. But so much stuff breaks or gets confused if you have tuples
where a column in the middle is completely missing. This is a
compromise, and it still saves space over being forced to use stored
generated columns. If we ever find a way to improve this, a bit of
pg_upgrade cleverness could allow for upgrades to a newer scheme.)
The capabilities and restrictions of virtual generated columns are
mostly the same as for stored generated columns. In some cases, this
patch keeps virtual generated columns more restricted than they might
technically need to be, to keep the two kinds consistent. Some of
that could maybe be relaxed later after separate careful
considerations.
Some functionality that is currently not supported, but could possibly
be added as incremental features, some easier than others:
- index on or using a virtual column
- hence also no unique constraints on virtual columns
- extended statistics on virtual columns
- foreign-key constraints on virtual columns
- not-null constraints on virtual columns (check constraints are supported)
- ALTER TABLE / DROP EXPRESSION
- virtual column cannot have domain type
- virtual columns are not supported in logical replication
The tests in generated_virtual.sql have been copied over from
generated_stored.sql with the keyword replaced. This way we can make
sure the behavior is mostly aligned, and the differences can be
visible. Some tests for currently not supported features are
currently commented out.
Reviewed-by: Jian He <jian.universality@gmail.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Tested-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/a368248e-69e4-40be-9c07-6c3b5880b0a6@eisentraut.org
This commit introduces changes to track unpruned relations explicitly,
making it possible for top-level plan nodes, such as ModifyTable and
LockRows, to avoid processing partitions pruned during initial
pruning. Scan-level nodes, such as Append and MergeAppend, already
avoid the unnecessary processing by accessing partition pruning
results directly via part_prune_index. In contrast, top-level nodes
cannot access pruning results directly and need to determine which
partitions remain unpruned.
To address this, this commit introduces a new bitmapset field,
es_unpruned_relids, which the executor uses to track the set of
unpruned relations. This field is referenced during plan
initialization to skip initializing certain nodes for pruned
partitions. It is initialized with PlannedStmt.unprunableRelids,
a new field that the planner populates with RT indexes of relations
that cannot be pruned during runtime pruning. These include relations
not subject to partition pruning and those required for execution
regardless of pruning.
PlannedStmt.unprunableRelids is computed during set_plan_refs() by
removing the RT indexes of runtime-prunable relations, identified
from PartitionPruneInfos, from the full set of relation RT indexes.
ExecDoInitialPruning() then updates es_unpruned_relids by adding
partitions that survive initial pruning.
To support this, PartitionedRelPruneInfo and PartitionedRelPruningData
now include a leafpart_rti_map[] array that maps partition indexes to
their corresponding RT indexes. The former is used in set_plan_refs()
when constructing unprunableRelids, while the latter is used in
ExecDoInitialPruning() to convert partition indexes returned by
get_matching_partitions() into RT indexes, which are then added to
es_unpruned_relids.
These changes make it possible for ModifyTable and LockRows nodes to
process only relations that remain unpruned after initial pruning.
ExecInitModifyTable() trims lists, such as resultRelations,
withCheckOptionLists, returningLists, and updateColnosLists, to
consider only unpruned partitions. It also creates ResultRelInfo
structs only for these partitions. Similarly, child RowMarks for
pruned relations are skipped.
By avoiding unnecessary initialization of structures for pruned
partitions, these changes improve the performance of updates and
deletes on partitioned tables during initial runtime pruning.
Due to ExecInitModifyTable() changes as described above, EXPLAIN on a
plan for UPDATE and DELETE that uses runtime initial pruning no longer
lists partitions pruned during initial pruning.
Reviewed-by: Robert Haas <robertmhaas@gmail.com> (earlier versions)
Reviewed-by: Tomas Vondra <tomas@vondra.me>
Discussion: https://postgr.es/m/CA+HiwqFGkMSge6TgC9KQzde0ohpAycLQuV7ooitEEpbKB0O_mg@mail.gmail.com
The CI has reported that using a marker to force the output of the
detach step to happen after the wait step was not enough, as
isolationtester has managed to report the detach step as waiting before
the wait step finishes in some runs.
src/test/isolation/README explains that there is a more drastic method to
enforce the ordering of the output: an empty step positioned just after
the wait step can force the wait step to complete before the detach step
begins. This method has been able to pass 10 runs in the CI here, while
HEAD seems to fail 15~20% of the time in the CF bot.
Discussion: https://postgr.es/m/Z6WO8FbqK_FHmrzC@paquier.xyz
Three tests in the main regression test suite prove not to be
portable across multiple runs on a deployed cluster, as the stats of
pg_stat_io get reset. Problems happen for tests on:
- Writes of WAL in the init context, when creating a WAL segment.
- Syncs of WAL in the init context, when creating a WAL segment.
- Reads of WAL in the normal context, requiring a WAL record to be read.
For a `make check`, this could rely on the checkpoint record read by the
startup process when starting the cluster, something that is not going
to work for a deployed node.
Two of the three tests are moved to the recovery TAP test
029_stats_restart, where we already check the consistency of stats
data. The test for syncs is dropped as TAP can run with fsync=off. The
other two are checked with some data from a freshly-initialized cluster.
Per discussion with Tom Lane, Bertrand Drouvot and Nazir Bilal Yavuz.
Discussion: https://postgr.es/m/915687.1738780322@sss.pgh.pa.us
This didn't actually work: the COPY succeeds, but the FREEZE
optimization isn't applied. There doesn't seem to be an easy way
to support FREEZE on foreign tables, so let's follow the precedent
established by commit 5c9a5513a3 by raising an error early. This
is arguably a bug fix, but due to the lack of reports, the minimal
discussion on the mailing list, and the potential to break existing
scripts, I am not back-patching it for now.
Author: Sami Imseih <samimseih@gmail.com>
Reviewed-by: Zhang Mingli <zmlpostgres@gmail.com>
Discussion: https://postgr.es/m/CAA5RZ0ujeNgKpE3OrLtR%3DeJGa5LkGMekFzQTwjgw%3DrzaLufQLQ%40mail.gmail.com
This adds the ability for a SASL mechanism to signal PQconnectPoll()
that some arbitrary work, external to the Postgres connection, is
required for authentication to continue. There is no consumer for
this capability as part of this commit; it is infrastructure
required for future work on supporting the OAUTHBEARER mechanism.
To ensure that threads are not blocked waiting for the SASL mechanism
to make long-running calls, the mechanism communicates with the top-
level client via the "altsock": a file or socket descriptor, opaque to
this layer of libpq, which is signaled when work is ready to be done
again. The altsock temporarily replaces the regular connection
descriptor, so existing PQsocket() clients should continue to operate
correctly using their existing polling implementations.
For a mechanism to use this it should set an authentication callback,
conn->async_auth(), and a cleanup callback, conn->cleanup_async_auth(),
and return SASL_ASYNC during the exchange. It should then assign
conn->altsock during the first call to async_auth(). When the cleanup
callback is called, either because authentication has succeeded or
because the connection is being dropped, the altsock must be released
and disconnected from the PGconn object.
This was extracted from the larger OAUTHBEARER patchset which has
been developed, and reviewed by many, over several years and it is
thus likely that some reviewer credit of much earlier versions has
been accidentally omitted.
Author: Jacob Champion <jacob.champion@enterprisedb.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Antonin Houska <ah@cybertec.at>
Discussion: https://postgr.es/m/CAOYmi+kJqzo6XsR9TEhvVfeVNQ-TyFM5LATypm9yoQVYk=4Wrw@mail.gmail.com
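
For illustration, a standard nonblocking connection loop; because the altsock
transparently stands in behind PQsocket() during asynchronous authentication,
a loop like this keeps working unchanged:

    #include <sys/select.h>
    #include <libpq-fe.h>

    static int
    wait_for_connection(PGconn *conn)
    {
        PostgresPollingStatusType st = PGRES_POLLING_WRITING;

        while (st != PGRES_POLLING_OK && st != PGRES_POLLING_FAILED)
        {
            int     sock = PQsocket(conn);
            fd_set  rfds, wfds;

            if (sock < 0)
                return -1;
            FD_ZERO(&rfds);
            FD_ZERO(&wfds);
            if (st == PGRES_POLLING_READING)
                FD_SET(sock, &rfds);
            else
                FD_SET(sock, &wfds);
            if (select(sock + 1, &rfds, &wfds, NULL, NULL) < 0)
                return -1;
            st = PQconnectPoll(conn);
        }
        return (st == PGRES_POLLING_OK) ? 0 : -1;
    }

Call PQconnectStart() first, then drive this loop until it settles.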
The OpenSSL project stopped using the eay names back in 2016
on platforms other than Microsoft Windows, and version 1.1.0
removed the names from Windows as well. Since we now require
OpenSSL 1.1.1 we can remove support for using the eay names
from our tree as well.
Author: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/3C445F8E-D43E-4970-9CD9-A54882197714@yesql.se
Discussion: https://postgr.es/m/CAHrt6656W9OnFomQTHBGYDcM5CKZ7hcgzFt8L+N0ezBZfcN3zA@mail.gmail.com
Most GUCs that accept a special value to disable the feature
mention it in their GUC description. This commit adds that
information to autovacuum_vacuum_max_threshold's description.
Oversight in commit 306dc520b9.
This test was disabled in 9f00edc22888 due to an unstable expected
output, where it was possible for the wait step to report its result
after the detach step was done. The expected output was ordered so
that the detach would always report last.
Isolation test permutations have the option to use markers to control
the ordering for cases like this one, as documented in
src/test/isolation/README. The permutation is enabled once again, this
time with a marker added so that the detach step reports only once the
wait step has finished, ensuring a correct output ordering.
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/Z6MBZTX5EqQ6A8Zc@paquier.xyz
One way autovacuum chooses tables to vacuum is by comparing the
number of updated or deleted tuples with a value calculated using
autovacuum_vacuum_threshold and autovacuum_vacuum_scale_factor.
The threshold specifies the base value for comparison, and the
scale factor specifies the fraction of the table size to add to it.
This strategy ensures that smaller tables are vacuumed after fewer
updates/deletes than larger tables, which is reasonable in many
cases but can result in infrequent vacuums on very large tables.
This is undesirable for a couple of reasons, such as very large
tables incurring a huge amount of bloat between vacuums.
This new parameter provides a way to set a limit on the value
calculated with autovacuum_vacuum_threshold and
autovacuum_vacuum_scale_factor so that very large tables are
vacuumed more frequently. By default, it is set to 100,000,000
tuples, but it can be disabled by setting it to -1. It can also be
adjusted for individual tables by changing storage parameters.
Author: Nathan Bossart <nathandbossart@gmail.com>
Co-authored-by: Frédéric Yhuel <frederic.yhuel@dalibo.com>
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: Michael Banck <mbanck@gmx.net>
Reviewed-by: Joe Conway <mail@joeconway.com>
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: wenhui qiu <qiuwenhuifx@gmail.com>
Reviewed-by: Vinícius Abrahão <vinnix.bsd@gmail.com>
Reviewed-by: Robert Treat <rob@xzilla.net>
Reviewed-by: Alena Rybakina <a.rybakina@postgrespro.ru>
Discussion: https://postgr.es/m/956435f8-3b2f-47a6-8756-8c54ded61802%40dalibo.com
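
The computation, sketched in C with illustrative names rather than
autovacuum.c's exact code:

    /* threshold = base + scale_factor * reltuples, clamped by the new cap */
    static float
    vacuum_threshold(float reltuples, float base_thresh,
                     float scale_factor, float max_thresh)
    {
        float   thresh = base_thresh + scale_factor * reltuples;

        if (max_thresh >= 0 && thresh > max_thresh)     /* -1 disables the cap */
            thresh = max_thresh;
        return thresh;
    }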
If exactly one relation type is requested in a command of the \dtisv
family, say "tables", "indexes", etc instead of "relations". This
should cover the majority of actual uses, without creating a huge
number of new translatable strings. The error messages for no
matching relations are adjusted as well.
In passing, invent "pg_log_error_internal()" to be used for frontend
error messages that don't seem to need translation, analogously to
errmsg_internal() in the backend. The implementation is a bit cheesy,
being just a macro to prevent xgettext from recognizing a trigger
keyword. This won't avoid a useless gettext lookup cycle at runtime
--- but surely we don't care about an extra microsecond or two in
what's supposed to be a can't-happen case. I (tgl) also made
"pg_fatal_internal()", though it's not used in this patch.
Author: Greg Sabino Mullane <htamfids@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAKAnmm+7o93fQV-RFkGaN1QnP-0D4d3JTykD+cLueqjDMKdfag@mail.gmail.com
It is possible for the inactive_since value of an invalid replication slot
to be updated multiple times, for example during the release of the slot
or at restart, which is unexpected behavior. This is harmless because
invalid slots are not allowed to be accessed, but it is not prudent to
update invalid slots. We are planning to invalidate slots for other
reasons, such as idle time, and it would look odd for a slot's
inactive_since to display a recent time after the slot has been
invalidated due to idle time. So, this patch ensures that the
inactive_since field of slots is not updated for invalid slots.
In passing, ensure that the same inactive_since time is used for all the
slots at restart while restoring them from the disk.
Author: Nisha Moond <nisha.moond412@gmail.com>
Author: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Reviewed-by: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/CABdArM7QdifQ_MHmMA=Cc4v8+MeckkwKncm2Nn6tX9wSCQ-+iw@mail.gmail.com
The missing dependency was, e.g., visible when doing
ninja clean && ninja meson-test-prereq && meson test --no-rebuild --suite setup --suite libpq
This is a bit more complicated than other related fixes, because until now
libpq's tests depended on 'frontend_code', which includes a dependency on
fe_utils, which in turn depends on libpq. That required
src/interfaces/libpq/test to be entered from the top-level, not from
libpq/meson.build. Because of that the test definitions in libpq/meson.build
could not declare a dependency on the binaries defined in
libpq/test/meson.build.
To fix this, this commit creates frontend_no_fe_utils_code, which allows us to
recurse into libpq/test from within libpq/meson.build.
Apply this to all branches with meson support, as part of an effort to fix
incorrect test dependencies that can lead to test failures.
Discussion: https://postgr.es/m/CAGECzQSvM3iSDmjF+=Kof5an6jN8UbkP_4cKKT9w6GZavmb5yQ@mail.gmail.com
Discussion: https://postgr.es/m/bdba588f-69a9-4f3e-9b95-62d07210a32e@eisentraut.org
Backpatch: 16-, where meson support was added
The missing dependency was, e.g., visible when doing
ninja clean && ninja meson-test-prereq && meson test --no-rebuild --suite setup --suite libpq_pipeline
Apply this to all branches with meson support, as part of an effort to fix
incorrect test dependencies that can lead to test failures.
Discussion: https://postgr.es/m/CAGECzQSvM3iSDmjF+=Kof5an6jN8UbkP_4cKKT9w6GZavmb5yQ@mail.gmail.com
Discussion: https://postgr.es/m/bdba588f-69a9-4f3e-9b95-62d07210a32e@eisentraut.org
Backpatch: 16-, where meson support was added
This is required to ensure correct test dependencies, previously
the test binaries would not necessarily be built.
The missing dependency was, e.g., visible when doing
ninja clean && ninja meson-test-prereq && meson test --no-rebuild --suite setup --suite test_json_parser
Apply this to all branches with meson support, as part of an effort to fix
incorrect test dependencies that can lead to test failures.
Author: Peter Eisentraut <peter@eisentraut.org>
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CAGECzQSvM3iSDmjF+=Kof5an6jN8UbkP_4cKKT9w6GZavmb5yQ@mail.gmail.com
Discussion: https://postgr.es/m/bdba588f-69a9-4f3e-9b95-62d07210a32e@eisentraut.org
Backpatch: 17-, where test_json_parser was added
This is required to ensure correct test dependencies, previously
pg_regress_ecpg would not necessarily be built.
The missing dependency was, e.g., visible when doing
ninja clean && ninja meson-test-prereq && meson test --no-rebuild --suite setup --suite ecpg
Apply this to all branches with meson support, as part of an effort to fix
incorrect test dependencies that can lead to test failures.
Discussion: https://postgr.es/m/CAGECzQSvM3iSDmjF+=Kof5an6jN8UbkP_4cKKT9w6GZavmb5yQ@mail.gmail.com
Discussion: https://postgr.es/m/bdba588f-69a9-4f3e-9b95-62d07210a32e@eisentraut.org
Backpatch: 16-, where meson support was added
Protect against malformed timestamps. Also protect against negative WalSegSz,
as it triggers a division by zero:
((0x100000000UL) / (WalSegSz)) can turn into zero in
XLogFileName(xlogfilename, ControlFile->checkPointCopy.ThisTimeLineID,
segno, WalSegSz);
because if WalSegSz is -1 then by the arithmetic conversion rules in C we get
0x100000000UL / 0xFFFFFFFFFFFFFFFFUL == 0, and that zero is later used as a
divisor.
Author: Ilyasov Ian <ianilyasov@outlook.com>
Author: Anton Voloshin <a.voloshin@postgrespro.ru>
Backpatch-through: 13
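
A minimal demonstration of the degenerate arithmetic, assuming 64-bit math as
in the WAL file-name macros; the guard shown is a sketch, not the patch's
exact validation:

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        int64_t  WalSegSz = -1;     /* malformed control-file value */
        uint64_t segs_per_id = UINT64_C(0x100000000) / (uint64_t) WalSegSz;

        /* prints 0: 0x100000000 / 0xFFFFFFFFFFFFFFFF == 0 */
        printf("segments per xlog id: %llu\n",
               (unsigned long long) segs_per_id);

        /* a later "segno / segs_per_id" would divide by zero, hence the guard */
        if (WalSegSz <= 0 || segs_per_id == 0)
            fprintf(stderr, "invalid WAL segment size\n");
        return 0;
    }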