Commit message

The previous code (from commit 5d2e1cc117b) did not detect end of
string correctly, so it would fail to error out if fewer than the
expected number of fields were present, which could later lead to
a crash when NULL string pointers are accessed.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Reported-by: Ranier Vilela <ranier.vf@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/79692bf9-17d3-41e6-b9c9-fc8c3944222a@eisentraut.org
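To illustrate the failure mode, a minimal standalone C sketch (a hypothetical splitter, not the actual code from 5d2e1cc117b): without the end-of-string check, later field pointers stay NULL and get dereferenced downstream.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical field splitter illustrating the bug class. */
    static int
    split_fields(char *str, char **fields, int nfields)
    {
        for (int i = 0; i < nfields; i++)
        {
            char   *sep;

            fields[i] = str;
            sep = strchr(str, ',');
            if (sep == NULL)
            {
                /* This end-of-string check is what the buggy code lacked. */
                if (i < nfields - 1)
                    return -1;  /* too few fields: report, don't crash later */
                break;
            }
            *sep = '\0';
            str = sep + 1;
        }
        return 0;
    }

    int
    main(void)
    {
        char    buf[] = "alpha,beta";   /* two fields where three are expected */
        char   *fields[3] = {0};

        if (split_fields(buf, fields, 3) < 0)
            fprintf(stderr, "unexpected end of string\n");
        return 0;
    }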
Commit 3bf3ab8c56 initially introduced support for unlogged
materialized views, but this was later disallowed by commit 3223b25ff7.
Additionally, commit d25f519107 added more code for handling
unlogged materialized views. This commit cleans up all unused
code related to them.
If unlogged materialized views had been supported in any official
release, psql would need to retain code to handle them for compatibility
with older servers. However, since they were never included in
an official release, this code is no longer necessary.
Author: Pixian Shi
Reviewed-by: Yugo Nagata, Fujii Masao
Discussion: https://postgr.es/m/CAAccyYKRZ=OvAvgowiSH+OELbStLP=p2Ht=R3CgT=OaNSH5DAA@mail.gmail.com
Reviewed-by: Corey Huinker
Discussion: https://postgr.es/m/901ab7cf01957f92ea8b30b6feeb0eacfb7505fc.camel@j-davis.com
After repartitioning the inner side of a hash join that would have
exceeded the allowed size, we check if all the tuples from a parent
partition moved to one child partition. That is evidence that it
contains duplicate keys and later attempts to repartition will also
fail, so we should give up trying to limit memory (for lack of a better
fallback strategy).
A thinko prevented the check from working correctly in partition 0 (the
one that is partially loaded into memory already). After
repartitioning, we should check for extreme skew if the *parent*
partition's space_exhausted flag was set, not the child partition's.
The consequence was repeated futile repartitioning until per-partition
data exceeded various limits including "ERROR: invalid DSA memory alloc
request size 1811939328", OS allocation failure, or temporary disk space
errors. (We could also do something about some of those symptoms, but
that's material for separate patches.)
This problem only became likely when PostgreSQL 16 introduced support
for Parallel Hash Right/Full Join, allowing NULL keys into the hash
table. Repartitioning always leaves NULL in partition 0, no matter how
many times you do it, because the hash value is all zero bits. That's
unlikely for other hashed values, but they might still have caused
wasted extra effort before giving up.
Back-patch to all supported releases.
Reported-by: Craig Milhiser <craig@milhiser.com>
Reviewed-by: Andrei Lepikhov <lepihov@gmail.com>
Discussion: https://postgr.es/m/CA%2BwnhO1OfgXbmXgC4fv_uu%3DOxcDQuHvfoQ4k0DFeB0Qqd-X-rQ%40mail.gmail.com
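A condensed sketch of the corrected check; the struct and field names below are illustrative, not the real parallel hash join code:

    #include <stdbool.h>

    /* Illustrative sketch of the post-repartitioning skew check. */
    typedef struct Partition
    {
        bool    space_exhausted;
        long    old_ntuples;    /* tuple count before repartitioning */
        long    ntuples;        /* tuple count after repartitioning */
    } Partition;

    static bool
    should_stop_growing(const Partition *parent, const Partition *child)
    {
        /*
         * The fix: consult the *parent* partition's space_exhausted flag,
         * not the child's.  Partition 0 is partially loaded into memory
         * already, so the child's flag is the wrong signal there.  If every
         * tuple landed in one child (e.g. NULL keys, which always hash to
         * partition 0), repartitioning again is futile: give up on the
         * memory limit.
         */
        return parent->space_exhausted && child->ntuples == parent->old_ntuples;
    }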
The result is already of the correct type, so these casts don't do
anything.
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/637eeea8-5663-460b-a114-39572c0f6c6e%40eisentraut.org
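The pattern being removed, sketched with an illustrative type:

    struct List { int length; };

    extern struct List *make_list(void);

    void
    example(void)
    {
        struct List *a = (struct List *) make_list();   /* before: useless cast */
        struct List *b = make_list();                   /* after: same type */

        (void) a;
        (void) b;
    }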
Instead of using Node *, we can use an incomplete struct. That way,
everything has the correct type and fewer casts are required. This
technique is already used elsewhere in node type definitions.
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/637eeea8-5663-460b-a114-39572c0f6c6e%40eisentraut.org
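A generic sketch of the technique, with illustrative names (the commit applies it to node type definitions):

    /* Forward-declare an incomplete struct; no full header is needed,
     * since only a pointer to it is stored. */
    struct ExprState;

    typedef struct ExampleNode
    {
        /* Before: "void *cached" (or Node *), forcing casts at use sites. */
        struct ExprState *cached;   /* after: correct type, no casts */
    } ExampleNode;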
adf97c156 made it so ExprStates could support hashing and changed Hash
Join to use that instead of manually extracting Datums from tuples and
hashing them one column at a time.
When hashing multiple columns or expressions, the code added in that
commit stored the intermediate hash value in the ExprState's resvalue
field. That was a mistake: steps injected into the ExprState between
the hashing steps may look at or overwrite the stored intermediate
hash value. EEOP_PARAM_SET is an example of such a step.
Here we fix this by adding a new dedicated field for storing
intermediate hash values and adjusting the code so that all hashing
steps apart from the final one store their result in the intermediate
field.
In passing, rename a variable so that it's better aligned with the
surrounding code and so that a few lines stay within the 80-char margin.
Reported-by: Andres Freund
Reviewed-by: Alena Rybakina <a.rybakina@postgrespro.ru>
Discussion: https://postgr.es/m/CAApHDvqo9eenEFXND5zZ9JxO_k4eTA4jKMGxSyjdTrsmYvnmZw@mail.gmail.com
This commit adds missing checks for COPY FORCE_NOT_NULL and FORCE_NULL
when applied to all columns via "*". These options now correctly
require CSV mode and are disallowed in COPY TO, making their behavior
consistent with FORCE_QUOTE.
Some regression tests are added to verify the correct behavior for the
all-columns case, including FORCE_QUOTE, which was previously untested.
Backpatch down to 17, where support for the all-column grammar with
FORCE_NOT_NULL and FORCE_NULL was added.
Author: Joel Jacobson
Reviewed-by: Zhang Mingli
Discussion: https://postgr.es/m/65030d1d-5f90-4fa4-92eb-f5f50389858e@app.fastmail.com
Backpatch-through: 17
Oversight in commit 79fa7b3b.
Commit 2dc1deaea turns out to have been still a brick shy of a load,
because CALL statements executing within a plpgsql exception block
could still pass the wrong snapshot to stable functions within the
CALL's argument list. That happened because standard_ProcessUtility
forces isAtomicContext to true if IsTransactionBlock is true, which
it always will be inside a subtransaction. Then ExecuteCallStmt
would think it does not need to push a new snapshot --- but
_SPI_execute_plan didn't do so either, since it thought it was in
nonatomic mode.
The best fix for this seems to be for _SPI_execute_plan to operate
in atomic execution mode if IsSubTransaction() is true, even when the
SPI context as a whole is non-atomic. This makes _SPI_execute_plan
have the same rules about when non-atomic execution is allowed as
_SPI_commit/_SPI_rollback have about when COMMIT/ROLLBACK are allowed,
which seems appropriately symmetric. (If anyone ever tries to allow
COMMIT/ROLLBACK inside a subtransaction, this would all need to be
rethought ... but I'm unconvinced that such a thing could be logically
consistent at all.)
For further consistency, also check IsSubTransaction() in
SPI_inside_nonatomic_context. That does not matter for its
one present-day caller StartTransaction, which can't be reached
inside a subtransaction. But if any other callers ever arise,
they'd presumably want this definition.
Per bug #18656 from Alexander Alehin. Back-patch to all
supported branches, like previous fixes in this area.
Discussion: https://postgr.es/m/18656-cade1780866ef66c@postgresql.org
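In outline, the new rule amounts to something like this; IsSubTransaction() is the real xact.h function, the rest is a paraphrase rather than the literal diff:

    #include <stdbool.h>

    extern bool IsSubTransaction(void);     /* from access/xact.h */

    /* Paraphrase of the new _SPI_execute_plan rule, not the literal code. */
    static bool
    execution_is_atomic(bool caller_allows_nonatomic, bool spi_is_nonatomic)
    {
        bool    allow_nonatomic = caller_allows_nonatomic && spi_is_nonatomic;

        /*
         * COMMIT/ROLLBACK are disallowed inside a subtransaction, so
         * execution there should follow atomic rules too, ensuring a fresh
         * snapshot is pushed for CALL argument evaluation.
         */
        if (IsSubTransaction())
            allow_nonatomic = false;

        return !allow_nonatomic;
    }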
Reported-by: Alexander Korotkov
Discussion: https://postgr.es/m/CAPpHfduAiGSsvUc614Z-JOnyQffcMeJncWMF2HnUL8wFy4fuWA@mail.gmail.com
Commit a4ccc1cef introduced the Generation Context and modified the
logical decoding process to use a Generation Context with a fixed
block size of 8MB for storing tuple data decoded during logical
decoding (i.e., rb->tup_context). Several reports have indicated that
the logical decoding process can be terminated due to
out-of-memory (OOM) situations caused by excessive memory usage in
rb->tup_context.
This issue can occur when decoding a workload involving several
concurrent transactions, including a long-running transaction that
modifies tuples. By design, the Generation Context does not free a
memory block until all chunks within that block are
released. Consequently, if tuples modified by the long-running
transaction are stored across multiple memory blocks, these blocks
remain allocated until the long-running transaction completes, leading
to substantial memory fragmentation. The memory usage during logical
decoding, tracked by rb->size, does not account for memory
fragmentation, resulting in potentially much higher memory consumption
than the value of the logical_decoding_work_mem parameter.
Various improvement strategies were discussed in the relevant
thread. This change reduces the block size of the Generation Context
used in rb->tup_context from 8MB to 8kB. This modification
significantly decreases the likelihood of substantial memory
fragmentation occurring and is relatively straightforward to
backport. Performance testing across multiple platforms has confirmed
that this change does not introduce any performance degradation that
would affect real workloads.
Backport to all supported branches.
Reported-by: Alex Richman, Michael Guissine, Avi Weinberg
Reviewed-by: Amit Kapila, Fujii Masao, David Rowley
Tested-by: Hayato Kuroda, Shlok Kyal
Discussion: https://postgr.es/m/CAD21AoBTY1LATZUmvSXEssvq07qDZufV4AF-OHh9VD2pC0VY2A%40mail.gmail.com
Backpatch-through: 12
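The gist of the change, sketched against the current five-argument GenerationContextCreate() signature; the exact constants used are an assumption (SLAB_LARGE_BLOCK_SIZE is 8MB and SLAB_DEFAULT_BLOCK_SIZE is 8kB in memutils.h):

    /* In ReorderBufferAllocate(), roughly:
     *
     * before (8MB blocks; one long-lived chunk pins a whole 8MB block):
     *   rb->tup_context = GenerationContextCreate(rb->context, "Tuples",
     *                                             SLAB_LARGE_BLOCK_SIZE,
     *                                             SLAB_LARGE_BLOCK_SIZE,
     *                                             SLAB_LARGE_BLOCK_SIZE);
     *
     * after (8kB blocks; fragmentation pins at most a few kB per chunk):
     */
    rb->tup_context = GenerationContextCreate(rb->context, "Tuples",
                                              SLAB_DEFAULT_BLOCK_SIZE,
                                              SLAB_DEFAULT_BLOCK_SIZE,
                                              SLAB_DEFAULT_BLOCK_SIZE);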
Commit 5bf748b8 taught nbtree ScalarArrayOp index scans to decide when
and how to start the next primitive index scan based on physical index
characteristics. This included rules for deciding whether to start a
new primitive index scan (or whether to move onto the right sibling leaf
page instead) that specifically consider truncated lower-order columns
(-inf columns) from leaf page high keys.
These omitted columns were treated as satisfying the scan's required
scan keys, though only for scan keys marked required in the current scan
direction (forward). Scan keys that didn't get this behavior (those
marked required in the backwards direction only) usually didn't give the
scan reasonable cause to reposition itself to a later leaf page (via
another descent of the index in _bt_first), but _bt_advance_array_keys
would nevertheless always give up by forcing another call to _bt_first.
_bt_advance_array_keys was unwilling to allow the scan to continue onto
the next leaf page, to reconsider whether we really should start another
primitive scan based on the details of the sibling page's tuples. This
didn't match its behavior with similar cases involving keys required in
the current scan direction (forward), which seems unprincipled. It led
to an excessive number of primitive scans/index descents for queries
with a higher-order = array scan key (with dense, contiguous values)
mixed with a lower-order required > or >= scan key.
Bring > and >= strategy scan keys in line with other required scan key
types: treat truncated -inf scan keys as having satisfied scan keys
required in either scan direction (forwards and backwards alike) during
array advancement. That way affected scans can continue to the right
sibling leaf page. Advancement must now schedule an explicit recheck of
the right sibling page's high key in cases involving > or >= scan keys.
The recheck gives the scan a way to back out and start another primitive
index scan (we can't just rely on _bt_checkkeys with > or >= scan keys).
This work can be considered a standalone optimization on top of the
work from commit 5bf748b8. But it was written in preparation for an
upcoming patch that will add skip scan to nbtree. In practice scans
that use "skip arrays" will tend to be much more sensitive to any
implementation deficiencies in this area.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Tomas Vondra <tomas@vondra.me>
Discussion: https://postgr.es/m/CAH2-Wz=9A_UtM7HzUThSkQ+BcrQsQZuNhWOvQWK06PRkEp=SKQ@mail.gmail.com
An oversight of 3a8a1f3254b.
Reported-by: Tender Wang <tndrwang@gmail.com>
Author: Tender Wang <tndrwang@gmail.com>
Backpatch-through: 16
Two near-identical copies of clause_sides_match_join() existed in
joinpath.c and analyzejoins.c. Deduplicate this by moving the function
into restrictinfo.h.
It isn't quite clear that keeping the inline property of this function
is worthwhile, but this commit is just an exercise in code
deduplication. More effort would be required to determine if the inline
property is worth keeping.
Author: James Hunter <james.hunter.pg@gmail.com>
Discussion: https://postgr.es/m/CAJVSvF7Nm_9kgMLOch4c-5fbh3MYg%3D9BdnDx3Dv7Fcb64zr64Q%40mail.gmail.com
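For reference, the function reads approximately like this (reproduced from memory, so details may differ; the commit turns it into a static inline in restrictinfo.h):

    static inline bool
    clause_sides_match_join(RestrictInfo *rinfo, Relids outerrelids,
                            Relids innerrelids)
    {
        if (bms_is_subset(rinfo->left_relids, outerrelids) &&
            bms_is_subset(rinfo->right_relids, innerrelids))
        {
            rinfo->outer_is_left = true;    /* lefthand side is outer */
            return true;
        }
        else if (bms_is_subset(rinfo->left_relids, innerrelids) &&
                 bms_is_subset(rinfo->right_relids, outerrelids))
        {
            rinfo->outer_is_left = false;   /* righthand side is outer */
            return true;
        }
        return false;           /* no good for these input relations */
    }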
This module provides SQL functions that allow inspecting logical
decoding components.
It currently allows inspecting the contents of serialized logical
snapshots of a running database cluster, which is useful for debugging
or educational purposes.
Author: Bertrand Drouvot
Reviewed-by: Amit Kapila, Shveta Malik, Peter Smith, Peter Eisentraut
Reviewed-by: David G. Johnston
Discussion: https://postgr.es/m/ZscuZ92uGh3wm4tW%40ip-10-97-1-34.eu-west-3.compute.internal
This commit moves the definitions of the SnapBuild and SnapBuildOnDisk
structs, related to logical snapshots, to the snapshot_internal.h
file. This change allows external tools, such as
pg_logicalinspect (with an upcoming patch), to access and utilize the
contents of logical snapshots.
Author: Bertrand Drouvot
Reviewed-by: Amit Kapila, Shveta Malik, Peter Smith
Discussion: https://postgr.es/m/ZscuZ92uGh3wm4tW%40ip-10-97-1-34.eu-west-3.compute.internal
Move implementation of pg_locale_t code for libc collations into
pg_locale_libc.c. Other locale-related code, such as
pg_perm_setlocale(), remains in pg_locale.c for now.
Discussion: https://postgr.es/m/flat/2830211e1b6e6a2e26d845780b03e125281ea17b.camel@j-davis.com
Discussion: https://postgr.es/m/flat/2830211e1b6e6a2e26d845780b03e125281ea17b.camel@j-davis.com
Commit d746021de1 introduced construct_array_builtin() for built-in
data types, but missed some replacements involving FLOAT8OID.
Author: Bertrand Drouvot
Reviewed-by: Peter Eisentraut
Discussion: https://postgr.es/m/CAD21AoCERkwmttY44dqUw%3Dm_9QCctu7W%2Bp6B7w_VqxRJA1Qq_Q%40mail.gmail.com
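The replacement pattern, as a sketch (PostgreSQL headers assumed): construct_array_builtin() supplies the element length, by-value flag, and alignment for built-in types itself.

    static ArrayType *
    make_float8_array(void)
    {
        Datum   values[2];

        values[0] = Float8GetDatum(1.0);
        values[1] = Float8GetDatum(2.0);

        /* Before:
         *   return construct_array(values, 2, FLOAT8OID, sizeof(float8),
         *                          FLOAT8PASSBYVAL, TYPALIGN_DOUBLE);
         * After: */
        return construct_array_builtin(values, 2, FLOAT8OID);
    }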
The MergeJoin struct was tracking "mergeStrategies", which were an
array of btree strategy numbers, purely for the purpose of comparing
it later against btree strategies to determine if the scan direction
was forward or reverse. Change that: instead, track "mergeReversals",
an array of bool, which indicates the same thing without the
unfortunate assumption that a strategy number refers specifically to a
btree strategy.
Author: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/E72EAA49-354D-4C2E-8EB9-255197F55330@enterprisedb.com
Functions make_pathkey_from_sortop() and transformWindowDefinitions(),
which receive a SortGroupClause, were determining the sort order
(ascending vs. descending) by comparing that structure's operator
strategy to BTLessStrategyNumber. They could just as easily have
gotten it from the SortGroupClause itself if it had such a field, so
add one. This reduces the number of places that hardcode the assumption
that the strategy refers specifically to a btree strategy, rather than
some other index AM's operators.
Author: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/E72EAA49-354D-4C2E-8EB9-255197F55330@enterprisedb.com
Reported-by: Noriyoshi Shinoda
Discussion: https://postgr.es/m/DM4PR84MB17345E2DFF28A5557B7CBC3CEE7A2@DM4PR84MB1734.NAMPRD84.PROD.OUTLOOK.COM
max_parallel_maintenance_workers was introduced in 9da0cc35284b with a
hardcoded upper bound of 1024 rather than the MAX_PARALLEL_WORKER_LIMIT
constant. max_parallel_workers and max_parallel_workers_per_gather have
already used MAX_PARALLEL_WORKER_LIMIT (1024) as their upper bound
since 6599c9ac3340.
Author: Matthias van de Meent
Reviewed-by: Zhang Mingli
Discussion: https://postgr.es/m/CAEze2WiCiJD+8Wig_wGPyn4vgdPjbnYXy2Rw+9KYi6izTMuP=w@mail.gmail.com
find_computable_ec_member() had the wrong mental model of what
its primary caller prepare_sort_from_pathkeys() would do with
the selected EquivalenceClass member expression. We will not
compute the EC expression in a plan node atop the one returning
the passed-in targetlist; rather, the EC expression will be
computed as an additional column of that targetlist. So any
Var or quasi-Var used in the given tlist is also available to the
EC expression. In simple cases this makes no difference because
the given tlist is just a list of Vars or quasi-Vars --- but if
we are considering an appendrel member produced by flattening
a UNION ALL, the tlist may contain expressions, resulting in
failure to match and a "could not find pathkey item to sort"
error.
To fix, we can flatten both the tlist and the EC members with
pull_var_clause(), and then just check for subset-ness, so
that the code is actually shorter than before.
While this bug is quite old, the present patch only works back to
v13. We could possibly make it work in v12 by back-patching parts
of 375398244. On the whole though I don't like the risk/reward
ratio of that idea. v12's final release is next month, meaning
there would be no chance to correct matters if the patch causes a
regression. Since this failure has escaped notice for 14 years,
it's likely nobody will hit it in the field with v12.
Per bug #18652 from Alexander Lakhin.
Andrei Lepikhov and Tom Lane
Discussion: https://postgr.es/m/18652-deaa782ebcca85d1@postgresql.org
A missed check for the builtin collation provider could result in
falling through to call isalpha().
This does not appear to have practical consequences because it only
happens for characters in the ASCII range. Regardless, the builtin
provider should not be calling libc functions, so backpatch.
Discussion: https://postgr.es/m/1bd5a0a5192f82c22ee7527e825b18ab0028b2c7.camel@j-davis.com
Backpatch-through: 17
These functions are used to tweak statistics on any relation, provided
that the user has MAINTAIN privilege on the relation, or is the database
owner.
Bump catalog version.
Author: Corey Huinker
Discussion: https://postgr.es/m/CADkLM=eErgzn7ECDpwFcptJKOk9SxZEk5Pot4d94eVTZsvj3gw@mail.gmail.com
PostgreSQL has for a long time mixed two BIO implementations, which can
lead to subtle bugs and inconsistencies. This cleans up our BIO by
setting up only the methods we need. This patch does not introduce any
functionality changes.
The following methods are no longer defined due to not being needed:
- gets: Not used by libssl
- puts: Not used by libssl
- create: Sets up state not used by libpq
- destroy: Not used since libpq uses BIO_NOCLOSE; if it were used, it
would close the socket from underneath libpq
- callback_ctrl: Not implemented by sockets
The following methods are defined for our BIO:
- read: Used for reading arbitrary length data from the BIO. No change
in functionality from the previous implementation.
- write: Used for writing arbitrary length data to the BIO. No change
in functionality from the previous implementation.
- ctrl: Used for processing ctrl messages in the BIO (similar to ioctl).
The only ctrl message which matters is BIO_CTRL_FLUSH, used for
writing out buffered data (or signalling EOF, i.e. that no more
data will be written). BIO_CTRL_FLUSH is mandatory to implement and
is implemented as a no-op since there is no intermediate buffer
to flush.
BIO_CTRL_EOF is the out-of-band method for signalling EOF to
read_ex-based BIOs. Our BIO is not read_ex based, but someone
could accidentally call BIO_CTRL_EOF on us, so it is implemented
mainly for completeness' sake.
As the implementation is no longer related to BIO_s_socket or calling
SSL_set_fd, methods have been renamed to reference the PGconn and Port
types instead.
This also reverts back to using BIO_set_data, with our fallback, as a small
optimization, since BIO_set_app_data requires the ex_data mechanism in OpenSSL.
Author: David Benjamin <davidben@google.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CAF8qwaCZ97AZWXtg_y359SpOHe+HdJ+p0poLCpJYSUxL-8Eo8A@mail.gmail.com
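In OpenSSL 1.1+ terms, a BIO assembled this way looks roughly as follows; this is a generic sketch over a plain socket fd, not the committed PostgreSQL code:

    #include <errno.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <openssl/bio.h>

    /* Generic sketch over a plain socket fd stored via BIO_set_data(). */

    static int
    my_bio_read(BIO *h, char *buf, int size)
    {
        int     fd = (int) (intptr_t) BIO_get_data(h);
        int     n = (int) read(fd, buf, (size_t) size);

        BIO_clear_retry_flags(h);
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
            BIO_set_retry_read(h);
        return n;
    }

    static int
    my_bio_write(BIO *h, const char *buf, int size)
    {
        int     fd = (int) (intptr_t) BIO_get_data(h);
        int     n = (int) write(fd, buf, (size_t) size);

        BIO_clear_retry_flags(h);
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
            BIO_set_retry_write(h);
        return n;
    }

    static long
    my_bio_ctrl(BIO *h, int cmd, long num, void *ptr)
    {
        (void) h;
        (void) num;
        (void) ptr;
        switch (cmd)
        {
            case BIO_CTRL_FLUSH:
                return 1;   /* mandatory, but a no-op: nothing is buffered */
            case BIO_CTRL_EOF:
                return 0;   /* not read_ex based; here for completeness */
            default:
                return 0;
        }
    }

    static BIO_METHOD *
    my_bio_method(void)
    {
        /* gets, puts, create, destroy and callback_ctrl are left unset */
        BIO_METHOD *m = BIO_meth_new(BIO_TYPE_SOURCE_SINK, "example socket BIO");

        BIO_meth_set_read(m, my_bio_read);
        BIO_meth_set_write(m, my_bio_write);
        BIO_meth_set_ctrl(m, my_bio_ctrl);
        return m;
    }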
This function returns the name, size, and last modification time of
each regular file in pg_wal/summaries. This allows administrators
to grant privileges to view the contents of this directory without
granting privileges on pg_ls_dir(), which allows listing the
contents of many other directories. This commit also gives the
pg_monitor predefined role EXECUTE privileges on the new
pg_ls_summariesdir() function.
Bumps catversion.
Author: Yushi Ogiwara
Reviewed-by: Michael Paquier, Fujii Masao
Discussion: https://postgr.es/m/a0a3af15a9b9daa107739eb45aa9a9bc%40oss.nttdata.com
Commit 062a84442424 introduced use of deconstruct_array when
deconstruct_array_builtin can be used instead. Do that to save some
code.
Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/Zwi5g2GzlUX1NqxR@ip-10-97-1-34.eu-west-3.compute.internal
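The pattern being replaced, sketched with a text[] example (PostgreSQL headers assumed):

    static int
    count_elements(ArrayType *array)
    {
        Datum  *elems;
        bool   *nulls;
        int     nelems;

        /* Before:
         *   deconstruct_array(array, TEXTOID, -1, false, TYPALIGN_INT,
         *                     &elems, &nulls, &nelems);
         * After: the builtin variant supplies typlen/byval/align itself. */
        deconstruct_array_builtin(array, TEXTOID, &elems, &nulls, &nelems);

        return nelems;
    }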
c01743aa4 added EXPLAIN output to display the plan node's disabled_nodes
count whenever that count is above 0. Seemingly, there weren't many
people who liked that output as each parent of a disabled node would
also have a "Disabled Nodes" output due to the way disabled_nodes is
accumulated towards the root plan node. It was often hard and sometimes
impossible to figure out which nodes were disabled from looking at
EXPLAIN. You might think it would be possible to manually add up the
numbers from the "Disabled Nodes" output of a given node's children to
figure out if that node has a higher disabled_nodes count than its
children, but that wouldn't have worked for Append and Merge Append nodes
if some disabled child nodes were run-time pruned during init plan. Those
children are not displayed in EXPLAIN.
Here we attempt to improve this output by showing "Disabled: true"
only for the nodes which are explicitly disabled themselves. That
seems to be the output desired by most of the people who voiced
their opinion. This is done by summing up the disabled_nodes of the
given node's children and checking if that number is less than the
disabled_nodes of the current node.
This commit also fixes a bug in make_sort() which was neglecting to set
the Sort's disabled_nodes field. This should have copied what was done
in cost_sort(), but it hadn't been updated. With the new output, the
choice to not maintain that field properly was clearly wrong as the
disabled-ness of the node was attributed to the Sort's parent instead.
Reviewed-by: Laurenz Albe, Alena Rybakina
Discussion: https://postgr.es/m/9e4ad616bebb103ec2084bf6f724cfc739e7fabb.camel@cybertec.at
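The display rule reduces to a comparison like this (names approximate):

    /*
     * A node is shown as "Disabled: true" only when its own disabled_nodes
     * count exceeds the sum of its children's counts, i.e. the node itself
     * is disabled rather than merely inheriting a descendant's count.
     */
    static int
    node_is_itself_disabled(int disabled_nodes,
                            const int *child_counts, int nchildren)
    {
        int     sum = 0;

        for (int i = 0; i < nchildren; i++)
            sum += child_counts[i];

        return disabled_nodes > sum;
    }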
Commit 90189eefc1e1 narrowed pg_attribute.attinhcount and
pg_constraint.coninhcount from 32 to 16 bits, but kept other related
structs with 32-bit wide fields: ColumnDef and CookedConstraint contain
an int 'inhcount' field which is itself checked for overflow on
increments, but there's no check that the values aren't above INT16_MAX
before assigning to the catalog columns. This means that a creative
user can get an inconsistent table definition and override some
protections.
Fix it by changing those other structs to also use int16.
Also, modernize style by using pg_add_s16_overflow for overflow testing
instead of checking for negative values.
We also have Constraint.inhcount, which is here removed completely.
This was added by commit b0e96f311985 and not removed by its revert at
6f8bb7c1e961. It is not needed by the upcoming not-null constraints
patch.
This is mostly academic, so we agreed not to backpatch to avoid ABI
problems.
Bump catversion because of the changes to parse nodes.
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Co-authored-by: 何建 (jian he) <jian.universality@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/202410081611.up4iyofb5ie7@alvherre.pgsql
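The overflow-checked style the commit switches to, re-sketched standalone; pg_add_s16_overflow itself lives in src/include/common/int.h with this contract:

    #include <stdbool.h>
    #include <stdint.h>

    /* Standalone re-sketch of the pg_add_s16_overflow contract. */
    static bool
    add_s16_overflow(int16_t a, int16_t b, int16_t *result)
    {
        int32_t sum = (int32_t) a + (int32_t) b;

        if (sum < INT16_MIN || sum > INT16_MAX)
            return true;                /* overflow */
        *result = (int16_t) sum;
        return false;
    }

    /* Incrementing an inhcount now fails cleanly at INT16_MAX instead of
     * only being checked against negative values. */
    static bool
    bump_inhcount(int16_t *inhcount)
    {
        return !add_s16_overflow(*inhcount, 1, inhcount);   /* true = ok */
    }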
Commit 9391f7152 added a "PlannerInfo *root" parameter to
estimate_array_length, but failed to consider the possibility that
NULL would be passed for that, leading to a null pointer dereference.
We could rectify the particular case shown in the bug report by fixing
simplify_function/inline_function to pass through the root pointer.
However, as long as eval_const_expressions is documented to accept
NULL for root, similar hazards would remain. For now, let's just do
the narrow fix of hardening estimate_array_length to not crash.
Its behavior with NULL root will be the same as it was before
9391f7152, so this is not too awful.
Per report from Fredrik Widlert (via Paul Ramsey). Back-patch to v17
where 9391f7152 came in.
Discussion: https://postgr.es/m/518339E7-173E-45EC-A0FF-9A4A62AA4F40@cleverelephant.ca
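The shape of the hardening, simplified; the fallback of 10 matches estimate_array_length's historical default guess:

    struct PlannerInfo;             /* incomplete types for the sketch */
    struct Node;

    static int
    estimate_array_length_sketch(struct PlannerInfo *root,
                                 struct Node *arrayexpr)
    {
        /*
         * Consult planner statistics only when a root is supplied;
         * eval_const_expressions() is documented to accept root == NULL.
         */
        if (root != NULL && arrayexpr != NULL)
        {
            /* ... statistics-based estimate using root (elided) ... */
        }

        return 10;              /* default guess, as before 9391f7152 */
    }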
The name extracted from the record of the GUC tables is now used in
more internal places of guc.c. This simplifies
parse_and_validate_value(), where the "name" argument was only used in
elog messages even though it was required to match the name from the
GUC record.
pg_parameter_aclcheck() now passes the name of the GUC from its record
in two places rather than the caller's argument. The value given to
this function goes through convert_GUC_name_for_parameter_acl(), which
does a simple ASCII downcasing.
A few GUCs in core mix character casing; a test is added for one of
these code paths with "IntervalStyle".
Author: Peter Smith, Michael Paquier
Discussion: https://postgr.es/m/ZwNh4vkc2NHJHnND@paquier.xyz
In some cases, we may want to transfer a HAVING clause into WHERE in
hopes of eliminating tuples before aggregation instead of after.
Previously, we couldn't do this if there were any nonempty grouping
sets, because we didn't have a way to tell if the HAVING clause
referenced any columns that were nullable by the grouping sets, and
moving such a clause into WHERE could potentially change the results.
Now, with expressions marked nullable by grouping sets with the RT
index of the RTE_GROUP RTE, it is much easier to identify those
clauses that reference any nullable-by-grouping-sets columns: we just
need to check if the RT index of the RTE_GROUP RTE is present in the
clause. For other HAVING clauses, they can be safely pushed down.
Author: Richard Guo
Discussion: https://postgr.es/m/CAMbWs4-NpzPgtKU=hgnvyn+J-GanxQCjrUi7piNzZ=upiCV=2Q@mail.gmail.com
For a mergejoin, if the given outer path or inner path is not already
well enough ordered, we need to do an explicit sort. Currently, we
only consider explicit full sort and do not account for incremental
sort.
In this patch, for the outer path of a mergejoin, we choose to use
explicit incremental sort if it is enabled and there are presorted
keys. For the inner path, though, we cannot use incremental sort
because it does not support mark/restore at present.
The rationale is based on the assumption that incremental sort is
always faster than full sort when there are presorted keys, a premise
that has been applied in various parts of the code. In addition, the
current cost model tends to favor incremental sort as being cheaper
than full sort in the presence of presorted keys, making it reasonable
not to consider full sort in such cases.
One might wonder what happens if a mergejoin with an incremental sort
on its outer path is itself selected as the inner path of another
mergejoin.
However, this should not be a problem, because mergejoin itself does
not support mark/restore either, and we will add a Material node on
top of it anyway in this case (see final_cost_mergejoin).
There is one ensuing plan change in the regression tests, and we have
to modify that test case to ensure that it continues to test what it
is intended to.
No backpatch as this could result in plan changes.
Author: Richard Guo
Reviewed-by: David Rowley, Tomas Vondra
Discussion: https://postgr.es/m/CAMbWs49x425QrX7h=Ux05WEnt8GS757H-jOP3_xsX5t1FoUsZw@mail.gmail.com
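The decision, paraphrased as a sketch with approximate names:

    #include <stdbool.h>

    /* Paraphrased decision for a mergejoin's outer path. */
    typedef enum
    {
        SORT_NONE,              /* path already well enough ordered */
        SORT_FULL,              /* explicit full sort */
        SORT_INCREMENTAL        /* explicit incremental sort */
    } SortChoice;

    static SortChoice
    choose_outer_sort(bool already_ordered, bool enable_incremental_sort,
                      int n_presorted_keys)
    {
        if (already_ordered)
            return SORT_NONE;

        /*
         * Incremental sort is assumed cheaper than a full sort whenever
         * some leading pathkeys are presorted.  The inner path cannot use
         * this: incremental sort does not support mark/restore.
         */
        if (enable_incremental_sort && n_presorted_keys > 0)
            return SORT_INCREMENTAL;

        return SORT_FULL;
    }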
These fields can be set by executor nodes to record how many parallel
workers were planned to be launched and how many of them have actually
been launched within the number initially planned. This data gives an
approximation of the parallel worker draught a system is facing, making
it easier to tune related configuration parameters.
These fields will be used by some follow-up patches to populate other
parts of the system with their data.
Author: Guillaume Lelarge, Benoit Lobréau
Discussion: https://postgr.es/m/783bc7f7-659a-42fa-99dd-ee0565644e25@dalibo.com
Discussion: https://postgr.es/m/CAECtzeWtTGOK0UgKXdDGpfTVSa5bd_VbUt6K6xn8P7X+_dZqKw@mail.gmail.com
Similar to a0f1fce80, although we chose to duplicate logic
rather than invoke byteacmp, primarily to avoid repeat detoasting.
Marat Buharov, Aleksander Alekseev
Discussion: https://postgr.es/m/CAPCEVGXiASjodos4P8pgyV7ixfVn-ZgG9YyiRZRbVqbGmfuDyg@mail.gmail.com
AIO will need a resource owner to do IO. Right now we create a resowner
on-demand during basebackup, and we could do the same for AIO. But it seems
easier to just always create an aux process resowner.
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/1f6b50a7-38ef-4d87-8246-786d39f46ab9@iki.fi
With real AIO it doesn't make sense to cross segment boundaries with one
IO. Add smgrmaxcombine() to allow upper layers to query which buffers can be
merged.
We could continue to cross segment boundaries when not using AIO, but it
doesn't really make sense, because md.c will never be able to perform the
read across the segment boundary in one system call. That would mean
marking more buffers as undergoing IO than really makes sense: if another
backend wants to read the same blocks, it'll be blocked longer than
necessary. So it seems better to just never cross the boundary.
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/1f6b50a7-38ef-4d87-8246-786d39f46ab9@iki.fi
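Conceptually, for md.c-style segmented storage the answer is just the distance to the end of the current segment; a sketch with an illustrative constant (the default RELSEG_SIZE is 131072 8kB blocks, i.e. 1GB):

    /*
     * Conceptual sketch of smgrmaxcombine() for md.c-style segmented files:
     * an IO starting at blocknum may combine at most up to the end of the
     * current segment, since md.c could never issue one system call across
     * a segment boundary anyway.
     */
    #define RELSEG_BLOCKS 131072u   /* illustrative: 1GB / 8kB blocks */

    static unsigned int
    max_combine_sketch(unsigned int blocknum)
    {
        return RELSEG_BLOCKS - (blocknum % RELSEG_BLOCKS);
    }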
As pg_flush_data() doesn't do anything with fsync disabled, there's no point
in tracking the buffer for writeback. Arguably the better fix would be to
change pg_flush_data() to flush data even with fsync off, but that's a
behavioral change, whereas this is just a small optimization.
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/1f6b50a7-38ef-4d87-8246-786d39f46ab9@iki.fi
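The shape of the optimization, paraphrased (names approximate):

    #include <stdbool.h>

    /*
     * Paraphrase: tracking a buffer for writeback is pointless when fsync
     * is disabled, because pg_flush_data() will do nothing with it later.
     */
    static void
    schedule_writeback_sketch(bool enableFsync, void *wb_context,
                              unsigned int blocknum)
    {
        if (!enableFsync)
            return;             /* pg_flush_data() would be a no-op anyway */

        /* ... otherwise remember (wb_context, blocknum) for flushing ... */
        (void) wb_context;
        (void) blocknum;
    }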
This seems nicer than having to duplicate the logic between
InitProcess() and ProcKill() for which child processes have a
PMChildFlags slot.
Move the MarkPostmasterChildActive() call earlier in InitProcess(),
out of the section protected by the spinlock.
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/a102f15f-eac4-4ff2-af02-f9ff209ec66f@iki.fi
Commit 9e2d870119 placed the check for binary mode and on_error
before default values were inserted, which was not ideal.
This commit moves the check to a more appropriate position
after default values are set.
Additionally, the comment incorrectly mentioned two checks before
inserting defaults, when there are actually three. This commit corrects
that comment.
Author: Atsushi Torikoshi
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/8830518a-28ac-43a2-8a11-1676d9a3cdf8@oss.nttdata.com
Previously, when ON_ERROR was set to 'ignore', the COPY command
would skip all rows with data type conversion errors, with no way to
limit the number of skipped rows before failing.
This commit introduces the REJECT_LIMIT option, allowing users to
specify the maximum number of erroneous rows that can be skipped.
If more rows encounter data type conversion errors than allowed by
REJECT_LIMIT, the COPY command will fail with an error, even when
ON_ERROR = 'ignore'.
Author: Atsushi Torikoshi
Reviewed-by: Junwang Zhao, Kirill Reshke, jian he, Fujii Masao
Discussion: https://postgr.es/m/63f99327aa6b404cc951217fa3e61fe4@oss.nttdata.com
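The per-row accounting reduces to something like this sketch (names approximate); usage would be along the lines of COPY ... WITH (ON_ERROR = 'ignore', REJECT_LIMIT 5):

    #include <stdbool.h>

    /*
     * Sketch of the per-row accounting.  A reject_limit of 0 means no
     * limit (plain ON_ERROR 'ignore' behavior).
     */
    static bool
    note_conversion_error(long *skipped, long reject_limit)
    {
        (*skipped)++;
        if (reject_limit > 0 && *skipped > reject_limit)
            return false;       /* caller errors out: limit exceeded */
        return true;            /* row is skipped, COPY continues */
    }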
In execGrouping.c, execTuplesMatchPrepare() was doing a memory
allocation that was not necessary when the number of columns was 0.
In foreign.c, pg_options_to_table() was assigning a variable the same
value twice.
Author: Ranier Vilela
Discussion: https://postgr.es/m/CAEudQAqup0agbSzMjSLSTn=OANyCzxENF1+HrSYnr3WyZib7=Q@mail.gmail.com
The cache needs to be available very early, so don't rely on
InitializeSearchPath() to initialize it.
Reported-by: Murat Efendioğlu
Discussion: https://postgr.es/m/CACbCzujQ4zS8MM1bx-==+tr+D3Hk5G1cjN4XkUQ+Q=cEpwhzqg@mail.gmail.com
Backpatch-through: 17
Several places treat MyStartTime as a "long", which is only 32 bits
wide on some platforms. In reality, MyStartTime is a pg_time_t,
i.e., a signed 64-bit integer. This will lead to interesting bugs
on the aforementioned systems in 2038 when signed 32-bit integers
are no longer sufficient to store Unix time (e.g., "pg_ctl start"
hanging). To fix, ensure that MyStartTime is handled as a 64-bit
value everywhere. (Of course, users will need to ensure that
time_t is 64 bits wide on their system, too.)
Co-authored-by: Max Johnson
Discussion: https://postgr.es/m/CO1PR07MB905262E8AC270FAAACED66008D682%40CO1PR07MB9052.namprd07.prod.outlook.com
Backpatch-through: 12
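An illustration of the bug class in standalone C (pg_time_t re-declared here for the sketch): narrowing a 64-bit pg_time_t through "long" truncates on platforms where long is 32 bits.

    #include <stdint.h>
    #include <stdio.h>

    typedef int64_t pg_time_t;      /* illustrative stand-in */

    static void
    print_start_time(pg_time_t start_time)
    {
        /* Broken on platforms with 32-bit long: truncates post-2038 values.
         *   printf("%ld\n", (long) start_time);
         */

        /* Correct: carry the full 64-bit value. */
        printf("%lld\n", (long long) start_time);
    }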
Commit 6aa44060a3 removed pg_authid's TOAST table because the only
varlena column is rolpassword, which cannot be de-TOASTed during
authentication because we haven't selected a database yet and
cannot read pg_class. Since that change, attempts to set password
hashes that require out-of-line storage will fail with a "row is
too big" error. This error message might be confusing to users.
This commit places a limit on the length of password hashes so that
attempts to set long password hashes will fail with a more
user-friendly error. The chosen limit of 512 bytes should be
sufficient to avoid "row is too big" errors independent of BLCKSZ,
but it should also be lenient enough for all reasonable use-cases
(or at least all the use-cases we could imagine).
Reviewed-by: Tom Lane, Jonathan Katz, Michael Paquier, Jacob Champion
Discussion: https://postgr.es/m/89e8649c-eb74-db25-7945-6d6b23992394%40gmail.com
During logical decoding of in-progress transactions, we perform the toast
table scan while fetching the default toast value for an attribute. We
forgot to initialize the flag during this scan to indicate that the system
table scan is in progress. We need this flag to ensure that during logical
decoding we never directly access the tableam or heap APIs because we check
for concurrent aborts only in systable_* APIs.
Reported-by: Alexander Lakhin
Author: Takeshi Ideriha, Hou Zhijie
Reviewed-by: Amit Kapila, Hou Zhijie
Backpatch-through: 14
Discussion: https://postgr.es/m/18641-6687273b7f15269d@postgresql.org
This GUC is written in camel case in most of the documentation and the
GUC table (but not postgresql.conf.sample), while two error messages
hardcoded it in lower case. Let's use a more consistent style.
Most of the noise comes from the regression tests, updated to reflect
the GUC name in these error messages.
Author: Peter Smith
Reviewed-by: Peter Eisentraut, Álvaro Herrera
Discussion: https://postgr.es/m/CAHut+Pv-kSN8SkxSdoHano_wPubqcg5789ejhCDZAcLFceBR-w@mail.gmail.com