| Commit message | Author | Age |
| |
94985c210 added code to detect when WindowFuncs were monotonic and
allowed additional quals to be "pushed down" into the subquery to be
used as WindowClause runConditions in order to short-circuit execution
in nodeWindowAgg.c.
The Node representation of runConditions wasn't well chosen: because we
do qual pushdown before planning the subquery, planning of the subquery
could perform subquery pull-up of nested subqueries, and for WindowFuncs
with arguments, those arguments could be changed after the qual had been
pushed down to the subquery.
This was made more difficult by the fact that the code duplicated the
WindowFunc inside an OpExpr to include in the WindowClause's runCondition
field. This could result in duplication of subqueries, and a pull-up of
such a subquery could result in another initplan parameter being issued
for the 2nd version of the subplan. This could result in errors such as:
ERROR: WindowFunc not found in subplan target lists
To fix this, we change the node representation of these run conditions
and instead of storing an OpExpr containing the WindowFunc in a list
inside WindowClause, we now store a new node type named
WindowFuncRunCondition within a new field in the WindowFunc. These get
transformed into OpExprs later in planning once subquery pull-up has been
performed.
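As a rough sketch only (types and field names here are illustrative
stand-ins, not the authoritative definitions), the new shape keeps the
pieces of the comparison on the WindowFunc and builds the OpExpr from
them only after subquery pull-up:

#include <stdbool.h>

typedef unsigned int Oid;     /* illustrative stand-ins for the real typedefs */
typedef struct Node Node;

/* Run condition carried by the WindowFunc itself rather than the WindowClause;
 * the planner expands it into an OpExpr later, once pull-up is done. */
typedef struct WindowFuncRunCondition
{
    Oid   opno;         /* comparison operator to apply */
    bool  wfunc_left;   /* does the WindowFunc sit on the left of the operator? */
    Node *arg;          /* expression the WindowFunc result is compared against */
} WindowFuncRunCondition;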
This problem did exist in v15 and v16, but that was fixed by 9d36b883b
and e5d20bbd.
Cat version bump due to new node type and modifying WindowFunc struct.
Bug: #18305
Reported-by: Zuming Jiang
Discussion: https://postgr.es/m/18305-33c49b4c830b37b3%40postgresql.org
| |
Author: Alexander Lakhin
Discussion: https://postgr.es/m/ae9f2fcb-4b24-5bb0-4240-efbbbd944ca1@gmail.com
| |
While converting a pg_attribute tuple into a ColumnDef,
ColumnDef::compression remains NULL if there is no compression method
set for the attribute. Calling strcmp() with a NULL
ColumnDef::compression when comparing the compression methods of parents
causes a segmentation fault in MergeInheritedAttribute(). Skip
comparing compression methods if either of them is NULL.
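A minimal standalone sketch of the guard (names are illustrative, not
the actual tablecmds.c code):

#include <stdio.h>
#include <string.h>

/* Return 1 only when both compression methods are set and differ;
 * a NULL on either side means "nothing to compare", not a crash. */
static int
compression_conflict(const char *parent, const char *child)
{
    if (parent == NULL || child == NULL)
        return 0;
    return strcmp(parent, child) != 0;
}

int
main(void)
{
    printf("%d\n", compression_conflict(NULL, "lz4"));    /* 0: comparison skipped */
    printf("%d\n", compression_conflict("pglz", "lz4"));  /* 1: genuine conflict */
    return 0;
}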
Author: Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://www.postgresql.org/message-id/b22a6834-aacb-7b18-0424-a3f5fe889667%40gmail.com
| |
Same as 42b041243, except that the trouble case is a publication
WHERE clause that depends on a column.
Again reported by Alexander Lakhin. Back-patch to v15 where
we added publication WHERE clauses.
Discussion: https://postgr.es/m/548a47bc-87ae-b3df-c6a2-60b9966f808b@gmail.com
| |
We support changing NO INHERIT constraint to INHERIT for constraints in
child relations when adding a constraint to some ancestor relation, and
also during pg_upgrade's schema restore; but other than those special
cases, command ALTER TABLE ADD CONSTRAINT should not be allowed to
change an existing constraint from NO INHERIT to INHERIT, as that would
require processing child relations so that they also acquire an
appropriate constraint, which we may not be in a position to do. (It'd
also be surprising behavior.)
It is conceivable that we want to allow ALTER TABLE SET NOT NULL to make
such a change; but in that case some more code is needed to implement it
correctly, so for now I've made that throw the same error message.
Also, during the prep phase of ALTER TABLE ADD CONSTRAINT, acquire locks
on all descendant tables; otherwise we might operate on child tables on
which no locks are held, particularly in the mode where a primary key
causes not-null constraints to be created on children.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/7d923a66-55f0-3395-cd40-81c142b5448b@gmail.com
| |
Such constraints are semantically useless and only bring weird cases
along, so reject them.
As a side effect, we can no longer have "throwaway" constraints in
pg_dump for primary keys in partitioned tables, but since they don't
serve any useful purpose, we can just omit them.
Maybe this should be done for all types of constraints, but it's just
not-null ones that acquired this "ability" in the 17 timeframe, so for
the moment I'm not changing anything else.
Per note by Alexander Lakhin.
Discussion: https://postgr.es/m/7d923a66-55f0-3395-cd40-81c142b5448b@gmail.com
| |
max_ios should be int rather than int16, otherwise there's not much
point in doing:
max_ios = Min(max_ios, PG_INT16_MAX);
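A small self-contained illustration of the problem (values made up; not
the read-stream code itself): on a typical two's-complement build the
int16 variable has already wrapped before Min() ever runs, so only the
wider type makes the clamp meaningful.

#include <stdint.h>
#include <stdio.h>

#define Min(x, y) ((x) < (y) ? (x) : (y))

int
main(void)
{
    int16_t narrow = (int16_t) 100000;   /* already wrapped by the assignment */
    int     wide   = 100000;

    printf("int16 max_ios: %d\n", (int) Min(narrow, INT16_MAX));  /* garbage */
    printf("int   max_ios: %d\n", (int) Min(wide, INT16_MAX));    /* 32767 */
    return 0;
}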
Discussion: https://postgr.es/m/CAApHDvr9Un-XpDr_+AFdOGM38O2K8SpfoHimqZ838gguTGYBiQ@mail.gmail.com
| |
A parallel worker's buffer usage is accumulated in its pgBufferUsage
and then added to the leader's at the end of the
parallel vacuum. However, since the leader process used to use
dedicated VacuumPage{Hit, Miss, Dirty} globals for the buffer usage
reporting, the worker's buffer usage was not included, leading to an
incorrect buffer usage report.
To fix the problem, this commit makes vacuum use pgBufferUsage
instruments for buffer usage reporting instead of VacuumPage{Hit,
Miss, Dirty} globals. These global variables are still used by ANALYZE
command and autoanalyze.
This also fixes the buffer usage report of vacuuming on temporary
tables, since the buffers dirtied by MarkLocalBufferDirty() were not
tracked by the VacuumPageDirty variable.
Parallel vacuum was introduced in 13, but the buffer usage reporting
for VACUUM command with the VERBOSE option was implemented in
15. So backpatch to 15.
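The accounting pattern amounts to the following standalone sketch
(struct and field names simplified, not the real instrumentation code):
the leader snapshots the shared counters before and after, and the
reported difference now naturally includes whatever the workers folded
into them.

#include <stdio.h>

/* Simplified stand-in for pgBufferUsage-style counters. */
typedef struct BufUsage
{
    long hits;
    long dirtied;
} BufUsage;

static BufUsage counters;            /* counter set that workers also feed */

static BufUsage
usage_since(BufUsage start)
{
    BufUsage d = { counters.hits - start.hits, counters.dirtied - start.dirtied };

    return d;                        /* worker contributions are included here */
}

int
main(void)
{
    BufUsage start = counters;

    counters.hits += 10;             /* leader's own work */
    counters.hits += 7;              /* worker totals accumulated into the same counters */
    printf("hits reported: %ld\n", usage_since(start).hits);   /* 17 */
    return 0;
}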
Reported-by: Anthonin Bonnefoy
Author: Anthonin Bonnefoy
Reviewed-by: Alena Rybakina, Masahiko Sawada
Discussion: https://postgr.es/m/CAO6_XqrQk+QZQcYs_C6nk0cMfHuUWk85vT9CrcA1NffFbAVE2A@mail.gmail.com
Backpatch-through: 15
| |
As an optimization, we store "name" columns as cstrings in btree
indexes.
Here we modify it so that Index Only Scans convert these cstrings back
to names with NAMEDATALEN bytes rather than storing the cstring in the
tuple slot, as was happening previously.
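A standalone sketch of the widening step (NAMEDATALEN as in a default
build; not the actual executor code):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NAMEDATALEN 64   /* default in PostgreSQL builds */

/* Rebuild a fixed-width, zero-padded name from the cstring stored in the index. */
static char *
cstring_to_name(const char *cstr)
{
    char *name = calloc(1, NAMEDATALEN);

    if (name != NULL)
        strncpy(name, cstr, NAMEDATALEN - 1);   /* padding past the string stays zeroed */
    return name;
}

int
main(void)
{
    char *n = cstring_to_name("pg_class");

    if (n != NULL)
        printf("%s (stored width: %d bytes)\n", n, NAMEDATALEN);
    free(n);
    return 0;
}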
Bug: #17855
Reported-by: Alexander Lakhin
Reviewed-by: Alexander Lakhin, Tom Lane
Discussion: https://postgr.es/m/17855-5f523e0f9769a566@postgresql.org
Backpatch-through: 12, all supported versions
| |
Discussion: https://postgr.es/m/4ea13583-7305-40b0-8525-58381533e2b1@eisentraut.org
Reported-by: Peter Eisentraut
| |
This commit makes new partitions created by the ALTER TABLE ... SPLIT PARTITION
and ALTER TABLE ... MERGE PARTITIONS commands inherit the parent table's access
method.
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/84ada05b-be5c-473e-6d1c-ebe5dd21b190%40gmail.com
Reviewed-by: Pavel Borisov
| |
Currently, the error message is produced by a system of complex substitutions,
making it quite untranslatable and hard to read. This commit splits it into
four plain error messages suitable for translation.
Reported-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20240408.152402.1485994009160660141.horikyota.ntt%40gmail.com
Reviewed-by: Pavel Borisov
| |
The createPartitionTable() function is responsible for creating new partitions
for ALTER TABLE ... MERGE PARTITIONS, and ALTER TABLE ... SPLIT PARTITION
commands. It emulates the behaviour of CREATE TABLE ... (LIKE ...), where
the new table's persistence should be specified by the user. With table
partitioning, the persistence of a partition and its parent must match. So, this
commit makes createPartitionTable() copy the persistence of the parent
table.
Also, this commit makes createPartitionTable() recheck the persistence after
the new table creation. This is needed because persistence might be affected
by pg_temp in search_path.
This commit also changes the signature of createPartitionTable() making it
take the parent's Relation itself instead of the name of the parent relation,
and return the Relation of new partition. That doesn't lead to
complications, because both callers have the parent table open and need to
open the new partition.
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/dbc8b96c-3cf0-d1ee-860d-0e491da20485%40gmail.com
Author: Dmitry Koval
Reviewed-by: Alexander Korotkov, Robert Haas, Justin Pryzby, Pavel Borisov
| |
The name collision happens when the name of the new partition is the same as
the name of one of the merging partitions. Currently, ATExecMergePartitions()
first gives the new partition a temporary name and then renames it when old
partitions are deleted. That negatively influences the naming of related
objects like indexes and constraints, which could inherit a temporary name.
This commit changes the implementation in the following way. A merging
partition gets renamed first, then the new partition is created with the
right name immediately. This resolves the issue of the naming of related
objects.
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/edfbd846-dcc1-42d1-ac26-715691b687d3%40postgrespro.ru
Author: Dmitry Koval, Alexander Korotkov
Reviewed-by: Robert Haas, Justin Pryzby, Pavel Borisov
| |
If an ACL recorded in pg_init_privs mentions a non-pinned role,
that reference must also be noted in pg_shdepend so that we know
that the role can't go away without removing the ACL reference.
Otherwise, DROP ROLE could succeed and leave dangling entries
behind, which is what's causing the recent upgrade-check failures
on buildfarm member copperhead.
This has been wrong since pg_init_privs was introduced, but it's
escaped notice because typical pg_init_privs entries would only
mention the bootstrap superuser (pinned) or at worst the owner
of the extension (who can't go away before the extension does).
We lack even a representation of such a role reference for
pg_shdepend. My first thought for a solution was entries listing
pg_init_privs in classid, but that doesn't work because then there's
no place to put the granted-on object's classid. Rather than adding
a new column to pg_shdepend, let's add a new deptype code
SHARED_DEPENDENCY_INITACL. Much of the associated boilerplate
code can be cribbed from code for SHARED_DEPENDENCY_ACL.
A lot of the bulk of this patch just stems from the new need to pass
the object's owner ID to recordExtensionInitPriv, so that we can
consult it while updating pg_shdepend. While many callers have that
at hand already, a few places now need to fetch the owner ID of an
arbitrary privilege-bearing object. For that, we assume that there
is a catcache on the relevant catalog's OID column, which is an
assumption already made in ExecGrant_common so it seems okay here.
We do need an entirely new routine RemoveRoleFromInitPriv to perform
cleanup of pg_init_privs ACLs during DROP OWNED BY. It's analogous
to RemoveRoleFromObjectACL, but we can't share logic because that
function operates by building a command parsetree and invoking
existing GRANT/REVOKE infrastructure. There is of course no SQL
command that would update pg_init_privs entries when we're not in
process of creating their extension, so we need a routine that can
do the updates directly.
catversion bump because this changes the expected contents of
pg_shdepend. For the same reason, there's no hope of back-patching
this, even though it fixes a longstanding bug. Fortunately, the
case where it's a problem seems to be near nonexistent in the field.
If it weren't for the buildfarm breakage, I'd have been content to
leave this for v18.
Patch by me; thanks to Daniel Gustafsson for review and discussion.
Discussion: https://postgr.es/m/1745535.1712358659@sss.pgh.pa.us
| |
Repeating loads of inplace-updated fields tends to cause bugs like the
one from the previous commit. While there's no bug to fix in these code
sites, adopt the load-once style. This improves the chance of future
copy/paste finding the safe style.
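The load-once style, as a self-contained sketch (struct, field, and
cutoff are illustrative):

#include <stdio.h>

/* A field that a concurrent process may overwrite in place. */
struct catalog_row
{
    volatile int relfrozenxid;
};

static void
check_row(struct catalog_row *row, int cutoff)
{
    int frozen = row->relfrozenxid;   /* load once; use only the local copy below */

    /* Re-reading row->relfrozenxid here could observe a different value
     * than the one just tested, which is the bug class being avoided. */
    if (frozen != 0 && frozen < cutoff)
        printf("value %d precedes cutoff %d\n", frozen, cutoff);
}

int
main(void)
{
    struct catalog_row row = { .relfrozenxid = 5 };

    check_row(&row, 10);
    return 0;
}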
Discussion: https://postgr.es/m/20240423003956.e7.nmisch@google.com
| |
vac_update_datfrozenxid() did multiple loads of relfrozenxid and
relminmxid from buffer memory, and it assumed each would get the same
value. Not so if a concurrent vac_update_relstats() did an inplace
update. Commit 2d2e40e3befd8b9e0d2757554537345b15fa6ea2 fixed the same
kind of bug in vac_truncate_clog(). Today's bug could cause the
rel-level field and XIDs in the rel's rows to precede the db-level
field. A cluster having such values should VACUUM affected tables.
Back-patch to v12 (all supported versions).
Discussion: https://postgr.es/m/20240423003956.e7.nmisch@google.com
| |
If the client supports ALPN but tries to use some other protocol, like
HTTPS, reject the connection in the server. That is surely a confusion
of some sort. Furthermore, the ALPN RFC 7301 says:
> In the event that the server supports no protocols that the client
> advertises, then the server SHALL respond with a fatal
> "no_application_protocol" alert.
This commit makes the server follow that advice.
In the client, specifically check for the OpenSSL error code for the
"no_application_protocol" alert. Otherwise you got a cryptic "SSL
error: SSL error code 167773280" error if you tried to connect to a
non-PostgreSQL server that rejects the connection with
"no_application_protocol". ERR_reason_error_string() returns NULL for
that code, which frankly seems like an OpenSSL bug to me, but we can
easily print a better message ourselves.
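The client-side fallback is roughly the following hedged sketch (message
wording illustrative; only ERR_reason_error_string() and its NULL return
are assumed from OpenSSL):

#include <openssl/err.h>
#include <stdio.h>

/* Print something readable even when OpenSSL has no reason string for the
 * error code, as happens for the "no_application_protocol" alert. */
static void
report_ssl_error(unsigned long ecode)
{
    const char *reason = ERR_reason_error_string(ecode);

    if (reason != NULL)
        fprintf(stderr, "SSL error: %s\n", reason);
    else
        fprintf(stderr, "SSL error code %lu: no ALPN protocol accepted by server?\n",
                ecode);
}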
Reported-by: Jacob Champion
Discussion: https://www.postgresql.org/message-id/6aedcaa5-60f3-49af-a857-2c76ba55a1f3@iki.fi
| |
This reverts commit a740b213d4b4d3360ad0cac696e47e5ec0eb8864.
Subsequent discussion showed that there was interest in a more general
facility to configure when server log events would produce backtraces,
and this existing limited way couldn't be extended in a compatible
way. So the consensus was to revert this for PostgreSQL 17 and
reconsider this topic for PostgreSQL 18.
Discussion: https://www.postgresql.org/message-id/flat/CAGECzQTChkvn5Xj772LB3%3Dxo2x_LcaO5O0HQvXqobm1xVp6%2B4w%40mail.gmail.com#764bcdbb73e162787e1ad984935e51e3
| |
ALTER COLUMN TYPE wasn't expecting to find any pg_proc objects
depending on the column whose type is to be altered. That indeed
wasn't possible when this code was written, but it is possible
since we introduced new-style SQL function bodies.
It's about as difficult to fix this case as it is to fix dependent
views, and we've been punting on those for years, so I don't feel
too awful about punting for functions too. (I sure wouldn't risk
back-patching such code.) So just throw a more user-facing error.
Also, adjust some of the existing comments to reflect that these
are all pretty much the same issue.
(This patch also fixes it so we will tolerate finding such a
dependency during ALTER COLUMN SET EXPRESSION; in that, we need
not do anything to the function, so no error is wanted. That
problem is new in HEAD.)
Per bug #18449 from Alexander Lakhin. Back-patch to v14 where
we added new-style SQL functions.
Discussion: https://postgr.es/m/18449-f8248467aaa294d5@postgresql.org
| |
In commit 25cd2d640 I (tgl) opined that "The additions of the months
and microseconds fields could also overflow, of course. However,
I believe we need no additional checks there; the existing range
checks should catch such cases". This is demonstrably wrong however
for the microseconds field, and given that discovery it seems prudent
to be paranoid about the months addition as well.
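The paranoid style boils down to overflow-checked addition; a
self-contained sketch using the GCC/Clang builtin (PostgreSQL's own
overflow helpers are similar in spirit):

#include <stdint.h>
#include <stdio.h>

/* Return 0 on overflow instead of silently wrapping and hoping a later
 * range check notices. */
static int
add_microseconds_checked(int64_t a, int64_t b, int64_t *result)
{
    int64_t sum;

    if (__builtin_add_overflow(a, b, &sum))
        return 0;
    *result = sum;
    return 1;
}

int
main(void)
{
    int64_t total;

    if (!add_microseconds_checked(INT64_MAX, 1, &total))
        printf("interval out of range\n");
    return 0;
}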
Report and patch by Joseph Koshakow. As before, back-patch to all
supported branches. (However, the test case doesn't work before
v15 because we didn't allow wider-than-int32 numbers in interval
literals. A variant test could probably be built that fits within
that restriction, but it didn't seem worth the trouble.)
Discussion: https://postgr.es/m/CAAvxfHf77sRHKoEzUw9_cMYSpbpNS2C+J_+8Dq4+0oi8iKopeA@mail.gmail.com
| |
Also, fix a comment incorrectly referencing the "streaming read API".
This was renamed to "read stream" shortly before being committed.
Discussion: https://postgr.es/m/CAApHDvq-2Zdqytm_Hf3RmVf0qg5PS9jTFAJ5QTc9xH9pwvwDTA@mail.gmail.com
| |
Allow pg_sync_replication_slots() to error out during promotion of standby.
This makes the behavior of the SQL function consistent with the slot sync
worker. We also ensured that pg_sync_replication_slots() cannot be
executed if sync_replication_slots is enabled and the slotsync worker is
already running to perform the synchronization of slots. Previously, it
would have succeeded when the worker was idle and failed when the worker
was performing a sync, which could confuse users.
This patch fixes another issue in the slot sync worker where
SignalHandlerForShutdownRequest() needs to be registered *before* setting
SlotSyncCtx->pid; otherwise, the slotsync worker could miss handling a
SIGINT sent by the startup process (ShutDownSlotSync) if it is sent before
the worker has registered SignalHandlerForShutdownRequest(). To be consistent,
all signal handlers' registration is moved to a prior location before we
set the worker's pid.
Ensure that we clean up synced temp slots at the end of
pg_sync_replication_slots() to avoid such slots being left over after
promotion.
Ensure that ShutDownSlotSync() captures SlotSyncCtx->pid under spinlock to
avoid accessing invalid value as it can be reset by concurrent slot sync
exit due to an error.
Author: Shveta Malik
Reviewed-by: Hou Zhijie, Bertrand Drouvot, Amit Kapila, Masahiko Sawada
Discussion: https://postgr.es/m/CAJpy0uBefXUS_TSz=oxmYKHdg-fhxUT0qfjASW3nmqnzVC3p6A@mail.gmail.com
| |
A permission check is performed in be_lo_put() just after returning
from inv_open(), but the permission is already checked in inv_open(),
so we can remove the second check.
This check was added in 8d9881911f0, but then the refactoring in
ae20b23a9e7 should have removed it.
Author: Yugo NAGATA <nagata@sraoss.co.jp>
Discussion: https://www.postgresql.org/message-id/flat/20240424185932.9789628b99a49ec81b020425%40sraoss.co.jp
| |
We missed performing table sync if the invalidation happened while the
non-ready tables list was being prepared. This occurs because the sync
state was set to valid at the end of the non-ready table list preparation,
irrespective of any invalidations processed while the list was being
prepared.
Fix it by changing the boolean variable to a tri-state enum and by setting
table state to valid only if no invalidations have occurred while the list
is being prepared.
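A sketch of the idea (enum and value names chosen for illustration, not
the actual ones):

/* Three states instead of a boolean: an invalidation arriving while the
 * list is being built leaves the state at "needs rebuild", not "valid". */
typedef enum
{
    SYNC_LIST_NEEDS_REBUILD,   /* invalidation seen; list must be rebuilt */
    SYNC_LIST_BEING_BUILT,     /* under construction; not yet trustworthy */
    SYNC_LIST_VALID            /* built with no intervening invalidation  */
} SyncListState;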
Reported-by: Alexander Lakhin
Diagnosed-by: Alexander Lakhin
Author: Vignesh C
Reviewed-by: Hou Zhijie, Alexander Lakhin, Ajin Cherian, Amit Kapila
Backpatch-through: 15
Discussion: https://postgr.es/m/711a6afe-edb7-1211-cc27-1bef8239eec7@gmail.com
| |
The SSL_R_VERSION_TOO_LOW error reason is supported in LibreSSL since
LibreSSL 3.6.3, shipped in OpenBSD 7.2. SSL_R_VERSION_TOO_HIGH is on
the other hand not supported in any version of LibreSSL. Previously
we only checked for SSL_R_VERSION_TOO_HIGH and then applied both under
that guard, since OpenSSL has only ever supported both at the same time.
This commit breaks the check into one per reason code to allow
SSL_R_VERSION_TOO_LOW to work when using LibreSSL.
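Roughly, the single combined guard becomes one per reason code; a hedged
sketch (only ERR_GET_REASON() and the two macros are assumed from the
SSL headers):

#include <openssl/err.h>
#include <openssl/ssl.h>

/* Classify a protocol-version mismatch using whichever reason codes this
 * particular SSL library defines. */
static int
is_version_mismatch(unsigned long ecode)
{
    switch (ERR_GET_REASON(ecode))
    {
#ifdef SSL_R_VERSION_TOO_HIGH
        case SSL_R_VERSION_TOO_HIGH:
            return 1;
#endif
#ifdef SSL_R_VERSION_TOO_LOW
        case SSL_R_VERSION_TOO_LOW:
            return 1;
#endif
        default:
            return 0;
    }
}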
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/eac70d46-e61c-4d71-a1e1-78e2bfa19485@eisentraut.org
| |
LibreSSL doesn't support the SSL_OP_NO_RENEGOTIATION macro which is
used by OpenSSL, instead it has invented a similar one for client-
side renegotiation: SSL_OP_NO_CLIENT_RENEGOTIATION. This has been
supported since LibreSSL 2.5.1 which by now can be considered well
below the minimum requirement.
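The resulting conditional compilation looks roughly like this sketch
(function and variable names illustrative):

#include <openssl/ssl.h>

static void
disable_renegotiation(SSL_CTX *context)
{
#ifdef SSL_OP_NO_RENEGOTIATION
    SSL_CTX_set_options(context, SSL_OP_NO_RENEGOTIATION);
#elif defined(SSL_OP_NO_CLIENT_RENEGOTIATION)
    /* LibreSSL's client-side-only equivalent */
    SSL_CTX_set_options(context, SSL_OP_NO_CLIENT_RENEGOTIATION);
#endif
}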
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/eac70d46-e61c-4d71-a1e1-78e2bfa19485@eisentraut.org
| |
JsonExprState.input_finfo is only assigned to, never read, and
it's really fairly useless since the value can be gotten out of
the adjacent input_fcinfo field. Let's remove it before someone
starts to depend on it.
While here, also remove TidScanState.tss_htup and AggState.combinedproj,
which are referenced nowhere. Those should have been removed by the
commits that caused them to become disused, but were not.
I don't think a catversion bump is necessary here, since plan trees
are never stored on disk.
Matthias van de Meent
Discussion: https://postgr.es/m/CAEze2WjsY4d0TBymLNGK4zpttUcg_YZaTjyWz2VfDUV6YH8wXQ@mail.gmail.com
| |
If the GUC has a unit, label the minimum and maximum values
with the unit explicitly. Per suggestion from Jian He.
Discussion: https://postgr.es/m/CACJufxFJo6FyVg9W8yvNAxbjP+EJ9wieE9d9vw5LpPzyLnLLOQ@mail.gmail.com
| |
Do not allow ALTER SUBSCRIPTION ... SET (failover = on|off) in a
transaction block as the changed failover option of the slot can't be
rolled back. For the same reason, we refrain from altering the replication
slot's failover property if the subscription is created with a valid
slot_name and create_slot=false.
Reported-by: Kuroda Hayato
Author: Hou Zhijie
Reviewed-by: Shveta Malik, Bertrand Drouvot, Kuroda Hayato
Discussion: https://postgr.es/m/OS0PR01MB57165542B09DFA4943830BF294082@OS0PR01MB5716.jpnprd01.prod.outlook.com
| |
Certain cases involving the use of cursors had assertion failures within
_bt_preprocess_keys's recently added no-op return path. The assertion
in question made the faulty assumption that a second or third call to
_bt_preprocess_keys (within the same btrescan) could only happen when
another scheduled primitive index scan was just about to begin.
It would be possible to address the problem by only allowing scans that
have array keys to take the new no-op path, forcing affected cases to
perform redundant preprocessing work. It seems simpler to just remove
the assertion, and reframe the no-op path as a more general mechanism.
Take this simpler approach.
The important underlying principle is that we only need to perform
preprocessing once per btrescan (at most). This is expected regardless
of whether or not the scan happens to have array keys.
Oversight in commit 1b134ca5, which enhanced nbtree ScalarArrayOp
execution.
Reported-By: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/ef0f7c8b-a6fa-362e-6fd6-054950f947ca@gmail.com
| |
This led to spurious assertion failures in certain scenarios involving
pseudo types.
Oversight in commit 5bf748b8, which enhanced nbtree ScalarArrayOp
execution.
Reported-By: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/CAMbWs48f5rDOwxaT76Zd40m7n9iGZQcjEk7vG_5p3YWNh6oPfA@mail.gmail.com
| |
When specifying the createdb strategy, the documentation suggests valid
options are FILE_COPY and WAL_LOG, but the code does a case-sensitive
comparison and accepts only "file_copy" and "wal_log" as valid.
Fixed by doing a case-insensitive comparison using pg_strcasecmp(), same
as for other string parameters nearby.
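In plain C terms the validation becomes case-insensitive, along these
lines (a standalone sketch using strcasecmp rather than the backend's
pg_strcasecmp):

#include <stdio.h>
#include <strings.h>

static int
valid_strategy(const char *value)
{
    return strcasecmp(value, "file_copy") == 0 ||
           strcasecmp(value, "wal_log") == 0;
}

int
main(void)
{
    printf("%d %d %d\n",
           valid_strategy("FILE_COPY"),   /* 1 */
           valid_strategy("wal_log"),     /* 1 */
           valid_strategy("copy"));       /* 0 */
    return 0;
}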
While at it, apply fmtId() to a nearby "locale_provider". This already
did the comparison in a case-insensitive way, but the value would not be
double-quoted, confusing the parser and garbling the error message.
Backpatch to 15, where the strategy was introduced.
Backpatch-through: 15
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/90c6913a-1dd2-42b4-8365-ce3b09c39b17@enterprisedb.com
| |
The optimization for inserts into BRIN indexes added by c1ec02be1d79
relies on a cache that needs to be explicitly released after calling
index_insert(). The commit however failed to invoke the cleanup in
validate_index(), which calls index_insert() indirectly through
table_index_validate_scan().
After inspecting index_insert() callers, it seems unique_key_recheck()
is missing the call too.
Fixed by adding the two missing index_insert_cleanup() calls.
The commit makes two additional improvements. The aminsertcleanup()
signature is modified to have the index as the first argument, to make
it more like the other AM callbacks. And the aminsertcleanup() callback
is invoked even if the ii_AmCache is NULL, so that it can decide if the
cleanup is necessary.
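In shape, the call site becomes unconditional and the hook does its own
cache check; a hedged sketch with simplified, opaque types (not the
actual indexam definitions):

struct Relation;                     /* opaque stand-ins for the real types */
struct IndexInfo;

typedef void (*aminsertcleanup_fn) (struct Relation *indexRelation,
                                    struct IndexInfo *indexInfo);

/* Invoke the AM cleanup hook even when the insert cache was never built;
 * the hook itself decides whether there is anything to free. */
static void
index_insert_cleanup_sketch(struct Relation *indexRelation,
                            struct IndexInfo *indexInfo,
                            aminsertcleanup_fn cleanup)
{
    if (cleanup != NULL)
        cleanup(indexRelation, indexInfo);
}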
Author: Alvaro Herrera, Tomas Vondra
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/202401091043.e3nrqiad6gb7@alvherre.pgsql
| |
Typos introduced by commits c1ec02be1d79, b43757171470 and dae761a87eda.
Author: Alvaro Herrera
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/202401091043.e3nrqiad6gb7@alvherre.pgsql
| |
It is possible in certain cases to remove not-null constraints without
maintaining attnotnull in its correct state; for example, if you drop
a column that's part of the primary key, and the other columns of the PK don't
have not-null constraints, then we should reset the attnotnull flags for
those other columns; up to this commit, we didn't. Handle those cases
better by doing the attnotnull reset in RemoveConstraintById() instead
of in dropconstraint_internal().
However, there are some cases where we must not do so. For example if
those other columns are in replica identity indexes or are generated
identity columns, we must keep attnotnull set, even though it results in
the catalog inconsistency that no not-null constraint supports that.
Because the attnotnull reset now happens in more places than before, for
instance when a column of the primary key changes type, we need an
additional trick to reinstate it as necessary. Introduce a new
alter-table pass that does this, which simply needs to reschedule some
AT_SetAttNotNull subcommands that were already being generated and
ignored.
Because of the exceptions in which attnotnull is not reset noted above,
we also include a pg_dump hack to include a not-null constraint when the
attnotnull flag is set even if no pg_constraint row exists. This part
is undesirable but necessary, because failing to handle the case can
result in unrestorable dumps.
Reported-by: Tender Wang <tndrwang@gmail.com>
Co-authored-by: Tender Wang <tndrwang@gmail.com>
Reviewed-by: jian he <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CAHewXN=hMbNa3d43NOR=OCgdgpTt18S-1fmueCoEGesyeK4bqw@mail.gmail.com
| |
Code quality improvement for 0294df2f1f84.
Aleksander Alekseev, reviewed by Richard Guo.
Discussion: https://postgr.es/m/CAJ7c6TMsiaV5urU_Pq6zJ2tXPDwk69-NKVh4AMN5XrRiM7N%2BGA%40mail.gmail.com
| |
This fixes various typos, duplicated words, and tiny bits of whitespace
mainly in code comments but also in docs.
Author: Daniel Gustafsson <daniel@yesql.se>
Author: Heikki Linnakangas <hlinnaka@iki.fi>
Author: Alexander Lakhin <exclusion@gmail.com>
Author: David Rowley <dgrowleyml@gmail.com>
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/3F577953-A29E-4722-98AD-2DA9EFF2CBB8@yesql.se
| |
Preprocessing for nbtree index scans allowed array "input" scan keys
already marked eliminated during array-specific preprocessing to be
"fixed up" during preprocessing proper. This allowed eliminated scan
keys on DESC index columns to spuriously have their strategy commuted,
causing assertion failures.
To fix, teach _bt_fix_scankey_strategy to ignore these scan keys. This
brings it in line with its only caller, _bt_preprocess_keys.
Oversight in commit 5bf748b8, which enhanced nbtree ScalarArrayOp
execution.
Reported-By: Donghang Lin <donghanglin@gmail.com>
Discussion: https://postgr.es/m/CAA=D8a2sHK6CAzZ=0CeafC-Y-MFXbYxnRSHvZTi=+JHu6kAa8Q@mail.gmail.com
| |
For tables with primary keys, pg_dump initially dumps them with
throw-away not-null constraints (marked "no inherit" so that they don't
create problems elsewhere), to later drop them once the primary key is
restored. Because of an unrelated consideration, on tables with children
we add not-null constraints to all columns of the primary key when it is
created.
If both a table and its child have primary keys, and pg_dump happens to
emit the child table first (and its throw-away not-null) and later its
parent table, the creation of the parent's PK will fail because the
throw-away not-null constraint collides with the permanent not-null
constraint that the PK wants to add, so the dump fails to restore.
We can work around this problem by letting the primary key "take over"
the child's not-null. This requires no changes to pg_dump, just two
changes to ALTER TABLE: first, the ability to convert a no-inherit
not-null constraint into a regular inheritable one (including recursing
down to children, if there are any); second, the ability to "drop" a
constraint that is defined both directly in the table and inherited from
a parent (which simply means to mark it as no longer having a local
definition).
Secondarily, change ATPrepAddPrimaryKey() to acquire locks all the way
down the inheritance hierarchy, in case we need to recurse when
propagating constraints.
These two changes allow pg_dump to reproduce more cases involving
inheritance from versions 16 and older.
Lastly, make two changes to pg_dump: 1) do not try to drop a not-null
constraint that's marked as inherited; this allows a dump to restore
with no errors if a table with a PK inherits from another which also has
a PK; 2) avoid giving inherited constraints throwaway names, for the
rare cases where such a constraint survives after the restore.
Reported-by: Andrew Bille <andrewbille@gmail.com>
Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/CAJnzarwkfRu76_yi3dqVF_WL-MpvT54zMwAxFwJceXdHB76bOA@mail.gmail.com
Discussion: https://postgr.es/m/Zh0aAH7tbZb-9HbC@pryzbyj2023
| |
This addresses some post-commit review comments for commits 6185c973,
de3600452, and 9425c596a0, with the following changes:
* Fix JSON_TABLE() syntax documentation to use the term
"path_expression" for JSON path expressions instead of
"json_path_specification" to be consistent with the other SQL/JSON
functions.
* Fix a typo in the example code in JSON_TABLE() documentation.
* Rewrite some newly added comments in jsonpath.h.
* In JsonPathQuery(), add missing cast to int before printing an enum
value.
Reported-by: Jian He <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxG_e0QLCgaELrr2ZNz7AxPeGCNKAORe3fHtFCQLsH4J4Q@mail.gmail.com
| |
SQL/JSON query functions allow specifying an expression to return
when either of ON ERROR or ON EMPTY condition occurs when evaluating
the JSON path expression. The parser (transformJsonBehavior()) checks
that the specified expression is one of the supported expressions, but
there are two issues with how the check is done that are fixed in this
commit:
* No check for some expressions related to coercion, such as
CoerceViaIO, that may appear in the transformed user-specified
expressions that include cast(s)
* An unsupported expression may be masked by a coercion-related
expression, which must be flagged by checking the latter's
argument expression recursively
Author: Jian He <jian.universality@gmail.com>
Author: Amit Langote <amitlangote09@gmail.com>
Reported-by: Jian He <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxEqhqsfrg_p7EMyo5zak3d767iFDL8vz_4%3DZBHpOtrghw@mail.gmail.com
Discussion: https://postgr.es/m/CACJufxGOerH1QJknm1noh-Kz5FqU4p7QfeZSeVT2tN_4SLXYNg@mail.gmail.com
| |
This improves some error messages emitted by SQL/JSON query functions
by mentioning the column name when available, such as when they are
invoked as part of evaluating JSON_TABLE() columns. To do so, a new
field column_name is added to both JsonFuncExpr and JsonExpr that is
only populated when creating those nodes for transformed JSON_TABLE()
columns.
While at it, relevant error messages are reworded for clarity.
Reported-by: Jian He <jian.universality@gmail.com>
Suggested-by: Jian He <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxG_e0QLCgaELrr2ZNz7AxPeGCNKAORe3fHtFCQLsH4J4Q@mail.gmail.com
| |
fefd9a3fed turned tail recursion of CommitTransactionCommand() and
AbortCurrentTransaction() into iteration. However, it split the handling of
the cases across different functions.
This commit puts the handling of all the cases into
AbortCurrentTransactionInternal() and CommitTransactionCommandInternal().
Now CommitTransactionCommand() and AbortCurrentTransaction() are just doing
the repeated calls of internal functions.
Reported-by: Andres Freund
Discussion: https://postgr.es/m/20240415224834.w6piwtefskoh32mv%40awork3.anarazel.de
Author: Andres Freund
| |
GlobalVisTestNonRemovableHorizon() was only used for the implementation of
snapshot_too_old, which was removed in f691f5b80a8. As using
GlobalVisTestNonRemovableHorizon() is not particularly efficient, no new uses
for it should be added. Therefore remove.
Discussion: https://postgr.es/m/20240415185720.q4dg4dlcyvvrabz4@awork3.anarazel.de
| |
Commit b43757171470 added support for parallel builds of BRIN indexes,
using code similar to BTREE. But there were a couple of unnecessary
differences, particularly in how the leader waits for the workers and
merges the results. So remove these, to make the code more similar.
The leader never waited on the workersdonecv condition variable, but
simply called WaitForParallelWorkersToFinish() in _brin_end_parallel()
and then merged the per-worker results. This worked correctly, but it
seems better to do the wait and merge before _brin_end_parallel().
This commit moves the relevant code to _brin_parallel_heapscan/merge(),
which means _brin_end_parallel() remains responsible only for exiting
the parallel mode and accumulating WAL usage data.
Discussion: https://postgr.es/m/3733d042-71e1-6ae6-5fac-00c12db62db6@enterprisedb.com
| |
The configure check for HAVE_DECL_LLVMORCREGISTERPERF was removed by
e9a9843e138, but some code guarded by it was left. (That commit
removed the "register" calls but left the "unregister" calls.) That
code cannot be reached anymore, so remove it.
Reported-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://www.postgresql.org/message-id/5539b16c-cff7-46d5-9621-c3fb6b549e9e@iki.fi
| |
Databases exist, contrary to datatabases.
Oversight in e83d1b0c40cc.
Author: Japin Li
Discussion: https://postgr.es/m/ME3P282MB316611A2F7BF43919F695228B6082@ME3P282MB3166.AUSP282.PROD.OUTLOOK.COM
| |
The ON_ERROR option of the COPY command previously allowed omitting
its value, which was inconsistent with the syntax synopsis in the
documentation and the behavior of other non-boolean COPY options.
This change enforces providing a value for the ON_ERROR option,
ensuring consistency across other non-boolean options and aligning
with the documented syntax.
Author: Atsushi Torikoshi
Reviewed-by: Masahiko Sawada
Discussion: https://postgr.es/m/a9770bf57646d90dedc3d54cf32634b2%40oss.nttdata.com
| |
Oversight in 29f6a959c.
In passing, since we now have 4 memory context types to choose from,
provide a brief overview of the specialities of each memory context
type.
Reported-by: Amul Sul
Author: Amul Sul, David Rowley
Discussion: https://postgr.es/m/CAAJ_b94U2s9nHh--DEK=sPEZUQ+x7vQJ7529fF8UAH97QJ9NXg@mail.gmail.com