When performing a bitmap heap scan, we don't want to miss concurrent
writes that occurred after we observed the heap's rs_nblocks, but before
we took predicate locks on index pages. Therefore, we can't skip
fetching any heap tuples that are referenced by the index, because we
need to test them all with CheckForSerializableConflictOut(). The
old optimization that would ignore any references to blocks >=
rs_nblocks gets in the way of that requirement, because it means that
concurrent writes in that window are ignored.
Removing that optimization shouldn't affect correctness at any isolation
level, because any new tuples shouldn't be visible to an MVCC snapshot.
There also shouldn't be any error-causing references to heap blocks past
the end, because we should have held at least an AccessShareLock on the
table before the index scan. It can't get smaller while our transaction
is running. For now, though, we'll keep the optimization at lower
isolation levels to avoid making unnecessary changes in a bug fix.
Back-patch to all supported releases. In release 11, the code is in a
different place but not fundamentally different. Fixes one aspect of
bug #17949.
Reported-by: Artem Anisimov <artem.anisimov.255@gmail.com>
Reviewed-by: Dmitry Dolgov <9erthalion6@gmail.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/17949-a0f17035294a55e2%40postgresql.org
---
When predicate-locking btrees, we have a special case for completely
empty btrees, since there is no page to lock. This was racy, because,
without buffer lock held, a matching key could be inserted between the
_bt_search() and the PredicateLockRelation() calls.
Fix by repeating the _bt_search() after taking the relation-level SIREAD
lock, if using SERIALIZABLE isolation and an empty btree is discovered.
Back-patch to all supported releases. Fixes one aspect of bug #17949.
Reported-by: Artem Anisimov <artem.anisimov.255@gmail.com>
Reviewed-by: Dmitry Dolgov <9erthalion6@gmail.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/17949-a0f17035294a55e2%40postgresql.org
---
Commit f24523672d fixed a memory leak by moving the modifiedCols bitmap
into the per-row memory context. In the case of AFTER UPDATE triggers,
the bitmap is however referenced from an event kept until the end of the
query, resulting in a use-after-free bug.
Fixed by copying the bitmap into the AfterTriggerEvents memory context,
which is the one where we keep the trigger events. There's only one
place that needs to do the copy, but the memory context may not exist
yet. Doing that in a separate function seems more readable.
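A minimal sketch of the kind of statement that exercised the bug (names
are illustrative; actually observing the use-after-free depends on when
the freed memory gets reused):

    CREATE TABLE t (a int, b text);
    INSERT INTO t SELECT i, 'x' FROM generate_series(1, 1000) i;
    CREATE FUNCTION t_noop() RETURNS trigger
      LANGUAGE plpgsql AS $$BEGIN RETURN NULL; END;$$;
    CREATE TRIGGER t_after_update AFTER UPDATE ON t
      FOR EACH ROW EXECUTE FUNCTION t_noop();
    -- Each row's modifiedCols bitmap lived in the per-row context, but
    -- the queued AFTER trigger events referenced it until end of query.
    UPDATE t SET b = 'y';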
Report by Alexander Pyhalov, fix by me. Backpatch to 13, where the
bitmap was added to the event by commit 71d60e2aa0.
Reported-by: Alexander Pyhalov
Backpatch-through: 13
Discussion: https://postgr.es/m/acddb17c89b0d6cb940eaeda18c08bbe@postgrespro.ru
---
Incremental Sort had a couple of issues, resulting in leaking memory
during rescans, possibly triggering OOM. The code had two related
flaws:
1. During rescans, the sort states were reset but then also set to NULL
(despite the comment saying otherwise). ExecIncrementalSort then
sees NULL and initializes a new sort state, leaking the memory used
by the old one.
2. Initializing the sort state also automatically rebuilt the info about
presorted keys, leaking the already initialized info. presorted_keys
was also unnecessarily reset to NULL.
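A sketch of a query shape that rescans an Incremental Sort node
(illustrative; whether the planner actually picks Incremental Sort here
depends on costing, so check with EXPLAIN):

    CREATE TABLE s (a int, b int);
    CREATE INDEX s_a_idx ON s (a);
    INSERT INTO s SELECT i % 100, i FROM generate_series(1, 10000) i;
    ANALYZE s;
    -- The correlated subquery is rescanned once per outer row; each
    -- rescan used to leak the previous sort state.
    SELECT g.x, q.a, q.b
    FROM generate_series(1, 50) AS g(x),
         LATERAL (SELECT a, b FROM s WHERE a >= g.x
                  ORDER BY a, b LIMIT 5) AS q;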
Patch by James Coleman, based on patches by Laurenz Albe and Tom Lane.
Backpatch to 13, where Incremental Sort was introduced.
Author: James Coleman, Laurenz Albe, Tom Lane
Reported-by: Laurenz Albe, Zu-Ming Jiang
Backpatch-through: 13
Discussion: https://postgr.es/m/b2bd02dff61af15e3526293e2771f874cf2a3be7.camel%40cybertec.at
Discussion: https://postgr.es/m/db03c582-086d-e7cd-d4a1-3bc722f81765%40inf.ethz.ch
---
The logic that introduced partitioned indexes missed a few things when
invalidating a partitioned index at creation time, even though the code
is written to handle recursion:
1) If created from scratch because a mapping index could not be found,
the new index created could be itself invalid, if for example it was a
partitioned index with one of its leaves invalid.
2) A CCI was missing when indisvalid is set for a parent index, leading
to inconsistent trees when recursing across more than one level for a
partitioned index creation if an invalidation of the parent was
required.
This could lead to the creation of a partition index tree where some of
the partitioned indexes are marked as invalid, but some of the parents
are marked valid, which is not something that should happen (as
validatePartitionedIndex() defines, indisvalid is switched to true for a
partitioned index iff all its partitions are themselves valid).
This patch makes sure that indisvalid is set to false on a partitioned
index if at least one of its partition is invalid. The flag is set to
true if *all* its partitions are valid.
The regression test added in this commit abuses a failed concurrent
index creation, which leaves behind an invalid index that matches an
index created on its partitioned table afterwards.
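A sketch of the scenario, along the lines of what the regression test
does (names are illustrative; a failed CREATE INDEX CONCURRENTLY is
just one way to leave an invalid index behind):

    CREATE TABLE p (a int) PARTITION BY RANGE (a);
    CREATE TABLE p1 PARTITION OF p FOR VALUES FROM (0) TO (10)
      PARTITION BY RANGE (a);
    CREATE TABLE p11 PARTITION OF p1 FOR VALUES FROM (0) TO (10);
    INSERT INTO p11 VALUES (1), (1);
    -- Fails on the duplicates, leaving an invalid index on p11:
    CREATE UNIQUE INDEX CONCURRENTLY p11_a_idx ON p11 (a);
    -- Recursing from the top could previously mark parent indexes as
    -- valid on top of the still-invalid leaf index:
    CREATE UNIQUE INDEX p_a_idx ON p (a);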
Reported-by: Alexander Lakhin
Reviewed-by: Alexander Lakhin
Discussion: https://postgr.es/m/14987634-43c0-0cb3-e075-94d423607e08@gmail.com
Backpatch-through: 11
---
If the given composite datum is toasted out-of-line,
DatumGetHeapTupleHeader will perform database accesses to detoast it.
That can invalidate the result of get_cached_rowtype, as documented
(perhaps not plainly enough) in that function's API spec; which leads
to strange errors or crashes when we try to use the TupleDesc to read
the tuple. In short then, trying to update a field of a composite
column could fail intermittently if the overall column value is wide
enough to require toasting.
We can fix the bug at no cost by just changing the order of
operations, since we don't need the TupleDesc until after detoasting.
(Other callers of get_cached_rowtype appear to get this right already,
so there's only one bug.)
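A sketch of the failure shape (names are illustrative; as noted below,
the misbehavior is only reliably visible with debug_discard_caches
enabled):

    CREATE TYPE twofield AS (f1 text, f2 int);
    CREATE TABLE t (c twofield);
    -- Wide enough that the composite datum is toasted out-of-line:
    INSERT INTO t
      SELECT ROW(string_agg(md5(i::text), ''), 1)::twofield
      FROM generate_series(1, 100) i;
    -- Updating one field detoasts the datum; the cached TupleDesc must
    -- be fetched after that, not before:
    UPDATE t SET c.f2 = 2;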
Note that the added regression test case reveals this bug reliably
only with debug_discard_caches/CLOBBER_CACHE_ALWAYS.
Per bug #17994 from Alexander Lakhin. Sadly, this patch does not fix
the missing-values issue revealed in the bug discussion; we'll need
some more work to cover that.
Discussion: https://postgr.es/m/17994-5c7100b51b4790e9@postgresql.org
---
A portion of ALTER TABLE .. ATTACH PARTITION is to ensure that the
partition being attached to the partitioned table has a correct set of
indexes, so that there is a consistent index mapping between the
partitioned table and its soon-to-be partition. However, as introduced
in 8b08f7d, the logic could choose an invalid index as a match, which
is something that can happen when dealing with more than two levels of
partitioning, like attaching a partitioned table (that has partitions,
with an index created by CREATE INDEX ON ONLY) to another partitioned
table.
A partitioned index with indisvalid set to false is equivalent to an
incomplete partition tree, meaning that an invalid partitioned index
does not have indexes defined in all its partitions. Hence, choosing an
invalid partitioned index can create inconsistent partition index trees,
where the parent being attached to is valid, but its partition may be
invalid.
In the report from Alexander Lakhin, this showed up as an assertion
failure when validating an index. Without assertions enabled, the
partition index tree would be actually broken, as indisvalid should
be switched to true for a partitioned index once all its partitions are
themselves valid. With two levels of partitioning, the top partitioned
table used a valid index and was able to link to an invalid index stored
on its partition, itself a partitioned table.
I have studied a few options here (like the possibility of switching
indisvalid to false for the parent), but came to the conclusion that
we'd better rely on a simple rule: invalid indexes should never be
chosen, so that the attached partition uses and creates indexes that
the parent expects. Some regression tests are added to provide
coverage. Note that the existing coverage is not impacted.
This problem has existed since partitioned indexes were introduced, so
backpatch all the way down to v11.
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/14987634-43c0-0cb3-e075-94d423607e08@gmail.com
Backpatch-through: 11
---
TParserGet() recurses for some token types, meaning it's possible
to drive it to stack overflow. Since this is a minority behavior,
I chose to add the check_stack_depth() call to the two places that
recurse rather than doing it during every single call.
While at it, add CHECK_FOR_INTERRUPTS(), because this can run
unpleasantly long for long inputs.
Per bug #17995 from Zuming Jiang. This is old, so back-patch
to all supported branches.
Discussion: https://postgr.es/m/17995-9f20ff3e6389db4c@postgresql.org
---
Avoid "right sibling %u of block %u is not next child" errors when
vacuuming a corrupt nbtree index. Just LOG the issue and press on.
That way VACUUM will have a decent chance of finishing off all required
processing for the index (and for the table as a whole).
This is similar to recent work from commit 5abff197, as well as work
from commit 5b861baa (later backpatched as commit 43e409ce), which
taught nbtree VACUUM to keep going when its "re-find" check fails. The
hardening added by this commit takes place directly after the "re-find"
check, right before the critical section for the first stage of page
deletion.
Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-Wz=dayg0vjs4+er84TS9ami=csdzjpuiCGbEw=idhwqhzQ@mail.gmail.com
Backpatch: 11- (all supported versions).
---
exec_parse_message() wants to create a cached plan in all cases,
including for empty input. The empty-input path does not have
a test for being in an aborted transaction, making it possible
that plancache.c will fail due to trying to do database lookups
even though there's no real work to do.
One solution would be to throw an aborted-transaction error in
this path too, but it's not entirely clear whether the lack of
such an error was intentional or whether some clients might be
relying on non-error behavior. Instead, let's hack plancache.c
so that it treats empty statements with the same logic it
already had for transaction control commands, ensuring that it
can soldier through even in an already-aborted transaction.
Per bug #17983 from Alexander Lakhin. Back-patch to all
supported branches.
Discussion: https://postgr.es/m/17983-da4569fcb878672e@postgresql.org
---
The existing errhint message and docs were missing the fact that we can't
disassociate from the slot unless the subscription is disabled.
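The sequence that the hint now spells out (subscription name is
illustrative):

    ALTER SUBSCRIPTION mysub DISABLE;
    ALTER SUBSCRIPTION mysub SET (slot_name = NONE);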
Author: Robert Sjöblom, Peter Smith
Reviewed-by: Peter Eisentraut, Amit Kapila
Backpatch-through: 11
Discussion: https://postgr.es/m/807bdf85-61ea-88e2-5712-6d9fcd4eabff@fortnox.se
---
If the inner-side expressions contain PARAM_EXEC Params, we must
re-hash whenever the values of those Params change. The executor
mechanism for that exists already, but we failed to invoke it because
finalize_plan() neglected to search the Hash.hashkeys field for
Params. This allowed a previous scan's hash table to be re-used
when it should not be, leading to rows missing from the join's output.
(I believe incorrectly-included join rows are impossible however,
since checking the real hashclauses would reject false matches.)
This bug is very ancient, dating probably to d24d75ff1 of 7.4.
Sadly, this simple fix depends on the plan representational changes
made by 2abd7ae9b, so it will only work back to v12. I thought
about trying to make some kind of hack for v11, but I'm leery
of putting code significantly different from what is used in the
newer branches into a nearly-EOL branch. Seeing that the bug
escaped detection for a full twenty years, problematic cases
must be rare; so I don't feel too awful about leaving v11 as-is.
Per bug #17985 from Zuming Jiang. Back-patch to v12.
Discussion: https://postgr.es/m/17985-748b66607acd432e@postgresql.org
---
This is a follow-up of f663b00, which was committed to v13 and v14,
tweaking the TAP test for two-phase transactions so that it provides
coverage for the bug that has been fixed. This change is done in its
own commit for clarity, as v15 and HEAD did not show the problematic
behavior, yet they still lacked coverage for it.
While at it, this adds a comment about the dependency between the last
partial segment rename and RecoverPreparedTransactions() at the end of
recovery, as that can be easy to miss.
Author: Michael Paquier
Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/743b9b45a2d4013bd90b6a5cba8d6faeb717ee34.camel@cybertec.at
Backpatch-through: 13
---
When archiving is enabled, a promotion request would fail with the
following error when some 2PC transaction needs to be recovered from
WAL, preventing the promotion to complete:
FATAL: requested WAL segment pg_wal/000000010000000000000001 has already been removed
The origin of the problem is that the last partial segment of the old
timeline is renamed before recovering the 2PC data via
RecoverPreparedTransactions() at the end of recovery, causing the FATAL
because the wanted segment has been renamed with a .partial suffix.
This commit reorders the end-of-recovery actions so that the execution
of recovery_end_command, the cleanup of the segments of the old
timeline (RemoveNonParentXlogFiles) and the last partial segment rename
are done after the 2PC transaction data is recovered with
RecoverPreparedTransactions(). This makes the order of these
end-of-recovery actions more consistent with v15 and newer, with the
exception of the end-of-recovery checkpoint, which still needs to
happen before all the actions reordered here in v13 and v14, contrary
to what v15 and newer do.
v15 and newer versions have "fixed" this problem somewhat accidentally
with 811051c, where the end-of-recovery actions got reordered. In this
case, the recovery of 2PC transactions happens before the renaming of
the last partial segment of the old timeline.
v13 and v14 are the versions that can easily see this problem, per the
refactoring of 38a95731 where XLogReaderState is reset in
XLogBeginRead() before reading the 2PC transaction data. v11 and v12
could also see this problem, but may end up reading the 2PC data from
some of the WAL buffers instead. Perhaps something could be done for
these two branches, but I am not really excited about doing so, per the
lack of complaints and the fact that v11 is going to be EOL'd soon
(there is always a risk of breaking something).
Note that the TAP test 009_twophase.pl is able to exhibit the issue if
it enables archiving on the primary node, which does not impact the test
coverage as restore_command would remain unused. This is something that
should be changed on v15 and HEAD as well, so this will be changed in a
separate commit for clarity.
Author: Julian Markwort
Reviewed-by: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/743b9b45a2d4013bd90b6a5cba8d6faeb717ee34.camel@cybertec.at
Backpatch-through: 13
---
Here we adjust relation_has_unique_index_for() so that it no longer
makes use of partial unique indexes as uniqueness proofs. It is
incorrect to use these because the predicate proofs used by
check_index_predicates() to set predOK rely not only on
baserestrictinfo quals, but also on quals from join conditions. For
relation_has_unique_index_for()'s case, we need to know the relation is
unique for a given set of columns before any joins are evaluated, so if
predOK was only set to true due to some join qual, then it's unsafe to
use such indexes in relation_has_unique_index_for(). The final plan
may not even make use of that index, which could result in reading
tuples that are not as unique as the planner previously expected them
to be.
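A sketch of the kind of setup involved (names are illustrative; the
unsafe case is when the index predicate is only implied with the help
of a join qual):

    CREATE TABLE t (a int, b int);
    CREATE TABLE s (a int, b int);
    CREATE UNIQUE INDEX t_a_uniq ON t (a) WHERE b = 1;
    -- t(a) is only unique among rows with b = 1.  If that predicate is
    -- only implied through the join clause t.b = s.b, then t must not
    -- be treated as unique on (a) for this join:
    SELECT * FROM s JOIN t ON s.a = t.a AND s.b = t.b;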
Bug: #17975
Reported-by: Tor Erik Linnerud
Backpatch-through: 11, all supported versions
Discussion: https://postgr.es/m/17975-98a90c156f25c952%40postgresql.org
---
Back-patch down to 11.
Author: Sho Kato <kato-sho@fujitsu.com>
Discussion: https://postgr.es/m/TYCPR01MB68499042A33BC32241193AAF9F5BA%40TYCPR01MB6849.jpnprd01.prod.outlook.com
---
rewriteRuleAction neglected to check for SubLink nodes in the
securityQuals of range table entries. This could lead to failing
to convert such a SubLink to a SubPlan, resulting in assertion
crashes or weird errors later in planning.
In passing, fix some poor coding in rewriteTargetView:
we should not pass the source parsetree's hasSubLinks
field to ReplaceVarsFromTargetList's outer_hasSubLinks.
ReplaceVarsFromTargetList knows enough to ignore that
when a Query node is passed, but it's still confusing
and bad precedent: if we did try to update that flag
we'd be updating a stale copy of the parsetree.
Per bug #17972 from Alexander Lakhin. This has been broken since
we added RangeTblEntry.securityQuals (although the presented test
case only fails back to 215b43cdc), so back-patch all the way.
Discussion: https://postgr.es/m/17972-f422c094237847d0@postgresql.org
---
Commit 927d9abb6 purported to make datetime() accept any string
that could be output for a datetime value by to_jsonb(). But it
overlooked the possibility of fractional seconds being present,
so that cases as simple as to_jsonb(now()) would defeat it.
Fix by adding formats that include ".US" to the list in
executeDateTimeMethod(). (Note that while this is nominally
microseconds, it'll do the right thing for fractions with
fewer than six digits.)
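For instance, a call of this sort, whose input carries fractional
seconds, previously failed and is now accepted:

    SELECT jsonb_path_query(to_jsonb(now()), '$.datetime()');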
In passing, re-order the list to restore the datatype ordering
specified in its comment. The violation accidentally did not
break anything; but the next edit might be less lucky, so add
more comments.
Per report from Tim Field. Back-patch to v13 where datetime()
was added, like the previous patch.
Discussion: https://postgr.es/m/014A028B-5CE6-4FDF-AC24-426CA6FC9CEE@mohiohio.com
---
This commit fixes an oversight introduced in 10520f4, which added
delayChkptEnd to PGPROC to avoid ABI breakage on stable branches: two
spots missed initializing this variable (delayChkpt was switched back
from int to bool, and as it was initialized to 0 there were no
consequences for it):
- InitProcess(), where the per-process data structures of a backend are
initialized.
- InitAuxiliaryProcess(), same but for auxiliary processes.
An interruption during relation truncation while this flag is set could
cause an assertion failure when a follow-up process does a relation
truncation while reusing the same PGPROC entry. A second effect could
be incorrect checkpoint end delays.
While at it, add an assertion in ProcArrayClearTransaction() for
delayChkptEnd, to be in line with 5788e25. This is needed only for v14.
This issue affects v11~v14, but not v15~, as we use a single field
called delayChkptFlags to delay checkpoints there.
Author: suyu.cmj (mengjuan.cmj@alibaba-inc.com)
Reviewed-by: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/9c3d2a49-db5f-43cb-840b-d58f9a684295.mengjuan.cmj@alibaba-inc.com
Backpatch-through: 11
---
Commit fc22b6623b (generated columns) replaced ExecGetUpdatedCols() with
ExecGetAllUpdatedCols() in a couple places handling UPDATE (triggers and
lock mode). However, while ExecGetUpdatedCols() only did
exec_rt_fetch(), ExecGetAllUpdatedCols() also allocates memory through
bms_union() without paying attention to the memory context; it happened
to use the long-lived ExecutorState, leaking the memory until the end
of the query.
The amount of leaked memory is proportional to the number of (updated)
attributes, types of UPDATE triggers, and the number of processed rows
(which for UPDATE ... FROM ... may be much higher than updated rows).
Fixed by switching to the per-tuple context in GetAllUpdatedColumns().
This is fine for all in-core callers, but external callers may need to
copy the result. But we're not aware of any such callers.
Note the issue was introduced by fc22b6623b, but the macros were later
renamed by f50e888990.
Backpatch to 12, where the issue was introduced.
Reported-by: Tomas Vondra
Reviewed-by: Andres Freund, Tom Lane, Jakub Wartak
Backpatch-through: 12
Discussion: https://postgr.es/m/222a3442-7f7d-246c-ed9b-a76209d19239@enterprisedb.com
---
In reality, recordXtime will always be set by the getRecordTimestamp
call, but the compiler doesn't necessarily see that.
Back-patch to all supported versions.
Author: Tristan Partin
Discussion: https://www.postgresql.org/message-id/CT5MN8E11U0M.1NYNCHXYUHY41@gonk
---
Avoid "right sibling's left-link doesn't match" errors when vacuuming a
corrupt nbtree index. Just LOG the issue and press on. That way VACUUM
will have a decent chance of finishing off all required processing for
the index (and for the table as a whole).
This error was seen in the field from time to time (it's more than a
theoretical risk), so giving VACUUM the ability to press on like this
has real value. Nothing short of a REINDEX is expected to fix the
underlying index corruption, so giving up (by throwing an error) risks
making a bad situation far worse. Anything that blocks forward progress
by VACUUM like this might go unnoticed for a long time. This could
eventually lead to a wraparound/xidStopLimit outage.
Note that _bt_unlink_halfdead_page() has always been able to bail on
page deletion when the target page's left sibling page was in an
inconsistent state. It now does the same thing (returns false to back
out of the second phase of deletion) when it notices sibling link
corruption in the target page's right sibling page.
This is similar to the work from commit 5b861baa (later backpatched as
commit 43e409ce), which taught nbtree to press on with vacuuming an
index when page deletion fails to "re-find" a downlink in the target
page's parent page. The "re-find" check seems to make VACUUM bail on
page deletion more often in practice, but there is no reason to take any
chances here.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/CAH2-Wzko2q2kP1+UvgJyP9g0mF4hopK0NtQZcxwvMv9_ytGhkQ@mail.gmail.com
Backpatch: 11- (all supported versions).
---
BRIN indexes did not properly distinguish between summaries for empty
(no rows) and all-NULL ranges, treating them as essentially the same
thing. Summaries were initialized with allnulls=true, and opclasses
simply reset allnulls to false when processing the first non-NULL value.
This however produces incorrect results if the range starts with a NULL
value (or a sequence of NULL values), in which case we forget the range
contains NULL values when adding the first non-NULL value.
This happens because the allnulls flag is used for two separate
purposes - to mark empty ranges (not representing any rows yet) and
ranges containing only NULL values.
Opclasses don't know which of these cases it is, and so don't know
whether to set hasnulls=true. Setting the flag in both cases would make
it correct, but it would also make BRIN indexes useless for queries with
IS NULL clauses. All ranges start empty (and thus allnulls=true), so all
ranges would end up with either allnulls=true or hasnulls=true.
The severity of the issue is somewhat reduced by the fact that it only
happens when adding values to an existing summary with allnulls=true.
This can happen e.g. for small tables (because a summary for the first
range exists for all BRIN indexes), or for tables with a large fraction
of NULL values in the indexed columns.
Bulk summarization (e.g. during CREATE INDEX or automatic summarization)
that processes all values at once is not affected by this issue. In this
case the flags were updated in a slightly different way, not forgetting
the NULL values.
To identify empty ranges we use a new flag, stored in an unused bit in
the BRIN tuple header so the on-disk format remains the same. A matching
flag is added to BrinMemTuple, into a 3B gap after bt_placeholder.
That means there's no risk of ABI breakage, although we don't actually
pass the BrinMemTuple to any public API.
We could also skip storing index tuples for empty summaries, but then
we'd have to always process such ranges - even if there are no rows in
large parts of the table (e.g. after a bulk DELETE), it would still
require reading the pages etc. So we store them, but ignore them when
building the bitmap.
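A sketch of the problematic sequence (illustrative; it relies on the
last INSERT updating an existing allnulls=true summary rather than
going through bulk summarization):

    CREATE TABLE brin_t (a int);
    INSERT INTO brin_t VALUES (NULL);
    CREATE INDEX brin_t_idx ON brin_t USING brin (a);  -- allnulls=true
    INSERT INTO brin_t VALUES (1);  -- resets allnulls, forgets the NULL
    SET enable_seqscan = off;
    SELECT * FROM brin_t WHERE a IS NULL;  -- could miss the NULL row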
Backpatch to 11. The issue exists since BRIN indexes were introduced in
9.5, but older releases are already EOL.
Backpatch-through: 11
Reviewed-by: Justin Pryzby, Matthias van de Meent, Alvaro Herrera
Discussion: https://postgr.es/m/402430e4-7d9d-6cf1-09ef-464d80afff3b@enterprisedb.com
---
When merging BRIN summaries, union_tuples() did not correctly update
the target hasnulls/allnulls flags. When merging an all-NULL summary
into a summary without any NULL values, the result had both flags set
to false (instead of having hasnulls=true).
This happened because the code only considered the hasnulls flags,
ignoring the possibility the source summary has allnulls=true.
Discovered while investigating issues with handling empty BRIN ranges
and handling of NULL values, but it's a separate problem (has nothing to
do with empty ranges).
Fixed by considering both flags on the source summary, and updating the
hasnulls flag on the target summary.
Backpatch to 11. The bug exists since 9.5 (where BRIN indexes were
introduced), but those releases are EOL already.
Discussion: https://postgr.es/m/9d993d0d-e431-2196-9ccc-0554d0e60154%40enterprisedb.com
---
If an SRF in the FROM clause references a table having row-level
security policies, and we inline that SRF into the calling query,
we neglected to mark the plan as potentially dependent on which
role is executing it. This could lead to later executions in the
same session returning or hiding rows that should have been hidden
or returned instead.
Our thanks to Wolfgang Walther for reporting this problem.
Stephen Frost and Tom Lane
Security: CVE-2023-2455
---
The two methods don't cooperate, so set_config_option("search_path",
...) has been ineffective under non-empty overrideStack. This defect
enabled an attacker having database-level CREATE privilege to execute
arbitrary code as the bootstrap superuser. While that particular attack
requires v13+ for the trusted extension attribute, other attacks are
feasible in all supported versions.
Standardize on the combination of NewGUCNestLevel() and
set_config_option("search_path", ...). It is newer than
PushOverrideSearchPath(), more-prevalent, and has no known
disadvantages. The "override" mechanism remains for now, for
compatibility with out-of-tree code. Users should update such code,
which likely suffers from the same sort of vulnerability closed here.
Back-patch to v11 (all supported versions).
Reported by Alexander Lakhin.
Security: CVE-2023-2454
---
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 5880bed52cbf5fb44921c4a42b23e3251575dcdb
---
This wait event has been documented as "CommitTsBuffer" since its
introduction, but the code named it "CommitTSBuffer". This commit fixes
the code to follow the term documented, which is also more consistent
with the naming of the other wait events used for commit timestamps.
Introduced by 5da1493.
Author: Alexander Lakhin
Discussion: https://postgr.es/m/e8c38840-596a-83d6-bd8d-cebc51111572@gmail.com
Backpatch-through: 13
---
These functions incautiously fetched the array's first lower bound
even when the array is zero-dimensional, thus fetching the word
after the allocated array space. While almost always harmless,
with very bad luck this could result in SIGSEGV. Fix by adding
an early exit for empty input.
Per bug #17920 from Alexander Lakhin.
Discussion: https://postgr.es/m/17920-f7c228c627b6d02e%40postgresql.org
---
CREATE SCHEMA AUTHORIZATION with appended schema elements can lead to
crashes when comparing the schema name of the query with the schemas
used in the qualification of some clauses in the elements' queries.
The origin of the problem is that the transformation routine for the
elements listed in a CREATE SCHEMA query uses as new, expected, schema
name the one listed in CreateSchemaStmt itself. However, depending on
the query, CreateSchemaStmt.schemaname may be NULL, being computed
instead from the role specification of the query given by the
AUTHORIZATION clause, that could be either:
- A user name string, with the new schema name being set to the same
value as the role given.
- Guessed from CURRENT_USER, SESSION_USER or CURRENT_ROLE, with a new
schema name computed from the security context where CREATE SCHEMA is
running.
Regression tests are added for CREATE SCHEMA with some appended elements
(some of them with schema qualifications), covering also some role
specification patterns.
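A sketch of a query of the sort now covered (role and table names are
illustrative):

    CREATE ROLE regress_owner;
    -- CreateSchemaStmt.schemaname is NULL here; the schema name comes
    -- from the role, and the element's schema qualification has to be
    -- compared against that:
    CREATE SCHEMA AUTHORIZATION regress_owner
      CREATE TABLE regress_owner.tab (a int);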
While at it, this simplifies the context structure used during the
transformation of the elements listed in a CREATE SCHEMA query by
removing the fields for the role specification and the role type. They
were not used, and for the role specification this could be confusing,
as the schema name may be extracted from it at the beginning of
CreateSchemaCommand().
This issue has existed for a long time, so backpatch down to all
supported versions.
Reported-by: Song Hongyu
Author: Michael Paquier
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/17909-f65c12dfc5f0451d@postgresql.org
Backpatch-through: 11
---
The call to XLogGetReplicationSlotMinimumLSN() might return a
greater LSN than the one given to the function. Subsequent segment
number calculations might then underflow, which could result in
unexpected behavior when removing or recycling WAL files. This was
introduced with max_slot_wal_keep_size in c655077639. To fix, skip
the block of code for replication slots if the LSN is greater.
Reported-by: Xu Xingwang
Author: Kyotaro Horiguchi
Reviewed-by: Junwang Zhao
Discussion: https://postgr.es/m/17903-4288d439dee856c6%40postgresql.org
Backpatch-through: 13
---
Commit 1021bd6a89 excluded autovacuum workers from cost-limit balance
calculations when per-relation options were set. The code checks for
limit and cost_delay being greater than zero, but since cost_delay can
be set to -1 the test needs to check for greater than or equal to zero.
Backpatch to all supported branches since 1021bd6a89 was backpatched
all the way at the time.
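For instance, per-relation options of this shape (illustrative) set
cost_delay to -1, which the balancing check now handles:

    ALTER TABLE big_tbl SET (autovacuum_vacuum_cost_limit = 1000,
                             autovacuum_vacuum_cost_delay = -1);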
Author: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CAD21AoBS7o6Ljt_vfqPQPf67AhzKu3fR0iqk8B=vVYczMugKMQ@mail.gmail.com
Backpatch-through: v11 (all supported branches)
---
We need to call them only when validate == true.
Backpatch to 13, where opclass options were introduced.
Reported-by: Tom Lane
Discussion: https://postgr.es/m/2656633.1681831542%40sss.pgh.pa.us
Reviewed-by: Tom Lane, Pavel Borisov
Backpatch-through: 13
---
For regex escape sequences, just test directly for the relevant ASCII
characters rather than using locale-sensitive character
classification.
This fixes an assertion failure when a locale considers a non-ASCII
character, such as "൧", to be a digit.
Reported-by: Richard Guo
Discussion: https://postgr.es/m/CAMbWs49Q6UoKGeT8pBkMtJGJd+16CBFZaaWUk9Du+2ERE5g_YA@mail.gmail.com
Backpatch-through: 11
---
If the last few pages in the specified range are empty (all zero),
then log_newpage_range() could try to emit an empty WAL record
containing no FPIs. This at least upsets an Assert in
ReserveXLogInsertLocation, and might perhaps have bad real-world
consequences in non-assert builds.
This has been broken since log_newpage_range() was introduced,
but the case was hard if not impossible to hit before commit 3d6a98457
decided it was okay to leave VM and FSM pages intentionally zero.
Nonetheless, it seems prudent to back-patch. log_newpage_range()
was added in v12 but later back-patched, so this affects all
supported branches.
Matthias van de Meent, per report from Justin Pryzby
Discussion: https://postgr.es/m/ZD1daibg4RF50IOj@telsasoft.com
---
Commit 3e310d837 taught isAssignmentIndirectionExpr() to look through
CoerceToDomain nodes. That's not sufficient, because since commit
04fe805a1 it's been possible for the planner to simplify
CoerceToDomain to RelabelType when the domain has no constraints
to enforce. So we need to look through RelabelType too.
Per bug #17897 from Alexander Lakhin. Although 3e310d837 was
back-patched to v11, it seems sufficient to apply this change
to v12 and later, since 04fe805a1 came in in v12.
Dmitry Dolgov
Discussion: https://postgr.es/m/17897-4216c546c3874044@postgresql.org
---
The partition pruning logic assumed that "b IS NOT true" was exactly the
same as "b IS FALSE". This is not the case when considering NULL values.
Fix this so we correctly include any partition which could hold NULL
values for the NOT case.
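A minimal sketch of the first issue (names are illustrative):

    CREATE TABLE pt (b bool) PARTITION BY LIST (b);
    CREATE TABLE pt_true  PARTITION OF pt FOR VALUES IN (true);
    CREATE TABLE pt_false PARTITION OF pt FOR VALUES IN (false);
    CREATE TABLE pt_null  PARTITION OF pt FOR VALUES IN (NULL);
    INSERT INTO pt VALUES (true), (false), (NULL);
    -- NULL satisfies IS NOT TRUE, so both pt_false and pt_null must be
    -- scanned; pruning used to drop pt_null, losing the NULL row:
    SELECT * FROM pt WHERE b IS NOT TRUE;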
Additionally, this fixes a bug in the partition pruning code which handles
partitioned tables partitioned like ((NOT boolcol)). This is a seemingly
unlikely schema design, and it was untested and also broken.
Here we add tests for the ((NOT boolcol)) case and insert some actual data
into those tables and verify we do get the correct rows back when running
queries. I've also adjusted the existing boolpart tests to include some
data and verify we get the correct results too.
Both the bugs being fixed here could lead to incorrect query results with
fewer rows being returned than expected. No additional rows could have
been returned accidentally.
In passing, remove a needless ternary expression. It's simpler just to
pass !is_not_clause to makeBoolConst(). It makes sense to do this so the
code is consistent with the bug fix in the "else if" condition just below.
David Kimura did submit a patch to fix the first of the issues here, but
that's not what's being committed here.
Reported-by: David Kimura
Reviewed-by: Richard Guo, David Kimura
Discussion: https://postgr.es/m/CAHnPFjQ5qxs6J_p+g8=ww7GQvfn71_JE+Tygj0S7RdRci1uwPw@mail.gmail.com
Backpatch-through: 11, all supported versions
---
Our policy since commit ab77a5a45 has been that a plan node having
any initplans is automatically not parallel-safe. (This could be
relaxed, but not today.) clean_up_removed_plan_level neglected
this, and could attach initplans to a parallel-safe child plan
node without clearing the plan's parallel-safe flag. That could
lead to "subplan was not initialized" errors at runtime, in case
an initplan referenced another one and only the referencing one
got transmitted to parallel workers.
The fix in clean_up_removed_plan_level is trivial enough.
materialize_finished_plan also moves initplans from one node
to another, but it's okay because it already copies the source
node's parallel_safe flag. The other place that does this kind
of thing is standard_planner's hack to inject a top-level Gather
when debug_parallel_query is active. But that's actually dead
code given that we're correctly enforcing the "initplans aren't
parallel safe" rule, so just replace it with an Assert that
there are no initplans.
Also improve some related comments.
Normally we'd add a regression test case for this sort of bug.
The mistake itself is already reached by existing tests, but there
is accidentally no visible problem. The only known test case that
creates an actual failure seems too indirect and fragile to justify
keeping it as a regression test (not least because it fails to fail
in v11, though the bug is clearly present there too).
Per report from Justin Pryzby. Back-patch to all supported branches.
Discussion: https://postgr.es/m/ZDVt6MaNWkRDO1LQ@telsasoft.com
---
tsquery's GETQUERY() macro is only safe to apply to a tsquery
that is known non-empty; otherwise it gives a pointer to garbage.
Before commit 5a617d75d, ts_headline() avoided this pitfall, but
only in a very indirect, nonobvious way. (hlCover could not reach
its TS_execute call, because if the query contains no lexemes
then hlFirstIndex would surely return -1.) After that commit,
it fell into the trap, resulting in weird errors such as
"unrecognized operator" and/or valgrind complaints. In HEAD,
fix this by not calling TS_execute_locations() at all for an
empty query. In the back branches, add a defensive check to
hlCover() --- that's not fixing any live bug, but I judge the
code a bit too fragile as-is.
Also, both mark_hl_fragments() and mark_hl_words() were careless
about the possibility of empty search text: in the cases where
no match has been found, they'd end up telling mark_fragment() to
mark from word indexes 0 to 0 inclusive, even when there is no
word 0. This is harmless since we over-allocated the prs->words
array, but it does annoy valgrind. Fix so that the end index is -1
and thus mark_fragment() will do nothing in such cases.
Bottom line is that this fixes a live bug in HEAD, but in the
back branches it's only getting rid of a valgrind nitpick.
Back-patch anyway.
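A sketch of the empty-query case at stake (in the back branches this is
only a valgrind-level issue):

    -- An empty tsquery used to send hlCover() to GETQUERY() on a query
    -- containing no items:
    SELECT ts_headline('english', 'some search text', ''::tsquery);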
Per report from Alexander Lakhin.
Discussion: https://postgr.es/m/c27f642d-020b-01ff-ae61-086af287c4fd@gmail.com
---
In v13 and v14, the ENABLE/DISABLE TRIGGER USER variant malfunctioned
on cloned triggers, failing to find the clones because it thought they
were system triggers. Other variants of ENABLE/DISABLE TRIGGER would
improperly apply a superuserness check. Fix by adjusting the is-it-
a-system-trigger check to match reality in those branches. (As far
as I can find, this is the only place that got it wrong.)
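A sketch of the v13/v14 failure shape (names are illustrative):

    CREATE TABLE p (a int) PARTITION BY LIST (a);
    CREATE TABLE p1 PARTITION OF p FOR VALUES IN (1);
    CREATE FUNCTION p_noop() RETURNS trigger
      LANGUAGE plpgsql AS $$BEGIN RETURN NULL; END;$$;
    CREATE TRIGGER p_trig AFTER UPDATE ON p
      FOR EACH ROW EXECUTE FUNCTION p_noop();
    -- The trigger is cloned onto p1; this variant used to miss the
    -- clone, mistaking it for a system trigger:
    ALTER TABLE p1 DISABLE TRIGGER USER;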
There's no such bug in v15/HEAD, because we revised the catalog
representation of system triggers to be what this code was expecting.
However, add the test case to these branches anyway, because this area
is visibly pretty fragile. Also remove an obsoleted comment.
The recent v15/HEAD commit 6949b921d fixed a nearby bug. I now see
that my commit message for that was inaccurate: the behavior of
recursing to clone triggers is older than v15, but it didn't apply
to the case in v13/v14 because in those branches parent partitioned
tables have no pg_trigger entries for foreign-key triggers. But add
the test case from that commit to v13/v14, just to show what is
happening there.
Per bug #17886 from DzmitryH.
Discussion: https://postgr.es/m/17886-5406d5d828aa4aa3@postgresql.org
---
Up through v11 it was sensible to use the "oid" system column as
a foreign key column, but since that was removed there's no visible
usefulness in making any of the remaining system columns a foreign
key. Moreover, since the TupleTableSlot rewrites in v12, such cases
actively fail because of implicit assumptions that only user columns
appear in foreign keys. The lack of complaints about that seems
like good evidence that no one is trying to do it. Hence, rather
than trying to repair those assumptions (of which there are at least
two, maybe more), let's just forbid the case up front.
Per this patch, a system column in either the referenced or
referencing side of a foreign key will draw this error; however,
putting one in the referenced side would have failed later anyway,
since we don't allow unique indexes to be made on system columns.
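A sketch of a case that is now rejected up front (illustrative):

    CREATE TABLE pk (a oid PRIMARY KEY);
    -- A system column on the referencing side now draws an error
    -- instead of failing in obscure ways later:
    CREATE TABLE fk (x int, FOREIGN KEY (tableoid) REFERENCES pk (a));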
Per bug #17877 from Alexander Lakhin. Back-patch to v12; the
case still appears to work in v11, so we shouldn't break it there.
Discussion: https://postgr.es/m/17877-4bcc658e33df6de1@postgresql.org
---
The totalrows/totaldeadrows outputs were left uninitialized in cases
where we find no analyzable child tables of a partitioned table. This
could lead to setting the partitioned table's pg_class.reltuples value
to garbage. It's not clear that that would have any very bad effects
in practice, but fix it anyway because it's making valgrind unhappy.
Reported and diagnosed by Alexander Lakhin (bug #17880).
Discussion: https://postgr.es/m/17880-9282037c923d856e@postgresql.org
---
When calling generateSerialExtraStmts(), we would pass in the
constraint->options. In some cases, generateSerialExtraStmts() would
modify the referenced List to remove elements from it, but doing so is
invalid without assigning the list back to all variables that point to it.
In the particular reported problem case, the List became empty, in
which case it became NIL, but the passed-in constraint->options didn't
get to find out about that and was left pointing to free'd memory.
To fix this, just perform a list_copy() inside
generateSerialExtraStmts(). We could do the list_copy() just before we
delete from the list; however, that seems less robust. Let's make sure
the generated CreateSeqStmt gets a completely different copy of the
list, to be safe.
Bug: #17879
Reported-by: Fei Changhong
Diagnosed-by: Fei Changhong
Discussion: https://postgr.es/m/17879-b7dfb5debee58ff5@postgresql.org
Backpatch-through: 11, all supported versions
---
gistBuildCallback tried to fetch the size of an index tuple that
might have already been freed by gistProcessEmptyingQueue.
While this seems to usually be harmless in production builds,
in principle it could result in a SIGSEGV, or more likely a bogus
value for indtuplesSize leading to poor page-split decisions later
in the build.
The memory management here is confusing and could stand to be
refactored, but for the moment it seems to be enough to fetch
the tuple size sooner. AFAICT the indtuples[Size] totals aren't
used in between these places; even if they were, the updated
values shouldn't be any worse to use. So just move the
incrementing of the totals up.
It's not very clear why our valgrind-using buildfarm animals
haven't noticed this problem, because the relevant code path
does seem to be exercised according to the code coverage report.
I think the reason that we didn't fix this bug after the first
report is that I'd wanted to try to understand that better.
However, now that it's been re-discovered let's just be pragmatic
and fix it already.
Original report by Alexander Lakhin (bug #16329),
later rediscovered by Egor Chindyaskin (bug #17874).
Patch by Alexander Lakhin (commentary by Pavel Borisov and me).
Back-patch to all supported branches.
Discussion: https://postgr.es/m/16329-7a6aa9b6fa1118a1@postgresql.org
Discussion: https://postgr.es/m/17874-63ca6c7ce42d2103@postgresql.org
---
find_composite_type_dependencies() ignored indexes, which is a poor
decision because an expression index could have a stored column of
a composite (or other container) type even when the underlying table
does not. Teach it to detect such cases and error out. We have to
work a bit harder than for other relations because the pg_depend entry
won't identify the specific index column of concern, but it's not much
new code.
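A sketch of the newly detected case (names are illustrative; this
assumes the composite type's default record comparison makes the
expression indexable):

    CREATE TYPE comp AS (f1 int, f2 int);
    CREATE TABLE t (a int, b int);
    -- The index stores a column of type comp even though t has none:
    CREATE INDEX t_comp_idx ON t ((ROW(a, b)::comp));
    -- Now detected and rejected instead of being ignored:
    ALTER TYPE comp DROP ATTRIBUTE f1;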
This does not address bug #17872's original complaint that dropping
a column in such a type might lead to violations of the uniqueness
property that a unique index is supposed to ensure. That seems of
much less concern to me because it won't lead to crashes.
Per bug #17872 from Alexander Lakhin. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/17872-d0fbb799dc3fd85d@postgresql.org
---
The nested-arrays code path in ExecEvalArrayExpr() used palloc to
allocate the result array, whereas every other array-creating function
has used palloc0 since 18c0b4ecc. This mostly works, but unused bits
past the end of the nulls bitmap may end up undefined. That causes
valgrind complaints with -DWRITE_READ_PARSE_PLAN_TREES, and could
cause planner misbehavior as cited in 18c0b4ecc. There seems no very
good reason why we should strive to avoid palloc0 in just this one case,
so fix it the easy way with s/palloc/palloc0/.
While looking at that I noted that we also failed to check for overflow
of "nbytes" and "nitems" while summing the sizes of the sub-arrays,
potentially allowing a crash due to undersized output allocation.
For "nbytes", follow the policy used by other array-munging code of
checking for overflow after each addition. (As elsewhere, the last
addition of the array's overhead space doesn't need an extra check,
since palloc itself will catch a value between 1Gb and 2Gb.)
For "nitems", there's no very good reason to sum the inputs at all,
since we can perfectly well use ArrayGetNItems' result instead of
ignoring it.
Per discussion of this bug, also remove redundant zeroing of the
nulls bitmap in array_set_element and array_set_slice.
Patch by Alexander Lakhin and myself, per bug #17858 from Alexander
Lakhin; thanks also to Richard Guo. These bugs are a dozen years old,
so back-patch to all supported branches.
Discussion: https://postgr.es/m/17858-8fd287fd3663d051@postgresql.org
---
We failed to apply updates and deletes when REPLICA IDENTITY FULL was
used for a table having generated columns, because we did not ignore
generated columns while comparing the tuples from the publisher and
subscriber during the apply of updates and deletes.
Author: Onder Kalaci
Reviewed-by: Shi yu, Amit Kapila
Backpatch-through: 12
Discussion: https://postgr.es/m/CACawEhVQC9WoofunvXg12aXtbqKnEgWxoRx3+v8q32AWYsdpGg@mail.gmail.com
---
We failed to apply updates and deletes when REPLICA IDENTITY FULL was
used for a table having dropped columns, because we did not ignore
dropped columns while comparing the tuples from the publisher and
subscriber during the apply of updates and deletes.
Author: Onder Kalaci, Shi yu
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/CACawEhVQC9WoofunvXg12aXtbqKnEgWxoRx3+v8q32AWYsdpGg@mail.gmail.com
---
With unlucky timing and parallel_leader_participation=off (not the
default), PHJ could attempt to access per-batch shared state just as it
was being freed. There was code intended to prevent that by checking
for a cleared pointer, but it was racy. Fix, by introducing an extra
barrier phase. The new phase PHJ_BUILD_RUNNING means that it's safe to
access the per-batch state to find a batch to help with, and
PHJ_BUILD_DONE means that it is too late. The last to detach will free
the array of per-batch state as before, but now it will also atomically
advance the phase, so that late attachers can avoid the hazard. This
mirrors the way per-batch hash tables are freed (see phases
PHJ_BATCH_PROBING and PHJ_BATCH_DONE).
An earlier attempt to fix this (commit 3b8981b6, later reverted) missed
one special case. When the inner side is empty (the "empty inner"
optimization), the build barrier would only make it to
PHJ_BUILD_HASHING_INNER phase before workers attempted to detach from
the hashtable. In that case, fast-forward the build barrier to
PHJ_BUILD_RUNNING before proceeding, so that our later assertions hold
and we can still negotiate who is cleaning up.
Revealed by build farm failures, where BarrierAttach() failed a sanity
check assertion, because the memory had been clobbered by dsa_free().
In non-assert builds, the result could be a segmentation fault.
Back-patch to all supported releases.
Author: Thomas Munro <thomas.munro@gmail.com>
Author: Melanie Plageman <melanieplageman@gmail.com>
Reported-by: Michael Paquier <michael@paquier.xyz>
Reported-by: David Geier <geidav.pg@gmail.com>
Tested-by: David Geier <geidav.pg@gmail.com>
Discussion: https://postgr.es/m/20200929061142.GA29096%40paquier.xyz
---
Further to commit 6a9229da, checking for NULL is now redundant. An "out
of memory" error would have been thrown already by palloc() and treated
as FATAL, so we can delete a few more lines.
Back-patch to all releases, like those other commits.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/4040668.1679013388%40sss.pgh.pa.us