The pgoutput module caches publication names in a list and frees it upon
invalidation. However, the code forgot to free the actual publication
names within the list elements, as publication names are pstrdup()'d in
GetPublication(). This would cause memory to leak in
CacheMemoryContext, bloating it over time as this context is not
cleaned.
This is a problem for WAL senders running for a long time, as an
accumulation of invalidation requests would bloat their cache memory
usage. A second case, where this leak is easier to see, involves a
backend calling SQL functions like pg_logical_slot_{get,peek}_changes()
which create a new decoding context with each execution. More
publications create more bloat.
To address this, this commit adds a new memory context within the
logical decoding context and resets it each time the publication names
cache is invalidated, based on a suggestion from Amit Kapila. This
ensures that the lifespan of the publication names aligns with that of
the logical decoding context.
Contrary to the HEAD-only commit f0c569d71515 that has changed
PGOutputData to track this new child memory context, the context is
tracked with a static variable whose state is reset with a MemoryContext
reset callback attached to PGOutputData->context, so that ABI
compatibility is preserved in stable branches. This approach is based
on a suggestion from Amit Kapila.
Analyzed-by: Michael Paquier, Jeff Davis
Author: Masahiko Sawada
Reviewed-by: Amit Kapila, Michael Paquier, Euler Taveira, Hou Zhijie
Discussion: https://postgr.es/m/Z0khf9EVMVLOc_YY@paquier.xyz
Backpatch-through: 13
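
As a rough illustration of the pattern (not the committed code; the
names below are invented): the cached names live in a child context of
the decoding context, and invalidation resets that context wholesale,
which also frees every pstrdup()'d name made in it.

#include "postgres.h"
#include "utils/memutils.h"

/* Illustrative names, not the committed identifiers. */
static MemoryContext pubctx = NULL;

/* Run once when the output plugin starts up. */
static void
setup_publication_context_sketch(MemoryContext decoding_ctx)
{
    pubctx = AllocSetContextCreate(decoding_ctx,
                                   "publication list context",
                                   ALLOCSET_SMALL_SIZES);
}

/* Invalidation callback: drop all cached names in one shot. */
static void
publication_invalidation_cb_sketch(Datum arg, int cacheid, uint32 hashvalue)
{
    if (pubctx != NULL)
        MemoryContextReset(pubctx);  /* also frees the pstrdup()'d names */
}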
GetSnapshotData() set TransactionXmin = MyProc->xmin, but when
SnapshotResetXmin() advanced MyProc->xmin, it did not advance
TransactionXmin correspondingly. That meant that TransactionXmin could
be older than MyProc->xmin, and XIDs between TransactionXmin and
the real MyProc->xmin could be vacuumed away. One known consequence is
in pg_subtrans lookups: we might try to look up the status of an XID
that was already truncated away.
Back-patch to all supported versions.
Reviewed-by: Andres Freund
Discussion: https://www.postgresql.org/message-id/d27a046d-a1e4-47d1-a95c-fbabe41debb4@iki.fi
RelationTruncate() does three things, while holding an
AccessExclusiveLock and preventing checkpoints:
1. Logs the truncation.
2. Drops buffers, even if they're dirty.
3. Truncates some number of files.
Step 2 could previously be canceled if it had to wait for I/O, and step
3 could and still can fail in file APIs. All orderings of these
operations have data corruption hazards if interrupted, so we can't give
up until the whole operation is done. When dirty pages were discarded
but the corresponding blocks were left on disk due to ERROR, old page
versions could come back from disk, reviving deleted data (see
pgsql-bugs #18146 and several like it). When primary and standby were
allowed to disagree on relation size, standbys could panic (see
pgsql-bugs #18426) or revive data unknown to visibility management on
the primary (theorized).
Changes:
* WAL is now unconditionally flushed first
* smgrtruncate() is now called in a critical section, preventing
interrupts and causing PANIC on file API failure
* smgrtruncate() has a new parameter for existing fork sizes,
because it can't call smgrnblocks() itself inside a critical section
The changes apply to RelationTruncate(), smgr_redo() and
pg_truncate_visibility_map(). That last is also brought up to date with
other evolutions of the truncation protocol.
The VACUUM FileTruncate() failure mode had been discussed in older
reports than the ones referenced below, with independent analysis from
many people, but earlier theories on how to fix it were too complicated
to back-patch. The more recently invented cancellation bug was
diagnosed by Alexander Lakhin. Other corruption scenarios were spotted
by me while iterating on this patch and earlier commit 75818b3a.
Back-patch to all supported releases.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reported-by: rootcause000@gmail.com
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/18146-04e908c662113ad5%40postgresql.org
Discussion: https://postgr.es/m/18426-2d18da6586f152d6%40postgresql.org
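
A sketch of the new ordering (not the committed code; the exact
smgrtruncate() parameter list differs across branches, and the
pre-computed old fork sizes stand in for the new parameter described
above): flush WAL first, then truncate inside a critical section so a
file-API failure escalates to PANIC instead of leaving dropped buffers
with stale blocks still on disk.

#include "postgres.h"
#include "access/xlog.h"
#include "miscadmin.h"
#include "storage/smgr.h"

static void
truncate_forks_sketch(SMgrRelation reln, ForkNumber *forks,
                      BlockNumber *old_nblocks, BlockNumber *new_nblocks,
                      int nforks, XLogRecPtr lsn)
{
    XLogFlush(lsn);            /* truncation record is durable first */

    START_CRIT_SECTION();      /* interrupts held off; ERROR => PANIC */
    smgrtruncate(reln, forks, nforks, old_nblocks, new_nblocks);
    END_CRIT_SECTION();
}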
durable_rename_excl() attempts to avoid overwriting any existing files
by using link() and unlink(), and it falls back to rename() on some
platforms (aka WIN32), which offers no such overwrite protection. Most
callers use durable_rename_excl() just in case there is an existing
file, but in practice there shouldn't be one (see below for more
details).
Furthermore, failures during durable_rename_excl() can result in
multiple hard links to the same file. As per Nathan's tests, it is
possible to end up with two links to the same file in pg_wal after a
crash just before unlink() during WAL recycling. Specifically, the test
produced links to the same file for the current WAL file and the next
one because the half-recycled WAL file was re-recycled upon restarting,
leading to WAL corruption.
This change replaces all the calls of durable_rename_excl() with
durable_rename(). This removes the protection against accidentally
overwriting an existing file, but some platforms already live without
it, and ordinarily such a file should not exist. The function itself
is left around in case any extensions are using it. It will be removed
on HEAD via a follow-up commit.
Here is a summary of the existing callers of durable_rename_excl() (see
second discussion link at the bottom), replaced by this commit. First,
basic_archive used it to avoid overwriting an archive concurrently
created by another server, but as mentioned above, it will still
overwrite files on some platforms. Second, xlog.c uses it to recycle
past WAL segments, where an overwrite should not happen (origin of the
change at f0e37a8) because there are protections about the WAL segment
to select when recycling an entry. The third and last area is related
to the write of timeline history files. writeTimeLineHistory() will
write a new timeline history file at the end of recovery on promotion,
so there should be no such files for the same timeline.
What remains is writeTimeLineHistoryFile(), which can be used in
parallel by a WAL receiver and the startup process. Some digging in the
buildfarm shows that a WAL receiver can hit EEXIST with an error of
"could not link file \"pg_wal/xlogtemp.NN\" to \"pg_wal/MM.history\"",
which causes an automatic restart of the WAL receiver as the error is
promoted to FATAL. Hence this change should improve the stability of
the WAL receiver, as rename() would overwrite an existing TLI history
file already fetched by the startup process at recovery.
This is the second time this change is attempted, ccfbd92 being the
first one, but this time no assertions are added for the case of a TLI
history file written concurrently by the WAL receiver or the startup
process because we can expect one to exist (some of the TAP tests are
able to trigger this with the right timing).
This commit has been originally applied on v16~ as of dac1ff30906b, and
we have received more reports of this issue, where clusters can become
corrupted at replay in older stable branches with multiple links
pointing to the same physical WAL segment file. This backpatch
addresses the problem for the v13~v15 range.
Author: Nathan Bossart
Reviewed-by: Robert Haas, Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/20220407182954.GA1231544@nathanxps13
Discussion: https://postgr.es/m/Ym6GZbqQdlalSKSG@paquier.xyz
Discussion: https://postgr.es/m/CAJhEC04tBkYPF4q2uS_rCytauvNEVqdBAzasBEokfceFhF=KDQ@mail.gmail.com
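
For illustration, a minimal sketch of the replacement call pattern (the
paths and function name are placeholders): durable_rename() overwrites
an existing target via rename() instead of failing with EEXIST.

#include "postgres.h"
#include "storage/fd.h"

/* tmppath/path are placeholders; LOG keeps a failure non-fatal here */
static bool
install_file_sketch(const char *tmppath, const char *path)
{
    return durable_rename(tmppath, path, LOG) == 0;
}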
If the non-recursive part of a recursive CTE ended up using
TTSOpsBufferHeapTuple as the table slot type, then a duplicate value
could cause an Assert failure in CheckOpSlotCompatibility() when
checking the hash table for the duplicate value. The expected slot type
for the deform step was TTSOpsMinimalTuple so the Assert failed when the
TTSOpsBufferHeapTuple slot was used.
This is a long-standing bug which we likely didn't notice because it
seems much more likely that the non-recursive term would have required
projection and used a TTSOpsVirtual slot, which CheckOpSlotCompatibility
is ok with.
There doesn't seem to be any harm done here other than the Assert
failure. Both TTSOpsMinimalTuple and TTSOpsBufferHeapTuple slot types
require tuple deformation, so the EEOP_*_FETCHSOME ExprState step would
have properly existed in the ExprState.
The solution is to pass NULL for the ExecBuildGroupingEqual's 'lops'
parameter. This means the ExprState's EEOP_*_FETCHSOME step won't
expect a fixed slot type. This makes CheckOpSlotCompatibility() happy as
no checking is performed when the ExprEvalStep is not expecting a fixed
slot type.
Reported-by: Richard Guo
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAMbWs4-8U9q2LAtf8+ghV11zeUReA3AmrYkxzBEv0vKnDxwkKA@mail.gmail.com
Backpatch-through: 13, all supported versions
If a dshash table grows very large (e.g., the dshash table for
cumulative statistics when there are millions of tables), resizing
it may fail with an error like:
ERROR: invalid DSA memory alloc request size 1073741824
To fix, permit dshash resizing to allocate more than 1 GB by
providing the DSA_ALLOC_HUGE flag.
Reported-by: Andreas Scherbaum
Author: Matthias van de Meent
Reviewed-by: Cédric Villemain, Michael Paquier, Andres Freund
Discussion: https://postgr.es/m/80a12d59-0d5e-4c54-866c-e69cd6536471%40pgug.de
Backpatch-through: 13
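
A sketch of the fix, assuming a bucket array of dsa_pointer as dshash
uses (the function name is invented): dsa_allocate_extended() with
DSA_ALLOC_HUGE lifts the 1 GB cap that plain dsa_allocate() enforces.

#include "postgres.h"
#include "utils/dsa.h"

static dsa_pointer
resize_buckets_sketch(dsa_area *area, size_t nbuckets)
{
    return dsa_allocate_extended(area,
                                 sizeof(dsa_pointer) * nbuckets,
                                 DSA_ALLOC_HUGE | DSA_ALLOC_ZERO);
}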
The test failed if you ran the regression tests with TEMP_CONFIG with
recovery_min_apply_delay = '500ms'. Fix the race condition by waiting
for the transaction to be applied on the replica, like in a few other
tests.
The failing test was introduced in commit cbfbda7841. Backpatch to all
supported versions like that commit (except v12, which is no longer
supported).
Reported-by: Alexander Lakhin
Discussion: https://www.postgresql.org/message-id/09e2a70a-a6c2-4b5c-aeae-040a7449c9f2@gmail.com
An \if command appearing within a false (not-to-be-executed) \if
branch was incorrectly treated the same as \elif. This could allow
statements within the inner \if to be executed when they should
not be. Also the missing inner \if stack entry would result in an
assertion failure (in assert-enabled builds) when the final \endif
is reached.
Report and patch by Michail Nikolaev. Back-patch to all
supported branches.
Discussion: https://postgr.es/m/CANtu0oiA1ke=SP6tauhNqkUdv5QFsJtS1p=aOOf_iU+EhyKkjQ@mail.gmail.com
If an owned sequence is considered interesting, force its owning
table to be marked interesting too. This ensures, in particular,
that we'll fetch the owning table's column names so we have the
data needed for ALTER TABLE ... ADD GENERATED. Previously there were
edge cases where pg_dump could get SIGSEGV due to not having filled in
the column names. (The known case is where the owning table has been
made part of an extension while its identity sequence is not a member;
but there may be others.)
Also, if it's an identity sequence, force its dumped-components mask
to exactly match the owning table: dump definition only if we're
dumping the table's definition, dump data only if we're dumping the
table's data, etc. This generalizes the code introduced in commit
b965f2617 that set the sequence's dump mask to NONE if the owning
table's mask is NONE. That's insufficient to prevent failures,
because for example the table's mask might only request dumping ACLs,
which would lead us to still emit ALTER TABLE ADD GENERATED even
though we didn't create the table. It seems better to treat an
identity sequence as though it were an inseparable aspect of the
table, matching the treatment used in the backend's dependency logic.
Perhaps this policy needs additional refinement, but let's wait to
see some field use-cases before changing it further.
While here, add a comment in pg_dump.h warning against writing tests
like "if (dobj->dump == DUMP_COMPONENT_NONE)", which was a bug in this
case. There is one other example in getPublicationNamespaces, which
if it's not a bug is at least remarkably unclear and under-documented.
Changing that requires a separate discussion, however.
Per report from Artur Zakirov. Back-patch to all supported branches.
Discussion: https://postgr.es/m/CAKNkYnwXFBf136=u9UqUxFUVagevLQJ=zGd5BsLhCsatDvQsKQ@mail.gmail.com
This backpatches commits beb4e9ba1652 and 1fb17b190341 (originally
appearing in REL_15_STABLE) to REL_14_STABLE. Performance of the WAL
archiver can become pretty critical at times, and reports exist of users
getting into serious trouble (hours of downtime, loss of replicas)
because of the lack of this optimization.
We'd like to backpatch these to REL_13_STABLE too, but because of the
very invasive changes made by commit d75288fb27b8 in the 14 timeframe,
we deem it too risky :-(
Original commit messages appear below.
Discussion: https://postgr.es/m/202411131605.m66syq5i5ucl@alvherre.pgsql
commit beb4e9ba1652a04f66ff20261444d06f678c0b2d
Author: Robert Haas <rhaas@postgresql.org>
AuthorDate: Thu Nov 11 15:02:53 2021 -0500
Improve performance of pgarch_readyXlog() with many status files.
Previously, the archive_status directory was scanned for each file to
archive. When there are many status files, say because archive_command
has been failing for a long time, these directory scans can get very
slow. With this change, the archiver remembers several files to archive
during each directory scan, speeding things up.
To ensure timeline history files are archived as quickly as possible,
XLogArchiveNotify() forces the archiver to do a new directory scan as
soon as the .ready file for one is created.
Nathan Bossart, per a long discussion involving many people. It is
not clear to me exactly who out of all those people reviewed this
particular patch.
Discussion: http://postgr.es/m/CA+TgmobhAbs2yabTuTRkJTq_kkC80-+jw=pfpypdOJ7+gAbQbw@mail.gmail.com
Discussion: http://postgr.es/m/620F3CE1-0255-4D66-9D87-0EADE866985A@amazon.com
commit 1fb17b1903414676bd371068739549cd2966fe87
Author: Tom Lane <tgl@sss.pgh.pa.us>
AuthorDate: Wed Dec 29 17:02:50 2021 -0500
Fix issues in pgarch's new directory-scanning logic.
The arch_filenames[] array elements were one byte too small, so that
a maximum-length filename would get corrupted if another entry
were made after it. (Noted by Thomas Munro, fix by Nathan Bossart.)
Move these arrays into a palloc'd struct, so that we aren't wasting
a few kilobytes of static data in each non-archiver process.
Add a binaryheap_reset() call to make it plain that we start the
directory scan with an empty heap. I don't think there's any live
bug of that sort, but it seems fragile, and this is very cheap
insurance.
Cleanup for commit beb4e9ba1, so no back-patch needed.
Discussion: https://postgr.es/m/CA+hUKGLHAjHuKuwtzsW7uMJF4BVPcQRL-UMZG_HM-g0y7yLkUg@mail.gmail.com
Since commit 97550c0711972a9856b5db751539bbaf2f88884c, these failed with
"PANIC: proc_exit() called in child process" due to uninitialized or
stale MyProcPid. That was reachable if close() failed in
ClosePostmasterPorts() or setlocale(category, "C") failed, both
unlikely. Back-patch to v13 (all supported versions).
Discussion: https://postgr.es/m/20241208034614.45.nmisch@google.com
Our parallel-mode code only works when we are executing a query
in full, so ExecutePlan must disable parallel mode when it is
asked to do partial execution. The previous logic for this
involved passing down a flag (variously named execute_once or
run_once) from callers of ExecutorRun or PortalRun. This is
overcomplicated, and unsurprisingly some of the callers didn't
get it right, since it requires keeping state that not all of
them have handy; not to mention that the requirements for it were
undocumented. That led to assertion failures in some corner
cases. The only state we really need for this is the existing
QueryDesc.already_executed flag, so let's just put all the
responsibility in ExecutePlan. (It could have been done in
ExecutorRun too, leading to a slightly shorter patch -- but if
there's ever more than one caller of ExecutePlan, it seems better
to have this logic in the subroutine than the callers.)
This makes those ExecutorRun/PortalRun parameters unnecessary.
In master it seems okay to just remove them, returning the
API for those functions to what it was before parallelism.
Such an API break is clearly not okay in stable branches,
but for them we can just leave the parameters in place after
documenting that they do nothing.
Per report from Yugo Nagata, who also reviewed and tested
this patch. Back-patch to all supported branches.
Discussion: https://postgr.es/m/20241206062549.710dc01cf91224809dd6c0e1@sraoss.co.jp
Usually an entry in pg_amop or pg_amproc does not need a dependency on
its amoplefttype/amoprighttype/amproclefttype/amprocrighttype types,
because there is an indirect dependency via the argument types of its
referenced operator or procedure, or via the opclass it belongs to.
However, for some support procedures in some index AMs, the argument
types of the support procedure might not mention the column data type
at all. Also, the amop/amproc entry might be treated as "loose" in
the opfamily, in which case it lacks a dependency on any particular
opclass; or it might be a cross-type entry having a reference to a
datatype that is not its opclass' opcintype.
The upshot of all this is that there are cases where a datatype can
be dropped while leaving behind amop/amproc entries that mention it,
because there is no path in pg_depend showing that those entries
depend on that type. Such entries are harmless in normal activity,
because they won't get used, but they cause problems for maintenance
actions such as dropping the operator family. They also cause pg_dump
to produce bogus output. The previous commit put a band-aid on the
DROP OPERATOR FAMILY failure, but a real fix is needed.
To fix, add pg_depend entries showing that a pg_amop/pg_amproc entry
depends on its lefttype/righttype. To avoid bloating pg_depend too
much, skip this if the referenced operator or function has that type
as an input type. (I did not bother with considering the possible
indirect dependency via the opclass' opcintype; at least in the
reported case, that wouldn't help anyway.)
Probably, the reason this has escaped notice for so long is that
add-on datatypes and relevant opclasses/opfamilies are usually
packaged as extensions nowadays, so that there's no way to drop
a type without dropping the referencing opclasses/opfamilies too.
Still, in the absence of pg_depend entries there's nothing that
constrains DROP EXTENSION to drop the opfamily entries before the
datatype, so it seems possible for a DROP failure to occur anyway.
The specific case that was reported doesn't fail in v13, because
v13 prefers to attach the support procedure to the opclass not the
opfamily. But it's surely possible to construct other edge cases
that do fail in v13, so patch that too.
Per report from Yoran Heling. Back-patch to all supported branches.
Discussion: https://postgr.es/m/Z1MVCOh1hprjK5Sf@gmai021
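
A sketch of the new dependency recording (simplified; the committed code
covers both pg_amop and pg_amproc, and both the left and right types):

#include "postgres.h"
#include "catalog/dependency.h"
#include "catalog/objectaddress.h"
#include "catalog/pg_amop.h"
#include "catalog/pg_type.h"

static void
record_amop_type_dependency_sketch(Oid amopOid, Oid typid,
                                   bool type_is_proc_input)
{
    ObjectAddress myself;
    ObjectAddress referenced;

    /* skip if an indirect dependency exists via the function's inputs */
    if (type_is_proc_input)
        return;

    ObjectAddressSet(myself, AccessMethodOperatorRelationId, amopOid);
    ObjectAddressSet(referenced, TypeRelationId, typid);
    recordDependencyOn(&myself, &referenced, DEPENDENCY_NORMAL);
}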
Yoran Heling reported a case where a data type could be dropped
while references to its OID remain behind in pg_amproc. This
causes getObjectDescription to fail, which blocks dropping the
operator family (since our DROP code likes to construct descriptions
of everything it's dropping). The proper fix for this requires
adding more pg_depend entries. But to allow DROP to go through with
already-corrupt catalogs, tweak getObjectDescription to print "???"
for the type instead of failing when it processes such an entry.
I changed the logic for pg_amop similarly, for consistency,
although it is not known that the problem can manifest in pg_amop.
Per report from Yoran Heling. Back-patch to all supported
branches (although the problem may be unreachable in v13).
Discussion: https://postgr.es/m/Z1MVCOh1hprjK5Sf@gmai021
These format codes produce or consume strings of digits, so they
should be labeled with is_digit = true, but they were not.
This has effect in only one place, where is_next_separator()
is checked to see if the preceding format code should slurp up
all the available digits. Thus, with a format such as '...SSFF3'
with remaining input '12345', the 'SS' code would consume all
five digits (and then complain about seconds being out of range)
when it should eat only two digits.
Per report from Nick Davies. This bug goes back to d589f9446
where the FFn codes were introduced, so back-patch to v13.
Discussion: https://postgr.es/m/AM8PR08MB6356AC979252CFEA78B56678B6312@AM8PR08MB6356.eurprd08.prod.outlook.com
check_restrict_nonsystem_relation_kind() correctly uses guc_malloc()
in v16 and later. But in older branches it must use malloc()
directly, and it forgot to check for failure return.
Faulty backpatching of 66e94448a.
Karina Litskevich
Discussion: https://postgr.es/m/CACiT8iZ=atkguKVbpN4HmJFMb4+T9yEowF5JuPZG8W+kkZ9L6w@mail.gmail.com
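
The shape of the fix in the pre-v16 branches, sketched with an invented
hook name: a failed malloc()/strdup() must be reported through the GUC
check-hook error machinery rather than dereferenced.

#include "postgres.h"
#include "utils/guc.h"

static bool
check_hook_sketch(char **newval, void **extra, GucSource source)
{
    char       *rawstring = strdup(*newval);

    if (rawstring == NULL)
    {
        GUC_check_errcode(ERRCODE_OUT_OF_MEMORY);
        return false;           /* previously this check was missing */
    }
    /* ... parse rawstring and fill *extra ... */
    free(rawstring);
    return true;
}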
Previously, it set only DELAY_CHKPT_COMPLETE. That was important,
because it meant that if the XLOG_SMGR_TRUNCATE record preceded a
XLOG_CHECKPOINT_ONLINE record in the WAL, then the truncation would also
happen on disk before the XLOG_CHECKPOINT_ONLINE record was
written.
However, it didn't guarantee that the sync request for the truncation
was processed before the XLOG_CHECKPOINT_ONLINE record was written. By
setting DELAY_CHKPT_START, we guarantee that if an XLOG_SMGR_TRUNCATE
record is written to WAL before the redo pointer of a concurrent
checkpoint, the sync request queued by that operation must be processed
by that checkpoint, rather than being left for the following one.
This is a refinement of commit 412ad7a5563. Back-patch to all supported
releases, like that commit.
Author: Robert Haas <robertmhaas@gmail.com>
Reported-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKG%2B-2rjGZC2kwqr2NMLBcEBp4uf59QT1advbWYF_uc%2B0Aw%40mail.gmail.com
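
Sketched with the v15+ field and flag names (older branches spell this
differently), the guard now covers both checkpoint phases:

#include "postgres.h"
#include "storage/proc.h"

static void
smgr_truncate_guard_sketch(void)
{
    MyProc->delayChkptFlags |= DELAY_CHKPT_START | DELAY_CHKPT_COMPLETE;

    /* ... log XLOG_SMGR_TRUNCATE, truncate, queue sync requests ... */

    MyProc->delayChkptFlags &= ~(DELAY_CHKPT_START | DELAY_CHKPT_COMPLETE);
}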
The loops over cursor argument variables neglected to ever advance
"prevvar". The code would accidentally do the right thing anyway
when removing the first or second list entry, but if it had to
remove the third or later entry then it would also remove all
entries between there and the first entry. AFAICS this would
only matter for cursors that reference out-of-scope variables,
which is a weird Informix compatibility hack; between that and
the lack of impact for short lists, it's not so surprising that
nobody has complained. Nonetheless it's a pretty obvious bug.
It would have been more obvious if these loops used a more standard
coding style for chasing the linked lists --- this business with the
"prev" pointer sometimes pointing at the current list entry is
confusing and overcomplicated. So rather than just add a minimal
band-aid, I chose to rewrite the loops in the same style we use
elsewhere, where the "prev" pointer is NULL until we are dealing with
a non-first entry and we save the "next" pointer at the top of the
loop. (Two of the four loops touched here are not actually buggy,
but it seems better to make them all look alike.)
Coverity discovered this problem, but not until 2b41de4a5 added code
to free no-longer-needed arguments structs. With that, the incorrect
link updates are possibly touching freed memory, and it complained
about that. Nonetheless the list corruption hazard is ancient, so
back-patch to all supported branches.
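
The loop style the rewrite adopts, as a self-contained sketch: "prev"
stays NULL until a surviving entry has been passed, and "next" is saved
up front so freeing the current node is safe.

#include <stdlib.h>

struct arguments
{
    struct arguments *next;
    /* ... payload ... */
};

static struct arguments *
remove_matching(struct arguments *list, int (*matches) (struct arguments *))
{
    struct arguments *prev = NULL;
    struct arguments *next;

    for (struct arguments *p = list; p != NULL; p = next)
    {
        next = p->next;
        if (matches(p))
        {
            if (prev)
                prev->next = next;  /* unlink a non-first entry */
            else
                list = next;        /* unlink the head */
            free(p);
        }
        else
            prev = p;               /* only advance prev past survivors */
    }
    return list;
}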
Commit 352f6f2d used %d instead of %lu to format DWORD (unsigned long)
with psprintf(). The _WIN32_WINNT value recently changed for MinGW in
REL_15_STABLE (commit d700e8d7), so the code was suddenly being
compiled, with warnings from gcc.
The warnings were already fixed in 16+ by commits 495ed0ef and a9bc04b2
after the _WIN32_WINNT value was increased there. 14 and 13 didn't warn
because they still use a lower value for MinGW, and supported versions
of Visual Studio should compile the code in all live branches but don't
check our format string.
The change doesn't affect the result: sizeof(int) == sizeof(long) on
this platform, and the values are computed with expressions that cannot
exceed INT_MAX so were never interpreted as negative.
Back-patch the formatting change from those commits into 13-15. This
should turn CI's 15 branch green again and stop fairywren from warning
about that on 15.
Reported-by: Andres Freund <andres@anarazel.de>
Reported-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/t2vjrcb3bloxf5qqvxjst6r7lvrefqyecxgt2koy5ho5b5glr2%40yuupmm6whgob
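
A sketch of the formatting fix (the function name is invented; "err"
stands in for a GetLastError() result, which is a DWORD, i.e. unsigned
long on Windows):

#include "postgres.h"

static char *
format_win32_error_sketch(unsigned long err)
{
    /* %lu, not %d: DWORD is unsigned long */
    return psprintf("operation failed (error code %lu)", err);
}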
Commit 9044fc1d45a0 skipped SectionMemoryManager.h in headerscheck, and
by extension also cpluspluscheck, because it's C++ and would fail both
tests. That worked in master and REL_17_STABLE due to 7b8e2ae2fd3b, but
older branches have a separate cpluspluscheck script. We need to copy
the filtering rule there too.
This problem was being reported by CI's CompilerWarnings task in the 15
and 16 branches, but was a victim of alert fatigue syndrome (unrelated
problems in the back-branches).
Fix 16, and back-patch to 13, as those are the live branches that have a
separate cpluspluscheck script.
This reverts commit d77f91214fb7 on all stable branches, due to concerns
regarding the compatibility side effects this could create in a minor
release. The change still exists on HEAD.
Discussion: https://postgr.es/m/CA+TgmoZqRgeFTg4+Yf_CMRRXiHuNz1u6ZC4FvVk+rxw0RmOPnw@mail.gmail.com
Backpatch-through: 13
During pgbench's table initialization, progress updates could display
leftover characters from the previous message if the new message
was shorter. This commit resolves the issue by appending spaces to
the current message to fully overwrite any remaining characters from
the previous line.
Back-patch to all supported versions.
Author: Yushi Ogiwara, Tatsuo Ishii, Fujii Masao
Reviewed-by: Tatsuo Ishii, Fujii Masao
Discussion: https://postgr.es/m/9a9b8b95b6a709877ae48ad5b0c59bb9@oss.nttdata.com
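
A sketch of the idea with invented names: track the longest line printed
so far and pad up to it, so a shorter message fully overwrites its
predecessor after the carriage return.

#include <stdio.h>
#include <string.h>

static void
print_progress_sketch(const char *msg)
{
    static int  max_len = 0;
    int         len = (int) strlen(msg);

    if (len > max_len)
        max_len = len;
    /* %-*s left-justifies and pads with spaces out to max_len */
    fprintf(stderr, "\r%-*s", max_len, msg);
}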
from commit 9044fc1d45a
When using a pipeline, a transaction starts from the first command and
is committed with a Sync message or when the pipeline ends.
Functions like IsInTransactionBlock() or PreventInTransactionBlock()
were already able to understand a pipeline as being in a transaction
block, but it was not the case of CheckTransactionBlock(). This
function is called for example to generate a WARNING for SET LOCAL,
complaining that it is used outside of a transaction block.
The current state of the code caused multiple problems, like:
- SET LOCAL executed at any stage of a pipeline issued a WARNING, even
if the command was at least second in line where the pipeline is in a
transaction state.
- LOCK TABLE failed when invoked at any step of a pipeline, even if it
should be able to work within a transaction block.
The pipeline protocol assumes that the first command of a pipeline is
not part of a transaction block, and that any follow-up command is
considered to be within a transaction block.
This commit changes the backend so that an implicit transaction block is
started each time the first Execute message of a pipeline has finished
processing, with this implicit transaction block ended once a sync is
processed. The checks based on XACT_FLAGS_PIPELINING in the routines
checking if we are in a transaction block are not necessary: it is
enough to rely on the existing ones.
Some tests are added to pgbench, which can be backpatched down to v17
when \syncpipeline is involved and down to v14 where \startpipeline and
\endpipeline are available. This is unfortunately limited regarding the
error patterns that can be checked, but it provides coverage for various
pipeline combinations to check if these succeed or fail. These tests
are able to capture the case of SET LOCAL's WARNING. The author has
proposed a different feature to improve the coverage by adding similar
meta-commands to psql where error messages could be checked, something
more useful for the cases where commands cannot be used in transaction
blocks, like REINDEX CONCURRENTLY or VACUUM. This is considered as
future work for v18~.
Author: Anthonin Bonnefoy
Reviewed-by: Jelte Fennema-Nio, Michael Paquier
Discussion: https://postgr.es/m/CAO6_XqrWO8uNBQrSu5r6jh+vTGi5Oiyk4y8yXDORdE2jbzw8xw@mail.gmail.com
Backpatch-through: 13
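
In outline, and heavily simplified from the message-processing loop in
postgres.c, the change hooks into two spots (the wrapper names below are
invented; the xact.c calls are real):

#include "postgres.h"
#include "access/xact.h"

/* sketch: once the first Execute message of a pipeline has finished */
static void
after_first_pipeline_execute_sketch(void)
{
    BeginImplicitTransactionBlock();
}

/* sketch: when the Sync message is processed, the implicit block ends
 * and the transaction can commit */
static void
on_pipeline_sync_sketch(void)
{
    EndImplicitTransactionBlock();
}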
If passed a read-write expanded object pointer, the EEOP_NULLIF
code would hand that same pointer to the equality function
and then (unless equality was reported) also return the same
pointer as its value. This is no good, because a function that
receives a read-write expanded object pointer is fully entitled
to scribble on or even delete the object, thus corrupting the
NULLIF output. (This problem is likely unobservable with the
equality functions provided in core Postgres, but it's easy to
demonstrate with one coded in plpgsql.)
To fix, make sure the pointer passed to the equality function
is read-only. We can still return the original read-write
pointer as the NULLIF result, allowing optimization of later
operations.
Per bug #18722 from Alexander Lakhin. This has been wrong
since we invented expanded objects, so back-patch to all
supported branches.
Discussion: https://postgr.es/m/18722-fd9e645448cc78b4@postgresql.org
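
A sketch of the fix, assuming a varlena-typed argument (typlen -1): the
equality function gets a read-only view, so it cannot scribble on or
delete the object, while the original pointer remains available to
return as the NULLIF result.

#include "postgres.h"
#include "utils/expandeddatum.h"

static Datum
nullif_equality_arg_sketch(Datum arg, bool isnull)
{
    return MakeExpandedObjectReadOnly(arg, isnull, -1);
}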
This WARNING appeared because SearchSysCacheLocked1() read
cc_relisshared before catcache initialization, when the field is false
unconditionally. On the basis of reading false there, it constructed a
locktag as though pg_tablespace weren't relisshared. Only shared
catalogs could be affected, and only GRANT TABLESPACE was affected in
practice. SearchSysCacheLocked1() callers use one other shared-relation
syscache, DATABASEOID. DATABASEOID is initialized by the end of
CheckMyDatabase(), making the problem unreachable for pg_database.
Back-patch to v13 (all supported versions). This has no known impact
before v16, where ExecGrant_common() first appeared. Earlier branches
avoid trouble by having a separate ExecGrant_Tablespace() that doesn't
use LOCKTAG_TUPLE. However, leaving this unfixed in v15 could ensnare a
future back-patch of a SearchSysCacheLocked1() call.
Reported by Aya Iwata.
Discussion: https://postgr.es/m/OS7PR01MB11964507B5548245A7EE54E70EA212@OS7PR01MB11964.jpnprd01.prod.outlook.com
Tcl 9 changed several API functions to take Tcl_Size, which is
ptrdiff_t, instead of int, for 64-bit enablement. We have to change a
few local variables to be compatible with that. We also provide a
fallback typedef of Tcl_Size for older Tcl versions.
The affected variables are used for quantities that will not approach
values beyond the range of int, so this doesn't change any
functionality.
Reviewed-by: Tristan Partin <tristan@partin.io>
Discussion: https://www.postgresql.org/message-id/flat/bce0fe54-75b4-438e-b42b-8e84bc7c0e9c%40eisentraut.org
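
A sketch of the fallback, assuming TCL_SIZE_MAX is a usable marker for
Tcl 9 (Tcl 9 defines it alongside the new type); local variables then
use Tcl_Size wherever the Tcl 9 API expects it:

#include <tcl.h>

#ifndef TCL_SIZE_MAX
typedef int Tcl_Size;           /* older Tcl: the API took plain int */
#endif

static void
get_list_length_sketch(Tcl_Interp *interp, Tcl_Obj *obj)
{
    Tcl_Size    llength;

    (void) Tcl_ListObjLength(interp, obj, &llength);
}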
Previously we checked "for <stdbool.h> that conforms to C99" using
autoconf's AC_HEADER_STDBOOL macro. We've required C99 since PostgreSQL
12, so the test was redundant, and under C23 it was broken: autoconf
2.69's implementation doesn't understand C23's new empty header (the
macros it's looking for went away, replaced by language keywords).
Later autoconf versions fixed that, but let's just remove the
anachronistic test.
HAVE_STDBOOL_H and HAVE__BOOL will no longer be defined, but they
weren't directly tested in core or likely extensions (except in 11, see
below). PG_USE_STDBOOL (or USE_STDBOOL in 11 and 12) is still defined
when sizeof(bool) is 1, which should be true on all modern systems.
Otherwise we define our own bool type and values of size 1, which would
fail to compile under C23 as revealed by the broken test. (We'll
probably clean that dead code up in master, but here we want a minimal
back-patchable change.)
This came to our attention when GCC 15 recently started using C23
by default and failed to compile the replacement code, as reported by
Sam James and build farm animal alligator.
Back-patch to all supported releases, and then two older versions that
also know about <stdbool.h>, per the recently-out-of-support policy[1].
12 requires C99 so it's much like the supported releases, but 11 only
assumes C89 so it now uses AC_CHECK_HEADERS instead of the overly picky
AC_HEADER_STDBOOL. (I could find no discussion of which historical
systems had <stdbool.h> but failed the conformance test; if they ever
existed, they surely aren't relevant to that policy's goals.)
[1] https://wiki.postgresql.org/wiki/Committing_checklist#Policies
Reported-by: Sam James <sam@gentoo.org>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org> (master version)
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> (approach)
Discussion: https://www.postgresql.org/message-id/flat/87o72eo9iu.fsf%40gentoo.org
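
The relevant c.h logic, in outline: with PG_USE_STDBOOL defined
(sizeof(bool) == 1) we use <stdbool.h>; otherwise the hand-rolled
fallback below is compiled, and it is that fallback which C23 rejects,
since bool, true and false are now language keywords.

#ifdef PG_USE_STDBOOL
#include <stdbool.h>
#else
typedef unsigned char bool;     /* C23 rejects this: bool is a keyword */
#define true    ((bool) 1)
#define false   ((bool) 0)
#endif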
It failed to set the archive_command as it desired because of a syntax
problem. Oversight in commit 90bcc7c2db1d.
This bug doesn't cause the test to fail, because the test only checks
pg_rewind's output messages, not the actual outcome (and the outcome in
both cases is that the file is kept, not deleted). But in either case
the message about the file being kept is there, so it's hard to get
excited about doing much more.
Reported-by: Antonin Houska <ah@cybertec.at>
Author: Alexander Kukushkin <cyberdemn@gmail.com>
Discussion: https://postgr.es/m/7822.1732167825@antos
Apparently this information has been outdated since it was first
committed: we adopted a different implementation during development per
reviews, and this detail was not updated in the README.
This has been wrong since commit 0ac5ad5134f2 introduced the file in
2013. Backpatch to all live branches.
Reported-by: Will Mortensen <will@extrahop.com>
Discussion: https://postgr.es/m/CAMpnoC6yEQ=c0Rdq-J7uRedrP7Zo9UMp6VZyP23QMT68n06cvA@mail.gmail.com
Ordinarily transformSetOperationTree will collect all UNION/
INTERSECT/EXCEPT steps into the setOperations tree of the topmost
Query, so that leaf queries do not contain any setOperations.
However, it cannot thus flatten a subquery that also contains
WITH, ORDER BY, FOR UPDATE, or LIMIT. I (tgl) forgot that in
commit 07b4c48b6 and wrote an assertion in rule deparsing that
a leaf's setOperations would always be empty.
If it were nonempty then we would want to parenthesize the subquery
to ensure that the output represents the setop nesting correctly
(e.g. UNION below INTERSECT had better get parenthesized). So
rather than just removing the faulty Assert, let's change it into
an additional case to check to decide whether to add parens. We
don't expect that the additional case will ever fire, but it's
cheap insurance.
Man Zeng and Tom Lane
Discussion: https://postgr.es/m/tencent_7ABF9B1F23B0C77606FC5FE3@qq.com
In the dim past we figured it was okay to ignore collations
when combining UNION set-operation nodes into a single N-way
UNION operation. I believe that was fine at the time, but
it stopped being fine when we added nondeterministic collations:
the semantics of distinct-ness are affected by those. v17 made
it even less fine by allowing per-child sorting operations to
be merged via MergeAppend, although I think we accidentally
avoided any live bug from that.
Add a check that collations match before deciding that two
UNION nodes are equivalent. I also failed to resist the
temptation to comment plan_union_children() a little better.
Back-patch to all supported branches (v13 now), since they
all have nondeterministic collations.
Discussion: https://postgr.es/m/3605568.1731970579@sss.pgh.pa.us
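
A simplified sketch of the added test (the real code lives in the
planner's set-operation handling; the function name is invented):

#include "postgres.h"
#include "nodes/parsenodes.h"

static bool
union_mergeable_sketch(SetOperationStmt *op, SetOperationStmt *top)
{
    return op->op == top->op &&
        op->all == top->all &&
        equal(op->colTypes, top->colTypes) &&
        equal(op->colCollations, top->colCollations);   /* the new check */
}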
fixempties() counts the number of in-arcs in the regex NFA and then
allocates an array of that many arc pointers. If the NFA contains no
arcs, this amounts to malloc(0) for which some platforms return NULL.
The code mistakenly treats that as indicating out-of-memory. Thus,
we can get a bogus "out of memory" failure for some unsatisfiable
regexes.
This happens only in v15 and earlier, since bea3d7e38 switched to
using palloc() rather than bare malloc(). And at least among the
platforms in the buildfarm, only AIX seems to return NULL. So the
impact is pretty narrow. But I don't especially want to ship code
that is failing its own regression tests, so let's fix this for
this week's releases.
A quick code survey says that there is only the one place in
src/backend/regex/ that is at risk of doing malloc(0), so we'll just
band-aid that place. A more future-proof fix could be to install a
malloc() wrapper similar to pg_malloc(). But this code seems unlikely
to change much more in the affected branches, so that's probably
overkill.
The only known test case for this involves a complemented character
class in a bracket expression, for example [^\s\S], and we didn't
support that in v13. So it may be that the problem is unreachable
in v13. But I'm not 100% sure of that, so patch v13 too.
Discussion: https://postgr.es/m/661fd81b-f069-8f30-1a41-e195c57449b4@gmail.com
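
A band-aid in the spirit of the fix, as a self-contained sketch: never
pass zero to malloc(), so a NULL return can only mean out-of-memory.

#include <stdlib.h>

static void *
malloc_nonzero(size_t nbytes)
{
    return malloc(nbytes > 0 ? nbytes : 1);
}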
Commits aac2c9b4f et al. added a bool field to struct ResultRelInfo.
That's no problem in the master branch, but in released branches
care must be taken when modifying publicly-visible structs to avoid
an ABI break for extensions. Frequently we solve that by adding the
new field at the end of the struct, and that's what was done here.
But ResultRelInfo has stricter constraints than just about any other
node type in Postgres. Some executor APIs require extensions to index
into arrays of ResultRelInfo, which means that any change whatever in
sizeof(ResultRelInfo) causes a fatal ABI break.
Fortunately, this is easy to fix, because the new field can be
squeezed into available padding space instead --- indeed, that's where
it was put in master, so this fix also removes a cross-branch coding
variation.
Per report from Pavan Deolasee. Patch v14-v17 only; earlier versions
did not gain the extra field, nor is there any problem in master.
Discussion: https://postgr.es/m/CABOikdNmVBC1LL6pY26dyxAS2f+gLZvTsNt=2XbcyG7WxXVBBQ@mail.gmail.com
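
A self-contained illustration of the principle (not PostgreSQL code): a
bool placed into an existing padding hole leaves sizeof() unchanged, so
extension code that indexes into arrays of the struct keeps working.

#include <stdbool.h>
#include <stdio.h>

struct without_flag
{
    void       *ptr;
    int         count;
    /* 4 bytes of padding here on typical LP64 ABIs */
};

struct with_flag
{
    void       *ptr;
    int         count;
    bool        flag;           /* consumes padding, not new space */
};

int
main(void)
{
    /* both typically print 16 */
    printf("%zu %zu\n", sizeof(struct without_flag), sizeof(struct with_flag));
    return 0;
}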
After commit 5a2fed911a85ed6d8a015a6bafe3a0d9a69334ae, the catalog state
resulting from these commands ceased to affect sessions. Restore the
longstanding behavior, which is like beginning the session with a SET
ROLE command. If cherry-picking the CVE-2024-10978 fixes, default to
including this, too. (This fixes an unintended side effect of fixing
CVE-2024-10978.) Back-patch to v12, like that commit. The release team
decided to include v12, despite the original intent to halt v12 commits
earlier this week.
Tom Lane and Noah Misch. Reported by Etienne LAFARGE.
Discussion: https://postgr.es/m/CADOZwSb0UsEr4_UTFXC5k7=fyyK8uKXekucd+-uuGjJsGBfxgw@mail.gmail.com
Previously LogicalIncreaseRestartDecodingForSlot() accidentally
accepted any LSN as the candidate_lsn and candidate_valid after the
restart_lsn of the replication slot was updated, so it potentially
caused the restart_lsn to move backwards.
A scenario where this could happen in logical replication is: after a
logical replication restart, based on previous candidate_lsn and
candidate_valid values in memory, the restart_lsn advances upon
receiving a subscriber acknowledgment. Then, logical decoding restarts
from an older point, setting candidate_lsn and candidate_valid based
on an old RUNNING_XACTS record. Subsequent subscriber acknowledgments
then update the restart_lsn to an LSN older than the current value.
In the reported case, after WAL files were removed by a checkpoint,
the retreated restart_lsn prevented logical replication from
restarting due to missing WAL segments.
This change essentially turns an 'if' condition within the function
into an 'else if' condition. The previous code had an asymmetry in
this regard compared to LogicalIncreaseXminForSlot(), which does
almost the same thing for different fields.
The WAL removal issue was reported by Hubert Depesz Lubaczewski.
Backpatch to all supported versions, since the bug exists since 9.4
where logical decoding was introduced.
Reviewed-by: Tomas Vondra, Ashutosh Bapat, Amit Kapila
Discussion: https://postgr.es/m/Yz2hivgyjS1RfMKs%40depesz.com
Discussion: https://postgr.es/m/85fff40e-148b-4e86-b921-b4b846289132%40vondra.me
Backpatch-through: 13
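
The shape of the change, heavily simplified into a self-contained
sketch (slot locking and the other fields omitted): the final branch
used to be a plain "if", so it could install a candidate that the
earlier branches had just declined to accept.

#include "postgres.h"
#include "access/xlogdefs.h"

static void
increase_restart_sketch(XLogRecPtr current_lsn, XLogRecPtr restart_lsn,
                        XLogRecPtr slot_restart_lsn,
                        XLogRecPtr confirmed_flush,
                        XLogRecPtr *candidate_lsn,
                        XLogRecPtr *candidate_valid)
{
    if (restart_lsn <= slot_restart_lsn)
    {
        /* never move backwards */
    }
    else if (current_lsn <= confirmed_flush)
    {
        *candidate_lsn = restart_lsn;   /* already flushed far enough */
        *candidate_valid = current_lsn;
    }
    else if (*candidate_valid == InvalidXLogRecPtr)     /* was a plain "if" */
    {
        *candidate_lsn = restart_lsn;
        *candidate_valid = current_lsn;
    }
}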
In commit 08c0d6ad6 which introduced "rainbow" arcs in regex NFAs,
I didn't think terribly hard about what to do when creating the color
complement of a rainbow arc. Clearly, the complement cannot match any
characters, and I took the easy way out by just not building any arcs
at all in the complement arc set. That mostly works, but Nikolay
Shaplov found a case where it doesn't: if we decide to delete that
sub-NFA later because it's inside a "{0}" quantifier, delsub()
suffered an assertion failure. That's because delsub() relies on
the target sub-NFA being fully connected. That was always true
before, and the best fix seems to be to restore that property.
Hence, invent a new arc type CANTMATCH that can be generated in
place of an empty color complement, and drop it again later when we
start NFA optimization. (At that point we don't need to do delsub()
any more, and besides there are other cases where NFA optimization can
lead to disconnected subgraphs.)
It appears that this bug has no consequences in a non-assert-enabled
build: there will be some transiently leaked NFA states/arcs, but
they'll get cleaned up eventually. Still, we don't like assertion
failures, so back-patch to v14 where rainbow arcs were introduced.
Per bug #18708 from Nikolay Shaplov.
Discussion: https://postgr.es/m/18708-f94f2599c9d2c005@postgresql.org
Previously, in unlucky cases, it was possible for pg_rewind to remove
certain WAL segments from the rewound demoted primary. In particular
this happens if those files have been marked for archival (i.e., their
.ready files were created) but not yet archived; the newly promoted node
no longer has such files because they have been recycled, but they
are likely critical for recovery in the demoted node. If pg_rewind
removes them, recovery is not possible anymore.
Fix this by maintaining a hash table of files in this situation in the
scan that looks for a checkpoint, which the decide_file_actions phase
can consult so that it knows to preserve them.
Backpatch to 14. The problem also exists in 13, but that branch was not
blessed with commit eb00f1d4bf96, so this patch is difficult to apply
there. Users of older releases will just have to continue to be extra
careful when rewinding.
Co-authored-by: Полина Бунгина (Polina Bungina) <bungina@gmail.com>
Co-authored-by: Alexander Kukushkin <cyberdemn@gmail.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Atsushi Torikoshi <torikoshia@oss.nttdata.com>
Discussion: https://postgr.es/m/CAAtGL4AhzmBRsEsaDdz7065T+k+BscNadfTqP1NcPmsqwA5HBw@mail.gmail.com
The current code calls array_eq() without providing an FmgrInfo. This
commit adds proper initialization of the FmgrInfo and uses the C
collation as the safe option for text comparison, because we don't know
anything about the semantics of opclass options.
Backpatch to 13, where opclass options were introduced.
Reported-by: Nicolas Maus
Discussion: https://postgr.es/m/18692-72ea398df3ec6712%40postgresql.org
Backpatch-through: 13
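
A sketch of the fixed call pattern (the function name is invented): set
up the FmgrInfo for array_eq() and invoke it with the C collation.

#include "postgres.h"
#include "fmgr.h"
#include "catalog/pg_collation_d.h"
#include "utils/fmgroids.h"

static bool
opclass_options_equal_sketch(Datum opts1, Datum opts2)
{
    FmgrInfo    finfo;

    fmgr_info(F_ARRAY_EQ, &finfo);
    return DatumGetBool(FunctionCall2Coll(&finfo, C_COLLATION_OID,
                                          opts1, opts2));
}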
Commit 5a2fed911 had an unexpected side-effect: the parallel worker
launched for the new test case would fail if it couldn't use a
superuser-reserved connection slot. The reason that test failed
while all our pre-existing ones worked is that the connection
privilege tests in InitPostgres had been based on the superuserness
of the leader's AuthenticatedUserId, but after the rearrangements
of 5a2fed911 we were testing the superuserness of CurrentUserId,
which the new test case deliberately made to be a non-superuser.
This all seems very accidental and probably not the behavior we really
want, but a security patch is no time to be redesigning things.
Pending some discussion about desirable semantics, hack it so that
InitPostgres continues to pay attention to the superuserness of
AuthenticatedUserId when starting a parallel worker.
Nathan Bossart and Tom Lane, per buildfarm member sawshark.
Security: CVE-2024-10978
TestUpgradeXversion knows how to make the main regression database's
references to pg_regress.so be version-independent. But it doesn't
do that for plperl's database, so that the C function added by
commit b7e3a52a8 is causing cross-version upgrade test failures.
Path of least resistance is to just drop the function at the end
of the new test.
In <= v14, also take the opportunity to clean up the generated
test files.
Security: CVE-2024-10979
v16 commit 8fe3e697a1a83a722b107c7cb9c31084e1f4d077 used REGRESS_OPTS in
a way needing this. That broke "vcregress plcheck". Back-patch
v16..v12; newer versions don't have this build system.
v14 and earlier use generated test files, which require being
.gitignore'd to avoid git complaints when testing in-tree.
Security: CVE-2024-10979
The SQL spec mandates that SET SESSION AUTHORIZATION implies
SET ROLE NONE. We tried to implement that within the lowest-level
functions that manipulate these settings, but that was a bad idea.
In particular, guc.c assumes that it doesn't matter in what order
it applies GUC variable updates, but that was not the case for these
two variables. This problem, compounded by some hackish attempts to
work around it, led to some security-grade issues:
* Rolling back a transaction that had done SET SESSION AUTHORIZATION
would revert to SET ROLE NONE, even if that had not been the previous
state, so that the effective user ID might now be different from what
it had been.
* The same for SET SESSION AUTHORIZATION in a function SET clause.
* If a parallel worker inspected current_setting('role'), it saw
"none" even when it should see something else.
Also, although the parallel worker startup code intended to cope
with the current role's pg_authid row having disappeared, its
implementation of that was incomplete so it would still fail.
Fix by fully separating the miscinit.c functions that assign
session_authorization from those that assign role. To implement the
spec's requirement, teach set_config_option itself to perform "SET
ROLE NONE" when it sets session_authorization. (This is undoubtedly
ugly, but the alternatives seem worse. In particular, there's no way
to do it within assign_session_authorization without incompatible
changes in the API for GUC assign hooks.) Also, improve
ParallelWorkerMain to directly set all the relevant user-ID variables
instead of relying on some of them to get set indirectly. That
allows us to survive not finding the pg_authid row during worker
startup.
In v16 and earlier, this includes back-patching 9987a7bf3 which
fixed a violation of GUC coding rules: SetSessionAuthorization
is not an appropriate place to be throwing errors from.
Security: CVE-2024-10978
If a CTE, subquery, sublink, security invoker view, or coercion
projection references a table with row-level security policies, we
neglected to mark the plan as potentially dependent on which role
is executing it. This could lead to later executions in the same
session returning or hiding rows that should have been hidden or
returned instead.
Reported-by: Wolfgang Walther
Reviewed-by: Noah Misch
Security: CVE-2024-10976
Backpatch-through: 12
Many process environment variables (e.g. PATH), bypass the containment
expected of a trusted PL. Hence, trusted PLs must not offer features
that achieve setenv(). Otherwise, an attacker having USAGE privilege on
the language often can achieve arbitrary code execution, even if the
attacker lacks a database server operating system user.
To fix PL/Perl, replace trusted PL/Perl %ENV with a tied hash that just
replaces each modification attempt with a warning. Sites that reach
these warnings should evaluate the application-specific implications of
proceeding without the environment modification:
Can the application reasonably proceed without the modification?
If no, switch to plperlu or another approach.
If yes, the application should change the code to stop attempting
environment modifications. If that's too difficult, add "untie
%main::ENV" in any code executed before the warning. For example,
one might add it to the start of the affected function or even to
the plperl.on_plperl_init setting.
In passing, link to Perl's guidance about the Perl features behind the
security posture of PL/Perl.
Back-patch to v12 (all supported versions).
Andrew Dunstan and Noah Misch
Security: CVE-2024-10979
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: aa68946b3e7aa4447636af97ee7b0f249144085d
This commit changes libpq so that errors reported by the backend during
the protocol negotiation for SSL and GSS are discarded by the client, as
these may include bytes that could be consumed by the client and write
arbitrary bytes to a client's terminal.
A failure with the SSL negotiation now leads to an error immediately
reported, without a retry on any other methods allowed, like a fallback
to a plaintext connection.
A failure with GSS discards the error message received, and we allow a
fallback as it may be possible that the error is caused by a connection
attempt with a pre-12 server, GSS encryption having been introduced in
v12. This was a problem only with v17 and newer versions; older
versions discard the error message already in this case, assuming a
failure caused by a lack of support for GSS encryption.
Author: Jacob Champion
Reviewed-by: Peter Eisentraut, Heikki Linnakangas, Michael Paquier
Security: CVE-2024-10977
Backpatch-through: 12
Commit ac04aa84a put the shutoff for this into the planner, which is
not ideal because it doesn't prevent us from re-using a previously
made parallel plan. Revert the planner change and instead put the
shutoff into InitializeParallelDSM, modeling it on the existing code
there for recovering from failure to allocate a DSM segment.
However, that code path is mostly untested, and testing a bit harder
showed there's at least one bug: ExecHashJoinReInitializeDSM is not
prepared for us to have skipped doing parallel DSM setup. I also
thought the Assert in ReinitializeParallelWorkers is pretty
ill-advised, and replaced it with a silent Min() operation.
The existing test case added by ac04aa84a serves fine to test this
version of the fix, so no change needed there.
Patch by me, but thanks to Noah Misch for the core idea that we
could shut off worker creation when !INTERRUPTS_CAN_BE_PROCESSED.
Back-patch to v12, as ac04aa84a was.
Discussion: https://postgr.es/m/CAC-SaSzHUKT=vZJ8MPxYdC_URPfax+yoA1hKTcF4ROz_Q6z0_Q@mail.gmail.com
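
In outline, the new shutoff looks like this sketch (the helper name is
invented; the real code adjusts the ParallelContext inside
InitializeParallelDSM):

#include "postgres.h"
#include "miscadmin.h"

static int
clamp_parallel_workers_sketch(int nworkers)
{
    if (!INTERRUPTS_CAN_BE_PROCESSED())
        return 0;               /* run serially, as after DSM failure */
    return nworkers;
}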
If the collation of any join key column doesn’t match the collation of
the corresponding partition key, partitionwise joins can yield incorrect
results. For example, rows that would match under the join key collation
might be located in different partitions due to the partitioning
collation. In such cases, a partitionwise join would yield different
results from a non-partitionwise join, so disallow it in such cases.
Reported-by: Tender Wang <tndrwang@gmail.com>
Author: Jian He <jian.universality@gmail.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Reviewed-by: Junwang Zhao <zhjwpku@gmail.com>
Discussion: https://postgr.es/m/CAHewXNno_HKiQ6PqyLYfuqDtwp7KKHZiH1J7Pqyz0nr+PS2Dwg@mail.gmail.com
Backpatch-through: 12
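
A simplified sketch of the added guard (the function name is invented):
a join-key expression can only be matched to a partition key when their
collations agree.

#include "postgres.h"
#include "nodes/nodeFuncs.h"

static bool
partkey_collation_matches_sketch(Node *expr, Oid partcoll)
{
    /* an invalid partcoll means the key is not collation-sensitive */
    return !OidIsValid(partcoll) || exprCollation(expr) == partcoll;
}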