This switches the pgstats write code to use durable_rename() rather than
rename(). This ensures that the stats file's data is durable when the
statistics are written, which is something only happening at shutdown
now with the checkpointer doing the job.
Previously, the statistics could be lost even after PostgreSQL was shut
down cleanly, should a host failure happen, for example.
Suggested-by: Konstantin Knizhnik
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/ZpDQTZ0cAz0WEbh7@paquier.xyz
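
For illustration, the durable rename pattern amounts to roughly the
following POSIX sketch (a simplified analog, not PostgreSQL's actual
durable_rename(), which also supports elevel-based error reporting):

    #include <fcntl.h>
    #include <libgen.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Rename oldpath to newpath so the result survives a host crash. */
    static int
    durable_rename_sketch(const char *oldpath, const char *newpath)
    {
        char    dirbuf[1024];
        int     fd;

        /* First flush the source file's contents to stable storage. */
        fd = open(oldpath, O_RDWR);
        if (fd < 0)
            return -1;
        if (fsync(fd) != 0)
        {
            close(fd);
            return -1;
        }
        close(fd);

        /* The rename itself is atomic on POSIX filesystems. */
        if (rename(oldpath, newpath) != 0)
            return -1;

        /* fsync the directory too, else the new entry may be lost. */
        strncpy(dirbuf, newpath, sizeof(dirbuf) - 1);
        dirbuf[sizeof(dirbuf) - 1] = '\0';
        fd = open(dirname(dirbuf), O_RDONLY);
        if (fd < 0)
            return -1;
        if (fsync(fd) != 0)
        {
            close(fd);
            return -1;
        }
        close(fd);
        return 0;
    }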
 
Previously, CREATE MATERIALIZED VIEW ... WITH DATA populated the MV
the same way as CREATE TABLE ... AS.
Instead, reuse the REFRESH logic, which locks down security-restricted
operations and restricts the search_path. This reduces the chance that
a subsequent refresh will fail.
Reported-by: Noah Misch
Backpatch-through: 17
Discussion: https://postgr.es/m/20240630222344.db.nmisch@google.com
 
This commit provides testing coverage for ccd38024bc3c, checking that a
role granted pg_signal_autovacuum_worker is able to stop a vacuum
worker.
An injection point with a wait is placed at the beginning of autovacuum
worker startup to make sure that a worker is still alive when the
signal is sent and processed.
Author: Anthony Leung, Michael Paquier, Kirill Reshke
Reviewed-by: Andrey Borodin, Nathan Bossart
Discussion: https://postgr.es/m/CALdSSPiQPuuQpOkF7x0g2QkA5eE-3xXt7hiJFvShV1bHKDvf8w@mail.gmail.com
 
Oops.
Reported-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/ZpVZB9rH5tHllO75@nathan
Backpatch: 12-, like 43cd30bcd1c
 
Reported-by: Noah Misch
Backpatch-through: 17
Discussion: https://postgr.es/m/20240630222344.db.nmisch@google.com
 
Before this change guc_var_compare() cast the input arguments to
const struct config_generic *. That's not quite right however, as the
input on one side is often just a char *.
Instead just use char *, the first field in config_generic.
This fixes a -Warray-bounds warning with some versions of gcc. While the
warning is only known to be triggered for <= 15, the issue the warning points
out seems real, so apply the fix everywhere.
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Reported-by: Erik Rijkers <er@xs4all.nl>
Suggested-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/a74a1a0d-0fd2-3649-5224-4f754e8f91aa%40xs4all.nl
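
A self-contained analog of the fix (simplified: the real comparator
compares GUC names case-insensitively):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct config_generic
    {
        const char *name;       /* the option name; first field */
        int         flags;      /* ...more fields follow... */
    };

    /*
     * bsearch() comparator.  One side is an array element (a struct
     * pointer), the other may be a pointer to a bare name string, so
     * dereference both only as far as the name -- the first field --
     * rather than casting the key to the full struct type.
     */
    static int
    var_name_compare(const void *a, const void *b)
    {
        const char *namea = **(const char *const *const *) a;
        const char *nameb = **(const char *const *const *) b;

        return strcmp(namea, nameb);
    }

    int
    main(void)
    {
        struct config_generic sb = {"shared_buffers", 0};
        struct config_generic wm = {"work_mem", 0};
        struct config_generic *vars[] = {&sb, &wm}; /* sorted by name */
        const char *key = "work_mem";
        const char **keyp = &key;
        struct config_generic **hit;

        hit = bsearch(&keyp, vars, 2, sizeof(vars[0]), var_name_compare);
        printf("%s\n", hit ? (*hit)->name : "not found");
        return 0;
    }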
 
The problem fixed by commit 53c8d6c9 would have been noticed if we'd
been running LLVM's verify pass on generated IR. Doing so also reveals
a complaint about incorrect name mangling, fixed here. Only enabled for
LLVM 17+ because it uses the new pass manager API.
Suggested-by: Dmitry Dolgov <9erthalion6@gmail.com>
Discussion: https://postgr.es/m/CAFj8pRACpVFr7LMdVYENUkScG5FCYMZDDdSGNU-tch%2Bw98OxYg%40mail.gmail.com
 
It's a ProcNumber, not a process id. Both are integers, so it's
harmless, but clearly wrong. It's been wrong since forever; the
mistake has survived a couple of refactorings already.
Spotted-by: Thomas Munro
Discussion: https://www.postgresql.org/message-id/CA+hUKGKPTLSGMyE4Brin-osY8omPLNXmVWDMfrRABLp=6QrR_Q@mail.gmail.com
 
This allows using injection points without having a PGPROC, like early
at backend startup, or in the postmaster.
The injection points facility is new in v17, so backpatch there.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/4317a7f7-8d24-435e-9e49-29b72a3dc418@iki.fi
 
As commit ca4103025d stated, new partitions without a specified tablespace
should inherit the parent relation's tablespace. However, previously,
ALTER TABLE MERGE PARTITIONS and ALTER TABLE SPLIT PARTITION commands
always created new partitions in the default tablespace, ignoring
the parent's tablespace. This commit ensures new partitions inherit
the parent's tablespace.
Backpatch to v17 where these commands were introduced.
Author: Fujii Masao
Reviewed-by: Masahiko Sawada
Discussion: https://postgr.es/m/abaf390b-3320-40a5-8815-ef476db5cfe7@oss.nttdata.com
 
If we intend to generate a Memoize node on top of a path, we need
cache keys of some sort. Currently we search for the cache keys in
the parameterized clauses of the path as well as the lateral_vars of
its parent. However, it turns out that this is not sufficient because
there might be lateral references derived from PlaceHolderVars, which
we fail to take into consideration.
This oversight can cause us to miss opportunities to utilize the
Memoize node. Moreover, in some plans, failing to recognize all the
cache keys could result in performance regressions. This is because
without identifying all the cache keys, we would need to purge the
entire cache every time we get a new outer tuple during execution.
This patch fixes this issue by extracting lateral Vars from within
PlaceHolderVars and subsequently including them in the cache keys.
In passing, this patch also includes a comment clarifying that Memoize
nodes are currently not added on top of join relation paths. This
explains why this patch only considers PlaceHolderVars that are due to
be evaluated at baserels.
Author: Richard Guo
Reviewed-by: Tom Lane, David Rowley, Andrei Lepikhov
Discussion: https://postgr.es/m/CAMbWs48jLxn0pAPZpJ50EThZ569Xrw+=4Ac3QvkpQvNszbeoNg@mail.gmail.com
 
checkWellFormedRecursion would issue "missing recursive reference"
if a WITH RECURSIVE query contained a single self-reference but
that self-reference was inside a top-level WITH, ORDER BY, LIMIT,
etc, rather than inside the second arm of the UNION as expected.
We already intended to throw more-on-point errors for such cases,
but those error checks must be done before examining the UNION arm
in order to have the desired results. So this patch need only
move some code (and improve the comments).
Per bug #18536 from Alexander Lakhin. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/18536-0a342ec07901203e@postgresql.org
 
Such queries don't expand automatically updatable views, and ModifyTable
uses the wholerow attribute unconditionally. The user-visible behavior
is fine, so change to more-specific assertions. Commit
d5f788b41dc2cbdde6e7694c70dda54d829a5ed5 added the wrong assertion.
Back-patch to v17, where commit 5f2e179bd31e5f5803005101eb12a8d7bf8db8f3
introduced MERGE view_name.
Reported by Alexander Lakhin.
Discussion: https://postgr.es/m/e4b40a88-c134-6926-3196-bc4501cb87a2@gmail.com
 
ANALYZE sets relhassubclass=f when a partitioned table no longer has
partitions. An ANALYZE doing that proceeded to apply the inplace update
of pg_class.reltuples to the old pg_class tuple instead of the new
tuple, losing that reltuples=0 change if the ANALYZE committed.
Non-partitioning inheritance trees were unaffected. Back-patch to v14,
where commit 375aed36ad83f0e021e9bdd3a0034c0c992c66dc introduced
maintenance of partitioned table pg_class.reltuples.
Reported by Alexander Lakhin.
Discussion: https://postgr.es/m/a295b499-dcab-6a99-c06e-01cf60593344@gmail.com
 
Winsock only signals an FD_CLOSE event once if the other end of the
socket shuts down gracefully. Because each WaitLatchOrSocket() call
constructs and destroys a new event handle every time, with unlucky
timing we can lose it and hang. We get away with this only if the other
end disconnects non-gracefully, because FD_CLOSE is repeatedly signaled
in that case.
To fix this design flaw in our Windows socket support fundamentally,
we'd probably need to rearchitect it so that a single event handle
exists for the lifetime of a socket, or switch to completely different
multiplexing or async I/O APIs. That's going to be a bigger job
and probably wouldn't be back-patchable.
This brute force kludge closes the race by explicitly polling with
MSG_PEEK before sleeping.
Back-patch to all supported releases. This should hopefully clear up
some random build farm and CI hang failures reported over the years. It
might also allow us to try using graceful shutdown in more places again
(reverted in commit 29992a6) to fix instability in the transmission of
FATAL error messages, but that isn't done by this commit.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Tested-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/176008.1715492071%40sss.pgh.pa.us
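
The gist of the kludge, sketched with POSIX names for readability (the
committed fix lives in the Windows wait-event code and tests Winsock
error codes rather than errno):

    #include <errno.h>
    #include <stdbool.h>
    #include <sys/socket.h>

    /*
     * Peek at the socket before sleeping: recv() with MSG_PEEK returns
     * 0 if the peer has already shut down gracefully, so we must not
     * wait for an FD_CLOSE-style event that may never fire again.
     */
    static bool
    peer_has_closed_gracefully(int sock)
    {
        char    c;
        ssize_t n = recv(sock, &c, 1, MSG_PEEK);

        if (n == 0)
            return true;    /* EOF: graceful shutdown already seen */
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
            return false;   /* nothing pending; safe to sleep */
        return false;       /* data available or hard error: let the
                             * normal read path handle it */
    }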
 
When a partitioned table has an index that doesn't support a constraint,
but a partition has an equivalent index that does, then a DETACH
operation would misbehave: a crash in assertion-enabled systems (because
we fail to find the constraint in the parent that we expect to), or a
broken coninhcount value (-1) in production systems (because we blindly
believe that we've successfully detached the parent).
While we should reject an ATTACH of a partition with such an index, we
have failed to do so in existing releases, so adding an error in stable
releases might break the (unlikely) existing applications that rely on
this behavior. At this point I don't even want to reject them in
master, because it'd break pg_upgrade if such databases exist, and there
would be no easy way to fix existing databases without expensive index
rebuilds.
(Later on we could add ALTER TABLE ... ADD CONSTRAINT USING INDEX to
partitioned tables, which would allow the user to fix such patterns. At
that point we could add more restrictions to prevent the problem from
its root.)
Also, add a test case that leaves one table in this condition, so that
we can verify that pg_upgrade continues to work if we later decide to
change the policy on the master branch.
Backpatch to all supported branches.
Co-authored-by: Tender Wang <tndrwang@gmail.com>
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/18500-62948b6fe5522f56@postgresql.org
 
This routine can currently only be called from the postmaster in
single-user mode or the checkpointer, but there was no sanity check to
make sure that this was always the case.
This has proved to be useful when hacking in this area (at least to
me), to make sure that the write of the pgstats file happens at
shutdown, as wanted by design, in the correct process context.
Discussion: https://postgr.es/m/ZnEiqAITL-VgZDoY@paquier.xyz
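
A plausible shape for such a check (illustrative; the exact condition
committed may differ):

    /*
     * Only the checkpointer, or a standalone backend in single-user
     * mode (where IsUnderPostmaster is false), may write the file.
     */
    Assert(!IsUnderPostmaster || MyBackendType == B_CHECKPOINTER);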
 
Author: ChangAo Chen
Discussion: https://postgr.es/m/tencent_CDECB843B30A8B6B5152FA6458F0F00FDE09@qq.com
 
When generating non-parallel nestloop paths for each available outer
path, we always consider materializing the cheapest inner path if
feasible. Similarly, in this patch, we also consider materializing
the cheapest inner path when building partial nestloop paths. This
approach potentially reduces the need to rescan the inner side of a
partial nestloop path for each outer tuple.
Author: Tender Wang
Reviewed-by: Richard Guo, Robert Haas, David Rowley, Alena Rybakina
Reviewed-by: Tomasz Rybak, Paul Jungwirth, Yuki Fujii
Discussion: https://postgr.es/m/CAHewXNkPmtEXNfVQMou_7NqQmFABca9f4etjBtdbbm0ZKDmWvw@mail.gmail.com
 
The comment at the top of pgstat_read_statsfile() mentioned that the
stats are read from the on-disk file into the pgstats dshash. This is
incorrect for fix-numbered stats as these are loaded directly into
shared memory. This commit simplifies the comment to be more general.
Author: Bertrand Drouvot
Discussion: https://postgr.es/m/Zo/eJIHUcqKxeSgv@ip-10-97-1-34.eu-west-3.compute.internal
 
These messages mostly said "could not connect to the publisher: %s"
which is lacking context. Add some verbiage to indicate which
subscription or worker process is failing.
Nisha Moond
Discussion: https://postgr.es/m/CABdArM7q1=zqL++cYd0hVMg3u_tc0S=0Of=Um-KvDhLony0cSg@mail.gmail.com
 
Like min/max for arrays, these are just thin wrappers around
the existing btree comparison function for records.
Aleksander Alekseev
Discussion: https://postgr.es/m/CAO=iB8L4WYSNxCJ8GURRjQsrXEQ2-zn3FiCsh2LMqvWq2WcONg@mail.gmail.com
 
When creating and initializing a logical slot, the restart_lsn is set
to the latest WAL insertion point (or the latest replay point on
standbys). Subsequently, WAL records are decoded from that point to
find the start point for extracting changes in the
DecodingContextFindStartpoint() function. Since the initial
restart_lsn could be in the middle of a transaction, the start point
must be a consistent point where we won't see the data for partial
transactions.
Previously, when not building a full snapshot, serialized snapshots
were restored, and SnapBuild jumped to the consistent state even
while finding the start point. Consequently, the slot's restart_lsn
and confirmed_flush could be set to the middle of a transaction. This
could lead to various unexpected consequences. Specifically, there
were reports of logical decoding emitting partial transactions, and
assertion failures occurred because only subtransactions were decoded
without decoding their top-level transaction until decoding the commit
record.
To resolve this issue, the changes prevent restoring the serialized
snapshot and jumping to the consistent state while finding the start
point.
On v17 and HEAD, a flag indicating whether snapshot restores should be
skipped has been added to the SnapBuild struct, and SNAPBUILD_VERSION
has been bumped.
On backbranches, the flag is stored in the LogicalDecodingContext
instead, preserving on-disk compatibility.
Backpatch to all supported versions.
Reported-by: Drew Callahan
Reviewed-by: Amit Kapila, Hayato Kuroda
Discussion: https://postgr.es/m/2444AA15-D21B-4CCE-8052-52C7C2DAFE5C%40amazon.com
Backpatch-through: 12
 
This new entry type is used for all the fixed-numbered statistics,
making support for custom pluggable stats possible. In short, we need
to be able to detect more easily if a stats kind exists or not when
reading back its data from the pgstats file without a dependency on the
order of the entries read. The kind ID of the stats is added to the
data written.
The data is written in the same fashion as previously, with the
fixed-numbered stats first and the dshash entries next. The read part
becomes more flexible, loading fixed-numbered stats into shared memory
based on the new entry type found.
Bump PGSTAT_FILE_FORMAT_ID.
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/Zot5bxoPYdS7yaoy@paquier.xyz
 
This new callback gives fixed-numbered stats the possibility to take
actions based on the area of shared memory allocated for them.
This removes from pgstat_shmem.c any knowledge specific to the types
of fixed-numbered stats, and the initializations happen in their own
files. Like b68b29bc8fec, this change is useful to make this area of
the code more pluggable, so that custom fixed-numbered stats can take
actions after their shared memory area is initialized.
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/Zot5bxoPYdS7yaoy@paquier.xyz
 
Formerly, the computation of the bucket index involved calling
div_var() with a scale determined by select_div_scale(), and then
taking the floor of the result. That involved computing anything from
16 to 1000 digits after the decimal point, only for floor_var() to
throw them away. In addition, the quotient was computed with rounding
in the final digit, which meant that in rare cases the whole result
could round up to the wrong bucket, and could exceed count. Thus it
was also necessary to clamp the result to the range [1, count], though
that didn't prevent the result being in the wrong internal bucket.
Instead, compute the quotient using floor division, which guarantees
the correct result, as specified by the SQL spec, and doesn't need to
be clamped. This is both much simpler and more efficient, since it no
longer computes any quotient digits after the decimal point.
In addition, it is not necessary to have separate code to handle
reversed bounds, since the signs cancel out when dividing.
As with b0e9e4d76c and a2a0c7c29e, no back-patch.
Dean Rasheed, reviewed by Joel Jacobson.
Discussion: https://postgr.es/m/CAEZATCVbJH%2BLE9EXW8Rk3AxLe%3DjbOk2yrT_AUJGGh5Rah6zoeg%40mail.gmail.com
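
The core of the change, transcribed to plain C integers (the real code
works on arbitrary-precision numerics; reversed bounds need no special
casing because the signs cancel out in the division):

    #include <stdio.h>

    /* width_bucket() for longs: floor division lands the operand in
     * the correct bucket directly, with no rounding and no clamping. */
    static long
    width_bucket_sketch(long operand, long low, long high, long count)
    {
        if (operand < low)
            return 0;               /* below the histogram */
        if (operand >= high)
            return count + 1;       /* above the histogram */

        /* truncation equals floor here: all terms are non-negative
         * after the range checks (assuming low < high) */
        return (operand - low) * count / (high - low) + 1;
    }

    int
    main(void)
    {
        /* ten buckets over [0, 100): 25 falls in bucket 3 */
        printf("%ld\n", width_bucket_sketch(25, 0, 100, 10));
        return 0;
    }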
 
This patch modifies the pg_get_acl() function to accept a third argument
called "objsubid", bringing it on par with similar functions in this
area like pg_describe_object(). This enables the retrieval of ACLs for
relation attributes when scanning dependencies.
Bump catalog version.
Author: Joel Jacobson
Discussion: https://postgr.es/m/f2539bff-64be-47f0-9f0b-df85d3cc0432@app.fastmail.com
 
Reported-by: Noah Misch
Discussion: https://postgr.es/m/20240630223047.1f.nmisch@google.com
Backpatch-through: 17
 
libxml2 2.13 has an entirely different rule than earlier versions
about when to emit "chunk is not well balanced" errors. This
causes regression test output discrepancies for three test cases
that formerly provoked that error (along with others) and now don't.
Closer inspection shows that at least in 2.13, this error is pretty
useless because it can only be emitted after some other more-relevant
error. So let's get rid of the cross-version discrepancy by just
suppressing it. In case some older libxml2 version is capable of
emitting this error by itself, suppress only when some other error
has already been captured.
Like 066e8ac6e and 6082b3d5d, this will need to be back-patched,
but let's check the results in HEAD first. (The patch for xml_2.out,
in particular, is blind since I can't test it here.)
Erik Wienhold and Tom Lane, per report from Frank Streitzig.
Discussion: https://postgr.es/m/trinity-b0161630-d230-4598-9ebc-7a23acdb37cb-1720186432160@3c-app-gmx-bap25
Discussion: https://postgr.es/m/trinity-361ba18b-541a-4fe7-bc63-655ae3a7d599-1720259822452@3c-app-gmx-bs01
 
Since commit 3a9b18b309, roles with privileges of pg_signal_backend
cannot signal autovacuum workers. Many users treated the ability
to signal autovacuum workers as a feature instead of a bug, so we
are reintroducing it via a new predefined role. Having privileges
of this new role, named pg_signal_autovacuum_worker, only permits
signaling autovacuum workers. It does not permit signaling other
types of superuser backends.
Bumps catversion.
Author: Kirill Reshke
Reviewed-by: Anthony Leung, Michael Paquier, Andrey Borodin
Discussion: https://postgr.es/m/CALdSSPhC4GGmbnugHfB9G0%3DfAxjCSug_-rmL9oUh0LTxsyBfsg%40mail.gmail.com
 
Previously, the comment incorrectly stated that libpqrcv_check_conninfo()
returns true or false based on the connection string check.
However, this function actually has a void return type and
raises an error if the check fails.
Author: Rintaro Ikeda
Reviewed-by: Jelte Fennema-Nio, Fujii Masao
Discussion: https://postgr.es/m/6a1ca81b27fec4da0ccdfaaaec787982@oss.nttdata.com
 
When either input has a small number of digits, and the exact product
is requested, the speed of numeric multiplication can be increased
significantly by using a faster direct multiplication algorithm. This
works by fully computing each result digit in turn, starting with the
least significant, and propagating the carry up. This saves cycles by
not requiring a temporary buffer to store digit products, not making
multiple passes over the digits of the longer input, and not requiring
separate carry-propagation passes.
For now, this is used when the shorter input has 1-4 NBASE digits (up
to 13-16 decimal digits), and the longer input is of any size, which
covers a lot of common real-world cases. Also, the relative benefit
increases as the size of the longer input increases.
Possible future work would be to try extending the technique to larger
numbers of digits in the shorter input.
Joel Jacobson and Dean Rasheed.
Discussion: https://postgr.es/m/44d2ffca-d560-4919-b85a-4d07060946aa@app.fastmail.com
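
A self-contained sketch of the digit-at-a-time idea (the committed code
is more careful: it bounds the shorter input's size, defers carry
propagation to avoid overflow, and operates on NumericDigit arrays):

    #include <stdio.h>

    #define NBASE 10000

    /* Multiply base-NBASE digit arrays stored least-significant-first,
     * computing each result digit fully before moving to the next. */
    static void
    mul_direct(const int *a, int alen, const int *b, int blen, int *res)
    {
        long long   carry = 0;

        for (int ri = 0; ri < alen + blen; ri++)
        {
            long long   acc = carry;

            /* sum every digit product contributing to result digit ri */
            for (int ai = 0; ai < alen; ai++)
            {
                int     bi = ri - ai;

                if (bi >= 0 && bi < blen)
                    acc += (long long) a[ai] * b[bi];
            }
            res[ri] = (int) (acc % NBASE);
            carry = acc / NBASE;
        }
    }

    int
    main(void)
    {
        int         a[] = {3456, 12};   /* 123456, LSD first */
        int         b[] = {789};
        int         res[3] = {0};
        long long   v;

        mul_direct(a, 2, b, 1, res);
        v = ((long long) res[2] * NBASE + res[1]) * NBASE + res[0];
        printf("%lld\n", v);            /* 97406784 */
        return 0;
    }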
 
Commit 0fdab27ad6 changed the code to wait for WAL to be available before
determining the timeline but forgot to move the failure check.
This change makes the related code easier to understand and enhance;
there is no actual bug in the current code.
In passing, improve the nearby comments to explain why we determine
am_cascading_walsender after waiting for the required WAL.
Author: Peter Smith
Reviewed-by: Bertrand Drouvot, Amit Kapila
Discussion: https://postgr.es/m/CAHut+PvqX49fusLyXspV1Mmd_EekPtXG0oT146vZjcb9XDvNgw@mail.gmail.com
 
This is similar to 9004abf6206e, but this time for the write part of the
stats file. Rather than referring to individual members of
PgStat_Snapshot in an order based on their PgStat_Kind value, the code
now uses a loop over pgstat_kind_infos to retrieve the contents to
write from the snapshot structure, with each PgStat_KindInfo's
shared_data_len giving the size.
This requires the addition to PgStat_KindInfo of an offset to track the
location of each fixed-numbered stats in PgStat_Snapshot. This change
is useful to make this area of the code more easily pluggable, and
reduces the knowledge of specific fixed-numbered kinds in pgstat.c.
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/Zot5bxoPYdS7yaoy@paquier.xyz
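
In standalone form the scheme looks like this (all names invented for
illustration; the real loop walks pgstat_kind_infos and reads from
PgStat_Snapshot at the recorded offset):

    #include <stddef.h>
    #include <stdio.h>

    typedef struct Snapshot
    {
        int     checkpointer[4];
        int     bgwriter[2];
    } Snapshot;

    typedef struct KindInfo
    {
        int     kind;               /* written with the payload */
        size_t  snapshot_off;       /* where the kind's data lives */
        size_t  shared_data_len;    /* how many bytes to write */
    } KindInfo;

    static const KindInfo kind_infos[] = {
        {1, offsetof(Snapshot, checkpointer), 4 * sizeof(int)},
        {2, offsetof(Snapshot, bgwriter), 2 * sizeof(int)},
    };

    /* One generic loop replaces per-kind code; writing the kind ID
     * frees the reader from depending on entry order. */
    static void
    write_stats(const Snapshot *snap, FILE *out)
    {
        for (size_t i = 0; i < sizeof(kind_infos) / sizeof(kind_infos[0]); i++)
        {
            const KindInfo *info = &kind_infos[i];

            fwrite(&info->kind, sizeof(info->kind), 1, out);
            fwrite((const char *) snap + info->snapshot_off,
                   info->shared_data_len, 1, out);
        }
    }

    int
    main(void)
    {
        Snapshot    snap = {{1, 2, 3, 4}, {5, 6}};

        write_stats(&snap, stdout);     /* binary, kind IDs included */
        return 0;
    }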
 
Previously, pg_wal_summary_contents() had two issues,
causing discrepancies between pg_wal_summary_contents()
and the pg_walsummary command on the same WAL summary file:
(1) It did not emit the limit block when that's the only data for
a particular relation fork.
(2) It emitted the same limit block multiple times if the list of
block numbers was long enough.
This commit fixes these issues.
Backpatch to v17 where pg_wal_summary_contents() was added.
Author: Fujii Masao
Reviewed-by: Robert Haas
Discussion: https://postgr.es/m/90980ee6-2da6-42f6-a7b0-b7bae62ae279@oss.nttdata.com
 
Nodes like Memoize report the cache stats for each parallel worker, so it
makes sense to show the exact and lossy pages in Parallel Bitmap Heap Scan
in a similar way. Likewise, Sort shows the method and memory used for
each worker.
There was some discussion on whether the leader stats should include the
totals for each parallel worker or not. I did some analysis on this to
see what other parallel node types do and it seems only Parallel Hash does
anything like this. All the rest, per what's supported by
ExecParallelRetrieveInstrumentation(), are consistent with each other.
Author: David Geier <geidav.pg@gmail.com>
Author: Heikki Linnakangas <hlinnaka@iki.fi>
Author: Donghang Lin <donghanglin@gmail.com>
Author: Alena Rybakina <lena.ribackina@yandex.ru>
Author: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: Dmitry Dolgov <9erthalion6@gmail.com>
Reviewed-by: Michael Christofides <michael@pgmustard.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Donghang Lin <donghanglin@gmail.com>
Reviewed-by: Masahiro Ikeda <Masahiro.Ikeda@nttdata.com>
Discussion: https://postgr.es/m/b3d80961-c2e5-38cc-6a32-61886cdf766d%40gmail.com
 
This provides the planner with row estimates for
generate_series(TIMESTAMP, TIMESTAMP, INTERVAL),
generate_series(TIMESTAMPTZ, TIMESTAMPTZ, INTERVAL) and
generate_series(TIMESTAMPTZ, TIMESTAMPTZ, INTERVAL, TEXT) when the input
parameter values can be estimated during planning.
Author: David Rowley
Reviewed-by: jian he <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CAApHDvrBE%3D%2BASo_sGYmQJ3GvO8GPvX5yxXhRS%3Dt_ybd4odFkhQ%40mail.gmail.com
 
While this strategy is ordinarily quite costly because it requires
performing two checkpoints, testing shows that it tends to be a
faster choice than WAL_LOG during pg_upgrade, presumably because
fsync is turned off. Furthermore, we can skip the checkpoints
altogether because the problems they are intended to prevent don't
apply to pg_upgrade. Instead, we just need to CHECKPOINT once in
the new cluster after making any changes to template0 and before
restoring the rest of the databases. This ensures that said
template0 changes are written out to disk prior to creating the
databases via FILE_COPY.
Co-authored-by: Matthias van de Meent
Reviewed-by: Ranier Vilela, Dilip Kumar, Robert Haas, Michael Paquier
Discussion: https://postgr.es/m/Zl9ta3FtgdjizkJ5%40nathan
 
xmlParseInNodeContext has basically the same functionality with
a different API: we have to supply an xmlNode that's attached to a
document rather than just the document. That's not hard though.
The benefits are twofold:
* Early 2.13.x releases of libxml2 contain a bug that causes
xmlParseBalancedChunkMemory to return the wrong status value in some
cases. This breaks our regression tests. While that bug is now fixed
upstream and will probably never be seen in any production-oriented
distro, it is currently a problem on some more-bleeding-edge-friendly
platforms.
* xmlParseBalancedChunkMemory is considered to depend on libxml2's
semi-deprecated SAX1 APIs, and will go away when and if they do.
There may already be libxml2 builds out there that lack this function.
So there are both short- and long-term reasons to make this change.
While here, avoid allocating an xmlParserCtxt in DOCUMENT parse mode,
since that code path is not going to use it.
Like 066e8ac6e, this will need to be back-patched. This is just a
trial commit to see if the buildfarm agrees that we can use
xmlParseInNodeContext unconditionally.
Erik Wienhold and Tom Lane, per report from Frank Streitzig.
Discussion: https://postgr.es/m/trinity-b0161630-d230-4598-9ebc-7a23acdb37cb-1720186432160@3c-app-gmx-bap25
Discussion: https://postgr.es/m/trinity-361ba18b-541a-4fe7-bc63-655ae3a7d599-1720259822452@3c-app-gmx-bs01
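
Minimal usage sketch (error handling elided; the node passed in must be
attached to a document):

    #include <string.h>
    #include <libxml/parser.h>
    #include <libxml/tree.h>

    int
    main(void)
    {
        const char     *chunk = "<a>one</a><b>two</b>";
        xmlDocPtr       doc;
        xmlNodePtr      root;
        xmlNodePtr      list = NULL;
        xmlParserErrors res;

        doc = xmlNewDoc(BAD_CAST "1.0");
        root = xmlNewNode(NULL, BAD_CAST "root");
        xmlDocSetRootElement(doc, root);

        /* parse the fragment in the context of the attached node */
        res = xmlParseInNodeContext(root, chunk, (int) strlen(chunk),
                                    0, &list);
        if (res == XML_ERR_OK && list != NULL)
            xmlAddChildList(root, list);    /* adopt the parsed nodes */

        xmlFreeDoc(doc);
        return res == XML_ERR_OK ? 0 : 1;
    }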
 
The numeric round() and trunc() functions clamp the scale argument to
the range between +/- NUMERIC_MAX_RESULT_SCALE (2000), which is much
smaller than the actual allowed range of type numeric. As a result,
they return incorrect results when asked to round/truncate more than
2000 digits before or after the decimal point.
Fix by using the correct upper and lower scale limits based on the
actual allowed (and documented) range of type numeric.
While at it, use the new NUMERIC_WEIGHT_MAX constant instead of
SHRT_MAX in all other overflow checks, and fix a comment thinko in
power_var() introduced by e54a758d24 -- the minimum value of
ln_dweight is -NUMERIC_DSCALE_MAX (-16383), not -SHRT_MAX, though this
doesn't affect the point being made in the comment, that the resulting
local_rscale value may exceed NUMERIC_MAX_DISPLAY_SCALE (1000).
Back-patch to all supported branches.
Dean Rasheed, reviewed by Joel Jacobson.
Discussion: https://postgr.es/m/CAEZATCXB%2BrDTuMjhK5ZxcouufigSc-X4tGJCBTMpZ3n%3DxxQuhg%40mail.gmail.com
 
Both of these counters were using the "long" data type. On MSVC that's
a 32-bit type. On modern hardware, I was able to demonstrate that we can
wrap those counters with a query that only takes 15 minutes to run.
This issue may manifest itself either by not showing the values of the
counters because they've wrapped and are less than zero, resulting in
them being filtered by the > 0 checks in show_tidbitmap_info(), or bogus
numbers being displayed which are the actual number modulo 2^32.
Widen these counters to uint64.
Discussion: https://postgr.es/m/CAApHDvpS_97TU+jWPc=T83WPp7vJa1dTw3mojEtAVEZOWh9bjQ@mail.gmail.com
 
Author: Junwang Zhao
Discussion: https://postgr.es/m/CAEG8a3L9GgfKc+XT+NMHPY7atAOVYqjUqKEFQKhcPHFYRW=PuQ@mail.gmail.com
 
For an inner_unique join, we always assume that the executor will stop
scanning for matches after the first match. Therefore, for a mergejoin
that is inner_unique and whose mergeclauses are sufficient to identify a
match, we set the skip_mark_restore flag to true, indicating that the
executor need not do mark/restore calls. However, merge-right-anti-join
did not get this memo and continued scanning the inner side for matches
after the first match. If there are duplicates in the outer scan, we
may incorrectly skip matching some inner tuples, which can lead to wrong
results.
Here we fix this issue by ensuring that merge-right-anti-join also
advances to the next outer tuple after the first match in inner_unique
cases. This also saves cycles by avoiding unnecessary scanning of inner
tuples after the first match.
Although hash-right-anti-join does not suffer from this wrong results
issue, we apply the same change to it as well, to help save cycles for
the same reason.
Per bug #18522 from Antti Lampinen, and bug #18526 from Feliphe Pozzer.
Back-patch to v16 where right-anti-join was introduced.
Author: Richard Guo
Discussion: https://postgr.es/m/18522-c7a8956126afdfd0@postgresql.org
 
It looks like we should have been doing this all along,
but we got away with the wrong coding until libxml2 2.13.0
tightened up xmlAddChild's behavior.
There is more stuff to be fixed to be compatible with 2.13.0,
and it will all need to be back-patched. This is just a
trial commit to see if the buildfarm agrees that we can use
xmlAddChildList unconditionally.
Erik Wienhold, per report from Frank Streitzig.
Discussion: https://postgr.es/m/trinity-b0161630-d230-4598-9ebc-7a23acdb37cb-1720186432160@3c-app-gmx-bap25
Discussion: https://postgr.es/m/trinity-361ba18b-541a-4fe7-bc63-655ae3a7d599-1720259822452@3c-app-gmx-bs01
 
590b045c3 made it so tuplestore.c would store tuples inside a
generation.c memory context. After fixing a bug report in 97651b013, it
seems that it's probably best not to make BufFile-related
allocations in that context. Let's keep it just for tuple data.
This adjusts the code to switch to the Tuplestorestate.context's parent,
which is the MemoryContext that tuplestore_begin_common() was called in.
It does not seem worth adding a new field in Tuplestorestate to store
this when we can access it by looking at the Tuplestorestate's
context's parent.
Discussion: https://postgr.es/m/CAApHDvqFt_CdJtSr+E9YLZb7jZAyRCy3hjQ+ktM+dcOFVq-xkg@mail.gmail.com
 
This only affects MEMORY_CONTEXT_CHECKING builds.
This fixes an off-by-one issue in GenerationRealloc()'s fast-path
code, which tries to reuse the existing allocation if the existing
chunk is >= the new requested size. The code there thought it was
always ok to use the existing chunk, but when oldsize == size there
isn't enough space to store the sentinel byte. If both sizes matched
exactly, set_sentinel() would overwrite the first byte beyond the
chunk, and subsequent GenerationRealloc() calls could then fail the
Assert(chunk->requested_size < oldsize) check which is trying to ensure
the chunk is large enough to store the sentinel.
The same issue does not exist in aset.c as the sentinel checking code
only adds a sentinel byte if there's enough space in the chunk.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/49275921-7b39-41af-5eb8-97b50ce3312e@gmail.com
Backpatch-through: 16, where the problem was introduced by 0e480385e
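
The invariant, boiled down to a self-contained check (in the real
allocator the sentinel byte sits at chunk start + requested_size):

    #include <assert.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* Can an existing chunk of oldsize bytes satisfy a realloc to
     * "size" bytes when a sentinel byte must follow the data?  The
     * buggy fast path tested "oldsize >= size". */
    static bool
    chunk_reusable(size_t oldsize, size_t size)
    {
        return oldsize > size;  /* strict: one byte for the sentinel */
    }

    int
    main(void)
    {
        assert(chunk_reusable(64, 32));     /* data plus sentinel fit */
        assert(!chunk_reusable(64, 64));    /* no room for the sentinel */
        return 0;
    }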
 
Each of max_connections, max_worker_processes,
autovacuum_max_workers, and max_wal_senders has a GUC check hook
that verifies the sum of those GUCs does not exceed a hard-coded
limit (see the comment for MAX_BACKENDS in postmaster.h). In
general, the hooks effectively guard against egregious
misconfigurations.
However, this approach has some problems. Since these check hooks
are called as each GUC is assigned its user-specified value, only
one of the hooks will be called with all the relevant GUCs set. If
one or more of the user-specified values are less than the initial
values of the GUCs' underlying variables, false positives can
occur.
Furthermore, the error message emitted when one of the check hooks
fails is not tremendously helpful. For example, the command
$ pg_ctl -D . start -o "-c max_connections=262100 -c max_wal_senders=10000"
fails with the following error:
FATAL: invalid value for parameter "max_wal_senders": 10000
Fortunately, there is an extra copy of this check in
InitializeMaxBackends() that we can rely on, so this commit removes
the aforementioned GUC check hooks in favor of that one. It also
enhances the error message to clearly show the values of the
relevant GUCs and the hard-coded limit their sum may not exceed.
The downside of this change is that server startup progresses
further before failing due to such misconfigurations (thus taking
longer), but these failures are expected to be rare, so we don't
anticipate any real harm in practice.
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/ZnMr2k-Nk5vj7T7H%40nathan
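
The surviving check amounts to something like the following sketch
(paraphrased, not the exact code committed to InitializeMaxBackends()):

    /* MaxBackends counts all of these, plus the autovacuum launcher. */
    if (MaxConnections + autovacuum_max_workers + 1 +
        max_worker_processes + max_wal_senders > MAX_BACKENDS)
        ereport(FATAL,
                (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                 errmsg("too many server processes configured"),
                 errdetail("\"max_connections\" (%d) plus \"autovacuum_max_workers\" (%d) plus \"max_worker_processes\" (%d) plus \"max_wal_senders\" (%d) must be less than %d.",
                           MaxConnections, autovacuum_max_workers,
                           max_worker_processes, max_wal_senders,
                           MAX_BACKENDS)));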
 
This can be used to load an injection point and prewarm the
backend-level cache before running it, to avoid issues if the point
cannot be loaded due to restrictions in the code path where it would be
run, like a critical section where no memory allocation can happen
(load_external_function() can do allocations when expanding a library
name).
Tests can use a macro called INJECTION_POINT_LOAD() to load an injection
point. The test module injection_points gains some tests, and a SQL
function able to load an injection point.
Based on a request from Andrey Borodin, who has implemented a test for
multixacts requiring this facility.
Reviewed-by: Andrey Borodin
Discussion: https://postgr.es/m/ZkrBE1e2q2wGvsoN@paquier.xyz
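
A plausible usage shape (the point name here is just an example; the
premise is that the LOAD done outside the critical section performs any
allocation up front):

    /* Load and cache the callback while allocation is still allowed. */
    INJECTION_POINT_LOAD("multixact-create-from-members");

    START_CRIT_SECTION();

    /* ... critical-section work ... */

    /* Runs from the backend-level cache; no allocation happens here. */
    INJECTION_POINT("multixact-create-from-members");

    END_CRIT_SECTION();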
 
Since commit 5764f611e1, we've been using the ilist.h functions for
handling the linked list. There's no need for 'links' to be the first
element of the struct anymore, except for one call in InitProcess
where we used a straight cast from the 'dlist_node *' to PGPROC *,
without the dlist_container() macro. That was just an oversight in
commit 5764f611e1, fix it.
There is no imminent need to move 'links' from being the first field, but
let's be tidy.
Reviewed-by: Aleksander Alekseev, Andres Freund
Discussion: https://www.postgresql.org/message-id/22aa749e-cc1a-424a-b455-21325473a794@iki.fi
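
The difference, in a self-contained toy (PostgreSQL's real
dlist_container() additionally type-checks its arguments):

    #include <stddef.h>
    #include <stdio.h>

    typedef struct dlist_node
    {
        struct dlist_node *prev;
        struct dlist_node *next;
    } dlist_node;

    typedef struct PGPROC
    {
        int         pid;        /* fields may precede 'links'... */
        dlist_node  links;      /* ...so 'links' need not be first */
    } PGPROC;

    /* recover the containing struct from a pointer to its member */
    #define dlist_container(type, membername, ptr) \
        ((type *) ((char *) (ptr) - offsetof(type, membername)))

    int
    main(void)
    {
        PGPROC      proc = {.pid = 42};
        dlist_node *node = &proc.links;

        /* a straight cast "(PGPROC *) node" would be wrong here */
        PGPROC     *back = dlist_container(PGPROC, links, node);

        printf("%d\n", back->pid);
        return 0;
    }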
 
Here we make tuplestore.c use a generation.c memory context rather than
allocating tuples into the CurrentMemoryContext, which primarily is the
ExecutorState or PortalHoldContext memory context. Not having a
dedicated context can cause the CurrentMemoryContext context to become
bloated when pfree'd chunks are not reused by future tuples. Using
generation speeds up users of tuplestore.c, such as the Materialize,
WindowAgg and CTE Scan executor nodes. The main reason for the speedup is
due to generation.c being more memory efficient than aset.c memory
contexts. Specifically, generation does not round sizes up to the next
power of 2 value. This not only saves memory, allowing more tuples to fit
in work_mem, but also makes the memory usage more compact and fit on fewer
cachelines. One benchmark showed up to a 22% performance increase in a
query containing a Materialize node. Much higher gains are possible if
the memory reduction prevents tuplestore.c from spilling to disk. This is
especially true for WindowAgg nodes where improvements of several thousand
times are possible if the memory reductions made here prevent tuplestore
from spilling to disk.
Additionally, a generation.c memory context is much better suited for this
job as it works well with FIFO palloc/pfree patterns, which is exactly how
tuplestore.c uses it. Because of the way generation.c allocates memory,
tuples consecutively stored in tuplestores are much more likely to be
stored consecutively in memory. This allows the CPU's hardware prefetcher
to work more efficiently as it provides a more predictable pattern to
allow cachelines for the next tuple to be loaded from RAM in advance of
them being needed by the executor.
Using a dedicated memory context for storing tuples also allows us to more
efficiently clean up the memory used by the tuplestore as we can reset or
delete the context rather than looping over all stored tuples and
pfree'ing them one by one.
Also, remove a badly placed USEMEM call in readtup_heap(). The tuple
wasn't being allocated in the Tuplestorestate's context, so no need to
adjust the memory consumed by the tuplestore there.
Author: David Rowley
Reviewed-by: Matthias van de Meent, Dmitry Dolgov
Discussion: https://postgr.es/m/CAApHDvp5Py9g4Rjq7_inL3-MCK1Co2CRt_YWFwTU2zfQix0p4A@mail.gmail.com
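
In outline, the change amounts to this pattern (an illustrative
fragment, not the exact committed code):

    /* in tuplestore_begin_common(): dedicated context for tuple data */
    state->context = GenerationContextCreate(CurrentMemoryContext,
                                             "tuplestore tuples",
                                             ALLOCSET_DEFAULT_SIZES);

    /* when a tuple is stored, copy it into that context */
    oldcxt = MemoryContextSwitchTo(state->context);
    tuple = heap_copytuple(tuple);
    MemoryContextSwitchTo(oldcxt);

    /* at clear/end time, every stored tuple is freed in one sweep */
    MemoryContextReset(state->context);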