For a long time we have forbidden binary-coercible casts to or from
composite and array types, because such a cast cannot work correctly:
the type OID embedded in the value would need to change, but it won't
in a binary coercion. That reasoning applies equally to range types,
but we overlooked installing a similar restriction here when we
invented range types. Do so now.
Given the lack of field complaints, we won't change this in stable
branches, but it seems not too late for v17.
Per discussion of a problem noted by Peter Eisentraut.
Discussion: https://postgr.es/m/076968e1-0852-40a9-bc0b-117cd3f0e43c@eisentraut.org
|
Now that disable_cost is not included in the cost estimate, there's
no visible sign in EXPLAIN output of which plan nodes are disabled.
Fix that by propagating the number of disabled nodes from Path to
Plan, and then showing it in the EXPLAIN output.
There is some question about whether this is a desirable change.
While I personally believe that it is, it seems best to make it a
separate commit, in case we decide to back out just this part, or
rework it.
Reviewed by Andres Freund, Heikki Linnakangas, and David Rowley.
Discussion: http://postgr.es/m/CA+TgmoZ_+MS+o6NeGK2xyBv-xM+w1AfFVuHE4f_aq6ekHv7YSQ@mail.gmail.com
|
Previously, when a path type was disabled by e.g. enable_seqscan=false,
we either avoided generating that path type in the first place, or
more commonly, we added a large constant, called disable_cost, to the
estimated startup cost of that path. This latter approach can distort
planning. For instance, an extremely expensive non-disabled path
could seem to be worse than a disabled path, especially if the full
cost of that path node need not be paid (e.g. due to a Limit).
Or, as in the regression test whose expected output changes with this
commit, the addition of disable_cost can make two paths that would
normally be distinguishable in cost seem to have fuzzily the same cost.
To fix that, we now count the number of disabled path nodes and
consider that a high-order component of both the startup cost and the
total cost. Hence, the path list is now sorted by disabled_nodes and
then by total_cost, instead of just by the latter, and likewise for
the partial path list. It is important that this number is a count
and not simply a Boolean; otherwise, as soon as we were unable to
respect disabled path types in every part of the path, we would stop
trying to avoid them where we could.
Because the path list is now sorted by the number of disabled nodes,
the join prechecks must compute the count of disabled nodes during
the initial cost phase instead of postponing it to final cost time.
Counts of disabled nodes do not cross subquery levels; at present,
there is no reason for them to do so, since we do not postpone
path selection across subquery boundaries (see make_subplan).
Reviewed by Andres Freund, Heikki Linnakangas, and David Rowley.
Discussion: http://postgr.es/m/CA+TgmoZ_+MS+o6NeGK2xyBv-xM+w1AfFVuHE4f_aq6ekHv7YSQ@mail.gmail.com
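
For illustration, a minimal standalone sketch of treating the disabled-node count as a high-order sort key ahead of total cost; the struct and function names are hypothetical stand-ins, not the planner's actual Path/add_path API.

    #include <stdio.h>

    /* Hypothetical, simplified stand-in for a planner path. */
    typedef struct SketchPath
    {
        int     disabled_nodes; /* count of disabled plan nodes in the subtree */
        double  total_cost;     /* conventional cost estimate */
    } SketchPath;

    /* disabled_nodes dominates the comparison; total_cost only breaks ties. */
    static int
    sketch_compare_paths(const SketchPath *a, const SketchPath *b)
    {
        if (a->disabled_nodes != b->disabled_nodes)
            return (a->disabled_nodes < b->disabled_nodes) ? -1 : 1;
        if (a->total_cost != b->total_cost)
            return (a->total_cost < b->total_cost) ? -1 : 1;
        return 0;
    }

    int
    main(void)
    {
        SketchPath  seqscan = {1, 100.0};    /* disabled (e.g. enable_seqscan=off), cheap */
        SketchPath  indexscan = {0, 2500.0}; /* not disabled, but much more expensive */

        /* Prints -1: the non-disabled path sorts first despite its higher cost. */
        printf("%d\n", sketch_compare_paths(&indexscan, &seqscan));
        return 0;
    }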
|
Oversight in commit a95ff1fe2eb4926b13e0940ad1f37d048704bdb0
|
Apply GETSTRUCT() once instead of doing it repeatedly in the same
function. This simplifies the notation and makes the function's
structure more similar to the surrounding ones.
Discussion: https://www.postgresql.org/message-id/flat/a368248e-69e4-40be-9c07-6c3b5880b0a6@eisentraut.org
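
The pattern being applied is simply to fetch the struct pointer once into a typed local variable and reuse it; a standalone sketch of the idea follows (the tuple type and GET_STRUCT macro below are illustrative stand-ins, not the catalog/GETSTRUCT API).

    #include <stdio.h>

    typedef struct FormData_example { int relnatts; char relkind; } FormData_example;
    typedef FormData_example *Form_example;
    typedef struct ExampleTuple { FormData_example data; } ExampleTuple;

    /* Illustrative stand-in for GETSTRUCT(): extract the fixed-size struct. */
    #define GET_STRUCT(tup) (&(tup)->data)

    static void
    describe(ExampleTuple *tup)
    {
        /* Apply the accessor once, then use the typed pointer throughout ... */
        Form_example form = GET_STRUCT(tup);

        /* ... instead of repeating GET_STRUCT(tup)->field at every use. */
        printf("natts=%d kind=%c\n", form->relnatts, form->relkind);
    }

    int
    main(void)
    {
        ExampleTuple t = {{3, 'r'}};

        describe(&t);
        return 0;
    }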
|
We advance origin progress during abort on successful streaming and
application of ROLLBACK in parallel streaming mode. But the origin
shouldn't be advanced after an error or after an apply that is cut
short by a shutdown. Otherwise, the transaction would be lost, as the
server will not send such a transaction again.
Reported-by: Hou Zhijie
Author: Hayato Kuroda and Shveta Malik
Reviewed-by: Amit Kapila
Backpatch-through: 16
Discussion: https://postgr.es/m/TYAPR01MB5692FAC23BE40C69DA8ED4AFF5B92@TYAPR01MB5692.jpnprd01.prod.outlook.com
|
Author: Andreas Karlsson
Discussion: https://postgr.es/m/69c2a864-846f-4309-bd5a-aaa1c34f9a11@proxel.se
|
Commit ab02d702ef08 removed from the backend the code that supported
unloading modules, because it never worked. This removes the last
references to _PG_fini(), which could be used as a callback for modules
to manipulate the stack when unloading a library.
The test module ldap_password_func declared it as a do-nothing
function; that declaration is gone, as is the function declaration in
fmgr.h.
The declaration was left around in 2022 to avoid breaking extension
code, but at this stage there is also a benefit in letting extension
developers know that keeping the unloading code is pointless, and this
move leads to less maintenance.
Reviewed-by: Tom Lane, Heikki Linnakangas
Discussion: https://postgr.es/m/ZsQfi0AUJoMF6NSd@paquier.xyz
|
The descriptions for ProcArrayGroupUpdate and XactGroupUpdate claim
that these events mean we are waiting for the group leader "at end
of a parallel operation," but neither pertains to parallel
operations. This commit reverts these descriptions to their
wording before commit 3048898e73, i.e., "end of a parallel
operation" is changed to "transaction end."
Author: Sameer Kumar
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/CAGPeHmh6UMrKQHKCmX%2B5vV5TH9P%3DKw9en3k68qEem6J%3DyrZPUA%40mail.gmail.com
Backpatch-through: 13
|
Before commit a0e0fb1ba56f, multixact.c contained a case in the
multixact-read path where it would loop sleeping 1ms each time until
another multixact-create path completed, which was not covered by any
tests. That commit changed the code to rely on a condition variable
instead. Add a test now, which relies on injection points and "loading"
thereof (because the code runs in a critical section), per commit
4b211003ecc2.
Author: Andrey Borodin <x4mmm@yandex-team.ru>
Reviewed-by: Michaël Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/0925F9A9-4D53-4B27-A87E-3D83A757B0E0@yandex-team.ru
|
This patch provides additional logging information in the following
conflict scenarios while applying changes:
insert_exists: Inserting a row that violates a NOT DEFERRABLE unique constraint.
update_differ: Updating a row that was previously modified by another origin.
update_exists: The updated row value violates a NOT DEFERRABLE unique constraint.
update_missing: The tuple to be updated is missing.
delete_differ: Deleting a row that was previously modified by another origin.
delete_missing: The tuple to be deleted is missing.
For insert_exists and update_exists conflicts, the log can include the origin
and commit timestamp details of the conflicting key when
track_commit_timestamp is enabled.
update_differ and delete_differ conflicts can only be detected when
track_commit_timestamp is enabled on the subscriber.
We do not offer additional logging for exclusion constraint violations because
these constraints can specify rules that are more complex than simple equality
checks. Resolving such conflicts won't be straightforward. This area can be
further enhanced if required.
Author: Hou Zhijie
Reviewed-by: Shveta Malik, Amit Kapila, Nisha Moond, Hayato Kuroda, Dilip Kumar
Discussion: https://postgr.es/m/OS0PR01MB5716352552DFADB8E9AD1D8994C92@OS0PR01MB5716.jpnprd01.prod.outlook.com
|
Here we add ExprState support for obtaining a 32-bit hash value from a
list of expressions. This allows both faster hashing and JIT
compilation of these expressions. It is especially useful when hash
joins have multiple join keys: the previous code called ExecEvalExpr on
each hash join key individually, which was inefficient because tuple
deformation only took one key into account at a time and could
therefore end up walking the tuple once for each join key. With the
new code, we determine the maximum attribute required and deform the
tuple to that point only once.
Some performance tests of this change have shown up to a 20%
performance increase for a query containing a Hash Join without JIT
compilation, and up to a 26% performance increase when JIT is enabled
and optimization and inlining were performed by the JIT compiler. The
performance increase with a single join column was smaller, around 14%
both with and without JIT. These tests used a fairly small hash table
and a large number of hash probes. The increase will likely be smaller
with large tables, especially ones larger than L3 cache, as memory
pressure is more likely to be the limiting factor there.
This commit only addresses Hash Joins, but it lays the expression
evaluation and JIT compilation infrastructure for other hashing needs
such as Hash Aggregate.
Author: David Rowley
Reviewed-by: Alexey Dvoichenkov <alexey@hyperplane.net>
Reviewed-by: Tels <nospam-pg-abuse@bloodgate.com>
Discussion: https://postgr.es/m/CAApHDvoexAxgQFNQD_GRkr2O_eJUD1-wUGm%3Dm0L%2BGc%3DT%3DkEa4g%40mail.gmail.com
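
A standalone sketch of the deform-once idea described above; the row representation and helper names are illustrative stand-ins (the real code works with TupleTableSlot deformation and ExprState), but the shape is the same: find the highest attribute any key needs, extract up to that attribute in one pass, then hash all keys.

    #include <stdint.h>
    #include <stdio.h>

    #define NATTS 8

    /* Illustrative row: attributes must be "deformed" before they can be used. */
    typedef struct SketchRow
    {
        int raw[NATTS];
        int deformed[NATTS];
        int ndeformed;
    } SketchRow;

    /* Extract attributes 0..upto-1 in a single pass over the row. */
    static void
    sketch_deform(SketchRow *row, int upto)
    {
        for (int i = row->ndeformed; i < upto; i++)
            row->deformed[i] = row->raw[i];
        if (upto > row->ndeformed)
            row->ndeformed = upto;
    }

    /* Hash a list of key attributes with a single deformation pass. */
    static uint32_t
    sketch_hash_keys(SketchRow *row, const int *keyatts, int nkeys)
    {
        uint32_t hash = 0;
        int      maxatt = 0;

        for (int i = 0; i < nkeys; i++)       /* maximum attribute required */
            if (keyatts[i] + 1 > maxatt)
                maxatt = keyatts[i] + 1;

        sketch_deform(row, maxatt);           /* deform to that point once */

        for (int i = 0; i < nkeys; i++)       /* hash from deformed values */
            hash = hash * 31 + (uint32_t) row->deformed[keyatts[i]];
        return hash;
    }

    int
    main(void)
    {
        SketchRow row = {{10, 20, 30, 40, 50, 60, 70, 80}, {0}, 0};
        int       keys[] = {1, 4, 2};

        printf("hash = %u\n", (unsigned) sketch_hash_keys(&row, keys, 3));
        return 0;
    }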
|
When a partition is detached and immediately dropped, a prepared
statement could try to compute a new partition descriptor that includes
it. This leads to this kind of error:
ERROR: could not open relation with OID 457639
Avoid this by skipping the partition in expand_partitioned_rtentry if it
doesn't exist.
Noted by me while investigating bug #18559. Kuntal Ghosh helped to
identify the exact failure.
Backpatch to 14, where DETACH CONCURRENTLY was introduced.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Kuntal Ghosh <kuntalghosh.2007@gmail.com>
Reviewed-by: Junwang Zhao <zhjwpku@gmail.com>
Discussion: https://postgr.es/m/202408122233.bo4adt3vh5bi@alvherre.pgsql
|
Report search_path changes to the client. Multi-tenant applications
often map tenants to schemas, and use search_path to pick the tenant a
given connection works with. This breaks when a connection pool (like
PgBouncer) is used, because the search_path may change unexpectedly.
There are other GUCs we might want reported (e.g. various timeouts), but
search_path is by far the biggest foot gun that can lead either to
puzzling failures during query execution (when objects are missing or
are defined differently), or even to accessing incorrect data.
Many existing tools modify search_path, pg_dump being a notable example.
Ideally, clients could specify which GUCs are interesting and should be
subject to this reporting, but we don't support that. GUC_REPORT is what
connection pools rely on for other interesting GUCs, so just use that.
When this change was initially proposed in 2014, one of the concerns was
the impact on performance. But this was addressed by commit 2432b1a04087,
which ensures we report each GUC at most once per query, no matter how
many times it changed during execution.
Eventually, this might be superseded by making the protocol extensible
in this direction, but it's unclear when (or if) that will happen.
Until then, we can leverage GUC_REPORT.
Author: Alexander Kukushkin, Jelte Fennema-Nio
Discussion: https://postgr.es/m/CAFh8B=k8s7WrcqhafmYhdN1+E5LVzZi_QaYDq8bKvrGJTAhY2Q@mail.gmail.com
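
For illustration, a minimal libpq client sketch of how a pooler or application could read the reported value; it assumes libpq is installed and that search_path is now GUC_REPORT as described above (the connection string is a placeholder).

    #include <stdio.h>
    #include <stdlib.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("dbname=postgres");  /* placeholder conninfo */
        const char *sp;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return EXIT_FAILURE;
        }

        /* GUC_REPORT parameters arrive as ParameterStatus messages and are
         * cached by libpq; with this change search_path should be among them,
         * and the cached value is refreshed whenever the server reports a
         * change (e.g. after a statement that runs SET search_path). */
        sp = PQparameterStatus(conn, "search_path");
        printf("search_path reported as: %s\n", sp ? sp : "(not reported)");

        PQfinish(conn);
        return EXIT_SUCCESS;
    }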
|
Add a comment explaining that dropdb() can't rely on the syscache. The
issue with flattened rows was fixed by commit 0f92b230f88b, but it's
better to have a clear explanation of why the systable scan is
necessary. The other places doing in-place updates on pg_database have
the same comment.
Suggestion and patch by Yugo Nagata. Backpatch to 12, same as the fix.
Author: Yugo Nagata
Backpatch-through: 12
Discussion: https://postgr.es/m/CAJTYsWWNkCt+-UnMhg=BiCD3Mh8c2JdHLofPxsW3m2dkDFw8RA@mail.gmail.com
|
Commit 274bbced disabled session tickets for TLSv1.3 on top of the
already disabled TLSv1.2 session tickets, but accidentally caused
a regression where TLSv1.2 session tickets were incorrectly sent.
Fix by unconditionally disabling TLSv1.2 session tickets and only
disabling TLSv1.3 tickets when the right version of OpenSSL is used.
Backpatch to all supported branches.
Reported-by: Cameron Vogt <cvogt@automaticcontrols.net>
Reported-by: Fire Emerald <fire.github@gmail.com>
Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
Discussion: https://postgr.es/m/DM6PR16MB3145CF62857226F350C710D1AB852@DM6PR16MB3145.namprd16.prod.outlook.com
Backpatch-through: v12
|
Commit ca051d8b101 called newlocale(LC_COLLATE, ...) instead of
newlocale(LC_COLLATE_MASK, ...), in code reached only on FreeBSD. They
have the same value on that OS, explaining why it worked. Fix.
Back-patch to 14, where ca051d8b101 landed.
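
For reference, a standalone sketch of the corrected call: newlocale() takes a category mask (LC_COLLATE_MASK), not the LC_COLLATE category constant, and the two only happen to have the same value on FreeBSD. The locale name is a placeholder.

    #define _GNU_SOURCE             /* for newlocale() on glibc */
    #include <locale.h>
    #include <stdio.h>

    int
    main(void)
    {
        /* Correct: pass the *_MASK constant, not the LC_COLLATE category. */
        locale_t    loc = newlocale(LC_COLLATE_MASK, "en_US.UTF-8", (locale_t) 0);

        if (loc == (locale_t) 0)
        {
            perror("newlocale");
            return 1;
        }

        /* ... use the locale with *_l() functions such as strcoll_l() ... */
        freelocale(loc);
        return 0;
    }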
|
The log message on backend crash used the wrong variable, which could
be uninitialized. Introduced in commit 28a520c0b7.
Reported-by: Alexander Lakhin
Discussion: https://www.postgresql.org/message-id/451b0797-83b8-cdbc-727f-8d7a7b0e3bca@gmail.com
|
This is a continuation of c9e24573905b, containing changes included in
the proposed patch that were missed in the actual commit. I managed to
miss these diffs while rebasing the original patch.
Thanks to Noah Misch, Peter Eisentraut and Alexander Korotkov for the
pokes.
Discussion: https://postgr.es/m/92fe572d-638e-4162-aef6-1c42a2936f25@eisentraut.org
Discussion: https://postgr.es/m/20240810175055.cd.nmisch@google.com
Backpatch-through: 17
|
One of the two slot scans in SlruSelectLRUPage was not walking only the
slots in the specific bank where the buffer could be; change it to do
that.
Oversight in 53c2a97a9266.
Author: Sergey Sargsyan <sergey.sargsyan.2001@gmail.com>
Discussion: https://postgr.es/m/18582-5f301dd30ba91a38@postgresql.org
|
This adds two counters to the fixed-numbered stats of injection points
to track the number of times injection points have been cached and
loaded from the cache, following the additions from a0a5869a8598 and
4b211003ecc2.
These should have been part of f68cd847fa40, but I lacked the time and
energy back then; that did not prevent the code from being a useful
template.
While at it, this commit simplifies the description of a few tests while
adding coverage for the new stats data.
Author: Yogesh Sharma
Discussion: https://postgr.es/m/3a6977f7-54ab-43ce-8806-11d5e15526a2@catprosystems.com
|
MacPorts version 2.9.3 started failing in our ci_macports_packages.sh
script, for reasons not fully determined, but plausibly linked to the
release of 2.10.1. 2.10.1 seems to work, so let's switch to it.
Back-patch to 15, where CI began.
Reported-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/81f104e8-f0a9-43c0-85bd-2bbbf590a5b8%40eisentraut.org
|
Commit c66a7d75e652 modified DROP DATABASE so that if interrupted, the
database is known to be in an invalid state and can only be dropped.
This is done by setting a flag using an in-place update, so that it's
not lost in case of rollback.
For databases with many ACLs, this may however fail like this:
ERROR: wrong tuple length
This happens because with many ACLs, the pg_database.datacl attribute
gets TOASTed. The dropdb() code reads the tuple from the syscache, which
means it's detoasted. But the in-place update expects the tuple length
to match the on-disk tuple.
Fixed by reading the tuple from the catalog directly, not from syscache.
Report and fix by Ayush Tiwari. Backpatch to 12. The DROP DATABASE fix
was backpatched to 11, but 11 is EOL at this point.
Reported-by: Ayush Tiwari
Author: Ayush Tiwari
Reviewed-by: Tomas Vondra
Backpatch-through: 12
Discussion: https://postgr.es/m/CAJTYsWWNkCt+-UnMhg=BiCD3Mh8c2JdHLofPxsW3m2dkDFw8RA@mail.gmail.com
|
simplehash.h references pg_fatal(), which cpluspluscheck says is
undeclared, causing the CI CompilerWarnings task to fail since commit
aa2d6b15. Include the header it needs.
Discussion: https://postgr.es/m/CA%2BhUKGJC3d4PXkErpfOWrzQqcq6MLiCv0%2BAH0CMQnB6hdLUFEw%40mail.gmail.com
|
Commit 97ddda8a82ac470ae581d0eb485b6577707678bc removed the rmtree()
behavior from XLOG_TBLSPC_CREATE, obsoleting that part of the comment.
The comment's point about XLOG_DBASE_CREATE was wrong when commit
fa0f466d5329e10b16f3b38c8eaf5306f7e234e8 introduced the point. (It
would have been accurate if that commit had predated commit
fbcbc5d06f53aea412130deb52e216aa3883fb8d introducing the second
checkpoint of CREATE DATABASE.) Nothing can skip log_smgrcreate() on
the basis of wal_level=minimal, so don't comment on that.
Commit c6b92041d38512a4176ed76ad06f713d2e6c01a8 expanded WAL skipping
from five specific operations to relfilenodes generally, hence the
CreateDatabaseUsingFileCopy() comment change.
Discussion: https://postgr.es/m/20231008022204.cc@rfd.leadboat.com
|
The commit was "Provide API for streaming relation data."
Reported-by: Nazir Bilal Yavuz
Discussion: https://postgr.es/m/CAN55FZ3KsZ2faZs1sK5J0W+_8B3myB232CfLYGie4u4BBMwP3g@mail.gmail.com
Backpatch-through: master
|
There's not much point in asserting a pointer isn't NULL after some code
has already dereferenced that pointer.
Adjust the code so that the Assert occurs before the pointer dereference.
The Assert probably has questionable value in the first place, but it
seems worth keeping around to document the contract between
CopyMultiInsertInfoNextFreeSlot() and its callers.
Author: Amul Sul <sulamul@gmail.com>
Discussion: https://postgr.es/m/CAAJ_b94hXQzXaJxTLShkxQUgezf_SUxhzX9TH2f-g6gP7bne7g@mail.gmail.com
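
A minimal standalone illustration of the fix's pattern (the struct and function are hypothetical stand-ins for CopyMultiInsertInfoNextFreeSlot() and its argument): assert the contract before the first dereference, not after.

    #include <assert.h>
    #include <stdio.h>

    typedef struct SketchBuffer { int nused; } SketchBuffer;

    static int
    sketch_next_free_slot(SketchBuffer *buffer)
    {
        /* Check the contract before touching the pointer ... */
        assert(buffer != NULL);

        /* ... not after a dereference like this one has already happened. */
        return buffer->nused;
    }

    int
    main(void)
    {
        SketchBuffer buf = {2};

        printf("%d\n", sketch_next_free_slot(&buf));
        return 0;
    }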
|
Commit 108d2adb9e missed updating a few places in the jsonb code
that rely on signed integer wrapping for correctness. These can
also be fixed by using pg_abs_s32() to negate a signed integer
(that is known to be negative) for comparison with an unsigned
integer.
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/bfff906f-300d-81ea-83b7-f2c93845e7f2%40gmail.com
|
This is in preparation for adding a second source file to this
directory.
Amul Sul, reviewed by Sravan Kumar and revised a bit by me.
Discussion: http://postgr.es/m/CAAJ_b95mcGjkfAf1qduOR97CokW8-_i-dWLm3v6x1w2-OW9M+A@mail.gmail.com
|
This is in preparation for adding a second source file to this
directory. It will need access to this value. Also, having fewer
global variables is usually a good thing.
Amul Sul, reviewed by Sravan Kumar and revised a bit by me.
Discussion: http://postgr.es/m/CAAJ_b95mcGjkfAf1qduOR97CokW8-_i-dWLm3v6x1w2-OW9M+A@mail.gmail.com
|
Duplicate the comment from astreamer_plain_writer_new instead of just
referring to it. Add a further note to mention that there are dangers
if anything else is written to the same FILE. Also add a comment where
we dup() the filehandle, referring to the existing comment in
astreamer_gzip_writer_finalize(), because the dup() looks wrong at
first glance without that comment to clarify it.
Per concerns expressed by Tom Lane on pgsql-security, and using
some wording suggested by him.
Discussion: http://postgr.es/m/CA+TgmoYTFAD0YTh4HC1Nuhn0YEyoQi0_CENFgVzAY_YReiSksQ@mail.gmail.com
|
Not all messages that libpq received from the server would be sent
through our message tracing logic. This commit tries to fix that by
introducing a new function pqParseDone, which makes it harder to forget
to do so.
The messages that we now newly send through our tracing logic are:
- CopyData (received by COPY TO STDOUT)
- Authentication requests
- NegotiateProtocolVersion
- Some ErrorResponse messages during connection startup
- ReadyForQuery when received after a FunctionCall message
Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Discussion: https://postgr.es/m/CAGECzQSoPHtZ4xe0raJ6FYSEiPPS+YWXBhOGo+Y1YecLgknF3g@mail.gmail.com
|
"EXTRACT(WEEK FROM interval_value)" formerly threw an error.
Define it as "tm->tm_mday / 7". (With C99 division semantics,
this gives consistent results for negative intervals.)
"EXTRACT(QUARTER FROM interval_value)" has been implemented
all along, but it formerly gave extremely strange results for
negative intervals. Fix it so that the output for -N months
is the negative of the output for N months.
Per bug #18348 from Michael Bondarenko and subsequent discussion.
Discussion: https://postgr.es/m/18348-b097a3587dfde8a4@postgresql.org
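
A small standalone check of the arithmetic involved: C99 integer division truncates toward zero, so tm_mday / 7 for -N days is the negative of the value for N days. The quarter expression below is only an illustrative guess at "the negative of the output for N months", not the backend's exact formula.

    #include <stdio.h>

    int
    main(void)
    {
        int days[] = {20, -20};
        int months[] = {8, -8};

        for (int i = 0; i < 2; i++)
        {
            /* WEEK: truncating division gives symmetric results around zero. */
            printf("days=%4d  week=%3d\n", days[i], days[i] / 7);

            /* QUARTER: 8 months falls in quarter 3; negating the month count
             * yields the negated quarter. */
            int quarter = (months[i] >= 0) ? months[i] / 3 + 1
                                           : -(-months[i] / 3 + 1);
            printf("months=%3d quarter=%2d\n", months[i], quarter);
        }
        return 0;
    }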
|
This commit updates a couple of places in the jsonb code to no
longer rely on signed integer wrapping for correctness. Like
commit 9e9a2b7031, this is intended to move us closer towards
removing -fwrapv, which may enable some compiler optimizations.
However, there is presently no plan to actually remove that
compiler option in the near future.
This commit makes use of the newly introduced pg_abs_s32() routine
to negate a signed integer (that is known to be negative) for
comparison with an unsigned integer. In passing, change one use of
INT_MIN to the more portable PG_INT32_MIN.
Reported-by: Alexander Lakhin
Author: Joseph Koshakow
Reviewed-by: Jian He
Discussion: https://postgr.es/m/CAAvxfHdBPOyEGS7s%2Bxf4iaW0-cgiq25jpYdWBqQqvLtLe_t6tw%40mail.gmail.com
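
For illustration, a standalone sketch of the idea; pg_abs_s32_sketch() is a plausible local re-implementation of the helper's contract (absolute value of an int32 returned as uint32), not the backend's actual int.h code.

    #include <stdint.h>
    #include <stdio.h>

    /* Absolute value of an int32 as uint32; well-defined even for INT32_MIN,
     * because the negation happens in unsigned arithmetic. */
    static inline uint32_t
    pg_abs_s32_sketch(int32_t a)
    {
        return (a < 0) ? (uint32_t) 0 - (uint32_t) a : (uint32_t) a;
    }

    int
    main(void)
    {
        int32_t     idx = -5;       /* e.g. a negative jsonb array subscript */
        uint32_t    nelems = 3;     /* number of elements available */

        /* Compare in the unsigned domain instead of computing -idx, which
         * would overflow for INT32_MIN. */
        if (pg_abs_s32_sketch(idx) > nelems)
            printf("index out of range\n");
        else
            printf("index in range\n");

        printf("abs(INT32_MIN) = %u\n", (unsigned) pg_abs_s32_sketch(INT32_MIN));
        return 0;
    }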
|
And improve the comments.
Backpatch to v17 where this was introduced.
Reviewed-by: Noah Misch
Discussion: https://www.postgresql.org/message-id/cac7d1b6-8358-40be-af0b-21bc9b27d34c@iki.fi
|
The handling of binary and text formats is quite different here, so
it's clearer to check for the format first and have two separate
loops.
Author: jian he <jian.universality@gmail.com>
Reviewed-by: Ilia Evdokimov, Junwang Zhao
Discussion: https://www.postgresql.org/message-id/CACJufxFzHCeFBQF0M%2BSgk_NwknWuQ4oU7tS1isVeBrbhcKOHkg@mail.gmail.com
|
Commit a78fcfb51243 removed the last use of it.
Author: Hugo Zhang, Aleksander Alekseev
Reviewed-by: Daniel Gustafsson
Discussion: https://www.postgresql.org/message-id/NT0PR01MB129459E243721B954611938F9CDD2%40NT0PR01MB1294.CHNPR01.prod.partner.outlook.cn
|
libpq checks the permissions of the password file before opening it.
Because this is done in two separate operations, a static analyzer
would flag it as a time-of-check-time-of-use violation. In practice,
you can't do anything with that, but it still seems better style to
fix it.
To fix it, open the file first and then check the permissions on the
opened file handle.
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
Discussion: https://www.postgresql.org/message-id/flat/a3356054-14ae-4e7a-acc6-249d19dac20b%40eisentraut.org
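
A standalone POSIX sketch of the open-then-check pattern; the path is a placeholder and the permission test mirrors the usual "no group or world access" rule rather than libpq's exact code.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int
    main(void)
    {
        const char *path = "/tmp/example.pgpass";   /* placeholder */
        struct stat st;

        /* Open first, so the descriptor refers to one fixed file ... */
        int         fd = open(path, O_RDONLY);

        if (fd < 0)
        {
            perror("open");
            return 1;
        }

        /* ... then check permissions on that same file via fstat(); nothing
         * can be swapped underneath between the check and the use. */
        if (fstat(fd, &st) == 0 && (st.st_mode & (S_IRWXG | S_IRWXO)) != 0)
            fprintf(stderr, "password file has group or world access; ignoring\n");
        else
            printf("permissions ok, proceed to read\n");

        close(fd);
        return 0;
    }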
|
This commit attempts to update a few places, such as the money,
numeric, and timestamp types, to no longer rely on signed integer
wrapping for correctness. This is intended to move us closer
towards removing -fwrapv, which may enable some compiler
optimizations. However, there is presently no plan to actually
remove that compiler option in the near future.
Besides using some of the existing overflow-aware routines in
int.h, this commit introduces and makes use of some new ones.
Specifically, it adds functions that accept a signed integer and
return its absolute value as an unsigned integer with the same
width (e.g., pg_abs_s64()). It also adds functions that accept an
unsigned integer, store the result of negating that integer in a
signed integer with the same width, and return whether the negation
overflowed (e.g., pg_neg_u64_overflow()).
Finally, this commit adds a couple of tests for timestamps near
POSTGRES_EPOCH_JDATE.
Author: Joseph Koshakow
Reviewed-by: Tom Lane, Heikki Linnakangas, Jian He
Discussion: https://postgr.es/m/CAAvxfHdBPOyEGS7s%2Bxf4iaW0-cgiq25jpYdWBqQqvLtLe_t6tw%40mail.gmail.com
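
For illustration, standalone sketches of the two kinds of helpers described above; these are plausible re-implementations of the stated contracts, not the actual int.h code.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Absolute value of an int64 returned as uint64 (defined for INT64_MIN). */
    static inline uint64_t
    pg_abs_s64_sketch(int64_t a)
    {
        return (a < 0) ? (uint64_t) 0 - (uint64_t) a : (uint64_t) a;
    }

    /* Negate a uint64 into an int64; return true if the result would overflow. */
    static inline bool
    pg_neg_u64_overflow_sketch(uint64_t u, int64_t *result)
    {
        if (u <= (uint64_t) INT64_MAX)
        {
            *result = -(int64_t) u;
            return false;
        }
        if (u == (uint64_t) INT64_MAX + 1)
        {
            *result = INT64_MIN;
            return false;
        }
        return true;
    }

    int
    main(void)
    {
        int64_t     neg;

        printf("abs(INT64_MIN) = %llu\n",
               (unsigned long long) pg_abs_s64_sketch(INT64_MIN));
        printf("negate UINT64_MAX overflows? %d\n",
               (int) pg_neg_u64_overflow_sketch(UINT64_MAX, &neg));
        return 0;
    }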
|
ecpg's lexer and parser files aren't normally processed by
pgindent, and unsurprisingly there's a lot of code in there
that doesn't really match project style. I spent some time
running pgindent over the fragments of these files that are
C code, and this is the result. This is in the same spirit
as commit 30ed71e42, though apparently Peter used a different
method for that one, since it didn't find these problems.
Discussion: https://postgr.es/m/2011420.1713493114@sss.pgh.pa.us
|
We shouldn't ask the client to use a protocol version later than the
one that they requested. To avoid that, if the client requests a
version newer than the latest one we support, set FrontendProtocol
to the latest version we support, not the requested version. Then,
use that value when building the NegotiateProtocolVersion message.
(It seems good on general principle to avoid setting FrontendProtocol
to a version we don't support, anyway.)
None of this really matters right now, because we only support a
single protocol version, but if that ever changes, we'll need this.
Jelte Fennema-Nio, reviewed by me and incorporating some of my
proposed wording
Discussion: https://postgr.es/m/CAGECzQTyXDNtMXdq2L-Wp=OvOCPa07r6+U_MGb==h90MrfT+fQ@mail.gmail.com
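
A minimal standalone sketch of the version clamp; the constants and variable names are illustrative (the real code uses the PG_PROTOCOL macros and sends a NegotiateProtocolVersion message built from the clamped value).

    #include <stdio.h>

    #define PROTO(major, minor) (((major) << 16) | (minor))

    static const unsigned latest_supported = PROTO(3, 0);
    static unsigned FrontendProtocol;   /* stand-in for the backend global */

    static void
    negotiate(unsigned requested)
    {
        /* Never record or advertise a version newer than we support. */
        FrontendProtocol = (requested > latest_supported) ? latest_supported
                                                          : requested;

        if (requested > latest_supported)
            printf("send NegotiateProtocolVersion asking for %u.%u\n",
                   FrontendProtocol >> 16, FrontendProtocol & 0xffff);
    }

    int
    main(void)
    {
        negotiate(PROTO(3, 2));     /* hypothetical newer client request */
        printf("FrontendProtocol = %u.%u\n",
               FrontendProtocol >> 16, FrontendProtocol & 0xffff);
        return 0;
    }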
|
Currently mul_var() uses the schoolbook multiplication algorithm,
which is O(n^2) in the number of NBASE digits. To improve performance
for large inputs, convert the inputs to base NBASE^2 before
multiplying, which effectively halves the number of digits in each
input, theoretically speeding up the computation by a factor of 4. In
practice, the actual speedup for large inputs varies between around 3
and 6 times, depending on the system and compiler used. In turn, this
significantly reduces the runtime of the numeric_big regression test.
For this to work, 64-bit integers are required for the products of
base-NBASE^2 digits, so this works best on 64-bit machines, on which
it is faster whenever the shorter input has more than 4 or 5 NBASE
digits. On 32-bit machines, the additional overheads, especially
during carry propagation and the final conversion back to base-NBASE,
are significantly higher, and it is only faster when the shorter input
has more than around 50 NBASE digits. When the shorter input has more
than 6 NBASE digits (so that mul_var_short() cannot be used), but
fewer than around 50 NBASE digits, there may be a noticeable slowdown
on 32-bit machines. That seems to be an acceptable tradeoff, given the
performance gains for other inputs, and the effort that would be
required to maintain code specifically targeting 32-bit machines.
Joel Jacobson and Dean Rasheed.
Discussion: https://postgr.es/m/9d8a4a42-c354-41f3-bbf3-199e1957db97%40app.fastmail.com
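
A standalone toy illustration of the base conversion: adjacent base-NBASE digits are combined into base-NBASE^2 digits so that one 64-bit product replaces several smaller partial products. The real mul_var() additionally handles signs, variable lengths, carry propagation and rounding, which this sketch ignores.

    #include <stdint.h>
    #include <stdio.h>

    #define NBASE 10000     /* numeric's base: four decimal digits per digit */

    int
    main(void)
    {
        /* 12345678 and 98765432 as digit arrays in base NBASE. */
        int16_t     a[] = {1234, 5678};
        int16_t     b[] = {9876, 5432};

        /* Combine adjacent NBASE digits into single base-NBASE^2 digits,
         * halving the number of digits that have to be multiplied. */
        int64_t     a2 = (int64_t) a[0] * NBASE + a[1];
        int64_t     b2 = (int64_t) b[0] * NBASE + b[1];

        /* One 64-bit product of the combined digits (fits easily: < 10^16). */
        int64_t     prod = a2 * b2;
        int16_t     out[4];

        /* Convert the product back to base NBASE for output. */
        for (int i = 3; i >= 0; i--)
        {
            out[i] = (int16_t) (prod % NBASE);
            prod /= NBASE;
        }
        printf("%d %04d %04d %04d\n", out[0], out[1], out[2], out[3]);
        return 0;
    }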
|
Commit ca481d3c9a introduced mul_var_short(), which is used by
mul_var() whenever the shorter input has 1-4 NBASE digits and the
exact product is requested. As speculated on in that commit, it can be
extended to work for more digits in the shorter input. This commit
extends it up to 6 NBASE digits (up to 24 decimal digits), for which
it also gives a significant speedup. This covers more cases likely to
occur in real-world queries, for which using base-NBASE^2 arithmetic
provides little benefit.
To avoid code bloat and duplication, refactor it a bit using macros
and exploiting the fact that some portions of the code are shared
between the different cases.
Dean Rasheed, reviewed by Joel Jacobson.
Discussion: https://postgr.es/m/9d8a4a42-c354-41f3-bbf3-199e1957db97%40app.fastmail.com
|
There were several sets of very similar local variable names, such as
"downer" and "dbowner", which were very confusing and error-prone.
Rename the former to "ownerEl" and so on, similar to collationcmds.c
and typecmds.c.
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/flat/e5bce225-ee04-40c7-a280-ea7214318048%40eisentraut.org
|
Attempting to add a system column for a table to an existing publication
would result in the not very intuitive error message of:
ERROR: negative bitmapset member not allowed
Here we improve that to display the same error message a user would see
if they tried to include a system column when first adding the table to
the publication.
Doing this requires making the function which validates the list of
columns an extern function. The static function's signature wasn't an
ideal external API, as it made the code more complex than it needed to
be. Here we adjust the function to have it populate a Bitmapset of
attribute numbers. Doing it this way allows code simplification.
There was no particular bug here other than the weird error message, so
no backpatch.
Bug: #18558
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Author: Peter Smith, David Rowley
Discussion: https://postgr.es/m/18558-411bc81b03592125@postgresql.org
|
Since these are single bytes instead of v2 or v3 messages, they need
custom tracing logic. These "messages" don't even have official names
in the protocol specification, so I (Jelte) called them SSLResponse and
GSSENCResponse here.
Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Discussion: https://postgr.es/m/CAGECzQSoPHtZ4xe0raJ6FYSEiPPS+YWXBhOGo+Y1YecLgknF3g@mail.gmail.com
|
According to the commit message in 8ec569479, we must have all variables
in header files marked with PGDLLIMPORT. In commit d3cc5ffe81f6 some
variables were moved from launch_backend.c to several header files.
This adds PGDLLIMPORT to the moved variables.
Author: Sofia Kopikova <s.kopikova@postgrespro.ru>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/e0b17014-5319-4dd6-91cd-93d9c8fc9539%40postgrespro.ru
|
The TRACE_SORT macro guarded the availability of the trace_sort GUC
setting. But it has been enabled by default ever since it was
introduced in PostgreSQL 8.1, and there have been no reports that
someone wanted to disable it. So just remove the macro to simplify
things. (For the avoidance of doubt: The trace_sort GUC is still
there. This only removes the rarely-used macro guarding it.)
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://www.postgresql.org/message-id/flat/be5f7162-7c1d-44e3-9a78-74dcaa6529f2%40eisentraut.org
|
Historically, MinGW environments lacked some Windows API calls, so we
took a different code path in win32_langinfo(). Somehow, the code
change in commit 35eeea62 (removing setlocale() calls) caused one
particular 001_initdb.pl test to fail on MinGW + ICU builds, because
pg_import_system_collations() found no collations. It might take a
MinGW user to discover the exact reason.
Updating that function to use the same code as MSVC seems to fix that
test, so let's do that. (There are plenty more places that test for
MSVC unnecessarily, to be investigated later.)
While here, also rename the helper function win32_langinfo() to
win32_get_codeset(), to explain what it does less confusingly; it's not
really a general langinfo() substitute.
Noticed by triggering the optional MinGW CI task; no build farm animals
failed.
Discussion: https://postgr.es/m/CA%2BhUKGKBWfhXQ3J%2B2Lj5PhKvQnGD%3DsywA0XQcb7boTCf%3DerVLg%40mail.gmail.com