Some compilers (e.g., gcc before version 7) mistakenly think "carry"
might be used uninitialized.
Reported by Tom Lane, per various buildfarm members, e.g. arowana.
This commit reverts c14d4acb8 as the patch design didn't take into account
that TypeCacheEntry could be invalidated during the lookup_type_cache() call.
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/1927cba4-177e-5c23-cbcc-d444a850304f%40gmail.com
Currently, when a single relcache entry gets invalidated,
TypeCacheRelCallback() has to loop over all type cache entries to find
the appropriate typentry to invalidate. Unfortunately, the syscache can't
be used here, because this callback can be called outside a transaction,
which makes catalog lookups impossible. This is why this commit
introduces RelIdToTypeIdCacheHash to map a relation OID to its composite
type OID.
A RelIdToTypeIdCacheHash entry is kept only while the corresponding type
cache entry has something to clean up. Therefore, RelIdToTypeIdCacheHash
shouldn't bloat under a flood of temporary tables.
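Illustratively, the callback-side lookup becomes an O(1) hash probe; a
rough sketch with hypothetical entry and helper names:

    RelIdToTypeIdCacheEntry *entry;     /* hypothetical entry type */

    /* O(1) probe instead of scanning every type cache entry */
    entry = (RelIdToTypeIdCacheEntry *) hash_search(RelIdToTypeIdCacheHash,
                                                    &relid, HASH_FIND, NULL);
    if (entry != NULL)
        invalidate_composite_typentry(entry->composite_typid);  /* hypothetical */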
Discussion: https://postgr.es/m/5812a6e5-68ae-4d84-9d85-b443176966a1%40sigaev.ru
Author: Teodor Sigaev
Reviewed-by: Aleksander Alekseev, Tom Lane, Michael Paquier, Roman Zharkov
Reviewed-by: Andrei Lepikhov, Pavel Borisov
This commit reverts 1adf16b8fb, 87c21bb941, and subsequent fixes and
improvements including df64c81ca9, c99ef1811a, 9dfcac8e15, 885742b9f8,
842c9b2705, fcf80c5d5f, 96c7381c4c, f4fc7cb54b, 60ae37a8bc, 259c96fa8f,
449cdcd486, 3ca43dbbb6, 2a679ae94e, 3a82c689fd, fbd4321fd5, d53a4286d7,
c086896625, 4e5d6c4091, 04158e7fa3.
The reason for reverting is security issues related to repeatable name lookups
(CVE-2014-0062). Even though 04158e7fa3 solved part of the problem, issues
remain that aren't feasible to analyze carefully before the RC deadline.
Reported-by: Noah Misch, Robert Haas
Discussion: https://postgr.es/m/20240808171351.a9.nmisch%40google.com
Backpatch-through: 17
Use gmtime_r() and localtime_r() instead of gmtime() and localtime(),
for thread-safety.
There are a few affected calls in libpq and ecpg's libpgtypes, which
are probably effectively bugs, because those libraries already claim
to be thread-safe.
There is one affected call in the backend. Most of the backend
otherwise uses the custom functions pg_gmtime() and pg_localtime(),
which are implemented differently.
While we're here, change the call in the backend to gmtime*() instead
of localtime*(), since for that use time zone behavior is irrelevant,
and this side-steps any questions about when time zones are
initialized by localtime_r() vs localtime().
Portability: gmtime_r() and localtime_r() are in POSIX but are not
available on Windows. Windows has functions gmtime_s() and
localtime_s() that can fulfill the same purpose, so we add some small
wrappers around them. (Note that these *_s() functions are also
different from the *_s() functions in the bounds-checking extension of
C11. We are not using those here.)
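A minimal sketch of such a wrapper (hypothetical name; note that the
Microsoft functions take their arguments in the opposite order from
POSIX):

    #include <time.h>

    /* POSIX-style gmtime_r() on top of the Windows gmtime_s() */
    static struct tm *
    win32_gmtime_r(const time_t *timep, struct tm *result)
    {
        return (gmtime_s(result, timep) == 0) ? result : NULL;
    }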
On MinGW, you can get the POSIX-style *_r() functions by defining
_POSIX_C_SOURCE appropriately before including <time.h>. This leads
to a conflict at least in plpython because apparently _POSIX_C_SOURCE
gets defined in some header there, and then our replacement
definitions conflict with the system definitions. To avoid that sort
of thing, we now always define _POSIX_C_SOURCE on MinGW and use the
POSIX-style functions here.
Reviewed-by: Stepan Neretin <sncfmgg@gmail.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/eba1dc75-298e-4c46-8869-48ba8aad7d70@eisentraut.org
Rather than using the SQL function injection_points_load(), this commit
changes the injection point test introduced in 768a9fd5535f to rely on
the two macros INJECTION_POINT_LOAD() and INJECTION_POINT_CACHED(),
which were originally introduced for the sake of this test.
This runs the test as a two-step process: load the injection point, then
run its callback directly from the loaded local cache. What the test
did originally was also fine, but the point here is to have an example
in core of how to use these new macros.
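A minimal sketch of the two-step pattern, with a hypothetical point name:

    /* outside the critical section: resolve the callback and cache it locally */
    INJECTION_POINT_LOAD("some-injection-point");

    START_CRIT_SECTION();
    /* inside: run the cached callback, without any allocation */
    INJECTION_POINT_CACHED("some-injection-point");
    END_CRIT_SECTION();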
While on it, fix the header ordering in multixact.c, as pointed out by
Alexander Korotkov. This was an oversight in 768a9fd5535f.
Per discussion with Álvaro Herrera.
Author: Michael Paquier
Discussion: https://postgr.es/m/ZsUnJUlSOBNAzwW1@paquier.xyz
Discussion: https://postgr.es/m/CAPpHfduzaBz7KMhwuVOZMTpG=JniPG4aUosXPZCxZydmzq_oEQ@mail.gmail.com
It's normal for the name in a free slot to match the new name. The
max_inuse mechanism kept simple cases from reaching the problem. The
problem could appear when index 0 was the previously-detached entry and
index 1 was in use. Back-patch to v17, where this code first appeared.
Currently, createPartitionTable() opens the newly created table using its
name. This approach is prone to a privilege escalation attack, because we
might end up opening a different table than the one we just created.
This commit addresses the issue by opening the newly created table by its
OID instead. It appears to be tricky to get a relation OID out of
ProcessUtility(), so we extend TableLikeClause with a new newRelationOid
field, which is filled within ProcessUtility() for the caller to access.
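Roughly, the change has this shape (a sketch, not the exact committed
code):

    /* before: name-based open, racy against concurrent DDL */
    rel = table_openrv(makeRangeVar(schemaname, relname, -1), AccessExclusiveLock);

    /* after: OID-based open of exactly the relation we created */
    rel = table_open(likeClause->newRelationOid, AccessExclusiveLock);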
Security: CVE-2014-0062
Reported-by: Noah Misch
Discussion: https://postgr.es/m/20240808171351.a9.nmisch%40google.com
Reviewed-by: Pavel Borisov, Dmitry Koval
Apply the same code simplification to ATExecAddColumn as was done in
7ff9afbbd: apply GETSTRUCT() once instead of doing it repeatedly in
the same function.
Author: Tender Wang
Discussion: https://postgr.es/m/CAHewXNkO9+U437jvKT14s0MCu6Qpf6G-p2mZK5J9mAi4cHDgpQ@mail.gmail.com
For a long time we have forbidden binary-coercible casts to or from
composite and array types, because such a cast cannot work correctly:
the type OID embedded in the value would need to change, but it won't
in a binary coercion. That reasoning applies equally to range types,
but we overlooked installing a similar restriction here when we
invented range types. Do so now.
Given the lack of field complaints, we won't change this in stable
branches, but it seems not too late for v17.
Per discussion of a problem noted by Peter Eisentraut.
Discussion: https://postgr.es/m/076968e1-0852-40a9-bc0b-117cd3f0e43c@eisentraut.org
Now that disable_cost is not included in the cost estimate, there's
no visible sign in EXPLAIN output of which plan nodes are disabled.
Fix that by propagating the number of disabled nodes from Path to
Plan, and then showing it in the EXPLAIN output.
There is some question about whether this is a desirable change.
While I personally believe that it is, it seems best to make it a
separate commit, in case we decide to back out just this part, or
rework it.
Reviewed by Andres Freund, Heikki Linnakangas, and David Rowley.
Discussion: http://postgr.es/m/CA+TgmoZ_+MS+o6NeGK2xyBv-xM+w1AfFVuHE4f_aq6ekHv7YSQ@mail.gmail.com
Previously, when a path type was disabled by e.g. enable_seqscan=false,
we either avoided generating that path type in the first place, or
more commonly, we added a large constant, called disable_cost, to the
estimated startup cost of that path. This latter approach can distort
planning. For instance, an extremely expensive non-disabled path
could seem to be worse than a disabled path, especially if the full
cost of that path node need not be paid (e.g. due to a Limit).
Or, as in the regression test whose expected output changes with this
commit, the addition of disable_cost can make two paths that would
normally be distinguishable in cost seem to have fuzzily the same cost.
To fix that, we now count the number of disabled path nodes and
consider that a high-order component of both the startup cost and the
total cost. Hence, the path list is now sorted by disabled_nodes and
then by total_cost, instead of just by the latter, and likewise for
the partial path list. It is important that this number is a count
and not simply a Boolean; else, as soon as we're unable to respect
disabled path types in all portions of the path, we stop trying to
avoid them where we can.
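Conceptually, the disabled-node count now acts as a higher-order cost
term; a rough sketch of the comparison, not the exact code:

    /* sketch: disabled_nodes dominates total_cost when comparing paths */
    if (path1->disabled_nodes != path2->disabled_nodes)
        path1_is_cheaper = (path1->disabled_nodes < path2->disabled_nodes);
    else
        path1_is_cheaper = (path1->total_cost < path2->total_cost);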
Because the path list is now sorted by the number of disabled nodes,
the join prechecks must compute the count of disabled nodes during
the initial cost phase instead of postponing it to final cost time.
Counts of disabled nodes do not cross subquery levels; at present,
there is no reason for them to do so, since we do not postpone
path selection across subquery boundaries (see make_subplan).
Reviewed by Andres Freund, Heikki Linnakangas, and David Rowley.
Discussion: http://postgr.es/m/CA+TgmoZ_+MS+o6NeGK2xyBv-xM+w1AfFVuHE4f_aq6ekHv7YSQ@mail.gmail.com
Oversight in commit a95ff1fe2eb4926b13e0940ad1f37d048704bdb0.
Apply GETSTRUCT() once instead of doing it repeatedly in the same
function. This simplifies the notation and makes the function's
structure more similar to the surrounding ones.
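Illustratively, with example field accesses:

    /* before: repeated GETSTRUCT() calls */
    ((Form_pg_attribute) GETSTRUCT(tuple))->attnotnull = true;
    ((Form_pg_attribute) GETSTRUCT(tuple))->atthasdef = true;

    /* after: fetch the struct pointer once */
    Form_pg_attribute attform = (Form_pg_attribute) GETSTRUCT(tuple);

    attform->attnotnull = true;
    attform->atthasdef = true;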
Discussion: https://www.postgresql.org/message-id/flat/a368248e-69e4-40be-9c07-6c3b5880b0a6@eisentraut.org
We advance origin progress during abort on successful streaming and
application of ROLLBACK in parallel streaming mode. But the origin
shouldn't be advanced during an error or unsuccessful apply due to
shutdown. Otherwise, it would result in transaction loss, as such a
transaction won't be sent again by the server.
Reported-by: Hou Zhijie
Author: Hayato Kuroda and Shveta Malik
Reviewed-by: Amit Kapila
Backpatch-through: 16
Discussion: https://postgr.es/m/TYAPR01MB5692FAC23BE40C69DA8ED4AFF5B92@TYAPR01MB5692.jpnprd01.prod.outlook.com
Author: Andreas Karlsson
Discussion: https://postgr.es/m/69c2a864-846f-4309-bd5a-aaa1c34f9a11@proxel.se
The descriptions for ProcArrayGroupUpdate and XactGroupUpdate claim
that these events mean we are waiting for the group leader "at end
of a parallel operation," but neither pertains to parallel
operations. This commit reverts these descriptions to their
wording before commit 3048898e73, i.e., "end of a parallel
operation" is changed to "transaction end."
Author: Sameer Kumar
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/CAGPeHmh6UMrKQHKCmX%2B5vV5TH9P%3DKw9en3k68qEem6J%3DyrZPUA%40mail.gmail.com
Backpatch-through: 13
Before commit a0e0fb1ba56f, multixact.c contained a case in the
multixact-read path where it would loop sleeping 1ms each time until
another multixact-create path completed; that path was not covered by any
tests. That commit changed the code to rely on a condition variable
instead. Add a test now, which relies on injection points and "loading"
thereof (because the code in question runs inside a critical section), per
commit 4b211003ecc2.
Author: Andrey Borodin <x4mmm@yandex-team.ru>
Reviewed-by: Michaël Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/0925F9A9-4D53-4B27-A87E-3D83A757B0E0@yandex-team.ru
This patch provides additional logging information in the following
conflict scenarios while applying changes:
insert_exists: Inserting a row that violates a NOT DEFERRABLE unique constraint.
update_differ: Updating a row that was previously modified by another origin.
update_exists: The updated row value violates a NOT DEFERRABLE unique constraint.
update_missing: The tuple to be updated is missing.
delete_differ: Deleting a row that was previously modified by another origin.
delete_missing: The tuple to be deleted is missing.
For insert_exists and update_exists conflicts, the log can include the origin
and commit timestamp details of the conflicting key with track_commit_timestamp
enabled.
update_differ and delete_differ conflicts can only be detected when
track_commit_timestamp is enabled on the subscriber.
We do not offer additional logging for exclusion constraint violations because
these constraints can specify rules that are more complex than simple equality
checks. Resolving such conflicts won't be straightforward. This area can be
further enhanced if required.
Author: Hou Zhijie
Reviewed-by: Shveta Malik, Amit Kapila, Nisha Moond, Hayato Kuroda, Dilip Kumar
Discussion: https://postgr.es/m/OS0PR01MB5716352552DFADB8E9AD1D8994C92@OS0PR01MB5716.jpnprd01.prod.outlook.com
Here we add ExprState support for obtaining a 32-bit hash value from a
list of expressions. This allows both faster hashing and JIT compilation
of these expressions. This is especially useful when hash joins have
multiple join keys: the previous code called ExecEvalExpr on each hash
join key individually, which was inefficient because tuple deformation
took only one key at a time into account and could therefore walk the
tuple once for each join key. With the new code, we determine the maximum
attribute required and deform the tuple to that point only once.
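A conceptual sketch of the evaluation (not the actual generated ExprState
steps; the array names are hypothetical, and null handling is elided):

    /* deform the tuple once, up to the highest attribute any key needs */
    slot_getsomeattrs(slot, max_attnum);

    hashvalue = 0;
    for (int i = 0; i < nkeys; i++)
    {
        Datum  keyval = slot->tts_values[keyattnums[i] - 1];
        uint32 keyhash = DatumGetUInt32(FunctionCall1(&hashfunctions[i], keyval));

        hashvalue = pg_rotate_left32(hashvalue, 1) ^ keyhash;
    }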
Some performance tests done with this change have shown up to a 20%
performance increase of a query containing a Hash Join without JIT
compilation and up to a 26% performance increase when JIT is enabled and
optimization and inlining were performed by the JIT compiler. The
performance increase with a single join column was smaller, at 14% both
with and without JIT. This test was done using a fairly small hash
table and a large number of hash probes. The increase will likely be
less with large tables, especially ones larger than L3 cache as memory
pressure is more likely to be the limiting factor there.
This commit only addresses Hash Joins, but lays expression evaluation
and JIT compilation infrastructure for other hashing needs such as Hash
Aggregate.
Author: David Rowley
Reviewed-by: Alexey Dvoichenkov <alexey@hyperplane.net>
Reviewed-by: Tels <nospam-pg-abuse@bloodgate.com>
Discussion: https://postgr.es/m/CAApHDvoexAxgQFNQD_GRkr2O_eJUD1-wUGm%3Dm0L%2BGc%3DT%3DkEa4g%40mail.gmail.com
When a partition is detached and immediately dropped, a prepared
statement could try to compute a new partition descriptor that includes
it. This leads to this kind of error:
ERROR: could not open relation with OID 457639
Avoid this by skipping the partition in expand_partitioned_rtentry if it
doesn't exist.
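The essence of the fix, sketched (the committed code may differ in
detail):

    childrel = try_table_open(childOID, rte->rellockmode);
    if (childrel == NULL)
        continue;       /* partition was dropped concurrently; skip it */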
Noted by me while investigating bug #18559. Kuntal Ghosh helped to
identify the exact failure.
Backpatch to 14, where DETACH CONCURRENTLY was introduced.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Kuntal Ghosh <kuntalghosh.2007@gmail.com>
Reviewed-by: Junwang Zhao <zhjwpku@gmail.com>
Discussion: https://postgr.es/m/202408122233.bo4adt3vh5bi@alvherre.pgsql
Report search_path changes to the client. Multi-tenant applications
often map tenants to schemas, and use search_path to pick the tenant a
given connection works with. This breaks when a connection pool (like
PgBouncer) is in use, because the search_path may change unexpectedly.
There are other GUCs we might want reported (e.g. various timeouts), but
search_path is by far the biggest foot gun that can lead either to
puzzling failures during query execution (when objects are missing or
are defined differently), or even to accessing incorrect data.
Many existing tools modify search_path, pg_dump being a notable example.
Ideally, clients could specify which GUCs are interesting and should be
subject to this reporting, but we don't support that. GUC_REPORT is what
connection pools rely on for other interesting GUCs, so just use that.
When this change was initially proposed in 2014, one of the concerns was
impact on performance. But this was addressed by commit 2432b1a04087,
which ensures we report each GUC at most once per query, no matter how
many times it changed during execution.
Eventually, this might be superseded by making the protocol extensible
in this direction, but it's unclear when (or if) that will happen. Until
then, we can leverage GUC_REPORT.
Author: Alexander Kukushkin, Jelte Fennema-Nio
Discussion: https://postgr.es/m/CAFh8B=k8s7WrcqhafmYhdN1+E5LVzZi_QaYDq8bKvrGJTAhY2Q@mail.gmail.com
Add a comment explaining dropdb() can't rely on syscache. The issue with
flattened rows was fixed by commit 0f92b230f88b, but better to have
a clear explanation why the systable scan is necessary. The other places
doing in-place updates on pg_database have the same comment.
Suggestion and patch by Yugo Nagata. Backpatch to 12, same as the fix.
Author: Yugo Nagata
Backpatch-through: 12
Discussion: https://postgr.es/m/CAJTYsWWNkCt+-UnMhg=BiCD3Mh8c2JdHLofPxsW3m2dkDFw8RA@mail.gmail.com
Commit 274bbced disabled session tickets for TLSv1.3 on top of the
already disabled TLSv1.2 session tickets, but accidentally caused
a regression where TLSv1.2 session tickets were incorrectly sent.
Fix by unconditionally disabling TLSv1.2 session tickets and only
disable TLSv1.3 tickets when the right version of OpenSSL is used.
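The resulting logic is roughly as follows (the configure guard name is
assumed here; SSL_OP_NO_TICKET governs TLSv1.2-style tickets, while
SSL_CTX_set_num_tickets() governs TLSv1.3 tickets):

    /* always disable TLSv1.2 session tickets */
    SSL_CTX_set_options(context, SSL_OP_NO_TICKET);

    #ifdef HAVE_SSL_CTX_SET_NUM_TICKETS
    /* disable TLSv1.3 session tickets where OpenSSL (>= 1.1.1) has the API */
    SSL_CTX_set_num_tickets(context, 0);
    #endif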
Backpatch to all supported branches.
Reported-by: Cameron Vogt <cvogt@automaticcontrols.net>
Reported-by: Fire Emerald <fire.github@gmail.com>
Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
Discussion: https://postgr.es/m/DM6PR16MB3145CF62857226F350C710D1AB852@DM6PR16MB3145.namprd16.prod.outlook.com
Backpatch-through: v12
Commit ca051d8b101 called newlocale(LC_COLLATE, ...) instead of
newlocale(LC_COLLATE_MASK, ...), in code reached only on FreeBSD. They
have the same value on that OS, explaining why it worked. Fix.
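That is, the corrected call takes the mask form:

    /* LC_COLLATE is a category index; newlocale() expects the _MASK bit */
    locale_t loc = newlocale(LC_COLLATE_MASK, "C", (locale_t) 0);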
Back-patch to 14, where ca051d8b101 landed.
The log message on backend crash used the wrong variable, which could be
uninitialized. Introduced in commit 28a520c0b7.
Reported-by: Alexander Lakhin
Discussion: https://www.postgresql.org/message-id/451b0797-83b8-cdbc-727f-8d7a7b0e3bca@gmail.com
This is a continuation of c9e24573905b, containing changes that were
included in the proposed patch but missed in the actual commit. I managed
to miss these diffs while rebasing the original patch.
Thanks to Noah Misch, Peter Eisentraut and Alexander Korotkov for the
pokes.
Discussion: https://postgr.es/m/92fe572d-638e-4162-aef6-1c42a2936f25@eisentraut.org
Discussion: https://postgr.es/m/20240810175055.cd.nmisch@google.com
Backpatch-through: 17
One of the two slot scans in SlruSelectLRUPage was not walking only the
slots in the specific bank where the buffer could be; change it to do
that.
Oversight in 53c2a97a9266.
Author: Sergey Sargsyan <sergey.sargsyan.2001@gmail.com>
Discussion: https://postgr.es/m/18582-5f301dd30ba91a38@postgresql.org
Commit c66a7d75e652 modified DROP DATABASE so that if interrupted, the
database is known to be in an invalid state and can only be dropped.
This is done by setting a flag using an in-place update, so that it's
not lost in case of rollback.
For databases with many ACLs, this may however fail like this:
ERROR: wrong tuple length
This happens because with many ACLs, the pg_database.datacl attribute
gets TOASTed. The dropdb() code reads the tuple from the syscache, which
means it's detoasted. But the in-place update expects the tuple length
to match the on-disk tuple.
Fixed by reading the tuple from the catalog directly, not from syscache.
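A sketch of the direct catalog read (variable names hypothetical):

    ScanKeyData scankey;
    SysScanDesc scan;
    HeapTuple   tup;

    ScanKeyInit(&scankey, Anum_pg_database_oid, BTEqualStrategyNumber,
                F_OIDEQ, ObjectIdGetDatum(db_id));
    scan = systable_beginscan(pgdbrel, DatabaseOidIndexId, true,
                              NULL, 1, &scankey);
    tup = systable_getnext(scan);   /* on-disk tuple length, safe to update in place */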
Report and fix by Ayush Tiwari. Backpatch to 12. The DROP DATABASE fix
was backpatched to 11, but 11 is EOL at this point.
Reported-by: Ayush Tiwari
Author: Ayush Tiwari
Reviewed-by: Tomas Vondra
Backpatch-through: 12
Discussion: https://postgr.es/m/CAJTYsWWNkCt+-UnMhg=BiCD3Mh8c2JdHLofPxsW3m2dkDFw8RA@mail.gmail.com
Commit 97ddda8a82ac470ae581d0eb485b6577707678bc removed the rmtree()
behavior from XLOG_TBLSPC_CREATE, obsoleting that part of the comment.
The comment's point about XLOG_DBASE_CREATE was wrong when commit
fa0f466d5329e10b16f3b38c8eaf5306f7e234e8 introduced the point. (It
would have been accurate if that commit had predated commit
fbcbc5d06f53aea412130deb52e216aa3883fb8d introducing the second
checkpoint of CREATE DATABASE.) Nothing can skip log_smgrcreate() on
the basis of wal_level=minimal, so don't comment on that.
Commit c6b92041d38512a4176ed76ad06f713d2e6c01a8 expanded WAL skipping
from five specific operations to relfilenodes generally, hence the
CreateDatabaseUsingFileCopy() comment change.
Discussion: https://postgr.es/m/20231008022204.cc@rfd.leadboat.com
The commit was "Provide API for streaming relation data.".
Reported-by: Nazir Bilal Yavuz
Discussion: https://postgr.es/m/CAN55FZ3KsZ2faZs1sK5J0W+_8B3myB232CfLYGie4u4BBMwP3g@mail.gmail.com
Backpatch-through: master
There's not much point in asserting a pointer isn't NULL after some code
has already dereferenced that pointer.
Adjust the code so that the Assert occurs before the pointer dereference.
The Assert probably has questionable value in the first place, but it
seems worth keeping around to document the contract between
CopyMultiInsertInfoNextFreeSlot() and its callers.
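The shape of the fix, with hypothetical names:

    /* before: dereference first, assert afterwards */
    nused = buffer->nused;
    Assert(buffer != NULL);

    /* after: assert before the first dereference */
    Assert(buffer != NULL);
    nused = buffer->nused;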
Author: Amul Sul <sulamul@gmail.com>
Discussion: https://postgr.es/m/CAAJ_b94hXQzXaJxTLShkxQUgezf_SUxhzX9TH2f-g6gP7bne7g@mail.gmail.com
Commit 108d2adb9e missed updating a few places in the jsonb code
that rely on signed integer wrapping for correctness. These can
also be fixed by using pg_abs_s32() to negate a signed integer
(that is known to be negative) for comparison with an unsigned
integer.
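For example, a sketch of the comparison idiom:

    /* is |pos| (pos known to be negative) past an array of nelems elements? */
    static bool
    neg_index_out_of_range(int32 pos, uint32 nelems)
    {
        Assert(pos < 0);
        return pg_abs_s32(pos) > nelems;    /* well-defined even for PG_INT32_MIN */
    }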
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/bfff906f-300d-81ea-83b7-f2c93845e7f2%40gmail.com
"EXTRACT(WEEK FROM interval_value)" formerly threw an error.
Define it as "tm->tm_mday / 7". (With C99 division semantics,
this gives consistent results for negative intervals.)
"EXTRACT(QUARTER FROM interval_value)" has been implemented
all along, but it formerly gave extremely strange results for
negative intervals. Fix it so that the output for -N months
is the negative of the output for N months.
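With C99 semantics, integer division truncates toward zero, so the result
is symmetric around zero; for example:

    int week_pos = 25 / 7;      /* tm_mday =  25  ->  3 */
    int week_neg = -25 / 7;     /* tm_mday = -25  -> -3 (not -4) */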
Per bug #18348 from Michael Bondarenko and subsequent discussion.
Discussion: https://postgr.es/m/18348-b097a3587dfde8a4@postgresql.org
This commit updates a couple of places in the jsonb code to no
longer rely on signed integer wrapping for correctness. Like
commit 9e9a2b7031, this is intended to move us closer towards
removing -fwrapv, which may enable some compiler optimizations.
However, there is presently no plan to actually remove that
compiler option in the near future.
This commit makes use of the newly introduced pg_abs_s32() routine
to negate a signed integer (that is known to be negative) for
comparison with an unsigned integer. In passing, change one use of
INT_MIN to the more portable PG_INT32_MIN.
Reported-by: Alexander Lakhin
Author: Joseph Koshakow
Reviewed-by: Jian He
Discussion: https://postgr.es/m/CAAvxfHdBPOyEGS7s%2Bxf4iaW0-cgiq25jpYdWBqQqvLtLe_t6tw%40mail.gmail.com
And improve the comments.
Backpatch to v17 where this was introduced.
Reviewed-by: Noah Misch
Discussion: https://www.postgresql.org/message-id/cac7d1b6-8358-40be-af0b-21bc9b27d34c@iki.fi
The handling of binary and text formats is quite different here, so
it's clearer to check for the format first and have two separate
loops.
Author: jian he <jian.universality@gmail.com>
Reviewed-by: Ilia Evdokimov, Junwang Zhao
Discussion: https://www.postgresql.org/message-id/CACJufxFzHCeFBQF0M%2BSgk_NwknWuQ4oU7tS1isVeBrbhcKOHkg@mail.gmail.com
Commit a78fcfb51243 removed the last use of it.
Author: Hugo Zhang, Aleksander Alekseev
Reviewed-by: Daniel Gustafsson
Discussion: https://www.postgresql.org/message-id/NT0PR01MB129459E243721B954611938F9CDD2%40NT0PR01MB1294.CHNPR01.prod.partner.outlook.cn
This commit attempts to update a few places, such as the money,
numeric, and timestamp types, to no longer rely on signed integer
wrapping for correctness. This is intended to move us closer
towards removing -fwrapv, which may enable some compiler
optimizations. However, there is presently no plan to actually
remove that compiler option in the near future.
Besides using some of the existing overflow-aware routines in
int.h, this commit introduces and makes use of some new ones.
Specifically, it adds functions that accept a signed integer and
return its absolute value as an unsigned integer with the same
width (e.g., pg_abs_s64()). It also adds functions that accept an
unsigned integer, store the result of negating that integer in a
signed integer with the same width, and return whether the negation
overflowed (e.g., pg_neg_u64_overflow()).
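One plausible shape for the latter, shown as a sketch rather than the
committed implementation:

    static inline bool
    pg_neg_u64_overflow(uint64 a, int64 *result)
    {
        /* -a is representable only if a <= 2^63 */
        if (a > (uint64) PG_INT64_MAX + 1)
            return true;
        if (a == (uint64) PG_INT64_MAX + 1)
            *result = PG_INT64_MIN;
        else
            *result = -(int64) a;
        return false;
    }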
Finally, this commit adds a couple of tests for timestamps near
POSTGRES_EPOCH_JDATE.
Author: Joseph Koshakow
Reviewed-by: Tom Lane, Heikki Linnakangas, Jian He
Discussion: https://postgr.es/m/CAAvxfHdBPOyEGS7s%2Bxf4iaW0-cgiq25jpYdWBqQqvLtLe_t6tw%40mail.gmail.com
We shouldn't ask the client to use a protocol version later than the
one that they requested. To avoid that, if the client requests a
version newer than the latest one we support, set FrontendProtocol
to the latest version we support, not the requested version. Then,
use that value when building the NegotiateProtocolVersion message.
(It seems good on general principle to avoid setting FrontendProtocol
to a version we don't support, anyway.)
None of this really matters right now, because we only support a
single protocol version, but if that ever changes, we'll need this.
Jelte Fennema-Nio, reviewed by me and incorporating some of my
proposed wording
Discussion: https://postgr.es/m/CAGECzQTyXDNtMXdq2L-Wp=OvOCPa07r6+U_MGb==h90MrfT+fQ@mail.gmail.com
Currently mul_var() uses the schoolbook multiplication algorithm,
which is O(n^2) in the number of NBASE digits. To improve performance
for large inputs, convert the inputs to base NBASE^2 before
multiplying, which effectively halves the number of digits in each
input, theoretically speeding up the computation by a factor of 4. In
practice, the actual speedup for large inputs varies between around 3
and 6 times, depending on the system and compiler used. In turn, this
significantly reduces the runtime of the numeric_big regression test.
For this to work, 64-bit integers are required for the products of
base-NBASE^2 digits, so this works best on 64-bit machines, on which
it is faster whenever the shorter input has more than 4 or 5 NBASE
digits. On 32-bit machines, the additional overheads, especially
during carry propagation and the final conversion back to base-NBASE,
are significantly higher, and it is only faster when the shorter input
has more than around 50 NBASE digits. When the shorter input has more
than 6 NBASE digits (so that mul_var_short() cannot be used), but
fewer than around 50 NBASE digits, there may be a noticeable slowdown
on 32-bit machines. That seems to be an acceptable tradeoff, given the
performance gains for other inputs, and the effort that would be
required to maintain code specifically targeting 32-bit machines.
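The digit pairing at the heart of this, sketched:

    /*
     * Two adjacent base-NBASE (10000) digits become one base-NBASE^2 digit.
     * Each paired digit is < 10^8, so a product of two paired digits is
     * < 10^16, leaving plenty of 64-bit headroom for carry accumulation.
     */
    uint64 d = (uint64) digits[i] * NBASE + digits[i + 1];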
Joel Jacobson and Dean Rasheed.
Discussion: https://postgr.es/m/9d8a4a42-c354-41f3-bbf3-199e1957db97%40app.fastmail.com
Commit ca481d3c9a introduced mul_var_short(), which is used by
mul_var() whenever the shorter input has 1-4 NBASE digits and the
exact product is requested. As speculated on in that commit, it can be
extended to work for more digits in the shorter input. This commit
extends it up to 6 NBASE digits (up to 24 decimal digits), for which
it also gives a significant speedup. This covers more cases likely to
occur in real-world queries, for which using base-NBASE^2 arithmetic
provides little benefit.
To avoid code bloat and duplication, refactor it a bit using macros
and exploiting the fact that some portions of the code are shared
between the different cases.
Dean Rasheed, reviewed by Joel Jacobson.
Discussion: https://postgr.es/m/9d8a4a42-c354-41f3-bbf3-199e1957db97%40app.fastmail.com
There were several sets of very similar local variable names, such as
"downer" and "dbowner", which was very confusing and error-prone.
Rename the former to "ownerEl" and so on, similar to collationcmds.c
and typecmds.c.
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/flat/e5bce225-ee04-40c7-a280-ea7214318048%40eisentraut.org
Attempting to add a system column for a table to an existing publication
would result in the not very intuitive error message of:
ERROR: negative bitmapset member not allowed
Here we improve that to have it display the same error message as a user
would see if they tried adding a system column for a table when adding
it to the publication in the first place.
Doing this requires making the function which validates the list of
columns an extern function. The signature of the static function wasn't
an ideal external API as it made the code more complex than it needed to be.
Here we adjust the function to have it populate a Bitmapset of attribute
numbers. Doing it this way allows code simplification.
There was no particular bug here other than the weird error message, so
no backpatch.
Bug: #18558
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Author: Peter Smith, David Rowley
Discussion: https://postgr.es/m/18558-411bc81b03592125@postgresql.org
The TRACE_SORT macro guarded the availability of the trace_sort GUC
setting. But it has been enabled by default ever since it was
introduced in PostgreSQL 8.1, and there have been no reports that
someone wanted to disable it. So just remove the macro to simplify
things. (For the avoidance of doubt: The trace_sort GUC is still
there. This only removes the rarely-used macro guarding it.)
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://www.postgresql.org/message-id/flat/be5f7162-7c1d-44e3-9a78-74dcaa6529f2%40eisentraut.org
Previously, log_autovacuum_min_duration utilized dedicated code for
logging resource statistics, such as system and buffer usage during
autoanalyze. However, this logging functionality was not utilized by
ANALYZE VERBOSE.
This commit adds resource statistics reporting to ANALYZE VERBOSE by
reusing the same logging code as autoanalyze.
Author: Anthonin Bonnefoy
Reviewed-by: Masahiko Sawada
Discussion: https://postgr.es/m/CAO6_Xqr__kTTCLkftqS0qSCm-J7_xbRG3Ge2rWhucxQJMJhcRA%40mail.gmail.com
Previously, (auto)analyze used global variables VacuumPageHit,
VacuumPageMiss, and VacuumPageDirty to track buffer usage. However,
pgBufferUsage provides a more generic way to track buffer usage with
support functions.
This change replaces those global variables with pgBufferUsage in
analyze. Since analyze was the sole user of those variables, it
removes their declarations. Vacuum previously used those variables but
replaced them with pgBufferUsage as part of a bug fix, commit
5cd72cc0c.
Additionally, it adjusts the buffer usage message in both vacuum and
analyze for better consistency.
Author: Anthonin Bonnefoy
Reviewed-by: Masahiko Sawada, Michael Paquier
Discussion: https://postgr.es/m/CAO6_Xqr__kTTCLkftqS0qSCm-J7_xbRG3Ge2rWhucxQJMJhcRA%40mail.gmail.com
Some newer code was applying this inconsistently.
constexpr is a keyword in C23. Rename a conflicting identifier for
future-proofing.
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/08abc832-1384-4aca-a535-1a79765b565e%40eisentraut.org