Bug #17512 highlighted that a suitably broken data type could cause the
backend to crash if either the hash function or the equality function was
in some way non-deterministic with respect to its input values. Such a
data type could crash the backend because of code which assumes that
we'll always find a hash table entry corresponding to an item in the
Memoize LRU list.
Here we remove the assumption that we'll always find the entry
corresponding to the given LRU list item and add run-time checks to verify
we have found the given item in the cache.
This is not a fix for bug #17512, but it will turn the crash reported by
that bug report into an internal ERROR.
Reported-by: Ales Zeleny
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAApHDvpxFSTwvoYWT7kmFVSZ9zLAeHb=S9vrz=RExMgSkQNWqw@mail.gmail.com
Backpatch-through: 14, where Memoize was added.
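
A minimal sketch of the shape of that run-time check (the lookup helper and
variable names here are illustrative, not the actual ones):

    /*
     * Look up the cache entry for this LRU list item.  With a broken data
     * type the lookup can fail, so raise an internal ERROR rather than
     * assuming success.
     */
    entry = memoize_hash_lookup(mstate, key);   /* hypothetical helper */
    if (entry == NULL)
        elog(ERROR, "could not find memoization table entry");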
pg_stat_get_subscription scanned one more LogicalRepWorker array entry
than is really allocated. In the worst case this could lead to SIGSEGV,
if the LogicalRepCtx data structure is near the end of shared memory.
That seems quite unlikely though (thanks to the ordering of calls in
CreateSharedMemoryAndSemaphores) and we've heard no field reports of it.
A more likely misbehavior is one row of garbage data in the function's
result, but even that is not very likely because of the check that the
pid field matches some live backend.
Report and fix by Kuntal Ghosh. This bug is old, so back-patch
to all supported branches.
Discussion: https://postgr.es/m/CAGz5QCJykEDzW6jQK6Yz7Qh_PMtD=95de_7QoocbVR2Qy8hWZA@mail.gmail.com
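
For illustration, the fix boils down to tightening a loop bound along these
lines (a sketch; the surrounding code of pg_stat_get_subscription is
simplified away):

    /* scan only the allocated worker slots: "<", not "<=" */
    for (i = 0; i < max_logical_replication_workers; i++)
    {
        LogicalRepWorker *worker = &LogicalRepCtx->workers[i];

        /* ... copy out the fields to report ... */
    }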
Currently, we simply combine the column lists when publishing tables on
multiple publications and that can sometimes lead to unexpected behavior.
Say a column is published in one row-filtered publication. Then the values
for that column are sent to the subscriber even for rows that don't match
that publication's row filter, as long as the row matches the row filter of
any other publication, even if that other publication doesn't include the
column.
The main purpose of introducing a column list is to have statically
different shapes on publisher and subscriber or hide sensitive column
data. In both cases, it doesn't seem to make sense to combine column
lists.
So, we disallow the cases where the column list is different for the same
table when combining publications. It can be later extended to combine the
column lists for selective cases where required.
Reported-by: Alvaro Herrera
Author: Hou Zhijie
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/202204251548.mudq7jbqnh7r@alvherre.pgsql
Since a117cebd6, some older gcc versions issue "variable may be used
uninitialized in this function" complaints for brin_summarize_range.
Silence that using the same coding pattern as in bt_index_check_internal;
arguably, a117cebd6 had too narrow a view of which compilers might give
trouble.
Nathan Bossart and Tom Lane. Back-patch as the previous commit was.
Discussion: https://postgr.es/m/20220601163537.GA2331988@nathanxps13
This reverts commit d9d076222f5b "VACUUM: ignore indexing operations
with CONCURRENTLY".
These changes caused indexes created with the CONCURRENTLY option to
miss heap tuples that were HOT-updated and HOT-pruned during the index
creation. Before these changes, HOT pruning would have been prevented
by the Xmin of the transaction creating the index, but because this
change was precisely to allow the Xmin to move forward ignoring that
backend, now other backends scanning the table can prune them. This is
not a problem for VACUUM (which requires a lock that conflicts with a
CREATE INDEX CONCURRENTLY operation), but HOT-prune can definitely
occur. In other words, Xmin advancement was sped up, but at the cost of
corrupting the resulting index.
Regrettably, this means that the new feature in PG14 that RIC/CIC on
very large tables no longer force VACUUM to retain very old tuples goes
away. We might try to implement it again in a later release, but for
now the risk of indexes missing tuples is too high and there's no easy
fix.
Backpatch to 14, where this change appeared.
Reported-by: Peter Slavov <pet.slavov@gmail.com>
Diagnosed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Diagnosed-by: Michael Paquier <michael@paquier.xyz>
Diagnosed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/17485-396609c6925b982d%40postgresql.org
We hadn't noticed this because (a) few people feed invalid
timezone abbreviation files to the server, and (b) in typical
scenarios guc.c would throw ereport(ERROR) and then transaction
abort handling would silently clean up the leaked file reference.
However, it was possible to observe file leakage warnings if one
breaks an already-active abbreviation file, because guc.c does
not throw ERROR when loading supposedly-validated settings during
session start or SIGHUP processing.
Report and fix by Kyotaro Horiguchi (cosmetic adjustments by me)
Discussion: https://postgr.es/m/20220530.173740.748502979257582392.horikyota.ntt@gmail.com
With the old logic, when the receiver had not yet attached, we would
never call shm_mq_inc_bytes_written(), even if force_flush = true
was specified. That could result in a situation where data that the
sender believes it has sent is never received.
Along the way, remove a useless function prototype for a nonexistent
function from shm_mq.h.
Commit 46846433a03dff4f2e08c8a161e54a842da360d6 introduced these
problems.
Pavan Deolasee, with a few changes by me.
Discussion: https://postgr.es/m/CABOikdPkwtLLCTnzzmpSMXo3QZa2yXq0J7Q61ssdLFAJYrOVvQ@mail.gmail.com
Author: Peter Smith
Discussion: https://postgr.es/m/CAHut+Pu-V22PiJF2ym9_NVZe-+qnycfyEX24dZm=7URWhDHJ3w@mail.gmail.com
Build farm animal gharial recently failed a few times in a parallel
worker's call to OwnLatch() with "ERROR: latch already owned". Let's
turn that into a PANIC and show the PID of the owner, to try to learn
more.
Discussion: https://postgr.es/m/CA%2BhUKGJ_0RGcr7oUNzcHdn7zHqHSB_wLSd3JyS2YC_DYB%2B-V%3Dg%40mail.gmail.com
Commit 1a36bc9db (SQL/JSON query functions) introduced STRING as a
type_func_name_keyword, thereby breaking applications that use
"string" as a table name, column name, function parameter name, etc.
That seems like a pretty bad thing, not least because the SQL spec
says that STRING is an unreserved keyword.
This is easy enough to fix so far as the core grammar is concerned.
However, doing so causes some ECPG test cases to fail, specifically
those that use "string" as a typedef name. It turns out this is
because portions of the ECPG grammar allow type_func_name_keywords
but not unreserved_keywords as typedef names. That's pretty horrid,
and it's mildly astonishing that we've not heard complaints about it
before. We can fix two of those uses trivially, but the ones in the
var_type production are less easy. As a stopgap, hard-code STRING as
an allowed alternative in var_type.
Per report from Alastair McKinley.
Discussion: https://postgr.es/m/3661437.1653855582@sss.pgh.pa.us
In the codepath when no encoding conversion is required, the check for
incomplete character at the end of input incorrectly used server
encoding's max character length, instead of the client's. Usually the
server and client encodings are the same when we're not performing
encoding conversion, but SQL_ASCII is an exception.
In passing, also fix some outdated comments that still talked about
the old COPY protocol. It was removed in v14.
Per bug #17501 from Vitaly Voronov. Backpatch to v14 where this was
introduced.
Discussion: https://www.postgresql.org/message-id/17501-128b1dd039362ae6@postgresql.org
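
Sketched, the idea is simply to ask the source encoding for its maximum
character length when checking for an incomplete multibyte sequence at the
end of the buffer (the field name below is an assumption):

    /* the raw input is in the client/file encoding, not the server's */
    int     max_char_len = pg_encoding_max_length(cstate->file_encoding);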
Somewhat embarrassing oversight in 98f897339b0. Does not have a functional
impact, but is unnecessarily confusing.
Reported-By: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/Yo2351qVYqd/bJws@paquier.xyz
This reverts commit 06f5295 because of issues with this approach, both in
terms of efficiency impact and stability. First, contrary to the
single-item cache for transaction IDs in transam.c, the cache may not be
hit for a long time, and without an invalidation mechanism to clear it, it
could return inconsistent results after a wraparound, for example.
Second, the use of SubTransGetTopmostTransaction() for the caching has a
limited impact on performance. SubTransGetParent() could have more
impact, though the benefit of the single-item approach there still needs
to be demonstrated by benchmarking, particularly under conditions where
SLRU lookups are stressed in parallel with overflowed snapshots (aka more
than 64 subxids generated, for example).
After discussion with Andres Freund.
Discussion: https://postgr.es/m/20220524235250.gtt3uu5zktfkr4hv@alap3.anarazel.de
If a short description is specified as NULL in one of the various
DefineCustomXXXVariable() functions available to external modules to
define a custom parameter, SHOW ALL would crash. This change teaches
SHOW ALL, as well as any other code paths that manipulate short
descriptions, to handle a NULL value properly. Note that
help_config.c was already able to do that, when describing a set of GUCs
for postgres --describe-config.
Author: Steve Chavez
Reviewed by: Nathan Bossart, Andres Freund, Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/CAGRrpzY6hO-Kmykna_XvsTv8P2DshGiU6G3j8yGao4mk0CqjHA%40mail.gmail.com
Backpatch-through: 10
9d9c02ccd added code to allow the executor to take shortcuts when quals
on monotonic window functions guaranteed that once the qual became false
it could never become true again. When possible, baserestrictinfo quals
are converted to become these quals, which we call run conditions.
Unfortunately, in 9d9c02ccd, I forgot to update
remove_unused_subquery_outputs to teach it about these run conditions.
This could cause a WindowFunc column which was unused in the target list
but referenced by an upper-level WHERE clause to be removed from the
subquery when the qual in the WHERE clause was converted into a window run
condition. Because of this, the entire WindowClause would be removed from
the query, resulting in additional rows making it into the result set when
they should have been filtered out by the WHERE clause.
Here we fix this by recording which target list items in the subquery have
run conditions. That gets passed along to remove_unused_subquery_outputs
to tell it not to remove these items from the target list.
Bug: #17495
Reported-by: Jeremy Evans
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/17495-7ffe2fa0b261b9fa@postgresql.org
Commits a59c79564 et al. tried to sync libpq's SSL key file
permissions checks with what we've used for years in the backend.
We did not intend to create any new failure cases, but it turns out
we did: restricting the key file's ownership breaks cases where the
client is allowed to read a key file despite not having the identical
UID. In particular a client running as root used to be able to read
someone else's key file; and having seen that I suspect that there are
other, less-dubious use cases that this restriction breaks on some
platforms.
We don't really need an ownership check, since if we can read the key
file despite its having restricted permissions, it must have the right
ownership --- under normal conditions anyway, and the point of this
patch is that any additional corner cases where that works should be
deemed allowable, as they have been historically. Hence, just drop
the ownership check, and rearrange the permissions check to get rid
of its faulty assumption that geteuid() can't be zero. (Note that the
comparable backend-side code doesn't have to cater for geteuid() == 0,
since the server rejects that very early on.)
This does have the end result that the permissions safety check used
for a root user's private key file is weaker than that used for
anyone else's. While odd, root really ought to know what she's doing
with file permissions, so I think this is acceptable.
Per report from Yogendra Suralkar. Like the previous patch,
back-patch to all supported branches.
Discussion: https://postgr.es/m/MW3PR15MB3931DF96896DC36D21AFD47CA3D39@MW3PR15MB3931.namprd15.prod.outlook.com
repeat() checked for integer overflow during its calculation of the
required output space, but it just passed the resulting integer to
palloc(). This meant that result sizes between 1GB and 2GB led to
ERRCODE_INTERNAL_ERROR, "invalid memory alloc request size" rather
than ERRCODE_PROGRAM_LIMIT_EXCEEDED, "requested length too large".
That seems like a bit of a wart, so add an explicit AllocSizeIsValid
check to make these error cases uniform.
Do likewise in the sibling functions lpad() etc. While we're here,
also modernize their overflow checks to use pg_mul_s32_overflow() etc
instead of expensive divisions.
Per complaint from Japin Li. This is basically cosmetic, so I don't
feel a need to back-patch.
Discussion: https://postgr.es/m/ME3P282MB16676ED32167189CB0462173B6D69@ME3P282MB1667.AUSP282.PROD.OUTLOOK.COM
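
A sketch of the resulting overflow-aware length computation, using the
helpers from common/int.h and the AllocSizeIsValid() macro (count, slen and
result stand for repeat()'s existing locals; the committed code may differ
in detail):

    int32       tlen;

    /* size = count * slen + VARHDRSZ, with every step checked */
    if (unlikely(pg_mul_s32_overflow(count, slen, &tlen)) ||
        unlikely(pg_add_s32_overflow(tlen, VARHDRSZ, &tlen)) ||
        unlikely(!AllocSizeIsValid(tlen)))
        ereport(ERROR,
                (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
                 errmsg("requested length too large")));

    result = (text *) palloc(tlen);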
Mistake in 5891c7a8ed8, likely made when switching the default value from none
to fetch during development.
Reported-By: Nathan Bossart <nathandbossart@gmail.com>
Author: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/20220524220147.GA1298892@nathanxps13
Author: Atsushi Torikoshi
Reviewed-by: Nathan Bossart
Discussion: https://postgr.es/m/d00ddbf29f9d09b3a471e64977560de1@oss.nttdata.com
These were of value only during development.
Reported by Justin Pryzby
Discussion: https://www.postgresql.org/message-id/20220519201254.GU19626%40telsasoft.com
ruleutils.c was coded to suppress the AS label for a SELECT output
expression if the column name is "?column?", which is the parser's
fallback if it can't think of something better. This is fine, and
avoids ugly clutter, so long as (1) nothing further up in the parse
tree relies on that column name or (2) the same fallback would be
assigned when the rule or view definition is reloaded. Unfortunately
(2) is far from certain, both because ruleutils.c might print the
expression in a different form from how it was originally written
and because FigureColname's rules might change in future releases.
So we shouldn't rely on that.
Detecting exactly whether there is any outer-level use of a SELECT
column name would be rather expensive. This patch takes the simpler
approach of just passing down a flag indicating whether there *could*
be any outer use; for example, the output column names of a SubLink
are not referenceable, and we also do not care about the names exposed
by the right-hand side of a setop. This is sufficient to suppress
unwanted clutter in all but one case in the regression tests. That
seems like reasonable evidence that it won't be too much in users'
faces, while still fixing the cases we need to fix.
Per bug #17486 from Nicolas Lutic. This issue is ancient, so
back-patch to all supported branches.
Discussion: https://postgr.es/m/17486-1ad6fd786728b8af@postgresql.org
Several places in the planner tried to clamp a double value to fit
in a "long" by doing
(long) Min(x, (double) LONG_MAX);
This is subtly incorrect, because it casts LONG_MAX to double and
potentially back again. If long is 64 bits then the double value
is inexact, and the platform might round it up to LONG_MAX+1
resulting in an overflow and an undesirably negative output.
While it's not hard to rewrite the expression into a safe form,
let's put it into a common function to reduce the risk of someone
doing it wrong in future.
In principle this is a bug fix, but since the problem could only
manifest with group count estimates exceeding 2^63, it seems unlikely
that anyone has actually hit this or will do so anytime soon. We're
fixing it mainly to satisfy fuzzer-type tools. That being the case,
a HEAD-only fix seems sufficient.
Andrey Lepikhov
Discussion: https://postgr.es/m/ebbc2efb-7ef9-bf2f-1ada-d6ec48f70e58@postgrespro.ru
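
A self-contained sketch of such a helper (the name is illustrative, not
necessarily the committed one). The key point is to do the comparison in
the double domain and never round-trip LONG_MAX through double and back:

    #include <limits.h>
    #include <math.h>

    static long
    clamp_double_to_long(double x)
    {
        if (isnan(x) || x >= (double) LONG_MAX)
            return LONG_MAX;    /* handles the cast of LONG_MAX rounding up */
        if (x <= (double) LONG_MIN)
            return LONG_MIN;
        /* any remaining double truncates to a representable long */
        return (long) x;
    }

For example, clamp_double_to_long(1e30) yields LONG_MAX instead of the
negative value an overflowing cast could produce.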
When an implicit operator family is created, it wasn't getting reported.
Make it do so.
This has always been missing. Backpatch to 10.
Author: Masahiko Sawada <sawada.mshk@gmail.com>
Reported-by: Leslie LEMAIRE <leslie.lemaire@developpement-durable.gouv.fr>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/f74d69e151b22171e8829551b1159e77@developpement-durable.gouv.fr
This is a slight, convenient semantics change from what commit
0f0cfb494004 ("Fix parallel operations that prevent oldest xmin from
advancing") introduced that lets us simplify the coding in the one place
where it is used.
Backpatch to 13. This is related to commit 6fea65508a1a ("Tighten
ComputeXidHorizons' handling of walsenders") rewriting the code site
where this is used, which has not yet been backpatched, but it may well
be in the future.
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/202204191637.eldwa2exvguw@alvherre.pgsql
Commits 923def9a53 and 52e4f0cd47 allowed specifying column lists and row
filters for publication tables. This commit extends the
pg_publication_tables view and pg_get_publication_tables function to
display that information.
This information will be useful to users and we also need this for the
later commit that prohibits combining multiple publications with different
column lists for the same table.
Author: Hou Zhijie
Reviewed By: Amit Kapila, Alvaro Herrera, Shi Yu, Takamichi Osumi
Discussion: https://postgr.es/m/202204251548.mudq7jbqnh7r@alvherre.pgsql
An 'else' clause was misplaced in commit 598ac10be1c2, making zero-rows
output look a bit silly. Add a test case for it.
Pointed out by Tom Lane.
Discussion: https://postgr.es/m/21030.1652893083@sss.pgh.pa.us
We weren't checking the length of the column list in the alias clause of
an XMLTABLE or JSON_TABLE function (a "tablefunc" RTE), and it was
possible to make the server crash by passing an overly long one. Fix it
by throwing an error in that case, like the other places that deal with
alias lists.
In passing, modify the equivalent test used for join RTEs to look like
the other ones, which was different for no apparent reason.
This bug came in when XMLTABLE was born in version 10; backpatch to all
stable versions.
Reported-by: Wang Ke <krking@zju.edu.cn>
Discussion: https://postgr.es/m/17480-1c9d73565bb28e90@postgresql.org
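
A rough sketch of the added validation (error wording and variable names
are illustrative only):

    /* reject alias lists longer than the function's column list */
    if (list_length(alias->colnames) > list_length(tf->colnames))
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_COLUMN_REFERENCE),
                 errmsg("too many column aliases specified for table function")));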
We can use a single line to print all tuple counts that MERGE processed,
for conciseness, and elide those that are zeroes. Non-text formats
report all numbers, as is typical.
Per comment from Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20220511163350.GL19626@telsasoft.com
s/primary_slotname/primary_slot_name/.
Author: Bharath Rupireddy
Discussion: https://postgr.es/m/CALj2ACX3=pHkCpoGG-z+O=7Gp5YZv70jmfTyGnNV7YF3SkK73g@mail.gmail.com
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: dde45df385dab9032155c1f867b677d55695310c
In order to estimate the cache hit ratio of a Memoize node, one of the
inputs we require is the estimated number of times the Memoize node will
be rescanned. The higher this number, the larger the cache hit ratio is
likely to become. Unfortunately, the value being passed as the number of
"calls" to the Memoize was incorrectly using the Nested Loop's
outer_path->parent->rows instead of outer_path->rows. This failed to
account for the fact that the outer_path might be parameterized by some
upper-level Nested Loop.
This problem could lead to Memoize plans appearing more favorable than
they might actually be. It could also lead to extended executor startup
times when work_mem values were large, because the planner would set an
overly large MemoizePath->est_entries, resulting in the Memoize hash table
initially being made much larger than required.
Fix this simply by passing outer_path->rows rather than
outer_path->parent->rows. Also, adjust the expected regression test
output for a plan change.
Reported-by: Pavel Stehule
Author: David Rowley
Discussion: https://postgr.es/m/CAFj8pRAMp%3DQsMi6sPQJ4W3hczoFJRvyXHJV3AZAZaMyTVM312Q%40mail.gmail.com
Backpatch-through: 14, where Memoize was introduced
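
In effect, the estimated rescan count handed to the Memoize costing changes
like this (a sketch, not the literal diff):

    /* expected number of rescans of the Memoize node */
    calls = outer_path->rows;       /* was: outer_path->parent->rows */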
pgstat_create_function() creates stats for a function in a transactional
fashion, so the stats would be dropped if the transaction creating the
function is aborted rather than committed.
Author: Amul Sul
Discussion: https://postgr.es/m/CAAJ_b97x1T3xgAMWNj4w7kSgN0nTuG-vLrQJ4NB-dsNr0Kudxw@mail.gmail.com
Like commit c9d297751959.
I started out with the intention to rename value_type to item_type to
avoid a collision with a typedef name that appears on some platforms.
Along the way, I noticed that the adjacent field "format" was not being
correctly handled by the backend/nodes/ infrastructure functions:
copyfuncs.c erroneously treated it as a scalar, while equalfuncs,
outfuncs, and readfuncs omitted handling it at all. This looks like
it might be cosmetic at the moment because the field is always NULL
after parse analysis; but that's likely a bug in itself, and the code's
certainly not very future-proof. Let's fix it while we can still do so
without forcing an initdb on beta testers.
Further study found a few other inconsistencies in the backend/nodes/
infrastructure for the recently-added JSON node types, so fix those too.
catversion bumped because of potential change in stored rules.
Discussion: https://postgr.es/m/526703.1652385613@sss.pgh.pa.us
Currently, preloaded libraries are expected to request additional
shared memory and LWLocks in _PG_init(). However, it is not unusual
for such requests to depend on MaxBackends, which won't be
initialized at that time. Such requests could also depend on GUCs
that other modules might change. This introduces a new hook where
modules can safely use MaxBackends and GUCs to request additional
shared memory and LWLocks.
Furthermore, this change restricts requests for shared memory and
LWLocks to this hook. Previously, libraries could make requests
until the size of the main shared memory segment was calculated.
Unlike before, we no longer silently ignore requests received at
invalid times. Instead, the server now raises a FATAL error if someone tries to request
additional shared memory or LWLocks outside of the hook.
Nathan Bossart and Julien Rouhaud
Discussion: https://postgr.es/m/20220412210112.GA2065815%40nathanxps13
Discussion: https://postgr.es/m/Yn2jE/lmDhKtkUdr@paquier.xyz
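
A sketch of how a preloaded extension might use the new hook (extension,
function and tranche names are illustrative; the exact set of headers to
include is an assumption):

    #include "postgres.h"

    #include "fmgr.h"
    #include "miscadmin.h"
    #include "storage/ipc.h"
    #include "storage/lwlock.h"
    #include "storage/shmem.h"

    PG_MODULE_MAGIC;

    void        _PG_init(void);

    static shmem_request_hook_type prev_shmem_request_hook = NULL;

    static void
    my_shmem_request(void)
    {
        if (prev_shmem_request_hook)
            prev_shmem_request_hook();

        /* MaxBackends is valid by the time this hook runs */
        RequestAddinShmemSpace(mul_size(MaxBackends, sizeof(int)));
        RequestNamedLWLockTranche("my_extension", 1);
    }

    void
    _PG_init(void)
    {
        if (!process_shared_preload_libraries_in_progress)
            return;

        /* register the requests via the hook, not in _PG_init() itself */
        prev_shmem_request_hook = shmem_request_hook;
        shmem_request_hook = my_shmem_request;
    }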
In the style of pgindent, done semi-manually.
Discussion: https://www.postgresql.org/message-id/flat/7d062ecc-7444-23ec-a159-acd8adf9b586%40enterprisedb.com
pgstat_report_stat() is only supposed to be called outside of transactions. In
5891c7a8ed8 I added a pgstat_report_stat() call into LogicalRepApplyLoop()'s
timeout branch. While not commonly reached inside a transaction, it is
reachable (e.g. due to network bottlenecks or the sender being stalled / slow
for some reason).
To fix, add a !IsTransactionState() check.
No test added because there's no easy way to reproduce this case without
patching the code.
Reported-By: Erik Rijkers <er@xs4all.nl>
Discussion: https://postgr.es/m/b3463b8c-2328-dcac-0136-af95715493c1@xs4all.nl
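
The guard amounts to something like this (a sketch; the force flag used in
the actual call is an assumption):

    /* in LogicalRepApplyLoop()'s timeout branch */
    if (!IsTransactionState())
        pgstat_report_stat(false);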
The declaration is automatically generated. Noticed when experimenting with
adding PGDLLIMPORT markers for functions.
Discussion: https://postgr.es/m/20220512164513.vaheofqp2q24l65r@alap3.anarazel.de
A few places used binary_upgrade_* variables without including the header,
which worked without warnings because the variables are defined in those
places. However that can cause linker complaints with MSVC - except that we
don't see them right now, due to the use of a symbol export file.
Discussion: https://postgr.es/m/20220512164513.vaheofqp2q24l65r@alap3.anarazel.de
Noticed when comparing the set of exported symbols without / with
-fvisibility=hidden after adding PGDLLIMPORT to intentionally exported
symbols.
Discussion: https://postgr.es/m/20220512164513.vaheofqp2q24l65r@alap3.anarazel.de
Run pgindent, pgperltidy, and reformat-dat-files.
I manually fixed a couple of comments that pgindent uglified.
Discussion: https://postgr.es/m/20220506234924.6mxxotl3xl63db3l@alap3.anarazel.de
This follows in the footsteps of commit 2591ee8ec by removing one more
ill-advised shortcut from planning of GroupingFuncs. It's true that
we don't intend to execute the argument expression(s) at runtime, but
we still have to process any Vars appearing within them, or we risk
failure at setrefs.c time (or more fundamentally, in EXPLAIN trying
to print such an expression). Vars in upper plan nodes have to have
referents in the next plan level, whether we ever execute 'em or not.
Per bug #17479 from Michael J. Sullivan. Back-patch to all supported
branches.
Richard Guo
Discussion: https://postgr.es/m/17479-6260deceaf0ad304@postgresql.org
The code for unloading a library has been commented out for over 12
years, ever since commit 602a9ef5a7c60151e10293ae3c4bb3fbb0132d03, and we're
no closer to supporting it now than we were back then.
Nathan Bossart, reviewed by Michael Paquier and by me.
Discussion: http://postgr.es/m/Ynsc9bRL1caUSBSE@paquier.xyz
This fixes the grammar of some comments in a couple of tests (SQL and
TAP), and in some C files.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20220511020334.GH19626@telsasoft.com
To enable diagnosis of systems that are not processing ProcSignalBarrier
requests promptly, add a LOG message every 5 seconds if we seem to be
wedged. Although you could already see this state as a wait event in
pg_stat_activity, the log message also shows the PID of the process that
is preventing progress.
Also add DEBUG1 logging around the whole wait loop.
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/CA%2BTgmoYJ03r5359gQutRGP9BtigYCg3_UskcmnVjBf-QO3-0pQ%40mail.gmail.com
The problem is that we don't send keep-alive messages for a long time
while processing large transactions during logical replication where we
don't send any data of such transactions. This can happen when the table
modified in the transaction is not published or because all the changes
got filtered. We do try to send a keepalive if necessary at the end of
the transaction (via WalSndWriteData()), but by that time the subscriber
side can time out and exit.
To fix this, we try to send the keepalive message, if required, after
processing a certain threshold of changes.
Reported-by: Fabrice Chapuis
Author: Wang wei and Amit Kapila
Reviewed By: Masahiko Sawada, Euler Taveira, Hou Zhijie, Hayato Kuroda
Backpatch-through: 10
Discussion: https://postgr.es/m/CAA5-nLARN7-3SLU_QUxfy510pmrYK6JJb=bk3hcgemAM_pAv+w@mail.gmail.com
Presently, the server may emit a variety of log messages when inspecting
a runtime-computed GUC, mostly in the shape of one LOG message with the
default configuration, related to the startup sequence that has to be
launched because such GUCs require loading the control file and external
shared libraries.
For example, the server will always emit a "database system is shut
down" LOG (unless the user has set log_min_messages higher than LOG),
which is an annoying behavior as "postgres -C" is expected to only emit
in its output the parameter value we are looking for. The parameter
value is sent to stdout, while the logs are sent to stderr so we could
recommend to use a redirection, but there was not much love for this
workaround either.
To avoid such extra log messages, per discussion, this change sets
log_min_messages to FATAL internally when -C is used on a
runtime-computed GUC (even if set to PANIC in postgresql.conf). At
FATAL, the user will still receive messages explaining why a GUC value
cannot be inspected, and will know if the command is attempted on a
server already running, something not supported yet for a
runtime-computed GUC.
Reported-by: Magnus Hagander, Bruce Momjian
Author: Nathan Bossart, Michael Paquier
Discussion: https://postgr.es/m/Yni6ZHkGotUU+RSf@paquier.xyz
697492434 added 3 new quicksort specialization functions for common
datatypes.
That commit was not very consistent in how it would determine if we're
compiling for 32-bit or 64-bit machines. It would sometimes use
USE_FLOAT8_BYVAL and at other times check if SIZEOF_DATUM == 8. This
could cause theoretical problems due to the way USE_FLOAT8_BYVAL is now
defined based on SIZEOF_VOID_P >= 8. If pointers for some reason were
ever larger than 8 bytes then we'd end up doing 32-bit comparisons
mistakenly. Let's just always check SIZEOF_DATUM >= 8.
It also seems that ssup_datum_signed_cmp is just never used on 32-bit
builds, so let's just ifdef that out to make sure we never accidentally
use that comparison function on such machines. This also allows us to
ifdef out 1 of the 3 new specialization quicksort functions in 32-bit
builds which seems to shrink down the binary by over 4KB on my machine.
In passing, also add the missing DatumGetInt32() / DatumGetInt64() macros
in the comparison functions.
Discussion: https://postgr.es/m/CAApHDvqcQExRhtRa9hJrJB_5egs3SUfOcutP3m+3HO8A+fZTPA@mail.gmail.com
Reviewed-by: John Naylor
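
A sketch of the resulting pattern, with the comparator compiled only for
64-bit Datums and fetching its arguments through the proper DatumGet macro
(details may differ from the committed code):

    #if SIZEOF_DATUM >= 8
    static int
    ssup_datum_signed_cmp(Datum x, Datum y, SortSupport ssup)
    {
        int64       xx = DatumGetInt64(x);
        int64       yy = DatumGetInt64(y);

        if (xx < yy)
            return -1;
        else if (xx > yy)
            return 1;
        else
            return 0;
    }
    #endif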