Rewrite array_in() and its subroutines so that we make only one
pass over the input text, rather than two. This requires
potentially re-pallocing the working arrays values[] and nulls[]
larger than our initial guess, but that cost will hopefully be made
up by avoiding duplicate parsing. In any case this coding seems
much clearer and more straightforward than what we had before.
This also fixes array_in() to reject non-rectangular input (that is,
different brace depths in different parts of the input) more reliably
than before, and to give a better error message when it does so.
This is analogous to the plpython and plperl fixes in 0553528e7 and
f47004add. Like those PLs, we now accept input such as '{{},{}}'
as a valid representation of an empty array, which we did not before.
Additionally, reject explicit array subscripts that are outside the
integer range (previously you just got whatever atoi() converted
them to), and make some other minor improvements in error reporting.
Although this is arguably a bug fix, it's also a behavioral change
that might trip somebody up, so no back-patch.
Tom Lane, Heikki Linnakangas, and Jian He. Thanks to Alexander Lakhin
for the initial report and for review/testing.
Discussion: https://postgr.es/m/2794005.1683042087@sss.pgh.pa.us
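
For illustration, a standalone sketch (not the actual array_in() code) of
parsing an explicit array bound with strtol() and rejecting out-of-range
values instead of accepting whatever atoi() returns:

    #include <errno.h>
    #include <limits.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Parse one explicit array subscript, rejecting values outside int range. */
    static bool
    parse_subscript(const char *str, int *result, const char **end)
    {
        char   *endptr;
        long    val;

        errno = 0;
        val = strtol(str, &endptr, 10);
        if (endptr == str)
            return false;               /* no digits at all */
        if (errno == ERANGE || val < INT_MIN || val > INT_MAX)
            return false;               /* outside the integer range */
        *result = (int) val;
        *end = endptr;
        return true;
    }

    int
    main(void)
    {
        int         bound;
        const char *rest;

        if (!parse_subscript("9999999999", &bound, &rest))
            printf("array bound is out of integer range\n");
        return 0;
    }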
Also clarify some other error wording.
Reported-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20220802.133046.1941977979333284049.horikyota.ntt@gmail.com
Backpatch-through: master
It's clearly stated in the comments that ginFindParents() must keep
the pin on the index's root page that's associated with the topmost
GinBtreeStack item. However, the code path for the case that the
desired downlink has been pushed down to the next index level
ignored this proviso, and would release the pin anyway if we were
still examining the root level. That led to an assertion failure
or a "buffer NNNN is not owned by resource owner" error later, when
we tried to release the pin again at the end of the insertion.
This is quite hard to reproduce, since it can only happen if an
index root page split occurs concurrently with our own insertion.
Thanks to Jeff Janes for finding a test case that triggers it
often enough to allow investigation.
This has been there since the beginning of GIN, so back-patch
to all supported branches.
Discussion: https://postgr.es/m/CAMkU=1yCAKtv86dMrD__Ja-7KzjE=uMeKX8y__cx5W-OEWy2ow@mail.gmail.com
Commit b7eda3e0e moved XidInMVCCSnapshot() from tqual.c into snapmgr.c,
but follow-up commit c91560def incorrectly updated this reference. We
could fix it, but as pointed out by Daniel Gustafsson, 1) the reader can
easily find the file that contains the definition of that function, e.g.
by grepping, and 2) this kind of reference is prone to going stale; so
let's just remove it.
Back-patch to all supported branches.
Reviewed by Daniel Gustafsson.
Discussion: https://postgr.es/m/CAPmGK145VdKkPBLWS2urwhgsfidbSexwY-9zCL6xSUJH%2BBTUUg%40mail.gmail.com
Commit 00d7fb5e2e started to use REGBUF_NO_CHANGE at a few places in the
code where we register the buffer before marking it dirty, but missed
updating one of the code flows in the hash index where we free an overflow
page that has no live tuples on it.
Author: Amit Kapila and Hayato Kuroda
Discussion: http://postgr.es/m/f045c8f7-ee24-ead6-3679-c04a43d21351@gmail.com
sendFileWithContent() previously got the content length by using
strlen(), assuming that the content given is always a string. Some
patches are under discussion to pass binary contents to a base backup
stream, where an arbitrary length needs to be given by the caller
instead.
The patch extends sendFileWithContent() with a length argument so that it
can handle this case: a negative len means that the content is a string
whose length is determined with strlen(), while a non-negative len gives
the content length directly.
A comment in sendFileWithContent() mentioned the backup_label file.
However, this routine is used by more file types, like the tablespace
map, so adjust it in passing.
Author: David Steele
Discussion: https://postgr.es/m/2daf8adc-8db7-4204-a7f2-a7e94e2bfa4b@pgmasters.net
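
For illustration, a minimal standalone sketch of the length convention
described above (assuming that a negative length means "NUL-terminated
string, use strlen()"; the real sendFileWithContent() lives in
basebackup.c):

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical stand-in for sendFileWithContent()'s length handling. */
    static void
    send_content(const char *filename, const char *content, int len)
    {
        if (len < 0)
            len = (int) strlen(content);    /* text content, length from strlen() */
        printf("%s: sending %d bytes\n", filename, len);
    }

    int
    main(void)
    {
        send_content("backup_label", "START WAL LOCATION: ...\n", -1);
        send_content("binary.blob", "\0\1\2\3", 4);    /* arbitrary binary length */
        return 0;
    }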
Currently, pg_stat_reset_shared() can use an argument to specify the
target of statistics to reset, doing nothing for NULL as it is strict.
This patch changes pg_stat_reset_shared() so that it resets all the stats
types already handled by this function, rather than doing nothing, when
the argument is NULL or when no argument is given (proisstrict is switched
to false). As before, SLRUs are not included in what gets reset.
The idea to use NULL or no argument to control if all the shared stats
already covered by this function should be reset has been proposed by
Andres Freund.
Bump catalog version.
Author: Atsushi Torikoshi
Reviewed-by: Kyotaro Horiguchi, Michael Paquier, Bharath Rupireddy,
Matthias van de Meent
Discussion: https://postgr.es/m/4291a55137ddda77cf7cc5f46e846daf@oss.nttdata.com
It seems that a PHV evaluated/needed at or below the self join should not have
a problem if we remove the self join. But this requires further investigation.
For now, we just do not remove self joins if the rel to be removed is laterally
referenced by PHVs.
Discussion: https://postgr.es/m/CAMbWs4-ns73VF9gi37q61G3dS6Xuos+HtryMaBh37WQn=BsaJw@mail.gmail.com
Author: Richard Guo
We don't want existing slots in the old cluster to get invalidated during
the upgrade. During an upgrade, we set this variable to -1 via the command
line in an attempt to prevent such invalidations, but users have ways to
override it. This patch ensures the value is not overridden by the user.
Author: Kyotaro Horiguchi
Reviewed-by: Peter Smith, Alvaro Herrera
Discussion: http://postgr.es/m/20231027.115759.2206827438943188717.horikyota.ntt@gmail.com
We can simplify FieldSelect on a whole-row Var into a plain Var
for the selected field. However, we should copy the whole-row Var's
varnullingrels when we do so, because the new Var is clearly nullable
by exactly the same rels as the original. Failure to do this led to
errors like "wrong varnullingrels (b) (expected (b 3)) for Var 2/2".
Richard Guo, per bug #18184 from Marian Krucina. Back-patch to
v16 where varnullingrels was introduced.
Discussion: https://postgr.es/m/18184-5868dd258782058e@postgresql.org
Add missing replacement of relids in phv->phexpr. Also, remove extra
replace_relid() over phv->phrels.
Reported-by: Zuming Jiang
Bug: #18187
Discussion: https://postgr.es/m/flat/18187-831da249cbd2ff8e%40postgresql.org
Author: Richard Guo
Reviewed-by: Andrei Lepikhov
When casting an interval to a time, the original code suffered from
64-bit integer overflow for inputs with a sufficiently large negative
"time" field, leading to bogus results.
Fix by rewriting the algorithm in a simpler form, that more obviously
cannot overflow. While at it, improve the test coverage to include
negative interval inputs.
Discussion: https://postgr.es/m/CAEZATCXoUKHkcuq4q63hkiPsKZJd0kZWzgKtU%2BNT0aU4wbf_Pw%40mail.gmail.com
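
A standalone sketch of the simpler, overflow-proof reduction described
above (illustrative only, not the actual interval-to-time code):

    #include <stdint.h>
    #include <stdio.h>

    #define USECS_PER_DAY INT64_C(86400000000)

    /* Reduce an interval's time field (in microseconds) to a time of day. */
    static int64_t
    time_of_day(int64_t interval_usecs)
    {
        int64_t result = interval_usecs % USECS_PER_DAY;

        /* C's % can return a negative remainder; wrap into [0, USECS_PER_DAY) */
        if (result < 0)
            result += USECS_PER_DAY;
        return result;
    }

    int
    main(void)
    {
        /* a large negative time value of the kind that used to overflow */
        printf("%lld\n", (long long) time_of_day(INT64_MIN + 1));
        return 0;
    }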
When executing a MERGE UPDATE action, if the UPDATE is turned into a
cross-partition DELETE then INSERT, do not attempt to invoke AFTER
UPDATE ROW triggers, or any of the other post-update actions in
ExecUpdateEpilogue().
For consistency with a plain UPDATE command, such triggers should not
be fired (and would typically fail anyway); similarly, other post-update
actions, such as WCO/RLS checks, should not be executed, and might also
lead to unexpected failures.
Therefore, as with ExecUpdate(), make ExecMergeMatched() return
immediately if ExecUpdateAct() reports that a cross-partition update
was done, to be sure that no further processing is done for that
tuple.
Back-patch to v15, where MERGE was introduced.
Discussion: https://postgr.es/m/CAEZATCWjBgagyNZs02vgDF0DvASYj-iHTFtXG2-nP3orZhmtcw%40mail.gmail.com
We seem to have accidentally used "insure" in a few places. Correct
that.
Author: Peter Smith
Discussion: https://postgr.es/m/CAHut+Pv0biqrhA3pMhu40aDsj343mTsD75khKnHsLqR8P04f=Q@mail.gmail.com
Backpatch-through: 12, oldest supported version
When the hash table is in use, ResourceOwnerSort() moves any elements
from the small fixed-size array to the hash table, and sorts it. When
the hash table is not in use, it sorts the elements in the small
fixed-size array directly. However, ResourceOwnerSort() and
ResourceOwnerReleaseAll() had different ideas about when the hash table is
in use: ResourceOwnerSort() checked owner->nhash != 0, and
ResourceOwnerReleaseAll() checked owner->hash != NULL. If the hash
table was allocated but was currently empty, you hit an assertion
failure.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://www.postgresql.org/message-id/be58d565-9e95-d417-4e47-f6bd408dea4b@gmail.com
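
One way to avoid this class of bug is to keep a single "is the hash table
in use?" test that both the sort and release paths call, so they cannot
disagree. A standalone sketch (names are illustrative, not the real
ResourceOwner fields):

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct
    {
        void  **hash;       /* NULL until a hash table has been allocated */
        int     nhash;      /* number of entries currently stored in it */
    } OwnerSketch;

    /* Single authoritative test, shared by the sort and release-all paths. */
    static bool
    hash_in_use(const OwnerSketch *owner)
    {
        return owner->hash != NULL;
    }

    int
    main(void)
    {
        OwnerSketch owner = {NULL, 0};

        owner.hash = calloc(16, sizeof(void *));    /* allocated but still empty */
        printf("hash in use: %s\n", hash_in_use(&owner) ? "yes" : "no");
        free(owner.hash);
        return 0;
    }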
Commit b0e96f311985 introduced a bunch of recursive functions, but
failed to make them check for stack depth. This can cause the backend
to crash when operating on inheritance hierarchies several thousands
deep. Protect the code by adding the missing stack depth checks.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/b2ac2392-9727-5f76-e890-721ac80c1615@gmail.com
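
As a standalone illustration of the technique (PostgreSQL itself uses
check_stack_depth() rather than the explicit depth counter shown here):

    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_DEPTH 1000      /* illustrative limit */

    static void
    walk_inheritance_tree(int node, int depth)
    {
        if (depth > MAX_DEPTH)
        {
            fprintf(stderr, "inheritance hierarchy is too deep\n");
            exit(EXIT_FAILURE);
        }
        /* ... process this level, then recurse into each child ... */
        if (node > 0)
            walk_inheritance_tree(node - 1, depth + 1);
    }

    int
    main(void)
    {
        walk_inheritance_tree(5000, 0);     /* trips the guard instead of crashing */
        return 0;
    }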
This buys back some of the performance loss that we otherwise saw from the
previous commit.
Reviewed-by: Aleksander Alekseev, Michael Paquier, Julien Rouhaud
Reviewed-by: Kyotaro Horiguchi, Hayato Kuroda, Álvaro Herrera, Zhihong Yu
Reviewed-by: Peter Eisentraut, Andres Freund
Discussion: https://www.postgresql.org/message-id/d746cead-a1ef-7efe-fb47-933311e876a3%40iki.fi
Instead of having a separate array/hash for each resource kind, use a
single array and hash to hold all kinds of resources. This makes it
possible to introduce new resource "kinds" without having to modify
the ResourceOwnerData struct. In particular, this makes it possible
for extensions to register custom resource kinds.
The old approach was to have a small array of resources of each kind,
and if it fills up, switch to a hash table. The new approach also uses
an array and a hash, but now the array and the hash are used at the
same time. The array is used to hold the recently added resources, and
when it fills up, they are moved to the hash. This keeps the access to
recent entries fast, even when there are a lot of long-held resources.
All the resource-specific ResourceOwnerEnlarge*(),
ResourceOwnerRemember*(), and ResourceOwnerForget*() functions have
been replaced with three generic functions that take resource kind as
argument. For convenience, we still define resource-specific wrapper
macros around the generic functions with the old names, but they are
now defined in the source files that use those resource kinds.
The release callback no longer needs to call ResourceOwnerForget on
the resource being released. ResourceOwnerRelease unregisters the
resource from the owner before calling the callback. That needed some
changes in bufmgr.c and some other files, where releasing the
resources previously always called ResourceOwnerForget.
Each resource kind specifies a release priority, and
ResourceOwnerReleaseAll releases the resources in priority order. To
make that possible, we have to restrict what you can do between
phases. After calling ResourceOwnerRelease(), you are no longer
allowed to remember any more resources in it or to forget any
previously remembered resources by calling ResourceOwnerForget. There
was one case where that was done previously. At subtransaction commit,
AtEOSubXact_Inval() would handle the invalidation messages and call
RelationFlushRelation(), which temporarily increased the reference
count on the relation being flushed. We now switch to the parent
subtransaction's resource owner before calling AtEOSubXact_Inval(), so
that there is a valid ResourceOwner to temporarily hold that relcache
reference.
Other end-of-xact routines make similar calls to AtEOXact_Inval()
between release phases, but I didn't see any regression test failures
from those, so I'm not sure if they could reach a codepath that needs
remembering extra resources.
There were two exceptions to how the resource leak WARNINGs on commit
were printed previously: llvmjit silently released the context without
printing the warning, and a leaked buffer io triggered a PANIC. Now
everything prints a WARNING, including those cases.
Add tests in src/test/modules/test_resowner.
Reviewed-by: Aleksander Alekseev, Michael Paquier, Julien Rouhaud
Reviewed-by: Kyotaro Horiguchi, Hayato Kuroda, Álvaro Herrera, Zhihong Yu
Reviewed-by: Peter Eisentraut, Andres Freund
Discussion: https://www.postgresql.org/message-id/cbfabeb0-cd3c-e951-a572-19b365ed314d%40iki.fi
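
A compact, self-contained sketch of the "generic resource kind" idea
described above: one remember/release path driven by a per-kind descriptor
carrying a release callback and priority. All names below are illustrative;
this is not the real ResourceOwner API:

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct
    {
        const char *name;
        int         release_priority;           /* lower priority releases first */
        void      (*release) (void *res);       /* kind-specific cleanup */
    } ResourceKind;

    typedef struct
    {
        const ResourceKind *kind;
        void               *res;
    } OwnedResource;

    static OwnedResource owned[64];
    static int  nowned = 0;

    static void
    remember_resource(const ResourceKind *kind, void *res)
    {
        owned[nowned].kind = kind;
        owned[nowned].res = res;
        nowned++;
    }

    static int
    cmp_priority(const void *a, const void *b)
    {
        const OwnedResource *ra = a;
        const OwnedResource *rb = b;

        return ra->kind->release_priority - rb->kind->release_priority;
    }

    static void
    release_all(void)
    {
        qsort(owned, nowned, sizeof(OwnedResource), cmp_priority);
        for (int i = 0; i < nowned; i++)
            owned[i].kind->release(owned[i].res);
        nowned = 0;
    }

    static void
    release_buffer(void *res)
    {
        printf("releasing buffer %ld\n", (long) res);
    }

    static const ResourceKind buffer_kind = {"buffer", 10, release_buffer};

    int
    main(void)
    {
        remember_resource(&buffer_kind, (void *) 42L);
        release_all();
        return 0;
    }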
These are functions where a lot of things happen between the
ResourceOwnerEnlarge and ResourceOwnerRemember calls. It's important
that there are no unrelated ResourceOwnerRemember calls in the code in
between; otherwise the reserved entry might be used up by the
intervening ResourceOwnerRemember and not be available at the intended
ResourceOwnerRemember call anymore. I don't see any bugs here, but the
longer the code path between the calls is, the harder it is to verify.
In bufmgr.c, there is a function similar to ResourceOwnerEnlarge,
ReservePrivateRefCountEntry(), to ensure that the private refcount
array has enough space. The ReservePrivateRefCountEntry() calls were
made at different places than the ResourceOwnerEnlargeBuffers()
calls. Move the ResourceOwnerEnlargeBuffers() and
ReservePrivateRefCountEntry() calls together for consistency.
Reviewed-by: Aleksander Alekseev, Michael Paquier, Julien Rouhaud
Reviewed-by: Kyotaro Horiguchi, Hayato Kuroda, Álvaro Herrera, Zhihong Yu
Reviewed-by: Peter Eisentraut, Andres Freund
Discussion: https://www.postgresql.org/message-id/cbfabeb0-cd3c-e951-a572-19b365ed314d%40iki.fi
false_positive_rate is a parameter that can be set with the bloom
opclass in BRIN, and setting it to a value of exactly 0.25 would trigger
an assertion in the first INSERT done on an index with this value set.
The assertion changed here relied on BLOOM_{MIN|MAX}_FALSE_POSITIVE_RATE,
which are somewhat arbitrary values, and specifying an out-of-range value
would also trigger a failure when defining such an index. So, as-is,
the assertion merely duplicated the min-max check of the reloption.
It is now relaxed to check that the value is a correct percentage value
instead, based on a suggestion by Tom Lane.
Author: Alexander Lakhin
Reviewed-by: Tom Lane, Shihao Zhong
Discussion: https://postgr.es/m/17969-a6c54de48026d694@postgresql.org
Backpatch-through: 14
I added it by mistake in commit 7103ebb7aae8. To clean up, struct
MergeAction needs to be moved to primnodes.h from parsenodes.h. (This
forces us to also move OverridingKind to primnodes.h).
Having to add parsenodes.h to bootstrap.h as fallout is a bit
surprising, since nothing nominally needs it there. However, per
comments in bootscanner.l, it is needed so that YYSTYPE can be declared.
I think this only started with commit dac048f71ebb, but I didn't
actually verify that.
In passing, stop including parsenodes.h in tcopprot.h. Nothing needs it
there.
Per discussion on a patch by Ashutosh Bapat.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/202311071106.6y7b2ascqjlz@alvherre.pgsql
This file separates public and static functions with a separator
comment, but two routines were not defined in a location reflecting
that, so reorder them.
Author: Aleksander Alekseev
Reviewed-by: Álvaro Herrera, Michael Paquier
Discussion: https://postgr.es/m/CAJ7c6TMX2dd0g91UKvcC+CVygKQYJkKJq1+ZzT4rOK42+b53=w@mail.gmail.com
f0efa5aec introduced the concept of "read-only" StringInfos which makes
use of an existing, possibly not NUL terminated, buffer.
Here we adjust two places that make use of StringInfos to receive data
to avoid using appendBinaryStringInfo() in cases where a NUL termination
character is not required. This saves a possible palloc() and saves
having to needlessly memcpy() from one buffer to another.
Neither of these cases seems particularly performance-critical. In the
case of XLogWalRcvProcessMsg(), the appendBinaryStringInfo() was only
appending 24 bytes. The change made here does mean that we can get rid
of the incoming_message global variable and make that local instead.
The apply_spooled_messages() case applies in logical decoding when
applying (possibly large) changes which have been serialized to a file.
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/CAApHDvoxYUDHwqPf-ShvchsERf1RzmkGoLwg63JNvHCkDCuyKQ@mail.gmail.com
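
For illustration, a standalone sketch of the "point at an existing buffer
instead of copying it" pattern (field and function names here are made up;
the real mechanism is the read-only StringInfo support added by f0efa5aec):

    #include <stdio.h>
    #include <string.h>

    typedef struct
    {
        const char *data;       /* existing buffer, possibly not NUL terminated */
        int         len;
        int         cursor;     /* read position used by consumers */
    } ReadOnlyBuf;

    static void
    init_readonly(ReadOnlyBuf *buf, const char *data, int len)
    {
        buf->data = data;       /* no palloc(), no memcpy() */
        buf->len = len;
        buf->cursor = 0;
    }

    int
    main(void)
    {
        char        wal_message[24] = "replication payload....";
        ReadOnlyBuf buf;

        init_readonly(&buf, wal_message, (int) sizeof(wal_message));
        printf("wrapped %d bytes without copying\n", buf.len);
        return 0;
    }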
array_set_element() and related functions allow an array to be
enlarged by assigning to subscripts outside the current array bounds.
While these places were careful to check that the new bounds are
allowable, they neglected to consider the risk of integer overflow
in computing the new bounds. In edge cases, we could compute new
bounds that are invalid but get past the subsequent checks,
allowing bad things to happen. Memory stomps that are potentially
exploitable for arbitrary code execution are possible, and so is
disclosure of server memory.
To fix, perform the hazardous computations using overflow-detecting
arithmetic routines, which fortunately exist in all still-supported
branches.
The test cases added for this generate (after patching) errors that
mention the value of MaxArraySize, which is platform-dependent.
Rather than introduce multiple expected-files, use psql's VERBOSITY
parameter to suppress the printing of the message text. v11 psql
lacks that parameter, so omit the tests in that branch.
Our thanks to Pedro Gallegos for reporting this problem.
Security: CVE-2023-5869
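
A minimal sketch of the overflow-detecting style of bounds arithmetic
(using the GCC/Clang builtins here; PostgreSQL's own helpers are
pg_add_s32_overflow() and friends in src/include/common/int.h):

    #include <stdbool.h>
    #include <stdio.h>

    /* Compute newdim = subscript - lower_bound + 1, refusing int overflow. */
    static bool
    compute_new_dim(int lower_bound, int subscript, int *newdim)
    {
        int     span;

        if (__builtin_sub_overflow(subscript, lower_bound, &span) ||
            __builtin_add_overflow(span, 1, newdim))
            return false;
        return true;
    }

    int
    main(void)
    {
        int     newdim;

        if (!compute_new_dim(-2000000000, 2000000000, &newdim))
            printf("array upper bound is too large\n");
        return 0;
    }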
transformAggregateCall() captures the datatypes of the aggregate's
arguments immediately to construct the Aggref.aggargtypes list.
This seems reasonable because the arguments have already been
transformed --- but there is an edge case where they haven't been.
Specifically, if we have an unknown-type literal in an ANY argument
position, nothing will have been done with it earlier. But if we
also have DISTINCT, then addTargetToGroupList() converts the literal
to "text" type, resulting in the aggargtypes list not matching the
actual runtime type of the argument. The end result is that the
aggregate tries to interpret a "text" value as being of type
"unknown", that is a zero-terminated C string. If the text value
contains no zero bytes, this could result in disclosure of server
memory following the text literal value.
To fix, move the collection of the aggargtypes list to the end
of transformAggregateCall(), after DISTINCT has been handled.
This requires slightly more code, but not a great deal.
Our thanks to Jingzhou Fu for reporting this problem.
Security: CVE-2023-5868
A PostgreSQL release tarball contains a number of prebuilt files, in
particular files produced by bison, flex, and perl, as well as html and
man documentation. We have done this consistent with established
practice at the time to not require these tools for building from a
tarball. Some of these tools were hard to get, or get the right
version of, from time to time, and shipping the prebuilt output was a
convenience to users.
Now this has at least two problems:
One, we have to make the build system(s) work in two modes: Building
from a git checkout and building from a tarball. This is pretty
complicated, but it works so far for autoconf/make. It does not
currently work for meson; you can currently only build with meson from
a git checkout. Making meson builds work from a tarball seems very
difficult or impossible. One particular problem is that since meson
requires a separate build directory, we cannot make the build update
files like gram.h in the source tree. So if you were to build from a
tarball and update gram.y, you will have a gram.h in the source tree
and one in the build tree, but the way things work is that the
compiler will always use the one in the source tree. So you cannot,
for example, make any gram.y changes when building from a tarball.
This seems impossible to fix in a non-horrible way.
Second, there is increased interest nowadays in precisely tracking the
origin of software. We can reasonably track contributions into the
git tree, and users can reasonably track the path from a tarball to
packages and downloads and installs. But what happens between the git
tree and the tarball is obscure and in some cases non-reproducible.
The solution for both of these issues is to get rid of the step that
adds prebuilt files to the tarball. The tarball now only contains
what is in the git tree (*). Getting the additional build
dependencies is no longer a problem nowadays, and the complications to
keep these dual build modes working are significant. And of course we
want to get the meson build system working universally.
This commit removes the make distprep target altogether. The make
dist target continues to do its job, it just doesn't call distprep
anymore.
(*) - The tarball also contains the INSTALL file that is built at make
dist time, but not by distprep. This is unchanged for now.
The make maintainer-clean target, whose job it is to remove the
prebuilt files in addition to what make distclean does, is now just an
alias to make distclean. (In practice, it is probably obsolete given
that git clean is available.)
The following programs are now hard build requirements in configure
(they were already required by meson.build):
- bison
- flex
- perl
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/flat/e07408d9-e5f2-d9fd-5672-f53354e9305e@eisentraut.org
It was always false in single-user mode, in autovacuum workers, and in
background workers. This had no specifically-identified security
consequences, but non-core code or future work might make it
security-relevant. Back-patch to v11 (all supported versions).
Jelte Fennema-Nio. Reported by Jelte Fennema-Nio.
Documentation says it cannot signal "a backend owned by a superuser".
On the contrary, it could signal background workers, including the
logical replication launcher. It could signal autovacuum workers and
the autovacuum launcher. Block all that. Signaling autovacuum workers
and those two launchers doesn't stall progress beyond what one could
achieve other ways. If a cluster uses a non-core extension with a
background worker that does not auto-restart, this could create a denial
of service with respect to that background worker. A background worker
with bugs in its code for responding to terminations or cancellations
could experience those bugs at a time the pg_signal_backend member
chooses. Back-patch to v11 (all supported versions).
Reviewed by Jelte Fennema-Nio. Reported by Hemanth Sandrana and
Mahendrakar Srinivasarao.
Security: CVE-2023-5870
This function implements the standard XMLText function, which
converts text into xml text nodes. It uses the libxml2 function
xmlEncodeSpecialChars to escape predefined entities (&"<>), so
that those do not cause any conflict when concatenating the text
node output with existing xml documents.
This also adds a note in features.sgml about not supporting
XML(SEQUENCE). The SQL specification defines a RETURNING clause
to a set of XML functions, where RETURNING CONTENT or RETURNING
SEQUENCE can be defined. Since PostgreSQL doesn't support
XML(SEQUENCE) all of these functions operate with an
implicit RETURNING CONTENT.
Author: Jim Jones <jim.jones@uni-muenster.de>
Reviewed-by: Vik Fearing <vik@postgresfriends.org>
Discussion: https://postgr.es/m/86617a66-ec95-581f-8d54-08059cca8885@uni-muenster.de
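
A tiny standalone sketch of the kind of escaping xmlEncodeSpecialChars()
performs on the text before it becomes a text node (illustrative only; the
real function is part of libxml2):

    #include <stdio.h>

    static void
    print_escaped(const char *s)
    {
        for (; *s; s++)
        {
            switch (*s)
            {
                case '&':  fputs("&amp;", stdout); break;
                case '<':  fputs("&lt;", stdout); break;
                case '>':  fputs("&gt;", stdout); break;
                case '"':  fputs("&quot;", stdout); break;
                default:   fputc(*s, stdout);
            }
        }
        fputc('\n', stdout);
    }

    int
    main(void)
    {
        print_escaped("foo & <bar>");   /* prints: foo &amp; &lt;bar&gt; */
        return 0;
    }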
Reported-by: Richard Guo
Discussion: https://postgr.es/m/CAMbWs4_STsG1PKQBuvQC8W4sPo3KvML3=jOTjKLUYQuK3g8cpQ@mail.gmail.com
recurse_set_operations() uses the parse tree for the group number estimation,
because of the "varno 0" hack. At the same time 2489d76c49 made root->parse
and corresponding parent_root->simple_rte_array[]->subquery distinct copies
of the parse tree, while d3d55ce571 introduced self-join removal replacing
relid of removed relation only in one of the copies.
The present commit fixes this bug by making recurse_set_operations() call
estimate_num_groups() with the copy of the parse tree processed by self-join
removal.
In future, we may think about maintaining just one copy of the parse tree
and/or keeping removed relids as aliases.
Reported-by: Zuming Jiang
Bug: #18170
Discussion: https://postgr.es/m/flat/18170-f1d17bf9a0d58b24%40postgresql.org
Author: Richard Guo, Alexander Korotkov
Reviewed-by: Andrei Lepikhov
As per the preceding commit, GUC APIs generally expose NULL-valued
string variables as empty strings. Extend that policy to
GetConfigOption() and GetConfigOptionResetString(), eliminating
a crash hazard for unwary callers, as well as a fundamental
ambiguity in GetConfigOption()'s API.
No back-patch, since this is an API change and conceivably somebody
somewhere is depending on this corner case.
Xing Guo, Aleksander Alekseev, Tom Lane
Discussion: https://postgr.es/m/CACpMh+AyDx5YUpPaAgzVwC1d8zfOL4JoD-uyFDnNSa1z0EsDQQ@mail.gmail.com
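
The policy itself is simple; a hedged sketch of the idea (not the actual
GetConfigOption() code):

    #include <stdio.h>

    /* Expose a NULL-valued string setting as an empty string to callers. */
    static const char *
    option_as_string(const char *stored_value)
    {
        return stored_value ? stored_value : "";
    }

    int
    main(void)
    {
        /* e.g. a string GUC whose boot value is NULL */
        printf("value: \"%s\"\n", option_as_string(NULL));
        return 0;
    }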
get_explain_guc_options() crashed if a string GUC marked GUC_EXPLAIN
has a NULL boot_val. Nosing around found a couple of other places
that seemed insufficiently cautious about NULL string values, although
those are likely unreachable in practice. Add some commentary
defining the expectations for NULL values of string variables,
in hopes of forestalling future additions of more such bugs.
Xing Guo, Aleksander Alekseev, Tom Lane
Discussion: https://postgr.es/m/CACpMh+AyDx5YUpPaAgzVwC1d8zfOL4JoD-uyFDnNSa1z0EsDQQ@mail.gmail.com
Introduce unicode_version(), icu_unicode_version(), and
unicode_assigned().
The latter requires introducing a new lookup table for the Unicode
General Category, which is generated along with the other Unicode
lookup tables.
Discussion: https://postgr.es/m/CA+TgmoYzYR-yhU6k1XFCADeyj=Oyz2PkVsa3iKv+keM8wp-F_A@mail.gmail.com
Reviewed-by: Peter Eisentraut
The name of the function resulting from the macro expansion was
incorrectly stated.
Backpatch to 16 where it was introduced.
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20231101.172308.1740861597185391383.horikyota.ntt@gmail.com
Backpatch-through: v16
This also updates some C comments.
Reported-by: suchithjn22@gmail.com
Discussion: https://postgr.es/m/167336599095.2667301.15497893107226841625@wrigleys.postgresql.org
Author: Laurenz Albe (doc patch)
Backpatch-through: 11
Discussion: https://postgr.es/m/CAHut+Ps5MdQ1b4jp9rd63zfE2X25mV58y1W+hm2v53svtGDxBQ@mail.gmail.com
Author: Peter Smith
Backpatch-through: master
pgstatindex failed with ERRCODE_DATA_CORRUPTED, of the "can't-happen"
class XX. The other functions succeeded on an empty index; they might
have malfunctioned if the failed index build left torn I/O or other
complex state. Report an ERROR in statistics functions pgstatindex,
pgstatginindex, pgstathashindex, and pgstattuple. Report DEBUG1 and
skip all index I/O in maintenance functions brin_desummarize_range,
brin_summarize_new_values, brin_summarize_range, and
gin_clean_pending_list. Back-patch to v11 (all supported versions).
Discussion: https://postgr.es/m/20231001195309.a3@google.com
Reported-by: Michael Paquier
Discussion: https://postgr.es/m/ZT9YH7-TTx27V3yW@paquier.xyz
Backpatch-through: master
Fill in .objtype field where an appropriate value exists.
These cases are currently not used (see also comments at
get_object_type()), but we might as well fill in what's possible in
case additional uses arise.
Discussion: https://www.postgresql.org/message-id/flat/75ae5875-3abc-dafc-8aec-73247ed41cde@eisentraut.org
When beginning recovery, a LOG is displayed by the startup process to
show which recovery mode will be used depending on the .signal file(s)
set in the data folder, like "standby mode", recovery up to a given
target type and value, or archive recovery.
A different patch is under discussion to simplify the startup code by
requiring the presence of recovery.signal and/or standby.signal when a
backup_label file is read. Delaying this LOG a bit ensures that the
correct recovery mode would be reported, and putting it at this position
does not make it lose its value.
While on it, this commit adds a few comments documenting a bit more the
initial recovery steps and their dependencies, and fixes an incorrect
comment format. This introduces no behavior changes.
Extracted from a larger patch by me.
Reviewed-by: David Steele, Bowen Shi
Discussion: https://postgr.es/m/ZArVOMifjzE7f8W7@paquier.xyz
When beginning recovery from a base backup by reading a backup_label
file, it may be possible that no checkpoint record is available
depending on the method used when the base backup was taken, which would
prevent recovery from beginning. In this case, the FATAL messages
issued, initially added by c900c15269f0f, mentioned recovery.signal as
an option to do recovery but not standby.signal. Let's add it as an
available option, for clarity.
Per suggestion from Bowen Shi, extracted from a larger patch by me.
Author: Michael Paquier
Discussion: https://postgr.es/m/CAM_vCudkSjr7NsNKSdjwtfAm9dbzepY6beZ5DP177POKy8=2aw@mail.gmail.com
Historically, the statistics of the checkpointer have been always part
of pg_stat_bgwriter. This commit removes a few columns from
pg_stat_bgwriter, and introduces pg_stat_checkpointer with equivalent,
renamed columns (plus a new one for the reset timestamp):
- checkpoints_timed -> num_timed
- checkpoints_req -> num_requested
- checkpoint_write_time -> write_time
- checkpoint_sync_time -> sync_time
- buffers_checkpoint -> buffers_written
The fields of PgStat_CheckpointerStats and its SQL functions are renamed
to match with the new field names, for consistency. Note that
background writer and checkpointer have been split into two different
processes in commits 806a2aee3791 and bf405ba8e460. The pgstat
structures were already split, making this change straightforward.
Bump catalog version.
Author: Bharath Rupireddy
Reviewed-by: Bertrand Drouvot, Andres Freund, Michael Paquier
Discussion: https://postgr.es/m/CALj2ACVxX2ii=66RypXRweZe2EsBRiPMj0aHfRfHUeXJcC7kHg@mail.gmail.com
This commit refactors find_tabstat_entry() so that transaction counters
for inserted, updated and deleted tuples are included in the result
returned. If a shared entry is found for a relation, its result is now
a copy of the PgStat_TableStatus entry retrieved from shared memory.
This idea has been proposed by Andres Freund.
While on it, the following SQL functions, used in system views, are
refactored with macros, in the same spirit as 83a1a1b56645, reducing the
amount of code:
- pg_stat_get_xact_tuples_deleted()
- pg_stat_get_xact_tuples_inserted()
- pg_stat_get_xact_tuples_updated()
There is now only one caller of find_tabstat_entry() in the tree.
Author: Bertrand Drouvot
Discussion: https://postgr.es/m/b9e1f543-ee93-8168-d530-d961708ad9d3@gmail.com
The original code did very little to guard against integer or floating
point overflow when computing the interval's fields. Detect any such
overflows and error out, rather than silently returning bogus results.
Joseph Koshakow, reviewed by Ashutosh Bapat and me.
Discussion: https://postgr.es/m/CAAvxfHcm1TPwH_zaGWuFoL8pZBestbRZTU6Z%3D-RvAdSXTPbKfg%40mail.gmail.com
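
For illustration, a standalone sketch of one kind of check involved:
rejecting a floating-point value that cannot be represented in a 64-bit
interval field instead of silently converting it (illustrative, not the
actual code):

    #include <math.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool
    float8_to_int64_checked(double v, int64_t *result)
    {
        /*
         * 2^63 itself is not representable as int64, hence the asymmetric
         * comparison; NaN must be rejected explicitly.
         */
        if (isnan(v) || v < -9223372036854775808.0 || v >= 9223372036854775808.0)
            return false;
        *result = (int64_t) v;
        return true;
    }

    int
    main(void)
    {
        int64_t usecs;

        if (!float8_to_int64_checked(1e20, &usecs))
            printf("interval field value out of range\n");
        return 0;
    }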
When looping around after finding that the set-returning function
returned zero rows for the current input tuple, ExecProjectSet
neglected to reset either of the two memory contexts it's
responsible for cleaning out. Typically this wouldn't cause much
problem, because once the SRF does return at least one row, the
contexts would get reset on the next call. However, if the SRF
returns no rows for many input tuples in succession, quite a lot
of memory could be transiently consumed.
To fix, make sure we reset both contexts while looping around.
Per bug #18172 from Sergei Kornilov. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/18172-9b8c5fc1d676ded3@postgresql.org
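
A standalone sketch of the control-flow fix described above, with a toy
arena standing in for the two memory contexts (illustrative only):

    #include <stdio.h>

    typedef struct
    {
        char    buf[1024];
        size_t  used;
    } Arena;

    static void arena_reset(Arena *a) { a->used = 0; }

    static void *
    arena_alloc(Arena *a, size_t n)
    {
        void   *p = a->buf + a->used;

        a->used += n;
        return p;
    }

    /* Pretend set-returning function: even-numbered inputs return no rows. */
    static int
    srf_produces_rows(int input)
    {
        return input % 2;
    }

    int
    main(void)
    {
        Arena   scratch = {.used = 0};

        for (int tuple = 0; tuple < 8; tuple++)
        {
            arena_reset(&scratch);      /* reset even when the SRF returned 0 rows */
            arena_alloc(&scratch, 100); /* per-tuple workspace */
            if (!srf_produces_rows(tuple))
                continue;               /* loop around for the next input tuple */
            printf("tuple %d produced output\n", tuple);
        }
        return 0;
    }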
Discussion: https://postgr.es/m/CAEG8a3L7UoZXH1VmzpV-VDkex2kt68nWKuW1WiohoT=RrzYKWA@mail.gmail.com
Author: Junwang Zhao
Backpatch-through: master
Discussion: https://postgr.es/m/CAEG8a3LZHTR5S+OPZCbZvECwsqdbx=pBRFZZyDjKaAtgoALOQQ@mail.gmail.com
Author: Junwang Zhao
Backpatch-through: master
When calculating distance for interval values, the code mostly mimicked
interval_mi, i.e. it built a new interval value for the difference.
That however does not work for sufficiently distant interval values,
when the difference overflows the interval range.
Instead, we can calculate the distance directly, without constructing
the intermediate (and unnecessary) interval value.
Backpatch to 14, where minmax-multi indexes were introduced.
Reported-by: Dean Rasheed
Reviewed-by: Ashutosh Bapat, Dean Rasheed
Backpatch-through: 14
Discussion: https://postgr.es/m/eef0ea8c-4aaa-8d0d-027f-58b1f35dd170@enterprisedb.com
Make sure that infinite values in date/timestamp columns are treated as
if at infinite distance. Infinite values should not be merged with other
values, leaving them as outliers. The code however returned distance 0
in this case, so that infinite values were merged first. While this does
not break the index (i.e. it still produces correct query results), it
may make it much less efficient.
We don't need explicit handling of infinite date/timestamp values when
calculating distances, because those values are represented as extreme
but regular values (e.g. INT64_MIN/MAX for the timestamp type).
We don't need an exact distance, just a value that is much larger than
distances between regular values. With the added cast to double values,
we can simply subtract the values.
The regression test queries a value in the "gap" and checks the range
was properly eliminated by the BRIN index.
This only affects minmax-multi indexes on timestamp/date columns with
infinite values, which is not very common in practice. The affected
indexes may need to be rebuilt.
Backpatch to 14, where minmax-multi indexes were introduced.
Reported-by: Ashutosh Bapat
Reviewed-by: Ashutosh Bapat, Dean Rasheed
Backpatch-through: 14
Discussion: https://postgr.es/m/eef0ea8c-4aaa-8d0d-027f-58b1f35dd170@enterprisedb.com
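
A standalone sketch of the distance calculation described above: cast the
int64 timestamp values to double and subtract, so the extreme values used
for infinity naturally land far away from everything else (illustrative,
not the actual brin_minmax_multi code):

    #include <stdint.h>
    #include <stdio.h>

    static double
    timestamp_distance(int64_t a, int64_t b)
    {
        return (double) b - (double) a;
    }

    int
    main(void)
    {
        int64_t infinity = INT64_MAX;               /* how "infinity" is stored */
        int64_t regular = INT64_C(740000000000000); /* an ordinary timestamp */

        printf("one hour apart: %g\n",
               timestamp_distance(regular, regular + INT64_C(3600000000)));
        printf("distance to infinity: %g\n",
               timestamp_distance(regular, infinity));
        return 0;
    }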