In commit 8bf6ec3ba I assumed that no code path could reach
ExecGetExtraUpdatedCols without having gone through
ExecInitStoredGenerated. That turns out not to be the case in
logical replication: if there's an ON UPDATE trigger on the target
table, trigger.c will call this code before anybody has set up its
generated columns. Having seen that, I don't have a lot of faith in
there not being other such paths. ExecGetExtraUpdatedCols can call
ExecInitStoredGenerated for itself, as long as we are willing to
assume that it is only called in CMD_UPDATE operations, which on
the whole seems like a safer leap of faith.
Per report from Vitaly Davydov.
Discussion: https://postgr.es/m/d259d69652b8c2ff50e14cda3c236c7f@postgrespro.ru
 
We were identifying the updatable generated columns of inheritance
children by transposing the calculation made for their parent.
However, there's nothing that says a traditional-inheritance child
can't have generated columns that aren't there in its parent, or that
have different dependencies than are in the parent's expression.
(At present it seems that we don't enforce that for partitioning
either, which is likely wrong to some degree or other; but the case
clearly needs to be handled with traditional inheritance.)
Hence, drop the very-klugy-anyway "extraUpdatedCols" RTE field
in favor of identifying which generated columns depend on updated
columns during executor startup. In HEAD we can remove
extraUpdatedCols altogether; in back branches, it's still there but
always empty. Another difference between the HEAD and back-branch
versions of this patch is that in HEAD we can add the new bitmap field
to ResultRelInfo, but that would cause an ABI break in back branches.
Like 4b3e37993, add a List field at the end of struct EState instead.
Back-patch to v13. The bogus calculation is also being made in v12,
but it doesn't have the same visible effect because we don't use it
to decide which generated columns to recalculate; as a consequence of
which the patch doesn't apply easily. I think that there might still
be a demonstrable bug associated with trigger firing conditions, but
that's such a weird corner-case usage that I'm content to leave it
unfixed in v12.
Amit Langote and Tom Lane
Discussion: https://postgr.es/m/CA+HiwqFshLKNvQUd1DgwJ-7tsTp=dwv7KZqXC4j2wYBV1aCDUA@mail.gmail.com
Discussion: https://postgr.es/m/2793383.1672944799@sss.pgh.pa.us
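For illustration, a hedged sketch of the traditional-inheritance shape at
issue, with made-up table and column names; the child adds a generated
column whose dependencies differ from anything in the parent:

CREATE TABLE parent (f1 int, f2 int GENERATED ALWAYS AS (f1 * 2) STORED);
CREATE TABLE child (f3 int, f4 int GENERATED ALWAYS AS (f3 + 1) STORED)
    INHERITS (parent);
-- An UPDATE of f1 across the hierarchy must recompute f2 in both tables,
-- while the child's f4 depends only on the child-local column f3.
UPDATE parent SET f1 = f1 + 1;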
 
When considering an empty grouping set, we fetched
phasedata->eqfunctions[-1]. Because the eqfunctions array is
palloc'd, that would always be an aset pointer in released versions,
and thus the code accidentally failed to malfunction (since it would
do nothing unless it found a null pointer). Nonetheless this seems
like trouble waiting to happen, so add a check for length == 0.
It's depressing that our valgrind testing did not catch this.
Maybe we should reconsider the choice to not mark that word NOACCESS?
Richard Guo
Discussion: https://postgr.es/m/CAMbWs4-vZuuPOZsKOYnSAaPYGKhmacxhki+vpOKk0O7rymccXQ@mail.gmail.com
 
In commit ffbb7e65a, I added a ModifyTableState member to ResultRelInfo
to save the owning ModifyTableState for use by nodeModifyTable.c when
performing batch inserts, but as pointed out by Tom Lane, that changed
the array stride of es_result_relations, and that would break any
previously-compiled extension code that accesses that array. Fix by
removing that member from ResultRelInfo and instead adding a List member
at the end of EState to save such ModifyTableStates.
Per report from Tom Lane. Back-patch to v14, like the previous commit;
I chose to apply the patch to HEAD as well, to make back-patching easy.
Discussion: http://postgr.es/m/4065383.1669395453%40sss.pgh.pa.us
 
9d9c02ccd added window "run conditions", which allows the evaluation of
monotonic window functions to be skipped when the run condition is no
longer true. Prior to this commit, once the run condition was no longer
true and we stopped evaluating the window functions, we simply left
the ecxt_aggvalues[] and ecxt_aggnulls[] arrays alone to store whatever
value was stored there the last time the window function was evaluated.
Leaving a stale value in there isn't really a problem on 64-bit builds as
all of the window functions which we recognize as monotonic return
int8, which is passed by value on 64-bit builds. However, on 32-bit
builds, this was a problem as the value stored in the ecxt_aggvalues[]
element would be a by-ref value and it would be pointing to some memory
which would get reset once the tuple context is destroyed. Since the
WindowAgg node will output these values in the resulting tupleslot, this
could be problematic for the top-level WindowAgg node which must look at
these values to filter out the rows that don't meet its filter condition.
Here we fix this by just zeroing the ecxt_aggvalues[] and setting the
ecxt_aggnulls[] array to true when the run condition first becomes false.
This results in the WindowAgg's output having NULLs for the WindowFunc's
columns rather than a stale value or a pointer to possibly freed
memory. These tuples with the NULLs can only make it as far as the
top-level WindowAgg node before they're filtered out. To ensure that
these tuples *are* always filtered out, we now insist that OpExprs making
up the run condition are strict OpExprs. Currently, all the window
functions which the planner recognizes as monotonic return INT8 and the
operator which is used for the run condition must be a member of a btree
opclass. In reality, these restrictions exclude nothing that's built-in
to Postgres and are unlikely to exclude anyone's custom operators due to
the requirement that the operator is part of a btree opclass. It would be
unusual if those were not strict.
Reported-by: Sergey Shinderuk, using valgrind
Reviewed-by: Richard Guo, Sergey Shinderuk
Discussion: https://postgr.es/m/29184c50-429a-ebd7-f1fb-0589c6723a35@postgrespro.ru
Backpatch-through: 15, where 9d9c02ccd was added
 
This is an oversight in commit 7c337b6b5: I apparently didn't think
about the possibility of a SQL function being executed multiple
times within a query. In that case, functions.c's primitive caching
mechanism allows the same utility parse tree to be presented for
execution more than once. We have to tell ProcessUtility to make
a working copy of the parse tree, or bad things happen.
Normally I'd add a regression test, but I think the reported crasher
is dependent on some rather random implementation choices that are
nowhere near functions.c, so its usefulness as a long-lived test
feels questionable. In any case, this fix is clearly correct given
the design choices of 7c337b6b5.
Per bug #17702 from Xin Wen. Thanks to Daniel Gustafsson for
analysis. Back-patch to v14 where the faulty commit came in
(before that, the responsibility for copying scribble-able
utility parse trees lay elsewhere).
Discussion: https://postgr.es/m/17702-ad24fdcdd1e9047a@postgresql.org
 
Commit b663a4136, which allowed FDWs to INSERT rows in bulk, added code
to nodeModifyTable.c to flush pending inserts to the foreign-table
result relation(s) before completing processing of the ModifyTable node,
but the code failed to take into account the case where the INSERT query
has modifying CTEs, leading to incorrect results.
Also, that commit failed to flush pending inserts before firing BEFORE
ROW triggers so that rows are visible to such triggers.
In that commit we scanned through EState's
es_tuple_routing_result_relations or es_opened_result_relations list to
find the foreign-table result relations to which pending inserts are
flushed, but that would be inefficient in some cases. So to fix, 1) add
a List member to EState to record the insert-pending result relations,
and 2) modify nodeModifyTable.c so that it adds the foreign-table result
relation to the list in ExecInsert() if appropriate, and flushes pending
inserts properly using the list where needed.
While here, fix a copy-and-pasteo in a comment in ExecBatchInsert(),
which was added by that commit.
Back-patch to v14 where that commit appeared.
Discussion: https://postgr.es/m/CAPmGK16qutyCmyJJzgQOhfBq%3DNoGDqTB6O0QBZTihrbqre%2BoxA%40mail.gmail.com
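A hedged sketch of the affected query shape, with ft standing in for a
foreign table whose FDW supports batch insertion:

WITH ins AS (
    INSERT INTO ft VALUES (1), (2) RETURNING a
)
INSERT INTO ft SELECT a + 10 FROM ins;
-- Before this fix, inserts buffered for batching could be left unflushed
-- when the statement contains modifying CTEs, leading to incorrect results.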
 
Reporting tuples for which nothing is done is useless and goes against
the documented behavior, so don't do it.
Backpatch to 15.
Reported by: Luca Ferrari
Discussion: https://postgr.es/m/CAKoxK+42MmACUh6s8XzASQKizbzrtOGA6G1UjzCP75NcXHsiNw@mail.gmail.com
 
This comment references a struct that disappeared before MERGE was
merged ... and ExecDelete is not called by the committed MERGE anyway.
Revert to the original wording.
Backpatch to 15
 
UPDATE was listed twice and DELETE was omitted; replace one UPDATE
with DELETE instead.
Backpatch through v15 where MERGE was added.
Author: Myo Wai Thant <myo.waithant@fujitsu.com>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/OSAPR01MB43247E46931E9E9CFC4AA0F29A079@OSAPR01MB4324.jpnprd01.prod.outlook.com
Backpatch-through: 15
 
Remove the stale text that is a leftover from an earlier version of the
patch to add support for batch insertion, and adjust the wording in the
remaining text.
Back-patch to v14 where batch insertion came in.
Review and wording adjustment by Tom Lane.
Discussion: https://postgr.es/m/CAPmGK14goatHPHQv2Aeu_UTKqZ%2BBO%2BP%2Bzd3HKv5D%2BdyyfWKDSw%40mail.gmail.com
 
91e9e89dc modified nodeSort.c so that it used datum sorts when the
targetlist of the outer node contained only a single column. That commit
failed to recognise that the Datum returned by tuplesort_getdatum() must
be pfree'd when the type is a byref type. Ronan Dunklau did originally
propose the patch with that restriction, but that, probably through my own
fault, got lost during further development work.
Due to the timing of this report (PG15 RC1 is almost out the door), let's
just restrict the datum sort optimization to apply for byval types only.
We might want to look harder into making this work for byref types in
PG16.
Reported-by: Önder Kalacı
Diagnosis-by: Tom Lane
Discussion: https://postgr.es/m/CACawEhVxe0ufR26UcqtU7GYGRuubq3p6ZWPGXL4cxy_uexpAAQ@mail.gmail.com
Backpatch-through: 15, where 91e9e89dc was introduced.
 
The API contract for planstate_tree_walker() callbacks is that they
take a PlanState pointer and a context pointer. Somebody figured
they could save a couple lines of code by ignoring that, and passing
ExecShutdownNode itself as the walker even though it has but one
argument. Somewhat remarkably, we've gotten away with that so far.
However, it seems clear that the upcoming C2x standard means to
forbid such cases, and compilers that actively break such code
likely won't be far behind. So spend the extra few lines of code
to do it honestly with a separate walker function.
In HEAD, we might as well go further and remove ExecShutdownNode's
useless return value. I left that as-is in back branches though,
to forestall complaints about ABI breakage.
Back-patch, with the thought that this might become of practical
importance before our stable branches are all out of service.
It doesn't seem to be fixing any live bug on any currently known
platform, however.
Discussion: https://postgr.es/m/208054.1663534665@sss.pgh.pa.us
 
This reverts the following and makes some associated cleanups:
commit f79b803dc: Common SQL/JSON clauses
commit f4fb45d15: SQL/JSON constructors
commit 5f0adec25: Make STRING an unreserved_keyword.
commit 33a377608: IS JSON predicate
commit 1a36bc9db: SQL/JSON query functions
commit 606948b05: SQL JSON functions
commit 49082c2cc: RETURNING clause for JSON() and JSON_SCALAR()
commit 4e34747c8: JSON_TABLE
commit fadb48b00: PLAN clauses for JSON_TABLE
commit 2ef6f11b0: Reduce running time of jsonb_sqljson test
commit 14d3f24fa: Further improve jsonb_sqljson parallel test
commit a6baa4bad: Documentation for SQL/JSON features
commit b46bcf7a4: Improve readability of SQL/JSON documentation.
commit 112fdb352: Fix finalization for json_objectagg and friends
commit fcdb35c32: Fix transformJsonBehavior
commit 4cd8717af: Improve a couple of sql/json error messages
commit f7a605f63: Small cleanups in SQL/JSON code
commit 9c3d25e17: Fix JSON_OBJECTAGG uniquefying bug
commit a79153b7a: Claim SQL standard compliance for SQL/JSON features
commit a1e7616d6: Rework SQL/JSON documentation
commit 8d9f9634e: Fix errors in copyfuncs/equalfuncs support for JSON node types.
commit 3c633f32b: Only allow returning string types or bytea from json_serialize
commit 67b26703b: expression eval: Fix EEOP_JSON_CONSTRUCTOR and EEOP_JSONEXPR size.
The release notes are also adjusted.
Backpatch to release 15.
Discussion: https://postgr.es/m/40d2c882-bcac-19a9-754d-4299e1d87ac7@postgresql.org
 
The current publisher code checks if UPDATE or DELETE can be executed with
the replica identity of the table even if it's a partitioned table. We can
skip checking the replica identity for partitioned tables because the
operations are actually performed on the leaf partitions (not the
partitioned table).
Reported-by: Brad Nicholson
Author: Hou Zhijie
Reviewed-by: Peter Smith, Amit Kapila
Backpatch-through: 13
Discussion: https://postgr.es/m/CAMMnM%3D8i5DohH%3DYKzV0_wYuYSYvuOJoL9F5nzXTc%2ByzsG1f6rg%40mail.gmail.com
 
It's possible to reach this case when work_mem is very small and tupsize
is (relatively) very large. In that case ExecChooseHashTableSize would
get an assertion failure, or with asserts off it'd compute nbuckets = 0,
which'd likely cause misbehavior later (I've not checked). To fix,
clamp the number of buckets to be at least 1.
This is due to faulty conversion of old my_log2() coding in 28d936031.
Back-patch to v13, as that was.
Zhang Mingli
Discussion: https://postgr.es/m/beb64ca0-91e2-44ac-bf4a-7ea36275ec02@Spark
 
fmgr_sql must make expanded-datum arguments read-only, because
it's possible that the function body will pass the argument to
more than one callee function. If one of those functions takes
the datum's R/W property as license to scribble on it, then later
callees will see an unexpected value, leading to wrong answers.
From a performance standpoint, it'd be nice to skip this in the
common case that the argument value is passed to only one callee.
However, detecting that seems fairly hard, and certainly not
something that I care to attempt in a back-patched bug fix.
Per report from Adam Mackler. This has been broken since we
invented expanded datums, so back-patch to all supported branches.
Discussion: https://postgr.es/m/WScDU5qfoZ7PB2gXwNqwGGgDPmWzz08VdydcPFLhOwUKZcdWbblbo-0Lku-qhuEiZoXJ82jpiQU4hOjOcrevYEDeoAvz6nR0IU4IHhXnaCA=@mackler.email
Discussion: https://postgr.es/m/187436.1660143060@sss.pgh.pa.us
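A hypothetical sketch of the hazard: the argument x is referenced twice in
the body, so if the first callee treated a read-write expanded datum as
scribble-able, the second reference would see the modified value:

CREATE FUNCTION twice(x int[]) RETURNS int[] LANGUAGE sql
    AS $$ SELECT array_append(x, 0) || x $$;
SELECT twice(ARRAY[1, 2, 3]);
-- With the fix, fmgr_sql marks x read-only before evaluating the body,
-- so no callee can clobber the value seen by later references.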
 
Adjusting this function was overlooked in commit 94aa7cc5f. The only
visible symptom (so far) is that INSERT ... ON CONFLICT could go into
an endless loop when inserting a null that has a conflict.
Richard Guo and Tom Lane, per bug #17558 from Andrew Kesper
Discussion: https://postgr.es/m/17558-3f6599ffcf52fd4a@postgresql.org
 
Ordinarily the functions called in this loop ought to have plenty
of CFIs themselves; but we've now seen a case where no such CFI is
reached, making the loop uninterruptible. Even though that's from
a recently-introduced bug, it seems prudent to install a CFI at
the loop level in all branches.
Per discussion of bug #17558 from Andrew Kesper (an actual fix for
that bug will follow).
Discussion: https://postgr.es/m/17558-3f6599ffcf52fd4a@postgresql.org
 
A code comment said that the standard does not define a number for
ERRCODE_SQL_JSON_ITEM_CANNOT_BE_CAST_TO_TARGET_TYPE, but this was
fixed in a later draft version of the standard, so use that number
now.
 
50e17ad28 increased the size of ExprEvalStep from 64 bytes up to 88 bytes.
Lots of effort was spent during the development of the current expression
evaluation code to make an instance of this struct as small as possible.
Making this struct larger than needed reduces CPU cache efficiency during
expression evaluation which causes noticeable slowdowns during query
execution.
In order to reduce the size of the struct, here we remove the fn_addr
field. The values from this field can be obtained via fcinfo, just with
some extra pointer dereferencing. The extra indirection does not seem to
cause any noticeable slowdowns.
Various other fields have been moved into the ScalarArrayOpExprHashTable
struct. These fields are only used when the ScalarArrayOpExprHashTable
pointer has already been dereferenced, so no additional pointer
dereferences occur for these. Here we also make hash_fcinfo_data the last
field in ScalarArrayOpExprHashTable so that we can avoid a further pointer
dereference to get the FunctionCallInfoBaseData. This also saves a call to
palloc().
50e17ad28 was added in 14, but it's too late to adjust the size of the
ExprEvalStep in that version, so here we just backpatch to 15, which is
currently in beta.
Author: Andres Freund, David Rowley
Discussion: https://postgr.es/m/20220616233130.rparivafipt6doj3@alap3.anarazel.de
Backpatch-through: 15
 
The new expression step types increased the size of ExprEvalStep by ~4 for all
types of expression steps, slowing down expression evaluation noticeably. Move
them out of line.
There are other issues with these expression steps, but addressing them is
largely independent of this aspect.
Author: Andres Freund <andres@anarazel.de>
Reviewed-By: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://postgr.es/m/20220616233130.rparivafipt6doj3@alap3.anarazel.de
Backpatch: 15-
 
In commit ec62cb0aa, I foolishly replaced ExecEvalWholeRowVar's
lookup_rowtype_tupdesc_domain call with just lookup_rowtype_tupdesc,
because I didn't see how a domain could be involved there, and
there were no regression test cases to jog my memory. But the
existing code was correct, so revert that change and add a test
case showing why it's necessary. (Note: per comment in struct
DatumTupleFields, it is correct to produce an output tuple that's
labeled with the base composite type, not the domain; hence just
blindly looking through the domain is correct here.)
Per bug #17515 from Dan Kubb. Back-patch to v11 where domains over
composites became a thing.
Discussion: https://postgr.es/m/17515-a24737438363aca0@postgresql.org
 
Bug #17512 highlighted that a suitably broken data type could cause the
backend to crash if either the hash function or equality function were in
some way non-deterministic based on their input values. Such a data type
could cause a crash of the backend due to some code which assumes that
we'll always find a hash table entry corresponding to an item in the
Memoize LRU list.
Here we remove the assumption that we'll always find the entry
corresponding to the given LRU list item and add run-time checks to verify
we have found the given item in the cache.
This is not a fix for bug #17512, but it will turn the crash reported by
that bug report into an internal ERROR.
Reported-by: Ales Zeleny
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAApHDvpxFSTwvoYWT7kmFVSZ9zLAeHb=S9vrz=RExMgSkQNWqw@mail.gmail.com
Backpatch-through: 14, where Memoize was added.
 
Several places in the planner tried to clamp a double value to fit
in a "long" by doing
(long) Min(x, (double) LONG_MAX);
This is subtly incorrect, because it casts LONG_MAX to double and
potentially back again. If long is 64 bits then the double value
is inexact, and the platform might round it up to LONG_MAX+1
resulting in an overflow and an undesirably negative output.
While it's not hard to rewrite the expression into a safe form,
let's put it into a common function to reduce the risk of someone
doing it wrong in future.
In principle this is a bug fix, but since the problem could only
manifest with group count estimates exceeding 2^63, it seems unlikely
that anyone has actually hit this or will do so anytime soon. We're
fixing it mainly to satisfy fuzzer-type tools. That being the case,
a HEAD-only fix seems sufficient.
Andrey Lepikhov
Discussion: https://postgr.es/m/ebbc2efb-7ef9-bf2f-1ada-d6ec48f70e58@postgrespro.ru
 
Like commit c9d297751959.
 
I started out with the intention to rename value_type to item_type to
avoid a collision with a typedef name that appears on some platforms.
Along the way, I noticed that the adjacent field "format" was not being
correctly handled by the backend/nodes/ infrastructure functions:
copyfuncs.c erroneously treated it as a scalar, while equalfuncs,
outfuncs, and readfuncs omitted handling it at all. This looks like
it might be cosmetic at the moment because the field is always NULL
after parse analysis; but that's likely a bug in itself, and the code's
certainly not very future-proof. Let's fix it while we can still do so
without forcing an initdb on beta testers.
Further study found a few other inconsistencies in the backend/nodes/
infrastructure for the recently-added JSON node types, so fix those too.
catversion bumped because of potential change in stored rules.
Discussion: https://postgr.es/m/526703.1652385613@sss.pgh.pa.us
 
Run pgindent, pgperltidy, and reformat-dat-files.
I manually fixed a couple of comments that pgindent uglified.
 
This fixes the grammar of some comments in a couple of tests (SQL and
TAP), and in some C files.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20220511020334.GH19626@telsasoft.com
 
Should have been done this way to start with, but I failed to notice.
This way we avoid some pointless initialization and confine the
variable to the scope where it is really used.
Reviewed-by: Michaël Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/202204191345.qerjy3kxi3eb@alvherre.pgsql
 
There's no reason to keep a separate local variable when we have a place
for it elsewhere. This allows some code to be simplified.
Reviewed-by: Michaël Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/202204191345.qerjy3kxi3eb@alvherre.pgsql
 
These are useless and distracting. We wouldn't have written the code
with them to begin with, so there's no reason to keep them.
Author: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20220411020336.GB26620@telsasoft.com
Discussion: https://postgr.es/m/attachment/133167/0016-Extraneous-blank-lines.patch
 
Add an assert to make this very explicit, as well as a code comment.
The former should silence Coverity complaining about this.
Introduced by 7103ebb7aae8.
Reported-by: Ranier Vilela
Discussion: https://postgr.es/m/CAEudQAqTTAOzXiYybab+1DQOb3ZUuK99=p_KD+yrRFhcDbd0jg@mail.gmail.com
 
Author: Justin Pryzby
Discussion: https://postgr.es/m/20220411020336.GB26620@telsasoft.com
 
Previously, the output of EXPLAIN (BUFFERS) option showed only the I/O
timing spent reading and writing shared and local buffers. This commit
adds on top of that the I/O timing for temporary buffers in the output
of EXPLAIN (for spilled external sorts, hashes, materialization, etc.).
This can be helpful for users in cases where the I/O related to
temporary buffers is the bottleneck.
Like its cousin, this information is available only when track_io_timing
is enabled. Testing with the patch shows an extra overhead of up
to 1% even when using gettimeofday() as the implementation for interval
timings, which is roughly within the usual noise range but still
measurable.
Author: Masahiko Sawada
Reviewed-by: Georgios Kokolatos, Melanie Plageman, Julien Rouhaud,
Ranier Vilela
Discussion: https://postgr.es/m/CAD21AoAJgotTeP83p6HiAGDhs_9Fw9pZ2J=_tYTsiO5Ob-V5GQ@mail.gmail.com
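For illustration, a sketch of how the new counters might be observed (the
query is hypothetical; temp I/O timing shows up only when the plan spills
to temporary buffers):

SET track_io_timing = on;
EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM generate_series(1, 1000000) AS g ORDER BY g DESC;
-- With enough data to spill, the BUFFERS output now includes read/write
-- timings for temporary buffers alongside the shared/local figures.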
 
Window functions such as row_number() always return a value higher than
the previously returned value for tuples in any given window partition.
Traditionally queries such as:
SELECT * FROM (
    SELECT *, row_number() over (order by c) rn
    FROM t
) t WHERE rn <= 10;
were executed fairly inefficiently. Neither the query planner nor the
executor knew that once rn made it to 11, nothing further would match
the outer query's WHERE clause. It would blindly continue until all
tuples were exhausted from the subquery.
Here we implement means to make the above execute more efficiently.
This is done by way of adding a pg_proc.prosupport function to various of
the built-in window functions and adding supporting code to allow the
support function to inform the planner if the window function is
monotonically increasing, monotonically decreasing, both or neither. The
planner is then able to make use of that information and possibly allow
the executor to short-circuit execution by way of adding a "run condition"
to the WindowAgg to allow it to determine if some of its execution work
can be skipped.
This "run condition" is not like a normal filter. These run conditions
are only built using quals comparing values to monotonic window functions.
For monotonic increasing functions, quals making use of the btree
operators for <, <= and = can be used (assuming the window function column
is on the left). You can see here that once such a condition becomes false
a monotonic increasing function could never make it subsequently true
again. For monotonically decreasing functions the >, >= and = btree
operators for the given type can be used for run conditions.
The best-case situation for this is when there is a single WindowAgg node
without a PARTITION BY clause. Here when the run condition becomes false
the WindowAgg node can simply return NULL. No more tuples will ever match
the run condition. It's a little more complex when there is a PARTITION
BY clause. In this case, we cannot return NULL as we must still process
other partitions. To speed this case up we pull tuples from the outer
plan to check if they're from the same partition and simply discard them
if they are. When we find a tuple belonging to another partition we start
processing as normal again until the run condition becomes false or we run
out of tuples to process.
When there are multiple WindowAgg nodes to evaluate then this complicates
the situation. For intermediate WindowAggs we must ensure we always
return all tuples to the calling node. Any filtering done could lead to
incorrect results in WindowAgg nodes above. For all intermediate nodes,
we can still save some work when the run condition becomes false. We've
no need to evaluate the WindowFuncs anymore. Other WindowAgg nodes cannot
reference the value of these and these tuples will not appear in the final
result anyway. The savings here are small in comparison to what can be
saved in the top-level WindowAgg, but still worthwhile.
Intermediate WindowAgg nodes never filter out tuples, but here we change
WindowAgg so that the top-level WindowAgg filters out tuples that don't
match the intermediate WindowAgg node's run condition. Such filters
appear in the "Filter" clause in EXPLAIN for the top-level WindowAgg node.
Here we add prosupport functions to allow the above to work for:
row_number(), rank(), dense_rank(), count(*) and count(expr). It appears
technically possible to do the same for min() and max(), however, it seems
unlikely to be useful enough, so that's not done here.
Bump catversion
Author: David Rowley
Reviewed-by: Andy Fan, Zhihong Yu
Discussion: https://postgr.es/m/CAApHDvqvp3At8++yF8ij06sdcoo1S_b2YoaT9D4Nf+MObzsrLQ@mail.gmail.com
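For illustration, the PARTITION BY case discussed above, with hypothetical
names; once rn passes 10 within a partition, the WindowAgg can skip ahead
to the next partition, discarding the intervening tuples rather than
evaluating the window function for them:

SELECT * FROM (
    SELECT *, row_number() OVER (PARTITION BY grp ORDER BY c) rn
    FROM t
) t WHERE rn <= 10;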
 
This reverts commit 99392cdd78b788295e52b9f4942fa11992fd5ba9.
We'd rather rewrite ri_triggers.c as a whole than piecemeal.
Discussion: https://postgr.es/m/E1ncXX2-000mFt-Pe@gemulon.postgresql.org
 
Modify the subroutines called by RI trigger functions that want to check
if a given referenced value exists in the referenced relation to simply
scan the foreign key constraint's unique index, instead of using SPI to
execute
SELECT 1 FROM referenced_relation WHERE ref_key = $1
This saves a lot of work, especially when inserting into or updating a
referencing relation.
This rewrite makes it possible to fix a PK row visibility bug caused by a partition
descriptor hack which requires ActiveSnapshot to be set to come up with
the correct set of partitions for the RI query running under REPEATABLE
READ isolation. We now set that snapshot independently of the snapshot
to be used by the PK index scan, so the two no longer interfere. The
buggy output in src/test/isolation/expected/fk-snapshot.out of the
relevant test case added by commit 00cb86e75d6d has been corrected.
(The bug still exists in branch 14, but this fix is too
invasive to backpatch.)
Author: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Reviewed-by: Li Japin <japinli@hotmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Discussion: https://postgr.es/m/CA+HiwqGkfJfYdeq5vHPh6eqPKjSbfpDDY+j-kXYFePQedtSLeg@mail.gmail.com
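For illustration, a minimal foreign-key setup (hypothetical names); the
insert into the referencing table now checks the referenced key by scanning
the primary-key index directly instead of executing the SPI query above:

CREATE TABLE customers (id int PRIMARY KEY);
CREATE TABLE orders (id int PRIMARY KEY,
                     cust_id int REFERENCES customers (id));
INSERT INTO customers VALUES (42);
INSERT INTO orders VALUES (1, 42);   -- RI check: does customers.id = 42 exist?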
 
This reverts a sequence of commits, implementing features related to
logical decoding and replication of sequences:
- 0da92dc530c9251735fc70b20cd004d9630a1266
- 80901b32913ffa59bf157a4d88284b2b3a7511d9
- b779d7d8fdae088d70da5ed9fcd8205035676df3
- d5ed9da41d96988d905b49bebb273a9b2d6e2915
- a180c2b34de0989269fdb819bff241a249bf5380
- 75b1521dae1ff1fde17fda2e30e591f2e5d64b6a
- 2d2232933b02d9396113662e44dca5f120d6830e
- 002c9dd97a0c874fd1693a570383e2dd38cd40d5
- 05843b1aa49df2ecc9b97c693b755bd1b6f856a9
The implementation has issues, mostly due to combining transactional and
non-transactional behavior of sequences. It's not clear how this could
be fixed, but it'll require reworking a significant part of the patch.
Discussion: https://postgr.es/m/95345a19-d508-63d1-860a-f5c2f41e8d40@enterprisedb.com
 
* Move the execution pruning initialization steps that are common
between both ExecInitAppend() and ExecInitMergeAppend() into a new
function ExecInitPartitionPruning() defined in execPartition.c.
Those steps include creation of a PartitionPruneState to be used for
all instances of pruning and determining the minimal set of child
subplans that need to be initialized by performing initial pruning if
needed, and finally adjusting the subplan_map arrays in the
PartitionPruneState to reflect the new set of subplans remaining
after initial pruning if it was indeed performed.
ExecCreatePartitionPruneState() is no longer exported out of
execPartition.c and has been renamed to CreatePartitionPruneState()
as a local sub-routine of ExecInitPartitionPruning().
* Likewise, ExecFindInitialMatchingSubPlans() that was in charge of
performing initial pruning no longer needs to be exported. In fact,
since it would now have the same body as the more generally named
ExecFindMatchingSubPlans(), except differing in the value of
initial_prune passed to the common subroutine
find_matching_subplans_recurse(), it seems better to remove it and add
an initial_prune argument to ExecFindMatchingSubPlans().
* Add an ExprContext field to PartitionPruneContext to remove the
implicit assumption in the runtime pruning code that the ExprContext to
use to compute pruning expressions that need one can always rely on the
PlanState providing it. A future patch will allow runtime pruning (at
least the initial pruning steps) to be performed without the
corresponding PlanState yet having been created, so this will help.
Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqEYCpEqh2LMDOp9mT+4-QoVe8HgFMKBjntEMCTZLpcCCA@mail.gmail.com
 
This feature allows jsonb data to be treated as a table and thus used in
a FROM clause like other tabular data. Data can be selected from the
jsonb using jsonpath expressions, and hoisted out of nested structures
in the jsonb to form multiple rows, more or less like an outer join.
Nikita Glukhov
Reviewers have included (in no particular order) Andres Freund, Alexander
Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zhihong Yu (whose
name I previously misspelled), Himanshu Upadhyaya, Daniel Gustafsson,
Justin Pryzby.
Discussion: https://postgr.es/m/7e2cb85d-24cf-4abb-30a5-1a33715959bd@postgrespro.ru
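A hedged sketch of the kind of query this enables, using made-up data:

SELECT jt.*
FROM JSON_TABLE(
         jsonb '[{"a": 1, "b": "x"}, {"a": 2, "b": "y"}]',
         '$[*]'
         COLUMNS (a int PATH '$.a', b text PATH '$.b')
     ) AS jt;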
 
The general usage pattern when we store tuples in tuplesort.c is that
we store a series of tuples one by one then either perform a sort or spill
them to disk. In the common case, there is no pfreeing of already stored
tuples. Since in the common case we do not individually pfree tuples, we
have very little need for aset.c memory allocation behavior which
maintains freelists and always rounds allocation sizes up to the next
power of 2 size.
Here we conditionally use generation.c contexts for storing tuples in
tuplesort.c when the sort will never be bounded. Unfortunately, the
memory context to store tuples is already created by the time any calls
would be made to tuplesort_set_bound(), so here we add a new sort option
that allows callers to specify if they're going to need a bounded sort or
not. We'll use a standard aset.c allocator when this sort option is set,
and a generation.c context otherwise.
Extension authors must ensure that the TUPLESORT_ALLOWBOUNDED flag is
used when calling tuplesort_begin_* for any sorts that make a call to
tuplesort_set_bound().
Author: David Rowley
Reviewed-by: Andy Fan
Discussion: https://postgr.es/m/CAApHDvoH4ASzsAOyHcxkuY01Qf++8JJ0paw+03dk+W25tQEcNQ@mail.gmail.com
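Roughly speaking, and only as an illustration of the distinction (the
allocator choice itself is internal to tuplesort.c): a sort that can never
become bounded is eligible for the generation.c context, while a top-N sort
driven by LIMIT may be bounded and stays on the standard allocator:

SELECT * FROM t ORDER BY a;             -- never bounded
SELECT * FROM t ORDER BY a LIMIT 10;    -- may become a bounded (top-N) sort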
 
This replaces the bool flag for randomAccess. An upcoming patch requires
adding another option, so instead of breaking the API for that, then
breaking it again one day if we add more options, let's just break it
once. Any boolean options we add in the future will just make use of an
unused bit in the flags.
Any extensions making use of tuplesorts will need to update their code
to pass TUPLESORT_RANDOMACCESS instead of true for randomAccess.
TUPLESORT_NONE can be used for a set of empty options.
Author: David Rowley
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/CAApHDvoH4ASzsAOyHcxkuY01Qf%2B%2B8JJ0paw%2B03dk%2BW25tQEcNQ%40mail.gmail.com
 
This patch introduces three SQL standard JSON functions:
JSON() (incorrectly mentioned in my commit message for f4fb45d15c)
JSON_SCALAR()
JSON_SERIALIZE()
JSON() produces json values from text, bytea, json or jsonb values, and
has facilities for handling duplicate keys.
JSON_SCALAR() produces a json value from any scalar sql value, including
json and jsonb.
JSON_SERIALIZE() produces text or bytea from input which contains or
represents json or jsonb.
For the most part these functions don't add any significant new
capabilities, but they will be of use to users wanting standard
compliant JSON handling.
Nikita Glukhov
Reviewers have included (in no particular order) Andres Freund, Alexander
Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zihong Yu,
Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby.
Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
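Hedged examples of the three constructors as described above (exact result
formatting may differ):

SELECT JSON('{"a": 1, "b": 2}');              -- json value from text
SELECT JSON_SCALAR(42);                       -- json value from a scalar
SELECT JSON_SERIALIZE(jsonb '{"a": 1}');      -- text (or bytea) back out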
 
This introduces the SQL/JSON functions for querying JSON data using
jsonpath expressions. The functions are:
JSON_EXISTS()
JSON_QUERY()
JSON_VALUE()
All of these functions only operate on jsonb. The workaround for now is
to cast the argument to jsonb.
JSON_EXISTS() tests if the jsonpath expression applied to the jsonb
value yields any values. JSON_VALUE() must return a single value, and an
error occurs if it tries to return multiple values. JSON_QUERY() must
return a json object or array, and there are various WRAPPER options for
handling scalar or multi-value results. Both these functions have
options for handling EMPTY and ERROR conditions.
Nikita Glukhov
Reviewers have included (in no particular order) Andres Freund, Alexander
Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zihong Yu,
Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby.
Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
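Hedged examples of the three functions; per the note above, the argument
must be jsonb for now:

SELECT JSON_EXISTS(jsonb '{"a": [1, 2, 3]}', '$.a[2]');     -- true
SELECT JSON_VALUE(jsonb '{"a": 123}', '$.a');               -- 123
SELECT JSON_QUERY(jsonb '[1, 2, 3]', '$[*]' WITH WRAPPER);  -- [1, 2, 3]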
 
This patch introduces the SQL standard IS JSON predicate. It operates
on text and bytea values representing JSON as well as on the json and
jsonb types. Each test has an IS and IS NOT variant. The tests are:
IS JSON [VALUE]
IS JSON ARRAY
IS JSON OBJECT
IS JSON SCALAR
IS JSON WITH | WITHOUT UNIQUE KEYS
These are mostly self-explanatory, but note that IS JSON WITHOUT UNIQUE
KEYS is true whenever IS JSON is true, and IS JSON WITH UNIQUE KEYS is
true whenever IS JSON is true, except when IS JSON OBJECT is true and there
are duplicate keys (which is never the case when applied to jsonb values).
Nikita Glukhov
Reviewers have included (in no particular order) Andres Freund, Alexander
Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zihong Yu,
Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby.
Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
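For illustration, hedged examples of the predicate applied to string
literals:

SELECT '{"a": 1}' IS JSON;                            -- true
SELECT '[1, 2, 3]' IS JSON ARRAY;                     -- true
SELECT '"abc"' IS JSON SCALAR;                        -- true
SELECT '{"a": 1, "a": 2}' IS JSON WITH UNIQUE KEYS;   -- false
SELECT 'not json' IS JSON;                            -- false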
 
MERGE performs actions that modify rows in the target table using a
source table or query. MERGE provides a single SQL statement that can
conditionally INSERT/UPDATE/DELETE rows -- a task that would otherwise
require multiple PL statements. For example,
MERGE INTO target AS t
USING source AS s
ON t.tid = s.sid
WHEN MATCHED AND t.balance > s.delta THEN
  UPDATE SET balance = t.balance - s.delta
WHEN MATCHED THEN
  DELETE
WHEN NOT MATCHED AND s.delta > 0 THEN
  INSERT VALUES (s.sid, s.delta)
WHEN NOT MATCHED THEN
  DO NOTHING;
MERGE works with regular tables, partitioned tables and inheritance
hierarchies, including column and row security enforcement, as well as
support for row and statement triggers and transition tables therein.
MERGE is optimized for OLTP and is parameterizable, though also useful
for large scale ETL/ELT. MERGE is not intended to be used in preference
to existing single SQL commands for INSERT, UPDATE or DELETE since there
is some overhead. MERGE can be used from PL/pgSQL.
MERGE does not support targeting updatable views or foreign tables, and
RETURNING clauses are not allowed either. These limitations are likely
fixable with sufficient effort. Rewrite rules are also not supported,
but it's not clear that we'd want to support them.
Author: Pavan Deolasee <pavan.deolasee@gmail.com>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Author: Amit Langote <amitlangote09@gmail.com>
Author: Simon Riggs <simon.riggs@enterprisedb.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Reviewed-by: Andres Freund <andres@anarazel.de> (earlier versions)
Reviewed-by: Peter Geoghegan <pg@bowt.ie> (earlier versions)
Reviewed-by: Robert Haas <robertmhaas@gmail.com> (earlier versions)
Reviewed-by: Japin Li <japinli@hotmail.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Discussion: https://postgr.es/m/CANP8+jKitBSrB7oTgT9CY2i1ObfOt36z0XMraQc+Xrz8QB0nXA@mail.gmail.com
Discussion: https://postgr.es/m/CAH2-WzkJdBuxj9PO=2QaO9-3h3xGbQPZ34kJH=HukRekwM-GZg@mail.gmail.com
Discussion: https://postgr.es/m/20201231134736.GA25392@alvherre.pgsql