Previously, we used an array of size max_replication_slots to store stats
for replication slots. But that had two problems when a message for
dropping a slot gets lost: 1) the stats for a new slot are not recorded if
the array is full, and 2) we could write beyond the end of the array if the
user reduces max_replication_slots.
This commit uses an HTAB for replication slot statistics, resolving both
problems. Now, pgstat_vacuum_stat() searches for all the dead replication
slots in the stats hashtable and tells the collector to remove them. To
avoid showing the stats for already-dropped slots, the
pg_stat_replication_slots view looks up slot stats by the slot name taken
from pg_replication_slots.
Also, we send a message for creating a slot at slot creation, initializing
the stats. This reduces the possibility that the stats are accumulated into
the old slot stats when a message for dropping a slot gets lost.
Reported-by: Andres Freund
Author: Sawada Masahiko, test case by Vignesh C
Reviewed-by: Amit Kapila, Vignesh C, Dilip Kumar
Discussion: https://postgr.es/m/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de
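
For illustration, a minimal sketch of such a hashtable using PostgreSQL's
dynahash API, keyed by slot name. The entry layout, counter, and helper
names here are assumptions, not the actual pgstat.c definitions:

    /* Sketch only: per-slot statistics kept in a dynahash table. */
    #include "postgres.h"
    #include "utils/hsearch.h"

    typedef struct ReplSlotStatsEntry
    {
        char        slotname[NAMEDATALEN];  /* hash key: slot name */
        int64       spill_txns;             /* example counter */
    } ReplSlotStatsEntry;

    static HTAB *replslot_stats = NULL;

    static void
    replslot_stats_init(void)
    {
        HASHCTL     ctl;

        ctl.keysize = NAMEDATALEN;
        ctl.entrysize = sizeof(ReplSlotStatsEntry);
        replslot_stats = hash_create("replication slot statistics",
                                     32, &ctl,
                                     HASH_ELEM | HASH_STRINGS);
    }

    static ReplSlotStatsEntry *
    replslot_stats_get(const char *slotname)
    {
        bool        found;

        /* HASH_ENTER creates the entry if it is not there yet */
        return (ReplSlotStatsEntry *) hash_search(replslot_stats, slotname,
                                                  HASH_ENTER, &found);
    }

With a table like this, cleanup can simply walk the hashtable and drop
entries whose slots no longer exist, regardless of max_replication_slots.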
The Truncate operation acquires an exclusive lock on the target relation
and indexes. It then waits for logical replication of the operation to
finish at commit. Because pgoutput was acquiring a shared lock on the
target index to get index attributes while sending the changes for the
Truncate operation, this led to a deadlock.
Actually, we don't need to acquire a lock on the target index, as we build
the cache entry using a historic snapshot and all the later changes are
absorbed while decoding WAL. So, we wrote a special-purpose function for
logical replication to get a bitmap of replica identity attribute numbers,
which obtains that information without locking the target index.
We decided not to backpatch this as there doesn't seem to be any field
complaint about this issue since it was introduced in commit 5dfd1e5a in
v11.
Reported-by: Haiying Tang
Author: Takamichi Osumi, test case by Li Japin
Reviewed-by: Amit Kapila, Ajin Cherian
Discussion: https://postgr.es/m/OS0PR01MB6113C2499C7DC70EE55ADB82FB759@OS0PR01MB6113.jpnprd01.prod.outlook.com
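
Conceptually, a pgoutput-side caller consumes such a bitmap as below. This
is a hedged sketch: RelationGetIdentityKeyBitmap() is the style of helper
described above, the wrapper function is invented, and attribute numbers in
these bitmaps are conventionally offset by FirstLowInvalidHeapAttributeNumber:

    /* Sketch only: check replica identity membership without locking
     * the identity index. */
    #include "postgres.h"
    #include "access/sysattr.h"
    #include "nodes/bitmapset.h"
    #include "utils/rel.h"
    #include "utils/relcache.h"

    static bool
    column_is_in_replica_identity(Relation rel, AttrNumber attno)
    {
        Bitmapset  *idattrs = RelationGetIdentityKeyBitmap(rel);

        return bms_is_member(attno - FirstLowInvalidHeapAttributeNumber,
                             idattrs);
    }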
New keywords per 71f4c8c6f74b.
Discussion: https://postgr.es/m/20210422204035.GA25929@alvherre.pgsql
Commit 2ec993a7c, which added triggers on views, modified the rewriter
to add dummy entries like "SET x = x" for all columns that weren't
actually being updated by the user in any UPDATE directed at a view.
That was needed at the time to produce a complete "NEW" row to pass
to the trigger. Later it was found to cause problems for ordinary
updatable views, so commit cab5dc5da restricted it to happen only for
trigger-updatable views. But in the wake of commit 86dc90056, we
really don't need it at all. nodeModifyTable.c populates the trigger
"OLD" row from the whole-row variable that is generated for the view,
and then it computes the "NEW" row using that old row and the UPDATE
targetlist. So there is no need for the UPDATE tlist to have dummy
entries, any more than it needs them for regular tables or other
types of views.
(The comments for rewriteTargetListIU suggest that we must do this
for correct expansion of NEW references in rules, but I now think
that that was just lazy comment editing in 2ec993a7c. If we didn't
need it for rules on views before there were triggers, we don't need
it after that.)
This essentially propagates 86dc90056's decision that we don't need
dummy column updates into the view case. Aside from making the
different cases more uniform and hence possibly forestalling future
bugs, it ought to save a little bit of rewriter/planner effort.
Discussion: https://postgr.es/m/2181213.1619397634@sss.pgh.pa.us
The verification of permissions doesn't succeed on Cygwin, because the
required feature is not implemented for Cygwin at the moment. So skip
this part of the test, like MinGW already does.
The tuple routing logic used by a logical replication worker can fire
triggers on relations that are part of a partition tree, but there was no
test coverage in this area. The existing script 003_constraints.pl included
some coverage, but nothing for the case where a tuple is applied across
partitioned tables on a subscriber.
Author: Amit Langote
Discussion: https://postgr.es/m/OS0PR01MB611383FA0FE92EB9DE21946AFB769@OS0PR01MB6113.jpnprd01.prod.outlook.com
We send the prepare for concurrently aborted xacts so that, when the
rollback prepared is later decoded and sent, the downstream is able to roll
back such an xact. For the 'streaming' case (when we send changes for
in-progress transactions), we were sending the prepare twice when a
concurrent abort was detected.
Author: Peter Smith
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/f82133c6-6055-b400-7922-97dae9f2b50b@enterprisedb.com
This was already unused in the initial commit
257836a75585934cc05ed7a80bccf8190d41e056. Apparently, it was used in
an earlier proposed patch version.
Author: Peter Smith
Discussion: https://postgr.es/m/CAHut+PtvzuYY0zu=dVRK_WVz5WGos1+otZWgEWqjha1ncoSRag@mail.gmail.com
This function's behavior for UPDATE on a trigger-updatable view was
justified by analogy to what preptlist.c used to do for UPDATE on
regular tables. Since preptlist.c hasn't done that since 86dc90056,
that argument is no longer sensible, let alone convincing. I think
we do still need it to act that way, so update the comment to explain
why.
This will install amcheck in the database if it is not present. The default
schema for the extension is pg_catalog, but this can be overridden by
providing a value for the option.
Mark Dilger, slightly editorialized by me.
Discussion: https://postgr.es/m/bdc0f7c2-09e3-ee57-8471-569dfb509234@dunslane.net
As well as 'devel', version_stamp.pl provides for 'alphaN', 'betaN', and
'rcN', so teach PostgresVersion about those.
Also stash the version string instead of trying to reconstruct it during
stringification.
Discussion: https://postgr.es/m/YIHlw5nSgAHs4dK1@paquier.xyz
Commit 1375422 refactored this area of the executor code, and some comments
went out of sync.
Author: Yukun Wang
Reviewed-by: Amul Sul
Discussion: https://postgr.es/m/OS0PR01MB60033394FCAEF79B98F078F5B4459@OS0PR01MB6003.jpnprd01.prod.outlook.com
Commit 6f6f284 introduced a specific macro to make printf()-ing of LSNs
easier. This takes care of what looks like the remaining code paths
that did not get the call.
Author: Michael Paquier
Reviewed-by: Kyotaro Horiguchi, Tom Lane
Discussion: https://postgr.es/m/YIJS9x6K8ruizN7j@paquier.xyz
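
For reference, the idiom looks like this; LSN_FORMAT_ARGS is the macro from
6f6f284, while the surrounding function and message are invented for
illustration:

    #include "postgres.h"
    #include "access/xlogdefs.h"

    /* Print an LSN with the dedicated macro instead of open-coding the
     * two uint32 halves of the 64-bit value. */
    static void
    report_lsn(XLogRecPtr lsn)
    {
        elog(LOG, "current LSN is %X/%X", LSN_FORMAT_ARGS(lsn));
    }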
Instead, put them in via a format placeholder. This reduces the
number of distinct translatable messages and also reduces the chances
of typos during translation. We already did this for the system call
arguments in a number of cases, so this is just the same thing taken a
bit further.
Discussion: https://www.postgresql.org/message-id/flat/92d6f545-5102-65d8-3c87-489f71ea0a37%40enterprisedb.com
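
A sketch of the pattern being extended; the call site and message below are
invented for illustration, but the placeholder style matches what is
described above:

    #include "postgres.h"
    #include <unistd.h>

    /* Report a failed system call without embedding its name in the
     * translatable message string: translators see a single message, and
     * the name is supplied through the %s placeholder. */
    static void
    truncate_file(int fd, off_t len)
    {
        if (ftruncate(fd, len) < 0)
            ereport(ERROR,
                    (errcode_for_file_access(),
                     errmsg("%s() failed: %m", "ftruncate")));
    }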
Some code treated this as unsigned, but it's a signed int.
These functions shouldn't receive null arguments: multirange_constructor0()
doesn't have any arguments while multirange_constructor2() has a single array
argument, which is never null.
But mark them strict anyway for the sake of uniformity.
Also, make checks for null arguments use elog() instead of ereport(), as
these errors should normally never be thrown. And adjust the corresponding
comments.
Catversion is bumped.
Reported-by: Peter Eisentraut
Discussion: https://postgr.es/m/0f783a96-8d67-9e71-996b-f34a7352eeef%40enterprisedb.com
Commit bbe0a81db6 introduced the "INCLUDING COMPRESSION" option in the
CREATE TABLE command, but TableLikeOption in gram.y and parsenodes.h didn't
list this new option in alphabetical order with the rest.
Author: Fujii Masao
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/YHerAixOhfR1ryXa@paquier.xyz
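
An abridged sketch of the sort of bitmask enum involved, in the style of
TableLikeOption in parsenodes.h (the exact member list here is an
assumption):

    /* Abridged sketch: options kept in alphabetical order, one bit each. */
    typedef enum TableLikeOption
    {
        CREATE_TABLE_LIKE_COMMENTS = 1 << 0,
        CREATE_TABLE_LIKE_COMPRESSION = 1 << 1,   /* new option, slotted in */
        CREATE_TABLE_LIKE_CONSTRAINTS = 1 << 2,
        CREATE_TABLE_LIKE_DEFAULTS = 1 << 3,
        /* ... remaining options continue alphabetically ... */
        CREATE_TABLE_LIKE_ALL = 0x7FFFFFFF
    } TableLikeOption;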
This was already mostly done, but some error messages were printed the
long way.
Oversight in 2a0faed.
Author: Hou Zhijie
Discussion: https://postgr.es/m/OS0PR01MB5716405E2464D85E6DB6DC0794469@OS0PR01MB5716.jpnprd01.prod.outlook.com
%lld with a (long long) cast, or %llu with (unsigned long long), is better
suited. This is similar to 3286065.
Author: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20210421.200000.1462448394029407895.horikyota.ntt@gmail.com
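
The idiom in question, as a minimal sketch (the function and message are
invented for illustration):

    #include "postgres.h"

    /* Print a 64-bit value portably by casting to long long, rather than
     * relying on a platform-specific int64 format string. */
    static void
    report_rows(int64 nrows)
    {
        elog(LOG, "processed %lld rows", (long long) nrows);
    }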
This is cleanup for commit 27e1f1456:
* ExecAppendAsyncEventWait(), which was modified a bit further by commit
a8af856d3, duplicated the same nevents calculation. Simplify the code
a little bit to avoid the duplication. Update comments there.
* Add an assertion to ExecAppendAsyncRequest().
* Update a comment about merging the async_capable options from input
relations in merge_fdw_options(), per complaint from Kyotaro Horiguchi.
* Add a comment for fetch_more_data_begin().
Author: Etsuro Fujita
Discussion: https://postgr.es/m/CAPmGK1637W30Wx3MnrReewhafn6F_0J76mrJGoFXFnpPq4QfvA%40mail.gmail.com
Adopt a more consistent policy about what slot-type-specific
getsysattr functions should do when system attributes are not
available. To wit, they should all throw the same user-oriented
error, rather than variously crashing or emitting developer-oriented
messages.
This closes an identifiable problem in commits a71cfc56b and
3fb93103a (in v13 and v12), so back-patch into those branches,
along with a test case to try to ensure we don't break it again.
It is not known that any of the former crash cases are reachable
in HEAD, but this seems like a good safety improvement in any case.
Discussion: https://postgr.es/m/141051591267657@mail.yandex.ru
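
A hedged sketch of the standardized behavior; the callback shape and error
text below are illustrative, not the actual slot code:

    #include "postgres.h"

    /* A slot type with no system attributes to offer should raise a
     * consistent, user-facing error instead of crashing or emitting a
     * developer-oriented message. */
    static Datum
    example_getsysattr(int attnum, bool *isnull)
    {
        ereport(ERROR,
                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                 errmsg("cannot retrieve a system column in this context")));
        *isnull = true;         /* keep compilers quiet; not reached */
        return (Datum) 0;
    }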
Have interested callers of find_inheritance_children set the
detached_exist value to false prior to calling it, so that that routine
only has to set it true in the rare cases where it is necessary. Don't
touch it otherwise.
Per buildfarm member thorntail (which reported a UBSan failure here).
Per gripe from Alvaro Herrera.
During queries coming from ri_triggers.c, we need to omit partitions
that are marked pending detach -- otherwise, the RI query is tricked
into allowing a row into the referencing table whose corresponding row
is in the detached partition. Which is bogus: once the detach operation
completes, the row becomes an orphan.
However, the code was not doing that in repeatable-read transactions,
because relcache kept a copy of the partition descriptor that included
the partition, and used it in the RI query. This commit changes the
partdesc cache code to only keep descriptors that aren't dependent on
a snapshot (namely: those where no detached partitions exist, and those
where detached partitions are included). When a partdesc-without-
detached-partitions is requested, we create one afresh each time; also,
those partdescs are stored in PortalContext instead of
CacheMemoryContext.
find_inheritance_children gets a new output *detached_exist boolean,
which indicates whether any partition marked pending-detach is found.
Its "include_detached" input flag is changed to "omit_detached", because
that name captures the desired semantics more naturally.
CreatePartitionDirectory() and RelationGetPartitionDesc() arguments are
identically renamed.
This was noticed because a buildfarm member that runs with relcache
clobbering, which would not keep the improperly cached partdesc, broke
one test, which led us to realize that the expected output of that test
was bogus. This commit also corrects that expected output.
Author: Amit Langote <amitlangote09@gmail.com>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/3269784.1617215412@sss.pgh.pa.us
A new PostgresVersion object type is created; PostgresNode builds one from
the output of `pg_config --version` and stores the result in the
PostgresNode object. This object can be compared to other
PostgresVersion objects, or to a number or string.
PostgresNode is currently believed to be compatible with versions down
to release 12, so PostgresNode will issue a warning if used with a
version prior to that.
No attempt has been made to deal with incompatibilities in older
versions - that remains work to be undertaken in a subsequent
development cycle.
Based on code from Mark Dilger and Jehan-Guillaume de Rorthais.
Discussion: https://postgr.es/m/a80421c0-3d7e-def1-bcfe-24777f15e344@dunslane.net
Creating a trigger on a relation targeted by an apply operation would cause
a relation leak once the change gets committed, as the executor would miss
that the relation needs to be closed beforehand. This issue got introduced
with the refactoring done in 1375422c, where it became necessary to track
relations within es_opened_result_relations to make sure that they are
closed.
We have discussed using ExecInitResultRelation() coupled with
ExecCloseResultRelations() for the relations in need of tracking by the
apply operations in the subscribers, which would greatly simplify the
opening and closing of indexes, but this requires a larger rework and
reorganization of the worker code, particularly for the tuple routing
part. And that's not really welcome post feature freeze. So, for now,
settle for the same solution as TRUNCATE, which is to fill in
es_opened_result_relations with the opened relation, to make sure that
ExecGetTriggerResultRel() finds them and that they get closed.
The code is lightly refactored so that a relation is not registered three
times for each DML code path, making the whole thing a bit easier to follow.
Reported-by: Tang Haiying, Shi Yu, Hou Zhijie
Author: Amit Langote, Masahiko Sawada, Hou Zhijie
Reviewed-by: Amit Kapila, Michael Paquier
Discussion: https://postgr.es/m/OS0PR01MB611383FA0FE92EB9DE21946AFB769@OS0PR01MB6113.jpnprd01.prod.outlook.com
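
A minimal sketch of that registration pattern; es_opened_result_relations
and ExecGetTriggerResultRel() are the names used above, while the wrapper
function here is invented:

    #include "postgres.h"
    #include "nodes/execnodes.h"
    #include "nodes/pg_list.h"

    /* Register an already-initialized ResultRelInfo so that
     * ExecGetTriggerResultRel() can find it and the executor closes the
     * relation when the apply operation finishes. */
    static void
    register_apply_result_rel(EState *estate, ResultRelInfo *resultRelInfo)
    {
        estate->es_opened_result_relations =
            lappend(estate->es_opened_result_relations, resultRelInfo);
    }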
Per observation from Tom Lane.
Discussion: https://postgr.es/m/1901125.1617904665@sss.pgh.pa.us
On ALTER TABLE .. DETACH CONCURRENTLY, we add a new table constraint
that duplicates the partition constraint. But if the partition already
has another constraint that implies that one, then that's unnecessary.
We were already avoiding the addition of a duplicate constraint if there
was an exact 'equal' match -- this just improves the quality of the check.
Author: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20210410184226.GY6592@telsasoft.com
This has been found to cause hangs where TCP usage is forced.
Alexey Kodratov
Discussion: https://postgr.es/m/82e271a9a11928337fcb5b5e57b423c0@postgrespro.ru
Backpatch to all live branches
Since pg_depend can contain duplicate entries, we need to eliminate
those in information schema views that build on pg_depend, using
DISTINCT. Some of the older views already did that correctly, but
some of the more recently added ones didn't. (In some of these views,
it might not be possible to reproduce the issue because of how the
implementation happens to deduplicate dependencies while recording
them, but it seems better to keep this consistent in all cases.)
Should be %u rather than %d.
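
For instance (an invented example; Oid is an unsigned 32-bit type, so %u is
the correct conversion):

    #include "postgres.h"

    /* %d would misprint OIDs above INT32_MAX; use %u for unsigned values. */
    static void
    report_relation(Oid relid)
    {
        elog(DEBUG1, "processing relation with OID %u", relid);
    }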
Use %lld and cast to long long int instead.
This compatibility was added in 45b9805, but psql forgot the call.
Author: Wei Wang
Reviewed-by: Aleksander Alekseev
Discussion: https://postgr.es/m/OS3PR01MB6275935F62E161BCD393D6559E489@OS3PR01MB6275.jpnprd01.prod.outlook.com
While tracking down the bug fixed in the preceding commit, I got quite
annoyed by the low quality of spg_desc's output. Add missing fields,
try to make the formatting consistent.
Commit f003d9f87 left this macro with inadequate (or, one could say,
too much) parenthesization. Which was catastrophic to the correctness
of calls such as "if (!XLogRecHasBlockRef(record, 1)) ...". There
are only a few of those, which perhaps explains why we didn't notice
immediately (with our general weakness of WAL replay testing being
another factor). I found it by debugging intermittent replay failures
like
2021-04-08 14:33:30.191 EDT [29463] PANIC: failed to locate backup block with ID 1
2021-04-08 14:33:30.191 EDT [29463] CONTEXT: WAL redo at 0/95D3438 for SPGist/ADD_NODE: off 1; blkref #0: rel 1663/16384/25998, blk 1
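
To illustrate the general hazard with a made-up macro (this is not the
actual XLogRecHasBlockRef definition): without an outermost pair of
parentheses, a logical-not applied to the macro call binds to only the
first operand of the expansion.

    #include <stdbool.h>
    #include <stdio.h>

    /* Made-up structures and macros, purely to show the hazard. */
    typedef struct
    {
        int     max_block_id;
        bool    in_use[2];
    } FakeRecord;

    #define HAS_BLOCK_BAD(rec, id) \
        ((rec)->max_block_id >= (id)) && ((rec)->in_use[id])

    #define HAS_BLOCK_GOOD(rec, id) \
        (((rec)->max_block_id >= (id)) && ((rec)->in_use[id]))

    int
    main(void)
    {
        FakeRecord  rec = {.max_block_id = 1, .in_use = {true, false}};

        /* Intended meaning: "block 1 is not referenced" -> true here. */
        printf("good: %d\n", !HAS_BLOCK_GOOD(&rec, 1));     /* prints 1 */
        /* The bad macro expands to (!A) && B, which yields false. */
        printf("bad:  %d\n", !HAS_BLOCK_BAD(&rec, 1));      /* prints 0 */
        return 0;
    }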
log_statement is issued before query_id can be computed, so properly
clear the value, and document the interaction.
Reported-by: Fujii Masao, Michael Paquier
Discussion: https://postgr.es/m/YHPkU8hFi4no4NSw@paquier.xyz
Author: Julien Rouhaud
Previously, it was pg_stat_activity.queryid to match the
pg_stat_statements queryid column. This is an adjustment to patch
4f0b0966c8. This also adjusts some of the internal function calls to
match. Catversion bumped.
Reported-by: Álvaro Herrera, Julien Rouhaud
Discussion: https://postgr.es/m/20210408032704.GA7498@alvherre.pgsql
I didn't particularly like this function name, as it fails to
express what's going on. Also, returning the sort expression
alone isn't too helpful --- typically, a caller would also
need some other fields of the EquivalenceMember. But the
sole caller really only needs a bool result, so let's make
it "bool relation_can_be_sorted_early()".
Discussion: https://postgr.es/m/91f3ec99-85a4-fa55-ea74-33f85a5c651f@swarm64.com
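
For reference, the renamed helper ends up with a boolean-returning shape
along these lines (a sketch; the exact parameter list is an assumption):

    #include "postgres.h"
    #include "nodes/pathnodes.h"

    /* Sketch of the renamed interface: answer "can this relation provide
     * sorted output for this equivalence class early?" rather than handing
     * back the sort expression itself. */
    extern bool relation_can_be_sorted_early(PlannerInfo *root,
                                             RelOptInfo *rel,
                                             EquivalenceClass *ec,
                                             bool require_parallel_safe);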
An oversight introduced by the incremental-sort patches caused
"could not find pathkey item to sort" errors in some situations
where a sort key involves an aggregate or window function.
The basic problem here is that find_em_expr_usable_for_sorting_rel
isn't properly modeling what prepare_sort_from_pathkeys will do
later. Rather than hoping we can keep those functions in sync,
let's refactor so that they actually share the code for
identifying a suitable sort expression.
With this refactoring, tlist.c's tlist_member_ignore_relabel
is unused. I removed it in HEAD but left it in place in v13,
in case any extensions are using it.
Per report from Luc Vlaming. Back-patch to v13 where the
problem arose.
James Coleman and Tom Lane
Discussion: https://postgr.es/m/91f3ec99-85a4-fa55-ea74-33f85a5c651f@swarm64.com
Commit b34ca595ab provided for installation-aware instances of
PostgresNode. However, it turns out that IPC::Run works against this by
caching the path to a binary and not consulting the path again, even if
it has changed. We work around this by calling Postgres binaries with
the installed path rather than just a bare name to be looked up in the
environment path, if there is an installed path. For the common case
where there is no installed path we continue to use the bare command
name.
Diagnosis and solution from Mark Dilger
Discussion: https://postgr.es/m/E8F512F8-B4D6-4514-BA8D-2E671439DA92@enterprisedb.com
Author: Julien Rouhaud
Backpatch-through: 11
Discussion: https://postgr.es/m/20210420121659.odjueyd4rpilorn5@nol
Document VACUUM's soft assumption that any LP_DEAD items encountered
during pruning will become LP_UNUSED items before VACUUM finishes up.
This is integral to the accounting used by VACUUM to generate its final
report on the table to the stats collector. It also affects how VACUUM
determines which heap pages are truncatable. In both cases VACUUM is
concerned with the likely contents of the page in the near future, not
the current contents of the page.
This state of affairs created the false impression that VACUUM's dead
tuple accounting had significant differences from the similar accounting used
during ANALYZE. There were and are no substantive differences, at least
when the soft assumption completely works out. This is far clearer now.
Also document cases where things don't quite work out for VACUUM's dead
tuple accounting. It's possible that a significant number of LP_DEAD
items will be left behind by VACUUM, and won't be recorded as remaining
dead tuples in VACUUM's statistics collector report. This behavior
dates back to commit a96c41fe, which taught VACUUM to run without index
and heap vacuuming at the user's request. The failsafe mechanism added
to VACUUM more recently by commit 1e55e7d1 takes the same approach to
dead tuple accounting.
Reported-By: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/CAH2-Wz=Jmtu18PrsYq3EvvZJGOmZqSO2u3bvKpx9xJa5uhNp=Q@mail.gmail.com
Should be signed, not unsigned.