When a constraint gets renamed, it may have a target relation
associated with it (domain constraints, for example, don't have one).
Failing to invalidate the target relation's cache entry when issuing
the rename can cause subsequent commands that consult the relation
cache to still see the old constraint name, leading to various
failures. One pattern spotted was CREATE TABLE LIKE after a constraint
rename.
Reported-by: Stuart <sfbarbee@gmail.com>
Author: Amit Langote
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/2047094.V130LYfLq4@station53.ousa.org
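
A minimal sketch of the sort of fix this implies, assuming the
invalidation is issued right after the pg_constraint row is updated
(the helper name is illustrative; CacheInvalidateRelcacheByRelid is the
existing inval.c entry point):

    #include "postgres.h"
    #include "utils/inval.h"

    /*
     * Sketch: after updating the pg_constraint row with the new name,
     * invalidate the relcache entry of the constrained relation (if any)
     * so that later commands see the new constraint name.
     */
    static void
    invalidate_renamed_constraint_rel(Oid conrelid)
    {
        /* domain constraints have no associated relation */
        if (OidIsValid(conrelid))
            CacheInvalidateRelcacheByRelid(conrelid);
    }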
At least as far back as the 2008 spec, POSIX has defined strsignal(3)
for looking up descriptive strings for signal numbers. We hadn't gotten
the word though, and were still using the crufty old sys_siglist array,
which is in no standard even though most Unixen provide it.
Aside from not being formally standards-compliant, this was just plain
ugly because it involved #ifdef's at every place using the code.
To eliminate the #ifdef's, create a portability function pg_strsignal,
which wraps strsignal(3) if available and otherwise falls back to
sys_siglist[] if available. The set of Unixen with neither API is
probably empty these days, but on any platform with neither, you'll
just get "unrecognized signal". All extant callers print the numeric
signal number too, so no need to work harder than that.
Along the way, upgrade pg_basebackup's child-error-exit reporting
to match the rest of the system.
Discussion: https://postgr.es/m/25758.1544983503@sss.pgh.pa.us
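
A sketch of such a wrapper, following the fallback chain described
above (the configure symbols and the exact bounds checks are
assumptions):

    #include "c.h"
    #include <signal.h>

    /*
     * Sketch: prefer POSIX strsignal(3), fall back to sys_siglist[],
     * else return a generic string; callers print the number anyway.
     */
    const char *
    pg_strsignal(int signum)
    {
        const char *result;

    #if defined(HAVE_STRSIGNAL)
        result = strsignal(signum);
        if (result == NULL)
            result = "unrecognized signal";
    #elif defined(HAVE_DECL_SYS_SIGLIST) && HAVE_DECL_SYS_SIGLIST
        if (signum > 0 && signum < NSIG && sys_siglist[signum] != NULL)
            result = sys_siglist[signum];
        else
            result = "unrecognized signal";
    #else
        result = "unrecognized signal";
    #endif
        return result;
    }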
Commit ffa4cbd62 added logic to detect SIGPIPE failure of a COPY child
process, but it only worked correctly if the SIGPIPE occurred in the
immediate child process. Depending on the shell in use and the
complexity of the shell command string, we might instead get back
an exit code of 128 + SIGPIPE, representing a shell error exit
reporting SIGPIPE in the child process.
We could just hack up ClosePipeToProgram() to add the extra case,
but it seems like this is a fairly general issue deserving a more
general and better-documented solution. I chose to add a couple
of functions in src/common/wait_error.c, which is a natural place
to know about wait-result encodings, that will test for either a
specific child-process signal type or any child-process signal failure.
Then, adjust other places that were doing ad-hoc tests of this type
to use the common functions.
In RestoreArchivedFile, this fixes a race condition affecting whether
the process will report an error or just silently proc_exit(1): before,
that depended on whether the intermediate shell got SIGTERM'd itself
or reported a child process failing on SIGTERM.
Like the previous patch, back-patch to v10; we could go further
but there seems no real need to.
Per report from Erik Rijkers.
Discussion: https://postgr.es/m/f3683f87ab1701bea5d86a7742b22432@xs4all.nl
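
A sketch of the kind of helpers this describes, testing a wait(2)
result for a specific signal (reported directly or as a shell's 128+N
exit code) or for any signal failure; the exact signatures are
assumptions:

    #include <stdbool.h>
    #include <sys/wait.h>

    /*
     * Sketch: true if the child died on the given signal, either
     * directly or via an intermediate shell exiting with 128 + signum.
     */
    bool
    wait_result_is_signal(int exit_status, int signum)
    {
        if (WIFSIGNALED(exit_status) && WTERMSIG(exit_status) == signum)
            return true;
        if (WIFEXITED(exit_status) &&
            WEXITSTATUS(exit_status) == 128 + signum)
            return true;
        return false;
    }

    /* Sketch: true if the child died on any signal. */
    bool
    wait_result_is_any_signal(int exit_status, bool include_command_not_found)
    {
        if (WIFSIGNALED(exit_status))
            return true;
        if (WIFEXITED(exit_status) &&
            WEXITSTATUS(exit_status) > (include_command_not_found ? 125 : 128))
            return true;
        return false;
    }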
When we first put in collations support, we basically punted on teaching
pg_statistic, ANALYZE, and the planner selectivity functions about that.
They've just used DEFAULT_COLLATION_OID independently of the actual
collation of the data. It's time to improve that, so:
* Add columns to pg_statistic that record the specific collation associated
with each statistics slot.
* Teach ANALYZE to use the column's actual collation when comparing values
for statistical purposes, and record this in the appropriate slot. (Note
that type-specific typanalyze functions are now expected to fill
stats->stacoll with the appropriate collation, too.)
* Teach assorted selectivity functions to use the actual collation of
the stats they are looking at, instead of just assuming it's
DEFAULT_COLLATION_OID.
This should give noticeably better results in selectivity estimates for
columns with nondefault collations, at least for query clauses that use
that same collation (which would be the default behavior in most cases).
It's still true that comparisons with explicit COLLATE clauses different
from the stored data's collation won't be well-estimated, but that's no
worse than before. Also, this patch does make the first step towards
doing better with that, which is that it's now theoretically possible to
collect stats for a collation other than the column's own collation.
Patch by me; thanks to Peter Eisentraut for review.
Discussion: https://postgr.es/m/14706.1544630227@sss.pgh.pa.us
The cache lookup routines for foreign-data wrappers and foreign servers
are extended with an extra argument to handle a set of flags. The only
flag available now indicates whether a missing object should result in
an error or not; the set is designed to be extended as needed. The new
routines are added to the existing set of user-visible FDW APIs and
documented accordingly. They will be used by future patches to improve
the SQL interface for object addresses.
Author: Michael Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/CAB7nPqSZxrSmdHK-rny7z8mi=EAFXJ5J-0RbzDw6aus=wB5azQ@mail.gmail.com
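
Usage would look roughly like this sketch (the *Extended name and the
missing-OK flag follow the pattern described above; treat the exact
identifiers as assumptions):

    #include "postgres.h"
    #include "foreign/foreign.h"

    /*
     * Sketch: look up a foreign server while tolerating a concurrent
     * drop, instead of erroring out on a missing object.
     */
    static void
    report_server(Oid serverid)
    {
        ForeignServer *server;

        server = GetForeignServerExtended(serverid, FSV_MISSING_OK);
        if (server == NULL)
            elog(NOTICE, "foreign server %u does not exist", serverid);
        else
            elog(NOTICE, "foreign server: %s", server->servername);
    }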
The changes I made in 578b229718e assigned oids below
FirstBootstrapObjectId to objects in include/catalog/*.dat files that
did not have an oid assigned, starting at the max explicitly assigned
oid. Tom criticized that for mainly two reasons:
1) It's not clear which values are manually and which automatically
assigned.
2) The space below FirstBootstrapObjectId gets pretty crowded, and
some PostgreSQL forks have used oids >= 9000 for their own objects,
to avoid conflicts.
Thus create a new range for objects not assigned explicit oids, but
assigned by genbki.pl. For now 1-9999 is for explicitly assigned oids,
FirstGenbkiObjectId (10000) to FirstBootstrapObjectId (12000) - 1 is for
genbki.pl-assigned oids, and the range up to FirstNormalObjectId (16384)
is for oids assigned during bootstrap. It's possible that we'll have to
adjust these boundaries, but there's some headroom for now.
Add a note suggesting that oids in forks should be assigned in the
9000-9999 range.
Catversion bump for obvious reasons.
Per complaint from Tom Lane.
Author: Andres Freund
Discussion: https://postgr.es/m/16845.1544393682@sss.pgh.pa.us
The idea here is to not call recordDependencyOn for the default collation,
since we know that's pinned. But what the code actually did was to record
the partition key's dependency on the opclass twice, instead.
Evidently introduced by sloppy coding in commit 2186b608b. Back-patch
to v10 where that came in.
If a domain has no constraints, then CoerceToDomain doesn't really do
anything and can be simplified to a RelabelType. This not only
eliminates cycles at execution, but allows the planner to optimize better
(for instance, match the coerced expression to an index on the underlying
column). However, we do have to support invalidating the plan later if
a constraint gets added to the domain. That's comparable to the case of
a change to a SQL function that had been inlined into a plan, so all the
necessary logic already exists for plans depending on functions. We
need only duplicate or share that logic for domains.
ALTER DOMAIN ADD/DROP CONSTRAINT need to be taught to send out sinval
messages for the domain's pg_type entry, since those operations don't
update that row. (ALTER DOMAIN SET/DROP NOT NULL do update that row,
so no code change is needed for them.)
Testing this revealed what's really a pre-existing bug in plpgsql:
it caches the SQL-expression-tree expansion of type coercions and
had no provision for invalidating entries in that cache. Up to now
that was only a problem if such an expression had inlined a SQL
function that got changed, which is unlikely though not impossible.
But failing to track changes of domain constraints breaks an existing
regression test case and would likely cause practical problems too.
We could fix that locally in plpgsql, but what seems like a better
idea is to build some generic infrastructure in plancache.c to store
standalone expressions and track invalidation events for them.
(It's tempting to wonder whether plpgsql's "simple expression" stuff
could use this code with lower overhead than its current use of the
heavyweight plancache APIs. But I've left that idea for later.)
Other stuff fixed in passing:
* Allow estimate_expression_value() to drop CoerceToDomain
unconditionally, effectively assuming that the coercion will succeed.
This will improve planner selectivity estimates for cases involving
estimatable expressions that are coerced to domains. We could have
done this independently of everything else here, but there wasn't
previously any need for eval_const_expressions_mutator to know about
CoerceToDomain at all.
* Use a dlist for plancache.c's list of cached plans, rather than a
manually threaded singly-linked list. That eliminates a potential
performance problem in DropCachedPlan.
* Fix a couple of inconsistencies in typecmds.c about whether
operations on domains drop RowExclusiveLock on pg_type. Our common
practice is that DDL operations do drop catalog locks, so standardize
on that choice.
Discussion: https://postgr.es/m/19958.1544122124@sss.pgh.pa.us
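
The expression-cache infrastructure suggests usage along these lines (a
sketch; the function names and the is_valid field are assumptions based
on the description above):

    #include "postgres.h"
    #include "utils/plancache.h"

    /*
     * Sketch: keep one compiled expression, rebuilding it whenever
     * sinval traffic (say, ALTER DOMAIN ADD CONSTRAINT) invalidated it.
     */
    static CachedExpression *cached = NULL;

    static Node *
    get_cached_expr(Node *raw_expr)
    {
        if (cached != NULL && !cached->is_valid)
        {
            FreeCachedExpression(cached);
            cached = NULL;
        }
        if (cached == NULL)
            cached = GetCachedExpression(raw_expr);
        return cached->expr;
    }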
When GIN vacuum deletes a posting tree page, it assumes that no concurrent
searchers can access it, thanks to ginStepRight() locking two pages at once.
However, since 9.4 searches can skip parts of posting trees while descending
from the root. That leads to the risk that a page is deleted and reclaimed
before a concurrent search can access it.
This commit prevents that risk by waiting for every transaction that might
be waiting to reference this page to finish. For binary compatibility we
can't change GinPageOpaqueData to store the corresponding transaction id;
instead we reuse the page header's pd_prune_xid field, which is unused in
index pages.
Discussion: https://postgr.es/m/31a702a.14dd.166c1366ac1.Coremail.chjischj%40163.com
Author: Andrey Borodin, Alexander Korotkov
Reviewed-by: Alexander Korotkov
Backpatch-through: 9.4
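
In code form, the reuse amounts to a pair of accessors over the page
header field, roughly as in this sketch (the macro names are
assumptions):

    #include "postgres.h"
    #include "storage/bufpage.h"

    /*
     * Sketch: store the xid of the deleting transaction in the
     * otherwise-unused pd_prune_xid of an index page; vacuum may only
     * reclaim the page once that xid is old enough that no concurrent
     * search can still be on its way to the page.
     */
    #define GinPageSetDeleteXid(page, xid) \
        (((PageHeader) (page))->pd_prune_xid = (xid))
    #define GinPageGetDeleteXid(page) \
        (((PageHeader) (page))->pd_prune_xid)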
On standby, ginRedoDeletePage() can run concurrently with read-only queries.
Those queries can traverse the posting tree in two ways.
1) Using rightlinks via ginStepRight(), which locks the next page before
unlocking its left sibling.
2) Using downlinks via ginFindLeafPage(), which locks at most one page at a
time.
The original lock order was: page, parent, left sibling. That lock order can
deadlock with ginStepRight(). To prevent the deadlock, this commit changes
the lock order to: left sibling, page, parent. Note that the position of the
parent in the locking order seems insignificant, because we lock only one
page at a time while traversing downlinks.
Reported-by: Chen Huajun
Diagnosed-by: Chen Huajun, Peter Geoghegan, Andrey Borodin
Discussion: https://postgr.es/m/31a702a.14dd.166c1366ac1.Coremail.chjischj%40163.com
Author: Alexander Korotkov
Backpatch-through: 9.4
Before 218f51584d5, if a posting tree page was about to be deleted, the whole
posting tree was locked by LockBufferForCleanup() on the root, preventing
all concurrent inserts. 218f51584d5 reduced the locking to the subtree
containing the page to be deleted. However, due to concurrent parent splits,
an inserter doesn't always hold pins on all the pages constituting the path
from the root to the target leaf page. That could cause a deadlock between
the GIN vacuum process and a GIN inserter, and we didn't find a non-invasive
way to fix this.
This commit reverts the VACUUM behavior to locking the whole posting tree
before deleting any page. However, we keep another useful change from
218f51584d5: the tree is locked only if there are pages to be deleted.
Reported-by: Chen Huajun
Diagnosed-by: Chen Huajun, Andrey Borodin, Peter Geoghegan
Discussion: https://postgr.es/m/31a702a.14dd.166c1366ac1.Coremail.chjischj%40163.com
Author: Alexander Korotkov, based on ideas from Andrey Borodin and Peter Geoghegan
Reviewed-by: Andrey Borodin
Backpatch-through: 10
postgres_fdw's postgresGetForeignPlan() assumes without checking that the
outer_plan it's given for a join relation must have a NestLoop, MergeJoin,
or HashJoin node at the top. That's been wrong at least since commit
4bbf6edfb (which could cause insertion of a Sort node on top) and it seems
like a pretty unsafe thing to Just Assume even without that.
Through blind good fortune, this doesn't seem to have any worse
consequences today than strange EXPLAIN output, but it's clearly trouble
waiting to happen.
To fix, test the node type explicitly before touching Join-specific
fields, and avoid jamming the new tlist into a node type that can't
do projection. Export a new support function from createplan.c
to avoid building low-level knowledge about the latter into FDWs.
Back-patch to 9.6 where the faulty coding was added. Note that the
associated regression test cases don't show any changes before v11,
apparently because the tests back-patched with 4bbf6edfb don't actually
exercise the problem case before then (there's no top-level Sort
in those plans).
Discussion: https://postgr.es/m/8946.1544644803@sss.pgh.pa.us
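
A sketch of the fixed pattern, with change_plan_targetlist standing in
for the new createplan.c support function (its exact signature is an
assumption):

    #include "postgres.h"
    #include "nodes/plannodes.h"
    #include "optimizer/planmain.h"

    /*
     * Sketch: only treat outer_plan as a Join after checking its node
     * tag, and let the planner helper decide whether the node can take
     * the new targetlist or needs a Result node stacked on top.
     */
    static Plan *
    set_outer_tlist(Plan *outer_plan, List *fdw_scan_tlist,
                    bool parallel_safe)
    {
        if (IsA(outer_plan, NestLoop) ||
            IsA(outer_plan, MergeJoin) ||
            IsA(outer_plan, HashJoin))
        {
            /* only here is it safe to touch ((Join *) outer_plan) fields */
        }
        return change_plan_targetlist(outer_plan, fdw_scan_tlist,
                                      parallel_safe);
    }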
Our support for multiple-set-clauses in UPDATE assumes that the Params
referencing a MULTIEXPR_SUBLINK SubPlan will appear before that SubPlan
in the targetlist of the plan node that calculates the updated row.
(Yeah, it's a hack...) In some PG branches it's possible that a Result
node gets inserted between the primary calculation of the update tlist
and the ModifyTable node. setrefs.c did the wrong thing in this case
and left the upper-level Params as Params, causing a crash at runtime.
What it should do is replace them with "outer" Vars referencing the child
plan node's output. That's a result of careless ordering of operations
in fix_upper_expr_mutator, so we can fix it just by reordering the code.
Fix fix_join_expr_mutator similarly for consistency, even though join
nodes could never appear in such a context. (In general, it seems
likely to be a bit cheaper to use Vars than Params in such situations
anyway, so this patch might offer a tiny performance improvement.)
The hazard extends back to 9.5 where the MULTIEXPR_SUBLINK stuff
was introduced, so back-patch that far. However, this may be a live
bug only in 9.6.x and 10.x, as the other branches don't seem to want
to calculate the final tlist below the Result node. (That plan shape
change between branches might be a mini-bug in itself, but I'm not
really interested in digging into the reasons for that right now.
Still, add a regression test memorializing what we expect there,
so we'll notice if it changes again.)
Per bug report from Eduards Bezverhijs.
Discussion: https://postgr.es/m/b6cd572a-3e44-8785-75e9-c512a5a17a73@tieto.com
This fixes a crash which happened when calling the function directly
with a relation OID referring to a non-existent object, and changes the
behavior so that NULL is returned for unsupported relkinds instead of
generating an error. This puts the new function in line with many other
system functions, and eases actions like full scans of pg_class.
Author: Michael Paquier
Reviewed-by: Amit Langote, Stephen Frost
Discussion: https://postgr.es/m/20181207010406.GO2407@paquier.xyz
Although copyfuncs.c has a check_stack_depth call in its recursion,
equalfuncs.c, outfuncs.c, and readfuncs.c lacked one. This seems
unwise.
Likewise fix planstate_tree_walker(), in branches where that exists.
Discussion: https://postgr.es/m/30253.1544286631@sss.pgh.pa.us
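
The pattern is simply to guard each recursive entry point, as in this
sketch:

    #include "postgres.h"
    #include "miscadmin.h"
    #include "nodes/nodes.h"

    /* Sketch: any recursive node walker should bound its recursion. */
    static bool
    my_node_walker(Node *node)
    {
        /* raise a clean "stack depth limit exceeded" error, not a crash */
        check_stack_depth();

        /* ... recurse into node's children here ... */
        return false;
    }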
Previously, it would just pass back a partially-uninitialized tupdesc,
which doesn't seem like a safe or useful behavior.
Backpatch to v10 where this code came in.
Discussion: https://postgr.es/m/30830.1544384975@sss.pgh.pa.us
In toast_fetch_datum_slice(), we Assert() that what is passed in isn't
compressed, but we then later had a check computing what the length
would be if what was passed in were compressed. That later check is
rather confusing, since toast_fetch_datum_slice() is only ever called
with non-compressed datums and the earlier Assert() makes it clear that
one shouldn't be passing in compressed datums.
Add a comment to make it clear that toast_fetch_datum_slice() is just
for non-compressed datums, and remove the dead code.
When a WAL segment is recycled, its ".ready" and ".done" status files
also get removed automatically, however this is not done in a durable
manner. Hence, after a subsequent crash, it is possible that a ".ready"
status file is still around with its corresponding segment already gone.
If the backend reaches such a state, the archive command would most
likely complain about the segment not existing and would keep retrying,
causing WAL segments to bloat pg_wal/, potentially making Postgres crash
hard when running out of space.
As status files are removed after each individual segment, using
durable_unlink() does not completely close the window either, as a crash
could happen between the moment the WAL segment is recycled and the
moment its status files are removed. It would also have some performance
impact from the additional fsync() calls needed to make the removal
durable. Doing the cleanup at recovery is not cost-free either, as it
makes crash recovery potentially take longer than necessary.
So, instead, per an idea of Stephen Frost, make the archiver aware of
orphan status files and remove them on the fly if the corresponding
segment goes missing. Removal failures follow a model close to what
happens for WAL segments, where multiple attempts are made before giving
up temporarily, and where a successful orphan removal makes the archiver
move immediately to the next WAL segment thought to be ready for
archival.
Author: Michael Paquier
Reviewed-by: Nathan Bossart, Andres Freund, Stephen Frost, Kyotaro
Horiguchi
Discussion: https://postgr.es/m/20180928032827.GF1500@paquier.xyz
The timestamp generated by the standby at message transmission has been
included in the protocol since its introduction, for both the status
update message and the hot standby feedback message, but it has never
been exposed in pg_stat_replication. This timestamp does not matter
much on a cluster with a lot of activity, but on a mostly-idle cluster
it lets monitoring react faster than the configured timeouts.
Author: MyungKyu LIM
Reviewed-by: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/1657809367.407321.1533027417725.JavaMail.jboss@ep2ml404
The stanza of ExecuteTruncate[Guts] that truncates a target table's toast
relation re-used the loop local variable "rel" to reference the toast rel.
This was safe enough when written, but commit d42358efb added code below
that which assumed "rel" still pointed to the parent table. Therefore,
the stats counter update was applied to the wrong relcache entry (the
toast rel not the user rel); and if we were unlucky and that relcache
entry had been flushed during reindex_relation, very bad things could
ensue.
(I'm surprised that CLOBBER_CACHE_ALWAYS testing hasn't found this.
I'm even more surprised that the problem wasn't detected during the
development of d42358efb; it must not have been tested in any case
with a toast table, as the incorrect stats counts are very obvious.)
To fix, replace use of "rel" in that code branch with a more local
variable. Adjust test cases added by d42358efb so that some of them
use tables with toast tables.
Per bug #15540 from Pan Bian. Back-patch to 9.5 where d42358efb came in.
Discussion: https://postgr.es/m/15540-01078812338195c0@postgresql.org
Remove dead code (which would be incorrect if it weren't dead),
per report from Pan Bian. Add a CHECK_FOR_INTERRUPTS in the
inner loop over child relations, because there's little point
in having one in the outer loop if there's not one here too.
Minor stylistic adjustments and comment improvements.
Seems to be aboriginal to this code (cf commit 665d1fad9).
Back-patch to v10 where that came in, not because any of this
is significant, but just to keep the branches looking similar.
Discussion: https://postgr.es/m/15539-06d00ef6b1e2e1bb@postgresql.org
As per the error message style guide of the documentation, those should
be full sentences.
Author: Daniel Gustafsson
Reviewed-by: Michael Paquier, Álvaro Herrera
Discussion: https://postgr.es/m/1E8D49B4-16BC-4420-B4ED-58501D9E076B@yesql.se
Places that are testing for *printf failure ought to include the format
string in their error reports, since bad-format-string is one of the
more likely causes of such failure. This both makes it easier to find
and repair the mistake, and provides at least some useful info to the
user who stumbles across such a problem.
Also, tighten snprintf.c to report EINVAL for an invalid flag or
final character in a format %-spec (including the case where the
%-spec is missing a final character altogether). This seems like
better project policy, and it also allows removing an instruction
or two from the hot code path.
Back-patch the error reporting change in pvsnprintf, since it should be
harmless and may be helpful; but not the snprintf.c change.
Per discussion of bug #15511 from Ertuğrul Kahveci, which reported an
invalid translated format string. These changes don't fix that error,
but they should improve matters next time we make such a mistake.
Discussion: https://postgr.es/m/15511-1d8b6a0bc874112f@postgresql.org
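
For a pvsnprintf-style wrapper, the reporting change amounts to
something like this sketch:

    #include "postgres.h"
    #include <stdarg.h>

    /*
     * Sketch: on vsnprintf failure, report the format string too, since
     * a bad format string is one of the likelier causes of such failure.
     */
    static void
    checked_vsnprintf(char *buf, size_t len, const char *fmt, va_list args)
    {
        if (vsnprintf(buf, len, fmt, args) < 0)
            elog(ERROR, "vsnprintf failed: %m with format string \"%s\"",
                 fmt);
    }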
Skipping over the "hole" in full page images in the XLOG code was
described as being a form of compression, which became confusing once
PGLZ-based compression of full page images was added. Adjust the
wording to talk about "removing" the "hole", and reserve the talk of
compression for where PGLZ-based compression of the full page images is
meant.
Reviewed-By: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20181127234341.GM3415@tamriel.snowman.net
When an index is created on a partitioned table using ONLY (don't
recurse to partitions), it gets marked invalid until index partitions
are attached for each table partition. But there's no reason to do this
if there are no partitions ... and moreover, there's no way to get the
index to become valid afterwards, because all partitions that get
created/attached get their own index partition already attached to the
parent index, so there's no chance to do ALTER INDEX ... ATTACH PARTITION
that would make the parent index valid.
Fix by not marking the index as invalid to begin with.
This is very similar to 9139aa19423b, but the pg_dump aspect does not
appear to be relevant until we add FKs that can point to PKs on
partitioned tables. (I tried to cause the pg_upgrade test to break by
leaving some of these bogus tables around, but wasn't able to.)
Making this change means that an index that was supposed to be invalid
in the insert_conflict regression test is no longer invalid; reorder the
DDL so that the test continues to verify the behavior we want it to.
Author: Álvaro Herrera
Reviewed-by: Amit Langote
Discussion: https://postgr.es/m/20181203225019.2vvdef2ybnkxt364@alvherre.pgsql
Backends don't typically exist uncleanly, but they can certainly exit
uncleanly, and it's exiting uncleanly that's being discussed here.
This does not improve the security and reliability of the touched areas,
but it makes the style more consistent.
Author: Michael Paquier
Reviewed-by: Noah Misch
Discussion: https://postgr.es/m/20180309075538.GD9376@paquier.xyz
My original coding was questionable anyway.
Reported-by: Sergei Kornilov
Discussion: https://postgr.es/m/9645101543575886@myt6-27270b78ac4f.qloud-c.yandex.net
Three issues are fixed in this patch:
- Base backups forgot to ignore files specific to EXEC_BACKEND, leading
to spurious warnings when checksums are enabled, per analysis from me.
- pg_verify_checksums forgot about files specific to EXEC_BACKEND,
leading to failures of the tool on any such build, particularly Windows.
This error was originally found by newly-introduced TAP tests in various
buildfarm members using EXEC_BACKEND.
- pg_verify_checksums forgot to account for temporary files and
temporary paths, which can look like valid relation files but carry no
checksums, per report from Andres Freund. More tests are added to cover
this case.
A new test case which emulates corruption for a file in a different
tablespace is added, coming from Michael Banck, while I have coded
the main code and refactored the test code.
Author: Michael Banck, Michael Paquier
Reviewed-by: Stephen Frost, David Steele
Discussion: https://postgr.es/m/20181021134206.GA14282@paquier.xyz
This allows setting a lower log_min_duration_statement value without
incurring excessive log traffic (which reduces performance). This can
be useful for analyzing workloads with lots of short queries.
Author: Adrien Nayrat
Reviewed-by: David Rowley, Vik Fearing
Discussion: https://postgr.es/m/c30ee535-ee1e-db9f-fa97-146b9f62caed@anayrat.info
Author: Takeshi Ideriha
Discussion: https://postgr.es/m/4E72940DA2BF16479384A86D54D0988A6F3BF22D%40G01JPEXMBKW04
This fixes an oversight from c6c3334 which forgot that if a subset of
WAL senders are stopping and in a sync state, other WAL senders could
still be waiting for a WAL position to be synced while committing a
transaction. However, the subset of stopping senders would not release
waiters, potentially breaking synchronous replication guarantees. This
commit makes sure that even stopping WAL senders are able to release
waiters and are tracked properly.
On 9.4, this can also trigger an assertion failure when setting, for
example, max_wal_senders to 1, where a WAL sender is not able to find
itself in a synchronous state when the instance stops.
Reported-by: Paul Guo
Author: Paul Guo, Michael Paquier
Discussion: https://postgr.es/m/CAEET0ZEv8VFqT3C-cQm6byOB4r4VYWcef1J21dOX-gcVhCSpmA@mail.gmail.com
Backpatch-through: 9.4
Move the responsibility for checking for and reporting a failure from
the only current BufFileSize() caller, logtape.c, to BufFileSize()
itself. Code within buffile.c is generally responsible for interfacing
with fd.c to report irrecoverable failures. This seems like a
convention that's worth sticking to.
Reorganizing things this way makes it easy to make the error message
raised in the event of BufFileSize() failure descriptive of the
underlying problem. We're now clear on the distinction between
temporary file name and BufFile name, and can show errno, confident that
its value actually relates to the error being reported. In passing, an
existing, similar buffile.c ereport() + errcode_for_file_access() site
is changed to follow the same conventions.
The API of the function BufFileSize() is changed by this commit, despite
already being in a stable release (Postgres 11). This seems acceptable,
since the BufFileSize() ABI was changed by commit aa551830421, which
hasn't made it into a point release yet. Besides, it's difficult to
imagine a third party BufFileSize() caller not just raising an error
anyway, since BufFile state should be considered corrupt when
BufFileSize() fails.
Per complaint from Tom Lane.
Discussion: https://postgr.es/m/26974.1540826748@sss.pgh.pa.us
Backpatch: 11-, where shared BufFiles were introduced.
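
The reorganized function might read like this sketch of buffile.c
internals (the struct fields and the segment-size constant are
assumptions):

    /*
     * Sketch: report irrecoverable fd.c failure here, naming both the
     * temporary file and the BufFile, instead of returning -1 and making
     * the caller guess what errno refers to.
     */
    off_t
    BufFileSize(BufFile *file)
    {
        off_t       lastFileSize;

        /* fd.c-level size of the last physical segment file */
        lastFileSize = FileSize(file->files[file->numFiles - 1]);
        if (lastFileSize < 0)
            ereport(ERROR,
                    (errcode_for_file_access(),
                     errmsg("could not determine size of temporary file \"%s\" from BufFile \"%s\": %m",
                            FilePathName(file->files[file->numFiles - 1]),
                            file->name)));

        return ((file->numFiles - 1) * (off_t) MAX_PHYSICAL_FILESIZE) +
            lastFileSize;
    }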
The previous recovery.conf regime accepted multiple recovery_target*
settings and used the last one. This does not translate well to the
general GUC system. Specifically, under EXEC_BACKEND, the settings
are written out not in any particular order, so the order in which
they were originally set is not available to new processes.
Rather than redesign the GUC system, it was decided to abandon the old
behavior and only allow one recovery target setting. A second setting
will cause an error. However, it is allowed to set the same parameter
multiple times or unset a parameter and set a different one.
Discussion: https://www.postgresql.org/message-id/flat/27802171543235530%40iva2-6ec8f0a6115e.qloud-c.yandex.net#701a59c837ad0bf8c244344aaf3ef5a4
Since commit 2f1d2b7a we have set PAM_RHOST to "[local]" for Unix
sockets. This caused Linux PAM's libaudit integration to make DNS
requests for that name. It's not exactly clear what value PAM_RHOST
should have in that case, but it seems clear that we shouldn't set it
to an unresolvable name, so don't do that.
Back-patch to 9.6. Bug #15520.
Author: Thomas Munro
Reviewed-by: Peter Eisentraut
Reported-by: Albert Schabhuetl
Discussion: https://postgr.es/m/15520-4c266f986998e1c5%40postgresql.org
During table rewrites (VACUUM FULL and CLUSTER), the main heap is logged
using XLOG / FPI records, and thus (correctly) ignored in decoding.
But the associated TOAST table is WAL-logged as plain INSERT records,
and so was logically decoded and passed to reorder buffer.
That has severe consequences with TOAST tables of non-trivial size.
Firstly, the reorder buffer has to keep all those changes, possibly
spilling them to a file, incurring I/O costs and disk space.
Secondly, ReorderBufferCommit() was stashing all those TOAST chunks into
a hash table, which got discarded only after processing the row from the
main heap. But as the main heap is not decoded for rewrites, this never
happened, so all the TOAST data accumulated in memory, resulting either
in excessive memory consumption or OOM.
The fix is simple, as commit e9edc1ba already introduced infrastructure
(namely HEAP_INSERT_NO_LOGICAL flag) to skip logical decoding of TOAST
tables, but it only applied it to system tables. So simply use it for
all TOAST data in raw_heap_insert().
That would however solve only the memory consumption issue - the TOAST
changes would still be decoded and added to the reorder buffer, and
spilled to disk (although without TOAST tuple data, so much smaller).
But we can solve that by tweaking DecodeInsert() to just ignore such
INSERT records altogether, using XLH_INSERT_CONTAINS_NEW_TUPLE flag,
instead of skipping them later in ReorderBufferCommit().
Review: Masahiko Sawada
Discussion: https://www.postgresql.org/message-id/flat/1a17c643-e9af-3dba-486b-fbe31bc1823a%402ndquadrant.com
Backpatch: 9.4-, where logical decoding was introduced
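
The DecodeInsert() part of the fix reduces to an early-exit test on the
record flags, roughly as in this sketch:

    #include "postgres.h"
    #include "access/heapam_xlog.h"

    /* Sketch: should logical decoding look at this heap-insert record? */
    static bool
    insert_is_decodable(xl_heap_insert *xlrec)
    {
        /*
         * Records carrying no new tuple data (e.g. TOAST inserts emitted
         * during a table rewrite with HEAP_INSERT_NO_LOGICAL) are of no
         * interest to logical decoding and can be skipped outright.
         */
        return (xlrec->flags & XLH_INSERT_CONTAINS_NEW_TUPLE) != 0;
    }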
If you extend a relation, it should count as a block written, not
read (we write a zero-filled block). If you ask for a zero-filled
buffer, it shouldn't be counted as read or written.
Later we might consider counting zero-filled buffers with a separate
counter, if they become more common due to future work.
Author: Thomas Munro
Reviewed-by: Haribabu Kommi, Kyotaro Horiguchi, David Rowley
Discussion: https://postgr.es/m/CAEepm%3D3JytB3KPpvSwXzkY%2Bdwc5zC8P8Lk7Nedkoci81_0E9rA%40mail.gmail.com
The function generated to perform JIT compiled tuple deforming failed
when HeapTupleHeader's t_hoff was bigger than a signed int8. I'd
failed to realize that LLVM's getelementptr would treat an int8 index
argument as signed, rather than unsigned. That means that a hoff
larger than 127 would result in a negative offset being applied. Fix
that by widening the index to 32bit.
Add a testcase with a wide table. Don't drop it, as it seems useful to
verify other tools deal properly with wide tables.
Thanks to Justin Pryzby for both reporting a bug and then reducing it
to a reproducible testcase!
Reported-By: Justin Pryzby
Author: Andres Freund
Discussion: https://postgr.es/m/20181115223959.GB10913@telsasoft.com
Backpatch: 11, just as jit compilation was
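
In terms of the LLVM C API, the fix is essentially a zero-extension of
the one-byte t_hoff before it is used as an index, as in this sketch
(variable names are illustrative):

    #include <llvm-c/Core.h>

    /*
     * Sketch: getelementptr treats narrow integer indexes as signed, so
     * zero-extend the uint8 t_hoff to 32 bits; values 128..255 must stay
     * positive offsets rather than becoming negative ones.
     */
    static LLVMValueRef
    build_hoff_index(LLVMBuilderRef b, LLVMValueRef v_hoff8)
    {
        return LLVMBuildZExt(b, v_hoff8, LLVMInt32Type(), "t_hoff32");
    }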
recovery.conf settings are now set in postgresql.conf (or other GUC
sources). Currently, all the affected settings are PGC_POSTMASTER;
this could be refined in the future case by case.
Recovery is now initiated by a file recovery.signal. Standby mode is
initiated by a file standby.signal. The standby_mode setting is
gone. If a recovery.conf file is found, an error is issued.
The trigger_file setting has been renamed to promote_trigger_file as
part of the move.
The documentation chapter "Recovery Configuration" has been integrated
into "Server Configuration".
pg_basebackup -R now appends settings to postgresql.auto.conf and
creates a standby.signal file.
Author: Fujii Masao <masao.fujii@gmail.com>
Author: Simon Riggs <simon@2ndquadrant.com>
Author: Abhijit Menon-Sen <ams@2ndquadrant.com>
Author: Sergei Kornilov <sk@zsrv.org>
Discussion: https://www.postgresql.org/message-id/flat/607741529606767@web3g.yandex.ru/
Commit cfdf4dc4 added an assertion that every WaitLatch() or similar
handles postmaster death. One place did not, but was missed in
review and testing due to the need for an SSL connection. Fix, by
asking for WL_EXIT_ON_PM_DEATH.
Reported-by: Christoph Berg
Discussion: https://postgr.es/m/20181124143845.GA15039%40msg.df7cb.de
ftoi4 and its sibling coercion functions did their overflow checks in
a way that looked superficially plausible, but actually depended on an
assumption that the MIN and MAX comparison constants can be represented
exactly in the float4 or float8 domain. That fails in ftoi4, ftoi8,
and dtoi8, resulting in a possibility that values near the MAX limit will
be wrongly converted (to negative values) when they need to be rejected.
Also, because we compared before rounding off the fractional part,
the other three functions threw errors for values that really ought
to get rounded to the min or max integer value.
Fix by doing rint() first (requiring an assumption that it handles
NaN and Inf correctly; but dtoi8 and ftoi8 were assuming that already),
and by comparing to values that should coerce to float exactly, namely
INTxx_MIN and -INTxx_MIN. Also remove some random cosmetic discrepancies
between these six functions.
Per bug #15519 from Victor Petrovykh. This should get back-patched,
but first let's see what the buildfarm thinks of it --- I'm not too
sure about portability of some of the regression test cases.
Patch by me; thanks to Andrew Gierth for analysis and discussion.
Discussion: https://postgr.es/m/15519-4fc785b483201ff1@postgresql.org
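
The corrected pattern for ftoi4, per the description above, looks
roughly like this sketch:

    #include "postgres.h"
    #include <math.h>
    #include "fmgr.h"

    /*
     * Sketch: round first, then compare against limits that are exactly
     * representable in float4, namely INT32_MIN and -INT32_MIN
     * (INT32_MAX is not exactly representable in float4).
     */
    Datum
    ftoi4_sketch(PG_FUNCTION_ARGS)
    {
        float4      num = PG_GETARG_FLOAT4(0);

        num = rint(num);
        if (isnan(num) ||
            num < (float4) PG_INT32_MIN ||
            num >= -((float4) PG_INT32_MIN))
            ereport(ERROR,
                    (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
                     errmsg("integer out of range")));

        PG_RETURN_INT32((int32) num);
    }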
We should never estimate the output of a semijoin to be more rows than
we estimate for an inner join with the same input rels and join condition;
it's obviously impossible for that to happen. However, given the
relatively poor quality of our semijoin selectivity estimates ---
particularly, but not only, in cases where we punt and return a default
estimate --- we did often deliver such estimates. To improve matters,
calculate both estimates inside eqjoinsel() and take the smaller one.
The bulk of this patch is just mechanical refactoring to avoid repetitive
information lookup when we call both eqjoinsel_semi and eqjoinsel_inner.
The actual new behavior is just
selec = Min(selec, inner_rel->rows * selec_inner);
which looks a bit odd but is correct because of our different definitions
for inner and semi join selectivity.
There is one ensuing plan change in the regression tests, but it looks
reasonable enough (and checking the actual row counts shows that the
estimate moved closer to reality, not further away).
Per bug #15160 from Alexey Ermakov. Although this is arguably a bug fix,
I won't risk destabilizing plan choices in stable branches by
back-patching.
Tom Lane, reviewed by Melanie Plageman
Discussion: https://postgr.es/m/152395805004.19366.3107109716821067806@wrigleys.postgresql.org
Commit cfdf4dc4fc96 left a few unnecessary assignments, one of which
caused compiler warnings, as reported by Erik Rijkers. Remove them all.
Discussion: https://postgr.es/m/df0dcca2025b3d90d946ecc508ca9678@xs4all.nl
Missing in dfa608141982.
Author: David Rowley
Discussion: https://postgr.es/m/CAKJS1f-M3NMTCpv=vDfkoqHbMPFf=3-Z1ud=+1DHH00tC+zLaQ@mail.gmail.com
Users of the WaitEventSet and WaitLatch() APIs can now choose between
asking for WL_POSTMASTER_DEATH and then handling it explicitly, or asking
for WL_EXIT_ON_PM_DEATH to trigger immediate exit on postmaster death.
This reduces code duplication, since almost all callers want the latter.
Repair all code that was previously ignoring postmaster death completely,
or requesting the event but ignoring it, or requesting the event but then
doing an unconditional PostmasterIsAlive() call every time through its
event loop (which is an expensive syscall on platforms for which we don't
have USE_POSTMASTER_DEATH_SIGNAL support).
Assert that callers of WaitLatchXXX() under the postmaster remember to
ask for either WL_POSTMASTER_DEATH or WL_EXIT_ON_PM_DEATH, to prevent
future bugs.
The only process that doesn't handle postmaster death is syslogger. It
waits until all backends holding the write end of the syslog pipe
(including the postmaster) have closed it by exiting, to be sure to
capture any parting messages. By using the WaitEventSet API directly
it avoids the new assertion, and as a by-product it may be slightly
more efficient on platforms that have epoll().
Author: Thomas Munro
Reviewed-by: Kyotaro Horiguchi, Heikki Linnakangas, Tom Lane
Discussion: https://postgr.es/m/CAEepm%3D1TCviRykkUb69ppWLr_V697rzd1j3eZsRMmbXvETfqbQ%40mail.gmail.com,
https://postgr.es/m/CAEepm=2LqHzizbe7muD7-2yHUbTOoF7Q+qkSD5Q41kuhttRTwA@mail.gmail.com
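
In a typical wait loop the new flag is used like this sketch (the wait
event constant is illustrative):

    #include "postgres.h"
    #include "miscadmin.h"
    #include "pgstat.h"
    #include "storage/latch.h"

    /*
     * Sketch: ask WaitLatch() to exit the process on postmaster death,
     * instead of requesting WL_POSTMASTER_DEATH and checking by hand.
     */
    static void
    wait_one_second(void)
    {
        (void) WaitLatch(MyLatch,
                         WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
                         1000L,
                         WAIT_EVENT_PG_SLEEP);
        ResetLatch(MyLatch);
    }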
populate_recordset_worker() failed to consider the possibility that the
supplied JSON data contains no rows, so that update_cached_tupdesc never
got called. This led to a null-pointer dereference since commit 9a5e8ed28;
before that it led to a bogus "set-valued function called in context that
cannot accept a set" error. Fix by forcing the update to happen.
Per bug #15514. Back-patch to v11 as 9a5e8ed28 was. (If we were excited
about the bogus error, we could perhaps go back further, but it'd take more
work to figure out how to fix it in older branches. Given the lack of
field complaints about that aspect, I'm not excited.)
Discussion: https://postgr.es/m/15514-59d5b4c4065b178b@postgresql.org
Author: Amit Langote
Discussion: https://postgr.es/m/CA+HiwqHg0=UL+Dhh3gpiwYNA=ufk9Lb7GQ2c=5rs=ZmVTP7xAw@mail.gmail.com
Per pink buildfarm.
Per David Rowley
Discussion: https://postgr.es/m/CAKJS1f-MstvBWdkOzACsOHyBgj2oXcBM8kfv+NhVe-Ux-wq9Sg@mail.gmail.com