path: root/src
* Make pg_statistic and related code account more honestly for collations.  (Tom Lane, 2018-12-14)
  When we first put in collations support, we basically punted on teaching pg_statistic, ANALYZE, and the planner selectivity functions about that. They've just used DEFAULT_COLLATION_OID independently of the actual collation of the data. It's time to improve that, so:
  * Add columns to pg_statistic that record the specific collation associated with each statistics slot.
  * Teach ANALYZE to use the column's actual collation when comparing values for statistical purposes, and record this in the appropriate slot. (Note that type-specific typanalyze functions are now expected to fill stats->stacoll with the appropriate collation, too.)
  * Teach assorted selectivity functions to use the actual collation of the stats they are looking at, instead of just assuming it's DEFAULT_COLLATION_OID.
  This should give noticeably better results in selectivity estimates for columns with nondefault collations, at least for query clauses that use that same collation (which would be the default behavior in most cases). It's still true that comparisons with explicit COLLATE clauses different from the stored data's collation won't be well-estimated, but that's no worse than before. Also, this patch does make the first step towards doing better with that, which is that it's now theoretically possible to collect stats for a collation other than the column's own collation.
  Patch by me; thanks to Peter Eisentraut for review.
  Discussion: https://postgr.es/m/14706.1544630227@sss.pgh.pa.us
* Introduce new extended routines for FDW and foreign server lookups  (Michael Paquier, 2018-12-14)
  The cache lookup routines for foreign-data wrappers and foreign servers are extended with an extra argument to handle a set of flags. The only value usable now indicates whether a missing object should result in an error or not, and the flags are designed to be extensible as needed. Those new routines are added into the existing set of user-visible FDW APIs and documented accordingly. They will be used in future patches to improve the SQL interface for object addresses.
  Author: Michael Paquier
  Reviewed-by: Álvaro Herrera
  Discussion: https://postgr.es/m/CAB7nPqSZxrSmdHK-rny7z8mi=EAFXJ5J-0RbzDw6aus=wB5azQ@mail.gmail.com
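  A minimal sketch of how a caller might use the new lookup flavor (illustrative only; the exact function and flag spellings shown here, GetForeignServerExtended and FSV_MISSING_OK, are assumptions about the API described above):

      ForeignServer *server;

      /* Ask for the server, tolerating its concurrent disappearance. */
      server = GetForeignServerExtended(serverid, FSV_MISSING_OK);
      if (server == NULL)
      {
          /* Object is gone; the caller decides what to do instead of erroring. */
      }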
* Create a separate oid range for oids assigned by genbki.pl.  (Andres Freund, 2018-12-13)
  The changes I made in 578b229718e assigned oids below FirstBootstrapObjectId to objects in include/catalog/*.dat files that did not have an oid assigned, starting at the max oid explicitly assigned. Tom criticized that for mainly two reasons:
  1) It's not clear which values are manually and which automatically assigned.
  2) The space below FirstBootstrapObjectId gets pretty crowded, and some PostgreSQL forks have used oids >= 9000 for their own objects, to avoid conflicting.
  Thus create a new range for objects not assigned explicit oids, but assigned by genbki.pl. For now 1-9999 is for explicitly assigned oids, FirstGenbkiObjectId (10000) to FirstBootstrapObjectId (12000) - 1 is for genbki.pl assigned oids, and < FirstNormalObjectId (16384) is for oids assigned during bootstrap. It's possible that we'll have to adjust these boundaries, but there's some headroom for now.
  Add a note suggesting that oids in forks should be assigned in the 9000-9999 range.
  Catversion bump for obvious reasons.
  Per complaint from Tom Lane.
  Author: Andres Freund
  Discussion: https://postgr.es/m/16845.1544393682@sss.pgh.pa.us
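  Summarizing the resulting ranges as constants (a sketch based on the numbers above; the range comments are mine, the constants themselves live in access/transam.h):

      #define FirstGenbkiObjectId     10000   /* 1-9999: oids assigned explicitly in .dat files */
      #define FirstBootstrapObjectId  12000   /* 10000-11999: oids assigned by genbki.pl */
      #define FirstNormalObjectId     16384   /* 12000-16383: oids assigned during bootstrap */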
* Fix bogus logic for skipping unnecessary partcollation dependencies.  (Tom Lane, 2018-12-13)
  The idea here is to not call recordDependencyOn for the default collation, since we know that's pinned. But what the code actually did was to record the partition key's dependency on the opclass twice, instead.
  Evidently introduced by sloppy coding in commit 2186b608b. Back-patch to v10 where that came in.
* Drop no-op CoerceToDomain nodes from expressions at planning time.  (Tom Lane, 2018-12-13)
  If a domain has no constraints, then CoerceToDomain doesn't really do anything and can be simplified to a RelabelType. This not only eliminates cycles at execution, but allows the planner to optimize better (for instance, match the coerced expression to an index on the underlying column).
  However, we do have to support invalidating the plan later if a constraint gets added to the domain. That's comparable to the case of a change to a SQL function that had been inlined into a plan, so all the necessary logic already exists for plans depending on functions. We need only duplicate or share that logic for domains. ALTER DOMAIN ADD/DROP CONSTRAINT need to be taught to send out sinval messages for the domain's pg_type entry, since those operations don't update that row. (ALTER DOMAIN SET/DROP NOT NULL do update that row, so no code change is needed for them.)
  Testing this revealed what's really a pre-existing bug in plpgsql: it caches the SQL-expression-tree expansion of type coercions and had no provision for invalidating entries in that cache. Up to now that was only a problem if such an expression had inlined a SQL function that got changed, which is unlikely though not impossible. But failing to track changes of domain constraints breaks an existing regression test case and would likely cause practical problems too. We could fix that locally in plpgsql, but what seems like a better idea is to build some generic infrastructure in plancache.c to store standalone expressions and track invalidation events for them. (It's tempting to wonder whether plpgsql's "simple expression" stuff could use this code with lower overhead than its current use of the heavyweight plancache APIs. But I've left that idea for later.)
  Other stuff fixed in passing:
  * Allow estimate_expression_value() to drop CoerceToDomain unconditionally, effectively assuming that the coercion will succeed. This will improve planner selectivity estimates for cases involving estimatable expressions that are coerced to domains. We could have done this independently of everything else here, but there wasn't previously any need for eval_const_expressions_mutator to know about CoerceToDomain at all.
  * Use a dlist for plancache.c's list of cached plans, rather than a manually threaded singly-linked list. That eliminates a potential performance problem in DropCachedPlan.
  * Fix a couple of inconsistencies in typecmds.c about whether operations on domains drop RowExclusiveLock on pg_type. Our common practice is that DDL operations do drop catalog locks, so standardize on that choice.
  Discussion: https://postgr.es/m/19958.1544122124@sss.pgh.pa.us
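  A minimal sketch of the core simplification (close in spirit to what the const-folding code would now do, not the verbatim source; child-expression processing is omitted):

      #include "postgres.h"
      #include "nodes/makefuncs.h"
      #include "nodes/primnodes.h"
      #include "utils/typcache.h"

      static Node *
      simplify_coerce_to_domain(CoerceToDomain *cdomain)
      {
          /* If the domain currently has no constraints, the node is a no-op:
           * replace it with a plain relabeling of the argument. */
          if (!DomainHasConstraints(cdomain->resulttype))
              return (Node *) makeRelabelType(cdomain->arg,
                                              cdomain->resulttype,
                                              cdomain->resulttypmod,
                                              cdomain->resultcollid,
                                              cdomain->coercionformat);

          return (Node *) cdomain;
      }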
* Prevent GIN deleted pages from being reclaimed too early  (Alexander Korotkov, 2018-12-13)
  When GIN vacuum deletes a posting tree page, it assumes that no concurrent searchers can access it, thanks to ginStepRight() locking two pages at once. However, since 9.4 searches can skip parts of posting trees descending from the root. That leads to the risk that a page is deleted and reclaimed before a concurrent search can access it.
  This commit prevents that risk by waiting, before the page is reclaimed, for every transaction that might want to reference it to finish. For binary compatibility we can't change GinPageOpaqueData to store the corresponding transaction id. Instead we reuse the page header's pd_prune_xid field, which is unused in index pages.
  Discussion: https://postgr.es/m/31a702a.14dd.166c1366ac1.Coremail.chjischj%40163.com
  Author: Andrey Borodin, Alexander Korotkov
  Reviewed-by: Alexander Korotkov
  Backpatch-through: 9.4
* Prevent deadlock in ginRedoDeletePage()  (Alexander Korotkov, 2018-12-13)
  On standby, ginRedoDeletePage() can work concurrently with read-only queries. Those queries can traverse the posting tree in two ways.
  1) Using rightlinks by ginStepRight(), which locks the next page before unlocking its left sibling.
  2) Using downlinks by ginFindLeafPage(), which locks at most one page at a time.
  The original lock order was: page, parent, left sibling. That lock order can deadlock with ginStepRight(). In order to prevent deadlock this commit changes the lock order to: left sibling, page, parent. Note that the position of the parent in the locking order seems insignificant, because we only lock one page at a time while traversing downlinks.
  Reported-by: Chen Huajun
  Diagnosed-by: Chen Huajun, Peter Geoghegan, Andrey Borodin
  Discussion: https://postgr.es/m/31a702a.14dd.166c1366ac1.Coremail.chjischj%40163.com
  Author: Alexander Korotkov
  Backpatch-through: 9.4
* Fix deadlock in GIN vacuum introduced by 218f51584d5  (Alexander Korotkov, 2018-12-13)
  Before 218f51584d5, if a posting tree page was about to be deleted, the whole posting tree was locked by LockBufferForCleanup() on the root, preventing all concurrent inserts. 218f51584d5 reduced the locking to the subtree containing the page to be deleted. However, due to concurrent parent splits, an inserter doesn't always hold pins on all the pages constituting the path from the root to the target leaf page. That could cause a deadlock between the GIN vacuum process and a GIN inserter, and we didn't find a non-invasive way to fix this.
  This commit reverts the VACUUM behavior to lock the whole posting tree before deleting any page. However, we keep another useful change from 218f51584d5: the tree is locked only if there are pages to be deleted.
  Reported-by: Chen Huajun
  Diagnosed-by: Chen Huajun, Andrey Borodin, Peter Geoghegan
  Discussion: https://postgr.es/m/31a702a.14dd.166c1366ac1.Coremail.chjischj%40163.com
  Author: Alexander Korotkov, based on ideas from Andrey Borodin and Peter Geoghegan
  Reviewed-by: Andrey Borodin
  Backpatch-through: 10
* Repair bogus EPQ plans generated for postgres_fdw foreign joins.  (Tom Lane, 2018-12-12)
  postgres_fdw's postgresGetForeignPlan() assumes without checking that the outer_plan it's given for a join relation must have a NestLoop, MergeJoin, or HashJoin node at the top. That's been wrong at least since commit 4bbf6edfb (which could cause insertion of a Sort node on top) and it seems like a pretty unsafe thing to Just Assume even without that. Through blind good fortune, this doesn't seem to have any worse consequences today than strange EXPLAIN output, but it's clearly trouble waiting to happen.
  To fix, test the node type explicitly before touching Join-specific fields, and avoid jamming the new tlist into a node type that can't do projection. Export a new support function from createplan.c to avoid building low-level knowledge about the latter into FDWs.
  Back-patch to 9.6 where the faulty coding was added. Note that the associated regression test cases don't show any changes before v11, apparently because the tests back-patched with 4bbf6edfb don't actually exercise the problem case before then (there's no top-level Sort in those plans).
  Discussion: https://postgr.es/m/8946.1544644803@sss.pgh.pa.us
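  A minimal sketch of the explicit node-type test described above (illustrative helper, not the exact postgres_fdw code):

      #include "postgres.h"
      #include "nodes/plannodes.h"

      /* Does this plan node have a Join layout that is safe to inspect? */
      static bool
      plan_is_join_node(Plan *outer_plan)
      {
          return outer_plan != NULL &&
                 (IsA(outer_plan, NestLoop) ||
                  IsA(outer_plan, MergeJoin) ||
                  IsA(outer_plan, HashJoin));
      }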
* Repair bogus handling of multi-assignment Params in upper plan levels.  (Tom Lane, 2018-12-12)
  Our support for multiple-set-clauses in UPDATE assumes that the Params referencing a MULTIEXPR_SUBLINK SubPlan will appear before that SubPlan in the targetlist of the plan node that calculates the updated row. (Yeah, it's a hack...) In some PG branches it's possible that a Result node gets inserted between the primary calculation of the update tlist and the ModifyTable node. setrefs.c did the wrong thing in this case and left the upper-level Params as Params, causing a crash at runtime. What it should do is replace them with "outer" Vars referencing the child plan node's output. That's a result of careless ordering of operations in fix_upper_expr_mutator, so we can fix it just by reordering the code. Fix fix_join_expr_mutator similarly for consistency, even though join nodes could never appear in such a context. (In general, it seems likely to be a bit cheaper to use Vars than Params in such situations anyway, so this patch might offer a tiny performance improvement.)
  The hazard extends back to 9.5 where the MULTIEXPR_SUBLINK stuff was introduced, so back-patch that far. However, this may be a live bug only in 9.6.x and 10.x, as the other branches don't seem to want to calculate the final tlist below the Result node. (That plan shape change between branches might be a mini-bug in itself, but I'm not really interested in digging into the reasons for that right now. Still, add a regression test memorializing what we expect there, so we'll notice if it changes again.)
  Per bug report from Eduards Bezverhijs.
  Discussion: https://postgr.es/m/b6cd572a-3e44-8785-75e9-c512a5a17a73@tieto.com
* Tweak pg_partition_tree for undefined relations and unsupported relkinds  (Michael Paquier, 2018-12-12)
  This fixes a crash which happened when calling the function directly with a relation OID referring to a non-existing object, and changes the behavior so that NULL is returned for unsupported relkinds instead of generating an error. This puts the new function in line with many other system functions, and eases actions like full scans of pg_class.
  Author: Michael Paquier
  Reviewed-by: Amit Langote, Stephen Frost
  Discussion: https://postgr.es/m/20181207010406.GO2407@paquier.xyz
* Fix test_rls_hooks to assign expression collations properly.  (Tom Lane, 2018-12-11)
  This module overlooked this necessary fixup step on the results of transformWhereClause(). It accidentally worked anyway, because the constructed expression involved type "name" which is not collatable, but it fell over while I was experimenting with changing "name" to be collatable.
  Back-patch, not because there's any live bug here in back branches, but because somebody might use this code as a model for some real application and then not understand why it doesn't work.
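  A minimal sketch of the fixup step being referred to (a fragment; the variable names and the POLICY expression kind are assumptions, but assign_expr_collations() is the standard post-parse step in question):

      /* After parsing a qual expression through the transform machinery ... */
      qual = transformWhereClause(pstate, copyObject(raw_qual),
                                  EXPR_KIND_POLICY, "POLICY");

      /* ... expression collations must be assigned explicitly. */
      assign_expr_collations(pstate, qual);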
* Raise some timeouts to 180s, in test code.  (Noah Misch, 2018-12-10)
  Slow runs of buildfarm members chipmunk, hornet and mandrill saw the shorter timeouts expire. The 180s timeout in poll_query_until has been trouble-free since 2a0f89cd717ce6d49cdc47850577823682167e87 introduced it two years ago, so use 180s more widely. Back-patch to 9.6, where the first of these timeouts was introduced.
  Reviewed by Michael Paquier.
  Discussion: https://postgr.es/m/20181209001601.GC2973271@rfd.leadboat.com
* Add stack depth checks to key recursive functions in backend/nodes/*.c.  (Tom Lane, 2018-12-10)
  Although copyfuncs.c has a check_stack_depth call in its recursion, equalfuncs.c, outfuncs.c, and readfuncs.c lacked one. This seems unwise. Likewise fix planstate_tree_walker(), in branches where that exists.
  Discussion: https://postgr.es/m/30253.1544286631@sss.pgh.pa.us
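  The pattern being added, sketched as a standalone example (an illustrative walker, not the actual equalfuncs.c/outfuncs.c code):

      #include "postgres.h"
      #include "miscadmin.h"          /* check_stack_depth() */
      #include "nodes/nodes.h"

      static void
      my_node_walker(const Node *node)
      {
          if (node == NULL)
              return;

          /* ereport(ERROR) if we have recursed dangerously deep */
          check_stack_depth();

          /* ... recurse into the node's children as appropriate ... */
      }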
* Make TupleDescInitBuiltinEntry throw error for unsupported types.  (Tom Lane, 2018-12-10)
  Previously, it would just pass back a partially-uninitialized tupdesc, which doesn't seem like a safe or useful behavior. Backpatch to v10 where this code came in.
  Discussion: https://postgr.es/m/30830.1544384975@sss.pgh.pa.us
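  A minimal sketch of the behavior change (a fragment; the real switch over supported type OIDs lives in TupleDescInitBuiltinEntry, and the message wording is an assumption):

      switch (oidtypeid)
      {
          /* ... cases for the handful of supported built-in types ... */

          default:
              /* was: silently return a partially-uninitialized tupdesc */
              elog(ERROR, "unsupported type %u", oidtypeid);
      }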
* Add pg_dump test for empty OP class  (Stephen Frost, 2018-12-10)
  This adds a pg_dump test for an empty operator class.
  Author: Michael Paquier
  Discussion: https://postgr.es/m/20181208011142.GU3415@tamriel.snowman.net
* Add additional partition tests to pg_dump  (Stephen Frost, 2018-12-10)
  This adds a few tests for non-inherited constraints.
  Author: Amit Langote
  Discussion: https://postgr.es/m/20181208001735.GT3415%40tamriel.snowman.net
* Remove dead code in toast_fetch_datum_slice  (Stephen Frost, 2018-12-10)
  In toast_fetch_datum_slice(), we Assert() that what is passed in isn't compressed, but we then later had a check computing what the length would be if what was passed in had been compressed. That later check is rather confusing since toast_fetch_datum_slice() is only ever called with non-compressed datums and the Assert() earlier makes it clear that one shouldn't be passing in compressed datums.
  Add a comment to make it clear that toast_fetch_datum_slice() is just for non-compressed datums, and remove the dead code.
* Ensure cleanup of orphan archive status files  (Michael Paquier, 2018-12-10)
  When a WAL segment is recycled, its ".ready" and ".done" status files also get removed automatically, however this is not done in a durable manner. Hence, after a subsequent crash, it could be possible that a ".ready" status file is still around with its corresponding segment already gone. If the backend reaches such a state, the archive command would most likely complain about a missing segment and would keep retrying, causing WAL segments to bloat pg_wal/, potentially making Postgres crash hard when running out of space.
  As status files are removed after each individual segment, using durable_unlink() does not completely close the window either, as a crash could happen between the moment the WAL segment is recycled and the moment its status files are removed. This also has some performance impact with the additional fsync() calls needed to make the removal durable. Doing the cleanup at recovery is not cost-free either, as it makes crash recovery potentially take longer than necessary.
  So, instead, as per an idea of Stephen Frost, make the archiver aware of orphan status files and remove them on-the-fly if the corresponding segment goes missing. Removal failures follow a model close to what happens for WAL segments, where multiple attempts are done before giving up temporarily, and where a successful orphan removal makes the archiver move immediately to the next WAL segment thought to be ready for archiving.
  Author: Michael Paquier
  Reviewed-by: Nathan Bossart, Andres Freund, Stephen Frost, Kyotaro Horiguchi
  Discussion: https://postgr.es/m/20180928032827.GF1500@paquier.xyz
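  A minimal sketch of the archiver-side cleanup described above (illustrative only, not the actual pgarch.c code; path handling is simplified and the helper name is mine):

      #include <errno.h>
      #include <stdbool.h>
      #include <sys/stat.h>
      #include <unistd.h>

      /* Is the WAL segment backing this status file already gone? */
      static bool
      wal_segment_missing(const char *xlogpath)
      {
          struct stat st;

          return stat(xlogpath, &st) != 0 && errno == ENOENT;
      }

      /* ... in the archiver loop, before invoking the archive command ... */
      /*
       * if (wal_segment_missing(xlogpath))
       * {
       *     unlink(readypath);    -- drop the orphan ".ready" file
       *     -- then move on to the next segment thought ready to be archived
       * }
       */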
* Add timestamp of last received message from standby to pg_stat_replication  (Michael Paquier, 2018-12-09)
  The timestamp generated by the standby at message transmission has been included in the protocol since its introduction, for both the status update message and the hot standby feedback message, but it has never appeared in pg_stat_replication. Seeing this timestamp does not matter much on a cluster with a lot of activity, but on a mostly-idle cluster it lets monitoring react faster than the configured timeouts.
  Author: MyungKyu LIM
  Reviewed-by: Michael Paquier, Masahiko Sawada
  Discussion: https://postgr.es/m/1657809367.407321.1533027417725.JavaMail.jboss@ep2ml404
* In PQprint(), write HTML table trailer before closing the output pipe.  (Tom Lane, 2018-12-07)
  This is an astonishingly ancient bit of silliness, dating AFAICS to commit edb519b14 of 27-Jul-1996 which added the pipe close stanza in the wrong place. It happens to be harmless given that the code above this won't enable the pager if html3 output mode is selected. Still, somebody might try to relax that restriction someday, and in any case it could confuse readers and static analysis tools, so let's fix it in HEAD.
  Per bug #15541 from Pan Bian.
  Discussion: https://postgr.es/m/15541-c835d8b9a903f7ad@postgresql.org
* Fix misapplication of pgstat_count_truncate to wrong relation.  (Tom Lane, 2018-12-07)
  The stanza of ExecuteTruncate[Guts] that truncates a target table's toast relation re-used the loop local variable "rel" to reference the toast rel. This was safe enough when written, but commit d42358efb added code below that that supposed "rel" still pointed to the parent table. Therefore, the stats counter update was applied to the wrong relcache entry (the toast rel not the user rel); and if we were unlucky and that relcache entry had been flushed during reindex_relation, very bad things could ensue.
  (I'm surprised that CLOBBER_CACHE_ALWAYS testing hasn't found this. I'm even more surprised that the problem wasn't detected during the development of d42358efb; it must not have been tested in any case with a toast table, as the incorrect stats counts are very obvious.)
  To fix, replace use of "rel" in that code branch with a more local variable. Adjust test cases added by d42358efb so that some of them use tables with toast tables.
  Per bug #15540 from Pan Bian. Back-patch to 9.5 where d42358efb came in.
  Discussion: https://postgr.es/m/15540-01078812338195c0@postgresql.org
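  A minimal sketch of the fix described above (a fragment, not the verbatim ExecuteTruncateGuts code; the variable names are illustrative):

      Relation    toastrel = relation_open(toast_relid, AccessExclusiveLock);

      /* ... truncate and rebuild the toast relation via "toastrel" ... */

      relation_close(toastrel, NoLock);

      /* The stats counter update now clearly targets the user table. */
      pgstat_count_truncate(rel);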
* Clean up sloppy coding in publicationcmds.c's OpenTableList().  (Tom Lane, 2018-12-07)
  Remove dead code (which would be incorrect if it weren't dead), per report from Pan Bian. Add a CHECK_FOR_INTERRUPTS in the inner loop over child relations, because there's little point in having one in the outer loop if there's not one here too. Minor stylistic adjustments and comment improvements.
  Seems to be aboriginal to this code (cf commit 665d1fad9). Back-patch to v10 where that came in, not because any of this is significant, but just to keep the branches looking similar.
  Discussion: https://postgr.es/m/15539-06d00ef6b1e2e1bb@postgresql.org
* Fix some errhint and errdetail strings missing a period  (Michael Paquier, 2018-12-07)
  As per the error message style guide of the documentation, those should be full sentences.
  Author: Daniel Gustafsson
  Reviewed-by: Michael Paquier, Álvaro Herrera
  Discussion: https://postgr.es/m/1E8D49B4-16BC-4420-B4ED-58501D9E076B@yesql.se
* Improve our response to invalid format strings, and detect more cases.  (Tom Lane, 2018-12-06)
  Places that are testing for *printf failure ought to include the format string in their error reports, since bad-format-string is one of the more likely causes of such failure. This both makes it easier to find and repair the mistake, and provides at least some useful info to the user who stumbles across such a problem.
  Also, tighten snprintf.c to report EINVAL for an invalid flag or final character in a format %-spec (including the case where the %-spec is missing a final character altogether). This seems like better project policy, and it also allows removing an instruction or two from the hot code path.
  Back-patch the error reporting change in pvsnprintf, since it should be harmless and may be helpful; but not the snprintf.c change.
  Per discussion of bug #15511 from Ertuğrul Kahveci, which reported an invalid translated format string. These changes don't fix that error, but they should improve matters next time we make such a mistake.
  Discussion: https://postgr.es/m/15511-1d8b6a0bc874112f@postgresql.org
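  A minimal sketch of the kind of reporting described above (a fragment in the spirit of pvsnprintf; the exact message wording is an assumption):

      nprinted = vsnprintf(buf, len, fmt, args);
      if (nprinted < 0)
          /* include the format string, since a bad format is a likely cause */
          elog(ERROR, "vsnprintf failed: %m with format \"%s\"", fmt);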
* Cleanup minor pg_dump memory leaks  (Stephen Frost, 2018-12-06)
  In dumputils, we may have successfully parsed the acls when we discover that we can't parse the reverse ACLs and then return; check for and free aclitems if that happens.
  In dumpTableSchema, move ftoptions and srvname under the relkind != RELKIND_VIEW branch (since they're only used there) and then check if they've been allocated and, if so, free them at the end of that block.
  Pointed out by Pavel Raiskup, though I didn't use those patches.
  Discussion: https://postgr.es/m/2183976.vkCJMhdhmF@nb.usersys.redhat.com
* Cleanup comments in xlog compression  (Stephen Frost, 2018-12-06)
  Skipping over the "hole" in full page images in the XLOG code was described as being a form of compression, but this got a bit confusing now that we also have PGLZ-based compression happening. Adjust the wording to talk about "removing" the "hole", and keep the talk about compression for where we are actually using PGLZ-based compression of the full page images.
  Reviewed-By: Kyotaro Horiguchi
  Discussion: https://postgr.es/m/20181127234341.GM3415@tamriel.snowman.net
* Don't mark partitioned indexes invalid unnecessarily  (Alvaro Herrera, 2018-12-05)
  When an index is created on a partitioned table using ONLY (don't recurse to partitions), it gets marked invalid until index partitions are attached for each table partition. But there's no reason to do this if there are no partitions ... and moreover, there's no way to get the index to become valid afterwards, because all partitions that get created/attached get their own index partition already attached to the parent index, so there's no chance to do ALTER INDEX ... ATTACH PARTITION that would make the parent index valid.
  Fix by not marking the index as invalid to begin with.
  This is very similar to 9139aa19423b, but the pg_dump aspect does not appear to be relevant until we add FKs that can point to PKs on partitioned tables. (I tried to cause the pg_upgrade test to break by leaving some of these bogus tables around, but wasn't able to.)
  Making this change means that an index that was supposed to be invalid in the insert_conflict regression test is no longer invalid; reorder the DDL so that the test continues to verify the behavior we want it to.
  Author: Álvaro Herrera
  Reviewed-by: Amit Langote
  Discussion: https://postgr.es/m/20181203225019.2vvdef2ybnkxt364@alvherre.pgsql
* Fix typo  (Stephen Frost, 2018-12-04)
  Backends don't typically exist uncleanly, but they can certainly exit uncleanly, and it's exiting uncleanly that's being discussed here.
* Add some missing schema qualifications  (Michael Paquier, 2018-12-03)
  This does not improve the security and reliability of the touched areas, but it makes the style more consistent.
  Author: Michael Paquier
  Reviewed-by: Noah Misch
  Discussion: https://postgr.es/m/20180309075538.GD9376@paquier.xyz
* Add PGXS options to control TAP and isolation tests, take two  (Michael Paquier, 2018-12-03)
  The following options are added for extensions:
  - TAP_TESTS, to allow an extension to run TAP tests, which are the ones present in t/*.pl. A subset of tests can always be run with the existing PROVE_TESTS for developers.
  - ISOLATION, to define a list of isolation tests.
  - ISOLATION_OPTS, to pass custom options to isolation_tester.
  Custom Makefile rules had accumulated across the tree over a couple of releases to cover the lack of such facilities in PGXS when using those test suites; these are all now replaced with the new flags, without reducing the test coverage. Note that tests of contrib/bloom/ are not enabled yet, as those are proving unstable in the buildfarm.
  Author: Michael Paquier
  Reviewed-by: Adam Berlin, Álvaro Herrera, Tom Lane, Nikolay Shaplov, Arthur Zakirov
  Discussion: https://postgr.es/m/20180906014849.GG2726@paquier.xyz
* Eliminate parallel-make hazard in ecpg/preproc.  (Tom Lane, 2018-12-01)
  Re-making ecpglib's typename.o is dangerous because another make thread could be doing that at the same time. While we've not heard field complaints traceable to this, it seems inevitable that it'd bite someone eventually. Instead, symlink typename.c into the preproc directory and recompile it there. That file is small enough that compiling it twice isn't much of a penalty. Furthermore, this way we get a .o file that's made without shlib CFLAGS, which seems cleaner.
  This requires adding more stuff to the module's -I list. The MSVC aspect of that is untested, but I'm sure the buildfarm will tell me if I got it wrong.
  Per a suggestion from Peter Eisentraut. Although this is theoretically a bug fix, the lack of field reports makes me feel we needn't back-patch.
  Discussion: https://postgr.es/m/31364.1543511708@sss.pgh.pa.us
* Rename ecpg's various "extern.h" files to have distinct names.  (Tom Lane, 2018-12-01)
  This should reduce confusion, and in particular make it safe to copy typename.c into preproc/ and compile it there. This doesn't affect anything outside ecpg, and particularly not end users, because these files don't get installed; they just exist to share declarations among the .c files of each subdirectory.
  Discussion: https://postgr.es/m/31364.1543511708@sss.pgh.pa.us
* Add a --socketdir option to pg_upgrade.  (Tom Lane, 2018-12-01)
  This allows control of the directory in which the postmaster sockets are created for the temporary postmasters started by pg_upgrade. The default location remains the current working directory, which is typically fine, but if it is deeply nested then its pathname might be too long to be a socket name.
  In passing, clean up some messiness in pg_upgrade's option handling, particularly the confusing and undocumented way that configuration-only datadirs were handled. And fix check_required_directory's substantially under-baked cleanup of directory pathnames.
  Daniel Gustafsson, reviewed by Hironobu Suzuki, some code cleanup by me
  Discussion: https://postgr.es/m/E72DD5C3-2268-48A5-A907-ED4B34BEC223@yesql.se
* Fix tablespace path TAP test of pg_verify_checksums for msys  (Michael Paquier, 2018-12-01)
  TAP tests on msys need to run with the DTK perl, which understands msys virtualized paths. Postgres, however, does not understand such paths, so before a path can be used safely with CREATE TABLESPACE, it needs to be translated into a path on the underlying file system.
  Per report from buildfarm member jacana. Suggested fix is from Andrew Dunstan.
  Discussion: https://postgr.es/m/20181130053555.GF2267@paquier.xyz
* Silence compiler warning  (Alvaro Herrera, 2018-11-30)
  My original coding was questionable anyway.
  Reported-by: Sergei Kornilov
  Discussion: https://postgr.es/m/9645101543575886@myt6-27270b78ac4f.qloud-c.yandex.net
* Fix typo.  (Amit Kapila, 2018-11-30)
* Fix various checksum check problems for pg_verify_checksums and base backups  (Michael Paquier, 2018-11-30)
  Three issues are fixed in this patch:
  - Base backups forgot to ignore files specific to EXEC_BACKEND, leading to spurious warnings when checksums are enabled, per analysis from me.
  - pg_verify_checksums forgot about files specific to EXEC_BACKEND, leading to failures of the tool on any such build, particularly Windows. This error was originally found by newly-introduced TAP tests in various buildfarm members using EXEC_BACKEND.
  - pg_verify_checksums forgot to account for temporary files and temporary paths, which could be valid relation files without checksums, per report from Andres Freund. More tests are added to cover this case.
  A new test case which emulates corruption for a file in a different tablespace is added, coming from Michael Banck, while I have written the main code and refactored the test code.
  Author: Michael Banck, Michael Paquier
  Reviewed-by: Stephen Frost, David Steele
  Discussion: https://postgr.es/m/20181021134206.GA14282@paquier.xyz
* Switch pg_verify_checksums back to a blacklist  (Michael Paquier, 2018-11-30)
  This basically reverts commit d55241af705667d4503638e3f77d3689fd6be31, leaving around the portion of the regression tests still adapted for empty relation files and corrupted cases. The whitelist approach was also failing to properly check relation files located in a non-default tablespace path.
  Per discussion with various folks, including Stephen Frost, David Steele, Andres Freund, Michael Banck and myself.
  Reported-by: Michael Banck
  Discussion: https://postgr.es/m/20181021134206.GA14282@paquier.xyz
  Backpatch-through: 11
* Add log_statement_sample_rate parameter  (Alvaro Herrera, 2018-11-29)
  This allows setting a lower log_min_duration_statement value without incurring excessive log traffic (which reduces performance). This can be useful for analyzing workloads with lots of short queries.
  Author: Adrien Nayrat
  Reviewed-by: David Rowley, Vik Fearing
  Discussion: https://postgr.es/m/c30ee535-ee1e-db9f-fa97-146b9f62caed@anayrat.info
* Ensure static libraries have correct mod time even if ranlib messes it up.  (Tom Lane, 2018-11-29)
  In at least Apple's version of ranlib, the output file is updated to have a mod time equal to the max of the timestamps of its components, and that data only has seconds precision. On a filesystem with sub-second file timestamp precision --- say, APFS --- this can result in the finished static library appearing older than its input files, which causes useless rebuilds and possible outright failures in parallel makes.
  We've only seen this reported in the field from people using Apple's ranlib with a non-Apple make, because Apple's make doesn't know about sub-second timestamps either so it doesn't decide rebuilds are needed. But Apple's ranlib presumably shares code with at least some BSDen, so it's not that unlikely that the same problem could arise elsewhere.
  To fix, just "touch" the output file after ranlib finishes. We seem to need this in only one place. There are other calls of ranlib in our makefiles, but they are working on intermediate files whose timestamps are not actually important, or else on an installed static library for which sub-second timestamp precision is unlikely to matter either. (Also, so far as I can tell, Apple's ranlib doesn't mess up the file timestamp in the latter usage anyhow.)
  In passing, change "ranlib" to "$(RANLIB)" in one place that was bypassing the make macro for no good reason.
  Per bug #15525 from Jack Kelly (via Alyssa Ross). Back-patch to all supported branches.
  Discussion: https://postgr.es/m/15525-a30da084f17a1faa@postgresql.org
* Add support for NO_INSTALLCHECK in MSVC scripts  (Michael Paquier, 2018-11-29)
  When fetching a list of tests for a given extension in contrib/ or src/test/modules/, NO_INSTALLCHECK now gets checked first. If present, an empty list of tests is returned to let the caller know that tests for this module need to be bypassed.
  This actually fixes a set of issues with MSVC for modules using REGRESS_OPTS, as incorrect parsing caused the launched command to eat the first test listed. The actual effect on the tree is that several modules listed a single test, so their regression runs have been executing no actual tests. pg_stat_statements, test_rls_hooks and commit_ts were impacted by that. Some other modules like test_decoding (or snapshot_too_old) don't yet use PGXS rules, but their makefiles will soon be refactored with an upcoming patch.
  Author: Michael Paquier
  Reviewed-by: Andrew Dunstan
  Discussion: https://postgr.es/m/20181126054302.GI1776@paquier.xyz
* Fix minor typo in dsa.c.  (Thomas Munro, 2018-11-29)
  Author: Takeshi Ideriha
  Discussion: https://postgr.es/m/4E72940DA2BF16479384A86D54D0988A6F3BF22D%40G01JPEXMBKW04
* Add missing NO_INSTALLCHECK in commit_ts and test_rls_hooks  (Michael Paquier, 2018-11-29)
  This bypasses installcheck if specified, which makes sense for those modules as they require non-default configuration, something which typical users don't have. Those have been missing from the start; still, no back-patch is done.
  This will be used by an upcoming patch for MSVC scripts adding support for NO_INSTALLCHECK, as installcheck is the default mode for contrib and modules in the buildfarm for performance reasons.
  Author: Michael Paquier
  Reviewed-by: Andrew Dunstan
  Discussion: https://postgr.es/m/20181126054302.GI1776@paquier.xyz
* Fix handling of synchronous replication for stopping WAL senders  (Michael Paquier, 2018-11-29)
  This fixes an oversight from c6c3334 which forgot that if a subset of WAL senders are stopping and in a sync state, other WAL senders could still be waiting for a WAL position to be synced while committing a transaction. However, the subset of stopping senders would not release waiters, potentially breaking synchronous replication guarantees. This commit makes sure that even stopping WAL senders are able to release waiters and are tracked properly.
  On 9.4, this can also trigger an assertion failure when setting, for example, max_wal_senders to 1, where a WAL sender is not able to find itself in a synchronous state when the instance stops.
  Reported-by: Paul Guo
  Author: Paul Guo, Michael Paquier
  Discussion: https://postgr.es/m/CAEET0ZEv8VFqT3C-cQm6byOB4r4VYWcef1J21dOX-gcVhCSpmA@mail.gmail.com
  Backpatch-through: 9.4
* Have BufFileSize() ereport() on FileSize() failure.  (Peter Geoghegan, 2018-11-28)
  Move the responsibility for checking for and reporting a failure from the only current BufFileSize() caller, logtape.c, to BufFileSize() itself. Code within buffile.c is generally responsible for interfacing with fd.c to report irrecoverable failures. This seems like a convention that's worth sticking to.
  Reorganizing things this way makes it easy to make the error message raised in the event of BufFileSize() failure descriptive of the underlying problem. We're now clear on the distinction between temporary file name and BufFile name, and can show errno, confident that its value actually relates to the error being reported.
  In passing, an existing, similar buffile.c ereport() + errcode_for_file_access() site is changed to follow the same conventions.
  The API of the function BufFileSize() is changed by this commit, despite already being in a stable release (Postgres 11). This seems acceptable, since the BufFileSize() ABI was changed by commit aa551830421, which hasn't made it into a point release yet. Besides, it's difficult to imagine a third party BufFileSize() caller not just raising an error anyway, since BufFile state should be considered corrupt when BufFileSize() fails.
  Per complaint from Tom Lane.
  Discussion: https://postgr.es/m/26974.1540826748@sss.pgh.pa.us
  Backpatch: 11-, where shared BufFiles were introduced.
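  A minimal sketch of the convention described above (a fragment; the BufFile field names and the message wording are assumptions about buffile.c's internals):

      off_t       lastFileSize = FileSize(file->files[file->numFiles - 1]);

      if (lastFileSize < 0)
          ereport(ERROR,
                  (errcode_for_file_access(),
                   errmsg("could not determine size of temporary file \"%s\": %m",
                          FilePathName(file->files[file->numFiles - 1]))));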
* Only allow one recovery target setting  (Peter Eisentraut, 2018-11-28)
  The previous recovery.conf regime accepted multiple recovery_target* settings and used the last one. This does not translate well to the general GUC system. Specifically, under EXEC_BACKEND, the settings are written out not in any particular order, so the order in which they were originally set is not available to new processes.
  Rather than redesign the GUC system, it was decided to abandon the old behavior and only allow one recovery target setting. A second setting will cause an error. However, it is allowed to set the same parameter multiple times or unset a parameter and set a different one.
  Discussion: https://www.postgresql.org/message-id/flat/27802171543235530%40iva2-6ec8f0a6115e.qloud-c.yandex.net#701a59c837ad0bf8c244344aaf3ef5a4
* Don't set PAM_RHOST for Unix sockets.  (Thomas Munro, 2018-11-28)
  Since commit 2f1d2b7a we have set PAM_RHOST to "[local]" for Unix sockets. This caused Linux PAM's libaudit integration to make DNS requests for that name. It's not exactly clear what value PAM_RHOST should have in that case, but it seems clear that we shouldn't set it to an unresolvable name, so don't do that.
  Back-patch to 9.6. Bug #15520.
  Author: Thomas Munro
  Reviewed-by: Peter Eisentraut
  Reported-by: Albert Schabhuetl
  Discussion: https://postgr.es/m/15520-4c266f986998e1c5%40postgresql.org
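  A minimal sketch of the intent (illustrative only, not the actual auth.c code; the variable names and the Unix-socket test are assumptions):

      #include <security/pam_appl.h>

      /* Only tell PAM about a remote host for TCP connections. */
      if (!connection_is_unix_socket)
      {
          if (pam_set_item(pamh, PAM_RHOST, remote_hostname) != PAM_SUCCESS)
          {
              /* report the PAM error and abort authentication */
          }
      }
      /* For Unix-socket connections, leave PAM_RHOST unset instead of "[local]". */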
* Do not decode TOAST data for table rewrites  (Tomas Vondra, 2018-11-28)
  During table rewrites (VACUUM FULL and CLUSTER), the main heap is logged using XLOG / FPI records, and thus (correctly) ignored in decoding. But the associated TOAST table is WAL-logged as plain INSERT records, and so was logically decoded and passed to the reorder buffer.
  That has severe consequences with TOAST tables of non-trivial size. Firstly, the reorder buffer has to keep all those changes, possibly spilling them to a file, incurring I/O costs and consuming disk space. Secondly, ReorderBufferCommit() was stashing all those TOAST chunks into a hash table, which got discarded only after processing the row from the main heap. But as the main heap is not decoded for rewrites, this never happened, so all the TOAST data accumulated in memory, resulting either in excessive memory consumption or OOM.
  The fix is simple, as commit e9edc1ba already introduced infrastructure (namely the HEAP_INSERT_NO_LOGICAL flag) to skip logical decoding of TOAST tables, but it only applied it to system tables. So simply use it for all TOAST data in raw_heap_insert().
  That would however solve only the memory consumption issue - the TOAST changes would still be decoded and added to the reorder buffer, and spilled to disk (although without TOAST tuple data, so much smaller). But we can solve that by tweaking DecodeInsert() to just ignore such INSERT records altogether, using the XLH_INSERT_CONTAINS_NEW_TUPLE flag, instead of skipping them later in ReorderBufferCommit().
  Review: Masahiko Sawada
  Discussion: https://www.postgresql.org/message-id/flat/1a17c643-e9af-3dba-486b-fbe31bc1823a%402ndquadrant.com
  Backpatch: 9.4-, where logical decoding was introduced
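  A minimal sketch of the DecodeInsert() tweak described above (a fragment; the record-access boilerplate around it is simplified):

      xl_heap_insert *xlrec = (xl_heap_insert *) XLogRecGetData(record);

      /*
       * Ignore inserts that carry no new tuple, e.g. TOAST rows written by a
       * table rewrite: there is nothing useful to decode for them.
       */
      if (!(xlrec->flags & XLH_INSERT_CONTAINS_NEW_TUPLE))
          return;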
* Use wildcard to match parens after CREATE STATISTICS  (Tomas Vondra, 2018-11-28)
  CREATE STATISTICS completion was checking manually for the start and end of the parenthesised list of types. That works, but we now have a better way to do that as commit 121213d9d taught word_matches() to allow '*' in the middle of an alternative. But it only applied that to tab completion for EXPLAIN, ANALYZE and VACUUM. Use it for CREATE STATISTICS too.
  Author: Dagfinn Ilmari Mannsåker
  Discussion: https://www.postgresql.org/message-id/flat/d8jwooziy1s.fsf%40dalvik.ping.uio.no