path: root/contrib
Commit message / Author / Age
...
* Add combining characters to unaccent.rules. (Thomas Munro, 2019-02-01)
  Strip certain classes of combining characters, so that accents encoded this way are removed.

  Author: Hugh Ranalli
  Discussion: https://postgr.es/m/15548-cef1b3f8de190d4f%40postgresql.org
* Rename nodes/relation.h to nodes/pathnodes.h. (Tom Lane, 2019-01-29)
  The old name of this file was never a very good indication of what it was for. Now that there's also access/relation.h, we have a potential confusion hazard as well, so let's rename it to something more apropos. Per discussion, "pathnodes.h" is reasonable, since a good fraction of the file is Path node definitions.

  While at it, tweak a couple of other headers that were gratuitously importing relation.h into modules that don't need it.

  Discussion: https://postgr.es/m/7719.1548688728@sss.pgh.pa.us
* Refactor planner's header files. (Tom Lane, 2019-01-29)
  Create a new header optimizer/optimizer.h, which exposes just the planner functions that can be used "at arm's length", without need to access Paths or the other planner-internal data structures defined in nodes/relation.h. This is intended to provide the whole planner API seen by most of the rest of the system; although FDWs still need to use additional stuff, and more thought is also needed about just what selfuncs.c should rely on.

  The main point of doing this now is to limit the amount of new #include baggage that will be needed by "planner support functions", which I expect to introduce later, and which will be in relevant datatype modules rather than anywhere near the planner.

  This commit just moves relevant declarations into optimizer.h from other header files (a couple of which go away because everything got moved), and adjusts #include lists to match. There's further cleanup that could be done if we want to decide that some stuff being exposed by optimizer.h doesn't belong in the planner at all, but I'll leave that for another day.

  Discussion: https://postgr.es/m/11460.1548706639@sss.pgh.pa.us
* postgres_fdw: Fix test for cached costs in estimate_path_cost_size(). (Etsuro Fujita, 2019-01-29)
  estimate_path_cost_size() failed to re-use cached costs when the cached startup/total cost was 0, so it calculated the costs redundantly. This is an oversight in commit aa09cd242f; but apply the patch to HEAD only because there are no reports of actual trouble from that.

  Author: Etsuro Fujita
  Discussion: https://postgr.es/m/5C4AF3F3.4060409%40lab.ntt.co.jp
* In the planner, replace an empty FROM clause with a dummy RTE. (Tom Lane, 2019-01-28)
  The fact that "SELECT expression" has no base relations has long been a thorn in the side of the planner. It makes it hard to flatten a sub-query that looks like that, or is a trivial VALUES() item, because the planner generally uses relid sets to identify sub-relations, and such a sub-query would have an empty relid set if we flattened it. prepjointree.c contains some baroque logic that works around this in certain special cases --- but there is a much better answer. We can replace an empty FROM clause with a dummy RTE that acts like a table of one row and no columns, and then there are no such corner cases to worry about. Instead we need some logic to get rid of useless dummy RTEs, but that's simpler and covers more cases than what was there before.

  For really trivial cases, where the query is just "SELECT expression" and nothing else, there's a hazard that adding the extra RTE makes for a noticeable slowdown; even though it's not much processing, there's not that much for the planner to do overall. However testing says that the penalty is very small, close to the noise level. In more complex queries, this is able to find optimizations that we could not find before.

  The new RTE type is called RTE_RESULT, since the "scan" plan type it gives rise to is a Result node (the same plan we produced for a "SELECT expression" query before). To avoid confusion, rename the old ResultPath path type to GroupResultPath, reflecting that it's only used in degenerate grouping cases where we know the query produces just one grouped row. (It wouldn't work to unify the two cases, because there are different rules about where the associated quals live during query_planner.)

  Note: although this touches readfuncs.c, I don't think a catversion bump is required, because the added case can't occur in stored rules, only plans.

  Patch by me, reviewed by David Rowley and Mark Dilger
  Discussion: https://postgr.es/m/15944.1521127664@sss.pgh.pa.us
* Revert "Avoid creation of the free space map for small heap relations."Amit Kapila2019-01-28
| | | | This reverts commit ac88d2962a96a9c7e83d5acfc28fe49a72812086.
* Avoid creation of the free space map for small heap relations. (Amit Kapila, 2019-01-28)
  Previously, all heaps had FSMs. For very small tables, this means that the FSM took up more space than the heap did. This is wasteful, so now we refrain from creating the FSM for heaps with 4 pages or fewer. If the last known target block has insufficient space, we still try to insert into some other page before giving up and extending the relation, since doing otherwise leads to table bloat. Testing showed that trying every page penalized performance slightly, so we compromise and try every other page. This way, we visit at most two pages. Any pages with wasted free space become visible at next relation extension, so we still control table bloat. As a bonus, directly attempting one or two pages can even be faster than consulting the FSM would have been.

  Once the FSM is created for a heap we don't remove it even if somebody deletes all the rows from the corresponding relation. We don't think it is a useful optimization as it is quite likely that relation will again grow to the same size.

  Author: John Naylor with design inputs and some code contribution by Amit Kapila
  Reviewed-by: Amit Kapila
  Tested-by: Mithun C Y
  Discussion: https://www.postgresql.org/message-id/CAJVSVGWvB13PzpbLEecFuGFc5V2fsO736BsdTakPiPAcdMM5tQ@mail.gmail.com
* Change function call information to be variable length. (Andres Freund, 2019-01-26)
  Before this change FunctionCallInfoData, the struct that arguments etc. for V1 function calls are stored in, always had space for FUNC_MAX_ARGS (i.e. 100) arguments, storing datums and their nullness in two arrays. For nearly every function call 100 arguments is far more than needed, therefore wasting memory. Arg and argnull being two separate arrays also guarantees that to access a single argument, two cachelines have to be touched.

  Change the layout so there's a single variable-length array with pairs of value / isnull. That drastically reduces memory consumption for most function calls (on x86-64 a two-argument function now uses 64 bytes, previously 936 bytes), and makes it very likely that argument value and its nullness are on the same cacheline.

  Arguments are stored in a new NullableDatum struct, which, due to padding, needs more memory per argument than before. But as usually far fewer arguments are stored, and individual arguments are cheaper to access, that's still a clear win. It's likely that there's other places where conversion to NullableDatum arrays would make sense, e.g. TupleTableSlots, but that's for another commit.

  Because the function call information is now variable-length, allocations have to take the number of arguments into account. For heap allocations that can be done with SizeForFunctionCallInfoData(); for on-stack allocations there's a new LOCAL_FCINFO(name, nargs) macro that helps to allocate an appropriately sized and aligned variable. Some places with stack-allocated function call information don't know the number of arguments at compile time, and currently variably sized stack allocations aren't allowed in postgres. Therefore allow for FUNC_MAX_ARGS space in these cases. They're not that common, so for now that seems acceptable.

  Because of the need to allocate FunctionCallInfo of the appropriate size, older extensions may need to update their code. To avoid subtle breakages, the FunctionCallInfoData struct has been renamed to FunctionCallInfoBaseData. Most code only references FunctionCallInfo, so that shouldn't cause much collateral damage.

  This change is also a prerequisite for more efficient expression JIT compilation (by allocating the function call information on the stack, allowing LLVM to optimize it away); previously the size of the call information caused problems inside LLVM's optimizer.

  Author: Andres Freund
  Reviewed-By: Tom Lane
  Discussion: https://postgr.es/m/20180605172952.x34m5uz6ju6enaem@alap3.anarazel.de
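As an illustration of the calling convention described in the commit above, here is a minimal, hedged sketch of how an extension might build and invoke a two-argument function call under the variable-length layout. It uses only the macros named in the commit message (LOCAL_FCINFO, SizeForFunctionCallInfoData) plus standard fmgr facilities; the wrapper function itself and its argument values are hypothetical, not part of the commit.

    #include "postgres.h"
    #include "fmgr.h"

    /* Sketch: invoke a two-argument SQL-callable function by OID. */
    static Datum
    call_two_arg_function(Oid funcoid, Datum arg0, Datum arg1, bool *isnull)
    {
        FmgrInfo    flinfo;
        LOCAL_FCINFO(fcinfo, 2);    /* on-stack, sized for exactly 2 args */
        Datum       result;

        fmgr_info(funcoid, &flinfo);
        InitFunctionCallInfoData(*fcinfo, &flinfo, 2, InvalidOid, NULL, NULL);

        /* arguments now live in a single array of value/isnull pairs */
        fcinfo->args[0].value = arg0;
        fcinfo->args[0].isnull = false;
        fcinfo->args[1].value = arg1;
        fcinfo->args[1].isnull = false;

        result = FunctionCallInvoke(fcinfo);
        *isnull = fcinfo->isnull;
        return result;
    }

    /*
     * A heap-allocated variant would size the struct explicitly, e.g.:
     *   FunctionCallInfo fcinfo = palloc0(SizeForFunctionCallInfoData(nargs));
     */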
* postgres_fdw: Account for tlist eval costs in estimate_path_cost_size(). (Etsuro Fujita, 2019-01-24)
  Previously, estimate_path_cost_size() didn't account for tlist eval costs, except when costing a foreign-grouping path using local statistics, but such costs should be accounted for when costing that path using remote estimates, because some of the tlist expressions might be evaluated locally. Also, such costs should be accounted for in the case of a foreign-scan or foreign-join path, because the tlist might contain PlaceHolderVars, which postgres_fdw currently evaluates locally.

  This also fixes an oversight in my commit f8f6e44676. Like that commit, apply this to HEAD only to avoid destabilizing existing plan choices.

  Author: Etsuro Fujita
  Discussion: https://postgr.es/m/5BFD3EAD.2060301%40lab.ntt.co.jp
* Fix misc typos in comments. (Heikki Linnakangas, 2019-01-23)
  Spotted mostly by Fabien Coelho.

  Discussion: https://www.postgresql.org/message-id/alpine.DEB.2.21.1901230947050.16643@lancre
* Move remaining code from tqual.[ch] to heapam.h / heapam_visibility.c. (Andres Freund, 2019-01-21)
  Given these routines are heap specific, and that there will be more generic visibility support via table AMs, it makes sense to move the prototypes to heapam.h (routines like HeapTupleSatisfiesVacuum will not be exposed in a generic fashion, because they are too storage specific).

  Similarly, the code in tqual.c is specific to heap, so moving it into access/heap/ makes sense.

  Author: Andres Freund
  Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
* Move generic snapshot related code from tqual.h to snapmgr.h. (Andres Freund, 2019-01-21)
  The code in tqual.c is largely heap specific. Due to the upcoming pluggable storage work, it therefore makes sense to move it into access/heap/ (as the file's header notes, the tqual name isn't very good).

  But the various statically allocated snapshot and snapshot initialization functions are now (see previous commit) generic and do not depend on functions declared in tqual.h anymore. Therefore move. Also move XidInMVCCSnapshot as that's useful for future AMs, and already used outside of tqual.c.

  Author: Andres Freund
  Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
* Change snapshot type to be determined by enum rather than callback. (Andres Freund, 2019-01-21)
  This is in preparation for allowing the same snapshot to be used for different table AMs. With the current callback based approach we would need one callback for each supported AM, which clearly would not be extensible.

  Thus add a new Snapshot->snapshot_type field, and move the dispatch into HeapTupleSatisfiesVisibility() (which is now a function). Later work will then dispatch calls to HeapTupleSatisfiesVisibility() and other AMs' visibility functions depending on the type of the table.

  The central SnapshotType enum also seems like a good location to centralize documentation about the intended behaviour of various types of snapshots.

  As tqual.h isn't included by bufmgr.h any more (as HeapTupleSatisfies* isn't referenced by TestForOldSnapshot() anymore), a few files now need to include it directly.

  Author: Andres Freund, loosely based on earlier work by Haribabu Kommi
  Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
  Discussion: https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
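To make the mechanism concrete, here is a simplified sketch, not the actual heapam_visibility.c code, of the kind of enum-based dispatch the commit above describes: HeapTupleSatisfiesVisibility() becomes a real function that switches on snapshot->snapshot_type instead of calling through a per-snapshot callback. The enum members and heap helper names follow the commit's naming; treat the details as assumptions.

    /* Simplified sketch of enum-based snapshot dispatch (assumed helper names). */
    bool
    HeapTupleSatisfiesVisibility(HeapTuple tup, Snapshot snapshot, Buffer buffer)
    {
        switch (snapshot->snapshot_type)
        {
            case SNAPSHOT_MVCC:
                return HeapTupleSatisfiesMVCC(tup, snapshot, buffer);
            case SNAPSHOT_SELF:
                return HeapTupleSatisfiesSelf(tup, snapshot, buffer);
            case SNAPSHOT_ANY:
                return HeapTupleSatisfiesAny(tup, snapshot, buffer);
            case SNAPSHOT_DIRTY:
                return HeapTupleSatisfiesDirty(tup, snapshot, buffer);
            /* ... remaining SnapshotType members handled the same way ... */
        }
        return false;           /* keep compiler quiet */
    }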
* Fix sepgsql regression test. (Tom Lane, 2019-01-21)
  Message order in the expected output changes due to commit f1ad067fc. Per buildfarm.

  Discussion: https://postgr.es/m/20190121201134.dyx6anto6akflh5d@alap3.anarazel.de
* Remove superfluous tqual.h includes. (Andres Freund, 2019-01-21)
  Most of these had been obsoleted by 568d4138c / the SnapshotNow removal.

  This is in preparation for moving most of tqual.[ch] into either snapmgr.h or heapam.h, which in turn is in preparation for pluggable table AMs.

  Author: Andres Freund
  Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
* Replace uses of heap_open et al with the corresponding table_* function. (Andres Freund, 2019-01-21)
  Author: Andres Freund
  Discussion: https://postgr.es/m/20190111000539.xbv7s6w7ilcvm7dp@alap3.anarazel.de
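For extension authors tracking this change, the substitution is mechanical; the following hedged sketch shows the old heap_open/heap_close calls next to their table_* equivalents (the surrounding helper function and lock level are placeholders, not part of the commit).

    #include "postgres.h"
    #include "access/table.h"       /* table_open / table_close */
    #include "storage/lockdefs.h"
    #include "utils/rel.h"

    static void
    scan_example(Oid relid)
    {
        /* Previously: Relation rel = heap_open(relid, AccessShareLock); */
        Relation    rel = table_open(relid, AccessShareLock);

        /* ... work with the relation ... */

        /* Previously: heap_close(rel, AccessShareLock); */
        table_close(rel, AccessShareLock);
    }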
* Replace heapam.h includes with {table, relation}.h where applicable. (Andres Freund, 2019-01-21)
  A lot of files only included heapam.h for relation_open, heap_open etc - replace the heapam.h include in those files with the narrower header.

  Author: Andres Freund
  Discussion: https://postgr.es/m/20190111000539.xbv7s6w7ilcvm7dp@alap3.anarazel.de
* Replace @postgresql.org with @lists.postgresql.org for mailinglists (Magnus Hagander, 2019-01-19)
  Commit c0d0e54084 replaced the ones in the documentation, but missed out on the ones in the code. Replace those as well, but unlike c0d0e54084, don't backpatch the code changes to avoid breaking translations.
* postgres_fdw: Remove duplicate code in DML execution callback functions. (Etsuro Fujita, 2019-01-17)
  postgresExecForeignInsert(), postgresExecForeignUpdate(), and postgresExecForeignDelete() are coded almost identically, except that postgresExecForeignInsert() does not need CTID. Extract that code into a separate function and use it in all three function implementations.

  Author: Ashutosh Bapat
  Reviewed-By: Michael Paquier
  Discussion: https://postgr.es/m/CAFjFpRcz8yoY7cBTYofcrCLwjaDeCcGKyTUivUbRiA57y3v-bw%40mail.gmail.com
* Don't include heapam.h from other headers. (Andres Freund, 2019-01-14)
  heapam.h previously was included in a number of widely used headers (e.g. execnodes.h, indirectly in executor.h, ...). That's problematic on its own, as heapam.h contains a lot of low-level details that don't need to be exposed that widely, but becomes more problematic with the upcoming introduction of pluggable table storage - it seems inappropriate for heapam.h to be included that widely afterwards.

  heapam.h was largely only included in other headers to get the HeapScanDesc typedef (which was defined in heapam.h, even though HeapScanDescData is defined in relscan.h). The better solution here seems to be to just use the underlying struct (forward declared where necessary). Similar for BulkInsertState.

  Another problem was that LockTupleMode was used in executor.h - parts of the file tried to cope without heapam.h, but due to the fact that it indirectly included it, several subsequent violations of that goal were not noticed. We could just reuse the approach of declaring parameters as int, but it seems nicer to move LockTupleMode to lockoptions.h - that's not a perfect location, but also doesn't seem bad.

  As a number of files relied on implicitly included heapam.h, a significant number of files grew an explicit include. It's quite probable that a few external projects will need to do the same.

  Author: Andres Freund
  Reviewed-By: Alvaro Herrera
  Discussion: https://postgr.es/m/20190114000701.y4ttcb74jpskkcfb@alap3.anarazel.de
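For external code that compiled only because executor.h used to drag in heapam.h, the practical fix is usually just adding the missing include (and, for LockTupleMode, picking it up from its new home). A hedged sketch of the include adjustments an affected extension might need; exact needs depend on which symbols it actually uses.

    #include "postgres.h"

    #include "access/heapam.h"      /* now needed explicitly for heap_* routines */
    #include "access/relscan.h"     /* HeapScanDescData lives here */
    #include "nodes/lockoptions.h"  /* LockTupleMode moved here */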
* Extend pg_stat_statements_reset to reset statistics specific to a particular user/db/query. (Amit Kapila, 2019-01-11)
  The function pg_stat_statements_reset() is extended to accept userid, dbid, and queryid as input parameters. Now, it can discard the statistics gathered so far by pg_stat_statements corresponding to the specified userid, dbid, and queryid. If no parameter is specified or all the specified parameters have default value aka 0, it will discard all statistics as per the old behavior.

  The new behavior is useful to get the fresh statistics for a specific user/database/query without resetting all the existing statistics.

  Author: Haribabu Kommi, with few additional changes by me
  Reviewed-by: Michael Paquier, Amit Kapila and Fujii Masao
  Discussion: https://postgr.es/m/CAJrrPGcyh-gkFswyc6C661K6cknL0XkNqVT0sQt2mFNMR4HRKA@mail.gmail.com
* Update unaccent rules with release 34 of CLDR for Latin-ASCII.xml (Michael Paquier, 2019-01-10)
  This has required an update of the python script generating the rules, as its format has changed in release 29. This release has also added new punctuation and symbols, and a new set of rules has been generated to include them. The way to find the newest versions of Latin-ASCII.xml is also more clearly documented.

  Author: Hugh Ranalli, Michael Paquier
  Discussion: https://postgr.es/m/15548-cef1b3f8de190d4f@postgresql.org
* Replace the data structure used for keyword lookup. (Tom Lane, 2019-01-06)
  Previously, ScanKeywordLookup was passed an array of string pointers. This had some performance deficiencies: the strings themselves might be scattered all over the place depending on the compiler (and some quick checking shows that at least with gcc-on-Linux, they indeed weren't reliably close together). That led to very cache-unfriendly behavior as the binary search touched strings in many different pages. Also, depending on the platform, the string pointers might need to be adjusted at program start, so that they couldn't be simple constant data. And the ScanKeyword struct had been designed with an eye to 32-bit machines originally; on 64-bit it requires 16 bytes per keyword, making it even more cache-unfriendly.

  Redesign so that the keyword strings themselves are allocated consecutively (as part of one big char-string constant), thereby eliminating the touch-lots-of-unrelated-pages syndrome. And get rid of the ScanKeyword array in favor of three separate arrays: uint16 offsets into the keyword array, uint16 token codes, and uint8 keyword categories. That reduces the overhead per keyword to 5 bytes instead of 16 (even less in programs that only need one of the token codes and categories); moreover, the binary search only touches the offsets array, further reducing its cache footprint. This also lets us put the token codes somewhere else than the keyword strings are, which avoids some unpleasant build dependencies.

  While we're at it, wrap the data used by ScanKeywordLookup into a struct that can be treated as an opaque type by most callers. That doesn't change things much right now, but it will make it less painful to switch to a hash-based lookup method, as is being discussed in the mailing list thread.

  Most of the change here is associated with adding a generator script that can build the new data structure from the same list-of-PG_KEYWORD header representation we used before. The PG_KEYWORD lists that plpgsql and ecpg used to embed in their scanner .c files have to be moved into headers, and the Makefiles have to be taught to invoke the generator script. This work is also necessary if we're to consider hash-based lookup, since the generator script is what would be responsible for constructing a hash table.

  Aside from saving a few kilobytes in each program that includes the keyword table, this seems to speed up raw parsing (flex+bison) by a few percent. So it's worth doing even as it stands, though we think we can gain even more with a follow-on patch to switch to hash-based lookup.

  John Naylor, with further hacking by me
  Discussion: https://postgr.es/m/CAJVSVGXdFVU2sgym89XPL=Lv1zOS5=EHHQ8XWNzFL=mTXkKMLw@mail.gmail.com
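To illustrate the layout described above, here is a small, self-contained sketch (with made-up field names and keyword data, not the actual ScanKeywordLookup code) of binary search over one concatenated keyword string plus a uint16 offsets array; the token-code array is only consulted after a match is found.

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical miniature keyword table: all strings in one constant blob. */
    static const char kw_string[] =
        "abort\0" "begin\0" "commit\0" "select\0";
    static const uint16_t kw_offsets[] = {0, 6, 12, 19};
    static const uint16_t kw_tokens[]  = {100, 101, 102, 103};   /* made-up codes */
    #define NUM_KEYWORDS 4

    /* Return the token code for an already lower-cased word, or -1 if absent. */
    static int
    keyword_lookup(const char *word)
    {
        int     low = 0;
        int     high = NUM_KEYWORDS - 1;

        while (low <= high)
        {
            int     middle = (low + high) / 2;
            int     cmp = strcmp(word, kw_string + kw_offsets[middle]);

            if (cmp == 0)
                return kw_tokens[middle];
            else if (cmp < 0)
                high = middle - 1;
            else
                low = middle + 1;
        }
        return -1;
    }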
* unaccent: Make generate_unaccent_rules.py Python 3 compatible (Peter Eisentraut, 2019-01-04)
  Python 2 is still supported.

  Author: Hugh Ranalli <hugh@whtc.ca>
  Discussion: https://www.postgresql.org/message-id/CAAhbUMNyZ+PhNr_mQ=G161K0-hvbq13Tz2is9M3WK+yX9cQOCw@mail.gmail.com
* Update copyright for 2019 (Bruce Momjian, 2019-01-02)
  Backpatch-through: certain files through 9.4
* Convert unaccent tests to UTF-8 (Peter Eisentraut, 2019-01-02)
  This makes it easier to add new tests that are specific to Unicode features. The files were previously in KOI8-R.

  Discussion: https://www.postgresql.org/message-id/8506.1545111362@sss.pgh.pa.us
* Remove configure switch --disable-strong-random (Michael Paquier, 2019-01-01)
  This removes a portion of infrastructure introduced by fe0a0b5 to allow compilation of Postgres in environments where no strong random source is available, meaning that there is no linking to OpenSSL and no /dev/urandom (Windows having its own CryptoAPI). No systems shipped this century lack /dev/urandom, and the buildfarm is actually not testing this switch at all, so just remove it.

  This simplifies particularly some backend code which included a fallback implementation using shared memory, and removes a set of alternate regression output files from pgcrypto.

  Author: Michael Paquier
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/20181230063219.GG608@paquier.xyz
* Fix generation of padding message before encrypting Elgamal in pgcrypto (Michael Paquier, 2019-01-01)
  fe0a0b5, which has added a stronger random source in Postgres, has introduced a thinko when creating a padding message which gets encrypted for Elgamal. The padding message cannot contain zero bytes, so zeros are replaced by random bytes. However, if pg_strong_random() failed, the message would still be treated as correctly formed for encryption even though it contained zeros.

  Author: Tom Lane
  Reviewed-by: Michael Paquier
  Discussion: https://postgr.es/m/20186.1546188423@sss.pgh.pa.us
  Backpatch-through: 10
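The underlying technique is simple: fill the padding with random bytes, re-draw any zero bytes, and treat a failure of the random source as a hard error rather than silently leaving zeros behind. A hedged, standalone sketch of that loop follows; fill_nonzero_padding is a made-up helper name, not the pgcrypto function.

    #include "postgres.h"
    #include "port.h"               /* pg_strong_random() */

    /* Fill buf with nonzero random padding; return false on RNG failure. */
    static bool
    fill_nonzero_padding(uint8 *buf, size_t len)
    {
        size_t      i;

        if (!pg_strong_random(buf, len))
            return false;           /* never proceed with zeros left in place */

        for (i = 0; i < len; i++)
        {
            while (buf[i] == 0)
            {
                /* re-draw individual bytes until they are nonzero */
                if (!pg_strong_random(&buf[i], 1))
                    return false;
            }
        }
        return true;
    }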
* Remove obsolete IndexIs* macros (Peter Eisentraut, 2018-12-27)
  Remove IndexIsValid(), IndexIsReady(), IndexIsLive() in favor of accessing the index structure directly. These macros haven't been used consistently, and the original reason of maintaining source compatibility with PostgreSQL 9.2 is gone.

  Discussion: https://www.postgresql.org/message-id/flat/d419147c-09d4-6196-5d9d-0234b230880a%402ndquadrant.com
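For code that used these macros, the replacement is direct access to the pg_index tuple fields. A hedged sketch of the pattern; the helper function and the syscache lookup around it are just illustrative context, not part of the commit.

    #include "postgres.h"
    #include "access/htup_details.h"
    #include "catalog/pg_index.h"
    #include "utils/syscache.h"

    /* Report whether an index is valid and live (illustrative helper). */
    static bool
    index_is_usable(Oid indexoid)
    {
        HeapTuple   tup = SearchSysCache1(INDEXRELID, ObjectIdGetDatum(indexoid));
        Form_pg_index indexForm;
        bool        result;

        if (!HeapTupleIsValid(tup))
            return false;
        indexForm = (Form_pg_index) GETSTRUCT(tup);

        /* Previously: IndexIsValid(indexForm) && IndexIsLive(indexForm) */
        result = indexForm->indisvalid && indexForm->indislive;

        ReleaseSysCache(tup);
        return result;
    }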
* Disable WAL-skipping optimization for COPY on views and foreign tables (Michael Paquier, 2018-12-23)
  COPY can skip writing WAL when loading data on a table which has been created in the same transaction as the one loading the data; however this cannot work on views or foreign tables, as this would result in trying to flush relation files which do not exist. So disable the optimization so that the command works the same way with any configuration of wal_level.

  Tests are added to cover the different cases, which need to have wal_level set to minimal to allow the problem to show up, and that is not the default configuration.

  Reported-by: Luis M. Carril, Etsuro Fujita
  Author: Amit Langote, Michael Paquier
  Reviewed-by: Etsuro Fujita
  Discussion: https://postgr.es/m/15552-c64aa14c5c22f63c@postgresql.org
  Backpatch-through: 10, where support for COPY on views has been added, while v11 has added support for COPY on foreign tables.
* Update sepgsql regression test results for commit ca4103025. (Tom Lane, 2018-12-18)
  Per buildfarm.
* Create a separate oid range for oids assigned by genbki.pl. (Andres Freund, 2018-12-13)
  The changes I made in 578b229718e assigned oids below FirstBootstrapObjectId to objects in include/catalog/*.dat files that did not have an oid assigned, starting at the max oid explicitly assigned. Tom criticized that for mainly two reasons:
  1) It's not clear which values are manually and which automatically assigned.
  2) The space below FirstBootstrapObjectId gets pretty crowded, and some PostgreSQL forks have used oids >= 9000 for their own objects, to avoid conflicting.

  Thus create a new range for objects not assigned explicit oids, but assigned by genbki.pl. For now 1-9999 is for explicitly assigned oids, FirstGenbkiObjectId (10000) to FirstBootstrapObjectId (12000) - 1 is for genbki.pl assigned oids, and < FirstNormalObjectId (16384) is for oids assigned during bootstrap. It's possible that we'll have to adjust these boundaries, but there's some headroom for now.

  Add a note suggesting that oids in forks should be assigned in the 9000-9999 range.

  Catversion bump for obvious reasons.

  Per complaint from Tom Lane.
  Author: Andres Freund
  Discussion: https://postgr.es/m/16845.1544393682@sss.pgh.pa.us
* Repair bogus EPQ plans generated for postgres_fdw foreign joins. (Tom Lane, 2018-12-12)
  postgres_fdw's postgresGetForeignPlan() assumes without checking that the outer_plan it's given for a join relation must have a NestLoop, MergeJoin, or HashJoin node at the top. That's been wrong at least since commit 4bbf6edfb (which could cause insertion of a Sort node on top) and it seems like a pretty unsafe thing to Just Assume even without that. Through blind good fortune, this doesn't seem to have any worse consequences today than strange EXPLAIN output, but it's clearly trouble waiting to happen.

  To fix, test the node type explicitly before touching Join-specific fields, and avoid jamming the new tlist into a node type that can't do projection. Export a new support function from createplan.c to avoid building low-level knowledge about the latter into FDWs.

  Back-patch to 9.6 where the faulty coding was added. Note that the associated regression test cases don't show any changes before v11, apparently because the tests back-patched with 4bbf6edfb don't actually exercise the problem case before then (there's no top-level Sort in those plans).

  Discussion: https://postgr.es/m/8946.1544644803@sss.pgh.pa.us
* Fix some errhint and errdetail strings missing a period (Michael Paquier, 2018-12-07)
  As per the error message style guide of the documentation, those should be full sentences.

  Author: Daniel Gustafsson
  Reviewed-by: Michael Paquier, Álvaro Herrera
  Discussion: https://1E8D49B4-16BC-4420-B4ED-58501D9E076B@yesql.se
* postgres_fdw: Improve cost and size estimation for aggregate pushdown. (Etsuro Fujita, 2018-12-04)
  In commit 7012b132d07c2b4ea15b0b3cb1ea9f3278801d98, which added aggregate pushdown to postgres_fdw, we didn't account for the evaluation cost and the selectivity of HAVING quals attached to ForeignPaths performing aggregate pushdown, as core had never accounted for that for AggPaths and GroupPaths. Nor did we set these values for the locally-checked quals (ie, fpinfo's local_conds_cost and local_conds_sel), which were left initialized to zeros; since estimate_path_cost_size factors these in to estimate the result size and the evaluation cost of such a ForeignPath when the use_remote_estimate option is enabled, this caused it to produce underestimated results in that case.

  By commit 7b6c07547190f056b0464098bb5a2247129d7aa2, core was changed so that it accounts for the evaluation cost and the selectivity of HAVING quals in aggregation paths, so change postgres_fdw's aggregate pushdown code to match. This not only fixes the underestimation issue mentioned above, but improves the estimation using local statistics in that function when that option is disabled.

  This would be a bug fix rather than an improvement, but apply it to HEAD only to avoid destabilizing existing plan choices.

  Author: Etsuro Fujita
  Discussion: https://postgr.es/m/5BFD3EAD.2060301%40lab.ntt.co.jp
* Add PGXS options to control TAP and isolation tests, take two (Michael Paquier, 2018-12-03)
  The following options are added for extensions:
  - TAP_TESTS, to allow an extension to run TAP tests which are the ones present in t/*.pl. A subset of tests can always be run with the existing PROVE_TESTS for developers.
  - ISOLATION, to define a list of isolation tests.
  - ISOLATION_OPTS, to pass custom options to isolation_tester.

  A couple of custom Makefile rules have been accumulated across the tree to cover the lack of facility in PGXS for a couple of releases when using those test suites, which are all now replaced with the new flags, without reducing the test coverage. Note that tests of contrib/bloom/ are not enabled yet, as those are proving unstable in the buildfarm.

  Author: Michael Paquier
  Reviewed-by: Adam Berlin, Álvaro Herrera, Tom Lane, Nikolay Shaplov, Arthur Zakirov
  Discussion: https://postgr.es/m/20180906014849.GG2726@paquier.xyz
* C comment: remove extra '*' (Bruce Momjian, 2018-11-28)
  Reported-by: Etsuro Fujita
  Author: Etsuro Fujita
  Discussion: https://postgr.es/m/5BFE34DE.1080404@lab.ntt.co.jp
  Backpatch-through: 10
* Revert all new recent changes to add PGXS options for TAP and isolation (Michael Paquier, 2018-11-26)
  A set of failures on buildfarm machines is proving that this is not quite ready yet because of another set of issues:
  - MSVC scripts assume that REGRESS_OPTS can only use top_builddir. Some test suites actually finish by using top_srcdir, like pg_stat_statements, which causes the regression tests to never run.
  - Trying to enforce top_builddir does not work either when using VPATH as this is not recognized properly.
  - TAP tests of bloom are unstable on various platforms, causing various failures.
* Fix regression test handling of test_decoding with MSVC (Michael Paquier, 2018-11-26)
  The set of scripts in charge of running the regression tests for MSVC currently runs under the assumption that only $(top_builddir) can be used in option values defined in REGRESS_OPTS, and those options need to have a specific format as well to be correctly parsed, so fix the Makefile values so that those are correctly set.

  Per complaints from buildfarm members dory and whelk, with some extra testing done on my side with MSVC to check this patch.
* Disable temporarily TAP tests for contrib/bloom/ (Michael Paquier, 2018-11-26)
  The recent commit 03faa4a8 has enabled those tests, however several buildfarm members are complaining about their stability on Windows and macOS. This will keep the buildfarm green, while investigating the root problem.

  Discussion: https://postgr.es/m/20181126003351.GE1776@paquier.xyz
* Add PGXS options to control TAP and isolation tests (Michael Paquier, 2018-11-26)
  The following options are added for extensions:
  - TAP_TESTS, to allow an extension to run TAP tests which are the ones present in t/*.pl. A subset of tests can always be run with the existing PROVE_TESTS for developers.
  - ISOLATION, to define a list of isolation tests.
  - ISOLATION_OPTS, to pass custom options to isolation_tester.

  A couple of custom Makefile targets have been accumulated across the tree to cover the lack of facility in PGXS for a couple of releases when using those test suites, which are all now replaced with the new flags, without reducing the test coverage. This also fixes an issue with contrib/bloom/, which had a custom target of its own to trigger its TAP tests that was not part of the main check runs.

  Author: Michael Paquier
  Reviewed-by: Adam Berlin, Álvaro Herrera, Tom Lane, Nikolay Shaplov, Arthur Zakirov
  Discussion: https://postgr.es/m/20180906014849.GG2726@paquier.xyz
* Integrate recovery.conf into postgresql.conf (Peter Eisentraut, 2018-11-25)
  recovery.conf settings are now set in postgresql.conf (or other GUC sources). Currently, all the affected settings are PGC_POSTMASTER; this could be refined in the future case by case.

  Recovery is now initiated by a file recovery.signal. Standby mode is initiated by a file standby.signal. The standby_mode setting is gone. If a recovery.conf file is found, an error is issued.

  The trigger_file setting has been renamed to promote_trigger_file as part of the move.

  The documentation chapter "Recovery Configuration" has been integrated into "Server Configuration".

  pg_basebackup -R now appends settings to postgresql.auto.conf and creates a standby.signal file.

  Author: Fujii Masao <masao.fujii@gmail.com>
  Author: Simon Riggs <simon@2ndquadrant.com>
  Author: Abhijit Menon-Sen <ams@2ndquadrant.com>
  Author: Sergei Kornilov <sk@zsrv.org>
  Discussion: https://www.postgresql.org/message-id/flat/607741529606767@web3g.yandex.ru/
* Fix hstore hash function for empty hstores upgraded from 8.4. (Andrew Gierth, 2018-11-24)
  Hstore data generated on pg 8.4 and pg_upgraded to current versions remains in its original on-disk format unless modified. The same goes for values generated by the addon hstore-new module on pre-9.0 versions. (The hstoreUpgrade function converts old values on the fly when read in, but the on-disk value is not modified by this.)

  Since old-format empty hstores (and hstore-new hstores) have representations compatible with the new format, hstoreUpgrade thought it could get away without modifying such values; but this breaks hstore_hash (and the new hstore_hash_extended) which assumes bit-perfect matching between semantically identical hstore values. Only one bit actually differs (the "new version" flag in the count field) but that of course is enough to break the hash.

  Fix by making hstoreUpgrade unconditionally convert all old values to new format.

  Backpatch all the way, even though this changes a hash value in some cases, because in those cases the hash value is already failing - for example, a hash join between old- and new-format empty hstores will be failing to match, or a hash index on an hstore column containing an old-format empty value will be failing to find the value since it will be searching for a hash derived from a new-format datum. (There are no known field reports of this happening, probably because hashing of hstores has only been useful in limited circumstances and there probably isn't much upgraded data being used this way.)

  Per concerns arising from discussion of commit eb6f29141be. Original bug is my fault.

  Discussion: https://postgr.es/m/60b1fd3b-7332-40f0-7e7f-f2f04f777747%402ndquadrant.com
* Avoid crashes in contrib/intarray gist__int_ops (bug #15518) (Andrew Gierth, 2018-11-24)
  1. Integer overflow in internal_size could result in memory corruption in decompression since a zero-length array would be allocated and then written to. This leads to crashes or corruption when traversing an index which has been populated with sufficiently sparse values. Fix by using int64 for computations and checking for overflow.

  2. Integer overflow in g_int_compress could cause pessimal merge choices, resulting in unnecessarily large ranges (which would in turn trigger issue 1 above). Fix by using int64 again.

  3. Even without overflow, array sizes could become large enough to cause unexplained memory allocation errors. Fix by capping the sizes to a safe limit and report actual errors pointing at gist__intbig_ops as needed.

  4. Large inputs to the compression function always consist of large runs of consecutive integers, and the compression loop was processing these one at a time in an O(N^2) manner with a lot of overhead. The expected runtime of this function could easily exceed 6 months for a single call as a result. Fix by performing a linear-time first pass, which reduces the worst case to something on the order of seconds.

  Backpatch all the way, since this has been wrong forever.

  Per bug #15518 from report from irc user "dymk", analysis and patch by me.

  Discussion: https://postgr.es/m/15518-799e426c3b4f8358@postgresql.org
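The fix pattern behind items 1 and 2 above (do the size arithmetic in int64 and refuse to proceed if it would exceed the allocator's limit, instead of letting a 32-bit multiplication wrap to a tiny value) is broadly reusable. Here is a hedged, generic sketch of that pattern using PostgreSQL's MaxAllocSize cap; it is an illustration, not the actual intarray patch, and the helper name is made up.

    #include "postgres.h"
    #include "utils/memutils.h"     /* MaxAllocSize */

    /*
     * Compute the number of bytes needed for nitems int32 values, doing the
     * arithmetic in int64 so an oversized request is detected instead of
     * silently wrapping around to a tiny (or zero-length) allocation.
     */
    static Size
    checked_int32_array_size(int64 nitems)
    {
        int64       nbytes = nitems * (int64) sizeof(int32);

        if (nitems < 0 || nbytes > (int64) MaxAllocSize)
            ereport(ERROR,
                    (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
                     errmsg("input array is too large")));
        return (Size) nbytes;
    }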
* Add a 64-bit hash function for type hstore. (Tom Lane, 2018-11-23)
  There's some question about the correctness of the hash function, but if it's wrong, the 32-bit version is also wrong.

  Amul Sul, reviewed by Hironobu Suzuki
  Discussion: https://postgr.es/m/CAAJ_b947JjnNr9Cp45iNjSqKf6PA5mCTmKsRwPjows93YwQrmw@mail.gmail.com
* Add a 64-bit hash function for type citext. (Tom Lane, 2018-11-23)
  Amul Sul, reviewed by Hironobu Suzuki

  Discussion: https://postgr.es/m/CAAJ_b947JjnNr9Cp45iNjSqKf6PA5mCTmKsRwPjows93YwQrmw@mail.gmail.com
* Silence compiler warnings (Alvaro Herrera, 2018-11-23)
  Commit cfdf4dc4fc96 left a few unnecessary assignments, one of which caused compiler warnings, as reported by Erik Rijkers. Remove them all.

  Discussion: https://postgr.es/m/df0dcca2025b3d90d946ecc508ca9678@xs4all.nl
* Add WL_EXIT_ON_PM_DEATH pseudo-event. (Thomas Munro, 2018-11-23)
  Users of the WaitEventSet and WaitLatch() APIs can now choose between asking for WL_POSTMASTER_DEATH and then handling it explicitly, or asking for WL_EXIT_ON_PM_DEATH to trigger immediate exit on postmaster death. This reduces code duplication, since almost all callers want the latter.

  Repair all code that was previously ignoring postmaster death completely, or requesting the event but ignoring it, or requesting the event but then doing an unconditional PostmasterIsAlive() call every time through its event loop (which is an expensive syscall on platforms for which we don't have USE_POSTMASTER_DEATH_SIGNAL support).

  Assert that callers of WaitLatchXXX() under the postmaster remember to ask for either WL_POSTMASTER_DEATH or WL_EXIT_ON_PM_DEATH, to prevent future bugs.

  The only process that doesn't handle postmaster death is syslogger. It waits until all backends holding the write end of the syslog pipe (including the postmaster) have closed it by exiting, to be sure to capture any parting messages. By using the WaitEventSet API directly it avoids the new assertion, and as a by-product it may be slightly more efficient on platforms that have epoll().

  Author: Thomas Munro
  Reviewed-by: Kyotaro Horiguchi, Heikki Linnakangas, Tom Lane
  Discussion: https://postgr.es/m/CAEepm%3D1TCviRykkUb69ppWLr_V697rzd1j3eZsRMmbXvETfqbQ%40mail.gmail.com
  Discussion: https://postgr.es/m/CAEepm=2LqHzizbe7muD7-2yHUbTOoF7Q+qkSD5Q41kuhttRTwA@mail.gmail.com
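For a typical background-worker loop, the API difference described above looks roughly like the following hedged sketch; the surrounding helper function, the timeout, and the wait-event class are placeholders, and the old explicit check is shown in the comment for comparison.

    #include "postgres.h"
    #include "miscadmin.h"
    #include "pgstat.h"             /* PG_WAIT_EXTENSION */
    #include "storage/latch.h"

    /* One iteration of an illustrative background-worker wait loop. */
    static void
    wait_for_work(long naptime_ms)
    {
        int         rc;

        /*
         * Old style:
         *   rc = WaitLatch(MyLatch, WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
         *                  naptime_ms, PG_WAIT_EXTENSION);
         *   if (rc & WL_POSTMASTER_DEATH)
         *       proc_exit(1);
         *
         * New style: WL_EXIT_ON_PM_DEATH makes WaitLatch() exit for us.
         */
        rc = WaitLatch(MyLatch,
                       WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
                       naptime_ms,
                       PG_WAIT_EXTENSION);

        if (rc & WL_LATCH_SET)
            ResetLatch(MyLatch);
    }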
* Blind attempt at fixing sepgsql output for 578b22. (Andres Freund, 2018-11-20)
* Fix sepgsql compile error caused by oid removal. (Andres Freund, 2018-11-20)
  Per buildfarm animal rhinoceros. I (Andres) missed replacing a few uses of ObjectIdAttributeNumber in sepgsql. It's quite probable that the sepgsql test output will need more adapting than done in 578b22...

  Author: Thomas Munro
  Discussion: https://postgr.es/m/CAEepm=2Sk+66HJV8FLDfm_sKTn22j7cWTY_Y1Rok3RxeWL_Y0w@mail.gmail.com