path: root/src/backend/utils
...
* Fix compiler warning in jsonpath_exec.c  (Alexander Korotkov, 2019-03-17)

    Warning was observed in gcc 4.4.6, gcc 4.4.7 and probably others.

    Reported-by: Tom Lane
    Discussion: https://postgr.es/m/25151.1552751426%40sss.pgh.pa.us
* Suppress -Wimplicit-fallthrough warnings in new jsonpath code.  (Tom Lane, 2019-03-16)

    Per buildfarm. See commit 41c912cad for precedent.
* Numeric error suppression in jsonpath  (Alexander Korotkov, 2019-03-16)

    Add support for numeric error suppression to jsonpath, as required by
    the standard.  This commit doesn't use PG_TRY()/PG_CATCH() to
    implement that; instead, it provides internal versions of the numeric
    functions used, which support error suppression.

    Discussion: https://postgr.es/m/fcc6fc6a-b497-f39a-923d-aa34d0c588e8%402ndQuadrant.com
    Author: Alexander Korotkov, Nikita Glukhov
    Reviewed-by: Tomas Vondra
* Partial implementation of SQL/JSON path language  (Alexander Korotkov, 2019-03-16)

    The SQL:2016 standard, among other things, contains a set of SQL/JSON
    features for JSON processing inside a relational database.  The core
    of SQL/JSON is the JSON path language, which allows access to parts
    of JSON documents and computations over them.  This commit implements
    partial support for the JSON path language as a separate datatype
    called "jsonpath".  The implementation is partial because it lacks
    datetime support and suppression of numeric errors.  Missing features
    will be added later by separate commits.

    Support of SQL/JSON features requires implementation of separate
    nodes, which will be considered in subsequent patches.  This commit
    includes the following set of plain functions, allowing execution of
    jsonpath over jsonb values:

    * jsonb_path_exists(jsonb, jsonpath[, jsonb, bool]),
    * jsonb_path_match(jsonb, jsonpath[, jsonb, bool]),
    * jsonb_path_query(jsonb, jsonpath[, jsonb, bool]),
    * jsonb_path_query_array(jsonb, jsonpath[, jsonb, bool]),
    * jsonb_path_query_first(jsonb, jsonpath[, jsonb, bool]).

    This commit also implements "jsonb @? jsonpath" and "jsonb @@
    jsonpath", which are wrappers over jsonpath_exists(jsonb, jsonpath)
    and jsonpath_predicate(jsonb, jsonpath) respectively.  These
    operators will get index support (implemented in subsequent patches).

    Catversion bumped, to add new functions and operators.

    Code was written by Nikita Glukhov and Teodor Sigaev, revised by me.
    Documentation was written by Oleg Bartunov and Liudmila Mantrova.
    The work was inspired by Oleg Bartunov.

    Discussion: https://postgr.es/m/fcc6fc6a-b497-f39a-923d-aa34d0c588e8%402ndQuadrant.com
    Author: Nikita Glukhov, Teodor Sigaev, Alexander Korotkov, Oleg Bartunov, Liudmila Mantrova
    Reviewed-by: Tomas Vondra, Andrew Dunstan, Pavel Stehule, Alexander Korotkov
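    Illustration (not from the commit message; document and path values
    invented): the new functions and operators can be exercised like so:

        -- does any element of $.a exceed 2?
        SELECT jsonb_path_exists('{"a": [1, 2, 3]}', '$.a[*] ? (@ > 2)');  -- true

        -- same check via the @? operator
        SELECT '{"a": [1, 2, 3]}'::jsonb @? '$.a[*] ? (@ > 2)';            -- true

        -- return the matching elements themselves
        SELECT jsonb_path_query('{"a": [1, 2, 3]}', '$.a[*] ? (@ >= 2)');  -- 2, 3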
* Avoid casting away a const  (Peter Eisentraut, 2019-03-16)
* Further reduce memory footprint of CLOBBER_CACHE_ALWAYS testing.  (Tom Lane, 2019-03-15)

    Some buildfarm members using CLOBBER_CACHE_ALWAYS have been having
    OOM problems of late.  Commit 2455ab488 addressed this problem by
    recovering space transiently used within RelationBuildPartitionDesc,
    but it turns out that leaves quite a lot on the table, because other
    subroutines of RelationBuildDesc also leak memory like mad.  Let's
    move the temp-context management into RelationBuildDesc so that
    leakage from the other subroutines is also recovered.

    I examined this issue by arranging for postgres.c to dump the size of
    MessageContext just before resetting it in each command cycle, and
    then running the update.sql regression test (which is one of the two
    that are seeing buildfarm OOMs) with and without
    CLOBBER_CACHE_ALWAYS.  Before 2455ab488, the peak space usage with
    CCA was as much as 250MB.  That patch got it down to ~80MB, but with
    this patch it's about 0.5MB, and indeed the space usage now seems
    nearly indistinguishable from a non-CCA build.

    RelationBuildDesc's traditional behavior of not worrying about
    leaking transient data is of many years' standing, so I'm pretty
    hesitant to change that without more evidence that it'd be useful in
    a normal build.  (So far as I can see, non-CCA memory consumption is
    about the same with or without this change, which if anything
    suggests that it isn't useful.)  Hence, configure the patch so that
    we recover space only when CLOBBER_CACHE_ALWAYS or
    CLOBBER_CACHE_RECURSIVELY is defined.  However, that choice can be
    overridden at compile time, in case somebody would like to do some
    performance testing and try to develop evidence for changing that
    decision.

    It's possible that we ought to back-patch this change, but in the
    absence of back-branch OOM problems in the buildfarm, I'm not in a
    hurry to do that.

    Discussion: https://postgr.es/m/CA+TgmoY3bRmGB6-DUnoVy5fJoreiBJ43rwMrQRCdPXuKt4Ykaw@mail.gmail.com
* Enable parallel query with SERIALIZABLE isolation.  (Thomas Munro, 2019-03-15)

    Previously, the SERIALIZABLE isolation level prevented parallel query
    from being used.  Allow the two features to be used together by
    sharing the leader's SERIALIZABLEXACT with parallel workers.

    An extra per-SERIALIZABLEXACT LWLock is introduced to make it safe to
    share, and new logic is introduced to coordinate the early release of
    the SERIALIZABLEXACT required for the SXACT_FLAG_RO_SAFE
    optimization, as follows:

    The first backend to observe the SXACT_FLAG_RO_SAFE flag (set by some
    other transaction) will 'partially release' the SERIALIZABLEXACT,
    meaning that the conflicts and locks it holds are released, but the
    SERIALIZABLEXACT itself will remain active because other backends
    might still have a pointer to it.

    Whenever any backend notices the SXACT_FLAG_RO_SAFE flag, it clears
    its own MySerializableXact variable and frees local resources so that
    it can skip SSI checks for the rest of the transaction.  In the
    special case of the leader process, it transfers the SERIALIZABLEXACT
    to a new variable SavedSerializableXact, so that it can be completely
    released at the end of the transaction after all workers have exited.

    Remove the serializable_okay flag added to CreateParallelContext() by
    commit 9da0cc35, because it's now redundant.

    Author: Thomas Munro
    Reviewed-by: Haribabu Kommi, Robert Haas, Masahiko Sawada, Kevin Grittner
    Discussion: https://postgr.es/m/CAEepm=0gXGYhtrVDWOTHS8SQQy_=S9xo+8oCxGLWZAOoeJ=yzQ@mail.gmail.com
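    Illustration (not from the commit message; table name and settings
    invented): a serializable transaction can now produce a parallel plan:

        SET max_parallel_workers_per_gather = 2;
        BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
        -- before this commit, SERIALIZABLE forced a non-parallel plan here
        EXPLAIN SELECT count(*) FROM big_table;
        COMMIT;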
* Add support for hyperbolic functions, as well as log10().  (Tom Lane, 2019-03-12)

    The SQL:2016 standard adds support for the hyperbolic functions
    sinh(), cosh(), and tanh().  POSIX has long required libm to provide
    those functions as well as their inverses asinh(), acosh(), atanh().
    Hence, let's just expose the libm functions to the SQL level.  As
    with the trig functions, we only implement versions for float8, not
    numeric.

    For the moment, we'll assume that all platforms actually do have
    these functions; if experience teaches otherwise, some autoconf
    effort may be needed.

    SQL:2016 also adds support for base-10 logarithm, but with the
    function name log10(), whereas the name we've long used is log().
    Add aliases named log10() for the float8 and numeric versions.

    Lætitia Avrot

    Discussion: https://postgr.es/m/CAB_COdguG22LO=rnxDQ2DW1uzv8aQoUzyDQNJjrR4k00XSgm5w@mail.gmail.com
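    Illustration (not from the commit message): the new functions at the
    SQL level:

        SELECT sinh(1), cosh(1), tanh(1);        -- float8 hyperbolics
        SELECT asinh(1), acosh(1), atanh(0.5);   -- their inverses
        SELECT log10(1000);                      -- 3, alias for log(1000)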
* Allow fractional input values for integer GUCs, and improve rounding logic.  (Tom Lane, 2019-03-11)

    Historically guc.c has just refused examples like set work_mem =
    '30.1GB', but it seems more useful for it to take that and round off
    the value to some reasonable approximation of what the user said.
    Just rounding to the parameter's native unit would work, but it would
    lead to rather silly-looking settings, such as 31562138kB for this
    example.  Instead let's round to the nearest multiple of the next
    smaller unit (if any), producing 30822MB.

    Also, do the units conversion math in floating point and round to
    integer (if needed) only at the end.  This produces saner results for
    inputs that aren't exact multiples of the parameter's native unit,
    and removes another difference in the behavior for integer vs. float
    parameters.

    In passing, document the ability to use hex or octal input where it
    ought to be documented.

    Discussion: https://postgr.es/m/1798.1552165479@sss.pgh.pa.us
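    Illustration (using the values quoted in the message above):

        SET work_mem = '30.1GB';
        SHOW work_mem;   -- 30822MB, rounded to the next smaller unit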
* Give up on testing guc.c's behavior for "infinity" inputs.  (Tom Lane, 2019-03-11)

    Further buildfarm testing shows that on the machines that are failing
    ac75959cd's test case, what we're actually getting from
    strtod("-infinity") is a syntax error (endptr == value) not ERANGE at
    all.  This test case is not worth carrying two sets of expected
    output for, so just remove it, and revert commit b212245f9's
    misguided attempt to work around the platform dependency.

    Discussion: https://postgr.es/m/E1h33xk-0001Og-Gs@gemulon.postgresql.org
* tableam: Add and use scan APIs.  (Andres Freund, 2019-03-11)

    To allow table accesses to be not directly dependent on heap, several
    new abstractions are needed.  Specifically:

    1) Heap scans need to be generalized into table scans.  Do this by
       introducing TableScanDesc, which will be the "base class" for
       individual AMs.  This contains the AM independent fields from
       HeapScanDesc.

       The previous heap_{beginscan,rescan,endscan} et al. have been
       replaced with a table_ version.

       There's no direct replacement for heap_getnext(), as that returned
       a HeapTuple, which is undesirable for other AMs.  Instead there's
       table_scan_getnextslot().  But note that heap_getnext() lives on;
       it's still used widely to access catalog tables.

       This is achieved by new scan_begin, scan_end, scan_rescan,
       scan_getnextslot callbacks.

    2) The portion of parallel scans that's shared between backends needs
       to be able to do so without the user doing per-AM work.  To
       achieve that, new parallelscan_{estimate, initialize,
       reinitialize} callbacks are introduced, which operate on a new
       ParallelTableScanDesc, which again can be subclassed by AMs.

       As it is likely that several AMs are going to be block oriented,
       block oriented callbacks that can be shared between such AMs are
       provided and used by heap: table_block_parallelscan_{estimate,
       initialize, reinitialize} as callbacks, and
       table_block_parallelscan_{nextpage, init} for use in AMs.  These
       operate on a ParallelBlockTableScanDesc.

    3) Index scans need to be able to access tables to return a tuple,
       and there needs to be state across individual accesses to the heap
       to store state like buffers.  That's now handled by introducing a
       sort-of-scan IndexFetchTable, which again is intended to be
       subclassed by individual AMs (for heap, IndexFetchHeap).

       The relevant callbacks for an AM are index_fetch_{end, begin,
       reset} to create the necessary state, and index_fetch_tuple to
       retrieve an indexed tuple.  Note that index_fetch_tuple
       implementations need to be smarter than just blindly fetching the
       tuples for AMs that have optimizations similar to heap's HOT - the
       currently alive tuple in the update chain needs to be fetched if
       appropriate.

       Similar to table_scan_getnextslot(), it's undesirable to continue
       to return HeapTuples.  Thus index_fetch_heap (might want to rename
       that later) now accepts a slot as an argument.  Core code doesn't
       have a lot of call sites performing index scans without going
       through the systable_* API (in contrast to loads of heap_getnext
       calls and working directly with HeapTuples).

       Index scans now store the result of a search in
       IndexScanDesc->xs_heaptid, rather than xs_ctup->t_self.  As the
       target is not generally a HeapTuple anymore that seems cleaner.

    To be able to sensibly adapt code to use the above, two further
    callbacks have been introduced:

    a) slot_callbacks returns a TupleTableSlotOps* suitable for creating
       slots capable of holding a tuple of the AM's type.
       table_slot_callbacks() and table_slot_create() are based upon
       that, but have additional logic to deal with views, foreign
       tables, etc.

       While this change could have been done separately, nearly all the
       call sites that needed to be adapted for the rest of this commit
       would also have needed to be adapted for table_slot_callbacks(),
       making separation not worthwhile.

    b) tuple_satisfies_snapshot checks whether the tuple in a slot is
       currently visible according to a snapshot.  That's required as a
       few places now don't have a buffer + HeapTuple around, but a slot
       (which in heap's case internally has that information).

    Additionally a few infrastructure changes were needed:

    I) SysScanDesc, as used by systable_{beginscan, getnext} et al., now
       internally uses a slot to keep track of tuples.  While
       systable_getnext() still returns HeapTuples, and will do so for
       the foreseeable future, the index API (see 1) above) now only
       deals with slots.

    The remainder, and largest part, of this commit is then adjusting all
    scans in postgres to use the new APIs.

    Author: Andres Freund, Haribabu Kommi, Alvaro Herrera
    Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
        https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
* Fix typos in commit 8586bf7ed8.  (Amit Kapila, 2019-03-11)

    Author: Amit Kapila
    Discussion: https://postgr.es/m/CAA4eK1KNv1Mg2krf4E9ssWFnE=8A9mZ1VbVywXBZTFSzb+wP2g@mail.gmail.com
* Move hash_any prototype from access/hash.h to utils/hashutils.h  (Alvaro Herrera, 2019-03-11)

    ... as well as its implementation from backend/access/hash/hashfunc.c
    to backend/utils/hash/hashfn.c.

    access/hash is the place for the hash index AM, not really
    appropriate for generic facilities, which is what hash_any is; having
    things the old way meant that anything using hash_any had to include
    the AM's include file, pointlessly polluting its namespace with
    unrelated, unnecessary cruft.

    Also move the HTEqual strategy number to access/stratnum.h from
    access/hash.h.

    To avoid breaking third-party extension code, add an #include
    "utils/hashutils.h" to access/hash.h.  (An easily removed line by
    committers who enjoy their asbestos suits to protect them from angry
    extension authors.)

    Discussion: https://postgr.es/m/201901251935.ser5e4h6djt2@alvherre.pgsql
* In guc.c, ignore ERANGE errors from strtod().  (Tom Lane, 2019-03-11)

    Instead, just proceed with the infinity or zero result that it should
    return for overflow/underflow.  This avoids a platform dependency, in
    that various versions of strtod are inconsistent about whether they
    signal ERANGE for a value that's specified as infinity.

    It's possible this won't be enough to remove the buildfarm failures
    we're seeing from ac75959cd, in which case I'll take out the infinity
    test case that commit added.  But first let's see if we can fix it.

    Discussion: https://postgr.es/m/E1h33xk-0001Og-Gs@gemulon.postgresql.org
* Remove unused macro  (Peter Eisentraut, 2019-03-11)

    Use was removed in 25ca5a9a54923a5d6746f771c4c23e85a195bde5.
* Reduce the default value of autovacuum_vacuum_cost_delay to 2ms.  (Tom Lane, 2019-03-10)

    This is a better way to implement the desired change of increasing
    autovacuum's default resource consumption.

    Discussion: https://postgr.es/m/28720.1552101086@sss.pgh.pa.us
* Revert "Increase the default vacuum_cost_limit from 200 to 2000"Tom Lane2019-03-10
| | | | | | | | | | | | | | This reverts commit bd09503e633b8077822bb4daf91625b71ac16253. Per discussion, it seems like what we should do instead is to reduce the default value of autovacuum_vacuum_cost_delay by the same factor. That's functionally equivalent as long as the platform can accurately service the smaller delay request, which should be true on anything released in the last 10 years or more. And smaller, more-closely-spaced delays are better in terms of providing a steady I/O load. Discussion: https://postgr.es/m/28720.1552101086@sss.pgh.pa.us
* Convert [autovacuum_]vacuum_cost_delay into floating-point GUCs.  (Tom Lane, 2019-03-10)

    This change makes it possible to specify sub-millisecond delays,
    which work well on most modern platforms, though that was not true
    when the cost-delay feature was designed.

    To support this without breaking existing configuration entries,
    improve guc.c to allow floating-point GUCs to have units.  Also,
    allow "us" (microseconds) as an input/output unit for time-unit GUCs.
    (It's not allowed as a base unit, at least not yet.)

    Likewise change the autovacuum_vacuum_cost_delay reloption to be
    floating-point; this forces a catversion bump because the layout of
    StdRdOptions changes.

    This patch doesn't in itself change the default values or allowed
    ranges for these parameters, and it should not affect the behavior
    for any already-allowed setting for them.

    Discussion: https://postgr.es/m/1798.1552165479@sss.pgh.pa.us
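    Illustration (not from the commit message; value invented, and the
    exact SHOW output is an assumption): a sub-millisecond delay can now
    be given directly in microseconds:

        SET vacuum_cost_delay = '500us';
        SHOW vacuum_cost_delay;   -- displayed in the closest unit, e.g. 500us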
* Include GUC's unit, if it has one, in out-of-range error messages.  (Tom Lane, 2019-03-10)

    This should reduce confusion in cases where we've applied a units
    conversion, so that the number being reported (and the quoted range
    limits) are in some other units than what the user gave in the
    setting we're rejecting.

    Some of the changes here assume that float GUCs can have units, which
    isn't true just yet, but will be shortly.

    Discussion: https://postgr.es/m/3811.1552169665@sss.pgh.pa.us
* Disallow NaN as a value for floating-point GUCs.  (Tom Lane, 2019-03-10)

    None of the code that uses GUC values is really prepared for them to
    hold NaN, but parse_real() didn't have any defense against accepting
    such a value.  Treat it the same as a syntax error.

    I haven't attempted to analyze the exact consequences of setting any
    of the float GUCs to NaN, but since they're quite unlikely to be
    good, this seems like a back-patchable bug fix.

    Note: we don't need an explicit test for +-Infinity because those
    will be rejected by existing range checks.  I added a regression test
    for that in HEAD, but not older branches because the spelling of the
    value in the error message will be platform-dependent in branches
    where we don't always use port/snprintf.c.

    Discussion: https://postgr.es/m/1798.1552165479@sss.pgh.pa.us
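    Illustration (not from the commit message; exact error text is an
    assumption): NaN is now rejected like any other syntax error:

        SET seq_page_cost = 'NaN';
        -- ERROR:  invalid value for parameter "seq_page_cost": "NaN"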
* Track block level checksum failures in pg_stat_database  (Magnus Hagander, 2019-03-09)

    This adds a column that counts how many checksum failures have
    occurred on files belonging to a specific database.  Both checksum
    failures during normal backend processing and those created when a
    base backup detects a checksum failure are counted.

    Author: Magnus Hagander
    Reviewed-by: Julien Rouhaud
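    Illustration (not from the commit message; assumes the new column is
    named checksum_failures):

        SELECT datname, checksum_failures
        FROM pg_stat_database
        ORDER BY checksum_failures DESC;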
* Avoid some table rewrites for ALTER TABLE .. SET DATA TYPE timestamp.  (Noah Misch, 2019-03-08)

    When the timezone is UTC, timestamptz and timestamp are binary
    coercible in both directions.  See b8a18ad4850ea5ad7884aa6ab731fd392e73b4ad
    and c22ecc6562aac895f0f0529707d7bdb460fd2a49 for the previous attempt
    in this problem space.  Skip the table rewrite; for now, continue to
    needlessly rewrite any index on an affected column.

    Reviewed by Simon Riggs and Tom Lane.

    Discussion: https://postgr.es/m/20190226061450.GA1665944@rfd.leadboat.com
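    Illustration (not from the commit message; table and column
    invented): with the session timezone set to UTC, this conversion now
    skips the table rewrite:

        SET timezone = 'UTC';
        ALTER TABLE events ALTER COLUMN created_at TYPE timestamptz;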
* Tighten use of OpenTransientFile and CloseTransientFile  (Michael Paquier, 2019-03-09)

    This fixes two sets of issues related to the use of transient files
    in the backend:

    1) OpenTransientFile() has been used in some code paths with
       read-write flags while read-only is sufficient, so switch those
       calls to be read-only where necessary.  These have been reported
       by Joe Conway.

    2) When opening transient files, it is up to the caller to close the
       file descriptors opened.  In error code paths,
       CloseTransientFile() gets called to clean up things before issuing
       an error.  However in normal exit paths, a lot of callers of
       CloseTransientFile() never actually reported errors, which could
       leave a file descriptor open without knowing about it.  This is an
       issue I complained about a couple of times, but never had the
       courage to write and submit a patch, so here we go.

    Note that one frontend code path is impacted by this commit, so that
    an error is issued when fetching control file data, making backend
    and frontend be treated consistently.

    Reported-by: Joe Conway, Michael Paquier
    Author: Michael Paquier
    Reviewed-by: Álvaro Herrera, Georgios Kokolatos, Joe Conway
    Discussion: https://postgr.es/m/20190301023338.GD1348@paquier.xyz
    Discussion: https://postgr.es/m/c49b69ec-e2f7-ff33-4f17-0eaa4f2cef27@joeconway.com
* Fix crash with old libxml2  (Alvaro Herrera, 2019-03-08)

    Certain libxml2 versions (such as the 2.7.6 commonly seen in older
    distributions, but apparently only on x86_64) contain a bug that
    causes xmlCopyNode, when called on an XML_DOCUMENT_NODE, to return a
    node that xmlFreeNode crashes on.  Arrange to call xmlFreeDoc instead
    of xmlFreeNode for those nodes.

    Per buildfarm members lapwing and grison.

    Author: Pavel Stehule, light editing by Álvaro.
    Discussion: https://postgr.es/m/20190308024436.GA2374@alvherre.pgsql
* Fix minor deficiencies in XMLTABLE, xpath(), xmlexists()  (Alvaro Herrera, 2019-03-07)

    Correctly process nodes of more types than previously.  In some
    cases, nodes were being ignored (nothing was output); in other cases,
    trying to return them resulted in errors about unrecognized nodes.
    In yet other cases, necessary escaping (of XML special characters)
    was not being done.  Fix all those (as far as the authors could find)
    and add regression test cases verifying the new behavior.

    I (Álvaro) was of two minds about backpatching these changes.  They
    do seem to be bugfixes that would benefit most users of the affected
    functions; but on the other hand it would change established behavior
    in minor releases, so it seems prudent not to.

    Authors: Pavel Stehule, Markus Winand, Chapman Flack
    Discussion: https://postgr.es/m/CAFj8pRA6J25CtAZ2TuRvxK3gat7-bBUYh0rfE2yM7Hj9GD14Dg@mail.gmail.com
        https://postgr.es/m/8BDB0627-2105-4564-AA76-7849F028B96E@winand.at

    The elephant in the room, as pointed out by Chapman Flack and not
    fixed in this commit, is that we still have XMLTABLE operating on
    XPath 1.0 instead of the standard-mandated XQuery (or even its subset
    XPath 2.0).  Fixing that is a major undertaking, however.
* Allow ATTACH PARTITION with only ShareUpdateExclusiveLock.  (Robert Haas, 2019-03-07)

    We still require AccessExclusiveLock on the partition itself, because
    otherwise an insert that violates the newly-imposed partition
    constraint could be in progress at the same time that we're changing
    that constraint; only the lock level on the parent relation is
    weakened.

    To make this safe, we have to cope with (at least) three separate
    problems.

    First, relevant DDL might commit while we're in the process of
    building a PartitionDesc.  If so, find_inheritance_children() might
    see a new partition while the RELOID system cache still has the old
    partition bound cached, and even before invalidation messages have
    been queued.  To fix that, if we see that the pg_class tuple seems to
    be missing or to have a null relpartbound, refetch the value directly
    from the table.  We can't get the wrong value, because DETACH
    PARTITION still requires AccessExclusiveLock throughout; if we ever
    want to change that, this will need more thought.  In testing, I
    found it quite difficult to hit even the null-relpartbound case; the
    race condition is extremely tight, but the theoretical risk is there.

    Second, successive calls to RelationGetPartitionDesc might not return
    the same answer.  The query planner will get confused if looking up
    the PartitionDesc for a particular relation does not return a
    consistent answer for the entire duration of query planning.
    Likewise, query execution will get confused if the same relation
    seems to have a different PartitionDesc at different times.  Invent a
    new PartitionDirectory concept and use it to ensure consistency.
    This ensures that a single invocation of either the planner or the
    executor sees the same view of the PartitionDesc from beginning to
    end, but it does not guarantee that the planner and the executor see
    the same view.  Since this allows pointers to old PartitionDesc
    entries to survive even after a relcache rebuild, also postpone
    removing the old PartitionDesc entry until we're certain no one is
    using it.

    For the most part, it seems to be OK for the planner and executor to
    have different views of the PartitionDesc, because the executor will
    just ignore any concurrently added partitions which were unknown at
    plan time; those partitions won't be part of the inheritance
    expansion, but invalidation messages will trigger replanning at some
    point.  Normally, this happens by the time the very next command is
    executed, but if the next command acquires no locks and executes a
    prepared query, it can manage not to notice until a new transaction
    is started.  We might want to tighten that up, but it's material for
    a separate patch.  There would still be a small window where a query
    that started just after an ATTACH PARTITION command committed might
    fail to notice its results -- but only if the command starts before
    the commit has been acknowledged to the user.  All in all, the warts
    here around serializability seem small enough to be worth accepting
    for the considerable advantage of being able to add partitions
    without a full table lock.

    Although in general the consequences of new partitions showing up
    between planning and execution are limited to the query not noticing
    the new partitions, run-time partition pruning will get confused in
    that case, so that's the third problem that this patch fixes.
    Run-time partition pruning assumes that indexes into the
    PartitionDesc are stable between planning and execution.  So, add
    code so that if new partitions are added between plan time and
    execution time, the indexes stored in the subplan_map[] and
    subpart_map[] arrays within the plan's PartitionedRelPruneInfo get
    adjusted accordingly.  There does not seem to be a simple way to
    generalize this scheme to cope with partitions that are removed,
    mostly because they could then get added back again with different
    bounds, but it works OK for added partitions.

    This code does not try to ensure that every backend participating in
    a parallel query sees the same view of the PartitionDesc.  That
    currently doesn't matter, because we never pass PartitionDesc indexes
    between backends.  Each backend will ignore the concurrently added
    partitions which it notices, and it doesn't matter if different
    backends are ignoring different sets of concurrently added
    partitions.  If in the future that matters, for example because we
    allow writes in parallel query and want all participants to do tuple
    routing to the same set of partitions, the PartitionDirectory concept
    could be improved to share PartitionDescs across backends.  There is
    a draft patch to serialize and restore PartitionDescs on the thread
    where this patch was discussed, which may be a useful place to start.

    Patch by me.  Thanks to Alvaro Herrera, David Rowley, Simon Riggs,
    Amit Langote, and Michael Paquier for discussion, and to Alvaro
    Herrera for some review.

    Discussion: http://postgr.es/m/CA+Tgmobt2upbSocvvDej3yzokd7AkiT+PvgFH+a9-5VV1oJNSQ@mail.gmail.com
    Discussion: http://postgr.es/m/CA+TgmoZE0r9-cyA-aY6f8WFEROaDLLL7Vf81kZ8MtFCkxpeQSw@mail.gmail.com
    Discussion: http://postgr.es/m/CA+TgmoY13KQZF-=HNTrt9UYWYx3_oYOQpu9ioNT49jGgiDpUEA@mail.gmail.com
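    Illustration (not from the commit message; names and bounds
    invented): attaching now takes only ShareUpdateExclusiveLock on the
    parent, so concurrent DML on existing partitions can continue:

        ALTER TABLE measurements
            ATTACH PARTITION measurements_2019
            FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');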
* tableam: introduce table AM infrastructure.  (Andres Freund, 2019-03-06)

    This introduces the concept of table access methods, i.e. CREATE
    ACCESS METHOD ... TYPE TABLE and CREATE TABLE ... USING
    (storage-engine).

    No table access functionality is delegated to table AMs as of this
    commit; that'll be done in following commits.

    Subsequent commits will incrementally abstract table access
    functionality to be routed through table access methods.  That change
    is too large to be reviewed & committed at once, so it'll be done
    incrementally.

    Docs will be updated at the end, as adding them incrementally would
    likely make them less coherent, and definitely is a lot more work,
    without a lot of benefit.

    Table access methods are specified similarly to index access methods,
    i.e. pg_am.amhandler returns, as INTERNAL, a pointer to a struct with
    callbacks.  In contrast to index AMs that struct needs to live as
    long as a backend; typically that's achieved by just returning a
    pointer to a constant struct.

    Psql's \d+ now displays a table's access method.  That can be
    disabled with HIDE_TABLEAM=true, which is mainly useful so regression
    tests can be run against different AMs.  It's quite possible that
    this behaviour still needs to be fine tuned.

    For now it's not allowed to set a table AM for a partitioned table,
    as we've not resolved how partitions would inherit that.  Disallowing
    allows us to introduce, if we decide that's the way forward, such a
    behaviour without a compatibility break.

    Catversion bumped, to add the heap table AM and references to it.

    Author: Haribabu Kommi, Andres Freund, Alvaro Herrera, Dimitri Golgov and others
    Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
        https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
        https://postgr.es/m/20190107235616.6lur25ph22u5u5av@alap3.anarazel.de
        https://postgr.es/m/20190304234700.w5tmhducs5wxgzls@alap3.anarazel.de
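    Illustration (not from the commit message; assumes
    heap_tableam_handler is the built-in heap handler this commit adds):

        CREATE ACCESS METHOD heap2 TYPE TABLE HANDLER heap_tableam_handler;
        CREATE TABLE t (a int) USING heap2;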
* Increase the default vacuum_cost_limit from 200 to 2000  (Andrew Dunstan, 2019-03-06)

    The original 200 default value was set back in f425b605f4e when the
    cost delay settings were first added.  Hardware has improved quite a
    bit since then and we've also made improvements such as sorting
    buffers during checkpoints (9cd00c457e6) which should result in fewer
    random writes.

    This low default value was reportedly causing problems with badly
    configured servers and, in the absence of a native method to remove
    excessive bloat from tables without incurring an AccessExclusiveLock,
    this often made cleaning up the damage caused by badly configured
    auto-vacuums difficult.

    It seems more likely that someone will notice that auto-vacuum is
    running too quickly than too slowly, so let's go all out and multiply
    the default value for the setting by 10.  With the default
    vacuum_cost_page_dirty and autovacuum_vacuum_cost_delay (assuming a
    page size of 8192 bytes), this allows autovacuum a theoretical
    maximum dirty write rate of around 39MB/s instead of just 3.9MB/s.

    Author: David Rowley
    Discussion: https://postgr.es/m/CAKJS1f_YbXC2qTMPyCbmsPiKvZYwpuQNQMohiRXLj1r=8_rYvw@mail.gmail.com
* pg_partition_ancestors  (Alvaro Herrera, 2019-03-04)

    Adds another introspection feature for partitioning, necessary for
    further psql patches.

    Reviewed-by: Michaël Paquier
    Discussion: https://postgr.es/m/20190226222757.GA31622@alvherre.pgsql
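    Illustration (not from the commit message; partition name invented,
    and the output column name relid is an assumption):

        -- ancestors of a partition, nearest first, including itself
        SELECT relid FROM pg_partition_ancestors('measurements_2019');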
* Consider only relations part of partition trees in partition functions  (Michael Paquier, 2019-03-02)

    This changes the partition functions so that tables and indexes which
    are not part of partition trees are handled the same way as undefined
    objects and unsupported relkinds: pg_partition_tree() returns no rows
    and pg_partition_root() returns a NULL result.  Hence, partitioned
    tables, partitioned indexes and relations whose flag
    pg_class.relispartition is set are considered as valid objects to
    process.

    Previously, tables and indexes not included in a partition tree were
    processed the same way as a partition or a partitioned table, which
    caused the functions to return inconsistent results for inherited
    tables, especially when inheriting from multiple tables.

    Reported-by: Álvaro Herrera
    Author: Amit Langote, Michael Paquier
    Reviewed-by: Tom Lane
    Discussion: https://postgr.es/m/20190228193203.GA26151@alvherre.pgsql
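    Illustration (not from the commit message; table name invented):

        CREATE TABLE plain (a int);                -- not in any partition tree
        SELECT * FROM pg_partition_tree('plain');  -- now returns no rows
        SELECT pg_partition_root('plain');         -- now returns NULL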
* Make pg_partition_tree return no rows on unsupported and undefined objects  (Michael Paquier, 2019-03-01)

    The function was tweaked so that it returned one row full of NULLs
    when working on an unsupported relkind or an undefined object as of
    cc53123, and after discussion with Amit and Álvaro it looks more
    natural to make it return no rows.

    Author: Michael Paquier
    Reviewed-by: Álvaro Herrera, Amit Langote
    Discussion: https://postgr.es/m/20190227184808.GA17357@alvherre.pgsql
* Merge near-duplicate code in RI triggers  (Peter Eisentraut, 2019-02-28)

    Merge ri_setnull and ri_setdefault into one function ri_set.  These
    functions were largely identical.

    This is a continuation in spirit of
    4797f9b519995ceca5d6746f771c4c23e85a195bde5.

    Author: Corey Huinker <corey.huinker@gmail.com>
    Discussion: https://www.postgresql.org/message-id/flat/0ccdd3e1-10b0-dd05-d8a7-183507c11eb1%402ndquadrant.com
* Standardize some more loops that chase down parallel lists.  (Tom Lane, 2019-02-28)

    We have forboth() and forthree() macros that simplify iterating
    through several parallel lists, but not everyplace that could
    reasonably use those was doing so.  Also invent forfour() and
    forfive() macros to do the same for four or five parallel lists, and
    use those where applicable.

    The immediate motivation for doing this is to reduce the number of
    ad-hoc lnext() calls, to reduce the footprint of a WIP patch.
    However, it seems like good cleanup and error-proofing anyway; the
    places that were combining forthree() with a manually iterated loop
    seem particularly illegible and bug-prone.

    There was some speculation about restructuring related parsetree
    representations to reduce the need for parallel list chasing of this
    sort.  Perhaps that's a win, or perhaps not, but in any case it would
    be considerably more invasive than this patch; and it's not
    particularly related to my immediate goal of improving the List
    infrastructure.  So I'll leave that question for another day.

    Patch by me; thanks to David Rowley for review.

    Discussion: https://postgr.es/m/11587.1550975080@sss.pgh.pa.us
* Clean up some variable names in ri_triggers.c  (Peter Eisentraut, 2019-02-28)

    There was a mix of old_slot/oldslot, new_slot/newslot.  Since we've
    changed everything from row to slot, we might as well take this
    opportunity to clean this up.

    Also update some more comments for the slot change.
* Compact for loops  (Peter Eisentraut, 2019-02-28)

    Declare loop variable in for loop, for readability and to save space.

    Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
    Discussion: https://www.postgresql.org/message-id/flat/0ccdd3e1-10b0-dd05-d8a7-183507c11eb1%402ndquadrant.com
* Reduce comments  (Peter Eisentraut, 2019-02-28)

    Reduce the vertical space used by comments in ri_triggers.c; the
    previous style made the file longer and more tedious to read than it
    needed to be.  Update some comments to use a more common style.

    Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
    Discussion: https://www.postgresql.org/message-id/flat/0ccdd3e1-10b0-dd05-d8a7-183507c11eb1%402ndquadrant.com
* Remove unnecessary unused MATCH PARTIAL code  (Peter Eisentraut, 2019-02-28)

    ri_triggers.c spends a lot of space catering to a not-yet-implemented
    MATCH PARTIAL option.  An actual implementation would probably not
    use the existing code structure anyway, so let's just simplify this
    for now.

    First, have ri_FetchConstraintInfo() check that riinfo->confmatchtype
    is valid.  Then we don't have to repeat that everywhere.

    In the various referential action functions, we don't need to pay
    attention to the match type at all right now, so remove all that
    code.  A future MATCH PARTIAL implementation would probably have some
    conditions added to the present code, but it won't need an entirely
    separate switch branch in each case.

    In RI_FKey_fk_upd_check_required(), reorganize the code to make it
    much simpler.

    Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
    Discussion: https://www.postgresql.org/message-id/flat/0ccdd3e1-10b0-dd05-d8a7-183507c11eb1%402ndquadrant.com
* Update comment for ff11e7f4b9ae017585c3ba146db7ba39c31f209a  (Peter Eisentraut, 2019-02-28)
* Improve documentation of data_sync_retry  (Michael Paquier, 2019-02-28)

    Reflecting an updated parameter value requires a server restart,
    which was mentioned neither in the documentation nor in
    postgresql.conf.sample.

    Reported-by: Thomas Poty
    Discussion: https://postgr.es/m/15659-0cd812f13027a2d8@postgresql.org
* Use slots in trigger infrastructure, except for the actual invocation.  (Andres Freund, 2019-02-26)

    In preparation for abstracting table storage, convert trigger.c to
    track tuples in slots.  This also happens to make the code calling
    triggers simpler.

    As the calling interface for triggers themselves is not changed in
    this patch, HeapTuples are still extracted from the slot at that
    time.  But that's handled solely inside trigger.c, not visible to
    callers.  It's quite likely that we'll want to revise the external
    trigger interface, but that's a separate large project.

    As part of this work the slots used for old/new/return tuples are
    moved from EState into ResultRelInfo, as different updated tables
    might need different slots.  The slots are also now created
    on-demand, which is good from an efficiency POV and also makes the
    modifying code simpler.

    Author: Andres Freund, Amit Khandekar and Ashutosh Bapat
    Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
* Fix inconsistent out-of-memory error reporting in dsa.c.  (Thomas Munro, 2019-02-25)

    Commit 16be2fd1 introduced the flag DSA_ALLOC_NO_OOM to control
    whether the DSA allocator would raise an error or return
    InvalidDsaPointer on failure to allocate.  One edge case was not
    handled correctly: if we fail to allocate an internal "span" object
    for a large allocation, we would always return InvalidDsaPointer
    regardless of the flag; a caller not expecting that could then
    dereference a null pointer.  This is a plausible explanation for a
    one-off report of a segfault.

    Remove a redundant pair of braces so that all three stanzas that
    handle DSA_ALLOC_NO_OOM match in style, for visual consistency.

    While fixing inconsistencies, if FreePageManagerGet() can't supply
    the pages that our book-keeping says it should be able to supply,
    then we should always report a FATAL error.  Previously we treated
    that as a regular allocation failure in one code path, but as a FATAL
    condition in another.

    Back-patch to 10, where dsa.c landed.

    Author: Thomas Munro
    Reported-by: Jakub Glapa
    Discussion: https://postgr.es/m/CAEepm=2oPqXxyWQ-1o60tpOLrwkw=VpgNXqqF1VN2EyO9zKGQw@mail.gmail.com
* Move estimate_hashagg_tablesize to selfuncs.c, and widen result to double.  (Tom Lane, 2019-02-21)

    It seems to make more sense for this to be in selfuncs.c, since it's
    largely a statistical-estimation thing, and it's related to other
    functions like estimate_hash_bucket_stats that are there.

    While at it, change the result type from Size to double.  Perhaps at
    one point it was impossible for the result to overflow an integer,
    but I've got no confidence in that proposition anymore.  Nothing's
    actually done with the result except to compare it to a
    work_mem-based limit, so as long as we don't get an overflow on the
    way to that comparison, things should be fine even with very large
    dNumGroups.

    Code movement proposed by Antonin Houska, type change by me

    Discussion: https://postgr.es/m/25767.1549359615@localhost
* Hide other user's pg_stat_ssl rows  (Peter Eisentraut, 2019-02-21)

    Change pg_stat_ssl so that an unprivileged user can only see their
    own rows; other rows will be all null.  This makes the behavior
    consistent with pg_stat_activity, where information about where the
    connection came from is also restricted.

    Reviewed-by: Michael Paquier <michael@paquier.xyz>
    Discussion: https://www.postgresql.org/message-id/flat/63117976-d02c-c8e2-3aef-caa31a5ab8d3%402ndquadrant.com
* Move code for managing PartitionDescs into a new file, partdesc.c  (Robert Haas, 2019-02-21)

    This is similar in spirit to the existing partbounds.c file in the
    same directory, except that there's a lot less code in the new file
    created by this commit.  Pending work in this area proposes to add a
    bunch more code related to PartitionDescs, though, and this will give
    us a good place to put it.

    Discussion: http://postgr.es/m/CA+TgmoZUwPf_uanjF==gTGBMJrn8uCq52XYvAEorNkLrUdoawg@mail.gmail.com
* Correctly mark initial slot snapshots with MVCC type when built  (Michael Paquier, 2019-02-20)

    When building an initial slot snapshot, snapshots are marked as
    historic MVCC snapshots, with the marker field being set in
    SnapBuildBuildSnapshot() but not overridden in
    SnapBuildInitialSnapshot().  Existing callers of
    SnapBuildBuildSnapshot() do not care about the type of snapshot used,
    but extensions calling it actually may, as reported.

    While at it, correctly mark the snapshot type when importing one.
    This is cosmetic, as the field is enforced to 0.

    Author: Antonin Houska
    Reviewed-by: Álvaro Herrera, Michael Paquier
    Discussion: https://postgr.es/m/23215.1527665193@localhost
    Backpatch-through: 9.4
* Use varargs macro for CACHEDEBUG  (Peter Eisentraut, 2019-02-19)

    Reviewed-by: Andres Freund <andres@anarazel.de>
* Allow user control of CTE materialization, and change the default behavior.  (Tom Lane, 2019-02-16)

    Historically we've always materialized the full output of a CTE
    query, treating WITH as an optimization fence (so that, for example,
    restrictions from the outer query cannot be pushed into it).  This is
    appropriate when the CTE query is INSERT/UPDATE/DELETE, or is
    recursive; but when the CTE query is non-recursive and
    side-effect-free, there's no hazard of changing the query results by
    pushing restrictions down.

    Another argument for materialization is that it can avoid duplicate
    computation of an expensive WITH query --- but that only applies if
    the WITH query is called more than once in the outer query.  Even
    then it could still be a net loss, if each call has restrictions that
    would allow just a small part of the WITH query to be computed.

    Hence, let's change the behavior for WITH queries that are
    non-recursive and side-effect-free.  By default, we will inline them
    into the outer query (removing the optimization fence) if they are
    called just once.  If they are called more than once, we will keep
    the old behavior by default, but the user can override this and force
    inlining by specifying NOT MATERIALIZED.  Lastly, the user can force
    the old behavior by specifying MATERIALIZED; this would mainly be
    useful when the query had deliberately been employing WITH as an
    optimization fence to prevent a poor choice of plan.

    Andreas Karlsson, Andrew Gierth, David Fetter

    Discussion: https://postgr.es/m/87sh48ffhb.fsf@news-spur.riddles.org.uk
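    Illustration (not from the commit message; table and keys invented):

        -- referenced twice, so it would be materialized by default;
        -- NOT MATERIALIZED forces inlining into both references
        WITH w AS NOT MATERIALIZED (
            SELECT * FROM big_table WHERE key > 0
        )
        SELECT * FROM w WHERE key = 123
        UNION ALL
        SELECT * FROM w WHERE key = 456;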
* Make use of compiler builtins and/or assembly for CLZ, CTZ, POPCNT.  (Tom Lane, 2019-02-15)

    Test for the compiler builtins __builtin_clz, __builtin_ctz, and
    __builtin_popcount, and make use of these in preference to
    handwritten C code if they're available.  Create src/port
    infrastructure for "leftmost one", "rightmost one", and "popcount"
    so as to centralize these decisions.

    On x86_64, __builtin_popcount generally won't make use of the POPCNT
    opcode because that's not universally supported yet.  Provide code
    that checks CPUID and then calls POPCNT via asm() if available.  This
    requires indirecting through a function pointer, which is an annoying
    amount of overhead for a one-instruction operation, but it's probably
    not worth working harder than this for our current use-cases.

    I'm not sure we've found all the existing places that could profit
    from this new infrastructure; but we at least touched all the ones
    that used copied-and-pasted versions of the bitmapset.c code, and got
    rid of multiple copies of the associated constant arrays.

    While at it, replace c-compiler.m4's one-per-builtin-function macros
    with a single one that can handle all the cases we need to worry
    about so far.  Also, because I'm paranoid, make those checks into
    AC_LINK checks rather than just AC_COMPILE; the former coding failed
    to verify that libgcc has support for the builtin, in cases where
    it's not inline code.

    David Rowley, Thomas Munro, Alvaro Herrera, Tom Lane

    Discussion: https://postgr.es/m/CAKJS1f9WTAGG1tPeJnD18hiQW5gAk59fQ6WK-vfdAKEHyRg2RA@mail.gmail.com
* Refactor index cost estimation functions in view of IndexClause changes.  (Tom Lane, 2019-02-15)

    Get rid of deconstruct_indexquals() in favor of just iterating over
    the IndexClause list directly.  The extra services that that function
    used to provide, such as hiding clause commutation and associating
    the right index column with each clause, are no longer useful given
    the new data structure.  I'd originally thought that it'd provide a
    useful amount of abstraction by freeing callers from paying attention
    to the exact clause type of each indexqual, but that hope proves to
    have been vain, because few callers can ignore the semantic
    differences between different clause types.  Indeed, removing it
    results in a net code savings, and probably some cycles shaved by not
    having to build an extra list-of-structs data structure.

    Also, export a few formerly-static support functions, with the goal
    of allowing extension AMs to write functionality equivalent to
    genericcostestimate() without pointless code duplication.

    Discussion: https://postgr.es/m/24586.1550106354@sss.pgh.pa.us
* Simplify the planner's new representation of indexable clauses a little.  (Tom Lane, 2019-02-14)

    In commit 1a8d5afb0, I thought it'd be a good idea to define
    IndexClause.indexquals as NIL in the most common case where the given
    clause (IndexClause.rinfo) is usable exactly as-is.  It'd be more
    consistent to define the indexquals in that case as being a
    one-element list containing IndexClause.rinfo, but I thought saving
    the palloc overhead for making such a list would be worthwhile.

    In hindsight, that was a great example of "premature optimization is
    the root of all evil": it's complicated everyplace that needs to deal
    with the indexquals, requiring duplicative code to handle both the
    simple case and the not-simple case.  I'd initially found that
    tolerable but it's getting less so as I mop up some areas that I'd
    not touched in 1a8d5afb0.  In any case, two more pallocs during a
    planner run are surely at the noise level (a conclusion confirmed by
    a bit of microbenchmarking).  So let's change this decision before it
    becomes set in stone, and insist that IndexClause.indexquals always
    be a valid list of the actual index quals for the clause.

    Discussion: https://postgr.es/m/24586.1550106354@sss.pgh.pa.us